In linear algebra, the adjugate or classical adjoint of a square matrix A is the transpose of its cofactor matrix and is denoted by adj(A).[1][2] It is also occasionally known as adjunct matrix,[3][4] or "adjoint",[5] though the latter term today normally refers to a different concept, the adjoint operator which for a matrix is the conjugate transpose.
The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I,$$
where I is the identity matrix of the same size as A. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
Definition
The adjugate of A is the transpose of the cofactor matrix C of A,
$$\operatorname{adj}(A) = \mathbf{C}^{\mathsf T}.$$
In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j)-minor of A, denoted $M_{ij}$, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:
$$\mathbf{C} = \left((-1)^{i+j} M_{ij}\right)_{1 \le i, j \le n}.$$
The adjugate of A is the transpose of C, that is, the n × n matrix whose (i, j) entry is the (j, i) cofactor of A,
$$\operatorname{adj}(A) = \mathbf{C}^{\mathsf T} = \left((-1)^{i+j} M_{ji}\right)_{1 \le i, j \le n}.$$
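For illustration, here is a minimal NumPy sketch of this definition; the helper name adjugate is an illustrative choice, not a library function, and it simply builds each entry as the signed (j, i)-minor described above.

```python
import numpy as np

def adjugate(A):
    """Adjugate via the cofactor definition: adj(A)[i, j] = (-1)^(i+j) * M_ji,
    where M_ji is the minor obtained by deleting row j and column i of A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    adj = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            # delete row j and column i, then take the determinant of that minor
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj
```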
Important consequence
The adjugate is defined so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(A). That is,
$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I,$$
where I is the n × n identity matrix. This is a consequence of the Laplace expansion of the determinant.
The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields
$$\operatorname{adj}(A) = \det(A)\, A^{-1} \quad \text{and} \quad A^{-1} = \det(A)^{-1} \operatorname{adj}(A).$$
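As a quick numerical sanity check (using the adjugate sketch above and an arbitrary invertible matrix chosen for illustration), both identities can be verified directly:

```python
# Check adj(A) A = det(A) I and A^{-1} = adj(A) / det(A) for a sample matrix.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
adjA = adjugate(A)
detA = np.linalg.det(A)

print(np.allclose(A @ adjA, detA * np.eye(3)))       # True
print(np.allclose(adjA @ A, detA * np.eye(3)))       # True
print(np.allclose(np.linalg.inv(A), adjA / detA))    # True
```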
Examples
1 × 1 generic matrix
Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix (complex scalar) is $\operatorname{adj}(A) = \begin{pmatrix} 1 \end{pmatrix} = I$. Observe that
$$A \operatorname{adj}(A) = A = \det(A)\, I.$$
2 × 2 generic matrix
The adjugate of the 2 × 2 matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
is
$$\operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
By direct computation,
$$A \operatorname{adj}(A) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = \det(A)\, I.$$
In this case, it is also true that det(adj(A)) = det(A) and hence that adj(adj(A)) = A.
3 × 3 generic matrix
Consider a 3 × 3 matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Its cofactor matrix is
$$\mathbf{C} = \begin{pmatrix} +M_{11} & -M_{12} & +M_{13} \\ -M_{21} & +M_{22} & -M_{23} \\ +M_{31} & -M_{32} & +M_{33} \end{pmatrix},$$
where $M_{ij}$ is the determinant of the 2 × 2 matrix obtained by deleting row i and column j of A; for example,
$$M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{21} a_{33} - a_{23} a_{31}.$$
Its adjugate is the transpose of its cofactor matrix,
$$\operatorname{adj}(A) = \mathbf{C}^{\mathsf T} = \begin{pmatrix} +M_{11} & -M_{21} & +M_{31} \\ -M_{12} & +M_{22} & -M_{32} \\ +M_{13} & -M_{23} & +M_{33} \end{pmatrix}.$$
3 × 3 numeric matrix
As a specific example, we have
It is easy to check that the adjugate is the inverse times the determinant, −6.
The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,
The (3,2) cofactor is a sign times the determinant of this submatrix:
and this is the (2,3) entry of the adjugate.
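Since the article's specific numeric matrix is not reproduced above, the following sketch uses an illustrative matrix of its own (together with the adjugate helper from the Definition section) to show how a single adjugate entry arises as a cofactor, mirroring the computation just described:

```python
# Illustrative 3 x 3 matrix (not the article's numeric example).
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])

# The (2, 3) entry of adj(M) (1-based indices) is the (3, 2) cofactor of M:
# delete row 3 and column 2, take the determinant of the submatrix,
# and apply the sign (-1)^(3+2).
submatrix = np.delete(np.delete(M, 2, axis=0), 1, axis=1)
cofactor_32 = (-1) ** (3 + 2) * np.linalg.det(submatrix)

print(np.isclose(adjugate(M)[1, 2], cofactor_32))   # True
```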
Properties
For any n × n matrix A, elementary computations show that adjugates have the following properties:
- $\operatorname{adj}(I) = I$, where $I$ is the identity matrix.
- $\operatorname{adj}(\mathbf{0}) = \mathbf{0}$, where $\mathbf{0}$ is the zero matrix, except that if $n = 1$ then $\operatorname{adj}(\mathbf{0}) = I$.
- $\operatorname{adj}(cA) = c^{n-1} \operatorname{adj}(A)$ for any scalar c.
- $\operatorname{adj}(A^{\mathsf T}) = \operatorname{adj}(A)^{\mathsf T}$.
- $\det(\operatorname{adj}(A)) = \det(A)^{n-1}$.
- If A is invertible, then $\operatorname{adj}(A) = \det(A)\, A^{-1}$. It follows that:
- adj(A) is invertible with inverse $\det(A)^{-1} A$.
- $\operatorname{adj}(A^{-1}) = \operatorname{adj}(A)^{-1}$.
- adj(A) is entrywise polynomial in A. In particular, over the real or complex numbers, the adjugate is a smooth function of the entries of A.
Over the complex numbers,
- $\operatorname{adj}(\overline{A}) = \overline{\operatorname{adj}(A)}$, where the bar denotes complex conjugation.
- $\operatorname{adj}(A^{*}) = \operatorname{adj}(A)^{*}$, where the asterisk denotes conjugate transpose.
Suppose that B is another n × n matrix. Then
$$\operatorname{adj}(AB) = \operatorname{adj}(B)\operatorname{adj}(A).$$
This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices A and B,
$$\operatorname{adj}(B)\operatorname{adj}(A) = \det(B)\, B^{-1} \det(A)\, A^{-1} = \det(AB)\,(AB)^{-1} = \operatorname{adj}(AB).$$
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of A or B is not invertible.
A corollary of the previous formula is that, for any non-negative integer k,
$$\operatorname{adj}(A^{k}) = \operatorname{adj}(A)^{k}.$$
If A is invertible, then the above formula also holds for negative k.
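A short numerical spot-check of these two identities, again using the adjugate helper from the Definition section on randomly generated matrices chosen for illustration:

```python
# Spot-check adj(AB) = adj(B) adj(A) and adj(A^k) = adj(A)^k numerically.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

print(np.allclose(adjugate(X @ Y), adjugate(Y) @ adjugate(X)))   # True
print(np.allclose(adjugate(np.linalg.matrix_power(X, 3)),
                  np.linalg.matrix_power(adjugate(X), 3)))       # True
```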
From the identity
$$(A + B)\operatorname{adj}(A + B)\, B = \det(A + B)\, B = B \operatorname{adj}(A + B)\,(A + B),$$
we deduce
$$A \operatorname{adj}(A + B)\, B = B \operatorname{adj}(A + B)\, A.$$
Suppose that A commutes with B. Multiplying the identity AB = BA on the left and right by adj(A) proves that
$$\det(A)\, B \operatorname{adj}(A) = \det(A)\operatorname{adj}(A)\, B.$$
If A is invertible, this implies that adj(A) also commutes with B. Over the real or complex numbers, continuity implies that adj(A) commutes with B even when A is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an n × n matrix have entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). det(A + tI) is a polynomial in t with degree at most n, so it has at most n roots. Note that the (i, j) entry of adj((A + tI)B) is a polynomial in t of degree at most n, and likewise for adj(A + tI) adj(B). These two polynomials at the (i, j) entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where A + tI is invertible, and we have proven the identity for invertible matrices. Polynomials of degree at most n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n, a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.
Using the above properties and other elementary computations, it is straightforward to show that if A has one of the following properties, then adj A does as well (a brief numerical illustration follows the list):
- Upper triangular,
- Lower triangular,
- Diagonal,
- Orthogonal,
- Unitary,
- Symmetric,
- Hermitian,
- Skew-symmetric,
- Skew-Hermitian,
- Normal.
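As an illustration of two of these closure properties (a non-exhaustive sketch, reusing the adjugate helper from the Definition section on test matrices chosen for illustration):

```python
# adj preserves upper-triangularity and symmetry, illustrated numerically.
U = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])        # upper triangular
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])        # symmetric

print(np.allclose(adjugate(U), np.triu(adjugate(U))))   # True: adj(U) is upper triangular
print(np.allclose(adjugate(S), adjugate(S).T))          # True: adj(S) is symmetric
```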
If A is invertible, then, as noted above, there is a formula for adj(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas; a numerical illustration follows the list below.
- If rk(A) ≤ n − 2, then adj(A) = 0.
- If rk(A) = n − 1, then rk(adj(A)) = 1. (Some minor is non-zero, so adj(A) is non-zero and hence has rank at least one; the identity adj(A) A = 0 implies that the dimension of the nullspace of adj(A) is at least n − 1, so its rank is at most one.) It follows that adj(A) = αxyT, where α is a scalar and x and y are vectors such that Ax = 0 and AT y = 0.
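Both rank statements can be illustrated numerically with the adjugate helper from the Definition section, using singular matrices chosen for illustration:

```python
# Rank n - 1 example: a 3x3 matrix of rank 2; its adjugate has rank 1.
A1 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0],
               [7.0, 8.0, 9.0]])       # linearly dependent rows, rank 2
print(np.linalg.matrix_rank(A1))               # 2
print(np.linalg.matrix_rank(adjugate(A1)))     # 1

# Rank <= n - 2 example: a 3x3 matrix of rank 1; its adjugate vanishes.
A2 = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(np.allclose(adjugate(A2), 0))            # True
```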
Column substitution and Cramer's rule
Partition A into column vectors:
$$A = \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{pmatrix}.$$
Let b be a column vector of size n. Fix 1 ≤ i ≤ n and consider the matrix formed by replacing column i of A by b:
$$(A \overset{i}{\leftarrow} b) \;\stackrel{\text{def}}{=}\; \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{pmatrix}.$$
Laplace expand the determinant of this matrix along column i. The result is entry i of the product adj(A)b. Collecting these determinants for the different possible i yields an equality of column vectors
$$\left(\det(A \overset{1}{\leftarrow} b),\ \ldots,\ \det(A \overset{n}{\leftarrow} b)\right)^{\mathsf T} = \operatorname{adj}(A)\, \mathbf{b}.$$
This formula has the following concrete consequence. Consider the linear system of equations
$$A \mathbf{x} = \mathbf{b}.$$
Assume that A is non-singular. Multiplying this system on the left by adj(A) and dividing by the determinant yields
$$\mathbf{x} = \frac{\operatorname{adj}(A)\, \mathbf{b}}{\det(A)}.$$
Applying the previous formula to this situation yields Cramer's rule,
$$x_i = \frac{\det(A \overset{i}{\leftarrow} b)}{\det(A)},$$
where $x_i$ is the ith entry of x.
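The following self-contained sketch implements Cramer's rule exactly as stated here; the helper name cramer_solve and the small example system are illustrative choices:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A with column i replaced by b) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i of A by b
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b), np.linalg.solve(A, b))  # both give [0.8, 1.4]
```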
Characteristic polynomial
Let the characteristic polynomial of A be
$$p(t) = \det(tI - A) = \sum_{i=0}^{n} p_i t^{i} \in R[t].$$
The first divided difference of p is a symmetric polynomial of degree n − 1,
$$\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1}\, s^{j} t^{k}.$$
Multiply sI − A by its adjugate. Since p(A) = 0 by the Cayley–Hamilton theorem, some elementary manipulations reveal
$$\operatorname{adj}(sI - A) = \Delta p(sI, A).$$
In particular, the resolvent of A is defined to be
$$R(s; A) = (sI - A)^{-1},$$
and by the above formula, this is equal to
$$R(s; A) = \frac{\Delta p(sI, A)}{p(s)}.$$
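A self-contained numerical check of the identity adj(sI − A) = Δp(sI, A); the example matrix and the value of s are illustrative choices, np.poly supplies the characteristic polynomial coefficients, and the adjugate of the invertible matrix sI − A is taken as determinant times inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
n = A.shape[0]
s = 0.7

# Coefficients of p(t) = det(tI - A); np.poly returns them in decreasing
# powers, so p_i = coeffs[n - i].
coeffs = np.poly(A)
p = lambda i: coeffs[n - i]

# Delta p(sI, A) = sum over j + k < n of p_{j+k+1} s^j A^k.
dp = sum(p(j + k + 1) * s**j * np.linalg.matrix_power(A, k)
         for j in range(n) for k in range(n) if j + k < n)

# adj(sI - A) from the inverse formula (sI - A is invertible for this s).
M = s * np.eye(n) - A
adj_M = np.linalg.det(M) * np.linalg.inv(M)

print(np.allclose(dp, adj_M))  # True
```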
Jacobi's formula
The adjugate also appears in Jacobi's formula for the derivative of the determinant. If A(t) is continuously differentiable, then
$$\frac{d}{dt} \det(A(t)) = \operatorname{tr}\!\left(\operatorname{adj}(A(t))\, A'(t)\right).$$
It follows that the total derivative of the determinant is the transpose of the adjugate:
$$d(\det A)_{A} = \operatorname{adj}(A)^{\mathsf T}.$$
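A self-contained finite-difference check of Jacobi's formula along a smooth matrix path A(t); the path and the evaluation point are illustrative choices:

```python
import numpy as np

def A_of_t(t):
    # An arbitrary smooth matrix-valued function of t.
    return np.array([[np.cos(t), t],
                     [t**2, 2.0 + np.sin(t)]])

def A_prime(t):
    # Its entrywise derivative.
    return np.array([[-np.sin(t), 1.0],
                     [2.0 * t, np.cos(t)]])

t, h = 0.3, 1e-6
A = A_of_t(t)
adj_A = np.linalg.det(A) * np.linalg.inv(A)   # adjugate of an invertible matrix

lhs = (np.linalg.det(A_of_t(t + h)) - np.linalg.det(A_of_t(t - h))) / (2 * h)
rhs = np.trace(adj_A @ A_prime(t))
print(np.isclose(lhs, rhs, rtol=1e-5))        # True
```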
Cayley–Hamilton formula
Let $p_A(t)$ be the characteristic polynomial of A. The Cayley–Hamilton theorem states that
$$p_A(A) = \mathbf{0}.$$
Separating the constant term and multiplying the equation by adj(A) gives an expression for the adjugate that depends only on A and the coefficients of $p_A(t)$. These coefficients can be explicitly represented in terms of traces of powers of A using complete exponential Bell polynomials. The resulting formula is
$$\operatorname{adj}(A) = \sum_{s=0}^{n-1} A^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell + 1}}{\ell^{k_\ell}\, k_\ell!} \operatorname{tr}(A^{\ell})^{k_\ell},$$
where n is the dimension of A, and the sum is taken over s and all sequences of $k_\ell \ge 0$ satisfying the linear Diophantine equation
$$s + \sum_{\ell=1}^{n-1} \ell k_\ell = n - 1.$$
For the 2 × 2 case, this gives
$$\operatorname{adj}(A) = I_2 \operatorname{tr}(A) - A.$$
For the 3 × 3 case, this gives
$$\operatorname{adj}(A) = \tfrac{1}{2}\left(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2)\right) I_3 - A \operatorname{tr}(A) + A^2.$$
For the 4 × 4 case, this gives
$$\operatorname{adj}(A) = \tfrac{1}{6}\left(\operatorname{tr}(A)^3 - 3\operatorname{tr}(A)\operatorname{tr}(A^2) + 2\operatorname{tr}(A^3)\right) I_4 - \tfrac{1}{2} A \left(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2)\right) + A^2 \operatorname{tr}(A) - A^3.$$
The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of A.
More generally, the adjugate of a matrix of arbitrary dimension n can be computed using the Einstein summation convention.
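The following self-contained sketch implements the standard Faddeev–LeVerrier recursion mentioned above (variable names and the test matrix are illustrative choices); the terminating matrix of the recursion gives the adjugate up to the sign (−1)^(n−1):

```python
import numpy as np

def faddeev_leverrier(A):
    """Faddeev-LeVerrier recursion. Returns (coeffs, adj), where coeffs are the
    coefficients of det(tI - A) in decreasing powers of t and adj is the
    adjugate of A obtained from the terminating step."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.zeros_like(A)
    c = [1.0]                                  # c_n = 1
    for k in range(1, n + 1):
        M = A @ M + c[-1] * np.eye(n)          # M_k = A M_{k-1} + c_{n-k+1} I
        c.append(-np.trace(A @ M) / k)         # c_{n-k} = -tr(A M_k) / k
    adj = (-1) ** (n - 1) * M                  # adj(A) = (-1)^(n-1) M_n
    return np.array(c), adj

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 1.0, 2.0]])
coeffs, adjA = faddeev_leverrier(A)
print(np.allclose(coeffs, np.poly(A)))                          # True
print(np.allclose(adjA, np.linalg.det(A) * np.linalg.inv(A)))   # True
```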
Relation to exterior algebras
The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing
$$V \times \wedge^{n-1} V \to \wedge^{n} V.$$
Abstractly, $\wedge^{n} V$ is isomorphic to R, and under any such isomorphism the exterior product is a perfect pairing. Therefore, it yields an isomorphism
$$\phi \colon V \xrightarrow{\ \cong\ } \operatorname{Hom}\!\left(\wedge^{n-1} V,\, \wedge^{n} V\right).$$
Explicitly, this pairing sends v ∈ V to $\phi_v$, where
$$\phi_v(\alpha) = v \wedge \alpha.$$
Suppose that T : V → V is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of Hom spaces. The adjugate of T is the composite
$$V \xrightarrow{\ \phi\ } \operatorname{Hom}\!\left(\wedge^{n-1} V,\, \wedge^{n} V\right) \xrightarrow{(\wedge^{n-1} T)^{*}} \operatorname{Hom}\!\left(\wedge^{n-1} V,\, \wedge^{n} V\right) \xrightarrow{\ \phi^{-1}\ } V.$$
If V = $\mathbf{R}^n$ is endowed with its canonical basis $e_1, \ldots, e_n$, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give $\wedge^{n-1} \mathbf{R}^n$ the basis
$$\{\, e_1 \wedge \dots \wedge \hat{e}_k \wedge \dots \wedge e_n : 1 \le k \le n \,\},$$
where the hat denotes that the factor is omitted.
Fix a basis vector ei of Rn. The image of ei under is determined by where it sends basis vectors:
On basis vectors, the (n − 1)st exterior power of T is
Each of these terms maps to zero under except the k = i term. Therefore, the pullback of is the linear transformation for which
that is, it equals
Applying the inverse of shows that the adjugate of T is the linear transformation for which
Consequently, its matrix representation is the adjugate of A.
If V is endowed with an inner product and a volume form, then the map φ can be decomposed further. In this case, φ can be understood as the composite of the Hodge star operator and dualization. Specifically, if ω is the volume form, then it, together with the inner product, determines an isomorphism
This induces an isomorphism
A vector v in Rn corresponds to the linear functional
By the definition of the Hodge star operator, this linear functional is dual to *v. That is, ω∨∘ φ equals v ↦ *v∨.
Higher adjugates
Let A be an n × n matrix, and fix r ≥ 0. The rth higher adjugate of A is an $\binom{n}{r} \times \binom{n}{r}$ matrix, denoted adj_r A, whose entries are indexed by size-r subsets I and J of {1, ..., n}. Let I^c and J^c denote the complements of I and J, respectively. Also let $A_{I^c, J^c}$ denote the submatrix of A containing those rows and columns whose indices are in I^c and J^c, respectively. Then the (I, J) entry of adj_r A is
$$(-1)^{\sigma(I) + \sigma(J)} \det A_{J^c, I^c},$$
where σ(I) and σ(J) are the sum of the elements of I and J, respectively.
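A self-contained sketch implementing this definition directly (the function name and test matrix are illustrative choices); it checks the special cases adj_1(A) = adj(A) and adj_0(A) = det(A) listed below:

```python
import numpy as np
from itertools import combinations

def higher_adjugate(A, r):
    """r-th higher adjugate: entries indexed by size-r subsets I, J of {1,...,n},
    with (I, J) entry (-1)^(sigma(I)+sigma(J)) * det(A[J^c, I^c])."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    subsets = list(combinations(range(n), r))          # 0-based index subsets
    size = len(subsets)
    out = np.empty((size, size))
    for a, I in enumerate(subsets):
        for b, J in enumerate(subsets):
            Ic = [k for k in range(n) if k not in I]
            Jc = [k for k in range(n) if k not in J]
            sign = (-1) ** (sum(i + 1 for i in I) + sum(j + 1 for j in J))  # 1-based sums
            out[a, b] = sign * np.linalg.det(A[np.ix_(Jc, Ic)])
    return out

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
adjA = np.linalg.det(A) * np.linalg.inv(A)
print(np.allclose(higher_adjugate(A, 1), adjA))                   # adj_1 = adj
print(np.isclose(higher_adjugate(A, 0)[0, 0], np.linalg.det(A)))  # adj_0 = det
```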
Basic properties of higher adjugates include:
- adj_0(A) = det A.
- adj_1(A) = adj A.
- adj_n(A) = 1.
- adj_r(BA) = adj_r(A) adj_r(B).
- adj_r(A) C_r(A) = C_r(A) adj_r(A) = det(A) I, where C_r(A) denotes the rth compound matrix.
Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting $\wedge^{r} V$ and $\wedge^{n-r} V$ for $V$ and $\wedge^{n-1} V$, respectively.
Iterated adjugates
Iteratively taking the adjugate of an invertible matrix A k times yields
$$\underbrace{\operatorname{adj} \dotsm \operatorname{adj}}_{k}(A) = \det(A)^{\frac{(n-1)^{k} - (-1)^{k}}{n}} A^{(-1)^{k}}.$$
For example,
$$\operatorname{adj}(\operatorname{adj}(A)) = \det(A)^{n-2} A$$
and
$$\operatorname{adj}(\operatorname{adj}(\operatorname{adj}(A))) = \det(A)^{n^2 - 3n + 3} A^{-1}.$$
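A quick self-contained numerical check of the k = 2 case, adj(adj(A)) = det(A)^(n−2) A, with an example matrix chosen for illustration:

```python
import numpy as np

def adj(M):
    # Adjugate of an invertible matrix via adj(M) = det(M) M^{-1}.
    return np.linalg.det(M) * np.linalg.inv(M)

n = 4
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [1.0, 0.0, 0.0, 2.0]])

print(np.allclose(adj(adj(A)), np.linalg.det(A) ** (n - 2) * A))   # True
```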
References
- Gantmacher, F. R. (1960). The Theory of Matrices. Vol. 1. New York: Chelsea. pp. 76–89. ISBN 0-8218-1376-5.
- Strang, Gilbert (1988). "Section 4.4: Applications of determinants". Linear Algebra and its Applications (3rd ed.). Harcourt Brace Jovanovich. pp. 231–232. ISBN 0-15-551005-3.
- Claeyssen, J.C.R. (1990). "On predicting the response of non-conservative linear vibrating systems by using dynamical matrix solutions". Journal of Sound and Vibration. 140 (1): 73–84. Bibcode:1990JSV...140...73C. doi:10.1016/0022-460X(90)90907-H.
- Chen, W.; Chen, W.; Chen, Y.J. (2004). "A characteristic matrix approach for analyzing resonant ring lattice devices". IEEE Photonics Technology Letters. 16 (2): 458–460. Bibcode:2004IPTL...16..458C. doi:10.1109/LPT.2003.823104.
- Householder, Alston S. (2006). The Theory of Matrices in Numerical Analysis. Dover Books on Mathematics. pp. 166–168. ISBN 0-486-44972-6.
External links
- Matrix Reference Manual
- Online matrix calculator (determinant, trace, inverse, adjoint, transpose); computes the adjugate of a matrix up to order 8
- "Adjugate of { { a, b, c }, { d, e, f }, { g, h, i } }". Wolfram Alpha.