3.3 Further Properties of the Determinant


In this section, we investigate the determinant of a product and the determinant
of a transpose. We also introduce the classical adjoint of a matrix. Finally,
we present Cramer's Rule, an alternative technique for solving certain linear
systems using determinants.
Theorems 3.9, 3.10, 3.11, and 3.13 are not proved in this section. An
interrelated, progressive development of their proofs is left as Exercises 23
through 36.
Determinant of a Matrix Product
We begin by proving that the determinant of a product of two matrices A and
B is equal to the product of their determinants |A| and |B|.
THEOREM 3.7
If A and B are both n × n matrices, then |AB| = |A| |B|.

PROOF OF THEOREM 3.7

First, suppose A is singular. Then |A| = 0 by Theorem 3.5. If |AB| = 0, then
|AB| = |A||B|, and we are done. So, we assume |AB| ≠ 0 and get a contradiction.
If |AB| ≠ 0, then (AB)⁻¹ exists, and I = AB(AB)⁻¹. Hence, B(AB)⁻¹ is
a right inverse for A. But then, by Theorem 2.9, A⁻¹ exists, contradicting the
fact that A is singular.
Now suppose A is nonsingular. In the special case where A = I, we have
|A| = 1 (why?), and so |AB| = |IB| = |B| = 1|B| = |A| |B|. Finally, if A is
any other nonsingular matrix, then A is row equivalent to I, so there is a
sequence R1, R2, ..., Rk of row operations such that Rk(···(R2(R1(I)))···)
= A. (These are the inverses of the row operations that row reduce A to I.)
Now, each row operation Ri has an associated real number ri, so that applying
Ri to a matrix multiplies its determinant by ri (as in Theorem 3.3). Hence,

    |AB| = |Rk(···(R2(R1(I)))···)B|
         = |Rk(···(R2(R1(IB)))···)|            by Theorem 2.1, part (2)
         = rk···r2·r1 |IB|                     by Theorem 3.3
         = rk···r2·r1 |I| |B|                  by the A = I special case
         = |Rk(···(R2(R1(I)))···)| |B|         by Theorem 3.3
         = |A| |B|.

EXAMPLE 1
Consider two 3 × 3 matrices A and B for which quick calculations show that
|A| = -17 and |B| = 16, and whose product is

         [  9   1   1 ]
    AB = [  9  -5  -6 ]
         [ -7   5  11 ]

Then, by Theorem 3.7, the determinant of this product is
|AB| = |A| |B| = (-17)(16) = -272.
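Theorem 3.7 is easy to confirm computationally. The following sketch uses two small hypothetical 3 × 3 matrices (not those of Example 1), with the determinant computed by cofactor expansion along the first row:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def matmul(A, B):
    """Product of two n x n matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two hypothetical 3 x 3 matrices, for illustration only.
A = [[1, 2, 0], [3, -1, 4], [2, 0, 1]]
B = [[2, 1, -1], [0, 3, 2], [1, 0, 1]]

AB = matmul(A, B)
assert det(AB) == det(A) * det(B)  # Theorem 3.7: |AB| = |A| |B|
```

Since the entries are integers, the cofactor-expansion determinant is exact, so the equality is checked with no rounding error.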


One consequence of Theorem 3.7 is that |AB| = 0 if and only if either |A| = 0
or |B| = 0. (See Exercise 6(a).) Therefore, it follows that AB is singular if and
only if either A or B is singular. Another important result is
COROLLARY 3.8

If A is nonsingular, then |A⁻¹| = 1/|A|.

PROOF OF COROLLARY 3.8

If A is nonsingular, then AA⁻¹ = I. By Theorem 3.7, |A| |A⁻¹| = |I| = 1, so

    |A⁻¹| = 1/|A|.
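Corollary 3.8 can be checked concretely with exact arithmetic. A minimal sketch, using a hypothetical 2 × 2 matrix and the standard 2 × 2 inverse formula:

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    """Inverse of a 2 x 2 matrix: swap diagonal, negate off-diagonal, divide."""
    d = Fraction(det2(M))
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

A = [[5, 2], [3, 2]]                   # hypothetical matrix with |A| = 4
A_inv = inv2(A)
assert det2(A_inv) == Fraction(1, 4)   # Corollary 3.8: |A^-1| = 1/|A|
```

Using `Fraction` keeps 1/|A| exact, so the assertion is an equality rather than a floating-point approximation.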

Determinant of the Transpose


THEOREM 3.9
If A is an n × n matrix, then |A| = |Aᵀ|.

See Exercises 23 through 31 for an outline of the proof of Theorem 3.9.


EXAMPLE 2
A quick calculation shows that if

        [ -1  4  1 ]
    A = [  2  0  3 ]
        [ -1 -1  2 ]

then |A| = -33. Hence, by Theorem 3.9,

           | -1  2 -1 |
    |Aᵀ| = |  4  0 -1 | = -33.
           |  1  3  2 |
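The |A| = |Aᵀ| property is simple to verify in code. This sketch uses the 3 × 3 matrix from the example above, with the transpose built by the usual `zip(*A)` idiom:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

A = [[-1, 4, 1], [2, 0, 3], [-1, -1, 2]]
A_T = [list(col) for col in zip(*A)]   # transpose: rows become columns

assert det(A) == det(A_T) == -33       # Theorem 3.9: |A| = |A^T|
```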

Theorem 3.9 can be used to prove "column versions" of several earlier


results involving determinants. For example, the determinant of a lower triangular matrix equals the product of its main diagonal entries, just as for
an upper triangular matrix. Also, if a square matrix has an entire column of
zeroes, or if it has two identical columns, then its determinant is zero, just as
with rows.
Also, column operations analogous to the familiar row operations can be
defined. For example, a type (I) column operation multiplies all entries of a
given column of a matrix by a nonzero scalar. Theorem 3.9 can be combined
with Theorem 3.3 to show that each type of column operation has the same
effect on the determinant of a matrix as its corresponding row operation.

EXAMPLE 3
Let

        [  2  5  1 ]
    A = [  1  2  3 ]
        [ -3  1 -1 ]

After the type (II) column operation (col. 2) ← -3 (col. 1) + (col. 2), we have

        [  2 -1  1 ]
    B = [  1 -1  3 ]
        [ -3 10 -1 ]

A quick calculation checks that |A| = -43 = |B|. Thus, this column operation
of type (II) has no effect on the determinant, as we would expect.
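This invariance can be sketched directly. The helper `col_op_II` below is an illustrative name (not from the text); it applies a type (II) column operation and leaves the determinant unchanged:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def col_op_II(M, src, dest, c):
    """Type (II) column operation: (col dest) <- c * (col src) + (col dest)."""
    return [[row[j] + c * row[src] if j == dest else row[j]
             for j in range(len(row))] for row in M]

A = [[2, 5, 1], [1, 2, 3], [-3, 1, -1]]
B = col_op_II(A, src=0, dest=1, c=-3)   # (col. 2) <- -3 (col. 1) + (col. 2)

assert B == [[2, -1, 1], [1, -1, 3], [-3, 10, -1]]
assert det(A) == det(B) == -43          # type (II) ops preserve the determinant
```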

A More General Cofactor Expansion

Our definition of the determinant specifies that we multiply the entries
of the last row of an n × n matrix A by their corresponding cofactors and
sum the results. The next theorem shows the same result is obtained when a
cofactor expansion is performed along any row or any column of the matrix!

THEOREM 3.10
Let A be an n × n matrix, with n ≥ 2. Then,
(1) ai1 Ai1 + ai2 Ai2 + ··· + ain Ain = |A|, for each i, 1 ≤ i ≤ n
(2) a1j A1j + a2j A2j + ··· + anj Anj = |A|, for each j, 1 ≤ j ≤ n.

The formulas for |A| given in Theorem 3.10 are called the cofactor expansion (or Laplace expansion) along the ith row (part (1)) and jth column
(part (2)). An outline of the proof of this theorem is provided in Exercises
23 through 32. The proof that any row can be used, not simply the last row, is
established by considering the effect of certain row swaps of the matrix. Then
the |A| = |Aᵀ| formula explains why any column expansion is allowable.
EXAMPLE 4
Consider the matrix

        [  5  0  1 -2 ]
    A = [  2  2  3  1 ]
        [ -1  3  2  5 ]
        [  6  0  1  1 ]

After some calculation, we find that the 16 cofactors of A are

    A11 = -12    A12 = -74    A13 =  50    A14 =  22
    A21 =   9    A22 =  42    A23 = -51    A24 =  -3
    A31 =  -6    A32 = -46    A33 =  34    A34 =   2
    A41 =  -3    A42 =  40    A43 = -19    A44 = -17

We will use these values to compute |A| by a cofactor expansion along several
different rows and columns of A. Along the 2nd row, we have

    |A| = a21 A21 + a22 A22 + a23 A23 + a24 A24
        = 2(9) + 2(42) + 3(-51) + 1(-3) = -54.

Along the 2nd column, we have

    |A| = a12 A12 + a22 A22 + a32 A32 + a42 A42
        = 0(-74) + 2(42) + 3(-46) + 0(40) = -54.

Along the 4th column, we have

    |A| = a14 A14 + a24 A24 + a34 A34 + a44 A44
        = -2(22) + 1(-3) + 5(2) + 1(-17) = -54.

Note in Example 4 that cofactor expansion is easiest along the second
column because that column has two zeroes (entries a12 and a42). In this case,
only two cofactors, A22 and A32, were really needed to compute |A|. We generally choose the row or column containing the largest number of zero entries
for cofactor expansion.
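Theorem 3.10 can be illustrated mechanically. The sketch below expands the determinant of the matrix A of Example 4 along every row and every column and obtains -54 each time (the helper names are illustrative, not from the text):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minor(M, i, j):
    """Submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def cofactor(M, i, j):
    """(i, j) cofactor: signed determinant of the (i, j) minor."""
    return (-1) ** (i + j) * det(minor(M, i, j))

def expand_row(M, i):
    """Cofactor expansion along row i (Theorem 3.10, part (1))."""
    return sum(M[i][j] * cofactor(M, i, j) for j in range(len(M)))

def expand_col(M, j):
    """Cofactor expansion along column j (Theorem 3.10, part (2))."""
    return sum(M[i][j] * cofactor(M, i, j) for i in range(len(M)))

A = [[5, 0, 1, -2], [2, 2, 3, 1], [-1, 3, 2, 5], [6, 0, 1, 1]]

assert all(expand_row(A, i) == -54 for i in range(4))
assert all(expand_col(A, j) == -54 for j in range(4))
```

All eight expansions agree exactly, as the theorem promises.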
The Adjoint Matrix

DEFINITION
Let A be an n × n matrix, with n ≥ 2. The (classical) adjoint Â of A is
the n × n matrix whose (i, j) entry is Aji, the (j, i) cofactor of A.

Notice that the (i, j) entry of the adjoint is not the cofactor Aij of A but is
Aji instead. Hence, the general form of the adjoint of an n × n matrix A is

        [ A11  A21  ···  An1 ]
    Â = [ A12  A22  ···  An2 ]
        [  :    :          : ]
        [ A1n  A2n  ···  Ann ]
EXAMPLE 5
Recall the matrix

        [  5  0  1 -2 ]
    A = [  2  2  3  1 ]
        [ -1  3  2  5 ]
        [  6  0  1  1 ]

whose cofactors Aij were given in Example 4. Grouping these cofactors into
a matrix gives the adjoint matrix for A:

        [ -12    9   -6   -3 ]
    Â = [ -74   42  -46   40 ]
        [  50  -51   34  -19 ]
        [  22   -3    2  -17 ]

Note that the cofactors are "transposed"; that is, the cofactors for entries in
the same row of A are placed in the same column of Â.
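The adjoint can be built mechanically from the definition: its (i, j) entry is the (j, i) cofactor. A sketch (with illustrative helper names) that reproduces the adjoint of the matrix A of Examples 4 and 5:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cofactor(M, i, j):
    """(i, j) cofactor: signed determinant of the (i, j) minor."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return (-1) ** (i + j) * det(minor)

def adjoint(M):
    """Classical adjoint: (i, j) entry is the (j, i) cofactor (note the swap)."""
    n = len(M)
    return [[cofactor(M, j, i) for j in range(n)] for i in range(n)]

A = [[5, 0, 1, -2], [2, 2, 3, 1], [-1, 3, 2, 5], [6, 0, 1, 1]]
assert adjoint(A) == [[-12,   9,  -6,  -3],
                      [-74,  42, -46,  40],
                      [ 50, -51,  34, -19],
                      [ 22,  -3,   2, -17]]
```

The index swap `cofactor(M, j, i)` inside `adjoint` is exactly the "transposing" of cofactors described above.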

The next theorem shows that the adjoint Â of A is "almost" an inverse
for A.

THEOREM 3.11
If A is an n × n matrix with adjoint matrix Â, then

    AÂ = ÂA = (|A|) I.

The fact that the diagonal entries of AÂ and ÂA equal |A| follows immediately from Theorem 3.10 (why?). The proof that the other entries of AÂ and
ÂA equal zero is outlined in Exercises 23 through 35.
EXAMPLE 6
Using A and Â from Example 5, we have

         [  5  0  1 -2 ] [ -12    9   -6   -3 ]   [ -54    0    0    0 ]
    AÂ = [  2  2  3  1 ] [ -74   42  -46   40 ] = [   0  -54    0    0 ] = (-54) I4
         [ -1  3  2  5 ] [  50  -51   34  -19 ]   [   0    0  -54    0 ]
         [  6  0  1  1 ] [  22   -3    2  -17 ]   [   0    0    0  -54 ]

(verify!), as predicted by Theorem 3.11, since |A| = -54 (see Example 4).
Similarly, you can check that ÂA = (-54) I4 as well.
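A direct multiplication confirms Theorem 3.11 for this pair. The sketch below multiplies the matrix A of Example 4 by the adjoint from Example 5 in both orders and checks each product against (-54) I4:

```python
def matmul(A, B):
    """Product of two n x n matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[5, 0, 1, -2], [2, 2, 3, 1], [-1, 3, 2, 5], [6, 0, 1, 1]]
adj_A = [[-12, 9, -6, -3], [-74, 42, -46, 40],
         [50, -51, 34, -19], [22, -3, 2, -17]]

# (-54) I4, since |A| = -54
target = [[-54 if i == j else 0 for j in range(4)] for i in range(4)]

assert matmul(A, adj_A) == target   # A * adj = |A| I
assert matmul(adj_A, A) == target   # adj * A = |A| I
```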

Calculating Inverses with the Adjoint Matrix

If |A| ≠ 0, we can divide the equation in Theorem 3.11 by the scalar |A| to
obtain (1/|A|)(AÂ) = I. But then, A((1/|A|)Â) = I. Therefore, the scalar
multiple (1/|A|)Â of the adjoint must be the inverse matrix of A, and we have
proved

COROLLARY 3.12

If A is a nonsingular n × n matrix with adjoint Â, then A⁻¹ = (1/|A|) Â.


This corollary gives an algebraic formula for the inverse of a matrix (when
it exists).
EXAMPLE 7
The adjoint matrix for

        [ -2  0 -3 ]
    B = [  0  1  0 ]
        [  0  0  4 ]

is

        [ B11  B21  B31 ]
    B̂ = [ B12  B22  B32 ]
        [ B13  B23  B33 ]

where each Bij (for 1 ≤ i, j ≤ 3) is the (i, j) cofactor of B. But a quick
computation of these cofactors (try it!) gives

        [ 4   0   3 ]
    B̂ = [ 0  -8   0 ]
        [ 0   0  -2 ]

Now, |B| = -8 (because B is upper triangular), and so

                             [ 4   0   3 ]   [ -1/2   0  -3/8 ]
    B⁻¹ = (1/|B|) B̂ = -(1/8) [ 0  -8   0 ] = [   0    1    0  ]
                             [ 0   0  -2 ]   [   0    0   1/4 ]

Finding the inverse by row reduction is usually quicker than using the adjoint. However, Corollary 3.12 is often useful for proving other results (see
Exercise 19).
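Corollary 3.12 translates directly into code. A sketch using exact fractions (the helper names are illustrative) that rebuilds B⁻¹ from the adjoint of the matrix B in Example 7:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def adjoint(M):
    """Classical adjoint: (i, j) entry is the (j, i) cofactor."""
    def cof(i, j):
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
        return (-1) ** (i + j) * det(minor)
    n = len(M)
    return [[cof(j, i) for j in range(n)] for i in range(n)]

def inverse_via_adjoint(M):
    """A^-1 = (1/|A|) * adjoint(A)  (Corollary 3.12)."""
    d = Fraction(det(M))
    return [[a / d for a in row] for row in adjoint(M)]

B = [[-2, 0, -3], [0, 1, 0], [0, 0, 4]]
F = Fraction
assert inverse_via_adjoint(B) == [[F(-1, 2), 0, F(-3, 8)],
                                  [0, 1, 0],
                                  [0, 0, F(1, 4)]]
```

As the text notes, row reduction is usually faster in practice; this formula is mainly of theoretical value, and exact rationals make that value easy to demonstrate.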
Cramer's Rule
We conclude this section by stating an explicit formula, known as Cramer's
Rule, for the solution to a system of n equations in n variables when that
solution is unique:

THEOREM 3.13 (Cramer's Rule)


Let AX = B be a system of n equations in n variables with |A| ≠ 0. For
1 ≤ i ≤ n, let Ai be the n × n matrix obtained by replacing the ith column
of A with B. Then the entries of the unique solution X are

    x1 = |A1|/|A|,   x2 = |A2|/|A|,   ...,   xn = |An|/|A|.

The proof of this theorem is outlined in Exercise 36. Cramer's Rule cannot
be used for a system AX = B in which |A| = 0 (why?). It is frequently used
on 3 x 3 systems having a unique solution, because the determinants involved
can be calculated quickly by hand.
EXAMPLE 8
We will solve

     5x1 - 3x2 - 10x3 = -9
     2x1 + 2x2 -  3x3 =  4
    -3x1 -  x2 +  5x3 = -1

using Cramer's Rule. This system is equivalent to AX = B, where

        [  5  -3  -10 ]              [ -9 ]
    A = [  2   2   -3 ]    and   B = [  4 ]
        [ -3  -1    5 ]              [ -1 ]

A quick calculation shows that |A| = -2. Let

         [ -9  -3  -10 ]         [  5  -9  -10 ]              [  5  -3  -9 ]
    A1 = [  4   2   -3 ],   A2 = [  2   4   -3 ],   and  A3 = [  2   2   4 ]
         [ -1  -1    5 ]         [ -3  -1    5 ]              [ -3  -1  -1 ]

The matrix A1 is identical to A, except in the first column, where its entries
are taken from B. A2 and A3 are created in an analogous manner. A quick
computation shows that |A1| = 8, |A2| = -6, and |A3| = 4. Therefore,

    x1 = |A1|/|A| = 8/(-2) = -4,   x2 = |A2|/|A| = (-6)/(-2) = 3,   x3 = |A3|/|A| = 4/(-2) = -2.

Hence, the unique solution to the given system is (x1, x2, x3) = (-4, 3, -2).
Notice that solving the system in Example 8 essentially amounts to calculating four determinants: |A|, |A1|, |A2|, and |A3|.
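Cramer's Rule is mechanical enough to sketch in a few lines. The code below (with an illustrative helper name, using exact fractions) reproduces the solution of Example 8:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cramer(A, B):
    """Solve AX = B by Cramer's Rule (requires |A| != 0)."""
    n = len(A)
    d = Fraction(det(A))
    assert d != 0, "Cramer's Rule requires a nonsingular coefficient matrix"
    xs = []
    for i in range(n):
        # A_i: the matrix A with its ith column replaced by B
        A_i = [row[:i] + [B[k]] + row[i + 1:] for k, row in enumerate(A)]
        xs.append(det(A_i) / d)
    return xs

A = [[5, -3, -10], [2, 2, -3], [-3, -1, 5]]
B = [-9, 4, -1]
assert cramer(A, B) == [-4, 3, -2]   # matches Example 8
```

As the text observes, the whole computation is just four determinants: |A| once, then |A1|, |A2|, |A3| in the loop.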
