You are on page 1of 231

SCHAUM'S

OUTLINE THEORY aiMl>ROBLEMS


SERIES

of

MATRICES

www.TheSolutionManual.com
by FRANK AYRES, JR.

including

Completely Solved in Detail

SCHAUM PUBLISHING CO.


NEW YORK
SCHAVM'S OUTLINE OF

THEORY AI\[D PROBLEMi;


OF

MATRICES

www.TheSolutionManual.com
BY

FRANK AYRES, JR., Ph.D.


Formerly Professor and Head,
Department of Mathematics
Dickinson College

^C^O
-^

SCHAIJM'S OUTLINE SERIES


McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Toronto, Sydney
www.TheSolutionManual.com
('opyright 1962 by McGraw-Hill, Inc. AH Rights Reserved. Printed in the
United States of America. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the prior
written permission of the publisher.

02656

78910 SHSH 7543210


Preface

Elementary matrix algebra has now become an integral part of the mathematical background
necessary for such diverse fields as electrical engineering and education, chemistry and
sociology,
as well as for statistics and pure mathematics. This book, in presenting the
more
essential mate-
rial, designed primarily to serve as a useful supplement to current texts and as a handy refer-
is

ence book for those working in the several fields which require some knowledge
of matrix theory.
Moreover, the statements of theory and principle are sufficiently complete that the book
could
be used as a text by itself.

www.TheSolutionManual.com
The material has been divided into twenty-six chapters, since the logical arrangement is
thereby not disturbed while the usefulness as a reference book is increased. This
also permits
a separation of the treatment of real matrices, with which the majority of readers
will be con-
cerned, from that of matrices with complex elements. Each chapter contains
a statement of perti-
nent definitions, principles, and theorems, fully illustrated by examples. These, in
turn, are
followed by a carefully selected set of solved problems and a considerable number
of supple-
mentary exercises.

The beginning student in matrix algebra soon finds that the solutions of numerical exercises
are disarmingly simple. Difficulties are likely to arise from the
constant round of definition, the-
orem, proof. The trouble here is essentially a matter of lack of mathematical maturity,'
and
normally to be expected, since usually the student's previous work in mathematics has
been
concerned with the solution of numerical problems while precise statements of principles and
proofs of theorems have in large part been deferred for later courses. The aim of the
present
book is to enable the reader,if he persists through the introductory paragraphs and
solved prob-
lems in any chapter, to develop a reasonable degree of self-assurance about the material.

The solved problems, in addition to giving more variety to the examples illustrating the
theorems, contain most of the proofs of any considerable length together with
representative
shorter proofs. The supplementary problems call both for the solution of numerical
exercises
and for proofs. Some of the latter require only proper modifications of proofs given earlier;
more important, however, are the many theorems whose proofs require but a few lines. Some are
of the type frequently misnamed "obvious" while others will be found to call for
considerable
ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of
such theorems that elementary matrix algebra becomes a natural first course for those seeking
to attain a degree of mathematical maturity. While the large number of these problems
in any
chapter makes it impractical to solve all of them before moving to the next, special attention
is directed to the supplementary problems of the first two chapters. A
mastery of these will do
much to give the reader confidence to stand on his own feet thereafter.

The author wishes to take this opportunity to express his gratitude to the staff of the Schaum
Publishing Company for their splendid cooperation.

Frank Ayres, Jr.


CarHsle, Pa.
October, 1962
www.TheSolutionManual.com
CONTENTS

Page
Chapter 1 MATRICES 1

Matrices. Equal matrices. Sums of matrices. Products of matrices.


Products by partitioning.

Chapter 2 SOME TYPES OF MATRICES 10


Triangular matrices.Scalar matrices. Diagonal matrices. The identity

www.TheSolutionManual.com
matrix. Inverse of a matrix. Transpose of a matrix. Symmetric
matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian
matrices. Skew-Hermitian matrices. Direct sums.

Chapter 3 DETERMINANT OF A SQUARE MATRIX. 20


Determinants of orders 2 and 3. Properties of determinants, Minors
and cofactors. Algebraic complements.

Chapter 4 EVALUATION OF DETERMINANTS 32


Expansion along a row or column. The Laplace expansion. Expansion
along the first row and column. Determinant of a product. Derivative
of a determinant.

Chapter 5 EQUIVALENCE 39
Rank of a matrix. Non-singular and singular matrices. Elementary
transformations. Inverse of an elementary transformation. Equivalent
matrices. Row canonical form. Normal form. Elementary matrices.
Canonical sets under equivalence. Rank of a product.

Chapter 6 THE ADJOINT OF A SQUARE MATRIX 49


The adjoint. The adjoint of a product. Minor of an adjoint.

Chapter 7 THE INVERSE OF A MATRIX. 55


Inverse of a diagonal matrix. Inverse from the adjoint. Inverse from
elementary matrices. Inverse by partitioning. Inverse of symmetric
matrices. Right and left inverses of m
X w matrices.

Chapter 8 FIELDS 64
Number fields. General fields. Sub-fields. Matrices over a field.
CONTENTS

Page
Chapter 9 LINEAR DEPENDENCE OF VECTORS AND FORMS 67
Vectors. Linear dependence of vectors, linear forms, polynomials, and
matrices.

Chapter 10 LINEAR EQUATIONS 75


System of non-homogeneous equations. Solution using matrices. Cramer's
rule.Systems of homogeneous equations.

Chapter 11 VECTOR SPACES 85


Vector spaces. Sub-spaces. Basis and dimension. Sum space. Inter-
section space. Null space of a matrix. Sylvester's laws of nullity.
Bases and coordinates.

www.TheSolutionManual.com
Chapter 12 LINEAR TRANSFORMATIONS 94
Singular and non-singular transformations. Change of basis. Invariant
space. Permutation matrix.

Chapter 13 VECTORS OVER THE REAL FIELD 100


Inner product. Length. Schwarz inequality. Triangle inequality.
Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt
orthogonalization process. The Gramian. Orthogonal matrices. Orthog-
onal transformations. Vector product.

Chapter 14 VECTORS OVER THE COMPLEX FIELD 110


Complex numbers. Inner product. Length. Schwarz inequality. Tri-
angle inequality. Orthogonal vectors and spaces. Orthonormal basis.
Gram-Schmidt orthogonalization process. The Gramian. Unitary mat-
rices. Unitary transformations.

Chapter 15 CONGRUENCE 115


Congruent matrices. Congruent symmetric matrices. Canonical forms
of real symmetric, skew-symmetric, Hermitian, skew-Hermitian matrices
under congruence.

Chapter 16 BILINEAR FORMS 125


Matrix form. Transformations. Canonical forms. Cogredient trans-
formations. Contragredient transformations. Factorable forms.

Chapter 17 QUADRATIC FORMS 131


Matrix form. Transformations. Canonical forms. Lagrange reduction.
Sylvester's law of inertia. Definite and semi-definite forms. Principal
minors. Regular form. Kronecker's reduction. Factorable forms.
CONTENTS

Page
Chapter lo HERMITIAN FORMS 146
Matrix form. Transformations. Canonical forms. Definite and semi-
definite forms.

Chapter lif THE CHARACTERISTIC EQUATION OF A MATRIX 149


Characteristic equation and roots. Invariant vectors and spaces.

Chapter 20 SIMILARITY 156


Similar matrices. Reduction to triangular form. Diagonable matrices.

www.TheSolutionManual.com
Chapter 21 SIMILARITY TO A DIAGONAL MATRIX 163
Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic
forms. Hermitian matrices. Unitary similarity. Normal matrices.
Spectral decomposition. Field of values.

Chapter 22 POLYNOMIALS OVER A FIELD 172


Sum, product, quotient of polynomials. Remainder theorem. Greatest
common divisor. Least common multiple. Relatively prime polynomials.
Unique factorization.

Chapter 2o LAMBDA MATRICES 179


The X-matrix or matrix polynomial. Sums, products, and quotients.
Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.

Chapter 24 SMITH NORMAL FORM 188


Smith normal form. Invariant factors. Elementary divisors.

Chapter 25 THE MINIMUM POLYNOMIAL OF A MATRIX 196


Similarity invariants. Minimum polynomial. Derogatory and non-
derogatory matrices. Companion matrix.

Chapter 26 CANONICAL FORMS UNDER SIMILARITY 203


Rational canonical form. A
second canonical form. Hypercompanion
matrix. Jacobson canonical form. Classical canonical form. A reduction
to rational canonical form.

INDEX 215

INDEX OF SYMBOLS 219


www.TheSolutionManual.com
'

chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

"2 1 3
3 l'
(a) and (b) 2 1
-1

www.TheSolutionManual.com
1 5 4 7

and subject to certain rules of operations given below is called a matrix. The matrix (a) could be
(2x + 3y + 7z =
considered as the coefficient matrix of the system of homogeneous linear equations
\ X- y + 5z = [

i2x + 3y = 7
-

or as the augmented matrix of the system of non-homogeneous linear equations


\x- y = 5
Later, we shall see how the matrix may be used to obtain solutions of these systems. The ma-
trix (b) could be given a similar interpretation or we might consider its rows as simply the coor-
dinates of the points (1,3, 1), (2, 1,4), and (4,7, 6) in ordinary space. The matrix will be used
later to settle such questions as whether or not the three points lie in the same plane with the
origin or on the same line through the origin.

In the matrix

^11 ^12 '^IS '

(1.1)

*mi "m2 ""m3

the numbers or functions a^- are called its elements. In the double subscript notation, the first
subscript indicates the row and the second subscript indicates the column in which the element
stands. Thus, all elements in the second row have 2 as first subscript and all the elements in
the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of
order "m by ra" or mxra.

(In indicating a matrix pairs of parentheses, ( ), and double bars, || ||, are sometimes
used. We shall use the double bracket notation throughout.)

At times the matrix (1.1) will be called "the mxra matrix [a^ ]
" or "the mxn matrix A =
[a^-]". When the order has been established, we shall write simply "the matrix 4".

SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an
re-square matrix.

In a square matrix, the elements a^, 022. . . , " are called its diagonal elements.

The sum of the diagonal elements of a square matrix A is called the trace of A.

1
MATRICES [CHAP. 1

EQUAL MATRICES. Two matrices A = [a^] and B = [bij] are said to be equal (A = B) if and only if

they have the same order and each element of one is equal to the corresponding element of the
other, that is, if and only if

a = 1,2, , ro; / = 1, 2 n)
^^J 'V
Thus, two matrices are equal if and only if one is a duplicate of the other.

ZERO MATRIX. A matrix, every element of which is zero, is called a zero matrix. When ^ is a zero
matrix and there can be no confusion as to its order, we shall write A = Q instead of the mxn
array of zero elements.

SUMS OF MATRICES. If 4 = [a^A and S = [fe.^-] are two mxn matrices, their sum (difference), A B,
is defined as the mxn matrix C = where each element of C
[c^A is the sum (difference) of the

www.TheSolutionManual.com
,

corresponding elements of A and B. Thus, AB [Oy bij] .

'\ 2 31 0'
2 3
Example 1. It A and = then
14 1 I

-12 5

2 +3
A +B
Lo+(- 1) 1 +2 + 5j
and
1-2 2-3 3-0
O-(-l) 1-2 4-5

Two matrices of the same order are said to be conformable for addition or subtraction. Two
matrices of different orders cannot be added or subtracted. For example, the matrices (a) and
(b) above are non-conformable for addition and subtraction.

The sum of k matrices ^ is a matrix of the same order as A and each of its elements is k
times the corresponding element of A. We define: If k is any scalar (we call k a scalar to dis-
tinguish it from [k] which is a 1x1 matrix) then by kA = Ak is meant the matrix obtained from
A by multiplying each of its elements by k.

Example 2. If ^ '11 then


[:

I -2
A+A + A 3A A-3
! 3
and
r-5(l) -5(-2)"| r-5 10-1
-5A
L-5(2) -5(3) J L-10 -15j

by -A, called the negative of /4, is meant the matrix obtained from A by mul-
In particular,
tiplying each of its elementsby -1 or by simply changing the sign of all of its elements. For
every A, we have A +(-A) = 0, where indicates the zero matrix of the same order as A.

Assuming that the matrices A,B,C are conformable for addition, we state:
(a) A + B = B + A (commutative law)
(b) A + (B+C) = (A + B)+C (associative law)
(c) k(A + B) = kA + kB = (A + B)k, A- a scalar
(d) There exists a matrix D such that A + D = B.
These laws are a result of the laws of elementary algebra governing the addition of numbers
and polynomials. They show, moreover,
1. Conformable matrices obey the same laws of addition as the elements of these matrices.
CHAP. 1] MATRICES

MULTIPLICATION. By the product AB in that order of the Ixm matrix A = [a^i a^g ais a^m] and

fell

^31
the mxl matrix fi is meant the 1x1 matrix C = [on fen + 012 fesi + + aimfemi]

fell

fe^i

That is, [an Oi '^imj ~ L^ii fell "*"


^12 feji + + imi]

-|^^2 a^^fe^^J

www.TheSolutionManual.com
femi

Note that the operation is row by column; each element of the row is multiplied into the cor-
responding element of the column and then the products are summed.
1'

Example 3. (a) [2 3 4] 1 [2(l) + 3(-l) + 4(2)] = [7]


L 2.

-2
(b) [3 -1 4] 6 [-6 - 6+ 12] =
3J

By the product AB in that order of the mxp matrix A = [a^-] and the p xn matrix B = [bij]
is meant the mxn matrix C = \c;;'\
<-
V -'
where
p
Hj ii^ij + "t2 ^2; + + "-ip i>,pj ^^^"ikbkj (f = 1, 2, . . . ,m-; / = 1, 2 re).

Think of A as consisting of m rows and B as consisting of n columns. In forming C = AB


each row of A is multiplied once and only once into each column of B. The element c^ of C is then

the product of the ith row of A and the /th column of B.

Example 4.

^11 *^ 1 9 '011611 + 012621 11 ^12 + "12 622


A B [611 612I 021611+022^21 0^21 ^12 + '^22 ^22
621 622J
."31 O32 ."31611+032621 "31612+002600

The product ^S is defined or A is conformable to B for multiplication only when the number
ofcolumns of A is equal to the number of rows of S. If ^ is conformable to B for multiplication
{AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not
be
^^^ined)- See Problems 3-4.

Assuming that A,B,C are conformable for the indicated sums and products, we have
(e) A(B + C) = AB + AC (first distributive law)
(/") (A + B)C = AC + BC (second distributive law)
(g) A(BC) = (AB)C (associative law)

However,
(A) AB i= BA, generally,
(i) AB = does not necessarily imply i = or S = 0,

(/) AB = AC does not necessarily imply B = C.


See Problems 3-8.
MATRICES [CHAP. 1

PRODUCTS BY PARTITIONING. Let A= [a^J] be of order mxp and 8= [b^j] be of order pxn. In
forming the product AB, the matrix A is in effect partitioned into m matrices of order Ixp and B
into n matrices of order pxl. Other partitions may be used. For example, let A and 6 be parti-
tioned into matrices of indicated orders by drawing in the dotted lines as

(piXTii) I
(pixnj)
(mixpi) I (m]^xp2) (m-Lxps)
A =
(P2X%) I
(p2Xra2)
Jmgxpi) I
(m,2Xp2) j
(m2Xp3)_
(psxni) !
(p3Xre2)

Am A^2 Aig
"21 I
O22
A^i I
A,
"31 I
"32

In any such partitioning, itnecessary that the columns of A and the rows of 6 be partitioned
is
in exactly the same way; however m^, mg, re^, rig may be any non-negative (including 0) integers

www.TheSolutionManual.com
such that mi+ m2 = m and rei+ ^2 = n. Then

^llOll + '4i2021 ""


'4l3"31 A-^-^B-12 + A-^qBqq + '^13832 Cii C12
^8 c
A21B11 + 422^21 + A-2_iiB^^ A^^B^^ + ^22^22 + ^423632 _C21 1^22.

2 1 1110
Examples. Compute /4S, given /I = 3 2 and B 2 110
1 1 2 3 12
Partitioning so that

11 ^12
2 1 !

fill ^12
111
A = 3 2 '0 and B 2 1 1
A21 A22 B21 S22
10 11 2 3 1 I 2

[A^-iB-i^i + A-12B21 A-ij^B'12 + A-i,


we have AB =
1 AqiB^^ + ^22^21 ^21^12 * -^2^ B,

1 1 1

2 1 1 = ' [2]
i4' [3 3] [?
1"
1 1
[1 0] + [l][2 3 1] [1 0] + [l][2]
2 1 1

[4 3 3-1 To o] To] Tol 4 3 3


4 3 3
7 5 5JL0J 7 5 5
_[l 1 1] + [2 3 1] [0]+[2] [3 4 2] [2] 3 4 2 2

See also Problem 9.

Let A, B.C... be re-square matrices. Let A be partitioned into matrices of the indicated
orders
(pixpi) I
(piX P2) I
. . . I
(piX Ps)" All A-^2
J I I

(P2X pi) '

j
^"
(P2X Ps)^ "" '

j
(P2XPS) A21 A22
I

I
I T

.(PsXpi) I
(psXp2) I
... I (PsXPs) /loo

and let 8, C, ... be partitioned in exactly the same manner. Then sums, differences, and products
may be formed using the matrices A^i, A^2< : Sii. 5i2. - ^n, C12
CHAP. 1] MATRICES

SOLVED PROBLEMS
1 2-10 3 -4 1
2I
ri + 3 2+(-4) -1 + 1 0+2 4-202
1. (a) 4 2 1 15 3 = 4+1 0+5 2+0 1+3 5 5 2 4
2 -5 1 2_ 2-2 3-1 2 +2 -5 + (-2) 1 +3 2 + (-l) 4-741
'l 2 -1 O' 3-412 1-3 2+4 -1-1 0-2 -2 6 -2
(b) 4 2 1 1 5 3 = 4-1 0-5 2-0 1-3 3-5 2
2-5 12 2-2 3-1 2-2 -5+2 1-3 2+1 -3 -2

1 2 -1 3 6-3
(c) 3 4 2 1 12 6 3
2 -5 1 2. . 6 -15 3 6,

"l 2 -1 o" -1 -2 1

(d) - 4 2 1 -4 -2 -1
2 -5 1 2 -2 5 -1 -2

www.TheSolutionManual.com
1 2 -3 -2 P 1
2. If A 3 4 and 6 1 -5 find D = r s such that A + B - T) = 0.

5 6 4 3 t u

1-3-p 2-2- ?1 2-p -9 "0 0"

If A-\-B-D - 3+1-r 4-5-s - 4-r -1-s = -2-p = and p = -2, 4-r =


_5+4- 6+3- u. 9- 9-u_ .0

-2 0"
and r = 4, Then D = 4 -1 = A +
- 9 9.

3. (a) [4 5 6] [4(2) +5(3) + 6(-l)] = [17]

2(4) 2(5) 2(6)- 8 10 12


(b) [4 e] =
3(4) 3(5) 3(6) = 12 15 18
[J -1(4) -1(5) -1(6). .-4 -5 -6

4 -6 9 6"

(c) [1 2 3] -7 10 7
5 8 -11 -8
[ 1(4) + 2(0) + 3 (5) 1 (-6) + 2 (-7) + 3(8) 1(9) + 2(10) + 3(-ll) 1 (6) + 2(7) + 3(-8)]
[19 4 -4 -4]

r2(l) + 3(2)+4(3)1 _ poT


(rf)

C Ll(l) + 5(2) + 6(3)J '


L29J

(e)
l]
\ "gl ^ ri(3) + 2(l) + l(-2) l(-4) + 2(5) + 1(2)]
^ p 8]
G 2J _2 2J
[4(3) + 0(1) + 2 (-2) 4(-4) + 0(5) + 2(2)J [s -I2J

{2 -1 1"

. Let <4 = 1 2 . Then


.1 1.

'2 -1 1 \2 -1 r "5 -3 r 5 -3 f 2 -1 1' 11 -8


A"" = 1 2 1 2 = 2 1 4 and 2 1 4 1 2 = 8-18
1 1
b 1- _3 -1 2. 3 -1 2 .1 1. .8-43
The reader will show that A A.a'^ and ^^./l^ = /4^.^^.
MATRICES [CHAP. 1

5. Show that:
2 ? 2

2 3 2
(b) S
1=17=1
S a--
J
= 22 3

j=i 1=1
a,-,-,
J

2 3 3 2
(c) 2: aife( S b^hCj^j)
^
= 2 (2 aif,bkh)chj
k=i h=t h=i k=i ^

2
(")
J, ife(*fei +'^fej) = "il^^i + '^lj) + '^t2(*2i+'^2i) = (il^i+''i2*2i) + (il<^17+'^i2'^2i)
2 2

2 3 2
(6) 2 2 a-- = 2 (a^j^+ a-^ + a -p) = (an + 0^2 + a^g) + (a^t + 092 + <223)
t=i j=i '' i=i
= (Oil + O21) + (12 + "22) + (ai3 + "23)
2
2 a-
" + 2
2
a + a.
2
2 =22 3 2
o-- .

www.TheSolutionManual.com
i=l i = l ^2 i = i 13 j = ii = i V
This is simply the statement that in summing all of the elements of a matrix, one may sum first the
elements of each row or the elements of each column.

222
2 3 2
(c) 2 a., ( 2
b,,c,-)
tfe^^^j^ kh hy
= 2 a-,(bi, c + 6, c + &, c O
37'
^_j ^-^ tfe^ fei ij fee 2JI fe3

0-il(^ll'^lj + ftl2C2J+ ^IsCsj) + 012(621^17 + b2QC2J + 623C3J)


(oil fell + ai2*2i)cij + (0^1612 + aiQh.22)cQj + (aiifeis + ai2b23)cgj

= ( ,^ oifefefei)cii + ( S a{fefefe2)c2f + (2 ai<^b^^)c3j


**
k=l ^
fe=i -^
fe=i -^

3 2

= /.l/li^^^'fe'^^'^'^r

6. Prove: It A = [a^-] is of order mxn and if S = [fe^-] and C = [c--] are of order nxp, then /}(B + C)
^ '
= AB + AC.

The elements of the j'th row of A are a^^ , a^^, ... , a^^ and the elements of the /th column of S +C are
feijj+cij,
62J+
c'2j fenj +%j- Then the element standing in the ith row and /th column of A(B + C) is

oil(feij + Cij) + afe(fe2j + C2j) +... +o.i(feni + = = the sum Of


%i) ^^^"ikHj+^kj) S^^iifefefej +^2^aifeC;^j,

the elements standing in the ith row and /th column of AB and ^C.

7. Prove: If i = [aij] is of order mxn, if S = [6^-] is of order rexp, and if C = [c.-A is of order pxo
-^
then ^(6C) = (iS)C.
P
The elements of the j throw of /4 areai^.a^g, ...,a- and the elements ofthe/th column of BC are 2 6ih c^
p P h= i ^ .

'iJ'

2 b^f^Cf^- 2 b^j^c^j-, hence the element standing in the ith row and /th column of A (BC) is
P P P n P
ait^^b^chj +ai2^^b2h%j + ... + tn = ^^"ik<-j^,^hh<^hj^
J^ *n/i<;/ij

P n n n n
= ^^^^j_"ik''kh)<^hj = (2^^0ifefefei)cij + (X^aif^bf^2)<:QJ + + (^^"ikbkp)<^pj

This is the element standing in the j'th row and /th column of (AB)C; hence, A(BC) - (AB)C.

8. Assuming A, B ,C,D conformable, show in two ways that (A +B)(C +D) = AC + AD + BC + BD.
Using (e) and then (/), (A+B)(C+D) = (A +B)C + (A +B)D = AC +BC +AD +BD.
Using (/") and then (e), (A+B)(C+D) = A(C +D) + B(C +D) = AC + AD ^ BC + BD
= AC +BC +A.D +BD.
CHAP. 1] MATRICES

"l o'
1 l'
1
"l 10 1 'l o' 3 1 2
"4
1 2"

9. (a) 1 2 1 1 + 2 [3 1 2] 1 + 6 2 4 = 6 3 4
1
1 3 1 1 3 1 9 3 6 9 3 7
_3 1 2_

'1
10 10 0' '1
10 10 o" 1
[0]
2 '
1 1 1
1 2 :]

10 oil
(b)
1

1
3
4 i

1 5
10 3 10

1
1

j2
:o]
[0 1 3]
[0]

[0] [0: ]
10 10 6 1 1 3 [0 "][o

2
3
12
10

www.TheSolutionManual.com
18

"1 0'
1 ! 1 1 2 1 3 4 5 1 2 3 4 5 1 1

2 1
-4- -
!

0^0 2 3|4 5 6 2 3 4 5 6_ 2 1

3 4' 7"
i
3 1 2 !
3 4 15 6 7 1 2 3 3 1 2 5 6 3 1 2
(c)
1
1 2 1 1 4 5 1 6 7 8 1 2 1 4 5 1 2 1 6 7 8 1 2 1

\
1 1
-
1

+-
9 8
-^- 1
7 6 5

4
1

[l]-[8 7]
1 9 8. 1

[l]-[6 5 4]
1 7 6 5 1 1

_0 1 i 1 8 7 1
6 5 [1]-[1]

'7 9 11' "13"


3 5 7 9 11 13
10 13 16 19 4 10 13 16 19
7
31 33 "35 37 39" 41" 31 33 35 37 39 41
20 22 24 26 28 30 20 22 24 26 28 30
13 13 .13 13 13 .13. 13 13 13 13 13 13
8 7 6 5 4 1
[6 5 4] [l]

%i Hiyi+ "1272
11^1 0]^2^2
10. Let { %2 = a^iy-i + 0^272 be three linear forms in y^ and yj ^-nd let be a
.J1 O21 Zi + h-2
%3 os 1 yi + % 2 72
linear transformation of the coordinates (yi, j^) into new coordinates (z^, zg). The result of applying
the transformation to the given forms is the set of forms

%i = (o-Li 6^j + aj^2 621) Zi + (on ii2 + '^12^22) ^2

%<2 ~ ((^2 1 C^l 1 "*"


^2 2 ^2 1) ^1 "*"
(^2 1 ^12 "*"
^2 2 ^22) ^2

X3 = ("31^11 + ^32^21) Zl + ("3 1^12 + "32 ^22) Z2

*i %1 012 r-
Vl
-.

Using matrix notation, we have the three forms "21 "22 and the transformation
Vr,
^3 1 '^3 2
611 612
The result of applying the transformation is the set of three forms
O21 022
'xi "11 %2 r
-1

pri ii2 /,
"2 1 "^2 2
^2 1 622 ^',
Os 1 ^^3 2

Thus, when a set of m linear forms in n variables with matrix A is subjected to a linear trans-
formation of the variables with matrix B , there results a set of m linear forms with matrix C = AB
'

MATRICES [CHAP. 1

SUPPLEMENTARY PROBLEMS
2
-3' -1
"l 3 2I 4 ]
2I
Given A = 5 2 , B = 4 2 5 , and C = 3 2
-1
.1 1_ _2 3_
J -2 3_

4 1 -f -3 I -51
(a) Compute: A + B = 9 2 7 , A-C = 5-3
_3 -1 4j _ t-2j
-2 -4 6
(6) Compute: -2-4 -10 -4 0-S =
-2 2 -2
(c) Verify: A + {B~C) = (A+B)-C.
(d) Find the matrix D such that A+D^B. Verify that D =B-A = -(A-B).

1 -1 1 1 2 3 -11 6 -1

www.TheSolutionManual.com
12. Given A = -3 2 -1 and B 2 4 6 , compute AB = Q and BA = -22 12 -2 Hence, ABi^BA
-2 1 1 2 3 -11 -1
6
generally.

\ -3 2 14 10 2 1-1-2
13. Given A = 2 1 -3 , B = 2 111 and C 3 -2 -1 -1 show that AB = AC. Thus, AB = AC
4 -3 -1 1-212 2-5-1
does not necessarily imply B = C

1 1 -1 1 3
^["12 3 -4"!
14. Given A = 2 3 , B = 2 and C show that (AB)C = A(BC).
0-2
,

3 -1 2 -1 [2 ij
4

15. Using the matrices of Problem 11, show that A(B + C) = AB + AC and (A+B)C = AC +BC.

16. Explain why, in general, (ABf ^ A^ 2AB + B^ and A^ - B^ ^ (A-B)(A+B).

2 -3 -5 -13 5 2 -2 -4
17. Given A = -14 5 B 1 -3 -5 and C -13 4
1 -3 -4 -13 5 1 -2 -3
(a) show that AB = BA = 0, AC = A, CA = C.

(b) use the results of (a) to show that ACB = CS^, A'^ - B"^ = (A -B)(A + B), (A Bf = A'^ + b'

2
18. Given where i = -1, derive a formula for the positive integral powers of A .

Ans. A = I, A, -I, -A according as n =4p, 4p-l-l, ip + 2, 4p-^3, where /= n


ij-
[0

19. Show that the product of any two or more


-'"'[;;][-:;] [rM'i -"] [0 -]- ["0 3-

[
i [? 0]
is a matrix of the set.

20. Given the matrices A of order mx, B of order nxp, and C of order rx^, under what conditions
on p, 9,
and r would the matrices be conformable for finding the products and what is the order of each" (a)
ABC
(b)ACB, (c)A(B + C)?
Ans. (a) p = r; mx-q (b) r = n = g; m x p (c) r = n, p = q; m x g
CHAP. 1] MATRICES

21. Compute AB, given:


10 11 10 10 2 1

(a) A 1 j
1 and B 1
j
Ans. 1 2
1 1
0^ 1 1 1 I

m
"
1 o"
1
(b) A and B 1 Ans.
1
--1 2

12 10 10 1
4 1
1 '

'2 2
(c) A and B Ans
1
1 1 i 10
I 2 2 1 i
2 2

22. Prove: (a) trace (A + B) = trace A + trace B, (6) trace (kA) = k trace A.

www.TheSolutionManual.com
2
h -2 1
1

-1
2
= 2n+r.-3y. -3 -3
V, Y^l
_
^;^^^l^ y,\ [2
1 [2 1
2 3

r -zi + 722"]

[-221 - 622J'

24. If -4 = [aiji and B = [fe^J are of order m x n and if C = [c^j] is of order nx. p, show that (/4+B)C =^C + BC.

25. Let /4 = [a^-] and B = [fej,-;,] , where (i = 1, 2 m; / = 1, 2 p; A; = 1, 2, ... ,n). Denote by /S- the sum of

Pi
182

the elements of the /th row of B, that is, let fij = S b j)^. Show that the element in the ith row of A

Pi
is the sum of the elements lying in the sth row of AB. Use this procedure to check the products formed in
Problems 12 and 13.

26. A relation (such as parallelism, congruency) between mathematical entities possessing the following properties:

(i ) Determinative Either a is in the relation to 6 or a is not in the relation to b.

(II) Reflexive a is in the relation to a, for all a.

(iii) Symmetric If a is In the relation to b then b is in the relation to a.


(iv) Transitive If a is in the relation to b and b is in the relation to c then a is in the relation to c.

is called an equivalence relation.


Show that the parallelism of lines, similarity of triangles, and equality of matrices are equivalence
relations. Show that perpendicularity of lines is not an equivalence relation.

27. Show that conform ability for addition of matrices is an equivalence relation while con form ability for multi-
plication is not.

28. Prove: If .4, B, C are matrices such that AC = CA and BC = CB, then (AB BA)C = C(AB BA).
chapter 2

Some Types of Matrices

THE IDENTITY MATRIX. A square matrix A whose elements a^: = for i>j is called upper triangu-
lar; a square matrix A whose elements a^= for i<j is called lower triangular. Thus

"ii "12 "13

www.TheSolutionManual.com
U tip o tZo o
"2n

03S is upper triangular and

is lower triangular.

The matrix D Ogj which is both upper and lower triangular, is call-

... a,

ed a diagonal matrix. It will frequently be written as

D = diag(0]^i, 022, 033,


' ^nn )
See Problem 1.

If in the diagonal matrix D above, Oi^ = Ogs = ci = k, D is called a scalar matrix; if.

in addition, k = l, the matrix is called the identity matrix and is denoted by /. For example

'1

and 1
t"] 1

When the order is evident or immaterial, an identity matrix will be denoted by /. Clearly,
/+/ + ... to p terms = p / = diag (p,p,p

p) and f
= I-I ... to p factors = /. Identity ma-

2 3I
[1. c (5 , then l2-A =

A.L hAh A, as the reader may readily show.

10
.

CHAP. 2] SOME TYPES OF MATRICES 11

SPECIAL SQUARE MATRICES. If A and B are square matrices such that AB = BA, then A and B are
called commutative or are said to commute. It is a simple matter to show that if A is any ra-square
matrix, it commutes with itself and also with L
See Problem 2.

If A and B are such that AB = -BA, th^ matrices A and 6 are said to anti-commute.
,fe+i
A matrix A for which A A, where A: is a positive integer, is called periodic. If k is
k-t-i
e for which
the least positive integer A A, then A is said to be of period k.

If i = 1, so that A^ = A, then A is callled idempotent


See Problems 3-4.
aP
A matrix A for which 4=0, where p is a positive integer, is called nilpotent. If p is the
least positive integer for which A'^ = 0, ther A is said to be nilpotent of index
See Problems 5-6.

www.TheSolutionManual.com
THE INVERSE OF A MATRIX. If A and B are square matrices such that AB = BA = I, then B is call-

ed the inverse of A and we write B = A'^ (B equals A inverse). The matrix B also has A
as its
inverse and we may write A = B~^

1 2 3 6 -2 -3 1 ) o'
Example 1. Since 1 3 3 -1 1 = /, each matrix in the product is the inverse of
12 4 -1 1 ( 1 1

the other.

We shall find later (Chapter?) that not every square matrix has an inverse. We can show
here, however, that if A has an inverse then that inverse is unique.
See Problem 7.

If A and B are square matrices of the same order with inverses A^^ and B~^ respectively,

then (AB)''^ = B'^-A~^, that is,

1. The inverse of the product of tijvo matrices, having inverses, is the product in re-
verse order of these inverses.
See Problem 8.

A matrix A such that A^ = I is called involutory. An identity matrix, for example, isinvol-
utory. An involutory matrix is its own inve r$e.
See Problem 9.

THE TRANSPOSE OF A MATRIX. The matrix of order nxm obtained by interchanging the rows and
columns of an mxn matrix A is called the trahspose of A and is denoted by A' (A transpose). For
1 4'

example, the transpose of A IS i' = 2 Note that the element


^[-e] 3
5

6.
a-
Ir

7
in the ith row

and /th column of A stands in the /th row and ith column of A\

If A' and ZTare transposes respectively qf A and B, and if A: is a scalar, we have immediately

(a) (A'Y = A and (b) (kAy = kA'

In Problems 10 and 11, we prove:

II. The transpose of the sum of two matrices is the sum of their transposes, i.e.

(A + BY = A'+ S'
12 SOME TYPES OF MATRICES [CHAP. 2

and
in. The transpose of the product of two matrices is the product in reverse order of their
transposes, i.e.,

(AB)' = B'-A'
See Problems 10-12.

SYMMETRIC MATRICES. A square matrix A such that A'= A is called symmetric. Thus, a square
matrix A = \
a^j] is symmetric provided o^-- = a,-,^ for all values of i and , /. For example,
1 2 3
2 4-5 is symmetric and so also is kA for any scalar k.

3-5 6
In Problem 13, we prove
IV. If A is an ra-square matrix, then A + A' is symmetric.

www.TheSolutionManual.com
A square matrix A such that A'= -A is called skew-symmetric. Thus, a square matrix A is
skew-symmetric provided a^.- = -Uj^ for all values of i and /. Clearly, the diagonal elements are
"
-2 3"

zeroes. For example, A 2 4 is skew-symmetric and so also is kA for any scalar k.

-3 -4

With only minor changes in Problem 13, we can prove


V. If A is any re-square matrix, then A - A' is skew-symmetric.

From Theorems IV and V follows


VI. Every square matrix A can be written as the sum of a symmetric matrix B = ^(A+A')
and a skew-symmetric matrix C = ^(A-A'). See Problems 14-15.

THE CONJUGATE OF A MATRIX. Let a and b be real numbers and let i = V-1 then, z = a+ hi is ;

called a complex number. The complex numbers a+bi and a-bi are called conjugates, each
being the conjugate of the other. If z = a+ bi, its conjugate is denoted by z = a+ hi.

If z^ = a + bi and Z2 = z^ = a~ bi, then Z2 = z-^ = a- bi = a+ bi, that is, the conjugate of


the conjugate of a complex number z is z itself.

If z.i=a+bi and Z2 = c+ di, then

(i) z-L + Zg = (a+c) + (b+d)i and z^ + z^ = (a+c) - (b+d)i = (a-bi) + (c-di)

that is, the conjugate of the sum of two complex numbers is the sum of their conjugates.

(ii) z^- Zg = (ac-bd) + (ad+bc)i and z^^ = (ac-bd)- (ad+bc)i = (a-bi)(c-di) = F^-i^,

that is, the conjugate of the product of two complex numbers is the product of their conjugates.

When i is a matrix having complex numbers as elements, the matrix obtained from A by re-
placing each element by its conjugate is called the conjugate of /4 and is denoted by A (A conjugate).

l + 2i i I -21 - i
Example 2. When A then A =
3 2-3i 3 2 + 3i

If A and B are the conjugates of the matrices A and B and itk is any scalar, we have readily

(c) (.4) = A and (c?) (X4) = ~k-A

Using (i) and (ii) above, we may prove


.

CHAP. 2] SOME TYPES OF MATRICES 13

_VII. The conjugate of the sum of two matrices is the sum of their conjugates, i.e.,
(A + B) = A + B.
Vin. The conjugat e of the product of two matrices is the product, in the same order, of
their conjugates, i.e., (AB) = A-B.

The transpose of A is denoted by A^(A conjugate transpose). It is sometimes written as A*.


We have
IX. T^e transpose of the conjugate of A is equal to the conjugate of the transpose of
A, i.e., (Ay = (A').
Example 3. Prom Example 2

1 - 2i 3 1 + 2i 3 [l-2i 3 1
(AY = while A' and (A') = (AY
-i 2+ 3i - Zi
i 2 -i 2 + 3iJ

www.TheSolutionManual.com
HERMITIAN MATRICES. A square matrix ^ = [0^^] such that A' = A is called Herraitian. Thus, /I
is Hermitian provided a^j = uj^ for all values of i and /. Clearly, the diagonal elements of an
Hermitian matrix are real numbers.

1 1-i 2

Example 4. The matrix A = 1 +j 3 i is Hermitian.

2 -I

Is kA Hermitian if k is any real number? any complex number?

A square matrix ^ = [0^^] such that 1= -A is called skew-Hermitian. Thus, ^ is skew-


Hermitian provided a^^ = -o^-- for all values of
Clearly, the diagonal elements of a i and /.
skew-Hermitian matrix are either zeroes or pure imaginaries.

i l-i 2
Examples. The matrix A = -1-t 3i i is skew-Hermitian. Is kA skew-Hermitian it k is any real
-2 i

number? any complex number? any pure imaginary?

By making minor changes in Problem 13, we may prove


X. If A is an ra-square matrix then 4 + Z is Hermitian and A-T is skew-Hermitian.

From Theorem X follows


XI. Every square matrix A with complex elements can be
written as the sum of an
Hermitian matrix B={(A + A') and a skew-Hermitian matrix
C=i(A-A').

DIRECT SUM. Let A^, A^ As be square matrices of respective orders m^, ,ms. The general-
ization

^1 ...

A^ ...

diag(^i,^2 As)

... A,

of the diagonal matrix is called the direct sum of the Ai


14 SOME TYPES OF MATRICES [CHAP. 2

1 2 -1
Examples. Let /4i=[2], ^5 and An 2 3
B:} 4 1 -2

12
3 4
The direct sum of A^,A^,Aq is diag(^3^, /42, /4g) =
12-1
2 3

4 1-2

Problem 9(6), Chapter 1, illustrates

XII. If A = Aia,g {A^,A^, ...,Ag) and S = diag(Si, S2 where Ai and Behave


^s).
the same order for (s = 1, 2 s), then AB = diag(^iSj^, A^B^ ^s^s)-

www.TheSolutionManual.com
SOLVED PROBLEMS

ii "12 'm 11^1

1. Since
Or,
2i "22 2n 22 21 OooOoo "^22 ''2n
the product AB of

''ml "7)12 "tn ''WB "Ml ''wm"m2 ^mm"nn

an m-square diagonal matrix A = diag(oii, a^^ an) and any mxn matrix B is obtained by multi-
plying the first row of B by %i, the second row of B by a^^- and so on.

2. Show that the matrices and K commute for all values of o, 6, c, J.

This follows from \" ^1 P "^l =


P^+*'^
'"^ + H = ^ '^l [" *1

2 -2 -4
3. Show that -13 4 is idempotent.

1 -2 -3_

2 -2 -4' '
2 -2 -4" -2 -4
/ = -13 4 -13 4 3 4
_ 1 -2 -3_ . 1 -2 -3_ -2 -3

4. Show that if AB = A and BA = B, then i and B are idempotent.

.45.4 = (AB)A = A-A = A^ and ABA = .4(34) = AB = A ; then 4^ = .4 and 4 is idempotent. Use BAB to
show that B is idempotent.
.

CHAP. 2] SOME TYPES OF MATRICES 15

1 1 3
5. Show that A 5 2 6 is nilpotent of order 3.

2 -1 -3
3"
1 1 3 1 1 1 1 3
/ = 5 2 6 5 2 6 = 3 3 9 and A A^.A = 3 3 9 5 2 6 =
-2 -1 -3 -2 -1 -3 -1 -1 -3 -1 -1 -3 -2 -1 -3

6. If A is nilpotent of index 2, show that AOAf= A for n any positive integer.


Since /? = 0, A^ = a'' = ... = ^" = 0. Then A{IAf = A{lnA) = AnA^ = A.

7. Let A,B,C be square^ matrices such that AB = 1 and CA = 1. Then iCA)B = CUB) so that B
C. Thus, 5 = C= ^ ^ is
the unique inverse of ^. (What is S~^?)

www.TheSolutionManual.com
8. Prove: (AB)^ = B~^-A'^

By definition {ABf{AB) = (AB)(ABf = /. Now


(B' -A )AB = b\a''-A)B B^-I-B = B-'.B
and AB(B~'^-A~'-) = A(B-B''^)A~^ A-A = /

By Problem 7, (AB)'^ is unique; hence, (AB)^ B-'.A-'

9. Prove: A matrix A is involutory if and only if (IA)(I+ A) = 0.

Suppose (I-A)(l+A) = I-A^ = 0; then A^ = I and /I is involutory.

Supposed is involutory; then A^ = I and (I-A)(I+A) = I-A^ = I-I = 0.

10. Prove: (A+B)' = A' + B'.

Let A = [ay] and B = [6^]. We need only check that the element in the ith row and /th column of
A'. S' and (A+B)' are respectively a^, bjj_. and aj^+ bj^.

11. Prove: (AB)' = S'i'.

Let A = [ay] be of order mxn, B = [6y] be of order nxp ; then C = AB = [cy] is of order mxp. The
element standing in the ith row and /th column of AB cy =
is
J^aife. b^j and this is also the element stand-
ing in the /th row and ith column of (AB)'.

The elements of the /th row of S'are iy, b^j bnj and the elements of the ith column of ^'are a^^,
"i2 Hn- Then the element in the /th row and ith column of B'/l'is
n n

Thus, (AB)' = B'A'.

12. Prove: (ABC)' = CB'A'.

mite ABC = (AB)C. Then by Problem


, 1 1 ,
(ABC)' = \(AB)C\' = C'(AB)' = C'B'A'.
2g SOME TYPES OP MATRICES [CHAP. 2

13. Show that if A = [o^j] is n-square, then B = [bij] = A + A' is symmetric.

First Proof.

The element in the ith row and /th column of .4 is aij and the corresponding element of /I' is aji; hence,

bij = aij + a^i. The element in the /th row and ith column of A is a^i and the corresponding element of .4' is

oj^j] hence, bji = a^^ + aij. Thus, bij = bji and B is symmetric.

Second Proof.

By Problem 10, (A+Ay = A' + (A')' = A'+A = A + A' and (^ +.4') is symmetric.

14. Prove: If A and B are re-square symmetric matrices then AB is symmetric if and only if A and B
commute.
Suppose A and B commute so that AB = BA. Then (AB)' = B'A' = BA = AB and AB is symmetric.

Suppose AB is symmetric so that (AB)' = AB. Now (AB)' = B'A' = BA ; hence, AB = BA and the ma-

www.TheSolutionManual.com
trices A and B commute.

15. Prove: Ifthe m-square matrix A is symmetric (skew-symmetric) and if P is of order mxn then B =
P'AP is symmetric (skew-symmetric).

If .4 is symmetric then (see Problem 12) B' = (P'AP)' = P'A'(P')' = P'A'P = P'AP and B is symmetric.

If ,4 is skew-symmetric then B' = (P'AP)' = -P'AP and B is skew-symmetric.

16. Prove: If A and B are re-square matrices then A and 5 commute if and only if A~ kl and B- kl
commute for every scalar k.

Suppose A and B commute; then AB = B.4 and

{A-kI)(B-kI) = AB - k(A+B) + k^I

BA - k(A+B) + k^I = (B-kr)(A-kl)

Thus, Akl and B kl commute.

Suppose 4 -A;/ and B -kl commute; then

(A-kI)(B-kI) = AB - k(A+B) + k^I

BA - k(A+B) + k^I = (B -kr\(A-kI)

AB = BA. and A and B commute.


CHAP. 21 SOME TYPES OF MATRICES 17

SUPPLEMENTARY PROBLEMS
17. Show that the product of two upper (lower) triangular matrices is upper (lower) triangular.

18. Derive a rule for forming the product BA of an mxn matrix S and ^ = diag(aii,a22 a).
Hint. See Problem 1.

19. Show that the scalar matrix with diagonal element A: can be written as Wand that kA = klA =Aia.g(k,k k) A,
where the order of / is the row order of A

20. If .4 is re-square, show that a'^ a'^ = a'^ A^ where p and q are positive integers.

2-3-5 -13 5
21. (a) Show that A = -14 5 and B = i _3 _5 are idempotent.
1 -3 [-1 3 5
-4J
Using A and B, show that the converse of Problem 4 does not hold.

www.TheSolutionManual.com
(b)

22. If A is idempotent, show that B = I-A is idempotent and that AB = BA = 0.

1 2 2
23. (a) If A 2 1 2 show that .4 4.A - 5! = 0.
2 2 1.

2 1 3
(b) If A 1 -1 2 show that -4 - 2A - 94 = 0. but -4^ - 2/1 - 9/ /
,
0.
1 2 1

-1 -1 -1" 2
1 o"
3
1
24. Show that 1 = 1 -1 -1 = =
1 /.
_ 1 1 -1 -1 -1

1 -2 -6
25. Show that A -3 2 9 is periodic, of period 2.
2 0-3

1 -3 -4
26. Show that -13 4 is nilpotent.
1 -3 -4

'12 3 2 -1 -6
27. Show that (a) A = 3 2 and B = 3 2 9 commute.
-1 ~1 -1 _-l -1 -4_
"112' 2/3 -1/3'
(b) A = 2 3 and B = -3/5
1 2/5 1/5 commute.
-12 4 7/15 -1/5 1/15

28. Show that A anti-commute and (A+ Bf = A^ + B'^

29. Show that each of anti-commutes with the others.

30. Prove: The only matrices which commute with every n-square matrix are the n-square scalar matrices.

31. (a) Find all matrices which commute with diag(l, 2, 3).
(b) Find all matrices which commute with diag(aii,a22 a^^).
Ans. (a) diag(a,fe, c) where a,6, c are arbitrary.
.

SOME TYPES OF MATRICES [CHAP. 2


18

1 2 3 "3-2-1
32. Show that (a) 2 5 7 I is the inverse of -4 1 -1
-2 -4 -5 3 2 1.

10 "
1

(b)
2 10
is the inverse of
-2100
4 2 10 0-2 10
-2311 _ 8 -1 -1 1.

33. Set to find the inverse of ^"'-


[3:][:^E3 [3 4]- [3/2 -1/2]

34. Show that the inverse of a diagonal matrix A, all of whose diagonal elements are different from zero, is a

diagonal matrix whose diagonal elements are the inverses of those of A and in the same order. Thus, the
inverse of / is /

-1

www.TheSolutionManual.com
1 4 3 3"|

35. Show that A = 4-3 4 and B -1 0-1 are involutory.


3-3 4 -
-4 -44-3J

10
36. Let A
10 I2
by partitioning. Show that A
/j.

a b -1 /I21 '2 /s
c d 0-1

37. Prove: (a)(A')'=A, (b) (kA)" = kA', (c) (^^)' = (/l')^ for p a positive integer.

38. Prove: (ABCy C'^B-^A-'^. Hint. Write ABC=iAB)C.

39. Prove: (a) (A-^)'^ = A, (b) (kA)-^ = -j-A'^, (c) (A'^Y^ = (A'^y forp a positive integer.

40. Show that every real symmetric matrix is Hermitian.

41. Prove: (a)(A)=A, (b) (A + B) = A + B ,


(c)(kA)=kA, (d) {AB) = A B.

1 1 + I 2 + 3 i

42. Show: (a) A = 1- J 2 -i is Hermitian,


2- 3 J i

i 1 + j 2 - 3 J

(b) B = -1 + i 2i 1 is skew-Hermitian,
-2-3J -1
(c) iB is Hermitian,

(<f ) A is Hermitian and B is skew-Hermitian.

43. If A is n-square, show that (a) AA' and A' A are symmetric, (6) A-irA', AA', and A'A are Hermitian.

44. Prove: If H is Hermitian and A is any conformable matrix then (A)' HA is Hermitian.

45. Prove: Every Hermitian matrix A can be written as B + iC where B is real and symmetric and C is real and
skew-symmetric.

46. Prove: (a) Every skew-Hermitian matrix A can be written as A = B + iC where B is real and skew-symmetric
and C is real and symmetric. (6) A' A is real if and only if B and C anti-commute.

47. Prove: If A and B commute so also do ^"^ and B' , A' and B\ and A' and B"

48. Show that for m and n positive integers, ^4 and S"^ commute if A and 5 commute.
CHAP. 2] SOME TYPES OF MATRICES 19

n
A 1 A nA 2n('-l)A
A 1 A raA ,ra n-i
49. Show (a) (6) A 1 = A nX
\ A
A A"

50. Prove: If A is symmetric or skew-symmetric then AA'= A'A and / are symmetric.

51. Prove: If 4 is symmetric so also is a-4 +6/1^ +.-.+/ where a, 6 g are scalars and p is a positive
integer.

52. Prove: Every square matrix A can bfe written as /I = B +C where B is Hermitian and C is skew-He rmitian.

53. Prove: If ^ is real and skew-symmetric or if ^ is complex and skew-Hermitian then iA are Hermitian.

54. Show that the theorem of Problem 52 can be stated:


Every square matrix A can be written as A =B+iC where B and C are Hermitian.

www.TheSolutionManual.com
55. Prove: If A and B are such that AB = A and BA = B then (a) B'A'= A' and A'B"= B\ (b) A" and B' sue
idempotent, (c) ^ = B = / if ^ has an inverse.

56. If^ is involutory. show that k(.I+A) and kO-A) are idempotent and j(I+A) ^(I-A) = 0.

57. If the n-square matrix A has an inverse A , show:


(a) (A Y= (A') , (b) (A)^ = A-\ (c) (r)-^=(^-V
Hint, (a) Prom the transpose of AA~^ = I. obtain (A'''-) as the inverse of /!'

58. Find all matrices which commute with (a) diag(i. i, 2, 3), (6) diag(l. 1, 2, 2).
Ans. (a) dia.g(A. b.c). (b) dia.g(A.B) where A and B are 2-square matrices with arbitrary elements and b. c
are scalars.

59. If A^.A^ A^ are scalar matrices of respective orders mj^.m^ m^. find all matrices which commute
with diag(^i, ^2 '^s)
Ans. dia.g(B^.B^ B^) where Si, S2 -85 are of respective orders m-^.m^, m^ with arbitrary elements.

60. If AB = 0, where A and B are non-zero n-square matrices, then A and B are called divisors of zero. Show
that the matrices A and B of Problem 21 are divisors of zero.

61. If A = diae(Ai.A2 A^) and B = di&giB^.B^ B^) where ^^ and B^ are of the same order, (J = 1, 2,
..., s), show that
(a) A+B = diag(^i+Si,^2 + S2 -^s + Bs)
(b) AB = diag(^iBi. /I2S2 A^B^)
(c) trace AB = trace /liB^ + trace /I2S2 + ... + trace A^B^.

62. Prove: If ^ and B are n-square skew-symmetric matrices then AB is symmetric if and only if A and S commute.

63. Prove: If A is n-square and B = rA+sI, where r and s are scalars, then A and B commute.

64. Let A and B he n-square matrices and let ri, rg, si, S2 be scalars such that risj 7^
rssi. Prove that Ci =
ri4+siB, C2 = r^A+SQB commute if and only if A and B commute.

65. Show that the n-square matrix A will not have an inverse when (a) A has a row (column) of zero elements or
(6) /I has two identical rows (columns) or (c) ^ has arow(column)whichis the sum of two other rows(columns).

66. If A and B are n-square matrices and A has an inverse, show that
(A+B)A~'^(A-B) = (A-B)A''^(A+B)
chapter 3

Determinant of a Square Matrix

the 3! = 6 permutations of the integers 1,2,3 taken


together
PERMUTATIONS. Consider
(3.1) 123 132 213 231 312 321

and eight of the 4! = 24 permutations of the integers 1,2,3,4 taken together

1234 2134 3124 4123

www.TheSolutionManual.com
(3.2) 1324 2314 3214 4213

a given permutation a larger integer precedes a smaller one,


If in
we say that there is an
even (odd), the permutation is
inversion. If in a given permutation the number of inversions is
even since there is no inver-
called even (odd). For example, in (3.1) the permutation 123 is
sion, the permutation 132 is odd since in it 3 precedes 2,
the permutation 312 is even since in
precedes and 3 precedes 2. In (3.2) the permutation 4213 is even since in it 4 precedes
it 3 1

2, 4 precedes 1, 4 precedes 3, and 2 precedes 1.

THE DETERMINANT OF A SQUARE MATRIX. Consider the re-square matrix

'11 "12 "IS


Uo 1 Oo p ^03 *

(3.3)

-'ni "ne "n3

and a product

(3.4) ^j, "'^L


1 ~-'2 3Jo
~^3 '^"
"^n

of n of its elements, selected so that one and only one element


comes from any row and one
and only one element comes from any column. In (3.4), as a matter of convenience, the factors
the
have been arranged so that the sequence of first subscripts is the natural order 1,2 re;

second subscripts is then some one of the re! permutations of the inte-
sequence j^, j^ / of
gers 1,2 re. (Facility will be gained if the reader will parallel the work of this section be-
ginning with a product arranged so that the sequence of second subscripts is in natural order.)
For a given permutation /i,/2, ...,^ of the second subscripts, define ^j^j^....j^ = +1 or -1
according as the permutation is even or odd and form the signed product


(3.5) -W-2- k'^k ^^h '^Jn

the determinant of A, denoted by U|, is meant the sum of all the different
signed prod-
By
ucts of the form (3.5), called terms of Ul, which can be formed from the elements of A; thus.

(3.6) le Jlj2- M '^h

where the summation extends over p=n\ permutations hk---Jn of the integers 1,2,,

The determinant of a square matrix of order re is called a determinant of order re.

20
CHAP. 3] DETERMINANT OP A SQUARE MATRIX 21

DETERMINANTS OF ORDER TWO AND THREE. Prom (3.6) we have for n = 2 and n = 3,

-"ll "12
(3.7) ^11^22 21 '^12^^21
12 "'"
11^22 CE-j^2^21
^21 "22

and

'll "12 "13


(3.8) 21 ^ "? P*^ ^123 ail0223S + ^132 Oll02s'^32 + ^213 Oi221<%3
"31 "32 "S3 + 231 %2 02331 + 312 Ol3'%lC'32 + ^321 %3 ^^2 %!
flu 022 %3 - ^11023032 - 012^1033

%l(22 033 - 02S32) " "12(021 "ss " ^S^Sl) + ^IS (a2lOS2 - 022^31)

022 ^3 O21 053 21 <^2

www.TheSolutionManual.com
+ Oi
"^32 ^3 "31 <^33 '^Sl ^^32

Example 1.

1 21
(a) 1-4 - 2-3 6 = -2
3 4

2 -11
(fc) 2-0 - (-1)3 + 3
3

2 3 5
10 1 1 11 1 01
(O 1 1 = 2 + 5
11 2 2 11
2 1

2(0-0 -1-1) - 3(1-0 -1-2) + 5(1-1-0-2) = 2(-l) - 3(-2) + 5(1)

2 -3 -4
(d) 1 0-2 = 2{0(-6)-(-2)(-5)! - (-3){l(-6)- (-2)0! + (-4) {l(-5) - O-o!
-5 -6
-20 18 + 20 -18

See Problem 1.

PROPERTIES OF DETERMINANTS. Throughout this section, A is the square matrix whose determi-
nant Ul is given by (3.6).
Suppose that every element of the sth row (every element of the/th column) is zero.
Since
every term of (3.6) contains one element from this row (column), every
term in the sum is zero
and we have
I. If every element of a row (column) of a square matrix A
is zero, then U| =0.
Consider the transpose A' of A. It can be seen readily that every term of
(3.6) can be ob-
tained from A' by choosing properly the factors in order from the
first, second, ... columns. Thus,
II. If 4 is a square matrix then
U'l = \A\; that is, for every theorem concerning the rows
of a determinant there is a corresponding theorem concerning
the columns and vice versa.
Denote by B the matrix obtained by multiplying each of the elements of
the ith row of A by
a scalar k. Since each term in the expansion of |5| contains one and only one
element from its
fth row, that is, one and only one element having A; as a factor,

\B\ = k 1 \: , , a. , n^- n
*%'2---i'iii2J2----anj^!
\ . = Jc\A\
P
Thus,
22 DETERMINANT OF A SQUARE MATRIX [CHAP. 3

III.every element of a row (column) of a determinant U| is multiplied by a scalar k, the


If

determinant is multiplied by k; if every element of a row (column) of a determinant \A\ has k as


a factor then k may be factored from \A\ . For example,
a7']^]^ ltd AQ CI
^

O\ /l' t* QO

OrQ 1 f (X'

Let S denote the matrix obtained from A by interchanging its ith and (i+l)st rows. Each
product in (3.6) of \A\ is a product of |s|, and vice versa; hence, except possibly for signs,

(3.6) is the expansion of \b\. In counting the inversions in subscripts of any term of (3.6) as a
term of \b\, i before i+1 in the row subscripts is an inversion; thus, each product of (3.6) with
its sign changed is a term of |s| and \e\ = - \A\. Hence,
IV. If B is obtained from A by interchanging any two adjacent rows (columns), then \b\ =

www.TheSolutionManual.com
- \a\.

As a consequence of Theorem IV, we have


V. If B is obtained from A by interchanging any two of its rows (columns), then \b\ = -\A\.

VI. If B is obtained from A by carrying its ith row (column) over p rows (columns), then
|s| = (-i)^UI.

VII. If two rows (columns) of A are identical, then \A\ = .

Suppose that each element of the first row of A is expressed as a binomial %,


^li
= b-^: + c-^j,

(j = 1,2, ...,n). Then

I ^Jd2 in ^^^ii ^ "^i^ "2 J23 J5 ""in

Os-,- Os,
p^kk^ Jn ^^k'^k'^k-- "njn^ "Jn

h,. ii2 ^13 - ^m 'In

21 O22 "23 02n


+

Oni n2 "na '^nn -^n2 '-"ns

In general,

VIII. If every element of the ith row (column) of A is the sum of p terms, then \A\ can
be expressed as the sum of p determinants. The elements in the ith rows (columns) of these
p determinants are respectively the first, second, ..., pth terms of the sums and all other rows
(columns) are those of A.

The most useful theorem is

IX. If B is obtained from A by adding to the elements of its ith row (column), a scalar mul-
tiple of the corresponding elements of another row (column), then \b\ = \a\. For example.

Cfrii T"/CCEog ^22 Clnr

Clf,', {- Karin. Clf=n Clr^r, Cfg ^ + H'Qri^ ^Q'2 "^ i^Clr Zgg + ka^^
See Problems 2-7.

FIRST MINORS AND COFACTORS. Let A be the re-square matrix (3.3) whose determinant \A\ is given
by (3.6). When from A the elements of its ith row and /th column are removed, the determinant
of the remaining (re- l)-square matrix is called a first minor of A or of \a\ and denoted by \M^j\.
CHAP. 3] DETERMINANT OF A SQUARE MATRIX 23

More frequently, it is called the minor of Oy. The signed minor, (-if'*'^ \Mij\ is called the
cofactor of a^-
and is denoted by a;
"V
11 12 ''13

Example 2. If A "21 '^SS "23

"'31 ^^32 ''SS

22 "23 "21 "23 21 22


M 11 ki2 kic
"32 "33 "31 "33 "31 "32
and
1+1 I 1 + 21 I

(-1) kll 1^11 1 , ai2 (-1) 1^12! =

14-31 I

(-1) ki3 \m.

Then (3.8) is

www.TheSolutionManual.com
Ml = aiilA/iil - O12IM12I + aislMigl
= "11*11 + ai20ii2 + "130^13

In Problem 9, we prove
X. The value
of the determinant U|, where A is the matrix of (3.3), is the sum of the prod-
ucts obtained by multiplying each element of a row (column) of
U| by its cofactor, i.e..
n
(3.9) Ul = aiiflii + a^g a^g + + ^in * in ^ aik'kk
n
(3.10) Ml = "^if^if + 021*2/ + + a^jd^j (ij, = 1,2 n)
fe?i"fei*j

Using Theorem VII, we can prove


XI. The sum of the products formed by multiplying the elements of a row (column) of an
ra-square matrix A by the corresponding cofactors of another row (column) of A is
zero.

Example 3. If A is the matrix of Example 2, we have

and
"310^31 + "320^32 + "330^33 = Ul
"12*12 + "22*22 + "32*32 = I
'4 I

while

"31*21 + "32*22 + "SS*23 =


and
"12*13 + "22*23 + "32*33 =

See Problems 10-11.

MINORS AND ALGEBRAIC COMPLEMENTS. Consider the matrix (3.3). Let


h, i^ i^ arranged in ,

order of magnitude, be m, (1 < m < ra), of the row indices


1,2 n and let j^, j^ /^ arrang-
ed in order of magnitude, be m of the column indices. Let the
remaining row and cofumn indi-
ces, arranged in order of magnitude, be respectively i^,^,, i^ and /+;+2 ^ . Such
a separation of the row and column indices determines uniquely
two matrices

a.- ,
,

'l'J'2 '^i.
Jm

.Jl.j2.- im i2.ii ''2. 72 i2 :h


(3.11) h i
^n

_ "^m Ji %J2
% ^n _
24 DETERMINANT OF A SQUARE MATRIX [CHAP. 3

and

''m+i'J'm+i '-m + i'Jffl+s n.7n

.7ra + i'Jm+2' Jn ''m + S'Jm + i ^m+2>7m+2 ^ra+2>Jn


(3.12)

^^Jm-i-i ^n'j7ii+2 ^ri'Jn

called sub-matrices of ^.

The determinant of each of these sub-matrices is called a minor of A and the pair of minors

J1.J2. Jm J'm-'-i'Jn^-Q Jn
and
^m+l' '-m-^2 ''r

www.TheSolutionManual.com
are called complementary minors of A, each being the complement of the other.

Examples. For the 5 -square matrix A [iij.

"I'Z %4 '^15
1,3 '^21 '^23 2,4,5
I

i
'^2,5
I

'
~ and '^l, 3.4- "32 ^^34 '^SS
I

I
f^Sl 63 I
"42 "44 ^^45

are a pair of complementary minors.

Let
(3.13) U + In + + in + h + h + + h
and
(3.14) q - i-n +1 + fm+2 + + *n + ^ra + i + /m+2 + " + In

p Ji, is. J-

The signed minor (-1) A,- , , is called the algebraic complement of

Jm-'-i'Jn-i-2 Jn

7(4-1. 'to+2 ^n
J+i'Jm,+2 7n
and (-l)'^ /I. . is called the algebraic complement of
''m,+ l>^m+2> ''-
J1.72 Jm

H>h ^n

2+-5 + 14-31 13 ,1.3


Exainple4. For the minors of Examples. (-1) I Ap' g I
= - I
/i^ g I
is the algebraic complement

,2 4 5 ,
l(-3 + 4-^2l-446 I .2,4,5 2 4 5 1

the algebraic complement of


I
I I

Of I
A.{3\i I
and (-1) I
A-^ 34 I
= -I ^ijg',4 I
is

1.3
^^is I
Note that the sign given to the two complementary minors is the same. Is this

always true?

.Ji Ji
When m = l, (3.11) becomes A and an element of A. The
[%ii] "HJi
J2.J3. Jn
complementary minor is Ml i in the notation of the section above, and the
I

algebraic complement is the cofactor OL-Lj.

A minor of A, whose diagonal elements are also diagonal elements of A, is called a principal
minor of A. The complement of a principal minor of A is also a principal minor of A; the alge-

braic complement of a principal minor is its complement.


CHAP. 3] DETERMINANT OP A SQUARE MATRIX 25

Examples. For the 5-square matrix A


H'
"?2 "24 (^25
"11 "15
^1.3 and I
A, ,t 5 !
= 0^2 044 045
31 S3
52 "54 055
are a pair of complementary principal minors of A What is the algebraic complement of each ? .

The terms minor, complementary minor, algebraic complement, and principal minor as de-
fined above for a square matrix A will also be used without change in connection
with U |

See Problems 12-13.

SOLVED PROBLEMS

www.TheSolutionManual.com
1- (") ! !l = 2-4 - 3(-l) = 11
1-1 4|

1 2
4 5l 3 5 3 4
(b) 3 4 5 (1) - (l)(4-7 - 5-6) - + 2(3-6 - 4-5)
6 71 5 7 5 6
5 6 7 -2-4 = -6
1 6

(c) 3 4 15 = 1(4-21 - 11 -18


5 6 21

id) 2 3 5 = 1(3-3 - 5-1)

4 1 3

2. Adding to the elements of the first column the corresponding elements of the other columns,
-4 1 1 1 1 1 1 1

1 -4 1 1 -4 1 1 1

1 1 -4 1 1 1 -4 1 1

1 1 -4 1 1 1 -4 1

1 1 1 -4 1 1 1 -4
by Theorem I.

3. Adding the second column to the third, removing the common factor from this third column and
using Theorem Vn
1 a h + c 1 a a+b+ c 1 a 1

1 b c +a 1 b a+b+c (a + b + c) 1 b 1

1 c a+h 1 c a+b+c 1 c 1

4. Adding to the third row the first and second rows, then removing the common factor
2; subtracting
the second row from the third; subtracting the third row from the first;
subtracting 'the first row
from the second; finally carrying the third row over the other rows
26 DETERMINANT OF A SQUARE MATRIX [CHAP. 3

Oj^+fej a^+b^ ag+63

6^+C-L 62+ ^2 ^3+^3 i^+Ci 62 + ^2 &3+^3 bi + c^ b^+c^ b^+Cr

Cj+a^ C2 + a2 C3+a3 a5^+5]^+Cj^ a2+fe2+"^2 "s+^s+'^s

bi 62 63 b-i 62 ^3 a^^ 02 flg

6^ + C-L 62+^2 ^3+^3 Cl C2 Cg bi ^2 63

a^ 02 '^3 a-, Oo 03 C]^ C2 Cg

Oi % 1

5. Without expanding, show that Ul 02 ^2 1 -(Oj^ - 02) (02 ~ '^s) ('^s - %)


Og Og 1

Subtract the second row from the first; then

www.TheSolutionManual.com
l-2 fl-j^ ^2 Oi + a2 1

2 2
02 ar, 1 (a^ _ 02) a2 O2 t by Theorem III

2 2
Og Og 1 Og ag 1

and oj^-oj is a factor of U|. Similarly, a^-a^ and ag-a^ are factors. Now M| is of order three in the

letters; hence.

(i) \a\ = k(a^^-a^)(a^-as)(as-a^)

The product of the diagonal elements. 0^02. is a term of \a\ and, from (i), the term is -ka^a^. Thus,
A: = -l and \a\ = -{a^-a^){a^-a2){as-a^). Note that U vanishes if and only if two of the a^, og. os are |

equal.

6. Prove: If A is skew-symmetric and of odd order 2p- 1, then Ml =

Since A is skew-symmetric, -4'= -A; then U I


= \-A (-if^-^-UI - Ul- But, by Theorem II,

Ul = Ul; hence, Ut = -U! and Ul = 0.

7. Prove: If A is Hermitian, then \A\ is a real number.

Since A is Hermitian. A = A' , and \a\ = U'| = U| by Theorem II. But if

Ul =
I ^iij2...j"yi% %; = '^ + *'

then Ul = |eii;2---Jn^iii"2i?--^"in = " " ^'

Now Ul = Ul requires 6 = 0; hence. U| is a real number.

1 2 3
8. For the matrix A = 2 3 2
12 2

2 2 1+3 2 3
(-1) 1+2
1

a,, = (-ir'\l ?1 = 2, ai2 = -2, Mi3 = (-ly


12 2 |i 2 11 2I

2 3 2+2I 1 3 2+3 1
1 2
,2+1 -1, a,23
(-ly a22 = (-1) = (-1)
2 2 1 2 1 2

3+ 2 3 1 3 3+3I 1 2
(-1) 3+2I a 33
1

(-ly 1 age = (-1)


3 2 2 2I 2 31
CHAP. 3] DETERMINANT OF A SQUARE MATRIX 27

Note that the signs given to the minors of the elements in forming the cofactors follow the pattern

+ - +
- + -
+ - +
where each sign occupies the same position in the display as the element, whose cofactor is required, oc-
cupies in ^, Write the display of signs for a 5-square matrix.

9. Prove: The value of the determinant U| of an re-square matrix A is the sum of the
products obtained
by multiplying each element of a row (column) of A by its cofactor.
We shall prove this for a row. The terms of (3.6) having a^^ as a factor are

Now e. ^i"''^ i" ^ P"'^tin the is in natural order. Then


kh---Jn~ ^jbjb- -in lJi..fe. -in. 1 (a) may

www.TheSolutionManual.com
be written as

(6)

where the summation extends over the cr= (n-i)! permutations of the integers
2,3 n, and hence, as

022 2S <2n

(c) "an
"ii "11 1
'twill

^n2 "ns

Consider the matrix B obtained from A by moving its sth column over the first s-l columns.
By Theorem
VI. \B\ = (-1) U|. Moreover, the element standing in the first row and first column of B is
a^s and the
minor of a^^ in B is precisely the minor \M^\ of
a^s in A. By the argument leading to (c), the terms of
ais mis\ are all the terms of \b\ having a^s as a factor and, thus, all the
terms of (-1)^"^ having a.^ U| as
a factor. Then the terms of ais!(-if M^/isli are all the terms of \a\ having a^s as a factor Thus

(3.15)

+ + ai5!(-lf||/,3|! + ... + ai !(-!)'*" M,

"11*11 + ai2i2 + + ain*in

,s+i
since (-1) = (-1) We have (3.9) with = We
J i shall call (3.15) the expansion of
. .

U| along its first

^ "' ''^ '^ ^^^^^ '^' ^^-^^ '' ' = '^ '^ '''^i"^'J by f^P^^ting the above argu-
m.r,JWlTr\T
B be the
ments. Let J'*' ^iT^
matrix obtained from A by moving its rth row over the first r-1
rows and then its .th col-
umn over the first s-l columns. Then

T~l s-l
(-1) (-!) \a (-1) u
The element standing in the row and the
the minor of a.rs in
first first column of 5 is a and the minor of a^^ in
^^ B is yreciseiy
i precisely
^ Thus, . the terms of

are all the terms of U | having a^^ as a factor. Then

r+fel
,l,-rkU-l) M.rk\ 2 a^j^a.
rk^rk
k=i
and we have (3.9) for j = r.
.

28 DETERMINANT OF A SQUARE MATRIX [CHAP. 3

10. When oLij is the cofactor of aij in the ra-square matrix A = [a^j] , show that

'll "12 %,f-i "'1 %,j4-i ^'i.n

(h,j-i 02 *2n
i +
^

(i) fC^GC^-j "T fCi^OL^j "T + itj^dj^j

'^i ^2 "W,j-i <%, 7 + 1


'^n

This relation follows from (3.10) by replacing a^j with k^, 027 with k^ 0^7 with Atj, In making these ,

^2J
replacements none of the cofactors OLij.QL^j. (t^j appearing is affected since none contains an element ,

from the / th column of A

By Theorem VII, the determinant in {i) is when A^= a^^, (r = 1,2 n and s ^ j). By Theorems Vin,
and VII, the determinant in (i) is I
4 |
when Ay + fea^g, (r = 1,2 n and s ^ /).
\7

Write the eauality similar to (0 obtained from (3.9) when the elements of the ith row of A are replaced
by /ci,/c2.

www.TheSolutionManual.com
1 2 3 4 5 28 25 38
11. Evaluate: (a) \
A 3 04 (c) 1 2 3 (e) 42 38 65
2 -5 1 -2 5 -4 56 47 83

1 4 8 2 3-4
(b) -2 1 5 ((f) 5-6 3
-3 2 4 4 2-3

(a) Expanding along the second column (see Theorem X)

1 2

3 4 Cl-^^fX^Q "f"
022^22 ^ ^32 32 ai2 + a22 + (-5)o;g2

2 -5 1

1 2
34.2I -10
-5(-l)-' I
5(4-6)
3 4l

(b) Subtracting twice the second column from the third (see Theorem IX)

1 4 8 1 4 8-2-4 1 4
1 4
-2 15 2 1 5-2-1 = -2 1 3 3(-l)''
-3 2
-3 2 4 3 2 4-2-2 -3 2

-3(14) -42

(c) Subtracting three times the second row from the first and adding twice the second row to the third

3 4 5 3-3(1) 4-3(2) 5-3(3) -2 -4


-2 -4
1 2 3 1 2 3 = 1 2 3
9 2
-2 5 -4 -2 + 2(1) 5 + 2(2) -4+ 2(3) 9 2

(-4 + 36) -32

(d) Subtracting the first column from the second and then proceeding as in (c)

2 3-4 2 1 -4 2-2(1) 1 -4 + 4(1)


u 5-6 3 = 5-11 3 5-2(-ll) -11 3 + 4(-ll)

4 2-3 4 -2 -3 4-2(-2) -2 -3 + 4(-2)

1
27 -41
= 27 -11 -41 -31
-11
8 -2 - 11
CHAP. 3] DETERMINANT OF A SQUARE MATRIX 29

(e) Factoring 14 from the first column, then using TheoremIX to reduce the elements in the remaining columns

28 25 38 2 25 38 2 25-12(2) 38-20(2)
42 38 65 14 3 38 65 14 3 38-12(3) 65-20(3)
56 47 83 4 47 83 4 47-12(4) 83-20(4)

2 1 -2 1
-1 9
4 3 2 5 14 - -12 9 -14 -14(-l-54) 770
4-13 6 -1 1
6 1

12. Show that p and q, given by (3.13) and (3.14), are either both even or both
odd.

Since each row (column) index is found in either p or 9 but never in both,

P + q = (1 + 2+-+7J) + (l + 2+-"+n) = 2-kn{n + l) = n(n +1)

Now p+q is even (either n or n + 1 is even); hence, p and q are either both even or both odd. Thus,

www.TheSolutionManual.com
(-1) = (-1)^ and only one need be computed.

12 3 4 5
6 7 8 9 10
13. For the matrix A [..] 11 12 13 14 15 the algebraic complement of | Ao's is
16 17 18 19 20
.21 22 23 24 25

, ,^2+3+2+4| .1,3,51
13 5
(-1) Ml,4,5l 16 18 20 (see Problem 12)
21 23 25

and the algebraic complement of | '4^'4^'5| is -


I12 141

SUPPLEMENTARY PROBLEMS
14. Show that the permutation 12534 of the integers 1, 2, 3. 4, 5 is even, 24135 is odd, 41532 is even, 53142 is
odd, and 52314 is even.

15. List the complete set of permutations of 1, 2,3,4, taken together; show that half are even and half are odd.

16. Let the elements of the diagonal of a 5-square matrix A be a.b.cd.e. Show, using (3.6), that when ^ is
diagonal, upper triangular, or lower triangular then \a\ = abcde.

17. Given -4
[j J]
= and B = [^ ^^^^ ^^^^ AB^BA^ A'b 4 AB' ^ a'b' ^ B'a' but that the determinant of
6J
each product is 4.

18. Evaluate, as in Problem 1,

2 -1 1 2 2-2 2 3
(a) 3 2 4 = 27 12
(b) 3 = 4 (c) -2 4 =
-1 3 2 3 4 -3 -4
30 DETERMINANT OF A SQUARE MATRIX [CHAP. 3

1 2 10
19. (a) Evaluate \A 2 3 9

4 5 11

(b) Denote by B | | the determinant obtained from j


-4 |
by multiplying the elements of its second column by 5.
Evaluate \B \
to verify Theorem III.

(c) Denote by |
C |
the determinant obtained from |
.4 |
by interchanging its first and third columns. Evaluate
I
C I
to verify Theorem V.
1 2 7 1 2 3
(d) Show that I
A 2 3 5 2 3 4 thus verifying Theorem VIII.
4 5 8 4 5 3

1 2 7
(e) Obtain from \A \
the determinant |o| = 2 3 3 by subtracting three times the elements of the first
4 5-1
column from the corresponding elements of the third column. Evaluate j D j to verify Theorem IX.

(/) In U
subtract twice the first row from the second and four times the first row from the third.

www.TheSolutionManual.com
I
Evaluate
the resulting determinant.

(g) In I
/I I
multiply the first column by three and from it subtract the third column. Evaluate to show that
\A I
has been tripled. Compare with (e). Do not confuse (e) and (g).

20. If -4 is an n-square matrix and 4 is a scalar, use (3.6) to show that U^ |


= /t^M |.

21. Prove: (a) If U |


= k. then \a\ = k = \a'\.
(b) If A is skew-Hermitian, then |
-4 |
is either real or is a pure imaginary number.

22. (a) Count the number of interchanges of adjacent rows (columns) necessary to obtain 6 from A in Theorem V
and thus prove the theorem.
(b) Same, for Theorem VI.

23. Prove Theorem VII. Hint: Interchange the identical rows and use Theorem V.

24. Prove: If any two rows (columns) of a square matrix A are proportional, then |
,4 |
= o.

25. Use Theorems VIII, III, and VII to prove Theorem IX.

26. Evaluate the determinants of Problem 18 as in Problem 11.

a b
c d
27. Use (3.6) to evaluate \A\ =
e /
; then check that \a\ = ".P/. Thus, if A = diag(A-^. A^). where
g h
A^, A^ are 2-square matrices, | A U4i|

-1/3 -2/3 -2/3


28. Show that the cofactor of each element of 2/3 1/3 -2/3 is that element.

2/3 -2/3 1/3

-4 -3 -3
29. Show that the cofactor of an element of any row of 1 1 is the corresponding element of the same
4 4 3
numbered column.

30. Prove: (a) If A is symmetric then <Xij= 0^j^ when i 4 j

(b) If A is n-square and skew-symmetric then aij = (-1)


"P when 4
Ot--^ i
j
CHAP. 3] DETERMINANT OF A SQUARE MATRIX 31

31. For the matrix A of Problem 8 ;

(a) show that j


^ |
= 1

(b) form the matrix C 12 (^^22 32 and show that AC = [.


_'^13 ^23 Otgg
(c) explain why the result in (6) is known as soon as (a) is known.

be a a^
32. Multiply the columns of 6^ ca 52 respectively by a,b.c ; remove the common factor from each of
c^ c2 ab
be ab ca
the rows to show that A ab ca be
ca be ab

a^ a bed a^ a

www.TheSolutionManual.com
1 a-^ 1

6^ 6 1 aed
33. Without evaluating show that
(a - b)(a - c)(a - d)(b - c)(i - d)(e - d).
e^ e I abd c^ c'^ e 1

d^ d 1 abc d d^ d 1

1 1 ... 1 1 1 ... 1 1

1 1...1 1 1 ... 1 1

34. Show that the ra-square determinant


1 1 0...1 1 1 ... 1 1 n-1
a\ - ("-!) (-1) (n~l).
\

1 1 1 ... 1

1 1 1 ...0 1 1 1 ... 1 1

re-i n-2
a^ 1
ra-i n-2
ar, 1
35. Prove: = S(i - 2)(i - as). (ai-a^H(a2- 03X02-04)... (02 -a^j 7i-i - "ni
n-i ra-2
o 1

na^+b-^ na^+b^ na^+bg O]^ 02 O3


36. Without expanding, show that nb^ + c^
nb-j^+e^ nb^+Cg (n + l)(rf--n + 1) 61 62 is
nc-i^+a.^ nCQ+a^ ncQ + a^ Ci Cg Cg

X a xb
37. Without expanding, show that the equation x-i-a x-c has as a root.
x+b x+c

+6 a a
a a+ b a
38. Prove ,n-i
b {na + b).

a a a +6
chapter 4

Evaluation of Determinants

PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapters. In
Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element 1
or 1 if the given determinant contains no such element, (b) to replace an element of a given
determinant with 0.

For determinants of higher orders, the general procedure is to replace, by repeated use of

www.TheSolutionManual.com
Theorem IX, Chapters, the given determinant U! by another |b| -= \bij\ having the property
that all elements, except one, in some row (column) are zero. If &>, is this non-zero element
'Pq
and ^p^ is its cofactor.

S = bpq dp^ i-ir minor of b.


'pq
^Pq

Then the minor of bp is treated in similar fashion and the process is continued until a determi-
nant of order two or three is obtained.

Example 1.

2 3-2 4 2 + 2(3) 3 + 2(-2) -2 + 2(1) 4 + 2(2) 8-10 8

3-2 12 3 -2 1 2 3-2 1 2

3 2 3 4 3-3(3) 2-3(-2) 3-3(1) 4-3(2) -6 8 0-2


-2405 -2 4 5 -2405
8-1 8 8 + 8(-l) -1 8 + 8(-l) 0-10
= (-1)" -6 8 -2 -6 + 8(8) 8 -2 + 8(8) 58 8 62
-2 4 5 -2 + 8(4) 4 5 + 8(4) 30 4 37

'^
-(-l)- = (-l)P' -286
30 37
See Problems 1-3

For determinants having elements of the type in Example 2 below, the following variation
may be used: divide the first row by one of its non-zero elements and proceed to obtain zero
elements in a row or column.

Example 2.

0.921 0.185 0.476 0.614 1 0.201 0.517 0.667 1 0.201 0.517 0.667
0.782 0.157 0.527 0.138 0.782 0.157 0.527 0.138 0.123 -0.384
0.921 0.921
0.872 0.484 0.637 0.799 0.872 0.484 0.637 0.799 0.309 0.196 0.217
0.312 0.555 0.841 0.448 0.312 0.555 0.841 0.448 0.492 0.680 0.240

0.123 -0.384 -0.320 1

0.921 0.309 0.196 0.217 0.921(-0.384) 0.309 0.196 0.217


0.492 0.680 0.240 0.492 0.680 0.240

1
0.309 0.265
0.921(-0.384) 0.309 0.265 0.217 0.921(-0.384)
0.492 0.757
0.492 0.757 0.240

0.921(-0.384)(0.104) = -0.037

32
I 2

CHAP. 4] EVALUATION OF DETERMINANTS 33

THE LAPLACE EXPANSION. The expansion of a determinant U |


of order n along a row (column) Is
a special case of the Laplace expansion. Instead of selecting one row of \a\, let m rows num-
bered I'l, J2 i^ , when arranged in order of magnitude, be selected. Prom these m rows
n(n- l)...(ra - m + l)
P minors
1-2 .... m
can be formed by making all possible selections of m columns from the n columns. Using these
minors and their algebraic complements, we have the Laplace expansion

> Jrr. , Jn+i' jm+2 Jn


(4.1) |(-1)^ i. . i. .

^m4-i'% + 2 ^n

where s = i^ + i^+... + i^ +A + /2 + +h and the summation extends over the/? selections of the
column indices taken m at a time.

Example 3.

2 3-2 4

www.TheSolutionManual.com
3-2 12
Evaluate |
A , using minors of the first two rows.
3 2 3 4

-2405
Prom (4.1),

1+2+1+21 1 1 + 2+1 + SI
U (-1) U1.2 U-5aI
^3,4 1
+ (-1)" "Mi',|-U
1,2!
,1,31
3,4!
2,41

_l + 2 + l + 4| 1,4 1 + 2 + 2 + 31 .2,31
+ (-1) + (-1) I ^1,:

,
,,1+2 + 2+4. .2,4 .13 1 + 2+3+4 .34
+ (-1) Mi'jl-Us'.
I

+ (-ly *l,2r 1^3, 4I

2 3 3 4 2 -2 2 4 2 4 2 3
+
3 -2 5 3 1 4 5 3 2 4

3 -2 3 4 3 4 3 3 -2 4
+ - 3 2
-2

_2 _2 +

1 5 2 -2 D 1 2 -2 4

(-13)(15) - (8)(-6) + (-8)(-12) + (-1)(23) - (14)(6) + (-8)(16)

-286
See Problems 4-6

DETERMINANT OF A PRODUCT. If A and B are n-square matrices, then

(4.2) Us I
= Ul-lsl
See Problem 7

EXPANSION ALONG THE FIRST ROW AND COLUMN. If ^ = [aid is n-square, then

n n ii
(4.3) ^11 ^1 IJ IJ
i=2 j,-=2

Where a^ is the cofactor of o^ and a^^-is the algebraic complement of the minor "ii^iil
of^.
ii Otj I

DERIVATIVE OF A DETERMINANT. Let the ra-square matrix A = [a^,-] have as elements differen-
^tj.
tiable functions of a variable x. Then
J

34 EVALUATION OF DETERMINANTS [CHAP. 4

I. The \A\, of \A\ with respect to x is the sum of n determinants obtained


derivative,
dx
by replacing in all possible ways the elements of one row (column) of by their derivatives U I

with respect to x.

Example 4.

x^ x^\ 3 2x 1 x2 x + 1 3 x+ 1 3

dx 1 2x-\ x"" = 1 2x-\ x^ + 2 3^^ + 1 2x-i x-"

X -2 X -2 X -2

5 + 4x - 12x^ 6x
See Problem 8

www.TheSolutionManual.com
SOLVED PROBLEMS
2 3 -2 4 2 3 -2 4 2 3-2 4
7 4 -3 10 2(2) 4-2(3) -3-2(-2) 10-2(4) 3 -2 1 2
1. -286 (See Example 1)
3 2 3 4 3 2 3 4 3 2 3 4
-2 4 5 2 4 5 -2 4 5

There are, of course, many other ways of obtaining an element +1 or -1; for example, subtract the first
column from the second, the fourth column from the second, the first row from the second, etc.

1 -1 2 1 -1 + 1 2-2(1) 10
2 3 2 -2 2 3 2+2 -2-2(2) 2 3 4-6
2 4 2 1 2 4 2 +2 1-2(2) 2 4 4-3
3 1 5 -3 3 1 5 + 3 -3-2(3) 3 18-9
3

4
1
4
4-3
8
-6

-9
=
3-2(4)

1-3(4)
4
4-2(4)

8-3(4)
4-3 -6-2(-3)

-9-3(-3)
-5
4
-11 -4
-4
4-3

-5 -4
3 = -72
-11 -4

1 + J 1+ 2j
3. Evaluate \A\ 1 - i 2-3i
1- 2i 2 + : i

Multiply the second row by l + j and the third row by l + 2J; then

1 +i 1+2j l +j 1+2; 1 +J 1 + 2J

(l+J)(l+2j)Ml = (-1+3j)|-^| 2 5-J 2 5-J 8- 14j 25 - 5j

5 -4 + 7i 1 -4 + 7j -10 + 2J 1 -4 + 7J -10 + 2i

1+i l + 2j
I -6 + 18
- 14i 25 - 5j

and ,4
CHAP. 4] EVALUATION OF DETERMINANTS 35

4. Derive the Laplace expansion of \A\ = \aij\ of order n, using minors of order m<n
ji'h Jrn
Consider the m-square minor A; of 1^1 in which the row and column indices are arranged in
H-''^ %\
order of magnitude. Now by i; -
interchanges of adjacent rows of |^ the row numbered ii can be brought
1 |
,

into the first row. by i^- 2 interchanges of adjacent rows the tow numbered is can be brought into the second
row by % -m interchanges of adjacent rows the row numbered % can be brought into the mth row. Thus.
+ (ig- + + (ijj -m)
after (I'l- 1) 2) 11 + ^2+ + 'm-2'"('"+l) interchanges of adjacent rows the rows
numbered I'l, i^ i occupy the position of the first m rows. Similarly, after /i + j^ + + /^ - \m(m+l)
interchanges of adjacent columns, the columns numbered /i,/2 /jj occupy the position of the first m col-
umns. As a result of the interchanges of adjacent rows and adjacent columns, the minor selected above oc-
cupies the upper left corner and its complement occupies the lower right comer of the determinant; moreover.
I
A has changed sign
I
cr = j- + i + + + /i + % /2 + + /-"('" +1) times which is equivalent to
[1 + ^2+ + in + /i + ii + + in changes. Thus

Ji.J2' Jm Jm+i'Jn-i-2 Jn
A- yields m!(ra -m)! terms of (-1) \a\ or
H'''^ '-m 'm-H'''m +2-

www.TheSolutionManual.com
Ji~h- Jn Jn-H' Jm+2' ' Jn
(a) (-ir yields w!(n- m)! terms of \a\.
"TOJ-l' n+2'

n(n~l)...(n-m + l)
Let I'l, 12 in be held fixed. Prom these rows
different m- square p
l'2....m m\{n m)\
minors may be selected. Each of these minors when multiplied by its algebraic complement yields m!(/j-m)'
terms of U|. Since, by their formation, there are no duplicate terms of
U| among these products.
Jn Jm-1' Jvn Jn
S(-i/ I
, I'm

where s = i^ + i^+ + i^ +j^ + j^ + + in and the summation extends over the p different selections
/i, 72 in of the column indices.

12 3 4
5. Evaluate A
2 12 1 using minors of the first two columns.
11
3 4 12

1 21 |1 1 1 21 2 ll 2 11 13 4
(-1)^ + (-ir
2 2
+ (-If
1 I ll 3 41 1 ll 3 41 ll 1

(-3)(1) + (-2)(1) (5)(-l)

6. li A,B, and C are re-square matrices, prove

A
B
C B
Prom the first n rows of |P| only one non-zero n-square minor, U|, can be formed. Its algebraic com-
plement is |s|. Hence, by the Laplace expansion, |p| = |.4|.|b|.

7. Prove \AB\ = U |

|
S
Suppose A = [a^j] and B = [bij] are n -square. Let C = [c^j] = AB so that c
V ^Hkhj- i^rom
Problem 6
36 EVALUATION OF DETERMINANTS [CHAP. 4

ail 012
"m

021 022 2n

"ni n2 "nn

-1 6ii 612 ..
hi

-1 621 ^22
.. ?,2

-1

To the (n+l)st column of \P\ add fen times the first column, 621 times the second column 6i times
the nth column; we have

On 012
"m Cii

www.TheSolutionManual.com
"21 "22 2ra C21

"ni "n2 "nn Cm

-1 612
h,
-1 622 .. 62

Next, to the (n + 2)nd column of |P| add fei2 times the first column, 622 times the second column,
times the nth column. We have

in Cii C12

2n C21 C22

"nn Cm Cn2

^13 ^m
*23 b^n

A C
Continuing this process, we obtain finally \P\ . Prom the last n rows of |
P |
only one non-

zero n-square minor, 1-/| = (-l)'^ can be formed. Its algebraic complement is (_i)i+24--"+ra+(n+ii+-"+2n|(-|

= (-lf'2"^^'|c|. Hence, \p\ = (-i)Vlf" ^"^"lc| = \c\ and \c\ = \ab\ = U1.|b|.

Oil %2 %S
8. Let A = 021 '%2 '^a where Oif = aij(x), (i, j = 1,2,3), are differentiable functions of x. Then
031 032 033

%l"22*33 + 122Sa31 + %s"S2"21 " (^ix"^"^ " 0:l202lOs3 " (hs^^O'Sl

and, denoting -ra;: by a,',-,


dx ^^ ^J
CHAP. 4] EVALUATION OF DETERMINANTS 37

'^ll"22''S3 + 22ll"33 + ''33"ll"22 + "l2"23"31 + 023012131 + O31O12O23


dx
+ %3032021 + O32OISO2I + 021013032 ~ Oj^jCggOgg >i23'^n''S2 OgjOj^jOgg

f f f f / /
a^OQiOg^ 0210-12033 Og^a^a^^ air^a^^a^.^ 02205^303^ - Ogj^o^gOjg

Oil 0^11 + o-L20^i2 "*"


OisOi^g + a^-^dj^x + 022(^22 + 023(^23 "^ Og-LCXgi + 032(^32 + 033(^33

oil 012 Ol3 11 O12 Ol3 '11 "12 "13

021 022 023 + 21 O22 O23 + i2i "22 "23

031 032 033 31 O32 O33 '31 "32 "33

by Problem 10, Chapter 3.

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS
9. Evaluate:

3 5 7 2 1 -2 -4
3
2 4 11 2 -1 4 -3
-304
(o) 156
-2000 ic)
2 3 -4 -5
113 4 3 -4 5 6

1 -2 3 -2 -2
1116 2 -1 1 3 2
{b)
2 4 16 = 41 (d) 1 1 2 1 1 118
4 12 9 -4 -3 -2 -5
1
2 4 2 7
3 -2 2 2 -2

10. If A Is n-square, show that \A^A \


is real and non-negative.

11. Evaluate the determinant of Problem 9(o) using minors from the first two rows; also using minors from the
first two columns.

12. (o) Let aJ"' "^ and B = r ''


'^

Use |^B|
4B|
[^-02 ojj

= |.4|"|B|
i.4|-|B| to show
1^-62

that
bjj

(oi
(<
0000
+ 02)(6i+62) = (0161-0262)
Q
+ (0261+0163)
O
.

+ t03 a2+iaA 61 + 163 62 + 164


(6) Let A tOj^ I and B =
-Og + ja^ Oi-iogJ [-62 + 864 6i-j6gJ
Use \AB inii.
\ts\ to
222222,2,2
express (01 + 02+03 + 04X61+62+63+64) as a sum of four squares.

2 1

3 2 1
13. Evaluate using minors from the first three rows. Ans. -720
4 3 2 1

5 4 3 2 1

6 5 4 3 2 1
38 EVALUATION OF DETERMINANTS [CHAP. 4

112 12 1

111
14. Evaluate 110 using minors from the first two columns. .4ns. 2

112
12 2 11

15. If ^1,^2 ^^s


^"^^ square matrices, use the Laplace expansion to prove

|diag(4i,.42 As)\ = Uil-U,! ....


U
a^ a^ Og a^

*1 *2 ^3 *4
16. Expand using minors of the first two rows and show that
a-^ a^ flg a^

*1 ^2 ^3 *4

www.TheSolutionManual.com
a^ a2

K 62 60 6.. 62 63

A
17. Use the Laplace expansion to show that the n-square determinant , where is fe-square, is zero when
B C
A > 2"-

18. In \A\ = aiiOiii + ai2ai2 + OisO^is + ai4i4. expand each of the cofactors a^g. OL^a. tti* along its first col-
umn to show
4 4 ti
^11^11 ~ .-^ .^ ^il^lj^lj
11 "IJ l^lj
1=2 J=2

where ffi,- is the algebraic complement of the minor Of U


"ii "ij

19. If a^j denotes the cofactor of a^j in the n-square matrix A = [a^,-], show that the bordered determinant

"11 "12

ire Pi ?i 92
?n
"21 22

"2ra P2 Pi "11 '^12 " "m
.^
t=i j=i
X
Pilj
'J
Cti
^V
"ni n2

"nra Pre Pn "ni "712 "nn
li 92
In

Hint. Use (4 .3).

20. For each of the determinants |.4| . find the derivative.

X I 2 a: -1 x-l 1

() (b) x^ + 4 3
2*: 1 ;<: (c) a; a: 2a: +5
2x Zx + l
3-2 x^+l x+l x^

-4ns. (a) 2a: + 9a:^- 8a;=^ , (6) 1 - 6a: + 21a:^ + 12a;^ - 15a:*, (c) 6a:^ - 5*"^ - 28x^ + 9a:^ + 20a; - 2

21. Prove : If A and B are real n-square matrices with A non-singular and if ff = 4 + iS is Hermitian, then
. .

chapter 5

Equivalence

THE RANK OF A MATRIX. A non-zero matrix A is said to have rank r if at least one of its r-square
minors is different from zero while every (r+l)-square minor, if any, is zero. A zero matrix is
said to have rank 0.

2 3"
'l
1 2
Example 1. The rank of A 2 3 4 is r= 2 since
2 3
-1^0 while U = 0.

3 5 7

www.TheSolutionManual.com
See Problem 1.

An re-square matrix A is called non-singular if its rank r=n, that is, if \


A ^ 0. Otherwise,
A is called singular. The matrix of Example 1 is singular.

Prom I
AB\ A\-\B\ follows

I. The product of two or more non-singular re-square matrices is non-singular; the prod-
uct of two or more re-square matrices is singular if at least one of the matrices is singular.

ELEMENTARY TRANSFORMATIONS. The following operations, called elementary transformations,


on a matrix do not change either its order or its rank:

(1) The interchange of the ith and /th rows, denoted by Hij;
The interchange of the ith and /th columns, denoted by K^j

(2) The multiplication of every element of the ith row by a non-zero scalar k, denoted by H^(k);
The multiplication of every element of the ith column by a non-zero scalar k, denoted by Ki(k).

(3) The addition to theelements of the sth row of k, a scalar, times the corresponding elements
of the /th row, denoted by Hij(k) ;

The addition to the elements of the ith column of k, a scalar, times the corresponding ele-
ments of the /th column, denoted by K^j(k)

The transformations H are called elementary row transfonnations; the transformations K are
called elementary column transformations.

The elementary transformations, being precisely those performed on the rows (columns) of a
determinant, need no elaboration. It an elementary transformation cannot alter the
is clear that
order of a matrix. In Problem 2, it is shown that an elementary transformation does not alter its
rank.

THE INVERSE OF AN ELEMENTARY TRANSFORMATION. The inverse of an elementary transforma-


tion is an operation which undoes the effect of the elementary transformation; that is, after A
has been subjected to one of the elementary transformations and then the resulting matrix has
been subjected to the inverse of that elementary transformation, the final result is the matrix A.

39
1

40 EQUIVALENCE [CHAP. 5

1 2 3
Example 2. Let A 4 5 6
7 8 9
3"
"l 2
The effect of the elementary row transformation H2i(-2) is to produce B 2 10
.7 8 9
The effect of the elementary row transformation ff2i(+ 2) on B is to produce A again
Thus, ff2i(-2) and H2x(+2) are inverse elementary row transformations.

The inverse elementary transformations are

-1
"ij = % ^ij

(2') Hi\k) = H^(i/k)

(3') H--(k) = H^A-k) K^jik) = Kij(-k)

www.TheSolutionManual.com
We have
II. The inverse of an elementary transformation is an elementary transformation of the
same type.

EQUIVALENT MATRICES. Two matrices A and B are called equivalent, A'^B, if one can be obtained
from the other by a sequence of elementary transformations.

Equivalent matrices have the same order and the same rank.

Examples. Applying in turn the elementary transformations W2i(-2), ^siCD. Ws2(-1).



1 2 -1 4 12-14 1 2 -1 4 12-1 4
2 4 3 5 5-3 5 -3 '^
5-3
-1 -2 6 -7 -1 -2 6 -7 5 -3

Since all 3-square minors of B are zero while I 1 t^ 0, the rank of S is 2 ; hence,
I 5 3
the rank of ^ is 2. This procedure of obtaining from A an eauivalent matrix B from which the
rank is evident by inspection is to be compared with that of computing the various minors of -4.

See Problem 3.

ROW EQUIVALENCE. If a matrix A is reduced to B by the use of elementary row transformations a-


lone, B is said to be row equivalent to A and conversely. The matrices A and B of Example 3
are row equivalent.

Any non-zero matrix A of rank r is row equivalent to a canonical matrix C in which

(a) one or more elements of each of the first r rows are non-zero while all other rows have
only zero elements.

(b) in the ith row, (i =1,2, ...,r), the first non-zero element is 1; let the column in which
this element stands be numbered ;'-
.
't

(C) ]\ < /2 < < j^.

(d) the only non-zero element in the column numbered j^, (i =1,2 r), is the element 1 of
the ith row.
CHAP. 5] EQUIVALENCE 41

To reduce A to C, suppose /i is the number of the first non-zero column of A.

(ii) If aij
171
7^ 0, use //i(l/oi,-
iji ) to reduce it to 1, when necessary.
(is) If a; J = but o^
^ 0, use ffij, and proceed as in (i^).
Vi ^^7

(ii) Use row transformations of type (3) with appropriate multiples of the first row to obtain
zeroes elsewhere in the /^st column.

If non-zero elements of the resulting matrix B occur only in the first row, B = C. Other-
wise, suppose 72 is the number of the first column in which this does not occur. If &2j ^ 0,
use ^2(1/^2^2) as in (ii); if but bqj^ f 0, use H^^ and proceed as in (ii). Then, as
&2J2=
in (il), clear the /gnd column of all other non-zero elements.

If non-zero elements of the resulting matrix occur only in the first two rows, we have C.
Otherwise, the procedure is repeated until C is reached.

Example 4. The sequence of row transformations ff2i(-2), ffgiCD ; 2(l/5) ; //i2(l). //ssC-S) applied

www.TheSolutionManual.com
to A of Example 3 yields

1 2 -1 4 1 2 -1 4 1 2 -1 4 1 2 17/5
^\j '\^
2 4 3 5 5 -3 1 -3/5 '%^
1 -3/5
1 -2 6 -7 5 -3 5 -3

having the properties (a)-(rf).

See Problem 4.

THE NORMAL FORM OF A MATRIX. By means of elementary transformations any matrix A of rank
r > can be reduced to one of the forms

(5.1)

A
/.
\l% "'"'
M
called its normal form. zero matrix is its own normal form.

Since both row and column transformations may be used here, the element 1 of the first row
obtained in the section above can be moved into the first column. Then both the first row and
firstcolumn can be cleared of other non-zero elements. Similarly, the element 1 of the second
row can be brought into the second column, and so on.

For example, the sequence ff2i(-2), ^31(1). ^2i(-2), Ksi(l), X4i(-4). K23, K^{\/%),

/^32(-l), ^42(3) applied to 4 of Example 3 yields I ^ , the normal form.

See Problem 5.

ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transforma-
tion is applied to the identity matrix /^ is called an elementaryrow (column) matrix. Here, an
elementary matrix will be denoted by the symbol introduced to denote the elementary transforma-
tion which produces the matrix.
0'
1
Example 5. Examples of elementary matrices obtained from /g 1

1_

"0 0' '1 0" 0'


1 'l

1 = K, Ha(k) 1 K-sik). H^g(k) :


1 k K^^yk)
_0 1_ k_ 1_
42 EQUIVALENCE [CHAP. 5

Every elementary matrix is non-singular. (Why?)


The effect of applying an elementary transformation to an mxn matrix A can be produced by
multiplying A by an elementary matrix.

To effect a given elementary row transformation on A of order mxn, apply the transformation
to Ijn to form the corresponding elementary matrix H and multiply A on the left by H.

To effect a given elementary column transformation on A, apply the transformation to / to


form the corresponding elementary matrix K and multiply A on the right by K.
~1 3~ 3~ "7 9"
2 "O f 'l 2 8
Example 6. When A = 4 5 6 , H^^-A = 1 4 5 6 = 4 5 6 interchanges the first and third
_7 8 9_ 1 0_ 7 8 9_ _1 2 3_

1 2
3'
"l o"
"723'
rows of A ; 4/^13(2) = 4 5 6 10 = 16 5 6 adds to
t( the first column of A two times

J 8 9 _2 1_ _25 39J
the third column.

www.TheSolutionManual.com
LET A AND B BE EQUIVALENT MATRICES. Let the elementary row and column matrices corre-
sponding to the elementary row and column transformations which reduce /I to 5 be designated
as //i./Zg ^s< J^\,T^Q. '^t
where //^ is the first row transformation, //g is the second, ...;
K^ is the first column transformation, K^ is the second Then

(5.2) //. IU-H,.A K-i-K^ K, PAQ =

where
(5.3) Ih H^-H^ and

We have
III. Two matrices A and B are equivalent if and only if there exist non-singular matrices
P and Q defined in (5.3) such that PAQ = B.

"1 2 -1 2~|

Example 7. When A 2 5-23, ^3i(-l) //2i(-2) -^ ^2i(-2) -Ksid) .K4i(-2) -K^sd) .Ks(i)
_1 2 1
2J

1-200 ~1 1 0~ 1
2" "1000" "1000"
["100"! r 1 o"j

0-2 10 10 10 10 1 10
1 1
10 10 1 10 5
[j^i ij L iJ
1 _0 1 1_ _0 1_ _0 1_

1-25-4
o"l
10 1
[1
= PAQ 10
[:;=} 2
1 oj
1

Since any matrix is equivalent to its normal form, we have

IV. If ^ is an re-square non-singular matrix, there exist non -singular matrices P and Q
as defined in (5.3) such that PAQ = 1^ .

See Problem 6.
CHAP. 5] EQUIVALENCE 43

INVERSE OF A PRODUCT OF ELEMENTARY MATRICES. Let

P = H^...H^-H^ and Q = K-^-K^-.-Kt


as in (5.3). Since each H and K has an inverse and since the inverse of a product is the product
in reverse order of the inverses of the factors

P~^ = H;\hI\..H^^ Q''


(5.4) and = Kt...Kt-Kt-
Let A be an re-square non-singular matrix and P and Q defined above be such
let that PAQ
= / . Then

(5.5) A = P'\PAQ)Q^ = P'^-k-Q^ = P'^-Q'^

We have proved

V. Every non-singular matrix can be expressed as a product of elementary matrices.

www.TheSolutionManual.com
See Problem 7.
From this follow

VI. If A is non-singular, the rank of AB (also of BA) is that of B.

VII. If P and Q are non-singular, the rank of PAQ is that of A.

CANONICAL SETS UNDER EQUIVALENCE. In Problem 8, we prove


VIII. Two mxn matrices A and B are equivalent if and only if they have the same rank.
A set of my.n matrices is called a canonical set under equivalence if every mx-n matrix is
equivalent to one and only one matrix of the set. Such a canonical set
is given by (5.1) as r
ranges over the values 1,2 m or 1,2. ...,re whichever is the smaller.
See Problem 9.

RANK OF A PRODUCT. Let A be an mxp matrix of rank r. By Theorem III there exist non-singular
matrices P and Q such that

PAQ = N = P''
]

Then 4 = P NQ . Let S be a pxre matrix and consider the rank of

(5.6) AB = P~'NQ'^B
By Theorem AB is that of NQ'^B. Now the rows of NQ~'b consist of the firstr
VI, the rank of
rows of Q B and
m-r rows of zeroes. Hence, the rank of AB cannot exceed r
the rank of A
Similarly, the rank of AB cannot exceed that of S. We
have proved
IX. The rank of the product of two matrices cannot exceed
the rank of either factor.

suppose iS = then from (5.6). NQ-'b = 0. This requires that the


:
first r rows of Q'^B
be zeroes while the remaining rows may
be arbitrary. Thus, the rank of Q-'b and, hence
the
rank of B cannot exceed p-r. We have proved

X. If the mxp matrix A is of rank r and the pxn matrix


if B is such that AB = the
rank of B cannot exceed p-r.
44 EQUIVALENCE [CHAP. 5

SOLVED PROBLEMS
1 2 3] 1 2
1. (a) The rank of A is 2 since ^ and there are no minors of order three.
-4 sj -4

1 2 3
2 3
(b) The rank of A 12 5 is 2 since | ^ j
= and ^0.
2 5
2 4 8
"0 3"
2
(c) The rank of A 4 6 is 1 since |
i |
= 0, each of the nine 2-square minors is 0, but nov
_0 6 9_

every element is

Show that the elementary transformations do not alter the rank of a matrix.

We shall consider only row transformations here and leave consideration of the column transformations

www.TheSolutionManual.com
as an exercise. Let the rank of the mxn matrix ,4 be r so that every (r+l)-square minor of A, it any, is zero.
Let B be the matrix obtained from .4 by a row transformation. Denote by \R\ any (r+l)-square minor of A and
by Is] the (r+l)-squaie minor of B having the same position as \R\ .

Let the row transformation be H^j Its effect on |/?| is either (i) to leave
. it unchanged, (ii) to interchange
two of its rows, or (lii) to interchange one of its rows with a row not of \R\ . In the case (i), \S\ = \r\ =0;
in the case (ii), \S\ = -\r\ = ; in the case (iii), \s\ is, except possibly for sign, another (r+l)-square minor
of l^l and, hence, is 0.

Let the row transformation be Hi(k). Its effect on \R\ is either (1) to leave it unchanged or (ii) to multi-
ply one of its rows by A:. Then, respectively, |S| = |/?| = o or |S| = ;i:|/?| = o.

Let the row transformation be Hij(k). Its effect on |/?| is either (i) to leave it unchanged, (ii) to increase
one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of S|.
|

In the cases (i) and (ii), |S|=|ft| = 0; in the case (iii), \s\ = /?| + A: (another (r+i)-square minor of /I) = |

0 k-0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot
lower the rank tor, if it did, the inverse transformation would have to raise it. Hence, an elementary row
transformation does not alter the rank of a matrix.

For each of the matrices A obtain an equivalent matrix B and from it, by inspection, determine the
rank of A.
"1 3" "1 3~ '1 3"
2 1 2 3 2 2
"^
(a) A = 2 1 3
'-^-/
-3 -3 1 1
-""J
1 1

_3 2 1_ -4 -8 _0 1 2_ _0 1_

The transformations used were //2i(-2). ffsi(-3); H^(-l/3), HQ(-l/i); Hg^i-l). The rank is 3.

"1 0" 0" "1 0'


2 3 1 2 3 2 3 2 3 0" 1 2 3
2 4 3 2 -3 2 -4 -8 3 4 -8 3 r^ -4 -8 3
(b) A ''V. ''\j '~^
S. The rank is 3.
3 2 1 3 4 -8 3 -3 2 -3 2 -3 2

_6 8 7 5_ -44 -11 5_ p -4 -11 5_ -3 2

1+i i 1 1
'~^
(c) A i + 2j i 1 + 2j
'-Vy
i 1 + 2J = B. The rank is 2.
1 + 2j l+i_ _1
i 1 + 2J_

Note. The equivalent matrices 5 obtained here are not unique. In particular, since in (o) and (i)
only
row transformations were used, the reader may obtain others by using only column
transformations.
When the elements are rational numbers, there generally is no gain in mixing row and column
transformations.
CHAP. 5] EQUIVALENCE 45

4. Obtain the canonical matrix C row equivalent to each of the given matrices A.

13 113 113 2 10 4
12 6 12 6 13-2 13-2
(a) A = '->-'

2 3 9 2 3 9 13-2
113 13 13 -2_ GOOD
1 2 -2 3 f 1 2 -2 3 l" 1 -2 3 3" '10 3 7~
10 1

(b) A =
1 3 -2 3 --v^
1 -1 '^ 1 -1 ^\y
10 -1 --O
10 0-1
2 4 -3 6 4 1 2 1 2 10 2 10 2
.1 1 -1 4 6_ p -1 1 1 5_ 1 1 4_ pool 2_ 1 2

5. Reduce each of the following to normal form.

1 2 -1 1 2 -1 "l o"| fl o" 1 'l o' 'l o'

www.TheSolutionManual.com
(a) A 3 4 1 2
^\y
-2 1 5 0-2 1 5 p- 1-2 5 1-2 10 'V.
10
2 3 2 5 7 2 3 7 2 sj Lo 2 7 3_ 11 P U -7_ p 1 0_

= Us o]

The elementary transformations are:

^2i(-3). 3i{2); K2i(-2), K4i(l); Kgg; Hg^-^y. ^32(2). /f42(~5); /fsd/ll), ^43(7)

[0234"! P 354 1 3 5 "1000" "1000' "1000"


(6) A 23 5 4^-0 2 3 4
'-^
2 3 'Xy
13 4
^\j
10 <^
10
4 8 13 I2J 8 13 12 2 8 13 2 3 4 13
|_4 i

4. p 1 0_ p 0_

The elementary transformations are:

^12: Ki(2); H3i(-2); KsiC-S), X3i(-5), K4i(-4); KsCi); /fs2(-3), K42(-4); ftjgC- 1)

1 2 3-2
6. Reduce A 2-2 1 3 to normal form A' and compute the matrices P^ and (^^ such that P-i_AQ^ = A^.
3 04 1
Since A is 3x4, we shall work with the array Each row transformation is performed on a row of
A U
seven elements and each column transformation is performed on a column of seven elements.

1 -3 2 1 -2 -3 2
10 1

1 1 1

1 1
12 3 1 2 3-2 1 1
2-2 1 10 -6 -5 7 1 -6 -5 7 -2 -5 7 -2
3 4 1 -6 -5 7 1 -6 -5 7 -3 0-1-1
1 1/3 -3 2 1 1/3 -4/3 -1/3
-1/6 0-1/6 -5/6 7/6
10
1
10
or
1

1-57-210
10 1

N Pi
1 0-210
0-1-11 0-1-1 1
46 EQUIVALENCE [CHAP. 5

1 1/3 -4/3 -1/3


1
-1/6 -5/6
10
7/6
Thus, Pi = -2
-1 -1
1

1
10 and PiAQ-i = 10 N.

1 3 3
7. Express A = 1 4 3 as a product of elementary matrices.
.1 3 4_

The elementary transformations //2i(-l). ^siC-D; ^2i(-3). 'f3i(-3) reduce A to 4 , that is, [see (5.2)]

/ = H^-H^-A-K^-K^ = ff3i(-l)-^2i(-l)-'4-X2i(-3)-?f3:i.(-3)

fl 0"1 fl 0"j fl 3"| fl 3 O'

Prom (5.5), Hl-H'^-K^-Kt = 110 010 010 010

www.TheSolutionManual.com
_0 ij [l ij [p ij _0 1_

8. Prove: Two mxn matrices A and B are equivalent if and only if they have the same rank.
If A and B have same rank, both are equivalent to the same matrix (5.1) and are equivalent
the to each
other. Conversely, ^ and B are equivalent, there exist non-singular matrices P and Q such that B
if = PAQ.
By Theorem VII, A and B have the same rank.

9. A canonical set for non-zero matrices of order 3 is

[i:l-[:i:] [!:][::]

nm
A canonical set tor non-zero 3x4 matrices is

Vh o]
[:=:] &:]

10. If from a square matrix A of order n and rank r^, a submatrix B consisting of s rows (columns) of A
is selected, the rank r^ of B is equal to or greater than r^ + s - n.

The normal form of A has n-rj^ rows whose elements are zeroes and the normal form of 6 has s-rg rows
whose elements are zeroes. Clearly

'A ^
from which follows r > r + s - n as required.
B A
. .

CHAP. 5] EQUIVALENCE 47

SUPPLEMENTARY PROBLEMS
4 6
2 3 2
2 1 2 12-23 5
5
6 7
7
8

11. Find the rank of (a) (b)


3 2 2
(c)
2 5-46 (d)
3 5 1 6 7 8 9
4 3 4 -1 -3 2 -2
3 4 5 10 11 12 13 14
7 4 6 2 4-16 15 16 17 18 19

Ans. (a) 2, (b) 3, (c) 4. (d) 2

12. Show by considering minors that A, A'. A. and .4' have the same rank.

13. Show that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.

14. Find the canonical matrix row equivalent to each of the following:

12 3 4 1 1/9" 1 1 10
Tl 2-3"]^ri 0-7]

www.TheSolutionManual.com
(a) (b) 3 4 12 'X^
10 1/9 (c) 2 1 10
[2 5 -4j [o 1 2j
_4 3 1 2 1 11/9 3 -3 12
1 2 10 1

3 2 10-1 1 -1 1 1 1 10 0-12
(d) 2 -1 1 1 (e)
1 -1 2 3 1
^V/
10 1

2 -2 1 2 1 2
5 6
1 1 1 -3 3
1 3

15. Write the normal form of each of the matrices of Problem 14.

Ans. (a) [I, 0], (b).(c) [/g o] (d) P^ (e) P' jl


j]

12 3 4
16. Let A = 2 3 4 1

3 4 12
(a) From /a form Z/^^. /^O). //i3(-4) and check that each HA effects the corresponding row transformation.
(6) Prom U form K^^. Ks(-l). K^^O) and show that each AK effects the corresponding column transformation.
(c) Write the inverses H^l, H^ {?,), H^lc-i) of the elementary matrices of (a). Check that for each H,H-H~^^l
(d) Write the inverses K^l. ifg^C-l).
K^lo) of the elementary matricesof (6) . Check that for each K. KK~^ = I
"0 0'
3 "0 1 4"|
(e) Compute B = ^12 ft,(3) -//isC- 4) 1 -4 and C = H^^(-4:)-H^(3)-Hi 1/3 0.
1 ij
(/) Show that BC ^ CB = I .

17. (a) Show that /?',-= H-. . K-(k) = H^(k). and K^,(/t) = H^-(k)
(b) Show that if /? is a product of elementary column matrices. R'is the product in reverse order of the same
elementary row matrices.

18. Prove: (a) AB and BA are non-singular if .4 and B are non-singularra -square matrices.
(b) AB and BA are singular if at least one of the n-square matrices A and B is singular.

19. UP and Q are non-singular, show that A,PA,AQ, and PAQ have the same rank.
Hint. Express P and Q as products of elementary matrices.

13 6-1
20. Reduce B 14 5 1 to normal form /V and compute the matrices P^ and Qr, such that P^BQ^ = N
15 4 3
.

48 EQUIVALENCE [CHAP. 5

21. (a) Show that the number of matrices in a canonical set of n-square matrices under equivalence is n+l.
(6) Show that thenumber of matrices in a canonical set of mxn matrices under equivalence is the smaller of
m+l and n+1.

12 4 4
22. Given A 13 2 6 of rank 2. Find a 4-square matrix S 7^ such that AB = 0.

2 5 6 10
Hint. Follow the proof of Theorem X and take

Q-'b
abed
_e / g A_

where a.b h are arbitrary.

23. The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. Find P and Q such that B - PAQ.

www.TheSolutionManual.com
24. If the mxn matrices A and B are of rank rj and rg respectively, show that the rank of A+B cannot exceed

25. Let ^ be an arbitrary n-square matrix and S be an n-square elementary matrix. By considering each of the
six different types of matrix S, show that \AB\ = |^| |fi|

26. Let A and B be n-square matrices, (a) If at least one is singular show that \AB\ = |/4|-|s| ; (6) If both are
non-singular, use (5.5) and Problem 25 to show that \AB\ = \a\-\B\ .

27. Show that equivalence of matrices is an equivalence relation.

28. Prove: The row equivalent canonical form of a non-singular matrix A is I and conversely.

29. Prove: Not every matrix A can be reduced to normal form by row transformations alone.
Hint. Exhibit a matrix which cannot be so reduced.

30. Show how to effect on any matrix A the transformation H^: by using a succession of row transformations of
types (2) and (3).

31. Prove: If .4 is an mxn matrix, (m ^n), of rank m, then AA' is a non-singular symmetric matrix. State the
theorem when the rank of A is < m.
chapter 6

The Adjoint of a Square Matrix

THE ADJOINT. Let A = be an n-square matrix and be the cofactor of a-; then by definition
[a^ ]
y oij-
y

A = 12 ^22
(6.1) adjoint adj ^

www.TheSolutionManual.com
'^ire '^sn

Note carefully that the cofactors of the elements of the ith row (column) of A are the elements
of the jth column (row) of adj A.

1 2 3
Example i. For the matrix A 2 3 2

3 3 4

11= 6, ai2 = -2. Cl-lS = -3. fflsi = 1, 0122 = -5, a^g = 3, ffai = -5, Otgg = 4, agg = -1

*
6 1-5
1

and adj A -2 -5 4
-3 3 -1
See Problems 1-2.

Using Theorems X and XI of Chapter 3, we find

%1 %2 "In
2 1 ^^2 0271
(6.2) i(adj A)

'^ ni CE 7

= diag(M), 1^1 1^1) A-L (adj A) A

Examples. For the matrix ^ of Example 1, U| = -7 and

1 2 3 6 1 -5I f-T 0'


/I (adj /I) = 2 3 2 -2 -55 4 = 0-7 -7/
3 3 4 -3 3-3 -ij [_
-7

By taking determinants in (6.2), we have

(6.3) U|. I
adj /I I
= |, adj ^ I
. U
There follow

I. If A is re-square and non-singular, then

(6.4) I
adj 4 I

49
50 THE ADJOINT OF A SQUARE MATRIX [CHAP. 6

II. If A is re-square and singular, then

A(a,diA) = (ad}A)A =

If A is of rank < ra-l, then adj A = 0. If i is of rank ra-1, then adj A is of rank 1.

See Problem 3-

THE ADJOINT OF A PRODUCT. In Problem 4, we prove

III. If A and B are re-square matrices,

(6.5) adj AB = adj 6 adj A

www.TheSolutionManual.com
MINOR OF AN ADJOINT. In Problem 6, we prove

^ii.i2 i
IV. Let be -square minor of the re-square matrix A = [o,-,-].
^l'''-2 % tjj

7m + i' 7m+2'
Jn complement in A, and
let be its
^m+i' ^ra+2'

Ji'J2 Jn
let denote
dei the m-square minor of adj A whose elements occupy the
%. ^2 %
Ji' J2 Jm
same position in adj A as those of occupy in A.

Then
Ji' J2 Jm Jvi + i' Jm + 2 Jn
(6.6) M: (-D^'i^l'

where s = j\ + S2 + + %+ /i + 72 + + Jt,

If in (6.6) , A is non-singular, then

J2< ' Jm Jm+i' Jm + 2 Jn


uaH-i
,?i.
s
(6.7) M (-1)
I

i-
H, %, ^M+1' 'm+2. . ^n

When m = 2, (6.7) becomes


Jg>J4- Jn
(6.8) (_l)V*2+Jl-'j2m
H'J2 ^2'J2

.J1.J2
\A\ algebraic complement of

When m = n-l, (6.7) becomes


Jl'j2 Jre-1
(6.9) M, (-1) Ul a.

When m = n, (6.7) becomes (6.4).


CHAP. 6] THE ADJOINT OF A SQUARE MATRIX 51

SOLVED PROBLEMS
a h '^ll 0^21 d -b
1. The adjoint of i4 = IS I _ I

c d c a

3 4 2 3 2 3
4 3 4 3l 3 4
-7 6 -1
2. The adjoint of A = 1 4 1 3 1 3
IS -1
[;] 1 3 1 3 1 4
1
1

-2 1
1 3 1 2 1 2
1 4 1 4 1 3

Prove: A

www.TheSolutionManual.com
3. If is of order n and rank n-\, then adj A is of rank 1.

First we note that, since A is of rank n-\. there is at least one


non-zero cofactor and the rank of adj A
IS at least one. By Theorem X, Chapter 5, the rank of adj^ is at most n-{n-l) =
i. Hence, the rank is
exactly one.

4. Prove: adj ^S = adjS adj y4 .

^y(6.2) ^Badj^S = \ab\-I = {a.aiAB)AB


Since ^S-adjS-adj^ = (S adj S ) adj =
.4 /I ^(l^l-Oadj/i = jslc^adj/^) = |b|.U|./ = \AB\-I
and (adjB-adj^)4fi adjS {(adj^)^lfi =
= adjS-U|./.B = U|{(adjB)Si = \
Ab\ .
I

we conclude that adj ^S = adj S


adj /I

5. Show that adj(adji) = U| -A, if \a\ ?^ 0.

By (6.2) and (6.4),

adj ^ adj (adj ^) = diag(| adj ^|, |


adj/l| |adj^|)
1^-1
= diagd^l Ul
1

,
Ul )

Then

adj (adj ^) = UT'^-^


adj (adj ^) = \aT' -A
and adj (adj ^) = UT'^-^

Prove:
Ji' h in
6. Let A; ; ^^ ^" m-square minor of the re-square matrix
'"n I
A = [a:-]
tj-

Jm+i. Jw4-2 in
let be its complement in A, and
^m+i. ^m+2 ^n

^^^ ^*^e the m-square minor of adj


nH.i2 in\ /I whose elements occupy the same

J1.J2 is
position in adjv4 as those of occupy in A.
^H,i^, ...,i Then
52 THE ADJOINT OF A SQUARE MATRIX [CHAP. 6

Ji. J/2' > Jm Jra + i' Jii ' Jn


u H' ^2- ' ^m
(-ifur ^i i

where s = Ji + f2+ + % +/1 + 72+ + 7m

Prom

a,- - a,- 1- a,- , 1


a,- , a,, ; '00


h-j9 ^^2 > ,/2

"^ra.ii "^mJ's "im.im ! "^raJm+i " "^m^n


tti, 1 *i j ai j 1

a,- , a,- , a,- , 1 1


Wl'A ''m+iJs '
''m+iJm 1 '^m+iJm+i ''m+i-Jn

www.TheSolutionManual.com
a,- - a,- a,- 1
"W2
, , 1

"ht.ji in.i '


"V.im4.i "traJn

Ul
h.Jm^i.

\a\
''i2Jn


Ul "^M'Jn^i " "^-Jn

"%-i-i'Jn-^i "V+i'in


^ri'^m+i "in.in

by taking determinants of both sides, we have

Ji. Js' H\ Jm+1' Jm-t2 Jn


(-ifUI- M % ^2 ''w
4.
^m+l' ''m+2 %
where s is as defined in the theorem. Prom this, the required form follows immediately.

7. Prove: If 4 is a skew-symmetric of order 2ra, then \A\ is the square of a polynomial in the elements
of i.

By its definition. \
a\ is a polynomial in its elements; we are to show that under the conditions given
above this polynomial is a perfect square.

oi
when A = r
1 ,1 2
The theorem is true for n = l since, \ \, \A\ = a .

\-a Oj

Assume now that the theorem is true when n = k and consider the skew-symmetric matrix A = [a^j] of

'
E== '^2k^..2n^\
order2A: + 2. By partitioning, write A = P^l
\ \ where r
\
-zri^i.-zii^-zy
Then S is skew-sym-
"

CHAP. 6] THE ADJOINT OP A SQUARE MATRIX 53

metric of order 2k and, by assumption, \b\ = f where /is a polynomial in the elements of B.
If a^j denotes the cofactor of a^j in we have by Problem
/I, 6, Chapter 3, and (6.8)

*2fe+l, 2fe+l 0^2^+2. 2fe+l I


0^2^+2, 2fe+l

f''2fe+l, 2fe+2 0'2fe4-2, 2fe+2 ^2^+1, 2/!e+2 I

Moreover, Otsfe+s.sfe+i = -fflgfe+i^sfe+s I hence,

1^1/ - Ll2/fe+l. 2fe+2 and

a perfect square.

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS
8. Compute the adjoint of:

2"
"l 2 3~ ~1 5
2 S' 'l 2*

(o) 1 2 (b) 1 2 (c) 2 1 (rf)


110 2

_0 0_ 2 1
_0 1_ _3 2 1_
10 1

1 -2 2 -4
1 1 1 4 -2
Ans. (a) 0-2 (i) -2 2 6 -16
1 (c) -2 -5 4 (rf)

_0 1_ 1 1 -2 1
10 3-5
-2 10

9. Verify:
(a) The adjoint of a scalar matrix is a scalar matrix.
(b) The adjoint of a diagonal matrix is a diagonal
matrix.
(c) The adjoint of a triangular matrix is a triangular
matrix.

10. Write a matrix /I of order 3 such that


7^
adj^ =0.

11. If ^ is a 2-square matrix, show that adj(adj/4) = A.

-1 -2 -2 -4 -3 -3
12. Show that the adjoint of ^ = 2 1 -2 is 3^' and the adjoint of A = 1 1 is A itself.
2-2 1
4 4 3

13. Prove: If an n-square matrix A is of rank <n-\. then adj^ = 0.

14. Prove: If ^ is symmetric, so also is adj-4.

15. Prove: If ^ is Hermitian, so also is adj/1.

16. Prove: If A is skew-symmetric of order , then adj^ is symmetric or skew-symmetric according as . is


odd
54 THE ADJOINT OF A SQUARE MATRIX [CHAP. 6

17. Is there a theorem similar to that of Problem 16 for skew-Hermitian matrices?

18. For the elementary matrices, show that


-1
(a) adj Hij = -Hij
-1
(6) adj H^ (k) = diag(l/A:. l/k 1/k, 1, l/k l/k), where the element 1 stands in the ith row

(c) adj Hij


Hjk(k) = Hij(k), with similar results for the K's.
HJ

19. Prove: If A is an n-squaie matrix of rank n or n-l and if H^...H^ -H-^ -A K-^-K,2---^t = ^ where \ is

L or , then

-1 -1 -1 -1 -1 -1
adj A = adj Xi adj K^ adj K^ adj A adj H^ adj H^ adj H^

20. Use the method of Problem 19 to compute the adjoint of

www.TheSolutionManual.com
1110
2 3 3 2
(o) A of Problem 7 Chapter 5 (b)
12
,

3 2
4 6 7 4

-14 2-2 2
7 -3 -3
14 -2 2 -2
Ans. (a) -1 1 (b)
-1 1
-7 1-1 1

21. Let A = [a^--] and B = [^k-a^A be 3-square matrices. If S{C) = sum of elements of matrix C, show that

S(adj-4) = S(adjB) and \b\ = k S(adj,4) - Ui

22. Prove: If 4 is n-square then I adj (adj ^) I = UI .

23. Let A^ =\.%j\ ('/ = 1. 2, ...,n) be the lower triangular matrix whose triangle is the Pascal triangle; for
example,
10
110
12 10
13 3 1

Define bij
"V
= (-1)^ ^ai,
Hj and verify for n =2,3,4 that

(i) adJ-4 = [6.^,-]


tjj

24. Let B be obtained from A by deleting its ith and pth rows and /th and gth columns. Show that

^ij ^pj

iq pq
where o!^- is the cofactor of a^j in \A\
chapter 7

The Inverse of a Matrix

IF A AND B are n-square matrices such that AB = BA =1, B is called the inverse of A, (B=A~^) and
A is called the inverse of B, (A = B''^).

In Problem l, we prove

www.TheSolutionManual.com
I. An ra-square matrix A has an Inverse if and only if it is non-singular.

The inverse of a non-singular n-square matrix is unique.


(See Problem?, Chapter2.)
II. If A is non-singular, then AB = AC implies B = C.

THE INVERSE of a non-singular diagonal matrix diag (i k, A) is the diagonal matrix

diag(i/k^,l/kr,. ..., 1A)

^^ :^i' -^2 4 are non-singular matrices, then the inverse


of the direct sum diag(/} 4
diag(Al^. A^ A~s)

Procedures for computing the inverse of a general


non-singular matrix are given below.

INVERSE FROM THE ADJOINT. From A adj i = U|


(6.2) .
/. if ^ is non-singular

n/M| a^^/\A\ ... ai/U|


i2/U| a22/!.4| ... a2/UI
adj^
(7.1)

am/Ml a^n/\A\ ... ann/\A\

1 2 3 -7 6 -1
Example 1. From Problem 2, Chapters, the adjoint of A = 1 3 4 is 1 -1
1 4 3 1 -2 1

"7/2
Since Ml = -2, A~^ = ^'^^ -^
-3 f
-k X

2 1 -
^J
See Problem 2.

55
56 THE INVERSE OF A MATRIX [CHAP. 7

INVERSE FROM ELEMENTARY MATRICES. Let the non-singular n-square matrix A be reduced to /
by elementary transformations so that

H^.-.H^.H^. A-K^-K2...Kt = PAQ = I

Then A ^ P ^ Q^ by (5.5) and, since (S~V^ = S,

(7.2) A'^ = (P'^-Q'Y = Q-P = K^-K^... KfH^... H^-H^

Example 2. From Problem 7, Chapters,


1 1 "l -3 1 -3
HqH^ AK^Kq 1 -1 1 A 1 1 = /

-1 1_ 1 _0 1__ 1

1 -3 o" 'l
-3" "106" '100" 7 -3 -3
Then A K^K2H2'i -1

www.TheSolutionManual.com
1 1 1 1 = -1 1

1 1 1 -1 1 -10 1

In Chapter 5 it was shown that a non-singular matrix can be reduced to normal form by row
transformations alone. Then, from (7.2) with Q =1, we have

(7.3) H,...H^.H,

That is,

III. If A is reduced to / by a sequence of row transformations alone, then A~ is equal


to the product in reverse order of the corresponding elementary matrices.

1 3 3
Examples. Find the inverse of .4 = 1 4 3 of Example 2 using only row transformations to reduce /I to/.

13 4

Write the matrix [A Q and perform the sequence of row transformations which carry A into
Ir, on the rows of six elements. We have

1 3 3 1 1 3 3 1 1 3 4 -3 1 7 -3 -3
'\j
[AI^] = 1 4 3 1
"X^
1 -1 1 1 1 1
-"o
1 -1 1

1 3 4 1 1 -1 1 1 1 1 1 -li 1

[/3^"']

7 -3 -3
by (7,3). Thus, as A is reduced to /g, /g is carried into A -1 1

-1 1

See Problem 3.

INVERSE BY PARTITIONING. Let the matrix A = [a^j] of order n and its inverse B = [6.^,-] be par-
titioned into submatrices of indicated orders:

41 A12 ^11 "12


(pxp) (pxq) (pxp) (px?)
and where p + q = n
^21 A22 "21 S22
(qxp) (qxq) (qxp) (qxq)
CHAP. 7] THE INVERSE OF A MATRIX 57

Since AB = BA = I^, we have


(i) /4iiSii + /lisBgi = Ip (iii) Bgi'^ii + B^z'^si =
(7.4)
(ii) 4iiSi2 + A-^^B^-z = (iv) Bjiiia + Bss^^as = Iq

Then, provided /4ij^ is non-singular,

!Bxi = '4ij^ + (A.^^^ "-i-zK (^^si '^n) ^21 = ~C (^21 '^u)

where ^ = .422 - A^i^Ali A^^).


See Problem 4.

In practice, A.^^ is usually taken of order ra-l. To obtain A^^, the following procedure is
used. Let

%2 %3 %4

www.TheSolutionManual.com
11
"11 %2 13
[Oil %2l O23 G^ =
021 022 023 '^24
' Osi 022 ,

2i ^^aj Ogj Ogj 033 6(34


31 32 33_
41 %2 '''43 ''44

After computing C^^, partition G3 so that /422 = [o33] and use (7.5) to obtain C^^. Repeat the proc
ess on G4. after partitioning it so that ^22 = [044], and so on.

1 3 3
Example 4. Find the inverse of A = 1 4 3 using partitioning.
1 3 4

Take ^11
11 = [!']. ^12 , ^421 = [13], and ^22= [4]- Now

^= ^22 - '42i(^;' ^12) = [4]-[l3]M = [1], and f^ = [l]

Then

Sii = 4\ + ('4i\/ii2)rw^ii) = i]^[o]fii-'^^^


[j

S12 = -('4i\^i2)r^ "


I oj

S21 = -^ (^2i'4la) = [-1,0]

522 = r' = [1]

" -3 -3
7
fill fil2
and -1 1
S21 B22
_ -1 1

See Problems 5-6.


58 THE INVERSE OF A MATRIX [CHAP. 7

THE INVERSE OF A SYMMETRIC MATRIX. When A is symmetric, aij = ciji and only ^(n+1) cofac-
tors need be computed instead of the usual n^ in obtaining A~^ from adj A.

If there is to be any gain in computing A~^ as the product of elementary


matrices, the ele-
mentary transformations must be performed so that the property of being symmetric is preserved.
This requires that the transformations occur in pairs, a row transformation followed immediately
by the same column transformation. For example,

'0 b c a b ...

b a ... b c
. c ...

Ht ^l

a b

www.TheSolutionManual.com
c a c
b
c c
^i(- ^) ^^i(- |)

However, when the element a in the dia^gonal is replaced by 1, the pair of transformations are
H^(l/\Ja) and K^{l/\Ja). In general, ^Ja is either irrational or imaginary; hence, this procedure
is not recommended.
The maximum gain occurs when the method of partitioning is used since then (7.5) reduces to

(7.6)

where f = A^^ - A.^^(Af-^A^^).


See Problem 7.

When A is not symmetric, the above procedure may be used to find the inverse of A'A, which
is symmetric, and then the inverse of A is found by

(7.7) A'^ = (A'Ai^A-

SOLVED PROBLEMS
1. Prove: An re-square matrix A has an inverse if and only if it is non- singular.

Suppose A is non-singular. By Theorem IV, Chapter 5, there exist non-singular matrices P and Q such
that PAQ=I. Then A=P~^-Q'^ and A''^ = Q-P exists.
-1 -1 _i
Supposed exists. The A- A =1 is of rank n. If /4 were singular, /l/l would be of rank < n; hence,
A is non-singular.
CHAP. 7] THE INVERSE OF A MATRIX 59

2. (a) When A
!]
- I
/I I =5, adji r4-3l,andi"=r4/5-3/5-]
[-1 [_-l/5
[? 2j
2/5J

2 3 1 1 -5 1 -5
(b) When A = 1 2 3 then U! = 18, adj A 7 1 and A 7 1

3 1 2 -5 7 -5 7

2 4 3 2

3 6 5 2
3. Find the inverse of i 4 =
2 5 2 -3
_4 5 14 14_

2 4 0' 3/2
3 2 1
1 1 2 1 1
1/2 1 2 3/2 1 '
1/2
3 6 5 2 10 1 3 6 5 2 1 1/2 -1 -3/2 1
[AIJ
I
"K^ ^\j
1 '
- 3 10 -3 10

www.TheSolutionManual.com
2 5 2 1 2 5 2 1 1 -1 -5 -1 1
1

4 5 14 1 1 1
1 _4 5 14 14 1 1 -3 8 10 -2 1

0' "1 0'


1 2 3/2 1 ]
1/2 7/2 11 5/2 -2
'Xy
1 -1 -5 ]
-1 1
'V^
1-1-5 -1 1

1/2 -1 ; -3/2 1 1-2 -3 2


-3 8 10 ; -2
h 5-5 -5 3 1

10 18 1
13 -7 -2 - 0'
10 18 !
13 -7-2 '

'X'
1 -7| -4 2 1
'-\J
1 -7 1
-4 2 1

1 -2 -3 j
2 1 -2 [
-3 2
5 ; 10 --10 3 1_ 11 2 -2 3/5 1/5

1 1
--23 29 -64/5 -18/5
'^-/
100] 10 -12 26/5 7/5
1 i
1 -2 6/5 2/5
1 2 -2 3/5 1/5
1

Ih A-']

-23 29 -64/5 -18/5


10 -12 26/5 7/5
The inverse is A
1 -2 6/5 2/5
2 -2 3/5 1/5

(i) iiiSn + iisSsi = / (iii) S^i^ii + B^^A^^ =


4. Solve (.., ^ ^ for Bii,Si2,S2i, and S
/fiiS,2
(11) + A^^B^^ = (iV) ^21^12 + 52,42 = /

Set S22 = f"^ Prom (ii), B^^ = -(-4^1-412)^"^; from (iii), B21 = - ^~\A^^At^y, and, from (i), B-^^

^ti - A'l^A^^B^^ = 4\ + (^lVi2)f~\'42iA\).

Finally, substituting in (iv),

*. 1
-f (-42i'4ii)^i2 + ^'^422 = / and f - '^'2 ('^21'*lll)'^12
60 THE INVERSE OP A MATRIX [CHAP. 7

12 3 1

5. Find the inverse of A


13 3 2
by partitioning.
2 4 3 3
1111
1 2 3
(a) Take Gg 1 3 3 and partition so that
2 4 3

[24], and [3]

-1 r 3 -2l ,-1 [3 -2 r 3 -2
,
Now 11 =
jj.
^11-41. - '421-4 11 = [24] [2 0],
^_^ |^_^ ^

,f
= ^22 - '42i('4ii^i2) = [3] - [2 4]r = [-3], and f =[-l/3]

www.TheSolutionManual.com
3 -2] _ [2 ol
Then Sii = 4l\ + (.4lVi2)rV2i^i\) = [j "^l + Hr-i] [2 0]
-1 ij [o oJ

=
3L-3 3J

B12 = ('4ii'4i2)C
-'^
- 1 f (/I2A1) = i[2 0],
3
4-fl
3-6 3

and -3 3
D01 Oo
2 0-1

1 2 3 1

(6) Partition A so that A^^ 1 3 3 , A^Q - 2 -421 =[ 1 1 1 ] and /I22 = [ I ]

2 4 3 3

3-6 3

Now 4i = i -3 3 A~'^ A - i A^iA-^-^ - "X"L2 3 2J,


2 0-1

.f = [l]-[l 1 l](i-) 3 and C'^=[3]


-1 &
3'
3 -6 3' 3 -6 1 -2 1

Then B, 3

2
3

-1
^1
-1
3 [3]-|[2 -3 2] =
I
3

2
3
-1
4 -2
6-9
3
6
-2
1-2
1
2
-1
" 0'

B12 = -3 S21 = [-2 3 -2]. B22 = [3]


_ 1_

1-210
[Sii S12 I 1-2 2-3
and
S21 B22J 1-11
-2 3-2 3
.

CHAP. 7] THE INVERSE OP A MATRIX 61

1 3 3
6. Find the inverse of A = 1 3 4 by partitioning.
1 4 3

We cannot take -4,


u since this is singular.

1 1 3 3 7 - 3 -3
By Example 3,
.

the inverse of // A 1 A = 1 4 3 S is B -1 1 Then


1 1 3 4 -1 1

7 -3 -3' "1 0'


7 -3 -3'

B^Ho 1 1 1 = 1 1

1 1 1 1 1

Thus, if the (ra-i)-square minor A^.^_ of the n-square non-singular matrix A is singular, we first bring a
non-singular (n-l)-square matrix into the upper left corner to obtain S, find the inverse of B, and by the prop-

www.TheSolutionManual.com
er transformation on B~^ obtain A''^

2 1-12
7. Compute the inverse of the symmetric matrix A
13 2-3
-1 2 1-1
2-3-1 4

2 1 -1
Consider first the submatrix G^ 1 3 2 partitioned so that
-1 2 1

-421 = [-1 2], ^22= [1]

Now

^22 - ^2i(Ai^i2) = [l]-[-l 2]rM = [-2] and f^= [-i]

Then B,

B12 = B21 = [-k k]. B22 = [-i]


["jj.

1 3 -5
1
and 3 -1 5
10
-5 5 -5

Consider now the matrix A partitioned so that

2 1 -1
'^11 1 3 2 [2 -3 -1], [4]
-1 2 1

1 3 -5 -1/5
Now A< _1_
10
-5
3-1
5 -5
5

-2
2/5 .
<f= [18/5 ], t = [5/I8].
62 THE INVERSE OF A MATRIX [CHAP. 7

2 5-7
Then JS^ 5-15 521 = ^[1 -2 10], B22 = [5/I8]
-7 5 11

2 5-71
5-1 5-2
and
-7 5 11 10
1 -2 10 5

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS
8. Find the adjoint and inverse of each of the following:
1 2
"
1 2-1" "2
3 4" "1 2
3"
3
(a) -1 1 2 (b) 4 3 1 (O 2 4 5 (d)
n n
n
U w ^ 1
J.

_ 2 -1 1_ _1 2 4 3 5 6
3

-2/3
3-15 -10 4 9 1 -3 2
1

1/3
Inverses (a)
^ -153-1 5

3

^'^i
-5
15 -4 -14
1 6
,
(c) -3
2-1
3 -1 .
(d)
1/2 -1/6
- -J - ^ - - 1/3

9. Find the inverse of the matrix of Problem 8((?) as a direct sum.

10. Obtain the inverses of the matrices of Problems using the method of Problems.

1 1 1 1 3 4 2 7 2 5 2 3
13 3 2 1

1 4 3 3-1
-4 4
11. Same, for the matrices (a)
1

2
2

3
3

5 -5
.
(b)
2
5
3

7
3

3
2

9
(c)
2
3
3

6
3

3 2
(d) 13 4 11
-4 -5
1 11-1
1
3 8 2 3 2 3 4 12 8
1-2-12 2
2 16 -6 4 -144 36 60 21

1 22 41 - 30 -1 48 -20 -12 -5
(a) (c)
18 -10 -44 30 -2 48 48 -4 -12 -13
4 -13 6 -1 12 -12 3

30 -20 -15 25 -5
-1 11 7 -26
30 -11 -18 7 -8
-1 -7 -3 16
(b) 1 (^)i -30 12 21 -9 6
2 1 1 -1
-15 12 6-9 6
1 -1 -1 2
15 -7 -6 -1 -1

12. Use the result of Example 4 to obtain the inverse of the matrix of Problem 11(d) by partitioning.

13. Obtain by partitioning the inverses of the matrices of Problems 8(a), 8(6), 11(a) - 11(c).
CHAP. 7] THE INVERSE OF A MATRIX 63

12-12 12 2

14. Obtain by partitioning the inverses of the symmetric matrices (a)


2 2-11 (b)
112 3
-1 -1 1 -1 2 2 2 3
2 1-12 2 3 3 3
1 -1 -1 -1 -3 3-3 2
Ans. (a)
-1 -1 -1 1 3-4 4-2
(b)
-1 -1 -5 -1 -3 4-5 3
-1 1 -1 -1 2-2 3-2

15. Prove: If .4 is non-singular, then AB = AC implies B = C.

16. Show that if the non-singular matrices A and B commute, so also do


(a) A andS, (b) A und B ,
(c) A a.ndB Hint, (a) A (AB)A = A (BA)A

17. Show that if the non-singular matrix A is symmetric, so also is /I


^.

Hint: A^A

www.TheSolutionManual.com
:r / = (AA"^)' = (A~^)'A .

18. Show that if the non-singular symmetric matrices A and B commute, then (a) A~'^B. (b) AB"'^. and (c) A~^B~^
are symmetric. Hint: (a) {A'^ B)' = {BA^'^)' = (A'^)'B' = A~^B.

19. An mxn matrix A is said to have a right inverse B if AB = I and a left inverse C it CA = I. Show that A
has
a right inverse if and only ifA is of rank m and has a left inverse if and only if the rank of A is n

13 2 3
20. Find a right inverse of A 14 13 if one exists.
13 5 4
1 3 2
Hint. The rank of A is 3 and the submatrix S 1 4 1 is non-singular with inverse S . A right inverse of
1 3 5

17 -9 -5
A
-4 3 1
is the 4x3 matrix B =
-1 1
L J

1 3 3
7 -3 -3
21, Show that the submatrix T -1 1
1 4 3 of A of Problem 20 is non-singular and obtain as another
1 3 4
-1 1
right inverse of A.

1 1 1
7 -1 -1 a
22. Obtain -310b as a left inverse of
3

3
4
3
3

4
, where a,b. and c are arbitrary.
-3 1 c

13 4 7
23. Show that A 14 5 9 has neither a right nor a left inverse.
2 3 5 8

24. Prove: If ^ then


/111 ^12
|.4ii| 0, |.4ii| -1^22 - ^^si^ii/ligl
A21 Aq2

25. If |/+ l| ;^ 0, then (/ + .4)-^ and (I - A) commute.

26. Prove:(i) of Problem 23, Chapter 6.


chapter 8
Fields

NUMBER FIELDS. A collection or set S of real or complex numbers, consisting of more than the ele-
ment 0, is called a number field provided the operations of addition, subtraction, multiplication,
and division (except by 0) on any two of the numbers yield a number of S.
Examples of number fields are:

(a) the set of all rational numbers,

www.TheSolutionManual.com
(b) the set of all real numbers,
(c) the set of all numbers of the form a+b\f3 ,where a and b are rational numbers,
(d) the set of all complex numbers a+ bi, where a and b are real numbers.
The set of all integers and the set of all numbers of the form bvs , where i is a rational
number, are not number fields.

GENERAL FIELDS. A collection or set S of two or more elements, together with two operations called
addition (+) and multiplication (). is called a field F provided that a, b, c, ... being elements of
F, i.e. scalars,

a + b is a unique element of F
a+b = b+a

a + (b + c) = (a + b) + c

For every element a in F there exists an element in F such that a+ = +a = a.

As For each element a in F there exists a unique element -a in F such that a + (-a) = 0.

ab = a-b is a unique element of F


ab = ba

(ab)c = a(bc)

For every element a in F there exists an element 1 ?^ such that 1-a = 1 = o.

For each element a ^ in F there exists a unique element a'^ in F such that a- a'''^ =
a~ -0=1.
Di : a(b + c) = ab+ ac
D^'- (a+b)c = ac + bc

In addition to the number fields listed above, other examples of fields are:

(e) the set of all quotients


^^^ of polynomials in x with real coefficients.
Q{x)

(/) the set of all 2x2 matrices of the form


ti where a and b are real numbers.

(g) the set in which 0+0=0.


This field, called of characteristic 2, will be excluded hereafter.
In this field, for example, the customary proof that a determinant having two rows identical
is is not valid. By interchanging the two identical rows, we are led to D = -D or 2D = ;

but D is not necessarily 0.

64
CHAP. 8] FIELDS 65

SUBFIELDS. If S and T are two sets and if every member of S is also a member of T, then S is called
a subset of T.

If S and T are fields and if S is a subset of T, then S is called a subfield of T. For exam-
ple, the field of all real numbers is a subfield of the field of all complex numbers; the field of
all rational numbers is a subfield of the field of all real numbers and the field of all complex
numbers.

MATRICES OVER A FIELD, When all of the elements of a matrix A are in a field F, we say that
'Mis over F". For example.

A =
1 1/2
is over the rational field and B
11 + }
is over the complex field
1/4 2/3 2 1 - 3i

Here, A is also over the real field while B is not; also A is over the complex field.

Let A.B, C, be matrices over the same field F and F be

www.TheSolutionManual.com
... let the smallest field which
contains all the elements; that is, if all the elements are rational numbers, the field F is the
rational field and not the real or complex field. An examination of the various operations de-
fined on these matrices, individually or collectively, in the previous chapters shows that no
elements other than those in F are ever required. For example:
The sum, difference, and product are matrices over F.

If A is non-singular, its inverse is over F.


If A ^l then there exist matrices P and Q over F such that PAQ = / and / is over F.
If A is over the rational field and is of rank r, its rank is unchanged when considered over
the real or the complex field.

Hereafter when A is said to be over F it will be assumed that F is the smallest field con-
taining all of its elements.

In later chapters it will at times be necessary to restrict the field, say, to the
real field.
At other times, the field of the elements will be extended, say, from the rational field to the real
field. Otherwise, the statement "A over F" implies no restriction on the field, except for the
excluded field of characteristic two.

SOLVED PROBLEM
1. Verify that the set of all complex numbers constitutes a field.

To do this we simply check the properties A^-A^, Mi-Mg, and D^-D^. The zero element (/I4) is and
the unit element (M^) is 1. If a + bi and c + di are two elements, the negative (A^) of a + bi is -a-bi. the
product (A/ 1) is (a+bi){c + di) = (ac -bd) + (ad + bc)i ; the inverse (M5) of a + bi^o is

1 _ g bi _ a _ hi
a + bi a^ +62 a^+ b^ a^ + b^
Verification of the remaining properties is left as an exercise for the reader.
66 FIELDS [CHAP. 8

SUPPLEMENTARY PROBLEMS
2. Verify (a) the set of all real numbers of the form a + b\r5 where a and b are ra:ional numbers and

(6) the set of all quotients ^ of polynomials in x with real coefficients constitute fields.
Q(x)

3. Verify (a) the set of all rational numbers,


(b) the set of all numbers a + bvS, where a and b are rational numbers, and
(c) the set of all numbers a+bi, where a and b are rational numbers are subfields of the complex field.

a b
4. Verify that the set of all 2x2 matrices of the form ,
where a and b are rational numbers, forms a field.
b a

Show that this is a subfield of the field of all 2x2 matrices of the form
a h
w]\eK a and h are real numbers.
b a

Why does not

www.TheSolutionManual.com
5. the set of all 2x2 matrices with real elements form a field?

6. A set R of elements a,b.c.... satisfying the conditions {Ai, A^. A^. A^, A^; Mi. M.y, D^, D^) of Page 64 is called
a ring. To emphasize the fact that multiplication is not commutative, R may be called a non- commutative
ring. When a ring R satisfies Mg, it is called commutative. When a ring R satisfies M^. it is spoken of as
a ring with unit element.

Verify:
(a) the set of even integers 0,2,4, ... is an example of a commutative ring without unit element.
(b) the set of all integers 0,+l,2,+3, ... is an example of a commutative ring with unit element.
(c) the set of all n-square matrices over F is an example of a non-commutative ring with unit element.

(d) the set of all 2x2 matrices of the form , where a and b are real numbers, is an example of a
commutative ring with unit element.

7. Can the set (a) of Problem 6 be turned into a commutative ring with unit element by simply adjoining the ele-
ments 1 to the set?

8. By Problem 4, the set (d) of Problem 6 is a field. Is every field a ring? Is every commutative ring with unit
element a field? i

To ol
9. Describe the ring of all 2x2 matrices \
^ , where a and b are in F. If A. is any matrix of the ring and

^ = . show that LA = A. Call L a left unit element. Is there a right unit element?

10. Let C be the field of all complex numbers p + qi and K be the field of all 2x2 matrices where p, q,
a -b^ ^
u, V are real numbers. Take the complex number a + bi and the matrix as corresponding elements of
the two sets and call each the image of the other.
-3l To -4l
[2 ; 3+ ^^2i, 5.
3 2j L4 Oj
(b) Show that the image of the sum (product) of two elements of K is the sum (product) of their images in C.
(c) Show that the image of the identity element of K is the identity element of C.
(d) What is the image of the conjugate of a + bi?

(e) What is the image of the inverse of "!]


[:
This is an example of an isomorphism between two sets.
chapter 9

Linear Dependence of Vectors and Forms

of real numbers (%, x^) is used to denote a point Z in a plane. The same
pair
THE ORDERED PAIR
of numbers, written as [x^, x^], will be used here to denote the two-dimensional vector or 2-vector

OX (see Fig. 9-1).


X2 ''^3(*11 + ^21.. ^12 + ^2!

X.2(X^1. X22)

www.TheSolutionManual.com
X(Xi. Xq)

Fig. 9-1

If .Yi = [%i,xi2] and X^= [x^^.x^q] are distinct 2-vectors, the parallelogram law for
their sum (see Fig. 9-2) yields

A3 A^ + A2 = L ^11 "^ -^21 1 -^12 "^ -'^221

Treating X^ and Xg as 1x2 matrices, we see that this is merely the rule for adding matrices giv-
en in Chapter 1. Moreover, if k is any scalar,

kX-i L rC%'\ -1 , fv X-y Q J

is the familiar multiplication of a vector by a real number of physics.

VECTORS. By an n -dimensional vector or re-vector X over F is meant an ordered set of n elements x^


of F, as

(9.1) X = [a;i,x2, ...,%]

The elements x^,X2 % are called respectively the first, second, ..., reth components of X.

Later we shall find it more convenient to write the components of a vector in a column, as

(91) I X-i , X2, . , Xi J

Now (9.1) and (9.1') denote the same vector; however, we shall speak of (9.1) as a row vector
and (9.1') as a column vector. We may, then, consider the pxq matrix A as defining p row vectors
(the elements of a row being the components of a 17-vector) or as defining g column vectors.

67
68 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. 9

The vector, all of whose components are zero, is called the zero vector and is denoted by 0.

The sum and difference of two row (column) vectors and the product of a scalar and a vec-
tor are formed by the rules governing matrices.

Example 1. Consider the 3-vectors

A:i= [3,1,-4], -^2= [2.2,-3], ^3= [0,-4,1], and ^4= [-4,-4,6]


(a) 2X-^-iX^ = 2[3,l,-4] - 5[2,2,-3] = [6. 2, -S] - [lO, 10, -I5] = [-4,-8,7]
(6) IX^+X^. = 2[2,2,-3]+ [-4,-4.6] = [o,0.o] =
(c) 2X^ -3X2-^3 =

{d) 2X^ - X^- Xq+ X^ =

The vectors used here are row vectors. Note that if each bracket is primed to denote col-
umn vectors, the results remain correct.

www.TheSolutionManual.com
LINEAR DEPENDENCE OF VECTORS. The m re-vectors over F

Ai = |.%]^,%2' ^mJ

are said to be linearly dependent over F provided there exist m elements h-^^.k^, ,k^ of F, not
all zero, such that

(9.3) k.^X-1 + k^X^^ + + k^X-^ =

Otherwise, the m vectors are said to be linearly independent.

Example 2. Consider the four vectors of Example 1. By (6) the vectors ^2 ^"d Xj^_ are linearly dependent;
so also are X^, X^, and X^ by (c) and the entire set by {d).

The vectors X^ and Xq^. however, are linearly independent. For. assume the contrary so that

fel^l + ;c2A'2 = [Zk^ + 2k^, k^+ 2k^, -'iJc.i_~ Zk^l = [o, 0,0]

Then 3fti + 2k^ = 0, ft^ + 2^2 = 0, and -ik^ - lik^ = 0. Prom the first two relations A^ =
and then ^2 = 0.

Any n-vector X and the n-zero vector are linearly dependent.

A vector ^,B+i is said to be expressible as a linear combination of the vectors Ai, X^ X^


if there exist elements k^, k^, ...,k^ of F such that

Xfn+i = %A]^ + :2A2 + + k^Xf^

BASIC THEOREMS. If in (9.3), k^ ?^ 0, we may solve for

^i = - T"!^!'^! + + ^i-i^t-i + -^i+i^i+i + + ^m^m! or

(9.4) Xi = SiJVi + + s^_iZi_i + s^j-i^i+i + + s^J^

Thus,
I. If m vectors are linearly dependent, some one of them may always be expressed as
a linear combination of the others.
.

CHAP. 9] LINEAR DEPENDENCE OF VECTORS AND FORMS 69

II. If m vectors X^, X^ X^ are linearly independent while the set obtained by add-
ing another vector X^j,-^ is linearly dependent, then Jt^+i can be expressed as a linear com-
bination of Xi, X^, , X^
Examples. Prom Example 2, the vectors X^, and X^ are linearly independent while X^.X^.^niXg are
linearly dependent, satisfying the relations 2X.i^-2X^- Xg= 0. Clearly, Zg=2A^i-3^2-

III. If among the m vectors X^, X^ X^ there is a subset of r<m vectors which are
linearly dependent, the vectors of the entire set are linearly dependent.

Example 4. By (b) of Example 1, the vectors X^ and X^ are linearly dependent; by (d), the set of four
vectors is linearly dependent. See Problem 1.

IV. If the rank of the matrix

Xi 1 %2
m<n

www.TheSolutionManual.com
(9.5) ,

*TOl ^n2 %
associated with the m vectors (9.2) is r<m, there are exactly r vectors of the set which
are linearly independent while each of the remaining m-r vectors can be expressed as a
linear combination of these r vectors. See Problems 2-3.

V. A necessary and sufficient condition that the vectors (9.2) be linearly dependent
is that the matrix (9.5) of the vectors be of rank r<m. If the rank is m, the vectors are
linearly independent.

The set of vectors (9.2) is necessarily linearly dependent if m>n.

If the set of vectors (9.2) is linearly independent so also is every subset of them.

A LINEAR FORM over F in ra variables x^, x^ is a polynomial of the type


n
(9-6) 2 OiXi = a^x^ + a^x^ + + a^A:^
i=i
where the coefficients are in F
Consider a system of m linear forms in n variables

/l = CIu% + %2^2 +
+ '^2n*re
(9.7)

/m 1^1 + 0~foXo + %n^n


and the associated matrix

-'ml "m2

If there exist elements k^.k^ k^ , not all zero, in F such that

Kk + ^^^2/2 + ... + A;4 =


70 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. 9

the forms (9.7) are said to be linearly dependent; otherwise the forms are said to be linearly
independent. Thus, the linear dependence independence of the forms of (9.7) is equivalent
or
to the linear dependence or independence of the row vectors of A.

Example 5. The forms /i = 2xi - 2 + 3*g, /2 = x.^+ 2% + 4^=3. /g = ix^ - Tx^ + Xg are linearly depend-

2-13
ent since A = 1 2 4 is of rank 2. Here, 3/^ - "if^ - fs = .

4 -7 1

The system (9.7) is necessarily dependent if m>n. Why?

SOLVED PROBLEMS

www.TheSolutionManual.com
1. Prove: If among the m vectors X^,X^, ...,X^ there is a subset, say, X^,X^ X^, r<m, which is
linearly dependent, so also are the m vectors.

Since, by hypothesis, k^X-^ + k^X^ + + k^X^ = with not all of the k's equal to zero, then

k^X^ + k^X^ +
+ k^X^ + 0-.Y^+i + + 0-.Yot =

with not all of the k'a equal to zero and the entire set of vectors is linearly dependent.

2. Prove: If the rank of the matrix associated with a set of m ra-vectors is r<m, there are exactly r

vectors which are linearly independent while each of the remaining m-r vectors can be expressed
as a linear combination of these r vectors.

Let (9.5) be the matrix and suppose first that m<n If the r-rowed minor in the upper left hand comer
.

is equal to zero, we interchange rows and columns as are necessary to bring a non-vanishing r-rowed minor
into this position and then renumber all rows and columns in natural order. Thus, we have

11 11?

21 25-'
A

Consider now an (r+l)-rowed minor

*11 12 %-r Xiq

%1 %2 . Xqt x^q

*rl X^2 ^rr Xrq


xp^ Xpr, . .
xpf xpq

where the elements xp; and xj^q are respectively from any row and any column not included in A. Let h^,k^,
...,A;^+i = A be the respective cofactors of the elements x^g. x^q x^q. xpq, of the last column of V. Then,
by (3.10)
CHAP. 9] LINEAR DEPENDENCE OF VECTORS AND FORMS 71

fci^ii + k2X2i + + krXri + ^r+i*^i = (i = 1,2 r)

and by hypothesis k^x^q + k^xQq + + krx^q + krA-ixpq = y =

Now let the last column of V be replaced by another of the remaining columns, say the column numbered
u. not appearing in A. The cofactors of the elements of this column are precisely the k's obtained above
so that

k^x^n + ^2*2W + + ^rXru + ^r-n^pu =

Thus,
k^x^j; + k^2t + " + f'r'^rt + f'r-n'^pt = (t = 1,2 n)

and, summing over all values of t,

k^X^ + k^X^ + + k^X^ + k^^-^Xp =

Since /i:,^+i = A ji^ 0, Xp is a. linear combination of the r linearly independent vectors X-^^. X^ X^. But ^^j
was any one ^I hence, each of these may be expressed as a linearcom-

www.TheSolutionManual.com
of the m-r vectors -V^+i, ^r+2
binatlon of ^j^, X^ X^.

For the case m>n.


consider the matrix when to each of the given m vectors m-n additional zero compo-
nents are added. This matrix is [^ o]. Clearly the linear dependence or independence of the vectors and
1

also the rank of A have not been changed.

Thus, in either case, the vectors Xr+^ X^ are linear combinations of the linearly Independent vec-
tors X-^.X^. ..., X^ as was to be proved.

3. Show, using a matrix, that each triple of vectors

X^ = [1,2,-3.4] X-L = [2,3,1,-1]

(a) ^2 = [3,-1,2,1] and (b) ^2= [2, 3, 1,-2]

^^3= [1,-5,8,-7] ^3= [4,6,2,-3]

is linearly dependent. In each determine a maximum subset of linearly independent vectors and
express the others as linear combinations of these.

12-34
(a) Here, 3-121 is of rank 2; there are two linearly independent vectors, say X.^ and X^ . The minor
1-5 8-7
1 2 -3
1 2

-1
j^ . Consider then the minor 3-12 The cofactors of the elements of the third column are
3
1-5 8

respectively -14, 7, and -7. Then -1^X^ + 1X2-1X2= and Xg = -2X^ + X^.

2 3 1-1
(b) Here 2 3 1-2 is of rank 2; there are two linearly independent vectors, say X^ and Xg. Now the
4 6 2-3
2 3
2-113 -1
2

2 3
; we interchange the 2nd and 4th columns to obtain 2-213 for which
-2
5^0.
2
4-326
2 -1 1

The cofactors of the elements of the last column of 2 -2 1 are 2,2,-2 respectively. Then
4-32.
2X^ + 2X2 - 2Xs = and Xg = Xi + X,
.

72 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. 9

4. Let Pi(l, 1, 1), PsCl, 2, 3), PsiZ, 1, 2), and P^(2, 3, 4) be points in ordinary space. The points Pi, P^
and the origin of coordinates determine a plane tt of equation

X y z 1

(i)
1111 2y + z
12 3 1

Substituting the coordinates of P^ into the left member of (i). we have


2 3 4 1 2 3 4
2 3 4
1111 1110 = 1 1 1
12 3 1 12 3
1 2 3
1 1

2 3 4
]'

www.TheSolutionManual.com
Thus, P4 lies in tt. The significant fact here is that [P^., Px.Pq 1 1 1 is of rank 2.

1 2 3

We have verified: Any three points of ordinary space lie in a plane through the origin provided the matrix
of their coordinates is of rank 2.

Show that Pg does not lie in v.

SUPPLEMENTARY PROBLEMS
5. Prove: If m vectors X,^. X^ X-^ are linearly independent while the set obtained by adding another vector

-^m+i is linearly dependent, then ^m+i can be expressed as a linear combination of X^.X^ X^^.

6. Show that the representation of /^^+i in Problems is unique.


m n m
Hint: Suppose ^7^+1 = X kiXi = 1, siXi and consider 2 (A:^. - s^ ) X^
i=i i=i i=i

7. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix
(9.5) of the vectors be of rank r<m.
Hint: Suppose the m vectors are linearly dependent so that (9.4) holds. In (9.3) subtract from the ith row the
product of the first row by s^, the product of the second row by S2. ^s Indicated in (9.4). For the
converse, see Problem 2.

8. Examine each of the following sets of vectors over the real field for linear dependence or independence. In
each dependent set select a maximum linearly independent subset and express each of the remaining vectors
as a linear combination of these.

^1 = [1,2,1] ^1 = [2,1, 3,2, -1]


Xj_ = [2,-1,3,2]
X2 = [2,1.4] ^2 = [4,2 1.-2 .3]
(a) X^ = [1,3,4.2] (6) (c)
Xa = [4,5,6] Xs = [0,0 5.6, -5]
X3 = [3,-5,2,2]
^4 = [1.8.-3] X^ = [6,3 -1,--6,7]

A3 = 2a^ + A.Q Xs = 2^1--X^


Ans. (a) Xq = 2X^ - X^ (b) (c)
A. A = 5a 1 2a o x^ = 2X2--^1
CHAP. 9] LINEAR DEPENDENCE OF VECTORS AND FORMS 73

9. Why can there be no more than n linearly independent -vectors over F'>

10. Show that if in (9.2) either Xi = Xj or X^ = aXj, a in F. the set of vectors is linearly dependent. Is the
converse true?

11. Showthatanyn-vectorZandthen-zero vector are linearly dependent; hence, A" ando are considered proportional.
Hint: Consider k^^X +
k^-O = where fc^ = o and ftj ^ 0.

12. (a) Show that X._ = [l.l+i,i], X^ = [j,_j,i_i] and X^ = [l-i-2i,l-i, 2-j ] are linearly dependent over
the rational field and, hence, over the complex field.
(b) Show that Zi = [l.l+i.i], X^ = [i.-i.l-i], and Xq = [o.l-2i.2-i] are linearly independent over
the real field but are linearly dependent over the complex field.

13. Investigate the linear dependence or independence of the linear forms:

/i = 3% - Xg + 2Xg + x^ fx = 2Xx - 3Xg + 4A!g - 2*4

www.TheSolutionManual.com
() fz = 2::i + 3x2 - Xg+ 2x^ (b) f^ = 3%-L + 2^2 - 2x3 + 5*4

/a = 5x^ - 95C2 + 8xq - x^.


fg = 5Xj^ - X2+ 2Xq + X4.

Arts, (a) 3/i - 2/2 - /g =

14. Consider the linear dependence or independence of a system of polynomials

Hox + aij^x + ai_^x + a; (i = 1,2 m)

and show that the system is linearly dependent or independent according as the row vectors of the coeffi-
cient matrix

"10 "11

20 21

"^0 "ni nn

are linearly dependent or independent, that is, according as the rank of 4 is less than or
r equal to 1

15. If the polynomials of either system are linearly dependent, find a linear combination which is identically
zero.

Pi = x ~ 3x2 + 4^ _ 2 Pj_ = 2x* + 3:c^ -4x^+5x + 3


(a) P2 = 2x2 - 6 + 4 (b) P2 = x + 2x2- Zx + \

Ps = x - 2*2 + X Pg = X* + 2x- x^ + X + 2

Ans. (a) 2Pi + Pg - 2P3 = (6) P^ + P^- 2Pg =

16. Consider the linear dependence or independence of a set of 2x2 matrices M.

over F.
[::]-Ci]-[::]
Show that fe^il/i + A:2^^2 + ^3^3 = , when not all the k's (in F) are zero, requires that the rank of the

abed
matrix e f g h be < 3. (Note that the matrices M-^.Mz.Mq are considered as defining vectors of four

p q s t

components,)

Extend the result to a set of mxn matrices


74 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. 9

- - -

1 2 3 2 1 3 3 3

17. Show that 3 2 4 3 4 2 , and 3 6 are linearly dependent.

1 3 2 2 2 1 4 3
_ _ _ -

18. Show that any 2x2 matrix can be written as a linear combination of the matrices and
[o oj' [o oj [i oj'

n Generalize to nxn matrices.

19. If the ra-vectors X^^.X^ X^ are linearly independent, show that the vectors Y-^.Yr, 1^ , where 7^ =
n
2 aijXj. are linearly independent if and only if ^ = \_aij'\ is non-singular.

20. If A is of rank r,show how to construct a non-singular matrix B such that AB = [C^, C2 C7-, o]
where C^, C^ C^ are a given set of linearly independent columns of A.

www.TheSolutionManual.com
21. Given the points Pi(l, 1, 1, 1), Pjd, Ps(2, 2, 2, 2), and /VO, 4. 5, 6) of four-dimensional space,
2, 3. 4),

(a) Show that the rank of [Pi, P3]' is so that the points lie on a line through the origin.
1

(6) Show that [P^, P^. P3, PaY is of rank 2 so that these points lie in a plane through the origin,
(c) Does P5(2, 3. 2. 5) lie in the plane of (6)?

22. Show that every n-square matrix A over F satisfies an equation of the form

A^ + k-^A^ ^ + kr,A^ ^ + ... + kp^^A + kpl =

where the k^ are scalars of F

Hint: Consider /, 4,^^,/!^ 4 in the light of Problem 16.

23. Find the equation of minimum degree (see Problem 22) which is satisfied by

(a) 4 = L J,
[:;] '-[:-:]
A
(b) = \_ \. (c) A
[;:]
Ans. (a) 4^-24=0, (b) 4^-24 + 27 = 0, (c) A^ - 2A +1 =

24. In Problem 23(b) and (c), multiply each equation by 4"^ to obtain (b) A'''^ = I-^A. (c) A~'^=2l-A, and
thus verify: If A over F is non-singular, then A' can be expressed as a polynomial in A whose coeffi-
cients are scalars of F.
.

chapter 10

Linear Equations

DEFINITIONS. Consider a system of m linear equations in the n unknowns xi.a;. > *r?

OqiX-i + 022X2 + + c!2n*-n ~ ^2


(10.1)

www.TheSolutionManual.com
"Wl*!"'" Ob2*^2 + + OIj, = Aa
\

in which the coefficients (o's) and the constant terms (A's) are in F
By a solution in F of the system is meant any set of values of %,%2. x^ in F which sat-
isfy simultaneously the m equations. When the system has a solution, it is said to be consistent;
otherwise, the system is said to be inconsistent. A consistent system has either just one solu-
tion or infinitely many solutions.

Two systems of linear equations over F in the same number of unknowns are called equiv-
alent every solution of either system is a solution of the other. A system of equations equiv-
if

alent to (10.1) may be obtained from it by applying one or more of the transformations: (o) in-
terchanging any two of the equations, (b) multiplying any equation by any non-zero constant in
F, or (c) adding to any equation a constant multiple of another equation. Solving a system of
consistent equations consists in replacing the given system by an equivalent system of pre-
scribed form.

SOLUTION USING A MATRIX. In matrix notation the system of linear equations (10.1) may be written
as
^11 1
\'
(10.2) 02 1 02 2 ^2n *2 = K

r". hm

or, more compactly, as


(10.3) AX = H
?here A = [o^^-] is the coefficient matrix, X = [x^.Xr, xj\ and H = [h^h A^]'

Consider now for the system (10.1) the augmented matrix

Oil ai2 "i?i K


(10.4) 021 02 2 ^271^2 [A H]

ml 0^2 <^nn m

(Each row of (10.4) is simply an abbreviation of a corresponding equation of (10.1); to read the
equation from the row, we simply supply the unknowns and the + and = signs properly.)

75
76 LINEAR EQUATIONS [CHAP. 10

To solve the system (10.1) by means of (10.4), we proceed by elementary row transformations
to replace A by the row equivalent canonical matrix of Chapter 5. In doing this, we operate on
the entire rows of (10.4).

3xi + x^ 2X2 - 1
Example 1. Solve the system
^.X-^ ^Xq^ Xg = 3

2x-j^ + ^Xq + 2xq = 4

2' "1
2 1 1 2 1 2 1 2
1 -2 5 -5 -5 1 1 1
The augmented matrix VA H\ = V
-3 -1 -] 1 -5 -5 -11 -5 -5
0. .0

'1
-1 1 1'

1 1 1 1

1 1 1 1

www.TheSolutionManual.com
.0 OJ 0.

Thus, the solution is the equivalent system of equations: x-i =1, 2 = 0, xq = 1. Ex-
pressed in vector form, we have X = [l, 0, l] .

FUNDAMENTAL THEOREMS. When the coefficient matrix A of the system (10.1) is reduced to the
row equivalent canonical form C, suppose {A H] is reduced to [C K], where K= ^1,^5 A:]'.

If A is of rank r, the first r rows of C contain one or more non-zero elements. The first non-zero

element in each of these rows is 1 and the column in which that 1 sta,nds has zeroes elsewhere.
The remaining rows consist of zeroes. Prom the first r rows of [C K], we may obtain each of

the variables x: , x: ,
... ,xj (the notation is that of Chapter 5) in terms of the remaining varia-
Jr
bleS X: , X: , ... X; and one of the i^, Uq k^.
Jr+1 Jr+2 Jn

If If = k k^ 0, then (10.1) is consistent and an arbitrarily selected set of


values for X,- , ac,- X-
. together with the resulting values of %Ji ,, ,

J2
x
, , ... , X- constitute
Jr
a solution. On the other hand, if at least one of is different from zero, say
V+i' "r+s '

kj; 7^ 0, the corresponding equation reads

Qx.^ + 0% + + 0*71 7^

and (10.1) is inconsistent.

In the consistent case, A and [A H] have the same rank; in the inconsistent case, they
have different ranks. Thus

I. A system AX = H of m linear equations in n unknowns is consistent if and only if

the coefficient matrix and the augmented matrix of the system have the same rank.

II. In a consistent r<n, re-r of the unknowns may be chosen


system (10.1) of rank
so that the coefficient matrix of the remainingr unknowns is of rank r. When these n-r
unknowns are assigned any whatever values, the other r unknowns are uniquely determined.

Xi + 2*2 3*^3 ~ 4^4 = 6

Example 2. For the system aci + 3^2 + xg 2x4. = 4

2^1 + 5a:2 2%3 5x^1 = 10


CHAP. 10] LINEAR EQUATIONS 77

12-3-4 6 12-3-4 6 1 -11 -8 10


[A H] 13 1-2 4
">-/
1 4 2-2 'Xy
1 4 2-2
2 5-2-5 10_ 1 4 3-2 _0 10
"l -11 lO"
-V 1 4 0-2 = [C K]
^0 10
Since A and [A H] are each of rank r = 3, the given system is consistent; moreover,
the general solution contains n-r = 4-3 = 1 arbitrary constant. Prom the last row
of [C K], x^= 0. Let xs = a, where a is arbitrary; then aj-l = 10 + 11a and xq = -2-4o.
The solution of the system is given by x^ = 10 + lla, xj = -2-4o, xg = a, x^, = or
X = [lO + llo, -2-4a, a, o]'

If a consistent system of equations over F has a unique solution (Example 1) that solution
is over F. If the system has infinitely many solutions (Example 2) it has infinitely many solu-
tions F when the arbitrary values to be assigned are over F. However, the system has
over
infinitelymany solutions over any field 2F of which F is a subfield. For example, the system

www.TheSolutionManual.com
of Example 2 has infinitely many solutions over F (the rational field) if o is restricted
to rational
numbers, it has infinitely many real solutions if a is restricted to real numbers, it has infinitely
many complex solutions if a is any whatever complex number.
See Problems 1-2.

NON-HOMOGENEOUS EQUATIONS. A linear equation

a-^ Xi + 0^X2 + + n*n = h

is called non-homogeneous if A 7^ 0. A system AX = H is called a system of non-homogeneous


equations provided H is not a zero vector. The systems of Examples 1 and 2 are non-homogeneous
systems.

In Problem 3 we prove
ni. A system of re non-homogeneous equations in n unknowns has a unique solution
provided the rank of its coefficient matrix A is n, that is. provided \a\ ^ 0.

In addition to the
method above, two additional procedures for solving a consistent
system
of n non-homogeneous equations in as many unknowns AX = H are given
below. The first of
these is the familiar solution by determinants.

(a) Solution by Cramer's Rule. Denote by 4, (i = 1,2 n) the matrix obtained from A by re-
placing Its Jth column with the column of constants (the
h's). Then, if \A\ y^ 0, the system
AX = H has the unique solution

<1"'5) % = 777 , X2 = X = ' "'

See Problem 4.

2xi X2 + Sxg + *4 =
Xl + X2 - 3x3 - 4x4
Example 3. Solve the system using Cramer's Rule.
3x:i + 6x2 - 2x3 + X4.

2%-^ + 2*2 + 2x3 - 3x4


We find

1 5 1
5 1
1 -3 -4 -3 -4
6 -2 1
-120.
-2
= -240
1
2 2-3 2 -3
78 LINEAR EQUATIONS [CHAP. 10

2 5 5 1 2 15 1

1 -1 -3 -4
=
1 1-1-4
-24,
3 8-21 3 6 8 1

2 2 2-3 2 2 2-3
2 1 5 5
1 1 -3 -1
and -96
3 6 -2 8
2 2 2 2_

-240 A^ -24 1
Then x-i = = 0, and
-120 Ml -120 5 -120

-96
^4
-120

(b) Solution using A ^. If |


^ #
|
0, A~^ exists and the solution of the system AX = H is given

www.TheSolutionManual.com
by
(10.6) A-^-AX = A-^H or X ^ A-^H

2xi + 3X2 + Xg 2 3 1

Example 4. The coefficient matrix of the system { x^ + 2% + 3xg is A 1 2 3

3^1 + a;2 + 2a:3 3 1 2

1 -5 7
From Problem 2(6), Chapter 7, A 7 1 -5 Then
18
-5 7 1

"35'
1 -5 7 ["9"
1
AX A-^H J_ 7 1 -5 6 29
18 18
-5 7 1 L8_ . 5_

The solution of the system is x^ = 35/18, x^ = 29/18, x^ - 5/18.


See Problem 5.

HOMOGENEOUS EQUATIONS. A linear equation

'^1*1 + "2*2 + + ''n'^n =


(10.7)

is called homogeneous. A system of linear equations

(10.8) AX =

in n unknowns is called a system of homogeneous equations. For the system (10.8) the rank
of the coefficient matrix A and the augmented matrix [A 0] are the same; thus, the system is
always consistent. Note that X = 0, that is, %i = xs = = = is always a solution; it is %
called the trivial solution.

If the rank ot A is n, then n of the equations of (10.8) can be solved by Cramer's rule for the
unique solution xj^ = X2= ...= x^= and the system has only the trivial solution. If the rank of
A is r<n. Theorem II assures the existence of non-trivial solutions. Thus,
IV. A necessary and sufficient condition for (10.8) to have a solution other than the
trivial solution is that the rank of A be r < n.
V. A necessary and sufficient condition that a system of n homogeneous equations in
n unknowns has a solution other than the trivial solution is |/4 |
= 0.

VI. If the rank of (10.8) is r < n, the system has exactly n-r linearly independent solu-
tions such that every solution is a linear combination of these n-r and every such linear
combination is a solution. See Problem 6.
CHAP. 10] LINEAR EQUATIONS 79

LET Iiand X^he two distinct solutions of AX = H. Then AX^ = H, AX^ = H, and A (Xx- X^) = AY = 0.

Thus, Y = X^~ X^ is a non-trivial solution of AX = 0.


Conversely, if Z is any non-trivial solution of AX = and if X^ is any solution of AX = H,
then X = Xl+ Z is also a solution of AX = H. As Z ranges over the complete solution of AX = 0,
Zy, + Z ranges over the complete solution of AX = H. Thus,
'P
VII. If the system of non-homogeneous equations AX = E is consistent, a complete
solution of the system is given by the complete solution of AX = plus any particular so-
lution of AX = H.

i Xi 2x2 + 3x3
Example 5. In the system set x^ = 0; then xg = 2 and x^ = 1. A particular
Xj + ^2 + 2 Xg = 5
I

solution is = [o, 1, 2]'. The complete solution of < *^


~ *^ [-7a,a,3oJ
A:^,

-^
IS ,

\^Xi + Xq + 2*3 =
where a is arbitrary. Then the complete solution of the given system is

X = [-7a,a,3o]' + [O, 1,2]' = [-7a, 1 +a, 2+3a]'

www.TheSolutionManual.com
Note. The above procedure may be extended to larger systems. However, it is first
necessary to show that the system is consistent. This is a long step in solving the
system by the augmented matrix method given earlier.

SOLVED PROBLEMS

xi + X2 ~ 2xs + X4 + 3 K5 = 1

e 2i - ^2 + 2% + 2x4 + 6*5 = 2

3 ail + 2 X2 - 4 Xg - 3 X4 - 9 xg = 3

tion

The augmented matrix


'l 1-2 1 3 l'
"1
1 3 r 1 1 -2
[A H] = 2-1 2 2 6 2
-v-
-3 '\J
1 -2
3 2 -4 -3 -9 3_ -1 2 -6 8 0_ 0-1 2 -18

1 1 3 1 1 3
1-2 -2
-6 -18 1 3

1 1

1 -2000
13
Then x^ - 1, x^- 2xs = 0, and ^4 + 33=5 = 0. Take xg = a and x^ = b. where a and b are arbitrary; the complete
solution may be given as xx= 1. x^^ 2a, x^^a, x^ = -3b. x^ = b or as AT = [l, 2a, a, -3 6, 6]'.

x^ + X2 + 2Xg + X4 = 5
Solve 2%i + 3x2 - s - 2x4 = 2 .

4xi + 5% + 3xg = 7
Snliitinn*
'11 2 1
5"' '112 1 5 1 7 5 13
[A H] = 2 3-1-22 "V-
1-5-4 -8 1 -5 -4 -8
4 5 3 7 1-5-4 -13 -5
The last row reads 0-xi + O-^tj + O-^g + 0-4 = -5; thus the given system is inconsistent and has no solution.
80 LINEAR EQUATIONS [CHAP. 10

3. Prove: A system AX = H of n non-homogeneous equations in n unknowns has a unique solution


provided \A\ 7^ 0.

If A is non-singular, it is equivalent to /. When A is reduced by row transformations only to /, suppose


[A H] is reduced to [/ K]. Then X = K is a solution of the system.

Suppose next that X = L is a second solution of the system; then AK = H and AL - H, and AK = AL.
Since A is non-singular, K = L, and the solution is unique.

4. Derive Cramer's Rule.

Let the system of non-homogeneous equations be

il*l + a^jAiQ + + ain*n - h^

(1) ] uqiXi + 022*2 + + 2n*n = ^z

www.TheSolutionManual.com
"ni*i + an2*2 + + "nn^n - ^n

Denote by A the coefficient matrix [a. ] and let a^,-be the cofactor of in A . Multiply the first equa-

tion of (J) by ttn, the second equation by Ksi the last equation by ttni. and add. We have
n n n n
S ai^di-^x-^ + 2 ai20iii% + + .S ain^ilXn
i=l i=i 1=1 1 =1

which by Theorems X and XI, and Problem 10, Chapter 3, reduces to

hx ai2 "m
^^2 22 '^2n so that x-i =
^11j-
T
^1

^n ""no. "n

Next, multiply the equations of (i) respectively by a^i, ^ii n2 and sum to obtain

"11 ^1 ''is "m


\A\.x^
Oqi /12 '^23 "2n
so that *<,

Continuing in this manner, we finally multiply the equations of (i) respectively by a^n. 2n (^,71

and sum to obtain


Oil "l,n-i "1
"21 "2,n-i ^2 so that x^.

'ni

~
2Xi + X2 + 5 *3 + *^4 5

Xi + X2 ~ 3x3 - 4*4 = -1
5. Solve the system using the inverse of the coefficient matrix.
3 Xi + 6 Xj - 2 aig + :4
=

2x1 + 2x:2 + 2a;3 - 3x4 =


Solution:
2 15 1 120 120 -120'

1 1-3-4 -69 -73 17 80


Then
The inverse of A
3 6-2 1 120 -15 -35 -5 40
2 2 2-3 .24 8 8 -40.
CHAP. 10] LINEAR EQUATIONS 81

"120 120 -120" 5" "


2
1 -69 -73 17 80 -1 1/5
120 -15 -35 -5 40 8
.24 8 8 -40. . 2. 4/5
(See Example 3.)

Xl + ^2 + s + *4 =
6. Solve i + 3*2 + 2xq + 4a;4 =
2Xi + Xg - Xj^ =

Solution:

1 1 1 1 o" 1 1 1 1 11110
\A U\ 1 3 2 4 'X/
2 1 3 2 13
-1 -2 -1 -3

www.TheSolutionManual.com
2 1 -

11110 1 i -i o"
''\J 3
1 i 1 1 2 2

The complete solution of the system is x^ = -^a + 16, ^^ = -! - |6, xs=a, x^ = h. Since the rank of
A is 2, we may obtain exactly n-r = 4-2 = 2 linearly independent solutions. One such pair, obtained by
first taking a = 1, 6 = i and then a = 3, 6 = 1 is

x^ = 0. X2 = -2, *3 = 1, X4 = 1 and x^ = -1, x^ = -3, ^3 = =


3, :x:4 1

What can be said of the pair of solutions obtained by taking a = b = and a = b =


1 37

7. Prove: In a square matrix A of order n and rank n-1, the cofactors of the
elements of any two rows
(columns) are proportional.

Since 1^1 =0, the cofactors of the elements of any row (column) of A are a solution X-^ of the system
AX = (A'X = 0).

Now the system has but one linearly independent solution since A (A^) is of rank n-l.
Hence, for the
cofactors of another row (column) of A (another solution X^ of the
system), we have X^ = kX^.

8. Prove: If /"i, f^ f^ are m<n linearly independent linear forms over F in n variables, then the
linear forms

^j = ^^'ijfi- 0-=1.2 p)

are linearly dependent if and only if the mxp matrix [5^] is of rank r<p.

The g's are linearly dependent if and only if there exist scalars a^,a^ a. F
in , not all zero, such
that

lgl + "2g2 + +apgp = ai ^^Hxfi + 2 .| si^fi + ... + ap .2 sj^fi

P p

.2 ( 2 a;:S--)f.
1=1 J=l J ^J ^
82 LINEAR EQUATIONS [CHAP. 10

Since the /'s are linearly independent, this requires

+ "p'iP (i = 1, 2 m)
j?/i^v ^i^-ii

Now, by Theorem IV, the system of m homogeneous equations in p unknovras S s^j xj - has a non-

trivial solution X = [a^.a^, apV if and only if [^^j] is of rank r<p.

9. Suppose A = [a- ] of order n is singular. Show that there always exists a matrix R = [h^j] ?^ of

order n such that AB = 0.

Let Bi.Bg B be the column vectors of B. Then, by hypothesis, AB^ = ^65 = .. = AB^ = 0. Con-

sider any one of these, say AB^ = 0, or

ii&it + "iQ^'st + + "m ^nt =

www.TheSolutionManual.com
ni*it + n2^2t+ + "nn^nt

Since the coefficient matrix ^ is singular, the system in the unknowns h^t,^'2t i>nt has solutions other

than the trivial solution. Similarly, AB^ = 0, AB^ = 0, ... have solutions, each being a column of S.

SUPPLEMENTARY PROBLEMS
10. Find all solutions of:

x-^ + x^ + 3 = 4

{a) Xj_ 2^:5 + *3 ~ 3*'4 (c) 2%i + 5%2


~ 2%3 = 3

x-^ -^ 1 x^ 7^3 = 5

Xj^ + %2 + ->=3 + %4 =

( X^ + Xq + Xg = 4 ^1 + %2 + -^S
- % = 4
(b) (d)
\2x^ + 5 3C2 ~ 2 Xg = 3 X\ -^ Xq x^ + :>:4 = -4

X-i Xq + Xr, + x^ = 2

/4ns. (a) ! = 1 + 2a 6 + 3c, ajj = o, X3 = b, x^ =

(b) xi = -7a/3 + 17/3, x:2 = 4a/3 - 5/3, Xg


(d) Xi = % = 1, :;3 = 4 = 2

11. Find all non-trivial solutions of:


*1 + 2*2 "*'
3*3
i x-i 2x^ + 3x3
(a) < (c) 2*1 + X2 + 3*3
12*1 + 5*2 + 6*3
+ 2*2 +
3*i *3

4*1 + 2*3 +
2x-i X2 + ^x^
2*1 + 3 *3
*2 ^ *4-
(.h) 3*1 + 22 + *3
7*2 4*3 5*4
*i - 4*2 + 5*3
2*1 11*5 1*3 5*4

/4ns. (o) *i = -3o, X2 = 0, *g = a

(6) *i = *2 = *3 = a

5 3,
(d) *i
J. *o = o *4
4
1 .

CHAP. 10] LINEAR EQUATIONS 83

12. Reconcile the solution of 10(d) with another x-i = c, x^ = d, x^ = - ^c - , x^ = c +d.


O O o o

1 1 2
13. Given A = 2 2 4 find a matrix B of rank 2 such that AB = 0. Hint. Select the columns of B from
3 3 6_
the solutions of AX = 0.

14. Show that a square matrix is singular if and only if its rows (columns) are linearly dependent.

15. Let AX = be a system of n homogeneous equations in n unknowns and suppose A of rank r = n-1. Show
that any non-zero vector of cofactors [a^i, a^j OLinV of a row of ^ is a solution of AX = 0.

16. Use Problem 15 to solve:

^1 - 2%2 + 3%
ixg = + 2x^ -
/
(b)
/ 2xj_ Xg
(c) J 2.^1 + 3*2 + 4 % =
\2x-^ + 5%2 + 6*3
Xg = y'ix^ 4%2 + 2%3 = 1 2xi %2 + 6 Xg =

www.TheSolutionManual.com
Hint. To the equations of (o) adjoin Oxi + Ox^ + 0%3 = and find the cofactors of the elements of the
"l -2 a""

third row of 2 5 6

Ans. (a) xi = -27a. X2 = 0, X3 = 9a or [3a, 0, -a]', (6) [2a, -7a, -17a]', (c) [lla, -2a, -4a]'

17. Let the coefficient and the augmented matrix of the system of 3 non-homogeneous
equations in 5 unknowns
AX =H be of rank 2 and assume the canonical form of the augmented
matrix to be

1 613 614 fcj^g ci

1 623 624 625 C2

_0

with not both of ci, C2 equal to 0. First choose X3 = x^ = x^ =


and obtain X^ = [c^, c^, 0, 0, o]' as a solu-
tion of AX = H. Then choose X3 = 1, x^ = xg =
also X3 = x^ = 0, x^ = 1 and X3 = x^ = 0, Xg == 1 to ob-
0,
tain other solutions X^,Xg, and X^. show that these 5-2 + 1=4 solutions are linearly independent.

18. Consider the linear combination Y = s,X^ + s^X^ + s^X^ + s^X^ of the solutions of Problem 17 Show
that Y is a solution of AX = H if and only if (i) s, +.2 +-3 +^4 = 1- Thus, with s s,. .3, .4 arbitrary except
for (O, i^ IS a complete solution of AX = H.

19. Prove: Theorem VI. Hint. Follow Problem 17 with c^ = c^ = 0.

20. Prove: If ^ is an m xp matrix of rank and S x


r^ is a p matrix of rank r^ such that AB = 0, then r, +
^-<
r^ f
n
Hint. Use Theorem VI.

21. Using the 4x5 matrix .1 = [a^^-] of rank 2, verify: In an x matrix A of


rank .. the r-square determi-
nants formed from the columns of a submatrix consisting
of any r rows of A are proportional to the
r-square
determinants formed from any other submatrix consisting r of rows of A
Hint. Suppose the firsttwo rows are linearly independent so that a^j = p^.a^j
+Ps^a,.. a^.-p^a^,-
f^ ij +p42a2,-
ha2 2j.
(7 = 1.2 5). Evaluate the 2-square determinants j j

I "17 ai5
a^q
and
025 \'^3q agg "4(7 "AS

22. Write a proof of the theorem of Problem 21.

^'" ' '''''"''


''
*'' ""''"'''' '""'"' ^ '" ' '^"' ""' *^'" "'' '''^'"^ ^^1^""^ ^"-""S "^ -
f?ctors''hcJ?'"
(a) a^^-a^^ = a^J^(x^, (t) =
a^^aJJ a^jUji
where (h,i,j,h = 1, 2 n).
84 LINEAR EQUATIONS [CHAP. 10

11114 10
123-4 2 01000
21126 . -,,. 00100 From B =[A H] infer that the
24. Show that B
32-1-13 IS row equivalent to
00010
,

122-2 4 00001
2 3 -3 1 ij [_0

system of 6 linear equations in 4 unknowns has 5 linearly independent equations. Show that a system of
m>n linear equations in n unknowns can have at most re +1 linearly independent equations. Show that when
there are ra + 1 , the system is inconsistent.

25. If AX =H is consistent and of rank r, for what set of r variables can one solve?

26. Generalize the results of Problems 17 and 18 to m non-homogeneous


equations in n unknowns with coeffi-

cient and augmented matrix of the same rank r to prove: If the coefficient and the augmented matrix of the
AX H of m
system = non-homogeneous equations in n unknowns have rank r and if X-^^.X^ Xn-r-n are

www.TheSolutionManual.com
linearly independent solutions of the system, then

X = si^i + 51^2 + + ^n-r+i^n-r+i


n-r+i
where S 1, is a complete solution.
i=i

27. In a four-pole electrical network, the imput quantities i and h are given in terms of the output quantities

2 and Iq by
E^ = oEq + 6/2 "11 a h 'eI 'e2
_ = A
h cE^ + dlQ c d h. >-
_/lJ

Show that and


1 'b
A
d 1 c

Solve also for 2 and I^, h and I^, 4 and E^ .

Show that the


28. Let the system of n linear equations in n unknowns 4X = H, H ^ 0, have a unique solution.
system AX = K has a unique solution for any n-vector K ^ 0.
'l -1 1" 'xi "n"
AX = X2 = F = for the x^ as linear forms in the y's.
29. Solve the set of linear forms 2 1 3 72
1 2 3 _s_
p.
Now write down the solution of A' X = Y

30. Let A be n-square and non-singular, and let S^ be the solution of AX = E^, (i = 1, 2 n). where ^is the
Identify the matrix [S^, Sg S^].
n-vector whose ith component is 1 and whose other components are 0.

31. Let 4 be an m X 71 matrix with m<n and let S^ be a solution of AX - E^, {i -1,2,. m). where E^ is the
m-vector whose ith component is 1 and whose other components are 0. If K = \k^. k^. , k^' show
, that

k^Si + A:2S2 + + ^n^%


is a solution of AX = K.
Chapter 11

Vector Spaces

UNLESS STATED OTHERWISE, all vectors will now be column vectors. When components are dis-
played, we shall write [xj^.x^ a^]'. The transpose mark (') indicates that the elements are
to be written in a column.

A set of such 7z-vectors over F is said to be closed under addition if the sum of any two of

www.TheSolutionManual.com
them is a vector of the set. Similarly, the set is said to be closed under scalar multiplication
if every scalar multiple of a vector of the set is a vector of the set.

Example 1. (a) The set of all vectors [x^. x^. x^Y of ordinary space havinr equal
components (x-^ = x^^ x^)
is closed under both addition and scalar multiplication. For, the sum
of any two of the
vectors and k times any vector (k real) are again vectors having equal
components.
(6) The set of all vectors [x^.x^.x^Y of ordinary space is closed under addition and scalar
multiplication.

VECTOR SPACES. Any set of re-vectors over F which is closed under both addition and scalar multi-
plication is called a vector space. Thus, if X^, X^ X^ are n-vectors over F, the set of all
linear combinations

<"!) K^i + KX^ + + KX^ (kiinF)

is a vector space over F.


For example, both of the sets of vectors (a) and (b) of Example
1 are
vector spaces. Clearly, every vector space (ll.l) contains the zero
re-vector while the zero
re-vector alone is a vector space. (The space
(11.1) is also called a linear vector space.)
The totality V^iF) of all re-vectors over F is called the re-dimensional vector space over F.

SUBSPACES. A set V of the vectors of V^(F) is called a subspace


of V^(F) provided V is closed un-
der addition and scalar multiplication. Thus,
the zero re-vector is a subspace of F(F)- so
also
IS V^(F) Itself. The set (a) of Example
1 is a subspace (a line) of ordinary
space. In general
If X^, X^, ..., Z^ belong to V^(F), the space of all linear combinations
(ll.l) is a subspace of

A vector space V is said to be spanned or generated by the re-vectors


X^, X^ X^ pro-
vided (a) the Xi lie in V and (b) every vector of F is a linear
combination (11.1). Note that the
vectors X^, X^ X^_ are not restricted to be linearly independent.

Examples. Let F be the field R of real numbers


so that the 3-vectors X^ = [i.i.lY X^ = 2 i]'
.
\i
^3= [1.3,2]' and X^= [3,2,1]' lie in ordinary space S = V^R). Any vector \a b cY ot
-
s can be expressed as . , j

85
.

86 VECTOR SPACES [CHAP. 11

yi + y2 + ys + 3y4.

+ 72^2 + ys'^s + + 2X2 + Sys + 274.


Ti ^1 y4-^4 yi

yi + 3y2 + 2y3 Ja

since the resulting system of equations

yi + y2 + ys + 3x4

(i) yi + 2y2 + 3ys + 2y4

yi + 3y2 + 2y3 y4-

is consistent. Thus, the vectors Xj^, Xj. Xg, X4 spanS.

of
and ^2 are linearly independent. They span a subspace (the plane
tt)
The vectors X-y
real numbers.
S which contains every vector hX^ + kX.^. where /i and k are

www.TheSolutionManual.com
hX^. where
The vector X^ spans a subspace (the line L) of S which contains every vector
A is a real number.
See Problem 1.

the dimension of a vector space V is meant the


maximum number of lin-
BASIS AND DIMENSION. By
same thing, the minimum number of linearly in-
early independent vectors in F or, what is the
geometry, ordinary space is considered as
dependent vectors required to span V. In elementary
Here we have been considering it as a
a 3-space (space of dimension three) of points
(a, 6, c).
and the line L is of
3-space of vectors ia,h,c \. The plane n of Example 2
is of dimension 2

dimension 1.
^
consisting of 7i-vectors will be denoted by F(F). When r = n,
A vector space of dimension r

we shall agree to write p;(F) for %\F).

linearly independent vectors of V^(F) is called a


basis of the space. Each vec-
A set of r

combination of the vectors of this basis. All bases of


tor of the space is then a unique linear
linearly independent vectors of the
V^(F) have exactly the same number of vectors but any
r

space will serve as a basis.

X^ Example 2 span S since any vector [a, b. c ]' of S can be expressed


Example 3. The vectors X^. Xr,. of

yi + ys + ys

X-y
71-^1 + j^Xq + yg^a yi + 2y2 + 3ys

yi + 3X2 + 2ys

a
!yi
xi + y2 + ys =

,^ + 3X3 = h unlike the system ( i), has a u-


The resulting system of equations yi + 2y2 ,

Xi+ 3X2 + 2xs = c

are not a
nique solution. The vectors X^.X^.X^ are a basis of S. The vectors X^.X^.X^
Example whose basis is the set X^. X^
basis of S . (Show this.) They span the subspace tt of 2,

Chapter 9 apply here, of course. In particular, Theorem IV may be re-


Theorems I-V of

stated as:
are a set of re-vectors over the rank of the raa:ro matrix
F and if r is
I If X X^ .. X^
of their components,' then from the set r
Unearly independent vectors may be selected. These
T vectors span a V^iF) in which
the remaining m-T vectors lie.
See Problems 2-3.
CHAP. 11] VECTOR SPACES 87

Of considerable importance are :

II. If JYi, ^2, ..., Zm are m<n linearly independent n-vectors of V^iF) and if J^+j^,
^m4.2. . ^n are any n-m vectors of V^iF) which together with X.^, X^ X^ form a linearly
independent set, then the set X^, X^ Z is a basis of V^iF).

See Problem 4.

III. If Z^.Zg,...,! are m<7i linearly independent -vectors over F, then the
p vectors
m
^i = i^'ij^i (/=l-2 P)
are linearly dependent if p>m or, when p<m, if [s^,] is of rank r<p.

IV. If Zi, .2, ..., Z are linearly independent re-vectors over F, then the vectors
n
Yi = 1 a^jXj
"
(i = 1,2 re)
=

www.TheSolutionManual.com
7 1
are linearly independent if and only if [a^-] is nonsingular.

IDENTICAL SUBSPACES. If ,V^(F) and X(F) are two subspaces of F(F), they are identical if and
only each vector of X(F)
if is a vector of ^V;[(F) and conversely, that is, if and only if each
is a subspace of the other.

See Problem 5.

SUM AND INTERSECTION OF TWO SPACES. Let V\F) and f3^) be two vector spaces. By their
sum is meant the totality of vectors X+Y where X is in V^(F) and Y is in V^iF). Clearly, this
is a vector space; we call it the sum space
V^^iF). The dimension s of the sum space of two
vector spaces does not exceed the sum of their dimensions.

By
the intersection of the two vector spaces is meant the
totality of vectors common to the
two spaces. Now if Z is a vector common to the two spaces, so also is aX;
likewise if X and
y are common to the two spaces so also is aX^bY. Thus, the intersection of two spaces
is a
vector space; we call it the intersection space V\F).
The dimension of the intersection space
of two vector spaces cannot exceed the smaller of
the dimensions of the two spaces.

V. If two vector spaces FV) and V^(F) have


V^\F) as sum space and V^(F) as inter-
section space, then h + k = s + t.

Example 4. Consider the subspace 7f, spanned by X^ and X^ of


Example 2 and the subspace tt^ spanned
by Xs and X^. Since rr^ and tt^ are not identical (prove this)
and since the four vectors span
S, the sum space of tt^ and tt^ is S.

Now 4X1 - X2 = X^: thus, X^ lies in both tt^ and 7t^. The subspace (line
L) spanned
by X^ IS then the intersection space of 77^ and 77^ Note that 77^ and 77^ are each of dimension
.

2, S IS of dimension 3, and L is of dimension 1. This agrees with Theorem V.


See Problems 6-8.

NUIXITY OF A MATRIX. For a system of homogeneous equations AX = 0, the


solution vectors X
constitute a vector space called the null space of A.
The dimension of this space, denoted by
^
A'^ IS called the nullity of A.
,

Restating Theorem VI, Chapter 10, we have


VI. If A has nullity N^ ,
then AX = has N^ linearly independent solutions X^. X^
88 VECTOR SPACES [CHAP. 11

Xti such that every solution of AX = is a linear combination of them and every such
A
linear combination is a solution.

A basis for the null space of A is any set of N^ linearly independent solutions of AX = 0.

See Problem 9.

Vn. For an mxre matrix A of rank rj and nullity N/^,

(11.2) rA + Nji = n

SYLVESTER'S LAWS OF NULLITY. If A and B are of order ra and respective ranks q and rg , the

rank and nullity of their product AB satisfy the inequalities

''AB > '^ + 'B - "

www.TheSolutionManual.com
(11.3) Nab > Na , Nab > Nb

Nab < Na + Nb See Problem 10.

BASES AND COORDINATES. The ra-vectors

E^ = [0,1,0 0]', En = [0,0,0 1]'


El = [1,0,0 0]',

are called elementaiy or unit vectors over F. The elementary vector Ej, whose /th component
is 1, is called the /th elementary vector. The elementary vectors E^, E^ constitute an
important basis for f^^Cf).
Every vector X = [%,% ^nl' of 1^(F) can be expressed
uniquely as the sum
n
X 2 xiEi XxE-^ + 2-^2 +
+ '^nEr,
1=1

Of the elementary vectors. The components %, x^ x^ oi X are now called the coordinates of
X relative to the E-basis. Hereafter, unless otherwise specified, we shall assume that a vector
X is given relative to this basis.
Then there exist unique scalars %, 03 a^
Let Zi, Zg Zn be another basis of 1^(F).
in F such that
n
X 1 aiZi a.^Z.^ + 02 Zg + + dj^Zj^
i =i

These scalars 01,05 o are called the coordinates of X relative to the Z-basis. Writing

JY^ = [oi, og a^Y, we have

(11.4) X = [Zi.Zg Zn]Xz = Z-Xz


whose columns are the basis vectors Zi, Zg Z.
where Z is the matrix

Z^ = [l, -1 ]', Z3 = [l, -1, -1 ]' is a basis of Fg (F) and Xz = [1.2.3]'


Examples. If Zi = [2, -1, 3]', 2,

is a vector of Vg(F) relative to that basis, then


r "1
2 1 1 7

-1 -1 [7,0,-2]'
X [Zi, Zg, Zg]A:^ 2

3 -1 -1 -2

relative to the -basis. See Problem 11.


CHAP. 11] VECTOR SPACES 89

Let W^, \ \ be yet another basis of f^(F). Suppose Xig = [61,^2 K\' so that

(11.5) X = \_\,\ WX^ -r . Xy

From (11.4) and (11.5), Z = Z JV^ = IT Zj^ and

(11.6) X^ = ^'^.Z-X^ PX,

where P = IF"^Z.

Thus,
VIII. If a vector of f^(F) has coordinates Z^ and X^ respectively relative to two bases
of P^(F), then there exists a non-singular matrix P determined
, solely by the two bases and
given by (11.6) such that Xf/ = PX^
See Problem 12.

www.TheSolutionManual.com
SOLVED PROBLEMS
1. The set of all vectors A' = [%, x^, Xg, x^Y, where x^ + x^ + x^ + x^ = Q is a subspace V of V^(F)
since the sum of any two vectors of the set and any scalar multiple of a vector of the set have
components whose sum is zero, that is, are vectors of the set.

1 3 1

2 4
2. Since is of rank 2, the vectors X.^ = [1,2,2, 1 ]', X^ = [3,4,4,3]', and X^ = [1,0,0, 1 ]'
2 4

1 3 1

are linearly dependent and span a vector space ^ (F).

Now any two of these vectors are linearly independent; hence, we may take X-^ and X^, X^ and A:g. or X^
and Xg as a basis of the V2{F).

14 2 4

3. Since
13 12 is of rank 2, the vectors ^"1 = [1,1,1,0]', = [4,3,2,-1 = [2,1,0,-1]',
12 A's ]', A'g

-1 -1 -2

and ^"4 = [4,2,0,-2]' are linearly dependent and span a V^(F).

For a basis, we may take any two of the vectors except the pair Xg, X^.

4. The vectors X^, X^, Xg of Problem 2 lie in V^(F). Find a basis.

For a basis of this space we may take X^,X^,X^ = [l.O.O.O]', and Xg = [o, 1,0,0]' or X^.X2.Xg =
[1.2,3.4]'. and X; = [1,3,6,8]' since the matrices [X^, X2, X^.Xg] and [X^. X2.Xg. Xy] are of rank
4.
90 VECTOR SPACES [CHAP. 11

5. Let Zi = [l,2,l]', Z2 = [l,2,3]', A:3 = [3,6,5 ]', Y^ = [0,Q,lY, i'2=[l.2,5]' be vectors of Vs(F).
Show that the space spanned by X-^, X^, Xq and the space spanned by Y.^, Y^ are identical.

First, we note that X^ and X^ are linearly independent while Xq = 2Zj^ + X^.. Thus, the X^ span a space
of dimension two. say iF|('''). Also, the Yi being linearly independent span a space of dimension two. say

Next, iX.^, Y^ = 2^2- ^li ^i = ^i^ - 4^1. X^ = Y^- 271.


Fi = kXz - Thus, any vector aY-^ + fcK, of
2lg^(F) is (50 + 26)^2 - (2a + b)X.^ of iPg^CF) and any vector cXj^ + dX^ of iff(f)
a vector is a vector
(c + d)Y2 - (4c + 2<i)Ki of QVi(F). Hence, the two spaces are identical.

6. (a) If Z = [%, 2.%]' lies in the Vg(F) spanned by X^ = [ 1,-1,1]' and X^ = [3,4,-2]', then

-2% + 5X2 + IXg

www.TheSolutionManual.com
(b) If X = [xi,x^,X3,x^Y lies in the Pi(F) spanned by X^ = [1,1,2,3]' and X^ = [ 1,0,-2,1]', then

3Ci 1 1
% 1
2 1 1 1

-2
is of rank 2. Since 4 0, this requires = - 2*1 + 4% - % = and
% 2 1

%4. 3 1
% 2

xi 1 1

% 1 = %+ 2*2 - s;* = ,

4. 3 1

These problems verify: Every ^(^ ) may be defined as the totality of solutions over F of a system of n-k

linearly independent homogeneous linear equations over F in n unknowns.

7. Prove: two vector spaces Vn(F) and I^(F) have Vn(F) as sum space and V^iF) as intersection
If

space, then h + k = s + t.

Suppose t = h; then Vr!XF) is a subspace of (^^F) and their sum space is 1^' itself. Thus, s=k. t =h and
s + t - h+k. The reader will show that the same is true if t = k.

Suppose next that t<h. t<k and let X^^.X^ X^ span V^(F). Then by Theorem H there exist vectors
Yh so that X1.X2 ^t.i't-n Yh span l^(F) and vectors Zj^^. Z^+j Z^ so that
^t^-i, yt-+2
A:i,A:2 ^t.Zt+1 2fe span I^'^F).

Now suppose there exist scalars a's and 6's such that

t h k
(11-4) X "iXi + .S a^Yi + S biZi
1=1 t=t+i i=t+i

t ft

i=i i=t+i i=t+i

h k
The vector on the left belongs to P^(F),and from the right member, belongs also to ^(F); thus it belongs
to V^(F). But X^.X^ X-t span Vn(Fy, hence, a^+i = at+2 = = "f, = 0.

t k
Now from (11.4), 2 "iXi + 2; b^Z^
i=i i=t+i
t*^"^,
But the X's and Z's are linearly independent so that a^ = 05 = = t = ''t^i = *t+2 = = ^fe = :

the ^f's.y-s, and Z's are a linearly independent set and span ^(F). Then s =h + k-t as was to be proved.
CHAP. 11] VECTOR SPACES 91

8. Consider ^FsCF) having X^ = [l,2,2Y and Z2 = [ 1,1,1 ]' as basis and jFgCF) having 71 = [3,1,2]'

113 1

and Fj = [ 1.0,1 ]' as basis. Since the matrix of the components 2 110 is of rank 3, the sum
3 12 1

space is V,(F). As a basis, we may take X-^, X^, and Y^

Prom h + k = s + t. the intersection space is a VsiF). To find a basis, we equate linear combinations
2 2
of the vectors of the bases of iFg(F) and s^'sCF) as

6 - 3e = 1

take d = 1 for convenience, and solve ^2a + 6- c = obtaining a = 1/3, 6 = -4/3, c = -2/3. Then
( 3a + 6 - 2e = 1

www.TheSolutionManual.com
aX;]^ + 6^2 = [-1.-2/3.-1/3 ]' is a basis of the intersection space. The vector [3,2,1]' is also a basis.

113 3

2 2 4
9. Determine a basis for the null space of A
10 2 1

113 3

x^ =
Consider the system of equations AX = which reduces to
\x2+ Xs + 2x^ =

A basis for the null space of .4 is the pair of linearly independent solutions [1.2,0,-1]' and [2,l,-l,o]'
of these equations.

10. Prove: r > r + r - n.


AB A B

Suppose first that A has the form Then the first r, rows of AB are the first r. rows of B while

the remaining rows are zeros. By Problem 10, Chapters, the rank of AB is ^45 > + % - "
'k

Suppose next that A is not of the above form. Then there exist nonsingular matrices P and
Q such that
PAQ has that form while the rank of PAQB is exactly that of AB (why?).

The reader may consider the special case when B =

11. Let X=[l,2.lY relative to the S-basis. Find its coordinates relative to a new basis Zi = [l 1 0]'
Z2 = [1,0,1]', and Zg = [1,1. l]'.

Solution (a). Write

1 1 \ 1 !a + i + c = 1

(i) X = aZ^ + 6Z2 + cZr^. that is. 2 = a 1 + h + c 1 Then a + c = 2 and a = 0, 6 = 1,


1 1 1 b + c = 1

c = 2. Thus relative to the Z-basis, we have X^ = [0,-1,2]'


92 VECTOR SPACES [CHAP. 11

Solution (6). Rewriting (i) as X = {Z-^.Z^.Z^^X^ = ZXg, we have

1 -1 1

X^ . Z X 1 -1 2 [0,-1,2^

1 1 1 1

. _ _

12. Let X^ and X-^ be the coordinates of a vector X with respect to the two bases 7,^ = [1,1,0]'
Z2=[l,0,l]', Z3= [1,1,1]' and f4 = [l,l,2]', ff^ = [2,2,1 ]', Ifg = [1,2,2 ]'. Determine the ma-
trix ? such that X^ = PX^ .

1 1 1 'l 2 l' 2-3 2


Here Z = \_Z^, Z^, Zg] 1 1 ,
^ = 1 2 2 and W 2 0-1
1 1 2 1 2 -3 3

www.TheSolutionManual.com
-1 4 1

Then P W'^Z = ^ 2 1 1 by (11.6).

0-3

SUPPLEMENTARY PROBLEMS

13. Let [x-i^. x^. x^. X4,y be an arbitrary vector of Vi(R). where R denotes the field of real numbers. Which of the
following sets are subspaces of K^(R)'?
(a) All vectors with Xj_= X2 = X3 = x^. (d) All vectors with x-^ = 1 .

with x^ = x^. x^=2x^. (e) All vectors with x^.Xr,.Xs.xj^ integral.


(6) All vectors
(c) All vectors with %4 = 0.

Ans. All except {d) and (e).

14. Show that [ 1.1.1.1 ]' and [2,3.3,2]' are a basis of the fi^(F) of Problem 2.

15. Determine the dimension of the vector space spanned by each set of vectors. Select a basis for each.

[1.1,1.1]'
[1,2,3.4.5]' [l.l.O.-l]'
[3,4,5,6]'
(a) [5.4,3,2,1]', (b) [1,2,3,4]' , ^'^
[1.2,3,4]'
[1.1,1.1,1]' [2.3.3,3]'
[1,0,-1,-2]'

Ans. (a), (b), {c). r= 2

16. (a) Show that the vectors X-^ = [l,-l,l]' and X^ = [3,4,-2]' span the same space as Y^ = [9,5,-1 ]' and
72= [-17,-11.3]'.
(6) Show that the vectors X^ = [ 1,-1,1 ]' and A'2 = [3,4,-2 ]' do not span the same space as Ti = [-2,2,-2]'
and K, = [4,3,1]'.

n. Show that if the set X^.X^ Xfe is a basis lor Vn(F). then any other vector Y of the space can be repre-
sented uniquely as a linear combination of X-^, X^ X^ .

k
Hint. Assume Y 51 aiXi = S biXi-
CHAP. 11] VECTOR SPACES 93

18. Consider the 4x4 matrix whose columns are the vectors of a basis of the Vi(R) of Problem 2 and a basis of
the \i(R) of Problem 3. Show that the rank of this matrix is 4; hence. V^R) is the sum space and l^(R), the

zero space, is the intersection space of the two given spaces.

19. Follow the proof given in Problem 8, Chapter 10, to prove Theorem HI.

20. Show that the space spanned by [l,0,0,0,o]', [0,0,0,0,1 ]', [l.O,l,0,0]', [0,0,1,0,0]' [l,0,0,l,l]' and the
space spanned by [l,0.0.0,l]', [0,1,0,1,0 ]', [o,l,-2,l,o]', [l,0,-l,0,l ]', [o,l,l,l,o]' are of dimensions
4 and 3, respectively. Show that [l,0,l,0,l]' and [l,0,2,0,l]' are a basis for the intersection space.

21. Find, relative to the basis Z^= [l,1.2]', Zg = [2.2,l]', Zg = [l,2,2]' the coordinates of the vectors
(a) [l.l.o]', (b) [1,0, l]', (c) [l.l.l]'.
Ans. (a) [-1/3,2/3,0]', (6) [4/3, 1/3, -1 ]', (c) [l/3, 1/3, ]'

22. Find, relative to the basis Zi=[o,l,o]', Z2=[i,l,l]', Z3=[3,2,l]' the coordinates of the vectors

www.TheSolutionManual.com
(a) [2,-1,0]', (b) [1,-3,5]', (c) [0,0,l]'.
Ans. (a) [-2,-1,1]', (6) [-6,7,-2]', (c) [-1/2, 3/2, -1/2 ]'

23. Let X^ and X^^ be the coordinates of a vector X with respect to the given pair of bases. Determine the
trix P such that Xj^ = PX^ .

Zi= [1,0,0]', Z2=[i,o,l]', Z3= [1,1, il- Zi = [0,1,0]', 2^ = [1.1,0]', 23 = [1.2.3]'


ea) ^^^
1^1 = [0,1,0]', [^2= [1,2,3]', If'3= [1,-1,1]' = [l,1.0]'.
!fi 1^2= [1,1.1]', 1^3= [1,2.1]'
2 41 r 1 -2"1
Ans. (a) P = ^ , (6) P = -1 2
2 2j L 1 ij

n
24. Prove: If Pj is a solution of AX = Ej . (j = 1,2 n). then 2 hjPj is a solution of AX = H. where H =
[''1.^2 KV-
Hint. H = h^Ej^ + h^E^+ +hnE^.

25. The vector space defined by all linear combinations of the columns of a matrix A
is called the column space
of A. The vector space defined by all linear combinations of the
rows of A is called the row space of ^.
Show that the columns of AB are in the column space of A and the rows of AB are in
the row space of fi.

26. Show that AX = H a system of m non-homogeneous equations in n unknowns, is consistent if and only
.
if the
the vector H belongs to the column space of A

1 1 1111
27. Determine a basis for the null 1-1,
space of (a) (6) 12 12
1 1 3 4 3 4
Ans. (a) [1,-1,-1]', (6) [ 1,1,-1, -i ]', [l, 2,-1, -2]'

28. Prove: (a) N^,>N^. N^^>N^ (b)N^,<N^^N^

Hint: (a) /V^g = n - r^g ; r^g < r^ and rg .

(b) Consider n~r^g . using the theorem of Problem 10.

29. Derive a procedure for Problem 16 using only column transformations on A = [X^. X^, y^ Y^]. Then resolve
Problem 5.
chapter 12

Linear Transformations

DEFINITION. Let X = [x,., x^, .... %]' and Y = lyx. y^. JnY ^^ ^^ vectors of l^(F), their co-
ordinates being relative to the same basis of the space. Suppose that the coordinates of X .

Y are related by

yi '^ll'^l "^ ^12 "^2 "^ T ^^Yi J^

+ df^Yj.^ri

www.TheSolutionManual.com
(12.1)

or, briefly, AX

where A = [a^.-] is over F. Then (12.1) is a transformation T which carries any vector X of
V^(F) into (usually) another vector Y of the same space, called its image.

If (12.1) carries X^ into F^ and X^ into Y^, then

(a) it carries kX^ into ^y^, for every scalar k, and

(fe) it carries aX^ + bX^ into aY^ + feFg. for every pair of scalars a and b. For this reason, the
transformation is called linear.

1 1 2

Example 1. Consider the linear transformation Y = AX 1 2 5 A" in ordinary space Vq(R).


13 3

'12"
'l 1 2 2

(a) The image of A" = [2,0,5]' is Y 1 2 5 = 27 [12,27.17]'.


1 3 3 5 17

'2'
'l 1 2 'x{

(6) The vector X whose image is y = [2,0.5]' is obtained by solving 1 2 5 *2 =

.1 3 3_ 3. 5

112 2 10 13/5
Since 12 5 10 11/5 .
X = [13/5,11/5,-7/5]'.
13 3 5 1 -7/5

BASIC THEOREMS. If in (12.1), X = [\,Q 0]'='i then Y = [ an, Ogi, ..., a^J' and, in general,
if ^ = - then Y = [a^j.a^j "nf]'-
Hence,

I. A linear transformation (12.1) is uniquely determined when the images (Y's) of the
basis vectors are known, the respective columns of A being the coordinates of the images
of these vectors. See Problem l.

94
CHAP. 12] LINEAR TRANSFORMATIONS 95

A linear transformation (12-1) is called non-singular if the images of distinct vectors Xi

are distinct vectors Y^. Otherwise the transformation is called singular.

II. A linear transformation (12.1) is non-singular if and only if A, the matrix of the
transformation, is non-singular. See Problem 2.

in. A non-singular linear transformation carries linearly independent (dependent) vec-


tors into linearly independent (dependent) vectors. See Problem 3.

Prom Theorem HI follows


k
IV. Under a non-singular transformation (12.1) the image of a vector space Vn(F) is a
vector space VjJ,F), that is, the dimension of the vector space is preserved. In particular,
the transformation is a mapping of ^(F) onto itself.

When A is non-singular, the inverse of (12.1)

www.TheSolutionManual.com
X = A'^y

carries the set of vectors Y^, Y^, ...,\ whose components are the columns of A into the basis
vectors of the space. It is also a linear transformation.

V. The elementary vectors ^ of \{F) may be transformed into any set of n linearly
independent n-vectors by a non-singular linear transformation and conversely.

VI. If Y = AX carries a vector X into a vector F, if Z = BY carries Y into Z, and if


IT = CZ carries Z into W, then Z = BY = {BA)X carries X into Z and IF = (CBA\X carries
X into IF.

VII. When any two sets of re linearly independent re-vectors are given,
there exists a
non-singular linear transformation which carries the vectors of one set into the
vectors of
the other.

CHANGE OF BASIS. Relative to a Z-basis, let 7^ = AX^, be a linear transformation of ^(F). Suppose
that the basis is changed and let X^ and Y^ be the coordinates of X^, and Y^ respectively rela-
tive to the new basis. By Theorem VIH, Chapter 11, there exists
a non-singular matrix P such
that X-^ = ?X^ and Yy, = PY^ or, setting ?~^ = Q, such that

X^ = QX^ and Y^ = QY^

Then Y^ = Q-% = Q-^X, = Q-^AQX^ = BX^


where

(12.2) 6 = Q-^AQ

Two matrices A and B such that there exists a non-singular matrix


Q for which B = Q'^AQ
are called similar. We have proved
vm. If Y^ = AX^ is a linear transformation of V^(F) relative to a given basis (Z-basis)
and Yjf = BX^ is the same linear transformation relative to another basis (IF-basis),
then
A and B are similar.

Note. Since Q = P"^, (12.2) might have been written as B = PAP-^. A study of similar matrices
will be made later. There we shall agree to write B = R'^AR instead of S = SAS'^ but
for no compelling reason.
96 LINEAR TRANSFORMATIONS [CHAP. 12

1 1 3

Example 2. Let Y = AX = 1 2 1 Z be a linear transformation relative to the -basis and let W^

1 3 2

[l.2,l]', W^ = [1,-1.2]', IFg = [1,-1,-1]' be a new basis, Given the vector X = [3,0,2]',
(a)

find the coordinates of its image relative to the W-basis. Find the linear transformation
(b)

Yjr = BXjf corresponding to V = AX. (c) Use the result of (b) to find the image ijf of Xy =
[1.3,3]'.
1 1 1 3 3

Write W = [W^.W^.W^] = 2 -1 -1 then W


-1
-
1 1 -2 3
9
1 2 -1 5 -1 -3

(a) Relative to the If -basis, the vector X = [3,0,2]' has coordinates Xff = W X = [l,l,l]'.

The image of ^ is Y = AX = [9,5,7]' which, relative to the IF-basis is Yf^ = W Y =

[14/3,20/9.19/9]'.

www.TheSolutionManual.com
36 21 -15
(b) Y w\ W^AX (W~^AW)Xjf = BXj^ 21 10 -11
-3 23 -1

36 21 -15 1 6

(c) Yj. = 21 10 -11 3 = 2 [6,2,7]'

-3 23 -1 3 7
L.
See Problem 5,

SOLVED PROBLEMS

1. (a) Set up the linear transformation Y = AX which carries E^ into Y^ = [1,2,3]', E^ into [3,1,2]',
and 3 into Fg = [2,1,3]'.
{h) Find the images of li= [1,1,1]', I2 = [3,-1,4 ]', and ^3 = [4,0,5]'.
(c) Show that X^ and Zg ^-^^ linearly independent as also are their images.
(d) Show that Xi, X^, and Z3 are linearly dependent as also are their images.
1 3 2

(a) By TheoremI, A = [y^, Fg, K3] ; the equation of the linear transformation is Y =^ AX 2 1 1

3 2 3

13 2

(6) The image of X^= [l,l.l]' is Y-^ 2 1 1 [6.4,8]'. The image of -Yg is Ys = [8,9,19]' and the

3 2 3

image of Xg is K3 =[ 14.13,27]'.

1 3 6 8

(c) The rank of [A'^.Xg] = 1 -1 is 2 as also is that of [^i, Kg] 4 9 Thus, X^ and X^ are linearly

_1 4 8 19

independent as also are their images

(rf) We may compare the ranks of \_X^. X^. X^] and {Y^.Y^.Yq\; however, X^ = X^ + X^ and Fg = Fi+Zg so that
both sets are linearly dependent.
CHAP. 12] LINEAR TRANSFORMATIONS 97

2. Prove: A linear transformation (12.1) is non-singular if and only if A is non-singular.

Suppose A is non-singular and the transforms of X^ ^ X^ are Y = AX^ = AX^. Then A{X-i^-Xi) = and
the system of homogeneous linear equations AX = Q has the non-trivial solution X = X-^-X^. This is pos-
sible if and only if .4| = o, a contradiction of the hypothesis that A is non-singular.
|

3. Prove: A non-singular linear transformation carries linearly independent vectors into


linearly in-
dependent vectors.

Assume the contrary, that is, suppose that the images Yi = AXi. (i = 1,2 p) of the linearly independ-
ent vectors X-^.Xr, Xp are linearly dependent. Then there exist scalars s-^.s^ sp , not all zero, such that

P
^ H'^i = ^lYi + s^Y2+ + s^Yfy
^^
=
1=1 f^

P
'
.|^ ^(-4^1) = A(Sj^X^+ S2X^+ + spXp) =

www.TheSolutionManual.com
Since A is non -singular, s^X^ + s^X^ + -. + spXp = But this is contrary to the
o. hypothesis that the Xi are
linearly independent. Hence, the Y^ are linearly independent.

4. A certain linear transformation F = iZ carries Z^ = ]'


[ 1,0,1 into [2,3,-1]', ^^s =[ 1.-1.1 ]' into
[3.0,-2]', and I3
tion of the transformation.
=[
1.2,-1]' into [-2,7,-1]'. Find the images of ,! fj^, 4 and write the equa-

a + b + c = I

Let aX.i^ + bXr,+ cXg= E^; then -6 + 2c = and a = -^, 6 = 1, c =


i.
Thus, E^= -hX^ + X^ + ^Xg
a + b - c =

and its imageis Y, = -^2, 3,-1 ]'+ [3,0,-2]' +


H-2.7.-1 ]' = [l,2,-2]'. Similarly, the image of 5 is
Y2 = 1-1.3,1 J and the image of Eg is K, = [l,l,l ]'. The equation of the transformation is

1 -1 1

Y = [Y^.Y^.Yg]X 2 3 1

-2 1 1

1 1 2
5. If Yy = AXf, = 2 2 1 X^ is a linear transformation relative to the Z-basis
of Problem 12, Chap-
3 1 2

ter 11, find the same transformation 7^ = BX^ relative to the f'-basis of that problem.

1 4 1"

From Problem 12, Chapter 11, = PX^ = i


Xf^r 2 1 1 ^z- Then
--3 0_

-1 1 -1"

P''X,., -1 =
^!i Q^w
2 1 :d

-2 14 -6
and PY Q-^AX, \^
Q^^Q^r., 7 14 9 X,
3
-9 3
98 LINEAR TRANSFORMATIONS [CHAP. 12

SUPPLEMENTARY PROBLEMS
6. In Problem 1 show: (a) the transformation is non-singular, (b) X = A Y carries the column vectors of A into
the elementary vectors.

7. Using the transformation of Probleml, find (a) the image of Af = [1,1,2]', (h) the vector X whose image is
[-2.-5.-5]'. -4ns. (a) [8,5.11]', (b) [-3.-1. 2]'

8. Study the effect of the transformation Y = IX. also Y = klX.

9. Set up the linear transformation which carries E^ into [l,2,3]', 5 into [3.1. 2]', and 3 into [2,-1,-1]'.
Show that the transformation is singular and carries the linearly independent vectors [ 1,1,1 ]' and [2,0,2]'
into the same image vector.

10. Suppose (12.1) is non-singular and show that if X-^.X^. .... X^ are linearly dependent so also are their im-
ages Y^.Y^ Y^.

www.TheSolutionManual.com
11. Use Theorem III to show that under a non-singular transformation the dimension of a vector space is un-
changed. Hint. Consider the images of a basis of P^ (F).

1 1

12. Given the linear transformation Y 2 3 1 X. show (a) it is singular, (6) the images of the linearly in-

-2 3 5

dependent vectors ^i=[l,l,l]', JVg = [2.I.2 ]', and A:3=[i.2,3]' are linearly dependent, (c) the image
of V^{R) is a Vs(R).

1 1 3

13. Given the linear transformation Y 1 2 4 X. show (a) it is singular, (6) the image of every vector of the
113
2 1
V, (R) spanned by [ 1.1,1 ]' and [3.2.0 ]' lies in the K,(fl) spanned by [5,7.5]'.

14. Prove Theorem Vn.


Hint. Let Xi and Yi, (j = 1,2 n) be the given sets of vectors. Let Z = AX carry the set Xi into E{ and
Y = BZ carry the E^ into Kj.

15. Prove: Similar matrices have equal determinants.

12 3

16. Let Y = AX = 3 2 1 A^ be a linear transformation relative to the -basis and let a new basis, say Z, =

_1 1 1_

[1,1,0]', ^2 = [1.0.1]', Z3 = [1.1.1]' be Chosen. Let AT = [1,2,3]' relative to the E-basis. Show that
(a) Y = [14,10,6]' is the image of A^ under the transformation.
(6) X. when referred to the new basis, has coordinates X^, = [-2,-1.4]' and Y has coordinates Y^ = [8,4,2]'

1 0-1"
(c) X^ = PX and Y^ = PY. where P 1 -1 iZ^, Zf2, Z^i
-1 1 1

(d) Yy = Q ^AQX. , where Q =P ^.

0"
1 1

17. Given the linear transformation 7^ 1 1 Xjf . relative to the IF-basis: W^= [o,-l,2]', IK,= [4,1,0]'

1 1
.

CHAP. 121 LINEAR TRANSFORMATIONS 99

IK5 = [-2.0,-4]'- Find the representation relative to the Z-basis: Z^ = [i,-l,l]', Z2 = [l,0,-l]', Z3=[l,2,l]'.

-10 3

Am 2 2-5
-10 2

18. If. in the linear transformation Y - AX. A is singular, then the null space of A is the vector space each of
whose vectors transforms into the zero vector. Determine the null space of the transformation of
123"
(a) Problem 12. (6) Problem 13. (c) Y 2 4 6 X.

3 6 9

-4ns. (a) ^ (R) spanned by [l.-l.l]'


(6) I^\R) spanned by [2,l,-l]'
(c) I/^() spanned by [2,-l,o]' and [3,0,-1 ]'

www.TheSolutionManual.com
19. If y = AX carries every vector of a vector space I^ into a vector of that same space, v^ is called an In-
variant space of the transformation. Show that in the real space V^{R) under the linear transformation

1 -f
(a) F = 12 \ X. the \l^ spanned by [l.-l.o]', the V^ spanned by [2,-1.-2]', and the V^ spanned by
2 2 3

[1.-1,-2]' are invariant vector spaces.

2 2 1

(6) y = 13 1 X. the Vq spanned by [l.l.l]' and the ^ spanned by [l,0,-l]' and [2,-l,0]' are invariant
1 2 2

spaces. (Note that every vector of the V^ is carried into itself.)

y =
10
(c) X, the li spanned by [l,l,l,l]' is an invariant vector space.
1

-14-6 4

20. Consider the linear transformation Y = PX :

yi (i = 1.2 n) in which /1./2 /^ is a permuta-


tion of 1, 2 n.
^h

(a) Describe the permutation matrix P


(b) Prove: There are n! permutation matrices of order n.

(c) Prove: If Pj and fg are permutation matrices so also are P3 = P-lP2 and P^^P^Pt.
(d) Prove: If P is a permutation matrix so also are P' and PP' = /.

(e) Show that each permutation matrix P can be expressed as a product of a number of the elementary col-
umn matrices K^2, ^28 ^n-T-n-
(/) Write P = [^^, E^^. ^^] where ii, ij % is a permutation of 1,2 n and ^ . are the ele-

mentary n-vectors. Find a rule (other than P~ = P') for writing P~ For example, when n = 4 and .

P = [s, 1, 4, 2], then P'^ = [2. '4. 1. 3]; when P = [E^ E^. 1, 3], then P~^ = [g, 2, 4, 1].
chapter 13

Vectors Over the Real Field

INNER PRODUCT. In this chapter all vectors are real and l^(R) is the space of all real re-vectors.

If Z = [%,%, ..., x^y and y = [ji, 72, , YnY are two vectors of l^(R), their inner product is
defined to be the scalar

(13.1) X-Y = x^y^ + x^j^ + + XnJn

Example 1. For the vectors X^=\\.\,\\', ^2= [2.1,2]', ^3 = [l.-2.l]':

www.TheSolutionManual.com
(a) X^-X^ = 1-2 + 1- 1 + 1- 2 = 5

(6) X-^-X^ = 1-1 + l(-2) +1-1 =

(c) X.yX^ = 1-1 + 1-1 + 1-1 = 3

(rf) ^1-2^:2 = 1-4 + 1-2 + 1-4 = 10 = 2(^1-^2)

Note. The inner product is frequently defined as

(13.1') X.Y = X'Y = Y'X


The use of X'Y and Y'X is helpful; however, X'Y and Y'X are 1x1 matrices while
X-Y is the element of the matrix. With this understanding, (13.1') will be used
here. Some authors write Z|y for X-Y In vector analysis, the inner product is call- .

ed the dot product.

The following rules for inner products are immediate

(a) X^-X^ = X^-X^, X^-hX^ = HX^-X^)

(13.2) ib) X^-(X^ + Xs) = (X^+Xs)-X^ = X^-X^ + X^-X^

(c) (X^+ X^) -


(Xs+ X^) = X^-X^ + X^-X^ + X^-Xs + X^-X^

ORTHOGONAL VECTORS. Two vectors X and Y of V^iR) are said to be orthogonal if their inner
product is 0. The vectors Z^ and Xg of Example 1 are orthogonal.

THE LENGTH OF A VECTOR X of ^i(R), denoted by \\ X\\ , is defined as the square root of the in-
ner product of X and X thus. ;

(13.3) II ^11 = \/ X-X = \/ xl + xl+ --- + X.

Examples. Prom Example 1(c), \\ X^\\ = V3 .

See Problems 1-2.

100
CHAP. 13] VECTORS OVER THE REAL FIELD 101

Using (13.1) and (13.3), it may be shown that

(13.4) x.Y = iil|z+y||' - WxW - ||y|h

A vector X whose length is ||z|| = 1 is called a unit vector. The elementary vectors E^
are examples of unit vectors.

THE SCHWARZ INEQUALITY. If X and Y are vectors of ^(/?), then

(13.5) \X-Y\ < \\x\\.\\y\\

that is, the numerical value of the inner product of two real vectors is at most the product of
their lengths.

See Problem 3.

www.TheSolutionManual.com
THE TRIANGLE INEQUALITY. If X and Y are vectors of )/(/?), then

(13-6) l!^+yll < IUII + ||y||

ORTHOGONAL VECTORS AND SPACES. If X^, X^ X^ are m<n mutually orthogonal non-zero
n-vectors and CiZi + ^2^2+ ...+ c^^ =
if 0, then for i = 1,2 m. (c.lX^+ 0^X2+
+ c^X^) Xi =
0. Since this requires 0^ = for i = 1,2 m , we have
I. Any set of m< n mutually orthogonal non-zero re-vectors is a linearly independent
set and spans a vector space Ijf(/?).

A vector Y is said to be orthogonal to a vector space Vn(R) if it is orthogonal to every


vector of the space.

II. If a vector Y is orthogonal to each of the re-vectors X^, X^ X^, it is orthogonal


to the space spanned by them.
See Problem 4.

HI. If Vn(R) is a subspace of I^(/?), k>h. there exists at least one vector X of V^CR)
which is orthogonal to V^\R).
See Problem 5.

Since mutually orthogonal vectors are linearly independent, a vector space V'^(R), m>0,
can contain no more than m mutually orthogonal vectors. Suppose we have found r<m mutually
orthogonal vectors of a V^(R). They span a V^iR), a subspace of V*(R), and by Theorem HI,
there exists at least one vector of V^(R) which is orthogonal to the I^(/?). We now have
r+l
mutually orthogonal vectors of l^(R) and by repeating the argument, we show

IV. Every vector space V^(R), m>0, contains m but not more than m mutually orthog-
onal vectors.

Two vector spaces are said to be orthogonal if every vector of one is orthogonal to every
vector of the other space. For example, the space spanned by X^ = [1,0,0,1]' and X^ =
[0,1,1,0]' is orthogonal to the space spanned by X^ = [ 1,0,0,-1]' and X^ = [0,1,-1,0 ]'
since (aX^ + bXr,) (0X3+ dX^) = for all a,b,c,d.
102 VECTORS OVER THE REAL FIELD [CHAP. 13

k
V. The set of all vectors orthogonal to every vector of a given Vn,(R) is a unique vec-
tor space
^ Vn'^(R).
n J
\
ggg Problem 6.

We may associate with any vector ^ 7^ o a unique unit vector U obtained by dividing the
components of X by \\X\\ This operation is called normalization. Thus, to normalize the vector
.

X = [2,4,4]', divide each component by ||^|| = V4 + 16 + 16 = 6 and obtain the unit vector
[1/3,2/3.2/3]'.

A basis of Vn(R) which consists of mutually orthogonal vectors is called an orthogonal ba-
sis of the space; if the mutually orthogonal vectors are also unit vectors, the basis is called a

normal orthogonal or orthononnal basis. The elementary vectors are an orthonormal basis of ^(R).
See Problem 7.

THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS. Suppose X^, X^ ! are a basis of


V^(R). Define

www.TheSolutionManual.com
y, = X,

^g'^3 V ^1-^3 V
V - Y
^S - ^3 Y V ^ V V ^

-'w-l Xi "
ll*^!
-'m - '^ y Y %-l y y' "'l

Then the unit vectors Gj = ^ , (i = l,2,...,m) are mutually orthogonal and are an orthonormal
1^11

basis of F(/?).

Example 3. Construct, using the Gram-Schmidt process, an orthogonal basis of V2(R). given a basis
A'i= [1,1,1]', a:2= [1,-2,1]', Xs=[i.2.zY.

(i) Y^ = X^ = [1.1.1]'

(ii) Y^ = X^- ~^Yi = [1,-2,1]' - ^1-1 = [1,-2.1]'

(ill) ^3 = ^3 - ^Y, - ^Y, = [1,2,3]' - -^y, - ^[1,1.1]' = [-1,0,1]'

The vectors G^ = -jpj = [l/\/l. l/\/3, l/^/3l ]',

G2 = -^, = [l/Ve, -2/V6, l/Ve]' and Gg = -ii- = [-I/V2, 0, 1/\A2]'

are an orthonormal basis of ^(fl). Each vector G^ is a unit vector and each product G^ Gj =
0. Note that Fg = -^2 here because X.^ and A^2 a^re orthogonal vectors.

See Problems 8-9.


CHAP. 13] VECTORS OVER THE REAL FIELD 103

Let Zi, ^2 ^m be a basis of a f^(/?) and suppose that X^, X^ Xg,(l< s< m), are
mutually orthogonal. Then, by the Gram-Schmidt process, we may obtain an orthogonal basis
y^, Yg }^ of the space of which, it is easy to show, Yj^ = X^, (i = 1,2 s). Thus,

VI. If X-i^, X2, , Xs,(l< s<m), are mutually orthogonal unit vectors of a Vn(R), there
exist unit vectors X^^^, ^m
X^^.^- i" the space such that the set X^, X^, ...,X^ is an
orthonormal basis.

THE GRAMIAN. Let X^, X^ Z>, be a set of real n-vectors and define the Gramian matrix

A^ A^
' A^ A2
. . .
A-L A>) A]^ A^ A^ A2 . . . X-^ A,

A2 Aj X2 A2
' ' . . . A2 ' Ajj A2 A^ A2 A2 X^Xp
(13.8) G =

Xp- X^ Xp- X2 Xp- Xp XpX-i XpX^ XpXp

www.TheSolutionManual.com
... ...

Clearly, the vectors are mutually orthogonal if and only if G is diagonal.

In Problem 14, Chapter 17, we shall prove

VII. For a set of real re-vectors Z^, X^, .... Xp, |


G >0. The
|
equality holds if and only
if the vectors are linearly dependent.

ORTHOGONAL MATRICES. A square matrix A is called orthogonal if

(13.9) AA' = A'A = I

that is, if

(13.9') A' = A'

Prom (13.9) it is clear that the column vectors (row vectors) of an orthogonal matrix A are
mutually orthogonal unit vectors.

l/\/3 l/x/6 -1/^2


Example 4. By Examples, A l/\[Z -2/\/6 is orthogonal.

l/\/3 \/\[& l/x/2

There follow readily

VIII. If the real re-square matrix A is orthogonal, its column vectors (row vectors) are
an orthonormal basis of V^(R), and conversely.

IX. The inverse and the transpose of an orthogonal matrix are orthogonal.

X. The product of two or more orthogonal matrices is orthogonal.

XI. The determinant of an orthogonal matrix is 1.

ORTHOGONAL TRANSFORMATIONS. Let

(13.10) Y = AX
104 VECTORS OVER THE REAL FIELD [CHAP. 13

be a linear transformation in Xi(R) and let the images of the n-vectors I^ and
X^ be denoted by
Yi and Y^ respectively. Prom (13.4) we have

x^-x, = u\\x,^x,f - \\X,f - \\X,f]


and
Y,-Y, = k\\\Y,^Y,f - II
y, Y,f]

Comparing right and left members, we see that if (13.10) preserves lengths it preserves inner
products, and conversely. Thus,

XII. A linear transformation preserves lengths if and only if it preserves inner product s.

A linear transformation Y=AX is called orthogonal if its matrix A is orthogonal. In Prob-


lem 10, we prove

XIII. A linear transformation preserves lengths if and only if its matrix is orthogonal.

www.TheSolutionManual.com
l/\/2 l/v/6 -I/V2
Examples. The linear transformation Y = AX = l/\/3 -2/\/6 X ii3 orthogonal. The image of
I/V3 l/\/6 l/v^
X = [a,i,c]'is

" 26 a
y + _^ _ _1_ _f b c "]/

and both vectors are of length yja^ + b^ + c^ .

XIV. If (13.10) is a transformation of coordinates from the -basis to another, the Z-


basis, then the Z-basis is orthonormal if and only if A is orthogonal.

SOLVED PROBLEMS

1. Given the vectors Zj. = [1,2,3]' and X^ = [2,-3,4]', find:


(o) their inner product, (b) the length of each.

2
(a) X^-X^ = XiX^ = [1.2.3] -3 = 1(2) + 2(-3) + 3(4) = 8

(6) \\X^f = X^.X^ = X[X^ = [1,2,3] = 14 and \\Xji = vTi

lA-jf = 2(2) + (-3)(-3) + 4(4) = 29 and \\X^\\ = V29


.

CHAP. 13] VECTORS OVER THE REAL FIELD 105

2. (a) Show that 1 = [1/3, -2/3, -2/3 ]' and Y ^ [2/3.-1/3, 2/3]' are orthogonal.
(b) Find a vector Z orthogonal to both X and Y.

2/3
(a) X-Y = Xy = [1/3,-2/3,-2/3] -1/3 = and the vectors are orthogonal.
2/3

1/3 2/3 O"

(6) Write [A:,y,o] -2/3 -1/3 and compute the cofactors -2/3. -2/3, 1/3 of the elements of the
-2/3 2/3 0_

column of zeros. Then by (3.11) Z = [-2/3, -2/3, 1/3]' is orthogonal to both A: and K.

3. Prove the Schwarz Inequality: If X and Y are vectors of Vn(R), then \X-Y\ < ||A'||.||y||

Clearly, the theorem is true if A" or F is the zero vector. Assume then X and Y are non-zero vectors.

www.TheSolutionManual.com
that
If a is any real number,

llaA^ + yf = (aX + Y)-(aX + Y)

= [ax^ + y^.ax^+y^ axn+yn]-[ax^+y^. ax^ + y^ axn+jnY


= (a x^ + 2ax^y^ + yj^) + (a^x^ + Zax^y^ + y^) + + (o^^ + 2a%j^ + y^ )

= a^xf + 2aX.Y + \\Yf >

Now a quadratic polynomial in a is greater than or equal to zero for all real values
of o if and only if its
discriminant is less than or equal to zero. Thus,

i(X.Yf - i\\xf- \\Yf <

and \x-y\ < m-\\Y\

4. Prove: If a vector Y is orthogonal to each of the n-vectors X^, X^ X^. it is orthogonal to the
space spanned by these vectors.

Any vector of the space spanned by the X's can be written as a^X^+a^X^-^
^-c^Xtji . Then

{aiXi + a^X^+... + a^X^)-Y = a^X^-Y + a^X^-Y + + a^X^-Y =

Since Xi-Y = 0, (i = 1,2 m). Thus, Y is orthogonal to every vector of the space and by definition is
orthogonal to the space. In particular, if Y is orthogonal to every vector of a basis of a vector space, it is
orthogonal to that space.

5. Prove: If a ^() is a subspace of a V^(R), k>h. then there exists at least


one vector A" of v\R)
which "
is orthogonal to the f^(fi).

^^* -^I'-^s ^h be a basis of the FV). let X^,^^ be a vector in the vM) but not in the P^(R) and
consider the vector

<*) X = a^X^ + a^X^ + ... + aj^X,^ + a^^^Xf,^^

The condition that X be orthogonal to each of X^.X^ consists of h homogeneous linear equations
Xf,
106 VECTORS OVER THE REAL FIELD [CHAP. 13

a^X^-X^ + a^X^-X^ + ... + af,Xh-X-, + a,,^^Xf^^^. X^ = o

a^X^.X^ + a^X^.X^ + ... + a^Xh-X^ + af^^^Xf,^^- X^ = o

In the ft + l unknowns a^.a^ ay,^^. By Theorem IV, Chapter 10, a


non-trivial solution exists. When these
values are substituted in (i), we have a non-zero (why?) vector
X orthogonal to the basis vectors of the kA)
and hence to that space.

^ ""^^ ^ ^^^ ^^''^'''^ orthogonal to every vector of a given V^{R) is a unique


Vn-k'J'^^ vector space

^^* -^i-^s A-fe be a basis of the V^{R). The ..-vectors X orthogonal to each of the Jt,- satisfy the
system of homogeneous equations

www.TheSolutionManual.com
^'> X^.X=o.X^.X=Q Xk.X=0
Since the X^^ are linearly independent, the coefficient matrix of the
system (i) is of rank k ; hence, there are
n-k linearly independent solutions (vectors) which span
a K"-\/?). (See Theorem VI, Chapter 10.)
Uniqueness follows from the fact that the intersection space of the
V^iR) and the V^'^(R) is the zero-
space so that the sum space is Xi{R).

7. Find an orthonormal basis of V^(R), given X = [1/^6, 2/x/6, l/x/6]'.

Note that A- is a unit vector. Choose Y = [l/Vl, -IA/2 ]'


0, another unit vector such that X -Y =
Then, as in Problem 2(a), obtain Z = [l/^Jl. -l/VI, complete the
1A/3]' to set.

8. Derive the Gram-Schmidt equations (13,7).

^^' ^^' -^2 >^n be a given basis of V^(R) and denote by Y^. Yr, Y^ the set of mutually orthogonal
vectors to be found.

(a) Take Y-^^ X^.

(b) Take Y^ = Xr, + aY-^ . Since Y-^ and Y^ are to be mutually orthogonal.

Y^.Y^ = Y^-X^ + Y^-aY^ = Y^-X^ + aY^-Y^ =

Y . X Y X
and o = - -i-2 . Thus. K, = X^ ~ ^1^
Xo- ^Sill Y^ .

(c) Take Y^ = X3 + aYr, + bY^ . Since Y^. K,, Y^ are to be mutually orthogonal,

yi-Yj, = Yi-Xg + aY^.Y^ + bY^-Y^ = Y^- X^ + bY^-Y^ =


and
Y^.Y^ = Y^-Xs + aY^-Y^ + bY^.Y^ = Y^-X^ + aY^-Y^ =

Then a = _ ^ , 6 = - ^ll^ , and 7, = Z3 - ^ki^ K _ 2W^ v

(d) Continue the process until K, is obtained.


CHAP. 13] VECTORS OVER THE REAL FIELD 107

9. Construct an orthonormal basis of Fg, given the basis Z^ = [2,1,3]', X^ = [1,2,3]', and Xg = [1,1,1]'.

Take Y^ = X^ = [2.I.3Y. Then

K, = X^ - ^^i-i = [1,2.3]' - ii[2,1.3]' = [-6/7.15/14,3/14]'

V _ y "2 ^3 V ^1 '"^3 V

[1.1.1]' - ^
9 7 14 I4J 7 [_3 3 3j

Normalizing the y-s. we have [2/\/l4. l/i/Ii, 3/v'l4 ]', [-4/\/42, 5/V42. l/\/42]', [l/Vs. 1/V3. -1/^3 ]'

as the required orthonormal basis.

www.TheSolutionManual.com
10. Prove: A linear transformation preserves lengths if and only if its matrix is orthogonal.

Let y^. Yq be the respective images of X^,X2 under the linear transformation Y = AX.

Suppose A is orthogonal so that A/l = /. Then

(1) Y^-Y^ = YiY^ = (X'^A')(AX^) = X\X^ = X^^-X^

and, by Theorem XII lengths are preserved.

Conversely, suppose lengths (also inner products) are preserved. Then

Y^-Y^ = X{(A'A)X2 = X^X^. A'A=1


and A is orthogonal.

SUPPLEIVIENTARY PROBLEMS

H. Given the vectors A'l = [l,2,l ]'. .Yg = [2.I.2]', vYg = [2.1,-4 ]'. find:
(a) the inner product of each pair,
(6) the length of each vector.
(c) a vector orthogonal to the vectors X^, X^ ; X.^, Xq .

Ans. (a) 6, 0, -3 (6) Vb, 3, V^ (c) [l,0,-l ]', [3.-2.1]'

12. Using arbitrary vectors of ^3(7?). verify (13.2).

13. Prove (13.4).

14. Let a: = [1.2.3,4]' and Y = [2,1,-1.1]' be a basis of a V^(R) and Z = [4,2,3,l]' lie in a V^{R) containing X
and y.
(a) Show that Z is not in the ^^{R).
(b) Write W = aX + bY + cZ and find a vector W of the V^{R) orthogonal to both X and Y.

15. (a) Prove: A vector of I^(ft) is orthogonal to itself if and only if it is the zero vector.

(6) Prove: If X-^. X^. Xg are a set of linearly dependent non-zero ^-vectors and Z^ Xg = Xj_-Xq=
if
0, then
X^ and Xq are linearly dependent.
108 VECTORS OVER THE REAL FIELD [CHAP. 13

16. Prove: A vector X is orthogonal to every vector of a P^"(/?) If and only if it is orthogonal to every vector of
a basis of the space.

17. Prove: If two spaces V^iR) and t^(fl) are orthogonal, their intersection space is I^(if).

18. Prove: The Triangle Inequality.


Hint. Show that !|Z + 7 p < (||.Y|| + ||y||)^, using the Schwarz Inequality.

19. Prove: ||
A" + 7 1|
= |1a:|1 + \y\ if and only if X and Y are linearly dependent.

20. Normalize the vectors of Problem 11.


Ans. [l/Ve, 2/V6. l/\/6]', [2/3, 1/3, 2/3]', [2/V2i, l/V^I, -4/V2i ]'

21. Show that the vectors X. Y .Z of Problem 2 are an orthonormal basis of V^{R).

22. (o) Show that if X^. X^ X^ are linearly independent so also are the unit vectors obtained by normalizing

www.TheSolutionManual.com
them.
(6) Show that if the vectors of (a) are mutually orthogonal non-zero vectors, so also are the unit vectors
obtained by normalizing them.

23. Prove: (a) If A is orthogonal and |


.4 |
= 1, each element of A is equal to its cofactor in |
^4 |
,

(6) If .4 is orthogonal and \a\ = -i, each element of^ is equal to the negative of its cofactor in |^| .

24. Prove Theorems VIII, IX, X, XI.

25. Prove; If .4 and B commute and C is orthogonal, then C'AC and C'BC commute.

26. Prove that AA' (or A'A), where A is n-square. is a diagonal matrix if and only if the rows (or columns) of A
are orthogonal.

27. Prove: UX and Y are n-vectors, then XY'-YX' is symmetric,

28. Prove: If X and Y are n-vectors and A is n-square, then X-(^AY) = {A'X) -Y

n
29. Prove: If A"]^, Z^ A' be an orthonormal basis and if A^ = 2 c^X^, then (a) X-X^ = c^, (i = 1,2 n);
^=1
(b)X.X = cl+4+... + cl

30. Find an orthonormal basis of VsiR). given (a) X^ = [ 3/VT7, -2/\/T7, 2/1/17]'; (6) [3,0,2]'
Ans. (a) X^. [0.l/^^2.l/\f2Y, [-4/\^, -3/^34, 3/\/34]'
(6) [3/VI3, 0, 2/\/l3]', [2/VI3. 0, -3/VT3]', [0,1,0]'

31. Construct orthonormal bases of V^i^R) by the Gram-Schmidt process, using the given vectors in order:
{a) [1,-1,0]', [2,-1,-2]', [1,-1,-2]'
(6) [1.0,1]', [1,3,1]', [3.2,1]'
(e) [2,-1,0]', [4,-1,0]', [4,0,-1]'
Ans. (a) [iV^. -iV^, 0]', [V^/6, V2/6, -2v'V3]', [-2/3,-2/3,-1/3]'
(b) [^\/~2, 0, iV2]', [0,1,0]', [iV^. 0, -iV2]'
(c) [2V5/5, -V5/5, 0]', [^/5/5.2^/5/5.oy, [o,0,-l]'

32. Obtain an orthonormal basis of I^(ft), given ^"1 =[ 1,1,-1 ]' and ATg = [2,1,0 ]'.

Hint. Take Y^ = X^, obtain Y^ by the Gram-Schmidt process, and Y^ by the method of Problem 2(6).

Ans. [\A3/3, V'3/3, -V^/3]', [5\/2. 0, 2\A2]', [\/6/6, -\/6/3, -\/6/6 ]'
CHAP. 13] VECTORS OVER THE REAL FIELD 109

33- Obtain an orthonormal basis of V^iR). given X-^ = [7,_i,_i ]'.

34. Show in two ways that the vectors [l.2,3,4]', [l. -1.-2, -3]', and [5,4,5,6]' are linearly dependent.

35. Prove: If A is skew-symmetric. and I + A is non-singular, then B = (I -A)(I + A)~^ is orthogonal.

36. Use Problem 35 to obtain the orthogonal matrix S ,


given

"
12
s"!
(a) A =
-5 oj'
(b) A 10 3

2-3
"5-14 2
-12 -51
Ans. -10 -5 -10
(a)
^ 5 -I2J'
(b)

10 2 -11

www.TheSolutionManual.com
37. Prove: If .4 is an orthogonal matrix and it B = AP , where P is non-singular, then PB^ is orthogonal.

38. In a transformation of coordinates from the -basis to an orthonormal Z-basis with matrix P. Y = AX be-
comes 71 = P^^APX^ or 7^= BX^ (see Chapter 12). Show that if A is orthogonal so also is B. and con-
versely, to prove Theorem XIV.

39. Prove: If 4 is orthogonal and / + .4 is non -singular then B = (I - A) (I + A)'"^ is skew-symmetric.

40. Let X = [xj^.x^.xsY and Y = [yi.yQ.ygY be two vectors of VsiR) and define the vector product, X xY , of
X2 Jl 3 ys ^1 yi
^ and y as Z = ZxF = [21, 25, 23]' where 21 = , Z2 = . Zg = After identifying
^3 ys % yi 2 yi
the z^ as cofactors of the elements of the third column
mn of X^, Y^. ], estciblish:

(a) The vector product of two linearly dependent vectors is the zero vector.

(6) The vector product of two linearly independent vectors is orthogonal to each of the two vectors.
(c) XxY = -(YxX-)
(d) (kX)xY = k(XxY) = XxikY), k a scalar.

41. If W. X. Y. Z are four vectors of V^{R), establish:

(a) X x{Y+Z) = XxY + XxZ


(b)X-(YxZ) Y-(ZxX) = = Z-(XxY) = \XYZ\
W-Y W-Z
(c) (WxX)-(YxZ) =
X-Y X-Z

(d) (XxY)-(XxY) =
X-X X-Y
Y-X Y-Y
chapter 14

Vectors Over the Complex Field

COMPLEX NUMBERS. x and j are real numbers and i is defined by the relation j^ = 1, z = x^iy
If

is called a complex number. The real number x is called the real part and the real number y is
called the imaginary part of x + fy.

Two complex numbers are equal if and only if the real and imaginary parts of one are equal
respectively to the real and imaginary parts of the other.

A complex number x + iy = and only x = y =

www.TheSolutionManual.com
if if 0.

The conjugate of the complex number z = x+iy is given by z = x+iy = xiy. The sum
(product) of any complex number and its conjugate is a real number.

The absolute value |z| of the complex number z = x+iy is given by |z| = \J z-z = \fW+y^
It follows immediately that for any complex number z = x + iy,

(14.1) |z| > \A and \z\ > \y\

VECTORS. Let X be an ra-vector over the complex field C. The totality of such vectors constitutes
the vector space I^(C). Since ^(R) is a subfield, it is to be expected that each theorem con-
cerning vectors of I^(C) will reduce to a theorem of Chapter 13 when only real vectors are con-
sidered.

If Z = [%, x^ XnY and y = [ji, 72 y^Y are two vectors of P^(C), their inner product
is defined as

(14.2) X-Y = XY = %yi + x^y^ + + XnJn

The following laws governing inner products are readily verified;

(a) I-y = y^ (/) X-Y+Y.X = 2?.{X.Y)

(b) (cX)-Y = c(X-Y) where R(X-Y) is the real part of X-Y.

(14.3) (c) X-(cY) = c(X-Y) (g) X-Y-Y-X = 2CiX-Y)

(d) X-(Y+Z) = X-Y + X-Z where C(Z-y) is the imaginary part of Z-F.

(e) (Y+Z)-X = Y-X+Z-X See Problem 1.

The length of a vector X is given by ||Z|| = \/ X-X = \/%% + %^2 + + XnXn-

Two vectors X and Y are orthogonal if X-Y = Y-X = Q.

For vectors of V^{C), the Triangle Inequality

(14.4) \\X+Y\\ < ||Z|| + ||i'll

and the Schwarz Inequality (see Problem 2)

(14.5) \X-Y\ < \\X\\-\\Y\\

hold. Moreover, we have (see Theorems I-IV of Chapter 13)

110
CHAP. 14] VECTORS OVER THE COMPLEX FIELD 111

I. Any set of m mutually orthogonal non-zero re-vectors over C is linearly independent


and, hence, spans a vector space I^(C).

II. If a vector Y is orthogonal to each of the re-vectors X^, X^ X^, then it is or-
thogonal to the space spanned by these vectors.

III. If V^iC) is a subspace of V^C), k>h, then there exists at least one vector Z in
V^(C) which is orthogonal to V^(C).

IV. Every vector space J^(C), m>0, contains m but not more than m mutually orthog-
onal vectors.

A
basis of I^(C) which consists of mutually orthogonal vectors is
called an orthogonal
basis. If the mutually orthogonal vectors are also unit
vectors, the basis is called a nonnal
or orthonormal basis.

www.TheSolutionManual.com
THE GRAM-SCHMIDT PROCESS. Let X,. X^ X^ be a basis for F^^CC). Define

Y, = X,

In - Xn

^2-^3 y Yi-Xs
(14.6) Yd = Xn
Y .Y
l2-l2 ^ Y,.Y,
Y^

Y y
'a-i'^m y
yn _
~
y
"Si T; y
^m-i'-'m-i
-'m-i
'y^
The unit vectors Gi 7. (i = 1.2 m) are an orthonormal basis for ^^(C).

V. If X^, X^ X^, (l<s<m), are mutually orthogonal unit vectors


of ^(0) there
exist unit vectors (obtained by the Gram-Schmidt
Process) Z,,,, X, ^ ;i' in the snaoe
such that the set Z Z^ Z, is an orthonormal basis.

THE GRAMIAN. Let X X^ Xp be a set of ;s- vectors with complex elements and define the
Gramian matrix.

Xi'Xi Xj^-Xq X^-Xp Aj X-^Xq


X-y X^ Xp
X^'X'i X^-X^ X^-Xp Ag X^ XqXq X2 xp
(14.7)

Xp-X^ Xp-X^ Xp-Xp XI X-^ XI x^ x^x.


p^p
Clearly, the vectors are mutually orthogonal if and
only if G is diagonal.
Following Problem 14, Chapter 17, we may prove
VI For a set of re-vectors X^. X^ Xp with complex elements, \G\ > 0. The equality
holds if and only if the vectors are linearly
dependent.
112 VECTORS OVER THE COMPLEX FIELD fCHAP. 14

UNITARY MATRICES. An n-square matrix A is called unitary if (AyA = A(A)'= I, that is if (i)'= A ^.

The column vectors (row vectors) of a unitary matrix are mutually orthogonal unit vectors.

Paralleling the theorems on orthogonal matrices of Chapter 13, we have


VII. The column vectors (row vectors) of an re-square unitary matrix are an orthonormal
basis of l^(C), and conversely.

VIII. The inverse and the transpose of a unitary matrix are unitary.

IX. The product of two or more unitary matrices is unitary.

X. The determinant of a unitary matrix has absolute value 1.

UNITARY TRANSFORMATIONS. The linear transformation

(14.8) Y = AX

www.TheSolutionManual.com
where A is unitary, is called a unitary transformation.

XI. A linear transformation preserves lengths (and hence, inner products) if and only
if its matrix is unitary.

XII. If Y = AX is a transformation of coordinates from the i'-basis to another the Z-


basis, then the Z-basis is orthonormal if and only if A is unitary.

SOLVED PROBLEMS

1. Given X = [l+j, -J, 1]' and Y = [2 + 3J, 1- 2J, i]',


(a) find X-Y and Y-X (c) verify X-Y + Y-X = 2R(X-Y)

(b) verify X-Y = Y-X (d) verify X-Y -Y-X = 2C(X-Y)

2+ 3/

(a) X-Y = X'Y = [l-i,f,i; l-2i = (l-i)(2+3!) + i{l-2i) + 1(0 = 7 4 3j

1 +i

Y-X = Y'X = [2-3i.l + 2i.-i] i 7 - 3i

(b) From (a): Y-X. the conjugate of Y-X. is 7 + 3i = X-Y.

(c) X-Y + Y-X = (7 + 3i) + (7-3i) = 14 = 2(7) = 2R(X-y)

(d) X-Y -Y-X = (7 + 3j) -(7-30 = 6j = 2(30 = 2C(X-Y)

2. Prove the Schwarz Inequality: \X-Y\ < p|| 11Y||.

As in the case of real vectors, the Inequality is true if X = or ^ = 0. When X and Y are non-zero
vectors and a is real, then

\\aX + Yf = {aX+Y)-{aX+Y) = a X-X + a{X-Y + Y-X) + Y-Y = a'Wxf + 2aR{X-Y) + \\Y\\ > 0.
CHAP. 14] VECTORS OVER THE COMPLEX FIELD 113

Since the quadratic function in a is non-negative if and only if its discriminant is non-positive,

R{X-Yf - \xf\Yf < and Ra-Y) < IkMyll


If X-Y = 0, then -F = fl(;5:-y) < ||x| X-Y X-Y
|;f c =

|
||y | . if ,i
O. define
\x-y\
Then R(cX-Y) < \\cX\\-\\y\\ = |c||U||-iyl| = |UM|y|| while, by (14.3(6)), R(cX-Y)
R[ciX-Y)] = \X-Y\. Thus, U-yl < U||-||y|| forall^andy.

3. Prove: B = (A)'A is Hermitian for any square matrix A.

(B) = \(A)'AV = (A'A) = (A)A = B and B is Hermitian.

4. If i = B + iC is Hermitian, show that (I)/l is real if and only if B and C anti-commute.

www.TheSolutionManual.com
Since B+iC is Hermitian, (B+iC)' = B + iC; thus.

(A)A = (B + iCUB+iC) = (B + iC)(B+iC) = B^ + i(BC + CB) - C^

This is real if and only if BC + CS = o or BC = -CB ; thus, if and only if B and C anti-commute.

5. Prove: If A is skew-Hermitian. then iA is Hermitian.

Consider B = ~iA. Since A is skew-Hermitian, (A)' = -A. Then

(S)' = (riA)' = t(Z)' = i(-A) = -iA = B


and S is Hermitian. The reader will consider the case B = iA.

SUPPLEMENTARY PROBLEMS
6. Given the vectors X^=[i.2i,iY. A:2 = [l, 1+ o]', and Xg = - 2]'
/, [i, 1 j,
(a) find X^-X^ and X^-X^,
(b) find the length of each vector Xi ,

(c) show that [l-i, -1, i-j]' is orthogonal to both X^ and X^.
(d) find a vector orthogonal to both X^ and Jf g

Ans. (a) 2-3i.-i (b) ^/e .


V^ V^ . (d) [-1 -5iJ .3 -i]

7. Show that [l + i.i.lV. [iA-i.oy, and [l -i. 1. 3j ]' are both linearly independent and mutually orthogonal.

8. Prove the relations (14.3).

9. Prove the Triangle Inequality.

10. Prove Theorems I-IV.

11. Derive the relations (14.6).


114 VECTORS OVER THE COMPLEX FIELD [CHAP. 14

12. Using the relations (14-6) and the given vectors in order, construct an orthonormal basis for iy.C) when the

vectors are
(a) [0,1,-1]', [l + j,l,l]'. [l-j,l,l]'
(b) [l + i.i.lY. [2.1-2i.2 + iY, [l-i.O.-iY.

Arts, (a) [O.^V2.-iV2]', [1(1 + 0. I. 2 ]'. [-^i, ^(1 + 0. ^d + ]'

ri 1 ^ I 1 T r 1 1 - 5t 3+ 3jy r1-i
7 -t -5
-5 -6 + 3i
-b 3j
y
(&) [2(1+0. 2..2 J. [3;^. 4^. 4^ J- L2V30 'aVso- 2\Am^

13. Prove: If /I is a matrix over the complex field, then A + A has only real elements and A- A has only pure
imaginary elements.

14. Prove Theorem V.

15. If A is n-square, show


(a) A'A is diagonal if and only if the columns of A are mutually orthogonal vectors.

www.TheSolutionManual.com
(b) A'A = / if and only if the columns of A are mutually orthogonal unit vectors.

16. Prove: If X and Y are n-vectors and A is re-square, then X-AY = A'X-Y

17. Prove Theorems VII-X.

18. Prove: If A is skew-Hermitian such that I + A is non-singular, then B = (l-A){I + A)~^ is unitary.

I 1 +t
r i+j1
19. Use Problem 18 to form a unitary matrix, given (
"^ (6) i

i
L-1+.- J- -1 + j i

_9 + 8i -10 -4i -16-18i


1 r-l + 2j -4-2i"|
Ans. (a) (b) -2-24J l + 12i -10 -4t
5 |_ 2-4i -2-jJ' 29
4-lOj -2-24J -9 + 8j

20. Prove: If ^ and B are unitary and of the same order, then AB and BA are unitary.

21. Follow the proof in Problem 10. Chapter 13. to prove Theorem XI.

22. Prove: If ^ is unitary and Hermitian. then A is involutory.

3+ i

J//3
1(1 +
2^
4 + 3i
23. Show that -k l/\/3 is unitary.
2y/T5

-i/y/3 -^
2Vl5_

-1
24. Prove: If A is unitary and if B = .4P where P is non-singular, then PB is unitary.

25. Prove: U A is unitary and I+A is non-singular, then B = (I - A) (,! + A)'^ is skew-Hermitian.
chapter 15

Congruence

CONGRUENT MATRICES. Two re-square matrices A and B over F are called congruent, , over F if

there exists a non-singular matrix P over F such that

(15.1) B = FAP
Clearly, congruence is a special case of equivalence so that congruent matrices have the same

www.TheSolutionManual.com
rank.

When P is expressed as a product of elementary column matrices, P' is the product in re-
verse order of the same elementary row matrices; that is, A and B are congruent provided A can

be reduced to B by a sequence of pairs of elementary transformations, each pair consisting of


an elementary row transformation followed by the same elementary column transformation.

SYMMETRIC MATRICES. In Problem 1, we prove


I. Every symmetric matrix A over F of rank r is congruent over F to a diagonal matrix

whose first r diagonal elements are non-zero while all other elements are zero.

Example 1. Find a non-singular matrix P with rational elements such that D - P'AP is diagonal, given

12 3 2

2 3 5 8

3 5 8 10

2 8 10 -8

reducing A to D, we use [A /] and calculate en route the matrix P' First we use
In .

ff2i(-2) and K2i(-2). then //gjC-S) and XgiC-S), then H^-ii-2) and K^^{-2) to obtain zeroes
in the first row and in the first column. Considerable time is saved, however, if the three
row transformations are made first and then the three column transformations. If A is not then
transformed into a symmetric matrix, an error has been made. We have

12 3 2 1 10 10
[AH^
2 3 5 8 1
c
0-1-1 4 -2100
3 5 8 10 1
'V-
0-1-1 4 -3010
2 8 10 -8 1 4 4-12 -2001
1 1 1 1

0-100 -2 1
c
0-100 -2 10
~\^
-1 -1 1 4 -10 4 1

4 10 4 1 -1-110

[DP']

115
116 CONGRUENCE [CHAP. 15

1 -2 -10 -1
1 4-1
Then
1

10
The matrix D to which A has been reduced is not unique. The additional transformations
10
0-100
ffgCi) and Kg(^). for example, will replace D by the diagonal meitrix while the
10

10
0-900 however, no pair of
transformations H^d) and K^Ci) replace D by . There is,
4

www.TheSolutionManual.com
rational or real transformations which will replace D by a diagonal matrix having only non-neg-
ative elements in the diagonal.

REAL SYMMETRIC MATRICES. Let the real symmetric matrix A be reduced by real elementary
transformations to a congruent diagonal matrix D, that is, let P'AP = D. While the non-zero
diagonal elements of D depend both on A and P. it will be shown in Chapter 17 that the number
of positive non-zero diagonal elements depends solely on A.

By a sequence of row and the same column transformations of type 1 the diagonal elements
of D may be rearranged so that the positive elements precede the negative elements. Then a
sequence of real row and the same column transformations of type 2 may be used to reduce the
diagonal matrix to one in which the non-zero diagonal elements are either +1 or 1. We have

II. A real symmetric matrix of rank r is congruent over the real field to a canonical
matrix

P
(15.2) C =
'r-p

The integer p of (15.2) is called the index of the matrix and s = p-(r p) is called the

signature.

Example 2. Applying the transformations H23. K^a and H^ik), Kr,(k) to the result of Example 1, we have

1 1 10 1

0-100 -2 1 C 10 -5 2 5
IC\(/]
[A\n
4 -10 4 1 0-10 -2 10
-1 -1 1 -1-110
and (/AQ = C. Thus, A is of rank r = 3, index p = 2, and signature s = 1.

III. Two re-square real symmetric matrices are congruent over the real field if and only

if they have the same rank and the same index or the same rank and the same signature.

In the real field the set of all re-square matrices of the type (15.2) is a canonical set over
congruence for real ra-square symmetric matrices.
CHAP. 15] CONGRUENCE 117

IN THE COMPLEX FIELD, we have

IV. Every ra-square complex symmetric matrix of rank r is congruent over the field of
complex numbers to a canonical matrix

/^
(15.3)

Examples. Applying the transformations H^ii) and K^f^i) to the result of Example 2, we have

10 1 10 1

1 -5 2 5 C 10 -5 2 k
[^1/] [D ']
0-10
1

-2 1 10 -2i i

-1 -1 1 -1 -1 1

www.TheSolutionManual.com
R'AR
and
^&:] See Problems 2-3.

V. Two ra-square complex symmetric matrices are congruent over the field of complex
numbers if and only if they have the same rank.

SKEW-SYMMETRIC MATRICES. If A is skew-symmetric, then

(FAP)' = FAT = r(-A)P = -FAP


Thus,

VI. Every matrix B = FAP congruent to a skew-symmetric matrix A is also skew-


symmetric.

In Problem 4, we prove

VII. Every n-square skew-symmetric matrix A over F is congruent over F to a canoni-


cal matrix

(15.4) B = diag(Di, Dj 0^,0,..., 0)

where = r
n .

D,-
I
,(f=l,2 t). The rank of /} is r = 2t.
See Problems.

There follows

Vin. Two ra-square skew-symmetric matrices over F are congruent over F if and only
if they have the same rank.

The set of all matrices of the type (15.4) is a canonical set over congruence for re-square
skew-symmetric matrices.

HERMITIAN MATRICES. Two n-square Hermitian matrices A and B are called Hermitely congruent,
[^ ], or conjunctive if there exists a non-singular matrix P such that

(15.5) FAP
Thus,
X

118 CONGRUENCE [CHAP. 15

IX. Two re-square Hermitian matrices are conjunctive if and only if one can be obtain-
ed from the other by a sequence of pairs of elementary transformations, each pair consist-
ing of a column transformation and the corresponding conjugate row transformation.

X. An Hermitian matrix A of rank r is conjunctive to a canonical matrix

Ip

(15.6) -Ir-p

The integer p of (15.6) is called the index of A and s = p-(r-p) is called the signature.

XI. Two re-square Hermitian matrices are conjunctive if and only if they have the same
rank and index or the same rank and the same signature.

The reduction of an Hermitian matrix to the canonical form (15.6) follows the procedures

www.TheSolutionManual.com
of Problem 1 with attention to the proper pairs of elementary transformations. The extreme
troublesome case is covered in Problem?.
See Problems 6-7.

SKEW-HERMITIAN MATRICES. If A is skew-Hermitian, then

(FAPy = (PAT) FAP


Thus,

XII. Every matrix B = FAP conjunctive to a skew-Hermitian matrix A is also skew-


Hermitian.

By Problems, Chapter 14, H = -iA is Hermitian if A is skew-Hermitian. By Theorem


there exists a non-singular matrix P such that

Ip

FHP = C = -Ir-p

Then iFHP = iF{-iA)P = FAP = iC and

Up
(15.7) B = FAP = - ilr~p

Thus,

XIII. Every re-square skew-Hermitian matrix A is conjunctive to a matrix (15.7) in

which r is the rank of A and p is the index of iA.

XIV. Two re-square skew-Hermitian matrices A and B are conjunctive if and only if

they have the same rank while -iA and -iB have the same index.
See Problem 8.
CHAP. 15] CONGRUENCE 119

SOLVED PROBLEMS
1. Prove: Every symmetric matrix over F of rank r can be reduced to a diagonal matrix having exactly
r non-zero elements in the diagonal.

Suppose the symmetric matrix A = [a^,-] is not diagonal. If a^ / 0. a sequence of pairs of elementary
transformations of type 3, each consisting of a row transformation and the same column transformation,
will
reduce A to

"n2 "ns

Now the continued reduction is routine so long as b^^.c^^. are different from zero. Suppose then
that along in the reduction, we have obtained

www.TheSolutionManual.com
the matrix

hss
.

S+1, s+2 *s+l, n


k^
S+2, s+ 1 k c

'^ri , s+i "n, s->-2

in which the diagonal element k^^^^^^^ = o. every we have proved


If 4y = 0, the theorem with s = r. If,
however, some k^j ,
say V s+^ / 0, we move it into the (s+l,s+i) position by the proper row and column
transformation of type 1 when u = v; otherwise, we add the
(s+u)th row to the (s+t;)th row and after the
corresponding column transformation have a diagonal element
different from zero. (When a^, = o, we proceed
as in the case Ar^+i ^^.^ = o above.)

Since we are led to a sequence of equivalent matrices, A is


ultimately reduced to a diagonal matrix
whose first t diagonal elements are non-zero while all other elements
are zero.

'12 2
2. Reduce the symmetric matrix A 2 3 5 to canonical form (15.2) and to canonical form
(15.3).
.2 5 5
In each obtain the matrix P which effects the reduction.
-
1 2 2 1
1 I
1 1 1 1
u\n 2 3 5 1
1 c -1 1
I

-2 1 C -1 2 1 c 2 -4 1 1
2 5 5 1 1 1 -2 1 2 4 1 1 -1 -2 1

[0|^']

To obtain (15.2), we have

1 1 1 1

[0 1 Pi'] = 2 -4 1 1 1 i\Pi \C\P'\


-2^f2 k\f2
-1 -2 1 0-1 -2 1
120 CONGRUENCE [CHAP. 15

1 -av^ -2
and j\/2 1

5a/2

To obtain (15.3), we have


1 1 1 1 1

= C -2\/2 2\/2 iV2 Vc\?'\


[D\Pl] 2 4 1 1 1 1

-1 2 1 1 1 2i -i

1 -2V2 2J

and k\f2 -i

www.TheSolutionManual.com
3. Find a non-singular matrix ? such that ?'A? is in canonical form (15.3), given

1 i 1 + i

A = i 2-j
1 + i 2-i 10 + 2i

^
1 i 1+J 1 1 1 1

2-j c 3 - 2j -i
[^1/] i 1 1 '-\J 1 1

1 +i 2-i 10 + 2j 1 1 3-2J 10 -1-j 1

_
p
10 1 1 1 1

i 1
c 1 -i 1
1 1

5+12J l + 2i 3 + 2i 1 7+ 4i!' -5+12i 3-2t


1 1

13 13 13 J

= [c\n
7+ 4i
1 -I
13

-5+12i
Here. 1
13

3-2j
13

4. Prove: Every ?i-square skew-symmetric matrix A over F of rank 2t is congruent over F to a matrix

B = diag(Di, Dg 7)^,0 0)

where D^ =
U :] (J = 1,2, )

then some = -aji ?^ 0. Interchange the sth and first rows and the
It A = Q. then S = ^. If -4 ^ 0, mj
and first columns and the /th and second columns to replace
/th and second rows; then Interchange the Jth

oy
a..

^ by the skew-symmetric matrix -Oy


-a,;,
j

1
g
2
V Next multiply the first row and the first column by l/a^^-

3 'S*
i
CHAP. 15] CONGRUENCE 121

1
''2
to obtain -1 and from it, by elementary row and column transformations of type 3, obtain

1
Di
-1
F4

If /v= 0. the reduction is complete; otherwise, the process is repeated on F4 until B is obtained.

5. Find a non-singular matrix P such that P'AP is in canonical form (15.4), given

2 4
n n 1

-2 -1

www.TheSolutionManual.com
-4 3 2

Using a^s 4 0, we need only interchange the third and second rows followed by the interchange of the

2 4 10 2 4 10
third and second columns of
1-3 10 -2 -1 -2 10
to obtain
-2 -1 -2 10 1 0-3 10
-4320 1 -4230 1

Next, multiply the


first row and first column by 2; then proceed to clear the first two rows and the first
two columns of non-zero elements. We have, in turn.

10 2 i 1 k
-1 -1 -2 10 1 1
and
1 0-3 10 -5 -2i 1

-2230 1 5 -1 -2 1

Finally, multiply the third row and third column by -1/5 to obtain

1 1/2
-1 10 Di
P'
1 1/10 -1/5 Do
0-10 -1 0-2 1

1/2 1/10 -1
-1/5
Thus when P = P'A?
10-2 = diag (Di.Dj).

6. Find a non-singular matrix P such that P'AP is in canonical form (15.6), given

1 1 - i -3 + 2i
A l + i

-3 - 2j
122 CONGRUENCE [CHAP. 15

1 1-j -3 + 2i j
1 1 1

[^1/] 1+i 2 -i 1 1 c 5 -l-i 1

-3 - 2j i 1 1 5 -13 3 + 2j 1

10 1 1 1

00
25
13
2-3i
13
1
5_
13
C 1
2-3f
5V13
13
5\/l3 \/l3
1

-13 3 + 2i 1 3 + 2i 1
0-1
VT.5 Vl3

vc\p'^

2+ 3t 3-2i
5\/l3 \fl3

www.TheSolutionManual.com
13
and
5\/T3

1 1

^/I3 \/T3

7. Find a non-singular matrix P such that P'AP is in canonical form (15.6), given
1 l + 2i 2 - 3i
4 1 - 2i 5 -4 - 2 j

2+ 3f -4 + 2f 13

1 1 + 2i 2 - 3j 1 1 1 1

[41/] 1 - 2i 5 -4 - 2s 1 1
HC 5j -1 + 2j 10
2+ 3i -4 + 2i 13 10 1 -5i -2-3! 1

10 1 10 1

HC 10 5i 2 1 i
HC 10 2 1 i

-5J -2-Zi 1 -5/2 -2-2J 5J 2

HC 10 '10 10 \/T6

-4 - 4j
0-1
\/To x/Io x/To

[cin

-4 + 4i

ao \/T0

and
10 10

i
vTo
.

CHAP. IB] CONGRUENCE 123

8. Find a non-singular matrix P such that P'AP is in canonical form (15.7), given

i 1 1 + j

A = 1 l + 2t

1 + i -l + 2i 2j

1 i 1 + i

Consider the Hermitian matrix H -J 2 -i


1-i 2 +i 2

1 -l-2j -i

The non-singular matrix P = 1 1 is such that P'HP = diag[l, 1, -l].


10
Then P'AP = diag[i. i. -i ]

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS
9. Find a non-singular matrix P such that P'AP is in canonical form (15.2), given
1 2 O' "O 1 2" " 1 -1 o1
(a) A
-:]
(6) A = 2 3-1 (c) ^ = 1 4 (d-) A = -1 2 1
[-:
-1 -2_ .2 4 0, . 11.
1 -2 .2 "^V2 -^v^ -1 1 1 -1
Ans. (a) P (b) P 1 - I .
(c) ^ = kV2 ky/2 -k , (d) P = 1 -1
[::]
I i 1

10. Find a non-singular matrix P such that P'AP is in canonical form (15.3). given
2i H- j 2 - 4j
r
I
1 1+211
i + 2i I

' l +j 1 +i -l-2i
|_1 + 2i 1 + 4iJ
2 - 4; -1 - 2i -3 - 5i

2(1 -J) i/\/2 + 0/2


n 2(-i-2,)]
(1
Ans. (a) P (b) P = (l-J)/\/2 (-3-2i)/l3
b ^ J
(3 + 2i)/l3

11. Find a non-singular matrix P such that P'AP is in canonical form (15.4). given
(a)
1
(6)
2
(c) (d) 12-2
-1 A 1 -2 -1 0-1 1
/I =
-2 3
=
-2 -3
3 A =
0-1 3
A :

-2103
Ans.
- -" 2-3 2-1-3
(a) (b) (c) "0010" (d) '1
-1/3 r
1 2 -3/2"'
P = P P
10 3 1 -2/3 2
1 = 1 = P =

1
10 2 1/3
1 1
i

124 CONGRUENCE [CHAP. 15

12. Find a non-singular matrix P such that P'AP is in canonical form (15.6). given
1 1 +i 2 1 1 + j 3 - 2i
1 - 3i"|
(a) A tl+ ' (6) A 1-i 3 -i (c) ^ 1-j 3 3-4i
l 3i 10
J 2 i 4 3 + 2f 3 + 4t 18

1 -1-i (-5-i)/\/5 1 -1-j (-2 + 51)


-i + sfl
Ans. (a) P [1 (6) P = 1 (2-f)/V5 (c) P = 1 (-2-J)
1 J
i/Vs 1

13. Find a non-singular matrix P such that P'AP is in canonical form (15.7). given
i -1-i -1

(a) A (c) A = 1-j 1-i


L-i+j I
J . 1 -1-j -i.

2+j

www.TheSolutionManual.com
i -1 l+i 1

(6) A 1 2i i (i) A -1 l-2i


-l+i i 6i ,
_-2 + i -1-2J .

i (l-J)/\/2 -l'

Ans. (a) P (c) P l/\/2 -1


[: -r] ,0 1,

1 -i -2 + 3j l/x/2 (l-2i)/VT0 -l/\/2'

(6) P = 1 -2 - (rf) P = i/y/2 (-2-i)/yJT0 i/\/2

1 J l/\/l0 J

14 If D = ^1 show that a 2-square matrix C satisfies C'DC = D if and only if |Cl=l.


L-i oj

Show U > and only if n -p


Let A be a non-singular n-square real symmetric matrix of index that
p. if
15. |

is even.

16- Prove: A non-singular symmetric matrix A is congruent to its inverse.


Hint. Take P = BB' where B'AB = I and show that P'AP = A .

to obtain (15.6) for Hermitian


17. Rewrite the discussion of symmetric matrices including the proof of Theorem I

matrices.

18. Prove: It A B then A is symmetric (skew-symmetric) if and only if B is symmetric (skew-symmetric).

- T) is non-
Let S be a non-singular symmetric matrix and T be a skew-symmetric matrix such that (S +
7") (S
19.
singular. Show that P'SP = S when
P = (S+TT^(S-T)
Hint. P'SP =[(S-Tf^(S+T)S~'^(S-T)<.S+T)'^r^.

Let S be a non-singular symmetric matrix and let T be such that (S+T)(S - T) is non-singular.
Show that
20.
if P'SP = S when P = (S + TT^iS- T) and I+P is non-singular,
then T is skew-symmetric.

Hint. T = S(/-P)(/ + P)-^ = S(I + PrHl-P).

21. Show that congruence of n-square matrices is an equivalence relation.


chapter 16

Bilinear Forms

AN EXPRESSION which is linear and homogeneous in each of the sets of variables (x^, x^ m)
and (y^, y^ Jri)
is called a bilinear fonn in these variables. For example,

XxJi + 2x172 - ISxija - 4x271 + 15x272 - ^s/s

www.TheSolutionManual.com
is a bilinear form in the variables {x^, Xg) and (y.^, y^, y-g).

The most general bilinear form in the variables (x^, Xg x) and {y^.y^ Jn) maybe
written as

%i%yi + oi2%y2 +


+ %r! X i7?i

+ 021^271 + 022^72 +
+ fl2ra%7ra

+ ami^myi + Om2%y2 +

+ ""mv^n Jn

or, more briefly, as

n n
(16.1) /(^.y) S S ai-x^y
t =i j=i -^ '

%1 012
in '7i'

Ojl ^22
2n 72
% *2> . %
_0'n\ Om2
Omra_ _Yn_

where X = [x^, x^ x ]', ^ = [0^^], and i' = [yi,72 7]'.

The matrix A of the coefficients is called the matrix of the bilinear form and the rank of A
is called the rank of the form.

See Problem 1.

Example 1. The bilinear form

1 1 Ti

^iXi + ^irs + *2yi + ^2X2 + ^373 Ui. %.% 1 1 y2


1 73

X'AY

125
.

126 BILINEAR FORMS [CHAP. 16

CANONICAL FORMS. Let the m x's of (16.1) be replaced by new variables u's by means of the lin-

ear transformation

(16.2) Xi = 1. bijUj, (f=l,2 m) or X = BU

and the n y's be replaced by new variables v's by means of the linear transformation

(16.3) 2 c^jVJ, (f=l,2 n) or Y = CV


Ti
J=i
V J

We have X'AY = (BU)"A(CV) = U'(BAC)V. Now applying the linear transformations U = IX,
V = lY we obtain a new bilinear form in the original variables X\B'AC)Y = X'DY

Two bilinear forms are called equivalent if and only if there exist non-singular transfor-
mations-which carry one form into the other.

I. Two bilinear forms with mxn matrices A and B over F sire equivalent over F if and

www.TheSolutionManual.com
only if they have the same rank.

If the rank of (16.1) is r, there exist (see Chapters) non-singular matrices P and Q such
that
Ir
FAQ

Taking B = P' in (16.2) and C = Q in (16.3), the bilinear form is reduced to

Ir
(16.4) UXPAQ)V U' V = U^V^ + ^2^2 + + U^V^

Thus,
II. Any bilinear form over F of rank r can be reduced by non-singular linear transfor-
mations over F to the canonical form u^^v^ + u^v^ + + u^i'r-

1 1

Examples. For the matrix of the bilinear form X^AY = X' 1 1 Y ofE^xamplel,

10 10 1 -1 1 -1
10 10 10 1 1

1 1 1 1

A h
10 110 10 1 10 10 10 10 10
110 10 1-1-110 1-1-110 1 0-110
10 1 1 1 1 1 1 1

p'
h
1 -1 1 -1
Thus, X = PU 1 V and y = QV 1 1 V reduce X'AY to
1 1

1 1 1 1 -l'

-1 1 1 1 1 1 U%V U-^V-:^ + U^V^ + Ugfg

1 1 1
CHAP. 16] BILINEAR FORMS 127

The equations of the transformations are

X-j^ = u Ur, Ti = ^1 - "3

X2 = U, and 72 = V^ + 1^3

Xg = Ts = ^3

See Problem 2

TYPES OF BILINEAR FORMS. A bilinear form 1 S ij x


a,-,-
'^iJj X'AY is called

symmetric symmetric
alternate skew-symmetric
according as A is
Hermltlan Hermltlan
alternate Hermltlan skew-Hermltlan

www.TheSolutionManual.com
COGREDIENT TRANSFORMATIONS. Consider a bilinear form X'AY in the two sets of n variables
(i. 2 ^) and (yi.ys. .3^)- When the ;'s and y's are subjected to the same transforma-
tion X = CU and Y = CV the variables are said to be transformed cogredlently. We have
III. Under cogredient transformations X = CU and Y = CV, the bilinear form X'AY.
where A is re-square, is carried into the bilinear form U\C'AC)V.

If A is symmetric, so also is C'AC; hence,

IV. A symmetric bilinear form remains symmetric under cogredient transformations of


the variables.

V. Two bilinear forms over F are equivalent under cogredient transformations of the
variables if and only if their matrices are congruent over F.

From Theorem I, ChapterlS, we have

VI. A symmetric bilinear form of rank r can be reduced by non-singular cogredient


transformations of the variables to

(16.5) %%yi + 02*272 + + OrXryr

From Theorems II and IV, Chapter 15, follows

VIl. A real symmetric bilinear form of rank r can be reduced by non-singular cogredient
transformations of the variables in the real field to

(16.6) iyi + %y2 + + xpyp - a;^4.i 7^4.1 - - x^Jr


and in the complex field to

(16.7) %yi + 2y2 + + x^y^

See Problem 3.

CONTRAGREDIENT TRANSFORMATIONS. Let the bilinear form be that of the section above. When
the x's are subjected to the transformation X = (C~^yU and the y's are subjected to the trans-
formation Y = CV, the variables are said to be transformed contragredlently. We have
128 BILINEAR FORMS [CHAP. 16

VIII. Under contragredient transformations X = (C-K^T


)'U and Y = CV, the bilinear form
X'AY, where A is n-square, is carried into the bilinear form U\C AC)V.

IX. The bilinear form X'lY = x^y^ + x^y^ + + %y is transformed into itself if and
only if the two sets of variables are transformed contragrediently.

FACTORABLE BILINEAR FORMS. In Problem 4, we prove

X. A non-zero bilinear form is factorable if and only if its rank is one.

SOLVED PROBLEMS

www.TheSolutionManual.com
Jx
1 2 -13] 1 2 -13l
1. %yi + 2iy2 - 13%yg - 4a^yi + 15x^y^ - x^y^ = %. 2 -1 Ji = X'
-4
4 15 15 -ij
73

2. Reduce %yi + 2x^y^ + 3%y3 - Ix^y^ + 2%yi - 2^y2 + x^y^ + 3x2y4 + "ix-^y^ + 4%y3 + x^y^ to canon-
ical form.

1 2 3-2
The matrix of the form is A 2-2 1 3 By Problem 6. Chapter 5. the non-singular matrices

3 4 1

1 1/3 -4/3 -1/3


10 -1/6 -5/6 7/6 /g
-2
-1 -1
1

1
and Q =
10 are such that PAQ Thus, the linear transfor-

mations
14 1

X^ = Ui 2U2 Uq . y
6 ^ 6 ^ 6
X = P'U or I :x:2 = U2 - U3 and Y = QV or

xs = "3
7a

reduce X'AY to u^v^ + U2t)2.

12 3 2
2 3 5
3. Reduce the symmetric bilinear form X'AY = X' Y by means of cogredient transfor-
3 5 8 10
2 8 10 -8
mations to (16.5) in the rational field, (b) (16.6) in the real field, and (c) (16.7) in the complex
field.

1 -2 -10 -1 1 -2 -10 -1
1 4 -1 1 4-1
(o) Prom Example 1, Chapter 15. the linear transformations X = u. Y =
1 1

1 10
CHAP. 16] BILINEAR FORMS 129

reduce XAY to uj^'i "2% + ^usfg.

1 -5 -2 -1 1 -5 -2 -1
(6) Prom Example Chapter the linear transformations X 2 1 -1 2 1-1
2, 15, u. Y =
1 1
1

2 i
reduce X'AY to u^v^ + ug^s - "3%-

(c) Prom the result of Example 2, Chapter 15. we may obtain

1 1 000' 10 1

1 -5 2 i
C
10 -5202
-1 -2 1 10 -2j i

-1 1 1 -1-110
1 -5 -2J -1 1 -5 -2J -1
-1

www.TheSolutionManual.com
2 i 2 -1 t
Thus, the linear transformations X u. Y = V reduce X'AY to
1 1
1

2 2
U^V^ + U2V2 + UgVg.

4. Prove: A non-zero bilinear form f(x,y) is factorable if and only if its rank is 1.

Suppose the form is factorable so that

and, hence, a^j = b^cj. Clearly, any second order minor of A = [a^-], as

aj5 a^bj
aij aib^
bib^
t H
Lfei "ks_ fk^'j fe*s_ "k "k

vanishes. Thus the rank of /4 is 1.

Conversely, suppose that the bilinear form is of rank 1. Then by


Theorem I there exist non-singular
linear transformations which reduce the form to U'(BAC)V = u^v^.
Now the inverses of the transformations

= jnr ] and = 2 '^jyj

carry u^u^ into (S rijXj)(l s^jyp = f(x.y) and so f(x.y) is factorable.


130 BILINEAR FORMS [CHAP. 16

SUPPLEMENTARY PROBLEMS
5. Obtain linear transformations which reduce each of the following bilinear forms to canonical form (16.4)
(a) x^j^ - 2^73 + "ix^j^ + ^272 - 3*273 - ^sTi - ^72 - ^sTs

2-510 3 10 7 4 5 8
-2 -2 4 7 5
(6) r -4 -11 2 7 y. (c) X'
8
-2
(rf) r 5 3
3

12 6
5-510 1

3
1

5 8 5 6 10

6. Obtain cogredient transformations which reduce

1 2 3 1 3 1

(a) r 2 5 4 y and (6) X' 3 10 2 y to canonical form (16.6).

3 4 14 1 2 5_

1 -2 -7 'l -3 -4V3/3
C

www.TheSolutionManual.com
Ans. (a) C 1 2 (6) = 1 \/3/3
1 \/3/3

Ir
7. If Bj^, B2. Ci , C2 are non-singular n-square matrices such that B-^A^C^ = B^A^C^ = find the transfor-

mation which carries X'A^Y into UA2V.


Arts. X = (B'^B^iU, Y = C^C^V

8. Interpret Problem 23, Chapter 5, in terms of a pair of bilinear forms.


- -

1 1
1
2 221
i

i i
9. Write the transformation contragredient to .Y = 1 1 /. .4ns. y -2 2 2

1 1
i
2 ~2
2
. .

10. Prove that an orthogonal transformation is contragredient to itself, that is .Y = PV Y . = PV.

11. Prove: Theorem IX.

-1
12. If X'AY is a real non-singular bilinear form then XA Y is called its reciprocal bilinear form. Show that when
reciprocal bilinear forms are transformed cogrediently by the same orthogonal transformation, reciprocal bi-
linear forms result.

13. Use Problem 4, Chapter 15, to show that there exist cogredient transformations X = PU. Y = PV which re-

duce an alternate bilinear form of rank r = 2t to the canonical form


U1V2 - W2V-L + U3V4.- u^V3 + + ugt-i "^t - "sf^et-i

14. Determine canonical forms for Hermitian and alternate Hermitian bilinear forms.
Hint. See (15.6) and (15.7).
chapter 17

Quadratic Forms

A HOMOGENEOUS POLYNOMIAL of the type

n n
(17.1) q = X'AX

whose coefficients a^,-


"u
are elements of F is called a quadratic form over F in the variables
^It ^2 ^7J'

www.TheSolutionManual.com
2 2 2
Example 1. 9 = as^^ + 2*2 - "^x^ - 'ix^x^ + 8x^x^ is a quadratic form in the variables xi.x^.xg. The ma-
trix of the form may be written in various ways according as the cross-product terms -4x1x2
and 8x1x3 are separated to form the terms 012X1X2.021^*1 and 013X1X3,031X3X1. We shall
agree that the matrix of a quadratic form be symmetric and shall always separate the cross-
product terms so that a a^ Thus,
V .

1 '
"-^s
7Xg - 4Xj^X2 + SXj^Xg

1 -2 4
= X' 2 2

4 0-7

The symmetric matrix A = [o^^- ] is called the matrix of the quadratic form and the rank of
A is called the rank of the form. If the rank is r<n, the quadratic form is called singular;
otherwise, non-singular.

TRANSFORMATIONS. The linear transformation over F, X = BY , carries the quadratic form (17.1)
with symmetric matrix A over F into the quadratic form

(17.2) (.BYyA(BY) YXB'AB)Y

with symmetric matrix B'AB.

Two quadratic forms in the same variables %, xz % are called equivalent if and only if
there exists a non-singular linear transformation X=BY which, together with Y =IX, carries
one of the forms into the other. Since B'AB is congruent to A, we have
I. The rank of a quadratic form is invariant under a non-singular transformation of the
variables.

II. Two quadratic forms over F are equivalent over F if and only if their matrices are
congruent over F.

Prom Problem 1, Chapter 15, it follows that a quadratic form of rank r can be reduced to
the form

(17.3)
K^i + Kr^ + + h^y^, hi/=o

131
132 QUADRATIC FORMS [CHAP. 17

in which only terms in the squares of the variables occur, by a non-singular linear transforma-
tion X = BY. ViTe recall that the matrix B is the product of elementary column matrices while 6'
is the product in reverse order of the same elementary row matrices.

-2 4

Example 2. Reduce q = X' 2 X of Example 1 to the form (17.3).

-7

1 -2 4 1
1 o' 1 1

We have [A I] -2 2 1 1 0-2 8 2 1

4 0-7 1 1 8 -23 -4 1

1 1

0-2 2 1 = [D Bl
9 4 4 1

www.TheSolutionManual.com
1 2 4

Thus. X = BY 1 4 Y reduces q to q' = y=-2v^+9v=.


1
See Problems 1-2.

LAGRANGE'S REDUCTION. The reduction of a quadratic form to the form (17.3) can be carried out
by a procedure, known as Lagrange's Reduction, which consists essentially of repeated com-
pleting of the square.

Example 3.
1 7%;
'3
4x^*2 "*" ^*1*3

[x^- 'ix^{x^-2x^)\ + 2x1 7^:

{xl-'^x^{x^-2x^) + i{x^-2x/\ + 2x1 - Ixl - Ux^-2xJ


(x^-2x^ + ix/ - 2(xl-8x^x^) 2Sx

(x^-2x^ + 'ix^f - 2(.xl-Zx^x^-\-\Qxl, + Qxl

(\-2^2+H)' - 2(A:^-4^3f + Qxl

yi = xx - 2x2 + 4acs xi = yi + 2y2 + 4ys


Thus, ys = xq^ - 4x3 ot X2 = y2 + 4y3

73 = Xg X3 = ys

reduces q to y^ - 2y^ + 9y^.


See Problem 3.

REAL QUADRATIC FORMS. Let the q = X'AX be reduced by a real non-singular


real quadratic form
transformation to the form (17.3). one or more of the h^ are negative, there exists anon-
If

singular transformation X= CZ, where C is obtained from 6 by a sequence of row and column
transformations of type 1, which carries q into

(17.4) sz^
s,z7 + s^z-
s z^ + ... + Sfyzj, - s*+iZa+i - ... - s^;.
11 2 2

in which the terms with positive coefficients precede those with negative coefficients.
CHAP. 17] QUADRATIC FORMS 133

Now the non- singular transformation

Wi = VSi Zi , a = 1,2 r)

= 'j (/ =r+l,r+2 71)


"'i
or

= ;- ' .1.1 w
'''4k- J

carries (17.4) into the canonical form

(17.5) wl + wl + + w^^ - ;2^^ - - wl

Thus, since the product of non-singular transformations is a non-singular transformation,


we have
Every real quadratic form can be reduced by a real non-singular transformation to
III.

www.TheSolutionManual.com
the canonical form (17.5) where p, the number of positive terms, is called the index and r
is the rank of the given quadratic form.

Example 4. In

q
I
Example
=
o
y^ -
2,
Q
2y^ + 9yQ
the quadratic form
Z
^12
o = a^ + 2x^

The non-singular transformation yi = z^, y^ = 23, ys = 22 carries q'


.
- Ix^ - ixx
3 1213
8xx + was reduced to

into q" = zf + 922 ~ ^^3 ^^^ ^^^ non-singular transformation 21 = w^, 22 = w^/S, zg = wa/v^
J
reduces q
" i.
to q
"'
=
2,2
w^+ w^ - w^.
2

Combining the transformations, we have that the non-singular linear transformation

Xl = Wi + ^W2 + V^M>3
1 4/3 v^
X2 = |mJ2 + 5V^;g ^ = 4/3 2^ W
1/3
X3 =
3"^
2
S qr to q = W^ + wl- w%. The quadratic
q form is of rank 3 and index 2.

SYLVESTER'S LAW OF INERTIA. In Problem 5, we prove the law of inertia:

IV. If a real quadratic form is reduced by two real non-singular transformations to


canonical forms (17.5), they have the same rank and the same index.

Thus, the index of a real symmetric matrix depends upon the matrix and not upon the ele-
mentary transformations which produce (15.2).

The difference between the number of positive and negative terms, p - {r-p), in (17.5) is
called the signature of the quadratic form. As a consequence of Theorem IV, we have

V. Two real quadratic forms each in n variables are equivalent over the real field if
and only if they have the same rank and the same index or the same rank and the same sig-
nature.

COMPLEX QUADRATIC FORMS. Let the complex quadratic form X'AX be reduced by a non-singular
transformation to the form (17.3). It is clear that the non-singular transformation

2i = ^iVi. (i = 1. 2 r)

Zj = yj , (7 = r+1, r+2, ...,7i)


134 QUADRATIC FORMS [CHAP. 17

Ul
1
Y -'- 1, 1,
)^
''''{/h^'/h;'-
carries (17.3) into

(17.6) < -4-- - 4


Thus,

VI. Every quadratic form over the complex field of rank r can be reduced by a non-
singular transformation over the complex field to the canonical form (17.6).

VII. Two complex quadratic forms each in n variables are equivalent over the complex
field if and only if they have the same rank.

DEFINITE AND SEMI-DEFINITE FORMS. A real non-singular quadratic form q = X'AX, \a\ f 0, in

www.TheSolutionManual.com
n variables is called positive definite if its rank and index are equal. Thus, in the real field a
positive definite quadratic form can be reduced to j\-^ J^-^ -^
y^ and for any non-trivial set
of values of the x's, ^ > 0.

A real singular quadratic form q =X'AX, \a\ =0, is called positive semi-definite if its
rank and index are equal, i.e., r = p< n. Thus, in the real field a positive semi-definite quad-
ratic form can be reduced to y^ + yj "^ + y^ r< re,
, and for any non-trivial set of values of
the a;'s, q> 0.

A real non-singular quadratic formq = X'AX is called negative definite if its index p = 0,
i.e., r = n,p = 0. Thus, in the real field a negative definite form can be reduced to -y^ - y^ -
- y^ and for any non-trivial set of values of the x's, q < 0.

A real singular quadratic form q = X'AX is called negative semi-definite if its index p = 0,
i.e., r < re, p = 0. Thus, in the real field a negative semi-definite form can be reduced to

- y^ - ^"^ ^f ^y non-trivial set of values of the x's, q 5 0.


-yl y^
Clearly, if q is negative definite (semi-definite), then -q is positive definite(semi-definite).
For positive definite quadratic forms, we have

VIII. If q =X'AX is positive definite, then U| >0.

PRINCIPAL MINORS. A minor of a matrix A is called principal if it is obtained by deleting certain


rows and the same numbered columns of A. Thus, the diagonal elements of a principal minor of
A are diagonal elements of A.
In Problem 6, we prove

IX. Every symmetric matrix of rank r has at least one principal minor of order r differ-

ent from zero.

DEFINITE AND SEMI-DEFINITE MATRICES. The matrix i of a real quadratic form q = X'AX is call-
ed definite or semi-definite according as the quadratic form is definite or semi-definite. We have

X. A real symmetric matrix A is positive definite if and only if there exists a non-
singular matrix C such that A = C'C.

XI. A real symmetric matrix A of rank r is positive semi-definite if and only if there
exists a matrix C of rank r such that A = C'C.
See Problem 7.
CHAP. 17] QUADRATIC FORMS 135

XII. If A is positive definite, every principal minor of A is positive.


See Problem 8.

XIII. If A is positive semi- definite, every principal minor of A is non-negative.

REGULAR QUADRATIC FORMS. For a symmetric matrix A = [a^,-] over F, we define the leading
principal minors as

%1 04.2 %3
Oil O12
(17.7) PO = 1, Pl= Oil. P2 = Pa Ogi 052 O23 Pn= Ul
021 022
Oqi O32 '^3

In Problem 9, we prove
XIV. Any re-square non-singular symmetric matrix A can be rearranged by interchanges

www.TheSolutionManual.com
of certain rows and the interchanges of corresponding columns so that not both p_^ and
Pn-2 are zero.
XV. If A is asymmetric matrix and if Pn-iPn^^ but Pn-i = 0, then p_2 and p
have opposite signs.

112 1

112 2
Example 5. For the quadratic form X'AX = X
2 2 3 4
X, po = 1, pi = 1, p2 = 0, p3 = 0, P4 = m= 1.

12 4 1

Here Oigs t^ 0; the transformation X = Kg^X yields

'1112
X'
112 2 X
(i)
12 14
2 2 4 3

for which po = 1, Pi = 1, P2 = 0. Ps = -1. P4= 1- Thus, for ( i) not both p2 and ps are zero.

A
symmetric matrix A of rank r is said to be regularly arranged if no two consecutive p's
in the sequence po.pi, -.Pr ^^^ zero. When A is regularly arranged the quadratic form X'AX
is said to be regular. In Example 5, the given form is not regular; the quadratic form (i) in the
same example is regular.
Let i be a symmetric matrix of rank r. By Theorem IX, A contains at least one non-
vanishing r-square principal minor M whose elements can be brought into the upper left corner
of A. Then p^ ^^ while p^^.^ = p^^.^ = ... = p = 0. By Theorem XIV, the first r rows and the
first r columns may be rearranged so that at least one of Pr-i and p^_2 is different from zero.
If p^_^ j^ and p^_2 =0, we apply the above procedure to the matrix of Pt--i; if Pt--2^0,
we apply the procedure to the matrix of P7-_2; ^^^^ so on, until M is regularly arranged. Thus,

XVI. Any symmetric matrix (quadratic form) of rank r can be regularly arranged.
See Problem 10.

XVII. A real quadratic form X'AX is positive definite if and only if its rank is re and
all leading principal minors are positive.

XVIII. A real quadratic form X'AX of rank r is positive semi-definite if and only if
each of the principal minors po.Pi Pr is positive.
136 QUADRATIC FORMS [CHAP. 17

KRONECKER'S METHOD OF REDUCTION. The method of Kronecker for reducing a quadratic form
into one in which only the squared terms of the variables appear is based on

XIX. U q = X'AX is a quadratic form over F in n variables of rank r, then by a non-


singular linear transformation over F it can be brought to q' = X'BX which a non-
in
singular r-rowed minor C oi A occupies the upper left corner of B. Moreover, there exists
a non-singular linear transformation over F which reduces q to q" = X'CX, a non-singular
quadratic form in r variables.

XX. li q = X'AX is a non-singular quadratic form over F in n variables and if Pn-i =


a ^ 0, the non-singular transformation

a = 1, 2. re-1)

^nnyn
or
10 !

www.TheSolutionManual.com
1 a^n
BY
1 <^n-i, n

carries q into S 2 HiJiYj + Pn-xPnYn in which one squared term in the variables
has been isolated.

1 -2 4
1 -2
Example 6. For the quadratic form X'AX = X' -2 2 -^i P2 = O^S -2 ^ 0. The
-2 2
4 0-7
non-singular transformation

Xl n + a.13 ys yi - 8y3 1 -8
2 = ys + a23 ys ys - 8y3 1 -8

xs =
ss ys
= - 2ys 0-2
reduces X'AX to

1 1 -2 1 -8 1 -2
Y' 1 -2 2 1 -8 Y' -2 2

8 -8 -2 4 0-7 0-2 36

in which the variable yg appears only in squared form

XXI. U q = X'AX is a non-singular quadratic form over F and if o^-i, n-i = ^nn
but a,
're,?!-! ^
'
0, the non-singular transformation over F
"

xi = yi + CLi,n-iyn-i + '^inyn, (i 1, 2, ...,n-2)

*ra-i ^n-i.n yn. Xn a n n-i Yn-


,

or
1 ... !, ^.

1 ... a2 n-1

z = sy =
1 a.-2, n-1
_i a
'-'-n.-2, n

... '-n-i, n
,0 ... a__i
T

CHAP. 17] QUADRATIC FORMS 137

n-2 n-2
carries 9 into 2 2 (Hjyiyj + 1'^n,n-\Pnjn-\yn-
i-i j-\

The further transformation

'

Ji = Zi, (i = 1, 2 n-2)
Jn-x = Zn-i 2n

Jn = Zn-i + Zn
n-2 n-2
yields 2 .2 a^jz^zj + 2a_^_j^p(zn-i - z^ in which two squared terms with opp-
V 1 J -1
site signs are isolated.

Example 7. For the quadratic form

1 2 1

X'AX X' 2 4 3

www.TheSolutionManual.com
1 3 1

^22 - O'ss = but tXg2 = -1 i^ 0. The non-singular transformation

xi = yi + cti272 + *i3ys 1 2

X2 = *23r3 or -1
xq = a 32 72 0-10
reduces X'AX to

1 1 2 1 1 ] 1

1 -1 2 4 3 ( 1 y = rSY = + 2y,y3
yl
2 -1 1 3 1 -] 1

The transformation

)'l = 21 1
1

72 = 22 - 23 1 -1
73 = 22 + 23 1 1

carries Y'BY into

1 1 o' 1 1

Z'

-1
1 1

1 1
1 1

1
-1
1
z' 2

-2
< + K 2zi

Consider now a quadratic form in n variables of rank r. By Theorem XIX, q can be re-
duced to 9i = X'AX where A has a non-singular r-square minor in the upper left hand corner
and zeros elsewhere. By Theorem XVI, A may be regularly arranged.

If pr-i f 0, Theorem XX can be used to isolate one squared term

(17.8) Pr-iVrYr

If pr-i = but a.r-1, r-i ^0, interchanges of the last two rows and the last two columns
yield a matrix in which the new pr-i = a^_i^^_i f 0. Since pr-2 ^ 0, Theorem XX can be used
twice to isolate two squared terms

(17.9) Pr-2 r-i, r-iJ a r-i., r-iPrYr


which have opposite signs since pr-2 and pr have opposite signs by Theorem XV.
138 QUADRATIC FORMS [CHAP. 17

If pr-L = and a^_i_r-i = 0, then (see Problem 9) a,-, r-i ^ and Theorem XXI can be
used to isolate two squared terms

(17.10) 2ar,r~ipr{yr-i-yr)

having opposite signs.

This process may be repeated until the given quadratic form is reduced to another con-
taining only squared terms of the variables.

sequence pr-i,Pr
In (17.8) the isolated term will be positive or negative according as the
presents a permanence or a variation of sign. In (17.9) and (17.10) it is seen that the se-
quences p^_2, oLr-i. r-v Pr ^"^^ Pr-2' ^r, r-v Pr Present one permanence and one variation
of sign regardless of the sign of dr-i, r-i and a.^, r-i Thus,

XXII. It q = X'AX, a regular quadratic form of rank r, is reduced to canonical form


by the method of Kronecker, the number of positive terms is exactly the number of perma-
nences of sign and the number of negative terms is exactly the number of variations of

www.TheSolutionManual.com
sign in the sequence po, pi, P2 P^, where a zero in the sequence can be counted either
as positive or negative but must be counted.
See Problems 11-13.

FACTORABLE QUADRATIC FORMS. Let X'AX f 0, with complex coefficients, be the given quad-
ratic form.

Suppose X'AX factors so that

(1) X'AX = (oj.%1 + 02*2 + + ;)( iiXi + b.2_X2 + + b^x^)

0--
1

to.; .^ is non-singular. Let the


hi tj
J becomes .

The non-singular transformation

yx = OiaCi + 02*2 + + 0

y2 = ^1*1 + ^2^2 + + ^n^n

ys = Xs y = x^

transforms ( i ) into y-^y-z of rank 2. Thus, ( 1) is of rank 2.

If the factors are linearly dependent, at least one element a^^f 0. Let the variables and
their coefficients be renumbered so that o^ is ci . The non-singular transformation

!Y^ = a^Xx + 02*2 + + (^n^n

72 = X2 y = Xn

transforms (i) into y? of rank 1. Thus, (1) is of rank 1.

Conversely, if X'AX has rank 1 or 2 it may be reduced respectively by Theorem VI to y^ or

r^ + y^, each of which may be written in the complex field as the product of two linear factors.

We have proved
XXIII. A quadratic form X'AX ^ with complex coefficients is the product of two
linear factors if and only if its rank is r<2 .
CHAP. 17] QUADRATIC FORMS 139

SOLVED PROBLEMS
12 3 2
2 3 5 8
1. Reduce q = X'AX = X' X to the form (17.3).
3 5 8 10
2 8 10-8
Prom Example 1, Chapter 15,

12 3 2 1 1 10
[^1/]
2 3 5 8 1
c
0-100 -2100 iD'.P']
^\J
3 5 8 10 1 4 -10 4 1

2 8 10 -8 1 -1-110
1 -2 -10 -1
1 4 -1
Thus, the transformation X = PY = Y reduces q to the required form y^
1 yl^'^yl-

www.TheSolutionManual.com
1

1 2 2
2. Reduce q = X'AX = X' 2 4 8 X to the form (17.3).
2 8 4

We find

[^1/] C 8 1 1 [D\P']
1
2
i
2

1 -4
Thus, the transformation X = PY 1 -i Y reduces q to y^ + 8y^ - 2y^
1 2

3. Lagrange reduction.
2
2x^ + 5^2 + ,
,...2 2
(a) q =
.

19*3
,

- 24x^+ 8a:^a=2 + I2x^x^ + Sx^x^ + l%x^x^ - 8x^x^ - IGx^x^

= 2{xf+ 2x^(2x^ + 3x^+2x^)1 + 5x^+ 19x1 " 24*4 + ^S^j^g - 8x^x^ - IGx^x^

= 2{xl+ 2x^(.2x^+Zx^+2x^) + {2x^ + 3x^ + 2x^f\


+ bxl + 19*2 - 24*2 ^ jg^^^^
"2*3 _ g^^^^ _ ^^
~ 8*2^^ _ 2(2*^ + 3v +2*,)2

= 2 (*i + 2*2 + 3*g + 2*^)2 - 3 {*^ + 2*2(*g+ 4*4) ! + *3 - 32*2 - ^q_^

= 2 (*i + 2*2 + 3*g + 2*^)2 _ 3 (^ + ^ + 4^^)2 + 4 (*g - 2*4)2

ryi *i + 2*2 + 3*3 + 2*4

Thus, the transformation \yi =


Irs
=
*2 + *s + 4*4
*3-2*t
'Educes, to
00^
2y2_3y22+4y2.

*4

(i) For the quadratic form of Problem 2, we have

9 *2 + 4*^*2 + 4*^*3 + 4*2 + 16*2*3 + ** + 2*2 + 2*3)2 + 8*2*3


(*^

Since there is no term in *2 or *2 but a tenii in *2*3 , we use the non-singular transformation
140 QUADRATIC FORMS [CHAP. 17

( i ) Xl = Zl ,
A;2 = 22 . Xs - Z2 + ^3

to obtain

q = + 4z2+2z3f' +82^+82223 = (z^ + 422+2z3)2+8(22+2Z3f - 2z2 = y^ + ^y^ - 2yi^


(2i
- - - - -

1 4 2 1 1 4 2 1 1 2 2

Now Y 1
X
2 Z and from ( i), Z 1 X\ hence, Y = 1 i 1 X = ii
1 -1 1 1 -1 1 -1 1

'1 6"
-4
Thus, the non-singular transformation X = 1 - 2
I

y effects the reduction.


1

1 2

4. Using the result of Problem 2,

1 1

www.TheSolutionManual.com
[^1/] C 8 -4 1 1
i
-2 -2 2

and applying the tiansfoimations HgCiV^). K2(i+-\/2) and ^3(5 a/2). ^3(2 V2 ), we have

10 1
-|

c
1

10
10 [cic']
[^!/] 8 4 1 1

0-2 -2 2 0-1 -iVl ii\/2

2 2
1 -V2 1

Thus, the transformation X = (^y k^f2 -\\f2 y reduces q = X' 2 4 8 X to the canonical form

i\f2 iv^ 2 8 4
2,2 ,2

5. Prove: If a real quadratic form q is carried by two non-singular transformations into two distinct
reduced forms
2 2 2
(i) y; + yg
+ y^

and
<ii) y' + yl + - - -
- y'
+ yq yq,i yj.2
then p = q

Suppose q>p. Let X = FY be the transformation which produces (1) and X = GY be the transfor-
mation which produces ( ii ). Then

filial + bi2X2 + + binxn

621X1 + 622*^2 + + b^nXn


F~^X =

bjiixi + bn2X2 + + bnnpcn

and

cii% + ci22 +
+ Cin^n
C21X1 + C222 +
+ C2ren
G'^X

cr!.i;i + cn2*:2 + + cnnXn


.

CHAP. 17] QUADRATIC FORMS 141

respectively carry (i) and (ii) back into q. Thus,

(iii) (''ii*i + *i2*2+- + *in^n)^ + ... + {bp^'^^ + bp^x^-\-- + bp^Xyf

= (CiiA:^ + c^2*2+- + c^^f + + (cq^x^ + c x^+- + c x^f


('^g + l.l^l''' "^g +1,2*2 "*"
'^q + l,/!^^

Consider the r-q+p < r equations

*ii % + 6i2 *2 + + *in*n = ^g+i,i*i + '^(7+1,2*2 + + '^g + M*re =


''21*1+^22*2 + + 6^ 't?+2,l*l + '^9+2,2*2 +

bpiH + *;^2*2 + + 6jbn% = ^ri *i + ^2*2 + + c^^x^

www.TheSolutionManual.com
By Theorem IV, Chapter 10, they have a non-trivial solution, say (Ot^.ttg 0!^^). When this solutionis
substituted into ( iii ), we have

- (*;b+i,i'i + ^^+1,2*2 + + bp^i,rf-J^ (Vll + V22+--- + V?In)


= (-^11 i + %22 + + c.^n'^nf + + (<^7ii + ^(?22 + + >an)^
Clearly, this requires that each of the squared terms be zero. But then neither F nor G is non-singular,
contrary to the hypothesis. q<p. A repetition
Thus, of the above argument under the assumption that
q<p will also lead to a contradiction. Hence q=p.

6. Prove: Every symmetric matrix A of rank r has at least one principal minor of order r different
from zero.

Since A is of rank r, it has at least one r-square minor which is different from zero. Suppose that it
stands in the rows numbered Ji,i2 'r- I^^t these rows be moved above to become the first r rows of the
matrix and let the columns numbered fi,s2 i^ be moved in front to be the first r columns.

Now rows are linearly independent while all other rows are linear combinations of them. By
the first r

taking proper linear combinations of the first r rows and adding to the other rows, these last n-r rows can
be reduced to zero. Since A is symmetric, the same operations on the columns will reduce the last n-r
columns to zero. Hence, we now have

ili2

H2i2

"Vii %i2
"vv

in which a non-vanishing minor stands in the upper left hand corner of the matrix. Clearly, this is a princi-
pal minor of A

7. Prove: A real symmetric matrix A of rank r is positive semi-definite if and only if there exists a
matrix C of rank r such that A = C'C.
Since A is of rank r, its canonical form i Then there exists a non-singular matrix B
'--[t:]
142 QUADRATIC FORMS [CHAP. 17

such that A = B%B. Since A'i'= Ai = A^ , we have A = B'NxB = B'N^Ny^B = B'N(-N^B . Set C = A^B; then C is

of rank r and A = C'C as required.

Conversely, let C be a real n-square matrix of rank r; then A = C'C is of rank s<r. Let its canonical
form be

N^ = diag(di, (^2 ^5,0,0 0)

where each di is either +1 or -1. Then there exists a non-singular real matrix E such that E'(C'C)E = A^g

Set CE = B = [bij\. Since B'B = N^, we have


2 2 2
(i = 1, 2 s)

and

+ + + ^jn = 0' ii = s+l,s+2 n)


*ii *i2

Clearly, each d;>0 and A is positive semi -definite.

www.TheSolutionManual.com
8. Prove: If A is positive definite, then every principal minor of A is positive.

Let q = XAX. The principal minor of A obtained by deleting its ith tow and column is the matrix A^ of
the quadratic form q^ obtained from q by setting x^ = 0. Now every value of qj_ for non-trivial sets of val-
ues of its variables is also a value of g and, hence, is positive. Thus, Aj_ is positive definite.
This argument may be repeated for the principal minors A^j, A^^j-^, ... obtained from A by deleting two,
three, ... rows and the same columns of A.

By Theorem VI, Ai> 0, A^j> 0, ... I thus, every principal minor is positive.

Prove: Any ra-square non-singular matrix A = [ay] can be rearranged by interchanges of certain
rows and the interchanges of corresponding columns so that not both p_^ and p_2 are zero.

Clearly, the theorem is true for A of order 1 and of order 2. Moreover, it is true for A of order n>2
when p^_^ = a^ j^ 0. Suppose 01^ = ; then either (a) some a^^ ?^ or (b) all a^^ = 0.

Suppose (a) some CL^ ?^ 0. After the ith row and the ith column have been moved to occupy the position
of the last row and the last column, the new matrix has p_^ = CC^j^ 0.
Suppose (6) all a^i = 0, Since \A\ J^ 0. at least one a^i i^ 0. Move the ith row into the (n-l)st position
and the ith column into the (n-l)st position. In the new matrix _i,n = \n-i ^ ^- ^^ (^-^^^ '^ ^^''^

a a Vi,n
^n-i,n-i ^n-i,n
-a.n-i,n PnsPn
a a Vi,n
n, n-i

and Pn^^O.
Note that this also proves Theorem XV.

2 1

13 1
X
10. Renumber the variables so that q = X'AX = X'
4
is regular.
2 3 1

1111
Here po = 1, pi = 0, ps = 0, ps = -4, p* = -3. Since pi = P2 = 0, ps ^ 0, we examine the matrix

2
2
1 3 of p3 . The cofactor B22 = ?^ ; the interchange of the second and third rows and of
2 4
2 3 4
CHAP. 17] QUADRATIC FORMS 143

the second and third columns of A yields

2 1

2 4 3 1
r 3 11
1111
for which p^ - 1, p^ = 0, p^ = -4, pg = -4, p^ = -3. Here, X2 has been renumbered as xq and ;<:g as xg

12 3 4

11. Reduce by Kronecker's method q = X'


2 15 6
X.
3 5 2 3
4 6 3 4

Here Pq - 1, p^ - 1, p^
3, pg - 20, p^ = -5 and q is regular. The sequence of p's presents one
permanence and three variations in sign; the reduced form will have one positive and three negative terms.

www.TheSolutionManual.com
Since each pj / 0, repeated use of Theorem XIX yields the reduced form

PoPiTi + PiPgrl + PsPsrl + PsP^rl = y\ - "^yl


- eo^g - looy^

12 3 1

2 4 6 3
12. Reduce by Kronecker's method q = X'AX = X' X.
3 6 9 2
13 2 5

Here A is of rank 3 and fflgg i 0. An interchange of the last two rows and the last two columns carries

12 13 12 10
1 2 1
2 4 3 6 2 4 3
A into 5 in which C = 2 4 3 ^ Since S is of rank can be reduced to
13 5 2
0. 3, it
13 5
1 3 5
3 6 2 9

1 2 1
Now q has been reduced to XCX = X' 2 4 3 X which p^ =
for 1, p^ = 1, p^ =0, pg = -1. The reduced
1 3 5

1 1
form will contain two positive and one negative term. Since p = but = 4 i^ 0, the reduced
1 5
form is by (16.8)

PoPiri + Pxy-i-zJi + y-iiP^yl =


yl + 4y| - 4y|

1-2 1 2

Reduce by Kronecker's method


-2 4 1-1
13. q = X'
1112 X.

2-12 1

Here p^ - 1, p^ - 1, p^ - 0, Pg 9, p^ = 27 ; the reduced form will have two positive and two nega-

1 -2 1

tive terms. Consider the matrix B 2 4 1 of pg. Since /^gg = but /3g2 =-3^0 the reduced for
L 1 I ij

is by (16.8) and (16.9)

PoPiTi^ + 2/332 Pg (y| - y|) + pgp^y^ =


yl + 54y|
- 54^3^ - 243y|
: 1

144 QUADRATIC FORMS [CHAP. 17

14. Prove: For a set of real ra-vectors Xi, X^, , Xp,

Ai
Ai A]_ A2
Ag Ai * A2 ^2 ' ;t2-x
>

Z^ Zi Xp- X2
Xp- Xp

the equality holding if and only if the set is linearly dependent.


P
(a) Suppose the vectors Xi are linearly independent and let X = [i,2 xp]' f 0. Then Z = 2 X^x^ ^
i=i
and 0<Z-Z = 2 X-x. )( 2 X:xA = X\X':XAX = r(A:^-;f,-)^ = X'GX
(
J t-j' t J'
1=1 j=i 3

Since this quadratic form is positive definite, |


G > |
0.

(6) Suppose the vectors X^ are linearly dependent. Then there exist scalars k^^^.k^ kp, not all zero,

www.TheSolutionManual.com
such that S
f = i=i 4,,jy.^ = and, hence, such that

X..^ kiXjj-Xi + k^Xj X2 + + kpXj Xp a = 1.2 p)

Thus the system of homogeneous equations

Xj-X^xx + Xa-Xqxq +
'J J
+ Xa-X,
'j '
P ^P a = 1,2 p)

has a non-trivial solution xi= k; ,


(i = 1,2 p), and \G\ 0.

We have proved that I G > 1 0. To prove the converse of (b). we need only to assume |
G |
= and
p P
reverse the steps of (6) to obtain ;?, ^ = 0, (/ = 1,2 p) where f= l^k^Xi. Thus, S kjXy^ =
^ i= 1 -
J
^. ^= 0, if
= 0, and the given vectors Xj are linearly dependent.

SUPPLEMENTARY PROBLEMS

15. Write the following quadratic forms in matrix notation

(a) xl + 4*1^2 + 34 (6) 2x1 - + <<^) ^1 " 2^| - 3a: + 4j:i%2 + 6;c^*g - ^x^x^
6j;i2 ^s

2-3 12 3

Ans. (a) rP ^JX (b) X' -3 (c) r 2 -2 -4


1 3 -4 -3

2 -3 1

16. Write out in full the quadratic form in xi.x^.xs whose matrix is -3 2 4
1 4 -5

Ans. 2x1 - 6x^x^ + 2x.^Xg + 2x^ + 8X2^3 5<

17. Reduce by the method of Problem 1 and by Lagrange's Reduction:

4
1111 012 1
1 2
1-13-3 r -1 ^ -2
6-2 X X X (c) (d) 1
(a) X' 2
4 -2 18
(6)
13 3 1
1

2-10
1

1 -2 3
1-3 1-3
- -
, ,, 2 2,2
Ans. (a)
yl
+ 2y^ - 48y| (6) y,= 2^2^ + 4y^ (c) y^ 72 + 8yg id) Ji -y^z^ya
Hint. In (c) and (d) use a;i = zg, X2 = zi, xg = Z2.
CHAP. 17] QUADRATIC FORMS 145

18. (a) Show that X'\ \x = ^' \x but the matrices have different ranks.

(b) Show that the symmetric matrix of a quadratic form is unique.

19. Show that over the real field xf+xl- ixl+ix^x^ and 9xf+2x^+2xl + 6x^x^-6x^x^-8x^x^ are equivalent.

20. Prove: A real symmetric matrix is positive (negative) definite if and only
if it is congruent over the real
field to / (-/).

21. Show that X'AX of Problem 12 is reduced to X'CX hy X = RX. where R = Ks^K^^{-5)K^^a). Then prove
Theorem XIX.

22. (a) Show two real quadratic forms in the same variables are positive definite, so also is their
that if
sum.
(6) Show that if q^ is a positive definite fonn in x^.x^ xs and 92 is a positive definite form in xs^i. xs+2.
..x^, then g = ?i + 92 is a positive definite form in xx,x2 x^.

www.TheSolutionManual.com
23. Prove: If C is any real non-singular matrix, then C'C is positive definite.
Hint: Consider XlX = Y'ClCY

24. Prove: Every positive definite matrix A can be written as 4 = CC. (Problems 23 and 24 complete the proof
of Theorem X.) Hint: Consider D'AD=I.

25. Prove: If a real symmetric matrix A is positive definite, so also is A^ for


p any positive integer.

26. Prove: If ^ is a real positive definite symmetric matrix and B and C


if are such that B'AB =1 and A = C'C
then CB is orthogonal.

27. Prove; Every principal minor of a positive semi-definite matrix A is


equal to or greater than zero.

28. Show that ax^ - 2bx^x^ + cx\ is positive definite and only a>0 and \a\ =
if if ac-l^ >0.

29. Verify the stated effect of the transformation in each of Theorems


XX and XXI.

30. By Kronecker's reduction, after renumbering the variables when necessary, transform each of the following
into a canonical form.

12 12
1 -1
12
2
10-2 1
(a) X' -1 2 -1 (c) r 1112 (e) X' 1 (g) X' 1 -2
0-1 2 -2 1 3 1 -2 3_
2 2 2 1

1 2 3 1
4-4 2 2 -1"
2 1

(b) X' -4 3 -3 X (d) X'


2-462 1
2 4 3 1
(/) X' 2 4 2 (h) X'
2 -3 1
3 6 9 3
-12 3 11
12 3 1
3
1111
Hint: In (g), renumber the variables to obtain (e) and also as in Problem
17(rf).

Ans. (a) p^=p^=p^=p^=l; yf


+ y| + y2 ^^j Pq = Pi = ^ 22 = -! P3
I
-1
2
-
2,2 "^
. Ti 72 ^3
(b) iyl - 16y| + 16y| (/) Po = Pi=l. 2S=-4. P3 -16; y^^ + 128y|- 128y=
(c) - + 4y= - 3y2 See
yl ^yl (g) (e).

(d) - 8y, (A) 4y2 - 16y| + 16y| + 12y^


jI

31. Show that q = xf- Gxl - 6^ - Zxl - x^x^ - xx + 2xx + Uxx - XUx + ^xx^ can be factored
2 4- 3 4-
chapter 18

Hermitian Forms

THE FORM DEFINED by


_ n n __ _
(18.1) h = X'HX = S S h^jXiXj, hij = hj^
v=\ j=i

where U is Hermitian and the components of X are in the field of complex numbers, is called an
Hermitian form. The rank of H is called the rank of the form. If the rank is Kra, the form is

www.TheSolutionManual.com
called singular; otherwise, non-singular.

If H and X are real, (18.1) is a real quadratic form; hence, we shall find that the theorems

here are analogous to those of Chapter 17 and their proofs require only minor changes from
those of that chapter.
Since H is Hermitian, every h^i is real and every h^^^x;^ is real. Moreover, for the pair of

cross-products h^j'x^Xj and hj^XjXj^,

h^jx^xj + hjiXjXi = hijXiXj + hijXiXj

is real. Thus,

1. The values of an Hermitian form are real.

The non-singular linear transformation X = BY carries the Hermitian form (18.1) into an-

other Hermitian form

(18.2) {mH{BY) = Y(WHB)Y

Two Hermitian forms in the same variables x^ are called equivalent if and only if there exists
a non-singular linear transformation X = BY which, together with Y = IX, carries one of the
forms into the other. Since B'HB and H are conjunctive, we have

H. The rank of an Hermitian form is invariant under a non-singular transformation of


the variables,

and
HI. Two Hermitian forms are equivalent if and only if their matrices are conjunctive.

REDUCTION TO CANONICAL FORM. An Hermitian form (18.1) of rank r can be reduced to diagonal
form

(18.3) Kjxn + k^y-iLji + + 'crfryr' ^i^^ and real

by a non-singular linear transformation X = BY. From (18.2) the matrix B is a product of ele-
mentary column matrices while B' is the product in reverse order of the conjugate elementary
row matrices.
By a further linear transformation, (18.3) can be reduced to the canonical form [see (15.6)]

(18.4) %Zi + i^22 + + 'zpzp - z^+i2^ + i - - z-rV

146
CHAP. 18] HERMITIAN FORMS 147

of index p and signature p-(r-p). Here, also,


p depends upon the given form and not upon
the transformation which reduces that form to (18.4).

IV. Two Hermitian forms each in the same n variables are equivalent if and
only if
they have the same rank and the same index or the same rank and the same
signature.

DEFINITE AND SEMI-DEFINITE FORMS. A non-singular Hermitian form h = X'HX in a variables


is called positive definite if its^rank and index are equal to n. Thus, a positive definite Her-
mitian form can be reduced to y^y^ + y^y^ + +Jnyn and for any non-trivial set of values of
the x's, A>0.

A singular Hermitian form h = X'HX is called positive semi-definite if its rank and index
a.:e equal, i.e., rj= p < n. Thus, a positive semi-definite Hermitian form can be reduced
to
yiTi + y2y2 + + yrYr .
r< n, and for any non-trivial set of values of the x's, h>0.
H of an Hermitian form XliX is called positive definite or positive semi-
The matrix

www.TheSolutionManual.com
definite according as the form is positive definite or positive
semi-definite.

V. An Hermitian fom is positive definite if and only if there exists a


non-singular
matrix C such that H = C'C.
VI. If H is positive definite, every principal minor of H is positive, and conversely.
VII. If H is positive semi- definite, every principal minor of H is non-negative, and
conversely.

SOLVED PROBLEM

1 1 + 2i 2 - 3i
1. Reduce X' \-2i 5 -4-2i X to canonical form (18.4).
2 + 3i -4 + 2f 13

Prom Problem 7, Chapter 15,

1 1 + 2j 2-Zi I o" 'l I 0'


1-2/ 5 -4 - 2j I HC
'^^
I 2/\/io i/^To j/yTo
2 + 3i -4 + 2i 13 l_ -1
(-4-4j)/v^ j/a/TO 1/vTo

Thus, the non-singular linear transformation

1 2/VlO (-4 + 4i)/V^


BY 1/VIO -i/yflQ
,0 -j/VTo 1/v^
reduces the given Hermitian form to Jiyr + Jsys - ysys
148 HERMITIAN FORMS [CHAP. 18

SUPPLEMENTARY PROBLEMS
2. Reduce each of the following to canonical form.

1 1 - 3i 2 - 3i
+ 2il
[1 1
X (c) r 1 + 3j 1 2 + 3i X
l-2i 2 J
2 + 3j 2 - 3t 4

I 1-i 3 - 2i

(b) x'\ jx
:]^
(d) r 1+j 2-J X
i" 3 + 2J 2 +J 4

Hint: For (6), first multiply the second row of H by j and add to the first row.

(-l-20/\/3l^ _
Ans. {a) X =
tl

www.TheSolutionManual.com
(.b) X nyi - y-m
V2
1 (-1 + 30/3 -l"

(c) X 1/3 -1 y ; nn - 7272


1

'l (-l+f)/A/2 (-l + 30/\/^


(d) X = 1/^2 (-3-2J)/\/lO y ;
J<iyi - 7272 " 7373

.0 2/VlO

3. Obtain the linear transformation X = BY which followed by Y = IX carries (a) of Problem 2 into (6).

j_ri (-i-2i)/v3i
~
Ti n
Ans. X= -^\ :^' I I \y
v/2"Lo l/v^ Jb ij'

1 1+i -1 1 1+j l+2i

4. Show that X' 1-t 6 -3+J X is positive definite and X' 1-i 3 5 X is positive semi-definite.

-1 -3-J 11 l-2i 5 10

5. Prove Theorems V-VII.

6. Obtain for Hermitian forms theorems analogous to Theorems XIX- XXI, Chapter 17, for quadratic forms.

xi X2 ^n
X-L All hi2 hin n n
7. Prove: ^2 /J21 h22 h^n S _S Tj^-Xj^Xj where 77- is the cofaotor of h^: in H = \h^j\
1=1 j=i

^n K. K. ^nn
Hint: Use (4.3).
chapter 19

The Characteristic Equation of a Matrix

THE PROBLEM. Let Y = AX, where A = [o^], (i,j = 1,2 n). be a lineal transformation over F.
In general, the transformation carries a vector Z= [%i,%2 x^Y into a vector Y = \.%,J-z JriY
whose only connection with X is through the transformation. We shall investigate here the
possibility of certain vectors X being carried by the transformation into KX, where A is either
a scalar of F or of some field 3F ofwhich F is a subfield.

www.TheSolutionManual.com
Any vector X which by the transformation is carried into KX, that is, any vector X for which
(19.1) AX = XX
is called an invariant vector under the transformation.

THE CHARACTERISTIC EQUATION. From (19.1), we obtain

A di^ O^ 2
^21 ^ f^2 ^271
(19.2) \X-AX = (XI-A)X =

~ On2 X a-nn

The system of homogeneous equations (19.2) has non-trivial solutions if and only if

A Oil Oi2 0-tn


a-zi A 022 ci^n
(19.3) \XI-A\ =

c-ni On2 A Onn

The expansion of this determinant yields a polynomial 0(A) of degree re in A which is known as
the characteristic polynomial of the transformation or of the matrix A. The equation
<^(X) =
is called the characteristic equation of A and its roots Ai,A2 A are called the character-
A = Ai is a characteristic root, then (19.2) has non-trivial solutions which are
istic roots of i. If
the components of invariant or characteristic vectors associated with (corresponding to) that
root.
Characteristic roots are also known as latent roots and eigenvalues; characteristic
vectors
are called latent vectors and eigenvectors.

r2
Example 1. Determine the characteristic roots and associated invariant vectors, given A

A-2 -2 -1
The characteristic equation is -1 A-3 -1 = A^' - tA^ + llA and the
-1 -2 A-2
characteristic roots are Ai= 5, A2= 1, A3= 1.

149
A

150 THE CHARACTERISTIC EQUATION OF A MATRIX [CHAP. 19

When A = Ai = 5, (19.2) becomes

3 -2 -ll Vxi 1 -1 Xl

-1 2-1 ^2 = or 1 -1 X2 =

-1 -2 3 xs _X3

3 -2 -1 1 -1
since -1 2 -1 is row equivalent to 1 -1
-1 -2 3

A solution is given by x-i = x^ = xq = i hence, associated with the characteristic root


\

A=5 is the one-dimensional vector space spanned by the vector [l,l,l]'. Every vector
[k.k.kY of this space is an invariant vector of A.

When A = A2=1, (19.2) becomes

-2 -l'

www.TheSolutionManual.com
1 Xl

1 -2 -1 X2 Xl + 2x2 +^3 =

1 -2 -1 _X3_

Two linearly independent solutions are (2,-1,0) and (1,0,-1). Thus, associated with
the characteristic root A = 1 is the two-dimensional vector space spanned by
Xi = [2,-l,0]'
and X2 = [1,0,-1]'. Every vector hX^+kX^ = [2h+k,-h,-ky is an invariant vector of A.
See Problems 1-2.

GENERAL THEOREMS. In Problem 3, we prove a special case (A: = 3) of

I.As If Xk ^f^ distinct characteristic roots of a matrix A and if Xi, X^


Ai, Xj^

are non-zero invariant vectors associated respectively with these roots, the X's are line-

arly independent.

In Problem 4, we prove a special case (ra = 3) of

II. The Ath derivative of cfi(X) = \XI A\ where A is re-square, with respect to A is ,

k\ times the sum of the principal minors of order n-k of the characteristic matrix when k<n,
is re! when k = n, and is when k>n.

As a consequence of Theorem II, we have

III. If Ai is an r-fold characteristic root of an re-square matrix A, the rank of Xil A


is not less than nr and the dimension of the associated invariant vector space is not
greater than r.
See Problem 5.

In particular

Iir. Xi is a simple characteristic root of an re-square matrix A, the rank of Xil


If
is re 1 and the dimension of the associated invariant vector space is 1.

2 2 1

Example 2. For the matrix A = 1 3 1 of Ex. 1, the characteristic equation is <^(A) = (A-5)(A-1) =
1 2 2

0. The invariant vector [l,l,l]' associated with the characteristic root A=5 and the linearly
independent invariant vectors [2,-l,o]' and [l,0,-l]' associated with the multiple root A=l
are a linearly independent set (see Theorem I).

The invariant vector space associated with the simple characteristic root A=5 is of
CHAP. 19] THE CHARACTERISTIC EQUATION OF A MATRIX 151

dimension 1. The invariant vector space associated with the characteristic root A = l, of
multiplicity 2, Theorems III and III').
is of dimension 2 (see

See also Problem 6.

Since any principal minor of A" is equal to the corresponding principal minor ot A, we have
by (19.4) of Problem 1,

IV. The characteristic roots of A and A' are the same.

Since any principal minor of A is the conjugate of the corresponding principal minor of A,
we have

V. The characteristic roots of A and of /I' are the conjugates of the characteristic
roots of A.

By comparing characteristic equations, we have

www.TheSolutionManual.com
VI. If Xi, A2 A are the characteristic roots of an ra-square matrix A and if A; is a
scalar, then AAi, kX^ AA are the characteristic roots of kA.
VII. If Ai, A2, ...,A are the characteristic roots of an n-square matrix A and if & is a
scalar, then \-Lk,\2k \ik are the characteristic roots of 4 -A/.

In Problem 7, we prove
VIII. If a is a characteristic root of a non-singular matrix A, then \A\/a is a character-
istic root of adj A.

SOLVED PROBLEMS

1. If A is re-square, show that

(19.4) 0(A) = |A/-^| = A" + 51 A"-^ + 52 A"-^ + ... + s_,A + (-l)"|i|

where s^ ,
(m = 1, 2 re-1) is (-1) times the sum of all the m-square principal minors of A.

We rewrite \\! - A\ in the form

A-aii 0-ai2 ~ a-in

0-021 A-a22 0-a2n

O-'^ni O-Ons X- ar

and, each element being a binomial, suppose that the determinant


has been expressed as the sum of 2" de-
terminants in accordance with Theorem VHI, Chapter 3. One of these
determinants has A as diagonal ele-
ments and zeroes elsewhere; its value is A". Another is free of
A; its value is (-1)"|^| The remaining
determinants have m columns, {m = 1, 2, ....- 1), of -A and n~m
columns each of which contains just one
non-zero element A.

Consider one of these determinants and suppose that columns numbered h,


of -A
its ia % are columns

After an even number of interchanges (count them) of adjacent


rows and of adjacent columns, the de-
terminant becomes
152 THE CHARACTERISTIC EQUATION OF A MATRIX [CHAP. 19

%.ii "iiA
"is.ii

"inA "imA
"im.-im
(-1)'"
I
'i.'2 ^n I

A ..

"in-k %.fe
%''''OT

.. A

t^.to im!
where Aj i i\ is an m-square principal minor of -4 . Now
^'2 %
(-1)" S ^'^ %
P H.^ ^

n (n -

1) ... (n -m + 1) m

www.TheSolutionManual.com
as (ii, is i^ ) runs through the p = different combinations of 1, 2 n taken at a

time.

1 -4 -1 -4
A =
2 5-4
2. Use (19.4) of Problem 1 to expand |A/-^|, given
-1 1-2 3
-1 4-1 6
We have Sl 1+0-2+6 = 5

1 -4 1 -1 1 -4 5 -4 -2 3
S2 + + + + +
2 -1 -2 -1 6 1 -2 4 6 -1 6

-3 + 2-5+16-9
1 -4 -1 1 -4 -4 1 -1 -4 5-4
SS 2 5 + 2 -4 1 -2 3 + 1 -2 3

-1 1 -2 -1 4 6 1 -1 6 4-1 6

-3 + 16 --
8 + 2 = 7

Ml = 2

A^'
Then \XI - A\ = A'^- 5 + 9A^ - 7A + 2.

3. Let Ai, A"!; As, ^s) A3, ^3 be distinct characteristic roots and associated invariant vectors of A.
Show that Xi, X2, X3 are linearly independent.
Assume the contrary, that is, assume that there exist scalars 0%, 02, 03 , not all zero, such that

(i) a-]_Xi + 0^X2 + 03X3 =

Multiply (1) by A and recall that AX^ = A^ A^^ ;


we have

(II) a^AXi + oqAX^ + agAXg = aiAi^i + 02^2X2 + osAgXg =

Multiply (11) by A and obtain

(III) aiAi^i + osAjA^s + a3A3A^s =

Now (i), (11), (111) may be written as

1 1 1 aiXi
(iv) Ai
Ai Ao
As An
A3 02X2
ooXq =

x\ \l A\ "3^3
1

CHAP. 19] THE CHARACTERISTIC EQUATION OF A MATRIX 153

1 1 1

By Problem 5, Chapter 3, S = Ai As As ^ 0; hence, B~ exists. Multiplying (iv) by B~ , we


\l Al x%
have [aiX-j, a^X^.asXaY = 0. But this requires 01 = 02 = 03=0, contrary to the hypothesis.

Thus. Xi. X^, Xq are linearly independent.

A On -012 -ai3
4. From (j!)(X) = \\1 - A\ = -021 A 022 - 123 we obtain
-<31 -032 A- 033

1 A -Oil O12 -Ol3 A -Oil -ai2 Ois


<^'(A) -021 A-O22 -023 1 -02 A (322 023
- Osi - O32 A-O33 -031 -O32 A-a3g 1

A-O22 A -Oil A -Oil

www.TheSolutionManual.com
-O23I -Ol3 -012
032 A Ossl Os 1 A Og 3 -O2I A 022
the sum of the principal minors of XI - A of order two

1 A -022 -O23 1 A -Oil -oia


<^"(A) + + +
-032 A -033 1 OSI
c A 033 1

1 A -01 1 -0 12
+ +
-021 A-o22 1

2!(A-Oll) + (A-a22) + (A-Ogg)!

2! times the sum of the principal minors of XI -A of order one

0"'(A)

Also <j!.^"\A) = cji^^hX) = ... =

5. Prove: If X^ is an r-fold characteristic root of an re-square matrix A, the rank of X^J -A is not less
than n T and the dimension of the associated invariant vector space is not greater than r.
Since Ai is an r-fold root of <;6(A) = 0, ^SCA^) = ct>(X^) = 4' <.\) = = 4>''^~^\x^) = and cfy'^AXi) + 0. Now
(^"^HX^ is r! times the sum of the principal minors of order n-r of A^/ -A; hence, not every principal minor
can vanish and X^l-A is of rank at least n-r. By (11.2), the associated invariant vector space otXj^l -A,
I.e., its null-space, is of dimension at most r.

6. For the matrix of Problem 2, find the characteristic roots and the associated invariant vector spaces.

The characteristic roots are 1, 1, 1, 2.

1 4 1 4

For A = 2: XI -A
-2 2-5 4
is of rank 3; its null-space is of dimen-
1 -1 4 -3
1 -4
sion 1. The associated invariant vector space is that spanned by [2, 3, -2, -3]'.

5 -4
-1 1 -2
For A= 1: XI -A = is of rank 3; its null-space is of dimen-
1 2 -1
1-4 1-5
sion 1. The associated invariant vector space is that spanned by [3, 6, -4, -5]'
.

154 THE CHARACTERISTIC EQUATION OF A MATRIX [CHAP. 19

7. Prove: If a is a non-zero characteristic root of the non- singular n- square matrix A, then \A\/a is
a characteristic root of adj A.

By Problem 1,

(i) a"+ .!"-% + ^n-i^ + (-1) l'^!

where s^. (i = 1. 2 n - 1) is (-1) times the sum of all f-square principal minors of A, and

l/xZ-adj/ll = m" + Si/^""^ + ... + S^^^/s + (-if ladj^l

where Sj , (/ = 1, 2 n - 1) is (-1)-^ times the sum of the /-square principal minors of adj A

By and the definitions of and Si= (-l)"s^_.,, Sj = (-1)" |-4|s._ S = (-if Ml"-^si,
and
llll-M (6.4)

A\ = \AY' ^ then
|adj
l
s,- S,-
J
,

To -^
Jl 1
,

n-i
|;x/-adj A\ - (-if !(-if ;x" + ._,A^"-' + ._2MlAi"-" + ... + s,|/irV + siMf-V + M 1

and

(^f Ml

www.TheSolutionManual.com
Mr"|/x/-adj 4| = (-if !i + s,(i^) + ... + ._i(f^f-' + (-if /(/^)
\A\ \A\ \A\
Now
/(R) = (-if ii + ,,(1) + ... + ._,(lf- + (-if (If Ml!
and by (i)

a"/(4^) = (-ifia"+ sia"-'+ ... + ^_,a + (-i)"Ml

Hence, M|/Ot is a characteristic root of adj A.

8. Prove: The characteristic equation of an orthoganal matrix


We have

</,(A) = |X/-P| = \\PIP'-P\ = |-PA(^/-P')|


AAA F

=
is a reciprocal equation.

A"|f/-P| = A"c^(i)

SUPPLEMENTARY PROBLEMS

9. For the following matrices , determine the characteristic roots and a basis of each of the associated invariant
vector spaces.

1 -1 1 2 2 -2 -8 -12 2 1 1 1 -1 -1
(a) 12 1 (c) 2 1 (e) 1 4 4 (g) 1 2 1 (O 1 -1
2 2 3 -12 2 1 1 1 -1

1 1 -2 1 -3 -9 -12 2 2 '2 - J '


1
(6) 1 2 1 (rf) 1 (/) 1 3 4 (A) 2 2 (/) 1+t
1 -1 1 -3 3 1_ _0 1_ _i 2-i

2-4 '-1
3 2 5 6-10 7 -1 -6 3'

2 3 2-1 -5 -4 9 -6 1 -2 -3
(0 n)
(A:)
112-1 -3 -2 6 -4 -1 1 1

2 2 2-1 -3 -3 7 -5 -1 -1 -5 3

Ans. (a) 1, [1, -1,0]'; 2, [2, -1,-2]'; 3, [1,-1,-2]'


(b) -1, [1,0, 1]'; 2, [1,3,1]'; 1, [3, 2, 1]'

(c) 1, [1,1,-1]'; 2, [2,1,0]';


(rf) 1, [1,1,1]'
CHAP. 19] THE CHARACTERISTIC EQUATION OF A MATRIX 155

(e) 2, [2,-1,0]'; 0, [4,-1,0]'; 1, [4,0,-1]'


(/) 0, [3,-1,0]'; 1, [12,-4, -1]'
(g) 1, [1,0,-1]', [0,1,-1]'; 3, [1,1,0]'
(A) 0, [1,-1,0]'; 1, [0,0,1]'; 4, [1,1,0]'
(j) -1, [0,1,-1]'; i, [l + j,l,l]'; -J, [l-i.1,1]'
(/) 2, [1,0,1]'; 1+t, [0, 1,0]'; 2-2j, [1,0,-1]'
(/c) 1, [1,0,-1,0]', [1,-1,0,0]'; 2, [-2,4, 1.2]'; 3, [0,3, 1,2]'
(/) 1, [1,2,3,2]'; -1, [-3,0,1,4]'
(m) 0, [2,1,0, 1]'; 1, [3,0, 1,4]'; -1, [3,0, 1,2]'

10. Prove: If A" is a unit vector and if AX = XX then X'AX = X .

H. Prove: The characteristic roots of a diagonal matrix are the elements of its diagonal and the associated in-
variant vectors are the elementary vectors E^

12. Prove Theorems I and VI.

www.TheSolutionManual.com
13. Prove Theorem VII.

Hint. If |A/-.4| =(A-Ai)(A-X2)...(A-A) then \(X + k)I -A\ = (X + k~ X-O (X + k -X2) (X + k -X^^.
14. Prove: The characteristic roots of the direct sum diag(.4i, A2 A^) are the characteristic roots of A^ A^

13. Prove: If A and A' = |


! are n-square and r<n, show that NA and AN have the same characteristic
equation. t:]
16. Prove: If the n-stjuare matrix A is of rank r. then at least n -r of its characteristic roots are zero.

17. Prove: If A and B are n-sauare and A is non-singular, then A~'^B and BA~'^ have the same characteristic
roots.

18. For A and B of Problem 17, show that B and A'^BA have the same characteristic roots.

X9. Let ^ be an n-square matrix. Write \XI -A''^\ = \-XA~'^(^I -A)\ and conclude that iAl IA2 lA are
the characteristic roots of ^4"^

20. Prove: The characteristic roots of an orthogonal matrix P are of absolute value 1.

Hint. If A-, X. are a characteristic root and associated invariant vector of P, then X' X = (PX.Y(PX-) =

21. Prove: If A^ ^ \ is a characteristic root and A^ is the associated invariant vector of an orthogonal matrix
P, then XlXi = 0.

22. Prove: The characteristic roots of a unitary matrix are of absolute value 1.

23. Obtain, using Theorem II,

0(0) = (-i)"M|
<^ (0) = (-1) times the sum of the principal minors of order n - 1 of A

<P (0) = (-1) r! times the sum of the principal minors of order n - r of .4

0^">(O) = n!

24. Substitute from Problem 23 into

c/3(X) = (?i(0) + (^'(0)-X +-^(f^'(0)-X^ + ... +^0'^'^'(O)-A"

to obtain (19.4).
chapter 20

Similarity

TWO ra-SQUARE MATRICES A and B over F are called similar over F if there exists a non-singular
matrix R over F such that

(20.1) B = R'^-AR

2 2 11

Example 1. The matrices A 1 3 1 of Example 1, Chapter 19, and

www.TheSolutionManual.com
1 2 2J

-3 -3] 2
li 2 1 133" 5 14 13]
B = R~'^AR = -110 1 3 1 1 4 3 = 1

-1
E7 ij [1 12 2 13 4 iJ
are similar.

The characteristic equation (A 5)(A 1)^ = of 6 is also the characteristic equationof ^.

An invariant vector of B associated with A= 5 is Y^ = [l,0,0]' and it is readily shown


that X-L = RYi = an invariant vector of A associated with the same
[1, 1, 1]' is characteristic
root A = 5. The reader will show that Fg = [7, -2, O]' and Yg = [n, -3, -2]' are a pair of line-
arly independent invariant vectors of B associated with A = 1 while X^ = RY2 and Xg = RYg
are a pair of linearly independent invariant vectors of A associated with the same root A = 1.

Example 1 illustrates the following theorems:

I. Two similar matrices have the same characteristic roots.

For a proof, see Problem 1.

II. If Y is an invariant vector of B = R~ AR corresponding to the characteristic root


A^ of B, then X = RY is an invariant vector of A corresponding to the same characteristic
root A^ of A.
For a proof, see Problem 2.

DIAGONAL MATRICES. The characteristic roots of a diagonal matrix D = diag(ai, 02 a^) are
simply the diagonal elements.
A diagonal matrix always has n linearly independent invariant vectors. The elementary
vectors E^ are such a set since DE^ = a^E^, (i = 1, 2, ...,n).

As a consequence, we have (see Problems 3 and 4 for proofs)

III. Any re-square matrix A, similar to a diagonal matrix, has n linearly independent
invariant vectors.

IV. If an re-square matrix A has re linearly independent invariant vectors, it is similar

to a diagonal matrix.
See Problem 5.

In Problem 6, we prove
V. Over a field F an re-square matrix A is similar to a diagonal matrix if and only if

\IA factors completely in F and the multiplicity of each A, is equal to the dimension of
the null-space of X.I A.
1/

156
CHAP. 20] SIMILARITY 157

Not every ra-square matrix is similar to a diagonal matrix. The matrix of Problem 6, Chap-
ter 19, is an example. There, corresponding to the triple root X = 1, the null-space of A/-i
is of dimension 1.

We can prove, however,

VI. Every ra-square matrix A is similar to a triangular matrix whose diagonal elements
are the characteristic roots of A.

See Problems 7-8.

As special cases, we have


VII. If A is any real n- square matrix with real characteristic roots, there exists an
orthogonal matrix P such that P'^ AP = P'AP is triangular and has as diagonal elements
the characteristic roots of A.
See Problems 9-10.
VIII. If A is any re-square matrix with complex elements or a real re-square matrix with

www.TheSolutionManual.com
complex characteristic roots, there exists a unitary matrix U such that W'^AU UAU is =--

triangular and has as diagonal elements the characteristic roots of A.


See Problem 11.

The matrices A and P' AP of Theorem VII are called orthogonally similar.

The matrices A and U'^AU of Theorem VHI are called unitarily similar.

DIAGONABLE MATRICES. A matrix A which is similar to a diagonal matrix is called diagonable.


Theorem IV is basic to the study of certain types of diagonable matrices in the next chapter.

SOLVED PROBLEMS
1. Prove: Two similar matrices have the same characteristic roots.

Let A and B = R~^AR be the similar matrices; then

(') A/-B = \I - R'^AR = R-'^UR - R-''aR = R'^(XI-A)R


and
\\I-B\ = \R-^.\XI-A\.\R\ = \XI-A\

Thus, A and S have the same characteristic equation and the same characteristic roots.

2. Prove: If Y is an invariant vector oi B = R ^AR corresponding to the characteristic root A,-, then
X = RY is an invariant vector of A corresponding to the same characteristic root \,- of A.
By hypothesis, BY = X^Y and RB = AR; then
AX = ARY = RBY = RX^Y = X-RY = X^X
and A" Is an invariant vector of A corresponding to the characteristic root X^.

3. Prove: Any matrix A which is similar to a diagonal matrix has re linearly independent invariant
vectors.

Let R AR = diag(6i, 62. > b^) = B. Now the elementary vectors fii, Eg
^n ^'^ invariant vectors
of B. Then, by Theorem n, the vectors Xj = RE. are invariant vectors of A. Since R is non-singular, its
column vectors are linearly independent.
158 SIMILARITY [CHAP. 20

4. Prove: If an n-square matrix A has n linearly independent invariant vectors, it is similar to a

diagonal matrix.

Let the n linearly independent invariant vectors X^. Xq X^ be associated with the respective charac-
teristic roots Ai, Xs. /^n ^ '^^"^ ,4A'^ = A^A'^, 0' = 1, 2 n). L,et R = [X^,X2, ....X^]; then

AR = [AX^.AX^ AX^] = [Ai^i, ^2^2 ^n^n^


Ai ...

A2 ...
1X1, X2 '^n-'
R diag(Ai, A2 A)

Hence, R ^AR = diag(Ai, A2 A).

5. A set of linearly independent invariant vectors of the matrix A of Example 1, Chapter 19, is

www.TheSolutionManual.com
Zi = [1,1,1]', Z2 = [2,-1,0]', ^3 = [1,0,-1]'

1 2 1 1 2 1

Take R = [Jf 1, Z2, ^3! 1 -1 ; then R ^


1-2 1 and
10-1 1 2 -3
0"
1 2 1 2 2 1" 1 2 1 5

R'UR 1 -2 1 1 3 1 1 -1 1

1 2 -3 ,1 2 2_ 1 -1 P 1

a diagonal matrix.

6. Prove: Over a field F an n-square matrix A is similar to a diagonal matrix if and only if Xl-A
factors completely in F and the multiplicity of each A.^ is equal to the dimension of the null-space
of X^I-A.
First, R-'^AR
suppose that diag(Ai, A2, B and that exactly k of these characteristic
^n'
roots are equal to A^. Then X^I - B has exactly k zeroes in its diagonal and, hence, is of rankn-fe; its
null-space is then of dimension n-(n -k)=k. But \l-A = R (Xj^I - B) R~^ thus, A.^/-^ has the same ;

rank n -k and nullity k as has \^I - B


Conversely, let Ai,A2 Ag be the distinct characteristic roots of A with respective multiplicities
r, where ri+r2+...+rc Denote by V^^. V^^, the associated invariant vector spaces.
'i-''2 s
Take Xi^.Xi^. .,Xir- as a basis of the invariant vector space Vf.. (i = 1,2 s). Suppose that there
exist scalars a not all zero, such that
XJ'

(i) (aiiA^ii + 012-^12 + + "iTi^iri) + (o2i''''2i + 022^^22 + ''2'r2^2r2)

+ ... + (osi^si "^


^s^Xs'z "^ * "^ a' STg XciT^))
^'T^ ^STg -

Now each vector Yi = ai^Xu + ai^Xi^ + + air^Xir^) = 0, (i = 1, 2 s). for otherwise, it is an

invariant vector and by Theorem I their totality is linearly independent. But this contradicts (i); thus, the

X's constitute a basis of K and A is similar to a diagonal matrix by Theorem IV.

7. Prove: Every ra-square matrix A is similar to a triangular matrix whose diagonal elements are the

characteristic roots of A.
A and let X-l be an invariant vector of A corresponding to
Let the characteristic roots of A be Ai, Aj
the characteristic root Ai. column of a non-singular matrix Qi whose remaining columns
Take X-i as the first

may be any whatever such that \Qi\ ^ 0. The first column of AQi_ is AX^ = X-^X^ and the first column of
O]'. Thus,
Ql^AQi is Ql^XiXi^. But this, being the first column of Qi'^XiQi. is [Ai,
CHAP. 20] SIMILARITY 159

(i) Qi^AQ^
hi Bil
[o A,j

where A-^ is of order n - 1.

Since |A/ -ft AQ^\ = (X-Xi)\XI - Ai\, and Qi^ AQi_ and A have the same characteristic roots, it
follows that the characteristic roots of Ai are Xg. Ag A. If n = 2, A-i = [A2] and the theorem is proved
with Q = Qi.

Otherwise, let X^ be an invariant vector of Ai corresponding to the characteristic root As- Take X^ as
the first column of a non-singular matrix Qq whose remaining columns may be any whatever such that IftI /^ 0.
Then
= [^2 52I
(il) Q2^A^Q^
[0 A^j

where A2 is of order n - 2. If n = 3, /I2 = [As], and the theorem is proved with Q = Q^-
[0
Q2J

www.TheSolutionManual.com
Otherwise, we repeat the procedure and, after n - 1 steps at most, obtain

/i I2
(iii)
n-2
(?2 <?3
Qn-i
such that (J /1() is triangular and has as diagonal elements the characteristic roots of A.

8. Find a non-singular matrix Q such that Q AQ is triangular, given

9-1 8-9
6 -1 5 -5
5 1 -4 5
4 5 -4
Here |A/-^| = (A^-l)(A^-4) and the characteristic roots are 1,-1,2,-2. Take [5,5,- 1, 3j', an
invariant vector corresponding to the characteristic root 1 as the first column of a non-singular matrix
Qx
whose remaining columns are elementary vectors, say
^5000'
5 10
-10 10
3 1

Then
1 5-1 8-9
1
-5 5 -15 20
and Qi AQ^
5 1 5 4 -12 16 [O Aq
-3 5 3 17
4
A characteristic root of Ai is -1 and an associated invariant vector is [4, 0, -l]'. Take ft 1

i 1 1

then

o"! -20 -15 20] r_i 5I


J_
II and ft A1Q2 20
-48 64 =
-11 48j L "d
A characteristic root of A2 is 2 and an associated invariant vector is [s, ll]'. Take Qs } then
[.:
2/5I
<?3' = H[-11
8 ,! "I
sj
and ft'-42ft [2

-2J
SIMILARITY [CHAP. 20
160

Now
5 32

pi o"| [h o1 5 4 Q'' 1
-40 40
.

[o '
[o Qsj 1 8 160 4 20
Q^j
3 -1 11 1_ -180 40 -220 160_

"l 1 -7 -9/5"
-1 5 1
and
2 2/5
-2

9. If A is any real re-square matrix with real characteristic roots then there exists an orthogonal
matrix P such that P''^AP is triangular and has as diagonal elements the characteristic roots of ^.

Let Ai.Xs X be the characteristic roots of A. Since the roots are real the associated invariant
vectors will also be real. As in Problem 7, let Qi be formed having an invariant vector corresponding to Aj

www.TheSolutionManual.com
as first column. Using the Gram-Schmidt process, obtain from Qt an orthogonal matrix Pi whose first column
is proportional to that of Q-^. Then
hi fill
P'lAP^

where Ai is of order n - 1 and has As, A3, ...Ayjas characteristic roots.

Next, form Qq having as first column an invariant vector of Ai corresponding to the root A2 and, using
the Gram-Schmidt process, obtain an orthogonal matrix P^- Then

TAs Ss"!
iji-g
[0 A^j

After sufficient repetitions, build the orthogonal matrix

pi 0] pn-2
P = Pi. "I
|_o pj [ P_J
for which P~^AP is triangular with the characteristic roots of A as diagonal elements.

10. Find an orthogonal matrix P such that


"2 1"
2

P-^AP = P"" 1 3 1

1 2 2_

is triangular and has the characteristic roots of A as diagonal elements.

Prom Example 1, Chapter 19, the characteristic roots are 5, 1, 1 and an invariant vector corresponding
to A = 1 is [1,0, -1]'.
"
1

We take Qi = 10
1 and, using the Gram-Schmidt process, obtain
-1 1

1 /V2 l/v^
Pi = 1

-l/\/2 1/1/2"

an orthogonal matrix whose first column is proportional to [l, 0, -l]'.

We find
1/a/2 -I/V2 2 2 1 l/>/2 I/V2" 1

Pl'^Pi = 1 1 3 1 1 = 3^/2"
\0 A^j
l/^/T 1/ 21/2" 3
1/1/2" 1 2 2 -1/V2 V2
CHAP. 20] SIMILARITY 161

Now Ax has A = 1 a characteristic root and [l, -\/2]' as associated invariant vector. From Q2 = \_ rn 1 .

r ^ r-n Lv2 ij
2/v/6
we obtain by the Gram-Schmidt process the orthogonal matrix "2=1 1/V3 I
Then
L-2/V6 I/V3J
1/\A2 -1/^^ 1/^6
P = P-, 1/a/3 2/^6
I X]
-I/V2" -I/a/S" 1/V6"

is orthogonal and P~^ AP = 1 ->/2


5

11. Find a unitary matrix f/ such that V All is triangular and has as diagonal elements the charac-
teristic roots of A, given

www.TheSolutionManual.com
5 + 5i 1 +t -6 - 4f

A = -4 - 6i 2 - 2J 6 + 4f

2 + 3i -1 + i -3 - 2i

The characteristic equation of A is A(A^ +(-4-J)A+ 5 - i) = and the characteristic roots are
"
1

0, 1-i, 3 + 2i. For A = 0, take [l, -1, l]' as associated invariant vector and form Qi = -1 1

_ 1 1_
The Gram -Schmidt process produces the unitary matrix

i/\fz i/Ve' -i/\/2


f/i = -l/vA3 2/v^
1/V3" l/\/6" 1/^2"

Now
-2y/2a-i) ~(26 + 24:i)/y/6

1-i (2 + 3i)/\A3"

3 +2J

so that, for this choice of Qi, the required matrix U = Ui.

12. Find an orthogonal matrix P such that P ^AP is triangular and has as diagonal elements the char-
acteristic roots of A, given

3 -1
-1 5

1 -1
The characteristic roots are 2,3,6 and the associated invariant vectors may be taken as [l,0, -l]'.
[1,1,1]', [1,-2,1]' respectively. Now these three vectors are both linearly independent and mutually
orthogonal. Taking

l/V^ l/y/J l/y/6


P = l/y/3 -2/^/6
-1/-/2 l/y/3 l/yje

we find P~^ AP = diag(2,3, 6). This suggests the more thorough study of the real symmetric matrix made
in the next chapter.
162 SIMILARITY [CHAP. 20

SUPPLEMENTARY PROBLEMS
13. Find an orthogonal matrix P such that f'^AP is triangular and has as diagonal elements the characteristic
roots ofA for each of the matrices A of Problem 9(a), (6), (c), (d). Chapter 19.

\l\pZ \/Zs[2 2/3 I/V3" 1/-/2 -1/^/6""

Ins. (a) -l/v/2 \/^^pi 2/3 (c) i/Va" 2/>/6


-4/3a/2 1/3 -1/1/3 1/^ 1/^6"

I/V2" -l/V'2 1/V^ -i/'v^ -i/Ve"'


(b) 1 (rf)
1/V3" 2/V6"
1/^2" 1/V2" I/V3" I/a/2" -1/\/6"

14. Explain why the matrices (a) and (6) of Problem 13 are similar to a diagonal matrix while (c)
and (rf) are
not. Examine the matrices {a)-{m) Problem
Chapter 19 and determine those which are similar to a
of 9,
diagonal matrix having the characteristic roots as diagonal elements.

www.TheSolutionManual.com
15. For each of the matrices A of Problem 9(j). (/), Chapter 19, find a unitary matrix V such that U'^AU is tri-
angular and has as diagonal elements the characteristic roots of A.

l/v^ -(l+i)/2 \l\p2. -1//!


Ans. (i) l/>/2 a-i)/2^/2 2 (/) 1

-0/2/2" i
\/\[2 (1 2 \/^[2 \/y[2

16. Prove: If ^ is real and symmetric and P is orthogonal, then P''^ AP is real and symmetric.

17. Make the necessary modification of Problem 9 to prove Theorem VIII.

18. Let B^ and C^ be similar matrices for (i = 1, 2 m). Show that

S = diag(5i, B2, ...,S) and C = diag(Ci, C2 C)


-1 .

are similar. Hint. Suppose C^ = i?^ B^ fl^ and form ij = diag(Bi, Bg B^).

19. Let B = diag(Bi, Sj) and C = diagCSj, B^). Write / = diagC/i./g), where the orders of /i and /g are those

of Bi and B^ respectively, and define B Show that BT^EK = C to prove B and C are similar.
U oJ

20. Extend the result of Problem 19 to B = diag(Bi, Bg B) and C any matrix obtained by rearranging the
S^ along the diagonal.

21. If A and B are n-square, then AE and B/1 have the same characteristic roots.
Hint. Let P AQ =N; then PABP"^ =NQ''^BP''' and Q~^ BAQ = Q'^BP'^N. See Problem 15, Chapter 19

22. If A-i.A^ A^ are non-singular and of the same order, show thut A-^A^-.. A^, A2As-.. A^A^, A3... A^A-^^A^.
... have the same characteristic equation.

23. Let Q'^AQ = B where B is triangular and has as diagonal elements the characteristic roots Ai, X2 K
of .4.

(a) Show that


S Q A Q is triangular and has as diagonal elements the kth powers of the characteristic
roots of A.

(b) Show that 1 X,- = trace A

24. Show that similarity is an equivalence relation.

'2
2 r 2 1 -1
25. Show that 1 3 1 and 2-1 have the same characteristic roots but are not similar.
_1 2 2_ -3 -2 3
chapter 21

Similarity to a Diagonal Matrix

REAL SYMMETRIC MATRICES. The study of real symmetric matrices and Hermitian matrices may
be combined but we shall treat them separately here. For real symmetric matrices, we have:
I. The characteristic roots of a real symmetric matrix are all real.
See Problem 1.

II. The invariant vectors associated with distinct characteristic roots of a real sym-
metric matrix are mutually orthogonal.

www.TheSolutionManual.com
See Problem 2.

When A is real and symmetric, each B^ of Problem 9, Chapter 20, is 0; hence,


III. If 4 is a real ra-square
symmetric matrix with characteristic roots Ai, \^, ..., A,
then there exists a real orthogonal matrix P such that P'AP = P'^AP = diag(Ai, Aj A).

Theorem III implies

IV. If A^ is a characteristic root of multiplicity r^ of a real symmetric matrix, then


there is associated with A^ an invariant space of dimension r^.

In terms of a real quadratic form, Theorem III becomes


V. Every real quadratic form q = X' AX can be reduced by an orthogonal transformation
X = BY to a canonical form

(21.1) Aiyi + X^yl + ... + X^y"^

where r is the rank of A and Ai, Aj A^ are its non-zero characteristic roots.

Thus, the rank of q is the number of non-zero characteristic roots of A while the index is
the number of positive characteristic roots or, by Descartes Rule of signs, the number of varia-
tions of sign in |A/ ^| = 0.

VI. A real symmetric matrix is positive definite if and only if all of its characteristic
roots are positive.

ORTHOGONAL SIMILARITY. If P is an orthogonal matrix and 8 = P"^ AP then B is said to be or- .

thogonally similar to A. Since P~^ = P', B is also orthogonally congruent and orthogonally
equivalent to A. Theorem III may be restated as

VII. Every real symmetric matrix A is orthogonally similar to a diagonal matrix whose
diagonal elements are the characteristic roots of A.

See Problem 3.

Let the characteristic roots of the real symmetric matrix A be arranged so that Ai ^ A2 =
... ^A. Then diag(Ai, A2, . A.) is a unique diagonal matrix similar to A. The totality of
such diagonal matrices constitutes a canonical set for real symmetric matrices under orthogonal
similarity. We have

VIII. Two real symmetric matrices are orthogonally similar if and only if they have
the same characteristic roots, that is, if and only if they are similar.

163
164 SIMILARITY TO A DIAGONAL MATRIX [CHAP. 21

PAIRS OF REAL QUADRATIC FORMS. In Problem 4, we prove


IX. If X' AX and X'BX are real quadratic forms in
(x-^^, x^ x^) and if X'BX is posi-
tive definite, there exists a real non- singular linear transformation X = CY which carries
X'AX into

AiXi + A2Y2 + + Ay
and X' BX into

yi
22 + 72 +
2
+ Tn
where A^ are the roots of \\BA\=0.
See also Problems 4-5.

HERMITIAN MATRICES. Paralleling the theorems for real symmetric matrices, we have
X. The characteristic roots of an Hermitian matrix are real.

See Problem 7.

www.TheSolutionManual.com
XI. The invariant vectors associated with distinct characteristic roots of an Hermitian
matrix are mutually orthogonal.

XII. If H is an re-square Hermitian matrix with characteristic roots Ai, A2. , A, there
exists a unitary matrix U such that U'HU = U~^HU = diag(Ai, A2, ..., A). The matrix H
is called unitarily similar to U' HU .

XIII. If A is a characteristic root of multiplicity r- of the Hermitian matrix H, then


there is associated with A,- an invariant space of dimension r- .

Let the characteristic roots of the Hermitian matrix H be arranged so that Ai i A2 ^ i An-
Then diag(Ai, A2 A.) is a unique diagonal matrix similar to H. The totality of such diago-
nal matrices constitutes a canonical set for Hermitian matrices under unitary similarity. There
follows
XIV. Two Hermitian matrices are unitarily similar if and only if they have the same
characteristic roots, that is, if and only if they are similar.

NORMAL MATRICES. An n-square matrix A is called normal if AA' = A'A. Normal matrices include
diagonal, real symmetric, real skew-symmetric, orthogonal, Hermitian, skew-Hermitian, and
unitary matrices.

Let .4 be a normal matrix and f/ be a unitary matrix, and write B = U'AU. Then B' = U'A'U
and B'B = U'A'U -U'AU = U'A'AU = U'AJ'U = U'AU -U'A'U = BB'. Thus,
XV. If ^ is a normal matrix and U is a. unitary matrix, then B = U'AU is a normal matrix.

In Problem 8, we prove
XVI. If X^ is an invariant vector corresponding to the characteristic root A/ of a nor-
mal rnatrix A, then X^ is also an invariant vector of A' corresponding to the characteristic
root A^.

In Problem 9, we prove
XVII. A square matrix A is unitarily similar to a diagonal matrix if and only if A is
normal.

As a consequence, we have
XVIII. If A is normal, the invariant vectors corresponding to distinct characteristic
roots are orthogonal.
See Problem 10.
CHAP. 211 SIMILARITY TO A DIAGONAL MATRIX 165

MX. If A- is a characteristic root of multiplicity r^ of a normal matrix A, the associ-

ated invariant vector space has dimension r^.

XX. Two normal matrices are unitarily similar if and only if they have the same char-
acteristic roots, that is, if and only if they are similar.

SOLVED PROBLEMS

1. Prove: The characteristic roots of an re-square real symmetric matrix A are all real.

Suppose that h+ik is a complex characteristic root of A. Consider

B = {(h + ik)I-A\[(h-ik)I-A\ = (hl-Af+k'^I

www.TheSolutionManual.com
which is real and singular since {h + ik)I -A is singular. There exists a non-zero real vector X such that
BX = and, hence,

X'BX = X\h!-AfX + k^X'X = X'(hl-A)'(hl -A)X + k^X'X =

The vector (/(/- 4) A" is real; hence, {(hi - A)X\' {(hi -A)X\ ^ 0. Also, X'X>0. Thus, A: = and
there are no complex roots.

2. Prove: The invariant vectors associated with distinct characteristic roots of a real symmetric
matrix A are mutually orthogonal.

Let A'l and X^ be invariant vectors associated respectively with the distinct characteristic roots Xi and
As of 4. Then
AX^ = Ai^i and AX^ = Aj'^s. also A^2'4^i = X-iX^X^ and Xj^AX2 = XqXiX^

Taking transposes

^1 A X^ = A 1 A*! ^2 and XqAX-j^ = X2X2X1

Then X^X^X^ = A2^iA^2 and, since Ai ^ A2, X[X2 = 0. Thus, X^ and X2 are orthogonal.

3. Find an orthogonal matrix P such that P'^AP is diagonal and has as diagonal elements the char-
acteristic roots of A given
,

^7-2 1

-2 10 -2
1 -2 7

The characteristic equation is

A-7 2 -1
2 A -10 2 A - 24A^ + I8OA - 432
-1 2 A-7
and the characteristic roots are 6, 6, 12.
- -
1 2 -1 Xl
For A = 6, we have 2 -4 2 X2 or Xi - 2x2 + xs = and choose as associated in-

1 2 -1 X3

variant vectors the mutually orthogonal pair X^ = [l, 0, -l]' and X2 = [l, 1, l]'. When A = 12, we take X3 =
[1, -2,1]' as associated invariant vector.
166 SIMILARITY TO A DIAGONAL MATRIX [CHAP. 21

Using the normalized foim of these vectors as columns of P , we have

1/VT 1/x/y X/\f&


P = I/VT -2/sf6
-i/x/T l/VT l/yjl

It is left as an exercise to show that P~^AP = (iiag(6, 6, 12).

4. Prove: If X'AX and X'BX are real quadratic forms in (xi,x2 ) and if X'BX is positive defi-
nite, there exists a real non-singular linear transformation X = CY which carries Z'/lZinto
Xiyi + \2y2 + + ^nTn ^'^'^ ^'B^ into yi + y| + ... + y^, where Xi, Ag, ., A are the roots of
\\B-A\ =0.
By Theorem VII there exists an orthogonal transformation X = GV which carries X'BX into

(i) V'(G'BG)V = jji^vf + fJ.2V2 + + Ait;^

www.TheSolutionManual.com
where jjii.jj.^ /x are the characteristic roots (all positive) of B.

Let H = diag(l/\/7H, X/sfJT^ l/i//!^)- Then V = HW carries (i) into

(ii) W (H G BGH)W 1 2
+ wt

Now for the real quadratic form W (H G'AGH)W there exists an orthogonal transformation W = KY which
carries it into

Y'(K'h'g'AGHK)Y = X^y2 + X^y^ + ... + X^y^

where Xi.Xg X are the characteristic roots ot H G AGH. Thus, there exists a real non-singular trans-
formation X = CY = GHKY which carries X'AX into X^y^ + X^y^ + ... + Xy^ and X'BX into

Y'(K'h'g'BGHK)Y Y (K~ IK)Y -'2


-'i 'n

Since for all values of X,

K'h'g'(\B-A)GHK XK'h'g'BGHK - K'h'g'AGHK = diag(X,X X) - diag(Xi,X2 X)


= diag(X-Xi, X-X2 X-X)
it follows that Xi, Xg X are the roots of |XB-/l| =0

5. Prom Problem 3, the linear transformation

\/^pr l/vA3" l/sfQ 1//6"

X = (GH)W l/VT -2/'/6" 1/^r W


-l/\f2 IATs 1/v^ 1/2x^3"

\/2\fl 1/3n/2 1/(

1/3-/2 -1/: w
_-l/2>/3 1/3\A2 1/f

7 -2 1

carries X'BX = X' 2 10 -2 Z into ir'ZIF.

1 -2 7

The same transformation carries

1/3
X'AX X into tr' 2
0^
CHAP. 21] SIMILARITY TO A DIAGONAL MATRIX 167

Since this is a diagonal matrix, the transformation W = KY of Problem 4 is the identity transformation W = lY.

Thus, the real linear transformation X = CY = {GH)Y carries the positive definite quadratic form X'BX

into ri + y| + and the quadratic form X' AX into ^y^ + iy| + |y|. It is left as an excersise to show that
7l
\kB-A\ = 36(3\-l)(2X-l)^.

6. Prove: Every non-singular real matrix A can be written as A = CP where C is positive definite

symmetric and P is orthogonal.


Since A is non-singular, AA' is positive definite symmetric by Theorem X, Chapter 17. Then there
Q"'' k^) = B with each > Define
exists an orthogonal matrix Q such that AA' Q = diag(A;i, ij A:^ 0.

Bi = diag(\/^, VT^ \fk~^) and C = QB-^Q''^. Now C is positive definite symmetric and

C^ = QB^Q-^QB^Q-^ = QB^Q-^ QBQ- AA'

Define P = C'^A. Then PP' = C'^AA'C'^^ C ^C'^C^ = I and P is orthogonal. Thus A = CP with C
positive definite symmetric and P is orthogonal as required.

www.TheSolutionManual.com
7. Prove: The characteristic roots of an Hermitian matrix are real.

Let \j be a characteristic root of the Hermitian matrix H. Then there exists a non-zero vector X^ such
that HXj^ = A jX^ Now X'^HX^ = XjX'j^Xiis real and different from zero and so also is the conjugate trans-
.

pose X\HX;j^ = \^X\X^. Thus, X^=X.j andX^isreal.

8. Prove: If X: is an invariant vector corresponding to a characteristic root A,- of a normal matrix A,

then Z^ is an invariant vector of A corresponding to the characteristic root X^.

Since A is normal,

(XI-A)(XI-A)' (KI-A)(XI-A') = XXI - \ a' XA + AA


XXI - XA' -XA+ a' A (XI -A)' (XI -A)

so that XI -A is normal. By hypothesis, BX; = (X^I -A)X^= 0; then

(SX^)'(SA'i) = X-B'-BXi = X'iB.B'X^ = (W X ^)' {W X ^) = and B'X^ = (X^/-I')Zi =

Thus, X: is an invariant vector of A' corresponding to the characteristic root X ^.

9. Prove: An re-square matrix A is unitarily similar to a diagonal matrix if and only if A is normal.
Suppose A is normal. By Theorem vni. Chapter 20, there exists a unitary matrix U such that

^1 6i2 bin

As bin

U AU

.... Xfi-i bn-i. n

By Theorem XV, B is normal so that B' B = BB . Now the element in the first row and first column of
B'b is XiXi while the corresponding element of BB is

AjAi + 612^12 + bigbis + ... + bmbm


Since these elements are equal and since each b^j b-^j 2 0, we conclude that each b^j = 0. Continuing
with the corresponding elements in the second row and second column, we conclude that every b^; of B
is zero. Thus, B = diag(Ai,A2 A). Conversely, let A be diagonal; then A is normal
168 SIMILARITY TO A DIAGONAL MATRIX [CHAP. 21

10. Prove: If A is normal, the invariant vectors corresponding to distinct characteristic roots are
orthogonal.
Let Ai, A*! and Xj, ^2 be distinct characteristic roots and associated invariant vectors of A. Then
AXi=\iXi, AX^ = \^X^ and, by_Problem 8. jI'A'i =Xi^i,_I'A'2 = AaATa. Now ^2/lA'i = Ai^a^i and,
taking the conjugate transpose, X[a' X^ = X^X'-^X^. But X[a' X^ = \^X[X2. Thus, Xi^^A's = Xa^i-^s
and, since Xx iXo., XiX2= as required.

11. Consider the conic x^ - 12xix.2. - 4.xi = 40 or

(i) X'AX = X = 40
[-: ::]
referred to rectangular coordinate axes OX^ and OX2.
The characteristic equation of A is

www.TheSolutionManual.com
A-1 6
|A/-^| = = (A-5)(A + 8) =

For the characteristic roots Ai = 5 and A2 = -8, take [3, -2]' and [2, 3]' respectively as associated

tsVTs 2/V/T3I
invariant vectors. Now form the orthogonal matrix P = | '__ " '^lll whose columns are the two
-2/v/T3 3/vT3j
vectors after normalization. The transformation X = PY reduces (i) to

/v/T3 -2/\flf\ f 1 -6~]


f 3/x/T3 2 Vis'
13"! fs o"!

-\y - y'\
5yl
^-y| 40
'VTs 3/VT3. 2/\/l3 3/\/l3. 13 J Lo -Sj

The conic is an hyperbola.

Aside from the procedure, this is the familiar rotation of axes in plane analytic geometry to effect the
elimination of the cross-product term in the equation of a conic. Note that by Theorem VII the result is
known as soon as the characteristic roots are found.

12. One problem of solid analytic geometry is that of reducing, by translation and rotation of axes,
the equation of a quadric surface to simplest form. The main tasks are to locate the center and
to determine the principal directions, i.e., the directions of the axes after rotation. Without at-
tempting to justify the steps, we show here the role of two matrices in such a reduction of the
equation of a central quadric.
Consider the surface Zx^ + 2xy + 2xz + iyz - 2x - 14y + 2z - 9 = and the symmetric matrices
" "
3 1 1 -1
3 1 1
1 2 -7
A = 1 2 and B =
1 2 1
1 2
- - -1 -7 1 -9

formed respectively from the terms of degree two and from all the terms.

The characteristic equation of A is

A-3 -1 -1
1A/-.11 -1 A -2
-1 -2 A

The characteristic roots and associated unit invariant vectors are :

Ai = 1, fi = I
~. ^ . -;=| ; A2 = 4, V2 -2, V3
^N-TfJ
CHAP. 21] SIMILARITY TO A DIAGONAL MATRIX 169

Using only the elementary row transformations Hj(k) and H^:(k), where / / 4,

r- -1 3 1 1 -1 1 -4
Bi 1 2 -7 --VJ
1

2 1 2 1 1 1
^ ft]
-1 -7 1 -9 -4

'3x + y + 0-1=0
Considering B^ as the augmented matrix of the system of equations X +22-7=0 we find

\^ X + 2y +1=0
from Di the solution x = -1, y = 0, z = 4 or C(-l, 0, 4). Prom D^, we have d = -4.

The rank of ^ is 3 and the rank of B is 4; the quadrlc has as center C(-l, The required reduced

111
0, 4).
equation is

XtX'^ + X^y'^ + XsZ^ + d X + 4F - 2Z - 4

The equations of translation are x = x' 1, y = y', z = z' + 4.

www.TheSolutionManual.com
The principal directions are di, 1-2. i^s- Denote by E the inverse of \_v\.v.2,v^. The equations of the
rotation of axes to the principal directions are

i/\A3" -i/i/s" -1/vT


\_X Y Z\- E [X Y Z] 2/vT i/vT i/vT
-i/x/T i/VT

SUPPLEMENTARY PROBLEMS

13. For each of the following real symmetric matrices A find an orthogonal matrix , P such P
that /""^.^P is diago
nal and has as diagonal elements the characteristic roots of A.

2 -1 2 () 1 2 -4 "3
2 2" "4 -1 r
21
(a) 2 . (b) 3 (c) -4 2 -2 id) 2 2 (e) -1 4 -1
-1 2_ _1 2_ _ 2 -2 -1_ _2 4_ _1 -1 4_

i/\/^ 1/VT l/\f2 \/sf& l/v/ 2/3 1/3 2/3


Ans. (a) 1 (b) -2/vT l/v/s" (c) -2/3 2/3 1/3
i/VT -i/v^T -1/VT 1/\A6" i/a/s" 1/3 2/3 -2/3

2/3 1/3 2/3 i/x/T 1/vT i/vT


(d) -2/3 2/3 1/3 (e) 2/^/6' -l/v/T
-1/ 3 --2/3 2/3 yV 6 -1/\A2 i/VT

14. Find a linear transformation which reduces X'BX to X' AX to Xtyl-^X^y^ + Asy^, where
yl-^y'^-^y'l and
Xi are the rootsof \XB - A\ =0, given

-2 l\ r2 fj 7 -4 -4 2 2 2
{a) A = [7
-2 10 -2 , 5=03 (6) -4 = -4 1 -8 , B = 2 5 4
1 -2 7J \\ 2J _-4 -8 1_ 2 4 5

l/\/T 1/3VT 1/3 2/3 2/3 l/3x/l0


/4ns. (a) -2/3x^2" 1/3 (6) 2/3 1/3 2/3^/10
-\/\[2 1/31/2" 1/3 1/3 -2/3 2/3\/l0
.

170 SIMILARITY TO A DIAGONAL MATRIX [CHAP. 21

15. Prove Theorem IV.


Hint. If /'"^^P = diag(Ai, A.1 Xi, X^ + i- '^r-i-2 ^ri)'
"^S" P"\Ai/ -/1)P = diag(0, 0,

'^i -^r + i' '^1 ~'^r+2 ^1 ~^v) '^ ^ ''^"'^ n - r.

16. Modify the proof in Problem 2 to prove Theorem XI.

17. Prove Theorems XII, XIH, and XIX.

18. Identify each locus:

(o) 20x2- 24xi:>;2 + 27%^ = 369, (c) \Q?, x\ - 312%ia:>, + 17 %| = 900,

(h) 3x^+ 2%2 + 3%2 = "i' (''^


*i + 2^:1x2 + Xg = 8

19. Let A be real and skew-symmetric. Prove:

(a) Each characteristic root of A is either zero or a pure imaginary.


(6) 7 + ^ is non-singular, / -.4 is non-singular.

www.TheSolutionManual.com
(c) B = (/ + /iri(/-4) is orthogonal. (See Problem 35, Chapter 13.)

20. Prove: If A Is normal and non-singular so also is A~'^ .

21. Prove: If A is normal, then A is similar to A .

22. Prove: A square matrix A is normal if and only if it can be expressed as H + iK, where H and K are commu-
tative Hermitian matrices.

23. If A is n-square with characteristic roots Xi, A,2 A., then A is normal if imd only if the characteristic

roots of AA are XiAi, X2A2 ^n^rv _ _


Hint. Write C/"^<4(7 = T = [. ], where C/ is unitary and T is triangular. Now tr(rT') = tr(-4/l') requires

ty = for J T* /

24. Prove: If 4 is non-singular, then AA' is positive definite Hermitian. Restate the theorem when A is real

and non-singular.

25. Prove: If A and B are n-square and normal and if A and S' commute, then ^B and BA are normal.

26. Let the characteristic function of the n-square matrix A be

<^(X) = (X-x^f^iX-x^f'^ ...a-'Kfs


and suppose there exists a non-singular matrix P such that

(/) P'^ AP = diag(Ai/.ri' '^2-'r2 -^sV^)

Define by B.- ,
(j = 1, 2 s) the n-square matrix diag(0, 0, ..., 0,/ , 0) obtained by replacing A. by 1
'i
and Xj, / by in the right member of (/) and define
r a i).

T?
E, -
= DR. ,
PBiP~^. = 1,2
.

a s)

Show that

(a) P'^AP ~-
AiBi + A282+ ... + A5B,
(6) A = Ail + A22 + +'^3^5
(c) Every ^ is idempotent.
(d) E^Ej =0 for i ^j.
(e) 1 + 2 + + E3 = /

(/) The rank of ^ is the multiplicity of the characteristic root A^.

(g) (Xil-A)Ei = 0, (J = 1,2 s)

(h) IfpM is any polynomial in x. then p(A) = p(Ai)i + p(A2)2 + + P(^s^^s-

Hint. Establish ^^ = Aii + A22 + + xIe A = Aii + A22 + . + X^E^.


CHAP. 21] SIMILARITY TO A DIAGONAL MATRIX 171

(i) Each E^ is a polynomial in A.


Hint. Define /(A) = (A -Xi)(A-X2) (A -X^) and /^(A) = /(A)/(A-A^), (i = l,2 s). Then

(/) A matrix fi commutes with A if and only if it commutes with every ^


Hint. If B commutes with A it commutes with every polynomial in A.
(k) If A is normal, then each Ei is Hermitian.

(I) U A is non-singular, then


A-^ = Al% + A-2% + ... + A-^fig

(m) If A is positive definite Hermitian, then


,1/2
/f = A^'^ = vTii + VA22 + ... + VAs^s
is positive definite Hermitian.

(n) Equation (6) is called the spectral decomposition of A. Show that it is unique.

27. (o) Obtain the spectral decomposition

www.TheSolutionManual.com
"
24 -20 lO" 4/9 -4/9 2/9" 5/9 4/9 -2/9'
A - -20 24 -10 = 49 -4/9 4/9 -2/9 + 4 4/9 5/9 2/9
_ 10 -10 9_ 2/9 -2/9 1/9 -2/9 2/9 8/9
"
29 20 -lO"
1
(b) Obtain A-^ --
20 29 10
196
-10 10 44_

38/9 -20/9 10/9'

(c) Obtain //^ = -20/9 38/9 -10/9


_ 1 0/9 -10/ 9 23/9_

28. Prove: If A is normal and commutes with B, then A' and B commute.
Hint. Use Problem 26 (/).

29. Prove: If A is non-singular then there exists a unitary matrix U and a positive definite Hermitian matrix H
such that A= HU
Hint. Define H by H^ = AA' and f/ = H~'^A.

30. Prove: If A is non-singular, then A is normal if and only if H and U of Problem 29 commute.

31. Prove: The square matrix A is similar to a diagonal matrix if and only if there exists a positive definite
Hermitian matrix H such that H~^AH is normal.

32. Prove: A real symmetric (Hermitian) matrix is idempotent if and only if its characteristic roots are O's and I's.

33. Prove: If .4 is real symmetric (Hermitian) and idempotent, then r. = tiA.

34. Let A be normal, B = I + A be non-singular, and C = B^^B'.


Prove: (a) A and (S')"^ commute, (6) C is unitary.

35. Prove: If tf is Hermitian, then {I + iHT'^{I ~iH) is unitary.

36. If A is n-square, the set of numbers X' AX where X is 2i unit vector is called the field of values ot A. Prove:
(a) The characteristic roots of A are in its field of values.
(6) Every diagonal element of A and every diagonal element of U'^AU, where U is unitary, is in the field
of values of A.

(c) If A is real symmetric (Hermitian), every element in its field of values is real.
(d) If A is real symmetric (Hermitian),
values its field of is the set of reals Ai i A < A, where Ai is the
least and Ais the greatest characteristic root of A.
.

chapter 22

Polynomials Over a Field

POLYNOMIAL DOMAIN OVER F. Let X denote an abstract symbol (indeterminate) which is assumed
to be commutative with itself and with the elements of a field F. The expression

n n-i p
(22.1) f(X) = cirik + On-^X + + a^X + a^X

where the a^ are in F is called a polynomial in \ over F.

www.TheSolutionManual.com
If every (22.1) is called the zero polynomial and we write f(\) = 0. If a^ ^
oj; = ,
,

degree n and a is called its leading coefficient. The polynomial /(A) =


(22.1) is said to be of
Oq^ = oq ^ is said to be of degree zero; the degree of the zero polynomial is not defined.

If a = 1 in (22.1), the polynomial is called raonic.

polynomials in A which contain, apart from terms with zero coefficients, the same
Two
terms are said to be equal.

The totality of polynomials (22.1) is called a polynomial domain F[\] over F.

SUM AND PRODUCT. Regarding the individual polynomials of F[X\ as elements of a number sys-
tem, the polynomial domain has most but not all of the properties of a field. For example

f(X) + g(\) = g(X) + f(X) and f(X)-g(X) = g(X)-f(X)

If f(X) is of degree m and g(X) is of degree n,

(i) ^(X) + g(A) is of degree TO when m.>n, of degree at most m when m=n, and of degree re

when m<n.
(ii) /(A)-g(A) is of degree m + ra.
If /(A) ?^ while f(X)-g(X) = 0, then g(X) =

If g(A) ^ and h(X)-g(X) = h(X)-g(X), then A(A) = A:(A)

QUOTIENTS. In Problem 1, we prove


I. If /(A) and g(X) ^ are polynomials in F[A], then there exist unique polynomials
/((A)and r(A) in F[A], where r(A) is either the zero polynomial or is of degree less than
that of g(A), such that

(22.2) /(A) = h{X)-g{X) + r(A)

Here, r(A) is called the remainder in the division of /(A) by g(A). If r(A) = 0, g{X) is said
to divide /(A) while g(A) and h{X) are called factors of /(A).

172
CHAP. 22] POLYNOMIALS OVER A FIELD 173

Let /(A) = h(X)-g(X). When g(A.) is of degree zero, that is, when g(A,) = c, a constant,
the factorization is called trivial. A non-constant polynomial over F is called irreducible over
F if its only factorization is trivial.

Example 1, Over the rational field A, -3 is irreducible; over the real field it is factorable as (X+\/3)(X-^j3).
Over the real field (and hence over the rational field) A^+4 is irreducible; over the complex
field it is factorable as (\+2i)(\-2i)

THE REMAINDER THEOREM. Let f(X) be any polynomial and g(X) = X- a. Then (22.2) becomes
(22.3) /(A) = h(X)-(X-a) + r

where r is free of A. By (22.3), f(a) = r, and we have

n. When /(A) is divided by A- a until a remainder free of A is obtained, that remainder


is f(a).

www.TheSolutionManual.com
m. A polynomial /(A) has X- a as a factor if and only if f(a) = 0.

GREATEST COMMON DIVISOR. If h(X) divides both f(X) and g(A), it is called a common divisor of
f(X) and g(A).
A polynomial d(X) is called the greatest common divisor of /"(A) and g(X) if

(i) d(X) is monic,

(ii) d{X) is a common divisor of /(A) and g(X),

(Hi) every common divisor of /(A) and g(X) is a divisor of d(X).

In Problem 2, we prove
IV. If /(A) and g(X) are polynomials in F[X], not both the zero polynomial, they have
a unique greatest common divisor d(X) and there exist polynomials h(X) and ^(A) in F[X]
such that

(22-4) d(X) = h(X)-f(X) + k(X) g(X)

See also Problem 3.

When the only common divisors of /"(A) and g(X) are constants, their greatest common divisor
is d(X) = 1.

Example?. The greatest common divisor of /(A) = +4)(A^+3A+ and g(A) =


(A^ 5) (A^ -1)(A^ + 3A+5) is
A +3A+5. and (22.4) is

A.= + 3A + 5 = i/(A) - lg(A)


5

We have also (1-A)-/(A) + (X+4)-g(X) = 0. This illustrates

V. If the greatest common divisor of /(A) of degree re>0 and


g(X) of degree m>0 is
not 1, there exist non-zero polynomials a(X) of degree <m and 6(A) of degree <n such
that

a (A) -/(A) + b(X)- g(X) =

and conversely.
See Problem 4.

RELATIVELY PRIME POLYNOMIALS. Two polynomials are called relatively prime if their greatest
common divisor is i.
174 POLYNOMIALS OVER A FIELD [CHAP. 22

VI. If g(A) is irreducible in F[X] and /(A) is any polynomial of F[X], then either g(X)
divides /(A) or g(X) is relatively prime to /(A).

VII. If g(A) is irreducible but divides /(A) h(X), it divides at least one of /(A) and h(X).
VIII. If /"(A) and g:(A) are relatively prime and if each divides A(A), soalsodoes /(A)-g(A).

UNIQUE FACTORIZATION. In Problems, we prove

IX. Every non-zero polynomial f(X) of F[A] can be written as

(22.5) /(A) = c
9i(A)
?2(A) . . .
^^(A)

where c ^ is a constant and the qi(X) are monic irreducible polynomials of F[X]-

www.TheSolutionManual.com
SOLVED PROBLEMS

1. Prove: If /(A) and g(X)^0 are polynomials in F[X], there exist unique polynomials h(X) and r(X)
in F[X], where r(A) is either the zero polynomial or is of degree less than that of g(X), such that
(i) /(A) = h(X)-g(X) + r(X)

Let
n n-i
/(A) = o^jA + %1-iA. + + a^A + Oq

and

g(A) = h^X + i^-iA + + 6iA + 6o. ^m ?^

Clearly, the theorem is true if /(A) = or if n<m. Suppose that n>m; then

/(A) - 7^ A g(A) = /i(A) = cpX + c^-iA + + Co

is either the zero polynomial or is of degree less than that of /(A).

If /i(A) = or is of degree less than that of g(A), we have proved the theorem with h(X) = ~X and
r(X) = /i(A). Otherwise, we form

/(A) - ^A"-\(A) - ;^A^-\(A) = /,(A)

Again, f^X) =0 or is of degree less than that of g(A), we have proved the theorem. Otherwise, we repeat
if

the process. Since in each step, the degree of the remainder (assumed ^ 0) is reduced, we eventually reach
a remainder r(A) = 4(A) which is either the zero polynomial or is of degree less than that of g(A).

To prove uniqueness, suppose


/(A) = h(X)-g(X) + r(X) and /(A) = 'c(A)-g(A) + s(A)

where the degrees of r(A) and s(\) are less than that of g(A). Then

h{X)-giX) + r(X) = k(X)-g(X) + s(X)


and
[k(X) -h(X)]g(X) = r(X) - s(X)

Now r(A)-s(A) is of degree less than m while, unless A:(A) - h(X) = 0. [k(X) - A(A)]g(A) is of degree equal
to or greater than m. Thus, k{X) - h(X) = 0, r(A) - s(A) = so that k{X) = h{X) and r(A) = s(A). Then both
h(X) and r(A) are unique.
)

CHAP. 22] POLYNOMIALS OVER A FIELD 175

2. Prove: If f(X) and g(A) are polynomials in F[X], not both zero, they have a unique greatest common
divisor d(\) and there exist polynomials A(A) and k(\) in F such that

(a) da) = h{X)-f(X) + ka)-g(\)

If, say, /(A) = 0, then d(X) = bin, g(X) where i^ is the leading coefficient of g(k) and we have (a) with
A(A) = 1 and k(X) = 6 .

Suppose next that the degree of g(A) is not greater than that of /(A). By Theorem I, we have

(*) /(A.) = ?i(A)-g(A) + ri(A)

where r^^(X) =0 or is of degree less than that of g(A). If r^(X) = 0, then d(X) = b'^^giX) and we have (a) with
h(X) = and k(X) = 6^ .

If r^(A) 7^ 0, we have
(ii) g(A) = g2(A)-ri(A) + r^iX)

where /^(A) = or is of degree less than that of ri(A). If r^(X) = 0, we have from ( i

www.TheSolutionManual.com
HiX) = /(A) - ?i(A)-g(A)

and from it obtain (a) by dividing by the leading coefficient of ri(A).

If r^(X) 4 0, we have
(i") 'i(A) = 93(A) TsCA) + rsCA)

where rgCA) = or is of degree less than that of r^(X'). If rg(A) = 0, we have from ( i) and (ii)

'2(A) = g(A) - ?2(A)-'-i(A) = g(A) - ?2(A)[/(A) - 9i(A)-g(A)]


= -92(A) -/(A) + [1 + 9i(A)-92(A)]g(A)

and from it obtain (a) by dividing by the leading coefficient of r2(A),

Continuing the process under the assumption that each new remainder is different from 0, we have, in
general,

(v) 'i(A) = 9i+2(A)-'-i+i(A) + ri+2(A)

moreover, the process must conclude with

(V) 's-2(A) = +
9s(A)-'-s-i(A) r^cA), &(A)?^0
and
(^^ 's-i(A) = 9s+i(A)-'-s(A)

By (vi), rjCA) divides rs-i(A), and by (v). also divides rs_2(A). Prom (iv). we have

's-3(A) = 9s-i(A)-'s-2(A) + r5_i(A)

so that rs(A) divides r^-gCA). Thus, by retracing the steps leading to (vi), we conclude that rs(A) divides
both /(A) and g(A). If the leading coefficient of rs(A) is c, then rf(A) = e~^'s(A).

Prom(l) ri(A) = /(A) - 91(A) g(A) = /!i(A) -/(A) + /ci(A) g(A) and substituting in (ii)

'2(A) = -52(A). /(A) + [l + 9i(A)-92(A)]g(A) = A2(A)-/(A) + i2(A)-g(A)

Prom (iii), /3(A) = t^(K) - qa(>^) r^(X) . Substituting for r^(X) and r2(A), we have
'3(A) = [1 + 92(A) 93(A)]/(A) + [-9i(A) - 93(A) - 9i(A) 92(A) 93(A)]g(A)
= h3(X)-f(X) + 43(A) g(A)

Continuing, we obtain finally,

rs(A) = hs(X)'f(X) + ksiX)-g(.X)

Then d(X) = c-\^(X) = c-i/5(A)-/(A) + c-iA:s(A)-g(A) = h(X)-f(X) + k(X)-g(X) as required.


The proof that d(X) is unique is left as an excercise.
176 POLYNOMIALS OVER A FIELD [CHAP. 22

3. Find the greatest common divisor d(X) of


/(A) = 3A^ + 1)C + llA + 6 and g(\) = )C + 2)i-)^-X+2
and express d(\) in the form of Theorem III.

We find

(i) /(A) = (3A+l)g(A) + (A' + 4A^+6A+4)

(li) g(A) = (A-2)(A^ + 4A^ + 6A+4) + (A^ + 7A+10)

(iil) A^+4A2+6A + 4 = (A-3)(A^ + 7A+10) + (17A + 34)


and
(Iv) A^ + 7A + 10 = (^A + p^)(17A+34)
The greatest common divisor is J-(17A+34) = A + 2 .

Prom (iii),

17A +34 = (A + 4A^ + 6A + 4) - (A - 3)(A^ + 7A + 10)

www.TheSolutionManual.com
Substituting for A^ + 7A + 10 from (ii)

17A +34 = (A^ + 4>f + 6A + 4) - (A - 3)[g(A) - (A- 2Xi^ + 4;? + 6A+4)]


(A^ - 5A + 7)(A^ + 4>f + 6A + 4) - (A - 3)g(A)

and for A + 4A^ + 6A + 4 from ( i )

17A +34 = (A^ - 5A + 7)/(A) + (-3A + 14A^ - 17A - 4)g(A)


Then

A + 2 = i(A^ - 5A + 7) -/(A) + i(-3A + 14A^ - 17A - 4)-g(A)

4. Prove: If the greatest common divisor of /(A) of degree re> and g(X) of degree m > is not 1,

there exist non-zero polynomials o(A) of degree < m and b(X) of degree < n such that
(a) a(X)-f(X) + b(X)-g(X) =

and conversely.

Let the greatest common divisor of /(A) and g(A) be d(X) ^ 1 ; then

/(A) = rf(A)-/i(A) and g(A) = d(X)-g^(X)

where /i(A) is of degree <n and gi(A) is of degree <m. Now

gi(A)-/(A) = gi(A)-rf(A)-/i(A) = g(A)-/i(A)

and

gi(A)-/(A) + [-/i(A)-g(A)] =

Thus, taking a(A) = gi(A) and 6(A) = -/i(A), we have (a).

Conversely, suppose /(A) and g(A) are relatively prime and (a) holds. Then by Theorem IV there exist
polynomials h(X) and ^(A) such that

A(A)-/(A) + k(X)-g(X) = 1

Then, using (a),

a(X) = a(X)-h(X)-f(X) + a(A)-A:(A)-g(A)

= -b(X)-h(X)-g(X) + a(A)-i(A)-g(A)

and g(A) divides a(A). But this is impossible; hence, if (a) holds, /(A) and g(A) cannot be relatively prime.
CHAP. 22] POLYNOMIALS OVER A FIELD 177

5. Prove: Every non-zero polynomial /(A) in F[\] can be written as

/(A) = c-q^(X).q^a) ...


qr(\)
where c ,^ is a constant and the q^(\) are monic irreducible polynomials in F[A].

Write

(i) /(A) = a-/i(A)

where a is the leading coefficient of /(A). If /^(A) is irreducible, then (i) satisfies the conditions of the
theorem. Otherwise, there is a factorization

(") /(A) = n-g(A)-A(A)

If g(A) and /!(A) are irreducible, then (ii) satisfies the conditions of the theorem. Otherwise, further factor-
ization leads to a set of monic irreducible factors.

To prove uniqueness, suppose that

and a p^(X) PaCA)

www.TheSolutionManual.com
"n 9i(A) ?2(A) . .
. 9r(A)
. . .
p^CA)

are two factorizations with r<s. Since 91(A) divides it must divide some one of the
Pi(A)-p2(A) ... ps(A),

Pi(A) which, by a change in numbering, may be taken as p^(\). Since Pi(A) is monic and irreducible, ?i(A) =
Pi(A). Then 92(A) divides P2(A)-p3(A) and, after a repetition of the argument above, 92(A) = P2(A).
... Ps(A)
Eventually, we have 9i(A) = Pi(A) for r and Pr-^i(A)
i = 1,2 Pr+2(A) ... p^cA) = 1. Since the latter equal-
ity is impossible, r = s and uniqueness is established.

SUPPLEMENTARY PROBLEMS

6. Give an example in which the degree of /(A) + g(A) is less than the degree of either /(A) or g(A).

7. Prove Theorem III.

8. Prove: If /(A) divides g(A) and h{\). it divides g(A) + h(\).

9. Find a necessary and sufficient condition that the two non-zero polynomials
/(A) and g(A) in F[X] divide
each other.

10. For each of the following, express the greatest common divisor in the form of Theorem IV
() /(A) = 2A^-A=+2A2-6A-4, g(A) = A*-A=-A2+2A-2
(6) /(A) = A^- A" - 3A= - 11A+ 6 , g(A) = A= - 2A" - 2A - 3
(c) /(A) = 2A^ + 5A^+ 4A= - A^ - A + 1, g(A) = A% 2A% 2A + 1
(d) /(A) = 3A^ - 4A=' + A^ - 5A + 6, g(A) = A= + 2A -h 2
Ans. (a) A=-2 = - ^ (A- 1)/(A) + i (2A=+ l)g(A)

(6) A- 3 = -l_(A+4)/(A) + l-(A^+5A+5)g(A)


to lo

('^) ^+1 = ^(A+4)/(A)-^ ^(-2A=^-9A2_2A + 9)g(A)

^"^^ ^ ^ jL(5A+2)/(A) + ^^{-l5X^+4.A>?~55X+ib)sa)

11. Prove Theorem VI.


Hint. Let d{\) be the greatest common divisor of /(A) and g(A) then g(A) = d{X)-h(X)
;
and either d(X) or
h{X) is a constant.
178 POLYNOMIALS OVER A FIELD [CHAP. 22

12. Prove Theorems VII and VIII.

13. Prove: If /(A) is relatively prime to g(A) and divides g(X)-a(X), it divides a( A).

14. The least common multiple of /(A.) and g(A) is a monic polynomial which is a multiple of both /(A) and g(A),
and is of minimum degree. Find the greatest common divisor and the least common multiple of

(a) /(A) = A^- 1, g(A) = A=- 1

(b) /(A) = (A-l)(A+lf(A+2). g(A) = (A+l)(A+2f(A-3)


Ans. (a) g.c.d. = A-l; l.c.m, = (A^- l)(A^ + A+ 1)

(b) g.c.d. = (A+i)(A + 2); l.c.m. = (A- l)(A+ if (A + 2f (A- 3)

15. Given 4 = I 2 1 2 I , show


12 ;2 ij 3

www.TheSolutionManual.com
(o) (f)(k) = A^ - SA^ - 9A - 5 and (f)(A) = A - sA!^ - 9A - 51 =

(b) m(A) = 0. when m(A) = A^ - 4A - 5 .

16. What property of a field is not satisfied by a polynomial domain?

17. The scalar c is called a root of the polynomial /(A) if /(c) = 0. Prove: The scalar e is a root of /(A) if

and only if A- c is a factor of /(A).

18. Suppose /(A) = (A-c)^g(A). (a) Show that c is a root of multiplicity k-l of /'(A), (b) Show that c is a

root of multiplicity k> 1 of /(A) if and only if c is a root of both /(A) and /'(A).

19. Take /(A) and g(A), not both 0, in F[\] with greatest common divisor d(X). Let K be any field containing F.
Show that if D(A) is the greatest common divisor of /(A) and g(A) considered in K[A], then D (A) = d(X).

Hint: Let d(X) = h(\)- f(\) + k(X)- g(\). /(A) = s(X)-D(X). g(A) = t(X)-D(X). and D(X) = c{X)-d(X).

20. Prove: An n-square matrix A is normal it A '


can be expressed as a polynomial

a^A + a^^^A + ... + aiA + oqI

in /I.
chapter 23

Lambda Matrices

DEFINITIONS. Let F[X] be a polynomial domain consisting of all polynomials in A with coefficients
in F. A non-zero mxn matrix over F[Aj

oii(A) ai2(A) ... ai(A)


(23.1) ^(A) asiCA) a22(A.) ... CsnCA)
[ij (A)]

www.TheSolutionManual.com
Omi(A) a^^iX.) ... aj^n(>^)

is called a A-matrix.

Let p be the maximum degree in A of the polynomials a^jiX) of


(23.1). Then A(X) can be
written as a matrix polynomial of degree p in A,
sp-l
(23.2) ^(A) Ap\^ + A.^X^-^ + ... + A,\ + Ao
where the ^^ are mxn matrices over F.

A^+A + l A* + 2A= + 3A^ +


Example 1. '^(A) 5]
A= - 4 A^ - 3A=
.
J
1 1 5
A^ + A +
u -4
-J L J
is a A-matrix or matrix polynomial of degree four.

IfA(X) is ra-square, it is called singular or non-singular


according as \A(X)\ is oris not
zero. Further. A(X) is called proper or improper according
as A^ is non-singular or singular
The matrix polynomial of Example 1 is non-singular and
improper.

OPERATIONS WITH A-MATRICES. Consider the two n-square A-matrices or matrix polynomials
over F(X)

(23.3) ^(A) A^X^ + A .P-'


+ Ai^X + Aq
p -1
and

(23.4) B(X) + S^_, '


+
a"^ + ... SiA + So
The matrices (23.3) and (23.4) are said to be equal, ^(A) = 5(A), provided
p=a andA-VI,'
^ ^
= B-
(j = 0,1,2 p).

The sum A(X) + S(A) is a A-matrix C(A) obtained by adding corresponding


elements of the
two A-matrices.

The product /1(A) B(A) is a A-matrix or matrix polynomial of degree at


most p+q. If either
^(A) or B(A) is non-singular, the degree of ^(A) 5(A) and also of
5(A) /1(A) is exactly p + 9.

The equality (23.3) is not disturbed when A is replaced throughout


by any scalar k of F
For example, putting A = A in (23.3) yields

A(k) = Apk^ + Ap_.^k^~^ + ... + A^k ^ Ao

179
180 LAMBDA MATRICES [CHAP. 23

However, when \ is replaced by an re-square matrix C, two results can be obtained due to the
fact that, in general, two ra-square matrices do not commute. We define

(23.5) Ag(C) A^C^ + A^_,C^


p-i
^ + ... + A,_C + Ao

and

(23.6) 4 (J) - C^A^


. ^^ . C^
+ . ^ A^
^p 1
+ + ^^^i + ^o
called respectively the right and left functional values of A(\).

Example Let A
Jx^
A' XmI
X o1. To i1. To il
and C =
1 2
2. (X)
\X-2 X'' + 2j P
[O ij [l oj [-2 2j _3 4_

[lo 15
Then Ajf(C)
[o ij [3 ' [1 4) ' [-2 14 26
4J oJ [3 2J
j,nd

9 12

www.TheSolutionManual.com
Al(C)
[17 27

See Problem 1.

DIVISION. In Problem 2, we prove


I. If A(\) and B{X) are matrix polynomials (23.3) and (23.4) and if Bq is non-singular,
then there exist unique matrix polynomials Qi_(X), RxiX); Q2(X), R2(X), where /?i(A) and
/?2(A) are either zero or of degree less than that of 6(A), such that

(23.7) .4(A) = (?a(A)-S(A) + Ri(A)

and

(23.8) A(X) = S(A)-(?2(A) + R2(X)

If /?i(A)=0, S(A) is called a right divisor of i(A); if /?2(A) = 0, 6(A) is called a left di-
visor of /1(A).

jV + A"^ + A - 1 A% A% A + 21 X%
Example 3. If A (A) =
-A + 2A
and B (A) '

a^aJ
'1 then
L 2A^ 2A''
J
Ta^-i A-iirA^+i ii r 2A 2A+3
+
'4(A) = Q-l(X)-B(X) Ri(X)
^ -2A_
|_ 2A 2
JL A A'^+aJ j_-5A

and

A(X) B{k)-Q2{X)
1_ A A^ + aJLA-I ij
Beie, B (A) is a left divisor of A (A)
See Problem 3.

A matrix polynomial of the form

(23.9) 6(A) = bqX"-!^ + 5,_,A'?"'-/ + ... + b,X I + bol^ = i(A)-/

is called scalar. A scalar matrix polynomial B(A) = fc(A)-/ commutes with every ra-square ma-

trix polynomial.

If in (23.7) and (23.8), 6(A) = b(X) I. then

(23.10) A(X) = Q^(X)- B(X) + Rt(X) = B(X)- Q^(X) + R^X)

rA^ + 2A A + ll
Example 4. Let A (A) and B(A) = (A + 2)/2. Then
LA^^-I 2A+1J
CHAP. 23] LAMBDA MATRICES 181

rA + 2 If A ll To -ll

If /?i(A) = in (23.10), then ^(A) = 6(A)- / (?i(A) and we have


II. A
matrix polynomial i(A) = [oy (A)] of degree n is divisible by a scalar matrix pol-
ynomial S(A) = 6(A) / if and only if every oy (A) is divisible by 6(A).

THE REMAINDER THEOREM. Let A(X) be the A-matrix of (23.3) and let B =[6^,-] be an re-square
matrix over F. Since Xl - B is non-singular, we may write
(23.11) A(\) = (2i(A)-(A/-S) + /?!

www.TheSolutionManual.com
and
(23.12) A(\) = (A/-fi)-(?2(A) + R2

where R^ and Kg are free of A. It can be shown


m. If the matrix polynomial A(X) of (23.3) is divided by \I -B, where B =[b^-] is ra-
''
re, until remainders ^1 and R2, free of A, are obtained, then

. = Aj^(B) = A^B^ + Ap_^B^~' + ... + A,B + Ao


and
a, = Aj^(B) = B^Ap + B^~'a^_^ + ... + BA, + ^
Examples. Let ^(A) = I ."^M and \I - B = r~^ "^ The
LA-2 A=' + 2j |_-3 A-4J
[a + I 3 IfA-l -2I flO isl
'^'
[4 A+
4JL-3 A-4J " [14 26j =

and

'^'^

Prom Example
=
U a'-JH' a!4J
^
[17 27] = (^'-B^Q^i^^-^^

2, R^ = Ag(B) and j?2 = A^(B) in accordance with Theorem III.

When ^(A) is a scalar matrix polynomial

'4(A) = /(A)-/ = apIX^ + ap.jx''^ + ... + ai/A + Oq/

the remainders in (23.11) and (23.12) are identical so that

^1 = ^2 = apB^ + ap_^B^~^ + ... + oifi + aol


and we have

IV. If a scalar matrix polynomial /(A)-/ is divided by A/-B until a remainder/?.


free of A, is obtained, then R = f(B).

As a consequence, we have
V. A scalar matrix polynomial /(A)-/ is divisible by KI^- B if and only if f(B) = 0.

CAYLEY-HAMILTON THEOREM. Consider the re-square matrix A = [o,-,-] having characteristic ma-
ijj
trix Xl - A and characteristic equation 0(A) = \Xl - A\ = 0. By (6.2)

(A/-i)-adj(A/-i) = (f>(X)-I
182 LAMBDA MATRICES [CHAP. 23

Then ^(A) / is divisible by XI - A and, by Theorem V, (^(A) = 0. Thus,

Vl. Every square matrix A = [oj,-] satisfies its characteristic equation <fi(X) = 0.

Example 6. The characteristic equation of A is A - 7>i + llX - 5 = 0. Now


V.i
32 62 31

31 63 31
31 62 32

and
"2
32 62 3l" 7 12 6 2 r 1 o"

31 63 31 - 7 6 13 6 + 11 1 3 1 - 5 1

31 62 32_ 6 12 7. _1 2 2 p 1_

See Problem 4.

www.TheSolutionManual.com
SOLVED PROBLEMS

1. For the matrix A(X) [,


A+ 1 ij
compute -4
^p(C)
and 4r(C) when C=
L 2J
.

^'<'^' =
'2 "3
i o]C ' C o][o ' c 3

and

there
2. Prove: ^(A) and B(A) are the A-matrices (23.3) and (23.4) and if Bq is non-singular, then
If

exist unique polynomial matrices Q-,(X), i(A); QsO^). R2M. where Ri(A)
and R^iX) are either

zero or of degree less than that of 6(A), such that

(i) ^(A) = QAX)-B(X) + Ri(X)

and
(ii) ^(A) = 6(A) (?2(A) + R^iX)

If p < q, then (I) holds with (3i(A) = and fii(A) = A(X). Suppose that p iq; then

4(A) - ApB-^B(X)X^''^ = C(A)

where C(A) is either zero or of degree at most p - 1.

If C(A) is zero or of degree less than q, we have (i) with

(3i(A) = ApB'q'-^''^ and R^iX) = C(A)


CHAP. 23] LAMBDA MATRICES 183

If C(X) = C^X + ... where s> q. form

A(\) - ApB-^''B(\)\P-1 - C^S-1B(A)X^-'? D(X)

If D(X) is either zero or of degree less than q, we have (i) with

Qi(\) = ApB-'XP'^
Aj^B-^XP'^l .C^B'^^X'-^
+ C^B^^X^-'f and Ri(X) = 0(A)

otherwise, we continue the process. Since this results in a sequence of matrix polynomials C(A), D(A), ...

of decreasing degrees, we ultimately reach a matrix polynomial which is either zero or of degree less than
g
and we have (i).

To obtain (II), begin with

^(A) - B(X)B-^''ApXP~1

This derivation together with the proof of uniqueness will be left as an exercise. See Problem 1, Chapter 22.

rA^ + 2A=-l A'-A-l]


3. Given ^(A) and S(A)

www.TheSolutionManual.com
L A + A'
A" A^ + 1 A=
A" + 1
J L-A^ + 2 A^ - A
|_
J
find matrices Q-l(X), Ri(X); (?2(A), R2(X) such that
(a) A(X) = Qi(X)-B(X) + Ri(X). (b) A(X) = S(A)-(?2(A) + R^iX) as in Problem 2.

We have

and

r 2 -ii -, Tl ii
Here, So and S2 =
ij [i
L-1 2J
(a) We compute

3 1' -2 -1'
1] 2 fo -l"! f-l
^(A) -^4S;^5(A)A^ = A + ^

1 1 10 A + A +
11 C(A)

2 2 -10 3] P-i -1I


C(A) - C3S2^S(A)A = A2 +
,

3 1 -6 2
J
A +
L
11 _
= D{X)

and
-6 5"] ["-13 3 -6A-13 5A + 3
D(A) - D^B^^BCX) =
-2 ^ + -9 5 -2A-9 3A + 5
fli(A)
[_
3J
'1
1
Then (?i(A) = (A^X^ + Cs A +O2 }B^-^ A= +
--

p q
P 5] A + P ^1

+ 4A + 4 A^ + sA + e]
fA^
2A+ 4 3A + 5
J
(b) We compute

^(A) - B(A)S2^'44A^ E(X)

(A) - 5(A) B2 ^sA

and

F(X) - B(X)B2 F^ ftsCA)


184 LAMBDA MATRICES [CHAP. 23

Then QziX) = B2^(A^>? + 3 A + F2)

[A^+4A + 4 2X + 2I

A^ + 6A + 9 3A + 5J

1 1 2

4. Given A 3 1 1 use the fact that A satisfies its characteristic equation to compute A"

2 3 1

and A ; also, since A is non-singular, to compute A and A'

A-1 -1 -2
A/-^| -3 A-1 -1 A^-3A^-7A-11 =

-2 -3 A-1

www.TheSolutionManual.com
Then
"8 8 5" "l 1 2" "1 0" 42 31 29
3A +1 A + 11/ 8 7 8 + 7 3 1 1 + 11 1 = 45 39 31

_13 8 8_ 2 3 1_ 1 53 45 42

42 31 29" '8 8 5' "1


1 2 193 160 144
^* = 3/ +7/ + UA = 3 45 39 31 + 7 8 7 8 + U 3 1 1 = 224 177 160

_53 45 42_ 13 8 8 _2 3 1_ 272 224 193

From 11/ = -1 A -3-4 +A , we have


'1 0' '1
1 2" 8 8 5

- 34 +/II 1 - 3 3 1 1 + 8 7 8
-J-7/ 11
_0 1_ 2 3 1_ _13 8 8_

-2 5 -1
-1 -3 5
11
7 -1 -2

-1" "1 0" i 2


2 5 1
i{_7^-i -3I + A\ 1 -3 5 - 33 1 + 11 3 1 1
121
7 -1 -2 1 2 3 1

-8 -24 29'

40 -1 -24
121
-27 40 -8

5. Let Ai, As, ,^n be the characteristic roots of an re-square matrix A and let h(x) be a polynomial
of degree p in x:. Prove that \h(A)\ = h(\)- h(X2) h(X^).

We have

(i) \XI-A\ = (A-Ai)(A-Ao)...(A-A^)

Let

(ii) h(x) = c(,si- x)(s2-x) ...(Sp - x)

Then
h(A) = c(sil-A)(S2l-A)...(spI-A)
4

CHAP. 23] LAMBDA MATRICES 185

and

|A(^)| = c'P\siI-A\-\s2l-A\ ...\sp!-A\


!c(si-Ai)(si-A2) ... (si-A)i
Jc(S2-Ai)(S2-A.2) ... (S2-X)! ... {c(Sp -\t)(,Sp -As) ... (Sp -X)!
jc(Sl-Ai)(S2-Xl) ... (S^- Xl)i
{c(si-A2)(S2-A2) ... (Sp-\2)\ ... ic(si-A)(s2-A) ... (s^-A)!
= h(\i_)h(\2) ...h(\^)

using (ii).

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS

6. Given A(\) =
Pr^ ^1 sna B(A)=r^^ ^^^^1 compute:
[a^+i A-iJ [a + 1 a
J
_r2\2+2A A^+2A1
(a) -4(A) + B(A)
[_A^ + A + 2 2A - ij

r2A -A^l
(6) A(\) - B(A) ^
[a^-a -ij

rA* + 2A=+A=+A A* + 3A= + 3A=1


(c) ^(A)-S(A) = \ . o A -, o
L ^ +2A^ - 1 A-' + A' + 2A^J

r2A'' + 3A'+A^+A 2A^ -a]


(rf) B(A)M(A)
2A=' + 3A^ + 3A 2\^
1_ J

7. Given /1(A) compute:

2 1
Ag(C) = B;p(C) = Bn(C)-4n(C)
4;f(C)-B^(C)
5 -2 [-: 1 [l7
-7J
i?^
=
1
5 -1 3 3"]
^;?(C) = <?i?(C) =
9 -3 3 -3I

^i(C) ^i(C)-B^(C) B^(C)-A^(C) =

E :] [::;]

[3 3-
where P(A) = 4(A)- B(A) and Q(X) = B(A)M(A).
--M
8. If /4(A) and B(A) are proper n-square A-matrices of respective degrees p and q, and if C(A) is any non-zero
A-matrix, show that the degree of the triple product in any order is at least p+q.
186 LAMBDA MATRICES [CHAP. 23

9. For each pair of matrices A(X) and B(X), find matrices (?i(A), /?i(A); Q^X), Rd^) satisfying (23.7) and (23.8).

|> + 2X A 1 Fa a]
(a) .4(X) = ,, " '

,, . BiX)
I , I

La%i a^-aJ ''''-'I -aJ

f -A.2 + A. -A^ + 2A + 2l FA-I -ll


''''-
(b) AiX)
U.2A-1 1
J' [o x4
A^-2A+1 A'^ + A^ + 7A-2 5A^+2A + 4

(c) .4(A) A'* + 3A'' - 3A - 1 3A^+ 2A + 2 4A^ + 6A+ 1

2A - A + 2 A^ + 2A'' A + A^ + 8A - 4

A^+1 1 3A-1
B(X) 2A A^ A+ 1

A-2 2A A""

www.TheSolutionManual.com
3AnA"-l A"-l A^-A A^-2A-1 A^ + 1 A + 1

(d) A(X) A=-A^+l A* + A^ + 2 A - 1 B(X) A+1 A'' + A A^-2A


A^ + A A+ 1 2A* + A - 2 A A-2 A^ + A - 1

[\+i a1 r 2a a-i1 .
fo o]
Q '^^''-'- ^=^'^^ ''^'''-
Ans. (a)
X ij'
U +2 -A + 2j' [l ij

f-A -A-l] f-A-l -A + ll


^
f- 1 3
(b) 0,(A)= .,(A)^0; ,,(A)= ..(A).
\^_^^^
_J,
|^_^
J, |^_ 1 1

-A+1 A^+3 -A+7 -16A + 14 -fjA-3 -5A + 2


(c) (?i(A) = A^-1 3A + 5 -3A+2 fti(A) -21A + 4 -2A+3 A-5
2A-3 A A-6 5A-7 loA + 3 18A-7

A-l A" 2
,2
<?2(A) A^ + 1 A 3 /?2(A) =

1 2 A+ 1

3A^ + 6A + 31 -3A^-5A-16 3A'^-7A + 8

(d) Qi(X) = A - 3 A=^ - A - 1 -A^ + 4A -7


-2A - 1 2A'^ - 2A - 1

81A + 46 -12A - 16 -85A - 23

Ri(X) 4A - 1 15A - 9 12A - 5

-9A - 8 -7A 17A - 2

3A^+5A + 31 -A^-A-4 2A^-4A + 3

<?2(A) A - 14 A" -2A + 6A - 6

-3A - 2 3 2A - 2A - 2

7lA + 46 -12A - 8 -A + ll

fl2(A) -26A - 30 llA + 6 4A - 4


-15A - 30 2A + 4 16A - 16

10. Verify in Problem 9(b) that Ri(A) = .4^(0) and figCA) = -4^(0) where B(A)=A/-C.
CHAP. 23] LAMBDA MATRICES 187

+ 3X+ 1 A + 2 A 1
=1fx
1 1
[\^
11. Given ^2 ''* C(X)
A-2 A^-3A .
+ 2j [A - 3
^A
^
A+lJ
A

(a) compute .4(A) = B(A)- C(A)

(b) find (3(A) and fi(A) of degree at most one such that 4(A) = Q(X)- B(\) + R(\).

Fa^'+SA^-VA- 1 A^'+SA^+sA + l] [A + 4 A + si r-9 A + 1 -A - gl


S(A) +
L^- 5A=+llA - 10 A='-A^-3A+2j A-6 A - ij |_13A-6 9A + 10J

12. Given A compute as in Problem 4

10
'1

y
, A^ -
P
|_i2
^^1
i7j

www.TheSolutionManual.com
[I !;]

13. Prove: If A and B are similar matrices and g(A) is any scalar polynomial, then g(A) and g(fi) are similar.

Hint. Show first that A and B are similar for any positive integer k.

14. Prove: If fi = diag(Si, S^ B^) and g(A) is any scalar polynomial, then

g(B) = diag(g(Si), g(B2) g(B))

15. Prove Theorem ni.


Hint. Verify: A/ - B divides A(\)-Ajf(B).

16. The matrix C is called a root of the scalar matrix polynomial B(A) of (23.9) if B(C) = 0. Prove: The matrix
C is a root of B(A) if and only if the characteristic matrix of C divides B(A).

17. Prove: If Ai, Aj A are the characteristic roots of A and if f(A) is any scalar polynomial in A, then the
characteristic roots of f(A) are /(Ai), /(A2) /(A)-
Hint. Write \- f(x) = c(xi- x)(x2- x) ...(x^ - x) so that \\I - f(A)\ = c'^\xj - A\ \x2l - A\ ...\xsl -A\.
Nowuse |*j/-4| = (xj-Ai)(::i-A2)...(*^-A) and c(xi -Xj) (x2-\j) ... (x^ -\j) = \- f(\j).

1 -1
18. Find the characteristic roots of f(A) = A'^ -2A+3, given A = 2 3 2
1 1 2

19. Obtain the theorem of Problem 5 as a corollary of Problem 17.

20. Prove: If X is an invariant vector of A of Problem 17, then X is an invariant vector of f(A)

21. Let A(t) = [a^,-()] where the a^j{t) are real polynomials in the real variable t. Take

A(t)

and differentiate the last member as if it were a polynomial


Sfnomial with
wit! constant coefficients to suggest the defi-
nition ,

22. Derive formulas for:

(a) -r-\A(t) + B{t)\; (b) -T-\cA(t)\.


dt
where c is a constant or c =[ciA; (c) ^{A(t)-
dt
B(t)\; (d) ^A
dt
'it).

n
Hint. For (c). write A(t)- B(t) = C(t) = [ci;.(t)] and differentiate cv,-() = 2 a,-i,(0 6i,,-(). For (d),
use A{t)-A-\t) = /. ' ^ k=i ^^ "^^
chapter 24

Smith Normal Form

BY AN ELEMENTARY TRANSFORMATION on a A.-matrix A(X) over F[\] is meant:

(7) The interchange of the ith and /th tow, denoted by ff|-- ; the interchange of the ith and /th

www.TheSolutionManual.com
column, denoted by K^j .

(2) The multiplication of the ith row by a non-zero constant k, denoted by H^(k);

the multiplication of the ith column by a non-zero constant k, denoted by K^ik).

(3) The addition to the ith row of the product of /(X), any polynomial of F[\], and the /th row,
denoted by ^^(/(X));
the addition to the ith column of the product of /(X) and the ;th column, denoted by K^j (/(X)).

These are the elementary transformations of Chapter 5 except that in (3) the word scalar has

been replaced by polynomial. An elementary transformation and the elementary matrix obtained
by performing the elementary transformation on / will again be denoted by the same symbol.
Also, a row transformation on ^(X) is effected by multiplying it on the left by the appropriate
H and a column transformation is effected by multiplying ^(X) on the right by the appropriate K.
Paralleling Chapter 5, we have
I. Every elementary matrix in F[\] has an inverse which in turn is an elementary ma-
trix in F[\].

II. If \A(\)\ = k ^ 0, with k in F, A(\) is a product of elementary matrices.

III. The rank of a X-matrix is invariant under elementary transformations.

Two re-square X-matrices i(X) and B(X) with elements in F[X] are called equivalent pro-
vided there exist P(\) = H^ ff 2 ' ^1 and Q(\) = K^- K^ ... K^ such that

(24.1) B(X) = P(\)-A(X)-Qa)

Thus,

IV. Equivalent mxn X-matrices have the same rank.

THE CANONICAL SET. In Problems 1 and 2, we prove

V.Let A(\) and B(X) be equivalent matrices of rank r; then the greatest common di-
visor of all s-square minors of A(\), s r, is also the greatest common divisor of all s-
square minors of B(\).

In Problem 3, we prove
VI. Every X-matrix A(X) of rank r can be reduced by elementary transformations to the

Smith normal form

188
CHAP. 24] SMITH NORMAL FORM 189

A (A)
/2(A)

(24.2) N(\)

... . . . 0_

where each /^(A) is monic and f^(\) divides f^^^ (A), (i = 1, 2, ... 1).

When a A-matrix A(\) of rank r has been reduced to (24.2), the greatest common divisor of
alls-square minors of A(\), s ^ r, is the greatest common divisor of all s-square minors of A^(A)
by Theorem V. Since in N(\) each f^(X) divides /^^j(A), the greatest common divisor of all s-
square minors of N(\) and thus of A(\) is

= l,2,

www.TheSolutionManual.com
(24.3) g,(A) = /i(\)-f2(A)-...-4(A), (s .r)

Suppose A(\) has been reduced to

/V(A) = diag(/-i(A), /2(A) /^(A), 0,

and to
/Vi(A) = diag(Ai(A). AsCA) h^iX), O)

By (24.3),

gs(A) = /i(A)-/2(A)-...-/s(A) = Ai(A)-A2(A)-...-As(A)

Now gi(A) =/i(A) =Ai(A), g2(A) =/i(A)-/2(A) =Ai(A)-A2(A) so that U\) = h^W, in gen-
eral, if we define go(A) = 1, then

(24.4) 4(^) h^(\). (s = 1, 2 r)


s:s(^)/s-i(^)
and we have
VII. The matrix N{X) of (24.2) is uniquely determined by the given matrix /1(A).

Thus, the Smith normal matrices are a canonical set for equivalence over F[A].

A + 2 A + i A. + 3

Example 1. Consider .4(A) = A'+2A^+A A^+A'^ + A 2A%3A'^+A


A^ + 3A + 2 A^ + 2A + 1 3A^ + 6A + 3

It is readily found that the greatest common divisor of the one-row minors (elements) of
A(X) is gi(A) = 1, the greatest common divisor of the two-row minors of A(X) is g2(A) =X,
and g3(A) = i\A(X)\ = A^ +X^ . Then, by (24.4),

/i(A) = gi(A) = 1, /2(A) = g2(A)/gi(A) = A, /3(A) = g3(A)/g2(A) A^+A


and the Smith normal form of A(X) is
10
A'(A) A
A^+A
For another reduction, see Problem 4.

INVARIANT FACTORS. The polynomials f^(X),f-2{X) /V(A) in the diagonal of the Smith normal
form of A(X) are called invariant factors of ^(A). If 4(A) = 1, k r, then /i(A) = /2(A) = ... =
fk(X) = 1 and each is called a trivial invariant factor.
As a consequence of Theorem VII, we have
190 SMITH NORMAL FORM [CHAP. 24

VIII. Two re-square X-matrices over F[X] are equivalent over F[X\ if and only if they
have the same invariant factors.

ELEMENTARY DIVISORS. Let A(X) be an re-square X-matrix over F[X] and let its invariant factors
be expressed as

(24.5) f^(\) = {p,(X)\'^i^\p^(\)fi^...\PsMi^^^.(i = i.2 r)

where pi(A), p2(X), ., PgCX) are distinct monic, irreducible polynomials of F[X\. Some of the
q^j may be zero and the corresponding factor may be suppressed; however, since fj (A) divides
/i+i(A), qi+t,j^gij. (f = 1.2 r-l;/ = l,2, .s).
The factors !p-(X)S'^*i ^ 1 which appear in (24.5) are called elementary divisors over F[X]
of A(X).

Example 2. Suppose a 10-square A-matrlx A(X) over the rational field has the Smith normal form

10

www.TheSolutionManual.com
10 (A.-1)(A^+1)
I

'

(A-1)(A^ + 1)^A I

(A-l)^(X" + l)^A^(A"-3)

The rank is 5. The invariant factors are

/i(A) = 1, /2(A) = 1, /3(A) = (A-1)(A^+1),

U(k) = (\-l)(X^+lfX. /5(A) = (A-l)''(A.''+lf A^(A''-3)

The elementary divisors are

(X-l)^ A-1, A-1. (A^'+l)'', (A^+l)^, (A'^+l), A^ A, A"" - 3

Note that the elementary divisors are not necessarily distinct; in the listing each ele-
mentary divisor appears as often as it appears in the invariant factors.

Example 3. (a) Over the real field the invariant factors of A(X) of Example 2 are unchanged but the ele-

mentary divisors are

(A-l)"^, A-1, A-1, (A^lf, (A^' + l)'' , (A^' + l), X', A, X-\fz, X+x/l
since A^-3 can be factored.

(6) Over the complex field the invariant factors remain unchanged but the elementary divisors
are
(A-1)^, A-1, A-1, {X + if, (X + if, X + i. (X-if.
(X-if, X-i, A^, A, X-\/l, X+\/l
The Invariant factors of a A-matrix determine its rank and its elementary divisors;
conversely, the rank and elementary divisors determine the invariant factors.

Example 4. Let the elementary divisors of the 6-square A-matrix A(X) of rank 5 be

A=, A"", A, (A-l)^ (A-^)^ A-l, (A+l)^ A+ l

Find the invariant factors and write the Smith canonical form.

To form /5(A), form the lowest common multiple of the elementary divisors, i.e.,

/5(A) = a" (A-1)'' (A + if


CHAP. 24] SMITH NORMAL FORM 191

To form f^i\), remove the elementary divisors used in /sCX) from the original list and
form the lowest common multiple of those remaining, i.e.,

UiX.) = A^(X-lf(A+l)
Repeating, /sCX) = A(X-l). Now the elementary divisors are exhausted; then /sCX) = /i(A) = 1.

The Smith canonical form is

\(X- 1)
N(\)
A^(A-lf (A + 1)
A=(A--lf(X+lf

Since the invariant factors of a A-matrix are invariant under elementary transformations, so

www.TheSolutionManual.com
also are the elementary divisors. Thus,
IX. Two re-square X-matrices over F[\] are equivalent over F[X] if and only if they
have the same rank and the same elementary divisors.

SOLVED PROBLEMS

1. Prove: If P(\) is a product of elementary matrices, then the greatest common divisor of all s-

square minors of P(\)- A(X) is also the greatest common divisor of all s-square minors of A(\).
It is necessary only to consider P(A)-.4(A) where P(A) is each of the three types of elementary matrices H.
Let R(\) be an s-square minor of .4(A) and let S(A) be the s-square minor of P(X)- A(\) having the same
position as R(X). Consider P(A) = H^j its effect on ^(A) is either (i) to leave R(X) unchanged, (u) to in-
;

terchange two rows of R(X). or (iii) to interchange a row of R(X) with a row not in R(X). In the case of (i),
S(X) = R(X); in the case of (ii), S(A) = -R(X); in the case of (Iii), S(X) is except possibly for sign another
s-square minor of ^(A).

Consider P(X) = H^ (k); then either S(A) = R(X) or S(X) = kR(X).

Finally, consider P(A) = H^j (/(A)). Its effect on ^(A) is either


(i) to leave R{X) unchanged, (U) to in-
crease one of the rows of R(X) by /(A) times another row of R(X). or (iii) to increase one of the rows of R(X)
by /(A) times a row not of R(X). In the case of (i) and (li), S(X) = R(Xy, in the case of (iii),
S(X) = R(X) f(X)-T(X)
where 7(A) is an s-square minor of .4(A).

Thus, any s-square minor of P(X)' A(X) is a linear combination of s-square minors of /4(A). If g(A) is
the greatest common divisor of all s-square minors of .4(A) and gi(A) is the greatest common divisor of all
s-square minors of P(A)--4(A), then g(A) divides gi(A). Let B(X) = P(XyA(X).

Now .4(A) = P"^(A)- fi(A) and P~'^(X) is a product of elementary matrices. Thus, gi(A) divides g(A) and
gi(A) = g(A).

2. Prove: If P(X) and Q(X) are products of elementary matrices, then the greatest common divisor of all

s-square minors of P(A) -^(A) -QiX) is also the greatest common divisor of all s-square minors otA(X).
Let B(A) = P(A)M(A) and C(A) = B(X) Q(X). Since C'(A) = Q'(X)- b'(X) and Q'(X) is a product of ele-

mentary matrices, the greatest common divisor of all s-square minors of C'(A) is the greatest common divisor
of all s-square minors of s'(A). But the greatest common divisor of all s-square minors of C'(X) is the great-
estcommon divisor of all s-square minors of C(A) and the same is true for B'(A) and B(A). Thus, the greatest
common divisor of all s-square minors of C(A) = P(X)- A(X)- Q(.X) is the greatest common divisor of all s-
square minors of ^(A).
192 SMITH NORMAL FORM [CHAP. 24

3. Prove: Every A-matrix A(\) = [a^j(\)] of rank r can be reduced by elementary transformations to
the Smith normal form

fi(X)

/2(A)

N(X) L(A)

where each fi(\) is monic and f^(X) divides /^^j^(A), (i = 1, 2, ...,r- 1).

The theorem is true for A(\) = 0. Suppose A(X) ^ 0; then there is an element a::(K) of minimum 7^
J
By means of a transformation of type 2, this element may be made monic and, by the proper inter-

www.TheSolutionManual.com
degree.
changes of rows and of columns, can be brought into the (l,l)-position in thematrix to become the new aii(A).

(a) Suppose a 11 (A) divides every other element of ^(A). Then by transformations of type 3, 4(A) can be re-
duced to

/i(A)
(i)
B(A)

where /i(A) = a 11(A).

(6) Suppose that an(A) does not divide every element of A(X). Let ai--(A) be an element in the first row
which is not divisible by an (A). By Theorem I, Chapter 23, we can write

ay (A) g(A)an(A) + ry(A)

where ri^-(A) is of degree less than that of aii(A). Prom the /th column subtract the product of q{X) and
the first column so that the element in the first row and /th column is now ri,-(A). By a transformation of
type 2, replace this element by one which is monic and, by an interchange of columns bring it into the
(l,l)-position as the new oii(A). If now an(A) divides every element of A{X), we proceed to obtain (i).

Otherwise, after a finite number of repetitions of the above procedure, we obtain a matrix in which every
element in the first row and the first column is divisible by the element occupying the (l,l)-position.

If this element divides every element of A(X), we proceed to obtain (i). Otherwise, suppose a,-,-(A) is
"J
not divisible by On (A). Let
aii(X) - qix(X)- a^^iX) and aij(A) = gijXA)- aii(A). From the ith row sub-
tract the product of gii(A) and the first row. This replaces aii(A) by and aij (A) by aij{X) - qii(X) a^jiX).

Now add the jth row to the first. This leaves oii(A) unchanged but replaces aij(A) by

aij (A) - qii(X)- aij(X) + aj(\) ^ij(X) + gij(A)U- ?ii(A)!aii(A)

Since this is not divisible by aii(A), we divide it by aii(A) and as before obtain a new replacement (the
remainder) for aii(A). This procedure is continued so long as the monic polynomial last selected as aii(A)
does not divide every element of the matrix. After a finite number of steps we must obtain an a 11(A) which
does divide every element and then reach (i).

Next, we treat B(A) in the same manner and reach

i(A)

/2(A)

C(A)

Ultimately, we have the Smith normal form.

Since /i(A) is a divisor of every element of B(A) and /2(A) is the greatest common divisor of the elements
of S(A), /i(A) divides /2(A). Similarly, it is found that each /^(A) divides fi^^iX).
CHAP. 24] SMITH NORMAL FORM 193

4. Reduce
X + 2 A+ 1 A + 3
^(A) A=' + 2A= +A A + a"" + A 2A'' + 3A^ +A
A^ + 3A + 2 X" + 2 A + 1 3 a"" + 6a + 3

to its Smith normal form.

It necessary to follow the procedure of Problem 3 here. The element /i(A) of the Smith normal
is not
form is the greatestcommon divisor of the elements of A(\); clearly this is 1. We proceed at once to obtain
such an element in the (l,l)-position and then obtain (1) of Problem 3. After subtracting the second column
from the first, we obtain
"
1 A+ 1 A+ 3 1 A + 1 A + 3

'4(A) A^ a" + A% A 2A + 3A%A A A%A


A+ 1 A^ + 2A + 1 3A^ + 6A + 3 p 2A^ + 2A.

www.TheSolutionManual.com
A A + A
Lo S(A)J
2A + 2A_

Now the greatest common divisor of the elements of S(A) is A. Then

A+A
2A^ + 2A 2A^ + 2A A" + A

and this is the required form.

5. Reduce
A - 1 A + 2
-4(A) = A"" + A A^ A'' + 2A
A=- 2A - 3A + 2 A= + A - :

to its Smith normal form.

We find

1 A- 1 A + 2 A- 1 A + 2"
1

^(A) - A A= A=+2A A A
A- A^-3A + 2 A^+A-3_ A + 1 A+1
"l " "l 1 1

^\^ A -A - 1 -V- -1 -A -1 1 A + 1

A+1 A+1 A+ 1 -A' -A A(A+i)

using the elementary transformations ^i2(-l); ^2i(-A), ^3i(-A + 2); /f2i(-A + l), .Ksi(-A -2); HqsC-I);
^23(1); Hs2(A + l), H^i-iy. K32(-A-l), Xs(-l)-

SUPPLEMENTARY PROBLEMS
6. Show that H.jK^j = H^{k)K^a/k) = H^j{f{\))
Kj^{-f(,X)) = /.

7. Prove: An n-square A-matrix A(\) is a product of elementary matrices if and only if |.4(X)| is a non-zero
constant.
194 SMITH NORMAL FORM [CHAP. 24

8. Prove: An n-square A-matrix ^(A) may be reduced to / by elementary transformations if and only if\A(X)\
is a non-zero constant.

9. Prove: A A-matrix .4(A) over F[AJ has an inverse with elements in F[\] if and only if /1(A) is a product of
elementary matrices.

10. Obtain matrices P(\) and Q(k) such that P(\)- A(\)- Q(X) = I and then obtain

A(Xr^ = Q(X)-P(X)
given

A+ 1 1

-4(A) = 1 A+ 1 A
2 A+2 A+1

A+2 -A-1

www.TheSolutionManual.com
Hint. See Problem 6, Chapter 5. Ans A-1 A^ + 2A-1 -A^'-A + l

-A -A^ - 3 A - 2 A^ + 2 A+ 1

11. Reduce each of the following to its Smith normal form:

X A A-1 1

(a) A%A >? + 2A A^-1 '-\>


A
_2A'' - 2 A \^-2A 2A^ - 3A + 2 A^

>+ 1 A= + A 2A -A^A 1

(b) A-1 A^ 1 A^- 2A+ 1 'Xy


A+ 1

_ A^ A^ 2A= - A= + l_ A= +
I

A+1 2A-2 A- 2 A^ 1 00
A% A + 1 2A'' - 2A + 1 a''- 2A A" 1
(c) '-Xj

a"" - A - 2 3\^ -7A + 4 2A'' - 5 A + 4 A"" -2A= a"" - A


A^A"" 2A''-2A'' a'' - 2A^ A= A^-
A^ + 2 A + 1 A" + A A^ + A"" + A - 1 A^ + A 1

A^ + A + 1 \^ + l A" A"- 1 1
(d) '-\j

A^ +A A" A% A - 1 A^ A-1
_ A%A^ A= A-^ A= + A^ - 1 a"" - i_

'a=^ + 1 A= + 3A + 3 A^ + 4A - 2 A= + 3

A-2 A-1 A+ 2 A-2


(e)
3\ + l 4A+3 2\ + 2 3A + 2
- U
A^ + 2A A^ + 6A + 4 A% 6A - 1 A ^+ 2A + 3J

"a^ 1 D

if) \^ -2\ + 1 '-VJ


1

A+ 1 D A=( A- ifOV+l I

12. Obtain the elementary divisors over the rational field, the real field, and the complex field for each of the
matrices of Problem 11.
CHAP. 24] SMITH NORMAL FORM 195

13. The following polynomials are non-trivial invariant factors of a matrix. Find its elementary divisors in the
real field.

(a) X^-X. X=-A^ A^-2A^ + X^

(b) A+1, X^- 1, (X^-lf. (X^-lf


(c) X, X" + X, A' - X'' + 2X'' - 2A^+ A'' - X"^

(d) X. A^ + A, A'' + 2A + A, A''+A''+2A'' + 2A + A'' +A


Ans. (a) X^, A^, A, (A -if, A-1, A-1
(b) A+1, A+1, (A+lf, (A+lf, A-1, (A -if, (A -if
(c) A, A, A^ A^' + l, (A^'+lf, A-1
(d) A, A, A. A, A^ + l, (A^+lf, (A^ + lf. A+1

14. The following polynomials are the elementary divisors of a matrix whose rank is six. What are its invariant
factors?

www.TheSolutionManual.com
(a) A, A, A+ 1, A+2, A + 3, A+ 4 (c) (A -if, (A -if, (A -if, A-1, (A+lf
(6) A^ A=, A, (A- 1)2, A-1 (d) A^, A^ A, (A + 2f, (A + 2f, (A+2f
.4ns. (a) 1, 1, 1, 1, A, A(A + l)(A + 2)(A+3)(A + 4)
(b) 1, 1, 1, A, A^cA-l), A^A-lf
(c) 1, 1, A-1, (A- if, (A- if, (A- if (A+lf

(d) 1, 1, 1, A(A + 2f, A''(A + 2f, A''(A + 2f

15. Solve the system of ordinary linear differential equations

!Dxx + (D + i)x2 =

(0 + 2)^1 - (D-l)xs = t

(D + l)x2 + (D+ 2)X2 = e*

where xj, x^, xs are unknown real functions of a real variable t and = 4.
at
Hint. In matrix notation, the system is

D D+1 Xl 'o'

AX D+2 -D + 1 X2 = t

D+1 D+2 3_ e\

Now the polynomials in D of combine as do the polynomials in A of a A-matrix; hence, beginning with a
/I

computing form similar to that of Problem 6, Chapter 5, and using in order the elementary transformations:
:i2(-l), .ffi(-l), K2i(D + l), H2i(-D-2). ff3i(D+l), K^siD). H^si-i). K^ih. K^gi^D +1). Hs^i-kD),
Hs(2). Ks(l/5) obtain

-1 1 k(D + l) ^(5D2+12D + 7) 1

PAQ = 5D + 6 1 -4 .4 -1 -jD ^-^(SD^ + TD) 1 A'l

-5D2-8D-2 -D 4D + 2 \D i(5D2+7D + 2) d=+|d + | -


5 5
the Smith normal form of A.

Use the linear transformation X = QY to carry AX = H into AQY = H and from PAQY = N-J = PH
get

yi = 0, y2=-4e* (D2+|D + i)y3 = 6e*-l and ys = K^e"^^^ + K^e-^ + \e^ ~ ^


3 4

Finally, use X = QY to obtain the required solution

i = 3Cie-**/5 + it _ |, ^2 = 12Cie-'^*/5 + Cje"* - i, Xs = -2Cie-**/s+ ie*+ i


chapter 25

The Minimum Polynomial of a Matrix

THE CHARACTERISTIC MATRIX \I-A of an -square matrix A over F is a non-singular A-matrix


having invariant factors and elementary divisors. Using (24.4) it is easy to show
I. If D is a diagonal matrix, the elementary divisors of \I- D are its diagonal elements.

In Problem 1, we prove
Two

www.TheSolutionManual.com
II. re-square matrices A and B over F are similar over F if and only if their char-
acteristic matrices have the same invariant factors or the same rank and the same elemen-
tary divisors in F[X.].

Prom Theorems I and II, we have


HI. An re-square matrix A over F is similar to a diagonal matrix if and only if \I- A
has linear elementary divisors in F[\].

SIMILARITY INVARIANTS. The invariant factors of \I - A are called similarity Invariants of A.

Let P(X) and Q(\) be non-singular matrices such that P(X) (\1-A)-Q(\) is the Smith nor-
mal form

diag(fi(X), 4(X), ..., j^a))

Now \P(\).(XI-A).Q(\)\ = \P(X)\.\Qa)\4,a) = A(A)-/2(A)-.../na).


Since <^(X) and /^(A) are monic, | P(A) 1

| (?(A) |
= 1 and we have
IV. The characteristic polynomial of an re-square matrix A is the product of the invar-
iant factors of XI- A or of the similarity invariants of A.

THE MINIMUM POLYNOMIAL. By the Cay ley-Hamilton Theorem (Chapter 23), every re-square matrix
A satisfies its characteristic equation <^(A) = of degree re. That monic polynomial m(X) of
minimum degree such that m(A) = is caUed the minimum polynomial of A and m(X) = is
called the minimum equation of A. (m(X) is also called the minimum function oi A.)

The most elementary procedure for finding the minimum polynomial of i 5.^ involves the
following routine:

(I) If 4 = oq/, then m{X) = X-ao:


(II) If A ^ al for all a but i^ = a^A + aj , then m(A) = A^ - a^X - uq ;

(III) If A^ ^ aA + hi for all a and b but A'' = a^A"^ + a^A + a^I , then
're(A) = A^ - a^)? - a^X - Oq
and so on.

122
Example 1. Find the minimum polynomial of ^ = 2 1 2

2 2 1

196
CHAP. 25] THE MINIMUM POLYNOMIAL OF A MATRIX 197

Clearly A oqI = is impossible. Set

'9 8 8
'122" '100"
8 9 8 = i 2 1 2 + oo 1

8 8 9 2 2 1 1

9 = Oj^ + ao_
Using the first two elements of the first row of each matrix, we have | then
8 = 2ai
a^= i and oq = 5. After (and not before) checking for every element of A^, we conclude that
A^ = iA + 51 and the required minimum polynomial is X^ - 4X - 5.

In Problem 2, we prove
V. If A is any re-square matrix over F and /(A) is any polynomial over F, then f(A) =
if and only if the minimum polynomial m{\) of A divides /(A).

In Problem 3, we prove
VI. The minimum polynomial m(\) of an ra-square matrix A is that similarity invariant

www.TheSolutionManual.com
fn(X) of A which has the highest degree.

Since the similarity invariants f^(X), f^(k) fn-iM all divide ^(A), we have
Vn. The characteristic polynomial (fi(\) ofA is the product of the minimum polynomial
of A and certain monic factors of m(X).
and
Vin. The characteristic matrix of an ra-square matrix A has distinct linear elementary
divisors if and only if m(A), the minimum polynomial of A, has only distinct linear factors.

NON-DEROGATORY MATRICES. An re-square matrix A whose characteristic polynomial and minimum


polynomial are identical is called non-derogatoiy; otherwise, derogatory. We have
IX. An re-square matrix A is non-derogatory if and only if A has just one non-trivial
similarity invariant.

It is also easy to show


X. If Si and Sg have minimum polynomials /rei(A) and m^iX) respectively, the minimum
polynomial ro(A) of the direct sum D = diag (61,55) is the least common multiple of ^^(A)
and m2(A).
This result may be extended to the direct sum of m matrices.
XI. Let gi(X),g2(X),
..gnCA) be distinct, monic, irreducible polynomials in F[A] and
let Aj be a non-derogatory matrix such that \XI - Aj\ = \g-(X)\ "i (; = 1,2, ...,m). Then
fii ,02
B = diag(^i, ^2 ^n) has ^(A) fgi(A)! ig2(A)P-ig^(A)! '^
as both char-
acteristic and minimum polynomial.

COMP ANION MATRIX. Let A be non-derogatory with non-trivial similarity invariant

(25.1) g(X) = ^(A) = a" + a-iA"'^ + + a^X + Oq

We define as the companion matrix of g(X),

(25-2) C(g) = [-a], if g(X) = X+a


and for re > 1
198 THE MINIMUM POLYNOMIAL OF A MATRIX [CHAP. 25

(25.3) C(g)
1

Oq -% -e
-'^n-3 -an-2 -Gj

In Problem 4, we prove
XII. The companion matrix C(g) of a polynomial g(\) has g(\) as both its character-
istic and minimum polynomial.

(Some authors prefer to define C(g) as the transpose of the matrix given in (25.3).
Both forms will be used here.)
See Problem 5.

It is easy to show

www.TheSolutionManual.com
XIII. If /4 is non-derogatory with non-trivial similarity invariant f^(X) = {k-af, then
"a 1 ... O'
a 1 ...

(25.4) / = [o], if ra = 1, and / = , if ra > 1


... o 1

... a

has f^{\) as its characteristic and minimum polynomial.

SOLVED PROBLEMS

1. Prove: Two ra-square matrices A and S over F are similar over F if and only if their characteristic
matrices have the same invariant factors or the same elementary divisors in F[X\-

Suppose A and B are similar. From (i) of Problem 1, Chapter 20. it follows that Xl - A and XI -B are
equivalent. Then by Theorems VHI and IX of Chapter 24, they have the same invariant factors and the same
elementary divisors.

Conversely, let Xl - A and Xl - B have the same invariant factors or elementary divisors. Then by
Theorem VIII, Chapter 24 there exist non-singular A-matrices P(X) and Q{X) such that

P{X)-{XI-A)-Q{X) Xl - B
or

(i) P{X)-{XI-A) = (Xl-B)-Q (X)

Let

(ii) P{X) = (X/-S).Si(A) + Ri

(iii) Q(X) = S2(X)-(A/-S) + /?2

(iv) Q'\X) = S3(A)-(A/-.4) + /?3

where R^, Rq, and R3 are free of A. Substituting in ( i ), we have

(Xl-B)-S^(X)-(Xl-A) + Rj^(Xl-A) = (A/-B)-S3(A)-(A/-4) + (XI-B)Rs


or
(V) {Xl-B){S^{X)-Sa{X)\{Xl-A) = (A/-B)Rg - R^{XI-A)
CHAP. 25] THE MINIMUM POLYNOMIAL OF A MATRIX 199

Then S^(X) - Sg(A) = and

(vi) (Xl-B)Rg = R^(\I-A)

since otherwise the left member of (v) Is of degree at least two while the right member is of degree at most
one.

Using (ill), (Iv), and (vi)

I = Q(^)-Q~\\)

= Q(\)\Ss(\)-(Xl-A) + Rs\
= Q(X.)-S3(X).(X!-A) + \S2(X)-(XI-B) + R^lRg
= Q(X) Ss(X) (XI - A) + S2(X)-(Xl-B)Rs + R^Rs
= Q(X)-Ss(X)-(Xl-A) + S2(X)-R^-(XI-A) + R^Rg
or

(vii) I - R^Rs = \Q{X) Sg(X) ^ S^iX) RxUXl - A)

www.TheSolutionManual.com
Now Q{X)-Sq(X) + S^(X)R-^ = and / = R2R3 since otherwise the left member of (vii) is of degree zero in
X while the right member is of degree at least one. Thus, Rg = R^ and, from (vi)

XI - B = /?i(A/-^)/?2 = ARi2 - R^AR^

Since A.B.R^, and R^ are free of A. /Ji = R^: then XI - B = XI - R'^AR^ and A and B are similar, as was
to be proved.

2. Prove: If A is any ra-square matrix over F and f(X) is any polynomial in F[X]. then f(A) = if and
only if the minimum polynomial m(X) of A divides /(A).

By the division algorithm. Chapter 22,

f(X) = q(X)-n(X) + r(A)


and then
f(A) = q(A)-n(A) + r(A) = r(A)

Suppose f(A) = 0; then r(A) = 0. Now if r(A) ji 0, its degree is less than that of m(X), contrary
to the
hypothesis that m(A) is the minimum polynomial of ^. Thus, r(A) = and m(A) divides /(A).
Conversely, suppose /(A) = ?(A)-m(A). Then f(A) = q(A)-m(,A) = 0.

3. Prove: The minimum polynomial m(X) of an ra-square matrix A is that similarity invariant fn(X) of A
which has the highest degree.

Let gri-i(X) denote the greatest common divisor of the (n - l)-square minors of XI -A. Then
\XI-A\ = cf,(X) = g-i(A)-/(A)
and
adj(A/-/l) = g_i(A)-S(A)

where the greatest common divisor of the elements of B(X) is 1.

Now (A/-^).adj(A/-^) = (f)(X)-I sothat

(X/-A)-gn_^(X).B(X) = gn-i(A)-/(A)-/
or

(i) (A/-^).S(A) = fn(X)-I

Then XI -A is a divisor of f^(X)-I and by Theorem V, Chapter 23, f^(A) = 0.


200 THE MINIMUM POLYNOMIAL OF A MATRIX [CHAP. 25

By Theorem V, m(\) divides fn(X-)- Suppose

(ii) = g(X.)-m(\)
fn(\)

Since m(A) = 0, XI -A is a divisor of m(X) /, say

m(\)-l = (\I-A).C(\)

Then, using (i) and (ii).

(\I-A)-B(\) = fn(X.)-l = 9(A)-m(A)-/ = 9(A) (A/ - ^). C(A)


and
5(A) = ?(A) C(A)

Now 9(A) divides every element of iS(A); hence ?(A) =1 and, by (ii),

/n(A) = m(A)

as was to be proved.

www.TheSolutionManual.com
4. Prove: The companion matrix C(g) of a polynomial g(\) has g(A) as both its characteristic and
minimum polynomial.

The characteristic matrix of (25.3) is

A -1
A -1

^1 "2 an-2 A + a^_j

2 ,n-l
To
the first column add A times the second column, A times the third column, A
.

times the last


column to obtain

-1
A
G(A)

A -1
g(A) Oi a^ a_2 A+a^-l

Since |G(A)| = g(A), the characteristic polynomial of C(g) is g(A). Since the minor of the element
g(A)
in G(X) is 1, the greatest common divisor of all (n-l)-square minors of G(\) is 1. Thus, C(g) is non -derog-
atory and its minimum polynomial is g(A).

5. The companion matrix of g(\) = A^ + 2A - A= + 6A - 5 is

10 5
10 10 0-6
10 or, if preferred. 10 1

1 10-2
5-61-20 1
4

CHAP. 25] THE MINIMUM POLYNOMIAL OF A MATRIX 201

SUPPLEMENTARY PROBLEMS
6. Write the companion matrix of each of the following polynomials:
(a) X'' + \^ - 2\ - X (d) X"" - 2X!' - X^ + 2X
(b) (X^-i)(X+2) (e) ]?()? + I)
(c) (X-lf (/) (A+2)(A^-2A^ + 4A-8)

1 'o 1 o" 1

Ans. (a) 1 (b) 1 (c) 1

1 2 - -1 8 4 -2 1 --3 3

1 o" 1 1 o'

1 1 1
(d) (e) (/)
1 1 1

-2 1 2 0-10 16

A=

www.TheSolutionManual.com
7. Prove: Every 2-square matrix [a^] for which (011-022)^ + 4012021/0 is non-derogatory.

8. Reduce G(X) of Problem 4 to diag (1 , 1 1 , g(X) ).

9. For each of the following matrices ^. (i) find the characteristic and minimum polynomial and (ii) list the
non-trivial invariant factors and the elementary divisors in the rational field.

1 1 1 3 2 1 2 2 1 1 2
(a) 2 (b) 5 2 6 (c) 1 id) 2 2 (e)
1 1 1 2
3 -2 -1 -3 1 2 2 1 _1 1 2

'211 1"
2-3 1-3 -5
-2
4-6 3 s"
3-2 2 1
4 2 3 -1 -6 -3 -6
if)
-6 -2 -3 -2 (g)
-3 -3 -4 -3
ih) 4-3 4-1 -6
-3 -1 -1 -2 4-2 4 0-4
2 6 4 6
-1 0-2 1 2
Ans. (a) <^(A) = m(X) = (A-l)(A-2)(A-3); i.f. (A-l)(A-2)(A-3); e.d. (A-1), (A-2). (A-3)
(b) <^( A) = m(,\.) = X^; if. = e.d = A=

(^(A) = (A-if(A-2); i.f. A-1, (A-i)(A-2)


(c)
m(X) = (A-1) (A-2) ; e.d. A-1, A-1, A-2

(/)(A) = (A+ihA-5) ; i.f. A+1, (A+i)(A-5)


id)
m(A) = (A+l)(A-5) ; e.d. A+i, A+l, A-5

,^(A) = A= - 4A= i.f. A, A''-4A


ie)
m(A) = A^ - 4A e.d. A, A, A-
c^(A) = A(A-H)^(A-1) ; i.f. A+l, A^-A
if)
m(A) = A(A^-l) e.d. A, A+l, A+l, A-1

(/)(A) = A2(A+1)2 ; i.f. A, A(A+l)2


is)
m(A) = A(A+1)2 ; e.d. A. A, (A+D^

oS(A) (A-2) (A""- A- 2)''; i.f. A-2, A'^-A-2, A^-A-2


(h)
m(X) A^-A-2 e.d. A-2. A-2, A-2, A+l, A+l

10. Prove Theorems VII and VIII.

11. Prove Theorem X.


Hint. m(D) = diag(m(Bi), m(B^) ) = requires m(Si) = miB^) = ; thus, mi(A) and m^X) divide m(X).
202 THE MINIMUM POLYNOMIAL OF A MATRIX [CHAP. 25

12. Prove Theorem XI.

13. If A is n-square and if k is the least positive integer such that A = 0, .4 is culled nilpotent of index A:.
Show that A is nilpotent of index k if and only if its characteristic roots are all zero.

14. Prove: (a) The characteristic roots of an n-square idempotent matrix A are either or 1.

(b) The rank of A is the number of characteristic roots which are 1.

15. Prove: Let A.B.C.D be ra-square matrices over F with C and D non-singular. There exist non-singular
matrices P and Q such that PCQ = A. PDQ = B it and only if R(X) = \C - A and S(A.) = XD-B have the
same invariant factors or the same elementary divisors.
Hint. Follow the proof in Problem 1, noting that similarity is replaced by equivalence.

16. Prove: If the minimum polynomial m(X) of a non-singular matrix A is of degree s, then A~^ is expressible
as a scalar polynomial of degree slmA.

www.TheSolutionManual.com
17. Use the minimum polynomial to find the inverse of the matrix A of Problem 9(h).

18. Prove: Every linear factor X-A.J of 0(A) is a factor of m(A).


Hint. The theorem follows from Theorem VII or assume the contrary and w:rite m(A) = {\-X^)q{X) + r,

r ^ 0. Then {A -\^I)q(A) + rl = and A -X^l has an inverse.

19. Use -4 = . 1 ^
to show that the minimum polynomial is not the product of the distinct factors of 0(A).
[o ;]

20. Prove: If g(\) is any scalar polynomial in A, then g(^) is singular if and only if the greatest common divi-
sor of g(A) and m(A), the minimum polynomial oi A, is d(k) ^ 1.

Hint, (i) Suppose d{\) i- 1 and use Theorem V, Chapter 22.


(ii) Suppose d(A) = 1 and use Theorem IV, Chapter 22.

21. Infer from Problem 20 that when g(.4) is non-singular, then [g(/l)] is expressible as a polynomial in A of
degree less than that of m(\).

22. Prove: If the minimum polynomial m(A) of A over F is irreducible in F\X\ and is of degree s in A, then the
set of all scalar polynomials in A with coefficients in F of degree < s constitutes a field.

23. Let A and S be square matrices and denote by m(A) and (A) respectively the minimum polynomials oi AB
and BA. Prove:
(a) m(A) = n(A) when not both .A and B are singular.
(&) m(A) and n(A) differ at most by a factor A when both A and B are singular.
Hint. B.m(^S)./l = (B,4).m(B.4) = and A-n{BA)-B = (AB)-n(AB) = 0.

24. Let A be of dimension mxn and S be of dimension nxm, m > n, and denote by 0(A) and 0(A) respectively
the characteristic polynomials of AB and BA. Show 0(A) = A (/f(A).

25. Let X.I be an invariant vector associated with a simple characteristic root of A. Prove: If A and B com-
mute, then Xj^ is an invariant vector of B.

26. If the matrices A and B commute, state a theorem concerning the invariant vectors of B when A has only
simple characteristic roots.
chapter 26

Canonical Forms Under Similarity

THE PROBLEM. In Chapter 25it was shown that the characteristic matrices of two similar n-square

matrices A and R'^AR over F have the same invariant factors and the same elementary divisors.
In this chapter, we establish representatives of the set of all matrices R'^AR which are (i) sim-

www.TheSolutionManual.com
ple in structure and (ii) put into view either the invariant factors or the elementary divisors.
These matrices, four in number, are called canonical forms of A. They correspond to the canon-

ical matrix N introduced earlier for all mxn matrices of rank r under equivalence.
'[^ o] '

THE RATIONAL CANONICAL FORM. Let A be an ra-square matrix over F and suppose first that its
characteristic matrix has just one non-trivial invariant factor /(\). The companion matrix
C(f )
of /(A) was shown in Chapter 25 to be similar to A. We define it to be the rational
canonical
form S of all matrices similar to A

Suppose next that the Smith normal form of \I - A is

<26-l) diag (1, 1 1, f.(X), f.^^iX), ..., f^(X))

with the non-trivial invariant factor of degree s^, (i = j.j + 1 n). We define as the
f^(\.) ra-
tional canonical form of all matrices similar to A
(26-2) S = di^s {C(fj), C(fJ ^^) C(/))

To show that A and S have the same similarity invariants we note that C(f-) is similar to
D^ = diag (1, 1 l,/i(A)) and, thus, S is similar to diag(D^- D D). By a sequence
,
.^^ of
interchanges of two rows and the same two columns, we have S similar to

diag (1,1 l.fja), fj^^(\) f^iX))

We have proved
I. Every square matrix A is similar to the direct sum (26.2) of the companion matrices
of the non-trivial invariant factors of XI - A.

Example 1. Let the non-trivial similarity invariants of A over the rational field be

/8(A) = X+1. fg(X) A%L /io(A) = A^ + 2A + 1

Then

1
1
1
C(fa) = [-1], C(f9) 1 C(fio)
-10 10
1

-10 -2

203
)

204 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

and

-1

-1
1
diag(C(/g),C(/9),C(/io))
1

1
-1 -2
Is the required form of Theorem I.

Note. The order in which the companion matrices are arranged along the diagonal is
immaterial. Also

www.TheSolutionManual.com
-1
-1
1

-1
1

1 -2
1

using the transpose of each of the companion matrices above is an alternate form.

A SECOND CANONICAL FORM. Let the characteristic matrix of A have as non-trivial invariant fac-

tors the polynomials f^(X) of (26.1). Suppose that the elementary divisors are powers of t dis-

tinct irreducible polynomials in F[X]: pi(A,), paCX), ..-, PjCA). Let

(26.3) fi(\) = ipi(X)l^"!p2(A)!^2^..{p,(X)!'^is-


(pi(A)i'"''iP2(A)i'"''...iPi( (i = i,i+'^ n)

where not every factor need appear since some of the ^'s may be zero. The companion matrix
C{p?^^) of any factor present has \])^(X)\ ^^ as the only non-trivial similarity invariant; hence,
C{{^) is similar to

diag (C(p^?" ). Cipl-'i Cip^^ti))

We have
II. Every square matrix A over F is similar to the direct sum of the companion matri-
ces of the elementary divisors over F of XI A.

Example 2. For the matrix A of Example 1, the elementary divisors over the rational field are A+1, X+1,
(A.+ l)^, A^-A,+ l, (X -A+1) . Their respective companion matrices are

1
[-1], [-1],
[-: .;] [.: 1
\
1 2 -3 2

and the canonical form of Theorem II is


CHAP. 26] CANONICAL FORMS UNDER SIMILARITY 205

-1

-1
1
-1 -2
1

-1 1

-1 2 -3 2

THE JACOBSON CANONICAL FORM. Let A be the matrix of the section above with the elementary
divisors of its characteristic matrix expressed as powers of irreducible polynomials in F\_\].

www.TheSolutionManual.com
Consider an elementary divisor {p(X)!^. If q = 1, use C(p), the companion matrix; if q>l,
build

C(p) M
C(p) M ..

(26.4) Cq(P) =

. . . C(p) M
. . C(p)

where M is a matrix of the same order as C(p) having the element 1 in the lower left hand corner
and zeroes elsewhere. The matrix C^^Cp) of (26.4), with the understanding that Ci(p) = C(p) is
called the hypercompanion matrix of \p(X)]'^. Note that in (26.4), there is a continuous line of
I's just above the diagonal.

When the alternate companion matrix C'(p) is used, the hypercompanion matrix of jp(A)!''is

C'ip)
N C'ip)
N C'ip) .

Ca(P)

. C'ip)

N C'ip)

where A' is a matrix of the same order as C'ip) having the element 1 in the upper right hand cor-
ner and zeroes elsewhere. In this form there is a continuous line of I's just below the diagonal.

Examples. Let IpiX.)]'^ = (\^ + 2\- 1)'^. Then C(p) = T M, M=y


and

1 -2 1

1 -2 1
CqiP)
1

1 -2 1

1 -2
.

206 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

In Problem 1, it is shown that Cq(p) has {pCA)}*^ as its only non-trivial similarity invariant.
Thus, Cq(p) is similar to C(p^) and may be substituted for it in the canonical form of Theorem II.

We have
III. Every square matrix A over F is similar to the direct sum of the hypercompanion
matrices of the elementary divisors over F of \I - A.

Example 4. For the matrix A of Example 2, the hypercompanion matrices of the elementary divisors A + 1,

A+1 and A -A+1 are their companion matrices, the hypercompanion matrix of (A+l)^is
1

r-1 n
and that of (A^-A + 1)^ is
-11 10 Thus, the canonical form of Theorem
1

0-11
nils

-1

-1

www.TheSolutionManual.com
-1 1
-1
1

-1 1

-1 1 1

-1 1

The use of the term "rational" in connection with the canonical form of Theorem I is some-
what misleading. It was used originally to indicate that in obtaining the canonical form only
rational operations in the field of the elements of A are necessary. But this is, of course, true
also of the canonical forms (introduced later) of Theorems n and III. To further add to the con-
fusion, the canonical form of Theorem III is sometimes called the rational canonical form.

THE CLASSICAL CANONICAL FORM. Let the elementary divisors of the characteristic matrix of A
be powers of linear polynomials. The canonical form of Theorem III is then the direct sum of
hypercompanion matrices of the form

1 ..
i
1 .
i
(26.5) C^(P)
.. 1
i
. . <^.,

corresponding to the elementary divisor !p(A)! (A-a^)^ For an example, see Problem 2.

This special case of the canonical form of Theorem III is known as the Jordan or classical
canonical form. [Note that C^(p) of (26.5) is of the type / of (25.4). ] We have
IV. Let 7
be the field in which the characteristic polynomial of a matrix A factors
into linear polynomials. Then A is similar over ^to the direct sum of hypercompanion mat-
rices of the form (26.5), each matrix corresponding to an elementary divisor (X-a^)'^.

Example 5. Let the elementary divisors over the complex field of \I -A be: A-i, X + i, (X-i) ,
(X+i) .
CHAP. 26] CANONICAL FORMS UNDER SIMILARITY 207

The classical canonical form of A is

-1
i 1

-i 1

-i

From Theorem IV follows


V. An re-square matrix A is similar to a diagonal matrix if and only if the elementary
divisors oi XI - A
are linear polynomials, that is, if and only if the minimum polynomial of
A is the product of distinct linear polynomials.
See Problems 2- 4.

A REDUCTION TO RATIONAL CANONICAL FORM.

www.TheSolutionManual.com
In concluding this discussion of canonical forms,
it will be shown
that a reduction of any re-square matrix to its rational canonical
form can be
made, at least theoretically, without having prior knowledge of the invariant
factors of \I - A.
A somewhat different treatment of this can be found in Dickson, L. E., Modern Algebraic
Theo-
ries,Benj.H. Sanborn, 1926. Some improvement on purely computational
aspects is made in
Browne, E. T., American Mathematical Monthly, vol. 48 (1940).

We shall need the following definitions:

If A is an re-square matrix and X is an re-vector


over F and if g(X) is the monic polynomial
in F[\] of minimum degree such that g(A)-X = 0. then with respect
to A the vector X is said
to belong to g(X).

If, with respect to A the vector X belongs to g(X) of degree p, the linearly independent
vectors X,AX.A''X /4^ X are called a chain having X as its leader.
2 -6
Example 6. Let A 1 -3 The vectors Z = [l, 0, oj' and AX = [2, 1, l]' are linearly independent
1 -2
2
while A X = X. Then (A -I)X=0 and A" belongs to the polynomial A,^-l For y =
[1,0, -ij', AY =[-1,0,1]' Y; thus, (A+I)Y=0 and F belongs to the polynomial A+ 1.
m(\) is the minimum polynomial of an re-square matrix A,
If
then m(A)- X = for every re-
vector X. Thus, there can be no chain of length greater than the degree of
m(A). For the matrix
of Example 6, the minimum polynomial is A^ - 1.

Let S be the rational canonical form of the re-square matrix A


over F. Then, there exists a
non-singular matrix R over F such that
(26-6) R'^AR = S = diag(C.,C.^^ C)
where, for convenience, Q/^) in (26.2) has been replaced
by C^. We shall assume that C,- . the
companion matrix of the invariant factor
/iCA) = A^i + c^ ^_X'i- + c,. A +
has the form

-^tl
1
-^i2
1 .

Ci
-^i3

. 1
208 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

Prom (26.6), we have


(26.7) AR = RS = Rdiag(Cj,Cj^^ C)

Let R be separated into column blocks R-,R-^ R^ so that R^ and C^, (i=/,/ + l n)

have the same number of columns. From (26.7),

AR = A[R..R.^^ RJ = IRj.Rj,, /?Jdiag(C,,..C^.^^ C)


= iRjCj.Rj^^C.^^ R,,CJ
and
AR^ = R^Ci. (i=j,j + l n)

Denote the si column vectors of Ri by Ri^, R^s. , Ris- and form the product
t
st
RiCi - [Rii, Ri2, , Risi\Ci - [Ri2,Ri3, ,Risj,> - ^__ Rik^ik]
Since

ARi = A[Ri^,Ri^, -.Risi] = [ARi^.ARi2,. ..ARisil = RiCi

www.TheSolutionManual.com
we have
(26.8) Ri2 = ARn, Ris - ARi2 = A Rii, - Risi = A^'-'Ri.
and

(26.9) -|>i.% - ^Ris,

Substituting into (26.9) from (26.8), we obtain

-i= K l
c^kA^-'Ri^ = A'iRu
or

(26.10) (A^^ + cis.A^'i'^ + ... + a^A + cirDRir =

Prom the definition of C^ above, (26.10) may be written as

(26.11) fi(A)-Ri^ =

Let Rit be denoted by X, so that (26.11) becomes fj(A)-X: =0; then, since X-,AXj,
A Xj^ A ^ X^ are linearly independent, the vector X^ belongs to the invariant factor f^iX).
Thus, the column vectors of fi^ consist of the vectors of the chain having X^, belonging to /^(X),
as leader.

To summarize: the n linearly independent columns of R, satisfying (26.2), consist of re-/+l


chains
Xi,AXi A^^"'Xi (i=j,i + l n)

whose leaders belong to the respective invariant factors / (A), fj.AX.) /^(X) and whose lengths
satisfy the condition < s,- ^ s- ^^ i ... ^ s.
J +1
J n
We have
VI. Por a given re-SQuare matrix A over F:
(i) let X^ be the leader of a chain ^ of maximum length for all ra- vectors over F\
(it) let X^_^ be the leader of a chain maximum length (any member of which
^_^ of
is linearly independent of the preceding members and those of for all n- )

vectors over F which are linearly independent of the vectors of ;

(iii) let X^^^he the leader of a chain -n-2 ^ maximium length (any member of which
is linearly independent of the preceding members and those of and fi'^.i)
for all n- vectors over F which are linearly independent of the vectors of ^
and_i;
and so on. Then, for
CHAP. 26] CANONICAL FORMS UNDER SIMILARITY 209

R AR is the rational canonical form of A.

1 1 1

Example 7. Let A 1 2 2 Take ;f=[l,0,o]'; then ^, 4X = [l, 1, l]', ^^A: = [3, 5. 6].' are line-
1 3 2
arly Independent while ^;f = [14,25,30]'= 54^X- a:. Thus, {A-bA^+I)X = Q and A'
belongs to /gCX) = m(X) = A - 5A^ + 1 = <jS(A). Taking

1 1 3
R = [a;. 4A',4=A'] 15
1 6
we find

1 -3 2 1 3 14
6-5 AR = [AX.A'^X^a'^X] 1 5 25

www.TheSolutionManual.com
-1 1 1 6 30

0-1
and R ^AR 1 = s
1 5

Here A is non-derogatory with minimum polynomial m(A) irreducible over the rational
field.
Every 3-vector over this field belongs to m(X). (see Problem 11), and leads a chain
of length
three. The matrix R having the vectors of any chain as column vectors is such that R'^AR = S.

Example Let A = Tak


8. A" = [l,-l,o]'; then ^ A' = AT and AT belongs to A- 1. NowA-1

cannot be the minimum polynomial m(A) of A. It is, however, a divisor of m(A), (see Problem
11). and could be a similarity invariant of A.

Next, take F=[l,0,0]', The vectors Y, -4? = [2, 1, 2]', ^^^ =[ 11, 8, s]' are linearly
independent while A^Y = [54,43,46]' = 5A'^Y + SAY -77. Thus, Y belongs to m(A) =
A - 5A - 3A + 7 = <^(A). The polynomial A- 1 is not a similarity invariant; in fact, unless
the first choice of vector belongs to a polynomial which could reasonably
be the minimum
function, it should be considered a false start. The reader may verify
that

R'^AR

11
when R = [Y,AY,A'^Y]

See Problems 5-6.

SOLVED PROBLEMS
Prove: The matrix C^ip) of (26.4) has !p(A)!'? as its only non-trivial similarity invariant.
Let C^(p) be of order s. The minor of the element in the last row and first column of A/ - C
(p) is 1
so that the greatest common divisor of all (s-l)-square minors of A/ - C (p) is 1. Then the invanant fac-
tors of A/ - C^(p) are 1,1 l.f^(\). But /^(A) = ip(A)!'? since

0(A) = |A/-C^(p)| = \XI-C(pf = jp(A)!9


210 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

2. The canonical form (a) is that of Theorems I and II, the non-trivial invariant factor and elementary
divisor being X^+4X +6A^ +4X + 1. The canonical form of Theorem III is (b).

1 1 1

1 -1 1
(a) (b)
1 -1 1

-1 -4 -6 -4 -1

The canonical form (a) is that of Theorem I, the invariant factors being \ + 2,\ -4, A'+3A -4A-12
and the elementary divisors being X + 2, X+ 2, X+ 2, X- 2, X- 2, X+ 3. The canonical form of both
Theorems II and III is (b).
2 2

1 -2

www.TheSolutionManual.com
4 -2
(a) (b)
1 2

1 2
12 4 -3 -3

4. The canonical form (a) is that of Theorem III. Over the rational field the elementary divisors are

X+ 2, X+ 2, (X'^ +2X-lf , (X^ +2X- if and the invariant factors are


(X + 2)(X^ + 2X - 1)^ (X + 2)(X^+2X-1)^

The canonical form of Theorem I is (b) and that of Theorem II is (c).

-2
-2
1

1 --2 1

1
1 --2
(a)
1

1 -2 1
1

1 -2 1

1
1 -2

2 7 -10 --6
1
(b)
1 ID

2 11 12 17 -14 --21 -8
CHAP. 26] CANONICAL FORMS UNDER SIMILARITY 211

-200000000000
-2
1

1
-1 4 -2 -4
(c)
1

1
1 -6 9 4 -9 -6

www.TheSolutionManual.com
-2 3 3 -1 -6 -2
-1 2 1

-2 1 2
5. Let A = Take X
1 -1 -1 2 1

-2 -1 1 3 1

-1 2 0_

Then AX = [-2 1, 1, 1. 1, 1]', A^X = [1,0,-1,0,0 -ir. A'' X =[-3.1.1,1. 1,2]' are linearly inde-
pendent while -4 A" = [l, 0, -2, 0, 0, -2]' = 2^=A: -Z; X belongs to A*-2A^ + 1 We tentatively assume
m(\) = A* - 2X^ + 1 and write Xg for X.
The vector Y = [o, 0, 0, 1, 0, O]' is linearly independent of the members
of the chain led by Xg and
AY - [-1,0, 1, -1, 1,0]' is linearly independent of Y and the members
of the chain. Now A^Y 2,,
= Y so that
Y belongs to A -1. Since the two polynomials complete the set of non-trivial
invariant factors we write
Xs for Y. When
-1 1 -2 1 -3 "o 1
1 1

R 1 -1 1 -1
= [Xs.AXs.Xg.AXe.A^Xe.A^'Xfi
-1
R~'^AR -
1 u 1 1
1 1 1 2
-1 2 1
the rational canonical form 'of ^.

Note. The vector Z = [o, 1, 0, 0, 0. Oj' is linearly independent of the members of the chain led by Xg
and AZ - [3, 0, -2, 1, -2, o]' is linearly independent of Z and the members of the chain However A'^Z =
1-1,1,0,0,0,1]'= -AXg + A Xg + Z; then (A'' ~ 1)(Z - AXg) = and W=Z-AXe = [2.0 -1 -1 ll -i]'
belongs to A - 1. Using this as X^. we may form another R with which
to obtain the rational canonical' form.

-2 -1 -1 -1
1 3 1 1

6. Let A = -1 -4 -2 -1 Take X = [1,0,0,0,0]'.


-1 -4 -1 -2
-2 -2 -2 -2
Then = [-2,1,-1,-1,-2]', ^^^=[1,1,-1,-1,0]' are linearly independent while a'^X^
^.^5
r
L-1, -2, -2,0J - 2A
2, X-3X and X belongs to A - 2A^ + 3. We tentatively assume this to be the minimum
polynomial m(A) and label X as X^.
212 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

When, in A, the fourth column is subtracted from the first, we have [-1,0,0,1,0]'; hence, if y =
[l, 0, 0,-1, o]', AY = -Y and Y belongs to X+ 1. Again, when the fourth column of A is subtracted from
the third, we have [o, 0, -1, 1, O]'; hence, if Z = [O, 0, 1, -1, O]', AZ = -Z find Z belongs to A+ 1. Since
y, Z, and the members of the chain led by X^ are linearly independent, we label Y as X^ and Z as X^. When

11-2 1 -1
11 0-100
R = [Xs.X^.X^.AXs,.A%] = 1 0-1-1 R~ AR = 0-3
-1 -1 -1 -1 10
0-2 12
the rational canonical form of A.

www.TheSolutionManual.com
SUPPLEMENTARY PROBLEMS
7. For each of the matrices (a)-{h) of Problem 9, Chapter 25, write the canonical matrix of Theorems I, n, in
over the rational field, Can any of these matrices be changed by enlarging the number field?

Partial Ans. (a) I, 1 II, ni, diag(l,2, 3)

6 -11 6

"o 1 o"

(b) I, II, m. 1

_0 0_

"o o'

(e) I, 1 n, in.

_0 4

-1
1 -1
(/) I. n, ni.
-1
_ 1

o'

1
(g) n. m.
1 -1 1

-1 -1 -2. -1
'2

10
(h) I, 10 n. in, diag(2, 2,2, -1, -1)

8. Under what conditions will (a) the canonical forms of Theorems I and n be identical? (b) the canonical
forms of Theorems n and in be identical? (c) the canonical form of Theorem H be diagonal?
CHAP. 26] CANONICAL FORMS UNDER SIMILARITY 213

"o o"

9. Identify the canonical form 1 Check with the answer to Problem 8(6).

_0 0_

10. Let the non-singular matrix A have non-trivial invariant factors (a) X + 1 A, + 1 (X + i)^ (b) X^+ 1,
Write the canonical forms of Theorems I, n, III over the rational field and
that of Theorem IV.

Ans. (a)
-1
1

-1
10
1

www.TheSolutionManual.com
1

_0 -10
- -2 0_

-1
-1
1
-1 -2
1
n.
-1 1

1
1

. - -1 2 -3 2

-1 6"

-1
-1 1

-1
1
in,
-1 1

--1 1 1

1
-1
-
1_

-1 o"
-1
-1 1
-1
a
IV, where a,^ = ^(li\/3).
a 1

a
/S

/3 1

B
.

214 CANONICAL FORMS UNDER SIMILARITY [CHAP. 26

11. Prove: If with respect to an n-square matrix A, the vector X belongs to g(X) then g(X) divides the minimum
polynomial m(A) of A.
Hint. Suppose the contrary and consider m(A) = A (A) " g (A) + r (X).

12. In Example 6, show that X,AX. and Y are linearly independent and then reduce A to its rational canonical
form.

13. In Problem 6:

(a) Take 7 = [o, 1, 0, 0, o]', linearly independent of the chain led by ^5, and obtain X4 = y_ (3/1 _2/) A^g

belonging to A+1.

(i>) Take Z = [O, 0, 1, 0, O]', linearly independent of X^ and the chain led by X5, and obtain X^ = Z -X^
belonging to A + 1.

(c) Compute R~ AR using the vectors X3 and X^ of (b) and (a) to build R.

14. For each of the matrices A of Problem 9(a)-(h), Chapter 25, find R such that R' AR is the rational canon-

www.TheSolutionManual.com
ical form of A

15. Solve the system of linear differential equations


dxi
2Xx + X2 + Xg + X4 + i
dt
dx2
4^1 + 2x2 + 3At3
dt

dxs ^ - - 3*3 -
6*1 2a:2 2xi^
dt
dX4 - - -
3ai X2 Xq 2x4.
dt

where the x^ are unknown functions of the real variable t.

TT.
Hint.
i. T
Leti ^V = U, . .3. -JT
r
.
J
define

-
"^^
=
dxi
|^^,
dxQ dxn dxA'
.
and rewrite the system as
-jf. -jf,
^J
2 111
dX_ 4 2 3
(i)
-6 -2 -3 -2
X + AX + H
dt
-3 -1 -1 -2

Since the non-singular linear transformation X = RY carries (i) into

        dY/dt  =  R⁻¹ARY + R⁻¹H

choose R so that R⁻¹AR is the rational canonical form of A. The elementary 4-vector E1, belonging to λ³ - λ,
is leader of the chain X1 = E1, AX1, A²X1, while E4 yields X2 = E4 - X1 + 2AX1 belonging to λ + 1. Now with

                                        [1   2  -1    3]
        R  =  [X1, AX1, A²X1, X2]   =   [0   4  -2    8]
                                        [0  -6   4  -12]
                                        [0  -3   2   -5]

the transformed system is

                  [0   0   0   0]       [t]                dy1/dt  =  t
        dY/dt  =  [1   0   1   0] Y  +  [0]  ,   i.e.      dy2/dt  =  y1 + y3
                  [0   1   0   0]       [0]                dy3/dt  =  y2
                  [0   0   0  -1]       [0]                dy4/dt  =  -y4

Then

        y1  =  C1 + ½t²
        y2  =  C2·e^t + C3·e^(-t) - t
        y3  =  C2·e^t - C3·e^(-t) - C1 - 1 - ½t²
        y4  =  C4·e^(-t)

and X = RY gives

        x1  =   2C1 +  C2·e^t + 3(C3 + C4)·e^(-t)  +  t² - 2t + 1
        x2  =   2C1 + 2C2·e^t + 2(3C3 + 4C4)·e^(-t) +  t² - 4t + 2
        x3  =  -4C1 - 2C2·e^t - 2(5C3 + 6C4)·e^(-t) - 2t² + 6t - 4
        x4  =  -2C1 -  C2·e^t - 5(C3 + C4)·e^(-t)  -  t² + 3t - 2
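Since the arithmetic above is easy to slip on, here is a symbolic check of the solution as reconstructed (added here, not part of the text), using sympy:

    from sympy import Matrix, symbols, exp, simplify

    t, C1, C2, C3, C4 = symbols('t C1 C2 C3 C4')

    A = Matrix([[ 2,  1,  1,  1],
                [ 4,  2,  3,  0],
                [-6, -2, -3, -2],
                [-3, -1, -1, -2]])
    H = Matrix([t, 0, 0, 0])

    # Y as solved in the canonical coordinates, then X = R*Y as in the hint.
    Y = Matrix([C1 + t**2/2,
                C2*exp(t) + C3*exp(-t) - t,
                C2*exp(t) - C3*exp(-t) - C1 - 1 - t**2/2,
                C4*exp(-t)])
    R = Matrix([[1,  2, -1,   3],
                [0,  4, -2,   8],
                [0, -6,  4, -12],
                [0, -3,  2,  -5]])
    X = R * Y

    # X must satisfy dX/dt = AX + H identically.
    assert (X.diff(t) - (A*X + H)).applyfunc(simplify) == Matrix([0, 0, 0, 0])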
INDEX

Absolute value of a complex number, 110 Characteristic roots (cont.)


Addition of Hermitian matrices, 164
of matrices, 2, 4 of inverse A, 155
of vectors, 67 of real orthogonal matrices, 155
Adjoint of a square matrix of real skew-symmetric matrices, 170
definition of, 49 of real symmetric matrices, 163
determinant of, 49 of unitary matrices, 155
inverse from, 55

Characteristic vectors
rank of, 50 {see Invariant vectors)
Algebraic complement, 24 Classical canonical form, 206
Anti-commutative matrices, 11 Closed, 85
Associative laws for Coefficient matrix, 75
addition of matrices, 2 Cofactor, 23
fields, 64 Cogredient transformation, 127
multiplication of matrices, 2 Column
Augmented matrix, 75 space of a matrix, 93
transformation, 39
Basis Commutative law for
change of, 95 addition of matrices, 2
of a vector space, 86 fields, 64
orthonormal, 102, 111 multiplication of matrices, 3
Bilinear form(s) Commutative matrices, 11
canonical form of, 126 Companion matrix, 197
definition of, 125 Complementary minors, 24
equivalent, 126 Complex numbers, 12, 110
factorization of, 128 Conformable matrices
rank of, 125 for addition, 2
reduction of, 126 for multiplication, 3
Congruent matrices, 115
Canonical form Conjugate
classical (Jordan), 206 of a complex number, 12
Jacobson, 205 of a matrix, 12
of bilinear form, 126 of a product, 13
of Hermitian form, 146 of a sum, 13
of matrix, 41, 42 Conjugate transpose, 13
of quadratic form, 133 Conjunctive matrices, 117
rational, 203 Contragredient transformation, 127
row equivalent, 40 Coordinates of a vector, 88
Canonical set Cramer's rule, 77
under congruence, 116, 117
under equivalence, 43, 189 Decomposition of a matrix into
under similarity, 203 Hermitian and skew-Hermitian parts, 13
Cayley-Hamilton Theorem, 181 symmetric and skew-symmetric parts, 12
Chain of vectors, 207 Degree
Characteristic of a matrix polynomial, 179
equation, 149 of a (scalar) polynomial, 172
polynomial, 149 Dependent
Characteristic roots forms, 69
definition of, 149 matrices, 73
of adj A, 151 polynomials, 73
of a diagonal matrix, 155 vectors, 68
of a direct sum, 155 Derogatory matrix, 197


Determinant Hermitian form


definition of, 20 canonical form of, 146
derivative of, 33 definite, 147
expansion of index of, 147
along first row and column, 33 rank of, 146
along a row (column), 23 semi-definite, 147
by Laplace method, 33 signature of, 147
multiplication by scalar, 22 Hermitian forms
of conjugate of a matrix, 30 equivalence of, 146
of conjugate transpose of a matrix, 30 Hermitian matrix, 13, 117, 164
of elementary transformation matrix, 42 Hypercompanion matrix, 205
of non-singular matrix, 39
of product of matrices, 33 Idempotent matrix, 11
of singular matrix, 39 Identity matrix, 10
of transpose of a matrix, 21 Image
Diagonal of a vector, 94
elements of a square matrix, 1 of a vector space, 95
matrix, 10, 156 Index
Diagonable matrices, 157 of an Hermitian form, 147

Diagonalization of a real quadratic form, 133

by orthogonal transformation, 163 Inner product, 100, 110


Intersection space, 87
by unitary transformation, 164
Dimension of a vector space, 86 Invariant vector(s)
definition of, 149
Direct sum, 13
Distributive law for of a diagonal matrix, 156
of an Hermitian matrix, 164
fields, 64
of a normal matrix, 164
matrices, 3
Divisors of zero, 19 of a real symmetric matrix, 163
of similar matrices, 156
Dot product, 100
Inverse of a (an)
diagonal matrix, 55
Eigenvalue, 149 direct sum, 55
Eigenvector, 149 elementary transformation, 39
Elementary matrix, 11, 55
matrices, 41 product of matrices, 11
n-vectors, 88 symmetric matrix, 58
transformations, 39 Involutory matrix, 11
Equality of
matrices, 2 Jacobson canonical form, 205
matrix polynomials, 179 Jordan (classical) canonical form, 206
(scalar) polynomials, 172
Equations, linear Kronecker's reduction, 136
equivalent systems of, 75
solution of, 75 Lagrange's reduction, 132
system of homogeneous, 78 Lambda matrix, 179
system of non-homogeneous, 77 Laplace's expansion, 33
Equivalence relation, 9 Latent roots (vectors), 149
Equivalent Leader of a chain, 207
bilinear forms, 126 Leading principal minors, 135
Hermitian forms, 146 Left divisor, 180
matrices, 40, 188 Left inverse, 63
quadratic forms, 131, 133, 134 Linear combination of vectors, 68
systems of linear equations, 76 Linear dependence (independence)
of forms, 70
of matrices, 73
Factorization into elementary matrices, 43, 188
of vectors, 68
Field, 64
Lower triangular matrix, 10
Field of values, 171
First minor, 22 Matrices
congruent, 115
Gramian, 103, 111 equal, 2
Gram-Schmidt process, 102, 111 equivalent, 40
Greatest common divisor, 173 over a field, 65

Matrices (cont.) Null space, 87


product of, 3 Nullity, 87
scalar multiple of, 2 n-vector, 85
similar, 95, 156
square, 1 Order of a matrix, 1
sum of, 2 Orthogonal
Matrix congruence, 163
definition of, 1 equivalence, 163
derogatory, 197 matrix, 108
diagonable, 157 similarity, 157, 163
diagonal, 10 transformation, 103
elementary row (column), 41 vectors, 100, 110
elementary transformation of, 39 Orthonormal basis, 102, 111
Hermitian, 13, 117, 164
idempotent, 11 Partitioning of matrices, 4
inverse of, 11, 55 Periodic matrix, 11
lambda, 179 Permutation matrix, 99
nilpotent, 11 Polynomial
nonderogatory, 197 domain, 172
non-singular, 39 matrix, 179

normal, 164 monic, 172
normal form of, 41 scalar, 172
nullity of, 87 scalar matrix, 180
of a bilinear form, 125 Positive definite (semi-definite)
of an Hermitian form, 146 Hermitian forms, 147
of a quadratic form, 131 matrices, 134, 147
order of, 1 quadratic forms, 134
orthogonal, 103, 163 Principal minor
periodic, 11 definition of, 134
permutation, 99 leading, 135
polynomial, 179 Product of matrices
positive definite (semi-definite), 134, 147 adjoint of, 50
rank of, 39 conjugate of, 13
scalar, 10 determinant of, 33
singular, 39 inverse of, 11
skew-Hermitian, 13, 118 rank of, 43
skew-symmetric, 12, 117 transpose of, 12
symmetric, 12, 115, 163
triangular, 10, 157 Quadratic form
unitary, 112, 164 canonical form of, 133, 134
Matrix polynomial(s) definition of, 131
definition of, 179 factorization of, 138
degree of, 179 rank of, 131
product of, 179 reduction of
proper (improper), 179 Kronecker, 136
scalar, 180 Lagrange, 132
singular (non-singular), 179 regular, 135
sum of, 179 Quadratic form, real
Minimum polynomial, 196 definite, 134
Multiplication index of, 133
in partitioned form, 4 semi-definite, 134
of matrices, 3 signature of, 133
Quadratic forms
equivalence of, 131, 133, 134
Negative
definite form (matrix), 134, 147 Rank
of a matrix, 2 of adjoint, 50
semi-definite form (matrix), 134, 147 of bilinear form, 125
Nilpotent matrix, 11 of Hermitian form, 146
Non-derogatory matrix, 197 of matrix, 39
Non-singular matrix, 39 of product, 43
Normal form of a matrix, 41 of quadratic form, 131
Normal matrix, 164 of sum, 48

Right divisor, 180 Symmetric matrix (cont.)


Right inverse, 63 definition of, 12
Root invariant vectors of, 163
of polynomial, 178 System(s) of Equations, 75
of scalar matrix polynomial, 187
Row Trace, 1
equivalent matrices, 40 Transformation
space of a matrix, 93 elementary, 39
transformation, 39 linear, 94
orthogonal, 103
Scalar singular, 95
matrix, 10 unitary, 112
matrix polynomial, 180 Transpose
multiple of a matrix, 2 of a matrix, 11
polynomial, 172 of a product, 12
product of two vectors (see inner product) of a sum, 11
Schwarz Inequality, 101, 110 Triangular inequality, 101, 110
Secular equation (see characteristic equation) Triangular matrix, 10, 157
Unit vector, 101

www.TheSolutionManual.com
Signature
of Hermitian form, 147 Unitary
of Hermitian matrix, 118 matrix, 112
of real quadratic form, 133 similarity, 157
of real symmetric matrix, 116 transformation, 112
Similar matrices, 95, 196 Upper triangular matrix, 10
Similarity invariants, 196
Singular matrix, 39 Vector(s)
Skew-Hermitian matrix, 13, 118 belonging to a polynomial, 207
Skew-symmetric matrix, 12, 117 coordinates of, 88
Smith normal form, 188 definition of, 67
Span, 85 inner product of, 100
Spectral decomposition, 170 invariant, 149
Spur (see Trace) length of, 100, 110
Sub-matrix, 24 normalized, 102
Sum of orthogonal, 100
matrices, 2 vector product of, 109
vector spaces, 87 Vector space
Sylvester's law basis of, 86
of inertia, 133 definition of, 85
of nullity, 88 dimension of, 86
Symmetric matrix over the complex field, 110
characteristic roots of, 163 over the real field, 100
Index of Symbols

Symbol                      Page        Symbol                      Page

a_ij                           1        E_i (vector)                  88
[a_ij]                         1        X·Y                     100, 110
A                              1        ||X||                   100, 110
Σ                              3        G                       103, 111
I_k                           10        X × Y                        109
A⁻¹                           11        Q                            115
A'                            11        P                            116
Ā                             12        s                            116
Ā'; A*                        13        q                            131
|A|; det A                    20        h                            146
|A_ij|                        22        λ; λ_i                       149
A^(i1 i2 ... im)_(j1 j2 ... jm)  23     φ(λ)                         149
α_ij                          23        E_i (matrix)                 170
r                             39        f(λ)                         172
H_ij; K_ij                    39        F[λ]                         172
H_i(k); K_i(k)                39        A(λ)                         179
H_ij(k); K_ij(k)              39        A_L(C); A_R(C)               180
~                             40        N(λ)                         189
N                             43        f_i(λ)                       189
adj A                         49        m(λ)                         196
F                             64        C(g)                         198
X; X_i                        67        J                            198
V_n(F)                        85        S                            203
V_n^r(F)                      86        C_q(p)                       205
N_A                           87