You are on page 1of 92

CEGEP Linear Algebra Problems

An open source collection of


422 CEGEP level linear algebra problems from 14 authors.

Edited by

Yann Lamontagne,
Sylvain Muise

February 19, 2017


http://obeymath.org/CEGEPLinearAlgebraProblems
Contents

1 Systems of Linear Equations 1


1.1 Introduction to Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Gaussian and Gauss-Jordan Elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Applications of Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Matrices 7
2.1 Introduction to Matrices and Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Algebraic Properties of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Matrix Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3 Determinants 17
3.1 The Laplace Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Determinants and Elementary Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Properties of Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 Adjoint of a Matrix and Cramers Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

4 Vector Geometry 23
4.1 Introduction to Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 Dot Product and Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.3 Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.5 Planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

5 Vector Spaces 33
5.1 Introduction to Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.3 Spanning Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.4 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.5 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.6 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6 Applications 43
6.1 The Simplex Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

A Answers to Exercises 45

References 85

Index 85
Chapter 1

Systems of Linear Equations

1.1 Introduction to Systems of Lin- 3x1 2x2 = 4


2x1 = 3
ear Equations d.
x1 + 9x2 = 8
5x1 7x2 = 13
1.1.1 [GH] State which of the following equations is a linear
equation. If it is not, state why.
1.1.5 [GH] Convert given augmented matrix into a system
p of linear equations. Use the variables x1 , x2 , . . ..
a. x + y + z = 10 f. x21 + x22 = 25
 
b. xy + yz + xz = 1 g. x1 + y + t = 1 1 2 3 1 0 0 0 2
a.
c. 3x + 9 = 3y 5z + x 7 h. x1 + 9 = 3 cos(y) 5z 1 3 9 0 1 0 0 1
  d. 0 0 1 0 5

i. cos(15)y + x4 = 1 3 4 7
b. 0 0 0 1 3
d. 5y + x = 1 j. 2x + 2y = 16 0 1 2  
e. (x 1)(x + 1) = 0

1 1 1 1 2
 1 0 1 0 7 2
c. e.
2 1 3 5 7 0 1 3 2 0 5

1.1.2 [GH] Solve the system of linear equations using sub-


stitution, comparison and/or elimination. 1.1.6 [GH] Perform the given row operations on

2 1 7
x + y = 1 x y+ z= 1 0 4 2 .
a.
2x 3y = 8 c. 2x + 6y z = 4 5 0 3
2x 3y = 3 4x 5y + 2z = 0
b.
3x + 6y = 8 x+y z=1 a. 1R1 R1 d. 2R2 + R3 R3
d. 2x + y =2 b. R2 R3 e. 21 R2 R2
y + 2z = 0
c. R1 + R2 R2 f. 52 R1 + R3 R3

1.1.3 [KK] Graphically, find the point of intersection of the 1.1.7 [GH] Give the row operation that transforms A into
two lines 3x + y = 3 and x + 2y = 1. That is, graph each line B where
and see where they intersect.

1 1 1
A = 1 0 1 .
1 2 3
1.1.4 [GH] Convert the given system of linear equations into
an augmented matrix.
1 1 1

1 0 1

3x + 4y + 5z = 7 a. B = 2 0 2 d. B = 1 1 1
a. x + y 3z = 1 1 2 3 1 2 3
2x 2y + 3z = 5
1 1 1

1 1 1

2x + 5y 6z = 2 b. B = 2 1 2 e. B = 1 0 1
b. 9x 8z = 10 1 2 3 0 2 2
2x + 4y + z = 7
3 5 7

x1 + 3x2 4x3 + 5x4 = 17 c. B = 1 0 1
c. x1 + 4x3 + 8x4 = 1 1 2 3
2x1 + 3x2 + 4x3 + 5x4 = 6

1
CHAPTER 1. SYSTEMS OF LINEAR EQUATIONS 1.2. GAUSSIAN AND GAUSS-JORDAN ELIMINATION
1.1.8 [JH] In the system 1.2 Gaussian and Gauss-Jordan
ax + by = c Elimination
dx + ey = f
1.2.1 [GH] State whether or not the given matrices are in
each of the equations describes a line in the xy-plane. By reduced row echelon form.
geometrical reasoning, show that there are three possibilities:
there is a unique solution, there is no solution, and there are
   
1 0 0 0 0 1 0 0 5
infinitely many solutions. a. g.
0 1 1 0 0 l. 0 1 0 7

0 1
 
0 0 0
 0 0 1 3
b. h.
1 0 0 0 0

1.1.9 [JH] Is there a two-unknowns linear system whose 2 0 0 2
solution set is all of R2 ? m. 0 2 0 2
 
1 1 1 1 1
c. 0 0 2 2
1 1 i. 0 1 1

0 0 1

0 1 0 0
 
1.1.10 [KK] You have a system of k equations in two vari- 1 0 1
d.
n. 0 0 1 0
ables, k 2. Explain the geometric significance of 0 1 2 1 0 0
a. No solution.
  j. 0 1 0 0 0 0 0
1 0 0
e. 0 0 0
b. A unique solution. 0 0 1 0 0 1 5
  1 0 0 o. 0 0 0 0
c. An infinite number of solutions. 1 0 1
f. k. 0 0 1 0 0 0 0
0 1 1
0 0 0

1.2.2 [SZ] State whether the given matrix is in reduced row


echelon form, row echelon form only or in neither of those
forms.
 
1 0 3 1 0 0 0
a.
0 1 3 d. 0 1 0 0

3 1 1 3
0 0 0 1

b. 2 4 3 16
1 0 4 3 0
1 1 1 5 e. 0 1 3 6 0

1 1 4 3
0 0 0 0 0
 
c. 0 1 3 6
1 1 4 3
f.
0 0 0 1 0 1 3 6

1.2.3 [KK] Consider the following augmented matrix in


which denotes an arbitrary number and  denotes a nonzero
number. Determine whether the given augmented matrix is
consistent. If consistent, is the solution unique?

 
0  0 0  0 0
a.
0 0 
c.
0 0 0 

0 0 0 0  0 0 0 0 

 
b. 0  0  0
d.
0 0  0 0 0 0  0
0 0 0 0 

1.2.4 [SZ] The following matrices are in reduced row echelon


form. Determine the solution of the corresponding system of
linear equations or state that the system is inconsistent.

2
CHAPTER 1. SYSTEMS OF LINEAR EQUATIONS 1.2. GAUSSIAN AND GAUSS-JORDAN ELIMINATION
 
1 0 2 1 0 0 3 0 5x + y = 17 x y + z = 4
a. a.
0 1 7 d. 0 1 2 6 0 x+y= 5 g. 3x + 2y + 4z = 5

1 0 0 3
0 0 0 0 1 x+ y+ z=3 x 5y + 2z = 18

b. 0 1 0 20 1 0 8 1 7 b. 2x y + z = 0 2x 4y + z = 7
0 0 1 19 0 1 4 3 2 3x + 5y + 7z = 7 h. x 2y + 2z = 2
e.

1 0 0 3 4
0 0 0 0 0 4x y + z = 5 x + 4y 2z = 3
c. 0 1 0 6 6 0 0 0 0 0 c. 2y + 6z = 30 2x y + z = 1

0 0 1 0 2 1 0 9 3 x+ z= 5 i. 2x + 2y z = 1
f. 0 1 4 20 x 2y + 3z = 7 3x + 6y + 4z = 9
0 0 0 0 d. 3x + y + 2z = 5 x 3y 4z = 3
2x + 2y + z = 3 j. 3x + 4y z = 13
3x 2y + z = 5 2x 19y 19z = 2
1.2.5 [GH] Use Gauss-Jordan elimination to put the given e. x + 3y z = 12 x+ y+z= 4
matrix into reduced row echelon form. x + y + 2z = 0 k. 2x 4y z = 1
  2x y + z = 1 x y = 2
1 2 2 1 1
a. f. 4x + 3y + 5z = 1 x y+ z= 8
3 5 j. 1 1 1
  2 1 2
5y + 3z = 4 l. 3x + 3y 9z = 6
2 2 7x 2y + 5z = 39
b.
3 2 1 2 1

4 12
 k. 1 3 1
c. 1 3 0
2 6 1.2.8 [GH] Find the solution to the given linear system. If

5 7
 1 2 3 the system has infinite solutions, give two particular solutions.
d. l. 0 4 5
10 14
  1 6 9 2x1 + 4x2 = 2
1 1 4 a.
e. x1 + 2x2 = 1

2 1 1 1 1 1 2 x1 + x2 + 6x3 + 9x4 = 0
h.
  m. 2 1 1 1 x1 + 5x2 = 3 x1 + x3 + 2x4 = 3
7 2 3 b.
f. 1 1 1 0 2x1 10x2 = 6 x1 + 2x2 + 2x3 = 1
3 1 2
  2 1 1 5 x1 + x2 = 3 i. 2x1 + x2 + 3x3 = 1
3 3 6 c.
g. n. 3 1 6 1 2x1 + x2 = 4 3x1 + 3x2 + 5x3 = 2
1 1 2 3 0 5 0
  3x1 + 7x2 = 7 2x1 + 4x2 + 6x3 = 2
4 5 6 d.
h. 1 1 1 7 2x1 8x2 = 8 j. 1x1 + 2x2 + 3x3 = 1
12 15 18
o. 2 1 0 10 3x1 + 6x2 + 9x3 = 3

2 4 8
2x1 + 4x2 + 4x3 = 6
3 2 1 17 e. 2x1 + 3x2 = 1
i. 2 3 5 x1 3x2 + 2x3 = 1 k.
4 1 8 15 2x1 3x2 = 1
2 3 6 x1 + 2x2 + 2x3 = 2
p. 1 1 2 7 f. 2x1 + x2 + 2x3 = 0
2x1 + 5x2 + x3 = 2
3 1 5 11 l. x1 + x2 + 3x3 = 1
x1 x2 + x3 + x4 = 0
g. 3x1 + 2x2 + 5x3 = 3
2x1 2x2 + x3 = 1
1.2.6 [JH] Use Gausss Method to find the unique solution
for each system.
1.2.9 [KK] Find the solution to the system of equations,
2x + 3y = 13 x z=0 65x + 84y + 16z = 546, 81x + 105y + 20z = 682, and 84x +
a. 110y + 21z = 713.
x y = 1 b. 3x + y =1
x + y + z = 4
1.2.10[YL] Solve the following systems by Gauss-Jordan
elimination and find two particular solution for each system:
1.2.7 [SZ] Solve the following systems of linear equations.
y z+ wv=0
2x 3y + 4z 4w + v = 0
a.
3x 3y + 4z 4w + v = 0
5x 5y + 7z 7w + v = 0
y z+ wv=3
b. 2x 3y + 4z 4w + v = 2
3x 3y + 4z 4w + v = 2

3
CHAPTER 1. SYSTEMS OF LINEAR EQUATIONS 1.2. GAUSSIAN AND GAUSS-JORDAN ELIMINATION
1.2.11[YL] Given 1.2.17 [KK] If a system of equations has more equations
than variables, can it have a solution? If so, give an example
3x1 + 3x2 + 7x3 3x4 + x5 = 3 and if not, explain why.
2x1 + 3x2 + 3x3 + x4 2x5 = 1
4x1 + 17x3 2x4 x5 = 1
1.2.18 [KK] If a system of linear equations has fewer equa-
tions than variables and there exist a solution to this system.
a. Solve the following system by Gauss-Jordan elimination.
Is it possible that your solution is the only one? Explain.

b. Find two particular solution to the above system.


c. Find a solution to the above system when x3 = 1. 1.2.19 [JH] For which values of k are there no solutions,
many solutions, or a unique solution to this system?

1.2.12[YL] Given x y=1


3x 3y = k
3x1 + 3x2 + 7x3 3x4 = 0
2x1 + 3x2 + 3x3 + x4 = 0
4x1 + 17x3 2x4 = 0
1.2.20 [GH] State for which values of k the given system
9x1 + 6x2 + 27x3 4x4 = 0
will have exactly 1 solution, infinite solutions, or no solution.

a. Solve the system by Gauss-Jordan elimination. x1 + 2x2 = 1 x1 + 2x2 = 1


a. c.
b. Find two particular nontrivial solution to the system. 2x1 + 4x2 = k x1 + kx2 = 2
c. Find a solution to the system when x1 = 1. x1 + 2x2 = 1 x1 + 2x2 = 1
b. d.
x1 + kx2 = 1 x1 + 3x2 = k

1.2.13[MH] Given

0

2 5 18
1.2.21 [KK] Choose h and k such that the augmented matrix
3 6 0 3 shown has each of the following: one solution, no solution and
A=
infinitely many solutions.
3 7 0 2
5 10 3 7    
1 h 2 1 2 2
a. b.
2 4 k 2 h k
a. Find the reduced row echelon form of the matrix A.
b. Suppose that A is the augmented matrix of a linear sys-
tem. Write the corresponding system of linear equations.
Use the first part to find the solution of the system. 1.2.22[MB] The augmented matrix of a linear system is
given by
c. Suppose that A is the coefficient matrix of a homogeneous
1 4 2 1
linear system. Write the corresponding system of linear 0 1 1 4
equations. Use the first part to find the solution of the
0 0 a b
system.
Without performing any calculation, state for what values of
a and b there is
1.2.14 [SZ] Find at least two different row echelon forms for a. no solution?
the matrix
b. exactly one solution?
 
1 2 3
4 12 8 c. infinitely many solutions?

1.2.15 [JH] Find the coefficients a, b, and c so that the 1.2.23[YL] Consider the following augmented matrix of a
graph of f (x) = ax2 + bx + c passes through the points (1, 2), system of linear equations.
(1, 6), and (2, 3).
1 2 3 4

0 5 6 7
2
1.2.16 [JH] True or false: a system with more unknowns 0 0 a 1 ba b
than equations has at least one solution. (As always, to say
true you must prove it, while to say false you must produce For which value(s) of a and b, if any, the system
a counterexample.) a. has a unique solution, justify.
b. has no solutions, justify.

4
CHAPTER 1. SYSTEMS OF LINEAR EQUATIONS 1.3. APPLICATIONS OF LINEAR SYSTEMS
c. has infinitely many solutions, justify. 1.3 Applications of Linear Systems

1.2.24[OV] Find k for which the system 1.3.1 [KK] Consider the following diagram of four circuits.

x1 + kx2 =0 3 20 volts 1
(k 1)x1 + x3 = 0
x1 + (k + 2)x3 = 0 I2 I3
5 volts 5 1
a. has infinitely many solutions.
2 6
b. has only the trivial solution.

I1 I4
1.2.25[YL] Given the augmented matrix of a linear system: 10 volts 1 3

4 2

1 2 3 4
0 2 4 5 6
0 0 0 a2 1 b2 a2 The jagged lines denote resistors and the numbers next to
If possible for what values of a and b the system has them give their resistance in ohms, written as . The breaks
in the lines having one short line and one long line denote a
a. no solution? Justify.
voltage source which causes the current to flow in the direction
b. exactly one solution? Justify. which goes from the longer of the two lines toward the shorter
c. infinitely many solutions? Justify. along the unbroken part of the circuit. The current in amps in
the four circuits is denoted by I1 , I2 , I3 , I4 and it is understood
that the motion is in the counter clockwise direction. If Ik
1.2.26[OV] Determine conditions on the bi s, if any, in order ends up being negative, then it just means the current flows
to guarantee that the linear system is consistent. in the clockwise direction. Then Kirchhoffs law states:
x + z = b1 The sum of the resistance times the amps in the counter clock-
x + y + 2z = b2 wise direction around a loop equals the sum of the voltage
2x + y + 3z = b3 sources in the same direction around the loop.
In the above diagram, the top left circuit gives the equation
2I2 2I1 + 5I2 5I3 + 3I2 = 5
1.2.27[YL] Given the augmented matrix of a linear system
For the circuit on the lower left,

1 3 1 4 b1
3 2 4 5 b2 4I1 + I1 I4 + 2I1 2I2 = 10
4 1 5 1 b3 .

Write equations for each of the other two circuits and then
7 1 9 6 b4 give a solution to the resulting system of equations.
Determine the restrictions on the bi s for the system to be
consistent. 1.3.2 [KK] Consider the following diagram of three circuits.
3 12 volts 7
1.2.28 [JH] Prove that, where a, b, . . . , e are real numbers
and a 6= 0, if
ax + by = c I1 I2
10 volts 5 3
has the same solution set as
2 1
ax + dy = e

then they are the same equation. What if a = 0? I3


2 4

1.2.29 [JH] Show that if ad bc 6= 0 then 4

ax + by = j
cx + dy = k The current in amps in the four circuits is denoted by
I1 , I2 , I3 and it is understood that the motion is in the
has a unique solution. counter clockwise direction. Solve for I1 , I2 , I3 .

5
Chapter 2

Matrices

2.1 Introduction to Matrices and b. a 112 matrix times a 121 matrix


Matrix Operations c. a 23 matrix times a 21 matrix
d. a 22 matrix times a 22 matrix

2.1.1 [JH] Find the indicated entry of the following matrix.


  2.1.6 [GH] State the dimensions of A and B. State the
1 3 1 dimensions of AB and BA, if the product is defined. Then
A=
2 1 4 compute the product AB and BA, if possible.
a. a2,1 b. a1,2 c. a2,2 d. a3,1  
1 2 2 6
a. A = ,
1 4  e. A = 6 2 ,
2 5 5 1
2.1.2 [JH] Determine the size of each matrix. B=  
3 1 4 5 0
B=
    
3 1
 4 4 4
1 0 4 1 1 5 10 b. A = ,
a. c. 2 2

1 4

2 1 5 b. 1 1 10 5 f. A = ,
7 6
 
3 1 1 0 7
B=  
4 2 9 1 1 5 5
B=

0 1
2 1 3 5
2.1.3 [GH] Simplify the given expression where c. A = 1 1 ,

1 2 1
     2 4 g. A = 1 2 1 ,
1 1 3 2

2 0
A=
7 4
B=
5 9 B= 0 0 2
3 8

0 0 2
B = 1 2 1

a. A + B c. 3(A B) + B 2 1
b. 2A 3B d. 2(A B) (A 3B) d. A = 9 5 , 1 0 0
3 1

  4 1 3
5 6 4 h. A = 2 3 5 ,
B=
0 6 3 1 5 3
2.1.4 [GH] The row and column matrix U and V are defined.
Find the product U V , where possible. 2 4 3
B = 1 1 1
4 0 2
   
a. U = 1 4 , c. U = 1  2 3 ,
2 3
V = V =
5 2
2.1.7 [GH] Given a diagonal matrix D and a matrix A,
   
b. U = 6 2 1 2 , d. U = 2 5 ,
3 1 compute the product DA and AD, if possible.
2 V = 1
V = 9

1
5

2.1.5 [JH] Give the size of the product or state not de-
fined.
a. a 23 matrix times a 31 matrix

7
CHAPTER 2. MATRICES 2.1. INTRODUCTION TO MATRICES AND MATRIX OPERATIONS
 
1 1 1 d1 0 a. 3C 4D h. F G
c. D = ,
a. D = 2 2 2 , 0 d
2 b. A (D + 2C) i. Illustrate the associa-
3 3 3 a b tivity of matrix mul-
A= c. AE
0 0 c d tiplication by multiply-
A = 0 3 0 d. AE
d1 0 0 ing (AB)C and A(BC)
0 0 5 e. 3BC 4BD where A, B, and C are
d. D = 0 d2 0 ,
f. CB + D
 
4 0 matrices above.
b. D = , 0 0 d3
 0 3
 a b c g. GC
1 2 A = d e f
A=
1 2 g h i
2.1.11 [SZ] Use the matrices

10 11
     
2.1.8 [GH] Given a matrix A compute A2 and A3 . 1 2 0 3 2 0
A= B= C= 3
3 4 5 2 5 5 9
 
0 1 0 1 0
a. A =

1 0 d. A = 0 0 1 7 13 1 2 3
  1 0 0 D = 43 0 E = 0 4 9
2 0
b. A = 6 8 0 0 5
0 3 0 0 1
e. A = 0 0 0 to compute the following or state that the indicated operation
1 0 0
0 1 0 is undefined.
c. A = 0 3 0
0 0 5
a. 7B 4A h. A2 B 2
b. AB i. (A + B)(A B)
2.1.9 [MB] Consider the matrices c. BA j. A2 5A 2I2
d. E+D k. E 2 + 5E 36I3
3 0     e. ED l. EDC
1 4 2 4 1
A = 1 2 , B = and C = f. CD + 2I2 A m. CDE
3 1 5 0 2
1 1
g. A 4I2 n. ABCEDI2
Suppose also that the matrices D and E have the following
sizes: D is 24 and E is 43.
Determine whether the given matrix expression is defined. 2.1.12 [GH] In each part a matrix A is given. Find AT .
For those that are defined, give the size of the resulting ma- State whether A is upper/lower triangular, diagonal, sym-
trix. metric and/or skew symmetric.

a. EB c. B 2

9 4 10 3 4 5
b. (2AC + B T )T d. CDE a. 6 3 7 f. 0 3 5
8 1 1 0 0 3
 
4 2 9 1 0
g.
2.1.10 [HE] Let b. 5 4 10 0 9
6 6 9
4 0 2

3 4
 

1 2 3
 
4 1 2
 4 7 4 9 h. 0 2 3
A= , B = 5 1 , C = , c.
1 1 0 1 5 1 9 6 3 9 2 3 6
1 1  
7 4 0 6 1
d.
 

3 4
4 6 i. 6 0 4
1 0 1
4 0 0
1 4 0
D= , E = 2 3 ,
0 2 1 e. 2 7 0
0 1
  4 2 5
2  
F = , G = 2 1 .
3
Compute each of the following and simplify, whenever possi- 2.1.13 [KK] Consider the matrices
ble. If a computation is not possible, state why.
1 2    
2 5 2 1 2
A = 3 2 , B = , C= ,
3 2 1 5 0
1 1

8
CHAPTER 2. MATRICES 2.1. INTRODUCTION TO MATRICES AND MATRIX OPERATIONS
   
1 1 1 2.1.22 [YL] Consider the matrices:
D= , E=
4 3 3
1 2    
Find the following if possible. If it is not possible explain why. 2 5 2 1 2
A= 3 2 , B=
, C= ,
3 2 1 5 0
1 1
a. 3AT d. EE T g. DT BE
b. 3B AT e. B T B
   
1 1 1
D= , E=
c. E T B f. CAT 4 3 3
Evaluate the following if possible, justify.

2.1.14 [MB] Prove that if the product CAC T is well-defined, a. tr(C)AB d. tr(C)A tr(D)B T
then the matrix A must be a square matrix. b. AA e. EA
c. 3DE 2E f. (AC)E
2.1.15 [JH] Show that if G has a row of zeros then GH (if
defined) has a row of zeros. Does the same statement hold for
columns? 2.1.23 [MB] Write the 3 4 matrix B = [bij ] whose entries
satisfy 
i j if |i j| 1,
2.1.16 [JH] Show that if the first and second rows of G are b ij =
2i if |i j| < 1.
equal then so are the first and second rows of GH. Generalize.

2.1.17 [JH] Describe the product of two diagonal matrices. 2.1.24 [YL] Consider the matrices:
     
A = aij 33 , B = bij 23 , C = cij 32 ,
2.1.18 [JH] Show that the product of two upper triangular    
matrices is upper triangular. Does this also hold for lower D = dij 22 , E = eij 36
triangular matrices? where aij = i j, bij = (1)i 2 + (1)j 3, cij = i +
j, dij = (ij)2 , eij = i + j. Evaluate the following if possi-
2.1.19 [KK] Show that the main diagonal of every skew ble, justify.
symmetric matrix consists of only zeros.
a. EA c. C T AB T e. (3BC 2D)T
b. AB T d. tr(D2 ) f. tr(E)
2.1.20 [GH] Find the trace of the given matrix.
 
4 1 1 2 6 4
d. 2.1.25 [JH] Find the product of this matrix with its trans-
a. 2 0 0 1 8 10
1 2 5 pose.
e. Any skew-symmetric  
cos sin
matrix.
 
1 5 sin cos
b.
9 5 f. In

10 6 7 9
2 1 6 9 2.1.26 [KK] A matrix A is called idempotent if A2 = A.
c.
0 4 4 0 Show that the following matrix is idempotent.
3 9 3 10
2 0 2
A= 1 1 2
1 0 1
2.1.21 [MH] Consider the matrices

  1 2
3 0 1
A= and B = 0 1 . 2.1.27 [GH] Find values for the scalars a and b that satisfy
2 3 1
1 0 the given equation.

Find the following when possible. If not, justify briefly.


           
3 8 7 1 1 5
a. a +b = c. a +b =
1 4 1 1 3 5
a. BA c. (AB)T d. A + B T f. tr(BA)  
4
   
6 10
 
1
  
3 4

b. A2 e. tr(A)B b. a +b = d. a +b =
2 3 5 3 9 12

9
CHAPTER 2. MATRICES 2.2. ALGEBRAIC PROPERTIES OF MATRICES
 
2.1.28 [KK] Let A =
1 1
. Find all 2 2 matrices, 2.2 Algebraic Properties of Matrices
3 3
B such that AB = 0.
2.2.1 [GH] Given the matrices A and B below. Find X that
satisfies the equation.
2.1.29 [YL] Solve for all a, b, c, d such that
   
3 1 1 7
A= B=
   
3a + 3b + 7c 3d 2a + 3b + 3c + d 1 2 2 5 3 4
=
4a + 17c 2d 9a + 6b + 27c 4d 3 6
a. 2A + X = B c. 3A + 2X = 1B
b. A X = 3B d. A 21 X = B

2.2.2 [OV] Find matrix A, if


  T  
1 1 1 2
2AT 3 =
0 4 3 3

2.2.3 [CR] Given the matrix


 
2 1
A=
0 2
a. A2 b. A3 c. A5

2.2.4 [GH] The following statement

(A + B)2 = A2 + 2AB + B 2

is false. We investigate that claim here.


   
5 3 5 5
a. Let A = and let B = . Compute
3 2 2 1
A+B
b. Find (A + B)2 by using the previous part.
c. Compute A2 + 2AB + B 2 .
d. Are the results from the two previous parts equal?
e. Carefully expand the expression (A + B)2 = (A + B)(A +
B) and show why this is not equal to A2 + 2AB + B 2 .

2.2.5 [KK] Suppose A and B are square matrices of the same


size. Which of the following are necessarily true? Justify.
2
a. (A B) = A2 2AB + B 2
2
b. (AB) = A2 B 2
2
c. (A + B) = A2 + 2AB + B 2
2
d. (A + B) = A2 + AB + BA + B 2
e. A2 B 2 = A (AB) B
3
f. (A + B) = A3 + 3A2 B + 3AB 2 + B 3
g. (A + B) (A B) = A2 B 2

2.2.6 [JH] Prove each, assuming that the operations are


defined, where G, H, and J are matrices, where Z is the zero
matrix, and where r and s are scalars.

10
CHAPTER 2. MATRICES 2.2. ALGEBRAIC PROPERTIES OF MATRICES
a. Matrix addition is commutative G + H = H + G. c. If A and B are nn matrices then tr(A + B) = tr(A) +
b. Matrix addition is associative G+(H +J) = (G+H)+J. tr(B).
c. The zero matrix is an additive identity G + Z = G. d. If A and B are nn matrices then tr(AB) = tr(BA).
d. 0G=Z
e. (r + s)G = rG + sG 2.2.14 [YL] A non-zero square matrix A is said to be nilpo-
f. Matrices have an additive inverse G + (1) G = Z. tent of degree 2 if A2 = 0.
g. r(G + H) = rG + rH Prove or disprove: There exists a square 2 2 matrix that is
h. (rs)G = r(sG) symmetric and nilpotent of degree 2.

2.2.15 [YL] A square matrix A is called idempotent if A2 =


2.2.7 [YL] Prove or disprove the following statements
A.
a. Cancellation Law If A, B, C are matrices such that Prove: If A is idempotent then A + AB ABA is idempotent
AB = AC then B = C. for any square matrix B with the same dimension as A.
b. Commutativity If A, B are square matrices of the same
size then AB = BA.
2.2.16 [MH] An involutory matrix is a matrix A such that
c. Zero Factor Property If A, B are matrices such that
A2 = I.
AB = 0 then A = 0 or B = 0
a. Show that In is involutory.
b. Show that if A is an involutory matrix , then A is a square
2.2.8 [KK] If possible find all k such that AB = BA. matrix.
   
1 2 1 2 c. If A is involutory, is AT also involutory? If yes, prove it.
a. A = , B=
3 4 3 k If no, find a counter example.
   
1 2 1 2 d. Suppose that A and B are involutory, Is AB also involu-
b. A = , B=
3 4 1 k tory? If yes, prove it. If no, find a condition on A and B
so that AB is also involutory?

2.2.9 [JH]
a. Prove that H p H q = H p+q and (H p )q = H pq for positive 2.2.17* [JH] Find the formula for the n-th power of this
integers p, q. matrix.  
p p p
1 1
b. Prove that (rH) = r H for any positive integer p and 1 0
scalar r R.

2.2.10 [JH]
T
a. Show that (G + H) = GT + H T .
T
b. Show that (r H) = r H T .
T
c. Show that (GH) = H T GT .
d. Show that the matrices HH T and H T H are symmetric.

2.2.11 [JH] Prove that for any square H, the matrix H +H T


is symmetric. Do every symmetric matrix have this form?

2.2.12 [GH]
a. Prove that for any n n matrix A, A + AT is symmetric
and A AT is skew-symmetric.
b. Prove that any n n can be written as the sum of a
symmetric and skew-symmetric matrices.

2.2.13 [YL] Prove the following statements


a. If A is a nn matrix then tr(AT ) = tr(A).
b. If A is a nn matrix then tr(cA) = c tr(A).

11
CHAPTER 2. MATRICES 2.3. MATRIX INVERSES
2.3 Matrix Inverses
 
1 a b
2.3.8 [YL] Find a formula for tr(A ) if A = is
c d
invertible.
2.3.1 [GH] Given the matrices A. Find A1 , if possible.
   
1 5 2 5 2.3.9 [OV] Consider p(X) = X 3 2X 2 + 4X 3I
a. c.
5 24 3 4 
2 1

    a. If p(A) = , compute p(AT )
3 0 1 3 1 2
b. d.
0 7 2 6 b. If p(B) = 0 where B is an nn matrix, find B 1 in terms
of B.

2.3.2 [GH] Given the matrices A and B. Compute (AB)1


and B 1 A1 . 2.3.10 [JH] In real number algebra, there are exactly two
    numbers, 1 and 1, that are their own multiplicative inverse.
1 2 1 2 Does H 2 = I have exactly two solutions for 22 matrices?
a. A = , b. A = ,
1 1  3 4 
3 5 7 1
B= B= 2.3.11 [GH] Prove or disprove: If A and B are 22 invertible
2 5 2 1
matrices then A + B is an invertible matrix.

1
2.3.3 [KK] Show (AB) = B 1 A1 by verifying that 2.3.12 [GH] Given the matrices A. Find A1 , if possible.

AB B 1 A1 = I and B 1 A1 (AB) = I

25 10 4 2 3 4
a. 18 7 3 i. 3 6 9
6 2 1 1 9 13
   
1 2 1 1

2.3.4 [YL] Given C = and D = . Solve 1 0 0 5 1 0
5 0 4 3 b. 4 1 7 j. 7 7 1
the given equations for X. 20 7 48 2 8 1
a. CXD = 10I2
4 1 5

1 0 0 0

T
b. C((DX) 2I)1 = C c. 5 1 9 19 9 0 4
k.
10 2 19 33 4 1 7

1 5 0
4 2 0 1
2.3.5 [YL] Solve for X given that it satisfies
d. 2 15 4 1 0 0 0
DXDT = tr(BC)BC 4 19 1 27 1 0 4
l.
18 0 1 4

where 25 8 0
e. 78 25 0 4 0 0 1
2 1


2 1 0
 
2 2
 48 15 1 1 0 2 8
B= C = 3 2 D =
. 0 1 0 0
3 4 0 1 2 1 0 0 m.
1 0

f. 7 5 8
0 4 29 110
2 2 3 0 3 5 19


0 0 1
0 0 1 0
2.3.6 [YL] Solve for X given that it satisfies 0 0 0 1
g. 1 0 0 n.
1 1 0 0 0
0 1 0

2A + X T =I 0 1 0 0
0 1 0
where h. 1 0 0 1 0 0 0

1 3
 0 2 0 0
A= 0 0 1 o.
1 2 0 0 3 0
0 0 0 4

2.3.7 [YL] Solve for A given that it satisfies


T 2.3.13 [SZ] Find the inverse of the matrix or state that the
(I AT )1 = (tr(B)B 2 )
matrix is not invertible.
where  
2 3
B=
1 2

12
CHAPTER 2. MATRICES 2.3. MATRIX INVERSES

3 0 4 1 2 3 a. Find A1 .
a. 2 1 3 c. 2 3 11 b. Solve for X where AX = B and
3 2 5 3 4 19
1 0 21 0 0


4 6 3 1 0 3 0
B = 0 1 0 2 1
b. 3 4 3 2 2 8 7
d.
5
4 2 21 0 0
1 2 6 0 16 0
1 0 4 1  T 1
1
c. Find 2 A if possible.

2.3.14 [YL] Consider the following system:


2.3.18 [MH] Solve the given matrix equation:
x 2y = 5
3x 4y = 6 2 6 6 1 2 0
2 7 6 X = 0 1 4 .
a. Write the above system as a matrix equation. 2 7 7 2 5 0
b. Solve the matrix equation by using the inverse of the
coefficient matrix.
2.3.19 [YL] Solve for X given that it satisfies
c. Give a geometrical interpretation of the solution set.
 1 1
tr(A)A + X T A = I3
6
2.3.15 [GH] Given the matrices A and b below. Find x that
satisfies the equation Ax = b by using the inverse of A where
  1 2 3
3 5 1 2 12 B = 0 2 3
a. A = ,
0 0 3
 2 3 c. A = 0 1 6 ,
21 3 0 1
b=
13 17

1 4
 b = 5 2.3.20 [YL, MB] Given that A, B, C and M are invertible
b. A = , 20 matrices simplify the following expressions.
 4  15
a. AT B(AB)1 (AT A1 )1

21 1 0 3
b= d. A = 8 2 13 , T T
77 b. B T AT ((AB) )1 C(C T A) B(CB)1
12 3 20 T T T
34 c. (AT M 1 )1 (A2 ) (H T AT (A1 ) )
b = 159
243
2.3.21 [OV] Assuming that all matrices are n n and in-
vertible, solve the following equation for X and simplify.
1 T 1 T
2.3.16 [YL] Consider A1 BC T DT X C 1 A B T = B 1

1 2 3
A = 0 1 2 .
0 0 3 2.3.22 [KK] Give an example of a matrix A such that A2 =
I, A 6= I and A 6= I.
a. Find A1 .
b. Using A1 solve Ax = b where 2.3.23 [JH] What is the inverse of rH?

x1 1
x = x2 and b = 2 . 2.3.24 [KK] Suppose AB = AC and A is an invertible n n
x3 1 matrix. Does it follow that B = C? Explain.

2.3.25 [JH] Assume that H is invertible and that HG is the


2.3.17 [YL] Given zero matrix. Show that G is a zero matrix.

2 2 0
A = 4 3 0 . 2.3.26 [JH] Prove that if H is invertible then the inverse
1
3 2 2 commutes with a matrix GH 1 = H 1 G if and only if H
itself commutes with that matrix GH = HG.

13
CHAPTER 2. MATRICES 2.4. ELEMENTARY MATRICES
2.3.27 [YL] Show that if A is invertible then its inverse is 2.4 Elementary Matrices
unique.

2.4.1 [JH] Predict the result of each multiplication by an


2.3.28 [JH] Prove: If T is an invertible matrix and k is a elementary matrix, and then verify by performing the multi-
natural number then (T k )1 = (T 1 )k . plication.
        
3 0 1 2 1 0 1 2 1 0 1 2
1 a. b. c.
2.3.29 [KK] Prove: If A is invertible, then A1

= A. 0 1 3 4 0 2 3 4 2 1 3 4

2.3.30 [KK] Show that if A is an invertible n n matrix,


1 T
then so is AT and AT = A1 . 2.4.2 [JH] Find
a. a 33 matrix that, acting from the left, swaps rows one
4 and two.
2.3.31 [JH] Show that if T is square and if T is the zero
1 2
matrix then (I T ) = I + T + T + T .3 b. a 22 matrix that, acting from the right, swaps column
one and two.

2.3.32 [YL] Prove: If B and C are nn matrices such that    


A = B T C + C T B is invertible then A1 is symmetric. a b c 0 1
2.4.3 [SZ] Let A = E1 = E2 =
    d e f 1 0
5 0 1 2
2.3.33 [YL] Show that a matrix A which satisfy A3 + 3A2 + E3 = .
0 1 0 1
A + I = 0 is invertible and express its inverse in terms of A Compute E1 A, E2 A and E3 A. What effect did each of the Ei
and the identity. matrices have on the rows of A? Create E4 so that its effect
on A is to multiply the bottom row by 6.
2.3.34 [YL] Given an n n matrix A such that p(A) = 0
where p(x) = x3 x2 + 1. Determine the inverse of A in terms 2.4.4 [JH] Write
of A.
 
1 0
3 3
2.3.35 [YL] Given an n n identity I and p(x) = a0 + a1 x + as the product of two elementary matrices.
a2 x2 + . . . + am xm an arbitrary polynomial of degree m. Find
the condition for which p(I) is invertible.
1 0 1
2.4.5 [OV] Given A = 3 2 4
2.3.36 [YL] Prove: If AB and BA are both invertible then 0 0 1
A and B are both invertible. a. Find the inverse of A.
b. Write A1 as a product of elementary matrices.
2.3.37 [JH] Prove or disprove: Nonsingular matrices com-
mute.
2.4.6 [MB] Consider the following system of equations:
2.3.38 [YL] Prove: If A and B are square matrices satisfying x1 x3 = 2
AB = I, then A = B 1 . 5x1 2x3 = 1 ,
x2 =4
2.3.39 [MH] An involutory matrix is a matrix A such that which can be written as AX = B, where A is the matrix of
A2 = I. Show that if A is an involutory matrix, then A is coefficients, X is the column matrix of unknowns, and B is
invertible and find A1 . the column matrix of constants.
a. Compute the inverse of A and use it to solve the system.
2.3.40 [SS] Show that if A is a 3 2 matrix and if B is a
2 3 matrix, then AB is not invertible. b. Using your work of a., write the inverse of the matrix A
as a product of elementary matrices.

14
CHAPTER 2. MATRICES 2.4. ELEMENTARY MATRICES
2.4.7 [YL] Write the given matrix as a product of elementary 2.4.13 [YL] Prove: If Ei are elementary matrices and
matrices En E2 E1 A is invertible then A is invertible.
0 0 1
1 3 0
2 4 0 2.4.14 [MH] An involutory matrix is a matrix A such that
A2 = I. Some elementary matrices are involutory. What is
the corresponding type of row operation?
2.4.8 [YL] Express
2.4.15 [OV] True or false? Justify your answer either prov-
1 0 0 ing the assertion or giving an example showing that it is false.
A = 1 0 1 The product of two elementary matrices of the same size must
0 2 1 be an elementary matrix.
as a product of 4 elementary matrices.
2.4.16 [YL] Show that there exist an nn symmetric ele-
2.4.9 [KK] Given matrices A and B and suppose a row oper- mentary matrix E such that E 2 = I.
ation is applied to A and the result is B. Find the elementary
matrix E such that EA = B. Find the inverse of E, E 1 , 2.4.17[OV] If matrices A and B are row equivalent then it
such thatE 1 B = A. can be denoted as A
= B. Show that
1 2 1 1 2 1 A
a. A =
a. A = 0 5 1 and B = 2 1 4
2 1 4 0 5 1 b. If A
= B, then B
= A.
c. If A
= B and B
= C, then A = C.
1 2 1

1 2 1
b. A = 0 5 1 and B = 0 5 1
2 1 4 1 12 2

1 2 1 1 2 1
c. A = 0 5 1 and B = 2 4 5
2 1 4 2 1 4

2.4.10 [YL] Show that



5 7 9 1 2 3
A = 1 2 3 and B = 4 5 6
4 5 6 8 10 12

are row-equivalent by finding 3 elementary matrices Ei such


that E3 E2 E1 A = B.

2.4.11 [YL] Given



1 1 1 1
A = 2 2 2 2 ,
3 3 3 3

1 0 0 1 0 0 0 0 1
E1 = 0 1 0 , E2 = 0 3 0 , E3 = 0 1 0
2 0 1 0 0 1 1 0 0
and
T
(X T E1 E2 E3 ) = A
solve for X, if possible.

2.4.12 [JH] Prove that any matrix row-equivalent to an


invertible matrix is also invertible.

15
Chapter 3

Determinants

3.1 The Laplace Expansion 3.1.5 [GH] Find the determinant of the given matrix using
cofactor expansion.

3.1.1 [GH] Compute the determinant of the following ma-



3 2 3 3 1 0
trices. a. 6 1 10 d. 3 0 4
    8 9 9 0 1 4
10 7 1 7
a. c.

8 9 5 9 8 9 2 0 0 1 1
b. 9 9 7 1 1 0 1
e.
   
6 1 10 1 5 1 9 1 1 1 0

b. d.
7 8 4 7 1 0 1 0
1 4 1
c. 0 3 0 1 0 0 1
1 2 2 1 0 0 1
3.1.2 [GH] For the following matrices, construct the sub- f.
1 1 1

0
matrices used to compute the minors M1,1 , M1,2 and M1,3 .
1 0 1 1
Compute the cofactors C1,1 , C1,2 , and C1,3 .

7 3 10 5 3 3
a. 3 7 6 c. 3 3 10 3.1.6 [SZ] Compute the determinant of the given matrix.
1 6 10 9 3 9
x x2
 

2 9 6

6 4 6
1 2 3
a.
b. 10 6 8 d. 8 0 0 1 2x d. 2 3 11
0 3 2 10 8 1

1 ln(x)
3 4 19

x3 x 3 i j k
b.
3 1 3 ln(x) e. 1 0 5
4 9 4 2
3.1.3 [JH] Evaluate the determinant by performing a cofac- x x4
tor expansion 4 6 3 1 0 3 0
3 0 1 c. 3 4 3 2 2 8 7
f.
5 0

1 2 2
1 2 6 16 0
1 3 0 1 0 4 1

a. along the first row,


b. along the second row, 3.1.7 [JH] Verify that the determinant of an upper-
c. along the third column. triangular 33 matrix is the product of the main diagonal.

a b c
3.1.4 [KK] Find the determinants of the following matrices. det 0 e f = aei
0 0 i
1 2 3 1 2 3 2
a. 3 2 2 1 3 2 3 Is it the same for lower triangular matrices?
c.
0 9 8 4 1 5 0

4 3 2
1 2 1 2
3.1.8 [KK] Find the determinants of the following matrices.
b. 1 7 8
3 9 3

17
CHAPTER 3. DETERMINANTS 3.2. DETERMINANTS AND ELEMENTARY OPERATIONS
 
a.
1 34 2 3 15 0 3.2 Determinants and Elementary
0 2 0 4 1 7
c.
0 0 3

5 Operations
4 3 14
b. 0 2 0 0 0 0 1
0 0 5 3.2.1 [KK] An operation is done to get from the first matrix
to the second. Identify the operation and how the determi-
nant will change.
3.1.9 [YL] Solve for .    
a b a c
a. to
c d b d
   
a b a b

1 0 3
d. to
1
   
c d 2c 2d
= 2 6 a b c d
3 1 b. to
1 3 5 c d a b
   
a b b a
 e. to
c d d c
  
a b a b
c. to
c d a+c b+d
3.1.10 [JH] True or false: Can we compute a determinant
by expanding down the diagonal? Justify.
3.2.2 [GH] A matrix M and det(M ) are given. Matrices
A, B and C are obtained by performing operations on M .
3.1.11 [JH] Which real numbers make
Determine the determinants of A, B and C and indicate the

cos sin
 operations used to obtain A, B and C.
sin cos
9 7 8 5 1 5
equal to zero? a. M = 1 3 7 , c. M = 4 0 2 ,
6 3 3 0 0 4
det(M ) = 41, det(M ) = 16,

18 14 16 0 0 4
A = 1 3 7 , A = 5 1 5 ,
6 3 3 4 0 2

9 7 8 5 1 5
B = 1 3 7 , B = 4 0 2 ,
96 73 83 0 0 4

9 1 6 15 3 15
C = 7 3 3. C = 12 0 6 .
8 7 3 0 0 12

0 3 5 5 4 0
b. M = 3 1 0 , d. M = 7 9 3 ,
2 4 1 1 3 9
det(M ) = 45, det(M ) = 120,

0 3 5 1 3 9
A = 2 4 1 , A = 7 9 3 ,
3 1 0 5 4 0
0 3 5 5 4 0
B = 3 1 0 , B = 14 18 6 ,
8 16 4 3 9 27
3 4 5 5 4 0
C= 3 1 0 . C = 7 9 3.
2 4 1 1 3 9

3.2.3 [GH] Find the determinant of the given matrix by us-


ing elemetary operations to bring the matrix under triangular
form.

18
CHAPTER 3. DETERMINANTS 3.3. PROPERTIES OF DETERMINANTS

4 3 4 1 0 0 3.3 Properties of Determinants
a. 4 5 3 d. 0 1 0
3 4 5 1 1 1
3.3.1 [JH] Which real numbers x make this matrix singular?
1 2 1 5 1 0 0
b. 5 5 4 3 5 2 5  
e. 12 x 4
4 0 0 2 4 3 4 8 8x

5 0 4
5 4 3 3

c. 2 4 1 2 1 4 4
5 0 4 3 3 3 2
f. 3.3.2 [KK] Consider the following matrices. Does there exist
0 4 5 1
a value of t for which this matrix fails to have an inverse?
2 5 2 5
Justify.

1 0 0
a. 0 cos t sin t
3.2.4 [MB] Compute the determinant of the matrix
0 sin t cos t
et cos t et sin t
t
1 2 0 3 e
0 4 2 1 b. et et cos t et sin t et sin t + et cos t
A=
0 2 1 1 et 2et sin t 2et cos t
2 3 1 0
1 t t 2

by combining row operations and cofactor expansions. c. 0 1 2t


t 0 2

3.2.5 [YL] Consider


      3.3.3 [KK] If A, B, and C are each nn matrices and ABC
a b c d b 2a 3b
A= , B= , C= . is invertible, show why each of A, B, and C are invertible.
c d 7a 5c 7b 5d d 2c 3d

3.3.4 [KK] Show that if det (A) 6= 0 for A an n n matrix,


a. If det(B) = 5 then determine det(A).
it follows that if AX = 0, then X = 0.
b. If det(C) = 5 then determine det(A).

3.3.5 [KK] Suppose A, B are n n matrices and that


3.2.6 [YL] Consider AB = I then show that BA = I. Hint: First explain why
det (A) , det (B) are both nonzero. Then (AB) A = A and
a d g 3d 3e 3f
then show BA (BA I) = 0. From this use what is given to
A = b e h and B = a + 2d b + 2e c + 2f .
conclude A (BA I) = 0.
c f k 4g 4h 4k

If det(B) = 5 then determine det(A).


3.3.6 [KK] Suppose A is an upper triangular matrix. Show
that A1 exists if and only if all elements of the main diagonal
are non zero. Is it true that A1 will also be upper triangular?

a 0 b
3.2.7 [YL] If A = 0 c
0 and det(2AT ) = 16 then Explain. Could the same be concluded for lower triangular
d 0 matrices?
e
2a 4a 6a + 12b

evaluate 0 8c 10c .
3.3.7 [JH] Prove: If S and T are nn matrix then det(T S) =
2d 4d 6d + 12e det(ST ).

3.2.8 Vandermondes determinant[JH] Prove: 3.3.8 [YL] Prove or disprove: If det(A2 ) = det(A) then
A2 = A.
1 1 1
det a b c = (b a)(c a)(c b)
a2 b2 c2 3.3.9 [KK] Show det (aA) = an det (A) for an nn matrix
A and scalar a.

3.2.9 [KK] Let A be an rr matrix and suppose there are 3.3.10 [YL] Prove: If A is an nn skew-symmetric matrix
r 1 rows (columns) such that all rows (columns) are linear where n is odd then det(A) = 0.
combinations of these r1 rows (columns). Show det (A) = 0.

19
CHAPTER 3. DETERMINANTS 3.3. PROPERTIES OF DETERMINANTS
3.3.11 [YL] Prove: If A is an nn matrix such that AT A = A 3.3.20 [JH] Prove or disprove: The determinant is a linear
then det(A) = 0 or det(A) = 1. function, that is det(x T + y S) = x det(T ) + y det(S).

3.3.12 [JH] Prove that each statement holds for 22 ma- 3.3.21 [KK] True or false. If true, provide a proof. If false,
trices. provide a counter example.
a. The determinant of a product is the product of the de- a. If A is a 33 matrix with a zero determinant, then one
terminants det(ST ) = det(S) det(T ). column must be a multiple of some other column.
b. If T is invertible then the determinant of the inverse is b. If any two columns of a square matrix are equal, then the
the inverse of the determinant det(T 1 ) = ( det(T ) )1 . determinant of the matrix equals zero.
c. For two nn matrices A and B, det (A + B) = det (A) +
det (B) .
3.3.13 [YL] Given
d. For an nn matrix A, det (3A) = 3 det (A)
3 3 1
e. If A1 exists then det A1 = det (A) .

a 0 0 0 0 0 a a
0 b 0 2
0 0 0 0 b f. If B is obtained by multiplying a single row of A by 4
A= 0 0 c
and B = 1 1.
0 0
c 0 c then det (B) = 4 det (A) .
4 4
0 0 0 d 0 0 n
d d g. For A an nn matrix, det (A) = (1) det (A) .
h. If A is a real nn matrix, then det AT A 0.

If det(B) = 5 then determine det(A).
i. If Ak = 0 for some positive integer k, then det (A) = 0.
j. If AX = 0 for some X 6= 0, then det (A) = 0.
3.3.14 [KK] Prove or disprove: If A and B are square ma-
trices of the same size then det (A + B) = det (A) + det (B).

3.3.15 [JH]
a. Suppose that det(A) = 3 and that det(B) = 2. Find
det(A2 B T B 2 AT ).
b. If det(A) = 0 then show that det(6A3 + 5A2 + 2A) = 0.

3.3.16 [JH]
a. Give a non-identity matrix with the property that AT =
A1 .
b. Prove: If AT = A1 then det(A) = 1.
c. Does the converse to the above hold?

3.3.17 [JH] Two matrices H and G are said to be similar if


there is a nonsingular matrix P such that H = P 1 GP Show
that similar matrices have the same determinant.

3.3.18 [KK] An nn matrix is called nilpotent if for some


positive integer, k it follows Ak = 0. If A is a nilpotent matrix
and k is the smallest possible integer such that Ak = 0, what
are the possible values of det (A)?

3.3.19 [JH] Show that this gives the equation of a line in


R2 thru (x2 , y2 ) and (x3 , y3 ).

x x2 x 3

y y 2 y3 = 0

1 1 1

20
CHAPTER 3. DETERMINANTS 3.4. ADJOINT OF A MATRIX AND CRAMERS RULE
3.4 Adjoint of a Matrix and 3.4.8 [YL] Consider the matrix:
Cramers Rule

sin 0 cos
A= 0 0
cos 0 sin
3.4.1 [JH] Find the adjoint of the following matrices.
a. Determine the value(s) of for which A is invertible.

2 1 4 1 4 3
a. 1 0 2 d. 1 0 3 b. Determine the adjoint of A.
1 0 1 1 8 9
 
3 1 2 1 0 0 3.4.9 [YL] Given
b.
2 4 1 2 1 0
e.

  0 1 2 1
0 1 2 0
1 1 3 0 0 4
c. 0 0 1 2 A=
5 0
5 0 0 6

0 7 8 0
.
3.4.2 [JH] a. Evaluate det(A).
a. Find a formula for the adjoint of a 22 matrix. b. Evaluate det adj (3A1 )T .

b. Use the above to derive the formula for the inverse of a
22 matrix.
3.4.10 [YL] Let B be a 33 matrix where det(B) = 3. Find
det 2B + B 2 adj(B) .

3.4.3 [JH] Derive a formula for the adjoint of a diagonal
matrix.
3.4.11 [KK] Use the formula for the inverse in terms of the
3.4.4 [JH] Prove that the transpose of the adjoint is the cofactor t matrix to find the inverse of the matrix.

adjoint of the transpose. e 0 0
a. 0 et cos t et sin t
0 et cos t et sin t et cos t + et sin t
3.4.5 [JH] Prove or disprove: adj(adj(T )) = T . t
e cos t sin t
b. et sin t cos t
3.4.6 [KK] Determine whether the matrix A has an inverse et cos t sin t
by finding whether the determinant is non zero. If the deter-
minant is nonzero, find the inverse using the formula for the
inverse which involves the cofactor matrix. 3.4.12 [YL] Prove: If A is an invertible nn matrix then
det(adj(A)) = (det(A))n1 .

1 2 3 1 3 3
a. 0 2 1 c. 2 4 1 3.4.13 [MB] Consider two 4 4 matrices A and B, with
3 1 0 0 1 1 det(A) = 2 and det(B) = 3. Find the determinant of M ,
knowing that det(2B T M A1 B) = det(adj(A)A2 B).

1 2 0 1 0 3
b. 0 2 1 d. 1 0 1
3 1 1 3 1 0 3.4.14 [YL] Let B be a 33 matrix where det(B) = 3. Find
det 2B + B 2 adj(B) .


3.4.7 [MB] Knowing that the cofactor matrix of A is


3.4.15 [YL] Given
3 9 12
cof(A) = 1 10 4 10 1 1 1 9 3 4 1 0 9
8 2 7 0
9 1 1 4 9 2 7 7 9
  0 0 8 1 1 4 9 2 7 7
and that the first row of A is 2 1 1

0 0 0 7 1 1 4 9 2 7

a. Find adj(A). 0 0 0 0 6 1 1 4 9 2
A=0

b. Find det(A). 0 0 0 0 5 1 1 4 9
0 0 0 0 0 0 4 1 1 4
c. Find the minors M12 and M22 of A.
0

0 0 0 0 0 0 3 1 1
d. Find A1 .
0 0 0 0 0 0 0 0 2 1

0 0 0 0 0 0 0 0 0 1

21
CHAPTER 3. DETERMINANTS 3.4. ADJOINT OF A MATRIX AND CRAMERS RULE

4 3 2 8 4 3 2 8 3.4.19 [SZ] Use Cramers Rule to solve for x4 .
2 3 1 4 0 3 1 4
B=1 2 1 4 , C = 0 0
.
1 4 x1 x3 = 2 4x1 + x2 = 4
1 2 2 8 0 0 0 8 2x2 x4 = 0 x2 3x3 = 1
a. b.
x1 2x2 + x3 = 0 10x1 + x3 + x4 = 0
a. If G is a 44 matrix show that BG is not invertible. x3 + x4 = 1 x2 + x3 = 3
b. If F is a 10 10 invertible matrix then evaluate
det F 101 adj(A)(F 1 )101 .


c. Evaluate det 2 adj(A) + 3A1 , if possible.



3.4.20 [YL] Determine for which value(s) of , if any, the
d. If D is a 1010 matrix such that A1 D2 = I then deter-
following system can be solved using Cramers Rule.
mine det(D), if possible.
e. Evaluate det B 101 adj(C) + BC 2015 , if possible.

x 2y +z 2w = 5
3x 4y + 3w = 6
 
det(C) T 3
f. If E is a square matrix and det 2 E A = then
5y w = 7
find det(E), if possible. 2x y 2z =8
g. If H is a 1010 matrix and det(2HA + Hadj(A)A2 ) = 0
then show that H singular.
h. Evaluate det(adj(A) + I), if possible.
i. Determine adj(adj(A)), if possible.

3.4.16 [MB] Use Cramers rule to solve this system.

x1 + 2x2 = 1
.
x1 + 4x2 = 2

3.4.17 [YL] Solve only for x1 using Cramers Rule.

x1 2x2 + 3x3 = 4
5x2 6x3 = 7
8x3 = 9

3.4.18 [GH] Given the matrices A and b, evaluate det(A)


and det(Ai ) for all i. Use Cramers Rule to solve Ax = b. If
Cramers Rule cannot be used to find the solution, then state
whether or not a solution exists.
 
3 0 3 7 14
d. A =
a. A = 5 4 4 2 4
5 5 4
 
1
b=
24 4
b=0
4 9 3

31 e. A = 5 2 13
 
b. A =
9 5 1 10 13
4 7 28
 
45 b = 35
b=
20 7
 
8 16 7 4 25
c. A =
10 20 f. A = 2 1 7
9 7 34
 
48
b=
60 1
b = 3
5

22
Chapter 4

Vector Geometry

4.1 Introduction to Vectors ~ = ~k? If so, find


j. Is there a scalar m such that m(~v + 2 w)
it.
4.1.1 [GHC] Sketch ~u, ~v , ~u + ~v and ~u ~v on the same axes.
4.1.4 [GHC] Find k~uk, k~v k, k~u + ~v k and k~u ~v k.
y
a. ~u = (2, 1), ~v = (3, 2)
z
b. ~u = (3, 2, 2), ~v = (1, 1, 1)
u
a. c. v . c. ~u = (1, 2), ~v = (3, 6)
x
u d. ~u = (2, 3, 6), ~v = (10, 15, 30)
x y

. v . 4.1.5 [GHC] Under what conditions is k~uk + k~v k = k~u +~v k?


y

u z
4.1.6 [GHC] Find the unit vector ~u in the direction of ~v .
u
d. . a. ~v = (3, 7)
b. x
b. ~v in the first quadrant of R2 that makes a 50 angle with
v x v y the x-axis.
. c. ~v in the second quadrant of R2 that makes a 30 angle
.
with the y-axis.

4.1.7 [MB] Find a vector ~u, of length 3, in the direction of


4.1.2 [GHC] Points P and Q are given. Write the vector
P~Q, where P (5, 1, 2) and Q(2, 2, 5).
P Q in component form and using the standard unit vectors.
a. P = (2, 1), Q = (3, 5)
b. P = (3, 2), Q = (7, 2) 4.1.8 [JH] Decide if the two vectors are equal.
c. P = (0, 3, 1), Q = (6, 2, 5) a. the vector from (5, 3) to (6, 2) and the vector from (1, 2)
to (1, 1)
d. P = (2, 1, 2), Q = (4, 3, 2)
b. the vector from (2, 1, 1) to (3, 0, 4) and the vector from
(5, 1, 4) to (6, 0, 7)
4.1.3 [MC] Let ~v = (1, 5, 2) and w
~ = (3, 1, 1).
a. ~v w ~ 4.1.9 [SM] Determine the distance from P to Q as given.
b. ~v + w ~ a. P (2, 3) and Q(4, 1)
c. k~~vvk b. P (1, 1, 2) and Q(3, 1, 1)
d. k 12 (~v w)k
~ c. P (4, 6, 1) and Q(3, 5, 1)
1
e. k 2 (~v + w)k
~ d. P (2, 1, 1, 5) and Q(4, 6, 2, 1)
f. 2 ~v + 4 w ~
g. ~v 2 w. ~
4.1.10 [GHC] Let ~u = (1, 2) and ~v = (1, 1).
h. ~ = ~i.
Find the vector ~u such that ~u + ~v + w
a. Find ~u + ~v , ~u ~v , 2~u 3~v .
i. ~ = 2 ~j + ~k.
Find the vector ~u such that ~u + ~v + w
b. Find ~x where ~u + ~x = 2~v ~x.

23
CHAPTER 4. VECTOR GEOMETRY 4.1. INTRODUCTION TO VECTORS
c. Sketch the above vectors on the same axes, along with ~u c. Determine whether the points S(1, 0, 1, 2), T (3, 2, 3, 1),
and ~v . and U (3, 4, 1, 5) are collinear. Show how you decide.

4.1.11 [GHC] Let ~u = (1, 1, 1) and ~v = (2, 1, 2). 4.1.19 [SM] Determine whether the three points P (1, 1, 0),

a. Find ~u + ~v , ~u ~v , ~u 2~v . Q(2, 4, 1) and R(1, 11, 2) are collinear.
b. Find ~x where ~u + ~x = ~v + 2~x.
c. Sketch the above vectors on the same axes, along with ~u 4.1.20 [SM] Show that a(5, 7) + b(3, 10) = (16, 77) rep-
and ~v . resents a system of two linear equations in the two variables
a and b. Solve and check.
4.1.12 [SM] Let ~x = (1, 2, 2) and ~x = (2, 1, 3). Deter-
mine 4.1.21 [SM] Show that a(1, 1, 0) + b(3, 2, 1) + c(0, 1, 4) =
a. 2~x 3~y (1, 1, 19) represents a system of three linear equations in the
b. 3(~x + 2~y ) + 5~x three variables a, b, and c. Solve and check.
c. ~z such that ~y 2~z = 3~x
d. ~z such that ~z 3~x = 2~z 4.1.22 [SM] Prove that the two diagonals of a parallelogram
bisect each other.

4.1.13 [SM] Find the coordinates of the point which is one


4.1.23 [SM] Prove that the line segment joining the mid-
third of the way from the point (1, 2) to the point (3, 2).
points of two sides of a triangle is parallel to the third side
and half as long.
4.1.14 [SM] Consider the points P (2, 3, 1), Q(3, 1, 2),

R(1, 4, 0), and S(5, 1, 5). Determine P Q, P R, P S, QR, and 4.1.24 [SM] Prove that the quadrilateral P QRS, whose ver-

SR, and verify that P Q + QR = P R = P S + SR. tices are the midpoints of the sides of an arbitrary quadrilat-
eral ABCD, is a parallelogram.
4.1.15 [SM] Find the midpoint of the line segment joining C
Q
the given points. B
a. (2, 1, 1) and (3, 1, 4)
b. (2, 1, 0, 3) and (3, 2, 1, 1) R
P

4.1.16 [SM] Find the points that divide the line segment D
S
joining the given points into three equal parts. A
a. (2, 4, 1) and (1, 1, 7)
b. (1, 1, 5) and (4, 2, 1)
4.1.25 [SM] Prove that the line segments joining the mid-
points of opposite sides of a quadrilateral bisect each other.
4.1.17 [SM] Given the points P and Q, and the real number

r, determine the point R such that P R = rP Q. Make a rough
sketch to illustrate the idea. 4.1.26 [SM] A median of a triangle is a line segment from
a vertex to the midpoint of the opposite side. Prove that the
a. P (1, 4, 5) and Q(3, 1, 4); r = 14
three medians of any triangle intersect at a common point
b. P (2, 1, 1, 6) and Q(8, 7, 6, 0); r = 13 G that is two-thirds of the distance from each vertex to the
c. P (2, 1, 2) and Q(3, 1, 4); r = 43 midpoint of the opposite side. (The point G is called the
centroid of the triangle.)

4.1.18 [SM] A set of points in Rn is collinear if they all line


on one line. 4.1.27 [JH] Does any vector have length zero except a zero
vector? (If yes, produce an example. If no, prove it.)
a. By considering directed line segments, give a general
method for determining whether a given set of three
points is collinear. 4.1.28 [JH] Show that if ~v 6= ~0 then ~v /k~v k has length one.
b. Determine whether the points P (1, 2, 2, 1), Q(4, 1, 4, 2), What if ~v = ~0?
and R(5, 4, 2, 1) are collinear. Show how you decide.

24
CHAPTER 4. VECTOR GEOMETRY 4.2. DOT PRODUCT AND PROJECTIONS
4.1.29 [JH] Show that if r 0 then r~v is r times as long as 4.2 Dot Product and Projections
~v . What if r < 0?

4.2.1 [GHC] Find the dot product of the given vectors.


4.1.30 [SM]
Prove, as a consequence of the triangle inequal- a. ~u = (2, 4), ~v = (3, 7)
ity, that k~xk k~y k k~x ~y k.
b. ~u = (1, 1, 2), ~v = (2, 5, 3)
c. ~u = (1, 1), ~v = (1, 2, 3)
d. ~u = (1, 2, 3), ~v = (0, 0, 0)

4.2.2 [SM] Determine whether the given pair of vectors is


orthogonal.

a. (1, 3, 2), (2, 2, 2) c. (2, 1, 1), (1, 4, 2)


b. (3, 1, 7), (2, 1, 1) d. (4, 1, 0, 2), (1, 4, 3, 0)

4.2.3 [GHC] A vector ~v is given. Give two vectors that are


orthogonal to ~v .

a. ~v = (4, 7) c. ~v = (1, 1, 1)
b. ~v = (3, 5) d. ~v = (1, 2, 3)

4.2.4 [SM] Determine all values of k for which the vectors


are orthogonal.

a. (3, 1), (2, k) c. (1, 2, 3), (3, k, k)


b. (3, 1), (k, k 2 ) d. (1, 2, 3), (k, k, k)

4.2.5 [GHC] Find the measure of the angle between the two
vectors in both radians and degrees.
a. ~u = (1, 1), ~v = (1, 2)
b. ~u = (2, 1), ~v = (3, 5)
c. ~u = (8, 1, 4), ~v = (2, 2, 0)
d. ~u = (1, 7, 2), ~v = (4, 2, 5)

4.2.6 [SM] Determine the angle (in radians) between the


vectors ~a and ~b given.
a. ~a = (2, 1, 4) and ~b = (4, 2, 1)
b. ~a = (1, 2, 1) and ~b = (3, 1, 0)
c. ~a = (5, 1, 1, 2) and ~b = (2, 3, 2, 1)

4.2.7 [MH] Suppose that ~u and ~v are two vectors


in the xy-plane with directions as given in the diagram
and such that ~u has length 2 and ~v has length 3.
y


u
32
x
28


v

25
CHAPTER 4. VECTOR GEOMETRY 4.2. DOT PRODUCT AND PROJECTIONS
a. Find ~u ~v . b. Show that u~1 = (1, 2, 3), u~2 = (1, 2, 1), and u~3 =
b. Find k~u + ~v k. (8, 2, 4) are orthogonal to each other, and write w ~ =
(13, 4, 7) as a linear combination of u~1 , u~2 , and u~3 .

4.2.8 [YL] Given ~u, ~v R2 where ~u = (1, 1), ||~v || = 1 and


the angle between ~u and ~v is /4. 4.2.15 [SM] Verify the triangle inequality and the Cauchy-
a. Determine ~u ~v . Schwartz inequality for the given vectors.
b. Determine ~v . a. ~x = (4, 3, 1) and ~y = (2, 1, 5)
b. ~x = (1, 1, 2) and ~y = (3, 2, 4)
4.2.9 [SM] Find the angle between each pair of vectors.
a. ~u = (1, 4, 2) and ~v = (3, 1, 4) in R3 4.2.16 Cauchy-Schwartz Inequality[YL] Prove without
b. ~u = (2, 4, 0, 1, 3) and ~v = (1, 1, 4, 2, 0) in R5 assuming that the law of cosine holds in Rn : If ~u, ~v Rn then
|~u ~v | k~ukk~v k.
4.2.10 [SM] Determine
a. proj(2,3,2) ((4, 1, 3)) and perp(2,3,2) ((4, 1, 3)) 4.2.17 [SM] Consider the following statement: If ~a ~b = ~a ~c
b. proj(1,1,2) ((4, 1, 2)) and perp(1,1,2) ((4, 1, 2)) then ~b = ~c.
c. proj(2,1,1) ((5, 1, 3)) and perp(2,1,1) ((5, 1, 3)) a. If the statement is true, prove it. If the statement is false,
d. proj(1,2,1,3) ((2, 1, 2, 1)) and provide a counterexample.
perp(1,2,1,3) ((2, 1, 2, 1)) b. If we specify ~a 6= ~0, does that change the result?

4.2.11 [SM] For each of the given pairs of vectors ~a, ~b, check 4.2.18 [SM] Prove the parallelogram law for the norm:
that ~a is a unit vector, determine proj~a (~b) and perp~a (~b), and 2 2 2
k~a + ~bk + k~a ~bk = 2k~ak + 2k~bk
2
~ ~
check your results by verifying that proj~a (b) + perp~a (b) = b ~
and ~a perp~a (~b) = 0 in each case. for all vectors in Rn .
a. ~a = (0, 1) and ~b = (3, 5)
4.2.19 [MH] Let ~u and ~v be vectors in Rn .
b. ~a = 35 , 45 and ~b = (4, 6)

a. Simplify
c. ~a = (0, 1, 0) and ~b = (3, 5, 2) (~u + ~v ) (~u ~v )
d. ~a = 13 , 32 , 23 and ~b = (4, 1, 3)


b. Use your previous result to show that the parallelogram


4.2.12 [SM] Consider the force represented by the vector defined by ~u and ~v is a rhombus (i.e. has four sides of
F~ = (10, 18, 6), and let ~u = (2, 6, 3). equal length) if and only if its diagonals are perpendicu-
a. Determine a unit vector in the direction of ~u. lar.
Hint: Express the diagonals as vectors in terms of ~u and
b. Determine the projection of F~ onto ~u.
~v . A drawing may help.
c. Determine the component of F~ perpendicular to ~u.

4.2.20 [JH]
4.2.13 [SM] Let ~v be any non-zero vector in R2 , and v be
the unit vector in its direction. a. Find the angle between the diagonal of the unit square
in R2 and one of the axes.
a. Show that v can be written as v = (cos , sin ), where
is the angle from the positive x-axis to ~v . Also show b. Find the angle between the diagonal of the unit cube in
that ~v = k~v k(cos , sin ). R3 and one of the axes.
b. Prove the formula cos( ) = cos cos + sin sin c. Find the angle between the diagonal of the unit cube in
by considering the dot product of the two unit vectors Rn and one of the axes.
e~a = (cos , sin ) and e~b = (cos , sin ). d. What is the limit, as n goes to , of the angle between
the diagonal of the unit cube in Rn and one of the axes?
4.2.14 [SM]
a. Let u~1 , u~2 , and u~3 be non-zero vectors orthogonal to each 4.2.21 [JH] Is any vector orthogonal to itself?
other, and let w ~ = au~1 + bu~2 + cu~3 . Show that
w
~ u~1 w
~ u~2 w
~ u~3 4.2.22 [JH] Describe the algebraic properties of dot product.
a= 2, b= 2, c= 2
ku~1 k ku~2 k ku~3 k

26
CHAPTER 4. VECTOR GEOMETRY 4.3. CROSS PRODUCT
a. Is it right-distributive over addition: (~u + ~v ) w ~ + 4.3
~ = ~u w Cross Product
~v w?
~
b. Is it left-distributive (over addition)? 4.3.1 [GHC] Vectors ~u and ~v are given. Compute ~u ~v and
c. Does it commute? show this is orthogonal to both ~u and ~v .
d. Associate? a. ~u = (3, 2, 2), ~v = (0, 1, 5)
e. How does it interact with scalar multiplication? b. ~u = (5, 4, 3), ~v = (2, 5, 1)
c. ~u = (1, 0, 1), ~v = (5, 0, 7)
d. ~u = (1, 5, 4), ~v = (2, 10, 8)
4.2.23 [JH] Suppose that ~u ~v = ~u w ~ and ~u 6= ~0. Must
~v = w?
~ e. ~u = ~i, ~v = ~j
f. ~u = ~i, ~v = ~k
4.2.24 [JH] Show that if ~x ~y = 0 for every ~y then ~x = ~0. g. ~u = ~j, ~v = ~k

4.2.25 [JH] Give a simple necessary and sufficient condition 4.3.2 [SM] Calculate the following cross products.
to determine whether the angle between two vectors is acute, a. (1, 5, 2) (2, 1, 5)
right, or obtuse. b. (2, 3, 5) (4, 2, 7)
c. (1, 0, 1) (0, 4, 5)
4.2.26 [JH] Generalize to Rn the converse of the
Pythagorean Theorem, that if ~u and ~v are perpendicular then
2 2 2
k~u + ~v k = k~u k + k~v k . 4.3.3 [GHC] The magnitudes of vectors ~u and ~v in R3 are
given, along with the angle between them. Use this infor-
mation to find the magnitude of ~u ~v .
4.2.27 [JH] Show that k~u k = k~v k if and only if ~u + ~v and
~u ~v are perpendicular. Give an example in R2 . a. k~uk = 2, k~v k = 5, = 30
b. k~uk = 3, k~v k = 7, = /2
c. k~uk = 3, k~v k = 4, =
4.2.28 [JH] Prove that, where ~u, ~v Rn are nonzero vectors,
the vector d. k~uk = 2, k~v k = 5, = 5/6
~u ~v
+
k~u k k~v k
4.3.4 [GHC] Find a unit vector orthogonal to both ~u and
bisects the angle between them. Illustrate in R2 . ~v .
a. ~u = (1, 1, 1), ~v = (2, 0, 1)
4.2.29 [JH] Show that the inner product operation is lin- b. ~u = (1, 2, 1), ~v = (3, 2, 1)
~ Rn and k, m R, ~u (k~v + mw)
ear : for ~u, ~v , w ~ =
c. ~u = (5, 0, 2), ~v = (3, 0, 7)
k(~u ~v ) + m(~u w).
~
d. ~u = (1, 2, 1), ~v = (2, 4, 2)

4.2.30 [SM] An altitude of a triangle is a line segment from


a vertex that is orthogonal to the opposite side. Prove that 4.3.5 [SM] Let p~ = (1, 4, 2), ~q = (3, 1, 1), and ~r =
the three altitudes of a triangle intersect at a common point. (2, 3, 1). Check by calculation that the following general
(This point is called the orthocenter of the triangle. properties hold.
a. p~ p~ = ~0
4.2.31 [SM] Use dot products to prove the Theorem of b. p~ ~q = ~q p~
Thales: If A and B are endpoints of a diameter of a circle, c. p~ 3~r = 3(~ p ~r)
and C is any other point on the circle, then angle ACB is d. p~ (~q + ~r) = p~ ~q + p~ ~r
a right angle.
e. p~ (~q ~r) 6= (~
p ~q) ~r
C

4.3.6 [GHC] Find the area of the triangle with the given
vertices.
A B a. (0, 0, 0), (1, 3, 1) and (2, 1, 1).
O
b. (5, 2, 1), (3, 6, 2) and (1, 0, 4).
c. (1, 1), (1, 3) and (2, 2).
d. (3, 1), (1, 2) and (4, 3).

27
CHAPTER 4. VECTOR GEOMETRY 4.4. LINES
4.3.7 [SM] Calculate the area of the parallelogram deter- 4.4 Lines
mined by the following vectors. (Hint: For the vectors in R2 ,
think of them as vectors in R3 by letting z = 0.)
4.4.1 [SM] Write the vector equations of the line passing
a. (1, 2, 1) and (2, 3, 1) through the given point with the given direction vector.
b. (1, 0, 1) and (1, 1, 4) a. point (3, 4), direction vector (5, 1)
c. (1, 2) and (2, 5) b. point (2, 0, 5), direction vector (4, 2, 11)
d. (3, 1) and (4, 3) c. point (4, 0, 1, 5, 3), direction vector (2, 0, 1, 2, 1)

4.3.8 [SM] Find the volume of the parallelepiped determined


4.4.2 [SM] Write a vector equation for the line that passes
by the following vectors.
through the given points.
a. (4, 1, 1), (1, 5, 2), and (1, 1, 6)
a. (1, 2) and (2, 3)
b. (2, 1, 2), (3, 1, 2), and (0, 2, 5)
b. (4, 1) and (2, 1)
c. (1, 3, 5) and (2, 1, 0)
4.3.9 [MB] Given three vectors ~u = (2, 1, 3), ~v =
d. 21 , 14 , 1 and 1, 1, 13
 
(1, 1, 1), and w ~ = (0, 1, 1), find the volume of the
parallelepiped defined by ~u, 2~v , and 3w.
~ e. (1, 0, 2, 5) and (3, 2, 1, 2)

4.3.10 [YL] Given A(1, 0, 1), B(0, 1, 2) and C(3, 2, 1). 4.4.3 [GHC] Write the vector, parametric and symmetric
a. Compute projAB ~ ~
AC (2, 3, 4). equations of the lines described.
b. Compute the area of the triangle defined by A, B, C. a. Passes through P = (2, 4, 1), parallel to d~ = (9, 2, 5).
c. Compute the volume of the parallelepiped defined by AC, ~ b. Passes through P = (6, 1, 7), parallel to d~ = (3, 2, 5).
~ and ~v = (1, 0, 0).
AB c. Passes through P = (2, 1, 5) and Q = (7, 2, 4).
d. Passes through P = (1, 2, 3) and Q = (5, 5, 5).
4.3.11 [YL] If ~u, ~v , w ~ R3 and ~u (~v w)
~ = 2 then evaluate e. Passes through P = (0, 1, 2) and orthogonal to both d~1 =
(2, 1, 7) and d~2 = (7, 1, 3).
a. ~u (~v ~v )
f. Passes through P = (5, 1, 9) and orthogonal to both d~1 =
b. (3~v ) ((5~u) w)
~ (1, 0, 1) and d~2 = (2, 0, 3).
c. (~u ~v ) (~v w)
~ g. Passes through the point of intersection and orthogonal
of both lines, where ~x = (2, 1, 1) + t(5, 1, 2) and
4.3.12 [GHC] Show, using the definition of the cross prod- ~x = (2, 1, 2) + t(3, 1, 1).
uct, that ~u ~u = ~0. h. Passes through the point of intersection and orthogonal
to both
lines, where
4.3.13 [MH] Consider the three vectors ~u = (4, 1, 5), x = t
x = 2 + t

~v = (1, 4, 1) and w
~ = (1, 1, 2). ~x = y = 2 + 2t and ~x = y = 2 t .
a. Compute the scalar triple product of ~u, ~v and w.
~

z =1+t z = 3 + 2t

b. What can you deduce about the vectors ~u, ~v and w,~ sup- i. Passes through P = (1, 1), parallel to d~ = (2, 3).
posing they have the same initial point?
j. Passes through P = (2, 5), parallel to d~ = (0, 1).

a b c
4.3.14 [MH] Let A = d e f such that det A = 10. Let 4.4.4 [SM] Find the equations (in vector form and para-
g h i metric form) of the line passing through the given point and
~u, ~v and w
~ be the columns of A. Find ~u (~v w).
~ parallel to the given vector.
a. A(1, 1, 1), d~ = (2, 3, 1)
4.3.15 [GHC] Show, using the definition of the cross prod- b. B(2, 4, 5), d~ = 3i j 2k
uct, that ~u (~u ~v ) = 0; that is, that ~u is orthogonal to the
cross product of ~u and ~v .
4.4.5 [SM] Find the equations of the lines through
4.3.16 [YL] Prove or disprove: If ~u, ~n , ~n R such that P (3, 4, 7) which are parallel to the coordinate axes.
1 2
3

~n1 is not a scalar multiple of ~n2 then


proj~n1 (proj~n1 ~n2 (~u)) = ~0 4.4.6 [SM] Find the parametric equations of the line through
the given point and parallel to the line with given equations

28
CHAPTER 4. VECTOR GEOMETRY 4.4. LINES

a. A(0, 0, 2), ~x = (1, 2, 1) + t(2, 3, 3), t R 1 0
1 0

1 + t 1 t R , 3 + s 1 sR
x = 1 + 2t
2 1 2 2

b. B(1, 0, 0), y = 1 + 3t , t R

z = 1 2t

4.4.12 [SM] For the given point and line, find by projection
the point on the line that is closest to the given point, and
4.4.7 [JH] Does (1, 0, 2, 1) lie on the line through (2, 1, 1, 0)
use perp to find the distance from the point to the line.
and (5, 10, 1, 4)?
a. point (0, 0), line ~x = (1, 4) + t(2, 2), t R
b. point (2, 5), line ~x = (3, 7) + t(1, 4), t R
4.4.8 [SM] Do the lines ~x = (1, 0, 1) + t(1, 1, 1), t R and
c. point (1, 0, 1), line ~x = (2, 2, 1) + t(1, 2, 1), t R
~x = (2, 3, 4) + s(0, 1, 2), s R have a point of intersection?
d. point (2, 3, 2), line ~x = (1, 1, 1) + t(1, 4, 1), t R

4.4.9 [SM] Determine the point of intersections (if any) for


each pair of lines. 4.4.13 [GHC] Find the distance from the point to the line.
a. ~x = (1, 2) + t(3, 5), t R and
~x = (3, 1) + s(4, 1), s R a. P = (1, 1, 1), ~x = (2, 1, 3) + t(2, 1, 2)
b. ~x = (2, 3, 4) + t(1, 1, 1), t R and b. P = (2, 5, 6), ~x = (1, 1, 1) + t(1, 0, 1)
~x = (3, 2, 1) + s(3, 1, 1), s R c. P = (0, 3), ~x = (2, 0) + t(1, 1)
c. ~x = (3, 4, 5) + t(1, 1, 1), t R and d. P = (1, 1), ~x = (4, 5) + t(4, 3)
~x = (2, 4, 1) + s(2, 3, 2), s R
d. ~x = (1, 0, 1) + t(3, 1, 2), t R and
~x = (5, 0, 7) + s(2, 2, 2), s R 4.4.14 [MB] Using projections, determine the distance from
the point A(2, 3) to the line x y = 0.

4.4.10 [GHC] Determine if the described lines are the same


4.4.15 [YL] Given E : x 2y = 3 and P = (1, 1).
line, parallel lines, intersecting or skew lines. If intersecting,
give the point of intersection. a. Find the closest point on E from P .
a. ~x = (1, 2, 1) + t(2, 1, 1), b. Find the shortest distance from P to E.
~x = (3, 3, 3) + t(4, 2, 2).
b. ~x = (2, 1, 1) + t(5, 1, 3), 4.4.16 [GHC] Find the distance between the two lines.
~x = (14, 5, 9) + t(1, 1, 1). a. ~x = (1, 2, 1) + t(2, 1, 1) and ~x = (3, 3, 3) + t(4, 2, 2).
c. ~x = (3, 4, 1) + t(2, 3, 4), b. ~x = (0, 0, 1) + t(1, 0, 0) and ~x = (0, 0, 3) + t(0, 1, 0).
~x = (3, 3, 3) + t(3, 2, 4).
d. ~x = (1, 1, 1) + t(3, 1, 3),
~x = (7, 3, 7) + t(6, 2, 6). 4.4.17 [MB] Consider the points P (1, 0, 1), Q(2, 1, 2),
and R(3, 2, 1), and the vector ~v = (4, 0, 1) in R3 .

x = 1 + 2t
x = 3 t

e. ~x = y = 3 2t and ~x = y = 3 + 5t a. Consider L1 , the line passing through P and Q, and L2
the line passing through R and parallel to ~v . Determine
z=t z = 2 + 7t

whether these lines are parallel, intersect, or are skew.
x = 1.1 + 0.6t
x = 3.11 + 3.4t
b. If L1 and L2 intersects, find their point of intersection,
f. ~x = y = 3.77 + 0.9t and ~x = y = 2 + 5.1t then find the distance between the two lines.

z = 2.3 + 1.5t z = 2.5 + 8.5t



x = 0.2 + 0.6t = 0.86 + 9.2t
x
g. ~x = y = 1.33 0.45t and ~x = y = 0.835 6.9t

z = 4.2 + 1.05t z = 3.045 + 16.1t


x = 0.1 + 1.1t
x = 4 2.1t

h. ~x = y = 2.9 1.5t and ~x = y = 1.8 + 7.2t

z = 3.2 + 1.6t z = 3.1 + 1.1t

4.4.11 [JH] Find the intersection of the pair of lines, if


possible.

29
CHAPTER 4. VECTOR GEOMETRY 4.5. PLANES
4.5 Planes l. Contains
the point (4, 1, 1) and is orthogonal to the line
x = 4 + 4t

4.5.1 [SM] Find the scalar equation of the plane containing ~x = y = 1 + 1t

the given point with the given normal.
z = 1 + 1t
a. point (1, 2, 3), normal (2, 4, 1) m. Contains the point (4, 7, 2) and is parallel to the plane
b. point (2, 5, 4), normal (3, 0, 5) 3(x 2) + 8(y + 1) 10z = 0.
c. point (1, 1, 1), normal (3, 4, 1) n. Contains the point (1, 2, 3) and is parallel to the plane
x = 5.

4.5.2 [SM] Find the equation of the plane containing


A(2, 4, 1) and parallel to the plane 2x + 3y 5z = 6. 4.5.6 [SM] Determine the scalar equation of the plane that
contains the following points.
a. (2, 1, 5), (4, 3, 2), (2, 6, 1)
4.5.3 [SM] Find an equation for the plane through the given
b. (3, 1, 4), (2, 0, 2), (1, 4, 1)
point and parallel to the given plane.
c. (1, 4, 2), (3, 1, 1), (2, 3, 1)
a. point (1, 3, 1), plane 2x1 3x2 + 5x3 = 17
b. point (0, 2, 4), plane x2 = 0
4.5.7 [SM] Determine the scalar equation of the plane with
the given vector equation.
4.5.4 [GHC] Give any two points in the given plane.
a. ~x = (1, 4, 7) + s(2, 3, 1) + t(4, 1, 0), s, t R
a. 2x 4y + 7z = 2 c. x = 2 b. ~x = (2, 3, 1) + s(1, 1, 0) + t(2, 1, 2), s, t R
b. 3(x+2)+5(y9)4z = 0 d. 4(y + 2) (z 6) = 0 c. ~x = (1, 1, 3) + s(2, 2, 1) + t(0, 3, 1), s, t R

4.5.8 [JH]
a. Describe the plane through (1, 1, 5, 1), (2, 2, 2, 0), and
4.5.5 [GHC] Give the equation of the described plane in
(3, 1, 0, 4).
standard and general forms.
b. Is the origin in that plane?
a. Passes through (2, 3, 4) and has normal vector
~n = (3, 1, 7).
b. Passes through (1, 3, 5) and has normal vector 4.5.9 [JH] Describe the plane that contains this point and
~n = (0, 2, 4). line.
c. Passes through the points (1, 2, 3), (3, 1, 4) and (1, 0, 1).
2 1 1

0 0 + 1 t t R

3 4 2

d. Passes through the points (5, 3, 8), (6, 4, 9) and (3, 3, 3).
e. Contains the intersecting lines
~x = (2, 1, 2) + t(1, 2, 3) and ~x = (2, 1, 2) + t(2, 5, 4). 4.5.10 [SM] Determine a normal vector for the plane or the
f. Contains the intersecting lines hyperplane.
~x = (5, 0, 3) + t(1, 1, 1) and ~x = (1, 4, 7) + t(3, 0, 3). a. 3x1 2x2 + x3 = 7 in R3
g. Contains the parallel lines b. 4x1 + 3x2 5x3 6 = 0 in R3
~x = (1, 1, 1) + t(1, 2, 3) and ~x = (1, 1, 2) + t(1, 2, 3).
c. x1 x2 + 2x3 3x4 = 5 in R4
h. Contains the parallel lines
~x = (1, 1, 1) + t(4, 1, 3) and ~x = (2, 2, 2) + t(4, 1, 3).
4.5.11 [SM] Determine the scalar equation of the hyperplane
the point (2, 6, 1) and the line
i. Contains
passing through the given point with the given normal.
x = 2 + 5t

~x = y = 2 + 2t a. point (1, 1, 1, 2), normal (3, 1, 4, 1)

z = 1 + 2t
b. point (2, 2, 0, 1), normal (0, 1, 3, 3)
j. Contains
the point (5, 7, 3) and the line
x = t
4.5.12 [SM] Determine the point of intersection of the given
~x = y=t line and plane.

z=t a. ~x = (2, 3, 1)+t(1, 2, 4), t R, and 3x1 2x2 +5x3 = 11

k. Contains the point (5, 7, 3) and is orthogonal to the line


~x = (4, 5, 6) + t(1, 1, 1). b. ~x = (1, 1, 2) + t(1, 1, 2), t R, and 2x1 + x2 x3 = 5

30
CHAPTER 4. VECTOR GEOMETRY 4.5. PLANES
4.5.13 [GHC] Find the point of intersection between the 4.5.19 [GHC] Give the equation of the line that is the in-
line and the plane. tersection of the given planes.
a. line: (1, 2, 3) + t(3, 5, 1), plane: 3x 2y z = 4 a. p1 : 3(x 2) + (y 1) + 4z = 0, and
b. line: (1, 2, 3) + t(3, 5, 1), plane: 3x 2y z = 4 p2 : 2(x 1) 2(y + 3) + 6(z 1) = 0.
c. line: (5, 1, 1) + t(2, 2, 1), plane: 5x y z = 3 b. p1 : 5(x 5) + 2(y + 2) + 4(z 1) = 0, and
d. line: (4, 1, 0) + t(1, 0, 1), plane: 3x + y 2z = 8 p2 : 3x 4(y 1) + 2(z 1) = 0.

4.5.20 [JH] Find the intersection of these planes.


4.5.14 [JH] Find the intersection of the line and plane, if
possible. 1 0


2

1

0

0

1 t + 1 s t, s R

1 3

0 + t 1 t R , s 1 + w 4 s, w R


1 1 2 1

1 0 2

1 + 3 k + 0 m k, m R

0 0 4

4.5.15 [SM] Given the plane 2x1 x2 + 3x3 = 5, for each
of the following lines, determine if the line is parallel to the
plane, orthogonal to the plane, or neither parallel nor orthog-
4.5.21 [SM] Determine a vector equation of the line of in-
onal. If the answer is neither, determine the angle between
tersection of the given planes.
the direction vector of the line and the normal vector of the
plane. a. x + 3y z = 5 and 2x 5y + z = 7
a. ~x = (3, 0, 4) + t(1, 1, 1), t R b. 2x 3z = 7 and y + 2z = 4
b. ~x = (1, 1, 2) + t(2, 1, 3), t R
c. ~x = (3, 0, 0) + t(1, 1, 2), t R 4.5.22 [SM] In each case, determine whether the given pair
of lines has a point of intersection; if so, determine the scalar
d. ~x = (1, 1, 2) + t(4, 2, 6), t R
equation of the plane containing the lines, and if not, deter-
e. ~x = t(0, 3, 1), t R mine the distance between the lines.
a. ~x = (1, 3, 1) + s(2, 1, 1) and ~x = (0, 1, 4) + t(3, 0, 1),
4.5.16 [GHC] Find the distances. s, t R
a. The distance from the point (1, 2, 3) to the plane b. ~x = (1, 3, 1) + s(2, 1, 1) and ~x = (0, 1, 7) + t(3, 0, 1),
3(x 1) + (y 2) + 5(z 2) = 0. s, t R
b. The distance from the point (2, 6, 2) to the plane c. ~x = (2, 1, 4) + s(2, 1, 2) and ~x = (2, 1, 5) + t(1, 3, 1),
2(x 1) y + 4(z + 1) = 0. s, t R
c. The distance between the parallel planes d. ~x = (0, 1, 3) + s(1, 1, 4) and ~x = (0, 1, 5) + t(1, 1, 2),
x + y + z = 0 and s, t R
(x 2) + (y 3) + (z + 4) = 0
d. The distance between the parallel planes 4.5.23 [YL] Given E : x 2y = 3 and P = (1, 1, 1).
2(x 1) + 2(y + 1) + (z 2) = 0 and a. Find the equation of a line orthogonal to E that passes
2(x 3) + 2(y 1) + (z 3) = 0 through P .
b. Find the closest point on E from P .
4.5.17 [SM] Use a projection (onto or perpendicular to) to c. Find the shortest distance from P to E.
find the distance from the point to the plane.
a. point (2, 3, 1), plane 3x1 x2 + 4x3 = 5 4.5.24 [MB] Given the following: parametric equations of
b. point (2, 3, 1), plane 2x1 3x2 5x3 = 5 two skew lines L1 and L2 , two nonparallel planes P1 and P2 ,
coordinates of the point Q, that does not lie on L1 , L2 , P1 ,
c. point (0, 2, 1), plane 2x1 x3 = 5
nor P2 .
d. point (1, 1, 1), plane 2x1 x2 x3 = 4
a. Does a line that is perpendicular to L1 and passes
through Q exist? If so, is it unique? How would the
4.5.18 [SM] For the given point and hyperplane in R4 , de- equation of such a line (if it exist), be obtained with the
termine by a projection the point in the hyperplane that is provided information?
closest to the given point. b. Does a line parallel to the intersection of the planes P1
a. point (2, 4, 3, 4), hyperplane 3x1 x2 + 4x3 + x4 = 0 and P2 exist? If so, is it unique? How would the equation
such a line (if it exist), be obtained with the provided
b. point (1, 3, 2, 1), hyperplane x1 + 2x2 + x3 x4 = 4 information?

31
CHAPTER 4. VECTOR GEOMETRY 4.5. PLANES
c. Does a plane containing L1 and L2 exists? Is so, is it
unique? How would the equation of such a plane (if it
exists), be obtained with the provided information?

4.5.25 [JH] Show that if a vector is perpendicular to each of


two others then it is perpendicular to each vector in the plane
they generate. (Remark. They could generate a degenerate
plane (a line or a point) but the statement remains true.)

32
Chapter 5

Vector Spaces

5.1 Introduction to Vector Spaces 5.1.5 [JH] For each, list three elements and then show it is
a vector space.
a. The set of three-component row vectors with their usual
5.1.1 [JH] Name the zero vector for each of these vector
operations.
spaces.
b. The set
a. The space of degree three polynomials under the natural
operations.

(x, y, z, w) R4 x + y z + w = 0

b. The space of 23 matrices.
under the operations inherited from R4 .
c. The space { f : [0, 1] R | f is continuous}.
d. The space of real-valued functions of one natural number
variable. 5.1.6 [JH] Show that the following are not vector spaces.
a. Under the operations inherited from R3 , this set
5.1.2 [JH] Find the additive inverse, in the vector space, of

(x, y, z) R3 x + y + z = 1

the vector.
a. In P3 , the vector 3 2x + x2 . b. Under the operations inherited from R3 , this set
b. In the space M22 ,
(x, y, z) R3 x2 + y 2 + z 2 = 1

 
1 1
. c. Under the usual matrix operations,
0 3
  
a 1
x
c. In { ae + be x
| a, b R }, the space of functions of the a, b, c R
b c
real variable x under the natural operations, the vector
3ex 2ex . d. Under the usual polynomial operations,

a0 + a1 x + a2 x2 a0 , a1 , a2 R+

5.1.3 [JH] For each, list three elements and then show it is
a vector space. where R+ is the set of reals greater than zero
a. The set of linear polynomials P1 = e. Under the inherited operations,
{ a0 + a1 x | a0 , a1 R } under the usual polynomial
(x, y) R2 x + 3y = 4, 2x y = 3 and 6x + 4y = 10

addition and scalar multiplication operations.
b. The set of linear polynomials
{ a0 + a1 x | a0 2a1 = 0}, under the usual poly-
nomial addition and scalar multiplication operations. 5.1.7 [JH] Is the set of rational numbers a vector space over
R under the usual addition and scalar multiplication opera-
tions?
5.1.4 [JH] For each, list three elements and then show it is
a vector space. 5.1.8 [JH] Prove that the following is not a vector space:
a. The set of 22 matrices with real entries under the usual the set of two-tall column vectors with real entries subject to
matrix operations. these operations.
b. The set of 22 matrices with real entries where the 2, 1     
x1 x2 x1 x2
    
x rx
entry is zero, under the usual matrix operations. + = r =
y1 y2 y1 y2 y ry

33
CHAPTER 5. VECTOR SPACES 5.1. INTRODUCTION TO VECTOR SPACES
5.1.9 [JH] Prove or disprove that R3 is a vector space under 5.1.14 [JH] Prove or disprove that the following is a vector
these operations. space: the set of polynomials of degree greater than or equal
to two, along with the zero polynomial.

x1 x2 0 x rx
a. y1 + y2 = 0 and r y = ry
z1 z2 0 z rz 
5.1.15 [JH] Is (x, y) x, y R a vector space under
x1 x2 0 x 0 these operations?
b. y1 + y2 = 0 and r y = 0
a. (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ) and r (x, y) =
z1 z2 0 z 0
(rx, y)
b. (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ) and r (x, y) =
5.1.10 [JH] For each, decide if it is a vector space; the (rx, 0)
intended operations are the natural ones.
a. The set of diagonal 22 matrices
5.1.16 [JH] Prove the following:
a. For any ~v V , if w ~ V is an additive inverse of ~v , then
  
a 0
a, b R ~v is an additive inverse of w.
~ So a vector is an additive
0 b
inverse of any additive inverse of itself.
b. The set of 22 matrices b. Vector addition left-cancels: if ~v , ~s, ~t V then ~v + ~s =
   ~v + ~t implies that ~s = ~t.
x x + y
x+y y x, y R
5.1.17 [JH] Consider the set
c. (x, y, z, w) R4 x + y + w = 1


d. The set of functions { f : R R | df /dx + 2f = 0} x
y x + y + z = 1
e. The set of functions { f : R R | df /dx + 2f = 1}
z

under these operations.


5.1.11 [YL] Let V = { A | A M22 and det(A) 6= 0}
with the following operations:

x1 x2 x1 + x2 1 x rx r + 1
y1 + y2 = y1 + y2 r y = ry
A + B = AB and kA = kA z1 z2 z1 + z2 z rz
That is, vector addition is matrix multiplication and scalar Show that it is a vector space.
multiplication is the regular scalar multiplication.
a. Does V satisfy closure under vector addition? Justify.
5.1.18 [JH] The definition of vector spaces does not explic-
b. Does V contain a zero vector? If so find it. Justify.
itly say that ~0 + ~v = ~v (it instead says that ~v + ~0 = ~v ). Show
c. Does V contains an additive inverse for all of its vectors? that it must nonetheless hold in any vector space.
Justify.
d. Does V satisfy closure under scalar multiplication? Jus-
tify. 5.1.19 [JH] Prove or disprove that the following is a vector
space: the set of all matrices, under the usual operations.

5.1.12 [YL] Determine whether the following is a vector


5.1.20 [JH] In a vector space every element has an additive
space: V = A A M22 and AT = A with the follow-
inverse. Is the additive inverse unique (Can some elements
ing operations:
have two or more)?
A + B = AB and kA = kA

That is, vector addition is matrix multiplication and scalar 5.1.21 [JH] Assume that ~v V is not ~0.
multiplication is the regular scalar multiplication. Justify. a. Prove that r ~v = ~0 if and only if r = 0.
b. Prove that r1 ~v = r2 ~v if and only if r1 = r2 .
5.1.13 [JH] Show that the set R+ of positive reals is a vector c. Prove that any nontrivial vector space is infinite.
space when we interpret x + y to mean the product of x and
y (so that 2 + 3 is 6), and we interpret r x as the r-th power
of x.

34
CHAPTER 5. VECTOR SPACES 5.2. SUBSPACES
5.2 Subspaces 5.2.10 [JH] Subspaces are subsets and so we naturally con-
sider how is a subspace of interacts with the usual set oper-
ations.
5.2.1 [JH] Which of these subsets of the vector space of
a. If A, B are subspaces of a vector space, are their inter-
22 matrices are subspaces under the inherited operations?
section A B be a subspace?
Justify.
   b. Is the union A B a subspace?
a 0
a. a, b R c. If A is a subspace, is its complement be a subspace?
0 b
  
a 0
b. a+b=0
0 b 5.2.11 [JH] Is the relation is a subspace of transitive? That
  
a 0 is, if V is a subspace of W and W is a subspace of X, must
c. a+b=5
0 b V be a subspace of X?
  
a c
d. a + b = 0, c R
0 b

5.2.2 [JH] Is this a subspace of P2 :


a0 + a1 x + a2 x2 a0 + 2a1 + a2 = 4 ? Justify.


5.2.3 [JH] The solution set of a homogeneous linear system


is a subspace of Rn where the system has n variables. What
about a non-homogeneous linear system; do its solutions form
a subspace (under the inherited operations)?

5.2.4 [JH]
a. Prove that every point, line, or plane thru the origin in
R3 is a subspace of R3 under the inherited operations.
b. What if it doesnt contain the origin?

5.2.5 [JH] R3 has infinitely many subspaces. Do every non-


trivial space have infinitely many subspaces?

5.2.6 [JH] Is the following a subspace under the inherited


natural operations: the real-valued functions of one real vari-
able that are differentiable?

5.2.7 [JH] Determine if each is a subspace of the vector


space of real-valued functions of one real variable.
a. The even functions { f : R R | f (x) = f (x) for all x }.

b. The odd functions { f : R R | f (x) = f (x) for all x }.

5.2.8 [JH] Is R2 a subspace of R3 ?

5.2.9 [JH]
a. Give a set that is closed under scalar multiplication but
not addition.
b. Give a set closed under addition but not scalar multipli-
cation.
c. Give a set closed under neither.

35
CHAPTER 5. VECTOR SPACES 5.3. SPANNING SETS

5.3 Spanning Sets x
b. y 3x + 2y + z = 0

z

5.3.1 [JH] Determine whether the vector lies in the span of
x
the set.



y

2 1 0 c. 2x + y + w = 0 and y + 2z = 0
z
a. 0, 0 , 0



w

1 0 1


d. a0 + a1 x + a2 x2 + a3 x3 a0 + a1 = 0 and a2 a3 = 0

b. x x3 , x2 , 2x + x2 , x + x3

     
0 1 1 0 2 0 e. The set P4 in the space P4
c. , ,
4 2 1 1 2 3
f. M22 in M22

5.3.2 [JH] Which of these are members of the span


5.3.6 [JH] Show that for any subset S of a vector space,
span cos2 x, sin2 x in the vector space of real-valued
span (span (S)) = span (S). (Hint. Members of span (S)
functions of one real variable?
are linear combinations of members of S. Members of
span (span (S)) are linear combinations of linear combinations
a. f (x) = 1 c. f (x) = sin x
of members of S.)
b. f (x) = 3 + x2 d. f (x) = cos(2x)

5.3.7 [YL] Given the following two subspace of R3 : W1 =


{x | A1 x = 0} and W2 = { x | A2 x = 0} where
5.3.3 [JH] Which of these sets spans R3 ?

1 0 0

1 2 3 5 7 9
a. 0 , 2 , 0 A1 = 4 5 6 , A2 = 5 7 9 .
0 0 3 3 3 3 10 14 18


2 1 0
b. 0 , 1 , 0 Determine whether the two subspaces are equal or whether

1 0 1
one of the subspaces is contained in the other.

1 3
c. 1 , 0 5.3.8 [JH] Prove: ~v span ({~v1 , . . . , ~vn }) if and only if
0 0 span ({~v1 , . . . , ~vn }) = span ({~v , ~v1 , . . . , ~vn })


1 3 1 2
d. 0 , 1 , 0 , 1
5.3.9 [JH] Does the span of a set depend on the enclosing
1 0 0 5

space? That is, if W is a subspace of V and S is a subset
2 3 5 6 of W (and so also a subset of V ), might the span of S in W
e. 1 , 0 , 1 , 0 differ from the span of S in V ?
1 1 2 2

5.3.10 [JH] Because span of is an operation on sets we nat-


5.3.4 [JH] Express each subspace as a span of a set of vec- urally consider how it interacts with the usual set operations.
tors.

a. (a b c) a c = 0
a. If S T are subsets of a vector space, is span (S)
   span (T )?
a b
b. a+d=0 b. If S, T are subsets of a vector space, is span (S T ) =
c d
   span (S) span (T )?
a b
c. 2a c d = 0 and a + 3b = 0 c. If S, T are subsets of a vector space, is span (S T ) =
c d
span (S) span (T )?
d. a + bx + cx3 a 2b + c = 0

d. Is the span of the complement equal to the complement
e. The subset of P2 of quadratic polynomials p such that of the span?
p(7) = 0

5.3.5 [JH] Find a set that spans the given subspace.


a. The xz-plane in R3 .

36
CHAPTER 5. VECTOR SPACES 5.4. LINEAR INDEPENDENCE
5.4 Linear Independence 5.4.7 [YL] Given the vectors ~u = ( + 1, 1, ), ~v =
(, 2, 2) and w~ = ((1, 1, ).
a. For which value(s) of if any, are the vectors ~u, ~v and w ~
5.4.1 [JH] Determine whether each subset of R3 is linearly
linearly independent.
dependent or linearly independent.
b. For which value(s) of if any, span ({~u, ~v , w~ }) = R3 .
1 2 4
a. 3 , 2 , 4 ~ }) is a plane
c. For which value(s) of if any, span ({~u, ~v , w

5 4 14
through the origin.

1 2 3 d. For which value(s) of if any, span ({~u, ~v , w~ }) is a line
b. 7 , 7 , 7 through the origin.

7 7 7
e. For which value(s) of if any, the vectors ~u, ~v and w ~
generate a parallelpiped of volume 2016.
0 1
c. 0 , 0
1 4

5.4.8 [JH]
9 2 3 12 a. Show that if the set {~u, ~v , w ~ } is linearly independent
d. 9 , 0 , 5 , 12
then so is the set {~u, ~u + ~v , ~u + ~v + w
~ }.
0 1 4 1

b. What is the relationship between the linear independence
or dependence of {~u, ~v , w~ } and the independence or de-
5.4.2 [JH] Which of these subsets of P2 are linearly depen- pendence of {~u ~v , ~v w,
~ w ~ ~u }?
dent and which are independent?
a. 3 x + 9x2 , 5 6x + 3x2 , 1 + 1x 5x2

5.4.9 [JH]
b. x2 , 1 + 4x2

a. When is a one-element set linearly independent?
c. 2 + x + 7x2 , 3 x + 2x2 , 4 3x2

b. When is a two-element set linearly independent?
d. 8 + 3x + 3x2 , x + 2x2 , 2 + 2x + 2x2 , 8 2x + 5x2


5.4.10 [JH] Show that if {~x, ~y , ~z } is linearly independent


5.4.3 [JH] Prove that each set { f, g } is linearly independent then so are all of its proper subsets: {~x, ~y }, {~x, ~z }, {~y , ~z },
in the vector space of all functions from R+ to R. {~x },{~y }, {~z }. Is the converse also true?
a. f (x) = x and g(x) = 1/x
b. f (x) = cos(x) and g(x) = sin(x) 5.4.11 [JH]
c. f (x) = ex and g(x) = ln(x) a. Show that this

1 1
5.4.4 [JH] Which of these subsets of the space of real-valued S = 1 , 2
functions of one real variable are linearly dependent and which 0 0

are linearly independent?
is a linearly independent subset of R3 .
2 2 2 2
 
a. 2, 4 sin (x), cos (x) d. (1 + x) , x + 2x, 3 b. Show that
b. { 1, sin(x), sin(2x)}
 2
3
e. 0, x, x 2
c. { x, cos(x) } f. cos(2x), sin2 (x), cos2 (x)

0
is in the span of S by finding c1 and c2 giving a linear
relationship.
5.4.5 [JH] Is the xy-plane subset of the vector space R3
linearly independent? 1 1 3
c1 1 + c2 2 = 2
0 0 0
5.4.6 [YL] Let ~u = (1, , ), ~v = (2, 2, 2) and
~ = ( 2, 5 2, 2).
w Show that the pair c1 , c2 is unique.
a. For what value(s) of will {~u, ~v } be linearly dependent. c. Assume that S is a subset of a vector space and that ~v is
in span (S), so that ~v is a linear combination of vectors
b. For what value(s) of will {~u, ~v , w
~ } be linearly inde- from S. Prove that if S is linearly independent then a
pendent. linear combination of vectors from S adding to ~v is unique
(that is, unique up to reordering and adding or taking

37
CHAPTER 5. VECTOR SPACES 5.4. LINEAR INDEPENDENCE
n o
away terms of the form 0 ~s). Thus S as a spanning set ~0 . What is wrong with this argument for the if
is minimal in this strong sense: each vector in span (S) direction of that conjecture? If the union S T is
is a combination of elements of S a minimum number of linearly independent then the only solution to c1~s1 +
times (only once). + cn~sn + d1~t1 + + dm~tm = ~0 is the trivial one
d. Prove that it can happen when S is not linearly indepen- c1 = 0, . . . , dm = 0. So any member of the intersec-
dent that distinct linear combinations sum to the same tion of the spans must be the zero vector because in
vector. c1~s1 + + cn~sn = d1~t1 + + dm~tm each scalar is zero.

5.4.12 [JH] b. Give an example showing that the conjecture is false.


a. Show that any set of four vectors in R2 is linearly depen- c. Find linearly independent sets S and T so that the union
dent. of S (S T ) and T (S T ) is linearly independent,
but the union S T is not linearly independent.
b. Is this true for any set of five? Any set of three?
d. Characterize when the union of two linearly independent
c. What is the most number of elements that a linearly
sets is linearly independent, in terms of the intersection
independent subset of R2 can have?
of spans.

5.4.13 [JH] Is there a set of four vectors in R3 such that 5.4.20 [JH] With a some calculation we can get formulas to
any three form a linearly independent set? determine whether or not a set of vectors is linearly indepen-
dent.
5.4.14 [JH] a. Show that this subset of R2
a. Prove that a set of two perpendicular nonzero vectors    
a b
n ,
from R is linearly independent when n > 1. c d
b. What if n = 1?
c. Generalize to more than two vectors. is linearly independent if and only if ad bc 6= 0.
b. Show that this subset of R3

5.4.15 [JH] Show that, where S is a subspace of V , if a a b c
subset T of S is linearly independent in S then T is also d , e , f
linearly independent in V . Is the converse also true? g h i

is linearly independent iff aei + bf g + cdh hf a idb


5.4.16 [JH] Show that the nonzero rows of an echelon form gec 6= 0.
matrix form a linearly independent set. c. When is this subset of R3

a b
5.4.17 [JH] In R4 what is the largest linearly independent set d , e
you can find? The smallest? The largest linearly dependent
g h

set? The smallest?
linearly independent?
5.4.18 [JH]
a. Is the intersection of linearly independent sets indepen-
dent? Must it be?
b. How does linear independence relate to complementa-
tion?
c. Show that the union of two linearly independent sets can
be linearly independent.
d. Show that the union of two linearly independent sets need
not be linearly independent.

5.4.19 [JH]
a. We might conjecture that the union S T of linearly in-
dependent sets is linearly independent if and only if their
spans have a trivial intersection span (S) span (T ) =

38
CHAPTER 5. VECTOR SPACES 5.5. BASIS
5.5 Basis 5.5.9 [JH] Find a basis for each of these subspaces of the
space P3 of cubic polynomials.
5.5.1 [JH] Determine if each is a basis for R3 . a. The subspace of cubic polynomials p(x) such that p(7) =
0.

1 3 0 0 1 2 b. The subspace of polynomials p(x) such that p(7) = 0 and
a. h2 , 2 , 0i c. h 2 , 1 , 5i p(5) = 0.
3 1 1 1 1 0 c. The subspace of polynomials p(x) such that p(7) = 0,

1 3 0 1 1 p(5) = 0, and p(3) = 0.
b. h2 , 2i d. h 2 , 1 , 3i d. The space of polynomials p(x) such that p(7) = 0, p(5) =
3 1 1 1 0 0, p(3) = 0, and p(1) = 0.

5.5.2 [JH] Determine if each is a basis for P2 . 5.5.10 [YL] Given


a. hx2 x + 1, 2x + 1, 2x 1i
W = p(x) = a0 + a2 x2 + a3 x3 p(1) = 0

b. hx + x2 , x x2 i
a subspace of P3 .
a. Find a basis B for W.
5.5.3 [JH] Represent the vector with respect to the given
b. Find the coordinate vector of p(x) = 2 + 2x2 relative
basis.      to the basis B.
1 1 1
a. ,B=h , i R2
2 1 1
b. x2 + x3 , D = h1, 1 + x, 1 + x + x2 , 1 + x + x2 + x3 i P3 5.5.11 [JH] Can a basis contain a zero vector?

5.5.4 [JH] Find a basis for P2 , the space of all quadratic 5.5.12 [JH] Let h~1 , ~2 , ~3 i be a basis for a vector space.
polynomials. Must any such basis contain a polynomial of a. Show that hc1 ~1 , c2 ~2 , c3 ~3 i is a basis when c1 , c2 , c3 6= 0.
each degree: degree zero, degree one, and degree two? What if at least one ci is 0?
b. Prove that h~
1 , ~ 3 i is a basis where
~ 2, ~ i = ~1 + ~i .
5.5.5 [JH] Find a basis for the solution set of this system.

x1 4x2 + 3x3 x4 = 0 5.5.13 [JH] Find one vector ~v that will make each into a
2x1 8x2 + 6x3 2x4 = 0 basis for the space.
 
1
a. h , ~v i in R2
1

5.5.6 [JH] Find a basis for M22 , the space of 22 matrices. 1 0
b. h1 , 1 , ~v i in R3
0 0
5.5.7 [JH] Find a basis for each of the following. 2
c. hx, 1 + x , ~v i in P2
a. The subspace a2 x2 + a1 x + a0 a2 2a1 = a0 of P2


b. The space of three component vectors whose first and 5.5.14 [JH] Where h~1 , . . . , ~n i is a basis, show that in this
second components add to zero equation
c. This subspace of the 22 matrices
~1 + + ck ~k = ck+1 ~k+1 + + cn
c1 ~n
  
a b
c 2b = 0 each of the ci s is zero. Generalize.
0 c

5.5.15 [JH] If a subset is not a basis, can linear combinations


5.5.8 [JH] Find the span of each set (that is, find restric- be not unique? If so, must they be?
tion(s) on the coefficient of the polynomial) and then find a
basis for that span.
5.5.16 [JH]
a. { 1 + x, 1 + 2x } in P2
a. Find a basis for the vector space of symmetric 22 ma-
b. 2 2x, 3 + 4x2 in P2

trices.
b. Find a basis for the space of symmetric 33 matrices.

39
CHAPTER 5. VECTOR SPACES 5.6. DIMENSION
c. Find a basis for the space of symmetric nn matrices. 5.6 Dimension

5.6.1 [JH] Find a basis for, and the dimension of, P2 .


5.5.17 [JH] We can show that every basis for R3 contains
the same number of vectors.
a. Show that no linearly independent subset of R3 contains 5.6.2 [JH] Find a basis for, and the dimension of, M22 , the
more than three vectors. vector space of 22 matrices.
b. Show that no spanning subset of R3 contains fewer than
three vectors. Hint: recall how to calculate the span of 5.6.3 [JH] Find a basis for, and the dimension of, the solu-
a set and show that this method cannot yield all of R3 tion set of this system.
when we apply it to fewer than three vectors.
x1 4x2 + 3x3 x4 = 0
2x1 8x2 + 6x3 2x4 = 0
5.5.18 [JH] Find a basis for the following vector space:

x 5.6.4 [JH] Find the dimension of the vector space of matrices
y x + y + z = 1  
a b

z

c d
is a vector space under these operations.
subject to each condition.

x1 x2 x1 + x2 1 x rx r + 1 a. a, b, c, d R
y1 + y2 = y1 + y2 r y = ry b. a b + 2c = 0 and d R

z1 z2 z1 + z2 z rz
c. a + b + c = 0, a + b c = 0, and d R

5.6.5 [YL] Given



W = p(x) = a0 + a1 x + a2 x2 + a3 x3 p(1) = 0 and p(1) = 0


a subspace of P3 . Determine the dimension of W .

5.6.6 [JH] Find the dimension of each.


a. The space of cubic polynomials p(x) such that p(7) = 0.

b. The space of cubic polynomials p(x) such that p(7) = 0


and p(5) = 0.
c. The space of cubic polynomials p(x) such that p(7) = 0,
p(5) = 0, and p(3) = 0.
d. The space of cubic polynomials p(x) such that p(7) = 0,
p(5) = 0, p(3) = 0, and p(1) = 0.

 2 [JH] 2 What is the dimension


5.6.7 of the span of the set
cos , sin , cos 2, sin 2 ? This span is a subspace of
the space of all real-valued functions of one real variable.

5.6.8 [JH] What is the dimension of the vector space M35


of 35 matrices?

5.6.9 [JH] Show that this is a basis for R4 .



1 1 1 1
0 1 1 1
h
0 , 0 , 1 , 1i

0 0 0 1

40
CHAPTER 5. VECTOR SPACES 5.6. DIMENSION
5.6.10 [JH] Where S is a set, the functions f : S R form
a vector space under the natural operations: the sum f + g is
the function given by f + g (s) = f (s) + g(s) and the scalar
product is r f (s) = r f (s). What is the dimension of the
space resulting for each domain?
a. S = { 1}
b. S = { 1, 2}
c. S = { 1, . . . , n }

5.6.11 [JH] Show that any set of four vectors in R2 is linearly


dependent.

5.6.12 [JH] Show that h~ 1 , ~ 3 i R3 is a basis if and


~ 2,
only if there is no plane through the origin containing all three
vectors.

5.6.13 [JH]
a. Prove that any subspace of a finite dimensional space has
a basis.
b. Prove that any subspace of a finite dimensional space is
finite dimensional.

5.6.14 [JH] Prove that if U and W are both three-


dimensional subspaces of R5 then U W is non-trivial. State
a generalization of the above.

5.6.15 [JH]
a. Consider first how bases might be related by . Assume
that U, W are subspaces of some vector space and that
U W.
Can there exist bases BU for U and BW for W such that
BU BW ? Must such bases exist?
For any basis BU for U , must there be a basis BW for W
such that BU BW ?
For any basis BW for W , must there be a basis BU for
U such that BU BW ?
For any bases BU , BW for U and W , must BU be a subset
of BW ?
b. Is the of bases a basis? For what space?
c. Is the of bases a basis? For what space?
d. What about the complement operation?

5.6.16 [JH] Assume U and W are both subspaces of some


vector space, and that U W .
a. Prove that dim(U ) dim(W ).
b. Prove that equality of dimension holds if and only if U =
W.

41
Chapter 6

Applications

6.1 The Simplex Method b. Minimize Z = 2x + y + z subject to the constraints

x+z 2
6.1.1 [YL] 2x + y + z 3.
a. Maximize Z = 3x + y subject to the constraints
2x y 60
c. Minimize Z = 2x + y + 3z subject to the constraints
x + y 50.
x + 4z 2
b. Maximize Z = 2x + y + 3z subject to the constraints 2x + y + z 3
x + y + z 1.
2x y + z 100
x + y + 2z 70.
d. Minimize Z = x + y + z subject to the constraints

c. Maximize Z = 2x + y + 3z subject to the constraints x + y 50


2x y z 10 y + z 50
x + y + 2z 60 x + z 10.
x y z 10.
e. Minimize Z = 3x + y 2z subject to the constraints
d. Maximize Z = 4x + 3y + 3z subject to the constraints
2x y 60
y z 10 x + y 2z 50
2x y + 2z 60 x y z 10.
2x y z 10
x + y + 2z 30.

e. Maximize Z = 2xy +4z +2w subject to the constraints


y + z + w 30
2x y + 2z w 60
2x y z w 10.

6.1.2 [YL]
a. Minimize Z = x + y subject to the constraints
x+y 2
3x + y 4.

43
Appendix A

Answers to Exercises

Note that either a hint, a final answer or a complete solution x1 + 2x2 = 3


a.
is provided. x1 + 3x2 = 9
3x1 + 4x2 = 7
1.1.1 b.
x2 = 2
a. Yes
x1 + x2 x3 x4 = 2
b. No c.
2x1 + x2 + 3x3 + 5x4 = 7
c. Yes
x1 = 2
d. Yes x2 = 1
d.
e. No x3 = 5
f. No x4 = 3
g. Yes x1 + x3 + 7x5 = 2
e.
h. No x2 + 3x3 + 2x4 =5
i. Yes
1.1.6
j. No 2 1 7

a. 0 4 2
1.1.2 5 0 3
a. x = 1, y = 2
2 1 7
b. x = 2, y = 31 b. 5 0 3
c. x = 1, y = 0, and z = 2. 0 4 2
d. x = 1, y = 0, and z = 0.

2 1 7
c. 2 3 5
3x + y = 3 5 0 3
1.1.3 , Solution is: x = 1, i y = 0
x + 2y = 1
2 1 7

d. 0 4 2
1.1.4 5 8 1
3 4 5 7
a. 1 1 3 1 2 1 7
2 2 3 5 e. 0 2 1
5 0 3
2 5 6 2
b. 9 0 8 10 2 1 7
2 4 1 7 f. 0 4 2
0 5/2 29/2
1 3 4 5 17
c. 1 0 4 8 1
1.1.7
2 3 4 5 6
a. 2R2 R2
3 2 4
2 0 3 b. R1 + R2 R2
d.
1

c. 2R3 + R1 R1
9 8
5 7 13 d. R1 R2
e. R2 + R3 R3
1.1.5

45
APPENDIX A. ANSWERS TO EXERCISES
1.1.8 Recall that if a pair of lines share two distinct points 1.2.4
then they are the same line. Thats because two points de- a. (2, 7)
termine a line, so these two points determine each of the two b. (3, 20, 19)
lines, and so they are the same line.
Thus the lines can share one point (giving a unique solution), c. (3t + 4, 6t 6, 2, t) for all real numbers t
share no points (giving no solutions), or share at least two d. Inconsistent
points (which makes them the same line). e. (8s t + 7, 4s + 3t + 2, s, t)
for all real numbers s and t
1.1.9 Yes, this one-equation system:
f. (9t 3, 4t + 20, t)
0x + 0y = 0 for all real numbers t
is satisfied by every (x, y) R2 . 1.2.5 
1 0
1.1.10 Each equation can be represented in R2 as a line. a.
0 1
a. At least one of the line is parallel and does not lie on an 
1 0

other line. b.
0 1
b. All the lines intersect at one and only one point. 
1 3

c. All the lines are identical, i.e. they all lie on top of each c.
0 0
other.  
1 7/5
d.
1.2.1 0 0
 
a. Yes 1 0 3
e.
b. No 0 1 7
 
c. No 1 0 1
f.
0 1 5
d. Yes  
1 1 2
e. Yes g.
0 0 0
f. Yes 5
23
 
1 4
g. No h.
0 0 0
h. Yes
1 0 0
i. No i. 0 1 0
j. Yes 0 0 1
k. Yes

1 0 0
l. Yes j. 0 1 0
m. No 0 0 1

n. Yes 1 0 0
k. 0 1 0
o. Yes
0 0 1
1.2.2

1 0 0
a. Reduced row echelon form l. 0 1 0
b. Neither 0 0 1

c. Row echelon form only 1 0 0 1
d. Reduced row echelon form m. 0 1 1 1
0 0 0 0
e. Reduced row echelon form
1 0 0 5
f. Row echelon form only
n. 0 1 0 2
1.2.3 0 0 1 3

a. The solution exists but is not unique. 1 0 1 3
o. 0 1 2 4
b. A solution exists and is unique.
0 0 0 0
c. The solution exists but is not unique.
1 0 3 4
d. There might be a solution. If so, there are infinitely p. 0 1 1 3
many. 0 0 0 0

46
APPENDIX A. ANSWERS TO EXERCISES
1.2.6 b. (x, y, z, w, v) = (0, 14 + 3s, 11 + 2t + s, s, t)
a. x = 2, y = 3 where s, t R. (x, y, z, w, v) = (0, 14, 11, 0, 0),
b. x = 1, y = 4, and z = 1. (x, y, z, w, v) = (0, 14, 13, 0, 1)

1.2.7 1.2.11
a. (2, 7) a. (x1 , x2 , x3 , x4 , x5 ) =
b. (1, 2, 0) (60s 55t + 30, 79 3 s + 73
3 t 38
3 , 14s + 13t 7, s, t)
where s, t R.
c. (t + 5, 3t + 15, t) for all real numbers t
b. If s = t = 0 then (x1 , x2 , x3 , x4 , x5 ) =
d. (2, 1, 1)
(30, 383 , 7, 0, 0).
e. (1, 3, 2) If s = 0 and t = 1 then (x1 , x2 , x3 , x4 , x5 ) =
f. Inconsistent (25, 35 3 , 6, 0, 1).
g. (1, 3, 2) c. If t = 0 then s = 47 and (x1 , x2 , x3 , x4 , x5 ) =
h. 3, 12 , 1

( 30 316 4
7 , 21 , 1, 7 , 0).
i. 13 , 23 , 1


j. 19 51 11 4
 1.2.12
13 t + 13 , 13 t + 13 , t for all real numbers t
k. Inconsistent a. (x1 , x2 , x3 , x4 ) = (60t, 79
3 t, 14t, t) where t R.

l. (4, 3, 1) b. If t = 1 then (x1 , x2 , x3 , x4 ) = (60, 79


3 , 14, 1).
If t = 3 then (x1 , x2 , x3 , x4 ) = (180, 79, 42, 3).
1.2.8 1 79 14 1
c. If t = 60 then (x1 , x2 , x3 , x4 ) = (1, 180 , 60 , 60 ).
a. x1 = 1 2t; x2 = t where t R. Possible solutions:
x1 = 1, x2 = 0 and x1 = 1, x2 = 1. 1.2.13

b. x1 = 3 + 5t; x2 = t where t R. Possible solutions: 1 0 0 3
0 1 0 1
x1 = 3, x2 = 0 and x1 = 8, x2 = 1. a.

0 0 1 4
c. x1 = 1; x2 = 2.
0 0 0 0
d. x1 = 0; x2 = 1.
2x2 5x3 = 18
e. x1 = 11 + 10t; x2 = 4 + 4t; x3 = t where t R.
3x1 6x2 = 3
Possible solutions: x1 = 11, x2 = 4, x3 = 0 and b. ,
3x1 + 7x2 = 2
x1 = 1, x2 = 0 and x3 = 1.
5x1 + 10x2 + 3x3 = 7
f. x1 = 23 + 89 t; x2 = 23 59 t; x3 = t where t R. Possible (x1 , x2 , x3 ) = (3, 2, 4)
solutions: x1 = 32 , x2 = 23 , x3 = 0 and x1 = 49 , x2 =
2x2 5x3 18x4 = 0
19 , x3 = 1.
3x1 6x2 + 3x4 = 0
g. x1 = 1 s t; x2 = s; x3 = 1 2t; x4 = t where s, t R. c. ,
3x1 + 7x2 2x4 = 0
Possible solutions: x1 = 1, x2 = 0, x3 = 1, x4 = 0 and 5x1 + 10x2 + 3x3 + 7x4 = 0
x1 = 2, x2 = 1, x3 = 3, x4 = 2. (x1 , x2 , x3 , x4 ) = (3t, t, 4t, t) where t R
h. x1 = 3 s 2t; x2 = 3 5s 7t; x3 = s; x4 = t where
s, t R. Possible solutions: x1 = 3, x2 = 3, x3 = 0,
   
1 2 3 1 3 2
x4 = 0 and x1 = 0, x2 = 5, x3 = 1, x4 = 1. 1.2.14 and
0 1 1 0 1 1
i. x1 = 13 43 t; x2 = 31 13 t; x3 = t where t R. Possible
solutions: x1 = 13 , x2 = 31 , x3 = 0 and x1 = 1, x2 = 0, 1.2.15 Because f (1) = 2, f (1) = 6, and f (2) = 3 we get a
x3 = 1. linear system.
j. x1 = 1 2s 3t; x2 = s; x3 = t where s, t R. Possible 1a + 1b + c = 2
solutions: x1 = 1, x2 = 0, x3 = 0 and x1 = 8, x2 = 1, 1a 1b + c = 6
x3 = 3. 4a + 2b + c = 3
k. No solution; the system is inconsistent. After performing Gaussian elimination we obtain
l. No solution; the system is inconsistent.
a+ b+ c= 2
2b = 4
1.2.9 x = 2, y = 4, z = 5
3c = 9
1.2.10
which shows that the solution is f (x) = 1x2 2x + 3.
a. (x, y, z, w, v) = (0, 3t, 2t + s, s, t) where s, t R.
(x, y, z, w, v) = (0, 3, 2, 0, 1), (x, y, z, w, v) = 1.2.16 The following system with more unknowns than equa-
(0, 0, 1, 1, 0)

47
APPENDIX A. ANSWERS TO EXERCISES
tions 1.2.27 Consistent if b3 b2 b1 = 0 and b4 2b2 b1 = 0.
x+y+z=0
x+y+z=1 1.2.28 If a 6= 0 then the solution set of the first equation
is {(x, y) | x = (c by)/a }. Taking y = 0 gives the solution
has no solution.
(c/a, 0), and since the second equation is supposed to have the
1.2.17 For example, x + y = 1, 2x + 2y = 2, 3x + 3y = 3 has same solution set, substituting into it gives that a(c/a)+d0 =
an infinitely many solutions. e, so c = e. Then taking y = 1 in x = (c by)/a gives that
a((c b)/a) + d 1 = e, which gives that b = d. Hence they
1.2.18 No. There must be a free variable and since the sys- are the same equation.
tem is consistent there are infinitely many solutions When a = 0 the equations can be different and still have the
same solution set: e.g., 0x + 3y = 6 and 0x + 6y = 12.
1.2.19 After performing Gaussian elimination the system be-
comes 1.2.29 We take three cases: that a 6= 0, that a = 0 and
xy= 1 c 6= 0, and that both a = 0 and c = 0.
0 = 3 + k For the first, we assume that a 6= 0. Then Gaussian elimina-
This system has no solutions if k 6= 3 and if k = 3 then it has tion
infinitely many solutions. It never has a unique solution. ax + by = j
((cb/a) + d)y = (cj/a) + k
1.2.20
shows that this system has a unique solution if and only if
a. Never exactly 1 solution; infinite solutions if k = 2; no (cb/a) + d 6= 0; remember that a 6= 0 so that back substi-
solution if k 6= 2. tution yields a unique x (observe, by the way, that j and k
b. Exactly 1 solution if k 6= 2; infinite solutions if k = 2; play no role in the conclusion that there is a unique solution,
never no solution. although if there is a unique solution then they contribute to
c. Exactly 1 solution if k 6= 2; no solution if k = 2; never its value). But (cb/a) + d = (ad bc)/a and a fraction is
infinite solutions. not equal to 0 if and only if its numerator is not equal to 0.
d. Exactly 1 solution for all k. Thus, in this first case, there is a unique solution if and only
if ad bc 6= 0.
1.2.21 In the second case, if a = 0 but c 6= 0, then we swap
a. If h 6= 2 there will be a unique solution for any k. If cx + dy = k
h = 2 and k 6= 4, there are no solutions. If h = 2 and by = j
k = 4, then there are infinitely many solutions.
b. If h 6= 4, then there is exactly one solution. If h = 4 and to conclude that the system has a unique solution if and only
k 6= 4, then there are no solutions. If h = 4 and k = 4, if b 6= 0 (we use the case assumption that c 6= 0 to get a
then there are infinitely many solutions. unique x in back substitution). But where a = 0 and c 6= 0 the
condition b 6= 0 is equivalent to the condition adbc 6= 0.
1.2.22 That finishes the second case.
a. a = 0 and b 6= 0 Finally, for the third case, if both a and c are 0 then the
system
b. a 6= 0 0x + by = j
c. a = 0 and b = 0 0x + dy = k
1.2.23 might have no solutions (if the second equation is not a mul-
a. a 6= 1 and b R tiple of the first) or it might have infinitely many solutions (if
the second equation is a multiple of the first then for each y
b. a = 1 and b R
satisfying both equations, any pair (x, y) will do), but it never
c. a = 1 and b R has a unique solution. Note that a = 0 and c = 0 gives that
ad bc = 0.
1.2.24

a. k = 1 3 1.3.1 The two other equations are

b. k 6= 1 3
6I3 6I4 + I3 + I3 + 5I3 5I2 = 20
2I4 + 3I4 + 6I4 6I3 + I4 I1 = 0
1.2.25
a. Possible if a = 1 and a 6= b. Then the system is
b. Not possible.
2I2 2I1 + 5I2 5I3 + 3I2 = 5
c. Possible if a 6= 1 or a = b.
4I1 + I1 I4 + 2I1 2I2 = 10
6I3 6I4 + I3 + I3 + 5I3 5I2 = 20
1.2.26 b3 = b1 + b2
2I4 + 3I4 + 6I4 6I3 + I4 I1 = 0

48
APPENDIX A. ANSWERS TO EXERCISES
The solution is: d. 22

I1 = 2. 010 7, I2 = 1. 269 9, I3 = 2. 735 5, I4 = 1. 535 3 2.1.6    


8 3 3 24
a. AB = , BA =
10 9 4 2
1.3.2 The equations obtained are  
1 2 12
b. AB = , BA is not defined
2I1 + 5I1 + 3I1 5I2 = 10 10 4 32

I2 + 3I2 + 7I2 + 5I2 5I1 = 12 3 8
2I3 + 4I3 + 4I3 + I3 I2 = 0 c. AB = 5 8 , BA is not defined
8 32
Simplifying this yields

10 18 11  
52 21
d. AB = 45 24 21, BA =
10I1 5I2 = 10 45 27
15 12 9
16I2 5I1 = 12
11I3 I2 = 0 32 34 24  
22 14
e. AB = 32 38 8 , BA =
4 12
Solving the system gives 16 21 4
 
7 3 7 15
44 34 34 f. AB = , BA is not defined
I1 = , I2 = , I3 = 5 1 17 5
27 27 297
3 4 0 0 0 4
Thus all currents flow in the clockwise direction. g. AB = 1 4 0, BA = 3 6 1
2.1.1 2 0 0 1 2 1

a. 2 21 17 5 19 5 23
h. AB = 19 5 19 , BA = 5 7 1
b. 3
5 9 4 14 6 18
c. 1
d. Not defined. 2.1.7
2 2 2 2 3 5
2.1.2
a. DA = 6 6 6 , AD = 4 6 10
a. 23 15 15 15 6 9 15
b. 32 
4 6
 
4 8

c. 22 b. DA = , AD =
4 6 3 6
   
d a d1 b d a d2 b
2.1.3 c. DA = 1 , AD = 1
 d2 c d2 d d1 c d2 d
2 1
a.
d1 a d1 b d1 c

d1 a d2 b d3 c

12 13

11 8
 d. DA = d2 d d2 e d2 f , AD = d1 d d2 e d3 f
b. d3 g d3 h d3 i d1 g d2 h d3 i
1 19
 
9 7 2.1.8
c.
11 6 
1 0
 
0 1

2 3
  a. A = ,A =
2 1 0 1 1 0
d.
12 13 
4 0
 
8 0

b. A2 = 3
,A =
0 9 0 27
2.1.4
1 0 0 1 0 0
a. 22 c. A2 = 0 9 0 , A3 = 0 27 0
b. 2 0 0 25 0 0 125
c. 23

0 0 1 1 0 0
d. Not possible. d. A2 = 1 0 0, A3 = 0 1 0
e. Not possible. 0 1 0 0 0 1

0 1 0 0 0 0
2.1.5 e. A2 = 0 0 0, A3 = 0 0 0
a. 21 0 0 0 0 0 0
b. 11
c. Not defined. 2.1.9

49
APPENDIX A. ANSWERS TO EXERCISES

a. The product is undefined as the number of columns of E 30 20 15
is 3 and not equal to the number of rows of B which is k. E 2 + 5E 36I3 = 0 0 36
2. 0 0 36
b. The expression is defined and the resulting matrix is 23. 3449
407 99

15 6
l. EDC = 954815 101
3 648
c. The expression is undefined as B is not square. 324 35 360
d. The expression is defined and the resulting matrix is 23.
m. CDE is undefined
2.1.10
 
16 3 2
" #
a. 90749
15 28867
5
3 7 1 n. ABCEDI2 =
  156601
15 47033
5
2 0 2
b.
3 13 3
2.1.12
c. Not possible, since dimension of A and E are not the
9 6 8

same.
  a. 4 3 1
7 1 10 7 1
d.
5 1
4 5 6
36 19 2 b. 2 4 6
e. 83 22 11 9 10 9
19 10 3
4 9
f. Not possible, since the dimension of CD is 22 and is 7 6
not equal to the dimension of D. c. 4 3

 
g. 9 7 3 9 9
 
4 2
 
h. 7 4
6 3 d. , symmetric
4 6

4 2 4
2.1.11   e. 0 7 2, A is lower triangular and AT is upper
4 29
a. 7B 4A = 0 0 5
47 2
  triangular.
10 1
3 0 0

b. AB =
20 1 f. 4 3 0 , A is upper triangular and AT is lower
 
9 12 5 5 3
c. BA =
1 2 triangular.
 
1 0
d. E + D is undefined g. , diagonal.
0 9

67
11
4 0 2
3
e. ED = 178 72 h. 0 2 3 , symmetric.
3
30 40 2 3 6

" # 0 6 1
238
3 126 i. 6 0 4, skew-symmetric.
CD + 2I2 A = 863 361 1 4 0
15 5
 
3 2 2.1.13
g. A 4I2 =
3 0
 
3 9 3
a.
6 6 3
 
8 16
h. A2 B 2 =
25 3
 
5 18 5
b.
11 4 4
 
7 3
i. (A + B)(A B) =
46 2
 
c. 7 1 5
 
0 0
 
2 1 3
j. A 5A 2I2 = d.
0 0 3 9

13 16 1
e. 16 29 8
1 8 5

50
APPENDIX A. ANSWERS TO EXERCISES
 
5 7 1 e. Not defined since the trace of a matrix is only defined for
f.
5 15 5 square matrices.
g. Not possible. f. 3

2.1.14 Lets denote by m n the size of the matrix C, and 2.1.22


by p q the size of the matrix A. Since the product CA 4 1 4
is defined, p must be equal to n. Since the product AC T is a. 0 11 8
defined, and C T is n m, q must be equal to n. As both p 5 7 1
and q must be equal to n, A is a square matrix (of size n n). b. Not defined.
 
4
2.1.15 The i-th row of GH is made up of the products of the c.
21
i-th row of G with the columns of H. The product of a zero
row with a column is zero. 9 10
It works for columns if stated correctly: if H has a column of d. 17 10

zeros then GH (if defined) has a column of zeros. 9 3
e. Not
defined.
2.1.16 The generalization is to go from the first and second 17
rows to the i1 -th and i2 -th rows. Row i of GH is made up of f. 31
the dot products of row i of G and the columns of H. Thus 2
if rows i1 and i2 of G are equal then so are rows i1 and i2 of
GH.

2 1 2 3
2.1.17 If the product of two diagonal matrices is defined, if 2.1.23 B = 1 4 1 2
both are nn then the product of the diagonals is the diagonal 2 1 6 1
of the products: where G, H are equal-sized diagonal matrices,
2.1.24
GH is all zeros except each that i, i entry is gi,i hi,i .
a. Not
defined.

2.1.18 A matrix is upper triangular if and only if its i, j entry 9 3
is zero whenever i > j. Thus, if G, H are upper triangular b. 0 0
then hi,j and gi,j are zero when i > j. An entry in the product 9 3
pi,j = gi,1 h1,j + + gi,n hn,j is zero unless at least some of
 
54 6
the terms are nonzero, that is, unless for at least some of c.
72 6
the summands gi,r hr,j both i r and r j. Of course, if d. 289
i > j this cannot happen and so the product of two upper 
83 19

triangular matrices is upper triangular. (A similar argument e.
116 4
works for lower triangular matrices.)
f. Not defined.
2.1.19 f A is skew-symmetric then A = AT . It follows that 2.1.25 The product is the identity matrix (recall that
aii = aii and so each aii = 0. cos2 + sin2 = 1).
2.1.20
2.1.26 Evaluate A2 = AA.
a. 9
b. 6 2.1.27
c. 23 a. a = 1, b = 1/2
d. Not defined; the matrix must be square. b. a = 5/2 + 3/2t, b = t where t R
e. 0 c. a = 5, b = 0
f. n d. No solution.

2.1.21 2.1.28

1 6 3  
1 1 x y
 
x z w y

a. 2 3 1 =
3 3 z w 3x + 3z 3w + 3y

3 0 1  
0 0
b. Not defined since A is not a square matrix. =
0 0
c. Not defined since A and B do not have the same dimen-
sion.
  Solution
  w = y, x = z so
is: the matrices are of the form
4 0 2 x y
d.
0 4 1 x y

51
APPENDIX A. ANSWERS TO EXERCISES
2.1.29 (a, b, c, d) = (29 + 60t, 13 79
3 t, 7 14t, t) where and
t R. h1,1 + g1,1 ... h1,n + g1,n
H +G=
.. ..
2.2.1 . .
hm,1 + gm,1 ... hm,n + gm,n
 
5 9
a. X =
1 14 and the two are equal since their entries are equal gi,j + hi,j =
hi,j + gi,j .
 
0 22
b. X =
7 17
  2.2.7 All three statements are false, find counter examples.
5 2
c. X =
9/2 19/2 2.2.8
 
8 12 a.
d. X =
10 2     
1 2 1 2 7 2k + 2
=
  3 4 3 k 15 4k + 6
2 1
2.2.2 A =
    
0 8 1 2 1 2 7 10
=
3 k 3 4 3k + 3 4k + 6
2.2.3  
3k + 3 = 15
Thus you must have , Solution is: k = 4

4 4

5 2k + 2 = 10
a. A = 2 c. A
 =  A3 A2 =
0 4 32 80 b.

8 12
 0 32     
3 2 1 2 1 2 3 2k + 2
b. A = A A = =
0 8 3 4 1 k 7 4k + 6
    
1 2 1 2 7 10
=
2.2.4  1 k 3 4 3k + 1 4k + 2
0 2
a. However, 7 6= 3 and so there is no possible choice of k
5 1
  which will make these matrices commute.
10 2
b.
5 11 2.2.9 Hint: Expand and rearrange.
 
11 15
c. 2.2.10
37 32 T
d. No. a. The i, j entry of (G + H) is gj,i + hj,i . That is also the
T T
i, j entry of G + H .
e. (A + B) = AA + AB + BA + BB = A2 + AB + BA + B 2 T
b. The i, j entry of (r H) is rhj,i , which is also the i, j
2.2.5 entry of r H T .
a. Not necessarily true. Find a counterexample. c. The i, j entry of GH T is the j, i entry of GH, which is
the dot product of the j-th row of G and the i-th column
b. Not necessarily true. Find a counterexample.
of H. The i, j entry of H T GT is the dot product of the
c. Not necessarily true. Find a counterexample. i-th row of H T and the j-th column of GT , which is the
d. Necessarily true. Prove. dot product of the i-th column of H and the j-th row
e. Necessarily true. Prove. of G. Dot product is commutative and so these two are
f. Not necessarily true. Find a counterexample. equal.
g. Not necessarily true. Find a counterexample. d. By the prior part each equals its transpose, e.g.,
T T
(HH T ) = H T H T = HH T .
2.2.6 First, each of these properties is easy to check in an
entry-by-entry way. For example, writing 2.2.11 For H + H T , the i, j entry is hi,j + hj,i and the j, i
entry of is hj,i + hi,j . The two are equal and thus H + H T is
g1,1 . . . g1,n h1,1 . . . h1,n symmetric.
G = ... .. .. .. Every symmetric matrix does have that form, since we can
H =

. . .
g ... g h ... h write H = (1/2) (H + H T ).
m,1 m,n m,1 m,n

then, by definition we have 2.2.12


a. Verify using the definition of symmetric and skew-
g1,1 + h1,1 ... g1,n + h1,n symmetric matrices.
G+H =
.. ..
. . b. Hint: use the previous part as the symmetric and skew-
gm,1 + hm,1 ... gm,n + hm,n symmetric matrices.

52
APPENDIX A. ANSWERS TO EXERCISES
2.3.3 (AB) B 1 A1 = A  BB 1 A1 = AA1 = I

2.2.13
1 1 1
a. Hint: The main diagonal is remains the same if a matrix B A (AB) = B A1 A B = B 1 IB = B 1 B = I
is transposed.
2.3.4
b. Hint: Apply the definition of the trace to arbitrary ma-  
8 2
trices cA and A. a. X =
19 6
c. Hint: Apply the definition of the trace to arbitrary ma-  
9 3
trices A and B. b. X =
12 3
d. Hint: Analyse the ij product of the elements of the main
diagonal.  
0 1
2.3.5 A =
2.2.14 Disprove: Show that it is impossible to obtain a 11 17 2
nonzero matrix.  
1 2
2.3.6 X =
2.2.15 Hint: Apply the definition of an idempotent matrix. 6 3
2.2.16  3
4 3

a. Involutory since I 2 = I 2.3.7 A =
1 43
b. For A2 to be defined, A needs to have the same number
a+d
of rows as columns. 2.3.8 tr A1 = adbc
T
c. Hint: Show that (( A))2 = I.
2.3.9
d. AB is inovolutory under the condition that A and B  
T 2 1
commute. a. p(A ) =
1 2
b. B 1 = 13 (B 2 2B + 4I)
 
1 1
2.2.17 Let F = . We have
1 0
      2.3.10 Here are four solutions to H 2 = I.
2 2 1 3 3 2 4 5 3  
F = F = F = 1 0
1 1 2 1 3 2
0 1
In general,  
fn+1 fn
Fn =
   
1 0 1 0
fn fn1 2.3.11 Disprove: A = B= .
0 1 0 1
where fi is the i-th Fibonacci number fi = fi1 + fi2 and
f0 = 0, f1 = 1, which we verify by induction, based on this 2.3.12

equation. 1 2 2
     a. 0 1 3
fi1 fi2 1 1 fi fi1 6 10 5
=
fi2 fi3 1 0 fi1 fi2
1 0 0

b. 52 48 7
2.3.1 8 7 1

24 5 1 9 4
a.
5 1 c. 5 26 11

1/3 0
 0 2 1
b.
0 1/7 91 5 20

4/7 5/7
 d. 18 1 4
c. 22 1 5
3/7 2/7
d. The inverse does not exist. 25 8 0
e. 78 25 0
2.3.2   30 9 1
1 1 1 2 3
1 0 0

a. (AB) = B A =
1 7/5 f. 5 3 8
 
7/10 3/10 4 2 5
b. (AB)1 = B 1 A1 =
29/10 11/10
0 1 0

g. 0 0 1
1 0 0

53
APPENDIX A. ANSWERS TO EXERCISES

0 1 0 7
h. 1 0 0 d. x = 7
0 0 1 9
i. The inverse does not exist.
j. The inverse does not exist. 2.3.16
1

1 2 3
1 0 0 0
3 a. A1 = 0 1 23
1 0 4 1
k. 35 10 1 47
0 0 3
16
2 2 0 9 3
b. x = 83
1 0 0 0 1
11 1 0 4 3
l.
2 0 1 4

2.3.17
4 0 0 1 3
2 1 0
1 28 2 12 a. A = 2 1 0
0 1 0 0
m. 1 2 2
0 254 19 110 3
1 43

2 2 1
0 67 5 29
b. X = 2 1 1
2 1
0 0 1 0 7 2 3
4 2
0 0 0 1 2
n. 1 0 0 0
3 4 2
0 1 0 0 c. 2 2 4
0 0 4
1 0 0 0
0 1/2 0 0 5
o. 0 0 1/3
2 8 0
0
2.3.18 X = 1 3 4
0 0 0 1/4
2 6 4
2.3.13
0 0 0
1 8 4
2.3.19 X = 6 3 0
a. 1 3 1
0 3 4
1 6 3
5 7 1

2 2 2 2.3.20
b. 74 49 14 a. I

16 1
6
1
6
b. CAT
c. not invertible c. M AT H
16 0 3 0

1
90 1 35 7 2.3.21 X = DT C T B 1 C T
2 2 2
d.

5 0 1 0


1 0 0
36 0 7 1 2.3.22 A = 0 1 0
0 0 1
2.3.14
    
1 2 x 5 2.3.23 The proof that the inverse is r1 H 1 = (1/r) H 1
a. =
3 4 y 6 (provided, of course, that the matrix is invertible) is easy.
b. (x, y) = (8/2, 9/2)
c. Two lines instersection at (x, y) = (8/2, 9/2). 2.3.24 Yes B = C. Multiply AB = AC on the left by A1 .

2.3.25 The associativity of matrix multiplication gives


2.3.15  
H 1 (HG) = H 1 Z = Z and also H 1 (HG) = (H 1 H)G =
2
a. x = IG = G.
3
2.3.26 Multiply both sides of the first equation by H.
 
7
b. x =
7
2.3.27 Suppose that that there exists two inverse then show
7 that they are equal using the fact that they are both the
c. x = 1 inverse of A.
1

54
APPENDIX A. ANSWERS TO EXERCISES
2.3.28 T k (T 1 )k = (T T T ) (T 1 T 1 T 1 )
= 2.4.2
T k1 (T T 1 )(T 1 )k1 = = I. a. This matrix swaps row one and row three.

2.3.29 Since A1 A = AA1 = I and inverses are unique 0 1 0 a b c d e f
1 1 0 0 d e f = a b c
then it follows that A1

= A.
0 0 1 g h i g h i
2.3.30
T T b. This matrix swaps column one and two.
AT A1 = A1 A = I T = I     
1 T T 1 T
a b 0 1 b a
T =
 
A A = AA =I =I c d 1 0 d c

2.3.31 Checking that when I T is multiplied on both 


d e f

4
sides by that expression (assuming that T is the zero ma- 2.4.3 E 1 A = E1 interchanged R1 and R2 of A.
a b c
trix) then the result is the identity matrix is easy. The ob- 
5a 5b 5c
vious generalization is that if T n is the zero matrix then E2 A = E2 multiplied R1 of A by 5.
(I T )1 = I + T + T 2 + + T n1 ; the check again is d e f 
a 2d b 2e c 2f
easy. E3 A = E3 replaced R1 in A with
d e f
2.3.32 Hint: Apply the definition of symmetric matrices. R1 2R2. 
1 0
E4 =
1 2
2.3.33 Show that A = A 3A I. 0 6

2.3.34 A1 = A A2 2.4.4   
1 0 1 0
2.3.35 Invertible if a0 + a1 + a2 + . . . + am 6= 0. 0 3 1 1

2.3.36 Hint: Use the definition of the inverse of a matrix.


1 0 1
2.3.37 It is false; these two dont commute. 2.4.5 A1 = 3 1 7
=
2 2 2
    0 0 1
1 2 5 6
1 0 1 1 0 0 1 0 0 1 0 0
3 4 7 8 0 1 0 0 1 7 0 1
2 2 0 3 1 0
Note:
0 0 1 0 0 1 0 0 1 0 0 1
2.3.38 Hint: Show that the homogeneous system Ax = 0 The answer is not unique.
has only the trivial solution.
2.4.6 2 1
2.3.39 Since AA = I it follows that A1 = A. 3 3 0 1
a. A1 = 0 0 1 and X = 4
2.3.40 Since B is 2 3, then the homogenuous system 35 13 0 3
BX = 0 has non-trivial solutions, say X0 . Thus the sys-
b. A1 =E4 E3 E2 E1 where
tem (AB)X = 0 also has a non-trivial solution, namely X0 . 1 0 0 1 0 0

This in turn implies that AB cannot be invertible. E1 = 5 1 0 , E2 = 0 0 1 ,
2.4.1 0 0 1 0 1 0

1 0 0 1 0 1
a. The second matrix has its first row multiplied by 3.
E3 = 0 1 0 , E4 = 0 1 0 Note: The answer
0 0 31
 
3 6 0 0 1
3 4 is not unique.

2.4.7
b. The second matrix has its second row multiplied by 2.
  0 0 1 0 0 1 2 0 0 1 0 0 1 2 0
1 2 1 3 0 = 0 1 0 0 1 0 1 1 1 0 1 0
6 8 2 4 0 1 0 0 0 0 1 0 0 1 0 0 1
Note: The answer is not unique.
c. The second matrix undergoes the combination operation
of replacing the second row with 2 times the first row 2.4.8
added to the second.
1 0 0

1 0 0

1 0 0

1 0 0

 
1 2 A = 1 1 0 0 0 1 0 1 1 0 2 0
1 0 0 0 1 0 1 0 0 0 1 0 0 1

55
APPENDIX A. ANSWERS TO EXERCISES
Note: The answer is not unique. 3.1.1
a. 34
2.4.9
1 0 0
b. 41
a. E = E 1 = 0 0 1 c. 44
0 1 0 d. 74

1 0 0 1 0 0
b. E = 0 1 0 , E 1 = 0 1 0 3.1.2      
0 0 12 0 0 2 7 6 3 6 3 7
a. M1,1 = , M1,2 = , M1,3 = .
6 10 1 10 1 6
1 0 0 1 0 0 C1,1 = 43, C1,2 = 24, C1,3 = 11.
c. E = 0 1 1 , E 1 = 0 1 1    
6 8 10 8
0 0 1 0 0 1 b. M1,1 = , M1,2 = , M1,3 =
3 2 0 2
 
10 6
0 1 0 1 0 0 .
1 0 3
2.4.10 E1 = 0 0 E2 = 0 1 0 E3 = C1,1 = 36, C1,2 = 20, C1,3 = 30.
0 0 1 0 0 2      
3 10 3 10 3 3
1 0 0 c. M1,1 = , M1,2 = , M1,3 = .
1 3 9 9 9 9 3
1 0 Note: The answer is not unique.
C1,1 = 3, C1,2 = 63, C1,3 = 18.
0 0 1    
0 0 8 0
d. M1,1 = , M1,2 = , M1,3 =
 8 1 10 1

1 1 1 1 
2.4.11 X = 23 23 23 23 8 0
.
1 1 1 1 10 8
C1,1 = 0, C1,2 = 8, C1,3 = 64.
2.4.12 Assume that B is row equivalent to A and that A
is invertible. Because they are row-equivalent, there is a se- 3.1.3
quence of elementary row operations to reduce one to the 2 2 1 2 1 2
a. 3(+1)
+ 0(1)
+ 1(+1)
= 13
other. We can do that reduction with elementary matri- 3 0 1 0 1 3
ces, for instance, A can change by row operations to B as

0 1
+ 2(+1) 3 1 + 2(1) 3 0 = 13

B = En E1 A. This equation gives B as a product of ele- b. 1(1)
3 0 1 0 1 3
mentary which are invertible matrices then, B is also invert-
1 2

+ 2(1) 3 0 + 0(+1) 3 0 = 13

ible. c. 1(+1)
1 3 1 3 1 2
2.4.13 Hint: A matrix is invertible if and only if it can be
expressed as a product of elementary matrices. 3.1.4
a. 31
2.4.14 Elementary matices which its inverse is the same. El-
b. 375
ementary matrices obtained by interchanging two rows of the
identity. c. 2

1

0 1 1
 
1 1
 3.1.5
2.4.15 False, =
2 1 0 1 2 3 a. 59
b. 250
2.4.16 Show that the elementary matrix obtained by inter-
c. 3
changing the first and last row satisfy the conditions.
d. 0
2.4.17 e. 0
a. A is obtained from A by performing no elementary row f. 2
operation on A.
b. If B is obtained from A by performing k elementary row 3.1.6
operations then A is obtained from B by performing the a. x2
k inverse elementary row operations. 1
b. 7
c. If B is obtained from A by performing k elementary row x
c. 12
operations and C is obtained from B by performing l
elementary row operations then C is obtained from A by d. 0
performing the k + l elementary row operations. e. 20i + 43j + 4k
f. 2

56
APPENDIX A. ANSWERS TO EXERCISES
3.1.7 Evaluate the determinant using a cofactor expansion. d. 1
The same is true for lower triangular matrices. e. 113
3.1.8 f. 179
a. 2 3.2.4 det(A) = 41
b. 40
c. 24 3.2.5
2

3 33 a. det(A) = 7 b. det(A) = 52
3.1.9 = 4

3.1.10 False, Here is a determinant whose value 5


3.2.6 det(A) = 12

1 0 0
3.2.7 384
0 1 0 = 1
3.2.8 Hint: Use elementary operations to bring the matrix

0 0 1
under triangular form.
doesnt equal the result of expanding down the diagonal.
3.2.9 If the determinant is nonzero, then it will remain
1 0 1 0 1 0 nonzero with row operations applied to the matrix. How-
1 (+1)
+ 1 (+1)
+ 1 (+1)
=3
0 1 0 1 0 1 ever, by assumption, you can obtain a row of zeros by doing
row operations. Thus the determinant must have been zero.

3.1.11 There are no real numbers that make the matrix 3.3.1 This equation
singular because the determinant of the matrix cos2 + sin2  
is never 0, it equals 1 for all . 12 x 4
0 = det( ) = 64 20x + x2 = (x 16)(x 4)
8 8x
3.2.1
has roots x = 16 and x = 4.
a. The transpose was applied and it does not change the
determinant. 3.3.2
b. Two rows were switched and so the resulting determinant a. The determinant is equal to 1, so the matrix is invertible
is 1 times the first. for all t.
c. The determinant is unchanged since the operation is b. The determinant is equal to 5et hence it never equal to
adding first row to the second. zero, so the matrix is invertible for all t.
d. The second row was multiplied by 2 so the determinant c. The determinant
is equal to t3 + 2 hence it has no inverse
3
of the result is 2 times the original determinant. when t = 2.
e. Two columns were switched so the determinant of the
second is 1 times the determinant of the first. 3.3.3 This follows because det (ABC) =
det (A) det (B) det (C) and if this product is nonzero,
3.2.2 then each determinant in the product is nonzero and so each
a. det(A) = 90; 2R1 R1 . of these matrices is invertible.
det(B) = 45; 10R1 + R3 R3 . 3.3.4 If det (A) 6= 0, then A1 exists and by multiplying
det(C) = 45; C = AT . both sides on the left by A1 . The result is X = 0.
b. det(A) = 41; R2 R3 .
det(B) = 164; 4R3 R3 . 3.3.5 1 = det (A) det (B). Hence both A and B have in-
det(C) = 41; R2 + R1 R1 . verses. Given any X,
c. det(A) = 16; R1 R2 then R1 R3 .
A (BA I) X = (AB) AX AX = AX AX = 0
det(B) = 16; R1 R1 and R2 R2 .
det(C) = 432; C = 3M . and so it follows that (BA I) X = 0. Since X is arbitrary,
d. det(A) = 120; R1 R2 then R1 R3 then R2 R3 . it follows that BA = I.
det(B) = 720; 2R2 R2 and 3R3 R3 .
det(C) = 120; C = M . 3.3.6 The given condition is what it takes for the determi-
nant to be non zero. Recall that the determinant of an upper
3.2.3 triangular matrix is just the product of the entries on the
main diagonal.
a. 15
b. 52 3.3.7 det(T S) = det(T )det(S) = det(S)det(T ) = det(ST ).
c. 0

57
APPENDIX A. ANSWERS TO EXERCISES
 
0 1 simplifies to the familiar form
3.3.8 Disprove using the counter example A = .
0 0
y = x (x3 x2 )/(y3 y2 ) + (x2 y3 x3 y2 )/(y3 y2 )
3.3.9 det (aA) = det (aIA) = det (aI) det (A) = an det (A) .
The diagonal matrix which has a down the main diagonal has (the y3 y2 = 0 case is easily handled).
determinant equal to an .
3.3.20 Disprove. Recall that constants come out one row at
3.3.10 Hint: Apply the determinant on AT = A a time.
     
2 4 1 2 1 2
3.3.11 Hint: Apply the determinant on AT A = A. det( ) = 2 det( ) = 2 2 det( )
2 6 2 6 1 3
3.3.12
This contradicts linearity (here we didnt need S, i.e., we can
a. Plug and chug: the determinant of the product is this
take S to be the matrix of zeros).
    
a b w x aw + by ax + bz
det( ) = det( ) 3.3.21
c d y z cw + dy cx + dz
1 1 2
= acwx + adwz + bcxy + bdyz a. False. Consider 1 5 4
acwx bcwz adxy bdyz 0 3 3
b. True.
while the product of the determinants is this.
c. False.
   
a b w x d. False.
det( ) det( ) = (ad bc)(wz xy)
c d y z e. True.
Verification that they are equal is easy. f. True.
b. Use the prior part. g. True.
h. True.
3.3.13 det(A) = 24 5 i. True.
  j. True.
1 0
3.3.14 The statement is false. Consider A = B =
0 1 3.4.1
 
1 0

. 0 1 2
0 1 a. 3 2 8
0 1 1
3.3.15  
4 1
a. If it is defined then it is (32 )(2)(22 )(3). b.
2 3
b. Hint: det 6A3 + 5A2 + 2A = det A det 6A2 + 5A + 2I.  
0 1
c.
3.3.16  5 1

24 12 12

0 1
a. A = .
1 0 d. 12 6 6
b. 1 = det(AA1 ) = det(AAT ) = det(A) det(AT ) = 8 4 4

2 4 3 2 1
(det(A))
3 6 4 2
c. The converse does not hold; here is an example. e.
2 4 6 3
1 2 3 4
 
3 1
2 1
3.4.2     
t2,2 t1,2
1 a.
T1,1 T2,1
= = t2,2 t1,2
3.3.17 If H = P GP then det(H) = T1,2 T2,2 t2,1 t1,1 t2,1 t1,1
det(P 1 ) det(G) det(P ) = det(P 1 ) det(P ) det(G) =  
t t1,2
det(P 1 P ) det(G) = det(G). b. (1/t1,1 t2,2 t1,2 t2,1 ) 2,2
t2,1 t1,1
k
3.3.18 0 since 0 = det (0) = det Ak = (det (A)) .


3.3.19 An algebraic check is easy.

0 = xy2 +x2 y3 +x3 yx3 y2 xy3 x2 y = x(y2 y3 )+y(x3 x2 )+x2 y3 x3 y2

58
APPENDIX A. ANSWERS TO EXERCISES

3.4.3 Consider this diagonal matrix. 1 2 0

b. 0 2 1 = 7 so it has an inverse. This inverse is

d1 0 0 . . . 3 1 1
0 d2 0 1
27 2
T 7 7

D = 0 0 d3

1 3 6

.. 3 1 1

.
1
2 1 5 = 7 7 7

7
dn 2 1 2

6 5 2
7 7 7
If i 6= j then the i, j minor is an (n 1)(n 1) matrix with
only n 2 nonzero entries, because we have deleted both di 1 3 3

and dj . Thus, at least one row or column of the minor is all c. 2 4 1
= 3 so it has an inverse which is
zeroes, and so the cofactor Di,j is zero. If i = j then the 0 1 1
1 0 3

minor is the diagonal matrix with entries d1 , . . . , di1 , di+1 ,
. . . , dn . Its determinant is obviously (1)i+j = (1)2i = 1 32 1
3
5
3


times the product of those.
2

3 31 2
3
d2 dn 0 0
0 d1 d3 dn 0

1 0 3
adj(D) =

.

..
d. 1 0 1 = 2 and so it has an inverse. The inverse is

3 1 0
d1 dn1

1 3
2 0

2
3
29

1
3.4.4 Just note that if S = T T then the cofactor Sj,i equals

2
the cofactor Ti,j because (1)j+i = (1)i+j and because the
1
21

2 0
minors are the transposes of each other (and the determinant
of a transpose equals the determinant of the matrix).
3.4.7
3.4.5 False. A counter example. 3 1 8
a. adj(A) = 9 10 2
1 2 3 3 6 3 0 0 0 12 4 7
T = 4 5 6 adj(T ) = 6 12 6 adj(adj(T ))b. 0 0 =027
= det(A)
7 8 9 3 6 3 0 0 0
c. M12 = 9 and M22 = 10

3 1 8
1
3.4.6 d. A1 = 27 9 10 2

1 2 3 12 4 7
a. 0 2 1 = 13 and so it has an inverse given by
3 1 0 3.4.8
T a. Determinant of A is 6= 0, therefore A is invertible.
2 1
0 1 0 2

sin 0 cos


1 0
3 0
3 1
b. 0 1 0


1
2 3 1 3
1 2

cos 0 sin
1 0
3 0
13 3 1


2 3 1 3 1 2
2 1 0 1 0 2
3.4.9
T 9
a. 12 b. 433

1 3 6
1
= 3 9 5
13
4 1 2
1 3.4.10 375
3 4
13 13 13
3.4.11
t
3 9 1 e 0 0
13

= 13 13
t t

a. 0
e cos t e sin t = e3t . Hence

t t t t

6 5
2 0 e cos t e sin t e cos t + e sin t
13 13 13

59
APPENDIX A. ANSWERS TO EXERCISES
the inverse is 3.4.19
2t T 4
a.
e 0 0
e3t 0 e2t cos t + e2t sin t e2t cos t e2t sin tb. 1


0 e2t sin t e2t cos (t)


t 3.4.20 The determinant of the coefficient matrix is non-zero
e 0 0 for all . Hence the system can be solve for any value of .
= 0 et (cos t + sin t) (sin t) et
0 et (cos t sin t) (cos t) et 4.1.1
y

1 t 1 t

2e 0 2e
b. 12 cos t + 12 sin t sin t 1
2 sin t 12 cos t
1 u
2 sin t 12 cos t cos t 2 cos t 12 sin t
1
a. x
1
3.4.12 Hint: Use the identity det(A)A = adj(A).

v

4
3.4.13 det(M ) = u + v

u
3 . v

y
3.4.14 375
u
3.4.15 u + v
a. Hint: take the determinant of BG.
b. (10!)9 b. x

(2(10!)+3) 10 v v
c.
10! u

d. 10!
e. 0 .
10
f. (96)210 (10!)

3
z
g. Hint: Show that det(H)(2 + det(A))10 det(A) = 0 then
conclude that det(H) = 0. u + v
v .
c.
(10+10!)(9+10!)(8+10!)(7+10!)(6+10!)(5+10!)(4+10!)(3+10!)(2+10!)(1+10!)
h. (10!)10
u v
u

x y
i. Hint: Show that adj(adj(A)) = (det(A))n2 A.
.
1
3.4.16 x1 = 0, x2 = 2

z
3.4.17 x1 = 4
u u + v
3.4.18 .
d.
a. det(A) = 123, det(A1 ) = 492, det(A2 ) = u v
123, det(A
3 ) = 492, y
x v
4
x = 1 . .
4
4.1.2
 =43, det(A1 ) = 215, det(A2 ) = 0,
b. det(A)

5 a. P Q = (1, 6) = 1~i + 6~j
x= .
0 b. P Q = (4, 4) = 4~i + 4~j
c. det(A) = 0, det(A1 ) = 0, det(A2 ) = 0, det(A3 ) = 0.
c. P Q = (6, 1, 6) = 6~i ~j + 6~k
Infinite solutions exist. ~ ~
d. det(A) = 0, det(A1 ) = 56, det(A2 ) = 26. No solution d. P Q = (2, 2, 0) = 2i + 2j
exist.
4.1.3
e. det(A) = 0, det(A1 ) = 0, det(A2 ) = 0, det(A3 ) = 0.
Infinite solutions exist. a. (4, 4, 3)
f. det(A) = 0, det(A1 ) = 1247, det(A2 ) = 49, det(A3 ) = b. (2, 6, 1)
 
49. No solution exist. c. 1 30
, 30
2
5 ,
30

60
APPENDIX A. ANSWERS TO EXERCISES

41
d. 2
b. (10, 10, 22)

e. 41 c. (1/2, 7/2, 9/2)
2
f. (14, 6, 8) d. (3, 6, 6)
g. (7, 3, 4) 5 2

(h) (1, 6, 1) 4.1.13 3, 3

h. (2, 4, 2)
4.1.14 P Q = (1, 2, 3), P R = (1, 1, 1), P S =
i. No.
(7, 2, 4), QR = (2, 3, 2), SR = (6, 3, 5)
4.1.4
4.1.15
a. k~uk = 5, k~v k = 13, k~u + ~v k = 26, k~u ~v k = 10
a. 21 , 1, 32


b. k~uk = 17, k~v k = 3, k~u + ~v k = 14, k~u ~v k = 26
b. 12 , 12 , 12 , 1

c. k~uk = 5, k~v k = 3 5, k~u + ~v k = 2 5, k~u ~v k = 4 5
d. k~uk = 7, k~v k = 35, k~u + ~v k = 42, k~u ~v k = 28 4.1.16
a. (1, 3, 3) and (0, 2, 5)
4.1.5 When ~u and ~v have the same direction. (Note: parallel
b. 23 , 43 , 11 and 73 , 53 , 73
 
is not enough.) 3

4.1.6 4.1.17

a. 0, 13 11

a. ~u = (3/ 30, 7/ 30) 4 , 4
b. ~u = (cos 50 , sin 50 ) (0.643, 0.766) 0, 1, 23 , 8

b.

c. ~u = (cos 120 , sin 120 ) = (1/2, 3/2). 14

c. 3 , 1, 6

4.1.7 ~u = 3 (7, 3, 3) 4.1.18


67
a. AB = k AC for some k R
4.1.8
b. since 2P Q = P R, the points are collinear
a. No, they are different.
c. S, T , and U are not collinear because SU 6= k ST for any
kR
   
1 0
1 3
4.1.19 Since P~R = 2P~Q, the points P , Q and R are colin-
b. Yes, they are the same. ear.


1
5a + 3b = 16
4.1.20 We have which gives that a = 1
1 7a 10b = 77
3 and b = 7.

a + 3b = 1
4.1.9 4.1.21 We have a + 2b + c = 1 which implies that a =

b + 4c = 19
a. 2 10
2, b = 1 and c = 5.
b. 5

c. 170 4.1.22 Suppose the vertices are labeled A, B, C and D.


d. 3 6 Show that 12 AC = AB + 12 4 BD.

4.1.10 4.1.23 If A, B, and C are the vertices of the triangle, P is


a. ~u + ~v = (2, 1); ~u ~v = (0, 3); 2~u 3~v = (1, 7). the midpoint of AC, and Q is the midpoint of BC, show that
1
b. ~x = (1/2, 2). P Q = 2 AB.

4.1.24 Use Exercise 4.1.23 twice.


4.1.11

a. ~u 2, 1); ~u ~v = (1, 0, 3); ~u 2~v = ( 4.1.25 Use Exercises 4.1.24 and 4.1.22.
+ ~v = (3,
2 2, 2, 2 2).
4.1.26 In the triangle ABC with midpoints P QR, show that
b. ~x = (1, 0, 3). the point that is two thirds of the distance from A to P is given
by 13 (~a + ~b + ~c). Then show that 13 (~a + ~b + ~c) is two thirds of
4.1.12 the distance from B to Q and two thirds of the distance from
a. (4, 7, 13)

61
APPENDIX A. ANSWERS TO EXERCISES
C to R. 4.2.4
4.1.27 We prove that a vector has length zero if and only if a. k = 6
all its components are zero. b. k = 0 or k = 3
Let ~u Rn have components u1 , . . . , un . Recall that the c. k = 3
square of any real number is greater than or equal to zero, d. any k R
2
with equality only when that real is zero. Thus |~u | = u1 2 +
2
+ un is a sum of numbers greater than or equal to zero, 4.2.5
and so is itself greater than or equal to zero, with equality if a. = 0.3218 18.43
and only if each ui is zero. Hence |~u | = 0 if and only if all
b. = 1.6476 94.4
the components of ~u are zero.
c. = /4 = 45
4.1.28 Assume that ~v Rn has components v1 , . . . , vn . If d. = /2 = 90
~v 6= ~0 then we have this.
s 4.2.6
2  2
v1 vn a. 1.074 radians
+ + 2
v1 2 + + vn 2 v1 + + vn 2 b. 1.180 radians
c. 1.441 radians
s
v1 2 vn 2
  
= + +
v1 2 + + vn 2 v1 2 + + v n 2 4.2.7
=1 a. 3

b. 19
If ~v = ~0 then ~v /k~v k is not defined.
4.2.8
4.1.29 For the first question, assume that ~v Rn and r 0,
a. 1.
take the root, and factor.
p p b. (0, 1) and (1, 0).
kr~v k = (rv1 )2 + + (rvn )2 = r2 (v1 2 + + vn 2 = rk~v k
4.2.9  
For the second question, the result is r times as long, but it 9
a. arccos 112.65
points in the opposite direction in that r~v + (r)~v = ~0. 546
 
4
4.1.30 Hint: ~x = ~x ~y + ~y b. arccos 71.86
165
4.2.1
a. 22 4.2.10
2 3 2
 70 14 49 
b. 3 a. 17 , 17 , 17 , , ,
3 3
 5 117  17 17
c. Not defined. b. 2 , 2 , 3 , 2 , 2 , 1
c. 14 7 7
 1 4 2
d. 0 3 , 3, 3 , 3, 3, 3
d. 13 , 23 , 13 , 1 , 53 , 13 , 73 , 0
 
4.2.2
a. Yes 4.2.11
b. Yes a. proj~a (~b) = (0, 5), perp~a (~b) = (3, 0)
c. No b. proj~a (~b) = 25 , 25 , perp~a (~b) = 136
36 48
 102

25 , 25
d. Yes
c. proj~a (~b) = (0, 5, 0), perp~a (~b) = (3, 0, 2)
d. proj~a (~b) = 94 , 89 , 89 , perp~a (~b) = 40 1 19
 
4.2.3 9 , 9, 9
a. Answers will vary; two possible answers are (7, 4) and
(14, 8). 4.2.12
= 27 , 67 , 37

b. Answers will vary; two possible answers are (5, 3) and a. u
(15, 9). b. 220 660 330

49 , 49 , 49
c. Answers will vary; two possible answers are (1, 0, 1) and c. 270 222 624

49 , 49 , 49
(4, 5, 9).
d. Answers will vary; two possible answers are (2, 1, 0) and 4.2.13
(1, 1, 1/3). a. If ~v = (v1 , v2 ), show that v1 = k~v k cos and that v2 =
k~v k sin . This shows that ~v = k~v k(cos , sin ), and since

62
APPENDIX A. ANSWERS TO EXERCISES
~
v
v = vk ,
k~ the required formula follows. 4.2.22 In each item below, assume that the vectors ~u, ~v , w~
n
b. The angle between the vectors e~a and e~b is exactly , R have components u 1 , . . . , u ,
n 1v , . . . , w n .
so by the definition of the dot product we get cos( a. Dot product is right-distributive.
e~a e~b
) = . The denominator here is one since they u1 v1 w1
ke~a kke~b k .. .. ..
are both unit vectors, and working out the top gives the (~
u + ~
v ) w
~ = [ +
. . . ]
required result. un vn wn

u1 + v1 w1
4.2.14 . .
=
.. ..

a. Find a, b, and c by taking the dot product of w ~ with u~1 ,
u~2 , and u~3 , respectively. un + vn wn
~ = 31 u~2 53 u~3
b. w = (u1 + v1 )w1 + + (un + vn )wn
= (u1 w1 + + un wn ) + (v1 w1 + + vn wn )
4.2.15
= ~u w
~ + ~v w
~
4.2.16 Analyse the squared norm of k~uk~v k~v k~u and k~uk~v +
k~v k~u).
b. Dot product is also left distributive: w~ (~u + ~v ) = w
~ ~u +
4.2.17 w
~ ~v . The proof is just like the prior one.

a. The statement is false: Let ~a be any non-zero vector, let c. Dot product commutes.
~b be any non-zero vector that is orthogonal to ~a, and let
u1 v1 v1 u1
~c = ~b. Then the antecedent of the statement is true, .. .. .. ..
since both sides are equal to 0, while the consequent is . . = u1 v1 + +un vn = v1 u1 + +vn un = . .
false. un vn vn un
b. No.
d. Because ~u ~v is a scalar, not a vector, the expression
4.2.18 Expand the left side of the equation by using the fact (~u ~v ) w
~ makes no sense; the dot product of a scalar and
2
that k~v k = ~v ~v for any vector ~v to get to the right side. a vector is not defined.
e. This is a vague question so it has many answers. Some
4.2.19
2 2 are (1) k(~u~v ) = (k~u)~v and k(~u~v ) = ~u(k~v ), (2) k(~u~v ) 6=
a. (~u + ~v ) (~u ~v ) = k~uk k~v k (k~u) (k~v ) (in general; an example is easy to produce),
b. Hint: The vectors ~u +~v and ~u ~v are the diagonals of the and (3) |k~v | = |k||~v | (the connection between length and
parallelogram defined by ~u and ~v . And use the previous dot product is that the square of the length is the dot
part. product of a vector with itself).

4.2.20 4.2.23 No. These give an example.


a. We can use the x-axis.      
1 1 1
(1)(1) + (0)(1) ~u = ~v = w
~=
arccos( ) 0.79 radians 0 0 1
1 2
4.2.24 We will prove this demonstrating that the contrapos-
b. Again, use the x-axis. itive statement holds: if ~x 6= ~0 then there is a ~y with ~x ~y 6= 0.
(1)(1) + (0)(1) + (0)(1) Assume that ~x Rn . If ~x 6= ~0 then it has a nonzero compo-
arccos( ) 0.96 radians nent, say the i-th one xi . But the vector ~y Rn that is all
1 3
zeroes except for a one in component i gives ~x ~y = xi . (A
slicker proof just considers ~x ~x.)
c. The x-axis worked before and it will work again.
4.2.25 The angle between ~u and ~v is acute if ~u ~v > 0, is
(1)(1) + + (0)(1) 1
arccos( ) = arccos( ) right if ~u ~v = 0, and is obtuse if ~u ~v < 0. Thats because, in
1 n n the formula for the angle, the denominator is never negative.
n
d. Using the formula
from the prior item, 4.2.26 Suppose that ~u, ~v R . If ~u and ~v are perpendicular
limn arccos(1/ n) = /2 radians. then
2 2 2
k~u+~v k = (~u+~v )(~u+~v ) = ~u~u+2 ~u~v +~v ~v = ~u~u+~v ~v = k~u k +k~v k
4.2.21 Clearly u1 u1 + + un un is zero if and only if each
ui is zero. So only ~0 Rn is orthogonal to itself. (the third equality holds because ~u ~v = 0).

63
APPENDIX A. ANSWERS TO EXERCISES
4.2.27 Where ~u, ~v Rn , the vectors ~u + ~v and ~u ~v are b. ~u ~v = (11, 1, 17)
perpendicular if and only if 0 = (~u + ~v ) (~u ~v ) = ~u ~u ~v ~v , c. ~u ~v = (0, 2, 0)
which shows that those two are perpendicular if and only if d. ~u ~v = (0, 0, 0)
~u ~u = ~v ~v . That holds if and only if k~u k = k~v k.
e. ~i ~j = ~k
4.2.28 We will show something more general: if k~z1 k = k~z2 k f. ~i ~k = ~j
for ~z1 , ~z2 Rn , then ~z1 + ~z2 bisects the angle between ~z1 and g. ~j ~k = ~i
~z2
4.3.2
000 0
gives 0 a. (27, 9, 9)
00 0
0 b. (31, 34, 8)
0
c. (4, 5, 4)
(we ignore the case where ~z1 and ~z2 are the zero vector). 4.3.3
The ~z1 + ~z2 = ~0 case is easy. For the rest, by the definition of
a. 5
angle, we will be finished if we show this.
b. 21
~z1 (~z1 + ~z2 ) ~z2 (~z1 + ~z2 ) c. 0
=
k~z1 k k~z1 + ~z2 k k~z2 k k~z1 + ~z2 k
d. 5
But distributing inside each expression gives
4.3.4
~z1 ~z1 + ~z1 ~z2 ~z2 ~z1 + ~z2 ~z2 a. 16 (1, 1, 2)
k~z1 k k~z1 + ~z2 k k~z2 k k~z1 + ~z2 k
b. 121 (2, 1, 4)
2 2
and ~z1 ~z1 = k~z1 k = k~z2 k = ~z2 ~z2 , so the two are equal. c. (0, 1, 0)
d. Any vector orthogonal to ~u works (such as 1 (1, 0, 1)).
4.2.29 Let 2

u1 v1 w1 4.3.5
~u = ... , ~v = ... ~ = ...
w

4.3.6

un vn wn a. 5 2/2

and then b. 3 30/2

c. 1
u1 kv1 mw1
d. 5/2
~ = ... ... + ...
 
~u k~v + mw

un kvn mwn 4.3.7



a. 35

u1 kv1 + mw1
.. .. b. 11
= .

.
un kvn + mwn c. 9
d. 13
= u1 (kv1 + mw1 ) + + un (kvn + mwn )
= ku1 v1 + mu1 w1 + + kun vn + mun wn 4.3.8
= (ku1 v1 + + kun vn ) + (mu1 w1 + + mun wna.) 126
= k(~u ~v ) + m(~u w)
~ b. 5

as required. 4.3.9 24

4.2.30 If A, B, and C are the vertices of the triangle, let H 4.3.10


be the point of intersection of the altitudes from A and B. a. 52 , 25 , 0


Then prove that CH is orthogonal to AB. b. 2
4.2.31 Hint: Let O be the centre of the circle, and express c. 2

everything in terms of OA and OC. 4.3.11
4.3.1 a. 0
a. ~u ~v = (12, 15, 3) b. 30

64
APPENDIX A. ANSWERS TO EXERCISES
c. 2 parametric: x = 5, y = 1 t, z = 9
symmetric: not defined, as some components of the di-
4.3.12 With ~u = (u1 , u2 , u3 ), we have rection are 0.
~u ~u = (u2 u3 u3 u2 , (u1 u3 u3 u1 ), u1 u2 u2 u1 ) g. Answers can vary; here the direction is given by d~1 d~2 :
vector: ~x = (7, 2, 1) + t(1, 1, 2)
= (0, 0, 0)
parametric: x = 7 + t, y = 2 t, z = 1 + 2t
= ~0. symmetric: x 7 = 2 y = (z + 1)/2
h. Answers can vary; here the direction is given by d~1 d~2 :
4.3.13 vector: ~x = (2, 2, 3) + t(5, 1, 3)
parametric: x = 2 + 5t, y = 2 t, z = 3 3t
a. 0
symmetric: (x 2)/5 = (y 2) = (z 3)/3
b. The vectors are coplanar.
i. vector: ~x = (1, 1) + t(2, 3)
4.3.14 10 parametric: x = 1 + 2t, y = 1 + 3t
symmetric: (x 1)/2 = (y 1)/3
4.3.15 With ~u = (u1 , u2 , u3 ) and ~v = (v1 , v2 , v3 ), we have j. vector: ~x = (2, 5) + t(0, 1)
parametric: x = 2, y = 5 + t
~u (~u ~v ) = (u1 , u2 , u3 ) (u2 v3 u3 v2 , (u1 v3 u3 v1 ), u1 v2 u2 v1 )
symmetric: not defined
= u1 (u2 v3 u3 v2 ) u2(u1 v3 u3 v1 ) + u3(u1 v2 u2 v1 )
= 0. 4.4.4
x = 1 + 2t
a. ~x = (1, 1, 1) + t(2, 3, 1), y = 1 + 3t , t R
4.3.16 Prove. Hint: (~n1 ~n2 ) ~n1 .
z = 1 t

4.4.1 Note that alternative correct answers are possible. x = 2 + 3t
a. ~x = (3, 4) + t(5, 1), t R b. ~x = (2, 4, 5) + t(3, 1, 2), y = 4 t , t R
z = 5 2t

b. ~x = (2, 0, 5) + t(4, 2, 11), t R
c. ~x = (4, 0, 1, 5, 3) + t(2, 0, 1, 2, 1), t R 4.4.5 To the x-axis: ~x = (3, 4, 7) + t(1, 0, 0), t R. To
the y-axis: ~x = (3, 4, 7) + t(0, 1, 0), t R. To the z-axis:
4.4.2 Note that alternative correct answers are possible. ~x = (3, 4, 7) + t(0, 0, 1), t R.
a. ~x = (1, 2) + t(3, 5), t R
b. ~x = (4, 1) + t(6, 2), t R 4.4.6
x = 2t
c. ~x = (1, 3, 5) + t(3, 4, 5), t R
a. y = 3t , t R.
d. ~x = 12 , 14 , 1 + t 32 , 34 , 23 , t R
 
z = 2 3t

e. ~x = (1, 0, 2, 5) + t(4, 2, 1, 7), t R
x = 1 + 2t
b. y = 3t , t R.
4.4.3
z = 2t
a. vector: ~x = (2, 4, 1) + t(9, 2, 5)
parametric: x = 2 + 9t, y = 4 + 2t, z = 1 + 5t 4.4.7 That line is this set.
symmetric: (x 2)/9 = (y + 4)/2 = (z 1)/5
2 7
b. vector: ~x = (6, 1, 7) + t(3, 2, 5)



1 + 9 t

parametric: x = 6 3t, y = 1 + 2t, z = 7 + 5t 1 2
tR

symmetric: (x 6)/3 = (y 1)/2 = (z 7)/5



0 4

c. Answers can vary: vector: ~x = (2, 1, 5) + t(5, 3, 1)
parametric: x = 2 + 5t, y = 1 3t, z = 5 t Note that this system
symmetric: (x 2)/5 = (y 1)/3 = (z 5)
2 + 7t = 1
d. Answers can vary: vector: ~x = (1, 2, 3) + t(4, 7, 2) 1 + 9t = 0
parametric: x = 1 + 4t, y = 2 + 7t, z = 3 + 2t 1 2t = 2
symmetric: (x 1)/4 = (y + 2)/7 = (z 3)/2 0 + 4t = 1
e. Answers can vary; here the direction is given by d~1 d~2 :
vector: ~x = (0, 1, 2) + t(10, 43, 9) has no solution. Thus the given point is not in the line.
parametric: x = 10t, y = 1 + 43t, z = 2 + 9t 4.4.8 No, they are skew lines.
symmetric: x/10 = (y 1)/43 = (z 2)/9
f. Answers can vary; here the direction is given by d~1 d~2 : 4.4.9
25
, 36

vector: ~x = (5, 1, 9) + t(0, 1, 0) a. 17 17

65
APPENDIX A. ANSWERS TO EXERCISES
b. (0, 1, 2) a. 2x1 + 4x2 x3 = 9
c. no point of intersection b. 3x1 + 5x3 = 26
d. (7, 2, 5) c. 3x1 4x2 + x3 = 8

4.4.10 4.5.2 2x + 3y 5 = 21
a. Parallel. 4.5.3
b. Intersecting; (12, 3, 7). a. 2x1 3x2 + 5x3 = 6
c. Intersecting; (9, 5, 13). b. x2 = 2
d. Skew.
e. Parallel. 4.5.4 Answers will vary.
f. Same. 4.5.5
g. Skew. a. Standard form: 3(x 2) (y 3) + 7(z 4) = 0
general form: 3x y + 7z = 31
4.4.11 The system
b. Standard form: 2(y 3) + 4(z 5) = 0
1= 1 general form: 2y + 4z = 26
1+t= 3+s c. Answers may vary;
2 + t = 2 + 2s Standard form: 8(x 1) + 4(y 2) 4(z 3) = 0
general form: 8x + 4y 4z = 4
gives s = 6 and t = 8, so this is the solution set.
d. Answers may vary;
1 Standard form: 5(x 5) + 3(y 3) + 2(z 8) = 0
9 general form: 5x + 3y + 2z = 0
10

e. Answers may vary;
Standard form: 7(x 2) + 2(y 1) + (z 2) = 0
4.4.12 general form: 7x + 2y + z = 10
a. 52 , 52 ,

5 f. Answers may vary;
2 Standard form: 3(x 5) + 3(z 3) = 0
58 91 6

b. 17 , 17 , 17 general form: 3x + 3z = 24
17 1 1
 q 29 g. Answers may vary;
c. 6 , 3, 6 , 6
 Standard form: 2(x 1) (y 1) = 0
5 11 1
d. ,
3 3 , 3 , 6 general form: 2x y = 1
h. Answers may vary;
4.4.13 Standard form: 2(x 1) + (y 1) 3(z 1) = 0

a. 41/3 general form: 2x + y 3z = 0

b. 3 2 i. Answers may vary;

c. 5 2/2 Standard form: 2(x 2) (y + 6) 4(z 1) = 0
d. 5 general form: 2x y 4z = 6
j. Answers may vary;
4.4.14 1 Standard form: 4(x 5) 2(y 7) 2(z 3) = 0
2
general form: 4x 2y 2z = 0
4.4.15 k. Answers may vary;
a. ( 15 , 75 ) Standard form: (x 5) + (y 7) + (z 3) = 0

6 5 general form: x + y + z = 15
b. 5
l. Answers may vary;
4.4.16 Standard form: 4(x 4) + (y 1) + (z 1) = 0
general form: 4x + y + z = 18
a. 3/ 2
m. Answers may vary;
b. 2
Standard form: 3(x + 4) + 8(y 7) 10(z 2) = 0
4.4.17 general form: 3x + 8y 10z = 24
a. The two lines intersect. n. Standard form: x 1 = 0
general form: x = 1
b. The lines intersect at (5, 2, 3) and the distance is 0.
4.5.6
4.5.1

66
APPENDIX A. ANSWERS TO EXERCISES
a. 39x + 12y + 10z = 140 4.5.12
20 51 37

b. 11x 21y 17z = 56 a. 13 , 13 , 13
c. 12x + 3y 19z = 14 b. 73 , 13 , 23


4.5.7 4.5.13
a. x 4y 10z = 85 a. No point of intersection; the plane and line are parallel.
b. 2x 2y + 3z = 5 b. The plane contains the line, so every point on the line is
c. 5x 2y + 6z = 15 a point of intersection.
c. (3, 7, 5)
4.5.8
d. (3, 1, 1)
a. Note that
4.5.14 This system

2 1 1 3 1 2
2 1 1 1 1 0
= = 2+t= 0
2 5 3 0 5 5
t = s + 4w
0 1 1 4 1 5
1 t = 2s + w
and so the plane is this set.
gives t = 2, w = 1, and s = 2 so their intersection is this
point.

1 1 2
0

1 1 0

+ t + s t, s R 2
5 3 5
3

1 1 5

b. No; this system 4.5.15


a. The line is parallel to the plane.
1 + 1t + 2s = 0
b. The line is orthogonal to the plane.
1 + 1t =0
5 3t 5s = 0 c. The line is neither parallel nor orthogonal to the plane,
1 + 1t + 5s = 0 0.702 radians.
d. The line is orthogonal to the plane.
has no solution.
e. The line is parallel to the plane.
4.5.9 The vector
4.5.16

2 p
0 a. 5/7
3
b. 8/ 21

is not on the line. Because c. 1/ 3

2 1 3 d. 3
0 0 = 0
4.5.17
3 4 7
a. 226
we can describe that plane in this way. 13
b. 38

1 1 3 c. 4
5
0 + m 1 + n 0 m, n R

4 2 7
d. 6

4.5.18
4.5.10 a. 13 (0, 14, 1, 10)
a. ~n = (3, 2, 1) b. 71 (11, 13, 10, 3)
b. ~n = (4, 3, 5)
c. ~n = (1, 1, 2, 3) 4.5.19
a. Answers
may vary:
4.5.11 x = 14t

a. 3x1 + x2 + 4x3 + x4 = 2 ~x = y = 1 10t

b. x2 + 3x3 + 3x4 = 1 z = 2 8t

67
APPENDIX A. ANSWERS TO EXERCISES
b. Answers
may vary: 4.5.25 Suppose ~u Rn is perpendicular to both ~v Rn and
x = 1 + 20t
~ Rn . Then, for any k, m R we have this.
w
~x = y = 3 + 2t
~u (k~v + mw)
~ = k(~u ~v ) + m(~u w)
~ = k(0) + m(0) = 0
z = 3.5 26t

4.5.20 The points of coincidence are solutions of this system. 5.1.1


a. 0 + 0x + 0x2 + 0x3
t = 1 + 2m
0 0 0 0
t + s = 1 + 3k b.
0 0 0 0
t + 3s = 4m
c. The constant function f (x) = 0
Using Gaussian elimination on d. The constant function f (n) = 0

1 0 0 2 1
1 1 3 5.1.2
0 1 2
1 3 0 4 0 a. 3 + 2x x

1 +1
we get b.
0 3
1 0 0 2 1
c. 3ex + 2ex
0 1 3 2 0
0 0 9 8 1 5.1.3
which gives k = (1/9) + (8/9)m, so s = (1/3) + (2/3)m a. 1 + 2x, 2 1x, and x.
and t = 1 + 2m. The intersection is b. 2 + 1x, 6 + 3x, and 4 2x.

1 0 2 1 2
5.1.4

1 + 3 ( 1 + 8 m) + 0 m m R = 2/3 +

8/3 m mR
9 9
a. 4

0 0 4 0
     
1 2 1 2 0 0
, ,
3 4 3 4 0 0
4.5.21
46 3
 b.
a. ~x = 11 , 11 , 0 + t(2, 3, 11), tR
     
1 2 1 2 0 0
7 , ,
0 4 0 4 0 0

b. ~x = 2 , 4, 0 + t(3, 4, 2), t R

4.5.22 5.1.5
a. Point of intersection: (3, 1, 3), x + 5y + 3z = 17 a. (1, 2, 3), (2, 1, 3), and (0, 0, 0).
b. No point of intersection, 935 b. (1, 1, 1, 1), (1, 0, 1, 0) and (0, 0, 0, 0).
23
c. No point of intersection,
3 10
5.1.6
d. Point of intersection (1, 0, 7), 3x y z = 4
For each part the set is called Q. For some parts, there are
more than one correct way to show that Q is not a vector
4.5.23
space.
a. ~x = (1, 1, 1) + t(1, 2, 0) t R
a. It is not closed under addition.
b. ( 15 , 75 , 1)

c. 6 5 (1, 0, 0), (0, 1, 0) Q (1, 1, 0) 6 Q
5

4.5.24 b. It is not closed under addition.


a. Such a line exists, but it is not unique. One of such line (1, 0, 0), (0, 1, 0) Q (1, 1, 0) 6 Q
is obtained by finding a vector d~ perpendicular to the
direction vector of L1 , then the equation of the line is c. It is not closed under addition.
given by ~x = Q + td~ where t R.      
b. Such a line exists and is unique. The line can be obtained 0 1 1 1 1 2
, Q 6 Q
by finding a vector d~ parallel to the intersection of L1 and 0 0 0 0 0 0
L2 , then the equation of the line is given by ~x = Q + td~
where t R. d. It is not closed under scalar multiplication.
c. Such a plane does not exist since the lines are skew. 1 + 1x + 1x2 Q 1 (1 + 1x + 1x2 ) 6 Q

68
APPENDIX A. ANSWERS TO EXERCISES
e. The set is empty, violating the existance of the zero vec- 5.1.15
tor. a. No since 1 (0, 1) + 1 (0, 1) 6= (1 + 1) (0, 1).
5.1.7 No, it is not closed under scalar multiplication since, b. No since the same calculation as the prior part shows a
e.g., (1) is not a rational number. condition in the definition of a vector space that is vio-
lated. Another example of a violation of the conditions
5.1.8 The + operation is not commutative; producing two for a vector space is that 1 (0, 1) 6= (0, 1).
members of the set witnessing this assertion is easy.
5.1.16
5.1.9 a. Let V be a vector space, let ~v V , and assume that
a. It is not a vector space. w~ V is an additive inverse of ~v so that w ~ + ~v = ~0.
Because addition is commutative, ~0 = w
~ + ~
v = ~v + w,
~ so
1 1 1
(1 + 1) 0 6= 0 + 0 therefore ~
v is also the additive inverse of w.
~
0 0 0 b. Let V be a vector space and suppose ~v , ~s, ~t V . The
additive inverse of ~v is ~v so ~v + ~s = ~v + ~t gives that
b. It is not a vector space. ~v + ~v + ~s = ~v + ~v + ~t, which implies that ~0 + ~s = ~0 + ~t
and so ~s = ~t.
1 1
1 0 6= 0 5.1.17 We can combine the argument showing closure under
0 0 addition with the argument showing closure under scalar mul-
tiplication into one single argument showing closure under lin-
5.1.10 For each yes answer, you must give a check of all ear combinations of two vectors. If r1 , r2 , x1 , x2 , y1 , y2 , z1 , z2
the conditions given in the definition of a vector space. For are in R then
each no answer, give a specific example of the failure of one
x1

x2 r1 x1 r1 + 1

r2 x2 r2 + 1

of the conditions. r1 y1 +r2 y2 = r1 y1 + r2 y2
a. Yes. z1 z2 r1 z1 r2 z2
b. Yes.
r1 x1 r1 + r2 x2 r2 + 1

c. No, this set is not closed under the natural addition oper- = r1 y1 + r2 y2
ation. The vector of all 1/4s is an element of this set but r1 z1 + r2 z2
when added to itself the result, the vector of all 1/2s, is
not an element of the set. (note that the definition of addition in this space is that the
first components combine as (r1 x1 r1 +1)+(r2 x2 r2 +1)1,
d. Yes.
so the first component of the last vector does not say + 2).
e. No, f (x) = e2x + (1/2) is in the set but 2 f is not (that Adding the three components of the last vector gives r1 (x1
is, closure under scalar multiplication fails). 1 + y1 + z1 ) + r2 (x2 1 + y2 + z2 ) + 1 = r1 0 + r2 0 + 1 = 1.
Most of the other checks of the conditions are easy (although
5.1.11 the oddness of the operations keeps them from being routine).
a. Closed under vector addition. Hint: Apply determinant Commutativity of addition goes like this.
properties.
  x1 x2 x1 + x2 1 x2 + x1 1 x2 x1
1 0
b. ~0 = V y1 + y2 = y1 + y2 = y2 + y1 = y2 + y1
0 1 z1 z2 z1 + z2 z2 + z1 z2 z1
c. Every A V has an additive inverse A1 .
~ Associativity of addition has
d. Not closed
 under scalar multiplication. Since 00 =
1 0 0 0 x1 x2 x3 (x1 + x2 1) + x3 1
0 = 6 V
0 1 0 0 ( y1 + y2 ) + y3 = (y1 + y2 ) + y3
z1 z2 z3 (z1 + z2 ) + z3
5.1.12 Not a vector space since the set is not closed under
    while
1 2 1 1
vector space addition. i.e. If A = , A=
x1

x2 x3

x1 + (x2 + x3 1) 1

2 3 1 1
     y1 + ( y2 + y3 ) = y1 + (y2 + y3 )
1 2 1 1 1 1

V then A + B = AB = = / V. z z z z + (z + z )
2 3 1 1 1 1 1 2 3 1 2 3

and they are equal. The identity element with respect to this
5.1.13 Check all 10 conditions of the definition of a vector addition operation works this way
space.
x 1 x+11 x
5.1.14 It is not a vector space since it is not closed under y + 0 = y + 0 = y
addition, as (x2 ) + (1 + x x2 ) is not in the set. z 0 z+0 z

69
APPENDIX A. ANSWERS TO EXERCISES
and the additive inverse is similar. 5.1.19
It is not a vector space since addition of two matrices of un-
x x + 2 x + (x + 2) 1 1
y + y = equal sizes is not defined, and thus the set fails to satisfy the
yy = 0
closure condition.
z z zz 0
5.1.20
The conditions on scalar multiplication are also easy. For the Each element of a vector space has one and only one additive
first condition, inverse.

x (r + s)x (r + s) + 1
For, let V be a vector space and suppose that ~v V . If
w
~ 1, w~ 2 V are both additive inverses of ~v then consider w ~1 +
(r + s) y = (r + s)y
z (r + s)z ~
v + w
~ 2 . On the one hand, we have that it equals w
~ 1 +(~v + w
~ 2) =
~ 1 + ~0 = w
w ~ 1 . On the other hand we have that it equals
while (w~ 1 + ~v ) + w~ 2 = ~0 + w
~2 = w~ 2 . Therefore, w
~1 = w ~ 2.

5.1.21

x x rx r + 1 sx s + 1
r y +s y =
ry + sy Assume that ~v V is not ~0.
z z rz sz a. One direction of the if and only if is clear: if r = 0 then

(rx r + 1) + (sx s + 1) 1
r ~v = ~0. For the other way, let r be a nonzero scalar.
= ry + sy If r~v = ~0 then (1/r) r~v = (1/r) ~0 shows that ~v = ~0,
rz + sz contrary to the assumption.
b. Where r1 , r2 are scalars, r1~v = r2~v holds if and only if
and the two are equal. The second condition compares (r1 r2 )~v = ~0. By the prior item, then r1 r2 = 0.
~
r(x1 + x2 1) r + 1c. A nontrivial space has a vector ~v 6= 0. Consider the set

x1 x2 x1 + x2 1
r( y1 + y2 ) = r y1 + y2 = r(y1 + y2 ) { k ~v | k R }. By the prior item this set is infinite.
z1 z2 z1 + z2 r(z1 + z2 )
5.2.1
with a. Yes, we can easily check that it is closed under addition
and scalar multiplication.
x1 x2 rx1 r + 1 rx2 r + 1
b. Yes, we can easily check that it is closed under addition
r y1 + r y2 = ry1 + ry2
and scalar multiplication.
z1 z2 rz1 rz2
c. No. It is not closed under addition. For instance,
(rx1 r + 1) + (rx2 r + 1) 1
= ry1 + ry2
     
5 0 5 0 10 0
rz1 + rz2 + =
0 0 0 0 0 0
and they are equal. For the third condition, is not in the set. (This set is also not closed under scalar

x rsx rs + 1
multiplication, for instance, it does not contain the zero
(rs) y = rsy matrix.)
z rsz d. Yes, we can easily check that it is closed under addition
and scalar multiplication.
while
5.2.2 No, it is not closed. In particular, it is not closed
x sx s + 1 r(sx s + 1) r + 1
under scalar multiplication because it does not contain the
r(s y ) = r( sy ) = rsy
zero polynomial.
z sz rsz

and the two are equal. For scalar multiplication by 1 we have 5.2.3 No, such a set is not closed. For one thing, it does not
this. contain the zero vector.

x 1x 1 + 1 x 5.2.4
1 y = 1y = y
a. Every such set has the form {r ~v + s w ~ | r, s R }
z 1z z
~ may be ~0. With the inher-
where either or both of ~v , w
Thus all the conditions on a vector space are met by these ited operations, closure of addition (r1~v + s1 w)
~ + (r2~v +
two operations. s2 w)
~ = (r1 + r2 )~v + (s1 + s2 )w
~ and scalar multiplication
c(r~v + sw)
~ = (cr)~v + (cs)w~ is clear.
5.1.18
Addition is commutative, so in any vector space, for any vec- b. No such set can be a vector space under the inherited
tor ~v we have that ~v = ~v + ~0 = ~0 + ~v . operations because it does not have a zero element.

70
APPENDIX A. ANSWERS TO EXERCISES
5.2.5 No. The only subspaces of R1 are the space itself and with ~b 6 A. Consider ~a + ~b. Note that sum is not an
its trivial subspace. Any subspace S of R that contains a element of A or else (~a + ~b) ~a would be in A, which it
nonzero member ~v must contain the set of all of its scalar is not. Similarly the sum is not an element of B. Hence
multiples { r ~v | r R}. But this set is all of R. the sum is not an element of A B, and so the union is
not a subspace.
5.2.6 Yes. A theorem of first semester calculus says that a
c. It is not a subspace. As A is a subspace, it contains the
sum of differentiable functions is differentiable and that (f +
zero vector, and therefore the set that is As complement
g)0 = f 0 + g 0 , and that a multiple of a differentiable function
does not. Without the zero vector, the complement can-
is differentiable and that (r f )0 = r f 0 .
not be a vector space.
5.2.7
5.2.11 It is transitive; apply the subspace test. (You must
a. This is a subspace. It is closed because if f1 , f2 are even
consider the following. Suppose B is a subspace of a vector
and c1 , c2 are scalars then we have this.
space V and suppose A B V is a subspace. From which
space
(c1 f1 +c2 f2 ) (x) = c1 f1 (x)+c2 f2 (x) = c1 f1 (x)+c2 f2 (x) = (cdoes A inherit its operations? The answer is that it
1 f1 +c2 f2 ) (x)
doesnt matter A will inherit the same operations in either
case.)
b. This is also a subspace; the check is similar to the prior 5.3.1
one.
a. Yes, solving the linear system arising from
3

5.2.8 No. Subspaces of R are sets of three-tall vectors, while 1 0 2
R2 is a set of two-tall vectors. Clearly though, R2 is just like r1 0 + r2 0 = 0
this subspace of R3 . 0 1 1
gives r1 = 2 and r2 = 1.
x
y x, y R
b. Yes; the linear system arising from r1 (x2 ) + r2 (2x + x2 ) +
r3 (x + x3 ) = x x3

0

2r2 + r3 = 1
r1 + r2 = 0
5.2.9
r3 = 1
a. The union of the x-axis and the y-axis in R2 is one.
b. The set of integers, as a subset of R1 , is one. gives that 1(x2 ) + 1(2x + x2 ) 1(x + x3 ) = x x3 .
c. The subset {~v } of R2 is one, where ~v is any nonzero c. No; any combination of the two given matrices has a zero
vector. in the upper right.

5.2.10 5.3.2
a. Is is a subspace. a. Yes. It is in that span since 1 cos2 x + 1 sin2 x = f (x).
Assume that A, B are subspaces of V . Note that their b. No. Since r1 cos2 x + r2 sin2 x = 3 + x2 has no scalar
intersection is not empty as both contain the zero vector. solutions that work for all x. For instance, setting x to
~ ~s A B and r, s are scalars then r~v + sw
If w, ~ A be 0 and gives the two equations r1 1 + r2 0 = 3 and
because each vector is in A and so a linear combination r1 1 + r2 0 = 3 + 2 , which are not consistent with each
is in A, and r~v + sw~ B for the same reason. Thus the other.
intersection is closed. c. No. Consider what happens on setting x to be /2 and
b. In general it is not a subspace. (It is a subspace, only if 3/2.
A B or B A). d. Yes, cos(2x) = 1 cos2 (x) 1 sin2 (x).
Take V to be R3 , take A to be the x-axis, and B to be
the y-axis. Note that 5.3.3
 
1
 
0
   
1 0 a. Yes, for any x, y, z R this equation
A and B but + 6 A B
0 1 0 1 1 0 0 x
r1 0 + r2 2 + r3 0 = y
as the sum is in neither A nor B. 0 0 3 z
If A B or B A then clearly A B is a subspace.
To show that A B is a subspace only if one subspace has the solution r1 = x, r2 = y/2, and r3 = z/3.
contains the other, we assume that A 6 B and B 6 A b. Yes, the equation
and prove that the union is not a subspace. The assump-

2 1 0 x
tion that A is not a subset of B means that there is an r1 0 + r2 1 + r3 0 = y
~a A with ~a 6 B. The other assumption gives a ~b B 1 0 1 z

71
APPENDIX A. ANSWERS TO EXERCISES
  
gives rise to this d b
b. b, c, d R =
 c d
2r1 + r2 =x
    
0 1 0 0 1 0
r2 =y b +c +d b, c, d R One
0 0 1 0 0 1
r1 + r3 = z set that spans this space consists of those three matrices.
Gaussian elimination gives
c. The system
2r1 + r2 =x a + 3b =0
r2 =y 2a c d = 0
r3 = (1/2)x + (1/2)y + z gives b = (c+d)/6 and a = (c+d)/2. So one description
is this.
so that, given any x, y, and z, we can compute that r3 =
(1/2)x + (1/2)y + z, r2 = y, and r1 = (1/2)x (1/2)y.
     
1/2 1/6 1/2 1/6
c +d c, d R
c. No. In particular, we cannot get the vector 1 0 0 1

0 That shows that a set spanning this subspace consists of
0 those two matrices.
1 d. The a = 2b c gives that the set
3

(2b c) + bx + cx b, c R equals the set
as a linear combination since the two given vectors both
b(2 + x) + c(1 +x3 ) b, c R . So the subspace is

have a third component of zero.
the span of the set 2 + x, 1 + x3 .
d. Yes. The equation
a + bx + cx2 a + 7b + 49c = 0

e. The set can be
1 3 1 2 x parametrized as
r1 0 + r2 1 + r3 0 + r4 1 = y
b(7 + x) + c(49 + x2 ) b, c R

1 0 0 5 z

and so has the spanning set 7 + x, 49 + x2 .



leads to this reduction.

1 3 1 2 x
0 1 0 5.3.5
1 y
0 0 1 6 x + 3y + z a. We can parametrize in this way

We have infinitely many solutions. We can, for example, x 1 0

0 x, z R = x 0 + z 0 x, z R
set r4 to be zero and solve for r3 , r2 , and r1 in terms of
z 0 1

x, y, and z by the usual methods of back-substitution.

e. No. The equation giving this for a spanning set.



2 3 5 6 x
1 0
r1 1 + r2 0 + r3 1 + r4 0 = y 0 , 0
1 1 2 2 z
0 1

leads to this reduction.


b. Hereis aparametrization, and
the
associated spanning
2 3 5 6 x

2/3 1/3
0 3/2 3/2 3 (1/2)x + y set. y 1 + z 0 y, z R
0 0 0 0 (1/3)x (1/3)y + z
0 1


2/3 1/3
This shows that not every vector can be so expressed. 1 , 0
Only the vectors satisfying the restriction that (1/3)x
0 1

(1/3)y + z = 0 are in the span. (To see that any such
vector is indeed expressible, take r3 and r4 to be zero
1 1/2
2 0

and solve for r1 and r2 in terms of x, y, and z by back- c. ,
1 0
substitution.)

0 1

5.3.4 d. Parametrize the description as


2 3


a.  (c b c) b, c R

= a1 + a 1 x + a 3 x + a3 x a1 , a3 R to get
1 + x, x2 + x3 .

b(0 1 0) + c(1 0 1) b, c R The obvious choice
e. 1, x, x2 , x3 , x4

for the set that spans is (0 1 0), (1 0 1) .

72
APPENDIX A. ANSWERS TO EXERCISES
       
1 0 0 1 0 0 0 0 The answer is not never because if either set contains
f. , , ,
0 0 0 0 1 0 0 1 the other then equality is clear. We can characterize
equality as happening only when either set contains the
5.3.6 We will show mutual containment between the two other by assuming S 6 T (implying the existence of a
sets. vector ~s S with ~s 6 T ) and T 6 S (giving a ~t T with
The first containment span (span (S)) span (S) is an in- ~t 6 S), noting ~s + ~t span (S T ), and showing that
stance of the more general, and obvious, fact that for any ~s + ~t 6 span (S) span (T ).
subset T of a vector space, span (T ) T . c. Sometimes.
For the other containment, that span (span (S)) span (S), Clearly span (S T ) span (S) span (T ) because any
take m vectors from span (S), namely c1,1~s1,1 + +c1,n1 ~s1,n1 , linear combination of vectors from S T is a combination
. . . , c1,m~s1,m + + c1,nm ~s1,nm , and note that any linear of vectors from S and also a combination of vectors from
combination of those T.
r1 (c1,1~s1,1 + +c1,n1 ~s1,n1 )+ +rm (c1,m~s1,m + +c1,nm ~s1,nm ) Containment the other way does not always hold. For
instance, in R2 , take
is a linear combination of elements of S
     
= (r1 c1,1 )~s1,1 + +(r1 c1,n1 )~s1,n1 + +(rm c1,m )~s1,m + +(rm c1,nm )~s1,nm 1 0 2
S= , , T =
0 1 0
and so is in span (S). That is, simply recall that a linear
combination of linear combinations (of members of S) is a so that span (S) span (T ) is the x-axis but span (S T )
linear combination (again of members of S). is the trivial subspace.
Characterizing exactly when equality holds is tough.
5.3.7 Hint: For each subspace determine a set of vectors that Clearly equality holds if either set contains the other,
spans it. but that is not only if by this example in R3 .
W1 ( W2

1 0 1 0
5.3.8 For if, let S be a subset of a vector space V
S = 0 , 1 , T = 0 , 0
and assume ~v S satisfies ~v = c1~s1 + + cn~sn where
0 0 0 1

c1 , . . . , cn are scalars and ~s1 , . . . , ~sn S. We must show that
span (S {~v }) = span (S).
Containment one way, span (S) span (S {~v }) is obvious. d. Never, as the span of the complement is a subspace, while
For the other direction, span (S {~v }) span (S), note that the complement of the span is not (it does not contain
if a vector is in the set on the left then it has the form d0~v + the zero vector).
~ ~ ~
d1 t1 + + dm tm where the ds are scalars and the t s are in
S. Rewrite that as d0 (c1~s1 + + cn~sn ) + d1~t1 + + dm~tm 5.4.1
and note that the result is a member of the span of S. a. It is dependent. Considering
The only if is clearly true adding ~v enlarges the span to
include at least ~v . 1 2 4 0
c1 3 + c2 2 + c3 4 = 0
5.3.9 The span of a set does not depend on the enclosing 5 4 14 0
space. A linear combination of vectors from S gives the same
sum whether we regard the operations as those of W or as gives this linear system.
those of V , because the operations of W are inherited from
V. c1 + 2c2 + 4c3 = 0
3c1 + 2c2 4c3 = 0
5.3.10 5c1 + 4c2 + 14c3 = 0
a. Always; if S T then a linear combination of elements
of S is also a linear combination of elements of T . Gausss Method
b. Sometimes (more precisely, if and only if S T or T
1 2 4 0

S). 3 2 4 0
The answer is not always as is shown by this example 5 4 14 0
from R3
gives

1 0 1 0
S= 0 , 1
, T = 0 , 0
1 2 4 0

0 0

0 1
0 8 8 0
0 0 0 0
because of this.

1

1 yields a free variable, so there are infinitely many solu-
1 span (S T ) 1 6 span (S) span (T ) tions. For an example of a particular dependence we can
1 1 set c3 to be, say, 1. Then we get c2 = 1 and c1 = 2.

73
APPENDIX A. ANSWERS TO EXERCISES
b. It is dependent. The linear system that arises here b. This set is independent. We can see this by inspection,
straight from the definition of linear independence. Ob-
1 2 3 0
7 7 viously neither is a multiple of the other.
7 0
7 7 7 0 c. This set is linearly independent. The linear system re-
duces in this way
and Gaussian elimination gives
2 3 4 0
1 2 3 0 1 1 0 0
0 7 14 0
7 2 3 0
0 0 0 0
has infinitely many solutions. We can get a particular and Gaussian elimination gives
solution by taking c3 to be, say, 1, and back-substituting
2 3 4 0

to get the resulting c2 and c1 . 0 5/2 2 0
c. It is linearly independent. The system 0 0 51/5 0

0 1 0 to show that there is only the solution c1 = 0, c2 = 0,
0 0 0
and c3 = 0.
1 4 0
d. This set is linearly dependent. The linear system
and Gaussian elimination gives
8 0 2 8 0
1 4 0 3 1 2 2 0
0 1 0 3 2 2 5 0
0 0 0
must, after reduction, end with at least one variable free
has only the solution c1 = 0 and c2 = 0. (We could also
(there are more variables than equations, and there is
have gotten the answer by inspection the second vector
no possibility of a contradictory equation because the
is obviously not a multiple of the first, and vice versa.)
system is homogeneous). We can take the free variables
d. It is linearly dependent. The linear system as parameters to describe the solution set. We can then
set the parameter to a nonzero value to get a nontrivial

9 2 3 12 0
9 0 5 12 0 linear relation.
0 1 4 1 0
5.4.3 Let Z be the zero function Z(x) = 0, which is the
has more unknowns than equations, and so Gausss additive identity in the vector space under discussion.
Method must end with at least one variable free (there a. This set is linearly independent. Consider c1 f (x) + c2
cant be a contradictory equation because the system is g(x) = Z(x). Plugging in x = 1 and x = 2 gives a linear
homogeneous, and so has at least the solution of all ze- system
roes). To exhibit a combination, we can do the reduction c1 1 + c2 1 = 0
c1 2 + c2 (1/2) = 0

9 2 3 12 0
0 2 2 0 0 with the unique solution c1 = 0, c2 = 0.
0 0 3 1 0 b. This set is linearly independent. Consider c1 f (x) + c2
and take, say, c4 = 1. Then we have that c3 = 1/3, g(x) = Z(x) and plug in x = 0 and x = /2 to get
c2 = 1/3, and c1 = 31/27.
c1 1 + c2 0 = 0
5.4.2 c1 0 + c2 1 = 0
a. This set is independent. Setting up the relation c1 (3 which obviously gives that c1 = 0, c2 = 0.
x+9x2 )+c2 (56x+3x2 )+c3 (1+1x5x2 ) = 0+0x+0x2 c. This set is also linearly independent. Considering c1
gives a linear system f (x) + c2 g(x) = Z(x) and plugging in x = 1 and x = e

3 5 1 0
1 6 1 0 c1 e + c2 0 = 0
9 3 5 0 c1 ee + c2 1 = 0

and Gaussian elimination gives gives that c1 = 0 and c2 = 0.



3 5 1 0 5.4.4
0 13 4 0
a. This set is dependent. The familiar relation sin2 (x) +
0 0 128/13 0
cos2 (x) = 1 shows that 2 = c1 (4 sin2 (x)) + c2 (cos2 (x))
with only one solution: c1 = 0, c2 = 0, and c3 = 0. is satisfied by c1 = 1/2 and c2 = 2.

74
APPENDIX A. ANSWERS TO EXERCISES
b. This set is independent. Consider the relationship c1 1+ Conclusion: the cs are all zero, and so the set is linearly
c2 sin(x) + c3 sin(2x) = 0 (that 0 is the zero function). independent.
Taking three suitable points such as x = , x = /2, b. The second set is dependent
x = /4 gives a system
1 (~u ~v ) + 1 (~v w) ~ ~u) = ~0
~ + 1 (w
c1 =0
c1 + c2 =0 whether or not the first set is independent.
c1 + ( 2/2)c2 + c3 = 0
5.4.9
whose only solution is c1 = 0, c2 = 0, and c3 = 0. a. A singleton set {~v } is linearly independent if and only if
c. By inspection, this set is independent. Any dependence ~v 6= ~0. For the if direction, with ~v 6= ~0, we consider the
cos(x) = c x is not possible since the cosine function is relationship c ~v = ~0 and noting that the only solution
not a multiple of the identity function is the trivial one: c = 0. For the only if direction, it is
d. By inspection, we spot that there is a dependence. Be- evident from the definition.
cause (1 + x)2 = x2 + 2x + 1, we get that c1 (1 + x)2 + b. A set with two elements is linearly independent if and
c2 (x2 + 2x) = 3 is satisfied by c1 = 3 and c2 = 3. only if neither member is a multiple of the other (note
e. This set is dependent, because it contains the zero object that if one is the zero vector then it is a multiple of the
in the vector space, the zero polynomial. other). This is an equivalent statement: a set is linearly
dependent if and only if one element is a multiple of the
f. This set is dependent. The easiest way to see that is to
other.
recall the trigonometric relationship cos2 (x) sin2 (x) =
The proof is easy. A set {~v1 , ~v2 } is linearly dependent if
cos(2x).
and only if there is a relationship c1~v1 + c2~v2 = ~0 with
either c1 6= 0 or c2 6= 0 (or both). That holds if and only
5.4.5 No. Here are two members of the plane where the
if ~v1 = (c2 /c1 )~v2 or ~v2 = (c1 /c2 )~v1 (or both).
second is a multiple of the first.

1 2 5.4.10 Hint: Prove by contradiction. The converse (the only
0 , 0 if statement) does not hold. An example is to consider the
0 0 vector space R2 and these vectors.
     
1 0 1
(Another reason that the answer is no is the the zero vector ~x = , ~y = , ~z =
0 1 1
is a member of the plane and no set containing the zero vector
is linearly independent.)
5.4.11
5.4.6
a. The linear system arising from
a. = 1
b. 6= 1, 21 , 1 1 1 0
c1 1 + c2 2 = 0
5.4.7 0 0 0
a. 6= 1 and 6= 0 has the unique solution c1 = 0 and c2 = 0.
b. 6= 1 and 6= 0 b. The linear system arising from
c. = 1 and = 0
1 1 3
d. No such value of exists.
c1 1 + c2 2 = 2
e. = 1 24033 0 0 0

5.4.8 has the unique solution c1 = 8/3 and c2 = 1/3.


a. Assume that {~u, ~v , w~ } is linearly independent, so that c. Suppose that S is linearly independent. Suppose that we
any relationship d0 ~u + d1~v + d2 w~ = ~0 leads to the con- have both ~v = c1~s1 + +cn~sn and ~v = d1~t1 + +dm~tm
clusion that d0 = 0, d1 = 0, and d2 = 0. (where the vectors are members of S). Now,
Consider the relationship c1 (~u)+c2 (~u+~v )+c3 (~u+~v +w)
~ =
~0. Rewrite it to get (c1 +c2 +c3 )~u +(c2 +c3 )~v +(c3 )w
~ = ~0. c1~s1 + + cn~sn = ~v = d1~t1 + + dm~tm
Taking d0 to be c1 + c2 + c3 , taking d1 to be c2 + c3 , and can be rewritten in this way.
taking d2 to be c3 we have this system.
c1~s1 + + cn~sn d1~t1 dm~tm = ~0
c1 + c2 + c3 = 0
c2 + c3 = 0 Possibly some of the ~s s equal some of the ~t s; we can
c3 = 0 combine the associated coefficients (i.e., if ~si = ~tj then

75
APPENDIX A. ANSWERS TO EXERCISES
+ ci~si + dj~tj can be rewritten as + (ci 5.4.13 Yes; here is one.
dj )~si + ). That equation is a linear relationship among
distinct (after the combining is done) members of the set 1 0 0 1
S. Weve assumed that S is linearly independent, so all 0 , 1 , 0 , 1
0 0 1 1

of the coefficients are zero. If i is such that ~si does not
equal any ~tj then ci is zero. If j is such that ~tj does not
equal any ~si then dj is zero. In the final case, we have
5.4.14
that ci dj = 0 and so ci = dj .
Therefore, the original two sums are the same, except a. Assume n
that ~v and w~ are perpendicular nonzero vectors
perhaps for some 0 ~si or 0 ~tj terms that we can neglect. in R , with n > 1. With the linear relationship c~v +dw ~=
~0, apply ~v to both sides to conclude that ck~v k2 +d0 = 0.
Because ~v 6= ~0 we have that c = 0. A similar application
d. This set is not linearly independent:
of w~ shows that d = 0.
b. Two vectors in R1 are perpendicular if and only if at least
   
1 2
S= , R2 one of them is zero.
0 0
c. The right generalization is to look at a set {~v1 , . . . , ~vn }
and these two linear combinations give the same result Rk of vectors that are mutually orthogonal (also called
          pairwise perpendicular ): if i 6= j then ~vi is perpendicular
0 1 2 1 2
=2 1 =4 2 to ~vj . Mimicking the proof of the first item above shows
0 0 0 0 0
that such a set of nonzero vectors is linearly independent.
Thus, a linearly dependent set might have indistinct
sums. 5.4.15 It is both if and only if.
In fact, this stronger statement holds: if a set is linearly Let T be a subset of the subspace S of the vector space V .
~
dependent then it must have the property that there are The assertion that any linear relationship c1~t1 + +cn~tn = 0
two distinct linear combinations that sum to the same among members of T must be the trivial relationship c 1 = 0,
vector. Briefly, where c1~s1 + + cn~sn = ~0 then multi- . . . , cn = 0 is a statement that holds in S if and only if it
plying both sides of the relationship by two gives another holds in V , because the subspace S inherits its addition and
relationship. If the first relationship is nontrivial then the scalar multiplication operations from V .
second is also.
5.4.16 Hint: Use the definition of linear independence to
5.4.12 show that there only exists the trivial linear combination giv-
ing the zero vector.
a. For any a1,1 , . . . , a2,4 ,
          5.4.17 In R4 the biggest linearly independent set has four
a1,1 a1,2 a1,3 a1,4 0
c1 + c2 + c3 + c4 = vectors. There are many examples of such sets, this is one.
a2,1 a2,2 a2,3 a2,4 0

1 0 0 0
yields a linear system


, , , 0
0 1 0

a1,1 c1 + a1,2 c2 + a1,3 c3 + a1,4 c4 = 0 0 0 1 0



0 0 0 1

a2,1 c1 + a2,2 c2 + a2,3 c3 + a2,4 c4 = 0

that has infinitely many solutions (Gausss Method leaves To see that no set with five or more vectors can be indepen-
at least two variables free). Hence there are nontrivial dent, set up
linear relationships among the given members of R2 .
a1,1 a1,2 a1,3 a1,4 a1,5 0
b. Any set five vectors is a superset of a set of four vectors, a2,1 a2,2 a2,3 a2,4 a2,5 0
and so is linearly dependent. c1
a3,1 +c2 a3,2 +c3 a3,3 +c4 a3,4 +c5 a3,5 = 0

With three vectors from R2 , the argument from the prior a4,1 a4,2 a4,3 a4,4 a4,5 0
item still applies, with the slight change that Gausss
Method now only leaves at least one variable free (but and note that the resulting linear system
that still gives infinitely many solutions).
c. The prior part shows that no three-element subset of R2 a1,1 c1 + a1,2 c2 + a1,3 c3 + a1,4 c4 + a1,5 c5 = 0
is independent. We know that there are two-element sub- a2,1 c1 + a2,2 c2 + a2,3 c3 + a2,4 c4 + a2,5 c5 = 0
sets of R2 that are independent. The following one is a3,1 c1 + a3,2 c2 + a3,3 c3 + a3,4 c4 + a3,5 c5 = 0
    a4,1 c1 + a4,2 c2 + a4,3 c3 + a4,4 c4 + a4,5 c5 = 0
1 0
, has four equations and five unknowns, so Gausss Method
0 1
must end with at least one c variable free, so there are in-
and so the answer is two. finitely many solutions, and so the above linear relationship

76
APPENDIX A. ANSWERS TO EXERCISES
among the four-tall vectors has more solutions than just the to a vector in span (S), and the right side is a vector
trivial solution. in span (T (S T )). Therefore, since the intersection
The smallest linearly independent set is the empty set. n o of the spans is trivial, both sides equal the zero vector.
The biggest linearly dependent set is R4 . The smallest is ~0 . Because S is linearly independent, all of the cs are zero.
Because T is linearly independent so also is T (S T )
5.4.18 linearly independent, and therefore all of the ds are zero.
Thus, the original linear relationship among members of
a. The intersection of two linearly independent sets S T
S T only holds if all of the coefficients are zero. Hence,
must be linearly independent as it is a subset of the lin-
S T is linearly independent.
early independent set S (as well as the linearly indepen-
For the if half we can make the same argument in re-
dent set T also, of course).
verse. Suppose that the union S T is linearly indepen-
b. The complement of a linearly independent set is linearly dent. Consider a linear relationship among members of S
dependent as it contains the zero vector. and T (S T ). c1~s1 + + cn~sn + d1~t1 + + dm~tm = ~0
c. A simple example in R2 is these two sets. Note that no ~si is equal to a ~tj so that is a combination
    of distinct vectors. So the only solution is the trivial one
1 0 c1 = 0, . . . , dm = 0. Since any vector ~v in the intersec-
S= T =
0 1 tion of the spans span (S) span (T (S T )) we can
write ~v = c1~s1 + + cn~sn = d1~t1 dm~tm , and it
A somewhat subtler example, again in R2 , is these two.
must be the zero vector because each scalar is zero.
     
1 1 0
S= T = , 5.4.20
0 0 1
a. Assuming first that a 6= 0,
     
a b 0
d. We must produce an example. One, in R2 , is x +y =
c d 0
   
1 2 gives
S= T =
0 0 ax + by = 0
cx + dy = 0
since the linear dependence of S1 S2 is easy to see.
and Gaussian elimination
5.4.19 ax + by = 0
a. The vectors ~s1 , . . . , ~sn , ~t1 , . . . , ~tm are distinct. But we ((c/a)b + d)y = 0
could have that the union S T is linearly independent which has a solution if and only if 0 6= (c/a)b + d =
with some ~si equal to some ~tj . (cb + ad)/d (weve assumed in this case that a 6= 0, and
b. One example in R2 is these two. so back substitution yields a unique solution).
      The a = 0 case is also not hard break it into the c 6= 0
1 1 0 and c = 0 subcases and note that in these cases ad bc =
S= T = ,
0 0 1 0 d bc.
b. The equation
c. An example from R2 is these sets.
a b c 0
   
1 0
   
1 1 c1 d + c2 e + c3 f = 0
S= , T = , g h i 0
0 1 0 1
expresses a homogeneous linear system. We proceed by
d. The union of two linearly independent sets S T is writing it in matrix form and applying Gausss Method.
linearly independent if and only if their spans of S We first reduce the matrix to upper-triangular. Assume
and T (S T ) haven a otrivial intersection span (S) that a 6= 0. With that, we can clear down the first col-
umn.
span (T (S T )) = ~0 . To prove that, assume that

1 b/a c/a 0
S and T are linearly independent subsets of some vector 0 (ae bd)/a (af cd)/a 0
space. 0 (ah bg)/a (ai cg)/a 0
For the only if direction, assume that the intersection Then we get a 1 in the second row, second column entry.
of o spans is trivial span (S) span (T (S T )) =
n the (Assuming for the moment that ae bd 6= 0, in order to
~0 . Consider the set S (T (S T )) = S T and do the row reduction step.)
consider the linear relationship c1~s1 + + cn~sn + d1~t1 +

1 b/a c/a 0
+ dm~tm = ~0. Subtracting gives c1~s1 + + cn~sn = 0 1 (af cd)/(ae bd) 0
d1~t1 dm~tm . The left side of that equation sums 0 (ah bg)/a (ai cg)/a 0

77
APPENDIX A. ANSWERS TO EXERCISES
Then, under the assumptions, we perform the row oper- gives
ation ((ah bg)/a)2 + 3 to get this. 1 3 0 x
2 2 0 y
1 b/a c/a 0 3 1 1 z
0 1 (af cd)/(ae bd) 0
0 0 (aei + bgf + cdh hf a idb gec)/(ae bd) 0 and Gaussian elimination gives

1 3 0 x
Therefore, the original system is nonsingular if and only 0 4 0 2x + y
if the above 3, 3 entry is nonzero (this fraction is defined
0 0 1 x 2y + z
because of the ae bd 6= 0 assumption). It equals zero if
and only if the numerator is zero. which has the unique solution c3 = x 2y + z, c2 =
We next worry about the assumptions. First, if a 6= 0 x/2 y/4, and c1 = x/2 + 3y/4.
but ae bd = 0 then we swap row 2 and row 3 b. This is not a basis. Setting it up as in the prior part

1 b/a c/a 0

1 3 x
0 (ah bg)/a (ai cg)/a 0 c1 2 + c2 2 = y
0 0 (af cd)/a 0 3 1 z
and conclude that the system is nonsingular if and only gives a linear system whose solution
if either ah bg = 0 or af cd = 0. Thats the same as
asking that their product be zero: 1 3 x
2 2 y
ahaf ahcd bgaf + bgcd = 0 3 1 z
ahaf ahcd bgaf + aegc = 0 gives
a(haf hcd bgf + egc) = 0

1 3 x
0 4 2x + y
(in going from the first line to the second weve applied 0 0 x 2y + z
the case assumption that ae bd = 0 by substituting ae
for bd). Since we are assuming that a 6= 0, we have that is possible if and only if the three-tall vectors compo-
haf hcd bgf + egc = 0. With ae bd = 0 we can nents x, y, and z satisfy x 2y + z = 0. For instance, we
rewrite this to fit the form we need: in this a 6= 0 and can find the coefficients c1 and c2 that work when x = 1,
ae bd = 0 case, the given system is nonsingular when y = 1, and z = 1. However, there are no cs that work
haf hcd bgf + egc i(ae bd) = 0, as required. for x = 1, y = 1, and z = 2. Thus this is not a basis; it
The remaining cases have the same character. Do the does not span the space.
a = 0 but d 6= 0 case and the a = 0 and d = 0 but g 6= 0 c. Yes, this is a basis. Setting up the relationship leads to
case by first swapping rows and then going on as above. this reduction
The a = 0, d = 0, and g = 0 case is easy a set with a 0 1 2 x
2 1 5 y
zero vector is linearly dependent, and the formula comes
out to equal zero. 1 1 0 z
c. It is linearly dependent if and only if either vector is a gives
multiple of the other. That is, it is not independent iff 1 1 0 z
0 3 5 y + 2z
a b b a 0 0 1/3 x y/3 2z/3
d = r e or e = s d
which has a unique solution for each triple of components
g h h g
x, y, and z.
(or both) for some scalars r and s. Eliminating r and s in d. No, this is not a basis. The reduction of
order to restate this condition only in terms of the given
letters a, b, d, e, g, h, we have that it is not independent, 0 1 1 x
2 1 3 y
it is dependent iff ae bd = ah gb = dh ge
1 1 0 z
5.5.1 Each set is a basis if and only if we can express each gives
vector in the space in a unique way as a linear combination

1 1 0 z
of the given vectors. 0 3 3 y + 2z
a. Yes this is a basis. The relation 0 0 0 x y/3 2z/3
which does not have a solution for each triple x, y, and

1 3 0 x
c1 2 + c2 2 + c3 0 = y z. Instead, the span of the given set includes only those
3 1 1 z vectors where x = y/3 + 2z/3.

78
APPENDIX A. ANSWERS TO EXERCISES
5.5.2 5.5.5 The reduction
a. This is a basis for P2 . To show that it spans the space
 
1 4 3 1 0
2
we consider a generic a2 x + a1 x + a0 P2 and look 2 8 6 2 0
for scalars c1 , c2 , c3 R such that a2 x2 + a1 x + a0 =
c1 (x2 x + 1) + c2 (2x + 1) + c3 (2x 1). Gausss Gaussian elimination gives
Method on the linear system  
1 4 3 1 0
c =a 0 0 0 0 0
1 2
2c2 + 2c3 = a1 gives that the only condition is that x1 = 4x2 3x3 + x4 . The
c2 c3 = a0 solution set is
shows that given the ai s we can compute the cj s as c1 =

4x2 3x3 + x4
a2 , c2 = (1/4)a1 + (1/2)a0 , and c3 = (1/4)a1 (1/2)a0 .

x2

x2 , x3 , x4 R
Thus each element of P2 is a combination of the given

x3
three.

x4

To prove that the set of the given three is linearly in-
2 4 3 1
dependent we can set up the equation 0x + 0x + 0 =


1 0 0

2

c1 (x x + 1) + c2 (2x + 1) + c3 (2x 1) and solve, and = x2 + x3 + x4 x2 , x3 , x4 R

0 1 0
it will give that c1 = 0, c2 = 0, and c3 = 0.


0 0 1

b. This is not a basis. It does not span the space since no
combination of the two c1 (x + x2 ) + c2 (x x2 ) will and so the obvious candidate for the basis is this.
sum to the polynomial 3 P2 .
4 3 1
1 0 0
5.5.3 h , ,
i
0 1 0
a. We solve       0 0 1
1 1 1
c1 + c2 =
1 1 2 Weve shown that this spans the space, and showing it is also
with linearly independent is routine.
 
1 1 1
1 1 2 5.5.6 There are many bases. This is a natural one.
       
and Gaussian elimination gives 1 0 0 1 0 0 0 0
h , , , i
  0 0 0 0 1 0 0 1
1 1 1
0 2 1
5.5.7 For each, many answers are possible.
and conclude that c2 = 1/2 and so c1 = 3/2. Thus, the a. One way to proceed is to parametrize by expressing the
representation is this. a2 as a combination of the other two a2 = 2a1 +a0 . Then
    a2 x2 + a1 x + a0 is (2a1 + a0 )x2 + a1 x + a0 and
1 3/2
RepB ( )=
(2a1 + a0 )x2 + a1 x + a0 a1 , a0 R

2 1/2 B

= a1 (2x2 + x) + a0 (x2 + 1) a1 , a0 R


b. The relationship c1 (1) + c2 (1 + x) + c3 (1 + x + x2 ) + c4


suggests h2x2 + x, x2 + 1i. This only shows that it spans,
(1 + x + x2 + x3 ) = x2 + x3 is easily solved by inspection
but checking that it is linearly independent is routine.
to give that c4 = 1, c3 = 0, c2 = 1, and c1 = 0. 
b. Parametrize
 (a b c) a + b = 0 to get
0 (b b c) b, c R , which suggests using the
2 3
1 sequence h(1 1 0), (0 0 1)i. Weve shown that it
RepD (x + x ) = 0

spans, and checking that it is linearly independent is
1 D easy.
c. Rewriting
5.5.4 A natural basis is h1, x, x2 i. There are bases for P2
        
a b 1 0 0 1
that do not contain any polynomials of degree one or degree a, b R = a + b a, b R
0 2b 0 0 0 2
zero. One is h1 + x + x2 , x + x2 , x2 i. (Every basis has at
least one polynomial of degree two, though.) suggests this for the basis.
   
1 0 0 1
h , i
0 0 0 2

79
APPENDIX A. ANSWERS TO EXERCISES
5.5.8 gives
a. Asking which a0 + a1 x + a2 x2 can be expressed as c1 a0 + 7a1 + 49a2 + 343a3 = 0
(1 + x) + c2 (1 + 2x) gives rise to three linear equations, 2a1 24a2 218a3 = 0
describing the coefficients of x2 , x, and the constants. gives that a1 = 12a2 109a3 and that a0 = 35a2 +
420a3 . Rewriting (35a2 + 420a3 ) + (12a2 109a3 )x +
c1 + c2 = a0
a2 x2 + a3 x3 as a2 (35 12x + x2 ) + a3 (420 109x + x3 )
c1 + 2c2 = a1
suggests this for a basis h35 12x + x2 , 420 109x + x3 i.
0 = a2
The above shows that it spans the space. Checking it
Gausss Method with back-substitution shows, provided is linearly independent is routine. (Comment. A worth-
that a2 = 0, that c2 = a0 + a1 and c1 = 2a0 a1 . while check is to verify that both polynomials in the basis
Thus, with a2 = 0, we can compute appropriate c1 and have both seven and five as roots.)
c2 for any a0 and a1 . So the span is the entire set of lin- c. Here there are three conditions on the cubics, that a0 +
ear polynomials { a0 + a1 x | a0 , a1 R }. Parametriz- 7a1 + 49a2 + 343a3 = 0, that a0 + 5a1 + 25a2 + 125a3 = 0,
ing that set {a0 1 + a1 x | a0 , a1 R} suggests a basis and that a0 + 3a1 + 9a2 + 27a3 = 0. Gausss Method
h1, xi (weve shown that it spans; checking linear inde-
pendence is easy). a0 + 7a1 + 49a2 + 343a3 = 0
a0 + 5a1 + 25a2 + 125a3 = 0
b. With
a0 + 3a1 + 9a2 + 27a3 = 0
a0 +a1 x+a2 x2 = c1 (22x)+c2 (3+4x2 ) = (2c1 +3c2 )+(2c1 )x+(4c2 )x2
gives
we get this system. a0 + 7a1 + 49a2 + 343a3 = 0
2a1 24a2 218a3 = 0
2c1 + 3c2 = a0 8a2 + 120a3 = 0
2c1 = a1
yields the single free variable a3 , with a2 = 15a3 , a1 =
4c2 = a2
71a3 , and a0 = 105a3 . The parametrization is this.
and Gaussian elimination gives
(105a3 ) + (71a3 )x + (15a3 )x2 + (a3 )x3 a3 R

2c1 + 3c2 = a0
= a3 (105 + 71x 15x2 + x3 ) a3 R

3c2 = a0 + a1
0 = (4/3)a0 (4/3)a1 + a2
Therefore, a natural candidate for the basis is h105 +
Thus, the only quadratic polynomials a0 + a1 x + a2 x 2 71x15x2 +x3 i. It spans the space by the work above. It
with associated cs are the ones such that 0 = (4/3)a0 is clearly linearly independent because it is a one-element
(4/3)a1 + a2 . Hence the span is this. set (with that single element not the zero object of the
space). Thus, any cubic through the three points (7, 0),
(a1 + (3/4)a2 ) + a1 x + a2 x2 a1 , a2 R (5, 0), and (3, 0) is a multiple of this one. (Comment.


As in the prior question, a worthwhile check is to verify


Parametrizing gives a1 (1 + x) + a2 ((3/4) + x2 ) a1 , a2 that

R ,plugging seven, five, and three into this polynomial
which suggests h1 + x, (3/4) + x2 i (checking that it is yields zero each time.)
linearly independent is routine). d. This is the trivial subspace of P3 . Thus, the basis is
empty hi.
5.5.9
a. The subspace is this. 5.5.10
a. B = 1 + x3 , x2 + x3

a0 + a1 x + a2 x2 + a3 x3 a0 + 7a1 + 49a2 + 343a3 = 0

b. (p(x))B = (2, 2)
Rewriting a0 = 7a1 49a2 343a3 gives this.
No linearly independent set contains a zero vector.
5.5.11
(7a1 49a2 343a3 ) + a1 x + a2 x2 + a3 x3 a1 , a2 , a3 R


5.5.12
On breaking out the parameters, this suggests h7 +
a. To show that it is linearly independent, note that if
x, 49 + x2 , 343 + x3 i for the basis (it is easily veri-
d1 (c1 ~1 ) + d2 (c2 ~2 ) + d3 (c3 ~3 ) = ~0 then (d1 c1 ) ~1 +
fied). ~ ~ ~
(d2 c2 )2 + (d3 c3 )3 = 0, which in turn implies that each
b. The given subspace is the collection of cubics p(x) = d i ci is zero. But with ci 6= 0 that means that each di is
a0 +a1 x+a2 x2 +a3 x3 such that a0 +7a1 +49a2 +343a3 = 0 zero. Showing that it spans the space is much the same;
and a0 + 5a1 + 25a2 + 125a3 = 0. Gausss Method because h~1 , ~2 , ~3 i is a basis, and so spans the space, we
a0 + 7a1 + 49a2 + 343a3 = 0 can for any ~v write ~v = d1 ~1 + d2 ~2 + d3 ~3 , and then
a0 + 5a1 + 25a2 + 125a3 = 0 ~v = (d1 /c1 )(c1 1 ) + (d2 /c2 )(c2 2 ) + (d3 /c3 )(c3 ~3 ). If any
~ ~

80
APPENDIX A. ANSWERS TO EXERCISES
of the scalars are zero then the result is not a basis, be- 5.5.16
cause it is not linearly independent. a. Describing the vector space as
b. Showing that h2~1 , ~1 + ~2 , ~1 + ~3 i is linearly indepen-   
dent is easy. To show that it spans the space, assume a b
a, b, c R
that ~v = d1 ~1 + d2 ~2 + d3
~3 . Then, we can represent the b c
same ~v with respect to h2~1 , ~1 + ~2 , ~1 + ~3 i in this way suggests this for a basis.
~v = (1/2)(d1 d2 d3 )(2~1 ) + d2 (~1 + ~2 ) + d3 (~1 + ~3 ).
     
1 0 0 0 0 1
h , , i
5.5.13 Each forms a linearly independent set if we omit ~v . 0 0 0 1 1 0
To preserve linear independence, we must expand the span of
each. That is, we must determine the span of each (leaving ~v Verification is easy.
out), and then pick a ~v lying outside of that span. Then to b. This is one possible basis.
finish, we must check that the result spans the entire given
1 0 0 0 0 0 0 0 0 0 1 0 0 0 1
space. Those checks are routine.
h0 0 0 , 0 1 0 , 0 0 0 , 1 0 0 , 0 0 0 ,
a. Any vector that is not a multiple of the given one, that 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0
is, any vector that is not on the line y = x will do here.
One is ~v = ~e1 .
c. As in the prior two questions, we can form a basis from
b. By inspection, we notice that the vector ~e3 is not in the two kinds of matrices. First are the matrices with a single
span of the set of the two given vectors. The check that one on the diagonal and all other entries zero (there are
the resulting set is a basis for R3 is routine. n of those matrices). Second are the matrices with two
c. For any member
of the span opposed off-diagonal entries are ones and all other entries
c1 (x) + c2 (1 + x2 ) c1 , c2 R , the coefficient

are zeros. (That is, all entries in M are zero except that
of x2 equals the constant term. So we expand the mi,j and mj,i are one.)
span if we add a quadratic without this property, say,
~v = 1 x2 . The check that the result is a basis for P2 is 5.5.17
easy. a. Any four vectors from R3 are linearly related because the
vector equation
5.5.14 To show that each scalar is zero, simply subtract
c1 ~1 + + ck
~k ck+1 ~k+1 cn ~n = ~0. The obvi-
x1 x2 x3 x4 0
ous generalization is that in any equation involving only the c1 y1 + c2 y2 + c3 y3 + c4 y4 = 0
~ and in which each ~ appears only once, each scalar is
s, z1 z2 z3 z4 0
zero. For instance, an equation with a combination of the
even-indexed basis vectors (i.e., ~2 , ~4 , etc.) on the right gives rise to a linear system
and the odd-indexed basis vectors on the left also gives the
x1 c1 + x2 c2 + x3 c3 + x4 c4 = 0
conclusion that all of the coefficients are zero.
y1 c1 + y2 c2 + y3 c3 + y4 c4 = 0
2
5.5.15 Here is a subset of R that is not a basis, and two z1 c1 + z2 c2 + z3 c3 + z4 c4 = 0
different linear combinations of its elements that sum to the
that is homogeneous (and so has a solution) and has four
same vector.
             unknowns but only three equations, and therefore has
1 2 1 2 1 2 nontrivial solutions. (Of course, this argument applies to
, 2 +0 =0 +1
2 4 2 4 2 4 any subset of R3 with four or more vectors.)
Thus, when a subset is not a basis, it can be the case that its b. We shall do just the two-vector case. Given x1 , . . . , z2 ,
linear combinations are not unique.

x1 x2
But just because a subset is not a basis does not imply that S = y1 , y2
its combinations must be not unique. For instance, this set
z1 z2

 
1 to decide which vectors
2
x
does have the property that y
    z
1 1
c1 = c2
2 2 are in the span of S, set up
implies that c1 = c2 . The idea here is that this subset fails to
x1

x2 x
be a basis because it fails to span the space. c1 y1 + c2 y2 = y
z1 z2 z

81
APPENDIX A. ANSWERS TO EXERCISES
and row reduce the resulting system. 5.6.3 The solution set is

x 1 c1 + x 2 c2 = x
4x2 3x3 + x4


x2

y1 c1 + y2 c2 = y



x2 , x3 , x4 R
z1 c1 + z2 c2 = z

x3



x4

There are two variables c1 and c2 but three equations, so
when Gausss Method finishes, on the bottom row there so a natural basis is this
will be some relationship of the form 0 = m1 x + m2 y +
4 3 1
m3 z. Hence, vectors in the span of the two-element set 1 0 0
S must satisfy some restriction. Hence the span is not 0 , 1 , 0i
h
all of R3 . 0 0 1
5.5.18 We have (using these odball operations with care) (checking linear independence is easy). Thus the dimension
is three.
1yz y + 1 z + 1
5.6.4

y y, z R = y + 0 y, z R


z
0 z a. As inthe prior exercise, the space M22 of matrices with-

0

0

out restriction has this basis
= y 1 + z 0 y, z R
       
1 0 0 1 0 0 0 0

0 1
h , , , i
0 0 0 0 1 0 0 1
and so a natural candidate for a basis is this. and so the dimension is four.
b. For this space

0 0
h1 , 0i   
0 1 a b
a = b 2c and d R
c d
To check linear independence we set up  
1 1
 
2 0
 
0 0



= b +c +d b, c, d R
0 0 1 0 0 1
0 0 1
c1 1 + c2 0 = 0 this is a natural basis.
0 1 0      
1 1 2 0 0 0
h , , i
(the vector on the right is the zero object in this space). That 0 0 1 0 0 1
yields the linear system
The dimension is three.
(c1 + 1) + (c2 + 1) 1 = 1 c. Gausss Method applied to the two-equation linear sys-
c1 =0 tem gives that c = 0 and that a = b. Thus, we have
c2 =0 this description
with only the solution c1 = 0 and c2 = 0. Checking the span
        
b b 1 1 0 0
is similar. b, d R = b +d b, d R
0 d 0 0 0 1
5.6.1 One basis is h1, x, x2 i, and so the dimension is three. and so this is a natural basis.
   
5.6.2 For this space 1 1 0 0
h , i

  0 0 0 1
a b
a, b, c, d R
c d The dimension is two.
     
1 0 0 0
5.6.5 1 + x2 , x + x3 is a basis of W , therefore W is

= a + + d a, b, c, d R
0 0 0 1
of dimension 2.
this is a natural basis. 5.6.6 The bases for these spaces are developed in the answer
        set of the prior subsection.
1 0 0 1 0 0 0 0
h , , , i a. One basis is h7+x, 49+x2 , 343+x3 i. The dimension
0 0 0 0 1 0 0 1
is three.
The dimension is four. b. One basis is h35 12x + x2 , 420 109x + x3 i so the
dimension is two.

82
APPENDIX A. ANSWERS TO EXERCISES
c. A basis is 105 + 71x 15x2 + x3 . The dimension is If there is a repeated element then the intersection U W is


one. nontrivial. Otherwise, the set BU BW is linearly dependent


5
d. This is the trivial subspace of P3 and so the basis is as it is a six member subset of the five-dimensional space R .
empty. The dimension is zero. In either case some member of B W is in the span of
n o U B , and
thus U W is more than just the trivial space ~0 .
5.6.7 First recall that cos 2 = cos2 sin2 , and so deletion Generalization: if U, W are subspaces of a vector space of
of cos 2from this set leaves the
span unchanged. Whats left, dimension n and if dim(U ) + dim(W ) > n then they have a
the set cos2 , sin2 , sin 2 , is linearly independent (con- nontrivial intersection.
sider the relationship c1 cos2 + c2 sin2 + c3 sin 2 = Z()
5.6.15 First, note that a set is a basis for some space if and
where Z is the zero function, and then take = 0, = /4,
only if it is linearly independent, because in that case it is a
and = /2 to conclude that each c is zero). It is therefore
basis for its own span.
a basis for its span. That shows that the span is a dimension
three vector space. a. The answer to the question in the second paragraph is
yes (implying yes answers for both questions in the
5.6.8 A basis is first paragraph). If BU is a basis for U then BU is a
linearly independent subset of W . It is possible to expand

1 0 0 0 0 0 1 0 0 0 0 0 0 0 0
h 0 0 0 0 0 , 0 0 0 0 0 ,..., 0 0 0 0 0 i it to a basis for W . That is the desired BW .
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 The answer to the question in the third paragraph is
no, which implies a no answer to the question of the
and thus the dimension is 3 5 = 15. fourth paragraph. Here is an example of a basis for a
5.6.9 In a four-dimensional space a set of four vectors is lin- superspace with no sub-basis forming a basis for a sub-
early independent if and only if it spans the space. The form space: in W = R2 , consider the standard basis E2 . No
of these vectors makes linear independence easy to show (look sub-basis of E2 forms a basis for the subspace U of R2
at the equation of fourth components, then at the equation that is the line y = x.
of third components, etc.). b. It is a basis (for its span) because the intersection of
linearly independent sets is linearly independent (the in-
5.6.10 tersection is a subset of each of the linearly independent
a. One sets).
b. Two It is not, however, a basis for the intersection of the
c. n spaces. For instance, these are bases for R2 :
       
1 0 2 0
5.6.11 dim(R2 ) = 2, hence a set of four vectors is linearly B1 = h , i and B2 = h[ir] ,
0 1 0 2
independent.
and R2 R2 = R2 , but B1 B2 is empty. All we can say
5.6.12 A plane has the form { p~ + t1~v1 + t2~v2 | t1 , t2 R }.
is that the of the bases is a basis for a subset of the
When the plane passes through the origin we can take the
intersection of the spaces.
particular vector p~ to be ~0. Thus, in the language we have
developed in this chapter, a plane through the origin is the c. The of bases need not be a basis: in R2
span of a set of two vectors.    
1 1
   
1 0
Now for the statement. Asserting that the three are not copla- B1 = h , i and B2 = h , i
0 1 0 2
nar is the same as asserting that no vector lies in the span of
the other two no vector is a linear combination of the other B1 B2 is not linearly independent. A necessary and
two. Thats simply an assertion that the three-element set is sufficient condition for a of two bases to be a basis
linearly independent. Since dim(R3 ) = 3 then the set spans
R3 . The set is a basis for R3 B1 B2 is linearly independent span (B1 B2 ) = span

5.6.13 Let the space V be finite dimensional. Let S be a it is easy enough to prove (but perhaps hard to apply).
subspace of V . d. The complement of a basis cannot be a basis because it
a. The empty set is a linearly independent subset of S. It contains the zero vector.
can be expanded to a basis for the vector space S.
5.6.16
b. Any basis for the subspace S is a linearly independent
set in the superspace V . Hence it can be expanded to a. A basis for U is a linearly independent set in W and so
a basis for the superspace, which is finite dimensional. can be expanded to a basis for W . The second basis has
Therefore it has only finitely many members. at least as many members as the first.
b. One direction is clear: if V = W then they have the same
5.6.14 Let BU be a basis for U and let BW be a basis for dimension. For the converse, let BU be a basis for U .
W . Consider the concatenation of the two basis sequences. It is a linearly independent subset of W and so can be

83
APPENDIX A. ANSWERS TO EXERCISES
expanded to a basis for W . If dim(U ) = dim(W ) then b. Z = 3; x = 1, y = 0, z = 1 from the final tableau
this basis for W has no more members than does BU and
so equals BU . Since U and W have the same bases, they x y z s1 s2 Z
are equal.
0
1 1 2 1 0 1
1 1 0 1 1 0 1
6.1.1 0 0 0 0 1 1 3
a. Z = 370/3; x = 110/3, y = 40/3 from the final tableau
c. Z = 23/7; x = 10/7, y = 0, z = 1/7 from the final
x y s1 s2 Z tableau
1 0 1/3 1/3 0 110/3
x y z s1 s2 s3 Z

0 1 1/3 2/3 0 40/3
0 0 2/3 5/3 1 370/3 0 1/7 1 2/7 1/7 0 0 1/7

0 4/7 0 1/7 3/7 1 0 4/7

1 4/7 0 1/7 4/7 0 0 10/7
b. Z = 380/3; x = 130/3, y = 0, z = 40/3 from the final
0 2/7 0 4/7 5/7 0 1 23/7
tableau

x y z s1 s2 Z d. Z = 55; x = 5, y = 45, z = 5 from the final tableau
1 1 0 2/3 1/3 0 130/3

0 1 1 1/3 2/3 0 40/3 x y z s1 s2 s3 Z
0 0 0 1/3 4/3 1 380/3 0 1 0
1/2 1/2 1/2 0 45

0 0 1
1/2 1/2 1/2 0 5
c. Z = 98; x = 16, y = 0, z = 22 from the final tableau
1 0 0 1/2 1/2 1/2 0 5
0 0 0 1/2 1/2 1/2 1 55

x y z s1 s2 s3 Z
1 1/5 0 2/5 1/5 0 0 16
e. Z = 370/3; x = 110/3, y = 40/3, z = 0 from the final
0 3/5 1
1/5 2/5 0 0 22 tableau
0 1/5 0 3/5 1/5 1 0 16
0 2/5 0 1/5 8/5 0 1 98 x y z s1 s2 s3 Z
0 0 5/3 2/3 1/3 1 0 40/3

0 1 4/3 1/3 2/3 0 0 40/3
d. Z = 95; x = 25/2, y = 25/2, z = 5/2 from the final
1 0 2/3 1/3

1/3 0 0 110/3
tableau
0 0 4/3 2/3 5/3 0 1 370/3

x y z s1 s2 s3 s4 Z
0 1
0 5/8 0 1/8 1/4 0 25/2
0 0
0 9/8 1 5/8 3/4 0 85/2
1 0 0 1/8 0 3/8 1/4 0 25/2

0 0 1 3/8 0 1/8 1/4 0 5/2
0 0 0 5/4 0 3/4 5/2 1 95

e. Z = 400/3; x = 20, y = 0, z = 50/3, w = 40/3 from the


final tableau

x y z w s1 s2 s3 Z
0 0 1 0
0 1/3 1/3 0 50/3
1 0 0 0 1/2 0 1/2 0 20

0 1 0 1 1 1/3 1/3 0 40/3
0 3 0 0 3 2/3 1/3 1 400/3

6.1.2
a. Z = 2; x = 1, y = 1 from the final tableau

x y s1 s2 Z
0
1 3/2 1/2 0 1
1 0 1/2 1/2 0 1
0 0 1 0 1 2

84
References

[GHC] Gregory Hartman, APEX Calculus, Version 3.0, https://github.com/APEXCalculus/APEXCalculus Source, Li-
censed under the Creative Commons Attribution-Noncommercial 3.0 license.
[GH] Gregory Hartman, Fundamentals of Matrix Algebra, https://github.com/APEXCalculus/
Fundamentals-of-Matrix-Algebra, Licensed under the Creative Commons Attribution-Noncommercial 3.0 license.
[HE] Harold W. Ellingsen Jr., Matrix Arithmetic, Licensed under the Creative Commons Attribution-ShareAlike 2.5 License.

[JH] Jim Hefferon, Linear Algebra, http://joshua.smcvt.edu/linearalgebra, Licensed under the GNU Free Documentation
License or the Creative Commons Attribution-ShareAlike 2.5 License, 2014.
[KK] Ken Kuttler and Stephanie Keyowski, A First Course in Linear Algebra, http://lyryx.com, Licensed under the Creative
Commons Attribution 4.0 License, 2015.

[MB] Melanie Beck, Licensed under the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike
License.
[MC] Michael Corral, Vector Calculus, http://mecmath.net/, Licensed under the GNU Free Documentation License, Version
1.2, 2013.
[MH] Mathilde Hitier, Licensed under the GNU Free Documentation License or the Creative Commons Attribution-
ShareAlike License.
[OV] Olga Veres, Licensed under the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike
License.
[CR] Chris Roderick, Licensed under the Creative Commons Attribution-Noncommercial-ShareAlike License.

[SM] Sylvain Muise, Licensed under the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike
License.
[SS] Shahab Shahabi, Licensed under the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike
License.

[SZ] Carl Stitz and Jeff Zeager, Precalculus, http://http://stitz-zeager.com/, Licensed under the Creative Commons
Attribution-Noncommercial-ShareAlike License.
[YL] Yann Lamontagne, http://obeymath.org, Licensed under the GNU Free Documentation License or the Creative Com-
mons Attribution-ShareAlike License.

The problems contained in this textbook are taken directly or based on content of the above open source references.

85
Index

additive inverse, 33, 34 equation of plane, 30


adjoint matrix, 21 even functions, 35
algebraic properties, 26
angle, 25 function
between a line and a plane, 30 even, 35
angle between vectors, 25, 26 odd, 35
area function space, 3337, 40
parallelogram, 27 function subspace, 35
triangle, 27
area of triangle, 27 Gauss-Jordan Elimination, 3, 4
augmented matrix, 1, 4, 5 Gauss-Jordan elimination, 3
Gaussian Elimination, 3
basis, 39 Gaussian elimination, 3, 4
bisector, 26 geometric interpretation, 1
geometry, 24, 26
Cauchy-Schwartz Inequality, 25, 26 graphical interpretation, 1
closest point, 31
co-factor expansion, 17 homogeneous system, 4, 39, 40
cofactor, 17
cofactor expansion, 1719 idempotent matrix, 11
collinear, 24 ij product, 7
consistent, 5 inconsistent, 4
consistent system, 4, 5 inconsistent system, 4
consistent, inconsistent, 4 infinite solutions, 4
coordinate vector, 39 infinitely many solution, 4
coplanar, 27 infinitely many solutions, 4
Cramers Rule, 22 intersection, 28, 30
cross product, 27 between line and plane, 30
cube, 26 line and plane, 30
two lines, 28, 31
determinant, 1722 intersection:line and plane, 30, 31
triangular, 17 intersection:of two planes, 30
diagonal matrix, 7, 8, 34 invertible matrix, 21
dimension, 40, 41
distance, 29, 30 Kirchhoffs law, 5
between two points, 23
Laplace expansion, 1719
line to line, 29
line, 24, 28, 30
plane to plane, 30
parallel, 28
point to line, 29
parametric equations, 28
point to plane, 30
vector equation, 28
two lines, 31
line, plane, 35
distance:from a point to a line, 28, 29
linear combination, 24, 25, 39
distance:from a point to a plane, 30
linear dependence, 37, 38
dot product, 25, 26
linear equation, 1
elementary matrix, 14 linear independence, 37, 38
elementary operation, 18, 19 linear system, 12
elementary row operation, 1
matrix
equation of line, 28, 30

86
INDEX INDEX
addition, 9 nilpotent matrix, 10
adjoint, 21 no solution, unique solution, infinitely many solution, 5
algebraic properties, 10 norm, 2326
arithmetic, 8, 10
augmented, 2 odd function, 35
cancellation law, 10 orthogonal, 25, 26, 31
commutativity, 10, 13 orthogonal vector, 26, 27
commutitavity, 10 orthogonal vectors, 25, 27
diagonal, 8
dimension, 8 parallel vectors, 27
elementary, 14 parallelogram, 26
entry, 9 parallelogram law, 26
equation, 12 parametric equation of line, 28
expression, 13 particular solution, 3, 4
idempotent, 9 plane, 2931
inverse, 9, 1114, 19, 21 general form, 29
invertible, 21 normal vector, 30
involutory, 11, 13, 14 parallel to another plane, 29
lower triangular, 8 point-normal equation, 2931
matrix, 14 standard form, 29
multiplication, 79 point on plane, 29
nilpotent, 13, 20 polynomial space, 33, 3537, 39, 40
of cofators, 21 positive real numbers, 34
polynomial, 13 projection, 25, 2730
power, 10, 11 Pythagorean theorem, 26
product, 8
quadratic equation, parabola, 4
similar, 20
singular, 13 rational numbers, 33
skew-symmetric, 9, 10 reduced row echelon form, 2, 3
subtraction, 9 rhombus, 26
symmetric, 10, 39 row echelon form, 2, 4
trace of, 9, 11 row equivalent, 15
transpose, 811, 13, 14 row-equivalent, 14
triangular, 8, 17, 19
upper triangular, 8 scalar multiplication, 8
zero factor property, 10 scalar triple product, 27
matrix addition, 7, 9, 10 shortest distance, 31
matrix associativity, 7, 8 similar matrix, 20
matrix dimension, 7 simplex
matrix element, 7 maximize, 43
matrix entry, 7 minimize, 43
matrix equation, 914 simplex method, 43
matrix inverse, 1113 singular matrix, 19, 21
matrix multiplication, 8 skew-symmetric matrix, 8, 19
matrix polynomial, 12, 13 solution set, 24
matrix power, 7, 10 spaning set, 36
matrix powers, 10 square, 26
matrix product, 7, 10 subspace, 35, 36
matrix size, 7 symmetric equation, 13
matrix space, 3336, 39, 40 symmetric equation of line, 28
matrix space, polynomial space, 36 symmetric matrix, 8, 10, 39
maximize, 43 system of linear equation, 12
midpoint, 24 system of linear equations, 1, 3, 5, 9, 12, 14, 22, 24
minor, 17
trace of a matrix, 9, 10
nearest point:to a point in a hyperplane, 30 triangle inequality, 24, 25
nearest point:to a point in a line, 28, 29 triangular matrix, 8, 17

87
INDEX INDEX
trivial solution, 4
trivial subspace, 35

unique solution, 4
unique solution, inconsistent, infinitely many solutions, 4
unit vector, 2325

Vandermonde determinant, 19
vector, 23, 28
vector arithmetic, 23, 24
vector equation of line, 28
vector space, 33, 34, 36
matrix space, 34
subspace, 35
volume
parallelepiped, 27
volume:parallelepiped, 27

zero vector, 33, 34

88

You might also like