
Introduction

AMS 210: Applied Linear Algebra


Theory: Orthogonalization and QR

11-22-16

11-22-16 AMS 210: Applied Linear Algebra Page 1


Introduction Section 5.4

Chapter 5 Section 4: Orthogonal Systems




Inverse of a Matrix with Orthogonal Columns

A = [ 3  -4 ]
    [ 4   3 ]

Note the sum of squares of each column is 25.

So we divide each column by 25 and take the transpose:

A^-1 = (1/25) [  3  4 ]
              [ -4  3 ]
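A minimal NumPy sketch of this recipe (the function name is ours, not from the course): divide each column by its sum of squares, then transpose.

```python
import numpy as np

def inverse_via_orthogonal_columns(A):
    """Assumes the columns of A are mutually orthogonal and nonzero."""
    scales = (A * A).sum(axis=0)   # |a_j|^2 for each column j
    return (A / scales).T          # scale columns, then transpose

A = np.array([[3.0, -4.0],
              [4.0,  3.0]])        # orthogonal columns, |a_j|^2 = 25
A_inv = inverse_via_orthogonal_columns(A)
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```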



Example

A = [ 2   1   0 ]
    [ 1  -1   1 ]
    [ 1  -1  -1 ]

A^-1 = [ 2/6   1/6   1/6 ]
       [ 1/3  -1/3  -1/3 ]
       [  0    1/2  -1/2 ]


Theorem

Theorem
If A has orthogonal columns a_j and A x = b, then

    x_j = (a_j . b) / (a_j . a_j),

the coefficient of the projection of b onto the jth column of A.
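A quick NumPy illustration of the theorem: with orthogonal columns, each solution component is just a projection coefficient, so no elimination is needed (the matrix here is our earlier 2x2 example).

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0,  3.0]])   # orthogonal columns
b = np.array([1.0, 2.0])

# x_j = (a_j . b) / (a_j . a_j) for each column a_j of A
x = np.array([(A[:, j] @ b) / (A[:, j] @ A[:, j]) for j in range(A.shape[1])])
print(np.allclose(A @ x, b))   # True: x solves the system
```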




Orthonormal

Definition
A set of vectors is orthonormal if the vectors are mutually orthogonal and
each has unit length.

Note: if A has orthonormal columns, how would we take its inverse?

By the previous theorem, A^-1 = A^T.

Definition
Such a matrix is called unitary.


Change of Basis

Can we express the vector b = (1, 2) in terms of the orthonormal basis
q1 = (.8, .6), q2 = (-.6, .8)?
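In an orthonormal basis the coordinates are just dot products: b = (b . q1) q1 + (b . q2) q2. A short check (the sign of q2 is our reading of the slide; it is the sign that makes {q1, q2} orthonormal):

```python
import numpy as np

q1 = np.array([0.8, 0.6])
q2 = np.array([-0.6, 0.8])
b = np.array([1.0, 2.0])

c1, c2 = b @ q1, b @ q2   # coefficients: c1 = 2, c2 = 1, so b = 2 q1 + 1 q2
print(np.allclose(c1 * q1 + c2 * q2, b))   # True
```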


Pseudoinverse with Orthonormal Columns

A = [ 3/5   0 ]
    [ 4/5   0 ]
    [  0    1 ]

What is the pseudoinverse of A?


Pseudoinverse with Orthonormal Columns

The pseudoinverse of a matrix Q with orthonormal columns is just Q^T,
since Q^T Q = I.
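A sketch verifying this collapse: since Q^T Q = I, the pseudoinverse formula (Q^T Q)^-1 Q^T reduces to Q^T (the matrix below is our reconstruction of the example slide, so treat it as illustrative).

```python
import numpy as np

Q = np.array([[3/5, 0.0],
              [4/5, 0.0],
              [0.0, 1.0]])

print(np.allclose(Q.T @ Q, np.eye(2)))       # True: columns are orthonormal
print(np.allclose(np.linalg.pinv(Q), Q.T))   # True: pseudoinverse is Q^T
```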


Orthonormal Basis

How can we find an orthonormal basis, Q, for the space spanned by a set
of vectors?
How can we express a matrix using only orthonormal vectors in its
column space?


Gram-Schmidt Orthogonalization

Given a set of linearly independent vectors a1, ..., an, we will find
q1, ..., qn that form a basis for the space spanned by the a's. After each
step, q1, ..., qk will be an orthonormal basis for the span of a1, ..., ak.




Gram-Schmidt Orthogonalization

We begin by scaling the first vector, a1, to unit length:

    q1 = (1/|a1|) a1

Then for the next vector, a2, we subtract out the component which is
parallel to q1 (or a1):

    q2 = a2 - (a2 . q1) q1

We have to scale q2 to make sure it has unit length:

    q2 = (1/|q2|) q2

Then for the next vector, a3, we subtract out the component parallel to
q1 and the component parallel to q2:

    q3 = a3 - (a3 . q1) q1 - (a3 . q2) q2

    q3 = (1/|q3|) q3




Gram-Schmidt Orthogonalization

For j = 1 ... n
    For i = 1 ... j-1
        r_ij = a_j . q_i
    q_j = a_j - r_1j q_1 - r_2j q_2 - ... - r_(j-1)j q_(j-1)
    r_jj = |q_j|
    q_j = q_j / r_jj
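The loop above translates almost line-for-line into NumPy. A sketch (the function name is ours; it assumes the columns of A are linearly independent):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt on the columns of A.

    Returns Q with orthonormal columns and upper-triangular R with A = Q R.
    """
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = A[:, j] @ Q[:, i]   # r_ij = a_j . q_i
            v -= R[i, j] * Q[:, i]        # subtract the component along q_i
        R[j, j] = np.linalg.norm(v)       # r_jj = |q_j|
        Q[:, j] = v / R[j, j]             # scale to unit length
    return Q, R

# The 2x2 example worked later in the slides:
Q, R = gram_schmidt(np.array([[3.0, 2.0],
                              [4.0, 1.0]]))
# Q is approximately [[0.6, 0.8], [0.8, -0.6]]; R is [[5, 2], [0, 1]]
```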


QR Decomposition

A matrix A can be decomposed into the product of two matrices, QR, such
that Q is a unitary matrix and R is an upper triangular matrix.




Example

a1 = [ 3 ]    a2 = [ 2 ]
     [ 4 ]         [ 1 ]


Example: Solution

A = [ 3  2 ]
    [ 4  1 ]

Q = [ 3/5   4/5 ]
    [ 4/5  -3/5 ]

R = [ 5  2 ]
    [ 0  1 ]
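A quick NumPy check of the worked solution (values copied from the slide): Q R should reproduce A, and Q should be unitary.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [4.0, 1.0]])
Q = np.array([[3/5,  4/5],
              [4/5, -3/5]])
R = np.array([[5.0, 2.0],
              [0.0, 1.0]])

print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T is Q^-1
```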




Inverses and QR

How would we express A^-1 in terms of its QR decomposition?

    A^-1 = R^-1 Q^T

How would we express the pseudoinverse in terms of QR?

    A+ = R^-1 Q^T
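Both formulas can be sketched with NumPy's own QR (our example matrices; np.linalg.solve(R, Q.T) computes R^-1 Q^T without forming R^-1 explicitly):

```python
import numpy as np

# Square case: A^-1 = R^-1 Q^T
A = np.array([[3.0, 2.0],
              [4.0, 1.0]])
Q, R = np.linalg.qr(A)
A_inv = np.linalg.solve(R, Q.T)
print(np.allclose(A_inv, np.linalg.inv(A)))     # True

# Tall case: the same formula with the reduced QR gives A+ = R^-1 Q^T
B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
Qb, Rb = np.linalg.qr(B)                        # Qb is 3x2, Rb is 2x2
B_pinv = np.linalg.solve(Rb, Qb.T)
print(np.allclose(B_pinv, np.linalg.pinv(B)))   # True
```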




Introduction Section 5.3

Least Squares Error

A x = b

has a least squares solution

A w = p

Let's note:

    b = p + (b - p) = A w + (b - p)

Recalling that p = A w is orthogonal to (b - p):
we have expressed our b vector as a least squares solution plus an error
vector. This representation is unique.

4-27-17 AMS 210: Applied Linear Algebra Page 28
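A sketch of this decomposition on made-up data: compute the least squares solution w, split b into p = A w plus an error vector, and check the orthogonality claim.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0, 2.0])

w, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares solution
p = A @ w                                   # projection of b onto Range(A)
e = b - p                                   # error vector

print(np.allclose(A.T @ e, 0))   # True: e is orthogonal to every column of A
print(np.allclose(p + e, b))     # True: b splits into p plus error
```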




Error Space

We can think of p and b - p as a decomposition of b into two parts: one in
the range of A, and one in the space orthogonal to the range of A.

Definition
We call the space orthogonal to the range of A the Error Space of the
matrix A.

    V(A) = { v : A^T v = 0 }

Note: the Error Space is the null space of A^T.


Error Space: Properties

    dim(V(A)) = m - Rank(A^T) = m - Rank(A)

    dim(V(A)) = dim(Null(A^T))

    dim(V(A)) + Rank(A) = m
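The dimension count can be checked numerically (a made-up 3x2 matrix of rank 2, so the error space should be one-dimensional):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # m = 3, rank 2

m = A.shape[0]
rank = np.linalg.matrix_rank(A)
dim_error_space = m - rank   # dim(V(A)) = m - Rank(A)
print(dim_error_space)       # 1
```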




Theorem

Theorem
Let A be an m by n matrix and b be any m-vector. Then b can be written as
a unique sum b = b1 + b2, where b1 is in Range(A) and b2 is in V(A).
b1 and b2 are, of course, orthogonal.
Furthermore, V(A) = Null(A^T), i.e. Null(A^T) is orthogonal to Range(A).

Corollary
The row space of A and the null space of A are orthogonal.
Further, any n-vector can be written as x = x1 + x2,
where x1 is in Row(A) and x2 is in Null(A).

Corollary
If A x = b has a solution, then it has a solution x that is in the row
space of A.
The End

"Now this is not the end. It is not even the beginning of the end.
But it is, perhaps, the end of the beginning." - Winston Churchill
