
Linear Algebra 202 / Mathematical Methods 202/204/502 - Notes for Week 5

Inner Product Spaces (cont.)

Length and Angle

Result 1: (The Cauchy-Schwarz Inequality)


Let u and v be vectors in an inner product space V . Then

< u, v >² ≤ < u, u > < v, v >.

Proof: The result is obvious when u = 0 or v = 0. Assume now that u ≠ 0 and let
a = < u, u >, b = 2 < u, v > and c = < v, v >. Since u ≠ 0, we have a > 0. Let t be any scalar. Then

< (tu + v), (tu + v) > ≥ 0
⟹ < tu, tu > + < tu, v > + < v, tu > + < v, v > ≥ 0
⟹ t² < u, u > + 2t < u, v > + < v, v > ≥ 0
⟹ at² + bt + c ≥ 0,

for all t. Since a > 0, this means that the quadratic equation at² + bt + c = 0 has either no real roots
or a repeated real root. In either case, b² − 4ac ≤ 0. This, in turn, gives us

(2 < u, v >)² − 4 < u, u > < v, v > ≤ 0
i.e. 4 < u, v >² ≤ 4 < u, u > < v, v >
i.e. < u, v >² ≤ < u, u > < v, v >,

as required.
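As a quick numerical illustration, the following Python sketch (using numpy; the variable names are our own) checks the inequality for a few random vectors under the ordinary dot product on IR3.

```python
import numpy as np

# Check < u, v >^2 <= < u, u > < v, v > for random vectors and the dot product.
rng = np.random.default_rng(0)
for _ in range(5):
    u = rng.normal(size=3)
    v = rng.normal(size=3)
    lhs = np.dot(u, v) ** 2            # < u, v >^2
    rhs = np.dot(u, u) * np.dot(v, v)  # < u, u > < v, v >
    print(lhs <= rhs + 1e-12)          # True (tolerance guards against rounding)
```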

Defn: Let V be an inner product space. The norm (or length or magnitude) of a vector
u ∈ V is defined as

||u|| = < u, u >^(1/2) = √(< u, u >).

Note that we may write ||u||² = < u, u >. The Cauchy-Schwarz Inequality may hence
be rewritten as

< u, v >² ≤ ||u||² ||v||².

Ex 1: Take IR3 with the usual dot product. Then, for any x = [x, y, z] ∈ IR3,

||x|| = √(< x, x >) = √(x.x) = √(x² + y² + z²),

which corresponds to our usual notion of length in IR3. For example, if x = [1, 1, 1], we
have ||x|| = √3.

Ex 2: Consider IR3 with the inner product

< x, y > = 2x1y1 + x2y2 + 3x3y3,  ∀ x = [x1, x2, x3], y = [y1, y2, y3] ∈ IR3.

Then, for all x ∈ IR3, ||x|| = √(< x, x >) = √(2x1² + x2² + 3x3²). For instance, if x = [1, 1, 1],
then ||x|| = √(2(1)² + (1)² + 3(1)²) = √6.

Examples 1 and 2 clearly show that the norm (or length) of a vector very much depends
on what inner product is being used in the governing inner product space. Note that the
result in Example 2 makes just as much sense as that in Example 1. The difference lies
in how we measure ‘length’.
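To see this dependence in code, here is a small Python sketch (our own illustration; the function names norm_dot and norm_weighted are ours) computing the norm of x = [1, 1, 1] under the inner products of Examples 1 and 2.

```python
import numpy as np

def norm_dot(x):
    # Norm induced by the usual dot product (Example 1).
    return np.sqrt(np.dot(x, x))

def norm_weighted(x):
    # Norm induced by the weighted inner product of Example 2:
    # < x, y > = 2*x1*y1 + x2*y2 + 3*x3*y3.
    w = np.array([2.0, 1.0, 3.0])
    return np.sqrt(np.sum(w * x * x))

x = np.array([1.0, 1.0, 1.0])
print(norm_dot(x))       # sqrt(3) ~ 1.7321
print(norm_weighted(x))  # sqrt(6) ~ 2.4495
```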

Ex 3: Consider M22 with the inner product

< A, B > = a11 b11 + a12 b12 + a21 b21 + a22 b22,

for all A = [a11 a12; a21 a22], B = [b11 b12; b21 b22] ∈ M22. Find ||A|| for A = [1 2; 0 1].

Soln: ||A|| = √(< A, A >) = √(a11² + a12² + a21² + a22²). Hence,

||A|| = √((1)² + (2)² + (0)² + (1)²) = √6.

Defn: Let V be an inner product space. Then, for any u, v ∈ V, the distance between u and v
is defined as

d(u, v) = ||u − v||.

Ex 4: Consider IR2 with the usual dot product. Find the distance between u = (1, 1)
and v = (2, 3).

Soln: u = (1, 1), v = (2, 3). Hence,

d(u, v) = ||u − v|| = ||(1, 1) − (2, 3)|| = ||(−1, −2)|| = √((−1)² + (−2)²) = √5.

Ex 5: Let C[0, 1] be the set of all continuous functions on the interval [0, 1]. Find the
distance between f(x) = x² and g(x) = e^x in C[0, 1] if we use the inner product

< f, g > = ∫₀¹ f(x)g(x) dx,  ∀ f, g ∈ C[0, 1].

Soln: d(f, g) = ||f − g||
             = √(< f − g, f − g >)
             = √( ∫₀¹ (f − g)(x) (f − g)(x) dx )
             = √( ∫₀¹ (f(x) − g(x))² dx )
             = √( ∫₀¹ (x² − e^x)² dx )
             = √( ∫₀¹ (x⁴ − 2x²e^x + e^(2x)) dx )
             = √( [x⁵/5 − 2x²e^x + 4xe^x − 4e^x + e^(2x)/2]₀¹ )
             = √( (1/5 − 2e + 4e − 4e + e²/2) − (−4 + 1/2) )
             = √( 37/10 − 2e + e²/2 )
             ≈ 1.399273.

(Where we have used tabular integration for ∫ x²e^x dx = x²e^x − 2xe^x + 2e^x.)
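The integral above is easy to confirm with a computer algebra system; the following sympy sketch (our own check) reproduces d(f, g) ≈ 1.399273.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2
g = sp.exp(x)

# d(f, g)^2 = integral over [0, 1] of (f(x) - g(x))^2 dx
dist_sq = sp.integrate((f - g)**2, (x, 0, 1))
print(sp.simplify(dist_sq))       # equals 37/10 - 2e + e^2/2
print(sp.sqrt(dist_sq).evalf(7))  # ~1.399273
```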

Result 2: Let V be an inner product space. Then, for any u, v ∈ V and any scalar k,
(i) ||u|| ≥ 0.

(ii) ||u|| = 0 iff u = 0.

(iii) ||ku|| = |k| ||u||.

(iv) ||u + v|| ≤ ||u|| + ||v||. (Triangle Inequality)


Proof: (i) and (ii) are obvious, given the definition of inner products and norms. We shall
leave (iii) as an exercise, and it remains to prove (iv). Recall that the Cauchy-Schwarz
Inequality can be written as

< u, v >² ≤ ||u||² ||v||²
⟹ √(< u, v >²) ≤ ||u|| ||v||
i.e. | < u, v > | ≤ ||u|| ||v||.
Now,

||u + v||² = < u + v, u + v >
           = < u, u > + 2 < u, v > + < v, v >
           ≤ ||u||² + 2| < u, v > | + ||v||²
           ≤ ||u||² + 2||u|| ||v|| + ||v||² (by the above)
           = (||u|| + ||v||)².

Taking square roots, we get the result.

Note how the triangle inequality certainly holds for our familiar notion of length of vectors
in IR2 and IR3 :

[Figure: a triangle with sides u, v and u + v, whose side lengths ||u||, ||v|| and ||u + v|| illustrate the triangle inequality.]

We may rearrange the Cauchy-Schwarz Inequality, < u, v >² ≤ ||u||² ||v||², as

( < u, v > / (||u|| ||v||) )² ≤ 1,  i.e.  −1 ≤ < u, v > / (||u|| ||v||) ≤ 1.

In view of this, we have the following definition:

Defn: Let u and v be two nonzero vectors in an inner product space V. Then, the unique angle
θ ∈ [0, π] satisfying

cos θ = < u, v > / (||u|| ||v||)

is called the angle between u and v.

Note that, just like the definition of norm, the angle between two vectors also depends on
the specific choice of inner product being used.

Ex 6: Find the angle between the vectors u = [2, −1, 1] and v = [1, 1, 2] in IR3 .

Unless otherwise stated, we can assume that the inner product we’re using here is simply
the standard dot product in IR3 . Hence,
cos θ = < u, v > / (||u|| ||v||) = u.v / (||u|| ||v||) = 3 / (√6 √6) = 3/6 = 1/2  ⇒  θ = π/3 (or θ = 60°).

Ex 7: Consider M22 with the same inner product as in Example 3, i.e.

< A, B > = a11 b11 + a12 b12 + a21 b21 + a22 b22.

Find the angle between A = [1 1; 0 0] and B = [1 0; 1 0].

Soln: cos θ = < A, B > / (||A|| ||B||)
            = ((1)(1) + (1)(0) + (0)(1) + (0)(0)) / (√((1)² + (1)² + (0)² + (0)²) · √((1)² + (0)² + (1)² + (0)²))
            = 1/2,

i.e. θ = π/3 (or θ = 60°) is the angle between A and B.
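Both of these angle calculations are easy to check numerically. A minimal Python sketch (ours; the matrices of Ex 7 are flattened to length-4 vectors, since their inner product is just the entrywise dot product):

```python
import numpy as np

def angle(u, v):
    # theta from cos(theta) = < u, v > / (||u|| ||v||), using the dot product.
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(c)

# Ex 6: vectors in IR^3.
print(angle(np.array([2.0, -1.0, 1.0]), np.array([1.0, 1.0, 2.0])))  # ~1.0472 = pi/3

# Ex 7: 2x2 matrices, flattened so that < A, B > becomes the dot product.
A = np.array([[1.0, 1.0], [0.0, 0.0]]).flatten()
B = np.array([[1.0, 0.0], [1.0, 0.0]]).flatten()
print(angle(A, B))                                                   # ~1.0472 = pi/3
```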

Defn: Two vectors u and v in an inner product space V are called orthogonal if
< u, v > = 0. In this case, we write u ⊥ v.

Ex 8: Show that A = [1 0; 1 1] and B = [0 2; 0 0] are orthogonal in M22 w.r.t. the inner
product defined in Ex 3 and Ex 7.

Soln: < A, B > = (1)(0) + (0)(2) + (1)(0) + (1)(0) = 0 ⟹ A ⊥ B.

Ex 9: Let P2 have the inner product

< p, q > = ∫₋₁¹ p(x)q(x) dx,  ∀ p, q ∈ P2.

Show that p(x) = x and q(x) = x² are orthogonal.

Soln: < p, q > = ∫₋₁¹ x · x² dx = ∫₋₁¹ x³ dx = 0, since x³ is an odd function. Hence, p ⊥ q.


Result 3: (Generalized Theorem of Pythagoras)

If u and v are orthogonal vectors in an inner product space, then

||u + v||² = ||u||² + ||v||².

[Figure: a right triangle with legs u and v and hypotenuse u + v.]
Proof: ||u + v||² = < u + v, u + v > = < u, u > + 2 < u, v > + < v, v > = ||u||² + ||v||², since < u, v > = 0 when u ⊥ v.
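As a small numerical illustration (ours), the orthogonal matrices A and B of Ex 8, flattened to vectors of their entries, satisfy this identity:

```python
import numpy as np

# A = [1 0; 1 1] and B = [0 2; 0 0] from Ex 8, written out entrywise.
A = np.array([1.0, 0.0, 1.0, 1.0])
B = np.array([0.0, 2.0, 0.0, 0.0])

print(np.dot(A, B))                                    # 0.0, so A is orthogonal to B
lhs = np.linalg.norm(A + B) ** 2                       # ||A + B||^2 = 7
rhs = np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2  # ||A||^2 + ||B||^2 = 3 + 4
print(np.isclose(lhs, rhs))                            # True
```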

Defn: Let V be an inner product space and let W be a subspace of V. If u ∈ V is
orthogonal to every vector in W, we say that u is orthogonal to W.
Furthermore, the set of all vectors in V which are orthogonal to W is called the orthogonal
complement of W, denoted by W⊥.

Ex 10: Note that u = [1, 1] and v = [1, −1] are orthogonal in IR2. Let W = span{u}.
Then W⊥ = span{v}.

[Figure: the perpendicular lines W and W⊥ through the origin in the x1x2-plane.]

Ex 11: Consider the subspace

W = { [x, y, z] ∈ IR3 : x + y + z = 0 }

of IR3. W is simply a plane containing the origin. Note that a normal to the plane is
n = [1, 1, 1]. Hence W⊥ = span{n}.

[Figure: the plane W through the origin in IR3, with its normal line W⊥ = span{n}.]

Suppose that w1 , w2 , . . ., wr are vectors in IRn . Let W = span {w1 , w2 , . . . , wr }. Then,


using the ordinary dot product in IRn , any x ∈ W ⊥ must satisfy
w1 .x = 0, w2 .x = 0, ..., wr .x = 0.
i.e. Ax = 0, where A is an r × n matrix whose rows are simply the vectors wi, i = 1, 2, ..., r, i.e.

A = [w1; w2; ... ; wr].

This must be satisfied for all x ∈ W⊥. Conversely, any x ∈ IRn which satisfies Ax = 0 is
orthogonal to each wi, i = 1, ..., r, and hence (since every w ∈ W is a l.c. of the wi)
orthogonal to any w ∈ W. It follows that W⊥ is simply the nullspace of the matrix A.

Ex 12: Find W⊥ if W = span{[1, 1, 1], [1, 2, 1]}.

Soln: A = [1 1 1; 1 2 1] ~ [1 1 1; 0 1 0]. Hence, the solution of Ax = 0 is

x = [−s, 0, s] = s[−1, 0, 1].

Finally, the orthogonal complement of W is W⊥ = span{[−1, 0, 1]}.
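The same answer can be obtained directly from sympy's nullspace routine (a sketch of ours):

```python
import sympy as sp

# Rows of A are the vectors spanning W.
A = sp.Matrix([[1, 1, 1],
               [1, 2, 1]])

# W-perp is the nullspace of A.
for b in A.nullspace():
    print(b.T)   # Matrix([[-1, 0, 1]])
```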
Result 4: Let W be a subspace of a finite dimensional inner product space V, where V
has dimension n. Then
(i) W⊥ is a subspace of V.
(ii) dim(W⊥) = n − dim(W),
i.e. together, W and W⊥ make up the whole of V.

Orthonormal Bases

We know that in IRn , the standard unit basis vectors are all orthogonal to each other and
they all have length 1. These properties make the standard unit basis vectors easy to deal
with. Our next task is to examine more general bases (not just in the Euclidean Spaces,
either) which display the properties of being mutually orthogonal and having length 1.

Defn: A set of vectors in an inner product space is called orthogonal if each distinct pair
of vectors in the set is orthogonal.
Furthermore, if the norm of each vector in an orthogonal set is 1, we say that it is an
orthonormal set of vectors.

Ex 13: Clearly, the standard basis of IRn is an orthonormal set, in this case referred to
as an orthonormal basis.

Ex 14: Show that S = {v1, v2, v3} is an orthonormal set of vectors in IR3, where

v1 = [0, 1, 0], v2 = [1/√2, 0, 1/√2], v3 = [1/√2, 0, −1/√2].

Soln: < v1, v2 > = v1.v2 = 0, < v1, v3 > = v1.v3 = 0, < v2, v3 > = v2.v3 = 0. This
shows that the vectors form an orthogonal set. Furthermore,

||v1|| = √(< v1, v1 >) = √((0)² + (1)² + (0)²) = 1,
||v2|| = √(< v2, v2 >) = √((1/√2)² + (0)² + (1/√2)²) = 1,
||v3|| = √(< v3, v3 >) = √((1/√2)² + (0)² + (−1/√2)²) = 1

show that all vectors are of length 1. Hence, S = {v1, v2, v3} is an orthonormal set.

Note that if {v1, v2, ..., vn} is an orthogonal set of vectors, then

< vi, vj > = 0 if i ≠ j,  and  < vi, vj > = ||vi||² if i = j.

Furthermore, if the set is orthonormal, then

< vi, vj > = 0 if i ≠ j,  and  < vi, vj > = 1 if i = j.

Result 5: Let S = {v 1 , v 2 , . . . , v n } be an orthogonal set of nonzero vectors in an inner


product space V . Then S is linearly independent.

Proof: Consider the equation c1 v 1 + c2 v 2 + . . . + cn v n = 0. We take the inner product


of each side of this equation with v i , where i ∈ {1, 2, . . . , n}. Then
< c1 v 1 + c2 v 2 + . . . + cn v n , v i > = < 0, v i >
i.e. c1 < v 1 , v i > + c2 < v 2 , v i > + . . . + cn < v n , v i > = 0
i.e. ci < v i , v i > = 0.
Since vi ≠ 0, < vi, vi > ≠ 0, so we must have ci = 0. This conclusion can be reached for
each i ∈ {1, 2, . . . , n} in turn. It follows that S is a l.i. set.

Ex 14: (cont.) We saw that S = {v1, v2, v3} is orthogonal and hence l.i. by Result 5.
Since S consists of three l.i. vectors in IR3, it is a basis of IR3; and since each vector in S
has norm 1, S is an orthonormal basis of IR3.

The beauty of orthonormal bases is that they allow us to easily express any vector as a
l.c. of basis vectors. In other words, it is very easy to find the coordinate representation
of a vector w.r.t. an orthonormal basis.

Result 6: Let S = {v 1 , v 2 , . . . , v n } be an orthonormal basis of an inner product space


V . Then, any u ∈ V can be written as

u = < u, v 1 > v 1 + < u, v 2 > v 2 + . . . + < u, v n > v n .

Proof: Since S is a basis, there exist scalars c1 , c2 , . . . , cn such that


u = c1 v 1 + c2 v 2 + . . . + cn v n
Now, we let i ∈ {1, 2, . . . , n} and take the inner product of both sides of the equation
with v i . We get
< u, v i > = < c1 v 1 + c2 v 2 + . . . + cn v n , v i >
= c1 < v 1 , v i > + c2 < v 2 , v i > + . . . + cn < v n , v i >
= ci < v i , v i >
= ci , since < v i , v i > = 1.
This holds for all i ∈ {1, 2, . . . , n}. Hence result.

Ex 15: Recall that S = {v 1 , v 2 , v 3 } in Example 14 is an orthonormal basis for IR3 . Find


the coordinate vector of u = [1, 1, 1] w.r.t. S.

Soln: As suggested by Result 6, u = c1 v1 + c2 v2 + c3 v3, where

c1 = < u, v1 > = u.v1 = [1, 1, 1].[0, 1, 0] = 1,
c2 = < u, v2 > = u.v2 = [1, 1, 1].[1/√2, 0, 1/√2] = 2/√2 = √2,
c3 = < u, v3 > = u.v3 = [1, 1, 1].[1/√2, 0, −1/√2] = 0.

Hence, uS = [1, √2, 0].
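Result 6 and Ex 15 are easy to verify in code; the following numpy sketch (ours) recovers the coordinates via the inner products < u, vi >.

```python
import numpy as np

# The orthonormal basis S = {v1, v2, v3} of Ex 14.
v1 = np.array([0.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
v3 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)

u = np.array([1.0, 1.0, 1.0])

# By Result 6, the i-th coordinate of u w.r.t. S is simply < u, vi >.
coords = np.array([np.dot(u, v) for v in (v1, v2, v3)])
print(coords)  # [1.  1.41421356  0.] = [1, sqrt(2), 0]
print(np.allclose(u, coords[0] * v1 + coords[1] * v2 + coords[2] * v3))  # True
```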

Having seen the usefulness of an orthonormal basis, how do we go about constructing


one? The next result will get us underway.

Result 7: Let V be an inner product space and let {v 1 , v 2 , . . . , v r } be an orthonormal


set of vectors in V . Let W = span {v 1 , v 2 , . . . , v r }. Then, any vector u ∈ V can be
expressed as
u = uW + uW ⊥ ,
where uW ∈ W and uW ⊥ ∈ W ⊥ , the orthogonal complement of W .

Proof: Let uW = < u, v 1 > v 1 + < u, v 2 > v 2 + . . . + < u, v r > v r . Clearly, uW is just a
l.c. of v 1 , v 2 , . . . , v r and it must hence be an element of W . Now, let uW ⊥ = u − uW .
Then u = uW + uW ⊥ , as required, but it remains to be shown that the vector uW ⊥ is in
W ⊥ , i.e. that uW ⊥ is orthogonal to every vector in W . It suffices to show that uW ⊥ is
orthogonal to every v i , i = 1, 2, . . . , r. Consider
< uW ⊥ , v i > = < u − uW , v i >
= < u, v i > − < uW , v i >
= < u, v i > − < < u, v 1 > v 1 + < u, v 2 > v 2 + . . . + < u, v r > v r , v i >
= < u, v i > − < u, v 1 > < v 1 , v i > − < u, v 2 > < v 2 , v i >
− . . . − < u, v i > < v i , v i > − . . . − < u, v r > < v r , v i >
= < u, v i > − < u, v 1 > (0) − < u, v 2 > (0)
− . . . − < u, v i > (1) − . . . − < u, v r > (0)
= 0, i = 1, 2, . . . , r.
This shows that uW ⊥ is orthogonal to each v i , i = 1, . . . , r, i.e. uW ⊥ is orthogonal to W
itself.

We may think of

uW = < u, v 1 > v 1 + < u, v 2 > v 2 + . . . + < u, v r > v r



as the orthogonal projection of u on W and of uW ⊥ as the component of u which is


orthogonal to W (or the orthogonal projection of u on W⊥):

[Figure: u decomposed as its orthogonal projection uW lying in W plus the component uW⊥ perpendicular to W.]

Result 8: Every (nonzero) finite dimensional inner product space V has an orthonormal
basis.

Proof: We’ll find one! The process for doing so is known as the Gram - Schmidt process.

Gram - Schmidt Process

We start by letting S = {u1 , u2 , . . . , un } be any basis of V . To obtain an orthonormal


basis {v 1 , v 2 , . . . , v n } of V , we proceed as follows:

Step 1: (Let q1 = u1.) Put v1 = u1/||u1|| = q1/||q1||. Then ||v1|| = 1, as required.

Step 2: The next vector we look for will have to be orthogonal to v1, i.e. orthogonal
to W1 = span{v1}. By Result 7, recall that we may write u2 = (u2)W1 + (u2)W1⊥,
where (u2)W1 = < u2, v1 > v1 ∈ W1 and (u2)W1⊥ ∈ W1⊥. Hence, the vector
(u2)W1⊥ = u2 − (u2)W1 = u2 − < u2, v1 > v1 will be orthogonal to W1, i.e. orthogonal
to v1, as required. The only thing that remains to be done is to divide this vector
by its own norm, to get a vector of length 1. We can summarize this step as follows:

Let q2 = u2 − < u2, v1 > v1. Then, put v2 = q2/||q2||.

(Note that q2 = u2 − < u2, v1 > v1 can't be the zero vector, otherwise u1 and u2
would be l.d., contradicting the fact that S is a basis.)

Step 3: In this step, we need to find a vector which is orthogonal to both v1 and v2, i.e.
orthogonal to W2 = span{v1, v2}. Let's take u3, which, according to Result 7, we
may write as u3 = (u3)W2 + (u3)W2⊥, where (u3)W2 = < u3, v1 > v1 + < u3, v2 > v2 ∈ W2
and (u3)W2⊥ ∈ W2⊥. Hence, (u3)W2⊥ = u3 − (u3)W2 = u3 − < u3, v1 > v1 − < u3, v2 > v2
is orthogonal to W2, i.e. to both v1 and v2. We need to divide this vector by its
own norm to get a unit vector once more. Thus, we may summarize this step as:

Let q3 = u3 − < u3, v1 > v1 − < u3, v2 > v2 and then put v3 = q3/||q3||.

(Again, it's the fact that S is l.i. which ensures that q3 = (u3)W2⊥ ≠ 0.)

⋮

Step j: You can see the pattern in these steps now. Rather than going into a detailed
argument once more, we simply indicate the procedure for this general step:

Let qj = uj − < uj, v1 > v1 − < uj, v2 > v2 − ... − < uj, vj−1 > vj−1. Then put vj = qj/||qj||.

Note that there is nothing in this method that prevents us from carrying out all the
required steps until we obtain the orthonormal basis {v1, v2, ..., vn} of V. Hence, the proof
of Result 8 is complete.

Ex 16: Construct an orthonormal basis for IR3 from S = {u1 , u2 , u3 }, where

u1 = [1, 1, 1], u2 = [0, 1, 1], u3 = [0, 0, 1].

Soln: (q1 = u1), v1 = u1/||u1|| = (1/√3)[1, 1, 1] = [1/√3, 1/√3, 1/√3].

q2 = u2 − < u2, v1 > v1 = [0, 1, 1] − (2/√3)[1/√3, 1/√3, 1/√3] = [−2/3, 1/3, 1/3].

Hence, v2 = q2/||q2|| = (1/√(2/3)) [−2/3, 1/3, 1/3] = [−2/√6, 1/√6, 1/√6].

q3 = u3 − < u3, v1 > v1 − < u3, v2 > v2
   = [0, 0, 1] − (1/√3)[1/√3, 1/√3, 1/√3] − (1/√6)[−2/√6, 1/√6, 1/√6]
   = [0, −1/2, 1/2].

Hence, v3 = q3/||q3|| = (1/√(1/2)) [0, −1/2, 1/2] = [0, −1/√2, 1/√2].
2

In summary, an orthonormal basis of IR3 is

{ [1/√3, 1/√3, 1/√3], [−2/√6, 1/√6, 1/√6], [0, −1/√2, 1/√2] }.

Note that we could have made the calculations in Example 16 a little easier if we had
started with u3 instead of u1. The Gram-Schmidt process works whichever order we take
the original basis vectors in (although different orders will generally produce different
orthonormal bases), so we may as well make things as easy as possible for ourselves.
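The procedure translates directly into code. Below is a minimal numpy sketch (the function name gram_schmidt and the use of the ordinary dot product are our own choices); applied to the basis of Example 16 it reproduces the orthonormal basis found above.

```python
import numpy as np

def gram_schmidt(basis):
    """Return an orthonormal basis (w.r.t. the dot product) from a basis of IR^n."""
    orthonormal = []
    for u in basis:
        # q_j = u_j - sum of < u_j, v > v over the v found so far.
        q = u - sum(np.dot(u, v) * v for v in orthonormal)
        orthonormal.append(q / np.linalg.norm(q))
    return orthonormal

# Example 16: u1 = [1, 1, 1], u2 = [0, 1, 1], u3 = [0, 0, 1].
basis = [np.array([1.0, 1.0, 1.0]),
         np.array([0.0, 1.0, 1.0]),
         np.array([0.0, 0.0, 1.0])]
for v in gram_schmidt(basis):
    print(np.round(v, 4))
# [ 0.5774  0.5774  0.5774]  = [ 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)]
# [-0.8165  0.4082  0.4082]  = [-2/sqrt(6), 1/sqrt(6), 1/sqrt(6)]
# [ 0.     -0.7071  0.7071]  = [ 0, -1/sqrt(2), 1/sqrt(2)]
```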

Ex 17: Consider P2 with the inner product

< p, q > = ∫₀¹ p(x)q(x) dx.

Use the Gram-Schmidt process to transform the standard basis S = {p1(x), p2(x), p3(x)} = {1, x, x²} into an orthonormal basis.
Soln: ||p1(x)|| = √(< p1(x), p1(x) >) = √( ∫₀¹ (p1(x))² dx ) = √( ∫₀¹ (1)² dx ) = 1. Hence,

v1(x) = q1(x)/||q1(x)|| = p1(x)/||p1(x)|| = 1/1 = 1.

The next step is to find q2(x) = p2(x) − < p2(x), v1(x) > v1(x). We have

< p2(x), v1(x) > = ∫₀¹ p2(x) v1(x) dx = ∫₀¹ (x)(1) dx = [x²/2]₀¹ = 1/2.

Hence, q2(x) = p2(x) − (1/2) v1(x) = x − 1/2. Further,

||q2(x)|| = √(< q2(x), q2(x) >)
          = √( ∫₀¹ (q2(x))² dx )
          = √( ∫₀¹ (x − 1/2)² dx )
          = √( ∫₀¹ (x² − x + 1/4) dx )
          = √( [x³/3 − x²/2 + x/4]₀¹ )
          = √( 1/3 − 1/2 + 1/4 )
          = √(1/12)
          = 1/(2√3).

Hence,

v2(x) = q2(x)/||q2(x)|| = 2√3 (x − 1/2) = 2√3 x − √3.

Finally, we need q3(x) = p3(x) − < p3(x), v1(x) > v1(x) − < p3(x), v2(x) > v2(x). We have

< p3(x), v1(x) > = ∫₀¹ p3(x) v1(x) dx = ∫₀¹ (x²)(1) dx = [x³/3]₀¹ = 1/3.

< p3(x), v2(x) > = ∫₀¹ p3(x) v2(x) dx
                 = ∫₀¹ (x²)(2√3 x − √3) dx
                 = ∫₀¹ (2√3 x³ − √3 x²) dx
                 = [√3 x⁴/2 − √3 x³/3]₀¹
                 = √3/2 − √3/3 = 1/(2√3).

Hence, q3(x) = x² − 1/3 − (1/(2√3))(2√3 x − √3) = x² − x + 1/6. Further,

||q3(x)|| = √( ∫₀¹ (q3(x))² dx )
          = √( ∫₀¹ (x² − x + 1/6)² dx )
          = √( ∫₀¹ (x⁴ − 2x³ + 4x²/3 − x/3 + 1/36) dx )
          = √( [x⁵/5 − x⁴/2 + 4x³/9 − x²/6 + x/36]₀¹ )
          = √( 1/5 − 1/2 + 4/9 − 1/6 + 1/36 )
          = √(1/180)
          = 1/(6√5).

Hence, we put v3(x) = q3(x)/||q3(x)|| = 6√5 (x² − x + 1/6) = 6√5 x² − 6√5 x + √5.

Finally, the required orthonormal basis is

{ 1, 2√3 x − √3, 6√5 x² − 6√5 x + √5 }.
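As a final check (ours, using sympy), these three polynomials are indeed orthonormal with respect to < p, q > = ∫₀¹ p(x)q(x) dx:

```python
import sympy as sp

x = sp.symbols('x')
v1 = sp.Integer(1)
v2 = 2*sp.sqrt(3)*x - sp.sqrt(3)
v3 = 6*sp.sqrt(5)*x**2 - 6*sp.sqrt(5)*x + sp.sqrt(5)

def ip(p, q):
    # The inner product of Ex 17: integral of p(x) q(x) over [0, 1].
    return sp.integrate(p * q, (x, 0, 1))

basis = [v1, v2, v3]
for i, p in enumerate(basis, start=1):
    for j, q in enumerate(basis, start=1):
        print(i, j, sp.simplify(ip(p, q)))  # 1 when i == j, 0 otherwise
```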
