VECTORS IN n-SPACE
In the first two sections of this chapter we looked at vectors in 2-space and 3-space. You
probably noticed that with the exception of the cross product (which is only defined in 3-space)
all of the formulas that we had for vectors in 3-space were natural extensions of the 2-space
formulas. In this section we’re going to extend things out to a much more general setting. We
won’t be able to visualize things in a geometric setting as we did in the previous two sections but
things will extend out nicely. In fact, that was why we started in 2-space and 3-space. We
wanted to start out in a setting where we could visualize some of what was going on before we
generalized things into a setting where visualization was a very difficult thing to do.
So, let's get things started off with the following definition.

Definition 1 Given a positive integer n, an ordered n-tuple is a sequence of n real numbers denoted by $(a_1, a_2, \ldots, a_n)$. The set of all ordered n-tuples is called n-space and is denoted by $\mathbb{R}^n$.

Next, we need to get the standard arithmetic definitions out of the way and all of these are going to be natural extensions of the arithmetic we saw in $\mathbb{R}^2$ and $\mathbb{R}^3$.
Definition 2 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$.
(a) We say that u and v are equal if,
\[ u_1 = v_1, \quad u_2 = v_2, \quad \ldots, \quad u_n = v_n \]
(b) The sum of u and v is defined to be,
\[ \mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n) \]
(c) The scalar multiple of u by the scalar c is defined to be,
\[ c\mathbf{u} = (cu_1, cu_2, \ldots, cu_n) \]
(d) The difference of u and v is defined to be,
\[ \mathbf{u} - \mathbf{v} = \mathbf{u} + (-1)\mathbf{v} = (u_1 - v_1, u_2 - v_2, \ldots, u_n - v_n) \]
(e) The zero vector in $\mathbb{R}^n$ is $\mathbf{0} = (0, 0, \ldots, 0)$.
The basic properties of arithmetic are still valid in $\mathbb{R}^n$ so let's also give those so that we can say that we've done that.

Theorem 1 Suppose $\mathbf{u} = (u_1, \ldots, u_n)$, $\mathbf{v} = (v_1, \ldots, v_n)$ and $\mathbf{w} = (w_1, \ldots, w_n)$ are vectors in $\mathbb{R}^n$ and c and k are scalars then,
(a) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(b) $\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
(c) $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$
(d) $\mathbf{u} - \mathbf{u} = \mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
(e) $1\mathbf{u} = \mathbf{u}$
(f) $c(k\mathbf{u}) = (ck)\mathbf{u}$
(g) $(c + k)\mathbf{u} = c\mathbf{u} + k\mathbf{u}$
(h) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
We now need to extend the dot product we saw in the previous section to $\mathbb{R}^n$ and we'll be giving it a new name as well.

Definition 3 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$ then the Euclidean inner product is defined to be,
\[ \mathbf{u}\cdot\mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n \]

So, we can see that it's the same notation and is a natural extension of the dot product that we looked at in the previous section, we're just going to call it something different now. In fact, this is probably the more correct name for it and we should instead say that we've renamed this to the dot product when we were working exclusively in $\mathbb{R}^2$ and $\mathbb{R}^3$.
Note that when we add addition, scalar multiplication and the Euclidean inner product to $\mathbb{R}^n$ we will often call this Euclidean n-space.
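To make the definition concrete, here is a small Python sketch of the Euclidean inner product (the vectors here are arbitrary illustrations, not taken from these notes):

```python
# Euclidean inner product in R^n: u . v = u1*v1 + u2*v2 + ... + un*vn
def inner(u, v):
    assert len(u) == len(v)
    return sum(ui * vi for ui, vi in zip(u, v))

u = [1.0, 2.0, -1.0, 3.0]
v = [0.0, 4.0, -2.0, 1.0]
print(inner(u, v))  # 1*0 + 2*4 + (-1)*(-2) + 3*1 = 13.0
```

Note that nothing in the code depends on the dimension; the same function works in $\mathbb{R}^2$, $\mathbb{R}^3$ or $\mathbb{R}^{100}$.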
We also have natural extensions of the properties of the dot product that we saw in the previous section.

Theorem 2 Suppose u, v, and w are vectors in $\mathbb{R}^n$ and c is a scalar then,
(a) $\mathbf{u}\cdot\mathbf{v} = \mathbf{v}\cdot\mathbf{u}$
(b) $(\mathbf{u} + \mathbf{v})\cdot\mathbf{w} = \mathbf{u}\cdot\mathbf{w} + \mathbf{v}\cdot\mathbf{w}$
(c) $c(\mathbf{u}\cdot\mathbf{v}) = (c\mathbf{u})\cdot\mathbf{v} = \mathbf{u}\cdot(c\mathbf{v})$
(d) $\mathbf{u}\cdot\mathbf{u} \ge 0$
(e) $\mathbf{u}\cdot\mathbf{u} = 0$ if and only if $\mathbf{u} = \mathbf{0}$.
The final extension to the work of the previous sections that we need to do is to give the definition of the norm for vectors in $\mathbb{R}^n$ and we'll use this to define distance in $\mathbb{R}^n$.

Definition 4 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ is a vector in $\mathbb{R}^n$ then the Euclidean norm is,
\[ \|\mathbf{u}\| = (\mathbf{u}\cdot\mathbf{u})^{1/2} = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} \]
Definition 5 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two points in $\mathbb{R}^n$ then the Euclidean distance between them is defined to be,
\[ d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2} \]
Notice in this definition that we called u and v points and then used them as vectors in the norm.
This comes back to the idea that an n-tuple can be thought of as both a point and a vector and so
will often be used interchangeably where needed.
Example Compute the norm of the given vectors and the distance between the given points.

Solution
There really isn't much to do here other than use the appropriate definition.
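The norm and distance definitions translate directly into code. Here is a small Python sketch (the example vectors are chosen arbitrarily for illustration):

```python
import math

# Euclidean norm ||u|| = sqrt(u1^2 + ... + un^2)
def norm(u):
    return math.sqrt(sum(ui * ui for ui in u))

# Euclidean distance d(u, v) = ||u - v||
def distance(u, v):
    return norm([ui - vi for ui, vi in zip(u, v)])

u = [3.0, 4.0]
v = [0.0, 0.0]
print(norm(u))        # sqrt(9 + 16) = 5.0
print(distance(u, v)) # 5.0
```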
Definition of unit vector: Just as we saw in the section on vectors, if we have $\|\mathbf{u}\| = 1$ then we will call u a unit vector. And so the vector u from the previous set of examples is not a unit vector.
Now that we've gotten both the inner product and the norm taken care of we can give the following theorem.

Theorem 3 Suppose u and v are two vectors in $\mathbb{R}^n$ and $\theta$ is the angle between them. Then,
\[ \mathbf{u}\cdot\mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta \]

Of course since we are in $\mathbb{R}^n$ it is hard to visualize just what the angle between the two vectors is, but provided we can find it we can use this theorem. Also note that this was the definition of the dot product that we gave in the previous section and like that section this theorem is most useful for actually determining the angle between two vectors.
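Rearranging the formula in Theorem 3 gives $\theta = \arccos\bigl(\mathbf{u}\cdot\mathbf{v} / (\|\mathbf{u}\|\,\|\mathbf{v}\|)\bigr)$, which is how the angle is determined in practice. A quick Python sketch with arbitrary vectors:

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

# theta = arccos( (u . v) / (||u|| ||v||) )
u = [1.0, 0.0, 0.0]
v = [1.0, 1.0, 0.0]
theta = math.acos(inner(u, v) / (norm(u) * norm(v)))
print(math.degrees(theta))  # 45.0 (up to floating point rounding)
```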
The next theorem is very important and has many uses in the study of vectors. In fact we'll need it in the proof of at least one theorem in these notes. The following theorem is called the Cauchy-Schwarz Inequality.

Theorem 4 Suppose u and v are two vectors in $\mathbb{R}^n$ then,
\[ |\mathbf{u}\cdot\mathbf{v}| \le \|\mathbf{u}\|\,\|\mathbf{v}\| \]

Proof: This proof is surprisingly simple. We'll start with the result of the previous theorem and take the absolute value of both sides.
\[ |\mathbf{u}\cdot\mathbf{v}| = \|\mathbf{u}\|\,\|\mathbf{v}\|\,|\cos\theta| \]
However, we know that $|\cos\theta| \le 1$ and so we get our result by using this fact.
\[ |\mathbf{u}\cdot\mathbf{v}| = \|\mathbf{u}\|\,\|\mathbf{v}\|\,|\cos\theta| \le \|\mathbf{u}\|\,\|\mathbf{v}\| \]
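As a quick numerical sanity check of the Cauchy-Schwarz inequality (the vectors here are arbitrary, not from the notes):

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

# |u . v| <= ||u|| ||v||
u = [2.0, -1.0, 3.0]
v = [1.0, 4.0, -2.0]
lhs = abs(inner(u, v))   # |2 - 4 - 6| = 8.0
rhs = norm(u) * norm(v)  # sqrt(14) * sqrt(21)
print(lhs <= rhs)        # True
```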
Theorem 5 Suppose u and v are two vectors in $\mathbb{R}^n$ and that c is a scalar then,
(a) $\|\mathbf{u}\| \ge 0$
(b) $\|\mathbf{u}\| = 0$ if and only if $\mathbf{u} = \mathbf{0}$
(c) $\|c\mathbf{u}\| = |c|\,\|\mathbf{u}\|$
(d) $\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$ (Triangle Inequality)

The proof of the first two parts is a direct consequence of the definition of the Euclidean norm and so won't be given here.
Proof:
(c) We'll just run through the definition of the norm on this one.
\[ \|c\mathbf{u}\| = \sqrt{(cu_1)^2 + (cu_2)^2 + \cdots + (cu_n)^2} = \sqrt{c^2\left(u_1^2 + u_2^2 + \cdots + u_n^2\right)} = |c|\sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} = |c|\,\|\mathbf{u}\| \]

(d) The proof of this one isn't too bad once you see the steps you need to take. We'll start with the following.
\[ \|\mathbf{u} + \mathbf{v}\|^2 = (\mathbf{u} + \mathbf{v})\cdot(\mathbf{u} + \mathbf{v}) \]
So, we're starting with the definition of the norm and squaring both sides to get rid of the square root on the right side. Next, we'll use the properties of the Euclidean inner product to simplify this.
\[ \|\mathbf{u} + \mathbf{v}\|^2 = \mathbf{u}\cdot\mathbf{u} + 2(\mathbf{u}\cdot\mathbf{v}) + \mathbf{v}\cdot\mathbf{v} \]

Now, notice that we can convert the first and third terms into norms so we'll do that. Also, $\mathbf{u}\cdot\mathbf{v}$ is a number and so we know that if we take the absolute value of this we'll have $\mathbf{u}\cdot\mathbf{v} \le |\mathbf{u}\cdot\mathbf{v}|$. Using this and converting the first and third terms to norms gives,
\[ \|\mathbf{u} + \mathbf{v}\|^2 \le \|\mathbf{u}\|^2 + 2|\mathbf{u}\cdot\mathbf{v}| + \|\mathbf{v}\|^2 \]

We can now use the Cauchy-Schwarz inequality on the second term to get,
\[ \|\mathbf{u} + \mathbf{v}\|^2 \le \|\mathbf{u}\|^2 + 2\|\mathbf{u}\|\,\|\mathbf{v}\| + \|\mathbf{v}\|^2 \]

We're almost done. Let's notice that the right side can now be rewritten as a perfect square,
\[ \|\mathbf{u} + \mathbf{v}\|^2 \le \left(\|\mathbf{u}\| + \|\mathbf{v}\|\right)^2 \]
and taking the square root of both sides gives the triangle inequality.
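Here is a quick numerical check of the triangle inequality with arbitrary vectors:

```python
import math

def norm(u):
    return math.sqrt(sum(x * x for x in u))

# ||u + v|| <= ||u|| + ||v||
u = [1.0, -2.0, 2.0]
v = [3.0, 0.0, -4.0]
s = [a + b for a, b in zip(u, v)]
print(norm(s))                       # sqrt(24), approximately 4.899
print(norm(u) + norm(v))             # 3.0 + 5.0 = 8.0
print(norm(s) <= norm(u) + norm(v))  # True
```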
Example Verify the Cauchy-Schwarz inequality and the triangle inequality for a given pair of vectors.

Solution
Let's first verify the Cauchy-Schwarz inequality. To do this we need to compute the inner product and the norms of the two vectors and then check the inequality directly.
Theorem 6 Suppose u, v, and w are vectors in $\mathbb{R}^n$ then,
(a) $d(\mathbf{u}, \mathbf{v}) \ge 0$
(b) $d(\mathbf{u}, \mathbf{v}) = 0$ if and only if $\mathbf{u} = \mathbf{v}$
(c) $d(\mathbf{u}, \mathbf{v}) = d(\mathbf{v}, \mathbf{u})$

The proof of the first two parts is a direct consequence of the previous theorem and the proof of the third part is a direct consequence of the definition of distance and won't be proven here.
Now, we have one final topic that needs to be generalized into Euclidean n-space.

Definition 6 Suppose u and v are two vectors in $\mathbb{R}^n$. We say that they are orthogonal if $\mathbf{u}\cdot\mathbf{v} = 0$.

So, this definition of orthogonality is identical to the definition that we saw when we were dealing with $\mathbb{R}^2$ and $\mathbb{R}^3$.
Theorem 7 (Pythagorean Theorem) Suppose u and v are two orthogonal vectors in $\mathbb{R}^n$ then,
\[ \|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 \]

Proof: The proof of this theorem is fairly simple. From the proof of the triangle inequality for norms we have the following statement.
\[ \|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + 2(\mathbf{u}\cdot\mathbf{v}) + \|\mathbf{v}\|^2 \]
However, because u and v are orthogonal we have $\mathbf{u}\cdot\mathbf{v} = 0$ and so we get the result.
Solution
Showing that these two vectors are orthogonal is easy enough: their inner product is zero. So, the Pythagorean Theorem should hold, but let's verify that by computing the sum and comparing the squared norms.
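A quick numerical check of the Pythagorean Theorem, using a pair of orthogonal vectors chosen purely for illustration:

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# squared norm ||u||^2 = u . u
def norm_sq(u):
    return inner(u, u)

u = [1.0, 2.0, 0.0]
v = [-2.0, 1.0, 3.0]
print(inner(u, v))  # 0.0, so u and v are orthogonal
print(norm_sq([a + b for a, b in zip(u, v)]))  # ||u + v||^2 = 19.0
print(norm_sq(u) + norm_sq(v))                 # 5.0 + 14.0 = 19.0
```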
We've got one more theorem that gives a relationship between the Euclidean inner product and the norm. This may seem like a silly theorem, but we'll actually need this theorem towards the end of the next chapter.

Theorem 8 If u and v are two vectors in $\mathbb{R}^n$ then,
\[ \mathbf{u}\cdot\mathbf{v} = \frac{1}{4}\|\mathbf{u} + \mathbf{v}\|^2 - \frac{1}{4}\|\mathbf{u} - \mathbf{v}\|^2 \]

Proof: Start with the following two expansions.
\[ \|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + 2(\mathbf{u}\cdot\mathbf{v}) + \|\mathbf{v}\|^2 \qquad \|\mathbf{u} - \mathbf{v}\|^2 = \|\mathbf{u}\|^2 - 2(\mathbf{u}\cdot\mathbf{v}) + \|\mathbf{v}\|^2 \]

The first of these we've seen a couple of times already and the second is derived in the same manner that the first was and so you should verify that formula.

Now subtract the second from the first to get,
\[ \|\mathbf{u} + \mathbf{v}\|^2 - \|\mathbf{u} - \mathbf{v}\|^2 = 4(\mathbf{u}\cdot\mathbf{v}) \]
Finally, divide by 4 and we arrive at the result.
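A quick numerical check of Theorem 8 with arbitrary vectors:

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_sq(u):
    return inner(u, u)

# u . v = (1/4)||u + v||^2 - (1/4)||u - v||^2
u = [1.0, 3.0, -2.0]
v = [2.0, -1.0, 4.0]
s = [a + b for a, b in zip(u, v)]  # u + v
d = [a - b for a, b in zip(u, v)]  # u - v
print(inner(u, v))                            # 2 - 3 - 8 = -9.0
print(0.25 * norm_sq(s) - 0.25 * norm_sq(d))  # -9.0 as well
```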
In the previous section we saw the three standard basis vectors for $\mathbb{R}^3$ are i, j, and k. This idea can also be extended out to $\mathbb{R}^n$. In $\mathbb{R}^n$ we will define the standard basis vectors or standard unit vectors to be,
\[ \mathbf{e}_1 = (1, 0, 0, \ldots, 0) \quad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \quad \cdots \quad \mathbf{e}_n = (0, 0, 0, \ldots, 1) \]
Any vector $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ in $\mathbb{R}^n$ can then be written as,
\[ \mathbf{u} = u_1\mathbf{e}_1 + u_2\mathbf{e}_2 + \cdots + u_n\mathbf{e}_n \]
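A small Python sketch of the standard basis vectors and of writing a vector as a combination of them (n = 4 is chosen arbitrarily for the illustration):

```python
# e_i has a 1 in position i and 0 everywhere else
def e(i, n):
    return [1.0 if j == i else 0.0 for j in range(n)]

n = 4
u = [5.0, -1.0, 2.0, 7.0]

# build u1*e1 + u2*e2 + ... + un*en component by component
combo = [0.0] * n
for i in range(n):
    combo = [c + u[i] * eij for c, eij in zip(combo, e(i, n))]

print(combo)  # [5.0, -1.0, 2.0, 7.0], the same as u
```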
Now that we've gotten the general vector in Euclidean n-space taken care of we need to go back and remember some of the work that we did in the first chapter. It is often convenient to write the vector $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ as either a row matrix or a column matrix as follows,
\[ \mathbf{u} = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix} \qquad \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \]
In this notation we can use matrix addition and scalar multiplication for matrices to show that we'll get the same results as if we'd done vector addition and scalar multiplication on the original vectors.
So, why do we do this? Well, let's use the column matrix notation for the two vectors u and v. Then the Euclidean inner product can be thought of as a matrix multiplication using,
\[ \mathbf{u}\cdot\mathbf{v} = \mathbf{v}^T\mathbf{u} \]
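Treating u and v as column matrices, the identity $\mathbf{u}\cdot\mathbf{v} = \mathbf{v}^T\mathbf{u}$ can be checked with a small Python sketch (the helper functions and vectors below are illustrative):

```python
# multiply an (m x k) matrix by a (k x p) matrix
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

u = [[1.0], [2.0], [3.0]]   # u as a 3 x 1 column matrix
v = [[4.0], [-1.0], [2.0]]  # v as a 3 x 1 column matrix

# v^T u is a 1 x 1 matrix whose single entry is the inner product
print(matmul(transpose(v), u))  # [[8.0]] = 4 - 2 + 6
```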
The natural question is just why is this important? Well, let's consider the following scenario. Suppose that u and v are two vectors in $\mathbb{R}^n$ and that A is an $n \times n$ matrix. Now consider the following inner product and write it as a matrix multiplication.
\[ A\mathbf{u}\cdot\mathbf{v} = \mathbf{v}^T\left(A\mathbf{u}\right) \]

Now, rearrange the order of the multiplication and recall one of the properties of transposes.
\[ \mathbf{v}^T\left(A\mathbf{u}\right) = \left(\mathbf{v}^T A\right)\mathbf{u} = \left(A^T\mathbf{v}\right)^T\mathbf{u} \]
Don't forget that we switch the order on the matrices when we move the transpose out of the parenthesis. Finally, this last matrix product can be rewritten as an inner product.
\[ A\mathbf{u}\cdot\mathbf{v} = \mathbf{u}\cdot A^T\mathbf{v} \]

This tells us that if we've got an inner product and the first vector (or column matrix) is multiplied by a matrix then we can move that matrix to the second vector (or column matrix) if we simply take its transpose. A similar argument can also show that,
\[ \mathbf{u}\cdot A\mathbf{v} = A^T\mathbf{u}\cdot\mathbf{v} \]
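A quick numerical check of the identity $A\mathbf{u}\cdot\mathbf{v} = \mathbf{u}\cdot A^T\mathbf{v}$, using an arbitrary 3 x 3 matrix and arbitrary vectors:

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# matrix-vector product A x
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 2.0, 0.0],
     [0.0, -1.0, 3.0],
     [4.0, 1.0, 1.0]]
u = [1.0, 2.0, -1.0]
v = [3.0, 0.0, 2.0]

print(inner(matvec(A, u), v))             # Au . v = 25.0
print(inner(u, matvec(transpose(A), v)))  # u . A^T v = 25.0 as well
```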
So, that brings us to the end of this topic. Next we will look at linear transformations and vector spaces.