
Welcome to calculus. I'm Professor Ghrist.

We're about to begin Lecture 3, on Taylor Series.
In this lesson, we'll harness our understanding of e to the x as a series and extend this perspective to a wider world of functions. This will be done through the key tool of Chapter One: that of the Taylor series of a function.
In our last lesson, we began with the definition of e to the x as something like a long polynomial. From that, with a little bit of help from Euler's formula, we observed similar formulae, or expressions, for the basic trigonometric functions sine and cosine.
Expressions of this form, we are going to
call series and we will be working with
them throughout this course.
The question arises: are there other similar expressions for different functions besides these basic three?
The answer is an emphatic yes.
We're going to work under the assumption that every reasonable function can be expressed in the form of a series: as a constant, plus some other constant times x, plus a third constant times x squared, et cetera, for some collection of constants.
Now, of course, strictly speaking this is not true; we need to be careful about what we mean by "every" and "reasonable."
For the moment, let's pretend that this
is true and see where it gets us.
We are first led to the question: how do we figure out, or compute, these coefficients?
The following definition is critical. The Taylor series of a function f at an input zero is the following series: f at zero, plus the derivative at zero times x, plus one over two factorial times the second derivative at zero times x squared, et cetera. That is, the kth coefficient is equal to the kth derivative of f, evaluated at the input 0, and then divided by k factorial.
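In symbols, the definition just stated reads:

    f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \cdots = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}\,x^k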
This is a most important definition. And for all intents and purposes at this point in the course, remarkably, this series returns the value of f at x.
Let's see how this plays out in an
example that we already know.
Starting with our definition of the Taylor series, let's apply it to the function e to the x.
In order to compute this, we're going to
need to know the derivatives of e to the
x.
And we're going to have to evaluate them
at x equals zero.
But since we know that the derivative of
e to the x is e to the x, all of these
derivatives evaluate to one.
Therefore, when we substitute these
values into our formula for the Taylor
Series, we obtain the familiar series for
e to the x.
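In symbols, with every derivative equal to 1 at zero, the series becomes:

    e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots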
This definition, at least, works in this
one context.
Let's continue; let's look at another function for which we know a series expression: that of sine of x.
We recall how the derivatives of sine go: the derivative of sine gives you cosine, the derivative of cosine is minus sine, the derivative of minus sine is minus cosine, and then the derivative of minus cosine is sine.
Once again, evaluating all of these at an input of 0 gives us alternating 0 and nonzero terms, with the nonzero terms having alternating signs. Therefore, we can substitute in these derivatives, obtaining 0, 1, 0, negative 1, and repeating in blocks of four.
When we write out the resulting Taylor series, we see, once again, the familiar form: x, minus x cubed over 3 factorial, plus x to the fifth over 5 factorial, et cetera. This is the expression for sine of x.
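In symbols:

    \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,x^{2k+1}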
It seems clear that this ought to work in
other contexts as well, but let's just
check it.
For example, if we work with cosine of x, then the derivatives of cosine follow the same pattern as before, and evaluating those at zero gives us the same numbers as before. Why do we not get the same series? Because when we evaluate these derivatives, the sequence of numbers is shifted: f at 0, that is, cosine of 0, is 1; the derivative at 0 is 0; then negative 1, 0, 1, and the pattern continues.
When we simplify this expression, we see that all of the odd-degree terms have zero coefficients, leaving us with only the even-degree terms in this series, with the now familiar alternating signs, giving us our familiar expression for cosine of x.
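In symbols:

    \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\,x^{2k}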
And so it seems clear that we ought to be able to apply this method to other functions as well.
Let us do so with another function that is at least reasonably simple. What would a very simple function be? Well, let's take a polynomial, in this case x cubed plus 2x squared minus x plus 5.
We know that we can differentiate this with ease.
If we evaluate this function at zero, we
obtain the first term in the series,
namely five.
If we take the first derivative of this function, what do we get?
Well, 3x squared plus 4x minus 1,
evaluating this at 0 gives us what?
Well, that gives us negative 1.
Therefore, the next term in the series
expansion is negative 1 times x.
Continuing, if we take the second
derivative of this function, we will
obtain 6x plus 4.
Evaluating this at 0 gives us simply 4.
Therefore, the next term in the Taylor
series is 1 over 2 factorial times 4
times x squared.
The third derivative of this function is
very simple.
It is exactly six, independent of where
we evaluate it.
Therefore, the next term in the Taylor series is 1 over 3 factorial times 6 times x cubed.
What happens when we take higher and
higher derivatives?
Well, the derivative of a constant is 0. Thus, all of the higher derivatives vanish, and all further terms in the Taylor series evaluate to 0, so we can drop them without consequence.
Rewriting, using a bit of simplification, gives us the Taylor series for this function as 5 minus x, plus 2 x squared, plus x cubed.
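Collecting the computation in symbols:

    f(x) = x^3 + 2x^2 - x + 5, \qquad f(0) = 5, \quad f'(0) = -1, \quad f''(0) = 4, \quad f'''(0) = 6

    f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \frac{f'''(0)}{3!}\,x^3 = 5 - x + \frac{4}{2!}\,x^2 + \frac{6}{3!}\,x^3 = 5 - x + 2x^2 + x^3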
Let us take a look at our work. Do we believe what we have done? Well, of course: this is exactly the same function that we started off with. We've merely written the terms in ascending order of degree. This seems like a trivial example, but it is actually very crucial.
You must remember that polynomials have themselves as their Taylor series. Polynomials have polynomial Taylor series.
This is going to connect to some very
deep properties concerning polynomial
approximation.
Our strategy, therefore, for working with functions is to think of Taylor expansion not as a function itself, but as something like an operator: something that takes as its input a function and returns as its output something in the form of a long polynomial, or, better, a series.
Why do we want to do this? Well, series, thought of as long polynomials, are very simple to work with, whereas some functions can be obtuse, very difficult, maybe even unknown in a specific form. Taylor expansion helps us to convert such objects into an easier-to-work-with form.
Indeed, some functions really can't be
defined well except as a Taylor series.
Here's an example that I'll bet you've
never seen before, though it's a famous
function.
This is the Bessel function J zero, which is most easily defined in terms of its Taylor series: the sum, k going from zero to infinity, of negative 1 to the k, times x to the 2k, over 2 to the 2k times k factorial quantity squared. That's a bit of a mouthful. We could write that out, and we would get something that doesn't look too bad, though there are a lot of complexities in the coefficients there.
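In symbols, with the first few terms written out:

    J_0(x) = \sum_{k=0}^{\infty} \frac{(-1)^k\,x^{2k}}{2^{2k}\,(k!)^2} = 1 - \frac{x^2}{4} + \frac{x^4}{64} - \frac{x^6}{2304} + \cdots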
How might we understand this function? Well, let's see: the general form of it is reminiscent of the expression for cosine that we have derived, in that it has alternating signs and only even terms. But notice that the denominator of the coefficients is growing very rapidly, much more rapidly than k factorial, or even 2k quantity factorial. We might therefore anticipate that the graph of this function looks something like a cosine wave, but with an amplitude that decreases as a function of x.
Have you ever seen such a function
before?
Maybe you have, if you've ever taken a chain or rope and rotated it about a vertical axis. If it winds up in equilibrium, the shape that you get is related to this Bessel function.
If you drop a pebble in some water in a round tank or an open pond, the rippling effect of the waves is going to be very closely related to such a Bessel function. These are not too unusual, even in everyday occurrences.
In fact, for a chain or a rope that is rotated in equilibrium, we can describe the displacement away from the vertical axis, r, as follows. This r is proportional to the Bessel function J zero, evaluated at 2 omega over the square root of g, times the square root of x. Here omega is the angular frequency, or how fast you're spinning that rope; g is the gravitational constant; and x, most importantly, is the distance not from the top of the rope, but from the bottom of the rope.
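In symbols, the claim is:

    r(x) \propto J_0\!\left( \frac{2\omega}{\sqrt{g}}\,\sqrt{x} \right)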
Now you don't need to remember this
formula.
And you don't need to know how it's
derived.
What we are going to look at is what happens when we substitute these values into our Taylor series for the Bessel function.
One of the things that we can conclude from this Taylor series is that if x is reasonably small, if you're near the bottom of the rope, small enough that we can ignore the quadratic and higher-order terms, then what's left over looks like a linear expression in x: r is proportional to 1 minus omega squared over g, times x.
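That linear expression comes straight from the series: substituting into the first two terms of J zero gives

    J_0\!\left( \frac{2\omega}{\sqrt{g}}\,\sqrt{x} \right) = 1 - \frac{1}{4}\left( \frac{2\omega}{\sqrt{g}}\,\sqrt{x} \right)^{2} + \cdots = 1 - \frac{\omega^2}{g}\,x + \cdots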
That tells you something about the slope at the end of the rope, namely that this free end is swinging with a slope that is proportional not to omega, the angular frequency, but to omega squared. The faster you spin it, the more the slope changes, and we can say exactly what that rate of change is: it's quadratic.
You can try this at home with a piece of
heavy rope or chain.
This lesson has given us a new definition, that of the Taylor series, as well as a new perspective: the idea that expanding a function into a long polynomial, or series, is advantageous.
Next time, we'll consider the question of
how one can effectively compute these
Taylor series.
