
Math 25b - Multivariable Taylor Series

March 31, 2014


Scribe: Tudor G-T

Reducing Multivariable Taylor Series to the 1D Case


Last time we tried to extend the theory of Taylor series to functions of more than one variable. Suppose we have a function $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ with enough derivatives that we can expand $f$ in an open ball $B_r(\vec a)$ around a point $\vec a$. If we look at a point $\vec a + \vec h \in B_r(\vec a)$ with $\vec h = (h_1, \ldots, h_n)$ and ask how the function changes, then we conjectured that the $k$th degree Taylor polynomial should have the form:
$$P_k(\vec h) = \sum_{j_1 + \cdots + j_n \le k} \frac{1}{j_1! \, j_2! \cdots j_n!} \, D_1^{j_1} \cdots D_n^{j_n} f(\vec a) \, h_1^{j_1} \cdots h_n^{j_n}$$
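
As a concrete check of this formula, here is a minimal sketch, assuming SymPy is installed; the function $f = e^{xy}$, the expansion point $\vec a = (0, 1)$, and the degree $k = 2$ are hypothetical choices made only for illustration:

```python
import sympy as sp
from itertools import product

x, y, h1, h2 = sp.symbols('x y h1 h2')
f = sp.exp(x * y)        # hypothetical example function
a = {x: 0, y: 1}         # expansion point ~a = (0, 1)
k = 2                    # degree of the Taylor polynomial

# Build P_k(h) by summing over all multi-indices with j1 + j2 <= k,
# exactly as in the conjectured formula above.
Pk = 0
for j1, j2 in product(range(k + 1), repeat=2):
    if j1 + j2 <= k:
        deriv = sp.diff(f, x, j1, y, j2).subs(a)   # D_1^{j1} D_2^{j2} f(~a)
        Pk += deriv * h1**j1 * h2**j2 / (sp.factorial(j1) * sp.factorial(j2))

print(sp.expand(Pk))     # 1 + h1 + h1*h2 + h1**2/2
```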

We can prove this by reducing to the one-dimensional case: parametrize $\varphi(t) = \vec a + t\vec h$ with $t \in [0, 1]$. Then to evaluate $f(\vec a + \vec h) = f(\varphi(1))$ we want to evaluate the one-dimensional Taylor series in $t$ around $0$ at $t = 1$. The first-order term is then given by:
$$\left. \frac{d}{dt} f(\varphi(t)) \right|_{t=0} = f'(\varphi(0)) \cdot \varphi'(0) = f'(\vec a)\,\vec h = \sum_{i=1}^{n} D_i f(\vec a) \, h_i$$

This agrees with our multivariable Taylor formula for $k = 1$: in each first-order term exactly one of the $j_i$ is $1$ and the others are $0$, so the linear terms match.

Now for degree 2, we need the second derivatives at zero. The term in the Taylor series in $t$ at $t = 1$ is:
$$\frac{1}{2} \left. \big(f(\varphi(t))\big)'' \right|_{t=0} = \frac{1}{2} \left. \big( D_1 f(\varphi(t)) \varphi_1'(t) + \cdots + D_n f(\varphi(t)) \varphi_n'(t) \big)' \right|_{t=0} = \frac{1}{2} \left. \left( \sum_{i=1}^{n} D_i f(\varphi(t)) \, h_i \right)' \right|_{t=0}$$

Here we used that all derivatives of $\varphi(t)$ of order higher than one vanish, since $\varphi$ is linear in $t$.


Therefore:
$$\frac{1}{2} \left. \big(f(\varphi(t))\big)'' \right|_{t=0} = \frac{1}{2} \left. \frac{d}{dt} \right|_{t=0} \sum_{i=1}^{n} D_i f(\varphi(t)) \, h_i = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} D_i D_j f(\vec a) \, h_i h_j$$

This also matches the second-order term of our multivariable Taylor series ansatz: each mixed term $h_i h_j$ with $i \neq j$ appears twice in the double sum, giving total coefficient $1 = \frac{1}{1!\,1!}$, while the diagonal terms $h_i^2$ keep the coefficient $\frac{1}{2} = \frac{1}{2!}$, exactly as the ansatz predicts. By induction, we get all the higher terms in the same way, since $\varphi^{(n)}(t) = 0$ for $n \ge 2$ and only the first term in the chain rule is nonzero. Therefore, if $g(t) = f(\varphi(t))$, we can look at the remainder term
$$R_k(t) = g(t) - g(0) - g'(0)\,t - \tfrac{1}{2} g''(0)\,t^2 - \cdots - \tfrac{1}{k!} g^{(k)}(0)\,t^k.$$
By Taylor's theorem the remainder is of the form
$$R_k(t) = \frac{g^{(k+1)}(\zeta)\, t^{k+1}}{(k+1)!} \quad \text{for some } \zeta \in (0, t)$$

We can expand the remainder as a finite sum over $j_1 + j_2 + \cdots + j_n = k + 1$ of terms of the same form as the Taylor terms (a polynomial of degree $k+1$ in the $h_i$), with the derivatives evaluated at some point in the neighborhood of $\vec a$. Because $d(\vec a + t\vec h, \vec a) = \|t\vec h\| = |t|\,\|\vec h\| \le \|\vec h\|$, and each $h_i$ is bounded by $|h_i| \le \|\vec h\|$, the polynomial is bounded by a multiple of $\|\vec h\|^{k+1}$, so:
$$|R_k(\vec h)| \le M \|\vec h\|^{k+1}$$
where $M \ge 0$ is a bound given by the sup of the $(k+1)$st-order derivatives in the studied neighborhood.
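
As a numerical sanity check of this bound, the following sketch (assuming NumPy, and reusing the hypothetical $f = e^{xy}$, $\vec a = (0, 1)$, $k = 2$ from the sketch above) halves $\vec h$ repeatedly and watches the remainder shrink like $\|\vec h\|^{k+1}$:

```python
import numpy as np

def f(x, y):
    return np.exp(x * y)

def P2(h1, h2):
    # degree-2 Taylor polynomial of exp(x*y) at (0, 1), from the sketch above
    return 1 + h1 + h1 * h2 + h1**2 / 2

a = np.array([0.0, 1.0])
h = np.array([0.1, 0.1])
for _ in range(4):
    R = abs(f(*(a + h)) - P2(*h))
    # the last column, R / ||h||^(k+1) with k = 2, should stay bounded by M
    print(np.linalg.norm(h), R, R / np.linalg.norm(h)**3)
    h = h / 2
```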

Multivariable Second Derivative Test


At a critical point, $f'(\vec a) = 0$, so the first-order term in the Taylor expansion is zero and we need to look at the second-order term. It turns out that this is a problem to which we can apply the theory of self-adjoint operators, because we can collect all the second-order derivatives in a matrix $H$:
$$H = \begin{pmatrix} D_1^2 f(\vec a) & D_1 D_2 f(\vec a) & \cdots & D_1 D_n f(\vec a) \\ D_2 D_1 f(\vec a) & D_2^2 f(\vec a) & \cdots & D_2 D_n f(\vec a) \\ \vdots & \vdots & \ddots & \vdots \\ D_n D_1 f(\vec a) & D_n D_2 f(\vec a) & \cdots & D_n^2 f(\vec a) \end{pmatrix} = \big( D_i D_j f(\vec a) \big)$$

This is known as the Hessian matrix. If $f \in C^2$ then the second-order derivatives are continuous, so $D_i D_j f(\vec a) = D_j D_i f(\vec a)$ and the matrix is symmetric. Now consider a product of the form $h^T H h$, where $h$ is a column vector. This product is exactly proportional to the second-order term in the Taylor series:
 
$$\frac{1}{2} h^T H h = \frac{1}{2} \begin{pmatrix} h_1 & \cdots & h_n \end{pmatrix} H \begin{pmatrix} h_1 \\ \vdots \\ h_n \end{pmatrix} = \frac{1}{2} \sum_{i,j=1}^{n} H_{ij} h_i h_j = \frac{1}{2} \sum_{i,j=1}^{n} D_i D_j f(\vec a) \, h_i h_j$$

The sign of this quadratic form gives us the nature of the critical point. Remember that we can diagonalize any symmetric matrix, i.e. find an orthonormal basis of eigenvectors. In that basis the matrix of $H$ is diagonal with the eigenvalues $\lambda_i$ as its entries, and, writing $\vec h$ in that basis, the above becomes:
$$\frac{1}{2} h^T H h = \frac{1}{2} \sum_{i=1}^{n} \lambda_i h_i^2$$
We see that if all $\lambda_i > 0$ then the point is a local minimum, and if all $\lambda_i < 0$ it is a local maximum. If $H$ has eigenvalues of both signs, the point is a saddle, neither a maximum nor a minimum. If some $\lambda_i = 0$ and the remaining eigenvalues share a sign, the quadratic term is inconclusive and we need to look at higher-order terms.
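
In code, the test amounts to computing the eigenvalues of $H$ and checking their signs. Below is a minimal sketch, assuming NumPy is available; the function $f(x, y) = x^2 - y^2$ and its critical point at the origin are hypothetical examples chosen for illustration (its Hessian there is $\mathrm{diag}(2, -2)$):

```python
import numpy as np

def classify(H, tol=1e-12):
    """Classify a critical point from its (symmetric) Hessian H."""
    eigvals = np.linalg.eigvalsh(H)   # real eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "local minimum"
    if np.all(eigvals < -tol):
        return "local maximum"
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point: neither a maximum nor a minimum"
    return "inconclusive: some eigenvalues vanish, look at higher-order terms"

# Hessian of the hypothetical f(x, y) = x**2 - y**2 at its critical point (0, 0)
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify(H))   # saddle point: neither a maximum nor a minimum
```

The tolerance guards against calling a numerically tiny eigenvalue positive or negative; eigenvalues within `tol` of zero are treated as vanishing, matching the inconclusive case above.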
