
MSO-203b

T. Muthukumar
tmk@iitk.ac.in
November 16, 2012
Contents

Notations

1 Ordinary Differential Equations
  1.1 Simple Harmonic Motion
  1.2 Second Order ODE: A Summary
      1.2.1 Wronskian
      1.2.2 Inhomogeneous (non-homogeneous) Equation

2 Sturm-Liouville Problems
  2.1 Eigen Value Problems
  2.2 Sturm-Liouville Problems
  2.3 Singular Sturm-Liouville Problem
      2.3.1 EVP of Legendre Operator
      2.3.2 EVP of Bessel's Operator
  2.4 Orthogonality of Eigen Functions
  2.5 Eigen Function Expansion

3 Fourier Series
  3.1 Periodic Functions
  3.2 Fourier Coefficients and Fourier Series
  3.3 Piecewise Smooth Functions
  3.4 Complex Fourier Coefficients
  3.5 Orthogonality
      3.5.1 Odd and Even functions
  3.6 Fourier Sine-Cosine Series
  3.7 Fourier Integral and Fourier Transform

4 PDE: An Introduction
  4.1 Definition and Well-Posedness of PDE
  4.2 Three Basic PDE: History
  4.3 Continuity Equation

5 First Order PDE
  5.1 Family Of Curves
  5.2 Linear Transport Equation (2 Variable)
      5.2.1 Derivation
      5.2.2 Solving
  5.3 Method of Characteristics: Two variable
  5.4 Cauchy Problem

6 Second Order PDE
  6.1 Classification of Second Order PDE
      6.1.1 Second order PDE in Two Variables
      6.1.2 Classification of Semi-Linear PDE in Two Variables
      6.1.3 Invariance of Discriminant
      6.1.4 Standard or Canonical Forms
      6.1.5 Reduction to Standard Form
      6.1.6 Classification in n-variable
  6.2 The Laplacian
      6.2.1 Laplacian in Different Coordinate Systems
      6.2.2 Harmonic Functions
  6.3 Separation Of Variable
      6.3.1 Dirichlet Boundary Condition
      6.3.2 Eigenvalue Problem of Laplacian
      6.3.3 Neumann Boundary Condition

7 Heat Equation: One Space Variable
  7.1 On a Rod
  7.2 On a Circular Wire
  7.3 Duhamel's Principle: Inhomogeneous Equation

8 Wave Equation: One Space Variable
  8.1 The Vibrating String: Derivation
  8.2 Travelling Waves
      8.2.1 d'Alembert's Formula
      8.2.2 d'Alembert's Formula: Aliter
  8.3 Standing Waves: Separation of Variable
  8.4 Duhamel's Principle

Appendices
  A Divergence Theorem
  B Normal Vector of a Surface
Chapter 1
Ordinary Differential Equations
1.1 Simple Harmonic Motion
Consider the system where an object of mass m is attached to one end of a horizontal spring whose other end is fixed to a wall. Assume the system is on a frictionless surface. This is the most basic oscillatory system, called the simple harmonic oscillator. We choose the axis on the surface such that the origin coincides with the centre of mass of the object when the system is at rest. When the object is displaced from its initial position (the origin) and released, the system undergoes oscillation, called simple harmonic motion.

Let y(t) denote the displacement of the object at time t. We also assume that the spring is ideal (in the sense of Hooke's law), i.e., the force F exerted by the spring on the object is given by F = −ky(t), where k > 0 is the spring constant. By Newton's second law, F = ma, we get

−ky(t) = my''(t).

Setting c = √(k/m), we have the ODE describing the system,

y''(t) + c²y(t) = 0.   (1.1.1)

The general solution to the above ODE is given by y(t) = α cos ct + β sin ct, where α and β are constants. In fact, any solution of (1.1.1) is of the above form, as seen from the exercise below.

Exercise 1. Let y : R → R be a twice continuously differentiable function which is a solution of (1.1.1). Show that there exist constants α and β such that y(t) = α cos ct + β sin ct.
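The claim in Exercise 1 can also be checked numerically: integrate (1.1.1) as a first order system and compare with α cos ct + β sin ct, where α = y(0) and β = y'(0)/c. The following is a minimal sketch using scipy; the values of m, k and the initial data are arbitrary choices made only for this illustration.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k = 2.0, 8.0                    # arbitrary mass and spring constant
    c = np.sqrt(k / m)
    y0, v0 = 1.0, -0.5                 # arbitrary initial displacement and velocity

    # y'' + c^2 y = 0 written as the first order system z = (y, y')
    sol = solve_ivp(lambda t, z: [z[1], -c**2 * z[0]],
                    (0.0, 10.0), [y0, v0], rtol=1e-9, atol=1e-12,
                    dense_output=True)

    t = np.linspace(0.0, 10.0, 200)
    closed_form = y0 * np.cos(c * t) + (v0 / c) * np.sin(c * t)
    print(np.max(np.abs(sol.sol(t)[0] - closed_form)))   # difference is tiny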
Exercise 2. Recall from your ODE course that the general solution of the ODE

y''(t) − c²y(t) = 0,

for c ≠ 0, is y(t) = αe^{ct} + βe^{−ct}.

Exercise 3. Recall from your ODE course that the Cauchy-Euler equation

Ax²y''(x) + Bxy'(x) + Cy(x) = 0

can be reduced to an ODE with constant coefficients,

Ay''(s) + (B − A)y'(s) + Cy(s) = 0,

for the new variable s = ln x.
1.2 Second Order ODE: A Summary
We have already seen/studied various methods of solving a second order ODE. Let us understand them from the linear-algebraic viewpoint.
Consider a second order linear homogeneous ODE

y''(x) + P(x)y'(x) + Q(x)y(x) = 0   (1.2.1)

on an interval I ⊂ R such that both P and Q are continuous on I. If we set

L = d²/dx² + P(x) d/dx + Q(x),

then the ODE is simply written as Ly = 0. Now, let

C(I) = {y : I → R | y is continuous}

denote the set of all real valued continuous functions on I. Note that C(I) is a vector space over R. Also, let C^k(I) be the set of all k-times continuously differentiable functions; C^k(I) ⊂ C(I) is a subspace of C(I).

Remark 1.2.1. How big is C(I)? Note that the functions 1, x, x², . . . , x^k, . . . all belong to C(I). Also note that they are all linearly independent. Every finite linear combination of these functions is a polynomial of finite degree, and the set of all polynomials of degree at most k is a (k+1)-dimensional subspace of C(I). Thus, it is clear that C(I) is not finite dimensional.
In our context, when we say the given second order ODE (1.2.1) is linear, we mean the corresponding map L : C²(I) ⊂ C(I) → C(I) is a linear map, i.e., L(αy₁ + βy₂) = αLy₁ + βLy₂ for all α, β ∈ R and y₁, y₂ ∈ C²(I).

When we say we wish to solve (1.2.1), we basically want to find the null-space (kernel) of L, denoted as S, i.e.,

S = {y ∈ C²(I) ⊂ C(I) | Ly = 0}.

Thus, solving a second order linear ODE is equivalent to finding the solution space S corresponding to L.

Since y ≡ 0 is always a (trivial) solution to (1.2.1), we have 0 ∈ S. Recall (or prove as an exercise) that S, the null-space of L, is a subspace of C²(I).

By doing this exercise, you would have actually proved the following theorem:

Theorem 1.2.2 (Principle of Superposition). If y₁ and y₂ are solutions of (1.2.1), then any linear combination of y₁ and y₂, i.e.,

αy₁ + βy₂,  α, β ∈ R,

is also a solution of (1.2.1).
Now, the question of interest is how big can S be? For instance, if S is zero-dimensional, then (1.2.1) has no non-trivial solution. We will show that S is at most two dimensional. To do so, we will use the uniqueness theorem.

Solving an ODE depends on the initial conditions prescribed. Since we are dealing with a second order ODE, we need two initial conditions. One can convince oneself of this by arguing, heuristically, that to find y we need to integrate twice, which will yield two integration constants; thus, we need two pieces of information to find the two constants. In the language of time, we prescribe the initial position at x₀, y(x₀) = y₀, and the initial velocity at x₀, y'(x₀) = y'₀.

Theorem 1.2.3 (Existence/Uniqueness). Let P and Q be continuous on I (assume I is a closed interval) and let x₀ ∈ I. Then the ODE (1.2.1) with initial conditions

y(x₀) = y₀  and  y'(x₀) = y'₀

has a unique solution.
Using the above theorem, we shall make a statement on the dimensionality
of S.
Theorem 1.2.4 (Dimensionality Theorem). Let L be the second order linear differential operator L : C²(I) → C(I) of the form

L = d²/dx² + P(x) d/dx + Q(x).

Then its solution space S is of dimension two.

Proof. For any fixed point x₀ ∈ I, consider the linear transformation T : S → R² defined as

T(y) := (y(x₀), y'(x₀)).

This definition of T makes sense under the hypothesis of the uniqueness theorem. By the uniqueness theorem, if T(y) = (0, 0) then y ≡ 0; therefore T is one-to-one (injective). Also, by the existence part of the theorem, T is onto (surjective). Thus, the dimension of S is the same as the dimension of the image of T, which is R². Hence S is two dimensional.

All the arguments above can be generalised to any k-th order linear homogeneous ODE, and one can conclude that the solution space is of dimension k. Thus, for a second order linear homogeneous ODE, any two linearly independent solutions of (1.2.1) will span the space of solutions S. In dimension two, checking whether two solutions are linearly independent is easy, but this check gets tougher in higher dimensions. Thus, we introduce the notion of the Wronskian, which works in all dimensions.
1.2.1 Wronskian
If we know the general solution of (1.2.1), i.e.,

y(x) = αy₁(x) + βy₂(x),

how does one find the constants α and β such that y satisfies the initial conditions y(x₀) = y₀ and y'(x₀) = y'₀, for a given x₀ ∈ I? Thus, we want to find α and β such that

αy₁(x₀) + βy₂(x₀) = y₀,
αy'₁(x₀) + βy'₂(x₀) = y'₀.
This is equivalent to the matrix equation

( y₁(x₀)   y₂(x₀)  ) ( α )   ( y₀  )
( y'₁(x₀)  y'₂(x₀) ) ( β ) = ( y'₀ ).

The above matrix equation has a unique solution iff the 2 × 2 matrix has non-zero determinant (i.e., is invertible). We define the determinant of this matrix to be the Wronskian:

W(y₁, y₂)(x₀) = | y₁(x₀)   y₂(x₀)  |
                | y'₁(x₀)  y'₂(x₀) |  = y₁(x₀)y'₂(x₀) − y₂(x₀)y'₁(x₀).

Theorem 1.2.5. Two solutions y₁ and y₂ of (1.2.1) are linearly independent (hence span the solution space S) if and only if there is a point x₀ ∈ I at which W(y₁, y₂)(x₀) ≠ 0.
A word of caution: the "if and only if" above is true only for solutions of the ODE.

Theorem 1.2.6. Let f₁ and f₂ be two differentiable functions on I. If there is a point x₀ ∈ I such that W(f₁, f₂)(x₀) ≠ 0, then f₁ and f₂ are linearly independent in C(I).

The converse is not true unless they are solutions of an ODE. You might have already seen such examples during the lectures on the Wronskian (ODE course).
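Theorem 1.2.5 and the caution above are easy to experiment with symbolically. The sketch below (our own illustration, using sympy) computes the Wronskian of the two solutions cos cx and sin cx of y'' + c²y = 0; the comment records the standard counterexample for the converse.

    import sympy as sp

    x, c = sp.symbols('x c', positive=True)
    y1, y2 = sp.cos(c * x), sp.sin(c * x)     # two solutions of y'' + c^2 y = 0

    # Wronskian W(y1, y2)(x) = y1*y2' - y2*y1'
    W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
    print(W)   # prints c, which is non-zero, so y1 and y2 are linearly independent

    # By contrast, f1 = x**2 and f2 = x*|x| are linearly independent on (-1, 1)
    # although their Wronskian vanishes identically; they are not both solutions
    # of a single ODE of the form (1.2.1) on that interval.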
Exercise 4. The second order ODE (1.2.1) can be rewritten in its normal form, v'' + q(x)v = 0, by a simple change of variable.
1.2.2 Inhomogeneous (non-homogeneous) Equation
Recall that (1.2.1) is a homogeneous equation. An inhomogeneous equation is of the form Ly = R(x) for a given continuous function R on I which is not identically zero. Thus, solving an inhomogeneous equation is equivalent to identifying the subset of C²(I) which is mapped under L to R ∈ C(I). Note that if y_p is a solution of Ly = R, then the set y_p + S is the set of all solutions of the inhomogeneous equation, because L(y_p + S) = L(y_p) + L(S) = R(x), by the linearity of L (here S is a set, and applying L to S means applying L to each element of S). Conversely, if y is any solution of the inhomogeneous equation, then y − y_p is a solution of the homogeneous equation; hence y − y_p ∈ S. The set y_p + S is the translation of S by y_p in C(I).
Chapter 2
Sturm-Liouville Problems
2.1 Eigen Value Problems
Definition 2.1.1. Let L denote a linear differential operator. Then we say

Ly(x) = λy(x) on I

is an eigenvalue problem (EVP) corresponding to L, if both λ and y : I → R are unknown.

Example 2.1. For instance, if L = −d²/dx², then its corresponding eigenvalue problem is −y'' = λy.

For a fixed λ ∈ R, one can obtain a general solution. But in an EVP (compare an EVP with the notion of diagonalisation of matrices in Linear Algebra) we need to find all λ ∈ R for which the given ODE is solvable. Note that y ≡ 0 is a trivial solution, for all λ ∈ R.

Definition 2.1.2. A λ ∈ R, for which the EVP corresponding to L admits a non-trivial solution y_λ, is called an eigenvalue of the operator L, and y_λ is said to be an eigen function corresponding to λ. The set of all eigenvalues of L is called the spectrum of L.

Exercise 5. Note that for a linear operator L, if y_λ is an eigen function corresponding to λ, then αy_λ is also an eigen function corresponding to λ, for all α ∈ R.
As a consequence of the exercise above, we note that for a linear operator L the set of all eigen functions corresponding to λ forms a vector space W_λ, called the eigen space corresponding to λ. Let V denote the set of all solutions of the EVP corresponding to a linear operator L. Necessarily, 0 ∈ V and V ⊂ C²(I). Note that V = ∪_λ W_λ, where the λ's are the eigenvalues of L.

Exercise 6. Show that any second order ODE of the form

y'' + P(x)y' + Q(x)y(x) = R(x)

can be written in the form

d/dx( p(x) dy/dx ) + q(x)y(x) = r(x).

(Find p, q and r in terms of P, Q and R.)

Proof. Let us multiply the original equation by a function p(x), which will be chosen appropriately later. Thus, we get

p(x)y'' + p(x)P(x)y' + p(x)Q(x)y(x) = p(x)R(x).

We shall choose p such that

p'(x) = p(x)P(x).

Hence, p(x) = e^{∫ P(x) dx}. Thus, by setting q(x) = p(x)Q(x) and r(x) = p(x)R(x), we have the other form.
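The computation in Exercise 6 can be automated symbolically. The sketch below (our own illustration) recovers p, q, r for the normalised Cauchy-Euler equation of Exercise 3, i.e., P(x) = B/(Ax), Q(x) = C/(Ax²), R(x) = 0.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    A, B, C = sp.symbols('A B C', positive=True)

    # Normalised Cauchy-Euler equation: y'' + (B/(A x)) y' + (C/(A x^2)) y = 0
    P = B / (A * x)
    Q = C / (A * x**2)
    R = sp.Integer(0)

    p = sp.simplify(sp.exp(sp.integrate(P, x)))   # p(x) = exp(int P dx) = x**(B/A)
    q = sp.simplify(p * Q)                        # q = p*Q
    r = sp.simplify(p * R)                        # r = p*R
    print(p, q, r)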
2.2 Sturm-Liouville Problems
Given a finite interval (a, b) ⊂ R, the Sturm-Liouville (S-L) problem is given as

d/dx( p(x) dy/dx ) + q(x)y + λr(x)y = 0,  x ∈ (a, b),
c₁y(a) + c₂y'(a) = 0,  c₁² + c₂² > 0,
d₁y(b) + d₂y'(b) = 0,  d₁² + d₂² > 0.   (2.2.1)

The function y(x) and λ are the unknown quantities. The pair of boundary conditions given above is called separated; the boundary conditions correspond to the end-points a and b, respectively. Note that c₁ and c₂ cannot both be zero simultaneously, and a similar condition holds for d₁ and d₂.
Definition 2.2.1. The Sturm-Liouville problem with separated boundary conditions is said to be regular if:

(a) p, p', q, r : [a, b] → R are continuous functions;

(b) p(x) > 0 and r(x) > 0 for x ∈ [a, b].

We say the S-L problem is singular if either the interval (a, b) is unbounded or one (or both) of the regularity conditions given above fails.

We say the S-L problem is periodic if p(a) = p(b) and the separated boundary conditions are replaced with the periodic boundary conditions y(a) = y(b) and y'(a) = y'(b).
Example 2.2. Examples of regular S-L problems:

(a) −y''(x) = λy(x), x ∈ (0, a), with y(0) = y(a) = 0.
    We have chosen c₁ = d₁ = 1 and c₂ = d₂ = 0. Also, q ≡ 0 and p ≡ r ≡ 1.

(b) −y''(x) = λy(x), x ∈ (0, a), with y'(0) = y'(a) = 0.
    We have chosen c₁ = d₁ = 0 and c₂ = d₂ = 1. Also, q ≡ 0 and p ≡ r ≡ 1.

(c) −y''(x) = λy(x), x ∈ (0, a), with y'(0) = 0 and cy(a) + y'(a) = 0, where c > 0 is a constant.

(d) −(x²y'(x))' = λy(x), x ∈ (1, a), with y(1) = 0 and y(a) = 0, where p(x) = x², q ≡ 0 and r ≡ 1.
Example 2.3. Examples of singular S-L problem:
(a) For each n = 0, 1, 2, . . ., consider the equation

−(xy'(x))' = ( −n²/x + λx ) y(x),  x ∈ (0, a),  with y(a) = 0,

where p(x) = r(x) = x and q(x) = −n²/x. This equation is not regular because p(0) = r(0) = 0 and q is not continuous in the closed interval [0, a], since q(x) → −∞ as x → 0.

(b) The Legendre equation

−( (1 − x²)y'(x) )' = λy(x),  x ∈ (−1, 1),

with no boundary condition. Here p(x) = 1 − x², q ≡ 0 and r ≡ 1. This equation is not regular because p(−1) = p(1) = 0. We have no boundary conditions here because, for singular problems, when p vanishes at an end-point we drop the boundary condition corresponding to that end-point; thus, in this case, both boundary conditions are dropped. Note that dropping a boundary condition corresponding to an end-point is equivalent to taking both constants zero (for instance, c₁ = c₂ = 0 in the case of the left end-point). This was the reason for having no boundary condition corresponding to 0 in the Bessel equation above.
Example 2.4. Example of a periodic S-L problem:

−y''(x) = λy(x),  x ∈ (−π, π),
y(−π) = y(π),
y'(−π) = y'(π).
We shall now state without proof the spectral theorem for regular S-L
problem. Our aim, in this course, is to check the validity of the theorem
through some examples.
Theorem 2.2.2. For a regular S-L problem, there exists an increasing sequence of eigenvalues 0 < λ₁ < λ₂ < λ₃ < . . . < λ_k < . . . with λ_k → ∞ as k → ∞.

Exercise 7. Let W_k = W_{λ_k} be the eigen space corresponding to λ_k. Show that for a regular S-L problem W_k is one dimensional, i.e., corresponding to each λ_k, there cannot be two or more linearly independent eigen vectors.
Example 2.5. Consider the boundary value problem

y'' + λy = 0,  x ∈ (0, a),
y(0) = y(a) = 0.

This is a second order ODE with constant coefficients. Its characteristic equation is m² + λ = 0. Solving for m, we get m = ±√(−λ). Note that λ can be either zero, positive or negative.

If λ = 0, then y'' = 0 and the general solution is y(x) = αx + β, for some constants α and β. Since y(0) = y(a) = 0 and a ≠ 0, we get α = β = 0. Thus, we have no non-trivial solution corresponding to λ = 0.

If λ < 0, then −λ > 0. Hence y(x) = αe^{√(−λ) x} + βe^{−√(−λ) x}. Using the boundary conditions y(0) = y(a) = 0, we get α = β = 0, and hence we have no non-trivial solution corresponding to negative λ's.

If λ > 0, then m = ±i√λ and y(x) = α cos(√λ x) + β sin(√λ x). Using the boundary condition y(0) = 0, we get α = 0 and y(x) = β sin(√λ x). Using y(a) = 0 (and since β = 0 yields the trivial solution), we require sin(√λ a) = 0. Thus, λ = (kπ/a)² for each non-zero k ∈ N (since λ > 0). Hence, for each k ∈ N, there is a solution (y_k, λ_k) with

y_k(x) = sin( kπx/a )

and λ_k = (kπ/a)². Notice the following properties of the eigenvalues λ_k and eigen functions y_k:

(i) We have a discrete set of λ's such that 0 < λ₁ < λ₂ < λ₃ < . . . and λ_k → ∞ as k → ∞.

(ii) The eigen functions y_λ corresponding to λ form a subspace of dimension one.

In particular, in the above example, when a = π the eigenvalues and eigen functions are, for each k ∈ N, (y_k, λ_k) where y_k(x) = sin(kx) and λ_k = k².
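These eigenpairs can be cross-checked numerically by discretising −y'' on (0, a) with zero boundary values. The finite-difference sketch below is our own illustration (with a = π, so that λ_k = k²); it is not part of the original notes.

    import numpy as np

    a, N = np.pi, 400
    h = a / N
    # Standard three-point finite-difference matrix for -y'' with y(0) = y(a) = 0
    main = 2.0 * np.ones(N - 1) / h**2
    off = -1.0 * np.ones(N - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    eigs = np.sort(np.linalg.eigvalsh(A))[:5]
    print(eigs)   # approximately 1, 4, 9, 16, 25, i.e. (k*pi/a)^2 with a = pi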
Theorem 2.2.3. For a periodic S-L problem, there exists an increasing sequence of eigenvalues 0 ≤ λ₁ < λ₂ < λ₃ < . . . < λ_k < . . . with λ_k → ∞ as k → ∞. Moreover, W₁ = W_{λ₁}, the eigen space corresponding to the first eigenvalue, is one dimensional.
Example 2.6. Consider the boundary value problem

y'' + λy = 0 in (−π, π),
y(−π) = y(π),
y'(−π) = y'(π).

The characteristic equation is m² + λ = 0. Solving for m, we get m = ±√(−λ). Note that λ can be either zero, positive or negative.

If λ = 0, then y'' = 0 and the general solution is y(x) = αx + β, for some constants α and β. Since y(−π) = y(π), we get α = 0. Thus, for λ = 0, y ≡ a constant is the only non-trivial solution.

If λ < 0, then −λ > 0. Hence y(x) = αe^{√(−λ) x} + βe^{−√(−λ) x}. Using the boundary condition y(−π) = y(π), we get α = β, and using the other boundary condition we get α = β = 0. Hence we have no non-trivial solution corresponding to negative λ's.

If λ > 0, then m = ±i√λ and y(x) = α cos(√λ x) + β sin(√λ x). Using the boundary conditions, we get

α cos(√λ π) − β sin(√λ π) = α cos(√λ π) + β sin(√λ π)

and

α sin(√λ π) + β cos(√λ π) = −α sin(√λ π) + β cos(√λ π).

Thus, α sin(√λ π) = β sin(√λ π) = 0.

For a non-trivial solution, we must have sin(√λ π) = 0. Thus, λ = k² for each non-zero k ∈ N (since λ > 0).

Hence, for each k ∈ N ∪ {0}, there is a solution (y_k, λ_k) with

y_k(x) = α_k cos kx + β_k sin kx

and λ_k = k².
2.3 Singular Sturm-Liouville Problem
Singular S-L problems, in general, have a continuous spectrum. However, the examples we presented, viz., the Bessel equation and the Legendre equation, have a discrete spectrum, similar to the regular S-L problem.
2.3.1 EVP of Legendre Operator

Consider the Legendre equation

−d/dx( (1 − x²) dy/dx ) = λy  for x ∈ [−1, 1].

Note that, equivalently, we have the form

(1 − x²)y'' − 2xy' + λy = 0  for x ∈ [−1, 1].

The function p(x) = 1 − x² vanishes at the endpoints x = ±1.

Definition 2.3.1. A point x₀ is a singular point of (1.2.1) if either P or Q (or both) is not analytic at x₀. A singular point x₀ is said to be regular if (x − x₀)P(x) and (x − x₀)²Q(x) are analytic.

The end points x = ±1 are regular singular points. The coefficients P(x) = −2x/(1 − x²) and Q(x) = λ/(1 − x²) are analytic at x = 0, with radius of convergence 1. We look for a power series form of solution, y(x) = ∑_{k=0}^{∞} a_k x^k. Differentiating the series term by term (twice), substituting into the Legendre equation and equating like powers of x, we get a₂ = −λa₀/2, a₃ = (2 − λ)a₁/6 and, for k ≥ 2,

a_{k+2} = (k(k + 1) − λ) a_k / ( (k + 2)(k + 1) ).
Thus, the constants a₀ and a₁ can be fixed arbitrarily and the remaining constants are determined by the above relation. For instance, if a₁ = 0, we get the non-trivial solution of the Legendre equation

y₁ = a₀ + ∑_{k=1}^{∞} a_{2k} x^{2k},

and if a₀ = 0, we get the non-trivial solution

y₂ = a₁x + ∑_{k=1}^{∞} a_{2k+1} x^{2k+1},

provided the series converge. Note from the recurrence relation that if a coefficient is zero at some stage, then every alternate coefficient, subsequently, is zero. Thus, there are two possibilities of convergence here:

(i) the series terminates after a finite stage to become a polynomial;

(ii) the series does not terminate, but converges.
Suppose the series does not terminate, say, for instance, in y₁. Then a_{2k} ≠ 0 for all k. Consider the ratio

lim_{k→∞} | a_{2(k+1)} x^{2(k+1)} / (a_{2k} x^{2k}) | = lim_{k→∞} | (2k(2k + 1) − λ) x² / ( (2k + 2)(2k + 1) ) | = lim_{k→∞} | 2k x² / (2k + 2) | = x²

(the term involving λ tends to zero). Therefore, by the ratio test, y₁ converges for x² < 1 and diverges for x² > 1. Also, it can be shown that when x² = 1 the series diverges (this is beyond the scope of this course).

Since the Legendre equation is a singular S-L problem, we seek solutions y such that y and its derivative y' are continuous on the closed interval [−1, 1]. Thus, the only such possible solutions are the terminating series, i.e., polynomials.
Note that, for k ≥ 2,

a_{k+2} = (k(k + 1) − λ) a_k / ( (k + 2)(k + 1) ).

Hence, for any n ≥ 2, if λ = n(n + 1), then a_{n+2} = 0 and hence every subsequent alternate term is zero. Also, if λ = 1(1 + 1) = 2, then a₃ = 0, and if λ = 0(0 + 1) = 0, then a₂ = 0. Thus, for each n ∈ N ∪ {0}, we have λ_n = n(n + 1) and one of the solutions y₁ or y₂ is a polynomial. Thus, for each n ∈ N ∪ {0}, we have the eigenvalue λ_n = n(n + 1) and the Legendre polynomial P_n of degree n which is a solution to the Legendre equation.
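The Legendre eigenpairs can be verified symbolically; the short sketch below (our own check, using sympy's built-in Legendre polynomials) confirms that −((1 − x²)P_n')' = n(n + 1)P_n for the first few n.

    import sympy as sp

    x = sp.symbols('x')
    for n in range(5):
        Pn = sp.legendre(n, x)
        lam = n * (n + 1)
        # residual of -((1 - x^2) Pn')' - n(n+1) Pn, which should be identically zero
        residual = sp.simplify(-sp.diff((1 - x**2) * sp.diff(Pn, x), x) - lam * Pn)
        print(n, residual)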
2.3.2 EVP of Bessel's Operator

Consider the EVP, for each fixed n = 0, 1, 2, . . .,

−(xy'(x))' = ( −n²/x + λx ) y(x),  x ∈ (0, a),
y(a) = 0.

As before, since this is a singular S-L problem, we shall look for solutions y such that y and its derivative y' are continuous on the closed interval [0, a]. We shall assume that the eigenvalues are all real (this needs proof, which we omit). Thus, λ may be zero, positive or negative.
When λ = 0, the given ODE reduces to the Cauchy-Euler form

(xy'(x))' − (n²/x) y(x) = 0,

or equivalently,

x²y''(x) + xy'(x) − n²y(x) = 0.

The above second order ODE with variable coefficients can be converted to an ODE with constant coefficients by the substitution x = e^s (or s = ln x). Then, by the chain rule,

y' = dy/dx = (dy/ds)(ds/dx) = e^{−s} dy/ds

and

y'' = e^{−s} d(y')/ds = e^{−s} d/ds( e^{−s} dy/ds ) = e^{−2s}( d²y/ds² − dy/ds ).

Therefore,

y''(s) − n²y(s) = 0,

where y is now regarded as a function of the new variable s. For n = 0, the general solution is y(s) = αs + β, for arbitrary constants; thus y(x) = α ln x + β. The requirement that both y and y' are continuous on [0, a] forces α = 0. Thus, y(x) = β. But y(a) = 0, and hence β = 0, yielding the trivial solution.

Now, let n > 0 be a positive integer. Then the general solution is y(s) = αe^{ns} + βe^{−ns}. Consequently, y(x) = αx^n + βx^{−n}. Since y and y' have to be continuous on [0, a], β = 0. Thus, y(x) = αx^n. Now, using the boundary condition y(a) = 0, we get α = 0, again yielding the trivial solution. Therefore, λ = 0 is not an eigenvalue for any n = 0, 1, 2, . . ..
When λ > 0, the given ODE reduces to

x²y''(x) + xy'(x) + (λx² − n²)y(x) = 0.

Using the change of variable s² = λx² (i.e., s = √λ x), we get y'(x) = √λ dy/ds and y''(x) = λ d²y/ds². Then the given ODE is transformed into the Bessel equation

s² d²y/ds² + s dy/ds + (s² − n²)y = 0.

Using the power series form of solution, we know that the general solution of the Bessel equation is

y(s) = αJ_n(s) + βY_n(s),

where J_n and Y_n are the Bessel functions of the first and second kind, respectively. Therefore, y(x) = αJ_n(√λ x) + βY_n(√λ x). The continuity assumptions on y and y' force β = 0, because Y_n(√λ x) is unbounded at x = 0. Thus, y(x) = αJ_n(√λ x). Using the boundary condition y(a) = 0, we get

J_n(√λ a) = 0.
Theorem 2.3.2. For each non-negative integer n, J_n has infinitely many positive zeroes.

For each n ∈ N ∪ {0}, let z_{nm} be the m-th positive zero of J_n, m ∈ N. Hence √λ a = z_{nm}, so that λ_{nm} = z_{nm}²/a², and the corresponding eigen functions are y_{nm}(x) = J_n( z_{nm} x/a ).

For λ < 0, there are no eigenvalues. Proving this fact is beyond the scope of this course, hence we assume it.
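The eigenvalues λ_{nm} = z_{nm}²/a² are easy to tabulate, since scipy provides the positive zeros of J_n. The snippet below is a small sketch of our own (taking a = 1); it also checks that the eigen function J_n(z_{nm} x/a) vanishes at x = a.

    import numpy as np
    from scipy.special import jn_zeros, jv

    a = 1.0
    for n in range(3):
        z = jn_zeros(n, 4)        # first four positive zeros z_n1, ..., z_n4 of J_n
        lam = (z / a)**2          # eigenvalues lambda_nm = (z_nm / a)^2
        print(n, lam)
        print(np.max(np.abs(jv(n, z))))   # eigen function J_n(z_nm x/a) at x = a: ~1e-15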
2.4 Orthogonality of Eigen Functions
Observe that for a regular S-L problem the differential operator can be written as

L = −(1/r(x)) d/dx( p(x) d/dx ) − q(x)/r(x).

Let V denote the set of all solutions of (2.2.1). Necessarily, 0 ∈ V and V ⊂ C²(a, b). We define the inner product ⟨·, ·⟩ : V × V → R on V (a generalisation of the usual scalar product of vectors) as

⟨f, g⟩ := ∫_a^b r(x)f(x)g(x) dx.

Definition 2.4.1. We say two functions f and g are perpendicular or orthogonal with weight r if ⟨f, g⟩ = 0. We say f is of unit length if its norm ‖f‖ = √⟨f, f⟩ = 1.

Theorem 2.4.2. With respect to the inner product defined above on V, the eigen functions corresponding to distinct eigenvalues of the S-L problem are orthogonal.
Proof. Let y_i and y_j be eigen functions corresponding to distinct eigenvalues λ_i and λ_j. We need to show that ⟨y_i, y_j⟩ = 0. Recall that L is the S-L operator, and hence Ly_k = λ_k y_k for k = i, j. Consider

λ_i⟨y_i, y_j⟩ = ⟨Ly_i, y_j⟩ = ∫_a^b r (Ly_i) y_j dx
  = −∫_a^b [ d/dx( p(x) dy_i/dx ) ] y_j(x) dx − ∫_a^b q(x) y_i y_j dx
  = ∫_a^b p(x) (dy_i/dx)(dy_j/dx) dx − [ p(b)y_i'(b)y_j(b) − p(a)y_i'(a)y_j(a) ] − ∫_a^b q(x) y_i y_j dx
  = −∫_a^b y_i(x) [ d/dx( p(x) dy_j/dx ) ] dx + [ p(b)y_j'(b)y_i(b) − p(a)y_j'(a)y_i(a) ]
      − [ p(b)y_i'(b)y_j(b) − p(a)y_i'(a)y_j(a) ] − ∫_a^b q(x) y_i y_j dx
  = ⟨y_i, Ly_j⟩ + p(b)[ y_j'(b)y_i(b) − y_i'(b)y_j(b) ] − p(a)[ y_j'(a)y_i(a) − y_i'(a)y_j(a) ]
  = λ_j⟨y_i, y_j⟩ + p(b)[ y_j'(b)y_i(b) − y_i'(b)y_j(b) ] − p(a)[ y_j'(a)y_i(a) − y_i'(a)y_j(a) ].

Thus,

(λ_i − λ_j)⟨y_i, y_j⟩ = p(b)[ y_j'(b)y_i(b) − y_i'(b)y_j(b) ] − p(a)[ y_j'(a)y_i(a) − y_i'(a)y_j(a) ].

For a regular S-L problem, the boundary condition corresponding to the end-point b is the system of equations

d₁ y_i(b) + d₂ y_i'(b) = 0,
d₁ y_j(b) + d₂ y_j'(b) = 0,

with d₁² + d₂² > 0. Since this homogeneous system has a non-trivial solution (d₁, d₂), the determinant of its coefficient matrix vanishes, i.e., y_i(b)y_j'(b) − y_j(b)y_i'(b) = 0. A similar argument is also valid for the boundary condition corresponding to a. Thus, (λ_i − λ_j)⟨y_i, y_j⟩ = 0. But λ_i − λ_j ≠ 0, hence ⟨y_i, y_j⟩ = 0.

For a periodic S-L problem, p(a) = p(b), y_k(a) = y_k(b) and y_k'(a) = y_k'(b), for k = i, j. Then the RHS vanishes and ⟨y_i, y_j⟩ = 0.

For singular S-L problems in which either p(a) = 0 or p(b) = 0 (or both), the RHS again vanishes, because if p(a) = 0 we drop the boundary condition corresponding to the end-point a (and similarly at b).
Let us examine the orthogonality of the eigenvectors computed in the
examples earlier.
Example 2.7. We computed in Example 2.5 the eigenvalues and eigenvectors of the regular S-L problem

y'' + λy = 0,  x ∈ (0, a),
y(0) = y(a) = 0,

to be (y_k, λ_k), where

y_k(x) = sin( kπx/a )

and λ_k = (kπ/a)², for each k ∈ N. For m, n ∈ N such that m ≠ n, we need to check that y_m and y_n are orthogonal. Since r ≡ 1, we consider

⟨y_m, y_n⟩ = ∫_0^a sin( mπx/a ) sin( nπx/a ) dx,

which vanishes for m ≠ n (substitute t = πx/a and appeal to Exercise 8 below).
Exercise 8. Show that, for any n ≥ 0 and any positive integer m,

(i) ∫_{−π}^{π} cos nt cos mt dt = π if m = n, and 0 if m ≠ n;

(ii) ∫_{−π}^{π} sin nt sin mt dt = π if m = n, and 0 if m ≠ n;

(iii) ∫_{−π}^{π} sin nt cos mt dt = 0.

Consequently, show that cos kt/√π and sin kt/√π are of unit length.
Proof. (i) Consider the trigonometric identities

cos((n + m)t) = cos nt cos mt − sin nt sin mt   (2.4.1)

and

cos((n − m)t) = cos nt cos mt + sin nt sin mt.   (2.4.2)

Adding (2.4.1) and (2.4.2), we get

(1/2)[ cos((n + m)t) + cos((n − m)t) ] = cos nt cos mt.

Integrating both sides from −π to π, we get

∫_{−π}^{π} cos nt cos mt dt = (1/2) ∫_{−π}^{π} ( cos((n + m)t) + cos((n − m)t) ) dt.

But

∫_{−π}^{π} cos kt dt = (1/k) sin kt |_{−π}^{π} = 0,  for k ≠ 0.

Thus,

∫_{−π}^{π} cos nt cos mt dt = π if m = n, and 0 if m ≠ n.

Further,

‖cos kt‖ = ⟨cos kt, cos kt⟩^{1/2} = √π.

Therefore, cos kt/√π is of unit length.

(ii) Subtract (2.4.1) from (2.4.2) and use arguments similar to the above.

(iii) The arguments are the same, using the corresponding identities for sin((n ± m)t).
Exercise 9. Show that

∫_{−π}^{π} e^{imt} e^{−int} dt = 2π if m = n, and 0 if m ≠ n.
Example 2.8. We computed in Example 2.6 the eigenvalues and eigenvectors of the periodic S-L problem

y'' + λy = 0 in (−π, π),
y(−π) = y(π),
y'(−π) = y'(π),

to be, for each k ∈ N ∪ {0}, (y_k, λ_k) where

y_k(x) = α_k cos kx + β_k sin kx

and λ_k = k². Again r ≡ 1, and the orthogonality follows from the exercise above.
Example 2.9. The orthogonality of the Legendre polynomials and the Bessel functions must have been discussed in your course on ODE. Recall that the Legendre polynomials have the property

∫_{−1}^{1} P_m(x)P_n(x) dx = 0 if m ≠ n, and 2/(2n + 1) if m = n,

and the Bessel functions have the property

∫_0^1 x J_n( z_{ni} x ) J_n( z_{nj} x ) dx = 0 if i ≠ j, and (1/2)[ J_{n+1}( z_{ni} ) ]² if i = j,

where z_{ni} is the i-th positive zero of the Bessel function (of order n) J_n.
2.5 Eigen Function Expansion
Observe that an eigenvector y_k, for any k, can be normalised (to unit norm) in its inner product by dividing y_k by its norm ‖y_k‖. Thus, y_k/‖y_k‖, for any k, is a unit vector. For instance, in view of Exercise 8, cos kt/√π and sin kt/√π are functions of unit length.

Definition 2.5.1. Any given function f : (a, b) → R is said to have an eigen function expansion corresponding to the S-L problem (2.2.1) if

f(x) ≈ ∑_{k=0}^{∞} a_k y_k,

for some constants a_k, where the y_k are the normalised eigenvectors corresponding to (2.2.1).
We are using the symbol ≈ to highlight the fact that the issue of convergence of the series is ignored here.

If the eigenvectors (or eigen functions) y_k involve only sin or only cos terms, as in the regular S-L problem (cf. Example 2.5), then the series is called a Fourier Sine or Fourier Cosine series, respectively.

If the eigen functions y_k involve both sin and cos, as in the periodic S-L problem (cf. Example 2.6), then the series is called a Fourier series. In the case of the eigen functions being Legendre polynomials or Bessel functions, we call it a Fourier-Legendre or Fourier-Bessel series, respectively.
Chapter 3
Fourier Series
At the end of the previous chapter, we introduced the Fourier series of a function f. A natural question that arises at this point is: what classes of functions admit a Fourier series expansion? We attempt to answer this question in this chapter.
3.1 Periodic Functions
We isolate the properties of the trigonometric functions, viz., sin, cos, tan
etc.
Definition 3.1.1. A function f : R → R is said to be periodic of period T if T > 0 is the smallest number such that

f(t + T) = f(t)  for all t ∈ R.

Such functions are also called T-periodic functions.

Example 3.1. The trigonometric functions sin t and cos t are 2π-periodic functions, while sin 2t and cos 2t are π-periodic functions.
Given an L-periodic real-valued function g on R, one can always construct a T-periodic function as f(t) = g(Lt/T). For instance, f(t) = sin( 2πt/T ) is a T-periodic function:

sin( 2π(t + T)/T ) = sin( 2πt/T + 2π ) = sin( 2πt/T ).

In fact, for any positive integer k, sin( 2πkt/T ) and cos( 2πkt/T ) are T-periodic functions.
Exercise 10. If f : R → R is a T-periodic function, then show that

(i) f(t − T) = f(t), for all t ∈ R;

(ii) f(t + kT) = f(t), for all k ∈ Z;

(iii) g(t) = f(λt + μ) is (T/λ)-periodic, where λ > 0 and μ ∈ R.

Exercise 11. Show that for a T-periodic integrable function f : R → R,

∫_{α}^{α+T} f(t) dt = ∫_0^T f(t) dt  for all α ∈ R.
3.2 Fourier Coefficients and Fourier Series

Without loss of generality, to simplify our computation, let us assume that f is a 2π-periodic function on R (a similar idea will work for any T-periodic function). Suppose that f : (−π, π) → R, extended to all of R as a 2π-periodic function, is such that the infinite series

a₀ + ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt )

converges uniformly to f (note the uniform convergence hypothesis). Then,

f(t) = a₀ + ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt ),   (3.2.1)

and integrating both sides of (3.2.1) from −π to π, we get

∫_{−π}^{π} f(t) dt = ∫_{−π}^{π} [ a₀ + ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt ) ] dt
                = a₀(2π) + ∫_{−π}^{π} [ ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt ) ] dt.

Since the series converges uniformly to f, the interchange of integral and series is possible. Therefore,

∫_{−π}^{π} f(t) dt = a₀(2π) + ∑_{k=1}^{∞} ∫_{−π}^{π} ( a_k cos kt + b_k sin kt ) dt.
From Exercise 8, we know that

∫_{−π}^{π} sin kt dt = ∫_{−π}^{π} cos kt dt = 0,  for all k ∈ N.

Hence,

a₀ = (1/2π) ∫_{−π}^{π} f(t) dt.

To find the coefficient a_k, for each fixed k ∈ N, we multiply both sides of (3.2.1) by cos kt and integrate from −π to π. Consequently,

∫_{−π}^{π} f(t) cos kt dt = a₀ ∫_{−π}^{π} cos kt dt + ∑_{j=1}^{∞} ∫_{−π}^{π} ( a_j cos jt cos kt + b_j sin jt cos kt ) dt
                       = ∫_{−π}^{π} a_k cos kt cos kt dt = a_k π.

A similar argument, after multiplying by sin kt, gives the formula for b_k. Thus, we have derived, for all k ∈ N,

a_k = (1/π) ∫_{−π}^{π} f(t) cos kt dt,
b_k = (1/π) ∫_{−π}^{π} f(t) sin kt dt,
a₀ = (1/2π) ∫_{−π}^{π} f(t) dt.
These are the formulae for the Fourier coefficients of a 2π-periodic function f, in terms of f. Similarly, if f is a T-periodic function extended to R, then its Fourier series is

f(t) = a₀ + ∑_{k=1}^{∞} [ a_k cos( 2πkt/T ) + b_k sin( 2πkt/T ) ],

where

a_k = (2/T) ∫_0^T f(t) cos( 2πkt/T ) dt,   (3.2.2a)
b_k = (2/T) ∫_0^T f(t) sin( 2πkt/T ) dt,   (3.2.2b)
a₀ = (1/T) ∫_0^T f(t) dt.   (3.2.2c)
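Formulae (3.2.2) can be evaluated numerically for any integrable T-periodic f. The sketch below is our own illustration with scipy.integrate.quad, applied to f(t) = t on (−π, π) with T = 2π (the function treated in Example 3.4 below); by Exercise 11 the integrals may be taken over any interval of length T.

    import numpy as np
    from scipy.integrate import quad

    T = 2 * np.pi
    f = lambda t: t    # f(t) = t on (-pi, pi), extended 2*pi-periodically

    def a(k):          # (3.2.2a), integrating over the period (-pi, pi)
        return (2 / T) * quad(lambda t: f(t) * np.cos(2 * np.pi * k * t / T), -np.pi, np.pi)[0]

    def b(k):          # (3.2.2b)
        return (2 / T) * quad(lambda t: f(t) * np.sin(2 * np.pi * k * t / T), -np.pi, np.pi)[0]

    a0 = (1 / T) * quad(f, -np.pi, np.pi)[0]
    print(round(a0, 6), [round(a(k), 6) for k in range(1, 4)], [round(b(k), 6) for k in range(1, 4)])
    # a0 = 0, a_k = 0 and b_k = 2*(-1)**(k+1)/k, i.e. 2, -1, 0.666...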
The above discussion motivates us to give the following definition.

Definition 3.2.1. If f : R → R is any T-periodic integrable function, then we define the Fourier coefficients of f, a₀, a_k and b_k, for all k ∈ N, by (3.2.2), and the Fourier series of f is given by

f(t) ≈ a₀ + ∑_{k=1}^{∞} [ a_k cos( 2πkt/T ) + b_k sin( 2πkt/T ) ].   (3.2.3)

Note the use of the symbol ≈ in (3.2.3). This is because the following issues arise once we have the definition of the Fourier series of f, viz.,

(a) Will the Fourier series of f always converge?

(b) If it converges, will it converge to f?

(c) If so, is the convergence point-wise or uniform? (Recall that our derivation of the formulae for the Fourier coefficients assumed uniform convergence of the series.)

Answering these questions, in all generality, is beyond the scope of this course. However, we shall state some results, in the next section, that will get us into working mode. We end this section with some simple examples of computing the Fourier coefficients of functions.
Example 3.2. Consider the constant function f ≡ c on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} c dt = c.

For each k ∈ N,

a_k = (1/π) ∫_{−π}^{π} c cos kt dt = 0

and

b_k = (1/π) ∫_{−π}^{π} c sin kt dt = 0.
Example 3.3. Consider the trigonometric function f(t) = sin t on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} sin t dt = 0.

For each k ∈ N,

a_k = (1/π) ∫_{−π}^{π} sin t cos kt dt = 0

and

b_k = (1/π) ∫_{−π}^{π} sin t sin kt dt = 0 for k ≠ 1, and 1 for k = 1.

Similarly, for f(t) = cos t on (−π, π), all Fourier coefficients are zero, except a₁ = 1.
Example 3.4. Consider the function f(t) = t on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} t dt = 0.

For each k ∈ N, integrating by parts,

a_k = (1/π) ∫_{−π}^{π} t cos kt dt = (1/kπ)[ t sin kt |_{−π}^{π} − ∫_{−π}^{π} sin kt dt ] = 0,

and

b_k = (1/π) ∫_{−π}^{π} t sin kt dt = (1/kπ)[ −t cos kt |_{−π}^{π} + ∫_{−π}^{π} cos kt dt ]
    = (1/kπ)[ −π(−1)^k − π(−1)^k + 0 ] = (−1)^{k+1} (2/k).

Therefore, t as a 2π-periodic function defined on (−π, π) has the Fourier series expansion

t ≈ 2 ∑_{k=1}^{∞} ( (−1)^{k+1}/k ) sin kt.
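A partial-sum check of this expansion (our own sketch, not part of the notes): away from the jump at t = ±π the truncated series approaches t as more terms are taken.

    import numpy as np

    def partial_sum(t, N):
        # S_N(t) = 2 * sum_{k=1}^{N} (-1)**(k+1) * sin(k t) / k
        k = np.arange(1, N + 1)
        return 2 * np.sum(((-1.0) ** (k + 1)) * np.sin(np.outer(t, k)) / k, axis=1)

    t = np.array([0.5, 1.0, 2.0])
    for N in (10, 100, 1000):
        print(N, np.abs(partial_sum(t, N) - t))   # errors shrink as N grows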
Example 3.5. Let us consider the same function f(t) = t, as in the previous example, but defined on (0, π). Viewing this as a π-periodic function, we compute

a₀ = (1/π) ∫_0^π t dt = π/2.
For each k ∈ N,

a_k = (2/π) ∫_0^π t cos 2kt dt = (1/kπ)[ t sin 2kt |_0^π − ∫_0^π sin 2kt dt ]
    = (1/kπ)[ π sin 2kπ + (1/2k)( cos 2kπ − cos 0 ) ] = 0

and

b_k = (2/π) ∫_0^π t sin 2kt dt = (1/kπ)[ −t cos 2kt |_0^π + ∫_0^π cos 2kt dt ]
    = (1/kπ)[ −π cos 2kπ + (1/2k)( sin 2kπ − sin 0 ) ] = −1/k.

Therefore, t as a π-periodic function defined on (0, π) has the Fourier series expansion

t ≈ π/2 − ∑_{k=1}^{∞} (1/k) sin 2kt.

Note the difference in the Fourier expansion of the same function when the periodicity changes.
Exercise 12. Find the Fourier coefficients and Fourier series of the function

f(t) = 0 if t ∈ (−π, 0],  and f(t) = t if t ∈ (0, π).
Theorem 3.2.2 (Riemann-Lebesgue Lemma). Let f be a continuous function on [−π, π]. Then the Fourier coefficients of f converge to zero, i.e.,

lim_{k→∞} a_k = lim_{k→∞} b_k = 0.
Proof. Observe that {a_k} and {b_k} are bounded sequences, since

|a_k|, |b_k| ≤ (1/π) ∫_{−π}^{π} |f(t)| dt < +∞.
We need to show that these bounded sequences, in fact, converge to zero. Now set x = t − π/k, so that sin kt = sin(kx + π) = −sin kx, and hence

πb_k = ∫_{−π}^{π} f(t) sin kt dt = ∫_{−π−π/k}^{π−π/k} f(x + π/k) sin(kx + π) dx = −∫_{−π−π/k}^{π−π/k} f(x + π/k) sin kx dx.

Therefore, after renaming x as t,

2πb_k = ∫_{−π}^{π} f(t) sin kt dt − ∫_{−π−π/k}^{π−π/k} f(t + π/k) sin kt dt
      = −∫_{−π−π/k}^{−π} f(t + π/k) sin kt dt + ∫_{−π}^{π−π/k} ( f(t) − f(t + π/k) ) sin kt dt + ∫_{π−π/k}^{π} f(t) sin kt dt
      =: I₁ + I₂ + I₃.

Thus, 2π|b_k| ≤ |I₁| + |I₂| + |I₃|. Consider

|I₃| = | ∫_{π−π/k}^{π} f(t) sin kt dt | ≤ ∫_{π−π/k}^{π} |f(t)| dt ≤ ( max_{t∈[−π,π]} |f(t)| ) (π/k) = Mπ/k.

A similar estimate is also true for I₁. Let us consider

|I₂| = | ∫_{−π}^{π−π/k} ( f(t) − f(t + π/k) ) sin kt dt | ≤ ( max_{t∈[−π,π−π/k]} |f(t) − f(t + π/k)| ) ( 2π − π/k ).

By the uniform continuity of f on [−π, π], this maximum tends to zero as k → ∞. Hence |b_k| → 0. Exactly similar arguments hold for a_k.
3.3 Piecewise Smooth Functions
Definition 3.3.1. A function f : [a, b] → R is said to be piecewise continuously differentiable if it has a continuous derivative f' in (a, b), except at finitely many points in the interval [a, b], and at each of these finitely many points the right-hand and left-hand limits of both f and f' exist.

Example 3.6. Consider f : [−1, 1] → R defined as f(t) = |t|. It is continuous, and it is not differentiable at 0, but it is piecewise continuously differentiable.
Example 3.7. Consider the function f : [−1, 1] → R defined as

f(t) = −1 for −1 < t < 0,  f(t) = 1 for 0 < t < 1,  and f(t) = 0 for t = 0, 1, −1.

It is not continuous, but it is piecewise continuous. It is also piecewise continuously differentiable.
Exercise 13 (Riemann-Lebesgue Lemma). Let f be a piecewise continuous function on [−π, π] such that

∫_{−π}^{π} |f(t)| dt < +∞.

Show that the Fourier coefficients of f converge to zero, i.e.,

lim_{k→∞} a_k = lim_{k→∞} b_k = 0.
Theorem 3.3.2. If f is a T-periodic piecewise continuously differentiable function, then the Fourier series of f converges to f(t) for every t at which f is smooth. Further, at a non-smooth point t₀, the Fourier series of f will converge to the average of the right and left limits of f at t₀.

Corollary 3.3.3. If f : R → R is a continuously differentiable (the derivative f' exists and is continuous) T-periodic function, then the Fourier series of f converges to f(t) for every t ∈ R.
Example 3.8. For a given constant c ≠ 0, consider the piecewise function

f(t) = 0 if t ∈ (−π, 0),  and f(t) = c if t ∈ (0, π).
Then,

a₀ = (1/2π) ∫_0^π c dt = c/2.

For each k ∈ N,

a_k = (1/π) ∫_0^π c cos kt dt = 0

and

b_k = (1/π) ∫_0^π c sin kt dt = (c/π)[ −(1/k)( cos kπ − cos 0 ) ] = c( 1 + (−1)^{k+1} ) / (kπ).

Therefore,

f(t) ≈ c/2 + ∑_{k=1}^{∞} [ c( 1 + (−1)^{k+1} ) / (kπ) ] sin kt.

The point t₀ = 0 is a non-smooth point of the function f. Note that the right limit of f at t₀ = 0 is c and the left limit of f at t₀ = 0 is 0, and that the Fourier series of f at t₀ = 0 converges to c/2, the average of c and 0.
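The behaviour at the jump can be seen numerically: at t₀ = 0 every partial sum of the series above equals c/2 exactly (all the sine terms vanish there), while at points inside (0, π) and (−π, 0) the partial sums approach c and 0, respectively. A small sketch of our own, with c = 1:

    import numpy as np

    c = 1.0
    def partial_sum(t, N):
        k = np.arange(1, N + 1)
        return c / 2 + np.sum(c * (1 + (-1.0) ** (k + 1)) / (k * np.pi) * np.sin(k * t))

    print(partial_sum(0.0, 50))                           # exactly c/2 = 0.5
    print(partial_sum(1.0, 200), partial_sum(-1.0, 200))  # close to c and to 0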
3.4 Complex Fourier Coefficients

The Fourier series of a 2π-periodic function f : R → R, as given in (3.2.1), can be recast in complex number notation using the formulae

cos t = ( e^{it} + e^{−it} )/2,  sin t = ( e^{it} − e^{−it} )/(2i).

Note that we can rewrite the Fourier series expansion of f as

f(t) = a₀/2 + ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt ),

with a factor 2 in the denominator of a₀, so that the formulae for the Fourier coefficients all carry a uniform factor. Thus,

f(t) = a₀/2 + ∑_{k=1}^{∞} ( a_k cos kt + b_k sin kt )
     = a₀/2 + ∑_{k=1}^{∞} [ (a_k/2)( e^{ikt} + e^{−ikt} ) − (i b_k/2)( e^{ikt} − e^{−ikt} ) ]
     = a₀/2 + ∑_{k=1}^{∞} [ ( (a_k − i b_k)/2 ) e^{ikt} + ( (a_k + i b_k)/2 ) e^{−ikt} ]
     = c₀ + ∑_{k=1}^{∞} ( c_k e^{ikt} + c_{−k} e^{−ikt} )
     = ∑_{k=−∞}^{∞} c_k e^{ikt}.
Exercise 14. Given a 2π-periodic function f such that

f(t) = ∑_{k=−∞}^{∞} c_k e^{ikt},   (3.4.1)

where the convergence is uniform, use the integral formulae from Exercise 9 to show that, for all k ∈ Z,

c_k = (1/2π) ∫_{−π}^{π} f(t) e^{−ikt} dt.
Proof. Fix k. To find the coefficient c_k, multiply both sides of (3.4.1) by e^{−ikt} and integrate from −π to π.
Using the real Fourier coefficients, one can write down the complex Fourier coefficients via the relations

c₀ = a₀/2,  c_k = ( a_k − i b_k )/2  and  c_{−k} = ( a_k + i b_k )/2,

and if one can compute directly the complex Fourier coefficients of a periodic function f, then one can write down the real Fourier coefficients using the formulae

a₀ = 2c₀,  a_k = c_k + c_{−k}  and  b_k = i( c_k − c_{−k} ).
Exercise 15. Find the complex Fourier coefficients (directly) of the function f(t) = t for t ∈ (−π, π], extended to R periodically with period 2π. Use the complex Fourier coefficients to find the real Fourier coefficients of f.

Proof. We use

c_k = (1/2π) ∫_{−π}^{π} t e^{−ikt} dt

for all k = 0, ±1, ±2, . . .. For k = 0, we get c₀ = 0. For k ≠ 0, we apply integration by parts to get

c_k = (1/2π)[ ( (it/k) e^{−ikt} ) |_{−π}^{π} − ∫_{−π}^{π} (i/k) e^{−ikt} dt ]
    = (1/2π)[ (iπ/k) e^{−ikπ} + (iπ/k) e^{ikπ} ]
    = (i/k) cos kπ = (i/k)(−1)^k.

Hence the real Fourier coefficients are

a₀ = 0,  a_k = c_k + c_{−k} = 0  and  b_k = i( c_k − c_{−k} ) = (2/k)(−1)^{k+1}.
3.5 Orthogonality
Let V be the class of all 2π-periodic real valued continuous functions on R.

Exercise 16. Show that V is a vector space over R.

We introduce an inner product on V. For any two elements f, g ∈ V, we define

⟨f, g⟩ := ∫_{−π}^{π} f(t)g(t) dt.

The inner product generalises to V the properties of the scalar product on Rⁿ.

Exercise 17. Show that the inner product defined on V as

⟨f, g⟩ = ∫_{−π}^{π} f(t)g(t) dt

satisfies the properties of a scalar product on Rⁿ, viz., for all f, g, h ∈ V,

(a) ⟨f, g⟩ = ⟨g, f⟩;

(b) ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩;

(c) ⟨λf, g⟩ = λ⟨f, g⟩ for all λ ∈ R;

(d) ⟨f, f⟩ ≥ 0, and ⟨f, f⟩ = 0 implies that f ≡ 0.
Definition 3.5.1. We say two functions f and g are perpendicular or orthogonal if ⟨f, g⟩ = 0. We say f is of unit length if its norm ‖f‖ = √⟨f, f⟩ = 1.

Consider, for k ∈ N, the following elements in V:

e₀(t) = 1/√(2π),  e_k(t) = cos kt/√π  and  f_k(t) = sin kt/√π.
Example 3.9. e₀, e_k and f_k are all of unit length. ⟨e₀, e_k⟩ = 0 and ⟨e₀, f_k⟩ = 0. Also, ⟨e_m, e_n⟩ = 0 and ⟨f_m, f_n⟩ = 0 for m ≠ n. Further, ⟨e_m, f_n⟩ = 0 for all m, n. Compare this example with the orthonormal basis vectors of Rⁿ!

In this new formulation, we can rewrite the formulae for the Fourier coefficients as

a₀ = (1/√(2π)) ⟨f, e₀⟩,  a_k = (1/√π) ⟨f, e_k⟩  and  b_k = (1/√π) ⟨f, f_k⟩,

and the Fourier series of f takes the form

f(t) = ⟨f, e₀⟩/√(2π) + (1/√π) ∑_{k=1}^{∞} ( ⟨f, e_k⟩ cos kt + ⟨f, f_k⟩ sin kt ).
3.5.1 Odd and Even functions
Definition 3.5.2. We say a function f : R → R is odd if f(−t) = −f(t), and even if f(−t) = f(t).

Example 3.10. All constant functions are even functions. For all k ∈ N, sin kt are odd functions and cos kt are even functions.
Exercise 18. Show that any odd function is always orthogonal to an even function.

The Fourier series of an odd or even function will contain only the sine or cosine parts, respectively. The reason is that, if f is odd,

⟨f, 1⟩ = 0  and  ⟨f, cos kt⟩ = 0,

and hence a₀ = 0 and a_k = 0 for all k. If f is even,

⟨f, sin kt⟩ = 0,

and b_k = 0 for all k.
3.6 Fourier Sine-Cosine Series
Let f : (0, T) → R be a piecewise smooth function. To compute the Fourier Sine series of f, we extend f, as an odd function f_o, to (−T, T):

f_o(t) = f(t) for t ∈ (0, T),  and f_o(t) = −f(−t) for t ∈ (−T, 0).

Note that f_o, extended to R, is a 2T-periodic function and is an odd function. Since f_o is odd, the cosine coefficients a_k and the constant term a₀ vanish in the Fourier series expansion of f_o. The restriction of the Fourier series of f_o to f in the interval (0, T) gives the Fourier sine series of f. We derive the formulae for the Fourier sine coefficients of f.
f(t) = ∑_{k=1}^{∞} b_k sin( kπt/T ),  where   (3.6.1)

b_k = (1/T) ⟨ f_o, sin( kπt/T ) ⟩ = (1/T) ∫_{−T}^{T} f_o(t) sin( kπt/T ) dt
    = (1/T)[ ∫_{−T}^{0} −f(−t) sin( kπt/T ) dt + ∫_0^T f(t) sin( kπt/T ) dt ]
    = (1/T)[ ∫_0^T f(t) sin( kπt/T ) dt + ∫_0^T f(t) sin( kπt/T ) dt ]
    = (2/T) ∫_0^T f(t) sin( kπt/T ) dt.
Example 3.11. Let us consider the function f(t) = t on (0, π). To compute the Fourier sine series of f, we extend f to (−π, π) as the odd function f_o(t) = t on (−π, π). For each k ∈ N,

b_k = (2/π) ∫_0^π t sin kt dt = (2/kπ)[ −t cos kt |_0^π + ∫_0^π cos kt dt ]
    = (2/kπ)[ −π(−1)^k + (1/k)( sin kπ − sin 0 ) ] = (−1)^{k+1} (2/k).

Therefore, the Fourier sine series expansion of f(t) = t on (0, π) is

t ≈ 2 ∑_{k=1}^{∞} ( (−1)^{k+1}/k ) sin kt.
Compare the result with Example 3.4.
For computing the Fourier cosine series of f, we extend f as an even function to (−T, T):

f_e(t) = f(t) for t ∈ (0, T),  and f_e(t) = f(−t) for t ∈ (−T, 0).

The function f_e is a 2T-periodic function when extended to all of R. The Fourier series of f_e has no sine coefficients: b_k = 0 for all k. The restriction of the Fourier series of f_e to f in the interval (0, T) gives the Fourier cosine series of f. We derive the formulae for the Fourier cosine coefficients of f.
f(t) = a₀ + ∑_{k=1}^{∞} a_k cos( kπt/T ),   (3.6.2)

where

a_k = (2/T) ∫_0^T f(t) cos( kπt/T ) dt

and

a₀ = (1/T) ∫_0^T f(t) dt.
Example 3.12. Let us consider the function f(t) = t on (0, π). To compute the Fourier cosine series of f, we extend f to (−π, π) as the even function f_e(t) = |t| on (−π, π). Then,

a₀ = (1/π) ∫_0^π t dt = π/2.

For each k ∈ N,

a_k = (2/π) ∫_0^π t cos kt dt = (2/kπ)[ t sin kt |_0^π − ∫_0^π sin kt dt ]
    = (2/kπ)[ ( π sin kπ − 0 ) + (1/k)( cos kπ − cos 0 ) ] = 2[ (−1)^k − 1 ] / (k²π).
Therefore, the Fourier cosine series expansion of f(t) = t on (0, π) is

t ≈ π/2 + 2 ∑_{k=1}^{∞} [ ( (−1)^k − 1 ) / (k²π) ] cos kt.

Compare the result with the Fourier series of the function f(t) = |t| on (−π, π).
3.7 Fourier Integral and Fourier Transform
Recall that we have computed the Fourier series expansion of periodic functions. The periodicity was assumed because of the periodicity of the sin and cos functions. The question we shall address in this section is: can we generalise the notion of Fourier series of f to non-periodic functions?

The answer is yes. Note that the periodicity of f is captured by the integer k appearing in the arguments of sin and cos. To generalise the notion of Fourier series to non-periodic functions, we replace k, a positive integer, by a real number ω. Note that when we replace k with ω, the sequences a_k, b_k become functions of ω, a(ω) and b(ω), and the series is replaced by an integral over R.
Definition 3.7.1. If f : R → R is a piecewise continuous function which vanishes outside a finite interval, then its Fourier integral is defined as

f(t) = ∫_0^∞ ( a(ω) cos ωt + b(ω) sin ωt ) dω,

where

a(ω) = (1/π) ∫_{−∞}^{∞} f(t) cos ωt dt,
b(ω) = (1/π) ∫_{−∞}^{∞} f(t) sin ωt dt.
Definition 3.7.2. If f : R → R is a piecewise continuous function which vanishes outside a finite interval, then its Fourier transform is defined as

f̂(ω) = (1/2π) ∫_{−∞}^{∞} f(t) e^{−iωt} dt,

and

f(t) = ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω

is called the Fourier inversion formula.
Chapter 4
PDE: An Introduction
A partial differential equation (PDE) is an equation involving an unknown function u of two or more variables and some or all of its partial derivatives. A partial differential equation is usually a mathematical representation of problems arising in nature, around us. The process of understanding physical systems can be divided into three stages:

(i) Modelling the problem, i.e., deriving the mathematical equation (in our case, formulating the PDE). The derivation is usually a consequence of conservation laws or of balancing forces.

(ii) Solving the equation (PDE). What do we mean by a solution of the PDE?

(iii) Studying properties of the solution. Usually, we do not end up with a definite formula for the solution. Thus, how much information about the solution can one extract without any knowledge of the formula?
4.1 Definition and Well-Posedness of PDE

Recall that ordinary differential equations (ODE) dealt with functions of one variable, u : Ω ⊂ R → R. The subset Ω could have the interval form (a, b). The derivative of u at x ∈ Ω is defined as

u'(x) := lim_{h→0} ( u(x + h) − u(x) ) / h,
provided the limit exists. The derivative gives the slope of the tangent line at x ∈ Ω. How do we generalise this notion of derivative to a function u : Ω ⊂ Rⁿ → R? These ideas are part of a course on several (multi) variable calculus. However, we shall jump directly to the concepts necessary for us to begin this course.

Let Ω be an open subset of Rⁿ and let u : Ω → R be a given function. We denote the directional derivative of u at x ∈ Ω, along a vector ξ ∈ Rⁿ, as

∂u/∂ξ (x) = lim_{h→0} ( u(x + hξ) − u(x) ) / h,

provided the limit exists. The directional derivative of u at x ∈ Ω, along the standard basis vector e_i = (0, 0, . . . , 1, 0, . . . , 0), is called the i-th partial derivative of u at x and is written

u_{x_i}(x) = ∂u/∂x_i (x) = lim_{h→0} ( u(x + h e_i) − u(x) ) / h.
Similarly, one can consider higher order derivatives as well. We now introduce Schwartz's multi-index notation for derivatives, which will be used to denote a PDE in a concise form. Let α = (α₁, . . . , α_n) be an n-tuple of non-negative integers and let |α| = α₁ + . . . + α_n. The partial differential operator of order α is denoted as

D^α = ∂^{α₁}/∂x₁^{α₁} · · · ∂^{α_n}/∂x_n^{α_n} = ∂^{|α|} / ( ∂x₁^{α₁} · · · ∂x_n^{α_n} ).

If α and β are two multi-indices, then α ≤ β means α_i ≤ β_i for all 1 ≤ i ≤ n. Also, α ± β = (α₁ ± β₁, . . . , α_n ± β_n), α! = α₁! · · · α_n! and x^α = x₁^{α₁} · · · x_n^{α_n}.

For k ≥ 1, we define D^k u(x) := { D^α u(x) | |α| = k }. Thus, for k = 1, we regard Du as being arranged in a vector,

Du = ( D^{(1,0,...,0)}u, D^{(0,1,0,...,0)}u, . . . , D^{(0,0,...,0,1)}u ) = ( ∂u/∂x₁, ∂u/∂x₂, . . . , ∂u/∂x_n ) =: ∇u.

We call this the gradient vector.
Similarly, for k = 2, we regard D²u as being arranged in a matrix form (called the Hessian matrix),

D²u = ( ∂²u/∂x₁²      · · ·  ∂²u/∂x₁∂x_n )
      ( ∂²u/∂x₂∂x₁    · · ·  ∂²u/∂x₂∂x_n )
      (     ⋮                     ⋮      )
      ( ∂²u/∂x_n∂x₁   · · ·  ∂²u/∂x_n²   ),  an n × n matrix.
The trace of the Hessian matrix is called the Laplace operator, denoted as Δ := ∑_{i=1}^{n} ∂²/∂x_i². Note that, under some prescribed ordering of the multi-indices α, D^k u(x) can be regarded as a vector in R^{n^k}.
Example 4.1. Let u(x, y) : R² → R be given by u(x, y) = ax² + by². Then

∇u = (u_x, u_y) = (2ax, 2by)

and

D²u = ( u_xx  u_yx )   ( 2a  0  )
      ( u_xy  u_yy ) = ( 0   2b ).

Note that, for convenience, we can view ∇u : R² → R² and D²u : R² → R⁴ = R^{2²}, by assigning some ordering to the partial derivatives.
Definition 4.1.1. Let Ω be an open subset of Rⁿ. A k-th order PDE F is a given map F : R^{n^k} × R^{n^{k−1}} × · · · × Rⁿ × R × Ω → R having the form

F( D^k u(x), D^{k−1}u(x), . . . , Du(x), u(x), x ) = 0,   (4.1.1)

for each x ∈ Ω, where u : Ω → R is the unknown.
A general first order PDE is of the form F(Du(x), u(x), x) = 0; in particular, for a two-variable function u(x, y) the PDE is of the form F(u_x, u_y, u, x, y) = 0. If u(x, y, z) is a three-variable function, then the PDE is of the form F(u_x, u_y, u_z, u, x, y, z) = 0. A general second order PDE is of the form F(D²u(x), Du(x), u(x), x) = 0.
Example 4.2. Some important PDEs:

Eikonal Equation: |∇u(x)| = 1.

Transport Equation: u_t(x, t) + b · ∇_x u(x, t) = 0, for some given b ∈ Rⁿ, assuming x ∈ Rⁿ.

Inviscid Burgers' Equation: u_t + u u_x = 0, for x ∈ R.

Hamilton-Jacobi Equation: u_t + H(∇u, x) = 0.

Laplace Equation: Δu = 0.

Poisson Equation: Δu(x) = f(x).

Poisson Equation: Δu(x) = f(u).

Helmholtz Equation: Δu + k²u = 0, for a given constant k.

Monge-Ampère Equation: det(D²u) = f.

Heat Equation: u_t − Δu = 0.

Kolmogorov's Equation: u_t − A · D²u + b · ∇u = 0, for a given n × n matrix A = (a_{ij}) and b ∈ Rⁿ. The first scalar product is in R^{n²} and the second is in Rⁿ.

Wave Equation: u_tt − Δu = 0.

General Wave Equation: u_tt − A · D²u + b · ∇u = 0, for a given n × n matrix A = (a_{ij}) and b ∈ Rⁿ. The first scalar product is in R^{n²} and the second is in Rⁿ.

Schrödinger Equation: iu_t + Δu = 0.

Airy's Equation: u_t + u_xxx = 0.

Beam Equation: u_t + u_xxxx = 0.

Korteweg-de Vries (KdV) Equation: u_t + u_x + u u_x + u_xxx = 0.
As we know, a PDE is a mathematical description of the behaviour of the associated system. Thus, our foremost aim is to solve the PDE for the unknown function u, usually called the solution of the PDE. The first expected notion of solution is as follows:
Definition 4.1.2. We say u : Ω → R is a solution (in the classical sense) to the k-th order PDE (4.1.1) if u is k-times differentiable with the k-th derivative being continuous, and u satisfies the equation (4.1.1).
Example 4.3. (i) Consider the PDE u_t(x, t) = u_xx(x, t), for a two-variable function u. We shall see in later chapters that this is the heat equation. Note that u(x, t) = 0 is a solution of the PDE. Further, any additive constant of u is also a solution, i.e., u = c, for any constant c, is also a solution. Thus, we have a family of solutions depending on c.
(ii) The function u : R² → R defined as u(x, t) = (1/2)x² + t + c, for a given constant c, is a solution of the PDE u_t = u_xx, because u_t = 1, u_x = x and u_xx = 1. We have a different family of solutions for the same PDE.

(iii) Find a, b ∈ R such that u(x, t) = e^{ax+bt} is a solution to the PDE u_t = u_xx. Note that u_t = bu, u_x = au and u_xx = a²u. Observe that u ≠ 0; thus, for u to solve the PDE, we demand that a and b are such that b = a².
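Claim (iii) is easy to confirm symbolically; a short sketch of our own using sympy:

    import sympy as sp

    x, t, a = sp.symbols('x t a')
    u = sp.exp(a * x + a**2 * t)         # the choice b = a**2
    residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))
    print(residual)                      # 0, so u_t = u_xx holds for every a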
Note that in the above example we have three different families of solutions for the same PDE, and we may have more! In contrast to ODE, a family of solutions of a PDE may be parametrised by an arbitrary function rather than by a constant.
Example 4.4. Consider the equation u_t(x, t) = u(x, t). If we freeze the variable x, the equation is very much like an ODE. Integrating both sides, we would get u(x, t) = f(x)e^t as a solution, where f is an arbitrary function of x. Thus, the family of solutions depends on the choice of f.
Example 4.5. Let us solve for u in the equation u_xy = 4x²y. Unlike the previous example, the PDE here involves derivatives in both variables. Still, one can solve this PDE for a general solution. We first integrate both sides w.r.t. x to get u_y = (4/3)x³y + f(y). Then, integrating again w.r.t. y, we get u(x, y) = (2/3)x³y² + F(y) + g(x), where F(y) = ∫ f(y) dy.
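The general solution found in Example 4.5 can be checked the same way (a sketch of our own; the particular choices of F and g below are arbitrary):

    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.sin(y**2)      # an arbitrary function of y alone
    g = sp.exp(3 * x)     # an arbitrary function of x alone
    u = sp.Rational(2, 3) * x**3 * y**2 + F + g

    print(sp.simplify(sp.diff(u, x, y) - 4 * x**2 * y))   # 0, so u_xy = 4 x^2 y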
As happens in reality, there is, as of today, no universal way of solving a given PDE. Thus, PDEs have to be categorised based on some common properties, for which we might expect a common technique of solution. One such classification is given below.
Definition 4.1.3. We say F is linear if (4.1.1) has the form

∑_{|α|≤k} a_α(x) D^α u(x) = f(x)

for given functions f and a_α (|α| ≤ k). If f ≡ 0, we say F is homogeneous. F is said to be semilinear if it is linear in the highest order terms, i.e., F has the form

∑_{|α|=k} a_α(x) D^α u(x) + a₀( D^{k−1}u, . . . , Du, u, x ) = 0.

We say F is quasilinear if it has the form

∑_{|α|=k} a_α( D^{k−1}u(x), . . . , Du(x), u(x), x ) D^α u + a₀( D^{k−1}u, . . . , Du, u, x ) = 0.
Finally, we say F is fully nonlinear if it depends nonlinearly on the highest
order derivatives.
Example 4.6. (i) a₁(x)u_xx + a₂(x)u_xy + a₃(x)u_yy + a₄(x)u_x + a₅(x)u_y = a₆(x)u is linear.

(ii) xu_y − yu_x = u is linear.

(iii) u_x + u_y − u² = 0 is semilinear.

(iv) u_x + u u_y − u² = 0 is quasilinear, and u_x u_y − u = 0 is fully nonlinear.
Exercise 19. Classify all the important PDE listed in example 4.2.
A problem involving a PDE could be a boundary-value problem (we look for a solution with prescribed boundary values) or an initial value problem (a solution whose value at the initial time is known). It is usually desirable to solve a well-posed problem, in the sense of Hadamard. By well-posedness we mean that the PDE, along with the boundary condition (or initial condition),

(a) has a solution (existence);

(b) the solution is unique (uniqueness);

(c) the solution depends continuously on the given data (stability).

Any PDE not meeting the above criteria is said to be ill-posed. Hadamard gave an example of an ill-posed problem.
4.2 Three Basic PDE: History
The study of partial differential equations started as a tool to analyse models in physical science. PDEs usually arise from physical laws such as balancing forces (Newton's law), momentum and conservation laws. The first PDE was introduced in 1752 by d'Alembert as a model to study vibrating strings. He introduced the one dimensional wave equation

∂²u(x, t)/∂t² = ∂²u(x, t)/∂x².
This was then generalised to two and three dimensions by Euler (1759) and
D. Bernoulli (1762), i.e.,

2
u(x, t)
t
2
= u(x, t),
CHAPTER 4. PDE: AN INTRODUCTION 45
where =

3
i=1

2
x
2
i
.
In physics, a eld is a physical quantity associated to each point of space-
time. A eld can be classied as a scalar eld or a vector eld according to
whether the value of the eld at each point is a scalar or a vector, respec-
tively. Some examples of eld are Newtons gravitational eld, Coulombs
electrostatic eld and Maxwells electromagnetic eld.
Given a vector eld V , it may be possible to associate a scalar eld u,
called potential, such that u = V . Moreover, the gradient of any function
u, u is a vector eld. In gravitation theory, the gravity potential is the
potential energy per unit mass. Thus, if E is the potential energy of an
object with mass m, then u = E/m and the potential associated with a mass
distribution is the superposition of potentials of point masses.
The Newtonian gravitation potential can be computed to be
u(x) =
1
4
_

(y)
[x y[
dy
where (y) is the density of the mass distribution, occupying R
3
, at y. In
1782, Laplace discovered that the Newtons gravitational potential satises
the equation:
u = 0 on R
3
.
Thus, the operator = is called the Laplacian and any function whose
Laplacian is zero (as above) is said to be a harmonic function.
Later, in 1813, Poisson discovered that on the Newtonian potential
satises the equation:
u = on .
Such equations are called Poisson equation. The identity obtained by Laplace
was, in fact, a consequence of the conservation laws and can be generalised
to any scalar potential. Green (1828) and Gauss (1839) observed that the
Laplacian and Poisson equations can be applied to any scalar potential in-
cluding electric and magnetic potentials. Suppose there is a scalar potential
u such that V = u for a vector eld V and V is such that
_

V d = 0
for all closed surfaces . Then, by Gauss divergence theorem
1
(cf.
Appendix A), we have
_

V dx = 0 .
1
a mathematical formulation of conservation laws
CHAPTER 4. PDE: AN INTRODUCTION 46
Thus, V = divV = 0 on and hence u = (u) = V = 0
on . Thus, the scalar potential is a harmonic function. The study of
potentials in physics is called Potential Theory and, in mathematics, it is
called Harmonic Analysis. Note that, for any potential u, its vector eld
V = u is irrotational, i.e., curl(V ) = V = 0.
Later, in 1822 J. Fourier on his work on heat ow in Theorie analytique
de la chaleur introduced the heat equation
u(x, t)
t
= u(x, t),
where =

3
i=1

2
x
2
i
. The heat ow model was based on Newtons law of
cooling.
Thus, by the beginning of 19th century, the three most important PDEs
were identied.
4.3 Continuity Equation
Let us consider an ideal compressible uid (viz. gas) occupying a bounded
region R
n
(in practice, we take n = 3, but the derivation is true for
all dimensions). For mathematical precision, we assume to be a bounded
open subset of R
n
. Let (x, t) denote the density of the uid for x at
time t I R, for some open interval I. Mathematically, we presume that
C
1
(I). We cut a region
t
and follow
t
, the position at time t,
as t varies in I. For mathematical precision, we will assume that
t
have C
1
boundaries (cf. Appendix A). Now, the law of conservation of mass states
that during motion the mass is conserved and mass is the product of density
and volume. Thus, the mass of the region as a function of t is constant and
hence its derivative should vanish. Therefore,
d
dt
_
t
(x, t) dx = 0.
We regard the points of
t
, say x
t
, following the trajectory x(t) with
velocity v(x, t). We also assume that the deformation of
t
is smooth, i.e.,
CHAPTER 4. PDE: AN INTRODUCTION 47
v(x, t) is continuous in a neighbourhood of I. Consider
d
dt
_
t
(x, t) dx = lim
h0
1
h
_
_

t+h
(x, t + h) dx
_
t
(x, t) dx
_
= lim
h0
_
t
(x, t + h) (x, t)
h
dx
+ lim
h0
1
h
_
_

t+h
(x, t + h) dx
_
t
(x, t + h) dx
_
The rst integral becomes
lim
h0
_
t
(x, t + h) (x, t)
h
dx =
_
t

t
(x, t) dx.
The second integral reduces as,
_

t+h
(x, t + h) dx
_
t
(x, t + h) dx =
_

(x, t + h)
_

t+h

t
_
=
_

t+h
\t
(x, t + h) dx

_
t\
t+h
(x, t + h) dx.
We now evaluate the above integral in the sense of Riemann. We x
t. Our aim is to partition the set (
t+h

t
) (
t

t+h
) with cylinders
and evaluate the integral by letting the cylinders as small as possible. To
do so, we choose 0 < s 1 and a polygon that covers
t
from outside
such that the area of each of the face of the polygon is less than s and the
faces are tangent to some point x
i

t
. Let the polygon have m faces.
Then, we have x
1
, x
2
, . . . x
m
at which the faces F
1
, F
2
, . . . , F
m
are a tangent
to
t
. Since
t+h
is the position of
t
after time h, any point x(t) moves to
x(t +h) = v(x, t)h. Hence, the cylinders with base F
i
and height v(x
i
, t)h is
expected to cover our annular region depending on whether we move inward
or outward. Thus, v(x
i
, t)(x
i
) is positive or negative depending on whether

t+h
moves outward or inward, where (x
i
) is the unit outward normal at
x
i

t
.
_

t+h
\t
(x, t+h) dx
_
t\
t+h
(x, t+h) dx = lim
s0
m

i=1
(x
i
, t)v(x
i
, t)(x
i
)hs.
CHAPTER 4. PDE: AN INTRODUCTION 48
Thus,
lim
h0
1
h
_
_

t+h
(x, t + h) dx
_
t
(x, t + h) dx
_
=
_
t
(x, t)v(x, t)(x) d.
By Greens theorem (cf. Appendix A), we have
d
dt
_
t
(x, t) dx =
_
t
_

t
+ div(v)
_
dx.
Now, using conservation of mass, we get

t
+ div(v) = 0 in R. (4.3.1)
Equation (4.3.1) is called the equation of continuity. In fact, any quantity
that is conserved as it moves in an open set satises the equation of con-
tinuity (4.3.1).
Chapter 5
First Order PDE
We begin by looking at some family of curves which arise as a solution to
rst order PDEs.
5.1 Family Of Curves
Let A R
2
be an open subset. Consider
u : R
2
A R
a two parameter family of smooth surfaces in R
3
, u(x, y, a, b), where (a, b)
A. For instance, u(x, y, a, b) = (x a)
2
+ (y b)
2
is a family of circles
with centre at (a, b). Dierentiate w.r.t x and y, we get u
x
(x, y, a, b) and
u
y
(x, y, a, b), respectively. Eliminating a and b from the two equations, we
get a rst order PDE
F(u
x
, u
y
, u, x, y) = 0
whose solutions are the given surfaces u.
Example 5.1. Consider the family of circles
u(x, y, a, b) = (x a)
2
+ (y b)
2
.
Thus, u
x
= 2(x a) and u
y
= 2(y b) and eliminating a and b, we get
u
2
x
+ u
2
y
4u = 0
is a rst order PDE.
49
CHAPTER 5. FIRST ORDER PDE 50
Example 5.2. Find the rst order PDE, by eliminating the arbitrary function
f, satised by u.
(i) u(x, y) = xy + f(x
2
+ y
2
)
(ii) u(x, y) = f(x/y)
Proof. (i) Dierentiating the given equation w.r.t x and y, we get
u
x
= y + 2xf

, u
y
= x + 2yf

,
respectively. Eliminating f

, by multiplying y and x respectively, we


get
yu
x
xu
y
= y
2
x
2
.
(ii) Dierentiating the given equation w.r.t x and y, we get
u
x
=
1
y
f

, , u
y
=
x
y
2
f

,
respectively. Eliminating f

, by multiplying x and y respectively, we


get
xu
x
+ yu
y
= 0.
Example 5.3. Find the rst order PDE, by eliminating the arbitrary constants
a and b, satised by u
(i) u(x, y) = (x + a)(y + b)
(ii) u(x, y) = ax + by
Proof. (i) Dierentiating the given equation w.r.t x and y, we get
u
x
= y + b, u
y
= x + a,
respectively. Eliminating a and b, we get
u
x
u
y
= u.
(ii) Dierentiating the given equation w.r.t x and y, we get
u
x
= a, u
y
= b
respectively. Eliminating a and b, we get
xu
x
+ yu
y
= u.
CHAPTER 5. FIRST ORDER PDE 51
5.2 Linear Transport Equation (2 Variable)
5.2.1 Derivation
Imagine a wave (say water wave) moving on the surface of water with constant
speed b. At any time instant t, every point on the wave would have travelled
a distance of bt from its initial position. Let us x a point (x
0
, 0) on the wave.
Now note that the value of u(x, t) is constant along the line x = bt + x
0
or
x bt = x
0
. Therefore the directional derivative of u along the vector in the
direction of (b, 1) is zero. Therefore
0 = u(x, t) (b, 1) = u
t
+ bu
x
.
This is a simple rst order linear equation called the transport equation.
5.2.2 Solving
We wish to solve the transport equation u
t
+ bu
x
= 0 which describes the
motion of a wave with constant speed b as seen by person who is observing
the wave from a xed point. Let us imagine that someone else, say on
a skateboard, moving with speed b observes the same wave moving in the
direction of the wave. For this person the wave would appear stationary
and for the xed observe the wave would appear to travel with speed b.
What is the equation of the motion of the wave for the moving observer?
To understand this we need to identify the coordinate system for the person
on skateboard. Let us x a point x at time t = 0. After time t, the point
x remains as x for the xed observer, while for the moving observer the
point x is now x bt. Therefore the coordinate system for the moving
observer is (, t) where = x bt. Let v(, t) describe the motion of the
wave from the moving observers view. Then the PDE describing the motion
of wave as seen by moving observer is v
t
(, t) = 0, Because u
t
= v

t
+v
t
and
u
x
= v

x
+v
t
t
x
= v

. Hence u
t
+bu
x
= v

(b)+v
t
+v

= v
t
. Now, to solve for
u we solve for v and use it to nd u. Note that v(, t) = f() for any arbitrary
function (suciently dierentiable). Hence u(x, t) = v(x bt, t) = f(x bt).
5.3 Method of Characteristics: Two variable
We begin by describing the method for linear rst order equations in two
variables. We are restricting ourselves to a function of two variables to x
CHAPTER 5. FIRST ORDER PDE 52
ideas (and to visualize geometrically). However, the ideas do carry forward
to functions of several variable. The method of characteristics is a technique
to reduce a given rst order PDE into a system of ODE and then solve the
ODE using known methods, to obtain the solution of the rst order PDE.
Consider rst order linear equation of two variable:
a(x, y)u
x
+ b(x, y)u
y
= c(x, y). (5.3.1)
We need to nd u(x, y) that solves above equation. This is equivalent to
nding the graph (surface) S (x, y, u(x, y)) of the function u in R
3
. If u
is a solution of (5.3.1), at each (x, y) in the domain of u,
a(x, y)u
x
+ b(x, y)u
y
= c(x, y)
a(x, y)u
x
+ b(x, y)u
y
c(x, y) = 0
(a(x, y), b(x, y), c(x, y)) (u
x
, u
y
, 1) = 0
(a(x, y), b(x, y), c(x, y)) (u(x, y), 1) = 0.
But (u(x, y), 1) is normal to S at the point (x, y) (cf. Appendix B).
Hence, the coecients (a(x, y), b(x, y), c(x, y)) are perpendicular to the nor-
mal. Thus, the coecients (a(x, y), b(x, y), c(x, y)) lie on the tangent plane
to S at (x, y, u(x, y)).
Solving the given rst order linear PDE is nding the surface S for which
(a(x, y), b(x, y), c(x, y)) lie on the tangent plane to S at (x, y, z).
Denition 5.3.1. We call a surface S R
n
to be an integral surface w.r.t
a given vector eld V , if V lies in the tangent plane to S at each point of S.
Hence, nding u is equivalent to nding the integral surface corresponding
to the vector eld V = (a(x, y), b(x, y), c(x, y)).
We view an integral surface w.r.t V as an union of integral curves w.r.t
V .
Denition 5.3.2. A curve R
n
is said to be an integral or characteristic
curve w.r.t a given vector eld V , if V lies in the tangent plane to at each
point of .
The surface is the union of curves which satisfy the property of S. Thus,
for any curve S such that at each point of , the vector V (x, y) =
(a(x, y), b(x, y), c(x, y) is tangent to the curve. Parametrizing the curve by
CHAPTER 5. FIRST ORDER PDE 53
the variable s, we see that we are looking for the curve = x(s), y(s), z(s)
R
3
such that
dx
ds
= a(x(s), y(s)),
dy
ds
= b(x(s), y(s)),
and
dz
ds
= c(x(s), y(s)).
The three ODEs obtained are called characteristic equations. The union
of these characteristic (integral) curves give us the integral surface.
Example 5.4. Consider the linear transport equation in two variable,
u
t
+ bu
x
= 0, x R and t (0, ),
where the constant b R is given. Thus, the given vector eld V (x, t) =
(b, 1, 0). The characteristic equations are
dx
ds
= b,
dt
ds
= 1, and
dz
ds
= 0.
Solving the 3 ODEs, we get
x(s) = bs + c
1
, t(s) = s + c
2
, and z(s) = c
3
.
Eliminating the parameter s, we get the curves (lines) xbt = a constant and
z = a constant. Therefore, z = u(x, t) is constant along the lines x bt = a
constant. Hence z is a function of xbt, i.e., it changes value only when you
switch between the lines x bt. Thus, for any function g (smooth enough)
u(x, t) = g(x bt)
is a general solution of the transport equation. This is can be observed by
noting that
u
t
+ bu
x
= g

(x bt)(b) + bg

(x bt) = 0.
Also note that u(x, 0) = g(x).
Example 5.5. Given a constant b R and a function f(x, t) , we wish to
solve the inhomogeneous linear transport equation,
u
t
(x, t) + bu
x
(x, t) = f(x, t), x R and t (0, ).
CHAPTER 5. FIRST ORDER PDE 54
As before, the rst two ODE will give the projection of characteristic curve
in the xt plane, x bt = a constant, and the third ODE becomes
dz(s)
ds
= f(x(s), t(s)).
Lets say we need to nd the value of u at the point (x
0
, t
0
). The line passing
through (x
0
, t
0
) with slope 1/b is given by the equation x bt = , where
= x
0
at
0
. If z has to be on the integral curve, then z(s) = u( + bs, s).
Hence set z(s) := u( + bs, s) and let (s) = + bs be the line joining (, 0)
and (x
0
, t
0
) as s varies from 0 to t
0
. Therefore, the third ODE becomes,
dz(s)
ds
= f((s), s) = f( + bs, s).
Integrating both sides from 0 to t
0
, we get
_
t
0
0
f(x
0
b(t
0
s), s) ds = z(t
0
) z(0)
= u(x
0
, t
0
) u(x
0
bt
0
, 0).
Thus,
u(x, t) = u(x bt, 0) +
_
t
0
f(x b(t s), s) ds
is the general solution.
Example 5.6. Find the general solution (in terms of arbitrary functions) of
the given rst order PDE
(i) xu
x
(x, y) + yu
y
(x, y) = u(x, y). Compare your answer with Exam-
ple 5.3(ii).
(ii) yu(x, y)u
x
(x, y) + xu(x, y)u
y
(x, y) = xy
Proof. (i) The characteristic equations (ODEs) are
dx
ds
= x(s)
dy
ds
= y(s) and
dz
ds
= z(s).
Thus, x(s) = c
1
e
s
, y(s) = c
2
e
s
and z(s) = c
3
e
s
. Eliminating the pa-
rameter s, we get y/x = c
4
and z/x = c
5
. Thus, the general solution
is
u(x, y) = xg(y/x) or u(x, y) = yf(x/y),
for some arbitrary smooth functions f and g.
CHAPTER 5. FIRST ORDER PDE 55
(ii) Note that the equation in the given form is quasi-linear. However,
it can be made semilinear by dividing throughout by xyu, as long as
the division is not zero. Since the domain is not specied to conclude
whether xyu ,= 0, we proceed as follows:
The characteristic equations are
dx
ds
= yz,
dy
ds
= xz and
dz
ds
= xy.
Hence,
0 = x
dx
ds
y
dy
ds
=
d(x
2
)
ds

d(y
2
)
ds
=
d(x
2
y
2
)
ds
.
Thus, x
2
y
2
= c
1
and, similarly, x
2
z
2
= c
2
. Hence, for some f and
g,
u
2
(x, y) = x
2
+ f(x
2
y
2
) or u
2
(x, y) = y
2
+ g(x
2
y
2
).
5.4 Cauchy Problem
Recall that the general solution of the transport equation depends on the
value of u at time t = 0, i.e., the value of u on the curve (x, 0) in the xt-
plane. Thus, the problem of nding a function u satisfying the rst order
PDE (5.3.1) such that u is known on a curve in the xy-plane is called the
Cauchy problem.
Denition 5.4.1. A Cauchy problem states that: given a curve R
3
, can
we nd a solution u of (5.3.1) whose graph contains ?
The question that arises at this moment is that: Does the knowledge of
u on any curve R
3
lead to solving the rst order PDE? The answer
is a no. For instance, in the transport problem, if we choose the curve
= (x, t) [ x bt = 0, then we had no information to conclude u o the
line xbt = 0. The characteristic curves should emanate from to determine
CHAPTER 5. FIRST ORDER PDE 56
u. Thus, only those curves are allowed which are not characteristic curves,
i.e., the coecient vector eld (a(x, y), b(x, y)) is nowhere tangent to the
curve .
Denition 5.4.2. We say =
1
(r),
2
(r) R
2
is non-characteristic for
the Cauchy problem
_
a(x, y)u
x
+ b(x, y)u
y
= c(x, y) (x, y) R
2
u = on
if is nowhere tangent to (a(
1
,
2
), b(
1
,
2
)), i.e.,
(a(
1
,
2
), b(
1
,
2
)) (

2
,

1
) ,= 0.
Example 5.7. For any given (smooth enough) function : R R, consider
the linear transport equation
_
u
t
+ bu
x
= 0 x R and t (0, )
u(x, 0) = (x) x R.
(5.4.1)
We know that the general solution of the transport equation is u(x, t) =
g(x bt) for some g. In addition, we want the initial condition u(x, 0) to be
satised. Thus,
u(x, 0) = g(x) = (x).
Thus, by choosing g = , we get the particuar solution of the problem as
u(x, t) = (x bt).
Recall that we already introduced the notion of well-posedness of a PDE
in chapter 4. Though we can not elaborate on the existence and unique-
ness of solutions of Cauchy problem at this level, we show by example the
importance of well-posedness of Cauchy problem. In particular, if is not
non-characteristic, then the Cauchy problem is not well-posed.
Example 5.8. (i) Find the general solution (in terms of arbitrary functions)
of the rst order PDE 2u
x
(x, y) + 3u
y
(x, y) + 8u(x, y) = 0.
(ii) For the PDE given above, check if the following curves are non-characteristic
(a) y = x in the xy-plane
(b) y =
3x1
2
.
CHAPTER 5. FIRST ORDER PDE 57
(iii) Discuss the particular solutions of the above PDE, corresponding to
(a) u(x, x) = x
4
on y = x
(b) u(x, (3x 1)/2) = x
2
on y = (3x 1)/2
(c) u(x, (3x 1)/2) = e
4x
.
Observe the nature of solutions for the same PDE on a characteristic
curve and on non-characteristic curve.
Proof. (i) The characteristic equations are
dx
ds
= 2,
dy
ds
= 3 and
dz
ds
= 8z.
Hence,
x(s) = 2s + c
1
y(s) = 3s + c
2
and z(s) = c
3
e
8s
.
Thus, 3x 2y = c
4
and z = c
4
e
4x
or z = c
5
e
8y/3
Hence, for some f
and g,
u(x, y) = f(3x 2y)e
4x
or u(x, y) = g(3x 2y)e
8y/3
.
(ii) (a) Parametrise the curve y = x as (r) : r (r, r). Thus
1
(r) =

2
(r) = r. Since the coecients of the PDE are a(r) = 2 and
b(r) = 3, we have
(a, b) (

2
(r),

1
(r)) = (2, 3) (1, 1) = 2 + 3 = 1 ,= 0.
Hence is non-characteristic.
(b) Parametrise the curve y = (3x 1)/2 as (r) : r (r, (3r 1)/2).
Hence
1
(r) = r and
2
(r) = (3r 1)/2 and
(a, b) (

2
(r),

1
(r)) = (2, 3) (3/2, 1) = 3 + 3 = 0.
Hence is a characteristic curve.
(iii) Recall that the general solution is
u(x, y) = f(3x 2y)e
4x
or u(x, y) = g(3x 2y)e
8y/3
.
CHAPTER 5. FIRST ORDER PDE 58
(a) Now, u(x, x) = x
4
implies f(x) = x
4
e
4x
or g(x) = x
4
e
8x/3
, and
u(x, y) = (3x 2y)
4
e
8(xy)
.
Thus, we have a unique solution of u.
(b) Using the given condition, we have f(1) = x
2
e
4x
or g(1) = x
2
e
4x
e
4/3
for all x R. This implies f and g are not well dened. We have
no function f and g, hence there is no solution u solving the given
PDE with the given data.
(c) Once again using the given condition, we have f(1) = 1 or g(1) =
e
4/3
. One has many choices of function satisying these conditions.
Thus, we have innite number of solutions (or choices for) u that
solves the PDE.
We end this chapter by deriving the particular solution of the Cauchy
problem (5.4.1) using parametrisation of the data curve . The example
also shows how the information on data curve is reduced as initial condi-
tion for the characteristic ODEs. Note that in the example below the data
curve is parametrised using the variable r and the characteristic curves is
parametrised using the variable s.
Example 5.9. We shall compute the solution of the Cauchy problem (5.4.1).
We rst check for non-characteristic property of . Note that (x, 0),
the x-axis of xt-plane, is the (boundary) curve on which the value of u is given.
Thus, (, ) = (x, 0, (x)) is the known curve on the solution surface of u.
We parametrize the curve with r-variable, i.e., =
1
(r),
2
(r) =
(r, 0). is non-characteristic, because (b, 1) (0, 1) = 1 ,= 0. The charac-
teristic equations are:
dx(r, s)
ds
= a,
dt(r, s)
ds
= 1, and
dz(r, s)
ds
= 0
with initial conditions,
x(r, 0) = r, t(r, 0) = 0, and z(r, s) = (r).
Solving the ODEs, we get
x(r, s) = as + c
1
(r), t(r, s) = s + c
2
(r)
CHAPTER 5. FIRST ORDER PDE 59
and z(r, s) = c
3
(r) with initial conditions
x(r, 0) = c
1
(r) = r
t(r, 0) = c
2
(r) = 0, and z(r, 0) = c
3
(r) = (r).
Therefore,
x(r, s) = as + r, t(r, s) = s, and z(r, s) = (r).
We solve for r, s in terms of x, t and set u(x, t) = z(r(x, t), s(x, t)).
r(x, t) = x at and s(x, t) = t.
Therefore, u(x, t) = z(r, s) = (r) = (x at).
CHAPTER 5. FIRST ORDER PDE 60
Chapter 6
Second Order PDE
6.1 Classication of Second Order PDE
A general second order PDE is of the form F (D
2
u(x), Du(x), u(x), x) = 0,
for each x R
n
and u : R is the unknown. A Cauchy problem
poses the following: Given the knowledge of u on a smooth hypersurface
can one nd the solution u of the PDE? The knowledge of u on is
said to be the Cauchy data.
What should be the minimum required Cauchy data for the Cauchy prob-
lem to be solved? Viewing the Cauchy problem as an initial value problem
corresponding to ODE, we know that a unique solution exists to the second
order ODE
_
_
_
y

(x) + P(x)y

(x) + Q(x)y(x) = 0 x I
y(x
0
) = y
0
y

(x
0
) = y

0
.
where P and Q are continuous on I (assume I closed interval of R) and for
any point x
0
I. This motivates us to dene the Cauchy problem for second
order PDE as:
_
_
_
F (D
2
u(x), Du(x), u(x), x) = 0 x
u(x) = g(x) x
Du(x) (x) = h(x) x
(6.1.1)
where is the outward unit normal vector on the hypersurface and g, h are
known functions on . To keep things simple, we shall henceforth restrict
ourselves to the two-variable setup.
61
CHAPTER 6. SECOND ORDER PDE 62
6.1.1 Second order PDE in Two Varaibles
Consider the Cauchy problem (6.1.1) in two variables and set x = (x, y). Let
denote the unit tangent vector on . Then, the directional derivative along
the tangent vector, Du(x, y) (x, y) = g

(x, y) is known because g is known.


Thus, we may compute directional derivative of u in any direction, along ,
as a linear combination of Du and Du . Using this we may reformulate
(6.1.1) as
_

_
F (D
2
u, Du, u, x, y) = 0 (x, y)
u(x, y) = g(x, y) (x, y)
u
x
(x, y) = h
1
(x, y) (x, y)
u
y
(x, y) = h
2
(x, y) (x, y)
with the compatibility condition that g(s) = h
1

1
(s) + h
2

2
(s), where s
(
1
(s),
2
(s)) is the parametrisation
1
of the hypersurface . The compatibil-
ity condition is an outcome of the fact that
u(s) = u
x

1
(s) + u
y

2
(s).
The above condition implies that among g, h
1
, h
2
only two can be assigned
independently.
6.1.2 Classication of Semi-Linear PDE in Two Vari-
ables
Consider the Cauchy problem for the second order semi-linear PDE in two
variables (x, y) R
2
,
_

_
A(x, y)u
xx
+ 2B(x, y)u
xy
+ C(x, y)u
yy
= D (x, y)
u(x, y) = g(x, y) (x, y)
u
x
(x, y) = h
1
(x, y) (x, y)
u
y
(x, y) = h
2
(x, y) (x, y) .
(6.1.2)
where D(x, y, u, u
x
, u
y
) may be non-linear and is a smooth
2
curve in .
Also, one of the coecients A, B or C is identically non-zero (else the PDE
1
(/) denotes the derivative with respect to space variable and () denotes the derivative
with respect to parameter
2
twice dierentiable
CHAPTER 6. SECOND ORDER PDE 63
is not of second order). Let s (
1
(s),
2
(s)) be a parametrisation of the
curve . Then we have the compatibility condition that
g(s) = h
1

1
(s) + h
2

2
(s).
By computing the second derivatives of u on and considering u
xx
, u
yy
and u
xy
as unknowns, we have the linear system of three equations in three
unknowns on ,
Au
xx
+2Bu
xy
+Cu
yy
= D

1
(s)u
xx
+
2
(s)u
xy
=

h
1
(s)

1
(s)u
xy
+
2
(s)u
yy
=

h
2
(s).
This system of equation is solvable if the determinant of the coecients are
non-zero, i.e.,

A 2B C

1

2
0
0
1

2

,= 0.
Denition 6.1.1. We say a curve is characteristic (w.r.t (6.1.2)) if
A
2
2
2B
1

2
+ C
2
1
= 0.
where (
1
(s),
2
(s)) is a parametrisation of .
If y = y(x) is a representation of the curve (locally, if necessary), we
have
1
(s) = s and
2
(s) = y(s). Then the characteristic equation reduces
as
A
_
dy
dx
_
2
2B
dy
dx
+ C = 0.
Therefore, the characteristic curves of (6.1.2) are given by the graphs whose
equation is
dy
dx
=
B

B
2
AC
A
.
Thus, we have three situations arising depending on the sign of the dis-
criminant, B
2
AC. This classies the given second order PDE based on
the sign of its discriminant d = B
2
AC.
Denition 6.1.2. We say a second order PDE is of
CHAPTER 6. SECOND ORDER PDE 64
(a) hyperbolic type if d > 0,
(b) parabolic type if d = 0 and
(c) elliptic type if d < 0.
The hyperbolic PDE have two families of characteristics, parabolic PDE
has one family of characteristic and elliptic PDE have no characteristic. We
caution here that these names are no indication of the shape of the graph of
the solution of the PDE.
Example 6.1 (Wave Equation). For a given c R, u
yy
c
2
u
xx
= 0 is hyper-
bolic. Since A = c
2
, B = 0 and C = 1, we have d = B
2
AC = c
2
> 0.
Example 6.2 (Heat Equation). For a given c R, u
y
cu
xx
= 0 is parabolic.
Since A = c, B = 0 and C = 0, thus d = B
2
AC = 0.
Example 6.3 (Laplace equation). u
xx
+u
yy
= 0 is elliptic. Since A = 1, B = 0
and C = 1, thus d = B
2
AC = 1 < 0.
Note that the classication of PDE is dependent on its coecients. Thus,
for constant coecients the type of PDE remains unchanged throughout
the region . However, for variable coecients, the PDE may change its
classication from region to region.
Example 6.4. An example is the Tricomi equation , u
xx
+ xu
yy
= 0. The
discriminant of the Tricomi equation is d = x. Thus, tricomi equation is
hyperbolic when x < 0, elliptic when x > 0 and degenerately parabolic when
x = 0, i.e., y-axis. Such equations are called mixed type.
The notion of classication of second order semi-linear PDE, discussed in
this section, could be generalised to quasi-linear, non-linear PDE and system
of ODE. However, in these cases the classication may also depend on the
solution u, as seen in the examples below.
Example 6.5. Consider the quasi-linear PDE u
xx
uu
yy
= 0. The discrim-
inant is d = u. It is hyperbolic for u > 0
3
, elliptic when u < 0 and
parabolic when u = 0.
Example 6.6. Consider the quasi-linear PDE
(c
2
u
2
x
)u
xx
2u
x
u
y
u
xy
+ (c
2
u
2
y
)u
yy
= 0
where c > 0. Then d = B
2
AC = c
2
(u
2
x
+ u
2
y
c
2
) = c
2
([u[
2
c
2
). It is
hyperbolic if [u[ > c, parabolic if [u[ = c and elliptic if [u[ < c.
3
The notation u > 0 means x [ u(x) > 0
CHAPTER 6. SECOND ORDER PDE 65
Exercise 20. Find the family of characteristic curves for the following second
order PDE, whenever they exist.
(i) For a given c R, u
yy
c
2
u
xx
= 0.
(ii) For a given c R, u
y
cu
xx
= 0.
(iii) u
xx
+ u
yy
= 0.
(iv) u
xx
+ xu
yy
= 0.
Proof. (i) We have already seen the equation is hyperbolic and hence it
should have two characteristic curves. The characteristic curves are
given by the equation
dy
dx
=
B

B
2
AC
A
=

c
2
c
2
=
1
c
.
Thus, cy x = a constant is the equation of the two characteristic
curves.
(ii) We have already seen the equation is parabolic and hence it should
have one characteristic curve. The characteristic curve are given by the
equation
dy
dx
=
B

B
2
AC
A
= 0.
Thus, y = a constant is the equation of the characteristic curve.
(iii) We have already seen the equation is elliptic and hence has no real
characteristics.
(iv) The equation is of mixed type. In the region x > 0, the characteristic
curves are y 2x
3/2
/3 = a constant.
Exercise 21. Classify the following PDE and nd their characteristics, when
it exists:
(a) u
xx
+ (5 + 2y
2
)u
xy
+ (1 + y
2
)(4 + y
2
)u
yy
= 0.
(b) yu
xx
+ u
yy
= 0.
CHAPTER 6. SECOND ORDER PDE 66
(c) yu
xx
= xu
yy
.
(d) u
yy
xu
xy
+ yu
x
+ xu
y
= 0.
(e) y
2
u
xx
+ 2xyu
xy
+ x
2
u
yy
= 0.
(f) u
xx
+ 2xu
xy
+ (1 y
2
)u
yy
= 0.
6.1.3 Invariance of Discriminant
The classication of second order semi-linear PDE is based on the discrimi-
nant B
2
AC. In this section, we note that the classication is independent
of the choice of coordinate system (to represent a PDE). Consider the two-
variable semilinear PDE
A(x, y)u
xx
+2B(x, y)u
xy
+C(x, y)u
yy
= D(x, y, u, u
x
, u
y
) (x, y) (6.1.3)
where the variables (x, y, u, u
x
, u
y
) may appear non-linearly in D and R
2
.
Also, one of the coecients A, B or C is identically non-zero (else the PDE is
not of second order). We shall observe how (6.1.3) changes under coordinate
transformation.
Denition 6.1.3. For any PDE of the form (6.1.3) we dene its discrimi-
nant as B
2
AC.
Let T : R
2
R
2
be the coordinate transformation with components
T = (w, z), where w, z : R
2
R. We assume that w(x, y), z(x, y) are such
that w, z are both continuous and twice dierentiable w.r.t (x, y), and the
Jacobian J of T is non-zero,
J =

w
x
w
y
z
x
z
y

,= 0.
We compute the derivatives of u in the new variable,
u
x
= u
w
w
x
+ u
z
z
x
,
u
y
= u
w
w
y
+ u
z
z
y
,
u
xx
= u
ww
w
2
x
+ 2u
wz
w
x
z
x
+ u
zz
z
2
x
+ u
w
w
xx
+ u
z
z
xx
u
yy
= u
ww
w
2
y
+ 2u
wz
w
y
z
y
+ u
zz
z
2
y
+ u
w
w
yy
+ u
z
z
yy
u
xy
= u
ww
w
x
w
y
+ u
wz
(w
x
z
y
+ w
y
z
x
) + u
zz
z
x
z
y
+ u
w
w
xy
+ u
z
z
xy
CHAPTER 6. SECOND ORDER PDE 67
Substituting above equations in (6.1.3), we get
a(w, z)u
ww
+ 2b(w, z)u
wz
+ c(w, z)u
zz
= d(w, z, u, u
w
, u
z
).
where D transforms in to d and
a(w, z) = Aw
2
x
+ 2Bw
x
w
y
+ Cw
2
y
(6.1.4)
b(w, z) = Aw
x
z
x
+ B(w
x
z
y
+ w
y
z
x
) + Cw
y
z
y
(6.1.5)
c(w, z) = Az
2
x
+ 2Bz
x
z
y
+ Cz
2
y
. (6.1.6)
Note that the coecients in the new coordinate system satisfy
b
2
ac = (B
2
AC)J
2
.
Since J ,= 0, we have J
2
> 0. Thus, both b
2
ac and B
2
AC have the
same sign. Thus, the sign of the discriminant is invariant under coordinate
transformation. All the above arguments can be carried over to quasi-linear
and non-linear PDE.
6.1.4 Standard or Canonical Forms
The advantage of above classication helps us in reducing a given PDE into
simple forms. Given a PDE, one can compute the sign of the discriminant
and depending on its clasication we can choose a coordinate transformation
(w, z) such that
(i) For hyperbolic, a = c = 0 or b = 0 and a = c.
(ii) For parabolic, c = b = 0 or a = b = 0. We conveniently choose
c = b = 0 situation so that a ,= 0 (so that division by zero is avoided in
the equation for characteristic curves).
(iii) For elliptic, b = 0 and a = c.
If the given second order PDE (6.1.3) is such that A = C = 0, then
(6.1.3) is of hyperbolic type and a division by 2B (since B ,= 0) gives
u
xy
=

D(x, y, u, u
x
, u
y
)
where

D = D/2B. The above form is the rst standard form of second order
hyperbolic equation. If we introduce the linear change of variable X = x +y
CHAPTER 6. SECOND ORDER PDE 68
and Y = x y in the rst standard form, we get the second standard form
of hyperbolic PDE
u
XX
u
Y Y
=

D(X, Y, u, u
X
, u
Y
).
If the given second order PDE (6.1.3) is such that A = B = 0, then
(6.1.3) is of parabolic type and a division by C (since C ,= 0) gives
u
yy
=

D(x, y, u, u
x
, u
y
)
where

D = D/C. The above form is the standard form of second order
parabolic equation.
If the given second order PDE (6.1.3) is such that A = C and B = 0,
then (6.1.3) is of elliptic type and a division by A (since A ,= 0) gives
u
xx
+ u
yy
=

D(x, y, u, u
x
, u
y
)
where

D = D/A. The above form is the standard form of second order
elliptic equation.
Note that the standard forms of the PDE is an expression with no mixed
derivatives.
6.1.5 Reduction to Standard Form
Consider the second order semi-linear PDE (6.1.3) not in standard form. We
look for transformation w = w(x, y) and z = z(x, y), with non-vanishing
Jacobian, such that the reduced form is the standard form.
If B
2
AC > 0, we have two characteristics. We are looking for the
coordinate system w and z such that a = c = 0. This implies from equation
(6.1.4) and (6.1.6) that we need to nd w and z such that
w
x
w
y
=
B

B
2
AC
A
=
z
x
z
y
.
Therefore, we need to nd w and z such that along the slopes of the charac-
teristic curves,
dy
dx
=
B

B
2
AC
A
=
w
x
w
y
.
This means that, using the parametrisation of the characteristic curves,
w
x

1
(s) + w
y

2
(s) = 0 and

w(s) = 0. Similarly for z. Thus, w and z
are chosen such that they are constant on the characteristic curves.
CHAPTER 6. SECOND ORDER PDE 69
The characteristic curves are found by solving
dy
dx
=
B

B
2
AC
A
and the coordinates are then chosen such that along the characteristic curve
w(x, y) = a constant and z(x, y) = a constant.
Example 6.7. Let us reduce the PDE u
xx
x
2
yu
yy
= 0 given in the region
(x, y) [ x R, x ,= 0, y > 0 to its canonical form. Note that A = 1, B = 0,
C = x
2
y and B
2
AC = x
2
y. In the given region x
2
y > 0, hence the
equation is hyperbolic. The characteristic curves are given by the equation
dy
dx
=
B

B
2
AC
A
= x

y.
Solving we get x
2
/2 2

y = a constant. Thus, w = x
2
/2 + 2

y and
z = x
2
/2 2

y. Now writing
u
x
= u
w
w
x
+ u
z
z
x
= x(u
w
+ u
z
)
u
y
= u
w
w
y
+ u
z
z
y
=
1

y
(u
w
u
z
)
u
xx
= u
ww
w
2
x
+ 2u
wz
w
x
z
x
+ u
zz
z
2
x
+ u
w
w
xx
+ u
z
z
xx
= x
2
(u
ww
+ 2u
wz
+ u
zz
) + u
w
+ u
z
u
yy
= u
ww
w
2
y
+ 2u
wz
w
y
z
y
+ u
zz
z
2
y
+ u
w
w
yy
+ u
z
z
yy
=
1
y
(u
ww
2u
wz
+ u
zz
)
1
2y

y
(u
w
u
z
)
x
2
yu
yy
= x
2
(u
ww
2u
wz
+ u
zz
) +
x
2
2

y
(u
w
u
z
)
Substituting into the given PDE, we get
0 = 4x
2
u
wz
+
2

y + x
2
2

y
u
w
+
2

y x
2
2

y
u
z
= 8x
2

yu
wz
+ (x
2
+ 2

y)u
w
+ (2

y x
2
)u
z
.
Note that w + z = x
2
and w z = 4

y. Now, substituting x, y in terms of


w, z, we get
0 = 2(w
2
z
2
)u
wz
+
_
w + z +
w z
2
_
u
w
+
_
w z
2
w z
_
u
z
= u
wz
+
_
3w + z
4(w
2
z
2
)
_
u
w

_
w + 3z
4(w
2
z
2
)
_
u
z
.
CHAPTER 6. SECOND ORDER PDE 70
In the parabolic case, B
2
AC = 0, we have a single characteristic. We
are looking for a coordinate system such that either b = c = 0.
Example 6.8. Let us reduce the PDE e
2x
u
xx
+ 2e
x+y
u
xy
+ e
2y
u
yy
= 0 to its
canonical form. Note that A = e
2x
, B = e
x+y
, C = e
2y
and B
2
AC = 0.
The PDE is parabolic. The characteristic curves are given by the equation
dy
dx
=
B
A
=
e
y
e
x
.
Solving, we get e
y
e
x
= a constant. Thus, w = e
y
e
x
. Now, we
choose z such that w
x
z
y
w
y
z
x
,= 0. For instance, z = x is one such choice.
Then
u
x
= e
x
u
w
+ u
z
u
y
= e
y
u
w
u
xx
= e
2x
u
ww
+ 2e
x
u
wz
+ u
zz
e
x
u
w
u
yy
= e
2y
u
ww
+ e
y
u
w
u
xy
= e
y
(e
x
u
ww
u
wz
)
Substituting into the given PDE, we get
e
x
e
y
u
zz
= (e
y
e
x
)u
w
Replacing x, y in terms of w, z gives
u
zz
=
w
1 + we
z
u
w
.
In the elliptic case, B
2
AC < 0, we have no real characteristics. Thus,
we choose w, z to be the real and imaginary part of the solution of the
characteristic equation.
Example 6.9. Let us reduce the PDE x
2
u
xx
+ y
2
u
yy
= 0 given in the region
(x, y) R
2
[ x > 0, y > 0 to its canonical form. Note that A = x
2
,
B = 0, C = y
2
and B
2
AC = x
2
y
2
< 0. The PDE is elliptic. Solving the
characteristic equation
dy
dx
=
iy
x
CHAPTER 6. SECOND ORDER PDE 71
we get ln x i ln y = c. Let w = ln x and z = ln y. Then
u
x
= u
w
/x
u
y
= u
z
/y
u
xx
= u
w
/x
2
+ u
ww
/x
2
u
yy
= u
z
/y
2
+ u
zz
/y
2
Substituting into the PDE, we get
u
ww
+ u
zz
= u
w
+ u
z
.
Example 6.10. Let us reduce the PDE u
xx
+2u
xy
+5u
yy
= xu
x
to its canonical
form. Note that A = 1, B = 1, C = 5 and B
2
AC = 4 < 0. The PDE is
elliptic. The characteristic equation is
dy
dx
= 1 2i.
Solving we get x y i2x = c. Let w = x y and z = 2x. Then
u
x
= u
w
+ 2u
z
u
y
= u
w
u
xx
= u
ww
+ 4u
wz
+ 4u
zz
u
yy
= u
ww
u
xy
= (u
ww
+ 2u
wz
)
Substituting into the PDE, we get
u
ww
+ u
zz
= x(u
w
+ 2u
z
)/4.
Replacing x, y in terms of w, z gives
u
ww
+ u
zz
=
z
8
(u
w
+ 2u
z
).
Exercise 22. Rewrite the PDE in their canonical forms and solve them.
(a) u
xx
+ 2

3u
xy
+ u
yy
= 0
(b) x
2
u
xx
2xyu
xy
+ y
2
u
yy
+ xu
x
+ yu
y
= 0
(c) u
xx
(2 sin x)u
xy
(cos
2
x)u
yy
(cos x)u
y
= 0
(d) u
xx
+ 4u
xy
+ 4u
yy
= 0
CHAPTER 6. SECOND ORDER PDE 72
6.1.6 Classication in n-variable
Our aim now is to generalise the classication of a two variable semi-linear
PDE to a n-variable quasi-linear PDE, for n 2. Note that in the standard
form with no mixed derivatives, the sign of the cocients of highest order
derivative are opposite for hyperbolic and same for elliptic. For parabolic,
one of the coecient of highest order vanishes. We shall use this approach
to generalise the classication to multi-variable PDE. Consider the general
second order qausi-linear PDE with n independent variable
A(x, u, u) D
2
u + B(x, u) u = D(x),
where A = A
ij
is an n n matrix with entries A
ij
(x, u, u), D
2
u is the
Hessian matrix and B = (B
i
). The rst dot product in second order part
(involving A) is in R
n
2
and the second dot product, involving B, is in R
n
.
We consider the above PDE at a specic point x
0
. Let u(x
0
) = u
0
and
A
0
= A(x
0
, u
0
, u(x
0
)). Thus, locally the PDE can be approximated by a
constant coecient equation whose highest order term involves the matrix
multiplication A
0
D
2
u, where A
0
is a nn matrix with constant entries. We
assume the mixed derivatives to be equal and also assume A
0
is symmetric.
In fact if A
0
is not symmetric, we can replace A
0
with
1
2
(A
0
+ A
t
0
), which is
symmetric. Motivated from our experience in classifying semi-linear PDE in
two variable, we look for the suitable coordinate transformation. Let T be a
nn matrix with constant coecients which transforms w = Tx. Applying,

x
i

k=1
w
k
x
i

w
k
in A
0
D
2
x
u, we get TA
0
T
t
D
2
w
u, where the subscript in the Hessian matrix
denotes the variable in which the derivatives are taken and T
t
denotes the
transpose of the coordinate transformation matrix T.
We know from linear algebra that any real symmetric matrix can always
be diagonalised. Since A
0
was assumed to be symmetric, we can choose the
coordinate transformation T to be the one that diagonalises the matrix A
0
.
Thus, choosing T to be the diagonalising matrix of A
0
, we get TA
0
T
t
as a
diagonal matrix with diagonal entries, say
1
,
2
, . . . ,
n
. Since A was real
symmetric all
i
R, for all i. Thus, we classify PDE at a point x
0
based
on the eigenvalues of the matrix A
0
.
We say a PDE is hyperbolic at a point x
0
, where the solution is u
0
, if none
of the eigenvalues
i
of the coecient matrix A
0
vanish and one eigenvalue has
CHAPTER 6. SECOND ORDER PDE 73
a sign opposite to others. We say it is parabolic if only one of
i
vanishes and
remaining have same sign. We say it is elliptic, if none of the eigenvalues
i
vanish and all have same sign. For n > 2, we may have other cases depending
on the number of eigenvalues vanish and the pattern of sign of the remaining
eigenvalues. But we do not deal with such equations in our context.
6.2 The Laplacian
Recall that we introduced Laplacian to be the trace of the Hessain matrix,
:=

n
i=1

2
x
2
i
. The Laplace operator usually appears in physical models
associated with dissipative eects (except wave equation). The importance
of Laplace operator can be realised by its appearance in various physical
models. For instance, the heat equation

t
,
the wave equation

2
t
2
,
or the Schrodingers equation
i

t
+ .
The Laplacian is a linear operator, i.e., (u+v) = u+v and (u) =
u for any constant R.
6.2.1 Laplacian in Dierent Coordinate Systems
As we know, in cartesian coordiantes, the n-dimensional Laplacian is given
as
:=
n

i=1

2
x
2
i
.
In polar coordinates (2 dimensions), the Laplacian is given as
:=
1
r

r
_
r

r
_
+
1
r
2

2
CHAPTER 6. SECOND ORDER PDE 74
where r is the magnitude component (0 r < ) and is the direction
component (0 < 2). The direction component is also called the azimuth
angle or polar angle.
Exercise 23. Show that the two dimensional Laplacian has the representation
:=
1
r

r
_
r

r
_
+
1
r
2

2
in polar coordinates.
Proof. Using the fact that x = r cos and y = r sin , we have
x
r
= cos ,
y
r
= sin and
u
r
= cos
u
x
+ sin
u
y
.
Also,

2
u
r
2
= cos
2

2
u
x
2
+ sin
2

2
u
y
2
+ 2 cos sin

2
u
xy
.
Similarly,
x

= r sin ,
y

= r cos and
u

= r cos
u
y
r sin
u
x
.
Also,
1
r
2

2
u

2
= sin
2

2
u
x
2
+ cos
2

2
u
y
2
2 cos sin

2
u
xy

1
r
u
r
.
Therefore,

2
u
r
2
+
1
r
2

2
u

2
=

2
u
x
2
+

2
u
y
2

1
r
u
r
.
and hence
u =

2
u
r
2
+
1
r
2

2
u

2
+
1
r
u
r
.
In cylindrical coordinates (3 dimensions), the Laplacian is given as
:=
1
r

r
_
r

r
_
+
1
r
2

2
+

2
z
2
CHAPTER 6. SECOND ORDER PDE 75
where r [0, ), [0, 2) and z R. In spherical coordinates (3 dimen-
sions), the Laplacian is given as
:=
1
r
2

r
_
r
2

r
_
+
1
r
2
sin

_
sin

_
+
1
r
2
sin
2

2
where r [0, ), [0, ] (zenith angle or inclination) and [0, 2)
(azimuth angle).
6.2.2 Harmonic Functions
The one dimensional Laplace equation is a ODE and is solvable with solu-
tions u(x) = ax + b for some constants a and b. But in higher dimensions
solving Laplace equation is not so simple. For instance, a two dimensional
Laplace equation
u
xx
+ u
yy
= 0
has the trivial solution as all one degree polynomials of two variables, u(x, y) =
ax +by +c. In addition, xy, x
2
y
2
, e
x
sin y and e
x
cos y are all solutions to
the two variable Laplace equation. In more than two dimensions, it is trivial
to check that all polynomials up to degree one solve the Laplace equation
u = 0, i.e.,

||1
a

is a solution to u = 0. But we also have functions of higher degree and


functions not expressible in terms of elementary functions as solutions to
Laplace equation. For instance, note that u(x) =

n
i=1
x
i
is a solution to
u = 0. Give the importance of this class of solutions (which do not seem
to have a general formula), we name this class of solutions and study their
properties.
Denition 6.2.1. Let be an open subset of R
n
. A function u C
2
() is
said to be harmonic on if u(x) = 0 in .
We already remarked that every scalar potential is a harmonic function.
Gauss was the rst to deduce some important properties of harmonic func-
tions and thus laid the foundation for Potential theory and Harmonic Anal-
ysis.
CHAPTER 6. SECOND ORDER PDE 76
Due to the linearity of , sum of any nite number of harmonic functions
is harmonic and a scalar multiple of a harmonic function is harmonic. More-
over, harmonic functions can be viewed as the kernel of the Laplace operator,
say from C
2
to the space of continuous functions. Our aim in this section is
to understand the properties of harmonic functions.
Theorem 6.2.2. Let be a bounded open subset of R
n
. Let u : R be
a continuous function which is twice continuously dierentiable in , such
that u is harmonic in . Then
max

u = max

u.
Proof. Recall that if a function v attains maximum at a point x , then
the Hessian matrix of v is negative semi-denite. Think of negative semi-
denite as implying that every element of the Hessian matrix is negative. For
the harmonic function u and for a xed > 0, we set v

(x) = u(x) + [x[


2
,
for each x . For each x , v

= u + 2n > 0. v

does not attain


maximum in . But v

has to have a maximum, hence it should be attained


at some point x

on the boundary. For all x ,


u(x) v

(x) v

(x

) = u(x

) + [x

[
2
max
x
u(x) + max
x
[x[
2
.
The above inequality is true for all > 0. Thus, u(x) max
x
u(x), for
all x . Therefore, max

u max
x
u(x). The reverse inequality is true
because and hence we have equality.
Theorem 6.2.3 (Uniqueness of Harmonic Functions). Let be an open,
bounded subset of R
n
. Let u
1
, u
2
C() be harmonic in such that u
1
= u
2
on , then u
1
= u
2
in .
Proof. Note that u
1
u
2
is a harmonic function and hence, by weak maximum
principle, should attain its maximum on . But u
1
u
2
= 0 on . Thus
u
1
u
2
0 in . Now, repeat the argument for u
2
u
1
, we get u
2
u
1
0
in . Thus, we get u
1
u
2
= 0 in .
6.3 Separation Of Variable
The method of separation of variables was introduced by dAlembert (1747)
and Euler (1748) for the wave equation. This technique was also employed
CHAPTER 6. SECOND ORDER PDE 77
by Laplace (1782) and Legendre (1782) while studying the Laplace equation
and also by Fourier while studying the heat equation. The motivation behind
the separation of variable technique will highlighted while studying wave
equation.
Recall that we said for Second order Cauchy problem we need two infor-
mation on the data curve , one on u and one along the normal derivative of
u. However, it turns out that the Cauchy problem for Laplacian is not well-
posed. In fact, Laplace equation on a bounded domain is over-determined.
Hence we cannot specify both u and Du on . Thus, we either choose
to specify the value of u (Dirichlet condition) or the normal derivative of u
(Neumann condition) on .
Exercise 24. After you learn separation of variable technique, use it to show
that the Cauchy problem for Laplace equation
_
_
_
u
xx
+ u
yy
= 0
u(x, 0) = 0
u
y
(x, 0) = k
1
sin kx,
where k > 0, is not well-posed. (Hint: Compute explicit solution using sep-
aration of variable. Note that, as k , the Cauchy data tends to zero
uniformly, but the solution does not converge to zero for any y ,= 0. There-
fore, a small change from zero Cauchy data (with corresponding solution
being zero) may induce bigger change in the solution.)
6.3.1 Dirichlet Boundary Condition
Let R
n
be a bounded open subset with a boundary . Let g : R
be a continuous function. Then the Dirichlet problem is to nd a harmonic
function u : R such that
_
u(x) = 0 x
u(x) = g(x) x .
(6.3.1)
2D Rectangle
Let = (x, y) R
2
[ 0 < x < a and 0 < y < b be a rectangle of sides
a, b. Let g : R which vanishes on three sides of the rectangle, i.e.,
g(0, y) = g(x, 0) = g(a, y) = 0 and g(x, b) = h(x) where h is a continuous
CHAPTER 6. SECOND ORDER PDE 78
function h(0) = h(a) = 0. We want to solve (6.3.1) on this rectangle with
given boundary value g.
We begin by looking for solution u(x, y) whose variables are separated,
i.e., u(x, y) = v(x)w(y). Substituting this form of u in the Laplace equation,
we get
v

(x)w(y) + v(x)w

(y) = 0.
Hence
v

(x)
v(x)
=
w

(y)
w(y)
.
Since LHS is function of x and RHS is function y, they must equal a constant,
say . Thus,
v

(x)
v(x)
=
w

(y)
w(y)
= .
Using the boundary condition on u, u(0, y) = g(0, y) = g(a, y) = u(a, y) =
0, we get v(0)w(y) = v(a)w(y) = 0. If w 0, then u 0 which is not a
solution to (6.3.1). Hence, w , 0 and v(0) = v(a) = 0. Thus, we need to
solve,
_
v

(x) = v(x), x (0, a)


v(0) = v(a) = 0,
the eigen value problem for the second order dierential operator. Note that
the can be either zero, positive or negative. If = 0, then v

= 0 and
the general solution is v(x) = x + , for some constants and . Since
v(0) = 0, we get = 0, and v(a) = 0 and a ,= 0 implies that = 0. Thus,
v 0 and hence u 0. But, this can not be a solution to (6.3.1).
If > 0, then v(x) = e

x
+ e

x
. Equivalently,
v(x) = c
1
cosh(

x) + c
2
sinh(

x)
such that = (c
1
+c
2
)/2 and = (c
1
c
2
)/2. Using the boundary condition
v(0) = 0, we get c
1
= 0 and hence
v(x) = c
2
sinh(

x).
Now using v(a) = 0, we have c
2
sinh

a = 0. Thus, c
2
= 0 and v(x) = 0.
We have seen this cannot be a solution.
If < 0, then set =

. We need to solve
_
v

(x) +
2
v(x) = 0 x (0, a)
v(0) = v(a) = 0.
CHAPTER 6. SECOND ORDER PDE 79
The general solution is
v(x) = cos(x) + sin(x).
Using the boundary condition v(0) = 0, we get = 0 and hence v(x) =
sin(x). Now using v(a) = 0, we have sin a = 0. Thus, either = 0
or sin a = 0. But = 0 does not yield a solution. Hence a = k or
= k/a, for all non-zero k Z. Hence, for each k N, there is a solution
(v
k
,
k
) for (6.3.1), with
v
k
(x) =
k
sin
_
kx
a
_
,
for some constant b
k
and
k
= (k/a)
2
. We have solved for v. it now
remains to solve w for these
k
. For each k N, we solve for w
k
in the ODE
_
w

k
(y) =
_
k
a
_
2
w
k
(y), y (0, b)
w(0) = 0.
Thus, w
k
(y) = c
k
sinh(ky/a). For each k N,
u
k
=
k
sin
_
kx
a
_
sinh
_
ky
a
_
is a solution to (6.3.1). The general solution is of the form (principle of
superposition) (convergence?)
u(x, y) =

k=1

k
sin
_
kx
a
_
sinh
_
ky
a
_
.
We shall now use the condition u(x, b) = h(x) to nd the solution to the
Dirichlet problem (6.3.1).
h(x) = u(x, b) =

k=1

k
sinh
_
kb
a
_
sin
_
kx
a
_
.
Since h(0) = h(a) = 0, we know that h admits a Fourier Sine series. Thus

k
sinh
_
kb
a
_
is the k-th Fourier sine coecient of h, i.e.,

k
=
_
sinh
_
kb
a
__
1
2
a
_
a
0
h(x) sin
_
kx
a
_
.
CHAPTER 6. SECOND ORDER PDE 80
2D Disk
Now that we have solved the Dirichlet problem in a 2D rectangular domain,
we intend to solve the Dirichlet problem in a 2D disk. The Laplace operator
in polar coordinates (2 dimensions),
:=
1
r

r
_
r

r
_
+
1
r
2

2
where r is the magnitude component and is the direction component.
Consider the unit disk in R
2
,
= (x, y) R
2
[ x
2
+ y
2
< 1
and is the circle of radius one. The DP is to nd u(r, ) : R which
is well-behaved near r = 0, such that
_
_
_
1
r

r
_
r
u
r
_
+
1
r
2

2
u

2
= 0 in
u(r, + 2) = u(r, ) in
u(1, ) = g() on
(6.3.2)
where g is a 2 periodic function.
We will look for solution u(r, ) whose variables can be separated, i.e.,
u(r, ) = v(r)w() with both v and w non-zero. Substituting it in the polar
form of Laplacian, we get
w
r
d
dr
_
r
dv
dr
_
+
v
r
2
d
2
w
d
2
= 0
and hence
r
v
d
dr
_
r
dv
dr
_
=
1
w
_
d
2
w
d
2
_
.
Since LHS is a function of r and RHS is a function of , they must equal a
constant, say . We need to solve the eigen value problem,
_
w

() w() = 0 R
w( + 2) = w() .
Note that the can be either zero, positive or negative. If = 0, then
w

= 0 and the general solution is w() = +, for some constants and


. Using the periodicity of w,
+ = w() = w( + 2) = + 2 +
CHAPTER 6. SECOND ORDER PDE 81
implies that = 0. Thus, the pair = 0 and w() = is a solution. If
> 0, then
w() = e

+ e

.
If either of and is non-zero, then w() as , which contra-
dicts the periodicity of w. Thus, = = 0 and w 0, which cannot be a
solution. If < 0, then set =

and the equation becomes


_
w

() +
2
w() = 0 R
w( + 2) = w()
Its general solution is
w() = cos() + sin().
Using the periodicity of w, we get = k where k is an integer. For each
k N, we have the solution (w
k
,
k
) where

k
= k
2
and w
k
() =
k
cos(k) +
k
sin(k).
For the
k
s, we solve for v
k
, for each k = 0, 1, 2, . . .,
r
d
dr
_
r
dv
k
dr
_
= k
2
v
k
.
For k = 0, we get v
0
(r) = log r + . But log r blows up as r 0 and we
wanted a u well behaved near origin. Thus, we must have the = 0. Hence
v
0
. For k N, we need to solve for v
k
in
r
d
dr
_
r
dv
k
dr
_
= k
2
v
k
.
Use the change of variable r = e
s
. Then e
s ds
dr
= 1 and
d
dr
=
d
ds
ds
dr
=
1
e
s
d
ds
.
Hence r
d
dr
=
d
ds
. v
k
(e
s
) = e
ks
+e
ks
. v
k
(r) = r
k
+r
k
. Since r
k
blows
up as r 0, we must have = 0. Thus, v
k
= r
k
. Therefore, for each
k = 0, 1, 2, . . .,
u
k
(r, ) = a
k
r
k
cos(k) + b
k
r
k
sin(k).
The general solution is
u(r, ) =
a
0
2
+

k=1
_
a
k
r
k
cos(k) + b
k
r
k
sin(k)
_
.
CHAPTER 6. SECOND ORDER PDE 82
To nd the constants, we use u(1, ) = g(), hence
g() =
a
0
2
+

k=1
[a
k
cos(k) + b
k
sin(k)] .
Since g is 2-periodic it admits a Fourier series expansion and hence
a
k
=
1

g() cos(k) d,
b
k
=
1

g() sin(k) d.
3D Sphere
Now that we have solved the Dirichlet problem in a 2D disk, we intend to
solve the Dirichlet problem in a 3D sphere. The Laplace operator in spherical
coordinates (3 dimensions),
:=
1
r
2

r
_
r
2

r
_
+
1
r
2
sin

_
sin

_
+
1
r
2
sin
2

2
.
where r is the magnitude component, is the inclination (zenith or elevation)
in the vertical plane and is the azimuth angle (in the direction in horizontal
plane. Consider the unit sphere in R
3
,
= (x, y, z) R
3
[ x
2
+ y
2
+ z
2
< 1
and is the boundary of sphere of radius one. The DP is to nd u(r, , ) :
R which is well-behaved near r = 0, such that
_

_
1
r
2

r
_
r
2 u
r
_
+
1
r
2
sin

_
sin
u

_
+
1
r
2
sin
2

2
u

2
= 0 in
u(1, , ) = g(, ) on .
(6.3.3)
We will look for solution u(r, , ) whose variables can be separated, i.e.,
u(r, , ) = v(r)w()z() with v, w and z non-zero. Substituting it in the
spherical form of Laplacian, we get
wz
r
2
d
dr
_
r
2
dv
dr
_
+
vz
r
2
sin
d
d
_
sin
dw
d
_
+
vw
r
2
sin
2

d
2
z
d
2
= 0
CHAPTER 6. SECOND ORDER PDE 83
and hence
1
v
d
dr
_
r
2
dv
dr
_
=
1
wsin
d
d
_
sin
dw
d
_

1
z sin
2

d
2
z
d
2
.
Since LHS is a function of r and RHS is a function of (, ), they must equal
a constant, say . If Azimuthal symmetry is present then z() is constant
and hence
dz
d
= 0. We need to solve for w,
sin w

() + cos w

() + sin w() = 0, (0, )


Set x = cos . Then
dx
d
= sin .
w

() = sin
dw
dx
and w

() = sin
2

d
2
w
dx
2
cos
dw
dx
In the new variable x, we get the Legendre equation
(1 x
2
)w

(x) 2xw

(x) + w(x) = 0 x [1, 1].


We have already seen that this is a singular problem (while studying S-L
problems). For each k N 0, we have the solution (w
k
,
k
) where

k
= k(k + 1) and w
k
() = P
k
(cos ).
For the
k
s, we solve for v
k
, for each k = 0, 1, 2, . . .,
d
dr
_
r
2
dv
k
dr
_
= k(k + 1)v
k
.
For k = 0, we get v
0
(r) = /r + . But 1/r blows up as r 0 and we
wanted a u well behaved near origin. Thus, we must have the = 0. Hence
v
0
. For k N, we need to solve for v
k
in
d
dr
_
r
2
dv
k
dr
_
= k(k + 1)v
k
.
Use the change of variable r = e
s
. Then e
s ds
dr
= 1 and
d
dr
=
d
ds
ds
dr
=
1
e
s
d
ds
.
Hence r
d
dr
=
d
ds
. Solving for m in the quadratic equation m
2
+m = k(k +1).
m
1
= k and m
2
= k 1. v
k
(e
s
) = e
ks
+ e
(k1)s
. v
k
(r) = r
k
+ r
k1
.
CHAPTER 6. SECOND ORDER PDE 84
Since r
k1
blows up as r 0, we must have = 0. Thus, v
k
= r
k
.
Therefore, for each k = 0, 1, 2, . . .,
u
k
(r, , ) = a
k
r
k
P
k
(cos ).
The general solution is
u(r, , ) =

k=0
a
k
r
k
P
k
(cos ).
Since we have azimuthal symmetry, g(, ) = g(). To nd the constants,
we use u(1, , ) = g(), hence
g() =

k=0
a
k
P
k
(cos ).
Using the orthogonality of P
k
, we have
a
k
=
2k + 1
2
_

0
g()P
k
(cos ) sin d.
6.3.2 Eigenvalue Problem of Laplacian
Recall that we did the eigenvalue problem for the Sturm-Liouville operator,
which was one-dimensional. A similar result is true for Laplacian in all
dimensions. However, we shall just state in two dimensions. For a given
open bounded subset R
2
, the Dirichlet eigenvalue problem,
_
u(x, y) = u(x, y) (x, y)
u(x, y) = 0 (x, y) .
Note that, for all R, zero is a trivial solution of the Laplacian. Thus,
we are interested in non-zero s for which the Laplacian has non-trivial
solutions. Such an is called the eigenvalue and corresponding solution u

is called the eigen function.


Note that if u

is an eigen function corresponding to , then u

, for all
R, is also an eigen function corresponding to . Let W be the real vector
space of all u : R continuous (smooth, as required) functions such that
u(x, y) = 0 on . For each eigenvalue of the Laplacian, we dene the
subspace of W as
W

= u W [ u solves Dirichlet EVP for given .


CHAPTER 6. SECOND ORDER PDE 85
Theorem 6.3.1. There exists an increasing sequence of positive numbers
0 <
1
<
2
<
3
< . . . <
n
< . . . with
n
which are eigenvalues of
the Laplacian and W
n
= W
n
is nite dimensional. Conversely, any solution
u of the Laplacian is in W
n
, for some n.
Though the above theorem assures the existence of eigenvalues for Lapla-
cian, it is usually dicult to compute them for a given . In this course, we
shall compute the eigenvalues when is a 2D-rectangle and a 2D-disk.
In Rectangle
Let the rectangle be = (x, y) R
2
[ 0 < x < a, 0 < y < b. We wish to
solve the Dirichlet EVP in the rectangle
_
u(x, y) = u(x, y) (x, y)
u(x, y) = 0 (x, y) .
The boundary condition amounts to saying
u(x, 0) = u(a, y) = u(x, b) = u(0, y) = 0.
We look for solutions of the form u(x, y) = v(x)w(y) (variable separated).
Substituting u in separated form in the equation, we get
v

(x)w(y) v(x)w

(y) = v(x)w(y).
Hence

(x)
v(x)
= +
w

(y)
w(y)
.
Since LHS is function of x and RHS is function y and are equal they must
be some constant, say . We need to solve the EVPs
v

(x) = v(x) and w

(y) = ( )w(y)
under the boundary conditions v(0) = v(a) = 0 and w(0) = w(b) = 0.
As seen before, while solving for v, we have trivial solutions for 0. If
> 0, then v(x) = c
1
cos(

x)+c
2
sin(

x). Using the boundary condition


v(0) = 0, we get c
1
= 0. Now using v(a) = 0, we have c
2
sin

a = 0. Thus,
either c
2
= 0 or sin

a = 0. We have non-trivial solution, if c


2
,= 0,
then

a = k or

= k/a, for k Z. For each k N, we have
CHAPTER 6. SECOND ORDER PDE 86
v
k
(x) = sin(kx/a) and
k
= (k/a)
2
. We solve for w for each
k
. For each
k, l N, we have w
kl
(y) = sin(ly/b) and
kl
= (k/a)
2
+ (l/b)
2
. For each
k, l N, we have
u
kl
(x, y) = sin(kx/a) sin(ly/b)
and
kl
= (k/a)
2
+ (l/b)
2
.
In Disk
Let the disk of radius a be = (x, y) R
2
[ x
2
+ y
2
< a
2
. We wish to
solve the Dirichlet EVP in the disk
_
_
_
1
r

r
_
r
u
r
_

1
r
2

2
u

2
= u(r, ) (r, )
u() = u( + 2) R
u(a, ) = 0 R.
We look for solutions of the form u(r, ) = v(r)w() (variable separated).
Substituting u in separated form in the equation, we get

w
r
d
dr
_
r
dv
dr
_

v
r
2
w

() = v(r)w().
Hence dividing by vw and multiplying by r
2
, we get

r
v
d
dr
_
r
dv
dr
_

1
w
w

() = r
2
.
r
v
d
dr
_
r
dv
dr
_
+ r
2
=
1
w
w

() = .
Solving for non-trivial w, using the periodicity of w, we get for
0
= 0,
w
0
() =
a
0
2
and for each k N,
k
= k
2
and
w
k
() = a
k
cos k + b
k
sin k.
For each $k \in \mathbb{N} \cup \{0\}$, we have the equation
\[
r\frac{d}{dr}\left(r\frac{dv}{dr}\right) + (\lambda r^2 - k^2)v = 0.
\]
Introduce the change of variable $x = \sqrt{\lambda}\,r$, so that $x^2 = \lambda r^2$. Then
\[
r\frac{d}{dr} = x\frac{d}{dx}.
\]
Rewriting the equation in the new variable $y(x) = v(r)$,
\[
x\frac{d}{dx}\left(x\frac{dy(x)}{dx}\right) + (x^2 - k^2)y(x) = 0.
\]
Note that this is none other than the Bessel's equation. We already know that, for each $k \in \mathbb{N} \cup \{0\}$, we have the Bessel function $J_k$ as a solution to the Bessel's equation. Recall the boundary condition on $v$, $v(a) = 0$. Thus, $y(\sqrt{\lambda}\,a) = 0$. Hence $\sqrt{\lambda}\,a$ should be a zero of the Bessel function. For each $k \in \mathbb{N} \cup \{0\}$, let $z_{kl}$ be the $l$-th zero of $J_k$, $l \in \mathbb{N}$. Hence $\sqrt{\lambda}\,a = z_{kl}$, and so $\lambda_{kl} = z_{kl}^2/a^2$ and $y(x) = J_k(x)$. Therefore, $v(r) = J_k(z_{kl}r/a)$. For each $k \in \mathbb{N} \cup \{0\}$ and $l \in \mathbb{N}$, we have
\[
u_{kl}(r,\theta) = J_k(z_{kl}r/a)\sin(k\theta) \quad \text{or} \quad J_k(z_{kl}r/a)\cos(k\theta)
\]
and
\[
\lambda_{kl} = z_{kl}^2/a^2.
\]
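The zeros $z_{kl}$ of the Bessel functions are not available in closed form, but they are tabulated in standard numerical libraries. A minimal Python sketch (the radius $a$ and the ranges of $k$, $l$ are assumptions chosen for illustration):

# Sketch: Dirichlet eigenvalues of the Laplacian on a disk of radius a,
# lambda_{kl} = (z_{kl}/a)^2, where z_{kl} is the l-th positive zero of J_k.
from scipy.special import jn_zeros, jv

a = 1.0                                        # assumed radius
eigs = []
for k in range(0, 4):                          # Bessel order k
    for l, z in enumerate(jn_zeros(k, 3), start=1):   # first three zeros of J_k
        eigs.append(((z / a)**2, k, l))

for lam, k, l in sorted(eigs)[:6]:
    print(f"k={k}, l={l}, lambda={lam:.4f}")

z01 = jn_zeros(0, 1)[0]
print(abs(jv(0, z01)) < 1e-12)                 # True: v(r) = J_0(z_01 r/a) vanishes at r = a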
6.3.3 Neumann Boundary Condition
The Neumann problem is stated as follows: Given $f : \Omega \to \mathbb{R}$ and $g : \partial\Omega \to \mathbb{R}$, find $u : \bar\Omega \to \mathbb{R}$ such that
\[
\begin{cases}
-\Delta u = f & \text{in } \Omega\\
\dfrac{\partial u}{\partial\nu} = g & \text{on } \partial\Omega
\end{cases}
\tag{6.3.4}
\]
where $\frac{\partial u}{\partial\nu} := \nabla u \cdot \nu$ and $\nu = (\nu_1, \ldots, \nu_n)$ is the outward pointing unit normal vector field of $\partial\Omega$. The boundary condition imposed here is called the Neumann boundary condition. The solution of a Neumann problem is not necessarily unique. If $u$ is any solution of (6.3.4), then $u + c$ for any constant $c$ is also a solution of (6.3.4). More generally, for any $v$ such that $v$ is constant on the connected components of $\Omega$, $u + v$ is a solution of (6.3.4). Moreover, if $u$ is a solution of the Neumann problem (6.3.4), then $u$ satisfies, for every connected component $\omega$ of $\Omega$,
\[
\int_\omega \Delta u\, dx = \int_{\partial\omega} \nabla u \cdot \nu\, d\sigma \quad \text{(using the Gauss divergence theorem)},
\]
that is,
\[
-\int_\omega f\, dx = \int_{\partial\omega} g\, d\sigma.
\]
The second equality is called the compatibility condition. Thus, if (6.3.4) is solvable then, necessarily, the given functions $f, g$ must satisfy the compatibility condition.
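In one space dimension the compatibility condition can be verified by hand; the following Python sketch (with an assumed example on $\Omega = (0,1)$, where the outward normal is $-1$ at $x = 0$ and $+1$ at $x = 1$) checks it symbolically.

# Sketch (assumed example): check the compatibility condition
# int_Omega f dx + int_{boundary} g dsigma = 0 on Omega = (0,1)
# for the data produced by u(x) = x**3.
import sympy as sp

x = sp.symbols('x')
u = x**3
f = -sp.diff(u, x, 2)                  # f = -u'' = -6x
g0 = -sp.diff(u, x).subs(x, 0)         # g at x = 0 (normal nu = -1)
g1 = sp.diff(u, x).subs(x, 1)          # g at x = 1 (normal nu = +1)

print(sp.integrate(f, (x, 0, 1)) + g0 + g1)   # 0, so (f, g) is compatible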
Chapter 7
Heat Equation: One Space Variable
Let a homogeneous material occupy a region $\Omega \subset \mathbb{R}^n$ with $C^1$ boundary. Let $k$ denote the thermal conductivity (a dimensionless quantity) and $c$ be the heat capacity of the material. Let $u(x,t)$ be a function plotting the temperature of the material at $x \in \Omega$ at time $t$. The thermal energy stored at $x \in \Omega$, at time $t$, is $cu(x,t)$. If $v(x,t)$ denotes the velocity at $(x,t)$, then, by Fourier's law, the thermal energy changes following the gradients of temperature, i.e.,
\[
cu(x,t)v(x,t) = -k\nabla u.
\]
The thermal energy is the quantity that is conserved (conservation law) and satisfies the continuity equation (4.3.1). Thus, we have
\[
\frac{\partial u}{\partial t} - \frac{k}{c}\Delta u = 0.
\]
If the material occupying the region is non-homogeneous or anisotropic, the temperature gradient may generate heat in preferred directions, which themselves may depend on $x \in \Omega$. Thus, the conductivity of such a material at $x \in \Omega$, at time $t$, is given by an $n \times n$ matrix $K(x,t) = (k_{ij}(x,t))$. In this case, the heat equation becomes
\[
\frac{\partial u}{\partial t} - \operatorname{div}\left(\frac{1}{c}K\nabla u\right) = 0.
\]
The heat equation is an example of a second order equation in divergence form. The heat equation gives the temperature distribution $u(x,t)$ of the material with conductivity $k$ and capacity $c$. In general, we may choose $k/c = 1$, since, for any $k$ and $c$, we may rescale our time scale $t \mapsto (k/c)t$.
7.1 On a Rod
The equation governing the heat propagation in a bar of length $L$ is
\[
\frac{\partial u}{\partial t} = \frac{1}{\sigma(x)\rho(x)}\frac{\partial}{\partial x}\left(\kappa(x)\frac{\partial u}{\partial x}\right)
\]
where $\sigma(x)$ is the specific heat at $x$, $\rho(x)$ is the density of the bar at $x$ and $\kappa(x)$ is the thermal conductivity of the bar at $x$. If the bar is homogeneous, i.e., its properties are the same at every point, then
\[
\frac{\partial u}{\partial t} = \frac{\kappa}{\sigma\rho}\frac{\partial^2 u}{\partial x^2}
\]
with $\sigma$, $\rho$, $\kappa$ being constants.
Let $L$ be the length of a homogeneous rod insulated along its sides and whose ends are kept at zero temperature. Then the temperature $u(x,t)$ at every point $x$ of the rod, $0 \le x \le L$, and time $t \ge 0$ is given by the equation
\[
\frac{\partial u}{\partial t} = c^2\frac{\partial^2 u}{\partial x^2}
\]
where $c$ is a constant. The zero temperature at the end points is given by the Dirichlet boundary condition
\[
u(0,t) = u(L,t) = 0.
\]
Also given is the initial temperature of the rod at time $t = 0$, $u(x,0) = g(x)$, where $g$ is given (or known) such that $g(0) = g(L) = 0$. Given $g : [0,L] \to \mathbb{R}$ such that $g(0) = g(L) = 0$, we look for all the solutions of the Dirichlet problem
\[
\begin{cases}
u_t(x,t) - c^2 u_{xx}(x,t) = 0 & \text{in } (0,L) \times (0,\infty)\\
u(0,t) = u(L,t) = 0 & \text{in } (0,\infty)\\
u(x,0) = g(x) & \text{on } [0,L].
\end{cases}
\]
We look for $u(x,t) = v(x)w(t)$ (variables separated). Substituting $u$ in separated form in the equation, we get
\[
v(x)w'(t) = c^2 v''(x)w(t),
\]
\[
\frac{w'(t)}{c^2 w(t)} = \frac{v''(x)}{v(x)}.
\]
Since the LHS is a function of $t$ and the RHS is a function of $x$ and they are equal, they must equal some constant, say $\lambda$. Thus,
\[
\frac{w'(t)}{c^2 w(t)} = \frac{v''(x)}{v(x)} = \lambda.
\]
Thus we need to solve two ODEs to get $v$ and $w$,
\[
w'(t) = \lambda c^2 w(t)
\]
and
\[
v''(x) = \lambda v(x).
\]
But we already know how to solve the eigenvalue problem involving $v$. For each $k \in \mathbb{N}$, we have the pair $(\lambda_k, v_k)$ as solutions to the EVP involving $v$, where $\lambda_k = -(k\pi/L)^2$ and $v_k(x) = b_k\sin(k\pi x/L)$, for some constants $b_k$. For each $k \in \mathbb{N}$, we solve for $w_k$ to get
\[
\ln w_k(t) = \lambda_k c^2 t + \ln\alpha,
\]
where $\alpha$ is an integration constant. Thus, $w_k(t) = \alpha e^{-(k\pi c/L)^2 t}$. Hence,
\[
u_k(x,t) = v_k(x)w_k(t) = \beta_k\sin\left(\frac{k\pi x}{L}\right)e^{-(k\pi c/L)^2 t},
\]
for some constants $\beta_k$, is a solution to the heat equation. By the superposition principle, the general solution is
\[
u(x,t) = \sum_{k=1}^{\infty} u_k(x,t) = \sum_{k=1}^{\infty}\beta_k\sin\left(\frac{k\pi x}{L}\right)e^{-(k\pi c/L)^2 t}.
\]
We now use the initial temperature of the rod, given as $g : [0,L] \to \mathbb{R}$, to find the particular solution of the heat equation. We are given $u(x,0) = g(x)$. Thus,
\[
g(x) = u(x,0) = \sum_{k=1}^{\infty}\beta_k\sin\left(\frac{k\pi x}{L}\right).
\]
Since $g(0) = g(L) = 0$, we know that $g$ admits a Fourier sine series expansion and hence its coefficients $\beta_k$ are given as
\[
\beta_k = \frac{2}{L}\int_0^L g(x)\sin\left(\frac{k\pi x}{L}\right) dx.
\]
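As a concrete illustration (the initial datum $g$ and the truncation level below are assumptions made for this sketch, not data from the notes), the truncated series can be evaluated numerically:

# Sketch: truncated series solution of u_t = c^2 u_xx on (0,L), u(0,t) = u(L,t) = 0,
# with the assumed initial temperature g(x) = x(L-x), whose Fourier sine
# coefficients are beta_k = 8 L^2/(pi^3 k^3) for odd k and 0 for even k.
import numpy as np

L, c, N = 1.0, 1.0, 101                  # assumed length, diffusivity, truncation level
k = np.arange(1, N + 1)
beta = np.where(k % 2 == 1, 8 * L**2 / (np.pi**3 * k**3), 0.0)

def u(x, t):
    return np.sum(beta * np.sin(k * np.pi * x / L) * np.exp(-(k * np.pi * c / L)**2 * t))

print(u(0.5, 0.0))    # ~ g(0.5) = 0.25
print(u(0.5, 0.1))    # strictly smaller: the rod cools down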
7.2 On a Circular Wire
We intend to solve the heat equation in a circle (circular wire) of radius one which is insulated along its sides. Then the temperature $u(\theta,t)$ at every point $\theta$ of the circle, $\theta \in \mathbb{R}$, and time $t \ge 0$ is given by the equation
\[
\frac{\partial u}{\partial t} = c^2\frac{\partial^2 u}{\partial\theta^2}
\]
where $c$ is a constant. We note that now $u(\theta,t)$ is $2\pi$-periodic in the variable $\theta$. Thus,
\[
u(\theta + 2\pi, t) = u(\theta, t) \quad \forall\, \theta \in \mathbb{R},\ t \ge 0.
\]
Let the initial temperature of the wire at time $t = 0$ be $u(\theta,0) = g(\theta)$, where $g$ is a given $2\pi$-periodic function. Given a $2\pi$-periodic function $g : \mathbb{R} \to \mathbb{R}$, we look for all solutions of
\[
\begin{cases}
u_t(\theta,t) - c^2 u_{\theta\theta}(\theta,t) = 0 & \text{in } \mathbb{R} \times (0,\infty)\\
u(\theta + 2\pi, t) = u(\theta,t) & \text{in } \mathbb{R} \times (0,\infty)\\
u(\theta,0) = g(\theta) & \text{on } \mathbb{R} \times \{t = 0\}.
\end{cases}
\]
We look for $u(\theta,t) = v(\theta)w(t)$ with variables separated. Substituting for $u$ in the equation, we get
\[
\frac{w'(t)}{c^2 w(t)} = \frac{v''(\theta)}{v(\theta)} = \lambda.
\]
For each $k \in \mathbb{N} \cup \{0\}$, the pair $(\lambda_k, v_k)$ is a solution to the EVP, where $\lambda_k = -k^2$ and
\[
v_k(\theta) = a_k\cos(k\theta) + b_k\sin(k\theta).
\]
For each $k \in \mathbb{N} \cup \{0\}$, we get $w_k(t) = e^{-(kc)^2 t}$. For $k = 0$,
\[
u_0(\theta,t) = a_0/2 \quad \text{(to maintain consistency with the Fourier series)}
\]
and for each $k \in \mathbb{N}$, we have
\[
u_k(\theta,t) = \left[a_k\cos(k\theta) + b_k\sin(k\theta)\right]e^{-k^2 c^2 t}.
\]
Therefore, the general solution is
\[
u(\theta,t) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos(k\theta) + b_k\sin(k\theta)\right]e^{-k^2 c^2 t}.
\]
We now use the initial temperature on the circle to find the particular solution. We are given $u(\theta,0) = g(\theta)$. Thus,
\[
g(\theta) = u(\theta,0) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos(k\theta) + b_k\sin(k\theta)\right].
\]
Since $g$ is $2\pi$-periodic, it admits a Fourier series expansion and hence
\[
a_k = \frac{1}{\pi}\int_{-\pi}^{\pi} g(\theta)\cos(k\theta)\, d\theta, \qquad
b_k = \frac{1}{\pi}\int_{-\pi}^{\pi} g(\theta)\sin(k\theta)\, d\theta.
\]
Note that as $t \to \infty$ the temperature of the wire approaches the constant $a_0/2$.
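The same computation is easy to carry out numerically; the sketch below (the initial temperature $g$ is an assumption chosen for illustration) approximates $a_k$, $b_k$ by quadrature and shows the decay towards the mean $a_0/2$.

# Sketch: heat flow on the unit circular wire for an assumed 2*pi-periodic
# initial temperature g; the solution decays to the mean value a_0/2.
import numpy as np

c, N = 1.0, 30
g = lambda th: np.exp(np.cos(th))              # assumed initial temperature

th = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dth = th[1] - th[0]
a = np.array([np.sum(g(th) * np.cos(k * th)) * dth / np.pi for k in range(N + 1)])
b = np.array([np.sum(g(th) * np.sin(k * th)) * dth / np.pi for k in range(N + 1)])

def u(theta, t):
    k = np.arange(1, N + 1)
    return a[0]/2 + np.sum((a[1:] * np.cos(k * theta) + b[1:] * np.sin(k * theta))
                           * np.exp(-(k * c)**2 * t))

print(u(1.0, 0.0))             # ~ g(1.0)
print(u(1.0, 5.0), a[0] / 2)   # both close to the mean temperature a_0/2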
Exercise 25. Solve the heat equation on the 2D rectangle and the 2D disk.
7.3 Duhamel's Principle: Inhomogeneous Equation
In this section we solve the inhomogeneous heat equation using Duhamel's principle. Duhamel's principle states that one can obtain a solution of the inhomogeneous IVP for the heat equation from its homogeneous IVP.
For a given $f$, let $u(x,t)$ be the solution of the inhomogeneous heat equation
\[
\begin{cases}
u_t(x,t) - c^2\Delta u(x,t) = f(x,t) & \text{in } \Omega \times (0,\infty)\\
u(x,t) = 0 & \text{in } \partial\Omega \times (0,\infty)\\
u(x,0) = 0 & \text{in } \Omega.
\end{cases}
\]
As a first step, for each $s \in (0,\infty)$, consider $w^s(x,t) := w(x,t;s)$ as the solution of the homogeneous (auxiliary) problem
\[
\begin{cases}
w^s_t(x,t) - c^2\Delta w^s(x,t) = 0 & \text{in } \Omega \times (s,\infty)\\
w^s(x,t) = 0 & \text{in } \partial\Omega \times (s,\infty)\\
w^s(x,s) = f(x,s) & \text{on } \Omega \times \{s\}.
\end{cases}
\]
Since $t \in (s,\infty)$, introducing the change of variable $r = t - s$, we have $w^s(x,t) = w(x,t-s)$, which solves
\[
\begin{cases}
w_r(x,r) - c^2\Delta w(x,r) = 0 & \text{in } \Omega \times (0,\infty)\\
w(x,r) = 0 & \text{in } \partial\Omega \times (0,\infty)\\
w(x,0) = f(x,s) & \text{on } \Omega.
\end{cases}
\]
Duhamel's principle states that
\[
u(x,t) = \int_0^t w^s(x,t)\, ds = \int_0^t w(x,t-s)\, ds.
\]
Proof
Let us prove that $u$ defined as
\[
u(x,t) = \int_0^t w(x,t-s)\, ds
\]
solves the inhomogeneous heat equation. Assuming $w$ is $C^2$, we get
\[
u_t(x,t) = \frac{\partial}{\partial t}\int_0^t w(x,t-s)\, ds
= \int_0^t w_t(x,t-s)\, ds + w(x,t-t)\frac{d(t)}{dt} - w(x,t-0)\frac{d(0)}{dt}
= \int_0^t w_t(x,t-s)\, ds + w(x,0).
\]
Since the slice $s = t$ satisfies $w(x,0) = f(x,t)$, this gives
\[
u_t(x,t) = \int_0^t w_t(x,t-s)\, ds + w(x,0) = \int_0^t w_t(x,t-s)\, ds + f(x,t).
\]
Similarly,
\[
\Delta u(x,t) = \int_0^t \Delta w(x,t-s)\, ds.
\]
Thus,
\[
u_t - c^2\Delta u = f(x,t) + \int_0^t\left[w_t(x,t-s) - c^2\Delta w(x,t-s)\right] ds = f(x,t).
\]
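The principle is easy to test symbolically on a simple example. Here (an assumption made for illustration) take $\Omega = (0,\pi)$ and $f(x,t) = \sin x$; then the auxiliary solution is $w(x,t;s) = e^{-c^2(t-s)}\sin x$, and integrating it in $s$ produces a function that indeed solves the inhomogeneous problem.

# Sketch (assumed example): Duhamel's principle for u_t - c^2 u_xx = sin(x)
# on (0, pi) with zero boundary and initial data.
import sympy as sp

x, t, s, c = sp.symbols('x t s c', positive=True)

w = sp.exp(-c**2 * (t - s)) * sp.sin(x)     # solves w_t = c^2 w_xx with w|_{t=s} = sin x
u = sp.integrate(w, (s, 0, t))              # Duhamel superposition

residual = sp.diff(u, t) - c**2 * sp.diff(u, x, 2) - sp.sin(x)
print(sp.simplify(residual))                # 0: u solves the inhomogeneous equation
print(u.subs(t, 0))                         # 0: the initial condition holds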
Chapter 8
Wave Equation: One Space Variable
The wave equation is the first ever partial differential equation (PDE) to be studied by mankind. It was introduced in 1752 by d'Alembert as a model to study vibrating strings. He introduced the one dimensional wave equation
\[
\frac{\partial^2 u(x,t)}{\partial t^2} = \frac{\partial^2 u(x,t)}{\partial x^2}.
\]
d'Alembert used the travelling wave technique to solve the wave equation. In this chapter we shall explain this technique of d'Alembert and also give the standing wave technique, which motivates the idea of separation of variables and in turn the evolution of Fourier series. The wave equation was generalised to two and three dimensions by Euler (1759) and D. Bernoulli (1762), respectively.
8.1 The Vibrating String: Derivation
Let us consider a homogeneous string of length $L$, stretched along the $x$-axis, with one end fixed at $x = 0$ and the other end fixed at $x = L$. We assume that the string is free to move only in the vertical direction. Let $\rho > 0$ denote the density of the string and $T > 0$ denote the coefficient of tension of the string. Let $u(x,t)$ denote the vertical displacement of the string at the point $x$ and time $t$.
We shall imagine the string of length $L$ as a system of $N$ objects, for $N$ sufficiently large, sitting on the string at equidistant (uniformly distributed) points. The position of the $n$-th object on the string is given by $x_n = nL/N$. One can think of the vibrating string as a harmonic oscillator of $N$ objects governed by the tension on the string (which behaves like the spring). Let $y_n(t) = u(x_n,t)$ denote the displacement of the object $x_n$ at time $t$. The distance between any two successive objects is $h = x_{n+1} - x_n = L/N$. Then the mass of each of the $N$ objects is the mass of the string divided by $N$.
Since the mass of the string is $\rho L$, the mass of each of the objects $x_n$, $n = 1, 2, \ldots, N$, is $\rho h$. Thus, by Newton's second law, $\rho h y_n''(t)$ equals the force acting on the $n$-th object. The force on $x_n$ comes both from the left ($x_{n-1}$) and the right ($x_{n+1}$) side. The forces from the left and the right are given as $T(y_{n-1} - y_n)/h$ and $T(y_{n+1} - y_n)/h$, respectively. Therefore,
\[
\rho h y_n''(t) = \frac{T}{h}\left[y_{n+1}(t) + y_{n-1}(t) - 2y_n(t)\right]
= \frac{T}{h}\left[u(x_n + h, t) + u(x_n - h, t) - 2u(x_n, t)\right],
\]
\[
y_n''(t) = \frac{T}{\rho}\left(\frac{u(x_n + h, t) + u(x_n - h, t) - 2u(x_n, t)}{h^2}\right).
\]
Note that, assuming $u$ is twice differentiable w.r.t. the $x$ variable, the term on the RHS is the same as
\[
\frac{T}{\rho}\,\frac{1}{h}\left(\frac{u(x_n + h, t) - u(x_n, t)}{h} + \frac{u(x_n - h, t) - u(x_n, t)}{h}\right)
\]
which converges to the second partial derivative of $u$ w.r.t. $x$ as $h \to 0$. The limit $h \to 0$ is the limiting case of the $N$ objects we started with. Therefore the vibrating string system is governed by the equation
\[
\frac{\partial^2 u}{\partial t^2} = \frac{T}{\rho}\frac{\partial^2 u}{\partial x^2}
\]
where $T$ is the tension and $\rho$ is the density of the string. Equivalently,
\[
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2} \tag{8.1.1}
\]
where $c^2 = T/\rho$, $c > 0$, on $x \in (0,L)$ and $t > 0$.
Exercise 26. Show that the wave equation (8.1.1) derived above can be written as
\[
u_{zz} = u_{ww} \quad \text{in } (w,z) \in (0,L) \times (0,\infty)
\]
under a new coordinate system $(w,z)$. One may, in fact, choose coordinates such that the string is fixed between $(0,\pi)$.
Proof. Set $w = x/a$ and $z = t/b$, where $a$ and $b$ will be chosen appropriately. Then $w_x = 1/a$ and $z_t = 1/b$. Therefore, $u_x = u_w/a$, $u_t = u_z/b$, $a^2 u_{xx} = u_{ww}$ and $b^2 u_{tt} = u_{zz}$. Choosing $a = 1$ and $b = 1/c$ gives $u_{zz} = u_{ww}$. Choosing $a = L/\pi$ and $b = L/(\pi c)$ makes the domain $(0,\pi)$.
8.2 Travelling Waves
Consider the wave equation $u_{tt} = c^2 u_{xx}$ on $\mathbb{R} \times (0,\infty)$, describing the vibration of an infinite string. Let $F$ be any twice differentiable real-valued function on $\mathbb{R}$. Note that $u(x,t) = F(x + ct)$ solves the wave equation. At $t = 0$, the solution is simply the graph of $F$, and at $t = t_0$ the solution is the graph of $F$ with the origin translated to the left by $ct_0$. Similarly, $u(x,t) = F(x - ct)$ also solves the wave equation and at time $t$ is the translation to the right of the graph of $F$ by $ct$. This motivates the names travelling waves and wave equation: the graph of $F$ is shifted to the right or left with a speed of $c$.
Next we note that the wave equation is linear, in the sense that if $u$ and $v$ are (particular) solutions to the wave equation then $\alpha u + \beta v$ is also a solution to the wave equation, for any constants $\alpha, \beta$; this is called the principle of superposition. Thus, for any two twice differentiable real-valued functions $F$ and $G$ on $\mathbb{R}$, $u(x,t) = F(x + ct) + G(x - ct)$ is a solution of the wave equation.
A natural question at this juncture is: is every solution of the wave equation of the above form? The answer is yes. To prove this, we observe that the representation of a PDE depends on the choice of our coordinate system. Let $u$ be a twice differentiable function which is a solution to the wave equation. Introduce the new coordinates $w = x + ct$, $z = x - ct$ and set $v(w,z) = u(x,t)$. Thus, we have the following relations, using the chain rule:
\[
u_x = v_w w_x + v_z z_x = v_w + v_z,
\]
\[
u_t = v_w w_t + v_z z_t = c(v_w - v_z),
\]
\[
u_{xx} = v_{ww} + 2v_{zw} + v_{zz},
\]
\[
u_{tt} = c^2(v_{ww} - 2v_{zw} + v_{zz}).
\]
Substituting in the wave equation, we have that $v$ satisfies $v_{wz} = 0$. Integrating this twice (we are assuming the functions involved are integrable, which may be false), we have $v(w,z) = F(w) + G(z)$, for some arbitrary functions $F$ and $G$. Thus, $u(x,t) = F(x + ct) + G(x - ct)$. Thus, a general solution of the wave equation has been derived.
8.2.1 d'Alembert's Formula
Now that we have derived the general form of the solution of the wave equation, we return to the physical system of the vibrating infinite string. The initial shape (position at initial time $t = 0$) of the string is given as $u(x,0) = g(x)$, where the graph of $g$ on $\mathbb{R}^2$ describes the shape of the string. Since we need one more datum to identify the arbitrary functions, we also prescribe the initial velocity of the string, $u_t(x,0) = h(x)$. Thus, we have the initial value problem (IVP) for the wave equation
\[
\begin{cases}
u_{tt}(x,t) - c^2 u_{xx}(x,t) = 0 & \text{in } \mathbb{R} \times (0,\infty)\\
u(x,0) = g(x) & \text{in } \mathbb{R}\\
u_t(x,0) = h(x) & \text{in } \mathbb{R}.
\end{cases}
\]
Recall that a general solution is $u(x,t) = F(x + ct) + G(x - ct)$. Using the initial position, we get
\[
F(x) + G(x) = g(x).
\]
Since $u$ has to be a solution of the wave equation, its partial derivatives should exist and hence we may assume both $F$ and $G$ are differentiable. Now, $u_t(x,t) = c(F'(w) - G'(z))$ and, putting $t = 0$, we get
\[
F'(x) - G'(x) = \frac{1}{c}h(x).
\]
Thus, solving for $F'$ and $G'$, we get $2F'(x) = g'(x) + h(x)/c$. Similarly, $2G'(x) = g'(x) - h(x)/c$. Integrating both these equations (assuming they are integrable and that the integral of their derivatives is the function itself), we get
\[
F(x) = \frac{1}{2}\left(g(x) + \frac{1}{c}\int_0^x h(y)\, dy\right) + c_1
\]
and
\[
G(x) = \frac{1}{2}\left(g(x) - \frac{1}{c}\int_0^x h(y)\, dy\right) + c_2.
\]
Since $F(x) + G(x) = g(x)$, we get $c_1 + c_2 = 0$. Therefore, the solution to the wave equation is
\[
u(x,t) = \frac{1}{2}\left(g(x+ct) + g(x-ct)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(y)\, dy,
\]
called d'Alembert's formula.
We end this section with a remark on the invariance of the wave equation under the transformation $t \mapsto -t$. In other words, if $u(x,t)$ is a solution to the wave equation for $t \ge 0$, then $\tilde u(x,t) := u(x,-t)$ is a solution of the wave equation for $t < 0$.
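d'Alembert's formula is also convenient for numerical evaluation. The sketch below (the initial data $g$ and $h$ are assumptions chosen for illustration) evaluates it directly and shows the initial hump splitting into two travelling halves.

# Sketch: numerical evaluation of d'Alembert's formula
# u(x,t) = (g(x+ct) + g(x-ct))/2 + 1/(2c) * int_{x-ct}^{x+ct} h(y) dy.
import numpy as np
from scipy.integrate import quad

c = 1.0
g = lambda x: np.exp(-x**2)        # assumed initial position
h = lambda x: 0.0 * x              # assumed initial velocity (released from rest)

def u(x, t):
    integral, _ = quad(h, x - c*t, x + c*t)
    return 0.5 * (g(x + c*t) + g(x - c*t)) + integral / (2*c)

print(u(0.0, 0.0))   # = g(0) = 1
print(u(3.0, 3.0))   # ~ 0.5: half of the initial hump has travelled to x = 3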
8.2.2 d'Alembert's Formula: Aliter
Consider the IVP
\[
\begin{cases}
u_{tt}(x,t) = c^2 u_{xx}(x,t) & \text{in } \mathbb{R} \times (0,\infty)\\
u(x,0) = g(x) & \text{on } \mathbb{R} \times \{t = 0\}\\
u_t(x,0) = h(x) & \text{on } \mathbb{R} \times \{t = 0\},
\end{cases}
\]
where $g, h : \mathbb{R} \to \mathbb{R}$ are given functions.
Note that the PDE can be factored as
\[
\left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)u = u_{tt} - c^2 u_{xx} = 0.
\]
We set $v(x,t) = \left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)u(x,t)$ and hence
\[
v_t(x,t) + cv_x(x,t) = 0 \quad \text{in } \mathbb{R} \times (0,\infty).
\]
Notice that the first order PDE obtained is in the form of the homogeneous transport equation, which we know how to solve. Hence, for some smooth function $f$,
\[
v(x,t) = f(x - ct)
\]
with $f(x) := v(x,0)$. Using $v$ in the original equation, we get the inhomogeneous transport equation
\[
u_t(x,t) - cu_x(x,t) = f(x - ct).
\]
Recall the formula for the solution of the inhomogeneous transport equation $u_t + au_x = f$,
\[
u(x,t) = g(x - at) + \int_0^t f(x - a(t-s), s)\, ds.
\]
Since $u(x,0) = g(x)$ and $a = -c$ in our case, the solution reduces to
\[
u(x,t) = g(x + ct) + \int_0^t f(x + c(t-s) - cs)\, ds
= g(x + ct) + \int_0^t f(x + ct - 2cs)\, ds
= g(x + ct) + \frac{1}{2c}\int_{x-ct}^{x+ct} f(y)\, dy.
\]
But $f(x) = v(x,0) = u_t(x,0) - cu_x(x,0) = h(x) - cg'(x)$, and substituting this in the formula for $u$, we get
\[
u(x,t) = g(x + ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\left(h(y) - cg'(y)\right) dy
= g(x + ct) + \frac{1}{2}\left(g(x - ct) - g(x + ct)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(y)\, dy
= \frac{1}{2}\left(g(x - ct) + g(x + ct)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(y)\, dy.
\]
If $c = 1$, we have
\[
u(x,t) = \frac{1}{2}\left(g(x - t) + g(x + t)\right) + \frac{1}{2}\int_{x-t}^{x+t} h(y)\, dy.
\]
This is called d'Alembert's formula.
8.3 Standing Waves: Separation of Variable
Recall the set-up of the vibrating string given by the equation $u_{tt} = u_{xx}$; we have normalised the constant $c$. Initially, at time $t = 0$, let us say the string has the shape of the graph of $v$, i.e., $u(x,0) = v(x)$. The snapshots of the vibrating string at each time are called the standing waves. The shape of the string at time $t_0$ can be thought of as some factor (depending on time) of $v$. This observation motivates the idea of separation of variables, i.e., $u(x,t) = v(x)w(t)$, where $w(t)$ is the factor, depending on time, which scales $v$ at time $t$ to fit the shape of $u(x,t)$.
The fact that the endpoints are fixed is given by the (Dirichlet) boundary condition
\[
u(0,t) = u(L,t) = 0.
\]
We are also given the initial position $u(x,0) = g(x)$ (at time $t = 0$) and the initial velocity of the string at time $t = 0$, $u_t(x,0) = h(x)$. Given $g, h : [0,L] \to \mathbb{R}$ such that $g(0) = g(L) = 0$, we need to solve the initial value problem
\[
\begin{cases}
u_{tt}(x,t) - c^2 u_{xx}(x,t) = 0 & \text{in } (0,L) \times (0,\infty)\\
u(0,t) = u(L,t) = 0 & \text{in } [0,\infty)\\
u(x,0) = g(x) & \text{in } [0,L]\\
u_t(x,0) = h(x) & \text{in } (0,L).
\end{cases}
\tag{8.3.1}
\]
Let us seek solutions $u(x,t)$ whose variables can be separated. Let $u(x,t) = v(x)w(t)$. Differentiating and substituting in the wave equation, we get
\[
v(x)w''(t) = c^2 v''(x)w(t),
\]
hence
\[
\frac{w''(t)}{c^2 w(t)} = \frac{v''(x)}{v(x)}.
\]
Since the RHS is a function of $x$ and the LHS is a function of $t$, they must equal a constant, say $\lambda$. Thus,
\[
\frac{v''(x)}{v(x)} = \frac{w''(t)}{c^2 w(t)} = \lambda.
\]
Using the (Dirichlet) boundary condition on $u$, $u(0,t) = u(L,t) = 0$, we get $v(0)w(t) = v(L)w(t) = 0$. If $w \equiv 0$, then $u \equiv 0$, and this cannot be a solution to (8.3.1). Hence, $w \not\equiv 0$ and $v(0) = v(L) = 0$. Thus, we need to solve the eigenvalue problem for the second order differential operator
\[
\begin{cases}
v''(x) = \lambda v(x), & x \in (0,L)\\
v(0) = v(L) = 0.
\end{cases}
\]
Note that $\lambda$ can be either zero, positive or negative. If $\lambda = 0$, then $v'' = 0$ and the general solution is $v(x) = \alpha x + \beta$, for some constants $\alpha$ and $\beta$. Since $v(0) = 0$, we get $\beta = 0$, and $v(L) = 0$ together with $L \ne 0$ implies that $\alpha = 0$. Thus, $v \equiv 0$ and hence $u \equiv 0$. But this cannot be a solution to (8.3.1).
If $\lambda > 0$, then $v(x) = \alpha e^{\sqrt{\lambda}x} + \beta e^{-\sqrt{\lambda}x}$. Equivalently,
\[
v(x) = c_1\cosh(\sqrt{\lambda}x) + c_2\sinh(\sqrt{\lambda}x)
\]
with $\alpha = (c_1 + c_2)/2$ and $\beta = (c_1 - c_2)/2$. Using the boundary condition $v(0) = 0$, we get $c_1 = 0$ and hence
\[
v(x) = c_2\sinh(\sqrt{\lambda}x).
\]
Now using $v(L) = 0$, we have $c_2\sinh(\sqrt{\lambda}L) = 0$. Thus, $c_2 = 0$ and $v(x) = 0$. We have seen this cannot be a solution.
Finally, if $\lambda < 0$, then set $\omega = \sqrt{-\lambda}$. We need to solve the simple harmonic oscillator problem
\[
\begin{cases}
v''(x) + \omega^2 v(x) = 0, & x \in (0,L)\\
v(0) = v(L) = 0.
\end{cases}
\]
The general solution is
\[
v(x) = \alpha\cos(\omega x) + \beta\sin(\omega x).
\]
Using $v(0) = 0$, we get $\alpha = 0$ and hence $v(x) = \beta\sin(\omega x)$. Now using $v(L) = 0$, we have $\beta\sin(\omega L) = 0$. Thus, either $\beta = 0$ or $\sin(\omega L) = 0$. But $\beta = 0$ does not yield a non-trivial solution. Hence $\omega L = k\pi$, or $\omega = k\pi/L$, for non-zero $k \in \mathbb{Z}$. Since $\omega > 0$, we can consider only $k \in \mathbb{N}$. Hence, for each $k \in \mathbb{N}$, there is a solution $(v_k, \lambda_k)$ of the eigenvalue problem with
\[
v_k(x) = \sin\left(\frac{k\pi x}{L}\right)
\]
(up to constant multiples) and $\lambda_k = -(k\pi/L)^2$. It now remains to solve for $w$ for each of these $\lambda_k$. For each $k \in \mathbb{N}$, we solve for $w_k$ in the ODE
\[
w_k''(t) + (ck\pi/L)^2 w_k(t) = 0.
\]
The general solution is
\[
w_k(t) = a_k\cos\left(\frac{ck\pi t}{L}\right) + b_k\sin\left(\frac{ck\pi t}{L}\right).
\]
For each $k \in \mathbb{N}$, we have
\[
u_k(x,t) = \left[a_k\cos\left(\frac{ck\pi t}{L}\right) + b_k\sin\left(\frac{ck\pi t}{L}\right)\right]\sin\left(\frac{k\pi x}{L}\right)
\]
for some constants $a_k$ and $b_k$. The situation corresponding to $k = 1$ is called the fundamental mode, and the frequency of the fundamental mode is
\[
\frac{1}{2\pi}\cdot\frac{c\pi}{L} = \frac{c}{2L} = \frac{\sqrt{T/\rho}}{2L}.
\]
The frequencies of the higher modes are integer multiples of the fundamental frequency.
The general solution of (8.3.1), by the principle of superposition, is
\[
u(x,t) = \sum_{k=1}^{\infty}\left[a_k\cos\left(\frac{ck\pi t}{L}\right) + b_k\sin\left(\frac{ck\pi t}{L}\right)\right]\sin\left(\frac{k\pi x}{L}\right).
\]
Note that the solution is expressed as a series, which raises the question of the convergence of the series. Another concern is whether all solutions of (8.3.1) have this form. We ignore these two concerns at this moment.
Since we know the initial position of the string as the graph of $g$, we get
\[
g(x) = u(x,0) = \sum_{k=1}^{\infty} a_k\sin\left(\frac{k\pi x}{L}\right).
\]
This expression is again troubling and raises the question: can an arbitrary function $g$ be expressed as an infinite sum of trigonometric functions? Answering this question led to the study of Fourier series. Let us, as usual, ignore this concern for the time being. Then, can we find the constants $a_k$ with the knowledge of $g$? By multiplying both sides of the expression for $g$ by $\sin\left(\frac{l\pi x}{L}\right)$ and integrating from $0$ to $L$, we get
\[
\int_0^L g(x)\sin\left(\frac{l\pi x}{L}\right) dx = \int_0^L\left(\sum_{k=1}^{\infty} a_k\sin\left(\frac{k\pi x}{L}\right)\right)\sin\left(\frac{l\pi x}{L}\right) dx
= \sum_{k=1}^{\infty} a_k\int_0^L\sin\left(\frac{k\pi x}{L}\right)\sin\left(\frac{l\pi x}{L}\right) dx.
\]
Therefore, the constants $a_k$ are given as
\[
a_k = \frac{2}{L}\int_0^L g(x)\sin\left(\frac{k\pi x}{L}\right) dx.
\]
Finally, by differentiating $u$ w.r.t. $t$, we get
\[
u_t(x,t) = \sum_{k=1}^{\infty}\frac{ck\pi}{L}\left[b_k\cos\left(\frac{ck\pi t}{L}\right) - a_k\sin\left(\frac{ck\pi t}{L}\right)\right]\sin\left(\frac{k\pi x}{L}\right).
\]
Employing similar arguments and using $u_t(x,0) = h(x)$, we get
\[
h(x) = u_t(x,0) = \sum_{k=1}^{\infty} b_k\frac{k\pi c}{L}\sin\left(\frac{k\pi x}{L}\right)
\]
and hence
\[
b_k = \frac{2}{k\pi c}\int_0^L h(x)\sin\left(\frac{k\pi x}{L}\right) dx.
\]
The separation of variables technique can also be used for studying the wave equation on the 2D rectangle, the 2D disk, etc. A numerical illustration of the one-dimensional case is sketched below.
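The following Python sketch (a "plucked string" with an assumed triangular initial shape of height one at $x = L/2$ and zero initial velocity, so that $b_k = 0$ and $a_k = 8\sin(k\pi/2)/(\pi^2 k^2)$) evaluates the truncated standing-wave series.

# Sketch: standing-wave solution for an assumed plucked string (g triangular with
# peak 1 at L/2, h = 0), truncated at N modes.
import numpy as np

L, c, N = 1.0, 1.0, 200
k = np.arange(1, N + 1)
a = 8 * np.sin(k * np.pi / 2) / (np.pi**2 * k**2)    # nonzero only for odd k

def u(x, t):
    return np.sum(a * np.cos(c * k * np.pi * t / L) * np.sin(k * np.pi * x / L))

print(u(L/2, 0.0))        # ~ 1, the initial pluck height
print(u(L/2, 2*L/c))      # ~ 1 again: the motion is periodic with period 2L/c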
8.4 Duhamel's Principle
The idea is the same as that introduced while applying this principle to the heat equation. Thus, for a given $f$, Duhamel's principle states that the solution $u(x,t)$ of the inhomogeneous wave equation
\[
\begin{cases}
u_{tt}(x,t) - \Delta u(x,t) = f(x,t) & \text{in } \Omega \times (0,\infty)\\
u(x,t) = 0 & \text{in } \partial\Omega \times (0,\infty)\\
u(x,0) = u_t(x,0) = 0 & \text{in } \Omega
\end{cases}
\]
is $u(x,t) = \int_0^t w(x,t-s)\, ds$, where $w(x,t-s)$ is the solution of the homogeneous equation
\[
\begin{cases}
w_{tt}(x,t-s) - \Delta w(x,t-s) = 0 & \text{in } \Omega \times (0,\infty)\\
w(x,t-s) = 0 & \text{in } \partial\Omega \times (0,\infty)\\
w(x,0) = 0 & \text{on } \Omega\\
w_t(x,0) = f(x,s) & \text{on } \Omega.
\end{cases}
\]
Example 8.1. Consider the wave equation
\[
\begin{cases}
u_{tt}(x,t) - c^2 u_{xx}(x,t) = \sin 3x & \text{in } (0,\pi) \times (0,\infty)\\
u(0,t) = u(\pi,t) = 0 & \text{in } (0,\infty)\\
u(x,0) = u_t(x,0) = 0 & \text{in } (0,\pi).
\end{cases}
\]
We look for the solution of the homogeneous wave equation
\[
\begin{cases}
w_{tt}(x,t) - c^2 w_{xx}(x,t) = 0 & \text{in } (0,\pi) \times (0,\infty)\\
w(0,t) = w(\pi,t) = 0 & \text{in } (0,\infty)\\
w(x,0) = 0 & \text{in } (0,\pi)\\
w_t(x,0) = \sin 3x & \text{in } (0,\pi).
\end{cases}
\]
We know that the general solution of $w$ is
\[
w(x,t) = \sum_{k=1}^{\infty}\left[a_k\cos(kct) + b_k\sin(kct)\right]\sin(kx).
\]
Hence
\[
w(x,0) = \sum_{k=1}^{\infty} a_k\sin(kx) = 0.
\]
Thus, $a_k = 0$ for all $k$. Also,
\[
w_t(x,0) = \sum_{k=1}^{\infty} b_k ck\sin(kx) = \sin 3x.
\]
Hence, the $b_k$'s are all zero except for $k = 3$, and $b_3 = 1/(3c)$. Thus,
\[
w(x,t) = \frac{1}{3c}\sin(3ct)\sin(3x)
\]
and
\[
u(x,t) = \int_0^t w(x,t-s)\, ds
= \frac{1}{3c}\int_0^t\sin(3c(t-s))\sin 3x\, ds
= \frac{\sin 3x}{3c}\int_0^t\sin(3c(t-s))\, ds
= \frac{\sin 3x}{3c}\left[\frac{\cos(3c(t-s))}{3c}\right]_{s=0}^{s=t}
= \frac{\sin 3x}{9c^2}\left(1 - \cos 3ct\right).
\]
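The closed-form answer can be confirmed by direct differentiation; a small symbolic check (a sketch added here, not part of the original computation):

# Sketch: verify the solution of Example 8.1 symbolically.
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
u = sp.sin(3*x) / (9*c**2) * (1 - sp.cos(3*c*t))

pde = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2) - sp.sin(3*x)
print(sp.simplify(pde))                         # 0: the PDE is satisfied
print(u.subs(x, 0), u.subs(x, sp.pi))           # 0 0: boundary conditions
print(u.subs(t, 0), sp.diff(u, t).subs(t, 0))   # 0 0: initial conditions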
Appendices
Appendix A
Divergence Theorem
Definition A.0.1. For an open set $\Omega \subset \mathbb{R}^n$ we say that its boundary $\partial\Omega$ is $C^k$ ($k \ge 1$) if, for every point $x \in \partial\Omega$, there is an $r > 0$ and a $C^k$ diffeomorphism $\gamma : B_r(x) \to B_1(0) \subset \mathbb{R}^n$ (i.e., $\gamma^{-1}$ exists and both $\gamma$ and $\gamma^{-1}$ are $k$-times continuously differentiable) such that
1. $\gamma(\partial\Omega \cap B_r(x)) \subset B_1(0) \cap \{x \in \mathbb{R}^n \mid x_n = 0\}$ and
2. $\gamma(\Omega \cap B_r(x)) \subset B_1(0) \cap \{x \in \mathbb{R}^n \mid x_n > 0\}$.
We say $\partial\Omega$ is $C^\infty$ if $\partial\Omega$ is $C^k$ for all $k = 1, 2, \ldots$, and $\partial\Omega$ is analytic if $\gamma$ is analytic.
Equivalently, a workable definition of a $C^k$ boundary would be the following: for every point $x \in \partial\Omega$, there exists a neighbourhood $B_x$ of $x$ and a $C^k$ function $\gamma : \mathbb{R}^{n-1} \to \mathbb{R}$ such that
\[
\Omega \cap B_x = \{x \in B_x \mid x_n > \gamma(x_1, x_2, \ldots, x_{n-1})\}.
\]
The divergence of a vector field is a measure of the magnitude (outgoing nature) of all sources (of the vector field) and absorption in the region. The divergence theorem was discovered by C. F. Gauss in 1813 (J. L. Lagrange seems to have discovered it, before Gauss, in 1762). It relates the outward flow (flux) of a vector field through a closed surface to the behaviour of the vector field inside the surface (the sum of all its sources and sinks). The divergence theorem is, in fact, the mathematical formulation of the conservation law.
Theorem A.0.2. Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ with $C^1$ boundary. If $v \in C^1(\bar\Omega)$ then
\[
\int_\Omega \frac{\partial v}{\partial x_i}\, dx = \int_{\partial\Omega} v\,\nu_i\, d\sigma
\]
where $\nu = (\nu_1, \ldots, \nu_n)$ is the outward pointing unit normal vector field and $d\sigma$ is the surface measure of $\partial\Omega$.
The domain $\Omega$ need not be bounded provided $|v|$ and $\left|\frac{\partial v}{\partial x_i}\right|$ decay as $|x| \to \infty$. The field of geometric measure theory attempts to identify the precise conditions on $\partial\Omega$ and $v$ for which the divergence theorem or integration by parts holds.
Corollary A.0.3 (Integration by parts). Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ with $C^1$ boundary. If $u, v \in C^1(\bar\Omega)$ then
\[
\int_\Omega u\frac{\partial v}{\partial x_i}\, dx + \int_\Omega v\frac{\partial u}{\partial x_i}\, dx = \int_{\partial\Omega} uv\,\nu_i\, d\sigma.
\]
Theorem A.0.4 (Gauss). Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ with $C^1$ boundary. Given a vector field $V = (v_1, \ldots, v_n)$ on $\bar\Omega$ such that $v_i \in C^1(\bar\Omega)$ for all $1 \le i \le n$, then
\[
\int_\Omega \nabla\cdot V\, dx = \int_{\partial\Omega} V\cdot\nu\, d\sigma. \tag{A.0.1}
\]
Corollary A.0.5 (Green's Identities). Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ with $C^1$ boundary. Let $u, v \in C^2(\bar\Omega)$. Then
(i) $\displaystyle\int_\Omega (v\Delta u + \nabla v\cdot\nabla u)\, dx = \int_{\partial\Omega} v\frac{\partial u}{\partial\nu}\, d\sigma$, where $\frac{\partial u}{\partial\nu} := \nabla u\cdot\nu$;
(ii) $\displaystyle\int_\Omega (v\Delta u - u\Delta v)\, dx = \int_{\partial\Omega}\left(v\frac{\partial u}{\partial\nu} - u\frac{\partial v}{\partial\nu}\right) d\sigma$.
Proof. Apply the divergence theorem to $V = v\nabla u$ to get the first formula. To get the second formula, apply the divergence theorem to both $V = v\nabla u$ and $V = u\nabla v$ and subtract one from the other.
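The identity (A.0.1) can be checked symbolically on simple domains. The sketch below (an assumed example on the unit square with a particular vector field) computes both sides and obtains the same value.

# Sketch (assumed example): verify the divergence theorem on Omega = (0,1)^2
# for the vector field V = (x*y, y**2 - x).
import sympy as sp

x, y = sp.symbols('x y')
V = (x*y, y**2 - x)

lhs = sp.integrate(sp.diff(V[0], x) + sp.diff(V[1], y), (x, 0, 1), (y, 0, 1))

# boundary integral: outward normals are (-1,0), (1,0), (0,-1), (0,1)
rhs = (sp.integrate(-V[0].subs(x, 0), (y, 0, 1))    # left edge x = 0
       + sp.integrate(V[0].subs(x, 1), (y, 0, 1))   # right edge x = 1
       + sp.integrate(-V[1].subs(y, 0), (x, 0, 1))  # bottom edge y = 0
       + sp.integrate(V[1].subs(y, 1), (x, 0, 1)))  # top edge y = 1

print(lhs, rhs)                                     # both equal 3/2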
Appendix B
Normal Vector of a Surface
Let $S(x,y,z) = 0$ be the equation of a surface $S$ in $\mathbb{R}^3$. Let us fix a point $p_0 = (x_0,y_0,z_0) \in S$. We need to find the normal vector at $p_0$ to the surface $S$. Let us fix an arbitrary curve $C$ lying on the surface and passing through the point $p_0$. Let the parametrized form of the curve $C$ be given as $r(t) = (x(t), y(t), z(t))$ such that $r(t_0) = p_0$. Since the curve $C \equiv r(t)$ lies on the surface for all $t$, we have $S(r(t)) = 0$. Thus, $S(x(t), y(t), z(t)) = 0$. Differentiating w.r.t. $t$ (using the chain rule), we get
\[
\frac{\partial S}{\partial x}\frac{dx(t)}{dt} + \frac{\partial S}{\partial y}\frac{dy(t)}{dt} + \frac{\partial S}{\partial z}\frac{dz(t)}{dt} = 0,
\]
\[
(S_x, S_y, S_z)\cdot(x'(t), y'(t), z'(t)) = 0,
\]
\[
\nabla S(r(t))\cdot r'(t) = 0.
\]
In particular, the above computation is true at the point $p_0$. Since $r'(t_0)$ is the slope of the tangent at $t_0$ to the curve $C$, we see that the vector $\nabla S(p_0)$ is perpendicular to the tangent vector at $p_0$. Since this argument is true for any curve that passes through $p_0$, we have that $\nabla S(p_0)$ is a normal vector to the tangent plane at $p_0$. If, in particular, the equation of the surface is given as $S(x,y,z) = u(x,y) - z$, for some $u : \mathbb{R}^2 \to \mathbb{R}$, then
\[
\nabla S(p_0) = (S_x(p_0), S_y(p_0), S_z(p_0)) = (u_x(x_0,y_0), u_y(x_0,y_0), -1) = (\nabla u(x_0,y_0), -1).
\]
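For a concrete surface the computation can be done symbolically; the sketch below (an assumed example with $u(x,y) = x^2 + y^2$, i.e. a paraboloid) computes $\nabla S$ and checks that it is orthogonal to the tangent of a curve lying on the surface.

# Sketch (assumed example): normal to the graph surface S(x,y,z) = u(x,y) - z = 0
# with u(x,y) = x**2 + y**2, checked against a curve on the surface.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
u = x**2 + y**2
S = u - z
gradS = sp.Matrix([sp.diff(S, x), sp.diff(S, y), sp.diff(S, z)])   # (u_x, u_y, -1)

r = sp.Matrix([sp.cos(t), sp.sin(t), 1])                    # a curve lying on S
print(sp.simplify(S.subs({x: r[0], y: r[1], z: r[2]})))     # 0: the curve is on S

tangent = sp.diff(r, t)
normal_on_curve = gradS.subs({x: r[0], y: r[1], z: r[2]})
print(sp.simplify(normal_on_curve.dot(tangent)))            # 0: grad S is normal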