
Remarks on Section 4.

As you probably noticed, all the theory in Chapter 4 of Polking is in Section 4.1
– where it is mingled with other stuff. This is an attempt to write down the main
theoretical points concisely.
Existence and Uniqueness
The theory of any class of differential equations begins with existence and uniqueness
theorems. There are two versions of these theorems: first, there are general
existence and uniqueness theorems like the ones in Chapter 2, and then there are
special existence and uniqueness theorems for linear equations.
Theorem 1: General existence theorem for y'' = f(t, y, y'). Suppose that (t0, y0, v0)
is a point in a box B = {a1 ≤ t ≤ b1, a2 ≤ y ≤ b2, a3 ≤ v ≤ b3} and f(t, y, v) is
continuous on B. Then there is at least one function y(t) with continuous first and
second derivatives such that y(t0) = y0 and y'(t0) = v0 and y''(t) = f(t, y(t), y'(t)).
This solution can be continued until (t, y(t), y'(t)) leaves B.
Theorem 2: General uniqueness theorem for y'' = f(t, y, y'). Suppose f(t, y, v),
∂f/∂y(t, y, v) and ∂f/∂v(t, y, v) are all continuous on a box B as in Theorem 1, and y1(t)
and y2(t) are two solutions to y''(t) = f(t, y(t), y'(t)). Then, if y1(t0) = y2(t0) and
y1'(t0) = y2'(t0), you must have (t, y1(t), y1'(t)) = (t, y2(t), y2'(t)) as long as these
points are in B. In short, if the solutions and their first derivatives are equal at
one time, then they must be equal at all times when they are defined.
These theorems will be useful a little later in the course. The theorem that is
most important in Chapter 4 and on Hour Exam II is:
Theorem 3: Existence AND uniqueness theorem for y'' + p(t)y' + q(t)y = f(t).
Suppose that p(t), q(t) and f(t) are continuous functions on the interval I = {a ≤
t ≤ b}, and suppose that t0 is a point in I. Then for any choice of (y0, v0) there
is a function y(t) defined on I with continuous first and second derivatives such
that y(t0) = y0 and y'(t0) = v0 and y''(t) + p(t)y'(t) + q(t)y(t) = f(t) for all t in
I. Moreover, this solution is unique: if w''(t) + p(t)w'(t) + q(t)w(t) = f(t) and
for some t1 in I you have w(t1) = y(t1) and w'(t1) = y'(t1), then w(t) = y(t) on
I. When p(t), q(t) and f(t) are continuous everywhere, you can replace I by the
whole line −∞ < t < ∞.
You should think about Theorem 3 until you realize that it is quite a bit simpler
than Theorems 1 and 2.
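For a linear equation you can often exhibit the unique solution of Theorem 3 explicitly. Here is a sketch with SymPy's dsolve; the equation y'' + y = t and the initial data (y0, v0) = (1, 0) are my own illustrative choices:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + y = t: here p = 0, q = 1, f(t) = t are continuous everywhere,
# so Theorem 3 gives exactly one solution on the whole line for each (y0, v0).
ode = sp.Eq(y(t).diff(t, 2) + y(t), t)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sol)   # the unique solution: y(t) = t + cos(t) - sin(t)
```

Changing the initial data changes the solution, but for each choice there is only one.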
In the rest of this note I am going to work with the linear, homogeneous
equation y'' + p(t)y' + q(t)y = 0. Theorem 3 applies to this equation, but one can
say a lot more. To solve the initial value problem y(t0) = y0, y'(t0) = v0, you do
not need to find a new solution y(t) for every choice of t0, y0 and v0. You really
just need to find two solutions y1(t) and y2(t), and you will be able to solve all the
initial value problems with y(t) = c1 y1(t) + c2 y2(t) by adjusting the constants c1

and c2. The functions y1(t) and y2(t) need to be solutions to the differential equation
y'' + p(t)y' + q(t)y = 0, but not every pair of solutions will work. To solve the initial
value problem you need c1 y1(t0) + c2 y2(t0) = y0 and c1 y1'(t0) + c2 y2'(t0) = v0. In
matrix notation that is

    [ y1(t0)   y2(t0)  ] [ c1 ]   [ y0 ]
    [ y1'(t0)  y2'(t0) ] [ c2 ] = [ v0 ]

This system of simultaneous linear equations needs to have a solution for all choices
of y0 and v0. You may remember (from Math 33A, for instance) that it will have a
solution for all choices if and only if the determinant of the matrix multiplying (c1, c2)
is not zero. I am going to call that determinant W(t0). So W(t0) = y1(t0)y2'(t0) −
y1'(t0)y2(t0). If you do remember Math 33A, then you know that the solution to
the system is

    [ c1 ]   [ y1(t0)   y2(t0)  ]^(-1) [ y0 ]     1    [  y2'(t0)  -y2(t0) ] [ y0 ]
    [ c2 ] = [ y1'(t0)  y2'(t0) ]      [ v0 ] = ----- [ -y1'(t0)   y1(t0) ] [ v0 ]
                                                W(t0)

which makes sense if and only if W(t0) ≠ 0. When y1(t) and y2(t) are solutions of
a second order linear differential equation, W(t) is called the Wronskian, after the
Polish soldier, mathematician and philosopher Josef Hoene-Wronski (1776–1853).
You can learn a lot about Josef H-W from Wikipedia, including that he was prob-
ably not the inspiration for dashing Count Vronsky in Tolstoy's Anna Karenina.
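As a concrete numeric check of this recipe (my example, not Polking's): for y'' + y = 0 the solutions y1 = cos t, y2 = sin t have W(t) = 1, so the 2×2 system is solvable for any initial data:

```python
import numpy as np

# y'' + y = 0 with y1(t) = cos t, y2(t) = sin t; here W(t) = 1 for all t.
t0, y0, v0 = 0.7, 2.0, -3.0     # arbitrary initial data (illustrative values)

M = np.array([[np.cos(t0),  np.sin(t0)],
              [-np.sin(t0), np.cos(t0)]])   # rows: [y1, y2] and [y1', y2'] at t0
W = np.linalg.det(M)                         # the Wronskian W(t0)
c1, c2 = np.linalg.solve(M, [y0, v0])        # solvable because W != 0

y  = lambda t: c1 * np.cos(t) + c2 * np.sin(t)
yp = lambda t: -c1 * np.sin(t) + c2 * np.cos(t)
print(W, y(t0), yp(t0))   # W = 1; y, y' reproduce (y0, v0) at t0
```

The same two lines of linear algebra work for any t0, precisely because this Wronskian never vanishes.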
So to solve the initial value problem at t = t0 you need two solutions with
W(t0) ≠ 0. The nice surprise is that, if those solutions work for one choice of
t0 in I, they work for all choices. That can be deduced from Abel's¹ formula
W(t) = Ce^(−P(t)), where P'(t) = p(t) (see Prop. 1.26 (page 142) in Polking).
Either C = 0 and W(t) is zero on all of I, or C ≠ 0 and W(t) is never zero
on I. [Caution: the assumption that p(t) is continuous on I is used here: when
p(t) = −1/t, e^(−P(t)) = |t|, which vanishes at t = 0 and nowhere else.]
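You can sanity-check Abel's formula on an equation with explicit solutions. My example: y'' + (1/t)y' − (1/t²)y = 0 on t > 0 (where p is continuous) has solutions y1 = t and y2 = 1/t, and Abel predicts W(t) = C/t:

```python
import math

# y'' + (1/t) y' - (1/t**2) y = 0 on t > 0, with solutions y1 = t, y2 = 1/t.
# Here p(t) = 1/t, so P(t) = ln t and Abel's formula says W(t) = C * exp(-ln t) = C/t.
def W(t):
    y1, y1p = t, 1.0
    y2, y2p = 1.0 / t, -1.0 / t**2
    return y1 * y2p - y1p * y2          # direct Wronskian; works out to -2/t

C = W(1.0) * math.exp(math.log(1.0))    # pin down C from one point: C = W(1) = -2
for t in (0.5, 2.0, 7.0):
    print(t, W(t), C * math.exp(-math.log(t)))   # the two columns agree
```

Since C = −2 ≠ 0, the Wronskian never vanishes on t > 0, so y1 and y2 solve every initial value problem there.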
Polking points out (in Prop 1.27, page 143) that W(t) ≠ 0 is equivalent to
the linear independence of the solutions y1(t) and y2(t). This is an important
point. It becomes more important when you get to higher order equations and
large systems of equations in Chapter 9. There is an alternate proof for it on the
(easily disposable) third page of these notes. However, the main fact that is used
all through Chapter 4 is that you can always solve the initial value problem for
the homogeneous equation with y(t) = c1 y1(t) + c2 y2(t) when the Wronskian of
y1 and y2 is not zero. Sometimes Polking says that in this case c1 y1(t) + c2 y2(t) is
the "general solution" to y'' + p(t)y' + q(t)y = 0, but that is really the same thing:
every solution is the solution to some initial value problem. So, if you can solve
every initial value problem with y(t) = c1 y1(t) + c2 y2(t) with the right choice of the
constants c1 and c2, that must be the general solution!

¹ Due to the Norwegian mathematician Niels Abel (1802–1829).



Proof that under the hypotheses of Theorem 3 two solutions y1(t) and
y2(t) to y'' + p(t)y' + q(t)y = 0 are linearly independent on I if and only if
their Wronskian W(t) does not vanish on I.
If y1 and y2 are not linearly independent, then they are linearly dependent,
and you have the following possibilities. Either y1(t) = cy2(t) for all t in I, and
you have y1'(t) = cy2'(t) for all t in I, or y2(t) = cy1(t) for all t in I, and you have
y2'(t) = cy1'(t) for all t in I. In either case you can check easily that W(t) = 0 for
all t in I. So y1 and y2 linearly dependent implies W(t0) = 0 for any t0 in I.
Now I want to show that W(t0) = 0 for some t0 in I implies y1 and y2 are linearly
dependent. Suppose

    0 = W(t0) = y1(t0)y2'(t0) − y1'(t0)y2(t0).

If y1(t0) = y1'(t0) = 0, then y1(t) has the same initial data at t = t0 as the solution
y(t) ≡ 0. So Theorem 3 says y1(t) is that solution, and we have y1(t) = 0 = 0·y2(t).
So y1(t) and y2(t) are linearly dependent.
If y1(t0) ≠ 0, then W(t0) = 0 implies y2'(t0) = (y2(t0)/y1(t0)) y1'(t0). That means y2(t)
and (y2(t0)/y1(t0)) y1(t) have the same initial data at t = t0 (check that!), and Theorem 3
says that they must be the same solution. So y2(t) = cy1(t) with c = y2(t0)/y1(t0),
and y1 and y2 are linearly dependent.
If y1'(t0) ≠ 0, then W(t0) = 0 implies y2(t0) = (y2'(t0)/y1'(t0)) y1(t0). That means y2(t)
and (y2'(t0)/y1'(t0)) y1(t) have the same initial data at t = t0 (check that!), and Theorem 3
says that they must be the same solution. So y2(t) = cy1(t) with c = y2'(t0)/y1'(t0),
and y1 and y2 are linearly dependent.
Since that exhausts all possibilities for what y1(t0) and y1'(t0) can be, I can say
W(t0) = 0 always implies that y1 and y2 are linearly dependent. Since at the
beginning I showed y1 and y2 linearly dependent implies W(t0) = 0 for all t0, you
have a logical equivalence: y1 and y2 are linearly dependent if and only if their
Wronskian vanishes at some point in I. So y1 and y2 are linearly independent if
and only if their Wronskian does not vanish at any point in I.
