A Thesis
Presented to
The Faculty of the Department of Mathematics
Western Kentucky University
Bowling Green, Kentucky
In Partial Fulfillment
Of the Requirements for the Degree
Master of Science
By
Qian Dong
May 2009
QUALITATIVE BEHAVIOR OF SOLUTIONS TO
____ Di Wu ____________________________
_________________________________________
Dean, Graduate Studies and Research Date
ACKNOWLEDGMENTS
My sincere thanks go to Dr. Lan Nguyen, for his guidance and flexibility for the
duration of my thesis project. His excellent mentorship has helped me in understanding
the concepts of qualitative behavior of solutions to differential equations which is the
foundation of my thesis. I just want to say, without him, this thesis would not have been
possible.
Also, I would like to extend my sincere thanks to Dr. John Spraker and Dr. Di Wu
for their service as members of my thesis committee and for their valuable suggestions on
my thesis.
My special thanks go out to my parents who are in China. Without their
encouragement, I would not achieve my goals.
Last but not least, I would like to thank the entire graduate faculty in the
mathematics department at Western Kentucky University for making my graduate
experience such a positive one.
TABLE OF CONTENTS

ABSTRACT
PREFACE
CHAPTER 1: Background: A Population Problem
1.1 When the solution to the population equation is periodic
1.2 The initial value of the periodic function
1.3(a) Fish in a lake
1.3(b) Population in a village
CHAPTER 2: Matrices
ABSTRACT

This thesis studies various questions arising in the study of the long-run behavior of solutions. The contents of this thesis are related to three of the major problems of the qualitative theory, namely the stability, the boundedness, and the periodicity of solutions. Learning the qualitative behavior lets us determine, for example, whether a solution is stable, i.e., whether \lim_{t\to\infty} u(t) = 0. Moreover, the periodicity of a solution is also of great significance for practical purposes.
PREFACE
In fact, it is almost certain that when solving new problems, we use knowledge that we have already acquired to reach a new conclusion. This thesis addresses questions arising in the study of the long-run behavior of solutions; its contents are related to three of the major problems of the qualitative theory, namely the stability, the boundedness, and the periodicity of solutions.

It is our view that one of the most important problems in the study of homogeneous and non-homogeneous equations and their applications is that of describing the nature of the solutions for a large range of the parameters involved. From a numerical point of view, the long-run behavior of solutions must also be studied. The usual approach to fulfill such requirements is to consider a set of differential equations which are as general as possible and for which explicit analytic solutions are available.
Below, we explain how to find the qualitative behavior of solutions to ordinary differential equations in 1-dimensional \mathbb{R} with periodic solutions, and we give applications of the asymptotic behavior of solutions of such equations in the real world by studying a periodic solution of a population equation that represents real-world situations. We also obtain some results for the population equation in 1-dimensional \mathbb{R}, where particular attention has been given to one periodic solution. A periodic solution with initial population y_0 ensures that the population cannot become extinct, provided y(t+1) = y(t).
In Chapter 2 we study matrices and matrix-valued differential equations. The most obvious applications are in Linear Algebra and Differential Equations, where matrix functions are prevalent. To reach our final results, we first review matrices, their norms, and the notion of a matrix-valued function. Using Riesz theory, we also introduce the matrix-valued function f(A); for example, when f is the exponential function we can define the matrix e^{A}. Many properties of such functions are given. They are very important to the theory of matrix-valued differential equations and the behavior of their solutions. We also prove the Spectral Mapping Theorem, an exemplary theorem about the relationship between the eigenvalues of A and those of f(A).
Finally, we have the main results in Chapter 3 and Chapter 4, concerning the qualitative behavior of solutions of

y'(t) = Ay(t) + f(t), \quad y(0) = y_0.

The periodicity of a solution is also of great significance for practical purposes. Among the results, we have a theorem about the stability of solutions of the homogeneous equation: it gives four equivalent conditions to check whether the system is stable. As a nice corollary of that result, if \mathrm{Re}(\lambda) < 0 for each point in the spectrum of A, then we also obtain the boundedness of the solution of the non-homogeneous equation. Next, we study conditions on the operator A so that our solution is periodic. We find a nice condition on the spectrum of A: namely, if the numbers 2\pi n i (n \in \mathbb{Z}, the set of integers) are in the resolvent set of A, then the existence and uniqueness of 1-periodic solutions is guaranteed. Finally, if the imaginary axis is a subset of the resolvent set of A, then the existence and uniqueness of a bounded complete trajectory follows.
CHAPTER 1: Background: A Population Problem

In this chapter, we consider a population model (of human beings, bacteria, fish, etc.). In this population, we assume the birth rate is b and the death rate is d, so the growth rate is r = b - d. If there is external influence, then each year f(t) is added (or removed). The population equation is then

y'(t) = ry(t) + f(t), \quad y(0) = y_0. \quad (1)
We use the standard method for solving linear differential equations. First, write the equation as y'(t) - ry(t) = f(t) and multiply both sides by the integrating factor

\mu(t) = e^{\int -r\,dt} = e^{-rt}. \quad (3)

Integrating then gives the solution

y(t) = e^{rt}y_0 + \int_0^t e^{r(t-s)}f(s)\,ds. \quad (4)
A function f(t) is called p-periodic if f(t+p) = f(t) for all t in its domain. For the sake of simplicity, we first consider period p = 1. In general the solution (4) is not periodic; we now want to find the initial value y_0 such that the solution y(t) is periodic.
Theorem 1.1 Suppose f(t) is a periodic function with period 1. If r \neq 0, then there exists a unique initial value y_0 such that the solution of the population equation

y'(t) = ry(t) + f(t), \quad y(0) = y_0

is 1-periodic.
Proof: Suppose the solution y(t) = e^{rt}y_0 + \int_0^t e^{r(t-s)}f(s)\,ds is 1-periodic; then y(1) = y_0. Hence,

e^{r}y_0 + \int_0^1 e^{r(1-s)}f(s)\,ds = y_0.

Therefore,

(1 - e^{r})y_0 = \int_0^1 e^{r(1-s)}f(s)\,ds,

so

y_0 = \frac{1}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds.

So, if y(t) is 1-periodic, then y_0 must be equal to \frac{1}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds, and hence y_0 is unique.
Conversely, if y_0 = \frac{1}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds, we will show the solution y(t) = e^{rt}y_0 + \int_0^t e^{r(t-s)}f(s)\,ds is 1-periodic by showing y(1) = y(0). We have:

y(1) = e^{r}y_0 + \int_0^1 e^{r(1-s)}f(s)\,ds
     = \frac{e^{r}}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds + \int_0^1 e^{r(1-s)}f(s)\,ds
     = \Big(\frac{e^{r}}{1-e^{r}} + 1\Big)\int_0^1 e^{r(1-s)}f(s)\,ds
     = \frac{1}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds
     = y_0 = y(0). QED
Remark: In the general case, if f(t) is p-periodic, then the initial value of the unique p-periodic solution is

y_0 = \frac{1}{1-e^{pr}}\int_0^p e^{r(p-s)}f(s)\,ds.
Next, we will use this result to solve real-world problems.
1.3 Applications

1.3(a) Fish in a lake. Suppose the amount of fish in a lake grows at the rate of 30% per year, while fishing removes fish at a constant rate of 15,000 tons per year. What is the amount of fish in the lake so that the amount stays periodic?

Solution: By Theorem 1.1, there is a unique initial value y_0 so that the amount of fish in the lake is 1-periodic. If the initial amount of fish is greater than y_0, then the fish population will grow; if the initial amount of fish is less than y_0, then the fish will eventually be gone. The population equation is

y'(t) = 0.3\,y(t) - 15{,}000, \quad y(0) = y_0.
By Theorem 1.1, with r = 0.3 and f(s) = -15{,}000:

y_0 = \frac{1}{1-e^{r}}\int_0^1 e^{r(1-s)}f(s)\,ds
    = \frac{-15{,}000}{1-e^{0.3}}\int_0^1 e^{0.3(1-s)}\,ds
    = \frac{-15{,}000}{1-e^{0.3}}\cdot\frac{e^{0.3}-1}{0.3}
    = \frac{15{,}000}{0.3} = 50{,}000 \text{ (tons)}.
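As a quick numerical sanity check of this computation (not part of the thesis; a sketch assuming Python with numpy and scipy is available), we can evaluate the integral formula of Theorem 1.1 directly:

```python
# Numerical check of the fish-lake example: y0 = 1/(1-e^r) * int_0^1 e^{r(1-s)} f(s) ds.
import numpy as np
from scipy.integrate import quad

r = 0.3
f = lambda s: -15_000.0  # constant harvesting of 15,000 tons/year

integral, _ = quad(lambda s: np.exp(r * (1 - s)) * f(s), 0, 1)
y0 = integral / (1 - np.exp(r))
assert abs(y0 - 50_000) < 1e-4  # matches the closed-form answer 15,000/0.3
```

The closed-form answer 15{,}000/0.3 = 50{,}000 agrees with the quadrature to numerical precision.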
1.3(b) Population in a village. Consider the population of a village with initial value y(0) = y_0. Let the birth rate be 2% and the death rate be 1%, so the growth rate is 1%. However, each year the number of people leaving for the cities is

-f(t) = 30 - \cos\frac{2\pi t}{10} \quad (\text{period } p = 10).

What is the (initial) population of the village, so that the village won't become empty?

Solution: First, we need to find the initial value y_0. We have the population equation

y'(t) = 0.01\,y(t) - 30 + \cos\frac{2\pi t}{10}, \quad y(0) = y_0.
By the remark after Theorem 1.1, with p = 10 and r = 0.01:

y_0 = \frac{1}{1-e^{0.1}}\int_0^{10} e^{0.01(10-s)}\Big(-30 + \cos\frac{2\pi s}{10}\Big)\,ds
    = \frac{-1}{1-e^{0.1}}\Big[\int_0^{10} 30\,e^{0.01(10-s)}\,ds - \int_0^{10} e^{0.01(10-s)}\cos\frac{2\pi s}{10}\,ds\Big].

The first integral is

\int_0^{10} 30\,e^{0.01(10-s)}\,ds = 30\cdot\frac{e^{0.1}-1}{0.01} = 3000\,(e^{0.1}-1),

so the first term contributes \frac{-1}{1-e^{0.1}}\cdot 3000(e^{0.1}-1) = 3000. For the second integral, write e^{0.01(10-s)} = e^{0.1}e^{-0.01s} and use

\int e^{au}\cos nu\,du = \frac{e^{au}(a\cos nu + n\sin nu)}{a^2+n^2}

with a = -0.01 and n = \frac{2\pi}{10}:

\int_0^{10} e^{-0.01s}\cos\frac{2\pi s}{10}\,ds = \frac{-0.01\,(e^{-0.1}-1)}{(0.01)^2 + (\frac{2\pi}{10})^2}.

Hence

y_0 = 3000 + \frac{e^{0.1}}{1-e^{0.1}}\cdot\frac{-0.01\,(e^{-0.1}-1)}{(0.01)^2+(\frac{2\pi}{10})^2}
    = 3000 - \frac{0.01}{(0.01)^2 + (\frac{2\pi}{10})^2} \approx 2999.97.
So, when the village has about 3,000 residents, the population of the village is not decreasing, and the village will not become empty. QED
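The village computation can be checked the same way (a sketch, again assuming numpy/scipy; these libraries are not part of the thesis):

```python
# Numerical check of the village example with p = 10 and r = 0.01.
import numpy as np
from scipy.integrate import quad

r, p = 0.01, 10.0
f = lambda s: -30.0 + np.cos(2 * np.pi * s / p)  # yearly change besides natural growth

integral, _ = quad(lambda s: np.exp(r * (p - s)) * f(s), 0, p)
y0 = integral / (1 - np.exp(r * p))

closed_form = 3000 - 0.01 / (0.01**2 + (2 * np.pi / 10) ** 2)
assert abs(y0 - closed_form) < 1e-4  # y0 is approximately 2999.97
```

This confirms that the exact periodic initial value is just below 3,000 residents.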
CHAPTER 2: Matrices

From the above applications, we see that it is important to study linear equations not only in one dimension but also in the multi-dimensional case. Before doing that, we review vectors and matrices in \mathbb{R}^n and their basic properties.

Definition 2.1: The space \mathbb{R}^n is the set of all ordered n-tuples of the form

u = (u_1, u_2, \ldots, u_n),

called vectors. The dot product of two vectors x = (x_1, \ldots, x_n) and y = (y_1, y_2, \ldots, y_n) is defined as

x \cdot y = x_1y_1 + x_2y_2 + \cdots + x_ny_n,

and the norm of x is

\|x\| = \sqrt{x \cdot x} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.
The norm satisfies the triangle inequality

\|x + y\| \le \|x\| + \|y\|,

and for x = (x_1, \ldots, x_n) and y = (y_1, y_2, \ldots, y_n) we have the Cauchy–Schwarz inequality

|x \cdot y| \le \|x\| \cdot \|y\|.

The distance between x and y is

d(x, y) = \|x - y\| = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2 + \cdots + (x_n-y_n)^2}.
An n \times n matrix A = (a_{ij}) can be regarded as a vector in \mathbb{R}^{n^2}, A = (a_{11}, a_{12}, \ldots, a_{1n}, \ldots, a_{n1}, a_{n2}, \ldots, a_{nn}) (n^2 terms). Then the norm of A, denoted \|A\|, is

\|A\| = \sqrt{\sum_{i,j=1}^{n} a_{ij}^2}.
Theorem 2.6 Let A and B be n \times n matrices and x \in \mathbb{R}^n. Then the following inequalities hold:

1) \|Ax\| \le \|A\| \cdot \|x\|;
2) \|AB\| \le \|A\| \cdot \|B\|.
Proof: 1) Writing out the components of Ax and applying the Cauchy–Schwarz inequality to each row,

\|Ax\|^2 \le (a_{11}^2 + a_{12}^2 + \cdots + a_{1n}^2)(x_1^2 + \cdots + x_n^2) + (a_{21}^2 + \cdots + a_{2n}^2)(x_1^2 + \cdots + x_n^2) + \cdots + (a_{n1}^2 + \cdots + a_{nn}^2)(x_1^2 + \cdots + x_n^2)
= (a_{11}^2 + a_{12}^2 + \cdots + a_{1n}^2 + a_{21}^2 + \cdots + a_{2n}^2 + \cdots + a_{n1}^2 + \cdots + a_{nn}^2)(x_1^2 + x_2^2 + \cdots + x_n^2)
= \|A\|^2\|x\|^2 = (\|A\| \cdot \|x\|)^2.

Hence \|Ax\| \le \|A\| \cdot \|x\|.
2) Let y_1, y_2, \ldots, y_n be the column vectors of the matrix B. Then it is easy to see that Ay_1, \ldots, Ay_n are the column vectors of AB,

\|B\|^2 = \sum_{i=1}^{n}\|y_i\|^2 \quad \text{and} \quad \|AB\|^2 = \sum_{i=1}^{n}\|Ay_i\|^2.

By part 1), \|Ay_i\|^2 \le \|A\|^2\|y_i\|^2. Hence

\|AB\|^2 = \sum_{i=1}^{n}\|Ay_i\|^2 \le \sum_{i=1}^{n}\|A\|^2\|y_i\|^2 = \|A\|^2\sum_{i=1}^{n}\|y_i\|^2 = \|A\|^2\|B\|^2. QED
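The two inequalities of Theorem 2.6 are easy to test numerically; here is a small sketch (assuming numpy; the norm used in the thesis is the Frobenius norm):

```python
# Check ||Ax|| <= ||A|| ||x|| and ||AB|| <= ||A|| ||B|| for random matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
x = rng.normal(size=4)

fro = lambda M: np.sqrt((M * M).sum())  # the entrywise (Frobenius) norm of the thesis

assert np.linalg.norm(A @ x) <= fro(A) * np.linalg.norm(x) + 1e-12
assert fro(A @ B) <= fro(A) * fro(B) + 1e-12
```

Both bounds hold for any matrices, which is exactly what the proof above establishes.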
A function f : \mathbb{R} \to \mathbb{R}^n, f(t) = (f_1(t), \ldots, f_n(t)), is continuous if each component f_i is continuous. We say f : \mathbb{R} \to \mathbb{R}^n is differentiable at t if

\lim_{h\to 0}\frac{f(t+h) - f(t)}{h}

exists in \mathbb{R}^n. The limit is denoted f'(t). It is easy to see that if

f(t) = \begin{pmatrix} f_1(t) \\ f_2(t) \\ \vdots \\ f_n(t) \end{pmatrix}, \quad \text{then} \quad f'(t) = \begin{pmatrix} f_1'(t) \\ f_2'(t) \\ \vdots \\ f_n'(t) \end{pmatrix}.

Here is an example: if f(t) = \begin{pmatrix} t^2 \\ \cos t \end{pmatrix}, then f'(t) = \begin{pmatrix} 2t \\ -\sin t \end{pmatrix}. Similarly, we say F(t) is an anti-derivative of an \mathbb{R}^n-valued function f(t) if F'(t) = f(t), and we define

\int_a^b f(t)\,dt := \Big(\int_a^b f_1(t)\,dt, \int_a^b f_2(t)\,dt, \ldots, \int_a^b f_n(t)\,dt\Big).
Theorem 2.9 (Product Rule) If F(t) is a differentiable matrix-valued function and x(t) a differentiable vector-valued function, then

\frac{d}{dt}\big[F(t)x(t)\big] = F'(t)x(t) + F(t)x'(t).

Proof:

\frac{d}{dt}\big[F(t)x(t)\big] = \lim_{h\to 0}\frac{F(t+h)x(t+h) - F(t)x(t)}{h}
= \lim_{h\to 0}\frac{F(t+h)x(t+h) - F(t)x(t+h) + F(t)x(t+h) - F(t)x(t)}{h}
= \lim_{h\to 0}\frac{(F(t+h)-F(t))x(t+h)}{h} + \lim_{h\to 0}\frac{F(t)(x(t+h)-x(t))}{h}
= \lim_{h\to 0}\frac{F(t+h)-F(t)}{h}\cdot\lim_{h\to 0}x(t+h) + F(t)\cdot\lim_{h\to 0}\frac{x(t+h)-x(t)}{h}
= F'(t)x(t) + F(t)x'(t). QED
To reach our end result, we need to know what the eigenvalues and eigenvectors of an n \times n matrix A are. A number \lambda is called an eigenvalue of A if there is a nonzero vector x such that

Ax = \lambda x.

The nonzero vector x is called an eigenvector corresponding to \lambda. We also note that \lambda is an eigenvalue of A if and only if

\det(\lambda I - A) = 0.

This equation is called the characteristic equation; its solutions are the eigenvalues. For each eigenvalue, there is one or more corresponding eigenvectors (we disregard the zero vector). As an example, let us find the eigenvalues and eigenvectors of the matrix A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}.
Then

|\lambda I - A| = \begin{vmatrix} \lambda-1 & -4 \\ -2 & \lambda-3 \end{vmatrix} = (\lambda-1)(\lambda-3) - 8 = \lambda^2 - 4\lambda - 5 = (\lambda-5)(\lambda+1).

This gives two eigenvalues \lambda_1 = 5 and \lambda_2 = -1. Then we need to find the corresponding eigenvectors.

For \lambda_1 = 5 we have

(5I - A)x = \Big(\begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix} - \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}\Big)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 4 & -4 \\ -2 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving that system, we get x = c\begin{pmatrix} 1 \\ 1 \end{pmatrix}.

For \lambda_2 = -1 we have

(-I - A)x = \Big(\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} - \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}\Big)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -2 & -4 \\ -2 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving that system, we get x = c\begin{pmatrix} -2 \\ 1 \end{pmatrix}.
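The hand computation above can be verified with a numerical eigensolver (a sketch assuming numpy; not part of the thesis):

```python
# Verify the eigenvalues and eigenvectors of A = [[1, 4], [2, 3]].
import numpy as np

A = np.array([[1.0, 4.0], [2.0, 3.0]])
vals, vecs = np.linalg.eig(A)

# The eigenvalues should be 5 and -1, as computed above.
assert np.allclose(np.sort(vals.real), [-1.0, 5.0])
# Each computed eigenvector v satisfies A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that numerical eigenvectors are normalized, so they are scalar multiples of (1, 1) and (-2, 1).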
2.4 Matrix Exponential Function e^{tA}

Let A be an n \times n matrix. What are the matrix e^{A} and the function e^{tA}? We define

e^{A} = \lim_{n\to\infty}\Big(I + \frac{A}{1!} + \frac{A^2}{2!} + \cdots + \frac{A^n}{n!}\Big) = \sum_{n=0}^{\infty}\frac{A^n}{n!}

and

e^{tA} = \sum_{n=0}^{\infty}\frac{t^n}{n!}A^n.

The above definition is meaningful. Indeed, if we denote S_n := I + \frac{A}{1!} + \frac{A^2}{2!} + \cdots + \frac{A^n}{n!}, then

e^{A} - S_n = \sum_{i=n+1}^{\infty}\frac{A^i}{i!},

and hence

\|e^{A} - S_n\| = \Big\|\sum_{i=n+1}^{\infty}\frac{A^i}{i!}\Big\| \le \sum_{i=n+1}^{\infty}\frac{\|A^i\|}{i!} \le \sum_{i=n+1}^{\infty}\frac{\|A\|^i}{i!} \to 0 \text{ as } n \to \infty.
Using the above definition we will find e^{tA} for some given matrices A.

Examples 2.12: Find e^{tA} if

a) A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad b) A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad c) A = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}.

a) If A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, then A^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -I, A^3 = -A, and A^4 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. Hence

e^{tA} = I + tA + \frac{t^2}{2!}(-I) + \frac{t^3}{3!}(-A) + \frac{t^4}{4!}I + \cdots
= \begin{pmatrix} \sum_{n=0}^{\infty}(-1)^n\frac{t^{2n}}{(2n)!} & \sum_{n=0}^{\infty}(-1)^n\frac{t^{2n+1}}{(2n+1)!} \\ -\sum_{n=0}^{\infty}(-1)^n\frac{t^{2n+1}}{(2n+1)!} & \sum_{n=0}^{\infty}(-1)^n\frac{t^{2n}}{(2n)!} \end{pmatrix}
= \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}.

Here we have used the facts that \cos t = \sum_{n=0}^{\infty}(-1)^n\frac{t^{2n}}{(2n)!} and \sin t = \sum_{n=0}^{\infty}(-1)^n\frac{t^{2n+1}}{(2n+1)!}.
b) If A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, then A^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I, A^3 = A, A^4 = I, \ldots. Hence

e^{tA} = I + tA + \frac{t^2}{2!}I + \frac{t^3}{3!}A + \frac{t^4}{4!}I + \cdots = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}

(using \sinh t = \frac{e^t - e^{-t}}{2} and \cosh t = \frac{e^t + e^{-t}}{2}).
c) If A = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}, then A^2 = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, and hence A^3 = A^4 = \cdots = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.

Hence

e^{tA} = I + tA = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} t & t \\ -t & -t \end{pmatrix} = \begin{pmatrix} 1+t & t \\ -t & 1-t \end{pmatrix}.
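All three examples can be confirmed with scipy's matrix exponential (a sketch assuming numpy/scipy; not part of the thesis):

```python
# Compare the closed-form exponentials of Examples 2.12 with scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm

t = 0.7
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation generator
B = np.array([[0.0, 1.0], [1.0, 0.0]])    # A^2 = I
C = np.array([[1.0, 1.0], [-1.0, -1.0]])  # nilpotent: C^2 = 0

assert np.allclose(expm(t * A), [[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
assert np.allclose(expm(t * B), [[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
assert np.allclose(expm(t * C), [[1 + t, t], [-t, 1 - t]])
```

The nilpotent case c) is the cleanest: the power series terminates after the linear term.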
d) Finally, if A is the n \times n nilpotent matrix

A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ \vdots & & & \ddots & 1 \\ 0 & \cdots & \cdots & \cdots & 0 \end{pmatrix},

then A^n = 0, so the series terminates and

e^{At} = \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{n-2}}{(n-2)!} \\ \vdots & & \ddots & \ddots & \vdots \\ \vdots & & & \ddots & t \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix}.
Theorem 2.13 Let A be an n \times n matrix and t, s real numbers. Then:

(a) e^{O} = I, where O is the zero matrix;
(b) e^{tI} = e^{t}I;
(c) e^{A(t+s)} = e^{At}e^{As};
(d) (e^{At})^{-1} = e^{-At}.

Proof: (a) Using the formula e^{tA} = I + \frac{tA}{1!} + \frac{(tA)^2}{2!} + \cdots + \frac{(tA)^n}{n!} + \cdots, we can calculate

e^{O} = I + \frac{O}{1!} + \frac{O^2}{2!} + \cdots + \frac{O^n}{n!} + \cdots = I.

(b) From the same formula we have, for any real number r,

e^{rI} = I + \frac{rI}{1!} + \frac{(rI)^2}{2!} + \cdots + \frac{(rI)^n}{n!} + \cdots
= I + \frac{rI}{1!} + \frac{r^2I}{2!} + \cdots + \frac{r^nI}{n!} + \cdots
= I\Big(1 + \frac{r}{1!} + \frac{r^2}{2!} + \cdots + \frac{r^n}{n!} + \cdots\Big) = e^{r}I. QED
(c) 1) If A is diagonal, i.e. A = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n), then

e^{At} = \mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t}) \quad \text{and} \quad e^{As} = \mathrm{diag}(e^{\lambda_1 s}, \ldots, e^{\lambda_n s}),

so

e^{At}e^{As} = \mathrm{diag}(e^{\lambda_1(t+s)}, \ldots, e^{\lambda_n(t+s)}) = e^{A(t+s)}.
2) If A is diagonalizable, write A = S^{-1}DS with D = SAS^{-1} diagonal. Using the same reasoning, we can conclude that SA^{n}S^{-1} = (SAS^{-1})^{n} = D^{n}, a diagonal matrix. Hence,

Se^{At}S^{-1} = S\Big(\sum_{n=0}^{\infty}\frac{t^n}{n!}A^n\Big)S^{-1} = \sum_{n=0}^{\infty}\frac{t^n\,SA^nS^{-1}}{n!} = \sum_{n=0}^{\infty}\frac{t^nD^n}{n!} = e^{tD}.

Therefore, by the diagonal case,

e^{At}e^{As} = S^{-1}e^{tD}S \cdot S^{-1}e^{sD}S = S^{-1}(e^{tD}e^{sD})S = S^{-1}e^{(t+s)D}S = e^{A(t+s)}.
3) Next, let A be a Jordan block:

A = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ \vdots & & & \lambda & 1 \\ 0 & \cdots & \cdots & 0 & \lambda \end{pmatrix}
= \begin{pmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda \end{pmatrix} + \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 1 \\ 0 & \cdots & \cdots & \cdots & 0 \end{pmatrix} = \lambda I + B,

where B is the nilpotent matrix whose exponential was computed in Example 2.12 d). We observe that it is not hard to show that if \lambda is any constant, then e^{\lambda I}e^{A} = e^{\lambda I + A}, since \lambda I commutes with every matrix. So we can obtain
e^{At} = e^{t(\lambda I + B)} = e^{t\lambda I}\cdot e^{tB} = e^{\lambda t}\,T(t), \quad \text{where} \quad
T(t) = \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{n-2}}{(n-2)!} \\ \vdots & & \ddots & \ddots & \vdots \\ \vdots & & & \ddots & t \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix},

and similarly e^{As} = e^{s(\lambda I + B)} = e^{\lambda s}\,T(s). Hence

e^{At}e^{As} = e^{\lambda(t+s)}\,T(t)T(s).

The (i, j) entry of T(t)T(s) is

\sum_{k=0}^{j-i}\frac{t^k}{k!}\cdot\frac{s^{j-i-k}}{(j-i-k)!} = \frac{(t+s)^{j-i}}{(j-i)!}

by the binomial theorem, so T(t)T(s) = T(t+s) and

e^{At}e^{As} = e^{\lambda(t+s)}\,T(t+s) = e^{(t+s)(\lambda I + B)} = e^{(t+s)A}.
Finally, if A is similar to a Jordan block, i.e. A = SJS^{-1} with J a Jordan block, then we have

e^{A(t+s)} = Se^{J(t+s)}S^{-1} = Se^{Jt+Js}S^{-1} = S(e^{Jt}\cdot e^{Js})S^{-1} = Se^{Jt}S^{-1}Se^{Js}S^{-1} = e^{At}\cdot e^{As}.

(d) By (a) and (c),

e^{tA}\cdot e^{-tA} = e^{A(t-t)} = e^{O} = I.

Hence (e^{At})^{-1} = e^{-At}. QED
Theorem 2.14 If A is an n \times n matrix, then \frac{d}{dt}e^{tA} = Ae^{tA}.

Proof: We know

e^{tA} = I + \frac{tA}{1!} + \frac{(tA)^2}{2!} + \cdots + \frac{(tA)^n}{n!} + \cdots.

Since the above series converges for every t, and the series of derivatives of its terms converges uniformly on bounded intervals, we may differentiate term by term:

\frac{d}{dt}e^{tA} = 0 + A + \frac{2tA^2}{2\cdot 1} + \frac{3t^2A^3}{3\cdot 2\cdot 1} + \cdots + \frac{nt^{n-1}A^n}{n\cdot(n-1)\cdots 1} + \cdots
= A + \frac{tA^2}{1} + \frac{t^2A^3}{2!} + \cdots + \frac{t^{n-1}A^n}{(n-1)!} + \cdots
= A\Big(I + \frac{tA}{1!} + \frac{(tA)^2}{2!} + \cdots + \frac{(tA)^{n-1}}{(n-1)!} + \cdots\Big)
= Ae^{tA}. QED
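Theorem 2.14 is easy to test numerically with a finite-difference approximation of the derivative (a sketch assuming numpy/scipy; not part of the thesis):

```python
# Check d/dt e^{tA} = A e^{tA} via a central finite difference.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, h = 0.5, 1e-6

deriv = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
assert np.allclose(deriv, A @ expm(t * A), atol=1e-6)
```

Because e^{tA} commutes with A, the identity can equivalently be written as \frac{d}{dt}e^{tA} = e^{tA}A.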
The second approach to defining e^{tA} uses the Riesz theory. Recall that the Cauchy Integral Formula is a useful tool for solving many problems in Complex Analysis. The Cauchy Integral Formula states that, given a complex function f(z) that is analytic everywhere inside and on a simple closed contour C, taken in the positive sense, and a point z_0 interior to C,

f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz.

Thus the value of f(z_0) can be determined if f(z) is known on the contour C. Similarly, for the derivative,

f'(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^2}\,dz.

In order to accommodate later use, the Cauchy Integral Formula can be rewritten as

f(z_0) = \frac{1}{2\pi i}\oint_C f(z)\cdot(z - z_0)^{-1}\,dz.

Replacing the number z_0 by the matrix A, we define

f(A) = \frac{1}{2\pi i}\oint_C f(z)\cdot(z - A)^{-1}\,dz,

where C is a closed contour containing all eigenvalues of A in its interior. In particular, for f(z) = e^{z} we have

e^{A} = \frac{1}{2\pi i}\oint_C e^{z}(z - A)^{-1}\,dz.
Note that the definition is independent of the choice of the contour C. We will find out that the two above definitions of e^{A} using the two approaches are the same. First, we compute f(A) for the simplest functions.

Theorem 2.15
1. If f(z) = 1, then f(A) = I.
2. If f(z) = z^{n} (n = 1, 2, \ldots), then f(A) = A^{n}; in particular, if f(z) = z, then f(A) = A.

Proof: Recall that if \|A\| < 1, then

(I - A)^{-1} = \sum_{n=0}^{\infty} A^{n}.

Let C be a contour with |z| > \|A\| for all z on C. We then have \|A/z\| < 1 and hence I - \frac{A}{z} is invertible. Thus z - A = z\big(I - \frac{A}{z}\big) is invertible and

(z - A)^{-1} = \frac{1}{z}\Big(I - \frac{A}{z}\Big)^{-1} = \frac{1}{z}\sum_{n=0}^{\infty}\Big(\frac{A}{z}\Big)^{n} = \sum_{n=0}^{\infty}\frac{A^{n}}{z^{n+1}}.
1. For f(z) = 1,

f(A) = \frac{1}{2\pi i}\oint_C f(z)(z - A)^{-1}\,dz = \frac{1}{2\pi i}\oint_C \sum_{n=0}^{\infty}\frac{A^{n}}{z^{n+1}}\,dz = \frac{1}{2\pi i}\sum_{n=0}^{\infty}A^{n}\oint_C \frac{dz}{z^{n+1}} = \frac{1}{2\pi i}\cdot I\cdot 2\pi i = I.

Here we have used the facts that \oint_C \frac{dz}{z} = 2\pi i and \oint_C \frac{dz}{z^{n}} = 0 for all n \ge 2.
2. For f(z) = z^{n},

f(A) = \frac{1}{2\pi i}\oint_C z^{n}\sum_{m=0}^{\infty}\frac{A^{m}}{z^{m+1}}\,dz = \frac{1}{2\pi i}\sum_{m=0}^{\infty}A^{m}\oint_C \frac{dz}{z^{m-n+1}} = \frac{1}{2\pi i}\cdot A^{n}\cdot 2\pi i = A^{n}.

Here, only the term m = n contributes: we have used the facts that \oint_C \frac{dz}{z} = 2\pi i and \oint_C z^{k}\,dz = 0 for every integer k \neq -1. QED
By Theorem 2.15, if f(z) = z^{2}, then f(A) = A^{2}; if f(z) = z^{3}, then f(A) = A^{3}. The definition can also be applied to more exotic functions, which will be shown below. First, some addition and multiplication properties.
Theorem 2.16 Let f and g be analytic on a domain containing the eigenvalues of A. Then:

1. If h(z) = f(z) \pm g(z), then h(A) = f(A) \pm g(A);
2. If h(z) = c\cdot f(z) for a constant c, then h(A) = c\cdot f(A);
3. If h(z) = f(z)g(z), then h(A) = f(A)\cdot g(A);
4. If f(z) \neq 0 for all z in the domain, then f(A) is invertible and f(A)^{-1} = \big(\frac{1}{f}\big)(A).
Proof:

1. If h(z) = f(z) \pm g(z), then

h(A) = \frac{1}{2\pi i}\oint_c h(z)(z - A)^{-1}\,dz = \frac{1}{2\pi i}\oint_c \big(f(z) \pm g(z)\big)(z - A)^{-1}\,dz
= \frac{1}{2\pi i}\oint_c f(z)(z - A)^{-1}\,dz \pm \frac{1}{2\pi i}\oint_c g(z)(z - A)^{-1}\,dz = f(A) \pm g(A).

2. If h(z) = c\cdot f(z), then

h(A) = \frac{1}{2\pi i}\oint_c h(z)(z - A)^{-1}\,dz = \frac{1}{2\pi i}\oint_c c\cdot f(z)(z - A)^{-1}\,dz = c\cdot\frac{1}{2\pi i}\oint_c f(z)(z - A)^{-1}\,dz = c\cdot f(A).
3. If h(z) = f(z)g(z), we must show that

h(A) = f(A)\cdot g(A) = \frac{1}{2\pi i}\oint_{c_1} f(z)g(z)(z - A)^{-1}\,dz.

Because the integral does not depend on the choice of the contour, the contours C_1 and C_2 may be chosen so that C_1 lies inside C_2. Write

f(A) = \frac{1}{2\pi i}\oint_{C_1} f(z)\cdot(z - A)^{-1}\,dz \quad \text{and} \quad g(A) = \frac{1}{2\pi i}\oint_{C_2} g(u)\cdot(u - A)^{-1}\,du.

Then

f(A)\cdot g(A) = \frac{1}{(2\pi i)^2}\oint_{C_1} f(z)(z - A)^{-1}\,dz \oint_{C_2} g(u)(u - A)^{-1}\,du
= \frac{1}{(2\pi i)^2}\oint_{C_1}\oint_{C_2} f(z)g(u)(z - A)^{-1}(u - A)^{-1}\,du\,dz.

After multiplying the integrals together, the result can be rewritten after applying the resolvent identity

(z - A)^{-1}(u - A)^{-1} = \frac{1}{z - u}\big((u - A)^{-1} - (z - A)^{-1}\big)

as follows:

f(A)\cdot g(A) = \frac{1}{(2\pi i)^2}\oint_{c_1}\oint_{c_2} f(z)g(u)\,\frac{1}{z - u}\big[(u - A)^{-1} - (z - A)^{-1}\big]\,du\,dz.

This can be split into two double integrals, which can be arranged as shown:

f(A)\cdot g(A) = \frac{1}{(2\pi i)^2}\oint_{c_2} g(u)(u - A)^{-1}\oint_{c_1}\frac{f(z)}{z - u}\,dz\,du + \frac{1}{(2\pi i)^2}\oint_{c_1} f(z)(z - A)^{-1}\oint_{c_2}\frac{g(u)}{u - z}\,du\,dz.

However, by the Cauchy Integral Formula, \oint_{c_1}\frac{f(z)}{z - u}\,dz = 0, since u is not contained in C_1; and \oint_{c_2}\frac{g(u)}{u - z}\,du = 2\pi i\,g(z) by the Cauchy Integral Formula, since z is contained in C_2. Hence,

f(A)\cdot g(A) = \frac{1}{(2\pi i)^2}\oint_{c_1} f(z)(z - A)^{-1}\cdot 2\pi i\,g(z)\,dz
= \frac{1}{2\pi i}\oint_{c_1} f(z)\cdot g(z)(z - A)^{-1}\,dz
= \frac{1}{2\pi i}\oint_{c_1} h(z)(z - A)^{-1}\,dz = h(A).
4. Suppose f(z) \neq 0 for all z in a domain containing the eigenvalues of A. Then g(z) := \frac{1}{f(z)} exists and is analytic in that domain, and h(z) := f(z)g(z) = 1. By parts 3 and Theorem 2.15,

h(A) = I = f(A)g(A).

That means f(A) is invertible and f(A)^{-1} = g(A) = \big(\frac{1}{f}\big)(A). QED
For example, if A is an invertible matrix (i.e., 0 is not an eigenvalue of A) and f(z) = \frac{1}{z} with the domain D(f) = \mathbb{C}\setminus\{0\}, then \big(\frac{1}{f}\big)(z) = z and \big(\frac{1}{f}\big)(A) = A. Hence

f(A) = \Big(\big(\tfrac{1}{f}\big)(A)\Big)^{-1} = A^{-1}.

If now f(z) = e^{z} = \sum_{n=0}^{\infty}\frac{z^{n}}{n!}, then, using the addition and multiplication properties, we have

f(A) = \sum_{n=0}^{\infty}\frac{A^{n}}{n!} = e^{A},

so the two definitions of e^{A} agree.
Now, if f(z) = \cos z = \frac{e^{iz} + e^{-iz}}{2}, a new matrix function can be defined as

\cos(A) = \frac{1}{2\pi i}\oint_C \cos(z)(z - A)^{-1}\,dz
= \frac{1}{2}\Big(\frac{1}{2\pi i}\oint_C e^{iz}(z - A)^{-1}\,dz + \frac{1}{2\pi i}\oint_C e^{-iz}(z - A)^{-1}\,dz\Big)
= \frac{1}{2}\Big(\sum_{n=0}^{\infty}\frac{(iA)^{n}}{n!} + \sum_{n=0}^{\infty}\frac{(-iA)^{n}}{n!}\Big)
= \sum_{n=0}^{\infty}\frac{(-1)^{n}A^{2n}}{(2n)!} = I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots.

Likewise, if f(z) = \sin z = \frac{e^{iz} - e^{-iz}}{2i}, then

\sin(A) = \frac{e^{iA} - e^{-iA}}{2i} = \sum_{n=0}^{\infty}\frac{(-1)^{n}A^{2n+1}}{(2n+1)!} = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots.
The definition also applies to f(z) = \sqrt{z} (taking a certain branch). For example, let A = \begin{pmatrix} 7 & 6 \\ -3 & -2 \end{pmatrix}. The eigenvalues of this matrix A are 1 and 4. Now

f(A) = \sqrt{A} = \frac{1}{2\pi i}\oint_C \sqrt{z}\,(z - A)^{-1}\,dz,

and a computation gives

\sqrt{A} = \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix};

indeed, one checks that \begin{pmatrix} 3 & 2 \\ -1 & 0 \end{pmatrix}^{2} = \begin{pmatrix} 7 & 6 \\ -3 & -2 \end{pmatrix} = A.
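This square root can be reproduced with scipy's principal matrix square root (a sketch assuming numpy/scipy; not part of the thesis):

```python
# Verify sqrt(A) for A = [[7, 6], [-3, -2]] using the principal matrix square root.
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[7.0, 6.0], [-3.0, -2.0]])
root = sqrtm(A)

assert np.allclose(root, [[3.0, 2.0], [-1.0, 0.0]])  # the matrix computed above
assert np.allclose(root @ root, A)                   # and it really squares to A
```

The principal branch picks the square root whose eigenvalues (here 1 and 2) have positive real part, matching the branch chosen in the text.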
Theorem 2.19 (Spectral Mapping Theorem) Let A be an n \times n matrix and f analytic on a domain containing the eigenvalues of A. Then

EV(f(A)) = f(EV(A)),

where EV(\cdot) denotes the set of eigenvalues.

Proof: Let \lambda be an eigenvalue of A and define

g(z) := \begin{cases} \dfrac{f(z) - f(\lambda)}{z - \lambda} & \text{if } z \neq \lambda; \\[4pt] f'(\lambda) & \text{if } z = \lambda. \end{cases}

Since f is analytic at \lambda, we may write f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(\lambda)}{n!}(z - \lambda)^{n}. Hence, by definition, g(z) = \sum_{n=0}^{\infty}\frac{f^{(n+1)}(\lambda)}{(n+1)!}(z - \lambda)^{n} is also analytic at \lambda. Then

f(A) - f(\lambda)I = (A - \lambda I)g(A),

and hence

\det\big(f(A) - f(\lambda)I\big) = \det\big((A - \lambda I)\cdot g(A)\big) = \det(A - \lambda I)\cdot\det(g(A)) = 0\cdot\det(g(A)) = 0.

That means f(\lambda) is an eigenvalue of the matrix f(A).
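The Spectral Mapping Theorem is easy to illustrate numerically (a sketch assuming numpy/scipy; not part of the thesis):

```python
# EV(f(A)) = f(EV(A)) checked for f(z) = z^2 and f(z) = e^z.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0], [2.0, 3.0]])  # eigenvalues 5 and -1
eig = np.linalg.eigvals

assert np.allclose(np.sort(eig(A @ A).real), np.sort((eig(A) ** 2).real))
assert np.allclose(np.sort(eig(expm(A)).real), np.sort(np.exp(eig(A)).real))
```

For this matrix the eigenvalues of A^2 are 25 and 1, and those of e^{A} are e^{5} and e^{-1}, exactly the images of 5 and -1.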
Conversely, if x is an eigenvector of A corresponding to \lambda, then x is an eigenvector of A^{2} corresponding to \lambda^{2}; in the same manner, the same vector x is an eigenvector of f(A) corresponding to f(\lambda).
In this chapter we will study the qualitative behavior of solutions of the homogeneous differential equation

y'(t) = Ay(t), \quad y(0) = y_0 \quad (3.1)

and of the non-homogeneous differential equation

y'(t) = Ay(t) + f(t), \quad y(0) = y_0. \quad (3.2)

Studying the qualitative behavior of such solutions is an important part of the theory of differential equations; for instance, we ask whether a solution is stable, i.e., whether \lim_{t\to\infty} y(t) = 0. Moreover, the periodicity of a solution is also of great significance for practical purposes. Before studying the properties of such solutions, we establish their form.

Theorem 3.1
(1) The unique solution of equation (3.1) is y(t) = e^{tA}y_0.
(2) The unique solution of equation (3.2) is

y(t) = e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds.
Proof: We will prove part (2), as part (1) is the special case f(t) = 0. Rewrite the equation as

y'(t) - Ay(t) = f(t), \quad (3.3)

and multiply both sides by the integrating factor

\mu(t) = e^{\int -A\,dt} = e^{-At}.

This gives (e^{-tA}y(t))' = e^{-tA}f(t). Integrating from 0 to t,

e^{-tA}y(t) = y(0) + \int_0^t e^{-sA}f(s)\,ds = y_0 + \int_0^t e^{-sA}f(s)\,ds,

and hence

y(t) = e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds. QED
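The variation-of-constants formula can be checked against a direct numerical integration of the ODE (a sketch assuming numpy/scipy; the matrix and forcing are chosen arbitrarily for illustration):

```python
# Compare a numerically integrated solution of y' = Ay + f(t) with the formula
# y(T) = e^{TA} y0 + int_0^T e^{(T-s)A} f(s) ds of Theorem 3.1.
import numpy as np
from scipy.integrate import solve_ivp, quad_vec
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
f = lambda t: np.array([np.sin(t), 1.0])
y0 = np.array([1.0, 0.0])
T = 2.0

sol = solve_ivp(lambda t, y: A @ y + f(t), (0, T), y0, rtol=1e-10, atol=1e-12)

integral, _ = quad_vec(lambda s: expm((T - s) * A) @ f(s), 0, T)
y_formula = expm(T * A) @ y0 + integral
assert np.allclose(sol.y[:, -1], y_formula, atol=1e-6)
```

The two values of y(T) agree to the integration tolerance, as the theorem guarantees.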
We now state the first result, in which the equivalence of part a. and part e. is the famous Lyapunov Theorem.

Theorem 3.2 Consider the system of linear differential equations with initial condition y'(t) = Ay(t), y(0) = y_0. The following statements are equivalent:

a. \lim_{t\to\infty} y(t) = 0 for every initial value y_0;
b. \lim_{t\to\infty}\|e^{tA}\| = 0;
c. there exist constants M \ge 1 and \omega > 0 such that \|e^{tA}\| \le Me^{-\omega t} for all t \ge 0;
d. there exists t_0 > 0 with \|e^{t_0A}\| < 1;
e. \mathrm{Re}\,\lambda < 0 for each eigenvalue \lambda of A.

Proof: The implications (c. → b.), (b. → d.) and (b. → a.) are obvious. We now prove (a. → b.), (d. → c.), (a. → e.) and (e. → a.).

(a. → b.): Let (e^{tA})_i be the i-th column of e^{tA}. Taking y_0 = (1, 0, \ldots, 0), the corresponding solution is y(t) = e^{tA}y_0 = (e^{tA})_1, so \lim_{t\to\infty}\|(e^{tA})_1\| = 0. By the same reasoning, we can prove that \lim_{t\to\infty}\|(e^{tA})_i\| = 0 for i = 1, 2, \ldots, n. Hence,

\lim_{t\to\infty}\|e^{tA}\|^2 = \lim_{t\to\infty}\sum_{i=1}^{n}\|(e^{tA})_i\|^2 = 0.
(d. → c.): Let t_0 be a number with \|e^{t_0A}\| = r_0 < 1, and let M = \max_{0\le s\le t_0}\|e^{sA}\|. For any number t > t_0, write t = mt_0 + s with m a nonnegative integer and 0 \le s < t_0. Then

\|e^{tA}\| = \|e^{(mt_0+s)A}\| = \|e^{mt_0A}e^{sA}\| \le \|e^{mt_0A}\|\cdot\|e^{sA}\| \le M\|e^{mt_0A}\| = M\|(e^{t_0A})^m\| \le M\|e^{t_0A}\|^m = Mr_0^m = Me^{-(s/t_0)\ln r_0}\,e^{(\ln r_0/t_0)t}. \quad (3.4)

Let \omega = -(\ln r_0)/t_0. Since r_0 < 1, we have \ln r_0 < 0 and \omega > 0. From (3.4) we obtain \|e^{tA}\| \le M_1 e^{-\omega t}, where M_1 = M\max_{0\le s\le t_0} e^{-(s/t_0)\ln r_0}.
(a. → e.): We show that if every solution tends to 0, then \mathrm{Re}\,\lambda < 0 for each eigenvalue \lambda of A. On the contrary, suppose there exists an eigenvalue \lambda with \mathrm{Re}\,\lambda \ge 0.

(1) If \lambda is a real eigenvalue with corresponding eigenvector x = (x_1, x_2, \ldots, x_n)^{T}, then y(t) = e^{\lambda t}x (with initial value y(0) = x) is a solution of the system; since \lambda \ge 0 and x \neq 0, it does not approach 0 as t \to \infty, a contradiction.

(2) If \lambda = \alpha + i\beta is a complex eigenvalue with eigenvector x, then y(t) = \mathrm{Re}(e^{\lambda t}x) is a solution of the system (with initial value y(0) = \mathrm{Re}\,x) which does not approach 0 as t \to \infty, since \alpha \ge 0; again a contradiction.
(e. → a.): We show that y(t) \to 0 as t \to \infty for all initial values y_0. We will use the following fact: if \alpha < 0 and P_m(t) is a real polynomial, then e^{\alpha t}P_m(t) \to 0 as t \to \infty. Let \mathrm{Re}\,\lambda < 0 for each eigenvalue of the coefficient matrix A, and let:

(1) \lambda_1, \lambda_2, \ldots, \lambda_k be the real eigenvalues of A with multiplicity one, with corresponding eigenvectors x_i;
(2) \eta_1, \ldots, \eta_l be the real eigenvalues of higher multiplicity, with corresponding (generalized) eigenvectors y_i;
(3) \alpha_i \pm i\beta_i, i = 1, \ldots, m, be the complex eigenvalues, with corresponding eigenvectors z_i.

Then every solution has the form

y(t) = \sum_{i=1}^{k} a_i e^{\lambda_i t}x_i + \sum_{i=1}^{l} e^{\eta_i t}P_{n_i}(t)y_i + \sum_{i=1}^{m} b_i e^{\alpha_i t}(\cos\beta_i t\,\mathrm{Re}\,z_i - \sin\beta_i t\,\mathrm{Im}\,z_i) + \sum_{i=1}^{m} c_i e^{\alpha_i t}(\cos\beta_i t\,\mathrm{Im}\,z_i + \sin\beta_i t\,\mathrm{Re}\,z_i)
= I_1 + I_2 + I_3 + I_4,

where the P_{n_i}(t) are polynomials of degree n_i. We need to consider each of I_1, I_2, I_3, I_4:

I_1 = \sum_{i=1}^{k} a_i e^{\lambda_i t}x_i \to 0 as t \to \infty, since \lambda_i < 0 for each i = 1, 2, \ldots, k;
I_2 = \sum_{i=1}^{l} e^{\eta_i t}P_{n_i}(t)y_i \to 0 as t \to \infty, using the above-mentioned fact;
I_3 = \sum_{i=1}^{m} b_i e^{\alpha_i t}(\cos\beta_i t\,\mathrm{Re}\,z_i - \sin\beta_i t\,\mathrm{Im}\,z_i) \to 0 as t \to \infty, since \alpha_i < 0 for each i;
similarly, I_4 = \sum_{i=1}^{m} c_i e^{\alpha_i t}(\cos\beta_i t\,\mathrm{Im}\,z_i + \sin\beta_i t\,\mathrm{Re}\,z_i) \to 0 as t \to \infty.

Hence y(t) = I_1 + I_2 + I_3 + I_4 \to 0 as t \to \infty. QED
As a corollary of the above theorem, if \mathrm{Re}(\lambda) < 0 for each eigenvalue of A and the function f(t) is bounded, then the solution of the non-homogeneous equation is bounded. The following theorem shows the equivalence of the two properties.

Theorem 3.3 Consider the system of linear differential equations y'(t) = Ay(t) + f(t), y(0) = y_0, whose solution is

y(t) = e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds.

The following statements are equivalent:

a. For each bounded function f(t) (i.e., \|f(t)\| < C for all t), the solution y(t) of the non-homogeneous equation is bounded;
b. \mathrm{Re}\,\lambda < 0 for each eigenvalue \lambda of A.
Proof (b. → a.): Suppose \mathrm{Re}\,\lambda < 0 for all eigenvalues \lambda of A. Then, by Theorem 3.2, there exist M, \omega > 0 with \|e^{tA}\| \le Me^{-\omega t}. Hence

\|y(t)\| = \Big\|e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds\Big\|
\le \|e^{tA}y_0\| + \Big\|\int_0^t e^{(t-s)A}f(s)\,ds\Big\|
\le \|e^{tA}\|\cdot\|y_0\| + \int_0^t \|e^{(t-s)A}\|\cdot\|f(s)\|\,ds
\le Me^{-\omega t}\|y_0\| + \int_0^t Me^{-\omega(t-s)}C\,ds
= Me^{-\omega t}\|y_0\| + MC\,\frac{1 - e^{-\omega t}}{\omega}
\le M\|y_0\| + \frac{MC}{\omega}.

Hence, y(t) is bounded.

(a. → b.): Suppose for each bounded function f(t) the solution y(t) of the non-homogeneous equation is bounded. On the contrary, assume there is an eigenvalue \lambda with \mathrm{Re}\,\lambda \ge 0 and corresponding eigenvector x. Take y_0 = 0 and the bounded function f(t) \equiv x. Since e^{(t-s)A}x = e^{(t-s)\lambda}x, we have (for \lambda \neq 0)

y(t) = \int_0^t e^{(t-s)A}x\,ds = \int_0^t e^{(t-s)\lambda}x\,ds = \Big(\int_0^t e^{(t-s)\lambda}\,ds\Big)x = \frac{e^{t\lambda}-1}{\lambda}\,x.

If \mathrm{Re}\,\lambda > 0, then

\|y(t)\| = \frac{|e^{t\lambda}-1|}{|\lambda|}\|x\| \ge \frac{|e^{t\lambda}|-1}{|\lambda|}\|x\| = \frac{e^{(\mathrm{Re}\,\lambda)t}-1}{|\lambda|}\|x\| \to \infty,

so y(t) is unbounded. If \lambda = i\beta is purely imaginary (\beta \neq 0), take instead the bounded function f(t) = e^{i\beta t}x; then y(t) = te^{i\beta t}x, which is again unbounded. Finally, if \lambda = 0, then f(t) \equiv x gives y(t) = tx, unbounded. In every case we contradict a., and the proof is complete. QED
We next characterize periodicity of the solution.

Theorem 3.4 Suppose f(t) is continuous and periodic with period 1, and let

y(t) = e^{At}y_0 + \int_0^t e^{A(t-s)}f(s)\,ds

be the solution of (3.2). If y(1) = y(0), then y(t) is periodic with period 1.

Proof: We have

y(t+1) = e^{A(t+1)}y_0 + \int_0^{t+1} e^{A(t+1-s)}f(s)\,ds
= e^{At}e^{A}y_0 + \int_0^{1} e^{A(t+1-s)}f(s)\,ds + \int_1^{1+t} e^{A(t+1-s)}f(s)\,ds
= e^{At}e^{A}y_0 + e^{At}\int_0^{1} e^{A(1-s)}f(s)\,ds + \int_0^{t} e^{A(t-s')}f(s'+1)\,ds' \quad (\text{using } s = s'+1)
= e^{At}\Big(e^{A}y_0 + \int_0^{1} e^{A(1-s)}f(s)\,ds\Big) + \int_0^{t} e^{A(t-s')}f(s')\,ds' \quad (\text{using } f(s'+1) = f(s'))
= e^{At}y(1) + \int_0^{t} e^{A(t-s')}f(s')\,ds' = e^{At}y(0) + \int_0^{t} e^{A(t-s)}f(s)\,ds = y(t). QED
We can now state the theorem about the periodicity of solutions of the non-homogeneous equation.

Theorem 3.5 The following statements are equivalent:

a) For each continuous function f(t) with period 1, there exists a unique initial value y_0 such that the solution of equation (3.2),

y'(t) = Ay(t) + f(t), \quad y(0) = y_0,

is 1-periodic;
b) the number 1 is not an eigenvalue of e^{A};
c) the numbers 2\pi n i (n \in \mathbb{Z}) are not eigenvalues of A.

Proof: The equivalence between b) and c) is actually the content of the Spectral Mapping Theorem, with f(z) = e^{z}. We prove b) ⇒ a). Since 1 is not an eigenvalue of e^{A}, the matrix I - e^{A} is invertible. Take

y_0 = (I - e^{A})^{-1}\int_0^1 e^{A(1-s)}f(s)\,ds

and let

y(t) = e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds.
We have

y(1) = e^{A}y_0 + \int_0^1 e^{A(1-s)}f(s)\,ds
= e^{A}(I - e^{A})^{-1}\int_0^1 e^{A(1-s)}f(s)\,ds + \int_0^1 e^{A(1-s)}f(s)\,ds
= \big(e^{A}(I - e^{A})^{-1} + I\big)\int_0^1 e^{A(1-s)}f(s)\,ds
= (I - e^{A})^{-1}\int_0^1 e^{A(1-s)}f(s)\,ds = y_0.

Hence, by Theorem 3.4, y(t) is 1-periodic. If there were now another initial value y_2 \neq y_0 whose solution

y_2(t) = e^{tA}y_2 + \int_0^t e^{(t-s)A}f(s)\,ds

is also 1-periodic, then y_2(1) = y_2 would give e^{A}y_2 + \int_0^1 e^{A(1-s)}f(s)\,ds = y_2. Subtracting the corresponding identity for y_0, we get

e^{A}(y_0 - y_2) = y_0 - y_2 \neq 0,

so 1 is an eigenvalue of e^{A}, a contradiction. Hence y_0 is unique.
For a) ⇒ b), let

y(t) = e^{tA}y_0 + \int_0^t e^{(t-s)A}f(s)\,ds

be the unique 1-periodic solution of differential equation (3.2). We use contradiction: suppose 1 is an eigenvalue of e^{A} with eigenvector x_0 \neq 0, and consider

y_2(t) = e^{At}(y_0 + x_0) + \int_0^t e^{A(t-s)}f(s)\,ds.

Then

y_2(1) = e^{A}(y_0 + x_0) + \int_0^1 e^{(1-s)A}f(s)\,ds
= e^{A}x_0 + \Big(e^{A}y_0 + \int_0^1 e^{(1-s)A}f(s)\,ds\Big)
= x_0 + y_0
= y_2(0).

So, by Theorem 3.4, y_2(t) is also 1-periodic, and that is a contradiction to the uniqueness of the 1-periodic solution. QED
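The construction of the periodic initial value in Theorem 3.5 can be carried out numerically (a sketch assuming numpy/scipy; the matrix and forcing below are arbitrary illustrations):

```python
# Build y0 = (I - e^A)^{-1} int_0^1 e^{A(1-s)} f(s) ds and check y(1) = y(0).
import numpy as np
from scipy.integrate import quad_vec, solve_ivp
from scipy.linalg import expm

A = np.array([[-1.0, 3.0], [0.0, -2.0]])  # no eigenvalue of the form 2*pi*n*i
f = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])  # 1-periodic

integral, _ = quad_vec(lambda s: expm((1 - s) * A) @ f(s), 0, 1)
y0 = np.linalg.solve(np.eye(2) - expm(A), integral)

# The solution starting at y0 returns to y0 after one period.
sol = solve_ivp(lambda t, y: A @ y + f(t), (0, 1), y0, rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], y0, atol=1e-6)
```

By Theorem 3.4, y(1) = y(0) is exactly what makes the whole trajectory 1-periodic.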
In physics and biology we sometimes consider the solutions on the whole line \mathbb{R}. Such a solution, called a complete trajectory, satisfies

y(t) = e^{tA}y(0) + \int_0^t e^{(t-s)A}f(s)\,ds,

where -\infty < t < \infty. Note that, if t < 0, then we define \int_0^t f(s)\,ds := -\int_t^0 f(s)\,ds. It is not hard to see that a complete trajectory u(t) satisfies

u(t) = e^{(t-s)A}u(s) + \int_s^t e^{(t-\tau)A}f(\tau)\,d\tau

for all s \le t.
Hilbert Spaces
In this chapter we try to extend some results of Chapter 3 to Hilbert space. First, we recall the basic definitions.

Definition 4.1 An inner product on a vector space X over F (= \mathbb{R} or \mathbb{C}) is a function u : X \times X \to F such that for all \alpha, \beta \in F and x, y, z \in X the following are satisfied:

(a) u(\alpha x + \beta y, z) = \alpha u(x, z) + \beta u(y, z);
(b) u(x, \alpha y + \beta z) = \bar{\alpha}u(x, y) + \bar{\beta}u(x, z);
(c) u(x, x) \ge 0;
(d) u(x, y) = \overline{u(y, x)}.

Definition 4.2 A Hilbert space is a vector space H over F together with an inner product \langle\cdot,\cdot\rangle such that H is complete with respect to the norm \|x\| = \sqrt{\langle x, x\rangle}.

Example 4.3 \mathbb{R}^n with the dot product

x \cdot y = x_1y_1 + x_2y_2 + \cdots + x_ny_n

is a Hilbert space.
Example 4.4 Let L^2([a, b]) be the space of all complex-valued square-integrable functions on [a, b], with

\langle f, g\rangle = \int_a^b f(t)\overline{g(t)}\,dt.

Then this defines an inner product on L^2([a, b]), and L^2([a, b]) is a Hilbert space with the norm given by

\|f\|^2 = \int_a^b |f(t)|^2\,dt.
Theorem (Cauchy–Schwarz inequality) If \langle\cdot,\cdot\rangle is an inner product on a vector space X, then

|\langle x, y\rangle| \le \|x\|\cdot\|y\|

for all x and y in X. Moreover, equality occurs if and only if there are scalars \alpha and \beta, not both zero, such that \alpha x + \beta y = 0.
A map A : H \to K between Hilbert spaces is linear if

a) A(x + y) = Ax + Ay and
b) A(\alpha x) = \alpha Ax

for all x, y \in H and scalars \alpha. A is continuous at x if Ax_n \to Ax whenever x_n \to x.

Proposition 4.9 ([2], Proposition II.1.1) Let H and K be Hilbert spaces and A : H \to K a linear transformation. The following statements are equivalent:

(a) A is continuous.
(b) A is continuous at 0.
(c) A is continuous at some point.
(d) There is a constant c > 0 such that \|Ah\| \le c\|h\| for all h in H.

We say in this case that A is bounded, and the norm of A is

\|A\| = \sup\{\|Ah\|/\|h\| : h \neq 0\}.
The set of bounded operators on H is denoted by B(H). Operator-valued functions F : \mathbb{R} \to B(H) and their derivatives are defined as in \mathbb{R}^n; the continuity and the product rule in Theorem 2.7 and in Theorem 2.9 also hold in Hilbert space.

The resolvent set of a bounded operator A, denoted by \rho(A), is the set of all complex numbers \lambda such that (\lambda I - A) has an inverse whose domain is the whole space H; the spectrum of A is

\sigma(A) = \mathbb{C}\setminus\rho(A).

It is not hard to see that \lambda is in the resolvent set if and only if (\lambda I - A) is bijective. Recall that

\ker(A) = \{x \in H : Ax = 0\}.

It is easy to see that 0 \in \ker(A) for every operator A. If \ker(A) = \{0\}, then A is injective. The point spectrum of A is

\sigma_p(A) = \{\lambda \in \mathbb{C} : \ker(A - \lambda) \neq (0)\}.
As in the matrix case, elements of \sigma_p(A) are called eigenvalues.
Recall that if \|A\| < 1, then

(I - A)^{-1} = \sum_{n=0}^{\infty} A^{n}.

If \lambda now is a complex number with |\lambda| > \|A\|, then we have \|A/\lambda\| < 1 and hence I - \frac{A}{\lambda} is invertible. Thus \lambda - A = \lambda\big(I - \frac{A}{\lambda}\big) is invertible and

(\lambda - A)^{-1} = \frac{1}{\lambda}\Big(I - \frac{A}{\lambda}\Big)^{-1} = \frac{1}{\lambda}\sum_{n=0}^{\infty}\Big(\frac{A}{\lambda}\Big)^{n} = \sum_{n=0}^{\infty}\frac{A^{n}}{\lambda^{n+1}}.

Hence, we have:

Theorem 4.14 The spectrum of a bounded operator A lies inside the disk with center 0 and radius \|A\|.
Definition 4.15 Let C be a closed contour which contains all of the spectrum of A in its interior, and let f be analytic inside and on C. Define

f(A) = \frac{1}{2\pi i}\oint_C f(\lambda)(\lambda - A)^{-1}\,d\lambda.

As with matrices in \mathbb{R}^n, we can show that these functions have all the properties in Theorem 2.15 and Theorem 2.16; in particular,

e^{A} = \sum_{n=0}^{\infty}\frac{A^{n}}{n!}.
Theorem 4.16 (Spectral Mapping Theorem) \sigma(f(A)) = f(\sigma(A)).

Proof: If \lambda \in \sigma(A), define

g(z) := \begin{cases} \dfrac{f(z) - f(\lambda)}{z - \lambda} & \text{if } z \neq \lambda; \\[4pt] f'(\lambda) & \text{if } z = \lambda. \end{cases}

Then g is analytic and f(A) - f(\lambda) = (A - \lambda)g(A) = g(A)(A - \lambda). If it were the case that f(\lambda) \notin \sigma(f(A)), then (A - \lambda) would be invertible with inverse g(A)(f(A) - f(\lambda))^{-1}, since

(A - \lambda)\big[g(A)(f(A) - f(\lambda))^{-1}\big] = [f(A) - f(\lambda)](f(A) - f(\lambda))^{-1} = I

and

\big[g(A)(f(A) - f(\lambda))^{-1}\big](A - \lambda) = (f(A) - f(\lambda))^{-1}[f(A) - f(\lambda)] = I.

This contradicts \lambda \in \sigma(A); hence f(\lambda) \in \sigma(f(A)). The converse part is proved the same way as in Theorem 2.19, and the theorem is proved. QED
Next we introduce the Spectral Projectors in Hilbert space. Let σ ( A) be the spectrum
1 −1
P1 := ∫ (λ − A) dλ
2πi C1
and
1 −1
P2 := ∫ (λ − A) dλ .
2πi C2
Theorem 4.17 ([6], Theorem 48.2) P₁ and P₂ are the orthogonal projectors in H, that means,
a) P₁² = P₁ and P₂² = P₂;
b) P₁P₂ = P₂P₁ = 0;
c) P₁ + P₂ = I.
Moreover, writing A₁ and A₂ for the parts of A in the subspaces P₁H and P₂H, we have
a) A₁ + A₂ = A and
b) f(A₁) + f(A₂) = f(A);
c) σ(A₁) = σ₁ and σ(A₂) = σ₂; and
d) σ(f(A₁)) = f(σ₁) and σ(f(A₂)) = f(σ₂).
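The contour-integral definition of P₁ and P₂ can be evaluated numerically in the matrix case. A minimal sketch, assuming NumPy; the diagonal matrix, contour centers, and radii are illustrative choices, and the circles are discretized with the trapezoidal rule:

```python
import numpy as np

# Diagonal illustration with spectrum split as sigma1 = {-2}, sigma2 = {3}.
A = np.diag([-2.0, 3.0])
I = np.eye(2)

def riesz_projector(A, center, radius, n=400):
    """(1/2*pi*i) * integral of (lambda - A)^{-1} d(lambda) over the circle
    |lambda - center| = radius, approximated by the trapezoidal rule."""
    P = np.zeros(A.shape, dtype=complex)
    for k in range(n):
        lam = center + radius * np.exp(2j * np.pi * k / n)
        dlam = 1j * (lam - center) * (2 * np.pi / n)  # d(lambda) along the circle
        P += np.linalg.solve(lam * I - A, I.astype(complex)) * dlam
    return P / (2j * np.pi)

P1 = riesz_projector(A, center=-2.0, radius=1.0)  # C1 encircles sigma1
P2 = riesz_projector(A, center=3.0, radius=1.0)   # C2 encircles sigma2

assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # projectors
assert np.allclose(P1 @ P2, 0) and np.allclose(P2 @ P1, 0)    # P1 P2 = P2 P1 = 0
assert np.allclose(P1 + P2, I)                                 # P1 + P2 = I
```

Because the integrand is analytic on the circle, the trapezoidal rule converges very rapidly, so a few hundred quadrature points already reproduce a), b), and c) of Theorem 4.17 to machine precision.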
Consider the homogeneous initial value problem

y′(t) = Ay(t), y(0) = y₀,    (4.1)

and the inhomogeneous initial value problem

y′(t) = Ay(t) + f(t), y(0) = y₀.    (4.2)

We have the existence and uniqueness of the solutions of the above two equations as the following.

Theorem 4.18 The unique solution of equation (4.1) is

y(t) = e^{tA} y₀,

and the unique solution of equation (4.2) is

y(t) = e^{tA} y₀ + ∫_0^t e^{(t−s)A} f(s) ds.
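The variation-of-constants formula of Theorem 4.18 can be verified numerically in the matrix case. A minimal sketch, assuming NumPy and SciPy; the matrix, initial value, and forcing term are illustrative choices, and the integral is approximated by the trapezoidal rule:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])
f = lambda t: np.array([np.sin(t), 1.0])  # an arbitrary forcing term

def y(t, n=1000):
    """Variation-of-constants formula of Theorem 4.18; the integral
    int_0^t e^{(t-s)A} f(s) ds is approximated by the trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    vals = np.array([expm((t - si) * A) @ f(si) for si in s])
    integral = np.zeros(2)
    for i in range(n - 1):
        integral += 0.5 * (vals[i] + vals[i + 1]) * (s[i + 1] - s[i])
    return expm(t * A) @ y0 + integral

# The formula satisfies y'(t) = A y(t) + f(t); check at t = 1 with a
# centered difference quotient.
t, h = 1.0, 1e-4
lhs = (y(t + h) - y(t - h)) / (2 * h)
rhs = A @ y(t) + f(t)
assert np.allclose(lhs, rhs, atol=1e-3)
```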
We now want to extend some results in Chapter 3 to Hilbert space. The problem we need to overcome is that in Hilbert space the spectrum and the set of eigenvalues are not the same, and we do not have the concept of a determinant. If the proof of a result does not use eigenvalues or determinants, then we can extend that result to Hilbert space, as in the following.
Theorem 4.19 Consider the system of linear differential equations with the initial condition:

y′(t) = Ay(t), y(0) = y₀.

If Re λ < 0 for every λ ∈ σ(A), then
a. there exist constants M > 0 and ω > 0 such that ||e^{tA}|| ≤ Me^{−ωt} for all t ≥ 0; and
b. lim_{t→∞} ||e^{tA}|| = 0.
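The decay in Theorem 4.19 can be observed numerically for a matrix whose spectrum lies in the open left half plane. A minimal sketch, assuming NumPy and SciPy; the matrix is an illustrative choice:

```python
import numpy as np
from scipy.linalg import expm

# sigma(A) = {-1, -2}: both eigenvalues have negative real part.
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])
assert all(ev.real < 0 for ev in np.linalg.eigvals(A))

# ||e^{tA}|| may grow transiently (the off-diagonal 5 causes an initial hump),
# but it decays like M e^{-omega t} and tends to 0 as t -> infinity.
norms = [np.linalg.norm(expm(t * A), 2) for t in (0.0, 1.0, 5.0, 20.0)]
assert norms[0] == 1.0    # e^{0 A} = I
assert norms[-1] < 1e-6   # essentially 0 for large t
```

The transient growth is why the bound needs the constant M: the norm is not monotone in t even though it eventually decays exponentially.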
Theorem 4.20 The following statements are equivalent:
a) For each continuous function f(t) with period 1, there exists a unique initial value y₀ such that the solution of equation (4.2)

y′(t) = Ay(t) + f(t), y(0) = y₀

is 1-periodic;
b) 1 is in the resolvent set of e^A;
c) 2kπi is in the resolvent set of A for every integer k.
Proof: The equivalence between b) and c) is actually the content of the Spectral Mapping Theorem, when f(z) = e^z, and the proof of "b) ⇒ a)" is the same as in Theorem 3.5. We now prove "a) ⇒ b)". From the same reasoning as in the proof of Theorem 3.5, we now need only to show that (I − e^A) is surjective, i.e., if u is any vector in H, there is a vector v in H with (I − e^A)v = u. Let y(t) be the solution given by a) corresponding to f(t) = e^{At}u. We have

y(t) = e^{At} y₀ + ∫_0^t e^{A(t−s)} e^{As} u ds = e^{At} y₀ + t e^{At} u.

Since y(t) is 1-periodic, y(1) = y(0), i.e., e^A y₀ + e^A u = y₀. Hence

u = (I − e^A) e^{−A} y₀,

so (I − e^A) is surjective and the theorem is proved.
QED
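In the matrix case, the unique periodic initial value of Theorem 4.20 can be computed explicitly: imposing y(1) = y(0) on the variation-of-constants formula gives y₀ = (I − e^A)^{-1} ∫_0^1 e^{(1−s)A} f(s) ds. A minimal sketch, assuming NumPy and SciPy; the matrix and the 1-periodic forcing are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
f = lambda t: np.array([np.cos(2 * np.pi * t), 0.0])  # forcing with period 1

# Condition b): 1 must lie in the resolvent set of e^A.
assert all(not np.isclose(ev, 1.0) for ev in np.linalg.eigvals(expm(A)))

def conv_integral(t, n=2000):
    """Trapezoidal approximation of int_0^t e^{(t-s)A} f(s) ds."""
    s = np.linspace(0.0, t, n)
    vals = np.array([expm((t - si) * A) @ f(si) for si in s])
    out = np.zeros(2)
    for i in range(n - 1):
        out += 0.5 * (vals[i] + vals[i + 1]) * (s[i + 1] - s[i])
    return out

# Imposing y(1) = y(0) on y(t) = e^{tA} y0 + int_0^t e^{(t-s)A} f(s) ds gives
# y0 = (I - e^A)^{-1} int_0^1 e^{(1-s)A} f(s) ds, the periodic initial value.
b = conv_integral(1.0)
y0 = np.linalg.solve(np.eye(2) - expm(A), b)
assert np.allclose(expm(A) @ y0 + b, y0)  # y(1) = y(0)
```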
A 1-periodic function f(t) given for t ≥ 0 can be extended to the whole line by defining f(−t) = f(t); this extension is called the periodic extension. It is easy to see that the 1-periodic solution of Theorem 4.20 then extends to a 1-periodic complete trajectory. In particular, the following statements are equivalent:
a) for each continuous 1-periodic function f(t), the equation y′(t) = Ay(t) + f(t) has a unique 1-periodic complete trajectory;
b) 1 is in the resolvent set of e^A.
Finally, we have the result about the existence and uniqueness of bounded complete trajectories.

Theorem 4.21 The following statements are equivalent:
a) For each bounded continuous function f(t), the equation y′(t) = Ay(t) + f(t) has a unique bounded complete trajectory;
b) The imaginary axis is a subset of the resolvent set of A;
c) The unit circle is a subset of the resolvent set of e^A;
d) The unit circle is a subset of the resolvent set of e^{tA} for each positive number t.
Proof: The equivalence among b), c), and d) is actually the content of the Spectral Mapping Theorem.

"b) ⇒ a)": Let σ₁ be the subset of σ(A) which lies to the left of the imaginary axis, and σ₂ be the subset of σ(A) which lies to the right of the imaginary axis, with A₁, A₂ and P₁, P₂ as in Theorem 4.17. Then Re λ < 0 for λ ∈ σ(A₁) and Re λ < 0 for λ ∈ σ(−A₂).
Now let f(t) be a bounded function from R into the Hilbert space H, say ||f(t)|| ≤ C for all t. We define the function

y(t) = ∫_{−∞}^t e^{A₁(t−s)} f(s) ds − ∫_t^∞ e^{A₂(t−s)} f(s) ds.

For s ≤ t we have

y(t) − e^{A(t−s)} y(s) = (∫_{−∞}^t e^{A₁(t−τ)} f(τ) dτ − ∫_t^∞ e^{A₂(t−τ)} f(τ) dτ)
  − (e^{A(t−s)} ∫_{−∞}^s e^{A₁(s−τ)} f(τ) dτ − e^{A(t−s)} ∫_s^∞ e^{A₂(s−τ)} f(τ) dτ)
= (∫_{−∞}^t e^{A₁(t−τ)} f(τ) dτ − e^{A(t−s)} ∫_{−∞}^s e^{A₁(s−τ)} f(τ) dτ)
  − (∫_t^∞ e^{A₂(t−τ)} f(τ) dτ − e^{A(t−s)} ∫_s^∞ e^{A₂(s−τ)} f(τ) dτ)
= ∫_s^t e^{A₁(t−τ)} f(τ) dτ + ∫_s^t e^{A₂(t−τ)} f(τ) dτ
= ∫_s^t e^{A(t−τ)} f(τ) dτ.

Hence, y(t) = e^{A(t−s)} y(s) + ∫_s^t e^{A(t−τ)} f(τ) dτ and therefore y(t) is a complete trajectory.

To prove y(t) is bounded, note that Re λ < 0 for λ ∈ σ(A₁) and for λ ∈ σ(−A₂); using Theorem 4.19, we have ||e^{A₁t}|| < Me^{−ωt} and ||e^{−A₂t}|| < Me^{−ωt} for some constants M, ω > 0 and all t ≥ 0. Therefore

||y(t)|| = ||∫_{−∞}^t e^{A₁(t−s)} f(s) ds − ∫_t^∞ e^{A₂(t−s)} f(s) ds||
≤ ||∫_{−∞}^t e^{A₁(t−s)} f(s) ds|| + ||∫_t^∞ e^{A₂(t−s)} f(s) ds||
≤ ∫_{−∞}^t ||e^{A₁(t−s)}|| · ||f(s)|| ds + ∫_t^∞ ||e^{A₂(t−s)}|| · ||f(s)|| ds
≤ ∫_{−∞}^t Me^{−ω(t−s)} C ds + ∫_t^∞ Me^{−ω(s−t)} C ds
= MCe^{−ωt} ∫_{−∞}^t e^{ωs} ds + MCe^{ωt} ∫_t^∞ e^{−ωs} ds
= MCe^{−ωt} · (e^{ωt}/ω) + MCe^{ωt} · (e^{−ωt}/ω)
= 2MC/ω.
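The bound 2MC/ω can be checked numerically for a simple hyperbolic example. A minimal sketch, assuming NumPy; the diagonal blocks A₁ = −1, A₂ = 2 and the trigonometric forcing are illustrative choices, with the improper integrals truncated at an exponentially negligible tail:

```python
import numpy as np

# Hyperbolic scalar blocks: A = diag(-1, 2), sigma1 = {-1}, sigma2 = {2}.
# Here M = 1, omega = 1, and the forcing below is bounded by C = 1.
f1, f2 = np.sin, np.cos
M, omega, C = 1.0, 1.0, 1.0

def traj(t, cutoff=40.0, n=4000):
    """Truncated version of
    y(t) = int_{-inf}^t e^{A1(t-s)} f(s) ds - int_t^inf e^{A2(t-s)} f(s) ds
    for the diagonal blocks A1 = -1 and A2 = 2 (trapezoidal rule; the tails
    beyond the cutoff are exponentially small)."""
    s1 = np.linspace(t - cutoff, t, n)
    s2 = np.linspace(t, t + cutoff, n)
    g1 = np.exp(-(t - s1)) * f1(s1)       # e^{A1(t-s)} f1(s), decaying for s <= t
    g2 = np.exp(2.0 * (t - s2)) * f2(s2)  # e^{A2(t-s)} f2(s), decaying for s >= t
    y1 = np.sum((g1[:-1] + g1[1:]) * 0.5 * np.diff(s1))
    y2 = np.sum((g2[:-1] + g2[1:]) * 0.5 * np.diff(s2))
    return np.array([y1, -y2])

# The complete trajectory stays within the bound 2MC/omega at every sample.
bound = 2 * M * C / omega
assert all(np.linalg.norm(traj(t)) <= bound for t in np.linspace(-5.0, 5.0, 21))
```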
"a) ⇒ c)": Assume that for each bounded function f(t), there exists a unique bounded complete trajectory, that is, a unique bounded function y(t) satisfying

y(t) = e^{A(t−s)} y(s) + ∫_s^t e^{A(t−τ)} f(τ) dτ

for all s ≤ t; we call y(t) the bounded trajectory corresponding to f(t). If we put v(t) = y(t + 1), then it is not hard to see that v(t) is the bounded trajectory corresponding to f(t + 1). We must show that e^{iα} lies in the resolvent set of e^A for every real number α. Since

e^{iα} − e^A = e^{iα}(I − e^{A−iα}),

it is enough to show that 1 is in the resolvent set of e^{A−iα}. To this end, consider the equation

v′(t) = (A − iα)v(t) + g(t),    (4.3)

where g(t) is a bounded function, and let y(t) be the bounded complete trajectory of the equation y′(t) = Ay(t) + f(t) with f(t) = e^{iαt} g(t). We show v(t) = e^{−iαt} y(t) is the bounded complete trajectory of equation (4.3). Because v(t) = e^{−iαt} y(t), we have

v′(t) = −iα e^{−iαt} y(t) + e^{−iαt}(Ay(t) + f(t))
= (A − iα)v(t) + f(t)e^{−iαt}
= (A − iα)v(t) + g(t).

That means v(t) is the bounded complete trajectory of equation (4.3). By the above, condition a) also holds for the operator A − iα, and arguing as in Theorem 4.20 yields 1 ∈ ρ(e^{A−iα}); hence e^{iα} ∈ ρ(e^A), and the theorem is proved.
QED
For the equation

y′(t) = Ay(t) + f(t), y(0) = y₀,

it is very interesting to find conditions on the operator A so that we can check the qualitative behavior of solutions. These results can be applied in other areas as well.
[1] James W. Brown and Ruel V. Churchill: Complex Variables and Applications.
[4] Ron Larson, Bruce H. Edwards, and David C. Falvo: Elementary Linear Algebra.
[7] R. Kent Nagle, Edward B. Saff, and Arthur David Snider: Fundamentals of Differential Equations.
[8] J. Pruss: On the spectrum of C₀-semigroups. Trans. Amer. Math. Soc. 284 (1984), 847–857.