Managing Editor:
M. HAZEWINKEL
Centre for Mathematics and Computer Science, Amsterdam, The Netherlands

Editorial Board:
Richard E. Bellman
Department of Electrical Engineering, University of Southern California, Los Angeles, U.S.A.;
Center for Applied Mathematics, The University of Georgia, Athens, Georgia, U.S.A.
and
Robert S. Roth
Boston, U.S.A.
CONTENTS

EDITOR'S PREFACE ix
PREFACE xi

Chapter
1. BASIC CONCEPTS 1
   Introduction 1
   Integral Domains, Fields and Vector Spaces 2
   Subspaces, Bases and Inner Products 4
   Spaces, Subspaces and Approximation 7
   The Continuous Function 8
   Polynomial Subspaces 9
   Spaces Generated by Differential Equations 10
   The Piecewise Linear Function 13
   Discussion 15
   Bibliography and Comments 16

2. POLYNOMIAL APPROXIMATION 17
   Introduction 17
   Piecewise Linear Functions 22
   Curve Fitting by Straight Lines 24
   A One Dimensional Process in Dynamic Programming 26
   The Functional Equation 27
   The Principle of Optimality 29
   A Direct Derivation 29
   Curve Fitting by Segmented Straight Lines 30
   A Dynamic Programming Approach 31
   A Computational Procedure 32
   Three Dimensional Polygonal Approximation 33
   The Orthogonal Polynomials 35
   The Approximation Technique 36
   Discussion 37
   Bibliography and Comments 38
3. POLYNOMIAL SPLINES 40
   Introduction 40
   The Cubic Spline I 40
   Construction of the Cubic Spline 41
   Existence and Uniqueness 46
   A Computational Algorithm - Potter's Method 48
   Splines via Dynamic Programming 53
   Derivation of Splines by Dynamic Programming 54
   Equivalence of the Recursive Relations Obtained by Dynamic Programming and the Usual Results 59
   Cardinal Splines 61
   Polynomial Splines 63
   Generalized Splines 64
   Mean Square Spline Approximation 65
   The Cubic Spline II 66
   The Minimization Procedure 67
   The Functional Equation 68
   Recursion Relations 69
   Bibliography and Comments 69

4. QUASILINEARIZATION 71
   Introduction 71
   Quasilinearization I 71
   The Newton Raphson Method 72
   Quasilinearization II 76
   Existence 77
   Convergence 79
   An Example, Parameter Identification 82
   Unknown Initial Conditions 85
   Damped Oscillations 88
   Segmental Differential Approximation 92
   Differential Systems with Time Varying Coefficients 94
   A Method of Solution 95
   An Interesting Case 98
   Discussion 100
   Bibliography and Comments 101
Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question.
    'The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.

It isn't that they can't see the solution. It is that they can't see the problem.
    G.K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.
As the authors stress in their preface there are two kinds of approximations
involved when doing mathematical investigations of real world phenomena. First
there is the idealized mathematical model in its full complexity and then there are
the approximations one must make in order to be able to do something with it.
And, of course, there are the inaccuracies adhering to the determination of whatever measured constants are involved. Thus,

"In every mathematical investigation the question will arise whether we can apply our results to the real world.... Consequently, the question arises of choosing those properties which are not very sensitive to small changes in the model and thus may be viewed as properties of the real process."

V.I. Arnol'd, 1978
Such are the considerations which lie at the basis of several fields in mathematics:
deformation theory, perturbation theory and approximation theory. In the setting
above the task of approximation theory becomes that of finding that approximate
model which precisely captures those properties which are not very sensitive to
small changes. This also helps to make it clear that much more is involved than neglecting small terms or doing an expansion in some small parameter (or throwing away all nonlinear terms). Often indeed there will not even be an obvious small quantity.
The senior author of this volume, the late Richard Bellman, has spent much time on thinking about the why and how of approximation and has pioneered several new methods in the field. The book can be accurately described as a survey of his thoughts on the topic during the last 25 years. Much of the material dates from the middle and late seventies. Here this material is presented in a coherent fashion by Bellman and Roth without losing the typical inventive and stimulating character of much of Bellman's writing.
u(x,0) = h(x),
The same approximation techniques can be used
to solve higher order systems and using blends
of this with quasi linearization, we can consid-
er solving systems with partial information.
Chapter 7 discusses exponential approximation, where we seek approximations of the form,

u(t) = Σ_{n=1}^{N} a_n e^{λ_n t} .
These techniques are used to convert the Fredholm integral equation,

u(t) = f(t) + ∫_0^1 k(|t − s|) u(s) ds ,

into a set of differential equations representing an interesting two point boundary value problem.
xiv PREFACE
u(0) = c.
Using the ideas of the maximum operation, we can characterize the solution of the Riccati equation by means of upper and lower bounds. We can approach the solution of Eq.(1) yet another way, via quasilinearization, and take full advantage of quadratic convergence in the numerical solution.
The solution of approximate equations is discussed in chapter 9. The idea here is to replace a difficult equation whose solution is unknown with an approximate equation whose solution is known exactly. These concepts are clearly based on classical techniques in applied mathematics, as we point out by first considering perturbation methods in solving nonlinear differential equations. Here we have a precise way of defining a sequence of linear differential equations whose solutions are well known. Through the perturbation series, the nonlinear equation is systematically approximated by a simpler set of linear equations.
Modern approximation methods such as the polynomial spline afford us a new freedom in selecting our problems, and the theory of dynamic programming, for example, allows us to bring together many disciplines in defining the approximate equations themselves.
The goal of this chapter is to acquaint the reader with several of these techniques which may have immediate use in solving complex problems in applied mathematics.
1.1 INTRODUCTION
A successful approximation has the following
three properties:
(7) Unity: 1α = α.

The set of all linear combinations

c_1 α_1 + c_2 α_2 + ... + c_m α_m

is a subspace of V. This is because of the identities:
(a_1 α_1 + ... + a_m α_m) + (b_1 α_1 + ... + b_m α_m) = (a_1 + b_1) α_1 + ... + (a_m + b_m) α_m ,

a (b_1 α_1 + ... + b_m α_m) = (a b_1) α_1 + ... + (a b_m) α_m ,

showing that the distributive law holds for all vectors in the subspace.
We can also prove that the intersection A ∩ B of any two subspaces of a vector space V is itself a subspace of V.

We now make the observation that if V is a vector space, A is a subspace of V, and a vector α is chosen in V, then if we wish to approximate α by choosing the 'closest' vector β in A, the "error" in the approximation must lie in the set V − A. If the set V − A is empty, then the approximation is exact (α = β). By defining our concepts of "closest" and "error" carefully, we can further refine the concept of approximation. Let us consider the idea of linear independence.
The vectors α_1, α_2, ..., α_m are linearly independent over the field F if and only if, for all scalars c_i in F,

c_1 α_1 + c_2 α_2 + ... + c_m α_m = 0    (1-1)

implies that c_1 = c_2 = ... = c_m = 0, and the vector space has the dimension m.
From the idea of linear independence we introduce the basis of a vector space as a linearly independent subset which spans the entire space. We wish to impose on this vector space a means of defining lengths of vectors, which plays a central role in modern methods of approximation.
In an n-dimensional space, the inner product
6 APPROXIMATION

+ (a_n − c_n) α_n .

If we ask for the best approximation to the vector β, clearly our choice would be a_1 = c_1, a_2 = c_2, ..., a_n = c_n; but what happens if we restrict β to be in a subspace V_m of V_n, m < n? The answer to the previous question is no longer trivial.

Again we can define an error vector,

(β − α) = (a_1 − c_1) α_1 + ... + (a_m − c_m) α_m
+ a_{m+1} α_{m+1} + ... + a_n α_n .
(f,g) = ( ∫_0^1 |f(x) − g(x)|^p dx )^{1/p} ;

for convenience, we choose the L_2 norm, where

(f,g) = ( ∫_0^1 |f(x) − g(x)|^2 dx )^{1/2} .

Over this vector space, we are at liberty to define other norms when it is convenient to do so.

(p_n, q_m) = ( ∫_0^1 |p_n(x) − q_m(x)|^2 dx )^{1/2} .
The entire space V_n can be generated by N linearly independent polynomials; that is, any polynomial r_n(x), n ≤ N, can be expressed as a linear combination of the base polynomials.
Figure 1.1: Piecewise Linear Function [plot of g(t); axis values not reproduced]
1.9 DISCUSSION
2.1 INTRODUCTION
In this chapter we shall consider polynomial approximation in its simplest form. As in the last chapter we shall restrict ourselves, for convenience, to the closed interval (0,1), and we will let f(x) be a real valued continuous function defined in the interval.
Our first essential task is to show that the
polynomial is a good candidate for approximat-
ing f(x) over (0,1). If this is true, the nu-
merical ideas which follow will have a clear
theoretical basis.
We can establish this by proving the Weierstrass Polynomial Approximation theorem. Let f(x) be a real valued continuous function defined in the closed interval (0,1).

We shall show that there exists a sequence of polynomials P_n(x) which converges to f(x) uniformly as n goes to infinity. To begin we choose the polynomial P_n(x) to be of the form,

P_n(x) = Σ_{m=0}^{n} C(n,m) f(m/n) x^m (1 − x)^{n−m} ,

where C(n,m) is the number of combinations of n things taken m at a time. Consider the binomial expansion,
(x + y)^n = Σ_{m=0}^{n} C(n,m) x^m y^{n−m} .    (2-1)

By differentiating Eq.(2-1) with respect to x and multiplying the result by x, we have

n x (x + y)^{n−1} = Σ_{m=0}^{n} m C(n,m) x^m y^{n−m} ,    (2-2)

and by differentiating Eq.(2-1) twice with respect to x and multiplying the result by x^2, we have,

n(n−1) x^2 (x + y)^{n−2} = Σ_{m=0}^{n} m(m−1) C(n,m) x^m y^{n−m} .    (2-3)
Now if we set,

r_m(x) = C(n,m) x^m (1 − x)^{n−m} ,    (2-4)

then, by using Eq.(2-1), Eq.(2-2) and Eq.(2-3), setting y = 1 − x, we have the following three important properties,

Σ_{m=0}^{n} r_m(x) = 1 ,

Σ_{m=0}^{n} m r_m(x) = n x ,

and

Σ_{m=0}^{n} m(m−1) r_m(x) = n(n−1) x^2 .
Using these properties, we can easily see that,

Σ_{m=0}^{n} (m − nx)^2 r_m(x) = Σ_{m=0}^{n} m^2 r_m(x) − 2nx Σ_{m=0}^{n} m r_m(x) + Σ_{m=0}^{n} n^2 x^2 r_m(x) ,

= (nx + n(n−1)x^2) − 2n^2 x^2 + n^2 x^2 ,

= nx(1 − x).
We now note that since f(x) is continuous over the closed interval (0,1), there exists an M such that,

|f(x)| ≤ M ,  0 ≤ x ≤ 1.

Also by uniform continuity there exists, for every ε > 0, a δ > 0 such that, if

|x − x′| < δ, then |f(x) − f(x′)| < ε.
so

((m − nx)/δn)^2 > 1.

Therefore, for all m such that |m − nx| > δn,

Σ_{|m−nx|>δn} r_m(x) ≤ Σ_{|m−nx|>δn} ((m − nx)/δn)^2 r_m(x).    (2-5)

Since r_m(x) ≥ 0, we can combine the two expressions in Eq.(2-5) to give the useful result,

Σ_{|m−nx|≥δn} r_m(x) ≤ Σ_{m=0}^{n} ((m − nx)/δn)^2 r_m(x).
Returning to the main proof, we can now observe that,

| Σ_{|m−nx|>δn} ( f(x) − f(m/n) ) r_m(x) |

≤ 2M Σ_{|m−nx|>δn} r_m(x) ,

≤ 2M Σ_{m=0}^{n} ((m − nx)/δn)^2 r_m(x) ,

= (2M/(n^2 δ^2)) Σ_{m=0}^{n} (m − nx)^2 r_m(x) ,

= (2M/(n^2 δ^2)) n x(1 − x) ,

≤ M/(2n δ^2) ,
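As a numerical aside (mine, not the book's), the Bernstein construction above is easy to evaluate directly. The test function f(x) = |x − 1/2| is an arbitrary choice; for this continuous but nonsmooth f the printed maximum errors shrink like O(1/√n), illustrating the uniform convergence just proved.

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate P_n(x) = sum_m C(n,m) f(m/n) x^m (1-x)^(n-m)."""
    return sum(comb(n, m) * f(m / n) * x**m * (1 - x)**(n - m)
               for m in range(n + 1))

f = lambda x: abs(x - 0.5)              # continuous but not differentiable
xs = [i / 50 for i in range(51)]
errs = []
for n in (5, 20, 80):
    errs.append(max(abs(f(x) - bernstein(f, n, x)) for x in xs))
    print(n, errs[-1])                  # maximum error decreases with n
```

Note that the first property above (Σ r_m = 1) makes P_n reproduce constants exactly, which the sketch inherits for free.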
• a g_1(x) is in G,

• (a + b) g_1(x) = a g_1(x) + b g_1(x) ,

• (ab) g_1(x) = a (b g_1(x)) .

So G is a vector space.
We now say that g_n(x) is a continuous piecewise linear function over the closed interval (0,1) if it is continuous and has n joining points (called knots) within the interval (0,1). Furthermore, g_n(x) is a piecewise linear approximation to f(x) if at each knot x_i, g_n(x_i) = f(x_i).
We can now demonstrate the following result. If f(x) is uniformly continuous in the closed interval (0,1), then there exists a piecewise linear function g_n(x) which converges uniformly to f(x) as n goes to infinity. Since f(x) is uniformly continuous, for every ε > 0 there exists a δ > 0 such that,

if |x − x′| < δ, then |f(x) − f(x′)| < ε.
|f(x) − g_n(x)| < | f(x) − ( f(x_n) + ( (f(x_{n+1}) − f(x_n)) / (x_{n+1} − x_n) ) (x − x_n) ) | ,

≤ |f(x) − f(x_n)| + | (f(x_{n+1}) − f(x_n)) / (x_{n+1} − x_n) | |x − x_n| ,

< 2 ε.
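A small sketch (mine, not the book's) of the estimate above: interpolating a smooth f at uniform knots, the piecewise linear error decays rapidly as the knots are refined.

```python
import math

def pw_linear(f, n):
    """Piecewise linear interpolant of f on the n+1 uniform knots i/n."""
    knots = [i / n for i in range(n + 1)]
    vals = [f(t) for t in knots]
    def g(x):
        i = min(int(x * n), n - 1)      # index of the containing interval
        t = (x - knots[i]) * n          # local coordinate in [0, 1]
        return vals[i] + t * (vals[i + 1] - vals[i])
    return g

for n in (4, 16, 64):
    g = pw_linear(math.sin, n)
    err = max(abs(math.sin(x) - g(x)) for x in (i / 400 for i in range(401)))
    print(n, err)                       # roughly O(1/n^2) for smooth f
```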
Figure 2.1: Piecewise Linear Approximation [plot of g(t); axis values not reproduced]
x_i ≥ 0 ,    (2-7)

i = 1, ..., N,

x_1 + x_2 + ... + x_N = x ,

so we can write

f_N(x) = min ( h_1(x_1) + h_2(x_2) + ... + h_N(x_N) ) ,

where the minimum is taken over x_1 + x_2 + ... + x_N = x, x_i > 0,
f_N(x) = min_{0 ≤ x_N ≤ x} ( min_{x_1 + x_2 + ... + x_{N−1} = x − x_N} ( h_1(x_1) + ... + h_{N−1}(x_{N−1}) ) + h_N(x_N) ) ,

= min_{0 ≤ x_N ≤ x} ( h_N(x_N) + f_{N−1}(x − x_N) ) ,
which is the Fundamental Equation of Dynamic
Programming. In the following section we will
use this result to obtain a linear approxima-
tion to f(x).
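On a discrete grid the functional equation can be exercised directly. The sketch below is illustrative (the quadratic stage costs h_k are invented, not from the text): each pass applies f_N(x) = min over 0 ≤ y ≤ x of ( h_N(y) + f_{N−1}(x − y) ).

```python
# Hedged sketch of the Fundamental Equation of Dynamic Programming on a
# discrete grid; the stage costs below are illustrative only.
def dp_allocate(costs, units):
    """Minimal total cost of splitting `units` among the stages in `costs`."""
    INF = float("inf")
    f = [0.0] + [INF] * units           # f_0: only x = 0 costs nothing
    for h in costs:                     # one pass per stage N = 1, 2, ...
        f = [min(h(y) + f[x - y] for y in range(x + 1))
             for x in range(units + 1)]
    return f[units]

# Three invented convex stage costs h_k(y) = w_k * y^2.
costs = [lambda y, w=w: w * y * y for w in (1.0, 2.0, 4.0)]
print(dp_allocate(costs, 12))           # optimal split is (7, 3, 2): cost 83.0
```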
[Figure: node-numbering diagram for the three dimensional polygonal approximation over the unit cube; not reproduced]
h_20 = −ξ(1 − ξ) η(1 + η) (1 − ζ^2)/4 ,

h_21 = (1 − ξ^2)(1 − η^2)(1 − ζ^2) ,

h_22 = −(1 − ξ^2)(1 − η^2) ζ(1 − ζ)/2 ,

h_23 = (1 − ξ^2)(1 − η^2) ζ(1 + ζ)/2 ,

h_24 = −(1 − ξ^2) η(1 − η) (1 − ζ^2)/2 ,

h_25 = (1 − ξ^2) η(1 + η) (1 − ζ^2)/2 ,

h_26 = ξ(1 + ξ) (1 − η^2)(1 − ζ^2)/2 ,

h_27 = −ξ(1 − ξ) (1 − η^2)(1 − ζ^2)/2 .
x = Σ_i x_i h_i(ξ, η, ζ) ,

y = Σ_i y_i h_i(ξ, η, ζ) ,    (2-11)

z = Σ_i z_i h_i(ξ, η, ζ) .

Expressions Eq.(2-10) and Eq.(2-11) represent a continuous mapping from the unit cube (−1 ≤ ξ, η, ζ ≤ 1) to the points in V and the value of the function within the same volume. To use this approximation for interpolation, f must be known and Eq.(2-11) must be inverted, which is quite difficult. However, the evaluation of the integrals is straightforward,

Ψ = ∫_V F(f) dv = ∫_V F( Σ_i f_i h_i(ξ, η, ζ) ) |J| dξ dη dζ .
2.14 DISCUSSION
Let us finally note that in trying to approximate the continuous function f(x) by a sequence of segmented straight lines, we are assuming something about the basic structure of the problem, namely that the system we are approximating is linear (or nearly linear) between a finite but unknown set of critical states.
Such problems must be chosen with great care,
but having done so,we have at our disposal the
full power of dynamic programming.
3.1 INTRODUCTION
In the last chapter we considered two very simple polynomials as approximating functions, the segmented straight line over the closed interval (0,1), and the set of 27 orthogonal quadratic polynomials over the unit cube, −1 ≤ ξ, η, ζ ≤ 1.
Now we wish to consider the polynomial
spline as a modern approximation tool. In par-
ticular, we will focus on the cubic spline to
illustrate the use of these special polynomials
in approximation theory.
s_Δ(x) = M_{j−1} ( (x_j − x)^3 / (6 h_j) ) + M_j ( (x − x_{j−1})^3 / (6 h_j) )

+ ( y_{j−1} − M_{j−1} h_j^2 / 6 ) ( (x_j − x)/h_j )    (3-2)

+ ( y_j − M_j h_j^2 / 6 ) ( (x − x_{j−1})/h_j ) ,
and

s′_Δ(x) = −M_{j−1} ( (x_j − x)^2 / (2 h_j) ) + M_j ( (x − x_{j−1})^2 / (2 h_j) )

− ( (M_j − M_{j−1}) h_j ) / 6 + ( y_j − y_{j−1} ) / h_j .
In expression Eq.(3-2), only the set of moments M_j, j = 0, ..., N, are unknown and must be determined. Since the spline
s′_Δ(x_j^−) = (h_j/6) M_{j−1} + (h_j/3) M_j + (y_j − y_{j−1})/h_j ,    (3-3)

s′_Δ(x_j^+) = −(h_{j+1}/3) M_j − (h_{j+1}/6) M_{j+1} + (y_{j+1} − y_j)/h_{j+1} ,    (3-4)
(h_j/6) M_{j−1} + ( (h_j + h_{j+1})/3 ) M_j + (h_{j+1}/6) M_{j+1} = (y_{j+1} − y_j)/h_{j+1} − (y_j − y_{j−1})/h_j .
M_{N−1} + 2 M_N = (6/h_N) ( y′_N − (y_N − y_{N−1})/h_N ) .

We can also put M_0 = M_N = 0, or M_0 + λ M_1 = 0, 0 < λ < 1.
We will explore such ideas further later in the chapter. In general we can write the spline end conditions as,

2 M_0 + λ(0) M_1 = d_0 ,

μ(N) M_{N−1} + 2 M_N = d_N .
If we let

λ(j) = h_{j+1} / (h_j + h_{j+1}) ,  μ(j) = 1 − λ(j) ,

then the continuity condition can be written,
μ(j) M_{j−1} + 2 M_j + λ(j) M_{j+1} = 6 d_j ,

j = 1, 2, ..., N−1,

where

d_j = ( (y_{j+1} − y_j)/h_{j+1} − (y_j − y_{j−1})/h_j ) / (h_j + h_{j+1}) .
In matrix form,

| 2     λ(0)                         |  | M_0     |     | d_0     |
| μ(1)  2     λ(1)                   |  | M_1     |     | d_1     |
|       μ(2)  2     ...              |  | M_2     |  =  | d_2     |
|             ...   2       λ(N−1)   |  | M_{N−1} |     | d_{N−1} |
|                   μ(N)    2        |  | M_N     |     | d_N     |
Σ_j a_mj x_j = λ x_m ,

or, by extracting the pivot element,

λ − a_mm = Σ_{j≠m} a_mj ( x_j / x_m ) ,

we obtain the inequality,

|λ − a_mm| ≤ Σ_{j≠m} |a_mj| .    (3-5)

Eq.(3-5) merely states that the eigenvalue λ lies within a circle in the complex plane centered at a_mm with radius Σ_{j≠m} |a_mj|. Therefore we have the Gershgorin theorem.

The eigenvalues of a matrix A lie in the union of the circles,

|z − a_ii| ≤ Σ_{j≠i} |a_ij| ,    (3-6)

0 < i ≤ n.

We observe that if the matrix A has no zero eigenvalues, then the matrix is nonsingular and a unique inverse exists.
If

|a_ii| > Σ_{j≠i} |a_ij| ,

then by Eq.(3-6),

|z − a_ii| < |a_ii| .    (3-7)

To satisfy Eq.(3-7), z must be nonzero, and we can say that a matrix with a dominant main diagonal is nonsingular.
Now we know that the success of the cubic spline depends on the nonsingularity of the matrix,

| 2      λ(0)                      |
| μ(1)   2       λ(1)              |
|        ...     ...    ...        |
|        μ(N−1)  2      λ(N−1)     |
|                μ(N)   2          |
2 M_0 + λ(0) M_1 = d_0 ,

μ(j) M_{j−1} + 2 M_j + λ(j) M_{j+1} = d_j ,
j = 1, 2, ..., N−1,    (3-8)

μ(N) M_{N−1} + 2 M_N = d_N .
M_j = ( −λ(j) / (2 + μ(j) a(j−1)) ) M_{j+1} + ( d_j − μ(j) b(j−1) ) / (2 + μ(j) a(j−1)) .    (3-10)

Comparing Eq.(3-9) with Eq.(3-10) leads to the recursion relation,

a(j) = −λ(j) / (2 + μ(j) a(j−1)) ,    (3-11)

and

b(j) = ( d_j − μ(j) b(j−1) ) / (2 + μ(j) a(j−1)) .
2 M_0 + λ(0) M_1 = d_0 ,

while at x = x_N,

M_{N−1} + 2 M_N = (6/h_N) ( y′_N − (y_N − y_{N−1})/h_N ) ,

2 M_N = 2 y″_N ,

μ(N) M_{N−1} + 2 M_N = d_N .
M_j = a(j) M_{j+1} + b(j) .

If the initial slope is known from observed data, y′_0 = (y_1 − y_0)/h_1, then, from Eq.(3-12),

M_0 = −(λ(0)/2) M_1 + d_0/2 ,

giving,

a(0) = −λ(0)/2 ,  b(0) = d_0/2 .

With this information, following the recursive relation Eq.(3-11), we can compute and store all coefficients a(i), b(i), i = 1, 2, ..., N−1.
When j = N−1, we have the relations,

M_{N−1} = a(N−1) M_N + b(N−1) ,

μ(N) M_{N−1} + 2 M_N = d_N .

Solving for M_N,

M_N = ( d_N − μ(N) b(N−1) ) / ( 2 + μ(N) a(N−1) ) .

Finally, since M_i = a(i) M_{i+1} + b(i), all the spline moments can be computed.
The procedure we have just described is interesting in its own right, for it describes, in mathematical terms, the propagation of partial information. The partial information represented by the end conditions is propagated through the parameters a(i), b(i) until the end is reached. Knowledge at that point allows us to compute the first (actually last) spline moment. The recursion relation then allows us to compute all remaining moments. This property is typical of the Riccati equation, which we shall consider in a later chapter.
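The forward-backward sweep just described takes only a few lines of code. This is a hedged sketch of Potter's method, assuming the normalized system 2M_0 + λ(0)M_1 = d_0, μ(j)M_{j−1} + 2M_j + λ(j)M_{j+1} = d_j, μ(N)M_{N−1} + 2M_N = d_N; the lists lam and mu are indexed so that lam[j] is used for j = 0,...,N−1 and mu[j] for j = 1,...,N.

```python
# Sketch of Potter's method: forward recursion for a(j), b(j), then
# back-substitution M_j = a(j) M_{j+1} + b(j).
def potter_sweep(lam, mu, d):
    """Solve the tridiagonal moment system for M_0, ..., M_N."""
    N = len(d) - 1
    a = [0.0] * N
    b = [0.0] * N
    a[0] = -lam[0] / 2.0                # from 2*M0 + lam(0)*M1 = d0
    b[0] = d[0] / 2.0
    for j in range(1, N):
        denom = 2.0 + mu[j] * a[j - 1]
        a[j] = -lam[j] / denom
        b[j] = (d[j] - mu[j] * b[j - 1]) / denom
    M = [0.0] * (N + 1)
    M[N] = (d[N] - mu[N] * b[N - 1]) / (2.0 + mu[N] * a[N - 1])
    for j in range(N - 1, -1, -1):      # propagate the information back
        M[j] = a[j] * M[j + 1] + b[j]
    return M

# Illustrative data: uniform knots give lam(j) = mu(j) = 1/2.
M = potter_sweep([0.5] * 5, [0.5] * 5, [1.0, 2.0, 3.0, 4.0, 5.0])
print(M)
```

Because the recursion is exactly Gaussian elimination on the tridiagonal matrix, the returned moments satisfy every row of the system to roundoff.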
Collecting powers of x in Eq.(3-2), in the interval x_{j−1} ≤ x ≤ x_j we may write,

s_j(x) = ( (M_j − M_{j−1}) / (6 h_j) ) x^3

+ ( (M_{j−1} x_j − M_j x_{j−1}) / (2 h_j) ) x^2

+ ( (M_j x_{j−1}^2 − M_{j−1} x_j^2) / (2 h_j) + (y_j − y_{j−1})/h_j − (M_j − M_{j−1}) h_j / 6 ) x

+ ( (M_{j−1} x_j^3 − M_j x_{j−1}^3) / (6 h_j) + ( y_{j−1}/h_j − M_{j−1} h_j/6 ) x_j − ( y_j/h_j − M_j h_j/6 ) x_{j−1} ) .
J(a,b) = ∫_a^b ( f″(x) )^2 dx .
We can now apply the methods of dynamic programming to determine s(x) in each subinterval. Let us consider the interval x_i ≤ x ≤ x_{i+1} and write,

s_i(x) = y_i + M_i (x − x_i) + p(i) (x − x_i)^2 + q(i) (x − x_i)^3 ,

for x_i ≤ x ≤ x_{i+1}.
Since y_i and y_{i+1} are known, letting M_i and M_{i+1} denote the first moments at the nodes x_i and x_{i+1}, we find, using the continuity property, that,

p(i) = (1/h_{i+1}) ( 3 r(i) − (M_{i+1} + 2 M_i) ) ,

q(i) = (1/h_{i+1}^2) ( (M_{i+1} + M_i) − 2 r(i) ) ,

where

r(i) = (y_{i+1} − y_i) / h_{i+1} ,  h_{i+1} = x_{i+1} − x_i .
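A quick numerical check (mine, not from the text) that the coefficients p(i) and q(i) above make the cubic honor the end data: with these formulas the segment reproduces y_{i+1} and the slope M_{i+1} at x_{i+1}.

```python
# Verify the Hermite-type segment s(x) = y1 + M1*(x-x1) + p*(x-x1)^2 + q*(x-x1)^3
# with p = (3r - (M2 + 2*M1))/h and q = ((M2 + M1) - 2r)/h^2.
def segment(y1, y2, M1, M2, x1, x2):
    h = x2 - x1
    r = (y2 - y1) / h
    p = (3 * r - (M2 + 2 * M1)) / h
    q = ((M2 + M1) - 2 * r) / h**2
    s = lambda x: y1 + M1 * (x - x1) + p * (x - x1)**2 + q * (x - x1)**3
    ds = lambda x: M1 + 2 * p * (x - x1) + 3 * q * (x - x1)**2
    return s, ds

# Arbitrary illustrative data for one interval.
s, ds = segment(1.0, 2.0, 0.5, -0.25, 0.0, 2.0)
print(s(2.0), ds(2.0))                  # reproduces y2 and M2 at x2
```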
x_N = b.
F_i(M_i) = min_{M_{i+1}} ( g(i+1) ( 3 r(i)^2 + M_i^2 + M_i M_{i+1} + M_{i+1}^2 − 3 r(i)(M_i + M_{i+1}) ) + F_{i+1}(M_{i+1}) ) ,    (3-14)

where g(i+1) = 4/h_{i+1} .
We now argue that F_i is quadratic in M_i. To see this, let us take the last interval, x_{N−1} ≤ x ≤ x_N,

F_{N−1}(M_{N−1}) = g(N) ( (3/4) r(N−1)^2 − (3/2) r(N−1) M_{N−1} + (3/4) M_{N−1}^2 ) .    (3-15)

The above expression is clearly quadratic in M_{N−1}. Hence, by induction, it can be shown that F_i is quadratic in M_i. Therefore write,

F_i(M_i) = c(i) + d(i) M_i + e(i) M_i^2 .    (3-16)
From Eq.(3-14) we obtain,

F_i(M_i) = min_{M_{i+1}} ( g(i+1) ( 3 r(i)^2 + M_{i+1}^2 + M_i^2 − 3 r(i)(M_{i+1} + M_i) + M_i M_{i+1} )    (3-17)

+ c(i+1) + d(i+1) M_{i+1} + e(i+1) M_{i+1}^2 ) .

The minimization over M_{i+1} is readily performed.
e(i) = g(i+1) ( 1 − g(i+1) / ( 4 ( g(i+1) + e(i+1) ) ) ) ,    (3-18)

d(i) = g(i+1) ( ( 3 r(i) g(i+1) − d(i+1) ) / ( 2 ( g(i+1) + e(i+1) ) ) − 3 r(i) ) .
From Eq.(3-15) the values of c(N−1), d(N−1) and e(N−1) are given by

c(N−1) = 3 r(N−1)^2 / h_N ,

d(N−1) = −6 r(N−1) / h_N ,

e(N−1) = 3 / h_N .
F_0(M_0) = c(0) + d(0) M_0 + e(0) M_0^2 .
Minimizing in Eq.(3-17) gives,

2 ( g(i+1) + e(i+1) ) M_{i+1} = 3 r(i) g(i+1) − d(i+1) − g(i+1) M_i ,

and

2 ( g(i) + e(i) ) M_i = 3 r(i−1) g(i) − d(i) − g(i) M_{i−1} .

Adding these two relations and using Eq.(3-17) we have,

κ(i) M_{i−1} + 2 M_i + μ(i) M_{i+1} = 3 κ(i) (y_i − y_{i−1})/h_i + 3 μ(i) (y_{i+1} − y_i)/h_{i+1} ,

where κ(i) = h_{i+1}/(h_i + h_{i+1}) and μ(i) = 1 − κ(i). At the right end,

2 M_N + M_{N−1} = 3 (y_N − y_{N−1})/h_N .
At x = a, minimizing F_0(M_0) yields

M_0 = −d(0) / (2 e(0)) .    (3-19)

We know that,

M_1 = ( 3 r(0) g(1) − g(1) M_0 − d(1) ) / ( 2 ( g(1) + e(1) ) ) ,    (3-20)

where r(0) = (y_1 − y_0)/h_1 and g(1) = 4/h_1 ,
with h_1 = x_1 − x_0 .
k = 0, 1, ..., N−1,

S_n(t_k) = c_1 ,  S_n′(t_k) = c_2 ,

S_n(t_{k+1}) = d_1 ,  S_n′(t_{k+1}) = d_2 .
4.1 INTRODUCTION
In this chapter we intend to explore several
numerical techniques for fitting a known func-
tion to a linear or nonlinear differential
equation. This technique, known as quasilinear-
ization, makes specific use of the underlying
structure of the linear differential equation
allowing us to approximate, numerically, both
initial conditions and system parameters asso-
ciated with the selected differential equation.
If u(t) is known on the interval (0,1), we seek to determine a differential equation and boundary conditions b, having the form W(v,b) = 0, whose solution v(t) best fits u(t) over the interval according to some error criterion.
4.2 QUASILINEARIZATION I
There is no reason to believe that the best
approximating differential operator is a linear
equation with constant coefficients. On the
other hand it must be clearly understood that
if one seeks a differential approximation to u, W(u,b) = 0, certain a priori knowledge of the form of W(u,b) must be assumed.

If W(u,b) is assumed to be nonlinear in u, then the approximation methods of the last chapter may have to be modified. Of course, if we proceed in exactly the same way as before,
Figure 4.2: Successive Approximation to the Root of f(x) [diagram not reproduced]
r_0 − x_{n+1} = r_0 − x_n − ( f(r_0) − f(x_n) ) / f′(x_n) .    (4-2)

By the mean value theorem,

f(r_0) = f(x_n) + (r_0 − x_n) f′(x_n) + (1/2) (r_0 − x_n)^2 f″(ξ) ,    (4-3)

where x_n ≤ ξ ≤ r_0 .

Substituting Eq.(4-3) into Eq.(4-2) we have,

r_0 − x_{n+1} = −(1/2) ( f″(ξ) / f′(x_n) ) (r_0 − x_n)^2 .

Then

|r_0 − x_{n+1}| ≤ k |r_0 − x_n|^2

and the convergence is quadratic.
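The quadratic convergence of the iteration is easy to see numerically; the sketch below (mine, with f(x) = x^2 − 2 as an arbitrary test function) prints errors in which each entry is roughly a constant times the square of the previous one.

```python
# Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n); the errors
# |r0 - x_n| exhibit the quadratic convergence derived above.
def newton(f, fprime, x0, steps=6):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / fprime(xs[-1]))
    return xs

# Root of f(x) = x^2 - 2, i.e. r0 = sqrt(2), starting from x0 = 1.
xs = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
r0 = 2 ** 0.5
for x in xs:
    print(abs(r0 - x))                  # errors square at each step
```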
4.4 QUASILINEARIZATION II
The quasilinear technique is a generalization of the Newton Raphson method developed in the last section. As such we expect, and will indeed show, that the properties of monotonicity and quadratic convergence apply here as well.
We shall begin by considering the nonlinear differential equation,

u′(x) = f(x, u(x)) ,    (4-4)

u(0) = c ,  0 ≤ x ≤ b.
4.5 EXISTENCE
In this section we will be concerned with demonstrating that the sequence Eq.(4-7) does, in fact, exist and further that there is a common interval (0,b) in which the functions are uniformly bounded and converge to a solution of Eq.(4-4).

We shall begin by defining a norm over the space of all functions as,

‖v − c‖ ≤ 1.
We find, as before,

u_{n+1}(x) − c = ∫_0^x ( f(u_n(s)) + ( u_{n+1}(s) − u_n(s) ) f_u(u_n(s)) ) ds ,

= ∫_0^x ( f(u_n(s)) + ( (u_{n+1}(s) − c) − (u_n(s) − c) ) f_u ) ds .

Hence

‖u_{n+1}(x) − c‖ ≤ x ( m + m ( ‖u_{n+1}(x) − c‖ + 1 ) ) ,

‖u_{n+1}(x) − c‖ ≤ 2 m x / (1 − m x) ,

which will not exceed unity provided Eq.(4-9) holds. Thus we see that under the boundedness condition Eq.(4-8), we know a region exists in which the sequence of functions exists and is uniformly bounded.
4.6 CONVERGENCE
A very important property of the sequence is that of convergence. We say a sequence of functions is quadratically convergent if it converges to a limit u(x) and

‖u_{n+1}(x) − u(x)‖ ≤ k ‖u_n(x) − u(x)‖^2 ,

where k is independent of n.

To show the condition under which the sequence u_n(x) is quadratically convergent, we consider the quasilinear sequence defined by
+ ( u_n − u_{n−1} ) f_u(u_{n−1}) ) .

In view of past work, we may write
‖u_{n+1} − u_n‖ ≤ b ( m ‖u_n − u_{n−1}‖^2 + m ‖u_{n+1} − u_n‖ ) ,

‖u_n − u_{n−1}‖ ≤ k ‖u_{n−1} − u_{n−2}‖^2 ,

...

‖u_2 − u_1‖ ≤ k ‖u_1 − u_0‖^2 .

By substitution

‖u_{n+1} − u_n‖ ≤ k ‖u_n − u_{n−1}‖^2 ,

≤ k ( k ‖u_{n−1} − u_{n−2}‖^2 )^2 ,    (4-13)

...

≤ k ( k^{2^{n−1} − 1} ‖u_1 − u_0‖^{2^{n−1}} )^2 ,

= ( k ‖u_1 − u_0‖ )^{2^n} / k .
1 0
If k ‖u_1 − u_0‖ < 1, then the right side of Eq.(4-13) approaches zero as n increases and the sequence converges.
We wish to emphasize the importance of quad-
ratic convergence which is associated with the
quasilinear technique. Not only does it aid the
computing algorithm but it assures rapid con-
vergence to a numerical solution.
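A minimal sketch of the scheme for a scalar problem, with the test equation u′ = −u², u(0) = 1 (exact solution 1/(1+x)) and simple Euler steps — both choices are mine, not the book's. Each iterate solves the linear equation u_{n+1}′ = f(u_n) + (u_{n+1} − u_n) f_u(u_n).

```python
# Quasilinearization sketch: successive solution of linearized ODEs.
def quasilinear(f, fu, c, x_end, iters=6, n=2000):
    h = x_end / n
    u = [c] * (n + 1)                   # initial guess u_0(x) = c
    for _ in range(iters):
        v = [c]
        for i in range(n):              # Euler step on the LINEAR equation
            slope = f(u[i]) + (v[i] - u[i]) * fu(u[i])
            v.append(v[i] + h * slope)
        u = v
    return u

u = quasilinear(lambda u: -u * u, lambda u: -2 * u, 1.0, 0.5)
print(abs(u[-1] - 1 / 1.5))             # residual error is set by the Euler step
```

The iterates converge to the discrete solution in very few sweeps (the quadratic convergence discussed above); the remaining error is dominated by the crude integrator, not by the quasilinearization.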
μ = 7.0.

The parameter μ in Eq.(4-14) can be considered to be a function of t, subject to the differential equation μ′ = 0. The Van der Pol equation can now be written as a first order system,

x′ = u ,

u′ = −μ (x^2 − 1) u − x ,    (4-15)

μ′ = 0 .

We can integrate Eq.(4-15) numerically, subject to the approximate initial conditions

x(4) = −1.80843 ,
λ′_n = 0 ,
Table 4.1
Parameter Identification
p_{n+1}(0) = 0 ,    (4-20)

and

h′_{n+1}(t) = f_u(t, u_n) h_{n+1} ,    (4-21)

h_{n+1}(0) = 1 .
x(0) = c_1 ,    (4-22)

x′(0) = c_2 .

+ b_n (x_{n+1} − x_n) + x_n (b_{n+1} − b_n) ) ,

a′_{n+1} = 0 ,

b′_{n+1} = 0 .
p_n(0) = (0, 0, 0, 0)ᵀ ,  h_1(0) = (1, 0, 0, 0)ᵀ ,  h_2(0) = (0, 1, 0, 0)ᵀ ,  h_3(0) = (0, 0, 1, 0)ᵀ ,  h_4(0) = (0, 0, 0, 1)ᵀ ,
where in Eq.(4-23) h_i^{(j)}(t) is the j-th component of the i-th vector solution. By way of an example, let the known data have the form,

y(t) = e^{βt} ( A sin ωt + B cos ωt ) .    (4-24)

Substituting Eq.(4-24) into Eq.(4-22) we see that,

a = 2β ,

b = β^2 + ω^2 .    (4-25)
In particular let

y(t) = e^{−t} ( 0.07387 sin 10t + 0.9844 cos 10t ) .

The results of a numerical study are shown in Table 4.2.
Table 4.2: Identification of a System with Damped Oscillations
[columns: initial approximation, iteration 1, iteration 2; data not reproduced]
where u_k^{(M)} = d^M u_k(t) / dt^M ; in t_k ≤ t ≤ t_{k+1} we have the initial conditions

u_k^{(j)}(t_k) = c_{jk} ,  j = 0, 1, ..., M−1.
Then if

S_Δ(b_ij, c_ij, t) = ∫_{t_k}^{t_{k+1}} ( u_k(t) − f_k(t) )^2 dt ,

with

F_1(T) = S_Δ(0, 0, T) ,
lim_{u → 0} (u′ − b)/u .
If we further assume b, b_0, ..., b_{k+1} to be the state variables, we arrive at the system of differential equations,

u̇^{(0)} = u^{(1)} ,

u̇^{(1)} = u^{(2)} ,

...

u̇^{(k−2)} = u^{(k−1)} ,

u̇^{(k−1)} = (1/b_{k+1}) ( (u^{(1)} − b)/u − b_0 u − b_1 u^{(1)} − b_2 u^{(2)} − ... − b_k u^{(k−1)} ) ,

ḃ = 0 ,

ḃ_0 = 0 ,

...

ḃ_{k+1} = 0 .
Now, if we put

u^{(0)} = x_0 ,
u^{(1)} = x_1 ,
...
u^{(k−1)} = x_{k−1} ,
b = x_k ,
b_0 = x_{k+1} ,
...
b_{k+1} = x_{2k+2} ,

we obtain the nonlinear vector differential equation

ẋ = g(x) .
ẋ_j^{(n)} = g_j(x^{(n−1)}) + Σ_{m=0}^{2k+2} ( ∂g_j(x^{(n−1)}) / ∂x_m ) ( x_m^{(n)} − x_m^{(n−1)} ) .
Following the previous procedure, we determine (2k+2) independent homogeneous vector solutions and one particular solution. The n-th quasilinear vector solution is then the linear combination

u^{(n)} = p(t) + Σ_{j=0}^{2k+2} a_j h_j(t) .

The coefficients a_j are determined by the requirement that,

∫_0^T ( f(t) − ( p(t) + Σ_{j=0}^{2k+2} a_j h_j(t) ) )^2 dt

be minimized.
(4-30)

Then u^{(k)} can be expressed as

u^{(k)} = (1/b_{k+1}) ( (u^{(2)} − a u^{(1)})/u − c − b_0 u − ... − b_{k−1} u^{(k−1)} ) ,
so from Eq.(4-29)

u^{(k)} = (1/b_{k+1}) ( ( u^{(2)} u − u^{(1)} (u^{(1)} − b) ) / u^2 − c (u^{(1)} − b)/u − b_0 − b_1 u^{(1)} − ... − b_k u^{(k−1)} ) .
Hence, putting

u^{(0)} = x_0 ,
...
u^{(k−1)} = x_{k−1} ,
b = x_k ,
b_0 = x_{k+1} ,
...
b_{k+1} = x_{2k+2} ,
c = x_{2k+3} ,

we again obtain a nonlinear differential equation,

ẋ = g(x) .
4.14 DISCUSSION
5.1 INTRODUCTION
In the last chapter we considered the tech-
nique of quasilinearization for fitting a known
function f(x) to a differential equation by
determining the initial conditions and system
parameters associated with the chosen differen-
tial equation. This was done by numerical meth-
ods.
In this chapter we wish to explore the ana-
lytical methods for doing a slightly different
approximation. In this case we are given a non-
linear differential equation and we seek to de-
termine an associated linear differential equa-
tion whose exact solution is a good
approximation to the solution of the original
equation in the range of interest.
where we set u^{(k)}(0) = v^{(k)}(0), k = 0, 1, ..., n−1. The function u satisfies the equation,

u^{(n)} + b_1 u^{(n−1)} + ... + b_n u = h(t) ,

v^{(k)}(0) = u^{(k)}(0) ,
5.6 AN EXAMPLE

Consider the equation,

u(t) = ( 1 − ∫_0^t e^{−s^2} ds ) + ∫_0^t e^{−(t−s)^2} u(s) ds ,    (5-5)

having the solution u(t) = 1.

Let the kernel of Eq.(5-5) be,

k(t) = e^{−t^2} ,

and the forcing function be,

f(t) = 1 − ∫_0^t e^{−s^2} ds .

Equation Eq.(5-5) can be written as,

u(t) = f(t) + ∫_0^t k(t−s) u(s) ds .    (5-6)
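A numerical sanity check (mine, not from the text) that u(t) = 1 solves Eq.(5-6): discretizing the forcing integral and the convolution with the same trapezoidal rule and solving forward in t reproduces u = 1 to roundoff.

```python
# Trapezoidal discretization of the Volterra equation
# u(t) = f(t) + integral_0^t k(t-s) u(s) ds with k(t) = exp(-t^2).
import math

n, T = 200, 1.0
h = T / n
k = [math.exp(-(i * h) ** 2) for i in range(n + 1)]   # kernel samples k(t_i)

# forcing f(t_i) = 1 - trapezoidal integral of exp(-s^2) over [0, t_i]
f = [1.0]
for i in range(n):
    f.append(f[-1] - h * (k[i] + k[i + 1]) / 2)

# march forward: solve u_i = f_i + h * (trapezoidal sum) for u_i
u = [f[0]]
for i in range(1, n + 1):
    acc = 0.5 * k[i] * u[0] + sum(k[i - j] * u[j] for j in range(1, i))
    u.append((f[i] + h * acc) / (1.0 - 0.5 * h * k[0]))

print(max(abs(ui - 1.0) for ui in u))   # agrees with u = 1 to roundoff
```

The agreement is exact (not merely O(h^2)) because the same quadrature generates both f and the convolution sum, so the discrete system inherits the identity satisfied by u = 1.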
We wish to find an approximate solution to Eq.(5-6). Let us begin by differentiating Eq.(5-6) three times, so that

u′(t) = f′(t) + k(0) u(t) + ∫_0^t k′(t−s) u(s) ds ,    (5-7)

u″(t) = f″(t) + k(0) u′(t) + k′(0) u(t) + ∫_0^t k″(t−s) u(s) ds ,    (5-8)

u‴(t) = f‴(t) + k(0) u″(t) + k′(0) u′(t) + k″(0) u(t) + ∫_0^t k‴(t−s) u(s) ds .    (5-9)
f(0) = 1 ,  f′(0) = −1 ,  f″(0) = 0 .    (5-11)
Eq.(5-5) is replaced by two simultaneous differential equations, one for u, obtained by

u(t) = h(t) ,  0 < t < 1.    (5-12)
Let us introduce a sequence of functions,
n = 0,1 . .. ,
where we must determine the initial conditions
for each equation. We know immediately,
u_1′(t) = g(u_1, u_0) ,

= g(u_1, h(t)) ,

u^{(2)}(0) = v^{(2)}(0) .
Table 5.2: Results of an Approximation

time    calculated value    absolute error
0.1     0.99003             0.300 × 10^{−4}
0.3     0.913676            0.255 × 10^{−3}
0.5     0.778679            0.122 × 10^{−3}
0.8     0.527665            0.372 × 10^{−3}
1.0     0.367951            0.724 × 10^{−4}
5.9 DISCUSSION

If desired, we can improve the accuracy of the approximation by taking the values of u_i(0), the initial conditions, as parameters, and determining these values by minimizing the quadratic,

J(c_i) = ∫_0^T ( v(t) − Σ_{i=1}^{n} c_i u_i(t) )^2 dt ,

u_i(0) = c_i ,

where u_1, u_2, ..., u_n are n linearly independent solutions of Eq.(5-16).
d w_j / dt = u_j v ,  w_j(0) = 0 ,

then

h_ij(T) = ∫_0^T u_i u_j dt ,  w_j(T) = ∫_0^T u_j v dt .
5.10 AN EXAMPLE

Consider the equation

u′(t) = −u(t−1) (1 + u(t)) ,  t > 1 .

Writing u_n(t) for the solution on the n-th unit interval, we set

z_0 = u_n ,

z_1 = u_n′ ,

z_2 = u_n″ = −u′_{n−1} (1 + u_n) − u_{n−1} u_n′ ,

n = 1, 2, .... We proceed to obtain a numerical solution of this equation.
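The interval-by-interval idea (the method of steps) can be sketched numerically; the constant history u(t) = 0.5 on [0,1] and the Euler step are illustrative assumptions, not taken from the text. On each interval [n, n+1] the delayed factor u(t−1) is already known, so only an ordinary differential equation is integrated.

```python
# Method-of-steps sketch for u'(t) = -u(t-1)(1 + u(t)), t > 1,
# with an assumed constant history on [0, 1].
def solve_delay(c, t_end=4.0, h=0.001):
    n_lag = round(1.0 / h)              # grid steps per unit delay
    n = round(t_end / h)
    u = [c] * (n_lag + 1)               # history u(t) = c on [0, 1]
    for i in range(n_lag, n):
        u.append(u[i] + h * (-u[i - n_lag] * (1 + u[i])))
    return u

u = solve_delay(0.5)
print(u[-1])                            # the solution decays, oscillating
```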
u″ + u + ε u^3 = 0 ,

u(0) = 1 ,  u′(0) = 0 .

v(0) = 1 ,  v′(0) = 0 ,

where b is constant, and to compare u and v in the interval (0, 2π). We can obtain a value of b accurate to O(ε) by minimizing,

∫_0^{2π} ( (u + ε u^3) − (u + ε b u) )^2 dt    (5-21)

= ε^2 ∫_0^{2π} ( u^3 − b u )^2 dt .
(5-24)

Eq.(5-24) reduces to

0 = ∫_0^{2π} ( (u^2 − 1) u′ u − b_1 u u′ ) dt = b_2 ∫_0^{2π} u^2 dt ,

so

b_2 = 0 .

Eq.(5-23) then yields,

∫_0^{2π} (u^2 − 1) u′^2 dt = b_1 ∫_0^{2π} u′^2 dt .    (5-25)
6.1 INTRODUCTION
In this chapter we wish to explore yet an-
other application of approximation methods, the
numerical solution of nonlinear partial differ-
ential equations. In many cases all that is de-
sired is a moderately accurate solution at a
few points which can be computed rapidly.
Standard finite difference techniques have
the characteristic that the solution must be
calculated at a large number of points in order
to obtain moderately accurate results at the
points of interest. As we will show, differen-
tial quadrature is an approximation technique
which could prove to be an attractive alterna-
tive in the solution of partial differential
equations.
    i = 1,2,...,N,

where the coefficients will be determined later. Substituting Eq.(6-3) into Eq.(6-1) yields a set of N coupled nonlinear ordinary differential equations,

    u_t(x_i,t) = g(x_i, t, u(x_i,t), Σ_{j=1}^N a_ij u(x_j,t)),

with initial conditions,

    u(x_i,0) = h(x_i),    i = 1,2,...,N.
Hence, under the assumption that Eq.(6-3) is
valid, we have succeeded in reducing the task
of solving Eq.(6-1) to that of solving a set of
N ordinary differential equations with pre-
scribed boundary conditions.
    i = 1,2,...,N,

    Σ_{j=1}^N a_ij x_j^{k-1} = (k-1) x_i^{k-2},        (6-5)

    i = 1,...,N,    k = 1,...,N,
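Eq.(6-5) determines the weights row by row: for each i the a_ij solve an N x N Vandermonde system. A minimal stdlib-only sketch (function names and the sample grid are ours, not the text's):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (stdlib only).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        xs[r] = (M[r][n] - sum(M[r][k] * xs[k] for k in range(r + 1, n))) / M[r][r]
    return xs

def dq_weights(x):
    # Differential-quadrature weights from Eq.(6-5):
    #   sum_j a_ij x_j^(k-1) = (k-1) x_i^(k-2),  k = 1..N, for each i.
    N = len(x)
    A = [[x[j] ** (k - 1) for j in range(N)] for k in range(1, N + 1)]
    return [solve(A, [0.0 if k == 1 else (k - 1) * x[i] ** (k - 2)
                      for k in range(1, N + 1)])
            for i in range(N)]
```

Because the system enforces exact differentiation of 1, x, ..., x^{N-1}, the weights reproduce the derivative of any polynomial of degree below N at the grid points.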
    f(x) = P_N*(x) / ((x - x_k) P_N*'(x_k)),

where P_N*(x) is defined in terms of the Legendre polynomials by the relation,

    P_N*(x) = P_N(1 - 2x),
    a_ik = P_N*'(x_i) / ((x_i - x_k) P_N*'(x_k)),    i ≠ k,

gives,

    a_kk = (1 - 2x_k) / (2 x_k (x_k - 1)).
    u_t(x_i,t) = (x_i^2/4)(Σ_{j=1}^N a_ij u(x_j,t))^2.        (6-7)
The system Eq.(6-7) was integrated from t = 0 to t = 1 using an Adams-Moulton integration scheme with a step size of Δ = 0.01. The order of the quadrature was chosen to be N = 7. The results are shown in Table 6.1.
Table 6.1

    t     x      computed solution    actual solution
    0.1   x_1    6.4535127 x 10^-5    6.4535123 x 10^-5
          x_4    2.4916991 x 10^-2    2.416993 x 10^-2
          x_7    9.4660186 x 10^-2    9.4660196 x 10^-2
    0.5   x_1    2.9922271 x 10^-4    2.9922131 x 10^-4
          x_4    1.1552925 x 10^-1    1.1552926 x 10^-1
          x_7    4.3889794 x 10^-1    4.3889818 x 10^-1
    1.0   x_1    4.9314854 x 10^-4    4.9313300 x 10^-4
          x_4    1.9039847 x 10^-1    1.9039851 x 10^-1
          x_7    7.2332754 x 10^-1    7.2332808 x 10^-1
where for N = 7,

    x_1 = 2.5446043 x 10^-2
    x_2 = 1.2923441 x 10^-1
    x_3 = 2.9707742 x 10^-1
    x_4 = 5.0000000 x 10^-1
    x_5 = 7.0292257 x 10^-1
    x_6 = 8.7076559 x 10^-1
    x_7 = 9.7455395 x 10^-1
Table 6.2

Table 6.3

    t     x      computed solution    actual solution
    0.1   x_1    8.2435190 x 10^-3    8.2437255 x 10^-3
          x_4    9.9951267 x 10^-2    9.9950668 x 10^-2
          x_7    7.7428169 x 10^-3    7.7430951 x 10^-3
    0.5   x_1    9.4604850 x 10^-3    9.46697773 x 10^-3
          x_4    9.8806333 x 10^-2    9.8798155 x 10^-2
          x_7    6.9017938 x 10^-3    6.9041174 x 10^-3
    1.0   x_1    1.1618795 x 10^-2    1.1617583 x 10^-2
          x_4    9.5506268 x 10^-2    9.5530162 x 10^-2
          x_7    6.0877255 x 10^-3    6.0802054 x 10^-3
    v_t = u v_x + v v_y,    v(x,y,0) = g(x,y).        (6-10)

    du_ij/dt = u_ij (Σ_{k=1}^N a_ik u_kj) + v_ij (Σ_{k=1}^N a_jk u_ik),
    dv_ij/dt = u_ij (Σ_{k=1}^N a_ik v_kj) + v_ij (Σ_{k=1}^N a_jk v_ik),        (6-11)

    i,j = 1,2,...,N,

with the initial conditions,

    u_ij(0) = f(x_i,y_j),
    v_ij(0) = g(x_i,y_j),    i,j = 1,2,...,N.
Since the solution of Eq.(6-11) can also possess a shock phenomenon for finite t, for the numerical experiments we shall choose f and g to ensure that the shock takes place far from the region of interest.
Let

    f(x,y) = g(x,y) = x + y;

the implicit solution of Eq.(6-10) is,

    u(x,y,t) = v(x,y,t) = (x + y)/(1 - 2t).
Table 6.4

    t     x      y      u(x,y,t) computed     u(x,y,t) actual
          0.025  0.025  6.3615112 x 10^-2     6.365106 x 10^-2
          0.025  0.025  2.5446831 x 10^-1     2.5446023 x 10^-1
u(O,x) = f(x).
    f(x) = 2ν g_x(x)/g(x).
This property will allow us to compare our numer-
ical results with analytical solutions of Eq.(6-13).
Letting

    u_i(t) = u(x_i,t),

we have the set of ordinary differential equations,

    u_i'(t) + u_i(t)(Σ_{j=1}^N a_ij u_j(t)) = ν Σ_{k=1}^N Σ_{j=1}^N a_ik a_kj u_j(t),

    i = 1,...,N,

with the initial conditions,

    u_i(0) = f(x_i),    i = 1,...,N.
    Σ_{j=1}^N a_ij y_j(t) = g(y_i(t)),    i = 1,...,N.

    Σ_{j=1}^N (b_ij x(t_j) - λ x(t_i)),        (6-15)
    u_t(x_i,t) = g(x_i, t, u(x_i,t), Σ_{j=1}^N a_ij u(x_j,t)),

    u(x_i,0) = c_i,
    u_i^(k+1) = g_i(u^(k)) + Σ_{j=1}^N (∂g_i(u^(k))/∂u_j)(u_j^(k+1) - u_j^(k)),

    i = 1,2,...,N.        (6-19)
If an initial approximation is known, then Eq.(6-19) defines a sequence of functions for each space point i. Because of the application of differential quadrature to the space derivative, the function u^(k) represents a set of functions associated with all the points x_i. As we have done before, we let,

    u_i^(k+1)(t) = p_i^(k+1)(t) + h_i^(k+1)(t),
7.1 INTRODUCTION
A problem common to many fields is that of determining the parameters in the expression,

    u(t) = Σ_{n=1}^N a_n e^{λ_n t},        (7-1)

where u(t) is known.
In Chapter 5 we used the technique of dif-
ferential approximation to solve this type of
problem. There, the observation was that the
right hand side of Eq.(7-1) satisfied a linear
differential equation of the form,
    u^(N) + b_1 u^(N-1) + ... + b_N u = 0,

    u^(i)(0) = c_i,    i = 1,2,...,N-1.
    J(u,b) = ∫_0^T (u^(N) + b_1 u^(N-1) + ... + b_N u)^2 dt,
A Compartmental Model

(diagrams: compartments x_1, x_2, x_3, x_4 and their interconnections)
    x_2' = k_12 x_1 - (k_21 + k_23) x_2 + k_32 x_3,    x_2(0) = c_2,

    x_3' = k_23 x_2 + k_43 x_4 - (k_32 + k_34) x_3,    x_3(0) = c_3,

    x_4' = k_34 x_3 - k_43 x_4,    x_4(0) = c_4.
In the second case

    x_1' = -k_12 x_1 + k_21 x_2,    x_1(0) = c_1,

    x_2' = k_12 x_1 - (k_21 + k_23 + k_24) x_2 + k_32 x_3 + k_42 x_4,    x_2(0) = c_2,

    x_3' = -k_32 x_3 + k_23 x_2,    x_3(0) = c_3,

    x_4' = -k_42 x_4 + k_24 x_2,    x_4(0) = c_4.
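A forward-Euler sketch of the second system (rate constants, step size and initial values below are illustrative choices of ours, not the text's). Since every outflow term reappears as an inflow, the total Σ x_i is conserved, a useful sanity check on any integration scheme:

```python
def euler_step(x, k, dt):
    # One Euler step for the mammillary system above; k packs the rates.
    k12, k21, k23, k32, k24, k42 = k
    dx = [
        -k12 * x[0] + k21 * x[1],
        k12 * x[0] - (k21 + k23 + k24) * x[1] + k32 * x[2] + k42 * x[3],
        -k32 * x[2] + k23 * x[1],
        -k42 * x[3] + k24 * x[1],
    ]
    return [xi + dt * di for xi, di in zip(x, dx)]
```

Because the rates balance pairwise, the sum of the four derivatives is identically zero, so even the crude Euler scheme preserves the total mass exactly.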
    x(t) = c_1 p_1^t + c_2 p_2^t + ... + c_N p_N^t.        (7-3)

We suppose that a linear change of variable has been introduced, in advance, in such a way that the values of x(t) are specified at the N equally spaced points t = 0,1,...,N-1.
If the set of points is to fit the expression Eq.(7-3), then it must be true that,

    c_1 + c_2 + ... + c_N = x(0),

    c_1 p_1 + c_2 p_2 + ... + c_N p_N = x(1),        (7-4)

    c_1 p_1^2 + c_2 p_2^2 + ... + c_N p_N^2 = x(2),        (7-6)
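For N = 2 the system Eq.(7-4) can be solved explicitly once p_1 and p_2 are known, and the classical Prony device supplies them: the samples obey the recurrence x(t+2) = s_1 x(t+1) + s_2 x(t), whose characteristic roots are the p_n. A sketch (our own illustration; the recurrence step is not spelled out in the text):

```python
import math

def fit_two_exponentials(x):
    # x = [x(0), x(1), x(2), x(3)] with x(t) = c1*p1**t + c2*p2**t.
    # The samples satisfy x(t+2) = s1*x(t+1) + s2*x(t); solve the 2x2 system.
    det = x[1] * x[1] - x[0] * x[2]
    s1 = (x[2] * x[1] - x[0] * x[3]) / det
    s2 = (x[1] * x[3] - x[2] * x[2]) / det
    # p1, p2 are the roots of z**2 - s1*z - s2 = 0.
    disc = math.sqrt(s1 * s1 + 4.0 * s2)
    p1, p2 = (s1 + disc) / 2.0, (s1 - disc) / 2.0
    # Back-substitute into the first two equations of Eq.(7-4).
    c2 = (x[1] - p1 * x[0]) / (p2 - p1)
    c1 = x[0] - c2
    return (c1, p1), (c2, p2)
```

With samples generated from c_1 = 2, p_1 = 1/2, c_2 = 1, p_2 = 1/4, the routine recovers all four parameters.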
We now let,

    u_n(t) = ∫_0^t e^{λ_n(t-s)} u(s) ds,

    u_n(t) e^{-λ_n t} = ∫_0^t e^{-λ_n s} u(s) ds,        (7-7)

or

    (u_n' - λ_n u_n - u) e^{-λ_n t} = 0.
    + ∫_t^1 f(s-t) g(u(s)) ds,

    + ∫_t^1 e^{λ_n(s-t)} g(u(s)) ds).
Now we let

    u_n(t) = ∫_0^t e^{λ_n(t-s)} g(u(s)) ds

and

    v_n(t) = ∫_t^1 e^{λ_n(s-t)} g(u(s)) ds.

Following the same procedure as above, we are led to the two point boundary value problem given below.

    u_n' - λ_n u_n = g(u(t)),    u_n(0) = 0,

    v_n' + λ_n v_n = -g(u(t)),    v_n(1) = 0.
    u(t) = f(t) + Σ_{n=1}^N (u_n(t) + v_n(t)).
8.1 INTRODUCTION
In this chapter we wish to apply approxima-
tion techniques to the study of one of the
fundamental equations of mathematical analysis,
the first order nonlinear ordinary differential
equation.
    u' + u^2 + p(t) u + q(t) = 0,
    u(0) = c,
known as the Riccati equation. This equation
plays a fundamental role in the analysis of so-
lutions of the second order linear differential
equation,
w" + a(t) w' + b(t) w = 0,
w(O) = a,
w(L) = b,
and thus occupies an important position in
quantum mechanics in connection with the Schro-
dinger equation. Furthermore, together with its
multidimensional analogue, it enjoys a central
place in dynamic programming and invariant em-
bedding.
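The connection with the second order linear equation comes from the standard substitution u = w'/w, which turns w'' + p(t)w' + q(t)w = 0 into u' + u^2 + p(t)u + q(t) = 0. A quick numerical check of this link (our own, not from the text; we take p = 3, q = 2, for which w = e^{-t} + e^{-2t} is an exact solution):

```python
import math

def w(t):
    # Exact solution of w'' + 3w' + 2w = 0.
    return math.exp(-t) + math.exp(-2.0 * t)

def u(t):
    # u = w'/w for the w above.
    return (-math.exp(-t) - 2.0 * math.exp(-2.0 * t)) / w(t)

def riccati_residual(t, h=1e-5):
    # Central-difference derivative of u, plugged into the Riccati form.
    du = (u(t + h) - u(t - h)) / (2.0 * h)
    return du + u(t) ** 2 + 3.0 * u(t) + 2.0
```

The residual vanishes to the accuracy of the finite difference, confirming that u = w'/w satisfies the Riccati equation with the same coefficients.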
    w_1'(0) = 0,    w_2'(0) = 1,

then, by virtue of the linearity of Eq.(8-1), every solution can be represented as a linear combination,

    w(t) = c_1 w_1(t) + c_2 w_2(t).

    b = c_1 w_1(L) + c_2 w_2(L),

and

    w(t) = a w_1(t) + ((b - a w_1(L))/w_2(L)) w_2(t).
w(O) = w(L) = O.
u(O) = c.
We can write Eq.(8-5) in the form

    du/dt = a(t) u + f(t) - p(t),        (8-6)

    u(0) = c,

where p(t) > 0 for t > 0.
Let the function v(t) be the solution of

    dv/dt = a(t) v + f(t).
Multiplying Eq.(8-10) by the integrating factor

    e^{-∫_0^t f(s) ds},

we can solve by direct integration,

    d/dt (v e^{-∫_0^t f(s) ds}) = g(t) e^{-∫_0^t f(s) ds},

    v(t) = c e^{∫_0^t f(s) ds} + ∫_0^t g(s) e^{∫_s^t f(r) dr} ds.        (8-11)
Since f(s) = -(p(s) + 2u(s)) and g(s) = (u^2(s) - q(s) - r(s)), substituting these into Eq.(8-11) and noting that r(t) > 0 for t > 0, we can readily see that,

    v ≤ w.
And therefore,

    v(t) = min_u ( c e^{-∫_0^t (2u(s)+p(s)) ds}
         + ∫_0^t e^{-∫_s^t (2u(r)+p(r)) dr} (u^2(s) - q(s)) ds ).        (8-12)

Thus we have been able to obtain an analytic solution to the Riccati equation in terms of a minimum operation.
    v_1(0) = c.
    u' = -u + u^2,    u(0) = c,

where |c| << 1. A corresponding equation for u^2 is,

    (u^2)' = 2uu'
            = 2u(-u + u^2)
            = -2u^2 + 2u^3,

    u^2(0) = c^2.
Writing,

    u_1 = u,
    u_2 = u^2,

we have the system

    u_1' = -u_1 + u_2,    u_1(0) = c,

    u_2' = -2u_2 + 2u_1^3,    u_2(0) = c^2.
    v_2' = -2v_2,    v_2(0) = c^2.
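Dropping the 2u_1^3 term closes the system: v_2' = -2v_2 gives v_2 = c^2 e^{-2t} at once, and v_1' = -v_1 + v_2, v_1(0) = c (our assumed form of the first approximating equation) is then solved by an integrating factor. A closed-form comparison with the exact Bernoulli solution (our own check, with c = 0.1):

```python
import math

c = 0.1

def exact(t):
    # Bernoulli solution of u' = -u + u^2, u(0) = c.
    return c * math.exp(-t) / (1.0 - c + c * math.exp(-t))

def approx(t):
    # v2 = c^2 e^{-2t}; then v1' = -v1 + v2, v1(0) = c, via integrating factor.
    return c * math.exp(-t) + c * c * (math.exp(-t) - math.exp(-2.0 * t))
```

For |c| << 1 the discrepancy is O(c^3), so with c = 0.1 the two curves agree to a few parts in 10^4.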
    R^2 ≥ S^2 + SR + RS - 2S^2
        = SR + RS - S^2,

for all symmetric matrices S, with equality if and only if R = S. Thus Eq.(8-13) leads to the equation,

    R' ≤ A + S^2 - SR - RS,    R(0) = I.
where Y = F(S,t).
Now, it can be shown that the matrix equation,

    X' = A(t) X + X B(t) + F(t),    X(0) = C,

may be written,

    X = Y_1 C Y_2 + ∫_0^t Y_1(t) Y_1^{-1}(s) F(s) Y_2^{-1}(s) Y_2(t) ds,        (8-15)

where

    Y_1' = A(t) Y_1,    Y_1(0) = I,
    Y_2' = Y_2 B(t),    Y_2(0) = I.
    G(S_1,t) ≤ R ≤ F(S_2,t),

for t > 0, where S_1(t) and S_2(t) are arbitrary positive definite matrices in the interval (0,T).
    ∂f/∂a = (C, B(a)C) - (grad f, grad f)/4,

subject to f(C,T) = 0.
We now employ the fact that,

    f(C,a) = (C, R(a)C),

as seen from the associated Euler equation. We see that R satisfies the matrix Riccati equation.
9.1 INTRODUCTION
In this chapter we wish to explore some of
the ideas surrounding approximate equations.
The basic idea is the following: if we are con-
fronted with a nonlinear differential equation
whose solution is unknown, then we would like
to replace this equation with a set, one or
more, of approximating equations whose solu-
tions are known. Our goal is to see if we can
obtain an approximate solution to the nonlinear
system by exact solutions to the approximating
equation. Since we are, in fact, approximat-
ing one differential equation by a set of oth-
ers, this chapter considers the interrelations
between them.
We wish to state at the outset that nonlinear equations are notoriously difficult to solve in general; we intend to pick our way carefully, and we may skip some of the more interesting details.
    + w_2 (u_2'' + a(t) u_2' + b(t) u_2)
    + (w_1' u_1' + w_2' u_2').
    w_1' u_1' + w_2' u_2' = g(t),

    w_1' u_1 + w_2' u_2 = 0.

Now we can solve for w_1' and w_2', giving

    w_1' = -g(t) u_2 / (u_2' u_1 - u_1' u_2),

    w_2' = g(t) u_1 / (u_2' u_1 - u_1' u_2).        (9-5)
    = -a(t) W(t),

and

    W(t) = W(0) e^{-∫_0^t a(s) ds}.

Further, if u_1 and u_2 are chosen so that

    u_1(0) = 1,    u_1'(0) = 0,
    u_2(0) = 0,    u_2'(0) = 1,

    u(t) = c_1 u_1 + c_2 u_2
         + ∫_0^t g(s) e^{∫_0^s a(r) dr} (u_1(s) u_2(t) - u_1(t) u_2(s)) ds.
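As a concrete check of the solution formula (our own, not from the text), take a(t) = 0 and b(t) = 1, so that u_1 = cos t, u_2 = sin t and the kernel reduces to sin(t - s); with g = 1 and c_1 = c_2 = 0 the formula must reproduce the solution 1 - cos t of u'' + u = 1:

```python
import math

def particular(t, g, n=2000):
    # Trapezoidal evaluation of the variation-of-parameters integral
    # ∫_0^t g(s) (u1(s) u2(t) - u1(t) u2(s)) ds with u1 = cos, u2 = sin.
    h = t / n
    def f(s):
        return g(s) * (math.cos(s) * math.sin(t) - math.cos(t) * math.sin(s))
    total = 0.5 * (f(0.0) + f(t))
    for k in range(1, n):
        total += f(k * h)
    return total * h
```

The same routine with any other forcing g gives the corresponding particular solution, since the homogeneous pieces are fixed by the initial data.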
9.4 DISCUSSION
The simple classical analysis shown above has produced analytic solutions for general first and second order linear differential equations in terms of the system parameter functions, the initial conditions and the forcing functions.
    u(t) = u_0(t) + ε u_1(t) + ε^2 u_2(t) + ...,        (9-7)

and let the initial conditions hold uniformly in ε, so,

    u_0(0) = c_1,    u_0'(0) = c_2,

    u_i(0) = 0,    u_i'(0) = 0,    i = 1,2,....
so that

    d/ds = (d/dt)(dt/ds)
         = (d/dt)(1 + c_1 ε + c_2 ε^2 + ...).

We can now write,

    d^2u/ds^2 + ε(u^2 - 1)(1 + c_1 ε + c_2 ε^2 + ...)(du/ds)
        + (1 + c_1 ε + c_2 ε^2 + ...)^2 u = 0.        (9-10)
    + (a(a^2/4 - 1) sin s + (a^3/4) sin 3s).        (9-12)
    φ = ∫_0^{2π} ((u^2 - 1)u' - a_1 u - a_2 u')^2 dt,

and we seek a_1 and a_2 so as to minimize the error φ. Therefore,

    ∫_0^{2π} ((u^2 - 1)u' - a_1 u - a_2 u') u dt = 0,        (9-15)

and

    ∫_0^{2π} ((u^2 - 1)u' - a_1 u - a_2 u') u' dt = 0.        (9-16)

    ∫_0^{2π} (u^2 - 1) u u' dt = 0,        (9-17)

and

    ∫_0^{2π} u u' dt = 0.

It follows from Eq.(9-15) that a_1 = 0 and from Eq.(9-16),
    a_2 = ∫_0^{2π} (u^2 - 1) u'^2 dt / ∫_0^{2π} u'^2 dt.        (9-18)

Therefore the approximate equation for Eq.(9-14) is,

    u'' + ε a_2 u' + u = 0.        (9-19)
2
T 2 2
lim J « u -1) u' - a
u - au') dt.
012
as T --> 00
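Evaluating Eq.(9-18) for the trial function u = A cos t gives a_2 = A^2/4 - 1, so the damping coefficient vanishes exactly at A = 2, the familiar Van der Pol limit-cycle amplitude. A numerical check (our own sketch):

```python
import math

def a2(A, n=4000):
    # Both integrals of Eq.(9-18) over one period, for u = A cos t.
    # Left-endpoint sums are spectrally accurate on periodic integrands.
    h = 2.0 * math.pi / n
    num = den = 0.0
    for k in range(n):
        t = k * h
        up = -A * math.sin(t)                         # u'
        num += ((A * math.cos(t)) ** 2 - 1.0) * up * up * h
        den += up * up * h
    return num / den
```

Carrying out the integrals by hand confirms the pattern: a2(1) = -3/4 (damping), a2(2) = 0 (neutral), and amplitudes above 2 give positive a_2.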
    u_k' = k u^{k-1} u' = k u^{k-1}(-u + u^2),

or

    u_k' = -k u_k + k u_{k+1},
    s = ∫_0^t a(r) dr,        (9-22)

then since,

    u' = du/dt = (du/ds)(ds/dt) = a(t) du/ds,

Eq.(9-21) becomes,

    d^2u/ds^2 + (a'(t)/a^2(t)) du/ds + u = 0,        (9-23)

and if we further let,
    u = e^{-(1/2)∫_0^t p(r) dr} v,

this reduces the equation to the form,

    v'' + (q(t) - (1/2) p'(t) - (1/4) p^2(t)) v = 0,

thus eliminating the middle term in Eq.(9-24).
Further, if we set

    u = e^{-(1/2)∫_0^s (a'(r)/a^2(r)) dr} v,        (9-25)

    v'' + (1 - (1/2) d/ds (a'(s)/a^2(s)) - (1/4)(a'(s)/a^2(s))^2) v = 0.        (9-26)
Since,

    ∫_0^s (a'(r)/a^2(r)) dr = ∫_0^t (a'(r)/a^2(r))(ds/dr) dr
                            = ∫_0^t (a'(r)/a(r)) dr
                            = log a(t),

the transformation Eq.(9-25) reduces to

    u = v / a(t)^{1/2}.
    r' = (1 + b(t)) - r^2.        (9-28)

We now wish to modify Eq.(9-28) in such a way that it has the exact solution

    r = 1 + b/2.

    r' = (1 + b + b^2/4 + b'/2) - r^2,
    u' = 1 + b(t) u,    u(0) = 0,

and consider the error measurement of the two systems to be,

    ψ = ∫_0^{π/2} ((u' - (1 + u^2))^2 + (u' - (1 + b(t) u))^2) dt.        (9-31)
Since we know the general behavior of Eq.(9-30), its behavior could be represented by a simple polynomial, so we let

    u = a_1 t + a_2 t^2,        (9-32)

    b(t) = b_0 + b_1 t + b_2 t^2,

where we have satisfied the initial condition u(0) = 0. Putting Eq.(9-32) into Eq.(9-31) and minimizing over the parameters a_1, a_2, b_0, b_1 and b_2, we are led to a nonlinear algebraic equation,

    ∂ψ/∂c_i = 0,        (9-33)

where c_i runs over the parameter set.
    ψ = Σ_{i=1}^N ψ(t_i),

where in each interval we solve for the parameter set (a_1, a_2, b_0, b_1, b_2). To ensure continuity of the solution at the end points,

    u^i = a_0^i + a_1^i (t - t_i) + a_2^i (t - t_i)^2,

where

    a_0^i = a_0^{i-1} + a_1^{i-1}(t_i - t_{i-1}) + a_2^{i-1}(t_i - t_{i-1})^2,

    a_0^0 = 0.
Let E(t_i,t_j) be the error in approximating the nonlinear equation over the interval (t_i,t_j),

    E(t_i,t_j) = ∫_{t_i}^{t_j} ((u' - (1 + u^2))^2 + (u' - (1 + b(t) u))^2) dt.

Then it must be true that,

    F_N(t_i) = min_{t_j < t_i} (E(t_j,t_i) + F_{N-1}(t_j)),        (9-34)

    F_N(0) = 0,    N = 1,2,....
    F_0(t_j) = ∫_0^{t_j} ((u' - (1 + u^2))^2 + (u' - (1 + b(t) u))^2) dt.
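The recursion Eq.(9-34) can be carried out on a discrete grid of candidate switching times. In the sketch below (ours) the segment error E is a stand-in — squared interval length — rather than the integral above; this keeps the example self-contained while exercising the same recursion:

```python
def optimal_cost(grid, N, E):
    # F_0(t) is the cost of covering (0, t) with a single interval;
    # each pass of the loop allows one more switching time, as in Eq.(9-34).
    F = {t: E(grid[0], t) for t in grid}
    for _ in range(N):
        F = {t: (min(E(s, t) + F[s] for s in grid if s < t)
                 if t > grid[0] else 0.0)
             for t in grid}
    return F
```

With E(a,b) = (b - a)^2 on the grid 0, 0.25, ..., 1.0, one switch is best placed at t = 0.5, giving total cost 0.5 against 1.0 for no switch — the same trade-off the text's segmented fit exploits.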
The goal of the calculation, in the example, is to compute F_N(π/2) using the iterative scheme Eq.(9-34). As a byproduct of the computation, the N switching times, and the approximations given by b_0, b_1, b_2 in each interval, are determined.
But Eq.(9-34) is just the Fundamental Equation of Dynamic Programming discussed in Chapter 2. This is a rich field of modern applied mathematics where many applications occur, and further study through the references given at the end of the chapter is recommended.
9.11 DISCUSSION
The solution of approximate equations is an interesting idea. In this philosophy we are willing to forego an accurate solution to a nonlinear differential equation and accept the exact solution of an approximate equation in its place. The conditions under which this compromise is useful are numerous. Fitting complex systems into more tractable forms is especially useful in the analysis and design of practical systems, where we are more interested in gross behavior than in detailed understanding.
While the perturbation techniques discussed earlier in the chapter allow us to do this precisely, the later analytical and numerical methods tend to give us more freedom. Here we are allowed to select the gross behavior we are most interested in and determine its effects on the system.
10.1 INTRODUCTION
We have waited until the last chapter to discuss, in detail, a problem using the finite element method as an approximation technique. The problem with which we will be concerned is the determination of the magnetic field arising from a set of permanent magnets arbitrarily placed in a space containing material of varying permeability.
We have chosen to solve this problem using finite elements following the work of Zienkiewicz, with one exception. We have introduced a 27 node brick formulation instead of the standard 20 node model. This means that within each solid element a quadratic variation is allowed, which has been described in Chapter 2 and is motivated by the need to have a complete set of polynomials with which to work. Chapter 1 discusses the necessity of having a complete set as a prerequisite of a good approximation.
Finally, the finite element method can also be used to describe the magnets themselves, making the method not only suitable as a technique for solving difficult three dimensional problems successfully, but also as a way of defining the forcing environment.
    B(r) = (μ/4π) ∫_{v'} (J × (r - r'))/|r - r'|^3 dv',
    |H|^2 = H_x^2 + H_y^2 + H_z^2,
    + H_z ∂δφ/∂z
    = H · ∇δφ.

Hence,

    δχ = ∫_v μ H · ∇δφ dv - ∫_s B_n δφ ds,

    -∫_v ∇·(μH) δφ dv + ∫_s (μ H_n - B_n) δφ ds = 0.        (10-7)
Since Eq.(10-7) must hold for any δφ defined in v and on s, then,

    ∇·(μH) = 0 in v,    μ H_n = B_n on s,

or

    -∫_v ∇·(μ(H_p + ∇φ)) δφ dv + ∫_s (μ(H_p·n + ∂φ/∂n) - B_n) δφ ds = 0.        (10-10)
    Φ = Σ_i Φ_i h_i,        (10-12)

and therefore
    ∇Φ = Σ_i Φ_i ∇h_i.

Now referring back to Eq.(10-10), we have

    -∫_s B_n Σ_i δΦ_i h_i ds = 0,

or

    (Σ_i Φ_i ∫_v ∇h_j · μ ∇h_i dv + ∫_v μ H_p · ∇h_j dv) = 0,

    d_ij = ∫_v μ ∇h_i · ∇h_j dv,

    f_j = ∫_v μ H_p · ∇h_j dv,
where

    [∂x/∂ξ  ∂x/∂η  ∂x/∂ζ]
    φ = Σ_i φ_i h_i,

    H_m = ∇φ,

    H = H_p + H_m,

    H_p = (1/4π) ∫_{v'} (J × (r - r'))/|r - r'|^3 dv',
where J is the current vector density in the conductor v', r is the position vector of the point where H_p is to be evaluated, and r' is the position vector to dv'.
The finite element approximation is in the form of a matrix equation,

    K Φ + g = 0,        (10-14)
where

    K_ij = ∫_v μ ∇f_i · ∇f_j dv,

    g_i = ∫_v μ ∇f_i · H dv.

Here v is the volume of the problem and f_i is the set of 27 orthogonal functions for the new 27 node finite element brick formulation.
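As a one dimensional analogue of these entries (our own illustration, not the 27 node brick of the text), the same assembly applied to piecewise-linear hat functions on a uniform grid of [0,1] produces the familiar tridiagonal stiffness matrix:

```python
def stiffness_1d(n, mu=1.0):
    # K_ij = ∫ mu f_i'(x) f_j'(x) dx for hat functions on n uniform elements.
    h = 1.0 / n
    K = [[0.0] * (n + 1) for _ in range(n + 1)]
    for e in range(n):
        # On element e the two nonzero hats have constant slopes -1/h and +1/h,
        # so each elemental integral is just (slope * slope) * h.
        local = ((e, -1.0 / h), (e + 1, 1.0 / h))
        for i, si in local:
            for j, sj in local:
                K[i][j] += mu * si * sj * h
    return K
```

The element-by-element loop is the essential point: in three dimensions the brick elements contribute to K in exactly the same accumulate-per-element fashion, with the slope products replaced by the gradient products of Eq.(10-14).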
We note that in Eq.(10-14),

    ∇f_i · ∇f_j = (∂f_i/∂x)(∂f_j/∂x) + (∂f_i/∂y)(∂f_j/∂y) + (∂f_i/∂z)(∂f_j/∂z),

    ∇f_i · H = (∂f_i/∂x) H_x + (∂f_i/∂y) H_y + (∂f_i/∂z) H_z,
    y = Σ_i y_i f_i(ξ,η,ζ),

    z = Σ_i z_i f_i(ξ,η,ζ),

    dv = |J| dξ dη dζ,
and

    [∂f_i/∂x, ∂f_i/∂y, ∂f_i/∂z]^T = (J^T)^{-1} [∂f_i/∂ξ, ∂f_i/∂η, ∂f_i/∂ζ]^T,

where J is the Jacobian matrix.
The forcing vector H_p is referred to the unit cube in the following way,

    H_p = H_x i + H_y j + H_z k,

and

    H_x = Σ_i a_i f_i(ξ,η,ζ),
    H_y = Σ_i b_i f_i(ξ,η,ζ),
    H_z = Σ_i c_i f_i(ξ,η,ζ),
    g_i = ∫ ((∂f_i/∂x) Σ_j a_j f_j + (∂f_i/∂y) Σ_j b_j f_j + (∂f_i/∂z) Σ_j c_j f_j) |J| dξ dη dζ.
    ∇ · H_p,

within a volume V bounded by a surface S. Theoretically this partial differential equation requires boundary conditions of the form,

    ∂Φ/∂n specified on S.
    δΩ = ∫_v μ |H| δ|H| dv - ∫_{S_b} B_n δφ ds.

Since φ is known on S_φ, its variation over that surface is zero and contributes nothing to the total variation. From the definition of |H|, we see,

    |H| δ|H| = H_x δH_x + H_y δH_y + H_z δH_z,

but by definition, H = H_p + ∇φ and δH = ∇δφ. Therefore,

    δΩ = ∫_v μ H · ∇δφ dv - ∫_{S_b} B_n δφ ds = 0.        (10-16)
Let us say a word about Eq.(10-16). The field H is assumed to be the true but unknown vector field in v, and δφ is an arbitrary continuous function defined on V and zero on S_φ. The true vector field is H = H_p + ∇φ, where H_p is known everywhere and the scalar potential is known only on S_φ. In the problems to which we wish to apply the finite element method, the space of interest is embedded with finite volumes of magnetic material. Hence μ, the permeability, is discontinuous on certain surfaces throughout v. Equation (10-16) states that if H is the true field, then for any function δφ satisfying δφ = 0 on S_φ, δΩ = 0. This allows us to construct the finite element method. However this
10.9 DISCUSSION
The finite element approximation affords us a neat way to solve complex problems in partial differential equations. These techniques are being successfully applied to a large class of problems including, among others, elasticity, plasticity, heat transfer and fluid mechanics.
The finite element technique is truly a modern method of approximation, having roots squarely in the Galerkin procedure. However, like all approximation methods, care must be taken in applying it. Defining the finite element model to include all the needed flexibility is by no means straightforward. Yet careful construction can lead to impressive results, leading us to conclude that the final product is well worth the effort.