DANIEL CHIN
UNIVERSITY OF CALGARY
MECH ENGG
(DACHIN@UCALGARY.CA)
Lecture notes from Yani's (Pouyan Jazayeri) lectures, winter 2010 at the University of Calgary, from classes dealing with physical mechanical systems etc. I hope that during this learning process with LaTeX, I will be able to more efficiently type my notes, especially mathematical formulas.
Last updated: October 4, 2010: Finished!
Contents
2.2. Review on how binary works (not tested in 2009-2010). Binary is a base 2 number system.
-if we have 16 bits for storing signed integers, the largest signed number we can store is:
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(a sign bit of 0 followed by fifteen 1s, i.e. $2^{15} - 1 = 32767$)
3. Truncation
Friday January 15 2010
Machine epsilon: parameter used to indicate the level of precision offered by a computer with IEEE 754 format. For doubles, machine epsilon is $2^{-52} \approx 2.22 \times 10^{-16}$.
This refers to the smallest gap between 1 and the next number we can represent.
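Machine epsilon can be found by repeatedly halving a candidate gap until adding it to 1 no longer changes the result; a minimal sketch:

```python
import sys

# Machine epsilon for IEEE 754 doubles: the gap between 1.0 and the
# next representable number, which is 2**-52.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                              # 2.220446049250313e-16
print(eps == 2**-52)                    # True
print(eps == sys.float_info.epsilon)    # True
```

The loop stops exactly when `1.0 + eps/2` rounds back to `1.0`, which happens at $2^{-52}$.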
Significant digits are the digits of a number that can be used with confidence. To remove uncertainty we use scientific notation to show trailing zeroes.
#                 Sig. digits
5000              1, 2, 3 or 4 (ambiguous)
5 x 10^3          1
5.000 x 10^3      4
5008              4
5008.5            5
0.005             1
Truncation Errors are the result of using an approximation instead of an exact mathematical formula.
Ex:
$$e^x = 1 + x + \frac{x^2}{2!} + \dots + \frac{x^n}{n!} = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
If we compute $e^{1.2}$, approximating with the first three terms we have
$$e^{1.2} \simeq 1 + (1.2) + \frac{(1.2)^2}{2!} = 2.92$$
Adding the fourth term:
$$e^{1.2} \simeq 1 + (1.2) + \frac{(1.2)^2}{2!} + \frac{(1.2)^3}{3!} = 3.208$$
$E_A$, the relative approximation error:
$$E_{A1} = \frac{|3.208 - 2.92|}{3.208} \cdot 100\% = 8.98\%$$
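The partial sums above can be checked directly; a minimal sketch:

```python
import math

# Partial sums of the Taylor series e^x = sum x^n / n!, evaluated at x = 1.2.
def exp_partial(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

three = exp_partial(1.2, 3)    # 1 + 1.2 + 1.2^2/2!  = 2.92
four = exp_partial(1.2, 4)     # ... + 1.2^3/3!      = 3.208
rel_err = abs(four - three) / four * 100   # relative approximation error, %

print(three, four, round(rel_err, 2))
```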
Another example is the derivative of a function, which at point $x$ is defined as:
$$f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$
Using a finite $\Delta x$ instead of the limit gives the approximation
$$f'(x) \simeq \frac{f(x + \Delta x) - f(x)}{\Delta x}$$
Using the first six terms of the Taylor series approximation we can find $\sin(\pi/3)$:
$$x_i = 0, \qquad x_{i+1} = \frac{\pi}{3}, \qquad h = \frac{\pi}{3} - 0 = \frac{\pi}{3}$$
$$f\!\left(\frac{\pi}{3}\right) = 0 + 1\left(\frac{\pi}{3}\right) + \frac{0}{2!}\left(\frac{\pi}{3}\right)^2 - \frac{1}{3!}\left(\frac{\pi}{3}\right)^3 + \frac{0}{4!}\left(\frac{\pi}{3}\right)^4 + \frac{1}{5!}\left(\frac{\pi}{3}\right)^5 + (H.O.T.)$$
Where H.O.T. is higher order terms. If we drop the higher order terms (as Yani says, drop it like it's hot¹) we then have truncation error and the Taylor series approximation:
$$f\!\left(\frac{\pi}{3}\right) \simeq \frac{\pi}{3} - \frac{(\pi/3)^3}{3!} + \frac{(\pi/3)^5}{5!} = 0.866295$$
While our exact value is $\frac{\sqrt{3}}{2} = 0.866025$.
Therefore our truncation error $\simeq 2.69 \times 10^{-4}$
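The same six-term approximation can be sketched in a few lines:

```python
import math

# Maclaurin series for sin(x): x - x^3/3! + x^5/5! - ...
# Keeping terms through x^5 (the "first six terms", most of which are zero).
def sin_approx(x):
    return x - x**3 / math.factorial(3) + x**5 / math.factorial(5)

x = math.pi / 3
approx = sin_approx(x)             # ~0.866295
exact = math.sin(x)                # sqrt(3)/2 ~ 0.866025
trunc_err = abs(approx - exact)    # ~2.7e-4

print(approx, exact, trunc_err)
```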
¹ Actual quotation.
$$f(x_{i+1}) = \sum_{k=0}^{\infty} f^{(k)}(x_i)\frac{h^k}{k!}$$
$$f(x_{i+1}) = \underbrace{\underbrace{\underbrace{f(x_i)}_{\text{Zero Order Approximation}} + f'(x_i)h}_{\text{First Order Approximation}} + f''(x_i)\frac{h^2}{2!}}_{\text{Second Order Approximation}} + f'''(x_i)\frac{h^3}{3!} + \dots$$
The induced truncation error (E) is the portion of the Taylor series that we did not account for. For the zero order approximation this is as follows:
$$E = f'(x_i)h + f''(x_i)\frac{h^2}{2!} + \dots$$
4.2. First Order Approximation.
We now take into account the first derivative of the function (we add the slope):
$$f(x_{i+1}) \simeq f(x_i) + f'(x_i)h$$
Again the induced truncation error (E) is the portion of the Taylor series that we truncated (ignored):
$$E = f''(x_i)\frac{h^2}{2!} + \dots$$
4.3. Improving the accuracy of taylor series approximation. We have two
different ways of increasing the accuracy of our taylor series approximation:
- increase number of terms (higher order of approximation, higher order derivatives)
- smaller step size (h)
4.4. Maclaurin series. For expansions about the initial point $x_i = 0$.
ex. $f(x) = \sin(x)$, $x_i = 0$, $h = x_{i+1} - x_i = x_{i+1}$
We know from trigonometry that $\sin'(x) = \cos(x)$ and $\cos'(x) = -\sin(x)$:
$$f(0) = 0, \quad f'(0) = 1, \quad f''(0) = 0, \quad f^{(3)}(0) = -1, \quad f^{(4)}(0) = 0, \quad f^{(5)}(0) = 1$$
We can already see that because of trigonometric patterns, we have a much simplified solution.
If we now replace $x_{i+1}$ with $x$ we have the following formula, the formula that calculators in fact use to calculate the value of $\sin$:
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$$
4.5. General Taylor Series information. If only $n$ derivatives of a function are available then
$$f(x_{i+1}) \simeq f(x_i) + f'(x_i)h + \dots + f^{(n)}(x_i)\frac{h^n}{n!}$$
$$f(x_{i+1}) = f(x_i) + f'(x_i)h + \dots + f^{(n)}(x_i)\frac{h^n}{n!} + R_n$$
Where $R_n$ is the remainder term, or truncation error.
Due to the first term of the truncated part of the Taylor series, the closed form expression for $R_n$ can be expressed as:
$$R_n = \frac{f^{(n+1)}(c)\,h^{n+1}}{(n+1)!}$$
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$$
$x_i = 0$ and $x_{i+1} = x$.
-Use the fourth order Taylor approximation to estimate $e^2$
-Use the remainder expression to find the bounds on the truncation error
$$e^2 = 1 + 2 + \frac{4}{2!} + \frac{8}{3!} + \frac{16}{4!} + R_4 \simeq 1 + 2 + 2 + \frac{4}{3} + \frac{16}{24} = 7$$
$$|R_4| \le k|h^5|, \qquad R_4 = \frac{f^{(5)}(c)\,h^{5}}{5!} = \frac{e^c \cdot 2^5}{5!}$$
Since $h = x_{i+1} - x_i = 2 - 0 = 2$ and the 5th derivative of $e^x$ is $e^x$:
$$R_4 = \frac{4}{15}e^c$$
Where: $x_i < c < x_{i+1}$, i.e. $0 < c < 2$,
and since $e^c$ increases continuously between 0 and 2, our error bounds are:
$$\frac{4}{15}e^0 = \frac{4}{15} \le R_4 \le \frac{4}{15}e^2 = 1.97$$
$$\frac{4}{15} < R_4 < 1.97$$
$$f(x_{i+1}) = f(x_i) + f'(x_i)h + f''(x_i)\frac{h^2}{2!} + f'''(x_i)\frac{h^3}{3!} + \dots$$
Solving for $f'(x_i)$ and dropping the higher order terms gives the Forward Difference Formula:
$$f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_i)}{h}$$
The truncation error is $O(h)$.
Backward Difference formula
-1st backwards Taylor series expansion (note that $x_{i+1}$ is replaced with $x_{i-1}$):
$$f'(x_i) \simeq \frac{f(x_i) - f(x_{i-1})}{h}$$
Central Difference formula:
$$f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_{i-1})}{2h}$$
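The three difference formulas are one-liners; a sketch using $f(x) = e^x$ at $x = 1$, where the exact derivative is also $e$:

```python
import math

# Finite-difference approximations to f'(x) for f(x) = e^x at x = 1.
def forward(f, x, h):   return (f(x + h) - f(x)) / h          # O(h)
def backward(f, x, h):  return (f(x) - f(x - h)) / h          # O(h)
def central(f, x, h):   return (f(x + h) - f(x - h)) / (2*h)  # O(h^2)

f, x, h = math.exp, 1.0, 0.1
exact = math.exp(1.0)
err_fwd = abs(forward(f, x, h) - exact)
err_bwd = abs(backward(f, x, h) - exact)
err_cen = abs(central(f, x, h) - exact)
print(err_fwd, err_bwd, err_cen)
# The central-difference error is far smaller, reflecting its O(h^2) accuracy.
```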
Example (on tabulated data):
$$f'(x_1) \simeq \frac{f(x_2) - f(x_1)}{x_2 - x_1} = 4.172 \quad \text{(Forward Difference)}$$
$$f'(x_1) \simeq \frac{f(x_2) - f(x_0)}{x_2 - x_0} = 4.0355 \quad \text{(Central Difference)}$$
From
$$f(x_{i+2}) \simeq f(x_i) + f'(x_i)(2h) + \frac{f''(x_i)(2h)^2}{2!}$$
And:
$$f(x_{i+1}) \simeq f(x_i) + f'(x_i)h + \frac{f''(x_i)h^2}{2!}$$
combining gives the second derivative formulas:
$$f''(x_i) \simeq \frac{f(x_{i+2}) - 2f(x_{i+1}) + f(x_i)}{h^2} \quad \text{(Forward Difference)}$$
$$f''(x_i) \simeq \frac{f(x_i) - 2f(x_{i-1}) + f(x_{i-2})}{h^2} \quad \text{(Backwards Difference)}$$
$$f''(x_i) \simeq \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2} \quad \text{(Central Difference)}$$
Therefore: $f(x_1, x_2, \dots, x_n) = f(\bar{x})$, where $\bar{x} = (x_1, x_2, \dots, x_n)^T$.

1-Dimensional vs N-Dimensional:
- Function Variable: $x$ vs $\bar{x} = (x_1, x_2, \dots, x_N)^T$
- Expansion Point: $x_i$ vs $\bar{x}_i = (x_{1i}, x_{2i}, \dots, x_{Ni})^T$
- Step Size: $h = x_{i+1} - x_i$ vs $\bar{h} = \bar{x}_{i+1} - \bar{x}_i = \left(x_{1(i+1)} - x_{1i},\; x_{2(i+1)} - x_{2i},\; \dots,\; x_{N(i+1)} - x_{Ni}\right)^T$
- First Derivative: $f'(x)$ vs $f'(\bar{x}_i) = \left(\dfrac{\partial f(\bar{x})}{\partial x_1},\; \dots,\; \dfrac{\partial f(\bar{x})}{\partial x_N}\right)^T = J(\bar{x})$

The first derivative of a multi-dimensional function is called the Jacobian.
$J(\bar{x}_i)$ = Jacobian of the function $f(\bar{x})$ at $\bar{x} = \bar{x}_i$
The Taylor series expansion at this point for a multi-dimensional function is:
$$f(x_{i+1}) \simeq f(x_i) + f'(x_i)h \qquad \text{(1-D)}$$
$$f(\bar{x}_{i+1}) \simeq f(\bar{x}_i) + J(\bar{x}_i)^T\bar{h} \qquad \text{(N-D)}$$
Where $T$ represents transposition.
Example: $f(\bar{x}) = 3x_1 + \sin(x_2) + 2x_2x_3 + 1$:
$$J(\bar{x}) = \begin{pmatrix} 3 \\ \cos(x_2) + 2x_3 \\ 2x_2 \end{pmatrix}$$
And at point $\bar{x}_i = \bar{0}$:
$$J(\bar{x}_i) = \begin{pmatrix} 3 \\ \cos(0) \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}$$
With $\bar{h} = (0.1,\, 0.1,\, 0.1)^T$:
$$f(\bar{x}_{i+1}) \simeq f(\bar{x}_i) + J(\bar{x}_i)^T\bar{h} \simeq 1 + \begin{pmatrix} 3 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \end{pmatrix} \simeq 1.4$$
We will now try the second order approximation.
-We need to determine the second order derivative for the function $f(\bar{x})$:
$$\begin{pmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\partial x_2} & \dots & \frac{\partial^2 f}{\partial x_1\partial x_N} \\
\frac{\partial^2 f}{\partial x_2\partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \dots & \frac{\partial^2 f}{\partial x_2\partial x_N} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_N\partial x_1} & \frac{\partial^2 f}{\partial x_N\partial x_2} & \dots & \frac{\partial^2 f}{\partial x_N^2}
\end{pmatrix} = H(\bar{x}), \quad \text{where } H \text{ is called the Hessian.}$$
The Hessian also can be written as:
$$H(\bar{x}) = \left(\frac{\partial J(\bar{x})}{\partial x_1}\;\; \frac{\partial J(\bar{x})}{\partial x_2}\;\; \dots\;\; \frac{\partial J(\bar{x})}{\partial x_N}\right)$$
The second order Taylor approximation is then:
$$f(\bar{x}_{i+1}) \simeq f(\bar{x}_i) + J(\bar{x}_i)^T\bar{h} + \frac{\bar{h}^T H(\bar{x}_i)\,\bar{h}}{2!}$$
Note that as $\bar{h} \to 0$ we may get round off errors and increase total error.
Example: use the second order approximation to estimate $f(\bar{x}_{i+1})$ for $f(\bar{x}) = 3x_1 + \sin(x_2) + 2x_2x_3 + 1$:
$$\bar{x}_i = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \quad \bar{h} = \begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \end{pmatrix}, \quad f(\bar{x}_i) = 1$$
$$J(\bar{x}) = \begin{pmatrix} 3 \\ \cos(x_2) + 2x_3 \\ 2x_2 \end{pmatrix}, \qquad J(\bar{x}_i) = J(\bar{0}) = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}$$
$$H(\bar{x}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -\sin(x_2) & 2 \\ 0 & 2 & 0 \end{pmatrix}, \qquad H(\bar{x}_i) = H(\bar{0}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 2 & 0 \end{pmatrix}$$
Therefore:
$$f(\bar{x}_{i+1}) \simeq 1 + \begin{pmatrix} 3 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \end{pmatrix} + \frac{1}{2}\begin{pmatrix} 0.1 & 0.1 & 0.1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 2 & 0 \end{pmatrix}\begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \end{pmatrix}$$
$$\simeq 1 + 0.4 + \frac{1}{2}\begin{pmatrix} 0.1 & 0.1 & 0.1 \end{pmatrix}\begin{pmatrix} 0 \\ 0.2 \\ 0.2 \end{pmatrix} \simeq 1 + 0.4 + \frac{1}{2}(0.04) \simeq 1.42$$
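The arithmetic of the second order estimate can be sketched in plain Python (no matrix library needed for a 3-vector):

```python
import math

# Second-order multivariate Taylor estimate of f(x + h) for
# f(x) = 3*x1 + sin(x2) + 2*x2*x3 + 1, expanded at x = (0, 0, 0).
def f(x1, x2, x3):
    return 3*x1 + math.sin(x2) + 2*x2*x3 + 1

h = [0.1, 0.1, 0.1]
J = [3.0, 1.0, 0.0]                      # Jacobian at 0
H = [[0, 0, 0], [0, 0, 2], [0, 2, 0]]    # Hessian at 0

linear = sum(j * hk for j, hk in zip(J, h))                 # J^T h = 0.4
quad = 0.5 * sum(h[r] * sum(H[r][c] * h[c] for c in range(3))
                 for r in range(3))                          # 0.02

estimate = f(0, 0, 0) + linear + quad
print(estimate)      # 1.42
print(f(*h))         # true value, ~1.41983
```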
7.2. Roots of Functions. The roots of a single variable function have many applications
- Optimization of a single variable function, where $f'(x) = 0$
- For multivariable functions it is more difficult to get optimization and roots
For roots of single variable functions we will use the following methods:
Bracketing Methods
- Bisection
- False position
Open Methods
- Newton-Raphson
- Secant
- Polynomials - Muller
Bracketing
We find solutions of $f(x) = c$ and assume $x \in [x_A, x_B]$.
Define $g(x) = f(x) - c$ and find roots of $g(x)$ in the $[x_A, x_B]$ bracket, and minimize the bracket.
8. Bracketing Methods
Wednesday January 27, 2010
8.1. Bisection.
-Idea is to halve the interval and discard the part that does not have the root.
Observe that if a root exists between $x_A$ and $x_B$, there will be a sign change between $g(x_A)$ and $g(x_B)$. Therefore $g(x_A)\,g(x_B) < 0$.
8.1.1. Bisection Algorithm.
(1) Find the midpoint between $x_A$ & $x_B$: $x_m = \dfrac{x_A + x_B}{2}$
8.2. False Position.
-we use the false position root instead of the midpoint, but otherwise the same method.
We use a similar-triangle relationship to find our false position midpoint:
$$\frac{|g(x_A)|}{x_m - x_A} = \frac{|g(x_B)|}{x_B - x_m}, \qquad x_m = \text{Unknown}$$
$$x_m = \frac{g(x_A)x_B - g(x_B)x_A}{g(x_A) - g(x_B)} = x_B - \frac{g(x_B)(x_A - x_B)}{g(x_A) - g(x_B)}$$
8.3. Bracket Summary. Bracket midpoint update terms:
$$\text{Bisection: } x_m = \frac{x_A + x_B}{2}$$
$$\text{False Position: } x_m = x_B - \frac{g(x_B)(x_A - x_B)}{g(x_A) - g(x_B)}$$
FP places $x_m$ closer to the root, but is slower in functions with strong curvature.
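Both bracketing methods share the same shrink-the-bracket loop and differ only in the midpoint update; a minimal sketch, assuming $g$ is continuous with $g(x_A)g(x_B) < 0$:

```python
# Sketch of the two bracketing updates on g(x) = x^2 - 2 (root at sqrt(2)).
def bracket_root(g, xa, xb, update, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        xm = update(g, xa, xb)
        if abs(g(xm)) < tol:
            break
        if g(xa) * g(xm) < 0:    # root in lower part: shrink from the right
            xb = xm
        else:                    # root in upper part: shrink from the left
            xa = xm
    return xm

bisection = lambda g, xa, xb: (xa + xb) / 2
false_pos = lambda g, xa, xb: xb - g(xb) * (xa - xb) / (g(xa) - g(xb))

g = lambda x: x**2 - 2
r_bis = bracket_root(g, 0, 2, bisection)
r_fp = bracket_root(g, 0, 2, false_pos)
print(r_bis, r_fp)    # both ~1.41421356
```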
8.4. Open Methods:
- No bracket is required, only 1 initial point is needed (a guess)
- Usually faster than bracketing methods
- May not always find a given root, may not converge correctly (if guess is bad)
Newton-Raphson Update Term: $\dfrac{g(x_i)}{g'(x_i)}$
9. Open Methods
Friday January 29, 2010
9.1. Newton-Raphson (NR). The update comes from the slope over one step:
$$g'(x_i) = \frac{g(x_i)}{x_i - x_{i+1}} \quad\Rightarrow\quad x_{i+1} = x_i - \frac{g(x_i)}{g'(x_i)}$$
Example: $g(x) = e^x - 10$, $g'(x) = e^x$, $x_0 = 4$:
$$g(x_0) = e^4 - 10 \ne 0, \qquad E = 1.70$$
$$x_1 = x_0 - \frac{g(x_0)}{g'(x_0)} = 4 - \frac{e^4 - 10}{e^4} = 3.18, \qquad E = 0.88$$
$$x_2 = 3.18 - \frac{e^{3.18} - 10}{e^{3.18}} = 2.59, \qquad E = 0.293$$
$$x_3 = 2.34, \qquad E = 0.037$$
We can see that the solution for $x$ converges, and the error from the true value ($\ln 10 \approx 2.30$) decreases.
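The NR iteration for this example can be sketched directly:

```python
import math

# Newton-Raphson iteration for g(x) = e^x - 10, whose root is ln(10).
def newton(g, dg, x0, n_steps):
    x = x0
    for _ in range(n_steps):
        x = x - g(x) / dg(x)    # NR update term
    return x

g = lambda x: math.exp(x) - 10
dg = lambda x: math.exp(x)

x = newton(g, dg, 4.0, 6)
print(x, math.log(10))   # converges quadratically toward ln(10) ~ 2.302585
```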
9.2. Convergence of open methods. Bisection error roughly decreases by $\frac{1}{2}$ every iteration: Linear Convergence.
In NR the error is roughly proportional to the square of the previous error: Quadratic Convergence.
Warning: watch for inflection points and local min/max, since if $g'(x_i) = 0$ the update $\frac{g(x_i)}{g'(x_i)}$ blows up - it's bad!
The NR update uses the derivative. When the derivative is unknown, we use the backwards difference method to approximate the derivative $g'(x)$.
9.3. Secant Method. In the NR update $x_{i+1} = x_i - \frac{g(x_i)}{g'(x_i)}$, approximate the derivative with the two most recent iterates:
$$g'(x_i) \simeq \frac{g(x_{i-1}) - g(x_i)}{x_{i-1} - x_i}$$
Therefore:
$$x_{i+1} = x_i - \frac{g(x_i)(x_{i-1} - x_i)}{g(x_{i-1}) - g(x_i)}$$
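A minimal secant sketch for the same $g(x) = e^x - 10$ example:

```python
import math

# Secant method: Newton-Raphson with the derivative replaced by a
# difference through the two most recent iterates.
def secant(g, x_prev, x, n_steps):
    for _ in range(n_steps):
        denom = g(x_prev) - g(x)
        if denom == 0:           # iterates have converged
            break
        x_prev, x = x, x - g(x) * (x_prev - x) / denom
    return x

g = lambda x: math.exp(x) - 10
root = secant(g, 4.0, 3.5, 12)
print(root)   # ~ ln(10) = 2.302585...
```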
(Handout, garbled in extraction: the MATLAB code implementing this method, initialized from three points. Only the initialization survives:)
x0 = 0;
x1 = 0.5;
x2 = 1;
err = 100;
counter = 0;
(The handout continues with the first and second iterations of the method, listing the updated values of a, b, c and the error at each iteration; the numbers did not survive extraction.)
MATLAB's fzero function. (Scanned handout, decoded from garbled extraction.) fzero uses a combination of bracketing and open techniques to find the roots of nonlinear functions.
Simplified Syntax: X = FZERO(FUN,X0) finds the root of the function FUN near X0, where X0 is the initial guess.
FUN can be specified as an anonymous function. For finding the roots of $f(x) = x^2\sin(x)$ you can use:
f = @(x) x^2*sin(x);
r1 = fzero (f , 3);
- Creates an anonymous function
- Finds the root with an initial guess of 3.
Or do it all at once. Example: finding the roots of $x^2\sin(x)$ using MATLAB:
>> f = @(x) x^2*sin(x);
>> r = fzero(f,4);
>> r = fzero(f,0.5);
>> r = fzero(f,10);
Lesson: the root obtained from fzero depends on the initial guess.
The deconv function: polynomial deflation. (Scanned handout, decoded from garbled extraction.) All the methods discussed so far (Bisection, False Position, NR, Secant) can be applied to finding real roots of polynomials. For polynomials with multiple roots, polynomial deflation can be used in combination with the above methods to avoid the need for coming up with proper initial guess values.
Example: finding the roots of $f(x) = x^4 - 5x^3 + 5x^2 + 5x - 6$.
Polynomials can be entered into MATLAB by storing their coefficients as a vector. For f(x) we can create a vector called f:
>> f = [1 -5 5 5 -6];
(Coefficients in descending powers.)
Suppose we converge on the root at $x = 2$, so $(x - 2)$ is a factor of f(x). We can create a second vector called fr1:
>> fr1 = [1 -2];
The deconv function can now be used to perform the polynomial division f/fr1 at this stage:
>> f2 = deconv(f, fr1);
This will produce:
f2 = [ 1 -3 -1 3]
which translates to the polynomial $x^3 - 3x^2 - x + 3$.
$f_2(x)$ is the new function for which we need to find a root. Apply NR again; suppose we converge on the root at $x = 1$. Now we need to deflate $f_2(x)$ by $(x - 1)$:
>> f2 = [ 1 -3 -1 3];
>> fr2 = [1 -1];
(vector for the polynomial (x - 1))
>> f3 = deconv(f2, fr2);
This will produce:
f3 = [1 -2 -3]
which translates to the polynomial $x^2 - 2x - 3$.
11.1. Muller's Method. We use the second order Taylor approximation (a parabola) to determine the next estimate.
We can find real, complex, single or multiple roots using this method. This method can be applied to any function; it is very general. We require 3 initial points.
Parabola:
$$f(x) = dx^2 + ex + F = A(x - x_2)^2 + B(x - x_2) + C$$
We model our parabola with the given points; A, B and C can be found using linear algebra:
$$h_0 = x_1 - x_0, \qquad \delta_0 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$
$$h_1 = x_2 - x_1, \qquad \delta_1 = \frac{f(x_2) - f(x_1)}{x_2 - x_1}$$
Setting the parabola to zero at the root $x_r$:
$$0 = A(x_r - x_2)^2 + B(x_r - x_2) + C$$
$$(x_r - x_2) = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}$$
We will get two roots; we choose the one that is closest to $x_2$ (smallest $|x_r - x_2|$).
Once $x_r$ is determined, we update our three points and repeat.
11.2. Complication with Muller's Method. If $B^2 \ggg 4AC$, subtractive cancellation causes round off error. We then let
$$z = \frac{1}{x_r - x_2}, \qquad z^{-1} = x_r - x_2$$
$$0 = Az^{-2} + Bz^{-1} + C \quad\Rightarrow\quad 0 = Cz^2 + Bz + A$$
where, to avoid round off:
$$z = \frac{-B \mp \sqrt{B^2 - 4AC}}{2C}$$
Take the largest root $|z|$, since $z = \frac{1}{x_r - x_2}$ corresponds to the root with smallest $|x_r - x_2|$.
To avoid calculating both roots, pick the sign that avoids cancellation:
$$x_r - x_2 = \frac{-2C}{B + \sqrt{B^2 - 4AC}} \;\text{ if } B > 0, \qquad x_r - x_2 = \frac{-2C}{B - \sqrt{B^2 - 4AC}} \;\text{ if } B < 0$$
Golden Section Search. We divide the interval so that the ratio of segment lengths is preserved:
$$\frac{l_1 + l_2}{l_1} = \frac{l_1}{l_2}$$
Let $R = l_2/l_1$. Then
$$1 + R = \frac{1}{R} \quad\Rightarrow\quad R^2 + R - 1 = 0 \quad\Rightarrow\quad R = \frac{\sqrt{5} - 1}{2} \simeq 0.618$$
Therefore
$$d = \left(\frac{\sqrt{5} - 1}{2}\right)(x_u - x_l)$$
if $f(x_2) < f(x_1)$: maximum is in the upper interval $[x_2,\, x_u]$ ($x_2$ becomes the new $x_l$)
if $f(x_1) < f(x_2)$: maximum is in the lower interval $[x_l,\, x_1]$ ($x_1$ becomes the new $x_u$)
Example: maximize $f(x) = 1 - x^2$ on $[x_l, x_u] = [-0.5,\, 1]$:
$$d = \left(\frac{\sqrt{5}-1}{2}\right)\cdot 1.5 = 0.927$$
$$x_1 = x_l + d = -0.5 + 0.927 = 0.427$$
$$x_2 = x_u - d = 1 - 0.927 = 0.073$$
$$f(x_1) = 1 - (0.427)^2 = 0.818$$
$$f(x_2) = 1 - (0.073)^2 = 0.995$$
Because we are looking for the max: $f(x_1) < f(x_2)$ (f increases to the left).
Now the interval is the lower one: $[x_l,\, x_1] = [-0.5,\, 0.427]$.
Second iteration:
$$d = \left(\frac{\sqrt{5}-1}{2}\right)\cdot 0.927 = 0.573$$
$$x_1 = x_l + d = -0.5 + 0.573 = 0.073$$
$$x_2 = x_u - d = 0.427 - 0.573 = -0.146$$
$$f(x_1) = 1 - (0.073)^2 = 0.995$$
$$f(x_2) = 1 - (-0.146)^2 = 0.979$$
$$f(x_1) > f(x_2)$$
And we continue to decrease the size of the interval until the stopping conditions are met.
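The iteration above can be sketched as a loop (a simple version that re-evaluates both interior points every iteration, rather than reusing one as the real golden-section method does):

```python
import math

# Golden-section search for the maximum of f(x) = 1 - x^2 on [-0.5, 1].
R = (math.sqrt(5) - 1) / 2    # ~0.618

def golden_max(f, xl, xu, tol=1e-8):
    while xu - xl > tol:
        d = R * (xu - xl)
        x1, x2 = xl + d, xu - d    # x1 is the upper interior point
        if f(x1) < f(x2):          # max in lower interval: x1 becomes new xu
            xu = x1
        else:                      # max in upper interval: x2 becomes new xl
            xl = x2
    return (xl + xu) / 2

f = lambda x: 1 - x**2
xmax = golden_max(f, -0.5, 1.0)
print(xmax)   # ~0, where the maximum of 1 - x^2 lies
```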
$$g(x) = Ax^2 + Bx + C$$
Where A, B, C are unknowns. We use $f(x_1), f(x_2), f(x_3)$ to solve:
$$\begin{pmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \end{pmatrix} = \begin{pmatrix} Ax_1^2 + Bx_1 + C \\ Ax_2^2 + Bx_2 + C \\ Ax_3^2 + Bx_3 + C \end{pmatrix} = \begin{pmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix}$$
Example: $x_1 = 2,\ y_1 = 2$; $x_2 = 0,\ y_2 = 2$; $x_3 = 3,\ y_3 = 3$:
$$\begin{pmatrix} 4 & 2 & 1 \\ 0 & 0 & 1 \\ 9 & 3 & 1 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 3 \end{pmatrix}$$
Solving (Gaussian elimination):
$$A = \frac{1}{3}, \quad B = -\frac{2}{3}, \quad C = 2 \quad\Rightarrow\quad g(x) = \frac{1}{3}x^2 - \frac{2}{3}x + 2$$
min: $g'(x) = \frac{2}{3}x - \frac{2}{3} = 0$
$$x_p = \frac{-B}{2A} = 1$$
We know if it is an optimum via the analytic approach or graphs. Evaluating at $x_p = 1$ gives $y = 1$, so we keep the three points closest to the optimum as
new initial points: $x_1 = 0,\ y_1 = 2$; $x_2 = 1,\ y_2 = 1$; $x_3 = 2,\ y_3 = 2$:
$$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 4 & 2 & 1 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}$$
Solving again:
$$A = 1, \quad B = -2, \quad C = 2 \quad\Rightarrow\quad g(x) = x^2 - 2x + 2$$
$$g'(x) = 2x - 2, \qquad x_p = \frac{-B}{2A} = \frac{-(-2)}{2} = 1$$
Example: $f(\bar{x}) = x_1^2 + x_2^2$:
$$J(\bar{x}) = \begin{pmatrix} \frac{\partial f(\bar{x})}{\partial x_1} \\ \frac{\partial f(\bar{x})}{\partial x_2} \end{pmatrix} = \begin{pmatrix} 2x_1 \\ 2x_2 \end{pmatrix}$$
When $J(\bar{x}) = \bar{0}$:
$$\begin{pmatrix} 2x_1 \\ 2x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Rightarrow\quad \begin{cases} x_1 = 0 \\ x_2 = 0 \end{cases}$$
Example:
$$J(\bar{x}) = \begin{pmatrix} \frac{\partial f(\bar{x})}{\partial x_1} \\ \frac{\partial f(\bar{x})}{\partial x_2} \end{pmatrix} = \begin{pmatrix} 2x_1 + x_2 + 2 \\ x_1 + 2x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Solving the linear system by elimination:
$$\begin{pmatrix} 2 & 1 & | & -2 \\ 1 & 2 & | & 0 \end{pmatrix} \quad\Rightarrow\quad \begin{cases} x_1 = -\frac{4}{3} \\ x_2 = \frac{2}{3} \end{cases}$$
When $J(\bar{x}) = 0$ cannot be solved analytically, linearize the Jacobian:
$$J(\bar{x}_{i+1}) \simeq J(\bar{x}_i) + H(\bar{x}_i)(\bar{x}_{i+1} - \bar{x}_i)$$
We want our next point $\bar{x}_{i+1}$ to be the root of the Jacobian, $J(\bar{x}_{i+1}) = 0$:
$$0 \simeq J(\bar{x}_i) + H(\bar{x}_i)(\bar{x}_{i+1} - \bar{x}_i)$$
$$-J(\bar{x}_i) = H(\bar{x}_i)(\bar{x}_{i+1} - \bar{x}_i)$$
$$\bar{x}_{i+1} = \bar{x}_i - H^{-1}(\bar{x}_i)\,J(\bar{x}_i) \;\} \text{ Newton-Raphson update term (or Steepest Descent update)}$$
where
$$H(\bar{x}) = \frac{d}{d\bar{x}}J(\bar{x}) = \left(\frac{\partial J(\bar{x})}{\partial x_1}\;\; \frac{\partial J(\bar{x})}{\partial x_2}\right)$$
Example (reconstructed from the iterates; signs inferred):
$$J(\bar{x}) = \begin{pmatrix} 4x_1^3 + 2x_2 - 4 \\ 2x_1 - 2x_2 + 2 \end{pmatrix}, \qquad H(\bar{x}) = \begin{pmatrix} 12x_1^2 & 2 \\ 2 & -2 \end{pmatrix}$$
Let:
$$\bar{x}_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad J(\bar{x}_0) = \begin{pmatrix} 0 \\ 2 + 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \end{pmatrix} \;\text{(note at } \bar{x}_0,\ J(\bar{x}) \ne 0\text{)}, \quad H(\bar{x}_0) = \begin{pmatrix} 12 & 2 \\ 2 & -2 \end{pmatrix}$$
$$\bar{x}_1 = \bar{x}_0 - H^{-1}(\bar{x}_0)J(\bar{x}_0) = \begin{pmatrix} 0.714 \\ 1.714 \end{pmatrix}, \qquad J(\bar{x}_1) = \begin{pmatrix} 0.886 \\ 0 \end{pmatrix}$$
$$\bar{x}_2 = \begin{pmatrix} 0.605 \\ 1.605 \end{pmatrix}, \; J(\bar{x}_2) = \begin{pmatrix} 0.097 \\ 0 \end{pmatrix}; \qquad \bar{x}_3 = \begin{pmatrix} 0.59 \\ 1.59 \end{pmatrix}, \; J(\bar{x}_3) = \begin{pmatrix} 0.0017 \\ 0 \end{pmatrix}$$
$$\bar{x}_4 = \begin{pmatrix} 0.5898 \\ 1.5898 \end{pmatrix}, \qquad J(\bar{x}_4) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
The optimized value of $\bar{x}$ is therefore $(0.5898,\ 1.5898)^T$,
because the Jacobian (derivative) $J(\bar{x}) = 0$ there.
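The iterates can be reproduced with a short Newton loop; a sketch assuming the Jacobian and Hessian as reconstructed above (the 2x2 solve is done by Cramer's rule to stay dependency-free):

```python
# Newton-Raphson optimization: x_{i+1} = x_i - H^{-1}(x_i) J(x_i).
def J(x1, x2):
    return [4*x1**3 + 2*x2 - 4, 2*x1 - 2*x2 + 2]

def H(x1, x2):
    return [[12*x1**2, 2], [2, -2]]

def newton_step(x1, x2):
    j, h = J(x1, x2), H(x1, x2)
    det = h[0][0]*h[1][1] - h[0][1]*h[1][0]
    # Solve H * dx = J for the 2x2 case (Cramer's rule)
    dx1 = (j[0]*h[1][1] - j[1]*h[0][1]) / det
    dx2 = (h[0][0]*j[1] - h[1][0]*j[0]) / det
    return x1 - dx1, x2 - dx2

x = (1.0, 0.0)
for _ in range(6):
    x = newton_step(*x)
print(x)   # ~ (0.5898, 1.5898), where J vanishes
```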
14.2. Solutions to Linear Algebraic Equations. In general:
$$\begin{cases} m \text{ equations} \\ n \text{ unknowns} \end{cases} \quad \text{general cases}$$
$$\begin{pmatrix} a_{11} & \dots & a_{1n} \\ a_{21} & \dots & \vdots \\ \vdots & \ddots & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
Example:
$$\begin{aligned} 3x_1 + 2x_2 &= 18 \\ -x_1 + 2x_2 &= 2 \end{aligned} \quad\Rightarrow\quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 4 \\ 3 \end{pmatrix}$$
Solution approaches: elimination, matrix inversion.
Example (a system approaching $\det(A) = 0$):
$$\begin{pmatrix} 1 & 1 \\ 1+E & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad \det(A) = 1\cdot 1 - (1+E)\cdot 1 = -E$$
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = A^{-1}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \frac{1}{-E}\begin{pmatrix} 1 & -1 \\ -(1+E) & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \frac{1}{-E}\begin{pmatrix} 1 - 2 \\ -(1+E) + 2 \end{pmatrix} = \begin{pmatrix} \frac{1}{E} \\ \frac{E-1}{E} \end{pmatrix} \simeq \begin{pmatrix} \frac{1}{E} \\ -\frac{1}{E} \end{pmatrix}$$
- The solution is very sensitive to $E$ as $E \to 0$
- Round off errors are critical
- This is an ill conditioned system
- The system will hypothetically have a solution in extreme cases
Gaussian Elimination. Goal: simple operations on the augmented matrix $[A\,|\,b]$ - multiply (or divide) rows, add (subtract) rows - until the system is triangular:
$$\begin{pmatrix} a_{11} & \dots & a_{1n} & | & b_1 \\ a_{21} & \dots & \vdots & | & b_2 \\ \vdots & \ddots & \vdots & | & \vdots \\ a_{m1} & \dots & a_{mn} & | & b_m \end{pmatrix}$$
Step 1: for rows $m = 2, 3, 4, \dots, n$:
$$\text{row } m = \text{row } m - \frac{a_{m1}}{a_{11}}\,\text{row } 1$$
$a_{11}$ = pivot element in step 1.
Step 2: the first column below the pivot is now zero:
$$\begin{pmatrix} a_{11} & a_{12} & \dots & | & b_1 \\ 0 & a'_{22} & \dots & | & b'_2 \\ \vdots & \vdots & \ddots & | & \vdots \\ 0 & a'_{m2} & \dots & | & b'_m \end{pmatrix}$$
If a pivot element = 0 or is small, we switch rows.
For $m = 3, \dots, n$: row $m$ = row $m - \dfrac{a'_{m2}}{a'_{22}}$ row 2.
Continue until we have a triangular matrix:
$$\begin{pmatrix} a_{11} & a_{12} & \dots & \dots & | & b_1 \\ 0 & a'_{22} & \dots & \dots & | & b'_2 \\ 0 & 0 & a''_{33} & \dots & | & \vdots \\ 0 & 0 & 0 & a^{(n-1)}_{mn} & | & b^{(n-1)}_n \end{pmatrix}$$
We then use back substitution to solve for the unknowns along the main diagonal.
Potential problems: if pivot elements are small or zero, we have to exchange rows.
LU Decomposition example:
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix}$$
Forward elimination, recording the multipliers:
$$A' = \begin{matrix} \\ r_2 - r_1 \\ r_3 - 2r_1 \end{matrix}\begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & -3 & -6 \end{pmatrix}, \qquad A'' = \begin{matrix} \\ \\ r_3 - \frac{3}{2}r_2 \end{matrix}\begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & 0 & -3 \end{pmatrix}$$
Therefore:
$$U = \begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & 0 & -3 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & \frac{3}{2} & 1 \end{pmatrix}$$
test: $A = LU$
$$LU = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & \frac{3}{2} & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & 0 & -3 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix} = A$$
Matrix multiplication proves us true.
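The same factorization can be sketched as a small Doolittle elimination (no pivoting) and checked against $A$:

```python
# Doolittle LU factorization (no pivoting), recording each multiplier in L.
def lu(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for m in range(k + 1, n):
            L[m][k] = U[m][k] / U[k][k]    # elimination multiplier
            U[m] = [U[m][j] - L[m][k] * U[k][j] for j in range(n)]
    return L, U

A = [[1.0, 2.0, 3.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
L, U = lu(A)
print(L)   # [[1, 0, 0], [1, 1, 0], [2, 1.5, 1]]
print(U)   # [[1, 2, 3], [0, -2, -2], [0, 0, -3]]

# test: A = LU
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(LU == A)   # True
```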
Eigenvalue problems. Relationship: $[A]\bar{V} = \lambda\bar{V}$, so
$$([A] - \lambda[I])\bar{V} = 0$$
Either $\bar{V} = 0$ (trivial answer), or:
$$|A - \lambda I| = 0$$
$$0 = \det\begin{pmatrix} a_{11}-\lambda & \dots & a_{1n} \\ a_{21} & a_{22}-\lambda & \vdots \\ \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nn}-\lambda \end{pmatrix}$$
This gives a polynomial of order $n$ in terms of $\lambda$.
Example: $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$:
$$\lambda = \frac{5 \pm \sqrt{33}}{2} = -0.37 \;\text{ or }\; 5.37$$
$$(A - \lambda_1 I)\bar{V}_1 = 0: \qquad \begin{pmatrix} 1-(-0.37) & 2 \\ 3 & 4-(-0.37) \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 1.37 & 2 \\ 3 & 4.37 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \{\text{Let } a = 1\}\; b = -\frac{1.37}{2}$$
$$\bar{V}_1 = \begin{pmatrix} 1 \\ -\frac{1.37}{2} \end{pmatrix}$$
17.1. Least Squares Regression. Given data points $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$.
We know physical properties prior to analysis: a linear relationship between $x$ & $y$:
$$y = p_1 + p_2x$$
The objective is to find $p_1$ & $p_2$ that best represent the data (try to reduce the variance $V = \sum_{i} E_i^2$).
If the data were error free:
$$\begin{aligned} y_1 &= p_1 + p_2x_1 \\ y_2 &= p_1 + p_2x_2 \\ &\;\;\vdots \\ y_n &= p_1 + p_2x_n \end{aligned} \qquad\qquad \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \overbrace{\begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}\begin{pmatrix} p_1 \\ p_2 \end{pmatrix}}^{M\,=\,AP}$$
Pseudo Inverse of A: $P = (A^TA)^{-1}A^TM$.
Example: find $P_1$, $P_2$ if we know $y = P_1 + P_2x$:
$$\begin{pmatrix} 1 \\ 2.1 \\ 2.9 \\ 3.8 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}\underbrace{\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}}_{P}$$
Multiply both sides by $A^T$:
$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 2.1 \\ 2.9 \\ 3.8 \end{pmatrix} = \underbrace{\begin{pmatrix} 4 & 6 \\ 6 & 14 \end{pmatrix}}_{A^TA}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} \quad\Rightarrow\quad \begin{pmatrix} 9.8 \\ 19.3 \end{pmatrix} = \begin{pmatrix} 4 & 6 \\ 6 & 14 \end{pmatrix}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}$$
Inverting $A^TA$ (det $= 4\cdot 14 - 36 = 20$):
$$(A^TA)^{-1} = \frac{1}{20}\begin{pmatrix} 14 & -6 \\ -6 & 4 \end{pmatrix} = \begin{pmatrix} \frac{7}{10} & -\frac{3}{10} \\ -\frac{3}{10} & \frac{1}{5} \end{pmatrix}$$
Therefore:
$$\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = \begin{pmatrix} \frac{7}{10} & -\frac{3}{10} \\ -\frac{3}{10} & \frac{1}{5} \end{pmatrix}\begin{pmatrix} 9.8 \\ 19.3 \end{pmatrix} = \begin{pmatrix} 1.07 \\ 0.92 \end{pmatrix}$$
$$y = 1.07 + 0.92x$$
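A quick check of the normal-equation arithmetic (note which number is the intercept and which is the slope):

```python
# Least-squares line fit via the normal equations (pseudo inverse),
# for the data x = 0..3, y = (1, 2.1, 2.9, 3.8).
x = [0, 1, 2, 3]
y = [1.0, 2.1, 2.9, 3.8]
n = len(x)

sx, sxx = sum(x), sum(v*v for v in x)
sy, sxy = sum(y), sum(a*b for a, b in zip(x, y))

# Solve [[n, sx], [sx, sxx]] [P1, P2]^T = [sy, sxy]^T by hand
det = n * sxx - sx * sx
P1 = (sxx * sy - sx * sxy) / det    # intercept
P2 = (n * sxy - sx * sy) / det      # slope

print(P1, P2)   # 1.07, 0.92  ->  y = 1.07 + 0.92 x
```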
17.4. Consider N equally spaced points ($x_n = n$):
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ \vdots & \vdots \\ 1 & N \end{pmatrix}}_{(f_1(x)\;\; f_2(x))}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix}$$
$$P = (A^TA)^{-1}A^TM$$
$$A^TA = \begin{pmatrix} 1 & 1 & \dots & 1 \\ 1 & 2 & \dots & N \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ \vdots & \vdots \\ 1 & N \end{pmatrix} = \begin{pmatrix} N & \sum_n^N n \\ \sum_n^N n & \sum_n^N n^2 \end{pmatrix}, \qquad A^TM = \begin{pmatrix} \sum_n^N y_n \\ \sum_n^N n\,y_n \end{pmatrix}$$
So the normal equations are:
$$\begin{pmatrix} N & \sum n \\ \sum n & \sum n^2 \end{pmatrix}\begin{pmatrix} P_1 \\ P_2 \end{pmatrix} = \begin{pmatrix} \sum y_n \\ \sum n\,y_n \end{pmatrix}$$
In general, for a model that is a linear combination of basis functions $f_1, \dots, f_n$:
$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}}_{\text{Measurement Vector } M} = \underbrace{\begin{pmatrix} f_1(x_1) & \dots & f_n(x_1) \\ f_1(x_2) & \dots & f_n(x_2) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \dots & f_n(x_N) \end{pmatrix}}_{\text{Model Matrix } A}\underbrace{\begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_n \end{pmatrix}}_{\text{Parameter Vector } P}$$
$$M = AP$$
For a polynomial model of order $Q$:
$$A = \begin{pmatrix} 1 & x_1 & x_1^2 & \dots & x_1^Q \\ 1 & x_2 & x_2^2 & \dots & x_2^Q \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & x_N^2 & \dots & x_N^Q \end{pmatrix}$$
Example: $y = P_1\sin(x)$. Goal: to find the amplitude of the function ($P_1$):
$$A = \begin{pmatrix} \sin(x_1) \\ \vdots \\ \sin(x_N) \end{pmatrix}, \qquad M = \begin{pmatrix} y_1 \\ \vdots \\ y_N \end{pmatrix}, \qquad P = [P_1]$$
$$P = (A^TA)^{-1}A^TM = \left(\sum_{i=1}^N \sin^2(x_i)\right)^{-1}\sum_{i=1}^N \sin(x_i)\,y_i$$
Equivalently, minimize the sum of squared errors; for the model $y = P_1x$:
$$J = \sum_{n=1}^N E_n^2 = \sum_{n=1}^N \big(P_1x_n - \underbrace{y_n}_{Measurement}\big)^2$$
Setting $\partial J/\partial P_1 = 0$:
$$P_1 = \left(\sum_{n=1}^N x_n^2\right)^{-1}\sum_{n=1}^N y_nx_n$$
This matches the pseudo inverse result with $A = (x_1, \dots, x_n)^T$:
$$P = (A^TA)^{-1}A^TM$$
Review of key formulas.
Taylor series:
$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)(x - x_0)^2}{2!} + \dots$$
Difference formulas:
$$f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} \quad\text{(forward)}$$
$$f'(x_i) \simeq \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}} \quad\text{(backward)}$$
$$f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_{i-1})}{x_{i+1} - x_{i-1}} \quad\text{(central)}$$
Secant update:
$$x_{i+1} = x_i - \frac{f(x_i)(x_{i-1} - x_i)}{f(x_{i-1}) - f(x_i)}$$
For finding an optimum of a fitted parabola, $f'(x) = 2Ax + B$:
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} Ax_1^2 + Bx_1 + C \\ Ax_2^2 + Bx_2 + C \\ Ax_3^2 + Bx_3 + C \end{pmatrix}$$
The new point $x_{new}$ is where the function $= 0$. The next iteration will utilize this new point, as well as the two closest points.
19.5. Single Variable Optimization. Is when the derivative of the function $f'(x) = 0$:
- Golden Search
Golden search interval update (the braces in the original figure mark which subinterval holds the Minimum vs the Maximum):
if looking for a Maximum: keep $[x_2,\, x_U]$; if looking for a Minimum: keep $[x_L,\, x_1]$.
- d can be (0.5 to 1) x (interval) so long as overlap happens
- Golden Ratio $= \frac{\sqrt{5}-1}{2} \simeq 0.62$ of the Total interval
Parabolic (Muller): covered already.
19.6. Multi-Variable Optimization.
$$\bar{x}_{i+1} = \bar{x}_i - H^{-1}(\bar{x}_i)J(\bar{x}_i)$$
20. Interpolation
Wednesday March 3, 2010
20.1. Definition. A smooth curve that fits all given data points.
For 4 points, fit a cubic via the system of powers of the $x$ values:
$$\begin{pmatrix} x_1^3 & x_1^2 & x_1 & 1 \\ x_2^3 & x_2^2 & x_2 & 1 \\ x_3^3 & x_3^2 & x_3 & 1 \\ x_4^3 & x_4^2 & x_4 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$$
Lagrangian interpolation of order N:
$$f_N(x) = \sum_{n=0}^{N} y_nL_n(x)$$
Where:
$$L_n(x) = \prod_{i=0,\, i\ne n}^{N} \frac{(x - x_i)}{(x_n - x_i)}$$
ex:
$$L_1(x) = \frac{(x - x_0)}{(x_1 - x_0)}\cdot\frac{(x - x_2)}{(x_1 - x_2)}\cdot\frac{(x - x_3)}{(x_1 - x_3)}\cdots\frac{(x - x_N)}{(x_1 - x_N)}$$
Note there is no $\frac{(x - x_1)}{(x_1 - x_1)}$ term, since the denominator is zero.
Example: If we have two points $(x_0, y_0)$, $(x_1, y_1)$: N+1 = 2, N = 1:
$$f_1(x) = \sum_{n=0}^{1} y_n\prod_{i=0,\, i\ne n}^{1}\frac{x - x_i}{x_n - x_i} = y_0\frac{x - x_1}{x_0 - x_1} + y_1\frac{x - x_0}{x_1 - x_0}$$
With $x_0 = 1,\ y_0 = 3,\ x_1 = 2,\ y_1 = 6$:
$$f_1(x) = 3\frac{x - 2}{1 - 2} + 6\frac{x - 1}{2 - 1} = -3(x - 2) + 6(x - 1) = 3x \quad\text{as expected}$$
For the points (1, 1), (2, 4), (3, 9):
$$f_2(x) = 1\frac{(x-2)(x-3)}{(1-2)(1-3)} + 4\frac{(x-1)(x-3)}{(2-1)(2-3)} + 9\frac{(x-1)(x-2)}{(3-1)(3-2)} = x^2$$
But for example if:
x: 1 2 3
y: 1 8 27
$$f_2(x) = 1\frac{(x-2)(x-3)}{(1-2)(1-3)} + 8\frac{(x-1)(x-3)}{(2-1)(2-3)} + 27\frac{(x-1)(x-2)}{(3-1)(3-2)} = 6x^2 - 11x + 6$$
Therefore we see that the Lagrangian method does not determine coefficients directly.
No round off errors affect sensitivity.
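The product formula translates directly into a loop; a sketch checking the second example against its expanded form $6x^2 - 11x + 6$:

```python
# Lagrange interpolation: f_N(x) = sum y_n * L_n(x) for points (1,1), (2,8), (3,27).
def lagrange(xs, ys, x):
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        Ln = 1.0
        for i, xi in enumerate(xs):
            if i != n:                       # skip the (x - x_n)/(x_n - x_n) term
                Ln *= (x - xi) / (xn - xi)
        total += yn * Ln
    return total

xs, ys = [1, 2, 3], [1, 8, 27]
# The interpolant equals 6x^2 - 11x + 6 everywhere:
for x in [0.0, 1.5, 2.5, 4.0]:
    print(lagrange(xs, ys, x), 6*x**2 - 11*x + 6)
```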
21.1. Definition. Piecewise spline interpolation uses lower order polynomials for subsets of the data points.
Linear splines:
$$f(x) = \begin{cases} f_1(x) = y_1 + m_1(x - x_1) & x_1 \le x \le x_2 \\ f_2(x) = y_2 + m_2(x - x_2) & x_2 \le x \le x_3 \\ \quad\vdots \\ f_n(x) = y_n + m_n(x - x_n) & x_n \le x \le x_{n+1} \end{cases}$$
Where:
$$m_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i}$$
Closer data points create smaller error. The problem here is the discontinuity of the 1st derivative at the knots - f(x) is not smooth.
Quadratic splines:
$$f(x) = \begin{cases} f_1(x) = a_1x^2 + b_1x + c_1 & x_1 \le x \le x_2 \\ f_2(x) = a_2x^2 + b_2x + c_2 & x_2 \le x \le x_3 \\ \quad\vdots \\ f_n(x) = a_nx^2 + b_nx + c_n & x_n \le x \le x_{n+1} \end{cases}$$
22.1. Condition 1. The polynomial in each interval must pass through its 2 endpoints.
Ex: $f_1(x)$:
$$y_1 = a_1x_1^2 + b_1x_1 + c_1 \;\text{(start)}, \qquad y_2 = a_1x_2^2 + b_1x_2 + c_1 \;\text{(end)}$$
Or in general:
$$y_i = a_ix_i^2 + b_ix_i + c_i, \qquad y_{i+1} = a_ix_{i+1}^2 + b_ix_{i+1} + c_i$$
There are a total of N intervals and 2 equations per interval, for a total of 2N equations.
22.2. Condition 2. The 1st derivative at interior knots must be equal (continuous slope as the curve switches from one interval to the next).
Example:
$$f_1'(x_2) = f_2'(x_2) \quad\Rightarrow\quad 2x_2a_1 + b_1 = 2x_2a_2 + b_2$$
Or in general:
$$2a_{i-1}x_i + b_{i-1} = 2a_ix_i + b_i \qquad \text{for } 2 \le i \le N$$
This gives us (N-1) knots in the interior = N-1 equations.
From these imposed conditions we have 3N-1 equations.
22.3. Condition 3. We need one more equation to solve the system. We then arbitrarily set the second derivative to zero at the 1st point $x_1$.
1st interval: $f_1''(x_1) = 2a_1 = 0$, so $a_1 = 0$:
$$f_1(x) = 0x^2 + b_1x + c_1 = b_1x + c_1$$
There is a straight line between $x_1$ and $x_2$.
We now have to solve for 3N polynomial coefficients: either set up the system of equations or use a recursive solution.
Recursive solution:
$$a_1 = 0 \quad \text{(from condition 3)}$$
$f_1$ fits $(x_1, y_1)$, $(x_2, y_2)$:
$$y_1 = b_1x_1 + c_1, \qquad y_2 = b_1x_2 + c_1 \quad\Rightarrow\quad \text{solve for } b_1, c_1$$
$f_2$ fits $(x_2, y_2)$, $(x_3, y_3)$ plus the slope condition:
$$y_2 = a_2x_2^2 + b_2x_2 + c_2, \qquad y_3 = a_2x_3^2 + b_2x_3 + c_2 \quad\Rightarrow\quad \text{solve for } a_2, b_2, c_2$$
Example: data points $(0, 1), (1, e), (2, e^2)$; two intervals, so
$$f_1(x) = a_1x^2 + b_1x + c_1, \qquad f_2(x) = a_2x^2 + b_2x + c_2$$
Condition 1: f(x) must pass through the points:
$$f_1(x): \begin{cases} 1 = a_1(0^2) + b_1(0) + c_1 \\ e = a_1(1^2) + b_1(1) + c_1 \end{cases} \qquad f_2(x): \begin{cases} e = a_2(1^2) + b_2(1) + c_2 \\ e^2 = a_2(2^2) + b_2(2) + c_2 \end{cases}$$
Condition 2: derivatives at the internal knot must be equal:
$$f_1'(x_2) = f_2'(x_2) \quad\Rightarrow\quad 2a_1 + b_1 = 2a_2 + b_2$$
With condition 3 ($a_1 = 0$), the unknowns $(b_1, c_1, a_2, b_2, c_2)$ satisfy:
$$\begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 4 & 2 & 1 \\ 1 & 0 & -2 & -1 & 0 \end{pmatrix}\begin{pmatrix} b_1 \\ c_1 \\ a_2 \\ b_2 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ e \\ e \\ e^2 \\ 0 \end{pmatrix}$$
For cubic splines, condition 1 becomes, for each interval:
$$y_i = a_ix_i^3 + b_ix_i^2 + c_ix_i + d_i \;\text{(start)}, \qquad y_{i+1} = a_ix_{i+1}^3 + b_ix_{i+1}^2 + c_ix_{i+1} + d_i \;\text{(end)}$$
22.8. Condition 3. Set the second derivatives at all interior points equal to each other:
$$f_{i-1}''(x_i) = f_i''(x_i) \quad\Rightarrow\quad 6a_{i-1}x_i + 2b_{i-1} = 6a_ix_i + 2b_i$$
Methods of numerical integration:
- Rectangular
- Trapezoidal
- Simpson's: 1/3 rule, 3/8 rule
- Gaussian Quadrature
23.1. Taylor Series Based Approximations. Using the Taylor series at $a$ we integrate both sides of the equation:
$$f(x) = f(a) + f'(a)(x - a) + f''(a)\frac{(x - a)^2}{2} + \dots$$
$$\int_a^b f(x)\,dx = \underbrace{\int_a^b f(a)\,dx}_{\text{0th order Taylor = Rectangle}} + \underbrace{\int_a^b f'(a)(x - a)\,dx}_{\text{1st order Taylor = Trapezoidal}} + \dots$$
Rectangular rule:
$$\int_a^b f(x)\,dx \simeq \int_a^b f(a)\,dx = f(a)(b - a)$$
$$\text{Error} \simeq \int_a^b f'(a)(x - a)\,dx = \frac{f'(a)}{2}(b - a)^2$$
Trapezoidal rule: keep the first order term as well:
$$\int_a^b f(x)\,dx \simeq \int_a^b f(a)\,dx + \int_a^b f'(a)(x - a)\,dx = f(a)(b - a) + \frac{f'(a)}{2}(b - a)^2$$
Forward Difference Approximation: $f'(a) \simeq \dfrac{f(b) - f(a)}{b - a}$, so
$$\int_a^b f(x)\,dx \simeq (b - a)\frac{f(a) + f(b)}{2} = \text{Area of a trapezoid}$$
Error:
$$\text{Error} = \int_a^b f''(a)\frac{(x - a)^2}{2}\,dx = \frac{f''(a)}{2}\left.\frac{(x - a)^3}{3}\right|_a^b = \frac{1}{6}f''(a)(b - a)^3$$
$$\text{Error} \propto h^3$$
Example: $\int_0^{\pi/2}\sin(x)\,dx$:
Rectangular: $f(a) = \sin(0) = 0$, so the estimate is 0.
Trapezoidal, via the Taylor form with $f'(a) = \cos(0) = 1$:
$$\int_a^b f(x)\,dx \simeq \int_a^b f(a)\,dx + \int_a^b f'(a)(x - a)\,dx = 0 + \frac{1}{2}\left(\frac{\pi}{2}\right)^2 = \frac{\pi^2}{8}$$
Or using the trapezoid formula:
$$(b - a)\frac{f(a) + f(b)}{2} = \frac{\pi}{2}\cdot\frac{0 + 1}{2} = \frac{\pi}{4}$$
Integration of splines: fit a spline to the data points $x_1, x_2, \dots, x_N$, then integrate each interval analytically:
$$\int_{x_i}^{x_{i+1}} (a_ix^3 + b_ix^2 + c_ix + d_i)\,dx = \frac{a_i}{4}(x_{i+1}^4 - x_i^4) + \frac{b_i}{3}(x_{i+1}^3 - x_i^3) + \frac{c_i}{2}(x_{i+1}^2 - x_i^2) + d_i(x_{i+1} - x_i)$$
Simpson's 1/3 rule via the Lagrangian polynomial, with three equally spaced points $x_0 = 0$, $x_1 = h$, $x_2 = 2h$:
$$f_2(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x)$$
Where:
$$L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{(x - h)(x - 2h)}{(-h)(-2h)} = \frac{(x - h)(x - 2h)}{2h^2}$$
$$L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} = \frac{x(x - 2h)}{h(-h)} = -\frac{x(x - 2h)}{h^2}$$
$$L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} = \frac{x(x - h)}{(2h)h} = \frac{x(x - h)}{2h^2}$$
So all together:
$$f_2(x) = \frac{f(0)}{2h^2}(x - h)(x - 2h) - \frac{f(h)}{h^2}x(x - 2h) + \frac{f(2h)}{2h^2}x(x - h)$$
$$\int_0^{2h} f(x)\,dx \simeq \int_0^{2h} f_2(x)\,dx = \frac{1}{3}f(0)h + \frac{4}{3}f(h)h + \frac{1}{3}f(2h)h = \frac{h}{3}\big(f(0) + 4f(h) + f(2h)\big)$$
For two panels:
$$I_1 = \frac{h}{3}\big(f(x_0) + 4f(x_1) + f(x_2)\big), \qquad I_2 = \frac{h}{3}\big(f(x_2) + 4f(x_3) + f(x_4)\big)$$
Overall Integral $I = I_1 + I_2$:
$$I = \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + f(x_4)\big)$$
25.1. Using Simpson's 1/3 Rule for Multiple Panels. Simpson's 1/3 Rule:
$$I \simeq \frac{h}{3}\big(f(x_0) + 4f(x_1) + f(x_2)\big) \quad \text{(For 1 panel)}$$
Points must be uniformly and linearly spaced so that h is constant.
Given N+1 uniformly spaced data points, N being an even number:
$$I \simeq \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + \dots + f(x_N)\big)$$
Factors for each point: endpoints = 1, odd index = 4, even index = 2:
$$I = \frac{h}{3}\Big(f(x_0) + 2\underbrace{\sum_{i=2,4,6\dots}^{N-2} f(x_i)}_{i\,=\,\text{Even}} + 4\overbrace{\sum_{j=1,3,5\dots}^{N-1} f(x_j)}^{j\,=\,\text{Odd}} + f(x_N)\Big)$$
$$\text{error} \simeq \frac{h^4}{180}(b - a)\,\overline{f^{(4)}}$$
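The weighting pattern (1, 4, 2, 4, ..., 4, 1) can be sketched as a short loop:

```python
import math

# Composite Simpson's 1/3 rule: endpoints weighted 1, odd indices 4,
# even interior indices 2; N (number of panels) must be even.
def simpson(f, a, b, N):
    assert N % 2 == 0, "Simpson's 1/3 rule needs an even number of panels"
    h = (b - a) / N
    total = f(a) + f(b)
    for k in range(1, N):
        total += (4 if k % 2 == 1 else 2) * f(a + k*h)
    return h / 3 * total

# Integrate sin(x) from 0 to pi/2; the exact value is 1.
approx = simpson(math.sin, 0, math.pi/2, 10)
print(approx)   # ~1.0000 (the error shrinks like h^4)
```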
Sanity check with a constant function $f(x) = G$:
$$I = \frac{Gh}{3}\Big(1 + 2\sum_{i=2,4,6\dots}^{N-2} 1 + 4\sum_{j=1,3,5\dots}^{N-1} 1 + 1\Big) = \frac{Gh}{3}\left(1 + 2\frac{N-2}{2} + 4\frac{N}{2} + 1\right) = \frac{Gh}{3}(2 + N - 2 + 2N) = \underbrace{G}_{\text{height}}\cdot\underbrace{Nh}_{\text{Width}}$$
Requirements:
- h must be uniform
- N (number of panels) must be an even number (total number of points must be odd)
- $x_0$ is the 1st point, $x_N$ is the last point
- Minimum 3 points
25.3. Simpson's 3/8 Rule. Now every panel contains 4 equally spaced points, and a 3rd order Lagrangian polynomial is used to interpolate the function:
$$I = \frac{3h}{8}\big(f(0) + 3f(h) + 3f(2h) + f(3h)\big)$$
For multiple panels (N a multiple of 3):
$$\int_0^{Nh} f(x)\,dx \simeq I = \frac{3h}{8}\Big(f(x_0) + 3\underbrace{\sum_{i=1,2,4,5,7\dots}^{N-1} f(x_i)}_{i\,\ne\,\text{Multiple of 3}} + 2\overbrace{\sum_{j=3,6,9\dots}^{N-3} f(x_j)}^{j\,=\,\text{Multiple of 3}} + f(x_N)\Big)$$
Simpson's 1/3 and 3/8 rules can be used together for all cases, for both even or odd numbers of data points.
Ex: if N=5 we cannot use just one rule, since N isn't a multiple of 2 or 3.
Gaussian Quadrature. Provides a very simple formula for numerical integration but requires knowledge of f(x) - not just the data points.
Suppose we are integrating f(x) from $(-1, 1)$ using the trapezoidal rule: the straight line through the endpoints misses area (Area B = Underestimation Error). Gauss quadrature instead evaluates f at interior points $\pm d$:
$$\int_{-1}^{1} f(x)\,dx \simeq C_0f(-d) + C_1f(d)$$
Setting f(x) arbitrarily to simple polynomials and requiring exactness:
$$f(x) = 1: \quad C_0 + C_1 = \int_{-1}^{1} 1\,dx = 2 \quad (1)$$
$$f(x) = x: \quad C_0(-d) + C_1(d) = \int_{-1}^{1} x\,dx = 0 \quad (2)$$
$$f(x) = x^2: \quad C_0d^2 + C_1d^2 = \int_{-1}^{1} x^2\,dx = \frac{2}{3} \quad (3)$$
$$f(x) = x^3: \quad C_0(-d)^3 + C_1d^3 = \int_{-1}^{1} x^3\,dx = 0 \quad (4)$$
From (2): $C_0 = C_1$; from (1): $C_0 = C_1 = 1$; from (3): $d^2 = \frac{1}{3}$, $d = \frac{1}{\sqrt{3}}$:
$$\int_{-1}^{1} f(x)\,dx \simeq f\!\left(-\tfrac{1}{\sqrt{3}}\right) + f\!\left(\tfrac{1}{\sqrt{3}}\right)$$
On a general interval $[x_i,\, x_{i+1}]$ of width $h_i$, the Gauss points are
$$x_0 = x_i + d_i, \qquad x_1 = x_{i+1} - d_i$$
$$\text{Ratio: } \frac{d_i}{h_i} = \frac{1 - \frac{1}{\sqrt{3}}}{2}, \qquad d_i = \frac{h_i}{2}\left(1 - \frac{1}{\sqrt{3}}\right) \simeq 0.21135\,h_i$$
Therefore:
$$I_i = \int_{x_i}^{x_{i+1}} f(x)\,dx \simeq \frac{h_i}{2}\big(f(x_0) + f(x_1)\big)$$
Example with two intervals $[0, 1]$ and $[1, 2]$:
$$h_1 = 1, \quad d_1 = 0.21135h_1 = 0.21135, \quad x_0 = 0 + d_1 = d_1, \quad x_1 = 1 - d_1$$
$$h_2 = 1, \quad d_2 = 0.21135 = d_1, \quad x_0 = 1 + d_1, \quad x_1 = 2 - d_1$$
For $f(x) = x^2$:
$$I_1 = \frac{1}{2}\big(f(x_0) + f(x_1)\big) = \frac{1}{2}\left(d_1^2 + (1 - d_1)^2\right) = 0.333$$
$$I_2 = \frac{1}{2}\left((1 + d_1)^2 + (2 - d_1)^2\right) = 2.333$$
$$I = I_1 + I_2 = 2.667$$
Theoretically:
$$\int_0^2 x^2\,dx = \left.\frac{x^3}{3}\right|_0^2 = \frac{8}{3} = 2.\overline{6}$$
Yani's cheap, plastic handout on 3 point Gauss Quadrature.
Starting point: there are 3 points, $-d$, $0$, and $d$ such that:
$$\int_{-1}^{1} f(x)\,dx = c_0f(-d) + c_1f(0) + c_0f(d) \quad \text{(i)}$$
Goal: determine $c_0$, $c_1$, and $d$.
Approach: try the following functions for f(x): $1, x, x^2, x^3, x^4, x^5$. Evaluate eq (i) for each of these instances of f(x). This will give us enough equations to solve for $c_0$, $c_1$, and $d$:
f(x) = 1: integral 2; $c_0 + c_1 + c_0 = 2c_0 + c_1$, so $2 = 2c_0 + c_1$ (1)
f(x) = x: integral 0; $c_0(-d) + 0 + c_0(d) = 0$, so $0 = 0$ (Doh!)
f(x) = $x^2$: integral 2/3; $2c_0d^2$, so $c_0d^2 = 1/3$ (2)
f(x) = $x^3$: integral 0; $0 = 0$ (Doh!)
f(x) = $x^4$: integral 2/5; $2c_0d^4$, so $c_0d^4 = 1/5$ (3)
f(x) = $x^5$: integral 0; $0 = 0$ (Doh!)
From eq. (2) and (3):
$$\frac{c_0d^4}{c_0d^2} = \frac{1/5}{1/3} \quad\Rightarrow\quad d^2 = \frac{3}{5}, \quad d = \sqrt{\frac{3}{5}}$$
Sub d into (2):
$$c_0 = \frac{1}{3d^2} = \frac{1}{3}\cdot\frac{5}{3} = \frac{5}{9}$$
Sub $c_0$ into (1):
$$c_1 = 2 - 2c_0 = 2 - \frac{10}{9} = \frac{8}{9}$$
Put it all together:
$$\int_{-1}^{1} f(x)\,dx \simeq \frac{5}{9}f\!\left(-\sqrt{\frac{3}{5}}\right) + \frac{8}{9}f(0) + \frac{5}{9}f\!\left(\sqrt{\frac{3}{5}}\right)$$
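Both Gauss rules can be checked on the polynomials used to derive them:

```python
import math

# Two- and three-point Gauss-Legendre rules on [-1, 1], as derived above.
def gauss2(f):
    d = 1 / math.sqrt(3)
    return f(-d) + f(d)

def gauss3(f):
    d = math.sqrt(3 / 5)
    return (5/9) * f(-d) + (8/9) * f(0) + (5/9) * f(d)

# Both rules are exact for the polynomials used to derive them:
v2 = gauss2(lambda x: x**3 + x**2)   # exact answer: 0 + 2/3
v3 = gauss3(lambda x: x**5 + x**4)   # exact answer: 0 + 2/5
print(v2, v3)
```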
Ordinary differential equations, e.g.:
$$\frac{\partial^2 x(t)}{\partial t^2} + \frac{\partial y(t)}{\partial t} + G = x(t)\,y(t)$$
t = Independent Variable; x, y = dependent variables.
Case 1: $f(x, y) = f(x)$, a function of only 1 variable. Easy to solve:
$$\int_{y_0}^{y} dy = \int_{x_0}^{x} f(x)\,dx \quad\Rightarrow\quad y(x) - y_0 = \int_{x_0}^{x} f(x)\,dx$$
Case 2: separable, $\dfrac{dy}{dx} = f(x, y) = g(x)\,h(y)$:
$$\int_{y_0}^{y} \frac{1}{h(y)}\,dy = \int_{x_0}^{x} g(x)\,dx$$
However, many 1st order ODEs are not part of these classes:
$$\frac{dy}{dx} = \underbrace{f(y, x)}_{\text{Not Separable}}$$
If not separable, numerical solutions are needed.
There are infinitely many solutions to a differential equation (like flow lines): f(x, y) is the slope of the function (field arrow diagram). At each value of x and y the slope of the function changes. Hence the need for initial conditions.
29.1. Local Truncation Error. Error in the trajectory of the solution due to having a finite step size ($\Delta x$).
With step size $\Delta x$, our next estimate of y(x) may be inaccurate. A more accurate point in the trajectory of y(x) will occur if the step size is smaller.
29.2. Propagated (Accumulated) Truncation Error. Each additional step increases the truncation error:
1st step from $x_0 \to x_0 + \Delta x$ generates $e_1$ = local truncation error of step 1
2nd step from $x_0 \to x_0 + 2\Delta x$ generates $e_2$ = local truncation error of step 1 + local truncation error of step 2
3rd step from $x_0 \to x_0 + 3\Delta x$ generates $e_3$ = local truncation errors of steps 1, 2 and 3
The approximate value of y at each additional point is not on the true trajectory based on $(x_0, y_0)$; the slope used on the next line is slightly in error.
$$\frac{dy}{dx} = f(y, x)$$
$$y_1 = y_0 + \underbrace{f(x_0, y_0)}_{\text{slope}}(x_1 - x_0)$$
$$y_2 = y_1 + f(x_1, y_1)(x_2 - x_1)$$
There is a small error in each slope evaluation, since the calculated slope is not equal to that of the real trajectory.
In general:
$$y_{i+1} = \underbrace{y_i}_{\text{Old answer}} + \underbrace{f(x_i, y_i)}_{\text{slope}}\underbrace{(x_{i+1} - x_i)}_{h}$$
Comparing with the Taylor series
$$y_{i+1} = y_i + \frac{dy}{dx}(x_{i+1} - x_i) + \frac{1}{2}\frac{d^2y}{dx^2}h^2 + \dots$$
the local truncation error is the second order term:
$$E = \frac{1}{2}\frac{d^2y}{dx^2}h^2$$
Example: $\dfrac{dx}{dt} = v(t, x)$ (velocity):
$$x_1 \simeq x_0 + v(t_0, x_0)(t_1 - t_0)$$
if $\dfrac{dx}{dt} = 3$ = constant, $x_0 = 0$, at t=1, x=?
$$x_1 \simeq (0 + 3)\cdot 1 = 3, \qquad x_2 \simeq x_1 + v(t_1, x_1)(t_2 - t_1) \simeq 3 + (3)(1) = 6$$
if $\dfrac{dx}{dt} = t + 1$, $x_0 = 0$, at t=1,2,3, x=?
$$x_1 \simeq 0 + (0 + 1)\cdot 1 = 1$$
$$x_2 \simeq 1 + (1 + 1)\cdot 1 = 3$$
$$x_3 \simeq 3 + (2 + 1)\cdot 1 = 6$$
Euler's Backward (implicit):
$$x_{i+1} = x_i + V(t_{i+1}, x_{i+1})(t_{i+1} - t_i)$$
Central Difference (Trapezoidal):
$$x_{i+1} = x_i + \frac{1}{2}\left[V(t_i, x_i) + V(t_{i+1}, x_{i+1})\right](t_{i+1} - t_i)$$
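The forward Euler hand-iteration above can be sketched as a loop:

```python
# Forward Euler sketch for dx/dt = t + 1 with x(0) = 0 and step dt = 1,
# reproducing the hand iteration (x = 1, 3, 6 at t = 1, 2, 3).
def euler(v, t0, x0, dt, n_steps):
    t, x, history = t0, x0, []
    for _ in range(n_steps):
        x = x + v(t, x) * dt    # slope evaluated at the old point
        t = t + dt
        history.append(x)
    return history

hist = euler(lambda t, x: t + 1, 0.0, 0.0, 1.0, 3)
print(hist)   # [1.0, 3.0, 6.0]
```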
² Class was getting boring at this point. Yani was getting ready to go to the hospital for his son to be born, and we got a sub for a bit. More to follow.
Shortly after Yani returned, I had to attend my grandfather's funeral [doing MATLAB assignments on the plane =( ] and I believe I missed two more classes.
Anyone willing to donate missing notes is welcome; use the email provided in the introduction. I might release the TeX document later too...
Example:
$$\frac{d^3x}{dt^3}x^2t + \frac{d^2x}{dt^2} + t^2x = 0$$
Solve via the 4th order classical RK method. First isolate the highest derivative:
$$\frac{d^3x}{dt^3} = -\frac{1}{x^2t}\frac{d^2x}{dt^2} - \frac{t}{x}$$
31.1. Step 1. Transform the 3rd order ODE into a system of 1st order ODEs using vectors.
Change of variables:
$$x_1 = x, \qquad x_2 = \frac{dx}{dt}, \qquad x_3 = \frac{d^2x}{dt^2}$$
$$\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = x_3, \qquad \frac{dx_3}{dt} = -\frac{x_3}{x_1^2t} - \frac{t}{x_1}$$
In the form:
$$\frac{d}{dt}\bar{x} = \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_2 \\ x_3 \\ -\frac{x_3}{x_1^2t} - \frac{t}{x_1} \end{pmatrix} = F(\bar{x}, t)$$
31.2. At iteration i:
$$\bar{k}_{1,i} = F(t_i, \bar{x}_i) = \begin{pmatrix} x_{2,i} \\ x_{3,i} \\ -\frac{x_{3,i}}{x_{1,i}^2t_i} - \frac{t_i}{x_{1,i}} \end{pmatrix}$$
$$\bar{k}_{2,i} = F\Big(\underbrace{t_i + \frac{\Delta t}{2}}_{\text{Plus half step}},\; \underbrace{\bar{x}_i + \frac{\Delta t}{2}\bar{k}_{1,i}}_{\text{Estimate next } \bar{x}}\Big)$$
Simplify by naming the components of the shifted state: $U = x_{1,i} + \frac{\Delta t}{2}x_{2,i}$, $V = x_{2,i} + \frac{\Delta t}{2}x_{3,i}$, $W = x_{3,i} + \frac{\Delta t}{2}\left(-\frac{x_{3,i}}{x_{1,i}^2t_i} - \frac{t_i}{x_{1,i}}\right)$. Therefore:
$$\bar{k}_{2,i} = F\Big(t_i + \frac{\Delta t}{2},\; \begin{pmatrix} U \\ V \\ W \end{pmatrix}\Big) = \begin{pmatrix} V \\ W \\ -\frac{W}{U^2\left(t_i + \frac{\Delta t}{2}\right)} - \frac{t_i + \frac{\Delta t}{2}}{U} \end{pmatrix}$$
$\bar{k}_{3,i}$ and $\bar{k}_{4,i}$ follow the same pattern, and finally:
$$\bar{x}_{i+1} = \bar{x}_i + \Delta t\,\frac{\bar{k}_1 + 2\bar{k}_2 + 2\bar{k}_3 + \bar{k}_4}{6}$$
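The classical RK4 step can be sketched generically for any first-order system; here it is exercised on a simple test system of my choosing (the harmonic oscillator $x' = v$, $v' = -x$), not the pendulum or the example above:

```python
import math

# Classical 4th-order Runge-Kutta for a first-order system dx/dt = F(t, x).
def rk4_step(F, t, x, dt):
    k1 = F(t, x)
    k2 = F(t + dt/2, [xi + dt/2 * ki for xi, ki in zip(x, k1)])
    k3 = F(t + dt/2, [xi + dt/2 * ki for xi, ki in zip(x, k2)])
    k4 = F(t + dt,   [xi + dt   * ki for xi, ki in zip(x, k3)])
    return [xi + dt * (a + 2*b + 2*c + d) / 6
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

F = lambda t, x: [x[1], -x[0]]    # harmonic oscillator: x'' = -x
t, x = 0.0, [1.0, 0.0]
n = 300
dt = math.pi / n
for _ in range(n):                # integrate from t = 0 to t = pi
    x = rk4_step(F, t, x, dt)
    t += dt
print(x)   # ~ [-1.0, 0.0] = [cos(pi), -sin(pi)]
```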
31.3. Example. Write the recursive equation for solving the angle of a pendulum in time.
A) Forward Euler. The pendulum equation (with damping coefficient, written here as $\mu$; the original symbol was lost in extraction):
$$ML^2\frac{d^2\theta}{dt^2} + \mu\frac{d\theta}{dt} + MLg\sin(\theta) = 0$$
$$\frac{d^2\theta}{dt^2} = -\frac{\mu}{ML^2}\frac{d\theta}{dt} - \frac{g\sin(\theta)}{L}$$
Let: $x_1 = \theta$, $x_2 = \theta'$:
$$\frac{d}{dt}\bar{x} = \begin{pmatrix} x_2 \\ \underbrace{-\frac{\mu}{ML^2}}_{a}x_2 + \underbrace{\left(-\frac{g}{L}\right)}_{b}\sin(x_1) \end{pmatrix} = \begin{pmatrix} x_2 \\ ax_2 + b\sin(x_1) \end{pmatrix} = F(\bar{x}, t)$$
Forward Euler: $\bar{x}_{i+1} = \bar{x}_i + \Delta t\,F(\bar{x}_i, t_i)$:
$$\begin{pmatrix} x_{1,i+1} \\ x_{2,i+1} \end{pmatrix} = \begin{pmatrix} x_{1,i} \\ x_{2,i} \end{pmatrix} + \Delta t\begin{pmatrix} x_{2,i} \\ ax_{2,i} + b\sin(x_{1,i}) \end{pmatrix} = \begin{pmatrix} x_{1,i} + \Delta t\,x_{2,i} \\ x_{2,i} + \Delta t\,\big(ax_{2,i} + b\sin(x_{1,i})\big) \end{pmatrix}$$
32.2. Example.
Time (t): 0, 1, 2, 3; Measurement (x): 2, 1, 0, 1.
Model:
$$\begin{pmatrix} y_1 \\ \vdots \\ y_N \end{pmatrix} = \begin{pmatrix} f_1(x_1) & \dots & f_K(x_1) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \dots & f_K(x_N) \end{pmatrix}\begin{pmatrix} P_1 \\ \vdots \\ P_K \end{pmatrix}, \qquad M = AP$$
pseudo inverse: $A^TM = (A^TA)P$, so $P = (A^TA)^{-1}A^TM$.
In our case $f_1(t) = 1$, $f_2(t) = t$, $P = \begin{pmatrix} a \\ b \end{pmatrix}$:
$$\begin{pmatrix} 2 \\ 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}$$
$$A^TA = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} 4 & 6 \\ 6 & 14 \end{pmatrix}, \qquad A^TM = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} 2 \\ 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \end{pmatrix}$$
Instead of inverting, solve with LU decomposition. Eliminating with $r_2 - \frac{3}{2}r_1$:
$$U = \begin{pmatrix} 4 & 6 \\ 0 & 5 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 \\ \frac{3}{2} & 1 \end{pmatrix}$$
Test: $LU = \begin{pmatrix} 1 & 0 \\ \frac{3}{2} & 1 \end{pmatrix}\begin{pmatrix} 4 & 6 \\ 0 & 5 \end{pmatrix} = \begin{pmatrix} 4 & 6 \\ 6 & 14 \end{pmatrix} = A^TA$.
Now solve $LU\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \end{pmatrix}$; first $L\bar{Y} = \begin{pmatrix} 4 \\ 4 \end{pmatrix}$:
$$y_1 = 4, \qquad \frac{3}{2}y_1 + y_2 = 4 \;\Rightarrow\; y_2 = -2, \qquad \bar{Y} = \begin{pmatrix} 4 \\ -2 \end{pmatrix}$$
Then $U\begin{pmatrix} a \\ b \end{pmatrix} = \bar{Y}$:
$$5b = -2 \;\Rightarrow\; b = -\frac{2}{5}, \qquad 4a + 6b = 4 \;\Rightarrow\; a = 1.6$$
Lagrangian interpolation:
$$f_N(x) = \sum_{n=0}^{N} Y_n\prod_{i=0,\, i\ne n}^{N}\frac{x - x_i}{x_n - x_i} = Y_0\frac{(x - x_1)(x - x_2)\dots}{(x_0 - x_1)(x_0 - x_2)\dots} + Y_1\frac{(x - x_0)(x - x_2)\dots}{(x_1 - x_0)(x_1 - x_2)\dots} + \dots$$
Spline interpolation
- 1 equation / point
- make endpoint first derivatives the same: $\lim_{x \to x_i^-} f'(x) = \lim_{x \to x_i^+} f'(x)$
- if cubic, set second derivatives the same
- set $f''(x) = 0$ at the first point
5) Integration
- Rectangular
- Trapezoidal
- Simpson's
- Gaussian Quadrature
(Simpson's and quadrature are given on the formula sheet)
6) ODE
- Euler
- Backwards: $f(x_{i+1}) \simeq f(x_0) + f'(x_0)\Delta x$
- Trapezoidal: $f(x_{i+1}) \simeq f(x_0) + \dfrac{f'(x_0) + f'(x_{guess})}{2}\Delta x$