Why K-Notes?
Towards the end of preparation, a student no longer has time to revise all the chapters from his/her class notes or standard textbooks. This is why K-Notes is specifically intended for Quick Revision and should not be considered comprehensive study material.
A notebook of 40 pages or less for each subject contains all concepts covered in the GATE curriculum in a concise manner, to aid a student in the final stages of his/her preparation. It is highly useful for students as well as working professionals preparing for GATE, as it comes in handy while travelling long distances.
When do I start using K-Notes?
It is highly recommended to use K-Notes in the last 2 months before the GATE exam (November end onwards).
How do I use K-Notes?
Once you finish the entire K-Notes for a particular subject, you should practice the respective Subject Test / Mixed Question Bag containing questions from all the chapters to make the best use of it.
LINEAR ALGEBRA
MATRICES
A matrix is a rectangular array of numbers (or functions) enclosed in brackets. These numbers (or functions) are called the entries or elements of the matrix.
Example:
[ 2   0.4   8 ]
[ 5   -32   0 ]
order = 2 × 3, 2 = no. of rows, 3 = no. of columns
1. Square Matrix
An m × n matrix is called a square matrix if m = n, i.e., no. of rows = no. of columns.
The elements aij with i = j (a11, a22, .........) are called diagonal elements.
Example:
[ 1   2 ]
[ 4   5 ]
2. Diagonal Matrix
A square matrix in which all non-diagonal elements are zero; the diagonal elements may or may not be zero.
Example:
[ 1   0 ]
[ 0   5 ]
Properties
a. diag [x, y, z] + diag [p, q, r] = diag [x + p, y + q, z + r]
b. diag [x, y, z] × diag [p, q, r] = diag [xp, yq, zr]
c. (diag [x, y, z])^(-1) = diag [1/x, 1/y, 1/z]
d. (diag [x, y, z])^T = diag [x, y, z]
3. Scalar Matrix
A diagonal matrix in which all diagonal elements are equal.
4. Identity Matrix
A diagonal matrix all of whose diagonal elements are 1. Denoted by I.
Properties
a. AI = IA = A
b. I^n = I
c. I^(-1) = I
d. det(I) = 1
5. Null Matrix
An m × n matrix all of whose elements are zero. Denoted by O.
Properties:
a. A + O = O + A = A
b. A + (–A) = O
6. Upper Triangular Matrix
A square matrix whose elements below the diagonal are all zero.
Example:
[ 3   4   5 ]
[ 0   6   7 ]
[ 0   0   9 ]
7. Lower Triangular Matrix
A square matrix whose elements above the diagonal are all zero.
Example:
[ 3   0   0 ]
[ 4   6   0 ]
[ 5   7   9 ]
8. Idempotent Matrix
A matrix is called idempotent if A² = A.
Example:
[ 1   0 ]
[ 0   1 ]
9. Involutory Matrix
A matrix is called involutory if A² = I.
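Both definitions are one-line checks in code. A minimal NumPy sketch (the projection matrix P and reflection matrix R below are illustrative choices, not from the notes):

```python
import numpy as np

# A projection matrix is idempotent: applying it twice changes nothing.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)          # A^2 = A

# A reflection (row-swap) matrix is involutory: it is its own inverse.
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(R @ R, np.eye(2))  # A^2 = I
```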
Matrix Equality
Two matrices A(m×n) and B(p×q) are equal if m = p, n = q and aij = bij for all i, j.
Addition of Matrices
For addition to be performed, both matrices must have the same size.
If [C] = [A] + [B],
then cij = aij + bij
i.e., elements in the same position in the two matrices are added.
Subtraction of Matrices
[C] = [A] – [B] = [A] + [–B]
The difference is obtained by subtracting each element of B from the corresponding element of A. Hence here also, the matrices must have the same size.
Scalar Multiplication
The product of any m × n matrix A = [ajk] and any scalar c, written cA, is the m × n matrix cA = [c·ajk] obtained by multiplying each entry in A by c.
Multiplication of two matrices
Let A(m×n) and B(p×q) be two matrices and C = AB; for the multiplication to be defined, n = p, and C has order m × q.
Properties
a. In general, AB ≠ BA
b. (AB)C = A(BC)
c. A(B + C) = AB + AC
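The size rule can be seen directly with NumPy (the shapes below are chosen purely for illustration):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # order 2 x 3
B = np.arange(12).reshape(3, 4)   # order 3 x 4: n = p = 3, so AB is defined
C = A @ B                         # result has order m x q = 2 x 4

# BA is not defined here: the inner dimensions (4 and 2) do not match.
```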
Transpose of a matrix
If we interchange the rows and columns of a matrix, we obtain its transpose.
e.g., A =
[ 1   3 ]
[ 2   4 ]
[ 6   5 ]
A^T =
[ 1   2   6 ]
[ 3   4   5 ]
Conjugate of a matrix
The matrix conj(A) obtained by replacing each element of matrix A by its complex conjugate.
Properties
a. conj(conj(A)) = A
b. conj(A + B) = conj(A) + conj(B)
c. conj(KA) = conj(K)·conj(A)
d. conj(AB) = conj(A)·conj(B)
Transposed conjugate of a matrix
The transpose of the conjugate of a matrix is called the transposed conjugate. It is represented by Aᶿ.
a. (Aᶿ)ᶿ = A
b. (A + B)ᶿ = Aᶿ + Bᶿ
c. (KA)ᶿ = conj(K)·Aᶿ
d. (AB)ᶿ = Bᶿ·Aᶿ
Trace of matrix
The trace of a matrix is the sum of all diagonal elements of the matrix.
a. Symmetric Matrix: A^T = A
b. Skew-symmetric matrix: A^T = –A
Properties
a. If A & B are symmetric, then (A + B) & (A – B) are also symmetric.
b. For any matrix A, A·A^T is always symmetric.
c. For any matrix A, (A + A^T)/2 is symmetric & (A – A^T)/2 is skew-symmetric.
d. For orthogonal matrices, A^(-1) = A^T.
Minor of element a21:
M21 = | a12   a13 |
      | a32   a33 |
Co-factor of an element aij = (–1)^(i+j) · Mij
Determinant
Suppose we need to calculate a 3 × 3 determinant. It can be expanded along any row, e.g.
|A| = Σ(j=1 to 3) a1j·cof(a1j) = Σ(j=1 to 3) a2j·cof(a2j) = Σ(j=1 to 3) a3j·cof(a3j)
Properties
a. The value of a determinant is invariant under interchange of rows & columns, i.e., |A^T| = |A|.
b. If any row or column is completely zero, then |A| = 0.
c. If two rows or columns are interchanged, the value of the determinant is multiplied by –1.
d. If one row or column of a matrix is multiplied by k, the determinant also becomes k times.
e. If A is a matrix of order n × n, then |KA| = K^n·|A|.
f. The value of a determinant is invariant under row or column transformations.
g. |AB| = |A| · |B|
h. |A^n| = |A|^n
i. |A^(-1)| = 1 / |A|
Inverse of a matrix
The inverse of a matrix exists only for square matrices with |A| ≠ 0.
A^(-1) = Adj(A) / |A|
Properties
a. A·A^(-1) = A^(-1)·A = I
b. (AB)^(-1) = B^(-1)·A^(-1)
c. The inverse of a 2 × 2 matrix should be remembered:
[ a   b ]^(-1)  =  1/(ad – bc) · [  d   –b ]
[ c   d ]                       [ –c    a ]
I. Divide by the determinant.
II. Interchange the diagonal elements.
III. Take the negative of the off-diagonal elements.
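The three steps translate into a few lines of code; a minimal sketch (the helper name inv2 and the sample matrix are assumptions for illustration):

```python
def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] by the remembered formula."""
    det = a * d - b * c            # I. divide by the determinant
    if det == 0:
        raise ValueError("matrix is singular, no inverse")
    return [[ d / det, -b / det],  # II. swap the diagonal elements
            [-c / det,  a / det]]  # III. negate the off-diagonal elements

inv = inv2(4, 7, 2, 6)   # determinant = 4*6 - 7*2 = 10
```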
Rank of a Matrix
a. Rank is defined for all matrices, not necessarily square matrices.
b. If A is a matrix of order m × n, then Rank (A) ≤ min (m, n).
c. A number r is said to be the rank of matrix A if and only if
   there is at least one square sub-matrix of A of order r whose determinant is non-zero, and
   the determinant of every sub-matrix of order (r + 1) is 0.
Note
Let X1, X2, ....., Xn be n vectors of matrix A.
If rank(A) = no. of vectors, then the vectors X1, X2, ....., Xn are L.I. (linearly independent).
If rank(A) < no. of vectors, then the vectors X1, X2, ....., Xn are L.D. (linearly dependent).
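This rank test is easy to try with NumPy's matrix_rank (the sample vectors below are an illustrative assumption):

```python
import numpy as np

# rows: X1 = (1, 0, 1), X2 = (0, 1, 1), X3 = X1 + X2 = (1, 1, 2)
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])
r = np.linalg.matrix_rank(X)
# rank 2 < 3 vectors, so X1, X2, X3 are linearly dependent
```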
System of Linear Equations
There are two types of linear equations.
Homogeneous equations
a11 x1 + a12 x2 + .......... + a1n xn = 0
a21 x1 + a22 x2 + .......... + a2n xn = 0
------------------------------------
am1 x1 + am2 x2 + .......... + amn xn = 0
This is a system of m homogeneous equations in n variables.
Let A = [aij] be the m × n coefficient matrix, x = [x1, x2, ....., xn]^T and 0 = [0, 0, ....., 0]^T (m × 1).
This system can be represented as AX = 0.
Important Facts
An inconsistent system is not possible for a homogeneous system, as the trivial solution [x1, x2, ....., xn]^T = [0, 0, ....., 0]^T always exists.
Non-Homogeneous Equations
a11 x1 + a12 x2 + .......... + a1n xn = b1
a21 x1 + a22 x2 + .......... + a2n xn = b2
-------------------------------------
am1 x1 + am2 x2 + .......... + amn xn = bm
Let A = [aij] (m × n), X = [x1, ....., xn]^T and B = [b1, ....., bm]^T, so the system is AX = B.
The solution of the system of equations can be obtained by using the Gauss elimination method. (Not required for GATE)
Note
Let A be n × n and rank(A) = r; then the number of L.I. solutions of Ax = 0 is n – r.
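A quick check of the n – r rule, assuming NumPy is available (the matrix and the null-space vector below are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # row 2 = 2 * row 1, so the rank drops to 2
              [1, 1, 1]])
n = A.shape[1]
r = np.linalg.matrix_rank(A)       # r = 2 here
num_independent_solutions = n - r  # dimension of the solution space of Ax = 0

# one such solution, found by hand: x = (1, -2, 1)
x = np.array([1, -2, 1])
assert np.allclose(A @ x, 0)
```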
Eigen Values & Eigen Vectors
Ax = λx is called the Eigen value problem,
where λ is called an Eigen value of A
and x is called an Eigen vector of A.
Characteristic polynomial: |A – λI|
Characteristic equation: |A – λI| = 0
The roots of the characteristic equation are called the characteristic roots or the Eigen values.
To find the Eigen vectors, we need to solve
(A – λI) x = 0
This is a system of homogeneous linear equations.
We substitute each value of λ one by one & calculate the Eigen vector corresponding to each Eigen value.
Important Facts
a. If x is an Eigen vector of A corresponding to λ, then Kx is also an Eigen vector, where K is a non-zero constant.
b. If an n × n matrix has n distinct Eigen values, we have n linearly independent Eigen vectors.
c. Eigen values of a Hermitian / symmetric matrix are real.
d. Eigen values of a skew-Hermitian / skew-symmetric matrix are purely imaginary or zero.
e. Eigen values of a unitary or orthogonal matrix are such that |λ| = 1.
f. If λ1, λ2, ......., λn are Eigen values of A, then kλ1, kλ2, ......., kλn are Eigen values of kA.
g. If λ1, λ2, ......., λn are Eigen values of A, then 1/λ1, 1/λ2, ......., 1/λn are Eigen values of A^(-1).
h. If λ1, λ2, ....., λn are Eigen values of A, then |A|/λ1, |A|/λ2, ........., |A|/λn are Eigen values of Adj(A).
i. Sum of Eigen values = Trace (A)
j. Product of Eigen values = |A|
k. In a triangular or diagonal matrix, the Eigen values are the diagonal elements.
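Facts c, i and j above can be verified numerically; a minimal sketch with an assumed 2 × 2 symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric, so its eigenvalues must be real
vals = np.linalg.eigvals(A)

assert np.allclose(vals.imag, 0)                  # fact c: real eigenvalues
assert np.isclose(vals.sum(), np.trace(A))        # fact i: sum = trace
assert np.isclose(vals.prod(), np.linalg.det(A))  # fact j: product = |A|
```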
Cayley-Hamilton Theorem
If the characteristic equation of A is
C1·λ^n + C2·λ^(n–1) + ...... + Cn = 0
then
C1·A^n + C2·A^(n–1) + ...... + Cn·I = O
where I is the identity matrix and O is the null matrix.
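For a 2 × 2 matrix the characteristic equation is λ² – trace(A)·λ + |A| = 0, so the theorem can be checked directly (the sample matrix is assumed for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Cayley-Hamilton for 2x2: A^2 - trace(A)*A + det(A)*I = O
residual = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
assert np.allclose(residual, 0)
```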
CALCULUS
Important Series Expansions
a. (1 – x)^(-1) = 1 + x + x² + ............
b. a^x = 1 + x·log a + (x·log a)²/2! + (x·log a)³/3! + ................
c. sin x = x – x³/3! + x⁵/5! – .................
d. cos x = 1 – x²/2! + x⁴/4! – ......................
e. tan x = x + x³/3 + 2x⁵/15 + .........
f. log (1 + x) = x – x²/2 + x³/3 – ............, |x| < 1
Important Limits
a. lim(x→0) (sin x)/x = 1
b. lim(x→0) (tan x)/x = 1
c. lim(x→0) (1 + nx)^(1/x) = e^n
d. lim(x→0) cos x = 1
e. lim(x→0) (1 + x)^(1/x) = e
f. lim(x→∞) (1 + 1/x)^x = e
L'Hospital's Rule
If f(x) and g(x) are two functions such that
lim(x→a) f(x) = 0 and lim(x→a) g(x) = 0,
then
lim(x→a) f(x)/g(x) = lim(x→a) f'(x)/g'(x)
If f'(x) and g'(x) are also zero as x → a, then we can take successive derivatives till this condition is violated.
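A quick numerical look at limit (a), whose value L'Hospital's rule predicts to be 1 (the step sizes below are arbitrary):

```python
import math

# sin(x)/x -> 1 as x -> 0, as L'Hospital's rule predicts (cos x / 1 -> 1)
for x in (0.1, 0.01, 0.001):
    ratio = math.sin(x) / x
    assert abs(ratio - 1.0) < x * x   # the error shrinks roughly like x^2/6
```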
For continuity, lim(x→a) f(x) = f(a).
For differentiability, lim(h→0) [f(x0 + h) – f(x0)] / h exists and is equal to f'(x0).
If a function is differentiable at some point, then it is continuous at that point, but the converse may not be true.
Rolle's Theorem
If f(x) is continuous in [a, b], f'(x) exists at every point in the open interval a < x < b, and f(a) = f(b), then there exists a point c such that f'(c) = 0 and a < c < b.
Lagrange's Mean Value Theorem
Under the same continuity & differentiability conditions, there exists a point c, a < c < b, such that
f'(c) = [f(b) – f(a)] / (b – a)
Differentiation
Properties: (f + g)’ = f’ + g’ ; (f – g)’ = f’ – g’ ; (f g)’ = f’ g + f g’
Important derivatives
a. xⁿ → n·x^(n–1)
b. ln x → 1/x
c. loga x → (loga e)·(1/x)
d. eˣ → eˣ
e. aˣ → aˣ·loge a
f. sin x → cos x
g. cos x → –sin x
h. tan x → sec² x
i. sec x → sec x tan x
j. cosec x → –cosec x cot x
k. cot x → –cosec² x
l. sinh x → cosh x
m. cosh x → sinh x
n. sin⁻¹ x → 1 / √(1 – x²)
o. cos⁻¹ x → –1 / √(1 – x²)
p. tan⁻¹ x → 1 / (1 + x²)
q. cosec⁻¹ x → –1 / (x·√(x² – 1))
r. sec⁻¹ x → 1 / (x·√(x² – 1))
s. cot⁻¹ x → –1 / (1 + x²)
Increasing & Decreasing Functions
If f'(x) ≥ 0 ∀ x ∈ (a, b), then f is increasing in [a, b].
If f'(x) > 0 ∀ x ∈ (a, b), then f is strictly increasing in [a, b].
If f'(x) ≤ 0 ∀ x ∈ (a, b), then f is decreasing in [a, b].
If f'(x) < 0 ∀ x ∈ (a, b), then f is strictly decreasing in [a, b].
Absolute maxima & minima
For a given interval, e.g. [a, b], we find f(a) & f(b) & compare them with the values of the local maxima & minima; the absolute maxima & minima can then be decided.
Taylor & Maclaurin series
Taylor series
f(x) = f(a) + (x – a)·f'(a) + [(x – a)²/2!]·f''(a) + ..................
Maclaurin series
f(x) = f(0) + x·f'(0) + (x²/2!)·f''(0) + ..................
Euler's Theorem
If u is a homogeneous function of x & y of degree n, then
x·∂u/∂x + y·∂u/∂y = n·u
Maxima & minima of f(x, y)
With r = ∂²f/∂x², s = ∂²f/∂x∂y, t = ∂²f/∂y² at a stationary point:
Maxima: rt > s² ; r < 0
Minima: rt > s² ; r > 0
Saddle point: rt < s²
Integration
Indefinite integrals are just the opposite of derivatives, and hence the important derivatives must always be remembered.
Properties of definite integrals
a. ∫[a to b] f(x) dx = ∫[a to b] f(t) dt
b. ∫[a to b] f(x) dx = –∫[b to a] f(x) dx
c. ∫[a to b] f(x) dx = ∫[a to c] f(x) dx + ∫[c to b] f(x) dx
d. ∫[a to b] f(x) dx = ∫[a to b] f(a + b – x) dx
e. d/dt ∫[φ(t) to ψ(t)] f(x) dx = f(ψ(t))·ψ'(t) – f(φ(t))·φ'(t)   (Leibniz rule)
Vectors
Addition of vectors
a + b of two vectors a = (a1, a2, a3) and b = (b1, b2, b3) is (a1 + b1, a2 + b2, a3 + b3).
Unit vector
â = a / |a|
Dot Product
a · b = |a| |b| cos γ, where γ is the angle between a & b.
Properties
a. |a · b| ≤ |a| |b|   (Schwarz inequality)
b. |a + b| ≤ |a| + |b|   (Triangle inequality)
c. |a + b|² + |a – b|² = 2(|a|² + |b|²)   (Parallelogram law)
Cross Product
v = a × b, |a × b| = |a| |b| sin γ
a × b = | î    ĵ    k̂  |
        | a1   a2   a3 |
        | b1   b2   b3 |
where a = (a1, a2, a3); b = (b1, b2, b3)
Properties:
a. a × b = –(b × a)
b. a × (b + c) = a × b + a × c
Scalar Triple Product
(a, b, c) = a · (b × c)
Vector Triple Product
a × (b × c) = (a · c) b – (a · b) c
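Both triple products can be verified numerically; a short NumPy sketch with arbitrary sample vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
c = np.array([2.0, 0.0, 1.0])

stp = np.dot(a, np.cross(b, c))            # scalar triple product (a, b, c)

lhs = np.cross(a, np.cross(b, c))          # a x (b x c)
rhs = np.dot(a, c) * b - np.dot(a, b) * c  # (a.c) b - (a.b) c
assert np.allclose(lhs, rhs)
```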
Directional derivative
The derivative of a scalar function f in the direction of b:
Db f = (b / |b|) · grad f
Curl of vector field
curl v = ∇ × v = | i      j      k    |
                 | ∂/∂x   ∂/∂y   ∂/∂z |
                 | v1     v2     v3   |
v = (v1, v2, v3)
Some identities
a. div grad f = ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²
b. curl grad f = ∇ × (∇f) = 0
c. div curl f = ∇ · (∇ × f) = 0
d. curl curl f = grad div f – ∇²f
Line Integral
∫C F(r) · dr = ∫[a to b] F(r(t)) · (dr/dt) dt
Here curve C is parameterized in terms of t; i.e., as t goes from a to b, curve C is traced.
∫C F(r) · dr = ∫[a to b] (F1·x' + F2·y' + F3·z') dt, F = (F1, F2, F3)
Green's Theorem
∬R (∂F2/∂x – ∂F1/∂y) dx dy = ∮C (F1 dx + F2 dy)
Stokes' Theorem
∬S (curl F) · n̂ dA = ∮C F · r'(s) ds
where n̂ is the unit normal vector of S and C is the curve which encloses the plane surface S.
DIFFERENTIAL EQUATIONS
The order of a differential equation is the order of the highest derivative appearing in it.
The degree of a differential equation is the degree of the highest-order derivative occurring in it, after the differential equation is expressed in a form free from radicals & fractions.
Variable separable form
If the equation can be written as f(x) dx + g(y) dy = 0, the solution is ∫ f(x) dx + ∫ g(y) dy = c.
Exact equations
M dx + N dy = 0 is exact if ∂M/∂y = ∂N/∂x. The solution is
∫ M dx (y constant) + ∫ (terms of N not containing x) dy = c
Integrating factors
An equation of the form
P(x, y) dx + Q(x, y) dy = 0
can be reduced to exact form by multiplying both sides by an integrating factor (IF).
If (1/Q)(∂P/∂y – ∂Q/∂x) is a function of x alone, say
R(x) = (1/Q)(∂P/∂y – ∂Q/∂x)
then the integrating factor is IF = exp(∫ R(x) dx).
Otherwise, if (1/P)(∂Q/∂x – ∂P/∂y) is a function of y alone, say
S(y) = (1/P)(∂Q/∂x – ∂P/∂y)
then the integrating factor is IF = exp(∫ S(y) dy).
Linear Differential Equations
An equation is linear if it can be written as
y' + P(x)·y = r(x)
If r(x) = 0, the equation is homogeneous and
y(x) = c·e^(–∫P(x)dx)
is the solution of the homogeneous form.
Otherwise, with h = ∫ P(x) dx,
y(x) = e^(–h)·[∫ e^h·r dx + c]
Bernoulli's equation
The equation
dy/dx + P·y = Q·yⁿ
where P & Q are functions of x.
Divide both sides of the equation by yⁿ & put y^(1–n) = z:
dz/dx + P(1 – n)·z = Q(1 – n)
This is a linear equation & can be solved easily.
Clairaut's equation
An equation of the form y = Px + f(P), where P = dy/dx, is known as Clairaut's equation.
The solution of this equation is
y = cx + f(c), where c = constant
Linear Differential Equations with Constant Coefficients
The complementary function (CF) is the general solution of
dⁿy/dxⁿ + k1·d^(n–1)y/dx^(n–1) + ............. + kn·y = 0
The particular integral (PI) is a particular solution of
dⁿy/dxⁿ + k1·d^(n–1)y/dx^(n–1) + ............ + kn·y = φ(x)
y = CF + PI is the complete solution.
Finding the complementary function
Method of differential operator
Replace d/dx by D → dy/dx = Dy
Similarly, dⁿ/dxⁿ by Dⁿ → dⁿy/dxⁿ = Dⁿy
Then dⁿy/dxⁿ + k1·d^(n–1)y/dx^(n–1) + ............ + kn·y = 0 becomes
(Dⁿ + k1·D^(n–1) + ........... + kn) y = 0
Let m1, m2, ............, mn be the roots of
Dⁿ + k1·D^(n–1) + ................ + kn = 0 ............. (i)
Case I: All roots are real & distinct
y = c1·e^(m1 x) + c2·e^(m2 x) + ........... + cn·e^(mn x)
Case II: Two roots are real & equal, m1 = m2 = m
y = (c1 + c2·x)·e^(mx) + c3·e^(m3 x) + ........ + cn·e^(mn x)
Case III: If two roots are complex conjugates, m1 = α + jβ ; m2 = α – jβ
y = e^(αx)·(c1' cos βx + c2' sin βx) + .......... + cn·e^(mn x)
Particular Integral
PI = y1·∫ [W1(x)/W(x)] dx + y2·∫ [W2(x)/W(x)] dx + .......... + yn·∫ [Wn(x)/W(x)] dx
where y1, y2, ............, yn are solutions of the homogeneous form of the differential equation and W(x) is their Wronskian:
W(x) = | y1         y2         ...   yn        |
       | y1'        y2'        ...   yn'       |
       | ...                                   |
       | y1^(n–1)   y2^(n–1)   ...   yn^(n–1)  |
Wi(x) is obtained from W(x) by replacing the ith column by all zeroes & a last 1, i.e., [0, 0, ....., 0, 1]^T.
Euler-Cauchy Equation
An equation of the form
xⁿ·dⁿy/dxⁿ + k1·x^(n–1)·d^(n–1)y/dx^(n–1) + .......... + kn·y = 0
Substituting y = x^m gives an auxiliary equation; let its roots be m1, ....., mn.
Case I: All roots are real & distinct
y = c1·x^m1 + c2·x^m2 + ........... + cn·x^mn
Case II: Two roots are real & equal, m1 = m2 = m
y = (c1 + c2·ln x)·x^m + c3·x^m3 + ........ + cn·x^mn
Case III: Two roots are complex conjugates of each other, m1 = α + jβ ; m2 = α – jβ
y = x^α·[A cos(β·ln x) + B sin(β·ln x)] + c3·x^m3 + ........... + cn·x^mn
COMPLEX FUNCTIONS
Exponential & logarithm
f(z) = e^z = e^(x+iy) = e^x·(cos y + i sin y)
If e^w = z, then log z = w + 2inπ.
The logarithm of a complex number has an infinite number of values.
The general value of the logarithm is denoted by Log z & the principal value is log z & is found from the general value by taking n = 0.
Analytic function
A function f(z) which is single-valued and possesses a unique derivative with respect to z at all points of a region R is called an analytic function.
If u & v are real, single-valued functions of x & y such that ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y are continuous throughout a region R, then the Cauchy-Riemann equations
∂u/∂x = ∂v/∂y ; ∂v/∂x = –∂u/∂y
are necessary & sufficient conditions for f(z) = u + iv to be analytic in R.
Cauchy's integral formula for derivatives
f^(n)(a) = (n!/2πi) ∮C f(z)/(z – a)^(n+1) dz
Laurent series
f(z) = Σn an·(z – a)ⁿ ; an = (1/2πi) ∮ f(t)/(t – a)^(n+1) dt
Singularities
z = z0 is an isolated singularity if there is no other singularity of f(z) in the neighborhood of z = z0.
Removable singularity
If all the negative powers of (z – a) are zero in the expansion of f(z),
f(z) = Σ(n ≥ 0) an·(z – a)ⁿ
the singularity at z = a can be removed by defining f(z) at z = a such that f(z) is analytic at z = a.
Poles
If all negative powers of (z – a) after the nth are missing, then z = a is a pole of order n.
Essential singularity
If the number of negative powers of (z – a) is infinite, then z = a is an essential singularity & cannot be removed.
RESIDUES
If z = a is an isolated singularity of f(z),
f(z) = a0 + a1·(z – a) + a2·(z – a)² + ............. + a_(-1)·(z – a)^(-1) + a_(-2)·(z – a)^(-2) + ...........
Then the residue of f(z) at z = a is a_(-1).
For a pole of order n,
Res(a) = [1/(n – 1)!] · lim(z→a) d^(n–1)/dz^(n–1) [(z – a)ⁿ·f(z)]
Residue Theorem
∮C f(z) dz = 2πi · Σ Res f(z), summed over the singularities inside C.
Integrals of the form I = ∫[0 to 2π] F(cos θ, sin θ) dθ
Assume z = e^(iθ), so that
cos θ = (z + 1/z)/2 ; sin θ = (z – 1/z)/(2i)
I = ∮c f(z) dz/(iz) = 2πi · Σ(k=1 to n) Res f(zk)
Residues should only be calculated at poles inside the unit circle.
Integrals of the form ∫(–∞ to ∞) f(x) dx = 2πi · Σ Res f(z)
where the residues are calculated at poles in the upper half plane & the poles of f(z) are found by substituting z in place of x in f(x).
PROBABILITY & STATISTICS
Types of events
Complementary events
E^c = S – E
The complement of an event E is the set of all outcomes not in E.
De Morgan's laws:
(∪(i=1 to n) Ei)^c = ∩(i=1 to n) Ei^c
(∩(i=1 to n) Ei)^c = ∪(i=1 to n) Ei^c
Axioms of Probability
E1, E2, ..........., En are possible events & S is the sample space.
a. 0 ≤ P(E) ≤ 1
b. P(S) = 1
c. P(∪(i=1 to n) Ei) = Σ(i=1 to n) P(Ei) for mutually exclusive events
Total Probability Theorem
If A & B partition the sample space,
P(E) = P(A ∩ E) + P(B ∩ E) = P(A)·P(E|A) + P(B)·P(E|B)
Bayes' Theorem
P(A|E) = P(A ∩ E) / [P(A ∩ E) + P(B ∩ E)] = P(A)·P(E|A) / [P(A)·P(E|A) + P(B)·P(E|B)]
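A worked example of both theorems (all the numbers below are hypothetical, chosen only for illustration):

```python
# Machines A and B produce 60% and 40% of all items; their defect rates
# are 2% and 5%. E = "a randomly chosen item is defective".
p_a, p_b = 0.6, 0.4
p_e_given_a, p_e_given_b = 0.02, 0.05

# Total probability theorem
p_e = p_a * p_e_given_a + p_b * p_e_given_b   # = 0.032

# Bayes' theorem: probability the defective item came from machine A
p_a_given_e = p_a * p_e_given_a / p_e         # = 0.375
```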
Statistics
Arithmetic Mean of raw data
x̄ = Σx / n
x̄ = arithmetic mean; x = value of observation; n = number of observations
Arithmetic Mean of grouped data
x̄ = Σfx / Σf ; f = frequency of each observation
Median of raw data
Arrange all the observations in ascending order:
x1 ≤ x2 ≤ ............ ≤ xn
If n is odd, median = [(n + 1)/2]th value
If n is even, median = [(n/2)th value + (n/2 + 1)th value] / 2
Standard deviation of grouped data
σ² = [N·Σ(fi·xi²) – (Σ fi·xi)²] / N²
fi = frequency of each observation
N = number of observations
variance = σ²
Coefficient of variation = CV = σ / x̄
Properties of discrete distributions
a. Σ P(x) = 1
b. E(X) = Σ x·P(x)
c. V(x) = E(x²) – [E(x)]²
Properties of continuous distributions
∫ f(x) dx = 1 (over the whole range)
F(x) = ∫(–∞ to x) f(t) dt = cumulative distribution
E(x) = ∫ x·f(x) dx = expected value of x
V(x) = E(x²) – [E(x)]² = variance of x
V(a·x1 + b·x2) = a²·V(x1) + b²·V(x2) (for independent x1, x2)
cov (x, y) = E(xy) – E(x)·E(y)
Binomial Distribution
no. of trials = n
Probability of success = P
Probability of failure = (1 – P)
P(X = x) = nCx · Pˣ · (1 – P)^(n–x)
Mean = E(X) = nP
Variance = V(X) = nP(1 – P)
Poisson Distribution
P(X = x) = e^(–λ)·λˣ / x!
Mean = E(x) = λ
Variance = V(x) = λ
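The binomial pmf, the normalization Σ P(x) = 1 and the mean nP can all be checked with a short script (n and P are arbitrary choices):

```python
from math import comb

n, p = 10, 0.3
# P(X = x) = nCx * p^x * (1 - p)^(n - x)
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

total = sum(pmf)                              # sums to 1
mean = sum(x * pmf[x] for x in range(n + 1))  # equals nP = 3
```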
Continuous Distributions
Uniform Distribution
f(x) = 1/(b – a) if a ≤ x ≤ b ; 0 otherwise
Mean = E(x) = (b + a)/2
Variance = V(x) = (b – a)²/12
Exponential Distribution
f(x) = λ·e^(–λx) if x ≥ 0 ; 0 if x < 0
Mean = E(x) = 1/λ
Variance = V(x) = 1/λ²
Normal Distribution
f(x) = [1/(σ√(2π))]·exp[–(x – μ)²/(2σ²)], –∞ < x < ∞
Mean = E(x) = μ
Variance = V(x) = σ²
Coefficient of correlation
ρ = cov(x, y) / √(var(x)·var(y))
x & y are linearly related if ρ = ±1
x & y are un-correlated if ρ = 0
Regression lines
x – x̄ = bxy·(y – ȳ)
y – ȳ = byx·(x – x̄)
where x̄ & ȳ are the mean values of x & y respectively,
bxy = cov(x, y)/var(y) ; byx = cov(x, y)/var(x)
ρ² = bxy·byx
NUMERICAL METHODS
Bisection Method
If a function f(x) is continuous between a & b and f(a) & f(b) are of opposite sign, then there exists at least one root of f(x) between a & b.
Take x0 = (a + b)/2. If f(x0) = 0, x0 is the root.
Else, if f(x0) has the same sign as f(a), then the root lies between x0 & b and we assume x1 = (x0 + b)/2 and follow the same procedure; otherwise, if f(x0) has the same sign as f(b), then the root lies between a & x0 & we assume x1 = (a + x0)/2 & follow the same procedure.
We keep on doing this till f(xn) ≈ 0, i.e., f(xn) is close to zero.
No. of steps required to achieve an accuracy ε:
n ≥ loge[(b – a)/ε] / loge 2
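The procedure above, including the step-count estimate, translates directly into code; a minimal sketch (the function and interval are chosen for illustration):

```python
from math import log, ceil

def bisect(f, a, b, eps=1e-6):
    """Bisection: f must change sign on [a, b]."""
    while b - a > eps:
        x0 = (a + b) / 2
        if f(x0) == 0:
            return x0
        if f(x0) * f(a) > 0:   # f(x0) has the same sign as f(a)
            a = x0             # root lies between x0 and b
        else:
            b = x0             # root lies between a and x0
    return (a + b) / 2

# root of f(x) = x^2 - 2 between 1 and 2, i.e. sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)

# steps needed for accuracy eps = 1e-6: n >= log((b - a)/eps) / log 2
steps = ceil(log((2.0 - 1.0) / 1e-6) / log(2))   # 20 here
```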
Regula-Falsi Method
This method is similar to the bisection method: we assume two values x0 & x1 such that f(x0)·f(x1) < 0.
x2 = [f(x1)·x0 – f(x0)·x1] / [f(x1) – f(x0)]
If f(x2) = 0, then x2 is the root; stop the process.
If f(x2) has the sign of f(x1), the root lies between x0 & x2:
x3 = [f(x2)·x0 – f(x0)·x2] / [f(x2) – f(x0)]
If f(x2) has the sign of f(x0), the root lies between x2 & x1:
x3 = [f(x1)·x2 – f(x2)·x1] / [f(x1) – f(x2)]
Continue the above process till the required root is found.
Secant Method
In the secant method, we remove the condition f(x0)·f(x1) < 0; the method does not guarantee the existence of a root in the given interval, so it is called an unreliable method.
x2 = [f(x1)·x0 – f(x0)·x1] / [f(x1) – f(x0)]
and to compute x3, shift every index up by one:
x3 = [f(x2)·x1 – f(x1)·x2] / [f(x2) – f(x1)]
Continue the above process till the required root is found.
Newton-Raphson Method
x(n+1) = xn – f(xn) / f'(xn)
Note: Since the Newton-Raphson iteration has quadratic convergence, to apply this formula f''(x) must exist.
Order of convergence
Bisection = linear
Regula-Falsi = linear
Secant = superlinear
Newton-Raphson = quadratic
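A minimal sketch of the iteration (the tolerance, iteration cap and sample root-finding problem are assumptions for illustration):

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(5) as the root of f(x) = x^2 - 5, starting from x = 2
root = newton(lambda x: x * x - 5, lambda x: 2 * x, 2.0)
```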
Numerical Integration
Trapezoidal Rule
∫[a to b] f(x) dx can be calculated as follows.
Divide the interval (a, b) into n sub-intervals such that the width of each interval is h = (b – a)/n.
We have (n + 1) points at the edges of the intervals: x0, x1, x2, .........., xn
y0 = f(x0); y1 = f(x1), ..................., yn = f(xn)
∫[a to b] f(x) dx ≈ (h/2)·[y0 + 2(y1 + y2 + .......... + y(n–1)) + yn]
Simpson's 1/3rd Rule
Here the number of intervals should be even.
h = (b – a)/n
∫[a to b] f(x) dx ≈ (h/3)·[y0 + 4(y1 + y3 + y5 + .......... + y(n–1)) + 2(y2 + y4 + ................ + y(n–2)) + yn]
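Both rules translate directly into code; a sketch using the test integral of sin x over [0, π], whose exact value is 2 (the function names and n = 100 are assumptions):

```python
from math import sin, pi

def trapezoid(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (y[0] + 2 * sum(y[1:-1]) + y[-1])

def simpson13(f, a, b, n):
    """Simpson's 1/3 rule; n must be even."""
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    odd = sum(y[1:-1:2])    # y1, y3, ..., y(n-1)
    even = sum(y[2:-1:2])   # y2, y4, ..., y(n-2)
    return (h / 3) * (y[0] + 4 * odd + 2 * even + y[-1])

t = trapezoid(sin, 0.0, pi, 100)
s = simpson13(sin, 0.0, pi, 100)
```

As the error orders predict, Simpson's rule is far closer to 2 than the trapezoidal rule for the same n.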
Simpson's 3/8th Rule
Here the number of intervals should be a multiple of 3.
∫[a to b] f(x) dx ≈ (3h/8)·[y0 + 3(y1 + y2 + y4 + y5 + ........ + y(n–1)) + 2(y3 + y6 + y9 + .............. + y(n–3)) + yn]
Truncation error
Trapezoidal Rule: T_bound = [(b – a)/12]·h²·max|f''| and order of error = 2
Simpson's 1/3 Rule: T_bound = [(b – a)/180]·h⁴·max|f^(iv)| and order of error = 4
Simpson's 3/8th Rule: T_bound = [(b – a)/80]·h⁴·max|f^(iv)| and order of error = 4
where the maxima are taken over x0 ≤ ξ ≤ xn.
Note: If the truncation error involves the nth-order derivative, the rule gives an exact result when integrating polynomials up to degree (n – 1).
Numerical Solution of Differential Equations
Notation: yi = y(xi) ; y(i+1) = y(x(i+1)) ; x(i+1) = xi + h
Euler's Method
y(i+1) = yi + h·f(xi, yi)
Modified Euler's Method (Heun's method)
y1 = y0 + (h/2)·[f(x0, y0) + f(x0 + h, y0 + h·f(x0, y0))]
Runge-Kutta Method (4th order)
k1 = h·f(x0, y0)
k2 = h·f(x0 + h/2, y0 + k1/2)
k3 = h·f(x0 + h/2, y0 + k2/2)
k4 = h·f(x0 + h, y0 + k3)
y1 = y0 + (1/6)·(k1 + 2k2 + 2k3 + k4)
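One step of each method can be coded directly; the sketch below integrates dy/dx = y from an assumed y(0) = 1 to x = 1, where the exact answer is e:

```python
from math import exp

def heun_step(f, x0, y0, h):
    """Modified Euler (Heun): average the slopes at both ends of the step."""
    k = f(x0, y0)
    return y0 + (h / 2) * (k + f(x0 + h, y0 + h * k))

def rk4_step(f, x0, y0, h):
    """Classical 4th-order Runge-Kutta step."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: y
y_heun = y_rk4 = 1.0
x, h = 0.0, 0.1
for _ in range(10):
    y_heun = heun_step(f, x, y_heun, h)
    y_rk4 = rk4_step(f, x, y_rk4, h)
    x += h
```

With the same step size, the RK4 estimate is several orders of magnitude closer to e than Heun's, reflecting its higher order.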