A DTS with input x(n) and output y(n) is described by the difference equation:

∑_{k=0}^{N} a_k·y(n−k) = ∑_{k=0}^{M} b_k·x(n−k),  n ≥ 0   (2)

For n = 0:

y(0) = (1/a_0)·[∑_{k=0}^{M} b_k·x(−k) − ∑_{k=1}^{N} a_k·y(−k)]

where y(−k), k = 1..N, represent the initial conditions for the difference equation and are assumed known.
Solution:
y(n) = (3/4)·y(n−1) − (1/8)·y(n−2) + (1/2)^n, with y(−1) = 1, y(−2) = 0

n=0 ⇒ y(0) = (3/4)·y(−1) − (1/8)·y(−2) + 1 = 7/4
n=1 ⇒ y(1) = (3/4)·y(0) − (1/8)·y(−1) + 1/2 = 27/16
n=2 ⇒ y(2) = (3/4)·y(1) − (1/8)·y(0) + 1/4 = 83/64
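The recursive computation above can be checked numerically. A minimal sketch with exact fractions; the initial conditions y(−1) = 1, y(−2) = 0 are an assumption inferred from the printed results (they reproduce y(0) = 7/4):

```python
# Evaluate y(n) = (3/4)·y(n-1) - (1/8)·y(n-2) + (1/2)^n by direct recursion.
from fractions import Fraction

def solve_recursion(n_max):
    # assumed initial conditions (consistent with y(0) = 7/4, y(1) = 27/16)
    y = {-1: Fraction(1), -2: Fraction(0)}
    for n in range(n_max + 1):
        y[n] = Fraction(3, 4) * y[n - 1] - Fraction(1, 8) * y[n - 2] + Fraction(1, 2) ** n
    return y

y = solve_recursion(2)
print(y[0], y[1], y[2])   # 7/4 27/16 83/64
```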
The solution of equation (1) or (2) has two components:
- the homogeneous solution y_h(n), which depends on the initial conditions (assumed known);
- the particular solution y_p(n), which depends on the input signal.
∑_{k=0}^{N} a_k·y(n−k) = 0
Trying y(n) = A·α^n:
⇒ ∑_{k=0}^{N} a_k·A·α^{n−k} = 0 ⇒ ∑_{k=0}^{N} a_k·α^{−k} = 0 — the characteristic equation
α_k are the roots of the characteristic equation, k = 1..N.
1) If the α_k are distinct roots, then y_h(n) = A_1·α_1^n + A_2·α_2^n + ... + A_N·α_N^n
2) If α_1 has multiplicity P_1, the remaining N − P_1 roots are distinct; then:
y_h(n) = A_1·α_1^n + A_2·n·α_1^n + ... + A_{P_1}·n^{P_1−1}·α_1^n + A_{P_1+1}·α_2^n + ... + A_N·α_{N−P_1+1}^n
Example I
Consider the equation:
y(n) − (13/12)·y(n−1) + (3/8)·y(n−2) − (1/24)·y(n−3) = 0, with initial conditions:
y(−1) = 6, y(−2) = 6, y(−3) = −2
Find the homogeneous solution of the equation.
Solution:
1 − (13/12)·α^{−1} + (3/8)·α^{−2} − (1/24)·α^{−3} = 0
⇒ α^3 − (13/12)·α^2 + (3/8)·α − 1/24 = 0 ⇒ α_1 = 1/2, α_2 = 1/3, α_3 = 1/4
y_h(n) = A_1·α_1^n + A_2·α_2^n + A_3·α_3^n ⇒ y_h(n) = A_1·(1/2)^n + A_2·(1/3)^n + A_3·(1/4)^n
n = −1 ⇒ y_h(−1) = 6 = A_1·(1/2)^{−1} + A_2·(1/3)^{−1} + A_3·(1/4)^{−1} = 2A_1 + 3A_2 + 4A_3
n = −2 ⇒ y_h(−2) = 6 = 4A_1 + 9A_2 + 16A_3
n = −3 ⇒ y_h(−3) = −2 = 8A_1 + 27A_2 + 64A_3
The system
2A_1 + 3A_2 + 4A_3 = 6
4A_1 + 9A_2 + 16A_3 = 6
8A_1 + 27A_2 + 64A_3 = −2
has the solutions A_1 = 7, A_2 = −10/3, A_3 = 1/2
⇒ y_h(n) = 7·(1/2)^n − (10/3)·(1/3)^n + (1/2)·(1/4)^n
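The 3×3 system above can be solved exactly; a small sketch using Gauss–Jordan elimination over fractions:

```python
# Solve for A1, A2, A3 in y_h(n) = A1·(1/2)^n + A2·(1/3)^n + A3·(1/4)^n
# from the initial conditions y(-1) = 6, y(-2) = 6, y(-3) = -2.
from fractions import Fraction

alphas = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]
A = [[a ** n for a in alphas] for n in (-1, -2, -3)]   # rows [2,3,4], [4,9,16], [8,27,64]
b = [Fraction(6), Fraction(6), Fraction(-2)]

# Gauss-Jordan elimination with exact arithmetic
for i in range(3):
    piv = A[i][i]
    A[i] = [x / piv for x in A[i]]
    b[i] /= piv
    for j in range(3):
        if j != i:
            f = A[j][i]
            A[j] = [u - f * v for u, v in zip(A[j], A[i])]
            b[j] -= f * b[i]

print(b)   # [Fraction(7, 1), Fraction(-10, 3), Fraction(1, 2)]
```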
Example II
Consider the equation:
y(n) − (5/4)·y(n−1) + (1/2)·y(n−2) − (1/16)·y(n−3) = 0, with initial conditions:
y(−1) = 6, y(−2) = 6, y(−3) = −2.
Find the homogeneous solution of the equation.
Solution:
The characteristic equation is: 1 − (5/4)·α^{−1} + (1/2)·α^{−2} − (1/16)·α^{−3} = 0
⇒ α^3 − (5/4)·α^2 + (1/2)·α − 1/16 = 0 ⇒ α_1 = α_2 = 1/2 (double root), α_3 = 1/4
y_h(n) = A_1·(1/2)^n + A_2·n·(1/2)^n + A_3·(1/4)^n
n = −1, y(−1) = 6: y_h(−1) = 2A_1 − 2A_2 + 4A_3 = 6
n = −2, y(−2) = 6: y_h(−2) = 4A_1 − 8A_2 + 16A_3 = 6
n = −3, y(−3) = −2: y_h(−3) = 8A_1 − 24A_2 + 64A_3 = −2
The system has the solutions A_1 = 9/2, A_2 = 5/4, A_3 = −1/8
⇒ y_h(n) = (9/2)·(1/2)^n + (5/4)·n·(1/2)^n − (1/8)·(1/4)^n
Using the superposition principle, the final particular solution can be written as:

y_p'(n) = ∑_{k=0}^{M} b_k·y_p(n−k)

where y_p(n) is the particular solution for the input x(n) alone. If x(n) = const ⇒ y_p(n) = const; in general, y_p(n) is a linear combination of x(n) and its shifted versions.
Example:
Consider the difference equation:
y(n) − (3/4)·y(n−1) + (1/8)·y(n−2) = 2·sin(nπ/2), with initial conditions:
y(−1) = 2, y(−2) = 4
y(n) = y_p(n) + y_h(n)
Solution:
Finding the particular solution:
y_p(n) = A·sin(nπ/2) + B·cos(nπ/2)
y_p(n−1) = A·sin((n−1)π/2) + B·cos((n−1)π/2) = −A·cos(nπ/2) + B·sin(nπ/2)
y_p(n−2) = A·sin((n−2)π/2) + B·cos((n−2)π/2) = −A·sin(nπ/2) − B·cos(nπ/2)
Substituting into the equation:
A·sin(nπ/2) + B·cos(nπ/2) − (3/4)·[−A·cos(nπ/2) + B·sin(nπ/2)] + (1/8)·[−A·sin(nπ/2) − B·cos(nπ/2)] = 2·sin(nπ/2)
(A − (3/4)·B − (1/8)·A)·sin(nπ/2) + (B + (3/4)·A − (1/8)·B)·cos(nπ/2) = 2·sin(nπ/2)
The system
(7/8)·A − (3/4)·B = 2
(3/4)·A + (7/8)·B = 0
has the solutions A = 112/85, B = −96/85
⇒ y_p(n) = (112/85)·sin(nπ/2) − (96/85)·cos(nπ/2)
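A quick numerical sanity check that this y_p(n) satisfies the difference equation for every n:

```python
# Verify y_p(n) = (112/85)·sin(nπ/2) - (96/85)·cos(nπ/2) against
# y(n) - (3/4)·y(n-1) + (1/8)·y(n-2) = 2·sin(nπ/2).
import math

def yp(n):
    return 112 / 85 * math.sin(n * math.pi / 2) - 96 / 85 * math.cos(n * math.pi / 2)

for n in range(2, 20):
    lhs = yp(n) - 0.75 * yp(n - 1) + 0.125 * yp(n - 2)
    rhs = 2 * math.sin(n * math.pi / 2)
    assert abs(lhs - rhs) < 1e-9
print("particular solution verified")
```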
Example II
Consider the difference equation
y(n) − (3/4)·y(n−1) + (1/8)·y(n−2) = x(n) + (1/2)·x(n−1), with
x(n) = 2·sin(nπ/2). Find the particular solution of the difference equation.
By superposition:
y_p'(n) = y_p(n) + (1/2)·y_p(n−1)
where y_p(n) = (112/85)·sin(nπ/2) − (96/85)·cos(nπ/2) is the particular solution found above for the input x(n) alone.
⇒ y_p'(n) = (112/85)·sin(nπ/2) − (96/85)·cos(nπ/2) + (1/2)·(112/85)·sin((n−1)π/2) − (1/2)·(96/85)·cos((n−1)π/2)
Impulse response. The unit sample:
δ(n) = 1 for n = 0, δ(n) = 0 for n ≠ 0; δ(n−k) = 1 for n = k, δ(n−k) = 0 for n ≠ k
With zero initial conditions y(−1) = y(−2) = ... = 0 and n ≥ 0:
∑_{k=0}^{N} a_k·y(n−k) = ∑_{k=0}^{M} b_k·δ(n−k)   (3)
For n > M the right side of eq. (3) is 0, so we have the homogeneous equation; the N initial conditions are y(M), y(M−1), ..., y(M−N+1).
Since N ≥ M for a causal system, we only have to determine y(0), y(1), ..., y(M). With y(k) = 0 for k < 0, we get the following set of M+1 equations:
∑_{k=0}^{n} a_k·y(n−k) = b_n,  n = 0..M
or, in matrix form:
[ a_0  0    ...  0   ] [ y(0) ]   [ b_0 ]
[ a_1  a_0  ...  0   ] [ y(1) ]   [ b_1 ]
[ ...                ] [ ...  ] = [ ... ]
[ a_M  a_{M−1} ... a_0 ] [ y(M) ]   [ b_M ]
For n > M, ∑_{k=0}^{N} a_k·y(n−k) = 0 ⇒ y_h(n).
Example
y(n) − (3/4)·y(n−1) + (1/8)·y(n−2) = δ(n) + (1/2)·δ(n−1)
Solution:
N = 2, M = 1
n ≥ 2 ⇒ homogeneous equation: y(n) − (3/4)·y(n−1) + (1/8)·y(n−2) = 0
1 − (3/4)·α^{−1} + (1/8)·α^{−2} = 0 ⇒ α_1 = 1/2, α_2 = 1/4 ⇒
y_h(n) = A_1·(1/2)^n + A_2·(1/4)^n
[a_0 0; a_1 a_0]·[y(0); y(1)] = [b_0; b_1] ⇒ [1 0; −3/4 1]·[y(0); y(1)] = [1; 1/2] ⇒
y(0) = 1, y(1) = 5/4
y_h(0) = y(0) = 1 ⇒ A_1 + A_2 = 1
y_h(1) = y(1) = 5/4 ⇒ (1/2)·A_1 + (1/4)·A_2 = 5/4 ⇒ 2A_1 + A_2 = 5
⇒ A_1 = 4, A_2 = −3
y_h(n) = 4·(1/2)^n − 3·(1/4)^n
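The closed form can be checked against the recursion with zero initial conditions:

```python
# Impulse response of y(n) - (3/4)y(n-1) + (1/8)y(n-2) = δ(n) + (1/2)δ(n-1):
# recursion vs. closed form h(n) = 4·(1/2)^n - 3·(1/4)^n.
def h_recursive(n_max):
    h = []
    for n in range(n_max + 1):
        d = (1.0 if n == 0 else 0.0) + (0.5 if n == 1 else 0.0)   # δ(n) + ½·δ(n-1)
        y1 = h[n - 1] if n >= 1 else 0.0                          # zero initial conditions
        y2 = h[n - 2] if n >= 2 else 0.0
        h.append(0.75 * y1 - 0.125 * y2 + d)
    return h

closed = [4 * 0.5 ** n - 3 * 0.25 ** n for n in range(10)]
print(h_recursive(9))   # matches the closed form: 1.0, 1.25, 0.8125, ...
```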
Proof
n = 0: y(0) − (3/4)·y(−1) + (1/8)·y(−2) = δ(0) + (1/2)·δ(−1), with y(−1) = y(−2) = 0 ⇒ y(0) = 1
n = 1: y(1) − (3/4)·y(0) + (1/8)·y(−1) = δ(1) + (1/2)·δ(0) ⇒ y(1) = 3/4 + 1/2 = 5/4

Response of the discrete-time system to the step input x(n) = u(n):
For n ≥ 0 and α ≠ β:
y(n) = β^n·∑_{k=0}^{n} (α·β^{−1})^k = β^n·[1 − (α·β^{−1})^{n+1}]/[1 − α·β^{−1}] = (α^{n+1} − β^{n+1})/(α − β)
For α = 1 ⇒ y(n) = (1 − β^{n+1})/(1 − β)
Consider the input:
x(n) = 2 for 0 ≤ n ≤ 5, x(n) = 1 for 6 ≤ n ≤ 9, x(n) = 0 otherwise
x(n) = 2·[u(n) − u(n−6)] + [u(n−6) − u(n−10)] = 2·u(n) − u(n−6) − u(n−10)
⇒ y(n) = 2·s(n) − s(n−6) − s(n−10), where s(n) is the step response.
For 0 ≤ n ≤ 5: y(n) = ∑_{k=0}^{n} 2·(1/3)^k = 3 − (1/3)^n
For 6 ≤ n ≤ 10: y(n) = ∑_{k=0}^{n} ... (computed similarly)
Consider the signal
x(n) = |n| for −3 ≤ n ≤ 3, x(n) = 0 otherwise
and the systems:
a) y(n) = x(n)
b) y(n) = x(n−1)
c) y(n) = x(n+1)
d) y(n) = (1/3)·[x(n+1) + x(n) + x(n−1)]
e) y(n) = max[x(n+1), x(n), x(n−1)]
f) y(n) = ∑_{k=−∞}^{n} x(k)

n    | … −5 −4 −3  −2 −1  0   1  2  3   4  5
x(n) |    0  0  3   2  1  0   1  2  3   0  0
a    |    0  0  3   2  1  0   1  2  3   0  0
b    |    0  0  0   3  2  1   0  1  2   3  0
c    |    0  3  2   1  0  1   2  3  0   0  0
d    |    0  1  5/3 2  1  2/3 1  2  5/3 1  0
e    |    0  3  3   3  2  1   2  3  3   3  0
f    |    0  0  0   3  5  6   7  9  12 12 12

For e): n = 0 ⇒ y(0) = max[1, 0, 1] = 1; n = 1 ⇒ y(1) = max[2, 1, 0] = 2
Basic building blocks:
Unit delay: x(n) → [z^{−1}] → x(n−1)
Unit advance: x(n) → [z] → x(n+1)
Example:
Using the basic building blocks, sketch the block-diagram representation of a discrete-time system described by the equation
y(n) = (1/4)·y(n−1) + (1/2)·x(n) + (1/2)·x(n−1)
where x(n) is the input signal and y(n) is the output signal.
Solution:
METHOD 1: We implement the equation directly:
y(n) = (1/2)·x(n) + (1/2)·x(n−1) + (1/4)·y(n−1)
[Figure: x(n) enters an adder directly through gain 1/2 and through z^{−1} and gain 1/2; feedback from y(n) through z^{−1} and gain 1/4]
METHOD 2:
y(n) = (1/2)·[x(n) + x(n−1)] + (1/4)·y(n−1)
[Figure: x(n) + x(n−1) is formed first, then passes through a single shared 1/2 multiplier, with the same 1/4 feedback]
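Both diagrams realize the same input–output relation; a short sketch checking this on an arbitrary signal:

```python
# METHOD 1 computes y(n) = (1/2)x(n) + (1/2)x(n-1) + (1/4)y(n-1) term by term;
# METHOD 2 shares the 1/2 multiplier: y(n) = (1/2)[x(n) + x(n-1)] + (1/4)y(n-1).
def method1(x):
    y, xp, yp = [], 0.0, 0.0          # xp = x(n-1), yp = y(n-1), zero initial state
    for xn in x:
        yn = 0.5 * xn + 0.5 * xp + 0.25 * yp
        y.append(yn)
        xp, yp = xn, yn
    return y

def method2(x):
    y, xp, yp = [], 0.0, 0.0
    for xn in x:
        yn = 0.5 * (xn + xp) + 0.25 * yp
        y.append(yn)
        xp, yp = xn, yn
    return y

x = [1.0, 0.0, 0.0, 2.0, -1.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(method1(x), method2(x)))
```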
Clock signal: x(n) = n or τ(n) = n
Function generator:
y(n) = f[x_1(n), x_2(n), ..., x_k(n)]
[Figure: block f with inputs x_1(n), ..., x_k(n) and output y(n)]
Example 1 [Figure]
Example 2
y(k) = u(k) + α·y(k−1); u(k) = 0 for k < 0, u(k) = 1 for k ≥ 0
y(0) = 1
k = 1: y(1) = 1 + α·y(0) = 1 + α
k = 2: y(2) = 1 + α·y(1) = 1 + α + α^2
…
y(k) = 1 + α + α^2 + ... + α^k = (1 − α^{k+1})/(1 − α) when α ≠ 1, and y(k) = k + 1 when α = 1
y(k) = α·y(k−1) + 1 = α·y(k−1) + u(k)
In general, y(k) + b_1·y(k−1) + b_2·y(k−2) + ... + b_n·y(k−n) = u(k) — the difference equation of a DTS.
Structures of a DTS
- Recursive system
- in the time domain:
∑_{k=0}^{N} a_k·y(n−k) = ∑_{k=0}^{M} b_k·x(n−k)
- in the z-domain: X(z) → [H(z)] → Y(z)
H(z) = Y(z)/X(z) = [∑_{k=0}^{M} b_k·z^{−k}] / [1 + ∑_{k=1}^{N} a_k·z^{−k}] — the transfer function (with a_0 = 1)
H(z) = H_1(z)·H_2(z)
H_1(z) = ∑_{k=0}^{M} b_k·z^{−k} gives all the zeros of the transfer function H(z)
H_2(z) = 1/[1 + ∑_{k=1}^{N} a_k·z^{−k}] gives all the poles of the transfer function H(z)
Observation:
Y(z) = H(z)·X(z) = H_1(z)·H_2(z)·X(z)
Implementation
Particular cases
- first-order structures:
y(n) = −a_1·y(n−1) + b_0·x(n) + b_1·x(n−1) ⇒ H(z) = (b_0 + b_1·z^{−1})/(1 + a_1·z^{−1})
Direct form I:
v(n) = b_0·x(n) + b_1·x(n−1)
y(n) = −a_1·y(n−1) + v(n)
[Figure: direct form I] — 2 adders, 2 unit delays, 3 multipliers
[Figure: direct form II] — 2 adders, 1 unit delay, 3 multipliers
- second-order structures
Direct form II:
w(n) = x(n) − a_1·w(n−1) − a_2·w(n−2)
y(n) = b_0·w(n) + b_1·w(n−1) + b_2·w(n−2)
[Figure]
Transposed structure:
y(n) = b_0·x(n) + w_1(n−1)
w_1(n) = b_1·x(n) − a_1·y(n) + w_2(n−1)
w_2(n) = b_2·x(n) − a_2·y(n)
[Figure]
In the general case (direct form II):
w(n) = x(n) − ∑_{k=1}^{N} a_k·w(n−k)
y(n) = ∑_{k=0}^{M} b_k·w(n−k)
METHOD 3:
Cascade form
H(z) = [∑_{k=0}^{M} b_k·z^{−k}] / [1 + ∑_{k=1}^{N} a_k·z^{−k}]  (the numerator has degree M, the denominator N)
Factoring numerator and denominator:
H(z) = [∏_{k=1}^{M_1}(1 − g_k·z^{−1}) · ∏_{k=1}^{M_2}(1 − b_k·z^{−1})(1 − b_k*·z^{−1})] / [∏_{k=1}^{N_1}(1 − e_k·z^{−1}) · ∏_{k=1}^{N_2}(1 − d_k·z^{−1})(1 − d_k*·z^{−1})]
where M = M_1 + 2M_2, N = N_1 + 2N_2
e_k — real poles; g_k — real zeros; b_k, b_k* — complex-conjugate zeros; d_k, d_k* — complex-conjugate poles
[Figure]
Implementation of H_3(z)
[Figure]
Homework:
Implement the structure of the DTS described by the equation
H(z) = (1 + 2z^{−1} + z^{−2})/(1 − 0.75z^{−1} + 0.125z^{−2}) = (1 + z^{−1})^2 / [(1 − 0.5z^{−1})(1 − 0.25z^{−1})]
Solution
H(z) = H_1(z)·H_2(z)
H_1(z) = (1 + z^{−1})/(1 − 0.5z^{−1}),  H_2(z) = (1 + z^{−1})/(1 − 0.25z^{−1})
V(z) = H_2(z)·X(z)
Y(z) = V(z)·H_1(z)
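A quick check that the factored cascade equals H(z) at arbitrary points of the z-plane:

```python
# Evaluate H(z) and the cascade H1(z)·H2(z) at a few test points (away from poles).
def H(z):
    w = 1 / z
    return (1 + 2 * w + w * w) / (1 - 0.75 * w + 0.125 * w * w)

def H1(z):
    w = 1 / z
    return (1 + w) / (1 - 0.5 * w)

def H2(z):
    w = 1 / z
    return (1 + w) / (1 - 0.25 * w)

for z in (1.7, -2.3, 0.9 + 1.1j, 3.0 - 0.4j):
    assert abs(H(z) - H1(z) * H2(z)) < 1e-9
print("cascade verified")
```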
Parallel Form Structure
H(z) = C + ∑_{k=1}^{N} A_k/(1 − p_k·z^{−1}) = C + ∑_{k=1}^{N} H_k(z)
p_k — poles of H(z); A_k — coefficients of the partial-fraction expansion.
Grouping complex-conjugate poles in pairs:
H(z) = ∑_{k=0}^{N_p} C_k·z^{−k} + ∑_{k=1}^{N_1} A_k/(1 − c_k·z^{−1}) + ∑_{k=1}^{N_2} B_k·(1 − e_k·z^{−1})/[(1 − d_k·z^{−1})(1 − d_k*·z^{−1})]
N = N_1 + 2N_2
- if M ≥ N then N_p = M − N
- if a_k, b_k ∈ ℝ then A_k, B_k, C_k, c_k, e_k ∈ ℝ, and the system function can be interpreted as representing a parallel combination of first- and second-order systems, with N_p possible delay paths.
Second method:
Grouping the real poles in pairs as well:
H(z) = ∑_{k=0}^{N_p} C_k·z^{−k} + ∑_{k=1}^{N_s} (e_{0k} + e_{1k}·z^{−1})/(1 − a_{1k}·z^{−1} − a_{2k}·z^{−2}), where N_s is the integer part of (N+1)/2.
The general difference equations for the parallel form with second-order direct form II sections are:
w_k(n) = a_{1k}·w_k(n−1) + a_{2k}·w_k(n−2) + x(n),  k = 1..N_s
y_k(n) = e_{0k}·w_k(n) + e_{1k}·w_k(n−1)
y(n) = ∑_{k=0}^{N_p} C_k·x(n−k) + ∑_{k=1}^{N_s} y_k(n)
Example: parallel form structure for a sixth-order system with real and complex poles grouped in pairs (N = M = 6). [Figure]
E.g.
a) H(z) = (1 + 2z^{−1} + z^{−2})/(1 − 0.75z^{−1} + 0.125z^{−2}) = 8 + (−7 + 8z^{−1})/(1 − 0.75z^{−1} + 0.125z^{−2})
b) H(z) = 8 + 18/(1 − 0.5z^{−1}) − 25/(1 − 0.25z^{−1})
c) Parallel structure using first-order systems [Figure]
Example: Determine the cascade and parallel structures for the system described by the transfer function
H(z) = 10·(1 − (1/2)z^{−1})(1 − (2/3)z^{−1})(1 + 2z^{−1}) / [(1 − (3/4)z^{−1})(1 − (1/8)z^{−1})(1 − (1/2 + j/2)z^{−1})(1 − (1/2 − j/2)z^{−1})]
a) Cascade implementation
H(z) = 10·H_1(z)·H_2(z)
H_1(z) = (1 − (2/3)z^{−1}) / (1 − (7/8)z^{−1} + (3/32)z^{−2})
H_2(z) = (1 + (3/2)z^{−1} − z^{−2}) / (1 − z^{−1} + (1/2)z^{−2})
b) Parallel implementation
H(z) = A_1/(1 − (3/4)z^{−1}) + A_2/(1 − (1/8)z^{−1}) + A_3/(1 − (1/2 + j/2)z^{−1}) + A_3*/(1 − (1/2 − j/2)z^{−1})
A_1 = 2.93;  A_2 = −17.68;  A_3 = 12.37 − 14.72j;  A_3* = 12.37 + 14.72j
Combining the real poles, and the complex-conjugate pair, into second-order sections:
H(z) = (−14.75 + 12.89z^{−1})/(1 − (7/8)z^{−1} + (3/32)z^{−2}) + (24.73 + 2.35z^{−1})/(1 − z^{−1} + (1/2)z^{−2})
Exam problems:
1) Obtain the direct form I, direct form II, cascade, parallel and lattice structures for the following systems:
a) y(n) = (3/4)·y(n−1) − (1/8)·y(n−2) + x(n) + (1/3)·x(n−1)
b) y(n) = −0.1·y(n−1) + 0.72·y(n−2) + 0.7·x(n) − 0.252·x(n−2)
c) y(n) = −0.1·y(n−1) + 0.2·y(n−2) + 3·x(n) + 3.6·x(n−1) + 0.6·x(n−2)
d) y(n) = (1/2)·y(n−1) + (1/4)·y(n−2) + x(n) + x(n−1)
e) y(n) = y(n−1) − (1/2)·y(n−2) + x(n) − x(n−1) + x(n−2)
f) H(z) = 2·(1 − z^{−1})(1 + 2z^{−1} + z^{−2}) / [(1 + 0.5z^{−1})(1 − 0.9z^{−1} + 0.81z^{−2})]
2) For H(z) = 1 + 2.88z^{−1} + 3.404z^{−2} + 1.74z^{−3} + 0.4z^{−4}, sketch the direct form and lattice structure and find the corresponding input–output implementations.
3) Determine a direct form implementation for the systems:
a) h(n) = {1, 2, 3, 4, 3, 2, 1}
b) h(n) = {1, 2, 3, 3, 2, 1}
4) Determine the transfer function and the impulse response of the systems drawn below:
a) [Figure]
b) [Figure]
Lattice structure
For an all-pole system:
H(z) = 1/A_N(z) = 1 / [1 + ∑_{k=1}^{N} a_N(k)·z^{−k}]
y(n) = −∑_{k=1}^{N} a_N(k)·y(n−k) + x(n)
For a non-recursive (FIR) system:
y(n) = ∑_{k=0}^{M−1} b_k·x(n−k);  x(n) → h(n) → y(n)
Y(z) = ∑_{k=0}^{M−1} b_k·z^{−k}·X(z);  X(z) → H(z) → Y(z)
H(z) = Y(z)/X(z) = ∑_{k=0}^{M−1} b_k·z^{−k} — the response of the system to the unit sample is identical to the coefficients b_k, i.e.:
h(n) = {b_n, 0 ≤ n ≤ M−1}
|H_n(jω)|_dB = 20·log|ω/ω_0|^k = 20k·log|ω/ω_0| [dB];  arg H_n(jω) = k·π/2
For a symmetric filter, the terms h(k)·x(n−k) and h(M−k)·x(n−M+k) share the same coefficient, so they can be grouped:
y(n) = ∑_{k=0}^{M/2−1} h(k)·[x(n−k) + x(n−M+k)] + h(M/2)·x(n−M/2)
⇒ only M/2 multiplications are needed.
Example:
Fourth-order section in a cascade implementation of a symmetric system:
H_k(z) = c_{k0}·(1 − z_k·z^{−1})(1 − z_k*·z^{−1})(1 − z^{−1}/z_k)(1 − z^{−1}/z_k*) = c_{k0} + c_{k1}·z^{−1} + c_{k2}·z^{−2} + c_{k1}·z^{−3} + c_{k0}·z^{−4}
c) Lattice structure
H_m(z) = A_m(z), m = 0, 1, ..., M−1
h_m(0) = 1
h_m(k) = α_m(k), k = 1..m
y(n) = x(n) + ∑_{k=1}^{m} α_m(k)·x(n−k)
a) Bode Diagrams:
H(s) = c·∏_{k=1}^{m}(s − z_k)^{l_zk} / ∏_{k=1}^{n}(s − p_k)^{l_pk}
With s = jω:
H(jω) = c·∏_{k=1}^{m}(jω − z_k)^{l_zk} / ∏_{k=1}^{n}(jω − p_k)^{l_pk}
|H(jω)|_dB = 20·log|H(jω)| = 20·log c + ∑_{k=1}^{m} l_zk·20·log|jω − z_k| − ∑_{k=1}^{n} l_pk·20·log|jω − p_k|
(At the half-power point the magnitude drops by 20·log √2 = 10·log 2 ≈ 3 dB.)
H_n(jω) = H(jω/ω_0) — normalized transfer function
Frequency ratios: octave — ω_a/ω_b = 2; decade — ω_a/ω_b = 10
|H_n(jω/ω_0)|_dB = 20·log|H(jω/ω_0)|
ω_a/ω_b = 10 ⇒ |H(jω_a)|_dB − |H(jω_b)|_dB = 20·log|H(jω_a)| − 20·log|H(jω_a/10)| = 20·log 10 = 20 dB/decade
ω_a/ω_b = 2 ⇒ |H(jω_a)|_dB − |H(jω_b)|_dB = 20·log 2 ≈ 6 dB/octave
Example:
1) H(jω) = k
H_n(jω) = H(jω/ω_0) = k ⇒ |H_n(jω)| = |k|
arg H_n(jω) = 0 for k > 0; π for k < 0
|H_n(jω)|_dB = 20·log k [dB]
2) H(s) = s^k
s = jω ⇒ H(jω) = (jω)^k = j^k·ω^k = ω^k·e^{jkπ/2}
[Figure]
H_n(jω) = H(jω/ω_0) = (ω/ω_0)^k·e^{jkπ/2}
1. Butterworth filters
- we study the transfer function
H(s) = 1/(1 + s^n); substitution s = jω:
H(jω) = 1/(1 + (jω)^n) — transfer function
|H(jω)| = 1/√(1 + ω^{2n}) — magnitude transfer function
|H(jω)|² = 1/(1 + ω^{2n}) — squared magnitude transfer function
20·log|H(ω)| at ω = 1: 20·log(1/√2) = −20·log √2 = −3 dB
|H(ω)|² = H(s)·H(−s)|_{s=jω} = 1/(1 + (s/j)^{2n})
Poles of |H(ω)|²:
1 + (s/j)^{2n} = 0 ⇒ (s/j)^{2n} = −1 = e^{j(2k−1)π} ⇒
s_k = e^{jπ(2k−1+n)/(2n)},  k = 0, 1, ..., 2n−1 — poles of the squared magnitude transfer function
s_k = cos((2k−1+n)π/(2n)) + j·sin((2k−1+n)π/(2n)) = cos((2k−1)π/(2n) + π/2) + j·sin((2k−1)π/(2n) + π/2) =
= −sin((2k−1)π/(2n)) + j·cos((2k−1)π/(2n))
s_k = σ_k + jω_k, where σ_k = −sin((2k−1)π/(2n)), ω_k = cos((2k−1)π/(2n))
[Figure]
Example:
n = 3
|H(ω)|² = 1/(1 + ω^{2n}),  k = 0..5
s_k = e^{jπ(2k−1+n)/(2n)} = e^{jπ(2k+2)/6}
k=0: s_0 = e^{jπ/3} = 1/2 + j·√3/2
k=1: s_1 = e^{j2π/3} = −1/2 + j·√3/2
k=2: s_2 = e^{jπ} = −1
k=3: s_3 = e^{j4π/3} = −1/2 − j·√3/2
k=4: s_4 = e^{j5π/3} = 1/2 − j·√3/2
k=5: s_5 = e^{j2π} = 1
Keeping the left-half-plane poles (s_1, s_2, s_3) for stability:
H(s) = 1/[(s − s_1)(s − s_2)(s − s_3)] = 1/[(s + 1)(s² + s + 1)]
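The pole selection can be reproduced in a few lines; multiplying out the stable poles gives (s+1)(s²+s+1) = s³ + 2s² + 2s + 1:

```python
# Butterworth poles for n = 3 and the denominator polynomial from the stable ones.
import cmath, math

n = 3
poles = [cmath.exp(1j * math.pi * (2 * k - 1 + n) / (2 * n)) for k in range(2 * n)]
stable = [p for p in poles if p.real < -1e-9]          # left-half-plane poles

coeffs = [1 + 0j]                                      # multiply out ∏(s - p)
for p in stable:
    nxt = [0j] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        nxt[i] += c
        nxt[i + 1] -= c * p
    coeffs = nxt

print([round(c.real, 6) for c in coeffs])   # [1.0, 2.0, 2.0, 1.0]
```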
TABLE 1
n | Butterworth polynomials (factored form)
1 | s + 1
2 | s² + √2·s + 1
3 | (s + 1)(s² + s + 1)
4 | (s² + 0.76536·s + 1)(s² + 1.84776·s + 1)
5 | (s + 1)(s² + 0.61803·s + 1)(s² + 1.61803·s + 1)
6 | (s² + 0.5176·s + 1)(s² + √2·s + 1)(s² + 1.9318·s + 1)

H(s) = 1/Δ_n(s), with Δ_n(s) = a_n·s^n + a_{n−1}·s^{n−1} + ... + a_1·s + 1

TABLE 2
Coefficients of Butterworth polynomials
n | a1     a2     a3     a4     a5     a6     a7    a8
1 | 1
2 | 1.414  1
3 | 2      2      1
4 | 2.613  3.414  2.613  1
5 | 3.236  5.236  5.236  3.236  1
6 | 3.864  7.464  9.141  7.464  3.864  1
7 | 4.494  10.098 14.592 14.592 10.098 4.494  1
8 | 5.126  13.137 21.846 25.688 21.846 13.137 5.126 1
Exercise:
Design of a low-pass Butterworth filter
[Figure] Specifications:
|H(ω)| ≥ 1 − δ_1 for ω ≤ ω_p (passband, PB)
|H(ω)| < δ_2 for ω > ω_s (stopband, SB)
With s = jω, H(s) → H(jω); normalized: H_n(s) → H_n(jω_n) = H(s)|_{s = jω/ω_n}
H_n(ω_p) = 1/√(1 + (ω_p/ω_c)^{2n}) = 1 − δ_1 ⇒ (ω_p/ω_c)^{2n} = 1/(1−δ_1)² − 1   (a)
H_n(ω_s) = 1/√(1 + (ω_s/ω_c)^{2n}) = δ_2 ⇒ (ω_s/ω_c)^{2n} = 1/δ_2² − 1   (b)
Dividing (a) by (b):
(ω_p/ω_s)^{2n} = [1 − (1−δ_1)²]·δ_2² / [(1−δ_1)²·(1−δ_2²)]
⇒ n = (1/2)·log{[1 − (1−δ_1)²]·δ_2² / [(1−δ_1)²·(1−δ_2²)]} / log(ω_p/ω_s)   (c)
n must be an integer — choose the nearest greater integer.
If ω_c is determined from equation (a), the PB specifications are met exactly, whereas the SB ones are exceeded.
If ω_c is determined from equation (b), the SB specifications are met exactly, whereas the PB ones are exceeded.
Steps in finding H(s):
1. Determine n from equation (c) using the values of δ_1, δ_2, ω_p, ω_s and round up to the nearest greater integer.
2. Determine ω_c from equation (a) or (b).
3. For the value of n determined at step 1, determine the denominator Δ_n(s) of H(s) using Table 1 or Table 2.
4. Find H_n(s) by replacing s with s/ω_c.
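Steps 1–2 can be sketched in code for the numeric example that follows (A_p = 1 dB at ω_p = 1000 rad/s, A_s = 10 dB at ω_s = 5000 rad/s), using the δ_1/δ_2 form of equations (a)–(c):

```python
# Butterworth order (c) and cut-off from the stopband condition (b).
import math

wp, ws = 1000.0, 5000.0
d1 = 1 - 10 ** (-1 / 20)          # 20·log10(1 - δ1) = -1 dB
d2 = 10 ** (-10 / 20)             # 20·log10(δ2)     = -10 dB

num = (1 - (1 - d1) ** 2) * d2 ** 2 / ((1 - d1) ** 2 * (1 - d2 ** 2))
n_exact = 0.5 * math.log10(num) / math.log10(wp / ws)   # equation (c)
n = math.ceil(n_exact)                                  # round up

wc = ws / (1 / d2 ** 2 - 1) ** (1 / (2 * n))            # equation (b)
print(round(n_exact, 3), n, round(wc, 2))   # 1.102 2 2886.75
```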
Example:
Design a BF (Butterworth filter) to have attenuation of no more than 1 dB for ω ≤ 1000 rad/s and at least 10 dB for ω ≥ 5000 rad/s.
20·log(1 − δ_1) = −1 ⇒ δ_1 = 0.108749
20·log δ_2 = −10 ⇒ δ_2 = 0.31623
ω_p = 1000, ω_s = 5000 ⇒ n = 1.102; round up to n = 2 ⇒ H(s) = 1/(s² + √2·s + 1)
From equation (b): ω_c = 2886.75 rad/s
H_n(s/ω_c) = 1/[(s/2886.75)² + √2·(s/2886.75) + 1] = 2886.75²/(s² + 4082.48·s + 2886.75²)
DESIGN METHOD
10·log(1 + (ω_p/ω_c)^{2n}) = A_p ⇒ (ω_p/ω_c)^{2n} = 10^{0.1A_p} − 1
10·log(1 + (ω_s/ω_c)^{2n}) = A_s ⇒ (ω_s/ω_c)^{2n} = 10^{0.1A_s} − 1
⇒ (ω_p/ω_s)^{2n} = (10^{0.1A_p} − 1)/(10^{0.1A_s} − 1)
⇒ n = log{[(10^{0.1A_p} − 1)/(10^{0.1A_s} − 1)]^{1/2}} / log(ω_p/ω_s)
Notations:
k = ω_p/ω_s = 2πf_p/(2πf_s) = f_p/f_s < 1 — selectivity parameter
d = [(10^{0.1A_p} − 1)/(10^{0.1A_s} − 1)]^{1/2} < 1 — discrimination factor
⇒ n = log d / log k, n ∈ ℕ* (round up)
⇒ ω_c = ω_p/(10^{0.1A_p} − 1)^{1/(2n)} — cut-off frequency
or
ω_c = ω_s/(10^{0.1A_s} − 1)^{1/(2n)}
Example 2:
Given: A_p = 1 dB, f_p = 1 kHz; A_s = 30 dB, f_s = 5 kHz. Find H(s).
Solution:
k = f_p/f_s = 1·10³/(5·10³) = 0.2
d = [(10^{0.1A_p} − 1)/(10^{0.1A_s} − 1)]^{1/2} = [(10^{0.1·1} − 1)/(10^{0.1·30} − 1)]^{1/2} = 1.6099·10^{−2}
n = log d / log k = 2.565 ⇒ choose n = 3
Poles:
s_k = −sin((2k−1)π/(2n)) + j·cos((2k−1)π/(2n)),  k = 0..5
2. CHEBYSHEV FILTERS
C_n(ω) = cos(n·cos^{−1}ω) for 0 ≤ ω ≤ 1;  C_n(ω) = cosh(n·cosh^{−1}ω) for ω > 1
With u = cos^{−1}ω, C_n(ω) = cos nu:
n=0: C_0(ω) = cos 0 = 1
n=1: C_1(ω) = cos u = cos(cos^{−1}ω) = ω
n=2: C_2(ω) = cos 2u = 2cos²u − 1 = 2ω² − 1
n=3: C_3(ω) = cos 3u = 4cos³u − 3cos u = 4ω³ − 3ω
Properties of C_n(ω):
1) 0 ≤ |C_n(ω)| ≤ 1 for 0 ≤ ω ≤ 1; |C_n(ω)| > 1 for ω > 1
2) C_n(ω) increases monotonically for ω > 1
3) C_n(ω) is an even polynomial for n even and an odd polynomial for n odd
4) C_n(0) = 0 for n odd; |C_n(0)| = 1 for n even
|H(ω)|² = 1/(1 + ε²·C_n²(ω)) — squared magnitude transfer function
Properties:
1) ∀n, the zeros of C_n(ω) are located in the interval |ω| ≤ 1
2) for |ω| ≤ 1, C_n²(ω) ≤ 1 ⇒ |H(ω)|² oscillates about unity such that the maximum value is 1 and the minimum value is 1/(1 + ε²)  [figures: n even, n odd]
for ω > 1, C_n(ω) increases rapidly, so |H(ω)|² = 1/(1 + ε²·C_n²(ω)) decreases rapidly
for ω ≥ 1: |H_n(ω)| ≈ 1/(ε·C_n(ω))  (the 1 in the denominator is negligible because cosh grows quickly)
dB attenuation (loss):
loss = −20·log|H(ω)| = 20·log ε + 20·log C_n(ω)
For large ω, C_n(ω) ≈ 2^{n−1}·ω^n:
loss = 20·log ε + 20·log 2^{n−1} + 20·log ω^n = 20·log ε + 6(n−1) + 20n·log ω
Pole location of the Chebyshev filter
H(s)·H(−s) = |H(ω)|²|_{ω = s/j} = 1/(1 + ε²·C_n²(s/j))
Poles: 1 + ε²·C_n²(s/j) = 0 ⇒ C_n²(s/j) = −1/ε² ⇒ C_n(−js) = ±j·(1/ε)
With s = σ + jω, −js = ω − jσ:
C_n(ω − jσ) = ±j/ε
cos[n·cos^{−1}(ω − jσ)] = ±j/ε;  let cos^{−1}(ω − jσ) = x + jy
cos n(x + jy) = cos nx·cosh ny − j·sin nx·sinh ny = ±j/ε ⇒
cos nx·cosh ny = 0 ⇒ x_k = (2k−1)π/(2n),  k = 0..2n−1
sin nx·sinh ny = ±1/ε ⇒ y = ±(1/n)·sinh^{−1}(1/ε)
cos[cos^{−1}(ω_k − jσ_k)] = cos(x_k + jy_k) ⇒ ω_k − jσ_k = cos x_k·cosh y_k − j·sin x_k·sinh y_k
ω_k = cos x_k·cosh y_k
σ_k = sin x_k·sinh y_k
(ω_k/cosh y_k)² + (σ_k/sinh y_k)² = 1 — the poles lie on an ellipse.
Formulas: cosh^{−1}x = ln(x + √(x² − 1)); sinh^{−1}x = ln(x + √(x² + 1))
Poles of Butterworth filters (for comparison — on the unit circle):
ω_k = cos((2k−1)π/(2n)); σ_k = sin((2k−1)π/(2n))
|H(ω/ω_p)|² = 1/(1 + ε²·C_n²(ω/ω_p))
At ω = ω_p: A_p = 10·log(1 + ε²)
At ω = ω_r:
A_r = −10·log|H(ω_r/ω_p)|² = 10·log(1 + ε²·cosh²(n·cosh^{−1}(ω_r/ω_p)))
⇒ n = cosh^{−1}{(10^{0.1A_r} − 1)^{1/2}/ε} / cosh^{−1}(ω_r/ω_p) = cosh^{−1}(1/d) / cosh^{−1}(1/k)
At ω = ω_c:
|H(ω_c/ω_p)|² = 1/2 ⇒ 1 + ε²·C_n²(ω_c/ω_p) = 2 ⇒ ε²·C_n²(ω_c/ω_p) = 1
⇒ ε·cosh(n·cosh^{−1}(ω_c/ω_p)) = 1 ⇒ ω_c = ω_p·cosh((1/n)·cosh^{−1}(1/ε))
d = [(10^{0.1A_p} − 1)/(10^{0.1A_r} − 1)]^{1/2} = ε/(10^{0.1A_r} − 1)^{1/2}
Transfer function (the products run over the left-half-plane poles s_k):
H_n(s/ω_p) = −∏_{k} s_k/(s/ω_p − s_k) for n odd
H_n(s/ω_p) = [1/√(1 + ε²)]·∏_{k} s_k/(s/ω_p − s_k) for n even
[Fig 2]
Filter specifications:
A_p ≤ 2 dB at ω = ω_p (← passband/cut-off frequency)
A_r ≥ 50 dB at ω = ω_r = 5ω_p
k = ω_p/ω_r = ω_p/(5ω_p) = 1/5 = 0.2 — selectivity factor
ε = (10^{0.1A_p} − 1)^{1/2} = (10^{0.1·2} − 1)^{1/2} = 0.764783 — ripple factor
d = ε/(10^{0.1A_r} − 1)^{1/2} = 0.764783/(10^{0.1·50} − 1)^{1/2} = 2.4185·10^{−3} — discrimination factor
n ≥ cosh^{−1}(1/d)/cosh^{−1}(1/k) = ln(1/d + √(1/d² − 1)) / ln(1/k + √(1/k² − 1)) = 6.7178/2.2924 = 2.930
Choose n = 3.
β = (1/n)·sinh^{−1}(1/ε) = (1/3)·ln(1/ε + √(1/ε² + 1)) ⇒
sinh β = (e^β − e^{−β})/2 = 0.3689
cosh β = (e^β + e^{−β})/2 = 1.0659
Chebyshev poles:
s_k = sin((2k−1)π/(2n))·sinh β + j·cos((2k−1)π/(2n))·cosh β;  n = 3, k = 0..2n−1
(compare with the Butterworth poles s_k = sin((2k−1)π/(2n)) + j·cos((2k−1)π/(2n)))
For stability we keep the poles in the left half of the plane:
s_3 = −0.1844 + j·0.9231
s_4 = −0.1844 − j·0.9231
s_5 = −0.3689
Transfer function:
H_n(s/ω_p) = −∏_{k=3}^{5} s_k/(s/ω_p − s_k) = −(s_3·s_4·s_5)/[(s/ω_p − s_3)(s/ω_p − s_4)(s/ω_p − s_5)]
Inverse Chebyshev Filters (Type 2 Chebyshev filters)
|H(ω)|² = ε²·C_n²(ω)/(1 + ε²·C_n²(ω))
C_n(ω) = cos(n·cos^{−1}ω) for 0 < ω < 1;  C_n(ω) = cosh(n·cosh^{−1}ω) for ω > 1
Poles: 1 + ε²·C_n²(ω) = 0 — the same as for the Chebyshev filters:
s_k = sin((2k−1)π/(2n))·sinh β + j·cos((2k−1)π/(2n))·cosh β
Zeros: ε²·C_n²(ω) = 0
C_n² = 0 ⇔ cos(n·cos^{−1}ω) = 0 ⇒ n·cos^{−1}ω = (2k−1)π/2 ⇒ cos^{−1}ω = (2k−1)π/(2n)
With s = jω, ω = s/j:
cos^{−1}(−js_k) = (2k−1)π/(2n) ⇒ −js_k = cos((2k−1)π/(2n)) ⇒ s_k = j·cos((2k−1)π/(2n))
|H(ω_r/ω)|² = ε²·C_n²(ω_r/ω)/(1 + ε²·C_n²(ω_r/ω)) = 1 − 1/(1 + ε²·C_n²(ω_r/ω))
A(ω) = −10·log|H(ω_r/ω)|² [dB]
A(ω) = 10·log[1 + 1/(ε²·C_n²(ω_r/ω))] [dB]
A(ω_p) = 10·log[1 + 1/(ε²·C_n²(ω_r/ω_p))] [dB]
A(ω_r) = 10·log[1 + 1/(ε²·C_n²(1))] = 10·log(1 + 1/ε²) [dB]
ε = 1/(10^{0.1A_r} − 1)^{1/2};  n ≥ cosh^{−1}(1/d)/cosh^{−1}(1/k)
Elliptic filters
For LPF only: |H(ω)|² = 1/(1 + ε²·R²(ω)), where R(ω) are the Chebyshev rational functions.
The roots of R(ω) are related to the Jacobi elliptic sine function.
Properties of R(ω):
1) R(ω) is an even function for n even and an odd function for n odd;
2) the zeros of R(ω) are in the range |ω| < 1, while the poles are in the range |ω| > 1;
3) R(ω) oscillates between −1 and +1 in the PB;
4) R(ω) = 1 for ω = 1;
5) R(ω) oscillates between ±1/d and ∞ in the SB (d = discrimination factor);
6) R(ω) = ω·∏_{i=1}^{(n−1)/2} (ω_i² − ω²)/(1 − ω_i²·ω²) for n odd;
   R(ω) = ∏_{i=1}^{n/2} (ω_i² − ω²)/(1 − ω_i²·ω²) for n even
   — normalized to the center frequency ω_0 = 1;
7) R(1/ω) = 1/R(ω) — the poles and zeros of R(ω) are reciprocal and exhibit geometric symmetry about the center frequency ω_0.
1) A_p = 10·log(1 + ε²) ⇒ ε = (10^{0.1A_p} − 1)^{1/2}
2) A_r = 10·log(1 + ε²/d²) ⇒ d = ε/(10^{0.1A_r} − 1)^{1/2};  k = ω_p/ω_r;  ω_0 = √(ω_p·ω_r)
3) q_0 = (1/2)·[1 − (1 − k²)^{1/4}]/[1 + (1 − k²)^{1/4}]
4) q = q_0 + 2q_0^5 + 15q_0^9 + 150q_0^13
5) n = log(16/d²)/log(1/q)
6) a = [2q^{1/4}·∑_{m=0}^{∞}(−1)^m·q^{m(m+1)}·sinh((2m+1)β)] / [1 + 2·∑_{m=1}^{∞}(−1)^m·q^{m²}·cosh 2mβ],
   with β = (1/(2n))·ln[(√(1+ε²) + 1)/(√(1+ε²) − 1)]
7) ω_l = [2q^{1/4}·∑_{m=0}^{∞}(−1)^m·q^{m(m+1)}·sin((2m+1)πl/n)] / [1 + 2·∑_{m=1}^{∞}(−1)^m·q^{m²}·cos(2mπl/n)],
   where l = i − 1/2 for n even and l = i for n odd;
   i = 1, ..., n/2 for n even;  i = 1, ..., (n−1)/2 for n odd
8) V_i = [(1 − k·ω_i²)(1 − ω_i²/k)]^{1/2}
9) a_i = 1/ω_i²
10) b_i = 2·a·V_i/(1 + a²·ω_i²)
11) U = [(1 + k·a²)(1 + a²/k)]^{1/2}
12) c_i = [(a·V_i)² + (ω_i·U)²]/(1 + a²·ω_i²)²
13) H_0 = a·∏_i c_i/a_i for n odd;  H_0 = (1/√(1+ε²))·∏_i c_i/a_i for n even
14) H(s) = H_0·∏_i (s² + a_i)/(s² + b_i·s + c_i) for n even;
    H(s) = [H_0/(s + a)]·∏_i (s² + a_i)/(s² + b_i·s + c_i) for n odd
Numerical example:
d = [(10^{0.1A_p} − 1)/(10^{0.1A_r} − 1)]^{1/2} = 7.6478·10^{−4}
q_0 = (1/2)·[1 − (1 − k²)^{1/4}]/[1 + (1 − k²)^{1/4}];  q = q_0 + 2q_0^5 + 15q_0^9 + 150q_0^13
n = log(16/d²)/log(1/q) ≥ 5.7 ⇒ we choose n = 6
ω_r = 1/k = 1.555 rad·s^{−1}
ω_0 = √(ω_p·ω_r) = 2π·3464 rad·s^{−1}
H(s) = 7.1374·10^{−4} · (s² + 13.8451)(s² + 2.2153)(s² + 1.3955) / [(s² + 0.35518·s + 0.10903)(s² + 0.194425)(s² + 0.05386·s + 0.7341)]
Bessel Filters
a) H(s) = 1/B_n(s)
B_n(s) = ∑_{k=0}^{n} a_k·s^k — the n-th order Bessel polynomial
a_k = (2n − k)! / [2^{n−k}·k!·(n − k)!]
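The coefficient formula is easy to tabulate; for n = 3 it gives B_3(s) = s³ + 6s² + 15s + 15:

```python
# Bessel polynomial coefficients a_k = (2n-k)! / (2^(n-k)·k!·(n-k)!),
# returned in order a_0, a_1, ..., a_n.
from math import factorial

def bessel_coeffs(n):
    return [factorial(2 * n - k) // (2 ** (n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

print(bessel_coeffs(3))   # [15, 15, 6, 1]
```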
• In the time domain:
y(n) = h(n)*x(n) = ∑_{k=−∞}^{∞} h(k)·x(n−k)
• Z transform / DTFT (z = e^{jω}):
Y(e^{jω}) = H(e^{jω})·X(e^{jω})
H(e^{jω}) = Y(e^{jω})/X(e^{jω}) = |H(e^{jω})|·e^{jφ(ω)}, where φ(ω) = arg H(e^{jω}) — the phase
We define Arg H(e^{jω}) as the principal value of the phase:
−π ≤ Arg H(e^{jω}) ≤ π
arg H(e^{jω}) = Arg H(e^{jω}) + 2πr(ω), r(ω) ∈ ℤ
⇒ τ(ω) = −(d/dω)·Arg H(e^{jω}) — the group (time) delay, defined except at the points where Arg H(e^{jω}) has discontinuities.
Ideal low-pass filter:
h_LPF(n) = (1/2π)·∫_{−ω_c}^{ω_c} e^{jnω} dω = (1/2π)·[e^{jnω}/(jn)]_{−ω_c}^{ω_c} = (1/(πn))·(e^{jnω_c} − e^{−jnω_c})/(2j) = sin(ω_c·n)/(πn)
Comment: if n increases ⇒ h_LPF → 0, but not fast enough (only as 1/n).
b) Phase shift (ideal delay)
h_id(n) = δ(n − n_d)
|H_id(e^{jω})| = 1
H_id(e^{jω}) = e^{−jωn_d} ⇒ ∠H_id(e^{jω}) = −ω·n_d
τ(ω) = −d(∠H_id(e^{jω}))/dω = n_d
[Figure: z-plane showing the zeros z_1, z_2 together with their conjugates and reciprocals z_2*, 1/z_1, 1/z_2, 1/z_2*]
h_id(n) = (1/2π)·∫_{−ω_c}^{ω_c} e^{−jωn_d}·e^{jωn} dω = (1/2π)·∫_{−ω_c}^{ω_c} e^{jω(n−n_d)} dω = sin[ω_c·(n − n_d)]/[π·(n − n_d)]
For n = n_d: lim_{n→n_d} h_id = ω_c/π
→ the response is delayed and identical in shape with the sinc.
c) Group delay
We suppose the spectrum of interest is confined to a narrow band ω_0 − ε ≤ ω ≤ ω_0 + ε (ε > 0), with H(e^{jω}) ≈ 0 elsewhere, and approximate the phase there by
∠H(e^{jω}) = −ωn_d − θ — generalized (linear) phase
So:
y(n) = h(n)*x(n)
Y(e^{jω}) = H(e^{jω})·X(e^{jω})
|Y(e^{jω})| = |H(e^{jω})|·|X(e^{jω})|
∠Y(e^{jω}) = ∠H(e^{jω}) + ∠X(e^{jω})
τ(ω) = −d[∠Y(e^{jω})]/dω
d) Generalized phase
H(e^{jω}) = A(e^{jω})·e^{−j(αω−β)}, with A(e^{jω}) real
∠H(e^{jω}) = −αω + β; by derivation, τ(ω) = α
⇒ H(e^{jω}) = A(e^{jω})·cos(αω − β) − j·A(e^{jω})·sin(αω − β)
So: tan∠H(e^{jω}) = −sin(αω − β)/cos(αω − β)
On the other hand, H(e^{jω}) = ∑_{n=−∞}^{∞} h(n)·e^{−jωn}, meaning:
H(e^{jω}) = ∑_{n=−∞}^{∞} h(n)·cos ωn − j·∑_{n=−∞}^{∞} h(n)·sin ωn
tan∠H(e^{jω}) = −[∑_n h(n)·sin ωn]/[∑_n h(n)·cos ωn]
Equating the two expressions:
[∑_n h(n)·cos ωn]·sin(αω − β) − [∑_n h(n)·sin ωn]·cos(αω − β) = 0
⇒ ∑_{n=−∞}^{∞} h(n)·sin[(n − α)ω + β] = 0
For a causal system, h(n) = 0 for n < 0, so the condition becomes:
∑_{n=0}^{M} h(n)·sin[(n − α)ω + β] = 0
For a symmetric/antisymmetric impulse response the output groups as:
y(n) = ∑_k [h(k)·x(n−k) + h(M−k)·x(n−M+k)]  (symmetric)
y(n) = ∑_k [h(k)·x(n−k) − h(M−k)·x(n−M+k)]  (antisymmetric)
h(M−1−n) = h(n) — symmetric
h(M−1−n) = −h(n) — antisymmetric
H(z) = z^{−(M−1)/2}·{h((M−1)/2) + ∑_{n=0}^{(M−3)/2} h(n)·[z^{(M−1−2n)/2} ± z^{−(M−1−2n)/2}]}, M odd
H(z) = z^{−(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[z^{(M−1−2n)/2} ± z^{−(M−1−2n)/2}], M even
Obs: H(z) = ∑_{k=0}^{M−1} h(k)·z^{−k}
z → z^{−1} ⇒ H(z^{−1}) = ∑_{k=0}^{M−1} h(k)·z^{k}
z^{−(M−1)}·H(z^{−1}) = ±H(z) ⇒ the roots of H(z) are also roots of H(z^{−1})
[Figure]
M even, h(n) = h(M−1−n):
H(z) = z^{−(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[z^{(M−1−2n)/2} + z^{−(M−1−2n)/2}]
z = e^{jω}:
H(e^{jω}) = e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[e^{jω(M−1−2n)/2} + e^{−jω(M−1−2n)/2}]
H(e^{jω}) = e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} 2h(n)·cos(ω(M−1−2n)/2)
Y(z) = ∑_{k=0}^{M−1} b_k·z^{−k}·X(z) = H(z)·X(z), with b_k = 2h(k)·cos(ω(M−1−2k)/2)
∠H(e^{jω}) = −ω(M−1)/2
τ(ω) = (M−1)/2
M odd, h(n) = h(M−1−n):
z = e^{jω}:
H(e^{jω}) = e^{−jω(M−1)/2}·{h((M−1)/2) + ∑_{n=0}^{(M−3)/2} h(n)·[e^{jω(M−1−2n)/2} + e^{−jω(M−1−2n)/2}]} =
= e^{−jω(M−1)/2}·[h((M−1)/2) + ∑_{n=0}^{(M−3)/2} 2h(n)·cos(ω(M−2n−1)/2)]
|H(e^{jω})| = |h((M−1)/2) + ∑_{n=0}^{(M−3)/2} 2h(n)·cos(ω(M−2n−1)/2)|
∠H(e^{jω}) = −ω(M−1)/2
τ(ω) = (M−1)/2
M even, h(n) = −h(M−1−n):
z = e^{jω}:
H(e^{jω}) = e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[e^{jω(M−1−2n)/2} − e^{−jω(M−1−2n)/2}] =
= 2j·e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·sin(ω(M−2n−1)/2) = 2·e^{−j(ω(M−1)/2 − π/2)}·∑_{n=0}^{M/2−1} h(n)·sin(ω(M−2n−1)/2)
H(e^{jω}) = H_I(e^{jω})·e^{−j(ω(M−1)/2 − π/2)}
∠H(e^{jω}) = −ω(M−1)/2 + π/2
τ(ω) = (M−1)/2
M odd, h(n) = −h(M−1−n):
H(z) = z^{−(M−1)/2}·{h((M−1)/2) + ∑_{n=0}^{(M−3)/2} h(n)·[z^{(M−1−2n)/2} − z^{−(M−1−2n)/2}]}, M odd
z = e^{jω}:
H(e^{jω}) = e^{−jω(M−1)/2}·{h((M−1)/2) + j·∑_{n=0}^{(M−3)/2} 2h(n)·sin(ω(M−2n−1)/2)}
H(e^{jω}) = |H(e^{jω})|·e^{jφ(ω)}
|H(e^{jω})| = √{h²((M−1)/2) + [∑_{n=0}^{(M−3)/2} 2h(n)·sin(ω(M−2n−1)/2)]²}
∠H(e^{jω}) = arctan{[∑_{n=0}^{(M−3)/2} 2h(n)·sin(ω(M−2n−1)/2)]/h((M−1)/2)} − ω(M−1)/2 = β(ω) − ω(M−1)/2
τ(ω) = −dβ(ω)/dω + (M−1)/2
h_d(n) = (1/2π)·∫_{−π}^{π} H_d(e^{jω})·e^{jωn} dω — the desired impulse response of the FIR filter
[Figure: periodic ideal LPF spectrum, passbands around 0 and ±2π with edges at −2π − ω_c, −2π + ω_c, −ω_c, ω_c, 2π − ω_c, 2π + ω_c]
h_d(n) = (1/2π)·∫_{−π}^{π} H_d(e^{jω})·e^{jωn} dω = (1/2π)·∫_{−ω_c}^{ω_c} e^{jωn} dω = (1/2πjn)·[e^{jωn}]_{−ω_c}^{ω_c} =
= (1/(πn))·(e^{jω_c n} − e^{−jω_c n})/(2j) = sin(nω_c)/(πn) = 2f_c·sin(nω_c)/(nω_c)
h_d(n) = 2f_c for n = 0; h_d(n) = 2f_c·sin(nω_c)/(nω_c) for n ≠ 0 ⇒ h_d(n) is not causal
[Figure: sinc-shaped h_d(n), peak 2f_c at n = 0, zeros at multiples of π/ω_c]
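The truncation-by-shifting idea can be sketched directly; the cutoff f_c = 0.1 and length N = 21 below are assumed example values, not from the notes:

```python
# Ideal low-pass h_d(n) = sin(n·ωc)/(π·n) (= 2·fc at n = 0), shifted by (N-1)/2
# and truncated with a rectangular window to get a causal length-N FIR filter.
import math

fc = 0.1                      # assumed normalized cutoff, ωc = 2π·fc
wc = 2 * math.pi * fc
N = 21                        # assumed filter length (odd -> integer delay)

def hd(n):                    # ideal (non-causal) impulse response
    return 2 * fc if n == 0 else math.sin(wc * n) / (math.pi * n)

h = [hd(n - (N - 1) // 2) for n in range(N)]   # shift + rectangular window

# the taps stay symmetric (linear phase) and the DC gain stays near 1
print(round(sum(h), 3))
```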
H_d(e^{jω}) = ∑_{n=−∞}^{∞} h_d(n)·e^{−jnω}; H_d(e^{jω}) is truncated for the causal FIR filter to be implemented:
H(e^{jω}) = ∑_{n=0}^{N−1} h_d(n)·e^{−jωn}, where N is the length of the filter
[Figure: z-plane — symmetry of the roots of H(z) and H(z^{−1}): zeros z_1, z_2 and their reciprocals 1/z_1, 1/z_2 with respect to the unit circle]
M even:
H(z) = z^{−(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[z^{(M−2n−1)/2} + z^{−(M−2n−1)/2}]
H(e^{jω}) = e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} h(n)·[e^{jω(M−2n−1)/2} + e^{−jω(M−2n−1)/2}]
H(e^{jω}) = e^{−jω(M−1)/2}·∑_{n=0}^{M/2−1} 2h(n)·cos(ω(M−2n−1)/2)
Y(z) = ∑_{k=0}^{M−1} b_k·z^{−k}·X(z) = H(z)·X(z), with b_k = 2h(k)·cos(ω(M−2k−1)/2)
H(e^{jω}) = H_r(e^{jω})·e^{−jω(M−1)/2}, where H_r(e^{jω}) = 2·∑_{n=0}^{M/2−1} h(n)·cos(ω(M−2n−1)/2)
∠H(e^{jω}) = −ω(M−1)/2
τ(ω) = (M−1)/2
2
if M is odd: z = e jω
n −3
M −2 n −1
− jω
M −1
h M −1 2 jω M −22 n −1 − jω
H (e jω ) = e 2
+ ∑h(n)e
+e 2
2 n =0
M −1 M −3
− jω
h M −1 2
M − 2n −1
H ( e jω ) = e 2
+ ∑h(n) cos ω
2 n =0 2
M −3
H ( e jω ) = H ( e jω ) = h M − 1 + M − 2n − 1
2
r
2
∑ h(n) cos ω
n= 0 2
jω M −1
∠ H (e ) = − ω
2
M −1
τ (ω ) =
2
For an antisymmetric impulse response (M even):

H(e^{jω}) = H_I(e^{jω}) = 2·Σ_{n=0}^{M/2-1} h(n)·sin(ω(M-2n-1)/2)

∠H(e^{jω}) = -ω(M-1)/2 + π/2

τ(ω) = (M-1)/2
h(n) = h_d(n)·w(n) = { h_d(n), 0 ≤ n ≤ N-1 ; 0, otherwise }

w(n) = { 1, 0 ≤ n ≤ N-1 ; 0, otherwise } - rectangular window function; truncation causes the Gibbs phenomenon

H(e^{jω}) = H_d(e^{jω}) * W(e^{jω}) = (1/2π) ∫_{-π}^{π} H_d(e^{jv})·W(e^{j(ω-v)}) dv
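The truncate-and-window procedure can be sketched directly. The length N = 21, cutoff ω_c = 0.4π, and the rectangular window are all assumed, illustrative choices; because the truncated response stays symmetric, the resulting filter has exactly linear phase:

```python
import numpy as np

# Window-method FIR design sketch: delay the ideal h_d(n) by (N-1)/2 to center it,
# truncate to N samples, and multiply by a window (rectangular here).
# N = 21 and wc = 0.4*pi are assumed, illustrative values.
N, wc = 21, 0.4 * np.pi
n = np.arange(N)
m = n - (N - 1) / 2                           # symmetric index around the center
h = (wc / np.pi) * np.sinc(wc * m / np.pi)    # truncated ideal response
w = np.ones(N)                                # rectangular window w(n)
h = h * w                                     # h(n) = h(N-1-n): linear phase
```

With a smoother window (Hamming, Blackman, ...) only the line defining `w` changes; the symmetry, and hence the linear phase, is preserved.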
[Figure: rectangular window spectrum, main lobe around ω = 0 between -ω_c and ω_c]

W(e^{jω}) = Σ_{n=0}^{M-1} 1·e^{-jωn} = (1 - e^{-jωM})/(1 - e^{-jω}) = [ e^{-jωM/2}·(e^{jωM/2} - e^{-jωM/2}) ] / [ e^{-jω/2}·(e^{jω/2} - e^{-jω/2}) ] =

= e^{-jω(M-1)/2} · sin(ωM/2)/sin(ω/2)

|W(e^{jω})| = | sin(ωM/2)/sin(ω/2) | ; phase = -ω(M-1)/2 ; grd (group delay) = (M-1)/2

[Figure]
H(z) = Σ_{n=0}^{N} h(n)·z^{-n} ⇒ causal FIR transfer function

[Figures 1 and 2: impulse-response symmetries]

Type 3: h(n) = -h(N-n), N even    Type 4: h(n) = -h(N-n), N odd

[Figure 3]

For N = 8 (symmetric h(n) = h(N-n)):

H(e^{jω}) = H(z)|_{z=e^{jω}} = e^{-4jω}·[ h(0)·2cos4ω + h(1)·2cos3ω + h(2)·2cos2ω + h(3)·2cosω + h(4) ]
For N odd: h(n) = h(N-n)

N = 7 ⇒ H(z) = Σ_{n=0}^{7} h(n)·z^{-n} = h(0) + h(1)·z^{-1} + h(2)·z^{-2} + h(3)·z^{-3} + h(4)·z^{-4} + … + h(7)·z^{-7}

h(n) = h(7-n) ⇒ h(0) = h(7), h(1) = h(6), h(2) = h(5), h(3) = h(4)

H(z) = h(0)·(1 + z^{-7}) + h(1)·(z^{-1} + z^{-6}) + h(2)·(z^{-2} + z^{-5}) + h(3)·(z^{-3} + z^{-4}) =

= z^{-7/2}·[ h(0)·(z^{7/2} + z^{-7/2}) + h(1)·(z^{5/2} + z^{-5/2}) + h(2)·(z^{3/2} + z^{-3/2}) + h(3)·(z^{1/2} + z^{-1/2}) ]

H(e^{jω}) = H(z)|_{z=e^{jω}} = e^{-j7ω/2}·[ h(0)·2cos(7ω/2) + h(1)·2cos(5ω/2) + h(2)·2cos(3ω/2) + h(3)·2cos(ω/2) ] =

= e^{-j7ω/2}·H̃₇(ω)·e^{jβ} = H̃₇(ω)·e^{j(-7ω/2 + β)}

β = 0 ⇒ H̃₇(ω) ≥ 0 ; β = π ⇒ H̃₇(ω) < 0

H̃₇(ω) = 2h(0)·cos(7ω/2) + 2h(1)·cos(5ω/2) + 2h(2)·cos(3ω/2) + 2h(3)·cos(ω/2)

θ = -(7/2)ω + β ; grd θ = 7/2
In general:

H(e^{jω}) = e^{j(-ωN/2 + β)}·H̃(ω)

H̃(ω) = 2·Σ_{n=1}^{(N+1)/2} h((N+1)/2 - n)·cos(ω(n - 1/2))

θ = -(N/2)ω + β ; grd θ = N/2
Antisymmetric impulse response with odd length (type 3)

For N = 8, h(n) = -h(N-n):

H(e^{jω}) = H(z)|_{z=e^{jω}} = e^{-4jω}·[ h(0)·2j·sin4ω + h(1)·2j·sin3ω + h(2)·2j·sin2ω + h(3)·2j·sinω ] =

= e^{j(-4ω + π/2)}·[ h(0)·2sin4ω + h(1)·2sin3ω + h(2)·2sin2ω + h(3)·2sinω ]·e^{jβ} = e^{j(-4ω + π/2 + β)}·H̃(ω)

β = 0 ⇒ H̃(ω) ≥ 0 ; β = π ⇒ H̃(ω) < 0

θ = -4ω + π/2 + β ; ω ∈ (0, π) ⇒ grd θ = -dθ/dω = 4

In general:

H(e^{jω}) = e^{j(-ωN/2 + π/2 + β)}·H̃(ω)
For N odd: h(n) = -h(N-n)

N = 7 ⇒ H(z) = Σ_{n=0}^{7} h(n)·z^{-n} = h(0)·(1 - z^{-7}) + h(1)·(z^{-1} - z^{-6}) + h(2)·(z^{-2} - z^{-5}) + h(3)·(z^{-3} - z^{-4})

H(e^{jω}) = H(z)|_{z=e^{jω}} = e^{-j7ω/2}·[ h(0)·(e^{j7ω/2} - e^{-j7ω/2}) + h(1)·(e^{j5ω/2} - e^{-j5ω/2}) + h(2)·(e^{j3ω/2} - e^{-j3ω/2}) + h(3)·(e^{jω/2} - e^{-jω/2}) ] =

= 2j·e^{-j7ω/2}·[ h(0)·sin(7ω/2) + h(1)·sin(5ω/2) + h(2)·sin(3ω/2) + h(3)·sin(ω/2) ] = e^{j(-7ω/2 + π/2 + β)}·H̃(ω)

H̃(ω) = 2·[ h(0)·sin(7ω/2) + h(1)·sin(5ω/2) + h(2)·sin(3ω/2) + h(3)·sin(ω/2) ]

θ = -(7/2)ω + π/2 + β ; grd θ = 7/2

β = 0 ⇒ H̃(ω) ≥ 0 ; β = π ⇒ H̃(ω) < 0

In general:

H(e^{jω}) = e^{j(-ωN/2 + π/2 + β)}·H̃(ω) ; θ = -(N/2)ω + π/2 + β

H̃(ω) = 2·Σ_{n=1}^{(N+1)/2} h((N+1)/2 - n)·sin(ω(n - 1/2)) ; grd θ = N/2
General form of frequency response

H(e^{jω}) = e^{-jNω/2}·e^{jβ}·H̃(ω)

θ(ω) = { -Nω/2 + β, if H̃(ω) ≥ 0 ; -Nω/2 + β + π, if H̃(ω) < 0 }

grd θ(ω) = N/2

H(z) = Σ_{n=0}^{N} h(n)·z^{-n} - causal FIR filter
a) h(n) = h(N-n): H(z) = z^{-N}·H(z^{-1})

b) h(n) = -h(N-n):

H(z) = Σ_{n=0}^{N} h(n)·z^{-n} = -Σ_{n=0}^{N} h(N-n)·z^{-n} = -Σ_{m=0}^{N} h(m)·z^{m-N} = -z^{-N}·H(z^{-1})

→ If z = ξ₀ is a zero of H(z), then z = 1/ξ₀ is also a zero of H(z)
→ If z = e^{jφ} (a complex number on the unit circle) is a zero, then z = e^{-jφ} is also a zero of H(z)
→ If z = r·e^{±jφ} is a zero of H(z), then z = (1/r)·e^{∓jφ} is also a zero of H(z)
→ A zero at z = ±1 is its own reciprocal, implying it can appear singly
→ A type 2 FIR filter must have a zero at z = -1, since H(-1) = (-1)^N·H(-1) = -H(-1) for N odd
→ For type 3 and type 4 FIR filters, H(1) = -H(1), forcing H(1) = 0; for type 3 (N even) also H(-1) = -(-1)^N·H(-1) = -H(-1), forcing H(-1) = 0
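The reciprocal-pair structure of the zeros is easy to confirm numerically. A small sketch; the symmetric coefficient set below is an arbitrary, assumed example (any h(n) = h(N-n) works):

```python
import numpy as np

# Zeros of a linear-phase FIR filter occur in reciprocal pairs: if z0 is a zero
# of H(z), so is 1/z0.  The symmetric h(n) = h(N-n) below is an assumed example.
h = np.array([2.0, -3.5, 5.0, -3.5, 2.0])
zeros = np.roots(h)                  # roots of h(0)z^4 + h(1)z^3 + ... + h(4)
recips = 1.0 / zeros
# each reciprocal coincides (numerically) with one of the zeros
match = all(np.min(np.abs(zeros - r)) < 1e-6 for r in recips)
```

For this particular palindromic polynomial all four zeros lie on the unit circle, where a zero's reciprocal is simply its conjugate.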
Bounded real transfer function of FIR filter

H(e^{jω}) = Y(e^{jω})/X(e^{jω}) ; |H(e^{jω})| ≤ 1 - bounded real (BR) transfer function for a causal filter

H₀(z) = (1/2)·(1 + z^{-1}) = (1/2)·(z + 1)/z

z_z = -1 → zero
z_p = 0 → pole

z = e^{jω}: H₀(e^{jω}) = H₀(z)|_{z=e^{jω}} = (1/2)·(e^{jω} + 1)/e^{jω}

z_z = -1 ⇒ ω_z = π ; z_p = 0 (ω ∈ (0, π))

H₀(e^{jω}) = (1/2)·e^{jω/2}·(e^{jω/2} + e^{-jω/2})/e^{jω} = e^{-jω/2}·cos(ω/2)

θ = -ω/2 ; grd θ = 1/2

|H₀(e^{jω})| = cos(ω/2)

20·log|H₀(e^{jω_c})| = 20·log(1/√2) = -3 dB

|H₀(e^{jω_c})| = 1/√2 = cos(ω_c/2) ⇒ cos(π/4) = cos(ω_c/2) ⇒ ω_c = π/2
LPF → HPF by the substitution z → -z:

H₁(z) = (1/2)·(1 + (-z)^{-1}) = (1/2)·(1 - z^{-1}) = (1/2)·(z - 1)/z

z_z = 1 (zero) ; z_p = 0 (pole)

H₁(e^{jω}) = (1/2)·(e^{jω} - 1)/e^{jω} = (1/2)·e^{jω/2}·(e^{jω/2} - e^{-jω/2})/e^{jω} = e^{j(-ω/2 + π/2)}·sin(ω/2)

z_z = 1 ⇒ ω_z = 0: the zero now lies at ω = 0, so the response is high-pass.
Filter design steps
1. Specification of the filter requirements
a) signal characteristics
- type of signal source
- input/output interface
- data rates and word width
- highest frequency of interest
b) characteristics of the filter
b1) desired amplitude response }
b2) phase response             } and their tolerances
- speed of operation
- modes of filtering: in real time or off-line
c) level of implementation
IIR Filters

H(z) = ( Σ_{k=0}^{M} a_k·z^{-k} ) / ( 1 + Σ_{k=1}^{N} b_k·z^{-k} )    (3)

Y(z) = H(z)·X(z)
Impulse invariance:

H(e^{jω}) = Σ_{k=-∞}^{∞} H_c( j(ω/T_d + 2πk/T_d) )

If H_c(jΩ) = 0 for |Ω| ≥ π/T_d, then H(e^{jω}) = H_c(jω/T_d), |ω| ≤ π

H_c(s) = Σ_{k=1}^{N} A_k/(s - s_k) → z  (s = (1/T)·ln z)

h_c(t) = L^{-1}{H_c(s)} = { Σ_{k=1}^{N} A_k·e^{s_k t}, t ≥ 0 ; 0, t < 0 }

Sampling (sampling theorem):

h(n) = T_d·h_c(nT_d) = Σ_{k=1}^{N} T_d·A_k·e^{s_k n T_d}·u(n) = Σ_{k=1}^{N} T_d·A_k·(e^{s_k T_d})^n·u(n)

H(z) = Σ_{k=1}^{N} T_d·A_k/(1 - e^{s_k T_d}·z^{-1})
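The mapping can be sketched for a single real pole. A = 1, s₀ = -2 and T_d = 0.5 are assumed, illustrative values:

```python
import numpy as np

# Impulse invariance for a single pole H_c(s) = A/(s - s0):
# h(n) = Td*A*(e^{s0*Td})^n * u(n)  ->  H(z) = Td*A/(1 - e^{s0*Td} z^-1).
A, s0, Td = 1.0, -2.0, 0.5          # assumed illustrative values
pole_z = np.exp(s0 * Td)            # s-plane pole maps to z = e^{s0*Td}
stable = abs(pole_z) < 1            # left half-plane pole -> inside unit circle

# direct check: sampled impulse response vs inverse transform of H(z)
n = np.arange(10)
h_sampled = Td * A * np.exp(s0 * Td * n)
h_from_Hz = Td * A * pole_z ** n
```

A pole with Re s₀ < 0 always lands strictly inside the unit circle, which is the stability condition stated next.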
[Figure: pole mapping z_k = e^{s_k T_d}]

Stability condition: poles with Re s_k < 0 map to |e^{s_k T_d}| < 1, i.e. inside the unit circle.
Example:
Design a discrete-time LPF by applying the impulse invariance method to an appropriate continuous-time Butterworth filter.

T_d = 1

|H(e^{j0.2π})| ≥ 0.89125
|H(e^{j0.3π})| ≤ 0.17783

|H_c(jΩ)|² = 1/(1 + (Ω/Ω_c)^{2N})  - Butterworth filter

1 + (0.2π/Ω_c)^{2N} = (1/0.89125)²
1 + (0.3π/Ω_c)^{2N} = (1/0.17783)²    ⇒ N = 5.8858, Ω_c = 0.70474

Choose N = 6 ⇒ Ω_c = 0.7032
Pole computation:

H_c(s)·H_c(-s) = 1/(1 + (s/(jΩ_c))^{2N})

Poles: 1 + (s/(jΩ_c))^{12} = 0 ⇒ 12 poles; the 6 in the left half-plane give:

H_c(s) = 0.12093 / [ (s² + 0.3640s + 0.4945)(s² + 0.9945s + 0.4945)(s² + 1.3585s + 0.4945) ]

Expanding H_c(s) into second-order partial fractions and applying the impulse-invariance mapping:

H(z) = (0.2871 - 0.4466·z^{-1})/(1 - 1.2971·z^{-1} + 0.6949·z^{-2}) + …

Draw the signal flow graph of the implementation of the system in parallel form using second-order sections:
- direct form I and direct form II
- cascade of second-order sections
- direct from 1 and direct from 2
- cascade of second order section
z −1 + α
z −1 = −
1 + αz −1 θ p +ωp
cos
HPF α= 2
θ −ωp
cos p
2
ω p2 + ω p1
2αk −1 k −1 cos
BPF z −2 − z + 2
k +1 k +1
α=
z −1 = ω p2 − ω p1
k −1 −2 2αk −1 cos
z − z +1 2
k +1 k +1
ω p2 − ω p1 θp
k = cot ang tan
2 2
ω p2 + ω p1
2α −1 1 − k cos
z −2 − z + α= 2
BSF z −1
= 1 − k 1+ k ω p2 − ω p1
1 − k −2 2α −1 cos
z − z +1 2
1+ k 1+ k
ω p2 − ω p1 θp
k = tan tan
2 2
• The unit circle of the prototype plane must map onto the unit circle of the z-plane:

ẑ = e^{jθ}, z = e^{jω}:  e^{-jθ} = G(e^{-jω}) = |G(e^{-jω})|·e^{j∠G(e^{-jω})} ⇒ |G(e^{-jω})| = 1 and θ = -∠G(e^{-jω})

LPF → LPF:

ẑ^{-1} = G(z^{-1}) = (z^{-1} - α)/(1 - α·z^{-1})

e^{-jθ} = (e^{-jω} - α)/(1 - α·e^{-jω}) ⇒ ω = arctan[ (1 - α²)·sinθ / (2α + (1 + α²)·cosθ) ]

ω → ω_p:  ω_p = arctan[ (1 - α²)·sinθ_p / (2α + (1 + α²)·cosθ_p) ] ⇒ α = ...
Σ_{k=0}^{N} a_k·d^k y(t)/dt^k = Σ_{k=0}^{N} b_k·d^k x(t)/dt^k

x(t) → [system] → y(t)

y(t) → [H(s) = s] → dy(t)/dt

y(n) → [H(z) = (1 - z^{-1})/T] → (y(n) - y(n-1))/T

s = (1 - z^{-1})/T  or  z = 1/(1 - sT)

s = jΩ ⇒ z = 1/(1 - jΩT) = (1/√(1 + Ω²T²))·e^{j·arctan(ΩT)}

(As Ω sweeps the jΩ axis, z traces a circle of radius 1/2 centered at z = 1/2, strictly inside the unit circle.)

In general:  d^k y(t)/dt^k → ((1 - z^{-1})/T)^k

H(z) = H_a(s)|_{s=(1-z^{-1})/T}  → system function of a digital IIR filter

H_a(s) - transfer function of an analog filter

This approximation is valid only for LPF or BPF (low-frequency filters) and cannot be used at high frequencies.
dy/dt|_{t=nT} = Σ_{k=1}^{L} a_k·[ y(nT + kT) - y(nT - kT) ]/T

↓ Z

s = (1/T)·Σ_{k=1}^{L} a_k·(z^k - z^{-k})

z = e^{jω}:  s = (1/T)·Σ_{k=1}^{L} a_k·(e^{jωk} - e^{-jωk}) = j·(2/T)·Σ_{k=1}^{L} a_k·sin kω

s = jΩ ⇒ Ω = (2/T)·Σ_{k=1}^{L} a_k·sin kω

It is difficult to calculate the coefficients {a_k} so that the poles remain inside the unit circle.
Ex 1:
Convert the analog band-pass filter with system function H_a(s) = 1/((s + 0.1)² + 9) into a digital IIR filter by use of the backward difference for the derivative.

H(z) = H_a(s)|_{s=(1-z^{-1})/T} = 1/( ((1 - z^{-1})/T + 0.1)² + 9 ) =

= ( T²/(1 + 0.2T + 9.01T²) ) / ( 1 - (2(1 + 0.1T)/(1 + 0.2T + 9.01T²))·z^{-1} + (1/(1 + 0.2T + 9.01T²))·z^{-2} )

Choose T = 0.1 ⇒ p_{1,2} = 0.949·e^{±j16.5°}
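The pole location claimed in the example can be recomputed directly from the denominator coefficients:

```python
import numpy as np

# Check of the worked example: s = (1 - z^-1)/T applied to H_a(s) = 1/((s+0.1)^2 + 9)
# gives denominator 1 - a1*z^-1 + a2*z^-2 with the coefficients below (T = 0.1).
T = 0.1
d = 1 + 0.2 * T + 9.01 * T**2              # common factor 1 + 0.2T + 9.01T^2
a1 = 2 * (1 + 0.1 * T) / d
a2 = 1.0 / d
poles = np.roots([1.0, -a1, a2])           # z^2 - a1*z + a2 = 0
r = abs(poles[0])                          # pole radius
ang = np.degrees(abs(np.angle(poles[0])))  # pole angle in degrees
```

The complex-conjugate pair comes out at radius ≈ 0.949 and angle ≈ 16.5°, matching the text, and lies inside the unit circle, so the filter is stable.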
Ex 2:
Convert the same analog band-pass filter, H_a(s) = 1/((s + 0.1)² + 9), into a digital IIR filter by use of the mapping s = (z - z^{-1})/T:

H(z) = 1/( ((z - z^{-1})/T + 0.1)² + 9 ) = …… = T²·z² / ( z⁴ + 0.2T·z³ + (9.01T² - 2)·z² - 0.2T·z + 1 )
Bilinear transform

s = (2/T)·(1 - z^{-1})/(1 + z^{-1})

s = σ + jΩ ; z = r·e^{jω}

s = (2/T)·(1 - r^{-1}e^{-jω})/(1 + r^{-1}e^{-jω}) = (2/T)·[ (r² - 1) + j·2r·sinω ] / (1 + r² + 2r·cosω)

⇒ σ = (2/T)·(r² - 1)/(1 + r² + 2r·cosω)

Ω = (2/T)·2r·sinω/(1 + r² + 2r·cosω)

If r < 1 ⇒ σ < 0 ; r > 1 ⇒ σ > 0

r = 1 ⇒ σ = 0, s = jΩ, and Ω = (2/T_d)·sinω/(1 + cosω) = (2/T_d)·tan(ω/2)
Ex: Convert the analog filter with the system function H_a(s) = (s + 0.1)/((s + 0.1)² + 16) into a digital IIR filter by means of the bilinear transformation. The digital filter is to have a resonant frequency ω_r = π/2.

Ω_r = 4 ; Ω_r = (2/T_d)·tan(ω_r/2) ⇒ T_d = 1/2 ⇒ s = 4·(1 - z^{-1})/(1 + z^{-1})

H(z) = H_a(s)|_{s=4(1-z^{-1})/(1+z^{-1})} = ( 4(1-z^{-1})/(1+z^{-1}) + 0.1 ) / ( (4(1-z^{-1})/(1+z^{-1}) + 0.1)² + 16 ) =

= (0.128 + 0.006·z^{-1} - 0.122·z^{-2}) / (1 + 0.0006·z^{-1} + 0.975·z^{-2})

1 + 0.975·z^{-2} = 0 ⇒ poles p_{1,2} ≈ 0.987·e^{±jπ/2}
Usually the design begins with the digital filter specifications; then we determine the analog filter which satisfies the imposed specifications.

Ex 1: Design a single-pole low-pass filter with a 3 dB bandwidth of 0.2π, using the bilinear transformation applied to the analog filter H(s) = Ω_c/(s + Ω_c), where Ω_c is the 3 dB bandwidth of the analog filter.

ω_c = 0.2π

Ω_c = (2/T_d)·tan(ω_c/2) = (2/T_d)·tan(0.1π) = 0.65/T_d

Then the transfer function will be:

H(s) = (0.65/T_d)/(s + 0.65/T_d)

H(z) = H(s)|_{s=(2/T_d)(1-z^{-1})/(1+z^{-1})} = (0.65/T_d) / ( (2/T_d)·(1-z^{-1})/(1+z^{-1}) + 0.65/T_d ) = 0.245·(1 + z^{-1})/(1 - 0.509·z^{-1})

z = e^{jω}:  H(e^{jω}) = 0.245·(1 + e^{-jω})/(1 - 0.509·e^{-jω})

ω = 0:  H(e^{j0}) = 0.490/0.491 ≈ 1

ω_c = 0.2π:  |H(e^{j0.2π})| = | 0.245·(1 + e^{-j0.2π})/(1 - 0.509·e^{-j0.2π}) | = 0.707  - 3 dB attenuation
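The two gain values just quoted can be verified by evaluating the transfer function on the unit circle:

```python
import numpy as np

# Check of the worked design: H(z) = 0.245(1 + z^-1)/(1 - 0.509 z^-1)
# should have gain ~1 at dc and -3 dB (|H| ~ 0.707) at w = 0.2*pi.
def H(w):
    z = np.exp(1j * w)
    return 0.245 * (1 + 1 / z) / (1 - 0.509 / z)

g0 = abs(H(0.0))            # dc gain
g3 = abs(H(0.2 * np.pi))    # gain at the 3 dB band edge
```

Both numbers land on the values derived above, confirming the pre-warping step placed the band edge correctly.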
Ex. 2:
The specifications on the discrete-time filter are:

0.89125 ≤ |H(e^{jω})| ≤ 1 ; 0 ≤ ω ≤ 0.2π
|H(e^{jω})| ≤ 0.17783 ; 0.3π ≤ ω ≤ π

For this specific filter, the analog filter must have:

0.89125 ≤ |H_c(jΩ)| ≤ 1 ; 0 ≤ Ω ≤ (2/T_d)·tan(0.1π)
|H_c(jΩ)| ≤ 0.17783 ; (2/T_d)·tan(0.15π) ≤ Ω < ∞

For convenience choose T_d = 1:

|H_c(j·2tan0.1π)| ≥ 0.89125
|H_c(j·2tan0.15π)| ≤ 0.17783

|H_c(jΩ)|² = 1/(1 + (Ω/Ω_c)^{2N})  - Butterworth filter

1 + (2tan0.1π/Ω_c)^{2N} = (1/0.89125)²
1 + (2tan0.15π/Ω_c)^{2N} = (1/0.17783)²

⇒ N = 5.30466; choose N = 6 ⇒ Ω_c = 0.76622

We take the factored transfer function from the tables for various degrees, choosing the left half-plane poles in order to have stability:

H_c(s) = 0.20238 / [ (s² + 0.3996s + 0.5871)(s² + 1.0836s + 0.5871)(s² + 1.4802s + 0.5871) ]

H(z) = 0.0007378·(1 + z^{-1})⁶ / [ (1 - 1.2686z^{-1} + 0.7051z^{-2})(1 - 1.0106z^{-1} + 0.3583z^{-2})(1 - 0.9044z^{-1} + 0.2155z^{-2}) ]

Draw the signal flow-graph of the implementation of the system:
- as a cascade of 2nd-order sections
- in direct form I and direct form II
- in the parallel form using 2nd-order sections
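The required order can be recomputed from the two Butterworth constraints by eliminating Ω_c, as a sanity check on the N = 5.30 → 6 step:

```python
import numpy as np

# Order formula for the bilinear-transform Butterworth design above:
# N >= log10[(1/0.17783^2 - 1)/(1/0.89125^2 - 1)]
#      / (2*log10[tan(0.15*pi)/tan(0.1*pi)])
num = np.log10((1 / 0.17783**2 - 1) / (1 / 0.89125**2 - 1))
den = 2 * np.log10(np.tan(0.15 * np.pi) / np.tan(0.1 * np.pi))
N_exact = num / den          # ~5.30
N = int(np.ceil(N_exact))    # choose N = 6
```

Rounding up to the next integer guarantees the stopband specification is met (the passband edge is then used to fix Ω_c).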
Or:

H(z) = ( Σ_{k=0}^{N} a_k·z^{-k} ) / ( 1 + Σ_{k=1}^{N} b_k·z^{-k} )    (2)

or

H(z) = k·(z - z₁)(z - z₂)·…·(z - z_N) / ( (z - p₁)(z - p₂)·…·(z - p_N) )    (3)
Measure of information

Postulates:
1) P(x_i) ≥ 0 (probability of event x_i)
2) P(X) = 1 (probability of the sample space X - the certain event)
3) x_i ∩ x_j = ∅, i ≠ j, i, j = 1, 2, … - these are called mutually exclusive events:

P(∪_{i=1}^{n} x_i) = Σ_{i=1}^{n} P(x_i) - probability of mutually exclusive events

Theorem:
If one experiment has the possible outcomes x_i, i = 1, …, n, and the second experiment has the possible outcomes y_j, j = 1, …, m, then the combined experiment has the possible outcomes (x_i, y_j), i = 1, …, n, j = 1, …, m.

The joint probability P(x_i, y_j) of the combined experiment satisfies the condition: 0 ≤ P(x_i, y_j) ≤ 1

Theorem:

Σ_{i=1}^{n} P(x_i, y_j) = P(y_j) ;  Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j) = 1
Conditional probabilities

P(x_i | y_j) = P(x_i, y_j)/P(y_j) ; P(y_j) > 0 - conditional probability of the event x_i given the occurrence of the event y_j

P(y_j | x_i) = P(x_i, y_j)/P(x_i) , P(x_i) > 0 ⇒ P(x_i, y_j) = P(y_j)·P(x_i | y_j) = P(x_i)·P(y_j | x_i)

Notes
1. If x_i ∩ y_j = ∅ (x_i and y_j are mutually exclusive events) then P(x_i | y_j) = 0
2. If x_i is a subset of y_j (x_i ∩ y_j = x_i) then P(x_i | y_j) = P(x_i)/P(y_j)
3. If y_j is a subset of x_i (x_i ∩ y_j = y_j) then P(x_i | y_j) = P(y_j)/P(y_j) = 1
Bayes Theorem

If the x_j, j = 1, …, n, are mutually exclusive events whose union is the sample space, and Y is an event with P(Y) > 0, then:

P(x_i | Y) = P(x_i, Y)/P(Y) = P(Y | x_i)·P(x_i) / Σ_{j=1}^{n} P(Y | x_j)·P(x_j)
Statistical independence
If the occurrence of X doesn't depend on the occurrence of Y, then P(X | Y) = P(X) and P(X, Y) = P(X)·P(Y).

Example (pairwise independence):
P(x₁, x₂) = P(x₁)·P(x₂)
P(x₂, x₃) = P(x₂)·P(x₃)
P(x₁, x₃) = P(x₁)·P(x₃)
I(x_i; y_j) = log_b[ P(x_i | y_j)/P(x_i) ] = log_b[ P(x_i, y_j)/(P(x_i)·P(y_j)) ] ⇒ mutual information of x_i and y_j

If b = 2, I(x_i; y_j) is measured in [bits]; if b = e, in [nats].

ln a = 0.69315·log₂ a ; log₂ a = 1.44269·ln a
Observations

Show that I(x_i; y_j) = I(y_j; x_i):

I(x_i) = -log_b P(x_i) - self-information

Since P(x_i | y_j)/P(x_i) = P(x_i, y_j)/(P(x_i)·P(y_j)) = P(y_j | x_i)/P(y_j) ⇒ I(x_i; y_j) = I(y_j; x_i)
Examples:
1) Suppose a discrete information source emits a binary digit x_i ∈ {0, 1} with equal probability every τ seconds (P(x_i) = 1/2). Then the information content of each source output is I(x_i) = -log₂ P(x_i) = -log₂(1/2) = 1 [bit]
2) For a block of K binary digits from the source, which occurs in a time interval Kτ, P(x_i) = 1/2^K, so I(x_i) = -log₂(1/2^K) = K [bits]
2 2
P (Y = 0 / X = 0) = 1 − P0 P(Y = 1/ X = 1) = 1 − P1
P (Y = 1 / X = 0) = P0
P(Y = 0 / X = 1) = P1
P (Y = 0) = P(Y = 0 / X = 0) ⋅ P( X = 0) + P (Y = 0 / X = 1) ⋅ P( X = 1) =
1 1 1
= (1 − Po ) + P1 ⋅ = (1 − P0 + P1 )
2 2 2
P (Y = 1) = P (Y = 1 / X = 0) ⋅ P ( X = 0) + P (Y = 1 / X = 1) ⋅ P ( X = 1) =
1 1 1
= P0 ⋅ + (1 − P1 ) = (1 + P0 − P1 )
2 2 2
P ( xi / y j ) P ( xi , y j )
I ( xi , y i ) = log 2 = log b → mutual information
P ( xi ) P ( xi ) ⋅ P ( y j )
Then:
P(Y = 0 / X = 0) 1 − P0
I (0,0) = log 2 = log 2
P (Y = 0) 1
(1 − P0 + P0 )
[bits]
2
P(Y = 1 / X = 1) 1 − P1
I (1,1) = log 2 = log 2
Similarly: P(Y = 1) 1
(1 − P1 + P0 )
2
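The closed-form expression for I(0;0) can be evaluated for the special cases discussed next (a small helper, with P₀ = P₁ as in case a below):

```python
import numpy as np

# Mutual information I(0;0) = log2[(1-P0)/((1-P0+P1)/2)] for the binary channel
# with equiprobable inputs.
def I00(P0, P1):
    return np.log2((1 - P0) / ((1 - P0 + P1) / 2))

i_noiseless = I00(0.0, 0.0)    # errorless channel -> 1 bit
i_useless = I00(0.5, 0.5)      # P0 = P1 = 1/2   -> 0 bits
i_quarter = I00(0.25, 0.25)    # P0 = P1 = 1/4   -> log2(3) - 1 bits
```

The three values reproduce, in order, the three cases enumerated in the "Special cases" list that follows.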
Special cases

a) If P₀ = P₁ = P ⇒

I(0; 0) = log₂[ 2(1 - P)/(1 - P + P) ] = log₂[2(1 - P)] = 1 + log₂(1 - P)
I(1; 1) = 1 + log₂(1 - P)

Cases:
1: P₀ = P₁ = 0 → errorless channel (without noise) ⇒ I(0;0) = I(1;1) = 1 ⇒ total information is transferred
2: P₀ = P₁ = 1/2 ⇒ I(0;0) = I(1;1) = 0 → no information is transferred
3: P₀ = P₁ = 1/4 ⇒ I(0;0) = I(1;1) = -1 + log₂3 = 0.585 [bits]
Conditional self-information

I(x_i | y_j) = log_b[ 1/P(x_i | y_j) ] = -log_b P(x_i | y_j) → the information about x_i remaining after having observed the event Y = y_j

I(x_i | y_j) ≥ 0

I(x_i; y_j) = I(x_i) - I(x_i | y_j) ⇒ I(x_i; y_j) can be positive, negative, or zero

Average self-information:

H(X) = Σ_{i=1}^{m} P(x_i)·I(x_i) = -Σ_{i=1}^{m} P(x_i)·log_b P(x_i)

Entropy of the source is the average self-information per source letter, when X represents the alphabet of possible output letters from a source.

Special case (equiprobable letters):

P(x_i) = 1/n ⇒ H(X) = Σ_{i=1}^{n} (1/n)·log₂ n = log₂ n
Example:
Consider a source that emits a sequence of statistically independent letters, where each output letter is:

x₁ = 0 with probability q ; x₂ = 1 with probability 1 - q

H(X) = -P(x₁)·log₂P(x₁) - P(x₂)·log₂P(x₂) = -q·log₂q - (1 - q)·log₂(1 - q) = H(q)  [bits/letter]

[Figure: H(q) versus q, with maximum H = 1 bit at q = 1/2]
Example:
1. One experiment has four mutually exclusive outcomes x_i, i = 1, …, 4, and a second experiment has three mutually exclusive outcomes y_j, j = 1, …, 3. Find the mutual information and average mutual information.

Marginals: P(x_i) = Σ_{j=1}^{3} P(x_i, y_j) ; P(y_j) = Σ_{i=1}^{4} P(x_i, y_j)

Joint probabilities P(x_i, y_j):

 x_i \ y_j |  1     2     3   | P(x_i)
     1     | 0.10  0.08  0.13 |  0.31
     2     | 0.05  0.03  0.09 |  0.17
     3     | 0.05  0.12  0.14 |  0.31
     4     | 0.11  0.04  0.06 |  0.21
   P(y_j)  | 0.31  0.27  0.42 |   1

Mutual information: I(x_i; y_j) = log₂[ P(x_i, y_j)/(P(x_i)·P(y_j)) ]  [bits]:

 x_i \ y_j |    1        2        3
     1     |  0.057   -0.065   -0.0022
     2     | -0.0759  -0.6135   0.334
     3     | -0.9426   0.5197   0.1047
     4     |  0.7568  -0.5033  -0.558

Average mutual information: I(X; Y) = Σ_{i=1}^{4} Σ_{j=1}^{3} P(x_i, y_j)·I(x_i; y_j) = 0.067

       y_j          |   1       2       3
 P(y_j)             | 0.31    0.27    0.42
 -log₂P(y_j)        | 1.6897  1.8885  1.2515
 -P(y_j)·log₂P(y_j) | 0.5238  0.5099  0.5256

H(Y) = 1.5594 bits

Efficiency = H(Y)/R = 1.5594/3 = 0.5198
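All the numbers in this example follow mechanically from the joint table, so they can be recomputed in a few lines (the table values are those reconstructed above):

```python
import numpy as np

# Joint-probability table from the example; recompute marginals, the mutual-
# information table, the average mutual information I(X;Y) and H(Y).
P = np.array([[0.10, 0.08, 0.13],
              [0.05, 0.03, 0.09],
              [0.05, 0.12, 0.14],
              [0.11, 0.04, 0.06]])
Px = P.sum(axis=1)                       # P(x_i)
Py = P.sum(axis=0)                       # P(y_j)
I = np.log2(P / np.outer(Px, Py))        # I(x_i; y_j) table
Ixy = float((P * I).sum())               # average mutual information
Hy = float(-(Py * np.log2(Py)).sum())    # H(Y) in bits
```

Note that individual I(x_i; y_j) entries can be negative, while the average I(X;Y) is always ≥ 0.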
Ex.: Entropy of a binary source with memory
Consider the binary (i.e. two-symbol) first-order Markov source described by the transition diagram.

[Figure: transition diagram]

Extension codes
The source alphabet for the binary Markov source consists of "0" and "1", which occur with probabilities P(0) = 0.9 and P(1) = 0.1. Successive symbols are not independent, and we can define a new set of code symbols as binary 2-tuples (an extension code) to take advantage of this dependence:

 Binary 2-tuple | Extension symbol | Extension symbol probability
      00        |        a         | P(a) = P(0|0)·P(0) = 0.95 × 0.9 = 0.855
      11        |        b         | P(b) = P(1|1)·P(1) = 0.55 × 0.1 = 0.055
      01        |        c         | P(c) = P(0|1)·P(1) = 0.45 × 0.1 = 0.045
      10        |        d         | P(d) = P(1|0)·P(0) = 0.05 × 0.9 = 0.045
Kraft inequality

Σ_{k=1}^{L} 2^{-n_k} ≤ 1

Proof of necessity:
In a code tree of order n = n_L, the total number of terminal nodes is 2^n. A code word of length n_k uses up 2^{n - n_k} of these terminal nodes, and the number of terminal nodes assigned to all the code words cannot exceed the total:

Σ_{k=1}^{L} 2^{n - n_k} ≤ 2^n  | : 2^n  ⇒  Σ_{k=1}^{L} 2^{-n_k} ≤ 1
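The inequality is a one-line test on a set of candidate word lengths; the two length sets below are illustrative:

```python
# Kraft inequality: a prefix code with word lengths n_k exists iff
# sum_k 2^-n_k <= 1.
def kraft_ok(lengths):
    return sum(2.0 ** -n for n in lengths) <= 1.0

ok = kraft_ok([1, 2, 3, 3])    # 1/2 + 1/4 + 1/8 + 1/8 = 1.0  -> prefix code exists
bad = kraft_ok([1, 1, 2])      # 1/2 + 1/2 + 1/4 = 1.25       -> impossible
```

The first set corresponds, for example, to the prefix code {0, 10, 110, 111}; no prefix code can realize the second set.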
Let X be a set of letters (symbol words) from a DMS (discrete memoryless source) and H(X) the entropy of the source. The output letters x_k ∈ X, with probabilities of occurrence p(x_k), are assigned code words of length n_k symbols. It is possible to construct a code that satisfies the prefix condition and has an average length R̄ in the range [H(X), H(X) + 1):

H(X) ≤ R̄ < H(X) + 1

Proof

Lower bound: H(X) ≤ R̄

H(X) - R̄ = Σ_{k=1}^{L} p(x_k)·log₂(1/p(x_k)) - Σ_{k=1}^{L} n_k·p(x_k) =

= Σ_{k=1}^{L} p(x_k)·log₂(1/p(x_k)) + Σ_{k=1}^{L} p(x_k)·log₂(2^{-n_k}) = Σ_{k=1}^{L} p(x_k)·log₂( 2^{-n_k}/p(x_k) )

Using ln x ≤ x - 1:

H(X) - R̄ = log₂e · Σ_{k=1}^{L} p(x_k)·ln( 2^{-n_k}/p(x_k) ) ≤ log₂e · Σ_{k=1}^{L} p(x_k)·( 2^{-n_k}/p(x_k) - 1 ) =

= log₂e · ( Σ_{k=1}^{L} 2^{-n_k} - Σ_{k=1}^{L} p(x_k) ) ≤ 0

with equality (R̄ = H(X)) only when p(x_k) = 2^{-n_k} for all k.

Upper bound: R̄ < H(X) + 1

n_k - length of code word k, k = 1, …, L, n_k ∈ ℕ*. Select the lengths such that, if a word x_k has probability p(x_k), then 2^{-n_k} ≤ p(x_k) < 2^{-n_k+1}.

From 2^{-n_k} ≤ p(x_k) and Σ_{k=1}^{L} p(x_k) = 1:

Σ_{k=1}^{L} 2^{-n_k} ≤ Σ_{k=1}^{L} p(x_k) ⇒ Σ_{k=1}^{L} 2^{-n_k} ≤ 1

so the Kraft inequality is satisfied and such a prefix code exists.

Having chosen p(x_k) < 2^{-n_k+1}:

log₂ p(x_k) < -n_k + 1 ⇒ n_k < 1 - log₂ p(x_k)

R̄ = Σ_{k=1}^{L} p(x_k)·n_k < Σ_{k=1}^{L} p(x_k) - Σ_{k=1}^{L} p(x_k)·log₂ p(x_k) = 1 + H(X)

Example: p(x_k) = 1/3 ⇒ 1/4 < 1/3 < 1/2, i.e. 2^{-2} ≤ 1/3 < 2^{-1} ⇒ n_k = 2
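The construction in the upper-bound proof (n_k = ⌈-log₂ p(x_k)⌉, the Shannon code lengths) can be exercised directly; the probability distribution below is an arbitrary, illustrative choice:

```python
import math

# Shannon code lengths n_k = ceil(-log2 p(x_k)) satisfy 2^-n_k <= p(x_k) < 2^(-n_k+1),
# so the Kraft inequality holds and H(X) <= Rbar < H(X) + 1.
p = [1/3, 1/3, 1/6, 1/6]                       # assumed, non-dyadic distribution
n = [math.ceil(-math.log2(pk)) for pk in p]    # word lengths
H = -sum(pk * math.log2(pk) for pk in p)       # source entropy
Rbar = sum(pk * nk for pk, nk in zip(p, n))    # average code length
kraft_sum = sum(2.0 ** -nk for nk in n)
```

For a dyadic distribution (all p(x_k) powers of 1/2) the same construction achieves R̄ = H(X) exactly, which is the equality case of the lower bound.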
Remarks:
Variable-length codes that satisfy the prefix condition are efficient codes for any DMS with source symbols that are not equiprobable. When the source symbols are equiprobable it is better to use fixed-length codes.

Coding with fixed-length words

H(X) ≤ log₂L ≤ R̄ ⇒ R̄ ≥ H(X)

For L = 4 letters: R̄ = log₂L = log₂2² = 2; with H(X) = 1.75: H(X) = 1.75 < R̄ = 2. Choose R̄ = 2.

Encoding blocks of J source letters with N code symbols requires 2^N ≥ L^J:

N - 1 < J·log₂L ≤ N ,  i.e.  N = { J·log₂L, if L = 2^k ; ⌊J·log₂L⌋ + 1, if L ≠ 2^k }

Dividing J·log₂L ≤ N ≤ J·log₂L + 1 by J:

log₂L ≤ N/J ≤ log₂L + 1/J

Efficiency: η = H(X)/R̄ = J·H(X)/N, with R̄ = N/J.
Example:
For L = 5:

log₂L = log₂5 = 2.32193

log₂L ≤ R̄ < log₂L + 1 ; 2.32 ≤ R̄ < 3.32. For single letters, choose R̄ = 3.

With H(x) = 1.875 and block encoding (J = 2 letters in N = 5 symbols, R̄ = N/J = 2.5):

Efficiency: η = H(x)/R̄ = 1.875/2.5 = 0.75
Code I: the sequence 0 0 1 0 0 1 can be parsed as x₂ x₄ x₃ or as x₂ x₁ x₂ x₁:
we don't know for sure how to group the symbols; the code is ambiguous (not uniquely decodable).

[Figure]

Code I:

H(x) = -Σ_{i=1}^{7} p(x_i)·log₂ p(x_i) = 2.11 bits ; R̄ = Σ_{i=1}^{7} p(x_i)·n_i = 2.21 ⇒ η = H(x)/R̄ = 2.11/2.21 = 0.954

Code II:

H(x) = 2.11 bits ; R̄ = 2.66 ; η = H(x)/R̄ = 2.11/2.66 ≈ 0.79
 Letter | p(x_i) | -log₂ p(x_i) | Code | Length
   x₁   |  0.45  |    1.152     |  1   |   1
   x₂   |  0.35  |    1.515     |  01  |   2
   x₃   |  0.20  |    2.322     |  00  |   2

[Figure: code tree]

H(x) = -Σ_{i=1}^{3} p(x_i)·log₂ p(x_i) = 1.513 bits ; R̄ = Σ_{i=1}^{3} p(x_i)·n_i = 1.55 ⇒ η = H(x)/R̄ = 0.976

Or code II: [Figure]
Or code II figure
- q n -noise
Uniform Quantizer
7 bit quantizer
input-output characteristic for a uniform quanitzer:
∆ = 2 − R - size of quantization
∆ 2 2 −2 R
ε (q ) = ∫
∆ ∆ 1 2 1 q3 ∆
2 2
p( q ) q dq = ∫
2 2
q dq = 2
= =
− ∆
2
− ∆
2 ∆ ∆ 3 −∆
2 12 12
2 −2 R
ε ( q 2 ) log = 10 log ε ( q 2 ) = 10 log = −20 R log 2 − 10 log 12 = −6 R − 10 .8 dB
12
log (1 + µ x )
y =
log( 1 + x )
ex: =225; R=7 => ε(q ) = −77 dB
2
A non quantizer is made from a non-linear device that compresses the signal and a
uniform quantizer.
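The Δ²/12 result is easy to confirm by simulation. A sketch assuming a signal uniformly distributed over one unit of range and an illustrative R = 6 bits:

```python
import numpy as np

# Monte-Carlo check of E(q^2) = Delta^2/12 = 2^-2R/12 for a uniform quantizer.
rng = np.random.default_rng(0)
R = 6                                   # assumed number of bits
delta = 2.0 ** -R                       # step size
x = rng.uniform(-0.5, 0.5, 200_000)     # test signal, uniform over unit range
q = x - delta * np.round(x / delta)     # quantization error, ~uniform on (-d/2, d/2)
noise_power = float(np.mean(q ** 2))    # should approach delta^2/12
```

Each extra bit halves Δ and therefore reduces the noise power by a factor of 4, which is the 6 dB/bit term in the formula above.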
ε_p = E[ (x_n - Σ_{i=1}^{p} a_i·x_{n-i})² ]

The source is stationary with Φ(n) - autocorrelation function. If the source is wide-sense stationary, it results:

ε_p = Φ(0) - 2·Σ_{i=1}^{p} a_i·Φ(i) + Σ_{i=1}^{p} Σ_{j=1}^{p} a_i·a_j·Φ(i-j)

Minimizing ε_p over the coefficients a_i:

Σ_{i=1}^{p} a_i·Φ(i - j) = Φ(j) ; j = 1, 2, …, p - Yule-Walker equations

If Φ(n) is unknown a priori, it can be estimated by the relation:

Φ̂(n) = (1/N)·Σ_{i=1}^{N-n} x_i·x_{i+n} ; n = 0, 1, 2, …, p
x_n - sampled value ; e_n - prediction error
x̃_n - quantized signal ; ẽ_n - quantized prediction error
x̂_n - prediction from quantized samples ; q_n - quantization error

Encoder:
Decoder:

e_n = x_n - x̂_n = x_n - Σ_{i=1}^{p} a_i·x̃_{n-i}

q_n = e_n - ẽ_n = e_n - (x̃_n - x̂_n) = (e_n + x̂_n) - x̃_n = x_n - x̃_n

The quantized value x̃_n differs from the input x_n only by the quantization error q_n of the current sample, independent of the predictor. Therefore the quantization errors do not accumulate.

Improvement in the quality of the estimate ⇒ inclusion of linearly filtered past values of the quantized error:

x̂_n = Σ_{i=1}^{p} a_i·x̃_{n-i} + Σ_{i=1}^{m} b_i·ẽ_{n-i}
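The no-accumulation property (x_n - x̃_n = q_n at every step) can be demonstrated with a minimal DPCM loop. The first-order predictor a₁ = 0.9 and the coarse step size are assumed, illustrative values:

```python
import numpy as np

# DPCM sketch: the reconstruction error equals the quantizer error of the
# current sample only - quantization errors do not accumulate.
a1, delta = 0.9, 0.25                     # assumed predictor coefficient, step size
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.1, 300))    # a slowly varying test signal
xt = np.zeros_like(x)                     # x~(n): reconstructed samples
q = np.zeros_like(x)                      # q(n):  quantization errors
prev = 0.0
for n in range(len(x)):
    pred = a1 * prev                      # x^(n) = a1 * x~(n-1)
    e = x[n] - pred                       # prediction error
    et = delta * np.round(e / delta)      # e~(n): quantized prediction error
    xt[n] = pred + et                     # decoder output
    q[n] = e - et                         # quantizer error of this sample
    prev = xt[n]
```

Because the encoder predicts from the *quantized* past samples (the same values the decoder has), x(n) - x̃(n) stays bounded by Δ/2 no matter how long the loop runs.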
figure Encoder:
Decoder
figure
Many real sources are quasistationary. The variance and autocorrelation functions of
the source output vary slowly with time. The PCM and DPCM encoders are
designed on the basis that the source output is stationary. The efficiency and
performance can be improved by having them adapted to the slowly time-variant
statistics of the source. The quantization error q n has a time-variant variance.
Step-size multipliers M(·):

        |       PCM        |       DPCM
  bits  |  2     3     4   |  2     3     4
  M(1)  | 0.60  0.85  0.80 | 0.80  0.90  0.90
  M(2)  | 2.20  1.00  0.80 | 1.60  0.90  0.90
  M(3)  |       1.00  0.80 |       1.25  0.90
  M(4)  |       1.50  0.80 |       1.70  0.90
  M(5)  |             1.20 |             1.20
  M(6)  |             1.60 |             1.60
  M(7)  |             2.00 |             2.00
  M(8)  |             2.40 |             2.40
In DPCM the predictor can also be made adaptive when the source output is non-stationary. The predictor coefficients can be changed periodically to reflect the changing signal statistics of the source. The short-term estimate of the autocorrelation function of x_n replaces the ensemble correlation function. The predictor coefficients are passed along to the receiver together with the quantized error ẽ_n. Since the source predictor must then also be implemented at the receiver, a higher bit rate results, and the advantage of the decreased bit rate produced by using a quantizer with fewer bits is lost. An alternative is to use a predictor at the receiver that computes its own prediction coefficients from ẽ_n and x̃_n.
The estimated value of x_n is the previous sample x_{n-1} modified by the quantization noise q_{n-1}. The difference equation represents an integrator with an input e_n. In general the quantized error signal is scaled by some value, say Δ₁, called the step size.

The encoder shown in the figure approximates the waveform x(t) by a linear staircase function (the waveform must change slowly relative to the sampling rate); it follows that the sampling rate must be about 5 times the Nyquist rate or greater.
[Figure: delta-modulation encoder - sampler, comparator, two-level quantizer (e_n = ±1) and accumulator with unit delay z^{-1}, x̂_n = x̃_{n-1}, step size Δ₁ - and decoder: accumulator followed by a low-pass filter]

Adaptive step size (Jayant, 1970):

Δ_n = Δ_{n-1}·K^{e_n·e_{n-1}} , K > 1 ; e_n = ±1

Δ_n = α·Δ_{n-1} + K₁ , if ẽ_n, ẽ_{n-1}, ẽ_{n-2} have the same sign
Δ_n = α·Δ_{n-1} + K₂ , otherwise

K₁ >> K₂ > 0 ; 0 < α < 1
{x_n}, n = 0, …, N-1

The sample sequence is assumed to have been generated by an all-pole filter with:

H(z) = G / (1 - Σ_{k=1}^{p} a_k·z^{-k})

In general, the observed source output does not satisfy the difference equation; only its model does. If the input is a white-noise sequence or an impulse, we may form an estimate (prediction) of x_n:

x̂_n = Σ_{k=1}^{p} a_k·x_{n-k} ; n > 0

The error of the observed value x_n with respect to the predicted value x̂_n is:

e_n = x_n - x̂_n = x_n - Σ_{k=1}^{p} a_k·x_{n-k}

The filter coefficients are chosen to minimize the mean square of the error:

ξ_p = E(e_n²) = E[ (x_n - Σ_{k=1}^{p} a_k·x_{n-k})² ] = φ(0) - 2·Σ_{k=1}^{p} a_k·φ(k) + Σ_{k=1}^{p} Σ_{m=1}^{p} a_k·a_m·φ(k-m)

where ξ_p is the residual MSE obtained after substituting the optimum prediction coefficients, found from the Yule-Walker equations:

Σ_{k=1}^{p} a_k·φ(i - k) = φ(i) ; i = 1, …, p

⇒ ξ_p = G² = φ(0) - Σ_{k=1}^{p} a_k·φ(k)

Usually the time-autocorrelation function of the source output is unknown, and we use the estimate:

φ̂(n) = (1/N)·Σ_{i=1}^{N-n} x_i·x_{i+n} ; n = 0, …, N-1
Example: p = 4:

      | φ̂(0)  φ̂(1)  φ̂(2)  φ̂(3) |
Φ =   | φ̂(1)  φ̂(0)  φ̂(1)  φ̂(2) |   - Toeplitz matrix
      | φ̂(2)  φ̂(1)  φ̂(0)  φ̂(1) |
      | φ̂(3)  φ̂(2)  φ̂(1)  φ̂(0) |

A recursive algorithm that avoids inverting the Toeplitz matrix is due to Levinson and Durbin (1947, 1959). The recursive solution gives the prediction coefficients for all orders less than p. The residual MSEs ξ̂_i, i = 1, …, p, form a monotone decreasing sequence:

ξ̂_p ≤ ξ̂_{p-1} ≤ … ≤ ξ̂_1 ≤ ξ̂_0

and the reflection coefficients satisfy |a_ii| < 1.

The necessary and sufficient condition for stability is that all the poles of H(z) lie inside the unit circle.
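The Levinson-Durbin recursion can be sketched in a few lines. The autocorrelation sequence below is an assumed, illustrative positive-definite example (in practice it would be the estimate φ̂(n) from the data):

```python
import numpy as np

# Levinson-Durbin recursion solving the Yule-Walker equations
# sum_k a_k * phi(|i-k|) = phi(i), i = 1..p, in O(p^2) operations.
def levinson_durbin(phi, p):
    a = np.zeros(p + 1)          # a[1..i]: current prediction coefficients
    xi = phi[0]                  # residual MSE of order 0
    xis = []
    for i in range(1, p + 1):
        # reflection coefficient for order i
        k = (phi[i] - np.dot(a[1:i], phi[i-1:0:-1])) / xi
        new_a = a.copy()
        new_a[i] = k
        new_a[1:i] = a[1:i] - k * a[i-1:0:-1]   # update lower coefficients
        a = new_a
        xi *= (1 - k * k)                       # residual MSE shrinks each order
        xis.append(xi)
    return a[1:], xis

phi = np.array([1.0, 0.7, 0.4, 0.2])            # assumed autocorrelation values
a, xis = levinson_durbin(phi, 3)
```

The returned coefficients satisfy the Yule-Walker system exactly, and the residuals ξ̂₁ ≥ ξ̂₂ ≥ ξ̂₃ decrease monotonically as the text states; |k| < 1 at every step corresponds to all poles of H(z) being inside the unit circle.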
Implementation

[Figure: transmitter and receiver]

Notes:
- When the source output is stationary, it is enough to transmit the source parameters only once.
- When the source output is quasi-stationary, new estimates of the filter parameters must be obtained periodically.

Speech recovering