
Difference equation representation of discrete time signals

x(n) ──► DTS (H) ──► y(n)
input signal (excitation)        output signal (response)

∑_{k=0}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k x(n−k),   n ≥ 0        (1)

where a_k, b_k are known coefficients.

∑_{k=0}^{N} a_k y(n+k) = ∑_{k=0}^{M} b_k x(n+k),   n ≥ 0        (2)

M ≤ N for a stable (causal) system.

Method I for finding the solution of the system:

a_0 y(n) + ∑_{k=1}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k x(n−k)

⇒ y(n) = (1/a_0) [ ∑_{k=0}^{M} b_k x(n−k) − ∑_{k=1}^{N} a_k y(n−k) ]

n = 0 ⇒ y(0) = (1/a_0) [ ∑_{k=0}^{M} b_k x(−k) − ∑_{k=1}^{N} a_k y(−k) ],

where y(−k), k = 1, …, N, represent the initial conditions of the difference equation and are assumed known.

n = 1 ⇒ y(1) = (1/a_0) [ ∑_{k=0}^{M} b_k x(1−k) − ∑_{k=1}^{N} a_k y(1−k) ], where y(0) was determined at the previous step.

n = 2 ⇒ y(2) = …


Example:
Consider the difference equation:
y(n) − (3/4) y(n−1) + (1/8) y(n−2) = (1/2)^n,  n ≥ 0
with initial conditions y(−1) = 1 and y(−2) = 0. Find y(n).

Solution
y(n) = (3/4) y(n−1) − (1/8) y(n−2) + (1/2)^n
n = 0 ⇒ y(0) = (3/4) y(−1) − (1/8) y(−2) + (1/2)^0 = 7/4
n = 1 ⇒ y(1) = (3/4) y(0) − (1/8) y(−1) + (1/2)^1 = 27/16
n = 2 ⇒ y(2) = (3/4) y(1) − (1/8) y(0) + (1/2)^2 = 83/64

Method II for finding the solution of the system:

The solution of equation (1) or (2) has two components:
- the homogeneous solution y_h(n), which depends on the initial conditions (assumed known);
- the particular solution y_p(n), which depends on the input signal.

y(n) = y_p(n) + y_h(n)

a) Finding the homogeneous solution of the difference equation (1):

∑_{k=0}^{N} a_k y(n−k) = 0

We assume a solution of the form y_h(n) = A α^n. Then

∑_{k=0}^{N} a_k A α^{n−k} = 0 ⇒ ∑_{k=0}^{N} a_k α^{−k} = 0  - the characteristic equation.

Let α_k, k = 1, …, N, be the roots of the characteristic equation.
1) If the α_k are distinct roots, then y_h(n) = A_1 α_1^n + A_2 α_2^n + … + A_N α_N^n.
2) If α_1 has multiplicity P_1, the remaining N − P_1 roots being distinct, then:
y_h(n) = A_1 α_1^n + A_2 n α_1^n + … + A_{P_1} n^{P_1−1} α_1^n + A_{P_1+1} α_2^n + … + A_N α_{N−P_1+1}^n
Example I
Consider the equation:
y(n) − (13/12) y(n−1) + (3/8) y(n−2) − (1/24) y(n−3) = 0, with initial conditions
y(−1) = 6, y(−2) = 6, y(−3) = −2.
Find the homogeneous solution of the equation.

The characteristic equation is
1 − (13/12) α^{−1} + (3/8) α^{−2} − (1/24) α^{−3} = 0
⇒ α^3 − (13/12) α^2 + (3/8) α − 1/24 = 0
⇒ α_1 = 1/2, α_2 = 1/3, α_3 = 1/4

y_h(n) = A_1 α_1^n + A_2 α_2^n + A_3 α_3^n ⇒
y_h(n) = A_1 (1/2)^n + A_2 (1/3)^n + A_3 (1/4)^n

n = −1 ⇒ y_h(−1) = 6 = A_1 (1/2)^{−1} + A_2 (1/3)^{−1} + A_3 (1/4)^{−1} = 2A_1 + 3A_2 + 4A_3
n = −2 ⇒ y_h(−2) = 6 = 4A_1 + 9A_2 + 16A_3
n = −3 ⇒ y_h(−3) = −2 = 8A_1 + 27A_2 + 64A_3

The system
2A_1 + 3A_2 + 4A_3 = 6
4A_1 + 9A_2 + 16A_3 = 6
8A_1 + 27A_2 + 64A_3 = −2
has the solution A_1 = 7, A_2 = −10/3, A_3 = 1/2.

⇒ y_h(n) = 7 (1/2)^n − (10/3) (1/3)^n + (1/2) (1/4)^n

Example II
Consider the equation:
y(n) − (5/4) y(n−1) + (1/2) y(n−2) − (1/16) y(n−3) = 0, with initial conditions
y(−1) = 6, y(−2) = 6, y(−3) = −2.
Find the homogeneous solution of the equation.

Solution:
The characteristic equation is 1 − (5/4) α^{−1} + (1/2) α^{−2} − (1/16) α^{−3} = 0
⇒ α^3 − (5/4) α^2 + (1/2) α − 1/16 = 0 ⇒ α_1 = α_2 = 1/2, α_3 = 1/4

Since α = 1/2 is a double root:
y_h(n) = A_1 (1/2)^n + A_2 n (1/2)^n + A_3 (1/4)^n

n = −1, y(−1) = 6:
y_h(−1) = A_1 (1/2)^{−1} + A_2 (−1)(1/2)^{−1} + A_3 (1/4)^{−1} = 2A_1 − 2A_2 + 4A_3
n = −2, y(−2) = 6:
y_h(−2) = A_1 (1/2)^{−2} + A_2 (−2)(1/2)^{−2} + A_3 (1/4)^{−2} = 4A_1 − 8A_2 + 16A_3
n = −3, y(−3) = −2:
y_h(−3) = A_1 (1/2)^{−3} + A_2 (−3)(1/2)^{−3} + A_3 (1/4)^{−3} = 8A_1 − 24A_2 + 64A_3

The system
2A_1 − 2A_2 + 4A_3 = 6
4A_1 − 8A_2 + 16A_3 = 6
8A_1 − 24A_2 + 64A_3 = −2
has the solution A_1 = 9/2, A_2 = 5/4, A_3 = −1/8.

y_h(n) = (9/2) (1/2)^n + (5/4) n (1/2)^n − (1/8) (1/4)^n

b) Finding the particular solution of the difference equation (1)

The particular solution y_p(n) is obtained by first determining ŷ(n), the particular solution of the equation
∑_{k=0}^{N} a_k y(n−k) = x(n).
Using the superposition principle we can then write the final particular solution:
y_p(n) = ∑_{k=0}^{M} b_k ŷ(n−k)

• If x(n) = const ⇒ ŷ(n) = const; y_p(n) is a linear combination of x(n) and its delayed versions x(n−1), x(n−2), …
• If x(n) = β^n ⇒ ŷ(n) = k β^n; y_p(n) has the same form.
• If x(n) = sin Ω_0 n ⇒ x(n−k) = sin Ω_0(n−k) = sin Ω_0 n cos Ω_0 k − cos Ω_0 n sin Ω_0 k
  ⇒ y_p(n) = A sin Ω_0 n + B cos Ω_0 n
• If x(n) = cos Ω_0 n ⇒ x(n−k) = cos Ω_0(n−k) = cos Ω_0 n cos Ω_0 k + sin Ω_0 n sin Ω_0 k
  ⇒ y_p(n) = A cos Ω_0 n + B sin Ω_0 n

Example:
Consider the difference equation:
y(n) − (3/4) y(n−1) + (1/8) y(n−2) = 2 sin(nπ/2), with initial conditions
y(−1) = 2, y(−2) = 4.
y(n) = y_p(n) + y_h(n)

Solution:
Finding the particular solution:
y_p(n) = A sin(nπ/2) + B cos(nπ/2)
y_p(n−1) = A sin((n−1)π/2) + B cos((n−1)π/2) = −A cos(nπ/2) + B sin(nπ/2)
y_p(n−2) = A sin((n−2)π/2) + B cos((n−2)π/2) = −A sin(nπ/2) − B cos(nπ/2)

Substituting:
A sin(nπ/2) + B cos(nπ/2) − (3/4)[−A cos(nπ/2) + B sin(nπ/2)] + (1/8)[−A sin(nπ/2) − B cos(nπ/2)] = 2 sin(nπ/2)

(A − (3/4)B − (1/8)A) sin(nπ/2) + (B + (3/4)A − (1/8)B) cos(nπ/2) = 2 sin(nπ/2)

The system
A − (3/4)B − (1/8)A = 2
B + (3/4)A − (1/8)B = 0
has the solution A = 112/85, B = −96/85.

y_p(n) = (112/85) sin(nπ/2) − (96/85) cos(nπ/2)

Finding the homogeneous solution:
1 − (3/4)α^{−1} + (1/8)α^{−2} = 0 ⇒ α^2 − (3/4)α + 1/8 = 0 ⇒ α_1 = 1/4, α_2 = 1/2
⇒ y_h(n) = C α_1^n + D α_2^n = C (1/4)^n + D (1/2)^n

y(n) = y_p(n) + y_h(n) ⇒ y(n) = C (1/4)^n + D (1/2)^n + (112/85) sin(nπ/2) − (96/85) cos(nπ/2)

n = −1 ⇒ y(−1) = 2 = 4C + 2D + (112/85) sin(−π/2) − (96/85) cos(−π/2) = 4C + 2D − 112/85
n = −2 ⇒ y(−2) = 4 = 16C + 4D + (112/85) sin(−π) − (96/85) cos(−π) = 16C + 4D + 96/85

The system
4C + 2D − 112/85 = 2
16C + 4D + 96/85 = 4
has the solution C = −8/17, D = 13/5.

y(n) = −(8/17) (1/4)^n + (13/5) (1/2)^n + (112/85) sin(nπ/2) − (96/85) cos(nπ/2)
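A short numerical cross-check of the complete solution (a Python sketch, not in the original notes) compares the closed form against direct iteration of the difference equation:

```python
import math

def y_closed(n):
    # y(n) = -(8/17)(1/4)^n + (13/5)(1/2)^n + (112/85) sin(n pi/2) - (96/85) cos(n pi/2)
    return (-8/17) * 0.25**n + (13/5) * 0.5**n \
           + (112/85) * math.sin(n*math.pi/2) - (96/85) * math.cos(n*math.pi/2)

# iterate y(n) = (3/4) y(n-1) - (1/8) y(n-2) + 2 sin(n pi/2), y(-1)=2, y(-2)=4
y = {-2: 4.0, -1: 2.0}
for n in range(12):
    y[n] = 0.75*y[n-1] - 0.125*y[n-2] + 2*math.sin(n*math.pi/2)
    assert abs(y[n] - y_closed(n)) < 1e-9
print(round(y[0], 6), round(y[1], 6))  # 1.0 2.5
```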

Example II
Consider the difference equation
y(n) − (3/4) y(n−1) + (1/8) y(n−2) = x(n) + (1/2) x(n−1), with
x(n) = 2 sin(nπ/2). Find the particular solution of the difference equation.

By superposition, y′_p(n) = ŷ_p(n) + (1/2) ŷ_p(n−1), where ŷ_p is the particular solution found above:
ŷ_p(n) = (112/85) sin(nπ/2) − (96/85) cos(nπ/2)
ŷ_p(n−1) = (112/85) sin((n−1)π/2) − (96/85) cos((n−1)π/2)

y′_p(n) = (112/85) sin(nπ/2) − (96/85) cos(nπ/2) + (1/2)(112/85) sin((n−1)π/2) − (1/2)(96/85) cos((n−1)π/2)

Finding the response of the system when x(n) = δ(n)

∑_{k=0}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k δ(n−k)        (3)

δ(n) = 1 for n = 0, δ(n) = 0 for n ≠ 0;  δ(n−k) = 1 for n = k, δ(n−k) = 0 for n ≠ k.
With y(−1) = 0, y(−2) = 0, …, and n ≥ 0.

For n > M the right-hand side of eq. (3) is 0, so we have the homogeneous equation; the N initial conditions are y(M), y(M−1), …, y(M−N+1).
Since N ≥ M for a causal system, we only have to determine y(0), y(1), …, y(M).
For n = 0, …, M, with y(k) = 0 for k < 0, we get the following set of M+1 equations:

∑_{k=0}^{n} a_k y(n−k) = b_n,  n = 0, …, M

In matrix form:

| a_0    0     0   …   0  | | y(0) |   | b_0 |
| a_1   a_0    0   …   0  | | y(1) |   | b_1 |
| a_2   a_1   a_0  …   0  | | y(2) | = | b_2 |
|  …                      | |  …   |   |  …  |
| a_M  a_{M−1} …  …  a_0  | | y(M) |   | b_M |

For n > M, ∑_{k=0}^{N} a_k y(n−k) = 0 ⇒ y(n) = y_h(n).

Example
y(n) − (3/4) y(n−1) + (1/8) y(n−2) = δ(n) + (1/2) δ(n−1)

Solution:

N = 2; M = 1.
n ≥ 2 ⇒ homogeneous equation: y(n) − (3/4) y(n−1) + (1/8) y(n−2) = 0
1 − (3/4)α^{−1} + (1/8)α^{−2} = 0 ⇒ α_1 = 1/2, α_2 = 1/4 ⇒
y_h(n) = A_1 (1/2)^n + A_2 (1/4)^n

| a_0   0  | | y(0) |   | b_0 |        |  1    0 | | y(0) |   |  1  |
| a_1  a_0 | | y(1) | = | b_1 |   ⇒   | −3/4  1 | | y(1) | = | 1/2 |

⇒ y(0) = 1,  y(1) = 1/2 + (3/4)·1 = 5/4

y_h(0) = y(0) = 1 ⇒ A_1 + A_2 = 1
y_h(1) = y(1) = 5/4 ⇒ (1/2)A_1 + (1/4)A_2 = 5/4

A_1 + A_2 = 1              | ·(−1)
(1/2)A_1 + (1/4)A_2 = 5/4  | ·4  ⇒ 2A_1 + A_2 = 5
⇒ A_1 = 4, A_2 = −3

y_h(n) = 4 (1/2)^n − 3 (1/4)^n
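The impulse response just found can be verified by driving the difference equation directly (Python sketch, added for illustration):

```python
def h(n):
    # claimed impulse response: h(n) = 4(1/2)^n - 3(1/4)^n, n >= 0
    return 4 * 0.5**n - 3 * 0.25**n if n >= 0 else 0.0

delta = lambda n: 1.0 if n == 0 else 0.0
y = {-1: 0.0, -2: 0.0}               # zero initial conditions
for n in range(10):
    # y(n) = (3/4) y(n-1) - (1/8) y(n-2) + d(n) + (1/2) d(n-1)
    y[n] = 0.75*y[n-1] - 0.125*y[n-2] + delta(n) + 0.5*delta(n-1)
    assert abs(y[n] - h(n)) < 1e-12
print(y[0], y[1])  # 1.0 1.25
```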
2 4
Proof:
n = 0: y(0) − (3/4) y(−1) + (1/8) y(−2) = δ(0) + (1/2) δ(−1); with y(−1) = y(−2) = 0 ⇒ y(0) = 1
n = 1: y(1) − (3/4) y(0) + (1/8) y(−1) = δ(1) + (1/2) δ(0) ⇒ y(1) = 1/2 + 3/4 = 5/4

Response of the discrete time system to x(n) = u(n):
y(n) = ∑_{k=0}^{∞} x(k) h(n−k)
k =0

u(k) = 1 for k ≥ 0, 0 for k < 0;   u(n−k) = 1 for n ≥ k, 0 for n < k.
Take x(n) = α^n u(n), h(n) = β^n u(n).

y(n) = ∑_{k=0}^{∞} α^k u(k) β^{n−k} u(n−k) = β^n ∑_{k=0}^{n} (α β^{−1})^k, with y(n) = 0 for n < 0.

• n ≥ 0, α = β ⇒ y(n) = β^n ∑_{k=0}^{n} 1 = (n+1) β^n
• n ≥ 0, α ≠ β ⇒ y(n) = β^n ∑_{k=0}^{n} (αβ^{−1})^k = β^n (1 − (αβ^{−1})^{n+1}) / (1 − αβ^{−1}) = (α^{n+1} − β^{n+1}) / (α − β)
• α = 1 ⇒ y(n) = (1 − β^{n+1}) / (1 − β)
If x(n) = u(n), the step response is s(n) = ∑_{k=0}^{n} h(k).

Example: h(n) = (1/3)^n u(n),
x(n) = 2 for 0 ≤ n ≤ 5, 1 for 6 ≤ n ≤ 9
x(n) = 2[u(n) − u(n−6)] + [u(n−6) − u(n−10)] = 2u(n) − u(n−6) − u(n−10)
⇒ y(n) = 2 s(n) − s(n−6) − s(n−10)

• 0 ≤ n ≤ 5: y(n) = ∑_{k=0}^{n} 2 (1/3)^k = 2 ∑_{k=0}^{n} (1/3)^k = 3 − (1/3)^n
• 6 ≤ n ≤ 9: y(n) = ∑ ……………
• n ≥ 10: y(n) = ∑ ……………
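The first segment above can be checked by brute-force convolution (Python sketch, added for verification, using the pulse levels 2 and 1 from the decomposition):

```python
h = [(1/3)**n for n in range(40)]        # h(n) = (1/3)^n u(n)
x = [2.0]*6 + [1.0]*4                    # x(n) = 2 for 0..5, 1 for 6..9

def y(n):
    # y(n) = sum_k x(k) h(n-k)
    return sum(x[k] * h[n-k] for k in range(len(x)) if 0 <= n-k < len(h))

# first segment: y(n) = 3 - (1/3)^n for 0 <= n <= 5
for n in range(6):
    assert abs(y(n) - (3 - (1/3)**n)) < 1e-12
```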

Discrete Time System (DTS)


Definition:
A DTS is a device or an algorithm that operates on a discrete time signal called the input (excitation), according to some well defined rules, to produce another discrete time signal called the output (response) of the system.
H
x(n) → y(n)
Example

Determine the response of the following system to the input signal:
x(n) = |n| for −3 ≤ n ≤ 3, 0 otherwise

a) y(n) = x(n)
b) y(n) = x(n−1)
c) y(n) = x(n+1)
d) y(n) = (1/3)[x(n+1) + x(n) + x(n−1)]
e) y(n) = max[x(n+1), x(n), x(n−1)]
f) y(n) = ∑_{k=−∞}^{n} x(k) = x(n) + x(n−1) + x(n−2) + …

n    …  −5 −4 −3 −2 −1  0  1  2  3  4  5 …
x(n)     0  0  3  2  1  0  1  2  3  0  0
a        0  0  3  2  1  0  1  2  3  0  0
b        0  0  0  3  2  1  0  1  2  3  0
c        0  3  2  1  0  1  2  3  0  0  0
d        0  1 5/3  2  1 2/3 1  2 5/3  1  0
e        0  3  3  3  2  1  2  3  3  3  0
f        0  0  3  5  6  6  7  9 12 12 12

d) n=0: y(0) = (1/3)[x(1)+x(0)+x(−1)] = (1/3)(1+0+1) ⇒ y(0) = 2/3
   n=1: y(1) = (1/3)[x(2)+x(1)+x(0)] = (1/3)(2+1+0) ⇒ y(1) = 1

e) max[x(n+1), x(n), x(n−1)]
   n=0: y(0) = max[1, 0, 1] = 1
   n=1: y(1) = max[2, 1, 0] = 2

f) y(n) = ∑_{k=−∞}^{n} x(k)
   n=0: y(0) = 6
   n=1: y(1) = 7
   y(n) = ∑_{k=−∞}^{n} x(k) = ∑_{k=−∞}^{n−1} x(k) + x(n) = y(n−1) + x(n)
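The table rows can be regenerated programmatically (Python sketch, added as a check):

```python
x = dict(zip(range(-3, 4), [3, 2, 1, 0, 1, 2, 3]))
xv = lambda n: x.get(n, 0)

d = lambda n: (xv(n+1) + xv(n) + xv(n-1)) / 3    # 3-point moving average
e = lambda n: max(xv(n+1), xv(n), xv(n-1))       # moving maximum

assert d(0) == 2/3 and d(1) == 1.0
assert e(0) == 1 and e(1) == 2

# running sum f: y(n) = y(n-1) + x(n)
f, acc = {}, 0
for n in range(-5, 6):
    acc += xv(n)
    f[n] = acc
assert f[0] == 6 and f[1] == 7
```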

Building blocks for the implementation of a DTS (discrete time system)

Adder:
Figure: inputs x_1(n), …, x_K(n) summed into y(n)
y(n) = ∑_{k=1}^{K} x_k(n)

Multiplier with a constant:
x(n) ──[α]──► y(n),   y(n) = α x(n)

Unit delay:
x(n) ──[z^{−1}]──► x(n−1)

Advance unit:
x(n) ──[z]──► x(n+1)
Example:
Using the basic building blocks, sketch the block diagram representation of a discrete time system described by the equation
y(n) = (1/4) y(n−1) + (1/2) x(n) + (1/2) x(n−1), where x(n) is the input signal and y(n) is the output signal.

x(n) ──[DTS]──► y(n)

Solution:
METHOD 1: implement the equation directly:
y(n) = (1/2) x(n) + (1/2) x(n−1) + (1/4) y(n−1)
Figure: x(n) and its z^{−1} delay are each scaled by 1/2 and summed; the output y(n) is fed back through a z^{−1} delay and a 1/4 multiplier into the adder.

METHOD 2:
y(n) = (1/2)[x(n) + x(n−1)] + (1/4) y(n−1)
Figure: x(n) and its z^{−1} delay are added first, then scaled once by 1/2; the feedback path is y(n) through z^{−1} and a 1/4 multiplier.

Clock frequency: x(n) = n or τ(n) = n

Function generator:
Figure: inputs x_1(n), …, x_k(n) into block f, output y(n)
y(n) = f[x_1(n), x_2(n), …, x_k(n)]

Example 1
Figure

Example 2
y(k) = u(k) + α y(k−1);  u(k) = 0 for k < 0, u(k) = 1 for k ≥ 0;  y(0) = 1
k=1: y(1) = 1 + α y(0) = 1 + α
k=2: y(2) = 1 + α y(1) = 1 + α + α²
…
y(k) = 1 + α + α² + … + α^k = (1 − α^{k+1})/(1 − α) when α ≠ 1, and k + 1 when α = 1.

y(k) = α y(k−1) + 1 = α y(k−1) + u(k)
y(k) + b_1 y(k−1) + b_2 y(k−2) + … + b_n y(k−n) = u(k)
- the difference equation of a DTS.
Structures of a DTS

- Recursive system
- in the time domain:
∑_{k=0}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k x(n−k)

M ≤ N for a causal system.

It is difficult to work in the time domain, so we apply the Z transform:
x(n) ↔ X(z)
x(n−k) ↔ z^{−k} X(z)

∑_{k=0}^{N} a_k z^{−k} Y(z) = ∑_{k=0}^{M} b_k z^{−k} X(z)

X(z) ──[H(z)]──► Y(z)

H(z) = Y(z)/X(z) = (∑_{k=0}^{M} b_k z^{−k}) / (∑_{k=0}^{N} a_k z^{−k})  - the transfer function

- Non-recursive system

With a_0 = 1:
y(n) + ∑_{k=1}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k x(n−k)
y(n) = −∑_{k=1}^{N} a_k y(n−k) + ∑_{k=0}^{M} b_k x(n−k)

Passing to the Z transform:
Y(z) = −∑_{k=1}^{N} a_k z^{−k} Y(z) + ∑_{k=0}^{M} b_k z^{−k} X(z)

H(z) = Y(z)/X(z) = (∑_{k=0}^{M} b_k z^{−k}) / (1 + ∑_{k=1}^{N} a_k z^{−k}), where H(z) is the transfer function.

H(z) = H_1(z) · H_2(z)
H_1(z) = ∑_{k=0}^{M} b_k z^{−k} gives all the zeros of the transfer function H(z).
H_2(z) = 1 / (1 + ∑_{k=1}^{N} a_k z^{−k}) gives all the poles of the transfer function H(z).

Observation:
Y(z) = H(z) X(z) = H_1(z) H_2(z) X(z)

Implementation

-Direct Form 1 for implementation of DTS


V ( z ) = H1 ( z) ⋅ X ( z)
Y ( z ) =V ( z ) ⋅ H 2 ( z )
-Direct Form 2 for implementation of DTS
W ( z) = H 2 ( z) ⋅ X ( z)
Y ( z ) =W ( z ) ⋅ H 1 ( z )

Particular cases
- first order structures:
y(n) = −a_1 y(n−1) + b_0 x(n) + b_1 x(n−1) ⇒ H(z) = (b_0 + b_1 z^{−1}) / (1 + a_1 z^{−1})

Direct form 1:
v(n) = b_0 x(n) + b_1 x(n−1)
y(n) = −a_1 y(n−1) + v(n)
Figure: Direct form I
- 2 adders
- 2 unit delays
- 3 multipliers

Direct form 2 (regular):
w(n) = x(n) − a_1 w(n−1)
y(n) = b_0 w(n) + b_1 w(n−1)
Figure: Direct form II
- 2 adders
- 2 unit delays
- 3 multipliers

⇒ Figure: sharing the delay register gives
- 2 adders
- 1 unit delay
- 3 multipliers
- second order structures:

y(n) = −a_1 y(n−1) − a_2 y(n−2) + b_0 x(n) + b_1 x(n−1) + b_2 x(n−2)

⇒ H(z) = (b_0 + b_1 z^{−1} + b_2 z^{−2}) / (1 + a_1 z^{−1} + a_2 z^{−2})

Direct form 1:
Figure
v(n) = b_0 x(n) + b_1 x(n−1) + b_2 x(n−2)
y(n) = −a_2 y(n−2) − a_1 y(n−1) + v(n)

Direct form 2:
Figure
w(n) = x(n) − a_1 w(n−1) − a_2 w(n−2)
y(n) = b_0 w(n) + b_1 w(n−1) + b_2 w(n−2)
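The direct form 2 equations translate almost line by line into code. Below is a minimal sketch (Python, illustrative only) of a second order direct form II section; the example coefficients are borrowed from the earlier difference equation y(n) = (3/4)y(n−1) − (1/8)y(n−2) + x(n):

```python
def df2_second_order(x, b, a):
    """Direct form II: b = (b0, b1, b2), a = (a1, a2) with
    y(n) = -a1 y(n-1) - a2 y(n-2) + b0 x(n) + b1 x(n-1) + b2 x(n-2)."""
    b0, b1, b2 = b
    a1, a2 = a
    w1 = w2 = 0.0                    # delay registers: w(n-1), w(n-2)
    out = []
    for xn in x:
        w0 = xn - a1*w1 - a2*w2      # w(n) = x(n) - a1 w(n-1) - a2 w(n-2)
        out.append(b0*w0 + b1*w1 + b2*w2)
        w1, w2 = w0, w1
    return out

# impulse response of y(n) = (3/4) y(n-1) - (1/8) y(n-2) + x(n)
h = df2_second_order([1.0, 0, 0, 0, 0], (1.0, 0.0, 0.0), (-0.75, 0.125))
print(h[:3])  # [1.0, 0.75, 0.4375]
```

Only one chain of delay registers is needed, which is exactly the advantage of direct form 2 over direct form 1.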

Signal Graph of direct form 2

Figura

y ( n) = b0 x (n) + w( n)
w1 (n) = b1 w(n −1) − a1 y (n −1)
w2 (n) = b2 x(n − 2) − a 2 y ( n − 2)

Transpose graph (overturn the previous)


Figura

Implementation of transpose graph


Figura

Signal graph (obtained by overturning once more)


Figura
y ( n) = b0 w( n) + x ( n)
w(n) = b1 w( n −1) − a1 y (n −1)
w1 ( n) = b2 x (n − 2) − a 2 y ( n − 2)

Transpose Direct Form 2 (for reducing the summation circuits)

Figura

Direct Form 1 implementation:

Figure

v(n) = ∑_{k=0}^{M} b_k x(n−k)
y(n) = v(n) − ∑_{k=1}^{N} a_k y(n−k)

Direct Form 2 (regular):
Figure

w(n) = x(n) − ∑_{k=1}^{N} a_k w(n−k)
y(n) = ∑_{k=0}^{M} b_k w(n−k)

with degree M ≤ degree N.

Transpose Direct Form 2


Figura
Homework:
Given the transfer function H(z), find its regular form and transpose
1 + 2 z −1 + z −2
H ( z) =
1 − 0.75 z −1 + 0.125 z −2
Hint! See the coefficients; make implementation

METHOD 3:
Cascade form

H(z) = (∑_{k=0}^{M} b_k z^{−k}) / (1 + ∑_{k=1}^{N} a_k z^{−k});  (the numerator has degree M, the denominator degree N)

If a_k, b_k ∈ R, then the complex poles and zeros occur in conjugate pairs:

H(z) = [ ∏_{k=1}^{M1} (1 − g_k z^{−1}) · ∏_{k=1}^{M2} (1 − b_k z^{−1})(1 − b_k* z^{−1}) ] / [ ∏_{k=1}^{N1} (1 − e_k z^{−1}) · ∏_{k=1}^{N2} (1 − d_k z^{−1})(1 − d_k* z^{−1}) ]

where M = M1 + 2M2
      N = N1 + 2N2
e_k - real poles
g_k - real zeros
b_k, b_k* - complex zeros
d_k, d_k* - complex poles

For a causal system |e_k| < 1 and |d_k| < 1 (so that the ROC of H(z) contains the unit circle).

H(z) = H_1(z) · H_2(z) · ∏_{k=1}^{Ns} (b_{k0} + b_{k1} z^{−1} + b_{k2} z^{−2}) / (1 − a_{k1} z^{−1} − a_{k2} z^{−2}) = H_1(z) · H_2(z) · H_3(z),
where:
H_1(z) = ∏_{k=1}^{M1} (1 − g_k z^{−1})   (the real zeros)
H_2(z) = ∏_{k=1}^{N1} 1/(1 − e_k z^{−1})   (the real poles)
N_s = min(N2, M2)

Figura

Implementation of H 3 ( z)

Figura

Homework:
Implement the structure of the DTS described by the equation
H(z) = (1 + 2z^{−1} + z^{−2}) / (1 − 0.75 z^{−1} + 0.125 z^{−2}) = (1 + z^{−1})² / [(1 − 0.5 z^{−1})(1 − 0.25 z^{−1})]

Solution
H(z) = H_1(z) · H_2(z)
H_1(z) = (1 + z^{−1}) / (1 − 0.5 z^{−1}),   H_2(z) = (1 + z^{−1}) / (1 − 0.25 z^{−1})

a) Direct Form 2 - Figure

b) Direct Form 1 - subsections:
Figure
W(z) = H_1(z) X(z)
Y(z) = W(z) H_2(z)

c) Direct Form 2 - subsections:
V(z) = H_2(z) X(z)
Y(z) = V(z) H_1(z)
Parallel Form Structure

H(z) = C + ∑_{k=1}^{N} A_k / (1 − p_k z^{−1}) = C + ∑_{k=1}^{N} H_k(z)

p_k = poles of H_k(z)
A_k = coefficients in the partial fraction expansion

Structure of a second order section in the parallel structure:

H_k(z) = (b_{k0} + b_{k1} z^{−1}) / (1 + a_{k1} z^{−1} + a_{k2} z^{−2})
W_k(n) = x(n) − a_{k1} W_k(n−1) − a_{k2} W_k(n−2)
Y_k(n) = b_{k0} W_k(n) + b_{k1} W_k(n−1)

H(z) = C + ∑_{k=1}^{K} H_k(z), where K is the integer part of (N+1)/2.

In general:
H(z) = ∑_{k=0}^{Np} C_k z^{−k} + ∑_{k=1}^{N1} A_k/(1 − c_k z^{−1}) + ∑_{k=1}^{N2} B_k(1 − e_k z^{−1}) / [(1 − d_k z^{−1})(1 − d_k* z^{−1})]
N = N1 + 2N2

- if M ≥ N then Np = M − N
- if a_k, b_k ∈ R then A_k, B_k, C_k, c_k, e_k ∈ R and the system function can be interpreted as representing a parallel combination of first and second order systems with Np possible delay paths.

Second method:
Grouping the real poles in pairs:

H(z) = ∑_{k=0}^{Np} C_k z^{−k} + ∑_{k=1}^{Ns} (e_{0k} + e_{1k} z^{−1}) / (1 − a_{1k} z^{−1} − a_{2k} z^{−2}), where Ns is the integer part of (N+1)/2.

The general difference equations for the parallel form with second order direct form II sections are:
W_k(n) = a_{1k} W_k(n−1) + a_{2k} W_k(n−2) + x(n),  k = 1, …, Ns
y_k(n) = e_{0k} W_k(n) + e_{1k} W_k(n−1)
y(n) = ∑_{k=0}^{Np} C_k x(n−k) + ∑_{k=1}^{Ns} y_k(n)

Example: parallel form structure for a sixth order system with real and complex poles grouped in pairs (N = M = 6).

E.g. H(z) = (1 + 2z^{−1} + z^{−2}) / (1 − 0.75 z^{−1} + 0.125 z^{−2}) = 8 + (−7 + 8z^{−1}) / (1 − 0.75 z^{−1} + 0.125 z^{−2})

a) parallel structure using a 2nd order section;
b) H(z) = 8 + 18/(1 − 0.5 z^{−1}) − 25/(1 − 0.25 z^{−1})
c) parallel structure using first order sections.
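That the parallel decomposition in b) equals the original H(z) can be confirmed by comparing impulse responses (Python sketch, added for verification):

```python
def h_direct(N):
    # y(n) = 0.75 y(n-1) - 0.125 y(n-2) + x(n) + 2 x(n-1) + x(n-2), unit sample input
    d = lambda n: 1.0 if n == 0 else 0.0
    y = {-1: 0.0, -2: 0.0}
    for n in range(N):
        y[n] = 0.75*y[n-1] - 0.125*y[n-2] + d(n) + 2*d(n-1) + d(n-2)
    return [y[n] for n in range(N)]

def h_parallel(N):
    # H(z) = 8 + 18/(1 - 0.5 z^-1) - 25/(1 - 0.25 z^-1)
    return [(8.0 if n == 0 else 0.0) + 18*0.5**n - 25*0.25**n
            for n in range(N)]

assert all(abs(u - v) < 1e-12 for u, v in zip(h_direct(20), h_parallel(20)))
```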
c)
Parallel structure using first order system
Example: Determine the cascade and parallel structures for the system described by the transfer function

H(z) = 10 (1 − (1/2)z^{−1})(1 − (2/3)z^{−1})(1 + 2z^{−1}) / [ (1 − (3/4)z^{−1})(1 − (1/8)z^{−1})(1 − (1/2 + j/2)z^{−1})(1 − (1/2 − j/2)z^{−1}) ]

a) Cascade implementation:
H(z) = 10 H_1(z) H_2(z)
H_1(z) = (1 − (2/3)z^{−1}) / (1 − (7/8)z^{−1} + (3/32)z^{−2})
H_2(z) = (1 + (3/2)z^{−1} − z^{−2}) / (1 − z^{−1} + (1/2)z^{−2})

b) Parallel implementation:
H(z) = A_1/(1 − (3/4)z^{−1}) + A_2/(1 − (1/8)z^{−1}) + A_3/(1 − (1/2 + j/2)z^{−1}) + A_3*/(1 − (1/2 − j/2)z^{−1})

A_1 = 2.93;  A_3 = 12.25 − 14.57j
A_2 = −17.68;  A_3* = 12.25 + 14.57j

H(z) = (−14.75 − 12.91z^{−1}) / (1 − (7/8)z^{−1} + (3/32)z^{−2}) + (24.5 + 26.82z^{−1}) / (1 − z^{−1} + (1/2)z^{−2})

Exam problems:

1) Obtain the direct form I, direct form II, cascade, parallel and lattice structures
for the following systems
3 1 1
a) y (n) = y (n − 1) − y (n − 2) + x(n) + x(n − 1)
4 8 3
b) y (n) = −0.1y (n − 1) + 0.72 y (n − 2) + 0.7 x(n) − 0.252 x(n − 2)
c) y (n) = −0.1y (n − 1) + 0.2 y (n − 2) + 3x(n) + 3.6 x(n − 1) + 0.6 x(n − 2)
1 1
d) y (n) = y (n − 1) + y (n − 2) + x(n) + x(n − 1)
2 4
1
e) y (n) = y (n − 1) − y (n − 2) + x(n) − x(n − 1) + x(n − 2)
2
f) H(z) = 2(1 − z^{−1})(1 + 2z^{−1} + z^{−2}) / [(1 + 0.5z^{−1})(1 − 0.9z^{−1} + 0.81z^{−2})]

2) For H(z) = 1 + 2.88z^{−1} + 3.404z^{−2} + 1.74z^{−3} + 0.4z^{−4}, sketch the direct form and lattice structures and find the corresponding input-output implementations.
3) Determine a direct form of implementation for the systems
a) h(n) = { 1, 2,3, 4,3, 2,1}
b) h(n) = { 1, 2,3,3, 2,1}
4) Determine the transfer function and the impulse response of the systems
drawn below:

a)

b)
Lattice structure

H(z) = 1 / (1 + ∑_{k=1}^{N} a_N(k) z^{−k}) = 1 / A_N(z)

y(n) = −∑_{k=1}^{N} a_N(k) y(n−k) + x(n)

For an FIR (all zero) system:
y(n) = ∑_{k=0}^{M−1} b_k x(n−k);   x(n) → h(n) → y(n)
Y(z) = ∑_{k=0}^{M−1} b_k z^{−k} X(z);   X(z) → H(z) → Y(z)

H(z) = Y(z)/X(z) = ∑_{k=0}^{M−1} b_k z^{−k}

The response of the system to the unit sample is identical to the coefficients b_k, i.e.:
h(n) = {b_n, 0 ≤ n ≤ M−1}

a) Direct form structure (transversal structure)

y(n) = ∑_{k=0}^{M−1} h(k) x(n−k)

M−1 memory locations for storing the M−1 previous inputs;
M multipliers;
M−1 adders (combining the weighted past values with the weighted current value of the input).

If h(n) = h(M−n−1) the system is SYMMETRIC;
if h(n) = −h(M−n−1) the system is ANTISYMMETRIC.

For a symmetric system (with a middle tap at k = M/2):

y(n) = ∑_{k=0}^{M−1} h(k) x(n−k) = ∑_{k=0}^{M/2−1} h(k) x(n−k) + h(M/2) x(n − M/2) + ∑_{k=M/2+1}^{M−1} h(k) x(n−k)

Using h(k) x(n−k) ≡ h(M−k) x(n − M + k), the two partial sums pair up:

y(n) = ∑_{k=0}^{M/2−1} h(k) [x(n−k) + x(n−M+k)] + h(M/2) x(n − M/2)

⇒ only about M/2 multiplications are needed.
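The halving of multiplications for a symmetric system can be demonstrated with a small sketch (Python, illustrative; it assumes an even number of taps, so there is no middle term):

```python
def fir_direct(h, x, n):
    # y(n) = sum_{k=0}^{M-1} h(k) x(n-k)
    return sum(h[k] * x.get(n - k, 0.0) for k in range(len(h)))

def fir_symmetric(h, x, n):
    # exploits h(k) = h(M-1-k): M/2 multiplications instead of M
    M = len(h)
    return sum(h[k] * (x.get(n - k, 0.0) + x.get(n - (M - 1 - k), 0.0))
               for k in range(M // 2))

h = [1.0, 2.0, 2.0, 1.0]                   # symmetric taps, M = 4
x = {n: float(n + 1) for n in range(8)}    # arbitrary test input
assert all(fir_direct(h, x, n) == fir_symmetric(h, x, n) for n in range(12))
```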

b) Cascade form structure

H(z) = ∏_{k=1}^{K} H_k(z)

Example:
A fourth order section in a cascade implementation of a symmetric (linear phase) system:

H_k(z) = c_{k0} (1 − z_k z^{−1})(1 − z_k* z^{−1})(1 − z^{−1}/z_k)(1 − z^{−1}/z_k*) = c_{k0} + c_{k1} z^{−1} + c_{k2} z^{−2} + c_{k1} z^{−3} + c_{k0} z^{−4}

c) Lattice structure
H_m(z) = A_m(z); m = 0, …, M−1

The unit sample response of the m-th system is:
h_m(0) = 1
h_m(k) = α_m(k), k = 1, …, m
y(n) = x(n) + ∑_{k=1}^{m} α_m(k) x(n−k)

Direct form structures:

a) tap coefficients: α_m(1), α_m(2), …, α_m(m−1), α_m(m)

b) tap coefficients: −α_m(1), −α_m(2), …, −α_m(m−1), −α_m(m)

Analog Filters (continuous time)

An ideal frequency selective filter passes certain frequencies without any change, while it stops the other frequencies completely.

Types of ideal filters:

LPF (Low Pass Filter)
HPF (High Pass Filter)
PBF (Pass Band Filter)
SBF (Stop Band Filter)

Types of real filters:

LPF:  1 − δ_1 ≤ |H(jω)| ≤ 1 + δ_1 for |ω| ≤ ω_p
      |H(jω)| ≤ δ_2 for |ω| ≥ ω_s
δ_1, δ_2 - ripple

Bode Diagrams:

H(s) = c ∏_{k=1}^{m} (s − z_k)^{l_{zk}} / ∏_{k=1}^{n} (s − p_k)^{l_{pk}}

With s = jω:

H(jω) = c ∏_{k=1}^{m} (jω − z_k)^{l_{zk}} / ∏_{k=1}^{n} (jω − p_k)^{l_{pk}}

|H(jω)|_dB = 20 log|H(jω)| = 20 [ log c + ∑_{k=1}^{m} l_{zk} log|jω − z_k| − ∑_{k=1}^{n} l_{pk} log|jω − p_k| ]

A factor of √2 between two magnitudes corresponds to 20 log √2 = 10 log 2 ≈ 3 dB.

H_n(jω) = H(jω/ω_0) - normalized transfer function

octave: ω_a/ω_b = 2;  decade: ω_a/ω_b = 10

|H_n(jω/ω_0)|_dB = 20 log|H(jω/ω_0)|

For a factor with |H(jω)| proportional to ω:
ω_a/ω_b = 10 ⇒ |H(jω_a)|_dB − |H(jω_b)|_dB = 20 log|H(jω_a)| − 20 log|H(jω_a/10)| = 20 log 10 = 20 dB per decade
ω_a/ω_b = 2 ⇒ |H(jω_a)|_dB − |H(jω_b)|_dB = 20 log 2 ≈ 6 dB per octave

Example:
1) H(jω) = k
H_n(jω) = H(jω/ω_0) = k ⇒ |H_n(jω)| = |k|
arg H_n(jω) = 0 for k > 0; π for k < 0
|H_n(jω)|_dB = 20 log|k| dB
Figures: |H_n(jω)|_dB and arg H_n(jω)

2) H(s) = s^k
With s = jω:
H(jω) = (jω)^k = j^k ω^k = ω^k e^{jkπ/2}

Figure

H_n(jω) = H(jω/ω_0) = (ω/ω_0)^k e^{jkπ/2}, so |H_n(jω)|_dB = 20k log(ω/ω_0) and arg H_n(jω) = kπ/2.

Study of the digital filters

1. Butterworth filters
We study the transfer function
H(s) = 1/(1 + s^n); substitution s = jω:
H(jω) = 1/(1 + (jω)^n)  - transfer function
|H(jω)| = 1/√(1 + ω^{2n})  - magnitude transfer function
|H(jω)|² = 1/(1 + ω^{2n})  - squared magnitude transfer function

Figures: magnitude plots.

Maximally flat approximation:

d^k |H(jω)| / dω^k |_{ω=0} = 0,  k = 1, …, n−1
|H(ω)| = 1 − (1/2) ω^{2n} + (3/8) ω^{4n} − …

At ω = 1: 20 log|H(1)| = 20 log(1/√2) = −10 log 2 ≈ −3 dB

|H(ω)|² = H(s)H(−s)|_{s=jω} = 1/(1 + ω^{2n})|_{ω=s/j} = 1/(1 + (s/j)^{2n})

The poles of |H(ω)|² are:

1 + (s/j)^{2n} = 0 ⇒ (s/j)^{2n} = −1 = e^{j(2k−1)π},  k = 0, 1, …, 2n−1

s_k = e^{j(2k−1+n)π/(2n)},  k = 0, …, 2n−1  - poles of the squared magnitude transfer function

s_k = cos((2k−1+n)π/(2n)) + j sin((2k−1+n)π/(2n)) = cos((2k−1)π/(2n) + π/2) + j sin((2k−1)π/(2n) + π/2) =
    = −sin((2k−1)π/(2n)) + j cos((2k−1)π/(2n))

s_k = σ_k + jω_k, where  σ_k = −sin((2k−1)π/(2n)),  ω_k = cos((2k−1)π/(2n))

Figure

Example:
n = 3
|H(ω)|² = 1/(1 + ω^6),  k = 0, …, 5
s_k = e^{j(2k−1+n)π/(2n)} = e^{j(2k+2)π/6}

k = 0:  s_0 = e^{jπ/3} = 1/2 + j√3/2
k = 1:  s_1 = e^{j2π/3} = −1/2 + j√3/2
k = 2:  s_2 = e^{jπ} = −1
k = 3:  s_3 = e^{j4π/3} = −1/2 − j√3/2
k = 4:  s_4 = e^{j5π/3} = 1/2 − j√3/2
k = 5:  s_5 = e^{j2π} = 1

Taking the poles from the left half plane:

H(s) = 1/[(s − s_1)(s − s_2)(s − s_3)] = 1/{[s − (−1/2 + j√3/2)] [s − (−1/2 − j√3/2)] (s + 1)} = 1/[(s + 1)(s² + s + 1)]

TABLE 1
n | Butterworth polynomials (factored form)
1 | s + 1
2 | s² + 1.4142 s + 1
3 | (s + 1)(s² + s + 1)
4 | (s² + 0.76536 s + 1)(s² + 1.84776 s + 1)
5 | (s + 1)(s² + 0.6180 s + 1)(s² + 1.6180 s + 1)
6 | (s² + 0.5176 s + 1)(s² + 1.4142 s + 1)(s² + 1.9319 s + 1)
7 | (s + 1)(s² + 0.4450 s + 1)(s² + 1.2470 s + 1)(s² + 1.8019 s + 1)
8 | (s² + 0.3902 s + 1)(s² + 1.1111 s + 1)(s² + 1.6629 s + 1)(s² + 1.9616 s + 1)

H(s) = 1/Δ_n(s);  Δ_n(s) = a_n s^n + a_{n−1} s^{n−1} + … + a_1 s + 1
TABLE 2
Coefficients of Butterworth polynomials
n |  a1      a2      a3      a4      a5      a6     a7     a8
1 |  1
2 |  1.414   1
3 |  2       2       1
4 |  2.613   3.414   2.613   1
5 |  3.236   5.236   5.236   3.236   1
6 |  3.864   7.464   9.141   7.464   3.864   1
7 |  4.494   10.103  14.606  14.606  10.103  4.494  1
8 |  5.126   13.128  21.128  25.691  21.128  13.128 5.126  1

Exercise:
Design of a low-pass Butterworth filter

Specifications (see figure):
|H(ω)| ≥ 1 − δ_1 for ω ≤ ω_p
|H(ω)| ≤ δ_2 for ω ≥ ω_s

With s = jω, H(s) → H(jω); the normalized filter is H_n(jω) = H(jω/ω_c).

|H(ω_p/ω_c)| = 1/√(1 + (ω_p/ω_c)^{2n}) = 1 − δ_1        (a)
|H(ω_s/ω_c)| = 1/√(1 + (ω_s/ω_c)^{2n}) = δ_2            (b)

⇒ (ω_p/ω_c)^{2n} = 1/(1 − δ_1)² − 1
⇒ (ω_s/ω_c)^{2n} = 1/δ_2² − 1

Dividing the two relations:
(ω_p/ω_s)^{2n} = [1 − (1 − δ_1)²] δ_2² / [(1 − δ_1)² (1 − δ_2²)]

⇒ n = log{ [1 − (1 − δ_1)²] δ_2² / [(1 − δ_1)² (1 − δ_2²)] } / [2 log(ω_p/ω_s)]        (c)

n must be an integer; choose the nearest greater integer value.

If ω_c is determined from equation (a), the PB specifications are met exactly, whereas the SB specifications are exceeded.
If ω_c is determined from equation (b), the SB specifications are met exactly, whereas the PB specifications are exceeded.

Steps in finding H(s):

1. Determine n from equation (c) using the values of δ_1, δ_2, ω_p, ω_s and round up to the nearest greater integer.
2. Determine ω_c from equation (a) or (b).
3. For the value of n determined at step 1, determine the denominator Δ_n(s) of H(s) using Table 1 or Table 2.
4. Find H_n(s) by replacing s with s/ω_c.

Example:
Design a BF (Butterworth filter) to have attenuation of no more than 1 dB for ω ≤ 1000 rad/s and at least 10 dB for ω ≥ 5000 rad/s.

20 log(1 − δ_1) = −1 ⇒ δ_1 = 0.108749
20 log δ_2 = −10 ⇒ δ_2 = 0.31623
ω_p = 1000, ω_s = 5000 ⇒ n ≈ 1.10. Round up: n = 2 ⇒ H(s) = 1/(s² + √2 s + 1)

From equation (b) we have:
n = 2 ⇒ ω_c = 2886.75 rad/s

H_n(s/ω_c) = 1 / [(s/2886.75)² + √2 (s/2886.75) + 1] = 2886.75² / (s² + 4082.48 s + 2886.75²)

DESIGN METHOD

Given: - the maximum pass band attenuation (A_p)
       - the minimum stop band attenuation (A_s)
       - the pass band edge frequency (f_p)
       - the stop band edge frequency (f_s)

Find the transfer function H(s) of a Butterworth filter.

10 log[1 + (ω_p/ω_c)^{2n}] = A_p
10 log[1 + (ω_s/ω_c)^{2n}] = A_s

⇒ (ω_p/ω_c)^{2n} = 10^{0.1 A_p} − 1
⇒ (ω_s/ω_c)^{2n} = 10^{0.1 A_s} − 1

⇒ (ω_p/ω_s)^{2n} = (10^{0.1 A_p} − 1) / (10^{0.1 A_s} − 1)

⇒ n = (1/2) · log[(10^{0.1 A_p} − 1)/(10^{0.1 A_s} − 1)] / log(ω_p/ω_s) = log{[(10^{0.1 A_p} − 1)/(10^{0.1 A_s} − 1)]^{1/2}} / log(ω_p/ω_s)

Notations:

k = ω_p/ω_s = 2πf_p/(2πf_s) = f_p/f_s < 1  - selectivity parameter

d = [(10^{0.1 A_p} − 1)/(10^{0.1 A_s} − 1)]^{1/2} < 1  - discrimination factor

⇒ n = log d / log k,  n ∈ N*

⇒ ω_c = ω_p / (10^{0.1 A_p} − 1)^{1/(2n)}  - cut-off frequency
or
ω_c = ω_s / (10^{0.1 A_s} − 1)^{1/(2n)}

The poles of H(s) are:

s_k = −sin((2k−1)π/(2n)) + j cos((2k−1)π/(2n)),  k = 0, …, 2n−1
2n 2n

Example 2:
Given: A_p = 1 dB, f_p = 1 kHz; A_s = 30 dB, f_s = 5 kHz. Find H(s).

Solution:
k = f_p/f_s = 1·10³ / (5·10³) = 0.2

d = [(10^{0.1·1} − 1)/(10^{0.1·30} − 1)]^{1/2} = 1.6099·10^{−2}

n = log d / log k = 2.565 ⇒ choose n = 3

Poles:
s_k = −sin((2k−1)/n · π/2) + j cos((2k−1)/n · π/2),  k = 0, …, 5

Imposing the stability condition (left half plane poles) for H(s) ⇒ keep k = 1, 2, 3.


n | Chebyshev polynomials C_n(ω)
0 | 1
1 | ω
2 | 2ω² − 1
3 | 4ω³ − 3ω
4 | 8ω⁴ − 8ω² + 1
5 | 16ω⁵ − 20ω³ + 5ω
6 | 32ω⁶ − 48ω⁴ + 18ω² − 1
7 | 64ω⁷ − 112ω⁵ + 56ω³ − 7ω
8 | 128ω⁸ − 256ω⁶ + 160ω⁴ − 32ω² + 1

2. CHEBYSHEV FILTERS

Chebyshev polynomials (figure: cosh x):

C_n(ω) = cos(n cos^{−1} ω),  0 ≤ |ω| ≤ 1
C_n(ω) = cosh(n cosh^{−1} ω),  |ω| > 1

With u = cos^{−1} ω, C_n(ω) = cos nu:
n = 0:  C_0(ω) = cos 0 = 1
n = 1:  C_1(ω) = cos u = cos(cos^{−1} ω) = ω
n = 2:  C_2(ω) = cos 2u = 2cos²u − 1 = 2cos²(cos^{−1} ω) − 1 = 2ω² − 1
n = 3:  C_3(ω) = cos 3u = 4cos³u − 3cos u = 4[cos(cos^{−1} ω)]³ − 3[cos(cos^{−1} ω)] = 4ω³ − 3ω

C_{n−1}(ω) = cos[(n−1)u] = cos nu cos u + sin nu sin u
C_n(ω) = cos nu
C_{n+1}(ω) = cos[(n+1)u] = cos nu cos u − sin nu sin u
C_{n−1}(ω) + C_{n+1}(ω) = 2 cos nu cos u = 2ω C_n(ω)

Recursive formula for Chebyshev polynomials:

C_{n+1}(ω) = 2ω C_n(ω) − C_{n−1}(ω);  n = 1, 2, 3, …
C_0(ω) = 1
C_1(ω) = ω

Properties of C_n(ω):
1) |C_n(ω)| ≤ 1 for |ω| ≤ 1;  |C_n(ω)| > 1 for |ω| > 1
2) C_n(ω) increases monotonically for ω > 1
3) C_n(ω) is: - an even polynomial for n even
              - an odd polynomial for n odd
4) |C_n(0)| = 0 for n odd, 1 for n even
|H(ω)|² = 1 / (1 + ε² C_n²(ω))  - squared magnitude transfer function

Properties:
1) For every n, the zeros of C_n(ω) are located in the interval |ω| ≤ 1.
2) For |ω| ≤ 1, C_n²(ω) ≤ 1 ⇒ |H(ω)|² oscillates (ripples) about unity such that its maximum value is 1 and its minimum value is 1/(1 + ε²).
For |ω| > 1, C_n(ω) increases rapidly, so as ω becomes large |H(ω)|² approaches zero rapidly.


Figures: n even, n odd

For ω = 1, |C_n(1)| = 1 for all n:
|H(1)|² = 1/(1 + ε²)

For ω < 1:  |H_n(ω)|² = 1 / (1 + ε² C_n²(ω))
For ω ≥ 1:  |H_n(ω)| ≈ 1 / (ε C_n(ω))  (the 1 in the denominator is neglected because cosh grows quickly)

dB attenuation (loss):
loss = −20 log|H(ω)| = 20 log ε + 20 log C_n(ω)
For large ω:  C_n(ω) ≈ 2^{n−1} ω^n
loss ≈ 20 log ε + 20 log 2^{n−1} + 20 log ω^n = 20 log ε + 6(n−1) + 20n log ω
Pole location of the Chebyshev filter

H(s)H(−s) = |H(ω)|²|_{ω=s/j} = 1 / [1 + ε² C_n²(s/j)]

Poles:  1 + ε² C_n²(s/j) = 0 ⇒ C_n²(s/j) = −1/ε² ⇒ C_n(−js) = ± j/ε

With s = σ + jω, −js = ω − jσ:  C_n(ω − jσ) = ± j/ε

cos[n cos^{−1}(ω − jσ)] = ± j/ε;  let cos^{−1}(ω − jσ) = x + jy:

cos n(x + jy) = ± j/ε ⇒ cos nx cosh ny − j sin nx sinh ny = ± j/ε

⇒ cos nx = 0 ⇒ x_k = (2k−1)π/(2n),  k = 0, …, 2n−1
⇒ sin nx sinh ny = ∓ 1/ε ⇒ y = ± (1/n) sinh^{−1}(1/ε)

cos[cos^{−1}(ω_k − jσ_k)] = cos(x_k + jy_k)
ω_k − jσ_k = cos x_k cosh y_k − j sin x_k sinh y_k
⇒ ω_k = cos x_k cosh y_k
   σ_k = sin x_k sinh y_k

(ω_k / cosh y_k)² + (σ_k / sinh y_k)² = 1  - the poles lie on an ellipse.

Comparison with the poles of Butterworth filters:

Formulas:  cosh^{−1} x = ln(x + √(x² − 1));  sinh^{−1} x = ln(x + √(x² + 1))

Butterworth poles (on the unit circle):
ω_k = cos((2k−1)/n · π/2)
σ_k = −sin((2k−1)/n · π/2)

x - poles of the Chebyshev filter
• - poles of the Butterworth filter
Design of Chebyshev Filters

Given: A_p - maximum Pass Band attenuation; A_r - minimum Stop Band attenuation;
f_p (ω_p) - PB edge frequency; f_r (ω_r) - SB edge frequency.

Find: n = ?  H(s) = ?

|H(ω/ω_p)|² = 1 / [1 + ε² C_n²(ω/ω_p)]

At ω = ω_p:  A_p = 10 log(1 + ε²) ⇒ ε = (10^{0.1 A_p} − 1)^{1/2}

At ω = ω_r:
A_r = −10 log|H(ω_r/ω_p)|² = 10 log[1 + ε² C_n²(ω_r/ω_p)] = 10 log[1 + ε² cosh²(n cosh^{−1}(ω_r/ω_p))]

⇒ n = cosh^{−1}[(10^{0.1 A_r} − 1)^{1/2} / ε] / cosh^{−1}(ω_r/ω_p) = cosh^{−1}(1/d) / cosh^{−1}(1/k)

where:
k = ω_p/ω_r  - selectivity factor
d = ε / (10^{0.1 A_r} − 1)^{1/2}  - discrimination factor

At ω = ω_c (3 dB cut-off frequency):
|H(ω_c/ω_p)|² = 1/2 ⇒ 1 + ε² C_n²(ω_c/ω_p) = 2 ⇒ ε² C_n²(ω_c/ω_p) = 1
⇒ ε cosh(n cosh^{−1}(ω_c/ω_p)) = 1 ⇒ ω_c = ω_p cosh((1/n) cosh^{−1}(1/ε))

Normalized transfer function:

H_n(s/ω_p) = ∏_{k=0}^{n−1} (−s_k) / (s/ω_p − s_k),  n odd
H_n(s/ω_p) = [1/√(1 + ε²)] ∏_{k=0}^{n−1} (−s_k) / (s/ω_p − s_k),  n even

Fig 2

Filter specifications (cut-off frequency marked in the figure):

k = 3: s3 = sin(5π/6) + j·cos(5π/6)
k = 4: s4 = sin(7π/6) + j·cos(7π/6)
k = 5: s5 = sin(9π/6) + j·cos(9π/6)
Ap ≤ 2 dB at ω = ωp
Ar ≥ 50 dB at ω = ωr = 5ωp

k = ωp/ωr = ωp/(5ωp) = 1/5 = 0.2 ; k - selectivity factor

d = ε/√(10^(0.1·Ar) − 1) = 0.764783/√(10^(0.1·50) − 1) = 2.4185·10⁻³ ; d - discrimination factor

n ≥ cosh⁻¹(1/d)/cosh⁻¹(1/k) = ln(1/d + √(1/d² − 1)) / ln(1/k + √(1/k² − 1)) = 6.7177/2.2924 ≈ 2.93

ε = 0.764783 - ripple factor
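The design numbers can be reproduced with a few lines of Python (standard library only; `chebyshev_order` is an illustrative name):

```python
import math

def chebyshev_order(Ap, Ar, k):
    """Minimum Chebyshev order from the attenuation specs (values as in the example)."""
    eps = math.sqrt(10 ** (0.1 * Ap) - 1)          # ripple factor
    d = eps / math.sqrt(10 ** (0.1 * Ar) - 1)      # discrimination factor
    n = math.acosh(1 / d) / math.acosh(1 / k)
    return eps, d, n

eps, d, n = chebyshev_order(Ap=2, Ar=50, k=0.2)
print(eps, d, n)   # eps ~ 0.7648, d ~ 2.42e-3, n ~ 2.93 -> choose n = 3
```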
Choose n = 3

β = (1/n)·sinh⁻¹(1/ε) = (1/n)·ln(1/ε + √(1/ε² + 1)) = (1/3)·ln(1/0.764783 + √(1/0.764783² + 1)) ⇒

sinh β = (e^β − e^−β)/2 = 0.3689
cosh β = (e^β + e^−β)/2 = 1.0659

Tchebyshev poles:
s_k = sin((2k−1)/n · π/2)·sinh β + j·cos((2k−1)/n · π/2)·cosh β ; n = 3; k = 0, 2n−1

Butterworth poles:
s_k = sin((2k−1)/n · π/2) + j·cos((2k−1)/n · π/2) ; n = 3; k = 0, 2n−1

ε = √(10^(0.1·Ap) − 1) = √(10^(0.1·2) − 1) = 0.764783

k = 0: s0 = sin(−π/6) + j·cos(−π/6)
k = 1: s1 = sin(π/6) + j·cos(π/6)
k = 2: s2 = sin(3π/6) + j·cos(3π/6)

For stability the poles of the Tchebyshev filter must lie in the left half of the s-plane; the three left half-plane poles (relabelled s3, s4, s5) are:

s3 = sin(7π/6)·0.3689 + j·cos(7π/6)·1.0659 = −0.1844 − j·0.9231
s4 = sin(11π/6)·0.3689 + j·cos(11π/6)·1.0659 = −0.1844 + j·0.9231
s5 = sin(3π/2)·0.3689 + j·cos(3π/2)·1.0659 = −0.3689

Transfer function:

H(s) = −∏_(k=3)^(5) s_k/(s − s_k) = (−s3/(s − s3))·(−s4/(s − s4))·(−s5/(s − s5))

H_n(s/ωp) = −∏_(k=3)^(5) s_k/((s/ωp) − s_k)
Inverse Tchebyshev Filters (Type 2 Tchebyshev filters)

|H(ω)|² = ε²·C_n²(ω)/(1 + ε²·C_n²(ω))

C_n(ω) = cos(n·cos⁻¹ω), 0 < ω < 1
C_n(ω) = cosh(n·cosh⁻¹ω), ω > 1

Poles: 1 + ε²·C_n²(ω) = 0 - the same as for Tchebyshev filters:
s_k = sin((2k−1)/n · π/2)·sinh β + j·cos((2k−1)/n · π/2)·cosh β

Zeros: ε²·C_n²(ω) = 0
C_n² = 0 ⟺ cos(n·cos⁻¹ω) = 0 ⇒ n·cos⁻¹ω = (2k−1)·π/2 ⇒ cos⁻¹ω = (2k−1)/n · π/2
s = jω ⇒ ω = s/j ⇒ cos⁻¹(−js) = (2k−1)/n · π/2
−js_k = cos((2k−1)/n · π/2) ⇒ s_k = j·cos((2k−1)/n · π/2)

|H(ωr/ω)|² = ε²·C_n²(ωr/ω)/(1 + ε²·C_n²(ωr/ω)) = 1 − 1/(1 + ε²·C_n²(ωr/ω)) ≈ 1 − 1/(ε²·C_n²(ωr/ω))
A(ω) = −10 log|H(ωr/ω)|² [dB]

A(ω) = 10 log[1 + 1/(ε²·C_n²(ωr/ω))] [dB]

A(ωp) = 10 log[1 + 1/(ε²·C_n²(ωr/ωp))] [dB]

A(ωr) = 10 log[1 + 1/(ε²·C_n²(1))] = 10 log(1 + 1/ε²) [dB]

The order results from the same formula as before:
n ≥ cosh⁻¹(1/d)/cosh⁻¹(1/k), with d = ε/√(10^(0.1·Ar) − 1)

Elliptic filters

For LPF only: |H(ω)|² = 1/(1 + ε²·R²(ω)), where R(ω) - Tchebyshev rational functions.

The roots of R(ω) are related to the Jacobi elliptic sine function.

Properties of R(ω):
1) R(ω) are even functions for even n and odd for odd n;
2) Zeros of R(ω) are in the range ω < 1 while poles are in the range ω > 1;
3) R(ω) oscillates between −1 and +1 in the PB;
4) R(ω) = 1 for ω = 1;
5) R(ω) oscillates between 1/d and ∞ in the SB (d = discrimination factor);
6) R(ω) = ω·∏_(i=1)^((n−1)/2) (ω_i² − ω²)/(1 − ω_i²ω²), n - odd
   R(ω) = ∏_(i=1)^(n/2) (ω_i² − ω²)/(1 − ω_i²ω²), n - even
   (normalized to the "center" frequency ω0 = 1)
7) R(1/ω) = 1/R(ω)

The poles and zeros of R(ω) are reciprocal and exhibit geometric symmetry about the center frequency ω0.

Design steps:
1) Ap = 10 log(1 + ε²) ⇒ ε = √(10^(0.1·Ap) − 1)
2) Ar = 10 log(1 + ε²/d²) ⇒ d = ε/√(10^(0.1·Ar) − 1); k = ωp/ωr; ω0 = √(ωp·ωr)
3) q0 = (1/2)·(1 − (1 − k²)^(1/4))/(1 + (1 − k²)^(1/4))

4) q = q0 + 2q0⁵ + 15q0⁹ + 150q0¹³

5) n = log(16/d²)/log(1/q)

6) a = [2q^(1/4)·Σ_(m=0)^(∞) (−1)^m·q^(m(m+1))·sinh((2m+1)β)] / [1 + 2·Σ_(m=1)^(∞) (−1)^m·q^(m²)·cosh(2mβ)],
   where β = (1/(2n))·ln[(√(1+ε²) + 1)/(√(1+ε²) − 1)]

7) ω_l = [2q^(1/4)·Σ_(m=0)^(∞) (−1)^m·q^(m(m+1))·sin((2m+1)πl/n)] / [1 + 2·Σ_(m=1)^(∞) (−1)^m·q^(m²)·cos(2mπl/n)],
   where l = i − 1/2 for n even, l = i for n odd;
   i = 1, …, n/2 for n - even; i = 1, …, (n−1)/2 for n - odd

8) V_i = √((1 − kω_i²)(1 − ω_i²/k))

9) a_i = 1/ω_i²

10) b_i = 2aV_i/(1 + a²ω_i²)

11) U = √((1 + ka²)(1 + a²/k))

12) c_i = [(aV_i)² + (ω_i·U)²]/(1 + a²ω_i²)²

13) H0 = a·∏_i c_i/a_i, n - odd
    H0 = (1/√(1+ε²))·∏_i c_i/a_i, n - even

14) H(s) = H0·∏_i (s² + a_i)/(s² + b_i·s + c_i), n - even
    H(s) = (H0/(s + a))·∏_i (s² + a_i)/(s² + b_i·s + c_i), n - odd
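Steps 1-5 of the elliptic design (the order estimate) are easy to script; a sketch assuming Python (the name `elliptic_order` is illustrative), applied to the worked example that follows:

```python
import math

def elliptic_order(Ap, Ar, k):
    """Order estimate for an elliptic LPF (design steps 1-5 above)."""
    eps = math.sqrt(10 ** (0.1 * Ap) - 1)
    d = eps / math.sqrt(10 ** (0.1 * Ar) - 1)
    kp = (1 - k * k) ** 0.25
    q0 = 0.5 * (1 - kp) / (1 + kp)
    q = q0 + 2 * q0**5 + 15 * q0**9 + 150 * q0**13
    n = math.log10(16 / d**2) / math.log10(1 / q)
    return d, q, n

# Example specs: Ap = 2 dB up to 3000 Hz, Ar = 60 dB beyond 4000 Hz
d, q, n = elliptic_order(Ap=2, Ar=60, k=3000 / 4000)
print(d, q, n)   # n ~ 5.77 -> choose n = 6
```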

Example: Design a filter with no more than 2 dB ripple in the PB up to an edge frequency of 3000 Hz. The filter is to attenuate the out-of-band signals beyond 4000 Hz by at least 60 dB.
Find H(s).

Ap ≤ 2 dB ; ωp = 2πf_p = 2π·3000 rad·s⁻¹
Ar ≥ 60 dB ; ωr = 2πf_r = 2π·4000 rad·s⁻¹

k = ωp/ωr = f_p/f_r = 0.75

d = [(10^(0.1·Ap) − 1)/(10^(0.1·Ar) − 1)]^(1/2) = 7.6478·10⁻⁴

q0 = (1/2)·(1 − (1 − k²)^(1/4))/(1 + (1 − k²)^(1/4)) ; q = q0 + 2q0⁵ + 15q0⁹ + 150q0¹³

n = log(16/d²)/log(1/q) ≥ 5.77. We choose n = 6.

ωr (normalized) = 1/k = 1.333 rad·s⁻¹ ; ω0 = √(ωp·ωr) = 2π·3464 rad·s⁻¹

H(s) = 7.1374·10⁻⁴ · (s² + 13.8451)(s² + 2.2153)(s² + 1.3955) / [(s² + 0.35518s + 0.10903)(s² + 0.194425)(s² + 0.05386s + 0.7341)]

Bessel Filters

a) H(s) = 1/B_n(s)

B_n(s) = Σ_(k=0)^(n) a_k·s^k , n-th order Bessel polynomial

a_k = (2n − k)! / (2^(n−k)·k!·(n − k)!)

b) B_n(s) = (2n − 1)·B_(n−1)(s) + s²·B_(n−2)(s), with initial conditions B0(s) = 1; B1(s) = s + 1
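The closed-form coefficients a_k and the recursion b) generate the same polynomials, which a short sketch can confirm (standard library only; function names are illustrative):

```python
from math import factorial

def bessel_coeffs(n):
    """Coefficients a_k of the n-th Bessel polynomial, a_k = (2n-k)!/(2^(n-k) k! (n-k)!)."""
    return [factorial(2 * n - k) // (2 ** (n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

def bessel_recursive(n):
    """Same polynomials via B_n = (2n-1)B_{n-1} + s^2 B_{n-2}, B_0 = 1, B_1 = s + 1."""
    B = [[1], [1, 1]]                      # coefficient lists, index = power of s
    for m in range(2, n + 1):
        prev, prev2 = B[m - 1], B[m - 2]
        out = [0] * (m + 1)
        for i, c in enumerate(prev):       # (2m-1) * B_{m-1}
            out[i] += (2 * m - 1) * c
        for i, c in enumerate(prev2):      # s^2 * B_{m-2}
            out[i + 2] += c
        B.append(out)
    return B[n]

print(bessel_coeffs(3))                    # [15, 15, 6, 1] -> B_3(s) = s^3 + 6s^2 + 15s + 15
assert bessel_coeffs(3) == bessel_recursive(3)
```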

Frequency transformations for analog filters

Type of transformation | Transformation                      | Band edge frequencies of new filter
LPF                    | s → s·(ωp/ωp1)                      | ωp1
HPF                    | s → ωp·ωp1/s                        | ωp1
SBF                    | s → ωp·s(ωu − ωl)/(s² + ωu·ωl)      | ωl, ωu
PBF                    | s → ωp·(s² + ωu·ωl)/(s(ωu − ωl))    | ωl, ωu

Transform Analysis of LTI Systems

• In time domain:
h(n) - impulse response of the LTI system for x(n) = δ(n)

y(n) = h(n) * x(n) = Σ_(k=−∞)^(∞) h(k)·x(n − k)

• Z transform / D.T.F.T. (z = e^jω):

Y(e^jω) = H(e^jω)·X(e^jω)

H(e^jω) = Y(e^jω)/X(e^jω) = |H(e^jω)|·e^(jΨ(ω)), where Ψ(ω) = arg H(e^jω) - phase

We define Arg H(e^jω) as the principal value of the phase:
−π ≤ Arg H(e^jω) ≤ π
arg H(e^jω) = Arg H(e^jω) + 2πr(ω), r(ω) ∈ Z

⇒ τ(ω) = −(d/dω) Arg H(e^jω) ≡ group delay, except at points where Arg H(e^jω) has discontinuities.

a) A linear-phase ideal low-pass filter

H_LPF^id(e^jω) = 1, −ωc < ω < ωc
                 0, ωc ≤ |ω| < π

h_LPF(n) = (1/2π)·∫_(−ωc)^(ωc) e^(jnω) dω = (1/2π)·(1/jn)·e^(jnω) |_(−ωc)^(ωc) = (1/πn)·(e^(jnωc) − e^(−jnωc))/2j = sin(ωc·n)/(πn)

Comment: If n increases ⇒ h_LPF → 0, but not fast enough (the tail decays only like 1/n).
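The slow 1/n decay is visible numerically; a sketch (standard library only; `h_lpf` is an illustrative name) that samples the ideal impulse response and checks its DC sum approaches 1 only slowly:

```python
import math

def h_lpf(n, wc):
    """Ideal low-pass impulse response sin(wc*n)/(pi*n), with the n = 0 limit wc/pi."""
    if n == 0:
        return wc / math.pi
    return math.sin(wc * n) / (math.pi * n)

wc = math.pi / 4
h = [h_lpf(n, wc) for n in range(-20, 21)]
# DTFT at omega = 0 equals the sum of samples; truncation to |n| <= 20 leaves an O(1/n) error
print(sum(h))
```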
b) Ideal delay (phase shift)

h_id(n) = δ(n − n_d)
H_id(e^jω) = e^(−jωn_d) ⇒ |H_id(e^jω)| = 1 ; ∠H_id(e^jω) = −ωn_d
τ(ω) = −d(∠H_id(e^jω))/dω = n_d

[Z-plane figure: zeros in reciprocal-conjugate groups z1, 1/z1, z2, z2*, 1/z2, 1/z2*]

Combining the ideal LPF with the delay:

h_id(n) = (1/2π)·∫_(−ωc)^(ωc) e^(−jωn_d)·e^(jωn) dω = (1/2π)·∫_(−ωc)^(ωc) e^(jω(n − n_d)) dω = sin[ωc(n − n_d)]/(π(n − n_d))

n = n_d ⇒ lim_(n→n_d) h_id = ωc/π
→ the response is a delayed (shifted) sinc.

c) Group delay

x(n) = s(n)·cos(ω0·n) - narrow-band signal

We suppose H(e^jω) ≠ 0 for ω0 − ε ≤ ω ≤ ω0 + ε, ε > 0, with the generalized phase
∠H(e^jω) = −ωn_d − θ

So:
y(n) = h(n) * x(n)
Y(e^jω) = H(e^jω)·X(e^jω)
|Y(e^jω)| = |H(e^jω)|·|X(e^jω)|
∠Y(e^jω) = ∠H(e^jω) + ∠X(e^jω)

τ(ω) = −d[∠Y(e^jω)]/dω

d) Generalized linear phase

H(e^jω) = A(e^jω)·e^(−j(αω − β)), with A(e^jω) real

∠H(e^jω) = −αω + β ; by derivation: τ(ω) = α

⇒ H(e^jω) = A(e^jω)·cos(αω − β) − j·A(e^jω)·sin(αω − β)

So: tan ∠H(e^jω) = −sin(αω − β)/cos(αω − β)

On the other hand H(e^jω) = Σ_(n=−∞)^(∞) h(n)·e^(−jωn),

meaning: H(e^jω) = Σ_(n=−∞)^(∞) h(n)·cos ωn − j·Σ_(n=−∞)^(∞) h(n)·sin ωn

tan ∠H(e^jω) = −Σ_n h(n)·sin ωn / Σ_n h(n)·cos ωn

Equating the two expressions for tan ∠H(e^jω):

[Σ_n h(n)·cos ωn]·sin(αω − β) − [Σ_n h(n)·sin ωn]·cos(αω − β) = 0

Σ_n h(n)·[cos ωn·sin(αω − β) − sin ωn·cos(αω − β)] = 0

Σ_(n=−∞)^(∞) h(n)·sin[(n − α)ω + β] = 0

- necessary but not sufficient condition for a linear phase.

For β = 0 or π one family of solutions is α = M/2, M ∈ N*, with the symmetry h(M − n) = h(n) (for β = ±π/2, the antisymmetry h(M − n) = −h(n)); such a system defined for n from −∞ to +∞ is NOT causal.

Restricting the impulse response to finite support gives

Σ_(n=0)^(M) h(n)·sin[(n − α)ω + β] = 0

- the form of the condition compatible with causality.

Linear Phase FIR (finite impulse response) systems

h(M − n) = h(n), n = 0, M - symmetry condition
h(M − n) = −h(n), n = 0, M - antisymmetry condition
(conditions for generalized linear phase of a system)

y(n) = Σ_(k=0)^(M) h(k)·x(n − k)

y(n) = h(0)x(n) + h(1)x(n−1) + … + h(M/2)x(n − M/2) + … + h(M−1)x(n − M + 1) + h(M)x(n − M)

y(n) = Σ_(k=0)^(M/2−1) h(k)x(n − k) + h(M/2)x(n − M/2) + Σ_(k=M/2+1)^(M) h(k)x(n − k) =

     = Σ_(k=0)^(M/2−1) [h(k)x(n − k) ± h(M − k)x(n − M + k)] + h(M/2)x(n − M/2)

Type I systems: h(M − n) = h(n), M - even:
y(n) = Σ_(k=0)^(M/2−1) [h(k)x(n − k) + h(M − k)x(n − M + k)] + h(M/2)x(n − M/2)

Type III systems: h(M − n) = −h(n), M - even (h(M/2) = 0):
y(n) = Σ_(k=0)^(M/2−1) [h(k)x(n − k) − h(M − k)x(n − M + k)]

Type II systems: h(M − n) = h(n), M - odd:
y(n) = Σ_(k=0)^((M−1)/2) [h(k)x(n − k) + h(M − k)x(n − M + k)]

Type IV systems: h(M − n) = −h(n), M - odd:
y(n) = Σ_(k=0)^((M−1)/2) [h(k)x(n − k) − h(M − k)x(n − M + k)]
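The linear-phase property of a symmetric impulse response can be checked directly: after removing the factor e^(−jωM/2), the remaining frequency response must be purely real. A sketch (standard library only; `freq_resp` and the example taps are illustrative):

```python
import cmath, math

def freq_resp(h, w):
    """DTFT of a finite sequence h at frequency w."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# Type I example: symmetric taps, M = 4 (length 5)
h = [1.0, 2.0, 3.0, 2.0, 1.0]              # h(M - n) = h(n)
M = len(h) - 1
for w in (0.3, 0.7, 1.1):
    Hw = freq_resp(h, w)
    amp = Hw * cmath.exp(1j * w * M / 2)   # strip the linear phase e^{-jwM/2}
    assert abs(amp.imag) < 1e-12           # remainder is real -> exact linear phase
print("symmetric FIR: phase = -wM/2 (plus 0 or pi), group delay =", M / 2)
```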

Design of FIR Filters

Symmetric and antisymmetric filters

y(n) = Σ_(k=0)^(M−1) b_k·x(n − k) - difference equation of a FIR filter of length M

y(n) = Σ_(k=0)^(M−1) h(k)·x(n − k) - output signal of a FIR filter of length M

b_k = h(k)

H(z) = Σ_(k=0)^(M−1) h(k)·z^(−k) - Z-transform of the impulse response

H(z) = h(0) + h(1)z^(−1) + h(2)z^(−2) + … + h((M−1)/2)·z^(−(M−1)/2) + … + h(M−2)z^(−(M−2)) + h(M−1)z^(−(M−1))

H(z) = z^(−(M−1)/2)·[h(0)z^((M−1)/2) + h(1)z^((M−1)/2−1) + h(2)z^((M−1)/2−2) + … + h((M−1)/2) + h((M+1)/2)z^(−1) + … + h(M−1)z^(−(M−1)/2)]

h(M − 1 − n) = h(n) - symmetric
h(M − 1 − n) = −h(n) - antisymmetric

H(z) = z^(−(M−1)/2)·{h((M−1)/2) + Σ_(n=0)^((M−3)/2) h(n)·[z^((M−1−2n)/2) ± z^(−(M−1−2n)/2)]}, M - odd

H(z) = z^(−(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·[z^((M−1−2n)/2) ± z^(−(M−1−2n)/2)], M - even

Obs: H(z) = Σ_(k=0)^(M−1) h(k)·z^(−k)

z → z^(−1) ⇒ H(z^(−1)) = Σ_(k=0)^(M−1) h(k)·z^(k)

z^(−(M−1))·H(z^(−1)) = ±H(z) ⇒ the roots of H(z) are also roots of H(z^(−1))

Figure

M-even, h(n) = h(M − 1 − n):

H(z) = z^(−(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·[z^((M−1−2n)/2) + z^(−(M−1−2n)/2)]

z = e^jω:

H(e^jω) = e^(−jω(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·[e^(jω(M−1−2n)/2) + e^(−jω(M−1−2n)/2)]

H(e^jω) = e^(−jω(M−1)/2)·Σ_(n=0)^(M/2−1) 2h(n)·cos(ω(M−1−2n)/2)

Y(z) = Σ_(k=0)^(M−1) b_k·z^(−k)·X(z) = H(z)·X(z)

|H(e^jω)| = |Σ_(n=0)^(M/2−1) 2h(n)·cos(ω(M−1−2n)/2)|
∠H(e^jω) = −ω(M−1)/2
τ(ω) = (M−1)/2

M-odd, h(n) = h(M − 1 − n):

z = e^jω:

H(e^jω) = e^(−jω(M−1)/2)·[h((M−1)/2) + Σ_(n=0)^((M−3)/2) h(n)·(e^(jω(M−1−2n)/2) + e^(−jω(M−1−2n)/2))] =
        = e^(−jω(M−1)/2)·[h((M−1)/2) + Σ_(n=0)^((M−3)/2) 2h(n)·cos(ω(M−2n−1)/2)]

|H(e^jω)| = |h((M−1)/2) + Σ_(n=0)^((M−3)/2) 2h(n)·cos(ω(M−2n−1)/2)|
∠H(e^jω) = −ω(M−1)/2
τ(ω) = (M−1)/2

M-odd, h(n) = −h(M − 1 − n) (the middle sample h((M−1)/2) = 0):

z = e^jω:

H(e^jω) = e^(−jω(M−1)/2)·Σ_(n=0)^((M−3)/2) h(n)·[e^(jω(M−1−2n)/2) − e^(−jω(M−1−2n)/2)] =
        = 2j·e^(−jω(M−1)/2)·Σ_(n=0)^((M−3)/2) h(n)·sin(ω(M−2n−1)/2) = 2·e^(−j(ω(M−1)/2 − π/2))·Σ_(n=0)^((M−3)/2) h(n)·sin(ω(M−2n−1)/2)

H(e^jω) = H_I(e^jω)·e^(−j(ω(M−1)/2 − π/2))

∠H(e^jω) = −ω(M−1)/2 + π/2
τ(ω) = (M−1)/2
If, for M odd, the middle sample h((M−1)/2) is retained together with the antisymmetric part:

H(z) = z^(−(M−1)/2)·{h((M−1)/2) + Σ_(n=0)^((M−3)/2) h(n)·[z^((M−1−2n)/2) − z^(−(M−1−2n)/2)]}, M - odd

z = e^jω:

H(e^jω) = e^(−jω(M−1)/2)·{h((M−1)/2) + j·Σ_(n=0)^((M−3)/2) 2h(n)·sin(ω(M−2n−1)/2)}

H(e^jω) = |H(e^jω)|·e^(jφ(ω))

|H(e^jω)| = √( h²((M−1)/2) + [Σ_(n=0)^((M−3)/2) 2h(n)·sin(ω(M−2n−1)/2)]² )

∠H(e^jω) = arctan[ Σ_(n=0)^((M−3)/2) 2h(n)·sin(ω(M−2n−1)/2) / h((M−1)/2) ] − ω(M−1)/2 = β(ω) − ω(M−1)/2

τ(ω) = −dβ(ω)/dω + (M−1)/2

Design of Linear Phase FIR filters with windows

H_d(e^jω) = Σ_(n=−∞)^(+∞) h_d(n)·e^(−jωn) - desired frequency response of the FIR filter

h_d(n) = (1/2π)·∫_(−π)^(π) H_d(e^jω)·e^(jωn) dω - desired impulse response of the FIR filter

H_d(e^jω) = 1, −ωc < ω < ωc ; 0, otherwise
(periodic with period 2π: passbands around 0, ±2π, …)

h_d(n) = (1/2π)·∫_(−π)^(π) H_d(e^jω)·e^(jωn) dω = (1/2π)·∫_(−ωc)^(ωc) e^(jωn) dω = (1/(2πjn))·e^(jωn) |_(−ωc)^(ωc) =
       = (1/πn)·(e^(jωc·n) − e^(−jωc·n))/2j = sin(nωc)/(πn) = 2f_c·sin(nωc)/(nωc)

h_d(n) = 2f_c, n = 0
h_d(n) = 2f_c·sin(nωc)/(nωc), n ≠ 0
⇒ h_d(n) is not causal

[Figure: h_d(n) - a sinc shape with peak 2f_c at n = 0, zero crossings at multiples of π/ωc]

H_d(e^jω) = Σ_(n=0)^(∞) h_d(n)·e^(−jnω) - truncated H_d(e^jω) for a causal FIR filter to be implemented

H(e^jω) = Σ_(n=0)^(N−1) h_d(n)·e^(−jωn), where N is the length of the filter

[z-plane figure: symmetry of the roots of H(z) and H(z^(−1)) - reciprocal pairs z1, 1/z1, z2, z2*, 1/z2, 1/z2* about the unit circle]

M-even, h(n) = −h(M − n − 1):

H(z) = z^(−(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·[z^((M−1−2n)/2) − z^(−(M−1−2n)/2)]

z = e^jω:

H(e^jω) = e^(−jω(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·[e^(jω(M−1−2n)/2) − e^(−jω(M−1−2n)/2)]

H(e^jω) = 2j·e^(−jω(M−1)/2)·Σ_(n=0)^(M/2−1) h(n)·sin(ω(M−2n−1)/2) = 2·e^(−j(ω(M−1)/2 − π/2))·Σ_(n=0)^(M/2−1) h(n)·sin(ω(M−2n−1)/2)

H(e^jω) = H_J(e^jω)·e^(−j(ω(M−1)/2 − π/2))

|H(e^jω)| = |H_J(e^jω)| = |2·Σ_(n=0)^(M/2−1) h(n)·sin(ω(M−2n−1)/2)|

∠H(e^jω) = −ω(M−1)/2 + π/2
τ(ω) = (M−1)/2

h(n) = h_d(n)·w(n) = h_d(n), 0 ≤ n ≤ N − 1 ; 0, otherwise

w(n) = 1, 0 ≤ n ≤ N − 1 ; 0, otherwise - rectangular window function; the truncation causes the Gibbs phenomenon

H(e^jω) = H_d(e^jω) * W(e^jω) = (1/2π)·∫_(−π)^(π) H_d(e^jv)·W(e^(j(ω−v))) dv

W(e^jω) = Σ_(n=0)^(M−1) 1·e^(−jωn) = (1 − e^(−jωM))/(1 − e^(−jω)) = [e^(−jωM/2)·(e^(jωM/2) − e^(−jωM/2))]/[e^(−jω/2)·(e^(jω/2) − e^(−jω/2))] =
        = e^(−jω(M−1)/2)·sin(ωM/2)/sin(ω/2)

|W(e^jω)| = |sin(ωM/2)/sin(ω/2)| ; phase = −ω(M−1)/2 ; grd (group delay) = (M−1)/2

[figure]
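The closed form of the rectangular-window spectrum can be verified against direct summation; a sketch (standard library only; the function names are illustrative):

```python
import cmath, math

def W_direct(M, w):
    """Rectangular-window DTFT by direct summation."""
    return sum(cmath.exp(-1j * w * n) for n in range(M))

def W_closed(M, w):
    """Closed form e^{-jw(M-1)/2} * sin(wM/2)/sin(w/2)."""
    return cmath.exp(-1j * w * (M - 1) / 2) * math.sin(w * M / 2) / math.sin(w / 2)

M = 8
for w in (0.1, 0.5, 1.3, 2.9):
    assert abs(W_direct(M, w) - W_closed(M, w)) < 1e-12
print("main-lobe peak |W| near w = 0:", abs(W_direct(M, 1e-9)))   # approaches M
```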

TYPES OF LINEAR PHASE FIR TRANSFER FUNCTIONS

H(z) = Σ_(n=0)^(N) h(n)·z^(−n) ⇒ causal FIR transfer function

h(n) = h(N−n) ⇒ symmetric filter - N = even
                                 - N = odd
h(n) = −h(N−n) ⇒ antisymmetric filter - N = even
                                      - N = odd

Figures of symmetry conditions for filters:

Type 1: h(n) = h(N−n), N = even      Type 2: h(n) = h(N−n), N = odd
[Figure 1]                           [Figure 2]
Type 3: h(n) = −h(N−n), N = even     Type 4: h(n) = −h(N−n), N = odd
[Figure 3]                           [Figure 4]

Symmetric impulse response with odd length (type 1)

for N = 8
8

H(z) = ∑h(n) .z-n = h(0).z0 + h(1).z-1 + h(2).z-2 + h(3).z-3 + h(4).z-4 + h(5).z-5


n =0

+ h(6).z-6 + h(7).z-7 + h(8).z-8


 h(0) = h(8)
 h(1) = h(7)

h(n) = h(8-n) =>  h(2) = h(6)
 h(3) = h(5)

 h(4)
H(z) = h(0) (1+z-8) + h(1) (z-1 + z-7) + h(2) (z-2 + z-6) + h(3) (z-3 + z-5) + h(4) z-4 =
= z-4 [ h(0) (z4 + z-4) + h(1) (z3 + z-3) + h(2) (z2 + z-2) + h(3) (z1 + z-1) + h(4)]

z = ej ω ⇒ zm + z-m = ejm ω + e-jm ω = 2cos ωm

H(ej ω ) = H(z)|z= ej ω =
= e-4j ω [h(0) ·2cos 4ω + h(1) ·2cos 3ω + h(2) ·2cos 2ω + h(3) ·2cos ω +
h(4)]

ω ∈ (0, π) ⇒ cos ωn ∈ (−1, 1)

H(e^jω) = e^(−4jω)·H̃(ω)·e^(jβ) = e^(j(−4ω+β))·|H(e^jω)|

β = 0 ⇒ H̃(ω) > 0        θ = −4ω + β
β = π ⇒ H̃(ω) < 0        grd = −dθ/dω = 4

In general:
H(e^jω) = e^(j(−ωN/2 + β))·H̃(ω), where H̃(ω) = |H(e^jω)| = h(N/2) + 2·Σ_(n=1)^(N/2) h(N/2 − n)·cos ωn

θ = −(N/2)ω + β
grd θ = N/2
Symmetric impulse response with even length (type 2)

for N = odd
h(n) = h(N-n)
7

N=7 ⇒ H(z) = ∑h(n) ∙ z-n = h(0) + h(1)∙ z-1 + h(2)∙ z-2 + h(3)∙ z-3 + h(4)∙ z-4 +
n =0

+ h(5)∙ z-5 + h(6)∙ z-6 + h(7)∙ z-7

 h(0) = h(7)
 h(1) = h(6)

h(n)= h(7-n) => 
 h(2) = h(5)
 h(3) = h(4)
H(z) = h(0)(1 + z⁻⁷) + h(1)(z⁻¹ + z⁻⁶) + h(2)(z⁻² + z⁻⁵) + h(3)(z⁻³ + z⁻⁴) =
     = z^(−7/2)·[h(0)(z^(7/2) + z^(−7/2)) + h(1)(z^(5/2) + z^(−5/2)) + h(2)(z^(3/2) + z^(−3/2)) + h(3)(z^(1/2) + z^(−1/2))]

H(e^jω) = H(z)|_(z=e^jω) =
        = e^(−j(7/2)ω)·[h(0)·2cos(7ω/2) + h(1)·2cos(5ω/2) + h(2)·2cos(3ω/2) + h(3)·2cos(ω/2)] =
        = e^(−j(7/2)ω)·H̃₇(ω)·e^(jβ) = H̃₇(ω)·e^(j(−(7/2)ω + β))

β = 0 ⇒ H̃₇(ω) ≥ 0
β = π ⇒ H̃₇(ω) < 0

H̃₇(ω) = 2h(0)·cos(7ω/2) + 2h(1)·cos(5ω/2) + 2h(2)·cos(3ω/2) + 2h(3)·cos(ω/2)

θ = −(7/2)ω + β
grd θ = 7/2

In general:
H(e^jω) = e^(j(−ωN/2 + β))·H̃(ω)

H̃(ω) = 2·Σ_(n=1)^((N+1)/2) h((N+1)/2 − n)·cos(ω(n − 1/2))

θ = −(N/2)ω + β
grd θ = N/2
Antisymmetric impulse response with odd length (type 3)

for N = even ⇒ h(n) = -h(N-n)


8

N=8 ⇒ H(z) = ∑h(n) .z-n = h(0).z0 + h(1).z-1 + h(2).z-2 + h(3).z-3 + h(4).z-4 +


n =0

+ h(5).z-5 + h(6).z-6 + h(7).z-7 + h(8).z-8


 h(0) = − h(8)
 h(1) = − h(7)

h(n)= - h(8-n) = >  h(2) = − h(6)
 h(3) = − h(5)

 h(4) = − h(4) = >h(4) = 0
H(z) = h(0)(1 − z⁻⁸) + h(1)(z⁻¹ − z⁻⁷) + h(2)(z⁻² − z⁻⁶) + h(3)(z⁻³ − z⁻⁵) =
     = z⁻⁴·[h(0)(z⁴ − z⁻⁴) + h(1)(z³ − z⁻³) + h(2)(z² − z⁻²) + h(3)(z¹ − z⁻¹)]

z = e^jω ⇒ z^m − z^(−m) = e^(jmω) − e^(−jmω) = 2j·sin ωm

H(e^jω) = H(z)|_(z=e^jω) =
        = e^(−4jω)·[h(0)·2j·sin 4ω + h(1)·2j·sin 3ω + h(2)·2j·sin 2ω + h(3)·2j·sin ω] =
        = e^(j(−4ω + π/2))·2[h(0)·sin 4ω + … + h(3)·sin ω]·e^(jβ) = e^(j(−4ω + π/2))·H̃(ω)·e^(jβ)

β = 0 ⇒ H̃(ω) ≥ 0
β = π ⇒ H̃(ω) < 0

ω ∈ (0, π) ⇒ θ = −4ω + π/2 + β ; grd = −dθ/dω = 4

In general:
H(e^jω) = e^(j(−ωN/2 + π/2 + β))·H̃(ω)

H̃(ω) = 2·Σ_(n=1)^(N/2) h(N/2 − n)·sin ωn

Antisymmetric impulse response with even length (type 4)

for N = odd
h(n) = -h(N-n)
7

N=7 ⇒ H(z) = ∑h(n) ∙ z-n = h(0) + h(1)∙ z-1 + h(2)∙ z-2 + h(3)∙ z-3 + h(4)∙ z-4 +
n =0

+ h(5)∙ z-5 + h(6)∙ z-6 + h(7)∙ z-7


 h(0) = − h(7)
 h(1) = − h(6)

h(n)= -h(7-n) => 
 h(2) = − h(5)
 h(3) = − h(4)
H(z) = h(0)(1 − z⁻⁷) + h(1)(z⁻¹ − z⁻⁶) + h(2)(z⁻² − z⁻⁵) + h(3)(z⁻³ − z⁻⁴) =
     = z^(−7/2)·[h(0)(z^(7/2) − z^(−7/2)) + h(1)(z^(5/2) − z^(−5/2)) + h(2)(z^(3/2) − z^(−3/2)) + h(3)(z^(1/2) − z^(−1/2))]

H(e^jω) = H(z)|_(z=e^jω) = e^(−j(7/2)ω)·[h(0)(e^(j(7/2)ω) − e^(−j(7/2)ω)) + h(1)(e^(j(5/2)ω) − e^(−j(5/2)ω)) + h(2)(e^(j(3/2)ω) − e^(−j(3/2)ω)) + h(3)(e^(j(1/2)ω) − e^(−j(1/2)ω))] =
        = 2j·e^(−j(7/2)ω)·[h(0)·sin(7ω/2) + h(1)·sin(5ω/2) + h(2)·sin(3ω/2) + h(3)·sin(ω/2)] =
        = e^(j(−(7/2)ω + π/2 + β))·H̃(ω)

H̃(ω) = 2[h(0)·sin(7ω/2) + h(1)·sin(5ω/2) + h(2)·sin(3ω/2) + h(3)·sin(ω/2)]

θ = −(7/2)ω + π/2 + β
grd θ = 7/2

β = 0 ⇒ H̃(ω) ≥ 0
β = π ⇒ H̃(ω) < 0

In general:
H(e^jω) = e^(j(−ωN/2 + π/2 + β))·H̃(ω)

H̃(ω) = 2·Σ_(n=1)^((N+1)/2) h((N+1)/2 − n)·sin(ω(n − 1/2))

θ = −(N/2)ω + π/2 + β
grd θ = N/2
General form of frequency response:

H(e^jω) = e^(−jNω/2)·e^(jβ)·H̃(ω)

θ(ω) = −Nω/2 + β,      H̃(ω) ≥ 0
θ(ω) = −Nω/2 + β + π,  H̃(ω) < 0

grd(ω) = N/2

Zero locations of Linear Phase FIR transfer functions

H(z) = Σ_(n=0)^(N) h(n)·z^(−n) - causal FIR filter

a) h(n) = h(N − n) - symmetric filter

H(z) = Σ_(n=0)^(N) h(N − n)·z^(−n) = Σ_(m=0)^(N) h(m)·z^(m−N) = z^(−N)·Σ_(m=0)^(N) h(m)·(z^(−1))^(−m)

H(z) = z^(−N)·H(z^(−1)) - mirror image polynomial (MIP)

b) h(n) = −h(N − n) - antisymmetric filter

H(z) = Σ_(n=0)^(N) h(n)·z^(−n) = −Σ_(n=0)^(N) h(N − n)·z^(−n) = −Σ_(m=0)^(N) h(m)·z^(m−N) = −z^(−N)·H(z^(−1))

H(z) = −z^(−N)·H(z^(−1)) - antimirror image polynomial (AIP)

H(z) = ±z^(−N)·H(z^(−1)) ⇒ zeros of H(z) are zeros of H(z^(−1)):

→ If z = ξ₀ is a zero of H(z), then z = 1/ξ₀ is also a zero of H(z)
→ If z = e^(jφ) (on the unit circle) is a zero, then z = e^(−jφ) is also a zero of H(z)
→ If z = r·e^(±jφ) is a zero of H(z), then z = (1/r)·e^(∓jφ) is also a zero of H(z)
→ A zero at z = ±1 is its own reciprocal, so it can appear singly
→ A type 2 FIR filter must have a zero at z = −1: with N odd, H(−1) = (−1)^N·H(−1) = −H(−1), forcing H(−1) = 0
→ For type 3 and 4 FIR filters, H(1) = −H(1), forcing H(1) = 0 (and for type 3, with N even, also H(−1) = 0)
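The forced zeros and the mirror-image identity can be checked on small examples; a sketch (standard library only; the taps `h2`, `h4` are illustrative):

```python
def H(h, z):
    """Evaluate H(z) = sum h(n) z^-n for a finite tap list."""
    return sum(c * z ** (-n) for n, c in enumerate(h))

# Type 2 (symmetric, N = 3 odd): forced zero at z = -1
h2 = [1.0, 2.0, 2.0, 1.0]              # h(n) = h(N-n)
# Type 4 (antisymmetric, N = 3 odd): forced zero at z = +1
h4 = [1.0, 2.0, -2.0, -1.0]            # h(n) = -h(N-n)

assert abs(H(h2, -1.0)) < 1e-12        # type 2: H(-1) = 0
assert abs(H(h4, 1.0)) < 1e-12         # type 4: H(1) = 0

# mirror-image property: H(z) = +/- z^-N H(1/z)
z, N = 0.7 + 0.4j, 3
assert abs(H(h2, z) - z**(-N) * H(h2, 1 / z)) < 1e-12   # MIP
assert abs(H(h4, z) + z**(-N) * H(h4, 1 / z)) < 1e-12   # AIP
print("forced zeros and mirror-image identities verified")
```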
Bounded real transfer function of FIR filters

H(e^jω) = Y(e^jω)/X(e^jω) ; |H(e^jω)| ≤ 1 - bounded real (BR) transfer function for a causal filter

|Y(e^jω)|² ≤ |X(e^jω)|² ⇒ Σ_n y²(n) ≤ Σ_n x²(n) (by Parseval's relation)

If |H(e^jω)| = 1 for all ω, the output energy equals the input energy - lossless transmission of the signal.

Low Pass Digital Filter Design

E.g.: The moving average filter

H₀(z) = (1/2)(1 + z⁻¹) = (z + 1)/(2z)

z_z = −1 → zero
z_p = 0 → pole

z = e^jω:
H₀(e^jω) = H₀(z)|_(z=e^jω) = (1/2)·(e^jω + 1)/e^jω

z_z = −1 ⇒ ω_z = π
z_p = 0 ⇒ ω_p = 0 (ω ∈ (0, π))

H₀(e^jω) = (1/2)·[e^(jω/2)·(e^(jω/2) + e^(−jω/2))]/e^jω = e^(−jω/2)·cos(ω/2)

θ = −ω/2
grd θ = 1/2
|H₀(e^jω)| = cos(ω/2)

At the 3 dB cutoff ωc:
20 log|H₀(e^(jωc))| = −20 log √2 = −3 dB
|H₀(e^(jωc))| = 1/√2 = cos(ωc/2) ⇒ cos(π/4) = cos(ωc/2) ⇒ ωc = π/2
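The moving-average result is quick to confirm numerically (standard library only; `H0` is an illustrative name):

```python
import cmath, math

def H0(w):
    """Two-point moving average H0(z) = (1 + z^-1)/2 evaluated on the unit circle."""
    z = cmath.exp(1j * w)
    return 0.5 * (1 + 1 / z)

assert abs(abs(H0(0)) - 1) < 1e-12                 # DC gain 1
assert abs(H0(math.pi)) < 1e-12                    # zero at w = pi
wc = math.pi / 2
assert abs(abs(H0(wc)) - 1 / math.sqrt(2)) < 1e-12   # -3 dB point at wc = pi/2
print("attenuation at pi/2:", round(-20 * math.log10(abs(H0(wc))), 2), "dB")
```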

High Pass Filter

LPF --(z → −z)--> HPF

z_z = 1 → zero
z_p = 0 → pole

H₁(z) = (1/2)·(1 + (−z)⁻¹) = (1/2)·(1 − 1/z) = (z − 1)/(2z)

H₁(e^jω) = (1/2)·(e^jω − 1)/e^jω = (1/2)·[e^(jω/2)·(e^(jω/2) − e^(−jω/2))]/e^jω = e^(−j(ω/2 − π/2))·sin(ω/2)

z_z = 1 ⇒ ω_z = 0 (full rejection at DC)
z_p = 0 ⇒ the pole at the origin contributes only phase
Filter design steps

1. Specification of the filter requirements
a) signal characteristics
   - type of signal source
   - input/output interface
   - data rates and width
   - highest frequency of interest
b) characteristics of the filter
   b1) desired amplitude response }  and their tolerances
   b2) phase response             }
   b1) the desired amplitude is specified in the frequency domain by a tolerance scheme for selective filters
   b2) phase response specifications are used to equalize or compensate the phase response
   - speed of operation
   - modes of filtering: in real time or not in real time
c) level of implementation
   - high-level language routines in a computer
   - DSP-processor-based system
   - choice of signal processor
   - costs

2. Coefficient computation, for FIR filters:
   - window method
   - frequency sampling method
   - optimal method
   Computation of coefficients for IIR filters:
   - impulse invariant method
   - bilinear method
   - pole-zero placement method

3. Representation of the filter by a suitable structure (realization)

4. Analysis of the effect of finite word length on the filter tolerance

5. Implementation of the filter in software and hardware

IIR Filters

1) y(n) = Σ_(k=0)^(∞) h(k)·x(n − k)

2) y(n) = Σ_(k=0)^(N) a_k·x(n − k) − Σ_(k=1)^(N) b_k·y(n − k)

3) H(z) = Σ_(k=0)^(N) a_k·z^(−k) / (1 + Σ_(k=1)^(N) b_k·z^(−k))

Reconstruction of a band-limited signal from its samples

Reconstruction scheme of a signal from its sampled signal:

x*(t) = Σ_(n=−∞)^(∞) x(n)·δ(t − nT)

ỹ(t) = ∫₀^∞ h(τ)·x*(t − τ) dτ = ∫₀^∞ h(τ)·x(t − τ)·δ_T(t − τ) dτ = Σ_(m=0)^(∞) ∫₀^∞ h(τ)·x(t − τ)·δ(t − τ − mT) dτ = Σ_(m=0)^(∞) h(t − mT)·x(mT)

h(nT − mT) = 0 for m ≠ n

Y*(z) = Σ_(n=0)^(∞) ỹ(nT)·z^(−n) = Σ_(n=0)^(∞) Σ_(m=0)^(∞) h(nT − mT)·x(mT)·z^(−n)

Y*(z) = Σ_(k=0)^(∞) Σ_(m=0)^(∞) h(kT)·x(mT)·z^(−(k+m)) = Σ_(k=0)^(∞) h(kT)·z^(−k) · Σ_(m=0)^(∞) x(mT)·z^(−m) = H*(z)·X*(z)

IIR filter design by the impulse invariance method

h(n) = T_d·h_c(nT_d), where h_c - impulse response of the continuous-time filter, h(n) - impulse response of the digital filter

H(e^jω) = Σ_(k=−∞)^(∞) H_c(j(ω/T_d + 2πk/T_d))

The frequency of the continuous-time filter is denoted Ω; the frequency of the digital filter is denoted ω; they are related by ω = ΩT_d.

If the continuous-time filter is band limited:
H_c(jΩ) = 0, |Ω| ≥ π/T_d
then H(e^jω) = H_c(jω/T_d), |ω| ≤ π

H_c(s) = Σ_(k=1)^(N) A_k/(s − s_k)

By the inverse Laplace transform:
h_c(t) = Σ_(k=1)^(N) A_k·e^(s_k·t), t ≥ 0 ; 0, t < 0

Sampling:
h(n) = T_d·h_c(nT_d) = Σ_(k=1)^(N) T_d·A_k·e^(s_k·n·T_d)·u(n) = Σ_(k=1)^(N) T_d·A_k·(e^(s_k·T_d))^n·u(n)

H(z) = Σ_(k=1)^(N) T_d·A_k/(1 − e^(s_k·T_d)·z^(−1))

[Figure]

Stability condition: the s-plane pole s_k maps to z_k = e^(s_k·T_d); Re(s_k) < 0 ⇒ |z_k| < 1, so a stable analog filter yields a stable digital filter.
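The pole mapping and the sampling relation h(n) = T_d·h_c(nT_d) can be illustrated with the simplest case, a single real pole H_c(s) = 1/(s + 1) (this example filter is an assumption for illustration; names are illustrative):

```python
import math

# Impulse invariance for H_c(s) = 1/(s + 1)  (A = 1, s1 = -1):
# H(z) = Td / (1 - e^{-Td} z^{-1})  =>  h(n) = Td * e^{-n Td} = Td * h_c(n Td)
Td = 0.1
a = math.exp(-Td)            # pole of the digital filter

def h_digital(n):
    return Td * a ** n       # impulse response of H(z) (causal geometric sequence)

def h_analog(t):
    return math.exp(-t)      # h_c(t) = e^{-t}, t >= 0

for n in range(5):
    assert abs(h_digital(n) - Td * h_analog(n * Td)) < 1e-15
assert abs(a) < 1            # Re(s1) < 0 -> |e^{s1 Td}| < 1: stability preserved
print("digital pole:", round(a, 4))
```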

Example:
Design of a discrete-time LPF by applying the impulse invariance method to an appropriate Butterworth continuous-time filter.

Conditions (from the tolerance scheme):

0.89125 ≤ |H(e^jω)| ≤ 1, 0 ≤ ω ≤ 0.2π
|H(e^jω)| ≤ 0.17783, 0.3π ≤ ω ≤ π

T_d = 1:
|H_c(j0.2π)| ≥ 0.89125
|H_c(j0.3π)| ≤ 0.17783

|H_c(jΩ)|² = 1/(1 + (Ω/Ωc)^(2n)) - Butterworth filter

1 + (0.2π/Ωc)^(2n) = (1/0.89125)²
1 + (0.3π/Ωc)^(2n) = (1/0.17783)²
⇒ n = 5.8858 ; Ωc = 0.70474

Choose n = 6 ⇒ Ωc = 0.7032

Pole computation:

H_c(s)·H_c(−s) = 1/(1 + (s/jΩc)^(2n))

poles: 1 + (s/jΩc)^(2n) = 0 - 12 poles:

s_(1,2,7,8) = ±0.182 ± j0.679
s_(3,4,9,10) = ±0.497 ± j0.497
s_(5,6,11,12) = ±0.679 ± j0.182

For stability we retain the left half-plane poles:

s_(1,2) = −0.182 ± j0.679
s_(3,4) = −0.497 ± j0.497
s_(5,6) = −0.679 ± j0.182

H_c(s) = 0.12093/[(s² + 0.3640s + 0.4945)(s² + 0.9945s + 0.4945)(s² + 1.3585s + 0.4945)]

Expanding H_c(s) in second-order partial fractions of the form (a·s + b)/(s² + 0.3640s + 0.4945) and applying the impulse invariance mapping to each section:

H(z) = (0.2871 − 0.4466z⁻¹)/(1 − 1.2971z⁻¹ + 0.6949z⁻²) + …

Draw the signal flow graph of the implementation of the system:
- in parallel form using second-order sections
- in direct form I and direct form II
- as a cascade of second-order sections

Tolerance scheme for frequency selective digital filters

1. Transformation from the LPDF (low-pass digital filter) prototype of cutoff frequency θp

Filter type | Transformation | Associated design formulas

LPF:  z⁻¹ → (z⁻¹ − α)/(1 − αz⁻¹)
      α = sin((θp − ωp)/2)/sin((θp + ωp)/2) ; ωp - cutoff frequency

HPF:  z⁻¹ → −(z⁻¹ + α)/(1 + αz⁻¹)
      α = −cos((θp + ωp)/2)/cos((θp − ωp)/2)

BPF:  z⁻¹ → −[z⁻² − (2αk/(k+1))z⁻¹ + (k−1)/(k+1)] / [((k−1)/(k+1))z⁻² − (2αk/(k+1))z⁻¹ + 1]
      α = cos((ωp2 + ωp1)/2)/cos((ωp2 − ωp1)/2)
      k = cot((ωp2 − ωp1)/2)·tan(θp/2)

BSF:  z⁻¹ → [z⁻² − (2α/(1+k))z⁻¹ + (1−k)/(1+k)] / [((1−k)/(1+k))z⁻² − (2α/(1+k))z⁻¹ + 1]
      α = cos((ωp2 + ωp1)/2)/cos((ωp2 − ωp1)/2)
      k = tan((ωp2 − ωp1)/2)·tan(θp/2)

2. Frequency transformations of Low Pass IIR Filters

From the Z-plane of a LPF to the z-plane of the desired filter:

H_LPF(Z) → H(z)
We need Z⁻¹ = G(z⁻¹) such that H(z) = H_LPF(Z)|_(Z⁻¹ = G(z⁻¹))

Constraints on the transformation Z⁻¹ = G(z⁻¹):
• G(z⁻¹) must be a rational function of z⁻¹;
• the inside of the unit circle of the Z-plane must map onto the inside of the unit circle of the z-plane;
• the unit circle of the Z-plane must map onto the unit circle of the z-plane:

Z = e^jθ, z = e^jω ⇒ e^(−jθ) = G(e^(−jω)) = |G(e^(−jω))|·e^(j∠G(e^(−jω))) ⇒ |G(e^(−jω))| = 1 ; θ = −∠G(e^(−jω))

The most general form of G(z⁻¹) is the all-pass function:

Z⁻¹ = G(z⁻¹) = ±∏_(k=1)^(N) (z⁻¹ − α_k)/(1 − α_k·z⁻¹), |α_k| < 1

• LPF → LPF:

Z⁻¹ = G(z⁻¹) = (z⁻¹ − α)/(1 − αz⁻¹)

Z = e^jθ, z = e^jω ⇒ e^(−jθ) = (e^(−jω) − α)/(1 − αe^(−jω)) ⇒ ω = arctan[(1 − α²)·sin θ / (2α + (1 + α²)·cos θ)]

ω → ωp: ωp = arctan[(1 − α²)·sin θp / (2α + (1 + α²)·cos θp)] ⇒ α = sin((θp − ωp)/2)/sin((θp + ωp)/2)

• LPF → HPF:

Z⁻¹ = −(z⁻¹ + α)/(1 + αz⁻¹) ⇒ α = −cos((θp + ωp)/2)/cos((θp − ωp)/2)
1 + αz −1
IIR Filter Design by approximation of derivatives

Σ_(k=0)^(N) a_k·d^k y(t)/dt^k = Σ_(k=0)^(M) b_k·d^k x(t)/dt^k
- linear constant-coefficient differential equation of an analogue filter (x(t) - input, y(t) - output)

dy(t)/dt |_(t=nT) ≈ [y(nT) − y(nT − T)]/T = [y(n) − y(n − 1)]/T - backward difference

In the Laplace domain: dy/dt ↔ s·Y(s); in the Z domain: [y(n) − y(n−1)]/T ↔ ((1 − Z⁻¹)/T)·Y(Z)

d/dt: H(s) = s ; backward difference: H(z) = (1 − z⁻¹)/T

s = (1 − z⁻¹)/T or z = 1/(1 − sT)

s = jΩ:
z = 1/(1 − jΩT) = (1 + jΩT)/(1 + Ω²T²) = (1/√(1 + Ω²T²))·e^(j·arctan ΩT)

2nd order difference:

d²y(t)/dt² = d/dt(dy/dt) ≈ { [y(nT) − y(nT − T)]/T − [y(nT − T) − y(nT − 2T)]/T }/T = [y(n) − 2y(n − 1) + y(n − 2)]/T²

In the Z domain: (1 − 2Z⁻¹ + Z⁻²)/T² = ((1 − Z⁻¹)/T)²

In general: d^k y(t)/dt^k ↔ ((1 − Z⁻¹)/T)^k

H(Z) = H_a(s)|_(s=(1−Z⁻¹)/T) -> system function of the digital IIR filter
H_a(s) - transfer function of the analog filter

This approximation is valid only for LPF or BPF (low-frequency filters) and cannot be used for high frequencies.

A more general approximation:

dy/dt |_(t=nT) = Σ_(k=1)^(L) α_k·[y(nT + kT) − y(nT − kT)]/T

In the Z domain: s = (1/T)·Σ_(k=1)^(L) α_k·(Z^k − Z^(−k))

Z = e^jω: s = (1/T)·Σ_(k=1)^(L) α_k·(e^(jωk) − e^(−jωk)) = j·(2/T)·Σ_(k=1)^(L) α_k·sin kω

s = jΩ ⇒ Ω = (2/T)·Σ_(k=1)^(L) α_k·sin kω

It is difficult to choose the {α_k} so that the resulting poles remain inside the unit circle.

Ex 1:
Convert the analogue band-pass filter with system function H_a(s) = 1/((s + 0.1)² + 9) into an IIR digital filter by use of the backward difference for the derivative.

H(Z) = H_a(s)|_(s=(1−Z⁻¹)/T) = 1/[((1 − Z⁻¹)/T + 0.1)² + 9] =

     = [T²/(1 + 0.2T + 9.01T²)] / [1 − (2(1 + 0.1T)/(1 + 0.2T + 9.01T²))·Z⁻¹ + (1/(1 + 0.2T + 9.01T²))·Z⁻²]

Choose T = 0.1 ⇒ p_(1,2) = 0.949·e^(±j16.5°)

Ex 2:
Convert the analogue band-pass filter with H_a(s) = 1/((s + 0.1)² + 9) into a digital IIR filter by use of the mapping s = (1/T)(z − z⁻¹)

H(z) = 1/[((z − z⁻¹)/T + 0.1)² + 9] = …… = z²T² / (z⁴ + 0.2Tz³ + (9.01T² − 2)z² − 0.2Tz + 1)

Bilinear transform

s = (2/T)·(1 − z⁻¹)/(1 + z⁻¹)

s = σ + jΩ ; z = r·e^jω

s = (2/T)·(1 − r⁻¹e^(−jω))/(1 + r⁻¹e^(−jω)) = (2/T)·[(r² − 1)/(1 + r² + 2r·cos ω) + j·2r·sin ω/(1 + r² + 2r·cos ω)]

⇒ σ = (2/T)·(r² − 1)/(1 + r² + 2r·cos ω)
   Ω = (2/T)·2r·sin ω/(1 + r² + 2r·cos ω)

If r < 1 ⇒ σ < 0
   r > 1 ⇒ σ > 0
   r = 1 ⇒ σ = 0 ⇒ s = jΩ, Ω = (2/T_d)·2 sin ω/(2(1 + cos ω)) = (2/T_d)·tan(ω/2)

- the frequency mapping Ω ↔ ω is nonlinear (frequency warping), and the transformation of the frequency axis is not linear.
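Both key properties of the bilinear transform (the unit circle maps onto the jΩ axis with Ω = (2/T_d)·tan(ω/2), and the inside of the circle maps to the left half-plane) can be checked numerically; a sketch (standard library only; `bilinear_map` is an illustrative name):

```python
import cmath, math

Td = 1.0

def bilinear_map(z):
    """s = (2/Td)(1 - z^-1)/(1 + z^-1)."""
    return (2 / Td) * (1 - 1 / z) / (1 + 1 / z)

# unit circle -> jOmega axis with Omega = (2/Td) tan(w/2)
for w in (0.2, 0.5 * math.pi, 0.8 * math.pi):
    s = bilinear_map(cmath.exp(1j * w))
    assert abs(s.real) < 1e-12
    assert abs(s.imag - (2 / Td) * math.tan(w / 2)) < 1e-12

# inside of the unit circle -> left half-plane (stability preserved)
assert bilinear_map(0.5 * cmath.exp(1j * 0.3)).real < 0
print("bilinear: unit circle -> jOmega axis, Omega = (2/Td) tan(w/2)")
```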

Ex: Convert the analogue filter with the system function H_a(s) = (s + 0.1)/((s + 0.1)² + 16) into a digital IIR filter by means of the bilinear transformation. The digital filter is to have a resonant frequency ω_r = π/2.

Ω_r = 4 ; Ω_r = (2/T_d)·tan(ω_r/2) ⇒ T_d = 1/2 ⇒ s = 4·(1 − z⁻¹)/(1 + z⁻¹)

H(z) = H_a(s)|_(s=4(1−z⁻¹)/(1+z⁻¹)) = [4(1 − z⁻¹)/(1 + z⁻¹) + 0.1] / {[4(1 − z⁻¹)/(1 + z⁻¹) + 0.1]² + 16} =

     = (0.128 + 0.006z⁻¹ − 0.122z⁻²)/(1 + 0.0006z⁻¹ + 0.975z⁻²)

Poles: 1 + 0.0006z⁻¹ + 0.975z⁻² = 0 ⇒ z_(p1,2) ≈ 0.987·e^(±jπ/2)

Zeros: 0.128 + 0.006z⁻¹ − 0.122z⁻² = 0 ⇒ z_(z1) = −1, z_(z2) ≈ 0.953

Usually the design begins with the digital filter specifications; afterwards one determines the analogue filter which satisfies the (prewarped) specifications.

Ex 1: Design a single-pole low-pass filter with a 3 dB bandwidth of 0.2π, using the bilinear transformation applied to the analogue filter H(s) = Ωc/(s + Ωc), where Ωc is the 3 dB bandwidth of the analogue filter.

ωc = 0.2π
Ωc = (2/T_d)·tan(ωc/2) = (2/T_d)·tan(0.1π) = 0.65/T_d

Then the analogue transfer function is: H(s) = (0.65/T_d)/(s + 0.65/T_d)

H(z) = H(s)|_(s=(2/T_d)(1−z⁻¹)/(1+z⁻¹)) = (0.65/T_d)/[(2/T_d)·(1 − z⁻¹)/(1 + z⁻¹) + 0.65/T_d] = 0.245·(1 + z⁻¹)/(1 − 0.509z⁻¹)

z = e^jω ⇒ H(e^jω) = 0.245·(1 + e^(−jω))/(1 − 0.509e^(−jω))

ω = 0 ⇒ H(e^j0) = 0.490/0.491 ≈ 1
ωc = 0.2π ⇒ |H(e^(j0.2π))| = |0.245·(1 + e^(−j0.2π))/(1 − 0.509e^(−j0.2π))| = 0.707 ⇒ 3 dB attenuation
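The claimed gains of the designed filter can be verified on the unit circle; a sketch (standard library only; `H` is an illustrative name for the transfer function above):

```python
import cmath, math

def H(w):
    """H(z) = 0.245(1 + z^-1)/(1 - 0.509 z^-1) on the unit circle (from Ex 1)."""
    zinv = cmath.exp(-1j * w)
    return 0.245 * (1 + zinv) / (1 - 0.509 * zinv)

assert abs(abs(H(0.0)) - 1) < 5e-3                 # ~unity gain at DC
assert abs(abs(H(0.2 * math.pi)) - 0.707) < 5e-3   # -3 dB at the design cutoff
assert abs(H(math.pi)) < 1e-12                     # bilinear zero at z = -1 (w = pi)
print(abs(H(0.2 * math.pi)))
```

The exact zero at ω = π comes from the (1 + z⁻¹) factor that the bilinear transform places at z = −1 (the image of Ω = ∞).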

Ex. 2:
The specifications on the discrete-time filter are:
0.89125 ≤ |H(e^jω)| ≤ 1, 0 ≤ ω ≤ 0.2π
|H(e^jω)| ≤ 0.17783, 0.3π ≤ ω ≤ π

With the bilinear transformation the analogue filter must satisfy (prewarped edges):
0.89125 ≤ |H_c(jΩ)| ≤ 1, 0 ≤ Ω ≤ (2/T_d)·tan(0.1π)
|H_c(jΩ)| ≤ 0.17783, (2/T_d)·tan(0.15π) ≤ Ω < ∞

For convenience choose T_d = 1:
|H_c(j·2tan 0.1π)| ≥ 0.89125
|H_c(j·2tan 0.15π)| ≤ 0.17783

|H_c(jΩ)|² = 1/(1 + (Ω/Ωc)^(2N)) - Butterworth filter

1 + (2tan 0.1π/Ωc)^(2N) = (1/0.89125)²
1 + (2tan 0.15π/Ωc)^(2N) = (1/0.17783)²

N = 5.30466
Choose N = 6 ⇒ Ωc = 0.76622

We take the factored forms from the tables for various orders, choosing the poles from the left half-plane in order to have stability:

H_c(s) = 0.20238/[(s² + 0.3966s + 0.5871)(s² + 1.0836s + 0.5871)(s² + 1.4802s + 0.5871)]

H(z) = 0.0007378·(1 + z⁻¹)⁶ / [(1 − 1.2686z⁻¹ + 0.7051z⁻²)(1 − 1.0106z⁻¹ + 0.3583z⁻²)(1 − 0.9044z⁻¹ + 0.2155z⁻²)]

Draw the signal flow-graph of the implementation of the system:
- as a cascade of 2nd-order sections
- in direct form I and direct form II
- in parallel form using 2nd-order sections

IIR filters

Form:

y(n) = Σ_{k=0}^{N} a_k·x(n−k) − Σ_{k=1}^{N} b_k·y(n−k)   (1)

or

H(z) = [Σ_{k=0}^{N} a_k·z⁻ᵏ] / [1 + Σ_{k=1}^{N} b_k·z⁻ᵏ]   (2)

or

H(z) = k·(z − z₁)(z − z₂)···(z − z_N) / [(z − p₁)(z − p₂)···(z − p_N)]   (3)

Pole-zero placement method:

Ex.1: A bandpass filter is required to meet the following specifications:


-a complete rejection on d.c. and 250 Hz
-a narrow band centered at 125 Hz
-a 3 dB bandwidth of 10 Hz
Assuming Fs = 500 Hz, obtain the transfer function of the filter by suitably placing
z-plane poles and zeros, and its difference equation. Sketch the block diagram of the
filter.

-complete rejection at d.c. ⇒ a zero at 0°, i.e. z = 1
-complete rejection at 250 Hz ⇒ a zero at ±(360°·250/500) = 180°, i.e. z = −1
-the passband must be centered at 125 Hz, so the poles are placed at
±(360°·125/500) = ±90°
-pole radius from the 3 dB bandwidth: r = 1 − (bw/Fs)·π = 1 − (10/500)·π = 0.937

H(z) = (z − 1)(z + 1) / [(z − 0.937e^{jπ/2})(z − 0.937e^{−jπ/2})] = (z² − 1)/(z² + 0.877969) = (1 − z⁻²)/(1 + 0.877969·z⁻²)

y(n) = x(n) − x(n−2) − 0.877969·y(n−2)  — the form we need

In this form the coefficients are:
a0 = 1; a1 = 0; a2 = −1;
b1 = 0; b2 = 0.877969.
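The placement can be checked by evaluating the frequency response at the three specification frequencies (a verification sketch; the gain at 125 Hz is left unnormalised, as in the design above):

```python
import cmath
import math

Fs = 500.0
b = [1.0, 0.0, -1.0]        # numerator 1 - z^-2: zeros at z = 1 (d.c.) and z = -1 (250 Hz)
a = [1.0, 0.0, 0.877969]    # denominator 1 + 0.877969 z^-2: poles at 0.937 e^{+-j pi/2}

def gain(f_hz):
    """|H(e^{jw})| at frequency f_hz for sampling rate Fs."""
    w = 2 * math.pi * f_hz / Fs
    z1 = cmath.exp(-1j * w)
    num = sum(bk * z1**k for k, bk in enumerate(b))
    den = sum(ak * z1**k for k, ak in enumerate(a))
    return abs(num / den)

print(gain(0), gain(250))    # both 0: complete rejection at d.c. and 250 Hz
print(gain(125))             # passband peak, ~16.4 (unnormalised)
print(gain(120) < gain(125)) # True: narrow band centred on 125 Hz
```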

Ex.2: A digital notch filter has the following specifications:


-notch frequency: 50 Hz (notch = cutting, very sharp stop-band)
-3 dB width of the notch: +/- 5Hz
-sampling frequency: Fs=500 Hz

To reject the 50 Hz component we place a pair of zeros at the corresponding points
on the unit circle:

±(360°·50/500) = ±36°

-for the poles: r = 1 − (bw/Fs)·π = 1 − (10/500)·π = 0.937 (the 3 dB notch width is ±5 Hz, i.e. bw = 10 Hz)

H(z) = (z − e^{j36°})(z − e^{−j36°}) / [(z − 0.937e^{j36°})(z − 0.937e^{−j36°})]
     = (z² − 1.618z + 1) / (z² − 1.5161z + 0.878)
     = (1 − 1.618z⁻¹ + z⁻²) / (1 − 1.5161z⁻¹ + 0.878z⁻²)

The difference equation (ΔE) of this notch filter is:
y(n) = x(n) − 1.618x(n−1) + x(n−2) + 1.5161y(n−1) − 0.878y(n−2)
In the coefficient form:
a0 = 1; a1 = −1.618; a2 = 1;
b1 = −1.5161; b2 = 0.878

Measure of information

X- space of discrete random variables with possible outcomes xi , i =1, n

Y- space of discrete RV with possible outcomes y j , j =1, m


- If X, Y are statistically independent, then the occurrence of y = y_j should provide
no information about the occurrence of x = x_i (the measure should be zero).
- If X, Y are mutually dependent (causal link), then the occurrence of y = y_j should
fully determine the information about the occurrence of x = x_i.

Postulates:
1) P(x_i) ≥ 0 (probability of event x_i)
2) P(X) = 1 (probability of the sample space X (certain event))
3) x_i ∩ x_j = ∅, i ≠ j = 1, 2, ...
-these are called mutually exclusive events

P(∪_{i=1}^{n} x_i) = Σ_{i=1}^{n} P(x_i) — probability of the union of mutually exclusive events

Joint events and joint probability

Theorem:
If one experiment has the possible outcomes xi , i =1, n and the second experiment
has the possible outcomes y , j =1, m , then the combined experiment has the
j

possible outcomes ( xi , y j ) , i = 1, n , j = 1, m
The joint probability:
P ( x i , y i ) of combined experiment satisfies the condition: 0 ≤ P ( xi , y j ) ≤1

Theorem:
If the outcomes y_j, j = 1, m are mutually exclusive, then

Σ_{j=1}^{m} P(x_i, y_j) = P(x_i)

Similarly, if the outcomes x_i, i = 1, n are mutually exclusive events, then

Σ_{i=1}^{n} P(x_i, y_j) = P(y_j)

If all the outcomes of the 2 experiments are mutually exclusive, then:

Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j) = 1
Conditional probabilities

P(x_i | y_j) = P(x_i, y_j) / P(y_j);  P(y_j) > 0 — conditional probability of the event x_i given
the occurrence of the event y_j

P(y_j | x_i) = P(x_i, y_j) / P(x_i);  P(x_i) > 0 ⇒ P(x_i, y_j) = P(y_j)·P(x_i | y_j) = P(x_i)·P(y_j | x_i)

Notes
1. If x_i ∩ y_j = ∅ (x_i & y_j are mutually exclusive events) then P(x_i | y_j) = 0
2. If x_i is a subset of y_j (x_i ∩ y_j = x_i) then P(x_i | y_j) = P(x_i)/P(y_j)
3. If y_j is a subset of x_i (x_i ∩ y_j = y_j) then P(x_i | y_j) = P(y_j)/P(y_j) = 1

Bayes Theorem

If x_i, i = 1, n are mutually exclusive events such that ∪_{i=1}^{n} x_i = X and Y is an arbitrary
event with P(Y) > 0, then

P(x_i | Y) = P(x_i, Y)/P(Y) = P(Y | x_i)·P(x_i) / Σ_{j=1}^{n} P(Y | x_j)·P(x_j)

Statistical independence
If the occurrence of X doesn’t depend on the occurrence of Y, then P( X | Y ) = P( X )
and P ( X , Y ) = P ( X ) ⋅ P (Y )

Example:
P ( x1 , x 2 ) = P ( x1 ) ⋅ P ( x 2 )
P ( x 2 , x3 ) = P ( x 2 ) ⋅ P ( x 3 )
P ( x1 , x3 ) = P ( x1 ) ⋅ P ( x3 )

Logarithmic measure of information

I(x_i; y_j) = log_b [P(x_i | y_j)/P(x_i)] = log_b [P(x_i, y_j)/(P(x_i)·P(y_j))] ⇒ mutual information of x_i, y_j

if b = 2, then I(x_i; y_j) is measured in [bits]
if b = e, then it is measured in [nats]

ln a = 0.69315·log₂ a;  log₂ a = 1.44269·ln a

Observations

1. If the random variables X & Y are statistically independent, then P(x_i | y_j) = P(x_i),
so I(x_i; y_j) = 0.

2. If the random variables are fully dependent, then

I(x_i; y_j) = log_b [1/P(x_i)] = −log_b P(x_i) = I(x_i) = self-information.

Show that the mutual information is symmetric. Since:

P(x_i | y_j)/P(x_i) = P(x_i | y_j)·P(y_j)/[P(x_i)·P(y_j)] = P(x_i, y_j)/[P(x_i)·P(y_j)] = P(y_j | x_i)/P(y_j)

⇒ I(x_i; y_j) = I(y_j; x_i)

Examples:
1) Suppose a discrete information source emits a binary digit x_i ∈ {0, 1} with
equal probability every τ seconds (P(x_i) = 1/2). Then the information content
of each source output is I(x_i) = −log₂ P(x_i) = −log₂(1/2) = 1 [bit]
2) If we consider a block of "K" binary digits from the source, which occurs in a
time interval "Kτ", then P(x_i) = 1/2^K and I(x_i) = −log₂(1/2^K) = K [bits]

X- the set of signals on entry point


Y- the set of signals on exit point
X & Y ∈ {0,1}

P0 - Probability of error for input “0”


P1 - Probability of error for input “1”

P(Y = 0 | X = 0) = 1 − P₀;  P(Y = 1 | X = 0) = P₀
P(Y = 1 | X = 1) = 1 − P₁;  P(Y = 0 | X = 1) = P₁

Assuming equiprobable inputs, P(X = 0) = P(X = 1) = 1/2:

P(Y = 0) = P(Y = 0 | X = 0)·P(X = 0) + P(Y = 0 | X = 1)·P(X = 1) = (1 − P₀)·(1/2) + P₁·(1/2) = (1/2)·(1 − P₀ + P₁)

P(Y = 1) = P(Y = 1 | X = 0)·P(X = 0) + P(Y = 1 | X = 1)·P(X = 1) = P₀·(1/2) + (1 − P₁)·(1/2) = (1/2)·(1 + P₀ − P₁)

We know that I(x_i) = −log_b P(x_i) → self-information

I(x_i; y_j) = log₂ [P(x_i | y_j)/P(x_i)] = log_b [P(x_i, y_j)/(P(x_i)·P(y_j))] → mutual information

Then:

I(0; 0) = log₂ [P(Y = 0 | X = 0)/P(Y = 0)] = log₂ [(1 − P₀) / ((1/2)·(1 − P₀ + P₁))]  [bits]

Similarly:

I(1; 1) = log₂ [P(Y = 1 | X = 1)/P(Y = 1)] = log₂ [(1 − P₁) / ((1/2)·(1 + P₀ − P₁))]

Special Cases

a) If P₀ = P₁ = P ⇒

I(0; 0) = log₂ [2(1 − P)/(1 − P + P)] = 1 + log₂(1 − P)
I(1; 1) = log₂ [2(1 − P)/(1 + P − P)] = 1 + log₂(1 − P)

Cases:
1: P₀ = P₁ = 0 → errorless channel (without noise)
⇒ I(0; 0) = I(1; 1) = 1 ⇒ total information is transferred
2: P₀ = P₁ = 1/2 ⇒ I(0; 0) = I(1; 1) = 0 → no information is transferred
3: P₀ = P₁ = 1/4 ⇒ I(0; 0) = I(1; 1) = −1 + log₂ 3 ≈ 0.585 [bits]
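The three cases can be checked directly from the I(0;0) formula (a sketch assuming equiprobable inputs, as above):

```python
import math

def I_00(P0, P1):
    """I(0;0) = log2 [P(Y=0|X=0)/P(Y=0)] for a binary channel with equiprobable inputs."""
    return math.log2((1 - P0) / (0.5 * (1 - P0 + P1)))

print(I_00(0.0, 0.0))     # errorless channel: 1 bit
print(I_00(0.5, 0.5))     # useless channel:   0 bits
print(I_00(0.25, 0.25))   # ~0.585 bits
```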
Conditional self-information

I(x_i | y_j) = log_b [1/P(x_i | y_j)] = −log_b P(x_i | y_j) → the remaining uncertainty about x = x_i after
having observed the event Y = y_j

I(x_i; y_j) = I(x_i) − I(x_i | y_j); since I(x_i) ≥ 0 and I(x_i | y_j) ≥ 0, the mutual
information I(x_i; y_j) can be positive, negative, or zero.

Average mutual information

I(X; Y) = Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j)·I(x_i; y_j) = Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j)·log_b [P(x_i, y_j)/(P(x_i)·P(y_j))] ≥ 0

I(X; Y) = 0 if X & Y are statistically independent

Average self-information:

H(X) = Σ_{i=1}^{n} P(x_i)·I(x_i) = −Σ_{i=1}^{n} P(x_i)·log_b P(x_i)

Entropy of the source is the average self-information per source letter, when X
represents the alphabet of possible output letters from a source.

Special case:

P(x_i) = 1/n ⇒ H(X) = −Σ_{i=1}^{n} (1/n)·log₂(1/n) = log₂ n

H(X) ≤ log₂ n — with equality for equiprobable events

Example:
Consider a source that emits a sequence of statistically independent letters, where
each output letter is:

x₁ = 0 with P(x₁) = q
x₂ = 1 with P(x₂) = 1 − q

H(X) = −P(x₁)·log₂ P(x₁) − P(x₂)·log₂ P(x₂) = −q·log₂ q − (1 − q)·log₂(1 − q) = H(q)
H(q) = [bits/letter]
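The binary entropy function H(q) can be tabulated directly (a sketch; the convention H(0) = H(1) = 0 handles the limit q·log₂ q → 0):

```python
import math

def H_bin(q):
    """Binary entropy H(q) in bits/letter; H(0) = H(1) = 0 by convention."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

print(H_bin(0.5))   # 1.0 - maximum, at equiprobable letters
print(H_bin(0.1))   # ~0.469
```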

figure: plot of the binary entropy H(q) versus q

Conditional entropy is the average conditional self-information:

H(X | Y) = Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j)·log_b [1/P(x_i | y_j)] = −Σ_{i=1}^{n} Σ_{j=1}^{m} P(x_i, y_j)·log_b P(x_i | y_j)

→ the uncertainty in X after Y was observed.

Efficiency of the source, η:
R̄ → average number of binary digits per output letter
η = H(X)/R̄

Example:
1. One experiment has four mutually exclusive outcomes x_i, i = 1,4 and a second
experiment has three mutually exclusive outcomes y_j, j = 1,3. Find the mutual
information and the average mutual information.

The joint probabilities P(x_i, y_j) and the marginals
P(x_i) = Σ_{j=1}^{3} P(x_i, y_j),  P(y_j) = Σ_{i=1}^{4} P(x_i, y_j):

x_i \ y_j |  1   |  2   |  3   | P(x_i)
    1     | 0.10 | 0.08 | 0.13 | 0.31
    2     | 0.05 | 0.03 | 0.09 | 0.17
    3     | 0.05 | 0.12 | 0.14 | 0.31
    4     | 0.11 | 0.04 | 0.06 | 0.21
  P(y_j)  | 0.31 | 0.27 | 0.42 | 1

Mutual information: I(x_i; y_j) = log₂ [P(x_i, y_j)/(P(x_i)·P(y_j))]  [bits]

x_i \ y_j |    1    |    2    |    3
    1     |  0.057  | −0.065  | −0.0022
    2     | −0.0759 | −0.6135 |  0.334
    3     | −0.9426 |  0.5197 |  0.1047
    4     |  0.7568 | −0.5033 | −0.556

Average mutual information:

I(X; Y) = Σ_{i=1}^{4} Σ_{j=1}^{3} P(x_i, y_j)·I(x_i; y_j) ≈ 0.068

If y_j, j = 1,3 are the 3 possible outcomes of the source (letters):

H(Y) = −Σ_{j=1}^{3} P(y_j)·log₂ P(y_j)

        y_j         |   1    |   2    |   3
P(y_j)              | 0.31   | 0.27   | 0.42
−log₂ P(y_j)        | 1.6897 | 1.8890 | 1.2515
−P(y_j)·log₂ P(y_j) | 0.5238 | 0.5100 | 0.5256

H(Y) = 1.5594 bits

Efficiency = H(Y)/R̄ = 1.5594/3 = 0.5198
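The averages can be checked numerically. The sketch below uses the joint table with P(x₁, y₁) read as 0.10, the value consistent with the marginals P(x₁) = P(y₁) = 0.31 and with every entry of the mutual-information table:

```python
import math

# Joint probabilities P(x_i, y_j): rows x_1..x_4, columns y_1..y_3
P = [[0.10, 0.08, 0.13],
     [0.05, 0.03, 0.09],
     [0.05, 0.12, 0.14],
     [0.11, 0.04, 0.06]]

Px = [sum(row) for row in P]                              # marginals P(x_i)
Py = [sum(P[i][j] for i in range(4)) for j in range(3)]   # marginals P(y_j)

# Average mutual information I(X;Y) = sum P log2 [P/(Px Py)]
I = sum(P[i][j] * math.log2(P[i][j] / (Px[i] * Py[j]))
        for i in range(4) for j in range(3))
print(round(I, 3))    # ~0.068

# Entropy of the output alphabet
H_Y = -sum(p * math.log2(p) for p in Py)
print(round(H_Y, 4))  # ~1.5594
```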
Ex.: Entropy of a binary source with memory
Consider the binary (i.e. two-symbols) first order Markov source described by the
transition diagram.

figure: transition diagram with P(0|0) = 0.95, P(1|0) = 0.05, P(0|1) = 0.45, P(1|1) = 0.55

H(X) = P(0)·H(X | 0) + P(1)·H(X | 1)

H(X | 0) = −[P(0|0)·log₂ P(0|0) + P(1|0)·log₂ P(1|0)] =
= −[0.95·log₂ 0.95 + 0.05·log₂ 0.05] = 0.286
H(X | 1) = −[P(0|1)·log₂ P(0|1) + P(1|1)·log₂ P(1|1)] =
= −[0.45·log₂ 0.45 + 0.55·log₂ 0.55] = 0.993
In general:
P(X = 0) = P(X = 0 | Y = 0)·P(Y = 0) + P(X = 0 | Y = 1)·P(Y = 1)
In our case the stationary probabilities satisfy:
P(0) = P(0)·P(0|0) + P(1)·P(0|1)
P(1) = P(0)·P(1|0) + P(1)·P(1|1)
P(0) + P(1) = 1
⇒ P(0) = 0.9;  P(1) = 0.1 ⇒

⇒ H(X) = 0.9 × 0.286 + 0.1 × 0.993 = 0.357 bits/symbol
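The stationary probabilities and the entropy rate can be computed as a sketch from the transition probabilities alone:

```python
import math

# Transition probabilities of the first-order Markov source
p00, p10 = 0.95, 0.05    # from state 0: P(0|0), P(1|0)
p01, p11 = 0.45, 0.55    # from state 1: P(0|1), P(1|1)

# Stationary distribution from P(0)p10 = P(1)p01 and P(0)+P(1) = 1
P0 = p01 / (p01 + p10)
P1 = 1 - P0
print(P0, P1)            # 0.9, 0.1

def H(*ps):
    """Entropy in bits of a finite distribution."""
    return -sum(p * math.log2(p) for p in ps)

Hx = P0 * H(p00, p10) + P1 * H(p01, p11)
print(round(Hx, 3))      # ~0.357 bits/symbol
```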

Extension codes
The source alphabet for the binary Markov source consists of “0” and “1” and
occur with probability:
P(0) = 0.9 & P(1) = 0.1
Successive symbols are not independent and we can define a new set of code
symbols as binary 2 tuples (extensions code) to take the advantage of this
extension.

Binary 2 Extension
Extension symbol probability
- tuple symbol
00 a P(a) = P(0 / 0) ⋅ P(0) = 0.95 x 0.9 = 0.855
11 b P(b) = P(1 / 1) ⋅ P(1) = 0.55 x 0.1 = 0.055
01 c P(c) = P(0 / 1) ⋅ P(1) = 0.45 x 0.1 = 0.045
10 d P(d) = P(1 / 0) ⋅ P(0) = 0.05 x 0.9 = 0.045

X² = 2nd-order extension of the source X

H(X²) = −[P(a)·log₂ P(a) + P(b)·log₂ P(b) + P(c)·log₂ P(c) + P(d)·log₂ P(d)]

≈ 0.825 bits/extension symbol
≈ 0.412 bits/source symbol (each extension symbol carries two source symbols)

Binary 3 tuple Extension symbol Extension Symbol probability


000 a
100 b
001 c
111 d
110 e
011 f
010 g
101 h

Kraft inequality

A necessary and sufficient condition for the existence of a binary code satisfying the
prefix condition, with code words of lengths n₁ ≤ n₂ ≤ … ≤ n_L, is:

Σ_{k=1}^{L} 2^{−n_k} ≤ 1

(L – the number of code words)
Proof of sufficiency:
figure

Algorithm for constructing a code:

- start with a full binary tree of order n = n_L (the longest code word)
- the tree has 2^{n_L} terminal nodes, and to each node of order k − 1 there are two
nodes of order k connected to it
- select any node of order n₁ as the code word c₁; this eliminates 2^{n−n₁} terminal nodes
- from the remaining nodes select one of order n₂, assign the code word c₂, and
eliminate 2^{n−n₂} terminal nodes
- the process continues until n_L is reached and c_L results

Proof of necessity:

-in the code tree of order n = n_L, the total number of terminal nodes assigned
to all the code words cannot exceed the 2ⁿ terminal nodes of the full tree:

Σ_{k=1}^{L} 2^{n−n_k} ≤ 2ⁿ  |: 2ⁿ  ⇒  Σ_{k=1}^{L} 2^{−n_k} ≤ 1
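The inequality gives a one-line existence test for prefix codes (a sketch for binary codes):

```python
def kraft_sum(lengths):
    """Kraft sum for binary code-word lengths; <= 1 means a prefix code exists."""
    return sum(2.0 ** -n for n in lengths)

print(kraft_sum([1, 2, 3, 3]))   # 1.0  -> a prefix code exists (e.g. 0, 10, 110, 111)
print(kraft_sum([1, 1, 2]))      # 1.25 -> no prefix code with these lengths
```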

Noiseless source coding theorem

Let X be a set of letters (symbols, words) from a DMS (discrete memoryless source) and
H(X) the entropy of the source. The output letters x_k ∈ X, with probabilities p(x_k) of
occurrence, are encoded with code words of length n_k symbols. It is possible to construct
a code that satisfies the prefix condition and has an average length R̄ in the range:

H(X) ≤ R̄ < H(X) + 1

Proof
Lower bound: H(X) ≤ R̄
H(X) − R̄ = Σ_{k=1}^{L} p(x_k)·log₂ [1/p(x_k)] − Σ_{k=1}^{L} n_k·p(x_k) =

= Σ_{k=1}^{L} p(x_k)·log₂ [1/p(x_k)] + Σ_{k=1}^{L} p(x_k)·log₂ 2^{−n_k} =

= Σ_{k=1}^{L} p(x_k)·log₂ [2^{−n_k}/p(x_k)]

Using ln x ≤ x − 1:

H(X) − R̄ = Σ_{k=1}^{L} p(x_k)·log₂ [2^{−n_k}/p(x_k)] = log₂ e · Σ_{k=1}^{L} p(x_k)·ln [2^{−n_k}/p(x_k)] ≤

≤ log₂ e · Σ_{k=1}^{L} p(x_k)·[2^{−n_k}/p(x_k) − 1] = log₂ e · [Σ_{k=1}^{L} 2^{−n_k} − Σ_{k=1}^{L} p(x_k)] ≤ 0

with equality (R̄ = H(X)) only when p(x_k) = 2^{−n_k} for every k.
Upper bound: R̄ < H(X) + 1
n_k = length of the code word k = 1, L;  n_k ∈ N*
Select the integer n_k such that, if the word x_k has probability p(x_k), then:

2^{−n_k} ≤ p(x_k) < 2^{−n_k + 1}

Since Σ_{k=1}^{L} p(x_k) = 1:

Σ_{k=1}^{L} 2^{−n_k} ≤ Σ_{k=1}^{L} p(x_k) ⇒ Σ_{k=1}^{L} 2^{−n_k} ≤ 1

so the Kraft inequality is satisfied and a prefix code with these lengths exists.
Having chosen:
p(x_k) < 2^{−n_k + 1}
log₂ p(x_k) < −n_k + 1 ⇒ n_k < 1 − log₂ p(x_k)

R̄ = Σ_{k=1}^{L} p(x_k)·n_k < Σ_{k=1}^{L} p(x_k) − Σ_{k=1}^{L} p(x_k)·log₂ p(x_k)

R̄ < H(X) + 1

Example: p(x_k) = 1/3 ⇒ 2^{−2} ≤ 1/3 < 2^{−1} ⇒ n_k = 2
Remarks:
Variable length codes that satisfy the prefix condition are efficient length codes for
any DMS with source symbols that are not equiprobable. For the source symbols
being equiprobable it is better to use fixed length codes.
Coding with fixed-length words

L = possible number of distinct symbols of the source
R = number of bits per symbol

2^{R−1} < L ≤ 2^R
R − 1 < log₂ L ≤ R ⇒ log₂ L ≤ R < log₂ L + 1

R = log₂ L,        if L = 2^k
R = ⌊log₂ L⌋ + 1,  if L ≠ 2^k

With p(x_i), i = 1, L the probability of occurrence of event x_i:

H(X) = −Σ_{i=1}^{L} p(x_i)·log₂ p(x_i)

H(X) ≤ log₂ L ≤ R ⇒ R ≥ H(X)

Condition for noiseless coding; H(X) – entropy of the source
Example: L=4
i xi p(xi) -log2p(xi) - p(xi)log2p(xi) Code Length
1 A 1/2 1 1/2 00 2
2 B 1/4 2 1/2 01 2
3 C 1/8 3 3/8 10 2
4 D 1/8 3 3/8 11 2
Σ p(x_i) = 1;  H(X) = 1/2 + 1/2 + 3/8 + 3/8 = 1.75

R = log₂ L = log₂ 2² = 2
log₂ L ≤ R < log₂ L + 1 ⇒ 2 ≤ R < 3, and R = 2 also satisfies R ≥ H(X) = 1.75. Choose R = 2.


Efficiency:
H ( x) 1.75
η= = = 0.875
R 2
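The fixed-length bookkeeping for this example is easily verified (a sketch):

```python
import math

def fixed_length_bits(L):
    """Bits per symbol for a fixed-length code over L symbols: R = ceil(log2 L)."""
    return math.ceil(math.log2(L))

p = [1/2, 1/4, 1/8, 1/8]                       # the L = 4 example above
H = -sum(pi * math.log2(pi) for pi in p)
R = fixed_length_bits(len(p))
print(H, R, H / R)   # 1.75, 2, efficiency 0.875
```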

Block coding of J symbols with N bits per block

R̄ = N/J → number of bits per symbol

2^{N−1} < L^J ≤ 2^N, where L^J is the number of distinct blocks

N − 1 < J·log₂ L ≤ N
J·log₂ L ≤ N < J·log₂ L + 1  |: J
log₂ L ≤ N/J < log₂ L + 1/J

N = J·log₂ L,        if L^J = 2^K
N = ⌊J·log₂ L⌋ + 1,  if L^J ≠ 2^K

Efficiency: η = H(X)/R̄ = J·H(X)/N
Example:
For L = 5:
log₂ L = log₂ 5 = 2.32193
log₂ L ≤ R < log₂ L + 1;  2.32 ≤ R < 3.32. Choose R = 3.

i xi p(xi) -log2p(xi) - p(xi) log2 p(xi) Code I Length ni Code II


1 A 1/2 1 1/2 000 3 001
2 B 1/4 2 1/2 001 3 010
3 C 1/8 3 3/8 010 3 011
4 D 1/16 4 1/4 011 3 100
5 E 1/16 4 1/4 100 3 101
H(X) = 1.875

H(X) ≤ log₂ L < log₂ L + 1

1.875 ≤ 2.32 ≤ R < 3.32 ⇒ R = 3 verifies the condition.
For blocks: L = 5, J = 2
J·log₂ L ≤ N < J·log₂ L + 1
4.64 ≤ N < 5.64
Choose N = 5 ⇒ R̄ = N/J = 5/2 = 2.5
L^J = 5² = 25 distinct blocks
2^N = 2⁵ = 32 distinct code words, enough for the 25 blocks
Blocks  p(block)  −log₂ p(block)  −p(block)·log₂ p(block)  Codes
AA 1/4 2 1/2 00001
AB 1/8 3 3/8 00010
AC 1/16 4 ¼ .
AD . . .
AE . . .
BA . . . .
BB . . . .
BC . . . .
BD . . . .
B E..EE . . . .

H(x) = 1.875

H ( x) 1.875
Efficiency: η = R
=
2.5
= 0.75

Huffman algorithm (1952)

Remarks:

1. It is optimum (R̄ minimum)
2. All code words satisfy the prefix condition
3. The encoded signal is uniquely and instantaneously decodable

If J symbols form a block:

J·H(X) ≤ R̄_J < J·H(X) + 1, where R̄_J is the average number of binary digits per block

H(X) ≤ R̄_J/J < H(X) + 1/J

R̄ = R̄_J/J → average number of bits per symbol

H(X) ≤ R̄ < H(X) + 1/J

letter p(xi) Code I Code II Code III


x1 1/2 1 0 0
x2 1/4 00 10 01
x3 1/8 01 110 011
x4 1/8 10 111 111

Code I: 0 0 1 0 0 1 can be parsed as x₂ x₄ x₃ or as x₂ x₁ x₂ x₁:
we don’t know for sure how to group the digits; the code is ambiguous (not uniquely decodable)

Code II: it’s uniquely and instantaneous decodable

figure

Code III: it’s uniquely but not instantaneously decodable


figure

x_i   p(x_i)  −log₂ p(x_i)  Code I   Length(n_i)  Code II  Length(n_i)
x₁    0.35    1.5146        0        1            00       2
x₂    0.3     1.7370        10       2            01       2
x₃    0.2     2.3219        110      3            10       2
x₄    0.1     3.3219        1110     4            110      3
x₅    0.04    4.6439        11110    5            1110     4
x₆    0.005   7.6439        111110   6            11110    5
x₇    0.005   7.6439        111111   6            11111    5

figure

Code I:

H(X) = −Σ_{i=1}^{7} p(x_i)·log₂ p(x_i) = 2.11 bits
R̄ = Σ_{i=1}^{7} p(x_i)·n_i = 2.21
⇒ η = H(X)/R̄ = 2.11/2.21 = 0.954

Code II:

H(X) = 2.11 bits;  R̄ = Σ_{i=1}^{7} p(x_i)·n_i = 2.21;  η = H(X)/R̄ = 0.954
(for this source the two Huffman trees give the same average length)
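A minimal Huffman construction can reproduce the average length for this source. The heap-of-merged-sets sketch below is one way to implement the algorithm (the notes build the tree graphically); every merge pushes all symbols under the merged node one bit deeper:

```python
import heapq
import math

def huffman_lengths(probs):
    """Code-word lengths from Huffman's algorithm (repeatedly merge the two
    least probable nodes; ties broken by insertion order)."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    count = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # each covered symbol gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, count, s1 + s2))
        count += 1
    return lengths

p = [0.35, 0.3, 0.2, 0.1, 0.04, 0.005, 0.005]
n = huffman_lengths(p)
Rbar = sum(pi * ni for pi, ni in zip(p, n))
H = -sum(pi * math.log2(pi) for pi in p)
print(round(Rbar, 2), round(H, 2))   # 2.21, 2.11
```

Different tie-breaking can give different trees (as with Code I and Code II above), but the average length of a Huffman code is always the same.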
Letter  p(x_i)  −log₂ p(x_i)  Code  Length
x₁      0.45    1.152         1     1
x₂      0.35    1.515         01    2
x₃      0.2     2.322         00    2

figure: code tree

H(X) = −Σ_{i=1}^{3} p(x_i)·log₂ p(x_i) = 1.513 bits
R̄ = Σ_{i=1}^{3} p(x_i)·n_i = 1.55
⇒ η = H(X)/R̄ ≈ 0.976


Or code II figure

Letter pair p(xi ,yj) -log2p(xi, yj) Code Length ni


x1 x1 0.2025 … 10 2
x1 x2 0.1575 … 000 3
x2 x1 0.1575 … 010 3
x2 x2 0.1225 … 011 3
x1 x3 0.09 … 110 3
x3 x1 0.09 … 0010 4
x2 x3 0.07 … 0011 4
x3 x2 0.07 … 1110 4
x3 x3 0.04 … 1111 4

H(X²) = 2·H(X) ≈ 3.026 bits
R̄ = Σ p·n_i ≈ 3.067 bits/letter pair (= 1.534 bits/letter, an improvement over 1.55)
⇒ η = H(X²)/R̄ ≈ 0.986

Coding for analog sources

An analog source emits a waveform x(t) that is a sample function of a stochastic
process X(t).
If X(t) is a stationary stochastic process:
• the autocorrelation function is Φxx(τ)
• the power density spectrum is Φxx(f)
If X(t) is a stationary stochastic process that is band-limited, then Φxx(f) = 0 for |f| > f_max.
For such signals we can use the Sampling Theorem:

X(t) = Σ_{n=−∞}^{+∞} X(n/f_s) · sin[πf_s(t − n/f_s)] / [πf_s(t − n/f_s)]

where: f_s = 2·f_max — Nyquist criterion

f_max — highest frequency of the signal
f_s — sampling frequency
Type of encoding: represent each sample (discrete amplitude level) by a
sequence of binary digits. Hence for L levels the number of binary digits is:

R = log₂ L,        for L = 2^k
R = ⌊log₂ L⌋ + 1,  for L ≠ 2^k

If the levels are not equally probable and the probabilities of the output levels are
known, then we use Huffman coding (entropy coding).
If the levels are not equally probable and the probabilities are not known, they can be
estimated from the source output before encoding.
Quantization means both compression of data and distortion of the waveform (loss
of signal fidelity).
Minimization of the distortion can be achieved with PCM, DPCM and DM.

PCM-Pulse Code Modulation

x(t) — a sample function emitted by the source
x(n) — samples taken at a sample rate f_s ≥ 2·f_max
In PCM, each sample of the signal is quantized to one of 2^R amplitude levels and
the rate of the source is R·f_s [bits/s].

x̃_n = x_n + q_n — mathematical model of quantization
- x̃_n — quantized value of x_n
- q_n — quantization noise

Uniform Quantizer
Input-output characteristic of a uniform quantizer (e.g. a 7-bit quantizer):

pdf — probability density function; the quantization noise is uniform over one step:

p(q) = 1/Δ,  −Δ/2 ≤ q ≤ Δ/2

Δ = 2^{−R} — quantization step size (for a unit dynamic range)

MSE — mean square error, ε(q²):

ε(q²) = ∫_{−Δ/2}^{Δ/2} p(q)·q² dq = (1/Δ)·∫_{−Δ/2}^{Δ/2} q² dq = (1/Δ)·[q³/3]_{−Δ/2}^{Δ/2} = Δ²/12 = 2^{−2R}/12

In dB:

ε(q²)|dB = 10·log₁₀(2^{−2R}/12) = −20·R·log₁₀ 2 − 10·log₁₀ 12 ≈ −6R − 10.8 dB

ex: for R = 7 bits,

ε(q²)|dB = −52.8 dB
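The Δ²/12 figure can be checked by simulation. The sketch below assumes a mid-rise uniform quantizer over a unit dynamic range [0, 1), matching Δ = 2^{-R}:

```python
import random

random.seed(0)
R = 7                       # bits per sample
delta = 2 ** -R             # quantization step for a unit dynamic range

# Quantize uniformly distributed samples and measure the noise power E[q^2]
N = 200_000
err2 = 0.0
for _ in range(N):
    x = random.random()                       # amplitude in [0, 1)
    xq = (int(x / delta) + 0.5) * delta       # mid-rise uniform quantizer
    err2 += (x - xq) ** 2
mse = err2 / N

print(mse, delta ** 2 / 12)   # both ~2^-14/12 ~ 5.1e-6, i.e. ~ -52.8 dB
```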

figure: uniform vs. non-uniform quantization characteristics

A uniform quantizer provides the same spacing between successive levels
throughout the entire dynamic range of the signal.
A better approach is to have more closely spaced levels at low signal amplitudes
and more widely spaced levels at large amplitudes ⇒ non-uniform quantizer.
A classic non-uniform quantizer is the logarithmic compressor (Jayant, 1974, for speech
processing):
y — magnitude of the output
x — magnitude of the input
µ — compression factor

y = log(1 + µx) / log(1 + µ)

ex: µ = 255; R = 7 ⇒ ε(q²) = −77 dB

A non-uniform quantizer is made from a non-linear device that compresses the signal,
followed by a uniform quantizer.
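The compressor and its inverse (the expander applied at the receiver) can be sketched directly from the formula, here with µ = 255:

```python
import math

mu = 255.0

def compress(x):
    """mu-law compression of a normalised magnitude x in [0, 1]."""
    return math.log(1 + mu * abs(x)) / math.log(1 + mu)

def expand(y):
    """Inverse of the compressor."""
    return ((1 + mu) ** y - 1) / mu

print(compress(1.0))                    # 1.0: full scale maps to full scale
print(round(compress(0.01), 3))         # small inputs are boosted before uniform quantization
print(round(expand(compress(0.3)), 6))  # ~0.3: the expander undoes the compressor
```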

DPCM – Differential Pulse Code Modulation

In PCM each sample is encoded independently. However, most sources
sampled at the Nyquist rate or faster exhibit significant correlation between successive
samples (the average change between successive samples is small).
Exploiting this redundancy ⇒ a lower rate for the output.
Simple solution: encoding the differences between successive samples.
Refinement: predict the current sample from the previous "p" samples.

x̂_n = Σ_{i=1}^{p} a_i·x_{n−i}

{a_i} — prediction coefficients
x_n — current sample
x̂_n — predicted sample
MSE criterion for computing the {a_i} coefficients:

ε_p = E(e_n²) = E[(x_n − x̂_n)²] = E[(x_n − Σ_{i=1}^{p} a_i·x_{n−i})²] =

= E(x_n²) − 2·Σ_{i=1}^{p} a_i·E(x_n·x_{n−i}) + Σ_{i=1}^{p} Σ_{j=1}^{p} a_i·a_j·E(x_{n−i}·x_{n−j})

The source is stationary with Φ(n) — autocorrelation function.
If the source is wide-sense stationary, it results:

ε_p = Φ(0) − 2·Σ_{i=1}^{p} a_i·Φ(i) + Σ_{i=1}^{p} Σ_{j=1}^{p} a_i·a_j·Φ(i − j)

Minimization of ε_p with respect to the prediction coefficients {a_i} results in a set of
linear equations:

Σ_{i=1}^{p} a_i·Φ(i − j) = Φ(j);  j = 1, 2, …, p  — Yule-Walker equations

If Φ(n) is unknown a priori, it can be estimated by the relation:

Φ̂(n) = (1/N)·Σ_{i=1}^{N−n} x_i·x_{i+n};  n = 0, 1, 2, …, p
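The Yule-Walker system is small enough to solve directly. Below is a sketch: `solve` is a hypothetical Gauss-Jordan helper (not part of the notes), and the autocorrelation Φ(n) = ρ^|n| of a first-order autoregressive source is an illustrative input, for which the optimal order-2 predictor is a₁ = ρ, a₂ = 0:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (hypothetical helper)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Autocorrelation of a first-order AR source: phi(n) = rho^|n| (illustrative)
rho, p = 0.9, 2
phi = lambda n: rho ** abs(n)
A = [[phi(i - j) for j in range(1, p + 1)] for i in range(1, p + 1)]
b = [phi(j) for j in range(1, p + 1)]
a = solve(A, b)
print([round(x, 6) for x in a])   # the predictor recovers the AR(1) coefficient: [0.9, 0.0]
```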
x_n — sampled value;                        e_n — prediction error
x̃_n — quantized signal;                     ẽ_n — quantized prediction error
x̂_n — prediction from quantized samples;    q_n — quantization error
Encoder:

Decoder:

e_n = x_n − x̂_n = x_n − Σ_{i=1}^{p} a_i·x̃_{n−i}

q_n = ẽ_n − e_n = ẽ_n − (x_n − x̂_n) = ẽ_n + x̂_n − x_n = x̃_n − x_n

The quantized value x̃_n differs from the input x_n only by the quantization error q_n,
independently of the predictor used. Therefore the quantization errors do not
accumulate.
Improvement in the quality of the estimate ⇒ inclusion of the linearly filtered past
values of the quantized error:

x̂_n = Σ_{i=1}^{p} a_i·x̃_{n−i} + Σ_{i=1}^{m} b_i·ẽ_{n−i}

figure Encoder:

Decoder

figure

Adaptive PCM and DPCM

Many real sources are quasistationary. The variance and autocorrelation functions of
the source output vary slowly with time. The PCM and DPCM encoders are
designed on the basis that the source output is stationary. The efficiency and
performance can be improved by having them adapted to the slowly time-variant
statistics of the source. The quantization error q n has a time-variant variance.

The dynamic range of q_n can be reduced by using an adaptive quantizer:

Δ_{n+1} = Δ_n·M(n), where:

Δ_n is the step size of the quantizer for processing sample x(n)
M(n) is a multiplication factor depending on the quantizer level occupied by the sample x(n)

            PCM                    DPCM
bits:   2     3     4          2     3     4
M(1)   0.60  0.83  0.80       0.80  0.90  0.90
M(2)   2.20  1.00  0.80       1.60  0.90  0.90
M(3)         1.00  0.80             1.25  0.90
M(4)         1.50  0.80             1.70  0.90
M(5)               1.20                   1.20
M(6)               1.60                   1.60
M(7)               2.00                   2.00
M(8)               2.40                   2.40

In DPCM the predictor can also be made adaptive when the source output is
nonstationary. The predictor coefficients can be changed periodically to reflect the
changing signal statistics of the source. A short-term estimate of the
autocorrelation function of x_n replaces the ensemble correlation function. The
predictor coefficients are passed along to the receiver together with the quantized error ẽ_n.

Transmitting this side information raises the bit rate, so part of the advantage of the
decreased bit rate produced by using a quantizer with fewer bits is lost.

An alternative is to use a predictor at the receiver that computes its own
prediction coefficients from ẽ_n and x̃_n.

Delta Modulation (DM)


Adaptive Delta Modulation (ADM)

Delta Modulation is a simplified form of DPCM in which a 2-level (1-bit) quantizer
is used in conjunction with a fixed first-order predictor:

x̂_n = x̃_{n−1}
⇒ x̂_n = x_{n−1} + q_{n−1}
e_n = x_n − x̂_n;  q_n = ẽ_n − e_n = x̃_n − x_n

The estimated value of x_n is the previous sample x_{n−1} modified by the quantization
noise q_{n−1}. The difference equation x̃_n = x̃_{n−1} + ẽ_n represents an integrator with input ẽ_n.

An equivalent realization of the one-step predictor is an accumulator with an input
equal to the quantized error ẽ_n.

In general the quantized error signal is scaled by some value, say Δ₁, called the step
size.

The encoder shown in the figure approximates the waveform x(t) by a linear
staircase function. The waveform must change slowly relative to the sampling rate; it
results that the sampling rate must be about 5 times greater than (or equal to) the Nyquist
rate.

figure: DM encoder — sampler and comparator feeding a 1-bit quantizer (output ẽ_n to the
transmitter), with the one-step predictor x̂_n = x̃_{n−1} realized by a unit delay z⁻¹ in the
feedback loop

figure: DM decoder — accumulator (z⁻¹ feedback) followed by a low-pass filter (LPF)

figure: equivalent realization — encoder with an accumulator and step size Δ₁ (ẽ_n = ±1);
decoder with an accumulator scaled by Δ₁, followed by the LPF

Jayant (1970) — adaptive step size:

Δ_n = Δ_{n−1}·K^{ẽ_n·ẽ_{n−1}},  K ≥ 1;  ẽ_n = ±1

Continuous variable slope (CVSD):

Δ_n = α·Δ_{n−1} + K₁, if ẽ_n, ẽ_{n−1}, ẽ_{n−2} have the same sign
Δ_n = α·Δ_{n−1} + K₂, otherwise

K₁ >> K₂ > 0;  0 < α < 1
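A fixed-step DM loop can be sketched in a few lines. The ±1 quantizer and the particular step value are the sketch's assumptions; the input is heavily oversampled so that the staircase can track it without slope overload:

```python
import math

def delta_modulate(x, step):
    """1-bit DM: transmit only sign bits, reconstruct by accumulation."""
    approx, bits, recon = 0.0, [], []
    for sample in x:
        bit = 1 if sample >= approx else -1   # 2-level quantizer output
        approx += bit * step                  # accumulator (one-step predictor)
        bits.append(bit)
        recon.append(approx)
    return bits, recon

# Slowly varying input (heavily oversampled sine) and a small step
x = [math.sin(2 * math.pi * n / 200) for n in range(400)]
bits, recon = delta_modulate(x, step=0.04)
err = max(abs(a - b) for a, b in zip(x[20:], recon[20:]))
print(err)   # small: the staircase tracks the waveform
```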

Linear Predictive Coding LPC

The source is modeled as a linear system (filter) which, when excited by an
appropriate signal, gives the observed output.
Instead of sending the samples of the source waveform, the parameters of the linear
system are transmitted along with the appropriate excitation signal.

- The source output is sampled at a rate > the Nyquist rate:

{x_n}, n = 0, N−1

- The sample sequence is assumed to have been generated by an all-pole filter with:

H(z) = G / (1 − Σ_{k=1}^{p} a_k·z^{−k})

Excitation functions: - impulse


- sequence of impulses
- white noise with unit variance

The difference equations for the filter:

x_n = Σ_{k=1}^{p} a_k·x_{n−k} + Σ_{k=1}^{p} b_k·v_{n−k}, or

x_n = Σ_{k=1}^{p} a_k·x_{n−k} + G·v_n;  n = 0, N−1

In general, the observed source output does not satisfy the difference equation, only
its model does.
If the input is a white-noise sequence or an impulse, we may form an estimate
(prediction) of x_n:

x̂_n = Σ_{k=1}^{p} a_k·x_{n−k};  n > 0

The error of the observed value x_n with respect to the predicted value x̂_n is:

e_n = x_n − x̂_n = x_n − Σ_{k=1}^{p} a_k·x_{n−k}

The filter coefficients are chosen to minimize the mean square error:

ξ_p = E(e_n²) = E[(x_n − Σ_{k=1}^{p} a_k·x_{n−k})²] =

= φ(0) − 2·Σ_{k=1}^{p} a_k·φ(k) + Σ_{k=1}^{p} Σ_{m=1}^{p} a_k·a_m·φ(k − m)

where φ(m) is the autocorrelation function of the sequence x_n.

ξ_p is identical to the MSE of the predictor used in DPCM, and its minimization yields
the Yule-Walker equations.
To completely specify the filter H(z), the filter gain G has to be specified:

E[(G·v_n)²] = G²·E(v_n²) = G² = E[(x_n − Σ_{k=1}^{p} a_k·x_{n−k})²] = ξ_p

where ξ_p is the residual MSE obtained after substituting the optimum prediction
coefficients (from the Yule-Walker equations).

⇒ ξ_p = G² = φ(0) − Σ_{k=1}^{p} a_k·φ(k)

Usually the time autocorrelation function of the source output is unknown, and we
use the estimate:

φ̂(n) = (1/N)·Σ_{i=1}^{N−n} x_i·x_{i+n};  n = 0, 1, …, p

Yule-Walker equations ⇒ Σ_{i=1}^{p} a_i·φ(i − j) = φ(j);  j = 1, p

with φ(j) replaced by φ̂(j).

Matrix form of the Yule-Walker equations:

[Φ]·a] = φ]

[Φ] ∈ R^{p×p}, with elements Φ_ij = φ̂(i − j)
a] ∈ R^p — column vector of the predictor coefficients
φ] ∈ R^p — column vector with elements φ̂(i)

Example: p = 4;

      | φ̂(0)  φ̂(1)  φ̂(2)  φ̂(3) |
Φ =   | φ̂(1)  φ̂(0)  φ̂(1)  φ̂(2) |   — Toeplitz matrix
      | φ̂(2)  φ̂(1)  φ̂(0)  φ̂(1) |
      | φ̂(3)  φ̂(2)  φ̂(1)  φ̂(0) |

A recursive algorithm for solving this Toeplitz system is due to Durbin and Levinson
(1959):

1) Start with ξ̂₀ = φ̂(0) and a₁₁ = φ̂(1)/φ̂(0)

2) For i = 1, p:

a_ii = [φ̂(i) − Σ_{k=1}^{i−1} a_{i−1,k}·φ̂(i − k)] / ξ̂_{i−1}

a_ik = a_{i−1,k} − a_ii·a_{i−1,i−k};  k = 1, i−1

ξ̂_i = (1 − a_ii²)·ξ̂_{i−1}

3) a_k = a_{p,k};  k = 1, p

ξ̂_p = G² = φ̂(0) − Σ_{k=1}^{p} a_k·φ̂(k)

The recursive solution gives the prediction coefficients for all orders less than p.
The residual MSEs ξ̂_i, i = 1, p form a monotone decreasing sequence:

ξ̂_p ≤ ξ̂_{p−1} ≤ … ≤ ξ̂₁ ≤ ξ̂₀

and |a_ii| < 1.

|a_ii| < 1 is the necessary and sufficient condition for all the poles of H(z) to lie
inside the unit circle (stability).
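The Durbin-Levinson steps can be sketched as follows, assuming the autocorrelation lags φ̂(0..p) have already been estimated; the AR(1) autocorrelation φ(n) = ρ^n used as input is illustrative (the order-2 predictor should recover a = [ρ, 0]):

```python
def levinson_durbin(phi, p):
    """Levinson-Durbin recursion for the Yule-Walker equations.

    phi[n] is the autocorrelation at lag n, n = 0..p.
    Returns (a, xi): predictor coefficients a_1..a_p and residual MSE xi_p = G^2.
    """
    a = [0.0] * (p + 1)
    xi = phi[0]
    for i in range(1, p + 1):
        k = (phi[i] - sum(a[j] * phi[i - j] for j in range(1, i))) / xi   # a_ii
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        xi *= (1 - k * k)          # residual MSE shrinks at every order
    return a[1:], xi

rho = 0.9
phi = [rho ** n for n in range(3)]     # illustrative AR(1) autocorrelation
a, xi = levinson_durbin(phi, 2)
print([round(c, 6) for c in a], round(xi, 6))   # [0.9, 0.0], xi = 1 - 0.81 = 0.19
```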

Implementation

Receiver

Notes: - When the source output is stationary, it is enough to transmit the
source parameters only once.
- When the source output is quasi-stationary, new estimates of the
filter parameters must be obtained periodically.

Speech recovering
