DSP Lab Report

T. Venkateswarlu (B070427EC)
V. A. Amarnath (B070032EC)

"DSP Lab Report" by Kurian Abraham, Shanas P. Shoukath, T. Venkateswarlu and V. A. Amarnath is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 India License.
Contents
I Lab Report 1 1
1 z Plane and Pole - Zero Plots 1
1.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 z-Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Pole-Zero Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 FIR Filters 5
2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Responses of FIR Filters . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 First Degree Transfer Function . . . . . . . . . . . . . . . 6
2.2.2 Linear Phase Filters . . . . . . . . . . . . . . . . . . . . . 8
2.2.3 Minimum Phase Filters . . . . . . . . . . . . . . . . . . . 9
3 Linear Convolution 11
3.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4 Exercise 15
II Lab Report 2 16
1 Overlap Save Method 16
1.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Overlap Save Routine . . . . . . . . . . . . . . . . . . . . . . . . 17
3 Inference 20
IV Lab Report 4 37
1 Fast Fourier Transform 37
2 Decimation In Time FFT Algorithm 38
3 Decimation In Frequency FFT Algorithm 39
4 Spectrum Analysis 41
5 FIR Filter Design using Frequency Sampling Method 55
V Lab Report 5 61
1 IIR Filters 61
2 IIR Design through Analog Filters 62
2.1 Butterworth Approximation . . . . . . . . . . . . . . . . . . . . . 62
2.2 Chebyshev Approximation . . . . . . . . . . . . . . . . . . . . . . 64
4 Design of Filters 72
4.1 Butterworth Filter Design . . . . . . . . . . . . . . . . . . . . . . 72
4.1.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . . 72
4.1.2 Design using Butterworth approximation . . . . . . . . . 72
4.1.3 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.4 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2 Chebyshev Filter Design . . . . . . . . . . . . . . . . . . . . . . . 75
4.2.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . . 75
4.2.2 Design using Chebyshev approximation . . . . . . . . . . 75
4.2.3 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2.4 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3.1 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3.2 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.4 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5 Observations 80
5.1 Butterworth Filter . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2 Chebyshev Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.4 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6 Code 96
6.1 Butterworth Filter . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2 Chebyshev Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.3 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2 Quantisation Technique 131
2.1 Uniform Quantization . . . . . . . . . . . . . . . . . . . . . . . . 131
2.2 Non Uniform Quantization . . . . . . . . . . . . . . . . . . . . . 132
2.3 PDF Optimized Non Uniform Quantizer . . . . . . . . . . . . . . 132
2.3.1 Determination of optimum ∆k . . . . . . . . . . . . . . . 133
4 Digital Companding:
Segmentation of Companding Characteristics 137
4.1 Segmentation of A-law Companding Characteristics . . . . . . . 138
4.2 Segmentation of µ-law Companding Characteristics . . . . . . . . 138
5 Observations 140
6 Code 143
Part I
Lab Report 1
Date: 31st December, 2009
where s = σ + jω.

When x(t) is discretized to x(n), t = nT, where T is the sampling period. Therefore

\[ X(z) = \sum_{n=-\infty}^{\infty} x(n)\, z^{-n}, \qquad z = e^{\sigma} e^{j\omega} \]
Pole-Zero Plot
Let

\[ H(z) = \frac{N(z)}{D(z)} \]

Solving N(z) = 0 and D(z) = 0, we obtain the zeros and poles of H(z) respectively.
Stability Criterion: All poles of H(z) should lie within the unit circle.
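This criterion is easy to check numerically. The sketch below uses Python/NumPy as a stand-in for the MATLAB used elsewhere in this report: it finds the poles as roots of the denominator polynomial and tests their magnitudes.

```python
import numpy as np

def is_stable(den):
    """BIBO stability of a causal H(z) = N(z)/D(z): all roots of the
    denominator (coefficients in descending powers of z) inside |z| < 1."""
    poles = np.roots(den)
    return bool(np.all(np.abs(poles) < 1))

print(is_stable([1, 0, 0]))     # True: all poles at the origin -> stable
print(is_stable([1, -1.5]))     # False: pole at z = 1.5 -> unstable
```

The same check applies to the FIR transfer functions in this report, whose denominators have all roots at z = 0.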
1.2 z-Plane
Code
clf;
w = linspace(0,2*pi,100);
r = 1; %plot z plane
z = r*exp(1j*w);
plot (z);
axis equal;
hold on;
n = [1 2 3];
d = [1 0 0];
ze = roots(n); %finding zeros and poles
po = roots(d);
plot (real(ze), imag(ze), 'or');
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('z-Plane');
grid on;
Figure
Figure 1: z-Plane
clf;
w = linspace(-pi,pi,1000);
r = 1;
z = r*exp(1j*w);
plot (z); %plotting z plane
axis equal;
hold on;
z = exp(1j*w);
n = [1, 2, 3, 4];
d = [1, 0, 0, 0];
ze = roots(n);
po = roots(d);
plot (real(ze), imag(ze), 'or'); %plot zeros and poles
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('Pole Zero Plot');
grid on;
Figure
1.4 Inference
For the given transfer function, the pole-zero plot is obtained (see figure 2). Here, we observe that all three zeros are outside the unit circle and there are multiple poles at the origin. Hence, the given transfer function is stable, whereas the inverse transfer function is not stable.
2 FIR Filters
2.1 Concepts
Introduction
Filters with finite impulse responses are called FIR filters. A filter with transfer function H(z) is stable if all its poles lie within the unit circle.

A causal FIR filter can be represented as

\[ H(z) = \sum_{n=0}^{N-1} h(n)\, z^{-n} \tag{Eq. I.2.1} \]

where h(n) is its impulse response. H(z) has (N − 1) zeros and (N − 1) poles, all of which lie at z = 0. Therefore, FIR filters are always stable.
\[ H(e^{j\omega}) = \begin{cases} e^{-j\omega\frac{N-1}{2}} \left[\, 2 \sum_{n=0}^{\frac{N}{2}-1} h(n) \cos \omega\!\left(n - \tfrac{N-1}{2}\right) \right], & N \text{ even} \\[2ex] e^{-j\omega\frac{N-1}{2}} \left[\, h\!\left(\tfrac{N-1}{2}\right) + 2 \sum_{n=0}^{\frac{N-3}{2}} h(n) \cos \omega\!\left(n - \tfrac{N-1}{2}\right) \right], & N \text{ odd} \end{cases} \tag{Eq. I.2.4} \]

From (Eq. I.2.4), the exponential term represents the phase

\[ \phi = -\omega \frac{(N-1)}{2} \]
Hence, any filter with a transfer function satisfying (Eq. I.2.2) and (Eq. I.2.3) is linear phase.
2.2 Responses of FIR Filters
2.2.1 First Degree Transfer Function
H(z) = 1 + z^{-1}
Code
Here we have plotted the real and imaginary parts of H(e^{jω}) versus ω.
clf;
w = linspace(-3*pi,3*pi,1000);
r = 1;
z = r*exp(1j*w);
figure(1);
hold on;
plot (real(z), imag(z)); %plotting z plane
axis ([-2,2,-2,2]);
axis equal;
z = exp(1j*w);
n = [1, 1];
d = [1, 0];
ze = roots(n);
po = roots(d);
plot (real(ze), imag(ze), 'or'); %pole zero plot
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('Pole Zero Plot');
grid on;
figure(2);
h = 1 + 1*z.^(-1); %finding DTFT
subplot (2,1,1);
plot (w,real(h));
grid on;
title ('magnitude response'); %magnitude response
xlabel ('w');
ylabel ('magnitude');
subplot (2,1,2);
plot (w,imag(h));
grid on;
title ('phase response'); %phase response
xlabel ('w');
ylabel ('phase');
Figures
Inference
H(e^{jω}) = 1 + e^{-jω}

The real part of H(e^{jω}) is cos(ω) with a DC offset of 1, and the imaginary part is −sin(ω).
2.2.2 Linear Phase Filters
H(z) = 1 + 2z^{-1} − 3z^{-2} + 2z^{-3} + z^{-4}
Code
Using the code on page 6, and replacing
n = [1, 2, -3, 2, 1];
d = [1, 0, 0, 0, 0];
And then plotting magnitude and phase responses.
Figures
Figure 6: Magnitude and Phase Response
Inference
The frequency response is obtained by taking the DTFT of the transfer function, that is, by evaluating the transfer function once along the unit circle. Zeros pull the magnitude response towards zero and poles push it towards infinity. As ω is varied from −π to π, whenever the product of the Euclidean distances from the zeros is minimum, the magnitude response encounters a local minimum. Similarly, for poles the response has a local maximum. A zero lying on the unit circle makes the response zero, and a pole on the unit circle makes the response infinite, at that ω.
Code
Using the code on page 6, and replacing
n = [1, 1, .25];
d = [1, 0, 0];
And then plotting magnitude and phase responses.
Figures
3 Linear Convolution
3.1 Concepts
\[ y(n) = \sum_{k=-\infty}^{\infty} x(n-k)\, h(k) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k) \]
where x(n) is the input to the LTI system with impulse response h(n), and y(n) is its output.
3.2 Convolution
Code
x = [1 2 3];
y = [3 2 1];
xn = length (x);
yn = length (y);
zn = xn+yn-1;
z = zeros(1, zn);
x1 = [x, zeros(1, yn-1)]; %zero padding
y1 = [y, zeros(1, xn-1)];
for i = 1:zn %convolving
for j = 1:i
z(i) = z(i) + x1(j)*y1(i-j+1);
end
end
z
Result
z = 3 8 14 8 3
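The routine can be cross-checked against a library convolution. This Python/NumPy sketch (standing in for the MATLAB above) mirrors the nested loops and compares the result with numpy.convolve:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([3, 2, 1])

# direct linear convolution, mirroring the nested-loop routine above
z = np.zeros(len(x) + len(y) - 1)
x1 = np.concatenate([x, np.zeros(len(y) - 1)])   # zero padding
y1 = np.concatenate([y, np.zeros(len(x) - 1)])
for i in range(len(z)):
    for j in range(i + 1):
        z[i] += x1[j] * y1[i - j]

print(z)                                # [ 3.  8. 14.  8.  3.]
print(np.array_equal(z, np.convolve(x, y)))   # True
```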
3.3 Illustrations
Unit impulse response (rect(n) ∗ δ(n))
Code
x = 0:1:22;
y = [1,zeros(1,20)];
H = [ 1, 1, 1 ];
xn = length (H); %convolution
yn = length (y);
zn = xn+yn-1;
z = zeros(1, zn);
x1 = [H, zeros(1, yn-1)];
y1 = [y, zeros(1, xn-1)];
for m = 1:zn
for n = 1:m
z(m) = z(m) + x1(n)*y1(m-n+1);
end
end
clf;
plot(x,z);
axis([0,10,0,4]);
grid on;
Figure
Unit step response (rect(n) ∗ u(n))
Using the code on page 11 with
y = ones(1,21);
Figure
Square input
Using the code on page 11 with
a = 0:1:30;
y = square(a);
Figure
Sinusoidal input
Using code from on page 11 with
a = 0:1:30;
y = sin(a);
Figure
4 Exercise
Are all linear phase filters minimum phase?

Consider a linear phase filter. Its zeros fall into one of the following cases:

1. The absolute value of every zero is one ⟺ all of the zeros lie on the unit circle. This violates the condition specified in 2.1 for minimum phase filters.
2. The absolute value of each complex zero is one and the real zeros occur in pairs α and 1/α with α < 1 ⟺ the zero with absolute value 1/α lies outside the unit circle.
3. The absolute value of each real zero is one and the complex zeros have absolute value greater than one ⟺ both complex zeros lie outside the unit circle, violating the minimum phase condition.
4. The absolute values of the zeros are not all equal to one; this is a combination of cases 2 and 3 above ⟺ it also violates the minimum phase condition.

Corollary: For a minimum phase filter to be linear phase, the transfer function should be symmetric and should satisfy Eq. I.4.2. From cases 1–4 above, this is never satisfied.
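The underlying reason is that the zeros of a symmetric (linear phase) FIR filter come in reciprocal pairs, so any zero off the unit circle forces a partner outside it. A quick Python/NumPy check (a stand-in for the report's MATLAB) on the linear phase filter of section 2.2.2:

```python
import numpy as np

# h(n) = [1, 2, -3, 2, 1] -- the symmetric filter from section 2.2.2
zeros = np.roots([1, 2, -3, 2, 1])
mags = np.sort(np.abs(zeros))

# reciprocal pairing: the smallest-magnitude zero pairs with the largest,
# the second smallest with the second largest, and so on
print(np.allclose(mags * mags[::-1], 1.0))   # True
print(mags[-1] > 1.0)                        # True: a zero lies outside |z| = 1
```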
Part II
Lab Report 2
Date: 7th January, 2010
1.2 Overlap Save Routine
Code
%routine for h*x
h=[1,1,1];
x=[3,-1,0,1,3,2,0,1,2,1];
l=3;
%linear convolution result
conv(x,h)
m=length(h);
L=length(x);
%creating blocks of length 'l'
y=[zeros(1,m-1),x];
r=rem(length(y),(l-(m-1)));
y=[y,zeros(1,r)];
if r==0
y=[y,zeros(1,m-1)];
end
res = zeros(1,L+m-1);
k=1;
pos=0;
H=fft(h,l);
while k <= length(res)
temp=zeros(1,l);
for p=0:l-1
temp(p+1)=y(k+p);
end
temp=fft(temp,l);
temp=temp.*H;
temp=ifft(temp,l);
p=1;
while p <= l-m+1
res(p+pos)=temp(p+m-1);
p=p+1;
end
pos=pos+l-m+1;
k=k+l-m+1;
end
res
Output
ans =
3 2 2 0 4 6 5 3 3 4 3 1
res =
3 2 2 0 4 6 5 3 3 4 3 1
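The same overlap-save idea can be written compactly in Python/NumPy (a stand-in for the MATLAB routine above): L-point circular convolutions of overlapping blocks, discarding the first m − 1 aliased samples of each block, checked against direct convolution.

```python
import numpy as np

def overlap_save(x, h, L):
    """Overlap-save block convolution with FFT size L."""
    m = len(h)
    step = L - (m - 1)                 # useful output samples per block
    H = np.fft.fft(h, L)
    y = np.zeros(len(x) + m - 1)
    # prepend m-1 zeros (overlap) and pad the tail for the final block
    xp = np.concatenate([np.zeros(m - 1), x, np.zeros(L)])
    pos = 0
    while pos < len(y):
        block = np.fft.ifft(np.fft.fft(xp[pos:pos + L]) * H).real
        keep = block[m - 1:]           # samples free of circular aliasing
        n = min(step, len(y) - pos)
        y[pos:pos + n] = keep[:n]
        pos += step
    return y

h = [1, 1, 1]
x = [3, -1, 0, 1, 3, 2, 0, 1, 2, 1]
print(np.round(overlap_save(x, h, 3)).astype(int))   # matches conv(x, h)
```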
2 Overlap Add Method
2.1 Concept
The sequence x(n) of length n is the input to an LTI system with impulse response h(n) of length m (m ≪ n). x(n) is partitioned into non-overlapping blocks of finite length l. These blocks are linearly convolved with h(n) one by one, each giving a sequence of length (l + m − 1). Since the original sequence x(n) was partitioned into blocks, the results of adjacent blocks overlap in (m − 1) terms. The last (m − 1) terms of each block's result are therefore added to the first (m − 1) terms of the succeeding block's result, with all other terms retained as such.
2.2 Overlap Add Routine
Code
%routine for h*x
h=[1,1,1];
x=[3,-1,0,1,3,2,0,1,2,1];
l=3;
%linear convolution result
conv(x,h)
m=length(h);
n=length(x);
%creating blocks of length 'l'
x=[x,zeros(1,l-rem(n,l))];
xn = length(x);
k=1;
res=zeros(1,n+m-1);
pos=1;
while k <= xn
temp = zeros(1,l+m-1);
for p=1:l
temp(p)=x(p+k-1);
end
temp=conv(temp,h);
k=k+l;
for p=1:l+m-1
res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;
end;
pos=pos-m+1;
end;
res
Output
ans =
3 2 2 0 4 6 5 3 3 4 3 1
res =
3 2 2 0 4 6 5 3 3 4 3 1
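A Python/NumPy sketch of the same overlap-add scheme (standing in for the MATLAB above), again checked against direct convolution:

```python
import numpy as np

def overlap_add(x, h, L):
    """Overlap-add block convolution: convolve each L-sample block of x
    with h and add the (len(h)-1)-sample tail into the next block's output."""
    m = len(h)
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), L):
        block = np.convolve(x[start:start + L], h)
        y[start:start + len(block)] += block   # tails overlap and add
    return y

h = [1, 1, 1]
x = [3, -1, 0, 1, 3, 2, 0, 1, 2, 1]
print(np.round(overlap_add(x, h, 4)).astype(int))   # matches conv(x, h)
```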
3 Inference
Both the overlap save and overlap add methods are efficient for computing the linear convolution of long sequences. These techniques reduce computational complexity.
References
[1] Emmanuel C. Ifeachor and Barrie W. Jervis, Digital Signal Processing: A
Practical Approach, 2nd Ed., Pearson Education
Part III
Lab Report 3
Date: 14th January, 2010
[Figure: ideal low pass filter response Hd(ω) versus ω, and its impulse response hd(n) versus n]
Evaluating hd(n) from the ideal low pass filter Hd(ω), as shown in figure 16,

\[ h_d(n) = \begin{cases} 2 f_c \dfrac{\sin(n\omega_c)}{n\omega_c}, & n \neq 0,\ -\infty < n < \infty \\[1ex] 2 f_c, & n = 0 \end{cases} \tag{Eq. III.1.2} \]
where fc is the cutoff frequency.
The impulse response plot reveals that hd(n) is symmetrical about n = 0. Hence, the filter will have a linear phase response. But the practical problem is that hd(n) extends over −∞ < n < ∞, i.e. the filter is not FIR.
Eq. III.1.2 derived above gives a non-causal filter. If we take the frequency response Hd(ω) as

\[ H_d(\omega) = \begin{cases} e^{-j\omega\alpha}, & |\omega| \leq \omega_c \\ 0, & \text{otherwise} \end{cases} \]

where −α is the slope of the phase response of Hd(ω), then in the time domain every frequency component undergoes a delay of α.
From Eq. III.1.1 we have

\[ h_d(n) = \begin{cases} \dfrac{\sin(\omega_c (n-\alpha))}{\pi (n-\alpha)}, & n \neq \alpha \\[1.5ex] \dfrac{\omega_c}{\pi}, & n = \alpha \end{cases} \tag{Eq. III.1.3} \]

The causal FIR filter is obtained by windowing Eq. III.1.3 with a window w(n) that exists from 0 to N − 1, with

\[ \alpha = \frac{(N-1)}{2} \]
Using a rectangular window introduces ripples and overshoots near the transition region of the frequency response. This is called the Gibbs phenomenon.

To minimize the ripples we need to use windows with smoother transitions, and to get a close approximation to Hd(ω) we need to retain as many coefficients of hd(n) as possible. Several window functions are available for obtaining an FIR filter from hd(n). The selection of the window is based on the required stopband attenuation and the order of the filter.
1.2 Window Functions
Name     | Time domain sequence [0 ≤ n ≤ N−1]                            | k | Min. stopband att.
---------|---------------------------------------------------------------|---|-------------------
Bartlett | 1 − 2|n − (N−1)/2| / (N−1)                                    | 4 | −25 dB
Hanning  | (1/2) [1 − cos(2πn/(N−1))]                                    | 4 | −44 dB
Hamming  | 0.54 − 0.46 cos(2πn/(N−1))                                    | 4 | −53 dB
Blackman | 0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1))               | 6 | −74 dB
For a stopband attenuation of 50 dB, the Hamming window can be used. The required filter order N is given by

\[ N = \frac{4 f_s}{\text{transition width}} \]

Substituting values, we obtain N = 81. (Here we have assumed fc = fp.)
1.3.2 Design
1.3.3 Code
end
temp=conv(temp,hd);
k=k+l;
for p=1:l+m-1
res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;
end;
pos=pos-m+1;
end;
%plotting
subplot(2,1,2);
plot(res)
title('Unit Impulse Response');
xlabel('Time\rightarrow');
ylabel('Amp\rightarrow');
axis([950,1150,-.1,.3]);
[Figure: low pass filter magnitude response |H(e^jω)| in dB, and phase response angle(H(e^jω)), versus frequency ω]
1.3.5 LPF in action
[Figures: unit impulse input and the LPF's unit impulse response; step input and step response]
[Figures: exponential input and the exponential response; square input and the square response]
From the Fourier series of a square wave, we know that it contains its fundamental frequency and its odd harmonics. So when it is passed through a low pass filter, all frequency components beyond the cutoff frequency are attenuated, and we obtain a distorted square wave as output.
[Figures: sinusoidal inputs at fc = 500 Hz and fc = 1.8 kHz with their responses; a composite sinusoid (fc1 = 300 Hz, fc2 = 2000 Hz) and the frequency response of its output]
Let us give a composite sinusoid as input, as in figure 23. According to our design, the cutoff frequency is 1000 Hz.
1.4 Designing a Bandpass Filter
1.4.1 Specifications
1.4.2 Design
The bandpass filter is obtained as the difference of two low pass filters having cutoff frequencies ωc1 and ωc2. ωc1 and ωc2 are selected in such a way that the ripples in the passband are as specified. The window function is selected depending on the stopband attenuation.
ωc1 = 0.3π, ωc2 = 0.7π
ωs1 = 0.16π, ωs2 = 0.87π
ωp1 = 0.42π, ωp2 = 0.61π

Stopband attenuation ≤ −60 dB ⟹ k = 6

\[ N = \max\left( \frac{2\pi k}{\omega_{s2} - \omega_{p2}},\ \frac{2\pi k}{\omega_{p1} - \omega_{s1}} \right) = 47 \]

\[ \alpha = \frac{N-1}{2} = 23 \]
∴ The window selected is the Blackman window:

\[ w(n) = 0.42 - 0.5 \cos\frac{2\pi n}{N-1} + 0.08 \cos\frac{4\pi n}{N-1} \]

\[ h_d(n) = \frac{\sin(\omega_{c2}(n-\alpha)) - \sin(\omega_{c1}(n-\alpha))}{\pi (n-\alpha)} \]

∴ The required impulse response is

h(n) = hd(n) w(n)
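The design above can be sketched in Python/NumPy (standing in for the MATLAB code that follows) and sanity-checked: the magnitude response should be near unity at the band centre and deeply attenuated in the stopband.

```python
import numpy as np

# difference of two sinc low pass responses (cutoffs wc1, wc2),
# windowed by a Blackman window; N = 47, alpha = 23 as designed above
N, a = 47, 23
wc1, wc2 = 0.3 * np.pi, 0.7 * np.pi
k = np.arange(N)
n = k - a
d = np.where(n == 0, 1, n)                     # guard the n = alpha sample
hd = (np.sin(wc2 * n) - np.sin(wc1 * n)) / (np.pi * d)
hd[n == 0] = (wc2 - wc1) / np.pi               # limit value at n = alpha
w = 0.42 - 0.5 * np.cos(2 * np.pi * k / (N - 1)) \
    + 0.08 * np.cos(4 * np.pi * k / (N - 1))
h = hd * w

H = np.abs(np.fft.fft(h, 1024))
wgrid = 2 * np.pi * np.arange(1024) / 1024
print(H[np.argmin(np.abs(wgrid - 0.5 * np.pi))] > 0.9)    # passband centre
print(H[np.argmin(np.abs(wgrid - 0.05 * np.pi))] < 1e-2)  # deep stopband
```

Since hd(n) and w(n) are both symmetric about α, the designed h(n) is symmetric and hence linear phase.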
1.4.3 Code
N=min((wc1-ws1),(ws2-wc2));
N=ceil(20*pi/N);
if rem(N,2)~=1
N=N+1;
end;
a=(N-1)/2;
%selecting Blackman Window
k=1:N;
hd =(sin(wc2*(k-1-a))./(pi*(k-1-a))-sin(wc1*(k-1-a))./(pi*(k-1-a)))
W=(.42-(.5*cos(2*pi*(k-1)./(N-1)))+(.08*cos(4*pi*(k-1)./(N-1))));
h(k)=hd .*W;
h((N+1)/2)=(wc2-wc1)/pi;
x=linspace(-pi,pi,1800);
H=fft(h,1800);
%plotting
plot(x,fftshift(20*log(abs(H))));
grid on;
xlabel('Frequency \rightarrow');
ylabel('|H(e^{j\omega})| \rightarrow');
title ('Band Pass Filter Response');
%unit impulse input
x=[1,zeros(1,100)];
%plotting input
subplot(2,1,1);
plot(x)
axis([-100,100,0,2])
xlabel('Time \rightarrow')
ylabel('Amp \rightarrow')
title('Unit Impulse Input')
%convolution through overlap add method given in section 2
l=100;
m=length(h);
n=length(x);
x=[x,zeros(1,l-rem(n,l))];
xn = length(x);
k=1;
res=zeros(1,n+m-1);
pos=1;
while k <= xn
temp = zeros(1,l);
for p=1:l
temp(p)=x(p+k-1);
end
temp=conv(temp,h);
k=k+l;
for p=1:l+m-1
res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;
end;
pos=pos-m+1;
end;
%plotting output
subplot(2,1,2);
kl = linspace(-pi,pi,n+m-1);
plot(res);
xlabel('Time \rightarrow');
ylabel('Amp \rightarrow');
title('Unit Impulse Response');
[Figure: band pass filter magnitude response |H(e^jω)| in dB versus frequency]
1.4.5 BPF in action
[Figures: unit impulse input and the BPF's unit impulse response; step input and step response]
For a step input to a band pass filter, we observe that the DC component of the input is removed, as low frequency components are eliminated.
[Figures: exponential input and its response; square input (fc = 0.45π) and the square response]
[Figures: sinusoidal input sin(0.45πn) (in the passband) and its response; sinusoidal input sin(0.91πn) (in the stopband) and its response]
[Figure: composite sinusoidal input sin(0.91πn) + sin(0.45πn) and the magnitude spectrum of its output]
References
[1] Emmanuel C. Ifeachor and Barrie W. Jervis, Digital Signal Processing: A
Practical Approach, 2nd Ed., Pearson Education
Part IV
Lab Report 4
Date: 28th January, 2010
The twiddle factor satisfies the periodicity property

\[ \omega_N^{\,k+N} = \omega_N^{\,k} \]

In the fast Fourier transform, the sequence is split into two subsequences of length N/2, the N/2-point DFTs of these two subsequences are computed, and the two DFTs are combined to obtain the DFT of the original sequence. This process is repeated by splitting each N/2-point DFT into two subsequences, and so on, until 2-point DFTs are reached. In this process the total number of multiplications reduces to about N log₂N from a value proportional to N².
2 Decimation In Time FFT Algorithm
Here the original signal is split into two subsequences depending on sample position: all the samples at odd indices are grouped into one sequence, and all the samples at even indices into another. Then the N/2-point DFTs of both sequences are found. Let g(n) represent the even-numbered sample sequence and h(n) the odd-numbered sample sequence. These DFTs are combined using the symmetry and periodicity of the twiddle factor ω_N. In this way an 8-point DFT is reduced to two 4-point DFTs, and the number of multiplications can be reduced further by computing two 2-point DFTs for each of these 4-point DFTs.
Properties
1. Input is in bit reversal order
\[ X[k] = \sum_{n=0}^{N-1} x[n]\, \omega_N^{kn}, \qquad k = 0, 1, \ldots, N-1 \]

then separating even and odd terms, substituting n = 2r for n even and n = 2r + 1 for n odd,

\[ X[k] = \sum_{r=0}^{(N/2)-1} x[2r]\, \omega_N^{2kr} + \omega_N^{k} \sum_{r=0}^{(N/2)-1} x[2r+1]\, \omega_N^{2kr} \]

As \( \omega_N^2 = \omega_{N/2} \),

\[ X[k] = \sum_{r=0}^{(N/2)-1} x[2r]\, \omega_{N/2}^{kr} + \omega_N^{k} \sum_{r=0}^{(N/2)-1} x[2r+1]\, \omega_{N/2}^{kr} = G[k] + \omega_N^{k} H[k], \qquad k = 0, 1, \ldots, N-1 \]
The first term is the N/2-point DFT of the even-numbered terms and the second term is the N/2-point DFT of the odd-numbered terms. This procedure can be repeated till 2-point DFTs are reached. Then, making use of the symmetry and periodicity of the twiddle factor, the required N-point DFT is obtained. The flow graph of an 8-point DFT using DIT FFT is shown in figure 31 on the previous page.
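The recombination X[k] = G[k] + ω_N^k H[k] can be verified numerically; this Python/NumPy sketch (a stand-in for the MATLAB used in this report) compares it against a direct DFT of an 8-point sequence:

```python
import numpy as np

# split an 8-point sequence into even/odd samples and recombine their
# N/2-point DFTs with twiddle factors, as in the derivation above
x = np.array([1.0, 2, 1, 1, 1, 2, 3, 1])
N = len(x)

G = np.fft.fft(x[0::2])           # N/2-point DFT of even-indexed samples
H = np.fft.fft(x[1::2])           # N/2-point DFT of odd-indexed samples
k = np.arange(N)
w = np.exp(-2j * np.pi * k / N)   # twiddle factors w_N^k

# G[k] and H[k] are periodic with period N/2
X = np.concatenate([G, G]) + w * np.concatenate([H, H])
print(np.allclose(X, np.fft.fft(x)))   # True
```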
Properties
1. Input is in normal order
Figure 32: DIF FFT
\[ X[k] = \sum_{n=0}^{N-1} x[n]\, \omega_N^{kn}, \qquad k = 0, 1, \ldots, N-1 \]

For the even-numbered samples,

\[ X[2r] = \sum_{n=0}^{N-1} x[n]\, \omega_N^{2nr}, \qquad r = 0, 1, \ldots, (N/2)-1 \]

\[ = \sum_{n=0}^{(N/2)-1} x[n]\, \omega_N^{2nr} + \sum_{n=N/2}^{N-1} x[n]\, \omega_N^{2nr} \]

\[ = \sum_{n=0}^{(N/2)-1} x[n]\, \omega_N^{2nr} + \sum_{n=0}^{(N/2)-1} x[n+N/2]\, \omega_N^{2(n+N/2)r} \]

Using the periodicity of the twiddle factor,

\[ X[2r] = \sum_{n=0}^{(N/2)-1} \left( x[n] + x[n+N/2] \right) \omega_{N/2}^{nr}, \qquad r = 0, 1, \ldots, (N/2)-1 \tag{Eq. IV.3.1} \]

Similarly,

\[ X[2r+1] = \sum_{n=0}^{(N/2)-1} \left( x[n] - x[n+N/2] \right) \omega_N^{n}\, \omega_{N/2}^{nr}, \qquad r = 0, 1, \ldots, (N/2)-1 \tag{Eq. IV.3.2} \]
Eq. IV.3.1 is the N/2-point DFT of the N/2-point sequence obtained by adding the first half and the last half of the input sequence. Eq. IV.3.2 is the N/2-point DFT of the sequence obtained by subtracting the second half of the input sequence from the first half and multiplying the resulting sequence by ω_N^n. Thus Eq. IV.3.1 and Eq. IV.3.2 give the even and odd numbered samples of X[k] respectively. The flow graph of an 8-point DFT using DIF FFT is illustrated in figure 32 on the preceding page.
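Both decimation-in-frequency identities can be checked directly; this Python/NumPy sketch (standing in for the MATLAB elsewhere in this report) tests Eq. IV.3.1 and Eq. IV.3.2 on an 8-point sequence:

```python
import numpy as np

x = np.array([1.0, 2, 1, 1, 1, 2, 3, 1])
N = len(x)
n = np.arange(N // 2)
w = np.exp(-2j * np.pi * n / N)   # twiddle factors w_N^n

X = np.fft.fft(x)
even = np.fft.fft(x[:N//2] + x[N//2:])        # Eq. IV.3.1: sum of the halves
odd = np.fft.fft((x[:N//2] - x[N//2:]) * w)   # Eq. IV.3.2: twiddled difference
print(np.allclose(X[0::2], even), np.allclose(X[1::2], odd))   # True True
```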
4 Spectrum Analysis
Sinusoidal Signal
[Figure: sinusoidal input (f = 500 Hz) and its magnitude spectrum, with spikes at ω = ±π/10]
The DFT of a sine or cosine wave gives only two spikes, at conjugate frequencies, and no other frequency components. This is because a pure sine or cosine function contains only one frequency at all times. The spikes have an amplitude equal to half the amplitude of the original signal.
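This amplitude property is easy to confirm numerically. A Python/NumPy sketch (a stand-in for the MATLAB used in this report) with a sine that completes an integer number of cycles in the window:

```python
import numpy as np

N = 1000
n = np.arange(N)
x = np.sin(2 * np.pi * 50 * n / N)      # exactly 50 cycles in the window
X = np.abs(np.fft.fft(x)) / N           # normalised magnitude spectrum

# two spikes at +/- the signal frequency, each of height A/2 = 0.5
print(np.isclose(X[50], 0.5), np.isclose(X[N - 50], 0.5))   # True True
print(np.delete(X, [50, N - 50]).max() < 1e-9)              # True
```

If the sine does not complete an integer number of cycles, the energy leaks into neighbouring bins instead of forming two clean spikes.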
Pulse Input
Pulse Input
1
0.8
Magnitude →
0.6
0.4
0.2
0
0 100 200 300 400 500 600
Time(n) →
Response
1
0.8
Magnitude →
0.6
0.4
0.2
0
−4 −3 −2 −1 0 1 2 3 4
Frequency(ω) →
Observations: The DFT of a pulse gives a sinc function centred at the origin, with the same amplitude as that of the pulse. If the width of the pulse is small, the sinc is more spread out, and if the pulse is of longer duration, the sinc is sharper.
Square Input
[Figure: square wave input (f = 500 Hz) and its magnitude spectrum]
Observations: From figure 35, we observe that the frequency components present at the fundamental frequency on either side of the origin have an amplitude equal to half the amplitude of the square pulse, and the side components' amplitudes decrease as we move farther from the centre.
of the message signal. If fc is the frequency of the carrier and m(t) is the message signal, then the instantaneous phase of the modulated signal is

\[ \theta = 2\pi \int_0^t f(\tau)\, d\tau \tag{Eq. IV.4.2} \]

For linear FM, f(t) = fc + kt, so from (Eq. IV.4.2),

\[ \theta = 2\pi f_c t + 2\pi k \frac{t^2}{2} \]
[Figure: linear FM signal (slope = 0.4, f = 200 Hz, fs = 500 Hz) and its spectrum]
[Figure: linear FM signal (slope = 24, f = 200 Hz, fs = 500 Hz) and its spectrum]
Sinusoidal Modulation

For a sinusoidal modulating signal m(t) = A cos(2π fm t), from (Eq. IV.4.1),

\[ f = f_c + kA \cos(2\pi f_m t) = f_c + \Delta f \cos(2\pi f_m t) \]

where Δf is the frequency deviation. The modulation index is

\[ \beta = \frac{\Delta f}{f_m} \]

The modulated wave is

\[ x(t) = \cos(2\pi f_c t + \beta \sin(2\pi f_m t)) \]
For other values of β,
[Figure: sine modulated FM (β = 0.2, fc = 50 Hz, fm = 5 Hz) and its spectrum]
[Figures: sine modulated FM for β = 2, fc = 50 Hz, fs = 200 Hz, with fm = 3 Hz and fm = 5 Hz, and their spectra]
Kernel of DFT
[Figure 40: a rectangular pulse and the results of applying the DFT to it four times in succession]
Consider that the DFT of a signal x(t) is X(ω). If you find the DFT of X(ω) and normalize, the resultant signal is the same as x(−t). Taking the DFT of x(−t) once more gives X(−ω), and applying the DFT a fourth time and normalizing gives back x(t). This is because in finding the DFT we are effectively rotating the samples on a unit circle: performing the DFT twice yields a signal 180° out of phase, and performing it four times gives the original signal back. So here, when a pulse function is applied as input, its DFT gives a sinc. Applying the DFT once again and normalizing gives a mirror image of the original pulse. Applying the DFT two more times and normalizing gives back the original rectangular signal (see figure 40).
Inference: Taking the DFT each time may be visualised as rotating the sequence anti-clockwise by π/2 in the time-frequency plane. So in this case the eigenvalues of the DFT are j, −1, −j, 1, and the kernel is j.

F(x(t)) = X(ω)
F²(x(t)) = x(−t)
F³(x(t)) = X(−ω)
F⁴(x(t)) = x(t)
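This cycle can be demonstrated numerically; a Python/NumPy sketch (standing in for the MATLAB code that follows) applies the DFT repeatedly to a pulse:

```python
import numpy as np

x = np.array([1.0, 1, 1, 1, 0, 0, 0, 0])   # a pulse, as in figure 40
N = len(x)

x2 = np.fft.fft(np.fft.fft(x)) / N          # DFT applied twice, normalised
print(np.allclose(x2, x[(-np.arange(N)) % N]))   # True: time reversal x(-n)

x4 = np.fft.fft(np.fft.fft(np.fft.fft(np.fft.fft(x)))) / N**2
print(np.allclose(x4, x))                   # True: back to the original
```

Note the division by N after each pair of transforms; this is the normalisation discussed at the end of this part.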
Code
DIT FFT
clc;
x=[1 2 1 1 1 2 3 1 4 5];
n = length(x);
N=2^(ceil(log2(n)));
z=[x, zeros(1,N-n)];
fft(z,N)
y=bitrevorder(z);
r=log2(N);
for p=1:r
pow=2^p;
wn=exp(-j*2*pi/pow);
o=1;
while(o<=N)
for q=1:pow/2
y(o+pow/2+(q-1))=y(o+pow/2+(q-1))*wn^(q-1);
end
for q=1:pow/2
g=y(o+(q-1))+y(o+pow/2+(q-1));
h=y(o+(q-1))-y(o+pow/2+(q-1));
y(o+(q-1))=g;
y(o+pow/2+(q-1))=h;
end
o=o+pow;
end
end
y
DIF FFT
function [y]=jan28_2(x)
clc;
n = length(x);
N=2^(ceil(log2(n)));
z=[x, zeros(1,N-n)];
a = log2(N);
y = z;
for p = 1:a
temp=[];
w = exp(-j*2*pi/(2^(a-p+1)));
for q = 1:(2^(p-1))
for m = 1:2^(a-p)
% note: jan28_2() is used in the computation of the DFT in other programs
temp = [temp y((2^(a-p))*(q-1)*2+m)+y((2^(a-p))*(q-1)*2+m+2^(a-p))];
end
for m = 1:2^(a-p)
temp = [temp (w^(m-1))*(y((2^(a-p))*(q-1)*2+m)-y((2^(a-p))*(q-1)*2+m+2^(a-p)))];
end
end
y = temp;
end
y=bitrevorder(y);
end
Spectrum Analysis
% sine
figure(1);
pt = 2048;
n = linspace(0,pt-1,pt);
y = sin(2*pi*500*n/10000);
u=length(y)
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sinusoidal Input')
text(80,1.5,'f=500Hz')
axis([0,100,-2,2]);
res = jan28_2(y)./pt;
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sinusoidal Response')
text(2.7,.35,'\omega=\pi/10')
figure(2);
%pulse
pt = 1024;
n = linspace(0,pt-1,pt);
y = [ones(1,100),zeros(1,500)];
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Pulse Input')
res = jan28_2(y)./(100);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)))
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
figure(3);
% square
pt = 2048;
n = linspace(0,pt-1,pt);
y = square(2*pi*500*n/10000);
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('SquareWave Input')
text(80,1.5,'f=500Hz')
axis([0,100,-2,2]);
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)))
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% linear fm
figure(4);
pt = 1024;
n = linspace(0,2,1000);
y = sin(2*pi*(200*n+12*n.^2));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Linear FM')
text(60,1.5,'Slope=24, f=200Hz, f_s=500Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
y = sin(2*pi*(200*n+.2*n.^2));
res = jan28_2(y)./(pt);
figure(5)
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Linear FM')
text(60,1.5,'Slope=.4, f=200Hz, f_s=500Hz')
subplot(2,1,2);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
figure(6);
% narrow fm
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+.2*sin(2*pi*5*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine Modulated FM')
text(60,1.5,'\beta=.2,f_c=50Hz,f_m=5')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% wideband fm
figure(7);
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+2*sin(2*pi*5*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine modulated FM')
text(60,1.5,'\beta=2, f_c=50Hz, f_m=5, f_s=200Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% wideband fm
figure(8);
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+2*sin(2*pi*3*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine modulated FM')
text(60,1.5,'\beta=2, f_c=50Hz, f_m=3, f_s=200Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% fft
figure(9);
pt = 1024;
n = linspace(0,pt-1,pt);
y = [ones(1,100),zeros(1,500)];
subplot(3,2,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
res = jan28_2(y);
res1 = jan28_2(res)./pt;
res2 = jan28_2(res1);
res3 = jan28_2(res2)./pt;
subplot(3,2,2);
plot(n,fftshift(abs(res/100)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,3);
plot(n,abs(res1));
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,4);
plot(n,fftshift(abs(res2/100)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,5);
plot(n,abs(res3));
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
Normalisation of FFT
Let us consider a rectangular pulse of length N in the time domain:

\[ x[n] = \begin{cases} 1, & 0 \leq n < N \\ 0, & \text{otherwise} \end{cases} \]

The DTFT of x[n] gives

\[ X(\omega) = \sum_{n=0}^{\infty} x[n]\, e^{-j\omega n} = \sum_{n=0}^{N-1} e^{-j\omega n} = \frac{1 - e^{-j\omega N}}{1 - e^{-j\omega}} \]

Using L'Hôpital's rule to find X(0),

\[ X(0) = \left. \frac{jN e^{-j\omega N}}{j e^{-j\omega}} \right|_{\omega=0} = N \]

From Parseval's theorem, the energy of any signal x[n] in the time domain and the frequency domain are equal:

\[ \sum_{n=0}^{N-1} x^2[n] = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2 \]

Hence, after taking the Fourier transform, the result is normalised by N.
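Both facts can be checked on the pulse used in the spectrum-analysis code above; a Python/NumPy sketch (a stand-in for the MATLAB elsewhere in this report):

```python
import numpy as np

x = np.concatenate([np.ones(100), np.zeros(500)])   # pulse of length 100
X = np.fft.fft(x)
N = len(x)

print(np.isclose(X[0].real, 100))   # True: X(0) equals the pulse length
# Parseval: time-domain energy equals frequency-domain energy divided by N
print(np.isclose(np.sum(x**2), np.sum(np.abs(X)**2) / N))   # True
```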
References
[1] Alan V. Oppenheim, Ronald W. Schafer and John R. Buck, Discrete-Time
Signal Processing, 2nd Ed., Pearson Education
Date: 4th February 2010

h(n) = (1/N) Σ_{k=0}^{N−1} H(k) e^{j2πnk/N}
     = (1/N) Σ_{k=0}^{N−1} |H(k)| e^{−j2παk/N} e^{j2πnk/N}
     = (1/N) Σ_{k=0}^{N−1} |H(k)| e^{j2π(n−α)k/N}
     = (1/N) Σ_{k=0}^{N−1} |H(k)| [ cos(2π(n−α)k/N) + j sin(2π(n−α)k/N) ]
     = (1/N) Σ_{k=0}^{N−1} |H(k)| cos(2π(n−α)k/N)

since h(n) is real, the imaginary part of h(n) is zero. And for h(n) to be a linear phase filter, h(n) needs to be symmetrical. For N even,

h(n) = (2/N) Σ_{k=0}^{N/2−1} |H(k)| cos(2π(n−α)k/N) + H(0)   (Eq. IV.5.2)
Ideal Low Pass Filter
Here samples are taken from the ideal low pass filter as shown in figure 41. As we observe, the designed FIR filter has ripples in its response wherever there is a sudden transition in the frequency response of the ideal filter. This is because of the sudden transition in the frequency domain, combined with taking few or no samples (as in this case) in the transition band of the response.
(Figure: sampled ideal response and resulting filter magnitude response, magnitude vs. frequency ω.)
Figure 41: Ideal Low Pass Filter with less number of samples in transition band
Now let us compare the resultant FIR filter with the ideal filter by placing the responses one over the other, as in figure 42 on the next page.
(Figure 42: designed FIR filter response overlaid on the ideal response, magnitude vs. frequency ω.)
Let us now redesign the FIR filter by including more samples from the transition band of the response. As we observe in figure 43 on the following page, the designed filter has a smoother transition band and fewer ripples.
Low Pass Filter (smooth transition band)
(Figure: sampled response and resulting filter magnitude response, magnitude vs. frequency ω.)
Figure 43: Ideal Low Pass Filter with more number of samples in transition band
Arbitrary Frequency Response
(Figure: an arbitrary sampled frequency response and the magnitude response of the resulting FIR filter, magnitude vs. frequency ω.)
The frequency sampling method can be used to obtain an FIR filter from any arbitrary frequency response satisfying the symmetry property.
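The method can be sketched compactly: sample the desired magnitude response, attach the linear phase e^{−j2παk/N}, and take the inverse DFT. A minimal Python sketch (the report's own code is MATLAB; the filter length and band edges below are illustrative, not the values of the listing that follows):

```python
import numpy as np

N = 33                      # odd filter length (illustrative)
alpha = (N - 1) / 2         # linear-phase delay
k = np.arange(N)

# ideal low-pass magnitude samples, mirrored for conjugate symmetry
Hmag = np.zeros(N)
Hmag[:6] = 1.0              # DC and positive-frequency passband samples
Hmag[-5:] = 1.0             # mirrored negative-frequency samples

# attach linear phase so that h(n) comes out real and symmetric
H = Hmag * np.exp(-2j * np.pi * alpha * k / N)
h = np.fft.ifft(H)
```

For odd N the phase term e^{−j2παk/N} preserves conjugate symmetry of H(k), so the imaginary part of h vanishes and h(n) = h(N−1−n), exactly the linear-phase condition derived above.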
Code
N = 255;
alpha = (N-1)/2;
H = [ones(1, 50), zeros(1,156), ones(1,50)];
figure(1);
subplot(2,1,1);
w = linspace(-pi, pi, N+1);
plot(w,H);
xlabel('frequency (\omega) \rightarrow');
ylabel('magnitude \rightarrow');
title('Ideal Low Pass Filter');
h = zeros(1, N);
for k = 0 : alpha
for m = 1 : alpha
Part V
Lab Report 5
Date: 4th & 11th February, 2010
1 IIR Filters
Basic Features
IIR digital filters that can be realized are characterised by the recursive equation

y[n] = Σ_{k=0}^{∞} h[k] x[n − k]   (Eq. V.1.1)
     = Σ_{k=0}^{N} b_k x[n − k] − Σ_{k=1}^{M} a_k y[n − k]
where h[k] is the impulse response of the filter, b_k and a_k are the filter coefficients, and x[n] and y[n] are the input and output of the filter.
The transfer function of the IIR filter is

H(z) = (b_0 + b_1 z^{−1} + ... + b_N z^{−N}) / (1 + a_1 z^{−1} + ... + a_M z^{−M})   (Eq. V.1.2)
     = Σ_{k=0}^{N} b_k z^{−k} / (1 + Σ_{k=1}^{M} a_k z^{−k})
From (Eq. V.1.1) it can be noted that the output y[n] depends on the past outputs y[n − k] as well as the present and past input samples x[n − k]; that is, the IIR filter is a feedback system, and hence the name Infinite Impulse Response (IIR) filter.
The transfer function of the IIR filter, H(z), given by (Eq. V.1.2), can also be represented in factored form.
2 IIR Design through Analog Filters
IIR filters are generally designed by first designing their analog counterparts and then transforming them into digital filters. This method is preferred because analog filter design procedures are highly advanced and the classical filters such as Butterworth, Chebyshev and elliptic are readily available.
In IIR filter design, the most common practice is to convert the digital filter specifications into analog low pass prototype filter specifications, to determine the analog low pass filter transfer function H(s) meeting these specifications, and then to transform it into the desired digital filter transfer function. This approach is widely used because
1. Analog approximations are highly advanced.
2. They usually yield closed form solutions.
3. Extensive tables are available for analog filter design.
|H(jΩ)|² = 1 / (1 + (Ω/Ω_C)^{2N})   (Eq. V.2.1)

where N is the order of the filter and Ω_C is the 3 dB cut-off frequency of the low pass filter.
The magnitude response decreases monotonically in both the passband and stopband regions. The response is also said to be maximally flat due to the initial flatness, with a slope of almost zero at DC.
Normalising (Eq. V.2.1),

|H(jΩ)|² = 1 / (1 + Ω^{2N})

⇒ H(jΩ) H(−jΩ) = H(s) H(−s)|_{s=jΩ} = 1 / (1 + (s/j)^{2N})

H(s) H(−s) = 1 / (1 + (−1)^N s^{2N}) = 1 / (1 + (−s²)^N)
Now, the poles of the transfer function are obtained by solving

1 + (−s²)^N = 0

Solving, the poles s_k are obtained as

s_k = e^{j2πk/2N} for N odd,  s_k = e^{j(2k−1)π/2N} for N even   (Eq. V.2.2)

Here, the poles are placed on the circumference of the unit circle (as the function is normalised).
F = f / f_s   (Eq. V.2.3)

where F is the digital frequency, f is the analog frequency and f_s is the sampling frequency.
To find n:

α = 10 log₁₀( 1 + (Ω/Ω_c)^{2n} ) dB   (Eq. V.2.4)

The attenuation in the passband should not exceed α_max and that in the stopband should be at least α_min. From (Eq. V.2.4),

Ω_c = Ω / (10^{α/10} − 1)^{1/2n}   (Eq. V.2.5)

Solving for n in (Eq. V.2.4) by substituting for α and Ω in the passband and stopband,

n = (1/2) log[ (10^{0.1 α_min} − 1) / (10^{0.1 α_max} − 1) ] / log(Ω_s / Ω_p)   (Eq. V.2.6)
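(Eq. V.2.5) and (Eq. V.2.6) can be checked against the specifications used later in section 4.1 (α_max = 0.5 dB, α_min = 20 dB, Ω_p = 1000 rad/sec, Ω_s = 2000 rad/sec). A small sketch in Python (the report's own code is MATLAB):

```python
import math

amax, amin = 0.5, 20.0           # passband / stopband attenuation in dB
wp, ws = 1000.0, 2000.0          # passband / stopband edges in rad/sec

# (Eq. V.2.6): required Butterworth order
n_exact = 0.5 * math.log10((10**(0.1*amin) - 1) / (10**(0.1*amax) - 1)) \
          / math.log10(ws / wp)
n = math.ceil(n_exact)

# (Eq. V.2.5) applied at the stopband edge; this reproduces the cut-off
# value 1263 rad/sec used in the section 4.1 design
wc = ws / (10**(0.1*amin) - 1)**(1.0 / (2*n))
```

This gives n = 5, matching the five poles s3 ... s7 selected in section 4.1.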
Maximal Flatness: all derivatives of |H(jΩ)|² up to but not including the 2N-th derivative are zero at Ω = 0, resulting in "maximal flatness". If the requirement to be monotonic is limited to the passband only and ripples are allowed in the stopband, then it is possible to design a filter of the same order that is flatter in the passband than the "maximally flat" Butterworth. Such a filter is the inverse Chebyshev filter, or Type II Chebyshev filter.
Type I, with equal ripple in the pass band, monotonic in the stop band.
Type II, with equal ripple in the stop band, monotonic in the pass band.
|H(Ω)|² = 1 / (1 + ε² C_N²(Ω/Ω_p))

where C_N(Ω/Ω_p) is a Chebyshev polynomial which exhibits equal ripple in the pass band, N is the order of the polynomial as well as that of the filter, and ε determines the pass band ripple, which in decibels is 10 log₁₀(1 + ε²).
Figure 46: Classical Chebyshev Low Pass Filter Response
Pole Calculation

cos(nu) = 0  ⇒  u = (2m+1)π/2n,  m = 0, 1, 2, ..., n−1

sin(nu) sinh(nv) = ±1/ε

⇒ v = ±(1/n) sinh⁻¹(1/ε) = ±a
Then,

s = cos w

Replacing s → s/ω_p,

s = ω_p cos[ (2m+1)π/2n ± j(1/n) sinh⁻¹(1/ε) ],  m = 0, 1, 2, ..., n−1   (Eq. V.2.10)
Guillemin's Algorithm
Guillemin's algorithm is used to transform a Butterworth filter design into a Chebyshev design in a simple way. The steps to obtain the Chebyshev poles from the Butterworth poles are given below.
The Butterworth angles are

φ_k = ((2k+1)/n)(π/2),  k = 0, 1, 2, ..., (2n−1)

From (Eq. V.2.10),

sin φ_k = cos ψ_k
cos φ_k = sin ψ_k
3 Transforming Analog Filter to Digital Filter
After the analog filter design is completed as per the specifications, the design needs to be converted into the digital domain. The commonly used techniques are
1. Impulse Invariance
2. Bilinear Transformation
In the impulse invariance method, the digital impulse response is obtained by sampling the analog one,

h[n] = h_a(nT)

and H(z) is then obtained by taking the z-transform of h[n].
Consider the case when no pole has multiplicity more than one; H(z) can then be obtained directly from H(s).
H(s) = Σ_{k=1}^{N} A_k / (s − p_k)   (Eq. V.3.1)

h_a(t) = L⁻¹{H(s)}

⇒ h_a(t) = Σ_{k=1}^{N} A_k e^{p_k t} for t ≥ 0, and 0 for t < 0

h[n] = T h_a(nT),  n ≥ 0

⇒ h[n] = Σ_{k=1}^{N} A_k T e^{p_k nT}   (Eq. V.3.2)

⇒ H(z) = Σ_{k=1}^{N} A_k T / (1 − e^{p_k T} z^{−1})
       = Σ_{k=1}^{N} A_k T z / (z − e^{p_k T})   (Eq. V.3.3)
Ignoring the zero at z = 0 in (Eq. V.3.3) simply delays the transfer function:
H(z) = Σ_{k=1}^{N} A_k T / (z − e^{p_k T})   (Eq. V.3.4)

From (Eq. V.3.4) it can be inferred that the analog transfer function H(s) is transformed into the digital transfer function H(z) by replacing p_k in (Eq. V.3.1) with e^{p_k T}. The transformations are:

s → z
H(s) → T H(z)
p_k → e^{p_k T}

With s = jΩ and z = e^{jω},

jΩT = jω  ⇒  Ω = ω/T   (Eq. V.3.5)

Hence, from (Eq. V.3.5) it can be seen that the analog frequency and the digital frequency are linearly related.
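The pole mapping p_k → e^{p_k T} can be illustrated numerically. The pole below is the dominant Chebyshev pole used later in section 4.2 with f_s = 10000 Hz, and a complex pole pair p, p* maps to the real denominator 1 − 2 Re(e^{pT}) z^{−1} + |e^{pT}|² z^{−2}, reproducing the coefficients 1.9549 and 0.9655 that appear there (a sketch):

```python
import cmath

T = 1e-4                         # sampling period (f_s = 10000 Hz)
p = -175.338 + 1016.238j         # analog pole in rad/sec (from section 4.2)

zp = cmath.exp(p * T)            # digital pole under impulse invariance

# denominator coefficients of the resulting real second-order section
a1 = -2 * zp.real                # coefficient of z^-1 (negated feedback term)
a2 = abs(zp)**2                  # coefficient of z^-2
```

The feedback terms of the difference equation are then −a1 and −a2, i.e. +1.9549 y[n−1] − 0.9655 y[n−2].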
z = e^{sT}

⇒ s = (1/T) ln(z)
     = (2/T) [ (z−1)/(z+1) + (1/3)((z−1)/(z+1))³ + (1/5)((z−1)/(z+1))⁵ + ... ]

⇒ s ≈ (2/T) (z−1)/(z+1)

∴ s = (2/T) (1 − z^{−1}) / (1 + z^{−1})   (Eq. V.3.6)

⇒ z = (1 + sT/2) / (1 − sT/2)
Hence,

H(s) → H(z),  s → (2/T) (1 − z^{−1}) / (1 + z^{−1})

With s = jΩ and z = e^{jω},

jΩ = (2/T) (1 − e^{−jω}) / (1 + e^{−jω})

⇒ Ω = (2/T) tan(ω/2)   (Eq. V.3.7)

⇒ ω = 2 tan⁻¹(TΩ/2)   (Eq. V.3.8)
Hence, from (Eq. V.3.7) it can be seen that the analog frequency and the digital frequency are not linearly related (see Figure 47 on page 71). This deviation of the Ω − ω relation from linearity is known as frequency warping.
To compensate for this effect, the analog frequencies are prewarped using (Eq. V.3.7) before applying the bilinear transformation.
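The prewarped band edges quoted at the start of section 4.2.3, 1000.83 and 2006.67 rad/sec, follow directly from (Eq. V.3.7) with f_s = 10000 Hz. A quick check:

```python
import math

T = 1e-4                               # sampling period (f_s = 10000 Hz)

# digital edges from (Eq. V.3.5): omega = Omega * T, then prewarp (Eq. V.3.7)
wp_pre = (2/T) * math.tan(1000.0 * T / 2)   # prewarped passband edge
ws_pre = (2/T) * math.tan(2000.0 * T / 2)   # prewarped stopband edge
```

For small ωT the warping is negligible (tan x ≈ x), which is why the prewarped values differ only slightly from 1000 and 2000 rad/sec.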
Figure 47: Ω − ω plot
4 Design of Filters
4.1 Butterworth Filter Design
4.1.1 Specifications
αmax = 0.5 dB
αmin = 20 dB
Ωp = 1000 rad/sec
Ωs = 2000 rad/sec
f_sampling = 10000 Hz
From these poles, the ones lying in the left half plane are chosen for stability. The poles selected as per this criterion are

s3 = −0.309 + j0.951
s4 = −0.809 + j0.587
s5 = −1
s6 = −0.809 − j0.587
s7 = −0.309 − j0.951

H(s) = 1 / [ (s − s3)(s − s4)(s − s5)(s − s6)(s − s7) ]
     = 1 / [ (s + 1)(s² + 0.618s + 1)(s² + 1.62s + 1) ]
Replacing s → s/Ω_c with Ω_c = 1263,

H(s) = 1263⁵ / [ (s + 1263)(s² + 783s + 1263²)(s² + 2046s + 1263²) ]
Considering it as a cascade of one first order and two second order systems gives

H1(s) = 1263 / (s + 1263)

H2(s) = 1263² / (s² + 783s + 1263²)

H3(s) = 1263² / (s² + 2046s + 1263²)
where

H1(z) = Y1(z) / X(z)   (Eq. V.4.4)

H2(z) = Y2(z) / Y1(z)   (Eq. V.4.5)

H3(z) = Y(z) / Y2(z)   (Eq. V.4.6)
(Eq. V.4.1) and (Eq. V.4.4) give
y1[n] = 0.1263 x[n] + 0.88 y1[n − 1]
(Eq. V.4.2) and (Eq. V.4.5) give
y2[n] = 0.152 y1[n] + 1.9 y2[n − 1] − 0.925 y2[n − 2]
(Eq. V.4.3) and (Eq. V.4.6) give
y[n] = 0.0144 y2[n] + 1.804 y[n − 1] − 0.815 y[n − 2]
H1(z) = (1 + z^{−1}) / (16.83 − 14.83 z^{−1})   (Eq. V.4.7)

H2(z) = (1 + 2z^{−1} + z^{−2}) / (262 − 500 z^{−1} + 242 z^{−2})   (Eq. V.4.8)

H3(z) = (1 + 2z^{−1} + z^{−2}) / (277 − 500 z^{−1} + 226 z^{−2})   (Eq. V.4.9)
(Eq. V.4.7) and (Eq. V.4.4) give

y1[n] = (1/16.83) ( x[n] + x[n−1] + 14.83 y1[n−1] )

(Eq. V.4.8) and (Eq. V.4.5) give

y2[n] = (1/262) ( y1[n] + 2y1[n−1] + y1[n−2] + 500 y2[n−1] − 242 y2[n−2] )

(Eq. V.4.9) and (Eq. V.4.6) give

y[n] = (1/277) ( y2[n] + 2y2[n−1] + y2[n−2] + 500 y[n−1] − 226 y[n−2] )
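A quick sanity check on (Eq. V.4.7)-(Eq. V.4.9): evaluating each section at z = 1 gives its DC gain, which for a low-pass section should be near unity. A sketch using the coefficients as printed:

```python
def dc_gain(num, den):
    # H(1) = (sum of numerator coeffs) / (sum of denominator coeffs)
    return sum(num) / sum(den)

g1 = dc_gain([1, 1], [16.83, -14.83])        # (Eq. V.4.7)
g2 = dc_gain([1, 2, 1], [262, -500, 242])    # (Eq. V.4.8)
g3 = dc_gain([1, 2, 1], [277, -500, 226])    # (Eq. V.4.9)
```

The first two sections have unit DC gain; with the coefficients as printed, the third section evaluates to a DC gain of 4/3, a small overall gain offset rather than an error in shape.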
ε = √(10^{0.1 α_max} − 1) = 0.3493

∵ C_N(Ω/Ω_p) = 1 when Ω = Ω_p.

To find n, put α = α_min and Ω = Ω_s in (Eq. V.2.7), which gives n = 4.
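Both numbers can be reproduced with the standard closed form for the Chebyshev order (not stated explicitly above). A sketch:

```python
import math

amax, amin = 0.5, 20.0           # passband / stopband attenuation in dB
wp, ws = 1000.0, 2000.0          # passband / stopband edges in rad/sec

# ripple factor from the passband spec
eps = math.sqrt(10**(0.1*amax) - 1)

# Chebyshev order: closed form equivalent of solving (Eq. V.2.7)
n_exact = math.acosh(math.sqrt((10**(0.1*amin) - 1) /
                               (10**(0.1*amax) - 1))) / math.acosh(ws / wp)
n = math.ceil(n_exact)
```

This gives ε = 0.3493 and n = 4, one order lower than the Butterworth design for the same specifications.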
∴ The poles are obtained as

s = ω_p cos[ (2m+1)π/8 ± j0.4435 ],  m = 0, 1, 2, 3

s1 = −175.338 + j1016.238
s2 = −423.305 + j419.464
s3 = −423.305 − j419.464
s4 = −175.338 − j1016.238

H1(s) = (1031.25)² / (s² + 350.676 s + (1031.25)²)

H2(s) = (595.93)² / (s² + 846.61 s + (595.93)²)
(Figure: pole plot of the normalised Chebyshev poles in the s-plane.)
4.2.3 IIT Method
H1(s) = j523.249 / (s + 175.338 + j1016.238) − j523.249 / (s + 175.338 − j1016.238)

H1(z) = j0.05232 / (1 − e^{−(0.01753 + j0.10162)} z^{−1}) − j0.05232 / (1 − e^{(−0.01753 + j0.10162)} z^{−1})
      = 0.01043 z^{−1} / (1 − 1.9549 z^{−1} + 0.9655 z^{−2})

⇒ y1[n] = 0.01043 x[n−1] + 1.9549 y1[n−1] − 0.9655 y1[n−2]

H2(s) = j423.303 / (s + 423.303 + j420.9) − j423.303 / (s + 423.303 − j420.9)

H2(z) = j0.04233 / (1 − e^{−(0.04233 + j0.04209)} z^{−1}) − j0.04233 / (1 − e^{(−0.04233 + j0.04209)} z^{−1})
      = 0.0034 z^{−1} / (1 − 1.915 z^{−1} + 0.9187 z^{−2})

⇒ y[n] = 0.0034 y1[n−1] + 1.915 y[n−1] − 0.9187 y[n−2]
Ω_p = 1000.83 rad/sec
Ω_s = 2006.67 rad/sec

∴ The poles are

s1 = −175.484 + j1017.073
s2 = −423.654 + j421.28
s3 = −423.654 − j421.28
s4 = −175.484 − j1017.073

H1(s) = (1032.1)² / (s² + 351.49 s + (1032.1)²)

∴ H1(z) = 0.00251 (1 + 2z^{−1} + z^{−2}) / (1.0202 − 1.99468 z^{−1} + 0.985 z^{−2})

⇒ y1[n] = 0.00246 ( x[n] + 2x[n−1] + x[n−2] ) + 1.9552 y1[n−1] − 0.9654 y1[n−2]

H2(s) = (597.44)² / (s² + 847.35 s + (597.44)²)

∴ H2(z) = 0.00089 (1 + 2z^{−1} + z^{−2}) / (1.043 − 1.998 z^{−1} + 0.9585 z^{−2})

⇒ y[n] = 0.000855 ( y1[n] + 2y1[n−1] + y1[n−2] ) + 1.9156 y[n−1] − 0.919 y[n−2]
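The coefficients of H1(z) above can be reproduced by carrying out the substitution (Eq. V.3.6) on the prewarped second-order section symbolically and collecting powers of z^{−1}. A sketch (T = 10^{−4}, normalised by (2/T)²):

```python
T = 1e-4
K = 2 / T                        # bilinear constant 2/T

w0 = 1032.1                      # prewarped pole magnitude of H1(s)
b = 351.49                       # damping coefficient of H1(s)

# substitute s = K (1 - z^-1)/(1 + z^-1) into w0^2 / (s^2 + b s + w0^2),
# multiply through by (1 + z^-1)^2 and normalise by K^2
a0 = (K*K + b*K + w0*w0) / (K*K)     # constant denominator term
a1 = (-2*K*K + 2*w0*w0) / (K*K)      # z^-1 denominator term
a2 = (K*K - b*K + w0*w0) / (K*K)     # z^-2 denominator term
b0 = (w0*w0) / (K*K)                 # numerator scale on (1 + 2 z^-1 + z^-2)
```

The result matches the printed denominator (1.0202, −1.99468, 0.985) to the quoted precision.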
4.3 Integrator
The analog transfer function of the integrator is given by

H(s) = 1/s

Using impulse invariance (with T = 1), the digital filter obtained has a zero at z = 0; to remove it, a delay is given to the input, so

H(z) = z^{−1} / (1 − z^{−1})

⇒ y[n] = x[n−1] + y[n−1]

Using the bilinear transformation (with T = 1),

H(z) = (1 + z^{−1}) / (2 (1 − z^{−1}))

⇒ y[n] = 0.5 x[n] + 0.5 x[n−1] + y[n−1]
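Both difference equations integrate a unit step into a ramp; the bilinear (trapezoidal) version splits the current sample, offsetting the output by half a step. A short sketch, with a unit sample period assumed:

```python
def iit_integrator(x):
    # y[n] = x[n-1] + y[n-1]  (delayed rectangular rule)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = x[n-1] + y[n-1]
    return y

def blt_integrator(x):
    # y[n] = 0.5 x[n] + 0.5 x[n-1] + y[n-1]  (trapezoidal rule)
    y = [0.0] * len(x)
    y[0] = 0.5 * x[0]
    for n in range(1, len(x)):
        y[n] = 0.5*x[n] + 0.5*x[n-1] + y[n-1]
    return y

step = [1.0] * 6
y_iit = iit_integrator(step)    # 0, 1, 2, 3, 4, 5
y_blt = blt_integrator(step)    # 0.5, 1.5, 2.5, 3.5, 4.5, 5.5
```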
4.4 Comb Filter

H(s) = s (s + ja)(s − ja) / [ (s + b + jc)(s + b − jc)(s + b + jd)(s + b − jd) ]

Taking a = 2, b = 1, c = 1, d = 3,

H(s) = s (s² + 4) / [ (s² + 2s + 2)(s² + 2s + 10) ]
H1(s) = s / (s² + 2s + 2)

H2(s) = (s² + 4) / (s² + 2s + 10)

H1(z) = (1 − z^{−2}) / (5 − 2z^{−1} + z^{−2})

H2(z) = 4 (1 + z^{−2}) / (9 + 6z^{−1} + 5z^{−2})

y1[n] = (1/5) ( x[n] − x[n−2] + 2y1[n−1] − y1[n−2] )

y[n] = (1/9) ( 4y1[n] + 4y1[n−2] − 6y[n−1] − 5y[n−2] )
5 Observations
5.1 Butterworth Filter
IIT Method
Frequency Response
(Figure: time-domain response (amplitude vs n), magnitude response (dB vs ω) and phase response (vs ω) of the IIT-designed Butterworth filter.)
BLT Method
Frequency Response
(Figure: time-domain response (amplitude vs n), magnitude response (dB vs ω) and phase response (vs ω) of the BLT-designed Butterworth filter; the data cursor on the phase plot reads X: 1.29, Y: −1.845.)
(Figure: frequency response (magnitude in dB vs ω) and impulse response (amplitude vs n) of the designed filter.)
Pulse Response
(Figure: pulse responses (amplitude vs n) and the corresponding magnitude spectra for the two designs.)
Sinusoidal Response
(Figure: input sinusoid and the filter output, amplitude vs n.)
Sinusoidal Response
(Figure: input sinusoid and the attenuated filter output, amplitude vs n.)
Sinusoidal Response
(Figure: input and output sinusoids (amplitude vs n) and the output magnitude spectrum; the data cursor reads X: 0.0764, Y: 0.4865.)
Pulse Input
(Figure: pulse input and the responses of the full filter and of the reduced filter, amplitude vs time n.)
Figure 61: Pulse Response (excluding all poles other than the dominant pole)
5.2 Chebyshev Filter
IIT Method
Frequency response
(Figure: overall magnitude response in dB vs frequency (rad/sample), and a zoomed view of the passband ripple.)
BLT Method
Frequency Response
(Figure: overall magnitude response in dB vs frequency (rad/sample), and a zoomed view of the passband between −0.1 and 0.1 rad/sample.)
Pulse Input
(Figure: pulse input and the filter output, amplitude vs time n.)
The rising and falling edges of the pulse excite the system, causing a damped sinusoid at the output, as seen in the impulse response of the system. The frequency of the sinusoid is expected to be the dominant pole frequency, which here is 1016 rad/sec. From Figure 67 on page 89, the frequency of the sinusoid is

f = 2π / ( (166 − 107)/10000 ) = 1064.9 rad/sec

Here we observe a deviation from the expected frequency. This is because of the effect of the non-dominant poles. So let us take the filter with only the dominant pole, the other poles removed. The response of such a system is shown in Figure 68 on page 90.
Pulse input (dominant pole alone)
(Figure: pulse input and the response of the dominant-pole-only system, amplitude vs time n.)
Figure 68: Pulse Response (excluding all poles except the dominant pole)
With only the dominant pole retained, the frequency of the sinusoid is

f = 2π / ( (142 − 80)/10000 ) = 1014 rad/sec

and, as expected, f equals the dominant pole frequency.
Sinusoidal Input (Passband)
(Figure: passband sinusoid at f = 0.075 rad/sample and the filter output, amplitude vs time n.)
Sinusoidal input (stopband)
(Figure: stopband sinusoid at f = 0.75 rad/sample and the strongly attenuated filter outputs, amplitude vs time n.)
Preprocessor
(Figure: input signal, preprocessor output and final filter output, amplitude vs time n.)
Suppose the input signal is first fed to a preprocessing block that has a zero at z = 1 before being given to our system. This zero compensates (nullifies) the effect of the pole of the pulse input.
The z-transform of the input signal is

X(z) = (1 − z^{−N}) / (1 − z^{−1})

and the transfer function of the preprocessing system is

P(z) = 1 − z^{−1}

So the output of the preprocessing unit is

Y1(z) = 1 − z^{−N}
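The cancellation can be seen numerically: a length-N pulse passed through P(z) = 1 − z^{−1} reduces to just two impulses, since Y1(z) = 1 − z^{−N}. A sketch with an illustrative pulse length:

```python
N = 5                               # pulse length (illustrative)
x = [1.0]*N + [0.0]*N               # rectangular pulse input

# first difference implements P(z) = 1 - z^-1
y1 = [x[n] - (x[n-1] if n > 0 else 0.0) for n in range(len(x))]
# y1 is +1 at n = 0 and -1 at n = N, zero elsewhere
```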
5.3 Integrator
IIT Method
(Figure: step response (amplitude vs time n) and frequency responses (amplitude and dB vs rad/sample) of the IIT integrator.)
BLT Method
(Figure: step response (amplitude vs time n) and magnitude response (vs rad/sample) of the BLT integrator.)
Integrator (Sinusoidal Input)
(Figure: input sinusoid and the integrated output, amplitude vs time n.)
Integrator (Square wave Input)
(Figure: square-wave input and the integrated output, amplitude vs time n.)
Integrator (Ramp Input)
(Figure: ramp input and the integrated output, amplitude vs time n.)
5.4 Comb Filter
(Figure: magnitude response (dB) and phase response of the comb filter vs frequency (rad/sample).)
6 Code
6.1 Butterworth Filter
clc;
x=[1,zeros(1,10000)];
%x=[zeros(1,50),ones(1,10),zeros(1,1000)];
%t=linspace(0,999,1000);
%x=sin(2*pi*1.2*t/100)+sin(2*pi*12*t/100);
figure(5);
subplot(3,1,1)
plot(x);
xlabel('n \rightarrow')
ylabel('Amplitude \rightarrow')
title('Frequency Response ')
axis([0,700,-1,1])
y=[];
pt=700;
for i=1:pt
if(i==1)
y(i)=.1263*x(1);
else
y(i)=.88*y(i-1)+.1263*x(i);
end
end
z(1)=0;
for i=2:pt
if(i==2)
z(i)=.0152*y(1);
else
z(i)=1.908*z(i-1)-.925*z(i-2)+.0152*y(i-1);
end
end
w=[];
w(1)=0;
for i=2:pt
if(i==2)
w(i)=.01436*z(1);
else
w(i)=1.804*w(i-1)-.818*w(i-2)+.01436*z(i-1);
end
end
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
subplot(3,1,2)
plot(w1,(fftshift(20*log((abs(fft(w)))))))
%plot(n,(w))
xlabel('n \rightarrow')
ylabel('Magnitude \rightarrow')
%figure(3);
subplot(3,1,3)
plot(w1,(fftshift(angle(fft(w)))))
xlabel('\omega \rightarrow')
ylabel('Phase \rightarrow')
%xlabel('\omega \rightarrow')
%ylabel('Magnitude(db) \rightarrow')
%title('Frequency Response \rightarrow')
pt=700;
y(1)=0;
y(2)=0.01043*0.944*x(1)+1.9549*y(1);
for i=3:pt
y(i)=1.9549*y(i-1)-0.9655*y(i-2)+0.01043*0.944*x(i-1);
end
w(1)=0;
w(2)=(3.4154*10^(-3))*y(1)+1.915*w(1);
for i=3:pt
w(i)=1.915*w(i-1)-0.9187*w(i-2)+(3.4154*10^(-3))*y(i-1);
end
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
%plot(n,(w))
%axis([0,700,-1,1.5])
plot(w1,20*log10(fftshift(abs(fft(w,pt)))))
grid on
xlabel('Frequency in radians per sample')
ylabel('Gain in dB')
title('Frequency response')
else
y(i)=(-2*y(i-1)-1*y(i-2)+4*x(i)-4*x(i-2))/17;
end
end
z=[];
for i=1:pt
if(i==1)
z(i)=(8*y(i))/25;
elseif(i==2)
z(i)=(-18*z(i-1)+8*y(i))/25;
else
z(i)=(-18*z(i-1)-9*z(i-2)+8*y(i)+8*y(i-2))/25;
end
end
w=z;
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
subplot(3,1,2)
plot(w1,(fftshift(20*log10(abs(fft(w))))))
subplot(3,1,1)
plot(w1,abs(fft(w)))
%plot(n,(w))
%figure(3);
subplot(3,1,3)
plot(w1,(fftshift(angle(fft(w)))))
References
[1] Alan V. Oppenheim, Ronald W. Schafer and John R. Buck, Discrete-Time
Signal Processing, 2nd Ed., Pearson Education, 2002
Part VI
Lab Report 6
Date: 18th February, 2010
with A0(z) = 1. The unit impulse response of the m-th order filter is h_m(0) = 1 and h_m(k) = α_m(k), k = 1, 2, ..., m, where m is the degree of the polynomial A_m(z). Let α_m(0) = 1.
If {x(n)} is the input sequence to the filter A_m(z) and {y(n)} is the output sequence,

y(n) = x(n) + Σ_{k=1}^{m} α_m(k) x(n − k)   (Eq. VI.1.1)
Suppose m = 1
Figure 81: First order Lattice Filter
For m = 2,

α2(2) = K2,  α2(1) = K1 (1 + K2)

or

K2 = α2(2),  K1 = α2(1) / (1 + α2(2))
1.3 Filter of order M
As the order-2 filter was developed in 1.2, a filter of any order, say M, can be realized by cascading the lattice stages. The output of the (M − 1)-th lattice stage is the output of the (M − 1)-th order FIR filter,

y(n) = f_{M−1}(n)
The m-th order FIR filter can be expressed using the output f_m(n) of the m-stage lattice,

f_m(n) = Σ_{k=0}^{m} α_m(k) x(n − k),  α_m(0) = 1   (Eq. VI.1.9)

F_m(z) = A_m(z) X(z)

⇒ A_m(z) = F_m(z) / F_0(z)
Similarly, g_m(n) can also be expressed like f_m(n) in (Eq. VI.1.9), but it is seen that the coefficients are in the reverse order to those in (Eq. VI.1.9):

g_m(n) = Σ_{k=0}^{m} β_m(k) x(n − k)

where β_m(k) = α_m(m − k).
G_m(z) = B_m(z) X(z)

⇒ B_m(z) = G_m(z) / X(z)

Now let us see the relation between A_m(z) and B_m(z):

B_m(z) = Σ_{k=0}^{m} β_m(k) z^{−k}
       = Σ_{k=0}^{m} α_m(m − k) z^{−k}
       = z^{−m} Σ_{l=0}^{m} α_m(l) z^{l}

B_m(z) = z^{−m} A_m(z^{−1})

From the above relation, the zeros of the FIR filter with system function A_m(z) are the reciprocals of the zeros of B_m(z). Thus, B_m(z) is called the reverse polynomial of A_m(z).
From (Eq. VI.1.8), dividing each equation by X(z),

A0(z) = B0(z) = 1
A_m(z) = A_{m−1}(z) + K_m z^{−1} B_{m−1}(z)   (Eq. VI.1.10)
B_m(z) = K_m A_{m−1}(z) + z^{−1} B_{m−1}(z)

Given the FIR filter coefficients for the direct form realization, the corresponding lattice filter parameters K_i can be determined. For a filter of M stages, K_M = α_M(M), and to compute K_{m−1} the polynomial A_{m−1} is required. From (Eq. VI.1.10),

A_{m−1}(z) = ( A_m(z) − K_m B_m(z) ) / (1 − K_m²),  m = M, M − 1, ..., 2
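The step-down recursion can be sketched directly. For the direct-form coefficients [1 2 3 4 5] used in the code below, it reproduces the lattice parameters reported in the observations:

```python
def direct_to_lattice(a):
    # a: direct-form coefficients [1, alpha(1), ..., alpha(M)] of A_M(z)
    a = list(a)
    K = []
    while len(a) > 1:
        km = a[-1]                       # K_m = alpha_m(m)
        K.append(km)
        b = a[::-1]                      # B_m(z): reverse polynomial of A_m(z)
        # step-down: A_{m-1} = (A_m - K_m B_m) / (1 - K_m^2)
        a = [(ai - km * bi) / (1 - km * km) for ai, bi in zip(a, b)]
        a = a[:-1]                       # degree drops by one
    return K[::-1]                       # return K_1 ... K_M

K = direct_to_lattice([1, 2, 3, 4, 5])  # -> 0.5, 1/3, 0.25, 5
```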
1.5 Code

clc;
p=[2 3 4 5];
a=[];
b=[];
H=[1,p];
l=length(H);
k=p(l-1);
c=[k];
%exciting the lattice structure with an impulse
l=100;
x=zeros(1,l);
f1=zeros(1,l);
g1=zeros(1,l);
x(1)=1;
f0=x;
g0=x;
n=1;
for n=1:4
    for i=1:l
        if i==1
            f1(i)=f0(i);
            g1(i)=k(n)*f0(i);
        else
            f1(i)=k(n)*g0(i-1)+f0(i);
            g1(i)=k(n)*f0(i)+g0(i-1);
        end
    end
    n=n+1;
    f0=f1;
    g0=g1;
end
stem(f1);
1.6 Observations
Impulse Response
(Figure: stem plot of the lattice filter impulse response, amplitude vs n.)
For the direct filter coefficients [1 2 3 4 5], the corresponding lattice parameters are computed as
k =
1 0.50000 0.33333 0.25000 5.00000
When the lattice filter is excited with an impulse input, the output obtained is equal to the direct filter coefficients, as expected.
Frequency Response
(Figure: magnitude response of the lattice filter vs ω (rad/sec).)
Pulse Response
(Figure: pulse response of the lattice filter, amplitude vs n.)
Sinusoidal Response
(Figure: response to a sinusoid at f = 1 Hz (amplitude vs n), its magnitude spectrum, and responses to further test sequences.)
1.7 Inference
References
[1] John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing:
Principles, Algorithms, and Applications, 3rd Ed., Prentice Hall Interna-
tional, Inc
Part VII
Lab Report 7
Date: 25th February & 11th March, 2010
1 Introduction
At the transmitting side, the message signal is passed through a decorrelator to give a signal which is approximately white. Then only the coefficients of the filter need to be transmitted along with the compressed error signal.

y(n) −→ A(z) −→ ŵ(n)

By transmitting the filter coefficients A(z) and the error signal ŵ(n), the message signal can be reconstructed. It is also possible to reconstruct the message signal without transmitting the error signal ŵ(n): as the error signal is approximately white, it is sufficient to generate locally a white noise of the same power as the error.
At the receiver side,

w(n) −→ 1/A(z) −→ ŷ(n)

where ŷ(n) is the predicted message signal.
Now, to obtain a more accurate prediction, the error should be minimized, i.e. minimize Σ e_n². Consider the mean squared error of (Eq. VII.2.1),

ξ = E[e_n²]

⇒ ∂ξ/∂a1 = 2 E[ e_n (∂e_n/∂a1) ] = 0

∂e_n/∂a1 = y_{n−1}

⇒ E[e_n y_{n−1}] = 0   (Eq. VII.2.4)

E[(y_n + a1 y_{n−1}) y_{n−1}] = 0
E[y_n y_{n−1}] + a1 E[y_{n−1} y_{n−1}] = 0

⇒ R_YY(1) + a1 R_YY(0) = 0   (Eq. VII.2.5)

a1 = −R_YY(1) / R_YY(0)

As R_YY(0) is the maximum value, |a1| < 1 ⇒ the zero is within the unit circle, and hence the filter is minimum phase and is invertible. (Eq. VII.2.4) is known as the orthogonality relation.
yn −→ A(z) −→ en
Figure 91: Direct Form Realization

Let us compute the energy left in the error sequence e_n when the error is minimum:

ξ(a1)_min = E[e_n²]
          = E[e_n (y_n + a1 y_{n−1})]
          = E[e_n y_n] + a1 E[e_n y_{n−1}]
          = E[(y_n + a1 y_{n−1}) y_n]

ξ(a1)_min = R_YY(0) + a1 R_YY(1)   (Eq. VII.2.6)
          = R_YY(0) (1 − a1²)

In matrix form,

[ R_YY(0)  R_YY(1) ] [ 1  ]   [ ξ ]
[ R_YY(1)  R_YY(0) ] [ a1 ] = [ 0 ]

i.e. R a = ξ.
For a second order predictor, the optimal coefficients satisfy

∂ξ(a'1, a'2)/∂a'1 = 0
∂ξ(a'1, a'2)/∂a'2 = 0

The filter A(z) generates the prediction error e_n, and so A(z) is called the prediction error filter. A drawback of the direct form realization is that whenever the filter order is changed or updated, all the coefficients computed earlier also have to be recomputed, so it is computationally intensive.
So we require a filter whose coefficient values do not change when the dimension of the filter changes, except that new coefficients are introduced. The gap function method is used to construct such a filter.
Let g(1) = E[e_n y_{n−1}]; for the first order predictor, g(1) = 0.

g(1) = E[ ( Σ_{m=0}^{1} a_m y_{n−m} ) y_{n−1} ]
     = Σ_{m=0}^{1} a_m E[y_{n−m} y_{n−1}]
     = Σ_{m=0}^{1} a_m R_YY(1 − m)

In general,

g(k) = Σ_{m=0}^{k} a_m R_YY(k − m) = a(k) * R_YY(k)

The gap function is thus a convolution operation:

R_YY(k) −→ {1, a1, a2, ...} −→ g(k)
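The gap property can be sketched numerically: for the optimal first-order predictor, a1 = −R(1)/R(0), and the gap g(1) vanishes. The sketch below uses the exact autocorrelation of an AR(1) process as an illustrative input:

```python
def gap(a, R):
    # g(k) = sum_m a[m] R(k - m), using the symmetry R(-k) = R(k)
    return [sum(a[m] * R[abs(k - m)] for m in range(len(a)))
            for k in range(len(R))]

rho = 0.8
R = [rho**k for k in range(5)]      # R(0), R(1), ... of an AR(1) process

a = [1.0, -R[1] / R[0]]             # optimal first-order predictor {1, a1}
g = gap(a, R)
# g[0] is the minimum error R(0)(1 - a1^2); g[1] is the (zero) gap
```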
Relation between γ and R:

A_R(z) = z^{−1} A(z^{−1})

A'(z^{−1}) = A(z^{−1}) + γ2 z A_R(z^{−1})

⇒ A'_R(z) = z^{−1} A_R(z) + γ2 A(z)   (Eq. VII.2.11)
Representing (Eq. VII.2.10) and (Eq. VII.2.11) in matrix form,

[ A'(z)   ]   [ 1    γ2 z^{−1} ] [ A(z)   ]
[ A'_R(z) ] = [ γ2   z^{−1}    ] [ A_R(z) ]

In general,

[ A_{M+1}(z)   ]   [ 1        γ_{M+1} z^{−1} ] [ A_M(z)   ]
[ A^R_{M+1}(z) ] = [ γ_{M+1}  z^{−1}         ] [ A^R_M(z) ]   (Eq. VII.2.12)
(Eq. VII.2.12) is called the forward Levinson-Durbin recursion and can easily be implemented using lattice structures. The order of the filter can be determined by prediction error analysis: the error decreases as the order of the filter is increased, and the order is fixed by setting a threshold value for the error. Once the threshold is satisfied, the order of the filter need not be increased further.
3 Inverse Filter
4 Observations
Gap Function
(Figure: stem plots of the gap function g(k) after each recursion stage.)
Here, when a gap is added, a new dimension is actually added to the filter. The new component is orthogonal to the rest, so a zero component is obtained.
Linear Prediction
Figure 95: (input and predicted signals at successive predictor orders.)
Predicted signal
Linear predictor: input ⇒ red; predicted with error signal ⇒ blue; predicted with wgn ⇒ green.
449 -196 -538 144 363 -341 -203 373 106 -98 86 227 54 88 37 -136 337
359 -281 -4 772 317 -34 427 610 296 318 90 -192 -17 -583 -1108 -863
-946 -1505 -1381 -899 -1012 -637 -73 -1 594 1112 867 1369 1585 968
1164 1420 709 403 402 181 105 220 540 882 693 538 503 304 -296 -1051
-1640 -2066 -2706 -3267 -3150 -3013 -2460 -2001 -1129 -5 1023 1795
2596 3491 3482 3114 2508 2256 1581 231 -612 -666 -1222 -1628 -1490
-980 -747 -402 284 827 1530 2143 2147 2117 1361 1078 418 -603 -2202
-3100 -3740 -4572 -5140 -4530 -3996 -3005 -1731 -308 1260 2898 4169
5079 5698 5376 4730 3589 2403 760 -658 -1734 -2531 -3103 -3249 -2651
-1758 -870 -17 659 1860 2621 2790 3551 3473 1997 837 380 -1169 -3620
-4224 -5309 -6060 -5987 -5841 -4288 -3035 -1301 703 2439 4205 5564
6520 6693 5896 4825 3262 1413 -429 -1887 -2974 -3409 -3690 -3565 -2645
-1439 -79 1097 1682 2373 3352 3431 3091 3414 2477 450 -891 -1903 -3887
-5615 -5664 -6152 -6432 -5684 -4636 -2713 -569 1284 3138 4577 5609
6757 7069 6248];
The signal is segmented into eight segments of 32 samples each. Each segment is then processed separately.
Speech signal
(Figure: plot of the 256-sample speech signal.)
(Figure: original (left) and predicted (right) waveforms for each of the eight 32-sample segments.)
5 Code
Decorrelator

%decorrelator (levinsons recursion)
clear
clc
x=[1 2 3 4 5 3 2 3 1 5 6 6 3];
%calculating reflection coefficients
a1=length(x);
% g1=a1;
r1=[];
p1=xcorr(x,x);
ord=6;
for i=a1:length(p1)
    r1(i-a1+1)=p1(i);
end
e=[];
a=[];
ar=[];
gam=[];
g=[];
a(1)=1;
a(2)=-r1(2)/r1(1);
gam(1)=a(2);
ar(1)=a(2);
ar(2)=a(1);
e(1)=p1((length(p1)+1)/2);
for i=2:ord
    g=conv(a,p1);
    figure(i)
    stem(g);
    gam(i)=-g(a1+i)/g(a1);
    b=a;
    a=[a 0]+gam(i)*[0 ar];
    ar=gam(i)*[b 0]+[0 ar];
    e(i)=e(i-1)*(1-gam(i-1)^2);
end
i=i+1;
e(i)=e(i-1)*(1-gam(i-1)^2);
gam
a
e
fn=x;
gn=x;
%levinsons recursion
for i=1:ord
    ft=fn;
    fn=[ft 0]+gam(i)*[0 gn];
    gn=[0 gn]+gam(i)*[ft 0];
end
Speech Processing

%speech processing
clc
clf
clear;
x = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 189 237 -3 -188 117 -249 12
figure(1)
plot(x)
grid on
title('Speech signal')
sgmt = zeros(8,32);
for i = 1:8
    for j = 1:32
        sgmt(i,j) = x(j + (i-1)*32);
    end
end
%filter coefficients computation
ab = x;
l = length(ab);
ord = 50;
ryy1 = xcorr(ab);
ryy = [];
for i = l:length(ryy1)
    ryy(i-l+1) = ryy1(i);
end
a = [];
arev = [];
a(1) = 1;
a(2) = -ryy(2)/ryy(1);
arev(2) = a(1);
arev(1) = a(2);
gamma = [];
g = [];
gamma(1) = a(2);
for j = 2:ord
    g = conv(ryy1,a);
    gamma(j) = -g(l+j)/g(l);
    b = a;
    a = cat(2,a,zeros(1,1)) + gamma(j)*cat(2,zeros(1,1),arev);
    arev = gamma(j)*cat(2,b,zeros(1,1)) + cat(2,zeros(1,1),arev);
end
gamma
f0 = ab;
g0 = ab;
gamma(1) = gamma(1);
for j = 1:ord
    for i = 1:l
        if (i == 1)
            f1(i) = f0(i);
            g1(i) = gamma(j)*f0(i);
        else
            f1(i) = f0(i) + gamma(j)*g0(i-1);
            g1(i) = gamma(j)*f0(i) + g0(i-1);
        end
    end
    f0 = f1;
    g0 = g1;
end
%prediction using signal e here using error signal
varin = var(f1);
e = f1;
o = ord;
z = [zeros(o+1,l)];
t = [zeros(o+1,l)];
kr = fliplr(gamma);
for i = 1:l
    z(1,i) = e(i);
end
for i = 1:l
    if i == 1
        for j = 1:o
            z(j+1,i) = z(j,i);
            t(j,i) = kr(j)*z(j+1,i);
        end
        t(o+1,i) = z(o+1,i);
    else
        for j = 1:o
            z(j+1,i) = z(j,i) - kr(j)*t(j+1,i-1);
            t(j,i) = kr(j)*z(j+1,i) + t(j+1,i-1);
        end
        t(o+1,i) = z(o+1,i);
    end
end
for j = 1:l
    pred(j) = z(o+1,j);
end
figure(1)
clf
plot(ab,'b')
grid on
hold on
plot(pred,'ro')
grid on
6 Inference
In speech transmission, Levinson Durbin Algorithm helps in reducing the
bandwidth and power requirement for transmission. Here, the speech is analysed
and the lter coecients along with the error variance is transmitted instead
of the entire signal being transmitted. At receiver, the inverse lter synthesises
the original signal using these lter coecients.
This algorithm can be used for various applications like vocal tract modelling
and speech synthesis.
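The analysis step above rests on Levinson's recursion. As an illustration (in Python rather than the report's MATLAB), the following sketch computes the predictor polynomial, reflection coefficients and prediction-error variance from an autocorrelation sequence; the example sequence and order are arbitrary:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson's recursion: from the autocorrelation sequence r, compute
    the prediction polynomial a (a[0] = 1), the reflection coefficients k
    and the final prediction-error variance E."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = float(r[0])
    k = np.zeros(order)
    for m in range(1, order + 1):
        acc = r[m] + a[1:m] @ r[1:m][::-1]           # correlation of residual with lag m
        km = -acc / E
        k[m - 1] = km
        a[1:m + 1] = a[1:m + 1] + km * a[m - 1::-1]  # order-update of the predictor
        E *= (1.0 - km ** 2)                         # error-variance update
    return a, k, E

# autocorrelation of an example sequence (positive definite by construction)
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
r = np.array([np.dot(s[:len(s) - m], s[m:]) for m in range(4)])
a, k, E = levinson_durbin(r, 3)
```

The coefficients `a` then satisfy the Toeplitz normal equations; transmitting the reflection coefficients (the `gam` of the MATLAB decorrelator plays a similar role) together with the error variance, instead of the raw samples, is the bandwidth saving described above.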
Part VIII
Lab Report 8
Date: 18th March, 2010
1 Introduction
Most real-world signals are continuous-amplitude, time-varying signals, so whenever signal processing is needed, analog-to-digital conversion is essential. The signal is first sampled and then passed to a quantizer. The output of the sampler can be visualised as a weighted impulse train whose weights are determined by the signal amplitude at each sampling instant. The quantizer performs amplitude quantization: it maps the sample amplitudes to a finite set of quantization values.
2 Quantisation Technique
The signal amplitude range is segmented into L segments Ik, where

    Ik = {x : xk ≤ x ≤ xk+1},   1 ≤ k ≤ L

Each segment Ik is then assigned a representative amplitude value, say yk. Either the amplitude levels are encoded and transmitted, or the index k itself is transmitted. At the receiver, the encoded signal is decoded and transformed back to the respective amplitude level. This process inevitably introduces error, since the message signal is approximated by a finite number of quantization levels. This error is called quantization error; for efficient signal reconstruction, the quantization error should be minimized.

2.1 Uniform Quantization
Uniform quantization uses a uniform step size: the difference between successive decision levels is fixed. The step size for the k-th interval is

    ∆k = xk+1 − xk

and in uniform quantization

    ∆1 = ∆2 = . . . = ∆L = ∆
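A uniform quantizer with such equal steps can be sketched as follows (an illustrative Python fragment; the range and number of levels are arbitrary):

```python
import numpy as np

xmax, L = 1.0, 8                 # quantizer range [-xmax, xmax] and number of levels
delta = 2 * xmax / L             # uniform step size: every interval has width delta

def quantize(x):
    # map x to the index k of its interval, then to the representative level yk
    k = np.clip(np.floor((x + xmax) / delta), 0, L - 1)
    return -xmax + (k + 0.5) * delta

x = np.linspace(-1.0, 1.0, 11)
y = quantize(x)
# every sample moves by at most delta/2 to one of the L equispaced levels
```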
2.2 Non Uniform Quantization
Uniform quantizers are not used for speech quantization even though their design is simple and easy to implement. Speech signals have a higher probability of low-amplitude components than of large-amplitude components, so a uniform quantizer would introduce considerably more quantization error.
Non-uniform quantizers are therefore used in such cases. A non-uniform quantizer uses different step sizes in different amplitude regions, thereby minimizing the error: in the small-amplitude region the step size is much smaller than in the high-amplitude regions.
The quantization-error variance is

    σ²Qe = ∫ q² pQ(q) dq
         = Σ_{k=1}^{L} ∫_{xk}^{xk+1} (x − yk)² pX(x) dx        (Eq. VIII.2.1)

For large L, pX(x) may be taken as constant over each interval Ik, so that

    Pk = P(X ∈ Ik) = pX(yk) ∆k                                 (Eq. VIII.2.2)

and the variance becomes

    σ²Qe = Σ_{k=1}^{L} (Pk/∆k) ∫_{xk}^{xk+1} (x − yk)² dx      (Eq. VIII.2.3)
To find the optimum reconstruction levels, set

    ∂σ²Qe/∂yk = 0,   k = 1, 2, . . . , L

    −2 ∫_{xk}^{xk+1} (x − yk) dx = 0

    [x²/2]_{xk}^{xk+1} − yk (xk+1 − xk) = 0

    ⇒ yk = (xk + xk+1)/2                                       (Eq. VIII.2.4)

Hence, the reconstruction levels should lie in the middle of Ik for minimum error variance. Since pX(x) is constant over Ik, this is the same as saying that yk is the centroid of Ik.
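The midpoint condition (Eq. VIII.2.4) is easy to confirm numerically; a small Python sketch (the interval endpoints are arbitrary):

```python
import numpy as np

# For pX constant over an interval [xk, xk1], the per-interval error
# integral of (x - yk)^2 is minimized when yk is the midpoint.
xk, xk1 = 1.0, 3.0
xs = np.linspace(xk, xk1, 100001)   # dense samples standing in for the integral

def mse(y):
    return np.mean((xs - y) ** 2)   # proportional to the integral over [xk, xk1]

ys = np.linspace(xk, xk1, 201)      # candidate reconstruction levels
best = ys[np.argmin([mse(y) for y in ys])]
# best coincides with the midpoint (xk + xk1)/2
```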
Now substitute (Eq. VIII.2.4) in (Eq. VIII.2.3):

    σ²Qe(min) = Σ_{k=1}^{L} (Pk/∆k) ∫_{xk}^{xk+1} (x − yk)² dx
              = Σ_{k=1}^{L} (Pk/∆k) [(x − yk)³/3]_{xk}^{xk+1}
              = Σ_{k=1}^{L} Pk ∆k²/12                          (Eq. VIII.2.5)
Here, ∆k²/12 is the error variance in the interval Ik; that is, the error variance depends only on the length of the interval Ik. Thus it is the parameter ∆k that is to be optimized. For a uniform quantizer, σ²Qe(min) = ∆²/12, where ∆ = ∆k for all k.
Let us define αk = (pX(yk))^{1/3} ∆k. Then

    σ²Qe(min) = (1/12) Σ_{k=1}^{L} αk³

with the constraint Σ_{k=1}^{L} αk = constant; that is,

    Σ_{k=1}^{L} αk = Σ_{k=1}^{L} (pX(yk))^{1/3} ∆k = Σ_{k=1}^{L} ∫_{xk}^{xk+1} (pX(x))^{1/3} dx
We need to find the optimum value of αk under the above constraint; the Lagrange multiplier technique is used to incorporate it:

    ∂/∂αk [ σ²Qe + λ Σ_{k=1}^{L} αk ] = 0,   k = 1, 2, . . . , L

    ∂/∂αk [ (1/12) Σ_{k=1}^{L} αk³ + λ Σ_{k=1}^{L} αk ] = 0

    ⇒ (1/4) αk² + λ = 0 for each k. Summing over k,

    (1/4) Σ_{k=1}^{L} αk² + λL = 0

    ⇒ λ = −(1/4L) Σ_{k=1}^{L} αk²
Thus, at the optimum every αk satisfies (1/4) αk² = −λ, so all the αk are equal:

    αk = (pX(yk))^{1/3} ∆k = constant

Moreover,

    Σ_{k=1}^{L} αk = Σ_{k=1}^{L} (pX(yk))^{1/3} ∆k ≃ ∫_{−xmax}^{xmax} (pX(x))^{1/3} dx ≃ constant

Consequently,

    pX(yk) ∆k³ = constant
    ⇒ Pk ∆k² = constant       (using Pk = pX(yk) ∆k)
From the above result we can infer that intervals with a higher probability of occurrence must have a smaller step size. This agrees with our earlier observation that the smaller amplitudes require smaller step sizes. We have

    σ²Qe = (1/12) Σ_{k=1}^{L} Pk ∆k²   with   Pk ∆k² = constant
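The minimum-variance formula above can be checked by simulation. A Python sketch (Laplacian source and quantizer parameters chosen arbitrarily; for equal steps the sum Σ Pk ∆k²/12 collapses to ∆²/12):

```python
import numpy as np

rng = np.random.default_rng(0)
xmax, L = 6.0, 48
delta = 2 * xmax / L                      # equal step sizes
edges = np.linspace(-xmax, xmax, L + 1)

# Laplacian source (speech-like, mostly small amplitudes), clipped to the range
x = np.clip(rng.laplace(scale=1.0, size=200_000), -xmax, xmax - 1e-9)

k = np.floor((x + xmax) / delta).astype(int)
y = -xmax + (k + 0.5) * delta             # midpoint reconstruction levels yk
err_var = np.mean((x - y) ** 2)           # empirical quantization error variance

Pk = np.histogram(x, bins=edges)[0] / x.size
predicted = np.sum(Pk * delta ** 2) / 12  # (1/12) * sum_k Pk * delta_k^2
# err_var and predicted agree to within a few percent
```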
3 Logarithmic Companding
The companding techniques implemented for different signal distributions differ. Speech signals being approximately exponentially distributed, logarithmic compression leads to a uniform distribution. To maintain the transmission quality of both loud and soft sounds, voice signals require a relatively constant signal-to-quantization-noise ratio over a wide range of amplitude levels, which calls for logarithmic compression.
We have

    σ²Qe = (xmax²/3L²) ∫_{−xmax}^{xmax} pX(x) (dC(x)/dx)^{−2} dx

    ⇒ SNR = σX² / σ²Qe
          = (3L²/xmax²) [ ∫_{−xmax}^{xmax} x² pX(x) dx ] / [ ∫_{−∞}^{∞} pX(x) (dC(x)/dx)^{−2} dx ]

We require a constant SNR, i.e. dC(x)/dx should be inversely proportional to x:

    dC(x)/dx = (kx)^{−1}

where k is a constant. Thus,

    SNR = 3L² / (k² xmax²)

    ⇒ C(x) = (1/k) ln x + d ;   x > 0
where d is the constant of integration. Thus, logarithmic companding results in a constant SNR. Using the condition C(xmax) = xmax,

    C(x) = [ (1/k) ln(|x|/xmax) + xmax ] sgn(x)
Two logarithmic companding laws are in common use:
1. A-law companding
2. µ-law companding

3.1 A-law Companding
The A-law characteristic is linear below the knee at |x| = xmax/A and logarithmic above it (as implemented in the code of section 6):

    C(x) = [ A|x| / (1 + ln A) ] sgn(x),                        |x| ≤ xmax/A
    C(x) = xmax [ (1 + ln(A|x|/xmax)) / (1 + ln A) ] sgn(x),    xmax/A ≤ |x| ≤ xmax

Note: The parameter `A' controls the degree of compression and may be chosen so that large amplitude changes in the input are not reflected at the output. The ratio of maximum to minimum slope is A. The practical value of A is 87.6.
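For illustration, the A-law characteristic can be written as a small Python function (the piecewise formula mirrors the MATLAB companding code in section 6; xmax here is arbitrary):

```python
import numpy as np

def alaw_compress(x, xmax, A=87.6):
    """A-law compressor: linear below the knee at xmax/A, logarithmic above."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    linear = A * ax / (1 + np.log(A))
    # the maximum(., 1) guard keeps log() well-defined where the linear branch applies
    logarithmic = xmax * (1 + np.log(np.maximum(A * ax / xmax, 1.0))) / (1 + np.log(A))
    return np.where(ax < xmax / A, linear, logarithmic) * np.sign(x)

xmax = 4096.0
c = alaw_compress(np.array([-xmax, 0.0, 0.01 * xmax, xmax]), xmax)
# c[3] equals xmax, and the small sample is amplified far more than the large one
```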
3.2 µ-law Companding
The µ-law characteristic is neither strictly logarithmic nor strictly linear anywhere. It is approximately linear at low levels, i.e. µ|x| << xmax (equivalently |x| << µ⁻¹xmax), and approximately logarithmic at high levels, i.e. |x| >> µ⁻¹xmax.

    C(x) = k′ log(1 + µ|x|/xmax) sgn(x)

Applying the condition C(xmax) = xmax,

    k′ = xmax / log(1 + µ)

Hence, the µ-law characteristic is given by

    C(x) = xmax [ log(1 + µ|x|/xmax) / log(1 + µ) ] sgn(x)
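The µ-law characteristic and its inverse (the expander used at the receiver) can be sketched in Python (µ = 255; xmax arbitrary):

```python
import numpy as np

def mulaw_compress(x, xmax, mu=255.0):
    # C(x) = xmax * log(1 + mu*|x|/xmax) / log(1 + mu) * sgn(x)
    return xmax * np.log1p(mu * np.abs(x) / xmax) / np.log1p(mu) * np.sign(x)

def mulaw_expand(c, xmax, mu=255.0):
    # inverse characteristic: |x| = (xmax/mu) * ((1 + mu)**(|c|/xmax) - 1)
    return (xmax / mu) * np.expm1(np.abs(c) / xmax * np.log1p(mu)) * np.sign(c)

xmax = 8159.0
x = np.linspace(-xmax, xmax, 101)
roundtrip = mulaw_expand(mulaw_compress(x, xmax), xmax)
# the expander recovers x, and C(xmax) = xmax as required
```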
4 Digital Companding:
Segmentation of Companding Characteristics
The logarithmic form of the companding laws is slow to compute. In order to implement companding in a digital transmission system, we need a digital form of the companding laws. Digital companding involves sampling an analog signal, converting it to a PCM code and digitally compressing that code. At the receiver end, the compressed PCM code is received, expanded and decoded. Digital companding is obtained by approximating the companding characteristic by linear segments; this process is called segmentation.
Segment 0 1 2 3 4 5 6 7
Step size 32 32 64 128 256 512 1024 2048
Encoding:
1. Determine the bit position of the most significant 1-bit among bits 5 through 11 of the input.
2. If such a 1-bit is found, the segment code becomes that position minus 4.
Otherwise, the segment code is 0.
3. The 4 bits following this 1-bit position determine the quantization code.
If segment code is 0, the quantization code is half the input value.
Decoding:
1. If the segment code is non-zero, multiply quantization code by 2 and add
33. Multiply result by 2 raised to the power of the segment code minus 1.
Segment 0 1 2 3 4 5 6 7
Step size 32 64 128 256 512 1024 2048 4096
The output levels are equispaced; hence the output step size is 128/8 = 16. This leads to segmentation of the curve as shown in the figure. The same procedure can be applied to the negative half of the characteristic. In order to make the segment end points multiples of 2, we add a bias of 33; the motivation behind this step is ease of computation. Thus, the maximum allowable input amplitude reduces to 2^13 − 33 = 8159.
Each segment is further sub-divided into 16 equally spaced quantization intervals (of step size 1 each). In this way, we form an 8-bit compressed code consisting of a sign bit, a 3-bit segment identifier and a 4-bit quantization interval identifier. The 4-bit quantization code identifies the quantization interval within the segment. Since each segment change implies a change of 16, the format of the 8-bit code is
P S2 S1 S0 Q3 Q2 Q1 Q0
Encoding:
1. Add 33 to the absolute value of the input.
2. Determine the bit position of the most significant 1-bit among bits 5 through 12 of the input.
3. Subtract 5 from that position. The resulting number gives the segment code.
4. The 4 bits following the bit position determined in step 2 form the 4-bit quantization code.
5. The sign bit is the same as the sign bit of the input.
Decoding:
1. Multiply the quantization code by 2 (i.e. shift left by 1 position) and add 33. Shift the result left by the segment code, then subtract the bias of 33.
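The biased segment encoding described above can be sketched with integer bit operations. This is an illustrative Python version; the decoder is completed here with the standard midpoint rule (2Q + 33) · 2^segment − 33, which the decoding step above only states in part:

```python
def compress_code(x):
    """Encode a magnitude 0 <= x <= 8158 following the steps above:
    add the bias of 33, locate the most significant 1-bit among bits
    5..12, subtract 5 to get the segment code, and take the next 4 bits
    as the quantization code."""
    v = min(x + 33, 8191)           # bias; clip so the MSB stays within bit 12
    p = v.bit_length() - 1          # position of the most significant 1-bit
    segment = p - 5                 # bit positions 5..12 map to segments 0..7
    quant = (v >> (p - 4)) & 0xF    # the 4 bits following the MSB
    return segment, quant

def expand_code(segment, quant):
    """Decode to the midpoint of the interval and remove the bias."""
    return ((2 * quant + 33) << segment) - 33

# round trip: the error never exceeds half an interval, i.e. 2**segment
for xin in [0, 31, 32, 100, 1000, 4000, 8158]:
    seg, q = compress_code(xin)
    xout = expand_code(seg, q)
```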
5 Observations
A-law Companding
PDF Transformation
[Figure: histograms of the speech-sample amplitudes before (left) and after (right) A-law companding; x-axis Amplitude, y-axis Frequency.]
Companding
[Figure: `Companded output−Alaw': the companded speech signal plotted against t; y-axis Amplitude.]
µ-law Companding
PDF Transformation
[Figure: histograms of the speech-sample amplitudes before (left) and after (right) µ-law companding; x-axis Amplitude, y-axis Frequency.]
Companding
[Figure: `Companded output−ulaw': the companded speech signal plotted against t; y-axis Amplitude.]
In both companded outputs, we observe that the low-amplitude components are amplified more than the high-amplitude components, giving an almost uniform PDF, as required.
Reconstruction
Here a mixed sinusoid is taken as the input to the compander (black) and companded by a µ-law compander (red). The companded output is then used to reconstruct the original input; this expanded signal is shown in blue.
[Figure: `Companding And Reconstruction': input mixed sinusoid (black), companded output (red) and reconstructed signal (blue) plotted against t; y-axis Amplitude.]
6 Code
In MATLAB it is possible to create a uniform PDF directly, but here we have a speech sequence which is exponentially distributed, so we need to transform it to a uniform PDF.
A-law Companding

% A-law speech companding
clc
A = 87.6;
x = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 ...
189 237 -3 -188 117 -249 12 617 -932 -92 970 -497 ...
-592 873 -175 -314 178 -249 149 481 -884 -166 1087 ...
-497 -991 1034 606 -1205 523 644 -706 205 320 -642 ...
512 294 -892 3 688 -525 -298 412 -9 -331 9 -79 11 80 ...
-505 -200 449 -196 -538 144 363 -341 -203 373 106 -98 ...
86 227 54 88 37 -136 337 359 -281 -4 772 317 -34 427 ...
610 296 318 90 -192 -17 -583 -1108 -863 -946 -1505 ...
-1381 -899 -1012 -637 -73 -1 594 1112 867 1369 1585 ...
968 1164 1420 709 403 402 181 105 220 540 882 693 538 ...
503 304 -296 -1051 -1640 -2066 -2706 -3267 -3150 ...
-3013 -2460 -2001 -1129 -5 1023 1795 2596 3491 3482 ...
3114 2508 2256 1581 231 -612 -666 -1222 -1628 -1490 ...
-980 -747 -402 284 827 1530 2143 2147 2117 1361 1078 ...
418 -603 -2202 -3100 -3740 -4572 -5140 -4530 -3996 ...
-3005 -1731 -308 1260 2898 4169 5079 5698 5376 4730 ...
3589 2403 760 -658 -1734 -2531 -3103 -3249 -2651 -1758 ...
-870 -17 659 1860 2621 2790 3551 3473 1997 837 380 -1169 ...
-3620 -4224 -5309 -6060 -5987 -5841 -4288 -3035 -1301 ...
703 2439 4205 5564 6520 6693 5896 4825 3262 1413 -429 ...
-1887 -2974 -3409 -3690 -3565 -2645 -1439 -79 1097 1682 ...
2373 3352 3431 3091 3414 2477 450 -891 -1903 -3887 -5615 ...
-5664 -6152 -6432 -5684 -4636 -2713 -569 1284 3138 4577 ...
5609 6757 7069 6248];
ymax = max(x);
figure(4)
y = x;
n = length(y);
i = 1:n;
plot(i, y);
grid on;
hold on;
C = 0;
for l = 1:length(y)
    if abs(y(l)) < (ymax/A)
        C(l) = A.*abs(y(l))./(1 + log(A)).*sign(y(l));
    else
        C(l) = ymax.*((1 + log((A.*abs(y(l)))/ymax))/(1 + log(A))).*sign(y(l)); % abs() added: the source used y(l), giving log of a negative number for negative samples
    end
end
n = length(C);
i = 1:n;
plot(i, C, 'r');
grid on;
title('Companded output-Alaw')
xlabel('t \rightarrow')
ylabel('Amplitude \rightarrow')
µ-law Companding

% µ-law speech companding
clc
clf
u = 255;
y = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 ...
189 237 -3 -188 117 -249 12 617 -932 -92 970 -497 ...
-592 873 -175 -314 178 -249 149 481 -884 -166 1087 ...
-497 -991 1034 606 -1205 523 644 -706 205 320 -642 ...
512 294 -892 3 688 -525 -298 412 -9 -331 9 -79 11 ...
80 -505 -200 449 -196 -538 144 363 -341 -203 373 106 ...
-98 86 227 54 88 37 -136 337 359 -281 -4 772 317 ...
-34 427 610 296 318 90 -192 -17 -583 -1108 -863 -946 ...
-1505 -1381 -899 -1012 -637 -73 -1 594 1112 867 1369 ...
1585 968 1164 1420 709 403 402 181 105 220 540 882 ...
693 538 503 304 -296 -1051 -1640 -2066 -2706 -3267 ...
-3150 -3013 -2460 -2001 -1129 -5 1023 1795 2596 3491 ...
3482 3114 2508 2256 1581 231 -612 -666 -1222 -1628 ...
-1490 -980 -747 -402 284 827 1530 2143 2147 2117 1361 ...
1078 418 -603 -2202 -3100 -3740 -4572 -5140 -4530 ...
-3996 -3005 -1731 -308 1260 2898 4169 5079 5698 5376 ...
4730 3589 2403 760 -658 -1734 -2531 -3103 -3249 -2651 ...
-1758 -870 -17 659 1860 2621 2790 3551 3473 1997 837 ...
380 -1169 -3620 -4224 -5309 -6060 -5987 -5841 -4288 ...
-3035 -1301 703 2439 4205 5564 6520 6693 5896 4825 ...
3262 1413 -429 -1887 -2974 -3409 -3690 -3565 -2645 ...
-1439 -79 1097 1682 2373 3352 3431 3091 3414 2477 450 ...
-891 -1903 -3887 -5615 -5664 -6152 -6432 -5684 -4636 ...
-2713 -569 1284 3138 4577 5609 6757 7069 6248];
ymax = max(y);
figure(1)
n = length(y);
i = 1:n;
plot(i, y, 'b')
hold on
C = ymax*((log(1 + (u.*(abs(y)./ymax))))./log(1 + u)).*sign(y);
n = length(C);
i = 1:n;
plot(i, C, 'r')
grid on
title('Companded output-ulaw')
xlabel('t \rightarrow')
ylabel('Amplitude \rightarrow')
Reconstruction

t = 0:0.01:4;
x = 8158*(sin(2*pi*2.*t) + sin(2*pi*3*t))./2;
x = x + 33;
len = length(x);
x1 = x./max(x);
plot(x1, 'black')
y = zeros(len, 14);
for i = 1:len
    if x(i) >= 0
        x(i) = ceil(x(i));
        s = 0;
        y(i, :) = [de2bi(x(i), 13) s];
    elseif x(i) < 0
        s = 1;
        x(i) = floor(x(i));
        y(i, :) = [de2bi(-x(i), 13) s];
    end
end
d = y(:, 14);
trial1 = 0;
a = zeros(len, 4);
b = zeros(len, 3);
z = zeros(len, 8);
for j = 1:len
    trial1 = 0;
    for i = 0:13
        if y(j, 13 - i) == 1
            trial1 = 13 - i - 1;
            break;
        end
    end
    trial2 = trial1 - 5;
    if trial2 == 8
        break;
    end
    b(j, :) = de2bi(trial2, 3);
    a(j, :) = y(j, 13 - i - 3:13 - i);
end
z = [a b d];
final = zeros(1, len);
trial3 = zeros(1, 7);
for k = 1:len
    if z(k, 8) == 1
        trial3 = z(k, 1:7);
        final(k) = -1*bi2de(trial3);
    elseif z(k, 8) == 0
        trial3 = z(k, 1:7);
        final(k) = bi2de(trial3);
    end
end
hold on
plot(final/max(final), 'r')
hold off
sgn = z(:, 8); % renamed from `sign' to avoid shadowing the built-in function
segment = z(:, 5:7);
q = z(:, 1:4);
q = [zeros(len, 1) q];
p = bi2de(q);
p = p + 33;
segment = bi2de(segment);
p = p.*2.^segment;
p = p - 33;
new = de2bi(p, 13);
new = [new sgn];
trial4 = zeros(1, len);
trial5 = zeros(1, 14);
for i = 1:len
    if new(i, 14) == 0
        trial5 = new(i, 1:13);
        trial4(i) = bi2de(trial5);
    elseif new(i, 14) == 1
        trial5 = new(i, 1:13);
        trial4(i) = -1.*bi2de(trial5);
    end
end
op = trial4/max(trial4);
hold on
plot(op, 'b')
title('Companding And Reconstruction')
xlabel('t \rightarrow')
ylabel('Amplitude \rightarrow')
hold off
References
[1] Fathima Shifa Akbar, Geetu George, Kala K. and Sneha Sara Suresh, Mini Project Report on A-Law and µ-Law Companding: Principles,