
1. Adaptive Filtering

Adaptive filters are used in a variety of applications where the statistical properties of the signals to be filtered or analysed are not known a priori, or where the signals may be slowly time-varying.
Both FIR and IIR filters can be used in adaptive filtering, but FIR filters are mostly used because (i) they are simple in structure and (ii) they have only adjustable zeros.
In adaptive filtering the adjustable filter parameters are to be optimized.

1.1 System Modeling
Consider an unknown system to be identified. The model is shown below. The system is
modeled with M adjustable coefficients.
[Figure: System identification model. The unknown time-variant system and the FIR filter model are driven by the same input x(n); noise w(n) is added to the unknown system output to form d(n), the FIR filter output ŷ(n) is subtracted from it, and the resulting error signal e(n) drives the adaptive algorithm that adjusts the filter.]
The unknown system and the FIR filter are excited by the same input x(n); the output of the unknown system is y(n) and the FIR filter output is ŷ(n), given by

$$\hat{y}(n) = \sum_{k=0}^{M-1} h(k)\,x(n-k).$$

The error signal is given by


$$e(n) = y(n) - \hat{y}(n).$$
The error signal is minimized using the mean-square-error criterion,

$$\mathcal{E} = \sum_{n=0}^{N}\Big[\,y(n) - \sum_{k=0}^{M-1} h(k)\,x(n-k)\Big]^{2},$$

and the filter coefficients h(k) that minimize this error are selected.
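The notes do not name a particular adaptation algorithm for choosing h(k); a common choice is the LMS (least-mean-squares) gradient update, sketched below in Python. The step size mu, the filter length M, and the example "unknown" system h_true are illustrative values, not taken from the notes.

import numpy as np

def lms_identify(x, d, M=8, mu=0.01):
    # Adapt an M-tap FIR model h so that y_hat(n) = sum_k h(k) x(n-k) tracks d(n).
    h = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]      # x(n), x(n-1), ..., x(n-M+1)
        y_hat = h @ xn                     # FIR filter output y_hat(n)
        e[n] = d[n] - y_hat                # error signal e(n)
        h += 2 * mu * e[n] * xn            # LMS (steepest-descent) coefficient update
    return h, e

# Illustrative use: identify a hypothetical unknown 4-tap system observed in noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.8, -0.4, 0.2, 0.1])                  # assumed "unknown" system
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
h_est, e = lms_identify(x, d, M=4, mu=0.01)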


1.2 Adaptive Channel Equalization
The block diagram below shows a digital communication system in which an adaptive equalizer is used to compensate for the distortion caused by the transmitting medium (channel).

[Figure: Digital communication system with an adaptive equalizer. The data sequence a(n) passes through the transmitter filter, the time-variant channel with additive noise w(n), the receiver filter, and a sampler producing s(n). The adaptive equalizer output feeds the decision device, which outputs â(n); the error signal e(n) between the equalizer output and the reference signal d(n) drives the adaptive algorithm.]

In the transmitting medium, the distortion is caused by intersymbol interference (ISI) and thermal noise. The output of the receiver filter is
$$s(t) = \sum_{k} a_k\, p(t - kT_s)$$

where $T_s$ is the signaling interval duration and $p(t)$ is the impulse response of the cascade connection of the filters.
The sampled version of s(t) is

$$s(k) = \sum_{n} a_n\, p(k-n) = a_k\, p(0) + \sum_{n \neq k} a_n\, p(k-n).$$

In the above equation the first term represents the desired symbol and the remaining terms represent intersymbol interference. To avoid ISI, the transmitter and receiver filters should be properly designed based on the channel characteristics.
Since the channel has random characteristics, the filters are designed based on its average characteristics, but this may not reduce the ISI to a large extent.
An adaptive equalizer is therefore used to reduce the ISI. It compensates for the channel distortion so that the detected signal is reliable.
The adaptive equalization process is carried out in two modes:
i) Training mode
ii) Tracking mode
Training Mode
A known test signal (a PN sequence) is transmitted. The received signal is compared with the test signal at the receiver, and the resulting error signal gives information about the channel. This error signal is used to adjust the equalizer coefficients.
Tracking Mode
After the training process is completed, the equalizer coefficients are continuously adjusted in decision-directed mode. The output of the equalizer is sent to the decision device to obtain a symbol estimate, and this estimate is used to adjust the filter coefficients.
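As a sketch of how the two modes differ, the Python fragment below adapts the equalizer taps with an LMS update, first against the known PN reference (training) and then against the decision-device output (tracking). The channel, step sizes, tap count, and binary ±1 symbols are illustrative assumptions, not taken from the notes.

import numpy as np

def train_equalizer(received, reference, M=11, mu=0.005):
    # Training mode: adapt equalizer taps w so the output tracks the known PN reference d(n).
    w = np.zeros(M)
    for n in range(M - 1, len(reference)):
        r = received[n - M + 1:n + 1][::-1]
        e = reference[n] - w @ r             # error against the known test signal
        w += 2 * mu * e * r
    return w

def track_equalizer(received, w, mu=0.001):
    # Tracking mode: the decision-device output replaces the reference (decision-directed).
    M = len(w)
    decisions = np.zeros(len(received))
    for n in range(M - 1, len(received)):
        r = received[n - M + 1:n + 1][::-1]
        y = w @ r
        decisions[n] = 1.0 if y >= 0 else -1.0   # hard decision for binary +/-1 symbols
        w += 2 * mu * (decisions[n] - y) * r
    return decisions, w

# Illustrative use with an assumed ISI channel and a +/-1 training (PN) sequence.
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=4000)
received = np.convolve(pn, [1.0, 0.4, 0.2])[:len(pn)] + 0.01 * rng.standard_normal(len(pn))
w = train_equalizer(received, pn)
decisions, w = track_equalizer(received, w)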
1.3 Adaptive Line Enhancer
The adaptive line enhancer is used to suppress wideband noise components while passing narrowband signals with little attenuation.

An adaptive line enhancer (ALE) is based on the straightforward concept of linear


prediction. A nearly-periodic signal can be perfectly predicted using linear combinations
of its past samples, whereas a non-periodic signal cannot.
The delay is chosen long enough to decorrelate the broadband noise-like component, resulting in a filter that extracts the narrowband periodic signal at the filter output y(k) (or, equivalently, removes periodic noise from a wideband signal at the error output e(k)).
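A minimal Python sketch of the ALE follows: the filter predicts x(n) from samples delayed by at least `delay`, so y(k) retains the predictable narrowband component and e(k) retains the broadband residual. The delay, filter length, and step size are illustrative, and LMS adaptation is assumed since the notes do not name an algorithm.

import numpy as np

def adaptive_line_enhancer(x, delay=20, M=32, mu=1e-3):
    # Predict x(n) from x(n-delay), ..., x(n-delay-M+1); y keeps the narrowband
    # (predictable) part, e keeps the broadband residual.
    w = np.zeros(M)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(delay + M - 1, len(x)):
        u = x[n - delay - M + 1:n - delay + 1][::-1]   # delayed reference vector
        y[n] = w @ u
        e[n] = x[n] - y[n]
        w += 2 * mu * e[n] * u                         # LMS update of the predictor
    return y, e

# Illustrative use: a sinusoid buried in wideband noise.
fs = 8000
n = np.arange(4 * fs)
x = np.sin(2 * np.pi * 500 * n / fs) + np.random.randn(len(n))
y, e = adaptive_line_enhancer(x)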

1.4 Adaptive Noise Cancellation

The basic principle of adaptive noise cancelling is as follows. The input to the adaptive filter is a noise signal w1(n) that is highly correlated with the additive disturbance w(n) but uncorrelated with the clean signal s(n).

The reference signal w1(n) is filtered to produce the output ŵ(n), which is an estimate of the additive noise w(n). This output is then subtracted from the noisy signal x(n) to produce the system output z(n). The system output is used to control the adaptive filter and is an estimate of s(n).
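The following Python sketch mirrors that description: the adaptive filter operates on the reference w1(n), its output ŵ(n) is subtracted from x(n), and the result z(n) both serves as the system output and drives the adaptation (LMS is assumed here; the filter length, step size, and example noise path are illustrative).

import numpy as np

def noise_canceller(x, w1, M=16, mu=1e-3):
    # x(n) = s(n) + w(n) is the noisy signal; w1(n) is the correlated noise reference.
    h = np.zeros(M)
    z = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = w1[n - M + 1:n + 1][::-1]
        w_hat = h @ u              # estimate of the additive noise w(n)
        z[n] = x[n] - w_hat        # system output, an estimate of s(n)
        h += 2 * mu * z[n] * u     # z(n) acts as the error driving the adaptation
    return z, h

# Illustrative use: the disturbance w(n) is a filtered version of the reference w1(n).
rng = np.random.default_rng(2)
s = np.sin(2 * np.pi * 0.01 * np.arange(8000))        # clean signal
w1 = rng.standard_normal(8000)                         # noise reference
w = np.convolve(w1, [0.7, 0.3, 0.1])[:8000]            # assumed noise path
z, h = noise_canceller(s + w, w1)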
1.5 Echo Cancellation
Consider two-wire and four-wire transmission in a telephone connection. Echo is generated at the hybrid, which connects the four-wire path to the two-wire line. Assume that the call is made via satellite; satellite communication introduces a delay of about 270 ms in each direction.
[Figure: Echo cancellation in a telephone connection. Transmitter A / Receiver A connect through hybrid A, and Transmitter B / Receiver B through hybrid B; an echo canceller driven by an adaptive algorithm is placed at each end of the four-wire path to cancel the echo generated at the hybrids.]

When A speaks to B, the speech signal travels over the upper transmission path and returns over the lower transmission path, so the received (echoed) signal has a delay of 540 ms. Echo cancellation is done by forming an estimate of the echo and subtracting it from the received signal.
The return signal is

$$y(n) = \sum_{k=0}^{\infty} h(k)\,x(n-k) + v(n)$$

where x(n) is the speech of speaker A, v(n) is the speech of speaker B plus noise, and h(k) is the impulse response of the echo path.

The estimate of the echo is

$$\hat{y}(n) = \sum_{k=0}^{\infty} \hat{h}(k)\,x(n-k)$$

where ĥ(k) is the estimate of the impulse response of the echo path.

The error signal is

$$e(n) = y(n) - \hat{y}(n).$$

By adaptively adjusting ĥ(k), after some iterations the echo can be minimized.
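As one possible realisation, the Python sketch below adapts ĥ(k) with a normalized LMS update; the tap count and step size are illustrative, and the notes do not prescribe this particular algorithm.

import numpy as np

def nlms_echo_canceller(x, y, M=128, mu=0.5, eps=1e-6):
    # x(n): speaker A's speech driving the echo path; y(n): return signal = echo + v(n).
    h_hat = np.zeros(M)
    e = np.zeros(len(y))
    for n in range(M - 1, len(y)):
        xn = x[n - M + 1:n + 1][::-1]               # x(n), ..., x(n-M+1)
        y_hat = h_hat @ xn                          # echo estimate y_hat(n)
        e[n] = y[n] - y_hat                         # echo-cancelled return signal
        h_hat += mu * e[n] * xn / (xn @ xn + eps)   # normalized LMS update
    return h_hat, e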
An echo signal at terminal A caused by hybrid A is a near-end echo, and an echo signal at terminal A caused by hybrid B is a far-end echo. Both of these echoes are removed by echo cancellers.
The received signal at modem A is

$$S_{RA}(t) = A_1 S_B(t) + A_2 S_A(t-d_1) + A_3 S_A(t-d_2)$$

where
$S_B(t)$ is the desired signal, which is to be demodulated at modem A,
$S_A(t-d_1)$ is the near-end echo due to hybrid A,
$S_A(t-d_2)$ is the far-end echo due to hybrid B, and
$A_1, A_2, A_3$ are the amplitudes of the signal components.
The received signal corrupted by additive noise w(t) is

$$r_A(t) = S_{RA}(t) + w(t).$$

Let h(n) be the impulse response of the adaptive echo canceller; its output signal is

$$\hat{S}_A(n) = \sum_{k=0}^{M-1} h(k)\,a(n-k).$$

The different configurations of echo cancellers are
i) Symbol-rate echo canceller
ii) Nyquist-rate echo canceller
1.5.1 Symbol rate Echo canceller

[Figure: Symbol-rate echo canceller, with blocks: input data a(n), transmitter filter, echo canceller, adaptive algorithm, hybrid, receiver filter, symbol-rate sampler, and decision device.]

1.5.2 Nyquist rate echo canceller

[Figure: Nyquist-rate echo canceller, with blocks: input data a(n), transmitter filter, echo canceller, adaptive algorithm, hybrid, receiver filter, Nyquist-rate sampler, and decision device.]

2. Musical Signal Processing


Almost all musical programs are produced in basically two stages. First,
sound from each individual instrument is recorded in an acoustically inert
studio on a single track of a multi track tape recorder. Then, the signals
from each track are manipulated by the sound engineer to add special
audio effects and are combined in a mix-down system to finally generate
the stereo recording on a two-track tape recorder.
2.1 Generation of Echo effects:
[Figure: single-echo generator, with input x(n), a delay of R samples (z^{-R}), and output y(n).]

Echoes are simply generated by delay units. For example, the direct sound and an echo appearing R sampling periods later are generated by an FIR filter described by the difference equation

$$y(n) = x(n) + \alpha\, x(n-R)$$

or, equivalently, by the transfer function

$$H(z) = 1 + \alpha z^{-R}.$$

This single-echo filter is also called a comb filter.


To generate a fixed number of multiple echoes spaced R sampling periods apart with exponentially decaying amplitudes, one can use an FIR filter with a transfer function

$$H(z) = 1 + \alpha z^{-R} + \alpha^{2} z^{-2R} + \cdots + \alpha^{N-1} z^{-(N-1)R} = \frac{1 - \alpha^{N} z^{-NR}}{1 - \alpha z^{-R}}.$$

To generate an infinite number of echoes spaced R sampling periods apart with exponentially decaying amplitudes, one can use an IIR filter; letting N → ∞ above gives the transfer function

$$H(z) = \frac{1}{1 - \alpha z^{-R}}, \qquad |\alpha| < 1.$$
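The two comb filters above translate directly into difference equations that can be applied with a standard filtering routine. The sketch below uses Python with scipy.signal.lfilter; the sampling rate, echo spacing, and attenuation alpha = 0.6 are illustrative values, not taken from the notes.

import numpy as np
from scipy.signal import lfilter

fs = 8000                      # assumed sampling rate (Hz)
R = int(0.25 * fs)             # echo spacing: R samples (0.25 s here)
alpha = 0.6                    # per-echo attenuation, |alpha| < 1
x = np.random.randn(4 * fs)    # stand-in for a recorded music signal

# Single echo (FIR comb): y(n) = x(n) + alpha * x(n - R)
b_fir = np.zeros(R + 1)
b_fir[0], b_fir[R] = 1.0, alpha
y_single = lfilter(b_fir, [1.0], x)

# Infinite, exponentially decaying echoes (IIR comb): H(z) = 1 / (1 - alpha z^-R)
a_iir = np.zeros(R + 1)
a_iir[0], a_iir[R] = 1.0, -alpha
y_infinite = lfilter([1.0], a_iir, x)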

2.2 Generation of Reverberation

The sound reaching the listener in a closed space, such as a concert hall, consists of several components: direct sound, early reflections, and reverberation. The early reflections are composed of several closely spaced echoes that are basically delayed and attenuated copies of the direct sound, whereas the reverberation is composed of densely packed echoes.
The IIR comb filter itself does not provide natural-sounding
reverberations for two reasons.
First, its magnitude response is not constant for all frequencies, resulting
in a coloration of many musical sounds that are often unpleasant for
listening purposes.
Second, the output echo density, given by the number of echoes per
second, generated by a unit impulse at the input, is much lower than that
observed in a real room, thus causing a fluttering of the composite
sound. It has been observed that approximately 1000 echoes per second
are necessary to create a reverberation that sounds free of flutter.
To develop a more realistic reverberation, a reverberator with an allpass structure has been proposed. Its transfer function is given by

$$H(z) = \frac{\alpha + z^{-R}}{1 + \alpha z^{-R}}, \qquad |\alpha| < 1.$$
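A single allpass section of this form can be applied in Python as sketched below. Cascading a few sections with different delays is one common way to increase the echo density toward the figure of about 1000 echoes per second mentioned above; the specific delays and alpha values here are assumptions, not taken from the notes.

import numpy as np
from scipy.signal import lfilter

def allpass_section(x, R, alpha):
    # H(z) = (alpha + z^-R) / (1 + alpha z^-R)
    b = np.zeros(R + 1); b[0], b[R] = alpha, 1.0
    a = np.zeros(R + 1); a[0], a[R] = 1.0, alpha
    return lfilter(b, a, x)

fs = 8000
x = np.random.randn(2 * fs)          # stand-in for the direct sound
y = x
for R, alpha in [(347, 0.7), (113, 0.7), (37, 0.7)]:   # illustrative delays and gains
    y = allpass_section(y, R, alpha)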

2.3 Generation of Chorus effects

The chorus effect is achieved when several musicians play the same musical piece at the same time but with small changes in the amplitudes and small timing differences between their sounds. Such an effect can also be created synthetically by a chorus generator from the music of a single musician: a simple modification of the digital filter used to generate echoes can be employed to simulate this sound effect.

The phasing effect is produced by processing the signal through a


narrowband notch filter with variable notch characteristics and adding a
scaled portion of the notch filter output to the original signal.

2.4 Generation of flanging effect


There are a number of special sound effects that are often used in the mix-down process. One such effect is called flanging.
Originally, it was created by feeding the same musical piece to two tape recorders and then combining their delayed outputs while varying the difference between their delay times. One way of varying this difference was to slow down one of the tape recorders by placing the operator's thumb on the flange of the feed reel, which led to the name flanging.
The FIR comb filter can be modified to create the flanging effect by making the delay time-varying, as sketched below.
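The Python sketch below implements such a time-varying FIR comb. The sinusoidal delay sweep, maximum delay, and gain are illustrative assumptions; a chorus-like effect can be obtained from the same structure by using a longer base delay and summing several independently varying delayed voices.

import numpy as np

def flanger(x, fs, max_delay=0.003, rate=0.5, gain=0.9):
    # Time-varying FIR comb: y(n) = x(n) + gain * x(n - R(n)), with R(n) swept slowly.
    n = np.arange(len(x))
    R = (0.5 * max_delay * fs * (1 + np.sin(2 * np.pi * rate * n / fs))).astype(int)
    idx = n - R
    delayed = np.where(idx >= 0, x[np.maximum(idx, 0)], 0.0)
    return x + gain * delayed

fs = 8000
y = flanger(np.random.randn(2 * fs), fs)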

3. Image Enhancement
The goal of image enhancement is to improve the image quality so that
the processed image is better than the original image for a specific
application or set of objectives.
3.1 Spatial domain techniques
These techniques are based on gray level mappings, where the type of
mapping used depends on the criterion chosen for enhancement.
As an example, consider the problem of enhancing the contrast of an image. Let r and s denote any gray level in the original and enhanced image, respectively. Suppose that for every pixel with level r in the original image we create a pixel in the enhanced image with level s = T(r), where T(r) has the form shown in Figure 5.1.

The effect of this transformation is to produce an image of higher contrast than the original by darkening the levels below a value m and brightening the levels above m in the original pixel spectrum. This technique is referred to as contrast stretching.
The values of r below m are compressed by the transformation function into a narrow range of s towards the dark end of the spectrum; the opposite effect takes place for values of r above m.
In the limiting case shown in figure, T(r) produces a 2-level (binary) image.
This is also referred to as image thresholding. Many powerful enhancement
processing techniques can be formulated in the spatial domain of an image.
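As an illustration, the Python sketch below applies a contrast-stretching mapping and its limiting thresholding case to an image with gray levels normalized to [0, 1]. The particular sigmoid-style formula for T(r) is an assumption; the notes only describe the shape of the curve around the value m.

import numpy as np

def contrast_stretch(img, m=0.5, E=4.0):
    # Darken levels below m and brighten levels above m (sigmoid-style T(r), assumed form).
    r = img.astype(np.float64)
    eps = np.finfo(float).eps
    return 1.0 / (1.0 + (m / (r + eps)) ** E)

def threshold(img, m=0.5):
    # Limiting case of contrast stretching: a 2-level (binary) image.
    return (img >= m).astype(np.float64)

img = np.random.rand(64, 64)          # stand-in image, gray levels in [0, 1]
stretched = contrast_stretch(img)
binary = threshold(img)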

3.2 Image Enhancement by histogram modification


The histogram of an image represents the relative frequency of occurrence of
the various gray levels in the image. It provides a total description of the
appearance of an image. The type and degree of enhancement obtained
depends on the nature of the specified histogram.
Let the variable r represent the gray level of the pixels in the image to be enhanced. Assume that the pixel values are normalized to lie in the range 0 ≤ r ≤ 1, with r = 0 representing black and r = 1 representing white in the gray scale.

For any r in [0, 1], we consider transformations of the form s = T(r), which produce a level s for every pixel value r in the original image. It is assumed that the transformation function satisfies the conditions:
(1) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1;
(2) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
Condition (1) ensures that the transformation preserves the order from black to white in the gray scale, and condition (2) guarantees a mapping that is consistent with the allowed range of pixel values. A common example of such a transformation is the cumulative distribution function of the gray levels, which leads to histogram equalization.

The inverse transformation r = T⁻¹(s), 0 ≤ s ≤ 1, is assumed to satisfy conditions (1) and (2) with respect to the variable s. The gray levels in an image are random quantities in the interval [0, 1]. Assuming that they are continuous variables, the original and transformed gray levels can be characterized by their probability density functions p_r(r) and p_s(s). The general characteristics of an image can be inferred from the density function of its gray levels.

[Figures 5.3a and 5.3b: example gray-level density functions. A density function whose levels are concentrated in the dark region of the gray scale implies an image with predominantly dark characteristics, whereas one in which the majority of pixels are light implies an image with predominantly light tones.]

It follows from elementary probability theory that if p_r(r) and T(r) are known and T⁻¹(s) satisfies condition (1), then the density of the transformed levels is

$$p_s(s) = p_r(r)\left|\frac{dr}{ds}\right|_{r = T^{-1}(s)}.$$

The enhancement techniques are based on modifying the appearance of an


image by controlling the probability density function of its gray levels via
the transformation function T (r) .
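A concrete example of such a controlled modification is histogram equalization, where T(r) is taken to be the (scaled) cumulative distribution of the gray levels. The Python sketch below estimates the histogram, forms the cumulative sum, and maps each pixel through it; the number of bins and the synthetic test image are illustrative.

import numpy as np

def histogram_equalize(img, levels=256):
    # Map each gray level through the scaled cumulative distribution of p_r(r).
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0.0, 1.0))
    p_r = hist / img.size                # estimated gray-level density
    cdf = np.cumsum(p_r)                 # s = T(r): single-valued, monotonically increasing, in [0, 1]
    indices = np.minimum((img * (levels - 1)).astype(int), levels - 1)
    return cdf[indices]

img = np.random.rand(64, 64) ** 3        # synthetic dark-biased image in [0, 1]
equalized = histogram_equalize(img)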

4. Speech Compression
A voice signal has a frequency range of 300 to 3000 Hz. It is sampled at a rate of 8 kHz, and the word length of the digitized signal is 12 bits, giving a raw bit rate of 8000 × 12 = 96 kbits/s.
Speech compression and coding are used to reduce the redundancy present in voice signals.
The different voice coding techniques are
a. Waveform coding: non-uniform, differential, and adaptive quantization.
b. Transform coding: transform the voice signal into an orthogonal representation and then code the transform coefficients.
c. Frequency-band coding: the frequency range of the voice signal is divided into discrete channels and each channel is coded separately.
d. Parametric methods: linear prediction.
4.1 Channel Vocoders
The channel vocoder is an analysis-synthesis system. A filter bank of 8 to 10 filters is used to separate the frequency bands, and the amplitudes of the filter outputs are encoded using level detectors and coders.
Pitch and voicing information are also sent along with them.
A wideband excitation signal is generated at the receiving end using the transmitted pitch and voicing information. For a voiced signal, the excitation consists of a periodic signal with the appropriate frequency; for an unvoiced signal, the excitation is white noise.
At the receiving end, a matching filter bank is available, so that each output level matches the encoded value. The individual outputs are combined to produce the speech signal.
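The synthesis side of such a vocoder can be sketched in Python as below. The band edges, filter order, and square-wave approximation of the periodic excitation are assumptions made for illustration; the notes only state that 8 to 10 bands, a pitch/voicing decision, and a matching filter bank are used.

import numpy as np
from scipy.signal import butter, lfilter

fs = 8000
bands = [(300, 600), (600, 1000), (1000, 1500), (1500, 2100),
         (2100, 2800), (2800, 3400)]           # illustrative analysis bands (Hz)

def band_filter(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, sig)

def vocoder_synthesize(levels, voiced, pitch_hz, n_samples):
    # levels: one transmitted amplitude per band; voiced/pitch_hz: transmitted voicing info.
    t = np.arange(n_samples) / fs
    if voiced:
        excitation = np.sign(np.sin(2 * np.pi * pitch_hz * t))   # periodic excitation (square wave)
    else:
        excitation = np.random.randn(n_samples)                  # white-noise excitation
    out = np.zeros(n_samples)
    for (lo, hi), level in zip(bands, levels):
        out += level * band_filter(excitation, lo, hi)           # matching filter bank
    return out

frame = vocoder_synthesize([1.0, 0.8, 0.5, 0.3, 0.2, 0.1], voiced=True,
                           pitch_hz=120, n_samples=fs // 4)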

4.2 Subband Coding


Subband coding is a method in which the speech signal is subdivided into several frequency bands and each band is digitally encoded separately.
Let us assume the speech signal is sampled at a rate of Fs samples per second.
The first frequency subdivision splits the signal spectrum into two equal-width segments: a lowpass signal (0 ≤ F ≤ Fs/4) and a highpass signal (Fs/4 ≤ F ≤ Fs/2).
The second frequency subdivision splits the lowpass signal from the first stage into two equal subbands: a lowpass signal (0 ≤ F ≤ Fs/8) and a highpass signal (Fs/8 ≤ F ≤ Fs/4).

Finally, the third frequency subdivision splits the lowpass signal from the second stage into two equal-bandwidth signals. Thus the signal is divided into four frequency bands covering three octaves.

Decimation by a factor of 2 is performed after each frequency subdivision. By allocating a different number of bits per sample to the signals in the four subbands, we can achieve a reduction in the bit rate of the digitized signal.
The synthesis of the subband-encoded speech signal is basically the reverse of the encoding process: the signals in adjacent lowpass and highpass frequency bands are interpolated, filtered, and combined.
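A minimal Python sketch of the analysis tree described above follows: each stage splits its input into equal-width lowpass and highpass halves and decimates both by 2, yielding four bands after three stages. The halfband FIR design and tap count are assumptions; the notes do not prescribe particular filters.

import numpy as np
from scipy.signal import firwin, lfilter

def split_band(x, num_taps=64):
    # Split x into equal-width lowpass/highpass halves and decimate each by 2.
    h_lp = firwin(num_taps, 0.5)                     # halfband lowpass (cutoff at half the current Nyquist rate)
    h_hp = h_lp * (-1.0) ** np.arange(num_taps)      # modulate to obtain the matching highpass filter
    low = lfilter(h_lp, [1.0], x)[::2]               # lowpass branch, decimated by 2
    high = lfilter(h_hp, [1.0], x)[::2]              # highpass branch, decimated by 2
    return low, high

fs = 8000
x = np.random.randn(fs)                              # stand-in for 1 s of speech
low1, band3 = split_band(x)                          # band3: Fs/4 .. Fs/2
low2, band2 = split_band(low1)                       # band2: Fs/8 .. Fs/4
band0, band1 = split_band(low2)                      # band0: 0 .. Fs/16, band1: Fs/16 .. Fs/8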
