
EE644 - Discrete Time Systems

Spring 2009
Midterm Exam #2 with Solutions
Problem 1. Polyphase Representations
Consider a digital filter with transfer function
$H(z) = \frac{1}{1 - az^{-1}}$.
Suppose we want to perform a two-component polyphase decomposition
$H(z) = P_0(z^2) + z^{-1} P_1(z^2)$.
Determine $P_0(z)$ and $P_1(z)$ in terms of $z$.
Solution:
$H(z) = 1 + az^{-1} + a^2 z^{-2} + a^3 z^{-3} + a^4 z^{-4} + \cdots$
$= \left(1 + a^2 z^{-2} + a^4 z^{-4} + \cdots\right) + az^{-1}\left(1 + a^2 z^{-2} + a^4 z^{-4} + \cdots\right)$
$= \frac{1}{1 - a^2 z^{-2}} + \frac{az^{-1}}{1 - a^2 z^{-2}}$.
Hence,
$P_0(z^2) = \frac{1}{1 - a^2 z^{-2}} \;\Rightarrow\; P_0(z) = \frac{1}{1 - a^2 z^{-1}}$,
$P_1(z^2) = \frac{a}{1 - a^2 z^{-2}} \;\Rightarrow\; P_1(z) = \frac{a}{1 - a^2 z^{-1}}$.
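Note: the decomposition can be checked numerically by comparing impulse responses. Below is a minimal Python sketch, assuming the illustrative value $a = 0.5$ (not specified in the problem):

```python
import numpy as np

a, N = 0.5, 20
n = np.arange(N)

# Impulse response of H(z) = 1/(1 - a z^-1): h[n] = a^n u[n]
h = a**n

# P0(z) = 1/(1 - a^2 z^-1) and P1(z) = a/(1 - a^2 z^-1),
# so p0[n] = a^(2n) u[n] and p1[n] = a^(2n+1) u[n].
p0 = (a**2) ** n
p1 = a * (a**2) ** n

# Rebuild h from the polyphase branches: even samples from P0, odd from P1.
h_poly = np.zeros(N)
h_poly[0::2] = p0[: (N + 1) // 2]  # P0(z^2) contributes at even indices
h_poly[1::2] = p1[: N // 2]        # z^-1 P1(z^2) contributes at odd indices

print(np.allclose(h, h_poly))  # True
```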
Problem 2. Multirate Filtering
An analog signal is sampled at a rate of to produce a discrete time signal, . It is
desired to filter this signal with a low-pass filter whose passband edge is and stop
band edge is . It is desired to implement the lowpass filter in a multirate fashion
using a single decimator/interpolator as shown in the figure.
Provide appropriate bandwidth specifications for each of the three filters. That is, give the stop-
band edge and passband edge for each of the three filters.
Solution:
First and Third Filters:
Passband: 0-300 Hz (or 0-325 Hz)
Transition Band: 300-700 Hz (or 325-675 Hz)
Stopband: 700-5000 Hz (or 675-5000 Hz)
Second Filter:
Passband: 0-300 Hz
Transition Band: 300-325 Hz
Stopband: 325-500 Hz
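Note: the point of the multirate structure is that the sharp 300-325 Hz transition is handled at the low rate. A rough tap-count comparison using SciPy's Kaiser-window order estimate, assuming 60 dB of stopband attenuation (an illustrative value, not specified in the problem):

```python
from scipy.signal import kaiserord

atten_db = 60.0  # assumed stopband attenuation

def est_taps(f_trans_lo, f_trans_hi, fs):
    """Estimate FIR length for a transition band at sampling rate fs."""
    width = (f_trans_hi - f_trans_lo) / (fs / 2.0)  # fraction of Nyquist
    taps, _ = kaiserord(atten_db, width)
    return taps

# Single-stage design at 10 kHz: transition band 300-325 Hz.
single = est_taps(300, 325, 10_000)

# Multirate design: H1 and H3 run at 10 kHz with the wide 300-700 Hz
# transition band; H2 runs at 1 kHz with the sharp 300-325 Hz band.
h1 = est_taps(300, 700, 10_000)
h2 = est_taps(300, 325, 1_000)

print(f"single-stage taps: {single}")
print(f"multirate taps: H1={h1}, H2={h2}, H3={h1}")
```

The single-stage filter needs far more taps than the three multirate filters combined, which is the motivation for the structure.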
Problem 3. Wiener Filter
An AR(1) process is defined by the difference equation
$x[n] = ax[n-1] + w[n]$,
where $w[n]$ is a zero-mean white noise process with variance $\sigma_w^2$. Find an infinite order ($p = \infty$)
m-step predictor for this process.
Solution:
The autocorrelation function for this AR(1) process is:
$R_{xx}(m) = \frac{\sigma_w^2}{1 - a^2}\, a^{|m|}$.
The z-transform of this is
$S_{xx}(z) = \frac{\sigma_w^2}{(1 - az)(1 - az^{-1})}$.
Hence, the whitening filter (first stage of the Wiener filter) is
$H_1(z) = 1 - az^{-1}$, i.e., $h_1[n] = \delta[n] - a\,\delta[n-1]$.
The output of the whitening filter is
$x[n] * h_1[n] = x[n] - ax[n-1] = w[n]$.
The second stage of the Wiener filter is
$h_2[n] = \frac{R_{dw}(n)}{\sigma_w^2}\, u[n]$.
The required cross-correlation function is:
$R_{dw}(n) = E[d[k+n]\,w[k]] = E[x[k+n+m]\,w[k]] = \begin{cases} 0, & n < -m \\ \sigma_w^2\, a^{n+m}, & n \ge -m \end{cases}$
Hence,
$h_2[n] = \frac{R_{dw}(n)}{\sigma_w^2}\, u[n] = a^{n+m}\, u[n]$, so $H_2(z) = \frac{a^m}{1 - az^{-1}}$.
Finally,
$H(z) = H_1(z) H_2(z) = (1 - az^{-1})\, \frac{a^m}{1 - az^{-1}} = a^m$, i.e., $\hat{x}[n+m] = a^m\, x[n]$.
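Note: iterating the recursion gives $x[n+m] = a^m x[n] + \sum_{j=0}^{m-1} a^j w[n+m-j]$, so the predictor's error variance is $\sigma_w^2 (1 - a^{2m})/(1 - a^2)$. A quick simulation sketch (with illustrative values $a = 0.8$, $m = 3$) confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma_w, m, N = 0.8, 1.0, 3, 200_000

# Simulate the AR(1) process x[n] = a x[n-1] + w[n].
w = rng.normal(0.0, sigma_w, N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

# m-step predictor from the Wiener solution: xhat[n+m] = a^m x[n].
pred = (a**m) * x[:-m]
err = x[m:] - pred

# Error = sum_{j=0}^{m-1} a^j w[n+m-j], so its variance is
# sigma_w^2 * (1 - a^(2m)) / (1 - a^2).
theory = sigma_w**2 * (1 - a ** (2 * m)) / (1 - a**2)
print(np.var(err), theory)  # the two should be close
```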
Problem 4. Orthogonality of Backward Prediction Errors
Let $g_q[n]$ be the output of a $q$th order backward prediction error filter
$g_q[n] = x[n-q] - \hat{x}[n-q] = \sum_{k=0}^{q} b_q(k)\, x[n-k]$,
where $b_q(q) = 1$. Prove that for a stationary zero-mean input, $x[n]$, the backward prediction
errors of orders $p$ and $q$ are mutually orthogonal. That is, show that $E[g_q[n]\, g_p[n]] = 0$ for
$p \ne q$.
Solution:
Suppose that $p > q$. Then,
$E[g_q[n]\, g_p[n]] = E\!\left[\left(\sum_{k=0}^{q} b_q(k)\, x[n-k]\right) g_p[n]\right] = \sum_{k=0}^{q} b_q(k)\, E[g_p[n]\, x[n-k]]$.
According to the orthogonality principle, the prediction error, $g_p[n]$, is orthogonal to the data that
it is based on. In this case, $E[g_p[n]\, x[m]] = 0$ for $m = n-p+1, n-p+2, \ldots, n$. Since it was
assumed that $p > q$, every index $n-k$ with $k = 0, 1, \ldots, q$ falls in this range, so all the
expectations in the above sum are zero. Hence,
$E[g_q[n]\, g_p[n]] = 0$ for $p > q$.
On the other hand, if $q > p$, follow the exact same set of steps with the roles of $p$ and $q$ reversed to
arrive at the same conclusion. Hence,
$E[g_q[n]\, g_p[n]] = 0$ for all $p \ne q$.
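Note: the result is easy to check empirically. The sketch below uses least-squares backward predictors (standing in for the exact Wiener coefficients) fitted over a common sample range; in-sample, the order-$q$ error is then exactly orthogonal to the order-$p$ error, since it is a linear combination of the order-$p$ regressors:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, q = 50_000, 5, 2  # orders with p > q, chosen for illustration

# A zero-mean stationary input: an AR(2) process driven by white noise.
x = np.zeros(N)
for n in range(2, N):
    x[n] = 0.6 * x[n - 1] - 0.3 * x[n - 2] + rng.normal()

def backward_error(x, order, start):
    """Least-squares backward prediction error g_order[n] for n >= start.

    Predict x[n-order] from x[n-order+1], ..., x[n]; both orders share the
    same sample range so in-sample orthogonality holds exactly.
    """
    n = np.arange(start, len(x))
    target = x[n - order]                                 # x[n-order]
    regressors = np.column_stack([x[n - k] for k in range(order)])
    coef, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return target - regressors @ coef                     # g_order[n]

g_p = backward_error(x, p, start=p)
g_q = backward_error(x, q, start=p)

# Normalized inner product; ~0 up to numerical precision.
print(np.dot(g_q, g_p) / len(g_p))
```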
Problem 5. MMSE Estimation
Consider the random process
$x[n] = g\,v[n] + w[n]$, $n = 1, 2, \ldots, N$,
where $v[n]$ is a known sequence and $g$ is a random variable with $E[g] = 0$ and $E[g^2] = G$. The
process $w[n]$ is a zero-mean, white noise sequence with variance $\sigma_w^2$. Given observations of
$\{x[n]\}$, $n = 1, 2, \ldots, N$, we wish to form a linear estimate of $g$,
$\hat{g} = \sum_{n=1}^{N} h[n]\, x[n]$.
(a) Find a set of linear equations for the filter coefficients, $h[n]$, $n = 1, 2, \ldots, N$, that
minimize the mean squared error, $E[(g - \hat{g})^2]$. Hint: You will probably find it easier to
introduce matrix notation.
(b) Use the matrix inversion lemma (or any other method) to solve the set of equations from part
(a) and provide an explicit form for the filter coefficients.
Solution:
In matrix form, we have $x = gv + w$, $\hat{g} = h^T x$. The mean squared error is
$E[(g - \hat{g})^2] = E[(g - h^T x)^2]$.
(a) Forming the gradient with respect to $h$ we get
$\nabla_h E[(g - \hat{g})^2] = (-2)\, E[(g - \hat{g})\, \nabla_h \hat{g}] = (-2)\, E[(g - \hat{g})\, x] = 0$,
so that $E[gx] = E[\hat{g} x]$. These two expectations work out to be
$E[gx] = E[g(gv + w)] = Gv$,
$E[\hat{g} x] = E[x \hat{g}] = E[x x^T h] = R_{xx} h$.
The autocorrelation matrix for $x$ takes on the form
$R_{xx} = E[(gv + w)(g v^T + w^T)] = G v v^T + \sigma_w^2 I$.
Hence the optimal (MMSE) filter coefficients satisfy
$(G v v^T + \sigma_w^2 I)\, h = Gv$, or $\left(v v^T + \frac{\sigma_w^2}{G} I\right) h = v$.
(b) Note that the autocorrelation matrix is a rank-one update of a diagonal matrix, hence we can
use the matrix inversion lemma to find the needed inverse:
$\left(v v^T + \frac{\sigma_w^2}{G} I\right)^{-1} = \frac{G}{\sigma_w^2} I - \frac{\left(\frac{G}{\sigma_w^2}\right)^2 v v^T}{1 + \frac{G}{\sigma_w^2}\, v^T v} = \frac{G}{\sigma_w^2} I - \frac{\left(\frac{G}{\sigma_w^2}\right)^2 v v^T}{1 + \frac{G}{\sigma_w^2} \|v\|^2}$.
The optimal filter coefficients are then
$h = \left(v v^T + \frac{\sigma_w^2}{G} I\right)^{-1} v = \frac{G}{\sigma_w^2} v - \frac{\left(\frac{G}{\sigma_w^2}\right)^2 \|v\|^2\, v}{1 + \frac{G}{\sigma_w^2} \|v\|^2} = \frac{\frac{G}{\sigma_w^2}\, v}{1 + \frac{G}{\sigma_w^2} \|v\|^2}$.
An alternative approach to solving this set of equations would be to draw on the knowledge that
the optimal filter coefficients should be matched to $v$.
Hence, we expect a solution that is proportional to $v$. Towards that end, assume that the optimal
filter is of the form $h = cv$, where $c$ is an unknown constant. Then plug this solution into the
equation found in part (a):
$\left(v v^T + \frac{\sigma_w^2}{G} I\right) cv = v \;\Rightarrow\; c = \left(\|v\|^2 + \frac{\sigma_w^2}{G}\right)^{-1}$.
Since the assumed form does in fact satisfy the system of equations, and the solution to this
equation is unique, this must be the correct solution. Note that this is the same result as found
using the matrix inversion lemma.