\[
H(z) = \prod_{k=0}^{K-1} \bigl(1 - t_k z^{-1}\bigr). \tag{2}
\]
Clearly, the convolution between h and τ results in a zero output, as the t_k's are the roots of the filter (for more details, we refer to Section 5.2.1 of the lecture notes). Therefore h is called the annihilating filter of τ. In practice, the filter can be found by setting h[0] = 1 and solving the following system of equations
\[
\begin{pmatrix}
\tau[K-1] & \tau[K-2] & \cdots & \tau[0] \\
\tau[K]   & \tau[K-1] & \cdots & \tau[1] \\
\vdots    & \vdots    & \ddots & \vdots  \\
\tau[N-1] & \tau[N-2] & \cdots & \tau[N-K]
\end{pmatrix}
\begin{pmatrix} h[1] \\ h[2] \\ \vdots \\ h[K] \end{pmatrix}
= -
\begin{pmatrix} \tau[K] \\ \tau[K+1] \\ \vdots \\ \tau[N] \end{pmatrix}.
\]
Knowledge of the filter is sufficient to determine the t_k's, as they are the roots of the polynomial in (2). The weights a_k can then be determined using K samples of τ[m]. These form a classic Vandermonde system
\[
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
t_0 & t_1 & \cdots & t_{K-1} \\
\vdots & \vdots & \ddots & \vdots \\
t_0^{K-1} & t_1^{K-1} & \cdots & t_{K-1}^{K-1}
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{K-1} \end{pmatrix}
=
\begin{pmatrix} \tau[0] \\ \tau[1] \\ \vdots \\ \tau[K-1] \end{pmatrix}
\]
which has a unique solution given that the t_k's are distinct.
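The whole procedure (solve the system above for h, take the roots of (2), then solve the Vandermonde system for the weights) can be sketched as follows. The project asks for a Matlab implementation; this is an equivalent Python/NumPy sketch, and the function and variable names are our own.

```python
import numpy as np

def annihilating_filter_recovery(tau, K):
    """Recover (t_k, a_k) from the sequence tau[m] = sum_k a_k t_k^m, assuming K terms."""
    tau = np.asarray(tau, dtype=float)
    N = len(tau) - 1                      # tau[0..N] available
    # Rows m = K..N of sum_i h[i] tau[m-i] = 0, with h[0] = 1 moved to the RHS.
    A = np.array([[tau[m - i] for i in range(1, K + 1)] for m in range(K, N + 1)])
    b = -tau[K:N + 1]
    h_tail = np.linalg.lstsq(A, b, rcond=None)[0]
    h = np.concatenate(([1.0], h_tail))
    # The t_k are the roots of H(z) = 1 + h[1] z^-1 + ... + h[K] z^-K.
    t = np.roots(h)
    # Weights from the K x K Vandermonde system: row m is (t_0^m, ..., t_{K-1}^m).
    V = np.vander(t, K, increasing=True).T
    a = np.linalg.solve(V, tau[:K])
    return np.real(t), np.real(a)
```

Using least squares for the first solve makes the same code work when more than the minimum number of equations is available, which helps in the noisy setting of Exercise 6.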
Exercise 3 Write a program that finds the annihilating filter h[m] given a signal τ[m]. Apply your program to the signal τ[m] available on the course website (m-file: tau.mat). Retrieve the locations t_k and the amplitudes a_k. Remark: we have set K = 2.
4 Sampling Diracs
All the necessary pieces for implementing an end-to-end algorithm capable of recovering sampled streams of Diracs are now in place. Indeed, consider a signal of the type $x(t) = \sum_{k=0}^{K-1} a_k \delta(t - t_k)$. Then the locations t_k and the amplitudes a_k can be recovered from the samples by applying the annihilating filter method to
\[
\begin{aligned}
s[m] &= \sum_{n} c_{m,n}\, y[n] \\
     &= \Bigl\langle x(t),\, \sum_{n} c_{m,n}\, \varphi(t - n) \Bigr\rangle \\
     &= \int x(t)\, t^m \, dt \\
     &= \sum_{k=0}^{K-1} a_k\, t_k^m
\end{aligned}
\]
where the last equality derives from the fact that x(t) is a stream of Diracs and $\int f(t)\,\delta(t - t_k)\,dt = f(t_k)$.
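Before moving to the exercises, the annihilation property itself is easy to check numerically. Below is a short Python/NumPy sketch (the locations and amplitudes are made up): the filter whose roots are the t_k annihilates the sequence $s[m] = \sum_k a_k t_k^m$.

```python
import numpy as np

t = np.array([1.5, 4.0, 6.25])                 # hypothetical locations t_k
a = np.array([1.0, -0.5, 2.0])                 # hypothetical amplitudes a_k
K, M = len(t), 10
# s[m] = sum_k a_k t_k^m for m = 0..M
s = np.array([np.sum(a * t ** m) for m in range(M + 1)])
h = np.poly(t)        # coefficients of prod_k (1 - t_k z^{-1}), with h[0] = 1
residual = np.convolve(h, s)[K:M + 1]          # (h * s)[m] for m = K, ..., M
# residual is zero up to rounding error
```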
Exercise 4 Create a stream of Diracs with K = 2. Sample the signal with an appropriate Daubechies scaling function using a sampling period of T = 64. Reconstruct the continuous-time Diracs from the samples.
Exercise 5 We have sampled a stream of Diracs (K = 2, T = 64) with a db4 Daubechies scaling function. The observed samples are available on the course website (m-file: samples.mat). Use the reconstruction algorithm to exactly recover the locations and amplitudes of all the Diracs.
Exercise 6 In order to check the stability of the algorithm, create streams of Diracs with K = 1, . . . , 4. For each case, sample the signal with an appropriate Daubechies scaling function and check whether you can always reconstruct the Diracs. Comment on the results.
5 Sampling piecewise polynomials
An interesting extension of the sampling scheme is that piecewise polynomial signals can also be exactly recovered. In the context of this project, we will concentrate only on piecewise constant functions. The idea behind the extension is that the derivative of a piecewise constant function is a stream of Diracs. Consider the samples $y[n] = \langle x(t), \varphi(t - n) \rangle$ where x(t) is piecewise constant. If we apply finite differences to the samples, it can be shown that
\[
\begin{aligned}
z[n] = y[n+1] - y[n] &= \bigl\langle x(t),\, \varphi(t - n - 1) - \varphi(t - n) \bigr\rangle \\
&= \Bigl\langle \frac{dx(t)}{dt},\, \bigl(\varphi \ast \beta_0\bigr)(t - n) \Bigr\rangle.
\end{aligned}
\]
Therefore the samples z[n] are equivalent to a sampled stream of Diracs with an equivalent kernel $\varphi_{\mathrm{equ}}(t) = \varphi(t) \ast \beta_0(t)$. Notice that in the case of $\varphi(t) = \beta_0(t)$, the new kernel is $\varphi_{\mathrm{equ}}(t) = \beta_0(t) \ast \beta_0(t) = \beta_1(t)$, which is a linear spline. The recovery algorithm implemented in the previous section can readily be applied to z[n].
Exercise 7 Consider a box function
\[
x(t) = A\,[u(t - t_0) - u(t - t_1)], \qquad t \in [0, 32[,
\]
where
\[
u(t) = \begin{cases} 0 & t < 0 \\ 1 & t \geq 0 \end{cases}
\qquad \text{and} \qquad \frac{d}{dt}u(t) = \delta(t).
\]
Set A = 2.345, t_0 = 8.8125 and t_1 = 21.1875. Sample x(t) using $\varphi(t) = \beta_0(t)$ as the sampling kernel. Apply finite differences to the samples and use your program to recover the locations and amplitudes of the equivalent signal. Recover the original x(t). Plot all the results. (Hint: you can locate t_0 and t_1 separately with K = 1, since the two locations are separated by some zero samples.)
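The mechanics of this hint can be checked numerically. Below is a Python/NumPy sketch (not the Matlab program the exercise asks for). It uses the facts that, with $\varphi = \beta_0$, the samples are $y[n] = \int_n^{n+1} x(t)\,dt$, and that $\beta_1$ (supported on [0, 2]) reproduces polynomials of degree 1 with coefficients $c_{0,n} = 1$ and $c_{1,n} = n + 1$; with K = 1, the two moment sums over each cluster directly give the amplitude and location of each jump.

```python
import numpy as np

A_true, t0, t1 = 2.345, 8.8125, 21.1875        # values from Exercise 7

def y_sample(n):
    # y[n] = <x, beta_0(t - n)> = A * length([t0, t1) intersected with [n, n+1))
    return A_true * max(0.0, min(t1, n + 1) - max(t0, n))

y = np.array([y_sample(n) for n in range(32)])
z = np.diff(y)                                  # z[n] = y[n+1] - y[n]
n = np.arange(len(z))

# z[n] = A beta_1(t0 - n) - A beta_1(t1 - n); the two clusters do not
# overlap, so each jump is recovered separately with K = 1:
pos, neg = z > 0, z < 0
A_hat  = z[pos].sum()                           # moment of order 0
t0_hat = ((n[pos] + 1) * z[pos]).sum() / A_hat  # moment of order 1 / order 0
t1_hat = ((n[neg] + 1) * z[neg]).sum() / z[neg].sum()
```

Running this recovers A = 2.345, t_0 = 8.8125 and t_1 = 21.1875 up to rounding error.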
6 Application: image super-resolution
You have just implemented a new sampling algorithm capable of perfectly recovering certain classes of signals that have been sampled at a rate below the Nyquist rate. This new scheme has a wide variety of applications, ranging from wideband communications to image super-resolution. It is this last application that we now ask you to implement.
Suppose that we have access to N low-resolution cameras that are all pointing at the same fixed scene. The position of each camera is unknown. The goal of image super-resolution is to generate a single high-resolution image, called the super-resolved (SR) image, by fusing the set of low-resolution (LR) images acquired by these cameras. The super-resolved image therefore has more pixels than any of the low-resolution images but, more importantly, it also has more detail (i.e. it is not a simple interpolation). This is made possible because each camera observes the scene from a different viewpoint.
Image super-resolution can usually be decomposed into two main steps (see Figure 1):
1. Image registration: the geometric transformations between the LR images are estimated so that they can be overlaid accurately.
2. Image fusion and restoration: the data from each LR image is fused on a single high-resolution grid and missing data is estimated. Restoration finally enhances the resulting image by removing any blur and noise.
In the next exercise, you are asked to write in Matlab the image registration function of an image super-resolution algorithm. The approach used to register the LR images is based on the notion of continuous moments and is explained next. The fusion/restoration step is already written for you.
Suppose that the i-th camera observes the scene
\[
f_i(x, y) = f_1(x + dx_i,\, y + dy_i), \qquad i = 1, \ldots, N,
\]
where $f_1(x, y)$ is the scene observed by the first camera, taken as the reference. We therefore have $(dx_1, dy_1)^T = (0, 0)^T$. We have implicitly assumed
here that the geometric transformations between the views are 2-D translations. At each camera, the point spread function $\varphi(x, y)$ of the lens is modeled by a 2-D cubic B-spline and blurs the observed scene $f_i(x, y)$ (see Figure 2). B-splines are functions with compact support that satisfy the Strang-Fix conditions. Therefore, similarly to Equation (1), there exists a set of coefficients $\bigl\{c^{(p,q)}_{m,n}\bigr\}$ such that
\[
\sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} c^{(p,q)}_{m,n}\, \varphi(x - m,\, y - n) = x^p y^q.
\]
The blurred scene $f_i(x, y) \ast \varphi(x, y)$ is then uniformly sampled by the CCD array, which acts as an analog-to-discrete converter. The output discrete image of the i-th camera then has the samples
\[
S^{(i)}_{m,n} = \bigl\langle f_i(x, y),\, \varphi(x - m,\, y - n) \bigr\rangle.
\]
By a development similar to that of Section 4, it can be shown that
\[
\begin{aligned}
m_{p,q} &= \iint f(x, y)\, x^p y^q \,dx\,dy \qquad &(3) \\
        &= \sum_{m,n} c^{(p,q)}_{m,n}\, S_{m,n}. \qquad &(4)
\end{aligned}
\]
Equation (3) is the definition of the geometric moments $m_{p,q}$ of order $(p + q)$, $p, q \in \mathbb{N}$, of a 2-D continuous function $f(x, y)$. Equation (4) shows that it is possible to retrieve the exact moments of the observed continuous scene from its blurred and sampled version by computing a linear combination with the proper coefficients. Knowing those moments, we can compute the barycenter of $f(x, y)$:
\[
(\bar{x}, \bar{y})^T = \left( \frac{m_{1,0}}{m_{0,0}},\; \frac{m_{0,1}}{m_{0,0}} \right)^T.
\]
If $f_i(x, y) = f_1(x + dx_i,\, y + dy_i)$, then $(dx_i, dy_i)^T$ can be retrieved from the difference between the barycenter of $f_1$ and the barycenter of $f_i$:
\[
(dx_i, dy_i)^T = (\bar{x}_i, \bar{y}_i)^T - (\bar{x}_1, \bar{y}_1)^T.
\]
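This barycenter identity is easy to verify numerically. Below is a Python/NumPy sketch in which the moments are approximated by grid sums over a fine discretization (in the project, Equation (4) gives the exact continuous moments instead); the test scene and the shift values are made up.

```python
import numpy as np

def barycenter(f, x, y):
    m00 = f.sum()                                   # ~ m_{0,0}
    return np.array([(x * f).sum() / m00,           # ~ m_{1,0} / m_{0,0}
                     (y * f).sum() / m00])          # ~ m_{0,1} / m_{0,0}

# Fine grid over a 64 x 64 field of view
x, y = np.meshgrid(np.linspace(0.0, 64.0, 1024), np.linspace(0.0, 64.0, 1024))
f1 = np.exp(-((x - 30.0) ** 2 + (y - 20.0) ** 2) / 8.0)   # reference view
a, b = 1.75, -2.5                                          # hypothetical translation of view 2
f2 = np.exp(-((x - a - 30.0) ** 2 + (y - b - 20.0) ** 2) / 8.0)
# The barycenter difference recovers the translation between the views:
shift = barycenter(f2, x, y) - barycenter(f1, x, y)        # ~ (a, b)
```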
Exercise 8 In this exercise, you have access to N = 40 low-resolution color images of 64x64 pixels. The goal is to generate a super-resolved color image of 512x512 pixels. The images observe the same scene and have been taken from different viewpoints. We assume that the geometric transformations between the views are 2-D translations. Write a program that registers each image with respect to the first image and that fits into the skeleton of the super-resolution algorithm provided.
Guidelines and tips:
1. Download and unzip the file Tiger.zip. You should have the following:
Main_SR.m: this file is the main program.
ImageRegistration.m: this is where you should write your code.
ImageFusion.m: this function fuses your LR images and takes as input the output of your registration function. You should not need to modify the code here.
LR_Tiger_xx.tif: this is the set of 40 LR images, where xx goes from 01 to 40. Use LR_Tiger_01.tif as your reference image for registration. Each LR image is a color image and has 64x64 pixels in each of the three color layers (R, G, B).
HR_Tiger_01.tif: this 512x512 color image is your Holy Grail and is only used to compute the PSNR of your SR image.
PolynomialReproduction_coef.mat: you will use this data to compute the continuous moments from the samples of your LR images. It contains three matrices of size 64x64: Coef_0_0, Coef_1_0 and Coef_0_1. Coef_0_0 is used to compute the continuous moment $m_{0,0}$, Coef_1_0 is for $m_{1,0}$ and Coef_0_1 is for $m_{0,1}$. They are loaded at the beginning of ImageRegistration.m so that you can directly use the matrices when you need to.
2DBspline.mat: this data is the 2-D cubic B-spline used to model the point-spread function of the cameras. It is used during the restoration process and you don't need to use it.
2. First consider the Red layer of each LR image, compute the continuous moments, and find the 2-D translations. Only then consider the Green layer, and then the Blue one. For each LR image, you should then have retrieved three similar 2-D translations.
3. The output of your registration function should be formatted the following way to fit the super-resolution algorithm: Tx_RGB and Ty_RGB are two 40x3 matrices. They correspond respectively to the horizontal and vertical shifts. The row of each matrix corresponds to the image number and the column specifies the color layer: 1 = Red, 2 = Green, 3 = Blue. By definition, Tx_RGB(1, :) = Ty_RGB(1, :) = (0 0 0). Then Tx_RGB(i, :) = $\bigl(dx^{(R)}_i \;\; dx^{(G)}_i \;\; dx^{(B)}_i\bigr)$, and so on.
4. In order to reduce the noise created by the background of the image and obtain accurate moments, you must select a threshold for each layer and set to 0 all the samples that are below it.
5. When reading an image in Matlab, sample values are integers between 0 and 255. Rescale them to between 0 and 1.
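The guidelines above fix the overall shape of the registration function. As a structural sketch only (the project asks for Matlab, and here plain index grids and a synthetic image pair stand in for the Coef matrices and the tif files), the loop can look like this in Python/NumPy:

```python
import numpy as np

def layer_barycenter(layer, Coef_0_0, Coef_1_0, Coef_0_1, thresh):
    # Threshold the background (guideline 4), then form the moments.
    s = np.where(layer < thresh, 0.0, layer)
    m00 = (Coef_0_0 * s).sum()
    return np.array([(Coef_1_0 * s).sum() / m00,   # x_bar = m10 / m00
                     (Coef_0_1 * s).sum() / m00])  # y_bar = m01 / m00

def register(images, coefs, thresholds):
    """images: list of N (R, G, B) layer triples; returns Tx_RGB, Ty_RGB (N x 3)."""
    N = len(images)
    Tx_RGB = np.zeros((N, 3))
    Ty_RGB = np.zeros((N, 3))
    for c in range(3):                              # columns: Red, Green, Blue
        ref = layer_barycenter(images[0][c], *coefs, thresholds[c])
        for i in range(1, N):                       # row of image 1 stays (0 0 0)
            bx, by = layer_barycenter(images[i][c], *coefs, thresholds[c]) - ref
            Tx_RGB[i, c] = bx                       # horizontal shift
            Ty_RGB[i, c] = by                       # vertical shift
    return Tx_RGB, Ty_RGB
```

For each image, the three columns should come out nearly equal (guideline 2), which is a useful sanity check on the chosen thresholds.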
Figure 1: Image super-resolution principles (set of low-resolution images, image registration, HR grid estimation, restoration, super-resolved image).
Figure 2: Model for each of the N cameras. (a) Acquisition model: $f_i(x, y)$ is filtered by $\varphi(x, y)$ and sampled with period T to give $S^{(i)}_{m,n}$. (b) 2-D B-spline $\varphi(x, y)$.