CHAPTER 1
INTRODUCTION
With the recent advances in low-cost imaging solutions and increasing
storage capacities, there is an increased demand for better image quality in a
wide variety of applications involving both image and video processing. While
it is preferable to acquire image data at a higher resolution to begin with, one
can imagine a wide range of scenarios where it is technically not feasible. In
some cases, it is the limitation of the sensor due to low-power requirements as
in satellite imaging, remote sensing, and surveillance imaging. An
improvement in the spatial resolution of still images directly improves the
ability to discern important image features with better precision.
Most of the existing interpolation schemes assume that the original
image is noise free. This assumption, however, is invalid in practice because
noise will be inevitably introduced in the image acquisition process. Usually
denoising and interpolation are treated as two different problems and they are
performed separately. However, this may not yield satisfying results
because the denoising process may destroy the edge structure and introduce
artifacts, which can be further amplified in the interpolation stage. With the
prevalence of inexpensive and relatively low resolution digital imaging devices
(e.g. webcam, camera phone), the demand for high-quality image denoising and
interpolation algorithms is increasing. Hence new interpolation schemes
for noisy images need to be developed for better suppressing the noise-caused
artifacts and preserving the edge structures.
1.1 WAVELET TRANSFORM
Wavelet means 'small wave'. Wavelet analysis is the analysis of a
signal with short-duration, finite-energy functions. These functions transform the signal
under investigation into another representation which presents the signal in a
more useful form. This transformation of the signal is called wavelet
transform. The DWT replaces the infinitely oscillating sinusoidal basis
functions of the Fourier transform with a set of locally oscillating basis
functions called wavelets. In the classical setting, the wavelets are stretched
and shifted versions of a fundamental, real-valued band-pass wavelet ψ(t).
When carefully chosen and combined with shifts of a real-valued low-pass
scaling function φ(t), they form an orthonormal basis expansion for one-
dimensional (1-D) real-valued continuous-time signals. Any finite-energy
analog signal x(t) can be decomposed in terms of wavelets and scaling
functions via

x(t) = Σ_{n} c(n) φ(t - n) + Σ_{j≥0} Σ_{n} d(j, n) 2^{j/2} ψ(2^j t - n)        (1.1)

where c(n) and d(j, n) are the scaling and wavelet coefficients respectively.
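As a concrete illustration of the two-channel analysis behind Eq. (1.1), a minimal single-level Haar DWT can be sketched as follows; the `haar_dwt_level` helper and the sample signal are illustrative, not part of any particular toolbox:

```python
# One level of a discrete wavelet transform, in the spirit of Eq. (1.1):
# the signal is split into scaling (lowpass) and wavelet (highpass)
# coefficients by filtering and downsampling by two.
import math

def haar_dwt_level(x):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # arbitrary sample signal
c, d = haar_dwt_level(x)   # 4 scaling and 4 wavelet coefficients
```

Iterating `haar_dwt_level` on the approximation output `c` yields the multi-level decomposition of Figure 1.1.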


Figure 1.1 3-Level Decomposition DWT
Figure 1.1 shows the 3-level decomposition of the DWT, where H0(z) and
H1(z) are the low-pass and high-pass filters.
1.2 DISADVANTAGES OF DWT
In spite of its efficient computational algorithm and sparse
representation, the real wavelet transform suffers from four fundamental,
intertwined shortcomings.
1.2.1 Oscillations
The first disadvantage of the DWT is that its basis functions oscillate between
negative and positive values around the discontinuities of a function, which
complicates singularity extraction. This oscillation of the basis functions
around singularities is due to the fact that wavelets are band-pass functions;
hence, these basis functions cannot represent an infinite increase or decrease
of the function. Moreover, the common assumption that wavelet coefficients at
the singularities of the function yield large values is overstated, since it is
quite possible to have a small or zero coefficient at a singularity.
1.2.2 Shift Variance
The second disadvantage of real discrete wavelet transform is its shift
variance property. A small shift of the signal greatly perturbs the wavelet
coefficient oscillation pattern around singularities. Shift variance also
complicates wavelet-domain processing: algorithms must be made capable of
coping with the wide range of possible wavelet coefficient patterns caused by
shifted singularities.
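This sensitivity is easy to demonstrate. In the sketch below (again using the Haar filters, purely for illustration), the detail coefficients of a step edge change completely when the signal is shifted by a single sample:

```python
# Shift variance of the decimated DWT: Haar detail coefficients of a step
# signal change markedly when the signal is shifted by one sample.
import math

def haar_detail(x):
    """Highpass-filter-and-downsample branch of a single Haar DWT level."""
    s = 1.0 / math.sqrt(2.0)
    return [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]

step    = [0, 0, 0, 0, 1, 1, 1, 1]   # step edge between samples 3 and 4
shifted = [0, 0, 0, 1, 1, 1, 1, 1]   # same edge shifted by one sample

d0 = haar_detail(step)     # edge falls between two analysis blocks: all zeros
d1 = haar_detail(shifted)  # edge falls inside an analysis block: large detail
```

All detail coefficients of `step` are exactly zero, while the one-sample shift produces a coefficient of magnitude 1/√2.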
1.2.3 Aliasing
Aliasing is a problem in signal processing applications that usually arises
when the Nyquist condition is not satisfied. The Nyquist condition states that
the sampling frequency has to be at least twice the bandwidth of the input
signal; when it is not satisfied, the sampled signal cannot be perfectly
reconstructed. For the real DWT, aliasing has a similar meaning. Since
the filters that are used iteratively for the calculation of wavelet coefficients are
non-ideal low-pass and high-pass filters, any change in the coefficients disturbs the
balance between the forward and backward transformation. Moreover, most of
the signal processing applications use thresholding, filtering and quantization
methods which change the wavelet transform coefficients of the signal and
these changes result in the construction of another signal which is different
than the original input signal.
1.2.4 Lack of Directionality
Lack of directionality can be considered the fourth shortcoming of the
real DWT. Whereas the basis functions of the Fourier transform are
directional plane waves in higher dimensions, the basis functions of the
separable real DWT mix several orientations in a single sub-band (the HH
wavelet, for example, mixes the +45° and -45° directions), and this lack of
directionality prevents the detection of geometric image features like edges
and ridges.

Figure 1.2 Typical Wavelets Associated with the 2-D Separable DWT (a)
Wavelets in the space domain (LH, HL, HH) (90°, 0°, ±45° respectively) (b)
Fourier spectrum of each wavelet in the 2-D frequency domain
1.3 STATIONARY WAVELET TRANSFORM (SWT)
In order to overcome the translation-variant nature of the discrete
wavelet transform and to get a more complete characterization of the analyzed
signal, the undecimated wavelet transform [22] was proposed. The general idea
behind it is that it does not decimate the signal. Thus it produces more precise
information for the frequency localization. From the computational point of
view the undecimated wavelet transform has larger storage space requirements
and involves more computations.
Three main algorithms for computing the undecimated wavelet transform exist:
À trous algorithm
Beylkin's algorithm
Undecimated algorithm



(i) The à trous algorithm
The most often mentioned algorithm is the "algorithme à trous". It
modifies the standard Discrete Wavelet Transform (DWT) decomposition
scheme by modifying the low pass and high pass filters at each consecutive
level. It imitates the sub-sampling of the filtered signal by up-sampling the
low-pass filter. This up-sampling is done by including zeros between each of
the filter's coefficients at each level. The detail coefficients are computed as the
difference between the low-pass filtered images from two consecutive levels. The
inverse transform is computed by adding the detail coefficients from all levels
to the final low-resolution image. We have not implemented and tested the
performance of this algorithm.
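A 1-D sketch of this scheme is given below; the [1/4, 1/2, 1/4] low-pass filter and the circular border handling are illustrative choices, but they show the holes ("trous") inserted at each level and the purely additive inverse transform:

```python
# A 1-D sketch of the "algorithme à trous": the lowpass filter is upsampled
# (holes/zeros inserted, i.e. a growing step between taps) at each level,
# details are differences of consecutive lowpass outputs, and the inverse
# transform is a plain sum.
def atrous_lowpass(x, level):
    """Filter x with [1/4, 1/2, 1/4] upsampled by 2**level (circular borders)."""
    step = 2 ** level
    n = len(x)
    return [0.25 * x[(i - step) % n] + 0.5 * x[i] + 0.25 * x[(i + step) % n]
            for i in range(n)]

def atrous_decompose(x, levels):
    approx, details = list(x), []
    for j in range(levels):
        smoother = atrous_lowpass(approx, j)
        details.append([a - s for a, s in zip(approx, smoother)])
        approx = smoother
    return approx, details

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
approx, details = atrous_decompose(x, 2)

# Inverse: final approximation plus all detail planes restores the signal.
recon = list(approx)
for d in details:
    recon = [r + v for r, v in zip(recon, d)]
```

Note that every detail plane has the same length as the input, reflecting the absence of decimation.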
(ii) Beylkin's algorithm
The non-decimation approach is one way to think about the
undecimated wavelet transform. Another way is to think about the shift
invariance. To be completely shift invariant the transform has to be computed
for all possible shifts of the original signal. It is realized that the coefficients
from all shifts are not necessary. In fact, the shifts by any odd number will give
the same coefficients as the shift by one. The same is true for any even number
- it is equivalent to a shift by zero. Generally, this means that computing the
DWT for the signal and its shift by one and repeating this for each level, will
give all possible wavelet coefficients. For two-dimensional signals the shift
has to be extended to a shift by one in each direction -horizontal, vertical and
diagonal.
(iii) Undecimated algorithm
This algorithm is based on the idea of no decimation. It applies the
wavelet transform and omits both the down-sampling in the forward and the
up-sampling in the inverse transform. More precisely, it applies the transform
at each point of the image, saves the detail coefficients and uses the low-
frequency coefficients for the next level. The size of the coefficient array
does not diminish from level to level. By using all coefficients at each level,
we get very well localized high-frequency information.
1.4 DUAL TREE COMPLEX WAVELET TRANSFORM (DTCWT)

A dual-tree complex wavelet transform (DT-CWT) [10] is introduced to
alleviate the drawbacks caused by the decimated DWT. It is shift invariant and
has improved directional resolution when compared with that of the decimated
DWT. Moreover, the coefficients of the DT-CWT are complex numbers, which
also provide phase information, unlike the coefficients of the real wavelet
transform. These properties make the DT-CWT a more attractive tool for image
resolution enhancement than the DWT. A good way forward is to study the
properties of the Fourier transform and try to achieve these properties in the
DWT, since the investigation of the DT-CWT stems from Fourier analysis.
The first advantage of the Fourier transform over the real DWT is
that the magnitudes of its coefficients do not oscillate between positive and
negative. Moreover, the Fourier transform is completely shift invariant: the
only effect of a time shift of the input signal is a simple linear negative or
positive phase addition according to the direction of the time shift. Another
advantage of the Fourier transform is that no aliasing cancellation operation
is needed during the reconstruction of the transformed signal. Lastly, the
basis functions of the Fourier transform, sinusoids, retain their
directionality in higher dimensions. Guided by these properties of the Fourier
transform, researchers developed the dual-tree complex wavelet transform by
carrying the key features of the Fourier transform over to the wavelet setting.
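The shift behaviour of Fourier magnitudes can be checked directly; the small DFT below is a naive illustration, not an efficient FFT:

```python
# Shift invariance of Fourier magnitudes: circularly shifting a signal
# changes only the phase of its DFT coefficients, not their magnitudes.
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.5]   # arbitrary signal
shifted = x[-1:] + x[:-1]                        # circular shift by one sample

mags = [abs(c) for c in dft(x)]
mags_shifted = [abs(c) for c in dft(shifted)]    # identical magnitudes
```

The magnitude spectra of the original and shifted signals agree to machine precision, which is exactly the stability the complex-coefficient DT-CWT aims to inherit.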
By observing the equality e^{jΩt} = cos(Ωt) + j sin(Ωt), it is possible to
define a wavelet transform with a complex scaling function and a complex
wavelet:

ψ_c(t) = ψ_r(t) + j ψ_i(t)        (1.2)


When this equation is compared with its Fourier counterpart, the first
part is real and even while the second part is imaginary and odd.
The coefficients d(j, n) and c(n) for the real DWT can be computed by
using a very efficient linear time complexity algorithm. This algorithm
recursively applies a discrete filter bank (FB) consisting of two different
discrete-time filters, one low-pass and the other high-pass, together with
upsampling and downsampling operations. These filters can efficiently represent
the desired wavelets and scaling functions with properties such as compact time
support and fast frequency decay. Fast frequency decay keeps the wavelet
transform as local as possible, and compact time support means that the scaling
and wavelet functions are 0 outside a certain time interval.
The theory of the DT-CWT offers two different approaches to
determining the basis functions. The first approach tries to find a single
complex wavelet ψ_c(t) that forms an orthonormal or biorthogonal basis, while
the second approach tries to find ψ_r(t) and ψ_i(t) separately, where each
individual basis function forms an orthonormal or biorthogonal basis. In the
latter approach the two basis functions are generated using two FB trees. The
filters used for the first real part and the second real part differ from each
other so as to achieve the Hilbert pair condition of the basis functions, and
many different FIR filter design techniques exist for the FB of the DT-CWT. In
order to perform the DT-CWT, these filters have to satisfy the following
properties.
1. Approximate half sample delay property
2. Perfect reconstruction
3. Finite support
4. Vanishing moments
5. Linear phase response



Figure 1.3 Two Level Decomposition of DT-CWT in 2-D
If we consider the DT-CWT in higher dimensions, its advantage in
representing singularities enables us to process image edges better than the
real DWT. First of all, the 2-D DT-CWT separates the input signal into 6
different sub-bands, each oriented along one of ±15°, ±45° and ±75° at each
scale.

Figure 1.4 Typical Wavelets Associated with the 2-D Dual-Tree Wavelet
Transform (a) Wavelets in the space domain (-75°, -45°, -15°, +15°, +45°, +75°
respectively) (b) Fourier spectrum of each wavelet in the 2-D frequency
domain.


1.5 WAVELET PACKET TRANSFORM (WPT)
In the DWT, the analysis of the high-frequency bands becomes less
refined as the frequency increases, since only the low-frequency band is split
at each level. In order to overcome this drawback of the DWT, the wavelet
packet transform [21] was proposed. The wavelet packet transform (WPT) is a
generalization of the dyadic wavelet transform (DWT) that offers a rich set of
decomposition structures. WPT was first introduced by Coifman et al. for
dealing with the nonstationarities of the data better than DWT does. WPT is
associated with a best basis selection algorithm. The best basis selection
algorithm selects a decomposition structure from the library of possible
bases by minimizing a data-dependent cost function. Wavelet decomposition is
achieved by iterative two channel perfect reconstruction filter bank operations
over the low frequency band at each level. Wavelet packet decomposition, on
the other hand, is achieved when the filter bank is iterated over all frequency
bands at each level. The final decomposition structure will be a subset of that
full tree, chosen by the best basis selection algorithm.
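The best basis search can be sketched as a recursive comparison between the cost of a band and the summed cost of its two half-bands; the Haar split and the l1 sparsity cost below are illustrative choices, not the only ones used in practice:

```python
# Best-basis flavour of the wavelet packet transform: split every band with
# a Haar filter pair and keep a node only if its additive cost (here the l1
# norm) beats the summed cost of its children.
import math

def haar_split(x):
    """Haar lowpass/highpass split with downsampling (assumes even length)."""
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    hi = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return lo, hi

def cost(coeffs):
    return sum(abs(c) for c in coeffs)   # additive sparsity cost

def best_basis(x, depth):
    """Return (cost, leaves) of the cheapest packet decomposition of x."""
    if depth == 0 or len(x) < 2:
        return cost(x), [x]
    lo, hi = haar_split(x)
    c_lo, leaves_lo = best_basis(lo, depth - 1)
    c_hi, leaves_hi = best_basis(hi, depth - 1)
    if c_lo + c_hi < cost(x):            # children are sparser: keep splitting
        return c_lo + c_hi, leaves_lo + leaves_hi
    return cost(x), [x]                  # this node itself is the best basis

x = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]   # a single coarse edge
total_cost, leaves = best_basis(x, 3)
```

For this signal the search keeps splitting the low band (the edge compacts into one coefficient) while leaving the all-zero high bands unsplit, which is exactly the data-dependent pruning of the full packet tree described above.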
1.6 IMAGE RESOLUTION ENHANCEMENT
Image resolution describes the detail an image holds. Image
enhancement improves the quality (clarity) of images for human viewing.
Higher resolution means more image details. Image resolution can be
measured in various ways. Basically, resolution quantifies how close lines can
be to each other and still be visibly resolved. Resolution units can be tied to
physical sizes (e.g. lines per mm, lines per inch) or to the overall size of a
picture (e.g. lines per picture height). The resolution of digital images can
be described in many different
ways.
1.6.1 Pixel resolution
The term resolution is often used for a pixel count in digital imaging.
An image of N pixels high by M pixels wide can have any resolution less than
N lines per picture height. But when the pixel counts are referred to as


resolution, the convention is to describe the pixel resolution with the set of two
positive integer numbers, where the first number is the number of pixel
columns (width) and the second is the number of pixel rows (height), for
example as 640 by 480. Another popular convention is to cite resolution as the
total number of pixels in the image, typically given as number of megapixels,
which can be calculated by multiplying pixel columns by pixel rows and
dividing by one million.
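For example, the two conventions above give, for a 640 by 480 image:

```python
# Pixel-count "resolution": columns x rows, and the megapixel figure
# obtained by dividing the total pixel count by one million.
width, height = 640, 480
total_pixels = width * height          # 307200 pixels
megapixels = total_pixels / 1_000_000  # about 0.31 megapixels
```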
1.6.2 Spatial resolution
The measure of how closely lines can be resolved in an image is called
spatial resolution, and it depends on properties of the system creating the
image, not just the pixel resolution in pixels per inch (ppi). For practical
purposes the clarity of the image is decided by its spatial resolution, not the
number of pixels in an image. In effect, spatial resolution refers to the number
of independent pixel values per unit length.
1.6.3 Gray-level Resolution
Gray-level resolution is the smallest discernible change in gray level.
Due to hardware considerations, the number of gray levels is usually taken to
be an integer power of 2.
1.6.4 Spectral resolution
Color images distinguish light of different spectra. Multi-spectral
images resolve even finer differences of spectrum or wavelength than is
needed to reproduce color. That is, they can have higher spectral resolution
(the strength of each band that is created).
Image enhancement processes consist of a collection of techniques that
seek to improve the visual appearance of an image or to convert the image to a
form better suited for analysis by a human or a machine. In an image
enhancement system, there is no conscious effort to improve the fidelity of a
reproduced image with regard to some ideal form of the image, as is done in


image restoration. Image resolution enhancement is a useful pre-process for
many satellite image processing applications, such as vehicle recognition,
bridge recognition, and building recognition. Image resolution
enhancement techniques can be categorized into two major classes according
to the domain that they are applied in:
Spatial domain methods
The term spatial domain refers to the aggregate of pixels composing an
image. Spatial domain methods are procedures that operate directly on these
pixels, using statistical and geometric data extracted directly from the
input image.
Transform domain methods
Transform-domain techniques use transformations to achieve the image
resolution enhancement.


Figure 1.5 Resolution Enhancement
1.7 SUPER RESOLUTION
Any super resolution algorithm/method is capable of producing an
image with a resolution greater than that of the input. Typically, the input is
a sequence of low resolution (LR) images displaced from each other and having
a common region of interest.



Figure 1.6 Block Diagram of Super Resolution Technique
The spatial resolution could be increased by reducing the pixel size or
increasing the chip size, acting on the sensor manufacturing techniques. One
promising approach is to use signal processing techniques to obtain HR images
from one or more low-resolution (LR) images. The major advantage of the signal
processing approach is that it is less expensive and all existing LR imaging
systems can still be useful. We can consider two main source categories,
single-image and multiple-image. In the first case we obtain an HR image from
only one LR image; in the second case HR images are obtained from LR video
frames.
1.7.1 Single-frame Super Resolution
In this case the source is a single raster image. Single-frame Super
Resolution (SR) is also known as image scaling, interpolation, zooming and
enlargement.

Figure 1.7 Single-frame Super Resolution





1.7.2 Multi-frame Super Resolution
Multi-frame super resolution (SR) image reconstruction is the process
of combining the information from multiple low resolution (LR) frames of the
same scene to estimate a high resolution (HR) image.

Figure 1.8 Multi-frame Super Resolution
1.8 OBJECTIVE OF THE PROJECT
To demonstrate denoising and resolution enhancement algorithm for
noisy images using DTCWT based techniques to obtain denoised high
resolution images.
To extend this algorithm using different single-frame and multi-frame
super resolution reconstruction methods and wavelet packet transform.
1.9 CHAPTER ORGANIZATION
Chapter 2 reviews the literature on different image denoising and
resolution enhancement methods.
Chapter 3 discusses the existing interpolation techniques.
Chapter 4 describes the DTCWT based image denoising and resolution
enhancement.
Chapter 5 presents the simulation results and discussion.
Chapter 6 concludes the work.



CHAPTER 2
LITERATURE REVIEW
2.1 INTRODUCTION
Extensive research has been carried out by various authors in the area of
image denoising and resolution enhancement. The results reported by many
authors are
surveyed and studied. Some of the selected research work done in this field is
discussed in the following paragraphs.
2.2 WAVELET DOMAIN IMAGE INTERPOLATION VIA STATISTICAL
ESTIMATION
Ying Zhu, Stuart C. Schwartz and Michael T. Orchard [15] have
proposed a new wavelet domain image interpolation scheme based on
statistical signal estimation. A linear composite MMSE estimator is
constructed to synthesize the detailed wavelet coefficients as well as to
minimize the mean squared error for high-resolution signal recovery. Based on
a discrete time edge model, it uses low-resolution information to characterize
local intensity changes and perform resolution enhancement accordingly. A
linear MMSE estimator follows to minimize the estimation error. Local image
statistics are involved in determining the spatially adaptive optimal estimator.
With knowledge of edge behavior and local signal statistics, the composite
estimation is able to enhance important edges and to maintain the intensity
consistency along edges. Strong improvement in both the visual quality and the
PSNRs of the interpolated images has been achieved by the proposed
estimation scheme.
The edge behavior is characterized by a parameterized discrete time
signal to accurately locate edges from low-resolution samples. A linear
composite minimum-mean-squared-error estimator is proposed to solve the
estimation problem. The composite estimation involves a parametric edge
model and local image statistics. First, local edge behavior is determined from


the low-resolution samples and used to synthesize the detailed coefficients.
Then, a linear estimator minimizes the estimation error using local statistical
information of the enhanced signals. Both analysis and experiments reveal that
the knowledge of edge behavior and local image statistics enable the composite
estimator to enhance the cross-edge sharpness and maintain the intensity
consistency along edges, which are essential for high image quality.
2.3 IMAGE RESOLUTION ENHANCEMENT USING WAVELET
DOMAIN HIDDEN MARKOV TREE AND COEFFICIENT SIGN
ESTIMATION
A. Temizel [18] has proposed an algorithm that assumes the low
resolution image is the approximation sub-band of a higher resolution image
and attempts to estimate the unknown detail coefficients to reconstruct a high
resolution image. A subset of these recent approaches utilized probabilistic
models to estimate these unknown coefficients. Particularly, Hidden Markov
Tree (HMT) based methods using Gaussian mixture models have been shown
to produce promising results. However, one drawback of these methods is that,
as the Gaussian is symmetrical around zero, signs of the coefficients generated
using this distribution function are inherently random, adversely affecting the
resulting image quality.
In the HMT methodology, once the states of the coefficients are
estimated, coefficient magnitudes are assigned randomly using the Gaussian
distribution which is associated with the state. As Gaussian distributions are
symmetrical around zero, coefficients generated using these distributions have
an equal chance of being assigned a negative or a positive sign. The method makes
use of the fact that the coefficient sign and magnitude information are
statistically independent to estimate coefficient magnitude and sign separately.
Once the state of the unknown coefficient is found, the magnitude is generated
using a random number generator with this distribution function. For the
coefficient sign estimation, we make use of an observation which states that
there is higher correlation among the corresponding coefficients between high-


pass wavelet filtered LR image and the unknown high-frequency sub-bands to
be estimated.
2.4 MAP-BASED IMAGE SUPER-RESOLUTION RECONSTRUCTION
Xueting Liu, Daojin Song, Chuandai Dong, and Hongkui Li [4] have
proposed a new super-resolution algorithm to the problem of obtaining a high-
resolution image from several low-resolution images that have been sub-
sampled. The algorithm is based on the MAP framework, solving the
optimization by proposed iteration steps. At each iteration step, the
regularization parameter is updated using the partially reconstructed image
solved at the last step. The MAP approach provides a flexible and convenient
way to model a priori knowledge to constrain the solution.
The proposed algorithm is tested on Lena images. The results of the
experiments indicate that the proposed algorithm has considerable
effectiveness in that it can not only choose and update the regularization
parameter automatically, but also obtain the expected high resolution
reconstructed image.
2.5 SUPER RESOLUTION IMAGING: A SURVEY OF CURRENT
TECHNIQUES
G. Cristobal, E. Gil, F. Sroubek, J. Flusser, C. Miravet, and F. B.
Rodríguez [5] have compared many super resolution algorithms. Techniques
for recovering the original image include blind deconvolution (to remove blur)
and super resolution (SR). The stability of these methods depends on having
more than one image of the same frame. Differences between images are
necessary to provide new information, but they can be almost unperceivable.
State-of-the-art SR techniques achieve remarkable results in resolution
enhancement by estimating the subpixel shifts between images, but they lack
any apparatus for calculating the blurs.
After introducing a review of some SR techniques, two recently
developed SR methods are introduced. First is a variational method that
minimizes a regularized energy function with respect to the high resolution


image and blurs. Here a unified way to simultaneously estimate the blurs and
the high resolution image is established. By estimating blurs we automatically
estimate shifts with subpixel accuracy, which is inherent for good SR
performance. Second, an innovative learning-based algorithm using a neural
architecture for SR is described. Comparative experiments on real data
illustrate the robustness and utilization of both methods.
2.6 WAVELET DOMAIN IMAGE RESOLUTION ENHANCEMENT
USING CYCLE SPINNING AND EDGE MODELLING
A. Temizel and T. Vlachos [19] have proposed a wavelet domain image
resolution enhancement method that adopts the cycle-spinning methodology. An
initial high-resolution approximation to the original image is obtained by
means of
zero-padding in the wavelet domain. This is further processed using the cycle-
spinning methodology which reduces ringing. A critical element of the
algorithm is the adoption of a simplified edge profile suitable for the
description of edge degradations such as blurring due to loss of resolution.
Linear regression using a minimal training set of high-resolution originals is
finally employed to rectify the degraded edges.
An initial approximation to the unknown HR image is generated using
wavelet-domain zero padding (WZP). The decimated wavelet transform is not
shift-invariant and as a result, distortion of wavelet coefficients, due to
quantisation of coefficients in compression applications or non-exact
estimation of high-frequency coefficients in resolution enhancement
applications (including zero padding of coefficients as in WZP), introduces
cyclostationarity into the image which manifests itself as ringing in the
neighbourhood of discontinuities.
The main elements of this algorithm were zero-padding of high-
frequency wavelet sub-bands, cycle spinning to reduce ringing arising from
zero-padding and finally edge rectification to alleviate blurring due to the
unavailability of high spatial frequency information. The edge rectification
algorithm works in a variable-separable way, first horizontally then vertically.


As horizontal and vertical edges in natural images have the potential of
different degrees of sharpness, for example due to non-isotropic sensors, it was
found beneficial to calculate different estimator weights for each direction.
This was based on the adoption of profile model and the parameterisation and
rectification of blurring using linear regression.
2.7 REGULARITY-PRESERVING IMAGE INTERPOLATION
W. Knox Carey, Daniel B. Chuang, and Sheila S. Hemami [3] have
proposed a wavelet-based interpolation method that estimates the regularity of
edges by measuring the decay of wavelet transform coefficients across scales
and preserves the underlying regularity by extrapolating a new sub-band to be
used in image resynthesis.
Traditional interpolation methods operate in the time domain by first
interleaving the known samples with zeros and then lowpass filtering the
interleaved signal to fill in the missing samples. In contrast, the regularity-
preserving interpolation method synthesizes a new wavelet sub-band based on
the decay of the known wavelet transform coefficients. The image to be
interpolated can be considered to be the lowpass output of a wavelet analysis
stage. The original image can therefore be input to a single wavelet synthesis
stage along with the corresponding high frequency subbands to produce an
image interpolated by a factor of two in both directions.
The unknown high-frequency sub-bands are created separably by a two-
step process. First, edges with significant correlation across scales in each row
are identified. The rate of decay of the wavelet coefficients near these edges is
then extrapolated to approximate the high-frequency sub-band required to
resynthesize a row of twice the original size. The same procedure is then
applied to each column of the row-interpolated image. The algorithm produces
visibly sharper edges than traditional techniques and exhibits an average peak
signal-to-noise ratio (PSNR) improvement over bilinear and bicubic
techniques.



CHAPTER 3
IMAGE INTERPOLATION
Image interpolation occurs in all digital photos at some stage. It happens
anytime you resize or remap (distort) an image from one pixel grid to another.
Image resizing is necessary when we need to increase or decrease the total
number of pixels.






Original Image After Interpolation

Figure 3.1 Interpolation
Image resize algorithms try to interpolate suitable image intensity values
for those pixels of the resized image that are not directly mapped to the
original image. There are three major algorithms for image resizing.
3.1 NEAREST NEIGHBOUR INTERPOLATION
In the nearest neighbour algorithm, the intensity value for the point
v(x, y) is taken from the intensity f(x, y) of the nearest neighbouring pixel
among those mapped from the original image. The logic behind the
approximation is the equation below:

v(x, y) = f(round(x), round(y))        (3.1)

i.e. the pixel of interest simply copies the intensity of whichever original
pixel lies closest to its mapped position.
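A direct sketch of this rule for a tiny 2 by 2 image follows; the `nearest_neighbour_resize` helper and its border clamping are illustrative implementation choices:

```python
# Nearest-neighbour resize of a tiny image: each output pixel copies the
# intensity of the closest pixel in the original grid, as in Eq. (3.1).
def nearest_neighbour_resize(img, s):
    """Upscale a 2-D list `img` by factor s, clamping indices at the border."""
    h, w = len(img), len(img[0])
    out_h, out_w = int(h * s), int(w * s)
    return [[img[min(h - 1, int(round(y / s)))][min(w - 1, int(round(x / s)))]
             for x in range(out_w)]
            for y in range(out_h)]

img = [[10, 20],
       [30, 40]]
big = nearest_neighbour_resize(img, 2)   # 4x4 image of replicated pixels
```

Each source pixel is simply replicated into a block, which is what gives nearest-neighbour zooming its characteristic blocky look.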


3.2 BILINEAR INTERPOLATION
In bilinear interpolation, the intensity of the zoomed image pixel P is
defined by the weighted sum of the 4 mapped neighbouring pixels. If the
zooming factor is s, then the mapped pixel point in the original image is
given by r and c as follows:

r = floor(x / s),  c = floor(y / s)        (3.2)

and the distances from the point of interest to the mapped pixel can be
obtained as follows:

Δr = x / s - r,  Δc = y / s - c        (3.3)

So the 4 neighbouring pixels can be defined as

{f(r, c), f(r, c + 1), f(r + 1, c), f(r + 1, c + 1)}        (3.4)

By using these values we can approximate the intensity of the pixel of
interest as follows:

v(x, y) = (1 - Δr)(1 - Δc) f(r, c) + (1 - Δr) Δc f(r, c + 1)
        + Δr (1 - Δc) f(r + 1, c) + Δr Δc f(r + 1, c + 1)        (3.5)
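Equations (3.2)-(3.5) translate almost literally into code. The sketch below samples a 2 by 2 image at its centre; the `bilinear` helper and its border clamping are illustrative:

```python
# Bilinear interpolation of one point: a weighted sum of the four mapped
# neighbours, with weights taken from the fractional offsets (Eqs. 3.2-3.5).
def bilinear(img, x, y):
    """Sample 2-D list `img` at real-valued (row, col) coordinates (x, y)."""
    r, c = int(x), int(y)            # mapped pixel, as in Eq. (3.2)
    dr, dc = x - r, y - c            # fractional distances, Eq. (3.3)
    r2 = min(r + 1, len(img) - 1)    # clamp the 4-neighbourhood at the border
    c2 = min(c + 1, len(img[0]) - 1)
    return ((1 - dr) * (1 - dc) * img[r][c] + (1 - dr) * dc * img[r][c2]
            + dr * (1 - dc) * img[r2][c] + dr * dc * img[r2][c2])

img = [[0.0, 10.0],
       [20.0, 30.0]]
centre = bilinear(img, 0.5, 0.5)     # average of all four pixels: 15.0
```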
3.3 BICUBIC INTERPOLATION
When using bi-cubic interpolation, the zoomed image intensity v(x, y) is
defined using the weighted sum of the 16 mapped neighbouring pixels of the
original image. Let the zooming factor be s, and let the mapped pixel point in
the original image be given by r and c. Then the neighbourhood matrix can be
defined as

F = [ f(r-1, c-1)  f(r-1, c)  f(r-1, c+1)  f(r-1, c+2)
      f(r,   c-1)  f(r,   c)  f(r,   c+1)  f(r,   c+2)
      f(r+1, c-1)  f(r+1, c)  f(r+1, c+1)  f(r+1, c+2)
      f(r+2, c-1)  f(r+2, c)  f(r+2, c+1)  f(r+2, c+2) ]        (3.6)

Using the bi-cubic algorithm,

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_ij x^i y^j        (3.7)

The coefficients a_ij can be found using the Lagrange equation. Over the node
positions {-1, 0, 1, 2}, the Lagrange basis polynomials are

L_i(t) = Π_{k≠i} (t - k) / (i - k),   i, k ∈ {-1, 0, 1, 2}        (3.8)

and the horizontal and vertical weight vectors follow by evaluating them at
the fractional offsets Δr and Δc of the point of interest:

a_i = L_i(Δr)        (3.9)

b_j = L_j(Δc)        (3.10)

When implementing this, masks defining a_i and b_j are built as matrices and
then applied to the matrix containing the selected 16 points, in order to
reduce the complexity of the algorithm and the calculation time. Zero padding
is added around the original image to remove the zero reference error that
would otherwise occur.

v(x, y) = aᵀ F b        (3.11)
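One common way to realize the a_i and b_j weights is 4-point Lagrange interpolation applied separably, first along the rows of the 4 by 4 neighbourhood and then along the resulting column. The sketch below assumes sample positions {-1, 0, 1, 2}; on a linear intensity ramp the cubic fit is exact, which makes it easy to check:

```python
# Bicubic interpolation sketched separably: 4-point cubic Lagrange
# interpolation along rows, then along the resulting column, matching the
# 4x4 neighbourhood of Eq. (3.6).
def cubic_lagrange(p, t):
    """Interpolate 4 samples p taken at positions -1, 0, 1, 2 at offset t
    (Lagrange basis weights, t in [0, 1] between p[1] and p[2])."""
    w = [-t * (t - 1) * (t - 2) / 6.0,
         (t + 1) * (t - 1) * (t - 2) / 2.0,
         -(t + 1) * t * (t - 2) / 2.0,
         (t + 1) * t * (t - 1) / 6.0]
    return sum(wi * pi for wi, pi in zip(w, p))

def bicubic(block4x4, tx, ty):
    """block4x4: the 16 neighbours; (tx, ty): fractional offsets in [0, 1]."""
    col = [cubic_lagrange(row, tx) for row in block4x4]
    return cubic_lagrange(col, ty)

# On a linear ramp f(r, c) = r + c the cubic fit reproduces the ramp exactly.
block = [[r + c for c in range(4)] for r in range(4)]
val = bicubic(block, 0.25, 0.75)   # sample between the four centre pixels
```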
3.4 EDGE DIRECTED INTERPOLATION
High quality interpolated images are obtained when the pixel values are
interpolated according to the edges of the original images. A number of edge-
directed interpolation (EDI) methods that make use of the local statistical and


geometrical properties to interpolate the unknown pixel values have been shown
to obtain interpolated images of high visual quality without the use of an
edge map. The New Edge-Directed Interpolation (NEDI) method [13] models the
natural image as a second-order locally stationary Gaussian process and
estimates the unknown pixels using simple linear prediction. A covariance of
the image pixels in a local block (training window) is required for the
computation of the prediction coefficients. The NEDI method preserves the
sharpness and continuity of the interpolated edges. However, this method
considers only the four nearest neighbouring pixels along the diagonal edges
and not all the unknown pixels are estimated from the original image, which
degrades the quality of the interpolated image. Moreover, the NEDI method has difficulty in texture interpolation because of the large kernel size, which reduces the fidelity of the interpolated image, thus lowering the peak signal-to-noise ratio (PSNR) level. The NEDI is applied along the edges by estimating the covariance of the surrounding neighbours as shown below:
Suppose the low resolution image is X(i, j) and the high resolution image is Y(i, j). Assume that the magnification ratio is two, i.e., Y(2i, 2j) = X(i, j).
The high-resolution covariance is estimated by replacing the high-resolution pixel pairs with the low-resolution pixel pairs that couple pixels along the same orientation but at a different resolution; the high-resolution covariance is linked to the low-resolution covariance by a quadratic-root function. As the resolution gets higher and higher, the difference between the covariance of the high-resolution pixels and that of the low-resolution pixels becomes negligible.
The other half of the missing pixels is interpolated by rotating the image by 45° and repeating the above steps, which yields the new interpolated pixels at the increased resolution.
Finally, the image is interpolated by a factor of two, which results in a high resolution image.



3.5 AN EDGE-GUIDED IMAGE INTERPOLATION USING LMMSE
While the commonly used linear methods, such as pixel duplication, bilinear interpolation, and bicubic convolution interpolation, have the advantages of simplicity and fast implementation, they suffer from some inherent defects, including block effects, blurred details and ringing artifacts around edges.
Preserving edge structures is a challenge to image interpolation
algorithms that reconstruct a high-resolution image from a low-resolution
counterpart. A new edge-guided nonlinear interpolation technique [13] through
directional filtering and data fusion faithfully reconstructs the edges in the
original scene. For a pixel to be interpolated, two observation sets are defined
in two orthogonal directions, and each set produces an estimate of the pixel
value. These directional estimates, modelled as different noisy measurements
of the missing pixel are fused by the linear minimum mean square-error
estimation (LMMSE) technique into a more robust estimate, using the statistics
of the two observation sets.

Figure 3.2 Formation of LR Image from an HR Image
The edge direction is the most important information for the
interpolation process. To extract and use this information, we partition the
neighboring pixels of each missing sample into two directional subsets that are
orthogonal to each other. From each subset, a directional interpolation is made,
and then the two interpolated values are fused to arrive at an LMMSE estimate


of the missing sample. We recover the HR image in two steps. First, those
missing samples at the center locations surrounded by four LR samples are
interpolated. Second, the other missing samples and are interpolated with the
help of the already recovered samples
a) Interpolation of Samples x(2n, 2m)
We can interpolate the missing HR sample x(2n, 2m) along two orthogonal directions: the 45° diagonal and the 135° diagonal. Denote by y45(2n, 2m) and y135(2n, 2m) the two directional interpolation results obtained by some linear method, such as bilinear interpolation, bicubic convolution, or spline interpolation:
y45(2n, 2m) = x(2n, 2m) + v45(2n, 2m) (3.12)
y135(2n, 2m) = x(2n, 2m) + v135(2n, 2m) (3.13)
where the random noise variables v45 and v135 represent the interpolation errors in the corresponding directions. To fuse the two directional measurements, they are written in matrix form,
Y = 1·x + V (3.14)
where Y = [y45, y135]^T, 1 = [1, 1]^T, V = [v45, v135]^T (3.15)
The LMMSE of x can be calculated with the assumption that V is zero mean and uncorrelated with x:
x̂ = μx + R_xY R_Y⁻¹ (Y − E[Y]) (3.16)
where R_xY = E[(x − μx)(Y − E[Y])^T] and R_Y = E[(Y − E[Y])(Y − E[Y])^T] (3.17)




Figure 3.3 Formation of HR Sample x(2n, 2m)
b) Interpolation of Samples x(2n−1, 2m) and x(2n, 2m−1)
After the missing HR samples x(2n, 2m) are estimated, the other missing samples x(2n−1, 2m) and x(2n, 2m−1) can be estimated similarly, but now with the aid of the just estimated HR samples. Finally, the whole HR image is reconstructed by the proposed edge-guided LMMSE interpolation technique.


(a) (b)
Figure 3.4 Interpolations of the Missing HR Samples (a) x(2n−1, 2m) and (b) x(2n, 2m−1)
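A stripped-down version of the directional fusion can be sketched as below: the two diagonal averages act as the directional measurements, the squared diagonal differences act as crude proxies for the error variances, and the fusion reduces to inverse-variance weighting, which is what the LMMSE estimator gives for zero-mean, uncorrelated errors under a flat prior. The full method in [13] estimates these statistics from the two observation sets, so this is an illustration rather than a reimplementation:

```python
import numpy as np

def lmmse_fuse(lr, n, m):
    """Estimate the missing HR sample centred among four diagonal LR
    neighbours by fusing the two directional estimates."""
    nw, ne = lr[n, m], lr[n, m + 1]
    sw, se = lr[n + 1, m], lr[n + 1, m + 1]
    y45, y135 = (ne + sw) / 2.0, (nw + se) / 2.0   # directional estimates
    v45, v135 = (ne - sw) ** 2, (nw - se) ** 2     # crude error variances
    eps = 1e-12                                    # avoid 0/0 in flat areas
    w45 = (v135 + eps) / (v45 + v135 + 2 * eps)    # inverse-variance weight
    return w45 * y45 + (1 - w45) * y135
```

Across a sharp diagonal edge, the estimate along the uniform diagonal dominates, which is exactly the edge-preserving behaviour the fusion is designed for.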
3.6 ITERATIVE CURVATURE-BASED INTERPOLATION (ICBI)
Iterative curvature-based interpolation (ICBI) [1] is real time artifact
free image upscaling based on a two-step grid filling. Iterative correction of


the interpolated pixels is obtained by minimizing an objective function
depending on the second-order directional derivatives of the image intensity.
The steps in ICBI are as follows.
Compute local approximations of the second-order derivatives along the two diagonal directions using the eight neighbouring pixel values.
Assign to the point (2i+1, 2j+1) the average of the two neighbours in the direction where the derivative is lower.
The interpolated values are then modified in an iterative procedure that tries to minimize an energy term summing the local directional changes of the second-order derivatives.
The smoothing effect caused by the energy term can be slightly reduced by replacing the second-order derivative estimation with the actual directional curvature to create a sharper image.
The artifacts produced are removed by an iterative isophote smoothing method based on a local force.
The complete energy function is the sum of curvature continuity, curvature enhancement, and isophote smoothing terms:
U(2i+1, 2j+1) = a·Uc(2i+1, 2j+1) + b·Ue(2i+1, 2j+1) + c·Ui(2i+1, 2j+1) (3.18)
where Uc, Ue, and Ui are the curvature continuity, curvature enhancement and isophote smoothing values at (2i+1, 2j+1), and a, b, c are the weighting factors.



Figure 3.5 ICBI HR Grid Filling
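The first grid-filling step can be sketched as below. The full ICBI derivative estimate uses eight neighbouring values; this sketch substitutes a two-pixel difference along each diagonal as the directional-variation proxy, so it illustrates the direction selection rather than reproducing [1] exactly:

```python
import numpy as np

def icbi_first_pass(lr):
    """First grid-filling step: each new HR pixel at (2i+1, 2j+1) is the
    average of the two diagonal neighbours along the smoother direction."""
    h, w = lr.shape
    hr = np.zeros((2 * h, 2 * w))
    hr[::2, ::2] = lr                                 # copy LR samples
    for i in range(1, 2 * h - 2, 2):
        for j in range(1, 2 * w - 2, 2):
            nw, ne = hr[i - 1, j - 1], hr[i - 1, j + 1]
            sw, se = hr[i + 1, j - 1], hr[i + 1, j + 1]
            if abs(ne - sw) <= abs(nw - se):          # NE-SW diagonal smoother
                hr[i, j] = (ne + sw) / 2.0
            else:                                     # NW-SE diagonal smoother
                hr[i, j] = (nw + se) / 2.0
    return hr
```

Averaging along the smoother diagonal is what keeps a diagonal edge sharp: the two neighbours that straddle the edge are never mixed.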


3.7 SUMMARY
Many interpolation techniques have been developed to increase the
image resolution. The three well-known linear interpolation techniques are
nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation.
Bicubic interpolation is more sophisticated than the other two techniques and produces noticeably sharper images. To preserve the edge details, edge directed interpolation and edge guided interpolation using directional filtering and data fusion are used. Iterative curvature-based interpolation
(ICBI) is real time artifact free image upscaling based on a two-step grid
filling.





















CHAPTER 4

WAVELET TRANSFORM BASED IMAGE DENOISING AND
RESOLUTION ENHANCEMENT

4.1 OVERVIEW
The images are used in many applications such as geo-science studies,
astronomy, and geographical information systems. One of the most important quality factors of an image is its resolution. Interpolation in image
processing is a well-known method to increase the resolution of a digital
image.
The decimated DWT has been widely used for performing image
resolution enhancement. A common assumption of DWT-based image
resolution enhancement is that the low resolution (LR) image is the low-pass-
filtered sub-band of the wavelet-transformed high-resolution (HR) image. This
type of approach requires the estimation of wavelet coefficients in sub-bands
containing high-pass spatial frequency information in order to estimate the HR
image from the LR image.
In order to estimate the high-pass spatial frequency information, many
different approaches have been introduced. The high-pass coefficients with
significant magnitudes are estimated as the evolution of the wavelet
coefficients among the scales. The performance is mainly affected by the
fact that the signs of the estimated coefficients are copied directly from parent
coefficients without any attempt being made to estimate the actual signs. This
is contradictory to the fact that there is very little correlation between the signs
of the parent coefficients and their descendants. HMT models are used to
determine the most probable state for the coefficients to be estimated. The
performance also suffers mainly from the sign changes between the scales. The
decimated DWT is not shift invariant, and as a result, suppression of wavelet


coefficients introduces artifacts into the image which manifest as ringing in the
neighborhood of discontinuities.
Complex wavelet transform (CWT) is one of the recent wavelet
transforms used in image processing. A one-level CWT of an image produces
two complex-valued low-frequency sub-band images and six complex-valued
high-frequency sub-band images. The high frequency sub-band images are the
result of direction-selective filters. They show peak magnitude responses in the presence of image features oriented at +75°, +45°, +15°, −15°, −45°, and −75°.
4.2 IMAGE RESOLUTION ENHANCEMENT USING DTCWT
The dual-tree complex wavelet transform based image resolution enhancement technique [6] is based on interpolation of the high-frequency sub-band images obtained by DT-CWT. DT-CWT is used to decompose an input
low resolution image into different sub-bands. Then, the high-frequency sub-
band images and the input image are interpolated, followed by combining all
these images to generate a new high resolution image by using inverse DT-
CWT. The resolution enhancement is achieved by using directional selectivity
provided by the CWT, where the high-frequency sub-bands in six different
directions contribute to the sharpness of the high-frequency details such as
edges.
The main loss in an image after being super resolved by interpolation is in its high-frequency components (i.e., edges), due to the smoothing caused by interpolation. Hence, in order to increase the quality of the super resolved image, preserving the edges is essential, and DT-CWT has been employed in order to preserve these high-frequency components. The DT-CWT has good directional selectivity, an advantage over the discrete wavelet transform (DWT), and only limited redundancy. Unlike the critically sampled DWT, the DT-CWT is approximately shift invariant. The redundancy and shift invariance of the DT-CWT mean that the coefficients are inherently interpolable.


DT-CWT decomposes a low-resolution image into different sub-band
images as in fig 4.1. Then the six complex-valued high-frequency sub-band
images are interpolated using bicubic interpolation. In parallel, the input image
is also interpolated separately. Instead of using low-frequency sub-band
images, which contain less information than the original input image, we are
using the input image for the interpolation of two low frequency sub-band
images.








Figure 4.1 Block Diagram of the DTCWT Based Resolution Enhancement.
Hence, using the input image instead of the low-frequency sub-band
images increases the quality of the superresolved image. Note that the input
image is interpolated with the half of the interpolation factor used to
interpolate the high-frequency sub-bands. The two upscaled images are
generated by interpolating the low-resolution original input image and the
shifted version of the input image in horizontal and vertical directions. These
two real-valued images are used as the real and imaginary components of the
interpolated complex LL image, respectively, for the IDT-CWT operation. By
interpolating the input image by α/2 and the high-frequency sub-band images by α, and then applying IDT-CWT, the output image will contain sharper
edges than the interpolated image obtained by interpolation of the input image
directly. This is due to the fact that the interpolation of the isolated high-
frequency components in the high-frequency sub-band images will preserve

more high-frequency components after the interpolation of the respective sub-
bands separately than interpolating the input image directly.
4.3 PROPOSED IMAGE RESOLUTION ENHANCEMENT USING
DTCWT WITH EDGE GUIDED INTERPOLATION
All the classical linear interpolation techniques like bilinear, bi-cubic
interpolation methods generate blurred image. By employing dual-tree
complex wavelet transform (DT-CWT) on an edge guided interpolation, it is
possible to recover the high frequency components which provide an image
with good visual clarity and thus super resolved high resolution images are
obtained.












Figure 4.2 Block of the Proposed Resolution Enhancement Technique.
The proposed technique interpolates the input image as well as the high
frequency sub-band images obtained through the DT-CWT process as in
fig.4.2. The final high resolution output image is generated by using IDT-CWT
of the interpolated sub-band images and the input image. In the proposed
algorithm, the employed interpolation i.e., edge guided interpolation via

directional filtering and data fusion is the same for all the sub-band and the
input images. The interpolation and the wavelet function are two important
factors to determine the quality of image.
4.4 DWT BASED SUPER RESOLUTION ALGORITHM
In DWT based super resolution algorithm [8], DWT separates the image
into different subband images, namely, LL, LH, HL, and HH. High frequency
subbands contain the high frequency components of the image. The
interpolation can be applied to these four subband images. In the wavelet
domain, the low-resolution image is obtained by low-pass filtering of the high-
resolution image. The low resolution image (LL subband), without
quantization (i.e., with double-precision pixel values) is used as the input for
the resolution enhancement process. In other words, low frequency subband
images are the low resolution of the original image. Therefore, instead of using
low-frequency subband images, which contain less information than the original input image, the input image itself is used through the interpolation process. Hence, the input low-resolution image is interpolated with half of the interpolation factor, α/2, used to interpolate the high-frequency subbands,
as shown in fig.4.3. In order to preserve more edge information, i.e., obtaining
a sharper enhanced image, an intermediate stage in high frequency subband
interpolation process is used. As shown in fig.4.3, the low-resolution input
satellite image and the interpolated LL image with factor 2 are highly
correlated. The difference between the LL subband image and the low-
resolution input image are in their high-frequency components. Hence, this
difference image can be used in the intermediate process to correct the estimated high-frequency components. This estimation is performed by interpolating the
high-frequency subbands by factor 2 and then including the difference image
(which is high-frequency components on low-resolution input image) into the
estimated high-frequency images, followed by another interpolation with factor α/2 in order to reach the required size for the IDWT process. The intermediate


process of adding the difference image, containing high-frequency
components, generates significantly sharper and clearer final image. This
sharpness is boosted by the fact that, the interpolation of isolated high-
frequency components in HH, HL, and LH will preserve more high-frequency
components than interpolating the low-resolution image directly.


Figure 4.3 Block Diagram of the DWT Based Super Resolution Algorithm.
Here, by using edge guided interpolation instead of bicubic interpolation, it is possible to recover the high frequency components, which provides an HR image with good visual clarity and sharper edges.
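The decomposition at the heart of this algorithm is easy to make concrete. The chapter uses the Daubechies 9/7 filters; the sketch below uses the much shorter Haar filters so the subband layout (LL, LH, HL, HH, each half the input size) and the perfect-reconstruction property are visible in a few lines:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (illustrative stand-in for Daubechies 9/7)."""
    a = (img[::2] + img[1::2]) / 2          # rows: low-pass
    d = (img[::2] - img[1::2]) / 2          # rows: high-pass
    LL = (a[:, ::2] + a[:, 1::2]) / 2       # low-low: the LR approximation
    LH = (a[:, ::2] - a[:, 1::2]) / 2
    HL = (d[:, ::2] + d[:, 1::2]) / 2
    HH = (d[:, ::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2; reconstructs the input exactly."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, ::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, ::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[::2], out[1::2] = a + d, a - d
    return out
```

The LL band is precisely the "low resolution of the original image" referred to above, which is why the algorithm substitutes the full input image for it during interpolation.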





4.5 DWT AND SWT BASED SUPER RESOLUTION ALGORITHM
In DWT and SWT based super resolution algorithm [7], one level DWT
(with Daubechies 9/7 as wavelet function) is used to decompose an input
image into different subband images. Three high frequency subbands (LH, HL,
and HH) contain the high frequency components of the input image. In this
technique, bicubic interpolation with enlargement factor of 2 is applied to high
frequency subband images. Downsampling in each of the DWT subbands
causes information loss in the respective subbands. That is why SWT is
employed to minimize this loss. The interpolated high frequency subbands and
the SWT high frequency subbands have the same size which means they can
be added with each other. The new corrected high frequency subbands can be
interpolated further for higher enlargement. Low frequency subband is the low
resolution of the original image. Therefore, instead of using low frequency
subband, which contains less information than the original high resolution
image, we are using the input image for the interpolation of low frequency
subband image. Using input image instead of low frequency subband increases
the quality of the super resolved image. Fig. 4.4 illustrates the block diagram of
the DWT and SWT image resolution enhancement technique. By interpolating the input image by α/2, and the high frequency subbands by 2 and α/2 in the intermediate and final interpolation stages respectively, and then by applying
IDWT, as illustrated in fig. 4.4, the output image will contain sharper edges
than the interpolated image obtained by interpolation of the input image
directly. This is due to the fact that the interpolation of isolated high frequency
components in high frequency subbands and using the corrections obtained by
adding high frequency subbands of SWT of the input image will preserve more
high frequency components after the interpolation than interpolating input
image directly.




Figure 4.4 Block Diagram of the DWT and SWT Based Super Resolution
Algorithm.
By using edge guided interpolation instead of bicubic interpolation, it is possible to recover the high frequency components, which provides an HR image with good visual clarity and sharper edges.
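The correction step described above (interpolate a decimated DWT detail band by 2, then add the same-size undecimated SWT detail band) can be sketched as follows. Haar filters and nearest-neighbour upsampling stand in for the Daubechies 9/7 filters and bicubic interpolation used in the chapter, so the functions illustrate the data flow rather than the exact filters:

```python
import numpy as np

def upsample2(band):
    """Nearest-neighbour 2x upsampling (stand-in for the bicubic step)."""
    return band.repeat(2, axis=0).repeat(2, axis=1)

def swt_haar_highpass(img):
    """Undecimated one-level Haar detail bands, same size as the input
    (sketch of the SWT subbands used for the correction)."""
    lo_r = (img + np.roll(img, -1, axis=0)) / 2
    hi_r = (img - np.roll(img, -1, axis=0)) / 2
    LH = (lo_r - np.roll(lo_r, -1, axis=1)) / 2
    HL = (hi_r + np.roll(hi_r, -1, axis=1)) / 2
    HH = (hi_r - np.roll(hi_r, -1, axis=1)) / 2
    return LH, HL, HH

def corrected_band(dwt_band, swt_band):
    """DWT detail band interpolated by 2, corrected with the same-size
    SWT detail band (the addition step in fig. 4.4)."""
    return upsample2(dwt_band) + swt_band
```

Because the SWT bands are never downsampled, they carry the detail that decimation discards, which is what the addition restores before the final interpolation.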
4.6 IMAGE DENOISING AND ZOOMING UNDER THE LMMSE
FRAMEWORK
Most of the existing image interpolation schemes assume that the image
to be interpolated is noise free. This assumption is invalid in practice because
noise will be inevitably introduced in the image acquisition process. Usually
the image is denoised first and is then interpolated. The denoising process,
however, may destroy the image edge structures and introduce artifacts.
Meanwhile, edge preservation is a critical issue in both image denoising and


interpolation. To address these problems, a directional denoising scheme,
which naturally endows a subsequent directional interpolator can be used as in
[12]. The problems of denoising and interpolation are modeled as to estimate
the noiseless and missing samples under the same framework of optimal
estimation. The local statistics is adaptively calculated to guide the estimation
process. For each noisy sample, we compute multiple estimates of it along
different directions and then fuse those directional estimates for a more
accurate output. The estimation parameters calculated in the denoising
processing can be readily used to interpolate the missing samples. This noisy
image interpolation method can reduce many noise-caused interpolation
artifacts and preserve well the image edge structures.
There are two commonly used models to generate an LR image from an
HR image. In the first model, the LR image is directly down-sampled from the
HR image. In the second model, the HR image is smoothed by a point spread
function (PSF), e.g. a Gaussian function, and then down-sampled to the LR
image. The estimation of the HR image under the first model is often called
image interpolation, while the HR image estimation under the second model
can be viewed as an image restoration problem.
One simple and widely used noise model is the signal-independent
additive noise model y=x + v. The signal-dependent noise characteristic can be
compensated by estimating the noise variance adaptively in each local area, i.e.
we can let the additive noise level vary spatially to approximate the signal-
dependent noise characteristic.

y(n, m) = x(n, m) + v(n, m) (4.1)
where y is the noisy LR image, x is the unknown noiseless image, and v is zero mean white noise with variance σv².
The conventional way to enlarge the noisy LR image y is to perform
denoising and interpolation in tandem, i.e. denoising first and interpolation
later. However, if the denoising process is designed without consideration of


the following interpolation process, the artifacts (such as block effects, blur, ringing, etc.) generated in the denoising process can be amplified in the
interpolation process. Another strategy is to interpolate the noisy image first
and then denoise the interpolated image. This strategy will complicate the
noise characteristics and generate many noise-caused interpolation artifacts,
which will be hard to remove in the following denoising process. Therefore,
there is a high demand to develop new interpolation techniques for noisy
images.
Instead of viewing denoising and interpolation as two separate
processes, here a unified framework for denoising and interpolation is
developed. The basic idea is to model the two problems under the same
framework of signal estimation. For the available noisy samples, we estimate
their noiseless counterparts; then for the missing samples, we estimate them
from the available estimated noiseless neighbors. Since the human visual
system is sensitive to image edge information, directional denoising and
interpolation techniques will be used.
For an existing noisy pixel in y, the denoising of it can be viewed as how to estimate the true value at this position by using the noisy measurements in the neighborhood of it. For example, to estimate the value at noisy pixel y(n, m), we can take the estimate as a linear function of pixel y(n, m) and its nearest neighbors, i.e.
x̂(n, m) = w^T y_nm + b (4.2)
where vector y_nm contains the noisy pixels in the neighborhood of position (n, m), w is the weight vector containing the weights assigned to y_nm, b is a constant representing some bias value, and symbol T is the transpose operator. With w and b, x̂(n, m) is estimated as a linear combination of noisy pixels in the neighborhood around position (n, m), and thus the problem of denoising is transformed to determining the weights w and constant b. Similarly, a missing HR sample can be estimated as a linear function of its denoised LR neighbors. For instance, for x(2n, 2m), which has four closest diagonal LR neighbors, it can be estimated as
x̂(2n, 2m) = w_d^T x_d + b_d (4.3)
where vector x_d contains the four diagonal LR neighbors of x(2n, 2m), w_d is the weight vector assigned to x_d, and b_d is a constant representing some bias value.


4.6.1 Denoising Stage
Input: noisy LR image
(i) Calculate three directional estimates of LR pixel by using its four
diagonal neighbors, four horizontal and vertical neighbors and its
central measurement.
(ii) The three estimates can be written in terms of weight vector and
vector containing the neighbors of noisy measurement.
(iii) Fuse the three estimates by weighted average.
Output: denoised LR image and weight vectors

Figure 4.5 Estimation of the Unknown Noiseless Sample.

4.6.2 Interpolation Stage
Input: denoised LR image and weight vectors
(i) By using denoising weight vector, compute interpolation weight
vector and bias value.
(ii) First interpolate the missing diagonal samples, and then interpolate
the missing horizontal and vertical samples.
Output: zoomed HR image.




Figure 4.6 Directional Interpolation of the Missing HR Samples
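For a single pixel, the linear estimate of Eq. (4.2) with locally estimated statistics reduces to the classical LMMSE shrinkage toward the patch mean. This scalar sketch (the names and the patch-based variance estimate are our simplifications; the scheme above computes directional estimates and fuses them) shows the behaviour that the full method generalizes:

```python
import numpy as np

def lmmse_denoise_pixel(patch, noise_var):
    """LMMSE estimate of the centre pixel of a noisy patch:
    x_hat = mu + var_x / (var_x + var_v) * (y - mu),
    with the local mean and variance estimated from the patch."""
    mu = patch.mean()
    var_y = patch.var()
    var_x = max(var_y - noise_var, 0.0)        # signal-variance estimate
    y = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return mu + var_x / (var_x + noise_var + 1e-12) * (y - mu)
```

When the local variation is entirely explained by noise, the estimate collapses to the local mean (strong smoothing); when the noise level is negligible, the noisy value passes through unchanged (edge preservation).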
4.7 PROPOSED COMPLEX WAVELET TRANSFORM BASED IMAGE
DENOISING AND ZOOMING UNDER THE LMMSE
FRAMEWORK









Figure 4.7 Block Diagram of the Proposed DTCWT Based Denoising and
Resolution Enhancement Technique.
The main aim of the image resolution enhancement of noisy images is
to preserve the edge details. The proposed method is a combination of DT-
CWT with LMMSE based image denoising and interpolation which preserves
the edge details in the resolution enhanced image. The block diagram of the
proposed method is shown in fig.4.7. The algorithm is explained in the
following steps.
Step1: Obtain a low resolution image by low-pass filtering and downsampling
of the high resolution image.

Step 2: Perform the Dual Tree Complex Wavelet Transform (DTCWT) on the
low resolution noisy image to get low and high frequency subbands.
Step 3: The edge guided interpolation with an interpolation factor of α is applied to the high-frequency sub-band images obtained by DTCWT.
Step 4: The input low resolution noisy image is denoised and interpolated by the same LMMSE technique with half of the interpolation factor (α/2) which is used in the interpolation of the high frequency sub-bands. For each noisy LR
image sample, we compute multiple estimates of it along different directions
and then fuse those directional estimates for a more accurate denoised LR
image. The estimation parameters calculated in the denoising processing can
be readily used to interpolate the missing samples as in [12].
Step 5: The inverse DT-CWT is applied on the denoised input and interpolated
high frequency subband images to obtain the denoised high resolution image.
4.8 MAXIMUM A POSTERIORI (MAP) ESTIMATION BASED
SUPERRESOLUTION RECONSTRUCTION
The HR image estimate can be computed by maximizing the a posteriori probability P(z | {y_k}), or by maximizing the log-likelihood function:
ẑ = arg max_z log P(z | {y_k}) (4.4)
Assuming a Gaussian distribution as in [4] for the noise and for the prior P(z), this results in the following cost function to be minimized:
L(z) = (1/σ²) ||y − Wz||² + λ z^T C⁻¹ z (4.5)
where y is a vector concatenating all the LR observations y_k, k = 1,...,K, σ is the noise standard deviation, C is the covariance matrix of z, and W is the degradation matrix. During the imaging process, the observed LR image results from warping, blurring, and subsampling operators and is also corrupted by noise. The cost function balances two types of errors. The first
term will be referred to as the linear equation error. This error is minimized
when z, projected through the observation model, matches the observed data.


Minimization of this term alone can lead to excessive noise magnification in
some applications due to the ill-posed nature of this inverse problem. The
second term will be referred to as the image prior error which serves as a
regularization operator. This is generally minimized when z is smooth. The weight of each of these competing "forces" in the cost function is controlled by σ and λ. For example, if the fidelity of the observed data is high (i.e., σ is small), the linear equation error dominates the cost function. If the observed data is very noisy, the cost function will emphasize the image prior error. This will generally lead to smoother image estimates. Thus, the image prior term can be viewed as a simple penalty term controlled by λ which biases the estimator away from noisy solutions which are inconsistent with the Gibbs image prior assumption. An iterative gradient descent minimization procedure is used to update the HR estimate as follows:

z^(n+1) = z^(n) − β ∇L(z^(n)) (4.6)
where β is the step size.
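One gradient step for a cost of this form can be sketched with dense matrices; real systems apply W and C⁻¹ as sparse operators, and the factor-of-2 convention in the gradient is our choice:

```python
import numpy as np

def map_sr_step(z, y, W, sigma2, lam, Cinv, beta):
    """One gradient-descent update z <- z - beta * grad L(z) for
    L(z) = ||y - W z||^2 / sigma2 + lam * z^T Cinv z."""
    grad = (2.0 / sigma2) * W.T @ (W @ z - y) + 2.0 * lam * (Cinv @ z)
    return z - beta * grad
```

Iterating this update drives z toward the solution of the normal equations (WᵀW/σ² + λC⁻¹) z = Wᵀy/σ², which makes the two competing terms in the cost explicit: a small σ pulls z toward the data, a large λ pulls it toward the smooth prior.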
4.9 SELECTIVE PERCEPTUAL (SELP) SR
The idea of selective perceptual processing is to reduce the computational complexity by finding a perceptually significant constraint set of pixels to process while maintaining the desired HR visual quality. Here, a perceptual contrast sensitivity threshold model is used to determine the perceptually significant pixels.
The contrast sensitivity threshold is used to detect the set of
perceptually significant pixels (active pixels).
The contrast sensitivity threshold is the measure of the smallest contrast,
or Just Noticeable Difference (JND), that yields a visible signal over a
uniform background.
JND is computed per image block (8 by 8 blocks) based on the block mean luminance, using an initial SR estimate (obtained by interpolating one of the LR images or by applying a median shift-and-add of the LR images).


JND thresholds are precomputed for all possible discrete mean luminances and stored in a look-up table.
For each 8x8 image block, the mean of the block is computed and the corresponding JND threshold t_JND is retrieved from the look-up table. Once t_JND is obtained, the center pixel of a sliding 3x3 window is compared to its 4 cardinal neighbors. If any absolute difference is greater than t_JND, then the corresponding pixel mask is flagged as 1 and the pixel is labeled as an active pixel. The luminance-adjusted contrast sensitivity JND thresholds are approximated using a power function as follows:
t_JND(μ) = k μ^γ (4.7)
where μ is the block mean luminance, k is a fitted constant, and the exponent γ is set to 0.649.
This SELP SR framework, combined with the MAP algorithm, is proposed to obtain a high quality HR reconstructed image. It reduces the number of pixels to process and hence the computational complexity of the MAP algorithm.
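The active-pixel detection above can be sketched as follows; `jnd_lut` is a hypothetical 256-entry look-up table indexed by discrete mean luminance, standing in for the precomputed thresholds, and the loop bounds simply skip the image border:

```python
import numpy as np

def active_pixel_mask(img, jnd_lut):
    """Flag perceptually significant pixels: per 8x8 block, fetch t_JND
    from the block mean; a pixel is active when any absolute difference
    with its 4 cardinal neighbours exceeds t_JND."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for bi in range(0, h, 8):
        for bj in range(0, w, 8):
            t = jnd_lut[int(img[bi:bi + 8, bj:bj + 8].mean())]
            for i in range(max(bi, 1), min(bi + 8, h - 1)):
                for j in range(max(bj, 1), min(bj + 8, w - 1)):
                    diffs = (abs(img[i, j] - img[i - 1, j]),
                             abs(img[i, j] - img[i + 1, j]),
                             abs(img[i, j] - img[i, j - 1]),
                             abs(img[i, j] - img[i, j + 1]))
                    if max(diffs) > t:
                        mask[i, j] = True
    return mask
```

Only the flagged pixels are then passed to the MAP iterations, which is where the computational saving comes from: smooth regions below the visibility threshold are skipped entirely.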
4.10 PROPOSED DT-CWT BASED SUPER RESOLUTION
RECONSTRUCTION USING SELP-MAP FOR NOISY IMAGES.
As discussed before, SELP-MAP will reduce the computational
complexity by finding a perceptually significant constraint set of pixels to
process while maintaining the desired HR visual quality with sharp edges. The
proposed method is a combination of DT-CWT with SELP-MAP which
preserves the edge details in the resolution enhanced image. The block diagram
of the proposed method is shown in fig.4.8. The algorithm is explained in the
following steps.
Step1: Obtain a low resolution image by low-pass filtering and downsampling
of the high resolution image.
Step 2: Perform the Dual Tree Complex Wavelet Transform (DTCWT) on the
low resolution noisy image to get low and high frequency subbands.


Step 3: The SELP-MAP based interpolation with an interpolation factor of α is applied to the high-frequency sub-band images obtained by DTCWT.
Step 4: The input low resolution noisy image is denoised and interpolated by SELP-MAP SR reconstruction with half of the interpolation factor (α/2) which is used in the interpolation of the high frequency sub-bands.
Step 5: The inverse DT-CWT is applied on the denoised input and interpolated
high frequency subband images to obtain the denoised high resolution image.
The same algorithm can be extended with the 2D WPT instead of DT-CWT. The wavelet packet transform (WPT) is a generalization of the dyadic wavelet transform (DWT) that offers a rich set of decomposition structures and deals with the nonstationarities of the data better than the DWT does. The WPT is associated with a best basis selection algorithm, which decides a decomposition structure among the library of possible bases by measuring a data-dependent cost function. The block diagram of the proposed WPT based super resolution reconstruction using SELP-MAP for noisy images is shown in fig 4.8.












Figure 4.8 Block Diagram of the Proposed WPT Based Denoising and
Resolution Enhancement Technique


4.11 PERFORMANCE EVALUATION FACTORS
The quantitative measurements used for performance evaluation of the
resolution enhanced images are described below
4.11.1 Peak Signal to Noise Ratio (PSNR)
PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. This ratio is often
used as a quality measurement between the original and a reconstructed image.
The higher the PSNR, the better the quality of the reconstructed image. PSNR
in decibels is easily defined from MSE as given below

(4.9)
where MSE is the Mean Square Error between the original and the
reconstructed image. MSE is defined as follows.

,

()

()-

(4.10)
where M and N are the number of rows and columns in the image.
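Equations (4.9) and (4.10) translate directly into code; the sketch below assumes 8-bit images, i.e. a peak value of 255:

```python
import numpy as np

def mse(x, y):
    # Eq. (4.10): mean squared error over the M x N pixels.
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, peak=255.0):
    # Eq. (4.9): PSNR in dB; identical images give infinite PSNR.
    e = mse(x, y)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

For example, shifting every pixel by 5 gives MSE = 25 and PSNR = 20 log10(255/5) ≈ 34.15 dB.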
4.11.2 Mean Structural Similarity Index Matrix (MSSIM)
MSSIM is a performance indicator based on measuring the structural
similarity between two images. It is defined as

MSSIM(X, Y) = (1 / M) Σ_{j=1..M} SSIM(x_j, y_j) (4.11)

where X and Y are the reference image and the reconstructed image
respectively, M is the number of local windows in the image, and x_j and y_j
are the contents of the j-th local window.
The Structural Similarity (SSIM) index is a full reference metric; in
other words, it measures image quality with respect to a reference image.
SSIM is designed to improve on traditional methods like peak signal-to-noise
ratio (PSNR) and mean squared error (MSE), which have proved to be
inconsistent with human visual perception. The SSIM metric is calculated on
various windows of an image. The measure between two windows x and y of
common size N x N is

SSIM(x, y) = [(2 μx μy + C1)(2 σxy + C2)] / [(μx^2 + μy^2 + C1)(σx^2 + σy^2 + C2)] (4.12)

where μx and μy are the means of x and y, σx^2 and σy^2 are their variances,
σxy is their covariance, C1 = (k1 L)^2 and C2 = (k2 L)^2 are two variables to
stabilize the division with a weak denominator, and L is the dynamic range of
the pixel values (255 for 8-bit images).
4.11.3 Feature Similarity Index Matrix (FSIM)
The great success of SSIM and its extensions owes to the fact that the
human visual system (HVS) is adapted to the structural information in images.
The visual information in an image, however, is often very redundant, while
the HVS understands an image mainly based on its low-level features, such as
edges and zero-crossings. In other words, the salient low-level features convey
crucial information for the HVS to interpret the scene. Based on the above
analysis, a low-level feature similarity induced full reference image quality
assessment (FR IQA) metric, namely FSIM (Feature SIMilarity), can be used.
FSIM is based on phase congruency (PC) and gradient magnitude
(GM). At points of high phase congruency (PC) we can extract highly
informative features. Therefore PC is used as the primary feature in computing
FSIM. Meanwhile, considering that PC is contrast invariant but image local
contrast does affect HVS perception on the image quality, the image gradient
magnitude (GM) is computed as the secondary feature to encode contrast
information. PC and GM are complementary, and they reflect different aspects
of the HVS in assessing the local quality of the input image. After computing
the local similarity map, PC is utilized again as a weighting function to
derive a single similarity score:

FSIM = [Σ_x S_L(x) PC_m(x)] / [Σ_x PC_m(x)] (4.13)

where S_L(x) = S_PC(x) · S_GM(x) is the local similarity map, S_PC is the
phase congruency similarity, S_GM is the gradient magnitude similarity,
PC_m(x) is the phase congruency weighting term, and the sums run over the
whole image.
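Of the two FSIM features, the gradient magnitude similarity is easy to sketch; computing phase congruency requires a log-Gabor filter bank and is omitted here. The weighting below therefore uses the reference gradient magnitude as a stand-in for PC_m, and the function names and constant T are also assumptions of this sketch:

```python
import numpy as np

def grad_mag(img):
    # Gradient magnitude via central differences (Sobel/Scharr also common).
    gy, gx = np.gradient(np.asarray(img, float))
    return np.sqrt(gx ** 2 + gy ** 2)

def fsim_like(x, y, T=160.0):
    # Gradient similarity map S_GM pooled as in Eq. (4.13), with the
    # reference gradient magnitude standing in for the PC_m weight.
    g1, g2 = grad_mag(x), grad_mag(y)
    s_gm = (2 * g1 * g2 + T) / (g1 ** 2 + g2 ** 2 + T)   # values in (0, 1]
    w = g1 + 1e-8
    return float(np.sum(s_gm * w) / np.sum(w))
```

Since g1² + g2² ≥ 2 g1 g2, each local similarity is at most 1, with equality only where the two gradient maps agree.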
4.11.4 Image Enhancement Factor (IEF)
Image enhancement means improving an image to reveal hidden details or
to bring some part of the image into evidence. It applies locally and is
constrained to match the original real image. The image enhancement factor
(IEF) measures how much the noisy image has been enhanced with respect to the
original image:

IEF = [Σ_{i,j} (y(i,j) − o(i,j))^2] / [Σ_{i,j} (r(i,j) − o(i,j))^2] (4.14)

where o, y and r are the original noise free HR, noisy HR, and reconstructed
HR images respectively.
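In code, the IEF is a ratio of squared-error energies, so a value above 1 means the reconstruction is closer to the clean image than the noisy input was:

```python
import numpy as np

def ief(o, y, r):
    # Noise energy of the noisy image divided by the residual error energy
    # of the reconstruction, both measured against the clean HR image o.
    o, y, r = (np.asarray(a, float) for a in (o, y, r))
    return float(np.sum((y - o) ** 2) / np.sum((r - o) ** 2))
```

For example, halving the pixel-wise error everywhere gives IEF = 4.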
4.12 SUMMARY
The DTCWT based image resolution enhancement algorithm is based on the
interpolation of the high frequency sub-band images obtained by DTCWT and
the low resolution input image. Most of the existing image interpolation
schemes assume that the image to be interpolated is noise free. This
assumption is invalid in practice because noise will be inevitably introduced in
the image acquisition process. The DTCWT based resolution enhancement
method can be extended for noisy images using LMMSE and SELP MAP
based reconstruction.







CHAPTER 5
RESULTS AND DISCUSSION
The analysis and implementation of the wavelet based image resolution
enhancement algorithms are carried out in MATLAB. All the methods are
implemented in MATLAB 7.0 and run on 64-bit Windows 7 with a 2.53-GHz Intel
Core i3 CPU and 3-GB RAM. Different test images such as Lena, Houses,
Butterfly and Peppers, as well as SAR images, are taken for analysis. A
medical image, i.e. a brain image, is also used for comparison. Resolution
enhancement with a factor of 2 is performed on these images. Simulation
results of the different resolution enhancement algorithms are shown below.
The visual results of the proposed technique support the quantitative results
in Tables 5.10 and 5.13 in terms of PSNR, MSSIM, FSIM and IEF.
5.1 SIMULATION RESULTS
The proposed DT-CWT with edge guided interpolation using the LMMSE
technique has been tested on test images, satellite images and a brain image
of size 256x256 with an interpolation factor of α = 2, and compared with
different existing interpolation methods, namely nearest neighbor, bilinear,
bicubic interpolation, WZP, HMT, new edge directed interpolation (NEDI), edge
guided interpolation via directional filtering and data fusion, ICBI, DT-CWT
with bicubic, the DWT based super resolution algorithm, and the DWT and SWT
based super resolution algorithm. Edge guided interpolation is used instead
of bicubic interpolation in the DWT based and the DWT and SWT based super
resolution algorithms and compared with the above techniques. The results of
the proposed resolution enhancement techniques shown in Figures 5.1-5.5
support the quantitative results in Tables 5.1-5.6 in terms of PSNR, MSSIM
and FSIM respectively. The results in these tables indicate the superiority
of the proposed techniques over the conventional and state-of-the-art image
resolution enhancement techniques for all the images.






















Fig 5.1 shows that the images in column (d), enhanced by using the
proposed technique, are clearly sharper than the images enhanced by using
DT-CWT with bicubic interpolation in column (c). The proposed algorithm
better preserves the edge structures, for example, Lena's hat, the letters in
Houses, the apex of the pepper, etc. That is, the proposed method outperforms
the conventional and state-of-the-art image resolution enhancement methods in
terms of visual clarity and edge preservation capability.

Figure 5.1 Simulation Results of Resolution Enhancement Algorithms for Test
Images. (a) Original low resolution image. (b) Interpolated image using LMMSE
based edge guided interpolation. (c) DT-CWT with bicubic. (d) Proposed DT-
CWT with edge guided interpolation




The satellite images are taken from eMap high resolution imagery [9].
Here also the proposed technique shows better visual clarity and edge
preservation.
Figure 5.2 Simulation Results of Resolution Enhancement Algorithms for
Satellite Images. (a) Original low resolution image. (b) Interpolated image using
LMMSE based edge guided interpolation. (c) DT-CWT with bicubic. (d)
Proposed DT-CWT with edge guided interpolation


Figure 5.3 Simulation Results of SR Algorithms for Lena Image. (a) Original
HR image. (b) LR image. (c) DTCWT with bicubic. (d)Proposed DTCWT SR
with EGI. (e) DWT SR. (f) Proposed DWT SR with EGI. (g) DWT-SWT SR.
(h) Proposed DWT-SWT SR with EGI.



Figure 5.4 Simulation Results of SR Algorithms for Satellite Image.
(a) Original HR image. (b) LR image. (c) DTCWT with bicubic. (d)Proposed
DTCWT SR with EGI. (e) DWT SR. (f) Proposed DWT SR with EGI.
(g) DWT-SWT SR (h) Proposed DWT-SWT SR with EGI.




Figure 5.5 Simulation Results of SR Algorithms for Brain Image. (a) Original
HR image. (b) LR image. (c) DTCWT with bicubic. (d) Proposed DTCWT SR
with EGI. (e) DWT SR. (f) Proposed DWT SR with EGI. (g) DWT-
SWT SR. (h) Proposed DWT-SWT SR with EGI.



The next section presents experiments that verify the proposed DT-CWT and
WPT based denoising and interpolation algorithms. For comparison, we employ
the LMMSE and MAP based SR reconstruction algorithms. Both test and medical
images are used in this analysis. The size of all the original HR images is
512x512. In the experiments, the original images are first downsampled to
256x256 and then corrupted by additive white Gaussian noise. Two noise levels
with standard deviations σ = 15 and σ = 30 are tested. In the proposed
DT-CWT based denoising and interpolation schemes, LMMSE or SELP-MAP based
denoising and interpolation with an interpolation factor of α/2 is applied to
the noisy LR image, and an interpolation factor of α is used for the high
frequency subbands. The same algorithm is extended using WPT with SELP-MAP SR
reconstruction. By calculating the number of active pixels, it is clear that
SELP-MAP reduces the number of pixels to process and hence the computational
complexity of the existing MAP algorithm.
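The degradation used to build the test inputs can be sketched as follows; the thesis does not state the decimation filter, so the 2x2 pixel averaging here is an assumption:

```python
import numpy as np

def make_noisy_lr(hr, sigma, seed=0):
    # Downsample by 2 via 2x2 block averaging, then add white Gaussian noise.
    hr = np.asarray(hr, float)
    lr = (hr[0::2, 0::2] + hr[0::2, 1::2] +
          hr[1::2, 0::2] + hr[1::2, 1::2]) / 4.0
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, sigma, lr.shape)

hr = np.tile(np.linspace(0.0, 255.0, 512), (512, 1))  # synthetic 512x512 ramp
noisy = make_noisy_lr(hr, sigma=15)                   # 256x256 test input
```

Running the same function with sigma=30 reproduces the second noise level of the experiments.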
All the methods are implemented in MATLAB 7.0 and run on 64-bit
Windows 7 with a 2.53-GHz Intel Core i3 CPU and 3-GB RAM. The proposed
DT-CWT with SELP-MAP and WPT with SELP-MAP give better results than the
existing methods.
The PSNR, MSSIM, FSIM and IEF values of the denoised and
interpolated images obtained by the various schemes with σ = 15 and σ = 30
are listed in Tables 5.7-5.13. From Figures 5.6-5.9, we can see that the
proposed methods lead to fewer blocking and ringing effects. Since the edge
directional information is adaptively estimated and employed in the denoising
process, and consequently in the interpolation process, the proposed
directional joint denoising and interpolation algorithm better preserves the
edge structures. Overall, the presented joint denoising and interpolation
scheme yields encouraging results.




Figure 5.6 Simulation Results of SR Reconstruction Algorithms for Pepper Image.
(a) Original HR image (b) LR noisy image. (c) Traditional denoising using
thresholding + LMMSE interpolated image. (d) LMMSE SR reconstructed image.
(e) MAP SR reconstructed image. (f) Proposed SELP-MAP SR reconstructed image.
(g) Proposed DT-CWT with LMMSE SR reconstructed image. (h) Proposed DT-
CWT with SELP-MAP SR reconstructed image. (i) Proposed WPT with SELP-MAP
SR reconstructed image.




Figure 5.7 Simulation Results of SR Reconstruction Algorithms for Lena Image.
(a) Original HR image (b) LR noisy image. (c) Traditional denoising using
thresholding + LMMSE interpolated image. (d) LMMSE SR reconstructed image.
(e) MAP SR reconstructed image. (f) Proposed SELP-MAP SR reconstructed
image. (g) Proposed DT-CWT with LMMSE SR reconstructed image. (h) Proposed
DT-CWT with SELP-MAP SR reconstructed image. (i) Proposed WPT with SELP-
MAP SR reconstructed image.


Figure 5.8 Simulation Results of SR Reconstruction Algorithms for House Image.
(a) Original HR image (b) LR noisy image. (c) Traditional denoising using
thresholding + LMMSE interpolated image. (d) LMMSE SR reconstructed image. (e)
MAP SR reconstructed image. (f) Proposed SELP-MAP SR reconstructed image. (g)
Proposed DT-CWT with LMMSE SR reconstructed image. (h) Proposed DT-CWT
with SELP-MAP SR reconstructed image. (i) Proposed WPT with SELP-MAP SR
reconstructed image



Figure 5.9 Simulation Results of SR Reconstruction Algorithms for Brain Image.
(a) Original HR image (b) LR noisy image. (c) Traditional denoising using
thresholding + LMMSE interpolated image. (d) LMMSE SR reconstructed image.
(e) MAP SR reconstructed image. (f) Proposed SELP-MAP SR reconstructed image.
(g) Proposed DT-CWT with LMMSE SR reconstructed image. (h) Proposed DT-
CWT with SELP-MAP SR reconstructed image. (i) Proposed WPT with SELP-
MAP SR reconstructed image




Figure 5.10 Simulation Results of Multi-Frame SR Reconstruction
Algorithms. (a) Original HR image. (b) LR noisy image frames. (c) MAP SR
reconstructed image. (d) SELP-MAP SR reconstructed image.

Figure 5.11 Number of Processed Pixels per Iteration in MAP SR Technique
(x-axis: iterations, 0-30; y-axis: active pixels, about 6.5535 x 10^4 to
6.5537 x 10^4; curve: MAP-SR, Lena image)



Figure 5.12 Number of Processed Pixels per Iteration in SELP-MAP SR
Technique (x-axis: iterations, 0-30; y-axis: active pixels, about
4.4 x 10^4 to 5.8 x 10^4; curve: SELP-MAP, Lena image)
From Figures 5.11 and 5.12, it is clear that SELP-MAP reduces the number
of pixels to process and hence the computational complexity of the existing
MAP algorithm.
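The active-pixel count plotted in Figures 5.11 and 5.12 can be sketched as a thresholding operation. The SELP criterion in the thesis is perceptual, so the plain magnitude test, the threshold value, and the function name below are hypothetical stand-ins:

```python
import numpy as np

def active_pixel_count(residual, jnd=3.0):
    # Pixels whose update residual exceeds a just-noticeable-difference
    # threshold; a selective scheme would update only these each iteration.
    return int(np.count_nonzero(np.abs(residual) > jnd))
```

On a 256x256 image, a full MAP update always touches all 65,536 pixels, whereas a selective count shrinks as the iterations converge.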
5.2 PERFORMANCE COMPARISON
Table 5.1 Comparison of PSNR Values (in dB) of Resolution Enhancement
Algorithms for Satellite Images
Techniques/images (1) (2) (3) (4)
WZP[19] 26.84 24.18 27.47 20.84
Nearest Neighbor[20] 28.45 23.26 28.22 23.28
HMT[18] 29.56 24.56 29.16 23.95
Bilinear[20] 29.76 24.71 30.27 23.96
Bicubic[20] 30.15 25.88 30.75 24.12
NEDI[13] 29.53 25.98 30.12 24.18
EGI via DFDF[11] 30.92 26.60 31.64 24.37
ICBI[1] 30.98 26.94 31.67 24.59
DT-CWT with bicubic[6] 32.70 28.38 34.70 25.45
DWT SR[8] 32.92 28.69 34.62 26.07
DWT+SWT SR[7] 32.84 28.54 34.69 25.41
Proposed DT-CWT SR +EGI via DFDF 34.65 29.91 36.89 27.55
Proposed DWT SR +EGI via DFDF 34.82 29.95 36.52 27.94
Proposed DWT+SWT SR +EGI via DFDF 34.73 29.86 35.41 27.78


Table 5.2 Comparison of PSNR Values (in dB) of Resolution Enhancement
Algorithms for Test and Medical Images
Tables 5.1 and 5.2 above show the PSNR values of the different resolution
enhancement algorithms. The DWT, DWT+SWT and DTCWT based resolution
enhancement algorithms are compared with different existing methods such as
WZP, HMT, nearest neighbour, bilinear, bicubic, NEDI, EGI via DFDF and
ICBI. From the tables, it is observed that the DTCWT based resolution
enhancement algorithm has a PSNR value that is 3 dB greater than that of the
existing bicubic interpolation for all the images. The DWT and DWT+SWT SR
algorithms show a similar improvement in PSNR to the DTCWT based SR
algorithm. The proposed methods also give a good improvement of 2 dB in
PSNR when compared with the DWT, DWT+SWT and DTCWT based resolution
enhancement using bicubic interpolation. Compared with wavelet domain
techniques such as HMT and WZP, the proposed technique has a PSNR
improvement of 5 to 8 dB.

Techniques/images Lena Peppers Houses Brain
WZP[19] 24.32 23.65 20.84 28.42
Nearest Neighbor[20] 26.33 24.66 22.32 29.77
HMT[18] 28.32 25.89 23.13 32.13
Bilinear[20] 27.18 25.81 23.17 30.80
Bicubic[20] 28.04 26.39 23.89 32.19
NEDI[13] 27.88 26.11 23.12 32.03
EGI via DFDF[11] 29.54 27.92 24.82 33.18
ICBI[1] 29.83 28.14 25.87 33.38
DT-CWT with bicubic[6] 31.89 29.84 25.28 37.42
DWT SR[8] 32.13 29.81 25.67 37.53
DWT+SWT SR[7] 32.02 29.79 25.44 37.43
Proposed DT-CWT SR +EGI via DFDF 33.98 31.25 26.88 39.74
Proposed DWT SR +EGI via DFDF 33.95 31.10 26.76 39.88
Proposed DWT+SWT SR +EGI via DFDF 33.85 31.55 26.32 39.56


Table 5.3 Comparison of MSSIM Values of Resolution Enhancement
Algorithms for Satellite Images

Table 5.4 Comparison of MSSIM Values of Resolution Enhancement
Algorithms for Test and Medical Images

Techniques/images (1) (2) (3) (4)
WZP[19] 0.6677 0.6913 0.7398 0.6902
Nearest Neighbor[20] 0.7718 0.7718 0.8321 0.7963
HMT[18] 0.7911 0.8110 0.8894 0.8298
Bilinear[20] 0.8244 0.8194 0.8854 0.8397
Bicubic[20] 0.8539 0.8403 0.9198 0.8761
NEDI[13] 0.8571 0.8889 0.9154 0.8895
EGI via DFDF[11] 0.9219 0.9043 0.9585 0.9078
ICBI[1] 0.9217 0.9118 0.9593 0.9110
DT-CWT with bicubic[6] 0.9463 0.9289 0.9676 0.9322
DWT SR[8] 0.9457 0.9291 0.9669 0.9401
DWT+SWT SR[7] 0.9408 0.9283 0.9644 0.9358
Proposed DT-CWT SR +EGI via DFDF 0.9649 0.9584 0.9824 0.9638
Proposed DWT SR +EGI via DFDF 0.9634 0.9597 0.9798 0.9679
Proposed DWT+SWT SR +EGI via DFDF 0.9641 0.9577 0.9801 0.9648
Techniques/images Lena Peppers Houses Brain
WZP[19] 0.7118 0.6717 0.7198 0.7011
Nearest Neighbor[20] 0.7612 0.6981 0.7512 0.7123
HMT[18] 0.7984 0.7511 0.7944 0.7828
Bilinear[20] 0.7914 0.7443 0.7813 0.7813
Bicubic[20] 0.8353 0.7812 0.8288 0.8127
NEDI[13] 0.8276 0.7802 0.8201 0.8023
EGI via DFDF[11] 0.8967 0.8543 0.8989 0.8923
ICBI[1] 0.8993 0.8565 0.8944 0.8986
DT-CWT with bicubic[6] 0.9259 0.9193 0.9212 0.9043
DWT SR[8] 0.9253 0.9215 0.9208 0.9122
DWT+SWT SR[7] 0.9242 0.9188 0.9185 0.9087
Proposed DT-CWT SR +EGI via DFDF 0.9619 0.9357 0.9674 0.9554
Proposed DWT SR +EGI via DFDF 0.9596 0.9364 0.9587 0.9613
Proposed DWT+SWT SR +EGI via DFDF 0.9599 0.9297 0.9555 0.9576


Tables 5.3 and 5.4 above show the MSSIM values of the different
resolution enhancement algorithms. The DWT, DWT+SWT and DTCWT
based resolution enhancement algorithms are compared with different existing
methods such as WZP, HMT, nearest neighbour, bilinear, bicubic, NEDI, EGI
via DFDF and ICBI. From the tables, it is observed that the DTCWT
based resolution enhancement algorithm has an MSSIM value that is 0.1 to
0.12 greater than that of the existing bicubic interpolation for all the
images. The DWT and DWT+SWT SR algorithms show a similar improvement in
MSSIM to the DTCWT based SR algorithm. The proposed methods also give a good
improvement of 0.15 to 0.18 in MSSIM when compared with the DWT, DWT+SWT and
DTCWT based resolution enhancement using bicubic interpolation. Compared
with wavelet domain techniques such as HMT and WZP, the proposed technique
has an MSSIM improvement of 0.2 to 0.25.

Table 5.5 Comparison of FSIM Values of Resolution Enhancement
Algorithms for Satellite Images
Techniques/images (1) (2) (3) (4)
WZP[19] 0.7113 0.6617 0.7338 0.7138
Nearest Neighbor[20] 0.7123 0.6991 0.7522 0.7512
HMT[18] 0.7828 0.7511 0.7954 0.7894
Bilinear[20] 0.7813 0.7443 0.7813 0.7914
Bicubic[20] 0.8127 0.7812 0.8288 0.8354
NEDI[13] 0.8023 0.7802 0.8201 0.8176
EGI via DFDF[11] 0.8923 0.8543 0.8989 0.8966
ICBI[1] 0.8986 0.8565 0.8944 0.8793
DT-CWT with bicubic[6] 0.9043 0.9193 0.9212 0.9159
DWT SR[8] 0.9122 0.9215 0.9208 0.9153
DWT+SWT SR[7] 0.9087 0.9188 0.9185 0.9142
Proposed DT-CWT SR +EGI via DFDF 0.9554 0.9459 0.9674 0.9611
Proposed DWT SR +EGI via DFDF 0.9613 0.9464 0.9687 0.9596
Proposed DWT+SWT SR +EGI via DFDF 0.9576 0.9397 0.9657 0.9593


Table 5.6 Comparison of FSIM Values of Resolution Enhancement
Algorithms for Test and Medical Images

Tables 5.5 and 5.6 above show the FSIM values of the different resolution
enhancement algorithms. The DWT, DWT+SWT and DTCWT based
resolution enhancement algorithms are compared with different existing
methods such as WZP, HMT, nearest neighbour, bilinear, bicubic, NEDI, EGI
via DFDF and ICBI. From the tables, it is observed that the DTCWT
based resolution enhancement algorithm has an FSIM value that is 0.1 to 0.12
greater than that of the existing bicubic interpolation for all the images.
The DWT and DWT+SWT SR algorithms show a similar improvement in FSIM to the
DTCWT based SR algorithm. The proposed methods also give a good improvement
of 0.15 to 0.18 in FSIM when compared with the DWT, DWT+SWT and DTCWT based
resolution enhancement using bicubic interpolation. Compared with wavelet
domain techniques such as HMT and WZP, the proposed technique has an FSIM
improvement of 0.2 to 0.25.
Techniques/images Lena Peppers Houses Brain
WZP[19] 0.7109 0.6955 0.7488 0.6712
Nearest Neighbor[20] 0.7188 0.7598 0.8321 0.7883
HMT[18] 0.7914 0.8133 0.8984 0.8298
Bilinear[20] 0.8244 0.8194 0.8854 0.8397
Bicubic[20] 0.8539 0.8403 0.9198 0.8761
NEDI[13] 0.8571 0.8889 0.9154 0.8895
EGI via DFDF[11] 0.9217 0.9043 0.9588 0.9078
ICBI[1] 0.9217 0.9118 0.9593 0.9110
DT-CWT with bicubic[6] 0.9463 0.9289 0.9676 0.9322
DWT SR[8] 0.9457 0.9291 0.9669 0.9401
DWT+SWT SR[7] 0.9408 0.9283 0.9644 0.9358
Proposed DT-CWT SR +EGI via DFDF 0.9649 0.9484 0.9824 0.9533
Proposed DWT SR +EGI via DFDF 0.9634 0.9497 0.9798 0.9579
Proposed DWT+SWT SR +EGI via DFDF 0.9643 0.9479 0.9807 0.9556


Table 5.7 Comparison of PSNR Values (in dB) of SR Reconstruction
Algorithms (σ = 15)
Techniques/images (σ = 15) Lena House Peppers Brain
Thresholding + Bicubic [20] 22.55 24.03 24.98 25.31
Thresholding + NEDI [13] 22.89 24.12 25.01 25.87
Thresholding + LMMSE [11] 23.12 25.67 25.67 26.32
Thresholding + ICBI [1] 23.15 25.55 25.58 26.76
LMMSE SR Reconstruction [12] 25.27 27.59 27.23 28.81
MAP SR Reconstruction [4] 27.14 28.33 28.14 29.45
Proposed SELP-MAP SR 27.47 28.91 28.83 29.81
Proposed DT-CWT+LMMSE 27.59 29.98 28.92 30.18
Proposed DT-CWT+SELP MAP 28.60 32.22 30.26 32.41
Proposed WPT+SELP MAP 28.78 32.25 30.03 32.45
Table 5.8 Comparison of PSNR Values (in dB) of SR Reconstruction
Algorithms (σ = 30)
Techniques/images (σ = 30) Lena House Peppers Brain
Thresholding + Bicubic [20] 21.01 22.67 22.08 21.45
Thresholding + NEDI [13] 21.03 22.99 22.15 21.87
Thresholding + LMMSE [11] 21.42 23.56 22.89 22.12
Thresholding + ICBI [1] 21.44 23.57 22.77 22.35
LMMSE SR Reconstruction [12] 23.55 25.76 24.71 24.98
MAP SR Reconstruction [4] 23.78 26.23 25.35 25.45
Proposed SELP-MAP SR 24.13 26.77 25.89 25.88
Proposed DT-CWT+LMMSE 24.48 27.12 26.12 26.12
Proposed DT-CWT+SELP MAP 25.56 28.52 26.74 27.41
Proposed WPT+SELP MAP 25.63 28.49 26.99 27.36
The PSNR values of the denoised and interpolated images obtained by the
various schemes with σ = 15 and σ = 30 are listed in Tables 5.7 and 5.8. The
proposed SELP-MAP SR reconstruction gives an improvement of 1 dB over the
LMMSE SR reconstruction algorithm. The DTCWT with SELP-MAP SR reconstruction
algorithm gives an improvement of 1.5 to 2 dB compared to the existing LMMSE
SR reconstruction algorithm. The proposed WPT with SELP-MAP also gives a
PSNR improvement of 2 dB over LMMSE SR reconstruction.
Table 5.9 Comparison of MSSIM Values of SR Reconstruction Algorithms
Techniques/images (σ = 15) Lena House Peppers Brain
Thresholding + Bicubic [20] 0.8841 0.8987 0.9067 0.8933
Thresholding + NEDI [13] 0.8897 0.9001 0.9102 0.8988
Thresholding + LMMSE [11] 0.8901 0.9017 0.9134 0.9033
Thresholding + ICBI [1] 0.8911 0.9007 0.9148 0.9102
LMMSE SR Reconstruction [12] 0.9298 0.9308 0.9315 0.9398
MAP SR Reconstruction [4] 0.9311 0.9421 0.9414 0.9423
Proposed SELP-MAP SR 0.9357 0.9477 0.9489 0.9501
Proposed DT-CWT+LMMSE 0.9456 0.9589 0.9503 0.9556
Proposed DT-CWT+SELP MAP 0.9578 0.9644 0.9671 0.9662
Proposed WPT+ SELP MAP 0.9557 0.9659 0.9686 0.9628
Table 5.10 Comparison of MSSIM Values of SR Reconstruction Algorithms
Techniques/images (σ = 30) Lena House Peppers Brain
Thresholding + Bicubic [20] 0.8666 0.8701 0.8722 0.8601
Thresholding + NEDI [13] 0.8661 0.8723 0.8723 0.8599
Thresholding + LMMSE [11] 0.8756 0.8811 0.8834 0.8776
Thresholding + ICBI [1] 0.8737 0.8832 0.8828 0.8787
LMMSE SR Reconstruction [12] 0.9003 0.9112 0.9017 0.9008
MAP SR Reconstruction [4] 0.9111 0.9213 0.9084 0.9123
Proposed SELP-MAP SR 0.9152 0.9277 0.9149 0.9178
Proposed DT-CWT+LMMSE 0.9167 0.9288 0.9153 0.9195
Proposed DT-CWT+SELP MAP 0.9221 0.9354 0.9201 0.9235
Proposed WPT+ SELP MAP 0.9217 0.9388 0.9219 0.9226


The MSSIM values of the denoised and interpolated images obtained by the
various schemes with σ = 15 and σ = 30 are listed in Tables 5.9 and 5.10. The
proposed SELP-MAP SR reconstruction gives an improvement of 0.015 in terms of
MSSIM over the LMMSE SR reconstruction algorithm. The DTCWT with SELP-MAP SR
reconstruction algorithm gives an improvement of 0.025 compared to the
existing LMMSE SR reconstruction algorithm. The proposed WPT with SELP-MAP
gives a similar improvement over LMMSE SR reconstruction in terms of MSSIM.
Table 5.11 Comparison of FSIM Values of SR Reconstruction Algorithms
Techniques/images (σ = 15) Lena House Peppers Brain
Thresholding + Bicubic [20] 0.8837 0.8875 0.8944 0.8837
Thresholding + NEDI [13] 0.8856 0.8877 0.8992 0.8879
Thresholding + LMMSE [11] 0.8912 0.8994 0.9015 0.8923
Thresholding + ICBI [1] 0.8907 0.8997 0.9034 0.8921
LMMSE SR Reconstruction [12] 0.9189 0.9208 0.9314 0.9298
MAP SR Reconstruction [4] 0.9221 0.9422 0.9414 0.9383
Proposed SELP-MAP SR 0.9275 0.9478 0.9489 0.9501
Proposed DT-CWT+LMMSE 0.9356 0.9523 0.9499 0.9556
Proposed DT-CWT+SELP MAP 0.9487 0.9611 0.9551 0.9653
Proposed WPT+ SELP MAP 0.9475 0.9625 0.9566 0.9644

The FSIM values of the denoised and interpolated images obtained by the
various schemes with σ = 15 and σ = 30 are listed in Tables 5.11 and 5.12.
The proposed SELP-MAP SR reconstruction gives an improvement of 0.02 in terms
of FSIM over the LMMSE SR reconstruction algorithm. The DTCWT with SELP-MAP
SR reconstruction algorithm gives an improvement of 0.03 compared to the
existing LMMSE SR reconstruction algorithm. The proposed WPT with SELP-MAP
gives a similar improvement over LMMSE SR reconstruction in terms of FSIM.



Table 5.12 Comparison of FSIM Values of SR Reconstruction Algorithms
Techniques/images (σ = 30) Lena House Peppers Brain
Thresholding + Bicubic [20] 0.8612 0.8522 0.8604 0.8712
Thresholding + NEDI [13] 0.8678 0.8567 0.8653 0.8755
Thresholding + LMMSE [11] 0.8745 0.8656 0.8779 0.8812
Thresholding + ICBI [1] 0.8740 0.8679 0.8755 0.8842
LMMSE SR Reconstruction [12] 0.9011 0.9007 0.9017 0.9028
MAP SR Reconstruction [4] 0.9134 0.9122 0.9084 0.9106
Proposed SELP-MAP SR 0.9212 0.9187 0.9149 0.9177
Proposed DT-CWT+LMMSE 0.9245 0.9205 0.9153 0.9204
Proposed DT-CWT+SELP MAP 0.9324 0.9245 0.9201 0.9287
Proposed WPT+ SELP MAP 0.9310 0.9266 0.9219 0.9298
For the brain image, the proposed SELP-MAP SR and DTCWT with SELP-
MAP SR reconstruction algorithms give improvements of 0.0149 and 0.026,
respectively, in terms of FSIM over the existing LMMSE SR reconstruction
algorithm.
Table 5.13 Comparison of IEF Values of SR Reconstruction Algorithms

Techniques/images
Lena Peppers
σ = 15 σ = 30 σ = 15 σ = 30
Thresholding + Bicubic [20] 63 51 61 45
Thresholding + NEDI [13] 74 58 69 49
Thresholding + LMMSE [11] 81 64 78 53
Thresholding + ICBI [1] 82 63 77 55
LMMSE SR Reconstruction [12] 138 117 123 106
MAP SR Reconstruction [4] 189 166 165 144
Proposed SELP-MAP SR 223 203 207 187
Proposed DT-CWT+LMMSE 288 254 259 235
Proposed DT-CWT+SELP MAP 312 291 276 253
Proposed WPT+ SELP MAP 328 310 292 262


The IEF values of the denoised and interpolated images (Lena and
Peppers) obtained by the various schemes with σ = 15 and σ = 30 are listed in
Table 5.13. For the Lena image, the proposed DT-CWT and WPT based denoising
and interpolation algorithms show an improvement of about 200 in IEF, and for
the Peppers image about 150, at σ = 15. From the table, we can see that the
proposed DT-CWT and WPT based denoising and interpolation algorithms give
better quantitative results than the existing algorithms.

























CHAPTER 6
CONCLUSION
The dual-tree complex wavelet transform (DTCWT) based image
resolution enhancement technique uses DT-CWT to decompose an input low-
resolution satellite image into different sub-bands. The high-frequency sub-
band images and the input image are interpolated by bicubic interpolation,
followed by combining all these images to generate a new high-resolution
image by using inverse DT-CWT.
Here the interpolation scheme assumes that the image to be interpolated
is noise free. This assumption is invalid in practice because noise will be
inevitably introduced in the image acquisition process. The problems of
denoising and interpolation are modeled as to estimate the noiseless and
missing samples under the same framework of LMMSE. So DT-CWT based
resolution enhancement algorithm can be used for noisy images by applying
LMMSE based denoising and zooming for input low resolution noisy image.
Maximum a posteriori (MAP) estimation and Selective Perceptual
MAP (SELP-MAP) based denoising and resolution enhancement algorithms,
using a single image and a sequence of undersampled images, are implemented
for a Gaussian distribution and compared with the LMMSE technique. SELP-MAP
gives better performance when it replaces LMMSE in the DT-CWT
based image denoising and enhancement algorithm. This algorithm is further
extended using the wavelet packet transform, and the performance is compared
in terms of PSNR, MSSIM, FSIM and IEF.
6.1 FUTURE WORK
Resolution enhancement also plays an important role in video
processing applications. In future, wavelet based denoising and super
resolution techniques can be developed for video.

































































REFERENCES

1. Andrea Giachetti and Nicola Asuni, (2011): Real-Time Artifact-free
Image Upscaling, IEEE Trans. Image Process., Vol. 20, No. 10,
pp. 2760-2768.

2. Armstrong E, Barnard K and Hardie R, (1997): Joint MAP Registration
and High-Resolution Image Estimation Using a Sequence of Undersampled
Images, IEEE Trans. Image Process., Vol. 6, No. 12, pp. 1621-1633.

3. Carey W K, Chuang D B and Hemami S S, (1999): Regularity-
Preserving Image Interpolation, IEEE Trans. Image Process., Vol. 8,
No. 9, pp. 1295-1297.

4. Chuandai Dong, Daojin Song, Hongkui Li and Xueting Liu, (2008):
MAP-Based Image Superresolution Reconstruction, International
Journal of Computer Science and Engineering, Vol. 2, No. 3,
pp. 125-128.

5. Cristobal G, Flusser J, Gil E, Miravet C, Rodríguez F and Sroubek F,
(2006): Superresolution Imaging: A Survey of Current Techniques,
Proc. of SPIE, Vol. 7074, 70740C-15.

6. Gholamreza Anbarjafari and Hasan Demirel, (2010): Satellite Image
Resolution Enhancement Using Complex Wavelet Transform, IEEE
Geoscience and Remote Sensing Letters, Vol.7, No. 1, pp.122-128.




7. Gholamreza Anbarjafari and Hasan Demirel, (2011): Image Resolution
Enhancement by Using Discrete and Stationary Wavelet
Decomposition, IEEE Trans. Image Process., Vol. 20, No. 5,
pp. 1458-1460.

8. Gholamreza Anbarjafari and Hasan Demirel, (2011): Discrete Wavelet
Transform Based Satellite Image Resolution Enhancement, IEEE
Geoscience and Remote Sensing Letters, Vol.49, No.6, pp.1997-2004.

9. eMap International, High Resolution Imagery,
http://www.emap-int.com/products/HighResImagery.

10. Ivan W. Selesnick, Nick G. Kingsbury and Richard G. Baraniuk,
(2005): The Dual-Tree Complex Wavelet Transform, IEEE Signal
Processing Magazine, pp.123-150.

11. Lei Zhang and Xiaolin Wu, (2006): An Edge-Guided Image Interpolation
Algorithm via Directional Filtering and Data Fusion, IEEE Trans.
Image Process., Vol. 15, No. 8, pp. 245-258.

12. Lei Zhang, David Zhang and Xin Li, (2012): Image Denoising and
Zooming Under the LMMSE Framework, IET Image Processing,
Vol. 7, No. 12, pp. 161-127.

13. Li X and Orchard M, (2001): New Edge-Directed Interpolation,
IEEE Trans. Image Process., Vol. 10, No. 10, pp. 1521-1527.



14. Nick G. Kingsbury, (1999): Image Processing with Complex
Wavelets, Philos. Trans. R. Soc. London A, Math. Phys. Sci., Vol. 357,
No. 1760, pp. 2543-2560.

15. Orchard M T, Schwartz S C and Zhu Y, (2001): Wavelet Domain
Image Interpolation via Statistical Estimation, in Proc. Int. Conf. Image
Processing, Vol. 3, pp. 840-843.

16. Park H W, Piao Y and Shin I, (2005): Image Resolution Enhancement
using Inter-Subband Correlation in Wavelet Domain, in Proc. ICIP,
Vol. 1, pp. I-445-I-448.

17. Tardi Tjahjadi and Turgay Celik, (2010): Image Resolution
Enhancement using Dual-tree Complex Wavelet Transform, IEEE
Geoscience and Remote Sensing Letters, Vol. 7, No. 3, pp. 1245-1247.

18. Temizel A, (2007): Image Resolution Enhancement using Wavelet
Domain Hidden Markov Tree and Coefficient Sign Estimation, in Proc.
ICIP, Vol. 5, pp. V-381-V-384.

19. Temizel A and Vlachos T, (2008): Wavelet Domain Image Resolution
Enhancement using Cycle-Spinning, Electron. Letters, Vol. 41, No. 3,
pp. 119-121.

20. Vaidyanathan P P and Vrcelj B, (2001): Efficient Implementation of
All-digital Interpolation, IEEE Trans. Image Process., Vol. 10, No. 11,
pp. 1639-1646.



21. Vetterli M and Ramchandran K, (1993): Best Wavelet Packet Basis in
a Rate-Distortion Sense, IEEE Trans. Image Process., Vol. 2, No. 17,
pp. 160-174.

22. Nason G and Silverman B, (1995): The Stationary Wavelet Transform
and Some Statistical Applications, in Wavelets and Statistics,
pp. 281-299.
