Neurocomputing 184 (2016) 196–206

A self-learning image super-resolution method via sparse representation and non-local similarity

Juan Li, Jin Wu*, Huiping Deng, Jin Liu
College of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, China

* Corresponding author. E-mail address: lilian0122@163.com (J. Wu).

Article info

Article history:
Received 26 December 2014
Received in revised form 19 May 2015
Accepted 18 July 2015
Available online 17 December 2015

Keywords:
Super-resolution
Sparse representation
Non-local self-similarity

Abstract

It is difficult to design an image super-resolution algorithm that can not only preserve image edges and texture structure but also keep the computational complexity low. A new super-resolution model based on sparsity regularization in a Bayesian framework is presented. The fidelity term restricts the underlying image to be consistent with the observed image in terms of the image degradation model. The sparsity regularization term constrains the underlying image to have a sparse representation in a proper dictionary. Non-local self-similarity is also introduced into the model. To make the sparse domain represent the underlying image better, we use high-frequency features extracted from the underlying image patches for sparse representation, which increases the effectiveness of sparse modeling. The proposed method learns the dictionary directly from the estimated high-resolution image patches (extracted features), so that dictionary learning and super-resolution are fused naturally into one coherent, iterated process. Such a self-learning method has stronger adaptability to different images and reduces the dictionary training time. Experiments demonstrate the effectiveness of the proposed method: compared with several state-of-the-art methods, it better preserves image edges and texture details.

1. Introduction

In the process of acquiring digital images, infrared, visible-light and other imaging systems are affected by the imaging conditions and mode, so it is difficult to acquire high-resolution images under some circumstances [1,2]. Image super-resolution reconstruction is a restoration technique that aims to reconstruct a high-resolution image from its low-resolution version. It has wide application prospects in the fields of medical imaging, remote sensing, surveillance, and so on.

Researchers have proposed many image super-resolution methods. The regularization model represented by the total variation (TV) model [3–7] has been widely used, but it tends to generate piecewise constant image structures. Other image priors, such as the soft edge smoothness prior [8] and the gradient profile prior [9], have been proposed to regularize super-resolution, yet the reconstructed images look unnatural. Image super-resolution reconstruction methods using sparse representation have become one of the hot topics in current research [2,10–19]. The theory of sparse representation assumes that each patch of a natural image can be represented as a linear combination of a few atoms from an over-complete dictionary. The sparsity-based method in Ref. [14] supervised super-resolution reconstruction by learning the sparse association between high-resolution and low-resolution image patches; however, the edges reconstructed by Ref. [14] contain some visible artifacts. Research on image redundancy shows that natural images have non-local self-similarity, namely, there exist many repetitive structures throughout a natural image. Recently many works have shown that a good combination of local sparsity and non-local similarity can greatly improve the performance of super-resolution. The ASDS-Reg method [16] combines sparse coding, non-local self-similarity and a piecewise autoregressive model; it can recover many high-frequency details, but it produces a certain ringing effect at the edges and some artifacts in smooth regions. The NCSR method [17] proposed a non-local sparse coding strategy, which obtains the non-local sparse code of each image patch by calculating the weighted average of the sparse codes of similar patches. Compared with the ASDS-Reg method [16], the NCSR method [17] produces sharper edges and fewer visual artifacts.

In this paper, we present a simple and effective super-resolution method, which is built on the same sparsity and non-local redundancies as the ASDS-Reg method [16], but which introduces the following novelties to improve the results:

(1) Instead of directly using raw image patches for sparse representation as in the ASDS-Reg method [16], we extract patch features from the estimated high-resolution image, which helps to construct a more effective dictionary, so that the learned dictionary can better represent image edges and textures.

(2) The ASDS-Reg method [16] learns its dictionary from pre-collected example image patches, which might limit its practical use. In this paper, we fuse dictionary learning and super-resolution naturally into one coherent, iterated process by training directly on the estimated high-resolution image patches (extracted features). The learned dictionary can better adapt to different images, and the accuracy of super-resolution reconstruction can be improved. Moreover, compared with the ASDS-Reg method [16], the dictionary training time of our method is significantly reduced.
The remainder of this paper is organized as follows. The proposed method is presented in Section 2. Experimental results are given in Section 3. The conclusion is drawn in Section 4.
2. The proposed image super-resolution method

Image super-resolution reconstruction aims to restore a high-resolution image X from its observed low-resolution image Y. The observation model relating X and Y can be generally expressed as

Y = SHX + V    (1)

where H and S denote a blurring filter and a down-sampling operator, respectively, and V represents Gaussian noise.

Image super-resolution is an ill-posed inverse problem. An effective way to solve such a problem is regularization, which uses prior knowledge of natural images to constrain the solution space and converts the super-resolution problem into the optimization of a cost function. In this paper, we regularize the super-resolution problem via the sparsity prior and non-local self-similarity.
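To make the degradation model of Eq. (1) concrete, the following minimal sketch (our illustration, not code supplied by the authors; the function name and defaults are our own) simulates Y = SHX + V with a Gaussian blur, down-sampling and additive noise, using the kernel settings reported in Section 3.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(X, sigma=1.6, scale=3, noise_std=0.0, seed=0):
    """Simulate Eq. (1), Y = SHX + V: Gaussian blurring (H),
    down-sampling by `scale` (S) and additive Gaussian noise (V)."""
    rng = np.random.default_rng(seed)
    # H: Gaussian blur; truncate=1.875 yields a 7 x 7 kernel for sigma = 1.6
    blurred = gaussian_filter(X.astype(float), sigma=sigma, truncate=1.875)
    Y = blurred[::scale, ::scale]                   # S: down-sampling operator
    return Y + rng.normal(0.0, noise_std, Y.shape)  # V: Gaussian noise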
2.1. Regularization with sparsity prior

Our method is based on the sparse representation of image patches, which has demonstrated effectiveness and robustness in regularizing the super-resolution problem.

We use X ∈ R^N to denote the original image and X_i ∈ R^m to denote the i-th image patch of X; both X and X_i are represented as column vectors. Let D_h = [d_1, …, d_k] ∈ R^{m×k} (m < k) be an over-complete dictionary composed of a set of normalized basis vectors, and suppose each image patch can be sparsely represented over this dictionary. That is, for image patch X_i there exists a sparse vector α_i ∈ R^k such that X_i ≈ D_h α_i. We call α_i the sparse code; it has very few (≪ k) non-zero coefficients. The sparse coding of each image patch can be formulated as follows:
\min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|X_i - D_h\alpha_i\|_2 \le \varepsilon    (2)

where \|\alpha_i\|_0 counts the number of non-zero coefficients in α_i and ε is a small constant.

Since the ℓ0 norm is difficult to optimize, we select the ℓ1 norm as the sparsity measure. The equivalent formulation of sparse coding can be written as

\min_{\alpha_i} \frac{1}{2}\|X_i - D_h\alpha_i\|_2^2 + \lambda\|\alpha_i\|_1    (3)

where λ is a parameter balancing sparsity against sparse representation error. Then the entire image can be sparsely represented as

\min_{\{\alpha_i\}} \frac{1}{2}\sum_i\|X_i - D_h\alpha_i\|_2^2 + \lambda\sum_i\|\alpha_i\|_1    (4)

To construct a more effective dictionary, we extract high-frequency features from the high-resolution image patches for sparse representation instead of directly using pixel values [20,21]. We obtain the high-frequency features by subtracting the mean intensity value from each high-resolution image patch. Thus the learned dictionary can better represent image edges and textures, and the numerical stability of sparse coding is improved. Considering the shrinkage effect of ℓ1-norm minimization on the sparse representation, all high-resolution patch features should be normalized to unit length.
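As an illustration of the feature extraction just described and of the ℓ1 sparse coding in Eq. (3), the following sketch uses iterative soft thresholding (ISTA). This is a simplified stand-in of our own (the paper itself solves Eq. (3) with the feature-sign search algorithm [22]); function names and defaults are assumptions.

import numpy as np

def patch_feature(patch):
    """High-frequency feature: subtract the patch mean, normalize to unit length."""
    v = patch.reshape(-1).astype(float)
    v -= v.mean()                          # remove the mean intensity (low frequency)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def sparse_code(x, D, lam=0.15, n_iter=200):
    """Solve Eq. (3), min_a 0.5*||x - D a||_2^2 + lam*||a||_1, by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a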

In the Bayesian framework, the image super-resolution reconstruction problem can be modeled as the following optimization problem:

X^* = \arg\min_{X,\{\alpha_i\}} \frac{\eta}{2}\|Y - SHX\|_2^2 + \lambda\sum_i\|\alpha_i\|_1 + \frac{1}{2}\sum_i\left\|D_h\alpha_i - \frac{P_iX - \mathrm{mean}(P_iX)}{\|P_iX - \mathrm{mean}(P_iX)\|_2}\right\|_2^2    (5)

where P_i is the matrix extracting patch X_i from X, the scalar mean(P_iX) is the mean intensity value of patch X_i (it is subtracted from every element of the vector X_i), and η is a parameter balancing the reconstruction constraint and the sparsity prior. In Eq. (5), the first term restricts the consistency between the reconstructed image and the observed image in terms of the image degradation model; the second and third terms ensure that each patch feature of the reconstructed image can be sparsely represented, which helps to recover the high-frequency details.

However, the ability of the sparsity regularization alone to reconstruct the image is limited. Besides the sparsity prior, we introduce a non-local similarity constraint, which further improves the accuracy of the reconstructed image.

2.2. Exploiting non-local self-similarity

Non-local self-similarity is beneficial for preserving edge details and suppressing noise.

For an image patch X_i, we can search for its similar patches X_j (j = 1, 2, …, L) in the whole image X. Let x_{ic} be the gray value of the central pixel in patch X_i and x_{jc} the gray value of the central pixel in patch X_j; then x_{ic} can be regarded as a weighted average of the x_{jc} (j = 1, 2, …, L):

x_{ic} = \sum_{j=1}^{L} \omega_j x_{jc}    (6)

where ω_j is the similarity weight between x_{jc} and x_{ic}, defined as

\omega_j = \frac{\exp(-\|X_i - X_j\|_2^2/h)}{\sum_{j \in G_B} \exp(-\|X_i - X_j\|_2^2/h)}    (7)

where h is a weight-controlling factor, G_B denotes the set of similar patches collected in the search window, and the denominator is a normalization factor.

Since natural images contain a great deal of redundant information, we consider that each pixel in the reconstructed image can be predicted by the weighted average of its non-local similar pixels, and the prediction error of each pixel is expected to be small. Let b_i be the column vector containing the ω_j (j = 1, 2, …, L) and β_i the column vector containing the x_{jc} (j = 1, 2, …, L); then \|x_{ic} - \sum_{j=1}^{L}\omega_j x_{jc}\|_2^2 can be written as \|x_{ic} - b_i^T\beta_i\|_2^2. We incorporate this non-local similarity constraint, using all the self-predictions in the reconstructed image, into Eq. (5) as follows:

X^* = \arg\min_{X,\{\alpha_i\}} \frac{\eta}{2}\|Y - SHX\|_2^2 + \lambda\sum_i\|\alpha_i\|_1 + \frac{1}{2}\sum_i\left\|D_h\alpha_i - \frac{P_iX - \mathrm{mean}(P_iX)}{\|P_iX - \mathrm{mean}(P_iX)\|_2}\right\|_2^2 + \frac{\gamma}{2}\sum_{x_{ic}\in X}\|x_{ic} - b_i^T\beta_i\|_2^2    (8)

where γ is a parameter balancing the non-local similarity constraint. We rewrite the term \sum_{x_{ic}\in X}\|x_{ic} - b_i^T\beta_i\|_2^2 as \|(I - W)X\|_2^2, where I is the identity matrix and

W(i, j) = \begin{cases} \omega_j & \text{if } x_{jc} \text{ is an element of } \beta_i\ (\omega_j \in b_i) \\ 0 & \text{otherwise} \end{cases}    (9)

Then Eq. (8) can be expressed as

X^* = \arg\min_{X,\{\alpha_i\}} \frac{\eta}{2}\|Y - SHX\|_2^2 + \lambda\sum_i\|\alpha_i\|_1 + \frac{1}{2}\sum_i\left\|D_h\alpha_i - \frac{P_iX - \mathrm{mean}(P_iX)}{\|P_iX - \mathrm{mean}(P_iX)\|_2}\right\|_2^2 + \frac{\gamma}{2}\|(I - W)X\|_2^2    (10)
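As an illustration of Eqs. (7) and (9), the sketch below computes the normalized similarity weights of one patch against all others; the returned pair (idx, w) would populate one row of the matrix W. The function name and the default h and L are our own illustrative choices, not settings from the paper.

import numpy as np

def nonlocal_weights(patches, i, h=25.0, L=10):
    """Eq. (7): weights of the L patches most similar to patch i.
    `patches` holds one flattened patch per row."""
    d2 = np.sum((patches - patches[i]) ** 2, axis=1)  # squared distances
    d2[i] = np.inf                                    # exclude the patch itself
    idx = np.argsort(d2)[:L]                          # L most similar patches
    w = np.exp(-d2[idx] / h)
    w /= w.sum()                                      # normalization (Eq. (7))
    return idx, w                                     # one row of W in Eq. (9)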
2.3. A self-learning algorithm combining sparsity prior and non-local similarity

In the previous sections we discussed the super-resolution algorithm under the assumption that the dictionary D_h is known. Designing a good dictionary is of great importance to sparse representation. A prespecified analytical dictionary (e.g., a wavelet or curvelet dictionary) can be used; it leads to fast algorithms for computing sparse codes, but it is less effective at modeling the local structures of images. A synthesis dictionary, usually learned from example images, can also be used. Synthesis-dictionary-based representation is computationally more expensive than analytical-dictionary-based representation, but it obtains better sparse representations and responds more effectively to images with rich details.

We select the synthesis dictionary for our super-resolution method. Learning a dictionary from a large number of randomly selected high-resolution image patches can make full use of the high-frequency information provided by the training example images; however, the detailed information those example images provide for recovering a particular image may not be accurate. The optimization of the cost function in Eq. (10) reveals a way to integrate dictionary learning into the super-resolution process: we learn the dictionary directly from the currently estimated high-resolution image patches (extracted features). Compared with learning a dictionary from pre-collected example image patches, this self-learning method has stronger adaptability to different images and leads to better reconstruction results.

Returning to Eq. (10), we can regard D_h as an unknown variable and define the cost function as

X^* = \arg\min_{X,\{\alpha_i\},D_h} \frac{\eta}{2}\|Y - SHX\|_2^2 + \lambda\sum_i\|\alpha_i\|_1 + \frac{1}{2}\sum_i\left\|D_h\alpha_i - \frac{P_iX - \mathrm{mean}(P_iX)}{\|P_iX - \mathrm{mean}(P_iX)\|_2}\right\|_2^2 + \frac{\gamma}{2}\|(I - W)X\|_2^2    (11)

This cost function has four kinds of unknowns: the sparse codes {α_i}, the dictionary D_h, the non-local similarity matrix W, and the latent high-resolution image X. We apply an alternating minimization algorithm to solve Eq. (11).

Firstly, we use the Landweber iteration method [10] to initialize the high-resolution image, denoted by X^(0), from which we randomly select k patch feature vectors as the initial dictionary atoms, denoted by D_h^(0), and we set the initial iteration number l = 1. Then we perform an iterative process as follows (a code sketch of the whole loop is given at the end of this subsection):

(1) Solve the optimization problem in Eq. (11) over the sparse codes {α_i} given fixed X^(l-1) and D_h^(l-1). This problem can be solved by optimizing over each α_i individually:

\alpha_i^{(l)} = \arg\min_{\alpha_i} \frac{1}{2}\left\|D_h^{(l-1)}\alpha_i - \frac{P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})}{\|P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})\|_2}\right\|_2^2 + \lambda\|\alpha_i\|_1    (12)

The feature-sign search algorithm [22] can be used to find the optimal sparse codes.

(2) Update the dictionary D_h given those sparse representations, which reduces to the following optimization problem:

D_h^{(l)} = \arg\min_{D_h} \sum_i\left\|D_h\alpha_i^{(l)} - \frac{P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})}{\|P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})\|_2}\right\|_2^2 \quad \text{s.t.} \quad \|d_j\|_2 \le 1,\ j = 1, 2, \ldots, k    (13)

where d_j is the j-th column of D_h; the ℓ2-norm constraint on each column of the dictionary removes the scaling ambiguity. This optimization problem can be solved using a Lagrange dual method [22].

(3) Update X given fixed {α_i^{(l)}} and D_h^{(l)}. Returning to Eq. (11), we need to solve

X^{(l)} = \arg\min_X \frac{\eta}{2}\|Y - SHX\|_2^2 + \frac{\gamma}{2}\|(I - W^{(l-1)})X\|_2^2 + \frac{1}{2}\sum_i\left\|D_h^{(l)}\alpha_i^{(l)} - \frac{P_iX - \mathrm{mean}(P_iX^{(l-1)})}{\|P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})\|_2}\right\|_2^2    (14)

where W^{(l-1)} is the non-local similarity matrix of X^{(l-1)}, computed according to Eq. (9). Let μ_i^{(l-1)} ∈ R^m be the column vector whose elements all equal mean(P_iX^{(l-1)}), and let C_i^{(l-1)} denote \|P_iX^{(l-1)} - \mathrm{mean}(P_iX^{(l-1)})\|_2; then Eq. (14) can be rewritten as

X^{(l)} = \arg\min_X \frac{\eta}{2}\|Y - SHX\|_2^2 + \frac{\gamma}{2}\|(I - W^{(l-1)})X\|_2^2 + \frac{1}{2}\sum_i\left\|D_h^{(l)}\alpha_i^{(l)} - \frac{P_iX - \mu_i^{(l-1)}}{C_i^{(l-1)}}\right\|_2^2    (15)

This is a simple quadratic problem whose minimizer has the closed form

X^{(l)} = \left(\eta(SH)^T SH + \sum_i \frac{P_i^T P_i}{(C_i^{(l-1)})^2} + \gamma(I - W^{(l-1)})^T(I - W^{(l-1)})\right)^{-1}\left(\eta(SH)^T Y + \sum_i \frac{P_i^T D_h^{(l)}\alpha_i^{(l)}}{C_i^{(l-1)}} + \sum_i \frac{P_i^T \mu_i^{(l-1)}}{(C_i^{(l-1)})^2}\right)    (16)

(4) Terminate when l reaches a specified maximum number of iterations L (15 in our experiments); otherwise, increment l and return to step (1).

From the above procedure, we can see that the super-resolution model in Eq. (11) is solved iteratively. During the iteration, the quality of the recovered high-resolution image gradually improves, which improves the accuracy of the non-local similarity; as a result, more features and details are revealed in the recovered image, and the accuracy of the sparse representation also improves. In return, the improved non-local similarity and sparse representation further enhance the quality of the recovered image. Finally, the desired high-resolution image is obtained when the procedure converges.
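As referenced in Section 2.3, the following compact sketch mirrors the alternating loop of steps (1)–(3) on a one-dimensional toy signal. It is our illustration under simplifying assumptions, not the authors' implementation: the non-local term of Eq. (14) is omitted, ISTA stands in for the feature-sign search of Ref. [22], a normalized least-squares update stands in for the Lagrange dual dictionary update, and plain gradient descent replaces the closed-form solution of Eq. (16).

import numpy as np

m, k, scale = 8, 16, 2                  # patch size, dictionary atoms, factor
eta, lam = 6.0, 0.15                    # fidelity and sparsity weights

def blur_down(x):                       # SH: small blur, then down-sampling
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")[::scale]

def blur_down_T(y, n):                  # (SH)^T, adjoint of blur_down
    u = np.zeros(n)
    u[::scale] = y
    return np.convolve(u, [0.25, 0.5, 0.25], mode="same")

def patches(x):                         # overlapping patches as columns
    return np.stack([x[i:i + m] for i in range(len(x) - m + 1)], axis=1)

def soft(z, t):                         # soft-thresholding operator
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_codes(F, D, n_iter=100):     # step (1), Eq. (12), via ISTA
    L = np.linalg.norm(D, 2) ** 2
    A = np.zeros((D.shape[1], F.shape[1]))
    for _ in range(n_iter):
        A = soft(A - D.T @ (D @ A - F) / L, lam / L)
    return A

def dict_update(F, A):                  # step (2), Eq. (13): least-squares fit
    D = F @ A.T @ np.linalg.pinv(A @ A.T + 1e-8 * np.eye(A.shape[0]))
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-8)  # unit-norm atoms

def update_x(X, Y, D, A, mu, C, steps=100, lr=0.01):
    """Step (3): gradient descent on Eq. (14), non-local term omitted."""
    for _ in range(steps):
        g = eta * blur_down_T(blur_down(X) - Y, len(X))   # data-fidelity term
        R = ((patches(X) - mu) / C - D @ A) / C           # feature-term residual
        for i in range(R.shape[1]):                       # scatter P_i^T back
            g[i:i + m] += R[:, i]
        X = X - lr * g
    return X

rng = np.random.default_rng(0)
X_true = np.cumsum(rng.normal(size=64))    # toy 1-D "image"
Y = blur_down(X_true)                      # observed low-resolution signal
X = np.repeat(Y, scale)[:64]               # crude initial estimate
D = rng.normal(size=(m, k))
D /= np.linalg.norm(D, axis=0)             # random initial dictionary atoms

for _ in range(15):                        # outer loop, L = 15 as in the paper
    P = patches(X)
    mu = P.mean(axis=0)                    # patch means
    C = np.maximum(np.linalg.norm(P - mu, axis=0), 1e-8)
    A = sparse_codes((P - mu) / C, D)      # step (1): sparse coding of features
    D = dict_update((P - mu) / C, A)       # step (2): dictionary update
    X = update_x(X, Y, D, A, mu, C)        # step (3): image update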

Table 1
The PSNR and SSIM results of the reconstructed images (noiseless). For each method, the first row is PSNR (dB) and the second row is SSIM.

Method            Butterfly  Flower   Girl     Parrot   Bike     Hat      Plant    Leaves
TV [5]            26.60      27.38    31.21    27.59    23.61    29.19    31.28    24.58
                  0.9036     0.8111   0.7878   0.8856   0.7567   0.8569   0.8784   0.8878
ASDS-Reg [16]     27.09      29.19    33.53    29.97    24.48    30.93    33.47    26.78
                  0.8975     0.8480   0.8242   0.9090   0.7948   0.8706   0.9094   0.9050
NCSR [17]         28.10      29.50    33.65    30.48    24.71    31.28    34.04    27.46
                  0.9159     0.8563   0.8274   0.9148   0.8027   0.8705   0.9191   0.9218
Proposed method   27.60      29.57    33.64    30.40    24.83    31.43    34.06    27.35
                  0.9165     0.8610   0.8271   0.9178   0.8081   0.8746   0.9194   0.9260

Fig. 1. Reconstructed high-resolution images of Butterfly by different methods.

3. Experimental results

In order to test the performance of the proposed method, we conduct super-resolution reconstruction experiments on natural images. We first apply a 7 × 7 Gaussian kernel with standard deviation 1.6 to the high-resolution image, and then down-sample the blurred image by a scaling factor of 3. In our implementation, the dictionary size is 256, the size of each image patch is 7 × 7 with an overlap of 5 pixels between adjacent patches, and the size of the non-local similarity window is 7 × 7.

We compare the proposed method with three state-of-the-art methods on eight test images. The PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) indices of the methods are listed in Table 1 (for each method, the first row is PSNR and the second is SSIM). Table 1 indicates that the proposed method outperforms the TV method [5] and the ASDS-Reg method [16]. Furthermore, the PSNR values of the proposed method are roughly equivalent to those of the NCSR method [17], and its SSIM values are slightly superior to those of the NCSR method [17]. Some visual comparisons of these methods are shown in Figs. 1–4.
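As a side note on the evaluation protocol, PSNR and SSIM can be computed, for example, with scikit-image. This snippet is our illustration (the wrapper name is our own; the metric functions are from skimage.metrics), not code from the paper.

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(ground_truth, reconstructed):
    """PSNR (dB) and SSIM between a ground-truth image and a reconstruction,
    assuming 8-bit intensity range."""
    psnr = peak_signal_noise_ratio(ground_truth, reconstructed, data_range=255)
    ssim = structural_similarity(ground_truth, reconstructed, data_range=255)
    return psnr, ssim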

Fig. 2. Local magnification of Fig. 1.

Fig. 3. Reconstructed high-resolution images of Leaves by different methods.

Fig. 4. Local magnification of Fig. 3.

It is clear that the TV method [5] generates piecewise smooth structures. The ASDS-Reg method [16] recovers many high-frequency details; however, it exhibits a certain ringing effect at the edges and some artifacts in smooth regions. Compared with the ASDS-Reg method [16], the NCSR method [17] and the proposed method produce sharper results, suppress the ringing effect and have fewer visual artifacts. Compared with the NCSR method [17], the proposed method better preserves edge and texture details. In our experiments, for the noiseless low-resolution images, the parameters are empirically set as follows: η = 6, λ = 0.15, and γ = 0.003.

To further test the robustness of the proposed method to noise, we add Gaussian noise with standard deviation 5 to the above low-resolution test images. From Table 2 and the subjective

Table 2
The PSNR and SSIM results of the reconstructed images (noisy, noise level 5). For each method, the first row is PSNR (dB) and the second row is SSIM.

Method            Butterfly  Flower   Girl     Parrot   Bike     Hat      Plant    Leaves
TV [5]            25.47      26.45    29.77    26.77    23.07    28.11    29.67    23.78
                  0.8502     0.7509   0.7258   0.8084   0.7118   0.7768   0.8028   0.8457
ASDS-Reg [16]     25.99      27.67    31.79    28.66    23.52    29.57    31.09    25.49
                  0.8591     0.7738   0.7593   0.8632   0.7205   0.8127   0.8350   0.8633
NCSR [17]         26.84      28.07    32.04    29.50    23.78    29.96    31.79    26.22
                  0.8883     0.7936   0.7642   0.8772   0.7369   0.8247   0.8607   0.8944
Proposed method   26.44      28.16    32.00    29.21    23.99    30.27    31.88    26.10
                  0.8853     0.8024   0.7650   0.8792   0.7477   0.8347   0.8605   0.8960

Fig. 5. Reconstructed high-resolution images of noisy Butterfly by different methods.

Fig. 6. Local magnification of Fig. 5.



Fig. 7. Reconstructed high-resolution images of noisy Leaves by different methods.

Fig. 8. Local magnification of Fig. 7.

comparisons shown in Figs. 5–8, we can conclude that the proposed method is the most faithful to the ground truth: not only are the noise and ringing effect effectively suppressed, but the structure information is also well preserved. For the noisy low-resolution images, the parameters are empirically set as follows: η = 0.1, λ = 0.2, and γ = 0.05.

Fig. 9 describes the behavior of the proposed algorithm for the image Leaves. Each iteration improves the super-resolution results, as indicated by the increase of the PSNR and SSIM values. As shown in Fig. 9(a) and (b), the curves present consistent improvement from the first iteration. For the noiseless Leaves, the PSNR and SSIM achieve gains of up to 2.8 dB and 0.1 respectively, and for the noisy Leaves (noise level 5), the PSNR and SSIM achieve gains of up to 2.2 dB and 0.12 respectively. We also observe fast convergence: the PSNR and SSIM indices become stable in about 15 iterations, which also holds for the other test images in our experiments. Fig. 9(c) and (d) show that, as the optimization iteration proceeds, the sparse representation becomes sparser and, at the same time, the objective function value becomes smaller and smaller. The other test images in our experiments exhibit the same trend as the image Leaves; thus, our optimization algorithm can be expected to converge to a local minimum.

To more comprehensively validate the effectiveness of the proposed method, we conduct super-resolution experiments on a dataset containing 200 high-resolution natural images of various contents. A 256 × 256 subimage rich in edges and texture is cropped from each of the 200 images. We compare the proposed method with the ASDS-Reg method [16] and the NCSR method [17]; the average PSNR and SSIM indices are listed in Table 3. We can conclude that the proposed method outperforms the ASDS-Reg method [16], and that the structural similarity performance of the proposed method is slightly superior to that of the NCSR method [17].

We evaluate the effect of dictionary size on the performance of the proposed method. Table 4 lists the PSNR and SSIM indices of

Fig. 9. The improvement in the super-resolution results after each iteration for the image Leaves (noiseless and noisy versions): (a) the PSNR improvement after each iteration; (b) the SSIM improvement after each iteration; (c) the sparsity measure at each iteration; (d) the objective function value at each iteration.

Table 3
The average PSNR and SSIM results of the reconstructed images on the 200-image dataset. For each method, the first row is PSNR (dB) and the second row is SSIM.

Method            Noise level 0   Noise level 5
ASDS-Reg [16]     28.18           27.06
                  0.8105          0.7508
NCSR [17]         28.52           27.46
                  0.8162          0.7631
Proposed method   28.47           27.43
                  0.8190          0.7669

Table 4
The PSNR and SSIM results of the reconstructed images using dictionaries of different sizes. For each image, the first row is PSNR (dB) and the second row is SSIM.

Images   D64      D128     D256     D512
Bike     23.94    23.97    23.99    23.99
         0.7443   0.7465   0.7477   0.7478
Plants   31.84    31.85    31.88    31.89
         0.8589   0.8599   0.8605   0.8606
Leaves   26.00    26.06    26.10    26.12
         0.8942   0.8955   0.8960   0.8960

some images reconstructed from their noisy low-resolution versions using dictionaries of size 64, 128, 256 and 512. From the table we can see that larger dictionaries yield higher PSNR and SSIM values, while increasing the computation time for dictionary training and sparse coding. Moreover, the PSNR and SSIM values are equivalent or only marginally higher when the dictionary size grows from 256 to 512. To balance computation cost and image quality, we choose a dictionary size of 256.

Table 5 lists the average training time (in seconds) for the ASDS-Reg method [16], the NCSR method [17] and the proposed method. The ASDS-Reg method [16] needs to learn a dictionary and autoregressive models from pre-collected example images, whereas both the NCSR method [17] and the proposed method learn the dictionary directly from the degraded image itself, without learning autoregressive models. The NCSR method [17] applies PCA to each cluster of similar patches to learn a dictionary of PCA bases; the proposed method learns the dictionary from the estimated high-resolution image patches (extracted features) using a Lagrange dual method. As shown in Table 5, the ASDS-Reg method [16] requires much more training time than the NCSR method [17] and the proposed method, and the proposed method has the lowest training cost of the three.

We have presented our super-resolution reconstruction method in the non-blind setting; however, the blurring parameters are unknown in most super-resolution applications. To test the robustness of the proposed method to kernel estimation error, we suppose the estimated standard deviation of the Gaussian kernel is 1.7, an estimation error of 6.25% relative to the actual standard deviation of 1.6, and feed the estimated blur kernel into the proposed method. The PSNR and SSIM indices of our method using the estimated kernel are listed in Tables 6 and 7. We can see that the quantitative results with the estimated kernel are only slightly inferior to those with the actual kernel, which elementarily validates the robustness of the proposed method to kernel estimation error.

We also apply the proposed method to other blurring types. Two commonly used blur kernels, a 9 × 9 uniform blur and a motion blur with length 10 and angle 45°, are used for simulations. Taking the image Butterfly as an example, the reconstructed results are shown in Figs. 10–13, which elementarily demonstrate the effectiveness of the proposed method under different blur kernels.
Table 5
Average training time (in seconds).

Method            ASDS-Reg [16]   NCSR [17]   Proposed method
Training time     2775            338         251
Table 6
The PSNR and SSIM results of the reconstructed images using the estimated kernel (noiseless). For each kernel, the first row is PSNR (dB) and the second row is SSIM.

Kernel             Butterfly  Flower   Girl     Parrot   Bike     Hat      Plant    Leaves
Actual kernel      27.60      29.57    33.64    30.40    24.83    31.43    34.06    27.35
                   0.9165     0.8610   0.8271   0.9178   0.8081   0.8746   0.9194   0.9260
Estimated kernel   27.21      29.49    33.57    30.06    24.62    31.35    34.05    27.06
                   0.9135     0.8608   0.8270   0.9173   0.8076   0.8745   0.9190   0.9252

Table 7
The PSNR and SSIM results of the reconstructed images using the estimated kernel (noisy, noise level 5). For each kernel, the first row is PSNR (dB) and the second row is SSIM.

Kernel             Butterfly  Flower   Girl     Parrot   Bike     Hat      Plant    Leaves
Actual kernel      26.44      28.16    32.00    29.21    23.99    30.27    31.88    26.10
                   0.8853     0.8024   0.7650   0.8792   0.7477   0.8347   0.8605   0.8960
Estimated kernel   26.32      28.14    31.99    29.08    23.94    30.23    31.87    25.98
                   0.8832     0.8020   0.7643   0.8757   0.7476   0.8345   0.8604   0.8958

Fig. 10. Reconstructed high-resolution image of Butterfly (noiseless, motion blur kernel).

Fig. 11. Reconstructed high-resolution image of Butterfly (noisy, motion blur kernel).

Fig. 12. Reconstructed high-resolution image of Butterfly (noiseless, uniform blur kernel).

Fig. 13. Reconstructed high-resolution image of Butterfly (noisy, uniform blur kernel).

4. Conclusion

This paper has presented a sparse representation based image super-resolution method, which achieves better structural similarity performance and better visual quality compared with many state-of-the-art methods. Furthermore, this paper elementarily validates the robustness of the proposed method to kernel estimation error and its effectiveness under different blur kernels. To ensure that the sparse domain represents the latent high-resolution image well, patch features are used for sparse representation instead of raw image patches. In addition, the proposed method reveals a way to integrate dictionary learning into the super-resolution process without collecting example images in advance, which is more practical for super-resolution applications. In the use of image non-local self-similarity, the proposed method adopts a Gaussian-weighted Euclidean distance to measure the similarity of image patches, but this is insufficient to reflect the structure of image patches; designing a more effective criterion for identifying similar patches is worthy of further study. Moreover, in most super-resolution applications, the blurring type and parameters are unknown, so in our future work we will research the super-resolution reconstruction method in the blind situation.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Nos. 81070853, 61501336, 61502357,

and 61502358), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20124219120002), and the Natural Science Foundation of Hubei Province of China (Nos. 2013CFB333 and 2015CFB365). The authors wish to thank the authors of [16,17] for generously sharing their code and data.

References

[1] S. Park, M. Park, M. Kang, Super-resolution image reconstruction: a technical overview, IEEE Signal Process. Mag. 20 (3) (2003) 21–36.
[2] S. Yang, M. Wang, Y. Chen, Y. Sun, Single-image super-resolution reconstruction via learned geometric dictionaries and clustered sparse coding, IEEE Trans. Image Process. 21 (9) (2012) 4016–4028.
[3] T. Chan, S. Esedoglu, F. Park, A. Yip, Recent developments in total variation image restoration, in: Mathematical Models of Computer Vision, Springer Verlag, New York, 2005.
[4] M. Lysaker, X. Tai, Iterative image restoration combining total variation minimization and a second-order functional, Int. J. Comput. Vis. 66 (1) (2006) 5–18.
[5] A. Marquina, S.J. Osher, Image super-resolution by TV-regularization and Bregman iteration, J. Sci. Comput. 37 (2008) 367–382.
[6] A. Beck, M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems, IEEE Trans. Image Process. 18 (11) (2009) 2419–2434.
[7] G. Chantas, N.P. Galatsanos, R. Molina, A.K. Katsaggelos, Variational Bayesian image restoration with a product of spatially weighted total variation image priors, IEEE Trans. Image Process. 19 (2) (2010) 351–362.
[8] S. Dai, M. Han, W. Xu, Y. Wu, Y. Gong, A.K. Katsaggelos, SoftCuts: a soft edge smoothness prior for color image super-resolution, IEEE Trans. Image Process. 18 (5) (2009) 969–981.
[9] J. Sun, J. Sun, Z. Xu, H. Shum, Image super-resolution using gradient profile prior, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
[10] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (2004) 1413–1457.
[11] X. Wang, D. Chen, Q. Ran, Image super-resolution reconstruction with content based dual-dictionary learning and sparse representation, Chin. J. Sci. Instrum. 34 (8) (2013) 1690–1695.
[12] J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration, IEEE Trans. Image Process. 17 (1) (2008) 53–69.
[13] A.M. Bruckstein, D.L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Rev. 51 (1) (2009) 34–81.
[14] J. Yang, J. Wright, T. Huang, Y. Ma, Image super-resolution via sparse representation, IEEE Trans. Image Process. 19 (11) (2010) 2861–2873.
[15] W. Dong, G. Shi, L. Zhang, X. Wu, Super-resolution with nonlocal regularized sparse representation, in: Proc. SPIE Visual Communications and Image Processing, 2010, pp. 77440H-1–77440H-10.
[16] W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Trans. Image Process. 20 (7) (2011) 1838–1857.
[17] W. Dong, L. Zhang, G. Shi, X. Li, Nonlocally centralized sparse representation for image restoration, IEEE Trans. Image Process. 22 (4) (2013) 1620–1630.
[18] J. Mairal, F. Bach, J. Ponce, G. Sapiro, Non-local sparse models for image restoration, in: Proceedings of the IEEE ICCV, 2009, pp. 2272–2279.
[19] M. Protter, M. Elad, H. Takeda, P. Milanfar, Generalizing the nonlocal-means to super-resolution reconstruction, IEEE Trans. Image Process. 18 (1) (2009) 36–51.
[20] J. Yang, Z. Wang, Z. Lin, S. Cohen, T. Huang, Coupled dictionary training for image super-resolution, IEEE Trans. Image Process. 21 (8) (2012) 3467–3478.
[21] S. Gao, I. Tsang, L. Chia, Laplacian sparse coding, hypergraph Laplacian sparse coding, and applications, IEEE Trans. Pattern Anal. Mach. Intell. 36 (1) (2013) 92–104.
[22] H. Lee, A. Battle, R. Raina, A.Y. Ng, Efficient sparse coding algorithms, in: Advances in Neural Information Processing Systems (NIPS), vol. 1, 2007, pp. 801–808.

Juan Li received her B.Sc. degree in 2001 from the Huazhong University of Science and Technology and her M.Sc. degree in 2005 from the Wuhan University of Science and Technology, where she is now a lecturer and a doctoral student. Her main research interests include image processing and the theory of signal sparse representation.

Jin Wu received her B.Sc. degree in 1988 from the Huazhong University of Science and Technology, her M.Sc. degree in 1997 from the Beijing University of Science and Technology, and her Ph.D. degree in 2006 from the Huazhong University of Science and Technology. She is now a professor and doctoral supervisor at the Wuhan University of Science and Technology. Her main research interests include image processing and pattern recognition, process detection and control.

Huiping Deng received her B.Sc. degree in electronics and information engineering and her M.Sc. degree in communication and information systems from the Yangtze University in 2005 and 2008, respectively. She received her Ph.D. degree from the Electronics and Information Engineering Department, Huazhong University of Science and Technology, in 2013, and is now a faculty member at the Wuhan University of Science and Technology. Her research interests are video coding and computer vision, currently focusing on three-dimensional video (3DV).

Jin Liu received his Ph.D. degree in pattern recognition and intelligent systems from the Huazhong University of Science and Technology in 2011. Currently, he is an associate professor at the Wuhan University of Science and Technology and a postdoctoral researcher at Beihang University. He is mainly working on pulsar navigation.
