
International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR)
ISSN(P): 2249-6831; ISSN(E): 2249-7943
Vol. 4, Issue 6, Dec 2014, 1-14
TJPRC Pvt. Ltd.

MULTI-HISTOGRAM EQUALIZATION USING ERROR BACK PROPAGATION NETWORK (MHEBPN)

UMESH KUMAR SHARMA1 & KAPIL KUMAWAT2

1Research Scholar, SBCET, Jaipur, Rajasthan, India

2Associate Professor, SBCET, Jaipur, Rajasthan, India

ABSTRACT

Histogram equalization is a simple and effective technique for image contrast enhancement, but it does not preserve brightness. Bi-histogram equalization (BBHE) has been proposed and analyzed mathematically, and it can preserve the original brightness to a certain extent. The Image Dependent Brightness Preserving Histogram Equalization (IDBPHE) technique is a good technique for contrast enhancement, but it does not always give the best Absolute Mean Brightness Error (AMBE). Multi-Histogram equalization using an Error Back Propagation Network (MHEBPN) provides not only better contrast but also better, scalable brightness preservation. First, the image is decomposed into equal-area sub-images based on the probability density function (PDF). The curvelet transform is used to identify the bright region. The histogram is then separated at the threshold level that gives the minimum AMBE value. The Error Back Propagation Algorithm (EBPA) is used to maintain the correct Euclidean distances, which gives a better PSNR value. Experimental results show that the proposed method gives better AMBE and PSNR results compared with other methods.

KEYWORDS: Histogram Equalization, EBPA, PDF and CDF


I. INTRODUCTION
The field of digital image processing refers to processing digital images by means of a digital computer. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. One useful paradigm is to consider three types of computerized processes: low-, mid- and high-level processes. Low-level processes involve primitive operations such as image pre-processing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. The goal of image enhancement is to improve the quality of the image such that the extracted image is better than the input image. Histogram Equalization (HE) is a very popular and simple technique for enhancing contrast [1]. Based on the image's original gray-level distribution, the image's histogram is reshaped into a different one with a uniform distribution property in order to increase the contrast [2]. The HE technique is a global operation; hence, it does not preserve the image brightness. To overcome this issue, local-HE [3] and brightness-preserving local HE [4]-[14] techniques have been proposed. In brightness preserving bi-histogram equalization (BBHE) [4] and dualistic sub-image histogram equalization (DSIHE) [5], the histogram is divided into two sub-histograms such that one contains high-intensity pixels and the other contains low-intensity pixels, and each part is then equalized independently with the HE technique. The BBHE and DSIHE techniques use the mean and median values, respectively, as the separation intensity to divide the histogram into two


sub-histograms. To overcome the drawbacks of the Bi-HE methods, Multi-Histogram Equalization (Multi-HE) was proposed [10]. Multi-HE consists of decomposing the input image into several sub-images, and then applying the classical HE process to each one. The authors propose two discrepancy functions for image decomposition, conceiving two new Multi-HE methods. A cost function is also used for automatically deciding into how many sub-images the input image will be decomposed. By observing the brightness of the original and processed images (i.e., the brightness preservation), they state that: 1) the images produced by Multi-HE are better at preserving the brightness of the original images; 2) even though the Multi-HE methods are not always the best brightness-preserving ones, their resulting brightness is always very close to the brightness of the original images.
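As an illustration of the mean-splitting idea behind BBHE described above, the following NumPy sketch splits the histogram at the mean intensity and equalizes each sub-histogram over its own range. It is a minimal sketch of the general scheme only, not the exact implementation evaluated in this paper.

```python
import numpy as np

def equalize_range(img, mask, lo, hi):
    """Histogram-equalize the pixels selected by `mask` onto the range [lo, hi]."""
    vals = img[mask]
    hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = np.cumsum(hist) / max(vals.size, 1)
    out = img.astype(np.float64).copy()
    out[mask] = lo + cdf[vals - lo] * (hi - lo)
    return out

def bbhe(img):
    """Bi-histogram equalization: split at the mean, equalize each part independently."""
    img = img.astype(np.int64)
    mean = int(img.mean())
    low, high = img <= mean, img > mean
    out = np.where(low,
                   equalize_range(img, low, 0, mean),
                   equalize_range(img, high, mean + 1, 255))
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: enhance a random 8-bit test image while roughly preserving its mean.
test = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
enhanced = bbhe(test)
```

DSIHE follows the same pattern with the median used as the separation intensity instead of the mean.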
In order to enhance contrast and brightness and produce a natural-looking image, this article proposes Multi-Histogram equalization using an Error Back Propagation Network (MHEBPN). In MHEBPN we first decompose the input image into several sub-images. The curvelet transform and a histogram matching technique using the Error Back Propagation Algorithm (EBPA) are used. The proposed MHEBPN method involves four steps: (i) region identification using the curvelet transform; (ii) training of the image pixels using the Error Back Propagation Algorithm (EBPA); (iii) computation of a histogram of the original image pixels and of the image pixels after training; (iv) modification of the image histogram with respect to the histogram of the identified region. This paper is organized as follows. Section II reviews related work. Section III describes the proposed method, including the curvelet transform and the Error Back Propagation Algorithm (EBPA). Results of our method are presented, discussed and compared with other HE methods in Section IV, and Section V concludes the paper.

II. RELATED WORK


According to H. D. Cheng and X. J. Shi in "A simple and effective histogram equalization approach to image enhancement", enhancement methods can mainly be classified into two classes: global and local methods. In that paper, multi-peak generalized histogram equalization (multi-peak GHE) is proposed. In this method, global histogram equalization is improved by using multi-peak histogram equalization combined with local information. In their experiments, different kinds of local information are employed. The method adopts the traits of existing methods and also makes the degree of enhancement completely controllable. Experimental results show that it is very effective in enhancing images with low contrast, regardless of their brightness. The multi-peak GHE technique is very effective for enhancing various kinds of images when the proper features (local information) can be extracted.
According to Joung-Youn Kim, Lee-Sup Kim and Seung-Ho Hwang in "An Advanced Contrast Enhancement Using Partially Overlapped Sub-Block Histogram Equalization", they presented an advanced histogram equalization algorithm for contrast enhancement. Global histogram equalization is simple and fast, but its contrast enhancement power is relatively low. Local histogram equalization, on the other hand, can enhance overall contrast more effectively. For high contrast and simple calculation, a low-pass-filter-type mask is proposed, realized by partially overlapped sub-block histogram equalization (POSHE). POSHE is derived from local histogram equalization but is more effective and much faster. The most important feature of POSHE is its low-pass-filter-shaped mask.
According to Byoung-Woo Yoon and Woo-Jin Song in "Image contrast enhancement based on the generalized histogram", they present an adaptive contrast enhancement method based on the generalized histogram, which is obtained by relaxing the restriction of using an integer count. For each pixel, the integer count 1 allocated to the pixel is split into a fractional count and a remainder count. The generalized histogram is generated by accumulating the fractional

count for each intensity level and distributing the remainder count uniformly throughout the intensity levels. The intensity
mapping function, which determines the contrast gain for each intensity level, is derived from the generalized histogram.
Since only the fractional part of the count allocated to each pixel is used for increasing the contrast gain of its intensity
level, the amount of contrast enhancement is adjusted by varying the fractional count according to regional characteristics.
By adjusting the fractional count for each pixel according to the user's requirements and its spatial activity, the amount of contrast enhancement is controlled appropriately for human observers. Therefore, the proposed method can achieve visually more pleasing contrast enhancement than conventional histogram equalization methods.
According to David Menotti, Laurent Najman, Jacques Facon, and Arnaldo de A. Araújo in "Multi-Histogram Equalization Methods for Contrast Enhancement and Brightness Preserving", they propose a novel technique called Multi-HE, which consists of decomposing the input image into several sub-images and then applying the classical HE process to each one. This methodology performs a less intensive image contrast enhancement, so that the output image presents a more natural look. They propose two discrepancy functions for image decomposition, conceiving two new Multi-HE methods. A cost function is also used for automatically deciding into how many sub-images the input image will be decomposed. Experiments show that their methods preserve the brightness better and produce more natural looking images than the other HE methods. In this work, they proposed and tested a new framework called MHE for image contrast enhancement and brightness preservation which generates natural looking images. The experiments showed that their methods are better at preserving the brightness of the processed image (in relation to the original one) and yield images with a natural appearance, at the cost of contrast enhancement. The contributions of this work are threefold: 1) an objective comparison among all the HE methods using quantitative measures such as the PSNR, brightness and contrast; 2) an analysis showing the boundaries of the HE technique and its variations (i.e., Bi- and Multi-HE methods) for contrast enhancement, brightness preservation and natural appearance; 3) their proposed methods.
According to Fan Yang and Jin Wu in "An Improved Image Contrast Enhancement in Multiple-Peak Images Based on Histogram Equalization", to solve two problems of histogram equalization, the paper presents an improved image contrast enhancement method based on histogram equalization which is especially suitable for multiple-peak images. Firstly, the input image is convolved with a Gaussian filter with optimum parameters. Secondly, the original histogram is divided into different areas by the valley values of the image histogram. Thirdly, the images are processed using their proposed method. This method outperforms others in terms of simplicity and adaptability. The results demonstrate that the proposed algorithm has good performance in the field of image enhancement and is especially suitable for multiple-peak images.
According to P. Rajavel in "Image Dependent Brightness Preserving Histogram Equalization", they propose the image-dependent brightness preserving histogram equalization (IDBPHE) technique to enhance image contrast while preserving image brightness. The curvelet transform and a histogram matching technique are used to enhance the image. The proposed IDBPHE technique undergoes two steps: (i) the curvelet transform is used to identify bright regions of the original image; (ii) the histogram of the original image is modified with respect to a histogram of the identified regions. Since the histogram of the original image is modified using a histogram of a portion of the same image, it enhances image contrast while preserving image brightness without any undesired artifacts. A subjective assessment to compare the visual

quality of the images is carried out. Absolute mean brightness error (AMBE) and peak signal to noise ratio (PSNR) are
used to evaluate the effectiveness of the proposed method in the objective sense.

III. PROPOSED WORK


The proposed Multi-Histogram equalization using Error Back Propagation Network (MHEBPN) technique uses the wrapping discrete curvelet transform (WDCvT), the Error Back Propagation Algorithm (EBPA) as a filter, and a histogram matching technique.
A. Curvelet Transform
Motivated by the needs of image analysis, Candès and Donoho developed the curvelet transform in 2000 [Candès and Donoho 2000]. The curvelet transform has a highly redundant dictionary which can provide sparse representations of signals that have edges along regular curves. The initial construction of curvelets was later redesigned and re-introduced as the Fast Digital Curvelet Transform (FDCT) [Candès et al. 2006]. This second-generation curvelet transform is meant to be simpler to understand and use. It is also faster and less redundant than its first-generation version. The curvelet transform is defined in both the continuous and digital domains and for higher dimensions. Since image-based feature extraction requires only the 2D FDCT, we will restrict our discussion to it.
In order to implement the curvelet transform, first the 2D Fast Fourier Transform (FFT) of the image is taken. Then the 2D Fourier frequency plane is divided into wedges (like the shaded region in Figure 1). The parabolic shape of the wedges is the result of partitioning the Fourier plane into radial (concentric circles) and angular divisions.

Figure 1: Curvelets in the Fourier Frequency Domain (Left) and the Spatial Domain (Right) [Candès et al. 2006]
The concentric circles are responsible for the decomposition of an image into multiple scales (used for band-passing the image at different scales), and the angular divisions partition the band-passed image into different angles or orientations. Thus, if we want to deal with a particular wedge, we need to define its scale j and its angle.
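To make the radial-angular tiling concrete, here is a simplified NumPy illustration that band-passes an image with a single hard-edged wedge of the 2D Fourier plane. The real FDCT uses smooth, overlapping windows (and wrapping), so this is only a sketch of the partitioning idea; the scale and angle parameters in the example are arbitrary.

```python
import numpy as np

def wedge_mask(shape, scale, angle_idx, n_angles):
    """Hard-edged mask selecting one (scale, angle) wedge of the 2D Fourier plane."""
    h, w = shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.hypot(fx, fy)
    theta = np.mod(np.arctan2(fy, fx), 2 * np.pi)
    # Dyadic radial ring for this scale (the concentric-circle partition).
    r_max = radius.max()
    ring = (radius >= r_max / 2 ** (scale + 1)) & (radius < r_max / 2 ** scale)
    # Angular sector angle_idx out of n_angles equal sectors.
    sector = (theta >= 2 * np.pi * angle_idx / n_angles) & \
             (theta < 2 * np.pi * (angle_idx + 1) / n_angles)
    return ring & sector

# Band-pass a test image with one wedge: FFT -> mask -> inverse FFT.
img = np.random.rand(256, 256)                    # stand-in for a grayscale image
F = np.fft.fftshift(np.fft.fft2(img))
mask = wedge_mask(img.shape, scale=2, angle_idx=3, n_angles=16)
wedge_component = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```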
When the image is of the right type, curvelets provide a representation that is considerably sparser than other wavelet transforms. This can be quantified by considering the best approximation of a geometrical test image that can be represented using only n wavelets, and analysing the approximation error as a function of n. For a Fourier transform, the error decreases only as O(1/n^(1/2)). For a wide variety of wavelet transforms, including both directional and non-directional variants, the error decreases as O(1/n). The extra assumption underlying the curvelet transform allows it to achieve O((log n)^3 / n^2).


Efficient numerical algorithms exist for computing the curvelet transform of discrete data. The computational cost of a curvelet transform is approximately 10-20 times that of an FFT, and it has the same dependence of O(n^2 log n) for an image of size n × n.

B. Error Back Propagation Algorithm (EBPA)


The lack of suitable training methods for multilayer perceptrons (MLPs) led to a waning of interest in neural networks in the 1960s and 1970s. This changed with the reformulation of the back-propagation training method for MLPs in the mid-1980s by Rumelhart et al. Back-propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Standard back-propagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back-propagation refers to the manner in which the gradient is computed for nonlinear multilayer networks.
As in the simple cases of delta-rule training studied before, input patterns are submitted sequentially during back-propagation training. If a pattern is submitted and its classification or association is determined to be erroneous, the synaptic weights as well as the thresholds are adjusted so that the current least-mean-square classification error is reduced. The input/output mapping, the comparison of target and actual values, and the adjustment, if needed, continue until all mapping examples from the training set are learned within an acceptable overall error. Usually, the mapping error is cumulative and computed over the full training set.
During the association or classification phase, the trained neural network itself operates in a feed forward manner.
However, the weight adjustments enforced by the learning rules propagate exactly backward from the output layer through
the so-called "hidden layers" toward the input layer.

Figure 2: Schematic Model of EBPA


The Basic EBPA Algorithm Can Be Described as Follows

First apply the inputs to the network and work out the output. Remember that this initial output could be anything, as the initial weights were random numbers.

Next work out the error for neuron B. The error is "what you want" minus "what you actually get"; in other words:

ErrorB = OutputB (1 − OutputB) (TargetB − OutputB)


The OutputB (1 − OutputB) term is necessary in the equation because of the sigmoid function; if we were only using a threshold neuron it would just be (Target − Output).

Change the weight. Let W+AB be the new (trained) weight and WAB be the initial weight:

W+AB = WAB + (ErrorB × OutputA)


Notice that it is the output of the connecting neuron (neuron A) that we use (not B). We update all the weights in the output layer in this way.

Calculate the errors for the hidden layer neurons. Unlike the output layer, we can't calculate these directly (because we don't have a target), so we back-propagate them from the output layer (hence the name of the algorithm). This is done by taking the errors from the output neurons and running them back through the weights to get the hidden layer errors. For example, if neuron A is connected to B and C, then we take the errors from B and C to generate an error for A:

ErrorA = OutputA (1 − OutputA) (ErrorB WAB + ErrorC WAC)

Again, the factor OutputA (1 − OutputA) is present because of the sigmoid squashing function.

Having obtained the errors for the hidden layer neurons, we now proceed as in stage 3 to change the hidden layer weights. By repeating this method we can train a network of any number of layers.
Here are all the calculations for a full-sized network with 2 inputs, 3 hidden layer neurons and 2 output neurons, as shown in the schematic of Figure 2. W+ represents the new, recalculated weight, whereas W (without the superscript) represents the old weight.
Calculations for the back-propagation network are as follows:

Calculate errors of the output neurons (α and β)

δα = Outα (1 − Outα) (Targetα − Outα)
δβ = Outβ (1 − Outβ) (Targetβ − Outβ)

Change output layer weights

W+Aα = WAα + η δα OutA
W+Aβ = WAβ + η δβ OutA
W+Bα = WBα + η δα OutB
W+Bβ = WBβ + η δβ OutB
W+Cα = WCα + η δα OutC
W+Cβ = WCβ + η δβ OutC

Calculate (back-propagate) hidden layer errors

δA = OutA (1 − OutA) (δα WAα + δβ WAβ)
δB = OutB (1 − OutB) (δα WBα + δβ WBβ)
δC = OutC (1 − OutC) (δα WCα + δβ WCβ)

Change hidden layer weights (the two inputs are denoted λ and Ω)

W+λA = WλA + η δA Inλ
W+ΩA = WΩA + η δA InΩ
W+λB = WλB + η δB Inλ
W+ΩB = WΩB + η δB InΩ
W+λC = WλC + η δC Inλ
W+ΩC = WΩC + η δC InΩ

The constant η (called the learning rate, and nominally equal to one) is included to speed up or slow down the learning if required.
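The following is a compact NumPy sketch of this 2-3-2 sigmoid network trained with exactly the update rules above (output deltas δ = Out(1 − Out)(Target − Out), back-propagated hidden deltas, learning rate η). The input and target values are arbitrary illustrative numbers, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_hidden = rng.uniform(-1, 1, size=(2, 3))   # inputs (lambda, Omega) -> hidden (A, B, C)
W_output = rng.uniform(-1, 1, size=(3, 2))   # hidden (A, B, C) -> outputs (alpha, beta)
eta = 1.0                                    # learning rate, nominally 1 as in the text

x = np.array([0.35, 0.90])                   # illustrative input pattern
target = np.array([0.5, 0.5])                # illustrative target pattern

for _ in range(100):
    # Forward pass.
    hidden = sigmoid(x @ W_hidden)                        # outputs of A, B, C
    out = sigmoid(hidden @ W_output)                      # outputs of alpha, beta

    # Output-layer errors: delta = Out * (1 - Out) * (Target - Out).
    delta_out = out * (1 - out) * (target - out)
    # Hidden-layer errors, back-propagated through the output weights.
    delta_hidden = hidden * (1 - hidden) * (W_output @ delta_out)

    # Weight updates: W+ = W + eta * delta * (output of the connecting neuron).
    W_output += eta * np.outer(hidden, delta_out)
    W_hidden += eta * np.outer(x, delta_hidden)
```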


C. Multi-Histogram Error Back Propagation Network (MHEBPN)
The flow chart of the proposed method is as follows:

Figure 3: Flow Chart of Proposed Method


The proposed multi-histogram equalization technique uses the wrapping discrete curvelet transform (WDCvT), an Error Back Propagation Network (EBPN) and a histogram matching technique. The corresponding steps in the flow chart of the proposed technique are as follows:

Region Identification and Separation: The curvelet transform is used to identify bright regions of the original image.

Histogram Computation: A histogram of the original image and a histogram of the image pixels after training with the Error Back Propagation Algorithm (EBPA) are computed.

Histogram Modification: The original image histogram is modified with respect to the histogram of the image pixels after training with the EBPA.


A. The Region Identification and Separation Process Uses the Following Steps

Take the curvelet transform of the original image I and obtain the curvelet coefficients Ci,j, where Ci,j represents the i-th directional sub-band at scale j.

Scale the curvelet coefficients Ci,j by scaling constants and obtain the modified curvelet coefficients C̃i,j.

Find the Euclidean distance between points and create the distance vector.

Create the weight matrix with the help of the distance vector.

Perform the training of the image coordinates with the weight matrix Wij.
The Euclidean distance between points p and q is the length of the line segment connecting them (pq). In Cartesian coordinates, if p = (p1, p2, ..., pn) and q = (q1, q2, ..., qn) are two points in Euclidean n-space, then the distance from p to q (or from q to p) is given by:

d(p, q) = d(q, p) = √((q1 − p1)² + (q2 − p2)² + ... + (qn − pn)²)
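As a small illustration of the distance-to-weight step listed above, the sketch below computes pairwise Euclidean distances between coordinate points and builds a weight matrix from them. The Gaussian mapping from distance to weight is an assumption made here for illustration, since the text does not specify the exact form of Wij.

```python
import numpy as np

def pairwise_euclidean(points):
    """Pairwise Euclidean distances between the rows of `points` (shape (n, d))."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def weight_matrix(points, sigma=1.0):
    """Weight matrix W derived from the distance matrix.

    The Gaussian kernel used here is an illustrative assumption; the text only
    states that the weights are created with the help of the distance vector.
    """
    d = pairwise_euclidean(points)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Example: weights between a few image coordinates.
coords = np.array([[0, 0], [0, 3], [4, 0], [2, 2]], dtype=float)
W = weight_matrix(coords, sigma=2.0)
```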
B. Histogram Computation
Suppose an image I has gray levels in the range [0, L−1]. The probability density function (PDF) of the normalized histogram of image I is given by

p(rk) = nk / N,   k = 0, 1, ..., L−1

and its cumulative distribution function (CDF) is given by

c(rk) = Σ(i = 0 to k) p(ri),   k = 0, 1, ..., L−1,   with 0 ≤ c(rk) ≤ 1,

where rk and nk represent the k-th gray level and the number of pixels in the image having gray level rk, respectively, and N represents the total number of pixels.
Let px(x) and py(y) represent the original and desired probability density functions, respectively. The desired histogram py(y) is the histogram of the image pixels after training with the Error Back Propagation Algorithm. The histogram of the image I with gray levels in [0, L−1] is given by

px(xk) = nk / N,   k = 0, 1, ..., L−1,

where N is the total number of pixels.
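A minimal NumPy sketch of the PDF and CDF definitions above, assuming an 8-bit grayscale image (L = 256):

```python
import numpy as np

def image_pdf_cdf(img, L=256):
    """Return p(r_k) = n_k / N and c(r_k) = sum_{i=0..k} p(r_i) for a gray image."""
    n_k, _ = np.histogram(img, bins=L, range=(0, L))   # pixel count per gray level
    pdf = n_k / img.size                               # p(r_k)
    cdf = np.cumsum(pdf)                               # c(r_k); last entry is 1.0
    return pdf, cdf

# Example on a random 8-bit image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
pdf, cdf = image_pdf_cdf(img)
assert np.isclose(cdf[-1], 1.0)
```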


The image matrix after training is given by

Iout = Iin · Wi

where Iout is the image matrix after training, Iin is the image matrix before training, and Wi is the weight matrix.
Training is performed until the final resultant matrix is obtained. After training, we generate the histogram of the transformed image matrix and then apply the histogram matching technique. Histogram matching between the original image histogram and the transformed image histogram gives an enhanced image with better AMBE and PSNR values; a sketch of this matching step is given below. Results are presented in the next section.
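The matching step can be sketched as a generic CDF-based histogram specification routine (not the authors' exact implementation), where `source` would be the original image and `reference` the image obtained after training with the weight matrix:

```python
import numpy as np

def match_histogram(source, reference, L=256):
    """Remap `source` gray levels so its histogram approximates that of `reference`."""
    src_hist, _ = np.histogram(source, bins=L, range=(0, L))
    ref_hist, _ = np.histogram(reference, bins=L, range=(0, L))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, pick the reference level with the closest CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf, side="left").clip(0, L - 1)
    return mapping[source.astype(np.int64)].astype(np.uint8)
```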

IV. EXPERIMENTAL RESULTS


The proposed method was tested on several gray-scale images and compared with the histogram equalization methods HE, MHE and IDBPHE. Figures 4-15 show a comparison between the proposed method and the other methods for two different images. The input images used in the experiments are the ones previously used in [2]-[5], [7]; they are named Einstein and Barbara. To start the analysis, we computed the PSNR and AMBE for each image.
Table 1: AMBE and PSNR Values for the Einstein and Barbara Images

              AMBE                    PSNR
Methods       Image1    Image2        Image1    Image2
HE            64.73     41.68         18.58     11.96
MHE           62.51     40.25         20.78     13.38
IDBPHE        37.90     24.40         25.59     16.48
Proposed      16.36     10.53         40.76     26.24

Figure 4: Original Image of Einstein


Figure 5: Result of HE on Image Einstein

Figure 6: Result of MHE of Image Einstein

Figure 7: Result of IDBPHE of Image Einstein


Figure 8: Result of MHEBPN on Image Einstein


To evaluate the effectiveness of the proposed method, the absolute mean brightness error (AMBE) and the peak signal-to-noise ratio (PSNR) are used.

AMBE is used to assess the degree of brightness preservation. A smaller AMBE is better: it indicates that the mean values of the original and resulting images are almost the same. AMBE is given by

AMBE(X, Y) = |MX − MY|

where MX and MY represent the mean values of the input image X and the output image Y, respectively.

PSNR is used to assess the degree of contrast enhancement. A greater PSNR is better and indicates better image quality. PSNR is given by

PSNR = 10 log10(MAXI² / MSE)

Here, MAXI is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAXI is 2^B − 1. The mean squared error (MSE) between the original image I and the enhanced image K is defined as

MSE = (1 / (M·N)) Σi Σj [I(i, j) − K(i, j)]²

where M × N is the size of the image.
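Both measures follow directly from their definitions; a small NumPy sketch, assuming 8-bit images (MAXI = 255):

```python
import numpy as np

def ambe(x, y):
    """Absolute mean brightness error |M_X - M_Y|; smaller is better."""
    return abs(float(x.mean()) - float(y.mean()))

def psnr(original, enhanced, max_i=255.0):
    """Peak signal-to-noise ratio in dB; greater is better."""
    diff = original.astype(np.float64) - enhanced.astype(np.float64)
    mse = np.mean(diff ** 2)                 # (1 / MN) * sum of squared errors
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_i ** 2 / mse)
```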
The following diagrams present the AMBE and PSNR results of HE, MHE, IDBPHE and MHEBPN, which clearly indicate that the AMBE and PSNR values of MHEBPN are better.


Figure 9: AMBE and PSNR Value of HE, MHE, IDBPHE and MHEBPN for Image Lena

Figure 10: Original Image of Barbara

Figure 11: HE Result of Image Barbara

Figure 12: Result of MHE of Image Barbara



Figure 13: Result of IDBPHE of Image Barbara

Figure 14: Result of MHEBPN of Image Barbara


The following figure shows the relationship between the AMBE and PSNR values of HE, MHE, IDBPHE and MHEBPN for the Barbara image.

Figure 15: AMBE and PSNR Values of HE, MHE, IDBPHE and MHEBPN for Image Barbara

Table 1 shows the AMBE and PSNR values for image 1 (Einstein) and image 2 (Barbara). Together with Figure 8 and Figure 14, the table clearly shows that the proposed method outperforms the other methods for gray-scale image contrast enhancement, even though it does not always give the best results for every image.

V. CONCLUSIONS

In this paper, the Multi-Histogram equalization using Error Back Propagation Network (MHEBPN) technique is proposed for image contrast enhancement and brightness preservation. The curvelet transform, the Error Back Propagation Algorithm (EBPA) and histogram matching techniques enhance the contrast of the original image while preserving its brightness. The proposed method was tested on standard images such as Barbara, Lena and Cameraman. The proposed method enhances the contrast

and improves image visualization more effectively.

REFERENCES

1. R. Gonzalez and R. Woods, Digital Image Processing, 2nd ed., Prentice Hall, Jan. 2002.

2. D. Wang and Z. Ye, "Brightness preserving histogram equalization with maximum entropy: A variational perspective," IEEE Trans. on Consumer Electronics, vol. 51, no. 4, pp. 1326-1334, Nov. 2005.

3. S.-D. Chen and A. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Trans. on Consumer Electronics, vol. 49, no. 4, pp. 1310-1319, Nov. 2003.

4. Y. Wang, Q. Chen, and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Trans. on Consumer Electronics, vol. 45, no. 1, pp. 68-75, Feb. 1999.

5. Soong-Der Chen and Abd. Rahman Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Trans. on Consumer Electronics, vol. 49, no. 4, pp. 1310-1319, Nov. 2003.

6. Soong-Der Chen and Abd. Rahman Ramli, "Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation," IEEE Trans. on Consumer Electronics, vol. 49, no. 4, pp. 1301-1309, Nov. 2003.

7. K. S. Sim, C. P. Tso and Y. Y. Tan, "Recursive sub-image histogram equalization applied to gray scale images," Pattern Recognition Letters, vol. 28, no. 10, pp. 1209-1221, 2007.

8. D. Menotti, L. Najman, J. Facon and A. A. Araujo, "Multi-histogram equalization methods for contrast enhancement and brightness preserving," IEEE Trans. on Consumer Electronics, vol. 53, no. 3, pp. 1186-1194, Aug. 2007.

9. H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Trans. on Consumer Electronics, vol. 53, no. 4, pp. 1752-1758, Nov. 2007.

10. Nyamlkhagva Sengee and Heung Kook Choi, "Brightness preserving weight clustering histogram equalization," IEEE Trans. on Consumer Electronics, vol. 54, no. 3, pp. 1329-1337, Aug. 2008.

11. Hojat Yeganeh, Ali Ziaei and Amirhossein Rezaie, "A novel approach for contrast enhancement based on histogram equalization," in Proceedings of the International Conference on Computer and Communication Engineering, pp. 256-260, 2008.

12. S. Aghagolzadeh and O. K. Ersoy, "Transform image enhancement," Optical Engineering, vol. 31, pp. 614-626, Mar. 1992.

13. Hasanul Kabir, Abdullah Al-Wadud and Oksam Chae, "Brightness preserving image contrast enhancement using weighted mixture of global and local transformation functions," The International Arab Journal of Information Technology, vol. 7, no. 4, Oct. 2010.

14. Lucas Brocki, "Kohonen self-organizing map for the traveling salesperson problem."

15. Manish Sarkar et al., "Backpropagation learning algorithms for classification with fuzzy mean square error," Pattern Recognition Letters, vol. 19, pp. 43-51, 1998.
