IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 8, AUGUST 2014
Abstract: This paper presents a power-constrained contrast enhancement algorithm for organic light-emitting diode (OLED) displays based on multiscale retinex (MSR). In general, MSR, which is the key component of the proposed algorithm, consists of a power-controllable log operation and subbandwise gain control. First, we decompose an input image into MSRs of different sub-bands, and compute a proper gain for each MSR. Second, we apply a coarse-to-fine power control mechanism, which recomputes the MSRs and gains. This step iterates until the target power saving is accurately accomplished. With video sequences, the contrast levels of adjacent images are determined consistently using temporal coherence in order to avoid flickering artifacts. Finally, we present several optimization techniques for real-time processing. Experimental results show that the proposed algorithm provides better visual quality than previous methods, and a consistent power-saving ratio without flickering artifacts, even for video sequences.

Index Terms: Power consumption, contrast enhancement, OLED, multi-scale retinex.

I. INTRODUCTION

[...] replace the TFT-LCD displays currently dominating the commercial market. Even though the OLED is mainly used for small panels in mobile devices, its mass-production technology is being developed rapidly. So large-size OLED panels may soon be adopted in a wider range of devices such as high-definition television (HDTV) and ultra HDTV [3].

Note that display modules consume most of the power in digital media devices [5]. So techniques to minimize power consumption in the display are inevitably required. Several image processing techniques for power saving in display panels have been proposed, beyond circuit-level power savings. Unfortunately, such techniques focus on reducing backlight intensity for TFT-LCDs while preserving the same level of perceived quality [6]-[10]. So they cannot be applied to power saving in emissive display devices such as OLEDs.

Lee et al. proposed a power-constrained contrast-enhancement algorithm (PCCE) for emissive displays based on histogram equalization (HE) [11], [12]. They developed a power-consumption model for OLED displays and formulated [...]
enhancement than the latter for middle intensity pixels. Note that the human visual system tends to be relatively insensitive to contrast change in the dark region [15]. Therefore, the plog of Eq. (12) has the effect of controlling the increase in power consumption while partially lowering the contrast in the dark region. From Eq. (7) and the MSRs computed by plog, i.e., {R_n}, we can derive the following output image:

    R_t = Σ_{n=1}^{N} g_n · R_n                            (13)

where t indicates the iteration number. Initially, t is set to 0. On the other hand, based on Y_P of Eq. (11), the power reduction ratio between an input image and its output image is defined as follows:

    p_t = 1 − Y_P(R_t^N) / Y_P(I)                          (14)

where R_t^N is the normalized version of R_t such that it has the same dynamic range as the input image I, i.e., [l, L]. In this paper, R_t^N can be computed with Eq. (15) as in [16]:

    f(X) = X_N = (L − l) · (X − m) / (M − m) + l           (15)

where M = max(R_t) and m = min(R_t). p_t should be controlled to be close to the target power reduction ratio P. Let Δ_t denote the difference between p_t and P as in Eq. (16):

    Δ_t = P − p_t                                          (16)

Δ_t has an identical meaning in the fine control stage (FCS) as well as the coarse control stage (CCS), and it is re-defined as Δ_t^C and Δ_t^F in the CCS and FCS, respectively. We make use of Δ_t^C as a criterion to determine the range of power reduction at every iteration of the CCS. Using Δ_t^C, we propose an effective strategy to achieve quick convergence. If Δ_t^C is within a sufficiently small range, we just terminate the iteration(s) in the CCS. Also, if Δ_t^C is too large, we increase the variation of β in Eq. (12) to achieve fast convergence. Let T1 and T2 be the thresholds for the two decisions, respectively. Section V describes how T1 and T2 are determined.

According to the above-mentioned strategy, if p_t is sufficiently close to P, that is, −T1 < Δ_t^C < T1, we go to the FCS without further iterations (see Fig. 2). Otherwise, we have to control β to approach P. Note that a decrease of β suppresses the range of intensity increase by reducing the shadow region, and it concurrently expands the range of intensity decrease by widening the highlight region. So we can achieve P by controlling β properly. The basic mechanism for controlling β is to approach the target power reduction ratio P quickly with a big change in β for a large Δ_t^C, and to get close to P with a small change in β for a small Δ_t^C. More specifically, if Δ_t^C < −T2, we increase β significantly as in Eq. (17) because such a condition indicates an excess of power reduction over P:

    β_{t+1} = β_t + (D − β_t)/2                            (17)

Simultaneously, R_t is reset to R_{t−1} because it exceeds the target power. If −T2 < Δ_t^C < −T1, we increase β relatively little as in Eq. (18) because Δ_t^C only weakly overruns P:

    β_{t+1} = β_t + (D − β_t)/4                            (18)

In this case, R_t is also reset to R_{t−1}. On the other hand, T1 < Δ_t^C < T2 indicates weak power reduction below P. So we approach P by decreasing β relatively little as in Eq. (19):

    β_{t+1} = β_t − β_t/4                                  (19)

Finally, if Δ_t^C > T2, p_t falls short of P noticeably. So we rapidly approach P by decreasing β significantly:

    β_{t+1} = β_t − β_t/2                                  (20)

As a result, β_{t+1} is adaptively computed with the above-mentioned update rule, and the entire CCS iterates with the updated β. Note that the initial β, i.e., β_0, affects the number of iterations in the CCS. If β_0 is considerably larger or smaller than the optimal value for a frame, the convergence may get slower due to the increased number of iterations. Since the optimal β_0 per frame is generally unknown, this paper empirically sets β_0 to 80 for the first frame of each video sequence. For the following frames, β_0 is updated according to a specific rule, which will be discussed in the next subsections.

For extremely high P values, the CCS often cannot achieve the target range −T1 < Δ_t^C < T1, and can be caught in an infinite loop. In order to mitigate the computational damage of such a phenomenon, we limit the maximum number of iterations to 5. Except for such extreme cases, however, the CCS works well.

2) Fine Control Stage: Since p_t still cannot get sufficiently close to P through the proposed CCS alone, we require additional power control. Let Δ_t^F denote the difference between P and the power reduction ratio at the t-th iteration of the fine control stage. Initially, Δ_0^F is set to the final Δ^C obtained from the CCS. If |Δ_t^F| < e, the FCS terminates as in Fig. 2, and the output image is obtained by normalizing the final R from the previous stage. Here, e indicates the allowable tolerance. Fig. 4(a) shows the number of iterations in the FCS according to e, and Fig. 4(b) is the graph of the power control error vs. e. For this experiment, the first frames of all the test video sequences and still images (as mentioned in Section V) were employed, and their average values are displayed in Fig. 4. We can observe that the number of iterations decreases as e increases; in other words, a larger e yields quicker convergence. On the contrary, increasing e tends to augment the power control errors. Especially, if e becomes larger than 0.001, the power control error rapidly increases. Also, if e is large, the so-called flickering artifact may appear in video sequences due to inconsistent luminance levels between adjacent images. So we set e to 0.001. For instance, if P is 20%, the FCS iterates until p_t falls within [19.9%, 20.1%].

If |Δ_t^F| ≥ e, a regular FCS begins. In general, the high-frequency region of an image is highly involved in the details of the image, but its impact on the power of the image is [...]
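The coarse control stage described above can be sketched in code. The following Python sketch is an illustration only, not the authors' implementation: the names `beta` (the control parameter of Eq. (12)), `D` (its assumed upper bound), and the toy power-model callback are assumptions, while the threshold logic follows Eqs. (16)-(20), the five-iteration cap, and the normalization of Eq. (15).

```python
def update_beta(beta, delta, T1, T2, D):
    """One CCS update, Eqs. (17)-(20).

    delta = P - p_t: positive when power reduction falls short of the
    target P, negative when it overshoots. D is the assumed upper
    bound of the control parameter beta. Returns (new_beta, done).
    """
    if -T1 < delta < T1:           # close enough: hand over to the FCS
        return beta, True
    if delta < -T2:                # strong overshoot: raise beta a lot (Eq. 17)
        beta = beta + (D - beta) / 2
    elif delta < -T1:              # weak overshoot: raise beta a little (Eq. 18)
        beta = beta + (D - beta) / 4
    elif delta < T2:               # weak undershoot: lower beta a little (Eq. 19)
        beta = beta - beta / 4
    else:                          # strong undershoot: lower beta a lot (Eq. 20)
        beta = beta - beta / 2
    return beta, False

def coarse_control(p_of_beta, P, T1=0.05, T2=0.1, beta0=80.0, D=255.0, max_iter=5):
    """Iterate the CCS. p_of_beta(beta) returns the achieved power
    reduction ratio p_t for a given beta; the cap of 5 iterations
    avoids the infinite-loop trap mentioned in the text."""
    beta = beta0
    for _ in range(max_iter):
        delta = P - p_of_beta(beta)
        beta, done = update_beta(beta, delta, T1, T2, D)
        if done:
            break
    return beta

def normalize(X, l, L):
    """Eq. (15): rescale the values X to the dynamic range [l, L]."""
    m, M = min(X), max(X)
    return [(L - l) * (x - m) / (M - m) + l for x in X]
```

For example, with the thresholds T1 = 0.05 and T2 = 0.1 used later in Section V, an overshoot of delta = −0.2 triggers the large Eq. (17) step, while delta = 0.07 triggers the small Eq. (19) step.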
B. CUDA Programming

In addition, we implemented the proposed algorithm with the CUDA toolkit 4.0 on a GPU, i.e., an nVidia GeForce GTX 460, which consists of 336 core processors. As the main central processing unit (CPU), an Intel Core2 Quad CPU Q9400 at 2.66 GHz was used.

All the operations except the computation of β and Δ were implemented through parallel processing on the GPU; β and Δ are computed on the CPU. For the CUDA implementation, 128 threads were used, and parallel processing by 32 blocks was performed. The execution time of the proposed algorithm is explained in Section V.

Fig. 6. The resulting images from caps when P is set to 10%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 40).
V. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed algorithm, we chose two images from the Kodak Lossless True Color Image Suite¹ (caps and beach) and a high dynamic range (HDR) test image, memorial. Also, we employed six common intermediate format (CIF) video sequences: container (500 frames), football (90 frames), Paris (300 frames), foreman (300 frames), bus (500 frames), and Stefan (90 frames); five 720p sequences: big ship (60 frames), crew (60 frames), jets (60 frames), night (60 frames), and raven (60 frames); and four 1080p sequences: crowd run (500 frames), park joy (60 frames), toys and calendar (60 frames), and traffic (60 frames). We processed only the luminance components in the experiments. More specifically, given a color image, we converted it to the YUV color space and then processed only the Y-component without modifying the U- and V-components.
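The Y-only processing described above can be sketched as follows. This is a minimal illustration: the paper does not state which RGB-YUV conversion was used, so the BT.601 analog coefficients below are an assumption, and the per-pixel `enhance_y` callback merely stands in for the proposed (spatial) enhancement.

```python
def rgb_to_yuv(r, g, b):
    # BT.601 analog coefficients (assumed; the paper does not specify)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Inverse of the conversion above
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

def enhance_luminance_only(pixels, enhance_y):
    """Apply an enhancement function to Y while leaving U and V untouched,
    mirroring the experimental setup (chrominance is never modified)."""
    out = []
    for r, g, b in pixels:
        y, u, v = rgb_to_yuv(r, g, b)
        out.append(yuv_to_rgb(enhance_y(y), u, v))
    return out
```

With an identity `enhance_y`, the round trip reproduces the input up to floating-point error, which is a quick sanity check for the conversion pair.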
Fig. 7. The resulting images from beach when P is set to 10%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 160).

A. Evaluation in Terms of Subjective Visual Quality

We compared the proposed algorithm with a typical linear algorithm, as the most trivial approach, and the power-constrained contrast enhancement (PCCE) algorithm proposed in [11], in terms of qualitative visual quality. Here, the linear algorithm indicates a method that transforms I to (1 − P)·I for a given P. In the proposed algorithm, α and β_0 were set to 0.1 and 80, respectively. Also, T1 and T2 were set to 0.05 and 0.1. Since the PCCE cannot automatically control its parameters, one parameter was set to 5 and the other was manually chosen for a given P.

First, Fig. 6 and Fig. 7 show the results of the several algorithms for the caps and beach images when P is 10%. Both the proposed algorithm and the PCCE achieve significant contrast enhancement compared to the linear algorithm. However, the PCCE loses the details in the yellow cap and shows too-darkened shadows, as seen in Fig. 6(c). Similarly, the PCCE loses details in the cloud in Fig. 7(c) and darkens the forest too much. This is a phenomenon that histogram-based approaches such as the PCCE often suffer from. On the other hand, the proposed algorithm can enhance details while effectively preserving the overall intensity level of the input image.

Figs. 8 and 9 compare the several algorithms for the first frames of the Paris and big ship sequences when P is 20%. Fig. 8 shows that the proposed algorithm enhances details such as the man's suit and the cup's shade better than the PCCE and the linear method. Similarly, Fig. 9 shows that the proposed algorithm represents the details of a sail on a ship more successfully than the PCCE.

Figs. 10 and 11 show the comparison results of the several algorithms when P is 30%. They result from the first frames of the traffic and crowd run sequences. The PCCE still provides high performance in enhancing contrast, but loses details in the dark region (for example, the shaded buildings and trees), as seen in Fig. 10(c). However, Fig. 10(a) and Fig. 10(d) show that the proposed algorithm preserves the overall luminance level of the input image with enhanced contrast even for a high P of 30%. Similarly, Fig. 11 shows that the proposed algorithm is superior to the other algorithms.

¹ http://r0k.us/graphics/kodak
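For reference, the linear baseline compared above is easy to reproduce. Under the simplifying assumption that panel power is proportional to the sum of luminance values (the paper's actual OLED power model is Y_P of Eq. (11)), scaling every pixel by (1 − P) meets the target reduction exactly, which is why this baseline needs no iteration but also yields no contrast enhancement:

```python
def linear_scale(image, P):
    """Trivial baseline: scale every luminance value by (1 - P)."""
    return [(1.0 - P) * x for x in image]

def power_reduction(orig, out):
    """Power reduction ratio under a power model proportional to
    total luminance (a simplifying assumption, cf. Eq. (14))."""
    return 1.0 - sum(out) / sum(orig)
```

For any image, `power_reduction(img, linear_scale(img, P))` equals P under this model, up to floating-point error.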
Fig. 8. The resulting images from Paris when P is set to 20%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 60).

Fig. 9. The resulting images from big ship when P is set to 20%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 60).

Fig. 10. The resulting images from traffic when P is set to 30%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 80).

Fig. 11. The resulting images from crowd run when P is set to 30%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 40).
B. Evaluation in Terms of Objective Visual Quality

In order to quantitatively evaluate the performance of power control algorithms, metrics that measure the degree of edge preservation were employed [9], [10]. Among them, we adopted a metric using the Laplacian operator. First, we obtain a binary edge map E(x, y) from the Laplacian image Lap(x, y) using the following equation:

    E(x, y) = 1, if Lap(x, y) > T
              0, otherwise                                 (23)
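The edge-map computation of Eq. (23) can be sketched as follows. The 3×3 four-neighbour Laplacian kernel, the zero-padded border handling, and the name `edge_map` are assumptions for illustration; the exact metric definition is given in [9] and [10].

```python
def laplacian(img):
    """4-neighbour Laplacian of a 2-D list-of-lists image (zero-padded border)."""
    h, w = len(img), len(img[0])
    get = lambda y, x: img[y][x] if 0 <= y < h and 0 <= x < w else 0.0
    return [[get(y - 1, x) + get(y + 1, x) + get(y, x - 1) + get(y, x + 1)
             - 4.0 * get(y, x) for x in range(w)] for y in range(h)]

def edge_map(img, T):
    """Eq. (23): binary map, 1 where the Laplacian response exceeds T."""
    return [[1 if lap > T else 0 for lap in row] for row in laplacian(img)]
```

The edge preservation ratio is then obtained by comparing the maps of the input and output images; a flat image yields an all-zero map, while an isolated bright pixel marks its neighbours.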
NAM et al.: POWER-CONSTRAINED CONTRAST ENHANCEMENT ALGORITHM 3315
Fig. 13. The resulting images from crew when P is set to 50%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 80).

Fig. 14. The resulting images from memorial when P is set to 50%: (a) original, (b) linear algorithm, (c) PCCE [11], (d) the proposed algorithm (t = 80).

TABLE I: Comparison of the edge preservation ratios according to T1.

TABLE II: Comparison of the number of iterations according to T1 in the CCS.

TABLE IV: Comparison in terms of the edge preserving ratio.

TABLE VII: The comparison of flickering artifacts in terms of the F value. Here, the unit is 10⁻².

TABLE VIII: The output power reduction ratios for various P values.

TABLE IX: The comparison of flickering artifacts in terms of the F value. Here, the unit is 10⁻².

Fig. 18. Fast convergence of the proposed algorithm: (a) the Δ_0^C graph according to frame number and (b) the number of iterations in the CCS according to frame number.

TABLE X: Execution time comparison according to optimization levels [ms/frame].
cause flickering artifacts. Table IX shows that the proposed algorithm provides noticeably lower F values than the PCCE. In particular, the PCCE has a conspicuously large F value for the football sequence, which indicates that the PCCE is likely to suffer from flickering artifacts for that sequence. Indeed, Fig. 17 shows the power reduction ratios for the football video sequence when P is set to 30%. For this experiment, the parameter of the PCCE is manually tuned to 0.3 according to the given P. Since the PCCE cannot keep the power reduction ratio consistent, it may produce the so-called flickering artifact: the 41st and 42nd images from the PCCE have different contrast levels, whereas those of the proposed algorithm show consistent contrast levels. Therefore, the proposed algorithm does not suffer from this visually annoying flickering artifact.

On the other hand, Fig. 18 proves that the proposed algorithm converges quickly in terms of the number of iterations as well as the control error. These graphs result from the park joy video sequence when P is set to 20%. In Fig. 18(a), we can see that Δ_0^C dramatically decreases as time goes on, owing to the proposed video sequence rate control. Also, Fig. 18(b) shows that the proposed algorithm soon requires no more iterations.

D. Computational Complexity

Table X shows the execution times of the proposed algorithm for different 1080p images. With only MRP, we can reduce the entire running time by up to about 30%. If we additionally employ the LUT-based optimization, we can save a further 10% of the execution time. Finally, if we utilize all the optimization techniques including GPU-based parallel programming, we can achieve an entire execution time of 9 ms per image.

VI. CONCLUDING REMARKS

This paper proposed an SD-MSR-based image processing algorithm for fine power control in OLED displays. We designed a power-constrained log function for effective power saving in dark regions. Using the power-constrained log function for SD-MSR and an adaptive weighting strategy proper for an input image, we proposed a coarse-to-fine power control mechanism for still images. Finally, we presented a power control scheme that achieves a constant power reduction ratio in video sequences by exploiting temporal coherence. Experimental results showed that the proposed algorithm provides better visual quality than previous works, and a consistent power-saving ratio without the flickering artifact, even for video sequences. Specifically, the proposed algorithm provides at maximum 36% and on average 13% higher edge-preserving ratios than the state-of-the-art algorithm (i.e., PCCE [11]). In addition, we demonstrated the possibility of real-time processing by achieving an entire execution time of 9 ms per 1080p image.

REFERENCES

[1] J. Jang, S. Lee, and M. Oh, "Technology development and production of flat panel displays in Korea," Proc. IEEE, vol. 90, no. 4, pp. 501-513, Apr. 2002.
[2] K. Suzuki, "Past and future technologies of information displays," in Proc. IEEE IEDM, Dec. 2005, pp. 16-21.
[3] B. Young, "OLEDs: Promises, myths, and TVs," Inf. Display, vol. 25, no. 9, pp. 14-17, Sep. 2009.
[4] H. D. Kim, H. J. Chung, B. H. Berkeley, and S. S. Kim, "Emerging technologies for the commercialization of AMOLED TVs," Inf. Display, vol. 25, no. 9, pp. 18-22, Sep. 2009.
[5] W.-C. Cheng and M. Pedram, "Power minimization in a backlit TFT-LCD display by concurrent brightness and contrast scaling," IEEE Trans. Consum. Electron., vol. 50, no. 1, pp. 25-32, Feb. 2004.
[6] P. Greef and H. G. Hulze, "Adaptive dimming and boosting backlight for LCD-TV systems," in SID Symp. Dig. Tech. Papers, May 2007, vol. 38, no. 1, pp. 1332-1335.
[7] L. Kerofsky and S. Daly, "Distinguished paper: Brightness preservation for LCD backlight reduction," in SID Symp. Dig. Tech. Papers, Jun. 2006, vol. 37, no. 1, pp. 1242-1245.
[8] C.-C. Lai and C.-C. Tsai, "Backlight power reduction and image contrast enhancement using adaptive dimming for global backlight applications," IEEE Trans. Consum. Electron., vol. 54, no. 2, pp. 669-674, May 2008.
[9] S. I. Cho, S.-J. Kang, and Y. H. Kim, "Image quality-aware backlight dimming with color and detail enhancement techniques," IEEE J. Display Technol., vol. 9, no. 2, pp. 112-121, Feb. 2013.
[10] P.-S. Tsai, C.-K. Liang, T.-H. Huang, and H. H. Chen, "Image enhancement for backlight-scaled TFT-LCD displays," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 4, pp. 574-583, Apr. 2009.
[11] C. Lee, C. Lee, Y.-Y. Lee, and C.-S. Kim, "Power-constrained contrast enhancement for emissive displays based on histogram equalization," IEEE Trans. Image Process., vol. 21, no. 1, pp. 80-93, Jan. 2012.
[12] C. Lee, C. Lee, and C.-S. Kim, "Power-constrained contrast enhancement for OLED displays based on histogram equalization," in Proc. IEEE ICIP, Sep. 2010, pp. 1689-1692.
[13] E. H. Land and J. McCann, "Lightness and retinex theory," J. Opt. Soc. Amer., vol. 61, no. 1, pp. 1-11, Jan. 1971.
[14] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Trans. Image Process., vol. 6, no. 3, pp. 451-462, Mar. 1997.
[15] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process., vol. 6, no. 7, pp. 965-976, Jul. 1997.
[16] J. H. Jang, B. Choi, S. D. Kim, and J. B. Ra, "Sub-band decomposed multiscale retinex with space varying gain," in Proc. IEEE ICIP, Oct. 2008, pp. 3168-3171.
[17] T.-C. Jen and S.-J. Wang, "Bayesian structure-preserving image contrast enhancement and its simplification," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 6, pp. 831-843, Jun. 2012.
[18] C.-H. Lee, L.-H. Chen, and W.-K. Wang, "Image contrast enhancement using classified virtual exposure image fusion," IEEE Trans. Consum. Electron., vol. 58, no. 4, pp. 1253-1261, Nov. 2012.
[19] S. C. Nercessian, K. A. Panetta, and S. S. Agaian, "Non-linear direct multi-scale image enhancement based on the luminance and contrast masking characteristics of the human visual system," IEEE Trans. Image Process., vol. 22, no. 9, pp. 3549-3561, Sep. 2013.
[20] R. Chouhan, R. K. Jha, and P. K. Biswas, "Enhancement of dark and low-contrast images using dynamic stochastic resonance," IET Image Process., vol. 7, no. 2, pp. 174-184, Sep. 2013.
[21] M. Dong, Y.-S. K. Choi, and L. Zhong, "Power modeling of graphical user interfaces on OLED displays," in Proc. ACM/IEEE DAC, Jul. 2009, pp. 652-657.
[22] J. Chalidabhongse and C.-C. J. Kuo, "Fast motion vector estimation using multiresolution-spatio-temporal correlations," IEEE Trans. Circuits Syst. Video Technol., vol. 7, no. 3, pp. 477-488, Jun. 1997.
[23] C. E. Duchon, "Lanczos filtering in one and two dimensions," J. Appl. Meteorol., vol. 18, no. 8, pp. 1016-1022, Aug. 1979.
[24] C. Christine and T. Fleiss, "A study of efficiency and accuracy in the transformation from RGB to CIELAB color space," IEEE Trans. Image Process., vol. 6, no. 7, pp. 1046-1048, Jul. 1997.
[25] S. S. Agaian, B. Silver, and K. A. Panetta, "Transform coefficient histogram-based image enhancement algorithms using contrast entropy," IEEE Trans. Image Process., vol. 16, no. 3, pp. 741-758, Mar. 2007.
[26] L. Fang and O. C. Au, "Subpixel-based image down-sampling with min-max directional error for stripe display," IEEE J. Sel. Topics Signal Process., vol. 5, no. 2, pp. 240-251, Apr. 2011.
[27] H. Yeganeh and Z. Wang, "Objective quality assessment of tone-mapped images," IEEE Trans. Image Process., vol. 22, no. 2, pp. 657-667, Feb. 2013.

Yeon-Oh Nam received the B.S. and M.S. degrees in electronic engineering from Inha University, Incheon, Korea, in 2012 and 2014, respectively. His research interests include image processing and video coding.
Dong-Yoon Choi received the B.S. degree in electronic engineering from Inha University, Incheon, Korea, in 2014, where he is currently pursuing the M.S. degree in electronic engineering. His research interests include image processing and superresolution.

Byung Cheol Song received the B.S., M.S., and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1994, 1996, and 2001, respectively. From 2001 to 2008, he was a Senior Engineer at the Digital Media Research and Development Center, Samsung Electronics Company, Ltd., Suwon, Korea. In 2008, he joined the Department of Electronic Engineering at Inha University, Incheon, Korea, where he is currently an Associate Professor. His research interests are in the general areas of video/image coding, superresolution, deblur, image stabilization, deweathering, computer vision, multimedia system design, and data mining.