
JPEG-2000: A New Still Image Compression Standard

Osama K. Al-Shaykh, Iole Moccagatta, and Homer Chen


Rockwell Science Center
1049 Camino Dos Rios
Thousand Oaks, CA 91360
{osamakl, iole, homer}@risc.rockwell.com

Abstract

This paper describes the status and directions of the emerging ISO/IEC JPEG-2000 image compression coding standard. This new work item of the Joint Photographic Expert Group (JPEG), JPEG-2000, is intended to create a unified coding standard for different types of still images (bi-level, gray-level, color) with different characteristics (natural, medical, remote sensing, text). The paper presents an overview of the various aspects of JPEG-2000 and highlights the new functionalities supported by this emerging standard and its potential applications. Experimental results are shown to illustrate its performance.

1. Introduction

The Joint Photographic Expert Group (JPEG) is developing a new image compression standard, commonly referred to as JPEG-2000, that targets a wide variety of images for different applications under different settings. The ultimate goal is to have one unified standard to accomplish these tasks.

The applications targeted by JPEG-2000 have different requirements. For example, lossless compression is a primary requirement for medical imaging, while visually lossless compression is the main requirement for digital photography. Nonetheless, many applications share similar requirements such as low-complexity coding/decoding, scalable coding, and efficient coding.

This paper discusses the JPEG-2000 requirements and the technologies adopted so far to satisfy them. The paper is organized as follows. Section 2 summarizes the JPEG-2000 requirements, Sec. 3 presents the main tools in the current JPEG-2000 Verification Model, and Sec. 4 describes some results to show the performance of JPEG-2000 in coding efficiency and error resilience. Finally, Sec. 5 draws the conclusions.

2. JPEG-2000 Requirements

Of the many image compression schemes available nowadays, the most commonly used one is the JPEG baseline. However, most of these schemes perform well for one or more specific tasks but fail for others. For example, the JPEG baseline can be implemented very efficiently in hardware and software, and it performs reasonably well at high bit rates. However, it provides neither the scalability needed in applications such as the Internet and multimedia databases, nor lossless coding.

JPEG-2000 attempts to achieve a digital image compression standard that satisfies the requirements of many applications using as limited a number of tools as possible. Such applications include image databases, digital photography, faxing, scanning, printing, medical imaging, remote sensing, wireless multimedia, and world wide web applications. The following are the basic requirements of JPEG-2000 [1]:

1. Efficient coding: JPEG-2000 is required to provide significantly better coding efficiency than the JPEG baseline.

2. Spatial and quality scalability: The JPEG-2000 bitstream is to be scalable in resolution and quality.

3. Complexity: JPEG-2000 should be able to satisfy applications with limited memory and processing power.

4. Line and block applications: JPEG-2000 should be able to compress images that are acquired in raster order (scanners) or decoded in line order (printers). It should also address block-based imaging.

5. Robustness: JPEG-2000 should be robust to transmission errors.

3. JPEG-2000 Verification Model

In November 1997, the JPEG-2000 committee decided on a framework using wavelet subband coding for the new standard. The decision was based on the test results of 24 proposals submitted by various companies and universities. The test results show that wavelet coding achieves better image quality, both subjective and objective, than DCT-based coding.

To facilitate the standard development, a verification model was established. The JPEG-2000 Verification Model (VM) [2] is basically the specification of a coding system composed of a collection of tools. It serves as the common platform for testing, comparing, and accepting the various algorithms proposed for inclusion in the standard. Depending on its performance, a tool can be removed from or added to the VM. This process allows the VM to eventually evolve into the final standard.
Various tools have been adopted in the VM, including visual weighting, scalar quantization, region of interest, and error resilience. Figure 1 depicts a block diagram of the JPEG-2000 VM. The input image is first transformed using a wavelet transform; integer, fixed-point, or floating-point wavelet transforms can be used, and different wavelet decompositions are also allowed. The transform coefficients are then quantized using either a scalar or a trellis coded quantizer. Before quantization, the transform coefficients can be classified into two groups, each consisting of blocks in the wavelet domain; this allows the quantizer to adapt to the image characteristics in different regions of the image. The quantized coefficients are then coded using a bit-plane, context-based arithmetic coder.
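To make the data flow concrete, the following is a minimal sketch of such an encoding front end in Python, assuming the PyWavelets library for the transform. The choice of the 'bior4.4' filter, the single quantization step size, and the simple bit-plane extraction are illustrative assumptions rather than the actual VM algorithms, and the context arithmetic coder is omitted.

```python
# Illustrative sketch of a VM-style encoder front end (not the actual VM code):
# wavelet transform -> uniform scalar quantization -> bit-plane traversal.
# The context arithmetic coding stage is omitted.
import numpy as np
import pywt  # PyWavelets

def encode_front_end(image, levels=5, step=8.0):
    # Multi-level 2-D wavelet decomposition (biorthogonal 4.4 filter as a stand-in).
    coeffs = pywt.wavedec2(image.astype(np.float64), 'bior4.4', level=levels)
    flat, _ = pywt.coeffs_to_array(coeffs)

    # Uniform scalar quantization with a single step size; the VM allows
    # per-subband step sizes and, alternatively, trellis coded quantization.
    q = np.round(flat / step).astype(np.int32)

    # Visit the quantized magnitudes bit plane by bit plane, most significant
    # first; a real coder would feed these bits to a context arithmetic coder.
    planes = []
    max_bits = int(np.abs(q).max()).bit_length()
    for b in range(max_bits - 1, -1, -1):
        planes.append(((np.abs(q) >> b) & 1).astype(np.uint8))
    return q, planes

if __name__ == '__main__':
    img = np.random.randint(0, 256, (256, 256))  # stand-in for a test image
    q, planes = encode_front_end(img)
    print(len(planes), 'bit planes over', q.size, 'quantized coefficients')
```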

Figure 1. Encoder block diagram of the JPEG-2000 Verification Model.

4. JPEG-2000 Performance

This section presents the coding efficiency and error resilience performance of JPEG-2000. It should be noted that the JPEG-2000 standard is still evolving at the time this paper is written. The JPEG-2000 coder evaluated in this section is based on the Verification Model 2.1 [2]; newer versions of the verification model will come out in the future.

4.1 Coding efficiency

The current JPEG-2000 has different coding options. One coding option is the selection between scalar quantization (SQ) and trellis coded quantization (TCQ) [3]. Table 1 shows the peak signal-to-noise ratio (PSNR) results of the two methods for the test image Bike shown in Figure 2(a), which is a 2048 x 2560 gray-level image. Figures 2 and 3 show the Bike image compressed at 0.0625, 0.25, and 0.5 bits per pel (bpp). In this particular example, SQ performs better than TCQ. However, this is not always the case. For example, some tests have shown that TCQ performs better than SQ for synthetic aperture radar (SAR) images.

Another coding option is classification. The classification technique currently in the VM aims at separating the wavelet coefficients into two groups, such that coefficients belonging to the same group have similar statistics. This technique is an example of data source classification. The subsequent stages of the encoder block diagram (quantization, rate control, and entropy coding) are expected to benefit from the classification.

The performance of the classification, however, is not always consistent. Tests have shown that the classification can achieve a small coding gain in some cases while resulting in a loss in others.
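The exact classification rule used in the VM is not reproduced here; the sketch below merely illustrates the idea of splitting wavelet-domain blocks into two groups with similar statistics, using block energy thresholded at the median (the block size and the threshold are our assumptions).

```python
# Rough sketch of a two-group classification of wavelet-domain blocks.
# Grouping blocks by energy relative to the median is an illustrative choice;
# the actual VM classifier may use different statistics and block sizes.
import numpy as np

def classify_blocks(subband, block=8):
    rows, cols = subband.shape[0] // block, subband.shape[1] // block
    energies = np.empty((rows, cols), dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            blk = subband[i * block:(i + 1) * block, j * block:(j + 1) * block]
            energies[i, j] = np.mean(blk * blk)
    # Blocks with above-median energy form group 1, the rest group 0, so each
    # group collects blocks with roughly similar statistics.
    return (energies > np.median(energies)).astype(np.uint8)

if __name__ == '__main__':
    band = np.random.randn(64, 64) * np.linspace(0.5, 4.0, 64)  # toy subband
    labels = classify_blocks(band)
    print(np.bincount(labels.ravel()))  # block count in each group
```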

Bit Rate (bpp)    TCQ (dB)    SQ (dB)
0.0625            22.95       23.15
0.1250            25.65       25.76
1.0000            37.79       38.12

Table 1. Test image Bike: comparison of SQ and TCQ coding efficiency.

Bit Rate (bpp)    TCQ with classification (dB)    SQ with classification (dB)
0.0625            23.08                           23.12
0.1250            25.67                           25.70
0.2500            28.91                           29.01
0.5000            32.82                           32.86

Table 2. Test image Bike: comparison of SQ and TCQ coding efficiency when adding classification.
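The PSNR figures in Tables 1 and 2 follow the usual definition for 8-bit imagery; a small helper of the kind one could use to reproduce such numbers (the function name and test data are ours) is sketched below.

```python
# Peak signal-to-noise ratio for 8-bit images, as reported in Tables 1 and 2.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    err = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(err * err)
    if mse == 0:
        return float('inf')                    # identical images
    return 10.0 * np.log10(peak * peak / mse)  # PSNR in dB

if __name__ == '__main__':
    a = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
    noise = np.random.randint(-3, 4, a.shape)
    b = np.clip(a.astype(np.int32) + noise, 0, 255).astype(np.uint8)
    print(f'PSNR = {psnr(a, b):.2f} dB')
```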

4.2 Error robustness

A bit-stream packetization approach has been adopted to make JPEG-2000 robust to channel degradation. This approach does not affect the coding efficiency or the spatial/quality scalability features of the compression algorithm, and it requires only minimal overhead.

The bit-stream packetization approach organizes the data stream into packets. Each packet consists of a packet header followed by a number of encoding units. The header contains a resynchronization marker (RM) and one or more encoding unit identification numbers. The resynchronization marker has to be uniquely decodable from the bit-stream. The encoding unit ID associates the data contained in the current packet with an absolute position in the subband sequence/bit-plane domain. Encoder and decoder can perform resynchronization at the subband sequence level or at the bit-plane level.

Figure 2. Test image Bike: a) original, and b) compressed with TCQ and classification at 0.5 bpp.
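As a rough illustration of this packet layout, the sketch below builds and parses packets with a marker and a unit ID; the 16-bit marker value, the 2-byte ID field, and the byte alignment are our assumptions and do not reflect the actual VM bit-stream syntax.

```python
# Illustrative packetization sketch: each packet carries a resynchronization
# marker, an encoding-unit ID, and the entropy-coded payload.  The 16-bit
# marker value and 2-byte ID field are assumptions, not the VM syntax.
import struct

RESYNC_MARKER = 0xFF91  # assumed marker value; a real coder must guarantee
                        # that this pattern cannot occur inside the payload

def build_packet(unit_id, payload):
    # Header: marker (2 bytes) + encoding-unit ID (2 bytes), then the payload.
    return struct.pack('>HH', RESYNC_MARKER, unit_id) + payload

def split_packets(stream):
    # Locate the markers and recover (unit_id, payload) pairs; after a
    # transmission error a decoder can resume decoding at the next marker.
    marker = struct.pack('>H', RESYNC_MARKER)
    starts = []
    pos = stream.find(marker)
    while pos != -1:
        starts.append(pos)
        pos = stream.find(marker, pos + 1)
    packets = []
    for k, s in enumerate(starts):
        end = starts[k + 1] if k + 1 < len(starts) else len(stream)
        unit_id = struct.unpack('>H', stream[s + 2:s + 4])[0]
        packets.append((unit_id, stream[s + 4:end]))
    return packets

if __name__ == '__main__':
    stream = build_packet(0, b'\x12\x34') + build_packet(1, b'\x56\x78\x9a')
    print(split_packets(stream))
```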

Table 3 shows the JPEG-2000 packetization performance. The JPEG-2000 coder is used to compress the 2048 x 2560, 8-bpp gray-scale images Bike and Woman at a rate of 0.5 bpp. Uncorrupted decoding results in a PSNR of 32.86 dB for Bike and 33.29 dB for Woman. The trellis quantizer and the rate control implemented in the JPEG-2000 Verification Model have been used to generate these results. In the table, the performance is expressed in terms of the average PSNR over 50 random error patterns. Finally, Figures 4 and 5 show an example of the visual quality obtained.
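The averaging procedure could be reproduced along the lines of the sketch below, which flips bits independently at the given BER and averages the PSNR over the decoded trials; the decode function is a placeholder for a real JPEG-2000 (VM) decoder and is not implemented here.

```python
# Sketch of the error-resilience measurement: flip bits in the compressed
# stream independently with probability BER, decode, and average the PSNR
# over many random trials.  `decode` and `psnr` are supplied by the caller;
# in particular `decode` stands in for a real JPEG-2000 (VM) decoder.
import numpy as np

def corrupt(stream, ber, rng):
    bits = np.unpackbits(np.frombuffer(stream, dtype=np.uint8))
    flips = (rng.random(bits.size) < ber).astype(np.uint8)  # independent errors
    return np.packbits(bits ^ flips).tobytes()

def average_psnr(stream, original, decode, psnr, ber=1e-4, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    scores = [psnr(original, decode(corrupt(stream, ber, rng)))
              for _ in range(trials)]
    return float(np.mean(scores))
```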

Test Image    Corrupted Stream with Error Resilience (Ave. PSNR, dB)
              Sequence Res. Marker    Bit-plane Res. Marker
Bike          14.60                   17.70
Woman         19.18                   21.68

Table 3. Simulation results obtained when corrupting the bit-streams of test images coded at 0.5 bpp with random errors (BER = 10^-4).

5. Conclusions

JPEG-2000 attempts to achieve better compression efficiency than the JPEG baseline while providing scalability and error resilience. Currently, the JPEG-2000 committee is working on reducing the complexity and memory requirements of the tools accepted in the VM. This may result in the replacement or modification of some of the tools in the future. JPEG-2000 has great potential to provide many useful functionalities with a single standard.

References

[1] ISO/IEC JTC1/SC29/WG1 Ad hoc Group on JPEG-2000 Requirements. JPEG-2000 Requirements and Profiles, Version 3.0. Technical Report WG1 N943, Copenhagen, July 1998.
[2] ISO/IEC JTC1/SC29/WG1 Ad hoc Group on JPEG-2000 Verification Model. JPEG-2000 Verification Model, Version 2.0/2.1. Technical Report WG1 N988, October 1998.
[3] M. W. Marcellin and T. R. Fischer. Trellis Coded Quantization of Memoryless and Gauss-Markov Sources. IEEE Transactions on Communications, 38(1):82-93, January 1990.

Figure 3. Test image Bike: compressed with TCQ and classification at a) 0.25 bpp, and b) 0.0625 bpp.

Figure 4. Test image Woman: a) original, and b) compressed at 0.5 bpp and decoded error free (PSNR = 29.73 dB).

Figure 5. Test image Woman compressed at 0.5 bpp and corrupted with random errors (BER = 10^-4): a) Sequence RM (PSNR = 19.17 dB), and b) Bit-plane RM (PSNR = 21.86 dB).
