data type and any valid dimensions. numberOfElements is a whole number of the MATLAB double class.
display = Display text and numeric expressions.
Syntax: display(X)
display(X) prints the value of X. MATLAB implicitly calls display after any variable or expression that is not terminated by a semicolon.
To customize the display of objects, overload the disp function instead of the display function; display calls disp.
3.7 Drawbacks of Run Length Encoding
There are two potential drawbacks to this method:
The minimum useful run-length size is increased from three characters to four. This
could affect compression efficiency with some types of data.
If the unencoded data stream contains a character value equal to the flag value, it
must be compressed into a 3-byte encoded packet as a run length of one. This
prevents erroneous flag values from occurring in the compressed data stream. If
many of these flag value characters are present, poor compression will result. The
RLE algorithm must therefore use a flag value that rarely occurs in the
uncompressed data stream.
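To make the flag-value issue concrete, the following is a minimal MATLAB sketch of flag-based RLE packets; the packet layout (flag byte, run count, run value), the run-length cut-off of four and the function name are illustrative choices, not taken from any particular format:

    % Minimal sketch of flag-based RLE (illustrative only).
    % A run is emitted as a 3-byte packet: [flag, runLength, value].
    % Literal bytes are copied as-is, except a literal equal to the flag,
    % which must itself be emitted as a packet (a run length of one).
    function out = rleFlagEncode(data, flag)
        out = [];
        i = 1;
        while i <= numel(data)
            run = 1;
            while i + run <= numel(data) && data(i + run) == data(i) && run < 255
                run = run + 1;
            end
            if run >= 4 || data(i) == flag            % runs of 4+ pay off; flag bytes are forced into packets
                out = [out, flag, run, data(i)];      %#ok<AGROW>
            else
                out = [out, repmat(data(i), 1, run)]; %#ok<AGROW>
            end
            i = i + run;
        end
    end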
Fig 3.7 RLE encoding direction
3.9 Application of Run Length Encoding
Run-length encoding (RLE) is a very simple form of data compression in
which runs of data (that is, sequences in which the same data value occurs in many
consecutive data elements) are stored as a single data value and count, rather than as the
original run. This is most useful on data that contains many such runs: for example,
relatively simple graphic images such as icons, line drawings, and animations. It is not
useful with files that don't have many runs as it could potentially double the file size.
Applications of Run Length Encoding:
Run-length encoding performs lossless data compression and is well suited
to palette-based iconic images. It does not work well at all on continuous-tone
images such as photographs, although JPEG uses it quite effectively on the
coefficients that remain after transforming and quantizing image blocks.
Common formats for run-length encoded data include Truevision TGA, PackBits, PCX and ILBM.
Run-length encoding is used in fax machines (combined with other techniques
into Modified Huffman coding). It is relatively efficient because most faxed
documents are mostly white space, with occasional interruptions of black.
Data that have long sequential runs of bytes (such as lower-quality sound samples)
can be RLE compressed after applying a predictive filter such as delta encoding.
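As a small illustration of this idea, the MATLAB sketch below applies a delta (difference) filter so that slowly varying samples turn into long runs of repeated values, which an RLE stage can then compress; the sample values are made up for the example:

    % Delta encoding: store the first sample, then successive differences.
    x = [10 10 11 12 12 12 12 13];      % slowly varying samples
    d = [x(1), diff(x)];                % -> [10 0 1 1 0 0 0 1], many repeated values
    % Reconstruction is the running sum of the differences:
    xRec = cumsum(d);                   % equals x exactly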
WAVELET TRANSFORM
4.1 Introduction
The reason for choosing the wavelet transform follows from the limitations of other transforms, namely the Fourier and short-time Fourier transforms. Let us have a brief look at both of these transforms, starting with the Fourier transform.
We generally have two types of signals.
Stationary signals
Non-stationary signals
Stationary signals:
Signals whose frequency content does not change in time are called stationary signals. In this case, one does not need to know at what times the frequency components exist, since all frequency components exist at all times.
For example, the following signal:

x(t) = cos(2*pi*10*t) + cos(2*pi*25*t) + cos(2*pi*50*t) + cos(2*pi*100*t)        (Equation 1)
is a stationary signal, because it has frequencies of 10, 25, 50, and 100 Hz
at any given time instant. This signal is plotted below:
Figure 4.1 Graphical representation of a stationary signal (Equation 1)
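A minimal MATLAB sketch that generates and plots this signal (the sampling rate and duration are arbitrary choices, not values from the report) is:

    fs = 1000;                      % sampling frequency in Hz (chosen arbitrarily)
    t  = 0:1/fs:1;                  % one second of samples
    x  = cos(2*pi*10*t) + cos(2*pi*25*t) + cos(2*pi*50*t) + cos(2*pi*100*t);
    plot(t, x); xlabel('Time (s)'); ylabel('Amplitude');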
Non-stationary signals:
A signal with different frequency components at different time intervals is a non-stationary signal. The frequency content of non-stationary signals changes in time. In this case, one should know what frequency components occur at what times.
Figure 4.2 Graphical representation of a non-stationary signal
4.2. Comparison of the signals using Fourier Transform
The top plot in Figure 4.3 is the (half of the symmetric) frequency spectrum of
the signal in Figure 4.1. The bottom plot is the zoomed version of the top plot,
showing only the range of frequencies that are of interest to us. Note the four spectral
components corresponding to the frequencies 10, 25, 50 and 100 Hz.
Fourier transform of the stationary signal:
Figure 4.3 Graphical representation of Fourier Transformed stationary signal
Fourier transform of non-stationary signal:
Figure 4.4 Graphical representation of Fourier transformed non-stationary signal
Now, compare Figures 4.3 and 4.4. The similarity between these two spectra should be apparent. Both of them show four spectral components at exactly the same frequencies, i.e., at 10, 25, 50, and 100 Hz. Other than the ripples and the difference in amplitude (which can always be normalized), the two spectra are almost identical, although the corresponding time-domain signals are not even close to each other. Both signals involve the same frequency components, but the first one has these frequencies at all times while the second one has them in different intervals. So how can the spectra of two entirely different signals look so much alike? Recall that the FT gives the spectral content of the signal, but it gives no information regarding where in time those spectral components appear. Therefore, the FT is not a suitable technique for non-stationary signals.
4.3. Adoption of STFT
To overcome this we adopt the SHORT TIME FOURIER TRANSFORM (STFT), which is a modified version of the Fourier transform. In STFT, the non-stationary signal is divided into small portions, which are assumed to be stationary. This is done using a window function of a chosen (i.e., fixed) width, which is shifted along and multiplied with the signal to obtain the Short Time Fourier Transform of the signal.
The problem with STFT goes back to the Heisenberg uncertainty principle, which states that it is impossible to obtain exactly which frequencies exist at which time instants; one can only obtain the frequency bands that exist in a time interval. This gives rise to a resolution issue: there is a trade-off between time resolution and frequency resolution. For the stationarity assumption to hold, the window should be narrow, which results in poor frequency resolution, i.e., it is difficult to know the exact frequency components that exist in the signal; only the band of frequencies that exist is obtained. If the width of the window is increased, frequency resolution improves but time resolution becomes poor, i.e., it is difficult to know which frequencies occur in which time intervals. Also, choosing a wide window may violate the stationarity assumption.
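The windowing idea can be sketched in a few lines of MATLAB: a fixed-width window is slid along the signal (here the x from the earlier snippet, or any sampled signal) and an FFT is taken of each segment; the window length below is an arbitrary illustrative choice:

    % Naive STFT sketch: fixed window width, hop of one window, no overlap.
    win  = 128;                                      % fixed window width for the whole signal
    w    = 0.5 - 0.5*cos(2*pi*(0:win-1)'/(win-1));   % Hann window (no toolbox needed)
    nSeg = floor(numel(x) / win);
    stft = zeros(win, nSeg);
    for k = 1:nSeg
        seg = x((k-1)*win + 1 : k*win);              % segment assumed locally stationary
        stft(:, k) = abs(fft(seg(:) .* w));          % spectrum of one time slice
    end
    % A narrow window gives good time but poor frequency resolution, and vice versa.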
The Wavelet Transform solves the above problem to a certain extent. In contrast
to STFT, which uses a single analysis window, the wavelet transform uses short
windows at high frequencies and long windows at low frequencies. This results in multi-resolution analysis, by which the signal is analyzed with different resolutions at different frequencies, i.e., both frequency resolution and time resolution vary in the time-frequency plane without violating the Heisenberg inequality.
Therefore, the wavelet transform can be defined as a mathematical tool
that decomposes a signal into a representation that shows signal details and trends as
a function of time. This representation can be used to characterize transient events,
reduce noise, compress data, and perform many other operations. The main
advantages of wavelet methods over traditional Fourier methods are the use of
localized basis functions and the faster computation speed. Localized basis functions
are ideal for analyzing real physical situations in which a signal contains
discontinuities and sharp spikes.
PROCESS INVOLVED IN COMPRESSION
5.1. Image Compression and Decompression Process
The basic block diagram below shows the steps involved in the process of image compression and decompression.
[Block diagram: BMP FILE -> EXTRACT HEADER & PIXEL INFO. -> WAVELET FORWARD TRANSFORM (ROW FORWARD, COLUMN FORWARD) -> THRESHOLD -> RUN-LENGTH ENCODING -> DECODING -> WAVELET INVERSE TRANSFORM (INVERSE ROW, INVERSE COLUMN)]
Figure 5.1 Steps involved in the Compression Technique.
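Expressed as a high-level MATLAB sketch, the flow of Figure 5.1 is roughly the following; every helper name here (extractBmp, haarForward2D, hardThreshold, rleEncode, rleDecode, haarInverse2D) is a hypothetical placeholder used only to show the order of operations, not a function from MATLAB or any library:

    % Hypothetical end-to-end pipeline corresponding to Figure 5.1.
    tol        = 10;                        % example threshold, chosen arbitrarily
    [hdr, px]  = extractBmp('input.bmp');   % header + pixel data
    coeff      = haarForward2D(px);         % row pass, then column pass
    coeffT     = hardThreshold(coeff, tol); % zero out small coefficients
    packed     = rleEncode(coeffT(:));      % long zero runs compress well
    % --- storage / transmission ---
    coeffR     = reshape(rleDecode(packed), size(coeffT));
    pxRec      = haarInverse2D(coeffR);     % inverse column pass, then inverse row pass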
5.2 BMP File
A BMP file mainly consists of two parts:
1. HEADER: contains information about the image and is the same for all images of its type.
2. PIXEL DATA: the content of the image; it varies depending upon the type and content of the image file.
Color Image:
It consists of 54 bytes of header information. One pixel occupies 3 bytes of memory, in which Red, Green and Blue occupy 1 byte each.
Grey Scale Image:
An optical pattern consisting of discrete steps or shades of grey between black and white is known as a grey scale image. It consists of 1078 bytes of header information. In this type of image one pixel occupies one byte of memory and its value ranges from 0 to 255, where 0 represents black and 255 represents white.
Black & White Image:
It consists of 1078 bytes of header information. This type of image also occupies one byte of memory for every pixel; 0 represents black and 1 represents white.
Extraction:
This process extracts the header information and the pixel information from the input image file. The pixel information varies from image to image and is what the later stages process, whereas the header information remains the same.
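As a rough illustration of the extraction step for a 24-bit colour BMP with the 54-byte header described above, a minimal MATLAB sketch (file name chosen for illustration; row padding and the stored pixel-data offset are ignored here) is:

    fid    = fopen('image.bmp', 'rb');
    header = fread(fid, 54, 'uint8');       % 54-byte header of a colour BMP
    pixels = fread(fid, Inf, 'uint8');      % remaining bytes: B, G, R triplets per pixel
    fclose(fid);
    width  = typecast(uint8(header(19:22)), 'uint32');   % image width from the header
    height = typecast(uint8(header(23:26)), 'uint32');   % image height from the header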
5.3 Wavelet Forward Transform
It is a mathematical tool that decomposes a signal into a representation that shows signal details and trends as a function of time. The wavelet transform, or wavelet analysis, is probably the most recent solution to overcome the shortcomings of the Fourier transform. In this transform, as frequency increases the time resolution increases; likewise, as frequency decreases the frequency resolution increases. Thus a certain high-frequency component can be located more accurately in time than a low-frequency component, and a low-frequency component can be located more accurately in frequency than a high-frequency component.
Types of wavelet algorithms:
There are a wide variety of popular wavelet algorithms. Some of
them are
Haar Wavelets
Daubechies Wavelets
Mexican Hat Wavelets
Morlet Wavelets
Why Haar Wavelets?
Of these algorithms, the Daubechies, Mexican Hat and Morlet wavelets have the advantage of better resolution for smoothly changing time series, but they have the disadvantage of being more expensive to calculate than the Haar wavelets. The higher resolution provided by these wavelets is not worth the cost for non-stationary series with jagged transitions, such as financial time series.
Haar Wavelets:
The Haar wavelet algorithms are applied to time series where the number of samples is a power of two (e.g., 2, 4, 8, 16, 32, 64, ...). (A time series is simply a sample of a signal or a record of something, like temperature, water level or market data such as equity closing prices.) The Haar wavelet uses a rectangular window to sample the time series. The first pass over the time series uses a window width of two. The window width is doubled at each step until the window encompasses the entire time series.
Each pass over the time series generates a new time series and a set of coefficients. The new time series is the average of the previous time series over the sampling window. The coefficients represent the average change in the sample window.
For example, if we have a time series consisting of the samples s(i), s(i+1), s(i+2), ... then the Haar wavelet equation is

c(i) = ( s(i) - s(i+1) ) / 2

where c(i) is the wavelet coefficient.
The Haar transform preserves the average in the smoothed values (this is not
true of all wavelet transforms). The scaling function produces a smoother
version of the data set, which is half the size of the input data set. Wavelet
algorithms are recursive and the smoothed data becomes the input for the next
step of the wavelet transform. The Haar wavelet scaling function is

a(i) = ( s(i) + s(i+1) ) / 2

where a(i) is a smoothed value.
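One pass of these two equations over a series can be sketched in MATLAB as follows; the function name haarStep is ours:

    % One Haar pass: pairwise averages (smoothed values) and differences (coefficients).
    function [a, c] = haarStep(s)
        sOdd  = s(1:2:end);            % s(i)   (odd-indexed samples)
        sEven = s(2:2:end);            % s(i+1) (even-indexed samples)
        a = (sOdd + sEven) / 2;        % a(i) = ( s(i) + s(i+1) ) / 2
        c = (sOdd - sEven) / 2;        % c(i) = ( s(i) - s(i+1) ) / 2
    end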
Function of various filters in the Wavelet Transform:
High pass filter
In digital signal processing (DSP) terms, the wavelet function is a high pass
filter. A high pass filter allows the high frequency components of a signal
through while suppressing the low frequency components. For example, the
differences that are captured by the Haar wavelet function represent high
frequency change between an odd and an even value.
Low pass filter
In digital signal processing (DSP) terms, the scaling function is a low pass
filter. A low pass filter suppresses the high frequency components of a signal
and allows the low frequency components through. The Haar scaling function
calculates the average of an even and an odd element, which results in a
smoother, low-pass signal. In the wavelet literature this tree-structured recursive algorithm is referred to as a pyramidal algorithm.
Wavelets allow a time series to be viewed in multiple resolutions. Each resolution reflects a different frequency. The wavelet technique takes averages and differences of a signal, breaking the signal down into a spectrum. All the wavelet algorithms work on time series whose length is a power of two (e.g., 64, 128, 256, ...). Each step of the wavelet transform produces two sets of values: a set of averages and a set of differences (the differences are referred to as wavelet coefficients). Each step produces a set of averages and coefficients that is half the size of the input data. For example, if the time series contains 256 elements, the first step will produce 128 averages and 128 coefficients. The averages then become the input for the next step (e.g., 128 averages resulting in a new set of 64 averages and 64 coefficients). This continues until one average and one coefficient (i.e., 2^0 of each) is calculated.
The average and difference of the time series are computed across a window of values. Most wavelet algorithms calculate each new average and difference by
shifting this window over the input data. For example, if the input time series
contains 256 values, the window will be shifted by two elements, 128 times, in
calculating the averages and differences. The next step of the calculation uses
the previous set of averages, also shifting the window by two elements. This
has the effect of averaging across a four element window. Logically, the window
increases by a factor of two each time.
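Repeating that pass on the averages gives the pyramidal transform described above. A minimal MATLAB sketch of the full transform, reusing the haarStep sketch from earlier and assuming a power-of-two input length, is:

    % Full Haar transform of a power-of-two-length series, e.g. 256 samples.
    function out = haarTransform(s)
        out = s(:)';                   % work on a row-vector copy
        n = numel(out);
        while n > 1
            [a, c] = haarStep(out(1:n));   % averages and coefficients of the current block
            out(1:n) = [a, c];             % averages first, coefficients after them
            n = n / 2;                     % the next pass works on the averages only
        end
    end

For a 256-sample input the first pass yields 128 averages and 128 coefficients, the next pass 64 and 64, and so on, exactly as described above.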
The Haar wavelet transform has a number of
advantages:
It is conceptually simple.
It is fast.
It is memory efficient, since it can be calculated in place without a
temporary array.
It is exactly reversible without the edge effects that are a problem with
other wavelet transforms.
It works comparatively better for financial time series applications.
5.4 Thresholding
In certain signals, many of the wavelet coefficients are close or equal to zero.
Through a method called thresholding, these coefficients may be modified so
that the sequence of wavelet coefficients contains long strings of zeroes.
Through a type of compression known as entropy coding, these long strings may
be stored and sent electronically in much less space. There are different types of
thresholding.
Hard thresholding
Soft thresholding
Quantile thresholding
Hard Thresholding:
In hard thresholding, a tolerance is selected. Any wavelet coefficient whose absolute value falls below the tolerance is set to zero, with the goal of introducing many zeros without losing a great amount of detail. There is no straightforward way to choose the threshold, although the larger the chosen threshold, the more error is introduced into the process.
Soft Thresholding:
Another type of thresholding is soft thresholding. Once again a tolerance, h, is selected. If the absolute value of an entry is less than the tolerance, then that entry is set to zero. All other entries, d, are replaced with sign(d)·(|d| - h). Soft thresholding can be thought of as a translation of the signal toward zero by the amount h.
Quantile Thresholding:
A third type of thresholding is quantile thresholding. In this method a
percentage p of entries to be eliminated are selected. The smallest (in
absolute value) p percent of entries are set to zero.
We generally use hard thresholding, since it gives high output efficiency without much loss or alteration of the data.
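The three schemes can be written in a few lines of MATLAB; here c is a vector of wavelet coefficients, and tol and p are the chosen tolerance and percentage (all names are ours):

    % Hard thresholding: zero every coefficient whose magnitude is below tol.
    cHard = c .* (abs(c) >= tol);

    % Soft thresholding: zero small coefficients and shrink the rest toward zero by tol.
    cSoft = sign(c) .* max(abs(c) - tol, 0);

    % Quantile thresholding: zero the smallest p percent of coefficients (by magnitude).
    [~, idx] = sort(abs(c));                 % smallest magnitudes first
    k        = round(p/100 * numel(c));      % number of entries to eliminate
    cQuant   = c;
    cQuant(idx(1:k)) = 0;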
5.5 Run Length Encoding and Decoding Techniques
Run-length encoding (RLE) is a very simple form of data compression in
which runs of data (that is, sequences in which the same data value occurs in
many consecutive data elements) are stored as a single data value and count,
rather than as the original run. This is most useful on data that contains many
such runs; for example, simple graphic images such as icons and line drawings.
Example:
The string: "aaaabbcdeeeeefggggghhhiiiij"
may be replaced with
"a4b2c1d1e5f1g5h3i4j1".
Here the numbers indicate run counts; they are values, not symbols. RLE works by reducing the physical size of a repeating string of characters. This repeating string, called a run, is typically encoded into two bytes. The first byte represents the number of characters in the run and is called the run count. In practice, an encoded run may contain 1 to 128 or 256 characters; the run count usually contains the number of characters minus one (a value in the range 0 to 127 or 255). The second byte is the value of the character in the run, which is in the range 0 to 255, and is called the run value. A black-and-white image that is mostly white, such as the page of a book, will encode very well, due to the large amount of contiguous data that is all the same color. Make sure that your RLE encoder always stops at the end of each scan line of bitmap data that is being encoded. There are several benefits to doing so. Encoding only a single scan line at a time means that only a minimal buffer size is required. Encoding only a single line at a time also prevents a problem known as cross-coding.
Cross-coding is the merging of scan lines that occurs when the encoding process loses the distinction between the original scan lines. If the data of the individual scan lines are merged by the RLE algorithm, the point where one scan line stopped and another began is lost or, at least, is very hard to detect quickly.
When an encoder is encoding an image, an end-of-scan-line marker is placed in
the encoded data to inform the decoding software that the end of the scan line
has been reached. This marker is usually a unique packet, explicitly defined in
the RLE specification, which cannot be confused with any other data packets.
End-of-scan-line markers are usually only one byte in length, so they don't
adversely contribute to the size of the encoded data.
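A minimal MATLAB sketch of character-count RLE in the style of the string example above ('aaaa...' becomes 'a4...'), together with the matching decoder, is given below; the function names are ours and the decoder assumes the original text contains no digit characters:

    function out = rleEncodeStr(s)
        % Encode each run as the character followed by its count, e.g. 'aaaa' -> 'a4'.
        out = '';
        i = 1;
        while i <= numel(s)
            run = 1;
            while i + run <= numel(s) && s(i + run) == s(i)
                run = run + 1;
            end
            out = [out, s(i), num2str(run)];   %#ok<AGROW>
            i = i + run;
        end
    end

    function out = rleDecodeStr(s)
        % Decode by reading each character and the count digits that follow it.
        out = '';
        i = 1;
        while i <= numel(s)
            ch = s(i); i = i + 1;
            j = i;
            while j <= numel(s) && isstrprop(s(j), 'digit')
                j = j + 1;
            end
            out = [out, repmat(ch, 1, str2double(s(i:j-1)))];  %#ok<AGROW>
            i = j;
        end
    end

With these sketches, rleEncodeStr('aaaabbcdeeeeefggggghhhiiiij') yields 'a4b2c1d1e5f1g5h3i4j1', matching the example above.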
5.6 Inverse Wavelet Transform
The forward wavelet transform is an invertible mapping, and this process is its inverse. Let us consider c(i), c(i+1), ... as the wavelet coefficients and a(i), a(i+1), ... as the wavelet averages or smoothed values; then the original data s(i), s(i+1), ... can be obtained by the following inverse wavelet equations:

s(i) = a(i) + c(i)
s(i+1) = a(i) - c(i)

The output data obtained in this way constitute the final decompressed image.
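These two equations translate directly into MATLAB; the sketch below undoes the haarTransform sketch given in Section 5.3 (function names are ours):

    % One inverse pass: rebuild a block from averages a and coefficients c.
    function s = haarStepInverse(a, c)
        s = zeros(1, 2*numel(a));
        s(1:2:end) = a + c;            % s(i)   = a(i) + c(i)
        s(2:2:end) = a - c;            % s(i+1) = a(i) - c(i)
    end

    % Full inverse: undo the forward passes in reverse order, doubling the block each time.
    function s = haarInverse(w)
        s = w(:)';
        n = 1;
        while n < numel(s)
            s(1:2*n) = haarStepInverse(s(1:n), s(n+1:2*n));
            n = 2*n;
        end
    end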
DIGITAL SIGNAL PROCESSOR
6.1 Introduction
A signal is any variable that carries information. Examples of the types of
signals of interest are
Speech (telephony, radio, everyday communication).
Biomedical signals (EEG brain signals).
Sound and music.
Video and image.
Radar signals (range and bearing).
Digital signal processing (DSP) is concerned with the digital representation of signals and
the use of digital processors to analyze, modify, or extract information from
signals. Many signals in DSP are derived from analogue signals which have been
sampled at regular intervals and converted into digital form.
The key advantages of DSP over analogue processing are
o Guaranteed accuracy (determined by the number of bits used).
o Perfect reproducibility.
o No drift in performance due to temperature or age.
o Takes advantage of advances in semiconductor technology.
o Greater flexibility (can be reprogrammed without modifying hardware).
o Superior performance (linear phase response is possible, and filtering algorithms can be made adaptive).
Sometimes the information may already be in digital form.
There are, however, some disadvantages: speed and cost (DSP design and hardware may be expensive, especially with high-bandwidth signals), and finite word-length problems (a limited number of bits may cause degradation).
6.2 TMS320VC5416 DSP Processor
Description:
The TMS320VC5416 fixed-point, digital signal processor (DSP) (hereafter referred
to as the 5416 unless otherwise specified) is based on an advanced modified Harvard
architecture that has one program memory bus and three data memory buses. This
processor provides an arithmetic logic unit (ALU) with a high degree of parallelism,
application-specific hardware logic, on-chip memory, and additional on-chip
peripherals. The basis of the operational flexibility and speed of this DSP is a highly
specialized instruction set. Separate program and data spaces allow simultaneous
access to program instructions and data, providing a high degree of parallelism. Two
read operations and one write operation can be performed in a single cycle.
Instructions with parallel store and application-specific instructions can fully utilize
this architecture. In addition, data can be transferred between data and program
spaces. Such parallelism supports a powerful set of arithmetic, logic, and bit-
manipulation operations that can all be performed in a single machine cycle. The 5416
also includes the control mechanisms to manage interrupts, repeated operations, and
function calls.
The major advantages offered by the C5416 include:
Enhanced Harvard architecture built around one program bus, three data buses and four address buses for increased performance and versatility.
Advanced CPU design with a high degree of parallelism and application-specific hardware logic for increased performance.
A highly specialized instruction set for faster algorithms and for optimized high-level language operation.
Modular architecture design for fast development of spinoff devices.
Advanced IC processing technology for increased performance and low power consumption.
Low power consumption and increased radiation hardness because of new static design techniques.
Architecture and Specifications:
Fig.6.1 Architecture of TMS320C54X DSP Processor
It comprises the central processing unit (CPU), memory and on-chip peripherals. The 54x DSPs use an advanced modified Harvard architecture that maximizes processing power with eight buses. Separate program and data spaces allow simultaneous access to program instructions and data, providing a high degree of parallelism. For example, three reads and one write can be performed in a single cycle. Instructions with parallel store and application-specific instructions fully utilize this architecture. In addition, data can be transferred between data and program spaces. Such parallelism supports a powerful set of arithmetic, logic and bit-manipulation operations that can all be performed in a single machine cycle. Also, the 54x includes the control mechanisms to manage interrupts, repeated operations, and function calls.
CPU:
Advanced multi-bus architecture with one program bus, three data
buses and four address buses
40-bit arithmetic logic unit (ALU), including a 40-bit barrel shifter
and two independent 40-bit accumulators
17-bit x 17-bit parallel multiplier coupled to a 40-bit dedicated
adder for non-pipelined single-cycle multiply/accumulate (MAC)
operation
Exponent encoder to compute the exponent of a 40-bit accumulator
value in a single cycle
Two address generators, including eight auxiliary registers and two auxiliary register arithmetic units (ARAUs)
Memory:
192K words x 16-bit addressable memory space (64K words program, 64K words data and 64K words I/O), with extended program memory.
It has high-speed on-chip memory.
Buses:
The 54x architecture is built around eight major 16-bit buses (four
program/data buses and four address buses):
The program bus (PB) carries the instruction code and immediate
operands from program memory.
Three data buses (CB, DB and EB) interconnect various elements, such as the CPU, data address generation logic, program address generation logic, and on-chip peripherals and data memory.
The CB and DB carry the operands that are read from data memory.
The EB carries the data to be written to memory.
Four address buses (PAB, CAB, DAB and EAB) carry the addresses
needed for instruction execution.
Addressing Modes:
The 54x offers seven basic data addressing
modes:
Immediate addressing uses the instruction to encode a fixed value
Absolute addressing uses the instruction to encode a fixed address
Accumulator addressing uses accumulator A to access a location in
program memory as data
Direct addressing uses seven bits of the instruction to encode the
lower seven bits of an address. The seven bits are used with the data
page pointer (DP) or the stack pointer (SP) to determine the actual
memory address
Indirect addressing uses the auxiliary registers to access
memory
Memory-mapped register addressing uses the memory-mapped registers without modifying either the current DP value or the current SP value
Stack addressing manages adding and removing items from the system stack
During the execution of instructions using direct, indirect or memory-mapped register addressing, the data-address generation logic (DAGEN) computes the addresses of data-memory operands.
Instruction Set:
Single-instruction repeat and block repeat operations
Block memory move instructions for better program and
data management
Instructions with a 32-bit long operand
Instructions with 2- or 3-operand simultaneous reads
Arithmetic instructions with parallel store and parallel load
Conditional-store instructions
On-Chip Peripherals:
Software-programmable wait-state generator
Programmable bank switching
External bus-off control to disable the external data bus, address
bus and control signals.
Data bus with a bus holder feature
Speed:
25/20/15/12.5/10-ns execution time for a single-cycle,
fixed-point instruction
Features:
Advanced multibus architecture with three separate 16-bit data memory buses and one program memory bus.
40-bit arithmetic logic unit (ALU) including a 40-bit barrel shifter and two independent 40-bit accumulators.
Two address generators with eight auxiliary registers and two auxiliary register arithmetic units (ARAUs).
Data bus with a bus holder feature.
Application areas of DSP are considerable:
Image processing (pattern recognition, robotic vision, image
enhancement, facsimile, satellite weather map, animation).
Instrumentation and control (spectrum analysis, position and rate
control, noise reduction, data compression).
Speech and audio (speech recognition, speech synthesis, text to speech,
digital audio, equalization).
Military (secure communication, radar processing, sonar processing,
and missile guidance).
Telecommunications (echo cancellation, adaptive equalizations,
spread spectrum, video conferencing, and data
communication).
Biomedical (patient monitoring, scanners, EEG brain mappers, ECG
analysis, X-ray storage and enhancement).
CONCLUSION
Compression is the process of reducing the number of bits or bytes needed to
represent a given set of data. Compression takes advantage of redundancies or
similarities in the data file. By reducing the number of bits or bytes used to store a set of
data, we not only reduce the space required to store it, we also reduce the
bandwidth needed to transmit it.
Compression does have its trade-offs. The more efficient the compression technique,
the more complicated the algorithm will be and thus, requires more computational
resources or more time to decompress. This tends to affect the speed. Speed is not so
much of an importance to still images but weighs a lot in motion-pictures.
The same compression technique can be applied to color images by making the necessary changes in the program. Here, we achieved a suitable compression ratio for the binary image. In practice, however, such large compression ratios cannot be obtained for audio, video or other multimedia files.
As noted in Section 3.7, there are two potential drawbacks to this method:
The minimum useful run-length size is increased from three characters to four. This could
affect compression efficiency with some types of data.
If the unencoded data stream contains a character value equal to the flag value, it must be
compressed into a 3-byte encoded packet as a run length of one. This prevents erroneous
flag values from occurring in the compressed data stream. If many of these flag value
characters are present, poor compression will result. The RLE algorithm must therefore use
a flag value that rarely occurs in the uncompressed data stream.
We were able to successfully implement the RLE on our DSP. The RLE algorithm is a
simple one; however, its implementation on the DSP is not as simple. Our MATLAB
implementation was fairly simple to write and run. Writing the DSP assembly code was far
more complex. The DSP assembly code required us to be much more precise and deliberate.
Future improvements to our RLE/D algorithm that we considered were variable-length inputs and real-time outputs. Through this project we were able to gain valuable hands-on experience with the entire DSP programming procedure.
REFERENCES
Books:
Donald B. Percival and Andrew T. Walden (2000), Wavelet Methods for Time Series Analysis, Cambridge University Press.
A. Jensen and A. la Cour-Harbo (2001), Ripples in Mathematics: The Discrete Wavelet Transform, Springer.
Timothy C. Bell, Ian H. Witten and John G. Cleary (1990), Text Compression, Prentice Hall, ISBN 0139119914.
David Salomon, Data Compression: The Complete Reference, Springer.
Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing.
Khalid Sayood, Introduction to Data Compression.
M. Rabbani and P. W. Jones (1991), Digital Image Compression Techniques, SPIE Optical Engineering Press, Bellingham, Washington.
J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Processing, vol. 41, pp. 3445-3462, Dec. 1993.
R. A. DeVore, B. Jawerth and B. J. Lucier, "Image compression through wavelet transform coding," IEEE Trans. Inform. Theory, vol. 38, pp. 719-746, March 1992.
E. H. Adelson, E. Simoncelli and R. Hingorani, "Orthogonal pyramid transforms for image coding," Proc. SPIE, vol. 845, Visual Communications and Image Processing II, Cambridge, MA, pp. 50-58, Oct. 1987.
M. Antonini, M. Barlaud, P. Mathieu and I. Daubechies, "Image coding using wavelet transform," IEEE Trans. Image Processing, vol. 1, pp. 205-220, April 1992.
Journals:
DeVore, R.; Jawerth, B.; and Lucier, B. (1992), "Image compression through wavelet transform coding," IEEE Trans. Information Theory, vol. 38, pp. 719-746.
Websites:
www.wikipedia.org
http://www.dspguide.com
http://www.arturocampos.com/
http://www.ti.com/corp/docs/home.htm
TEXAS INSTRUMENTS.
http://zone.ni.com/zone/jsp/zone.jsp
NATIONAL INSTRUMENTS.
http://www.bearcave.com
http://users.rowan.edu/~polikar/homepage.html
http://en.wikipedia.org/wiki/Run-length_encodin