
Project Report

On

Lifting Wavelets

Abstract

The differences in the computing power, bandwidth, and memory of wireless and wired devices, as well as the emergence of diverse imaging application requirements, have made resolution scalability and quality scalability essential in today's still-image compression standards. Although these properties can be attained with the present JPEG standard, they cannot be achieved in a single bit stream. The upcoming still-image compression standard has been designed to overcome this problem.

JPEG2000 is an upcoming compression standard for still images that has a feature set well tuned for diverse data dissemination. These features are possible due to the adoption of the discrete wavelet transform, intra-subband bit-plane coding, and binary arithmetic coding in the standard. In this project, we propose a system-level architecture capable of encoding and decoding the JPEG2000 core algorithm. The purpose of this work is to present an approach to extending the functionality and the application fields of JPEG2000 image coding systems by choosing among various wavelet kernels such as CDF 9/7, LT 5/3, standard 7/5, LS 9/7, etc. This project gives a general procedure for extending the JPEG2000 coding system using the 7/5 wavelet, which includes the 7/5 wavelet lifting scheme, the quantization algorithm, and their optimization design. Experimental results show that the compression performance of the extensible image coding system, for various image properties and various needs and requirements of image compression, is consistently excellent when an appropriate wavelet kernel is chosen. The image compression performance of the 7/5 wavelet kernel in the extended JPEG2000 coding system is very close to that of the CDF 9/7 wavelet kernel, and the 7/5 wavelet filter has rational coefficients, which reduces computational complexity and makes it more suitable for VLSI implementation than the CDF 9/7 wavelet kernel. The system architecture has been implemented in MATLAB and its performance evaluated for a set of images.

The JPEG2000 feature set also includes efficient compression of images at low bit rates, region-of-interest coding, lossy and lossless performance using the same coder, non-iterative rate control, etc. All these features are possible due to the adoption of the discrete wavelet transform (DWT) and intra-subband entropy coding along the bit planes with a binary arithmetic coder (BAC) in the core algorithm. The DWT, however, is computationally as well as memory intensive.

Acknowledgement

We are very grateful to our Head of the Department of Electronics and Communication Engineering, Mr. -------, ------------ College of Engineering & Technology, for having provided the opportunity for taking up this project.

We would like to express our sincere gratitude and thanks to Mr. ---------, Department of Computer Science & Engineering, ------- College of Engineering & Technology, for having allowed us to do this project.

Special thanks to Krest Technologies for permitting us to do this project work in their esteemed organization, and also for guiding us through the entire project.

We also extend our sincere thanks to our parents and friends for their moral support throughout the project work. Above all, we thank God Almighty for His manifold mercies in carrying out the project successfully.

1.0 DIGITAL IMAGE PROCESSING

1.1 BACKGROUND:

Solutions to problems in digital image processing are characterized by extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation.

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of digital image processing (DIP) refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processes on images involve tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

1.3.1 Grayscale image:

A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane. I(x, y) is the intensity of the image at the point (x, y) on the image plane. I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b]:

I: [0, a] × [0, b] → [0, ∞)

1.3.2 Color image:

A color image can be represented by three functions: R(x, y) for red, G(x, y) for green, and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates, and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
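The two operations can be sketched in a few lines of Python (an illustrative sketch only, not part of the project; the continuous "image" function below is invented for the example):

```python
def sample_and_quantize(f, a, b, M, N, L):
    """Sample f over [0, a] x [0, b] on an M x N grid (sampling) and
    map each amplitude in [0, 1] to one of L integer levels (quantization)."""
    image = []
    for i in range(M):                    # discrete spatial coordinate x
        row = []
        for j in range(N):                # discrete spatial coordinate y
            x = a * i / (M - 1)
            y = b * j / (N - 1)
            v = max(0.0, min(1.0, f(x, y)))      # assume amplitudes in [0, 1]
            row.append(min(L - 1, int(v * L)))   # quantize to L levels
        image.append(row)
    return image

# A made-up continuous "image": a smooth ramp from 0 to 1.
img = sample_and_quantize(lambda x, y: (x + y) / 2.0, 1.0, 1.0, 4, 4, 256)
```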

The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M × N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row; it does not mean that these are the actual values of physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays is different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

1.5 Image as Matrices:

The preceding discussion leads to the following representation for a digitized image function:

f(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
            f(1, 0)      f(1, 1)      ...   f(1, N-1)
            ...
            f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1, 1)    f(1, 2)    ...   f(1, N)
      f(2, 1)    f(2, 2)    ...   f(2, N)
      ...
      f(M, 1)    f(M, 2)    ...   f(M, N) ]

where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin.

The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1×N matrix is called a row vector, whereas an M×1 matrix is called a column vector. A 1×1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. Variables must begin with a letter and contain only letters, numerals, and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.

Images are read into the MATLAB environment using function imread, whose syntax is

imread('filename')

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB for suppressing output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.

When, as in the preceding command line, no path is included in filename, imread reads the file from the current directory and, if that fails, tries to find the file in the MATLAB search path. The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename. For example,

>> f = imread('D:\my_images\chestxray.jpg');

reads the image from a folder called my_images on the D: drive, whereas

>> f = imread('.\my_images\chestxray.jpg');

reads the image from the my_images subdirectory of the current working directory. The Current Directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.

Function size is particularly useful in programming when used in the following form to determine automatically the size of an image:

>> [M, N] = size(f);

This syntax returns the number of rows (M) and columns (N) in the image.

The whos function displays additional information about an array. For instance, the statement

>> whos f

gives the name, size, memory footprint, and class of f. The uint8 entry shown refers to one of several MATLAB data classes. A semicolon at the end of a whos line has no effect, so normally one is not used.

Images are displayed on the MATLAB desktop using function imshow, which has the basic syntax

imshow(f, g)

where f is an image array and g is the number of intensity levels used to display it. If g is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low, and as white all values greater than or equal to high. The values in between are displayed as intermediate intensity values using the default number of levels. Finally, the syntax

imshow(f, [ ])

sets variable low to the minimum value of array f and high to its maximum value. This form of imshow is useful for displaying images that have a low dynamic range or that have positive and negative values.
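The scaling that imshow(f, [low high]) and imshow(f, [ ]) perform can be sketched in Python (an illustration of the idea only, not MATLAB's actual implementation):

```python
def scale_for_display(f, low=None, high=None):
    """Clip values to [low, high] and rescale linearly to [0, 255].
    If low/high are omitted, use the array min/max (the imshow(f, []) case)."""
    flat = [v for row in f for v in row]
    low = min(flat) if low is None else low
    high = max(flat) if high is None else high
    span = high - low or 1                 # avoid division by zero
    return [[round(255 * (min(max(v, low), high) - low) / span)
             for v in row] for row in f]

g = scale_for_display([[10, 20], [30, 40]])   # low=10, high=40
```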

Function pixval is used frequently to display the intensity values of individual pixels interactively. This function displays a cursor overlaid on an image. As the cursor is moved over the image with the mouse, the coordinates of the cursor position and the corresponding intensity values are shown on a display that appears below the figure window. When working with color images, the coordinates as well as the red, green, and blue components are displayed. If the left button on the mouse is clicked and then held pressed, pixval displays the Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval, which shows the cursor on the last image displayed. Clicking the X button on the cursor window turns it off.

The following statements read from disk an image called rose_512.tif, extract basic information about the image, and display it using imshow:

>> f = imread('rose_512.tif');
>> whos f
>> imshow(f)

A semicolon at the end of an imshow line has no effect, so normally one is not used.

If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and output a second image, we use function figure as follows:

>> figure, imshow(g)

Note that more than one command can be written on a line, as long as the different commands are properly delimited by commas or semicolons. As mentioned earlier, a semicolon is used whenever it is desired to suppress screen output from a command line.

Suppose that we have just read an image h and find that displaying it using imshow produces a dim, low-contrast result. It is clear that this image has a low dynamic range, which can be remedied for display purposes by using the statement

>> imshow(h, [ ])

Images are written to disk using function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format extension. Alternatively, the desired format can be specified explicitly with a third input argument. For example, the following command writes f to a TIFF file named patient10_run1:

>> imwrite(f, 'patient10_run1', 'tif')

or, alternatively,

>> imwrite(f, 'patient10_run1.tif')

If filename contains no path information, then imwrite saves the file in the current working directory.

The imwrite function can have other parameters, depending on the file format selected. Most of the work in the following deals either with JPEG or TIFF images, so we focus attention here on these two formats. For JPEG images, imwrite has the syntax

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression). For example:

>> imwrite(f, 'bubbles25.jpg', 'quality', 25)

The image for q = 15 has false contouring that is barely visible, but this effect becomes quite pronounced for q = 5 and q = 0. Thus, an acceptable solution with some margin for error is to compress the images with q = 25. In order to get an idea of the compression achieved and to obtain other image file details, we can use function imfinfo, which has the syntax

imfinfo filename

Here filename is the complete file name of the image stored on disk. For example,

>> imfinfo bubbles25.jpg

outputs the following information (note that some fields contain no information in this case):

Filename: 'bubbles25.jpg'
FileSize: 13849
Format: 'jpg'
FormatVersion: ''
Width: 714
Height: 682
BitDepth: 8
FormatSignature: ''
Comment: {}

where FileSize is in bytes. The number of bytes in the original image is computed simply by multiplying Width by Height by BitDepth and dividing the result by 8. The result is 486948. Dividing this by the file size gives the compression ratio: 486948/13849 = 35.16. This compression ratio was achieved while maintaining image quality consistent with the requirements of the application. In addition to the obvious advantages in storage space, this reduction allows the transmission of approximately 35 times the amount of uncompressed data per unit time.
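The arithmetic above is easy to check with a short script (a Python sketch; the figures are the ones reported by imfinfo for bubbles25.jpg):

```python
def compression_ratio(width, height, bit_depth, file_size_bytes):
    """Uncompressed bytes = width * height * bit_depth / 8."""
    uncompressed_bytes = width * height * bit_depth // 8
    return uncompressed_bytes / file_size_bytes

ratio = compression_ratio(714, 682, 8, 13849)   # about 35.16
```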

The information fields displayed by imfinfo can also be captured in a structure variable that can be used for subsequent computations. Using the preceding image as an example, and assigning the name K to the structure variable, we use the syntax

>> K = imfinfo('bubbles25.jpg');

The information generated by imfinfo is appended to the structure variable by means of fields, separated from K by a dot. For example, the image height and width are now stored in structure fields K.Height and K.Width, which can be used, for instance, to compute the compression ratio for bubbles25.jpg.

Note that imfinfo was used in two different ways. The first was to type imfinfo bubbles25.jpg at the prompt, which resulted in the information being displayed on the screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the information generated by imfinfo being stored in K. These two different ways of calling imfinfo are an example of command-function duality, an important concept that is explained in more detail in the MATLAB online documentation.

A more general imwrite syntax, applicable only to tif images, has the form

imwrite(g, 'filename.tif', 'compression', 'parameter', 'resolution', [colres rowres])

where 'parameter' can have one of the following principal values: 'none' indicates no compression; 'packbits' indicates packbits compression (the default for nonbinary images); and 'ccitt' indicates ccitt compression (the default for binary images). The 1×2 array [colres rowres] contains two integers that give the column resolution and row resolution in dots per unit. For example, if the image dimensions are in inches, colres is the number of dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing [res res].

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])

Here the values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio 2.25/1.5, which gives 300 dpi. Rather than do the computation manually, we could write

>> res = round(200*2.25/1.5);
>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)

Function round rounds its argument to the nearest integer. It is important to note that the number of pixels was not changed by these commands; only the scale of the image changed. The original 450×450 image at 200 dpi is of size 2.25×2.25 inches. The new 300-dpi image is identical, except that its 450×450 pixels are distributed over a 1.5×1.5-inch area. Processes such as this are useful for controlling the size of an image in a printed document without sacrificing resolution.
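As a quick check of the dpi arithmetic, here is the same round computation in Python (a one-line sketch, equivalent to the MATLAB command above):

```python
# 200 dpi scaled by the ratio of old to new print size, 2.25/1.5.
res = round(200 * 2.25 / 1.5)   # 300
```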

Often it is necessary to export images to disk the way they appear on the MATLAB desktop. This is especially true with plots. The contents of a figure window can be exported to disk in two ways. The first is to use the File pull-down menu in the figure window and then choose Export; with this option the user can select a location, filename, and format. More control over export parameters is obtained by using the print command:

print -fno -dfileformat -rresno filename

where no refers to the number of the figure window of interest, fileformat refers to one of the file formats in the table above, resno is the resolution in dpi, and filename is the name we wish to assign the file. If we simply type print at the prompt, MATLAB prints (to the default printer) the contents of the last figure window displayed. It is possible also to specify other options with print, such as a specific printing device.

Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table above lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries in the table are referred to as numeric data classes. The ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.

All numeric computations in MATLAB are done using double quantities, so this is also a frequently encountered data class in image processing applications. Class uint8 also is encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representations found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT functions, however, support all the data classes listed in the table. Data class double requires 8 bytes to represent a number; uint8 and int8 require one byte each; uint16 and int16 require 2 bytes each (int16 covers [-32768, 32767]); and uint32, int32 (range [-2147483648, 2147483647]), and single require 4 bytes each. The char data class holds characters in Unicode representation; a character string is merely a 1×n array of characters. A logical array contains only the values 0 and 1; logical arrays are created using function logical or by using relational operators.
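The byte counts quoted above can be verified with Python's struct module, whose standard-size format codes correspond to the same machine types (a side check, not part of the report):

```python
import struct

# Standard sizes (the '<' prefix): int8=1, int16=2, int32=4, single=4, double=8.
sizes = {
    'int8':   struct.calcsize('<b'),
    'int16':  struct.calcsize('<h'),
    'int32':  struct.calcsize('<i'),
    'single': struct.calcsize('<f'),
    'double': struct.calcsize('<d'),
}

# int16 really covers [-32768, 32767]: packing the endpoints succeeds.
struct.pack('<h', -32768)
struct.pack('<h', 32767)
```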

The toolbox supports four types of images:

1. Intensity images
2. Binary images
3. Indexed images
4. RGB images

Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types. Indexed and RGB color images are discussed below.

1.10.1 Intensity Images:

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers. Values of scaled, double intensity images are in the range [0, 1] by convention.

1.10.2 Binary Images:

Binary images have a very specific meaning in MATLAB: a binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of data class, say, uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement

B = logical(A)

If A contains elements other than 0s and 1s, use of the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s.

To test whether an array is logical, we use the function

islogical(C)

If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted to numeric arrays using the data class conversion functions.
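In Python terms, logical() and islogical() can be sketched like this (an analogy only; MATLAB's logical class corresponds loosely to Python's bool):

```python
def to_logical(a):
    """Convert a numeric 2-D array: nonzero -> True (1), zero -> False (0)."""
    return [[bool(v) for v in row] for row in a]

def is_logical(a):
    """True only if every element is already a bool (a 'logical' array)."""
    return all(isinstance(v, bool) for row in a for v in row)

B = to_logical([[0, 3], [1, 0]])
```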

1.10.3 Indexed Images:

An indexed image has two components: a data matrix of integers, x, and a color map matrix, map. Matrix map is an m×3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green, and blue components of a single color. An indexed image uses "direct mapping" of pixel intensity values to color map values: the color of each pixel is determined by using the corresponding value of the integer matrix x as a pointer into map. If x is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second, and so on.
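The direct-mapping rule can be sketched in Python with a toy three-color map (illustrative only; the colormap values are invented):

```python
# Toy m x 3 colormap: each row is an (R, G, B) triple in [0, 1].
cmap = [(0.0, 0.0, 0.0),   # row 1: black
        (1.0, 0.0, 0.0),   # row 2: red
        (1.0, 1.0, 1.0)]   # row 3: white

def index_to_rgb(x, colormap, zero_based=False):
    """Map each index to a colormap row: double-class indices are 1-based
    (values <= 1 point to the first row), uint8/uint16 indices are 0-based."""
    off = 0 if zero_based else 1
    return [[colormap[max(v - off, 0)] for v in row] for row in x]

rgb = index_to_rgb([[1, 2], [3, 1]], cmap)   # double-style (1-based) indices
```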

1.10.4 RGB Images:

An RGB color image is an M×N×3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three grayscale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green, and blue component images. The data class of the component images determines their range of values. If an RGB image is of class double, the range of values is [0, 1]; similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.

Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
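Checking the color-count formula (a quick Python sketch):

```python
def rgb_colors(bits_per_component):
    """Number of representable colors: (2^b)^3 = 2^(3b)."""
    return (2 ** bits_per_component) ** 3

n = rgb_colors(8)   # 16,777,216 for 24-bit RGB
```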

2. COMPRESSION AND DECOMPRESSION TECHNIQUES

2.1 Introduction:

One of the important aspects of image storage is its efficient compression. To make this fact clear, let's look at an example.

An image of 1024 pixels × 1024 pixels × 24 bits, without compression, would require 3 MB of storage and about 7 minutes for transmission over a high-speed 64 kbps ISDN line. If the image is compressed at a 10:1 compression ratio, the storage requirement is reduced to 300 KB and the transmission time drops to under 40 seconds. Seven 1 MB images can be compressed and transferred to a floppy disk in less time than it takes to send one of the original files, uncompressed, over an AppleTalk network.
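The storage and transmission figures follow from simple arithmetic, sketched here in Python (the link rate and image size are the ones quoted above):

```python
def transmission_seconds(size_bytes, link_bits_per_second):
    """Time to send size_bytes over a serial link of the given bit rate."""
    return size_bytes * 8 / link_bits_per_second

raw_bytes = 1024 * 1024 * 3        # 1024 x 1024 pixels at 24 bits = 3 bytes each
raw = transmission_seconds(raw_bytes, 64_000)              # roughly 6.5 minutes
compressed = transmission_seconds(raw_bytes // 10, 64_000)  # 10:1 ratio
```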

In a distributed environment, large image files remain a major bottleneck within systems. Compression is an important component of the solutions available for creating file sizes of manageable and transmittable dimensions. Increasing the bandwidth is another method, but the cost sometimes makes this a less attractive solution.

Another consideration is the compression/decompression technique to be employed. Compression solutions today are more portable due to the change from proprietary high-end solutions to accepted and implemented international standards. JPEG is evolving as the industry-standard technique for the compression of continuous-tone images.

Two categories of data compression algorithms can be distinguished: lossless and lossy. Lossy techniques cause image-quality degradation in each compression/decompression step. Careful consideration of human visual perception ensures that the degradation is often unrecognizable, though this depends on the selected compression ratio. In general, lossy techniques provide far greater compression ratios than lossless techniques.

Here we'll discuss the roles of the following data compression techniques.

Lossless coding techniques:

1. Run length encoding
2. Huffman encoding
3. Entropy coding (Lempel/Ziv)
4. Area coding

Lossy coding techniques:

2. Vector quantization
5. Fractal coding (texture synthesis, iterated function systems [IFS], recursive IFS [RIFS])

Lossless coding guarantees that the decompressed image is absolutely identical to the image before compression. This is an important requirement for some application domains, e.g. medical imaging, where not only high quality is in demand but unaltered archiving is a legal requirement. Lossless techniques can also be used for the compression of other data types where loss of information is not acceptable, e.g. text documents and program executables.

Some compression methods can be made more effective by adding a 1D or 2D delta coding to the process of compression. These deltas make more effective use of run length encoding, have (statistically) higher maxima in code tables (leading to better results in Huffman and general entropy codings), and build greater equal-value areas usable for area coding.

Some of these methods can easily be modified to be lossy: a lossy element fits perfectly into a 1D/2D run length search, and logarithmic quantization may be inserted to provide better or more effective results.

Run length encoding is a very simple method for compression of sequential data. It takes advantage of the fact that, in many data streams, consecutive single tokens are often identical. Run length encoding checks the stream for this fact and inserts a special token each time a chain of more than two equal input tokens is found. This special input advises the decoder to insert the following token n times into its output stream. The effectiveness of run length encoding is a function of the number of equal tokens in a row in relation to the total number of input tokens. This relation is very high in undithered two-tone images of the type used for facsimile. Obviously, effectiveness degrades when the input does not contain many equal tokens: with a rising density of information, the likelihood of two successive tokens being the same sinks significantly, as there is always some noise distortion in the input.

Run length coding is easily implemented, either in software or in hardware. It is fast and very easily verifiable, but its compression ability is very limited.
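A minimal run-length codec sketches the scheme described above (Python, illustrative only; real facsimile schemes emit an escape token for long runs rather than explicit (count, token) pairs):

```python
def rle_encode(stream):
    """Collapse runs of equal tokens into (count, token) pairs."""
    runs = []
    for token in stream:
        if runs and runs[-1][1] == token:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, token])   # start a new run
    return [(count, token) for count, token in runs]

def rle_decode(runs):
    """Re-insert each token 'count' times into the output stream."""
    out = []
    for count, token in runs:
        out.extend([token] * count)
    return out

data = list("aaaabbbcca")
codes = rle_encode(data)   # [(4, 'a'), (3, 'b'), (2, 'c'), (1, 'a')]
```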

This algorithm, developed by D. A. Huffman, is based on the fact that in an input stream certain tokens occur more often than others. Based on this knowledge, the algorithm builds up a weighted binary tree of the tokens according to their rate of occurrence. Each element of this tree is assigned a new code word, where the length of the code word is determined by its position in the tree. The most frequent token, which sits nearest the root of the tree, is assigned the shortest code; each less common element is assigned a longer code word. The least frequent element may be assigned a code word twice as long as the input token.

The compression ratio achieved by Huffman encoding on uncorrelated data is something like 1:2. On slightly correlated data, as in images, the compression rate may become much higher, the absolute maximum being defined by the size of a single input token and the size of the shortest possible output token (max. compression = token size [bits] / 2 [bits]). While standard palettized images with a limit of 256 colors may be compressed by 1:4 if they use only one color, more typical images give results in the range of 1:1.2 to 1:2.5.
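The tree construction described above can be sketched with a priority queue (a compact Python illustration, not an optimized coder):

```python
import heapq
from collections import Counter

def huffman_codes(stream):
    """Merge tokens by ascending frequency; the most frequent token ends up
    nearest the root and therefore gets the shortest code word."""
    heap = [[freq, i, {tok: ''}] for i, (tok, freq)
            in enumerate(Counter(stream).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {tok: '0' for tok in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, i, c2 = heapq.heappop(heap)
        merged = {t: '0' + c for t, c in c1.items()}
        merged.update({t: '1' + c for t, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, i, merged])
    return heap[0][2]

codes = huffman_codes("aaaabbc")
```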

Nowadays, there is a wide range of so called modified Lempel/Ziv coding. These algorithms all

have a common way of working. The coder and the decoder both build up an equivalent

dictionary of met symbols, each of which represents a whole sequence of input tokens. If a

sequence is repeated after a symbol was found for it, then only the symbol becomes part of the

coded data and the sequence of tokens referenced by the symbol becomes part of the decoded

data later. As the dictionary is built up from the data itself, it is not necessary to put it into the

coded data, as it is with the tables in a Huffman coder.
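A minimal sketch of the dictionary idea, here in the LZW variant (one of the modified Lempel/Ziv schemes; the 256 single-byte entries seed the dictionary identically on both the coder and decoder side, so it never has to be transmitted):

```python
def lzw_encode(data):
    """LZW sketch: grow a dictionary of byte sequences, emit their indices."""
    dictionary = {bytes([i]): i for i in range(256)}
    current = b""
    out = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                      # keep extending the match
        else:
            out.append(dictionary[current])          # emit the known prefix
            dictionary[candidate] = len(dictionary)  # new meta-symbol
            current = bytes([byte])
    if current:
        out.append(dictionary[current])
    return out
```

Encoding `b"ababab"` yields `[97, 98, 256, 256]`: after the pair "ab" is seen once, the repeated sequence is replaced by the single meta-symbol 256.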

This method is very efficient even on data that appears virtually random. The average

compression on text and program data is about 1:2; the ratio on

image data comes up to 1:8 on the average GIF image. Here again, a high level of input noise

degrades the efficiency significantly.

Entropy coders are a little tricky to implement, as there are usually a few tables, all

growing while the algorithm runs.

Area coding is an enhanced form of run length coding, reflecting the two dimensional

character of images. This is a significant advance over the other lossless methods. For coding an

image it does not make too much sense to interpret it as a sequential stream, as it is in fact an

array of sequences, building up a two dimensional object. Therefore, as the two dimensions are

independent and of same importance, it is obvious that a coding scheme aware of this has some

advantages. The algorithms for area coding try to find rectangular regions with the same

characteristics. These regions are coded in a descriptive form as an Element with two points and

a certain structure. The whole input image has to be described in this form to allow lossless

decoding afterwards.

The possible performance of this coding method is limited mostly by the very high

complexity of the task of finding the largest areas with the same characteristics. Practical implementations use recursive algorithms that reduce the whole area to equal-sized sub-rectangles until each rectangle fulfills the criterion of having the same characteristic for every pixel.
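The recursive subdivision can be sketched as a quadtree-style routine (illustrative only; the emitted (x, y, w, h, value) tuples are one possible form of "an element with two points and a certain structure"):

```python
import numpy as np

def area_code(img, x, y, w, h, out):
    """Recursively split a region into quarters until it is uniform,
    then emit (x, y, w, h, value) as its lossless description."""
    block = img[y:y + h, x:x + w]
    if (block == block[0, 0]).all():
        out.append((x, y, w, h, int(block[0, 0])))
        return
    hw, hh = max(w // 2, 1), max(h // 2, 1)
    area_code(img, x, y, hw, hh, out)                       # top-left
    if w > hw:
        area_code(img, x + hw, y, w - hw, hh, out)          # top-right
    if h > hh:
        area_code(img, x, y + hh, hw, h - hh, out)          # bottom-left
    if w > hw and h > hh:
        area_code(img, x + hw, y + hh, w - hw, h - hh, out) # bottom-right
```

A mostly-uniform image collapses to a handful of rectangles; painting the rectangles back in reproduces the image exactly, which is the lossless-decoding requirement stated above.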

This type of coding can be highly effective, but it bears the problems of a nonlinear method, which cannot easily be implemented in hardware. Therefore, the performance in terms of compression

time is not competitive, although the compression ratio is.

In most applications we have no need for exact restoration of the stored image. This fact can be exploited to make storage more effective, and this leads us to lossy compression methods.

Lossy image coding techniques normally have three components:

Image modeling, which defines such things as the transformation to be applied to the image to reduce the amount of information;

Quantization, which reduces the precision of the values produced by the modeling stage;

Encoding, where a code is generated by associating appropriate code words to the raw data produced by the quantization.

Each of these operations is in some part responsible for the compression. Image modeling is

aimed at the exploitation of statistical characteristics of the image (i.e. high correlation,

redundancy). Typical examples are transform coding methods, in which the data is represented in

a different domain (for example, frequency in the case of the Fourier Transform [FT], the

Discrete Cosine Transform [DCT], the Karhunen-Loève Transform [KLT], and so on), where a

reduced number of coefficients contains most of the original information. In many cases this first

phase does not result in any loss of information.

The aim of quantization is to reduce the amount of data used to represent the information

within the new domain. Quantization is in most cases not a reversible operation: therefore, it

belongs to the so called 'lossy' methods.

Encoding is usually error free. It optimizes the representation of the information (helping

sometimes to further reduce the bit rate), and may introduce some error detection codes.

In the following sections, a review of the most important coding schemes for lossy

compression is provided. Some methods are described in their canonical form (transform coding,

region based approximations, fractal coding, wavelets, hybrid methods) and some variations and

improvements presented in the scientific literature are reported and discussed.

A general transform coding scheme involves subdividing an NxN image into smaller nxn

blocks and performing a unitary transform on each sub-image. A unitary transform is a reversible linear transform whose kernel describes a set of complete, orthonormal discrete basis functions. The goal of the transform is to decorrelate the original signal, and this decorrelation generally results in the signal energy being redistributed among only a small set of transform coefficients. In this

way, many coefficients may be discarded after quantization and prior to encoding. Also, visually

lossless compression can often be achieved by incorporating the HVS contrast sensitivity

function in the quantization of the coefficients.

Transform coding can be generalized into four stages:

1. Image subdivision

2. Image transformation

3. Coefficient quantization

4. Huffman encoding.
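Stages 2 and 3 can be sketched for a single n x n block (a sketch with our own function names: the orthonormal DCT-II matrix is built directly from its definition, and threshold coding serves as the quantizer):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix: C @ C.T equals the identity."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def encode_block(block, threshold):
    """Stage 2: separable 2-D transform; stage 3: threshold quantization."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    coeffs[np.abs(coeffs) < threshold] = 0   # discard small coefficients
    return coeffs

def decode_block(coeffs):
    """Inverse transform of a (possibly quantized) coefficient block."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

With a zero threshold the transform is perfectly reversible, which illustrates that the modeling step alone loses no information; only the thresholding introduces loss.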

For a transform coding scheme, logical modeling is done in two steps: a segmentation one,

in which the image is subdivided into two-dimensional blocks (possibly of different sizes) and a

transformation step, in which the chosen transform (e.g. KLT, DCT, and Hadamard) is applied.

Quantization can be performed in several ways. Most classical approaches use 'zonal

coding', consisting of the scalar quantization of the coefficients belonging to a predefined area (with a fixed bit allocation), and 'threshold coding', consisting of the choice of the coefficients of

each block characterized by an absolute value exceeding a predefined threshold. Another

possibility, that leads to higher compression factors, is to apply a vector quantization scheme to

the transformed coefficients.

The same type of encoding is used for each coding method. In most cases a classical

Huffman code can be used successfully.

The JPEG and MPEG standards are examples of standards based on transform coding.

2.4.2 Segmentation and approximation methods:

With segmentation and approximation coding methods, the image is modeled as a mosaic

of regions, each one characterized by a sufficient degree of uniformity of its pixels with respect

to a certain feature (e.g. grey level, texture); each region then has some parameters related to the

characterizing feature associated with it.

The operations of finding a suitable segmentation and an optimum set of approximating

parameters are highly correlated, since the segmentation algorithm must take into account the

error produced by the region reconstruction (in order to limit this value within determined

bounds). These two operations constitute the logical modeling for this class of coding schemes;

quantization and encoding are strongly dependent on the statistical characteristics of the

parameters of this approximation (and therefore, on the approximation itself).

Classical examples are polynomial approximation and texture approximation. For

polynomial approximation regions are reconstructed by means of polynomial functions in (x, y);

the task of the encoder is to find the optimum coefficients. In texture approximation, regions are

filled by synthesizing a parameterized texture based on some model (e.g. fractals, statistical

methods, Markov Random Fields [MRF]). It must be pointed out that, while in polynomial

approximations the problem of finding optimum coefficients is quite simple (it is possible to use

least squares approximation or similar exact formulations), for texture based techniques this

problem can be very complex.

2.5 Efficiency and quality of different Lossy compression techniques:

The performances of lossy picture coding algorithms are usually evaluated on the basis of

two parameters:

a. the compression factor (or analogously the bit rate) and

b. the distortion produced on the reconstruction.

The first is an objective parameter, while the second strongly depends on the usage of the coded

image. Nevertheless, a rough evaluation of the performances of a method can be made by

considering an objective measure of the error, like MSE or SNR. For the methods

described in the previous pages, average compression ratios and SNR values obtainable are

presented in the following table:


In recent years, some standardization processes based on transform coding, such as JPEG, have been started. The performance of such a standard is quite good if the compression factor is kept under a given threshold (about 20 times). Over this threshold, artifacts become visible in the reconstruction, and a tile (blocking) effect seriously degrades the decoded images, due to quantization of the DCT coefficients.

On the other hand, there are two advantages: first, it is a standard, and second, dedicated

hardware implementations exist. For applications which require higher compression factors with

some minor loss of accuracy when compared with JPEG, different techniques should be selected

such as wavelets coding or spline interpolation, followed by an efficient entropy encoder such as

Huffman, arithmetic coding or vector quantization.

Some of these coding schemes are suitable for progressive reconstruction (Pyramidal

Wavelet Coding, Two Source Decomposition, etc). This property can be exploited by

applications such as coding of images in a database, for previewing purposes or for transmission

on a limited bandwidth channel.

2.6 Lossy compression vs. lossless compression:

The advantage of lossy methods over lossless methods is that in some cases a lossy method

can produce a much smaller compressed file than any known lossless method, while still meeting

the requirements of the application.

Lossy methods are most often used for compressing sound, images or video. The compression ratios of lossy video codecs are nearly always far superior to those of the audio and still-image equivalents. Audio can be compressed at 1:10 with no noticeable loss of quality; video can be compressed immensely, e.g. 300:1, with little visible quality loss.

Lossy compressed images are typically compressed to 1/10th their original size, as with audio,

but the quality loss is more noticeable, especially on closer inspection. When a user acquires a

lossy-compressed file, the retrieved file can be quite different from the original at the bit level while

being indistinguishable to the human ear or eye for most practical purposes. Many methods focus

on the idiosyncrasies of the human anatomy, taking into account for example, that the human eye

can see only certain frequencies of light. The psychoacoustic model describes how sound can be

highly compressed without degrading the perceived quality of the sound. Flaws caused by lossy

compression that are noticeable to the human eye or ear are known as compression artifacts.

General Image Compression system:

As shown in figure below image compression systems are composed of two distinct

structural blocks: an encoder & a decoder. Image f(x, y) is fed into the encoder, which creates a

set of symbols from the input data & uses them to represent the image.

f(x, y) → source encoder → channel encoder → channel → channel decoder → source decoder → f^(x, y)

If we let n1 & n2 denote the number of information carrying units in the original & encoded

images respectively, the compression that is achieved can be quantified numerically via the

compression ratio,

CR=n1/n2.

A compression ratio like 10 or 10:1 indicates that the original image has 10 information carrying

units for every 1 unit in the compressed data set.
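As a worked example (with hypothetical numbers):

```python
def compression_ratio(n1, n2):
    """CR = n1/n2: information-carrying units before vs. after coding."""
    return n1 / n2

# hypothetical case: a 512 x 512 image at 8 bits/pixel
# coded into 209,715 bits -- roughly a 10:1 compression
cr = compression_ratio(512 * 512 * 8, 209715)
```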

Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the

most well known of these is Fourier analysis, which breaks down a signal into constituent

sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical

technique for transforming our view of the signal from time-based to frequency-based.

Figure 2

For many signals, Fourier analysis is extremely useful because the signal’s frequency

content is of great importance. So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback. In transforming to the frequency domain, time

information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when

a particular event took place. If the signal properties do not change much over time — that is, if

it is what is called a stationary signal—this drawback isn’t very important. However, most

interesting signals contain numerous non stationary or transitory characteristics: drift, trends,

abrupt changes, and beginnings and ends of events. These characteristics are often the most

important part of the signal, and Fourier analysis is not suited to detecting them.

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to

analyze only a small section of the signal at a time—a technique called windowing the

signal. Gabor’s adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a

two-dimensional function of time and frequency.

Figure 3

The STFT represents a sort of compromise between the time- and frequency-based views of a

signal. It provides some information about both when and at what frequencies a signal event

occurs. However, you can only obtain this information with limited precision, and that precision

is determined by the size of the window. While the STFT compromise between time and

frequency information can be useful, the drawback is that once you choose a particular size for

the time window, that window is the same for all frequencies. Many signals require a more

flexible approach—one where we can vary the window size to determine more accurately either

time or frequency.

2.10 Wavelet Analysis

Wavelet analysis represents the next logical step: a windowing technique with variable-sized

regions. Wavelet analysis allows the use of long time intervals where we want more precise low-

frequency information, and shorter regions where we want high-frequency information.

Figure 4

Here’s what this looks like in contrast with the time-based, frequency-based,

and STFT views of a signal:

Figure 5

You may have noticed that wavelet analysis does not use a time-frequency region, but rather a

time-scale region. For more information about the concept of scale and the link between scale

and frequency, see “How to Connect Scale to Frequency?”

One major advantage afforded by wavelets is the ability to perform local analysis, that is, to

analyze a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity

— one so tiny as to be barely visible. A power fluctuation or a noisy switch easily could generate

such a signal in the real world, perhaps.

Figure 6

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows

nothing particularly interesting: a flat spectrum with two peaks representing a single frequency.

However, a plot of wavelet coefficients clearly shows the exact location in time of the

discontinuity.

Figure 7
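This contrast can be demonstrated numerically. In the sketch below, one level of Haar detail coefficients stands in for the toolbox's wavelet coefficient plot, and the signal with its small jump is our own construction:

```python
import numpy as np

# sinusoid with a small jump at sample 601 (a hypothetical test signal)
t = np.linspace(0, 1, 1024)
s = np.sin(2 * np.pi * 4 * t)
s[601:] += 0.1

# Fourier magnitudes: the jump's *location in time* is nowhere to be seen
spectrum = np.abs(np.fft.rfft(s))

# one level of Haar detail coefficients: d[k] ~ s[2k] - s[2k+1]
d = (s[0::2] - s[1::2]) / np.sqrt(2)
location = 2 * np.argmax(np.abs(d))   # back to the original sample index
```

The largest detail coefficient sits exactly at the pair of samples straddling the discontinuity, while the spectrum only reports which frequencies are present.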

Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques

miss, aspects like trends, breakdown points, discontinuities in higher derivatives, and self-

similarity. Furthermore, because it affords a different view of data than those presented by

traditional techniques, wavelet analysis can often compress or de-noise a signal without

appreciable degradation. Indeed, in their brief history within the signal-processing field, wavelets

have already proven themselves to be an indispensable addition to the analyst’s collection of

tools and continue to enjoy a burgeoning popularity today.

Now that we know some situations when wavelet analysis is useful, it is worthwhile asking,

“What is wavelet analysis?” and even more fundamentally,

“What is a wavelet?”

A wavelet is a waveform of effectively limited duration that has an average value of zero.

Compare wavelets with sine waves, which are the basis of Fourier analysis.

Sinusoids do not have limited duration — they extend from minus to plus

infinity. And where sinusoids are smooth and predictable, wavelets tend to be

irregular and asymmetric.

Figure 8

Fourier analysis consists of breaking up a signal into sine waves of various frequencies.

Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the

original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, you can see

intuitively that signals with sharp changes might be better analyzed with an irregular wavelet

than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It

also makes sense that local features can be described better with wavelets that have local extent.

Thus far, we’ve discussed only one-dimensional data, which encompasses most ordinary signals.

However, wavelet analysis can be applied to two-dimensional data (images) and, in principle, to

higher dimensional data. This toolbox uses only one and two-dimensional analysis techniques.

Mathematically, the process of Fourier analysis is represented by the Fourier transform

F(w) = ∫ f(t) e^(-jwt) dt,

which is the sum over all time of the signal f(t) multiplied by a complex exponential. (Recall that

a complex exponential can be broken down into real and imaginary sinusoidal components.) The

results of the transform are the Fourier coefficients F(w), which when multiplied by a sinusoid of

frequency w yields the constituent sinusoidal components of the original signal. Graphically, the

process looks like:

Figure 9

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ:

C(scale, position) = ∫ f(t) ψ(scale, position, t) dt.

The result of the CWT is a set of many wavelet coefficients C, which are a function of scale and position.

Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the

constituent wavelets of the original signal:

Figure 10

2.15 Scaling

We’ve already alluded to the fact that wavelet analysis produces a time-scale view of a signal

and now we’re talking about scaling and shifting wavelets.

Scaling a wavelet simply means stretching (or compressing) it.

To go beyond colloquial descriptions such as “stretching,” we introduce the scale factor, often

denoted by the letter a.

If we’re talking about sinusoids, for example the effect of the scale factor is very easy to see:

Figure 11

The scale factor works exactly the same with wavelets. The smaller the scale factor, the more

“compressed” the wavelet.

Figure 12

It is clear from the diagrams that for a sinusoid sin(wt) the scale factor ‘a’ is related (inversely)

to the radian frequency ‘w’. Similarly, with wavelet analysis the scale is related to the frequency

of the signal.

2.16 Shifting

Shifting a wavelet simply means delaying (or hastening) its onset. Mathematically,

delaying a function ψ(t) by k is represented by ψ(t - k).

Figure 13

The continuous wavelet transform is the sum over all time of the signal multiplied by scaled,

shifted versions of the wavelet. This process produces wavelet coefficients that are a function of

scale and position.

It’s really a very simple process. In fact, here are the five steps of an easy recipe for creating a

CWT:

1. Take a wavelet and compare it to a section at the start of the original signal.

2. Calculate a number C that represents how closely correlated the wavelet is with this

section of the signal. The higher C is, the more the similarity. More precisely, if the signal

energy and the wavelet energy are equal to one, C may be interpreted as a correlation

coefficient.

Note that the results will depend on the shape of the wavelet you choose.

Figure 14

3. Shift the wavelet to the right and repeat steps 1 and 2 until you’ve covered the whole signal.

Figure 15

4. Scale (stretch) the wavelet and repeat steps 1 through 3.

Figure 16

5. Repeat steps 1 through 4 for all scales.

When you’re done, you’ll have the coefficients produced at different scales by different sections

of the signal. The coefficients constitute the results of a regression of the original signal

performed on the wavelets.
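The five-step recipe translates almost directly into code. In this sketch the sampling grid, the window width and the Mexican-hat analyzing wavelet are our own illustrative choices:

```python
import numpy as np

def mexican_hat(u):
    """A common analyzing wavelet: zero mean, effectively limited duration."""
    return (1 - u**2) * np.exp(-u**2 / 2)

def simple_cwt(signal, wavelet, scales):
    """Steps 1-5: for each scale, slide a scaled copy of the wavelet
    along the signal and record the correlation value C."""
    n = len(signal)
    coeffs = np.zeros((len(scales), n))
    for i, a in enumerate(scales):
        width = int(10 * a)                      # support of the scaled wavelet
        u = (np.arange(width) - width / 2) / a   # stretch by the scale factor a
        w = wavelet(u) / np.sqrt(a)              # usual CWT normalization
        # shifting step = correlating the signal with the scaled wavelet
        coeffs[i] = np.correlate(signal, w, mode="same")
    return coeffs
```

Each row of the returned array corresponds to one scale, each column to one position, which is exactly the layout of the coefficient plots described below.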

How to make sense of all these coefficients? You could make a plot on which the x-axis

represents position along the signal (time), the y-axis represents scale, and the color at each x-y

point represents the magnitude of the wavelet coefficient C. These are the coefficient plots

generated by the graphical tools.

Figure 17

If you could look at the same surface from the side, you might see something like this:

Figure 18

The continuous wavelet transform coefficient plots are precisely the time-scale view of the signal

we referred to earlier. It is a different view of signal data than the time- frequency Fourier view,

but it is not unrelated.

Notice that the scales in the coefficients plot (shown as y-axis labels) run from 1 to 31.

Recall that the higher scales correspond to the most “stretched” wavelets. The more stretched the

wavelet, the longer the portion of the signal with which it is being compared, and thus the

coarser the signal features being measured by the wavelet coefficients.

Figure 19

Thus, there is a correspondence between wavelet scales and frequency as revealed by wavelet

analysis:

• Low scale a=> Compressed wavelet => Rapidly changing details => High

frequency ‘w’.

• High scale a=>Stretched wavelet=>Slowly changing, coarse features=>Low

frequency ‘w’.

It’s important to understand that the fact that wavelet analysis does not produce a time-frequency view of a signal is not a weakness, but a strength of the technique.

Not only is time-scale a different way to view data, it is a very natural way to view data deriving

from a great number of natural phenomena.

Consider a lunar landscape, whose ragged surface (simulated below) is a result of centuries

of bombardment by meteorites whose sizes range from gigantic boulders to dust specks.

If we think of this surface in cross-section as a one-dimensional signal, then it is reasonable

to think of the signal as having components of different scales—large features carved by the

impacts of large meteorites, and finer features abraded by small meteorites.

Figure 20

Here is a case where thinking in terms of scale makes much more sense than thinking in

terms of frequency. Inspection of the CWT coefficients plot for this signal reveals patterns

among scales and shows the signal’s possibly fractal nature.

Figure 21

Even though this signal is artificial, many natural phenomena — from the intricate

branching of blood vessels and trees, to the jagged surfaces of mountains and fractured metals —

lend themselves to an analysis of scale.

What’s “Continuous” About the Continuous Wavelet Transform?

Any signal processing performed on a computer using real-world data must be performed

on a discrete signal — that is, on a signal that has been measured at discrete times. So what

exactly is “continuous” about it?

What’s “continuous” about the CWT, and what distinguishes it from the discrete wavelet

transform (to be discussed in the following section), is the set of scales and positions at which it

operates.

Unlike the discrete wavelet transform, the CWT can operate at every scale, from that of the

original signal up to some maximum scale that you determine by trading off your need for

detailed analysis with available computational horsepower.

The CWT is also continuous in terms of shifting: during computation, the analyzing wavelet is shifted smoothly over the full domain of the analyzed function.

Figure 22

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it

generates an awful lot of data. What if we choose only a subset of scales and positions at which

to make our calculations? It turns out rather remarkably that if we choose scales and positions

based on powers of two—so-called dyadic scales and positions—then our analysis will be much

more efficient and just as accurate. We obtain such an analysis from the discrete wavelet

transform (DWT).

An efficient way to implement this scheme using filters was developed in 1988 by

Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing

community as a two-channel sub band coder. This very practical filtering algorithm yields a fast

wavelet transform — a box into which a signal passes, and out of which wavelet coefficients

quickly emerge. Let’s examine this in more depth.

For many signals, the low-frequency content is the most important part. It is what gives

the signal its identity. The high-frequency content on the other hand imparts flavor or nuance.

Consider the human voice. If you remove the high-frequency components, the voice sounds

different but you can still tell what’s being said. However, if you remove enough of the low-

frequency components, you hear gibberish. In wavelet analysis, we often speak of

approximations and details. The approximations are the high-scale, low-frequency components

of the signal. The details are the low-scale, high-frequency components.

The filtering process at its most basic level looks like this:

Figure 23

The original signal S passes through two complementary filters and emerges as two

signals.

Unfortunately, if we actually perform this operation on a real digital signal, we wind up

with twice as much data as we started with. Suppose, for instance that the original signal S

consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a

total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the 1000 we had.

There exists a more subtle way to perform the decomposition using wavelets. By looking

carefully at the computation, we may keep only one point out of two in each of the two 1000-length signals to get the complete information. This is the notion of down sampling. We produce

two sequences called cA and cD.

Figure 24

The process on the right, which includes down sampling, produces DWT coefficients. To gain a

better appreciation of this process let’s perform a one-stage discrete wavelet transform of a

signal. Our signal will be a pure sinusoid with high- frequency noise added to it.

Here is our schematic diagram with real signals inserted into it:

Figure 25

s = sin(20*linspace(0,pi,1000)) + 0.5*rand(1,1000);

[cA,cD] = dwt(s,'db2');

where db2 is the name of the wavelet we want to use for the analysis.

Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal.

[length(cA) length(cD)]

ans = 501 501

You may observe that the actual lengths of the detail and approximation coefficient vectors

are slightly more than half the length of the original signal. This has to do with the filtering

process, which is implemented by convolving the signal with a filter. The convolution “smears”

the signal, introducing several extra samples into the result.
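The same experiment can be reproduced outside the toolbox. The db2 filter pair below comes from the standard Daubechies construction (a sketch: the toolbox's dwt also applies a border-extension rule, which we ignore here, so only the coefficient lengths match exactly):

```python
import numpy as np

np.random.seed(0)                              # reproducible noise

# db2 analysis filters from the standard Daubechies construction
s3 = np.sqrt(3)
lo = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
hi = lo[::-1] * np.array([1.0, -1, 1, -1])     # quadrature mirror highpass

# same kind of test signal as the toolbox snippet above
s = np.sin(20 * np.linspace(0, np.pi, 1000)) + 0.5 * np.random.rand(1000)

# filter, then keep one sample out of two (down sampling)
cA = np.convolve(s, lo)[1::2]
cD = np.convolve(s, hi)[1::2]
```

Both coefficient vectors come out slightly longer than half the signal (501 samples), because the full convolution "smears" the signal by the filter length minus one.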

2.24 Multiple-Level Decomposition:

The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower resolution components. This is called the wavelet decomposition tree.

Figure 26

Looking at a signal’s wavelet decomposition tree can yield valuable information.

Figure 27

Number of Levels:

Since the analysis process is iterative, in theory it can be continued indefinitely. In

reality, the decomposition can proceed only until the individual details consist of a single sample

or pixel. In practice, you’ll select a suitable number of levels based on the nature of the signal, or

on a suitable criterion such as entropy.

We’ve learned how the discrete wavelet transform can be used to analyze, or decompose, signals and images. This process is called decomposition or analysis. The other half of the story

is how those components can be assembled back into the original signal without loss of

information. This process is called reconstruction, or synthesis. The mathematical manipulation

that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a

signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:

Figure 28

Where wavelet analysis involves filtering and down sampling, the wavelet reconstruction

process consists of up sampling and filtering. Up sampling is the process of lengthening a signal

component by inserting zeros between samples:

Figure 29

The Wavelet Toolbox includes commands like idwt and waverec that perform single-

level or multilevel reconstruction respectively on the components of one-dimensional signals.

These commands have their two-dimensional analogs, idwt2 and waverec2.

The filtering part of the reconstruction process also bears some discussion, because it is the

choice of filters that is crucial in achieving perfect reconstruction of the original signal. The

down sampling of the signal components performed during the decomposition phase introduces a

distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and

reconstruction phases that are closely related (but not identical), we can “cancel out” the effects

of aliasing.

The low- and high pass decomposition filters (L and H), together with their associated

reconstruction filters (L' and H'), form a system of what is called Quadrature mirror filters:

Figure 30
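The aliasing cancellation can be verified in a few lines for the Haar pair (chosen here because its filters are only two taps long; the same perfect-reconstruction identity holds for the longer quadrature mirror pairs):

```python
import numpy as np

def haar_analysis(s):
    """Decompose with the Haar pair (L, H), down sampling by pairs."""
    x = np.asarray(s, float).reshape(-1, 2)
    cA = (x[:, 0] + x[:, 1]) / np.sqrt(2)   # lowpass + down sample
    cD = (x[:, 0] - x[:, 1]) / np.sqrt(2)   # highpass + down sample
    return cA, cD

def haar_synthesis(cA, cD):
    """Up sample and filter with the mirror pair (L', H'); aliasing cancels."""
    x = np.empty(2 * len(cA))
    x[0::2] = (cA + cD) / np.sqrt(2)
    x[1::2] = (cA - cD) / np.sqrt(2)
    return x
```

Running any even-length signal through analysis and then synthesis returns it exactly: the distortion introduced by down sampling in each branch cancels when the two branches are recombined.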

We have seen that it is possible to reconstruct our original signal from the

coefficients of the approximations and details.

Figure 31

It is also possible to reconstruct the approximations and details themselves from their coefficient vectors.

As an example, let’s consider how we would reconstruct the first-level approximation A1

from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we

used to reconstruct the original signal. However, instead of combining it with the level-one detail

cD1, we feed in a vector of zeros in place of the detail coefficients

vector:

Figure 32

The process yields a reconstructed approximation A1, which has the same length as the

original signal S and which is a real approximation of it. Similarly, we can reconstruct the first-

level detail D1, using the analogous process:

Figure 33

The reconstructed details and approximations are true constituents of the original signal.

In fact, we find when we combine them that:

A1 + D1 = S

Note that the coefficient vectors cA1 and cD1—because they were produced by down

sampling and are only half the length of the original signal — cannot directly be combined to

reproduce the signal.

It is necessary to reconstruct the approximations and details before combining them.
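The zero-substitution reconstruction is easy to verify with a one-level Haar transform (an illustrative stand-in for the toolbox's db filters):

```python
import numpy as np

root2 = np.sqrt(2)

def analyze(s):
    """One Haar level: approximation and detail coefficients, half length."""
    x = np.asarray(s, float).reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / root2, (x[:, 0] - x[:, 1]) / root2

def synthesize(cA, cD):
    """Up sample and apply the Haar reconstruction filters."""
    x = np.empty(2 * len(cA))
    x[0::2] = (cA + cD) / root2
    x[1::2] = (cA - cD) / root2
    return x

s = np.array([3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 2.0, 2.0])
cA1, cD1 = analyze(s)
A1 = synthesize(cA1, np.zeros_like(cD1))   # zeros in place of the details
D1 = synthesize(np.zeros_like(cA1), cD1)   # zeros in place of the approximation
```

A1 and D1 each have the full signal length and sum exactly to S, whereas cA1 and cD1 themselves are only half as long and cannot be combined directly.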

Extending this technique to the components of a multilevel analysis, we find that similar

relationships hold for all the reconstructed signal constituents.

That is, there are several ways to reassemble the original signal:

Figure 34

In the section “Reconstruction Filters”, we spoke of the importance of choosing the right

filters. In fact, the choice of filters not only determines whether perfect reconstruction is

possible, it also determines the shape of the wavelet we use to perform the analysis. To construct

a wavelet of some practical utility, you seldom start by drawing a waveform. Instead, it usually

makes more sense to design the appropriate quadrature mirror filters, and then use them to create

the waveform. Let’s see

how this is done by focusing on an example.

Consider the low pass reconstruction filter (L') for the db2 wavelet.


Figure 35

Lprime = dbaux(2)

If we reverse the order of this vector (see wrev), and then multiply every even element by -1, we obtain the high pass filter. Up sampling this high pass filter by inserting zeros between its samples yields a vector HU. Finally, convolve the up sampled vector with the original low pass filter:

H2 = conv(HU,Lprime);

plot(H2)

Figure 36

If we iterate this process several more times, repeatedly up sampling and convolving the

resultant vector with the four-element filter vector Lprime, a pattern begins to emerge:

Figure 37
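The iterate-and-convolve procedure above is easy to reproduce outside MATLAB. The following Python sketch assumes the standard db2 scaling filter coefficients and replaces wrev, dyadup and conv with plain list operations; it only demonstrates the mechanics, and the number of iterations is arbitrary.

```python
# Cascade iteration sketch: build the db2 high pass filter from the low
# pass reconstruction filter, then repeatedly upsample and convolve with
# Lprime. The coefficients are the standard Daubechies-2 scaling filter.
import math

r3, r2 = math.sqrt(3), math.sqrt(2)
Lprime = [(1 + r3) / (4 * r2), (3 + r3) / (4 * r2),
          (3 - r3) / (4 * r2), (1 - r3) / (4 * r2)]

def conv(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def upsample(v):
    """Insert a zero after every element (dyadic upsampling)."""
    out = []
    for x in v:
        out += [x, 0.0]
    return out

# High pass filter: reverse Lprime and alternate the signs.
H = [((-1) ** k) * c for k, c in enumerate(reversed(Lprime))]
curve = H
for _ in range(4):            # a few iterations suffice to see the shape
    curve = conv(upsample(curve), Lprime)
print(len(curve))             # the vector grows with every iteration
```

Plotting `curve` after each pass shows the jagged four-tap start smoothing toward the familiar db2 waveform, exactly the pattern the text describes.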

The curve begins to look progressively more like the db2 wavelet. This means that the

wavelet’s shape is determined entirely by the coefficients of the reconstruction filters. This

relationship has profound implications. It means that you cannot choose just any shape, call it a

wavelet, and perform an analysis. At least, you can’t choose an arbitrary wavelet waveform if

you want to be able to reconstruct the original signal accurately. You are compelled to choose a

shape determined by quadrature mirror decomposition filters.

We’ve seen the interrelation of wavelets and quadrature mirror filters. The wavelet function

ψ is determined by the high pass filter, which also produces the details of the wavelet

decomposition.

There is an additional function associated with some, but not all, wavelets. This is the so-called scaling function. The scaling function is very similar to the wavelet function. It is

determined by the low pass quadrature mirror filters, and thus is associated with the

approximations of the wavelet decomposition. In the same way that iteratively up- sampling and

convolving the high pass filter produces a shape approximating the wavelet function, iteratively

up-sampling and convolving the low pass filter produces a shape approximating the scaling

function.

A multi step analysis-synthesis process can be represented as:

Figure 38

This process involves two aspects: breaking up a signal to obtain the wavelet coefficients,

and reassembling the signal from the coefficients. We’ve already discussed decomposition and

reconstruction at some length. Of course, there is no point breaking up a signal merely to have

the satisfaction of immediately reconstructing it. We may modify the wavelet coefficients before

performing the reconstruction step. We perform wavelet analysis because the coefficients thus

obtained have many known uses, de-noising and compression being foremost among them. But

wavelet analysis is still a new and emerging field. No doubt, many uncharted uses of the wavelet

coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and

hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what

you discover.

Arithmetic Coding

For simplicity, the following discussion of arithmetic coding assumes a sequence of symbols from a stationary random process with independent elements. In this case arithmetic coding produces a rate that approaches the entropy of the sequence; the technique also applies to correlated sequences as long as the process remains stationary. This discussion does not describe how arithmetic coding adapts to changing statistics in the bit stream, because the adaptive nature of the coding is determined mainly by the coefficient bit modeler within the JPEG2000 standard. A good tutorial on arithmetic coding can be found in the references.

Whereas Huffman coding involves transmitting separate code words for each symbol in a sequence, arithmetic coding requires transmitting only the information needed to allow a decoder to determine the particular fractional interval between 0 and 1 to which the sequence is mapped. This information includes a fractional code word, C, which points to the lower bound of the interval, and the interval width, A, as illustrated in the figure.

Each time the encoder receives a new symbol, a smaller interval is chosen within the current interval. This interval represents the whole sequence of symbols up to and including the new symbol. The length of the updated interval is the probability associated with this sequence of symbols, as shown in the figure.

The position of the probability interval associated with a particular symbol is always fixed in relation to

the intervals of the other symbols. (Note that in arithmetic encoding, the position of the fractional interval

is as important as the length.) In theory, this process can continue indefinitely until all symbols have been

received. In practice, finite precision problems restrict the size of the shortest possible interval that can be

represented. When the length of the probability interval falls below a certain minimum size, the interval

must be renormalized to lengthen it above the minimum.
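The interval-narrowing process described above can be sketched in a few lines. The toy three-symbol model below is an assumption for illustration; renormalization and the actual bit output are omitted, so the sketch only shows how the code word C and the width A track the sequence.

```python
# Interval-narrowing sketch of arithmetic coding for a fixed, known
# symbol distribution. Exact rational arithmetic sidesteps the finite
# precision issues the text mentions; real coders renormalize instead.
from fractions import Fraction

# assumed toy model: symbol -> (cumulative probability, probability)
model = {'a': (Fraction(0), Fraction(1, 2)),
         'b': (Fraction(1, 2), Fraction(1, 4)),
         'c': (Fraction(3, 4), Fraction(1, 4))}

def encode(symbols):
    C = Fraction(0)    # lower bound of the current interval
    A = Fraction(1)    # width of the current interval
    for s in symbols:
        cum, p = model[s]
        C += A * cum   # move the lower bound to the sub-interval for s
        A *= p         # shrink the width by the symbol probability
    return C, A

C, A = encode('aab')
# A equals the probability of the whole sequence: 1/2 * 1/2 * 1/4 = 1/16
print(C, A)
```

Because each symbol's sub-interval has a fixed position relative to the others, a decoder given C (and the model) can walk the same subdivisions and recover the sequence.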

CDF 9-7:

In the normalized case we observe that the accuracy degrades gracefully with increasing

compression ratio. Though consistently better on average at all ratios, the CDF 9-7 wavelet is not

markedly better than the simple Haar wavelet in terms of accuracy. In the weighted case we

observe a considerable improvement in accuracy for almost all compression ratios.

To investigate this behavior, we examined the selected wavelet coefficients using the CDF 9-7 wavelet at ratio 1:10 with both normalized and weighted filters. The normalized case tends to preserve mostly low-frequency content, while the weighted case distributes wavelet coefficients near

image details more evenly across levels. Together with the nature of the weighting this leads to

models with more emphasis on edges.

CDF 9-7 performs consistently better than the Haar wavelet. Further, the normalized wavelets are seen to put relatively more variance into fewer components when compared to the weighted ones.

Cohen-Daubechies-Feauveau wavelets are historically the first family of bi-orthogonal wavelets, made popular by Ingrid Daubechies. They are not the same as the orthogonal Daubechies wavelets, and not very similar in shape and properties either; their construction idea, however, is the same.

The JPEG 2000 compression standard uses the biorthogonal CDF 5/3 wavelet (also called the

LeGall 5/3 wavelet) for lossless compression and a CDF 9/7 wavelet for lossy compression.


We employed a lossy image/sound compression technique: we take the wavelet transform of the original signal, then calculate a threshold based on the compression ratio required by the user.

The image was compressed using the MATLAB wavelet toolbox. Only the following MATLAB functions were used:

· wavedec() (and wavedec2(), the corresponding 2-D function, for image compression)

· waverec() (and waverec2(), the corresponding 2-D function, for image reconstruction)

· wdencmp()

WAVEDEC(X,N,'wname'):

The syntax for the function wavedec is [C,L] = WAVEDEC(X,N,'wname') where X is the

input vector or signal, N is the number of decomposition levels, and string ‘wname’ specifies the

filter or wavelet family to use for decomposition. For perfect reconstruction wname should

specify a wavelet for which the decomposition and the reconstruction filter form a biorthogonal

pair. The biorthogonal filters have linear phase. The function returns the wavelet coefficients C and the bookkeeping vector L, which indicates the

position for each type of coefficients (approximations and details) in the vector C. For image

compression we used wavedec2 which is a 2-D version of the function wavedec().
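The [C,L] convention is worth seeing concretely. The sketch below is not wavedec itself but a hypothetical look-alike: it runs a few Haar decomposition levels (an assumption made for brevity) and flattens the results into one coefficient vector C plus a bookkeeping vector L of segment lengths, coarsest approximation first.

```python
# Sketch of the [C,L] bookkeeping convention: all coefficients are
# flattened into one vector C, and L records the segment lengths
# (cA_N first, then each detail level, then the signal length).
import math

def haar_step(s):
    r = math.sqrt(2)
    cA = [(s[2 * i] + s[2 * i + 1]) / r for i in range(len(s) // 2)]
    cD = [(s[2 * i] - s[2 * i + 1]) / r for i in range(len(s) // 2)]
    return cA, cD

def wavedec_like(x, n):
    details = []
    approx = list(x)
    for _ in range(n):
        approx, cD = haar_step(approx)
        details.append(cD)
    # coarsest first: cA_n, cD_n, ..., cD_1
    C = approx + [c for cD in reversed(details) for c in cD]
    L = [len(approx)] + [len(cD) for cD in reversed(details)] + [len(x)]
    return C, L

C, L = wavedec_like([2.0, 4.0, 8.0, 0.0, 6.0, 6.0, 2.0, 4.0], 2)
print(L)   # [2, 2, 4, 8]: lengths of cA2, cD2, cD1, plus the signal length
```

Reconstruction (as in waverec) simply uses L to slice C back into its approximation and detail segments before inverting each level.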

WAVEREC(C,L,'wname'):

The arguments C, L and ‘wname’ are the same as in wavedec. The function returns X, the reconstructed version of the signal. For image compression we used waverec2, which is the 2-D version of the waverec function.

WDENCMP('gbl',X,'wname',N,THR,SORH,KEEPAPP):

The function wdencmp actually compresses the image on the basis of the threshold returned by the MATLAB function ddencmp. ‘KEEPAPP’ tells the function whether the

approximation of the image has to be retained when compressing or not. 100% compression

while keeping the approximation will make all coefficients of the image zero except for the

approximation of the lowest level of decomposition.

compthreshold(c,s,percentage,keepapp)

Instead of using the MATLAB function ddencmp we developed our own function which, on the basis of the input percentage compression, returns the threshold which is then used

by the function wdencmp for image compression and reconstruction. For reconstruction the

function wdencmp calls waverec2.

function [threshold,sorh,keepap] = compthreshold(c,s,percentage,keepapp)
% Return a hard threshold that zeros the given percentage of the
% smallest-magnitude wavelet coefficients in c.
sorh = 'h';            % hard thresholding
keepap = keepapp;
if keepapp == 1
    % keep the approximation: skip its prod(s(1,:)) coefficients
    x = abs(c(prod(s(1,:))+1:end));
else
    % drop coefficients even from the approximation
    x = abs(c);
end
x = sort(x);
dropindex = round(length(x) * percentage/100);
dropindex = max(dropindex, 1);   % guard against an invalid index of zero
threshold = x(dropindex);
if (threshold == 0)
    threshold = 0.05*max(abs(x));
end

The wavelet coefficients produced by wdencmp were stored in a .mat file. Although this file is larger than the original bitmap, compressing it with WinZip achieves a significant amount of compression.

Figure 39

This shows that a lot of redundancy has been introduced in the image that can be easily

removed by any file compression technique.
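This claim is easy to illustrate with any general-purpose compressor. The toy sketch below uses Python's zlib standing in for WinZip, and synthetic Gaussian "coefficients" standing in for the wavelet output: thresholding leaves long runs of zeros, which a generic file compressor removes cheaply.

```python
# Thresholding leaves many exact zeros in the coefficient array, so a
# generic compressor shrinks the file even though its raw size grew.
# The coefficient values here are synthetic (random), for illustration.
import random
import struct
import zlib

random.seed(0)
coeffs = [random.gauss(0, 1) for _ in range(10000)]
threshold = 1.0
kept = [c if abs(c) >= threshold else 0.0 for c in coeffs]

raw = struct.pack('%dd' % len(kept), *kept)   # 8 bytes per coefficient
packed = zlib.compress(raw)
print(len(raw), len(packed))   # the zeroed version compresses well
```

Roughly two thirds of a unit Gaussian falls below magnitude 1, so most samples become zero and the deflate stream is far smaller than the raw 80,000 bytes.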

3.1 APPLICATIONS:

3.1.1 Photography:

Photography is the process of making pictures by means of the action of light. It involves

recording light patterns, as reflected from objects, onto a sensitive medium through a timed

exposure. The process is done through mechanical, chemical or digital devices commonly known

as cameras.

Photography can be classified under imaging technology and has gained the interest of

scientists and artists from its inception. Scientists have used its capacity to make accurate recordings, such as Eadweard Muybridge in his study of human and animal locomotion (1887).

Artists have been equally interested in this aspect but have also tried to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

Military, police and security forces use photography for surveillance, recognition and data

storage.

Medical imaging is the process by which physicians evaluate an area of the subject's body

that is not externally visible. Medical imaging may be clinically motivated, seeking to diagnose

and examine disease in specific human patients. Alternatively, it may be used by researchers in

order to understand processes in living organisms. Many of the techniques developed for medical

imaging also have scientific and industrial applications.

Microscope image processing is a broad term that covers the use of digital image

processing techniques to process, analyze and present images obtained from a microscope. Such

processing is now commonplace in a number of diverse fields such as medicine, biological

research, cancer research, drug testing, metallurgy, etc. A number of manufacturers of

microscopes now specifically design in features that allow interface to an image processing

system.

4. Lifting wavelets

This theory is a sequel to the wavelet analysis, which will fill in the blank spots on the wavelet

transform map, add some detail and even explore the area outside it. We start with taking a

closer look at the scaling and wavelet filters in general, what they should look like, what their

constraints are and how they can be used in the inverse wavelet transform. Then we will do

some algebra and develop a general framework to design filters for every possible wavelet

transform. This framework was introduced by Sweldens and is known as the lifting scheme or

simply lifting. Using the lifting scheme we will in the end arrive at a universal discrete wavelet

transform which yields only integer wavelet- and scaling coefficients instead of the usual

floating point coefficients. In order to clarify the theory in this tutorial a detailed example will be

presented.
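An integer-to-integer lifting transform of exactly this kind can be shown in a few lines. The sketch below implements the reversible LeGall/CDF 5/3 lifting steps (predict, then update) with integer arithmetic only; even-length input and a simple symmetric border convention are assumptions made for brevity.

```python
# Reversible 5/3 lifting: integers in, integers out, exactly invertible.
# d[k] sits at odd index 2k+1, s[k] at even index 2k.
def fwd53(x):
    n = len(x)
    m = n // 2
    d = []
    # predict: odd sample minus the floored average of its even neighbours
    for k in range(m):
        left = x[2 * k]
        right = x[2 * k + 2] if 2 * k + 2 < n else x[n - 2]  # mirror border
        d.append(x[2 * k + 1] - (left + right) // 2)
    s = []
    # update: even sample plus a rounded quarter of the neighbouring details
    for k in range(m):
        dl = d[k - 1] if k > 0 else d[0]                     # mirror border
        s.append(x[2 * k] + (dl + d[k] + 2) // 4)
    return s, d

def inv53(s, d):
    m = len(s)
    x = [0] * (2 * m)
    for k in range(m):                       # undo update
        dl = d[k - 1] if k > 0 else d[0]
        x[2 * k] = s[k] - (dl + d[k] + 2) // 4
    for k in range(m):                       # undo predict
        left = x[2 * k]
        right = x[2 * k + 2] if 2 * k + 2 < 2 * m else x[2 * m - 2]
        x[2 * k + 1] = d[k] + (left + right) // 2
    return x

sig = [5, 7, 9, 4, 2, 8, 6, 6]
s, d = fwd53(sig)
assert inv53(s, d) == sig    # perfect integer reconstruction
```

Because the inverse subtracts exactly the same floored terms the forward transform added, the rounding never destroys information; this is the property that makes lifting attractive for lossless coding.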

This material is covered less widely than the basic wavelet transform tutorial, since the lifting scheme is a quite recent development; especially integer lifting [Cal96], [Uyt97b] and multidimensional lifting [Kov97], [Uyt97a] are not (yet) widely known. This text is mainly based on [Dau97], [Cal96], [Swe96a], [Swe96b], [Cla97], [Uyt97b] and [Uyt97c].

The 5/3 wavelet has attracted interest in various applications because its filter tap coefficients can be written as powers of two, leading to a multiplication-free realization of the filter bank. Several linear or nonlinear decomposition structures published in the literature report better performance than the 5/3 wavelet using signal-adapted filters. Among these works, one shows how to achieve a lifting-style implementation of any DWT filter bank, while another extends the idea of linear filters in the lifting style to nonlinear filters. In one approach the lifting prediction filter was made adaptive according to the local signal properties, and in another the importance of the coder–nonlinear transform strategy was emphasized. The idea of lifting adaptation has also been applied to video processing. Finally, 2-D extensions of the lifting structures have been examined, which fundamentally resemble the idea of this work. Nevertheless, the 5/3 wavelet has an efficient set of filter coefficients which enables fast, simple, integer-shifts-only implementations, and due to these properties it was also adopted by the JPEG2000 standard.

Perfect reconstruction

Usually a signal transform is used to transform a signal to a different domain, perform some operation on

the transformed signal and inverse transform it, back to the original domain. This means that the

transform has to be invertible. In case of no data processing we want the reconstruction to be perfect, i.e.

we will allow only for a time delay. All this holds also in our case of the wavelet transform. As mentioned

before, we can perform a wavelet transform (or sub band coding or multi resolution analysis) using a

filter bank. A simple one-stage filter bank is shown in figure 1 and it can be made using FIR filters.

Although IIR filters can also be used, they have the disadvantage that their infinite response leads to

infinite data expansion. For any practical use of an IIR filter bank the output data stream has to be cut

which leads to a loss of data. In this text we will only look at FIR filters.

Fig: One stage filter bank analysis for a signal and its reconstruction
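The "perfect up to a time delay" property of such a one-stage FIR filter bank can be checked directly. The sketch below uses the two-tap Haar analysis/synthesis pair (an assumption chosen so the filters fit on one line); the analysis branches are downsampled, upsampled, synthesized and summed, and the result is the input delayed by one sample.

```python
# One-stage two-channel FIR filter bank: analyze, downsample by 2,
# upsample, synthesize, sum. Haar filters are used for illustration.
import math

a = 1 / math.sqrt(2)
h0, h1 = [a, a], [a, -a]      # analysis low pass / high pass
g0, g1 = [a, a], [-a, a]      # matching synthesis pair

def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def down2(x):                 # keep even-indexed samples
    return x[::2]

def up2(x):                   # insert zeros between samples
    y = []
    for v in x:
        y += [v, 0.0]
    return y

x = [1.0, 3.0, -2.0, 4.0, 0.0, 5.0]
y0 = conv(up2(down2(conv(x, h0))), g0)
y1 = conv(up2(down2(conv(x, h1))), g1)
y = [u + v for u, v in zip(y0, y1)]
print([round(v, 10) for v in y[1:len(x) + 1]])  # input, delayed one sample
```

The individual branches alias badly after downsampling; it is only their sum, with properly matched synthesis filters, that cancels the aliasing and restores the signal.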

JPEG2000

JPEG2000 is the latest series of standards from the JPEG committee. The original standard for digital images (IS 10918-1, popularly referred to as JPEG) was developed 15 years ago, and with the major increase in computer technology since then, and lots of research, it was felt to be time for a new standard capable of handling many more aspects than simply compressing a digital image. JPEG2000 uses wavelet technology.

JPEG2000 compression gives better results on images (up to 20 per cent or more) than present JPEG. It can allow an image to be retained without any distortion or loss: simply sending the first part of such a lossless file to a receiver results in a lossy version appearing (like present JPEG), but continuing to transmit the file results in the fidelity getting better and better. JPEG and JPEG2000 compression methods are typically lossy, a process that permanently removes some pixel data; you can apply lossy JPEG or JPEG2000 compression to color images. For JPEG2000 compression, you can also specify lossless mode so that no pixel data is removed.

JPEG2000 is a very efficient compression algorithm. It has mainly been developed for use on the internet.

The addition of a lossless compression mode is interesting for prepress use.

Even though I was a fan of the algorithm for some time, most people I know have

reverted back to using regular JPEG compression for 3 reasons:

Compatibility with legacy systems: only recent workflows or computer applications can

deal with JPEG2000 compression. If you need to distribute data, this can be a real

showstopper.

JPEG2000 is a resource hog: compressing data requires lots of CPU-horsepower

At compression ratios below 25:1, the wavelet-based algorithm produces less blocky but somewhat less detailed images compared to regular JPEG compression. Too much compression is never a good idea.

The 7/5 wavelet filter banks are symmetric, compactly supported and bi-orthogonal. The low-pass filters of analysis and synthesis are denoted h(z) and g(z) respectively, where hi (i = 0, 1, 2, 3) and gi (i = 0, 1, 2) are the coefficients of the 7/5-tap wavelet filter banks. The relationship between the coefficients of the 7/5-tap wavelet filter banks and their lifting implementation is expressed through the lifting parameters α, β, γ and K, from which the forward lifting scheme for the 7/5-tap wavelet filters follows.

In order to perform image compression for different images and different compression ratios, we extend the system based on the JPEG2000 standard so that it supports not only the CDF 9/7 and LT 5/3 wavelet filter banks but also the 7/5 and LS 9/7 wavelet filter banks. Experiments of image compression for standard test images were performed using 5 levels of decomposition.

INTRODUCTION TO MATLAB

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:

· Algorithm development

· Data acquisition

· Scientific and engineering graphics

MATLAB is an interactive system whose basic data element is an array that does not

require dimensioning. This allows you to solve many technical computing problems, especially

those with matrix and vector formulations, in a fraction of the time it would take to write a

program in a scalar noninteractive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to

provide easy access to matrix software developed by the LINPACK and EISPACK projects.

Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of

the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university

environments, it is the standard instructional tool for introductory and advanced courses in

mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-

productivity research, development, and analysis.

Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized

technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that

extend the MATLAB environment to solve particular classes of problems. Areas in which

toolboxes are available include signal processing, control systems, neural networks, fuzzy logic,

wavelets, simulation, and many others.

This is the set of tools and facilities that help you use MATLAB functions and files. Many

of these tools are graphical user interfaces. It includes the MATLAB desktop and Command

Window, a command history, an editor and debugger, and browsers for viewing help, the

workspace, files, and the search path.

This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both "programming

in the small" to rapidly create quick and dirty throw-away programs, and "programming in the

large" to create complete large and complex application programs.

4.2.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as

annotating and printing these graphs. It includes high-level functions for two-dimensional and

three-dimensional data visualization, image processing, animation, and presentation graphics. It

also includes low-level functions that allow you to fully customize the appearance of graphics as

well as to build complete graphical user interfaces on your MATLAB applications.

This is a library that allows you to write C and Fortran programs that interact with

MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling

MATLAB as a computational engine, and for reading and writing MAT-files.

4.3.1 MATLAB DESKTOP:-

Matlab Desktop is the main Matlab application window. The desktop contains five sub

windows, the command window, the workspace browser, the current directory window, the

command history window, and one or more figure windows, which are shown only when the

user displays a graphic.

The command window is where the user types MATLAB commands and expressions at

the prompt (>>) and where the output of those commands is displayed. MATLAB defines the

workspace as the set of variables that the user creates in a work session. The workspace browser

shows these variables and some information about them. Double clicking on a variable in the

workspace browser launches the Array Editor, which can be used to obtain information and in some instances edit certain properties of the variable.

The current Directory tab above the workspace tab shows the contents of the current

directory, whose path is shown in the current directory window. For example, in the windows

operating system the path might be as follows: C:\MATLAB\Work, indicating that directory

“work” is a subdirectory of the main directory “MATLAB”, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used

paths. Clicking on the button to the right of the window allows the user to change the current

directory.

MATLAB uses a search path to find M-files and other MATLAB related files, which are

organize in directories in the computer file system. Any file run in MATLAB must reside in the

current directory or in a directory that is on search path. By default, the files supplied with

MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History Window contains a record of the commands a user has entered in

the command window, including both current and previous MATLAB sessions. Previously

entered MATLAB commands can be selected and re-executed from the command history

window by right clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

The MATLAB editor is both a text editor specialized for creating M-files and a graphical

MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in

the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor

window has numerous pull-down menus for tasks such as saving, viewing, and debugging files.

Because it performs some simple checks and also uses color to differentiate between various

elements of code, this text editor is recommended as the tool of choice for writing and editing M-

functions. To open the editor, type edit at the prompt; typing edit filename.m opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory in the search path.

The principal way to get help online is to use the MATLAB help browser, opened as a

separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by

typing help browser at the prompt in the command window. The help Browser is a web browser

integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs in the navigator pane are used to perform searches.

CONCLUSION

For instance, with the explosive growth of the Internet there is a growing need for audio compression, or data compression in general. The goals of such compression are to minimize storage memory, communication channel bandwidth, and transmission time.

The potential of wavelet-based compression in real-world scenarios is high. However, some areas of investigation remain open. A significant factor that was disregarded in this study was the quality level of the images. In a real-life scenario this could prove a crucial factor.

Notable features of the JPEG2000 compression standard are error resilience, manipulation of images at low bit rates, region-of-interest coding, lossy and lossless performance using the same coder, no iterative rate control, etc.

All these features are possible due to the adoption of the discrete wavelet transform (DWT) and intra-sub-band entropy coding along the bit planes with a binary arithmetic coder (BAC) in the core algorithm. These core blocks, namely the DWT and BAC blocks, are computationally as well as memory intensive.

The compression system we developed can support more wavelet filters than the standard JPEG2000 image compression system, and the coefficients of these wavelet filters are rational values, which reduces the complexity. The system architecture has been successfully implemented in MATLAB and its performance evaluated for a set of images.

FUTURE WORK

Future work should include investigation of more elaborate coefficient selection and

sub band weighting schemes and the (straightforward) extension to n-D images. The latter could

be applied to medical data such as 3D and 4D cardiac MRI, 3D brain MRI, ultrasound etc.

Exploiting the inherent scale properties of a WHAM is also attractive in order to obtain

computationally cheaper and more robust means of optimization. Finally, earlier work on de-

noising in the wavelet domain could also be incorporated into this framework.

Experimental Results

For Image 1:

Wavelet Name PSNR MSE

Graphical Results

Extended Jpeg 2000

6.0 BIBLIOGRAPHY:

Eddies

http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/part2/faq.html
