
An Algorithm for SAR Image Embedded Compression based on Wavelet Transform


ABSTRACT:
Synthetic Aperture Radar (SAR) image compression is important for image transmission and archiving. This paper investigates SAR image compression using the embedded zerotree wavelet (EZW) algorithm, based on the discrete wavelet transform (DWT). To address special characteristics of SAR images, such as speckle noise, a denoising step is added to the compression flow. Experiments show that this improvement reduces the speckle noise and improves both computational precision and computation time.

KEYWORDS:
Discrete wavelet transform; Embedded zerotree wavelet; Haar transform technique; Denoising/filtering process


CONTENTS

CHAPTER 1
1.1 Introduction
CHAPTER 2
2.1 Description
2.2 Block Diagram of SAR Image
2.3 SAR Image and its Importance
2.4 Interaction between Microwaves and Earth's Surface
CHAPTER 3
3.1 Transforms
3.2 Continuous Wavelet Transform
3.3 Computation of CWT
3.4 Discrete Wavelet Transform
CHAPTER 4
4.1 Scaling Function
4.2 2D Wavelet Functions
4.3 Applications of Wavelet Transforms
4.4 Progressive Transmission
4.5 Summary of Compression


CHAPTER 5
5.1 SAR Image Embedded Compression
5.2 EZW Encoding
5.3 The Zero Tree
CHAPTER 6
6.1 The Algorithm
6.2 Arithmetic Coding Principles
6.3 Arithmetic Process
6.4 Decoding Process
6.5 Introduction to MATLAB
CHAPTER 7
Future Scope
Conclusion
CHAPTER 8
Bibliography
Appendix


LIST OF DIAGRAMS

Fig 2.1 The Proposed Flow of SAR Image
Fig 2.2 SAR Antennas
Fig 2.3 Short and Long Antenna
Fig 2.4 Polarization and Incident Angle
Fig 2.5 The Wavelet Transform
Fig 3.1 Computation of CWT
Fig 3.2 1D Wavelet Transform
Fig 4.1 Implementation of 2D Wavelet Transform
Fig 4.2 Single Stage Decomposition
Fig 4.3 Multi Stage Decomposition
Fig 4.4 1D DWT Coefficients
Fig 5.1 First Stage of Transforms
Fig 5.2 Discrete Wavelet Transform Decomposition
Fig 5.3 Wavelet Coefficients in Different Subbands

Fig 6.1 System with Typical Processes for Data Compression
Fig 6.2 Cumulative Distribution of Code Values
Fig 6.3 Updating Arithmetic Coding Intervals
Fig 6.4 MATLAB
Fig 6.5 Main Screen of MATLAB

LIST OF TABLES

Table 6.1 Estimated probabilities of some letters and punctuation marks in the English language


ABBREVIATIONS:

SAR - Synthetic Aperture Radar
EZW - Embedded Zerotree Wavelet
DWT - Discrete Wavelet Transform
MSE - Mean Squared Error
PSNR - Peak Signal-to-Noise Ratio

CHAPTER-1

1.1 INTRODUCTION:
Synthetic aperture radar (SAR) is a very efficient instrument for obtaining a better understanding of the environment. SAR image products are very important and useful for remote sensing applications because they can be acquired independently of the time of day or weather conditions, and because their characteristics (wavelength, polarisation, observation angle) can be chosen according to the phenomenon under investigation. SAR image compression is therefore important in image transmission and archiving. Due to the high entropy of SAR raw data, conventional compression techniques fail to ensure acceptable performance: lossless methods do not achieve meaningful compression, while general-purpose lossy methods (e.g. JPEG) provide some degree of compression only at the price of unacceptable image quality degradation [1]. Furthermore, the presence of speckle noise in SAR images limits the visual interpretation of scenes because it obscures their content. In order to obtain reliable data interpretation and quantitative measurements, it is recommended to apply speckle filtering schemes in SAR image compression.

CHAPTER-2
2.1 DESCRIPTION:

The enhancement and use of image processing in a typical environment makes this project development more realistic. The project deals with a SAR image, which picks up noise while being acquired from remote areas for a particular application. As the image is received, the added noise can distort the original image. To obtain exact picture details of a particular target or area of concern, it is necessary to eliminate as much of the noise as possible, using different denoising techniques. After the noise is removed, the picture details still occupy a large amount of memory, so the number of pixels is reduced such that the memory size is reduced; this is, in effect, compression. As the amount of compression increases, the picture quality decreases. There is therefore a trade-off between compression and picture quality, and an optimum compression must be chosen such that the picture remains fine enough to extract the target information. In this project, SAR image compression using the embedded zerotree wavelet algorithm, based on the discrete wavelet transform (DWT), is investigated. To address special characteristics of SAR imagery, such as speckle noise, a denoising step is added to the compression flow. Experiments show that this improvement reduces the speckle noise and improves both computational precision and computation time.

2.2 BLOCK DIAGRAM OF SAR IMAGE EMBEDDED COMPRESSION


2.3 SAR IMAGE AND ITS IMPORTANCE:


Synthetic-aperture radar (SAR) is a form of radar in which multiple radar images are processed to yield higher-resolution images than would be possible by conventional means. Either a single antenna mounted on a moving platform (such as an airplane or spacecraft) is used to illuminate a target scene, or many low-directivity small stationary antennae are scattered over an area near the target area. The many echo waveforms received at the different antenna positions are post-processed to resolve the target. SAR can only be implemented by moving one or more antennae over relatively immobile targets, by placing multiple stationary antennae over a relatively large area, or by combinations thereof. SAR has seen wide application in remote sensing and mapping. In a typical SAR application, a single radar antenna is attached to the side of an aircraft. A single pulse from the antenna will be rather broad (several degrees) because diffraction requires a large antenna to produce a narrow beam. The pulse will also be broad in the vertical direction; often it will illuminate the terrain from directly beneath the aircraft out to the horizon. If the terrain is approximately flat, the time at which echoes return allows points at different distances to be distinguished. Distinguishing points along the track of the aircraft is difficult with a small antenna. However, if the amplitude and phase of the signal returning from a given piece of ground are recorded, and if the aircraft emits a series of pulses as it travels, then the results from these pulses can be combined.

Effectively, the series of observations can be combined just as if they had all been made simultaneously from a very large antenna; this process creates a synthetic aperture much larger than the length of the antenna (and much longer than the aircraft itself).

Fig: 2.2 SAR Antennas.

Combining the series of observations requires significant computational resources. It is often done at a ground station after the observation is complete, using Fourier transform techniques. The high computing speed now available allows SAR processing to be done in real time onboard SAR aircraft. The result is a map of radar reflectivity (including both amplitude and phase). The phase information is, in the simplest applications, discarded. The amplitude information contains information about ground cover, in much the same way that a black-and-white picture does. Interpretation is not simple, but a large body of experimental results has been accumulated by flying test flights over known terrain.


Image resolution of SAR is mainly proportional to the radio signal bandwidth used and, to a lesser extent, to the system precision and the particular techniques used in post-processing. Early satellites provided a resolution in the tens of meters. More recent airborne systems provide resolutions of about 10 cm, ultra-wideband systems (developed and produced in the last decade) provide resolutions of a few millimeters, and experimental terahertz SAR has provided submillimeter resolution in the laboratory. Before rapid computers were available, the processing stage was done using holographic techniques: a scale hologram interference pattern was produced directly from the analogue radar data (for example 1:1,000,000 for 0.6 meters radar). Then laser light with the same scale (in the example 0.6 micrometers) passing through the hologram would produce a terrain projection. This works because SAR is fundamentally very similar to holography with microwaves instead of light. The microwave beam sent out by the antenna illuminates an area on the ground (known as the antenna's "footprint"). In radar imaging, the recorded signal strength depends on the microwave energy backscattered from the ground targets inside this footprint. Increasing the length of the antenna will decrease the width of the footprint.

Fig: 2.3 Short and Long Antenna.

It is not feasible for a spacecraft to carry the very long antenna required for high resolution imaging of the earth's surface. To overcome this limitation, SAR capitalises on the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR) from the small antenna (10 m on the ERS satellite) it actually carries on board.

2.4 Interaction between Microwaves and Earth's Surface


When microwaves strike a surface, the proportion of energy scattered back to the sensor depends on many factors:

- Physical factors such as the dielectric constant of the surface materials, which also depends strongly on the moisture content;
- Geometric factors such as surface roughness, slopes, and the orientation of objects relative to the radar beam direction;
- The type of land cover (soil, vegetation or man-made objects);
- Microwave frequency, polarisation and incident angle.

Fig: 2.4 Microwave frequency, polarisation and incident angle

CHAPTER-3
3.1 WHAT IS A TRANSFORM? WHY DO WE NEED TRANSFORMS?

A transform is a mathematical operation that takes a function or sequence and maps it into another one. Transforms are used because:

a) The transform of a function may give additional/hidden information about the original function, which may not be available/obvious otherwise.
b) The transform of an equation may be easier to solve than the original equation (recall Laplace transforms for differential equations).
c) The transform of a function/sequence may require less storage, hence providing data compression/reduction.
d) An operation may be easier to apply on the transformed function than on the original function (recall convolution).

Mathematical transformations are applied to signals to obtain further information from the signal that is not readily available in the raw signal. Most signals in practice are time-domain signals in their raw format; that is, whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude. When we plot a time-domain signal we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most image-processing applications.

In many cases, the most distinguishing information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the frequency (spectral) components of that signal; it shows what frequencies exist in the signal. Intuitively, we all know that frequency has something to do with the rate of change of something. If something (a mathematical or physical variable, to use the technically correct term) changes rapidly, we say that it is of high frequency, whereas if this variable does not change rapidly, i.e., it changes smoothly, we say that it is of low frequency. If this variable does not change at all, then we say it has zero frequency. For example, the publication frequency of a daily newspaper is higher than that of a monthly magazine. Frequency is measured in cycles per second, or with a more common name, in "Hertz" (Hz). Now, look at the following figures: the first one is a sine wave at 3 Hz, the second one at 10 Hz, as shown below.

Fig: 2.5 THE WAVELET TRANSFORM.

The wavelet transform is a transform of this type, i.e. it provides a time-frequency representation. There are other transforms which give this information too, such as the short-time Fourier transform and Wigner distributions. The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The WT was developed as an alternative to the STFT (Short Time Fourier Transform). The advantages of the wavelet transform are:

a. It overcomes the resolution problem of the STFT by using a variable-length window.
b. Analysis windows of different lengths are used for different frequencies.
c. For the analysis of high frequencies, narrower windows are used for better time resolution.
d. For the analysis of low frequencies, wider windows are used for better frequency resolution.
e. This works well if the signal to be analyzed mainly consists of slowly varying characteristics with occasional short high-frequency bursts.
f. The Heisenberg principle still holds.
g. The function used to window the signal is called the wavelet.

Wavelet transforms are based on two properties: a) the scaling property and b) the translation property.

Translation property: translation is a time shift of the function f(t).

Scaling property: scale has a meaning similar to that of the scale of a map.
A. Large scale: overall view, long-term behavior.
B. Small scale: detail view, local behavior.

For f(a·t) with a > 0: if a > 1, contraction takes place, i.e. low scale (high frequency); if 0 < a < 1, dilation (expansion) takes place, i.e. large scale (lower frequency). For f(t/a) with a > 0, the roles are reversed: a > 1 gives dilation (large scale, lower frequency) and 0 < a < 1 gives contraction (low scale, high frequency).


3.2 Continuous Wavelet Transform:


The continuous wavelet transform of a signal x(t) is obtained using the equation

CWT(a, b) = (1/sqrt(|a|)) ∫ x(t) ψ*((t − b)/a) dt,

where a is the scale (dilation) parameter, b is the translation parameter, ψ(t) is the mother wavelet, and * denotes complex conjugation.

3.3 Computation of CWT:



Fig 3.1 COMPUTATION OF CWT
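To make the computation concrete, the following MATLAB sketch evaluates the CWT equation above by direct numerical integration. The Mexican-hat mother wavelet, the two-tone test signal, and the scale range are illustrative assumptions, not part of the original text:

fs = 100;  t = 0:1/fs:2;                % sampling grid for the test signal
x = sin(2*pi*3*t) + sin(2*pi*10*t);     % 3 Hz + 10 Hz components
psi = @(u) (1 - u.^2).*exp(-u.^2/2);    % Mexican-hat wavelet (real-valued)
scales = 1:0.5:20;
W = zeros(numel(scales), numel(t));
for i = 1:numel(scales)
    a = scales(i);
    for j = 1:numel(t)
        b = t(j);
        % W(a,b) = (1/sqrt(a)) * integral of x(t) * psi((t-b)/a) dt
        W(i,j) = trapz(t, x .* psi((t - b)/a)) / sqrt(a);
    end
end
imagesc(t, scales, abs(W)); xlabel('translation b'); ylabel('scale a');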

3.4 DISCRETE WAVELET TRANSFORM:

We need not use a uniform sampling rate for the translation parameter, since we do not need as high a time-sampling rate when the scale is high (low frequency). Consider the following sampling grid:

a = a0^j,  b = k a0^j b0,  with j, k integers.

If we use this discretization, we obtain

ψ_{j,k}(t) = a0^{-j/2} ψ(a0^{-j} t − k b0).

A common choice for a0 and b0 is a0 = 2 and b0 = 1, which lends itself to a dyadic sampling grid. Then the discrete wavelet transform (DWT) pair can be given as

d_{j,k} = ∫ x(t) ψ_{j,k}(t) dt  (analysis),
x(t) = Σ_j Σ_k d_{j,k} ψ_{j,k}(t)  (synthesis).

Consider an example, as shown below, of the wavelet transform:


Fig: 3.2 1D WAVELET TRANSFORM.


CHAPTER-4
4.1 SCALING FUNCTION:
Orthonormal dyadic discrete wavelets are also associated with scaling functions, which are used to smooth or obtain an approximation of a signal. Scaling functions have a form similar to that of the wavelet functions:

φ_{j,k}(t) = 2^{-j/2} φ(2^{-j} t − k),

where φ(t) is known as the FATHER WAVELET. Scaling functions also have the property

∫ φ_{0,0}(t) dt = 1.

(Recall that the wavelet function itself integrates to zero.) Also note that scaling functions are orthogonal to their translations, not to their dilations.

In general, for any given scaling function, the approximation coefficients can be obtained by the inner product of the signal and the scaling function:

S_{j,k} = ∫ x(t) φ_{j,k}(t) dt.

A continuous approximation of the signal at scale j can then be obtained from the discrete sum of these coefficients:

x_j(t) = Σ_k S_{j,k} φ_{j,k}(t),

and note that the approximation x_j(t) approaches x(t) as the scale becomes finer.

4.2 2D WAVELET FUNCTIONS:

Fig: 4.1 IMPLEMENTATION OF 2D WAVELET TRANSFORM.

In the DWT, we pass the time-domain signal through various high pass and low pass filters, which filter out either the high frequency or the low frequency portions of the signal. This procedure is repeated, and every time some portion of the signal corresponding to some frequencies is removed from the signal. This is the technique used for compression of an image using the wavelet transform. Here is how it works. The WT can be performed by using two elimination methods: 1) the H-elimination method and 2) the H*-elimination method. The elimination method is chosen based on the required compression. Now suppose we have a signal which has frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a high pass and a low pass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion of the signal corresponding to 0-500 Hz (the low pass portion), and the portion corresponding to 500-1000 Hz (the high pass portion). Then we take either portion (usually the low pass portion) or both, and do the same thing again. This operation is called decomposition. The figures below show single stage and multi stage decomposition in the wavelet transform.

Fig: 4.2 SINGLE STAGE DECOMPOSITION.

Fig: 4.3 MULTI STAGE DECOMPOSITION

Assuming that we have taken the low pass portion, we now have 3 sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, 500-1000 Hz.

Then we take the low pass portion again and pass it through low and high pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a pre-defined level. We then have a bunch of signals which actually represent the same signal, but each corresponding to a different frequency band. We know which signal corresponds to which frequency band, and, based on the required compression ratio, some frequencies are retained and some frequencies are skipped, as shown below. In wavelet analysis the signal is multiplied with a function, i.e. a wavelet, similar to the window function in the STFT, and the transform is computed separately for different segments of the time-domain signal. The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform. The term wavelet means small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported). The wave refers to the condition that this function is oscillatory. In terms of frequency, low frequencies (high scales) correspond to global information about a signal (which usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (which usually lasts a relatively short time). Fortunately, in practical applications, low scales (high frequencies) do not last for the entire duration of the signal, but usually appear from time to time as short bursts or spikes. The convolution operation in discrete time is defined as follows:

(x * h)[n] = Σ_k x[k] h[n − k].

The unit of frequency is of particular importance at this point. In discrete signals, frequency is expressed in terms of radians. Accordingly, the sampling frequency of the signal is equal to 2π radians in terms of radial frequency. Therefore, the highest frequency component that exists in a signal will be π radians if the signal is sampled at the Nyquist rate (which is twice the maximum frequency that exists in the signal); that is, the Nyquist rate corresponds to π rad in the discrete frequency domain. Therefore, using Hz is not appropriate for discrete signals. After passing the signal through a half band low pass filter, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π radians. Simply discarding every other sample will subsample the signal by two, and the signal will then have half the number of points. The scale of the signal is now doubled. Resolution, on the other hand, is related to the amount of information in the signal, and therefore it is affected by the filtering operations. Half band low pass filtering removes half of the frequencies, which can be interpreted as losing half of the information. Therefore, the resolution is halved after the filtering operation. Note, however, that the subsampling operation after filtering does not affect the resolution, since removing half of the spectral components from the signal makes half the number of samples redundant anyway. Half the samples can be discarded without any loss of information. In summary, the low pass filtering halves the resolution but leaves the scale unchanged. The signal is then subsampled by 2 since half of the samples are redundant. This doubles the scale. This procedure can mathematically be expressed as

y[n] = Σ_k h[k] x[2n − k].

Having said that, we now look at how the DWT is actually computed. The DWT analyzes the signal at different frequency bands with different resolutions by decomposing the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are associated with low pass and high pass filters, respectively. The decomposition of the signal into different frequency bands is simply obtained by successive high pass and low pass filtering of the time domain signal. The original signal x[n] is first passed through a half band high pass filter g[n] and a low pass filter h[n]. After the filtering, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be subsampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can mathematically be expressed as follows:

y_high[k] = Σ_n x[n] g[2k − n],
y_low[k] = Σ_n x[n] h[2k − n].

Here y_high[k] and y_low[k] are the outputs of the high pass and low pass filters, respectively, after subsampling by 2. This decomposition halves the time resolution, since only half the number of samples now characterizes the entire signal. However, this operation doubles the frequency resolution, since the frequency band of the signal now spans only half the previous frequency band, effectively reducing the uncertainty in the frequency by half. The above procedure, which is also known as subband coding, can be repeated for further decomposition. At every level, the filtering and subsampling will result in half the number of samples (and hence half the time resolution) and half the frequency band spanned (and hence double the frequency resolution). Figure 4.1 illustrates this procedure, where x[n] is the original signal to be decomposed, and h[n] and g[n] are the low pass and high pass filters, respectively. The bandwidth of the signal at every level is marked on the figure as "f".
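As a minimal sketch of one decomposition level, the following MATLAB fragment implements the two filtering equations above with Haar half-band filters; the filter choice and the random test signal are assumptions made only for illustration:

x = randn(1, 512);          % example signal of 512 samples
h = [1 1]/sqrt(2);          % half-band low pass filter (Haar)
g = [1 -1]/sqrt(2);         % half-band high pass filter (Haar)
ylow  = conv(x, h); ylow  = ylow(2:2:end);    % filter, then subsample by 2
yhigh = conv(x, g); yhigh = yhigh(2:2:end);   % level-1 detail (DWT) coefficients
% ylow now carries the lower half-band at half rate; it is filtered again
% at the next level, exactly as described above.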

Fig: 4.4 1D DWT COEFFICIENTS

The Subband Coding Algorithm

As an example, suppose that the original signal x[n] has 512 sample points, spanning a frequency band of zero to π rad/s. At the first decomposition level, the signal is passed through the high pass and low pass filters, followed by subsampling by 2. The output of the high pass filter has 256 points (hence half the time resolution), but it only spans the frequencies π/2 to π rad/s (hence double the frequency resolution). These 256 samples constitute the first level of DWT coefficients.

The output of the low pass filter also has 256 samples, but it spans the other half of the frequency band, frequencies from 0 to π/2 rad/s. This signal is then passed through the same low pass and high pass filters for further decomposition. The output of the second low pass filter followed by subsampling has 128 samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second high pass filter followed by subsampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The second high pass filtered signal constitutes the second level of DWT coefficients. This signal has half the time resolution, but twice the frequency resolution, of the first level signal. In other words, time resolution has decreased by a factor of 4, and frequency resolution has increased by a factor of 4, compared to the original signal. The low pass filter output is then filtered once again for further decomposition. This process continues until two samples are left. For this specific example there would be 8 levels of decomposition, each having half the number of samples of the previous level. The DWT of the original signal is then obtained by concatenating all coefficients starting from the last level of decomposition (the remaining two samples, in this case). The DWT will then have the same number of coefficients as the original signal. The frequencies that are most prominent in the original signal will appear as high amplitudes in the region of the DWT signal that includes those particular frequencies. The difference of this transform from the Fourier transform is that the time localization of these frequencies will not be lost. However, the time localization will have a resolution that depends on the level at which they appear. If the main information of the signal lies in the high frequencies, as happens most often, the time localization of these frequencies will be more precise, since they are characterized by more samples. If the main information lies only at very low frequencies, the time localization will not be very precise, since few samples are used to express the signal at these frequencies. This procedure in effect offers good time resolution at high frequencies, and good frequency resolution at low frequencies. Suppose we have a 256-sample-long signal sampled at 10 MHz and we wish to obtain its DWT coefficients. Since the signal is sampled at 10 MHz, the highest frequency component that exists in the signal is 5 MHz. At the first level, the signal is passed through the low pass filter h[n], and the

high pass filter g[n], the outputs of which are subsampled by two. The high pass filter output is the first level of DWT coefficients. There are 128 of them, and they represent the signal in the [2.5, 5] MHz range. These 128 samples are the last 128 samples plotted. The low pass filter output, which also has 128 samples but spans the frequency band of [0, 2.5] MHz, is further decomposed by passing it through the same h[n] and g[n]. The output of the second high pass filter is the level 2 DWT coefficients, and these 64 samples precede the 128 level 1 coefficients in the plot. The output of the second low pass filter is further decomposed, once again by passing it through the filters h[n] and g[n]. The output of the third high pass filter is the level 3 DWT coefficients. These 32 samples precede the level 2 DWT coefficients in the plot. The procedure continues until only 1 DWT coefficient can be computed at level 9. This one coefficient is the first to be plotted in the DWT plot. It is followed by 2 level 8 coefficients, 4 level 7 coefficients, 8 level 6 coefficients, 16 level 5 coefficients, 32 level 4 coefficients, 64 level 3 coefficients, 128 level 2 coefficients and finally 256 level 1 coefficients. Fewer and fewer samples are used at lower frequencies; therefore, the time resolution decreases as frequency decreases, but since the frequency interval also decreases at low frequencies, the frequency resolution increases. Obviously, the first few coefficients would not carry a whole lot of information, simply due to the greatly reduced time resolution. One area that has benefited the most from this particular property of the wavelet transform is image processing. DWT can be used to reduce the image size without losing much of the resolution. Here is how: for a given image, you can compute the DWT of, say, each row, and discard all values in the DWT that are less than a certain threshold. We then save only those DWT coefficients that are above the threshold for each row, and when we need to reconstruct the original image, we simply pad each row with as many zeros as the number of discarded coefficients, and use the inverse DWT to reconstruct each row of the original image.

We can also analyze the image at different frequency bands, and reconstruct the original image by using only the coefficients that are of a particular band.
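The row-wise thresholding scheme just described can be sketched in MATLAB as follows; the Wavelet Toolbox functions dwt/idwt, the built-in test image and the threshold value are assumptions made only for illustration:

I = double(imread('cameraman.tif'));   % assumed grayscale test image
thr = 10;                              % illustrative threshold
R = zeros(size(I));
for r = 1:size(I,1)
    [cA, cD] = dwt(I(r,:), 'haar');    % one-level DWT of one row
    cD(abs(cD) < thr) = 0;             % discard detail coefficients below threshold
    R(r,:) = idwt(cA, cD, 'haar');     % reconstruct the row from the kept values
end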

4.3 Applications of Wavelet Transforms


Wavelets have broad applications in fields such as signal processing and medical imaging. Due to time and space constraints, I will only be discussing two in this paper. The two applications most applicable to this paper are wavelet image compression and the progressive transmission of image files over the internet.

Wavelet Compression
So what is the point of all this? The point of doing the Haar wavelet transform is that areas of the original matrix that contain little variation will end up as small or zero elements in the Haar transform matrix. A matrix is considered sparse if it has a high proportion of zero entries, and sparse matrices take much less memory to store. Because we cannot expect the transformed matrices always to be sparse, we must consider wavelet compression. To perform wavelet compression we first decide on a non-negative threshold value known as ε. We next let any value in the Haar wavelet transformed matrix whose magnitude is less than ε be reset to zero. Our hope is that this will leave us with a relatively sparse matrix. If ε is equal to zero we will not modify any of the elements and therefore we will not lose any information; this is known as lossless compression. Lossy compression occurs when ε is greater than zero: because some of the elements are reset to zero, some of our original data is lost. In the case of lossless compression we are able to reverse our operations and get our original image back.

4.4 Progressive Transmission


Many people frequently download images from the internet, a few of which are not even pornographic. Wavelet transforms speed up this process considerably. When a

person clicks on an image to download it, the source computer recalls the wavelet transformed matrix from memory. It first sends the overall approximation coefficient and larger detail coefficients, and then the progressively smaller detail coefficients. As your computer receives this information it begins to reconstruct the image in progressively greater detail until the original image is fully reconstructed. This process can be interrupted at any time if the user decides that s/he does not want the image. Otherwise a user would only be able to see an image after the entire image file had been downloaded. Because a compressed image file is significantly smaller, it takes far less time to download.

4.5 Summary of Compression


Image compression using the Haar wavelet transform can be summed up in a few simple steps:

1. Convert the image into a matrix format (I).
2. Calculate the row-and-column transformed matrix (T) using Equation 1. The transformed matrix should be relatively sparse.
3. Select a threshold value ε, and replace any entry in T whose magnitude is less than ε with a zero. This will result in a sparse matrix denoted as S.

A sketch of these steps is given below.
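Since the paper's Equation 1 is not reproduced here, the following minimal MATLAB sketch assumes a one-level orthogonal Haar transform matrix W with T = W*I*W' as the row-and-column transform; the test image and the threshold are also illustrative assumptions:

I = double(imread('cameraman.tif'));    % step 1: the image as a matrix I
N = size(I,1);                          % assumes a square image with even N
W = zeros(N);
for k = 1:N/2
    W(k,     2*k-1:2*k) = [1  1]/sqrt(2);  % averaging rows (approximation)
    W(N/2+k, 2*k-1:2*k) = [1 -1]/sqrt(2);  % differencing rows (detail)
end
T = W * I * W';                         % step 2: row-and-column transform
epsilon = 5;                            % step 3: illustrative threshold
S = T .* (abs(T) >= epsilon);           % sparse matrix S
recon = W' * S * W;                     % W is orthogonal, so W' inverts it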

CHAPTER-5
5.1 SAR IMAGE EMBEDDED COMPRESSION


Although speckle noise is an inherent feature of SAR images, its presence limits the visual interpretation of scenes. So far, speckle noise has not been used to help the interpretation of SAR, although there may be some information in it. The author carried out some experiments using

conventional compression techniques alone in SAR image compression. These experiments showed that conventional compression techniques failed to ensure acceptable performance and could not weaken the phenomenon of speckle noise. In this paper, the author adopts the EZW algorithm for SAR image compression and adds a denoising step before encoding.

Speckle noise filter: SAR images are generally affected by speckle noise, which is the main factor preventing the interpretation of SAR. The speckle phenomenon results from coherent radiation and appears as a granular, signal-dependent noise whose effect is to degrade the image quality. As speckle noise is a kind of multiplicative noise, it is difficult to reduce by direct measures. Generally, we can turn this multiplicative noise into additive noise by a pretreatment with a logarithm transform; the additive noise can then be removed by standard noise-filtering techniques. In this paper, median-filter denoising is tested, and two parameters, the Equivalent Number of Looks (ENL) and the Mean Squared Error (MSE), are used to evaluate the result, as sketched below. The ENL is often used to estimate the speckle noise level in a SAR image. It is based on the mean-to-standard-deviation ratio, which is a measure of the signal-to-noise ratio. In this study, the ENL is used to measure the degree of speckle reduction: the higher the ENL, the stronger the speckle reduction. The MSE is the mean squared error between the reconstructed image and the original one: the lower the MSE, the stronger the speckle reduction.
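A minimal MATLAB sketch of this homomorphic filtering step and the two evaluation measures is shown below; the input file name, the 3 x 3 median window (medfilt2 from the Image Processing Toolbox) and the squared mean-to-standard-deviation form of the ENL are assumptions:

I = im2double(imread('sar_scene.png'));  % hypothetical SAR image file
L = log(I + eps);                        % log turns multiplicative noise additive
Lf = medfilt2(L, [3 3]);                 % median filtering of the additive noise
D = exp(Lf) - eps;                       % back to the intensity domain

enl = mean(D(:))^2 / var(D(:));          % Equivalent Number of Looks
mse = mean((I(:) - D(:)).^2);            % Mean Squared Error vs. the original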

Discrete wavelet transform: the theory of wavelets is well established [9-13]. A wavelet is a family of functions derived from a basis function ψ(t), the mother wavelet, defined in terms of two parameters, a, the dilation (scale), and b, the translation (time):

ψ_{a,b}(t) = |a|^{-1/2} ψ((t − b)/a).

The wavelet is either stretched or compressed when the scale changes. This property makes a wavelet suitable to describe different features (both broad and sharp) of the signal.

This kind of wavelet is called a discrete wavelet, and it is often used in chemical signal analysis. Multiresolution analysis (MRA) is an approach presented by Mallat [14] to implement a fast wavelet transformation. In Mallat's algorithm [14-16], the decomposition of a signal is accomplished by repeating a linear transformation involving a scaling function H (a low pass filter) and a wavelet function G (a high pass filter). In the mth scale decomposition, we obtain:

c_m = H c_{m−1},  d_m = G c_{m−1},

where c and d are the approximation (low-frequency) and detail (high-frequency) coefficients, respectively, whose lengths decrease dyadically at higher scales. A higher scale corresponds to a lower frequency and coarser resolution. In this way, the original signal is decomposed into several scales according to its component frequencies.

Reconstruction of the original signal can be performed by repeating the following linear transformation:

c_{m−1} = H* c_m + G* d_m,  m = j, j−1, ..., 1,

where j is the number of times that a signal of length 2^j can be decomposed. The discrete wavelet transform used in this paper is identical to a hierarchical subband system, where the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. To begin the decomposition, the image is divided into four subbands and critically subsampled. Each coefficient represents a spatial area corresponding to approximately a 2 × 2 area of the original image. The low frequencies represent a bandwidth approximately corresponding to 0 < |ω| < π/2, whereas the high frequencies represent the band π/2 < |ω| < π. The four subbands arise from the separable application of vertical and horizontal filters. The subbands labeled LH1, HL1 and HH1 represent the finest scale wavelet coefficients. To obtain the next coarser scale of wavelet coefficients, the subband LL1 is further decomposed and critically sampled. The process continues until some final scale is reached.

Fig: 5.1 First Stage of Transforms

Fig: 5.2 A Two-Scale Discrete Wavelet Transform

After the discrete wavelet transform has transferred the SAR image data to wavelet coefficients in the wavelet domain, the EZW encoder can work, as sketched below.
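A two-scale version of this octave-band decomposition can be sketched with the Wavelet Toolbox function dwt2 (toolbox availability, the Haar wavelet and the file name are assumptions made for illustration):

I = double(imread('sar_scene.png'));      % hypothetical SAR image
[cA1, cH1, cV1, cD1] = dwt2(I, 'haar');   % first scale: LL1 plus three detail subbands
[cA2, cH2, cV2, cD2] = dwt2(cA1, 'haar'); % LL1 decomposed again: next coarser scale
% cA2 is the coarse approximation; the detail subbands hold the wavelet
% coefficients that the EZW encoder will scan from coarse to fine.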

How Images are Stored


Before we can understand how to manipulate an image, it is important to understand exactly how the computer stores the image. For a 256 x 256 pixel gray scale image, the image is stored as a 256 x 256 matrix, with each element of the matrix being a number ranging from zero (for black) to some positive whole number (for white). We can use this matrix, and some linear algebra to maximize compression while maintaining a suitable level of detail.
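As a quick illustration in MATLAB (the built-in test image is an assumption; any grayscale file behaves the same):

I = imread('cameraman.tif');   % a 256 x 256 uint8 matrix
size(I)                        % returns 256 256
I(1,1)                         % one pixel: an intensity between 0 and 255
Id = double(I);                % convert to double for linear-algebra work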

5.2 EZW encoding:


When searching through the wavelet literature for image compression schemes it is almost impossible not to note Shapiro's Embedded Zerotree Wavelet encoder, or EZW encoder for short. An EZW encoder is an encoder specially designed for use with wavelet transforms, which explains why it has the word wavelet in its name. The EZW encoder was originally designed to operate on images (2D signals) but it can also be used on signals of other dimensions. The EZW encoder is based on progressive encoding to compress an image into a bit stream with increasing accuracy. This means that when more bits are added to the stream, the decoded image will contain more detail, a property similar to JPEG encoded images. It is also similar to the representation of a number like π: every digit we add increases the accuracy of the number, but we can stop at any accuracy we like. Progressive encoding is also known as embedded encoding, which explains the E in EZW. This leaves us with the Z. This letter is a bit more complicated to explain, but I will give it a try in the next paragraph. Coding an image using the EZW scheme, together with some optimizations, results in a remarkably effective image compressor with the property that the compressed data stream can have any bit rate desired. Any bit rate is only possible if there is information loss somewhere, so the compressor is lossy. However, lossless compression is also possible with an EZW encoder, but of course with less spectacular results.

5.3 The zero tree:


The EZW encoder is based on two important observations:

1. Natural images in general have a low pass spectrum. When an image is wavelet transformed, the energy in the subbands decreases as the scale decreases (low scale means high resolution), so the wavelet coefficients will, on average, be smaller in the higher subbands than in the lower subbands. This shows that progressive encoding is a very natural choice for compressing wavelet transformed images, since the higher subbands only add detail.
2. Large wavelet coefficients are more important than smaller wavelet coefficients.

These two observations are exploited by the EZW encoding scheme by coding the coefficients in decreasing order, in several passes. For every pass a threshold is chosen against which all the coefficients are measured. If a wavelet coefficient is larger than the threshold it is encoded and removed from the image; if it is smaller it is left for the next pass. When all the wavelet coefficients have been visited the threshold is lowered and the image is scanned again to add more detail to the already encoded image. This process is repeated until all the wavelet coefficients have been encoded completely or another criterion has been satisfied (a maximum bit rate, for instance). The trick is now to use the dependency between the wavelet coefficients across different scales to efficiently encode large parts of the image which are below the current threshold. It is here that the zerotree enters. So let me now add some detail to the foregoing. (Like most explanations, this one is progressive.)

A wavelet transform transforms a signal from the time domain to the joint time-scale domain. This means that the wavelet coefficients are two-dimensional. If we want to compress the transformed signal we have to code not only the coefficient values, but also their position in time. When the signal is an image, the position in time is better expressed as the position in space. After wavelet transforming an image we can represent it using trees because of the subsampling that is performed in the transform. A coefficient in a low subband can be thought of as having four descendants in the next higher subband (see Fig 5.3). The four descendants each also have four descendants in the next higher subband, and we see a quad-tree emerge: every root has four leaves.

We can now give a definition of the zerotree. A zerotree is a quad-tree of which all nodes are equal to or smaller than the root, where the root itself has to be smaller than the threshold against which the wavelet coefficients are currently being measured. The tree is coded with a single symbol and reconstructed by the decoder as a quad-tree filled with zeroes. The EZW encoder exploits the zerotree based on the observation that wavelet coefficients decrease with scale. It assumes that there will be a very high probability that all the coefficients in a quad-tree will be smaller than a certain threshold if the root is smaller than this threshold. If this is the case then the whole tree can be coded with a single zerotree symbol. Now if the image is scanned in a predefined order, going from high scale to low, many positions are implicitly coded through the use of zerotree symbols. Of course the

zerotree rule will be violated often, but as it turns out in practice, the probability is still very high in general. The price to pay is the addition of the zerotree symbol to our code alphabet.
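The zerotree test can be sketched as a recursive MATLAB function. The parent-to-children index mapping (r,c) -> (2r-1:2r, 2c-1:2c) assumes the standard pyramid layout of the coefficient matrix C, and the function is given for illustration only:

function tf = iszerotree(C, r, c, T)
    % true if C(r,c) and all its quad-tree descendants are below T in magnitude
    if abs(C(r,c)) >= T
        tf = false; return;
    end
    tf = true;
    if 2*r <= size(C,1) && 2*c <= size(C,2)
        for rr = 2*r-1:2*r
            for cc = 2*c-1:2*c
                tf = tf && iszerotree(C, rr, cc, T);  % recurse on the four children
            end
        end
    end
end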

How does it work?


Now that we have all the terms defined we can start compressing. Let's begin with the encoding of the coefficients in decreasing order. A very direct approach is simply to transmit the values of the coefficients in decreasing order, but this is not very efficient. This way a lot of bits are spent on the coefficient values and we do not use the fact that we know that the coefficients are in decreasing order. A better approach is to use a threshold and only signal to the decoder whether the values are larger or smaller than the threshold. If we also transmit the threshold to the decoder, it can already reconstruct quite a lot. To arrive at a perfect reconstruction we repeat the process after lowering the threshold, until the threshold has become smaller than the smallest coefficient we wanted to transmit. We can make this process much more efficient by subtracting the threshold from the values that were larger than the threshold. This results in a bit stream with increasing accuracy which can be perfectly reconstructed by the decoder. If we use a predetermined sequence of thresholds then we do not have to transmit them to the decoder, which saves some bandwidth. If the predetermined sequence is a sequence of powers of two it is called bitplane coding, since the thresholds in this case correspond to the bits in the binary representation of the coefficients. (Without the threshold information the decoder would not be able to reconstruct the encoded signal, although it could perfectly reconstruct the transmitted bit stream.) It is in the encoding of the positions that the efficient encoders are separated from the inefficient ones. As mentioned before, EZW encoding uses a predefined scan order to encode the position of the wavelet coefficients. Through the use of zerotrees many positions are encoded implicitly. Several scan orders are possible, as long as the lower subbands are completely scanned before going on to the higher subbands.

CHAPTER-6
6.1 The algorithm
The EZW output stream has to start with some information to synchronize the decoder. The minimum information required by the decoder is the number of wavelet transform levels used and the initial threshold, if we assume that the same wavelet transform is always used. Additionally we can send the image dimensions and the image mean. Sending the image mean is useful if we remove it from the image before coding: after imperfect reconstruction the decoder can then replace the imperfect mean by the original mean, which can increase the PSNR significantly. The first step in the EZW coding algorithm is to determine the initial threshold. If we adopt bitplane coding then our initial threshold t0 will be

t0 = 2^floor(log2(MAX|γ(x,y)|)),

where MAX(.) means the maximum coefficient value in the image and γ(x,y) denotes the coefficient at position (x,y). With this threshold we enter the main coding loop (in a C-like language):

threshold = initial_threshold;
do {
    dominant_pass(image);
    subordinate_pass(image);
    threshold = threshold / 2;
} while (threshold > minimum_threshold);

We see that two passes are used to code the image. In the first pass, the dominant pass, the image is scanned and a symbol is outputted for every coefficient. If the coefficient is larger than the threshold a P (positive) is coded, if the coefficient is smaller than minus the threshold an N (negative) is coded.

If the coefficient is the root of a zerotree then a T (zerotree) is coded and finally, if the coefficient is smaller than the threshold but is not the root of a zerotree, then a Z (isolated zero) is coded. This happens when there is a coefficient larger than the threshold somewhere in the tree. The effect of using the N and P codes is that when a coefficient is found to be larger than the threshold (in absolute value or magnitude), its two most significant bits are outputted (if we forget about sign extension). Note that in order to determine whether a coefficient is the root of a zerotree or an isolated zero, we will have to scan the whole quad-tree. Clearly this will take time. Also, to prevent outputting codes for coefficients in already identified zerotrees, we will have to keep track of them; this means memory for bookkeeping. Finally, all the coefficients that are larger in absolute value than the current threshold are extracted and placed without their sign on the subordinate list, and their positions in the image are filled with zeroes. This will prevent them from being coded again. The second pass, the subordinate pass, is the refinement pass. This gives rise to some juggling with uncertainty intervals, but it boils down to outputting the next most significant bit of all the coefficients on the subordinate list. The list can be ordered (in such a way that the decoder can do the same) so that the largest coefficients are again transmitted first; we have not implemented this sorting, since the gain is very small but the cost very high. The main loop ends when the threshold reaches a minimum value. For integer coefficients this minimum value equals zero and the divide by two can be replaced by a shift right operation. If we add another ending condition based on the number of bits outputted by the arithmetic coder, then we can meet any target bit rate exactly without doing too much work.

We summarize the above with the following code fragments, starting with the dominant pass

/* Dominant pass */
initialize_fifo();
while (fifo_not_empty) {
    get_coded_coefficient_from_fifo();
    if (coefficient was coded as P, N or Z) {
        code_next_scan_coefficient();
        put_coded_coefficient_in_fifo();
        if (coefficient was coded as P or N) {
            add abs(coefficient) to subordinate list;
            set coefficient position to zero;
        }
    }
}

Here we have used a FIFO to keep track of the identified zerotrees. Before entering this loop we have to initialize the FIFO by manually adding the first quad-tree root coefficients to it; depending on the level at which we start, this means coding and putting at least three roots in the FIFO. The call of code_next_scan_coefficient() checks the next uncoded coefficient in the image, as indicated by the scanning order, and outputs a P, N, T or Z. After coding, the coefficient is put in the FIFO. This will automatically result in a Morton scan order. Thus, the FIFO contains only coefficients which have already been coded, i.e. a P, N, T or Z has already been outputted for these coefficients. Finally, if a coefficient was coded as a P or N we remove it from the image and place it on the subordinate list.

This loop will always end as long as we make sure that the coefficients at the last level, i.e. the highest subbands (HH1, HL1 and LH1) are coded as zerotrees.

After the dominant pass follows the subordinate pass:

/* Subordinate pass */
subordinate_threshold = current_threshold / 2;
for (all elements on subordinate list) {
    if (coefficient > subordinate_threshold) {
        output a one;
        coefficient = coefficient - subordinate_threshold;
    } else {
        output a zero;
    }
}

6.2 Arithmetic Coding Principles: Data Compression and Arithmetic Coding

Compression applications employ a wide variety of techniques and have quite different degrees of complexity, but they share some common processes. Figure 6.1 shows a diagram with typical processes used for data compression. These processes depend on the data type, and the blocks in Figure 6.1 may appear in a different order or be combined. Numerical processing, like predictive coding and linear transforms, is normally used for waveform signals, like images and audio [20, 35, 36, 48, 55]. Logical processing consists of changing the data to a form more suited for compression, like run-lengths, zero-trees, set-partitioning information, and dictionary entries [3, 20, 38, 40, 41, 44, 47, 55]. The next stage, source modeling, is used to account for variations in the statistical properties of the data. It is responsible for gathering statistics and identifying data contexts that make the source models more accurate.

What most compression systems have in common is the fact that the final process is entropy coding, which is the process of representing information in the most compact form. It may be responsible for doing most of the compression work, or it may just complement what has been accomplished by previous stages. When we consider all the different entropy-coding methods and their possible applications in compression systems, arithmetic coding stands out in terms of elegance, effectiveness and versatility, since it is able to work most efficiently in the largest number of circumstances and purposes. Among its most desirable features we have the following. When applied to independent and identically distributed (i.i.d.) sources, the compression of each symbol is provably optimal. It is effective in a wide range of situations and compression ratios. The same arithmetic coding implementation can effectively code all the diverse data created by the different processes of Figure 6.1, such as modeling parameters, transform coefficients, signaling, etc. It simplifies automatic modeling of complex sources, yielding near-optimal or significantly improved compression for sources that are not i.i.d.

Fig: 6.1 System with Typical Processes for Data Compression

Its main process is arithmetic, which is supported with ever-increasing efficiency by all general-purpose processors and digital signal processors (CPUs, DSPs). It is suited for use as a "compression black-box" by those who are not coding experts or do not want to implement the coding algorithm themselves.

Even with all these advantages, arithmetic coding has not always been popular and well understood. The complexity of arithmetic operations was excessive for coding applications. Patents covered the most efficient implementations, and royalties and the fear of patent infringement discouraged the use of arithmetic coding in commercial products. Efficient implementations were difficult to understand. However, these issues have now mostly been overcome. First, the relative efficiency of computer arithmetic improved dramatically, and new techniques avoid the most expensive operations. Second, some of the patents have expired (e.g., [11, 16]) or became obsolete. Finally, we do not need to worry so much about the complexity-reduction details that obscure the inherent simplicity of the method. Current computational resources allow us to implement simple, efficient, and royalty-free arithmetic coding.

Example

A data source can be illustrated with English text: each symbol from this source is a single byte representing a character. This data alphabet contains M = 256 symbols, and the symbol numbers are defined by the ASCII standard. The probabilities of the symbols can be estimated by gathering statistics from a large number of English texts. Table 6.1 shows some characters, their ASCII symbol values, and their estimated probabilities. It also shows the number of bits required to code symbol s in an optimal manner, −log2 p(s). From these numbers we conclude that, if data symbols in English text were i.i.d., then the best possible text compression ratio would be about 2:1 (4 bits/symbol). Specialized text compression methods [8, 10, 29, 41] can yield significantly better compression ratios because they exploit the statistical dependence between letters. This first example shows that our initial assumptions about data sources are rarely found in practical cases. More commonly, we have the following issues:
1. The source symbols are not identically distributed.
2. The symbols in the data sequence are not independent (even if uncorrelated).

3. We can only estimate the probability values, the statistical dependence between symbols, and how they change in time. However, in the next sections we show that the generalization of arithmetic coding to time-varying sources is straightforward, and we explain how to address all these practical issues.

Table: 6.1 Estimated probabilities of some letters and punctuation marks in the English language.

6.3 Arithmetic Coding


Encoding Process
In this section we first introduce the notation and equations that describe arithmetic encoding, followed by a detailed example. Fundamentally, the arithmetic encoding process consists of creating a sequence of nested intervals in the form

Φ_k(S) = [α_k, β_k),  k = 0, 1, ..., N,

Fig: 6.2 Cumulative distribution of code values.

where S is the source data sequence, and α_k and β_k are real numbers such that 0 ≤ α_k ≤ α_{k+1} and β_{k+1} ≤ β_k ≤ 1. For a simpler way to describe arithmetic coding we represent intervals in the form |b, l⟩, where b is called the base or starting point of the interval, and l the length of the interval. The relationship between the traditional and the new interval notation is

|b, l⟩ = [α, β)  if b = α and l = β − α.  (1.7)

The intervals used during the arithmetic coding process are, in this new notation, defined by the set of recursive equations [5, 13]

Φ_0(S) = |b_0, l_0⟩ = |0, 1⟩,  (1.8)
Φ_k(S) = |b_k, l_k⟩ = |b_{k−1} + c(s_k) l_{k−1}, p(s_k) l_{k−1}⟩,  k = 1, 2, ..., N,  (1.9)

where p(s) is the probability of symbol s and c(s) its cumulative distribution. The properties of the intervals guarantee that 0 ≤ b_k ≤ b_{k+1} < 1 and 0 < l_{k+1} < l_k ≤ 1, i.e. they define a dynamic system corresponding to this set of equations. We later explain how to choose, at the end of the coding process, a code value in the final interval, i.e., v(S) ∈ Φ_N(S). The coding process defined by (1.8) and (1.9), also called Elias coding, was first described in [5]. Our convention of representing an interval using its base and length has been used before.
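A minimal MATLAB sketch of the encoding recursion (1.8)-(1.9) follows; the three-symbol alphabet, its probabilities and the example sequence are assumptions chosen for illustration:

p = [0.5 0.3 0.2];              % symbol probabilities p(s)
c = [0 0.5 0.8];                % cumulative distribution c(s)
S = [2 1 3 1];                  % example source sequence
b = 0; l = 1;                   % initial interval |0, 1>
for k = 1:numel(S)
    s = S(k);
    b = b + c(s)*l;             % b_k = b_{k-1} + c(s_k) * l_{k-1}
    l = p(s)*l;                 % l_k = p(s_k) * l_{k-1}
end
v = b + l/2;                    % any value in [b, b+l) codes S; here 0.6275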

6.4 Decoding Process


In arithmetic coding, the decoded sequence is determined solely by the code value $v$ of the compressed sequence. For that reason, we represent the decoded sequence as

$$\hat{S}(v) = \{\hat{s}_1(v), \hat{s}_2(v), \ldots, \hat{s}_N(v)\}. \qquad (1.11)$$

We now show the decoding process by which any code value $v \in \Phi_N(S)$ can be used for decoding the correct sequence (i.e., $\hat{S}(v) = S$). We present the set of recursive equations that implement decoding, followed by a practical example that provides an intuitive idea of how the decoding process works, and why it is correct. The decoding process recovers the data symbols in the same sequence in which they were coded. Formally,

to find the numerical solution, we define a sequence of normalized code values $\{\tilde{v}_1, \tilde{v}_2, \ldots, \tilde{v}_N\}$. Starting with $\tilde{v}_1 = v$, we sequentially find $\hat{s}_k(v)$ from $\tilde{v}_k$, and then we compute $\tilde{v}_{k+1}$ from $\hat{s}_k(v)$ and $\tilde{v}_k$. The recursion formulas are

$$\hat{s}_k(v) = \{\, s : c(s) \le \tilde{v}_k < c(s+1) \,\}, \qquad \tilde{v}_{k+1} = \frac{\tilde{v}_k - c(\hat{s}_k(v))}{p(\hat{s}_k(v))}, \qquad k = 1, 2, \ldots, N.$$

A mathematically equivalent decoding method, which we later show to be necessary when working with fixed-precision arithmetic, recovers the sequence of intervals created by the encoder and searches for the correct symbol $\hat{s}_k(v)$ in each of these intervals. It is defined by

$$\hat{s}_k(v) = \{\, s : b_{k-1} + c(s)\, l_{k-1} \le v < b_{k-1} + c(s+1)\, l_{k-1} \,\},$$

followed by the interval updating of (1.9).

The combination of recursion (1.14) with recursion (1.17) yields the complete interval-based decoding procedure.
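As an illustration, here is a minimal MATLAB sketch of the normalized-code-value decoder described above, using the same assumed toy alphabet as the encoding sketch; again this is floating point and conceptual, not a fixed-precision implementation.

% Minimal arithmetic decoding sketch using normalized code values (illustrative).
% Assumptions: same toy alphabet as the encoding sketch; the sequence length N
% is known to the decoder, and v is the transmitted code value.
p = [0.2 0.5 0.3];
c = [0 cumsum(p)];              % padded with a trailing 1 so c(s+1) exists for s = M
N = 4;
v = 0.2835;                     % code value produced by the encoding sketch

vk = v;                         % v~_1 = v
S_hat = zeros(1, N);
for k = 1:N
    s = find(c(1:end-1) <= vk & vk < c(2:end), 1);  % c(s) <= v~_k < c(s+1)
    S_hat(k) = s;
    vk = (vk - c(s)) / p(s);    % v~_{k+1} = (v~_k - c(s)) / p(s)
end
disp(S_hat)                     % prints 2 1 3 2, the originally encoded sequence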

Arithmetic Operations

Presently, additions and multiplications are not much slower than other operations such as bit shifts and comparisons. Divisions, on the other hand, have a much longer latency (number of clock cycles required to perform the operation) and cannot be pipelined with other operations (like special circuitry for multiply-add) [58, 59, 60, 61].

First, let us consider the arithmetic required by fixed coders, and by adaptive coders with periodic updating. The arithmetic coding recursion in the forms (1.9) and (1.40) requires two multiplications and, respectively, one and three additions. Thus, except on processors with very slow multiplication, the encoding requirements are quite reasonable. Decoding is more complex because of the effort needed to find the correct decoding interval. If we use (1.16) for the interval search, we have one extra division plus several comparisons. The multiplication-only version (1.38) requires several multiplications and comparisons, but with a reasonably efficient symbol search it is faster than (1.16), and it eliminates the need for division when multiplications are approximated.

Adaptive coding can be considerably slower because of the divisions in (2.11) and (2.12), which can add up to two divisions per interval update. In Algorithms 11, 12 and 13 we show how to avoid one of these divisions. Furthermore, updating the cumulative distribution may require a significant number of additions. Note that performing only periodic updates of the cumulative distribution significantly reduces this adaptation overhead, because the divisions and additions are then not required for every coded symbol. For example, a single division, for the computation of $1/\tilde{C}(M)$, plus a number of multiplications and additions proportional to M, may be needed per update. If the update occurs every M coded symbols, then the average number of divisions per coded symbol is proportional to 1/M, and the average numbers of multiplications and additions per symbol are constant.
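To illustrate the cost argument, the following MATLAB sketch shows one assumed form of adaptive probability estimation with periodic updating. It is a hypothetical scheme written for this explanation, not the paper's Algorithms 11-13, and all variable names are ours: counting occurrences costs one addition per symbol, while the single division and the rebuild of the cumulative distribution happen only once every T symbols.

% Hypothetical sketch of adaptive estimation with periodic updating.
M = 256;                           % alphabet size
data = ceil(M * rand(1, 1000));    % assumed stand-in for the symbol stream
counts = ones(1, M);               % occurrence counts (uniform start)
T = M;                             % update period: refresh estimates every T symbols
inv_total = 1 / sum(counts);       % one division per update...
p = counts * inv_total;            % ...then M multiplications
c = [0 cumsum(p(1:end-1))];        % about M additions to build c

for k = 1:numel(data)
    s = data(k);
    % ... encode s with the current p and c (see the encoding sketch) ...
    counts(s) = counts(s) + 1;     % one addition per coded symbol
    if mod(k, T) == 0              % amortized: divisions occur every T symbols only
        inv_total = 1 / sum(counts);
        p = counts * inv_total;
        c = [0 cumsum(p(1:end-1))];
    end
end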

6.5 INTRODUCTION TO MATLAB 7.0

Fig 6.4 MATLAB

The software used in this project is MATLAB 7.0. MATLAB stands for Matrix Laboratory. The very first version of MATLAB, written at the University of New Mexico and Stanford University in the late 1970s, was intended for use in matrix theory, linear algebra and numerical analysis. Later on, with the addition of several toolboxes, the capabilities of MATLAB were expanded, and today it is a very powerful tool in the hands of an engineer. It offers a powerful programming language, excellent graphics, and a wide range of expert knowledge. MATLAB is published by, and a trademark of, The MathWorks, Inc.

The focus in MATLAB is on computation, not mathematics: symbolic expressions and manipulations are not possible (except through the optional Symbolic Toolbox, a clever interface to Maple). All results are not only numerical but inexact, thanks to the rounding errors inherent in computer arithmetic. The limitation to numerical computation can be seen as a drawback, but it is a source of strength too: MATLAB is much preferred to Maple, Mathematica, and the like when it comes to numerics. On the other hand, compared to other numerically oriented languages like C++ and FORTRAN, MATLAB is much easier to use and comes with a huge standard library. The unfavorable comparison here is a gap in execution speed. This gap is not always dramatic, and it can often be narrowed or closed with good MATLAB programming. Moreover, one can link other codes into MATLAB, or vice versa, and MATLAB now optionally supports parallel computing. Still, MATLAB is usually not the tool of choice for maximum-performance computing.

Typical uses include:
a) Math and computation.
b) Algorithm development.
c) Modeling, simulation and prototyping.
d) Data analysis, exploration and visualization.
e) Scientific and engineering graphics.
f) Application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array. Perhaps the easiest way to visualize MATLAB is to think of it as a full-featured calculator. Like a basic calculator, it does simple math such as addition, subtraction, multiplication and division. Like a scientific calculator, it handles square roots, complex numbers, logarithms and trigonometric operations such as sine, cosine and tangent. Like a programmable calculator, it can be used to store and retrieve data; you can create, execute and save sequences of commands, and you can also make comparisons and control the order in which commands are executed. Finally, as a powerful calculator it allows you to perform matrix algebra, manipulate polynomials and plot data. When you start MATLAB, the following window appears:

Fig: 6.5 Main screen of MATLAB

When you start MATLAB, you get a multilayered desktop. The layout and behavior of the desktop and its components are highly customizable (and may in fact already be customized for your site). The component that is the heart of MATLAB is called the Command Window, located on the right by default. Here you can give MATLAB commands typed at the prompt, >>. Unlike FORTRAN and other compiled computer languages, MATLAB is an interpreted environment: you give a command, and MATLAB executes it right away before asking for another. At the top left you can see the Current Directory.
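For example, a short session at the prompt might look like the following (a made-up illustration; the printed numbers are simply the result of the commands shown):

>> A = [1 2; 3 4];      % create a 2x2 matrix; the semicolon suppresses output
>> b = A \ [5; 6]       % solve the linear system A*b = [5; 6]
b =
   -4.0000
    4.5000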

In general, MATLAB is aware only of files in the current directory (folder) and on its path, which can be customized. For simple problems, entering commands at the MATLAB prompt is fast and efficient. However, as the number of commands increases, or when you wish to change the value of a variable and then re-evaluate all the other variables, typing at the command prompt becomes tedious. MATLAB provides a logical solution: place all your commands in a text file and then tell MATLAB to evaluate them. These files are called script files, or simply M-files. To create an M-file, choose NEW from the File menu and then choose M-file, or click the appropriate icon in the command window. A new editor window will then appear.
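As a simple illustration, a script along the following lines (a hypothetical example written for this description, not part of the project code) could be saved as example_script.m in the current directory and run by typing example_script at the prompt:

% example_script.m - a hypothetical M-file illustrating the workflow above.
x = 0:0.01:2*pi;          % vector of sample points
y = sin(x) .* exp(-x/4);  % a damped sine evaluated at those points
plot(x, y)                % plot the result
title('Damped sine')      % label the figure
xlabel('x'), ylabel('y')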

CHAPTER-7 SCOPE FOR ENHANCEMENT


1. SAR image compression is important in image transmission and archiving.
2. It is useful for remote sensing applications.
3. It has general-purpose application in picture retrieval in typical digital cameras.
4. Future work on this project would be encoding using SPIHT coding, which gives progressive transmission and further improved results.

CONCLUSION
In this paper, the author proposed a method for SAR image embedded compression. The method introduces speckle noise filtering into SAR image compression. Experiments carried out show that the method can reduce the speckle noise and improve both the computational precision and the computation time.

CHAPTER-8 BIBLIOGRAPHY
1) Amir Said and William A. Pearlman, "A New, Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, no. 3, June 1996.
2) M. Rabbani and P. W. Jones, Digital Image Compression Techniques, Bellingham, WA: SPIE Opt. Eng. Press, 1991.
3) Jean-Luc Dugelay, Stéphane Roche, Christian Rey and Gwenaël Doerr, "Still-Image Watermarking Robust to Local Geometric Distortions," IEEE Trans. on Image Processing, vol. 15, no. 9, Sept. 2006.
4) The Engineer's Ultimate Guide to Wavelet Analysis by Robi Polikar.
5) Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods.
6) Crash Course in MATLAB by Tobin A. Driscoll.
7) Digital Watermarking of Images and Wavelets by Alexandru Isar, Electronics and Telecommunications Faculty, "Politehnica" University, Timișoara.
8) Introduction to Graphical User Interface (GUI) MATLAB 6.5 by Refaat Yousef Al Ashi, Ahmed Al Ameri and Prof. Abdulla Ismail.
9) Digital Watermarking Technology by Dr. Martin Kutter and Dr. Frédéric Jordan.
10) SPIHT Image Compression (SPIHT description web page).

11) Watermarking Applications and Their Properties by Ingemar J. Cox, Matt L. Miller and Jeffrey A. Bloom, NEC Research Institute.

12) Watermarking of Digital Images by Dr. Azizah A. Manaf and Akram M. M. Zeki, University Technology Malaysia.
13) Fundamentals of Digital Image Processing by A. K. Jain.
14) Digital Image Processing: A Remote Sensing Perspective by John R. Jensen.
15) T. M. Lillesand and R. W. Kiefer, Remote Sensing and Image Interpretation, New York: Wiley, 1994.
16) Wavelet Transforms: Introduction to Theory and Applications by Raghuveer M. Rao and Ajit S. Bopardikar, Addison-Wesley, 1998.
17) Wavelets and Filter Banks by Gilbert Strang and Truong Nguyen, Wellesley-Cambridge Press, 1997.
18) Introduction to Wavelets and Wavelet Transforms: A Primer by C. S. Burrus, R. A. Gopinath and H. Guo.
19) Digital Image Processing by Milan Sonka.
20) Digital Image Processing: A Remote Sensing Perspective by John R. Jensen, 2nd edition.
