
Report for Lab Challenge 1

Digital Image Processing (IT523)


Date: 6th October, 2006

Submitted by:
Atul Bhatia (200611024)
Hina R. Shah (200611032)
Nitin Rawat (200611025)
Problem 1:

Aim: To generate images according to the given criteria and to discuss their DFTs.

First image:

Fig 1.1: image (fs = 10*fmax)        Fig 1.2: DFT of the image

Fig 1.3: image (fs = fmax)           Fig 1.4: DFT of the image

The first image of the first function is shown in the figure. The image comes out as vertical lines of
alternating white and black. The sampling frequency in the first case is chosen to be 10 times the
highest frequency component in the image, while in the second case the sampling frequency is chosen
to be just equal to the maximum frequency, which is why the second image does not come out properly.
The DFTs of the two images are shown on the right-hand side. Since the original image consists of
vertical lines, its DFT is a horizontal line, while in the second case the DFT is a single point because
the image itself is faulty.
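A minimal sketch of how such an image and its DFT could be generated is given below. It assumes the
stripes come from a thresholded cosine of frequency fmax = 10 cycles per unit length; the exact values
and generation method used in the lab code may differ.

% Sketch: vertical-stripe image sampled at fs = 10*fmax and at fs = fmax
fmax = 10;                                % assumed maximum frequency (cycles per unit length)
for fs = [10*fmax, fmax]                  % adequate sampling vs. critical sampling
    t   = 0:1/fs:1;                       % sample positions along x
    row = 255 * (cos(2*pi*fmax*t) > 0);   % one row: alternating white and black
    I   = repmat(row, numel(t), 1);       % identical rows -> vertical stripes
    D   = fftshift(abs(fft2(I)));         % centred DFT magnitude
    figure;
    subplot(1,2,1); imshow(uint8(I));     title(sprintf('image, fs = %d', fs));
    subplot(1,2,2); imshow(D, []);        title('DFT of the image');
end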

Second Image:
The image is generated using fx and fy as the input frequencies. fx and fy can take any value from
{1, 10, 20}. We have chosen a sampling frequency which may be varied. Shown below is the image
obtained by taking fx = 10, fy = 20 and a sampling frequency of 10 times the maximum frequency.
The following figure shows this image and its DFT. The DFT of the image is rotated by some angle
because the stripe pattern in the image is itself rotated by that angle.

Fig 1.5: image                       Fig 1.6: DFT of the image
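A minimal sketch of how such an oriented pattern and its DFT could be generated (the thresholded
cosine below is only an assumed way of producing the stripes; the lab code may differ):

% Sketch: oriented stripe image from a 2-D pattern with frequencies fx and fy
fx = 10;  fy = 20;                            % chosen from {1, 10, 20}
fs = 10 * max(fx, fy);                        % sampling frequency = 10 times the maximum frequency
[x, y] = meshgrid(0:1/fs:1, 0:1/fs:1);
I = 255 * (cos(2*pi*(fx*x + fy*y)) > 0);      % stripes rotated according to fx and fy
D = fftshift(abs(fft2(I)));                   % centred DFT magnitude
figure;
subplot(1,2,1); imshow(uint8(I));  title('image, fx = 10, fy = 20');
subplot(1,2,2); imshow(D, []);     title('DFT of the image');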

Third Image:
The image of the third part is shown. This image is rotated less than the previous one, and hence its
DFT is also shifted by a smaller amount than the previous one. The DFTs of the two images come out
the same, since the two images are the same, one simply being a shifted version of the other.

Fig 1.7: image                       Fig 1.8: DFT of the image


Aliasing:

We know that band-limited signals are signals with limited bandwidth. If such a signal is sampled with
a sampling frequency of at least twice the maximum frequency present in the signal, then the original
signal can be reconstructed correctly. When we undersample, i.e. the sampling frequency is less than
twice the maximum frequency, the high-frequency components act like low-frequency components and
alter the amplitude of the low-frequency components. This is known as aliasing. When undersampled
signals are reconstructed, they appear as low-frequency signals and are reconstructed as such.
In images, aliasing shows up in the form of moire patterns. This effect can be noticed when we view a
person wearing a tweed jacket or a herringbone kind of pattern. If the number of scan lines on a TV, or
the number of pixels on a computer screen, is less than required, then we can see the moire pattern.
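As a small one-dimensional illustration (the frequencies 9 Hz and 1 Hz and the 10 Hz sampling rate
are chosen only for this sketch), a 9 Hz sinusoid sampled at 10 Hz produces exactly the negated
samples of a 1 Hz sinusoid, so on reconstruction it appears as a 1 Hz signal:

% A 9 Hz sinusoid sampled at only 10 Hz (below the Nyquist rate of 18 Hz) aliases to 1 Hz
fs = 10;                      % sampling frequency in Hz
t  = 0:1/fs:1;                % sample instants over one second
x9 = sin(2*pi*9*t);           % undersampled 9 Hz signal
x1 = sin(2*pi*1*t);           % 1 Hz signal sampled at the same instants
max(abs(x9 + x1))             % ~0: the 9 Hz samples are exactly the negated 1 Hz samples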
Problem 2

Part 1:

Aim: To generate checkerboard images, each of size 100 x 100 with 256 gray levels. The block size is
varied over 2 x 2, 5 x 5, 10 x 10 and 50 x 50 pixels.
Images and their corresponding DFTs are given below:

2 x 2 size blocks 5 x 5 size blocks 10 x 10 size blocks 50 x 50 size blocks
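A minimal sketch of how one of these checkerboards and its DFT could be generated (the 10 x 10 block
size is shown; the other sizes follow by changing blk, a name introduced only for this sketch):

% Sketch: 100 x 100 checkerboard with 10 x 10 blocks and 256 gray levels (values 0 and 255)
blk = 10;                                        % block size in pixels
[x, y] = meshgrid(0:99, 0:99);
I = 255 * mod(floor(x/blk) + floor(y/blk), 2);   % alternate 0 / 255 blocks
D = fftshift(abs(fft2(I)));                      % centred DFT magnitude
figure;
subplot(1,2,1); imshow(uint8(I));          title('10 x 10 size blocks');
subplot(1,2,2); imshow(log(1 + D), []);    title('DFT of the checkerboard');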

Part 2:

Here we have an image of size 50 x 50, i.e. the number of rows and columns is 50 each. Now we define
a function such that

I(x,y) = 255   if x > n1 and y > n2
       = 0     elsewhere.

Now, we take the values of n1 and n2 as input from the user, which makes the program user-friendly.
The code is given below.

% Generate a 50 x 50 image that is 255 where x > n1 and y > n2, and 0 elsewhere
I = zeros(50,50);
n1 = input('enter value for n1 between 1 and 50');
n2 = input('enter value for n2 between 1 and 50');
for i = 1:50
    for j = 1:50
        if i > n1 && j > n2
            I(i,j) = 255;
        else
            I(i,j) = 0;
        end
    end
end
imwrite(uint8(I), 'nit.bmp');        % save the generated image
X = fft2(I);                         % 2-D DFT of the image
Y = fftshift(X);                     % shift the DC term to the centre of the spectrum
imwrite(uint8(abs(Y)), 'nit1.bmp');  % save the (clipped) DFT magnitude

Here we have the image for n1 and n2 equal to 20 each.

The DFT of the above image is shown alongside.

If we define this function for the one-dimensional case,

I(x) = 255   for x > n1
     = 0     elsewhere,

then for some arbitrary value of n1 we can generate the function. This will, however, not be an image, since we
have only one dimension. The resulting function is a step function whose value f(x) becomes 255 once x exceeds
n1. The magnitude of its DFT is concentrated around a single point (the DC term). Comparing this with the result
shown above, we can see that the two-dimensional DFT is not a single point but is spread along both frequency axes.
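A short sketch of the one-dimensional case for comparison (n1 = 20 is assumed, matching the figure above):

% Sketch: one-dimensional step function and its DFT magnitude
n1 = 20;
f  = zeros(1, 50);
f(n1+1:end) = 255;                 % 255 for x > n1, 0 elsewhere
F  = fftshift(abs(fft(f)));        % centred DFT magnitude
figure;
subplot(2,1,1); stem(f);   title('I(x): step at n1 = 20');
subplot(2,1,2); stem(F);   title('|DFT|: concentrated around the DC term');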

Part 3:

Aim: To generate the image according to the required criteria. To find the directional derivative of the
image in the direction normal to the boundary of the two regions. To find DFT for the original and the
differentiated images.

Fig: Generated image and differentiated image.


The directional derivative has to be taken normal to the boundary of the two regions. Hence the mask taken
is as follows:
[  0  1  1
  -1  0  1
  -1 -1  0 ]

The gradient here is taken in the direction normal to the boundary between the two regions. The edge is
detected here and is shown in the form of white pixels.
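A minimal sketch of how this mask could be applied (the test image used here, two regions separated by the
main diagonal, is only an assumption; conv2 is used for the convolution):

% Sketch: directional derivative normal to a diagonal boundary between two regions
[x, y] = meshgrid(1:100, 1:100);
I = 255 * (x > y);                         % assumed test image: two regions with a diagonal edge
h = [ 0  1  1;                             % mask oriented normal to the boundary
     -1  0  1;
     -1 -1  0];
G = conv2(double(I), h, 'same');           % directional derivative (gradient) image
figure;
subplot(1,3,1); imshow(uint8(I));          title('generated image');
subplot(1,3,2); imshow(abs(G), []);        title('differentiated image');
subplot(1,3,3); imshow(log(1 + fftshift(abs(fft2(G)))), []);  title('DFT of the derivative');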

Problem 3:

An image has been generated which looks like an image produced by a faulty camera; it is shown here.
Salt-and-pepper noise has been generated as per the specifications, simply by adding the noise to the
original image. We have drawn the histograms of both the original and the noisy image. The histogram
of the original image has values that are smoothly distributed, while the histogram of the noisy image
shows large spikes, since the noise introduced has saturated values of either 0 or 255.
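A minimal sketch of how the noisy image and the two histograms shown below could be produced
(imnoise with an assumed noise density of 0.05 is used here; the lab code may have added the noise
manually, and cameraman.tif is only an assumed sample image):

% Sketch: add salt-and-pepper noise and compare the histograms
I = imread('cameraman.tif');               % assumed sample image
J = imnoise(I, 'salt & pepper', 0.05);     % 5% of the pixels set to 0 or 255
figure;
subplot(2,2,1); imshow(I);       title('original image');
subplot(2,2,2); imshow(J);       title('noisy image');
subplot(2,2,3); imhist(I);       title('histogram of original image');
subplot(2,2,4); imhist(J);       title('histogram of noisy image');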

original image noisy image

Histogram of original image


Histogram of a noisy image

Spikes due to the noise appear at level 0 as well as at level 255. Both spikes have the same height,
showing that the probabilities of black and white noise are equal.

We have also plotted the FFT of the images. The FFT of the original image is distributed over the whole
frequency plane, while the FFT of the noisy image appears less spread out.

DFT of original image DFT of noisy image


Problem 4:

Here is the sample image that we have considered for this problem, along with its Fourier transform.

Fig: sample image and histogram of the original image

Fig: noisy image with variance 100, its CDF, histogram and PDF

Fig: noisy image with variance 200, its CDF, histogram and PDF

Fig: noisy image with variance 400, its CDF, histogram and PDF

Fig: noisy image with variance 500, its CDF, histogram and PDF
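A minimal sketch of how one of these noisy images and its histogram, PDF and CDF could be produced
(variance 100 is shown; cameraman.tif is only an assumed sample image, and gray levels are taken to
lie in [0, 255]):

% Sketch: add zero-mean Gaussian noise of variance 100 and estimate histogram, PDF and CDF
I = double(imread('cameraman.tif'));           % assumed sample image, values 0..255
J = I + sqrt(100) * randn(size(I));            % Gaussian noise, variance 100
J = uint8(min(max(J, 0), 255));                % clip back to the valid gray-level range
p = imhist(J) / numel(J);                      % normalised histogram ~ PDF
c = cumsum(p);                                 % cumulative distribution function (CDF)
figure;
subplot(2,2,1); imshow(J);          title('noisy image, variance 100');
subplot(2,2,2); imhist(J);          title('histogram');
subplot(2,2,3); plot(0:255, p);     title('PDF');
subplot(2,2,4); plot(0:255, c);     title('CDF');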

DIFFERENT TYPES OF NOISE IN THE IMAGE


Image noise corresponds to visible grain or particles present in the image. In the context of digital
image processing, the term noise usually refers to the high frequency random perturbations of color
values of size close to 1 pixel, which are generally caused by the electronic noise in the input device
sensor and circuitry (e.g. scanner, digital camera). There are other artifacts of similar appearance which
are referred to with different terms to underline their origin (e.g. scanner streaks, film grain).

1. SALT AND PEPPER NOISE:- The PDF of salt and pepper noise or impulse noise or bipolar noise
is given by
p(z) = pa   for z = a
     = pb   for z = b
     = 0    otherwise

If b > a, gray level b will appear as a light dot in the image. Conversely, level a will appear as a dark
dot. If either pa or pb is zero, the impulse noise is called unipolar. If neither probability is zero, and
especially if they are approximately equal, impulse noise will resemble salt and pepper granules
randomly distributed over the image. For this reason, bipolar noise is also called salt-and-pepper noise.
Shot and spike noise are also terms used to refer to this type of noise. Noise impulses can be negative
or positive. Scaling is usually part of the image digitizing process. Because impulse corruption is usually
large compared with the strength of the image signal, impulse noise generally is digitized as extreme
values in an image. Thus, the assumption usually is that a and b are saturated values, in the sense that
they are equal to the minimum and maximum allowed values in the digitized image.
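A small sketch of drawing impulse noise from this PDF over a test image (pa = pb = 0.05, a = 0, b = 255
and a mid-gray test image are assumptions made only for illustration):

% Sketch: bipolar (salt-and-pepper) impulse noise drawn from p(z)
Pa = 0.05;  Pb = 0.05;                 % probabilities of pepper (a) and salt (b)
a  = 0;     b  = 255;                  % saturated minimum and maximum gray levels
I  = 128 * ones(100);                  % assumed mid-gray test image
r  = rand(size(I));                    % one uniform random number per pixel
I(r < Pa) = a;                         % with probability Pa -> dark (pepper) dot
I(r >= Pa & r < Pa + Pb) = b;          % with probability Pb -> light (salt) dot
imshow(uint8(I));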

In salt-and-pepper noise (also known as random noise or independent noise), pixels in the image are
vastly different in color from their surrounding pixels. The defining characteristic is that the color of a
noisy pixel bears no relation to the color of surrounding pixels. Generally this type of noise will only
affect a small number of image pixels. When viewed, the image contains dark and white dots, hence the
term salt and pepper noise. Typical sources include flecks of dust on the lens or inside the camera, or
with digital cameras, faulty CCD elements.

2. GAUSSIAN NOISE:-

In Gaussian noise (dependent noise), an amount of noise is added to every part of the picture. Each
pixel in the image will be changed from its original value by a (usually) small amount. Taking a plot of
the amount of distortion of a pixel against the frequency with which it occurs produces a Gaussian
distribution of noise.

3. RF Noise

A failure of the RF shielding that prevents external noise from getting into the detector is the cause of
an RF noise artifact. The form of the artifact in the image depends on the source of noise and where it
is introduced into the signal. Much can be learned about the source of RF noise by inverse Fourier
transforming the image. For example, a bright spot somewhere in the image can be caused by a single
frequency leaking into the signal. Such artifacts may appear, for instance, as diagonal lines or as
horizontal lines across the image. To possibly fix the problem before calling a service representative,
check that the scan room door is closed and sealing properly.
4. POISSON NOISE
Many imaging systems rely on photon detection as the basis of image formation. One of the major sources
of error in these systems is Poisson noise, caused by the quantum nature of the photon-detection process.
Unlike additive Gaussian noise, Poisson noise is signal dependent, and consequently separating signal
from noise is a very difficult task. In most current Poisson noise reduction algorithms, the noisy signal is
first pre-processed to approximate Gaussian noise and then denoised by a conventional Gaussian
denoising algorithm. One reported approach, based on the property that Poisson noise adapts to the
intensity of the signal, uses an optimal ICA-domain filter for Poisson noise removal; experiments on
simulated data show that it greatly improves denoising performance.

5. SENSOR NOISE
Each pixel in a camera sensor contains one or more light sensitive photodiodes which convert the
incoming light (photons) into an electrical signal which is processed into the color value of the pixel in
the final image. If the same pixel would be exposed several times by the same amount of light, the
resulting color values would not be identical but have small statistical variations, called "noise". Even
without incoming light, the electrical activity of the sensor itself will generate some signal, the
equivalent of the background hiss of audio equipment which is switched on without playing any music.
This additional signal is "noisy" because it varies per pixel (and over time) and increases with the
temperature, and will add to the overall image noise. It is called the "noise floor". The output of a pixel
has to be larger than the noise floor in order to be significant (i.e. to be distinguishable from noise).
Problem 5:
Premultiplying an image by (-1)^(x+y) while calculating the DFT is basically an application of the
TRANSLATION property of the Fourier transform.

The translation property states that:

f(x,y) * exp(j*2*pi*(u0*x/M + v0*y/N)) <=> F(u-u0, v-v0)        (1)

and

f(x-x0, y-y0) <=> F(u,v) * exp(-j*2*pi*(u*x0/M + v*y0/N))

where the double arrow is used to designate a Fourier transform pair. When u0 = M/2 and v0 = N/2, it
follows that

exp(j*2*pi*(u0*x/M + v0*y/N)) = exp(j*pi*(x+y))
                              = (-1)^(x+y)

In this case Eq. (1) becomes

f(x,y) * (-1)^(x+y) <=> F(u - M/2, v - N/2)

and similarly

f(x - M/2, y - N/2) <=> F(u,v) * (-1)^(u+v)

We see that the above equations are used for centering the transform. These results are based on the
variables u and v having values in the ranges [0, M-1] and [0, N-1], respectively. In the computer
implementation these variables go from 1 to M and from 1 to N. In other words, multiplying f(x,y) by
(-1)^(x+y) shifts the origin of F(u,v) to the frequency coordinates (M/2, N/2), which is the center of the
M x N area occupied by the 2-D DFT.

Centering the Fourier transform in this way moves the zero-frequency (DC) term from the corners of the
spectrum to its center, which is useful when the interesting content would otherwise appear at the
extremes; with the spectrum centralized, the image can be analyzed and processed further more conveniently.
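A small sketch verifying this equivalence numerically (any even-sized test image will do; a random
64 x 64 array is used here only as an assumption):

% Sketch: multiplying by (-1)^(x+y) before fft2 is equivalent to applying fftshift afterwards
f = rand(64, 64);                          % arbitrary test image (M = N = 64, even)
[x, y] = meshgrid(0:63, 0:63);
F1 = fft2(f .* (-1).^(x + y));             % centre by premultiplication
F2 = fftshift(fft2(f));                    % centre with fftshift
max(abs(F1(:) - F2(:)))                    % ~0 (up to numerical round-off)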

References:

1. Digital Image Processing – Rafael C. Gonzalez
2. Fundamentals of Digital Image Processing – A. K. Jain
3. www.mathworks.com
4. MATLAB Documentation
