
EC2029- Digital Image Processing

ECE

VII Semester

UNIT I
DIGITAL IMAGE FUNDAMENTALS
1.1. Define Image.
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
1.2. Define Image Sampling.
Digitization of the spatial coordinates (x,y) is called image sampling. To be suitable for computer processing, an image function f(x,y) must be digitized both spatially and in magnitude.
1.3. Define Quantization.
Digitizing the amplitude values is called quantization. The quality of a digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization.
1.4. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image. An image will have high contrast if the dynamic range is high, and a dull, washed-out gray look if the dynamic range is low.
1.5. Define Mach band effect.
The spatial interaction of luminance from an object and its surround creates a phenomenon called the Mach band effect.
1.6. Define Brightness.
Brightness of an object is the perceived luminance of the surround. Two objects with different surroundings can have identical luminance but different brightness.

1.7. What is meant by Tapered Quantization?
If gray levels in a certain range occur frequently while others occur rarely, the quantization levels are finely spaced in this range and coarsely spaced outside of it. This method is sometimes called tapered quantization.
1.8. What do you mean by Gray level?
Gray level refers to a scalar measure of intensity that ranges from black, to grays, and finally to white.


1.9. What do you mean by Color model?
A color model is a specification of a 3-D coordinate system and a subspace within that system where each color is represented by a single point.
1.10. List the hardware oriented color models.
The hardware oriented color models are as follows,
i.RGB model
ii.CMY model
iii.YIQ model
iv.HSI model
1.11. What are Hue and Saturation?
Hue is a color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
1.12. List the applications of color models.
The applications of color models are,
i.RGB model --- used for color monitors and color video cameras
ii.CMY model --- used for color printing
iii.HSI model --- used for color image processing
iv.YIQ model --- used for color picture transmission
1.13. What is Chromatic Adaptation?
The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The color of the flag will appear to shift in hue toward the red component, cyan.
1.14. Define Resolutions.
Resolution is defined as the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
1.15. Write the M X N digital image in compact matrix form?
f(x,y) = [ f(0,0)      f(0,1)      ...  f(0,N-1)
           f(1,0)      f(1,1)      ...  f(1,N-1)
           ...         ...              ...
           f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) ]


1.16. Write the expression to find the number of bits to store a digital image?
The number of bits required to store a digital image is
b=M X N X k
When M=N, this equation becomes
b=N^2k
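As a quick worked check (numbers chosen for illustration, not taken from the text): a 1024 x 1024 image quantized to 256 gray levels needs k = 8 bits per pixel, so b = 1024 x 1024 x 8 = 8,388,608 bits, i.e. about 1 MB of storage.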
1.17. What is meant by pixel?
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as pixels, image elements, picture elements, or pels.
1.18. Define digital image.
A digital image is an image f(x,y) that has been discretized both in spatial coordinates and in brightness.
1.19. List the steps involved in digital image processing.
The steps involved in digital image processing are,
i.Image Acquisition.
ii.Preprocessing.
iii.Segmentation.
iv.Representation and description.
v.Recognition and interpretation.
1.20. What is recognition and interpretation?
Recognition is a process that assigns a label to an object based on the information provided by its descriptors. Interpretation means assigning meaning to a recognized object.
1.21. Specify the elements of DIP system.
The elements of DIP system are,
i.Image acquisition.
ii.Storage.
iii.Processing.
iv.Communication.
v.Display.
1.22. List the categories of digital storage.
The categories of digital storage are,
i.Short term storage for use during processing.
ii.Online storage for relatively fast recall.
iii.Archival storage for infrequent access.
1.23. Write the two types of light receptors.
The two types of light receptors are,
i.Cones.
ii.Rods.


1.24. Differentiate photopic and scotopic vision.


Photopic vision: The human being can resolve fine details with the cones because each cone is connected to its own nerve end. This is also known as bright-light vision.
Scotopic vision: Several rods are connected to one nerve end, so it gives the overall picture of the image. This is also known as dim-light vision.
1.25. How are cones and rods distributed in the retina?
In each eye, cones are in the range 6-7 million, and rods are in the range
75-150 million.
1.26. Define subjective brightness and adaption.
Subjective brightness means intensity as perceived by the human visual system. Brightness adaptation means the human visual system can operate only from the scotopic to the glare limit. It cannot operate over the whole range simultaneously. It accomplishes this large variation by changes in its overall sensitivity.
1.27. Define weber ratio.
The ratio of the increment of illumination to the background illumination is known as the Weber ratio, i.e. (Ic/I). If the ratio (Ic/I) is small, then a small percentage change in intensity is needed, i.e. good brightness adaptation. If the ratio (Ic/I) is large, then a large percentage change in intensity is needed, i.e. poor brightness adaptation.
1.28. What is mach band effect?
Mach band effect means the intensity of the stripes is constant, yet we perceive a brightness pattern that varies near the boundaries; these bands are called the Mach band effect.
1.29. What is simultaneous contrast?
A region's perceived brightness does not depend only on its intensity but also on its background. All the centre squares have exactly the same intensity; however, they appear to the eye to become darker as the background becomes lighter.
1.30. What do you mean by Zooming of digital images?
Zooming may be viewed as over-sampling. It involves the creation of new pixel locations and the assignment of gray levels to those new locations.
1.31. What do you mean by shrinking of digital images?
Shrinking may be viewed as under-sampling. To shrink an image by one half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it.
1.32. Define the term Radiance.
Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).


1.33. Define the term Luminance.


Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.
1.34. What is Image Transform?
An image can be expanded in terms of a discrete set of basis arrays called basis images. These basis images can be generated by unitary matrices. Alternatively, a given N x N image can be viewed as an N^2 x 1 vector. An image transform provides a set of coordinates or basis vectors for the vector space.
1.35. List the applications of transform.
The applications of transforms are,
i.To reduce band width
ii.To reduce redundancy
iii.To extract feature.
1.36. Give the Conditions for perfect transform.
The conditions for a perfect transform are,
Transpose of the matrix = Inverse of the matrix.
Orthogonality.
1.37. What are the properties of unitary transform?
The properties of unitary transform are,
i.The determinant and the Eigen values of a unitary matrix have unity magnitude.
ii.The entropy of a random vector is preserved under a unitary transformation.
iii.Entropy is a measure of average information.
1.38. Write the expression of one-dimensional discrete Fourier transforms.
The expressions of the one-dimensional discrete Fourier transform are:
Forward transform
The sequence x(n) is given by x(n) = { x0, x1, x2, ..., xN-1 }.
         N-1
X(k) =   Σ   x(n) e^(-j 2*pi*n*k/N) ;   k = 0, 1, 2, ..., N-1
         n=0
Inverse transform
               N-1
x(n) = (1/N)   Σ   X(k) e^(j 2*pi*n*k/N) ;   n = 0, 1, 2, ..., N-1
               k=0
1.39. List the properties of twiddle factor.
The properties of the twiddle factor are,
i. Periodicity: W_N^(K+N) = W_N^K
ii. Symmetry: W_N^(K+N/2) = -W_N^K
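A minimal sketch (using NumPy; the sample values are made up for illustration) that evaluates the forward DFT sum of 1.38 directly and checks it against the library FFT:

    import numpy as np

    x = np.array([1.0, 2.0, 0.0, -1.0])   # sample sequence x(n), illustrative values
    N = len(x)
    n = np.arange(N)

    # X(k) = sum over n of x(n) * exp(-j*2*pi*n*k/N)
    X = np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N)) for k in range(N)])

    print(np.allclose(X, np.fft.fft(x)))   # True: the direct sum matches numpy.fft.fft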
1.40. Give the Properties of one-dimensional DFT.
The properties of one-dimensional DFT are,
i. The DFT and unitary DFT matrices are symmetric.
ii. The extensions of the DFT and unitary DFT of a sequence and their inverse transforms are periodic with period N.
iii. The DFT or unitary DFT of a real sequence is conjugate symmetric about N/2.
1.41. Give the Properties of two-dimensional DFT.
The properties of two-dimensional DFT are,
i.Symmetric
ii.Periodic extensions
iii.Sampled Fourier transform
iv.Conjugate symmetry.
1.42. Write the properties of cosine transform:
The properties of cosine transform are as follows,
i.Real & orthogonal.
ii.Fast transform.
iii.Has excellent energy compaction for highly correlated data.
1.43. Write the properties of Hadamard transform
The properties of hadamard transform are as follows,
i.Hadamard transform contains any one value.
ii.No multiplications are required in the transform calculations.
iii.The number of additions or subtractions required can be reduced from N^2 to about N log2 N.
iv.Very good energy compaction for highly correlated images.
1.44. Define Haar transform.
The Haar functions are defined on a continuous interval x in [0,1] and for k = 0, 1, ..., N-1, where N = 2^n. The integer k can be uniquely decomposed as k = 2^p + q - 1.
1.45. Write the properties of Haar transform.
The properties of Haar transform are,
i.Haar transform is real and orthogonal.
ii.Haar transform is a very fast transform.
iii.Haar transform has very poor energy compaction for images.
iv.The basis vectors of the Haar matrix are sequency ordered.
1.46. Write the Properties of Slant transform.
The properties of Slant transform are,
i.Slant transform is real and orthogonal.
ii.Slant transform is a very fast transform.
iii.Slant transform has very good energy compaction for images.
iv.The basis vectors of the Slant matrix are not sequency ordered.


1.47. Define KL Transform.
KL Transform is optimal in the sense that it minimizes the mean square error between the vectors X and their approximations X^. This leads to the idea of using the Eigen vectors corresponding to the largest Eigen values. It is also known as the principal component transform.
1.48. Justify that KLT is an optimal transform.
KLT is optimal because the mean square error between the reconstructed image and the original image is minimum, and the mean value of the transformed image is zero, so that the transform coefficients are uncorrelated.
1.49. Write any four applications of DIP.
The applications of DIP are,
i.Remote sensing
ii.Image transmission and storage for business applications
iii.Medical imaging
iv.Astronomy
1.50. What is the effect of Mach band pattern?
The intensity or brightness pattern is perceived as a darker stripe in region D and a brighter stripe in region B. This effect is called the Mach band pattern or effect.



UNIT II
IMAGE ENHANCEMENT
2.1. What is Image Enhancement?
Image enhancement is to process an image so that the output is more suitable for a specific application.
2.2. Name the categories of Image Enhancement and explain.
The categories of Image Enhancement are,
i.Spatial domain
ii.Frequency domain
Spatial domain: It refers to the image plane itself and is based on direct manipulation of the pixels of an image.
Frequency domain: Techniques are based on modifying the Fourier transform of an image.
2.3. What do you mean by Point processing?
Image enhancement in which the result at any point depends only on the gray level at that point is often referred to as point processing.
2.4. What is gray level slicing?
Highlighting a specific range of gray levels in an image is referred to as gray level slicing. It is used in satellite imagery and X-ray images.
2.5. What do you mean by Mask or Kernels.
A Mask is a small two-dimensional array, in which the values of the mask coefficients determine the nature of the process, such as image sharpening.
2.6. What is Image Negative?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression
s = L-1-r, where s is the output pixel and r is the input pixel.
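A minimal sketch of this transformation for an 8-bit image (NumPy; the pixel values are made up for illustration):

    import numpy as np

    L = 256                                               # gray levels in an 8-bit image
    r = np.array([[0, 100], [200, 255]], dtype=np.uint8)  # toy input image
    s = (L - 1 - r.astype(int)).astype(np.uint8)          # s = L-1-r applied to every pixel
    print(s)                                              # [[255 155] [ 55   0]]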
2.7. Define Histogram.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
2.8. What is histogram equalization?
It is a technique used to obtain a linear (uniform) histogram. It is also known as histogram linearization. The condition for a uniform histogram is Ps(s) = 1.
2.9. What is contrast stretching?
Contrast stretching produces an image of higher contrast than the original by darkening the levels below m and brightening the levels above m in the image.
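A minimal sketch (NumPy, illustrative values) of computing the histogram h(rk) = nk defined in 2.7 for an 8-bit image:

    import numpy as np

    img = np.array([[0, 0, 1], [2, 1, 0]], dtype=np.uint8)  # toy 8-bit image
    h = np.bincount(img.ravel(), minlength=256)             # h[rk] = nk, pixel count at each level rk
    print(h[:3])                                             # [3 2 1]: three 0s, two 1s, one 2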


2.10. What is spatial filtering?


Spatial filtering is the process of moving the filter mask from point to point in an image. For a linear spatial filter, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
2.11. Define averaging filters.
The output of a smoothing linear spatial filter is the average of the pixels contained in the neighborhood of the filter mask. These filters are called averaging filters.
2.12. What is a Median filter?
The median filter replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel.
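A minimal sketch of median filtering with a 3 x 3 neighborhood (this assumes SciPy's ndimage module is available; the values are illustrative only):

    import numpy as np
    from scipy import ndimage

    img = np.array([[10, 10, 10],
                    [10, 255, 10],           # isolated impulse ("salt" noise)
                    [10, 10, 10]], dtype=np.uint8)

    filtered = ndimage.median_filter(img, size=3)   # median of each 3x3 neighborhood
    print(filtered[1, 1])                           # 10: the impulse is removed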
2.13. What is maximum filter and minimum filter?
The 100th percentile filter is the maximum filter, used for finding the brightest points in an image. The 0th percentile filter is the minimum filter, used for finding the darkest points in an image.
2.14. Define high boost filter.
A high boost filtered image is defined as
HBF = A (original image) - LPF
    = (A-1) (original image) + (original image - LPF)
HBF = (A-1) (original image) + HPF
2.15. State the condition of transformation function s=T(r).
i. T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1, and
ii. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
2.16. Write the applications of sharpening filters.
The applications of sharpening filters are as follows,
i. Electronic printing and medical imaging to industrial application
ii. Autonomous target detection in smart weapons.
2.17. Name the different types of derivative filters.
The different types of derivative filters are,
i. Prewitt operators
ii. Sobel operators
iii. Roberts cross-gradient operators.
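For reference, the standard 3 x 3 Sobel masks (textbook values, not listed in this answer) that approximate the derivatives in the x and y directions; convolving the image with each mask and combining the two responses gives the gradient magnitude used for edge detection:

    import numpy as np

    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])    # responds to vertical edges (derivative in x)

    sobel_y = np.array([[-1, -2, -1],
                        [ 0,  0,  0],
                        [ 1,  2,  1]])  # responds to horizontal edges (derivative in y)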


UNIT III
IMAGE RESTORATION
3.1. Define Restoration.
Restoration is a process of reconstructing or recovering an image that has been degraded, using a priori knowledge of the degradation phenomenon. Thus restoration techniques are oriented towards modeling the degradation and applying the inverse process in order to recover the original image.
3.2. How is a degradation process modeled?
A system operator H, which together with an additive white noise term n(x,y), operates on an input image f(x,y) to produce a degraded image g(x,y); that is, g(x,y) = H[f(x,y)] + n(x,y).
3.3. What is homogeneity property and what is the significance of this property?
Homogeneity property states that
H[k1 f1(x,y)] = k1 H[f1(x,y)]
where H = operator
k1 = constant
f1(x,y) = input image.
It says that the response to a constant multiple of any input is equal to the response to that input multiplied by the same constant.
3.4. What is meant by image restoration?
Restoration attempts to reconstruct or recover an image that has been degraded by using a clear knowledge of the degrading phenomenon.
3.5. Define circulant matrix.
A square matrix in which each row is a circular shift of the preceding row, and the first row is a circular shift of the last row, is called a circulant matrix.
3.6. What is the concept behind algebraic approach to restoration?
The algebraic approach is the concept of seeking an estimate of f, denoted f^, that minimizes a predefined criterion of performance, where f is the image.
3.7. Why is the image subjected to wiener filtering?
This method of filtering considers images and noise as random processes, and the objective is to find an estimate f^ of the uncorrupted image f such that the mean square error between them is minimized. So the image is subjected to wiener filtering to minimize the mean square error.
3.8. Define spatial transformation.
Spatial transformation is defined as the rearrangement of pixels on an image plane.
3.9. Define Gray-level interpolation.


Gray-level interpolation deals with the assignment of gray levels to pixels in the spatially transformed image.
3.10. Give one example for the principal source of noise.
The principal sources of noise in digital images arise during image acquisition (digitization) and/or transmission. The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition, and by the quality of the sensing elements. The factors are light levels and sensor temperature.
3.11. When does the degradation model satisfy position invariant property?
An operator having the input-output relationship g(x,y) = H[f(x,y)] is said to be position invariant if H[f(x-α, y-β)] = g(x-α, y-β) for any f(x,y) and any α and β. This definition indicates that the response at any point in the image depends only on the value of the input at that point, not on its position.
3.12. Why the restoration is called as unconstrained restoration?
In the absence of any knowledge about the noise n, a meaningful criterion function is to seek an f^ such that H f^ approximates g in a least-squares sense, by assuming the noise term is as small as possible.
Where H = system operator.
f^ = estimated input image.
g = degraded image.
3.13. Which is the most frequent method to overcome the difficulty of formulating the spatial relocation of pixels?
The most frequent method is the use of tiepoints, which are subsets of pixels whose locations in the input (distorted) and output (corrected) images are known precisely.
3.14. What are the three methods of estimating the degradation function?
The three methods of estimating the degradation function are,
i.Observation
ii.Experimentation
iii.Mathematical modeling.
3.15. How is the blur caused by uniform linear motion removed?
An image f(x,y) undergoes planar motion in the x- and y-directions, and x0(t) and y0(t) are the time-varying components of motion. The total exposure at any point of the recording medium (digital memory) is obtained by integrating the instantaneous exposure over the time interval during which the imaging system shutter is open.
3.16. What is inverse filtering?
The simplest approach to restoration is direct inverse filtering, where an estimate F^(u,v) of the transform of the original image is obtained simply by dividing the transform of the degraded image G(u,v) by the degradation function H(u,v):
F^(u,v) = G(u,v) / H(u,v)
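A minimal sketch of direct inverse filtering in the frequency domain (NumPy; the blur kernel and the epsilon guard are illustrative assumptions, not part of this answer):

    import numpy as np

    def inverse_filter(g, h, eps=1e-3):
        """Estimate f from g via F^(u,v) = G(u,v) / H(u,v), guarding against tiny H."""
        G = np.fft.fft2(g)                         # transform of the degraded image
        H = np.fft.fft2(h, s=g.shape)              # degradation function, padded to image size
        H = np.where(np.abs(H) < eps, eps, H)      # avoid division by near-zero values
        return np.real(np.fft.ifft2(G / H))

    # toy usage: degrade a flat image with a 3x3 averaging kernel, then invert
    f = np.ones((8, 8))
    h = np.ones((3, 3)) / 9.0
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
    print(np.allclose(inverse_filter(g, h), f, atol=1e-6))   # True for this noiseless case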


3.17. Give the difference between Enhancement and Restoration.


Enhancement technique is based primarily on the pleasing aspects it might present to the viewer. For example: contrast stretching. Whereas removal of image blur by applying a deblurring function is considered a restoration technique.
3.18. Define the degradation phenomena.
Image restoration is a process that attempts to reconstruct or recover an image that has been degraded, by using some clear knowledge of the degradation phenomena. Degradation may be in the form of sensor noise, relative object-camera motion, etc.
3.19. What is unconstrained restoration?
It is also known as the least-squares error approach.
n = g - Hf
To estimate the original image f^, the noise n has to be minimized, and f^ = g/H.
3.20. Draw the image observation model.

3.21. What is blind image restoration?

Degradation may be difficult to measure or may be time varying in an unpredictable manner. In such cases, information about the degradation must be extracted from the observed image either explicitly or implicitly. This task is called blind image restoration.


UNIT IV
DATA COMPRESSION
4.1. What is Data Compression?
Data compression requires the identification and extraction of source redundancy. In other words, data compression seeks to reduce the number of bits used to store or transmit information.
4.2. What are two main types of Data compression?
The two main types of data compression are lossless compression and lossy compression.
4.3. What is Lossless compression?
Lossless compression can recover the exact original data after compression. It is used mainly for compressing database records, spreadsheets or word processing files, where exact replication of the original is essential.

4.4. What is lossy compression?


Lossy compression will result in a certain loss of accuracy in exchange for a substantial increase in compression. Lossy compression is more effective when used to compress graphic images and digitized voice, where losses outside visual or aural perception can be tolerated.
4.5. What is the Need For Compression?
In terms of storage, the capacity of a storage device can be increased effectively with methods that compress a body of data on its way to the storage device and decompress it when it is retrieved. In terms of communications, the bandwidth of a digital communication link can be effectively increased by compressing data at the sending end and decompressing data at the receiving end. At any given time, the ability of the Internet to transfer data is fixed. Thus, if data can effectively be compressed wherever possible, significant improvements of data throughput can be achieved. Many files can be combined into one compressed document, making sending easier.
4.6. What are different Compression Methods?
The different compression methods are,
i.Run Length Encoding (RLE)
ii.Arithmetic coding
iii.Huffman coding
iv.Transform coding
4.7. What is run length coding?
Run-length Encoding, or RLE, is a technique used to reduce the size of a repeating string of characters. This repeating string is called a run; typically RLE encodes a run of symbols into two bytes, a count and a symbol. RLE can compress any type of data regardless of its information content, but the content of the data to be compressed affects the compression ratio. Compression is normally measured with the compression ratio.
4.8. Define compression ratio.
Compression ratio is defined as the ratio of the original size of the image to the compressed size of the image. It is given as
Compression Ratio = original size / compressed size : 1
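As a worked example (illustrative numbers, not from the text): a 256 x 256 image stored with 8 bits per pixel occupies 65,536 bytes; if the compressed file is 8,192 bytes, the compression ratio is 65,536 / 8,192 = 8 : 1.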
4.9. Give an example for Run length Encoding.
Consider a character run of 15 'A' characters, which normally would require 15 bytes to store:
AAAAAAAAAAAAAAA
With RLE, this would be coded into 15A and would only require two bytes to store; the count (15) is stored as the first byte and the symbol (A) as the second byte.
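A minimal sketch of the (count, symbol) encoding described above (plain Python, illustrative only; real RLE file formats differ in detail):

    from itertools import groupby

    def rle_encode(text):
        """Encode each run of identical characters as a (count, symbol) pair."""
        return [(len(list(group)), symbol) for symbol, group in groupby(text)]

    print(rle_encode("AAAAAAAAAAAAAAA"))   # [(15, 'A')] -> stored as the two bytes 15 and 'A'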
4.10. What is Huffman Coding?
Huffman compression reduces the average code length used to represent the symbols of an alphabet. Symbols of the source alphabet which occur frequently are assigned short-length codes. The general strategy is to allow the code length to vary from character to character and to ensure that the frequently occurring characters have shorter codes.
4.11. What is Arithmetic Coding?
Arithmetic compression is an alternative to Huffman compression; it enables characters to be represented as fractional bit lengths. Arithmetic coding works by representing a number by an interval of real numbers greater than or equal to zero, but less than one. As a message becomes longer, the interval needed to represent it becomes smaller and smaller, and the number of bits needed to specify it increases.
4.12. What is JPEG?
The acronym is expanded as "Joint Photographic Expert Group". It is an international standard in 1992. It works perfectly with colour and greyscale images, and has many applications, e.g. satellite, medical, etc.

4.13. What are the basic steps in JPEG?


The Major Steps in JPEG Coding involve:
i. DCT (Discrete Cosine Transformation)
ii. Quantization
iii. Zigzag Scan
iv. DPCM on DC component
v.RLE on AC Components
vi. Entropy Coding
4.14. What is MPEG?
The acronym is expanded as "Moving Picture Expert Group". It is an international standard in 1992. It works perfectly with video and is also used in teleconferencing.
4.15. What is transform coding?
Transform coding is used to convert spatial image pixel values to transform coefficient values. Since this is a linear process and no information is lost, the number of coefficients produced is equal to the number of pixels transformed. The desired effect is that most of the energy in the image will be contained in a few large transform coefficients. If it is generally the same few coefficients that contain most of the energy in most pictures, then the coefficients may be further coded by lossless entropy coding. In addition, it is likely that the smaller coefficients can be coarsely quantized or deleted (lossy coding) without doing visible damage to the reproduced image.
4.16. Define I-frame.
I-frame is an intraframe or independent frame. An I-frame is compressed independently of all frames. It resembles a JPEG encoded image. It is the reference point for the motion estimation needed to generate subsequent P- and B-frames.
4.17. Define p-frame.
P-frame is called a predictive frame. A P-frame is the compressed difference between the current frame and a prediction of it based on the previous I- or P-frame.
4.18. Define B-frame.
B-frame is the bidirectional frame. A B-frame is the compressed difference between the current frame and a prediction of it based on the previous I- or P-frame and the next P-frame. Accordingly, the decoder must have access to both past and future reference frames.
4.19. Draw the JPEG Encoder.



4.20. Draw the JPEG Decoder.

4.21. What is zig zag sequence?


The purposes of the Zig-zag scan are,
i.To group the low-frequency coefficients at the top of the vector.
ii.To map the 8 x 8 block to a 1 x 64 vector.
4.22. What is block code?
Each source symbol is mapped into a fixed sequence of code symbols or code words. So it is called a block code.
4.23. Define instantaneous code.
A code word that is not a prefix of any other code word is called an instantaneous or prefix code word.
4.24. Define B2 code.
Each code word is made up of a continuation bit c and information bits, which are binary numbers. This is called B2 code or B code. It is called B2 code because two information bits are used for continuation bits.
4.25. Define arithmetic coding.
In arithmetic coding, a one-to-one correspondence between source symbols and code words does not exist; instead, a single arithmetic code word is assigned for a whole sequence of source symbols. A code word defines an interval of numbers between 0 and 1.
4.26. What is bit plane decomposition?
An effective technique for reducing an image's inter-pixel redundancies is to process each bit plane individually. This technique is based on the concept of decomposing a multilevel image into a series of binary images and compressing each binary image via one of several well-known binary compression methods.


UNIT V
IMAGE SEGMENTATION
5.1. What is segmentation?
The first step in image analysis is to segment the image. Segmentation subdivides an image into its constituent parts or objects.
5.2. Write the applications of segmentation.
The applications of segmentation are,
i. Detection of isolated points.
ii. Detection of lines and edges in an image.
5.3. What are the three types of discontinuity in digital image?
Three types of discontinuity in digital image are points, lines and
edges.
5.4. How is the discontinuity detected in an image using segmentation?
The steps used to detect the discontinuity in an image using segmentation are,
i. Compute the sum of the products of the coefficients with the gray levels contained in the region encompassed by the mask.
ii. The response of the mask at any point in the image is R = w1 z1 + w2 z2 + w3 z3 + ... + w9 z9.
iii. Where zi is the gray level of the pixel associated with mask coefficient wi.
iv. The response of the mask is defined with respect to its center location.
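A minimal sketch (NumPy, illustrative values) of step ii: computing the response R = w1 z1 + ... + w9 z9 at one pixel, here with the standard 3 x 3 point-detection mask:

    import numpy as np

    w = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]])          # point-detection mask coefficients w1..w9

    img = np.full((5, 5), 10, dtype=int)
    img[2, 2] = 200                        # isolated bright point

    z = img[1:4, 1:4]                      # 3x3 neighborhood centered on pixel (2, 2)
    R = np.sum(w * z)                      # R = w1*z1 + w2*z2 + ... + w9*z9
    print(R)                               # 1520: a large |R| flags a point discontinuity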
5.5. Why is edge detection the most common approach for detecting discontinuities?
Isolated points and thin lines are not frequent occurrences in most practical applications, so edge detection is mostly preferred for detecting discontinuities.
5.6. How are the derivatives obtained in edge detection during formulation?
The first derivative at any point in an image is obtained by using the magnitude of the gradient at that point. Similarly, the second derivatives are obtained by using the Laplacian.
5.7. Write about linking edge points.
The approach for linking edge points is to analyse the characteristics of pixels in a small neighborhood (say 3x3 or 5x5) about every point (x,y) in an image that has undergone edge detection. All points that are similar are linked, forming a boundary of pixels that share some common properties.


5.8. What is the advantage of using sobel operator?


Sobel operators have the advantage of providing both a differencing and a smoothing effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature of the Sobel operators.
5.9. What is pattern?
A pattern is a quantitative or structural description of an object or some other entity of interest in an image. It is formed by one or more descriptors.
5.10. What is pattern class?
It is a family of patterns that share some common properties. Pattern classes are denoted as w1, w2, w3, ..., wM, where M is the number of classes.
5.11. What is pattern recognit ion?
It involves the techniques for assigning patterns to their respective classes automatically and with little human intervention.
5.12. What are the three principle pattern arrangements?
The three principal pattern arrangements are vectors, strings and trees. Pattern vectors are represented by bold lowercase letters such as x, y, z and are written in the form x = [x1, x2, ..., xn]. Each component xi represents the ith descriptor and n is the number of such descriptors.
5.13. What is edge?
An edge is a set of connected pixels that lie on the boundary between two regions. Edges are more closely modeled as having a ramp-like profile. The slope of the ramp is inversely proportional to the degree of blurring in the edge.
5.14. What is meant by object point and background point?
To extract the objects from the background, select a threshold T that separates these modes. Then any point (x,y) for which f(x,y) > T is called an object point; otherwise the point is called a background point.
5.15. Define region growing.
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from these grow regions by appending to each seed those neighboring pixels that have properties similar to the seed.
5.16. What is meant by markers?
An approach used to control over-segmentation is based on markers. A marker is a connected component belonging to an image. We have internal markers, associated with objects of interest, and external markers, associated with the background.
5.17. What are the two principle steps involved in marker selection?
The two steps of marker selection are,


i.Preprocessing.
ii.Definition of a set of criteria that markers must satisfy.

5.18. Define chain code.


Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. Typically this representation is based on 4- or 8-connectivity of the segments. The direction of each segment is coded by using a numbering scheme.
5.19. What are the demerits of chain code?
The demerits of chain code are,
i.The resulting chain code tends to be quite long.
ii.Any small disturbance along the boundary due to noise causes changes in the code that may not be related to the shape of the boundary.
5.20. What is polygonal approximation method?
Polygonal approximation is an image representation approach in which a digital boundary can be approximated with arbitrary accuracy by a polygon. For a closed curve, the approximation is exact when the number of segments in the polygon is equal to the number of points in the boundary, so that each pair of adjacent points defines a segment in the polygon.
5.21. Specify t he various polygonal approximation methods.
The various polygonal approximation methods are
i.Minimum perimeter polygons.
ii.Merging techniques.
iii.Splitting techniques.
5.22. Name few boundary descriptors.
i.Simple descriptors.
ii.Shape descriptors.
iii.Fourier descriptors.
5.23. Define lengt h of a boundary.
The length of a boundary is the number of pixels along the boundary. For example, for a chain-coded curve with unit spacing in both directions, the number of vertical and horizontal components plus √2 times the number of diagonal components gives its exact length.
5.24. Define shape numbers.
Shape number is defined as the first difference of smallest magnitude. The order n of a shape number is the number of digits in its representation.
5.25. Name few measures used as simple descriptors in region descriptors.
i.Area.


ii.Perimeter.
iii.Mean and median gray levels
iv.Minimum and maximum of gray levels.
v.Number of pixels with values above and below mean.
5.26. Define texture.
Texture is one of the regional descriptors. It provides measures of properties such as smoothness, coarseness and regularity.
5.27. Define compactness.
Compactness of a region is defined as (perimeter)^2 / area. It is a dimensionless quantity and is insensitive to uniform scale changes.
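For instance (a standard illustration, not part of this answer): a circle of radius r has perimeter 2πr and area πr^2, so its compactness is (2πr)^2 / (πr^2) = 4π ≈ 12.57 regardless of r, which shows the measure's insensitivity to scale.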
5.28. List the approaches to describe the texture of a region.
The approaches to describe the texture of a region are,
i.Statistical approach.
ii.Structural approach.
iii.Spectral approach.
5.29. What is global, local and dynamic or adaptive threshold?
When the threshold T depends only on f(x,y), the threshold is called global. If T depends on both f(x,y) and p(x,y), it is called local. If T depends on the spatial coordinates x and y, the threshold is called dynamic or adaptive, where f(x,y) is the original image.
5.30. What is thinning or skeletonizing algorithm?
An important approach to represent the structural shape of a plane region is to reduce it to a graph. This reduction may be accomplished by obtaining the skeleton of the region via a thinning (skeletonizing) algorithm. It plays a central role in a broad range of problems in image processing, ranging from automated inspection of printed circuit boards to counting of asbestos fibers in air filters.

