
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011

COMMITTEES
PATRONS
Dr. Bhure Lal
Chairman, Governing Body
Dr. S. K. Kak
Vice-Chancellor, MTU, Noida
Dr. R. P. Chadha
Chairman, I.T.S-The Educational Group
Mr. B. K. Arora
Secretary, I.T.S-The Educational Group
ORGANISING COMMITTEE
Chief Convener
Dr. Amitabh Verma
Director, I.T.S Engineering College
Conveners
Mr. R. K. Yadav
Dr. A. K. Singh
Organizing Secretary
Mr. Jugul Kishor
Mr. Agha Asim Husain
Technical Committee
Mr. Ashish Gupta
Mr. Suresh Alapati
Ms. Usha Sharma
Mr. Amendra Bhandari
Mr. Hemant Mondal
Ms. Sonal Kumari
Committee Members
Ms. Seema Srivastava
Mr. Harendra Pal Singh
Mr. P. K. Pradhan
Mr. Sumit Kumar
Mr. Anubhav Yadav
Dr. Rashmi Gupta
Mr. Gagan Deep Arora
Local Advisory Committee
Dr. L. N. Paliwal
Dr. V. B. Dhawan
Dr. M. L. Garg
Mr. Sanjay Yadav
Mr. D. Pandey
Ms. Sushma Vij
National Advisory Committee
Prof. Raj Senani (Director, NSIT)
Prof. S. K. Koul (IIT, Delhi)
Dr. D. R. Bhaskar (JMI, Delhi)
Prof. R. S. Anand (IIT Roorkee)
Dr. Mahesh P. Abegaenkar (IIT Delhi)
Dr. V. K. Kanaujia (AIT Delhi)
Dr. R. L. Yadava (GCET, Gr. Noida)
Dr. V. K. Pandey (NIET, Gr. Noida)
Prof. H. M. Gupta (IIT, Delhi)
Contents
S.No. Title of the Paper Author Affiliation Page No.
1. Spherical Modal Analysis of a Lens based
Antenna for Mobile Broadband Systems
M.Riyaz Pasha GPCET, KURNOOL
Andhra Pradesh
1
2. Design to Introduce on-chip Tunability in
Microwave Active inductor
Garima kapur Dayalbagh Educational Institute
Dayalbagh Agra
6
3. Effect of Substrate Thickness on Resonance
Characteristics of Slot Ring Microstrip Antenna
1 P. S. Saini
1 Amit Gupta and
2 Ramesh Kumar
1 J.P. Institute of Engineering &
Technology Meerut(U. P.)
2 Radha Govind Engineering College
Meerut (U.P.)
10
4. Analysis of two Layer Annular
Ring Microstrip Antenna
1 Puneet Khanna, 2 Amar Sharma
3 V. K Pandey, 4 Kshitiz Shinghal
1,2 IFTM University, Moradabad.
3 Noida Institute of Engineering and
Technology, Greater Noida.
4 Moradabad Institute of Technology
Moradabad
13
5. Comparative Analysis of Exponentially
Shaped Microstrip-fed Planar Monopole
Antenna With And Without Notch
M. Venkata Narayana
K. Pushpa Rupavathi
K L University
16
6. Congestion Control Algorithms for
Efficient Satellite Communication
1 Harish Kr. Mishra
2 Dr. R. L. Yadava
1 FGIET, Raebareli (UP)
2 GCET, Gr.Noida
20
7. Classification Of Material Using Microwave 1 A. H. Soni
2 Prof. Dr. A. A. Gurjar
1 B. N. College of Engineering
Pusad (M.S.), 2 Sipnas College of Engg.
& Tech., Amravati (M.S.)
22
8. Patch Antenna on Defective Ground Plane Sakshi Kumari
Vibha Rani Gupta
Birla Institute of Technology,
Mesra, Ranchi, India
24
9. Parallel hardware based ANN for
Smart Antenna Signal Processing
Prof.V.R.Raut
Ms. Sarika Arun Mardikar
Prof. Ram Meghe Institute of Technology
& Research, Badnera, Amravati [M.S]
27
10. Study of Different Properties of
IMPATT Diode in Ka Band
1 Joydeep Sengupta, 1 Alok Naik
1 Ankush Naik, 1 Arup Hareesh,
1 Naresh Rao, 1 Prashanth G.Rao
1 Ron Agnel Tony 2 Dr. Monojit Mitr
1 Visvesvaraya National
Institute of Technology
Nagpur, India 440011.
2 BESU Shibpur, West Bengal
32
11. Fractal Hilbert Antennas in Mobile
Radar And Light Combat
Aircraft Technology
Ashok Kumar
Kamla Nehru Institute of Technology
Sultanpur; Scientist B, ADA
Ministry of Defence, Bangalore
36
12. Effect of Moisture Ingress on the
Substrate of Rectangular Micro Strip patch
antenna using CPW fed
Devinder Sharma
Rajesh Khanna
Thapar University, Patiala, India
40
13. Slot Loaded Patch Antenna for
WLAN/Wi-Max Applications
Rakesh Kumar Tripathi
Rajesh Khanna
Thapar University,Patiala,Punjab
43
14. A coaxial fed wide band microstrip patch
antenna for WLAN applications
Dinesh, Rajesh Khanna Thapar University, Patiala, Punjab
46
15. A Compact UWB Monopole Antenna With
Partially Ground Plane
Himanshu Shukla, Apoorv Bajpai
Tejbir Singh
IEC-College of Engg. & Tech. Gr. Noida
49
16. Printed Bow-tie Antenna for Bluetooth
and WLAN Applications
T. V. Rama Krishna
Prof Habibulla Khan, P. Thrinadh
B. Sai Ritesh, K. Suman Kumar
K L University, Guntur, AP, India
52
17. Optimization of SAW Filter Using
Genetic Algorithm
Amrish Saini, Dr.U.P.Singh
Mukhtiar Rana
JMIT, Radaur (YNR), India
56
18. Active Integrated Switching Antennas 1 Rakesh Kumar Yadav
1 Ram Lal, 2 R. P. Yadav
1 Galgotias College of Engineering &
Technology, Greater Noida, Uttar Pradesh
2 Rajasthan Technical University
Kota, Rajasthan
59
19. 8x8 Patch Array design for wind profiling
radars in C Band
Bharat Bhushan Verma,
Sumanta Kumar Kundu
Bharati Vidyapeeth College of
Engineering
A-4 Paschim Vihar New Delhi -110063
India
63
20. Microstrip Reflectarray Antenna for Direct
to Home (DTH): Study and Design
1 NITIN KUMAR
2 AJAY SURI
Sunder Deep Engineering College
Ghaziabad, UP
67
21.
Investigation of Hexagonal Patch Antenna
with Superstrate loading
Sweta Agarwal
Ravindra Kumar Yadav
Jugul Kishor
ITS Engineering College, Gr Noida
70
22. Analysis of Square Microstrip Patch
Antenna with Superstrate
Niharika, R. K. Yadav
Jugul Kishor
ITS Engineering College, Gr Noida
73
23. Cosine Modulated Filter Banks with
Perfect Reconstruction
C.S. Vinitha Ambedkar Institute of Technology, Delhi
76
24. Condition Monitoring of Electrical
Machine Using Gabor Transform
Neelam Mehala YMCA University and Science
and Technology, Faridabad (Haryana)
INDIA.
81
25. Study of Microscope
Image Processing
Sasi Kumar Gurumurthy VIT university, Vellore-14
Tamil nadu, India
84
26. Knowledge-Based Template for
Eye Detection
Ms.Vijayalaxmi
Mr. S. Sreehari
Vignan Institute of Technology &
Sciences, Hyderabad
90
27. Digital Image Processing: windfall in
Cancer diagnosis
Sangeeta Mangesh Narula Institute of Technology
Kolkata (W.B) 700109
94
28. Fuzzy Logic Based Impulse Noise
Removal Filter
Reena Tyagi
Amrita Sharma
RKGEC,Pilakhuwa Ghaziabad
UP, India
97
29. Comparison Between Various Edge
Detection Algorithms Using Matlab
Amrita Sharma RKGEC,Pilakhuwa Ghaziabad
UP, India
101
30. Image Denoising using Fractal Method Rashmi Kumari
Vishal Rathore
FET MRIU Faridabad, Haryana, UP
105
31. Sequence Detection for MPSK/MQAM Ms.Neha Singhal
Ms.Neha Goel
Mr. Ankit Tripathi
Raj Kumar Goel Institute of Technology
Ghaziabad,UP,India
108
32. Kaiser Window Based Finite Impulse
Response Bandpass Filter Approach
for Noise Minimization
Amiya Dey
Ayan Kumar Ghosh
Seacom Engineering College
113
33. A comparison and analysis of different
PDE approaches for image enhancement
1 Anubhav kumar
1 Awanish Kr Kaushik
1 R.L.Yadava
2 Divya Saxena
1 Galgotias College of Engineering &
Technology, Gr.Noida, India
2 Vishveshwarya Institute of Engineering
and Technology , G.B.Nagar,India
115
34. Face Detection Algorithm in Color Images
Complex Background
1 Anubhav kumar
1 Awanish Kr Kaushik
2 Anuradha 3 Divya Saxena
1 Galgotias College of Engineering &
Technology, Gr.Noida, India
2 Laxmi Devi Institute of Technology
Alwar, India
3 Vishveshwarya Institute of Engineering
and Technology , G.B.Nagar,India
118
35. An Introduction to Various Edge Detectors
for finding Edges & Lines in Image
1 Tanu Shree Gupta
2 Dr. A.K. Sharma
3 Mr. Sudeep Tanwar
4 Mrs. Antima Puniya
1 Shobhit University, Meerut, UP, India
2 CSE, YMCA, Haryana, HR, India
3 Bharat Institute of Technology, Meerut
UP, India
4 Shobhit Institute of Engineering
& Technology, Gangoh
Sharanpur, UP, India
121
36. Haar Wavelet : A Technique of Image
Compression
1 Sudhanshu Upadhyay
1 Pankaj Sharma
1 Manoj Kumar Bansal
2 Agha Asim Husain
1 Vira College of Engineering
Bijnor; UP
2 I.T. S Engineering College
Greater Noida
125
37. Pixel level Document Image Processing
for an OCR system
1 Sunanda Verma
2 D. P. Dwivedi
1 Indra Gandhi Institute of Technology
Indraprastha University, Delhi
2 Visveshwarya Institute of Engg.
& Technology,G.B.Nagar,Dadri, India
130
38. Image Resizing using Bilinear Interpolation Navdeep Goel, Nitu Jindal
Punjabi University Guru Kashi Campus
Talwandi Sabo, Punjab, India
134
39. Optimized Fir Filter Using Pso Amrish Saini, Dr. U.P. Singh
Mukhtiar Rana
JMIT, Radaur (YNR), India
137
40. Multispectral Image Restoration
Using Wavelet
Manju Singh G.L. Bajaj Institute of Technology
& Management
Plot no. 2 Knowledge Park-III,
Greater Noida-201306
139
41. RFID Access Control System Combining
with Face Recognition
Anubhav Srivastava
Usha Kumari
Pranshi Agarwal
ITS Engineering College, Gr Noida
UP, India
143
42. Interleavers in IDMA Communication System
A Survey
Aasheesh Shukla
Rohit Bansal, Sankalp Anand
GLA University, Mathura, India
146
43. OFDM Channel Capacity Enhancement
Under Additive White Gaussian Noise
Mukesh Pathela
Gaurav Bhandari
DIT, DEHRADUN
150
44. Survey of WPAN Technologies:
Zigbee Bluetooth, and Wibree
Vinith Chauhan , Manoj Pandey
Krishna Mohan Rai
St. Margaret Engineering College
Neemrana
153
45. Analysis of Peak to average power ratio
reduction of OFDM signals
Ankit Tripathi
Ms.Neha Goel
Ms.Neha Singhal
Raj Kumar Goel Institute of Technology
Ghaziabad 158
46. RADIUS Server to Improve Wireless
Security
1 Anuroop
2 Tazeem Ahmad Khan
1 Galgotia College of Engg. & Technology
G. Noida
2 Jamia Millia Islamia New Delhi-11025
163
47. Capacity Of MIMO System With Optimally
Ordered Successive Interference Cancellation
Tanu Gupta
Sajib Shau
IGIT, GGSIPU, Delhi -110006
166
48. Cloud Computing: Security and
Management
Jaswant Singh
Gagandeep Kaur
Yadavindra College of Engg.
Talwandi Sabo, Punjab, India
170
49. War in 5 GHz Frequency Band Wireless LAN 1 Abhilash Saurabh
2 Tazeem Ahmad Khan
1 N.I.E.T, Greater Noida
2 IIMT College of Engineering
Greater Noida
174
50. Wireless & RF Communication Systems Abhishek Dwivedi
Ved Prakash Sharma
Maharana Pratap College of Technology
Gwalior (M.P.)
178
51. Wireless charging system based on
Ultra-Wideband Retro-Reflective
Beam forming
Harshita Sachan
and Ravindra Kumar Yadav
Gagan Deep Arora
ITS Engineering College
Greater Noida, U.P 182
52. Performance Analysis of Equalizers with
Diversity Combining In Cellular Systems
RICHA JAIN IGIT, GGSIP UNIVERSITY
185
53. An overview of Technical aspect for
WiMAX & LTE Networks Technology
1 Awanish Kr Kaushik
1 Anubhav kumar
1 R. L. Yadava
2 Anuradha
1 Galgotias College of Engineering
& Technology, Gr.Noida, India
2 Laxmi Devi Institute of Technology
Alwar, India
189
54. Anti-collision Protocols for RFID System Mayur Petkar VJTI, Mumbai
194
55. BER for BPSK in OFDM with Rayleigh
multipath channel
Mrs.Dipti Sharma
Mr.Abhishek Saxena
Apex Institute of Technology
Rampur
198
56. Sybil attack: Threat in P2P Network 1 Pooja Rani
2 Deepti Sharma and
3 Sumit Kumar
ITM, Sector-23 A, Gurgaon
College of Management Studies
Kanpur;NSIT, Dwarka, Delhi
201
57. A Comparative study of network and traffic
simulators
under varying VANETs Conditions
1 Pooja Rani
1 Aman Jatain
2 Nitin Sharma
3 Sumit Kumar
1 ITM, Sector-23A, Gurgaon, India
2 MNIT, Allahabad, India
3 NSIT, Dwarka, Delhi
206
58. Error Performance of Digital Modulation
Techniques over Rayleigh Multipath
Channel-A Unified Investigation
Vicky Singh, Amit Sehgal G.L. Bajaj Institute of Technology
and Management
Gr. Noida
210
59. Noise Reduction of Audio Signal using
Wavelet Transform with Modified
Universal Threshold
1 Rajeev Aggarwal
2 Vijay Kumar Gupta
3 Jay Karan Singh
1,3 Rajiv Gandhi Proudyogiki
Vishwavidyalaya
Bhopal, (M. P.), INDIA; Uttar Pradesh
Technical University, Lucknow
(U. P.), INDIA
214
60. Low Power Methods for different
technologies
Himani Mittal
Prof. Dinesh Chandra
Sampath kumar
J.S.S.Academy of Technical Education
NOIDA 218
61. Design and Simulation of Low Power
Multiplier
1 Vikas Garg
2 Tazeem Ahmad Khan
1 NIET Greater Noida
2 IIMT College of Engineering
Greater Noida
221
86. Wireless Based Temperature
Control System Using Micro
Controller
Ankita Daniel
Anoop Tripathi
Ghanshyam Mishra
Y.G. Singh
Institute of Technology &
Management Gorakhpur (UP)
318
Spherical Modal Analysis of a Lens based
Antenna for Mobile Broadband Systems
M.Riyaz Pasha
GPCET, KURNOOL
ANDHRA PRADESH
Email: riyazpasha@gmail.com
B.V.Ramana Raju
GPCET, KURNOOL
ANDHRA PRADESH
Email: Venkat_badam1@yahoo.com
Abstract: To economically meet the high directivity
requirements of MBS (Mobile Broadband Systems), a composite
antenna comprising a set of primary radiators embedded inside
or backing up a dielectric lens is proposed. In this paper a new
and accurate analytical formulation based on Spherical Modal
Expansion (SME) of the near fields of primary radiators in the
presence of a dielectric spherical lens is presented. The analysis
treats the lens as a scatterer and characterizes the source fields
completely unlike earlier works. The novelty of the analysis is
extended by utilizing radial translation and spatial rotation of
the Spherical Modal Complex Coefficients (SMCC) to align
them to the phase centre of the lens. A new closed form solution
is developed to obtain the scattered fields due to the lens by
application of boundary conditions. A sample computation along
with results of a focused experimental program is presented to
demonstrate the validity of the new approach. The analysis is
flexible enough to accommodate different types of primary
radiators and different shapes for the dielectric lens.
Keywords: MBS; Dielectric Lens; SME; SVWF; SMCC
1. INTRODUCTION
MBS are the mobile counterpart of fixed B-ISDN
and promise to extend all the wired services to wireless
customers. These systems operate in the frequency range of 5
- 64 GHz and that puts severe constraints on the antenna. In
the EHF range the antenna plays a key role in cooperating
with the equalizer to mitigate the channel time dispersion.
Further, the power available at these frequencies is typically
not more than about 2 W. Hence the rest of the gain has to be
achieved through the antenna system.
Analysis of radiation patterns of apertures in the
presence of dielectric lenses and shells has received
considerable attention in the past. Li et al. in [1] have
employed Huygens principle and image theory to obtain the
magnetic current distribution over an aperture. The field over
the aperture has however been assumed constant. Mie series
expansion has then been employed to obtain the unknown
fields in the near and far zones. Alison Brown et al. from
Navsys in [3] and Thornton in [5] have employed
Geometrical Optics (GO) and traced the rays through the lens.
Unfortunately GO is a far field approximation of the primary
source. Reuven [2] in particular has also added Physical
Optics (PO) to GO to obtain a better prediction. Another set
of authors Charles et al in [4] and Tapan Sarkar in [6] have
employed Moment methods in a numerical form to obtain
equivalent currents. These techniques provide an accurate
value of the field in a half space only and necessarily restrict
the equivalent surface to finite dimensions. A technique for
analysis of shaped dielectric lenses, used in a pioneering
effort for MBS is described in [7, 8]. This employs classical
ray tracing methods of GO and PO that is valid only in the far
field of a primary point source type of radiator. Here the lens
is in the near field of finite sized primary radiators oriented at
different angles and at different distances from the lens
centre.
2. ANALYSIS
Consider a set of primary feeds radiating in the
presence of a dielectric sphere as shown below. Figure 1a is
the coordinate system native to a primary feed. The
coordinate system of an aperture oriented in any direction and
radiating into a lens is shown in Figure 1b.
The length PO = d, and the position vector PO makes
a polar angle and an azimuth angle
as shown in Figure 1a. The phase center of the sphere
coincides with the origin and no assumptions are made on the
distance of the primary feeds from the phase centre or their
relative orientation.
Figure 1.a.
Figure 1.b.
A. Characterization of the Primary Radiator
The near fields radiated by the primary feed at any
point in its own coordinate system are given by
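Equations (1) and (2), referenced throughout, did not survive extraction; based on the definitions that follow and the translated form in equation (4), they take the standard SME form. The following is a reconstruction, not verbatim, with the single m = 1 azimuthal term implied for the linearly polarized feeds considered:

```latex
\mathbf{E}_i(R,\theta,\phi) \;=\; \sum_{n=1}^{N_{\max}} \bigl( a_n\,\mathbf{M}_{mn} + b_n\,\mathbf{N}_{mn} \bigr) \tag{1}
\qquad
\mathbf{H}_i(R,\theta,\phi) \;=\; -\,jY_0 \sum_{n=1}^{N_{\max}} \bigl( a_n\,\mathbf{N}_{mn} + b_n\,\mathbf{M}_{mn} \bigr) \tag{2}
```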
Here M_mn and N_mn are the mutually orthogonal TE and TM
Spherical Vector Wave Functions (SVWF) respectively,
defined in [15] with an e^{jωt} time variation. The TE and TM
Spherical Modal Complex Coefficients (SMCC) are denoted
by a_n and b_n respectively. Here n is the spherical modal
index and m is the azimuthal index occurring as e^{jmφ}. For
linearly polarized primary radiators m = 1. Y_0 is the
admittance of the surrounding medium. The maximum value
of the running index n is set to N_max = ceil[k · lensdia], which
accounts for more than 99.9% of the energy radiated by
collimating types of antennas, as shown by Arthur [18], and like
the ones considered in this paper.
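As a numeric illustration of this truncation rule, the sketch below evaluates it with the experimental parameters from Section 3 (10 GHz source, 12 cm sphere). The section quotes roughly 30 modes; the literal formula with the free-space wavenumber and the diameter in metres gives a slightly smaller count, so treat the constants as illustrative assumptions:

```python
import math

# Truncation rule as stated in the text: N_max = ceil(k * lens_diameter),
# with k the free-space wavenumber.
c = 3e8          # speed of light, m/s
f = 10e9         # operating frequency, Hz (paper's experiment)
lens_dia = 0.12  # lens diameter, m (12 cm Teflon sphere)

k = 2 * math.pi * f / c          # wavenumber, rad/m
N_max = math.ceil(k * lens_dia)  # number of spherical modes retained
print(N_max)
```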
The far fields over an enclosing sphere centered at the
aperture center are computed by aperture field integration as
described in [16]. The fields over an enclosing sphere may
also be obtained by an accurate measurement for any one
primary radiator in an anechoic chamber. Using the
orthogonality of the SVWFs, the SMCCs a_n and b_n of
the primary radiator are obtained. These are valid anywhere
in near and far space.
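The projection step can be illustrated in a scalar, one-dimensional analogue: the θ-dependence of each m = 1 mode is an associated Legendre function P_n^1(cos θ), and its coefficient is recovered by an orthogonality integral. The coefficients and sampling grid below are invented for illustration; this is a sketch of the idea, not the full vector SVWF projection:

```python
import numpy as np
from scipy.special import lpmv
from scipy.integrate import trapezoid

# Hypothetical "measured" pattern: a sum of m = 1 associated Legendre
# modes with known coefficients, standing in for the theta-dependence
# of the radiated field.
true_coeffs = {1: 0.9, 2: 0.4, 3: 0.1}
x = np.linspace(-1.0, 1.0, 20001)  # x = cos(theta)
field = sum(a * lpmv(1, n, x) for n, a in true_coeffs.items())

# Recover each coefficient by projecting onto P_n^1 and dividing by
# the mode norm: integral of [P_n^1]^2 over [-1, 1] = 2 n (n+1) / (2n+1).
recovered = {}
for n in true_coeffs:
    norm = 2.0 * n * (n + 1) / (2 * n + 1)
    recovered[n] = trapezoid(field * lpmv(1, n, x), x) / norm

print({n: round(a, 4) for n, a in recovered.items()})
```

The same divide-by-the-norm structure carries over to the full SVWF projection, with the φ-integral handled by the e^{jmφ} orthogonality.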
B. Radial Translation of SMCC
The SMCC of the primary radiator are to be
translated to the phase center of the dielectric sphere, as it is
the reference point for evaluating the fields finally scattered
by the lens. This is accomplished by translation addition
theorems [12] and a recursion method of computation
described in [11]. But it should be noted that while the
translated origin would coincide with the phase center and
coordinate origin of the dielectric lens, it is not yet spatially
aligned to it. The translated SMCC would have the following
expressions.
E_i(R″, θ″, φ″) = Σ_n (A_tn M_mn + B_tn N_mn) (3)
H_i(R″, θ″, φ″) = (−jY_0) Σ_n (A_tn N_mn + B_tn M_mn) (4)
In equations (3) and (4) the double-prime symbol is used to
represent a radiator coordinate system that has been translated
to the lens phase center but not spatially aligned to it. The
translated SMCCs A_tn and B_tn are computed for a distance d
between the primary radiator aperture centre and the lens
phase centre (Figure 1b) using the expressions below [12].
The running index v is up to Nmax. The translation
coefficients Avn and Bvn in (5) and (6) are given by [11],
C. Spatial Rotation of SMCC
The translated SMCC in equations (5) and (6) are now
referred to the lens phase centre coordinate system by
performing a spatial rotation. If R represents the mathematical
rotation group, then as described by Edmonds [13], the
symbol D^n(R)_mu would be the matrix of rotation
coefficients for one element of the rotation group, defined by a
set of three Eulerian angles (α, β, γ) that would align the
translated primary radiator coordinate system with the lens
phase centre coordinate system. The translated and rotated
SMCC of the primary radiator are computed using A_tn and
B_tn in (5) and (6) after substituting into the expressions below
[14].
In (11), (12) and (13), u is the polarization index and m
is the azimuthal index. In our case the primary radiators
chosen always have u = m = 1. The rotation factors d^n_mu(β)
are computed from the recursion relations for Jacobi
polynomials for small values of n as described in [13]. For
large values of n it is computationally more efficient to
compute the rotation factors by employing a complex FFT and
data reduction techniques. Since we are dealing with N_max
of less than 50, the recursion relation was preferred here.
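For the smallest n the rotation factors have a simple closed form; for n = 1 the small-d matrix is a 3 × 3 orthogonal matrix. The sketch below writes it out explicitly (the +1, 0, −1 row/column ordering and Edmonds-style sign convention are assumptions):

```python
import numpy as np

def wigner_d1(beta):
    """Closed-form Wigner small-d matrix d^1_{m'u}(beta), with rows and
    columns ordered m = +1, 0, -1 (ordering convention assumed)."""
    c, s = np.cos(beta), np.sin(beta)
    r2 = np.sqrt(2.0)
    return np.array([
        [(1 + c) / 2, -s / r2, (1 - c) / 2],
        [s / r2,       c,      -s / r2],
        [(1 - c) / 2,  s / r2, (1 + c) / 2],
    ])

d = wigner_d1(0.7)
# Rotation-factor matrices are orthogonal, and d(0) is the identity.
print(np.allclose(d @ d.T, np.eye(3)))         # True
print(np.allclose(wigner_d1(0.0), np.eye(3)))  # True
```

The recursion mentioned in the text generates the analogous (2n+1) × (2n+1) matrices for higher n.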
D. Scattering Coefficients
Boundary conditions are applied on the surface of
the dielectric lens for the total field, which are represented as
E_i + E_s = E_d and H_i + H_s = H_d. Here E_i is the incident field
of the primary radiator, E_s is the field scattered outside the
lens and E_d is the field scattered inside the lens. The incident
field E_i is now characterized by its Translated and Rotated
SMCC [A_nm, B_nm] as in (11) and (12), referred to the lens phase
centre. The unknown scattered fields E_s and E_d are
represented by their unknown SMCC [A_ns, B_ns] and [A_nd,
B_nd] respectively, now referred to the unprimed lens phase
center coordinate system. Substituting the spherical modal
expressions for E_i, E_s and E_d in terms of their SMCC, it is
evident that there are two known coefficients [A_nm, B_nm] and
four unknown coefficients [A_ns, B_ns], [A_nd, B_nd] for each
modal index n. Thus there are 4·N_max spherical modal
complex coefficients that are unknown and to be determined.
To solve for these, we apply the equations of continuity of the
tangential electric and magnetic field components (E_θ, E_φ,
H_θ, H_φ) over the surface of the sphere r = r_0:
E_θi + E_θs = E_θd (14)
E_φi + E_φs = E_φd (15)
H_θi + H_θs = H_θd (16)
H_φi + H_φs = H_φd (17)
In the case of a spherical dielectric lens of radius r_0, the
orthogonality conditions of the SVWF [15] can be invoked on
the surface of the sphere by integrating the dot product of E
with the vector wave functions M_mn and N_mn over φ and
applying the orthogonality of the Associated Legendre
functions in θ, to obtain four linearly independent equations
for each spherical modal index n. Thus all the 4·N_max
unknowns are easily solved as closed form expressions, given
below for a spherical lens.
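The paper's closed-form expressions (18)-(21) did not survive extraction, but a per-mode solution of this kind has the same structure as the classical Mie coefficients for a dielectric sphere. The sketch below implements the Bohren-Huffman form as a stand-in, evaluated with the paper's Teflon parameters; it illustrates the technique and is not the paper's own set of expressions:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_ab(n, x, m):
    """Mie scattering coefficients a_n, b_n for a dielectric sphere with
    size parameter x = k*r0 and relative refractive index m
    (Bohren & Huffman convention)."""
    mx = m * x
    # Riccati-Bessel functions psi(z) = z j_n(z), xi(z) = z h_n^(1)(z)
    # and their derivatives.
    def psi(z):  return z * spherical_jn(n, z)
    def dpsi(z): return spherical_jn(n, z) + z * spherical_jn(n, z, derivative=True)
    def h(z):    return spherical_jn(n, z) + 1j * spherical_yn(n, z)
    def dh(z):   return spherical_jn(n, z, derivative=True) + 1j * spherical_yn(n, z, derivative=True)
    def xi(z):   return z * h(z)
    def dxi(z):  return h(z) + z * dh(z)
    a = (m * psi(mx) * dpsi(x) - psi(x) * dpsi(mx)) / (m * psi(mx) * dxi(x) - xi(x) * dpsi(mx))
    b = (psi(mx) * dpsi(x) - m * psi(x) * dpsi(mx)) / (psi(mx) * dxi(x) - m * xi(x) * dpsi(mx))
    return a, b

m_teflon = np.sqrt(2.08)       # refractive index of the Teflon lens
x = 2 * np.pi * 0.06 / 0.03    # k*r0 at 10 GHz (r0 = 6 cm, wavelength = 3 cm)
a1, b1 = mie_ab(1, x, m_teflon)
a0_1, b0_1 = mie_ab(1, x, 1.0)  # index-matched sphere scatters nothing
print(abs(a0_1) < 1e-12 and abs(b0_1) < 1e-12)  # True
```

The sanity check at the end reflects the boundary-condition structure: when the sphere's index matches free space, the scattered-field coefficients vanish identically.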
When the lens shape is not spherical, a closed form
solution may not be feasible. However this technique can
easily be extended to dielectric shells. The scattered field
outside the shell is obtained by the same method, but the field
inside the spherical shell needs reapplication of the boundary
conditions, which yields a different set of SMCCs inside the
spherical shell. With all SMCC for E_i as [A_nm, B_nm] and
E_s as [A_ns, B_ns] now known, the total field at any point in
the near or far field is computed by addition of the incident
and scattered fields using equations (1) and (2). While this
technique computes the radiated field for a radiator-dielectric
lens combination accurately, it is not computationally easy to
estimate the Direction of Arrival (DOA) of signals that
undergo refraction through the lens. For such purposes it is
advisable to use simple GO with modifications to the DOA
algorithms.
3. SAMPLE CALCULATION AND EXPERIMENT
A sample computation was performed to test the
validity of the analysis. An open ended rectangular waveguide
with an aperture of dimensions (2.4 × 1.2) cm was made to
radiate at a frequency of 10 GHz in front of a Teflon sphere
(ε_r = 2.08) of diameter 12 cm. The aperture was assumed to
radiate in the dominant TE10 mode. The distance of
separation between the rectangular waveguide aperture and
the center of the dielectric lens was kept at 5.6 cm, where the
reflection losses were better than 20 dB. To perform the
measurement, the dielectric sphere was kept on a supporting
stand made of polystyrene foam with ε_r = 1.08, practically
the same as free space, so that the radiation pattern of the
aperture-lens combination does not get perturbed.
Figure 2a shows the spherical lens fabricated for the
purpose and Figure 2b shows the actual set up used for the
experiment. Figures 3 and 4 show the return losses and
collimation effect by the lens.
The rectangular aperture-lens combination was
excited by a klystron source at 10 GHz. At this frequency, the
number of modes considered was N_max ≈ 30, i.e. N_max =
ceil[k · lensdia], which accounts for about 99.9% of the
energy. The SMCCs a_n and b_n of the primary radiator are
computed using the field expressions (1) and (2) of the SME
model by exploiting the orthogonality properties of M_mn and
N_mn. The magnitudes of a_n and b_n are shown in Figure 5.
The fast decay of the SMCCs indicates that most of the energy
is concentrated in the first few modes (~5), whereas the higher
order modes contain negligible energy. Using these SMCCs,
the far fields of the rectangular aperture are reconstructed by
substituting them in expressions (1) and (2). The fields of the
rectangular aperture were also calculated using the Aperture
Integration Method [16]. There was a satisfactory match
between the patterns obtained by the two methods, as is
evident from Figure 6.
As the phase centre of the dielectric lens is the
reference point for the analysis, the SMCCs obtained in the
previous steps are subjected to the operation of radial
translation using equations (5) and (6) for a radial distance
of 2.5 cm; Figure 7 shows the translated SMCCs. As
expected, here also the SMCCs decay fast, indicating that the
translation operation has not changed the energy content of
the modes. The far fields are reconstructed again using the
SME expressions (1) and (2), but with the translated SMCCs,
and are shown in Figure 8. It is evident from the figure that
the fields are similar to those in Figure 6. The translated
SMCCs are then spatially rotated using the rotation matrix to
align them to the phase centre of the lens with the help of
the expressions in (11) and (12). The field after the rotation
operation is calculated by substituting the rotated SMCCs in
expressions (1) and (2) and is shown in Figure 9. There is a
reasonably close match between these fields and those in
Figure 6. The scattering coefficients A_nd, A_ns, B_nd, B_ns
are obtained by invoking the boundary conditions on the lens
surface for the total fields using the expressions (18)-(21).
The fields E_s and E_d scattered by the lens are found by
substituting these scattering coefficients in (1) and
(2). Figure 10 shows the fields scattered by the lens, which
are in close agreement with the fields obtained by direct
aperture integration shown in Figure 6.
Figure 2.a.: Teflon Spherical Lens, Radius = 6 cm, ε_r = 2.08
Figure 2.b.: Spherical Lens excited by a Klystron source at 10 GHz
Figure 3: Return losses for the spherical lens.
Figure 4: The measured H plane patterns
Figure 5.a.: Magnitudes of SMCCs before translation
Figure 5.b.: Magnitudes of SMCCs before translation
Figure 6.a.: E-field for φ = 0°, Aperture Integration Method
Figure 6.b.: E-field for φ = 0°, Spherical Modal Expansion Method
Figure 6.c.: E-field for φ = 90°, Aperture Integration Method
Figure 6.d.: E-field for φ = 90°, Spherical Modal Expansion Method
Figure 7.a.: Magnitudes of SMCCs after translation
Figure 7.b.: Magnitudes of SMCCs after translation
Figure 8.a.: E-field for φ = 0°, near field at 2.5 cm after translation
Figure 8.b.: E-field for φ = 90°, near field at 2.5 cm after translation
Figure 8.c.: E-field for φ = 0°, far field at 2 m after translation
Figure 8.d.: E-field for φ = 0°, far field at 2 m after translation
Figure 9.a.: E-field for φ = 0°, near field at 2.5 cm after spatial rotation
Figure 9.b.: E-field for φ = 90°, near field at 2.5 cm after spatial rotation
Figure 9.c.: E-field for φ = 0°, far field at 2 m after spatial rotation
Figure 9.d.: E-field for φ = 90°, far field at 2 m after spatial rotation
Figure 10.a.: E-field for φ = 0°, Scattered fields
Figure 10.b.: E-field for φ = 90°, Scattered fields
4. CONCLUSION
An analytically accurate and novel solution to the problem
of prediction of radiation patterns of small aperture primary
radiators in the presence of dielectric lenses has been
presented. This uses Spherical Modal Expansion to
characterize the primary radiator fields and the unknown
scattered fields are obtained from a closed form solution for
spherically shaped lenses. The procedure involved translation
and rotation of the primary radiator SME coefficients. The
validity of this theoretical procedure has been demonstrated
for a typical case of primary
radiators like rectangular waveguide, small horns and
patch arrays radiating in the presence of a Teflon spherical
lens by comparing results with an experimental program. The
analysis is easily applied to any type of radiating aperture and
any shaped dielectric lens.
5. REFERENCES
[1] L.W. Li, M.S. Leong, X. Ma and T.S. Yeo, "Analysis of a
circular aperture antenna and its covered dielectric
hemispherical radome shell over ground plane: Near and far
zone patterns," Microwave and Optical Technology Letters,
vol. 21, no. 4, pp. 238-243, May 20, 1999.
[2] Reuven Shavit, "Dielectric spherical lens antenna for mm-wave
communications," Microwave and Optical Technology Letters,
vol. 39, no. 1, Oct. 5, 2003.
[3] Alison Brown and David Morley (Navsys Corp), "Test results
of a seven element small controlled reception pattern antenna,"
Proceedings of ION GPS, Sept. 2001, Salt Lake City, Utah.
[4] Charles W. Manry Jr., Alison Brown et al., "Advanced Mini
Array Antenna design using fidelity computer modeling and
simulation," http://www.navsys.com/papers/0009004.pdf.
[5] John Thornton, "Properties of spherical lens antenna for high
altitude platform communications,"
http://www.ece.unm.edu/summa/notes/ssn/note382.pdf.
[6] Tapan Kumar Sarkar and Ardalan Taaghol, "Near field to
near/far field transformation for arbitrary near-field geometry
utilizing an equivalent electric current and MOM," IEEE Trans.
on Antennas and Propagation, vol. 47, no. 3, pp. 566-573,
March 1999.
[7] Carlos A. Fernandes, "Shaped dielectric lenses for wireless
millimeter-wave communications," IEEE Antennas and
Propagation Magazine, vol. 41, no. 5, pp. 141-150, October
1999.
[8] Carlos A. Fernandes, Jose G. Fernandes, "Performance of lens
antennas in wireless indoor millimeter wave applications,"
IEEE Trans. on MTT, vol. 47, no. 6, pp. 732-736, June 1999.
[9] W. Croswell, J.S. Chatterjee, Bradford Mason, "Radiation from
a homogeneous sphere mounted on a waveguide aperture,"
IEEE Trans. on Antennas and Propagation, vol. AP-23, no. 5,
pp. 647-656, Sept. 1975.
[10] H.S. Ho, G.J. Hagan, M.R. Foster, "Microwave irradiation
design using dielectric lenses," IEEE Transactions on MTT,
vol. MTT-23, pp. 1058-1061, Dec. 1975.
[11] J.H. Bruning and Y.T. Lo, "Multiple scattering of EM waves by
spheres: Part I, Multipole expansions and ray optical
solutions," IEEE Trans. on Antennas and Propagation, vol.
AP-19, no. 3, pp. 378-390, May 1971.
[12] O.R. Cruzan, "Translation addition theorems for spherical
vector wave functions," Quarterly of Applied Mathematics,
vol. 20, no. 1, pp. 15-24, 1961.
[13] A.R. Edmonds, Angular Momentum in Quantum Mechanics,
Princeton University Press, Princeton, 1974.
[14] M.S. Narasimhan and S. Ravishankar, "Multiple scattering of
EM waves by dielectric spheres located in the near field of a
source of radiation," IEEE Trans. on Antennas and
Propagation, vol. AP-35, no. 4, pp. 399-405, April 1987.
[15] Julius Adams Stratton, Electromagnetic Theory, McGraw Hill
Book Company, Chapter 7, 1941.
[16] Constantine A. Balanis, Antenna Theory: Analysis and Design,
2nd Edn., Singapore: John Wiley and Sons, Chapter 12, 2002.
[17] S. Ravishankar and H.V. Kumaraswamy, "Smart antenna
system using dielectric lens," ICGST DSP Journal, vol. 8,
issue 1, pp. 31-35, Dec. 2008.
DESIGN TO INTRODUCE ON-CHIP TUNABILITY IN
MICROWAVE ACTIVE INDUCTOR
Garima Kapur, Kapil Bhola, C.M Markan
VLSI Design Technology Lab, Dayalbagh Educational Institute, Dayalbagh, Agra
Abstract: Active inductors are increasingly replacing on-chip
CMOS spiral inductors and MEMS switched inductors owing to
their higher Q-factor, fabrication ease, compact chip area, and
above all tunability. They are capable of producing wideband
oscillations ranging from RF, GPS, and Bluetooth to UWB and
hence have begun to find application in a wide variety of areas
such as VCOs, RF filters, frequency synthesizers, multimode
transceivers and software-defined radios. However, a major
limitation of these on-chip active inductors, common to any
analog integrated circuit, is parametric drift during and post-
fabrication, besides having limited or no variability.
This paper presents a design of a gyrator-type, grounded CMOS
active inductor that uses a tunable floating-gate nFET resistor in
the feedback to introduce fine tunability through on-chip
non-volatile programming. The proposed active inductor,
simulated in T-Spice on a 0.35 μm CMOS process, exhibits nH
inductances at GHz frequency ranges with high Q factor. Despite
higher noise and power consumption, on account of their higher
integration density, better Q factor and tunable operating
frequency range in GHz, these inductors find application in
MMIC (monolithic microwave integrated circuit) designs like
MMIC VCOs, low power amplifiers, MMIC active band pass
filters, etc. The use of the FGNFET not only brings in fine
tunability (13-bit precision) that can overcome the parametric
drift but also introduces wide range programmability, which
broadens its applicability across a wide spectrum. The range of
frequency can be increased by increasing the number of legs, in
other words by adding parallel NFETs (with variable aspect
ratios) at the common floating gate in the FGNFET. The decrease
in equivalent parallel conductance to improve Q can be further
enhanced by implementing and adjusting the biasing current
sources using FGFETs. The design shows lower noise when
compared with the simulation design using a feedback resistor. It
shows constant power consumption and can compensate for
temperature changes.
Key-words: CMOS, High-Q Active Inductor, Floating Gate
transistors.
I. INTRODUCTION
The new trends in wireless communications have motivated strong interest in the development of multi-standard, multi-service systems with operating frequencies covering the GSM, Bluetooth and UWB bands. The desire to combine such frequency bands in one unit introduces the requirement of a VCO that can be precisely tuned between them. LC-tuned oscillator circuits can achieve lower phase noise, but the low Q value of the passive inductor limits the tuning range. There are MEMS switched tunable inductors [1] which can provide nH inductance in the 1-10 GHz frequency range. The main concerns with the spiral inductor are its low Q value, large chip area and difficult fabrication control. Active inductors overcome these disadvantages and provide high Q values. Simple active inductor circuits using op-amps have been developed and used so far; we have simulated such an active inductor using an op-amp implemented with MOSFETs. The facts that this circuit uses a large number of transistors, lacks tunability and generates frequencies only in the MHz range limit its usefulness. Hence many kinds of gyrator-type active inductors have been developed using passive tunable compensating networks [2]. Among them, the most common is the so-called grounded active inductor, shown along with its equivalent circuit in Fig. 1(a) and (b) respectively [3].
Fig. 1 Simple active inductor and equivalent circuit
This circuit provides a better Q value with fewer transistors, but on-chip tunability is not possible. To further improve the performance, a high-Q active inductor using a feedback resistor has been proposed, as shown in Fig. 2. In this circuit the input impedance is modified and the Q factor is improved by reducing the parallel conductance. In order to use such an active inductor over variable frequency ranges, the feedback resistor can be tuned using a band selector circuit block [4]. It can produce inductance at a number of discrete frequency bands, depending on the number of bits of the multiplexer used.
In this paper, the proposed programming methodology improves the programming precision to 12 bits, and hence inductance can be obtained over a quasi-continuous set of frequencies (2^12 steps) with better Q factor and self-resonant frequency than the previous models. The conductance can be further decreased by using FG FETs to generate equal biasing currents (I1 and I2) in the circuit. The simulation model of the FGNFET resistor with highly precise tunability is explained in Section III. Section IV briefly explains our proposed model of the active inductor. In Section V, the simulation results for the proposed active inductor using FG-FETs are explained and a comparison with previous models is shown.
Fig. 2 High-Q active inductor
II. ACTIVE INDUCTOR DESIGN TOPOLOGY
In a simple grounded active inductor using a feedback resistance as shown in Fig. 2, the resistance Rf increases the Q factor and sets the inductance value. The design is based on a gyrator topology comprising a common-source amplifier formed by transistor M1 and a common-drain feedback amplifier formed by transistor M2. The circuit can be analyzed using the small-signal equivalent circuit at high frequency shown in Fig. 3.
In order to simplify the small-signal analysis, the bulk and the corresponding source leads are connected and both transistors are considered identical, so their small-signal model parameters are equal. Both Cin and Cout represent only an approximation of the more complex capacitance structure of four-terminal transistor models [5]. The rest of the parameters are typical MOS components: gate-to-drain capacitances Cgd, output conductances gout1, gout2 and transconductances gm1, gm2. Both transistors are biased using ideal current sources.
Fig. 3 The small signal equivalent circuit of gyrator-type active
inductor at high frequencies
Using nodal analysis under the above considerations, the conductance seen from the input port is given by the equation in [6], in which the gate-drain capacitances of the MOSFETs are ignored. The expressions for the components of the equivalent model (Fig. 1(b)) then follow. From these equations, the parallel conductance can be greatly reduced by increasing the feedback resistance, which in turn can be made on-chip tunable as explained in the next section. With increasing resistance, the inductance of the circuit increases while the conductance and series resistance decrease; hence the quality factor improves and the losses fall. The conductance can be further decreased if the biasing current sources are implemented using floating-gate FETs: by tweaking the floating-gate voltage, the transconductance can be reduced and hence the parallel conductance reduced further, improving the quality factor. However, with decreasing conductance the series resistance of the circuit increases, which can decrease the quality factor. The design is therefore an optimization between parallel conductance and series resistance.
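The trade-off described above can be illustrated with first-order gyrator-C relations. These are textbook approximations, not the paper's full small-signal expressions (which also involve Rf and Cgd), and the component values below are representative assumptions for a 0.35 µm process, chosen only to show orders of magnitude:

```python
import math

def gyrator_inductance(c_int, gm1, gm2):
    """First-order gyrator-C relation: the internal node capacitance
    is gyrated into an equivalent inductance L = C / (gm1 * gm2)."""
    return c_int / (gm1 * gm2)

def quality_factor(freq_hz, inductance, g_parallel):
    """Q of a parallel L-G tank: Q = 1 / (omega * L * Gp), so reducing
    the parallel conductance Gp directly raises Q."""
    omega = 2.0 * math.pi * freq_hz
    return 1.0 / (omega * inductance * g_parallel)

# Assumed values: 100 fF internal capacitance, gm1 = gm2 = 2 mS,
# parallel loss conductance 0.5 mS, evaluated at 1 GHz.
L = gyrator_inductance(100e-15, 2e-3, 2e-3)   # 25 nH
Q = quality_factor(1e9, L, 0.5e-3)            # about 12.7
```

Halving `g_parallel` in this sketch doubles Q, which is the mechanism the feedback resistor (and later the FGFET current sources) exploits.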
III. TUNABLE FGNFET RESISTOR
The fundamental requirement for operating a single MOS transistor in the triode region as a linear resistive element is to suppress its nonlinearities by applying a function of the input signal to its gate and/or its body. Therefore, to attain linearization of a CMOS resistor, a common-mode, large voltage is employed at the gate. There are two approaches to providing large voltages at the gate. One is to vary the voltage at one gate of a multiple-gate transistor; this technique provides volatile tunability across the transistor. The other is to use a capacitor between the gate and a floating gate, where large voltages can be applied to the floating gate. Once the desired charge is pushed onto the floating gate, it can be used to provide a constant bias to another transistor in the design (Ma), from which the tunable resistance is generated in our circuit design.
Following [7], we have implemented such an indirectly programmable FGNFET circuit design. Voltage-dependent current sources and a programmable PMOS are used for tunneling and hot-electron injection to the floating gate, as shown in Fig. 4. For injection, the drain-to-source voltage of the programming PFET (Mp) is varied; for tunneling, Vtun is varied. The charge at the floating gate (and hence Vfg) thus changes, and consequently a range of resistances can be obtained across the transistor. The simulation model of the indirectly programmed tunable FGNFET was built in T-Spice on a 0.35 µm CMOS process (Fig. 4). The conditions to be kept in mind while programming the FGNFET are:
* Vtun - Vfg should be approximately equal to or greater than 10 V.
* Vfg - Vd_prog should be approximately equal to or greater than 4 V.
* Vs_prog - Vthreshold should be less than or equal to Vfg.
The programming precision that can be obtained is up to 13 bits, as mentioned in [8]. Therefore a highly precise on-chip tunable resistance can be obtained across the FGNFET, which in turn is used in the active inductor design for inductive tunability.
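A trivial sketch of the three programming constraints, useful as a sanity check when sweeping programming voltages in simulation. The reading of the third condition as Vs_prog - Vth <= Vfg is an assumption, since the original typesetting of that inequality is ambiguous; the example operating point is purely illustrative:

```python
def fgnfet_programming_ok(v_tun, v_fg, v_d_prog, v_s_prog, v_th):
    """Check the FGNFET programming conditions quoted in the text:
    tunneling needs Vtun - Vfg >= ~10 V, injection needs
    Vfg - Vd_prog >= ~4 V, and Vs_prog - Vth <= Vfg (assumed reading)."""
    return (v_tun - v_fg >= 10.0
            and v_fg - v_d_prog >= 4.0
            and v_s_prog - v_th <= v_fg)

# Example operating point (illustrative values only)
ok = fgnfet_programming_ok(v_tun=14.0, v_fg=3.5, v_d_prog=-1.0,
                           v_s_prog=3.0, v_th=0.7)
```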
Fig. 4 T-Spice 0.35 µm CMOS process simulation model of the indirectly programmed tunable FGNFET resistor
IV. PROPOSED ACTIVE INDUCTOR
The proposed grounded active inductor using a feedback resistive path consists of two transistors acting as two back-to-back gyrators, whose internal capacitances are reflected as an inductance at the input impedance terminal. The circuit is composed of common-source transistor M4, common-drain transistor M3 and the feedback resistive path, which is obtained using the on-chip tunable floating-gate NFET, in other words a highly precise tunable CMOS resistor.
The simulated model of the proposed active inductor, built in T-Spice on a 0.35 µm CMOS process, is shown in Fig. 5. The biasing currents I1 and I2 can also be obtained using floating-gate FETs operating in saturation; hence the biasing currents can likewise be adjusted through the on-chip programming of the floating-gate FETs to further enhance the Q factor.
The input admittance of the circuit, including the effect of the parasitic capacitance Cv, is given in [9], where Rf represents the floating-gate NMOS resistor. The equivalent circuit is the same as that of Fig. 1(b), and the expressions for its components are given accordingly.
Fig. 5 High-Q active inductor with floating gate NFET
The simulated results and characteristic graphs obtained from this circuit model in T-Spice on the 0.35 µm CMOS process are explained and tabulated in the next section.
Fig. 6 Plot showing the output characteristic of the proposed active inductor design, with self-oscillating frequency = 1.2 GHz and Q = 12
V. SIMULATION RESULTS
Fig. 6 depicts a comparison of the output characteristics, in other words a comparison of the Q factor, between the simulated active inductor using a plain resistor in the feedback and the proposed active inductor using the highly precise on-chip tunable resistor obtained across the FGNFET. It shows that the Q factor improves in the proposed design.
Fig. 7 shows the real part of the impedance obtained across the active inductor: negative resistance is obtained in the range 1 GHz to 2 GHz. Fig. 8 shows the imaginary part: inductive reactance is obtained within the 0.4 GHz to 1.7 GHz range. Hence the practical range of signal frequencies across the inductor at a specific tuning condition, that is at a specific resistance, is limited to 1 GHz to 1.7 GHz. For higher frequencies, the inductor reaches its self-resonance frequency. The simulated impedances across the inductor confirm the equivalent circuit design parameters.
The equivalent circuit shown in Fig. 1(b) gives the impedance in terms of the parallel capacitor C, conductance G, inductor L and series resistor R. Using the equations mentioned before (from the small-signal model analysis), the parameters of the equivalent circuit are calculated. The impedance at the drain terminal of M1, that is across the parasitic capacitor Cv, is shown in Fig. 8. Thus the self-oscillating frequency of the active inductor at specific programming conditions of the FGNFET is 1.7 GHz. This oscillating frequency and the Q factor are on-chip tunable and can be tuned in very fine steps. The range of tunability can be increased by using parallel NFETs (with variable W/L ratios) with a common floating gate in the FGNFET resistor.
The equivalent circuit parameters and the active inductor characteristics are tabulated in Table 1. With increase in resistance, the inductance increases and the conductance decreases; therefore the Q factor increases, but the self-oscillating frequency decreases. The inductor can be tuned in very fine steps, up to 13 bits of resolution [8]. The resistance variation obtained using the finely tuned FGNFET is about 1.765 kOhm to 16 kOhm, which in turn produces inductances of 3 nH to 7 nH with self-oscillating frequencies in the range 1 GHz to 6.2 GHz. Hence, at the specific biasing and circuit sizing conditions, the inductor can be tuned from 3 nH to 7 nH and operated within the 1 GHz to 6.2 GHz frequency range. This tunable frequency range can be increased by increasing the number of legs, that is by adding parallel NFETs (with variable W/L ratios) with a common floating gate in the FGNFET.
The decrease in parallel conductance to improve Q can be further enhanced by implementing the biasing current sources using FGFETs. By programming these current sources, gds2 can be increased while keeping gds1 the same, such that the parallel conductance decreases and the inductance increases. However, the series resistance increases too, which can increase losses; the circuit is therefore optimized for losses. The net result is that the losses decrease and the Q factor increases. Therefore, with the proposed tuning methodology, a finely tuned field-programmable analog active inductor can be fabricated.
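To put the quoted 13-bit resolution in perspective, a back-of-envelope step size can be computed from the ranges given above. A linear mapping between programming code, resistance and inductance is assumed here purely for illustration; the real resistance-to-inductance relation is nonlinear:

```python
# Ranges quoted in the text: 1.765 kOhm - 16 kOhm resistance,
# 3 nH - 7 nH inductance, 13-bit programming resolution [8].
codes = 2 ** 13                          # 8192 programmable steps
r_step = (16e3 - 1.765e3) / codes        # ~1.74 Ohm per code (linear assumption)
l_step = (7e-9 - 3e-9) / codes           # ~0.49 pH per code (linear assumption)
```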
Fig. 7 Plot of the real impedance, showing a negative resistance range from 1 GHz to 2 GHz
Fig. 8 Plot of Imag(Zin), showing an inductive reactance range from 0.4 GHz to 1.7 GHz
VI. SENSITIVITY ANALYSIS
The proposed active inductor design produces nH inductance in the GHz frequency range with high Q factor, and its power consumption is constant: since no current passes through the feedback path of the circuit, the power consumption of the inductor does not change when the feedback resistance varies the characteristics of the inductor. This unchanged power consumption can be exploited to design a constant-power, wide tuning-range oscillator circuit. The noise performance of the circuit, shown in Fig. 9, is low: the total output noise voltage is 109.7349 sq V/Hz, compared with 568.342 sq V/Hz for the active inductor model using a plain feedback resistor. In addition, since the number of transistors in the circuit is minimal, the power dissipation is low. The proposed design varies with temperature, but at a specific temperature, by adjusting the floating-gate voltage, the temperature coefficient of the FGNFET can be tuned to zero [10]. The proposed active inductor therefore finds application in filtering and matching in very high frequency filters, tuned amplifiers and oscillators, and can be used in RF applications requiring high Q.
Fig 9 Noise performance of the proposed active inductor
VII. APPLICATIONS OF PROPOSED ACTIVE INDUCTOR
Frequency synthesizers employ PLL (phase-locked loop) based structures in which the voltage-controlled oscillator (VCO) is an essential building block, as it determines the phase noise, harmonics and power consumption. The proposed tunable active inductor can be utilized in an LC oscillator or VCO circuit design [11]: the tunable inductance will produce tunable oscillations with the same self-oscillating frequency and tuning range as the inductor used. The proposed inductor can also be used to implement a high-Q band-pass filter [12], whose central frequency and Q will equal the self-oscillating frequency and Q of the active inductor. The design is suitable for microwave and millimeter-wave applications [2] and can also be used in low-power wireless transceivers [13]. In addition, with careful design optimization, the proposed inductor can lead to an amplifier with minimum noise figure and programmable central frequency [14].
VIII. CONCLUSION
This work contributes an active inductor that is finely tuned through on-chip programming and can produce wideband oscillations in various bands such as RF, GPS, Bluetooth and UWB. Such an active inductor finds use in VCOs, RF filters, frequency synthesizers, multimode transceiver systems and software-defined radio systems. The model of the active inductor using a tunable floating-gate NFET resistor in the feedback has been implemented in T-Spice on a 0.35 µm CMOS process and is more accurate (12-bit precision) than the previous models. The proposed gyrator-type, grounded active inductor provides nH inductances at GHz frequencies. It shows a high Q factor and lower losses as the resistance is increased and the conductance decreased through the finely tuned FGNFET resistor. Floating-gate FETs can also be used as the biasing current sources in the circuit to further decrease the conductance. Moreover, the proposed active inductor can provide a wider tuning range by using more legs in the FGNFET. It gives constant power consumption, lower noise and low power dissipation, and the design can also compensate for changes in temperature.
REFERENCES
[1] R. Zadeh, M. Kohl, P. A. Ayazi, F. Georgia, "MEMS switched tunable inductor," IEEE Journal of Microelectromechanical Systems, vol. 17, pp. 78-84, Feb. 2008.
[2] S. Del Re, G. Leuzzi, V. Stornelli (Dept. of Electrical Eng., University of L'Aquila), "A new approach to the design of high dynamic range tunable active inductors," IEEE Workshop on Integrated Non-linear Microwave and Millimetre-wave Circuits, pp. 25-28, Nov. 24-25, 2008.
[3] M. Failland, H. Barthelemy, "Design of a wide tuning range VCO using active inductor," Joint 6th International IEEE Northeast Workshop on Circuits and Systems & TAISA Conference (NEWCAS-TAISA 2008), pp. 13-16, Aug. 2008.
[4] M. J. Wu, Y. Y. Huang, Y. H. Lee, Y. M. Mu, J. T. Yang, "Design of CMOS multi-band voltage controlled oscillator using active inductors," International Journal of Circuits, Systems and Signal Processing, issue 2, vol. 1, pp. 207-210, Oct. 2007.
[5] G. Szczepkowski, G. Baldwin, R. Farrell, "Wideband 0.18 µm CMOS VCO using active inductor with negative resistance," European Conference on Circuit Theory and Design, Sevilla, Spain, Aug. 26-30, 2007.
[6] A. Gupta, S. Ahmadi, M. Zaghloul, "Low voltage high Q CMOS active inductor for RF applications," 16th International Conference on Information Systems Analysis and Synthesis (ISAS 2010), Orlando, Florida, USA, April 6-9, 2010.
[7] K. Rahimi, C. Diorio, C. Hernandez, M. D. Brockhausen, "A simulation model for floating-gate MOS synapse transistors," vol. 14, pp. 1354-1367, IEEE Educational Activities Department, Dec. 2006.
[8] Y. L. Wong, M. H. Cohen, P. A. Abshire, "A 1.2 GHz adaptive floating gate comparator with 13-bit resolution," ISCAS 2005, vol. 6, pp. 6146-6149, IEEE, 2005.
[9] G. Huang and B.-S. Kim, "Programmable active inductor based wideband VCO/QVCO design," Journal of Microwaves, Antennas and Propagation, vol. 2, issue 8, pp. 830-838, December 2008.
[10] G. Kapur, C. M. Markan, "Design of a high precision, wide ranged analog clock generator with field programmability using floating-gate transistors," Proceedings of the 2010 NASA/ESA Conference on Adaptive Hardware and Systems, pp. 365-370, June 2010.
[11] J. T. Yang, S. K. Hsieh, P. J. Tsai, "A wide tuning range voltage-controlled oscillator with active inductors for Bluetooth applications," Proceedings of the 4th International Conference on Circuits, Systems and Signals (CSS'10), pp. 39-42.
[12] H. Xiao, R. Schaumann, W. R. Daasch, "A radio-frequency CMOS active inductor and its application in designing high-Q filters," IEEE International Symposium on Circuits and Systems, pp. 197-200, May 23-26, 2004, Vancouver, Canada.
[13] A. Abidi, G. Pottie, and W. Kaiser, "Power-conscious design of wireless circuits and systems," Proc. IEEE, vol. 88, no. 10, pp. 1528-1545, Oct. 2000.
[14] M. J. Wu, P. J. Yen, C. C. Chou, J. T. Yang, "Low power amplifier design using CMOS active inductor," Proceedings of the 5th WSEAS International Conference on Signal Processing, Istanbul, Turkey, pp. 111-115, May 27-29, 2006.
Effect of Substrate Thickness on Resonance Characteristics of Slot Ring Microstrip Antenna
1 P. S. Saini (member IEEE), 2 Amit Gupta and 3 Ramesh Kumar
1-2 Department of Electronics & Communication Engineering, J.P. Institute of Engineering & Technology, Meerut (U.P.)
3 Department of Electronics & Communication Engineering, Radha Govind Engineering College, Meerut (U.P.)
1 sainipadam@rediffmail.com, 2 amit1hcst@gmail.com, 3 ramesh.rgec@gmail.com
Abstract- A set of measurements of annular ring slot antennas [7, 8] on substrates of varying thickness is presented. The most important factors in the design of microstrip circuits and antennas are the choice of the best substrate thickness and dielectric constant for low losses. Here, an attempt has been made to study the return loss for various dielectric thicknesses. For this, an annular ring slot microstrip antenna (ARSMSA) is designed at 3.0 GHz and simulated on different substrate thicknesses ranging from 0.5 mm to 3.0 mm using Zeland IE3D software. The variation in return loss is found to range from -5.869 dB to -42.16 dB.
Keywords: Microstrip annular ring slot antenna (ARSMSA), return loss, VSWR, dielectric thickness
I. INTRODUCTION
Recent interest has developed in radiators etched on electrically thick substrates, as these antennas are used for high-frequency applications [2, 4, 5]. However, microstrip antennas inherently have narrow bandwidth, and in many cases their increased impedance bandwidth is paid for with poorer radiation characteristics. Recent interest in millimeter-wave systems and monolithic fabrication has created a need for substrates that are electrically thicker and/or have high permittivity; increased bandwidth is another reason for interest in electrically thicker substrates. Anomalous results have previously been observed for printed antennas on such substrates, and many of the theoretical models that worked well for thin, low dielectric constant substrates fail to give good results for thicker or higher permittivity substrates.
In order to determine the range of validity [3] of these models, and to provide a database of measured data for the testing of improved models, this paper describes the results of a comprehensive set of measurements of annular ring slot microstrip antennas. Ten individual antennas were designed and simulated with different substrate thicknesses (0.2 mm, 0.5 mm, 0.75 mm, 1.0 mm, 1.25 mm, 1.75 mm, 2.0 mm, 2.5 mm, 2.75 mm and 3.0 mm). The dielectric constant is held constant at 4.2 and each antenna is fed with a coaxial probe. The resonant frequencies are reported for each case, and the simulated results for the different substrate thicknesses are compared.
II. ANTENNA GEOMETRY AND DESIGN
The geometry of the proposed antenna is shown in fig.1. The ground plane lies at the bottom side of the antenna, with a very compact size of 21.75 mm x 31 mm x (0.5 to 3.0) mm. The radiating element of the proposed antenna consists of an annular ring slot operating at approximately 2.9 GHz (a slight variation, negligible for wideband operation, is noticed in table I). The operating frequency is taken as 3 GHz. The other parameters are calculated using [6] and found to be: W = 31 mm, h = 0.5 to 3.0 mm (assumed), eps_eff = 4.63, L_eff = 23.2 mm, dL = 0.72 mm and L = 21.75 mm at eps_r = 4.2 (assumed). For a microstrip antenna the resonant frequency is given as [1]:

f_r = c / [2 (L + 2 dL) sqrt(eps_eff)]

where the length extension dL is

dL = 0.412 h [(eps_eff + 0.3)(W/h + 0.264)] / [(eps_eff - 0.258)(W/h + 0.8)]

and the optimum value of W is

W = (c / 2 f_r) sqrt[2 / (eps_r + 1)]

The shift in resonant frequency is

df_r = -f_r (eps_e - eps_e0) / eps_e0, where eps_e = eps_e0 + d(eps_e)

fig.1: Annular ring slot antenna with W = 31 mm, h = (0.5 to 3.0) mm (assumed), eps_eff = 4.63, L_eff = 23.2 mm, dL = 0.72 mm and L = 21.75 mm at eps_r = 4.2, fed by a coaxial probe at (15.15 mm, 2.975 mm)
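The design equations can be checked numerically. The sketch below uses the paper's quoted values and c = 3e8 m/s; it reproduces W = 31 mm and a resonant frequency of about 3 GHz from L = 21.75 mm, dL = 0.72 mm and eps_eff = 4.63:

```python
import math

C0 = 3e8  # speed of light, m/s

def patch_width(f_r, eps_r):
    """Optimum width: W = (c / 2 f_r) * sqrt(2 / (eps_r + 1))."""
    return (C0 / (2.0 * f_r)) * math.sqrt(2.0 / (eps_r + 1.0))

def length_extension(h, eps_eff, w):
    """Open-end length extension dL (Hammerstad form used in [1])."""
    return 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / \
                       ((eps_eff - 0.258) * (w / h + 0.8))

def resonant_frequency(length, d_l, eps_eff):
    """f_r = c / (2 (L + 2 dL) sqrt(eps_eff))."""
    return C0 / (2.0 * (length + 2.0 * d_l) * math.sqrt(eps_eff))

W = patch_width(3.0e9, 4.2)                       # ~0.031 m, i.e. 31 mm
f = resonant_frequency(21.75e-3, 0.72e-3, 4.63)   # ~3.0 GHz
```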
III. MEASUREMENTS AND RESULTS
The microstrip antenna of annular ring slot shape is designed at 3.0 GHz, and various substrate thicknesses have been used to simulate it. The ARSMSA is fed by a coaxial probe at (15.15 mm, 2.975 mm). The antennas were tested for VSWR and return loss using Zeland IE3D software. A careful simulation study of the resonant frequency, bandwidth and return loss of the antenna was undertaken, and the results for return loss are presented. The radiation elements
of the proposed antenna consist of an annular ring slot operating at approximately 2.9 GHz (a slight variation is noticed in table I).
The results obtained are given in table II. It can be observed that the variation in return loss, from -5.869 dB to -42.16 dB, is obtained over the various substrate thicknesses.

TABLE II
RETURN LOSS (MAXIMUM VALUE) FOR DIFFERENT DIELECTRIC THICKNESSES

Dielectric thickness (mm)   Maximum dB value
0.2                         -5.869
0.5                         -19.8
0.75                        -16.8
1                           -16.68
1.25                        -17.87
1.75                        -20.66
2                           -24.26
2.5                         -29.89
2.75                        -42.16
3                           -30.49

The plots of return loss for the different dielectric thicknesses are given in fig.2 to fig.11. Other parameters such as VSWR, radiation resistance and the Smith chart are not included this time.
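Since the antennas were also tested for VSWR, it is worth recalling how return loss maps to reflection coefficient and VSWR. The sketch below applies the standard conversion to the two extreme values in table II:

```python
def return_loss_to_vswr(rl_db):
    """Convert a (negative) return loss in dB to |Gamma| and VSWR."""
    gamma = 10.0 ** (rl_db / 20.0)
    return gamma, (1.0 + gamma) / (1.0 - gamma)

g_thin, vswr_thin = return_loss_to_vswr(-5.869)    # 0.2 mm substrate
g_thick, vswr_thick = return_loss_to_vswr(-42.16)  # 2.75 mm substrate
# The thinnest substrate is poorly matched (VSWR ~ 3), while the
# -42.16 dB case is almost perfectly matched (VSWR ~ 1.016).
```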
Fig.2: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 0.2mm
Fig.3: Return loss of annular ring slot antenna
with discussed parameters and Substrate thickness of 0.5mm
Fig.4: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 0.75mm
Fig.5: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 1.0mm
Fig.6: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 1.25mm
Fig.7: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 1.75mm
Fig.8: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 2.0mm
Fig.9: Return loss of annular ring slot antenna with discussed
parameters and Substrate thickness of 2.5mm
Fig.10: Return loss of annular ring slot antenna with discussed parameters and Substrate thickness of 2.75mm
Fig.11: Return loss of annular ring slot antenna with discussed parameters and Substrate thickness of 3.0mm
Fig.12: Return loss Vs substrate thickness curve
Fig.12 shows the variation of return loss as a function of substrate thickness. It is clear from fig.12 that the negative value of the return loss increases as the thickness increases: the return loss is only -5.869 dB at a thickness of 0.2 mm, while it deepens to -42.16 dB as the substrate thickness is increased.
IV. CONCLUSION
This paper has presented a set of measurements of annular ring slot antennas on substrates of different thicknesses and a fixed dielectric constant of 4.2. The selection of a suitable substrate thickness is therefore essential in the design of a radiator, and the results shown in fig.12 are very useful for selecting suitable substrates for specific annular ring slot antenna applications. For the annular ring slot microstrip antenna (ARSMSA) designed at 3.0 GHz and simulated on substrate thicknesses ranging from 0.5 mm to 3.0 mm using Zeland IE3D software, the variation in return loss is found to be from -5.869 dB to -42.16 dB. A slight variation in resonance frequency is also noticed, which can be neglected for wideband applications.
REFERENCES
[1] I. J. Bahl and P. Bhartia, Microstrip Antennas, Artech House, pp. 1-65.
[2] D. M. Pozar and S. M. Voda, "A rigorous analysis of microstripline fed patch antenna," IEEE Trans. on A & P, no. 6, 1982.
[3] M. Kara, "Effective permittivity of rectangular microstrip antenna elements with various thicknesses of substrates," Microwave & Optical Technology, vol. 10, issue 4, Nov. 1995.
[4] D. H. Schaubert, D. M. Pozar, A. Adrian, "Effect of microstrip antenna substrate thickness and permittivity: comparison of theories with experiment," IEEE Transactions on Antennas and Propagation, vol. 31, no. 6, 1989.
[5] J. Bahl and P. Bhartia, "Design of microstrip antennas covered with a dielectric layer," IEEE Trans. Antennas Propagat., vol. AP-30, pp. 314-318, Mar. 1982.
[6] C. A. Balanis, Antenna Theory: Analysis and Design, 3rd ed., Hoboken, New Jersey: Wiley, 2005, Chapter 14.
[7] K. Chang, Microwave Ring Circuits and Antennas, John Wiley & Sons, 1996, pp. 125-189.
[8] D.-C. Chang, J.-C. Liu, B.-H. Zeng, C.-Y. Wu, C.-Y. Liu, "Compact double-ring slot antenna with ring-fed for multiband applications," International Symposium on Antennas and Propagation (ISAP 2006), pp. 1-5.
TABLE I
RETURN LOSS FOR DIFFERENT DIELECTRIC THICKNESSES AND RESONANCE FREQUENCIES

Frequency   Return Loss (dB) at each thickness of dielectric substrate (mm)
(GHz)       0.2     0.5     0.75    1       1.25    1.75    2       2.5     2.75    3
2.874       -1.331  -4.535  -2.263  -2.287  -2.697  -4.578  -6.12   -11.48  -17.1   -27.63
2.879       -1.686  -6.325  -2.728  -2.652  -3.072  -5.149  -6.878  -13.09  -20.33  -30.49
2.889       -2.995  -16.05  -4.283  -3.757  -4.156  -6.748  -9.012  -18.31  -42.16  -20.31
2.896       -4.432  -19.8   -5.964  -4.817  -5.141  -8.151  -10.91  -25.01  -25.02  -16.72
2.897       -4.697  -17.19  -6.301  -5.016  -5.322  -8.404  -11.26  -26.72  -23.74  -16.28
2.905       -6.446  -8.187  -9.83   -6.917  -6.981  -10.7   -14.52  -29.89  -17.68  -13.61
2.912       -5.869  -4.776  -16.16  -9.894  -9.413  -14.03  -19.62  -19.53  -14.27  -11.67
2.913       -5.701  -4.57   -16.8   -10.24  -9.689  -14.41  -20.22  -19     -14.02  -11.52
2.92        -3.933  -3.069  -15.19  -14.52  -13.07  -18.86  -24.26  -15.05  -11.95  -10.18
2.927       -2.515  -2.115  -9.302  -16.68  -17.46  -20.66  -18.49  -12.24  -10.23  -8.999
2.93        -2.198  -1.901  -8.068  -15.32  -17.87  -19.24  -16.8   -11.54  -9.775  -8.672
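Reading table I column-wise gives the resonance (the frequency of deepest return loss) for each thickness. A minimal sketch for one column, with the numbers copied from the table:

```python
# Frequencies (GHz) and return loss (dB) for h = 2.75 mm, from table I
freqs = [2.874, 2.879, 2.889, 2.896, 2.897, 2.905,
         2.912, 2.913, 2.92, 2.927, 2.93]
rl_h275 = [-17.1, -20.33, -42.16, -25.02, -23.74, -17.68,
           -14.27, -14.02, -11.95, -10.23, -9.775]

# Resonance = frequency at which the return loss is deepest
f_res = freqs[rl_h275.index(min(rl_h275))]   # 2.889 GHz
```

This is consistent with the paper's observation that the antennas resonate near 2.9 GHz rather than exactly at the 3.0 GHz design frequency.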
ANALYSIS OF TWO LAYER ANNULAR RING MICROSTRIP ANTENNA
Puneet Khanna 1, V. K. Pandey 2, Kshitij Shinghal 3
1 Department of Electronics and Communication Engineering, IFTM University, Moradabad.
Email id: puneetkhanna2k2@yahoo.com, puneetkhanna2k2@gmail.com
2 Department of Electronics and Communication Engg., N.I.E.T, Greater Noida.
3 Department of Electronics and Communication Engg., Moradabad Institute of Technology, Moradabad
Abstract— In this paper the analysis of a two layer proximity coupled annular ring antenna is discussed for different relative permittivities of different substrate materials. The design frequency of the patch antenna is chosen to be 3 GHz. After calculating the various parameters, such as inner radius, outer radius, effective inner radius and effective outer radius, the antenna impedance is matched to 28 ohm for stacking. The VSWR, return loss, quality factor and reflection coefficient are observed. These results are obtained through MATLAB.
I. INTRODUCTION
In recent years the area of microstrip antennas has seen much inventive work and is one of the most dynamic fields in communications. To simplify analysis and performance prediction, the patch is generally rectangular, circular or annular. Due to the narrow bandwidth and low gain of the circular disk, attempts were made to improve its characteristics by increasing the substrate thickness or by using other structures such as the annular ring.
II. THEORETICAL CONSIDERATION
The equivalent circuit of the Double Layer Annular Ring Microstrip Antenna (ARMSA) is represented as a parallel combination of a resistor R, an inductor L, and a capacitor C, as shown in Fig. 1. The values of R, L and C and the parameters of the double layer annular ring microstrip antenna are given by equations (1)-(8), where

a = inner radius of annular ring antenna
a_e = effective inner radius of annular ring antenna
b = outer radius of annular ring antenna
b_e = effective outer radius of annular ring antenna
μ = permeability of the substrate
h = thickness of dielectric substrate
k = resonance wave number
W = width of patch
ε_re = effective relative dielectric permittivity
ε_e = effective dielectric constant of the substrate

Fig. 1. Equivalent circuit of Double Layer Annular Ring Microstrip Antenna (two parallel sections R1-L1-C1 and R2-L2-C2).

A. Parameters of Annular Ring Microstrip Antenna
The effective outer radius and the effective value of the inner radius are given by equations (2)-(7). The feed location is (e, 0), and E_nm denotes the electric field distribution in the annular ring microstrip antenna for the TM_nm mode, given by equations (8) and (9), where

y(a, b) = mutual coupling between the apertures
g(a, a) = edge conductance at inner radius
g(b, b) = edge conductance at outer radius
The value of the mutual admittance between the apertures at ρ = a and ρ = b is given by equations (10) and (11). The values of the edge conductances g(a, a) and g(b, b) are obtained by substituting b = a and a = b respectively in equation (10) and retaining the real part only.

B. Input Impedance
The input impedance of the two layer annular ring microstrip antenna is that of the parallel combination of R, L and C, in which a mutual inductance Lm and a mutual capacitance Cm are also taken into account, as shown in Fig. 2.

Fig. 2. Equivalent circuit of Double Layer ARMSA (parallel R1-L1-C1 and R2-L2-C2 sections coupled through Lm and Cm).

The input impedance is given by equation (12), where Z_in is the input impedance, Z_T is the total impedance of the circuit, and X_L is the feed reactance of the coaxial feed. In equations (13) and (14), R_T represents the resistance responsible for the radiation, dielectric and copper losses.

C. Radiation Pattern
The radiation pattern of the two layer annular ring microstrip antenna is calculated by using equations (22) and (23).

III. DESIGN SPECIFICATIONS

Parameter                           Value
Inner radius                        a = 3.0 cm
Outer radius                        b = 6.0 cm
Effective inner radius              a_e = 2.8306 cm
Effective outer radius              b_e = 6.1764 cm
Width of the patch                  W = 3.0 cm
Relative permittivity of materials  ε_r = 1.03, 2.54, 2.91, 4.8, 6.11
Thickness of dielectric substrate   h = 0.157 cm
Resonant wave number                k_1 = 0.994
Design frequency                    3 GHz
Separating the real and imaginary parts of Z_in gives equations (15)-(18). The reflection coefficient (Γ), VSWR and return loss are then calculated as

Γ = (Z_in - Z_O)/(Z_in + Z_O)   (19)

where Z_O is the characteristic impedance of the coaxial feed,

VSWR = s = (1 + |Γ|)/(1 - |Γ|)   (20)

Return loss = -10 log |Γ|²   (21)

IV. RESULTS
From Fig. 3 it is clear that the input impedance increases linearly with frequency. The input impedance at resonance depends heavily on the loss tangent of the substrate material and is inversely proportional to it: the substrate having the lowest loss tangent (ε_r = 1.03, tan δ = 1.5 × 10⁻⁴) provides the highest impedance at resonance, whereas the substrate having the highest loss tangent (ε_r = 4.8, tan δ = 3.66 × 10⁻²) provides the lowest impedance. The input impedance is thus highest for the substrate having the lowest loss tangent and lowest for the substrate with the highest loss tangent. The loss tangent of the substrate material therefore plays a more important role in controlling the input impedance of the antenna than the substrate permittivity ε_r. In Fig. 4, the highest value of the reflection coefficient is shown by the substrate material having relative permittivity ε_r = 4.8 and the lowest value by the material having ε_r = 1.03, which provides the best matching with the coaxial feed.

The substrate with relative permittivity ε_r = 1.03 also shows the lowest value of return loss and hence the best matching with the coaxial feed, as shown in Fig. 5. From Fig. 5 and Fig. 6 the variation of return loss and VSWR with frequency can be seen easily; these plots show that at the resonance frequency the VSWR has its minimum values.
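As a numerical illustration of equations (19)-(21), the matching metrics can be computed directly from an input impedance. The 50-ohm characteristic impedance and the sample Z_in below are illustrative assumptions, not values taken from the paper.

```python
# Matching metrics from equations (19)-(21). The 50-ohm characteristic
# impedance and the sample input impedance are illustrative assumptions.
import math

def match_metrics(z_in: complex, z0: float = 50.0):
    gamma = (z_in - z0) / (z_in + z0)            # reflection coefficient, eq. (19)
    mag = abs(gamma)
    vswr = (1 + mag) / (1 - mag)                 # eq. (20)
    return_loss_db = -10 * math.log10(mag ** 2)  # eq. (21), i.e. -20*log10|gamma|
    return gamma, vswr, return_loss_db

gamma, vswr, rl = match_metrics(complex(28, 5))
print(round(abs(gamma), 3), round(vswr, 3), round(rl, 2))
```

A perfect match (Z_in = Z_O) gives Γ = 0, s = 1 and an infinite return loss, which is why the lowest-loss-tangent substrate, with the smallest |Γ|, shows the best matching in Figs. 4-6.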
Fig. 7 shows the variation of the quality factor with frequency. The quality factor depends directly on the frequency and inversely on the loss tangent of the material, while the coupling factor depends inversely on the frequency and directly on the loss tangent. The radiation pattern of the antenna is shown in Fig. 8, which indicates that the substrate with the lowest relative permittivity (ε_r = 1.03) gives the maximum radiated power. In the direction of maximum radiation, the level of enhancement in radiated power depends directly on the relative permittivity of the substrates.
Figure 3. Plot between input impedance and frequency (real and imaginary parts of Z for ε_r = 1.03, 2.54, 2.91, 4.8 and 6.11)
Figure 4. Variation of reflection coefficient with frequency
Figure 5. Variation of return loss with frequency
Figure 6. Variation of VSWR with frequency
Figure 7. Variation of quality factor with frequency
Figure 8. E-plane radiation pattern

V. CONCLUSION
We have presented theoretical results for a stacked annular ring antenna for different relative permittivities. The input impedance, radiation pattern, VSWR, return loss, reflection coefficient, and quality factor have been studied. From the plot of VSWR against frequency it is observed that the antenna whose dielectric substrate has a relative dielectric constant of 4.8 gives the maximum bandwidth.

REFERENCES
[1] R. L. Yadav and B. R. Vishwakarma, "Analysis of electromagnetically coupled two layer elliptical microstrip stacked antennas," Int. J. Electronics, no. 8, pp. 981-993, 2000.
[2] B. K. Kannaujia, "Reactively loaded annular ring microstrip antenna for multi band operation," Indian Journal of Radio and Space Physics, vol. 35, pp. 122-128, April 2006.
[3] J. A. Ansari, Ram Brij Ram, Satya Prakash Dubey and Prabhakar Singh, "A frequency agile stacked annular ring microstrip antenna using a Gunn diode," IOP Science, September 2007.
[4] Nirul Kumprasert and Wiwat Kiranon, "Simple and accurate formula for resonance of circular microstrip disk antenna," IEEE Trans. on Ant. and Prop., vol. 93, no. 11, Nov. 1995.
[5] W. C. Chew, "A broadband annular ring microstrip antenna," IEEE Trans. on Ant. and Prop., vol. AP-30, no. 5, Sept. 1982.
[6] J. S. Dahele and K. E. Lee, "Characteristics of annular ring microstrip antenna," Electronics Letters, vol. 18, no. 24, pp. 1051-1052, 1982.
[7] I. J. Bahl and P. Bhartia, Microstrip Antennas, Artech House, Inc., 1980.
[8] J. W. Mink, "Circular ring microstrip antenna elements," IEEE AP-S Int. Symp., Quebec, Canada, June 1980.
[9] S. A. Bokhari et al., "Near fields of microstrip antennas," IEEE Trans. on Ant. and Prop., no. 2, Feb. 1995.
[10] A. K. Bhattacharya and R. Garg, "Input impedance of annular ring microstrip antenna using circuit theory approach," IEEE Trans. on Antennas and Prop., vol. AP-33, no. 4, April 1985.
[11] A. K. Bhattacharya and R. Garg, "Spectral domain analysis of wall admittances for circular and annular microstrip patches and the effect of surface waves," IEEE Trans. on Antennas and Prop., vol. AP-33, no. 10, Oct. 1985.
COMPARATIVE ANALYSIS OF EXPONENTIALLY SHAPED MICROSTRIP-FED PLANAR MONOPOLE ANTENNA WITH AND WITHOUT NOTCH

M. Venkata Narayana, K. Pushpa Rupavathi
Department of Electronics and Communications, K L University
venqut@gmail.com, pushparupa@gmail.com
Abstract: This paper describes the comparison of a microstrip-fed monopole ultra-wideband antenna with and without a notch. The notch-band characteristic is achieved with a C-shaped ring resonator. The paper gives a better understanding of antenna parameters such as return loss and VSWR, and of the variation in antenna performance with and without the notch. It also presents the effect of the notched antenna's parameters on gain. Finally, simulation is done using Ansoft HFSS software. Measurements indicate that the antenna operates in the band from 2.25-10.2 GHz and has band-rejection characteristics in 5.16-5.85 GHz, which covers the wireless Local Area Network band.

Keywords: monopole ultra-wideband antenna, ring resonator, comparison, VSWR, return loss
I. INTRODUCTION
In recent years, antenna design for Ultra-Wideband (UWB) systems has attracted increasing interest due to the appealing properties of this new communication standard. It is a well-known fact that planar monopole antennas present really appealing physical features, such as simple structure, small size and low cost. Additionally, planar monopoles are compact broadband omnidirectional antennas, and are also non-dispersive. Due to all these interesting characteristics, planar monopoles are extremely attractive for emerging UWB applications, and growing research activity is being focused on them.
The Federal Communications Commission (FCC) allocated the unlicensed 3.1-10.6 GHz band for commercial applications of UWB technology, and planar monopole antennas are good options for portable UWB devices. Over the UWB frequency band there exist operating bands of other wireless systems, such as the 5.2 GHz (5150-5350 MHz) and 5.8 GHz (5725-5825 MHz) bands, which might cause interference with the UWB system. A UWB antenna with band-rejection characteristics is considered necessary to overcome this disadvantage.

In this paper, the exponentially shaped cut patch and the upper edges of the ground plane are optimized to enhance the performance of the antenna. To eliminate the WLAN band (5.15-5.825 GHz), a down-turned C-shaped ring that can enforce antenna resonance at the center frequency of the WLAN band is etched on the bottom layer of the antenna. The comparison of this exponentially shaped cut patch antenna with and without the notch clearly gives a better understanding of the performance features of the antenna affected by interference.

II. ANTENNA GEOMETRY
The structure of the antenna without the notch is shown in Fig. 1 and with the notch in Fig. 2. The antenna is implemented on a Rogers RT/duroid 5880 substrate with 0.7874 mm thickness and a relative dielectric constant of 2.22.

Fig. 1. Geometry of the exponentially shaped microstrip-fed planar monopole antenna without notch: (a) top view, (b) bottom view.
Fig. 2. Geometry of the microstrip-fed planar monopole antenna with notch: (a) exponentially shaped lower edges of the patch antenna, (b) exponentially shaped upper edges of the ground plane.
The antenna is composed of three main sections: a rectangular patch with two cuts; a microstrip feed trace with a tapered transition to the patch (Fig. 2(a)); and a partial ground plane with exponentially curved upper edges (Fig. 2(b)). The simulations and optimizations are based on Ansoft HFSS v11. The exponentially shaped upper edges are used to cover only the microstrip feed line. To achieve a notch at the WLAN center frequency (5.5 GHz), a C-shaped ring on the back of the substrate is used (Fig. 2(b)). This ring and the exponentially shaped patch radiator can operate as a resonator at 5.5 GHz. By tuning the C-shaped ring parameters, namely the length of the ring (Rs, alpha), the distance of the upper edge of the ring from the upper edge of the patch (Zs), and the width of the ring (delta), the notched frequency and notch depth can be adjusted.

III. STUDY OF ANTENNA PARAMETERS
The effects on the impedance bandwidth of the main parts of the proposed antenna with notch, consisting of the radiating element (cut-shaped patch), the ground plane, and the notch element (C-shaped ring), are discussed; for the antenna without notch, the effects of the ground plane, the patch and the feed section that need optimisation are also discussed. The patch is optimized for the best return-loss performance and bandwidth. The feed section is optimized to improve the matching of the microstrip line to the patch. The ring is optimized by tuning its four parameters, the angle alpha, the radius Rs, the rear C-shaped ring width delta, and Zs (the distance between the upper edge of the patch and the center of the C-shaped ring), for the best achievable VSWR.

IV. RESULTS AND COMPARISON
The simulated return loss of the proposed antenna without notch is shown in Fig. 3 and with notch in Fig. 4; the VSWR for the two cases is shown in Fig. 5 and Fig. 6 respectively. Both antennas cover the UWB range, but using the notch offers the advantage of UWB communication without any interference from the WLAN band. The results clearly show that the VSWR and return-loss curves of the antenna with notch reject the WLAN band; in that band the antenna is not matched and hence does not radiate. The other parameters, such as peak directivity, peak gain and radiated power, are also optimized with the notch; they are given in Table I at 7 GHz, because from 7 GHz onwards spurious radiation appears. The simulated and measured radiation patterns without notch are shown in Fig. 7 and Fig. 8, and with notch in Fig. 9 and Fig. 10.

Fig. 3. Simulated return loss of antenna without notch
Fig. 4. Simulated return loss of antenna with notch
Fig. 5. Simulated VSWR of antenna without notch
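The directivity, gain and efficiency figures reported in Table I can be cross-checked against the standard relation G = e_r · D (peak gain = radiation efficiency times peak directivity), which is textbook antenna theory rather than a formula stated in this paper:

```python
# Cross-check of Table I: peak gain should equal radiation efficiency
# times peak directivity (G = e_r * D). Values are copied from Table I.
cases = {
    "without notch": {"directivity": 1.091, "efficiency": 1.0047, "gain": 1.097},
    "with notch":    {"directivity": 2.701, "efficiency": 0.9948, "gain": 2.687},
}
for name, c in cases.items():
    predicted = c["efficiency"] * c["directivity"]
    # Both rows agree with the reported gain to within about 0.001.
    print(f"{name}: e_r * D = {predicted:.4f}, reported gain = {c['gain']:.4f}")
```

The table is internally consistent under this relation; the without-notch efficiency slightly above unity appears to be a numerical artifact of the simulation rather than a physical result.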
Fig. 6. Simulated VSWR of antenna with notch

TABLE I. Computed antenna parameters with and without notch at 7 GHz

Parameter             Value (without notch)   Value (with notch)
Peak directivity      1.091                   2.701
Peak gain             1.097                   2.687
Radiated power        1.005                   0.987
Radiation efficiency  1.0047                  0.9948

V. CONCLUSION
A comparison of a microstrip-fed planar ultra-wideband antenna with and without a notch has been performed for UWB applications. To achieve notch-band characteristics we use an optimized rear C-shaped ring as a resonator. Without the notch we have found that the exponential cutting of the lower edges of the patch and the extension of the upper edge of the partial ground plane with an exponential curvature (together with the tapered feed) give excellent return-loss performance at the middle and the end of the UWB band. With the notch we found that the rear C-shaped ring resonator, the exponential cutting of the lower edges of the patch and the extended upper edge of the partial ground plane give excellent return-loss performance over the entire UWB band, excluding the notched band.

Fig. 7. Simulated and measured E-plane (xz-plane) radiation patterns of the proposed antenna without notch at: (a) 3 GHz, (b) 7 GHz
Fig. 8. Simulated and measured H-plane (zy-plane) radiation patterns of the proposed antenna without notch at: (a) 3 GHz, (b) 7 GHz
Fig. 9. Measured and simulated E-plane (xz-plane) radiation patterns with notch at: (a) 3.5 GHz and (b) 7 GHz
Fig. 10. Measured and simulated H-plane (xy-plane) radiation patterns with notch at: (a) 3.5 GHz and (b) 7 GHz

REFERENCES
[1] Z. N. Chen, M. J. Ammann, X. M. Qing, X. H. Wu, T. S. P. See, and A. Cai, "Planar antennas," IEEE Microwave Magazine, vol. 7, no. 6, pp. 63-73, Dec. 2006.
[2] Z. Q. Wang, W. Hong, Z. Q. Kuai, C. Yu, Y. Zhang, and Y. D. Dong, "Compact ultra-wideband antennas with multiple notches," Proceedings of ICMMT Conference, pp. 266-269, 2008.
[3] G. Schantz, G. Wolenec, and E. M. Myszka, "Frequency notched UWB antennas," Proceedings of IEEE Ultra Wideband Systems and Technologies Conference, pp. 214-218, 2003.
[4] C. Y. Huang and W. C. Hsia, "Planar ultra-wideband antenna with a frequency notch characteristic," Microwave and Opt. Technol. Lett., vol. 49, no. 2, pp. 316-320, Feb. 2007.
[5] K. Chung, S. Hong, and J. Choi, "Ultra wide-band printed monopole antenna with band-notch filter," IET Microwaves, Antennas & Propagation, vol. 2, pp. 518-522, Jan. 2007.
[6] A. H. M. Zahirul Alam, Rafiqul Islam, and Sheroz Khan, "Design of a tuning fork type UWB patch antenna," Int. Journal of Computer Science and Eng., vol. 1, no. 4, pp. 240-243.
[7] Federal Communications Commission, First Report and Order, "Revision of Part 15 of the Commission's Rules Regarding Ultra-Wideband Transmission Systems," FCC 02-48, April 22, 2002.
[8] Tao Yuan, Cheng-Wei Li, Mook Seng Leong, and Qun Zhang, "Elliptically shaped ultra-wideband patch antenna with band-notch features," Microwave and Opt. Tech. Letters, vol. 50, no. 3, pp. 736-738, March 2008.
[9] Zhi Ning Chen, S. P. See, and Xianming Qing, "Small printed ultra wideband antenna with reduced ground plane effect," IEEE Transactions on Antennas and Propagation, vol. 55, no. 2, February 2007.
[10] T. Yang and W. A. Davis, "Planar half-disk antenna structures for ultrawideband communications," in Proc. IEEE Int. Symp. Antennas Propagation, Jun. 2004, vol. 3, pp. 2508-2511.
[11] Kraus and Fleisch, Electromagnetics with Applications, fifth edition, McGraw-Hill, p. 132.
Congestion Control Algorithms for Efficient Satellite Communication

Harish Kr. Mishra¹, Dr. R. L. Yadava²
¹Ph.D. Research Scholar, Department of Electronics and Communication Engineering, TMU, Moradabad (UP)
²Professor, Department of Electronics & Communication Engineering, GCET, Gr. Noida
Phone No. (+91) 9415177465, Email: rblharish@gmail.com
Abstract: In this paper, we introduce a new TCP congestion control mechanism for satellite IP networks, TCP-Cherry. It uses low-priority data segments, namely supplement segments, that probe the available network resources as well as carry new data blocks not yet transmitted as regular data segments. Our new congestion control algorithms, Fast-Forward Start and First-Aid Recovery, use the supplement segments. A unique characteristic of our scheme is the selection mechanism for supplement segments, which keeps the overhead of duplicate transmissions to a minimum. Simulation results show that TCP-Cherry yields an improvement of more than 150% in goodput compared with other existing TCP congestion control schemes.

Keywords: satellite communication, congestion, networks
1. Introduction
In recent years, satellite links have been emerging as viable options for supporting Internet connectivity. However, conventional TCP protocols have performance problems in satellite networks with long propagation delays and relatively high link error rates [1].

2. Related Works
TCP-Peach+ [2] is an improvement of TCP-Peach [1]. TCP-Peach+ replaces Slow Start and Fast Recovery in TCP-Reno or TCP-NewReno with Jump Start and Quick Recovery respectively. Jump Start and Quick Recovery use NIL segments for probing the available bandwidth in the network. Because NIL segments carry unacknowledged data blocks, they can be used to recover lost segments at the receivers. In satellite networks with high link error rates, NIL segments may be efficient in some scenarios.
3. Our Proposal: TCP-Cherry
3.1 Overview
TCP-Cherry [3] replaces Jump Start and Quick Recovery in TCP-Peach+ with Fast-Forward Start and First-Aid Recovery respectively. These algorithms use new probing low-priority data segments, i.e., supplement segments. The new probing data segments can be characterized as follows.
1. The probing low-priority data segments, i.e., supplement segments, carry new data.
2. A unique selection mechanism for supplement segments chooses them from a reasonable distance away from the regular data segments.
3. In case some supplement segments are lost, the data blocks on the supplement segments are retransmitted in regular data segments.

3.2 Supplement Segments
We introduce a new type of low-priority data segment, named the supplement segment, that carries data blocks yet to be transmitted. A data block on a supplement segment is selected so as not to overlap data blocks on regular data segments as much as possible, using the algorithms in Figs. 1 and 2. The variables, parameters and functions used in the algorithms are defined as follows.

The variable seqno4supple holds the sequence number of the data block on the next forwarded supplement segment.
The parameter ThreshData indicates the largest sequence number of the data blocks transmitted as a regular data segment.
The parameter SkipData indicates the first supplement segment that should be sent at the early stages of either Fast-Forward Start or First-Aid Recovery.
The function Get_Next_Send(x, y) returns the sequence number of the data segment that should be transmitted next following segment x. This next segment should be the y-th among the segments after x that are unacknowledged and are yet to be transmitted as regular data segments.
Calculate_Skip_Window ( )
    if just before Fast-Forward Start
        skip_window = maxcwnd;
    end;
    if just before First-Aid Recovery
        skip_window = cwnd * (MaxRTT / RTT);
    end;
end;

Figure 1. The Skip Size Determination Algorithm.
Select_Data4Supple ( )
    if just before Fast-Forward Start / First-Aid Recovery
        Calculate_Skip_Window ( );
        SkipData = Get_Next_Send(ThreshData, +1);
        seqno4supple = SkipData;
    else
        seqno4supple = Get_Next_Send(seqno4supple, 1);
    end;
end;

Figure 2. The Data Selection Algorithm for Supplement Segments.
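The two figures above can be sketched in executable form. The set-based bookkeeping, the skip_window name, and the interpretation of the skip offset are assumptions made for illustration; only the control flow follows the paper's pseudocode.

```python
# Hedged sketch of supplement-segment selection (Figs. 1 and 2).
class SupplementSelector:
    def __init__(self, maxcwnd, cwnd, rtt, max_rtt):
        self.maxcwnd = maxcwnd
        self.cwnd = cwnd
        self.rtt = rtt
        self.max_rtt = max_rtt
        self.sent_regular = set()   # seqnos already sent as regular segments
        self.acked = set()          # seqnos already acknowledged
        self.seqno4supple = 0       # data block for the next supplement segment

    def calculate_skip_window(self, phase):
        # Figure 1: the skip size depends on which algorithm is starting.
        if phase == "fast-forward-start":
            return self.maxcwnd
        if phase == "first-aid-recovery":
            return int(self.cwnd * (self.max_rtt / self.rtt))
        raise ValueError(phase)

    def get_next_send(self, x, y):
        # The y-th seqno after x that is unacked and not yet sent as regular.
        count, seq = 0, x
        while count < y:
            seq += 1
            if seq not in self.sent_regular and seq not in self.acked:
                count += 1
        return seq

    def select(self, thresh_data=None, phase=None):
        # Figure 2: just before FFS/FAR, skip ahead of the regular stream;
        # afterwards, walk forward one eligible data block at a time.
        if phase is not None:
            skip = self.calculate_skip_window(phase)
            self.seqno4supple = self.get_next_send(thresh_data, skip)
        else:
            self.seqno4supple = self.get_next_send(self.seqno4supple, 1)
        return self.seqno4supple
```

For example, with maxcwnd = 4 and nothing yet sent or acknowledged beyond ThreshData = 10, select(thresh_data=10, phase="fast-forward-start") picks block 14, a full window ahead of the regular stream, and the next plain select() picks 15.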
3.3 Fast-Forward Start Algorithm
At the beginning of Fast-Forward Start, the TCP sender sets cwnd, i.e., the congestion window, to a value of 1. In addition to 1 regular data segment, the sender sends (maxcwnd - 1) supplement segments as in Fig. 3, where maxcwnd is the maximum value of the congestion window size.
Fast-Forward Start( )
    cwnd = 1; tau = RTT/maxcwnd;
    send(Data_Segment);
    for i = 1 to (maxcwnd - 1)
        wait(tau); send(Supplement_Segment);
    end;
end;

Figure 3. Fast-Forward Start Algorithm.
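The pacing in Fig. 3 can be illustrated with plain arithmetic: one regular segment at time zero, then maxcwnd - 1 supplement segments spaced RTT/maxcwnd apart, so the whole probe train fits inside a single RTT. This is an illustrative sketch, not the authors' code, and the small maxcwnd is chosen only for readability.

```python
# Sketch of the Fast-Forward Start transmission schedule (Fig. 3).
# Returns (send_time, kind) pairs instead of performing real waits.
def fast_forward_start_schedule(maxcwnd: int, rtt: float):
    interval = rtt / maxcwnd                 # pacing gap between segments
    events = [(0.0, "data")]                 # the single regular data segment
    for i in range(1, maxcwnd):              # maxcwnd - 1 supplement segments
        events.append((i * interval, "supplement"))
    return events

events = fast_forward_start_schedule(maxcwnd=8, rtt=0.550)  # RTT from Section 4
print(len(events), events[-1])
```

The last supplement segment leaves before one RTT has elapsed, so the probe train cannot outlast the first round trip.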
3.4 First-Aid Recovery Algorithm
In TCP-Cherry, when duplicate ACKs detect a segment loss, the sender first retransmits the lost segment by Fast Retransmit. After that, it invokes First-Aid Recovery (Fig. 4). When a sender is allowed to transmit a regular data segment, it first fills in the holes in the scoreboard before ThreshData. This ensures that no lost regular data segment is left out when a new regular data segment is transmitted.

First-Aid_Recovery( )
    cwnd = cwnd/2;
    adps = 0;
    END = 0;
    pipe = 2*cwnd;
    tau = (RTT/2)/(maxcwnd - cwnd);
    send (maxcwnd - cwnd) supplement segments, one every interval tau;
    while (END = 0)
        if (ACK_ARRIVAL)
            if (Duplicate ACK)
                pipe = pipe - 1;
                update scoreboard;
            else if (Partial ACK)
                pipe = pipe - amountacked;
                update HighAck;
                update scoreboard;
            else if (Supplement ACK)
                cwnd = cwnd + 1;
                update scoreboard;
            else if (Recover ACK)
                update HighAck;
                END = 1;
            end;
            adps = cwnd - pipe;
            nps = min(maxburst, adps);
            if (nps > 0)
                send nps missing packets and/or new packets;
                pipe = pipe + nps;
            end;
        end;
    end;
end;

Figure 4. First-Aid Recovery Algorithm.

4. Evaluation
We evaluate TCP-Cherry on a GEO satellite IP network scenario using the network simulator ns-2. We assume that the number of flows N = 20 and that the satellite link capacity c = 1300 segments/s, which is approximately 10 Mb/s for TCP segments of 1000 bytes; the buffer size of the satellite uplink K = 50 segments; maxcwnd = 64 segments; the buffer size of the earth receiver rwnd = 512 segments; and RTT = 550 ms. We vary the segment loss probability due to error on the satellite link from 10⁻⁶ to 10⁻¹, i.e., from very low to high. The connection duration is 550 s. The simulation results are shown in Fig. 5.

Figure 5. Goodput on GEO Satellite Networks.
5. Conclusion
We introduced a new TCP congestion control mechanism for satellite IP networks, TCP-Cherry. TCP-Cherry uses low-priority data segments, namely supplement segments, that probe the available network resources as well as carry new data blocks not yet transmitted as regular data segments. Our new congestion control algorithms, Fast-Forward Start and First-Aid Recovery, use the supplement segments. A unique characteristic of our scheme is the selection mechanism for supplement segments, which keeps the overhead of duplicate transmissions to a minimum. Simulation results showed that TCP-Cherry yielded an improvement of more than 150% in goodput compared with other existing TCP congestion control mechanisms.
References
[1] I. F. Akyildiz, G. Morabito, S. Palazzo, "TCP-Peach: A New Congestion Control Scheme for Satellite IP Networks," IEEE/ACM Transactions on Networking, vol. 9, no. 3, June 2001, pp. 307-321.
[2] I. F. Akyildiz, X. Zhang, J. Fang, "TCP-Peach+: Enhancement of TCP-Peach for Satellite IP Networks," IEEE Communications Letters, vol. 6, no. 7, July 2002, pp. 303-305.
[3] S. Utsumi, S. M. S. Zabir, N. Shiratori, "TCP-Cherry: A New Approach for TCP Congestion Control over Satellite IP Networks," Computer Communications, Elsevier, vol. 31, June 2008, pp. 2541-2561.
Classification of Material Using Microwave

Anil H. Soni and Ajay A. Gurjar
Department of Electronics, Sipna's College of Engineering and Technology, Amravati (M.S.)
ah.soni@reddifmail.com
Abstract- The classification of materials using microwaves has been analyzed. A microwave radar in the X-band range is used for scanning sheets of various materials, viz. metal, acrylic and wood, in free space. Depending on their respective electromagnetic properties, reflections from each sheet of material are collected and an image of each is obtained. Further, various features such as energy, entropy, normalized sum of image intensity and standard deviation are extracted and fed to a feed-forward multi-layer perceptron classifier. The results show good classification performance.
Keywords- remote sensing, ground penetrating radar, microwave image, feed-forward multi-layer perceptron.
1. INTRODUCTION
Material classification and recognition in uncontrolled or inaccessible environments is an important task in remote sensing applications such as target recognition, surface inspection and shape extraction, as well as in industrial applications. In [1], passive polarimetric imagery was used to distinguish between dielectric and non-dielectric materials by recording the polarization state of the reflected signal. Ground Penetrating Radar (GPR) is used to detect buried objects such as land mines and conduit pipes and in oil and gas exploration. The performance of GPR largely depends on the dielectric contrast between the object and the soil surrounding it [2]. The analysis of an object is subdivided into four stages: 1) object detection; 2) object material recognition; 3) object size estimation; 4) object shape recognition. The objective of this paper is to propose a technique for discriminating between materials having different dielectric constants, classifying them as dielectric or non-dielectric. The paper is organized as follows: Section 2 describes the methodology, including the experimental setup and data collection; Section 3 describes the pre-processing algorithm for microwave image formation and the feature extraction for classification using a feed-forward multi-layer perceptron; in Section 4 the results are illustrated; finally, concluding remarks are given in Section 5.
2. METHODOLOGY
The developed technique comprises data acquisition, preprocessing and image formation, and feature extraction for classification, as shown in Fig. 2.1.

The experimental set-up uses an X-band radar consisting of a reflex klystron as the microwave source, a circulator and a horn antenna. All observations were taken at a frequency of 8.5 GHz, with a plane-polarized wave falling normally on the sheet of material placed in front of the antenna in free space. A single antenna was used for both transmission as well as reception; it was mounted on a moving platform and scans the region under investigation in two-dimensional space. The received backscattered signal is then processed.

Fig. 2.1: Block diagram of the processing stages (reflections from sheet -> data acquisition -> preprocessing and filtering -> image formation and feature extraction -> neural network classifier -> results).
2.1 DATA COLLECTION
For this process, a region of size 1 m x 1 m was taken for observation. The backscattered (also called reflected) signal was received at a spacing of 5 cm along the X and Y directions over a length of 1 meter, as shown in Figure 2.2, by moving the horn antenna on a platform [3]; each dot shows a point where the antenna was placed for data collection. Thus a data matrix of size 20 x 20 was measured for the different sheets of material, i.e. wood, acrylic and metal, at various ranges from the antenna. Each sheet taken for observation had a size of 12 x 13.

Figure 2.2: Grid for scanning (grid spacing 5 cm in x and 5 cm in y)
3. MICROWAVE IMAGE FORMATION, FEATURE EXTRACTION AND CLASSIFICATION
Preprocessing is implemented to reduce the effect of noise and to eliminate the undesired presence of the ground surface echo. The effect of noise was removed by filtering, and thresholding was applied to remove background noise [4]. The processing was carried out with the MATLAB 7.7 tool. A microwave image of each target was formed and its features were extracted as described below.
Table 1: Features of the microwave images

Feature                            Metal    Acrylic   Wooden
Entropy                            3.3789   2.8723    2.9199
Std. deviation                     0.1495   0.0429    0.0286
Energy                             2.7301   1.3571    1.2619
Normalized sum of image intensity  0.0045   0.0026    0.0024
The entropy, energy, standard deviation and normalized sum of image intensity were calculated as follows, where p_q is the information bits per reflection at a point, m is the mean value of the image f(x, y) with N = 400 here, and f(x, y) is the gray value at the point (x, y).

Figure 4.1: Raw and filtered images of the materials
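The four features can be computed with common textbook definitions. The paper's exact equations were not reproduced in this copy, so the histogram-entropy, standard-deviation, energy and normalized-intensity-sum formulas below, and the random stand-in image, are assumptions for illustration.

```python
# Sketch of the four image features named above, using common
# definitions (the paper's exact formulas are assumptions here).
import numpy as np

def image_features(img: np.ndarray):
    n = img.size                                   # N = 400 for a 20 x 20 scan
    p = np.bincount(img.ravel()) / n               # gray-level probabilities
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))              # Shannon entropy of histogram
    std_dev = np.sqrt(np.mean((img - img.mean()) ** 2))
    energy = np.sum(img.astype(float) ** 2)        # sum of squared gray values
    norm_intensity_sum = img.sum() / n             # intensity sum per pixel
    return entropy, std_dev, energy, norm_intensity_sum

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(20, 20))            # stand-in for a microwave image
print([round(float(v), 3) for v in image_features(img)])
```

Applied to the three real images, definitions of this kind produce the per-material feature columns shown in Table 1.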
After feature extraction from the microwave images, a feature vector of size 30 x 3 was fed to a feed-forward multi-layer perceptron, which was trained by the back-propagation algorithm. The network parameters were optimized by minimizing the error. The multi-layer perceptron neural network consists of one hidden layer and one output layer; the hidden layer has ten nodes whereas the output layer has three nodes. A log-sigmoid transfer function was used in the hidden and output layers [5].

4. EXPERIMENTAL RESULTS
The sheets of the different materials were scanned in free space with the X-band radar at a frequency of 8.5 GHz at various ranges from the antenna. The features of each microwave image were calculated and are given in the table above. The raw and filtered images of each material are shown in Figure 4.1. The results show that there is a large variation in the features of metallic versus non-metallic materials, whereas the variation in the features of acrylic and wood is small.

5. CONCLUSION
In this work, the classification of materials on the basis of their reflective properties has been presented. The system allows discrimination between materials having different dielectric constants. The experimental results show that the implemented system exhibits good performance. This work can be extended to classification using a support vector machine.
REFERENCES
[1] Vimal Thilak, Charles D. Creusere and David G. Voelz, "Material Classification Using Passive Polarimetric Imagery," Proc. IEEE International Conference on Image Processing, pp. 121-124, 2007.
[2] Edoardo Pasolli, Farid Melgani, Massimo Donelli, Radha Attoui and Mariette De Vos, "Automatic Detection and Classification of Buried Objects in GPR Images Using Genetic Algorithms and Support Vector Machines," IEEE Trans., pp. 525-528.
[3] Sebastian Hantscher, Alexander Reisenzahn and Christian G. Diskus, "Application of a Surface Reconstruction Method for Material-Penetrating UWB Radar," Proceedings of the Asia-Pacific Microwave Conference, 2007.
[4] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, pp. 100-147, 617-633.
[5] Jacek M. Zurada, Introduction to Artificial Neural Systems, pp. 165-216.
Patch Antenna on Defective Ground Plane
Sakshi Kumari(1), Vibha Rani Gupta(2)
Birla Institute of Technology, Mesra, Ranchi, India
(1)sakshi501@gmail.com (2)vrgupta@bitmesra.ac.in
Abstract - A patch antenna placed on a conventional conductive ground plane has several limitations in terms of its performance characteristics. Here the conventional ground plane is replaced with a defected ground plane to improve the characteristics of the patch antenna. This paper presents the detailed design, development and performance of a patch antenna on a defective ground plane using the IE3D EM simulator. The designed antenna is fabricated, its performance characteristics are measured, and they are compared with those of the conventional patch antenna to show the improvement.
Keywords - Electromagnetic Band-Gap substrate, Efficiency, Patch
antenna, Defected ground plane
I. INTRODUCTION
Microstrip antennas are one of the most innovative areas in antenna engineering, owing to their light weight, low profile, conformability to planar and non-planar surfaces, and ease of fabrication. But when the antenna is placed on a normal conductive plane, its performance degrades due to the formation of surface waves [1]. Surface-wave formation is a very serious problem: it reduces the antenna efficiency and gain, limits the bandwidth, increases end-fire radiation and cross-polarization levels, and also limits the applicable frequency range of microstrip antennas. In order to overcome these limitations we have replaced the conventional ground plane with an Electromagnetic Band-Gap (EBG) substrate. Generally speaking, electromagnetic band-gap structures are defined as artificial periodic (or sometimes non-periodic) objects that prevent/assist the propagation of electromagnetic waves in a specified band of frequency for all incident angles and all polarization states [2]. The frequency region where the incident waves cannot propagate through the structure is termed the band-gap or stop band. Periodic structures like EBG, when applied to microwave planar waveguides, can produce pass-band and stop-band characteristics. By proper selection of dimensions and periods, certain waves are allowed to pass through while other waves, such as surface waves, can be suppressed. By loading the EBG periodically on the substrate, a band gap can be created for frequencies around the operating frequency of the antenna. Such a structure can stop the propagation of the surface waves which are excited through the high-dielectric substrate material, thus coupling more power to the space wave rather than wasting it in the substrate. There are basically four types of EBG substrate; one of them is the defective ground, which is very easy to implement as there is no fabrication problem associated with it.

This paper focuses on the detailed antenna design of a defective ground plane structure. It demonstrates the use of EBG in the design of a patch antenna for a Bluetooth application; the performance is then compared with that of the conventional patch antenna.

The paper is organized as follows. Section 2 describes the dimension calculation of the patch antenna. Section 3 describes the antenna design on the conventional ground and on the defective ground. The results are discussed in Section 4, and finally the conclusion is given in Section 5.

II. PATCH ANTENNA DIMENSION CALCULATION
The width and the length of the patch are calculated using the formulae given below [3].

The length of the patch is calculated as:

    e_eff = (e_r + 1)/2 + ((e_r - 1)/2) * [1 + 12h/W]^(-0.5)                      (1.1)

    dL = 0.412 h * [(e_eff + 0.3)(W/h + 0.264)] / [(e_eff - 0.258)(W/h + 0.8)]    (1.2)

    L = c / (2 f sqrt(e_eff)) - 2 dL                                              (1.3)

Where,
e_r = permittivity of the dielectric
e_eff = effective permittivity of the dielectric
W = patch's width
L = patch's length
h = thickness of the dielectric
f = operating frequency

The width of the patch is calculated as:

    W = (c / 2f) * sqrt(2 / (e_r + 1))                                            (1.4)

Where c is the speed of light.
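The standard rectangular-patch design equations referenced above can be sketched numerically. This is a minimal illustration using the FR4 parameters quoted in Section III (e_r = 4.4, h = 1.6 mm) and an assumed Bluetooth design frequency of 2.4 GHz, which reproduces dimensions close to the fabricated 29.50 mm x 38.35 mm patch:

```python
import math

C = 3e8  # speed of light (m/s)

def patch_dimensions(f, eps_r, h):
    """Return (W, L) in metres from equations (1.1)-(1.4)."""
    W = C / (2 * f) * math.sqrt(2 / (eps_r + 1))                             # (1.4)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)  # (1.1)
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / (
        (eps_eff - 0.258) * (W / h + 0.8))                                   # (1.2)
    L = C / (2 * f * math.sqrt(eps_eff)) - 2 * dL                            # (1.3)
    return W, L

# FR4 substrate (eps_r = 4.4, h = 1.6 mm); 2.4 GHz assumed for Bluetooth
W, L = patch_dimensions(2.4e9, 4.4, 1.6e-3)  # W ~ 38.0 mm, L ~ 29.4 mm
```

The computed values (about 38.0 mm x 29.4 mm) agree well with the fabricated patch, which suggests the fabricated dimensions were obtained from these transmission-line-model formulas.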
III. ANTENNA DESIGN
For designing the conventional Patch antenna, the substrate
dimension is 130.91 mm x 100.31 mm. The substrate used is
FR4 with a dielectric constant of 4.4 and thickness 1.6 mm. The
software used for simulation is IE3D from Zeland [4].
A. Conventional Patch Antenna
Initially the radiating patch, of dimension 29.50 mm x 38.35 mm, is placed on a conventional conductive ground plane of dimension 130.91 mm x 100.31 mm. Fig. 1 shows the conventional patch antenna.
Fig.1: Conventional Patch Antenna
The simulated S-parameter result is shown in Fig. 2. The resonant frequency obtained from simulation is 2.42 GHz, with higher-order modes excited at 3.78 GHz, 4.6 GHz and 6.28 GHz on the conventional substrate. A return loss of -22.03 dB is obtained at 2.42 GHz with a bandwidth of 50 MHz, which does not cover the Bluetooth range. The gain obtained at this frequency is 5 dBi with an antenna efficiency of 55%. For the other modes the return loss is more than -10 dB.
Fig.2: Simulated S parameter display for conventional Patch Antenna

B. EBG Patch Antenna Design
In order to mitigate the deleterious effects of the conventional ground, it has been replaced with an EBG substrate. It consists of a 4x3 square lattice of holes etched on the ground plane. The periodic-structure design of the Electromagnetic Band-Gap substrate involves the selection of a filling factor. As there is no specific formula, a parametric study has been done to select the filling factor, and then the periodic distance and radius have been calculated. The one-dimensional EBG structure that has been chosen has a periodic arrangement of holes on the ground surface. The periodic cell spacing and the radius have been calculated [5]; the periodic spacing comes out to be 29.3 mm. Lastly, the filling factor, i.e. the ratio of the radius of the circular holes to the periodic cell spacing, is selected using a parametric study and comes out to be about 0.278. The designed antenna is shown in Fig. 3.

The designed antenna has been fabricated and tested. The simulated and measured plots are shown in Fig. 4.

Fig.3: EBG Patch Antenna

Fig.4: Simulated vs. Measured Result

IV. RESULT AND DISCUSSION
Fig. 4 shows the comparison of the simulated and measured plots obtained for the patch antenna on the defective ground plane. Table I shows the comparison of the simulated results of the conventional Patch Antenna and the EBG Patch Antenna.
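From the filling-factor definition given in Section III-B (radius divided by periodic cell spacing), the hole radius follows in one line. The radius value itself is derived here, not stated in the paper:

```python
# Radius of the etched holes from the EBG design values:
# filling factor = radius / periodic cell spacing
spacing = 29.3e-3       # periodic cell spacing (m)
fill_factor = 0.278     # selected by parametric study
radius = fill_factor * spacing  # ~8.15 mm (derived, not stated in the paper)
```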
TABLE I
COMPARISON OF SIMULATED RESULTS

Charac.             Conventional Patch              EBG Patch Antenna
Resonant Freq(GHz)  2.42   3.78   4.6    6.28      2.44   3.39   4.7    6.4
Return Loss(dB)     -22    -7.2   -7.1   -5.8      -18.4  -16.7  -7.65  -5.28
BW(MHz)             60     -      -      -         80     100    -      -
Gain(dBi)           5.08   3.9    -0.3   0.3       5.66   3.91   1.5    2.52
Ant. Eff(%)         55.6   37.5   15.8   12.8      70.21  71.00  31.26  44.21
Rad. Eff(%)         58.7   46.7   19.6   17.3      73.14  73.10  40.20  63.96
Thus it is concluded that when the conventional substrate is replaced with the defective ground plane, there is an enhancement in the bandwidth along with an increase in the gain. There is, however, a difference between the simulated and measured return loss in the case of the defective ground plane, as seen in Fig. 4; this is due to problems incurred during fabrication. The higher modes are also seen in the EBG patch antenna; as their return loss is less than -10 dB, these modes can also be used for transmission and reception. The Bluetooth range is now covered, whereas the bandwidth was only 60 MHz in the case of the conventional substrate. The antenna efficiency has also increased from 55.6% to 70.21% in the Bluetooth range. The performance of the EBG patch antenna can be further improved by changing either the lattice structure or the filling factor of the periodic structure.

V. CONCLUSION
The proposed antenna is designed on a defective ground plane and is compared with the patch antenna designed on the conventional ground plane. Significant improvement in terms of bandwidth, efficiency and impedance matching can be seen from Table I. The higher-order modes of the antenna can also be used.

VI. REFERENCES
[1] Ramesh Garg, Prakash Bhartia, Inder Bahl and Apisak Ittipiboon, Microstrip Antenna Design Handbook, Artech House, Boston, London, 2000.
[2] Fan Yang and Yahya Rahmat-Samii, Electromagnetic Band-Gap Structures in Antenna Engineering, Cambridge University Press, 2008.
[3] Constantine A. Balanis, Antenna Theory: Analysis and Design, 3rd Ed., John Wiley & Sons, Hoboken, New Jersey, 2005.
[4] IE3D user manual, version 12, Zeland Software, Inc., California, USA, 2006.
[5] Shantanu K. Padhi and Marek E. Bialkowski, "A Microstrip Yagi Antenna using EBG Structure," Radio Science, Vol. 38, No. 3, 1041, 2003.
Parallel Hardware based ANN for Smart Antenna Signal Processing
Prof. V. R. Raut
Dept. of Electronics and Telecommunication Engineering
Prof. Ram Meghe Institute of Technology & Research,
Badnera, Amravati [M.S]
Ms. Sarika Arun Mardikar
Dept. of Electronics and Telecommunications Engineering
Prof. Ram Meghe Institute of Technology & Research,
Badnera, Amravati [M.S]
sarika_star@yahoo.com
Abstract - Co-channel interference (CCI) is a major cause of call drops in a mobile environment. ANN has therefore been applied in smart antenna systems to reduce CCI, and ANN-based smart antenna systems perform well for the computationally expensive signal processing that CCI reduction requires. However, many modern applications demand faster processing, and traditional ANN implementations are too slow for them. Expensive alternatives such as ASICs and FPGAs have been suggested for speeding up the processing. In this work we implemented a parallel-hardware-based ANN and evaluated its application in smart antenna signal processing for faster processing. We tested our implementation on a graphics processor having 96 cores, providing a very economical solution compared to ASICs or FPGAs. The resulting implementation is simple and scales up with the processor cores.

Keywords - GPU, NVIDIA CUDA, ANN, CCI, Training.
I. INTRODUCTION
In mobile radio environment, the co-channel interference
(CCI) resulting from frequency reuse is the main cause of
call drops, unnecessary handoffs, compromising voice
quality and channel capacity reduction. CCI is one of the
most significant factors limiting the capacity and scalability
of wireless networks. CCI reduction involves a large number of calculations; if these are done in parallel we can get the response faster, making it suitable for real-time communication involving smart antennas. ANN has been
applied in smart antenna systems to reduce CCI [17]. ANN
based smart antenna systems perform well for
computationally expensive signal processing for CCI
reduction. Neural networks, once trained and validated, are not only capable of performing computationally complex signal processing but can also track sources in real time, and they are nonlinear in nature. These characteristics make them suitable for smart antenna design. Moreover, if we design a parallel-hardware (GPU-based) ANN for smart antenna design, we can achieve maximum speed-up. Thus the implementation of an Artificial Neural Network on a GPU provides improved performance compared to a CPU implementation. The GPU has become a general-purpose processor and is a good option for implementing many parallel algorithms. In addition, a GPU-based ANN is much more cost effective than FPGA- or ASIC-based solutions.
Various programming platforms have been developed for GPU programming. However, before NVIDIA's Compute Unified Device Architecture (CUDA) [1], GPU programming was extremely difficult, since programmers needed to call graphics APIs (OpenGL, OpenCV, etc.) directly, with a very steep learning curve. CUDA can be extremely helpful in accelerating ANN algorithms on GPUs [2][3].

CUDA is the C language with some extensions for processing on GPUs. The user writes C code, and the compiler bifurcates the code into two portions: one portion is delivered to the CPU (because the CPU is best for such tasks), while the other portion, involving extensive calculations, is delivered to the GPU(s), which execute the code in parallel.
With such a parallel architecture, GPUs [4] provide an excellent computational platform, not only for graphical applications but for any application with significant parallelism.
Thus CUDA accelerated ANN algorithms can be used in
many real-time applications, including CCI reduction in
mobile communication, Image processing, object
classifications, voice recognition and in a number of
systems which require intelligence and auto control.
In the remainder of this paper we discuss our approach and implementation details.
Neural Network Basics
Any particular layer in an ANN has a number of processing nodes, or neurons. These neurons work independently, meaning each can be treated as an independent processor. But if we do this processing on a CPU, the large number of ANN calculations is time-consuming. Taking this into consideration, we intend to do the processing in parallel on a Graphics Processing Unit having hundreds of processor cores; in this work we utilized the NVIDIA CUDA parallel programming platform on a CUDA-compatible GPU. Most software developers have relied
developers have relied on the advances in hardware to
increase the speed of their applications under the hood; the
same software simply runs faster as each new generation of
processors is introduced. This drive, however, has slowed
since 2003 due to energy consumption and heat-dissipation
issues that have limited the increase of the clock frequency
and the level of productive activities that can be performed
in each clock period within a single CPU.
A sequential program will only run on one of the processor
cores, which will not become significantly faster than those
in use today. Without performance improvement,
application developers will no longer be able to introduce
new features and capabilities into their software as new
microprocessors are introduced, thus reducing the growth
opportunities of the entire computer industry. Rather, the
applications software that will continue to enjoy
performance improvement with each new generation of
microprocessors will be parallel programs, in which
multiple threads of execution cooperate to complete the
work faster. The high-performance computing community
has been developing parallel programs for decades. These
programs run on large-scale, expensive computers.
Now that all new microprocessors are parallel
computers, the number of applications that must be
developed as parallel programs has increased dramatically.
There is now a great need for software developers to learn
about parallel programming.
II. VARIOUS APPROACHES FOR REDUCTION OF CCI
Co-channel interference in an AMPS (Advanced Mobile Phone System) degrades voice quality and channel capacity. CCI reduction involves a large number of calculations; if these are done in parallel, the response is obtained faster, making the approach suitable for real-time communication involving smart antennas. One possible approach is ASICs and another is FPGAs; however, both are very costly.

ANN-based smart antenna systems perform well for the computationally expensive signal processing required for CCI reduction, which makes them suitable for smart antenna design. If such an ANN is implemented on a CPU, it takes more time because the execution is sequential. If instead we design a parallel-hardware (GPU-based) ANN for the smart antenna, we can achieve maximum speed-up: a GPU-based ANN provides improved performance compared to a CPU implementation, accelerating the calculations involved in CCI reduction and making it suitable for real-time communications. The GPU has become a general-purpose processor and is a good option for implementing many parallel algorithms, and a GPU-based ANN is much more cost effective than FPGA- or ASIC-based solutions. Another advantage of the GPU-based solution that we observed is its lower development time, thanks to high-level development platforms such as NVIDIA CUDA and OpenCL; as noted above, before CUDA [1] the programmer had to call graphics APIs directly, which was extremely difficult. CUDA can be extremely helpful in accelerating ANN algorithms on GPUs.
III. WHAT IS A GPU?
GPU computing is the use of a GPU (graphics processing unit) to do general-purpose scientific and engineering computing. The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally intensive part runs on the GPU [11]. From the user's perspective, the application simply runs faster because it is using the high performance of the GPU to boost performance. Fig. 1 shows a typical graphics card layout.

Fig 1: Example of a typical GPU card
Fig 2: Comparison of computational power (in GFLOPS, giga floating-point operations per second) of a CPU and a GPU
The application developer has to modify their
application to take the compute-intensive kernels and map
them to the GPU. The rest of the application remains on the
CPU. Mapping a function to the GPU involves rewriting
the function to expose the parallelism in the function and
adding C keywords to move data to and from the GPU.
GPU computing is enabled by the massively parallel NVIDIA CUDA architecture, which consists of hundreds of processor cores that operate together to crunch through the data set in the application.
A GPU (Graphics Processing Unit) is a processor
attached to a graphics card dedicated to calculating floating
point operations. A graphics accelerator incorporates
custom microchips which contain special mathematical
operations commonly used in graphics rendering. The
efficiency of the microchips therefore determines the
effectiveness of the graphics accelerator. They are mainly
used for playing 3D games or high-end 3D rendering. A
GPU implements a number of graphics primitive
29
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
operations in a way that makes running them much faster
than drawing directly to the screen with the host CPU. The
most common operations for early 2D computer graphics
include the Bit BLT operation, combining several bitmap
patterns using a Raster Op, usually in special hardware
called a "blitter", and operations for drawing rectangles,
triangles, circles, and arcs. Modern GPUs also have support
for 3D computer graphics, and typically include digital
video-related functions.
A pipelined architecture is the standard procedure for
processors as it breaks down a large task into smaller
individual grouped tasks. When a set of instructions are
transferred to the GPU the GPU then breaks up the
instructions and sends the broken up instructions to other
areas of the graphics card specifically designed for
decoding and completing a set of instructions. These
pathways are called stages. The more stages the graphics
card has, the faster it can process information as the
information can be broken down into smaller pieces while
many stages work on a difficult instruction.
Figure 3: Processing flow on CUDA
Simple CUDA programs have a basic flow:
1. The host initializes an array with data.
2. The array is copied from the host to the memory
on the CUDA device.
3. The CUDA device operates on the data in the array.
4. The array is copied back to the host.
Since the neurons in an ANN work independently, their computations map naturally onto the hundreds of processing units of a GPU's highly parallel architecture; an ANN is itself a massively parallel system. The main idea is to perform the CCI calculations faster by exploiting the parallelism of the ANN and implementing it on parallel hardware such as a GPU, thus accelerating it for real-time communication. This research aims at implementing an ANN for smart antenna design on a GPU in order to improve performance compared to a CPU implementation. CUDA-accelerated ANN algorithms can then be used in many real-time applications, including CCI reduction in mobile communications, image processing, character recognition, object classification, voice recognition, and other systems requiring intelligence and automatic control. We have developed a CUDA application to reduce the CCI calculations by implementing the ANN on a GPU [18], and we found very good performance. It should be noted that we have applied parallelism only to the output calculation; one can also consider training the ANN with a parallel algorithm on the GPU.
IV. IMPLEMENTING AN ANN ON GPU
Motivation
The neurons within each ANN layer operate independently, which maps naturally onto a GPU with hundreds of processor cores; we therefore used the NVIDIA CUDA parallel programming platform on a CUDA-compatible GPU. Virtually all microprocessor vendors have switched to models where multiple processing units, referred to as processor cores, are used in each chip to increase the processing power. This switch has exerted a tremendous impact on the software developer community [Sutter 2005].

Traditionally, the vast majority of software applications are written as sequential programs, as described by von Neumann [1945] in his seminal report. The execution of these programs can be understood by a human sequentially stepping through the code. Historically, computer users have become accustomed to the expectation that these programs run faster with each new generation of microprocessors. That expectation is no longer valid: as discussed in Section I, a sequential program will run on only one of the processor cores, and only parallel programs, in which multiple threads of execution cooperate to complete the work faster, will continue to enjoy performance improvement with each new generation of microprocessors.
ANN on GPU
An Artificial Neural Network can be represented as a cluster of parallel processors, as shown in Figure 6.1 (a)-(c).

Figure 6.1: Typical ANN structure (a-c)

We implemented these structures and found very good performance on the GPU.
Assumptions:
1- We assumed that the network is already trained
and thus we have a file where the weight sets are
stored.
2- We exploited the parallelism in calculating the
output at each Neuron in each layer.
3- The performance was compared at each layer
output and also at the final output.
The ANN can be modeled by the matrix equation

    [ w0  w1 ] [ I0 ]   [ O0 ]
    [ w2  w3 ] [ I1 ] = [ O1 ]        (1)

Thus we can obtain the output of the ANN by matrix multiplication, as in equation (1).
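The per-neuron independence exploited on the GPU can be seen directly in equation (1): each output element depends only on one row of the weight matrix. A minimal sketch with a log-sigmoid activation follows; the weight and input values are hypothetical, chosen only for illustration (in the actual system the trained weights are loaded from file, per Assumption 1):

```python
import numpy as np

def log_sigmoid(x):
    """Log-sigmoid transfer function, 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def layer_output(W, I):
    """Output of one ANN layer per equation (1): each row of W feeds one
    neuron, so every output element can be computed independently --
    this is the parallelism mapped onto the GPU cores."""
    return log_sigmoid(W @ I)

# Hypothetical 2x2 weight set and input vector, for illustration only
W = np.array([[0.5, -0.25],
              [1.0,  0.75]])
I = np.array([1.0, 2.0])
O = layer_output(W, I)
```

On a GPU, each thread would compute one such dot product and activation; here numpy's matrix product plays the role of the parallel kernel.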
V. IMPORTANT RESULTS
To demonstrate the computational power of the GPU, we performed a large number of experiments with and without shared memory. The following are the results.
VI. CONCLUSION
In this paper we have demonstrated that parallel hardware such as the Graphics Processing Unit (GPU) can be suitably applied for CCI reduction using an ANN in smart antenna systems. The advantages of the GPU are manifold; in particular, it can accelerate the calculations involved in CCI reduction using an ANN, making it suitable for real-time communications. We observed a 100x speed-up of the ANN output calculations on the GPU compared to the CPU implementation. Another advantage of the GPU-based solution is its lower development time, thanks to high-level development platforms such as NVIDIA CUDA and OpenCL. Lastly, the lower cost of the GPU is another factor in its favour over ASICs and FPGAs. We also conclude that, in addition to the output calculations of the ANN, the training of an ANN can be carried out on the GPU; the training can thus also be accelerated, making it suitable for real-time communications.
REFERENCES
[1] Tom R. Halfhill, "Parallel Processing with CUDA," Microprocessor Report, January 2008.
[2] Z. Luo, H. Liu and X. Wu, Artificial Neural
Network Computation on Graphic Process Unit ,
Neural Networks, 2005. IJCNN '05. Proceedings.
2005 IEEE International Joint Conference, vol.1, pp.
622 626, 31 July-4 Aug. 2005
[3] R. D. Prabhu, GNeuron: Parallel Neural Networks
with GPU , Posters, International Conference on
High Performance Computing (HiPC) 2007,
December 2006
[4] Shuai Che , Michael Boyer, Jiayuan Meng, David
Tarjan , Jeremy W. Sheaffer, Kevin Skadron, A
Performance Study of General-Purpose
Applications on Graphics Processors Using
CUDA , Journal of Parallel and Distributed
Computing, ACM Portal, Volume 68 ,Issue 10, pp.
1370-1380 October 2008
[5] Elizabeth, Thomas , GPU GEMS: Chapter 35 Fast
Virus Signature Matching on the GPU , Addison
Wesley
[6] Pat Hanrahan , Why are Graphics Systems so Fast?
, PACT Keynote, Pervasive Parallelism
Laboratory, Stanford University, September 14,
2009
[7] Kayvon Fatahalian and Mike Houston, "A Closer Look at GPUs," Communications of the ACM, Vol. 51, No. 10, October 2008.
[8] Trond R. Hagen, Jon M. Hjelmervik and Tor Dokken, "The GPU as a High Performance Computational Resource," Proceedings of the 21st Spring Conference on Computer Graphics, pp. 21-26, 2005.
[9] R. H. Luke, D. T. Anderson, J. M. Keller S.
Coupland, Fuzzy Logic-Based Image Processing
Using Graphics Processor Units , IFSA-EUSFLAT
2009
[10] Victor Podlozhnyuk, Mark Harris , NVIDIA
CUDA Implementation of Monte Carlo Option
Pricing , Tutorial in NVIDIA CUDA SDK 2.3,
[11] CUDA Programming Guide 3.0, published by NVIDIA Corporation.
[12] David B. Kirk and Wen-mei W. Hwu, Programming
Massively Parallel Processors: A Hands-on
Approach by Morgan Kaufmann (February 5,
2010),ISBN 0123814723
[13] K. Oh, GPU implementation of neural networks
Pattern Recognition, pp. 1311-1314. Vol. 37, No. 6.
(June 2004).
[14] Simon Haykin, Neural Networks: A
comprehensive foundation , 2nd Edition, Prentice
Hall, 1998
[15] Araokar, Shashank, Visual Character Recognition
using Artificial Neural Networks Cog Prints, May
2005
[16] Earl Gose, Richard Johnsonbaugh and Steve Jost, Pattern Recognition and Image Analysis, Eastern Economy Edition, Prentice Hall, 1999.
[17] "Neural Networks in Smart Antenna Design for Co-channel Interference (CCI) Reduction: A Review."
[18] Chris Oei, Gerald Friedland and Adam Janin, "Parallel Training of Multilayer Perceptron on GPU."
STUDY OF DIFFERENT PROPERTIES OF IMPATT DIODE IN Ka BAND
By Joydeep Sengupta, Alok Naik, Ankush Naik, Arup Hareesh, Naresh Rao, Prashanth G. Rao, Ron Agnel Tony
Department of Electronics and Communication Engineering, Visvesvaraya National Institute of Technology, Nagpur, India 440011
Dr. Monojit Mitra, Dept. of ETC, BESU, Shibpur, West Bengal
ABSTRACT: The objective of this project work is to understand the physics of submicron devices and the effects of various material parameters on their performance, and thereby produce guidelines for device design. The project involves the computer-aided design of IMPATT (Impact Avalanche Transit Time) diodes.

IMPATT diodes are now regarded as the premier solid-state devices for the generation of power in the microwave and millimeter-wave ranges, 3-400 GHz. Millimeter waves provide greater bandwidth and sharper resolution and are now being increasingly used in communication and radar systems for civilian and defence applications.
INTRODUCTION: The generation of microwave frequencies using reverse-biased p-n junction diodes beyond the avalanche breakdown voltage is achieved by the IMPATT diode, an acronym for Impact Avalanche Transit Time. The typical structure is p+ - n - i - n+ and is called a Read diode.

A schematic diagram of the Read structure is shown above. It consists of (i) a narrow avalanche region, where the electric field is very high and carrier multiplication by impact ionization takes place, and (ii) a drift region, where the carriers drift at saturated velocities and no impact ionization occurs. Around the p+n junction the electric field is high enough to cause avalanche multiplication due to impact ionization, and this forms the avalanche region.
We consider an r.f. voltage superimposed on the d.c. reverse bias in a Read structure biased to breakdown.
At time t = t0, the r.f. voltage is superimposed on the d.c. bias and the build-up of the avalanche current starts. A quarter cycle later, at t = t1, the r.f. voltage has reached its peak but the avalanche current is still growing. The carrier generation rate depends on the electric field as well as on the number of carriers present. Thus, even after the voltage has crossed its peak (t1 < t < t2) the avalanche current still grows, since the carrier density is large and the r.f. voltage is above its average value. At t = t2, at the end of the half cycle, the avalanche current reaches its maximum. After three quarters of a cycle, at t = t3, the avalanche current decreases, since between t2 and t3 the applied r.f. voltage is below its average value. The avalanche current thus reaches its peak at t = t2, one quarter cycle after the peak of the r.f. voltage at t = t1. Hence there is a 90° phase lag for the r.f. current generated in the avalanche region.
The avalanche current, which is generated in a sharp pulse, is injected into the drift region. If the length of the drift region is chosen to have a transit angle of 180°, the external current lags the injected current by a further 90°, which accounts for the transit-time delay. Thus there is a total phase lag of 180° between the r.f. voltage and the resultant r.f. current flowing in the circuit. This gives rise to the negative resistance of the IMPATT device.
The frequency of oscillation generated in the device is given by

f = Vs / 2W,

where W is the width of the depletion layer and W/Vs is the transit time across the drift region.
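As a rough numerical check of f = Vs/2W, the sketch below (assuming a typical saturated drift velocity of about 1e5 m/s for silicon, a value not quoted in the paper) estimates the depletion-layer width needed for a Ka-band IMPATT at 35 GHz:

```python
# Transit-time estimate for an IMPATT oscillator: f = Vs / (2 * W).
# Assumed value: saturated carrier drift velocity in Si, ~1e5 m/s.
VS_SI = 1.0e5  # m/s

def impatt_frequency(width_m: float, vs: float = VS_SI) -> float:
    """Oscillation frequency (Hz) for a depletion-layer width `width_m` (m)."""
    return vs / (2.0 * width_m)

def width_for_frequency(f_hz: float, vs: float = VS_SI) -> float:
    """Depletion-layer width (m) needed to oscillate at `f_hz` (Hz)."""
    return vs / (2.0 * f_hz)

if __name__ == "__main__":
    w = width_for_frequency(35e9)           # Ka-band design point
    print(f"W for 35 GHz: {w * 1e6:.2f} um")  # about 1.43 um
    print(f"f for that W: {impatt_frequency(w) / 1e9:.1f} GHz")
```

A width of roughly 1.4 um for 35 GHz illustrates why Ka-band IMPATTs are submicron-scale devices, the regime this project targets.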
DEVICE ANALYSIS: To provide an accurate device analysis, the simulation program has to incorporate realistic material parameters and the effect of mobile space charge. This section describes the generalized simulation program for accurate d.c. and small-signal analyses of IMPATT devices, which has been used for the investigation of device properties.
In practice the d.c. current level is quite high, so the mobile space-charge density is no longer negligible in the depletion layer; this changes the electric-field maximum and also shifts its position away from the metallurgical junction. The non-linear Poisson and current continuity equations cannot be solved independently of each other, so a simultaneous solution of the Poisson and continuity equations is needed to incorporate the effect of mobile space charge. This computer solution is obtained by starting the computation from the field maximum within the depletion layer, located near the metallurgical junction.
The basic equations that govern the avalanche multiplication phenomenon and carrier flow in the depletion layer of a reverse-biased p-n junction are the Poisson and current continuity equations.
Poisson equation:

dE/dx = (q/ε)[ND − NA + p − n]   ...(1.1)

The total d.c. current density J = Jn + Jp is constant throughout the depletion layer. Hence, combining the results obtained by solving the continuity equations, we get

dP(x)/dx = (αn + αp) − (αn − αp)P(x)   ...(1.2)

where P(x) = (Jp − Jn)/J, and αn and αp are the ionization rates of electrons and holes. Using the computer method, the ionization rates and the saturation drift velocities Vp and Vn of holes and electrons for silicon and gallium arsenide have been computed, as shown below.
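Equation (1.2) can be integrated numerically to illustrate the boundary method. A minimal sketch follows, assuming (for illustration only, not as a realistic profile) equal and field-independent ionization rates αn = αp = α, so that dP/dx reduces to 2α. Starting from P = −1 at one edge (pure electron current), P reaches +1 at the other edge (pure hole current) exactly when αw = 1, the familiar avalanche-breakdown condition for equal ionization rates:

```python
# Euler integration of eq. (1.2): dP/dx = (an + ap) - (an - ap) * P,
# shown here for the illustrative special case an = ap (so dP/dx = 2a).
# Boundary values: P = -1 (pure electron current) at x = 0 should reach
# P = +1 (pure hole current) at x = w exactly when a * w = 1.

def integrate_P(alpha_n: float, alpha_p: float, width: float,
                steps: int = 100_000) -> float:
    """Integrate P(x) = (Jp - Jn)/J from P(0) = -1 across the layer."""
    dx = width / steps
    P = -1.0
    for _ in range(steps):
        P += ((alpha_n + alpha_p) - (alpha_n - alpha_p) * P) * dx
    return P

if __name__ == "__main__":
    alpha = 1.0e6        # 1/m, illustrative constant ionization rate
    w = 1.0 / alpha      # width chosen so that alpha * w = 1
    print(integrate_P(alpha, alpha, w))  # -> approximately +1.0
```

In the real program the field-dependent rates αn(E), αp(E) computed above replace the constant α, and E(x) from eq. (1.1) is advanced in the same loop.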
(Computed curves: rate of ionization of holes in silicon; rate of ionization of electrons in silicon; rate of ionization of electrons in GaAs; rate of ionization of holes in GaAs; Vn for silicon; Vp for silicon; Vp for GaAs; Vn for GaAs.)
In the boundary method of d.c. analysis, equations 1.1 and 1.2 are solved at every step, starting from the field maximum and moving towards the right-hand side of the junction until the field becomes zero or changes sign. The process is then repeated towards the left side of the junction until the same conditions are achieved. If the boundary conditions for the P(x) and E(x) profiles are not simultaneously satisfied, the E and x values have to be adjusted using a suitable iterative method. Thus one obtains the field and current profiles through the simultaneous solution of the above-mentioned equations.
Small-signal analysis of the IMPATT diode is carried out by an iterative, generalized computer method based on the Gummel-Blue approach, which incorporates the effect of mobile space charge. The boundary edges fixed by the d.c. analysis are taken as the starting and end points for the small-signal analysis. To keep the computational effort within feasible limits, the following simplifications are made in this computer method:
(1) The device analysis is performed considering the one-dimensional device equations.
(2) The electron and hole velocities are taken to be saturated, and thus space-invariant in the depletion layer.
(3) Carrier diffusion is neglected.
Conclusion
The properties of the standard SDR (single drift region) structure have been studied and are well established. However, variations in the doping profile of the diodes may give rise to more powerful devices. Work is currently in progress to obtain the spatial variation of negative resistance and susceptance in the depletion layer, which will provide a clear insight into which regions of the diode contribute most of the microwave power, together with a quest to obtain the small-signal parameters that describe the performance of the device. The combination of computer-aided design tools and advanced fabrication technology will thus pave the way for the development of semiconductor devices for the microwave and millimetre-wave ranges in the future.
Bibliography
[1] M. Sridharan, Ph.D. thesis, University of Calcutta.
[2] S. M. Sze, Physics of Semiconductor Devices (Wiley, New York, 1969).
[3] H. K. Gummel and J. L. Blue, IEEE Trans. on Electron Devices, Vol. ED-14, p. 569 (1967).
[4] P. De, "Studies on Microwave and MM-wave Properties of Flat and LHL Single and Double Drift IMPATT Diodes based on Si and GaAs," Ph.D. thesis, University of Calcutta, 1999.
[5] S. K. Roy and M. Mitra, Microwave IMPATT Devices, 1st Ed., PHI, 2003.
FRACTAL HILBERT ANTENNAS IN MOBILE, RADAR AND LIGHT COMBAT AIRCRAFT TECHNOLOGY
Kumar Ashok [1], D. Sunita [2], Singh Pushpendra [3]
[1] Department of Electronics Engineering, Kamla Nehru Institute of Technology, Sultanpur; formerly Harcourt Butler Technological Institute, Kanpur (India)
E-mail: ashok.knit212@gmail.com, phone: +919415094287
[2] Scientist B, ADA, Ministry of Defence, Bangalore
E-mail: sunitadeviiet@yahoo.co.in, phone: 9450333743
[3] HOD, Department of Electronics Engineering, CEST, Lucknow
E-mail: pushpendra_hbti05@yahoo.com, phone: 9415157735
ABSTRACT
With the proliferation of MIC technology, miniaturization of antenna size has become an important design objective. There are many techniques to accomplish this, viz. fractal curves, dielectric substrates with high permittivity, resistive corrective loading, increasing the electrical length of the antenna by optimizing its shape, and strategically positioned notches on the patch antenna. The Hilbert antenna is one example. With monolithic IC fabrication technology, i.e. VLSI, this antenna can find many applications in VHF/UHF communication. Many Hilbert antennas can be designed on a single substrate, which yields a reduction in size, the basic need of wireless technology. Resistance tends to take up the maximum area in IC fabrication technology, while low weight is the main motto of the light combat aircraft (viz. TEJAS) and of radars. Hence, for aircraft/radar use the selected MICs and antennas should be of low weight and high strength, so that the craft remains aerodynamically stable.
In this work it has been shown, using MATLAB, that mutual coupling is higher in the side-by-side configuration than in the collinear or stacked (echelon) forms, and that the mutual resistance increases for closer spacing of the elements of Hilbert antennas.
Key words: fractal, microstrip, mutual impedance
1. INTRODUCTION
Space-filling curves are among the most prominent tools for analyzing small antennas. The design of small antennas with multi-band characteristics has been a subject of interest in RF communications in recent years. A space-filling curve is a continuous mapping from a normalized one-dimensional interval [0, 1] to a normalized two-dimensional region [0, 1]². There are many curves based on the space-filling principle; Hilbert [1] proposed one of the most widely used such curves in 1891. Hilbert curves may provide an attractive solution for the miniaturization of antennas, since they offer a resonant structure with a small footprint as one increases the step order in the iterative filling of a 2D region. Hilbert antennas can be matched to a feed line with a given characteristic impedance, e.g. 50 ohms, at the fundamental resonant frequency. Here the feeder can be located close to the end rather than feeding the antenna at the center.
The slight mismatch in this antenna is attributed to the unbalanced feed arrangement. The low values of the real part of the impedance are consistent with other similar small antennas, such as the Koch antenna and small meander-line antennas. However, this can be remedied by using impedance-matching circuits, or even by changing the feed location. Bandwidth is narrowed for higher iteration orders, while the cross-polarization level decreases in the non-symmetrical plane of the antenna. For a Hilbert antenna occupying a fixed square surface area of L×L, increasing n from 1 to 5 significantly lowers the resonant frequency, by a factor of three or more, which in turn reduces the bandwidth. A practical antenna must satisfy specific bandwidth, gain and pattern requirements, and these miniature printed slot antennas with dual-band characteristics clearly show potential for future mobile wireless communication.
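The resonance lowering with iteration order follows from the rapid growth of total conductor length. A standard geometric property of the Hilbert curve (assumed here; it is not derived in the paper) is that for order n in an L×L area the segment length is d = L/(2^n − 1) and the total length is S = (2^(2n) − 1)·d = (2^n + 1)·L, so the wire length roughly doubles with each added order. A small sketch:

```python
# Total conductor length of an order-n Hilbert curve filling an L x L square.
# Segment length d = L / (2**n - 1); segment count = 2**(2*n) - 1,
# so the total length S = (2**(2*n) - 1) * d = (2**n + 1) * L.
def hilbert_length(n: int, side: float) -> float:
    """Total wire length of an order-n Hilbert curve in a square of side `side`."""
    d = side / (2 ** n - 1)            # individual segment length
    return (2 ** (2 * n) - 1) * d      # equals (2**n + 1) * side

if __name__ == "__main__":
    for n in range(1, 6):
        print(n, hilbert_length(n, 1.0))  # 3.0, 5.0, 9.0, 17.0, 33.0
```

Going from n = 1 to n = 5 multiplies the wire length by 11 within the same footprint, consistent with the large drop in resonant frequency noted above.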
2. MATHEMATICAL ANALYSIS
Fig. 1.1: Geometry of the Hilbert curves for the first four iterations
Fig. 1.2: Axial length
If we put four copies of the previous iteration, connected by additional line segments, we get the geometry of the Hilbert antenna: e.g. the geometry of order one (arranged in different orientations), connected by the additional segments shown with dashed lines in Fig. 1.1. It is interesting to identify the fractal properties of this geometry. The plane-filling nature is evident from comparing the first few iterations shown in Fig. 1.1. Additional connection segments are required when an extra iteration order is added to an existing one, but the contribution of this additional length (shown by the dashed lines) is small compared to the overall length of the geometry, especially when the iteration order is large. Hence the small additional length can be disregarded, which makes the geometry self-similar. There are many Hilbert curves for the first few iterations, as shown in Fig. 1.1.

2.2 MUTUAL COUPLING EFFECTS IN HILBERT ANTENNAS
The major problem in Hilbert antennas is mutual coupling between two such antennas. Two identical Hilbert antennas can be located in the following configurations (as shown in Fig. 1.3):
1. Collinear
2. Side by side
3. Stacked (echelon)
The general expression for the mutual impedance of two thin, linear, parallel, center-fed antennas with sinusoidal current distribution is

Z21 = -V21/I1 = -(1/I1) ∫0..L Ez21 sin(βz) dz   ...(1)
2.1 MUTUAL IMPEDANCE OF PARALLEL SIDE-BY-SIDE ELEMENTS OF A HILBERT ANTENNA (MATHEMATICAL ANALYSIS)
Let d be the separation of the antennas and L the length of the elements. From the diagram, r1 = √(d² + z²) and r2 = √(d² + (L − z)²). The mutual impedance then becomes

Z21 = R21 + jX21 = j30 ∫0..L [exp(-jβ√(d² + z²))/√(d² + z²) + exp(-jβ√(d² + (L − z)²))/√(d² + (L − z)²)] sin(βz) dz
    = 30[2 Ei(-jβd) − Ei(-jβ{√(d² + L²) + L}) − Ei(-jβ{√(d² + L²) − L})],

so that Z21 = Z12 = R21 + jX21 = R12 + jX12, where L = Nλ/2 for N odd. The mutual resistance and reactance decrease with the distance between the elements of the Hilbert antenna. Exact expressions, in which the antenna length L is not restricted to an odd number of λ/2, are given by BROWN and KING as [5]

R21 = 30[2(2 + cos βL) Ci(βd) − 4 cos²(βL/2){Ci(β(√(4d² + L²) − L)/2) + Ci(β(√(4d² + L²) + L)/2)} + cos βL {Ci(β(√(d² + L²) − L)) + Ci(β(√(d² + L²) + L))} + sin βL {Si(β(√(d² + L²) + L)) − Si(β(√(d² + L²) − L)) − 2 Si(β(√(4d² + L²) + L)/2) + 2 Si(β(√(4d² + L²) − L)/2)}] / sin²(βL/2)   ...(2)

and

X21 = 30[-2(2 + cos βL) Si(βd) − 4 cos²(βL/2){Si(β(√(4d² + L²) − L)/2) + Si(β(√(4d² + L²) + L)/2)} + cos βL {Si(β(√(d² + L²) − L)) + Si(β(√(d² + L²) + L))} + sin βL {Ci(β(√(d² + L²) + L)) − Ci(β(√(d² + L²) − L)) − 2 Ci(β(√(4d² + L²) + L)/2) + 2 Ci(β(√(4d² + L²) − L)/2)}] / sin²(βL/2)   ...(3)

Fig. 1.3: Three arrangements of two parallel elements of a Hilbert antenna

2.3 MUTUAL IMPEDANCE OF PARALLEL COLLINEAR ELEMENTS OF A HILBERT ANTENNA
With reference to the figure, for spacing s = h − L, CARTER gives the mutual resistance and reactance as

R21 = -15 cos βh [-2 Ci(2βh) + Ci(2β(h − L)) + Ci(2β(h + L)) − ln{(h² − L²)/h²}] + 15 sin βh [2 Si(2βh) − Si(2β(h − L)) − Si(2β(h + L))]   ...(4)

and

X21 = 15 sin βh [2 Ci(2βh) − Ci(2β(h − L)) − Ci(2β(h + L)) − ln{(h² − L²)/h²}] − 15 cos βh [2 Si(2βh) − Si(2β(h − L)) − Si(2β(h + L))]   ...(5)

Curves for R21 and X21 of parallel collinear λ/2 antennas with L = λ/2 are presented as a function of the spacing s. It has been shown that the mutual resistance and reactance decrease with the distance between the elements of the Hilbert antenna.

2.4 MUTUAL IMPEDANCE OF PARALLEL ELEMENTS OF A HILBERT ANTENNA IN ECHELON/STACKED FORM
For this case, let each element be an odd number of λ/2 long. CARTER and KING give the mutual resistance and reactance
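For the half-wave special case (L = λ/2, sinusoidal current), expressions (2) and (3) reduce to the standard closed form for two parallel side-by-side λ/2 elements. The sketch below (an illustration of that closed form, not the authors' MATLAB code) evaluates it with a self-contained numerical Si/Ci, and shows the decay of coupling with spacing:

```python
import math

_EULER_GAMMA = 0.5772156649015329

def _simpson(f, a, b, n=4000):
    # Composite Simpson quadrature (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

def Si(x):
    """Sine integral: integral of sin(t)/t from 0 to x."""
    return _simpson(lambda t: math.sin(t) / t if t else 1.0, 0.0, x)

def Ci(x):
    """Cosine integral for x > 0."""
    return _EULER_GAMMA + math.log(x) + _simpson(
        lambda t: (math.cos(t) - 1.0) / t if t else 0.0, 0.0, x)

def z21_side_by_side(d):
    """Mutual impedance (R21, X21) in ohms of two parallel side-by-side
    half-wave elements; d is the spacing in wavelengths."""
    L = 0.5                       # half-wave element length, in wavelengths
    beta = 2.0 * math.pi          # phase constant for lengths in wavelengths
    v = math.sqrt(d * d + L * L)
    u0, u1, u2 = beta * d, beta * (v + L), beta * (v - L)
    r21 = 30.0 * (2.0 * Ci(u0) - Ci(u1) - Ci(u2))
    x21 = -30.0 * (2.0 * Si(u0) - Si(u1) - Si(u2))
    return r21, x21

if __name__ == "__main__":
    for d in (0.1, 0.25, 0.5, 1.0):
        r, x = z21_side_by_side(d)
        print(f"d = {d:.2f} wl: Z21 = {r:7.2f} {x:+7.2f}j ohm")
```

At d = 0.5λ this gives Z21 of roughly -12.5 - j29.9 ohm, while near d = 0.1λ the mutual resistance approaches the self-resistance scale (about 70 ohm), consistent with the paper's observation that coupling rises rapidly at closer spacing.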
FIG. 2
R21 = -15 cos βh [-2 Ci A − 2 Ci A′ + Ci B + Ci B′ + Ci C + Ci C′] + 15 sin βh [2 Si A − 2 Si A′ − Si B + Si B′ − Si C + Si C′]   ...(6)

and

X21 = -15 cos βh [2 Si A + 2 Si A′ − Si B − Si B′ − Si C − Si C′] + 15 sin βh [2 Ci A − 2 Ci A′ − Ci B + Ci B′ − Ci C + Ci C′]   ...(7)

where A = β[√(d² + h²) + h], A′ = β[√(d² + h²) − h], B = β[√(d² + (h − L)²) + (h − L)], B′ = β[√(d² + (h − L)²) − (h − L)], C = β[√(d² + (h + L)²) + (h + L)], C′ = β[√(d² + (h + L)²) − (h + L)].
It has been shown that the mutual resistance and reactance decrease with the distance between the elements of the Hilbert antenna. The stacked or echelon arrangement is the most general case, of which the side-by-side and collinear positions are special cases, and it is highly dependent on orientation.
3. SIMULATION AND RESULTS
The expressions discussed in the previous sections for the calculation of mutual resistances and reactances have been simulated using MATLAB. The results obtained are attached on the succeeding pages. On the basis of these results, it has been shown that at closer spacing of the Hilbert antennas the mutual resistance and reactance increase rapidly, which yields higher gain and higher efficiency of the Hilbert antenna. It has also been found that the mutual resistance and reactance are largest for the side-by-side arrangement.
4. CONCLUSION AND DISCUSSION
On the basis of analytical studies and simulation results, the effects of the spacing of Hilbert antennas in different configurations have been studied numerically using MATLAB. It has been found that the mutual resistance and reactance increase for closer spacing of Hilbert antennas, while at larger element spacing the total radiation resistance and reactance decrease rapidly. In the stacked (echelon) form, however, the radiation resistance and reactance are highly dependent on orientation.
The important advantage of the Hilbert antenna is the incorporation of its plane-filling characteristic to realize a resonant antenna with a smaller overall physical size. With the widespread proliferation of telecommunication technology in recent years, the need for small multi-band antennas has increased manyfold. Once optimized for radiation characteristics, Hilbert antennas can find many applications in UHF/VHF communication, a much sought-after technology in this telecommunication era. With monolithic IC fabrication technology these antennas can be designed in more compact form on a single substrate. This research will provide guidelines to new researchers for further work on the fabrication and design of many Hilbert antennas on a single substrate (as shown in Fig. 2), which will yield a reduction in area, the basic need of wireless and microwave communication systems.
5. FUTURE ASPECTS
VLSI technology is a boon to electronics. MICs are used in spacecraft and aircraft. In India the Departments of Defence and Space have been developing a light combat aircraft and radar since 1980, and three such combat aircraft have been developed so far. The main issue is the low weight of the craft. Technologies exist by which the resistance of a circuit can be decreased; resistance tends to take up the maximum area in IC fabrication technology, while low weight is the main motto of the light combat aircraft (viz. TEJAS) and of radars. Hence, for aircraft/radar use the selected MICs and antennas should be of low weight and high strength, so that the craft remains aerodynamically stable. With monolithic IC fabrication technology, i.e. VLSI, this antenna can find many applications in VHF/UHF communication. Many Hilbert antennas can be designed on a single substrate (Fig. 2), which yields a reduction in size, the basic need of wireless technology.
6. REFERENCES
[1] Zhu Jinhui, Hoorfar Ahmad, E. Nader, "Bandwidth, cross-polarization and feed-point characteristics of matched Hilbert antennas."
[2] Sayem, T. M. and Ali, M., "Characteristics of a microstrip-fed miniature printed Hilbert slot antenna."
[3] Vinoy, K. J., Jose, K. A., Varadan, V. K. and Varadan, V. V., "Hilbert curve fractal antenna."
[4] Gonzalez-Arbesu, J. M., Blanch, S. and Romeu, J., "The Hilbert curve as a small self-resonant monopole from a practical point of view."
[5] Edward C. Jordan and Keith G. Balmain, Electromagnetic Waves and Radiating Systems, PHI, Second Edition.
[6] Brown, G. H. and King, R., "High frequency models in antenna investigation."
[7] Carter, P. S., "Circuit relations in radiating systems and applications to antenna problems."
[8] King, H. E., "Mutual impedance of unequal length antennas in echelon."
[9] Raghavan, V., Materials Science and Engineering, 3rd edition, PHI, New Delhi, 1993.
[10] Skolnik, Merrill I., Introduction to Radar Systems, 3rd edition, TMH.
[11] Decker, J., Engineering Materials, TMH.
[12] Reuven, S., "Scattering effect of seams on sandwich radome performance."
[13] Chassell, Rick, "The effect of rain on microwave landing system antenna radomes," IEEE Symposium on Antenna Technology & Applied Electromagnetics, Winnipeg, Canada, Aug. 15-17, 1990.
[14] Boumens, Morcel and Wagner, Ulrike, "Spherical near-field radome test facility for nose-mounted radomes of commercial traffic aircraft," Orbit/FR-Europe GmbH, Johann-Sebastian-Bach-Str. 11, 85591 Vaterstetten, Germany.
[15] Griffith, Lance A., "A fundamental and technical review of radomes," May 2008.
[16] "Electric field on Earth," The Physics Factbook, edited by Glenn Elert; and "Atmospheric electricity," ELAT, June 2005.
[17] Related websites: www.sciam.com, www.hypertextbook.com, www.mpdigest.com/issues/articals/may2008.
[18] Chan, K. K., Chang, P. R. and Hsu, F., "Radome design by simulated annealing technique," IEEE Trans., 0-7803-0730-5/92, 1992.
[19] Nie, X.-C., Yuan, N., Li, W. L., Yeo, T. S., Gan, Y. B., "Fast analysis of electromagnetic transmission through arbitrarily shaped airborne radomes using the precorrected-FFT method," Progress in Electromagnetics Research, PIER 54, 37-59, 2005.
(Simulation plots: mutual impedance of parallel side-by-side elements of the Hilbert antenna; mutual impedance of parallel collinear elements of the Hilbert antenna.)
Effect of Moisture Ingress on the Substrate of a Rectangular Microstrip Patch Antenna using CPW Feed
Devinder Sharma*1, Rajesh Khanna*2
*1 Student, *2 Professor
1,2 Department of Electronics and Communication Engineering, Thapar University, Patiala, India
devinder.ece@thapar.edu*1, rkhanna@thapar.edu*2
Abstract: In this paper the effect of moisture ingress on the resonant frequency of a very compact rectangular microstrip patch antenna has been studied. As the moisture ingression changes, the effective dielectric constant of the FR4 substrate changes, and with it the resonant frequency. A CPW-fed rectangular microstrip patch slot antenna with dimensions 28.63 mm × 25 mm (L × W), fabricated on an FR4 substrate (εr = 4.4) of thickness 1.6 mm, is analyzed. An empirical relation between the percentage change in moisture ingress and the corresponding change in the resonant frequency of the rectangular microstrip patch antenna is presented.
Keywords: CPW feed, slot antenna, substrate, moisture, resonant frequency.
I. INTRODUCTION
The past few years have witnessed a profusion of technology in the field of wireless communications. Designing antennas that meet the commercial requirements (larger bandwidth, symmetric field distribution, stable radiation pattern, etc.) has become a major challenge for researchers. Microstrip patch antennas have many advantages, including light weight, low volume, low profile and a planar configuration that can easily be made conformal to the host surface; their low fabrication cost means they can be manufactured in large quantities [1]. A CPW-fed antenna is preferable for Monolithic Microwave Integrated Circuit (MMIC) applications, as no drilling of holes through the substrate is required [2]. CPW antennas, whether slot type or patch type, are becoming increasingly important in many military and commercial applications. Their important feature is that the CPW feed and the antenna are on the same plane, i.e. on the same side of the substrate, thereby facilitating the connection of lumped shunt elements and active devices and eliminating the need for via holes. In particular, CPW-fed slot antennas have been widely used for wireless applications, since they exhibit a larger bandwidth with bi-directional radiation patterns and are also compatible with monolithic integrated circuits and active solid-state devices [3]. In this research paper the effect of moisture ingress on the resonant frequency of a rectangular microstrip patch antenna is studied.
II. ANTENNA DESIGN
The geometry of the antenna is shown in Fig. 1. The effect of moisture ingress on the resonant frequency of the substrate, and how the dielectric constant changes, have been analyzed. CPW dimensions of a = 1.65 mm and b = 3 mm are selected, since these parameters correspond to a 50 Ω impedance of the CPW feed. The antenna characteristics corresponding to each parameter were analyzed using the commercially available Method of Moments (MoM) software IE3D.
Fig. 1: Geometry of the antenna
In the above geometry, L = 28.63 mm, W = 25 mm, Wg = 13.12 mm, Lg = 14.8 mm, l = 10 mm, w = 5 mm. The width of the CPW feed is Wf = 3 mm, with d = 1.72 mm and a = 1.65 mm.
III. SIMULATION RESULTS
Effect of Moisture on Resonant Frequency
The effect of moisture ingress is analyzed by changing the dielectric constant of the substrate using the relation shown in eqn. (1) [4]:

εr = εfr4 + M(εw − εfr4)   ...(1)

Here M is the fractional moisture ingress (e.g. 2% moisture corresponds to M = 0.02), and εfr4 and εw are the dielectric constants of the FR4 substrate (~4.4) and of water (~80) respectively.
Table 1 shows that with increasing moisture absorption the dielectric constant varies, and the corresponding resonant frequency also changes. The observed simulated results are tabulated below in Table 1.
TABLE 1
Fig. 2(a), Fig. 2(b) and Fig. 2(c) show how the return loss and resonant frequency vary with increasing dielectric constant, i.e. moisture intrusion. The three figures show how the resonance peaks shift to the left with moisture ingression into the substrate of the CPW-fed rectangular microstrip patch slot antenna. The effective dielectric constant and moisture absorption are related to each other as shown in equation (1).
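The shift can be sketched numerically. The sketch below evaluates eqn (1) with M as a fraction, and adds the usual first-order scaling of a patch resonance with permittivity, f ∝ 1/√εr (an approximation assumed here; the paper itself reports full-wave IE3D results):

```python
import math

EPS_FR4, EPS_WATER = 4.4, 80.0  # dielectric constants from the paper

def eps_wet(m_fraction: float) -> float:
    """Eqn (1): effective permittivity after moisture ingress M (fraction)."""
    return EPS_FR4 + m_fraction * (EPS_WATER - EPS_FR4)

def shifted_frequency(f_dry_hz: float, m_fraction: float) -> float:
    """First-order estimate: resonant frequency scales as 1/sqrt(eps_r)."""
    return f_dry_hz * math.sqrt(EPS_FR4 / eps_wet(m_fraction))

if __name__ == "__main__":
    for m in (0.0, 0.008, 0.02):
        print(f"M = {m:.1%}: eps_r = {eps_wet(m):.4f}")
    # eps_r = 5.0048 at M = 0.8% and 5.912 at M = 2.0%,
    # matching the values used in Figs. 2(b) and 2(c).
```

The permittivity values reproduce those in Figs. 2(b) and 2(c), and the square-root scaling predicts the downward frequency shift the figures display.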
Fig. 2(a): Return loss and resonant frequency for εr = 4.4
Fig. 2(b): Return loss and resonant frequency for εr = 5.0048
Fig. 2(c): Return loss and resonant frequency for εr = 5.912
The graph below shows how the resonant frequency decreases as the moisture intrusion increases over the range 0 to 2%.
GRAPH 1: Resonant frequency and moisture absorption
Fig. 3: Graph between resonant frequency and moisture absorption
IV. CONCLUSION
1) It is observed that with increasing moisture ingress into the substrate of the microstrip patch antenna, the effective dielectric constant of the substrate increases.
2) With the increase in the effective dielectric constant, the resonant frequency decreases.
V. ACKNOWLEDGEMENT
The authors are grateful to the reviewers for their careful review and helpful suggestions. This work was funded by the Electronics & Communication Engineering Department, Thapar University, Patiala.
REFERENCES
[1] Dalia Nashaat, Hala A. Elsadek, "Electromagnetic Analyses and an Equivalent Circuit Model of Microstrip Patch Antenna with Rectangular Defected Ground Plane," 2009.
[2] D. Sriram Kumar, G. Surya Prakash and C. P. Mohammed Raft, "Effect of Slot and Substrate Parameters on CPW-Fed Slot Antenna: Analysis," June 2008.
[3] D. Sriram Kumar, G. Surya Prakash and C. P. Mohammed Raft, "Effect of Slot Parameters and Feed Inset on CPW-Fed Slot Dipole Antennas: Analysis," Vol. 4, No. 3, May 2009.
[4] G. D. Davis, M. J. Rich and L. T. Drzal, "Monitoring Moisture Uptake and Delamination in CFRP-Reinforced Concrete Structures with Electrochemical Impedance Sensors," Journal of Nondestructive Evaluation, Vol. 23, No. 1, March 2004.
[5] Ramesh Garg, Prakash Bhartia and Inder Bahl, Microstrip Design Handbook, Artech House Publishers, Boston, U.S.A., 2001.
[6] D. M. Pozar and D. H. Schaubert, Microstrip Antennas, New York: IEEE Press, pp. 155-166, 1995.
Slot Loaded Patch Antenna for WLAN/Wi-Max Applications
Rakesh Kumar Tripathi#1, Rajesh Khanna#2
1 Student, 2 Professor
#1,#2 Department of Electronics and Communication Engineering, Thapar University, Patiala, Punjab
rakesh.tripathi15@gmail.com, rkhanna@thapar.edu
Abstract: In this paper, the design of a coaxially fed, single-layer, single-patch wideband microstrip antenna in the form of a slot-loaded rectangular patch antenna for WLAN/Wi-Max applications is presented. The slot-loaded patch antenna resonates at 2.43 GHz and 3.5 GHz. It has been developed for use in future WLAN/Wi-Max technologies. The proposed antenna is simulated using Computer Simulation Technology (CST) Microwave Studio. The simulated results show that the designed patch antenna achieves impedance bandwidths of 4.1% and 14.8% for VSWR < 2, covering the frequency ranges 2.39 GHz to 2.49 GHz and 3.33 GHz to 3.849 GHz respectively. The antenna exhibits a return loss (S11) below -10 dB over the frequency ranges mentioned.
Keywords: Microstrip antenna, WLAN/Wi-Max bands, impedance bandwidth, CST Microwave Studio
I. INTRODUCTION
Microstrip patch antennas are used in communication systems due to their low profile. However, in some systems, such as two-way wireless communications, in order to have a single antenna to transmit and receive the information, the antenna must be capable of operating in two distinct frequency ranges rather than just one. In addition, due to the miniaturization of portable communication devices, a small antenna is desirable. A new small dual-band rectangular patch antenna is presented in this paper. It has been demonstrated that, by loading a rectangular microstrip patch antenna with a pair of narrow slots placed close to the radiating edges of the patch, dual-frequency operation can be obtained [1, 3]. In such dual-frequency designs, the two operating frequencies are associated with the TM10 and TM30 modes of the unslotted rectangular patch. In addition, the two operating frequencies have the same polarisation planes and broadside radiation patterns, with a frequency ratio generally in the range of 1.6 to 2.0 for a single probe-feed case [1]. Here we demonstrate that, by placing the embedded slots close to the non-radiating edges of the patch instead of the radiating edges, and replacing the narrow slots with properly bent slots, a novel dual-frequency operation of the microstrip antenna can easily be achieved using a single probe feed. The two operating frequencies of the proposed antenna are also found to have the same polarisation planes and broadside radiation patterns. This makes the proposed antenna more suitable for dual-frequency applications where a lower frequency ratio is required.
In this paper, a slot-loaded dual-band microstrip patch antenna for WLAN/Wi-Max applications is designed and simulated using CST Microwave Studio. The proposed slot-loaded patch antenna is suitable for 2.39 GHz to 2.49 GHz WLAN applications and 3.33 GHz to 3.849 GHz Wi-Max applications.
II. ANTENNA DESIGN
In this paper several parameters have been investigated using CST Microwave Studio software. The geometry of the slot-loaded microstrip patch antenna is shown in Figure 1. The structure consists of a rectangular patch fed by a 50 Ω co-axial probe. The design specifications for the patch antenna are:
Substrate permittivity (εr) = 2.33
Substrate thickness (h) = 8 mm
Length of patch (L) = 30 mm
Width of patch (W) = 26 mm
Feed point location = (0, 2.5)
Dimensions of ground (Lg × Wg) = 90 × 80 mm²
The slot dimensions are:
a = 19 mm
b = 20 mm
c = 14 mm
where a, b and c are shown in Figure 1. The width of each slot is 2 mm.
The antenna structure is fed with a 50 Ω co-axial probe. The inner and outer radii of the co-axial probe are 1.5 mm and 3 mm respectively.
Figure 1(a)
Figure 1(b)
Figure 1(a): Geometry of Slot Loaded Single Layer Patch
Antenna
(b): Structural View of Patch Antenna
III. RESULT AND DISCUSSION
The dual-band characteristics of the proposed patch antenna are achieved by incorporating slots. The centre frequencies of these bands are decided by the electrical lengths of the slots. Among other factors, the thick multilayer substrate helps in achieving the required bandwidth. The feed location is moved from the centre of the geometry to get the best possible impedance match to the antenna [2]. The simulation studies of the proposed antenna reported here are carried out using CST Microwave Studio.

A. Return loss characteristic
The return loss of the slot-loaded patch antenna is shown in Figure 2, which shows that it resonates at 2.43 GHz and 3.5 GHz. These resonant frequencies give the measures of the impedance bandwidth characteristics of the patch antenna [2]. The impedance bandwidth of the proposed antenna is 100 MHz (from 2.39 to 2.49 GHz) for the first band and 519 MHz (from 3.33 GHz to 3.849 GHz) for the second band. From Figure 2, the return loss values at the resonant frequencies fr1 = 2.43 GHz and fr2 = 3.5 GHz are -22.63 dB and -20.26 dB respectively. The achieved return loss values are small enough, and the frequencies close enough to the specified frequency bands, for WLAN and Wi-Max applications. The return loss values suggest good matching at the frequency points below the -10 dB region.
Figure 2: Return loss versus frequency plot
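The quoted percentage bandwidths follow directly from the band edges and the resonant frequencies (a sketch of the arithmetic; the paper reports only the final figures):

```python
def percent_bandwidth(f_low_ghz: float, f_high_ghz: float,
                      f_res_ghz: float) -> float:
    """Impedance bandwidth as a percentage of the resonant frequency."""
    return 100.0 * (f_high_ghz - f_low_ghz) / f_res_ghz

if __name__ == "__main__":
    print(round(percent_bandwidth(2.39, 2.49, 2.43), 1))   # 4.1  (WLAN band)
    print(round(percent_bandwidth(3.33, 3.849, 3.5), 1))   # 14.8 (Wi-Max band)
```

Using the resonant frequencies as the reference reproduces both the 4.1% and 14.8% figures quoted in the abstract.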
B. Radiation pattern characteristics
The simulated far-field radiation patterns of the proposed antenna differ between frequencies. Figure 3 shows the simulated radiation patterns at the two resonant frequencies (2.43 GHz and 3.5 GHz); the proposed antenna radiates in the broadside direction. Figure 3 also shows that the directivity of the proposed antenna is 5.366 dBi at the resonating frequency of 2.43 GHz and 8.725 dBi at 3.5 GHz.
Figure 3(a)
Figure 3(b)
Figure 3(c)
Figure 3(d)
Figure 3: (a):3D Radiation Pattern of Patch Antenna at 2.43GHz
(b): Polar Plot of Patch Antenna at 2.43 GHz
(c):3D Radiation Pattern of Patch Antenna at 3.5 GHz
(d): Polar plot of Patch Antenna at 3.5 GHz
Figure 4 shows the VSWR versus frequency graph for the proposed antenna. The VSWR at the two resonant frequencies fr1 = 2.43 GHz and fr2 = 3.5 GHz is 1.193 and 1.215 respectively. The VSWR remains below 2 across both frequency bands, which indicates good antenna impedance matching in these two bands.
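The VSWR and return-loss figures quoted above are tied together through the reflection-coefficient magnitude. As a quick consistency check, the sketch below applies the standard conversion (these formulas are textbook relations, not taken from the paper):

```python
# Convert return loss (dB) to VSWR via the reflection-coefficient magnitude:
# |Gamma| = 10^(-RL/20),  VSWR = (1 + |Gamma|) / (1 - |Gamma|).

def vswr_from_return_loss(rl_db):
    gamma = 10 ** (-rl_db / 20)   # |S11| magnitude
    return (1 + gamma) / (1 - gamma)

# The paper's second resonance: RL = 20.26 dB gives VSWR ~ 1.215,
# matching the value reported at 3.5 GHz.
print(round(vswr_from_return_loss(20.26), 3))  # -> 1.215
```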
Figure 4(a)
Figure 4(b)
Figure 4: VSWR versus Frequency Plot
C. CONCLUSION
A slot loaded patch antenna has been designed and simulated using CST Microwave Studio software. It operates in two bands, viz. band I (2.39 GHz to 2.49 GHz), covering the WLAN band, and band II (3.33 GHz to 3.849 GHz), covering the WiMAX band. The return losses for these bands are -22.63 dB and -20.26 dB respectively. The measured impedance bandwidth of the proposed antenna is 4.1% and 14.8% over the frequency ranges 2.39 GHz to 2.49 GHz and 3.33 GHz to 3.849 GHz respectively, with stable broadside radiation patterns. Good radiation pattern results have been obtained, which appear adequate for the envisaged applications.
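The 4.1% and 14.8% figures follow directly from the band edges and resonant frequencies via the standard fractional-bandwidth definition; a short check (the definition is standard, not specific to this paper):

```python
# Fractional impedance bandwidth: (f_high - f_low) / f_centre * 100 (%).
def fractional_bw_percent(f_low, f_high, f_centre):
    return (f_high - f_low) / f_centre * 100

band1 = fractional_bw_percent(2.39, 2.49, 2.43)   # WLAN band, fr1 = 2.43 GHz
band2 = fractional_bw_percent(3.33, 3.849, 3.5)   # WiMAX band, fr2 = 3.5 GHz
print(round(band1, 1), round(band2, 1))  # -> 4.1 14.8
```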
D. REFERENCES
[1]. Maci, S., Biffi Gentili, G., Piazzesi, P., and Salvador, C., "Dual band slot-loaded patch antenna", IEE Proc. Microw. Antennas Propag., 1995, 142, pp. 225-232.
[2]. R. Garg, P. Bhartia, I. Bahl, A. Ittipiboon, Microstrip Antenna Design Handbook, Artech House, Boston/London, 2000.
[3]. James, J. R., and P. S. Hall, Handbook of Microstrip Antennas, Vol. 1, London: Peter Peregrinus Ltd., 1989.
A coaxial fed wide band microstrip patch
antenna for WLAN applications
Dinesh#1, Rajesh Khanna#2
1Student, 2Professor
1,2Department of Electronics and Communication Engineering, Thapar University, Patiala, Punjab
1dinesharma07@gmail.com, 2rkhanna@thapar.edu
Abstract- A rectangular microstrip patch antenna with coaxial feed is presented. It is a single band patch antenna for WLAN applications. The antenna consists mainly of a ground plane, a substrate and a patch. Four narrow slots are cut in the patch, of which two are horizontal and two are vertical. The antenna resonates at 5.868 GHz (5.6911-6.0812 GHz) and hence covers the WLAN band 5.725-5.825 GHz. The antenna has been designed in CST Microwave Studio.
I. INTRODUCTION
In applications like high performance aircraft, satellites, missiles, mobile radio and wireless communications, size, cost, weight and ease of installation are the main constraints. Also, with the advancement of technology, the requirement for an antenna to resonate at more than one frequency, i.e. multi-banding, is increasing day by day. Here the microstrip patch antenna is the best choice to fulfil all the above requirements. A microstrip patch antenna also offers many advantages over other conventional antennas, such as low fabrication cost and support for both linear and circular polarization. Microstrip patch antennas have some disadvantages as well, such as surface wave excitation and narrow bandwidth, but the bandwidth of a microstrip patch antenna can be improved by various methods, such as cutting a U-slot [1] or increasing the substrate height. An antenna array can also be used to improve the bandwidth [2].
Here, a microstrip patch antenna with coaxial feed is designed. In this feeding technique, the inner conductor of the coaxial connector extends from the ground through the substrate and is soldered to the radiating patch, while the outer conductor extends from the ground up to the substrate. The main advantage of this feeding scheme is that the feed can be placed at any desired location inside the patch in order to match its input impedance. This feed method is easy to fabricate and has low spurious radiation. However, its major drawback is that it provides narrow bandwidth and is difficult to model, since a hole has to be drilled in the substrate and the connector protrudes outside the ground plane, so the structure is not completely planar for thick substrates. The bandwidth can, however, be improved by the methods noted above. Recently, many coaxial-fed microstrip patch antennas for different applications have been presented [3-6]. Fig. 1 shows the coaxial feeding technique.
Fig. 1 Coaxial feeding technique
II. ANTENNA DESIGN AND SIMULATION
The geometry of the proposed coaxial-fed antenna with single band operation for WLAN applications is depicted in fig. 1. It is a rectangular patch antenna with four slots cut into the patch as shown: two horizontal and two vertical. Both vertical slots and both horizontal slots have the same dimensions. The antenna is excited by a coaxial feed line and is printed on an FR4 substrate with a thickness of 1.6 mm and relative permittivity of 4.4. The dimensions of the proposed antenna are:
Ground size = 55 × 47.6 mm
Substrate size = 55 × 47.6 mm
Patch size = 29.44 × 38.02 mm
Horizontal slots = 26 × 1 mm
Vertical slots = 1 × 30 mm
The height of the ground, which is made of FR4 material, is 4.8 mm, and the height of the patch, which is made of PEC (perfect electric conductor), is 0.002 mm. The proposed antenna is fed by a coaxial line: the outer conductor (from the bottom of the ground to the top of the substrate) is made of FR4 material and the inner conductor (from the bottom of the ground to the top of the patch) is made of PEC material. The feed point of the proposed antenna is (-6.4, 0), where the best impedance matching is achieved.
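The dependence of matching on probe position can be estimated with the transmission-line model of a patch, where the input resistance falls off as cos² of the probe offset from the radiating edge. The sketch below is a generic illustration: the 240 Ω edge resistance and 12.8 mm resonant length are illustrative values, not parameters of this design.

```python
import math

# Transmission-line model: R_in(y0) = R_edge * cos^2(pi * y0 / L),
# where y0 is the probe distance from the radiating edge and L the
# resonant patch length. (Generic model; numbers below are illustrative.)
def probe_input_resistance(r_edge, y0, length):
    return r_edge * math.cos(math.pi * y0 / length) ** 2

r_edge, L = 240.0, 12.8e-3  # hypothetical edge resistance and patch length
for y0_mm in (0.0, 2.0, 4.0, 5.0):
    r = probe_input_resistance(r_edge, y0_mm * 1e-3, L)
    print(f"y0 = {y0_mm} mm -> R_in = {r:.1f} ohm")
```

Moving the probe inward, R_in drops from the edge value toward 50 Ω, which is why the feed point is swept until the best match is found.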
Fig. 1: Geometry and dimensions of the proposed antenna
Fig. 2 shows the side view of the proposed microstrip patch antenna. As the proposed antenna is coaxially fed, the outer and inner conductors of the coaxial feed line are clearly visible in this diagram.
Fig. 2: Side view of the proposed antenna
Fig. 3 shows the simulated return loss [S11] of the proposed antenna in dB. S11 gives the return loss at port 1, where the input is applied to the microstrip patch antenna; it should be less than -10 dB for acceptable operation. The plot shows that the proposed antenna resonates at approximately 5.868 GHz. The 10 dB bandwidth is about 390.06 MHz (5.6911-6.0812 GHz). The return loss achieved at the resonant frequency is -46.996 dB, which is very good. The antenna covers the band for the WLAN application, i.e. 5.725-5.825 GHz.
Fig. 3: Simulated return loss [S11]
Fig. 4 shows the Smith chart of the proposed antenna. It is a graphical representation of the normalized characteristic impedance. The Smith chart is one of the most useful graphical tools for high frequency circuit applications; its goal is to identify all possible impedances on the domain of existence of the reflection coefficient.
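The mapping the Smith chart performs is the bilinear transform from load impedance to reflection coefficient. A minimal sketch of that relation (standard definition; the example load is illustrative):

```python
# The Smith chart plots Gamma = (Z - Z0) / (Z + Z0) for a load Z
# normalized to the reference impedance Z0; a matched 50-ohm load
# maps to the centre of the chart (Gamma = 0).
def reflection_coefficient(z_load, z0=50):
    return (z_load - z0) / (z_load + z0)

print(abs(reflection_coefficient(50)))        # perfect match -> 0.0
print(abs(reflection_coefficient(50 + 10j)))  # small inductive mismatch
```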
Fig. 4: Smith chart
Fig. 5 shows the 3-D radiation pattern, which is a graphical depiction of the relative field strength transmitted from or received by the antenna as a function of direction (angle). Ideally the antenna should have no side lobes or back lobes; they cannot be removed completely, but they can be minimized.
Fig. 5: Radiation pattern
Fig. 6 shows the real part of the V/A (impedance) matrix coefficient Z11 for the proposed antenna. The value of Z11 at the resonant frequency of 5.868 GHz is approximately equal to 50 ohm, as required for the desired result. The value of Z11 at the resonant frequency depends mainly on the feed point location.
Fig. 6: Real part of V/A matrix coefficient in Z
Fig. 7 shows the VSWR (voltage standing wave ratio) plot for the designed antenna. The value of the VSWR should lie between 1 and 2. SWR is used as an efficiency measure for transmission lines, i.e. electrical cables that carry radio frequency signals for purposes such as connecting radio transmitters and receivers to their antennas and distributing cable television signals. Here the VSWR of the proposed microstrip patch antenna is 1.008 at the resonant frequency, as shown in fig. 7.
Fig. 7: VSWR plot
III. CONCLUSION
A simple rectangular microstrip patch antenna with four slots cut into the patch for WLAN applications has been presented. It resonates at 5.868 GHz (5.6911-6.0812 GHz) with a bandwidth of 390.06 MHz. The return loss achieved at the resonant frequency is -46.996 dB. Hence the antenna covers the WLAN band 5.725-5.825 GHz. The impact of moving the slots on the various antenna parameters can be examined in future work.
REFERENCES
[1]. M. Mahmoud, "Improving the Bandwidth of U-slot Microstrip Antenna Using a New Technique (Trough-Slot Patch)", Region 5 Conference, 2008 IEEE, pp. 1-6.
[2]. A. B. Smolders, "Broadband microstrip array antennas", Antennas and Propagation Society International Symposium, 1994, AP-S Digest, vol. 3, 1994, pp. 1832-1835.
[3]. Jun-Hai Cui, Shun-Shi Zhong, "Compact microstrip patch antenna with C-shaped slot", Microwave Conference, 2000 Asia-Pacific, Dec. 2000, pp. 727-730.
[4]. Reza Dehbashi, Keyvan Forooraghi, and Zahra Atlasbaf, "Active Integrated Antenna Based Rectenna Using a New Probe-Fed U-Slot Antenna with Harmonic Rejection", Antennas and Propagation Society International Symposium 2006, IEEE, July 2006, pp. 2225-2228.
[5]. Noor Asniza Murad, Mazlina Esa, and Suzila Tukachil, "Microstrip U-Shaped Dual-Band Antenna", Asia-Pacific Conference on Applied Electromagnetics Proceedings, Dec. 2005, pp. 110-113.
[6]. K. Kumar, N. Gunasekaran, "A Novel Wide Band Planar n Shaped Base Station Antenna", International Conference on Communications and Signal Processing (ICCSP), Feb. 2011, pp. 294-296.
A Compact UWB Monopole Antenna with
Partial Ground Plane
Himanshu Shukla1, Apoorv Bajpai2, Tejbir Singh3
1,2,3IEC-College of Engg. & Tech., Gr. Noida
1himanshu.coolbal@gmail.com, 2apoorv.bajpai_5454@rediffmail.com
Abstract- The proposed antenna has been worked out in this paper by investigating different parameters. The impedance bandwidth (VSWR < 2) of this antenna is 7.5 GHz. It is a monopole antenna with a partial ground plane, which helps in ultra wide band communication. It is a very compact antenna with dimensions 26 × 26 × 1.6 mm3, designed on an FR4 substrate. The proposed antenna is successfully simulated, showing wideband matched impedance, well-established radiation patterns and constant gain. Good agreement is obtained between simulation and experiment. As it is based on an FR4 substrate, it is cheap and easy to fabricate.
Index Terms- UWB characteristic, microstrip antenna, monopole antenna, partial ground plane.
I. INTRODUCTION
UWB has emerged as one of the most promising fields for researchers because of its attractive antenna features: compact size, omnidirectional radiation pattern, wide impedance bandwidth, low power consumption, ease of manufacture and uniplanar configuration. Printed monopole antennas with a very large bandwidth are currently in great demand as they meet most of the above mentioned requirements.
The range decided by the FCC for UWB is 3.1 GHz to 10.6 GHz. Many narrow bands operate within this range, for example WLAN (Wireless Local Area Network) between 5.15 GHz and 5.825 GHz and WiMAX (Worldwide Interoperability for Microwave Access) between 3.4 GHz and 3.69 GHz, but owing to its low power consumption and high data rate, UWB technology is more prominent.
UWB is a radio technology that can be used at very low energy levels for short-range, high-bandwidth communications by using a large portion of the radio spectrum. UWB has traditional applications including non-cooperative radar imaging, target sensor data collection, and precision locating and tracking. UWB communications transmit in a way that does not interfere significantly with other, more traditional narrowband and continuous carrier wave uses in the same frequency band.
A ground plane is a structure, or a relationship between the antenna and another object, whose only role is to permit the antenna to function as such (e.g., it forms a reflector or director for the antenna). It sometimes serves as the near-field reflection point for an antenna, or as a reference ground in a circuit. There are many types of ground plane, such as the drooping ground plane and the flat circular ground plane. A ground plane may be a natural surface like the earth or an artificial surface; artificial ground planes are conductive surfaces used in place of the earth's surface.
The proposed antenna, designed with a compact V-shaped slot in the ground plane, has the advantage of compact size compared with existing antennas, and the existing structure can be reused for design without much adjustment. The V-shaped slot enhances the frequency bandwidth of the microstrip monopole antenna with a partial ground plane. The antenna is simulated and analyzed using the simulation software Ansoft HFSS.
FIGURE 1: SIMULATED VSWR OF THE PROPOSED ANTENNA
II. ANTENNA DESIGN AND RESULTS
In applications where size, weight, cost, performance and ease of installation are constraints, low profile antennas are required, and microstrip antennas meet these specifications. These antennas can be flush mounted to metal or other existing surfaces, and they only require space for the feed line, which is normally placed behind the partial ground plane. Microstrip antennas are popular for low profile applications at frequencies above 100 MHz (wavelengths below 3 m).
In this antenna design, the substrate is of square geometry and made of FR4, which is cheap and easy to fabricate. The values W1 and W2 are taken as half a wavelength at the lower frequency (3.1 GHz); the substrate height is h = 1.6 mm, the dielectric constant εr = 4.4 and the loss tangent is 0.02. The calculated values of the antenna are optimized with the HFSS tool; the optimization was performed for the best possible impedance bandwidth. The final parameters are W1 = 26 mm, W2 = 26 mm, T1 = 14 mm, T2 = 9 mm, T3 = 9.2 mm, T4 = 14 mm, T5 = 3.8 mm, T6 = 13 mm.
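For a printed antenna the relevant half-wavelength is the guided one, shortened by the substrate. The rough check below uses the simple effective-permittivity approximation (εr + 1)/2, which is an assumption of this sketch rather than a formula given in the paper; it lands in the same range as the 26 mm chosen for W1 and W2:

```python
import math

C = 3e8  # free-space speed of light, m/s

# Guided half-wavelength using eps_eff ~ (eps_r + 1) / 2, a common
# first approximation for printed structures (assumption, not from paper).
def half_guided_wavelength_mm(f_hz, eps_r):
    eps_eff = (eps_r + 1) / 2
    return C / (2 * f_hz * math.sqrt(eps_eff)) * 1e3

# At the 3.1 GHz lower band edge on FR4 (eps_r = 4.4):
print(round(half_guided_wavelength_mm(3.1e9, 4.4), 1))  # -> 29.4 (mm)
```

The ~29 mm estimate is within about 15% of the optimized 26 mm, the remaining difference being absorbed by the HFSS optimization.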
Fig. 3 shows the configuration of the proposed monopole antenna. It consists of a radiating patch and a microstrip feed with a V-shaped slot in the ground plane. The ground plane slot provides an
additional current path, so this structure changes the inductance and capacitance of the input impedance, which plays an important role in increasing the impedance bandwidth. Fig. 2 shows the simulated VSWR without and with the slot in the ground plane; it is observed that the V-shaped slot in the ground plane has a strong effect on the antenna's bandwidth enhancement.
Fig. 2: Simulated VSWR of the antenna with and without the slot, with optimal dimensions.
As can be seen from figure 2, the antennas with and without the slot are compared on the basis of VSWR. It can be clearly seen that the antenna with the slot provides a better VSWR versus frequency response, remaining below 2 throughout. Figure 1 shows the actual VSWR versus frequency graph of the proposed antenna; it clearly shows that the antenna has a VSWR always less than 2, which corresponds to the increased bandwidth.
Figure 4: Group delay of the proposed antenna
Figure 5 shows the gain of the proposed antenna across the whole UWB bandwidth, i.e. the relationship between gain and frequency (3 to 11 GHz). The gain variation of the proposed antenna is about 3.5 dBi.
Figure 5: Gain of the proposed antenna
Figure 3: Proposed antenna geometry
Since UWB systems use pulse communication, a key issue is pulse distortion by the antenna. Ideally, a linear phase response (constant group delay) is wanted. Figure 4 shows the simulated group delay of the proposed antenna, which is about 1.2 ns. These group delay characteristics demonstrate that the proposed antenna exhibits phase linearity at the required UWB frequencies.
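Group delay is the negative derivative of the transmission phase with respect to angular frequency, and can be estimated numerically from a simulated S21 phase trace. The sketch below verifies the estimator on synthetic data with a built-in 1.2 ns delay (the synthetic linear-phase signal is illustrative, not data from the paper):

```python
import numpy as np

# Group delay tau(f) = -d(phase)/d(omega), estimated by finite
# differences from the unwrapped S21 phase.
def group_delay(freq_hz, phase_rad):
    omega = 2 * np.pi * np.asarray(freq_hz)
    return -np.gradient(np.unwrap(phase_rad), omega)

# Synthetic check: a pure delay tau gives phase = -omega * tau.
f = np.linspace(3e9, 11e9, 201)
tau_true = 1.2e-9
phase = -2 * np.pi * f * tau_true
tau_est = group_delay(f, phase)
print(round(float(tau_est.mean()) * 1e9, 2))  # -> 1.2 (ns)
```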
Figure 6: Radiation pattern of proposed antenna at 3 GHz frequency
Figure 7: Radiation pattern of the proposed antenna at 7 GHz frequency
Figure 8: Radiation pattern of the proposed antenna at 10 GHz frequency
The E-field and H-field radiation patterns of the proposed antenna at 3 GHz, 7 GHz and 10 GHz can be observed in figures 6, 7 and 8 respectively. It is seen that this antenna has a nearly omni-directional radiation pattern like normal monopole antennas. However, the omni-directional radiation properties deteriorate slightly as frequency increases. Over the entire bandwidth, the antenna behaves similarly to a conventional wideband monopole antenna.
III. CONCLUSION
From the above discussion it can be concluded that the V-shaped slot increases the bandwidth of the antenna. A compact monopole antenna for wideband applications has been designed and successfully implemented, with experimental and numerical results. The matching bandwidth of the proposed antenna has been significantly improved by using a V-shaped slot in the ground plane. Stable radiation patterns and constant gain in the UWB band are also obtained.
REFERENCES
[1] Constantine A. Balanis, Antenna Theory: Analysis and Design, John Wiley & Sons.
[2] Jeffrey H. Reed, An Introduction to Ultra Wideband Communication Systems, Prentice Hall.
[3] J. R. James and P. S. Hall (Eds.), Handbook of Microstrip Antennas.
[4] J. Liang, C. Chiau, X. Chen and C. G. Parini, "Study of a Printed Circular Disc Monopole Antenna for UWB Systems", IEEE Transactions on Antennas and Propagation, vol. 53, no. 11, November 2005, pp. 3500-3504.
[5] J. Liang, L. Guo, C. C. Chiau, X. Chen and C. G. Parini, "Study of CPW-fed circular disc monopole antenna", IEE Proceedings Microwaves, Antennas & Propagation, vol. 152, no. 6, December 2005, pp. 520-526.
[6] Kun Song, Ying-Zeng Yin, and Li Zha, "A novel monopole antenna with a self-similar slot for wideband applications", received 21 April 2009.
[7] K. P. Ray and S. Tiwari, "Vertex Fed Printed Hexagonal Monopole Antenna", Proceedings of the International Conference on Microwave, 2008.
[8] Qing-Xin Chu and Ying-Ying Yang, "A Compact Ultrawideband Antenna With 3.4/5.5 GHz Dual Band-Notched Characteristics", IEEE Transactions on Antennas and Propagation, vol. 56, no. 12, December 2008.
Printed Bow-tie Antenna for Bluetooth and WLAN Applications
1Prof. T. V. Rama Krishna, 2P. Thrinadh, 2B. Sai Ritesh, 2G. Manikanta, 2K. Suman Kumar
1Professor, 2Students
Department of ECE, K L University, Guntur, AP, India.
Email: thrinadh324@gmail.com
Abstract- This paper presents a modified printed bow-tie antenna designed to operate simultaneously in the frequency range of 2.4 GHz to 5.5 GHz. This new antenna design provides an end-fire radiation pattern suitable for integration in single and dual polarized phased array systems. The antenna exhibits small size and a wide bandwidth approaching 75%. The return loss, input impedance, VSWR, radiation patterns and field distributions are simulated using Ansoft HFSS and presented in this paper.
Key Words: Bow-Tie Antenna, Liquid crystal polymer
The proposed antenna operates at 2.4 GHz and 5.5 GHz, frequencies mostly used in Bluetooth and WLAN applications. The dimensions of the antenna are: inner width of the bow-tie 1 mm, outer width 18 mm, arm length 17.1 mm, gap port length 1 mm, substrate thickness 1.68 mm, and substrate dimensions along the X-axis and Y-axis of 40 mm and 60 mm respectively.
I. INTRODUCTION
Microstrip patch antennas are gaining importance due to their light weight, low profile, low cost, ease of analysis and use in high speed data communication applications. Printed antennas are economical and can be accommodated in different packages [1-3].
The proposed antenna consists of two triangular pieces of stiff wire or two triangular flat metal plates, arranged in the configuration of a bowtie, with a feed point at the gap between the apexes of the triangles [4-7]. Printed microstrip antennas are widely used in wireless communication and phased array applications. They exhibit a low profile, small size, light weight, low cost, high efficiency, and ease of fabrication and installation. Furthermore, they are compatible with microwave integrated circuit fabrication techniques at RF and microwave frequencies [8-9].
II. ANTENNA GEOMETRY AND RESULTS
II. ANTENNA GEOMETRY AND RESULTS
Figure (1) HFSS generated Bow-Tie antenna
III. SUBSTRATE MATERIAL SELECTION
A liquid crystal polymer material is used as the substrate and copper as the patch in the present work. The dielectric constant of the liquid crystal polymer substrate is 2.95 and its dielectric loss tangent is 0.0011. Liquid crystals are promising materials for frequency agile components such as tunable filters, phase shifters and reflectarrays at microwave frequencies. They are new materials with excellent properties for use in high frequency circuits, owing to their low loss and low dielectric constant, for
microwave and millimeter wave passive
circuits and printed antennas.
IV. RESULTS AND DISCUSSION
The proposed antenna is simulated using Ansoft HFSS software. The return loss obtained at the desired frequencies (less than -10 dB) is shown in figure (2). A return loss of -14.45 dB and a VSWR of 1.466 are obtained at 2.4 GHz, and a return loss of -14.88 dB and a VSWR of 1.43 are obtained at 5.5 GHz, as shown in figure (3).
Figure (4) Input Impedance Smith Chart
Figure (5) shows the 3D gain of the present
antenna and we obtain a gain of 3.6 dB from
the simulation results.
Figure (5) 3D-gain
Figure (2) The measured and simulated
return loss of bow-tie antenna.
Figure (6) Gain phi at 0° and 90°
Figure (3) The measured and simulated VSWR for the modified bow-tie antenna.
The input impedance Smith chart is shown in figure (4); from this curve we obtained an rms of 0.6271, a gain margin of 13.73, a gain crossover of 2.75, a phase crossover of 4.98 and a bandwidth of 4.13.
The co-polarized and cross-polarized far field radiation patterns of the proposed antenna are computed and presented in figures (6) and (7).
Quantity Value/units
Max U 0.1232 W/sr
Peak directivity 1.6694
Peak gain 1.6757
Peak realized gain 1.5482
Radiated power 0.9274 W
Accepted power 0.9239 W
Incident power 1 W
Radiation efficiency 1.0038
Front-to-back ratio 1.0326
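The antenna parameters tabulated above are internally consistent: radiation efficiency is peak gain over peak directivity, and realized gain additionally folds in the accepted-to-incident power ratio (mismatch loss). A quick check of the tabulated values (the efficiency marginally above unity is a numerical artifact of the simulation):

```python
# Consistency checks on the simulated antenna parameters (values from table).
peak_directivity = 1.6694
peak_gain = 1.6757
accepted_power = 0.9239   # W
incident_power = 1.0      # W

radiation_efficiency = peak_gain / peak_directivity
realized_gain = peak_gain * accepted_power / incident_power

print(round(radiation_efficiency, 4))  # -> 1.0038, as tabulated
print(round(realized_gain, 4))         # -> 1.5482, as tabulated
```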
Figure (7) Gain theta at 0° and 90°
The radiation patterns show good agreement between the simulated and the measured results.
V. FIELD DISTRIBUTIONS
The 3D field distributions give the relationship between the co-polarization and cross-polarization components. Moreover, they give a clear picture of the nature of the polarization of the fields propagating through the patch antenna. Figures (8) and (9) give the microstrip Bow-Tie antenna E-field and H-field distributions.
Figure (8) E-Field Pattern
Figure (9) H-Field Pattern
The mesh generation of the Bow-Tie antenna is presented in figure (10). Mesh generation is the practice of generating a polygonal or polyhedral mesh that approximates a geometric domain to the highest possible degree of accuracy; the term grid generation is often used interchangeably. Typical uses are rendering to a computer screen and physical simulation such as finite element analysis or computational fluid dynamics. The triangulated zones in the mesh indicate where the current distribution is concentrated.
Figure (10) Mesh Pattern
The antenna parameters and maximum field
data are computed using HFSS and given in
table (1) and table (2).
Table (1) Antenna parameters
rE-field Value/units At phi (degrees) At theta (degrees)
Total 9.638 180 174
X 4.5684 135 -90
Y 9.6379 180 174
Z 4.4432 90 138
Phi 9.6379 180 174
Theta 9.6355 90 -180
LHCP 6.7969 175 176
RHCP 6.8336 0 -172
Table (2) Maximum field data
VI. CONCLUSION
The experimental implementation of this work involves the LC dielectric characterization at microwave frequencies, which has been investigated. The measured parameters were in good agreement with the simulated results. The results shown here demonstrate the applicability of liquid crystals for the development of low-cost, light weight antennas in an all-package solution for future wireless communication and remote sensing systems. The investigation has been limited mostly to theoretical study due to the lack of a distributed computing platform. Detailed experimental studies can be taken up at a later stage to work out a design procedure for balanced amplifying antennas.
ACKNOWLEDGMENTS
The authors would like to express their thanks to the management of K L University and the Department of ECE for their continuous encouragement and support.
REFERENCES
1. N. Ghassemi, J. Rashed-Mohassel, M. H. Neshati,
M. Ghassemi, Slot Coupled Microstrip antenna for
ultra wideband applications in C and X bands,
Progress In Electromagnetics Research M, Vol. 3,
15-25, 2008.
2. Aidin Mehdipour, Karim Mohammadpour-
Aghdam, Reza Faraji-Dana, Abdel-Razik Sebak,
Modified slot bow-tie antenna for UWB
applications, Microwave and Optical Technologies
Letters, vol. 50, Issue 2, pp. 429-432, 2007.
3. Li, Y. T., J. W. Shi, and C. L. Pan, "Sub-THz photonic-transmitters based on separated transport-recombination photodiodes and a micro-machined slot-antenna", IEEE Photonics Technology Letters, Vol. 19, No. 11, 840-842, Jun. 1, 2007.
4. Huang, C. Y. and D. Y. Lin, CPW-fed bow-tie slot
antenna for ultra-wideband communications,
Electronics Letters, Vol. 42, No. 19, 1073-1074,
2006.
5. A. A. Eldek, A.Z. Elsherbeni and C. E. Smith,
Wide Band Modified Printed Bow-tie Antenna
with Single and Dual Polarization for C- X- Band
Applications, IEEE Trans. Antennas and
Propagation, Vol. 53, no. 9, pp. 3067-3072, Sept.
2005.
6. Gregory, I. S.and C. Baker, Optimization of
Photomixers and antennas for continuous-wave
terahertz emission, IEEE Journal of Quantum
Electronics, Vol. 41, No. 5, 717-728, May 2005.
7. A. Z. Elsherbeni, A. A. Eldek, and C. E. Smith, "Wideband slot and printed antennas", book chapter in Encyclopedia of RF and Microwave Engineering, Editor: K. Chang, John Wiley, Jan. 2005.
8. F. Tefiku, and C. A. Grimes, Design of broad-
band and dual-band antennas comprised of series-
fed printed-strip dipole pairs, IEEE Trans.
Antennas and Propagation. Vol. 48, no. 6, pp. 895-
900, June 2000.
9. W. Deal, N. Kaneda, J. Sor, Y. Qian, and T. Itoh,
A new quasi-Yagi antenna for planar active
antenna arrays, IEEE Trans. Microwave Theory
and Tech. vol. 48, no. 6, pp. 910-918, June 2000.
OPTIMIZATION OF SAW FILTER USING
GENETIC ALGORITHM
Amrish Kumar*, U. P. Singh**, Mukhtiar Rana***
*M. Tech. Scholar, Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
**Asst. Prof., Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
***M. Tech. Scholar, Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
Abstract- A Surface Acoustic Wave (SAW) filter is an electromechanical device used to pass desired frequencies. Widely used in mobile phones to filter both RF and IF frequencies, a SAW filter uses the piezoelectric effect to turn the input signal into vibrations that are turned back into electrical signals in the desired frequency range. SAW filters are used in a wide range of radio frequency applications, providing frequency control, frequency selection and signal processing capabilities. Their performance is based on the piezoelectric characteristics of a substrate, in which the electrical signal is converted into a mechanical one and back again to the electrical domain at the output. After propagating through the piezoelectric element, the output is recombined to produce a direct analogue implementation of a finite impulse response (FIR) filter. SAW filters have been widely used for many applications in recent communication systems. Starting from intermediate-frequency (IF) SAW filters for TVs, radio-frequency (RF) SAW filters are currently available for mobile, wireless and personal communication systems such as cellular phones and personal digital assistants (PDAs). The frequency response characteristics of SAW filters are governed primarily by their geometrical structures, i.e., the configurations of IDTs and reflectors arranged on piezoelectric substrates. However, even when the structural design of SAW filters is formulated as an optimization problem, most design techniques have relied on local optimization methods.
Key words: - SAW Filter, Genetic Algorithm
I. INTRODUCTION
A Surface Acoustic Wave (SAW) filter is an electromechanical device used to pass desired frequencies, widely used in mobile phones for both RF and IF frequencies. A SAW filter uses the piezoelectric effect to turn the input signal into vibrations that are turned back into electrical signals in the desired frequency range. SAW filters are used in a wide range of radio frequency applications, providing frequency control, frequency selection and signal processing capabilities. Their performance is based on the piezoelectric characteristics of a substrate, in which the electrical signal is converted into a mechanical one and back again to the electrical domain at the output. After propagating through the piezoelectric element, the output is recombined to produce a direct analogue implementation of a finite impulse response filter. SAW filters have been widely used for many applications in recent communication systems [1, 2]. Starting from intermediate-frequency (IF) SAW filters for TVs, radio-frequency (RF) SAW filters are currently available for mobile, wireless and personal communication systems such as cellular phones and personal digital assistants (PDAs). The frequency response characteristics of SAW filters are governed primarily by their geometrical structures, i.e., the configurations of IDTs and reflectors arranged on piezoelectric substrates. For realizing a desirable band pass filter, several computer-aided design approaches have been reported [1, 2]. The structural design of SAW filters is formulated as an optimization problem, and mostly classical optimization methods have been used to solve it.
II. GENETIC ALGORITHM
Genetic Algorithms (GAs) are stochastic search methods that can be used to search for an optimal solution to the evaluation function of an optimization problem. Holland proposed genetic algorithms in the early seventies as computer programs that mimic the natural evolutionary process. De Jong extended GAs to function optimization, and a detailed mathematical model of a GA was presented by Goldberg in 1989. GAs manipulate a population of individuals in each generation (iteration), where each individual, termed a chromosome, represents one candidate solution to the problem. Within the population, fit individuals survive to reproduce, and their genetic material is recombined to produce new individuals as offspring. The genetic material is modeled by some data structure, most often a finite-length string of attributes. As in nature, selection provides the necessary driving mechanism for better solutions to survive. Each solution is associated with a fitness value that reflects how good it is compared with other solutions in the population. The recombination process is simulated through a crossover mechanism that exchanges portions of data strings between chromosomes. New genetic material is also introduced through mutation, which causes random alterations of the strings. The frequency of occurrence of
these genetic operations is controlled by certain pre-set
probabilities.
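As a minimal, hypothetical sketch of this cycle (not the authors' implementation; the population size, rates and the FIR-style fitness function are illustrative assumptions), the selection-crossover-mutation loop can be written as:

```python
import cmath
import random

def fitness(taps, desired, freqs):
    """Negative mean-squared error between the FIR magnitude response of
    'taps' and a desired magnitude template (higher is better)."""
    err = 0.0
    for f, d in zip(freqs, desired):
        h = sum(t * cmath.exp(-2j * cmath.pi * f * n) for n, t in enumerate(taps))
        err += (abs(h) - d) ** 2
    return -err / len(freqs)

def evolve(desired, freqs, n_taps=8, pop_size=20, generations=10,
           p_cross=0.8, p_mut=0.05, seed=1):
    """Real-coded GA: tournament selection, one-point crossover, Gaussian
    mutation, with the best-so-far individual retained (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_taps)] for _ in range(pop_size)]
    best = max(pop, key=lambda c: fitness(c, desired, freqs))
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a, desired, freqs) >= fitness(b, desired, freqs) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_taps)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            for i in range(n_taps):             # Gaussian mutation
                if rng.random() < p_mut:
                    child[i] += rng.gauss(0.0, 0.1)
            nxt.append(child)
        pop = nxt
        cand = max(pop, key=lambda c: fitness(c, desired, freqs))
        if fitness(cand, desired, freqs) > fitness(best, desired, freqs):
            best = cand
    return best
```

Because the best-so-far individual is never discarded, the recorded best fitness is non-decreasing over generations, which is the behaviour plotted in best-fitness-versus-generation curves.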
Fig.: Basic Genetic Algorithm cycle (Initialize → Evaluate → Select parents → Crossover → Mutation → Termination criterion).

III. SAW FILTER
A surface acoustic wave (SAW) is a type of mechanical wave motion which travels along the surface of a solid material. The wave was discovered in 1885 by Lord Rayleigh and is often named after him. These days such acoustic waves are often used in electronic devices. At first sight it seems odd to use an acoustic wave for an electronic application, but acoustic waves have some particular properties that make them very attractive for specialized purposes. And they are not unfamiliar: many wristwatches have a quartz crystal used for accurate frequency generation, and this is an acoustic resonator, though it uses bulk acoustic waves rather than surface waves. Fig. 2.1 (Basic Surface Acoustic Wave) shows a SAW travelling along the plane surface of a solid material. As the wave passes, each atom of the material traces out an elliptical path, repeating the path for each cycle of the wave motion. The atoms move by smaller amounts as one looks farther into the depth, away from the surface; thus the wave is guided along the surface. In the simplest case (an isotropic material), the atoms move in the so-called sagittal plane, i.e. the plane which includes the surface normal and the propagation direction.

IV. SIMULATION RESULTS
This section presents the simulation framework for the optimization of the SAW filter using the genetic algorithm. Simulation is carried out for certain specifications, such as number of generations = 10. When the number of generations is 10, the bandwidth of the passband lies between 5 and 6 MHz, the ripple amplitude lies between -10 and -30 dB, and the number of ripples is smaller compared with the results obtained from previous methods.

Fig. 1: Best fitness vs. number of generations (N = 10).
Fig. 2: Insertion loss (dB) vs. frequency (MHz).
V. CONCLUSION
This paper has presented the optimization of a SAW filter using a genetic algorithm. We conclude that the frequency response of the band-pass SAW filter is improved compared with the previous methods used for optimizing the frequency response of SAW filters. The improvements in the frequency response of the SAW filter, in terms of bandwidth and ripple amplitude, are obtained with the GA technique.
VI. REFERENCES
[1] R. J. Vaccaro and B. F. Harrison, "Optimal Matrix-Filter Design," IEEE Transactions on Signal Processing, vol. 44, no. 3, pp. 705-710, March 1996.
[2] R. V. Kacelenga, P. V. Graumann, and L. E. Turner, "Design of filters using simulated annealing," Proc. IEEE Int. Symp. on Circuits and Systems (New Orleans, LA), pp. 642-645, 1990.
[3] J. Skaf and S. P. Boyd, "Filter Design with Low Complexity Coefficients," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3162-3170, July 2008.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Pearson Education, Low Price Edition, Delhi, 2005.
[5] T. William and W. C. Miller, "Genetic algorithms for the design of digital filters," Proceedings of IEEE ICSP'04, pp. 9-12, 2004.
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, 4th ed., Pearson Education, Inc., New Delhi, 2007.
[7] C. C. Tseng and S. C. Pei, "Stable IIR Notch Filter design with optimal pole placement," IEEE Transactions on Signal Processing, vol. 49, no. 11, pp. 2673-2681, 2001.
[8] K. D. Abdesselam, "Design of Stable, Causal, Perfect Reconstruction, IIR Uniform DFT Filters," IEEE Transactions on Signal Processing, vol. 48, no. 4, pp. 1110-1117, 2000.
[9] J. E. Cousseau, S. Werner, and P. D. Donate, "Factorized All-Pass Based IIR Adaptive Notch Filters," IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5225-5236, 2007.
Active Integrated Switching Antennas
Rakesh Kumar Yadav¹, Ram Lal¹ and R. P. Yadav²
¹Galgotias College of Engineering & Technology, Greater Noida, Uttar Pradesh
²Rajasthan Technical University, Kota, Rajasthan
er.rakeshyadava@gmail.com
Abstract: This paper provides an overview of the design and development of active integrated switching antennas. Polarization-agile, reconfigurable, RF-MEMS and portable switching antennas are described in detail. In a novel agile microstrip patch antenna the frequency bandwidth is limited to 4% in circular polarization mode and 9% in linear polarization mode, with a maximum gain of 6.2 dBi in the broadside direction. A reconfigurable antenna that combines polarization and pattern diversities exhibits a diversity gain of 12.9 dB with a combination of two patterns (at 45°) and two polarizations for a diagonal orientation of the RAUT. In the case of MEMS, an electronically steerable antenna containing 25,000 MEMS was integrated with the AN/APG-67 radar system, and the MEMS radar beam successfully scanned a 120° azimuth sector under control of the AN/APG-67. Meanwhile, a broadband circular switched parasitic array (SPA) for portable DVB-T receiver applications in the VHF/UHF bands exhibits directional features as well as the strong advantage of beam-switching capability. In the present paper the authors present a case study of the above switchable antennas and hope that the information given will be useful to those involved in this area of research.
Keywords: Active integrated antenna (AIA),
Reconfigurable Antenna under Test (RAUT), MEMS,
Active devices.
I. INTRODUCTION
In general, an Active Integrated Antenna (AIA) is a combination of active device(s) and a microstrip antenna on the same substrate. The concept of using active antennas can be traced back to as early as 1928; however, research on active antennas received much more attention, and several pioneering works were done, in the 1960s and 1970s, because high-frequency transistors had been invented by that time. AIAs also provide circuit functions such as duplexing, resonating and filtering in addition to their basic role as radiating elements [1-3].
In the year 2001, W. Choi proposed an automatic on-off switching repeater that is switched off automatically when there is no active user within its coverage. This has the advantage of preventing the unnecessary noise enhancement induced by repeaters, protecting the reverse-link capacity from harmful effects and improving it. J. Vian and Z. Popovic proposed a high-speed optically controlled T/R active antenna, designed as an element of a 6-by-3 cylindrical active lens array with a focal-distance-to-diameter (F/D) ratio of one, a directivity of 20 dB and a beamwidth of 10° in the focusing plane. The optically controlled switches used to route the signal in the active antenna have 0.31 dB insertion loss, 36 dB isolation and -10 dB return loss from 8.36 to 10.8 GHz, and require significantly less control energy than MEMS switches.
In the year 2009, M. Y. W. Chia et al. proposed a novel low-power CMOS chip to enable beam steering for an RFID interrogator, since the radiation pattern is one of the main drawbacks of RFID. It is suitable for RFID systems operating in the 900 MHz-2.4 GHz bands and can provide multiple delayed signals with 5 ps resolution, which translates to a 2° scanned angle in the 2.4 GHz band. Radiation patterns were measured at 924 MHz using an array of two antennas and at 2.412 GHz using an array of four antennas. S. Pajic and Z. B. Popovic proposed a high-efficiency 10 GHz amplifier antenna array for spatial power combining of switching-mode power amplifiers. They found that the 16 active antenna elements exhibit an output EIRP of 162 W with 70% average drain efficiency at 10.2 GHz. The 4-element sub-arrays and the 16-element arrays have power-combining efficiencies around 80%. In the combiner used here, all PAs saturate simultaneously as the input power is raised, and the drain efficiency across the array remains constant, maintaining the main benefit of switching-mode high-efficiency PAs.
II. POLARIZATION-AGILE
F. Ferrero et al. proposed a novel agile microstrip patch antenna in which the radiating patch is fed by a tunable quasi-lumped coupler (QLC). By changing the operating mode of the QLC, the complete structure can be altered to radiate electromagnetic waves with vertical linear, horizontal linear, right-hand circular or left-hand circular polarization. It has the advantage of compactness and bias simplicity, with no additional capacitors or inductors. The frequency bandwidth is limited to 4% in circular polarization mode and 9% in linear polarization mode, while a maximum gain of 6.2 dBi in the broadside direction is found. Fig. 1 shows the 3D view of the polarization-agile antenna and Fig. 2 shows the radiation pattern at 3.5 GHz.
Fig. 1: 3D view of the polarization-agile antenna.
Fig. 2: Measured radiation pattern of the antenna for Vr = 15 V at 3.5 GHz (uncoupled-line mode).
In the year 2004, S. S. Zhong proposed a novel polarization-agile microstrip patch antenna which can be operated in either linear or circular polarization by electrical control with a special FET phase shifter. He found that for circular polarization a boresight axial ratio of 0.5 dB is achieved, while isolation is better than 30 dB over a 15% bandwidth for the 16-element corner-fed dual-polarization array.
III. RECONFIGURABLE
B. Poussot et al. proposed a reconfigurable antenna with a single input port which can combine polarization and pattern diversities. It is switched by means of eight PIN diodes. They found that under specific propagation conditions the diversity gain is 12.9 dB with a combination of two patterns (at 45°) and two polarizations for a diagonal orientation of the RAUT. The diversity gains for various combinations of branches are given in Table I. The improvement is approximately 3 dB compared to a space-diversity scheme based on two monopoles separated by two wavelengths.
TABLE I
DIVERSITY GAIN FOR VARIOUS COMBINATIONS OF BRANCHES WITH A DIAGONAL ORIENTATION OF THE RAUT

Number of channels                                    | DG at 1%
2: Pattern diversity (at 45°)                         | 8.2 dB
2: Polarization diversity only                        | 9.7 dB
4: Polarization and pattern diversity (at 45°)        | 12.9 dB
6: Polarization and pattern diversity (at 45° and 0°) | 13.4 dB
8: All polarization and pattern diversities           | 13.9 dB
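For context, an idealized closed-form model (our illustration, not the measurement procedure of the cited work) gives the diversity gain of N-branch selection combining over independent, identically distributed Rayleigh-fading branches at a given outage level:

```python
import math

def selection_dg_db(n_branches, outage=0.01):
    """Diversity gain (dB) of ideal N-branch selection combining over a
    single Rayleigh branch at the given outage probability, assuming
    independent, identically distributed branches with unit mean SNR."""
    # SNR threshold x solving (1 - exp(-x))**N = outage for each case
    x_n = -math.log(1.0 - outage ** (1.0 / n_branches))
    x_1 = -math.log(1.0 - outage)
    return 10.0 * math.log10(x_n / x_1)
```

Real antenna branches are correlated and power-unbalanced, which is why measured gains such as those in Table I differ from the i.i.d. values (about 10.2 dB for two ideal branches at 1% outage).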
A. C. K. Mak proposed the design of two reconfigurable multiband antennas. The first uses a switched-feed technique in which, by changing the feed location, the antenna can be operated within the GSM, DCS, PCS and UMTS frequency bands. The second design uses a switched-ground approach in which, by connecting or disconnecting the antennas around a specific location, different operating bands can be created, including the GSM, DCS, PCS, UMTS, Bluetooth and wireless LAN frequency bands. Fig. 3 and Fig. 4 show the efficiency plots for the switched-feed and switched-ground designs. Different active switches, such as PIN diodes, GaAs switches and MEMS, were implemented in the two reconfigurable antennas.
Fig. 3: Efficiency of the switched-feed design using different active devices.
Fig. 4: Efficiency of the switched-ground design using different active devices.
G. K. Mahanti proposed a method based on real-coded genetic algorithms that optimizes the voltage excitation of the elements for the design of a reconfigurable array antenna, with or without the presence of a ground plane behind the array; the results are given in Table II and Table III.
TABLE II
DESIRED AND OBTAINED RESULTS IN ABSENCE OF GROUND PLANE

Design parameters              | Pencil beam         | Flat-top beam
                               | Desired  | Obtained | Desired | Obtained
Side lobe levels (dB)          | -20.0    | -21.49   | -20.0   | -21.06
Ripple (dB, -0.19 ≤ u ≤ 0.19)  | N/A      | N/A      | 0.5     | 0.439
TABLE III
DESIRED AND OBTAINED RESULTS IN PRESENCE OF GROUND PLANE λ/4 BEHIND

Design parameters              | Pencil beam         | Flat-top beam
                               | Desired  | Obtained | Desired | Obtained
Side lobe levels (dB)          | -20.0    | -22.82   | -20.0   | -21.85
Ripple (dB, -0.19 ≤ u ≤ 0.19)  | N/A      | N/A      | 0.5     | 0.610
In this method, the maximum variation of the active impedances of the elements is also minimized when the antenna switches between patterns without changing the geometry of the elements.
IV. RF MEMS
J. J. Maciel et al. proposed a 0.4 m² MEMS electronically steerable antenna (ESA) containing 25,000 MEMS, integrated with the AN/APG-67 radar system. This MEMS demonstration radar system successfully detected both airborne and ground moving targets. The low-cost, lightweight and low-power technology demonstrated can enable electronic steering on weight- and power-constrained platforms. They found that when the large 0.4 m² MEMS ESA was interfaced with an existing transmitter, receiver and display unit, the MEMS radar beam successfully scanned a 120° azimuth sector under control of the AN/APG-67.
L. Petit proposed a model to design a MEMS switched parasitic antenna array at 5.6 GHz, as shown in Fig. 5. The model is a fast and accurate method for the optimization of a switched parasitic array consisting of an active central element and a uniform ring of five parasitic antennas. In this system the beam is switched at 25°. At any given time only one parasitic element is open-circuited, while the rest remain short-circuited. The array exhibits directional features as well as the strong advantage of beam-switching capability.
Fig. 5: (a) Photograph of the passive prototype with metal connections for load selection (active slot: 2.2 × 1.7 mm, parasitic slots: 1.6 × 14.7 mm, d = 8 mm, d/λ = 0.15) and (b) simulated (solid) and measured (dashed) radiation patterns at 5.6 GHz of the passive prototype.
C. W. Jung proposed a reconfigurable rectangular spiral antenna with a set of MEMS switches which are monolithically integrated and packaged on the same substrate. On activating these switches, the overall spiral arm length is changed, and consequently its radiation beam direction is also changed. The two antennas radiate right-hand and left-hand circular polarization for the PCB and quartz substrates respectively. The gain of the two antennas varies between 3-6 dBi, while the axial ratio is 3 dB at their operating frequency band, i.e. 10 GHz.
V. PORTABLE
S. C. Panagiotou et al. proposed a broadband circular switched parasitic array (SPA) for portable DVB-T receiver applications in the VHF/UHF bands. It can also be incorporated in an indoor portable DVB-T receiver; ... the remaining two are short-circuited. By appropriately selecting the active and short-circuited elements, a set of four radiation patterns can be formed, covering the horizontal plane alternately.
Furthermore, a comparison between the simulated and measured reflection coefficients is plotted in Fig. 6.
Fig. 6: Simulation and measurement results for S11.
J. T. Aberle et al. proposed a reconfigurable antenna that can tune to different frequency bands with adequate efficiency and bandwidth. They found that such tunable antenna technology is an enabler for software-defined radios, whose RF front ends must be reprogrammable on the fly. It is also possible to cover a large range, say 800 MHz to 2 GHz, with a single T/R antenna pair. A good tuning circuit can be achieved by implementing a switched bank of high-Q capacitors with very low loss, which is an application of MEMS. The efficiency and measured S11 for the reconfigurable antenna are shown in Fig. 4 and Fig. 9, respectively [17].
REFERENCES
[1] H. A. Wheeler, "Small antennas," IEEE Trans. Antennas Propagat., vol. AP-23, pp. 462-469, 1975.
[2] B. Grob, Basic Electronics, 6th ed., New York: McGraw-Hill, Ch. 8, 1959.
[3] J. Lin and T. Itoh, "Active integrated antennas," IEEE Trans. Microwave Theory Tech., vol. 42, pp. 2186-2194, 1994.
[4] W. Choi, B. Y. Cho, and T. W. Ban, "Automatic On-Off Switching Repeater for DS/CDMA Reverse Link Capacity Improvement," IEEE Communications Letters, vol. 5, no. 4, pp. 138-141, 2001.
[5] J. Vian and Z. Popovic, "A Transmit/Receive Active Antenna with Fast Low-Power Optical Switching," IEEE Trans. Microwave Theory Tech., vol. 48, no. 12, pp. 2686-2691, 2000.
[6] M. Y.-W. Chia et al., "Electronic Beam-Steering IC for Multimode and Multiband RFID," IEEE Trans. Microwave Theory Tech., vol. 57, no. 5, pp. 1310-1319, 2009.
[7] S. Pajic and Z. B. Popovic, "An Efficient X-Band 16-Element Spatial Combiner of Switched-Mode Power Amplifiers," IEEE Trans. Microwave Theory Tech., vol. 51, no. 7, pp. 1863-1870, 2003.
[8] F. Ferrero et al., "A Novel Quad-Polarization Agile Patch Antenna," IEEE Trans. Antennas Propagat., vol. 57, no. 5, pp. 1562-1566, 2009.
[9] S.-S. Zhong, X.-X. Yang and S.-C. Gao, "Polarization-Agile Microstrip Antenna Array Using a Single Phase-Shift Circuit," IEEE Trans. Antennas Propagat., vol. 52, no. 1, pp. 84-87, 2004.
[10] B. Poussot et al., "Diversity Measurements of a Reconfigurable Antenna With Switched Polarizations and Patterns," IEEE Trans. Antennas Propagat., vol. 56, no. 1, pp. 31-38, 2008.
[11] A. C. K. Mak et al., "Reconfigurable Multiband Antenna Designs for Wireless Communication Devices," IEEE Trans. Antennas Propagat., vol. 55, no. 7, pp. 1919-1928, 2007.
[12] G. K. Mahanti et al., "Design of Reconfigurable Array Antennas With Minimum Variation of Active Impedances," IEEE Trans. Antennas Propagat., vol. 55, pp. 541-544, 2006.
[13] J. J. Maciel et al., "MEMS Electronically Steerable Antennas for Fire Control Radars," IEEE A&E Systems Magazine, Nov. 2007.
[14] L. Petit, L. Dussopt and J.-M. Laheurte, "MEMS-Switched Parasitic-Antenna Array for Radiation Pattern Diversity," IEEE Trans. Antennas Propagat., vol. 54, no. 9, pp. 2624-2631, 2006.
[15] C. W. Jung, M.-J. Lee, G. P. Li, and F. De Flaviis, "Reconfigurable Scan-Beam Single-Arm Spiral Antenna Integrated With RF-MEMS Switches," IEEE Trans. Antennas Propagat., vol. 54, no. 2, pp. 455-463, 2006.
[16] S. C. Panagiotou et al., "A Broadband, Vertically Polarized, Circular Switched Parasitic Array for Indoor Portable DVB-T Applications at the IV UHF Band," IEEE Trans. on Broadcasting, vol. 53, no. 2, pp. 547-552, 2007.
[17] J. T. Aberle et al., "Reconfigurable Antennas for Portable Wireless Devices," IEEE Antennas and Propagation Magazine, vol. 45, no. 6, Dec. 2003.
8x8 Patch Array design for wind profiling radars in C Band
Bharat Bhushan Verma¹, Sumanta Kumar Kundu²
¹Student, Electronics & Comm. Engg., 4th Year
²Assistant Professor, Electronics & Comm. Engg.
Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi-110063, India
¹bharatverma81@gmail.co, ²sunny19662002@yahoomail.co.in
Abstract: This paper presents a rectangular-patch microstrip array antenna operating in C-band (4.2 GHz-8 GHz). The proposed rectangular patch array antenna is a lightweight, flexible, slim and compact unit compared with current antennas used in C-band. The paper also presents the detailed steps of designing the rectangular-patch microstrip array antenna. The Integral Equation in 3 Dimensions (IE3D) software is used to compute the gain, return loss, radiation pattern, antenna efficiency and radiation efficiency of the antenna. The proposed rectangular-patch microstrip array antenna is basically a phased array consisting of N elements (rectangular patch antennas) arranged in a rectangular grid. The size of each element is determined by the operating frequency. The incident wave from a satellite arrives at the plane of the antenna with equal phase across the surface of the array, and each of the N elements receives a small amount of power in phase with the others. A feed network connects each element through microstrip lines of equal length, so the signals reaching the rectangular patches are all combined.
Keywords: High-gain, broadband, microstrip antenna.
I. INTRODUCTION
In high-performance aircraft, spacecraft, satellites, missiles and other aerospace applications where size, weight, performance, ease of installation and aerodynamic profile are constraints, a low-profile or flat/conformal antenna may be required [1]. In recent years various types of flat-profile printed antennas have been developed, such as the microstrip antenna (MSA), stripline, slot antenna, cavity-backed printed antenna and printed dipole antenna. When the characteristics of these antenna types are compared, the microstrip antenna is found to be the most advantageous [2, 3].
Microstrip antennas are conformable to planar or non-planar surfaces, simple and inexpensive to manufacture, cost-effective, compatible with MMIC designs and, when a particular patch shape and excitation mode are selected, very versatile in terms of resonant frequency, polarization, radiation pattern and impedance [5, 6].
In this work, the design of linearly polarized, quarter-wave-transformer-fed microstrip rectangular patch antenna arrays is presented [4]. Microstrip antennas have several advantages compared to conventional microwave antennas and therefore have many applications over the broad frequency range from 100 MHz to 50 GHz. This paper reports the gain, S11, radiation efficiency, antenna efficiency and bandwidth of an 8 × 8 array. We also present the details of the proposed antenna design.
II. RADIATION MECHANISM
Microstrip antennas are essentially suitably shaped
discontinuities that are designed to radiate. The
discontinuities represent abrupt changes in the microstrip
line geometry. Discontinuities alter the electric and magnetic field distributions, resulting in energy storage and sometimes radiation at the discontinuity [9].
As long as the physical dimensions and relative dielectric
constant of the line remains constant, virtually no
radiation occurs.
However the discontinuity introduced by the rapid
change in line width at the junction between the feed line
and patch radiates. The other end of the patch where the
metallization abruptly ends also radiates. When the field on a microstrip line encounters an abrupt change in width at the input to the patch, the electric fields spread out [1].
III. MICROSTRIP LINES
A microstrip line consists of a single ground plane and a thin strip conductor on a low-loss dielectric substrate above the ground plane [11]. Because there is no top ground plane and no dielectric substrate above the strip, the electric field lines remain partially in the air and partially in the lower dielectric substrate. This makes the mode of propagation not pure TEM but what is called quasi-TEM [13]. Due to the open structure and the presence of any discontinuity, the microstrip line radiates electromagnetic energy. The use of thin and high-permittivity dielectric materials reduces the radiation loss of the open structure, since the fields are then mostly confined inside the dielectric [15].
A uniform dielectric can support a single well-defined mode of propagation, at least over a specific range of frequencies (TEM for coaxial lines, TE or TM for waveguides). Transmission lines which do not have such a uniform dielectric filling cannot support a single mode of propagation; microstrip falls in this category [15]. Here the bulk of the energy is transmitted along the microstrip with a field distribution which quite closely resembles TEM and is usually referred to as quasi-TEM.
The microstrip design consists of finding the values of width (w) and length (l) corresponding to the characteristic impedance (Z_0) defined at the design stage of the network. A substrate of permittivity (ε_r) and thickness (h) is chosen. The effective microstrip permittivity (ε_eff) is unique to a fixed dielectric
transmission-line system and provides a useful link between the various wavelengths, impedances and velocities. The microstrip will, in general, have a finite strip thickness t, which influences the field distribution for moderate-power applications. The thickness of the conducting strip is quite significant when considering conductor losses [15]. For microstrip with t/h ≤ 0.005, 2 ≤ ε_r ≤ 10 and w/h ≥ 0.1, the effects of the thickness are negligible, but at smaller values of w/h or greater values of t/h its significance increases.
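The quasi-TEM description above can be made concrete with the standard closed-form synthesis formulas (Hammerstad's quasi-static approximations for a thin strip; a textbook sketch rather than anything taken from this paper, with w and h in the same units):

```python
import math

def microstrip_z0(w, h, er):
    """Characteristic impedance (ohm) and effective permittivity of a
    microstrip line of width w on a substrate of height h and relative
    permittivity er (quasi-static formulas, zero strip thickness)."""
    u = w / h
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 / u) ** -0.5
    if u < 1:
        # Narrow-strip correction and narrow-strip impedance formula
        e_eff += (er - 1) / 2 * 0.04 * (1 - u) ** 2
        z0 = 60 / math.sqrt(e_eff) * math.log(8 / u + u / 4)
    else:
        z0 = (120 * math.pi /
              (math.sqrt(e_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444))))
    return z0, e_eff
```

On FR4 with ε_r = 4.8 and h = 1.6 mm, a strip with w/h = 2 comes out near 47 Ω, and widening the strip lowers the impedance, as expected.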
A. Design Principles
The designed antenna is an 8×8 array. The first step in the design is to specify the dimensions of a single microstrip patch antenna. The patch conductor can in principle take any shape, but simple geometries are generally used, since this simplifies the analysis and performance prediction. Here, the half-wavelength rectangular patch element is chosen as the array element, as is common in microstrip antennas [14, 15]. Its characteristic parameters are the length L and the width W, as shown in Figure 1.
Fig. 1: Simple patch geometry
To meet the initial design requirements (operating frequency = 5 GHz) various approximate analytical approaches may be used. Here, an inter-element distance of 0.7λ is used for this design. The inter-element distance can be varied between 0.5λ and 1.2λ to avoid grating lobes and mutual coupling [7, 8].
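As an illustrative check of the spacing choice (isotropic elements, uniform excitation; our sketch, not the IE3D model), the normalized array factor of one uniform linear cut of the array shows how spacing beyond one wavelength lets a grating lobe enter visible space:

```python
import math

def array_factor_db(n, d_over_lambda, theta_deg, scan_deg=0.0):
    """Normalized array factor (dB) of an N-element uniform linear array
    with element spacing d (in wavelengths), observed at theta degrees
    from broadside, with the main beam steered to scan_deg."""
    k_d = 2 * math.pi * d_over_lambda
    psi = k_d * (math.sin(math.radians(theta_deg)) -
                 math.sin(math.radians(scan_deg)))
    # |sum of N phasors exp(j*m*psi)| via the closed form sin(N psi/2)/sin(psi/2)
    if abs(math.sin(psi / 2)) < 1e-12:
        mag = float(n)  # limit at the main beam (or a grating lobe)
    else:
        mag = abs(math.sin(n * psi / 2) / math.sin(psi / 2))
    return 20 * math.log10(mag / n)
```

For d = 0.7λ the sidelobes stay well below the 0 dB main beam, while for d = 1.2λ the array factor returns to 0 dB at sin θ = λ/d, i.e. a grating lobe appears in the visible region.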
Calculating the length of the patch:
    L = c / (2 f_r √ε_reff) - 2ΔL
Calculating the effective relative permittivity:
    ε_reff = (ε_r + 1)/2 + ((ε_r - 1)/2) · (1 + 12 h/W)^(-1/2)
Here ε_reff, f_r, ΔL and c are the effective relative permittivity, the operating frequency, the fringe factor and the speed of light, respectively.
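The design equations above can be collected into a short helper (the standard transmission-line-model formulas found in the cited textbooks; the function name and the fringing-length expression used for ΔL are our assumptions, not code from the paper):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def patch_dimensions(f_r, er, h):
    """Rectangular patch width W and length L (metres) resonating at
    f_r (Hz) on a substrate of relative permittivity er and height h (m),
    using the transmission-line-model design equations."""
    w = C / (2 * f_r) * math.sqrt(2 / (er + 1))
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # Fringing-field length extension at each radiating edge
    dl = 0.412 * h * ((e_eff + 0.3) * (w / h + 0.264) /
                      ((e_eff - 0.258) * (w / h + 0.8)))
    l = C / (2 * f_r * math.sqrt(e_eff)) - 2 * dl
    return w, l, e_eff
```

For f_r = 5 GHz on FR4 (ε_r = 4.8, h = 1.6 mm) this gives roughly W ≈ 17.6 mm and L ≈ 13.2 mm, i.e. a patch somewhat shorter than a half wavelength in the dielectric, as the text describes.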
B. Impedance Matching Technique Used
Here we have used a quarter-wave transformer for matching the impedance between the patch and the microstrip line [1]. In a quarter-wave transformer, the antenna input impedance and a microstrip line of different impedance can be matched with a section of transmission line that is a quarter of a wavelength long, based on the wavelength in the transmission line. The characteristic impedance of the matching section is given by

    Z_T = √(Z_0 · Z_A)

where Z_0 is the feed-line impedance and Z_A is the antenna input impedance.

Fig. 2: An example of 2×1 and 4×4 patch arrays
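As a numerical companion (standard transmission-line relations, not values from this paper; the helper names are ours):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_z(z_line, z_antenna):
    """Characteristic impedance (ohm) of a quarter-wave matching section
    between a real feed-line impedance and a real antenna input impedance."""
    return math.sqrt(z_line * z_antenna)

def quarter_wave_length(f_r, e_eff):
    """Physical length (m) of the quarter-wave section: a quarter of the
    guided wavelength at f_r in a medium of effective permittivity e_eff."""
    return C / (4.0 * f_r * math.sqrt(e_eff))
```

For example, matching a 50 Ω line to a 200 Ω patch edge would call for a 100 Ω section, a few millimetres long at 5 GHz on a substrate with ε_eff near 4.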
To meet the initial design requirements (operating frequency f_r = 5 GHz and beamwidth = 100°) various approximate analytical approaches may be used; here the calculations are based on the transmission-line model [10, 12]. Although not critical, the width W of the radiating edge is specified first. The square-patch geometry is chosen since it can be arranged to produce circularly polarized waves. In practice, the length L is slightly less than a half wavelength in the dielectric. The length may also be specified by calculating the half-wavelength value and then subtracting a small length to take into account the fringing fields [15, 16], as L = λ_d/2 - 2ΔL.
Calculating the width of the patch:
    W = (c / (2 f_r)) · √(2 / (ε_r + 1))
Fig 3: Matching Network of a single patch
Fig 4: Design of 8x8 patch array
IV. RESULTS
A. Result For Single Patch Antenna
Fig 5: Return Loss of the single patch
Fig 6: VSWR Vs Freq of the single Patch Antenna
Fig 7: Smith chart of Single Patch Antenna
Fig 8: 3-Dimensional Radiation Pattern of the single
Patch Antenna
Fig 9 : Total E-field of 2 D Polar Plot of the Single
Patch Antenna
Fig 10: Antenna Efficiency, Radiation Efficiency Vs
Frequency for single patch antenna
The narrowband response of the single element is shown in Figure 5. After simulation using the IE3D software package [17], the antenna element exhibits a 130.33 MHz bandwidth (S11 < -9.54 dB).
B. Result of Proposed Antenna
The design of the narrowband 64-element array antenna is shown in Figure 4. After simulation using the IE3D software package [17], the array exhibits a 120.88 MHz bandwidth (VSWR < 2) and 17.5241 dBi gain, as shown in Figs. 11, 12 and 13. The 8 × 8 element array shown in Fig. 4 was designed using the single element as a basic building block. The final array was built by etching on a metalized dielectric substrate (FR4; h = 1.6 mm, ε_r = 4.8 and tan δ = 0.0148).
Fig 11 : Return Loss of the proposed Antenna
Fig 12: VSWR Vs Freq of the Proposed Design
Fig 13: Smith Chart of the proposed Antenna
Fig 14 : 3-Dimensional Radiation Pattern of the
proposed Antenna
Fig. 11 depicts the return loss (S11) of the 8 × 8 element array, measured at 4.68 GHz. The antenna exhibits a 120.88 MHz bandwidth, which results from the increase in the number of elements. Fig. 13 shows that the maximum gain reaches 17.524 dBi at 4.686 GHz. The radiation patterns have also been computed: Fig. 14 shows the E-plane and H-plane patterns at 4.686 GHz.
Fig 15 : Total E-field of 2 D Polar Plot of the
Proposed Antenna
Fig 16 :Antenna Efficiency, Radiation Efficiency Vs
Frequency of proposed Antenna
V. CONCLUSION
An 8 × 8 element array has been realized for wind-profiling radars. A single-layer narrowband antenna element was designed first, and the 8 × 8 element array was based on it. Gain, bandwidth and radiation patterns have been computed at the design frequency of 5 GHz, but a frequency shift of around 0.32 GHz has been observed. From the data analysis it has been pointed out that the side-lobe level is the most critical factor and thus determines the operating bandwidth. Considering the impedance, gain and side lobes at 5 GHz, a 120.88 MHz bandwidth has been obtained.
REFERENCES
[1] JAMES J.R., and HALL P.S., Handbook of Microstrip
Antennas Peter Peregrinus Ltd., London, UK, 1989.
[2] POZAR D.M., and SCHAUBERT D.H., Microstrip Antennas,
the Analysis and Design of Microstrip Antennas and Arrays ,
IEEE Press, New York, USA, 1995.
[3] K.R.CARVER and J.W.MINK, Microstrip Antenna Technology ,
IEEE Trans. Antenna Propag., Vol.AP-29, pp.2-24, Jan 1981.
[4] I.J.BAHL and P.BHARTIA, Microstrip Antennas , Dedham,
MA: Artech House, 1980.
[5] J.R.JAMES, P.S.HALL and C.WOOD, Microstrip Antenna
Theory and Design , London, UK,: Peter Peregrinus, 1981.
[6] M. Amman, Design of Microstrip Patch Antenna for the 2.4 Ghz
Band, Applied Microwave and Wireless, pp. 24-34, November
/December 1997.
[7] K. L. Wong, Design of Nonplanar Microstrip Antennas and
Transmission Lines, John Wiley & Sons, New York, 1999.
[8] W. L. Stutzman , G. A. Thiele, Antenna Theory and Design , John
Wiley &Sons,2nd Edition ,New York, 1998.
[9] M. Amman, Design of Rectangular Microstrip Patch Antennas
for the 2.4GHz Band , Applied Microwave & Wireless, PP. 24-34,
November /December1997.
[10] K. L. Wong, Compact and Broadband Microstrip Antennas, Wiley,
New York, 2002.
[11] G.S. Row, S. H. Yeh, and K. L. Wong, Compact Dual Polarized
Microstrip antennas , Microwave & Optical Technology Letters,
27(4), pp. 284-287,November 2000.
[12] M. Özyalçın, Modeling and Simulation of Electromagnetic
Problems via Transmission Line Matrix Method, Ph.D.
Dissertation, Istanbul Technical University, Institute of Science,
October 2002.
[13] W.L. Stutzman, G.A. Thiele, Antenna Theory and design, John
Wiley & Sons,2nd Ed., New York, 1998.
[14] A. Derneryd, Linearly Polarized Microstrip Antennas , IEEE
Trans. Antennas and Propagation, AP-24, pp. 846-851, 1976.
[15] M. Schneider, Microstrip Lines for Microwave Integrated
Circuits , Bell Syst. Tech. J., 48, pp. 1421-1444, 1969.
[16] E. Hammerstad, F.A. bekkadal, Microstrip handbook, ELAB
report, STF 44A74169, University of Trondheim, Norway, 1975.
[17] IE3D Software Release 11.0 (Zeland Software Inc., Fremont,
California, USA).
[18] J. Breckling, Ed., The Analysis of Directional Time Series:
Applications to Wind Speed and Direction, ser. Lecture Notes in
Statistics. Berlin, Germany: Springer, 1989, vol. 61.
[19] S. Zhang, C. Zhu, J. K. O. Sin, and P. K. T. Mok, A novel
ultrathin elevated channel low-temperature poly-Si TFT, IEEE
Electron Device Lett., vol. 20, pp. 569-571, Nov. 1999.
[20] M. Wegmuller, J. P. von der Weid, P. Oberson, and N. Gisin,
High resolution fiber distributed measurements with coherent
OFDR, in Proc. ECOC00, 2000, paper 11.3.4, p. 109.
[21] R. E. Sorace, V. S. Reinhardt, and S. A. Vaughn, High-speed
digital-to-RF converter, U.S. Patent 5 668 842, Sept. 16, 1997.
[22] M. Shell. (2002) IEEEtran homepage on CTAN. [Online].
Available:
http://www.ctan.org/texarchive/macros/latex/contrib/supported/IEEEtran/
[23] PDCA12-70 data sheet, Opto Speed SA, Mezzovico,
Switzerland.
[24] A. Karnik, Performance of TCP congestion control with rate
feedback: TCP/ABR and rate adaptive TCP/IP, M. Eng. thesis,
Indian Institute of Science, Bangalore, India, Jan. 1999.
[25] J. Padhye, V. Firoiu, and D. Towsley, A stochastic model of TCP
Reno congestion avoidance and control, Univ. of Massachusetts,
Amherst, MA, CMPSCI Tech. Rep. 99-02, 1999.
[26] Wireless LANMedium Access Control (MAC) and Physical Layer
(PHY) Specification, IEEE Std. 802.11, 1997.
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
MICROSTRIP REFLECTARRAY ANTENNA FOR DIRECT TO HOME
(DTH): STUDY AND DESIGN
NITIN KUMAR 1, AJAY SURI 2
1 E-mail: gaurav.nitin@gmail.com
2 E-mail: suriajay21@gmail.com
Abstract- Today, most satellite TV customers in developed
television markets get their programming through a direct
broadcast satellite (DBS) provider, such as a DISH TV or
DTH platform. The provider selects programs and broadcasts
them to subscribers as a set package. Basically, the provider's
goal is to bring dozens or even hundreds of channels to the
customer's television in a form that competes with Cable TV.
Unlike earlier programming, the provider's broadcast is
completely digital, which means it has high picture and stereo
sound quality. Early satellite television was broadcast in
C-band: radio in the 3.4-gigahertz (GHz) to 7-GHz frequency
range. Digital broadcast satellite transmits programming in
the Ku frequency range (10 GHz to 14 GHz).
Keywords: direct broadcast satellite, DTH, Cable TV, C-band,
Ku-band.
1. INTRODUCTION- Although satellite DTH television
delivery was the dream of futurists for decades, little
technological progress was made before 1980. DTH service
in the United States began, serendipitously, in 1979, when
the FCC declared that receive-only terminal licensing was
no longer mandatory and individuals started installing
dishes, initially with a diameter >4 m, to receive signals
intended for distribution to cable head-ends. From roughly
1985 to 1995, millions of 2-3 m dishes were purchased by
individuals to receive these analog cable feeds. Although
the dish installations could cost several thousand dollars,
the feeds were initially available without a monthly charge.
The major challenges of all system designs have been the
need to generate, within project cost constraints, sufficient
satellite power levels into a practical dish size, and the need
for reception electronics requirements consistent with
consumer electronics price expectations [1].
Several planar antenna arrays have been proposed for direct
to home (DTH) systems [2-12]. Each has its own merits and
drawbacks; some, for example, suffer from low gain and/or
low radiation efficiency.
For a DTH system, the antenna should cover the 10 GHz to
14 GHz band. The frequency range for this design is 10.7 to
12.75 GHz, with a center frequency of 11.725 GHz. The gain
of this type of antenna should be about 36 dB and the HPBW
should be greater than 5.2 degrees. The antenna should have
a good input impedance match with the transmission line so
that it radiates maximum power, with VSWR less than 2 over
the bandwidth.
2. ANTENNA DESIGN AND DISCUSSION
We have designed the antenna using the IE3D v.12
simulator. The final dimensions of the proposed reflectarray
microstrip antenna with microstrip line feed are given in the
table below.
S. No.  Parameter                        Value
1       Length of the radiator patch     5.22 mm
2       Width of the radiator patch      7.65 mm
3       Length of the feed strip         11.22 mm
4       Width of the feeding line        0.972 mm
5       Relative dielectric constant     4.47
6       Thickness of substrate           1.6 mm
7       Total no. of patch elements      1024
The stub length of each patch is different, depending on the
phase-compensation equation, in which φ is the phase delay,
εeff is the effective dielectric constant and ΔL is the fringing
length. The stub lengths are given in the table.
8.88 8.57 7.95 7.03 5.82 5.08 2.55 0.51 1.76 4.2 7.01 9.9 13.1 16.4 19.5 23.6
8.57 1.13 15.8 298 0.12 1.95 20.8 19.8 19.7 13.5 18.8 9.0 13.3 6.1 23.9 6.9
7.95 15.8 12.7 8.05 9.4 18.2 3.5 3.7 4.0 3.9 1.6 3.1 2.4 1.5 0.5 0.7
7.03 2.98 8.05 16 3.57 10 4 0.6 5.4 10 15.7 11 14 5 6.1 5.06
5.82 1.9 18 10 8.9 8.4 8.4 2.7 9.1 9.8 11.5 11.8 13.1 17.9 12.8 8.7
5.08 0.12 9.4 3.5 12.2 8.9 2.7 5.7 10.6 2.35 3.8 7.3 10 0.8 1.9 2.4
2.55 20.8 3.5 4.01 8.45 10.6 2.3 5.4 8.72 0.95 1.9 2.7 3.15 10.6 4.17 5.35
0.51 24.5 3.7 0.69 8.45 2.3 5.43 10.7 2.33 3.23 5.25 10.2 8.75 2.3 9.7 0.47
1.76 29 4.0 5.4 2.77 3.8 10 2.33 3.6 7.75 10.3 1.7 2.9 3.8 2.6 2.87
4.2 13.5 3.9 10 9.1 7.3 2.3 3.23 8.9 0.73 2.92 3.51 3.8 7.35 4.75 6.37
7.01 18.6 1.6 15 9.8 10.3 3.9 5.75 0.73 2.57 3.51 5.77 9.4 10.3 10.76 11
9.9 9.0 3.1 11.9 11.5 0.8 8.7 10 2.9 3.51 5.93 8.35 10.7 0.8 2.13 1.59
13.1 13 2.4 14 11.8 1.9 0.95 1.7 3.5 5.94 8.35 0.57 1.34 1.95 2.15 2.93
16.4 6.1 1.5 5.6 13.1 2.1 1.9 2.9 3.8 9.47 10.7 1.38 2.07 2.47 2.79 3.38
19.5 23 0.5 6.12 17.9 2.47 2.7 2.6 4.7 10.7 2.13 2.15 2.79 2.5 3.89 3.72
23.6 6.9 0.7 5.06 8.7 2.5 3.15 2.8 6.3 11 1.59 2.93 3.38 3.87 3.72 1.87
Table: Calculated stub length for the element to compensate phase delay
The table shown above, of 16x16 elements, covers the
first quadrant of the XY plane; the full 32x32 matrix is
obtained from similar tables for the remaining three
quadrants of the XY plane.
Figure 1: Return loss with frequency
Figure 2: VSWR with frequency
Figure 1 shows the return loss of the reflectarray, which is
approximately -28 dB; this is acceptable for DTH.
Figure 2 shows the VSWR of the antenna, which is less
than 2 over the given bandwidth and is minimum at the
centre frequency.
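The two plots are consistent with each other: a return loss of about -28 dB (read from Figure 1) implies a VSWR well under 2. A small sketch of the standard conversion:

```python
def s11_db_to_vswr(s11_db):
    """Convert a return loss S11 (in dB, negative) to VSWR."""
    gamma = 10 ** (s11_db / 20)       # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)

vswr_at_center = s11_db_to_vswr(-28.0)   # approx. value read from Figure 1
```

The VSWR = 2 limit quoted in the design goal corresponds to a return loss of about -9.5 dB, so -28 dB leaves a comfortable margin.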
Fig 3: Efficiency of antenna with frequency
Figure 4: Antenna gain in E-plane
The antenna efficiency at the resonance frequency is 90%
and is always more than 65% over the 10 to 14 GHz band,
as shown in figure 3.
The gain of a single element is about 5 dBi, but the overall
requirement is 35 dBi, so a 32x32 array is used for this
purpose; the gain graph for the reflectarray is shown in
figure 4.
All the above results, such as the impedance bandwidth,
antenna efficiency of more than 65% and antenna gain up
to 35 dBi, are very satisfactory for a microstrip
reflectarray antenna for DTH application.
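The jump from about 5 dBi per element to the 35 dBi requirement is consistent with the idealized rule that a lossless, uniformly excited array of N elements adds 10·log10(N) to the element gain. A quick check using the figures quoted in the text (the lossless-aperture assumption is ours):

```python
import math

g_elem_dbi = 5.0          # single-element gain quoted in the text
n_elems = 32 * 32         # 32x32 reflectarray, 1024 elements

# Idealized (lossless, uniform-excitation) array-gain estimate.
g_array_dbi = g_elem_dbi + 10 * math.log10(n_elems)
```

This gives roughly 35 dBi, matching the stated requirement; real losses and taper would reduce it somewhat.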
REFERENCES:
(1) Yogesh B. Karandikar and T. L. Venkatsubramani, Internal
technical report on DTH GMRT co-existence: an RFI survey for
direct to home systems.
(2) T. F. Lai, Wan Nor Liza Mahadi and Norhayati Soin,
Circular patch micro-strip array antenna for Ku-band, World
Academy of Science, Engineering and Technology, 48, 2008.
(3) Jose A. Encinar, Design of two-layer printed reflectarrays
using patches of variable size, IEEE Transactions on Antennas
and Propagation, vol. 49, no. 10, October 2001.
(4) Feng-Chi E. Tsai and Marek E. Bialkowski, Designing a
161-element Ku-band micro-strip reflectarray of variable size
using an equivalent unit cell waveguide approach, IEEE
Transactions on Antennas and Propagation, vol. 51, no. 10,
October 2003.
(5) Eva Schwenzfeier, Broadband proximity-coupled and
dual-polarised micro-strip antenna for DTH reception, IEEE
Transactions on Antennas and Propagation, December 1999.
(6) Jason Stockmann and Richard Hodges, The use of waveguide
simulators to measure the resonant frequency of Ku-band
micro-strip arrays, IEEE Transactions on Antennas and
Propagation, October 2005.
(7) Hervey Legay, Beatrice Pinte, Etienne Girard, Raphael
Gillard, Michel Charrier and Afshin Ziaei, Low loss steerable
reflectarray antenna for space applications.
(8) Feng-Chi E. Tsai and Marek E. Bialkowski, A unit cell
waveguide approach to designing multi-layer reflectarrays of
variable size patches.
(9) P. De Vita, A. Freni, G. Dassano, P. Pirinoli and R. E. Zich,
Broadband printed reflectarray antenna.
(10) E. A. Soliman, A. M. Affandi and K. H. Badr, Planar
micro-strip antenna element and 2-by-2 sub-array for satellite
TV receivers, IEEE Transactions on Antennas and Propagation,
pp. 47-53, 2008.
(11) Adel Bedair Abdel-Mooty Abdel-Rahman, Design and
development of high gain wideband micro-strip antenna,
international conference at Czech Technical University, 2005.
(12) J. Huang, Analysis of a micro-strip reflectarray antenna for
microspacecraft applications, TDA Progress Report 42-120,
February 15, 1995.
Investigation of Hexagonal Patch Antenna with
Superstrate Loading
Ravindra Kumar Yadav, Jugul Kishor, Sweta Agarwal
Department of Electronics and Communication Engineering, I.T.S.
Engineering College, Greater Noida, Uttar Pradesh, India
ravipusad@gmail.com, jugulkishor@gmail.com, sweta.bsr@gmail.com
Abstract- In this paper we propose a superstrate-loaded
hexagonal patch antenna with varying thickness of the dielectric
cover. The antenna is fed by a coaxial cable. Antenna
characteristics such as return loss, gain and impedance bandwidth
are analyzed. The proposed antenna is designed to operate in the
ISM band (2.4-2.4835 GHz). Without the superstrate it achieves
an impedance bandwidth of about 2.9% at a center frequency of
2.45 GHz, which is an important design parameter. It is observed
that varying the thickness of the dielectric cover (plexiglass)
shifts the centre frequency.
Index Terms- Hexagonal patch, Microstrip patch, Superstrate
loading
I. INTRODUCTION
Microstrip antennas are popular for wireless applications
because of their many features, such as light weight,
low-profile planar structure, low fabrication cost, easy
integration and flush mounting. A vast number of papers are
available in the literature investigating various aspects of
microstrip antennas. A hexagonal microstrip antenna has a
smaller size compared to square and circular microstrip
antennas for a given frequency. Small size is an important
requirement for portable communication equipment, such as
global positioning satellite (GPS) receivers.
Environmental effects (such as snow, raindrops, etc.)
deteriorate the performance of the antenna, particularly the
resonance frequency and reflection coefficient. For this
reason, superstrate (cover) dielectric layers are often used to
protect printed circuit antennas (PCAs) from external
hazards; covers may also be naturally formed (e.g. ice layers)
during flight or severe weather conditions. Whether a cover
layer is naturally formed or imposed by design, it may
adversely affect the antenna performance characteristics,
such as gain, radiation resistance and efficiency. Superstrate
loading of an antenna may alter the resonant frequency,
causing detuning which may seriously degrade system
performance. For this reason, it is important to analyze
superstrate loading effects from a fundamental point of view,
so that the performance may be better understood or a
proper choice of cover parameters may be implemented.
There are various shapes of patch antennas whose
characteristics have been analyzed under dielectric covers,
such as rectangular/square, triangular, equilateral-triangular,
elliptical and circular patch antennas. Therefore, here we
present progressive studies of dielectric loading on such
antennas and describe the loading effects on the various
parameters: resonance frequency, radiation, impedance
bandwidth, beam-width, etc.
A regular hexagon has an area almost equal to that of a
circular disk of comparable dimension. Experimental results
for the gain and radiation pattern are presented. In this paper,
a commercial simulator was employed to study the key
design parameters of this coaxially fed hexagonal patch
antenna fabricated on a thick substrate.
II. HEXAGONAL PATCH ANTENNA
A. Design
Fig.1 Structure of the hexagonal patch antenna
The basic configuration of the proposed antenna is illustrated
in fig. 1: a hexagonal patch etched on an RT-duroid substrate
with dielectric constant 2.33 and thickness h of 1.575 mm,
fed with a coaxial probe.
A regular hexagon shape is an important variation of the
equilateral triangle element. Comparing the areas of a regular
hexagon and a circular disk of radius a gives [1]

πa² = (3√3/2)S², i.e. a ≈ 0.9094 S

where S = side of the regular hexagon.
The resonant frequency for the lowest-order mode is
approximately calculated as

f_r = kc/(2π√εr)

where k = 1.8412/a is the propagation constant of the
dominant mode.
The calculated value of each side of the hexagon is 28.52 mm.
This is an approximate value with 10% error.
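The two relations above can be checked numerically. The sketch below is a no-fringing estimate, assuming the ideal TM11 circular-disk model (ka = 1.8412) and the equal-area mapping; it lands within roughly 10% of the quoted 28.52 mm, consistent with the error the paper itself notes.

```python
import math

c = 3e8        # speed of light (m/s)
f = 2.4e9      # design frequency (Hz), from the paper
er = 2.33      # RT-duroid relative permittivity, from the paper

# Equivalent disk radius for the dominant TM11 mode, ignoring fringing.
a = 1.8412 * c / (2 * math.pi * f * math.sqrt(er))

# Hexagon side from the equal-area relation pi*a^2 = (3*sqrt(3)/2)*S^2.
S_mm = a / math.sqrt(3 * math.sqrt(3) / (2 * math.pi)) * 1e3
```

Including fringing (an effective, slightly larger radius) moves the estimate closer to the paper's 28.52 mm starting value.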
B. Results and analysis
In order to present the design procedure for achieving
impedance matching in this case, each side of the hexagonal
patch antenna is initially set to 28.5 mm. This dimension is
calculated corresponding to a 2.4 GHz center
frequency. For a hexagon side length of 24.5 mm we achieve
a resonant frequency of 2.45 GHz with a return loss of
-24.1 dB, as shown in fig. 2.
Fig. 2 Return loss of the microstrip antenna
Fig. 5 Gain of the Antenna
Fig. 3 VSWR of the hexagonal patch antenna
After optimization of the response, the return loss and
impedance of the hexagonal patch antenna are shown in Fig. 2
and Fig. 6 respectively. The configuration yielded a VSWR<2
bandwidth of 0.8 MHz, as shown in fig 3. Each side of the
hexagonal patch antenna is 24.5 mm, at which we get the
required center frequency of 2.45 GHz. The return loss is
-24.1 dB.
Fig. 4 Radiation Pattern of the antenna
Fig. 6 Impedance response of the Antenna
To verify that the microstrip antenna can radiate, the
radiation patterns of the antenna are plotted in Fig. 4. As
shown in Fig. 4, the proposed antenna radiates a linearly
polarized wave. As shown in figure 5, the gain of the
proposed antenna is about 3.68 dB.
III. HEXAGONAL PATCH ANTENNA WITH SUPERSTRATE
LOADING
The geometry of the hexagonal patch antenna having a
dielectric cover is shown in Fig. 7. In reality, a microstrip
antenna attached to an electronic device will be protected by
a dielectric cover (superstrate) that acts as a shield against
hazardous environmental effects.
Fig. 7 Structure of antenna with dielectric cover.
These shielding materials will decrease the overall
performances of the antenna operating characteristics such as
resonant frequency, reflection coefficient, impedance bandwidth
and radiating efficiency [9]. In this paper we have used the
dielectric cover in various thicknesses and analyze its effects
on the different antenna parameters.
A. Results and analysis
The frequency response of the microstrip antenna covered
with a dielectric varies as a function of the dielectric cover
thickness. The performance characteristics of the antenna
versus operating frequency for various values of the
dielectric cover thickness are shown in Fig. 8. We also
observed that at higher superstrate thickness the resonance
frequency shifts towards lower frequency. The VSWR of the
antenna at different superstrate thicknesses is shown in
Fig. 9. The impedance of the hexagonal antenna is plotted in
Fig. 10.
Fig.10 Impedance of antenna with different superstrate thickness
IV. CONCLUSION
A hexagonal microstrip patch antenna element has been designed
in this paper. The antenna has a bandwidth of 2.9%, a return
loss of -24.1 dB and a gain of 3.68 dB. Superstrates of
different thickness were then loaded on the hexagonal-patch
microstrip antenna for evaluation. The results show that antenna
performance measures such as centre frequency, bandwidth and
radiating efficiency are reduced, as expected.
ACKNOWLEDGMENT
The authors express their appreciation to Dr. A. K. Shrivastav,
Antenna Division, SAMEER-CEM, Chennai, India for his
technical support and valuable advice. The authors also wish to
acknowledge Dr. Ram Lal Yadav, Professor, Department of
Electronics and Communication, Galgotias College of
Engineering and Technology, for his motivation and guidance.
REFERENCES
[1] I. J. Bahl and P. Bhartia, Microstrip Antennas, IEEE
Trans. Microwave Theory Tech. Vol. MTT-28. No.2, pp.
104-109, February 1980.
[2] I. J. Bahl, P. Bhartia and S.S. Stuchly, Design of a
microstrip antenna covered with a dielectric layer, IEEE
Transactions on Antennas and Propagation, Vol. AP-30,
No.2, pp. 314-18, March 1982.
Fig. 8 Return Loss of the antenna with superstrate
Fig. 9 VSWR of the antenna with different superstrate thickness
[3] N. G. Alexopoulos and D. R. Jackson, Fundamental
superstrate (cover) effect on printed circuit antennas, IEEE
Transactions on Antennas and Propagation, Vol. AP-32,
pp. 807-816, 1984.
[4] H. Y. Yang and N. G. Alexopoulos, Gain enhancement
methods for printed circuit antennas through multiple
superstrate, IEEE Transactions on Antennas and
Propagation, Vol. AP-35, No.7, pp. 860-863, July 1987.
[5] A. Bhattacharyya and T. Tralman, "Effects of dielectric
superstrate on patch antennas", Electron. Letters, Vol. 24,
No. 6, pp. 356-358, March 17, 1988.
[6] R.Q. Lee and A. J. Zaman, Effects of dielectric superstrates
on a two-layer electromagnetically coupled patch antenna,
Ch @ IEEE, pp-620-623, 1989.
[7] A. Benalla and K. C. Gupta, Multiport network model for
rectangular microstrip patches covered with a dielectric
layer, Proc. Inst. Elect. Eng., Vol. 137, Pt. No.6, pp. 377
383, December 1990.
[8] R. Afzalzedeh and R.N. Karekar, Effect of dielectric
protecting superstrate on radiation pattern of microstrip
patch antenna, Electron. Letters, Vol. 27, No.13, pp. 1218-
1219, 1991.
[9] Ghulam Qasim and Shunshi Zhong, Radiation
characteristics of microstrip patch antenna with dielectric
covers, IEEE @1992, pp. 2208-2211.
Square Patch Antenna with Dielectric Cover
Niharika, Jugul Kishor, R. K. Yadav
Department of Electronics and Communication Engineering,
I.T.S. Engineering College, Greater Noida, Uttar Pradesh, India
niharika.its@gmail.com, jugulkishor@gmail.com, ravipusad@gmail.com,
Abstract:- In this paper we study the effect of superstrate
loading on a square patch antenna. A coaxially fed microstrip
patch antenna is used, and the design procedure to achieve
good impedance matching is discussed. With the introduction
of a superstrate layer on the coaxially fed microstrip patch
antenna, the characteristics of the patch antenna change.
Experimental results of superstrate loading on the
characteristics of the coaxially fed microstrip patch antenna
are investigated. The antenna characteristics affected by
superstrate loading are the resonant frequency, impedance
matching, bandwidth and gain.
1. INTRODUCTION
One of the important advantages of the microstrip antenna is
that very small antenna elements can be constructed very
precisely due to its construction method. Its other attractive
and unique properties are: low profile, light weight, low cost,
conformable structure and flexibility with regard to
frequency, polarization, pattern and impedance [1].
Although ring microstrip antennas with CP radiation are
considerably compact, they are always marred by problems
such as narrow CP operating bandwidth and high edge
impedance due to the slenderized ring strip.
The major drawback of the microstrip antenna in its basic
form is its inherently narrow bandwidth, which is a major
obstacle restricting wider applications. Therefore, a vast
number of techniques for widening the microstrip antenna's
bandwidth have been proposed.
In this paper, we describe a coaxial cable-fed square
microstrip patch antenna and observe the effects of a
dielectric cover on the antenna characteristics. Experimental
results for the multi-band performance, gain and radiation
pattern are also presented. A commercial simulator was
employed to study the key design parameters of this square
patch microstrip antenna fabricated on a thick substrate.
Following up by constructing and testing several antenna
prototypes with various side lengths at a fixed substrate
thickness, details of the measured antenna performance, such
as bandwidth, operating centre frequency and peak gain, are
presented and discussed. In addition, superstrates with
various thicknesses and dielectric constants loaded on the
square patch microstrip antenna are measured. Although it is
well known that the effects of superstrate loading on a patch
antenna include reduction of the resonant frequency,
resistance and radiating efficiency, etc. [8], these effects can
be eliminated by fine-tuning the key parameters of the square
patch microstrip antenna introduced.
2. Design of square patch antenna
Figure 1 Structure and design parameters of the antenna
In its most basic form, a microstrip patch antenna consists of
a radiating patch on one side of a dielectric substrate, which
has a ground plane on the other side. The resonant length of
the antenna determines the resonant frequency. The patch is,
in fact, electrically a bit larger than its physical dimensions
due to the fringing fields. The patch introduced here
(Figure 1) is made of copper. The first simulation was done
considering a substrate (εr = 2.33, h = 1.575 mm,
tan δ ≈ 0.001) with width 55.75 mm and length 5.75 mm for
the design of the patch antenna. The operating frequency is
2.5 GHz. Coaxial feeding is applied at the point where the
input resistance is 50 ohms. The main advantage of this type
of feeding scheme is that the feed can be placed at any
desired location inside the patch in order to match its input
impedance. This feed method is easy to fabricate and has low
spurious radiation. However, its major disadvantage is that it
provides narrow bandwidth and is difficult to model, since a
hole has to be drilled in the substrate and the connector
protrudes outside the ground plane, thus not making it
completely planar for thick substrates.
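For orientation, the standard transmission-line design equations for a rectangular patch (not necessarily the procedure the authors used) give a first-cut patch size for the stated substrate (εr = 2.33, h = 1.575 mm) at 2.5 GHz:

```python
import math

c = 3e8
f = 2.5e9          # operating frequency from the paper
er = 2.33          # substrate permittivity from the paper
h = 1.575e-3       # substrate thickness from the paper (m)

# Standard transmission-line model for a rectangular patch.
W = c / (2 * f) * math.sqrt(2 / (er + 1))                      # patch width
eeff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eeff + 0.3) * (W / h + 0.264)) / \
     ((eeff - 0.258) * (W / h + 0.8))                          # fringing extension
L = c / (2 * f * math.sqrt(eeff)) - 2 * dL                     # resonant length
L_mm, W_mm = L * 1e3, W * 1e3
```

The fringing term dL is exactly the "electrically a bit larger" effect mentioned above: the physical length is shortened by 2·dL relative to the half-guided-wavelength resonant length.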
Figure 4 Smith chart of patch antenna

3. Effect of dielectric cover

Figure 2. Structure of antenna with dielectric cover.
The geometry of the square patch antenna having a dielectric
cover is shown in figure 2. In reality, a microstrip antenna
attached to an electronic device will be protected by a
dielectric cover (superstrate) that acts as a shield against
hazardous environmental effects. These shielding materials,
normally plastics (lossy dielectrics), will decrease the overall
performance of the antenna operating characteristics such as
the resonant frequency, impedance bandwidth and radiating
efficiency [9].
In this paper we use dielectric covers of various thicknesses
and analyze their effects on the different antenna parameters.
4. Result and analysis

Figure 5. Gain Plot

(a) Result of square patch antenna
In order to present the design procedure for achieving
impedance matching in this case, a first prototype of the
antenna was designed using an RT-duroid substrate
resonating at 2.4 GHz.

Figure 6. Radiation Pattern
Figure 3. S-parameter of square patch antenna
Figure 7. VSWR curve
Fig. 3 shows the return loss of the antenna, Fig. 4 the Smith
chart, Figs. 5 and 6 the realized gain and radiation pattern,
and Fig. 7 the
VSWR curve. A constant resistance of 50 ohms is taken
initially.
(b) Result of square patch antenna after dielectric cover
Effects on the antenna characteristics after applying the
dielectric cover are shown in figures 8, 9 and 10. The
performance characteristics of the antenna decrease after
adding the dielectric cover, as shown. A variable a denotes
the thickness of the dielectric cover; its value varies from
1 mm to 3 mm in steps of 0.2 mm. The variations in the
return-loss characteristics and the Smith chart are shown
below.
Figure 8 Return Loss
the results revealed that it could reduce the patch size for a
square patch microstrip antenna operated at a given frequency.
Based on the information obtained, superstrates of different
thickness were loaded on the square patch microstrip antenna
for evaluation. The results show that antenna performance
measures such as centre frequency, bandwidth and radiating
efficiency are reduced, as expected. Furthermore, the
axial-ratio measurement shows that a material with a lower
dielectric constant is preferable if a thicker superstrate is to be
implemented.
6. Reference
[1]. R. E. Munson, Conformal microstrip phased arrays, IEEE Trans.
Antennas Propagation, AP-22, pp. 74-78, Jan. 1974.
[2]. P. K. Agrawal and M. C. Bailey, An analysis technique for feed-line
microstrip antennas, IEEE Trans. Antennas Propagation, vol.
AP-25, pp. 756-758, Nov. 1977.
[3]. I. J. Bahl and S. S. Stuchly, Analysis of microstrip covered with a
lossy dielectric, IEEE Trans. Vol. MTT-28, pp. 104-109, Feb.
1980.
[4]. Bahl, I. J., Bhartia, P. and Stuchly, S. S., Design of microstrip
antenna covered with a dielectric layer, IEEE Trans. AP-30, pp.
314-318, 1982.
[5]. H. Pues and A. Van de Capelle, Accurate transmission line model
for the rectangular microstrip antenna, IEE Proc., Vol. 131, Pt. H,
pp. 334-340, 1984.
[6]. Alexopoulos, N. G. and Jackson, D. R., Fundamental superstrate (cover)
effects on printed circuit antennas, IEEE Trans. 1984, AP-32, (8),
pp. 807-816.
[7]. W. S. Chen, K. L. Wong and J. S. Row, Superstrate loading
effects on the circular polarization and cross-polarization
characteristics of a rectangular microstrip patch antenna, IEEE
Trans. Antennas Propagation, vol. 42, pp. 260-264, Feb. 1994.
[8]. I. J. Bahl, Build microstrip antenna with paper thin dimensions,
Microwaves vol.18, pp.50-63, Oct.1979.
[9]. R. Shavit, Dielectric cover effect on rectangular microstrip
antenna array, IEEE Trans. Antenna Propagation AP.42, pp.
1180-1184, 1994.
Figure 9 Smith chart
Figure 10 Radiation Pattern

5. Conclusion
The key parameters of the square patch microstrip antenna are
presented and the design procedures for impedance matching
are studied. An antenna prototype has been constructed, and
Cosine Modulated Filter Bank with Perfect Reconstruction
C. S. Vinitha, Ambedkar Institute of Technology, Delhi.
Abstract- In this paper, we present filter banks based on
cosine modulation. The analysis filters H_k(z) are obtained
from a real-coefficient prototype P_0(z) by cosine modulation.
The design complexity is low because only this prototype has
to be optimized. The modulation cost can be reduced by
expressing the analysis and synthesis banks in terms of the
DCT matrix, for which fast implementations exist. In
contrast to the 2M-channel structures, the proposed filter bank
has only M bands, and all the sub-filters are linear phase.
The modulation used is DCT-II, which ensures fast and
efficient implementation.
Index Terms- Filter bank, impulse response, real coefficient,
perfect reconstruction, prototype filter.
I INTRODUCTION
Multi-rate digital signal processing has attracted much attention over the past two decades due to its applications in sub-band coding of speech, audio and video, multiple-carrier data transmission, etc. A key characteristic of multi-rate algorithms is their high computational efficiency. A multi-rate system can increase or decrease the sampling rate of individual signals before or while processing them. These signals, with different sampling rates, can then be simultaneously processed in various parts of the multi-rate system. Digital filter banks are among the most important applications of multi-rate DSP. Many different filter bank approaches have been developed over the last fifteen years.
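As a toy illustration of the rate changes mentioned above (the signal and factor M below are arbitrary assumptions, not from the paper), decimation keeps every M-th sample and expansion re-inserts zeros between samples:

```python
import numpy as np

M = 4
n = np.arange(64)
x = np.sin(2 * np.pi * 3 * n / 64)   # low-frequency tone, safe to decimate by 4

# Decimation: keep every M-th sample (an anti-alias filter would precede this
# in a real system; omitted here because the tone is already band-limited).
xd = x[::M]

# Expansion: insert M-1 zeros between samples (an interpolation filter would follow).
xe = np.zeros(len(xd) * M)
xe[::M] = xd

print(len(x), len(xd), len(xe))  # 64 16 64
```

In a filter bank, these two operations appear after the analysis filters and before the synthesis filters, respectively.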
In a filter bank, the analysis filters first channelize the signal to be processed. The extracted sub-band signals are then decimated and processed by a processing unit according to the application at hand, and are either stored or transmitted. On the receiver side, a synthesis filter bank reverses this process and reconstructs the original signal. Recently, the PR cosine-modulated filter bank has emerged as an attractive choice of filter bank with respect to implementation cost and design ease [1-5]. The impulse responses of the analysis and synthesis filters h_k(n) and f_k(n) are cosine-modulated versions of the prototype filter h(n). The formulation in [4,5] does not give linear-phase analysis and synthesis filters.

The linear-phase property leads to faster and more efficient implementation. To get linear-phase sub-filters, the structure proposed in [6,7] uses a 2M-band structure with sine and cosine modulation, which actually provides the function of only an M-band filter bank. It is therefore highly redundant and needs twice the resources of an M-channel filter bank. In this paper, we derive the PR conditions for an M-channel CMFB with linear-phase sub-filters using the general framework. The proposed filter bank uses only M channels, all the filters are linear phase, and it uses simple modulation based on an orthogonal transform and only one prototype filter.
II Cosine Modulated Filter Bank
2.1 Generation of real-coefficient filter banks: In any M-channel filter bank the filters H_k(z) are related to H_0(z) as H_k(z) = H_0(zW_M^k), where W_M = e^{-j2π/M}. Since h_k(n) is obtained by exponential modulation of h_0(n) [1,2], the coefficients of h_k(n) are in general complex even if h_0(n) is real. Therefore the output of H_k(z) could be a complex signal even if the input x(n) is real.

Here we derive a class of filters with real coefficients by using cosine modulation rather than exponential modulation. This can be done by first obtaining 2M complex filters using exponential modulation, and then combining appropriate pairs of filters [4]. The figure shows a uniform-DFT analysis bank, with the 2M filters related as

H_k(z) = H_0(zW^k), that is, h_k(n) = h_0(n) W^{-kn},    ---(1)

where H_0(z) is called the prototype filter and W = W_{2M}. The polyphase components of H_0(z) are G_k(z), 0 ≤ k ≤ 2M-1.
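The effect described above can be checked numerically. The sketch below (a toy Hanning prototype and unit constants, both illustrative assumptions rather than a designed bank) builds the 2M exponentially modulated filters and confirms that a conjugate pair sums to a real impulse response:

```python
import numpy as np

M = 4                      # number of channels
N2 = 2 * M                 # 2M complex filters in the intermediate bank
h0 = np.hanning(16)        # toy real-coefficient prototype (illustrative only)
n = np.arange(len(h0))
W = np.exp(-1j * 2 * np.pi / N2)         # W_{2M}

# Exponential modulation with the half-sample shift: each Q_k is complex.
Q = [h0 * W ** (-(k + 0.5) * n) for k in range(N2)]
print(np.iscomplexobj(Q[0]))             # True: exponential modulation is complex

# Q_{2M-1-k} has the conjugate coefficients of Q_k, so combining the pair
# (here with a_k = c_k = 1 for simplicity) gives a real impulse response.
hk = Q[1] + np.conj(Q[1])
print(np.max(np.abs(hk.imag)) == 0.0)    # True: the pair sum is real
```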
Fig 1(a): M-channel maximally decimated parallel filter bank.

From equation (1) we have

H_k(e^{jw}) = H_0(e^{j(w - kπ/M)}).    ---(2)

The responses |H_0(e^{jw})| and |H_{2M-1}(e^{jw})| are images of each other with respect to zero frequency. The typical pass-band width of a combined filter is equal to 2π/M, which is twice that of H_0(z). In order to make all the filter bandwidths equal after combining pairs, we use a right-shifted version of the original set of 2M responses, the amount of right shift being π/2M. This is accomplished by replacing z with zW^{0.5}. The complex filters Q_k(z) are given in terms of H_0(z) as

Q_k(z) = H_0(zW^{k+0.5}), 0 ≤ k ≤ 2M-1.    ---(3)

The magnitude responses of Q_k(z) and Q_{2M-1-k}(z) are now images of each other with respect to zero frequency, that is, |Q_k(e^{jw})| = |Q_{2M-1-k}(e^{jw})|. The impulse response coefficients of Q_k(z) and Q_{2M-1-k}(z) are conjugates of each other, that is,

Q_{2M-1-k}(z) = Q_k*(z).    ---(4)

2.1.1 Definition of the real-coefficient analysis filters: Define

U_k(z) = c_k H_0(zW^{k+0.5}) = c_k Q_k(z)    ---(5)

and

V_k(z) = c_k* H_0(zW^{-(k+0.5)}) = c_k* Q_{2M-1-k}(z).    ---(6)

The M analysis filters can then be generated as

H_k(z) = a_k U_k(z) + a_k* V_k(z), 0 ≤ k ≤ M-1.    ---(7)

Here c_k and a_k are unit-magnitude constants. Since the coefficients of the prototype are real, the coefficients of V_k(z) and U_k(z) are conjugates of each other, so the h_k(n) are real.

2.1.2 Alias cancellation: The decimated output of H_k(z) gives rise to the alias components H_k(zW_M^l) X(zW_M^l). The synthesis filter F_k(z), whose passband coincides with that of H_k(z), retains the un-shifted version H_k(z)X(z) and also permits a small leakage of the shifted versions. Therefore we generate F_k(z) as

F_k(z) = b_k U_k(z) + b_k* V_k(z), 0 ≤ k ≤ M-1,    ---(8)

where the b_k are unit-magnitude constants. The outputs of F_k(z) as well as F_{k-1}(z) have the common alias component X(zW_M^{-k}); we try to choose F_{k-1}(z) and F_k(z) such that this component is cancelled when these outputs are added. The negative-frequency part of F_k(z) has the following significant alias components:

(a_k b_k* U_k(zW_M^{-k}) V_k(z)) X(zW_M^{-k}) + (a_k b_k* U_k(zW_M^{-(k+1)}) V_k(z)) X(zW_M^{-(k+1)}),    ---(9)

and the negative-frequency part of F_{k-1}(z) has

(a_{k-1} b_{k-1}* U_{k-1}(zW_M^{-(k-1)}) V_{k-1}(z)) X(zW_M^{-(k-1)}) + (a_{k-1} b_{k-1}* U_{k-1}(zW_M^{-k}) V_{k-1}(z)) X(zW_M^{-k}).    ---(10)

The alias component X(zW_M^{-k}) can be eliminated if the sum of the coefficients of this term in the above two equations is made equal to zero. By using the definitions of U_k(z) and V_k(z) and the condition |c_k| = |c_{k-1}| = 1, we can write the above sum in terms of the V_i(z)'s as

(a_k b_k* + a_{k-1} b_{k-1}*) V_{k-1}(z) V_k(z) = 0.    ---(11)

This condition can be satisfied by constraining the a_i and b_i such that

a_k b_k* = -a_{k-1} b_{k-1}*, 1 ≤ k ≤ M-1.    ---(12)

2.1.3 Eliminating phase distortion: The distortion function T(z) is expressed as [1,2]

T(z) = (1/M) Σ_{k=0}^{M-1} H_k(z) F_k(z).    ---(13)

The QMF bank is free from phase distortion if T(z) has linear phase. The synthesis filters are chosen according to the mirror-image condition

f_k(n) = h_k(N-n),    ---(14)

or equivalently

F_k(z) = z^{-N} H_k(z^{-1}).    ---(15)

In this case T(z) becomes

T(z) = (z^{-N}/M) Σ_{k=0}^{M-1} H_k(z) H_k(z^{-1}).    ---(16)
Thus

M T(e^{jw}) = e^{-jwN} Σ_{k=0}^{M-1} |H_k(e^{jw})|²,    ---(17)

which shows that T(z) has linear phase [1].

Choice of c_k to ensure linear phase of U_k(z) and V_k(z): Consider a linear-phase prototype which is symmetric, i.e. h_0(n) = h_0(N-n), so that

H_0(z^{-1}) = z^N H_0(z).    ---(18)

We then have

H_0(e^{jw}) = e^{-jwN/2} H_R(w),    ---(19)

where H_R(w) is real-valued. We choose the value of c_k so that the complex-coefficient filters U_k(z) and V_k(z) have the same phase as H_0(e^{jw}). From equation (5),

U_k(e^{jw}) = c_k W^{-(k+0.5)N/2} e^{-jwN/2} H_R(w - (k+0.5)π/M).    ---(20)

So if we choose

c_k = W^{(k+0.5)N/2},    ---(21)

then equation (20) becomes

U_k(e^{jw}) = e^{-jwN/2} H_R(w - (k+0.5)π/M).    ---(22)

Since this resembles equation (19), U_k(z) and V_k(z) are linear-phase filters with phase responses identical to that of the prototype H_0(e^{jw}).

Choice of b_k to ensure the relation F_k(z) = z^{-N} H_k(z^{-1}): As U_k(z) and V_k(z) have linear phase, we can write

U_k(z^{-1}) = z^N V_k(z), V_k(z^{-1}) = z^N U_k(z).    ---(23)

Using these relations in equation (7) we get

H_k(z^{-1}) = z^N [a_k* U_k(z) + a_k V_k(z)],    ---(24)

z^{-N} H_k(z^{-1}) = a_k* U_k(z) + a_k V_k(z).    ---(25)

If we choose

b_k = a_k*,    ---(26)

the right-hand side of equation (25) reduces to F_k(z), and the mirror-image condition of equation (15) is satisfied.

Choice of a_k: Using the relation of equation (26), the alias-cancellation constraint of equation (12) simplifies to a_k² = -a_{k-1}², thus we can write

a_k = ±j a_{k-1}.    ---(27)

Based on the above constraints we choose a_k = e^{jθ_k} with θ_k = (-1)^k π/4, 0 ≤ k ≤ M-1. By further constraining a_0 and a_{M-1} such that

a_0^4 = a_{M-1}^4 = -1,    ---(28)

we can eliminate the cross terms U_0(z)V_0(z) and U_{M-1}(z)V_{M-1}(z), which cause distortion, thus yielding

T(z) = (1/M) Σ_{k=0}^{M-1} [U_k²(z) + V_k²(z)].    ---(29)

2.1.4 Final real-coefficient expression for the filters: The final expression for the analysis filter is as follows. The first term of equation (7) is

a_k U_k(z) = a_k c_k P_0(zW^{k+0.5}), with c_k = W^{(k+0.5)N/2}.    ---(30)

The coefficients of the second term are obtained by conjugating the above expression, so h_k(n) equals two times the real part of the first term, that is,

h_k(n) = 2 p_0(n) cos( (π/M)(k+0.5)(n - N/2) + θ_k ).    ---(31)

The synthesis filters f_k(n) are obtained by replacing a_k with b_k. Since b_k = a_k*, we replace θ_k by -θ_k, that is,

f_k(n) = 2 p_0(n) cos( (π/M)(k+0.5)(n - N/2) - θ_k ).    ---(32)

2.2 Conditions for perfect reconstruction: We can achieve perfect reconstruction by constraining the polyphase matrix E(z) [1,2] to be paraunitary, i.e. E(z^{-1})^T E(z) = dI, and taking the synthesis filter coefficients to be the time-reversed conjugates.

Theorem: Let the prototype P_0(z) be a real-coefficient FIR filter with length N+1 = 2Mm for some integer m. Assume p_0(n) = p_0(N-n). Let G_k(z), 0 ≤ k ≤ 2M-1, be the 2M polyphase components of P_0(z). Suppose the M analysis filters H_k(z) are generated by cosine modulation with θ_k = (-1)^k π/4. Then the M x M polyphase component matrix E(z) is paraunitary if and only if the G_k(z) satisfy the pairwise power-complementary conditions

G_k(z^{-1}) G_k(z) + G_{M+k}(z^{-1}) G_{M+k}(z) = 1/(2M), 0 ≤ k ≤ M-1,    ---(33)
where the G_k(z) are the type-I polyphase components [1,2] of the prototype. The above theorem covers all cosine-modulated filter banks which are derived from a linear-phase prototype of length 2Mm. Because of the linear-phase symmetry of the prototype, approximately half of the M constraints given in (33) are redundant. The linear-phase CMFB reported in [6,7] has two subsystems, in which the first subsystem has M+1 channels and the second has M-1 channels, or vice versa. These use cosine modulation for one subsystem and sine modulation for the other. The redundancy in this filter bank is evident.
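A minimal numerical sketch of the modulation formulas (31) and (32), using a toy symmetric window as the prototype p_0(n) (an illustrative assumption, not a PR-optimized design), confirms the mirror-image relation f_k(n) = h_k(N-n) of equation (14):

```python
import numpy as np

M, m = 8, 2
Nlen = 2 * M * m            # filter length 2Mm (N + 1 taps)
N = Nlen - 1
n = np.arange(Nlen)
p0 = np.hanning(Nlen)       # toy symmetric prototype, p0(n) = p0(N - n)

def bank(sign):
    # eq (31)/(32): cosine modulation with theta_k = (-1)^k * pi/4
    return np.array([2 * p0 * np.cos(np.pi / M * (k + 0.5) * (n - N / 2)
                                     + sign * (-1) ** k * np.pi / 4)
                     for k in range(M)])

h = bank(+1)   # analysis filters, eq (31)
f = bank(-1)   # synthesis filters, eq (32)

# Mirror-image relation f_k(n) = h_k(N - n), eq (14)
print(np.allclose(f, h[:, ::-1]))   # True
```

The relation holds for any symmetric p_0(n); a PR design would additionally optimize p_0(n) so that condition (33) is met.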
III. M-channel CMFB with linear phase sub-filters

Fig. 2 shows the proposed structure for the M-channel filter bank, using the framework given in [8]. The input is represented as a row vector composed of down-sampled input sequences x(n) = [x(nM), x(nM+1), ..., x(nM+M-1)], where n may be viewed as the index of the sequences x(nM+m), m = 0, 1, ..., M-1. From the figure, the output of the block P_a is y(n) = [y_0(n), y_1(n), ..., y_{M-1}(n)]. The block P_a is an M x M matrix formed from the analysis filters. The filter length is assumed to be an integer multiple of M.

Fig 2: Proposed filter bank structure.

Let Z be the z-transform of the vector at the output of the block P_s. Then we can represent the analysis as Y = X P_a and the reconstruction process as Z = Y P_s. The sub-band filters in a modulated filter bank have the form h_k(n) = h(n) T_a(n,k), where T_a(n,k) is the modulation kernel, with 0 ≤ n ≤ LM-1 and 0 ≤ k ≤ M-1. Let T_a(n,k) be a block-M transform and F_a be an M x M matrix such that P_a = F_a T_a and P_s = F_s T_s, with F_s and T_s being the inverses of F_a and T_a respectively. We choose the block transform as

T_a(n,k) = c_k cos( (π/M) k (n + 0.5) ),    ---(38)

with c_k = 1/√2 for k = 0 and c_k = 1 for k ≠ 0, 0 ≤ k ≤ M-1, 0 ≤ n ≤ M-1, which is the DCT-II kernel. The modulated analysis filters considered are

h_k(n) = c_k h(n) cos( (π/M)(k + 0.5)(n - N/2) + θ_k ),    ---(39)

where the window h(n) = sin( (π/2M)(n + 0.5) ) is that of the MLT [3].

The matrix F_a can be derived from the polyphase representation of the prototype filter. The polyphase representation of the prototype filter H(z) is

H(z) = Σ_{n=0}^{2M-1} G_n(z^{2M}) z^{-n},  [1,2]

where G_n(z) is the type-I polyphase component of the prototype filter,

G_n(z) = Σ_m h(n + 2Mm) z^{-m}.    ---(34)

The polyphase matrix of the analysis section is as given in [1], with elements

[E(z)]_{k,n} = Σ_{m=0}^{L-1} h_k(n + Mm) z^{-(L-1-m)},    ---(35)

where L is a positive integer such that the filter length N = LM. The matrix P_a has the corresponding causal form, and its elements are

[P_a(z)]_{n,k} = Σ_{m=0}^{L-1} h_k(n + Mm) z^{-m}.    ---(36)

Similarly, the matrix P_s for the synthesis bank is built from the synthesis polyphase matrix with elements

[E_s(z)]_{n,k} = Σ_{m=0}^{L-1} f_k(n + Mm) z^{-m}.    ---(37)

In terms of the polyphase components, E(z) can be factored as

E(z) = C [ g_0(z²) ; z^{-1} g_1(z²) ],    ---(40)

where

g_0(z) = diag( G_0(z), G_1(z), ..., G_{M-1}(z) ),
g_1(z) = diag( G_M(z), G_{M+1}(z), ..., G_{2M-1}(z) ),

and

[C]_{k,n} = c_k cos( (π/M)(k + 0.5)(n - N/2) + θ_k ), k = 0, 1, ..., M-1, n = 0, 1, ..., 2M-1.    ---(41)
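Because the kernel of equation (38) is the DCT-II, its inverse reduces to a scaled transpose, which is what makes the synthesis-side block transform cheap. A small numerical check (M = 8 is an arbitrary illustrative choice):

```python
import numpy as np

M = 8
k = np.arange(M)[:, None]
nn = np.arange(M)[None, :]
ck = np.where(k == 0, 1 / np.sqrt(2), 1.0)   # c_0 = 1/sqrt(2), c_k = 1 otherwise

# DCT-II block transform of eq (38); rows indexed by k, columns by n
Ta = ck * np.cos(np.pi / M * k * (nn + 0.5))

# Rows are orthogonal: Ta @ Ta.T equals (M/2) * I, so the inverse needed on
# the synthesis side is simply (2/M) * Ta.T.
print(np.allclose(Ta @ Ta.T, (M / 2) * np.eye(M)))   # True
```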
Fig 3: Frequency response of the sub-band filter h_0(n).

From equations (34) and (35) we can write P_a and F_a as

P_a = z^{-(2m-1)} [ g_0(z²)  z g_1(z²) ] C^T,    ---(42)

F_a = z^{-(2m-1)} [ g_0(z²)  z g_1(z²) ] C^T T_a^{-1}.    ---(43)

We can express P_k(z) as

P_k(z) = Σ_{l=0}^{m-1} h(2lM + k) z^{-(m-1-l)}.    ---(44)

Let

z^{-(2m-2)} G_k(z²) = P_k(z).    ---(45)

Using the above assumption we can find the values of g_0(z) and g_1(z), and finally the value of F_a using equation (43). The synthesis filters are derived with the inverses of F_a and T_a.

IV. Results

The frequency response of a linear-phase prototype filter satisfying the PR condition is obtained for M = 8 and a length of 32 taps. The frequency response of the sub-band filter h_0(n) is shown in Fig. 3.

V. Conclusion

In this paper we have derived the expressions for the real coefficients of the filter bank. We have also derived the condition for perfect reconstruction, i.e. that the polyphase component matrix E(z) should be lossless (paraunitary). An efficient design satisfying the above condition, with only M channels, which removes the redundancy of previous designs that use 2M channels, is presented.

References:
1. P.P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ, 1993.
2. P.P. Vaidyanathan, "Multirate digital filters, filter banks, polyphase networks, and applications: A tutorial," Proc. IEEE, vol. 78, no. 1, pp. 56-93, Jan. 1990.
3. H.S. Malvar, "Modulated QMF filter banks with perfect reconstruction," Electron. Lett., vol. 26, no. 13, June 1990.
4. R.D. Koilpillai and P.P. Vaidyanathan, "Cosine-modulated FIR filter banks satisfying perfect reconstruction," IEEE Trans. Signal Process., vol. 40, no. 4, pp. 770-783, April 1992.
5. T.Q. Nguyen and R.D. Koilpillai, "The theory and design of arbitrary-length cosine-modulated filter banks and wavelets, satisfying perfect reconstruction," IEEE Trans. Signal Process., vol. 44, no. 3, pp. 473-483, March 1996.
6. Y.P. Lin and P.P. Vaidyanathan, "Linear phase cosine modulated maximally decimated filter banks with perfect reconstruction," IEEE Trans. Signal Process., vol. 43, no. 11, pp. 2525-2539, November 1995.
7. X.-Q. Gao, Z.-Y. He, and X.-G. Xia, "The theory and implementation of arbitrary-length linear-phase cosine-modulated filter bank," Signal Processing, vol. 80, pp. 889-896, 2000.
8. G.D.T. Schuller and M.J.T. Smith, "New framework for modulated perfect reconstruction filter banks," IEEE Trans. Signal Process., vol. 44, no. 8, pp. 1941-1954, 1996.
Condition Monitoring of Electrical Machines Using Gabor Transform

Neelam Mehala
Department of Electronics Engineering,
YMCA University of Science and Technology, Faridabad-121006 (Haryana), INDIA
neelamturk@yahoo.co.in

Abstract - The main goal of this paper is to develop a method for determining short-winding faults in electric machines using the Gabor Transform. Advances in measurement equipment (spectrum analyzers) and digital signal processing software have made the diagnosis of electric machine defects possible. Motor Current Signature Analysis (MCSA) is used for the detection of stator winding faults. The stator current contains unique fault frequency components that can be used for short-winding fault detection. The proposed method allows continuous real-time tracking of short-winding faults in induction motors operating under stationary and non-stationary conditions, thus allowing continuous monitoring of the motor health. The LabVIEW 8.2 software and an NI PCI-6251 data acquisition card are used for controlling the test measurements and data acquisition, and for the data processing. Laboratory experiments indicate that Motor Current Signature Analysis with the proposed method is a reliable tool for diagnosis of stator winding faults.

Keywords - Condition monitoring and fault detection, signal processing, motor current signature analysis, induction motors.

I. INTRODUCTION

Induction motors are inherently reliable and require minimum maintenance. However, like other motors, they eventually deteriorate and fail. This gives rise to the need for cost-effective preventive maintenance based on condition monitoring, which can be addressed by monitoring and analyzing the real-time signals of the motors. A motor failure can result in the shutdown of a generating unit or production line. One major cause of failures is breakdown of the winding insulation leading to puncture of the ground wall. Early detection of inter-turn shorts during motor operation would eliminate consequent damage to adjacent coils and the stator core, reducing repair cost and motor outage time. In addition to the benefits gained from early detection of winding insulation breakdown, significant advantages would accrue from locating the faulted coil within the stator winding. The most common kinds of faults related to the stator winding of induction motors are phase-to-ground, phase-to-phase and short-circuit of coils of the same or different phases. The last kind of fault is also called a turn-to-turn fault. All these faults are classified as insulation faults and have several causes: hot spots in the stator winding (or stator core) resulting in high temperatures, loosening of structural parts, oil contamination, moisture and dirt, electrical discharges (in the case of high-voltage windings), slack core lamination, and abnormal operation of the cooling system. Short-circuit-related faults have specific components in the stator current frequency spectrum. Incipient faults can be detected by sampling the stator current and analyzing its spectrum [1,2,3].

An inter-turn short circuit of the stator winding is the starting point of winding faults such as the loss of turns of a phase winding. The short-circuit current flows in the inter-turn short-circuited windings. This initiates a negative MMF, which reduces the net MMF of the motor phase. Therefore, the waveform of the air-gap flux, which is changed by the distortion of the net MMF, induces harmonic frequencies in the stator winding current. The frequencies which appear in the spectrum showing the presence of a short-circuit fault are given by the following equation [1, 4]:

f_sc = f_1 [ (n/p)(1 - s) ± k ]    (1)

where
f_1 - fundamental frequency (Hz)
f_sc - short-circuit related frequency (Hz)
p - pole pairs
s - rotor slip
k = 1, 3, 5, ...
n = 1, 2, 3, ...

The frequencies revealing the presence of a short circuit of the winding are in some cases very close to frequencies related to other kinds of defects, for example eccentricities. For a correct diagnosis it is very important to be able to distinguish one from the other.
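As a numerical illustration of equation (1), the sketch below tabulates the positive short-circuit-related frequencies for an assumed 50 Hz supply, 2 pole pairs and 5% slip (these motor values are illustrative, not taken from the experiment):

```python
# Short-circuit-related frequencies from eq (1): f_sc = f1 * ((n/p)(1 - s) +/- k)
f1, p, s = 50.0, 2, 0.05   # illustrative supply frequency, pole pairs, slip

def fault_freqs(n_max=3, k_max=5):
    freqs = set()
    for n in range(1, n_max + 1):
        for k in range(1, k_max + 1, 2):          # k = 1, 3, 5, ...
            for sign in (+1, -1):
                f = f1 * (n / p * (1 - s) + sign * k)
                if f > 0:                          # keep physical (positive) lines
                    freqs.add(round(f, 2))
    return sorted(freqs)

print(fault_freqs())
```

For these values the lowest sidebands fall near 21 Hz and 74 Hz; the frequencies monitored in practice depend on the actual motor parameters.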
II. DEVELOPED SYSTEM FOR STATOR WINDING FAULT

A system for fault detection was developed and implemented based on Motor Current Signature Analysis (MCSA). The stator current is first sampled in the time domain; then the frequency spectrum is calculated and analyzed, aiming to detect specific frequency components related to incipient faults. For each stator fault there is an associated frequency that can be identified in the spectrum. Faults are detected by comparing the amplitude of specific frequencies with that of the same machine considered as healthy. Based on the amplitude in dB it is also possible to determine the degree of the faulty condition. In the described system, a data acquisition board was used to acquire the current samples from the motor operating under different load conditions. The current signals are then transformed to the frequency domain using the Gabor Transform.
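The comparison step can be sketched as follows, with synthetic currents and a hypothetical 5 dB decision threshold standing in for the real measurements:

```python
import numpy as np

# Sample the stator current, take its spectrum, and compare the dB amplitude
# at a monitored frequency against a healthy baseline. Signals are synthetic.
fs, T = 2000, 1.0
t = np.arange(0, T, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)                      # fundamental only
faulty = healthy + 0.05 * np.sin(2 * np.pi * 73.75 * t)   # injected sideband

def db_at(x, f):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    idx = int(round(f * T))            # bin index for a T-second record
    return 20 * np.log10(spec[idx] + 1e-12)

rise = db_at(faulty, 73.75) - db_at(healthy, 73.75)
print(rise > 5)   # a clear rise at the monitored sideband flags a fault
```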
The MCSA is applied for the detection of short-winding faults, where the sideband spectrum around the fundamental frequency is considered. Based on MCSA, a system for fault detection was developed and implemented. A data acquisition card and a data acquisition board (ELVIS) are used to acquire the current samples from the motor under load. The current signals are then transformed to the frequency domain using a Gabor Transform algorithm. The stator current is first sampled in the time domain; then the frequency spectrum is calculated and analyzed, aiming to detect specific frequency components related to incipient faults. For each stator fault there is an associated frequency that can be identified in the spectrum.

Faults are detected by comparing the harmonic amplitude of specific frequencies with the harmonic amplitude of the same machine considered as healthy. Based on the amplitude in dB it is also possible to determine the degree of the faulty condition. The experiment was performed on a three-phase, 0.5 hp, 4-pole, 50 Hz motor. Several measurements were made, in which the stator current waveform was acquired for a given number of short-circuited coils. Current measurements were performed for a healthy stator winding and also for the same machine with different numbers of shorted coils in the same phase. The data was sent to a PC through an acquisition board (ELVIS) from National Instruments. After the reading, the current signal is decomposed by the Gabor Transform algorithm. All the signal processing is performed using LabVIEW's Advanced Signal Processing module to generate the frequency spectrum. First, the motor was tested in the absence of faults. Afterwards, experiments were performed on the faulty motor.

III. SHORT WINDING FAULT DIAGNOSIS USING GABOR TRANSFORM

The Gabor transform is a linear time-frequency analysis method that computes a linear time-frequency representation of time-domain signals. The Gabor spectrogram has better time-frequency resolution than the STFT spectrogram and less cross-term interference than the WD method. The Gabor spectrogram represents a time-domain signal s(t) as a linear combination of elementary functions h_{m,n}(t), as shown in the following equation [5,6]:

s(t) = Σ_m Σ_n c_{m,n} h_{m,n}(t)    (1)

where h_{m,n}(t) is the time-frequency elementary function and c_{m,n}, the weight of h_{m,n}(t), are the Gabor coefficients. The Gabor transform computes the coefficients c_{m,n} for the signal s(t). The following equation defines the time-shifted and frequency-modulated version h_{m,n}(t) of a window function h(t):

h_{m,n}(t) = h(t - mdM) e^{j2πnt/N}    (2)

where h(t) is the synthesis window, dM is the time step and N is the sample frequency. c_{m,n} reveals how the signal behaves in the joint time-frequency domain around the time and frequency centers of h_{m,n}(t).

We can use the Gabor transform to obtain the Gabor coefficients c_{m,n} with the following equation:

c_{m,n} = Σ_t s[t] y*[t - mdM] e^{-j2πnt/N}    (3)

where y(t) is the analysis window; y(t) and h(t) are a pair of dual functions.

The Gabor spectrogram is used to estimate the frequency content of a signal. Moreover, these kinds of images provide graphical information on the evolution of the power spectrum of a signal as the signal is swept through time. Spectrograms are widely used by voice and audio engineers. They help to develop a visual understanding of the frequency content of a speech signal while a particular sound is being vocalized. Spectrograms are also used in industrial environments to analyze frequency content. Here, the spectrogram is used to diagnose the short-winding fault in an induction motor.

IV. EXPERIMENTAL SET-UP, DATA ACQUISITION AND PRACTICAL RESULTS

In this experiment, a short-circuited motor is used. A virtual instrument (VI) was developed to diagnose short-winding faults using the Gabor spectrogram algorithm. The sample frequency was 2000 Hz and 1500 samples were taken. The window size is taken as 128 and the order of the spectrogram is taken as 3. The order of the spectrogram balances the time-frequency resolution and the cross-term interference of the Gabor spectrogram. As the order increases, the time-frequency resolution of the Gabor spectrogram improves, but the spectrogram includes more cross-term interference and requires a longer computation time. When the order is zero, the Gabor spectrogram is non-negative and is similar to the STFT spectrogram. As the order increases, the Gabor spectrogram converges to the Wigner distribution. For most applications, an order of two to five is chosen to balance the time-frequency resolution and cross-term suppression. Figure 1 shows the Gabor spectrogram for a healthy induction motor. For the motor with a short winding analyzed in the experiment, the resulting spectrogram is shown in Figure 2; it illustrates the spectral distribution resulting from a short winding. The spectrogram shows the harmonic nearest to the main frequency which results from the short winding. The spectrogram shows the fault frequencies from the perspective of time variation and could, therefore, be a useful technique to apply for the diagnosis of stator winding faults.
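A minimal sketch of the coefficient computation in equation (3), using a Gaussian analysis window and an FFT per time step (the window shape, dM and N are illustrative choices; a complete Gabor expansion would also require the dual synthesis window h(t)):

```python
import numpy as np

fs, N, dM = 2000, 128, 64            # sample rate, window length, time step
t = np.arange(fs)                    # one second of samples
s = np.sin(2 * np.pi * 50 * t / fs)  # 50 Hz test tone standing in for a current

def gabor_coeffs(s, N, dM):
    # Gaussian analysis window y[t] (illustrative choice)
    y = np.exp(-0.5 * ((np.arange(N) - N / 2) / (N / 8)) ** 2)
    starts = list(range(0, len(s) - N, dM))
    C = np.empty((len(starts), N // 2 + 1), dtype=complex)
    for i, m in enumerate(starts):
        # eq (3) for one time shift m: correlate s with the shifted, modulated
        # window -- computed here as an FFT of the windowed frame
        C[i] = np.fft.rfft(s[m:m + N] * y)
    return C

C = gabor_coeffs(s, N, dM)
peak_bin = np.argmax(np.abs(C).mean(axis=0))
print(peak_bin * fs / N)             # energy concentrates near the 50 Hz tone
```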
Figure 1: Gabor spectrogram for a healthy induction motor
Figure 2: Gabor spectrogram for a short-circuited induction motor (fault frequency indicated)
V. CONCLUSION
This paper presents an experimental study to diagnose stator winding faults. Here, the Gabor Transform is used to detect the short-winding fault. The Gabor spectrogram clearly shows the fault frequency which results from the short-circuit winding fault. The implemented and tested methods proved effective, as the practical results corresponded to those predicted with the developed system. The results obtained show a high degree of reliability, which enables the applied method to be used as a monitoring tool for similar motors.
REFERENCES
[1] J. Sottile and J. L. Kohler, "An on-line method to detect incipient failure of turn insulation in random-wound motors," IEEE Trans. Energy Conversion, vol. 8, no. 4, pp. 762-768, December 1993.
[2] S. B. Lee, R. M. Tallam, and T. G. Habetler, "A robust, on-line turn-fault detection technique for induction machines based on monitoring the sequence component impedance matrix," IEEE Trans. Power Electronics, vol. 18, no. 3, pp. 865-872, May 2003.
[3] A. H. Bonnett and G. C. Soukup, "Cause and analysis of stator and rotor failures in three-phase squirrel-cage induction motors," IEEE Trans. Industry Applications, vol. 28, no. 4, pp. 921-937, July/August 1992.
[4] J. H. Jung, J. J. Lee, and B. H. Kwon, "Online diagnosis of induction motors using MCSA," IEEE Trans. Industrial Electronics, vol. 53, no. 6, pp. 1842-1852, Dec. 2006.
[5] www.ni.com
[6] L. Cohen, Time-Frequency Analysis, Prentice Hall PTR, 1995.
[7] L. Debnath, Wavelet Transforms and Time-Frequency Signal Analysis, Birkhäuser, Boston, 2001.
Dr. Neelam Mehala is working as an Assistant Professor in the Electronics and Communication Engineering Department, YMCA University of Science and Technology, Faridabad (Haryana). She received her B.E. degree in Electronics and Communication from North Maharashtra University, Jalgaon (M.S.). She received both her M.Tech. (Electronics) and Ph.D. (Electronics) from the National Institute of Technology, Kurukshetra (Haryana). She has published several papers in national and international conferences and journals. Her areas of interest are digital signal processing, condition monitoring, fault diagnosis, spectrum analysis and electrical machines.
STUDY OF MICROSCOPIC IMAGE PROCESSING
Sasikumar Gurumurthy, B.K.Tripathy
Abstract- Recent advances in hardware and software have made it possible to create digital microscopic scans of whole slides. These images are relatively large (100k x 100k) and in color, hence processing them presents new challenges. Although this research area is getting increasingly popular, it does not receive enough attention in the current curriculum. This paper will introduce the current challenges, recent advances and innovations in this newly developing area while reviewing several frequently used image processing techniques in this context. Most of the signal processing concepts that apply to one-dimensional signals also extend to the two-dimensional image signal. Some of these one-dimensional signal processing concepts become significantly more complicated in two-dimensional processing. Image processing brings some new concepts, such as connectivity and rotational invariance, that are meaningful only for two-dimensional signals. The fast Fourier transform is often used for image processing operations because it reduces the amount of data and the necessary processing time.
Keywords- Signal Processing, Image, Fourier
I.INTRODUCTION
Recent advances in hardware and software have made it
possible to create digital microscopic scans of whole slides.
These images are relatively large (100k x 100k) and in
color, hence processing them presents new challenges.
Although this research area is getting increasingly popular,
it does not receive enough attention in the current
curriculum. This paper will introduce the current
challenges, recent advances and innovations in this newly
developing area while reviewing several frequently used
image processing techniques in this context.
Image Processing is any form of signal processing for
which the input is an image, such as photographs or frames
of video; the output of image processing can be either an
image or a set of characteristics or parameters related to the
image.
Sasikumar Gurumurthy is with VIT University, Vellore, Tamil Nadu, India, in the School of Computing Science and Engineering. He is also pursuing his Ph.D. degree in Soft Computing at Vellore Institute of Technology, Vellore. His current research focuses on Wireless Networks and Artificial Intelligence (phone: 0416-2202016; fax: 0416-2243092; e-mail: g.sasikumar@vit.ac.in).
Dr. B. K. Tripathy is with VIT University, Vellore, Tamil Nadu, India, in the School of Computing Science and Engineering (phone: 0416-2202016; fax: 0416-2243092; e-mail: tripathy@vit.ac.in).
Most image-processing techniques involve treating the
image as a two-dimensional signal and applying standard
signal-processing techniques to it. Image processing usually
refers to digital image processing, but optical and analog
image processing are also possible.
Digital Image Processing is the use of computer
algorithms to perform image processing on digital images.
As a subfield of digital signal processing, digital image
processing has many advantages over analog image
processing; it allows a much wider range of algorithms to
be applied to the input data, and can avoid problems such as
the build-up of noise and signal distortion during
processing.
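As a small illustration of algorithmic processing of a two-dimensional signal (the synthetic image and cutoff radius below are illustrative assumptions), low-pass filtering via the 2-D FFT attenuates high-frequency noise:

```python
import numpy as np

# Tiny synthetic "image": smooth pattern plus additive noise.
img = np.add.outer(np.sin(np.arange(64) / 4), np.cos(np.arange(64) / 6))
img += 0.3 * np.random.default_rng(0).standard_normal(img.shape)

F = np.fft.fftshift(np.fft.fft2(img))         # spectrum, DC moved to the center
yy, xx = np.mgrid[-32:32, -32:32]
mask = (xx ** 2 + yy ** 2) <= 10 ** 2         # keep only low spatial frequencies
smooth = np.fft.ifft2(np.fft.ifftshift(F * mask)).real

# High-frequency noise is removed, so the filtered image varies less.
print(smooth.std() < img.std())   # True
```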
Microscope Image Processing is a broad term that covers
the use of digital image processing techniques to process,
analyze and present images obtained from a microscope.
Such processing is now commonplace in a number of
diverse fields such as medicine, biological research, cancer
research, drug testing, metallurgy, etc. A number of
manufacturers of microscopes now specifically design in
features that allow the microscopes to interface to an image
processing system.
II. MICROSCOPE IMAGE PROCESSING: IMAGE ACQUISITION
Until the early 1990s, most image acquisition in video
microscopy applications was typically done with an analog
video camera, often simply closed circuit TV cameras.
While this required the use of a frame grabber to digitize
the images, video cameras provided images at full video
frame rate (25-30 frames per second) allowing live video
recording and processing. While the advent of solid state
detectors yielded several advantages, the real-time video
camera was actually superior in many respects.
Today, acquisition is usually done using a CCD camera
mounted in the optical path of the microscope. The camera
may be full colour or monochrome. Very often, very high
resolution cameras are employed to gain as much direct
information as possible. Cryogenic cooling is also common,
to minimise noise. Often digital cameras used for this
application provide pixel intensity data to a resolution of
12-16 bits, much higher than is used in consumer imaging
products. Ironically, in recent years, much effort has been
put into acquiring data at video rates, or higher (25-30
frames per second or higher). What was once easy with off-
the-shelf video cameras now requires special, high speed
electronics to handle the vast digital data bandwidth.
Higher speed acquisition allows dynamic processes to
be observed in real time, or stored for later playback and
analysis. Combined with the high image resolution, this
approach can generate vast quantities of raw data, which
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
can be a challenge to deal with, even with a modern
computer system. It should be observed that while current
CCD detectors allow very high image resolution, often this
involves a trade-off because, for a given chip size, as the
pixel count increases, the pixel size decreases. As the pixels
get smaller, their well depth decreases, reducing the number
of electrons that can be stored. In turn, this results in a
poorer signal to noise ratio.
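The well-depth argument above can be made concrete with a rough shot-noise calculation. In the shot-noise limit the noise on a signal of N electrons is sqrt(N), so the best-case SNR of a pixel filled to capacity is sqrt(full-well depth). The pixel pitches and full-well figures below are illustrative assumptions, not data for any particular sensor:

```python
import math

def max_snr_db(full_well_electrons):
    """Shot-noise-limited SNR of a full pixel: N / sqrt(N) = sqrt(N)."""
    return 20 * math.log10(math.sqrt(full_well_electrons))

# Halving the pixel pitch on the same chip quadruples the pixel count
# but cuts the pixel area, and roughly the well depth, by four.
for pitch_um, full_well in [(10.0, 40000), (5.0, 10000)]:
    print("%4.1f um pixel, %5d e- well: max SNR ~ %.1f dB"
          % (pitch_um, full_well, max_snr_db(full_well)))
```

Quadrupling the pixel count in this hypothetical case costs about 6 dB of best-case SNR per pixel.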
Moreover, one must also consider the temporal
resolution requirements of the application. A lower
resolution detector will often have a significantly higher
acquisition rate, permitting the observation of faster events.
Conversely, if the observed object is motionless, one may
wish to acquire images at the highest possible spatial
resolution without regard to the time required to acquire a
single image.
III.THE COMPUTER AND SOFTWARE
Many computer operating systems are available today.
Operating systems that are more or less compliant with the so called "POSIX" standard are comparatively straightforward to program. Because of this it is much easier to write complex programs for them, and programmers can concentrate on making the programs functional. Examples of
POSIX type operating systems include Solaris, the three
BSD distributions, and the multitude of Linux distributions.
These, and most of the software that run on them, are all
free and can be simply downloaded from the Internet. All of
the programs discussed here are available free by simply
downloading them from the Internet. Some image
processing algorithms require substantial calculation and
thus work best on fast systems such as Athlon64s. There is
somewhat less choice in operating system for 64 bit Athlon
processors. Of the Linux distributions, Fedora seems to
support them best.
IV. IMAGE ACQUISITION DEVICES
Most digital cameras do not have replaceable lenses, so
using them requires attaching the camera somehow to the
microscope ocular. This can produce very good results.
Some cameras work much better than others, and the
variable is not so much the price of the camera, but how
well the camera's optical system links with the microscope
ocular. Of course, the images are almost always round
when this technique is used. Although there are special
oculars available for this purpose, they are rather expensive,
and the results with them are often inferior to using
ordinary oculars. It is not too difficult to fabricate a camera holder for this purpose using machine shop equipment.
The image below shows a stained plant section taken this
way:
Fig 1.Stained Plant Section
Images obtained with traditional silver halide photographic techniques can be converted to digital images using film scanners. These devices are capable of converting both colour slides and colour negatives into digital images. Unfortunately these scanners must be built to very high precision, and consequently, they are all rather expensive. Many people make the serious mistake of using ultra high speed colour negative film for almost every photographic image that they take. The images from this type of film are extremely grainy, and the colour rendition is generally much inferior to that of slower film.
Most USB digital cameras can be mounted as drives to
download their images directly to POSIX compliant
devices. Many dedicated microscope cameras work best
with POSIX operating systems, especially Linux systems.
They often, however, require some sort of kernel module.
Microscope imaging attachments are available that send a continuous stream of images through the USB port; these are usually fairly low resolution, typically 640x480 pixels.
When these cameras are being used, the images can be
captured at any point using the ImageMagick "import"
utility. These cameras are very sensitive to dim light, and
image rate is quite rapid, so that one can capture images of
moving organisms very easily, often producing images that
are surprisingly spectacular. (One particularly inexpensive
one is sold as the "DCM35". A linux kernel module called
spca5xx is available which permits its use with late version
Linux kernels.)
Scanners generally will work well on POSIX systems
when the xsane system has been installed. Most scanners
can simply be plugged into the USB port, and the xsane
program will recognise them.
V. IMAGE PROCESSING SOFTWARE
Digital images can almost always be improved by image
processing techniques. Computers replace darkrooms!
The BSD, Linux, and other free Unix clones have a more
or less standard interface called the POSIX standard, and all
of these operating systems can use the basically free
Xwindow system. X users have an enormous choice of
Xwindow managers that provide dramatically different
capabilities.
There are three free extraordinarily good image
processing software packages for POSIX operating
systems:
Netpbm: This is an enormous set of command line
programs that can be used to interconvert virtually any
common image formats, and can also perform certain image
processing functions. This package has no provision for
viewing images.
Image Magick: This package can read many image
formats, and perform several types of image processing
functions. It has a very useful image display program called
"display." It can read and write formats that have 16
bits/colour. There are at least two separate versions of this
package being maintained at this point.
GIMP: This program has been around for several
years and is very stable. It has many extraordinary
capabilities. Unfortunately it cannot read formats with more
than 8 bits/colour. It takes "plug ins" that can add more
capabilities than it already has. It is somewhat user hostile,
and the documentation is not as great as it could be.
Microscope images differ from images taken of most
other subjects in a very important way, namely, the most
important image characteristic is that it provide the best
possible display of structures on the original subject.
Faithful production of colours and relative visual effect
often are of little concern. After all the original subjects are
not even visible to the naked eye at all, and optical
techniques such as phase contrast and the like have already
dramatically altered the image. For images taken with phase
systems it is often best to use just the green image, or to use
a green light source and monochrome film if using
conventional photography. Images of slides stained with a
single colour dye generally reveal the most detail when the
image is recorded monochrome near the wavelength of
maximum absorption of the dye. For images taken at high
magnification images taken in blue light are noticeably
sharper than those taken in red.
VI. METHODS AND IMPLEMENTATION
The first step after obtaining a digital image, whether from scanning a negative or slide, or from a digital device, is to examine it very carefully to see whether any kind of image processing technique might improve it. If the image has more than 8 bits/colour, you may have to use the ImageMagick display program to do this. One
should look for the following:
1. Should this image be kept as a colour image? (If, for example, this is an image taken with a phase contrast system using green light, it makes no sense to keep it as a coloured image, since only the green channel carries any image anyway.)
2. Is the contrast correct?
3. Is the image too dim or too bright?
4. Is the colour balance correct? Examine both light and dark areas. Unlike conventional photography, one can modify both the contrast and the intensity of the individual primary colours (red, green, and blue).
5. Is the image sharp? Amazingly enough, out-of-focus images can be processed to make them sharp again, but this often introduces artifacts such as halos around objects, and artifacts already in the image tend to get multiplied.
6. Might the image be improved by processing techniques
that make certain features more visible even though this
may make the resulting image visually very different from
the original?
Very often one may find that the netpbm utilities, the
ImageMagick utilities, and Gimp may all three be required
to produce a really first rate image. It is always best to use
gimp LAST, especially when the original image has more
than 8 bits/colour.
It is good practice to store the original images on CDROMs and to store the final images separately. Before trying to modify an image, copy it to a working directory. If it is not in pnm format, it is probably best to convert it to this format first.
When it is apparent that a monochrome image would be better than a colour one, it is best to convert the image to monochrome before doing anything else. Generally it is
best to use just one of the three colour channels. This
requires that you separate the three colour channels that are
in the original image file. The netpbm program
"pamchannel" is supposed to do this, but it produces output
only in pam format, and it does not seem to work properly
with 64 bit linux. Unlike the other netpbm utilities, it is also
needlessly user hostile. We could not find a program to decompose the standard ppm "P6" format into the standard pgm "P5" format that worked the way we wanted, so we wrote one ourselves, pnmdecompose. You can compile it by simply typing "cc -o pnmdecompose pnmdecompose.c"; it does not require any special libraries at all. After it is compiled, you can move the executable to /usr/local/bin. If you simply type "pnmdecompose myfile.pnm" it will produce three grey image files: one red, one green, and one blue. Do not attempt to use this program on an old computer using 64K segments, as it loads the entire image into memory. (The program can handle 16 bits/colour images.)
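For readers without the C source to hand, the decomposition pnmdecompose performs can be sketched in a few lines of Python. This is a simplified reimplementation of the idea, not the program itself: it handles only 8-bit P6 files with a plain, comment-free three-line header (the C original also handles 16 bits/colour):

```python
def decompose_p6(path):
    """Split a binary 'P6' ppm into three 'P5' pgm files (red, green,
    blue). Assumes an 8-bit file with a comment-free three-line header."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P6"
        width, height = f.readline().split()
        maxval = f.readline().strip()
        assert int(maxval) <= 255
        pixels = f.read()
    header = b"P5\n" + width + b" " + height + b"\n" + maxval + b"\n"
    outputs = []
    for i, name in enumerate(("red", "green", "blue")):
        out = path.rsplit(".", 1)[0] + "-" + name + ".pgm"
        with open(out, "wb") as g:
            # In P6 data the channels are interleaved RGBRGB..., so
            # every third byte, starting at offset i, is one channel.
            g.write(header + pixels[i::3])
        outputs.append(out)
    return outputs
```

Calling decompose_p6("myfile.ppm") then yields myfile-red.pgm, myfile-green.pgm and myfile-blue.pgm.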
The image below shows the original image captured with a DCM35 imaging attachment (cheek epithelial cells).
Fig 2 Epithelial Cells
It is a phase contrast image, and there is not much colour
in the image at all. Decomposing it with pnmdecompose
produced the following results, red is to the left, green is in
the centre, and blue is on the right. In many cases the original image will be one with more than 8
bits/colour. The gimp program is NOT capable of handling
these images without first converting them to 8 bits/colour.
This conversion can result in a very substantial loss of
pictorial information that can have very adverse effects on
the final image. For that reason it is best to manipulate the image with the netpbm programs and the ImageMagick utilities first.
Many images have very low contrast. It generally makes
sense to run the netpbm program, "pnmnorm" on them.
(When working toward monochrome images, it is especially important to decompose the colour image into individual colour channels before doing this.) "pnmnorm" will adjust the
image so that the pixel values range from 0 to the pnm
image's specified maximum. Often this image modification
alone makes an otherwise poor image into a spectacular
one. Microscope images tend to benefit from using the
pnmnorm program much more than other types of images.
The image below shows the result of running pnmnorm on
each of the three images from above:
Fig 3 Running Pnmnorm
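The stretch that pnmnorm performs can be sketched as follows. This is a minimal illustration of the idea on a list of grey values (the real tool works on whole pnm files and can also clip a percentage of pixels at each end of the range):

```python
def normalise(pixels, maxval=255):
    """Linear contrast stretch: map the darkest pixel to 0 and the
    brightest to maxval, which is what pnmnorm does in its simplest
    mode."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)   # flat image: nothing to stretch
    return [round((p - lo) * maxval / (hi - lo)) for p in pixels]

# A low-contrast row occupying only the range 100..150 out of 0..255:
print(normalise([100, 110, 125, 150]))  # → [0, 51, 128, 255]
```

After the stretch the pixel values span the full range, which is exactly why a low-contrast micrograph often becomes spectacular after this one step.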
Become familiar with the netpbm utilities. It generally
makes sense to try these first. All of the netpbm programs
are command line utilities, so the exact effect of the
program will only be apparent after the program has been
run and the image is viewed with another program such as
ImageMagick "display". (Many common display programs
are incapable of displaying images with more than 8
bits/colour, but "display" handles them with ease.)
After the possibilities from netpbm have been
exhausted, it is time to try to determine if any of the
modifications that can be made using the ImageMagick
programs are going to be worthwhile. Several of its
commands can dramatically improve many images. The
sharpen, saturation, gamma, and brightness commands are
often all that is required to convert an image into a
spectacular one. It is important to save the image whenever
it looks really great before trying anything else. Although
there is an undo command, it is best to be safe!
ImageMagick has an "emboss" command. This will give the
image a 3D quality. This can dramatically enhance
visibility of otherwise faint features, the result is rather
similar to Nomarski contrast. The ImageMagick emboss
command provides somewhat different results than the one
from gimp. The ImageMagick one is much easier to use,
however.
Gimp has a bewildering array of possibilities, and many
more plug ins are available from the gimp web site. Users
who are familiar with it can rapidly modify images with
spectacular results. It is very easy to click on its buttons and
drag its cursors around to create spectacular effects. It is
also very easy to make dreadful ones too, however!
Gimp has a "layers" menu. A rather different "embossed"
image can be obtained by splitting an image into two layers,
and then running the emboss command on one of the layers.
The program allows different algorithms to be used to
combine the original image with the embossed one and to
adjust the proportion of each image that goes into making
the final image. A rather greater variety of effects is
possible than with the ImageMagick display program's
emboss command.
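The layer technique just described can be sketched in code: duplicate the image, emboss one copy with a small directional convolution kernel, and blend the two layers in a chosen proportion. The kernel and the 0.4 blend weight below are illustrative assumptions, not gimp's exact values:

```python
EMBOSS = [[-2, -1, 0],
          [-1,  1, 1],
          [ 0,  1, 2]]  # directional relief kernel (coefficients sum to 1)

def convolve(img, kernel):
    """3x3 convolution on a greyscale image (list of rows), clamping
    both the border coordinates and the output range to 0..255."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    yy = min(max(y + ky - 1, 0), h - 1)
                    xx = min(max(x + kx - 1, 0), w - 1)
                    acc += kernel[ky][kx] * img[yy][xx]
            out[y][x] = min(max(round(acc), 0), 255)
    return out

def emboss_blend(img, weight=0.4):
    """Blend the embossed layer back into the original; weight 0 keeps
    only the original, weight 1 keeps only the embossed layer."""
    emb = convolve(img, EMBOSS)
    return [[min(max(round((1 - weight) * o + weight * e), 0), 255)
             for o, e in zip(orow, erow)]
            for orow, erow in zip(img, emb)]
```

Varying the weight reproduces the "adjust the proportion of each layer" step; flat regions are unchanged (the kernel sums to 1), while edges acquire the 3D relief.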
The colours command under the layer's menu allows
adjusting the response curves in a complex way for each of
the three primary colours and for the overall intensity. If
gimp accepted more than 8 bits/colour this command would
be more useful than it is. There is a plug-in available for adaptive contrast. Unfortunately the power of this technique is dramatically reduced because of the 8 bit/colour limitation of gimp.
A command line version of this same program with 16 bits/colour capability is also available; it is called pnmace. Unfortunately it is difficult to compile; however, the source package contains a compiled version that will run with some versions of Linux. A few similar programs are also available from Internet sources.
The scanned image below was converted directly from its
original pnm format to jpeg:
Fig 4 Pnmnorm to JPEG
The image below shows the same image after being modified by separating it into two layers, embossing one of the layers, and then combining the two layers.

The next image, of a stained slide of a section of skin tissue, was originally captured using ASA 200 colour negative film in a dedicated microscope camera. The original colour negative was scanned using a Nikon 35mm slide scanner and was directly converted into a positive image. Some "image processing" is done during the setup for the scan process. The scanned image is below:
Fig 5 Scanned image Pnmnorm
The modified image emphasises many fine details. Even with ASA 200 colour negative film the film grain is quite apparent in the modified image. With film much faster than this, the grain becomes unacceptable. ASA 100 film is really best.
The image below was obtained using a Wild 100X phase
contrast objective and the DCM35 640x480 resolution usb
microscope camera. Illumination was supplied by a 3 watt
light emitting diode illuminator. Once again this is an image
of an unstained epithelial cell. At this magnification only
part of the cell could fit into the field of view at a time. As
in all images taken at the limit of visible light resolution,
there is a slight peculiar colour fringing that results from the
red image having substantially lower resolution than the
blue. This image is the unmodified image as it was captured
by the ImageMagick "import" program:
Fig 6 Unmodified Image
Considering the extreme magnification, the above image
is surprisingly good and a lot of cellular detail is evident.
The image below shows the result of taking the above
image and executing the ImageMagick display program's
sharpen command, followed by its "normalise" command.
The resulting image is more striking, and details are more
obvious. Some image artifacts are also enhanced by this
process.
The image was then converted to grey scale, and then
subjected to the "emboss" command. The embossed image
was also saved as a png image. Both of these images were
loaded into gimp. (Unlike ImageMagick display gimp can
display many images at once.) The layer dialogue was
opened on the original image, and the image duplicated as a
second layer. The embossed image was copied into the
layered image, and the top layer was replaced with the
embossed image. The relative contributions of the two
layers was adjusted, and when the image looked right, the
"merge down" command was applied. The result is rather
dramatic! Many fine details are emphasised. Unfortunately
once again close inspection reveals that the visibility of
some artifacts is also enhanced. Still the image is quite
striking.
It makes some sense to store the final processed images
as jpeg images, because they are compact and the small
defects that this format introduces are not usually apparent
even with careful inspection. Many image processing
algorithms, however, make them stand out! That is why the
image should be kept out of the jpeg format until it is in its
final form! However, one should remember that the png
format compresses image data strongly and does not result
in loss of image data like jpeg does. Unfortunately png files
are usually much bigger than jpeg. Consequently, it makes
sense to store large images in jpeg format and small ones in
png.
VII. COMPUTERIZED CCD MICROSCOPY
Computerized CCD microscopy has enabled:
- precise photometric quantification
- introduction of photogeodesic chromosome structure
- determination of mathematical chromosome invariants
- a mathematical definition of similarity
- detection and precise addressing of genetic signals, and more precise gene mapping
- colour composite construction (functional colours), starting with a few linearly independent exposures
- automated chromosome extraction, normalization and comparison
- automatic classification of chromosomes
- automated morphometric measurements
- automatic dot counting
- effective functional microscopic magnification improved by two orders of magnitude
VIII. RESULTS AND DISCUSSION
A very rare syndrome in hematology, characterized by the appearance of an abnormal chromosome (arrowed), had its genetic origin determined by our photomorphology tools (the SURF and ANALYST programs). FISH: original images in the bottom row, B/W processed images in the middle, and colour composites on top (IMAGE COMBINE program).
A few decades ago, image processing was done largely
in the analog domain, chiefly by optical devices. These
optical methods are still essential to applications such as
holography because they are inherently parallel; however, due to the significant increase in computer speed, these techniques are increasingly being replaced by digital image processing methods.
Digital image processing techniques are generally more
versatile, reliable, and accurate; they have the additional
benefit of being easier to implement than their analog
counterparts. Specialized hardware is still used for digital
image processing: computer architectures based on
pipelining have been the most commercially successful.
There are also many massively parallel architectures that
have been developed for the purpose. Today, hardware
solutions are commonly used in video processing systems.
However, commercial image processing tasks are more
commonly done by software running on conventional
personal computers.
IX. CONCLUSION
Microscope image processing is advancing at a brisk pace. Among many image processing technologies, it has proved its merit and stands out from its counterparts. This field deserves more attention from practising technologists so that future innovations can build on the existing technology and produce better results. In this paper, we have reviewed several frequently used image processing techniques in this context, along with the current challenges, recent advances and innovations in this developing area. In future work, building on scientific advances, we aim to address the challenges faced by current methods and make this a promising technology in the near future.
Knowledge-Based Template for Eye Detection
Ms. Vijayalaxmi¹, Mr. S. Sreehari²
Assistant Professors, Department of Electronics & Communication Engineering
Vignan Institute of Technology & Sciences, Hyderabad
E-mail: laxmi214@yahoo.co.in¹, harisree.ssc@gmail.com²
Mobile No. 09505949113
Abstract: This paper presents an efficient approach to achieve
fast and accurate eye detection in still gray level images with
unconstrained background. The template based approach is
used to detect the eyes in images, where the eye template is
created based on iris information from the feature image. The
experimental results demonstrated on the Face Expression database show the improved efficiency of the proposed method without using an SVM [8], which requires a large database. Homomorphic filtering is used to enhance the features in an image and then template matching is applied. This paper is a part of research
work on Development of Non-Intrusive system for real-time
Monitoring and Prediction of Driver Fatigue and drowsiness
project sponsored by Department of Science & Technology,
Govt. of India, New Delhi at Vignan Institute of Technology and
Sciences, Vignan Hills, Hyderabad.
Keywords: Homomorphic filtering, Clustering, Creating
Template, Template matching, Eye detection.
1. INTRODUCTION
Eye detection has become one of the major areas of
research for use in diverse applications including drowsiness
detection for intelligent vehicle systems, video conferencing
and vision assisted user interface [1]. The main difficulty in
locating the eyes from face images is the change of facial
expressions and rotation of head which changes the eye
position.
There are various methods to detect eyes from facial images [10-18]. These methods can be classified into two categories: traditional image based approaches and IR based approaches. The traditional image based approaches are broadly classified into three categories: template based methods, appearance based methods and feature based methods. The IR based approach for eye detection is based on intensity distribution. The traditional image based approaches require a large amount of training data representing different subjects' eyes. In the
proposed algorithm, one of the traditional based approaches
is used for detecting eyes in an input image by creating an
eye template and applying binary template matching.
The rest of the paper is organized as follows. The proposed method is given in Section 2. Experimental results are reported in Section 3 and conclusions are drawn in Section 4.
2. PROPOSED METHOD
The main objective of this paper is to propose an eye detection algorithm, assuming that the face is already extracted. The developed eye detection algorithm is shown below [8]. The basic flow chart of the method is shown in figure 1. Initially the facial image is enhanced and binarised to obtain the structured image. To compensate for illumination
variations and to obtain more image details, a homomorphic
filter is used to enhance the brightness and the contrast of the
images. Then a clustering algorithm is used to separate the
facial features from the skin. Binary images are obtained
through thresholding. Then binary eye template is used to
locate eyes in the image.
Input image → Homomorphic Filtering → Clustering → Thresholding → Create template → Binary template matching → Eye detection
Figure 1: Algorithm for Eye Detection
2.1 Homomorphic Filtering:
Homomorphic filtering [6] is a method to develop a
frequency domain procedure for improving the appearance of
an image by simultaneous gray level range compression and
contrast enhancement. At the same time it also normalizes
the brightness across the image and improves contrast. An
image f(x,y) can be expressed as the product of illumination
and reflectance components as
f(x,y) = i(x,y).r(x,y) (1)
The enhancement approach using the foregoing concepts is summarized in fig. 2. This method is based on a special case of a class of systems known as homomorphic systems.
Figure 2: Process of Homomorphic filter: f(x,y) → ln → FFT → H(u,v) → IFFT → exp → g(x,y)

In this particular application, the key to the approach is the separation of the illumination and reflectance components, which is done by applying the Fourier transform. The homomorphic filter function H(u,v) can then operate on these components separately. The illumination component of an image generally is characterized by slow spatial variation, while the reflectance component tends to vary abruptly, particularly at the junctions of dissimilar objects. These characteristics lead to associating the low frequencies of the image with illumination and the high frequencies with reflectance. To obtain the enhanced image, the inverse Fourier transform and the exponential transform are applied. The H(u,v) used in this paper has the following form:

H(u,v) = (H_H - H_L) . (1 - exp(-C.D^2(u,v)/D_0^2)) + H_L    (2)

where D(u,v) is the distance between the point (u,v) and the origin in the frequency domain, D_0 is the threshold, and C is a sharpening parameter. If the parameters H_H and H_L are chosen such that H_H > 1 and H_L < 1, then the filter H(u,v) will decrease the contribution of the low frequencies (illumination) and amplify the contribution of the mid and high frequencies (reflectance). As shown in figure 3(a), the input image has low contrast due to illumination, so segmentation results are unlikely to be good. Figure 3(b) demonstrates the image enhanced by the homomorphic filter: the contrast is improved and the details in the face region are enhanced.

Figure 3: Example. (A) Original Image (B) Enhanced Image (C) Binarised Image (D) Structured Image

2.2 Clustering and Thresholding:
After image enhancement through homomorphic filtering, the image is divided into the features of interest, the skin, and the background by clustering the grey level image into three clusters with the K-means clustering algorithm. The low grey level pixels representing the background are set to 255, the intermediate pixels representing the skin are set to 128, and the high grey level pixels representing both the features and other dark pixels of the image (for example the hair and the beard) are set to 0.

Thresholding is applied to the clustered image obtained by assigning three different sets of values to the enhanced image. In the proposed algorithm a threshold value of 128 is used. A binary image is then obtained, which contains only the facial features.

In order to improve the speed of template matching, a labelling process is applied to remove the large black areas from the binarised image. The final structured image is obtained, as shown in Figure 3(d).
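The preprocessing chain of Sections 2.1 and 2.2 can be sketched with NumPy as below. This is a simplified illustration, not the authors' implementation; the filter parameters (H_H = 2.0, H_L = 0.5, C = 1.0, D_0 = 30) are assumed values chosen only to satisfy H_H > 1 and H_L < 1:

```python
import numpy as np

def homomorphic(img, hh=2.0, hl=0.5, c=1.0, d0=30.0):
    """g = exp(IFFT(H . FFT(ln f))) with H(u,v) from equation (2);
    log1p/expm1 are used instead of ln/exp to avoid log(0)."""
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2            # D^2(u,v), centred
    h = (hh - hl) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + hl
    spec = np.fft.fftshift(np.fft.fft2(np.log1p(img.astype(float))))
    out = np.expm1(np.fft.ifft2(np.fft.ifftshift(h * spec)).real)
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)
    return (255 * out).astype(np.uint8)

def cluster_and_label(img, iters=20):
    """1-D K-means (k = 3) on grey levels, then relabel as in the paper:
    background (low grey) -> 255, skin (mid) -> 128, features -> 0."""
    data = img.astype(float).ravel()
    centres = np.percentile(data, [10, 50, 90])       # initial guesses
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centres[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centres[k] = data[labels == k].mean()
    labels = np.argmin(np.abs(data[:, None] - centres[None, :]), axis=1)
    values = np.empty(3)
    values[np.argsort(centres)] = [255, 128, 0]       # dark, mid, bright
    return values[labels].reshape(img.shape)
```

Thresholding the relabelled image at 128 then leaves only the facial features, as in Section 2.2.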
2.3 Creation of Eye Templates:
After clustering and thresholding, the structured image contains the structure of the eyes, from which the eyes are to be detected. Template matching can be used to detect the eyes. Before applying template matching, the difficult step is to create a template. It is easy to find eye templates, which can be obtained from a real face image, but such a template cannot be used directly for matching, because the size of the eye in the template is not the same as that in the input image. A simple solution to this problem is to create the template from the structured image with a fixed size. The eye consists of the eyebrows and the iris. The templates are created from the structured image based on a group of continuous black pixels indicating the eyebrows or iris of the person; we crop that area to act as a template. Concerning our algorithm, in order to improve efficiency, we divided the templates into two parts: a left eye template and a right eye template. Figure 4 shows eye templates created from the structured image at a fixed size. After creating the templates, template matching is applied to detect the eyes.
(A) (B)
Figure 4: Eye Templates (A) Left Eye (B) Right Eye
2.4 Binary Template Matching &Eye Detection:
Template matching is a method for finding a part of an original image using a template. The process of template matching moves the template point by point over the original image. The input images are compared with the created templates to evaluate the similarity of the counterpart using normalised cross-correlation. The problem with simple template matching is that it cannot deal with eye variations in scale, expression, rotation and illumination. Creating the template from the structured image was somewhat helpful in solving this problem. In order to determine the set of rows which contains the eyes, we apply binary template matching to the structured image in order to search for possible eye pairs [7]. Different templates of the same size have been used for different images. Eyes will be detected whether they are open or closed.
Figure 5: Eye Detected Image
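Binary template matching with normalised cross-correlation (NCC) can be sketched as follows. This is a toy illustration of the matching step, not the authors' code; the image and template sizes are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-sized patches;
    0 is returned for a constant patch, where NCC is undefined."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_template(image, template):
    """Slide the template over the image and return the (row, col) of
    the window with the highest NCC score."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = ncc(image[y:y+th, x:x+tw].astype(float),
                        template.astype(float))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Toy example: plant a small dark feature (e.g. an eyebrow/iris row)
# in a white binary image and recover its position.
img = np.full((8, 8), 255, dtype=np.uint8)
img[3, 4:6] = 0
tpl = np.array([[0, 0], [255, 255]], dtype=np.uint8)
print(match_template(img, tpl))  # → (3, 4)
```

In practice the left and right eye templates of Section 2.3 would each be matched this way, and candidate rows with high scores would be kept as possible eye pairs.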
3. EXPERIMENTAL RESULTS
The proposed method was tested on the Face Expression database [9]. The Face Expression database consists of 75 images (64X64 pixels, grey levels) of the same person with different facial expressions. The eye pair candidates were selected successfully in most of the cases, no matter which expression the face pattern shows. In different expressions the degree of eye openness differs, and in a few images the eyes are almost closed, causing errors in the result, as shown in figure 7. It is an essential and basic requirement that the eyes be open for detection. The eye location rate is 96%
in Face Expression database. The successful results of eye
detection with the proposed algorithm are demonstrated in
Figure 6.
Figure 6 . Experimental Results
The limitation of eye detection for Face expression
database images using proposed algorithm are shown in
figure 7.
Figure 7: Limitations
4. CONCLUSION & FUTURE WORK
The proposed algorithm is an efficient method for detecting eyes in still gray level images with unconstrained background. The eye template is used as a robust cue to detect eyes in an input image from the structured image. To obtain the structured image, preprocessing is applied to the input images. Homomorphic filtering is applied to enhance facial images with poor contrast, which makes the clustering process more effective. Eye pair candidates are extracted by using a binary template matching technique. However, this method will not influence the detection of closed eyes. The proposed method cannot deal with moderate rotations, glasses wearing and partial face occlusions. The proposed algorithm is a part of ongoing research; in future work, more information will be combined to detect eye states more efficiently, along with detection of drowsiness.
References
1. P. Wang and Q. Ji, "Multi-view Face and Eye Detection Using Discriminant Features," Computer Vision and Image Understanding, vol. 105, no. 2, 2007, pp. 99-111.
2. J. Deng and F. Lai, "Region-Based Template Deformation and Masking for Eye-feature Extraction and Description," Pattern Recognition, 1997, 30(3):403-419.
3. Y. Tian, T. Kanade, and J. Cohn, "Dual-State Parametric Eye Tracking," International Conference on Face and Gesture Recognition, 1999, pp. 110-115.
4. S. Bernogger, L. Yin, A. Basu and A. Pinz, "Eye Tracking and Animation for MPEG-4 Coding," in Proc. IEEE Conf. Pattern Recognition, 1998, 2:1281-1284.
5. H. Liu, Y. W. Wu, and H. B. Zha, "Eye States Detection from Color Facial Image Sequence," in Proceedings of SPIE 4875, 693 (2002).
6. A. V. Oppenheim, R. W. Schafer, and T. G. Stockham Jr., "Non-linear filtering of multiplied and convolved signals," Proc. IEEE, vol. 56, no. 8, pp. 1264-1291, 1968.
7. P. Campadelli and R. Lanzarotti, "Localization of Facial Features and Fiducial Points," in Proceedings of the International Conference on Visualization, Imaging and Image Processing, Malaga (Spain), 2002, pp. 491-495.
8. Qiong Wang and Jingyu Yang, "Eye detection in facial images with unconstrained background," Journal of Pattern Recognition Research 1 (2006) 55-62.
9. http://chenlab.ece.cornell.edu.
10. A. Yuille, P. Hallinan, D. Cohen, "Feature extraction from faces using deformable templates," International Journal of Computer Vision 8 (2) (1992) 99-111.
11. X. Xie, R. Sudhakar, H. Zhuang, "On improving eye feature extraction using deformable templates," Pattern Recognition 27 (1994) 791-799.
12. K. Lam, H. Yan, "Locating and extracting the eye in human face images," Pattern Recognition 29 (1996) 771-779.
13. L. Zhang, "Estimation of eye and mouth corner point positions in a knowledge-based coding system," in Proc. SPIE Vol. 2952, Berlin, Germany, 1996.
14. M. Kampmann, L. Zhang, "Estimation of eye, eyebrow and nose features in videophone sequences," in International Workshop on Very Low Bitrate Video Coding (VLBV 98), Urbana, USA, 1998.
15. G. C. Feng, P. C. Yuen, "Variance projection function and its application to eye detection for human face recognition," International Journal of Computer Vision 19 (1998) 899-906.
16. G. C. Feng, P. C. Yuen, "Multi-cues eye detection on gray intensity image," Pattern Recognition 34 (2001) 1033-1046.
17. M. Nixon, "Eye spacing measurement for facial recognition," in Proceedings of the Society of Photo-Optical Instrumentation Engineers, 1985.
18. P. W. Hallinan, "Recognizing human eyes," in SPIE Proceedings, Vol. 1570: Geometric Methods in Computer Vision, 1991, pp. 212-226.
Digital Image Processing: Windfall in Cancer
Diagnosis
Sangeeta Mangesh
Assistant Professor, Department of Electronics & Communication Engineering
Narula Institute of Technology, 81, Nilgunj Road, Agarpara, Kolkata (W.B.) 700109 (sangeetamangesh@gmail.com)
Abstract. Cervical intra-epithelial neoplasia (CIN) is
a pre-cancerous condition. Detection of CIN helps in
early treatment of cervical cancer; it is therefore
important to diagnose and treat it as early as possible.
This paper highlights the various techniques of image
acquisition, segmentation, compression and
registration that have been used in the medical field
for the diagnosis of CIN. This work focuses on image
analysis techniques that can fetch comparative results
so that early detection of CIN is possible in order to
prevent cancer.
I. INTRODUCTION
Cervical cancer is the second most common cancer in
females worldwide, with nearly half a million new cases
and over 270,000 deaths annually. Image analysis of
cervical imagery can be used in cervical cancer screening
and diagnosis, with the potential of having a direct impact
on the improvement of women's health care and
associated cost reduction [3].
Lesion characteristics such as margin, contour, colour,
opacity, blood vessel calibre, inter-capillary spacing, and
capillary distribution help in clinical diagnosis. Early
detection of CIN aids in prevention and treatment of
cervical cancer [2]. As invasive disease is preceded by
pre-malignant cervical intraepithelial neoplasia (CIN),
cervical cancer can be universally prevented if CIN is
detected early and treated adequately [2][3]. CIN is
categorized histologically as grades 1, 2, and 3, depending
on the severity of the lesions [3].
CAD diagnostic systems and image processing tools
can aid in clinical diagnosis. Quantitative measurement
and analysis of images captured during colposcopy can
provide a diagnostic and prognostic tool for gynecologists
and also reduce the need for biopsies [7].
Diagnostic image processing is an invaluable tool in the
field of medicine today. Magnetic resonance imaging
(MRI), computed tomography (CT), digital mammography
and other imaging modalities provide an effective means
for non-invasively mapping the anatomical abnormalities
of a subject. These technologies have greatly increased
knowledge of normal and diseased anatomy for medical
research and are a critical component in diagnosis and
planning of treatments.
This paper recommends a complete image processing
system with a proposed comparative image analysis for
diagnosis of CIN in cervical cancer.
Fig. 1. Structural model of cervical epithelium in normal
and CIN conditions
II. DIGITAL IMAGE PROCESSING SYSTEM
The system comprises the following stages: Image
Acquisition, Image Registration, Feature Extraction,
Image Analysis, Comparative Analysis and Knowledge
Extraction.
Fig. 2. Model of the Digital Image Processing System
A. Image Acquisition and Preprocessing
Colposcopy is a well established diagnostic method to
detect cancerous and pre-cancerous tissues.
During examination, acetic acid (3-5%) is applied on the
cervix inducing color and textural changes in abnormal
and metaplastic epithelium. Maximum contrast is achieved
in 60 seconds. Later the cervix returns to its original color
in three to five minutes. These color changes are observed
through a low magnification microscope (colposcope).
The Colposcopic findings are confirmed by biopsy studies
[1][4].
A multispectral digital colposcope (MDC) aids in
acquiring images of the entire cervix under white-light
illumination. Its video-rate colour CCD camera and frame
grabber acquire the images, and the colposcope produces a
magnified stereoscopic image. Reflectance images are
captured using an inexpensive, commercially available,
video-rate colour CCD camera.
Generally, 60 images (640x480) are captured in the first
minute of the acquisition stage as a baseline reference
(1 frame/second). Following acetic acid application, 540
images are taken in 9 minutes using the same sampling
frequency. Each image is then saved independently [4].
B. Image Registration:
Registration is the preliminary stage in image processing,
carried out to analyse the sequence of acquired images,
i.e. to compare and evaluate corresponding structures: the
objects in the images are brought into the same position
by removing the differences in the series of colposcope
images (taken at different time intervals). A normal
reference image is used as a standard.
A spatial transformation (also known as a geometric
operation) modifies the spatial relationship between pixels
in an image, mapping pixel locations in an input image to
new locations in an output image. These include resizing,
rotating, cropping, and 2-D, 3-D or N-D transformations.
Transformations can be applied pointwise, linearly or
piecewise-linearly.
In this system a point transformation is applied with
image resizing. The control point pairs are manually
selected and placed to check the epithelial lining feature.
Fig. 3: Normal cell image after point transformations
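As an illustration of the control-point registration described above, a least-squares affine fit is one simple way to realize such a point transformation. The function names below are hypothetical helpers, not part of the paper's system:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform mapping src control points to dst.

    Each point pair contributes one row of the design matrix [x, y, 1];
    the returned 3x2 matrix T satisfies [x, y, 1] @ T ~= [x', y']."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ T ~= dst
    return T

def apply_affine(pts, T):
    """Map a set of (x, y) points through the fitted transform."""
    pts = np.asarray(pts, dtype=np.float64)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ T
```

With at least three non-collinear manually selected control-point pairs, the fit is exact; more pairs give a least-squares registration of the colposcope image sequence onto the reference image.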
C. Feature Extraction and image segmentation
After the registration process, the signal-to-noise ratio is
increased using a spatial low-pass filter implemented with
a 3x3 kernel window. The intensity value of each pixel
over time is used to construct a time series, from which
the important parameter, the rate of change, can
be estimated [3].
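The low-pass filtering and per-pixel time series described above can be sketched as follows. Estimating the rate of change as the least-squares slope of each pixel's time series is an illustrative assumption, since the paper does not give its estimator:

```python
import numpy as np

def lowpass3x3(img):
    """3x3 mean (box) filter via an edge-padded neighbourhood sum."""
    img = np.asarray(img, dtype=np.float64)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out / 9.0

def rate_of_change(frames):
    """Per-pixel least-squares slope of the smoothed intensity time series."""
    stack = np.stack([lowpass3x3(f) for f in frames])   # shape (T, H, W)
    t = np.arange(stack.shape[0], dtype=np.float64)
    t -= t.mean()                                       # centre the time axis
    # slope = sum(t_c * y_c) / sum(t_c^2), computed for every pixel at once
    return np.tensordot(t, stack - stack.mean(axis=0), axes=(0, 0)) / (t ** 2).sum()
```

The resulting slope map highlights pixels whose reflectance changes quickly after acetic acid application, which is the diagnostically relevant behaviour.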
Formally, image segmentation is defined as the
partitioning of an image into non-overlapping, constituent
regions that are homogeneous with respect to some
characteristic such as intensity or texture [6].
Various segmentation methods include thresholding
approaches, region growing approaches, classifiers,
clustering approaches, Markov random field models,
artificial neural networks, deformable models, and
atlas-guided approaches [5].
At this stage diagnostically important features include
inter-capillary distance, coarseness of vessels and
regularity of patterns.
The processing and comparative analysis based on these
features has yielded several results:
Inter-capillary distance - the distance between
vessels, or the space encompassed by the mosaic
vessels. In normal epithelium, the maximum inter-capillary
distance of the hairpin and network
capillaries varies, but it ranges from approximately
50 to 200 µm with an average of about 100 µm. On
the other hand, the maximum inter-capillary
distance increases as the lesion becomes more
severe: in CIN 1 the average inter-capillary
distance may be 200 µm, whereas in CIN 3 the
greatest inter-capillary distance is often 450 to
500 µm [2].
Grading of observed vascular patterns - in mosaic
and punctuation patterns, the inter-capillary
distance increases as the degree of histological
abnormality in the epithelium becomes greater.
Mosaic and punctuation may additionally be
graded on the basis of coarseness of vessels and
regularity of their pattern: the more severe the
underlying lesion, the coarser the vessels are likely
to be and the more irregular the pattern [2].
Regularity of patterns - the distinctive vascular
patterns associated with abnormal epithelium are
mosaic and punctuation. The punctuation pattern is
easily recognized, being characterized by dilated,
elongated, often twisted and irregularly terminating
vessels of the hairpin type, arranged in a prominent
punctuate configuration. Its essential colposcopic
appearance is a series of fine red dots in a whitish
background [2].
D. Image analysis
Texture analysis refers to the characterization of regions in
an image by their texture content. Texture analysis
attempts to quantify intuitive qualities described by terms
such as rough, smooth, silky, or bumpy as a function of
the spatial variation in pixel intensities. In this sense,
roughness or bumpiness refers to variations in the
intensity values, or gray levels. The analysis is done using
all the available filters in MATLAB, such as range (to find
variations in the gray levels), the standard deviation of the
gray levels, and the entropy of the grayscale images under
the various stages of CIN.
The GLCM (Gray Level Co-occurrence Matrix) technique
has yielded the results shown below. Forty coordinate
points, listed on the X-axis, are considered, with the
gray-level analysis on the Y-axis.
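A minimal NumPy sketch of the GLCM computation and one derived texture feature (homogeneity, also called the inverse difference moment). The quantization to 8 gray levels and the one-pixel horizontal offset are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Normalised gray-level co-occurrence matrix for a non-negative
    (dy, dx) pixel offset, after quantizing 0-255 values to `levels` bins."""
    q = np.clip((np.asarray(img).astype(int) * levels) // 256, 0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                             # offset neighbours
    M = np.zeros((levels, levels))
    np.add.at(M, (a.ravel(), b.ravel()), 1)     # count co-occurring pairs
    return M / M.sum()

def homogeneity(P):
    """Inverse-difference-moment feature in [0, 1]; 1 for uniform texture."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

Features such as homogeneity, contrast and entropy computed this way per image are what produce the per-point curves compared across the normal and CIN grades.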
Fig. 4: Image Texture Analysis Results (GLCM curves over the 40 coordinate points for normal cells, CIN I, CIN II and CIN III)
The texture analysis results clearly indicate absolute
differences in the three CIN cases that can help in diagnosis
at an early stage.
The complete image processing system with a stored image
database is updated with every new testing sample. This
can further enhance the comparative analysis to attain
more precision in diagnosis. With the incorporation of
more image processing tools, accuracy can be
maximized.
REFERENCES
[1] Acosta-Mesa Héctor G., Zitová Barbara, Ríos-Figueroa Homero V., Cruz-Ramírez Nicandro, Marín-Hernández Antonio, Hernández-Jiménez Rodolfo, Cocotle-Ronzón Bertha E., Hernández-Galicia Efraín, "Cervical Cancer Detection Using Colposcopic Images: a Temporal Approach," Proceedings of the Sixth Mexican International Conference on Computer Science (ENC'05), IEEE, 2005.
[2] Bernadetta Kwintiana Ane, Maruli Pandjaitan, Winfried Steinberg, Jeremiah Suryatenggara, "Pattern Recognition on 2D Cervical Cytological Digital Images for Early Detection of Cervix Cancer," 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 257-262.
[3] Costas Balas, "A Novel Optical Imaging Method for the Early Detection, Quantitative Grading, and Mapping of Cancerous and Precancerous Lesions of Cervix," IEEE Transactions on Biomedical Engineering, vol. 48, no. 1, January 2001, pp. 96-104.
[4] Sun Young Park, Michele Follen, Andrea Milbourne, Helen Rhodes, Anais Malpica, Nick MacKinnon, Calum MacAulay, "Automated image analysis of digital colposcopy for the detection of cervical neoplasia," Journal of Biomedical Optics 13(1), 014029, January/February 2008.
[5] Qiang Ji, John Engel, and Eric Craine, "Texture Analysis for Classification of Cervix Lesions," IEEE Transactions on Medical Imaging, vol. 19, no. 11, November 2000, pp. 1144-1149.
[6] Yinhai Wang, Danny Crookes, Osama Sharaf Eldin, Shilan Wang, Peter Hamilton, and Jim Diamond, "Assisted Diagnosis of Cervical Intraepithelial Neoplasia (CIN)," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 1, February 2009, pp. 112-121.
[7] Wenjing Li, Sankar Venkataraman, Sun-Young Park, Ulf Gustafsson, and Gregory D. Hager, "Computer Aided Diagnosis Algorithms for Cervical Cancer Digital Imagery," Computer Society Open Access Journal.
FUZZY LOGIC BASED IMPULSE NOISE REMOVAL FILTER
Reena Tyagi, RKGEC, Pilakhuwa (Ghaziabad), reenatyagi94@gmail.com
Amrita Sharma, RKGEC, Pilakhuwa (Ghaziabad), er.amrita@gmail.com
ABSTRACT
In digital image processing, removing impulse noise
is a very active research area. Present-day applications
require various kinds of images and pictures as
sources of information for interpretation and analysis.
When an image is converted from one form to
another, degradation occurs at the output; the
output image then has to undergo a process called
image enhancement for improvement.
This paper deals with a Fuzzy Inference System
(FIS), which helps to take decisions about the
pixels of the image under consideration. The paper
focuses on removing impulse noise while preserving
edge sharpness and image details and improving the
contrast of the image, which is considered one of the
most difficult tasks in image processing. The fuzzy
noise removal filter proposed in the paper is based on
fuzzy logic for removing impulse noise from the
affected image.
Keywords: DIP, impulse noise, FIP, Fuzzy Noise
filtering, FIS, Fuzzy noise removal filter
1. INTRODUCTION
Images are the most common and convenient means
of conveying or transmitting information. Images
portray spatial information that we can recognize as
objects. An image processing method includes multi-
resolution decomposition to decompose an input
image into frequency-band images, which are
subsequently filtered according to an order statistics
filtering. In digital image processing, a degraded
image is typically obtained by sampling an analog
signal from the original scene in a two-dimensional
discrete space [l]. A point in this discrete space is
called a pixel and the process is called digitization or
discretisation of the original scene.
The images are digitized in amplitude and spatially
during digitization process. We then quantized
amplitude of image in 256 integer levels, which can
be represented by eight bits, with 0 corresponding to
black and 255 to white. These values are called
intensity values. Image processing operation [2] can
be done in three steps-Image compression, Image
Enhancement and Restoration and Measurement
Extraction. Image compression involves in reducing
the amount of memory needed to store a digital
image. Image restoration is the process of taking an
image with some known, or estimated, degradation,
and restoring it to its original appearance. Image
restoration is often used in the field of photography
or publication where an image was somehow
degraded, but need to be improved before it can be
printed. Image enhancement is improving an image
visually. The main advantage of IE is in the removal
of noise in the images. Removing or reducing noise
in the images is very active research area in the field
of DIP.
2. NOISE IN IMAGES
A random variation of brightness or colour
information in images, introduced by the circuitry of a
scanner or digital camera, is known as image noise.
Digital images are affected by impulse noise (such as
salt-and-pepper noise) during transmission and
acquisition. An image containing salt-and-pepper
noise will have dark pixels in bright regions and
bright pixels in dark regions. This type of noise can
be caused by dead pixels, analog-to-digital converter
errors, bit errors in transmission, etc.
Normally, filters are used to remove noise from
images. Filters are classified into two types:
1. Linear filters
2. Non-linear filters
Linear filters include convolution, derivative filters,
steerable filters, edge detection and the Wiener filter;
non-linear filtering uses the median filter and
dithering.
Linear filters tend to blur sharp edges, destroy
lines and other fine image details, and perform poorly
in the presence of signal-dependent noise. With non-
linear filters, the noise is removed without any
attempts to explicitly identify it. The median filter
is one of the most popular non-linear filters for
removing salt-and-pepper noise: the noise is removed
by replacing the window's centre value with the
median value of the centre's neighbourhood [3].
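The 3x3 median filter just described can be sketched in a few lines of NumPy (edge padding at the borders is an implementation assumption):

```python
import numpy as np

def median_filter3x3(img):
    """Replace each pixel by the median of its 3x3 neighbourhood."""
    img = np.asarray(img, dtype=np.float64)
    p = np.pad(img, 1, mode='edge')       # replicate borders
    h, w = img.shape
    # Stack the nine shifted views of the image, one per window position
    windows = np.stack([p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)])
    return np.median(windows, axis=0)     # median across the window axis
```

Because the median ignores outliers, a single corrupted value in an otherwise smooth window is replaced by a representative neighbour rather than being averaged into the output.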
3. NOISE REMOVAL USING FUZZY LOGIC
In the classical image processing method, removal of
noise is done by averaging an image using
neighbourhood averaging. Fuzzy image processing is
the collection of all approaches that understand,
represent and process images, their segments and
their features as fuzzy sets. Fuzzy noise filtering can be
viewed as replacing every pixel in the image with a
new value depending on fuzzy-based rules.
Ideally, the filtering algorithm should vary from pixel
to pixel based on the local context.
The proposed fuzzy filter for noise reduction deals
with impulse noise, for which the median filter is the
classical remedy.
There are two phases for noise reduction in this
process:
1. Noise detection
2. Noise reduction
Almost all noise reduction algorithms are executed in
two steps: i) detect the corrupted pixels, and ii) correct
those pixels by replacing them with filter-estimated
values. In fuzzy-based filtering we use fuzzy
membership functions determined by a fuzzy set
construction algorithm. The membership function is
then used to remove impulse noise from a digital gray-
scale image.
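The two-step detect-then-correct scheme can be illustrated with a simple switching median filter. Flagging only the extreme gray levels 0 and 255 as impulses is an assumption for illustration, not the paper's fuzzy detection rule:

```python
import numpy as np

def switching_median(img, low=0, high=255):
    """Two-step impulse removal: flag extreme-valued pixels, then replace
    only the flagged pixels with the median of their 3x3 neighbourhood."""
    img = np.asarray(img, dtype=np.float64)
    noisy = (img <= low) | (img >= high)          # step 1: detection
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    med = np.median(np.stack([p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                              for di in (-1, 0, 1) for dj in (-1, 0, 1)]),
                    axis=0)
    out = img.copy()
    out[noisy] = med[noisy]                       # step 2: correction
    return out
```

Unlike a plain median filter, uncorrupted pixels pass through untouched, which is exactly why the detect-then-correct structure preserves fine image detail.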
4. PROPOSED WORK
A faster and efficient method for removing noise
includes the following steps:
1. Remove the noise from the test image.
2. Leave the pixels without noise unchanged.
3. Preserve the edges.
4. Improve the contrast.
We studied several fuzzy and non-fuzzy based filters
[4][5][6][7] for impulse noise reduction. Impulse
noise (salt-and-pepper noise, shot noise or spike
noise) is typically caused by malfunctioning pixel
elements in the camera sensors, faulty memory
locations, or timing errors in the digitization process.
The main difficulty in removing the noise is to
preserve the edges. We introduce a filter based on
the concepts of image enhancement and fuzzy logic;
the test image is corrupted with salt-and-pepper
impulse noise.
The work is completed in two phases. Noise is
removed in the first phase and contrast is improved in
the second phase. After this we get a noise-free,
high-contrast image.
Phase 1: Noise removal
For the construction of the algorithm we take a gray
image. Let the grayscale image be represented by a
matrix M of size S1 x S2, M = {M(i, j) ∈ {0, . . . , 255},
i = 1, 2, . . . , S1, j = 1, 2, . . . , S2}. Our construction
starts with the introduction of the similarity function
μ : [0, ∞) → R. We will need the following
assumptions for μ:
(1) μ is decreasing in [0, ∞),
(2) μ is convex in [0, ∞),
(3) μ(0) = 1, μ(∞) = 0.
For each pixel (i, j) of the image we use a 3x3
window W and compute the gradient value for each
pixel position. The notation for each pixel in the
window is shown below:
NW N NE      D1 A1 D2
W  P  E      A3 P  A4
SW S  SE     D3 A2 D4
  (A)           (B)
Here P represents the main pixel value; D1, D2, D3
and D4 are the diagonal pixels, and A1, A2, A3
and A4 are the adjacent pixels.
Table 1: Basic and related gradient values for each direction
Direction R | Basic gradient | Related gradients
D1          | ∇D1            | ∇'D1, ∇''D1
A1          | ∇A1            | ∇'A1, ∇''A1
D2          | ∇D2            | ∇'D2, ∇''D2
A4          | ∇A4            | ∇'A4, ∇''A4
D4          | ∇D4            | ∇'D4, ∇''D4
A2          | ∇A2            | ∇'A2, ∇''A2
D3          | ∇D3            | ∇'D3, ∇''D3
A3          | ∇A3            | ∇'A3, ∇''A3
The two related gradient values for each pixel in each
direction are given in Table 1. Each neighbour with
respect to (i, j) corresponds to one direction: {D1 =
North West, A1 = North, D2 = North East, A3 = West,
A4 = East, D3 = South West, A2 = South, D4 =
South East}. Each such direction with respect to (i, j)
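Assuming the basic gradient for a direction R is the difference between the neighbour in that direction and the central pixel P (the table is not explicit about the formula), the eight basic gradients can be sketched as:

```python
import numpy as np

# Row/column offsets for the eight directions in the paper's notation:
# diagonals D1..D4 and adjacent neighbours A1..A4 around the centre P.
OFFSETS = {'D1': (-1, -1), 'A1': (-1, 0), 'D2': (-1, 1),
           'A3': (0, -1),                 'A4': (0, 1),
           'D3': (1, -1),  'A2': (1, 0),  'D4': (1, 1)}

def basic_gradients(img):
    """Basic gradient per direction (neighbour value minus centre value),
    computed at every interior pixel of the image."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    return {name: img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj] - centre
            for name, (di, dj) in OFFSETS.items()}
```

The related gradients of Table 1 would then be the same differences evaluated at the two neighbouring positions perpendicular to each direction, feeding the fuzzy rule that follows.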
can also be linked to a certain position, and the
direction R corresponds to the central position.
Column 2 of Table 1 gives the basic gradient for each
direction; column 3 gives the two related gradients.
The fuzzy gradient value for direction R is calculated
by the following fuzzy rule:
IF Mag(∇R) is large AND Mag(∇'R) is small
OR Mag(∇R) is large AND Mag(∇''R) is small
OR Mag(∇R) is big positive AND Mag(∇'R) and Mag(∇''R) are big negative
OR Mag(∇R) is big negative AND Mag(∇'R) and Mag(∇''R) are big positive
THEN the fuzzy gradient value ∇F R is large.
Fig 1: Original Image
The central pixel in the window W is then replaced by
the value that maximizes the sum of similarities with
all its neighbours; the new pixel value is taken from
the window W.
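A sketch of this replacement step, with exp(-d/h) chosen as one similarity function μ satisfying the stated assumptions (decreasing, convex, μ(0) = 1, μ(∞) = 0); the choice of μ and of the scale h are assumptions, and the scan over the image is omitted:

```python
import numpy as np

def mu(d, h=30.0):
    """A similarity function meeting the paper's assumptions:
    decreasing and convex on [0, inf), mu(0) = 1, mu(inf) = 0."""
    return np.exp(-d / h)

def best_replacement(window):
    """Return the window value that maximizes the summed similarity to
    all window members; used to replace a detected noisy centre pixel."""
    w = np.asarray(window, dtype=np.float64).ravel()
    # Pairwise similarity matrix; row sums score each candidate value
    scores = mu(np.abs(w[:, None] - w[None, :])).sum(axis=1)
    return w[np.argmax(scores)]
```

An impulse value is dissimilar to everything around it, so its similarity score is low and a value from the dominant local population wins instead.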
Phase 2: Image contrast improvement
After removing the noise, the image is operated on
again for contrast improvement, which is the
second phase of the algorithm.
The following steps are followed for contrast
improvement:
1. According to the actual image, set the shape of
the membership function.
2. Set the value of the fuzzifier Beta.
3. Calculate the membership values.
4. Modify the membership values.
5. Generate new gray levels.
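One plausible reading of these five steps, sketched in Python: a min-max membership shaped by the fuzzifier Beta, the standard INT intensification operator as the modification step, and defuzzification back to gray levels. The membership shape and the role assigned to Beta are assumptions, since the paper does not give its exact functions:

```python
import numpy as np

def fuzzy_contrast(img, beta=2.0, xmin=0.0, xmax=255.0):
    """Illustrative fuzzy contrast enhancement:
    steps 1-3: membership from gray level, shaped by fuzzifier beta;
    step 4:    INT intensification pushes memberships away from 0.5;
    step 5:    defuzzify back to the gray-level range."""
    img = np.asarray(img, dtype=np.float64)
    m = ((img - xmin) / (xmax - xmin)) ** (1.0 / beta)
    m = np.where(m <= 0.5, 2.0 * m ** 2, 1.0 - 2.0 * (1.0 - m) ** 2)
    return xmin + (xmax - xmin) * m ** beta
```

The INT operator darkens memberships below 0.5 and brightens those above it, which is what stretches the contrast of the denoised image.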
5. RESULT
Shown below are the results for the test image, which
was initially corrupted by adding 50% salt-and-pepper
noise; the fuzzy noise filter was then applied to it to
remove the noise.
Fig 2: Image with 50% salt and pepper noise.
Fig 3: Image after applying the fuzzy filter.
6. CONCLUSION
This paper removes salt-and-pepper noise from
corrupted images using a fuzzy filter. The fuzzy filter
produces the desired results, removing the added
noise completely and producing a clear image.
7. REFERENCES
[1] J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice-Hall, Englewood Cliffs, NJ (1990).
[2] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Ed., Prentice-Hall of India Pvt. Ltd.
[3] D. Maheswari and V. Radha, "Noise removal in compound image using median filter," (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 04, 2010, 1359-1362.
[4] R. Yang, L. Lin, M. Gabbouj, J. Astola, and Y. Neuvo, "Optimal weighted median filters under structural constraints," IEEE Trans. Signal Processing, vol. 43, pp. 591-604, Mar. 1995.
[5] T. Song, M. Gabbouj, and Y. Neuvo, "Center weighted median filters: some properties and applications in image processing," Signal Processing, vol. 35, no. 3, pp. 213-229, 1994.
[6] S. J. Ko and Y. H. Lee, "Center weighted median filters and their applications to image enhancement," IEEE Trans. Circuits Syst., vol. 38, pp. 984-993, Sept. 1991.
[7] T. Sun and Y. Neuvo, "Detail-preserving median based filters in image processing," Pattern Recognit. Lett., vol. 15, no. 4, pp. 341-347, Apr. 1994.
[8] E. Abreu, M. Lightstone, S. K. Mitra, and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Trans. Image Processing, vol. 5, pp. 1012-1025, 1996.
COMPARISON BETWEEN VARIOUS EDGE DETECTION ALGORITHMS USING MATLAB
Amrita Sharma, Reena Tyagi
RKGEC, Pilakhuwa (Ghaziabad)
er.amrita@gmail.com, reenatyagi94@gmail.com
ABSTRACT
Since computer vision involves the identification
and classification of objects in an image, edge
detection is an essential tool. An edge is the
boundary between two regions with relatively
distinct gray-level properties. Edges characterize
object boundaries and are therefore useful for
segmentation, registration, and identification of
objects in scenes. If the edges in an image can be
identified accurately, all of the objects can be
located, and basic properties such as area,
perimeter and shape can be measured.
In this paper, we present a detailed literature
survey of the various edge detection techniques,
and an algorithm has been designed for
comparing the various edge detection algorithms
using MATLAB. The algorithm compares the
image results of various edge detection
algorithms, namely Canny, Prewitt, Roberts,
Sobel, LoG (Laplacian of Gaussian) and zero
crossing. A conclusion is further derived as to
which technique appears to produce better
results.
Key words: Edge Detection, Image Processing,
Canny, Prewitt, Roberts, LoG.
1. INTRODUCTION
Edge detection refers to the process of
identifying and locating sharp discontinuities in
an image. The discontinuities are abrupt changes
in pixel intensity which characterize boundaries
of objects in a scene. The function of edge
detection is to identify the boundaries of
homogeneous regions in an image based on
properties such as intensity and texture. Classical
methods of edge detection involve convolving
the image with an operator (a 2-D filter), which
is constructed to be sensitive to large gradients in
the image while returning values of zero in
uniform regions. There are an extremely large
number of edge detection operators available,
each designed to be sensitive to certain types of
edges. Many edge detection algorithms have
been developed based on computation of the
intensity gradient vector, which, in general, is
sensitive to noise in the image. In order to
suppress the noise, some spatial averaging may
be combined with differentiation such as the
Laplacian of Gaussian operator and the detection
of zero crossing. A large number of edge
detection techniques have been proposed. The
common approach is to apply the first (or
second) derivative to the smoothed image and
then find the local maxima (or zero-crossings).
2. REVIEW OF PREVIOUS
WORK
In the past two decades several algorithms were
developed to extract the contours of homogeneous
regions within digital images. Classically, the first
stage of edge detection (e.g. the gradient
operator, the Roberts operator, the Sobel operator, the
Prewitt operator) is the evaluation of derivatives
of the image intensity. Smoothing filters and
surface fitting are used as regularization
techniques to make differentiation more immune
to noise. Li Dong Zhang and Du Yan Bi [1]
presented an edge detection algorithm in which the
gradient image is segmented in two orthogonal
orientations and local maxima are derived from
the section curves. They showed that this
algorithm can improve the edge resolution and
insensitivity to noise. Canny [2] derived
analytically optimal step edge operators and
showed that the first derivative of a Gaussian filter
is a good approximation of such operators.
Raman Maini and J. S. Sobel [3] evaluated the
performance of the Prewitt edge detector for
noisy images and demonstrated that the Prewitt
edge detector works quite well for digital images
corrupted with Poisson noise, whereas its
performance decreases sharply for other kinds of
noise. Davis, L. S. [4] suggested Gaussian
preconvolution for this purpose. Sharifi, M. et al.
[5] introduce a new classification of the most
important and commonly used edge detection
algorithms, namely ISEF, Canny, Marr-Hildreth,
Sobel, Kirsch and Laplacian. Shin, M. C. et al. [6]
presented an evaluation of edge detector
performance using a structure-from-motion task.
Rital, S. et al. [7] proposed a new algorithm of
edge detection based on properties of hypergraph
theory and showed that this algorithm is
accurate and robust on both synthetic and real images
corrupted by noise. Fesharaki, M. N. and
Hellestrand, G. R. [8] presented a new edge
detection algorithm based on a statistical
approach using the Student t-test. They selected a
5x5 window partitioned into eight different
orientations in order to detect edges.
3. TRADITIONAL EDGE DETECTORS
An edge is defined in an image as a boundary or
contour at which a significant change occurs in
some physical aspect of the image. Edge
detection is a method as significant as thresholding.
Traditional edge detectors were based on a rather
small 3x3 neighborhood, which only examined
each pixel's nearest neighbors. This may work
well, but due to the size of the neighborhood
being examined, there are limitations to the
accuracy of the final edge.
neighborhoods will only detect local
discontinuities, and it is possible that this may
cause false edges to be extracted. A more
powerful approach is to use a set of first or
second difference operators based on
neighborhoods having a range of sizes (e.g.
increasing by factors of 2) and combine their
outputs, so that discontinuities can be detected at
many different scales. Edges can be detected in
many ways such as Laplacian Roberts, Sobel and
gradient. In both intensity and color, linear
operators can detect edges through the use of
masks that represent the ideal edge steps in
various directions. They can also detect lines and
curves in much the same way. Gradient
operators, Laplacian operators, and zero-crossing
operators are usually used for edge detection.
The gradient operators compute some quantity
related to the magnitude of the slope of the
underlying image gray tone intensity surface of
which the observed image pixel values are noisy
discretized samples. The Laplacian operators
compute some quantity related to the Laplacian
of the underlying image gray tone intensity
surface. The zero-crossing operators determine
whether or not the digital Laplacian or the
estimated second direction derivative has a zero-
crossing within the pixel. Edge detectors based
on the gradient concept are the Sobel, Roberts
and Prewitt.
4. EDGE DETECTION ALGORITHMS
The most frequently used edge detection
methods are Roberts edge detection, Sobel
edge detection, Prewitt edge detection, Canny
edge detection and LoG edge detection. The
details of these methods are as follows:
1) The Roberts Detection: The Roberts Cross
operator performs a simple, quick to compute, 2-
D spatial gradient measurement on an image. It
thus highlights regions of high spatial frequency
which often correspond to edges. In its most
common usage, the input to the operator is a
grayscale image, as is the output. Pixel values at
each point in the output represent the estimated
absolute magnitude of the spatial gradient of the
input image at that point [9].
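The Roberts cross operator can be sketched directly with array slicing, since its two 2x2 kernels reduce to diagonal differences (a NumPy sketch, not the paper's MATLAB code):

```python
import numpy as np

def roberts(img):
    """Roberts cross gradient magnitude from two diagonal differences."""
    img = np.asarray(img, dtype=np.float64)
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)            # estimated gradient magnitude
```

Uniform regions give zero response, while a step edge lights up the pixels straddling it, matching the operator's description as a quick 2-D spatial gradient measurement.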
Fig. 1. Roberts Mask

2) The Prewitt Detection: The Prewitt edge
detector is an appropriate way to estimate the
magnitude and orientation of an edge. Although
differential gradient edge detection needs a
rather time-consuming calculation to estimate the
orientation from the magnitudes in the x- and y-
directions, compass edge detection obtains
the orientation directly from the kernel with the
maximum response. The Prewitt operator is
limited to 8 possible orientations; however,
experience shows that most direct orientation
estimates are not much more accurate. This
gradient-based edge detector is estimated in the
3x3 neighbourhood for eight directions. All
eight convolution masks are calculated, and the
one with the largest modulus is then selected [9].

Fig. 2. Prewitt Mask
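To make the compass idea concrete, here is a small Python sketch (illustrative only; the survey's software was written in MATLAB 7.0, and the test image and the ring-rotation construction of the eight masks are assumptions of this sketch). The pixel's edge strength is the largest of the eight responses, and the winning mask index gives the orientation:

```python
# Compass edge detection with eight Prewitt-style masks, generated by
# rotating the outer ring of the basic Prewitt mask one step at a time.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate_ring(mask):
    vals = [mask[r][c] for r, c in RING]
    vals = vals[-1:] + vals[:-1]          # rotate the outer ring one step
    new = [row[:] for row in mask]
    for (r, c), v in zip(RING, vals):
        new[r][c] = v
    return new

base = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]   # basic Prewitt mask
masks = [base]
for _ in range(7):
    masks.append(rotate_ring(masks[-1]))

def compass_response(img, y, x):
    """Return (largest response, orientation index) at pixel (y, x)."""
    responses = [sum(m[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3)) for m in masks]
    best = max(range(8), key=lambda k: responses[k])
    return responses[best], best

image = [[0, 0, 0, 9, 9]] * 5          # made-up vertical step edge
strength, direction = compass_response(image, 2, 2)
```

On this vertical edge the basic (vertical) mask wins, so the orientation index is 0.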
3) The Sobel Detection: The Sobel operator
performs a 2-D spatial gradient measurement on
an image and so emphasizes regions of high
spatial frequency that correspond to edges.
Typically it is used to find the approximate
absolute gradient magnitude at each point in an
input grayscale image. In theory at least, the
operator consists of a pair of 3x3 convolution
kernels as shown in Figure 3.
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011

Fig 3: Sobel Mask.
These kernels are designed to respond maximally
to edges running vertically and horizontally
relative to the pixel grid, one kernel for each of
the two perpendicular orientations. The kernels
can be applied separately to the input image, to
produce separate measurements of the gradient
component in each orientation (call these Gx and
Gy). These can then be combined together to find
the absolute magnitude of the gradient at each
point and the orientation of that gradient [10].
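The combination of Gx and Gy described above can be sketched in a few lines of Python (the paper's own software was written in MATLAB 7.0; the 5x5 test image here is a made-up example):

```python
# Applying the Sobel kernels Gx and Gy to a tiny grayscale image and
# combining them into a gradient magnitude, as described above.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def correlate3x3(img, kernel, y, x):
    """Correlate the 3x3 kernel with img centred at (y, x)."""
    return sum(kernel[i][j] * img[y - 1 + i][x - 1 + j]
               for i in range(3) for j in range(3))

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = correlate3x3(img, GX, y, x)
            gy = correlate3x3(img, GY, y, x)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

image = [[0, 0, 0, 9, 9]] * 5          # vertical step edge between columns 2 and 3
mag = sobel_magnitude(image)
```

At the centre of the step edge the response is |Gx| = 36 with Gy = 0, while uniform regions give 0.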
4) The Canny Detection: The Canny edge
detection algorithm is known to many as the
optimal edge detector. Canny's intentions were to
enhance the many edge detectors already out at
the time he started his work. He was very
successful in achieving his goal and his ideas and
methods can be found in his paper, "A
Computational Approach to Edge
Detection"[11]. In his paper, he followed a list
of criteria to improve current methods of edge
detection. The first and most obvious is low error
rate. It is important that edges occurring in
images should not be missed and that there be no
responses to non-edges. The second criterion is
that the edge points be well localized. In other
words, the distance between the edge pixels as
found by the detector and the actual edge is to be
at a minimum. A third criterion is to have only
one response to a single edge. This was
implemented because the first two were not
substantial enough to completely eliminate the
possibility of multiple responses to an edge.
Based on these criteria, the Canny edge detector
first smoothes the image to eliminate noise.
It then finds the image gradient to highlight
regions with high spatial derivatives. The
algorithm then tracks along these regions and
suppresses any pixel that is not at the maximum
(non-maximum suppression). The gradient array is
now further reduced by hysteresis. Hysteresis is
used to track along the remaining pixels that
have not been suppressed. Hysteresis uses two
thresholds: if the magnitude is below the low
threshold, the pixel is set to zero (made a non-edge);
if the magnitude is above the high threshold, it is
made an edge; and if the magnitude lies between
the two thresholds, it is set to zero unless there
is a path from this pixel to a pixel with a gradient
above the high threshold.
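The double-threshold hysteresis step can be sketched as follows; for brevity the sketch works on a 1-D row of gradient magnitudes rather than a full image, and the magnitude values and thresholds are made-up examples:

```python
# Hysteresis thresholding: values >= t_high are edges, values < t_low
# are discarded, and in-between values survive only if connected to a
# strong edge, exactly as described in the text above.
def hysteresis(mag, t_low, t_high):
    strong = {i for i, v in enumerate(mag) if v >= t_high}
    weak = {i for i, v in enumerate(mag) if t_low <= v < t_high}
    edges = set(strong)
    stack = list(strong)
    while stack:                        # grow edges into connected weak pixels
        i = stack.pop()
        for j in (i - 1, i + 1):
            if j in weak and j not in edges:
                edges.add(j)
                stack.append(j)
    return sorted(edges)

mag = [0, 2, 6, 3, 0, 3, 0]            # made-up gradient magnitudes
result = hysteresis(mag, 2, 5)         # t_low = 2, t_high = 5
```

Here indices 1 and 3 are weak but connected to the strong pixel 2, so they survive; the isolated weak pixel 5 is discarded.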
5). Laplacian of Gaussian: The Laplacian of a
Gaussian function [8] is referred to as LoG. The
filtering process can be seen as the application of
a smoothing Filter, followed by a derivative
operation. The smoothing is performed by a
convolution with a Gaussian function. Usually a
truncated Gaussian function is used when the
convolution is calculated directly. The
derivatives applied to a smoothed function can
be obtained by applying a convolution with the
derivative of the convolution mask. One of the
interesting characteristics of the Gaussian is its
circular symmetry, which is coherent with the
implicit isotropy of physical data. The
Laplacian is a 2-D isotropic measure of the 2nd
spatial derivative of an image.
The Laplacian of an image highlights regions of
rapid intensity change and is therefore often used
for edge detection. The Laplacian is often
applied to an image that has first been smoothed
with something approximating a Gaussian
Smoothing filter in order to reduce its sensitivity
to noise. The operator normally takes a single
gray level image as input and produces another
gray level image as output.
The commonly used small kernels are shown in
Figure 4.
Figure 4. Commonly used discrete
approximations to the Laplacian filter.
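A minimal Python illustration of the zero-crossing behaviour, using one common 3x3 Laplacian kernel on a made-up smoothed step edge:

```python
# The Laplacian responds with opposite signs on the two sides of an
# edge, so the edge location shows up as a zero-crossing.
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # common 3x3 approximation

def laplacian(img, y, x):
    return sum(LAPLACIAN[i][j] * img[y - 1 + i][x - 1 + j]
               for i in range(3) for j in range(3))

# Made-up smoothed vertical edge: each row ramps 0, 0, 2, 6, 8, 8.
image = [[0, 0, 2, 6, 8, 8]] * 5
row = [laplacian(image, 2, x) for x in range(1, 5)]
```

The responses along the row are positive then negative, and the sign change (zero-crossing) marks the edge.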
5. VISUAL COMPARISON BETWEEN
VARIOUS EDGE DETECTION
ALGORITHMS
Edge detection of all five types was performed
on Figure shown below and a comparison was
drawn regarding the best algorithm for edge
detection amongst the five:
6). CONCLUSION
Since edge detection is the initial step in object recognition, it is important to know the differences between edge detection techniques. In this paper we studied the most commonly used edge detection techniques of Gradient-based and Laplacian-based edge detection. The software was developed using MATLAB 7.0. Canny's edge detection algorithm is computationally more expensive than the Sobel, Prewitt and Roberts operators. However, the Canny edge detection algorithm performs better than all these operators under almost all scenarios. It can also be observed that a sharper edge is obtained using the Prewitt and Sobel edge detection techniques, but all the finer details are not present in them. Comparatively, Sobel produces thicker edges than the Prewitt edge detector.
7). REFERENCES
[1] Li Dong Zhang; Du Yan Bi; "An improved morphological gradient edge detection algorithm", IEEE International Symposium on Communications and Information Technology, ISCIT 2005, Volume 2, Page(s): 1280-1283, 12-14 Oct. 2005.
[2] Canny, J., "A Computational Approach to Edge Detection", IEEE Transactions on PAMI, pp. 679-698, 1986.
[3] Raman Maini and J. S. Sohal, "Performance Evaluation of Prewitt Edge Detector for Noisy Images", GVIP Journal, Vol. 6, Issue 3, December 2006.
[4] Davis, L. S., "Edge detection techniques", Computer Graphics Image Process., vol. 4, pp. 248-270, 1975.
[5] Sharifi, M.; Fathy, M.; Mahmoudi, M.T.; "A classified and comparative study of edge detection algorithms", International Conference on Information Technology: Coding and Computing, Proceedings, Page(s): 117-120, 8-10 April 2002.
[6] Shin, M.C.; Goldgof, D.B.; Bowyer, K.W.;
Nikiforou, S.; " Comparison of edge detection
algorithms using a structure from motion task",
Systems, Man and Cybernetics, Part B, IEEE
Transactions on Volume 31, Issue 4,
Page(s):589-601, Aug. 2001.
[7] Rital, S.; Bretto, A.; Cherifi, H.; Aboutajdine, D.; "A combinatorial edge detection algorithm on noisy images", Video/Image Processing and Multimedia Communications, 4th EURASIP-IEEE Region 8 International Symposium on VIPromCom, Page(s): 351-355, 16-19 June 2002.
[8] Fesharaki, M.N.; Hellestrand, G.R.; "A new
edge detection algorithm based on a statistical
approach", Speech, Image Processing and
Neural Networks, Proceedings, ISSIPNN '94.,
International Symposium, Page(s):21 - 24 vol.1,
13-16 April 1994.
[9] N. Senthilkumaran and R. Rajesh, Edge
Detection Techniques for Image Segmentation -
A Survey, Proceedings of the International
Conference on Managing Next Generation
Software Applications (MNGSA-08), 2008,
pp.749-760.
Image Denoising Using Fractal Method
Rashmi Kumari, Vishal Rathore
Department of ECE, FET MRIU Faridabad
rashmi167k@gmail.com, vishal.rathorer@gmail.com
Abstract: Most often, the fractal based methods are used for
the image compression. Our purpose is to denoise the image
using fractal coding. Fractal-based schemes exploit local and
global self-similarities that are inherent in many classes of
real-world images. Natural image structures possess
similarities across other parts of the image which can be
exploited for fractal image coding. However, noisy
structures have no resemblance in other parts of the image
and therefore cannot be accurately encoded using fractal
coders. Consequently, encoding a noisy image with a fractal
coder results in a good approximation of the natural self-
similar structures, whereas the noisy contents cannot be
described or reconstructed well by the fractal transform.
Hence, fractally encoding a noisy image results in some
degree of image denoising.
Keywords: Image Denoising, Fractal, Fractal transform, Quadtree, Blockiness artifacts
I. INTRODUCTION
Fractal image coding has received much interest over
the past decade, mostly in the context of image
compression. However, little or no attention has been
given to the use of such fractal-based methods for the
purpose of image enhancement and restoration. Indeed, a
noisy image is somewhat denoised when it is fractally
coded. This led to the question of whether such a simple
fractal encoding of the noisy image could be used as a
starting point to estimate the fractal code of the noiseless
image, perhaps with some knowledge of the noise, e.g., its
variance. The denoised estimate of original image can be
reconstructed by the fractal code. The (white
Gaussian) noise process is not represented well by the
(local) linear transform that maps parent blocks to child
blocks, hence resulting in noise reduction. The fractal
code parameters such as the gray-level map coefficients
can be used to estimate those of its noiseless counterpart,
assuming that one can estimate the variance of the
(white Gaussian) noise. This leads to an improvement in
the fractal approximation to a target image. It will be
shown that this method performs in a manner similar to
that of the human visual system, producing extra
smoothing in flat, low activity regions and a lower degree
II. FRACTALIMAGE CODING
Fractal image coding techniques are based on the theory of
Iterated Function Systems(IFS). The fractal-based schemes
exploit the self-similarities that are inherent in many real-
world images for the purpose of encoding an image as a
collection of transformations. Hence, a digitized image,
which typically requires mega-bytes of storage memory, can
be stored as a collection of IFS transformations (parameters)
and is easily regenerated or decoded for use or display. The
storage of the IFS transformation coefficients generally
requires much less memory, resulting in data compression.
Iterated function systems were originally introduced to
generate globally self-similar compact sets and natural
images. IFS were initially limited to images with a high
degree of self-similarity. Later, a block-based fractal
image compression method was developed. In block coding, the
target image is partitioned into non-overlapping sub-blocks,
and similar sub-regions are then matched.
approach provides efficient and accurate models for many
real-world images, resulting in relatively high compression
ratios and good reconstruction fidelity.
To exploit the local self-similarities within sub-regions of
images, the image is subdivided into a pair of simple and
uniform partitions of the image: A domain partition of larger
sub-blocks, also known as parent sub-blocks and a range
partition of smaller sub-blocks, also known as child sub-
blocks. A parent sub-block is mapped into its corresponding
child sub-block using a geometric mapping, followed by a
simple affine transformation, known as the gray-level map.
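As a sketch of this gray-level map, assuming the common affine form child ≈ s·parent + o with the contrast s and brightness o fitted by least squares (the block values below are a made-up example, and the exact fitting rule varies across fractal coders):

```python
# Fit the affine gray-level map from a decimated parent block to a
# child block: minimise sum((s*p + o - c)^2) over s and o.
def decimate(parent):
    """Shrink a 2N x 2N parent to N x N by 2x2 pixel averaging."""
    n = len(parent) // 2
    return [[(parent[2*i][2*j] + parent[2*i][2*j+1] +
              parent[2*i+1][2*j] + parent[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

def fit_gray_map(parent, child):
    """Least-squares contrast s and brightness o."""
    p = [v for row in decimate(parent) for v in row]
    c = [v for row in child for v in row]
    n = len(p)
    mp, mc = sum(p) / n, sum(c) / n
    var = sum((v - mp) ** 2 for v in p)
    cov = sum((pv - mp) * (cv - mc) for pv, cv in zip(p, c))
    s = cov / var if var else 0.0
    return s, mc - s * mp

parent = [[0, 0, 8, 8]] * 4            # made-up 4x4 parent sub-block
child = [[10, 30], [10, 30]]           # made-up 2x2 child sub-block
s, o = fit_gray_map(parent, child)
```

For this example the fit is exact: s = 2.5 and o = 10 map the decimated parent onto the child with zero error.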
III. IMAGE QUALITY MEASURES
The two identified major "acceptable" image quality measures are:

Root Mean Squared Error (RMSE)
Suppose that the original image u of size M x N has been denoised using an image denoising scheme, and let ũ be the denoised estimate. The RMSE between the denoised image and the original image is given by

RMSE = sqrt( (1/(M x N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (u_ij − ũ_ij)² ).
Peak Signal to Noise Ratio (PSNR)
It is inversely related to the RMSE; its units are decibels (dB), and it is formally defined by

PSNR = 20 log10 (255 / RMSE) (dB),

where 255 is the maximum pixel value for an 8-bits/pixel gray-scale image.
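Both measures are straightforward to compute; a Python sketch with a made-up 2x2 example:

```python
import math

# RMSE and PSNR for M x N images stored as nested lists, following the
# definitions above.
def rmse(u, u_hat):
    m, n = len(u), len(u[0])
    sq = sum((u[i][j] - u_hat[i][j]) ** 2 for i in range(m) for j in range(n))
    return math.sqrt(sq / (m * n))

def psnr(u, u_hat):
    return 20 * math.log10(255.0 / rmse(u, u_hat))  # dB, 8-bit images

orig = [[100, 100], [100, 100]]
denoised = [[110, 90], [100, 100]]
e = rmse(orig, denoised)               # sqrt((100 + 100)/4) ≈ 7.07
```

For this example PSNR ≈ 31.14 dB.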
IV. UNIFORM PARTITIONING
In this method images are uniformly square partitioned.
The use of fixed-size partitions may have limitations
since there are sub-regions in the image that are difficult
to cover using the prescribed resolution or size of the
partition. For instance, high detail sub-regions of the
image, such as edges, may require a small mesh size to be
represented well. On the other hand, there may be sub-
regions in the image that can be covered well using larger
block sizes, hence resulting in a reduction of the total
number of parameters to be stored and an increase in
compression of the image. The most common method of
adaptive image partitioning is that of quadtrees.
V. QUADTREE IMAGE PARTITIONING
In this method the original image is broken down into
quadrants in a recursive tree structure. The partitioning,
which will vary throughout the image, is terminated when
a particular condition is satisfied. Typically, regions of
higher image activity, for example edges, will produce
partitions of finer resolution, i.e., small block sizes.
Consequently, edges are generally represented well in
quadtree-based coding schemes, including fractal coding.
Certain blocks, as shown in the figure, are subdivided
further into four quadrants, while others are encoded and
not decomposed further. Note that the regions of the
image that contain too many details, such as the eyes,
the hair and the edges of the hat, are subdivided to a
finer level; in some cases the minimum allowable block
size is reached. Similarly, note how relatively flat parts of
the image, such as the face and the background, are
partitioned more coarsely. This is indeed the essence and
the benefit of the quadtree partitioning scheme.
Originally, the quadtree-based fractal image coding
scheme adopts a collage decomposition criterion. A child
sub-block is fractally encoded and the collage error,
which describes the goodness of fit, is computed. If the
resulting collage error is within a prescribed tolerance,
then the child is presumed fractally encoded. However, if
the collage error exceeds the prescribed threshold, then
the child sub-block is sub-divided into four uncoded child
sub-blocks (quadtrees). This process is then repeated until the whole image is partitioned into non-overlapping, fractally encoded child sub-blocks.

Fig. 1 Quadtree Image Partitioning
Fig 2 Quadtree based fractal representation
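The splitting rule can be sketched as follows in Python; as a stand-in for the collage error, this toy version splits a block whenever its pixel variance exceeds a tolerance (the image, tolerance and minimum block size are made-up assumptions):

```python
# Recursive quadtree partitioning: accept a block if it is "easy"
# (low variance here, collage error in a real coder), otherwise split
# it into four quadrants, down to a minimum block size.
def quadtree(img, y, x, size, tol, min_size, blocks):
    pix = [img[y + i][x + j] for i in range(size) for j in range(size)]
    mean = sum(pix) / len(pix)
    var = sum((p - mean) ** 2 for p in pix) / len(pix)
    if var <= tol or size <= min_size:
        blocks.append((y, x, size))     # block is encoded as-is
    else:
        h = size // 2                   # split into four quadrants
        for dy in (0, h):
            for dx in (0, h):
                quadtree(img, y + dy, x + dx, h, tol, min_size, blocks)

image = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 9, 0],
         [0, 0, 0, 9]]                  # detail only in one corner
blocks = []
quadtree(image, 0, 0, 4, tol=1.0, min_size=1, blocks=blocks)
```

Flat quadrants are kept at size 2, while the detailed bottom-right quadrant is split down to single pixels, mirroring the fine-near-edges behaviour described above.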
Standard fractal schemes are based on spatial
transformations among the target image sub-blocks; as a
result, the reconstructed image generally suffers from
disturbing artifacts or blockiness. In fact, as the image is
partitioned into blocks and since errors tend to be strongly
correlated within a block but generally uncorrelated across
neighboring blocks, distracting artifacts in the fractal
representation of an image are observed as shown. The
zooming on the fractal representation reveals the blockiness
artifacts.
Fig 3 Zoomed Lena image after quadtree coding
(Blockiness artifacts)
VI. COMPARISON OF UNIFORM AND
QUADTREE PARTITIONING
The quadtree-based fractal denoising scheme performs a
content-based denoising, where a high degree of denoising
is performed in uniform sub-regions of the image and a
lower degree of denoising is performed in the vicinity of
edges, without compromising their sharpness. Thus, the
quadtree denoised image appears overly smoothed in flat
regions and noisy near edges and within high-activity sub-
regions of the image, which explains why the quadtree
denoised estimate has a high RMSE (low PSNR).
However, the uniform-based fractal denoising scheme
performs a uniform degree of smoothing throughout the
image, regardless of its content, and the resulting
denoised estimate is smoothed uniformly and contains
little or no residual noise. Although the quantitative fidelity
measures seem to indicate that the uniform-based fractally
denoised image is better than the quadtree-based
one, visually one may prefer the latter because the
presence of noise near edges is less noticeable. However,
the quadtree-based fractally denoised image suffers from
non-uniform artifacts.
Fig. 4 Uniform partitioning: (M,N) = (32,64), RMSE = 10.03, PSNR = 28.10
Fig. 5 Quadtree partitioning: RMSE = 9.10, PSNR = 28.95
VII. CONCLUSION AND FUTURE SCOPE
There is clearly a significant improvement in the
quality of the fractally denoised estimate, especially when
quadtree partitioning of the image is used. In these
fractal representations, most of the noise appears to have
been suppressed without blurring the edges or other high-
frequency components of the image. Except for a few
blockiness artifacts, the quadtree-based fractally
denoised estimate appears to have high visual quality.
These blockiness artifacts, which appear on zooming, still
have to be removed; however, some ideas have been proposed.
REFERENCES
[1] M. Ghazel, E. R. Vrscay, and A. K. Khandani, "An interpolative scheme for fractal image compression in the wavelet domain," Proc. 8th International Conference on Computer Analysis of Images and Patterns (CAIP), Ljubljana, Slovenia, September 1-3, 1999.
[2] M. Ghazel and E. R. Vrscay, "Adaptive fractal and wavelet image coding using quadtree partitioning," Proc. 20th Biennial Symposium on Information Theory, Kingston, May 2000.
[3] M. F. Barnsley, Fractals Everywhere. New York: Academic Press, 1988.
[4] M. F. Barnsley and S. Demko, "Iterated function systems and the global construction of fractals," Proc. Roy. Soc. Lond., vol. A399, pp. 243-275, 1985.
[5] K. Belloulata and J. Konrad, "Fractal image compression with region-based functionality," IEEE Trans. Image Processing, vol. 11, no. 4, pp. 351-362, 2002.
[6] IEEE Trans. Image Processing, vol. 5, no. 6, June 1996.
[7] A. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformations," IEEE Trans. Image Processing, vol. 1, pp. 18-30, 1992.
[8] J. Lévy Véhel, "Introduction to the multifractal analysis of images," in Fractal Image Encoding and Analysis, ser. NATO ASI Series F 159, Y. Fisher, Ed. New York: Springer-Verlag, 1998.
[9] J. Lévy Véhel and B. Guiheneuf, Multifractal Image Denoising. INRIA Rocquencourt, 1997.
[10] N. Lu, Fractal Imaging. New York: Academic, 1997.
Sequence detection for MPSK/MQAM
1 Ms. Neha Singhal (Sr. Lecturer), 2 Ms. Neha Goel (Asst. Professor), 3 Mr. Ankit Tripathi (Sr. Lecturer)
1,2,3 Raj Kumar Goel Institute of Technology, Ghaziabad
1 nehasinghal7@yahoo.co.in, 2 17nehagoel@gmail.com, 3 tripathiankit10@gmail.com
Abstract
A Viterbi algorithm is developed for efficient
detection of M-ary phase-shift keyed (MPSK)
sequences received over the additive, white,
Gaussian noise (AWGN) channel with an
unknown carrier phase. By assuming that the
carrier phase is constant over some L symbol
intervals and using the sequence detection
metric as the decoding metric, its
performance approaches that of coherent
detection, provided the observation interval L
for forming the decoding metric is sufficiently
long. The interval L is fixed and chosen based on
prior statistical knowledge of the carrier
phase characteristics. Thus, the metric is
nonadaptive, and cannot be optimized when a
priori statistical knowledge of the carrier
phase is not available. It is made adaptive by
developing a recursive metric that is adapted
on-line based on the received signals, without
prior knowledge of the carrier phase
characteristics. In this paper, a scheme using
adaptive reference estimation is proposed.
Keywords: Introduction, signal analysis,
analysis of MLSD, Algo of adaptive MLSD,
BEP performance, Conclusion.
I. INTRODUCTION
A Viterbi type algorithm is presented for
adaptive estimation of MPSK sequences over a
Rayleigh fading channel with unknown fading
spectrum and unknown additive channel noise
intensity. Simulations show significant
performance gains over both differential
detection and the adaptive symbol-by-symbol
detection proposed in [3].
In [1], we have developed a Viterbi type
algorithm for efficient estimation of M-ary
phase-shift keyed (MPSK) sequences, whether
coded or uncoded, over the Gaussian channel
with unknown carrier phase. The aim of this
paper is to develop a similar Viterbi type
algorithm for estimating uncoded MPSK
sequences over the Gaussian channel with
nonselective Rayleigh fading.
We consider maximum likelihood sequence
detection (MLSD) over the additive, white,
Gaussian noise (AWGN) channel with unknown
carrier phase. The MLSD structure was first
derived in [3], where it is shown that, in the limit
of a long sequence, its performance approaches
that of coherent detection provided the carrier
phase remains constant. An efficient Viterbi-type
algorithm was proposed in [1], using the
decision metric of the MLSD of [3] as the metric
for choosing the survivor at each state. It reduces
the complexity of MLSD, and the carrier phase
is allowed to be slowly time-varying. Both [1]
and [4] assume that the carrier phase is slowly
varying, so that it can be assumed constant over
the L-symbol observation interval for forming
the metric.
We have proposed an efficient, adaptive
MLSD scheme for a data sequence received over
the AWGN channel with unknown carrier phase.
The data can be coded or uncoded, M-ary phase
shift keyed (MPSK) or M-ary quadrature
amplitude modulated (MQAM). The adaptive
algorithm can adapt on-line to the varying
carrier phase based on the received signals
alone, without prior knowledge of the carrier
phase characteristics. Since better performance is
achieved by sequence detection than by
symbol-by-symbol (SBS) detection [1], [3], we
extend the adaptive idea for SBS detection
of MPSK in [2] to the MLSD of MQAM.
II. SIGNAL ANALYSIS
The received signal r_i(k), i = 1, 2, ..., L, over the
ith diversity channel, after IF filtering, is given
in complex envelope notation by

r(k) = m(k) e^(jθ(k)) + n(k).

Let m = [m(0) m(1) ... m(K−1)]^T be the transmitted
sequence of K MPSK/MQAM modulated symbols.
The noise n = [n(0) n(1) ... n(K−1)]^T is a vector of
independent, identically distributed (i.i.d.) complex
Gaussian random variables, due to channel AWGN;
the diversity channels are independent and identical.
Each n_i(k) is the IF output owing to input additive,
white Gaussian noise (AWGN), and is a zero-mean,
complex Gaussian noise whose power density spectrum
is a constant, though unknown, over the IF band. The
processes are also mutually independent but
statistically identical. The IF filters are of unity
gain and assumed wideband, each with bandwidth a few
times the data rate, so that θ(k) passes undistorted
through the filters. The function of the IF is mainly
to band-limit the input AWGN.
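The signal model above is easy to simulate; in this Python sketch the QPSK data, carrier phase and noise level are made-up example values, and a genie receiver that knows θ is used only to check the model:

```python
import cmath
import random

# Simulate r(k) = m(k) e^{j*theta(k)} + n(k) with MPSK symbols on the
# unit circle and complex Gaussian channel noise.
random.seed(1)
M = 4                                   # QPSK
data = [3, 1, 0, 2, 1]                  # made-up symbol indices
theta = 0.3                             # unknown carrier phase (rad)
sigma = 0.05                            # noise std deviation per dimension

def mpsk(sym):
    return cmath.exp(1j * 2 * cmath.pi * sym / M)

r = [mpsk(d) * cmath.exp(1j * theta) +
     complex(random.gauss(0, sigma), random.gauss(0, sigma)) for d in data]

# Genie receiver that knows theta: nearest-symbol (coherent) decisions.
detected = [min(range(M), key=lambda s: abs(rk - mpsk(s) * cmath.exp(1j * theta)))
            for rk in r]
```

At this noise level the coherent genie recovers the transmitted sequence; the point of the paper is to approach this without knowing θ.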
III. ANALYSIS OF MLSD
The Viterbi algorithm (VA) is an efficient
method to implement MLSD. In the VA, we
treat a digital sequence, whether coded or
uncoded, as a path through a trellis. Sequence
detection is considered as a problem of
searching for the correct path through the trellis
based on the received signal sequence. As in [1],
the survivor at each state is the path that
currently has the highest likelihood among all
paths entering that state. At each time k, the
receiver computes the likelihood p(r(k)|m(k)) of
each subsequence m(k) = [m(0) m(1) ... m(k)]^T
entering a state, given the subsequence
r(k) = [r(0) r(1) ... r(k)]^T of received
signals. For each hypothesized data sequence
m(k), it is straightforward to show that
p(r(k)|m(k)) can be computed recursively for a
general time-varying carrier phase process, with
each term in the recursion obtained by averaging
over the carrier phase. Consider first the term
p(r(l)|m(l), r(l−1), θ(l)). From the signal model,
we see that given m(l) and θ(l), r(l) depends
only on n(l), and is independent of r(l−1)
because n(l) is independent of the past. Thus, we have

p(r(l)|m(l), r(l−1), θ(l)) = (πN_0)^(−1) exp( −(1/N_0) |r(l) − m(l) e^(jθ(l))|² ).
Conditional pdf of Carrier Phase
We will show that the conditional pdf
p(θ(k)|m(k−1), r(k−1)) can be approximated in a
simple form. It is reasonable to approximate the pdf
p(θ(k)|m(k−1), r(k−1)) so that the estimate
θ̂_m(k,L), given the immediate past L−1
hypothesized data symbols and received signals, is
replaced by θ̂_m(k|k−1), which is the best estimate
of θ(k) given the entire past information m(k−1) and
r(k−1). The corresponding estimation error variance
V_m(k,L), given the immediate past L−1 signals,
is replaced by V_m(k|k−1), and the concentration
parameter of the approximated pdf is approximately
the inverse of V_m(k|k−1).

The Metric of the VA
Given the pdf derived above, the likelihood
function p(r(l)|m(l), r(l−1)) can be computed in
closed form. Thus, the receiver is easy to implement,
and the complexity is fixed. For MPSK modulation, the
metric simplifies further.
Only the metric value and the reference phasor for
each state need be stored. The BEP performance of
this adaptive sequence detector is difficult to
analyze, and we obtained its performance via
simulation.
IV. ALGORITHM OF ADAPTIVE MLSD
An adaptive filter is used to recursively generate
a reference phasor v_m(k|k−1), carrying the phase
estimate θ̂_m(k|k−1), for each hypothesized
sequence m(k−1). By extending the filter to
sequence detection for non-equal-energy signals,
we have

v_m(k|k−1) = α_m v_m(k−1|k−2) + (1 − α_m) z_m(k−1),  k ≥ 1.

The adaptive filter is suitable for both MPSK
and MQAM signals. A reference is generated for
the survivor path at each state. The reference
phasor v_m(k|k−1) is a weighted combination of
the previous estimate v_m(k−1|k−2) at time k−2
and the input information z_m(k−1) at time k−1.
The optimal value of α_m can adapt on-line to
the unknown carrier phase characteristics. By
using this adaptive filter, the reference phasor
can be generated based on the entire past
information without memory truncation. The
problem of having to determine an appropriate
value of the memory length L based on a priori
knowledge of the carrier phase is avoided. The
weight α_m is chosen at each time k to minimize
a risk function R_m(k) defined for each m [2].
For general MQAM, this relationship still holds
for the actual transmitted data sequence m_0. For
high SNR, m_0(l) v_{m_0}(l|l−1) can match r(l) very
well, and results in the minimum R_m(k) among all
hypothesized data sequences. This gives us the
motivation to define the risk function, and to use
v_m(l|l−1) as the phasor reference. Thus, in summary,
the receiver computes for each m the metric
Λ_imp(k, m(k)) recursively as

Λ_imp(k, m(k)) = Λ_imp(k−1, m(k−1)) + |r(k) − m(k) v_m(k|k−1)|².
V. THE BEP PERFORMANCE
The performance of the adaptive sequence
detector (ASD) is obtained in simulations using
the random-walk carrier phase model

θ(k + 1) = θ(k) + Δ(k),

in which {Δ(k)} is a sequence of i.i.d., zero-mean
Gaussian increments with variance σ_Δ², the phase
noise variance. Knowledge of the carrier phase
model is not given to the ASD. In the ASD, a
preamble of 50 pilot symbols is used to initialize
the adaptive filter. During the data transmission,
ten pilot symbols are transmitted every thousand
data symbols. For the NASD in [1], a preamble of L
symbols is used to initialize the MLSD, and L pilot
symbols are transmitted every thousand data
symbols. The optimum values of L are chosen through
extensive simulations for each SNR and each value
of σ_Δ².
In our simulations, the entire sequence length is
set to K = 10^5. For fair comparison, the
energy consumed by the preamble and pilot
symbols is taken into account. Thus, the effective
average energy per data symbol in the ASD or
the NASD is K/(K + L_a + L_b n_b) of E_s, where
L_a is the length of the preamble, L_b the number
of pilot symbols transmitted every thousand data
symbols, and n_b = K/1000. Specifically,
L_a = L_b = L for the NASD, and L_a = 50 and
L_b = 10 for the ASD.
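The overhead arithmetic can be checked directly (the NASD value L = 20 below is an illustrative assumption):

```python
# Effective average energy per data symbol: the fraction
# K / (K + L_a + L_b * n_b) of Es, per the accounting above.
K = 10 ** 5
n_b = K // 1000                 # pilot insertions, one per thousand symbols

def energy_fraction(L_a, L_b):
    return K / (K + L_a + L_b * n_b)

asd = energy_fraction(50, 10)   # ASD: 50-symbol preamble, 10 pilots/1000
nasd = energy_fraction(20, 20)  # NASD with an illustrative L = 20
```

The ASD retains about 98.96% of E_s for data, so the pilot overhead costs well under 0.1 dB.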
Fig. 1. BEP comparison of uncoded QPSK using adaptive and nonadaptive detectors for phase noise variance σ_Δ² = 0.0012 rad².

From the figures, it can also be seen that the storage requirement of our newly proposed metric is smaller.

Fig. 1 shows the performance for uncoded QPSK, for a phase noise variance of 0.0012 rad²,
which corresponds to a root-mean-square (rms)
phase fluctuation of 2°. For low SNR, the
optimum L is larger because it is necessary to
smooth out the additive noise. For high SNR, the
additive noise is not significant, and it is more
important to reduce the effects of carrier phase
noise by using a shorter observation window.
The lengths L shown are such that one is about
the best for the lower half of the SNR range, and
the other the best for the higher half. The
adaptive sequence detector can be seen to
perform better, and it achieves this through its
own automatic adjustment of the filter weight
based on the observed signal samples. The
performance loss compared to that of coherent
detection is only about 0.5 dB.
Fig. 2. BEP comparison of uncoded 8PSK using adaptive and nonadaptive detectors for phase noise variances σ_Δ² = 0.0012 rad² and σ_Δ² = 0.0003 rad².
Fig. 3. BEP comparison of uncoded 16QAM using adaptive and nonadaptive detectors for phase noise variance σ_Δ² = 0.0012 rad² and constant phase.
Fig. 2 shows the simulated BEP performance for
uncoded 8PSK, for phase noise variances of
0.0003 rad² and 0.0012 rad². At low SNR
(SNR ≤ 8 dB), the ASD is not as good as the NASD
using the optimum value of L. This is expected
since the performance of phase tracking using
the adaptive filter degrades at low SNR due to
decision errors. However, when SNR increases
to the values where satisfactory BEP
performance can be achieved, say, SNR > 8 dB,
the performance of the ASD is better than that of
the NASD.
For uncoded 16QAM, the results in Fig. 3 show
that the NASD does not work properly for the
time-varying unknown phase, although the
NASD can approach the performance of
coherent detection when the carrier phase
remains constant. In this case, our ASD can
achieve much better performance when the
carrier phase is time-varying. The performance
loss compared to that of coherent detection
increases with the level of modulations.
Fig. 4. BEP comparison of convolutionally coded 8PSK using adaptive and nonadaptive detectors for phase noise variance σ_Δ² = 0.0012 rad².
Fig. 4 shows the simulated BEP results of
convolutionally coded 8PSK, with Gray mapping
of coded bits onto the signal points. The code
used for 8PSK in Fig. 4 is the optimum rate-1/3,
constraint length-3 code [13, ch. 8]. The
generators are g_1 = [1 0 1], g_2 = [1 1 1], and
g_3 = [1 1 1], with the same notation as in
[13, ch. 8]. Thus, E_b = E_s/(m R_c), where
m = log2 M and R_c is the code rate. The results
show that the ASD outperforms the NASD with the
optimum value of L.
VI. CONCLUSIONS
We have developed an adaptive algorithm for
the MLSD of coded/uncoded MPSK and
MQAM over the AWGN channel with unknown
carrier phase characteristics. The analysis in the VA
was developed based on the derived conditional pdf of
the carrier phase, and incorporates the
phase estimation accuracy for each hypothesized
data sequence. A practical metric was proposed
with low complexity. The metric uses an
adaptive filter to generate the reference phasors,
which can adapt on-line to the unknown carrier
phase characteristics based on the received
signal alone, without any a priori statistical
information about the unknown carrier phase. The
proposed ASD also has reduced complexity.
REFERENCES
[1] P. Y. Kam and P. Sinha, "A Viterbi-type algorithm for efficient estimation of MPSK sequences over the Gaussian channel with unknown carrier phase," IEEE Trans. Commun., vol. COM-43, pp. 2429-2433, Sept. 1995.
[2] P. Y. Kam, K. H. Chua and X. Yu, "Adaptive symbol-by-symbol reception of MPSK on the Gaussian channel with unknown carrier phase characteristics," IEEE Trans. Commun., vol. COM-46, pp. 1275-1279, October 1998.
[3] P. Y. Kam, "Maximum-likelihood digital data sequence estimation over the Gaussian channel with unknown carrier phase," IEEE Trans. Commun., vol. COM-35, pp. 764-767, Jul. 1987.
[4] G. Colavolpe and R. Raheli, "Noncoherent sequence detection," IEEE Trans. Commun., vol. 47, pp. 1376-1385, Sept. 1999.
[5] R. Raheli, A. Polydoros, and C. Tzou, "Per-survivor processing: A general approach to MLSE in uncertain environments," IEEE Trans. Commun., vol. COM-43, pp. 354-364, Part I of Feb./Mar./Apr. 1995.
[6] G. Ferrari, G. Colavolpe and R. Raheli, "On linear predictive detection for communications with phase noise and frequency offset," IEEE Trans. Vehic. Tech., vol. 56, pp. 2037-2085, Jul. 2007.
[7] L. Lampe and R. Schober, "Noncoherent sequence detection receiver for Bluetooth systems," IEEE J. Sel. Areas in Commun., vol. 23, pp. 1718-1727, Sept. 2005.
[8] A. N. D'Andrea, U. Mengali, G. M. Vitetta, "Approximate ML decoding of coded PSK with no explicit carrier phase reference," IEEE Trans. Commun., vol. 42, pp. 1033-1039, Feb./Mar./Apr. 1994.
[9] G. Ferrari, G. Colavolpe and R. Raheli, "A unified framework for finite-memory detection," IEEE J. Sel. Areas in Commun., vol. 23, pp. 1697-1706, Sept. 2005.
[10] P. Y. Kam, "Maximum likelihood carrier phase recovery for linear suppressed-carrier digital data modulations," IEEE Trans. Commun., vol. COM-34, pp. 522-527, Jun. 1986.
[11] H. L. Van Trees, Detection, Estimation, and Modulation Theory: Part I. New York: Wiley, 1968.
[12] H. L. Van Trees, Detection, Estimation, and Modulation Theory: Part II. New York: Wiley, 1971.
[13] J. G. Proakis, Digital Communications, 4th ed. New York: McGraw-Hill, 2001.
Kaiser Window Based Finite Impulse Response
Bandpass Filter Approach for Noise Minimization
Amiya Dey#1, Ayan Kumar Ghosh*2
#,*Electronics & Communication Engineering Department, Seacom Engineering College, Howrah, West Bengal, India
1amiyo4u@gmail.com
2akghsh@gmail.com
Abstract - We propose a Finite Impulse Response (FIR)
Bandpass Filter (BPF) to minimize the channel Additive White
Gaussian Noise (AWGN) contribution in a communication
system. The digital FIR BPF is implemented using the Kaiser
window. We define a simple model in which a signal is generated and
contaminated with AWGN to obtain a noisy signal; the
noisy signal is then passed through the Kaiser window based FIR BPF
to obtain a filtered output with the minimum possible AWGN
contribution. The simulation results show improved
communication system performance. The Kaiser window length
and shape are chosen to reduce the passband and
stopband ripple for this purpose. The
frequency domain response of the Kaiser window based FIR
BPF, together with the frequency domain and time domain responses of the signal,
the noisy signal and the filtered signal, is used to analyze the system.
The communication system modelling, simulation and performance analysis are
implemented in MATLAB.
Keywords - FIR, BPF, Kaiser window, signal, AWGN, MATLAB.
I. INTRODUCTION
The need for improved signal processing performance is
increasing day by day for designing better communication
systems in the presence of Additive White Gaussian Noise
(AWGN). This paper is an approach toward the goal of
minimizing the channel AWGN contribution in a
communication system using a Digital Signal Processing (DSP)
Bandpass Filter (BPF). The BPF uses a Finite Impulse
Response (FIR) design with the window technique; the window used for
this experimental work is the well-known Kaiser window.
II. DESIGN
To meet this objective, the communication system design,
simulation and performance analysis are implemented in
MATLAB. First, a single-tone signal of frequency 4 Hz is generated
over a specified 100-second time interval. The signal is then
contaminated with AWGN to obtain a noisy signal, and the noisy
signal is passed through the Kaiser window based FIR BPF. The
filtered output shows the minimum possible AWGN contribution, as
observed from the output diagrams.
The minimum practical Kaiser window length is chosen so that
as little AWGN as possible passes through the filter, and the shape
of the Kaiser window is chosen so that the filter gives the
maximum possible passband gain. The passband and stopband
ripple are reduced to suppress the AWGN components outside
the passband of the BPF.
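The paper implements the design in MATLAB; an equivalent windowed-sinc Kaiser design can be sketched in Python/NumPy. The 4 Hz tone and 100-second interval come from the text, while the sampling rate (100 Hz), the band edges (3-5 Hz), the tap count and the Kaiser shape parameter β are illustrative assumptions:

```python
import numpy as np

def kaiser_bandpass(numtaps, f_lo, f_hi, fs, beta=6.0):
    """Windowed-sinc FIR bandpass: the ideal bandpass impulse response
    (difference of two sinc low-pass responses) times a Kaiser window."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = (2 * f_hi / fs) * np.sinc(2 * f_hi * n / fs) \
      - (2 * f_lo / fs) * np.sinc(2 * f_lo * n / fs)
    return h * np.kaiser(numtaps, beta)

# Filter a noisy 4 Hz tone sampled at fs = 100 Hz over 100 s
fs = 100.0
t = np.arange(0, 100, 1 / fs)
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 0.5, t.size)

h = kaiser_bandpass(numtaps=201, f_lo=3.0, f_hi=5.0, fs=fs)
filtered = np.convolve(noisy, h, mode="same")
```

A larger β deepens the stopband attenuation at the cost of a wider transition band, which is the trade-off the paper tunes when choosing the window shape.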
III. RESULTS
For the frequency-domain representation, the normalized
frequency is used along the X-axis; the normalized frequency
is the frequency normalized to the Nyquist frequency. By default,
the magnitude of the BPF is normalized so that the
magnitude response of the BPF at the centre frequency of the
filter passband is 0 dB.
Figure 1 shows the time domain response of the noisy
signal plotted as the Magnitude versus Time within the time
interval of 100 seconds. The diagram indicates that the signal
is severely corrupted with noise.
Fig. 1 Noisy signal time domain response
Figure 2 shows the frequency domain response of the noisy
signal plotted as the Magnitude versus Frequency within a
frequency interval of 5 Hz on both sides of the origin. The
diagram indicates noise content over almost the entire
frequency spectrum, including the signal frequency.
Fig. 2 Noisy signal frequency domain response
Figure 3 shows the frequency domain response of the
Kaiser Window based FIR BPF. It has 2 plots. The top plot
shows the magnitude response in the frequency domain
plotted as the Magnitude versus Normalized Frequency. The
bottom plot is the phase response in frequency domain plotted
as the Phase versus Normalized Frequency. The diagram
indicates that the centre frequency of the passband of the
Kaiser Window based FIR BPF lies at the desired signal
frequency of 4 Hz.
Fig. 3 Frequency domain response of Kaiser Window based FIR BPF
Figure 4 shows the time domain response of the filtered
signal plotted as the Magnitude versus Time within the time
interval of 100 seconds. The diagram indicates almost
negligible noise content at the output of the Kaiser Window
based FIR BPF.
Fig. 4 Filtered signal time domain response
Figure 5 shows the frequency domain response of the
filtered signal plotted as the Magnitude versus Frequency
within the frequency interval 5 Hz in both directions of the
origin. The diagram indicates almost negligible AWGN
presence at the output of the Kaiser Window based FIR BPF
over the entire frequency spectrum.
Fig. 5 Filtered signal frequency domain response
IV. CONCLUSIONS
From the experimental viewpoint it can be concluded that
the Kaiser Window based FIR BPF can be used for
minimizing the AWGN content in any transmitted signal. This
clearly improves the overall performance of the
communication system significantly.
REFERENCES
[1] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing. New Jersey: Prentice-Hall, 1975.
[2] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications. New Delhi: Prentice-Hall, 2000.
[3] Y.-P. Lin and P. P. Vaidyanathan, "A Kaiser window approach for the design of prototype filters of cosine modulated filter banks," IEEE Signal Processing Lett., vol. 5, pp. 132-134, June 1998.
[4] R. E. Crochiere and L. R. Rabiner, "Optimum FIR digital filter implementations for decimation, interpolation and narrow-band filtering," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-23, no. 5, pp. 444-456, Oct. 1975.
A Comparison and Analysis of Different PDE
Approaches for Image Enhancement
Anubhav Kumar1, Awanish Kr. Kaushik1, R. L. Yadava1, Divya Saxena2
1Department of Electronics & Communication Engineering, Galgotias College of Engineering & Technology, Gr. Noida, India
2Department of Mathematics, Vishveshwarya Institute of Engineering and Technology, G.B. Nagar, India
rajput.anubhav@gmail.com
Abstract- In this paper we compare two recently
developed techniques for image enhancement and denoising.
Image enhancement and denoising improve the quality of images
for human viewing; removing blurring and noise, increasing
contrast, and revealing details are examples of enhancement
operations. The methods compared are based on partial
differential equations: the fourth-order PDE and the complex
partial differential equation. We consider various well-known
measuring metrics used in image processing, applied to standard
images. In this study, it is shown that the
capability of the PDE-based approaches depends highly on the
neighboring structure. Our investigations show that the complex
diffusion partial differential equation offers better image
quality than the fourth-order partial differential equation,
while in terms of noise measurement the fourth-order
partial differential equation method offers a better PSNR
result than the complex diffusion partial differential equation.
Keywords - fourth-order partial differential equations, complex
diffusion partial differential equation, PSNR, M-SVD, RFSIM, FcSIM.
I. INTRODUCTION
Image restoration is an important step in image processing and
a necessary pre-processing stage for other image tasks such as image
segmentation. Image restoration methods, and denoising methods in
particular, occupy a special position in image processing. Many
difficulties arise when computing any fourth-order diffusion: the
dynamics depend highly on the smoothness of the initial data, and
boundary conditions are often difficult to implement, since fourth-order
equations require prescribing two boundary conditions in contrast to the
one needed for second-order diffusions.
Since the work of Perona and Malik [1], which replaced
isotropic diffusion with anisotropic diffusion, many methods
connecting adaptive smoothing with systems of nonlinear
partial differential equations (PDEs) [2]-[6] have been
proposed to preserve important structures in images while
removing noise. Anisotropic diffusion is associated with an
energy-dissipating process that seeks the minimum of an
energy functional. When the energy functional is the total
variation norm of the image, the well-known total variation
(TV) minimization model is obtained. To reduce the
blocky effect while preserving sharp jump discontinuities
(edges), many other nonlinear filters have been suggested in
the literature [7]-[10], and during the last few years fourth-order
PDEs have been of special interest [7]-[12]. Time stepping is
perhaps the crucial issue for fourth-order diffusions.
The use of partial differential equations (PDEs) in
image processing has grown significantly over the past years.
The basic idea is to deform an image, a curve or a surface
with a PDE and obtain the expected result as a solution of this
equation. One of the main advantages of using partial
differential equations is that the image analysis is carried
to a continuous domain, simplifying the formalism of the
model, which becomes independent of the grid used in the
discrete problem.
The paper is organized as follows. Section II is devoted to the
description of the two PDE approaches. Section III presents results
and analysis. Section IV gives the conclusion and discussion.
II. PDE ALGORITHMS
A. Fourth-Order PDE
One of the most commonly used PDE-based denoising
techniques is the Perona-Malik method. The
Perona-Malik equation for an image u is given by

du/dt = div[c(|∇u|) ∇u],   u(x, y)|t=0 = u0(x, y)   (1)

where ∇u is the gradient of the image u, div is the divergence
operator and c is the diffusion coefficient. The diffusion
coefficient c is a non-increasing function that diffuses more on
plateaus and less on edges, so edges are preserved. Two
such diffusion coefficients suggested by Perona and Malik [1]
are

c(s) = 1 / (1 + (s/k)²)   (2)

and

c(s) = exp[-(s/k)²]   (3)

In this work we implemented and tested the fourth-order PDE
proposed by You and Kaveh [8], an L2 curvature gradient flow
method which is given as
du/dt = -∇²[c(|∇²u|) ∇²u]   (4)

where ∇²u is the Laplacian of the image u. The discrete form
of the non-linear fourth-order PDE described in equation (4) is

u(i,j)^(n+1) = u(i,j)^n - Δt ∇²g(i,j)^n,   g = c(|∇²u|) ∇²u   (5)
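One explicit iteration of the discrete scheme (5) can be sketched as follows; the diffusivity c(s) = exp[-(s/k)²] is the coefficient of equation (3), while the step size Δt, the threshold k and the replicated-boundary handling are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np

def laplacian(u):
    """5-point discrete Laplacian with replicated (Neumann-like) boundaries."""
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)

def fourth_order_step(u, k=1.0, dt=0.02):
    """One explicit step u^{n+1} = u^n - dt * Laplacian(g^n),
    with g = c(|Lap u|) * Lap u and c(s) = exp[-(s/k)^2]."""
    lap = laplacian(u)
    g = np.exp(-(np.abs(lap) / k) ** 2) * lap
    return u - dt * laplacian(g)

# A few iterations reduce the noise energy of a noisy flat patch
rng = np.random.default_rng(1)
u = 0.5 + rng.normal(0.0, 0.05, (32, 32))
v0 = u.var()
for _ in range(20):
    u = fourth_order_step(u)
```

A small Δt is needed because explicit fourth-order schemes have a much stricter stability limit than second-order diffusion; this is the time-stepping difficulty mentioned in the introduction.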
B. The Complex Diffusion PDE
Complex diffusion is a comparatively new method that can be
applied to image denoising. It is a generalization of
diffusion and the free Schrödinger equation. In various areas of
physics and engineering, it was realized that extending the
analysis from the real axis to the complex domain is very
helpful, even though the variables and/or quantities of interest
are real. Analysis of linear complex diffusion shows that the
generalized diffusion has properties of both forward and
inverse diffusion.
The duality relations that exist between the Schrödinger
equation and diffusion theory have been studied in [12].
Writing the complex diffusion coefficient as c = exp(iθ), with
C_R = cos θ and C_I = sin θ, the linear complex diffusion
equation separates into the following two coupled equations for
the real part I_R and the imaginary part I_I:

dI_R/dt = C_R I_Rxx - C_I I_Ixx,   I_R(x, y, 0) = I_0   (6)
dI_I/dt = C_I I_Rxx + C_R I_Ixx,   I_I(x, y, 0) = 0   (7)

where I_R is the image obtained in the real plane and I_I is the
image obtained in the imaginary plane at time t. For a small
phase angle θ, the relation I_Rxx >> θ I_Ixx holds, giving the
approximation

dI_R/dt ≈ I_Rxx;   dI_I/dt ≈ I_Ixx + θ I_Rxx   (8)

so that I_R is controlled by a linear forward diffusion
equation, whereas I_I is affected by both the real and
imaginary parts. The above-mentioned method is a
linear complex diffusion equation. A more efficient
nonlinear complex diffusion can be written as

I_t = ∇ · [c(Im(I)) ∇I]   (9)

where

c(Im(I)) = exp(iθ) / (1 + (Im(I)/(kθ))²)   (10)

In the above equation k is a threshold parameter. The phase
angle θ should be small (θ << 1). Since the imaginary part is
normalized by θ, the process is hardly affected by changing
the value of θ, as long as it stays small.
III. RESULT AND ANALYSIS
The experiment was carried out on a personal Pentium 4 laptop
at 2.4 GHz. The test set for this evaluation experiment was
selected from the internet, and experiments were carried out on
various types of images. Comparisons and analysis were done on
the basis of image quality metrics (FcSIM [13], RFSIM [15],
M-SVD [14]) and the peak signal-to-noise ratio (PSNR).

Figure 1. (a) Input image (b) Noisy image (c) Denoised by fourth-order PDE (d) Denoised by complex diffusion PDE

Figure 2. (a) Input image (b) Noisy image (c) Denoised by fourth-order PDE (d) Denoised by complex diffusion PDE
Figure 3. (a) Input image (b) Noisy image (c) Denoised by fourth-order PDE (d) Denoised by complex diffusion PDE

Table-I PSNR Results
Images                                   Fourth-Order PDE   Complex Diffusion
Lena (random noise)                      40.10              37.74
Couple (Gaussian blurring)               43.05              39.2
Liberty (additive Gaussian white noise)  42.93              41.2

Table-II RFSIM [15] Results
Images                                   Fourth-Order PDE   Complex Diffusion
Lena (random noise)                      .081               .096
Couple (Gaussian blurring)               .534               .543
Liberty (additive Gaussian white noise)  .279               .299

Table-III M-SVD [14] Results
Images                                   Fourth-Order PDE   Complex Diffusion
Lena (random noise)                      44.25              45
Couple (Gaussian blurring)               25.95              26.54
Liberty (additive Gaussian white noise)  40.97              43.63

Table-IV FcSIM [13] Results
Images                                   Fourth-Order PDE   Complex Diffusion
Lena (random noise)                      .793               .793
Couple (Gaussian blurring)               .933               .936
Liberty (additive Gaussian white noise)  .901               .905
From Table I we can see that the fourth-order PDE denoising
result is better than that of the complex diffusion. Our results
(Tables II, III, IV) show that in an image in which the energy of
noise is low, the complex diffusion offers a better result in image
enhancement compared to the fourth-order PDE. However, when the
energy of noise increases, the performance of the complex diffusion
declines. Tables II, III and IV show that the complex diffusion
partial differential equation offers better image quality than the
fourth-order partial differential equation, while in terms of noise
measurement (Table I) the fourth-order partial differential equation
method offers a better PSNR result than the complex diffusion
partial differential equation.
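The PSNR values in Table I follow the standard definition; a minimal sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the denoised image is numerically closer to the clean reference, which is why Table I and the structural metrics (Tables II-IV) can rank the two methods differently.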
IV. CONCLUSION
In this paper a comparative analysis of fourth-order Partial
Differential Equations (PDEs) and complex PDEs for image
enhancement has been presented. The analysis was mainly done on
the basis of different image quality metrics (QIM) and the peak
signal-to-noise ratio (PSNR); the visual appearance of the
result was also considered. The analysis was done on standard
images. The goal of this paper is to study the ability of both
PDEs to remove noise and enhance images while keeping the
maximum image information. For image enhancement, our
experimental results show that the complex diffusion partial
differential equation offers better image quality than the
fourth-order partial differential equation. In terms of noise
measurement, the fourth-order partial differential equation
method offers a better PSNR result than the complex diffusion
partial differential equation.
REFERENCES
[1] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629-639, Jul. 1990.
[2] K. Chen, "Adaptive smoothing via contextual and local discontinuities," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1552-1567, Oct. 2005.
[3] R. A. Carmona and S. Zhong, "Adaptive smoothing respecting feature directions," IEEE Trans. Image Process., vol. 7, no. 3, pp. 353-358, Mar. 1998.
[4] S.-H. Lee and J. K. Seo, "Noise removal with Gauss curvature-driven diffusion," IEEE Trans. Image Process., vol. 14, no. 7, pp. 904-909, Jul. 2005.
[5] D. Barash, "A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 6, pp. 844-847, Jun. 2002.
[6] A. C.-C. Shih, H. Y. M. Liao, and C.-S. Lu, "A new iterated two-band diffusion equation: Theory and its application," IEEE Trans. Image Process., vol. 12, no. 4, pp. 466-476, Apr. 2003.
[7] M. Lysaker and X.-C. Tai, "Iterative image restoration combining total variation minimization and a second-order functional," Int. J. Comput. Vis., vol. 66, no. 1, pp. 5-18, 2006.
[8] Y.-L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal," IEEE Trans. Image Process., vol. 9, no. 10, pp. 1723-1730, Oct. 2000.
[9] M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting," IEEE Trans. Image Process., vol. 13, no. 10, pp. 1345-1357, Oct. 2004.
[10] T. Tasdizen, R. Whitaker, P. Burchard, and S. Osher, "Geometric surface processing via normal maps," ACM Trans. Graph., vol. 22, no. 4, pp. 1012-1033, 2003.
[11] M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Trans. Image Process., vol. 12, no. 12, pp. 1579-1590, Dec. 2003.
[12] M. Nagasawa, Schrödinger Equations and Diffusion Theory, Monographs in Mathematics, vol. 86, Birkhäuser Verlag, Basel, Switzerland, 1993.
[13] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Trans. Image Process., vol. 1, no. 99, pp. 1-10, Jan. 2011.
[14] A. Shnayderman, A. Gusev, and A. M. Eskicioglu, "An SVD-based grayscale image quality measure for local and global assessment," IEEE Trans. Image Process., vol. 15, no. 2, Feb. 2006.
[15] L. Zhang, L. Zhang, and X. Mou, "RFSIM: A feature based image quality assessment metric using Riesz transforms," in Proc. IEEE Int. Conf. Image Processing, 2010, pp. 321-324.
Face Detection Algorithm in Color Images of
Complex Background
Anubhav Kumar1, Awanish Kr. Kaushik1, Anuradha2, Divya Saxena3
1Department of Electronics & Communication Engineering, Galgotias College of Engineering & Technology, Gr. Noida, India
2Department of Electronics & Communication Engineering, Laxmi Devi Institute of Technology, Alwar, India
3Department of Mathematics, Vishveshwarya Institute of Engineering and Technology, G.B. Nagar, India
rajput.anubhav@gmail.com
Abstract- Human face detection plays an important role in
applications such as video surveillance, face image management,
navigation and computer interfaces. We propose a face detection
algorithm for color images in the presence of varying lighting
conditions as well as complex backgrounds. Color-information-based
methods receive great attention because color is an obvious and
robust visual cue for detection. Our method is effective under
facial variations such as dark/bright vision, closed eyes, open
mouths, half-profile faces, and pseudo faces. It is worth stressing
that our algorithm can locate human faces correctly. The
experimental results show that our approach achieves a
90.19% detection rate.
Keywords - Face detection, intensity based detection, localization,
classification.
I. INTRODUCTION
Face detection is one of the most popular topics of research in the
computer vision field. It has many applications including face
recognition, crowd surveillance, and human-computer
interaction. Face detection means locating and detecting all the
faces and their sizes in input images. It is complicated by
complex backgrounds, varying illumination conditions, and a
large range of facial expressions, among other factors. We
therefore have to find an effective approach that is somehow
invariant to these changes. Li and Zhang proposed the FloatBoost
method for training the classifier [1], in which a backtracking
scheme is employed to remove unfavorable classifiers from the
existing ensemble. Wu et al. carried out multiview face
detection using a nested structure and real AdaBoost [2]. EAs
have been applied to many classifier training tasks such as
face detection [3]-[5], face recognition [6], and car detection
[7]. Some of the above studies applied EAs to AdaBoost-based
classifier training problems, trying to improve either the
construction of ensemble classifiers or the selection of features.
AdaBoost is a greedy search algorithm which provides an ensemble
of classifiers [8].
Face detection and face recognition are important in many
applications such as security, human-machine interfaces,
gesture-based computer interfaces, automatic driver
monitoring, biometric identification and video search. Face
detection is an essential component of human-computer
interaction, video surveillance, face tracking and face
recognition [11]-[14]. Face images are significantly changed by
lighting conditions, which may cause performance
degradation both in face detection and in recognition [15].
Face detection algorithms can be classified into a few categories.
First, rule-based face detection algorithms [9] are based on
reasoning rules derived from face researchers' knowledge. Second,
feature-based face detection algorithms [10] use facial features
for face detection; skin color, one such feature, is less sensitive
to facial translation, rotation and scale. Existing approaches have
several limitations. First, color-based methods find it hard to
detect skin color under different lighting conditions [16][17].
Second, many researchers locate eyes by detecting the eyeball,
the white of the eye, or the pupil, which results in false
detections when the eyes are closed or glasses are worn.
Third, most traditional algorithms cannot discriminate between a
cartoon face and a real human face in the same scene. Fourth,
feature-based detection is computationally heavy and operates
slowly.
This paper describes an active face feature tracking method that is
not tied to imaging conditions; skin color detection is used for
detecting faces. The paper is organized as follows. Section II is
devoted to the description of the proposed model. Section III
presents results and analysis. Section IV gives the conclusion and
a discussion of future work.
II. PROPOSED ALGORITHM
The face detection in color images based on an intensity function is
specified as follows.
Figure 1. Flow chart of the proposed algorithm
A. Preprocessing
In this step, the input image is first resized and the image region
is segmented out with mathematical analysis; the image is then
converted to gray scale. The gray image is converted to a binary
image using the Otsu threshold, as shown in figure 2(b).

Figure 2. (a) Original image (b) Preprocessed image
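The Otsu threshold used in this step can be sketched as follows; this is the standard between-class-variance maximization, not the authors' code:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    for an 8-bit grayscale image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment
    mu_t = mu[-1]                          # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)     # undefined splits contribute nothing
    return int(np.argmax(sigma_b2))

def to_binary(gray):
    """Binarize a grayscale image with the Otsu threshold."""
    return (gray > otsu_threshold(gray)).astype(np.uint8)
```

Because Otsu's method picks the split automatically from the histogram, the binarization adapts to the varying lighting conditions the paper targets.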
B. Face Detection
In this step, a morphological operation with a structuring element is
applied to the image; morphological operations are generally used for
binary images. Image regions and holes are then filled based on
morphological reconstruction. Since this stage aims to find the
exact face, morphological close and open operations are applied to
the resultant reconstructed image. Finally, morphological dilation
with a square structuring element is used to eliminate the effects
of slightly slanted edges, and a vertical linear structuring element
is then employed in a closing operation to close the strong vertical
edges. The result of the face detection process is shown in figure 3.

Figure 3. Resultant face detected image
C. Face Localization
When screening face candidate regions, the forehead region may
overlap or connect with other parts such as clothing, which makes
screening difficult; a morphological close operation is therefore
applied first to separate such connections before further
processing. In this step, non-face regions are segmented out using
the major-to-minor axis ratio with the help of heuristic filtering.
Only those regions in the retained image with an area greater than
or equal to 1/250 of the maximum-area region are kept, and regions
whose width-to-height ratio exceeds 6 are removed. After this
heuristic filtering, some mathematical analysis is performed in two
steps. First, a template scope is determined using the typical
bounds of the face width-to-length ratio, and face regions with
area < 200 pixels are removed, as these generally correspond to
hands or other disturbances. Second, unwanted rectangular regions
are removed. The face localization result is shown in figure 4.

Figure 4. Resultant face localization image
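The heuristic filtering above can be sketched as connected-component labeling followed by area and aspect-ratio screening. The thresholds (1/250 of the maximum area, a width/height limit of 6, a 200-pixel minimum area) follow the text, while the exact interpretation of the width/height rule and the use of 4-connectivity are assumptions:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labeling of a boolean mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def filter_candidates(mask, min_area_frac=1/250, max_aspect=6.0, min_area=200):
    """Keep only regions passing the area and width/height heuristics."""
    labels, n = label_regions(mask)
    if n == 0:
        return mask.copy()
    areas = np.bincount(labels.ravel())[1:]
    keep = np.zeros_like(mask)
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        w = xs.max() - xs.min() + 1
        h = ys.max() - ys.min() + 1
        if (areas[k - 1] >= min_area_frac * areas.max()
                and areas[k - 1] >= min_area
                and w / h < max_aspect):
            keep |= labels == k
    return keep
```

Such cheap geometric screening discards elongated regions (arms, clothing edges) and tiny blobs (hands, noise) before the more expensive classification step.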
D. Face Classification
The purpose of this step is to draw a bounding box around the faces
in the localized region. As the main color in a human face is skin
color, the threshold should be large enough; on the other hand, the
face region contains eyes, mouth, hair, nose and other background
colors, so the threshold cannot be too high. The face classification
result is shown in figure 5.

Figure 5. Face detected output image
III. RESULT AND ANALYSIS
The experiment was carried out on a personal Pentium 4 laptop at
2.4 GHz. The test set for this evaluation experiment was randomly
selected from the internet.
Figure 6. Output image (23 faces, 22 faces detected, 3 false alarms)
Figure 7. Output image (25 faces, 24 faces detected, 3 false alarms)
Figure 8. Output image (8 faces, 8 faces detected, 1 false alarm)
The proposed model was tested on 25 images, and the performance
of our proposed method is better than that of existing
color-image-based algorithms. Experiments show that the proposed
face detection algorithm performs very well in detecting low-quality
faces and faces affected by environmental lighting conditions.
However, it is sensitive to pose-rotated faces and faces occluded by
other objects.
Table-I Experiment Results
Parameter        Result
Total images     25
Total faces      408
Detected faces   368
False alarms     89
Missed faces     40
Recall rate      90.19%
Precision rate   80.52%
The detection rates and false-positive rates on the test set are
listed in Table I: the number of false detections is low, while the
detection rate is acceptable. Examples from the face database are
shown in Figures 5-8.
IV. CONCLUSION
In this paper a face detection algorithm for color images based on
intensity has been proposed. Experiments using the face database
show the great ability of the proposed algorithm to detect faces,
especially in low-quality images, without producing too many false
detections. The proposed algorithm reduces false detections because
a false detection occurs only when both classifiers detect a
non-face pattern as a face at the same location in three sequential
scales, which is almost impossible. By setting the parameters of
the two classifiers such that almost all frontal faces are
detected, a high correct face detection rate is achieved.
REFERENCES
[1] S. Z. Li and Z. Q. Zhang, "FloatBoost learning and statistical face detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, pp. 1112-1123, Sep. 2004.
[2] B. Wu, H. Ai, C. Huang, and S. Lao, "Fast rotation invariant multi-view face detection based on real AdaBoost," in Proc. 6th Int. Conf. Autom. Face Gesture Recogn., 2004, pp. 79-84.
[3] J.-S. Jang, K.-H. Han, and J.-H. Kim, "Evolutionary algorithm-based face verification," Pattern Recogn. Lett., vol. 25, pp. 1857-1865, 2004.
[4] J.-S. Jang and J.-H. Kim, "Evolutionary pruning for fast and robust face detection," in Proc. IEEE Congr. Evol. Comput., 2006, pp. 1293-1299.
[5] A. Treptow and A. Zell, "Combining Adaboost learning and evolutionary search to select features for real-time object detection," in Proc. IEEE Congr. Evol. Comput., 2004, pp. 2107-2113.
[6] C. Liu and H. Wechsler, "Evolutionary pursuit and its application to face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 6, pp. 570-582, 2000.
[7] Y. Abramson, F. Moutarde, B. Stanciulescu, and B. Steux, "Combining adaboost with a hill-climbing evolutionary feature search for efficient training of performant visual object detectors," in Proc. 7th Int. FLINS Conf. Appl. Artif. Intell., 2006, pp. 737-744.
[8] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," J. Comput. Syst. Sci., vol. 55, no. 1, pp. 119-139, 1997.
[9] C. Kotropoulos and I. Pitas, "Rule-based detection in frontal views," in Proc. Int. Conf. Acoustics, Speech and Signal Processing, vol. 4, pp. 2537-2540, 1997.
[10] S. A. Sirohey, "Human face segmentation and identification," Technical Report CS-TR-3176, University of Maryland, 1993.
[11] S. Phimoltares, C. L. Lursinsap, and K. Chamnongthai, "Face detection and facial feature localization without considering the appearance of image context," Image and Vision Computing, vol. 25, no. 5, pp. 741-753, 2007.
[12] C. A. Perez, V. Lazcano, and P. A. Estevez, "Real-time iris detection on coronal-axis-rotated faces," IEEE Trans. Syst. Man Cybern. C Appl. Rev., vol. 37, no. 5, pp. 971-978, 2007.
[13] C. A. Perez, C. M. Aravena, J. I. Vallejos, P. A. Estevez, and C. M. Held, "Face and iris localization using templates designed by particle swarm optimization," Pattern Recognit. Lett., pp. 857-868, 2010.
[14] C. Perez, V. Lazcano, P. Estevez, and C. Held, "Real-time template based face and iris detection on rotated faces," International Journal of Optomechatronics, vol. 3, no. 1, pp. 54-67, 2009.
[15] S. Choi, C. Kim, and C. Choi, "Shadow compensation in 2D images for face recognition," Pattern Recognit., vol. 40, no. 7, pp. 2118-2125, 2007.
[16] V. D. M. Nhat and S. Lee, "Two-dimensional weighted PCA algorithm for face recognition," in Proc. 2005 IEEE Int. Symp. Computational Intelligence in Robotics and Automation, pp. 219-223, Jun. 2005.
[17] K. Yang, H. Zhu, and Y. J. Pan, "Human face detection based on SOFM neural network," in Proc. 2006 IEEE Int. Conf. Information Acquisition, pp. 1253-1257, Aug. 2006.
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
An Introduction to Various Edge Detectors for
finding Edges & Lines in Image
Tanu Shree Gupta1, Dr. A. K. Sharma2, Mr. Sudeep Tanwar3, Mrs. Antima Puniya4
1 CE, Shobhit University, Meerut, UP, India; its.tanushree@gmail.com
2 CSE, YMCA, Haryana, HR, India; ashokkale2@rediffmail.com
3 IT, Bharat Institute of Technology (GBTU, Lucknow, India), Meerut, UP, India; sudeep149@rediffmail.com
4 CSE/IT, Shobhit Institute of Engineering & Technology (GBTU, Lucknow, India), Gangoh, Saharanpur, UP, India; apuniya@gmail.com
Abstract - Detecting edges in a given image is an important
problem in image processing. In a grey level image, an edge may
be defined as a sharp change in intensity. Edge detection is the
process of detecting the presence and location of these intensity
transitions. The edge representation of an image drastically
reduces the amount of data to be processed, yet it retains
important information about the shapes of objects in the scene.
For most of the high level machine vision tasks such as motion
analysis and object recognition, an edge map is sufficient to carry
out further analyses.
Edges characterize boundaries and are therefore a problem
of fundamental importance in image processing. Since edge
detection is at the forefront of image processing for object
detection, it is crucial to have a good understanding of edge
detection algorithms. As edge detection is a fundamental step in
computer vision, it is necessary to point out the true edges to get
the best results from the matching process. That is why it is
important to choose edge detectors that fit best to the application.
In this paper, we discuss the various features of the Edge
Detectors. Edge detection plays an important role in the areas of
image processing, multimedia and computer vision. It has been
shown that the Gabor edge detection algorithm performs better
than all these operators under almost all scenarios. We use the
programming language MATLAB for implementing the
algorithms. Evaluation of the images showed that under noisy
conditions Gabor, Canny, LoG (Laplacian of Gaussian), Robert,
Prewitt, Sobel exhibit better performance, respectively. It has
been observed that Canny's edge detection algorithm is
computationally more expensive compared to Gabor, LoG
(Laplacian of Gaussian), Sobel, Prewitt and Roberts operator.
Keywords: Edge detectors, Digital Image Processing.
I. INTRODUCTION
Edge of image is one of the most fundamental and significant
features. Edge detection is always one of the classical studying
projects of computer vision and image processing field. It is the first
step of image analysis and understanding. The purpose of edge
detection is to discover the information about the shapes and the
reflectance or transmittance in an image. It is one of the fundamental
steps in image processing, image analysis, image pattern recognition,
and computer vision, as well as in human vision. The correctness and
reliability of its results directly affect the comprehension that a
machine vision system builds of the objective world.
Edge detection refers to the process of identifying and locating
sharp discontinuities in an image. The discontinuities are abrupt
changes in pixel intensity which characterize boundaries of objects in
a scene. There are an extremely large number of edge detection
operators available, each designed to be sensitive to certain types of
edges. Variables involved in the selection of an edge detection
operator include Edge orientation, Noise environment and Edge
structure. Edge detection is difficult in noisy images, since both the
noise and the edges contain high frequency content. Attempts to
reduce the noise result in blurred and distorted edges. Not all edges
involve a step change in intensity. There are many ways to perform
edge detection. However, the majority of different methods may be
grouped into two categories:
A. Gradient based Edge Detection:
The gradient method detects the edges by looking for the
maximum and minimum in the first derivative of the image.
B. Laplacian-based Edge Detection:
The Laplacian method searches for zero crossings in the second
derivative of the image to find edges. An edge has the one-
dimensional shape of a ramp and calculating the derivative of the
image can highlight its location. When the first derivative is at a
maximum, the second derivative is zero.
II. PROBLEMS OF EDGE DETECTION
The separation of a scene into object and background is an
essential step in image interpretation. This is a process that is carried
out effortlessly by the human visual system, but when computer
vision algorithms are designed to mimic this action, several problems
can be encountered. This section describes some of the problems
involved in detecting and localizing edges. Due to the presence of
noise and quantization of the original image, during edge detection it
is possible to locate intensity changes where edges do not exist. For
similar reasons, it is also possible to completely miss existing edges.
The degree of success of an edge-detector depends on its ability to
accurately locate true edges.
Edge localization is another problem encountered in edge
detection. The addition of noise to an image can cause the position of
the detected edge to be shifted from its true location. The ability of
an edge-detector to locate in noisy data an edge that is as close as
possible to its true position in the image is an important factor in
determining its performance.
Another difficulty in any edge detection system arises from the
fact that the sharp intensity transitions which indicate an edge are
sharp because of their high-frequency components. As a
result, any linear filtering or smoothing performed on these
edges to suppress noise will also blur the significant
transitions. Low-pass filters are the most widely used
smoothing filters. The amount of smoothing applied depends
on the size or scale of the smoothing operator. One filter size
may not be good enough to remove noise while keeping good
localization.
III. VARIOUS EDGE DETECTORS
A. Roberts cross operator:
The Roberts Cross operator performs a simple, quick to
compute, two dimensional spatial gradient measurements on an
image. Pixel values at each point in the output represent the
estimated absolute magnitude of the spatial gradient of the input
image at that point. The operator consists of a pair of 2×2
convolution kernels as shown in Figure 1. One kernel is simply the
other rotated by 90°.
Gx:         Gy:
+1  0        0 +1
 0 -1       -1  0

FIGURE 1: Masks used for the Roberts operator
These kernels are designed to respond maximally to edges
running at 45° to the pixel grid, one kernel for each of the two
perpendicular orientations. The kernels can be applied separately to
the input image, to produce separate measurements of the gradient
component in each orientation (call these Gx and Gy). These can then
be combined together to find the absolute magnitude of the gradient
at each point and the orientation of that gradient. The gradient
magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute. The angle of orientation of the edge
giving rise to the spatial gradient (relative to the pixel grid
orientation) is given by:

theta = arctan(Gy/Gx) - 3*pi/4
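The magnitude computation above can be sketched in a few lines. The paper's experiments use MATLAB; the following is an equivalent pure-Python illustration, and the 3×3 test image is hypothetical.

```python
# A minimal sketch of the Roberts cross operator on a tiny grayscale image,
# using the 2x2 kernels Gx = [[+1, 0], [0, -1]] and Gy = [[0, +1], [-1, 0]].
import math

def roberts(image):
    """Return the gradient magnitude sqrt(Gx^2 + Gy^2) for each 2x2 window."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            gx = image[i][j] - image[i + 1][j + 1]   # +1 at (0,0), -1 at (1,1)
            gy = image[i][j + 1] - image[i + 1][j]   # +1 at (0,1), -1 at (1,0)
            out[i][j] = math.sqrt(gx * gx + gy * gy)
    return out

# A diagonal step edge: the response is strongest across the discontinuity.
img = [[0, 0, 9],
       [0, 9, 9],
       [9, 9, 9]]
mag = roberts(img)
print(mag)  # [[9.0, 9.0], [9.0, 0.0]]
```

The output shrinks by one row and column because each 2×2 window straddles two pixels in each direction.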
B. Prewitts operator:
Prewitt operator is used for detecting vertical and horizontal
edges in images.

Gx:          Gy:
-1 0 +1      +1 +1 +1
-1 0 +1       0  0  0
-1 0 +1      -1 -1 -1

FIGURE 2: Masks for the Prewitt gradient edge detector
C. Sobel Operator:
The operator consists of a pair of 3×3 convolution kernels as
shown in Figure 3. One kernel is simply the other rotated by 90°.
Gx:          Gy:
-1 0 +1      +1 +2 +1
-2 0 +2       0  0  0
-1 0 +1      -1 -2 -1

FIGURE 3: Masks used by the Sobel operator
These kernels are designed to respond maximally to edges
running vertically and horizontally relative to the pixel grid, one
kernel for each of the two perpendicular orientations. The kernels can
be applied separately to the input image, to produce separate
measurements of the gradient component in each orientation (call
these Gx and Gy). These can then be combined together to find the
absolute magnitude of the gradient at each point and the orientation
of that gradient [3]. The gradient magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute. The angle of orientation of the edge
(relative to the pixel grid) giving rise to the spatial gradient is given by:

theta = arctan(Gy/Gx)
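A hedged pure-Python sketch of the steps above (the paper uses MATLAB): apply the 3×3 Gx/Gy masks at each interior pixel, then combine them into magnitude and angle. The 4×4 test image is hypothetical.

```python
# Sobel operator: per-pixel Gx/Gy responses, |G| = sqrt(Gx^2 + Gy^2) and
# theta = atan2(Gy, Gx), computed only at interior pixels.
import math

SOBEL_GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_GY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

def sobel(image):
    rows, cols = len(image), len(image[0])
    mag = [[0.0] * cols for _ in range(rows)]
    ang = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = sum(SOBEL_GX[u][v] * image[i + u - 1][j + v - 1]
                     for u in range(3) for v in range(3))
            gy = sum(SOBEL_GY[u][v] * image[i + u - 1][j + v - 1]
                     for u in range(3) for v in range(3))
            mag[i][j] = math.sqrt(gx * gx + gy * gy)
            ang[i][j] = math.atan2(gy, gx)
    return mag, ang

# A vertical step edge: Gx responds strongly, Gy stays zero.
img = [[0, 0, 9, 9]] * 4
mag, ang = sobel(img)
print(mag[1][1], ang[1][1])
```

On the vertical step edge the response at the interior pixels is 36.0 with angle 0, i.e. the gradient points across the edge, as expected.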
D. Laplacian of Gaussian:
The Laplacian is a 2-D isotropic measure of the 2nd spatial
derivative of an image. The Laplacian is applied to an image that has
first been smoothed with something approximating a Gaussian
Smoothing filter in order to reduce its sensitivity to noise. The
operator normally takes a single gray level image as input and
produces another gray level image as output. The Laplacian L(x,y) of
an image with pixel intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²
Since the input image is represented as a set of discrete pixels,
we have to find a discrete convolution kernel that can approximate
the second derivatives in the definition of the Laplacian. The
commonly used small kernels are shown in Figure 4.

 0  1  0      1  1  1     -1  2 -1
 1 -4  1      1 -8  1      2 -4  2
 0  1  0      1  1  1     -1  2 -1

FIGURE 4: Three commonly used discrete approximations to the
Laplacian filter.
Because these kernels approximate a second derivative measurement on
the image, they are very sensitive to noise; hence the Gaussian
pre-smoothing. Doing things this way has two advantages: since both the
Gaussian and the Laplacian kernels are usually much smaller than the
image, this method usually requires far fewer arithmetic operations, and
the combined LoG kernel can be calculated in advance, so only one
convolution needs to be performed on the image at run-time.
The 2-D LoG function centered on zero and with Gaussian standard
deviation σ has the form:

LoG(x, y) = -1/(πσ⁴) · [1 - (x² + y²)/(2σ²)] · exp(-(x² + y²)/(2σ²))
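The function above can be sampled on a discrete grid to build a convolution kernel. A Python sketch, assuming a centred square grid (the size and σ values are illustrative, not from the paper):

```python
# Sample LoG(x, y) = -1/(pi*sigma^4) * (1 - r2/(2*sigma^2)) * exp(-r2/(2*sigma^2))
# on a size x size grid centred at the origin.
import math

def log_kernel(size, sigma):
    c = size // 2
    k = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            r2 = (i - c) ** 2 + (j - c) ** 2
            k[i][j] = (-1.0 / (math.pi * sigma ** 4)
                       * (1 - r2 / (2 * sigma ** 2))
                       * math.exp(-r2 / (2 * sigma ** 2)))
    return k

k = log_kernel(7, 1.4)
# Isotropic "Mexican hat" shape: a negative centre surrounded by a
# positive annulus, identical along both axes.
print(k[3][3] < 0, k[0][3] > 0, k[3][0] == k[0][3])
```

Convolving an image with this single kernel is equivalent to Gaussian smoothing followed by the Laplacian, which is the computational shortcut mentioned above.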
E. Canny edge detection:
The Canny edge detector is known as the optimal edge
detector. Canny followed a list of criteria to improve on existing
methods of edge detection:
1) A low error rate.
2) The edge points should be well localized.
3) Only one response to a single edge.
Based on these criteria, the canny edge detector first smoothes
the image to eliminate noise. It then finds the image gradient to
highlight regions with high spatial derivatives. The algorithm then
tracks along these regions and suppresses any pixel that is not at the
maximum (non maximum suppression). The gradient array is now
further reduced by hysteresis. Hysteresis is used to track along the
remaining pixels that have not been suppressed.
Hysteresis uses two thresholds, a low threshold T1 and a high
threshold T2. If the magnitude is below T1, the pixel is set to zero
(made a non-edge). If the magnitude is above T2, it is made an edge.
And if the magnitude is between the two thresholds, it is set to zero
unless there is a path from this pixel to a pixel with a gradient above T2.
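The hysteresis rule above can be sketched as a flood fill from strong pixels; this is one common way to realise the "path to a pixel above T2" condition, not necessarily the paper's implementation. The gradient-magnitude grid and threshold values are hypothetical.

```python
# Hysteresis thresholding: seed with pixels above T2, then grow along
# 8-connected neighbours whose magnitude is at least T1.
def hysteresis(mag, t1, t2):
    rows, cols = len(mag), len(mag[0])
    edge = [[False] * cols for _ in range(rows)]
    stack = [(i, j) for i in range(rows) for j in range(cols) if mag[i][j] >= t2]
    for i, j in stack:
        edge[i][j] = True
    while stack:
        i, j = stack.pop()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols and not edge[ni][nj]
                        and mag[ni][nj] >= t1):
                    edge[ni][nj] = True
                    stack.append((ni, nj))
    return edge

mag = [[0, 0, 0, 0],
       [0, 5, 9, 0],
       [0, 0, 5, 0],
       [5, 0, 0, 0]]
edges = hysteresis(mag, t1=4, t2=8)
print(edges)
```

The two weak pixels (value 5) adjacent to the strong pixel (value 9) survive, while the isolated weak pixel in the bottom-left corner is suppressed, which is exactly the behaviour the two-threshold rule is meant to produce.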
F. Gabor filter
A Gabor filter is a linear filter created by modulating a
sinusoid with a Gaussian. It has been used in many applications,
such as texture segmentation, target detection, fractal dimension
management, fingerprint matching, edge detection, image coding and
image reconstruction.
This process involves firstly superposing different Gabor
filters at different phases and orientations, and secondly
performing a convolution of the filters with the original image.
This filter is given by the following equation (Grigorescu et al.,
2003; Petkov & Wieling, 2006):

g(x, y) = exp(-(x'² + γ²y'²) / (2σ²)) · cos(2π x'/λ + φ)

where

x' = x cos θ + y sin θ
y' = -x sin θ + y cos θ

The parameters used in the above equation are explained below:
1. σ is the standard deviation of the Gaussian factor and
determines the (linear) size of its receptive field.
2. λ specifies the wavelength of the cosine factor of the
Gabor filter.
3. θ specifies the orientation of the normal to the parallel
stripes of the Gabor filter.
4. φ is the phase offset of the cosine factor and determines
the symmetry of the Gabor filter.
5. γ is called the spatial aspect ratio and specifies the
ellipticity of the Gaussian factor.
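The Gabor expression can be evaluated directly; a Python sketch follows, with parameter names mirroring the list above. A full detector would sample a kernel over a grid for several orientations θ and phases φ and convolve each with the image; the parameter values here are illustrative only.

```python
# Evaluate the Gabor function g(x, y) at one point: a Gaussian envelope in
# rotated coordinates (x', y') modulating a cosine of wavelength lam.
import math

def gabor(x, y, sigma, lam, theta, phi, gamma):
    xp = x * math.cos(theta) + y * math.sin(theta)
    yp = -x * math.sin(theta) + y * math.cos(theta)
    return (math.exp(-(xp * xp + gamma * gamma * yp * yp) / (2 * sigma * sigma))
            * math.cos(2 * math.pi * xp / lam + phi))

# At the origin the Gaussian factor is exp(0) = 1, so g(0, 0) = cos(phi).
v0 = gabor(0, 0, sigma=2.0, lam=4.0, theta=0.0, phi=0.0, gamma=0.5)
print(v0)
```

With φ = 0 the filter is even-symmetric about the stripe axis, which matches the role of the phase offset described in item 4.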
IV. COMPARISON OF VARIOUS EDGE
DETECTORS VISUALLY
FIGURE 5: Image used for edge detection analysis (wheel.gif)
Edge detection of all types was performed on Figure 5. Gabor
yielded the best results. This was expected, as Gabor filter edge
detection accounts for regions in an image. Gabor yields thin lines
for its edges by superposing different Gabor filters at different
phases. Gabor also performs convolution of the filters with the
original image.

FIGURE 6: Results of edge detection on Figure 5; Gabor gave
the best results.
(a) ORIGINAL IMAGE
(b) SOBEL (c) PREWITT
(d) ROBERT (e) LoG
(f) CANNY (g) GABOR
FIGURE 7: Comparison of Edge Detection Techniques (a) Original
Image (b) Sobel (c) Prewitt (d) Robert (e) Laplacian of Gaussian (f)
Canny (g) Gabor filter
V. PERFORMANCE OF EDGE DETECTORS
As edge detection is a fundamental step in computer vision, it is
necessary to point out the true edges to get the best results from the
matching process. That is why it is important to choose edge
detectors that fit best to the application. In this respect, we first
present some merits and demerits of Edge Detection Techniques.
The classical operators such as Sobel and Prewitt, which use the
first derivative, require very simple calculations to detect edges and
their orientations, but their detection is inaccurate in the presence
of noise. The Laplacian of Gaussian (LoG) operator is another type of
edge detection operator, which uses the second derivative. It finds
the correct places of edges and tests a wider area around the pixel.
The disadvantage of the LoG operator is that it cannot find the
orientation of an edge, because it uses the Laplacian filter.

The other type of edge detection operator is the Gaussian edge
detector, such as the Canny operator, which uses probability for
finding the error rate and localization. It is also symmetric along
the edge and reduces noise by smoothing the image, so it performs
better detection under noisy conditions, but unfortunately it requires
complex computation. Canny's edge detection algorithm is
computationally more expensive than the Sobel, Prewitt and Roberts
operators. However, Canny's algorithm performs better than all these
operators under almost all scenarios.
VI. CONCLUSIONS

Image edge detection is a developing technology, moving from the
classical differential operators to newer edge detection algorithms
associated with emerging techniques. This paper has introduced various
image edge detection technologies. The variety of these methods shows
that researchers have long sought a general and effective edge
detection algorithm. Nowadays, the main difficulties in image work
concern medical images, infrared images, micro-images and remote
sensing images, which are characterized by higher noise and
complicated image edges. Therefore, many researchers have introduced
the concept of multi-scale edge extraction, in which the wavelet
transform shows unusual superiority, and many have improved the
working efficiency of their algorithms.

In this paper, subjective evaluation of the edge detection result
images shows that Gabor, Canny, LoG, Prewitt, Sobel and Roberts
exhibit better performance, respectively, under noisy conditions. This
is due to the sinusoidal and Gaussian operators using probability for
finding the error rate, localization and response.
ACKNOWLEDGEMENT
First of all we render our gratitude to the ALMIGHTY
who bestowed self-confidence, ability and strength to
complete this work. With deep sense of gratitude, the authors
express their sincere thanks to our preacher, teacher, and
supervisor.
REFERENCES
1. J. Matthews, "An introduction to edge detection: the Sobel edge
detector," 2002.
2. M. C. Shin, D. Goldgof, and K. W. Bowyer, "Comparison of edge
detector performance through use in an object recognition task,"
Computer Vision and Image Understanding, vol. 84, no. 1,
pp. 160-178, Oct. 2001.
3. Xian-Min Ma, "A revised edge detection algorithm based on
wavelet transform for coal gangue image," in Proc. 2007
International Conference on Machine Learning and Cybernetics,
IEEE, 2007, pp. 1639-1642.
4. C. Grigorescu, N. Petkov, and M. A. Westenberg, "Contour
detection based on nonclassical receptive field inhibition," IEEE
Transactions on Image Processing, vol. 12, no. 7, pp. 729-739, 2003.
5. R. Mehrotra, K. R. Namuduri, and N. Ranganathan, "Gabor
filter-based edge detection," Pattern Recognition, vol. 25,
no. 12, pp. 1479-1494, 1992.
6. Tai Sing Lee, "Image representation using 2D Gabor wavelets,"
IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 18, no. 10, 1996.
7. Yan Naung Oak and Nasser Alidoust, "Image Reconstruction and
Edge Recognition Using Gabor and Canny Filters," May 12, 2008.
Haar Wavelet: A Technique of Image Compression
*Sudhanshu Upadhyay, **Pankaj Sharma, ***Manoj Kumar Bansal, **** Agha Asim Husain
* Senior Lecturer, Department of ECE, sudha_lv@yahoo.com, Vira College of Engineering, Bijnor
** Lecturer, Department of CSE, shapankaj.sharma@gmail.com, Vira College of Engineering, Bijnor
*** Senior Lecturer, Department of ECE, manojbansal14@gmail.com, Vira College of Engineering, Bijnor
**** A.P., Department of ECE, aghaasim2000@yahoo.com, ITS,G. Noida
ABSTRACT
Compression is useful because it helps reduce the
consumption of expensive resources, such as hard
disk space or transmission bandwidth. On the
downside, compressed data must be decompressed to
be used, and this extra processing may be
detrimental to some applications. The design of data
compression schemes therefore involves trade-offs
among various factors, including the degree of
compression, the amount of distortion introduced (if
using a lossy compression scheme), and the
computational resources required to compress and
uncompress the data. A picture can say more than a
thousand words. Unfortunately, storing an image
can cost more than a million words. This isn't
always a problem, because many of today's
computers are sophisticated enough to handle large
amounts of data. Sometimes however you want to
use the limited resources more efficiently. Digital
cameras for instance often have a totally
unsatisfactory amount of memory, and the internet
can be very slow. Wavelet theory intends to analyse
and transform data. It can be used to make explicit
the correlation between neighbouring pixels of an
image, and this explicit correlation can be exploited
by compression algorithms to store the same image
more efficiently. Wavelets can even be used to
transform an image in more and less important data
items. By only storing the important ones the image
can be stored in an amazingly more compact
fashion, at the cost of introducing hardly noticeable
distortions in the image. In this paper the
mathematics behind the compression of images will
be outlined. I will treat the basics of data
compression, explain what wavelets are, and outline
the tricks one can do with wavelets to achieve
compression.
Keywords: Image compression, Haar wavelet,
Vector quantization.
I. INTRODUCTION
In recent years there has been an astronomical
increase in the usage of computers for a variety of
tasks. With the advent of digital cameras, one of the
most common uses has been the storage,
manipulation, and transfer of digital images. The files
that comprise these images, however, can be quite
large and can quickly take up precious memory space
on the computer's hard drive. A gray scale image that
is 256 x 256 pixels has 65,536 elements to store, and
a typical 640 x 480 color image has nearly a million!
The size of these files can also make downloading
from the internet a lengthy process. The Haar wavelet
transform provides a means by which we can
compress the image so that it takes up much less
storage space, and therefore electronically transmits
faster and in progressive levels of detail.
The computer is becoming more and more powerful
day by day. As a result, the use of digital images is
increasing rapidly. Along with this increasing use of
digital images comes the serious issue of storing and
transferring the huge volume of data representing the
images because the uncompressed multimedia
(graphics, audio and video) data requires
considerable storage capacity and transmission
bandwidth. Though there is a rapid progress in mass
storage density, speed of the processor and the
performance of the digital communication systems,
the demand for data storage capacity and data
transmission bandwidth continues to exceed the
capabilities of on hand technologies. Besides, the
latest growth of data intensive multimedia based web
applications has put much pressure on the researchers
to find the way of using the images in the web
applications more effectively. Internet
teleconferencing, High Definition Television
(HDTV), satellite communications and digital storage
of movies are not feasible without a high degree of
compression. As it is, such applications are far from
realizing their full potential largely due to the
limitations of common image compression
techniques.
The image is actually a kind of redundant data i.e. it
contains the same information from certain
perspective of view. By using data compression
techniques, it is possible to remove some of the
redundant information contained in images. Image
compression minimizes the size in bytes of a graphics
file without degrading the quality of the image to an
unacceptable level. The reduction in file size allows
more images to be stored in a certain amount of disk
or memory space. It also reduces the time necessary
for images to be sent over the Internet or downloaded
from web pages. The scheme of image compression
is not new at all. The discovery of Discrete Cosine
Transform (DCT) in 1974 is really an important
achievement for those who work on image
compression. The DCT can be regarded as a discrete
time version of the Fourier Cosine series. It is a close
relative of Discrete Fourier Transform (DFT), a
technique for converting a signal into elementary
frequency components. Thus DCT can be computed
with a Fast Fourier Transform (FFT) like algorithm
of complexity O(n log2 n). Unlike DFT, DCT is real
valued and provides a better approximation of a
signal with fewer coefficients. There are a number of
various methods in which image files can be
compressed. There are two main common
compressed graphic image formats namely Joint
Photographic Experts Group (JPEG, usually
pronounced as JAY-pehg) and Graphic Interchange
Format (GIF) for the use in the Internet.
In 1992, JPEG established the first international
standard for still image compression where the
encoders and decoders are DCT-based. The JPEG
standard specifies three modes namely sequential,
progressive, and hierarchical for lossy encoding, and
one mode of lossless encoding. The performance of
the coders for JPEG usually degrades at low bit-rates
mainly because of the underlying block-based
Discrete Cosine Transform (DCT). The baseline
JPEG coder is the sequential encoding in its simplest
form. Fig. 1(a) and (b) show the key processing steps
in such an encoder and decoder respectively for
grayscale images. Color image compression can be
approximately regarded as compression of multiple
grayscale images, which are either compressed
entirely one at a time, or are compressed by
alternately interleaving 8x8 sample blocks from each
in turn.
(a)
Figure 1(a) and (b): Encoder and Decoder Block
Diagramrespectively [1]
After output from the Forward DCT (FDCT), each of
the 64 DCT coefficients is uniformly quantized in
conjunction with a carefully designed 64-element
Quantization Table (QT). At the decoder, the
quantized values are multiplied by the corresponding
QT elements to pick up the original unquantized
values. After quantization, all the quantized
coefficients are ordered into zig-zag sequence. This
ordering helps to facilitate entropy encoding by
placing low frequency non-zero coefficients before
high-frequency coefficients. The DC coefficient,
which contains a significant fraction of the total
image energy, is differentially encoded.
Wavelets are a mathematical tool for hierarchically
decomposing functions. Though rooted in
approximation theory, signal processing, and physics,
wavelets have also recently been applied to many
problems in Computer Graphics including image
editing and compression, automatic level-of-detail
control for editing and rendering curves and surfaces,
surface reconstruction from contours and fast
methods for solving simulation problems in 3D
modeling, global illumination, and animation.
Wavelet-based coding provides substantial
improvements in picture quality at higher
compression ratios. Over the past few years, a variety
of powerful and sophisticated wavelet-based schemes
for image compression have been developed and
implemented. Because of the many advantages of
wavelet based image compression as listed below, the
top contenders in the JPEG-2000 standard are all
wavelet-based compression algorithms.
Wavelet coding schemes at higher compression
avoid blocking artifacts.
They are better matched to the HVS (Human Visual
System) characteristics.
Compression with wavelets is scalable as the
transform process can be applied to an image as
many times as wanted and hence very high
compression ratios can be achieved.
Wavelet based compression allow parametric gain
control for image softening and sharpening.
Wavelet-based coding is more robust under
transmission and decoding errors, and also facilitates
progressive transmission of images.
Wavelet compression is very efficient at low bit
rates.
Wavelets provide an efficient decomposition of
signals prior to compression.
II. WAVELETS FOR IMAGE COMPRESSION
Wavelet transform exploits both the spatial and
frequency correlation of data by dilations (or
contractions) and translations of mother wavelet on
the input data. It supports the multiresolution analysis
of data i.e. it can be applied to different scales
according to the details required, which allows
progressive transmission and zooming of the image
without the need of extra storage. Another
encouraging feature of wavelet transform is its
symmetric nature: both the forward and the inverse
transforms have the same complexity, enabling fast
compression and decompression routines. Its
characteristics well suited for image compression
include the ability to take into account of Human
Visual Systems (HVS) characteristics, very good
energy compaction capabilities, robustness under
transmission, high compression ratio etc. The
implementation of wavelet compression scheme is
very similar to that of subband coding scheme: the
signal is decomposed using filter banks. The output
of the filter banks is down-sampled, quantized, and
encoded. The decoder decodes the coded
representation, up-samples and recomposes the
signal. Wavelet transform divides the information of
an image into approximation and detail subsignals.
The approximation subsignal shows the general trend
of pixel values and other three detail subsignals show
the vertical, horizontal and diagonal details or
changes in the images. If these details are very small
(threshold) then they can be set to zero without
significantly changing the image. The greater the
number of zeros the greater the compression ratio. If
the energy retained (amount of information retained
by an image after compression and decompression) is
100% then the compression is lossless as the image
can be reconstructed exactly. This occurs when the
threshold value is set to zero, meaning that the details
have not been changed. If any value is changed then
energy will be lost and thus lossy compression
occurs. As more zeros are obtained, more energy is
lost. Therefore, a balance between the two needs to
be found out.
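The decomposition-and-threshold idea above can be sketched in a few lines. This assumes one level of the 2×2 Haar average/difference in both directions and a simple magnitude threshold; it is an illustration of the principle, not the exact scheme of any particular codec.

```python
# One Haar level on a 2x2 block: an approximation value plus horizontal,
# vertical and diagonal details, followed by thresholding small details to 0.
def haar2d_level(block):
    a, b = block[0]
    c, d = block[1]
    approx = (a + b + c + d) / 4
    dh = ((b - a) + (d - c)) / 2   # horizontal detail
    dv = ((c - a) + (d - b)) / 2   # vertical detail
    dd = (d - c) - (b - a)         # diagonal detail
    return approx, dh, dv, dd

def threshold(values, t):
    # Zero out details below the threshold: more zeros, higher compression.
    return [0 if abs(v) < t else v for v in values]

approx, dh, dv, dd = haar2d_level([[10, 10], [10, 12]])
details = threshold([dh, dv, dd], t=1.5)
print(approx, details)
```

Zeroing the two small details loses a little energy (lossy compression); setting t=0 would keep all details and make the step lossless, exactly as described above.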
III. HAAR WAVELET TECHNIQUE
Wavelets can split a signal into two components. One
of these components, named S for smooth, contains
the important, large-scale information, and looks like
the signal. The other component, D for detail,
contains the local noise, and will be almost zero for a
sufficiently continuous or smooth signal. The smooth
signal should have the same average as the original
signal. If the input signal was given by a number of
samples, S and D together will have the same amount
of samples.
The simplest wavelet is the Haar wavelet. It is
defined for a input consisting of two numbers a and
b. The transform is:
s= (a+b)/2
d= b-a
The inverse of this transformation is
a=s-d/2
b=s+d/2
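As a quick illustration, the transform and its inverse round-trip exactly. A short Python sketch of the equations above (the function names are ours):

```python
# The two-sample Haar step and its inverse, exactly as in the equations above.
def haar_forward(a, b):
    s = (a + b) / 2   # smooth: the average
    d = b - a         # detail: the difference
    return s, d

def haar_inverse(s, d):
    return s - d / 2, s + d / 2

s, d = haar_forward(9, 7)
print(s, d)                 # 8.0 -2
a, b = haar_inverse(s, d)
print(a, b)                 # 9.0 7.0 -- the step is exactly reversible
```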
Discrete wavelets are traditionally defined on a line
of values:

…, x(-2), x(-1), x(0), x(1), x(2), …

For the Haar wavelet, pairs <x(2i), x(2i+1)> are formed,
such that every even value plays the a role, and the
odd value the b role.
Lifting is a way of describing and calculating
wavelets. It calculates wavelets in place, which
means that it takes no extra memory to do the
transform. Every wavelet can be written in lifting
form.
The first step of a lifting transform is a split of the
data in even and odd points. This step is followed by
a number of steps of the following form:
Scaling. All samples of one of the sets are
multiplied with a constant value.
Adding. To each sample of one of the wires a
linear combination of the coefficients of the
other wire is added.
At the end of the transform the even samples are
replaced by the smooth coefficients and the odd by
the detail coefficients.
It is common to picture lifting in an electric circuit.
The input is split in even wires at the upper wire, and
odd components on the lower wire. Samples of one
wire are repeatedly added to the other wire, until the
upper wire contains the smooth signal, and the lower
wire the detail signal. For the Haar wavelet only two
adding steps are necessary.
Because the smooth signal is again a continuous
signal, it is possible to repeat the whole trick again
and again and again to get maximal compression.
Applying the transform for the first time is level 0.
Applying it again on the smooth data is called the
level 1 transform, and so on.
There are many wavelets defined on rows of samples.
For image compression we need wavelets defined on
the two dimensional grid.
It is possible to apply a certain wavelet first
horizontally, splitting the data into a left part (the
smooth data) and a right part (the detail data), and
then vertically on all columns. The coefficients are
now of four kinds: SmoothSmooth, SmoothDetail,
DetailSmooth, DetailDetail. This is the Mallat scheme.
For the Haar wavelet, applied to a 2×2 block of pixels

a b
c d

it yields the following transformation:

a b  ->  (a+b)/2  b-a  ->  (a+b+c+d)/4  (b+d-a-c)/2
c d      (c+d)/2  d-c      (a+b-c-d)/2  d-c-b+a
To understand how wavelets work, let us start with a
simple example. Assume we have a 1D image with a
resolution of four pixels, having values [9 7 3 5].
Haar wavelet basis can be used to represent this
image by computing a wavelet transform. To do this,
first the average the pixels together, pairwise, is
calculated to get the new lower resolution image with
pixel values [8 4]. Clearly, some information is lost
in this averaging process. We need to store some
detail coefficients to recover the original four pixel
values from the two averaged values. In our example,
1 is chosen for the first detail coefficient, since the
average computed is 1 less than 9 and 1 more than 7.
This single number is used to recover the first two
pixels of our original four-pixel image. Similarly, the
second detail coefficient is -1, since 4 + (-1) = 3 and
4 - (-1) = 5. Thus, the original image is decomposed
into a lower resolution (two-pixel) version and a pair
of detail coefficients.
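The averaging-and-detail computation just described can be written out directly. This is a small Python sketch of our own, not code from the paper:

```python
# One level of averaging and differencing: pairwise averages plus the
# detail coefficients needed to recover the original pixels exactly.
def average_and_detail(pixels):
    averages = [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]
    # Detail = first member of each pair minus the pair's average.
    details = [pixels[i] - averages[i // 2] for i in range(0, len(pixels), 2)]
    return averages, details
```

For [9, 7, 3, 5] this gives averages [8, 4] and details [1, -1]; repeating on [8, 4] gives average 6 and detail 2, so the fully transformed image is [6, 2, 1, -1].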
IV. HOW THE WAVELET TRANSFORM WORKS
In order to transform a matrix representing an image
using the Haar wavelet transform, we will first
discuss the method of transforming vectors called
averaging and differencing. First, we start out with an
arbitrary vector representing one row of an 8 x 8
image matrix:
y = (448 768 704 640 1280 1408 1600 1600)
Because the data string has length 8 = 2^3, there will be three steps to the transform process (if the string were 2^k long, there would be k steps). For the first step our data string becomes:

y1 = (608 672 1344 1600 -160 32 -64 0)

We get this by first thinking of our original data as four pairs of numbers (448 & 768, 704 & 640, etc.). We average each of these pairs, and the results become the first four entries of our modified string y1. These numbers are known as approximation coefficients. Next we subtract these averages from the first member of each pair. These answers become the last four entries of y1 and are known as detail coefficients. The detail coefficients are repeated unchanged in each subsequent transformation of this data string. The visual interpretation of approximation and detail coefficients will be discussed in Section 5. For now we proceed with the second step, which changes our data string to:

y2 = (640 1472 -32 -128 -160 32 -64 0)

We get this by treating the approximation coefficients from the first step as pairs (608 & 672, 1344 & 1600). Once again we average the pairs, and the results become the first two entries of y2; these are our new approximation coefficients. We then subtract the averages from the first element of each pair. The results become the third and fourth elements of y2 (these are also detail coefficients, and are repeated in the next step). The last four elements of y2 are identical to those of y1. For the last step our data string becomes:

y3 = (1056 -416 -32 -128 -160 32 -64 0)

This time we obtain the first entry of y3 by averaging the two approximation coefficients of y2 (640 & 1472). We obtain the second entry by subtracting that average from the first element of the pair. This is the final detail coefficient, and it is followed by the detail coefficients from y2. The Haar wavelet does this transformation to each row of the image matrix, and then again to every column in the matrix. The resulting matrix is known as the Haar wavelet transform of the original matrix. It is important to note at this point that this process is entirely reversible. It is this fact that makes it possible to retrieve the original image from the Haar wavelet transform of the matrix.

V. A LINEAR ALGEBRA APPROACH TO WAVELET TRANSFORM

The averaging and differencing method that we just discussed is very effective but, as you can imagine, the calculations can quickly become quite tedious. We can, however, use matrix multiplication to do the grunt work for us. In our previous example, we can describe the transformation of y to y1 as

y1 = y A1

where A1 is the matrix

A1 =
   1/2    0     0     0    1/2    0     0     0
   1/2    0     0     0   -1/2    0     0     0
    0    1/2    0     0     0    1/2    0     0
    0    1/2    0     0     0   -1/2    0     0
    0     0    1/2    0     0     0    1/2    0
    0     0    1/2    0     0     0   -1/2    0
    0     0     0    1/2    0     0     0    1/2
    0     0     0    1/2    0     0     0   -1/2

It can next be shown that the transformation from y1 to y2 can be written as y2 = y1 A2, where A2 is the matrix

A2 =
   1/2    0    1/2    0     0     0     0     0
   1/2    0   -1/2    0     0     0     0     0
    0    1/2    0    1/2    0     0     0     0
    0    1/2    0   -1/2    0     0     0     0
    0     0     0     0     1     0     0     0
    0     0     0     0     0     1     0     0
    0     0     0     0     0     0     1     0
    0     0     0     0     0     0     0     1

And lastly we can show that y3 = y2 A3, where A3 is the matrix

A3 =
   1/2   1/2    0     0     0     0     0     0
   1/2  -1/2    0     0     0     0     0     0
    0     0     1     0     0     0     0     0
    0     0     0     1     0     0     0     0
    0     0     0     0     1     0     0     0
    0     0     0     0     0     1     0     0
    0     0     0     0     0     0     1     0
    0     0     0     0     0     0     0     1

This whole transformation can be done in one step, stated by the equation y3 = yW, where the transformation matrix W = A1 A2 A3. It is also important to note that, since every column of the individual matrices that comprise W is orthogonal to every other column, the matrices are invertible. Thus:
W = A1 A2 A3 =
   1/8   1/8   1/4    0    1/2    0     0     0
   1/8   1/8   1/4    0   -1/2    0     0     0
   1/8   1/8  -1/4    0     0    1/2    0     0
   1/8   1/8  -1/4    0     0   -1/2    0     0
   1/8  -1/8    0    1/4    0     0    1/2    0
   1/8  -1/8    0    1/4    0     0   -1/2    0
   1/8  -1/8    0   -1/4    0     0     0    1/2
   1/8  -1/8    0   -1/4    0     0     0   -1/2

and

W^-1 = A3^-1 A2^-1 A1^-1

This is what allows us to reconstitute our image from the compressed form. To return to our original data we use the equation

y = y3 W^-1

In general we can say that Q = I W, where Q is the row-transformed matrix and I is the original image matrix. But, as stated before, the Haar wavelet transformation does these transformations to each row of the image matrix, and then repeats them on each column of the matrix. This is done by multiplying I on the left by the transpose of W, which gives us our final equation for the row-and-column transformed matrix T:

T = ((I W)^T W)^T = W^T I W

And from this we can easily derive the equation to return our row-and-column transformed matrix to the original image matrix:

I = (T^T W^-1)^T W^-1 = (W^-1)^T T W^-1

Figure 4 shows the compression of an image by using wavelets: (a) the original image and (b) the compressed image, at a 10.3 : 1 compression ratio.

VII. CONCLUSION

A wavelet transform is a good starting point for a compression algorithm. The wavelet transform used in this paper accomplishes three things.

1. The total entropy gets lower, so after the transform the image can be compressed.
2. By discarding or coarsely quantizing the detail coefficients the image can be compressed even more.
3. Previews of the image can be constructed before the total file is received.

Wavelet theory also gives an entry point for a smart color transform, usable both for lossy and lossless compression.
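As a check on the linear-algebra formulation of Section V, the matrices A1, A2, A3 and W can be built and applied numerically. The following pure-Python sketch (our own, using exact fractions; the helper names are assumptions) reproduces y3 from y:

```python
from fractions import Fraction as F

def step_matrix(n, k):
    """n x n matrix averaging/differencing the first k pairs, identity
    elsewhere (k = 4, 2, 1 gives A1, A2, A3 for n = 8)."""
    A = [[F(0)] * n for _ in range(n)]
    h = F(1, 2)
    for p in range(k):
        A[2 * p][p] = A[2 * p + 1][p] = h   # averaging column -> entry p
        A[2 * p][k + p] = h                 # differencing column -> entry k+p
        A[2 * p + 1][k + p] = -h
    for j in range(2 * k, n):               # pass earlier details through
        A[j][j] = F(1)
    return A

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A1, A2, A3 = step_matrix(8, 4), step_matrix(8, 2), step_matrix(8, 1)
W = matmul(matmul(A1, A2), A3)

y = [448, 768, 704, 640, 1280, 1408, 1600, 1600]
y3 = [sum(F(v) * W[i][j] for i, v in enumerate(y)) for j in range(8)]
# y3 == [1056, -416, -32, -128, -160, 32, -64, 0]
```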
VIII. BIBLIOGRAPHY
1. Calderbank, A. R., Daubechies, I., Sweldens, W., and Yeo, B.-L., Lossless image compression using integer to integer wavelet transforms.
2. Chao, H., Fisher, P., and Hua, Z., An approach to integer reversible wavelet transformations for lossless image compression.
3. Gomes, J. and Velho, L., Image Processing for Computer Graphics.
IX. REFERENCES
[1] Talukder, K.H. and Harada, K., A Scheme of
Wavelet Based Compression of 2D Image, Proc.
IMECS, Hong Kong, pp. 531-536, June 2006.
[2] Ahmed, N., Natarajan, T., and Rao, K. R.,
Discrete Cosine Transform, IEEE Trans. Computers,
vol. C-23, Jan. 1974, pp. 90- 93.
[3] Pennebaker, W. B. and Mitchell, J. L. JPEG, Still
Image Data Compression Standards, Van Nostrand
Reinhold, 1993.
[4] Rao, K. R. and Yip, P., Discrete Cosine
Transforms - Algorithms, Advantages, Applications,
Academic Press, 1990.
[5] Wallace, G. K., The JPEG Still Picture
Compression Standard, Comm. ACM, vol. 34, no. 4,
April 1991, pp. 30-44.
[6] Eric J. Stollnitz, Tony D. Derose and David H.
Salesin, Wavelets for Computer Graphics- Theory
and Applications Book, Morgan Kaufmann
Publishers, Inc. San Francisco, California.
[7] Robert L. Cook and Tony DeRose, Wavelet
Noise, ACM Transactions on Graphics, July 2005,
Volume 24 Number 3, Proc. Of ACM SIGGRAPH
2005, pp. 803-811.
Pixel level Document Image Processing for an OCRsystem
Sunanda Verma*, D. P. Dwivedi**
* Student, M.Tech (ECE), Indra Gandhi Institute of Technology, Indraprastha University, Delhi
** Lecturer, Visveshwarya Institute of Engg & Technology, G.B. Nagar, Dadri
e-mail: sunandaverma@gmail.com, durgapdwivedi@gmail.com
Abstract
Image processing plays a vital role in a document image processing system. For a large-scale digitization process, various methods are available to produce an electronic version of a paper document, and scanning the paper document is one of the most suitable methods. Optical scanning is the technique applied to an image document, and it produces the raw input data for the optical character recognition (OCR) system. Since the computer system cannot understand the language of written documents, we need to convert these documents into electronic documents so that they can easily be processed by the computer system. OCR converts written text documents into e-documents. In this paper we determine the threshold value of a scanned image document by using a global thresholding method, which is based on Otsu's algorithm. On the basis of the threshold values obtained from the different methods, we can judge the quality of an image document and hence improve the quality of a scanned image document.
Keywords: Scanned documents, OCR,
Thresholding, Document image processing
1. Introduction
Nowadays, image processing plays a vital role in the field of scanned document processing. For a large-scale digitization process, various methods are available to provide an electronic version of a paper document. Scanned images provide a digital record of the paper document, and scanned documents have many uses in the private sector as well as in government sectors. Optical scanning is the technique applied to the document, and it forms the raw input of the optical character recognition system; the output produced by OCR is the set of recognized characters. The methodology employs preprocessing of the scanned document, which improves its quality for further processing through OCR.
Preprocessing is the very first step of scanned document analysis, performed so that the document being scanned gives more impressive results. The purpose of preprocessing is to improve the quality of the document being scanned. The preprocessing technique employed here is binarization, in which a scanned document is converted from color or grayscale into a bi-level representation. The objective of binarizing an image is to mark the pixels that belong to the true foreground regions with one intensity and the background regions with a different intensity. Binarization of the scanned document is done by a thresholding method, in which the grayscale image is converted into a binary image. After binarization of the image we clean the noise present in it, so that it gives better results for the OCR system.
2. Document Image Processing
Document Image Processing is an electronic
form of filing. In a DIP system, a document is
passed through a scanner and a digitized image
is then stored on a storage device, perhaps an
optical disk. This can then be retrieved and
shown on a computer screen. The image of the
document can include handwriting and
diagrams. The process is the same as that
employed in fax machine technology. That is,
the image is recorded but the system does not
identify the marks on the paper as letters or
numbers. A scanner scans a whole page of input
and records a pattern of dots, according to
whether areas of the paper original are black or
white.
3. Optical character recognition
OCR is an abbreviation of optical character recognition: the recognition of printed or written text characters by a computer. This involves scanning the text character by character, analysis of the scanned-in image, and then translation of each character image into a character code, such as an ASCII code, commonly used in data processing.
In OCR processing, the scanned image or
bitmap is analyzed for background and
foreground region in order to identify each
alphabetic letter or numeric digit. When a
character is recognized, it is converted into an
ASCII code. Special circuit boards and computer
chips designed expressly for OCR are used to
speed up the recognition process.
4. Thresholding

Thresholding is the technique that converts a grayscale image into a bi-level image, i.e. into the form of 0 or 1, in which 0 represents the black pixels and 1 represents the white pixels in the image document. The purpose of thresholding is to extract those pixels from the image document that represent the object, which can be either text or image.

Two different methods are available for thresholding:
(a) the global thresholding method
(b) the adaptive (local) thresholding method

In the global thresholding method, a single threshold is used for all the pixels of an image. It works when the pixel values of the components and of the background are fairly consistent in their respective values over the entire image document. Each pixel in the document is assigned to the foreground or background based on its gray value. Global methods are computationally inexpensive and they give good results for scanned documents.

In the adaptive (local) thresholding method, different threshold values are used for different local regions of the image. It creates a black-and-white image by analyzing each pixel with respect to its neighbourhood. The adaptive method also gives good results, but it is slower than the global thresholding method.

For binarization of the scanned document we use the global thresholding method, which is based on Otsu's algorithm.

Fig1. Graphical representation of threshold
5. Binarization
A binarization method binarizes an image by extracting a feature amount from the image. When a pixel is selected in the image, a sensitivity is added to and/or subtracted from the value of the selected pixel to set a threshold value range. Next, when another pixel is selected, the sensitivity is added to or subtracted from the value of that pixel, and a new threshold value range is set containing the calculation result and the already established threshold value range. Every pixel in the image whose value lies within the threshold value range is extracted as having the same brightness as the selected pixel, and the extraction result is displayed.
Fig2. Binarization of a scanned image document by thresholding

The above figure shows the preprocessing step, which includes binarization. Binarization is done by a thresholding method, the global thresholding method, which is based on Otsu's algorithm.
6. Otsu's algorithm

In 1979, Otsu proposed an algorithm for automatic threshold selection from the histogram of an image. This is a global thresholding method, which stores the intensities of the pixels in an array. On the basis of the threshold calculated by Otsu's algorithm, each pixel is set to either 0 or 1, i.e. background or foreground. The threshold of an image is calculated from the mean and variance of its gray levels.
In this method the pixels are divided into two classes, C1 with gray levels [1, ..., t] and C2 with gray levels [t+1, ..., L]. If p_i denotes the probability of gray level i, the probability distributions of the two classes are given by:

  w1(t) = p_1 + p_2 + ... + p_t
  w2(t) = p_(t+1) + ... + p_L

Also, the means for the two classes are:

  u1(t) = (sum over i = 1..t of i p_i) / w1(t)
  u2(t) = (sum over i = t+1..L of i p_i) / w2(t)

Using discriminant analysis, Otsu defined the between-class variance of the thresholded image as

  s_B^2(t) = w1(t) [u1(t) - u_T]^2 + w2(t) [u2(t) - u_T]^2

where u_T is the total mean of the image. For bi-level thresholding, Otsu verified that the optimal threshold t* is chosen so that the between-class variance s_B^2 is maximized; that is,

  t* = arg max s_B^2(t),  1 <= t < L
The advantage of Otsu's method is that it is very simple and easy to compute. Since it is a global algorithm, it is well suited to images whose foreground and background intensities are fairly consistent across the document.
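Otsu's threshold selection can be implemented directly from a gray-level histogram. The following is a minimal sketch of our own (function name assumed), using the equivalent form s_B^2(t) = w1 w2 (u1 - u2)^2 of the between-class variance:

```python
def otsu_threshold(hist):
    """Return the threshold t* maximizing the between-class variance.
    hist[i] = number of pixels with gray level i (levels 0..L-1)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w1 = sum1 = 0
    best_t, best_var = 0, -1.0
    for t in range(len(hist) - 1):
        w1 += hist[t]                 # class C1: levels 0..t
        if w1 == 0:
            continue
        w2 = total - w1               # class C2: levels t+1..L-1
        if w2 == 0:
            break
        sum1 += t * hist[t]
        m1 = sum1 / w1                # class means
        m2 = (sum_all - sum1) / w2
        var_between = w1 * w2 * (m1 - m2) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a clearly bimodal histogram the returned level separates the two peaks.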
7. Noise Cleaning
Noise cleaning of the scanned document in this paper is done by the erosion and dilation techniques. Both operations are fundamental to morphological processing. Erosion is a shrinking operation, while dilation grows or thickens the objects in a binary image. In dilation, the value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood; in a binary image, if any of those pixels is set to the value 1, the output pixel is set to 1. In erosion, the value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood; in a binary image, if any of those pixels is set to 0, the output pixel is set to 0.
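The max/min neighborhood rules can be sketched for a small binary image. This is our own illustration (a fixed 3 x 3 neighborhood is an assumption):

```python
def dilate(img):
    """3x3 dilation on a binary image: output 1 if any neighbor is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x]
                      for y in range(max(i - 1, 0), min(i + 2, h))
                      for x in range(max(j - 1, 0), min(j + 2, w)))
             else 0
             for j in range(w)] for i in range(h)]

def erode(img):
    """3x3 erosion on a binary image: output 1 only if all neighbors are 1."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[y][x]
                      for y in range(max(i - 1, 0), min(i + 2, h))
                      for x in range(max(j - 1, 0), min(j + 2, w)))
             else 0
             for j in range(w)] for i in range(h)]
```

Applying erosion followed by dilation (an opening) removes isolated one-pixel noise specks while restoring the bulk of larger objects.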
8. Results and conclusion
Fig3. Original scanned document image
Fig4. Grayscale image
Fig5. Binarized image
Fig6. Noise-cleaned image
Fig7. Image histogram

Threshold value calculated using Otsu's method = 123
From the results obtained on various scanned document images, we conclude that OCR cannot process a noisy image well, so we clean the noise present in the image, after which it works well with OCR. The threshold level calculated by Otsu's method is much better than the threshold calculated by other methods such as the peak-valley method. We also conclude that the erosion and dilation methods applied to the image give better noise-cleaning results than other methods such as the n-connectivity method.
9. Future work
In the future we will try to apply this method to images in different languages such as English text, Gurumukhi (Punjabi script) and Devanagari (Hindi script), try to achieve the best results for these different languages, and make the text of these images editable.
10. References
[1] J.Pradeep,E srinivasan and S.Himavathi , Diagonal
Feature Extraction Based Handwritten Character System
Using Neural Network International Journal of Computer
Applications (0975 8887) Volume 8 No.9, October
2010.
[2] S.V. Rajashekararadhya, and P.Vanajaranjan, Efficient
zone based feature extraction algorithm for handwritten
numeral recognition of four popular south-Indian scripts
Journal of Theoretical and Applied Information
Technology, JATIT vol.4, no.12, pp.1171-1181, 2008.
[3] Anil K. Jain and Torfinn Taxt, Feature extraction
methods for character recognition-A Survey, Pattern
Recognition,vol. 29, no. 4, pp. 641-662, 1996.
[4] Nitin Khanna and Edward J. Delp Source Scanner
Identification for Scanned Documents Video and Image
Processing Laboratory School of Electrical and Computer
Engineering Purdue University West Lafayette, Indiana
USA
[5] Shang Jin1, Yang You2, Yang Huafen3 A Scanned
Document Image Processing Model for Information
System 2010 Asia-Pacific Conference on Wearable
Computing Systems
[6] Otsu, N., 1979. A threshold selection method from
gray-level histograms. IEEE Trans. Systems, Man, and
Cybernetics, 9(1), pp. 62-66.
Image Resizing using Bilinear Interpolation
Navdeep Goel
Electronics & Communication Engineering Section
Yadavindra College of Engineering
Talwandi Sabo, Punjab, India
goel_navdeep@rediffmail.com
Nitu Jindal
M.Tech Student, ECE Section
Yadavindra College of Engineering
Talwandi Sabo, Punjab, India
Abstract- The fast development of multimedia computing has led to the demand for using digital images. The manipulation, storage and transmission of images in their raw form is very expensive: it significantly slows transmission and makes storage costly. Efficient image compression and enlargement solutions are becoming critical with the recent growth of data-intensive, multimedia-based applications. In this paper the bilinear interpolation algorithm is used for image scaling, or image resizing.
Keywords-Image resizing, Bilinear interpolation, Digital
images.
I. INTRODUCTION
Image resizing (magnification or reduction) is a common
operation in image processing [1]. It is used whenever one
wants to change the image resolution. For example, it is
required on a routine basis in digital photography, multimedia,
and electronic publishing [2], [3], for adapting the pixel size to
the resolution of an output device (printer or monitor) [4], [5],
and for generating preview images, or posting digital pictures
on the web.
Image resizing is an essential task in many image processing applications such as satellite imaging, medical imaging, digital photography, multimedia, etc. Image resizing modifies pictures to improve them; it means increasing or decreasing the total number of pixels. Resizing of images can be viewed as zooming and shrinking processes [6-12]. Zooming has two steps: creation of new pixel locations, and then assignment of gray levels to those new locations. Sometimes images are so small that it is difficult to view them; by image resizing we can increase the size of the image. For example, if we want to zoom a 600 x 600 pixel image by 1.5 times, the resulting image will have a resolution of 900 x 900. To visualize this, a grid of 900 x 900 pixels is laid over the original image, and the spacing within the grid will be less than one pixel [13]. To do the gray-level assignment in the overlay, the closest pixel in the original image is found and its gray level is assigned to the new pixel in the grid; after all the assignments are done, the grid is expanded to the specified size to give the zoomed image [14].
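The grid-overlay procedure just described amounts to nearest-neighbour assignment. A small Python sketch of our own (function name assumed):

```python
def zoom_nearest(img, factor):
    """Lay a larger grid over the image and copy the closest original
    pixel's gray level into each new grid location."""
    h, w = len(img), len(img[0])
    H, W = int(h * factor), int(w * factor)
    return [[img[min(int(i / factor), h - 1)][min(int(j / factor), w - 1)]
             for j in range(W)] for i in range(H)]
```

A 2 x 2 image zoomed by 1.5 becomes 3 x 3, with each new pixel taking the gray level of its nearest original pixel.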
We have to deal with huge amounts of digital data: we see digital movies, listen to digital music, read digital mail, store documents digitally and converse digitally. The manipulation, storage and transmission of images in their raw form is very expensive: it significantly slows transmission and makes storage costly. If there were no data compression techniques, we would not have been able to listen to songs over the Internet, see digital pictures or movies, or hear about video conferencing or telemedicine. So data compression plays a very significant role in keeping the digital world realistic [15].
What are the main advantages of data compression in
digital world? There may be many answers but the three
obvious reasons are the saving of memory space for storage,
channel bandwidth and the processing time for transmission
[4], [10]. The different image types are binary, indexed, intensity, and RGB. A binary image is a digital image that has only two possible values for each pixel. Typically the two colours used for a binary image are black and white, though any two colours can be used. In MATLAB, a binary image is represented by a uint8 or double logical matrix containing 0's and 1's (which usually represent black and white, respectively). An intensity image consists of intensity (gray-scale) values; images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. In MATLAB, intensity images are represented by an array of class uint8, uint16, or double. An RGB image is one in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel's colour. An RGB (red, green, blue) image is a three-dimensional byte array that explicitly stores a colour value for each pixel. In MATLAB, an RGB image is represented by an m by n by 3 array of class uint8, uint16, or double [11], [16].
When processing the selected image, a so-called "Frame" is formed to obtain the final resolution, comprising four images: the original one; the intermediate one obtained at the first step of the processing (reduction or magnification); the final one obtained at the second step of the processing (for one-step image processing the intermediate and final images are identical); and another one representing the visual (RGB) difference between the original and final images [11], [16].
Section II gives an introduction to the bilinear interpolation method used for image resizing. Section III gives the results of image resizing. Conclusions and references are finally drawn in Sections IV and V, respectively.
II. BILINEAR INTERPOLATION
The proposed method is the bilinear interpolation method. In mathematics, bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g., x and y) on a regular grid. The interpolated function does not use the terms x^2 or y^2, but the term xy, which is the bilinear form of x and y. The key idea is to perform linear interpolation first in one direction, and then again in the other direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location [13]. For zooming an image, bilinear interpolation uses the four nearest neighbours of a point. If (x, y) are the coordinates of the point in the zoomed image and the gray value assigned to it is v(x, y), then according to the bilinear interpolation method the gray level assigned is given by [13], [16], [17-18]:

  v(x, y) = ax + by + cxy + d    (1)

In equation (1), the coefficients a, b, c and d are determined from the values at the four nearest neighbours. Image shrinking can be done using the same methods as zooming. For pixel replication, the same method is applied with a deletion strategy: to shrink the image by half, delete every other row and column. In the same way, nearest-neighbour interpolation and bilinear interpolation can be visualized by laying the grid to fit over the original image [12], [13].

Bilinear interpolation determines the newly generated pixel from the weighted average of the four pixels closest to the specified input coordinates: NW (north-west), NE (north-east), SW (south-west), and SE (south-east). The bilinear method uses three linear interpolations. The first linear interpolation evaluates the first interpolated value (A) from the values at NW and NE. In the same way, the second interpolated value (B) is evaluated by linear interpolation from the values at SW and SE. The new pixel value (C) is then linearly interpolated from the two values previously obtained. This is represented by figure 1 and equation (2) [19]:

  A = NW + X_x (NE - NW)
  B = SW + X_x (SE - SW)    (2)
  C = B + Y_y (A - B)

Here X_x is the horizontal distance and Y_y is the vertical distance between the newly generated pixel and the neighbouring pixels [19].

Figure 1. Physical meaning of the bilinear interpolation

The four red dots in figure 2 show the data points, and the green dot is the point at which we want to interpolate.

Figure 2. Method of bilinear interpolation

Figure 3 shows an example of bilinear interpolation on the unit square with the z-values 0, 1, 1 and 0.5 as indicated; the interpolated values in between are represented by colour.

Figure 3. Example of bilinear interpolation

The result of bilinear interpolation is independent of the order of interpolation: if we had first performed the linear interpolation in the y-direction and then in the x-direction, the resulting approximation would be the same. In computer vision and image processing, bilinear interpolation is one of the basic re-sampling techniques [13], [18].

When an image needs to be scaled up, each pixel of the original image needs to be moved in a certain direction based on the scale constant. However, when scaling up an image by a non-integral scale factor, there are pixels (i.e. holes) that are not assigned appropriate pixel values. In this case, those holes should be assigned appropriate RGB or gray-scale values so that the output image does not have non-valued pixels [13], [16]. Bilinear interpolation can be used where perfect image transformation with pixel matching is impossible, so that one can calculate and assign appropriate intensity values to pixels. Unlike other interpolation techniques such as nearest-neighbour interpolation and bicubic interpolation, bilinear interpolation uses only the 4 nearest pixel values, located in diagonal directions from a given pixel, in order to find the appropriate colour intensity values of that pixel [16].

Bilinear interpolation considers the closest 2 x 2 neighborhood of known pixel values surrounding the unknown pixel's computed location. It then takes a weighted average of these 4 pixels to arrive at its final, interpolated value. The weight on each of the 4 pixel values is based on the computed pixel's distance (in 2D space) from each of the known points [18].
As seen in the example (in figure 4), the intensity value
at the pixel computed to be at row 20.2, column 14.5 can be
calculated by first interpolating between the values at columns 14 and 15, and then interpolating linearly between rows 20 and 21. This algorithm reduces some of the visual distortion caused by resizing an image by a non-integral zoom factor, as opposed to nearest-neighbour interpolation, which makes some pixels appear larger than others in the resized image [13], [16], [20].
Figure 4. Bilinear interpolation in grayscale values
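Equations (1)-(2) can be implemented directly. The following sketch is our own (function name assumed), with the fractional offsets measured from the NW/NE row, i.e. image row/column conventions; it applies to interior fractional locations:

```python
def bilinear(img, r, c):
    """Interpolate a gray value at fractional (row, col) from the 4 neighbors."""
    r0, c0 = int(r), int(c)
    dr, dc = r - r0, c - c0              # vertical / horizontal fractions
    nw, ne = img[r0][c0], img[r0][c0 + 1]
    sw, se = img[r0 + 1][c0], img[r0 + 1][c0 + 1]
    a = nw + dc * (ne - nw)              # interpolate along the top row
    b = sw + dc * (se - sw)              # interpolate along the bottom row
    return a + dr * (b - a)              # then between the two rows
```

For the 2 x 2 block [[0, 10], [20, 30]], the value at the center (0.5, 0.5) is 15, the average of the four neighbors.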
III. PERFORMANCE PARAMETERS AND SIMULATION SET-UP
This section shows an original image and results of image
resizing with compressed and zoomed images. In Table 1,
results of bilinear interpolation method are compared with
nearest neighbour method.
TABLE I. COMPARISON OF INTERPOLATION RESULTS
Method Average Error PSNR
Nearest neighbour 9.9761 21.74
Proposed method 7.1373 25.48
Figure 5. Actual Image without any Processing
Figure 6. Compressed Image using Bilinear Interpolation Method
IV. DISCUSSION AND CONCLUSION
It has been observed that the proposed method produces a sharper and better-quality image than conventional interpolation methods such as nearest neighbour. In the nearest-neighbour method the average error and PSNR are 9.9761 and 21.74 respectively, while the values observed for the bilinear interpolation method are 7.1373 and 25.48 respectively.

Figure 7. Zoomed Image using Bilinear Interpolation Method
REFERENCES
[1] W. K. Pratt, Digital Image Processing. New York: Wiley, 1978.
[2] V. Di Lecce, G. Dimauro, A. Guerriero, S. Impedovo, G. Pirlo, and A. Salzo, Electronic document image resizing, in Proc. Int. Conf. Document Analysis and Recognition, Los Alamitos, CA, Sept. 20-22, 1999.
[3] H. F. Schantz, Optical imaging and (OCR) recognition technology
(recognology), Remit. Doc. Process.Today, vol. 14, no. 7, pp. 1115,
1992.
[4] C. A. Glasbey and G. W. Horgan, Image Analysis for the Biological
Sciences , West Sussex, U.K.: Wiley, 1995.
[5] N. Hekotetou and A. N.Venetsanopoulos, Colour image interpolation
for high resolution acquisition and display devices, IEEE Trans.
Consum. Electron., vol. 41, no. 4, pp. 1118-1126, 1995.
[6] K. Jenson and D. Anastassiou, Spatial resolution enhancement of
images using non-linear interpolation , IEEE Int. Conf. Acoust.
Speech, Signal Processing,, 2045-2048,1990.
[7] R. G. Keys, Cubic convolution interpolation for digital image
processing , IEEE Transactions on Acoustics, Speech, and Signal
Processing 29 (6), 1153-1160,1981.
[8] L. Khriji and M. Gabbouj, Vector median-rational hybrid filters for
multichannel image processing , IEEE Signal Processing Letters 6 (7),
258-272,1991.
[9] H. Leung and S. Haykin, Detection and estimation using an adaptive
rational function filter , IEEE Trans. Signal Processing 42, 3366-
3376,1999.
[10] S. K. Park and R. A. Schowengerdt, Image reconstruction by
parametric cubic convolution , Computer Vision, Graphics and Image
Processing 23 (3), 258-272, 1983.
[11] C. M. Eden and M. Unser, High-quality image resizing using oblique
projection operators , IEEE Trans. Image Processing 7 (5), 679-692,
1998.
[12] H. S. Hou and H. C. Andrews, Cubic splines for image interpolation
and digital filtering , IEEE Transactions on Acoustics, Speech and
Signal Processing 26 (6), 508-517, 1978.
[13] Kang T sung, Chang, Computation for Bilinear Interpolation,
Introduction to geographic information systems. 5th Ed. New York,
NY: McGraw-Hill, 2009.
[14] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd edition, Prentice Hall.
[15] Vijay K. Madisetti, Douglas B. Williams,Digital Signal Processing
Handbook, Chapman & Hall CRC net BASE, 1999.
[16] Min Hu; Jieqing Tan , Feng Xue, A New Approach to the Image
Resizing Using Interpolating , received 8 May 2005; revised 25 June
2005.
[17] R., Keys, Cubic Convolution Interpolation for Digital Image
Processing , IEEE Trans on ASSP, vol ASSP-29, No. 6,1981.
[18] M., Gleicher A Brief Tutorial on Interpolation for Image Scaling ,
Dec. 1999.
[19] Young-Hyun Jun, Jong-Ho Yun, Jin-Sung Park, and Myung-Ryul Choi,
Design of an Image Interpolator for Low Computation Complexity ,
International Journal of Information Processing Systems, Vol.2, No.3,
Dec. 2006.
[20] K.R. Caslteman, Digital Image Processing, 2
nd
Edition, 1996,
Englewood.
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
OPTIMIZED FIR FILTER USING PSO
Mukhtiar Rana*, U. P. Singh**, Amrish Saini***
* M. Tech. Scholar, Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
** Asst. Prof., Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
*** M. Tech. Scholar, Electronics & Communication Engg., JMIT Radaur, Yamunanagar, INDIA
Abstract- The purpose of this paper is to optimize an FIR filter using PSO. The purpose of filters is to allow some frequencies to pass unaltered, while completely blocking others. Digital filters are mainly used for two purposes: separation of signals that have been combined, and restoration of signals that have been distorted in some way. Analog (electronic) filters can be used for these tasks, as they are cheap, fast, and have a large dynamic range in both amplitude and frequency; however, digital filters are vastly superior in level of performance. In this work, a type of digital filter, the FIR filter, is used to separate one band of frequencies from another. The primary attribute of FIR filters is their stability, because they are implemented by convolution rather than recursion. FIR filters are linear phase filters; both phase delay and group delay are constant in these filters. When the search space is too large to search exhaustively, population based searches may be a good alternative; however, population based search techniques cannot guarantee the optimal (best) solution. One population based search technique is Particle Swarm Optimization (PSO). The PSO algorithm uses characteristics similar to the Genetic Algorithm; however, the manner in which the two algorithms traverse the search space is fundamentally different.

Key words: FIR filter, PSO (particle swarm optimization)

I. INTRODUCTION
Digital filters are an essential part of DSP. In fact, their extraordinary performance is one of the key reasons that DSP has become so popular. The purpose of filters is to allow some frequencies to pass unaltered, while completely blocking others. Digital filters are mainly used for two purposes: separation of signals that have been combined, and restoration of signals that have been distorted in some way. Analog (electronic) filters can be used for these tasks, as they are cheap, fast, and have a large dynamic range in both amplitude and frequency; however, digital filters are vastly superior in level of performance. In this work, a type of digital filter, the FIR filter, is used to separate one band of frequencies from another. The primary attribute of FIR filters is their stability, because they are implemented by convolution rather than recursion. FIR filters are linear phase filters; both phase delay and group delay are constant in these filters [1]. Equalizer performance and complexity are two conflicting parameters; hence a compromise is usually sought. For equalization, many efficient adaptive algorithms have been developed, such as total least mean squares (TLMS) [2], [3]. The algorithm plays a key role in equalization, but the adaptive filter [4] is also central; we can use FIR or IIR filters. Since IIR filters are not easy to implement and their stability is in question, we can use Finite Impulse Response or Partial Impulse Response filters; this paper will show under which conditions which one is beneficial.

2. FIR FILTERS
In finite impulse response filters, the impulse response is of finite duration. This means that the impulse response of FIR filters has a finite number of non-zero terms. Consider the FIR filter with the input-output relationship governed by:

y[n] = Σ_{i=0}^{N} a_i x[n-i]        (1)

where x[n] and y[n] are the filter's input and output, respectively, and N is the filter order. The transfer function of this FIR filter can be written in the following general form:

H(z) = Σ_{i=0}^{N} a_i z^{-i}        (2)

An important task for the designer is to find values of a_i such that the magnitude response of the filter approximates a desired characteristic while preserving the stability of the designed filter. Stability is assured if all the poles of the filter lie inside the unit circle in the z-plane. Digital filters have various stages in their design.
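As a concrete illustration of equation (1), the difference equation can be sketched in a few lines of Python; the function name and the moving-average coefficients below are illustrative choices, not taken from the paper:

```python
import numpy as np

def fir_filter(a, x):
    """Apply an FIR filter y[n] = sum_{i=0}^{N} a_i * x[n-i].

    `a` holds the N+1 coefficients a_0..a_N; samples of x before
    n = 0 are taken as zero (causal filtering).
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(len(a)):
            if n - i >= 0:
                y[n] += a[i] * x[n - i]
    return y

# A 3-point moving average is the simplest FIR low-pass: a_i = 1/3.
x = np.array([3.0, 3.0, 3.0, 3.0])
print(fir_filter([1/3, 1/3, 1/3], x))  # [1. 2. 3. 3.] once the delay line fills
```

Because every output is a finite weighted sum of inputs (no recursion), the filter is unconditionally stable, which is the attribute emphasized above.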
3. PARTICLE SWARM OPTIMIZATION
The PSO algorithm is an adaptive algorithm based on a social-psychological metaphor; a population of individuals (referred to as particles) adapts by returning stochastically toward previously successful regions. Particle Swarm has two primary operators: velocity update and position update. During each generation, each particle is accelerated toward the particle's previous best position and the global best position. The new velocity value is then used to calculate the next position of the particle in the search space. The particle swarm algorithm is used here in terms of social cognitive behavior. It is widely used as a problem-solving method in engineering. In PSO, each potential solution is assigned a randomized velocity, and the potential solutions (particles) are flown through the problem space. Each particle adjusts its flight according to its own flying experience and its companions' flying experience. The ith particle is represented as Xi = (xi1, xi2, ..., xiD). Each particle is treated as a point in a D-dimensional space. The best previous position (the best
fitness value, called pBest) of any particle is recorded and represented as Pi = (pi1, pi2, ..., piD). Another best value (called gBest) is tracked over all the particles in the population; this location is represented as Pg = (pg1, pg2, ..., pgD). At each time step, the rate of position change (the velocity) of particle i is represented as Vi = (vi1, vi2, ..., viD). Each particle moves toward its pBest and gBest locations. The performance of each particle is measured according to a fitness function, which is related to the problem to be solved [3].
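The velocity and position updates described above can be sketched as follows; the inertia weight w, acceleration coefficients c1 and c2, search bounds and test function are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer that maximizes `fitness`.

    Each particle keeps its own best position (pBest); the swarm keeps
    the overall best (gBest). Velocities are nudged toward both.
    """
    X = rng.uniform(-5, 5, (n_particles, dim))   # positions
    V = rng.uniform(-1, 1, (n_particles, dim))   # velocities
    pbest = X.copy()
    pbest_val = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive (pBest) + social (gBest) pulls
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V                                 # position update
        vals = np.array([fitness(x) for x in X])
        improved = vals > pbest_val               # per-particle pBest update
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        if pbest_val.max() > fitness(gbest):      # swarm-wide gBest update
            gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

# Maximize -(x-2)^2 - (y+1)^2; the optimum is at (2, -1).
best = pso(lambda p: -(p[0] - 2) ** 2 - (p[1] + 1) ** 2, dim=2)
print(best)  # close to [2, -1]
```

For FIR design, the fitness function would instead score how closely the magnitude response of a candidate coefficient vector matches the desired characteristic.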
(Flow diagram of PSO: initialize each particle with random position and velocity; evaluate the desired optimization fitness function F of each particle; if F(Xi) > F(pBest), update pBest, and if F(Xi) > F(gBest), update gBest; update the velocity and position of each particle and move on to the next particle; if the stopping criterion is met, pBest is the best solution.)

4. SIMULATION RESULTS
Simulation is carried out for certain specifications, such as T1 = 5 dB, T2 = 7 dB and central frequency = 1000 Hz. The plot of magnitude against normalized frequency is shown in Figure 1.

(Figure 1: Magnitude response (dB) versus normalized frequency (rad/sample).)

5. CONCLUSION
To design filters with special requirements, such as a trade-off in norms or concerning quantization effects, there is a need for more general optimization techniques. FIR digital filters are widely used in the field of signal processing due to distinguishing features such as stability, linear phase and ease of realization. Traditionally, there exist some methods for FIR digital filter design, such as the window method, the frequency sampling method and best uniform approximation. Unfortunately, each of them is only suitable for a particular application. In recent years, many evolutionary computation techniques, such as the simulated annealing approach (SA), genetic algorithms (GA) and particle swarm optimization (PSO), have been employed to design FIR digital filters. GA is a good global searching method, but it is difficult to realize because of the complexity of coding. PSO is a recently proposed random search algorithm that has been applied to many real-world problems; it has good global searching ability and gave good results.

6. REFERENCES
[1] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications. New Delhi: Pearson Education, 2004.
[2] D. Lavry, "Understanding FIR (Finite Impulse Response) Filters - An Intuitive Approach", Lavry Engineering.
[3] J. Kennedy and R. Eberhart, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann Publishers, 2001.
[4] B. P. Lathi, Signal Processing & Linear Systems. Berkeley-Cambridge Press, 1998.
[5] S. Sharma, Digital Signal Processing. S. K. Kataria & Sons.
[6] A. V. Oppenheim, R. W. Schafer and J. R. Buck, Discrete-Time Signal Processing. Prentice Hall, ISBN 0-13-754920-2.
MULTISPECTRAL IMAGE RESTORATION
USING WAVELET
Manju Singh, IGIT, GGSIPU
Email: singhmanju027@gmail.com
Abstract-- In this paper, multispectral image restoration is performed. Remote sensing images are often degraded due to atmospheric effects or physical limitations. As a consequence, the images are corrupted by blur and noise. To restore the image, the expectation-maximization algorithm is used together with the wavelet transform. This algorithm alternately applies a deconvolution (E) step and a denoising (M) step. The goal of image restoration is to recover the original image. The restoration is improved by using a multispectral approach instead of a bandwise approach. The EM algorithm herein proposed combines Fourier-domain deconvolution with the efficient image representation offered by the discrete wavelet transform. The restoration uses both the within-channel (spatial) and between-channel (spectral) correlation; hence, the restored result is a better estimate than that produced by independent channel or band restoration.

Index terms-- Multispectral images, image restoration, image denoising, wavelets, expectation-maximization algorithm.
1. INTRODUCTION
A multi-spectral image is a collection
of several monochrome images of the same
scene, each of them taken with a different
sensor. Each image is referred to as a band. A well known multi-spectral (or multi-band) image is an RGB color image, consisting of a red, a green and a blue image, each taken with a sensor sensitive to a different wavelength. In image processing, multispectral images are most commonly used for remote sensing applications. Satellites usually take several images from frequency bands in the visual and non-visual range. Remote sensing images are subject to degradation caused by different kinds of atmospheric effects and physical limitations of the sensors. The degradation is apparent as blurring, affecting the spatial resolution, and noise added on top. In the case of multispectral images, each of the bands is degraded, in general with different blurring and noise levels. The goal of image restoration is to recover the original image from the degraded image. The simplest way to restore a multispectral image is the bandwise method, which restores each band separately. As we observe, spectral bands are generally very similar and correlated, and often convey information about each other. Therefore a multiband treatment during image restoration is superior to a bandwise treatment.

Restoration of an image needs both deconvolution and denoising, to remove the blurring and to suppress the noise, respectively. Deconvolution is usually dealt with in the Fourier domain. However, a disadvantage of the Fourier transform is that it does not efficiently represent image singularities, so only a small amount of shrinkage is allowed to avoid distorting the edges in the image. The wavelet domain, on the other hand, is optimal for approximating images. The wavelet transform of an image is generally sparse, meaning that only a fairly small number of non-zero coefficients can approximate the image accurately [3]. This property makes the wavelet transform well suited for image denoising, which has been demonstrated in countless effective denoising schemes. Therefore it is natural to combine Fourier deconvolution with wavelet denoising.
Our proposed method is based on an
expectation-maximization (EM) algorithm,
which is an iterative procedure developed to maximize the likelihood function. Each iteration consists of two steps: a Fourier-based E-step and a wavelet-based M-step, the E-step solving the deconvolution and the M-step managing the noise. Here we use the extended EM algorithm [1] restoration strategy for multispectral images. The procedure is constructed as follows. The E-step is formulated by extending the spatial blurring operator towards an operator allowing for spatial as well as spectral blurring. In the M-step of the iterative procedure, a state-of-the-art multispectral image denoising procedure is introduced. Wavelet-based multispectral image denoising has been treated recently [2]. In these works, multivariate probability density functions of the images were proposed that account for the correlation between the image bands. Recently, a multispectral image denoising algorithm has been proposed [2] where, within the same Bayesian framework, extra prior information was included in the form of a co-registered noise-free, single-band image. We will introduce this extension [8] as well in the M-step of our restoration procedure. The EM-based restoration technique as well as the denoising procedure assume that the blurring functions and the noise variance are known. The proposed multispectral restoration procedure is validated on real multispectral data, with simulated blurring and noise, allowing for quantitative validation.

2. EXPECTATION-MAXIMIZATION ALGORITHM
Image restoration aims at recovering an original image X from a degraded observed version Y. The observed multicomponent image Y of size M x M x K consists of an unknown signal S, which is first degraded by circular convolution with a known impulse response from a linear system H and then corrupted by some additive noise:

Y = HS + N

where the noise N is zero-mean white Gaussian with covariance σn² and H is the degradation of the multichannel image [3].

The restoration of this image involves both deconvolution and denoising. Since it is not evident how to do both simultaneously, they will be done sequentially, using an expectation-maximization algorithm. The idea is to perform both functions in two steps; the above equation is decomposed as

Y = HX + N2,  where X = S + N1

with HN1 + N2 = N. As a result, the original problem has been split up into a deconvolution problem and a denoising problem. The unknown image X can now be regarded as the missing data.

The EM algorithm is used for finding maximum likelihood estimates of distribution parameters given observed data. It is used when the likelihood function is too complicated to maximize easily, but can be simplified by introducing some additional latent variables. The goal is to find an Smax which maximizes the complete data probability density function:

Smax = arg max_S p(Y, X | S)

The first step of the iterative two-step EM algorithm [1], the E-step, computes the so-called Q-function

Q(S, Ŝ^(k)) = E[ log p(Y, X | S) | Y, Ŝ^(k) ]

using the observed image Y and the estimate of the kth iteration. The second step, the M-step, maximizes this Q-function and calculates a new maximum penalized likelihood estimate

Ŝ^(k+1) = arg max_S [ Q(S, Ŝ^(k)) - φ(S) ]

where the penalty function φ(S) is introduced to suppress the noise.
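The observation model Y = HS + N, with H acting as a circular convolution on each band, can be sketched numerically; the band count, image size, blur kernel and noise level below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multicomponent image S: K bands of size M x M (synthetic values).
M, K = 16, 3
S = rng.random((K, M, M))

# Circular convolution with a known blur H is a pointwise product in
# the Fourier domain, so H is represented by its 2-D DFT.
h = np.zeros((M, M))
h[:3, :3] = 1.0 / 9.0                      # small uniform blur kernel (sums to 1)
H = np.fft.fft2(h)

sigma_n = 0.05                             # noise standard deviation (assumed known)
N = sigma_n * rng.standard_normal((K, M, M))

# Observation model Y = HS + N, applied band by band.
Y = np.real(np.fft.ifft2(H * np.fft.fft2(S, axes=(1, 2)), axes=(1, 2))) + N
print(Y.shape)  # (3, 16, 16)
```

Since the kernel sums to one, the blur preserves each band's mean, which is the property the M-step relies on later.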
3. EXPECTATION STEP
Using Bayes' rule and the independence of Y on S when conditioned on X, the conditional probability density function can be written as

log p(Y, X | S) = log [ p(Y | X, S) p(X | S) ]
               = log [ p(Y | X) p(X | S) ]
               ∝ -1/2 (S - X)^T Cn^{-1} (S - X)

omitting all terms not depending on S. To estimate X^(k), the probability density function p(X | Y, Ŝ^(k)) ∝ p(Y | X) p(X | Ŝ^(k)) will be integrated, which leads to

X^(k) = E[ X | Y, Ŝ^(k) ] = Ŝ^(k) + H^T (Y - H Ŝ^(k))

Usually, this step can easily be dealt with in the Fourier domain, because the blurring function H is diagonal in the Fourier domain.
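The Fourier-domain E-step X^(k) = Ŝ^(k) + H^T(Y - HŜ^(k)) can be sketched as below; the helper name and the identity-blur sanity check are illustrative assumptions, not code from the paper:

```python
import numpy as np

def e_step(Y, S_hat, H):
    """Compute X = S_hat + H^T (Y - H S_hat) for circular convolution.

    H is diagonal in the Fourier domain, so applying H is a pointwise
    product and applying H^T is a product with the complex conjugate.
    """
    Sf = np.fft.fft2(S_hat)
    residual = np.fft.fft2(Y) - H * Sf           # Y - H S_hat, in Fourier space
    return np.real(np.fft.ifft2(Sf + np.conj(H) * residual))

# Sanity check: with an identity blur and Y equal to the current estimate,
# the residual vanishes and the E-step returns the estimate unchanged.
M = 8
S_hat = np.ones((M, M))
h = np.zeros((M, M)); h[0, 0] = 1.0          # delta kernel -> H is all ones
H = np.fft.fft2(h)
X = e_step(S_hat, S_hat, H)
print(np.allclose(X, S_hat))  # True
```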
4. MAXIMIZATION STEP
A new estimate Ŝ^(k+1) is then calculated as

ŝ^(k+1) = arg max_s [ -||s - x^(k)||² - 2 σn² φ(s) ]

where s and x are the wavelet transforms of S and X, respectively. The penalty function φ(s) is linked to a prior distribution p(s) of the wavelet coefficients of the image; the choice of prior leads to a specific MAP estimate. For a Gaussian prior,

φ(s) = -log p(s) ∝ 1/2 (s Cs^{-1} s^T)

For denoising of multispectral images, making use of inter-band correlation is essential: an image discontinuity appearing in one image band is likely to appear in at least some of the remaining bands. The best representation in which to estimate the noise-free signal is the wavelet transform, which offers an efficient representation of spatial discontinuities within each spectral band. For a multispectral imaging system, there will be assumed to exist a total of N separate images, i.e., the number of spectral bands through which the system collects images is N. We will further assume that the number of samples in each multispectral image is M by M, i.e., M² total samples. We finally assume that each of the individual images in the multispectral array is perfectly registered with one another.

To solve the complete problem, a prior distribution p(s) of the wavelet coefficients is required. For grayscale images [2], different priors were proposed to better approximate the marginal densities. In this paper we use a multivariate GSM (Gaussian scale mixture). A random variable S is said to be a GSM if its density can be decomposed into a linear combination (finite or infinite) of zero-mean Gaussian densities, i.e.,

p(s) = ∫ p(s | z) p(z) dz

where p(z) is the mixing density and p(s | z) is a zero-mean Gaussian. Using the GSM prior, the additive noise model becomes

x = s + n = √z u + n

where u is zero-mean Gaussian distributed and z is an independent positive scalar random variable. p(s | z) has covariance C_{s|z} = z Cu or, by taking expectations over z with E[z] = 1, Cs = Cu. GSM densities are symmetric, zero-mean and heavier-tailed than Gaussians [2], and are known to better model the shape of the wavelet coefficient marginals than Gaussians. A prior density for z is required; a widely used choice is Jeffreys' prior,

p(z) ∝ 1/z

The Bayes least squares estimate E[s | x] is given by

E[s | x] = ∫ s p(s | x) ds = ∫∫ s p(s, z | x) dz ds = ∫ p(z | x) E[s | x, z] dz

The posterior distribution of z can be obtained using Bayes' rule and, since the GSM model s conditioned on z is Gaussian, the expected value within the integral is given by a Wiener estimate:

E[s | x, z] = z Cu (z Cu + Cn)^{-1} x

The mean of the images is preserved: blurring does not change the mean, so the mean of X^(k) is exactly the same as the mean of Ŝ^(k-1). During the M-step, only the subbands with detail coefficients are denoised, while the coarse image is kept unchanged.
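For scalar wavelet coefficients, the Bayes least squares estimate under the GSM prior with Jeffreys' mixing density can be sketched by discretizing z; the grid, variances and function name below are illustrative choices, not values from the paper:

```python
import numpy as np

def wiener_gsm_estimate(x, z_grid, C_u, C_n):
    """Sketch of E[s|x] = integral of p(z|x) E[s|x,z] dz for a scalar
    coefficient x under a GSM prior.

    z_grid discretizes the mixing variable; Jeffreys' prior 1/z is
    normalized over the grid, and E[s|x,z] = z*C_u/(z*C_u + C_n) * x
    is the per-z Wiener estimate.
    """
    pz = 1.0 / z_grid
    pz /= pz.sum()
    var = z_grid * C_u + C_n                  # variance of x given z
    lik = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    post = pz * lik                           # posterior p(z|x) via Bayes' rule
    post /= post.sum()
    shrink = z_grid * C_u / var               # Wiener gain for each z
    return np.sum(post * shrink) * x

z_grid = np.linspace(0.05, 4.0, 80)
est = wiener_gsm_estimate(x=1.5, z_grid=z_grid, C_u=1.0, C_n=0.25)
print(0.0 < est < 1.5)  # True: the noisy coefficient is shrunk toward zero
```

Since every Wiener gain lies between 0 and 1, the posterior-weighted estimate always shrinks the observed coefficient, which is the denoising behavior described above.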
CONCLUSION:
In this paper, a restoration technique for multispectral images is presented. The multispectral procedure is based on an iterative expectation-maximization algorithm, alternately applying a deconvolution and a denoising step. The deconvolution technique allows for the reconstruction of spatial as well as spectral blurring. The denoising step is performed in the wavelet domain, using a multispectral probability density model for the wavelet coefficients. Instead of a simple multinormal model, a heavy-tailed Gaussian scale mixture model has been chosen. It makes use of Gaussian scale mixtures as prior models that approximate well the marginal distributions of the wavelet coefficients, and makes use of a noise-free image as extra prior information.

(a) Original image  (b) Image with pepper noise  (c) Restored image
REFERENCES:
[1] A. P. Dempster, N. M. Laird and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm", J. Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[2] P. Scheunders and S. De Backer, "Wavelet denoising of multicomponent images, using Gaussian scale mixture models and a noise-free image as priors", IEEE Trans. Image Process., vol. 16, no. 7, pp. 1865-1872, 2007.
[3] S. G. Chang, B. Yu and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression", IEEE Trans. Image Process., vol. 9, no. 9, pp. 1532-1546, 2000.
[4] B. R. Hunt and O. Kubler, "Karhunen-Loeve multispectral image restoration, part I: Theory", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-32, no. 3, pp. 592-600, 1984.
[5] R. R. Schultz and R. L. Stevenson, "Stochastic modeling and estimation of multispectral image data", IEEE Trans. Image Process., vol. 4, no. 8, pp. 1109-1119, 1995.
[6] A. Benazza-Benyahia and J.-C. Pesquet, "Building robust wavelet estimators for multicomponent images using Stein's principle", IEEE Trans. Image Process., vol. 14, no. 11, pp. 1814-1830, Nov. 2005.
[7] R. Neelamani, H. Choi and R. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems", IEEE Trans. Signal Process., vol. 52, no. 2, pp. 418-433, 2004.
[8] M. A. T. Figueiredo and R. D. Nowak, "An EM algorithm for wavelet-based image restoration", IEEE Trans. Image Process., vol. 12, no. 8, pp. 906-916, 2003.
RFID ACCESS CONTROL SYSTEM COMBINING WITH FACE RECOGNITION
Anubhav Srivastava¹, Pranshi Agarwal¹, Usha Kumari²
¹ Student, ² Asst. Prof., I.T.S. Engineering College, Greater Noida (U.P.)
Abstract:
RFID has been widely adopted in access control as an identity identification technology. This system can be combined with the face recognition technique to prevent an RFID card being used by unauthorized people. In this paper, we propose a method ensuring that such a spoof can be rejected. The eigenface approach to face recognition can improve the validity of the access control system effectively.

Keywords:
Face recognition, RFID, eigenfaces, feature vectors

I Introduction
Radio Frequency Identification (RFID) is a wireless technology which uses radio waves to edit data in an identification chip [6]. The identification chip, called an RFID tag, stores its own ID and a small amount of application data that can be retrieved by RFID readers [17]. RFID technology has been widely adopted in many areas, such as physical distribution, traffic control, automatic toll collection, product tracking, identification and access control [10].

Among these applications, access control with RFID has drawn much attention nowadays for the advantage that it allows convenient contactless access. The first RFID-based access control system, using an RFID key card to open an electronic lock, was invented by Charles Walton in 1973 [14]. However, card-based or key-based access controls share the same problem: anyone who picks up or steals the card can get the access granted to the real owner. With the wide-spreading application of RFID, its security threats affect people's daily lives more and more seriously.

Biometrics, which refers to the automatic identification of individual identity by exploiting one's physiological characteristics, such as the face, iris, fingerprints and gait [8], [5], is another approach for security systems, since biometric data are unique and stable for individuals. Facial data is unique personal privacy information, which is permanently associated with a person [2]. Therefore, by combining the RFID access control system with face recognition technology, this problem can be solved to some extent. However, a common face recognition system does not know whether the face presented in front of the camera is a real human's face or just a photo. In this paper, we propose an access control system embedded with a face recognition subsystem as well as an anti-spoof mechanism which can reject such delusion.

II Key Concepts
In this section, we introduce recent developments in anti-spoofing technology and provide brief descriptions of the eigenface approach to face recognition.
2.1. Recent Development of Anti-spoofing
Technology [1]
Several methods have been developed to avert photograph spoofing attacks. Assuming a photograph would generate a constant depth map while a real human face would produce a varying one, T. Choudhury and B. Clarkson suggested a method of constructing a relative depth map by tracking facial landmarks: the eyes and the mouth [15], but there has been no further research since then. When the head is still, it is hard to estimate the depth map, and the resulting estimate is very sensitive to the lighting conditions and the noise produced by the camera. Non-rigid deformation and appearance change is a distinct characteristic of a live face compared to a photograph. Some systems were designed to evaluate facial expression changes or movements, especially eye-blinking, based on statistical models and the motion of the lips [11], [4], but the reliable performance of these systems needs high quality input video. K. Kollreider detected facial organ motion information by analyzing the optical flow field for liveness judgment [9], but it would be vulnerable to photo blending and translation of the photo. There are some approaches involving the speech modality, trying to detect fake-face attacks by exploiting fused audio-visual features [3], [12]. One of these is M. I. Faraj and J. Bigun's study based on the possibility of recognizing utterances (digits 0-9) from lip motion [13], but this also requires high resolution videos. Li suggested exploiting the Fourier spectrum to classify live faces and fake-face attacks, and claimed that the high frequency components of a photograph are less than those of live face images [7].
III Face recognition based on the eigenface approach
For the purpose of face recognition, we compute eigenvalues using MATLAB 7.2. In this work we used face images from two standard face databases, the ORL database and the UMIST database, for face processing and eigenface computation. The various steps used in generating the eigenfaces are as follows:

1. Image file conversion
The face images in the ORL and UMIST face databases are in portable grey map (PGM) format. All the images are converted into bitmap (BMP) file format, which is a standard Windows-based image format that is also workable in MATLAB 7.2.

2. Cropping of face images
When the face images are taken, the images also contain some background in addition to the actual face area. Cropping is the process of selecting the face area from the face images.
3. Calculating the mean face
The mean face is calculated by taking the pixel by pixel average of all the faces in the database. Let the training set of face images be Γ1, Γ2, Γ3, ..., ΓM. The average face of the set is defined by

Ψ = (1/M) Σ_{n=1}^{M} Γn        (1)

In the ORL database the number of images is 250 (10 faces per person for 25 persons), so M = 250. For the UMIST face database, 19 faces are taken for each person and the number of persons considered is 15, so M = 285.

4. Calculation of difference images
Once the mean image has been calculated, it is subtracted from the individual face images one by one, resulting in difference images. Each face differs from the average by the vector

Φi = Γi - Ψ        (2)

The difference is calculated on a pixel by pixel basis, i.e., the value of the (x,y)th pixel of the mean face is subtracted from the value of the (x,y)th pixel of the original face image.

5. Calculation of the covariance matrix
The covariance matrix C stores the difference images in matrix form, thus capturing the deviations of the face images from the mean face. The covariance matrix represents the variation across the faces of the face database:

C = (1/M) Σ_{n=1}^{M} Φn Φn^T        (3)

(Figure 1. Flowchart of the security system: read an RFID card; fetch the card owner's ID; activate the camera; face detection; anti-spoofing judgement; on pass, face recognition against the RFID and face image database; if the faces match, the security test is passed, otherwise a security alert is raised.)

6. Determining the eigenfaces
The above equation makes it clear that the covariance matrix is the product of the difference matrix and the transpose of the difference matrix, calculated over all the M training faces. The eigenvectors and eigenvalues are defined for this covariance matrix. The matrix C, however, is N² by N² (where N is the number of pixels in both width and height of each face image), and determining N² eigenvectors and eigenvalues is an intractable task for typical image sizes. We need computationally feasible methods to find these eigenvectors. For the ORL and UMIST databases the faces have been cropped to 92 pixels each in width and height, so the covariance matrix is of order 8464.

Rather than calculating the covariance matrix, which is of order N² by N², we reduce the dimensionality of the problem based on the approach of principal components. If the number of data points in the image space is less than the dimension of the space, there will be only M - 1 (where M is the number of faces in the training set), rather than N², meaningful eigenvectors (the remaining eigenvectors will have associated eigenvalues of zero). So we can solve for the N²-dimensional eigenvectors by solving for the eigenvectors of an M by M matrix and then taking appropriate linear combinations of the face images. The dimensionality of the covariance problem is thus reduced from the total number of pixels in the face image to the number of images in the training set.

For the ORL database, M = 250 and the total number of pixels is N² = 8464; for the UMIST database, M = 285 and N² = 8464. So the number of eigenfaces needed to capture the variation of the ORL database is reduced from 8464 to 250, and for the UMIST database only 285 eigenfaces are sufficient to represent the variations in the faces.

As a property of the eigenfaces, each of them has an eigenvalue associated with it. More importantly, eigenvectors with bigger eigenvalues provide more information on the face variation than those with smaller eigenvalues.
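The mean-face, difference-image and reduced M x M eigendecomposition steps above can be sketched as follows; the toy image size and random training data are stand-ins for the 92 x 92 ORL/UMIST faces:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set: M face images of N x N pixels, flattened into columns.
M, N = 20, 8
faces = rng.random((N * N, M))

mean_face = faces.mean(axis=1, keepdims=True)   # pixel-by-pixel average (eq. 1)
Phi = faces - mean_face                         # difference images (eq. 2)

# Instead of the N^2 x N^2 covariance (1/M) Phi @ Phi.T (eq. 3), we
# eigendecompose the small M x M matrix (1/M) Phi.T @ Phi; at most
# M - 1 of its eigenvalues are non-zero.
small = Phi.T @ Phi / M
eigvals, eigvecs = np.linalg.eigh(small)

# Eigenfaces are linear combinations of the difference images.
eigenfaces = Phi @ eigvecs

print(eigenfaces.shape)         # (64, 20): one candidate eigenface per training image
print(np.sum(eigvals > 1e-10))  # 19 = M - 1 meaningful eigenvectors
```

The last print confirms the dimensionality argument above: because the difference images sum to zero, one eigenvalue vanishes and only M - 1 eigenvectors carry information.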
Conclusions
In this paper we illustrate an anti-spoofing mechanism for an access control system constructed from an RFID subsystem and a face recognition system, as shown in Fig. 1, in order to enhance the security level. This mechanism utilizes the eigenface approach, which has been optimized for the grey scale ORL and UMIST face databases. The eigenface-based approach excels in its speed, simplicity and learning capability.
References
[1] Bing-Zhong Jing, Patrick P. K. Chan, Wing W. Y. Ng, and Daniel S. Yeung, "Anti-spoofing system for RFID access control system combining with face recognition," Ninth International Conference on Machine Learning and Cybernetics, Qingdao, July 2010.
[2] J. Wayman, A. Jain, D. Maltoni, and D. Maio, eds., Biometric Systems: Technology, Design and Performance Evaluation, Springer, 2005.
[3] G. Chetty and M. Wagner, "Liveness verification in audio-video speaker authentication," 10th Australian Int. Conference on Speech Science and Technology, pp. 358-363, Sydney, Australia.
[4] H. K. Jee, S. U. Jung, and J. H. Yoo, "Liveness detection for embedded face recognition system," International Journal of Biomedical Sciences, vol. 1, no. 4, pp. 235-238, 2006.
[5] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 18-39, Jan. 2008.
[6] M. M. Hossain and V. R. Prybutok, "Consumer acceptance of RFID technology: An exploratory study," IEEE Transactions on Engineering Management, vol. 55, no. 2, pp. 316-328, May 2008.
[7] J. Li, Y. Wang, T. Tan, and A. K. Jain, "Live face detection based on the analysis of Fourier spectra," Biometric Technology for Human Identification, SPIE vol. 5404, pp. 296-303, August 2004.
[8] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using kernel direct discriminant analysis algorithms," IEEE Trans. Neural Netw., vol. 14, no. 1, pp. 117-126, Jan. 2003.
[9] K. Kollreider, H. Fronthaler, and J. Bigun, "Evaluating liveness by face images and the structure tensor," Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 75-80, Oct. 2005.
[10] J. Landt, "The history of RFID," IEEE Potentials, vol. 24, no. 4, pp. 8-11, Oct.-Nov. 2005.
[11] L. Sun, G. Pan, and Z. Wu, "Blinking-based live face detection using conditional random fields," International Conference on Biometrics, Lecture Notes in Computer Science, vol. 4261, pp. 252-260, Aug. 2007.
[12] M. I. Faraj and J. Bigun, "Audio visual person authentication using lip-motion from orientation maps," Pattern Recognition Letters, vol. 28, no. 11, pp. 1368-1382, 2007.
[13] M. I. Faraj and J. Bigun, "Lip biometrics for digit recognition," Int. Conference on Computer Analysis of Images and Patterns, LNCS vol. 4673, pp. 360-366, 2007.
[14] M. R. Rieback, B. Crispo, and A. S. Tanenbaum, "The evolution of RFID security," IEEE Pervasive Computing, vol. 5, no. 1, pp. 62-69, Jan. 2006.
[15] T. Choudhury, B. Clarkson, T. Jebara, and A. Pentland, "Multimodal person recognition using unconstrained audio and video," 2nd AVBPA, Washington D.C., 22-23 March 1999.
[16] Usha Kumari and Ashraf Saifi, "Face recognition system based on eigenface approach," National Conference, Jaipur (Rajasthan), India, Nov. 2010.
[17] R. Weinstein, "RFID: a technical overview and its application to the enterprise," IT Professional, vol. 7, no. 3, pp. 27-33, May-June 2005.
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
Interleavers in IDMA Communication System: A Survey
Aasheesh Shukla, Member IEEE
Department of Electronics Engineering
GLA University
Mathura, India
aasheeshshukla@gmail.com
Rohit Bansal, Sankalp Anand
Department of Electronics & Communication
Engineering
GLA University
Mathura, India
rohit.bansalec07@gmail.com
sankalp.kulsh@gmail.com
Abstract-- The performance of code-division multiple-access
(CDMA) systems is mainly limited by multiple access
interference (MAI) and intersymbol interference (ISI). Interleave
division multiple access (IDMA) method combined with iterative
chip-by-chip (CBC) multiuser detection (MUD) is a relatively
new multiple access method for spread spectrumcommunication.
IDMA not only inherits all the advantages of CDMA but also has
the capability to overcome its deficiencies. In this scheme users
are distinguished on the basis of interleavers. The number of
various interleavers have been proposed by researchers to meet
the future requirements in IDMA communication system. In this
paper, various interleavers have been reviewed for this advanced
multiple access scheme.
Keywords: CDMA, Interleavers, Multi user detection
I. INTRODUCTION
The goal for the next generation mobile
communication system is to seamlessly provide a wide
variety of communication services to anybody,
anywhere, anytime. The Intended services for next
generation mobile phone users include services like
transmitting high speed data, video and multimedia
traffic as well as voice signals. The technology needed to
tackle the challenges to make these services available is
popularly known as the Third Generation (3G) Cellular
Systems using multiuser detection. Multiuser detection
(MUD) has been widely studied for code division
multiple-access (CDMA) systems and significant
progress has been made recently [1]. However,
complexity has always been a formal concern for MUD.
Much research effort has been devoted to this issue in
pursuit of simpler solutions without compromising
performance.
Interleave-division multiple-access (IDMA) is a recently
proposed scheme that employs chip-level interleavers for
user separation [2]. IDMA inherits many advantages
from CDMA, e.g., diversity against fading and
mitigation of the worst-case other-cell user interference.
Furthermore, it allows a very simple chip-by-chip (CBC)
Iterative MUD strategy with complexity (per user)
independent of the user number.
The objective of this paper is to give brief introduction
of various interleavers which are used with IDMA
communication system. The paper is organized as
follows. IDMA scheme is introduced in section II.
Section III deals with review of various interleavers used
with IDMA. Finally conclusions are presented in section
IV.
II. IDMA SCHEME
The performance of conventional code-division multiple
access (CDMA) systems [1] is mainly limited by
multiple access interference (MAI), as well as inter
symbol interference (ISI). Also, the complexity of
CDMA multi-user detection has always been a serious
problem for researchers all over the world. The problem
can be visualized from the angle of computational cost
as well complexity of multi-user detection algorithms in
CDMA systems. The use of user specific signature
sequences is a characteristic feature for a conventional
CDMA system. The possibility of employing
interleaving for user separation in CDMA systems is
briefly inducted in [1] but the receiver complexity is
considered as main problem. In interleave-division
multiple-access (IDMA) scheme, users are distinguished
by user specific chip-level interleavers instead of
signatures as in a conventional CDMA system. The
scheme considered is a special case of CDMA in which
bandwidth expansion is entirely performed by low-rate
coding. This scheme allows a low complexity multiple
user detection techniques applicable to systems with
large numbers of users in multipath channels in addition
to other advantages. In CDMA scheme, signature
sequences are used for user separation while in IDMA
scheme, every user is separated with user-specific
Interleavers, which are orthogonal in nature. The block
diagram of IDMA scheme is shown in figure 1 for K
users. The principle of iterative multi user detection
(MUD) which is a promising technique for multiple
access problems (MAI) is also illustrated in the lower
part of Fig. 1. The turbo processor involves an elementary signal estimator block (ESEB) and a bank of K soft decoders (SDECs). The ESEB partially resolves MAI without considering FEC coding. The outputs of the ESEB are then passed to the SDECs for further refinement using the FEC coding constraint through the de-interleaving block. The SDECs' outputs are fed back to the ESEB to improve its estimates in the next iteration with proper user-specific interleaving. This iterative procedure is repeated a preset number of times (or terminated if a certain stopping criterion is fulfilled). After the final iteration, the SDECs produce hard decisions on the information bits [1]. The complexity involved (mainly for solving a size K x K correlation matrix) is O(K²) per user with the well-known iterative minimum mean square error (MMSE) technique in CDMA, while in IDMA it is independent of the number of users. This can be a major benefit when K is large.
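As a rough illustration of the user separation described above, the following Python sketch shows a single user's IDMA transmit chain and its de-interleaving. BPSK modulation, repetition spreading and a random chip-level permutation standing in for the user's interleaver are all illustrative assumptions, not the scheme of any particular cited paper:

```python
import numpy as np

def idma_transmit(bits, spread_len, perm):
    """One user's IDMA transmitter sketch (BPSK, repetition spreading).

    bits: 0/1 data bits; spread_len: repetition (spreading) factor;
    perm: this user's chip-level interleaver, given as a permutation.
    """
    chips = np.repeat(2 * np.asarray(bits) - 1, spread_len)  # spread to chips
    return chips[perm]                                       # chip interleaving

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=8)
perm = rng.permutation(8 * 4)        # user-specific random interleaver
tx = idma_transmit(bits, 4, perm)

# De-interleaving with the inverse permutation recovers the chip stream
rx = np.empty_like(tx)
rx[perm] = tx
assert np.array_equal(rx, np.repeat(2 * bits - 1, 4))
```

At the receiver, only a user's own interleaver de-interleaves the chips into a coherent stream; other users' chips remain scrambled, which is what lets the ESEB separate them.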
Figure 1. Transmitter and receiver structures of the IDMA scheme with K simultaneous users.
III. INTERLEAVERS
The interleaver is the distinguishing feature of interleave-division multiple access. In this section we review the various interleavers used in IDMA communication systems.
A. Power Interleaver
In [2], L. Ping proposed a new interleaver named the power interleaver. With this method, the interleaver assignment scheme is simplified and the memory cost is greatly reduced without sacrificing performance. Here, only the base power interleaver π needs to be stored: let π₁ = π; after completing the detection cycle for user 1, the interleaver is updated from π₁ = π to π₂ = π², and this procedure continues recursively, so user n is assigned πⁿ. The drawback of this scheme is the higher access time consumed in generating the πⁿ interleaver, where n is the user number. Simulation results show that performance similar to that achieved with random interleavers is obtained, while a considerable amount of memory space is saved.
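A minimal sketch of this generation rule, with a random base permutation standing in for π (the names and the composition convention are illustrative):

```python
import numpy as np

def power_interleaver(base, n):
    """n-th user's interleaver as the n-fold composition of one base permutation.

    Only `base` needs to be stored; pi_n is regenerated on demand, which is
    the memory saving (and the extra access time) described above.
    """
    pi = base.copy()
    for _ in range(n - 1):
        pi = base[pi]          # compose: pi_{k+1} = pi o pi_k
    return pi

rng = np.random.default_rng(2)
base = rng.permutation(16)     # pi_1
pi2 = power_interleaver(base, 2)
assert np.array_equal(pi2, base[base])   # pi_2 = pi o pi
```

The composition of permutations is again a permutation, so every user still gets a valid interleaver.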
B. Block Random Interleaver
In paper [3], Maria Kovaci, Horia G. Balta and Miranda M. Nafornita proposed a block random interleaver, and its BER and FER (frame error rate) are compared with the random interleaver, S-interleaver, block interleaver, pseudo-random interleaver and Takeshita-Costello interleaver. The performance recommends the proposed interleaver as an alternative to the S-interleaver, which is the best at lengths above 1000 bits. The performance of the proposed BRL (Block Random in Line) interleaver is very close to that of the S-interleaver with maximum S (a minimum interleaving distance), but the design of the new interleaver is simpler.
C. Square Interleaver
In paper [4], Hugo M. Tullberg and Paul H. Siegel investigated square interleavers, where the encoded bits are permuted in a structured way, and random interleavers, where the n bit streams from a rate R = k/n convolutional encoder are fed into n separate random interleavers.
Simulations in [4], show that for large block lengths the
random interleaver outperforms the square interleaver.
This is because, as the block length increases, the
random interleaver tends towards a perfect interleaver,
and the channel appears to be an uncorrelated fading
channel.
The performance of the square interleaver is also shown to depend heavily upon the choice of block size N. When the side of the square interleaver, √N, is an integer multiple of n, the system performance is significantly worse.
In voice communication, however, large block lengths give an unacceptable latency and shorter block lengths are required. Their results show that for shorter
interleaver lengths with acceptable latency the square interleaver is in fact slightly better than the random interleaver. For short delays, square interleavers seem to perform slightly better than random interleavers.
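The structured write-row/read-column permutation described above can be sketched as follows (assuming, for illustration, that the block size N is a perfect square):

```python
import numpy as np

def square_interleave(bits):
    """Square interleaver sketch: write the block row-wise into a
    sqrt(N) x sqrt(N) array and read it out column-wise.
    N must be a perfect square in this sketch."""
    n = int(np.sqrt(len(bits)))
    assert n * n == len(bits)
    return np.asarray(bits).reshape(n, n).T.reshape(-1)

x = np.arange(16)
y = square_interleave(x)
# reading columns of the 4x4 array gives 0, 4, 8, 12, 1, 5, ...
assert y[:5].tolist() == [0, 4, 8, 12, 1]
# interleaving twice restores the original order (the transpose is involutory)
assert np.array_equal(square_interleave(y), x)
```

Neighbouring input bits end up √N positions apart, which is the structured separation that competes with the random interleaver at short block lengths.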
D. Orthogonal, Pseudo Random, and Nested Interleaver
In paper [15] the construction of three different types of interleavers, namely orthogonal, pseudo-random and nested interleavers, is described. These interleavers meet the design criteria of simplicity and fast generation on the one hand and low cross-correlation on the other. Simulations in [15] show that the performance of these interleavers is very similar to that of random interleavers. Also, IDMA using orthogonal, pseudo-random or nested interleavers can support more users than a conventional CDMA system.
E. Cyclo diag Interleaver
In paper [6], M. Shukla, N. Anil Kumar, V. K. Srivastava and S. Tiwari proposed a new interleaver, the cyclo-diag interleaver, which is a combination of a cyclic interleaver and a diagonal interleaver. The cyclo-diag interleaver supports randomness and orthogonality in a limited-user region, with no more than 20 users at a time.
Simulations of the cyclic diagonal interleaver were carried out over an AWGN channel without fading, without coding, with power allocation, with data length = 1024, spread length = 16, 20 iterations and 20 users. The results are shown in figure 2 for 20 users operating at a time for different types of interleavers, including the random, master random and the proposed cyclo-diag interleavers. It is evident from the figure that the performance of the IDMA scheme improves with the proposed interleaver.
Figure 2. BER versus Eb/No performance of the proposed cyclo-diag interleaver compared with the random and master random interleavers (20 users).
The proposed interleaver performs better for 20 users, but if the number of users exceeds 20, the performance of the system degrades. So this interleaver is recommended for use with the IDMA scheme where no more than 20 users are operative at a time.
F. Tree Based Interleaver
In paper [7], N. V. Anil Kumar, M. K. Shukla and S. Tiwari propose a new tree based interleaver (TBI) to generate different chip-level interleaving sequences for different users in an IDMA system, which reduces computation complexity. This method of generation also solves the memory cost problem and reduces the amount of information exchanged between mobile stations and base stations to specify the interleaver.
In this paper the proposed interleaver generation method is implemented in an FPGA for 6 users with 8 bits/chip. Simulation results compare the tree based interleaver with the random interleaver over an AWGN channel, without coding and with power allocation, for 64 users.
G. PEG Algorithm Based Interleaver
In paper [8], Zhisong Bie and Weiling Wu proposed the PEG interleaver; simulations show that the performance of the new interleaver is better than that of the random interleaver when the number of users is relatively large. The paper further describes a factor graph representation of the IDMA system, on the basis of which the new interleaver is proposed. The progressive edge growth algorithm originally proposed for LDPC codes is adapted to multi-user interleaver pattern design, and an amended PEG algorithm is also proposed at the end of the paper to improve the performance of this interleaver.
H. Shifting Interleaver
In paper [9], Zhang Chenghai and Hu Jianhao presented a new interleaver, the shifting interleaver, which provides lower complexity and less memory consumption. A series of interleavers can be generated by circularly shifting a specific pseudo-noise (PN) interleaver, which is generated by a PN sequence generator. Thus, the architecture of the proposed interleaver is much simpler than that of other interleaver schemes for IDMA systems. Simulation results show that shifting interleavers can achieve the same performance with much less resource consumption compared to random interleavers in the IDMA communication system.
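A minimal sketch of this idea, with a random permutation standing in for the PN-generated interleaver (the names and the per-user shift amount are illustrative assumptions):

```python
import numpy as np

def shifting_interleavers(pn_perm, num_users, shift):
    """Shifting-interleaver sketch: derive each user's interleaver by
    circularly shifting one stored PN-based permutation.

    Only `pn_perm` and the per-user shift need to be kept, which is the
    memory saving over storing an independent random interleaver per user.
    """
    return [np.roll(pn_perm, k * shift) for k in range(num_users)]

rng = np.random.default_rng(3)
pn = rng.permutation(32)              # stand-in for a PN-generated interleaver
users = shifting_interleavers(pn, num_users=4, shift=5)
assert np.array_equal(users[0], pn)
assert np.array_equal(users[1], np.roll(pn, 5))
```

A circular shift of a permutation is still a permutation, so every derived interleaver remains valid while sharing one stored sequence.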
I. Chip Level Interleaver
In paper [10] H. Wu, L. Ping and A. Perotti proposed a
user-specific interleaver design for interleave-division
multiple access (IDMA) systems. This method can solve
the memory cost problem for chip-level interleavers, and reduce the amount of information exchanged between mobile stations and base stations to specify the interleaver used as their identification.
J. Evolutionary Interleavers
In paper [11], Xinyi Xu and Quipping Zhu present a model of interleavers for interleave-division multiple-access (IDMA) based on an evolutionary algorithm. In all previous works, interleavers are generated independently and randomly, which is simple yet gives good performance. Considering the relationship between the interleaver model and the traveling salesman problem (TSP), a specific fitness function based on the covariance matrix is given, and the optimum interleavers are computed by an evolutionary algorithm. The simulation results show that the bit error ratio (BER) performance of the evolutionary interleavers (EI) is much better than that of other non-random interleavers. The BER performance of independently and randomly generated interleavers is close to that of EI, which supports the claim that EI are the theoretically optimum interleavers for IDMA.
K. Linear congruence based interleaver
Recently, in paper [14], Zhifeng Luo, Albert K. Wong and Shuisheng Qiu proposed this technique. Criteria for a good interleaver design for IDMA include low memory requirement, easy generation, low correlation among interleavers, and low overhead for synchronization between user and base station. A novel interleaver design based on linear congruences for IDMA systems is proposed in this paper. The design requires the storage of only a small number of parameters and the transmission of a small number of bits for synchronization. Users and base stations can derive the interleavers from the identification numbers of different users independently and simultaneously. A parallel permutation mechanism is proposed for reducing the generation time of interleavers. Simulation results show that this interleaver design has better performance than the pseudo-random interleaver design for IDMA.
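A minimal sketch of a linear-congruence permutation of this kind (the specific parameters a and b are illustrative, and the cited design is more elaborate than this):

```python
import math

def lc_interleaver(length, a, b):
    """Linear-congruence interleaver sketch: pi(i) = (a*i + b) mod length.

    This mapping is a permutation whenever gcd(a, length) == 1, so a
    user's whole interleaver is specified by just the two parameters
    (a, b) instead of a stored table -- the low-memory property above.
    """
    assert math.gcd(a, length) == 1
    return [(a * i + b) % length for i in range(length)]

pi = lc_interleaver(16, a=5, b=3)
assert sorted(pi) == list(range(16))   # it is a valid permutation
```

Transmitting only (a, b) per user is what keeps the synchronization overhead small.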
IV. CONCLUSION
This paper surveys the existing interleavers available for the IDMA communication system. One can further explore basic interleaver designs with a lower order of mathematical complexity, and the performance of these interleavers can be analyzed for improvements in interleaver design. Some interleavers, such as the helical interleaver [12] and contention-free interleavers [13], are still unexplored within the existing IDMA system. Integrating these interleavers into IDMA is proposed as future work.
V. REFERENCES
[1] Li Ping, Lihai Liu, K. Y. Wu, and W. K. Leung, "Interleave division multiple-access (IDMA) communications," IEEE, vol. 5, pp. 938-947, April 2006.
[2] H. Wu, L. Ping and A. Perotti, "User-specific chip-level interleaver design for IDMA systems," IEEE Electronics Letters, vol. 42, no. 4, Feb. 2006.
[3] Maria Kovaci, Horia G. Balta and Miranda M. Nafornita, "The performance of interleavers used in turbo codes," IEEE, pp. 363-366, 2005.
[4] Hugo M. Tullberg and Paul H. Siegel, "Interleaving techniques for coded modulation for mobile communications," UCSD, Centre for Wireless Communications, pp. 1-14.
[5] Eric Tell and Dake Liu, "A hardware architecture for a multi-mode block interleaver."
[6] M. Shukla, N. Anil Kumar, V. K. Srivastava and S. Tiwari, "A novel interleaver for interleave-division multiple access," International Conference on Information and Communication Techniques, ICCT 07, Sept. 2007.
[7] N. V. Anil Kumar, M. K. Shukla and S. Tiwari, "Performance of an optimum tree based interleaver for IDMA systems," IEEE, 2007.
[8] Zhisong Bie and Weiling Wu, "PEG algorithm based interleavers design for IDMA system," IEEE, pp. 480-483, 2007.
[9] Zhang Chenghai and Hu Jianhao, "The shifting interleaver design based on PN sequence for IDMA systems," IEEE, 2007.
[10] H. Wu, L. Ping and A. Perotti, "User-specific chip-level interleaver design for IDMA systems," IEEE Electronics Letters, vol. 42, 2006.
[11] Xinyi Xu and Quipping Zhu, "The model of evolutionary interleavers for IDMA communication system," IEEE, July 2007.
[12] Dapeng Hao and Peter Adam Hoeher, "Helical interleaver set design for interleave-division multiplexing and related techniques," IEEE Communications Letters, vol. 12, 2008.
[13] Ajit Nimbalker, T. Keith Blankenship, et al., "Contention-free interleavers for high-throughput turbo decoding," IEEE, vol. 56, Aug. 2008.
[14] Zhifeng Luo, Albert K. Wong and Shuisheng Qiu, "Interleaver design based on linear congruences for IDMA systems," IEEE, Sept. 2009.
[15] Ioachim Pupeza, Aleksandar Kavcic and Li Ping, "Efficient generation of interleavers for IDMA," IEEE Communication Society, ICC 2006 proceedings, 2006.
OFDM Channel Capacity Enhancement Under Additive White Gaussian Noise

Gaurav Bhandari, Asst. Professor, DIT (g_bhandari83@yahoo.com)
Mukesh Pathela, Asst. Professor, DIT (mukeshpathela@gmail.com)
Abstract--With wireless multimedia applications becoming more and more popular, the required bit rates are achieved with OFDM multicarrier transmission, a bandwidth-efficient scheme through which a large amount of information can be sent in a very small bandwidth. If multiple antennas are used with the OFDM system, the channel capacity can be further improved.
Keywords: OFDM, AWGN, channel capacity, multiple antennas
I. INTRODUCTION
Bandwidth is the most important constraint of any
communication system, combining OFDM with multiple
antennas system provides increased channel capacity. Demand
for higher data rates in future wireless communications system
design requires increased spectral efficiency and improved link
reliability. OFDM is a special form of multicarrier modulation
which was originally used in high frequency military radio. An
efficient way to implement OFDM by means of Discrete-time
Fourier Transform (DFT) was found by Weinstein in 1971 [1].
The computational complexity could be further reduced by a
Fast Fourier Transform (FFT). In the 1990s, OFDM was
adopted in the standards of digital audio broadcasting (DAB),
digital video broadcasting (DVB), asymmetric digital
subscriber line (ADSL), and IEEE802.11a. OFDM is also
considered in the new fixed broadband wireless access system
specifications. In OFDM systems, the entire channel is divided
into N narrow sub channels and the high-rate data are
transmitted in parallel through the sub channels at the same
time. Therefore, the symbol duration is N times longer than that of single-carrier systems and the ISI is reduced by a factor of N. Section 2 gives a brief description of the OFDM multicarrier system; section 3 describes the method chosen to increase the channel capacity under AWGN; section 4 deals with the channel capacities of SISO, SIMO and MISO systems; section 5 presents simulation results; and section 6 contains the conclusion and future scope.
2.OFDM-Multi Carrier Modulation System
Frequency division multiplexing (FDM) extends the concept
of single carrier modulation by using multiple subcarriers [2]
within the same single channel. The total data rate to be sent in
the channel is divided between the various subcarriers. FDM
systems usually require a guard band between modulated
subcarriers to prevent the spectrum of one subcarrier from
interfering with another. These guard bands lower the systems
effective information rate when compared to a single carrier
system with similar modulation. If the FDM system above had
been able to use a set of subcarriers that were orthogonal to
each other, a higher level of spectral efficiency could have
been achieved. The guard bands that were necessary to allow
individual demodulation of subcarriers in an FDM system
would no longer be necessary. The use of orthogonal
subcarriers would allow the subcarriers spectra to overlap,
thus increasing the spectral efficiency. As long as
orthogonality is maintained, it is still possible to recover the
individual sub carriers signals despite of their overlapping
spectrums. Orthogonally can also be viewed from the
standpoint of stochastic processes. If two random processes
are uncorrelated, then they are orthogonal. OFDM [3] is a
special form of multicarrier modulation especially suitable for
high-speed communication due to its resistance to ISI. As
communication systems increase their information transfer
speed, the time for each transmission necessarily becomes
Shorter. Since the delay time caused by multipath remains
constant, ISI becomes a limitation in high-data-rate
communication. OFDM avoids this problem by sending many
low speed transmissions simultaneously. Figure 1 shows the
general block diagram of OFDM transmission system
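The IFFT/FFT realization mentioned above (Weinstein's DFT implementation) can be sketched as follows; the subcarrier count, cyclic prefix length and QPSK mapping are illustrative choices, not parameters from the paper:

```python
import numpy as np

def ofdm_symbol(data, cp_len):
    """OFDM modulation sketch: N parallel subcarriers via the IFFT,
    plus a cyclic prefix to absorb multipath-induced ISI."""
    time = np.fft.ifft(data)                       # N subchannels -> time domain
    return np.concatenate([time[-cp_len:], time])  # prepend cyclic prefix

def ofdm_demod(symbol, cp_len):
    """Drop the cyclic prefix and FFT back to the subcarrier values."""
    return np.fft.fft(symbol[cp_len:])

rng = np.random.default_rng(4)
qpsk = (2 * rng.integers(0, 2, 64) - 1) + 1j * (2 * rng.integers(0, 2, 64) - 1)
rx = ofdm_demod(ofdm_symbol(qpsk, cp_len=8), cp_len=8)
assert np.allclose(rx, qpsk)   # ideal channel: subcarriers recovered exactly
```

Over an ideal channel the FFT exactly inverts the IFFT, which is why the orthogonal subcarriers can overlap in spectrum yet still be separated at the receiver.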
The Multiple antenna technologies enable high capacities
suited for Internet and multimedia services, and also
dramatically increase range and reliability. The key objectives
of the system are to provide good coverage in a non-line-of-
sight (LOS) environment, reliable transmission, high peak data
rates (>1 Mb/s), and high spectrum efficiency. These system
requirements can be met by the combination of two powerful
technologies in the physical layer design: multiple antennas
and orthogonal frequency division multiplexing (OFDM)
modulation.
In figure 2 the basic architecture of four different multiple antenna schemes is shown.
Figure 1
3. METHODOLOGY
By using a multiple antenna scheme, the signal-to-noise power ratio increases in accordance with the number of antenna elements being used, which results in an effective increase in channel capacity. The OFDM system is simulated with the different multiple antenna schemes one by one.
C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are measured in watts or volts², so the signal-to-noise ratio here is expressed as a power ratio, not in decibels (dB); since figures are often cited in dB, a conversion may be needed. M is the number of antenna elements on the receiving side; N is the number of antenna elements on the transmitting side. A multiple antenna scheme can be used along with different communication systems to increase the effective channel capacity.
4. CHANNEL CAPACITIES OF SISO, SIMO & MISO
SYSTEMS
Channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communication channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with bandwidth B Hz and signal-to-noise ratio S/N is the Shannon-Hartley theorem:
C = B log₂(1 + S/N)

This is applicable to SISO systems. In [4] there is an analysis of the channel capacities of different multiple antenna schemes.
In a single-input multiple-output (SIMO) system there is one antenna at the transmitter side and multiple antennas at the receiver side. For the SIMO system, assuming that the signals received on each antenna have equal average amplitude, the channel capacity is

C = B log₂(1 + M · S/N)

In a multiple-input single-output (MISO) system there are multiple antennas at the transmitter side and a single antenna at the receiver side. For the MISO system, the total transmitted power is divided between the branches, and the channel capacity is

C = B log₂(1 + N · S/N)

Figure 2

5. SIMULATION RESULTS
Simulations are carried out for OFDM along with these different types of multiple antenna schemes, and the results show a significant improvement in channel capacity. In figure 3 the channel capacity improvement over a single-input single-output (SISO) OFDM system is shown using 1x2 SIMO. In figure 4 the channel capacity of a 1x2 SIMO OFDM system is compared with a 1x3 SIMO OFDM system. Figure 5 shows the OFDM channel capacity improvement using 2x1 MISO over SISO, and figure 6 shows the OFDM channel capacity comparison of 2x1 MISO and 3x1 MISO.
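The three capacity expressions of section 4 can be evaluated directly, as in this sketch (the SNR value is an arbitrary example; S/N is a linear power ratio, not dB):

```python
import numpy as np

# Shannon-Hartley capacities per unit bandwidth (bits/sec/Hz) for the
# three antenna schemes, following the expressions in section 4.
def c_siso(snr):
    return np.log2(1 + snr)           # C/B for single-input single-output

def c_simo(snr, m):
    return np.log2(1 + m * snr)       # m receive antenna elements

def c_miso(snr, n):
    return np.log2(1 + n * snr)       # n transmit antenna elements

snr = 4.0                             # linear S/N, about 6 dB
assert c_simo(snr, 2) > c_siso(snr)   # extra antennas raise capacity
assert np.isclose(c_miso(snr, 2), c_simo(snr, 2))
```

Because the antenna count multiplies the effective S/N inside the logarithm, capacity grows with the number of elements but with diminishing returns, which matches the shape of the simulated curves.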
Figure 3. Channel capacity per unit bandwidth (bits/sec/Hz) versus signal-to-noise ratio: SISO vs. 1x2 SIMO.
Figure 4. Channel capacity per unit bandwidth versus signal-to-noise ratio: 1x2 SIMO vs. 1x3 SIMO.
Figure 6. Channel capacity per unit bandwidth versus signal-to-noise ratio: 2x1 MISO vs. 3x1 MISO.
6. CONCLUSION AND FUTURE SCOPE
It is quite clear from the simulation results that the capacity of OFDM systems can be increased in an AWGN environment by using multiple antennas; as the number of antenna elements is increased, an enhanced channel capacity is obtained. The same technique can be implemented for other systems such as IDMA, and the effect of using it with OFDM in a generalized Nakagami channel model can also be studied.
REFERENCES
[1] G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Tech. J., pp. 41-59, Autumn 1996.
[2] J. A. C. Bingham, "Multicarrier modulation for data transmission: An idea whose time has come," IEEE Communications Magazine, vol. 28, no. 5, pp. 5-14, May 1990.
Figure 5. Channel capacity per unit bandwidth versus signal-to-noise ratio: SISO vs. 2x1 MISO.
[3] Multiplexing for Wireless Communications, November 11, 2004.
[4] Wireless Communication by Upena Dalal, Oxford University Press, 3rd impression.
Survey of WPAN Technologies: ZigBee, Bluetooth, and Wibree

Vinith Chauhan¹, Manoj Pandey², Krishna Mohan Rai³
¹Assistant Professor (ECE), St. Margaret Engineering College, Neemrana (vinithchauhan@rediffmail.com)
²Assistant Professor (EIC), St. Margaret Engineering College, Neemrana (mr_mkpandey@yahoo.co.in)
³Assistant Professor (ECE), St. Margaret Engineering College, Neemrana (krishnamohanrai@rediffmail.com)
Abstract:-Wireless Personal Area Network
(WPAN) designs have been flourishing in recent
years. The pervasive success of Bluetooth has been
a boon to all devices in the IEEE 802.15 working
group. As competing and complementary
standards are formed within this working group,
a successful embedded system designer must
understand the differences between the technology
standards. It is the goal of this paper to examine
three: Bluetooth, ZigBee, and Wibree. Wibree is a
technology that has been under development by
Nokia since 2001. It was originally adapted from
the Bluetooth specification, and in 2006, the
Bluetooth Special Interest Group (SIG)
announced it would be adopting Wibree into the
Bluetooth Specification. While Wibree has been
under development, the competing IEEE 802.15.4
technology of ZigBee has been available to the
public for several years. ZigBee was designed as a
low-power, low-cost, low-speed solution.
Keyword: -
1. Introduction
Bluetooth, ZigBee, and Wibree are intended for use
as so called Wireless PAN Systems. Wireless
Personal Area Network (WPAN) designs have been
flourishing in recent years. The pervasive success of
Bluetooth has been a boon to all devices in the IEEE
802.15 working group. As competing and
complementary standards are formed within this
working group, a successful embedded system
designer must understand the differences between the
technology standards. It is the goal of this paper to
examine three: Bluetooth, ZigBee, and Wibree. They
are intended for short range communication between
devices typically controlled by a single person. A
keyboard might communicate with a computer, or a
mobile phone with a hands free kit, using any of
these technologies.
2. ZigBee (IEEE 802.15.4)
ZigBee is a PAN technology based on the IEEE
802.15.4 standard. Unlike Bluetooth or wireless USB
devices, ZigBee devices have the ability to form a
mesh network between nodes. Meshing is a type of
daisy chaining from one device to another. This
technique allows the short range of an individual
node to be expanded and multiplied, covering a much
larger area.
The ZigBee Alliance is an association of companies
working together to enable reliable, cost-effective,
and low-power wirelessly networked monitoring and
control products based on an open global standard.
The Alliance is the standards body that defines ZigBee; it also publishes application
profiles that allow multiple OEM vendors to create
interoperable products.
ZigBee Alliance has specified a full protocol suite
that provides efficient high level communication in
wireless sensor networks.
ZigBee's lower stack layers (PHY and MAC) are identical
to those of IEEE 802.15.4, while the higher stack layers are
designed to support extended networking
functionality and to provide a simple interface between
the network and end-user applications.
Main ZigBee extensions compared to IEEE 802.15.4:
- Full support of complex network topologies: tree and mesh
- Reliable communication within the entire network (beyond the transmission range of a single node)
- A unified networking interface for end-user applications
- Public application profiles for interoperability between devices from different vendors
2.1 Applications of ZigBee
ZigBee protocols are intended for use in embedded
applications requiring low data rates and low power
consumption. ZigBee's current focus is to define a
general-purpose, inexpensive, self-organizing mesh
network that can be used for industrial control,
embedded sensing, medical data collection, smoke
and intruder warning, building automation, home
automation, etc. The resulting network will use very
small amounts of power: individual devices must
have a battery life of at least two years to pass
ZigBee certification.
Typical application areas include:
- Home entertainment and control: smart lighting, advanced temperature control, safety and security, movies and music
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
- Home awareness: water sensors, power sensors, energy monitoring, smoke and fire detectors, smart appliances and access sensors
- Mobile services: m-payment, m-monitoring and control, m-security and access control, m-healthcare and tele-assist
- Commercial buildings: energy monitoring, HVAC, lighting, access control
- Industrial plants: process control, asset management, environmental management, energy management, industrial device control, machine-to-machine (M2M) communication
2.2 Advantages of ZigBee
ZigBee is a low-cost, low-power, wireless mesh
networking standard. First, the low cost allows the
technology to be widely deployed in wireless control
and monitoring applications. Second, the low power-
usage allows longer life with smaller batteries. Third,
the mesh networking provides high reliability and
more extensive range.
3. Bluetooth
The Bluetooth (IEEE 802.15.1) specification was
first developed by the Swedish telecommunications
equipment manufacturer Ericsson. In September 1998
the Bluetooth Special Interest Group (SIG) was
formed by Ericsson, IBM, Intel, Toshiba, and Nokia,
which formalized the standard. In 1999 the first
Bluetooth specification, 1.0, was released. In 2000
the first Bluetooth consumer product was released, a
Bluetooth headset and phone adapter from Ericsson.
In 2003 the Bluetooth 1.2 specification was released,
with the main benefit of adding adaptive
frequency-hopping spread spectrum to the Bluetooth
standard. In 2004 the Bluetooth 2.0 specification was
released with an enhanced data rate of 2.1 Mbit/s.
Bluetooth technology has since reached an installed base
of 250 million devices.
3.1 Bluetooth applications
A few possible applications of Bluetooth are as follows:
- Data synchronization need never again be a problem: your Bluetooth-enabled PDA, PC, and laptop can all talk to each other and update their respective files to the most recent versions.
- Travelling on a plane, a person may write but not send e-mail. When the plane touches down, the Bluetooth-enabled laptop will communicate with the user's phone and send the messages automatically.
- Mice and keyboards will identify themselves to the computer without intervention, and could also be used to command TVs, video players, or hi-fi systems at the touch of a button.
- Use e-mail while your laptop is still in the briefcase: when your laptop receives e-mail, you get an alert on your mobile phone, and you can browse incoming e-mails and read selected ones on the phone's display.
- A travelling businessperson could ask his laptop computer to locate a suitable printer as soon as he enters a hotel lobby, and send a printout to that printer once it has been found.
3.2 Advantages of Bluetooth
The Bluetooth standard was designed around
extremely low power: each transmission on a
Bluetooth cell phone uses about 1 mW. This means
that Bluetooth has almost no effect on the charge of
your cell phone, and that Bluetooth-enabled headsets
are able to get upwards of 15 hours of battery life on
a full charge. Bluetooth is also very robust, for a
number of reasons. Since it uses adaptive frequency
hopping, there is practically no interference at all,
even with a number of other devices in close
proximity. Ad-hoc networking also increases
Bluetooth's robustness: since there is no central node,
the entire Bluetooth network won't go down if, say,
your digital camera runs out of batteries. Being a
radio standard, Bluetooth doesn't need line of sight
between two devices to communicate, so you can
walk out of the room wearing Bluetooth headphones
and not lose the signal, unlike with a pair of infrared
headphones. Bluetooth devices operate in the
unlicensed ISM band at 2.4 GHz. Since many other
devices, such as cordless phones and garage door
openers, operate in this same 2.4 GHz band,
Bluetooth uses a couple of techniques to limit
interference. One is to send out weak signals over
short distances: since a normal Bluetooth device has
a range of only 10 meters, this limits the number of
devices that could possibly interfere. Yet the main
reason Bluetooth doesn't interfere is that it uses a
technique called spread-spectrum frequency hopping.
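The hopping behaviour can be illustrated with a toy model. This is a sketch only, not the actual Bluetooth hop-selection kernel (which is defined in the Bluetooth Core Specification); the helper function, seed, and interfered-channel set below are invented for illustration.

```python
import random

def hop_sequence(n_channels, bad_channels, n_hops, seed=0):
    """Toy adaptive frequency hopping: pick each hop pseudo-randomly
    from the channels not currently marked as interfered."""
    good = [ch for ch in range(n_channels) if ch not in bad_channels]
    rng = random.Random(seed)
    return [rng.choice(good) for _ in range(n_hops)]

# Classic Bluetooth hops over 79 1-MHz channels in the 2.4 GHz ISM band,
# at a nominal 1600 hops per second.
bad = {5, 6, 7}  # e.g. channels occupied by a cordless phone
hops = hop_sequence(79, bad, n_hops=1600)

assert len(hops) == 1600
assert not set(hops) & bad   # the adapted sequence avoids interfered channels
```

The key property is visible in the assertions: the sequence still spreads energy over most of the band (limiting the interference Bluetooth causes) while skipping the channels where interference was observed.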
4. Wibree
Wibree is a short-range wireless protocol optimized
for low power consumption. Developed primarily by
Nokia, the company has submitted Wibree as an open
standard to promote adoption and interoperability.
Wibree is intended to complement Bluetooth
communications in certain PAN applications where
small, lightweight design makes standard Bluetooth
communication unsuitable or difficult. For instance,
Bluetooth-enabled wristwatches require relatively
large transmitters and batteries, making the devices
heavy and uncomfortable. Wibree-enabled
wristwatches can use smaller transmitters and smaller
batteries, increasing user comfort and reducing
fatigue while extending battery life.
Wibree operates in the same 2.4 GHz frequency band
as Bluetooth, which ensures backwards hardware
compatibility. Due to this, a single antenna can
support both protocols, and many existing Bluetooth
devices will require only a simple software update to
communicate with Wibree devices. While these
upgraded devices will not benefit from the size
savings dedicated Wibree models enjoy, they will see
much improved battery life. Additionally,
compatibility with newer Wibree models will help
prolong the lifespan of existing equipment.
While Wibree and Bluetooth are similar protocols
with overlapping functions, Wibree differs from
Bluetooth in several fundamental ways. First, recent
Bluetooth specifications, notably 2.0, are designed
with an emphasis on throughput, or data transfer
speed. Bluetooth 2.0 devices can exceed speeds of
350 kb/s under ideal conditions. This is about three
times the maximum speed of planned Wibree
devices, which transfer data no faster than 128 kb/s.
The tradeoff comes to light in terms of power, space
and weight savings. Current Bluetooth-enabled
wristwatches must replace their large, specialty
batteries on a monthly basis. Planned Wibree models,
with comparable features, can last over a year on a
single standard button battery. In addition to the
smaller, lighter battery, Wibree watches utilize
antennas less than a third the size of current
Bluetooth antennas. The combined space savings
help fit the wireless Wibree hardware in a wristwatch
no larger than a standard quartz watch, with a
comparable weight. In contrast, Bluetooth watches
are heavy and bulky, making them inappropriate and
even uncomfortable for everyday use.
4.1 Wibree applications
Imagine a wireless keyboard and mouse with battery
lifetimes exceeding one year communicating with a
PC without using a fragile dongle. Imagine a watch
equipped with a wireless link communicating with
both a tiny sports sensor embedded within the users
shoe and mobile phone. Imagine a range of personal
devices communicating with mobile phones or PCs,
but without the inconvenience of changing or
charging batteries every week. Mobile phones
equipped with Wibree technology will enable a range
of new accessories such as call control/input devices,
sports and health sensors, security and payment
devices.
These devices will benefit from the ultra-low power
consumption of Wibree making possible compact,
coin cell battery operated devices with battery
lifetimes up to 3 years (depending on the actual
application).
Wibree is also designed to offer wireless connectivity
to high performance PC accessories such as mice,
keyboards and multimedia remote controls. The ultra-
low power consumption of Wibree extends battery
lifetimes to over a year. Moreover, Wibree's ultra-low
power consumption will bring wireless
connectivity to watches without compromising
battery lifetime. Imagine a tiny sports or health
sensor embedded in your shoe equipped with a
wireless link communicating with your watch.
4.2 Advantages of Wibree
The advantage of Wibree over Bluetooth is that it is
far more power-efficient, which makes it ideal for use
in smaller and less costly devices than the ones
currently using Bluetooth. The immediate use, once
the standard is approved, is to connect peripherals,
like keyboards, to computers, but its low cost may
make it applicable to smaller devices such as toys,
wristwatches, or sports gear. The main advantages of
Wibree are:
- Ultra-low power consumption
- Ultra-low cost
- Reduced size for human interface devices (HID)
- Global interoperability
Additionally, Wibree can be implemented both as a
stand-alone chip and as a dual-mode chip that includes
both Bluetooth and Wibree. Companies are now
submitting Wibree through a standardization process
in order to obtain wider acceptance and, once that
process is complete, hope to launch it sometime in 2007.
5. Comparison
The comparison between ZigBee, Bluetooth, and
Wibree is based on frequency, power, range, data
rate, network topology, security, battery life, and
antenna. Refer to Table 1.
               Bluetooth             ZigBee                       Wibree
Frequency      2.4 GHz               2.4 GHz, 868 MHz, 915 MHz    2.4 GHz
Power          100 mW                30 mW                        10 mW
Range          10-30 m               10-75 m                      10 m
Data rate      1-3 Mbps              20-250 kbps                  1 Mbps
Network        Ad hoc, point-to-     Mesh, ad hoc, star           Point-to-point,
topologies     point, star                                        star, ad hoc
Security       128-bit encryption    128-bit encryption           128-bit encryption
Battery life   Days to months        6 months to 2 years          1-2 years
Antenna        Shared                Independent                  Shared

Table 1. Comparison between ZigBee, Bluetooth and Wibree
Frequency Band
All three technologies operate in the unlicensed 2.4
GHz spectrum, while ZigBee can also operate at
reduced speeds at 915 MHz and 868 MHz.
Antenna and Hardware
Wibree's adoption into the Bluetooth spec was
directly related to the fact that it can coexist on
Bluetooth hardware. Devices that wish to take advantage of
both Bluetooth and Wibree will not need to add extra
hardware; one antenna will do for both. ZigBee
support, however, requires its own hardware and
antenna.
Power and Battery Life
ZigBee, designed to be a low-power alternative to
Bluetooth, offers 30 mW performance compared to
Bluetooth's 100 mW.
Range
Both Bluetooth and Wibree are designed to operate
within a 10 m range, though Bluetooth 2.1 now states
a maximum range of 30 m. ZigBee, being designed to
enable home and industry automation, allows a
maximum range of 75 m.
Data Rate
Wibree has caught up to Bluetooth's original data
rate of 1 Mbps, while Bluetooth has gone on to
reach maximum rates of 3 Mbps. ZigBee intentionally
lags far behind these numbers, sacrificing data rate
for power savings, and so transmits only 20-250 kbps.
Network Topologies
Bluetooth and Wibree operate primarily in ad hoc
piconets, where a master device controls multiple
slaves. These piconets are limited to 8 devices.
ZigBee has far greater flexibility in this arena,
supporting mesh and star configurations. Mesh
networks offer resilience against severed
connections, as Coordinator devices can reroute
traffic as needed. Star configurations at the ends of
the mesh allow clusters of ZigBee devices to interact
with the outside world, while the mesh devices focus
on data transmission.
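The rerouting resilience of a mesh can be illustrated with a toy reachability check. This is a sketch only; the node names, link list, and BFS helper below are invented for illustration and are not part of the ZigBee specification.

```python
from collections import deque

def reachable(links, start):
    """Return the set of nodes reachable from `start` over
    bidirectional links, using a breadth-first search."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy ZigBee-style mesh: coordinator C, two routers R1/R2, end device E.
mesh = [("C", "R1"), ("C", "R2"), ("R1", "E"), ("R2", "E")]
assert "E" in reachable(mesh, "C")

# Sever the R2-E link: the coordinator still reaches E via R1 -- the
# alternate path is exactly the resilience a mesh topology provides.
broken = [link for link in mesh if link != ("R2", "E")]
assert "E" in reachable(broken, "C")
```

In a star topology, by contrast, losing the single link to the hub would disconnect the device entirely.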
Security
All three technologies support state-of-the-art 128-bit
encryption, and all three continue to be scrutinized
for key distribution vulnerabilities and the like.
Time to Wake and Transmit
One of ZigBee's greatest strengths over Bluetooth
has been its freedom to sleep often, which comes from
its quick wake-from-sleep design. A ZigBee device
can wake up and get a packet across a network
connection in around 15 milliseconds, while a
Bluetooth device would take about 3 seconds. A Wibree
device would presumably behave more like the
ZigBee device, but this remains to be seen.
6. Conclusion and Future scope
ZigBee used to be the clear choice for very low
power, low latency, low data rate, mostly asleep
devices. Preliminary designs for Bluetooth Low
Energy Technology are just now being released.
Bluetooth with integrated Wibree may steal that
momentum (in non-mesh networks).
7. References
[1]. Ciardiello, T., "Wireless communications for industrial control and monitoring," Computing & Control Engineering Journal, vol. 16, no. 2, pp. 12-13, April-May 2005. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1454279&isnumber=31235
[2]. Evans-Pughe, C., "ZigBee wireless standard," IEEE Review, vol. 49, no. 3, pp. 28-31, March 2003. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1196386&isnumber=26924
[3]. "Home networking with ZigBee," http://www.embedded.com/columns/technicalinsights/18902431?_requestid=184548
[4]. "ZigBee Wireless Networking Overview," http://focus.ti.com/lit/ml/slyb134a/slyb134a.pdf
[5]. "Alternatives for Short Range Low Power Wireless Communications," http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1434907&isnumber=30915
[6]. Nick Baker, "ZigBee and Bluetooth: Strengths and Weaknesses for Industrial Applications," http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01454281
[7]. Bluetooth.com, "Compare with Other Technologies," http://www.bluetooth.com/Bluetooth/Technology/Works/Compare/
[8]. Mike Foley, "The new wireless frontier," http://www.bluetooth.com/NR/rdonlyres/9A00E8AB133740FFA4A54092D9E17121/0/BluetoothLowEnergyTechnology_MikeFoley_TheNewWirelessFrontier.pdf
[9]. Bluetooth.com, "Market potential for Bluetooth low energy technology," http://www.bluetooth.com/NR/rdonlyres/30CC41D0527041EDBE9FC6608AE0C9DD/0/BluetoothLowEnergyTechnology_MarketPotential.pdf
[10]. Bluetooth.com, "Technical Comparison," http://www.bluetooth.com/Bluetooth/Technology/Works/Compare/Technical/
Analysis of Peak-to-Average Power Ratio Reduction of OFDM Signals

1 Mr. Ankit Tripathi (Sr. Lecturer), 2 Ms. Neha Goel (Asst. Professor), 3 Ms. Neha Singhal (Sr. Lecturer)
1,2,3 Raj Kumar Goel Institute of Technology, Ghaziabad
1 tripathiankit10@gmail.com, 2 17nehagoel@gmail.com, 3 nehasinghal7@yahoo.co.in
Abstract
Wireless digital communications is rapidly
expanding, resulting in a demand for wireless
systems that are reliable and have a high spectral
efficiency. Orthogonal Frequency Division
Multiplexing (OFDM) has a high tolerance to
multipath signals and is spectrally efficient, making it
a good candidate for future wireless communication
systems. One disadvantage of OFDM is that the peak
of the signal can be up to N times the average power
(where N is the number of carriers). These large
peaks increase the amount of intermodulation
distortion resulting in an increase in the error rate.
The average signal power must be kept low in order
to prevent the transmitter amplifier limiting.
Minimising the PAPR allows a higher average power
to be transmitted for a fixed peak power, improving
the overall signal to noise ratio at the receiver. It is
therefore important to minimise the PAPR.
The PAPR of an OFDM signal can be reduced in
several ways. Selective mapping involves generating
a large set of data vectors all representing the same
information. The data vector with the lowest resulting
PAPR is selected. Information about which particular
data vector was used is sent as additional carriers.
However there may be potential problems with
decoding the signal in the presence of noise with
selective mapping. Errors in the reverse mapping
would result in the data of whole symbols being lost.
In this paper, a technique is described for a better than
5 dB reduction in the Peak to Average Power Ratio
(PAPR) of an OFDM signal. The optimal amplitude
and phase of additional peak reduction carriers (PRC)
is obtained using a code-book, which is obtained
using a search of all possible signal combinations.
Keywords: OFDM, peak-to-average power ratio
(PAPR), peak reduction carriers (PRC)
Introduction
Orthogonal Frequency Division Multiplexing
(OFDM) is a multicarrier transmission technique,
which divides the available spectrum into many
carriers, each one being modulated by a low rate data
stream. OFDM is similar to FDMA in that the
multiple user access is achieved by subdividing the
available bandwidth into multiple channels, that are
then allocated to users. However, OFDM uses the
spectrum much more efficiently by spacing the
channels much closer together. This is achieved by
making all the carriers orthogonal to one another,
preventing interference between the closely spaced
carriers. One disadvantage of OFDM is that the peak
of the signal can be up to N times the average power
(where N is the number of carriers). These large
peaks increase the amount of intermodulation
distortion resulting in an increase in the error rate.
The average signal power must be kept low in order
to prevent the transmitter amplifier limiting.
Minimising the PAPR allows a higher average power
to be transmitted for a fixed peak power, improving
the overall signal to noise ratio at the receiver. It is
therefore important to minimise the PAPR.
A large amount of research has been done on
broadcast OFDM systems, however most wireless
communication systems must support multiple users.
One application that a multi-user OFDM system
would be suitable for is fixed wireless telephony
applications. In such a system each user is allocated a
small percentage of the system carriers, typically 4-
16 depending on the symbol rate used. This paper
presents a method for reducing the PAPR of an OFDM
signal that contains data. It is optimised for a low
number of carriers making it a suitable technique for
multi-user OFDM.
Peak Reduction Carriers
This paper presents a technique that combines
selective mapping and cyclic coding. A reduction in
the PAPR is achieved by adding extra carriers
referred to as Peak Reduction Carriers (PRC). The
phase and amplitude of the PRCs is varied to
minimise the overall PAPR. The original information
carriers are unaffected and can be decoded normally.
The receiver can disregard the PRCs, or they can be
used for error detection. The frequency of PRCs, or
relative positioning of the PRCs can be varied with
respect to the information carriers depending on the
application. The results presented were found using a
computationally intensive exhaustive search to find
the optimal setting for the PRCs. However it is
assumed that further work will allow a more efficient
algorithm to be found.
An optimal setting for the PRCs corresponds to the
combination of phase and amplitude that achieves the
lowest PAPR of the overall OFDM symbol
(information carriers and PRCs). In this paper the
phase and amplitude of the PRCs was set in a coarse
quantised manner to minimise the number of
combinations needed to be searched. The phase of the
PRCs was set to 0 or 180 degrees and the carriers were
turned on or off. There are therefore 3^M combinations
for the PRCs for each information code word (where
M is the number of PRCs).
was found to be appropriate for BPSK information
carriers. Finer quantisation may produce improved
results for higher modulation schemes.
An exhaustive search of all combinations of
allowable phase and amplitude gives optimal PRCs,
but is computationally intensive. This method can be
used for small numbers of carriers where the optimal
PRC coding can be stored in a look up table or code-
book. This is impractical for more than 16
information carriers or for more than 10 PRCs as the
number of combinations becomes too large to store
and calculate. However for some multi-user OFDM
applications 16 or less carriers per user is sufficient.
The results shown were calculated based on all
combinations of information code words, thus will
give a good indication of the practical PAPR
improvement.
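The exhaustive code-book search described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the 8-carrier BPSK code word and the PRC count are arbitrary examples, and the PAPR is measured on the oversampled IFFT output as defined later in the paper.

```python
import itertools
import numpy as np

def papr_db(carriers, oversample=8):
    """PAPR (in dB) of the oversampled time-domain OFDM symbol."""
    x = np.fft.ifft(carriers, oversample * len(carriers))
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def best_prcs(data, n_prc):
    """Exhaustive search over the 3**M quantised PRC settings:
    each PRC is either off (0) or on with phase 0 or 180 deg (+1/-1)."""
    best_papr, best_setting = None, None
    for prcs in itertools.product((0.0, 1.0, -1.0), repeat=n_prc):
        papr = papr_db(np.concatenate([data, prcs]))
        if best_papr is None or papr < best_papr:
            best_papr, best_setting = papr, prcs
    return best_papr, best_setting

data = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0])  # 8 BPSK carriers
baseline = papr_db(np.concatenate([data, np.zeros(3)]))  # all 3 PRCs off
optimised, setting = best_prcs(data, n_prc=3)
assert optimised <= baseline   # the all-off setting is in the search set
```

A code-book would simply tabulate `setting` for every possible information code word, which is why the approach stays practical only for small numbers of carriers and PRCs.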
For each experiment the inverse fast Fourier
transform (IFFT) of the carrier configuration was
used to give a complex base band signal. Let the
complex base band signal be defined as in eqn. (1):

s(t) = sum_{n=0}^{N-1} X_n exp(j*2*pi*n*t/T),  0 <= t < T    (1)

When this is quadrature modulated to RF the signal
can be written in polar form as:

s_RF(t) = a(t) cos(2*pi*f_c*t + theta(t))    (2)

where a(t) is the amplitude and theta(t) is the phase of the
signal. Thus:

a(t) = |s(t)|,  theta(t) = arg{s(t)}    (3)

The definition of the PAPR in eqn. (4), where T is the
OFDM symbol period, can be used for RF as well as
base band [1]:

PAPR = max_{0 <= t < T} |s(t)|^2 / ((1/T) * integral_{0}^{T} |s(t)|^2 dt)    (4)
For the simulations carried out, the base band carriers
were centred on DC and the size of the IFFT was
made at least 8 times greater than the number of
carriers, oversampling the time domain signal. This
ensures that peaks in the signal were accurately
represented to get an accurate PAPR [4, 5].
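Under these definitions, the effect of oversampling on the measured PAPR can be checked numerically. This is a minimal sketch; the 16-carrier BPSK symbol is an arbitrary example, not one of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
carriers = rng.choice([-1.0, 1.0], size=16)   # 16 BPSK information carriers

def papr_db(x_freq, oversample):
    """Evaluate eqn (4) on the sampled complex baseband symbol:
    peak instantaneous power over mean power, in dB."""
    x = np.fft.ifft(x_freq, oversample * len(x_freq))
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

coarse = papr_db(carriers, oversample=1)  # symbol-rate sampling can miss peaks
fine = papr_db(carriers, oversample=8)    # 8x oversampling, as used here
assert fine >= coarse - 1e-9              # a finer time grid never lowers the measured peak
```

The symbol-rate samples are a subset of the 8x-oversampled grid while the mean power is the same on both grids, so the oversampled estimate can only reveal a higher (more accurate) peak.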
Results
The simplest arrangement for the relative positioning
of the data and PRCs is to have a block of data
carriers immediately followed by a block of PRCs.
This arrangement was used for the results shown in
figures 1-3.
Figure 1. PAPR versus number of edge-grouped
PRCs (8 BPSK data carriers)
Figure 1 shows the worst-case PAPR and the 90%
point in the cumulative distribution of PAPR as the
number of PRCs is increased. The maximum PAPR
for the 8 information carriers is reduced by more than
5.5 dB with the addition of 10 PRCs. Selecting the optimal
amplitude and phase of the PRCs improves
performance significantly compared with setting only
the phase, as used in cyclic coding [3]. For this
reason, combined phase and amplitude modulation of
the PRCs was used in all later experiments.
Figure 2. Maximum PAPR versus number of data
carriers and edge-grouped PRCs, where M is the
number of PRCs
Figure 2 shows the effect of adding PRCs on the
PAPR as the number of information carriers is varied.
The improvement in PAPR remains relatively
constant as the number of information carriers is
increased. This shows that this technique gives
consistent performance gains as the number of
information carriers is varied.
Adding PRCs reduces the PAPR at the expense of
additional transmission power. Figure 3 shows the
net improvement in PAPR due to the addition of
PRCs. The PAPR reduction was calculated as the
difference between the PAPR results for zero PRCs
and the PAPR results with the addition of PRCs. The
loss in signal power due to the PRCs was then
subtracted from the PAPR reduction in order to give
the net PAPR improvement. If the data signal power
lost due to the transmission of the PRCs was more
than the PAPR gain then there would be little
point in adding the PRCs. It can be seen that for 10
BPSK carriers there is little improvement in adding
more than 2 PRCs. In fact adding more than 5 PRCs
results in a worsening of the average (50%) PAPR.
This is due to the power cost of the PRCs.
Effect of PRC Position
Previous results were shown for grouped PRCs
positioned immediately after the data carriers, as
shown in Figure 4a. Two different positioning tests
were performed. One test kept the PRCs grouped
together as in Figure 4a, but moved them
with respect to the data carriers as in Figure 4b. The
second test positioned the PRCs in a spread-out
manner. The best spread pattern was established
using a randomised search.
Figure 4. PRC position combinations
Adding PRCs uses a significant amount of additional
bandwidth. It is therefore important to minimise the
number used, or to position the PRCs so that the
bandwidth can be reused. For example, in a multi-
user OFDM system where each user transmits a
block of carriers, the PRCs can be overlapped, i.e.
they are transmitted at the same frequency,
effectively halving the bandwidth.
Grouped PRCs
In this scheme the PRCs were maintained as a group
of carriers. They were repositioned by sliding them
with respect to the data carriers. Figures 5 and 6 show
the effect of position on the effectiveness of the
PRCs. Figure 5 shows that for a small number of PRCs
the position of the PRCs within the data carriers has
little effect on performance. However, with 4 or
more PRCs the position has a significant effect on the
performance of the PRCs. Placing the PRCs within
the data carriers, offset 3 carriers from the centre, gives
the best results. This gives a further reduction of 1 dB
compared with edge-grouped PRCs. Not having
edge-grouped PRCs would prevent overlapping of the
PRCs in a multi-user OFDM system, thus doubling the
bandwidth used by the PRCs.
Figure 5. PAPR versus position of 2 grouped PRCs
(10 BPSK data carriers)
Figure 6. PAPR versus position of 4 grouped PRCs
(10 BPSK data carriers)
Spread PRCs
The position of the 4 grouped PRCs had a significant
effect on the PRC performance, thus it seemed likely
that spreading the PRCs out might lead to further
improvements. The exact relationship between the
position of the PRCs and the PAPR distribution is
currently unknown and so a random search was used
for optimisation. The PRCs and data carriers were
positioned randomly to form a block of carriers with
no gaps as shown in figure 4d. For each position
combination the PAPR distribution was found and
the combination which resulted in the lowest
maximum PAPR was selected as the optimised PRCs
position.
The PAPR distribution was found by testing all
combinations of the data code words. For each data
code word combination the optimum PRCs were
found as described in section 3. The PAPR
distribution versus the number of PRCs is shown in
figure 7. This result is for 10 data carriers and shows
that spreading the PRCs can result in large reductions
in the PAPR of the OFDM symbol. A reduction of >
6dB is possible.
Figure 7. PAPR versus the number of spread PRCs
(10 BPSK data carriers)
Figure 8. Net improvement in PAPR, position-optimised
PRCs (10 BPSK data carriers)
Figure 8 shows the overall net improvement in the
PAPR using position optimised PRC. This can be
directly compared to figure 3 which shows the results
for edge grouped PRCs. The maximum net gain for
position optimised PRCs is nearly double that of the
edge grouped PRCs. Figure 8 shows that the net
PAPR gain increases rapidly up to 4 PRCs, after
which the gain is minimal. Thus the optimal number
of PRCs would be 4 for 10 data carriers. Other tests
also show that the number of PRCs needs to be
approximately 40% of the number of data carriers in
order to get significant improvements in the PAPR.
Overlapping of the spread PRCs in a multi-user
OFDM system is more difficult, as most of the PRCs
will be surrounded by data carriers, so simple
overlapping may not be possible. This would be the
case if the data and PRCs are grouped. However, if the
carriers for each user are spread out, it might be
possible to have spread-out PRCs that overlap between
the users but still provide a large reduction in the
PAPR.
The addition of 4 PRCs with 10 data carriers results in a large net gain of
4.5 dB, allowing more power to be transmitted. For a transmission with
no PRCs at an error rate of 1x10^-3, adding the PRCs and maintaining the
same peak power would decrease the error rate to 1x10^-7 [7]. This is
more efficient than adding error-correcting bits at the same coding rate.
For example, using Hamming coding at a rate of 4 parity bits for 11 data
bits gives a gain of only 1.2 dB at a bit error rate of 1x10^-4 [7], which
is significantly less than 4.5 dB.
Table 1 shows the number of PRC positions tested, and the best
combination found.
Table 1. Optimised positions found for 10 BPSK data carriers
Conclusion
Adding peak reduction carriers can significantly
improve the PAPR of an OFDM signal. The PRCs
can result in a reduction of >6dB in the maximum
PAPR and a net reduction of >4.5dB when the
additional power for PRCs is taken into account. It
was found that varying the amplitude as well as phase
for the PRCs gave improved performance over just
phase variation. It was also found that spreading the
position of the PRCs resulted in a better performance
than grouped PRCs. Adding more PRCs results in a
lower PAPR, however the use of large numbers of
PRCs is limited by the cost of additional transmission
power, bandwidth and complexity limits.
References
1. Bauml, R.W., Fischer, R.F.H., Huber, J.B., "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping," Electronics Letters, 1996, Vol. 32, pp. 2056-2057
2. Van Eetvelt, J., Wade, G., Tomlinson, M., "Peak to average power reduction for OFDM schemes by selective scrambling," Electronics Letters, 1996, Vol. 32, pp. 1963-1964
3. Wulich, D., "Reduction of peak to mean ratio of multicarrier modulation using cyclic coding," Electronics Letters, 1996, Vol. 32, pp. 432-433
4. Tellambura, C., "Use of m-sequences for OFDM peak-to-average power ratio reduction," Electronics Letters, 1997, Vol. 33, pp. 1300-1301
5. Tellambura, C., "Phase optimisation criterion for reducing peak-to-average power ratio in OFDM," Electronics Letters, 1998, Vol. 34, pp. 169-170
6. Davis, J.A., Jedwab, J., "Peak-to-mean power control and error correction for OFDM transmission using Golay sequences and Reed-Muller codes," Electronics Letters, 1997, Vol. 33, pp. 267-268
7. Sklar, B., Digital Communications: Fundamentals and Applications, Prentice Hall, 1988, p. 300
RADIUS Server to Improve Wireless Security
Anuroop
Department of Electronics and Communication, Galgotia College of Engg. & Technology, G. Noida
E-mail: anuroopbajpai@yahoo.com

Tazeem Ahmad Khan
Department of Electronics and Communication, Jamia Millia Islamia, New Delhi 110025
E-mail: khan_taz@yahoo.com
Abstract: We investigated the use of the Remote Authentication Dial
In User Service (RADIUS) protocol in the area of IEEE 802.11
wireless LAN (WLAN) security. We wanted to ascertain whether the
RADIUS protocol can increase the security of a wireless network,
and if so, to what extent. In this paper, we will show that the
RADIUS protocol can provide increased security.
I. INTRODUCTION
The popularity of wireless networks grows both in personal
use and in business use. Since wireless networks are a
broadcast medium, making sure data is not made available to
unauthorized users is a very real concern. As an analogy,
consider a board meeting that is taking place at a company
where the discussion focuses on the release of a new product in
a highly competitive market. But let's further suppose that
they must hold this meeting in public. Anyone walking by can
hear these conversations, including market competitors, much
to the dismay of the originating corporation. This is the same
scenario we face with wireless networks. Anyone within range
of the radio frequencies has access to the bits being transmitted.
The transmission is, in effect, public. Wireless security then
has two major goals. One is to protect the bits so that, even
though anyone can read them, unauthorized users can make no
sense of them. The second is to limit access so that if a hacker
succeeds in gaining access to the wireless network, he cannot
see anything other than the public bits on the wireless
network. In other words, prevent intruders from accessing
other portions of the network.
The rest of this paper is organized as follows: in Section II,
we offer a synopsis of related work. In Section III, we
illustrate the weaknesses of current security protocols. In
Section IV, we discuss the IEEE 802.1X standard and the
authentication framework that it provides. In Section V, we
give a brief history of the RADIUS protocol and outline its
general functionality. In Section VI, we show more detail of
the RADIUS protocol and its function related to authentication.
In Section VII, we address disadvantages of the RADIUS
protocol. In Section VIII, we offer conclusions to this research.
In Section IX, we suggest further research into possible real-
world implementation issues.
II. RELATED WORK
In the paper titled "A Survey on Wireless Security
Protocols (WEP, WPA and WPA2/802.11i)" [10] by Arash
Habibi Lashkari and Mir Mohammed Seyed Danesh, a wireless
security approach is given, but it relies on a device-level
security model. That paper describes the major encryption
protocols but goes no further. This model, as will be explained
in the present paper, can be improved upon.
III. PROBLEM STATEMENT
As mentioned earlier, wireless networks function over a
broadcast medium. Let's examine some common methods to
protect the broadcast data from unauthorized users, those being
Service Set Identifier (SSID) cloaking, MAC Access Control
Lists (ACLs), Wired Equivalent Privacy (WEP) and Wi-Fi
Protected Access (WPA/WPA2) [8].
SSID cloaking prevents the broadcast of the network
name so that clients cannot pick it from a list of
networks to connect to. However, tools are readily available
that allow users to connect to hidden SSIDs.
The use of MAC ACLs is a strategy of allowing access
only to devices whose MAC address is contained in an
access list. There are many freeware packet-sniffing
applications, such as Wireshark, available to the general
public. MAC filtering does nothing to prevent packet
sniffing. The data being transmitted is still public and
accessible to anyone in range of the wireless network.
There are also programmable network interface cards
(NIC) available to the public. It would not take a
significant amount of effort for a hacker to sniff a valid
MAC address and program his NIC with this address.
Once this is done, the hacker now has access to the
network beyond the device that is using MAC ACLs.
WEP is an encryption standard that was part of the initial
802.11 standard. WEP uses a secret key known by the
sender and receiver and combines that with a 24-bit
initialization vector (IV). This IV is not a static value, but
with only 24 bits it will eventually repeat itself. The secret
key is not technically static, but in practice it remains static
for long periods of time. The collection of enough frames
can allow a hacker to determine the shared values among
them [4].
WPA provides for a sophisticated key hierarchy that
generates new encryption keys each time a mobile device
connects. This standard has also been shown to be
vulnerable when implemented using the temporal key integrity
protocol (TKIP) [4]. It is not as vulnerable as WEP in the
sense that the key itself is not discovered, but at the very
least, WPA is vulnerable to ARP poisoning. The
interested reader can find a brief discussion of ARP
poisoning here [5].
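The practical impact of WEP's 24-bit IV mentioned above can be quantified with a standard birthday-bound estimate. The sketch below is illustrative only and assumes each frame draws its IV uniformly at random (many WEP implementations used sequential IVs, which exhaust the space deterministically and repeat even sooner):

```python
import math

def iv_collision_prob(frames: int, iv_bits: int = 24) -> float:
    """Birthday-bound probability that at least two frames reuse an IV,
    assuming each frame draws its IV uniformly at random."""
    space = 2 ** iv_bits
    return 1.0 - math.exp(-frames * (frames - 1) / (2.0 * space))

# A collision becomes more likely than not after roughly 5000 frames,
# which a busy access point can transmit in a few seconds.
print(iv_collision_prob(5000))
```

Once two frames share an IV (and the long-lived secret key), their keystreams are identical, which is exactly the condition the frame-collection attack of [4] exploits.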
But let's suppose that we have successfully implemented a
key encryption protocol such that a hacker cannot break into it.
Is our wireless network safe now? The answer is no, because
the methods and protocols discussed to this point are device-
level security measures. What if a hacker somehow gains
possession of a device that has been configured with these
protocols? That device has access to the network by virtue of
the device-level access control, so now the hacker has access
also. As a second scenario, it may also be desirable to restrict
access to specific networks to certain employees only, such as
an advanced research and development staff. We would want
the restricted users to be able to roam the corporate wireless
networks with the exception of the R&D network. In this case,
user-level authentication is necessary. These cases can be
addressed by the IEEE 802.1X standard, and the RADIUS
protocol can help.

IV. IEEE 802.1X

The IEEE 802.1X standard [6] was developed to support
port-based network access. The IEEE 802.1X standard uses the
Extensible Authentication Protocol (EAP) to allow for a variety
of authentication mechanisms [6]. This standard presents the
notion of three entities: the supplicant, the authenticator (or
network port) and the authentication server. The supplicant is
the end device, in our case the wireless client (or hacker). The
authenticator, for the purposes of this paper, is a wireless
network access point. The RADIUS server is the
authentication server. Figure 1 shows a typical LAN/WLAN
architecture depicting these entities. Figure 2 shows the EAP
stack. As stated, a supplicant is an entity that wishes to use a
network service, or port. An authenticator is in control of a set
of ports. An authentication server can instruct an authenticator
to provide access after a successful authentication, or to deny
access after an authentication failure.

Figure 1. The entities in an IEEE 802.1X setup [7]

Figure 2. The EAP Stack [7]

V. RADIUS PROTOCOL

What exactly is the RADIUS protocol? Livingston
Enterprises developed the RADIUS protocol in 1991 [1]. In
1997 it was published as RFC 2058; the current version is
RFC 2865. RADIUS was designed to provide centralized
authentication, authorization and accounting (AAA)
management for computers connecting to a network service.
As the name implies, RADIUS was originally developed to
manage dispersed serial line and modem pools [2]. Instead of
each Network Access Server (NAS) maintaining its own list of
authorized users and passwords, the NAS device would send an
authorization request to a centralized AAA server running the
RADIUS protocol. Use of the RADIUS protocol has
expanded, and it is now commonly used for network ports,
VPNs, web servers, access points, etc. [3]. In fact, there are
many commercial products available in this area: Alepo
(RADIUS Server for ISPs, RADIUS Server for VoIP,
RADIUS Server for Public WLAN), Interlink Networks
RADIUS Server, and Microsoft Internet Authentication Service,
to name a few. It is interesting to note that RADIUS uses the
UDP transport-layer protocol, not TCP; the interested reader
can find an explanation of this choice in Section 2.4 of RFC
2865. As mentioned earlier, the RADIUS protocol may be
used in many environments other than WLANs; however,
we will focus our attention on how it can be used with WLAN
implementations.
VI. RADIUS IN ACTION
A client using the RADIUS protocol obtains authentication
information from the user, for example by way of a login
prompt. The client creates an Access-Request (using EAP as
stated above) that contains attributes such as username,
password, client ID and port ID. The access request is
submitted through the authenticator (Access Point) to the
RADIUS server. The RADIUS server validates the sending
client. If the client is valid the RADIUS server turns to
validating the user. How it does this is implementation
dependent. The RADIUS server may have its own database of
users, or it may consult other servers, for example, an Active
Directory or LDAP server. The RADIUS server may give
responses to the authenticator such as Access-Accept, Access-
Reject or Access-Challenge, which are then passed back to the
client (wireless user). In this way, the device and the user are
verified before communication over the wireless network is
allowed.
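The validation flow described above can be sketched as a toy decision function. This is a minimal illustration only: the client list, the in-memory user store and all names are hypothetical, and a real RADIUS server would additionally verify the shared secret and typically consult an Active Directory or LDAP server for the user lookup.

```python
# Toy sketch of the RADIUS server decision flow described above.
# Per RFC 2865, requests from unknown clients are silently discarded.
KNOWN_CLIENTS = {"ap-1"}                 # NAS devices sharing a secret with the server
USER_DB = {"alice": "s3cret-pw"}         # stand-in for an AD/LDAP user lookup

def handle_access_request(client_id: str, username: str, password: str) -> str:
    if client_id not in KNOWN_CLIENTS:     # validate the sending client first
        return "Discard"
    if USER_DB.get(username) == password:  # then validate the user
        return "Access-Accept"
    return "Access-Reject"

print(handle_access_request("ap-1", "alice", "s3cret-pw"))  # Access-Accept
```

The two-stage check mirrors the text: the server validates the sending client before it ever evaluates user credentials, and only Accept/Reject/Challenge decisions are relayed back through the authenticator.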
VII. DISADVANTAGES TO RADIUS
How effective is the RADIUS protocol? Unfortunately,
there are several noteworthy vulnerabilities [9].
Access-Request messages sent by clients are not
authenticated [9].
The RADIUS shared secret can be weak due to poor
configuration and limited size [9].
Sensitive attributes are encrypted using the RADIUS
hiding mechanism [9].
Poor Request Authenticator values can be used to decrypt
encrypted attributes [9].
These vulnerabilities, along with suggested best practices to
mitigate the exposure and risk, are detailed in [9]. In addition
to these vulnerabilities, there are compatibility concerns when
using RADIUS. Not all wireless devices (such as PDAs or
hand-held bar code scanners) are capable of supporting IEEE
802.1X and, by extension, cannot support the RADIUS
protocol.
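The "hiding mechanism" vulnerability listed above can be made concrete. The sketch below is a minimal illustration of the User-Password obfuscation of RFC 2865, Section 5.2 (the shared secret and password here are placeholder values). Note that the keystream for the first 16-octet block depends only on the shared secret and the Request Authenticator, which is why weak secrets and poorly chosen authenticators expose the hidden attribute:

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 Sec. 5.2 User-Password hiding: an MD5-derived XOR stream."""
    # Pad the password with NULs to a multiple of 16 octets.
    padded = password + b"\x00" * (-len(password) % 16)
    hidden, prev = b"", authenticator   # b1 = MD5(S + RA); b_i = MD5(S + c_{i-1})
    for i in range(0, len(padded), 16):
        b = hashlib.md5(secret + prev).digest()
        c = bytes(p ^ k for p, k in zip(padded[i:i + 16], b))
        hidden += c
        prev = c
    return hidden

def unhide_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Reverse the hiding; trailing NUL padding is stripped."""
    plain, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        b = hashlib.md5(secret + prev).digest()
        plain += bytes(c ^ k for c, k in zip(hidden[i:i + 16], b))
        prev = hidden[i:i + 16]         # chain on the ciphertext block
    return plain.rstrip(b"\x00")

secret = b"testing123"                  # placeholder shared secret
authenticator = os.urandom(16)          # Request Authenticator
hidden = hide_password(b"s3cret-pw", secret, authenticator)
assert unhide_password(hidden, secret, authenticator) == b"s3cret-pw"
```

Because MD5 of a short, guessable secret is the only barrier, an attacker who observes Access-Requests with repeated or predictable Request Authenticators can mount the decryption attacks catalogued in [9].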
VIII. CONCLUSIONS
In this paper we have considered the problem of securing a
wireless network. There are clear advantages to using a
wireless network, for both personal and business networks.
Infrastructure costs to install a traditional LAN (switchgear,
cabling and installation, diagnosing cable faults, etc.) can be
high. In addition, the mobility provided by wireless topologies
can allow people to be at one location today, but at another
location tomorrow, or even mere moments later, maintaining
full, uninterrupted access to computer applications. However,
these advantages come with risk. Because wireless networks
are a broadcast medium, physical security does not exist. As
we have seen, there have been many attempts at securing
wireless networks. Unfortunately, none of them provide a
completely secure solution by themselves. However, full WPA
(encompassing the IEEE 802.1X standard), coupled with a
RADIUS server does provide an increased level of security
over that which can be achieved using a solution that only
implements device-level security.
IX. FURTHER RESEARCH
As a topic for further research, we suggest investigating the
potential for bottlenecks or other performance degradation
caused by additional client authentication steps when
incorporating a RADIUS server.
X. REFERENCES
[1] Konrad Roeder, Radius, Wireshark Wiki
(http://wiki.wireshark.org/Radius), September 4, 2009
[2] RFC 2865, http://www.faqs.org/rfcs/rfc2865.html, June
2000
[3] Brien Posey, SolutionBase: RADIUS deployment
scenarios, August 31, 2006
[4] Martin Beck and Erik Tews, Practical attacks against
WEP and WPA, November 8, 2008
[5] Corey Nachreiner, Anatomy of an ARP Poisoning
Attack, July 24, 2007
[6] IEEE 802.1X,
http://standards.ieee.org/getieee802/download/802.1X-
2004.pdf, March 10, 2005
[7] Arunesh Mishra, Min Ho Shin, Nick L. Petroni, Jr., T.
Charles Clancy, and William A. Arbaugh, Proactive Key
Distribution Using Neighbor Graphs
[8] Meraki White Paper: Wireless LAN Security, March 2009
[9] Joseph Davies, RADIUS Protocol Security and Best
Practices, January 2002
[10] Arash Habibi Lashkari and Mir Mohammed Seyed
Danesh, A Survey on Wireless Security Protocols (WEP,
WPA and WPA2/802.11i), September 21, 2009
Capacity of MIMO System with Optimally Ordered
Successive Interference Cancellation
1 Tanu Gupta, 2 Sanjib Sahu
1 M.Tech ECE, IGIT, GGSIPU, 2 Asst. Registrar, USIT, GGSIPU, Delhi-110006
tanu_0509@yahoo.co.in
ABSTRACT
In wireless communications, spectrum is a scarce resource and hence imposes a high cost on high-data-rate transmission. It has
been demonstrated that multiple-antenna systems provide very promising gains in capacity, reliability, throughput and power
consumption, with less sensitivity to fading and without increasing the use of spectrum, hence leading to a breakthrough in the data
rate of wireless communication systems. The vertical Bell Labs layered space-time (V-BLAST) system is a multi-input multi-output
(MIMO) system designed to achieve good multiplexing gain. In this paper, we study the V-BLAST MIMO system architecture with
an optimally ordered successive interference cancellation (SIC) receiver under zero-forcing (ZF) equalization and simulate this
structure in a Rayleigh fading channel. Based on bit error rate, we show the performance of this receiver, indicating that the ordered
SIC detector most effectively balances the accuracy of symbol detection. A SIC receiver based on ZF combined with symbol
cancellation and optimal ordering improves performance with lower complexity. Finally, the paper addresses current questions
regarding the integration of MIMO systems in practical wireless systems and standards.
Keywords: MIMO, V-BLAST, ZF, optimally ordered SIC.
1. INTRODUCTION
During the past few years, a new dimension to future wireless
communication has opened, whose ultimate goal is to provide
high data rates for universal personal and multimedia
communication. To achieve such an objective, the next-
generation personal communication networks will need to
support a wide range of services, including high-quality voice,
data, facsimile, still pictures and streaming video. These future
services are likely to include applications which require high
transmission rates of several megabits per second (Mbps). The
data rate and spectrum efficiency of wireless mobile
communications have been significantly improved over the
last decade or so. Recently, advanced systems such as 3GPP
LTE and terrestrial digital TV broadcasting have been
developed using OFDM and CDMA technology. In general,
most mobile communication systems transmit bits of
information over the radio channel to the receiver. The radio
channels in mobile radio systems are usually multipath fading
channels, which cause inter-symbol interference (ISI) in the
received signal. To remove ISI from the signal, a strong
equalizer is needed, which requires knowledge of the channel
impulse response (CIR) [1]. Equalization techniques which can
combat and exploit the frequency selectivity of the wireless
channel are of enormous importance in the design of high-
data-rate wireless systems. On the other hand, the popularity of
MIMO communication channels, rapidly time-varying
channels due to high mobility, multi-user channels, multi-
carrier based systems and the availability of partial or no
channel state information at the transmitter and/or receiver
bring new problems which require novel equalization
techniques [2]. Hence, there is a need for the development of
practical, low-complexity equalization techniques and for
understanding their potentials and limitations when used in
wireless communication systems characterized by very high
data rates, high mobility and the presence of multiple antennas
[10]. The time span over which an equalizer converges is a
function of the equalizer algorithm, the equalizer structure, and
the time rate of change of the
multipath radio channel [3]. In 1996, Raleigh and Cioffi, and
Foschini, proposed new approaches for improving the
efficiency of MIMO systems, which inspired numerous further
contributions [11]-[13]. Two suitable architectures for their
realisation are the Vertical Bell-Labs Layered Space-Time
(V-BLAST) algorithm and the Diagonal Bell-Labs Layered
Space-Time (D-BLAST) algorithm proposed by Foschini,
which are capable of achieving a substantial part of the MIMO
capacity. V-BLAST achieves high spectral efficiency while
being relatively simple to implement. This structure offers
considerably better error performance than other existing
detection methods and still has low complexity. The basic
motive was to increase the data rate in a constrained spectrum.
The promises of information-theoretic MIMO analysis for the
channel capacity were the main trigger for this enthusiasm and
also ignited the study of related areas such as MIMO channel
modelling, space-time signal processing, space-time coding,
etc. The objective of such multi-channel diagonalization is to
partition or distribute multi-user signals
into disjoint spaces, and the resultant channel gains are
maximized to optimize the overall system capacity under the
constraint of a fixed transmit power. It also improves the
quality (BER) and offers the potential of achieving
extraordinary data rates [2,7] by transferring the signals in the
time domain and space domain separately, without consuming
more frequency resources, with frequency diversity due to
delay spread and higher spectral efficiency, and without
increasing the total transmission power or bandwidth [14]-[18]
of the communication system.
2. MIMO SYSTEM
MIMO systems are an extension of smart antennas systems.
Traditional smart antenna systems employ multiple antennas at
the receiver, whereas in a general MIMO system multiple
antennas are employed both at the transmitter and the receiver.
The addition of multiple antennas at the transmitter combined
with advanced signal processing algorithms at the transmitter
and the receiver yields significant advantage over traditional
smart antenna systems - both in terms of capacity and diversity
advantage. A MIMO channel is a wireless link between M
transmits and N receive antennas. It consists of MN elements
that represent the MIMO channel coefficients. The multiple
transmit and receive antennas could belong to a single user
modem or it could be distributed among different users. The
later configuration is called distributed MIMO and cooperative
communications.
Figure 1: Functions of MIMO
3. CHANNEL CAPACITY OF MIMO SYSTEM
To mitigate the problem of impairment in Multipath
propagation, diversity techniques were developed. Antenna
diversity is a widespread form of diversity. Information theory
has shown that with multipath propagation, multiple antennas
at both transmitter and receiver can establish essentially
multiple parallel channels that operate simultaneously, on the
same frequency band at the same total radiated power. Antenna
correlation varies drastically as a function of the scattering
environment, the distance between transmitter and receiver, the
antenna configurations, and the Doppler spread. Recent
research has shown that multipath propagation can in fact
contribute to capacity. Channel capacity is the maximum
information rate that can be transmitted and received with
arbitrarily low probability of error at the receiver. A common
representation of the channel capacity is within a unit
bandwidth of the channel and can be expressed in bps/Hz. This
representation is also known as spectral (bandwidth) efficiency.
MIMO channel capacity depends heavily on the statistical
properties and antenna element correlations of the channel.
Representing the input and output of a memoryless channel
with the random variables X and Y respectively, the channel
capacity is defined as the maximum of the mutual information
between X and Y:

C = max_{p(x)} I(X; Y) ...(1)

A channel is said to be memoryless if the probability
distribution of the output depends only on the input at that time
and is conditionally independent of previous channel inputs or
outputs. p(x) is the probability distribution function (pdf) of the
input symbols X.
For the MIMO system, we have M antennas at transmitter and
N antennas at receiver.
We analyze the capacity of MIMO channel in two cases:
3.1 Same signal transmitted by each antenna
In this case, the MIMO system can be viewed in effect as a
combination of the SIMO and MISO channels, and the
effective SNR becomes

SNR_MIMO = (N^2 M^2 . signal power) / (N M . noise) = M . N . SNR

So the capacity of the MIMO channel in this case is:

C_MIMO = B log2[1 + M . N . SNR] (bps/Hz) ...(2)

Thus, the channel capacity of the MIMO system is higher than
that of the SIMO and MISO systems. But in this case, the
capacity gain appears inside the log function. This means that
trying to increase the data rate by simply transmitting more
power is extremely costly.
3.2 Different signal transmitted by each antenna
The big idea in MIMO is that we can send different signals
using the same bandwidth and still be able to decode them
correctly at the receiver. Thus, it is as if we are creating a
channel for each one of the transmitters. The capacity of each
one of these channels is roughly equal to:

C = B log2[1 + SNR] (bps/Hz) ...(3)

But we have M of these channels, so the total capacity of the
system is:

C_MIMO = M B log2[1 + SNR] (bps/Hz) ...(4)

Roughly, with N >= M, the capacity of the MIMO channel is
equal to:
C_MIMO = M B log2[1 + SNR] (bps/Hz) ...(5)
Thus, we can get a linear increase in the capacity of the MIMO
channel with respect to the number of transmit antennas. So,
the key principle at work here is that it is more beneficial to
transmit data using many different low-powered channels than
using one single high-powered channel. In the practical case of
a time-varying and randomly fading wireless channel, it can be
shown that the capacity of an M x N MIMO system with
known channel is

C_MIMO = B log2 det[I_N + (SNR/M) H H*] (bps/Hz) ...(6)

We can see that the capacity advantage of MIMO systems is
significant. As an example, for a system with M = N and
H H* = M I_N, the capacity increases linearly with the number
of transmit antennas:

C_MIMO = M B log2[1 + SNR] (bps/Hz) ...(7)
MIMO is best when SNR and angular spread are large, but
small angular spread or the presence of a dominant path (e.g.
LOS) reduces MIMO performance. In multipath, using
multiple antennas at both TX and RX multiplies capacity: C
increases by K bps/Hz for every 3 dB of SNR increase for
MIMO, whereas C increases by only 1 bps/Hz for every 3 dB
of SNR increase for SIMO, MISO or SISO (at high SNR),
where K represents the number of nonzero (i.e., positive)
eigenvalues of H H*.
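Equations (3)-(7) can be checked numerically. The sketch below is a minimal illustration (the 20 dB SNR, the 4x4 configuration and the i.i.d. Rayleigh channel model are assumptions made for the example): it evaluates Eq. (6) for a random channel realization and compares it with the single-channel capacity of Eq. (3) and the linear-growth limit of Eq. (7).

```python
import numpy as np

def mimo_capacity(H: np.ndarray, snr: float) -> float:
    """Eq. (6): C = log2 det(I_N + (SNR/M) H H*), in bps/Hz (B = 1)."""
    n, m = H.shape
    gram = np.eye(n) + (snr / m) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(gram))))

rng = np.random.default_rng(0)
snr = 100.0                        # 20 dB
m = n = 4
# i.i.d. Rayleigh fading: CN(0, 1) entries
H = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)

c_siso = np.log2(1 + snr)          # Eq. (3): one channel
c_mimo = mimo_capacity(H, snr)     # Eq. (6): random 4x4 channel realization
c_ideal = m * np.log2(1 + snr)     # Eq. (7): ideal case H H* = M I_N
print(c_siso, c_mimo, c_ideal)
```

For a channel satisfying H H* = M I_N the function reproduces Eq. (7) exactly; for the random realization it falls between the single-channel and ideal values, illustrating the near-linear growth with M at high SNR.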
4. V-BLAST ARCHITECTURE
One of the earliest communication systems that were proposed
to take advantage of the promising capacity of MIMO channels
is the BLAST architecture. It achieves high spectral
efficiencies by spatially multiplexing coded or uncoded
symbols over the MIMO fading channel. Symbols are
transmitted through M antennas. Each receiver antenna
receives a superposition of faded symbols. The ML decoder
would select the set of symbols that are closest in Euclidean
distance to the received N signals. However, it is hard to
implement due to its exponential complexity. More practical
decoding architectures were proposed in the literature.
4.1. V-BLAST Technique
A data stream is demultiplexed into M sub-streams termed
layers. In V-BLAST, at each transmission time, layers are
arranged horizontally across space and time over the M
transmit antennas, and the cycling operation of D-BLAST is
removed before transmission, as shown in Fig. 2. At the receiver,
as mentioned previously, the received signals at each receive
antenna is a superposition of M faded symbols plus additive
white Gaussian noise (AWGN). Although the layers are
arranged differently for the two BLAST systems across space
and time, the detection process for both systems is performed
vertically for each received vector. Without loss of generality,
assume that the first symbol is to be detected. The detection
process consists of two main operations:
I) suppression of Interference (nulling): The suppression
operation nulls out interference by projecting the received
vector onto the null subspace (perpendicular subspace) of the
subspace spanned by the interfering signals. After that, normal
detection of the first symbol is performed.
II) Interference cancellation (subtraction): The contribution of
the detected symbol is subtracted from the received vector.
Figure 2: Block Diagram of V-BLAST Architecture.
5. PERFORMANCE ANALYSIS OF MIMO
TECHNOLOGY USING V-BLAST
5.1 Zero Forcing Equalization using V-BLAST with
Optimally ordered SIC:
In classical successive interference cancellation (SIC), the
receiver arbitrarily takes one of the estimated symbols and
subtracts its effect from the received symbols y_1 and y_2.
However, we can be more intelligent in choosing whether we
should subtract the effect of x^_1 first or x^_2 first. To make
that decision, let us find out which transmit symbol (after
multiplication with the channel) arrived at higher power at the
receiver. The received power at both antennas corresponding
to the transmitted symbol x_1 is

Px_1 = |h_11|^2 + |h_21|^2.

The received power at both antennas corresponding to the
transmitted symbol x_2 is

Px_2 = |h_12|^2 + |h_22|^2.

If Px_1 > Px_2, the receiver decides to remove the effect of
x^_1 from the received vectors y_1 and y_2 and then
re-estimate x^_2. Else, if Px_1 < Px_2, the receiver decides to
subtract the effect of x^_2 from the received vectors y_1 and
y_2, and then re-estimate x^_1.
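The ordered detection rule above can be sketched in a few lines. This is an illustrative implementation only: the BPSK constellation, the hard-coded 2x2 channel and the noiseless demo are assumptions for the example, and the ordering uses the received-power (column-norm) criterion described in the text.

```python
import numpy as np

def zf_sic_ordered(y, H, constellation):
    """ZF detection with optimally ordered SIC.
    At each stage, the not-yet-detected stream whose channel column has the
    largest received power is nulled out by zero forcing, sliced to the
    nearest constellation point, and its contribution subtracted from y."""
    y = np.asarray(y, dtype=complex).copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1], dtype=complex)
    while remaining:
        Hr = H[:, remaining]
        k_local = int(np.argmax(np.sum(np.abs(Hr) ** 2, axis=0)))  # strongest first
        k = remaining[k_local]
        w = np.linalg.pinv(Hr)[k_local]           # ZF nulling row for stream k
        s = w @ y                                  # soft estimate of stream k
        x_hat[k] = constellation[np.argmin(np.abs(constellation - s))]
        y -= H[:, k] * x_hat[k]                    # interference cancellation
        remaining.remove(k)
    return x_hat

bpsk = np.array([-1.0, 1.0])
H = np.array([[1.0, 0.4], [0.3, 0.9]])             # example 2x2 channel
x = np.array([1.0, -1.0])                          # transmitted symbols
y = H @ x                                          # noiseless received vector
print(zf_sic_ordered(y, H, bpsk))                  # recovers x in the noiseless case
```

With AWGN added to y, the same routine exhibits the behaviour discussed in the results section: detecting the stronger stream first lowers the chance that an erroneous first decision corrupts the cancellation step.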
6. RESULT
The zero-forcing equalizer performs well only under the
theoretical assumption that the noise is zero. Its performance
degrades in a mobile fading environment. Zero forcing with
successive interference cancellation improves the performance
of the equalizer. This process improves the estimator
performance on each successive component compared to the
previous one. Compared to zero-forcing equalization alone, the
addition of successive interference cancellation results in
around 2.2 dB of BER improvement. Zero forcing with
successive interference cancellation and optimal ordering
ensures that the symbol which is decoded first has a lower
error probability than the other symbol. Compared to zero-
forcing equalization with successive interference cancellation,
the addition of optimal ordering results in around 2.0 dB of
further BER improvement.
Figure 3: BER plot for MIMO equalized by ZF-SIC with
optimal ordering
7. CONCLUSION
In this paper, we presented a general multiple-antenna system
and the V-BLAST architecture, and analyzed the performance
of V-BLAST with a ZF detector in Rayleigh fading channels.
We first provided a comprehensive summary of capacity for
single-user MIMO channels. This indicates that the capacity
gain obtained from multiple antennas depends heavily on the
amount of channel knowledge at the receiver or transmitter,
the channel SNR, and the correlation between the channel
gains on each antenna element. We then focused attention on
the capacity regions for MIMO broadcast and multiple-access
channels under known or unknown channels. In contrast to
single-user MIMO channels, capacity results for these
multiuser MIMO channels are quite difficult to obtain, even for
constant channels. We summarized capacity results for MIMO
broadcast and multiple-access channels that are either constant
or fading with perfect instantaneous knowledge of the antenna
gains at both transmitter(s) and receiver(s). Performing
successive interference cancellation with optimal ordering
ensures that the symbol which is decoded first has a lower
error probability than the other symbol. This lowers the chance
of incorrect decisions resulting in erroneous interference
cancellation, and hence gives a lower error rate than simple
successive interference cancellation. MIMO is an important
key to enabling the wireless industry to deliver on the vast
potential and promise of wireless broadband.
8. REFERENCES
[1] J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital
Communication, Third Edition.
[2] D. Tse and P. Viswanath, Fundamentals of Wireless
Communication.
[3] R. Scholtz, "Multiple Access with Time-Hopping Impulse
Modulation", IEEE Milit. Commun. Conf., vol. 2, pp. 447-
450, 1993.
[4] T. S. Rappaport, Wireless Communications and Networks,
Second Edition.
[5] C. B. Ribeiro, M. L. R. de Campos, and P. S. R. Diniz,
"Zero-Forcing Equalization for Time-Varying Systems with
Memory".
[6] T. Karp, M. J. Wolf, S. Trautmann, and N. J. Fliege,
"Zero-Forcing Frequency Domain Equalization for DMT
Systems with Insufficient Guard Interval".
[7] S. U. H. Qureshi, "Adaptive Equalization".
[8] N. Wang and S. D. Blostein, "Approximate Minimum BER
Power Allocation for MIMO Spatial Multiplexing Systems".
[9] L. Meilhac, A. Chiodini, C. Boudesocque, C. Lele, and
A. Gercekci, "MIMO-OFDM modem for WLAN".
[10] G. Leus, S. Zhou, and G. B. Giannakis, "Orthogonal
multiple access over time- and frequency-selective
channels", IEEE Transactions on Information Theory, vol.
49, no. 8, pp. 1942-1950, 2003.
[11] B. Lu and X. Wang, "Iterative receivers for multiuser space-
time coding systems", IEEE J. Sel. Areas Commun., vol. 18,
no. 11, pp. 2322-2335, Nov. 2000.
[12] X. Zhu and R. D. Murch, "Layered space-frequency
equalization in a single-carrier MIMO system for frequency-
selective channels", IEEE Trans. Wireless Commun., vol. 3,
no. 3, pp. 701-708, May 2004.
[13] M. R. McKay and I. B. Collings, "Capacity and performance
of MIMO-BICM with zero-forcing receivers", IEEE Trans.
Commun., vol. 53, no. 1, pp. 74-83, Jan. 2005.
[14] J. H. Kotecha and A. M. Sayeed, "Transmit signal design for
optimal estimation of correlated MIMO channels", IEEE
Transactions on Signal Processing, vol. 52, pp. 546-577,
Feb. 2004.
[15] A. J. Paulraj, D. A. Gore, R. U. Nabar, and H. Bolcskei, "An
overview of MIMO communications - a key to gigabit
wireless", Proceedings of the IEEE, vol. 92, no. 2, pp.
198-218, Feb. 2004.
[16] K. W. Park and Y. S. Cho, "An MIMO-OFDM technique for
high-speed mobile channels", IEEE Communications
Letters, vol. 9, no. 7, pp. 604-606, July 2005.
[17] H. Bolcskei, "MIMO-OFDM wireless systems: basics,
perspectives, and challenges", IEEE Wireless
Communications, vol. 13, no. 4, pp. 31-37, Aug. 2006.
[18] M. Cicerone, O. Simeone, and U. Spagnolini, "Channel
Estimation for MIMO-OFDM Systems by Modal
Analysis/Filtering", IEEE Transactions on Communications,
vol. 54, no. 11, pp. 2062-207, Nov. 2006.
Cloud Computing: Security and Management
Jaswant Singh*, Gagandeep Kaur*
M.Tech Student (ECE), Asst. Prof. (ECE)
*Yadavindra College of Engg., Talwandi Sabo,
Punjab, India
Abstract: The Cloud represents the internet: instead
of using software installed on your computer or
saving data to your hard drive, you are working and
storing data on the web. While cloud computing offers
a large number of applications, there are still some
challenges to be solved, among them security and
privacy issues. The user has no control over his data
or its location, and the client is not aware of the
processes running. Security concerns arise as soon
as one begins to run applications and enters the
public domain, so the purpose of this paper is to
provide an overall security perspective of cloud
computing aimed at highlighting the security
concerns. This paper also focuses on security issues
in cloud computing and provides an overview of the
processes and controls used to address the security of
data centers, network hardware and communications.
Keywords: Cloud Computing, Security, SLA, SaaS,
PaaS, IaaS, Privacy, XML, TLS
1. INTRODUCTION
Cloud computing is web-based processing, whereby shared resources, software and information are provided to computers and devices. With cloud computing, users can access database resources via the internet from anywhere. It is a highly platform-independent form of computing. Organizations that use cloud computing as a service or infrastructure would like to examine the security and confidentiality issues for their business-critical applications. Clouds provide many services, and different architectures are based on those services. User data is stored at a centralized location, a data center, with a large data storage capacity. This data is processed on servers, so clients have to trust the provider for availability as well as data security. The SLA is the only legal agreement between the service provider and the client.
Fig.1: Cloud Computing Conceptual Diagram[8]
The only means by which the provider can gain the trust of the client is the SLA [1]. Despite the technical advantages of cloud computing, lack of control in the cloud is the major worry, and transparency is the way to remove it. To access the services of a cloud, web services are commonly used to provide access to IaaS services, and web browsers are used to access SaaS applications; PaaS uses both approaches. These layers promise to reduce capital expenditure, which includes reduced hardware costs.
The remainder of this paper is structured as follows: Section II introduces the different types of cloud, Section III explains the SLA, Section IV discusses the delivery models, and Section V highlights security issues, including security at the physical level, network security and data security.
2. TYPES OF CLOUDS
In providing a secure cloud computing solution, a major decision is the type of cloud to be implemented. Currently three cloud deployment models are offered: public, private and hybrid clouds [2].
Fig2: Adapted from Ramgovind and Eloff [2]
2.1. Public Cloud
In this cloud the service provider offers, maintains and bills for the services. A public cloud is a model which allows users access to the cloud via interfaces using mainstream web browsers. It is typically based on a pay-per-use model, which is flexible enough for cloud optimization. This helps cloud clients to better match their IT expenditure at an operational level by decreasing capital expenditure on IT infrastructure [3]. Public clouds provide less security than the other cloud types, because the applications and data accessed on a public cloud are exposed to malicious attacks.
2.2. Private Cloud
A private cloud is set up within an organization's internal enterprise data center. It is easier to align with security requirements and provides more enterprise control over deployment and use. In a private cloud, scalable resources and virtual applications provided by the cloud vendor are pooled together and are available for cloud users to share and use. It differs from a public cloud in that all the cloud resources and applications are managed by the organization itself [4]. A private cloud allows cloud computing internally, that is, on internal networks, without incurring the risks attached to external clouds. It provides reduced costs, greater flexibility, and standardized and better quality of service, along with greater security.
2.3. Hybrid Cloud
A hybrid cloud is a private cloud linked to one or more external cloud services, centrally managed and provisioned as a single unit [5]. It is a combination of a public and a private cloud. A large number of enterprises prefer a mix of in-house and external IT resources, chosen according to the job at hand; hence the need for such a hybrid model, which offers more flexibility to suit dynamic data requirements.
To summarize, in order to choose between these deployment models, business managers have to take into account the security differences of each cloud deployment model.
3. SLA (SERVICE LEVEL AGREEMENT)
A service level agreement is a document which defines the relationship between two parties: the provider and the recipient [1]. In cloud computing, the service and maintenance of data are provided by a vendor, which leaves the customer unaware of where the processes are running or where the data is stored; in other words, the customer has no control over the data location. To mitigate these problems the vendor provides the SLA, which minimizes the security risks.
If used properly, it should:
- Identify and define customer needs
- Provide a framework for understanding
- Simplify complex issues
- Reduce areas of conflict
- Eliminate unrealistic expectations [1]
Service Level Agreement Contents:
3.1 Definition of Services: It describes the services and the manner in which they are to be delivered. The information on the services must be accurate and contain detailed specifications of exactly what is being delivered. [1]
3.2 Performance Management: Every service must be capable of being measured, and the results analyzed and reported. The service performance level must be regularly reviewed by the two parties. [1]
3.3 Problem Management: The main purpose of problem management is to minimize the adverse impact of incidents and problems.
3.4 Customer Duties and Responsibilities: It is important for the customer to understand that it also has responsibilities to support the service delivery process. The customer must arrange access, facilities and resources for the supplier's employees who need to work on site. [1]
3.5 Disaster Recovery and Business Continuity: Disaster recovery and business continuity can be of critical importance, and this fact should be reflected within the SLA. [1]
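The five content areas above amount to a completeness checklist for a draft SLA. The sketch below represents that checklist in Python; the field names are invented for illustration and are not taken from any standard SLA template:

```python
# Illustrative sketch: the SLA content areas described above as a
# checklist. Field names are hypothetical.
REQUIRED_SECTIONS = {
    "definition_of_services",
    "performance_management",
    "problem_management",
    "customer_duties",
    "disaster_recovery",
}

def missing_sections(sla: dict) -> set:
    """Return the required SLA content areas absent from a draft SLA."""
    return REQUIRED_SECTIONS - set(sla)

# A draft covering only two of the five areas:
draft = {
    "definition_of_services": "Hosted CRM, delivered via web browser",
    "performance_management": "99.9% monthly uptime, reviewed quarterly",
}
print(sorted(missing_sections(draft)))
# ['customer_duties', 'disaster_recovery', 'problem_management']
```

A client could run such a check before signing, flagging any SLA that omits, say, disaster recovery.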
4. DELIVERY MODELS
The next security consideration that must be known to business management is the set of cloud delivery models.
4.1. Software as a Service (SaaS)
Suppose an application is provided by the vendor to the customer or user. If the user only wants to use the application, without having control over its operating system or hardware, that service is known as Software as a Service: the user uses the application without controlling it, and can only upload files. The provider installs, manages and maintains the software. The consumer does not have access to the infrastructure; they can only access the application. SaaS offers a large number of advantages: it is user friendly, cheap, and backup facilities are available. The virtualized applications can be moved onto different hardware quickly in response to increased demand.
Fig: Cloud Computing use cases white paper [6]
4.2. Infrastructure as a Service (IaaS)
This type of service allows the user or consumer full control over the operating system and hardware on which the application runs. The user can control the operating system, storage and networking components. The consumer uses the service as if it were a disk drive, database, message queue or machine, but cannot access the infrastructure that hosts it. It is more convenient, and its cost scales up or down with traffic.
4.3. Platform as a Service (PaaS)
This type of service allows the user a limited amount of control over the application's environment, that is, the environment in which the application is running, but no control over the operating system or hardware. PaaS is a compromise between SaaS and IaaS. It reduces maintenance cost and is more flexible.
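The division of control among the three delivery models above can be summarized in a small lookup table. This is only a simplified sketch of the descriptions in Sections 4.1-4.3 (real offerings vary, and the function name is invented):

```python
# Sketch: which layers the consumer controls under each delivery
# model, per the descriptions above (simplified).
CONSUMER_CONTROLS = {
    "SaaS": set(),                    # consumer only uses the application
    "PaaS": {"application"},          # app environment, but not OS/hardware
    "IaaS": {"application", "operating system",
             "storage", "networking"},
}

def consumer_controls(model: str, layer: str) -> bool:
    """True if, under `model`, the consumer controls `layer`."""
    return layer in CONSUMER_CONTROLS[model]

print(consumer_controls("SaaS", "operating system"))  # False
print(consumer_controls("IaaS", "operating system"))  # True
```

The security implication follows directly: the more layers the consumer controls (IaaS), the more of the security burden shifts from provider to consumer.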
5. SECURITY AND PRIVACY
Different technologies are used and combined to build cloud computing systems. How the system is made more secure depends on the type of cloud: SaaS, IaaS or PaaS, as defined above. This section presents a selection of security issues related to cloud computing and the ways in which they are addressed.
5.1. Physical Security
Physical security provides technical systems to automate authorization for access and authentication. Where traditional enterprise applications were used earlier, software as a service and software-plus-services are used nowadays, so it becomes necessary for the company or organization to employ additional adjustments to keep its valuable data and assets safe. Two main measures implement physical security:
5.1.1 Restricting data center access to personnel: It means defining security requirements against which data center employees and contractors are reviewed. Access is restricted by applying a least-privilege policy, so that only essential personnel are authorized to manage customer applications and services.
5.1.2 Centralizing physical asset access management: A tool was developed to manage access control to physical assets, for the purpose of requesting, approving, and provisioning access to data centers.
5.2. Network Security
To ensure network security, specialized hardware such as load balancers, firewalls and intrusion prevention devices are in place to manage volume-based denial-of-service attacks. Access control lists (ACLs) are applied to segment virtual local area networks (VLANs) and applications as needed.
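The ACL-based segmentation mentioned above can be illustrated with a minimal first-match rule evaluator built on Python's standard `ipaddress` module. The rules below are invented examples, not any provider's actual policy:

```python
import ipaddress

# Minimal sketch of an ACL used to segment VLANs: the first matching
# rule wins, with a default of deny. Rules are invented examples.
ACL = [
    ("allow", ipaddress.ip_network("10.1.0.0/16")),  # e.g. app-tier VLAN
    ("deny",  ipaddress.ip_network("10.2.0.0/16")),  # e.g. management VLAN
]

def permitted(src: str) -> bool:
    """Evaluate a source address against the ACL (default deny)."""
    addr = ipaddress.ip_address(src)
    for action, net in ACL:
        if addr in net:
            return action == "allow"
    return False  # no rule matched: default deny

print(permitted("10.1.5.9"))     # True  (matches the allow rule)
print(permitted("10.2.0.1"))     # False (matches the deny rule)
print(permitted("192.168.0.1"))  # False (no rule matches)
```

The default-deny fall-through mirrors the least-privilege policy described in Section 5.1.1: anything not explicitly authorized is refused.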
5.3. Data Security
Data assets falling into the moderate impact category are subject to encryption requirements when they reside on removable media or are involved in external network transfer. For example, keys longer than 128 bits are required for symmetric encryption; when asymmetric algorithms are used, keys of at least 2048 bits are required.
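The key-length policy just stated can be expressed as a simple check. The thresholds come from the text above (strictly more than 128 bits for symmetric keys, at least 2048 bits for asymmetric keys); the function name is invented for illustration:

```python
# Sketch of the key-length policy stated above. Thresholds are taken
# from the text; the function name is hypothetical.
def key_compliant(kind: str, bits: int) -> bool:
    """Check a key length against the stated encryption policy."""
    if kind == "symmetric":
        return bits > 128       # "longer than 128-bits"
    if kind == "asymmetric":
        return bits >= 2048     # "keys of 2048-bits are required"
    raise ValueError(f"unknown key kind: {kind}")

print(key_compliant("symmetric", 256))    # True  (e.g. AES-256)
print(key_compliant("symmetric", 128))    # False (not longer than 128)
print(key_compliant("asymmetric", 1024))  # False (e.g. RSA-1024)
print(key_compliant("asymmetric", 2048))  # True  (e.g. RSA-2048)
```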
5.4. WS-Security
The most important specification addressing security for web services is WS-Security, which defines how to provide integrity, confidentiality and authentication. WS-Security defines a SOAP header that carries the WS-Security extensions. Additionally, it defines how the existing XML Encryption and XML Signature standards are applied to SOAP messages.
XML Signature allows XML fragments to be digitally signed to ensure integrity or to prove authenticity. In addition to encryption and signatures, WS-Security defines security tokens suitable for the transportation of digital identities [7]
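The structure described above (a Security header inside the SOAP envelope carrying a security token) can be sketched with the standard-library `xml.etree` module. The namespace URIs are the standard SOAP 1.1 and OASIS WSS 1.0 ones; the UsernameToken content is a placeholder, and a real message would also carry XML Signature and XML Encryption elements:

```python
import xml.etree.ElementTree as ET

# Skeleton of a SOAP message carrying a WS-Security header with a
# placeholder UsernameToken. Signature/encryption elements omitted.
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
security = ET.SubElement(header, f"{{{WSSE}}}Security")
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "alice"  # placeholder
ET.SubElement(envelope, f"{{{SOAP}}}Body")  # application payload goes here

print(ET.tostring(envelope, encoding="unicode"))
```

Note that the security extensions live in the Header, leaving the Body free for the application payload, which is exactly what lets intermediaries process the token without touching the message content.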
5.5. TLS
Transport Layer Security (TLS), better known under its earlier name Secure Socket Layer (SSL), consists of two important parts: the Record Layer, which encrypts/decrypts the TCP data stream using the algorithms and keys negotiated in the TLS Handshake, and the Handshake itself, which is also used to authenticate the server and the client. TLS offers many different options for key agreement, encryption and authentication of network peers. [7]
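From a client's point of view, the split just described maps directly onto Python's standard `ssl` module: the context below configures the Handshake parameters (here, server authentication against the system's CA roots), and once the handshake completes, the Record Layer transparently encrypts and decrypts the TCP stream. A minimal sketch; no connection is actually made:

```python
import ssl

# Default client context: the Handshake will authenticate the server
# certificate against system CA roots and verify the hostname.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server is authenticated
print(ctx.check_hostname)                    # True

# To use it, one would wrap a TCP socket; after the handshake the
# Record Layer handles encryption of all reads/writes:
#   sock = ctx.wrap_socket(socket.create_connection((host, 443)),
#                          server_hostname=host)
```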
5.6. PRIVACY
To protect the privacy and security of customers, several principles are applied:
5.6.1 Privacy by design: This principle is used in multiple ways during the development, release and maintenance of applications, to ensure that the data collected from customers is for a particular purpose and that the customer is given notice to enable informed decision making.
5.6.2 Privacy by default: Customers are asked for permission before service data is collected or transferred. Once authorized, such data is protected by means such as access control lists in combination with identity authentication mechanisms.
5.6.3 Privacy in development: Mechanisms are provided to organizational customers so as to allow them to establish privacy and security policies for their users.
5.6.4 Communications: The public is engaged through the publication of privacy policies, white papers, and other documentation relating to privacy.
CONCLUSION
In this paper, we presented a selection of issues in cloud computing security and analyzed ongoing issues along with applications. As we showed, cloud computing security needs a lot of improvement: the threats to cloud computing are numerous, and each of them requires an in-depth analysis.
REFERENCES
[1]. Balachandra Reddy Kandukuri, Ramakrishna Paturi V, Atanu Rakshit, "Cloud Security Issues," http://www.computer.org/portal/web/csdl/doi/10.1109/SCC.2009.84, 2009.
[2]. Ramgovind S, Eloff MM, Smith E, "The Management of Security in Cloud Computing," http://uir.unisa.ac.za/bitstream/handle/10500/3883/ramgovind.pdf?sequence=1, 2010.
[3]. A Platform Computing Whitepaper, "Enterprise Cloud Computing: Transforming IT," Platform Computing, p. 6, 2010.
[4]. Dooley B, "Architectural Requirements of the Hybrid Cloud," Information Management Online, http://www.information-management.com/news/hybrid-cloud-architectural-requirements-100017152-1.html, 2010.
[5]. Global Netoplex Incorporated, "Demystifying the Cloud: Important Opportunities, Crucial Choices," http://www.gni.com, 2009.
[6]. "Cloud Computing Use Cases: A White Paper Produced by the Cloud Computing Use Case Discussion Group," Version 2.0, http://www.opencloudmanifesto.org/Cloud_Computing_Use_Cases_Whitepaper-2_0.pdf, October 2009.
[7]. Meiko Jensen, Jorg Schwenk, Nils Gruschka, Luigi Lo Iacono, "On Technical Security Issues in Cloud Computing," http://www.mendeley.com/research/on-technical-security-issues-in-cloud-computing, 2009.
[8]. "Cloud Computing Conceptual Diagram," http://en.wikipedia.org/wiki/Cloud_computing.
WAR IN 5 GHz FREQUENCY BAND WIRELESS LAN
Abhilash Saurabh                                Tazeem Ahmad Khan
Department of Electronics and Communication,    Department of Electronics and Communication,
N.I.E.T, Greater Noida                          IIMT College of Engineering, Greater Noida
E-mail: abhi.ec231@gmail.com                    E-mail: khan_taz@yahoo.com
Abstract
It is highly likely that the next-generation WLAN standard will be in the 5 GHz band. There are two main standards offering higher data rates in this band, and they differ from each other significantly. This study analyzes the current state of the standardization war in the 5 GHz band.
Key Words
WLAN
1. Introduction
The idea behind standardization work is to minimize incompatibility and interoperability issues between competing technologies as well as to maximize improvements. As a matter of fact, the lack of compatibility between competing technologies limits the freedom and flexibility of consumers, in the sense that the switching cost grows: customers somehow get locked in and cannot enjoy an easy escape. The significance of standardization work is well understood; on the other hand, standardization wars between rival technologies are not uncommon. By the term "standards war" we mean the act or action by means of which two incompatible technologies try to gain leadership in the market [Carl Shapiro; Hal R Varian 1999].
The industry is stuffed with several WLAN standards, from both European and American organizations: IEEE 802.11b, 802.11a, HIPERLAN1, HIPERLAN2, HomeRF, OpenAir, SWAP and Bluetooth are a few of them. Most of these standards have one thing in common, namely incompatibility: they are pretty much incompatible with each other to a certain extent. The incompatibility of these standards spans a wide spectrum in terms of operating range, medium access mechanisms, data rate, modulation scheme, QoS, connectivity, security features, etc.
The demand for broadband wireless communication for multimedia applications supporting higher data rates has pushed standardization organizations to develop new standards [Zahed Iqbal 2002]. On the other hand, the 2.4 GHz band is too crowded with various devices and protocols, which has made it susceptible to interference. The European Telecommunications Standards Institute (ETSI) has developed HIPERLAN2, which operates in the 5 GHz band and offers data rates up to 54 Mbps. For its part, the Institute of Electrical and Electronics Engineers (IEEE) has standardized the 802.11a WLAN system, operating in the 5 GHz band and providing data rates from 6 to 54 Mbps. It is highly likely that these 5 GHz standards will become the next generation of WLAN communication standards and will compete with each other to become the global de facto standard. The two standards have some similarities and dissimilarities, and they challenge each other for market dominance; we call this a standardization war. This paper analyses the state of the standards war between 802.11a and HIPERLAN2, the commercial success factors of both standards, and the services behind them.
2. 5 GHz WLAN Technical facts
2.1 802.11a
802.11a is a standard for enterprise-class WLAN, operating in the 5 GHz (5.15 to 5.825 GHz) band with data rates up to 54 Mbps, providing short-range, high-speed wireless networking communication. It shares the same MAC layer as 802.11 and 802.11b but uses Orthogonal Frequency Division Multiplexing (OFDM) [Zahed Iqbal 2002]. It implements an Ethernet-like scheme and is considered a powerful commercial competitor of the HIPERLAN2 technology. The products of 802.11a enjoyed an early entrance into the market compared to HIPERLAN2. Further development of 802.11a is underway: 802.11g, running at 2.4 GHz, for backward compatibility with 802.11b; 802.11e, running in the 5 GHz band, to address QoS; and 802.11h to address the regulatory limitations of 802.11a.
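The 6 to 54 Mbps range quoted for 802.11a follows directly from its OFDM parameters: 48 data subcarriers per symbol, one symbol every 4 microseconds, and a set of modulation/coding pairs. A short sketch reproducing three entries of the rate table:

```python
# 802.11a PHY rate = data subcarriers x coded bits per subcarrier
#                    x convolutional coding rate / symbol duration.
DATA_SUBCARRIERS = 48
SYMBOL_TIME_US = 4.0  # 4 microseconds per OFDM symbol

def phy_rate_mbps(bits_per_subcarrier: int, coding_rate: float) -> float:
    """PHY data rate in Mbps for one modulation/coding pair."""
    return DATA_SUBCARRIERS * bits_per_subcarrier * coding_rate / SYMBOL_TIME_US

print(phy_rate_mbps(1, 1/2))  # BPSK,   rate 1/2 ->  6.0 Mbps (lowest)
print(phy_rate_mbps(4, 3/4))  # 16-QAM, rate 3/4 -> 36.0 Mbps
print(phy_rate_mbps(6, 3/4))  # 64-QAM, rate 3/4 -> 54.0 Mbps (highest)
```

This also explains why HIPERLAN2, which uses the same OFDM numerology, tops out at the same 54 Mbps.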
2.2 HIPERLAN2
HIPERLAN2 is a European effort (backed by a consortium of several giant telecom companies) toward a next-generation WLAN standard; it operates in the 5 GHz band and provides data rates up to 54 Mbps in the PHY layer. The HIPERLAN2 standard uses Orthogonal Frequency Division Multiplexing as its modulation scheme. It was the first standard to be based on OFDM modulation, the key contributing factor to the higher data rate. It implements a convergence layer by means of which it is possible to provide seamless access to a variety of legacy backbone networks (ATM, 3G, PPP, IP, etc.). Because of the higher data rates, HIPERLAN2
can support real-time streaming video and multimedia applications with guaranteed QoS in a connection-oriented way. Besides security support for authentication and encryption, it offers dynamic frequency allocation support, eliminating manual frequency planning [Zahed Iqbal 2002].
3. The Standards War
Historically, most standards wars have differed from one another. How they differ depends on the switching cost of adapting to the new technology and on the level of compatibility of the new standards with existing ones. Standards battles can be classified in three ways. Rival evolution is a kind of standards war in which the two competing technologies are incompatible with each other but compatible with the existing established technology.
Now, if we look at the situation in light of the definition of a standards war, irrespective of operating frequency band, we see that 802.11b is the established standard and the early mover in the market. In the context of wireless solutions, 802.11a and HIPERLAN2 are latecomers and new entrants. Both 802.11a and HIPERLAN2 will be able to provide much better data service, with high performance, security and reliability, to existing 802.11b applications. Hence, in the standards-war game, 802.11b's offering can be considered the existing one and will be used to find out the pattern of the game. Now, it is necessary to find out whether the 5 GHz standards are compatible with each other or not, in order to fit them into the standardization framework described above.
The MAC layers of the 5 GHz band standards are quite different: 802.11a incorporates distributed CSMA-CA, whereas HIPERLAN2 takes a centralized TDMA approach [Zahed Iqbal 2002]. Guaranteed QoS in HIPERLAN2 makes it strategically different from 802.11a's best-effort offering for WLAN video applications. Network
connectivity in 802.11a is limited to Ethernet only, compared to the various legacy network connectivity offerings in HIPERLAN2; this is a clear difference showing the limited scope of 802.11a in terms of interoperability. According to EU frequency regulation, 5 GHz band devices should support Transmit Power Control (TPC) and Dynamic Frequency Selection (DFS), and 802.11a lacks both of these, which it needs to operate in the EU. All these differences lead to a situation where the two standards are incompatible with each other. Finally, it can be concluded that the two new standards are incompatible with each other.
Figure 1: Types of standards war - a 2x2 matrix of your technology (HIPERLAN2) versus the rival technology (802.11a), each either compatible or incompatible with the established standard, giving the four cases Rival Evolution, Evolution vs. Revolution, Revolution vs. Evolution, and Rival Revolution. Courtesy: Carl Shapiro; Hal R Varian [1]
If one of the competing new technologies is compatible with the existing one but the other is not, it is called evolution versus revolution. On the other hand, if neither of the new competing technologies is compatible with the existing one, it is called a rival revolution type of standards war [Carl Shapiro and Hal R Varian [2] 1999].
802.11b, also called Wi-Fi, is the most widely used and established WLAN solution so far, in terms of product availability, deployment scenarios, and its large user base. It operates in the 2.4 GHz band and offers data rates up to 11 Mbps per access point.
There are significant differences between 802.11b and 802.11a even though they have a similar MAC layer. They transmit in two different frequency ranges, which ensures that they will not interfere with each other; as a result, the two technologies are incompatible with each other. In practice, an 802.11b interface card will not be able to connect to an 802.11a AP, and vice versa.
There are almost no similarities between HIPERLAN2 and 802.11b: they operate in different frequency bands, offer different data rates, and implement quite different PHY layers. As a consequence of these sky-high dissimilarities, they are very much incompatible with each other. So each of the new technologies is incompatible with the other, and each of them is also incompatible with the existing established technology; clearly the standardization battle falls into the category of rival revolution.
Waging a standards war does not depend only on the technical superiority of one side over the other; there are many other factors involved. Being the technical champion is one advantage, of course, but that is not all it takes to win the game.
One of the competing standards can be in an advantageous position in the battle if it possesses certain key assets*. By owning those assets, a standard can secure its position by helping the adoption of the new technology at a much lower switching cost. [Carl Shapiro and Hal R Varian [2] 1999]
In this respect 802.11a seems to be in the better position. 802.11a products have been in the market since 2001; moreover, bug fixes and improvements to those products are underway. As the first mover in the market, 802.11a will benefit compared to its counterpart.
* 1) Control over an installed base of customers 2) Intellectual property rights 3) Ability to innovate 4) First-mover advantages 5) Manufacturing abilities 6) Strength in complements 7) Reputation and brand name
802.11a can be considered the successor of 802.11b, and 802.11b has been quite successful in the Wi-Fi area for some time now. Dual-mode (a/b) WLAN products are already available; these enable users to switch between the two whenever needed. In this way 802.11a gains easy control over an installed base.
There are some differences in the standardization process between the US and European authorities. In the US model, one company develops the initial specification and then tries to get it approved by the standards body, typically a faster process. This enables US manufacturers to innovate quickly and add proprietary extensions in the future. The European model formally starts the process by gathering consensus from industry and academia, usually a slow process.
HIPERLAN2 can be in a strong position in terms of manufacturing abilities: the products can be manufactured at low cost, since the manufacturers do not have to pay royalties to any member of the standardization body.
Finally, 802.11a products enjoy brand names and reputation more than their competitors.
4. Services behind the technology
The original idea behind WLAN was to make it as close as possible to the traditional wired network, with support for mobility. In general, all kinds of data services that are available in wired networks are also aimed for WLAN. It is highly expected that WLAN will be deployed at home, in office environments, and in public places, for reasons the users find convincing. The following is a list of services that WLAN can offer:
Public wireless Internet access: Internet access through publicly deployed APs is an emerging service sector. More and more business travelers demand access to their corporate information and e-mail while they travel. A number of ISPs and MNOs have already launched business models, which are becoming popular especially in Scandinavia. TeliaSonera has launched a commercial wireless LAN service, the Sonera wGate. The wGate service has been installed in Helsinki Airport, the Sokos hotel and the Radisson SAS hotel, and is expanding to cover all 25 CAA Finland airports [Strategy Analytics 2002, Northstream 2003]. TeliaSonera completed SIM-based WLAN roaming pilot testing in cooperation with NOKIA in 2003 and is aiming to launch a roaming service outside Finland. More and more service operators (MNOs) and others are offering free hotspot access in various places; if this situation continues, the paid-access hotspot business model will be challenged.
Voice over WLAN (VoWLAN): The benefits of VoIP are well understood; with the emergence of WLAN, VoWLAN has gained attention and is seeking further growth. Companies like Cisco and others have released VoWLAN handsets at lower prices, and prices will fall dramatically in the near future. VoWLAN has a significant market where workers spend a lot of time outside their office. [The 802.11 Report website]
WLAN in the office environment: To some extent WLAN can be used as an access network extending the wired LAN. In small environments WLAN can totally replace the fixed network, saving substantial installation cost.
Health care industry: Wireless technology can be utilized to improve patient care and to increase the efficiency and accuracy of wireless workers. Keeping centralized records will enable substantial operational cost savings.
Home networking: In the home environment, data networking and entertainment are two distinct areas where WLAN can make a difference compared to other technologies (DSL, ADSL, PnP, Bluetooth) [Wireless Design Online website 2002]. Networking among household devices such as the TV, VCR, stereo and DVD player can be established without any cable installation [Timo Smura 2001].
Internetworking with UMTS: To some extent WLAN and UMTS complement each other. UMTS plays a role where lower bandwidth with greater mobility is needed; on the other hand, where higher bandwidth within limited coverage is enough, WLAN can be used. Integrating WLAN and UMTS in one handheld device could be revolutionary. In some cases WLAN can be used as an access network to reach UMTS services, bypassing the UMTS access network; in other cases WLAN can be connected to the core UMTS network using the UTRAN interface. [Timo Smura 2001]
5. Commercial Success factors
In this section the commercial success factors of these technologies are discussed; in general, commercial success depends on both technical and non-technical factors. Roaming, billing, security and mobility issues are considerable barriers that WLAN should overcome in the near future. Either standard can strengthen its position against its rival through technical solutions that eliminate those barriers.
QoS remains an issue as always; HIPERLAN2 is leading the way by providing guaranteed-QoS, connection-oriented service for applications where real-time data delivery is essential (streaming, music, video, etc.).
Connection diversity, meaning the ability to connect to various backend core networks, is another very important aspect. HIPERLAN2 dominates 802.11a by providing connectivity to various backend networks. For home networking, HIPERLAN2 is ideal for the data types required for entertainment.
Unfortunately there is no commercial HIPERLAN2 product available at this moment, only a few prototypes, even though there have been a lot of promises from H2GF members. It is questionable whether there will be any commercial HIPERLAN2 product at all. Nokia announced its H2GF participation in 1999, but no significant move has been observed since then [NOKIA website]. Ericsson, one of the founding members of H2GF (the HIPERLAN2 Global Forum), successfully tested a HIPERLAN2 prototype in the year 2000; since then no major activities have been seen from Ericsson either. The bad news is that Ericsson has recently discontinued all HIPERLAN2 product development and has started to put effort into 802.11a. A few Asian manufacturers, like SONY and Philips, have demonstrated HIPERLAN2 prototypes for wireless consumer-electronics connectivity (in the home environment) at two CeBIT fairs, but they are wary about a market launch. This may be due to the fact that the last independent developer and manufacturer of HiperLAN/2 chips, the Dresden-based company SystemOnIC, was taken over at the end of 2002 by Philips Semiconductor, whilst Ericsson, which had focused on this technology for some time, seems to want nothing more to do with HiperLAN/2 since the joint venture with SONY [Arno Karl 2003].
It seems that European manufacturers are pretty much convinced that 802.11a and 802.11b are leading the way because of the installed user base and low-cost product availability. 802.11 products are available from several vendors, and prices are going down rapidly because of tough competition among chip manufacturers. At the same time, giant computer vendors are backing the 802.11 standard by including dual-mode (a/b) wireless NICs in PCs and laptops; it is expected that 90% of laptops will be shipped with a wireless card in 2005. Network manufacturers are also promoting the 802.11 standard by providing wireless solutions that work in the office as well as the home environment. In this way HIPERLAN2 has failed to take an early lead, and most likely the HIPERLAN2 effort will go in vain at a premature stage.
6. Conclusion
It is not straightforward to say which of the 5 GHz standards will win the battle; mostly it depends on how well they are able to meet the demands of customer applications in terms of QoS, security, reliability, ease of use, and total cost of ownership. Governments, regulatory bodies and local authorities will play a major role in the acceptance and adoption of these standards because of legal issues related to power levels and interference.
Interoperability issues have to be solved; a lack of interoperability between the 5 GHz standards could jeopardize their wide-scale acceptance. End users will be in a dilemma whether to buy 802.11a or HIPERLAN2 products if they suffer from interoperability problems. On the other hand, adding more features and hardening security makes products more complex and interoperability more difficult: 22% of products failed the interoperability test at CeBIT 2004.
In my view there is no doubt about the technical superiority of HIPERLAN2 in the 5 GHz band, but the lack of support from industry and product manufacturers will undermine the credibility of the standard, and end users will lose confidence. 802.11b/a will gain a better position because of huge support from product manufacturers and vendors. A lower switching cost to adopt a new standard is another prime factor for wide acceptance; manufacturers should lower product prices quickly by increasing production volumes.
There is room for some kind of convergence and coexistence between the two standards by mixing and matching features from both. In that case technically pioneering solutions would be created, and end users would benefit the most.
References
1. Carl Shapiro and Hal R. Varian. 1999. The Art of Standard Wars. California Management Review, Berkeley, Winter 1999.
2. Northstream. 2003. Public WLAN Services, http://www.northstream.se, referenced 15.03.04.
3. Carl Shapiro and Hal R. Varian. 1999. Information Rules: A Strategic Guide to the Network Economy, pp. 261-296.
4. Wireless Design Online website. 2002. http://www.wirelessdesignonline.com/content/news/article.asp?docid={b6fff613-0a06-11d6-a789-00d0b7694f32}, referenced 20.03.04.
5. Zahed Iqbal. 2002. Wireless LAN Technology: Current State and Future Trends, Internet Publications 2003 in Telecommunications Software and Multimedia, Helsinki University of Technology.
6. Arno Karl. 2003. IFA-2003 Report: Wireless Consumer Electronics, http://www.hiperlan2.com/newsdocs/member/IFA2003-Englishversion_.doc, pages 8-9, referenced 22.03.2004.
7. NOKIA website. 1999. http://press.nokia.com/PR/199909/775955_5.html, referenced 21.03.04.
8. The 802.11 Report website. 2003. http://www.80211report.com/topics/emergingtech.html, referenced 25.03.04.
9. Frost & Sullivan. 2001. http://www.mindbranch.com/catalog/product.jsp?highlight=true&code=R1-1994&display=brief&bundle=&partner=110, referenced 15.03.2004.
10. Strategy Analytics. 2002. Wireless LANs: Strategies For Mobile Operators, Strategy Analytics, http://www.strategyanalytics.com, page 54, referenced 12.03.2004.
11. Timo Smura. 2001. Hiperformance Radio LAN Type 2, http://www.hut.fi/~tsmura/hiperlan2.pdf, pages 14-15, referenced 24.03.2004.
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
Wireless & RF Communication Systems
Abhishek Dwivedi, Ved Prakash Sharma
Maharana Pratap College of Technology, Gwalior (M.P.)
abhi_jhs97@rediffmail.com, ved_rjit@yahoomail.com
Abstract - Radio frequency (RF) communication has emerged as a key technology, after being overshadowed for years by fiber-optic technology. It has established itself as the backbone of the global information technology infrastructure, thereby putting new demands on the RF and wireless industry worldwide for a skilled workforce. This paper describes initial efforts in developing and integrating these two strands of technology: wireless communication systems and RF communication systems.
The aim of this paper is to provide an overview of wireless fundamentals, approached within the context of the block diagram shown in Fig 1 using RF communication.
1. INTRODUCTION
Wireless communication systems are not new, but they have
been continually evolving for many years, especially in the
area of mobile communications. First generation analogue
mobile systems began to emerge in the late 1970s and in the
early 1980s work began on what was to become the second
generation digital GSM system. In the early 1990s migration to
the second generation system started to gain momentum and
within two years all of the major European operators had
started to operate commercial GSM networks. During the mid-
1990s, ground work preparations started that would eventually
lead to the development of third generation systems and the
first commercial network was launched in early 2000. More
recently, mobile WiMAX has begun to emerge and there is a
growing interest in its potential as an alternative to the other
mobile networks and their planned migration paths.
In the past 10-15 years RF communication has emerged as a key technology, after being overshadowed for years by fiber-optic technology. RF wireless technology became the backbone of information technology, which is one of the pillars of the economic and cultural globalization that defines the modern world.
To ensure the integrity of the global information technology infrastructure, RF and wireless systems need to be sustained continuously. These systems are complex and require skilled and qualified engineering technologists to develop, construct and sustain their operations. To meet this workforce demand from industry, there is a worldwide need to upgrade RF and communication curricula and to provide adequate education and training for engineering technologists. A number of universities and colleges, in cooperation with industry, have established model wireless curriculum modules to cater for this global demand. These modules, the Global Wireless Education Consortium (GWEC) wireless curriculum modules, cover a broad spectrum of topics including RF and communication theory.

2. CURRICULUM DEVELOPMENT

The application of emerging RF wireless systems has changed a great deal, requiring diversified knowledge in closely related fields. The engineering technologist is expected to understand the basics of RF effects at the component and system level. The principles of signal propagation between devices within, and between, RF receivers and transmitters need to be grasped. This requires an understanding of the concepts of transmission lines, electromagnetic (EM) wave propagation through free space (or air), and antenna theory. In addition, the RF technologist must be skilled with the complex RF equipment and measurement techniques necessary to verify and test the reliable operation of communication systems.

Understanding the concepts of communication systems is an essential prerequisite. Knowledge of the various analog and digital modulation techniques and of the frequency-domain analysis of systems and signals is vital for understanding the operation of RF communication systems, subsystems and components. In the modern digital communication era, where all signals are digitally processed within the transmitters and receivers, a basic knowledge of digital signal processing (DSP) techniques is essential for design, development and troubleshooting. An RF technologist needs to understand the concepts of DSP, in addition to RF and communication systems concepts, to fully comprehend the function of modern transmitters and receivers. Figure 1 shows a typical RF communication system diagram.

Radio frequency propagation

All wireless systems, whether they use licensed or licence
exempt spectrum, operate in exactly the same physical
environment and hence they are all susceptible to the same
fundamental propagation characteristics. All of these will
affect performance in some way, but the ones that have the
most significant impact are:
Propagation path loss,
Received signal fading,
Uplink/downlink channel duplex,
Wideband channel transport techniques.
Each of these aspects will now be considered in more
detail.
2.1 Propagation path loss
At the most basic level of consideration, propagation loss is
proportional to both the frequency used and the distance
travelled. Its dependence on frequency is shown by the plot1 in
Fig 2 for the 1 to 6 GHz frequency band expressed as an
increase on the loss at 1 GHz. The main point to note is that
over this 5 GHz frequency range the path loss difference is
almost 15 dB.
The loss with distance is equal to the distance travelled raised to the power of the propagation coefficient, which for free-space propagation is equal to 2. In practice, however, empirical path loss models are derived by curve fitting to measurements, and the propagation coefficients for these models typically lie in the range 2.5 to 4 for urban macro-cells and 2 to 8 for micro-cells. The extent to which this increases
relative path loss with respect to that of free-space propagation
can be appreciated from the plots in Fig 3.
These clearly show that, as the propagation coefficient
increases, the relative difference in path loss becomes
increasingly significant with distance. For example, doubling
the propagation coefficient gives a loss difference of 20 dB at
10 m, but at 1 km this increases to 60 dB. Consequently,
practical operating ranges are significantly smaller than would
be achieved with ideal free-space propagation.
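The figures quoted above can be checked with a few lines of arithmetic. The sketch below is illustrative rather than from the paper (the 1 GHz and 1 m reference points are assumed): free-space loss grows as 20·log10(f) with frequency, while the distance term grows as 10·n·log10(d) for propagation coefficient n.

```python
import math

def freq_loss_increase_db(f_hz, ref_hz=1e9):
    """Free-space path loss increase (dB) relative to the loss at `ref_hz`."""
    return 20 * math.log10(f_hz / ref_hz)

def dist_loss_db(d_m, coeff, ref_m=1.0):
    """Relative path loss (dB) at distance `d_m` for propagation coefficient `coeff`."""
    return 10 * coeff * math.log10(d_m / ref_m)

print(round(freq_loss_increase_db(6e9), 1))                  # 15.6 dB from 1 to 6 GHz
print(round(dist_loss_db(10, 4) - dist_loss_db(10, 2)))      # 20 dB at 10 m
print(round(dist_loss_db(1000, 4) - dist_loss_db(1000, 2)))  # 60 dB at 1 km
```

The three printed values reproduce the "almost 15 dB" frequency spread and the 20 dB/60 dB coefficient-doubling examples given in the text.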
2.2 Received signal fading
Unfortunately, there is a drawback to the use of
empirical propagation models for calculating path
loss, which is that for a given distance from the
transmitter the predicted loss is the same for all
directions of propagation. This is an issue because in
practice different propagation directions experience
different levels of clutter (buildings, trees, etc), so
their path losses differ over a given distance. This
causes a statistical variation in path loss, which is known as
shadowing or slow fading. Therefore, an operating margin is
needed to ensure that the probability of falling below the
minimum acceptable received signal-to-noise ratio is
acceptably small. The shadowing fade margin is plotted in Fig 4 as a function of this probability, which is known as the shadowing fade outage probability, for a range of location variability (LV) values typical of a macro-cell.
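Because shadowing is commonly modelled as log-normal, the fade margin is just the inverse Gaussian tail evaluated at the chosen outage probability. A minimal sketch (the 8 dB location variability is an assumed example value, not read from Fig 4):

```python
from statistics import NormalDist

def shadow_fade_margin_db(location_variability_db, outage_prob):
    """Margin (dB) so that log-normal shadowing pushes the received level
    below threshold with probability `outage_prob`."""
    return location_variability_db * NormalDist().inv_cdf(1.0 - outage_prob)

# Example: 8 dB location variability, 1% shadowing fade outage probability.
print(round(shadow_fade_margin_db(8.0, 0.01), 1))  # 18.6 dB
```

Tightening the outage target from 1% to 0.1% raises the required margin by roughly a third, which is the steepening visible in plots of this kind.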
There is also another form of fading to take account of which
is known as multipath or fast fading. This type of fading
occurs when the antenna detecting the main radio signal also
receives reflected versions of itself from obstructions such as
buildings, hills, etc (see Fig 5). Because these multipath
signals are time delayed with respect to each other, and
possibly also frequency spread due to the Doppler effect, the
resulting constructive/destructive interference causes the
received signal power to fluctuate with time. The statistical
nature of these fluctuations is characterised by whether the propagation path is line-of-sight (Ricean fading) or non-
line-of-sight (Rayleigh fading) and also by the value of the
channel delay spread, which is a parameter that depends upon
the particular propagation environment.
To counteract the performance variation caused by the
statistical nature of multipath fading, which depends upon the
number of reflections received and their relative powers and
phase relationships, an operating margin is needed to ensure
that the probability of falling below the minimum acceptable
signal-to-noise ratio threshold is acceptably small.
This fade margin is equal to the difference between the
threshold signal-to-noise ratio and the mean value needed to
achieve a particular multipath fade outage probability. Figure
6 shows the relationship between multipath fade margin and
outage probability for Ricean (solid lines) and Rayleigh
(dashed line) flat fading channels.
The factor associated with the solid lines, which is known as the Ricean K factor, represents the ratio of line-of-sight power to total multipath signal power. The plots in Fig 6 show
that for non-line-of-sight conditions and relatively low values
of outage probability there is significant benefit to be gained
from using fade mitigation techniques to reduce the amount of
margin needed, whereas for line-of-sight the benefit
diminishes as the main signal becomes increasingly dominant
over the multipath signal power. In principle, fading can be
mitigated by using time, frequency, space, code, and angle-of-
arrival and polarization diversity techniques.
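For the flat Rayleigh (non-line-of-sight) case the margin/outage trade-off has a closed form, P(outage) = 1 - exp(-SNRth/SNRmean), which reproduces the familiar rule that a 1% outage needs roughly a 20 dB margin. A sketch, offered as an illustration rather than the paper's own calculation:

```python
import math

def rayleigh_fade_margin_db(outage_prob):
    """Fade margin (dB) above the threshold SNR for flat Rayleigh fading,
    derived from P(outage) = 1 - exp(-threshold/mean)."""
    mean_over_threshold = -1.0 / math.log(1.0 - outage_prob)
    return 10.0 * math.log10(mean_over_threshold)

print(round(rayleigh_fade_margin_db(0.01), 1))   # 20.0 dB for 1% outage
print(round(rayleigh_fade_margin_db(0.001), 1))  # 30.0 dB for 0.1% outage
```

Each factor-of-ten reduction in outage probability costs about another 10 dB of margin, which is exactly the incentive for the diversity-based mitigation techniques listed above.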
2.3 Uplink/downlink channel duplex
In practice, the propagation characteristics are not necessarily
the same for both the uplink and the downlink because the
radio spectrum allocation process for licensed spectrum
specifies which one of two modes of operation is to be used; these are known as frequency division duplex (FDD) and time
division duplex (TDD). The difference between them is that
the former uses different frequencies for the uplink and the
downlink, whereas the latter shares a single frequency. The
advantages of time division duplex are that the same channel
propagation characteristics apply for both the uplink and the
downlink and their bandwidth ratio can be dynamically
adjusted to match changing operating conditions. Frequency
division duplex, however, has the advantages of providing
better isolation between the uplink and downlink and in some
instances potentially has a greater operating range due to using
a narrower bandwidth receiver than a TDD system (see section
6 for more details).
2.4 Wideband channel transport techniques
If the channel bandwidth is increased to enable higher data rates to be used, a point is eventually reached where the inter-symbol interference caused by the time offsets between the main signal and its multipath reflections cannot be ignored, and the point at which this occurs depends upon the channel symbol rate and the delay spread characteristics of the propagation
environment. For symbol rates that are much smaller than the
reciprocal of the delay spread, inter-symbol interference
becomes negligible and the fading is flat. For the converse
situation, inter-symbol interference cannot be ignored and the
fading is frequency selective. Consequently, increasing
channel bandwidth causes single carrier systems to become
increasingly more susceptible to inter-symbol interference and
frequency selective fading unless equalization techniques are
used to counteract them.
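The flat-versus-selective criterion above can be captured in a small helper. The factor of ten used to separate "much smaller" from "comparable" is an assumed rule of thumb, not a value taken from the text:

```python
def fading_type(symbol_rate_hz, delay_spread_s, factor=0.1):
    """Classify fading as flat or frequency selective: fading is flat when
    the symbol duration is much larger than the channel delay spread."""
    symbol_duration_s = 1.0 / symbol_rate_hz
    if delay_spread_s < factor * symbol_duration_s:
        return "flat"
    return "frequency selective"

print(fading_type(25e3, 1e-6))  # narrowband channel: flat
print(fading_type(20e6, 1e-6))  # wideband channel: frequency selective
```

A 25 ksymbol/s channel has 40 microsecond symbols, so a 1 microsecond delay spread is negligible; at 20 Msymbol/s the symbols are 50 ns long and the same delay spread smears many symbols together.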
Several channel transport techniques have emerged for use in
cases where the channel bandwidth exceeds the value beyond
which inter-symbol interference and frequency selective
multipath fading effects cannot be ignored.
Specific examples are direct sequence spread spectrum
(DSSS), orthogonal frequency division multiplexing (OFDM),
single carrier, frequency domain equalization (SC/FDE) and
orthogonal frequency code division multiplexing (OFCDM).
2.4.1 Direct sequence spread spectrum
This is a single carrier technique that frequency spreads the
baseband modulation signal so that it fills the whole of the
channel bandwidth. This is achieved using frequency
spreading codes that are ideally designed to have a shorter
correlation time than the various multipath delay differences.
Provided this requirement is satisfied, the individual multipath
signals can be separated out if an appropriate receiver is used
(e.g. a RAKE receiver), thus enabling the individual multipath
signals to be extracted and combined in a way that improves
overall performance by effectively removing the inter-symbol
interference.
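The requirement that the spreading code have a shorter correlation time than the multipath delay differences is easiest to see from a code's autocorrelation. The sketch below uses the 7-chip Barker sequence purely as an illustration (the text does not name a specific code): the sharp lag-zero peak is what lets a RAKE receiver resolve echoes separated by one chip or more.

```python
# 7-chip Barker sequence, used here only as an illustrative spreading code.
barker7 = [1, 1, 1, -1, -1, 1, -1]

def acorr(seq, lag):
    """Aperiodic autocorrelation of a real chip sequence at the given lag."""
    return sum(a * b for a, b in zip(seq, seq[lag:]))

peak = acorr(barker7, 0)
sidelobes = [abs(acorr(barker7, k)) for k in range(1, len(barker7))]
print(peak, max(sidelobes))  # 7 1: echoes offset by >= 1 chip barely correlate
```

An echo delayed by at least one chip correlates to at most 1/7 of the aligned peak, so each multipath arrival can be despread and combined separately.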
2.4.2 Orthogonal frequency division multiplexing
Unlike DSSS, this is a multi-carrier technique that transports
data over the channel by distributing it across an underlying
frequency multiplex. The basic principle involved is that the
larger the number of carriers (tones) used, the lower the data
rate they each have to carry. So, by using an appropriate
number of carriers the symbol duration can be made to be
significantly longer than the multipath delay differences, and
hence reduce the level of inter-symbol interference and also
create channels that have flat fade characteristics.
Furthermore, the inter-symbol interference can be completely
eliminated by the introduction of a cyclic
prefix provided its duration is longer than the channel delay
spread. However, this is at the expense of increased channel
overhead.
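The carrier-count argument can be made concrete: with N carriers spread over bandwidth B, each carrier's symbol lasts N/B seconds, so N is chosen to stretch the symbol well beyond the delay spread. A sketch in which the 20 MHz channel, 1 microsecond delay spread and factor-of-ten stretch are all assumed example values:

```python
import math

def min_subcarriers(channel_bw_hz, delay_spread_s, stretch=10):
    """Smallest carrier count whose per-carrier symbol duration (N/B)
    is `stretch` times the channel delay spread."""
    return math.ceil(stretch * delay_spread_s * channel_bw_hz)

# 20 MHz channel with a 1 microsecond delay spread:
print(min_subcarriers(20e6, 1e-6))
```

The cyclic prefix would then be chosen longer than the 1 microsecond delay spread, at the cost of the channel overhead noted above.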
2.4.3 Single carrier, frequency domain
equalization
Like DSSS, this is a single carrier transport technique that uses
frequency domain equalisation to overcome the impracticality
of using time domain equalisation to counteract multipath
fading in wide bandwidth channels. Overall, SC/FDE, which
also has a media access variant known as SC-FDMA, gives
similar performance to OFDM for essentially the same overall
level of complexity, but because it is a single carrier transport
technique it has the advantage of having a lower peak-to-
average power ratio than OFDM, which has certain cost
advantages.
2.4.4 Orthogonal frequency code
division multiplexing
This is a transport technique that adaptively exploits both
frequency and time domain spreading to maximize performance according to the particular operating
environment. Essentially, it adaptively exploits OFDM to
provide frequency spreading and DSSS to provide time
spreading, and current claims are that it will improve on the
performance of current OFDM systems.
References
[1]. Proakis J G and Salehi M: Communication Systems Engineering, Second Edition, Prentice-Hall.
[2]. Sklar B: Digital Communications: Fundamentals and Applications, Prentice-Hall.
[3]. Saunders S R: Antennas and Propagation for Wireless Communication Systems, Wiley.
Wireless Charging System Based on Ultra-Wideband Retro-Reflective Beamforming
Harshita Sachan, Ravindra Kumar Yadav and Gagan Deep Arora
Department of Electronics & Communication Engineering
ITS Engineering College
Greater Noida, U.P.
Abstract: In this paper, a new wireless charging system for movable devices is addressed. The structure and principle of wireless charging systems are expounded. The novel concept comes into being to lessen the severe power loss due to electromagnetic wave propagation, the basis for a new generation of universal charging systems for a wide range of movable devices. The new system enables efficient charging over long distances and improves the feasibility of such a system.

Introduction
Numerous portable electronic devices (such as
laptops, cell phones, digital cameras, and electric
shavers) rely on rechargeable batteries and must be
routinely charged by the line power. A wireless
charging technique capable of delivering
electromagnetic energy to these portable devices
would make them truly portable. Wireless
charging is especially valuable for devices with
which wired connections are intractable, e.g.,
unattended radio frequency identification tags and
implanted sensors. In recent years, enormous
research efforts have been devoted to wireless
charging. In the 1990s, a case study was reported in [1] on constructing a point-to-point wireless electricity transmission link to a small isolated village called Grand-Bassin in France. In 2007, an inductive resonance coupling scheme, which makes use of near-field coupling between two magnetic resonators, was demonstrated by a team at the Massachusetts Institute of Technology to power a 60-Watt light bulb over two meters [2]. In addition, several companies (Power Cast, Wild Charge, Wildower, etc.) have developed products targeting specific applications. However, several challenges remain: (i) to achieve efficient charging over long distances, the severe power loss due to electromagnetic wave propagation must be remedied; (ii) human exposure to electromagnetic radiation should always be kept below safety levels while sufficient power is delivered to devices; and (iii) some existing systems are unsuitable for ubiquitous deployment due to high cost, large size, and/or heavy weight. In this paper, an innovative wireless charging system based on ultra-wideband retro-reflective beamforming is proposed to address the above challenges.
The proposed charger consists of multiple antenna
elements distributed in space. According to pilot
signals (which are short impulses) they receive from
the target device, the antenna elements jointly
construct a focused electromagnetic beam onto
the device (i.e., beamforming). Beamforming
enables spatially focused/dedicated power
delivery to devices while keeping power level in
all the other locations minimal. As a result, the
proposed system attains high charging efficiency and
leads to little hazard/interference to other objects.
Performance of the proposed wireless charging
system is demonstrated by some simulation results
obtained by a full-wave Maxwell's equations solver.
II. Ultra-Wideband Retro-Reflective Beamforming
The proposed wireless charging system aims for both
indoor and outdoor applications. As illustrated in Fig. 1, the charger consists of a central station and multiple base stations mounted around the region of concern. The central station and base stations are connected through wires. The multiple base stations collaboratively radiate wireless power to devices residing in the region as a beamformer.
Fig. 1: Illustration of the proposed charging system
In other words, the base stations jointly establish
focused beams onto the devices. Beam forming
results in dedicated power delivery channels to
devices while keeping power levels in all the other
locations minimal, which ensure high efficiency and
human safety at the same time. Retro-reflective
antenna array is exploited to achieve beamforming,
which involves three steps. First, all the array
elements receive pilot signals from the target device.
Second, all the array elements analyze the pilot
signals magnitudes and phases. And finally, once all
the array elements transmit complex-conjugate
versions of the pilot signals they receive, the
resulting beam is spatially focused onto the target
device. The retro-reflective array takes advantage of
channel reciprocity and acts as a matched filter .
As a result of channel reciprocity, the waves
retro-reflected by the retro-reflective array are
constructive at the target device and destructive
elsewhere. Since retro-reflective beamforming
responds to the pilot signal and traces back to the
origin of the pilot signal, tracking of mul t i pl e
mobil e/portable devi ces is strai ght f or ward.
Furthermore, beam focusing due to retro-reflection
does not suffer from multi-path in complex
environments [3]. Retro-reflective arrays have been
investigated for radar tracking applications for
many years [4]. In addition, existing retro-
reflective technologies (which are based on
continuous waves) are unable to satisfy all the
practical requirements of wireless charging; thus
here, a more sophisticated retro- reflective array is
proposed. First, in the proposed retro-reflective array,
antenna elements are distributed over multiple base
stations and there is no strict restriction on the base
stations spatial locations, which offers construction
flexibility as well as reliability and safety. If the
line-of-sight path between the device and a certain
base station is blocked by an object (a person for
instance), the base station is deactivated such that the
object is not under direct illumination (Fig. 1).
Second, the proposed retro-reflective array
incorporates multiple discrete frequencies in a wide
frequency band and the frequencies are
programmable to avoid possible electromagnetic
interference. Third, the pilot signal is designed to
be short impulses that are able to carry
information of all the multiple frequencies [5]. In
summary, the proposed wireless charging system
integrates three technological elements: charging,
communication, and radar tracking. To be specific,
it relies on radar tracking to localize the target device
and dynamically reconfigure focused beams onto the
device; and, coordination of radar tracking and
charging is made possible by the communication
functionality. As a result;
It can be deployed in complex environments
and in all weather conditions.
It is capable of tracking and charging
multiple portable devices at the same
time.
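The three-step retro-reflective principle (receive the pilot, conjugate its phase, retransmit) can be sketched numerically. The toy model below is not the paper's Method-of-Moments simulation: it assumes a single frequency, ideal isotropic elements, and a 2-D layout loosely mirroring Fig. 2, and it simply sums the phase-conjugated contributions at a test point.

```python
import cmath
import math

FREQ = 5e9                       # one tone inside the 4-6 GHz pilot band
K = 2 * math.pi * FREQ / 3e8     # free-space wavenumber

# Assumed toy layout: 8 elements on a 3 m circle, one device near the center.
elements = [(3 * math.cos(2 * math.pi * n / 8),
             3 * math.sin(2 * math.pi * n / 8)) for n in range(8)]
target = (0.2, -0.1)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Steps 1-2: each element receives the pilot and records its phase.
pilot = [cmath.exp(-1j * K * dist(e, target)) for e in elements]

def field(point):
    # Step 3: each element retransmits the complex conjugate of its pilot;
    # channel reciprocity makes the phases cancel exactly at the target.
    return abs(sum(p.conjugate() * cmath.exp(-1j * K * dist(e, point))
                   for e, p in zip(elements, pilot)))

print(round(field(target), 6))            # 8.0: all 8 contributions add in phase
print(field((1.2, 0.9)) < field(target))  # True: partial cancellation elsewhere
```

Because each retransmitted phase exactly undoes the propagation phase back to the pilot's origin, the coherent sum peaks at the device without the charger ever computing the device's coordinates.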
III. Numerical Modeling Results
Feasibility of the wireless charging system
proposed in the previous section is demonstrated
through a numerical model in Fig. 2. Eight base
stations are assumed to be deployed over a circle
with radius 3 m in the x-y plane. Each base station
comprises an antenna array of 5 by 5 elements equally spaced by 12 cm. Two devices reside in the
region. The antennas of both base stations and
devices are z-oriented dipoles in this study.
Fig. 2: Numerical model
The devices transmit short impulses as pilot signals,
which cover frequency band [4 GHz, 6 GHz].
Charging power is allocated to N discrete
frequencies in this band. To represent more realistic
scenarios, a metallic plate with length 1 m and
height 0.6 m is placed to block line-of-sight path
between the devices and one base station (Base
Station B).
The model in Fig. 2 is simulated by the Method of Moments. Simulated Ez field distributions in a 2 m by 2 m region around the two devices are presented in Fig. 3. When one device (the one at the center) sends pilot signals to the charger in the absence of the obstacle, all eight base stations are active. If the charger transmits charging power at only one frequency, 4.09 GHz (i.e. N = 1), the field distribution is shown in Fig. 3(a). Apparently, the field is focused at many locations other than the device (the undesired focal points resemble the side lobes of regular phased arrays).
When N is chosen to be 30, only one focal point is left, which coincides with the device's location, as shown in Fig. 3(b). When both devices send pilot signals, the field is automatically focused onto the two devices (Fig. 3(c)). It is noted that wireless charging produces strong fields in certain regions around both devices, termed red zones, from which humans should stay away. In Fig. 3(c), the red zones are roughly spherical regions with radii of 20 cm. Outside the red zones, the field strength is at least 15 dB weaker than at the devices.

Fig. 3: Simulated field distribution plots of the proposed wireless charging system (with |Ez| represented by colors)

Our
numerical simulations show that, with more
antenna elements in the beamforming array, spatial
focusing improves and the red zones shrink. In Fig.
3(d), the obstacle is assumed to be present, and Base Station B is blocked and turned off (the remaining seven base stations are active). Field focusing does not rely on the obstacle's presence or on the number of base stations, as shown in Fig. 3(d). With the obstacle present, which base stations should be deactivated can easily be determined by processing the pilot signals. Two base stations, Base Station A and Base Station B, are used as examples. After these two base stations receive pilot signals from one device, the phase differences between two local antenna elements are plotted in Fig. 4; one of the two local elements is at the center and the other at the corner of the 5 by 5 array. As expected, since Base Station A has a line-of-sight interaction with the device, its phase difference follows a straight line proportional to frequency (a time delay); such a pattern does not appear at Base Station B.

Fig. 4: Simulated phase difference of pilot signals received at two base stations

IV. Conclusion

The wireless charging system is successfully adapted to movable devices and demonstrates satisfactory performance. The requirements of the system should be incorporated into the base stations, and the operating frequency set accordingly. Multifarious movable devices can be charged simultaneously through the presented charging system, regardless of their orientations and positions. Furthermore, the feasibility of this system has been demonstrated with practical measurements. These results provide a useful example of a practical charging system for movable devices.

REFERENCES

1. J. D. Lan Sun Luk, A. Celeste, P. Romanacce, L. Chane Kuang Sang, and J. C. Gatina, "Point-to-point wireless power transportation in Reunion Island," presented at 48th International Astronautical Congress, Turin, Italy, October 1997.
2. A. Kurs, A. Karalis, R. Moffatt, J. D. Joannopoulos, P. Fisher, and M. Soljacic, "Wireless power transfer via strongly coupled magnetic resonances," Science, vol. 317, pp. 83-86, July 2007.
3. B. E. Henty and D. D. Stancil, "Multipath-enabled super-resolution for rf and microwave communication using phase-conjugate arrays," Physical Review Letters, vol. 93, pp. 243904, December 2004.
4. L. Chiu, T. Y. Yum, W. S. Chang, Q. Xue, and C. H. Chan, "Retrodirective array for RFID and microwave tracking beacon applications," Microwave and Optical Technology Letters, vol. 48, no. 2, pp. 409-411, February 2006.
5. H. Zhai, S. Sha, V. K. Shenoy, S. Jung, M. Lu, K. Min, S. Lee, and D. S. Ha, "An electronic circuit system for time-reversal of ultra-wideband short impulses based on frequency domain approach," IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 1, pp. 74-86, January 2010.
PERFORMANCEANALYSIS OF
EQUALIZERS WITH DIVERSITY
COMBINING IN CELLULAR SYSTEMS
Richa Jain, M. Tech, IGIT, GGSIPU
Email: richajain_2005@yahoo.com

ABSTRACT
Mobile communications and wireless networks have experienced massive growth and commercial success in recent years. Unlike wired channels, which are stationary and predictable, wireless channels are extremely random and time-variant. It is well known that the wireless multi-path channel causes arbitrary time-dispersion, attenuation and phase-shift, known as fading, in the received signal. The use of decision feedback equalization (DFE) techniques along with diversity combining is important to achieve good performance in current cellular mobile radio systems. The minimum mean square-error (MMSE) DFE has been demonstrated to be an effective receiver structure for combating co-channel interference (CCI) on dispersive and noisy channels. In this paper, inter-symbol interference (ISI) as well as co-channel interference (CCI) is evaluated in a quasi-stationary frequency-selective fading environment using DFE and diversity combining. For the frequency-selective fading channel, DFE is used with the least mean square algorithm. The combination of antenna diversity and DFE provides better performance than DFE alone. For combating ISI, joint optimization combining and power selection diversity combining are considered. The performance of the system is evaluated on the basis of the number of antenna diversity branches, the delay spread and the signal-to-noise ratio. The combiner uses QPSK modulation with different numbers of antenna branches. The performance improvement as a function of the number of taps in both the feed-forward and feedback filters is quantified.
Keywords: Wireless communication, decision feedback equalizer, inter-symbol interference, co-channel interference,
antenna diversity combining.
INTRODUCTION
Mobile radio channels in urban environments are rapidly time-varying, and the major impediments to reliable digital communication are deep frequency-selective fades, with fading rates dependent on vehicle speed. The main proposal is therefore to combine diversity reception, known to combat flat fading in radio transmission, with adaptive equalization, known for mitigating the effects of ISI, into a single robust receiver. Equalization compensates for the ISI created by multipath in time-dispersive channels. To reduce ISI, various types of equalizers are defined: the linear equalizer, the decision feedback equalizer and the maximum likelihood sequence estimator (MLSE) [1, 2]. In this paper, the decision feedback equalizer (DFE) is used.
The DFE is a non-linear equalizer because it uses both feed-forward (FFF) and feedback (FBF) taps. In a DFE, once an information symbol has been detected and decided upon, the ISI that it induces on future symbols can be estimated and subtracted out before detection of subsequent symbols [3].
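The feedback idea can be seen in a stripped-down form. The sketch below is not the paper's LMS-adapted DFE: it assumes a noiseless channel with two known post-cursor ISI taps and shows only the subtract-past-decisions step against a plain slicer.

```python
import random

random.seed(7)
symbols = [random.choice([-1, 1]) for _ in range(200)]

# Assumed toy channel: two post-cursor ISI taps of 0.6 each, no noise.
TAPS = [0.6, 0.6]
received = [s + sum(t * symbols[n - 1 - i]
                    for i, t in enumerate(TAPS) if n - 1 - i >= 0)
            for n, s in enumerate(symbols)]

def dfe(rx, taps=TAPS):
    """Subtract the ISI of already-decided symbols before each new decision."""
    decided = []
    for r in rx:
        isi = sum(t * decided[-1 - i] for i, t in enumerate(taps)
                  if len(decided) > i)
        decided.append(1 if r - isi >= 0 else -1)
    return decided

raw = [1 if r >= 0 else -1 for r in received]  # slicer with no feedback
print(dfe(received) == symbols)  # True: decided ISI is cancelled exactly
print(raw == symbols)            # the raw slicer is overwhelmed by the ISI
```

With known taps and correct past decisions the subtraction recovers each symbol exactly; the paper's receiver instead learns the taps adaptively with LMS, and wrong decisions can propagate, which is why the equalizer is combined with diversity.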
Diversity improves signal quality by using two or more receiving antennas. Diversity combining (or antenna arrays) is another counter-measure to reduce ISI [4, 5]. Different diversity schemes such as selection, maximal ratio combining and MMSE combining can be employed. MMSE combining offers a significant improvement over selection diversity for channels with a high delay spread, particularly when three- or four-branch diversity is employed. In this paper, the maximum number of diversity branches used is four.
Here, two types of combining schemes are used: joint optimization combining and power selection diversity. For the joint optimization scheme, the least mean square (LMS) algorithm is used for both antenna diversity combining and equalization optimization simultaneously. For selection diversity, signal power strength is used to select a diversity branch before equalization, either at the demodulator input or output. The radio channel is modelled by several power delay profiles: the equal-amplitude two-ray profile, the exponential profile and the Gaussian profile. Here, the exponential profile is preferred because it is more realistic than the two-ray profile when the delay spread of the channel is large [18].
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
2. SYSTEM DESCRIPTION
Fig. 1 represents the baseband diversity channel model
with DFE. Here, the receiver is coherent. Over a time
interval, the channel is time-invariant because it is
quasi-stationary [13]. QPSK modulation is used since it
allows investigating the combiner/equalizer with complex
tap-weight coefficients, so the data symbol stream d(i)
takes on the values ±1/√2 ± j/√2. The signalling
function g(t) represents the combined effects of the filters in
the various stages, including the transmitter and receiver
filters. A raised-cosine roll-off function is chosen for g(t).
As the desired user and the CCI users are not synchronized
to each other, random relative delays are considered among
the desired signal and the CCI signals.
First, consider the joint optimization of diversity
combining and equalization. Let the number of diversity
branches be n_a, the number of feed-forward taps per branch
n_1 and the number of feedback taps n_2. Thus, the total
number of complex taps is n_a n_1 + n_2 for joint
optimization combining. Here, a fixed receiver filter with a
square-root raised-cosine spectrum is considered [6, 7]. For
timing recovery, the squaring timing loop method is
considered [5, 10].
The timing instant t_d, assumed common to the transmitter
and receiver, is obtained from the modulo-2π phase (in
radians) of F_k summed over the n_a antennas, where F_k is
the first harmonic of the mean value of |y_k(t)|² averaged
over the input data for the k-th diversity branch.

Fig. 1 Diversity combining and DFE in the presence of CCI

The received signal power at the demodulator input,
averaged over the data bits, depends on the roll-off factor β,
the complex gain α_l,k of the l-th path of the k-th diversity
channel, the number of paths n and the number of paths per
symbol period n_p. In selection diversity, only one FFF is
required, for the selected diversity branch having the largest
average signal power, so the total number of complex
weights is n_1 + n_2. The signal power strength can be
evaluated either at the output or the input of the
demodulator [11].
The power of the received signal at the demodulator output,
averaged over the data bits, is expressed in terms of G(f)
and H(f), the frequency-domain transfer functions of the
raised-cosine roll-off function g(t) and the k-th channel
impulse response h_k(t), respectively. The LMS algorithm is
used to optimize and adjust the tap-weight coefficients
[3, 14].
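The LMS adjustment of the complex tap-weight coefficients amounts to a simple stochastic-gradient update per training symbol. A minimal sketch follows; the naming and the stacking of combiner and equalizer taps into one vector are our own assumptions, not the paper's implementation:

```python
import numpy as np

def lms_update(w, u, d, mu=0.01):
    """One LMS iteration for complex tap weights (illustrative sketch).

    w : current tap-weight vector (combiner + FFF + FBF taps stacked)
    u : corresponding input vector (received samples / past decisions)
    d : desired (training) symbol
    mu: step size (0.01, matching the simulations in this paper)
    """
    e = d - np.vdot(w, u)              # error: desired minus filter output w^H u
    w_next = w + mu * u * np.conj(e)   # complex LMS weight update
    return w_next, e
```

Iterating this update over the training sequence drives the weights toward the MMSE solution; the step size trades convergence speed against steady-state misadjustment.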
3. CHANNEL MODEL
The channel impulse response for each branch changes
slowly with time. The impulse-response values at each delay
form a random process, independent from branch to branch.
The impulse response of the k-th diversity channel is
modelled by an n-ray model, in which α_l,k represents the
complex gain for the l-th path of the k-th diversity channel,
T_s is the symbol period and n_p is the number of paths per
symbol period; the timing instant for the k-th branch is
obtained from F_k as defined in Section 2. In the system, a
square-root raised-cosine receive filter spectrum is used.
For determining the gains, a statistical model is used because
it allows easier control of channel parameters such as the
delay spread [8, 9]. In this model, multipath components
with different delays are uncorrelated: the path gains
α_i1,k1 and α_i2,k2 are uncorrelated if i1 ≠ i2 or k1 ≠ k2.
The path gains α_i,k are modelled statistically by zero-mean
complex Gaussian random variables, with their power
following the exponential delay profile

P(τ) = (1/τ_rms) exp(-τ/τ_rms),

a continuous delay profile with rms delay spread τ_rms and
average channel gain normalised to one.
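One realization of such a channel can be drawn as follows. This is a sketch under our own assumptions (uniformly spaced rays at multiples of T_s/n_p, delays normalised to the symbol period, discrete renormalisation to unit average gain); the function name is hypothetical:

```python
import numpy as np

def exponential_profile_channel(n_paths, n_p, d_rms, rng=None):
    """Draw one n-ray channel realization (illustrative sketch).

    Path l sits at delay l/n_p symbol periods; its zero-mean complex
    Gaussian gain has variance proportional to exp(-tau/d_rms),
    renormalized so the average channel gain is one
    (d_rms = rms delay spread normalised to the symbol period).
    """
    if rng is None:
        rng = np.random.default_rng()
    delays = np.arange(n_paths) / n_p        # delays in symbol periods
    power = np.exp(-delays / d_rms)
    power /= power.sum()                     # unit average channel gain
    gains = np.sqrt(power / 2) * (rng.standard_normal(n_paths)
                                  + 1j * rng.standard_normal(n_paths))
    return delays, gains
```

Independent draws of this kind, one per diversity branch, reproduce the uncorrelated Rayleigh-faded rays assumed by the statistical model.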
Frequency-selective fading caused by multipath time-delay
spread causes ISI, which results in an irreducible bit error
rate (BER) floor for mobile systems. For small delay spread
(relative to the symbol duration), the resulting flat fading is
the dominant cause of error bursts, while for large delay
spread, timing errors and ISI are the dominant error
mechanisms [18]. For large values of delay spread, an even
more realistic profile models the channel by a number of
exponentially decaying ray clusters at a time.
The channel is assumed quasi-stationary, i.e., time-invariant
over a time interval. Due to motion, parameters such as the
path gains and delay spread are randomly time-varying
functions. However, their rate of variation is very slow
compared with any useful signalling rate likely to be
considered, so these parameters can be treated as virtually
time-invariant random variables [8, 9].
4. SIMULATION RESULTS
Here, computer simulations are used to study the
performance of diversity combining with DFE in a quasi-
stationary frequency-selective fading channel, with AWGN
and CCI. Random channel impulse responses are generated
using (6) and (7). A long training sequence of 1500 symbols
is generated to ensure convergence of the tap-weight
coefficients, and a step-size parameter of 0.01 is used for the
LMS algorithm. For QPSK modulation, a roll-off factor of
0.35 is considered, with the raised-cosine pulse truncated at
6T_s. The choice of roll-off factor involves a trade-off
between spectral efficiency and link performance; the
performance of optimum combining with equalization is
insensitive to the roll-off factor.
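The raised-cosine signalling pulse with roll-off 0.35 used in these simulations can be generated from the standard closed form. The sketch below is our own (time normalised to one symbol period, removable singularity at t = ±1/(2β) handled explicitly); truncation at 6T_s is then just a matter of evaluating it on a finite grid:

```python
import numpy as np

def raised_cosine(t, beta=0.35):
    """Raised-cosine pulse g(t), roll-off beta, t in symbol periods."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    singular = np.isclose(denom, 0.0)
    safe = np.where(singular, 1.0, denom)          # avoid divide-by-zero
    g = np.sinc(t) * np.cos(np.pi * beta * t) / safe
    # fill in the limit value at the removable singularity t = 1/(2*beta)
    return np.where(singular, np.pi / 4 * np.sinc(1.0 / (2.0 * beta)), g)
```

Evaluating `raised_cosine(np.arange(-6, 7))` gives the truncated pulse at symbol spacing: one at t = 0 and (numerically) zero at every other integer, i.e. zero ISI before the channel distorts it.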
For each average probability of bit error (P_e), 2500-10000
transmissions of data packets over independent channels are
simulated, with more transmissions used for low values of
delay spread and high signal-to-noise ratio (SNR). In such
situations, P_e values are very small and, hence, more
simulations are needed to achieve reliable results. For
selection combining, 10000 transmissions are always used.
In the simulations, the normalised rms delay spread ranges
from 0 to 2.
Consider n_a antenna diversity branches, n_1 feed-forward
taps per branch and n_2 feedback taps. The SNR is given in
terms of E_b/N_o, where E_b is the bit energy (i.e., half the
symbol energy) and N_o is the noise power density. The
desired and interfering users are assumed to have the same
transmitted power, but different average channel gains (i.e.,
the area under the multipath power profile). The average
channel gain of the desired signal is normalized to one,
while that of the i-th CCI is determined by the total number
of CCI signals, n_cci; here, all CCI signal gains are assumed
to be equal.
First, consider the case of diversity alone. For joint
optimization combining, the LMS algorithm, based on the
MMSE criterion, is used to adjust all tap-weight coefficients
at the same time. Fig. 2 represents the error performance of
MMSE diversity combining without equalization; the
numbers of feed-forward and feedback taps are 1 and 0,
respectively, so the total number of taps is n_a. For an
average probability of bit error P_e below 10^-3 at an
E_b/N_o of 17 dB, 2 diversity branches can accommodate
normalised delay spreads d up to around 0.3, 3 diversity
branches up to 0.5 and 4 diversity branches up to 0.7.
Fig. 2 Error performance of MMSE diversity combining.
Figure 3 represents the error performance of MMSE
diversity combining with equalization as a function of the
number of taps n_1, with n_2 = n_1 - 1, for d = 1.0.
Fig. 3 Error performance of joint optimization combining with d = 1.0.
For n_a = 1, the error performance of the DFE alone
improves as n_1 increases at first and then reaches a
minimum; this minimum corresponds to the infinite-length
equalizer. As the number of taps increases further,
performance degrades because the additional taps add noise.
For large values of d, more taps are required to reach small
values of P_e; hence, when the system operates at low SNR,
a large number of taps is not required.
For power selection diversity, the number of taps required
is larger than for joint optimization combining, and so is its
cost.
CONCLUSION
The use of DFE techniques together with diversity
combining is important to achieve good performance in
current cellular mobile radio systems. This paper has
analysed the accurate performance of such a technique when
the CCI is subject to severe frequency-selective fading.
QPSK modulation is successfully applied to calculate the
probability of error in the presence of CCI and ISI.
Diversity alone provides greater improvement for small
delay spreads than for large delay spreads.
To achieve a given value of P_e, the total number of taps
required when joint optimization diversity combining is
used is less than when only the DFE is used. Thus, if
diversity is employed, the total number of taps required is
smaller. In addition, the number of taps required is similar
for 2 to 4 antennas, and hence dual-antenna diversity is
considered optimum at low SNR.
While antenna diversity combining alone or equalization
alone cannot suppress CCI in a frequency-selective fading
environment, joint optimization combining with DFE can
effectively suppress CCI, even when the delay spread of the
CCI is larger than that of the desired signal.
FUTURE WORK
Further, this work can be extended to an infinite-length
DFE using QPSK or GMSK modulation. The DFE can also
be used in CDMA and MIMO systems for suppressing
interference [19, 20].
REFERENCES
[1] S. U. H. Qureshi, "Adaptive equalization," Proc. IEEE, vol. 73,
pp. 1349-1387, Sept. 1985.
[2] C. A. Belfiore and J. H. Park, Jr., "Decision feedback equalization,"
Proc. IEEE, vol. 67, pp. 1143-1155, Aug. 1979.
[3] J. G. Proakis, "Adaptive equalization for TDMA digital mobile radio,"
IEEE Trans. Veh. Technol., vol. 40, pp. 333-341, May 1991.
[4] B. Glance and L. J. Greenstein, "Frequency selective fading effects in
digital mobile radio with diversity combining," IEEE Trans. Commun.,
vol. 31, pp. 1085-1094, Sept. 1983.
[5] M. V. Clark, L. J. Greenstein, W. K. Kennedy, and M. Shafi, "MMSE
diversity combining for wide-band digital cellular radio," IEEE Trans.
Commun., vol. 40, pp. 1128-1135, June 1992.
[6] P. Balaban and J. Salz, "Dual diversity combining and equalization in
digital cellular mobile radio," IEEE Trans. Veh. Technol., vol. 40,
pp. 342-354, May 1991.
[7] P. Balaban and J. Salz, "Optimum diversity combining and equalization
in digital data transmission with applications to cellular mobile radio -
Part I and II," IEEE Trans. Commun., vol. 40, pp. 885-907, May 1992.
[8] A. A. M. Saleh and R. A. Valenzuela, "A statistical model for indoor
multipath propagation," IEEE J. Select. Areas Commun., vol. 5,
pp. 128-137, Feb. 1987.
[9] J. W. McKown and R. Lee Hamilton, Jr., "Ray tracing as a design tool
for radio networks," IEEE Network Mag., pp. 27-30, Nov. 1991.
[10] J. C.-I. Chuang, "The effects of multipath delay spread on timing
recovery," IEEE Trans. Veh. Technol., vol. VT-35, pp. 135-140,
Aug. 1987.
[11] H. Hourani, "An overview of diversity techniques in wireless
communication systems," S-72.333 Postgraduate Course in Radio
Communications, 2004-2005.
[12] M. V. Clark, L. J. Greenstein, W. K. Kennedy, and M. Shafi,
"Matched filter performance bounds for diversity combining receivers in
digital mobile radio," IEEE Trans. Veh. Technol., vol. 41, no. 4,
pp. 356-362, 1992.
[13] S. C. Lin, "Performance analysis of decision-feedback equalisation for
cellular mobile radio with co-channel interference and fading," IET
Commun., vol. 3, iss. 1, pp. 100-114, 2009.
[14] B. R. Petersen and D. D. Falconer, "Suppression of adjacent-channel,
co-channel and intersymbol interference by equalizers and linear
combiners," IEEE Trans. Commun., vol. 42, pp. 3109-3118, Dec. 1994.
[15] S. C. Lin and H. W. Fang, "Accurate performance analysis of
optimum diversity combining and equalization over mobile radio fading
channel with CCI," Paper 0431, Proc. ICICS 2006.
[16] N. Al-Dhahir and J. M. Cioffi, "MMSE decision-feedback equalizers:
finite-length results," IEEE Trans. Inf. Theory, vol. 41, no. 4,
pp. 961-975, 1995.
[17] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice
Hall, 1991.
[18] T. S. Rappaport, Wireless Communications: Principles and Practice.
Pearson.
[19] M. Abdulrahman, A. U. H. Sheikh, and D. D. Falconer, "Decision
feedback equalization for CDMA in indoor wireless communications,"
IEEE J. Sel. Areas Commun., vol. 12, no. 4, pp. 698-706, 1994.
[20] C. Tidestav, A. Ahlen, and M. Sternad, "Realizable MIMO decision
feedback equalizers: structure and design," IEEE Trans. Signal Process.,
vol. 49, no. 1, pp. 121-133, 2001.
An Overview of Technical Aspects of WiMAX & LTE
Network Technologies
Awanish kumar kaushik
Electronics and Communication Deptt.
Galgotias college of Engineering and Technology.
Gr.Noida,India
Kaushik2feb@gmail.com
R.L.Yadava
Electronics and Communication Deptt.
Galgotias college of Engineering and Technology.
Gr.Noida,India
rlyadava@rediffmail.com
Abstract - The explosive growth of the Internet over the
last decade has led to an increasing demand for high-speed,
ubiquitous Internet access. Broadband wireless technologies
are increasingly gaining popularity through the successful
global deployment of Wireless Personal Area Networks
(Bluetooth), Wireless Local Area Networks (WiFi),
Wireless Metropolitan Area Networks (WiMAX, IEEE
802.16) and LTE (3GPP Long Term Evolution). Next-
generation broadband wireless technologies such as 3GPP
Long Term Evolution (LTE) and WiMAX offer voice, data,
video and multimedia services on mobile devices at high
speeds, higher data rates and low cost, but require suitably
higher-capacity backhaul networks. Recently, WiMAX and
LTE have been proposed as attractive wireless
communication technologies for providing broadband
access in metropolitan areas. This paper presents a study of
the security-related standards, architecture and design of the
LTE and WiMAX technologies.
Keywords - WiMAX, IEEE 802.16, LTE.
Anubhav Kumar
Electronics and Communication Deptt.
Galgotias College of Engineering and Technology,
Gr. Noida, India
Rajput.anubhav@gmail.com

Anuradha
Department of Electronics and Comm. Engg.
Laxmi Devi Institute of Engineering and Technology,
Alwar, Rajasthan, India
Rajput.anubhav@gmail.com

I. INTRODUCTION
The development of different fixed and mobile broadband
technologies is driven by the global boom in the number of
Internet users, and this development provides support for
high-speed streaming multimedia, customized personalized
services, ubiquitous coverage and unhampered QoS. The
demand for wireless networks within organizations is driven
by the increased use of mobile devices and the increase in
worker mobility. Wireless networks provide connectivity for
sharing information in individual-to-individual, individual-
to-business and business-to-business scenarios. Wireless
networks are typically homogeneous and vertically
integrated, with the service provider tied to a wireless
standard such as GSM/UMTS or CDMA2000. A large
number of wireless networks are based on radio waves,
which makes the network medium open to interception but
also supports a wide range of mobile communication.
Initially, wireless technology was slow, expensive and
reserved only for limited mobile situations or hostile
environments, where cabling was impossible or very
difficult to establish. Mobile communication technology
developed rapidly because of the high demand for large data
rates and good quality of service. These demands were
fulfilled by introducing a new air interface for mobile
communications that enhances the performance and
capacity of the system [1]. Wireless technology basically
provides the connection that lets two computers
communicate using standard network protocols, without
requiring any fixed infrastructure or cabling for
connectivity. The technology is directly related to cross-
vendor industry standards such as IEEE 802.11 and IEEE
802.16. The rapidly increasing number of users and the
limited bandwidth resources require that the spectrum
efficiency of mobile communication systems be improved
by adopting advanced technologies. Novel key technologies
such as MIMO (multiple-input, multiple-output) and OFDM
(orthogonal frequency-division multiplexing) improve the
performance of current mobile communication systems [2],
[3]. For this purpose, two leading standards, WiMAX and
LTE, are used. The economic concept behind these two
technologies is comparable, and both are new and still under
development [4], [5].
Both technologies are already deployed, but WiMAX is
considered the more mature and enhanced technology.
WiMAX is an emerging broadband wireless technology
based on the IEEE 802.16 standards [6]. WiMAX/802.16
provides higher-bandwidth IP-based mobile and wireless
access, handover between networks of different
technologies and management authorities, and broadband in
remote areas. The IEEE 802.16 standard, published in 2002
[7], supports communications in the 10-66 GHz frequency
band and provides services for the Wireless Metropolitan
Area Network (WMAN) with a line of sight (LOS) of
30-50 km; it also defines the air interface for point-to-point
and fixed point-to-multipoint broadband wireless access
networks.
Long-term evolution (LTE) is defined by the Third
Generation Partnership Project (3GPP) and is a highly
flexible radio interface [8, 9]; it was initially released at the
end of 2009. The first release of LTE supports peak rates of
300 Mbps, frequency-division duplex (FDD), time-division
duplex (TDD) and a wide range of system bandwidths, in
order to operate in a large number of different spectrum
allocations. The second release of LTE introduces a few
performance-enhancement features such as enhanced
downlink transmission. The main target of LTE is to
provide a smooth evolution from previous 3GPP systems,
for example time-division synchronous code-division
multiple access (TD-SCDMA) and wideband code-division
multiple access/high-speed packet access (WCDMA/
HSPA), as well as 3GPP2 systems such as code-division
multiple access (CDMA) 2000.
This paper presents a summary analysis of the
WiMAX/802.16 and LTE network architectures. The paper
is structured as follows. Section II presents general aspects
of the LTE network architecture, Section III presents
general aspects of the WiMAX network architecture, and
conclusions are drawn in Section IV.
II- General Aspects of LTE
A- LTE Overview
Basic requirements for LTE (Long Term Evolution) were
introduced in June 2008, and since then LTE has been
evaluated and developed in the 3GPP RAN (Radio Access
Network) Working Groups. The development of LTE-
Advanced was completed in March 2010; several releases of
LTE have been introduced, and some (for example Release
10) have been kicked off. The first release of LTE is known
as Release 8; it was introduced and finalized by 3GPP and is
now in the maintenance phase. The second release of LTE
is known as Release 9 and is presently being introduced by
3GPP. This release supports a few performance
enhancements, for example enhanced downlink
transmission, but its main purpose is to support new
services such as UE (User Equipment) positioning and
broadcasting.
Third Generation Partnership Project (3GPP) Long Term
Evolution (LTE) [10] is a significant development step for
UMTS in terms of capacity and architecture. The main
target of LTE is to provide a minimum of 100/50 Mbps
downlink/uplink (DL/UL) connectivity within one sector of
20 MHz spectrum, and up to 1 Gbps downlink (DL)
connectivity with 3 sectors (downlink of 300 Mbps and
uplink of 150 Mbps per sector) per LTE base station. This
represents an increase over 3G peak data rates of around
30 Mbps. The LTE model has less inter-cell interference
and therefore higher spectrum efficiency with LTE
Release 8; it better matches real-life interference
propagation and is therefore a more realistic environment
for technology evaluation.
LTE is able to operate in different frequency bands as well
as with different bandwidths, in order to operate in spectrum
of different sizes. This enables efficient migration of other
radio-access technologies to LTE.
LTE also supports advanced multi-antenna schemes,
including transmit diversity, spatial multiplexing (both so-
called single-user multiple-input multiple-output [MIMO]
and multi-user MIMO) with up to four antennas, and
beamforming.
B- The main functions of the LTE project
The major function indicators are as follows:
For 1.4 MHz to 20 MHz bandwidth-
--Data rate: 50 Mbps (uplink), 100 Mbps (downlink);
--Spectral efficiency: up to 2~4 times that of 3GPP;
--Delay: < 5 ms for the user plane and < 100 ms for the control plane.
For mutual operation with present 3GPP and non-3GPP systems-
--Support broadcast and multicast services;
--Work in enhancement mode.
For the IP multimedia subsystem and core network-
--Pursue backward compatibility, implement the circuit-switched domain, adopt VoIP;
--Optimize low-speed-mobile systems, support high-speed mobility;
--Support similar technology for simultaneously paired and unpaired frequencies.
C- Network Architecture
LTE (Long Term Evolution) technology was developed
from UMTS (Universal Mobile Telecommunication
System)/HSDPA (High Speed Downlink Packet Access)
cellular technology to provide high data rates and increased
mobility to users accessing services or communicating. The
LTE radio-access technology is based on the OFDM
(Orthogonal Frequency Division Multiplexing) technique
and supports carrier bandwidths between 1.4 MHz and
20 MHz in both frequency-division duplex (FDD) and
time-division duplex (TDD) modes [11]. LTE-Advanced
must support a wider range of bandwidths than LTE, since
20 MHz is not sufficient to satisfy a high peak-data-rate
requirement such as 1 Gbps. In the downlink, using
OFDMA, intra-cell orthogonal multiplexing among physical
channels is achieved in the time and frequency domains by
utilizing localized or distributed transmission, and peak data
rates go from 100 Mbps to 326.4 Mbps, depending on the
modulation type and antenna configuration used. In the
LTE uplink, single-carrier frequency-division multiple
access (SC-FDMA) is adopted because SC-FDMA gives
higher priority to achieving wide-area coverage than to
achieving higher performance through robustness against
multipath interference (MPI) in a multicarrier approach, and
it reduces the peak-to-average power ratio compared with
OFDMA, increasing the battery life and usage time of the
UEs (User Equipments). Interference coordination can be
applied to both uplink and downlink, although with some
fundamental differences between the two links. The main
target of LTE is to provide IP backbone services, flexible
spectrum, lower power consumption and a simple network
architecture with open interfaces. The LTE architecture is
shown in figure (1) [12], [13].
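The peak-to-average power ratio (PAPR) advantage of single-carrier transmission mentioned above can be illustrated numerically. The sketch below is only a rough illustration; the subcarrier count and QPSK mapping are arbitrary choices of ours, not LTE parameters, and no SC-FDMA DFT precoding is modelled:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
n = 256                                   # illustrative subcarrier count
# unit-power QPSK symbols placed on n subcarriers
sym = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
ofdm = np.fft.ifft(sym) * np.sqrt(n)      # multicarrier time-domain signal
# single-carrier QPSK keeps a constant envelope (0 dB PAPR), while the
# OFDM superposition of many subcarriers produces occasional large peaks
print(papr_db(sym), papr_db(ofdm))
```

The high OFDM peaks force the transmit power amplifier to back off, which is exactly the battery-life cost that SC-FDMA avoids in the uplink.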
Figure 1- LTE Architecture.
In figure (1), MME represents the Mobility Management
Entity and S-GW the Serving Gateway, which is directly
connected to the base stations, represented by the eNodeBs.
This architecture is also known as a two-node architecture
because only two nodes are involved between the user
equipment and the core network: the base station (eNodeB)
together with the serving gateway (S-GW) in the user plane,
and the mobility management entity (MME) in the control
plane [14]. The LTE architecture supports smooth
integration and handover to the existing 3GPP and 3GPP2
networks [15]. The LTE architecture is a combination of the
Core Network (CN) and the Access Network (AN), where
the Core Network corresponds to the Evolved Packet Core
(EPC) and the Access Network to E-UTRAN (Evolved
Universal Terrestrial Radio Access Network). Together, the
Core Network and Access Network form the Evolved
Packet System (EPS); the EPS provides connectivity
between users and a Packet Data Network (PDN), with the
help of an IP address, to access the Internet and Internet
services such as Voice over Internet Protocol (VoIP). The
MME is a control-plane entity within the EPS (Evolved
Packet System) and supports the following functions: inter-
CN-node signaling for mobility between 3GPP access
networks, S-GW selection, roaming, authentication, bearer
management functions and NAS (Non Access Stratum)
signaling. The Serving Gateway is the gateway that
terminates the interface towards E-UTRAN.
In E-UTRAN, the eNodeB (base station) belongs to the
wireless network, and the whole access network is
composed entirely of eNodeBs. The eNodeB not only
performs the functions of the former NodeB but also most
functions of the former RNC (Radio Network Controller),
including the physical layer, the MAC (Media Access
Control) layer, RRC, scheduling, access control, bearer
control and access-mobility management.
A UE must first acquire the synchronization signals,
including the primary synchronization signal (PSS), the
secondary synchronization signal (SSS) and the physical
broadcast channel (PBCH), which carry the system-specific
and cell-specific information, in order to establish the
downlink radio link at initial acquisition or in intermittent-
reception mode in LTE. Thanks to its OFDM-based
transmission, LTE can use channel-dependent scheduling in
both the time and frequency domains to exploit, rather than
suppress, rapid channel-quality variations, thereby
achieving more efficient utilization of the available radio
resources. To support the LTE features, scheduling
decisions, hybrid-ARQ feedback, channel-status reports and
other control information must be communicated between
the base station and the terminal.
D- Spectrum Flexibility
The radio spectrum for mobile communication is available
in frequency bands of different sizes, depending on
regulatory aspects in different geographical areas and
environments, and sometimes the spectrum is available in
both paired and unpaired bands. A paired frequency band
means that uplink and downlink transmissions use separate
frequency bands, whereas an unpaired frequency band
means that uplink and downlink must share the same
frequency band. In an initial migration phase, some of the
different radio-access technologies should also be able to
operate jointly in the same spectrum band. Spectrum
flexibility is one of the major properties of LTE radio
access, and it enables operation under all these conditions.
LTE can be operated in different frequency bands as well as
with different bandwidths. Unlike previous mobile
communication systems, LTE supports the possibility of
different uplink and downlink bandwidths, enabling
asymmetric spectrum utilization.
To access a cell, a terminal must first determine the cell
bandwidth and the duplexing scheme. The system
information is transmitted only within the narrow
bandwidth that LTE provides for it. After acquiring the
system information, the cell bandwidth and duplexing
scheme are known, and the terminal can access the cell
based on this knowledge. It is therefore easy to predict that
LTE-Advanced will be the mainstream technology for
mobile broadband evolution.
III- General Aspects of WiMAX
A- WiMAX Overview
The IEEE 802.16 WiMAX system supports the following
specifications: range - a 30-mile (50-km) radius from the
base station; speed - 70 megabits per second; line of sight -
not needed between user and base station; frequency bands -
2 to 11 GHz and 10 to 66 GHz (licensed and unlicensed
bands); channel size - 1.75 MHz to 20 MHz; network
employed - MAN; multiplexing - TDM/FDM; mobility -
vehicular [16]. It supports a larger coverage area than
WLAN and is less costly than current 3G cellular standards.
The main target of the Mobile WiMAX standard is to
provide connectivity between WLANs (Wireless Local
Area Networks) and 3G cellular systems by providing a
specification that supports a mobile broadband access
system, including functions to enable handoff between base
stations (BSs) or sectors. The WiMAX standard provides a
very high data rate but has a short coverage range, whereas
3G cellular systems provide highly mobile long-range
coverage but a low data rate.
WiMAX's attributes open the technology to a wide variety
of applications. With its large range and high transmission
rate, WiMAX can serve as a backbone for 802.11 hotspots
connecting to the Internet. The technology can also provide
fast and cheap broadband access to markets that lack
infrastructure, such as rural areas and unwired countries.
The WiMAX mesh topology connects subscribers to the
Internet without their being connected to the Base Station
(BS). Without relying on basic infrastructure like roads,
tunnels or network backbones, WiMAX mesh will provide
answers to long-existing connectivity demands in
underserved areas.
Although the Mobile WiMAX air interface specifications
are confined to the physical (PHY) and medium access
control (MAC) layers, these standards are very extensive.
The Mobile WiMAX standard targets application areas such
as voice over Internet protocol (VoIP), video conferencing,
streaming media, multiplayer interactive gaming, Web
browsing, instant messaging and media content
downloading.
Available frequency bands play an important role in
providing broadband wireless services. WiMAX uses both
licensed and unlicensed frequencies: the licensed frequency
bands used by WiMAX are 2.3 GHz, 2.5 GHz, 3.3 GHz and
3.5 GHz, and the unlicensed band is 5 GHz.
B- WiMAX Architecture
The mobile WiMAX architecture based on IP (Internet
Protocol) is shown in figure 2. It supports AAA
(authentication, authorization and accounting), IMS (IP
multimedia subsystem), MIP (mobile IP), MIP HAs (mobile
IP home agents) and user databases, and interworking
gateway devices. The whole architecture is divided into four
parts: i) IEEE 802.16e terminals, ii) Access Service
Network, iii) Connectivity Service Network, iv) Ethernet.
Basically, the WiMAX network architecture is an Internet
Protocol (IP) end-to-end network architecture, in which the
integrated telecommunications network uses IP for end-to-
end transport of all user data and signaling data. IP routers
and switches are easier to install and operate than a circuit-
switched network for transporting user data. The main
functions of the Mobile WiMAX network architecture are:
1) separation of the access architecture from the IP
connectivity service, meaning there should be no
dependence between the two; 2) organization in a
hierarchical form, in a flat manner, or in a mesh topology;
3) support of all three types of subscriber (fixed, nomadic
and mobile); and 4) support of global roaming, interworking
with other 3G wireless systems and compatibility with other
communication devices. The Access Service Network
(ASN) comprises one or more ASN Gateways (ASN GWs)
and one or more Base Stations (BSs). The BS provides and
manages resources over the air interface and is responsible
for handover triggering. The ASN-GW performs AAA
client functionality and establishes and manages mobility
tunnels with BSs and connections towards the selected
Connectivity Service Network (CSN) [17]. As shown in
figure 2, by delimiting the access service network (ASN)
from the connectivity service network (CSN), the separation
of the access architecture from the IP connectivity service is
achieved. The CSN supports a set of network services and
functions by which it provides IP connectivity and related
services to the WiMAX subscribers.
Figure 2- IP based WiMAX network architecture
The IMS uses standard IP, which allows existing and new
services to be provided, for example VoIP and circuit-switched
telephony. Each Base Station (BS) has its own gateway to
provide connectivity to the users, but sometimes two BSs share
one gateway, depending on their localities and traffic. As
shown in figure 2, a mobile station (MS) can move from one
cell to another (denoted by a red arrow), and a handoff
between the BSs can be executed through the network. Mobile
Internet Protocol (MIP) supports elements such as home agents
(HAs) that allow handoff of services when a user moves from
one cell's coverage to another. The IP based WiMAX
architecture also allows dynamic and static home address
configuration to optimize routing and load balancing.
VI- Conclusion
This paper presented an overview of the technical aspects of
WiMAX and LTE network technologies, focusing on their basic
features and architecture design. WiMAX and LTE will play
equally important roles in the future of wireless networks.
WiMAX is very important as it represents a whole new dimension
of market opportunities; it is a promising wireless
communication technology for wireless MANs. LTE-Advanced
provides a smooth evolution path for both LTE and HSPA,
offering a further enhanced end user experience with mobile
broadband services. It decreases the cost of operation and
deployment and opens new business opportunities in local area
deployments. The robustness and effectiveness of end-to-end
security approaches in WiMAX and LTE will become clear only
after deployment. It is therefore easy to predict that
LTE-Advanced and WiMAX will be the mainstream technologies for
mobile broadband evolution.
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
References
[1] S. Ahson, A. Ahson and M. Ilyas, WiMAX: Applications. CRC Press, 2007.
[2] W. Choi and J. G. Andrews, "Downlink Performance and Capacity of Distributed Antenna Systems in a Multicell Environment," IEEE Transactions on Wireless Communications, January 2007.
[3] T. Jiang and Y. Wu, "An Overview: Peak-to-Average Power Ratio Reduction Techniques for OFDM Signals," IEEE Transactions on Broadcasting, vol. 54, no. 2, June 2008.
[4] S. Z. Asif, "WiMAX Developments in the Middle East and Africa," IEEE Communications Magazine, vol. 47, no. 2, February 2009.
[5] D. McQueen, "The Momentum Behind LTE Adoption," IEEE Communications Magazine, vol. 47, no. 2, February 2009.
[6] IEEE 802.16-2004, IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Fixed Broadband Wireless Access Systems, IEEE Press, 2004.
[7] IEEE Std 802.16a-2003, IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Fixed Broadband Wireless Access Systems, Amendment 2: Medium Access Control Modifications and Additional Physical Layer Specifications for 2-11 GHz, 2003.
[8] E. Dahlman et al., 3G Evolution: HSPA and LTE for Mobile Broadband, 2nd ed., Academic Press, 2008.
[9] 3GPP TS 36.300, Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN): Overall Description.
[10] 3GPP TS 36.401, E-UTRAN Architecture Description, 3GPP specifications [online]: http://www.3gpp.org
[11] K. Bogineni, R. Ludwig et al., "LTE Part I: Core Network (Guest Editorial)," IEEE Communications Magazine, vol. 47, no. 2, February 2009.
[12] 3GPP TS 36.300 v8.7.0, Technical Specification Group Radio Access Network, Rel. 8, December 2008.
[13] S. Sesia, I. Toufik and M. Baker, LTE, The UMTS Long Term Evolution: From Theory to Practice. John Wiley & Sons Ltd, 2009.
[14] B. Furht and S. Ahson, Long Term Evolution: 3GPP LTE Radio and Cellular Technology. CRC Press, 2009.
[15] D. McQueen, "The Momentum Behind LTE Adoption," IEEE Communications Magazine, vol. 47, no. 2, February 2009.
[16] A. Roca, "Implementation of WiMAX Simulator in Simulink," Vienna, Feb. 2007.
[17] J. G. Andrews, A. Ghosh and R. Muhamed, Fundamentals of WiMAX: Understanding Broadband Wireless Networks. Prentice Hall PTR, 2007.
Anti-collision Protocols for RFID System
Mayur Petkar, Avadhut Apte, P.B.Borole
Abstract- RFID technology is used for the identification of
objects. A major problem arises, however, when a reader must
read multiple tags simultaneously: a collision occurs when
multiple tags reply to the query of a reader. Collisions
prolong tag identification and waste bandwidth and energy.
It is therefore important that RFID application developers be
aware of current tag reading protocols. The main focus of
this paper is to survey collision resolution protocols for
multi-access communication and to compare the performance of
these anti-collision protocols in RFID systems.
Index Terms: RFID systems, Anti-collision protocols, Tree
variants, Aloha variants, Tag estimation functions.
I. INTRODUCTION
An RFID system consists of a reading device, called a reader,
and one or more tags. The reader is typically a powerful
device with ample memory and computational resources. On
the other hand, tags vary significantly in their computational
capabilities. They range from dumb passive tags, which
respond only to reader commands, to smart active tags, which
have an on-board micro-controller, transceiver, memory, and
power supply [5]. Among tag types, passive ones are emerging
as a popular choice for large scale deployments due to their
low cost [6] [7]. The main objective of an RFID system is the
identification of all the tags present in the area covered by
the reader.
One of the major challenges for passive RFID systems is that
of avoiding or resolving collisions due to interference that
might occur among tags. Collisions occur when two or more tags
simultaneously transmit to the same reader (tag-tag
collisions). This is a serious problem when the density of
RFID tags is high and for applications in which many RFID
tags in the same area must be read. It results in wastage of
bandwidth and energy, and increases identification delays. To
minimize collisions, RFID readers must use an anti-collision
protocol. To this end, this paper reviews state-of-the-art
tag reading or anti-collision protocols, and provides a
detailed comparison of the different approaches used to
minimize collisions, and hence help reduce identification
delays.
II. BACKGROUND
Before moving on to anti-collision or tag reading protocols,
we first review how RFID systems operate and how they are
classified.
Mayur Petkar is a student of M.Tech Electronics, Department of Electrical
Engineering, VJTI, Mumbai (mayur_petkar@yahoo.com)
Avadhut Apte is a student of M.Tech Electronics, Department of Electrical
Engineering, VJTI, Mumbai (avadhut.apte@gmail.com)
P.B.Borole is an Assistant Professor in Electrical Engineering Department in
VJTI,Mumbai.
Fig. 1. RFID System.
A. Communication Principle
RFID systems communicate using either magnetic or
electromagnetic coupling. The basic premise behind RFID
systems is that you mark items with tags. These tags contain
transponders that emit messages readable by specialized RFID
readers. Most RFID tags store some sort of identification
number. A reader retrieves information about the ID number
from a database and acts upon it accordingly. RFID
tags can also contain writable memory, which can store
information for transfer to various RFID readers in different
locations.
TABLE I
CLASSIFICATION OF RFID SYSTEMS BASED ON THEIR OPERATING FREQUENCIES

Criterion             | LF            | HF                          | UHF                           | Microwave
Frequency range       | < 135 kHz     | 13.56 MHz                   | 860-930 MHz                   | 2.45 GHz
Tag characteristics   | Passive       | Passive                     | Active, Passive, Semi-Passive | Active, Passive
Read range            | 2 m           | 0.1-0.2 m                   | 4-7 m                         | 1 m
Data transfer rate    | < 10 kbit/s   | < 100 kbit/s                | < 100 kbit/s                  | < 200 kbit/s
Cost                  | High tag cost | Less expensive than LF tags | Cheaper than LF or HF tags    | Expensive compared to LF, HF, UHF
No. of tag reads/s    | Lowest        |                             |                               | Highest
Tag power consumption | Lowest        |                             |                               | Highest
Passive tag size      | Lowest        |                             |                               | Highest
Bandwidth             | Lowest        |                             |                               | Highest
B. Operating Frequency
RFID systems operate in the Industrial, Scientific and Medical
(ISM) frequency bands, which range from 100 kHz to 5.8 GHz.
Table I summarizes the characteristics of RFID systems based
on their operating frequency.
C. Tag Types
Tags are the basic building block of an RFID system. A
tag consists of an electronic microchip and coupling
elements. RFID tags without a microchip are called chipless
tags, and promise significant cost savings since they can be
printed directly on products [7].
There are three types of tags: passive, active and semi-passive
[8]. Passive tags have limited computational capacity and no
ability to sense the channel, detect collisions, or
communicate with each other. Semi-passive tags behave in a
similar manner to passive tags, but have the advantage of an
on-board power source that can be used to energize their
microchip. Active tags are the most expensive compared to
passive and semi-passive tags. Moreover, they can sense the
channel and detect collisions.
III. ANTI-COLLISION PROTOCOLS
Anti-collision protocols are critical to the performance of
RFID systems. Figure 2 shows eight tags and a reader. Without
an anti-collision protocol, the replies from these tags would
collide and thereby prolong their identification. Collisions
also cause bandwidth and energy wastage.
Fig.2 Tag Collision Problem
Figure 3 shows the various anti-collision protocols in
existence [7][10]. Broadly, they can be categorized into space
division multiple access (SDMA), frequency division multiple
access (FDMA), code division multiple access (CDMA), and time
division multiple access (TDMA). Briefly, SDMA protocols
spatially separate the channel using directional antennas or
multiple readers to identify tags; they, however, are expensive
and require intricate antenna designs. FDMA [7] protocols
involve tags transmitting in one of several predefined
frequency channels, thus requiring a complex receiver at the
reader. Systems based on CDMA [7] require tags to multiply
their ID with a pseudo-random (PN) sequence before
transmission. Unfortunately, CDMA based systems are expensive
and power consuming.
Fig.3. Classification of tag reading or anti-collision protocols.
TDMA protocols constitute the largest group of anti-collision
protocols. These protocols can be classified as reader driven
and tag driven. The former and latter are also called
Reader-talk-first (RTF) and Tag-talk-first (TTF) respectively.
Most applications use RTF protocols, which can be further
classified into Aloha and tree based protocols/algorithms. The
basic idea behind RTF is that tags remain quiet until
specifically addressed or commanded by a reader. On the
other hand, TTF procedures function asynchronously: a TTF tag
announces itself by transmitting its ID in the presence of a
reader. Tag driven procedures are slow compared to RTF
procedures.
A. Aloha based protocols
We first review Aloha based tag reading protocols before
discussing tree protocols. The following Aloha variants are in
existence:
1) Pure Aloha (PA).
2) Slotted Aloha (SA).
3) Framed Slotted Aloha (FSA).
a) Basic framed slotted Aloha (BFSA).
b) Dynamic framed slotted Aloha (DFSA).
c) Enhanced dynamic framed slotted Aloha (EDFSA).
1) Pure Aloha (PA): In PA based RFID systems, a tag responds
with its ID randomly after being energized by a reader. It then
waits for the reader to reply with i) a positive
acknowledgment (ACK), indicating its ID has been received
correctly, or ii) a negative acknowledgment (NACK), meaning
a collision has occurred. If two or more tags transmit, a
complete or partial collision occurs [9], which tags then
resolve by backing off randomly before retransmitting their
IDs.
2) Slotted Aloha (SA): In Slotted Aloha (SA) based RFID
systems, tags transmit their IDs in synchronous time slots.
If there is a collision, tags retransmit after a random delay.
Collisions occur at slot boundaries only, hence there are
no partial collisions.
3) Framed Slotted Aloha (FSA): In PA and SA based systems,
a tag with a high response rate will frequently collide with
potentially valid responses from other tags. Therefore, FSA
protocols mandate that each tag responds only once per
frame. The following sections describe various FSA variants.
a) Basic Framed Slotted Aloha (BFSA): BFSA has four
variants: 1) BFSA-non-muting, 2) BFSA-muting, 3)
BFSA-non-muting-early-end, and 4) BFSA-muting-early-end.
Note, the term basic refers to the frame size being fixed
throughout the reading process. In BFSA-non-muting, a tag is
required to transmit its ID in each read round. In non-muting
variants, the reading delay depends on the confidence level
α, where α = 0.99 indicates 99% of the tags have been read
successfully. The number of read cycles R needed to read a tag
set with confidence level α is given by

R = ceil( ln(1 - α) / ln(1 - p) ),

where N is the frame size, n is the number of tags, and the
probability of a tag having a successful transmission in a
read round is

p = (1 - 1/N)^(n-1).

The ceiling is taken to obtain an integral value and avoid
conservative delay values. In BFSA-muting, the number of tags
reduces after each read round, since tags are silenced after
identification. When a read round is collision free, the
reader concludes that all tags have been identified
successfully.
The BFSA-non-muting-early-end and BFSA-muting-early-end
variants incorporate the early-end feature: the reader closes
a slot early if no response is detected at the beginning of
the slot. BFSA-non-muting suffers from an exponential increase
in identification delay when the number of tags is higher than
the frame size.
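The read-round computation can be sketched numerically. The per-round success probability p = (1 - 1/N)^(n-1), the chance that none of the other n-1 tags picks a given tag's slot, is an assumption of this sketch rather than a formula quoted verbatim from the text:

```python
import math

def read_rounds(N, n, alpha=0.99):
    """Read rounds R to identify a tag set with confidence alpha in
    BFSA-non-muting. p is the assumed per-round probability that a
    given tag transmits in a slot chosen by no other tag."""
    p = (1.0 - 1.0 / N) ** (n - 1)
    # ceil() yields an integral value and avoids conservative delays
    return math.ceil(math.log(1.0 - alpha) / math.log(1.0 - p))

# Delay grows steeply once the tag count n approaches the frame size N
for n in (16, 64, 128):
    print("N=64, n=%3d -> R=%d" % (n, read_rounds(64, n)))
```

Running this with N = 64 shows the exponential blow-up the text mentions: a handful of rounds suffices for 16 tags, while 128 tags need an order of magnitude more.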
b) Enhanced Dynamic Framed Slotted Aloha (EDFSA): EDFSA
specifies a predefined set of frame sizes (varying from 8 to
256) for different ranges of estimated unread tags. Typically,
the frame size is around the midpoint of the range. As an
example, if the estimated number of unread tags is between 82
and 176, then the selected frame size is 128, while for a range
between 177 and 354 tags the frame size grows to 256. For
larger tag populations, EDFSA randomly splits tags into
groups of the maximum frame size (i.e., 256 tags). In such a
case, only the tags associated with one of the groups are
queried in the following frame. Chebyshev's inequality is
used at the end of the frame to estimate the number of tags
which have participated in it. Such an estimate is then used to
refine the estimate of the global tag population, possibly
adjusting the number of tag groups and the size of the
following frames.
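The frame planning step can be sketched as follows. Only the two ranges quoted above (82-176 tags selecting a frame of 128, 177-354 selecting 256) come from the text; the fallback frame size for smaller populations and the grouping rule are illustrative assumptions:

```python
def edfsa_plan(est_unread, max_frame=256):
    """Pick (frame_size, num_groups) for the next EDFSA read round.
    The 82-176 -> 128 and 177-354 -> 256 ranges follow the text;
    the small-population fallback of 64 is an illustrative guess."""
    if est_unread > 354:
        groups = -(-est_unread // max_frame)   # ceiling division
        return max_frame, groups               # one group answers per frame
    if est_unread >= 177:
        return 256, 1
    if est_unread >= 82:
        return 128, 1
    return 64, 1                               # assumed choice for < 82 tags

print(edfsa_plan(120))    # (128, 1)
print(edfsa_plan(1000))   # (256, 4): tags split into 4 groups
```

The grouping branch reflects the idea that for very large populations only one randomly chosen group is interrogated per frame, keeping the expected load per frame near the optimum.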
B. Tree Based Protocols
Tree based protocols were originally developed for multiple
access arbitration in wireless systems. These protocols are able
to single out and read every tag, provided each tag has a
unique ID. All tree based protocols require tags to have
muting capability, as tags are silenced after identification.
Tree based algorithms can be classified into the following
categories:
1) Tree splitting (TS).
2) Query tree (QT).
3) Binary search (BS).
4) Bitwise arbitration (BTA).
1) Tree Splitting (TS): TS protocols operate by splitting
responding tags into multiple subsets using a random number
generator. The algorithm performs collision resolution by
splitting collided tags into b disjoint subsets. These subsets
become increasingly smaller until they contain one tag.
Identification is achieved in a sequence of timeslots. Each tag
has a random binary number generator and, in addition,
maintains a counter to record its position in the resulting
tree. Tags with a counter value of zero are considered to be
in the transmit state; otherwise tags are in the wait or sleep
state. After each timeslot, the reader informs tags whether the
last timeslot resulted in a collision, a single response, or no
response. If there was a collision, each tag in the transmit
state generates a random binary number and adds it to its
current counter value, while tags in the wait state increment
their counter by one. In the case of an idle or single
response, tags in the wait state decrement their counter by
one. After identification, tags enter the sleep state.
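The counter discipline just described can be simulated end to end. A sketch under idealized assumptions (binary splitting, b = 2, and a reader that perfectly classifies each slot as idle, single, or collision):

```python
import random

def tree_split_identify(num_tags, seed=0):
    """Binary tree-splitting simulation. Each tag keeps a counter;
    counter == 0 means the tag transmits in the current timeslot.
    Returns the number of timeslots needed to identify all tags."""
    rng = random.Random(seed)
    counters = [0] * num_tags           # all tags start in the transmit state
    identified = slots = 0
    while identified < num_tags:
        slots += 1
        tx = [i for i, c in enumerate(counters) if c == 0]
        if len(tx) > 1:                 # collision slot
            for i, c in enumerate(counters):
                if c is None:
                    continue            # sleeping (already identified)
                # transmitters add a random bit; waiting tags add one
                counters[i] = c + rng.randint(0, 1) if c == 0 else c + 1
        else:
            if len(tx) == 1:            # single reply: tag identified, sleeps
                counters[tx[0]] = None
                identified += 1
            for i, c in enumerate(counters):
                if c is not None and c > 0:
                    counters[i] = c - 1 # idle/single: waiting tags move up
    return slots

print(tree_split_identify(8))
```

Each identification consumes one single-reply slot, so n tags always need at least n slots, plus whatever collision and idle slots the random splitting produces.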
2) Query Tree (QT): The QT protocol queries tags according to
their ID. The reader interrogates tags by sending a string, and
only those tags whose IDs have a prefix matching that string
respond to the query. At the beginning, the reader queries all
tags (sending the NULL string). If a collision occurs, the
string length is increased by one bit until the collision is
resolved and a tag is identified. The reader then starts a new
query with a different string; in particular, if tag
identification occurred with a string q0, the reader will query
for string q1. The resulting binary tree has nodes at the i-th
level labeled with all the possible values of a prefix of
length i (e.g. nodes at level 1 contain prefixes 0 and 1, nodes
at level 2 prefixes 00, 01, 10, 11, and so on). The exploration
of a subtree is skipped if there is only one tag matching the
prefix stored in the subtree root (i.e. if tag identification
occurs when the reader queries with the subtree root prefix).
In the case of a uniform ID distribution, the tree induced by
the query tree is analogous to the tree induced by the BS
protocol, because a set of uniformly distributed tags splits
approximately into equal parts at each query, as in the BS
protocol. Hence we can assert that QT system efficiency is
about 34%.
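The prefix-query loop can be sketched directly. The function name and the idealized reply model, in which the reader perfectly distinguishes zero, one, or multiple replies, are illustrative assumptions:

```python
def query_tree_identify(ids, bits=8):
    """Query tree sketch: returns (identified_id_strings, num_queries).
    Tag IDs are integers compared as fixed-length bit strings."""
    tags = {format(i, f"0{bits}b") for i in ids}
    identified, queries = [], 0
    stack = [""]                        # start from the NULL prefix
    while stack:
        prefix = stack.pop()
        queries += 1
        replies = [t for t in tags if t.startswith(prefix)]
        if len(replies) == 1:           # single reply: tag identified
            identified.append(replies[0])
        elif len(replies) > 1:          # collision: extend prefix by one bit
            stack.append(prefix + "1")
            stack.append(prefix + "0")
    return identified, queries

found, q = query_tree_identify([3, 7, 200])
print(sorted(found), q)   # 3 tags identified in 13 queries
```

Note how IDs sharing a long common prefix (3 and 7 here) force the reader to walk deep into one subtree, which is exactly the query-length-versus-tree-depth disadvantage discussed later in the paper.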
3) Binary Search (BS): The BS algorithm [7] involves the
reader transmitting a serial number to tags, which they then
compare against their ID. Those tags with ID equal to or lower
than the serial number respond. The reader then monitors the
tags' replies bit by bit using Manchester coding, and once a
collision occurs, the reader splits tags into subsets based on
the collided bits.
4) Bitwise Arbitration (BTA) Algorithms: Researchers have
proposed various BTA algorithms. Unlike TS, QT, and IDS
protocols, BTA algorithms operate by requesting tags to
respond bit by bit from the most significant bit (MSB) to the
least significant bit (LSB) of their ID. The key feature of
BTA algorithms is that bit replies are synchronized, meaning
multiple tag responses of the same bit value result in no
collision. A collision is observed only if two tags respond
with different bit values. Moreover, the reader has to specify
the bit position it wants to read.
IV. RESEARCH DIRECTIONS
From our discussions above, it is clear that researchers have
studied both Aloha and tree protocols extensively. Research
on Aloha protocols is shifting towards DFSA variants,
specifically those that rely on a tag estimation function. From
our survey, we find that dynamic estimation schemes to be the
most promising because of their higher accuracy for a given
tag range. However, further research is required to reduce their
considerable computational cost and memory requirements.
TABLE II
COMPARISON BETWEEN TREE AND ALOHA BASED PROTOCOLS

Criterion                     | Tree protocol                                              | Aloha protocol
Protocol feature              | Operates by grouping responding tags into subsets and then identifying the tags in each subset sequentially. | Requires tags to respond randomly, in an asynchronous manner or in synchronized slots.
Usage                         | UHF and Microwave RFID systems                             | LF and HF RFID systems
No. of reader-to-tag commands | High                                                       | Low
Delay versus tag density      | Low identification delay in high tag density environments. | Low identification delay achievable only when tag density is low.
Method                        | Deterministic                                              | Probabilistic
Channel utilization           | 43%                                                        | 18.4% (Pure Aloha), 36.5% (BFSA), 42.6% (DFSA)
Tree algorithms provide a deterministic approach to identify
tags. On the other hand, Aloha based approaches are
probabilistic in nature, simple, and promise dynamic
adaptability to varying loads, unlike tree protocols, which
must restart their reading process if a new tag enters a
reader's interrogation zone while tags are being read. Table II
shows a comparison between Aloha and tree based algorithms.
For tree protocols, QT variants have had a number of advances,
mainly due to their simpler tag designs that only require a
prefix matching and a synchronization circuit. A key
disadvantage of QT protocols, however, is that the length of a
query is proportional to the depth of the constructed tree.
Another problem is that identification delay increases with ID
size. This issue becomes critical when the EPC adopts 256 bit
IDs. The current approach to address long IDs is to use
randomly generated pseudo IDs. The advantage of such an
approach is that it involves minimal data exchange between
the reader and tags, and uses shorter IDs, which reduces tree
depth. From our survey, an interesting observation is that,
with few exceptions, existing tag reading protocols do not yet
incorporate pseudo IDs. Therefore, an interesting research
direction is to analyze the performance gains to be had if
protocols use pseudo IDs.
V. CONCLUSION
We have presented a comprehensive survey and classification
of RFID anti-collision protocols. In general, there are two
methods used for identifying tags: Aloha and tree. The key
advantages of Aloha protocols are dynamic adaptability to
varying loads and a low number of reader-to-tag commands. On
the other hand, tree protocols promise deterministic
identification, but require a high number of reader-to-tag
commands.
ACKNOWLEDGMENT
We extend our thanks to VJTI Faculty Members of Electrical
Engineering Department for their help and support throughout
the paper.
REFERENCES
[1] In-Stat, "Explosive growth projected in next five years for RFID tags," http://www.intant.com.
[2] RFID Journal, "Wal-Mart begins RFID process changes," http://www.rfidjournal.com/article/articleview/1385.
[3] RFID Journal, "DoD releases final RFID policy," http://www.rfidjournal.com/article/articleview/1080/1/1.
[4] RFID Journal, "DoD reaffirms its RFID goals," http://www.rfidjournal.com/articleview/3211/1/1.
[5] M. Kodialam and T. Nandagopal, "Fast and reliable estimation schemes in RFID systems," in SIGMOBILE: ACM Special Interest Group on Mobility of Systems, Users, Data and Computing, pp. 322-333, 2006.
[6] R. Want, "The magic of RFID," ACM Queue, vol. 2, no. 7, pp. 40-48, 2004.
[7] K. Finkenzeller, RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification. John Wiley and Sons Ltd, 2003.
[8] S. Lahiri, RFID Sourcebook. USA: IBM Press, 2006.
[9] L. A. Burdet, "RFID multiple access methods," Technical report, http://www.vs.inf.ethz.ch/edu/SS2004/DS/reports/06_rfid-mac_report.pdf.
[10] D. H. Shih, P. L. Sun, D. C. Yen and S. M. Huang, "Taxonomy and survey of RFID anti-collision protocols," Computer Communications, vol. 29, no. 11, pp. 2150-2166, 2006.
[11] R. Want, "An introduction to RFID technology," IEEE Pervasive Computing, vol. 5, no. 1, pp. 25-33, 2006.
BER for BPSK in OFDM with Rayleigh multipath channel
Mrs. Dipti Sharma, Sr. Lecturer, Apex Institute of Technology, Rampur (dipti_shaarma510@yahoo.co.in)
Mr. Abhishek Saxena, Lecturer, AIT, Rampur (abhishek.saxena.ei@gmail.com)
Abstract:- Orthogonal frequency-division
multiplexing (OFDM) is the modulation technique for
European standards such as the Digital Audio
Broadcasting (DAB) and the Digital Video
Broadcasting (DVB) systems. As such it has received
much attention and has been proposed for many other
applications, including local area networks and
personal communication systems. Here we study
the bit error rate (BER) for BPSK in OFDM with a
Rayleigh multipath channel. The theoretical results
match the simulated results.
INTRODUCTION
The first multichannel modulation systems appeared in
the 1950's as military radio links, systems best
characterized as frequency-division multiplexed
systems. The first OFDM schemes were presented by
Chang [1] and Saltzberg [2]. Actual use of OFDM
was limited and the practicability of the concept was
questioned. The type of OFDM that we will describe in
this article uses the discrete Fourier transform (DFT)
[1] with a cyclic prefix [4]. The DFT (implemented
with a fast Fourier transform (FFT)) and the cyclic
prefix have made OFDM both practical and attractive
to the radio link designer. A similar multichannel
modulation scheme, discrete multitone (DMT)
modulation, has been developed for static channels
such as the digital subscriber loop [6]. DMT also uses
DFTs and the cyclic prefix but has the additional
feature of bit-loading which is generally not used in
OFDM.
The choice for OFDM as transmission technique
could be justified by comparative studies with single
carrier systems. However, few such studies have been
documented in the literature, see, e.g., [6]. OFDM is
often motivated by two of its many attractive features: it is
considered to be spectrally efficient and it offers an
elegant way to deal with equalization of dispersive
slowly fading channels. We concentrate here on such
channels.
Multiuser systems that use OFDM must be
extended with a proper multiple-access scheme as must
single carrier transmission systems. Compared to single
carrier systems, OFDM is a versatile modulation
scheme for multiple access systems in that it
intrinsically facilitates both time-division multiple
access and frequency-division (or subcarrier-division )
multiple access [7]. In addition, considerable attention
has been given to the combination of the OFDM
transmission technique and code-division multiple
access (CDMA) in multicarrier-CDMA systems, MC-
CDMA, see [Hara and Prasad, 1997] and the
references therein.
OFDM also has some drawbacks. Because
OFDM divides a given spectral allotment into many
narrow subcarriers each with inherently small carrier
spacing, it is sensitive to carrier frequency errors.
Furthermore, to preserve the orthogonality between
subcarriers, the amplifiers need to be linear. OFDM
systems also have a high peak-to-average power ratio or
crest-factor, which may require a large amplifier power
back-off and a large number of bits in the analog-to-
digital (A/D) and digital-to-analog (D/A) designs. All
these requirements can put a high demand on the
transmitter and receiver design.
The performance over the Rayleigh channel is as follows.
Though the total channel is a frequency selective
channel, the channel experienced by each subcarrier in
an OFDM system is a flat fading channel, with each
subcarrier experiencing independent Rayleigh
fading [3].
So, assuming that the number of taps in the channel is
lower than the cyclic prefix duration (which ensures
that there is no inter-symbol interference), the BER for
BPSK with OFDM in a Rayleigh fading channel should
be the same as the result obtained for BER for BPSK in
a Rayleigh fading channel.
OFDM system
Let us use an OFDM system loosely based on IEEE 802.11a
specifications.

Parameter                        | Value
FFT size, nFFT                   | 64
Number of used subcarriers, nDSC | 52
FFT sampling frequency           | 20 MHz
Subcarrier spacing               | 312.5 kHz
Used subcarrier index            | {-26 to -1, +1 to +26}
Cyclic prefix duration, Tcp      | 0.8 us
Data symbol duration, Td         | 3.2 us
Total symbol duration, Ts        | 4 us
Eb/No and Es/No in OFDM
The relation between the symbol energy and the bit energy is
as follows:

Es/N0 = Eb/N0 * (nDSC/nFFT) * (Td/Ts).

Expressing in decibels,

Es/N0 (dB) = Eb/N0 (dB) + 10 log10(nDSC/nFFT) + 10 log10(Td/Ts).
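Assuming the standard conversion Es/N0 = Eb/N0 * (nDSC/nFFT) * (Td/Ts), where the two factors account for the unused subcarriers and the cyclic prefix overhead, the dB offset for the tabulated parameters can be computed directly:

```python
import math

# Assumed relation: Es/N0 = Eb/N0 * (nDSC/nFFT) * (Td/Ts)
nFFT, nDSC = 64, 52        # FFT size, number of used data subcarriers
Td, Ts = 3.2e-6, 4.0e-6    # data symbol duration, total symbol duration

offset_db = 10 * math.log10(nDSC / nFFT) + 10 * math.log10(Td / Ts)
print(f"Es/N0 = Eb/N0 {offset_db:+.2f} dB")   # about -1.87 dB
```

So for these 802.11a-like parameters, every Eb/N0 point corresponds to a symbol SNR roughly 1.87 dB lower.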
Rayleigh multipath channel model
The channel is modelled as an n-tap channel, with the real
and imaginary part of each tap being an independent Gaussian
random variable [7]. The impulse response is

h(t) = h1 δ(t) + h2 δ(t - Ts) + ... + hn δ(t - (n-1)Ts),

where h1 is the channel coefficient of the 1st tap, h2 is the
channel coefficient of the 2nd tap, and so on [8]. The real
and imaginary part of each tap is an independent Gaussian
random variable with mean 0 and variance 1/2. A 1/sqrt(n)
scaling term is applied for normalizing the average channel
power over multiple channel realizations to 1.
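The tap statistics can be checked numerically. This sketch assumes the normalizing scale factor is 1/sqrt(nTap), which makes the expected total channel power equal to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
nTap, nReal = 10, 20000    # taps per channel, number of channel realizations

# Real and imaginary parts: independent Gaussians with mean 0, variance 1/2.
taps = (rng.standard_normal((nReal, nTap)) +
        1j * rng.standard_normal((nReal, nTap))) / np.sqrt(2)
taps /= np.sqrt(nTap)      # assumed normalization term

avg_power = np.mean(np.sum(np.abs(taps) ** 2, axis=1))
print(avg_power)           # close to 1.0 over many realizations
```

With 20000 realizations the sample mean of the total tap power settles within a fraction of a percent of 1.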
Figure: Impulse response of a multipath channel
CYCLIC PREFIX
The cyclic prefix plays the role of a buffer region where
delayed information from the previous symbols can be stored.
Further, since the addition of a sinusoid to a delayed version
of the same sinusoid does not change its frequency (it affects
only the amplitude and phase), the orthogonality across
subcarriers is not lost even in the presence of multipath.
Since the defined cyclic prefix duration is 0.8 us
(16 samples at 20 MHz), the Rayleigh channel
is chosen to be of duration 0.5 us (10 taps) [9].
Expected Bit Error Rate
The BER for BPSK in a Rayleigh fading channel is given by

Pb = 0.5 * (1 - sqrt( (Eb/N0) / (1 + Eb/N0) )).

The Fourier transform of a Gaussian random variable still has
a Gaussian distribution, so the frequency response of a
complex Gaussian channel (i.e. a Rayleigh fading channel) is
still an independent complex Gaussian random variable over
all frequencies.
Given this, the bit error probability derived for BER for
BPSK in a Rayleigh channel holds good even in the case of
OFDM.
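The closed form referred to here, Pb = 0.5 * (1 - sqrt(g / (1 + g))) with g the average Eb/N0 in linear units, is the standard flat-Rayleigh BPSK result and can be tabulated as:

```python
import math

def ber_bpsk_rayleigh(ebno_db):
    """Theoretical BER of BPSK over flat Rayleigh fading."""
    g = 10 ** (ebno_db / 10)           # average Eb/N0, linear scale
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

for ebno_db in (0, 10, 20, 30):
    print(ebno_db, ber_bpsk_rayleigh(ebno_db))
```

At high SNR the expression behaves like 1/(4g), i.e. the BER falls only one decade per 10 dB, which is the familiar Rayleigh-fading penalty compared with AWGN.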
Simulation model
The simulation is done using a Matlab script. The
following steps are performed to obtain the simulation results:
(a) Generation of random binary sequence
(b) BPSK modulation, i.e. bit 0 represented as -1 and bit
1 represented as +1
(c) Assigning to multiple OFDM symbols where data
subcarriers from -26 to -1 and +1 to +26 are used,
adding cyclic prefix,
(d) Convolving each OFDM symbol with a 10-tap
Rayleigh fading channel. The fading on each symbol is
independent. The frequency response of fading channel
on each symbol is computed and stored.
(e) Concatenation of multiple symbols to form a long
transmit sequence
(f) Adding White Gaussian Noise
(g) Grouping the received vector into multiple symbols,
removing cyclic prefix
(h) Converting the time domain received symbol into
frequency domain
(i) Dividing the received symbol with the known
frequency response of the channel
(j) Taking the desired subcarriers
(k) Demodulation and conversion to bits
(l) Counting the number of bit errors
(m) Repeating for multiple values of Eb/No
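The flow in steps (a)-(m) can be sketched compactly in Python with NumPy. This is an illustrative re-implementation, not the authors' script: fading is applied per OFDM symbol, the equalizer uses the stored channel frequency response as in step (i), and the noise is scaled so that the per-subcarrier SNR equals Eb/N0, which folds the Eb/N0-to-Es/N0 bookkeeping into one constant:

```python
import numpy as np

rng = np.random.default_rng(1)
nFFT, nDSC, nCP, nTap, nSym = 64, 52, 16, 10, 2000
used = np.r_[-26:0, 1:27] % nFFT       # data subcarriers -26..-1, +1..+26

def simulate(ebno_db):
    # (a)-(b): random bits, BPSK mapping 0 -> -1, 1 -> +1
    bits = rng.integers(0, 2, (nSym, nDSC))
    X = np.zeros((nSym, nFFT), complex)
    X[:, used] = 2.0 * bits - 1
    # (c): IFFT (scaled for unit per-subcarrier power at the receiver) + CP
    x = np.fft.ifft(X, axis=1) * np.sqrt(nFFT)
    x = np.hstack([x[:, -nCP:], x])
    # (d): independent 10-tap Rayleigh channel per symbol, unit average power
    h = (rng.standard_normal((nSym, nTap)) +
         1j * rng.standard_normal((nSym, nTap))) / np.sqrt(2 * nTap)
    y = np.stack([np.convolve(x[k], h[k])[: nFFT + nCP] for k in range(nSym)])
    # (f): AWGN, scaled so the per-subcarrier SNR equals Eb/N0 (simplification)
    sigma = 10 ** (-ebno_db / 20)
    y = y + sigma / np.sqrt(2) * (rng.standard_normal(y.shape) +
                                  1j * rng.standard_normal(y.shape))
    # (g)-(i): drop CP, FFT, zero-forcing equalization with the known response
    Y = np.fft.fft(y[:, nCP:], axis=1) / np.sqrt(nFFT)
    H = np.fft.fft(h, nFFT, axis=1)
    Z = Y / H
    # (j)-(l): pick data subcarriers, demodulate, count bit errors
    return np.mean((Z[:, used].real > 0).astype(int) != bits)

# Theoretical flat-Rayleigh BPSK BER for comparison
theory = lambda db: 0.5 * (1 - np.sqrt(10 ** (db / 10) / (1 + 10 ** (db / 10))))
for ebno_db in (5, 15):
    print(ebno_db, simulate(ebno_db), theory(ebno_db))
```

Because the 10-tap channel is shorter than the 16-sample cyclic prefix, each symbol sees a circular convolution and the measured BER tracks the theoretical Rayleigh curve closely.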
The simulation results are as shown in the plot below.
Figure: BER plot for BPSK with OFDM modulation
in a 10-tap Rayleigh fading channel
Conclusion
The simulated BER results are in good agreement with
the theoretical BER results. Thus the OFDM channel
contains less interference when the Rayleigh channel is
used; if an AWGN channel were used, the interference
would have been more [5].
References:-
1. M. Sandell, J. J. van de Beek, and P. O. Börjesson [1995], "Timing and Frequency Synchronization in OFDM Systems Using the Cyclic Prefix," in Proceedings of the IEEE International Symposium on Synchronization, pp. 16-19.
2. T. M. Schmidl [1997b], "Synchronization Algorithms for Wireless Data Transmission Using Orthogonal Frequency Division Multiplexing (OFDM)," PhD thesis, Stanford University.
3. T. M. Schmidl and C. Cox [1997a], "Robust Frequency and Timing Synchronization for OFDM," IEEE Transactions on Communications, 45, 12, pp. 1613-1621.
4. T. Seki, Y. Sugita, and T. Ishikawa [1997], "OFDM Synchronization Demodulation Unit," United States Patent 5,602,835.
5. C.-R. Sheu, Y.-L. Huang, and C.-C. Huang [1997], "Joint Symbol, Frame, and Carrier Synchronization for Eureka 147 DAB System," in Proceedings of the International Conference on Universal Personal Communications (ICUPC'97), pp. 693-697.
6. J. Tellado and J. M. Cioffi [1997], "PAR Reduction in Multicarrier Transmission Systems," ANSI T1E1.4/97-367.
7. P. J. Tourtier, R. Monnier, and P. Lopez [1993], "Multicarrier Modem for Digital HDTV Terrestrial Broadcasting," Signal Processing: Image Communication, 5, 5/6, pp. 379-403.
8. VDSL Alliance [1997], "VDSL Alliance SDMT VDSL Draft Standard Proposal," ANSI Contribution T1E1.4/97-332.
9. W. D. Warner and C. Leung [1993], "OFDM/FM Frame Synchronization for Mobile Radio Data Communication," IEEE Transactions on Vehicular Technology, 42, 3, pp. 302-313.
10. L. Wei and C. Schlegel [1995], "Synchronization Requirements for Multiuser OFDM on Satellite Mobile and Two-Path Rayleigh Fading Channels," IEEE Transactions on Communications, 43, 2/3/4, pp. 887-895.
11. S. B. Weinstein and P. M. Ebert [1971], "Data Transmission by Frequency-Division Multiplexing Using the Discrete Fourier Transform," IEEE Transactions on Communications, 19, 5, pp. 628-634.
Sybil Attack: Threat in P2P Network
POOJA RANI1, DEEPTI SHARMA2 and SUMIT KUMAR3
1 Department of Information Technology, ITM, Sector-23A, Gurgaon, me_pooja@msn.com
2 Department of HR & IT, College of Management Studies, Kanpur, im_deeptisharma@rediffmail.com
3 Department of Instrumentation Control, NSIT, Dwarka, Delhi, me_sumit18@yahoo.in
Abstract
Peer-to-peer (P2P) networks provide the benefits of fast, easy and inexpensive communication among their users. One of their problems, however, is the Sybil attack: since most communicating nodes in a P2P network are not aware of the identities of participating nodes, and there is no central (administrative) or local controlling authority, a node can create pseudonymous identities and participate in the system in order to degrade its performance. Such an attack is called a Sybil attack, and the node is called a malicious node.
This paper addresses the problem of the Sybil attack in various types of P2P networks, discusses a mathematical model that helps in studying and analysing the attack, and examines the factors that affect Sybil defences. It then surveys solutions from earlier proposals to recent ones and presents a comparative analysis of the different defence approaches applied in P2P networks.
Keywords:
Computation, Malicious, Nodes, Peer to Peer, Sybil Attack, Resources, System
1-Introduction
Peer-to-peer (P2P) networks came into existence only recently, but they have grown to become one of the important concepts in communication technology, and much effort and time has been invested in maturing them. Since their existence cannot be challenged, work has instead been carried out to defend against the Sybil attack. The Sybil attack occurs in P2P networks because such networks connect a large number of nodes while lacking an identification system; the absence of trustworthiness is one of their major drawbacks. This lack of trustworthiness grows exponentially with the growth of the system and can force a redefinition of the system at the basic level, and restructuring a system from its roots ruins its existence. In a Sybil attack, a node degrades the reputation of a P2P network by creating a large number of pseudonymous identities and using them to gain disproportionate influence over the system for its own objectives.
How prone a P2P network is to this threat depends on how cheaply identities can be generated and on the degree to which the system accepts input from nodes that have no method of verifying whether other nodes are trusted or fake. A node on a P2P network is a piece of software with access to local resources; it advertises itself on the network by presenting an identity. More than one identity can correspond to a single node: the mapping of identities to nodes is many-to-one. Nodes in P2P networks use multiple identities for purposes of redundancy, resource sharing, reliability and integrity. The identity is used as an abstraction so that a remote entity can be aware of identities without necessarily knowing their correspondence to local entities. By default, distinct identities are assumed to correspond to distinct local nodes; in reality they may correspond to the same local node.
A faulty node or an adversary may therefore present itself with multiple identities in a P2P network to appear and function as several distinct nodes. Having become part of the network, the adversary may overhear communications or act maliciously; by masquerading under multiple identities, it can control the network substantially. As an example of its presence in various types of P2P networks, consider an online poll in which a total of x users participate: it is possible, as discussed in the literature, that the opinion of a single user wins the majority, because that user, motivated by a predetermined mindset, has created more than half of the identities, and this majority of identities results in a majority in the poll.
2-Various Effects of Sybil Attack
In the presence of a Sybil attack the system's performance suffers in many ways; a few of them are discussed below:
2.1 Driving Trend:
After gaining a voting majority in the system, the malicious nodes (attackers) can steer the system's trend according to their predetermined agenda. In such a scenario only a few participants decide what the majority trend will be, which renders the majority-voting concept useless: whatever the attackers decide drives the system.
2.2 Byzantine failure:
As the majority concept already works in the attackers' favour in a Sybil attack, a Byzantine failure can easily be generated, since it is easy to create a large number of fake identities.
2.3 Reputation System:
Reputation systems (e.g. eBay) are used for consulting other participants when making decisions. If Sybil attacks occur frequently, they not only degrade performance but also spoil the very concept of a reputation system, which relies on many reference points; as the concept fades, more queries must be generated and time is wasted.
2.4 Free Riding:
Free riding is the behaviour of notorious users who consume system resources without cooperating with others. The equal-opportunity, give-and-take principle is nullified: various resources are driven by the preconceived agenda of a single party rather than by a cooperative system.
2.5 Wastage of Resources:
Studies have observed that such non-cooperation usually leads to a waste of resources: with the deliberate aim of wasting resources, the attacker harms the system and makes it worse. In a fully captured scenario most activity on the system is useless or in vain, which amounts to a loss of resources such as bandwidth, processing power and data storage.
3 - Factors that Affect Solution Approaches
Various system-dependability attributes affect system performance in the presence of a Sybil attack; they directly or indirectly influence solution design. A few of them are listed below:
3.1 Availability of System:
Availability defines whether the system can keep providing service in the presence of a Sybil attack, i.e. no part of the system suffers a denial of service (DoS) during the attack. Greater availability of system resources increases the system's capacity to fight back and makes it stronger, while lower availability makes the system fail all the sooner.
3.2 Integrity of System:
Integrity defines the mutual trust between nodes: it is the guarantee that a delivered message contains exactly the information that was originally sent, which precludes the possibility of messages being altered in transit. The causes of an integrity violation may be accidental or malicious, but in practice it is impossible to distinguish one from the other at the level of system involvement. A high degree of integrity helps avoid Sybil attacks; a low degree makes the system more prone to them.
3.3 Authenticity of node:
Authenticity is the guarantee that the participants in a communication are genuine and not impersonators. To achieve authenticity, participants are required to prove their identities. Without such authentication, an attacker could impersonate a legitimate participant, obtaining access to confidential resources or disturbing normal network operation by propagating fake messages.
3.4 Confidentiality at hierarchical level:
Confidentiality is the guarantee that certain information is readable only by those who have been authorized to read it, which prevents information from being disclosed to unauthorized parties. If authentication is performed properly, confidentiality is relatively simple to implement; distributing it across several hierarchical levels makes it harder to break, which benefits system integration.
3.5 Self-Healing:
Self-healing refers to a node's ability to recover automatically from an erroneous state, in a finite amount of time, without human intervention. For instance, it should not be possible to permanently disable a network by injecting a small number of malicious packets at a given point in time. If a node is self-healing, an attacker must remain in the network and inflict continuous damage in order to prevent the node from recovering, behaviour that makes the attacker easier to locate.
4 - Model Discussion: Used for Analysis of the Problem
4.1 Basic Terminology used:
4.1.1 Game theory:
Game theory has been used in the basic formulation of the problem: each node starts from an equal position, and observing a node as time passes helps increase (or decrease) its score according to the behavioural data collected.
4.1.2 Prisoner Dilemma:
The prisoner's dilemma has been used in the study of dynamic networks, since it models the behaviour of two interacting nodes, which is useful at the implementation level.
4.1.3 Network formation:
The strategies we consider affect the connectivity of the nodes and determine which other nodes in the network they rely on to reach their design goal. This view also rests on the strong graph-theoretic foundation that, since the beginning of network studies, has been helpful in analysing network behaviour in various coordinated scenarios.
4.1.4 Social network theory:
To model reality it is a good idea to move away from the assumption that nodes have random needs, and instead to model communication needs that are more likely to be observed in real networks. This includes the study of node behaviour and its coordination, and group analysis, which uses data-mining approaches to characterize a node's behaviour from the observations collected up to that point in time.
4.1.5 Simulation:
Any mathematical design can in principle be implemented in a real scenario, but direct implementation leads to various bottleneck problems, costs and other barriers that can prevent the idea from being realized. Simulation lets the user design an idea and implement it at minimal cost, while observing the scenario with a high degree of reliability.
4.2 Model Proposed:
A simple model based on game theory is used to formulate the problem conceptually and to predict how nodes will behave when all strategies interact with each other to dictate the final outcome [4]. A game with N players, each having M possible strategies, requires an effort proportional to the M^N strategy combinations to solve by brute force; slightly more efficient algorithms exist for simple games, e.g. where not all players influence the utility of all others. For these reasons the relationships between nodes are formulated: how nodes interact, and what behaviour they display at different times or in different situations, which helps in suspecting and identifying node behaviour according to the previously defined categories.
The key parameters of our model are as follows:
Users:
A user is defined at the basic level of identity; each user has characteristics relevant to its existence and survival. Users are categorized according to their behaviour, which is analysed by the authority or the system.
Categories of user:
Honest user:
Honest users participate with a single identity, which benefits the system and builds its reputation. Their existence is valued, and a long lifespan benefits the system.
Malicious user:
Malicious users participate in the system with a preconceived strategy of influencing it for their own benefit. They mostly decrease the system's reputation and degrade it while participating, posing a threat to the system's behaviour; they try to overwhelm the system.
Defense system:
The defence system preserves the system's reputation against the predetermined drive of malicious users. It tries to make the system stronger and increase its reputation by identifying malicious users, limiting their participation, and encouraging the participation of honest users.
Figure 4.1 Initial Phase of Sybil Attack
Figure 4.2 Growing Phase in Sybil Attack
5 - Various Approaches for defending
Sybil Attack
Several works in the literature address defence against the Sybil attack; a few of them are discussed in this report. The defence approaches are categorized by their foundational concept, so that each group can be understood through its key policy or basis when learning about and approaching a solution. As the literature notes, only strong modelling leads to successful completion of the work: if there is a problem in the theoretical modelling, the idea cannot be implemented in practice.
5.1 Centralized Authority:
A central authority for the group generates a common identity that can be trusted and verified by each node involved in the group. This is the most common solution, mainly due to its potential to completely eliminate Sybil attacks: it guarantees that each node is assigned exactly one identity, as indicated by possession of a certificate [1]. In practice, however, the approach offers no method for ensuring such uniqueness, and the assignment is mostly performed by manual configuration. This manual procedure can be costly and creates a performance bottleneck in large-scale systems. Additionally, in order to be effective, the centralized authority must guarantee the existence of a mechanism to detect and revoke lost or stolen identities [5].
These requirements make a centralized authority very difficult to implement in decentralized networks, which by definition lack a centralized authority that could provide the certification service. The approach also violates the basic modelling condition that the system must be distributed, which reveals a lack of sound modelling. Since it cannot follow the basic modelling definition, it is deficient from the start and its performance cannot be considered trustworthy.
5.2 Resource testing:
This approach involves testing computing ability, storage ability and network bandwidth, as well as limiting IP addresses. Some proposals specifically test for IP addresses in different domains or autonomous systems [7]. Requiring heterogeneous IP addresses prevents some attacks but does not discourage others (such as zombie networks) and limits the usability of an application. In this approach the design goal is to discourage rather than prevent Sybil attacks, and the number of identities an attacker can hold is, in theory, limited [4]. For many applications this is insufficient, since an attacker can still obtain enough identities for a successful attack, even if doing so is expensive. In many peer-to-peer communication systems it is easy to obtain two identities, and this is one of the loopholes of this approach.
5.3 Recurring costs and fees:
This approach is designed to impose recurring fees. It may require certification of identities, but the certification is not trusted; rather, it is seen as a way of imposing identity-creation costs. The work on sufficiently secure peer-to-peer networks uses an economic, game-theoretic approach to examine when attacks on censorship-resistant networks are cost-effective [6]. It shows that charging a recurring fee for each participating identity is quantitatively more effective as a disincentive against successful Sybil attacks than charging one-time fees: for many applications, recurring fees impose on the Sybil attacker a cost that increases linearly with the total number of participating identities, whereas one-time fees impose only a constant cost. This approach has strong reasons to succeed because its design is stronger: in each attack the malicious node must pay, which also imposes a restriction on malicious behaviour and benefits the defence against it.
5.4 Domain Specific:
In this approach, countermeasures are designed that are application-domain specific [3]. For example, a detection mechanism for mobile ad hoc networks has been proposed, based on the location of each node: for an attacker with a single device, all Sybil identities will always appear to move together. However, the defence is not applicable beyond mobile networks.
5.5 Computational Resource Testing:
This approach relies on the assumption that nodes possess limited computational power. The technique was first used for combating junk email ("spam") and later as a defence against DoS attacks. A computational resource test (CRT) verifies whether nodes own an expected amount of computational power. The test, called a crypto-puzzle, consists of having identities solve a cryptographic problem, solvable only by brute-force calculation, within a certain amount of time [4]. In this way, a node with constrained computational power is limited in the number of crypto-puzzles it can solve in a given time period, which sets an upper bound on the number of Sybil identities it can present to the network. Its major drawback arises in scenarios where resources are shared or vary over time: many applications or nodes belong to networks where the CRT result varies from time to time, since there is no fixed or static computing environment.
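The crypto-puzzle can be illustrated with a minimal hash-based proof-of-work sketch. The SHA-256 construction, the 16-bit difficulty and the function names below are illustrative assumptions, not the scheme of any specific CRT proposal:

```python
import hashlib
import os

def solve_puzzle(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) falls below a
    target, i.e. starts with `difficulty_bits` zero bits.  Expected work is
    about 2**difficulty_bits hashes, so a node's hash rate bounds how many
    puzzles (and thus identities) it can afford per time window."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash -- cheap for the verifier."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = os.urandom(16)                   # fresh challenge per claimed identity
nonce = solve_puzzle(challenge, difficulty_bits=16)
print(verify_puzzle(challenge, nonce, 16))   # True
```

Because verification costs one hash while solving costs about 2^difficulty_bits hashes, the verifier can tune the difficulty so that presenting many identities within a time window is infeasible for a single node; a shared or time-varying computing environment undermines exactly this calibration, as noted above.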
5.6 Trusted devices:
In this approach one hardware device is bound to one node. While this can effectively prevent the Sybil attack, the main issue is that there is no efficient way, other than manual intervention, to prevent one node from obtaining multiple hardware devices [3]. It is mostly designed for defence-related applications. The cost of acquiring multiple devices may be high, and the bottleneck problem and the need for human intervention indirectly recreate the central-control scenario. This approach also suffers from various modelling problems.
5.7 Social Networks:
This approach is designed on the basis of the social behaviour of nodes, i.e. which of the nearest nodes show similar behaviour. It categorizes nodes by their social adaptability, much as Orkut tracks a person and suggests friends or other profiles the person may also like [4].
In the same way, nodes are categorized by their behaviour within their social network, i.e. the groups of which they are members. The group identification is then matched against the previous behaviour of most of its members, with constant feedback and updating of node behaviour. This approach is time-consuming, and its decisions are based on data mining of huge amounts of data, which increases the cost. But it has proved successful both theoretically and in applications, and is thus among the better approaches for fighting the Sybil attack.
6 - DSybil: Optimal Sybil-Resistance
Basics of the algorithm [8]: observe the behaviour of each node; on pro-system behaviour, increase its vote strength, otherwise decrease it, so that the vote strength tracks the behaviour that enhances the performance of the group in every behavioural study.
DSybil has been built in Java. A study based on it observed eBay for more than one year. Theoretically, its model follows most of the basics, i.e. characterization of nodes, a benefit for node age, and making each observation significant.
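The vote-strength rule described above can be sketched as a simple multiplicative update; the growth and decay factors, the cap and the starting weight here are illustrative assumptions, not DSybil's actual constants:

```python
def update_vote_strength(weight: float, helpful: bool,
                         grow: float = 2.0, shrink: float = 0.5,
                         max_weight: float = 1.0) -> float:
    """Multiplicatively raise a voter's strength after a helpful vote and cut
    it after an unhelpful one, capped at max_weight.  Honest long-lived voters
    accumulate strength; freshly created Sybil identities start weak."""
    return min(max_weight, weight * (grow if helpful else shrink))

w = 0.01                       # every new identity starts with a small weight
for vote_helpful in [True, True, True, False, True]:
    w = update_vote_strength(w, vote_helpful)
print(round(w, 4))             # prints 0.08
```

Under such a rule an honest identity's influence compounds with its lifespan, while a freshly minted Sybil identity must first build a track record, which is why the lifespan of honest users matters more than their population.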
6.2 Achievement of Dsybil:
DSybil uses the modelling concepts of the previously discussed algorithms, combining strong modelling with a behavioural design built on a large base of study: data collection, data mining and the design of the base architecture. For the first time, a solution shows record-breaking defending capability; its performance measures are very good in comparison with previous solutions.
It can defend against an unlimited number of Sybil identities over time. It has been shown that the lifespan of honest users is far more important than their population. It provides a growing defence: if the user is under the protection of DSybil, the loss is much less than the worst-case loss, and the loss under the worst-case attack is logarithmic.
7-Conclusion
This report discussed solutions to the Sybil attack: how the attack is modelled mathematically, the various factors that affect it, and the various types of solution approaches. In light of these points we considered the latest solution, DSybil, whose performance is far better than that of previous solutions because of its strong modelling and implementation design. We can say that a solution approach that does the modelling rigorously, studies the behavioural factors that affect the solution, and rests on a strong algorithm works best in every scenario for facing and challenging the Sybil attack across the various applications of P2P systems.
References
[1] L. A. Martucci, M. Kohlweiss, C. Andersson, and A. Panchenko (2008). "Self-Certified Sybil-Free Pseudonyms." In WiSec '08: Proceedings of the First ACM Conference on Wireless Network Security, New York, NY, USA, pp. 154-159. ACM.
[2] A. Mishra (2008). Security and Quality of Service in Ad Hoc Wireless Networks. Cambridge University Press.
[3] C. Piro, C. Shields, and B. Levine (2006). "Detecting the Sybil Attack in Mobile Ad Hoc Networks." Securecomm and Workshops, pp. 1-11.
[4] H. Yu, M. Kaminsky, and A. Flaxman (2008). "SybilGuard: Defending Against Sybil Attacks via Social Networks." IEEE/ACM Transactions on Networking, 16(3), pp. 576-589.
[5] J. Douceur (2002). "The Sybil Attack." First International Workshop on Peer-to-Peer Systems, pp. 251-260. Springer-Verlag.
[6] N. B. Margolin and B. N. Levine (2008). "Quantifying Resistance to the Sybil Attack." In Proc. Financial Cryptography (FC).
[7] H. Yu, M. Kaminsky, and A. Flaxman (2009). "Sybil-Resilient Online Content Voting." 6th USENIX Symposium on Networked Systems Design and Implementation, pp. 15-28.
[8] H. Yu, C. Shi, M. Kaminsky, P. B. Gibbons, and F. Xiao (2009). "DSybil: Optimal Sybil-Resistance for Recommendation Systems." 30th IEEE Symposium on Security and Privacy, pp. 283-298.
A Comparative Study of Network and Traffic Simulators under Varying VANET Conditions
POOJA RANI
Department of Information Technology
ITM, Sector-23A, Gurgaon, India
me_pooja@msn.com
ABSTRACT - A VANET is an ad-hoc network formed between vehicles according to their communication needs. In a vehicular ad-hoc network, metrics such as average jitter rate, packet delivery ratio and throughput have been estimated under varying traffic scenarios. To compute the viability and performance of the network, traffic simulation techniques are used and put into practice. Here we compare different traffic patterns under varying VANET conditions.
Keywords
Ad-hoc network, Simulation, VANET, Vehicle-to-Vehicle
Communication, Routing protocols, WAVE.
I - INTRODUCTION
A VANET is an ad hoc network formed between moving vehicles according to their communication needs. To form a VANET, every participating vehicle must be capable of transmitting and receiving wireless signals up to a range of three hundred metres. In various implementations the range of a VANET is restricted to one thousand metres: according to the research conducted in this field, the performance of a VANET remains optimal within one thousand metres, and beyond that it is not feasible for vehicles to communicate because the packet loss rate would be very high. Finding an optimal path is a difficult task for dynamic protocols, since the management of vehicle movement is quite complex, so route entries at the route-finding node must be updated accordingly. VANETs are not restricted to vehicle-to-vehicle communication; they can also take advantage of roadside infrastructure that participates in the communication between vehicles, but in this paper the main focus is on vehicle-to-vehicle communication, as shown in Figure 1.
VANETs face various challenges: the high speed of vehicles, dynamic communication routes, buildings in the communication path, reflecting objects, roadside objects and other obstacles in the path of radio communication, vehicles moving in different directions, concerns about privacy, authorization of vehicles, security of data, lack of simulators, restrictions on the use of 3G, the high cost of cellular networks, high set-up costs, and the sharing of multimedia services.
NITIN SHARMA
Department of Computer Science and Engineering
MNIT, Allahabad, India
nsharma283@gmail.com
Figure 1: Vehicle to Vehicle Communication
Ad hoc Routing Protocols
An ad hoc routing protocol is a standard governing how nodes decide how and when routing packets traverse a network. A network is a combination of different nodes that intend to join it and are not aware of the network topology. A node discovers the topology by announcing its presence and listening to broadcasts from other nodes (neighbours) in the network.
The process of route discovery is performed differently depending on the routing protocol implemented in a network. Several routing protocols have been designed for wireless ad hoc networks; they are classified as either table-driven protocols or source-initiated on-demand protocols.
1. AODV
Ad-hoc On-Demand Distance Vector Routing
Reactive
Next-hop routing
Use of sequence numbers to identify the most
recent path
Uses paths defined by intermediate nodes
Route requests have a Time to Live (TTL)
2. DSDV
Destination Sequenced Distance Vector
Proactive
Using sequence number to guarantee loop-free
paths
Relies on periodic exchange of routing
information
Inefficient due to periodic update transmissions even when there are no changes in topology
Overhead grows as O(n^2)
3. DSR
Dynamic Source Routing
Potentially larger overhead
Intended for moderate-speed mobile nodes
No overhead when the network topology does not change
Supports asymmetric links
Allows nodes to keep multiple routes to one destination in their cache
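AODV's sequence-number rule from the list above can be sketched as follows; the tuple layout and the function name are illustrative, not ns-2's actual data structures:

```python
def fresher_route(current, candidate):
    """Prefer the route with the higher destination sequence number (more
    recent information); break ties by the smaller hop count -- the freshness
    rule AODV applies when a route reply arrives.  Routes are modelled here
    as (dest_seq, hop_count, next_hop) tuples."""
    if candidate[0] != current[0]:
        return candidate if candidate[0] > current[0] else current
    return candidate if candidate[1] < current[1] else current

cached = (10, 4, "n7")     # seq 10, 4 hops via node n7
reply = (12, 6, "n3")      # newer information wins despite more hops
print(fresher_route(cached, reply))   # prints (12, 6, 'n3')
```

A route reply carrying a higher destination sequence number replaces the cached route even if it is longer, which is how AODV avoids stale paths and routing loops.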
II - WIRELESS TECHNOLOGY
Wireless technologies are broadly divided into the following two main groups on the basis of their application domain and features:
First, the moderate-bandwidth group, such as GSM, GPRS or UMTS. It covers a large area, is regulated by governing authorities, has restricted bandwidth usage, and is mostly run by service providers.
Second, the higher-bandwidth group of local-area technologies, the Wireless Local Area Network (WLAN). It covers a small area, forms an autonomous system, provides bandwidth as needed by the user, and is restricted to authorized users.
This paper focuses on the second group, the WLAN. Two different standards exist for WLANs: HIPERLAN from the European Telecommunications Standards Institute (ETSI) and 802.11 from the Institute of Electrical and Electronics Engineers (IEEE). Nowadays the 802.11 standard completely dominates the market, and the implementing hardware is well engineered.
The IEEE 802.11 Family
IEEE 802.11 WLAN protocols are part of the 802 family
that standardizes Local Area Networks (LAN) and
metropolitan area networks (MAN). The 802.11 LAN is
based on a cellular architecture where the system is
subdivided into cells and each cell (called Basic Service
Set or BSS, in the 802.11 nomenclature) is controlled by a
Base Station called Access Point as shown in Figure 2.
Although a wireless LAN may be formed by a single cell with a single Access Point, most installations are formed by several cells, where the Access Points are connected through some kind of backbone called the Distribution System (DS). The backbone is typically Ethernet and in some cases wireless itself. The whole interconnected wireless LAN, including the different cells, their respective Access Points and the Distribution System, is seen by the upper layers of the OSI model as a single 802 network and is called in the standard an Extended Service Set (ESS).
Wireless Access in Vehicular Environment
The communication standard used for vehicle-to-vehicle communication is Wireless Access in Vehicular Environments (WAVE), which supports Intelligent Transport Systems (ITS). A comparison of WAVE with other wireless systems is given in Table 1. There are two WAVE configurations: the first, Vehicle to Infrastructure (V2I), and the second, Vehicle to Vehicle (V2V). In V2I, the infrastructure is stationary while in operation and is usually permanently mounted along the roadside.
Figure 2: Typical 802.11 WLAN
Table 1: Comparison of WAVE and Other Wireless Systems

            WAVE       Cellular   Mobile WiMAX
Data Rate   1-27 Mbps  2 Mbps     1-32 Mbps
Latency     50 ms      Seconds    Seconds
Range       1 km       10 km      15 km
Mobility    60 mph     60 mph     10 mph
Bandwidth   10 MHz     3 MHz      2.5 MHz
III - PERFORMANCE METRICS
The following metrics are used:
Average Jitter Rate:
It is the delay between two consecutive packet deliveries at a node. The quality of service (QoS) of the network is measured by the average jitter rate.
Packet Delivery Ratio:
The ratio of the total packets received at a node to the packets sent by the source node. It is associated with the QoS and bandwidth utilization in the network.
Throughput:
Throughput is the total amount of data delivered to all the nodes in the system during a period; over a time interval, the throughput reflects the bandwidth utilization. For each of the above parameters, various traffic scenarios are simulated by changing the number of vehicles, the distance between vehicles and the vehicle speed. The effects of these changes on the routing protocols, including the throughput of the protocols in the different scenarios, are analysed; the observations are as follows:
IV. SIMULATION WORK
The protocol stack of VANET combines two standards: Dedicated Short Range Communication (DSRC) and Wireless Access in Vehicular Environments (WAVE). DSRC contains three layers: the Physical Layer, the Media Access Control (MAC) Layer, and the Logical Link Control (LLC) Layer. These layers make communication possible in a wireless environment. WAVE lies on top of the DSRC layers; it is also known as IEEE P1609 and is used as a standard for communication. ns-2 is used to evaluate the performance of routing protocols in VANET. It is a network simulator and is widely used for VANET-related simulation work [8]. ns-2 provides a standard for IEEE 802.11p simulation in the form of Tcl scripts, and various communication patterns, traffic scenarios and resources are also available with it. The simulation grid is 500 m x 500 m, in which the initial position of every vehicle is specified at the start of the simulation by calling a C++ object, and the destination of every vehicle is set after a certain time interval. The vehicles approach their destinations with variable speed. In the simulation, three input parameters are provided: the speed of the vehicles varies between 10 MPH and 100 MPH, the number of vehicles varies between 20 and 100, and the distance between vehicles varies between 20 m and 100 m. For traffic, Constant Bit Rate (CBR) traffic with a fixed packet size is used. The vehicle antenna is omnidirectional and its transmission range is 150 m. The channel data rate is 2 Mbps. The IEEE 802.11p standard, available in ns-2 as wireless support [9], is used for vehicle-to-vehicle communication; it follows most of the WAVE standard. The Nakagami model is used for radio propagation, which is the best-suited model for the WAVE environment [10]. While implementing a routing protocol in ns-2, the behaviour of the queue and its adaptability criteria are carefully considered.

In the simulation work, the available standards and data units are followed. ns-2 produces its output in the form of a trace file, which is further processed by shell scripting to calculate the desired parameters. Shell scripting and awk are widely used to process trace files.
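Trace post-processing of the kind done here with shell scripts and awk can equally be sketched in Python. The field layout assumed below matches the classic ("old") ns-2 wireless trace format; other trace formats need different field indices:

```python
# Assumed field layout per line (old ns-2 wireless trace):
# event time node layer flags packet-id type size ...
def pdr_from_trace(lines, ptype="cbr", layer="AGT"):
    """Packet Delivery Ratio from application-layer send/receive events."""
    sent = recv = 0
    for line in lines:
        f = line.split()
        if len(f) < 8 or f[3] != layer or f[6] != ptype:
            continue  # keep only application-layer packets of the given type
        if f[0] == "s":
            sent += 1
        elif f[0] == "r":
            recv += 1
    return recv / sent if sent else 0.0

trace = [
    "s 1.000000000 _0_ AGT --- 0 cbr 512 [0 0 0 0] ------- [0:0 1:0 32 0]",
    "r 1.014500000 _1_ AGT --- 0 cbr 512 [0 0 0 0] ------- [0:0 1:0 32 0]",
    "s 1.100000000 _0_ AGT --- 1 cbr 512 [0 0 0 0] ------- [0:0 1:0 32 0]",
]
pdr = pdr_from_trace(trace)
print(pdr)   # 0.5
```

The same loop structure extends to throughput and jitter by accumulating packet sizes and receive timestamps instead of counts.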
Fig 1: Throughput with various vehicle distances
As shown in Figure 1, DSR outperforms the other two routing protocols (AODV and DSDV). The packet drop rate of DSR is much lower than that of the other two routing protocols, which helps reduce the load on the network, aided by DSR's choice of multiple paths for routing. The performance of AODV is much better than that of DSDV, as it uses a smart technique of selective updating for its routing information. DSDV, in contrast, uses both types of updating, full and incremental packets, which increases congestion in the VANET and degrades DSDV's performance relative to the other two routing protocols.
Fig 2: Throughput as the number of vehicles varies
Figure 2 shows that the throughput of AODV is better than that of the other two routing protocols. This is due to the availability of routing paths, which causes low delay in transmission. Since throughput is the ratio of the total amount of data a receiver receives from the sender to the time it takes the receiver to get the last packet, low delay in the network translates into higher throughput. DSR initially performs better than DSDV, but with a high number of vehicles its throughput goes down: finding routes for a larger number of vehicles becomes complicated, which degrades the performance of DSR.
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
Figure 3 shows that the behaviour of the routing protocols is not as straightforward as previously discussed. At low speeds, DSR gives better throughput than the other two, but after reaching higher speeds its throughput levels off. In the initial range, DSR takes advantage of having less routing overhead and no repetition of routes.
Fig 3: Throughput with speed of vehicle varies
In the simulation work, we compared the performance of AODV, DSDV and DSR on nine criteria (three metrics under three traffic variables), as shown in Table 2. It is observed that no protocol performs well in all conditions: one protocol performs well in one domain and the others in different domains. AODV performs better than DSDV on most of the parameters because of the time limit on its table-entry usage, which benefits Throughput and Packet Delivery Ratio, and sometimes also routing overhead.
TABLE 2
PARAMETER COMPARISON OF ROUTING PROTOCOLS

Parameter          Variable             AODV  DSDV  DSR
Jitter Rate        Dist. b/w Vehicles   M     H     L
Jitter Rate        No. of Vehicles      M     H     L
Jitter Rate        Speed of Vehicles    M     H     L
Packet Del. Ratio  Dist. b/w Vehicles   M     L     H
Packet Del. Ratio  No. of Vehicles      H     L     M
Packet Del. Ratio  Speed of Vehicles    H     L     M
Throughput         Dist. b/w Vehicles   M     L     H
Throughput         No. of Vehicles      H     L     M
Throughput         Speed of Vehicles    H     L     M

Abbreviations: L - Low Value, M - Medium Value, H - High Value
DSDV's performance is the worst among the three protocols: its dependence on periodic table updates generates unnecessary overhead, which decreases the Packet Delivery Ratio, increases the number of hop counts and decreases the Throughput.
V. CONCLUSION
On the basis of the simulation results presented in Table 2 for the different traffic scenarios and the three protocols, the performance of DSR has been found to be better than that of AODV and DSDV. The performance results are summarized for the Highway, Urban and Freeway traffic scenarios in Table 3. From the parameter values characterizing the three traffic scenarios, DSR is found suitable for Highway and Freeway traffic, whereas AODV is suitable for the Urban traffic scenario.
TABLE 3
DIFFERENT TRAFFIC SCENARIOS

Traffic Scenario         Freeway  Highway  Urban
Distance b/w Vehicles    Small    Large    Small
Density of Vehicles      Low      Low      High
High Speed of Vehicles   No       Yes      No
Suggested Protocol       DSR      DSR      AODV
Thus it can be concluded that no single protocol gives the best performance in all traffic scenarios. Since traffic scenarios change throughout the day, some form of hybrid adaptive protocol would give better performance.
REFERENCES
[1] Yi Wang, Akram Ahmad, Bhaskar Krishnamachari and Konstantinos Psounis, "IEEE 802.11p Performance Evaluation and Protocol Enhancement", IEEE International Conference on Vehicular Electronics and Safety, 2008.
[2] Bo Xu, Aris Ouksel and Ouri Wolfson, "Opportunistic Resource Exchange in Inter-Vehicle Ad-hoc Networks", International Conference on Mobile Data Management, 2004.
[3] Lin Yang, Jinhua Guo and Ying Wu, "Piggyback Cooperative Repetition for Reliable Broadcasting of Safety Messages in VANETs", IEEE International Conference on Consumer Communications and Networking, 2009, pp. 1-5.
[4] Katrin Bilstrup and Urban Bilstrup, "On the Ability of the 802.11p MAC Method and STDMA to Support Real-Time Vehicle-to-Vehicle Communication", EURASIP Journal on Wireless Communications and Networking, 2009.
[5] Rainer Baumann, Simon Heimlicher and Martin May, "Towards Realistic Mobility Models for Vehicular Ad-hoc Networks", IEEE International Conference on Mobile Networking for Vehicular Environments, 2007.
[6] Valery Naumov, Rainer Baumann and Thomas Gross, "An Evaluation of Inter-Vehicle Ad-hoc Networking Based on Realistic Vehicular Traces", ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2006.
[7] D. Rajini Girinath and S. Selvan, "Data Dissemination to Regulate Vehicular Traffic using HVRP in an Urban Mobility Model", International Journal of Recent Trends in Engineering, 2009.
[8] Victor Cabrera, Francisco J. Ros and Pedro M. Ruiz, "Simulation-based Study of Common Issues in VANET Routing Protocols", IEEE Vehicular Technology Conference, 2009.
[9] Djamel Djenouri, Wassim Soualhi and Elmalik Nekka, "VANET's Mobility and Overtaking: An Overview", International Conference, 2008, pp. 1-6.
Error Performance of Digital Modulation Techniques over Rayleigh Multipath Channel - A Unified Statistical Framework

Vicky Singh(1), Amit Sehgal (MIEEE, MICEIT)(2)
(1, 2) ECE Department, G.L. Bajaj Institute of Technology and Management, Gr. Noida
(1) vicky.glbitm@gmail.com, (2) amitsehgal09@rediffmail.com
Abstract - The authors have written this paper motivated by two facts. First, a detailed approach to obtaining the Bit Error Rate (BER) expressions of modulation techniques over the Additive White Gaussian Noise (AWGN) channel is available in several published materials, but little or no material is available on error performance over multipath fading channels from the perspective of undergraduate students. The second important fact is that a clear understanding of signal propagation over fading channels is a must for practical wireless channel modelling. Motivated by these two facts, statistical models for the estimation and calculation of the BER of modulation techniques over the Rayleigh multipath channel have been derived. Two different techniques have been used to derive the statistical models, and each technique is unique in its own pedagogy. A sincere effort has been made by the authors to explain fading channel modelling from the perspective of undergraduate students. The techniques used in this paper for deriving the statistical models can also be extended to evaluate the performance of modulation techniques over other multipath fading channels.
Keywords Digital Modulation techniques, BER, Multipath
Fading, Rayleigh Distribution, MGF
I. INTRODUCTION
There are three main mechanisms that affect signal propagation in a wireless radio link. These are reflection of the signal from a smooth surface, diffraction around the corners of an obstacle, and scattering due to an object of the order of the wavelength of the transmitted signal. Due to the presence of these three mechanisms in a wireless radio link, several replicas of the transmitted signal are developed which arrive at the receiver via multiple paths. The multipath signals combine vectorially at the receiver, producing a resultant signal oscillating in amplitude, phase and angle of arrival. Variations of this resultant signal depend on the phases of all the incoming signal components. This effect is known as multipath fading, and it is one of the main factors that limit the performance of a wireless radio link. It becomes clear that modelling the error performance of a practical wireless channel is a complex task, because it depends on the physical properties of the radio channel, making it difficult to generalize the results of the error performance analysis. Knowledge of the statistical properties of the received signal envelope is required in order to develop a digital communication system and for planning mobile radio networks.

Now, BER is one of the most important and critical parameters used to analyse the performance of digital modulation techniques, and it is also the one which is most revealing about the nature of the system. In this paper, we provide statistical models for the BER estimation and calculation of digital modulation techniques in the Rayleigh multipath channel. The modulation techniques considered are coherent GMSK, BPSK, BFSK and DPSK. We provide the complete description for GMSK only; the rest of the results are taken from our earlier published papers. First we present the system model and then the two different approaches used to develop the models for the estimation and calculation of BER: one is the Error Function based approach and the second is the MGF based approach. Finally, the models obtained have been simulated and the BER performance of the above mentioned techniques is compared in multipath Rayleigh fading.

II. SYSTEM MODEL
In a multipath channel, the received signal consists of a large number of plane waves, whose complex low pass signal can be modeled as a circularly symmetric Gaussian random variable and can be given as [page no. 524, 6]

    α = α_re + j·α_im    (1)

If none of the multipath components is dominant, α_re and α_im are independent and identically distributed Gaussian random variables of zero mean and variance σ². The magnitude α is a Rayleigh random variable which has a probability density given as [1]

    p(α) = (α/σ²)·exp(−α²/(2σ²))    (2)

This represents the Rayleigh fading channel model, which describes signal propagation in a Non Line of Sight (NLOS) channel suffering from multipath fading. When Rayleigh
fading is present, the received carrier amplitude is attenuated by the fading amplitude α, which is a random variable with mean square value E[α²] and a probability density function (PDF) dependent on the nature of the fading channel.
The received signal in a Rayleigh fading channel is given as

    y = α·x + n    (3)

where y is the received signal, α is the Rayleigh multipath random variable, x is the transmitted symbol and n is the Additive White Gaussian Noise (AWGN).
The following assumptions have been taken in order to derive the results:
(1) The channel is flat fading.
(2) The channel is randomly varying in time.
(3) The Rayleigh multipath factor α is known at the receiver.
(4) Equalization is performed at the receiver by dividing the received symbol by the a priori known α:

    ŷ = y/α = (α·x + n)/α = x + n/α    (4)

where n/α is the AWGN scaled by the Rayleigh multipath random variable.
(5) The noise n follows the Gaussian Probability Density Function (PDF).
Now the received instantaneous signal power is attenuated by α², and therefore the instantaneous Signal to Noise Ratio (SNR) is given as [page no. 527, 6]

    γ = α²·E_b/N_0    (5)

The average SNR is given as

    γ̄ = E[α²]·E_b/N_0    (6)

Due to such a condition imposed by fading, the Bit Error Rate (BER) of any modulation scheme is obtained by replacing E_b/N_0 by γ in the expression for AWGN performance. This is known as the conditional BEP, and it is denoted by P_b(E; γ).

III. ERROR FUNCTION APPROACH
The BER of GMSK over the AWGN channel is given as [page no. 343, 13]

    P_b(E) = (1/2)·erfc(√(0.68·E_b/N_0))    (7)

However, in the presence of the Rayleigh fading amplitude α, the effective bit energy to noise ratio is α²·E_b/N_0. So the conditional BEP for a given value of α is

    P_b(E; γ) = (1/2)·erfc(√(0.68·α²·E_b/N_0)) = (1/2)·erfc(√(0.68·γ))    (8)

where γ = α²·E_b/N_0.
To find the error probability over all random values of α², we have to average the conditional BEP P_b(E; γ) over the PDF of γ. For Rayleigh fading, α is a Rayleigh distributed random variable, therefore α² is chi-square distributed with two degrees of freedom. Since α² is chi-square distributed, γ is also chi-square distributed. The PDF of γ is given as [page no. 528, 6]

    P(γ) = (1/γ̄)·e^(−γ/γ̄)    (9)

Now, to find the BEP over the Rayleigh fading channel, we evaluate the conditional BEP over the PDF of γ [page no. 252, 3]

    P_b(E) = ∫₀^∞ P_b(E; γ)·P_γ(γ) dγ = ∫₀^∞ (1/2)·erfc(√(0.68·γ))·(1/γ̄)·e^(−γ/γ̄) dγ    (10)

Performing successive integration by parts, the following expression is obtained

    P_b(E) = [ −(1/2)·erfc(√(0.68·γ))·e^(−γ/γ̄) − (1/2)·√(0.68·γ̄/(0.68·γ̄ + 1))·erf(√((0.68·γ̄ + 1)·γ/γ̄)) ]  evaluated from γ = 0 to γ → ∞    (11)

Evaluating eq. (11), the desired result obtained can be given as

    P_GMSK(E) = (1/2)·[ 1 − √(0.68·γ̄/(0.68·γ̄ + 1)) ]    (12)

IV. MGF APPROACH
The approach discussed in the last section is based on first expressing the BER over AWGN in terms of the Error Function and then averaging over the Rayleigh fading. The MGF approach differs from it: first we express the BER in terms of the Gaussian Q function, and then the rest of the mathematics is done.
There is a simple relationship between the Gaussian Q function and the Error Function, given as

    Q(x) = (1/2)·erfc(x/√2)    (13)
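As a numerical sanity check on the Error Function approach, one can simulate the Rayleigh envelope of eq. (1), form the instantaneous SNR of eq. (5), and compare the Monte Carlo average of the conditional BEP of eq. (8) against the closed form of eq. (12). The sketch below is an independent check, not the authors' MATHEMATICA code:

```python
import math, random

random.seed(7)

def conditional_ber(gamma):                  # eq. (8)
    return 0.5 * math.erfc(math.sqrt(0.68 * gamma))

def closed_form(avg_snr):                    # eq. (12)
    return 0.5 * (1 - math.sqrt(0.68 * avg_snr / (0.68 * avg_snr + 1)))

ebn0 = 10.0                                  # E_b/N_0
sigma = math.sqrt(0.5)                       # so that E[alpha^2] = 2*sigma^2 = 1
N = 200_000

ber_sum = 0.0
alpha_sq_sum = 0.0
for _ in range(N):
    # eq. (1): alpha = |alpha_re + j*alpha_im|, components ~ N(0, sigma^2)
    alpha = math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
    alpha_sq_sum += alpha * alpha
    ber_sum += conditional_ber(alpha * alpha * ebn0)   # gamma = alpha^2 * Eb/N0

avg_snr = (alpha_sq_sum / N) * ebn0          # eq. (6): average SNR
err = abs(ber_sum / N - closed_form(avg_snr))
print(err < 2e-3)                            # True: simulation matches eq. (12)
```

Squaring two independent zero-mean Gaussians and summing them also confirms, along the way, that γ follows the exponential PDF of eq. (9).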

Using eq. (13), the BER can be expressed either in terms of the Error Function or in terms of the Gaussian Q function. Therefore the BER of GMSK over AWGN in terms of the Q function can be expressed as

    P(E) = Q(√(1.36·E_b/N_0))    (14)

Now the conditional BER is given by

    P(E; γ) = Q(√(1.36·γ))    (15)

As far as the performance of coherent digital communication is concerned, the generic form of the expression for the BER involves the Gaussian Q function with an argument proportional to the square root of the instantaneous SNR of the received signal. Since in this paper we have considered communication over a slowly varying fading channel, the instantaneous SNR per bit γ is a time-varying random variable with a PDF P_γ(γ).
To compute the average BER, we must evaluate an integral consisting of the above mentioned Gaussian Q function and the fading PDF, that is [page no. 124, 3]

    I = ∫₀^∞ Q(a·√γ)·P_γ(γ) dγ    (16)

where a is a constant which depends on the specific modulation or detection scheme.
Now using the classical (Craig) definition of the Gaussian Q function in eq. (16), we have

    I = ∫₀^∞ (1/π) ∫₀^(π/2) exp(−a²·γ/(2·sin²θ)) dθ P(γ) dγ
      = (1/π) ∫₀^(π/2) [ ∫₀^∞ exp(−a²·γ/(2·sin²θ))·P(γ) dγ ] dθ    (17)

Now the inner integral is in the form of a Laplace Transform with respect to the variable γ. Since the Moment Generating Function (MGF) of γ is

    M_γ(s) = ∫₀^∞ e^(s·γ)·P(γ) dγ    (18)

from eq. (17) and eq. (18) we have [page no. 124, 3]

    I = (1/π) ∫₀^(π/2) M_γ(−a²/(2·sin²θ)) dθ    (19)

The Laplace Transform of the Rayleigh fading PDF is given as

    M_γ(−s) = 1/(1 + s·γ̄),  s > 0    (20)

Now putting eq. (20) into eq. (19), we have [page no. 125, 3]

    I = (1/π) ∫₀^(π/2) (1 + a²·γ̄/(2·sin²θ))^(−1) dθ    (21)

The integral in eq. (21) reduces to the following form

    I = (1/2)·[ 1 − √( (a²·γ̄/2) / (1 + a²·γ̄/2) ) ]    (22)

Now comparing eq. (10) and eq. (16) and making use of eq. (22) with a² = 1.36, the final expression for the BER is given below; it is the same as the result obtained by the approach discussed in the last section:

    P_GMSK(E) = (1/2)·[ 1 − √(0.68·γ̄/(0.68·γ̄ + 1)) ]    (23)

Using the two approaches that we have just discussed, the BER of any other modulation scheme can be obtained for Rayleigh fading or any other fading channel of interest.
The BER of BPSK over Rayleigh fading is given as [5]

    P_BPSK(E) = (1/2)·[ 1 − √(γ̄/(γ̄ + 1)) ]    (24)

The BER of coherent BFSK over Rayleigh fading is given as [1]

    P_BFSK(E) = (1/2)·[ 1 − √(γ̄/(γ̄ + 2)) ]    (25)

The BER of DPSK over Rayleigh fading is [page no. 343, 13]

    P_DPSK(E) = 1/(2·(1 + γ̄))    (26)

V. RESULTS AND DISCUSSION

Fig. 1: GMSK in AWGN and Rayleigh fading
Fig. 2: BPSK in AWGN and Rayleigh fading
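The reduction from the finite Craig-form integral of eq. (21) to the closed form of eq. (22) can be verified numerically. A sketch using simple trapezoidal integration (an independent check, not part of the paper's simulations):

```python
import math

def craig_integral(a, avg_snr, steps=100_000):   # eq. (21), trapezoidal rule
    h = (math.pi / 2) / steps
    total = 0.0                                   # the integrand is 0 at theta = 0
    for i in range(1, steps + 1):
        w = 0.5 if i == steps else 1.0            # half weight at the endpoint
        s2 = math.sin(i * h) ** 2
        total += w / (1 + a * a * avg_snr / (2 * s2))
    return total * h / math.pi

def closed_form(a, avg_snr):                      # eq. (22)
    c = a * a * avg_snr / 2
    return 0.5 * (1 - math.sqrt(c / (1 + c)))

a = math.sqrt(1.36)                               # so a^2/2 = 0.68, as for GMSK
avg_snr = 10.0
err = abs(craig_integral(a, avg_snr) - closed_form(a, avg_snr))
print(err < 1e-6)                                 # True
```

Because the integrand is smooth and bounded on [0, π/2], the trapezoidal rule converges quickly, which is one practical advantage of the MGF formulation over the semi-infinite integral of eq. (10).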
Fig. 3: BFSK in AWGN and Rayleigh fading
Fig. 4: DPSK in AWGN and Rayleigh fading
Fig. 5: Comparison in Rayleigh fading
The results obtained in sections III and IV have been simulated using MATHEMATICA.
The BER performance of the individual techniques is compared in AWGN and Rayleigh fading in Figs. (1)-(4). It is observed that the BER increases drastically due to Rayleigh fading for all four techniques. This means more power has to be spent at the transmitter in a fading channel in order to get a reliable wireless link.
Fig. (5) shows the BER performance comparison of all four techniques in the Rayleigh fading channel. It is observed that BPSK provides the best performance, BFSK and DPSK the worst, while the performance of GMSK is moderate.
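The ordering seen in Fig. (5) follows directly from the closed-form expressions. A sketch evaluating the standard Rayleigh-fading BERs for the four schemes at a few average SNRs (the BFSK form assumed here is the standard coherent one):

```python
import math

def ber_gmsk(g):                     # eq. (23)
    return 0.5 * (1 - math.sqrt(0.68 * g / (0.68 * g + 1)))

def ber_bpsk(g):                     # eq. (24)
    return 0.5 * (1 - math.sqrt(g / (g + 1)))

def ber_bfsk(g):                     # eq. (25), coherent BFSK assumed
    return 0.5 * (1 - math.sqrt(g / (g + 2)))

def ber_dpsk(g):                     # eq. (26)
    return 1 / (2 * (1 + g))

for snr_db in (0, 10, 20, 30):
    g = 10 ** (snr_db / 10)          # average SNR, linear scale
    # BPSK is always best, and GMSK sits between BPSK and BFSK,
    # matching the ordering observed in Fig. (5).
    print(snr_db, "dB:",
          round(ber_bpsk(g), 6), round(ber_gmsk(g), 6),
          round(ber_bfsk(g), 6), round(ber_dpsk(g), 6))
```

Since x/(x+1) is increasing in x, scaling the SNR by 0.68 (GMSK) or shifting the denominator by 2 (BFSK) can only raise the BER relative to BPSK, so the ordering holds at every SNR.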
CONCLUSION
In this paper, we have explored the mathematics behind the expressions for the BER of coherent modulation techniques over Rayleigh fading from the perspective of undergraduate students, and the results obtained have been simulated. The simulated results are in good accordance with earlier established theoretical results.
We hope that by the time readers reach the end of this paper, they will appreciate the power behind the discussed approaches and will further involve themselves in investigating new and existing applications.
ACKNOWLEDGEMENT
The authors offer their sincere regards and thanks to Dr.
Rajeev Agrawal from GLBITM, Gr. Noida for his constant
encouragement and support.
REFERENCES
[1] Vicky Singh, Amit Sehgal, Rajeev Agrawal Error Modeling of BFSK
over Rayleigh Fading channel-A statistical frame work , in
proceedings of SPRTOS , HBTI, Kanpur, 2011.
[2] John.G.Proakis, Digitalcommunication, 5th international edition ,
Singapore, TMH, 2011.
[3] M.K.Simon and Alouini Digital communications over fading
channels , 2nd edition, 2005.
[4] Bernard Sklar, Digital Communication , 2nd edition, PEARSON
EDUCATION, 2011.
[5] Vicky Singh, Amit Sehgal, BER modelling of BPSK under Rayleigh
fading channel A Statistical pedagogic approach , in proceedings of
AECTE, IIMTcollege, Gr. Noida, 2011.
[6] Fuqin Xiong, Digital Modulation Techniques , 1st edition, Artech
House, 2000.
[7] Marko Milojevic, Martin Haardt, Ernst Eberlein, Albert Heuberger,
Channel Modeling for Multiple Satellite Broadcasting Systems ,
IEEE TRANSACTIONS ON BROADCASTING, VOL. 55, NO. 4,
DECEMBER 2009.
[8] Lee, Messerschmitt, Digital communication , 3rd edition, Springer,
2004.
[9] Peter Stravoulakis, Interference analysis and reduction
for wireless systems , Artech House, London, 2003.
[10] A. D. Polyanin and A. V. Manzhirov, Handbook of Integral
Equations , Second Edition, Updated, Revised and Extended,
Chapman & Hall/CRC Press, 1144 pages, 2008.
[11] Harish Parthsarthy, Engineering Mathematics , Ane books, 2nd
edition, 2011.
[12] Kamilo Feher, Wireless Digital Communications , 6th Indian edition,
PHI, 2002.
[13] Rappaport, Wireless communication principles and practice , 2nd
Edition, PHI Learning, 2002.
Noise Reduction of Audio Signal using Wavelet Transform with Modified Universal Threshold

Rajeev Aggarwal(1), Vijay Kumar Gupta(2), Jay Karan Singh(3)
(1) Research Scholar, (3) Professor, (1, 3) Department of Electronics and Communication (SSSIST), Sehore, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.), INDIA
(2) Assistant Professor, Department of Electronics and Communication (IMS), Ghaziabad, Uttar Pradesh Technical University, Lucknow (U.P.), INDIA
(1) rajiv200624@gmail.com
Abstract - In this paper, a Discrete Wavelet Transform (DWT) based algorithm is used for audio signal denoising, with thresholding applied for the denoising step. The analysis is done on noisy audio signals corrupted by babble noise, airport noise, car noise and train noise at 5 dB, 10 dB and 15 dB signal to noise ratio (SNR) levels. Simulations are performed in MATLAB 7.10.0 (R2010a). The output SNR is calculated and compared for the soft and hard thresholding methods. The soft thresholding method performs better than hard thresholding at all input SNR levels, showing a maximum improvement of 37.29 dB in output SNR.

Keywords - discrete wavelet transform, soft thresholding, hard thresholding, signal to noise ratio.
I. INTRODUCTION
Wavelets have become a popular tool for speech processing, such as speech analysis, pitch detection and speech recognition. Wavelets are successful front-end processors for speech recognition, exploiting the time resolution of the wavelet transform. The recognition performance depends on the coverage of the frequency domain. The goal for good speech recognition is to increase the bandwidth of a wavelet without significantly affecting the time resolution; this can be done by compounding wavelets [1]. Speech denoising is a field of engineering that studies methods used to recover the original speech from signals corrupted by different types of noise. Noise may be in the form of babble noise, airport noise, car noise, train noise, white noise, pink noise and many other types of noise present in the environment. Over the last decades, noise removal from speech signals has been an area of interest for researchers in speech processing. The computation of the coefficients is done using a multi-resolution wavelet filter bank. The filter choice depends on the noise level and other parameters. For a good denoising result, a good threshold level has to be estimated. The wavelet function and the decomposition level also play an important role in the quality of the denoised signal. Application areas of the wavelet transform include wavelet modulation in communication channels and producing and analysing irregular signals.

The paper is organized as follows: Section II covers wavelets and multi-resolution; Section III the Discrete Wavelet Transform (DWT); Section IV the modified universal threshold; Section V soft and hard thresholding; Section VI results and discussion; Section VII the conclusion.
II. WAVELET AND MULTI-RESOLUTION
A wavelet is a small wave, and wavelet transforms convert a signal into a series of wavelets, providing a way to analyse waveforms bounded in both frequency and duration. This allows a signal to be stored more efficiently than with the Fourier transform. The wavelet transform is preferred over the Fourier Transform (FT) and the Short Time Fourier Transform (STFT), since it provides multi-resolution. In a time domain signal, the independent variable is time and the dependent variable is the amplitude, but most of the information is hidden in the frequency content. By using the wavelet transform, we can get the frequency information which is not accessible by working in the time domain. The analysis of a non-stationary signal using the Fourier Transform or the Short Time Fourier Transform does not give satisfactory results; better results can be obtained using wavelet transform analysis. In the STFT a fixed time-frequency resolution is used, whereas the wavelet transform uses a multi-resolution technique. One advantage of wavelet transform analysis is the ability to perform local analysis: wavelet analysis is able to capture signal features that other analysis techniques miss, such as breakdown points and discontinuities.

In multi-resolution analysis (MRA) [2], the signal has good time resolution and poor frequency resolution at high frequencies and, conversely, good frequency resolution and poor time resolution at low frequencies. It is more suitable for short-duration higher frequency components and longer-duration lower frequency components. It is assumed that low frequencies appear for the entire duration of the signal, whereas high frequencies appear from time to time as short intervals; this is often the case in practical applications. There are many different wavelet systems that can be used effectively. A wavelet system is a set of building blocks to represent a signal. The denoising method is based on thresholding: each wavelet coefficient of the signal is compared to a given threshold. Using wavelets to remove noise from a signal requires identifying which components contain the noise and then reconstructing the signal without those components.
III. DISCRETE WAVELET TRANSFORM
The continuous wavelet transform (CWT) performs a multi-resolution analysis by contraction and dilatation of the wavelet functions, while the discrete wavelet transform (DWT) uses filter banks for the construction of the multi-resolution time-frequency plane [2]. The DWT uses multi-resolution filter banks and special wavelet filters for the analysis and reconstruction of signals.

In the FT we know the frequency content of the signal, but we do not know at what times the frequency components occur [3]. To avoid this problem we use the Short Time Fourier Transform (STFT) or the wavelet transform for the analysis of signals like speech. Choosing between the STFT and the wavelet transform, we give preference to the wavelet transform, because it analyses the signal at different frequencies with different resolutions.
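The filter-bank view of the DWT can be sketched with the simplest wavelet. A one-level Haar analysis/synthesis pair in plain Python (the paper itself uses db5 via MATLAB's wavelet toolbox; Haar is used here only for brevity):

```python
import math

S = math.sqrt(2)

def haar_analysis(x):
    """Low-pass/high-pass filtering of an even-length signal, then decimation by 2."""
    approx = [(x[2*i] + x[2*i + 1]) / S for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / S for i in range(len(x) // 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse transform: perfect reconstruction from the two subbands."""
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / S, (a - d) / S]
    return x

x = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 9.0]
a, d = haar_analysis(x)
recon = haar_synthesis(a, d)
max_err = max(abs(u - v) for u, v in zip(x, recon))
print(max_err < 1e-12)   # True: the filter bank reconstructs the signal
```

Applying `haar_analysis` recursively to the approximation half yields the multi-level decomposition used in this paper (5 levels); thresholding is then applied to the subband coefficients before synthesis.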
The DWT provides sufficient information for both analysis and synthesis and reduces the computation time considerably. It analyses the signal at different frequency bands with different resolutions and decomposes the signal into a coarse approximation and detail information. The human ear has better frequency resolution at low frequencies and lower frequency resolution at high frequencies. Decomposition of the signal is obtained by passing the time-domain signal through low pass and high pass filters.

IV. MODIFIED UNIVERSAL THRESHOLD
In this paper, we removed babble noise, airport noise, car noise and train noise from noisy signals containing these different noise contents, using the multi-resolution concept. We want to find a threshold value that will remove the noise from the noisy signal while still recovering the original signal efficiently. If the threshold value is too high, it will also remove content of the original signal; if the threshold value is too low, denoising will not work properly.

One of the first methods for selecting the threshold was developed by Donoho and Johnstone [4] and is called the universal threshold:

    thr = σ_n·√(2·log2(N))    (1)

where N denotes the number of samples of noise and σ_n is the standard deviation of the noise. But the threshold obtained by equation (1) is too high. A different universal threshold was proposed in [5]. The universal threshold was again modified in [3] with a factor k in order to obtain a higher quality output signal:

    thr = k·σ_n·√(2·log2(N))    (2)

During our research it was noticed that if we use two factors, i.e. k and m, the new threshold value gives better results, especially in recovering the original signal. We discuss this topic in detail in section VI.

V. SOFT AND HARD THRESHOLDING
The soft and hard thresholding methods are used to estimate wavelet coefficients in wavelet threshold denoising [2]. Hard thresholding zeros out small coefficients, resulting in an efficient representation. Soft thresholding softens the coefficients exceeding the threshold by lowering them by the threshold value. When thresholding is applied, no perfect reconstruction of the original signal is possible.

Hard thresholding can be described as the usual process of setting to zero the elements whose absolute values are lower than the threshold: the hard-thresholded signal is x if |x| > thr and is 0 if |x| ≤ thr, where thr is a threshold value. Soft thresholding is an extension of hard thresholding, first setting to zero the elements whose absolute values are lower than the threshold, and then shrinking the nonzero coefficients towards 0. In figure 1, the threshold value thr is 0.5.

    T_Hard(x) = x,   |x| > thr
                0,   |x| ≤ thr    (3)

    T_Soft(x) = sign(x)·(x − thr),   x > thr
                0,                   −thr ≤ x ≤ thr
                sign(x)·(x + thr),   x < −thr    (4)

VI. RESULTS AND DISCUSSION
We implemented the babble noise removal algorithm in MATLAB 7.10.0 (R2010a). The wavelet toolbox in MATLAB has a large collection of functions for wavelet analysis. The input of our simulation is a noisy signal in Wave format, sampled at a sampling frequency of Fs = 8000 Hz. The speech signal is corrupted by babble noise at 5 dB, 10 dB and 15 dB SNR levels.

This algorithm is very useful when we do not know the original (noise-free) signal; we only use the original signal to compare the denoised signal against the original speech signal. It is assumed that high amplitude DWT coefficients represent signal and low amplitude coefficients represent noise. It is considered that some samples of the noisy signal contain only noise, so we choose those samples for the noise calculation.

It is very important to select the proper level in multi-resolution analysis. In multi-resolution analysis, the approximation signals (output of the low-pass filter followed by decimation) are split up to a certain level. Figure 2 shows the original speech signal and figure 3 shows the noisy signal.

We choose a 5-level DWT and the db5 wavelet. An improved threshold value is obtained by replacing the threshold thr (equation 2) with

    thr = m·k·σ_n·√(2·log2(N))    (5)

where 0 < k < 1 and 0 < m < 1, N denotes the number of noise samples and σ_n is the standard deviation of the noise. We use two factors, k and m. It is found that if we fix one factor and vary the other, we can get a different range of threshold values which gives improved results in recovering the original signal, especially for low level noise.

We apply this threshold value to the approximation coefficients and all detail coefficients, using soft and hard thresholding separately, and finally use these new coefficients to reconstruct the signal. We found that the soft thresholding results are more efficient than hard thresholding. Figure 4 shows the reconstructed (denoised) signal.

As shown in table 1, we fix the values of both factors k and m and analyse the post SNR of the different noisy signals at 5 dB, 10 dB and 15 dB SNR levels. Graph 1 shows the comparison between SNR (Pre) dB and SNR (Post) dB at different SNR levels using the soft thresholding method.

SNR is used to quantify how much a signal has been corrupted by noise. It is defined as the ratio of the signal power to the noise power corrupting the signal; denoising is successful if the Post SNR is higher than the Pre SNR. The Signal to Noise Ratio is defined as:

    SNR_dB = 10·log10(P_signal/P_noise) = P_signal,dB − P_noise,dB    (6)

Another method to analyse the denoised signal is the Mean Square Error (MSE), which is mostly used for estimating signal quality. The input MSE is defined as:

    MSE_in = (1/N)·Σ_i (X_i − Y_i)²    (7)

where X_i is the original signal and Y_i is the noisy signal. The output MSE is defined as:

    MSE_out = (1/N)·Σ_i (X_i − X̂_i)²    (8)

where X̂_i is the reconstructed signal.
Denoising is successful if output MSE is lower than input MSE.
Lower MSE means the closer match between two signals.
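Equations (5) through (8) can be sketched directly in software. The sketch below is illustrative only: the function names and the example factor values k = m = 0.5 are ours, and a real implementation would apply the threshold to actual DWT coefficients rather than raw lists.

```python
from math import log2, log10, sqrt

def noise_std(samples):
    # Standard deviation of samples assumed to contain only noise.
    mu = sum(samples) / len(samples)
    return sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))

def modified_threshold(noise, k=0.5, m=0.5):
    # Equation (5): thr = m * k * sigma_n * sqrt(2 * log2(N)), 0 < k, m < 1.
    return m * k * noise_std(noise) * sqrt(2.0 * log2(len(noise)))

def soft_threshold(coeffs, thr):
    # Soft thresholding: shrink every coefficient toward zero by thr.
    return [max(abs(c) - thr, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

def hard_threshold(coeffs, thr):
    # Hard thresholding: zero out coefficients whose magnitude is below thr.
    return [c if abs(c) > thr else 0.0 for c in coeffs]

def snr_db(signal, noise):
    # Equation (6): SNR = 10 * log10(P_signal / P_noise).
    return 10.0 * log10(sum(s * s for s in signal) / sum(n * n for n in noise))

def mse(a, b):
    # Equations (7) and (8): mean square error between two signals.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

Denoising succeeds when `snr_db` rises and `mse` falls between the noisy input and the reconstructed output.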
VII. CONCLUSION
In this paper we used the wavelet transform for denoising speech
signals corrupted with babble noise, airport noise, car noise and
train noise. Speech denoising is performed in the wavelet domain
by thresholding wavelet coefficients. We found that the modified
universal threshold gives better denoising results, especially for
low-level noise. Across the different analyses, soft thresholding
gave better results than hard thresholding. A higher threshold
removes noise well, but part of the original signal is also removed
with the noise; it is generally not possible to filter out all the noise
without affecting the original signal. The denoised signal can be
analysed by signal-to-noise ratio (SNR) and mean square error
(MSE).
VIII. REFERENCES
[1] R. F. Favero, "Compound wavelets: wavelets for speech recognition," IEEE,
pp. 600-603, 1994.
[2] R. J. E. Merry, M. Steinbuch, and M. J. G. van de Molengraft, "Wavelet
Theory and Applications: a literature study," Eindhoven University of
Technology, Department of Mechanical Engineering, Control Systems
Technology Group, Eindhoven, June.
[3] Matko Saric, Luki Bilicic, and Hrvoje Dujmic, "White Noise Reduction of
Audio Signal using Wavelets Transform with Modified Universal Threshold,"
University of Split, R. Boskovica b.b, HR 21000 Split, Croatia.
[4] D. L. Donoho, "De-noising by Soft Thresholding," Technical Report no.
409, Stanford University, December 1992.
[5] Alexandru Isar and Dorina Isar, "Adaptive denoising of low SNR signals,"
Third International Conference on WAA 2003, Chongqing, P. R. China,
29-31 May 2003, pp. 821-826.
[6] Yuan Yan Tang (ed.), Wavelet Analysis and its Applications: Second
International Conference, WAA.
Low Power Methods for different technologies
Himani Mittal¹, Prof. Dinesh Chandra², Sampath Kumar³
¹Assistant Professor, J.S.S. Academy of Technical Education, NOIDA
Email: himanimit@yahoo.co.in
²H.O.D., E&C Deptt., J.S.S. Academy of Technical Education, NOIDA
Email: dinesshc@gmail.com
³Assistant Professor, J.S.S. Academy of Technical Education, NOIDA
Email: sampath_sams@yahoo.com
Abstract: Power consumption and power-related issues have
become a major concern for most designs. The primary method
used for reducing power is supply voltage reduction; however, this
technique begins to lose its effectiveness as voltages drop to the
sub-threshold range, and further reductions in the supply voltage
begin to create more problems than they solve. For the past few
years, several techniques, methodologies and tools for designing
low-power circuits have been presented in the scientific literature.
However, only a few of them have found their way into current
design flows. The purpose of this paper is to summarize, mainly by
way of examples, what in our experience are the most trusted
approaches to low-power design. In other words, our contribution
should not be taken as an exhaustive survey of the existing
literature on low-power design; rather, we would like to provide
insights a designer can rely upon when power consumption is a
critical constraint. We will focus on the reduction of power
consumption in different technologies for different values of
capacitance.
Keywords: FSM decomposition [2], Mealy and Moore machines,
capacitance [5], power saving
1. INTRODUCTION
The consumer demand for greater functionality and higher
performance, but also for lower costs adds significant pressure
on System-on-Chip (SoC) manufacturers. The continuing
advances in process technology, and ability to design highly
complex SoCs does not come without a cost. So the next
generation of processes surely brings about the next
generation of challenges.
With ever increasing System-on-Chip (SoC) complexity,
energy consumption has become the most critical constraint
for today's integrated circuit (IC) design. Consequently, a lot
of effort is spent in designing for low-power dissipation.
Power consumption has become a primary constraint in
design, along with performance, clock frequency and die size.
Lower power can be achieved only by designing at all levels
of abstraction: from architectural design to intellectual
property (IP) component selection and physical
implementation. Energy reduction techniques can also be
applied at all levels of the system.
Designers should use components that deploy the latest
developments in low-power technology. The most effective
power savings can be achieved by making the right choices
early on during the system and architectural level of
abstraction. In addition to using power-conscious hardware
design techniques, it is important to save power through
careful design of the operating system and application
programs. The objectives of this paper are:
(1) computing the power consumption of the original FSM;
(2) creating a table for different technologies and different values
of capacitance.
II.SOURCES OF POWER DISSIPATION
The sources of energy consumption on a CMOS chip can be
classified as static and dynamic power dissipation. The
dominant component of energy consumption in CMOS is
dynamic power consumption caused by the actual effort of the
circuit to switch. A first order approximation of the dynamic
power consumption of CMOS circuitry is given by the
formula [3]:

P = C * V^2 * f
where P is the power, C is the effective switch capacitance, V
is the supply voltage, and f is the frequency of operation. The
power dissipation arises from the charging and discharging of
the circuit node capacitances found on the output of every
logic gate. Every low-to-high logic transition in a digital
circuit incurs a change of voltage, drawing energy from the
power supply.
A designer at the technological and architectural level can try
to minimize the variables in these equations to minimize the
overall energy consumption. However, power minimization is
often a complex process of trade-offs between speed, area, and
power consumption.
Static energy consumption is caused by short circuit currents,
bias, and leakage currents. During the transition on the input
of a CMOS gate both p and n channel devices may conduct
simultaneously, briefly establishing a short from the supply
voltage to ground. While statically-biased gates are usually
found in a few specialized circuits such as PLAs, their use has
been dramatically reduced. Leakage current is becoming the
dominant component of static energy consumption. Until
recently, it was seen as a secondary order effect; however, the
total amount of static power consumption doubles with every
new process node.
Energy consumption in CMOS circuitry is proportional to
capacitance; therefore, a technique that can be used to reduce
energy consumption is to minimize the capacitance. This can be
achieved at the architectural level of design as well as at the logic
and physical implementation level.
Connections to external components, such as external memory,
typically have much greater capacitance than connections to
on-chip resources. As a result, accessing external memory can
increase energy consumption. Consequently, a way to reduce
capacitance is to reduce external accesses and optimize the system
by using on-chip resources such as caches and registers. In
addition, the use of fewer external outputs and infrequent switching
will result in dynamic power savings.
Routing capacitance is the main cause of the limitation in clock
frequency. Circuits that are able to run faster can do so because of
a lower routing capacitance; consequently, they dissipate less
power at a given clock frequency. So, energy reduction can be
achieved by optimizing the clock frequency of the design, even if
the resulting performance is far in excess of the requirements.
III. METHODS AND APPROACHES
The key steps in our approach are:
(1) Find the number of occurring states, then find the state
probability P of the FSM.
(2) Take different technologies and their related supply voltages.
(3) Find the frequency by using the formula F = P(1 - P).
(4) With different values of capacitance, find the power savings.
(5) An effective approach to reduce power dissipation is to turn off
portions of the circuit and hence reduce the switching activity in
the circuit. We synthesize an FSM in such a way that only the part
of the circuit which computes the state transitions and outputs is
turned on, while all other parts are turned off.
(6) In general, since the combinational circuit for each submachine
is smaller than that for the original machine, the power
consumption of the decomposed machine will be smaller than that
of the original machine.
IV. POWER ESTIMATION
Entropy is a measure of the randomness carried by a set of
discrete events observed over time. In information theory, the
information content C_i of an event E_i is quantified by taking the
logarithm of the event probability:

C_i = log2(1 / P_i)

Since 0 <= P_i <= 1, the logarithmic term is non-negative and we
have C_i >= 0. The average information content of the system is
the sum of the information contents C_i weighted by their
occurrence probabilities; this is also called the entropy [4] of the
system:

H(X) = Σ_{i=1}^{m} p_i * log2(1 / p_i)
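The entropy formula above can be computed directly; a minimal sketch (the function name is ours):

```python
from math import log2

def entropy(probs):
    # H(X) = sum over i of p_i * log2(1 / p_i); a p_i of 0 contributes 0.
    return sum(p * log2(1.0 / p) for p in probs if p > 0)
```

For example, two equally likely states carry one bit of entropy, while a single certain state carries none.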
V. LOW POWER CHALLENGES IN PHYSICAL
DESIGN
Since capacitance is a function of fanout, wire length, and
transistor size, reducing capacitance means reducing it physically.
But there are challenges in doing so. The
challenge of low-power physical design is to create, optimize,
and verify the physical layout so that it meets the power
budget along with traditional timing, SI, performance, and
area goals. The design tool must find the best tradeoffs when
implementing any number of low-power techniques.
While low-power design starts at the architectural level, the
low-power design techniques continue through place and
route. Physical design tools must interpret the power intent
and implement the layout correctly, from placement of special
cells to routing and optimization across power domains in the
presence of multiple corners, modes, and power states, plus
manufacturing variability. While many tools support the more
common low-power techniques, such as clock gating,
designers run into difficulty with more advanced techniques,
such as the use of multiple voltage domains, which cause the
design size and complexity to explode.
VI. REDUCTION APPROACH
(1) For a particular technology, take its power supply voltage.
(2) Take different values of capacitance, such as 1 F, 0.5 F,
0.25 F, 0.1 F.
(3) By using the formula P = C * V^2 * f, calculate the power
dissipation in the 180 nm, 130 nm and 90 nm technologies.
(4) Compare the results.
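The steps above can be sketched in a few lines. This is an illustrative sketch only: the paper states 1.8 V for 180 nm, while the 130 nm and 90 nm supply voltages below are assumed typical values, and the example state probability 0.3 is ours.

```python
def switching_frequency(p):
    # Step (3) of the approach: F = P * (1 - P), with P the state probability.
    return p * (1.0 - p)

def dynamic_power(c, v, f):
    # First-order dynamic power: P = C * V^2 * f.
    return c * v * v * f

# Supply voltage per technology node (130 nm and 90 nm values assumed).
vdd = {"180nm": 1.8, "130nm": 1.2, "90nm": 1.0}
caps = [1.0, 0.5, 0.25, 0.1]  # capacitance values from the paper, in farads

for tech, v in vdd.items():
    for c in caps:
        p = dynamic_power(c, v, switching_frequency(0.3))
        print(f"{tech}: C={c} F -> P={p:.4f} W")
```

Since power is linear in C, halving the capacitance halves the dynamic power at a fixed voltage and frequency, which is the trend the results table shows.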
VII. RESULTS
[Graphs: Capacitor (F) vs. Power (W) for the 180 nm, 130 nm and
90 nm technologies.]

VIII. CALCULATION AND RESULT
180 nm technology:

VDD (V)   Frequency (Hz)   Capacitor (F)   Power (W)
1.8       0.2159           1               6.9
1.8       0.2519           0.5             3.49
1.8       0.2519           0.25            1.748
1.8       0.2519           0.1             0.699

IX. CONCLUSION
Power saving increases with decreasing capacitance, and the latest
technologies lead to more power saving. This is the power in the
original FSM. If we decompose the machine into more submachines,
the frequency reduces and we can get a further reduction in power.
The decomposed FSM [2] technique leads to a 34.13% average
reduction in the switching activity of the state variables, and a 12%
average reduction in the total switching activity of the implemented
circuit. Although the solution is heuristic and does not guarantee
minimum power consumption, these results lead to a reduction in
the power consumption of the complete circuit.

REFERENCES
1. Chow, S.-H., Ho, Y.-C., Hwang, T. and Liu, C. L., 1996. "Low
power realization of finite state machines: a decomposition
approach." ACM Trans. Des. Autom. Electron. Syst. 1, 3, 315-340.
2. L. Benini, A. Bogliolo, and G. De Micheli, "A survey of design
techniques for system-level dynamic power management," IEEE
Trans. on VLSI Systems, vol. 8, no. 3, pp. 299-316, June 2000.
3. "Low-power circuits: practical recipes," IEEE Circuits and
Systems Magazine, pp. 7-25, vol. 1, no. 1, 2001.
4. "Transformation and synthesis of FSM for low-power
gated-clock implementation," IEEE Trans. Computer-
5. Luca Benini, Giovanni De Micheli, and Enrico Macii,
"Designing low-power circuits: practical recipes."
DESIGN AND SIMULATION OF LOW POWER MULTIPLIER
Vikas Garg, Department of E & C, NIET Greater Noida, E-mail: vikas_g87@yahoo.co.in
Tazeem Ahmad Khan, Department of E & C, IIMT College of Engineering, Greater Noida, E-mail: khan_taz@yahoo.com
Abstract
In the fast technological advancement in the field of Information
and Communication, the use of low-power circuits has taken a
prominent stance. Low-power multiplier circuits have been widely
applied in calculating machines and computers. This project
proposes a new solution for the design of a low-power multiplier
using Booth's algorithm. The idea behind this solution is inserting
more zeros into the multiplicand in order to reduce the switching
activity in the circuit, which ultimately results in fast processing
and reduced power consumption.
1. Introduction
As we get closer to the limits of
scaling in complementary metal oxide
semiconductor (CMOS) circuits, power
and heat dissipation issues are becoming
more and more important. The power dissipation in a CMOS
circuit has several components that are usually estimated based on
the device parameters of the technology used. The total power in
the circuit is given by the following equation:

P_total = P_switching + P_short-circuit + P_static + P_leakage

where P_switching is the switching component of the power and
the dominating component in these calculations; P_short-circuit is
the power dissipated because, during circuit operation, the PMOS
and NMOS transistors of a CMOS gate conduct simultaneously
during a transition at the input; P_static is the contribution of the
biasing current required by the device; and P_leakage is the power
consumed by the reverse-biased P-N junctions in the circuit. In
FPGA designs, power reduction is possible only through reduced
switching activity, which is also called dynamic power.
2. Proposed Design and
Implementation
Higher power reduction can be achieved if the multiplicand
contains more 0s than 1s [5]. In this approach we propose a
Binary / Booth Recoding Unit which forces the multiplicand to
have more zeros. The advantage here is that if the multiplicand
contains more successive ones, the booth recoding unit converts
these ones into zeros.
Approach
The switching activity of the components used in the design
depends on the input bit coefficients. This means that if an input
bit coefficient is zero, the corresponding row or column of adders
need not be activated. If the multiplicand contains more zeros,
higher power reduction can be achieved. We propose a Binary /
Booth Recoding Unit which forces the multiplicand to have more
zeros. Consider the multiplication of 1111 x 1000, in which the
multiplicand can be booth recoded as 1000b, where b is -1. The
booth-recoded multiplicand contains only two ones, which will
switch two columns. Therefore, instead of taking 1111 as the
multiplicand, 1000b can be taken. Now consider the multiplication
of 1010 x 1000, in which the multiplicand can be booth recoded as
1b1b0. The booth-recoded multiplicand contains only a single
zero, whereas the binary multiplicand contains two zeros. In this
case the binary multiplicand is chosen for multiplication.
Multiplier Design
The low power multiplier can be constructed as shown in figure 1.
It is organized in two units: the Binary / Booth Recoding Unit and
the Multiplication Unit.
Fig.1 Proposed multiplier architecture
2.2.1. Binary/Booth Recoding Unit
This unit chooses the multiplicand with the greater number of
zeros. It generates the booth-recoded multiplicand and selects
either the binary or the booth-recoded multiplicand according to
which has more zeros. When the booth-recoded multiplicand is
chosen, the multiplicand is represented with (b, 0, 1). To represent
this b in the binary number system we have taken a sign bit
register S that holds the value 1 only when the corresponding bit is
b, and 0 otherwise. For a binary multiplicand, S is always zero. If
the multiplicand is 16 bits in length, e.g. 0000111110111100, it
can be booth recoded as 00010000b1000b00. These ternary values
can be represented in two registers: a magnitude register and a
sign register.
We have used lookup tables for counting the number of zeros and
converting the multiplicand to a booth-recoded multiplicand, as
shown in Table 1. For multiplication with b it will take the 2's
complement of the multiplier. This unit guarantees that the
multiplicand always has an equal or greater number of zeros.
Multiplicand bits        Version of multiplier
Bit i-1    Bit i         selected by bit i
0          0             0
0          1             +1
1          0             b
1          1             0

Table 1. Booth Recoding Table
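The recoding behaviour of Table 1 can be sketched in software. The sketch below uses the common Booth convention that the digit at position i is b_(i-1) - b_i (with a virtual 0 below the LSB and above the MSB), which reproduces the paper's examples 1111 -> 1000b and 1010 -> 1b1b0; the function names are ours.

```python
def booth_recode(bits):
    """Booth-recode an MSB-first bit string into digits over {1, 0, b},
    where 'b' denotes -1. Digit i is b_(i-1) - b_i, with a virtual 0
    below the LSB and above the MSB, so the result is one digit longer."""
    lsb_first = [int(c) for c in reversed(bits)]
    lsb_first.append(0)                 # virtual bit above the MSB
    digits, prev = [], 0                # prev plays the role of b_(i-1)
    for cur in lsb_first:
        digits.append(prev - cur)
        prev = cur
    sym = {1: "1", 0: "0", -1: "b"}
    return "".join(sym[d] for d in reversed(digits))

def choose_multiplicand(bits):
    # The unit's rule: keep whichever form has more zeros (binary on ties).
    recoded = booth_recode(bits)
    return recoded if recoded.count("0") > bits.count("0") else bits
```

A run of k successive ones becomes a single +1/-1 pair, which is why the recoded form wins for 1111 but the binary form wins for 1010.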
2.2.2. Multiplication Unit
Figure 2 shows the 4x4 low power multiplier structure. This
technique will be very useful as we go for higher widths of the
multiplicand, especially when there are successive ones.
Fig.2 4x4 Multiplier architecture
Multiplying with -1 will take the 2's complement of the multiplier.
However, we need extra sign bit circuitry to add sign extension
bits; but since in booth recoding no two consecutive -1s occur, in
the worst case there will be two -1s. Even in the case of the worst
multiplicand, i.e. 0101, the output of the Binary / Booth Recoding
Unit is the binary multiplicand, so there is no need for extra
correction circuitry, since the multiplier will perform a normal
binary operation. The Modified Full Adder is constructed as shown
in figure 6. If aj is zero, the FA is disabled. Here sj is a sign bit of
the multiplicand.
3. Implementation and Results
In order to evaluate the performance of the low power parallel
multiplier, we implement all these designs on a Xilinx
xc2vp2-6fg256 FPGA. We compare the performance of this design
with the column bypassing multiplier, the row bypassing multiplier
and a multiplier without bypassing. Table 2 highlights the
comparison between the binary multiplicand and the booth-recoded
multiplicand; the latter is generated when the binary multiplicand
is passed through the Binary / Booth Recoding Unit. It clearly
indicates that the booth-recoded multiplicand saves a significant
amount of switching activity compared to the binary multiplicand.
Implementing the counting of zeros and the generation of the
booth-recoded multiplicand with lookup tables is another
advantage of this design: it takes fewer CLBs than the usual loop
statement or FSM. For design entry, we used ModelSim 6.0d and
designed in VHDL. The design was synthesized with Xilinx
ISE 7.1i and Synplify Pro.
Binary Multiplicand    Booth Recoded Multiplicand
11111100               100000-100
11111110               1000000-10
11111111               10000000-1
00000001               00000001-1

Table 2. Output of the Binary / Booth Recoding Unit
Fig.3 Simulation Result for Binary / Booth
Recoding Unit
4. Conclusion
This project presents a new methodology for designing a low
power parallel multiplier with reduced switching. A method for
increasing the number of zeros in the multiplicand is discussed
with the help of the Binary / Booth Recoding Unit. We use a
lookup table to implement the logic for counting the number of
ones and generating the booth-recoded multiplicand. Compared
with column bypassing and other techniques, our methodology
guarantees an equal or greater number of zeros in the multiplicand.
5. References
[1] Oscal T.-C. Chen, Sandy Wang, and Yi-Wen Wu, "Minimization
of switching activities of partial products for designing low-power
multipliers," IEEE Transactions on VLSI Systems, vol. 11, no. 3,
June 2003.
[2] A. Wu, "High performance adder cell for low-power pipelined
multiplier," in Proc. IEEE Int. Symp. on Circuits and Systems,
May 1996, vol. 4, pp. 57-60.
[3] S. Hong, S. Kim, M. C. Papaefthymiou, and W. E. Stark, "Low
power parallel multiplier design for DSP applications through
coefficient optimization," in Proc. of Twelfth Annual IEEE Int.
ASIC/SOC Conf., Sep. 1999, pp. 286-290.
[4] C. R. Baugh and B. A. Wooley, "A two's complement parallel
array multiplication algorithm," IEEE Trans. Comput., Dec. 1973,
vol. C-22, pp. 1045-1047.
[5] I. S. Abu-Khater, A. Bellaouar, and M. Elmasry, "Circuit
techniques for CMOS low-power high-performance multipliers,"
IEEE J. Solid-State Circuits, Oct. 1996, vol. 31, pp. 1535-1546.
[6] J. Ohban, V. G. Moshnyaga, and K. Inoue, "Multiplier energy
reduction through bypassing of partial products," Asia-Pacific
Conf. on Circuits and Systems, 2002, vol. 2, pp. 13-17.
[7] Ming-Chen Wen, Sying-Jyan Wang, and Yen-Nan Lin,
"Low-power parallel multiplier with column bypassing,"
Electronics Letters, 12 May 2005, vol. 41, issue 10, pp. 581-583.
An Area-Efficient VLSI Implementation for
Programmable FIR Filters based on a Parameterized
Divide and Conquer Approach
Anshul Chaudhary
Asst. Professor in
Department of Electronics and Communication Engineering
Institute of Technology Roorkee, RCPUniverse
ansh2385@yahoo.in, achaudhary7chaudhary@gmail.com
Abstract: The aim of this project is an optimal VLSI
implementation for a class of programmable FIR filters with
binary coefficients. The architecture is based on partitioning the
filter transfer function using the divide and conquer approach.
Divide and conquer algorithms operate by reducing a large
problem into a number of smaller problems that are easy to
solve.
Keywords: ModelSim, Xilinx, FPGA
Introduction
The architecture is based on partitioning the filter transfer
function using the divide and conquer approach, and is
optimized with respect to certain design parameter(s) to
minimize a particular computational complexity measure
(CCM). The computational saving achieved is attributed to the
fact that the architecture removes, in an optimal fashion, the
redundancies that are shown to inherently exist in the filter
structure. By partitioning the filter into a number of smaller
sub-filters, it is shown that not all of them can be distinct.
This redundancy is removed by filtering the input signal in all
possible small filters with a certain number of binary
coefficients, and the results are then broadcast to the remainder
of the partitioned filter structure. The basic requirement of such
an architecture is a robust and efficient programmable
structure that, when configured using a control program, can
effectively distribute such precomputed signals to their desired
destinations within the filter structure. In our project, a
Programmable Switch Matrix (PSM) architecture has been
used for performing this task, based on its tradeoff between
metallization area and programming ease. The structure is
based on the crossbar topology, commonly employed in
smaller Asynchronous Transfer Mode (ATM) networks and
Field Programmable Gate Arrays (FPGAs), and is strictly
non-blocking and capable of multicasting.
The invention may be implemented as a device and/or a
method of multiplying a binary multiplicand with a binary
multiplier. An embodiment of the invention is a VLSI
architecture referred to herein as the Parameterized Binary
Multiplier Architecture("PBMA"). It is based on an existing
parameterized divide and conquer algorithm that uses optimal
partitioning and redundancy removal for simultaneous
computation of partial sums. The PBMA may be implemented
to have two types of basic units. The first type of basic unit is
referred to herein as the Sigma unit , and the second type of
basic unit is referred to herein as the Omega unit .The Sigma
unit may generate distinct partial sums of the multiplier and
shifted forms of the multiplier. The partial sums are referred
to collectively as "p-sums".
The Omega unit may combine the partial sums generated by
the Sigma unit in order to obtain the product of the
multiplicand and the multiplier. The architecture is
parameterized by a partition parameter, that is referred to
herein as " r ". The partition parameter may be selected so as
to minimize a desired computational complexity measure such
as area or area-time product.
A central principle of operation for the PBMA is adapted from
[Ref. 3] and is described below. For reference purposes, "m"
is the number of binary digits in the multiplicand ("X"), and
"n" is the number of binary digits in the multiplier ("Y").
Since multiplication is commutative, in the multiplication X x
Y we can assume that m >= n without imposing any limitation.
In order to implement the invention, initially X is partitioned
into a number ("s") of partitions, where s = ceil(m / r). The
partitions may be thought of as short multiplicands of width r.
As such, X may be written as

X = [2^(sr-r) ... 2^r 2^0] * P * [2^(r-1) ... 2^1 2^0]^T    Equation (2)

where * indicates matrix multiplication, T denotes the transpose
of a matrix, and P is the s x r matrix whose rows contain the
binary digits of the partitions of X (Equation (3)).
Since x_i ∈ {0, 1}, the s x r matrix P can have at most 2^r - 1
distinct rows that have at least one non-zero element. Any
redundancy due to the repetition of one or more rows in P may
be eliminated by expressing P as P_x * P_1, where P_x is an
s x (2^r - 1) matrix with at most one '1' in each row and '0's
elsewhere, and P_1 is a (2^r - 1) x r matrix with its i-th row
containing the binary digits of the integer i as its entries,
resulting in:

X = [2^(sr-r) ... 2^r 2^0] * P_x * P_1 * [2^(r-1) ... 2^1 2^0]^T    Equation (4)
where P_1 * [2^(r-1) ... 2^1 2^0]^T generates a column of all
possible 2^r - 1 polynomials of degree r - 1 in powers of 2,
while [2^(sr-r) ... 2^r 2^0] * P_x assigns to each such
polynomial all the terms in Equation (2) that share it. Now, the
product ("Z") of the multiplicand and the multiplier,
Z = X x Y, may be expressed as:
Z = [2^(sr-r) ... 2^r 2^0] * P_x * P_1 * [2^(r-1) ... 2^1 2^0]^T x Y    Equation (5)
The PBMA may be thought of as an implementation of
Equation (5). The partition size is parameterized by the
partition parameter r, which may be selected to minimize a
desired computational complexity measure, such as area or
area-time product. The Sigma unit of the PBMA may be
embodied to implement P_1 * [2^(r-1) ... 2^1 2^0]^T, and the
Omega unit may be embodied to implement
[2^(sr-r) ... 2^r 2^0] * P_x. The Sigma unit may generate the
2^r - 1 distinct partial sums of the multiplier Y and shifted
forms of the multiplier 2Y, ..., 2^(r-1)Y; the partial sums are
referred to herein as Y, 2Y, ..., (2^r - 1)Y. The Omega unit
may include two types of sub-units. A first such type of
sub-unit sends either one of the partial sums or a '0' to
appropriate nodes of a second such type of sub-unit. The second
sub-unit then combines the outputs from the first sub-unit to
obtain the desired product.
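The partition-and-precompute scheme just described can be sketched in software. This is a hedged sketch of the arithmetic only, using plain integers rather than the hardware data path; the function name and chunk-indexing convention are ours.

```python
def pbma_multiply(x, y, m, n, r):
    """Multiply an m-bit x by an n-bit y with partition parameter r:
    the Sigma step precomputes the 2^r - 1 partial sums of y once,
    and the Omega step routes one of them (or 0) per r-bit partition
    of x and combines them with r-bit shifts (the MSA's shift-add)."""
    s = -(-m // r)                               # s = ceil(m / r) partitions
    psums = [k * y for k in range(1, 2 ** r)]    # Sigma: y, 2y, ..., (2^r - 1)y
    z = 0
    for i in range(s):
        chunk = (x >> (i * r)) & ((1 << r) - 1)  # i-th r-bit partition of x
        if chunk:                                # an all-zero chunk routes '0'
            z += psums[chunk - 1] << (i * r)     # r shifts between additions
    return z
```

Note that the partial sums are computed once and reused by every partition, which is the redundancy removal the architecture exploits.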
An embodiment of the invention is depicted in FIG. 1. In FIG. 1,
the Sigma unit is efficiently realized using only 2^(r-1) - 1
n-bit adder units. FIG. 2 depicts one such Sigma unit for the
situation in which r has been selected to equal 3. Depending
on the application, the n-bit adder units in the Sigma unit may
be implemented using basic adder architectures like the
Ripple-Carry Adder ("RCA") for minimal silicon utilization
or using faster adder architectures like the Carry-Look-Ahead
Adder ("CLA") for higher operational speed. Units of type 2^t
that represent a t-bit shift operation, such as 2^0, 2^1 and 2^2,
are used only for functional clarity, and it will be recognized
that they may be realized by appropriately hardwiring the
involved signals.
The first type of sub-unit of the Omega unit, which performs
the sending task, may be implemented using a programmable
switch matrix ("PSM"). The PSM may be based on the crossbar
topology commonly employed in smaller asynchronous transfer
mode ("ATM") networks and field programmable gate arrays
("FPGAs"). The PSM may be strictly nonblocking and capable
of multicasting. The PSM shown in FIG. 3 is a programmable
array of s x 2^r identical switch elements called C units. The C
units that are connected to the same input of the MSA are
referred to herein as a set of C units. FIG. 4 shows a C unit
that employs n + r complementary pass-transistor switches and
an inverter.
By careful inspection, it can be observed that one switch per
2^r switches will pass or not pass only the '0', thereby
requiring only an NMOS transistor. Therefore, the PSM could
be implemented using s x (2^r - 1) complementary switch
elements of type C, used to broadcast the partial sums, and s
NMOS-only switch elements of an alternate type C', used to
broadcast the '0'. However, in a currently preferred
embodiment of the present invention, the PSM is realized using
only identical C units to maintain the overall modularity of the
architecture. Further, since the PSM may be implemented to
require only s + 2^r buses of width n + r, it also compares
favorably in metallization area to a multiplexer-based selection
structure that would require s x 2^r + 2^r buses of the same
width.
A control algorithm is required to configure the PSM. In a
currently preferred embodiment of the invention, the s x 2
r
control bits, which are required to turn on or q^the appropriate
C units , are generated from the available m bits of X . One
such means of creating the control bits extends X to s x r bits
by adding s x r - m 'O's to the most significant part of X . Then
X is partitioned into s , r -bit partitions, and each such
partition is decoded into a 2
r
-bit control sub-string. FIG. 7
depicts Table 1, which illustrates the control sub-strings that
may be used when r = 3.
Each control unit may be functionally identical to a binary
decoder and may include r inverters and 2^r r-input AND
gates. FIG. 5 depicts such a control unit for r = 3. An
embodiment of the second type of sub-unit of the Omega unit,
which computes the final product, is referred to herein as the
multi-shifter-adder ("MSA"). Its operation may be similar to
the shift-add operation of a conventional multiplier, except
that there are r shifts, instead of one, between any two
additions. This functional similarity facilitates implementation
of the MSA by allowing the MSA to be based on several
existing multiplier architectures, with minor modifications.
Depending on the application, the adder units may be selected
from basic adder architectures like the Ripple Carry Adder
(RCA) for minimal silicon utilization, or from faster adder
architectures like the Carry-Look-Ahead Adder (CLA) for
higher operational speed.
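Arithmetically, the Sigma/PSM/MSA flow amounts to precomputing the 2^r - 1 multiples of Y once, then selecting one multiple per r-bit digit of X and accumulating with a shift of r bits between additions. A behavioral sketch (illustrative only; it models the arithmetic, not the VLSI structure):

```python
def pbma_multiply(x, y, m, r):
    """Behavioral model of the PBMA: multiply x by y using the r-bit
    digits of x and the precomputed partial sums Y, 2Y, ..., (2**r - 1)Y."""
    # Sigma unit: the 2**r - 1 distinct partial sums (index 0 maps to 0)
    partial_sums = [d * y for d in range(1 << r)]
    s = -(-m // r)                        # number of r-bit partitions of X
    product = 0
    for i in range(s):
        digit = (x >> (i * r)) & ((1 << r) - 1)
        # PSM broadcasts the selected partial sum; the MSA shifts it
        # by i*r positions (r shifts between additions) and accumulates
        product += partial_sums[digit] << (i * r)
    return product

assert pbma_multiply(181, 201, m=8, r=3) == 181 * 201
```

Note that the partial-sum table is computed once and reused for every digit of X, which is the source of the hardware sharing when multiple multiplicands share one multiplier.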
An extension of the PBMA for simultaneously performing
binary multiplication of a number ("L") of multiplicands,
X(1) = x(1)_{m-1}...x(1)_1 x(1)_0, X(2) = x(2)_{m-1}...x(2)_1 x(2)_0, ...,
X(L) = x(L)_{m-1}...x(L)_1 x(L)_0, by a given multiplier,
Y = y_{n-1}...y_1 y_0, includes a Sigma unit 10 and L Omega units.
FIG. 8 depicts one such system. The resulting L products are
Z(1) = X(1) x Y, ..., Z(L) = X(L) x Y. The Sigma unit may
generate the 2^r - 1 distinct partial sums Y, 2Y, ..., (2^r - 1)Y.
The implementation of the Sigma unit and each of the
Omega units in a currently preferred embodiment of the
invention are described above. For high-speed applications,
the Sigma unit and the MSA 40 may be based on faster tree
architectures, such as the Wallace Multiplier [Ref. 6] or the
Dadda Multiplier [Ref. 7].
For high throughput operation, a pipelined implementation
[Ref. 8] of the PBMA is suggested. A reduced version of the
PBMA that generates a truncated or rounded product [Ref. 9]
could also be desirable in certain signal processing
applications.
Although the invention has been described with reference to
specific embodiments, the invention is not limited to these
embodiments. Rather, other embodiments of the invention
may be made without departing from the spirit and scope of
the invention. For example, references Ref. 10, Ref. 11, and
Ref. 12 describe other embodiments of the invention.
Hence, the present invention is deemed limited only by the
appended claims and the reasonable interpretation thereof.
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
Divide and Conquer algorithm:
The term divide-and-conquer is the literal translation of divide
et impera, which comes from Latin. It originally referred to the
political, military, and economic strategy often called divide ut
imperes. Technically, it means that larger concentrations and
groups ought to be divided (split) into smaller groups. This
way their power can be decreased, and the one implementing
the strategy can overpower them successfully.
So what is the correlation between the underlying idea in the
military sense and in computer science? To answer this, we
should focus on the question of what kinds of problems can be
solved using the divide-and-conquer approach: problems that
can be divided into two or more sub-problems which are
smaller in size, easier to solve, and ultimately lead to the
final solution.
Basically, if a problem can be divided into two or more sub-
problems of the same (or a related) type, and the solutions of
these sub-problems can ultimately be combined to find the final
solution to the original problem, then divide and conquer can
be used. It is especially important to emphasize that the sub-
problems should be of the same or a related type, because we
use the same means of evaluation on each of them.
Additionally, we do not divide only the big problem into its
sub-problems. We also divide the sub-problems, and so on,
until they become so simple that solving them directly
becomes possible (trivial, if need be). We know a sub-problem
has become trivial when it cannot be divided any further; at
that point it must actually be solved.
In the first part of this series we used the tree-like analogy to
see how the algorithm goes through the process of
execution. Let's see what happens if we apply that tree
analogy to divide-and-conquer. Obviously, the original
problem is located in the root node of the tree (this is level
zero). It is then divided into two or more sub-problems that
are located on the first level of the tree, and so on.
In the end, the trivial sub-problems are located in the leaves of
the tree (the last nodes, without children). When the program gets
there, those can be solved, and, recursively relying on the
recurrence relation, we can move further, solving the
previous sub-problems and so on. This way we can eliminate
each of the levels containing numerous sub-problems, moving
closer to the root node, which is the original problem.
The divide-and-conquer approach has three major elements:
divide, conquer, and combine. The first breaks a problem into
sub-problems, the second solves a sub-problem, and the last
combines the results coming from the previously solved sub-
problems to get the final solution to the original problem.
Almost always we opt for a recursive approach based on a
recurrence relation. In this case, the sub-problems are stored on
the procedure call stack. However, there are exceptions
where a non-recursive approach offers more flexibility and
freedom regarding the place where we store the solutions of
the sub-problems (stack, queue, etc.). We usually opt for this
when the recursive variation doesn't cut it (is too hard to
implement).
Divide and marriage before conquest is a variation of the
original divide-and-conquer technique. It essentially means
that merging (combining) the sub-problems can be done right
after each phase of division, not necessarily after we have
solved all of the sub-problems. In these cases the combination
of the solution(s) is already implemented in the recursion, so
by the time we finish all of the sub-problems, the solution is
ready. Sorting algorithms usually benefit from this variation
of the original D&C.
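Sorting is the textbook illustration of these ideas: merge sort divides the input, recurses on the halves, and combines (merges) as the recursion unwinds, exactly the divide-conquer-combine pattern described above. A minimal sketch in Python:

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, recurse, then combine (merge)."""
    if len(a) <= 1:                   # trivial sub-problem: solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # divide + conquer the left half
    right = merge_sort(a[mid:])       # divide + conquer the right half
    # combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The recursion bottoms out at lists of length 0 or 1 (the leaves of the tree analogy), and the merge step performs the "combine" at every level on the way back up.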
Fig 1. Diagram of Divide and Conquer method
Fig 2. Architecture of the FIR Filter
Fig 3. Architecture of the Sigma unit
Time Delay Comparison (nS):
Divide and Conquer        7.165
Baugh-Wooley Multiplier   9.393
Modified Booth Multiplier 9.573
Array Multiplier          17.78
Conventional FIR Filter   23.13

Area Comparison:
Divide and Conquer        15.116
Baugh-Wooley Multiplier   52.760
Array Multiplier          69.163
Modified Booth Multiplier 76.187
Conventional FIR Filter   71.812

Power Comparison (mW):
Divide and Conquer        128
Conventional FIR Filter   509
Baugh-Wooley Multiplier   1405
Array Multiplier          1518
Modified Booth Multiplier 2068
Fig 4. Architecture of the Control Unit
Fig 5. Architecture of the Programmable Switch Matrix (PSM)
Fig 6. Architecture of the Multi-Shifter-Adder (MSA)
Fig 7. Power Comparison (mW)
Fig 8. Time Delay Comparison (nS)
Fig 9. Area Comparison
Power Delay Product Comparison (pJ):
Divide and Conquer        917.12
Baugh-Wooley Multiplier   13197.16
Modified Booth Multiplier 19796.96
Array Multiplier          26990.04
Conventional FIR Filter   11773.17

Fig. 10 PDP Comparison
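The power-delay-product figures above are simply the product of each design's power (mW) and delay (nS) entries, which yields picojoules. A quick check in Python reproduces the reported values:

```python
# Power-Delay Product = power (mW) * delay (nS) -> picojoules.
# Values taken from the delay and power comparison tables in this paper.
designs = {
    "Divide and Conquer":        (128,  7.165),
    "Baugh-Wooley Multiplier":   (1405, 9.393),
    "Modified Booth Multiplier": (2068, 9.573),
    "Array Multiplier":          (1518, 17.78),
    "Conventional FIR Filter":   (509,  23.13),
}
for name, (power_mw, delay_ns) in designs.items():
    print("{}: {:.2f} pJ".format(name, power_mw * delay_ns))
```

For example, the divide-and-conquer design gives 128 x 7.165 = 917.12 pJ, matching the table.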
Conclusion:
The VLSI implementation of a programmable FIR Filter
with binary coefficients has been presented. The design has
two units, namely the Sigma unit and the Omega unit; the
design considerations for the Sigma unit are given in this
paper, along with the concept of the divide and conquer algorithm.
The embodiment of the design is efficiently realized using
only 2^(r-1) - 1 n-bit adder units. Depending on the application,
the n-bit adder units may be selected from basic adder
architectures like the ripple carry adder for minimal silicon
utilization, or from faster adder architectures like the carry-look-
ahead adder for higher-speed operation. For high-speed
applications, the Sigma unit and the MSA may be based on
faster tree architectures, such as the Wallace Multiplier [Ref. 6]
or the Dadda Multiplier [Ref. 7]. The simulation results are
verified and shown in Fig. 9. The design has been shown to be
easily extendable to FIR Filters with multi-bit coefficients
of arbitrary sign by employing a modified filter bank
structure. For systems where the input signal rate is
considerably slower than the achievable hardware speed, the
proposed architecture, which shares routing metallization among
multiple signals, might prove to be the optimal choice.
REFERENCES
[1]. A. D. Booth, A signed binary multiplication technique,
Quarterly Journal of Mechanics and Applied Mathematics 4,
1961, pp. 236- 240.
[2]. C. R. Baugh and B. A. Wooley, A two's complement parallel
array multiplication algorithm, IEEE Transactions on
Computers 22(12), 1973, pp. 1045-1047.
[3]. A. T. Fam, Optimal partitioning and redundancy removal in
computing partial sums, IEEE Transactions on Computers
36(10), 1987, pp. 1137-1143.
[4]. A. T. Fam, A multi-signal bus architecture for FIR Filters with
single bit coefficients, Proceedings of IEEE International
Conference on Acoustics, Speech, and Signal Processing
(ICASSP), 1984, pp. 11.11.1-11.11.3.
[5]. T. Poonnen and A. T. Fam, An area-efficient VLSI
implementation for programmable FIR Filters based on a
parameterized divide and conquer approach, Proceedings of
IEEE International Conference on Microelectronics (ICM),
2003, pp. 93-96.
[6]. C. S. Wallace, A suggestion for a fast multiplier, IEEE
Transactions on Electronic Computers 13, 1964, pp. 14-17.
[7]. L. Dadda, Some schemes for parallel multipliers, Alta
Frequenza 34, 1965, pp. 349-356.
[8]. J. R. Jump and S. R. Ahuja, Effective pipelining of digital
systems, IEEE Transactions on Computers 27(9), 1978, pp.
855-865.
[9]. E. E. Swartzlander Jr., Truncated multiplication with
approximate rounding, Record of IEEE Asilomar Conference on
Signals, Systems, and Computers (ACSSC), 1999, pp. 1480-
1483.
[10].T. Poonnen, A. T. Fam, A Novel VLSI Divide and Conquer
Implementation of the Iterative Array Multiplier, Proceedings of
IEEE International Conference on Information Technology
New Generations (ITNG), 2007, pp. 723-728.
[11]. T. Poonnen, A. T. Fam, A Novel VLSI Divide and Conquer
Array Architecture for Vector-Scalar Multiplication,
Proceedings of IEEE International Conference on IC Design and
Technology (ICICDT), 2007, pp. 41-44.
[12]. T. Poonnen, Efficient VLSI Divide and Conquer Array
Architectures for Multiplication, Ph.D. dissertation, State
University of New York at Buffalo, NY, 2007.
Email Archiving: The Best Way to Secure e-mail Data
Ashish Bagla, Ankit Singhal, Shubham Mittal
Department of Computer Science and Engineering
S. D. College of Engineering and Technology, Muzaffarnagar
baglaashish@rediffmail.com, singhal_ankit46@yahoo.com
Abstract - Statistics show that as much as 60 percent of
business-critical data now resides in email, making it
the most important repository of data your company
may own. This huge amount of data, which grows on a
daily basis, translates into a significant burden on
corporate storage resources. These facts, combined
with a recent onslaught of regulatory compliance
rules, are forcing organizations to take a deeper look at
email storage, retention, and archiving practices. It's not
just compliance with regulations that is driving this
trend to archive. As email messages increasingly take
center stage in headlines and lawsuits, email has
become the electronic equivalent of DNA evidence.
Having a system in place that takes this risk into
account is crucial for businesses that don't want to end
up at the center of one of these scandals. IT
professionals need to understand a range of business
and technology issues, from the key reasons for
archiving to the best type of archive to meet those
needs.
I. Need for Email Archiving
The majority of a company's business-critical data is
stored in email: data that impacts revenue, business
decisions, corporate reputations and end-user
productivity. With all of this at stake, it's not surprising
that email is subject to a growing range of legal,
regulatory compliance, and business requirements. It's
also not surprising that email can cause serious storage
issues for businesses.
By providing a secure, searchable, and centralized
repository for email, an archive can address the full
range of legal, regulatory, business and storage
challenges presented by email. These challenges, and
the opportunities presented by email archiving
solutions, are explored in more detail, below.
II. Electronic Discovery and Litigation
Electronic discovery, or e-discovery, usually
refers to the retrieval of data from a computer to meet a
legal request. However, the term can also be used
whenever data retrieval is required for regulatory
compliance, HR concerns, validation of client
correspondence or other corporate needs. As a result, all
organizations require search and discovery capabilities
for email, even if they are not currently involved in
litigation. Recently, the electronic discovery burden on
IT organizations has increased both in frequency and
demand. In fact, a survey performed by Osterman
Research, Inc. found that:
1. Two-thirds (66%) of IT organizations have
referred to email or IM archives or backup
tapes to support their organization's
innocence in a legal case.
2. Nearly two-thirds (63%) of organizations
have been ordered by a court or regulatory
body to produce employee email or instant
messages.
This is not surprising when you consider that email is
just as admissible in court as paper-based documents,
and can be requested for legal discovery at any time. In
fact, email evidence has been the smoking gun in
numerous cases of illegal corporate activity. Indeed,
according to the American Management Association,
27 percent of Fortune 500 companies have defended
themselves against claims of sexual harassment
stemming from inappropriate email and/or Internet use.
Without an archiving system with appropriate search
and discovery capabilities, these requests can add up to
a great deal of time, effort, and expense on the part of
the IT department. According to Osterman Research,
the IT department in a typical large organization spends
five hours per 1,000 users per week performing
backups, recovering users' deleted emails and dealing
with other backup and archiving-related tasks. That
works out to approximately $10 per user per year on
labor alone. For smaller organizations, Osterman
Research estimates that cost can go up to $34 per user
per year on just the labor involved in managing backups
and archiving.
In the case of litigation, the costs can rise even more
dramatically. An employee discrimination suit known
as Zubulake vs. UBS Warburg is a great example of
this. UBS Warburg archived outgoing and incoming
email for their registered traders on optical disk, with
no effective means of searching. When the Zubulake
discovery request sought internal mails stored on
backup tapes, UBS Warburg was forced to pay the cost
of recovery, despite the fact that recovery costs for a
sample set of email on five initial backup tapes cost
$19,003.43, or about $4,000 per tape. A second round
of discovery requests resulted in costs of more than
$100,000 before related litigation fees, costs that UBS
Warburg, the defendant in the case, was once again
responsible for covering.
Cost isn't the only concern when retrieving data for a
discovery request. In most cases, a strict time limit is
placed on when data must be produced. For example,
the SEC generally requires that requested email be
produced within 48 hours of a request. Failure to
produce requested email in a reasonable timeframe can
result in significant fines, as in a case involving J.P.
Morgan Chase & Co. The investment banking firm was
fined $2.1 million when they failed to produce all the
emails sought because backup tapes could not be found
in storage facilities, other tapes were damaged or
contained errors, or backup tapes were not made for
some periods.
In the case of litigation, seemingly unimportant email
messages can often support a company's claims of
innocence. A deleted email trail can not only weaken an
organization's defense, but it can also lead to a
presumption of guilt, potentially costing a business
millions in fines and settlements and causing
immeasurable damage to corporate credibility. Without
an archiving discovery system, it is also difficult to
limit searches to appropriate data before presentation
to litigators, creating opportunities for unnecessary data
to be exposed.
III. Regulatory Compliance
In recent years, the archiving of email messages has
become a business requirement driven by numerous
federal and state regulations including Sarbanes-Oxley,
SEC Rules 17a-3 and 17a-4, HIPAA, and NASD rules. With more
than 10,000 regulations on data and record retention
currently in force in North America, very few
businesses are exempt from some form of regulatory
scrutiny.
These regulations are forcing businesses to retain email
just as they must retain other formal corporate
recordsor face penalties that can include significant
fines or even criminal charges. As just one recent
example (September 2007), Morgan Stanley & Co.
paid $12.5 million to resolve charges with the Financial
Industry Regulatory Authority (FINRA) that a former
affiliate failed, on numerous occasions, to provide
emails to claimants in arbitration proceedings. With a
policy-driven archiving system in place, email can be
checked for compliance with regulations, and then
retained for the appropriate amount of time based on
email content. These solutions can also reduce the risk
of inappropriate content being exchanged, as employees
can be alerted when an email doesn't comply with
company policy.
IV. Legal Discovery Benefits of Using Proofpoint
Email Archiving
The growing cost of e-discovery, compounded by new
regulations such as the Federal Rules of Civil Procedure
(FRCP), has changed the way businesses must deal
with email. To be prepared for legal discovery, a
business must know where all its email data is stored,
and be able to search through and retrieve that data in a
short period of time. It must also apply a consistent
email retention policy, and have a way to enforce a
litigation hold by preventing data from being deleted if
necessary.
For companies that allow PSTs and rely on backup
tapes to store historical email, both the cost and time
involved in meeting these requirements can be very
high. Exposure to legal risk is also significant, with
missing or corrupt data resulting in spoliation of
evidence. This can lead to costly fines, guilty verdicts
and damaged reputations.
The on-demand Proofpoint Email Archiving solution
makes it easy to respond to e-discovery requests and
meet the requirements of the FRCP by:
- Storing all email in a central repository with
real-time search from a browser interface and
a simple retrieval process.
- Enforcing consistent retention policies and
litigation holds.
- Allowing legal counsel to conduct advanced
searches for early case assessment and full
e-discovery.
V. Storage Management
Nearly every IT department has struggled with the issue
of storage management for messaging servers. The
pressure to increase storage limits continues to grow as
the amount of email sent each day, as well as the size
of messages and attachments, increases. This ever-
increasing storage demand is driven in part by faster
connection speeds, and partly by the fact that email's
role as a primary channel for corporate communication
continues to expand. Radicati Research estimates that
corporate email traffic will almost double between 2005
and 2009, going from 64.9 billion to 120 billion messages a
day.
An archiving system, by automatically offloading data
into an archive, can dramatically improve the
efficiency of messaging servers, their reliability and the
speed with which they deliver messages.
VI. Knowledge Management
Beyond the capacity issues associated with storage
management, email has also become the de facto filing
system for many enterprises. According to IDC, as
much as 60 percent of business-critical information is
stored in email and other electronic messaging tools.
Everything from sales proposals and marketing plans to
competitor profiles, contracts, and personnel files can
exist, sometimes exclusively, in an employee's
inbox.
Maintaining an archive that allows end-users to easily
access and search all previous email can greatly
improve productivity. In addition, vital content cannot
be deleted by a disgruntled employee; in the event of an
employee leaving the company, the trail of information
managed by that staff member can be accessed in the
future.
VII. Is Your Organization Exposed?
Before embarking on an email archiving strategy, every
organization should evaluate their current email set-up
to identify key concerns and potential future issues.
A. Server Backups
By reviewing practices around email storage and
backup, a better understanding of the risks your
organization may face can be gained. Virtually all
organizations perform some type of regular backup of
their messaging servers in order to restore content in the
event of a server crash or other problem. Common
practice among most organizations is to create daily
backup tapes that are recycled and overwritten on a 30-,
60- or 90-day basis. This can be an effective disaster
recovery strategy, but it is not a viable archiving
strategy, despite common misperceptions.
First of all, this "snapshot" method means that you
never get a full view of your email repository. Email
that is sent and deleted between backups can't be
restored, making legal discovery difficult, if not
impossible.
copy of backup tapes for longer-term storage, these
tapes are typically very time-consuming and expensive
to restore from, as noted previously.
B. Mailbox Quotas
In the ongoing struggle to deal with excessive email
storage demands, most organizations set a per-user
quota for email. However, while most organizations
have limits on the amount of data that can be stored,
very few have enforced time limits on how long things
can be kept.
A per-user quota system can deal with basic storage
management issues. However, it exposes an
organization to a number of risks. With so much
business-critical data residing within email stores, end-
users will often find other ways to archive their data.
The result is confidential business data stored in
multiple locations (often as PST files) with no record of
these files and no way to easily retrieve them.
C. Key Issues: Archiving, Policy
The basis for a good archiving system, one that
reduces an organization's exposure to risk and puts it in
compliance with regulations, is a good policy.
Organizations that don't develop, communicate and
enforce formal policies that establish acceptable email
behavior and storage guidelines put both their
employees, and the organization itself, at serious risk of
fraud, lawsuits and loss of confidential data, not to
mention the risks of reputation damage, loss of
business, and decreased productivity.
To effectively manage the risks of corporate email,
businesses need to develop a set of formal policies to
guide the use of email. An effective policy will provide
specific rules for the acceptable use of email,
addressing the use of business email for personal
reasons, the forwarding of confidential documents, and
acceptable language and content, among other things. A
policy should also clearly identify required retention
periods and any email monitoring processes.
To ensure that staff fully understands the policy
guidelines, email policies should be made available and
easily accessible to all employees. This could mean
including it in employee handbooks or on company
intranets. All staff should be required to review, sign
and submit a copy of the policy to a manager or human
resources staff. In addition, some companies are now
taking the step of asking employees to sign a copy
along with their employment contract.
Ideally, an archiving solution will allow non-technical
staff with appropriate administrative rights to access
and make changes to the policy directly. This type of
direct access makes it much easier for the policy to be
updated.
D. Data Security Concerns
When considering how and where to archive your
organization's most confidential data, security is of
critical importance. Numerous recent security breaches
have highlighted how easily data can be lost in transit
or can be stolen directly from storage facilities,
particularly when stored in tape form. Reinforcing this
point, Proofpoint's March 2008 survey on enterprise
data loss prevention issues (see
http://www.proofpoint.com/outbound) found that more
than a quarter (27%) of US companies had investigated
the exposure of confidential, sensitive or private
information via lost or stolen storage media or mobile
devices in the past 12 months.
Many organizations decide to maintain data in-house on
the basis that it is more secure; however, this may not
be the case. In fact, there is some evidence that shows
that data may be more at risk from internal sources than
from external attacks. According to the 2005 Global
Security Survey released by the Financial Services
Industry practices of the member firms of Deloitte
Touche Tohmatsu (DTT), internal attacks on
information technology systems are surpassing external
attacks at the world's largest financial institutions.
Specifically, 35% of respondents confirmed
encountering attacks from inside their organizations
within the previous 12 months, compared to 26% from
external sources.
VIII. Email Archiving Options
There are three main types of archiving solutions
available. The first, the in-house option, will involve
the purchase and installation of storage hardware and
software for policy enforcement. The second option is
to contract with an outsourcer or application service
provider (ASP) that provides archiving as a hosted
service. Finally, businesses can deploy a hybrid
solution that combines certain elements of the in-house
and outsourced models. Understanding the differences
between these solutions is crucial in determining what
is best for your business.
A. In-house Archiving Solutions
To deploy an email archiving solution in-house, an
organization must define requirements, develop or
purchase appropriate software, and buy the needed
hardware. With the large amount of email data that
most organizations send and receive, archiving requires
a significant amount of storage hardware.
In-house email archiving solutions typically use a
dedicated, server-based approach that copies all email
from the message store into an archive. Some solutions
also require that software be installed on all PC clients
to facilitate searching and retrieval. In-house solutions
offer a high level of control and data security, as well as
convenient integration with other systems in the
organization's existing infrastructure.
However, these solutions can be costly to acquire and
often require dedicated, skilled personnel to maintain.
B. Hosted Archiving Solutions
An alternative to the in-house approach is to choose a
hosted solution. This allows a company to archive their
data at a third-party location, reducing the burden on
internal IT resources. Outsourcing also allows a
company to avoid the substantial cost of buying
hardware and software, as well as the inconvenience of
maintaining an archiving system. However, a serious
disadvantage with some hosted solutions is a lack of
data security. By storing confidential email data at an
external location, a business may open itself to security
breaches or Privacy Act concerns. In many hosted
solutions, archived data is not stored in encrypted form,
posing an even greater risk. In addition, without direct
integration with the organization's email server,
management of archives can be an additional challenge.
C. SaaS Hybrid Archiving Solutions
A third, emerging, approach is the SaaS (Software-as-a-
Service) hybrid model. The typical setup involves an
appliance installed at the customer's site, combined
with secure storage managed in the cloud by a third-
party provider. In some cases, encryption is performed
before the data leaves the customer location, ensuring
the content of archived email can never be accessed
from outside the customer's own network. The hybrid
approach combines the convenience of a hosted
solution with the more robust features and security of
on-premises solutions. The hybrid model is based on
the idea that customers want security and easy
integration, but also wish to avoid the high costs and
inconvenience of acquiring and managing large
amounts of storage. (And, of course,
storage costs are not the only consideration. In-house
solutions typically require high levels of administration,
maintenance, and ongoing support.) As organizations
better understand the long-term costs and maintenance
required to archive email, the hybrid model is likely to
become a common approach.
IX. Conclusion
Clearly, email use within the corporate environment
will only continue to rise. The Radicati Group stated
that worldwide email traffic increased by 35 percent in
2004, totaling 76.8 billion messages per day. Corporate
emails accounted for 83 percent of this traffic. If left
unchecked, corporate email can leave a business
vulnerable. With regulatory compliance, legal
discovery and storage management concerns growing,
the question is not whether your organization will need
an archiving solution, but rather, when it will need one.
Starting now to understand the key risks, rewards and
reasons for archiving will put you in a better position to
make the right choice when the time comes.
References:
[1]. "IDC Technology Spotlight: EMC
SourceOne Email Management: A Next
Generation Email Archiving Solution," EMC
magazine.
[2]. Email Archiving system by Sandy Cosser.
[3]. Email Archive. Getting Started. Guide.
Version 1.0. Date 20-03-2007. Getting Started
with Archive Manager
[4]. The Impact of Regulations on Email Archiving
Requirements by Captaris Alchemy.
[5]. Email archiving article by Waterford
Technology.
[6]. Email archiving article by INFO~TECH
research Group.
Y SENSOR FOR TEMPERATURE
MEASUREMENT
Shailesh Kumar Singh Praveen Kumar
Department of Electrical & Electronics Department of Electronics & Communication
KIET, Ghaziabad KIET, Ghaziabad
shailesh.iet85@gmail.com Praveen_tu80@yahoo.com
Abstract - This paper analyzes some of the basic principles
involved in the design of an optical-fiber-based displacement
sensor and applies them to the design of a weight measurement
system. Bifurcated optical fiber sensors can have many
applications. Having high sensitivity over short distances, the
bifurcated optical fiber (Y-sensor) is well suited not only for
control applications as a position sensor, but also for gauging
and surface assessment. In addition, it can be utilized as a
non-contact pressure sensor if used in conjunction with
reflective diaphragms moving in response to pressure. The
Y-sensor can also be applied as a temperature transducer if
used in conjunction with deformable bimetal sensors.
Keywords: optical fiber sensor, Y-sensor, bifurcated.
I. INTRODUCTION
Many methods have been developed to measure temperature.
Most of these rely on measuring some physical property of a
working material that varies with temperature. One of the most
common devices for measuring temperature is the glass
thermometer. This consists of a glass tube filled with mercury
or some other liquid, which acts as the working fluid.
Temperature increases cause the fluid to expand, so the
temperature can be determined by measuring the volume of the
fluid. Such thermometers are usually calibrated, so that one can
read the temperature, simply by observing the level of the fluid
in the thermometer. Another type of thermometer that is not
really used much in practice, but is important from a theoretical
standpoint, is the gas thermometer [5]. Optical fiber sensors have advantages such as light weight, small size, high sensitivity, large bandwidth, and ease of signal light transmission. However, in many fields of application, optical fiber sensors must compete with other, rather mature technologies such as electronic measurement. Photonic sensing schemes utilizing fiber-optic technologies have been studied since the 1970s, and various sensing principles, special devices for the sensors, and new applications have been created. Recently, several optical fiber sensors have been used in practical application fields, where they provide unique functions compared with other sensing principles [2]. Optical fiber sensors have been developed for a variety of applications in industry, medicine, defense and research. These applications include gyroscopes for automotive navigation systems [3], and strain sensors for smart structures and for the measurement of various physical and electrical parameters like temperature, pressure, liquid level, acceleration, voltage and current in process control applications [3], [4]. Various types of sensors are also used to measure temperature. One of these is the thermistor, or temperature-sensitive resistor. Most thermistors have a negative temperature coefficient (NTC), meaning the resistance goes up as the temperature goes down. Of all passive temperature measurement sensors, thermistors have the highest sensitivity (resistance change per degree of temperature change). Thermistors do not have a linear temperature/resistance curve [5].
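As an aside, the nonlinear resistance-temperature curve of the NTC thermistor mentioned above is commonly approximated with the standard Beta-parameter model; the sketch below uses illustrative, assumed values (R0 = 10 kΩ at 25 °C, B = 3950 K), which are not taken from this paper.

```python
import math

def ntc_resistance(t_celsius, r0=10_000.0, t0=298.15, beta=3950.0):
    """Beta-parameter model: R(T) = R0 * exp(B * (1/T - 1/T0)), T in kelvin."""
    t = t_celsius + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

# Resistance falls as temperature rises (negative temperature coefficient),
# and the drop per degree shrinks at higher temperatures (nonlinearity).
r25, r35, r45 = (ntc_resistance(t) for t in (25, 35, 45))
assert r25 > r35 > r45
assert (r25 - r35) > (r35 - r45)   # unequal steps: the curve is not linear
```

The second assertion is exactly the point made in the text: equal temperature steps do not give equal resistance steps, so a linearizing circuit or lookup table is needed.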
Fig. 1.1 depicts the basic setup of a typical multimode intensity sensor using low-technology fiber for displacement measurement purposes. It consists of an optical source (e.g., laser, LED), a photodetector, an optical power meter, a Y-branched fiber, and a planar mirror. The amount of light returning to the detector depends on the distance h between the end of the fiber and the mirroring surface being monitored.
Fig. 1.1 Basic configuration of a bifurcated fiber displacement sensor (Y-sensor): bifurcated fiber (BF), optical source (OS), planar mirror (M), photodetector (PD), optical power meter (OPM)
II. THEORETICAL APPROACH
Recently, fiber-optic systems for point measurement and for distributed measurement along the winding have been tested for temperature monitoring within large power transformers [7]. The accuracy obtained is sufficient for on-line monitoring: 10 °C for point sensors, and 10 °C/m for distributed measurement systems. However, present fiber-optic systems have some clear disadvantages. First, the costs of these systems are still noticeable, contradicting the idea of a reliable and low-cost monitoring system. In addition, especially for distributed solutions, the mechanical characteristics of fibers are not capable of assuring sufficient system reliability. In one technique, the sensing element was obtained by replacing a small portion of the plastic cladding of the fiber with a reference liquid (RL) whose refractive-index change with temperature is known. A temperature change in the reference liquid results in a modulation of the refractive index that modifies the
propagation regime along the fiber. This effect allowed the
temperature of the fluid, in which the probe is immersed, to be
evaluated simply by monitoring the fiber output using low-
cost, dual-slope analog-to-digital (ADC) processing hardware.
Light propagation along a large-core optical fiber can be analyzed by the classical ray method. A ray undergoes total reflection at the core boundary if the angle θ between the ray path and the fiber axis does not exceed the complementary critical angle θc of the core-cladding interface which, for a step-index fiber, reads:

θc = cos⁻¹(n_cl / n_co)    (1)

where n_co and n_cl are the core and cladding refractive indices, respectively. When total reflection occurs, the ray propagates along the fiber without any power loss (bound ray). On the contrary, when θ is greater than θc, the ray leaks power into the cladding at each reflection at the core boundary (leaky ray). For a given fiber excitation, a localized change of the cladding refractive index modifies the propagation regime along the fiber, i.e., causes power coupling between bound and leaky rays. This coupling results in modulation of the propagating bound-ray power that can be detected at the fiber end. Temperature is one of the most significant influencing parameters for the refractive index of fluid substances. In particular, the temperature coefficient of the refractive index is almost always negative and often very much higher, in absolute value, than that of the
fiber silica core. Thus, if a little portion of the cladding along
an optical fiber is replaced by a liquid with a suitable refractive
index, a temperature variation of the liquid acts as a localized
refractive index change that can be detected at the fiber end by
a power measurement. To be able to determine the temperature
value from such measurements, the relationship between the
fiber output power and the temperature of the liquid should be
known. Although this relationship can be numerically
determined under particular assumptions (propagation of
meridional rays only, no polarization-dependent effects, etc.) a
more accurate evaluation can be obtained empirically by means
of a calibration procedure. Obviously, this calibration should
be carried out for every liquid whose temperature is to be
measured. This problem was overcome by inserting the unclad
zone in a tank, containing a reference liquid. The small
thickness and dimension of the tank (compared with the
external fluid mass) allow the reference liquid temperature to
follow the external unknown fluid temperature with a very
low response time. For a given fiber and then for a given core
refractive index, the reference liquid can be chosen with a
suitable refractive index versus temperature characteristic. This
implies that only a one-off calibration of the fiber-reference
liquid pairing will be required, without specifying or
constraining the optical characteristics of the fluid being
measured.
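A minimal numeric sketch of Eq. (1): a ray is bound while its angle to the fiber axis stays below the complementary critical angle, and leaky otherwise. The refractive indices used here (n_co = 1.458 for a silica core, n_cl = 1.44 for the cladding) are illustrative assumptions, not values from the paper.

```python
import math

def complementary_critical_angle(n_co, n_cl):
    """theta_c = arccos(n_cl / n_co) for a step-index fiber (Eq. 1)."""
    return math.acos(n_cl / n_co)

def is_bound_ray(theta, n_co, n_cl):
    """A ray is bound (totally reflected, no power loss) if theta <= theta_c."""
    return theta <= complementary_critical_angle(n_co, n_cl)

theta_c = complementary_critical_angle(1.458, 1.44)      # ~0.157 rad (~9 deg)
assert is_bound_ray(0.5 * theta_c, 1.458, 1.44)          # bound ray
assert not is_bound_ray(2.0 * theta_c, 1.458, 1.44)      # leaky ray

# A rise in the cladding (reference liquid) index shrinks theta_c, turning
# some bound rays leaky -- the power modulation detected at the fiber end.
assert complementary_critical_angle(1.458, 1.45) < theta_c
```

The last assertion is the sensing mechanism in miniature: a temperature-induced index change in the reference liquid moves θc, which redistributes power between bound and leaky rays.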
Fig 1.2 Hardware layout. PS is the power supply; LD is the laser diode; F is
the fiber; RL is the reference liquid; T is the tank; IC is the insulation
cylinder; PD is the photodiode, CC is the conditioning circuit.
The application of a fiber-optic temperature sensor for measuring power transformer hot-spot temperatures has been presented. The laboratory metrological characterization attributes a 0.2 °C resolution and an overall accuracy of 0.5 °C to the prototype. Experimental tests carried out on a 25 kVA power transformer confirm the expected absence of electromagnetic susceptibility and indicate that the sensor response time is low enough to allow the hot-spot temperature to be accurately monitored in a range of 0–130 °C [6].
Fig 1.3 output characteristic of the sensor
III. EXPERIMENTAL SETUP
As already discussed, the Y-sensor has several applications; it can also be applied to measure temperature. The arrangement for measuring temperature is shown in Fig. 1.4; the principle is based on the expansion of a liquid as the temperature increases. A glass tube filled with liquid is used here, connected to one side of a plastic pipe. The Y-sensor is fitted at the other end of the plastic pipe, and a small part of the pipe is filled with mercury. The surface of the mercury acts as the reflecting surface: the laser light reflects back from it. In the arrangement of Fig. 1.4, the glass tube is filled with water, the plastic pipe holds a small column of mercury, and a laser is used as the optical light source. The mercury is placed at a position chosen according to the requirement, because this position sets the temperature range. When the glass tube is heated, the temperature of the water increases and the water expands, so the resulting air pressure displaces the mercury. As the mercury moves towards the end of the Y-sensor, the output voltage increases, as shown in Fig. 1.6. This arrangement is very sensitive over a small temperature span, i.e., about 8 °C; this span can be shifted to any temperature by changing the initial position of the mercury. As the figure shows, there are four graphs corresponding to four different initial positions.
Fig 1.4 Arrangement to measure the Temperature using Y-sensor
IV. EXPERIMENTAL RESULT
In the first graph the initial position is 3.5 cm and the temperature range is 34 °C–42 °C; in the second graph the initial position is 5.5 cm and the range is 38 °C–46 °C; in the third graph the initial position is 8.5 cm and the range is 44 °C–52 °C; and in the fourth graph the initial position is 13.5 cm and the range is 56 °C–64 °C. The results shown in the figure are for water. The span of the temperature range can be increased by using a liquid whose thermal expansion coefficient is small.
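The four reported operating windows suggest a roughly linear relation between the initial mercury position and the start of the measurable range; the sketch below fits a least-squares line to the four (position, start-temperature) pairs taken from the text. The linearity itself is an assumption made here for illustration, not a claim of the paper.

```python
# Four (initial mercury position in cm, start of temperature range in deg C)
# pairs reported in the experimental results above.
positions = [3.5, 5.5, 8.5, 13.5]
t_start = [34.0, 38.0, 44.0, 56.0]

# Ordinary least-squares line t = a * x + b, computed by hand.
n = len(positions)
mx = sum(positions) / n
mt = sum(t_start) / n
a = sum((x - mx) * (t - mt) for x, t in zip(positions, t_start)) / \
    sum((x - mx) ** 2 for x in positions)
b = mt - a * mx

# Each window spans 8 deg C, so the full window at position x is roughly
# [a*x + b, a*x + b + 8].
predicted = [a * x + b for x in positions]
max_err = max(abs(p - t) for p, t in zip(predicted, t_start))
assert max_err < 1.0   # the four points are close to, but not exactly, linear
```

The fitted slope comes out near 2.2 °C per cm of initial mercury position, which is one way to summarize how repositioning the mercury shifts the 8 °C measurement window.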
Fig 1.5 Circuit diagram that gives the voltage corresponding to temperature
Fig 1.6 Graph between Temperature and voltage for different initial position
of the mercury.
V. CONCLUDING DISCUSSION
Fiber-optic sensors are very simple, reliable, highly sensitive and very accurate compared to conventional types of sensors. They are immune to temperature-related errors and electromagnetic interference. In the case of this temperature measurement system the sensitivity is very high, applied over a small measurement span of about 8 °C. The temperature range can be shifted according to the requirement. In this case the voltage varies from 0 V to 0.6 V; by adding one more amplifier stage of gain 10, the voltage would vary from 0 V to 6 V, so the output could easily be displayed digitally using a microcontroller and an ADC.
REFERENCES:
[1] J. B. Faria, "A Theoretical Analysis of the Bifurcated Fibre Bundle Displacement Sensor," IEEE Transactions on Instrumentation and Measurement, vol. 47, no. 3, pp. 742-747, June 1998.
[2] P. M. B. S. Girao, O. A. Postolache, J. Faria, and J. M. C. D. Pereira, "An overview and a contribution to the optical measurement of linear displacement," IEEE Sensors Journal, vol. 1, no. 4, pp. 322-331, December 2001.
[3] S. R. Lang, D. J. Ryan, and J. P. Bobis, "Position sensing using an optical potentiometer," IEEE Trans. Instrum. Meas., vol. 41, pp. 902-905, Dec. 1992.
[4] Pedro M. B. Silva, Octavian A. Postolache, and Jose M. C. Dias Pereira, "An Overview and a Contribution to the Optical Measurement of Linear Displacement," IEEE Sensors Journal, vol. 1, no. 4, December 2001.
[5] http://en.wikipedia.org/wiki/temperaturemeasurement
[6] Buffa, G. Perrone, and A. Vallan, "A Plastic Optical Fiber Sensor for Vibration Measurements," I2MTC 2008 - IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 12-15, 2008.
[7] Giovanni Betta and Antonio Pietrosanto, "An enhanced fiber-optic temperature sensor system for power transformer monitoring," IEEE Transactions on Instrumentation and Measurement, vol. 50, no. 5, October 2001.
Emergency Connectivity Protocol for Mobile
IPv6
M. Altamash Sheikh
School of ICT, Gautam Buddha University, Gr. Noida, U.P.
altsheikh@gmail.com
Prof. Sanjay Jasola
School of ICT, Gautam Buddha University, Gr. Noida, U.P.
sjasola@yahoo.com
Sandeep Singh
School of ICT, Gautam Buddha University, Gr. Noida, U.P.
er.sandeep_vhdl@yahoo.com
Dr. Karan Singh
School of ICT, Gautam Buddha University, Gr. Noida, U.P.
karancs12@gmail.com
ABSTRACT
Mobile IPv6 is a standard communication protocol; hence it is necessary to increase the reliability of the MIPv6 protocol. This protocol provides a mechanism for a Mobile IPv6 node to establish wireless connectivity with a new access router when the node's wireless connectivity with the home agent fails. This paper also presents an algorithm for switching back to the Home Agent when the Home Agent's wireless connection starts working properly again.
Keywords: Mobile IPv6, Mobile Node, Home Agent, New Access Router, Correspondent Node, Router
INTRODUCTION
With time, the number of wireless users will increase. Each user will have multiple devices such as a laptop, iPhone, iPod, iPad, and cell phone, and each device requires a separate, unique IP address. Hence the demand for different IP-based services will also increase, and it becomes essential to provide connectivity to devices even in worst-case scenarios. As users are mobile, devices are required to support user mobility as well. Mobile IP is an Internet Engineering Task Force (IETF) standard communication protocol, designed to allow mobile device users to move from one network to another while maintaining a permanent IP address.
Mobile IPv4 (RFC 3344 [12], with updates in IETF RFC 4721 [14]) is a derivative protocol of IPv4 [4] and has an address space of 32 bits. Owing to the shortage of available addresses, techniques such as NAT [16] and subnetting are widely practiced.
In 1998, RFC 2460 [17] proposed a new version of the Internet Protocol, called IPv6. This protocol has 128 bits of address space, an amount of IP addresses that can satisfy future requirements. The basic work for mobility in IPv6 was done in RFC 3775 [15]. This document specifies the basic procedures required for the proper functioning of the protocol, such as agent discovery, agent advertisement, registration, the return routability procedure (RRP), and movement detection of the mobile node. Mobile IP has several entities: the Home Agent (HA), Mobile Node (MN), Care-of Address (CoA), and Correspondent Node (CN). As soon as the MN becomes mobile and reaches a foreign network, it uses the IPv6 neighbor discovery protocol [14]: the MN listens to router advertisements and uses the information to determine that it has reached a new link. As the MN requires a CoA, it sends a binding update (BU) to the HA; to update its current location with the CN, the MN uses the return routability procedure.
Consider a scenario where an MN is operating in the overlapping area of two cells from different access routers. The MN is connected to its home agent through the home network over a wireless link. Suddenly, the wireless link of the home agent fails due to a hardware or software problem, but the wired link of the home agent keeps working properly. In this scenario the MN is disconnected, even though another wireless link from another access router is available. This paper presents an algorithm with the help of which an MN can establish urgent connectivity through the available access routers.
In the following Diagram 1 there is an MN operating in the overlapping area of two access routers, one of which is the HA of the MN. The MN is connected to the HA over a wireless link. There is a CN which is connected to its own HA on its own network, and the routers are connected to each other by wired lines. Another access router also covers the area in which the MN lies. The MN is working properly with its HA over the wireless link; due to some hardware or software problem, the wireless connection of the HA fails and the MN is disconnected. Here, just due to the failure of one link, the entire communication collapses, even though another access router is available and the MN lies in its coverage area. The specialty of the algorithm is that the connectivity establishment is limited to a certain part of the network, and the CN does not need to change the HA address of the MN in its update list. Therefore the MN is also not required to restart the RRP with the CN.
Diagram 1
Connectivity Establishment Algorithm
The working of the connection establishment algorithm, with its sequence of interactions, is as follows.
1. The link between the MN and the HA goes down. The MN detects this with the help of L2 information.
2. Another access router periodically sends router advertisements; such an advertisement is received by the MN.
3. The MN sends a DHCP request to the access router, asking it to register as its temporary HA.
4. For the authentication of the MN, the new access router sends an authentication request to the MN's HA.
5. The HA acknowledges the request of the access router by establishing a tunnel between the HA and the access router.
6. The access router acknowledges the MN's request by sending a DHCP reply to the MN.
7. The MN starts operation with the access router, as packets are tunneled from the HA to the access router.
SWITCHING BACK ALGORITHM
The working of the switching back algorithm, with its sequence of interactions, is as follows.
1. The wireless link between the HA and the MN comes back up. The MN detects this with the help of L2 information.
2. The MN receives the agent advertisement sent by the HA.
3. The MN sends a DHCP request to the HA to get re-associated with the HA.
4. The HA sends back a DHCP response to the MN.
5. The HA sends a de-registration request to the access router for de-registration of the MN from the access router.
6. This de-registration request is forwarded to the MN for acknowledgement by the access router.
7. As the MN receives the de-registration request from the access router, it replies with an acknowledgement to the access router (as the wireless link of the MN is already up with the HA).
8. As the access router receives the acknowledgement, it takes down the wireless link.
9. It then sends a reply back to the HA acknowledging the de-registration.
10. Finally, the tunnel between the HA and the access router is withdrawn by the HA.
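The two message sequences above can be sketched as a toy simulation. The class, the message strings, and the `attached_to`/`tunnel_to_ar` fields below are illustrative assumptions made to show the ordering of the steps; they are not part of any protocol specification.

```python
class Node:
    """Minimal stand-in for MN, access router (AR), or HA in the sequences."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def recv(self, msg):
        self.log.append(msg)

def establish_connectivity(mn, ar, ha):
    """Sequence used when the HA's wireless link fails."""
    mn.recv("L2: HA link down")                    # step 1
    mn.recv("router advertisement from AR")        # step 2
    ar.recv("DHCP request (act as temporary HA)")  # step 3
    ha.recv("auth request for MN from AR")         # step 4
    ha.tunnel_to_ar = True                         # step 5: HA <-> AR tunnel
    mn.recv("DHCP reply from AR")                  # step 6
    mn.attached_to = ar.name                       # step 7: traffic tunneled HA -> AR

def switch_back(mn, ar, ha):
    """Sequence used once the HA's wireless link is up again."""
    mn.recv("L2: HA link up")                      # step 1
    mn.recv("agent advertisement from HA")         # step 2
    ha.recv("DHCP request (re-associate)")         # step 3
    mn.recv("DHCP response from HA")               # step 4
    ar.recv("de-registration request from HA")     # step 5
    mn.recv("de-registration request via AR")      # step 6
    ar.recv("de-registration ack from MN")         # step 7
    ar.recv("wireless link taken down")            # step 8
    ha.recv("de-registration ack from AR")         # step 9
    ha.tunnel_to_ar = False                        # step 10: tunnel withdrawn
    mn.attached_to = ha.name

mn, ar, ha = Node("MN"), Node("AR"), Node("HA")
establish_connectivity(mn, ar, ha)
assert mn.attached_to == "AR" and ha.tunnel_to_ar
switch_back(mn, ar, ha)
assert mn.attached_to == "HA" and not ha.tunnel_to_ar
```

Note how the CN never appears in either sequence: the whole recovery is confined to the MN, the access router and the HA, which is exactly the locality property the paper claims for its algorithm.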
CONCLUSION
This protocol provides a mechanism for MIPv6 node
to establish wireless connectivity with any available
access router in case failure of wireless connectivity
with its home agent. This paper assumes that MIPv6
node is operating in an overlapping area of HA and
one access router only. The scenario of MIPv6
operating in an overlapping area of multiple access
router along with HA is out of scope of this paper.
REFERENCES
[1] J. B. Postel et al., "Internet Protocol," IETF RFC 791, Sept. 1981.
[2] J. B. Postel et al., "Transmission Control Protocol," IETF RFC 793, 1981.
[3] R. Droms, "Dynamic Host Configuration Protocol," IETF RFC 1541, 1993.
[4] C. Perkins, ed., "IPv4 Mobility Support," IETF RFC 2002, Oct. 1996.
[5] C. Perkins, "IP Encapsulation within IP," IETF RFC 2003, October 1996.
[6] C. Perkins, "Mobile IP," IEEE Communications Magazine, May 1997.
[7] S. Kent and R. Atkinson, "IP Authentication Header," IETF RFC 2402, November 1998.
[8] S. Thomson and T. Narten, "IPv6 Stateless Address Autoconfiguration," IETF RFC 2462, Dec. 1998.
[9] S. Das, A. Misra, P. Agrawal, and S. K. Das, "TeleMIP: Telecommunications-Enhanced Mobile IP Architecture for Fast Intradomain Mobility," IEEE Personal Communications, pp. 50-58, August 2000.
[10] D. Eom, H. Lee, M. Sugano, M. Murata, and H. Miyahara, "Improving TCP handoff performance in Mobile IP based networks," Computer Communications, vol. 25, no. 7, pp. 635-646, 2001.
[11] C. Castelluccia, "HMIPv6: A hierarchical mobile IPv6 proposal," ACM SIGMOBILE Mobile Computing and Communications Review, vol. 4, pp. 48-59, 2000.
[12] C. Perkins, ed., "IP Mobility Support for IPv4," IETF RFC 3344, Sept. 2002.
[13] C. Perkins, "IP Mobility Support for IPv4," IETF RFC 3220, January 2002.
[14] C. Perkins, "Mobile IPv4 Challenge/Response Extensions," IETF RFC 4721, January 2007.
[15] Perkins et al., "Mobility Support in IPv6," IETF RFC 3775, June 2004.
[16] P. Suresh, "IP Network Address Translator (NAT) Terminology and Considerations," RFC 2663, Aug. 1999.
[17] S. Deering, "Internet Protocol, Version 6 (IPv6) Specification," RFC 2460, Dec. 1998.
Tamil Character Recognition using (2D)² Principal Component Analysis
L. Sumathi (1), R. Yashmin Begam (1), E. Menaka (2)
(1) Department of Information Technology, Vivekanandha College of Engineering for Women; lssumathi01@gmail.com, yashwaive@gmail.com
(2) AP/IT, Vivekanandha College of Engineering for Women; menakaparthi80@gmail.com
Abstract
This paper presents a new application of 2-directional two-dimensional Principal Component Analysis ((2D)²PCA) to the problem of online character recognition in Tamil script. A novel set of features employing polynomial fits and quartiles in combination with conventional features is derived for each sample point of the Tamil character obtained after smoothing and resampling. These are stacked to form a matrix, from which a covariance matrix is constructed. A subset of the eigenvectors of the covariance matrix is employed to obtain the features in the reduced subspace. Each character is modeled as a separate subspace, and a modified form of the Mahalanobis distance is derived to classify a given test character. Results indicate that the recognition accuracy of the (2D)²PCA scheme shows an approximately 3% improvement over the conventional 2DPCA technique.
Keywords: Principal Component Analysis (PCA), (2D)²PCA, Mahalanobis Distance.
I. INTRODUCTION
In this paper, we propose a recognition system for Tamil characters using a technique called 2-directional two-dimensional Principal Component Analysis ((2D)²PCA). Tamil is a classical South Indian language spoken by a segment of the population in countries such as Singapore, Malaysia and Sri Lanka, apart from India. The Tamil alphabet comprises 247 letters (consonants, vowels and consonant-vowel combinations). Each letter is represented either as a separate symbol or as a combination of discrete symbols, which we refer to as characters in this work. Only 156 distinct characters are sufficient to recognize all the 247 letters [4]. Samples of each of these characters form a separate class.
In a handwriting recognition system, a methodology is developed to recognize the writing when a user writes on a pressure-sensitive screen using a stylus that captures the temporal information. Handwritten script recognition engines exist for languages like Latin [1], Chinese [2] and Japanese [3]. However, little attention has been devoted to developing similar engines for Indian languages.
Dimensionality reduction techniques like Principal Component Analysis [6] have also been employed for recognition. In this work, we propose an adaptation of the (2D)²PCA technique [7] for character feature extraction in a reduced subspace. Each of the 156 classes is separately modeled as a subspace. Contrary to conventional PCA, (2D)²PCA operates on matrices rather than 1-D vectors. A set of local features (a novel set of features combined with conventional features) is derived for each sample point of the preprocessed character. The features corresponding to a sample point are stacked to form the rows of a matrix, referred to as the character matrix in this work. A covariance matrix of significantly smaller size than the one obtained in PCA is constructed from the character matrix. In order to represent the features in a reduced subspace, we project the character matrix onto a subset of the eigenvectors of the covariance matrix. For the classification of a test character, we employ a modified form of the Mahalanobis / Euclidean distance. To the best of our knowledge, there have been no attempts in the literature to apply the 2DPCA technique to online character recognition till date. Most of the applications for which this technique has been proposed have been image-based, such as face recognition [7].
II. PREPROCESSING
Prior to feature extraction and recognition, the input raw character is smoothed to minimize the effect of noise. The character is then resampled to obtain a constant number of points uniformly sampled in space, following which it is normalized by centering and rescaling [6]. Preprocessing is an essential stage prior to feature extraction, since it controls the suitability of the results for the successive stages. The stages in a pattern recognition system operate in a pipeline fashion, meaning that each stage depends on the success of the previous stage in order to produce optimal/valid results.
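The preprocessing pipeline just described (smoothing, spatial resampling to a fixed number of points, then centering and rescaling) can be sketched as follows. The moving-average smoother and the target point count of 60 are assumptions made for illustration, since the paper does not fix these details.

```python
import math

def smooth(pts, k=3):
    """Moving-average smoothing over a window of k neighbouring points."""
    out = []
    for i in range(len(pts)):
        win = pts[max(0, i - k // 2): i + k // 2 + 1]
        out.append((sum(p[0] for p in win) / len(win),
                    sum(p[1] for p in win) / len(win)))
    return out

def resample(pts, np_points=60):
    """Resample to np_points uniformly spaced in arc length (space, not time)."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out, j = d[-1], [], 0
    for i in range(np_points):
        s = total * i / (np_points - 1)
        while j < len(d) - 2 and d[j + 1] < s:
            j += 1
        seg = d[j + 1] - d[j] or 1.0     # guard against duplicate points
        t = (s - d[j]) / seg
        out.append((pts[j][0] + t * (pts[j + 1][0] - pts[j][0]),
                    pts[j][1] + t * (pts[j + 1][1] - pts[j][1])))
    return out

def normalize(pts):
    """Centre at the centroid and rescale to unit maximum radius."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    centred = [(x - cx, y - cy) for x, y in pts]
    r = max(math.hypot(x, y) for x, y in centred) or 1.0
    return [(x / r, y / r) for x, y in centred]

stroke = [(t, t * t / 10.0) for t in range(25)]   # toy raw pen trajectory
pts = normalize(resample(smooth(stroke), 60))
assert len(pts) == 60
assert max(math.hypot(x, y) for x, y in pts) <= 1.0 + 1e-9
```

Resampling in space rather than in time is the important design choice: it makes the fixed-length feature matrix insensitive to how fast the user wrote the stroke.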
2.1 Recognition Principle
A block diagram of the newly developed
handwritten character recognition method.
The radial distance and angle in radians of the
sample point with respect to the centroid of the
character are computed to form two features F
3
i
and F
4
i
.
Radial distance and polar angle from the
segment mean:
We find the length of the preprocessed character
and divide it into 4 segments. Samples lying within
a segment are used to compute the mean for that
segment. The radial distance and polar angle of the
sample point of the character under consideration is
computed as follows: when it lies in segment k,
(1<= k <= 4) its distance and angle from the mean
of that segment is the feature F
5
i
and F
6
i
.
Polynomial fit coefficients:
At every sample point, we intend to
relate its position with respect to its immediate
neighbors. In order to exploit this local property,
we take a sliding window of size M (M odd)
centered on the sample point and perform an N
th
order polynomial fit on the samples within the
Table Reference
Pattern
window using numerical techniques. We use the
resulting N+ 1 polynomial coefficient as the
features. For our work, we stake M=3, N=2
(quadratic fit) and accordingly denote the features
i i
Preprocessing
Feature
Extraction
Matches
Recognition
as F
7
i
, F
8
and F
9
.
Autoregressive (AR) Coefficients:
We separately model the x and y
coordinates of the sample point by two
Nth
order
autoregressive (AR) processes and use the
Fig 1. Recognition Principle
result
resultant AR coefficients also as features. We
employ a 2
nd
order AR process and accordingly
III. FEATURE EXTRACTION
obtain the features F
10
i
, F
11
i
, F
12
i
, F
13
i
, F
14
i
and F
15
i
.
Let the number of sample points in the
preprocessed character be Np. At each sample point
(x
i
, y
i
) for 1<= i <= N
p
of the resampled character,
we extract a set of local features described in
Section 3.1. Let
Fj
i
represent the j
th
feature derived
from the i
th
sample point of the character. This
notation has been adopted here merely to index the
features and not to assign any weightage to them.
In case of multistroke characters, we concatenate
the strokes into a single stroke, retaining the stroke
It is to be explicitly stated that for obtaining the
polynomial and AR coefficients of the first and last
sample points of the character, we assume that the
last sample point of the last stroke is connected to
the first sample point of the first stroke. Such a
connection ensures that the notion of neighborhood
is not lost while computing the polynomial fit
features for the first and last sample point of the
character. The set of 15 features obtained at a
sample point (x
i
, y
i
) are concatenated to form a
i
of size 1 X 15.
order, before feature extraction.
feature vector F
v
F
v
i
= [F
1
i
, F
2
i
......F
i
] .....eq.1
3.1 Local Features
Normalized x-y coordinates:
The normalized x and y coordinates of the sample
point are used as features and are denoted by F
1
i
and F
2
i
.
Radial Distance and Polar Angle:
We then construct a matrix C (referred to as the
character matrix in this work) by stacking the
feature vectors of the sample points of the
preprocessed character.
243
National Conference onMicrowave, Antenna &Signal Processing April 22-23, 2011
FV
1
C = FV
2
.
.....eq.2
.
FV
N
p
component vectors { Y
ic
k
} are draw independently
from a multivariate Gaussian probability
distribution function of the form [ 5]s
P(Y
ic
)= 1 - -
------------- e
1/2(Y
-
Y
)
T(Y
-
Y
) ..eq6
It can be observed that the i
th
row of the character
matrix C corresponds to the feature vector derived
for the i
th
sample point. Therefore the size of matrix
where
Y
(2 ) (
N
p)/2

1/2
ic ic ic ic
Cis Np 15.
ic
=1/M
and
M - -
ic
- )
T
2
IV. (2D) PCA
(2D)
2
PCA works in the row and column direction
of characters respectively. That is, (2D)
2
PCA learns
an optimal matrix X from a set of training character
reflecting information between rows of characters,
and then projects an m by n character A onto X,
yielding an m by d matrixY=AX. Similarly, the
alternative (2D)
2
PCA learns optimal matrix Z
reflecting information between columns of
character, and then projects A onto Z, yielding a q
by n matrix B=Z
T
A . In the following, we will
present a way to simultaneously use the projection
K=1
The estimated mean vector and covariance matrix
of the i
th
principal component vectors of the class
_c. Eq. 6 gives the likelihood of the i
th
principal
component vector Yic for the given class c. For
simplicity, we make an assumption that any set of
principal component vectors of class c, { Y
mc
k
}
and {Y
nc
k
} (m n ) are independent of each other.
Therefore, using this we can write the likelihood of
the principal component vectors in the subspaces in
which they lie as:
d
p(B
c
)= (Y
ic
k
)
i=1
Using Eq. 6 we can write
d - -
matrices X and Z.
p(B
c
)=C

e
1/2 (Y Y T -1(Y Y
i=1
ic
-
ic
)
ic ic
-
ic
)
Suppose we have obtained the projection matrices
X and Z projecting the m by n character A onto X
and Z simultaneously, yielding a q by d matrix C
C=Z
T
AX ..eq3
The matrix C is also called the coefficient matrix in
character representation, which can be used to
reconstruct the original character A, by
A=ZCX
T
..eq4
V. CLASSIFICATIONSCHEME
where 1
C = --------------------------------------
d
(2 ) (
N
p)d/2
(
ic)
1/2
I=1
And B
c
=[Y
ic
,Y
2c
Y
dc
] c=1,2,.156
Let be the labels of the classes corresponding to
the 156 Tamil characters. Given a test character,
we can now construct a feature matrix of the form
test
= [ Y
test
,Y
test
,....... Y
test
] ....eq7
Assume that we have Mtraining samples of a class
B
ic ic ic ic
(character)
c
. After transformation by (2D)
2
PCA,
we obtain Mfeature matrices of the form
B
c
k
= [Y
1c
k
, Y
2c
k
, .... Y
dc
k
] k =1, 2,..M ..eq5
From eq5, we can interpret {Y
ic
k
} as the set of i
th
principal component vectors corresponding to the
by projecting it to each of the 156 subspaces using
the (2D)
2
PCA. Test c B refers to the feature matrix
obtained by projecting the test character onto the
subspace of class c. Using Eq. 6 we see that
d - -
P(B
ic
test
)= C

e-
1/2 (Y test
-
Y
)
T -1(Y test
-
Y
)
ic ic ic ic ic
M training samples of thev class c. These
principal component vectors have been obtained by
projecting the M character matrices C
1,
C
2
,...C
M
I=1 .eq8
The test character is assigned the class
test
for
which the following condition is satisfied.
onto the eigenvector corresponding to the i
th
largest
eigenvalue of the character covariance matrix Gt.
We assume that the set of Np dimensional principal

test
= arg max
c
P(B
c
test
) .eq9
It can be readily verified from Eq. (8) that we assign the test character to the class for which the modified Mahalanobis distance is minimized. Let

D_c = Σ_{i=1}^{d} [ (Y_ic^test - Ȳ_ic)^T Σ_ic^(-1) (Y_ic^test - Ȳ_ic) + log|Σ_ic| ]    ...(10)

Then we can write

ω_test = arg min_c D_c    ...(11)

On the other hand, if we employ a simple nearest-neighbor Euclidean distance for the classification of the test data, we first compute the distance of the test character to its closest training sample in the subspace of class c as

D_c = min_{j=1,2,...,M} Σ_{m=1}^{d} || Y_mc^test - Y_mc^j ||    ...(12)

Given the set of distances {D_1, D_2, ..., D_156}, the test character is assigned the class ω_test for which the following condition is satisfied:

ω_test = arg min_c D_c

VI. CONCLUSION

In this paper we have attempted to apply the recently reported (2D)²PCA technique to the context of Tamil character recognition. A set of local features is derived for each sample point of the preprocessed character to form the character matrix, from which a covariance matrix is constructed. The smaller size of the covariance matrix in (2D)²PCA makes the process of feature extraction much faster than in conventional PCA. Experimental results indicate that the recognition accuracy of the (2D)²PCA scheme shows an approximately 70% improvement over the conventional PCA technique, while at the same time being more computationally efficient. Although the algorithm has been tested on Tamil characters, it can equally be used for the recognition of other scripts. Further potential areas of research are to develop other dimensionality-reduction schemes that take into account the class-discriminatory characteristics of the online character recognition problem, to possibly improve the overall classification accuracy.

REFERENCES

[1]. C. C. Tappert, C. Y. Suen and T. Wakahara, "The state of online handwriting recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, 12(8), pp. 787-807, August 1990.
[2]. Cheng-Lin Liu, Stefan Jaeger and Masaki Nakagawa, "Recognition of Chinese characters: the state-of-the-art," IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(2), pp. 188-213, 2004.
[3]. S. Jaeger, C.-L. Liu and M. Nakagawa, "The state of the art in Japanese online handwriting recognition compared to techniques in Western handwriting recognition," Intl. Journal on Document Analysis and Recognition, Springer Berlin, 6(2), pp. 75-88, October 2003.
[4]. HP Labs Isolated Handwritten Tamil Character Dataset. http://www.hpl.hp.com/india/research/penhwinterfaces-1linguistics.html#datasets
[5]. Niranjan Joshi, G. Sita, A. G. Ramakrishnan and Sriganesh Madhavanath, "Comparison of elastic matching algorithms for online Tamil handwritten character recognition," Proceedings of the 8th Intl. Workshop on Frontiers in Handwriting Recognition (IWFHR-8), pp. 444-448, October 2004.
[6]. Deepu V., Sriganesh Madhavanath and A. G. Ramakrishnan, "Principal component analysis for online handwritten character recognition," Proc. of the 17th Intl. Conf. on Pattern Recognition (ICPR 2004), 2, pp. 327-330, August 2004.
[7]. Jiang Yang, David Zhang, Alejandro F. Frangi and Jing-yu Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(1), pp. 131-137, January 2004.
[8]. Andrew Webb, Statistical Pattern Recognition, Second Edition, John Wiley and Sons Ltd, 2002.
[9]. R. Bellman, Introduction to Matrix Analysis, 2nd Edition, McGraw-Hill, NY, USA, p. 96, 1970.
[10]. A. K. Jain, R. P. W. Duin and J. Mao, "Statistical pattern recognition: a review," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, Jan. 2000.
[11]. I. T. Jolliffe, Principal Component Analysis, Springer-Verlag, New York, 1986.
[12]. M. Kirby and L. Sirovich, "Application of the KL procedure for the characterization of human faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.
[13]. P. J. Phillips, H. Wechsler, J. Huang and P. J. Rauss, "The FERET database and evaluation procedure for face-recognition algorithms," Image and Vision Computing, vol. 16, no. 5, pp. 295-306, Apr. 1998.
[14]. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, Jan. 1991.
[15]. J. Yang, D. Zhang, A. F. Frangi and J. Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, Jan. 2004.
[16]. D. Q. Zhang, S. C. Chen and J. Liu, "Representing image matrices: eigenimages vs. eigenvectors," Proceedings of the 2nd International Symposium on Neural Networks (ISNN'05), Chongqing, China, vol. 2, pp. 659-664, 2005.
[17]. L. Zhao and Y. Yang, "Theoretical analysis of illumination in PCA-based vision systems," Pattern Recognition, vol. 32, no. 4, pp. 547-564, Apr. 1999.
[18]. W. Zhao, R. Chellappa, A. Rosenfeld and P. J. Phillips, "Face recognition: a literature survey," http://citeseer.nj.nec.com/374297.html, 2000.
Signed Digit Number Representation for Fast Arithmetic Computations

1 Vedvrat, 2 Khushboo Srivastava, 3 Shivangi Sinha
1 Department of Electronics & Communication Engineering, A.I.T, Kanpur, U.P.
2 Department of Electronics & Communication Engineering, N.C.E.T, Kanpur, U.P.
3 Department of Electronics & Communication Engineering, N.C.E.T, Kanpur, U.P.
(Email: 1 r.ved.hbti@gmail.com, 2 khushboosrivastava.67@gmail.com, 3 shivangi.sinha22@gmail.com)
Abstract-- Fast addition and multiplication are of paramount importance in many arithmetic circuits and processors. The use of the redundant binary signed-digit number system for the implementation of fast arithmetic units is presented in this paper. Signed-digit number representations are redundant positional representations, and the relevant properties of S-D representations are discussed in brief. Unlike the conventional binary number system, where the digit set {0, 1} is used to represent numbers, numbers are here represented with the digit set {-1, 0, 1}. The sum of two redundant numbers is carried out in two stages; in the second stage, the final sum is generated without any carry. Because there is no carry-propagation chain in S-D addition, the dependency of each bit position on the previous carry is removed. Hence, the addition time of S-D numbers is independent of the length of the operands: using the redundant binary signed number system, addition is performed in parallel and in constant time. The digit redundancy and carry-free addition affect propagation delay and power consumption, and thereby dictate the performance of the overall circuit. The theory of hybrid signed-digit addition is also presented.

Key Words-- Redundant Number Representation, Signed-Digit Numbers, Carry-Free Addition, Hybrid Signed-Digit Addition
I. INTRODUCTION

In digital systems like computers, signal processors and image processors, arithmetic operations play an important role. A good choice of arithmetic system and internal number representation affects both the efficient implementation of the machine operations and the accuracy of approximated real arithmetic. With recent developments in integrated-circuit technology, various high-speed circuits with regular structures and low-power designs have been proposed, and some of them have been fabricated on VLSI chips. However, arithmetic operations still suffer from problems such as propagation delay. Robertson [1] and Avizienis [2] proposed the idea of the redundant signed digit. The non-conventional number system, i.e. redundant binary signed digit (RBSD) numbers for fast arithmetic, is gaining much attention as it offers the possibility of carry-free addition. Chow and Robertson [3] suggested the logical design of RBSD adders. This number representation possesses sufficient redundancy to allow the annihilation of carry or borrow chains, and the result is therefore fast. The RBSD number system is a special case of the GSD number system. A suitable hardware design using logic gates is also responsible for high-speed arithmetic operation, as the output of the adder circuit depends on the propagation delay of the logic circuit.

II. GENERALIZED SIGNED DIGIT NUMBER SYSTEMS

The generalized signed-digit (GSD) number system [4] is a positional number system. A GSD system allows the digit set {-α, ..., β} with the conditions α ≥ 0, β ≥ 0 and α + β + 1 > r, where r is the radix of the number representation. The redundancy index of a GSD number system is ρ = α + β + 1 - r ≥ 1. GSD number systems cover the different systems shown in Table 1, and Figure 1 describes the hierarchical relationships of these systems.

Table 1. GSD number systems

Number system                           r    α         β         ρ
BSC (binary stored-carry)               2    0         2         1
SC (stored-carry)                       r    0         r         1
BSD or BSB                              2    1         1         1
SB (stored-borrow)                      r    1         r-1       1
BSCB (binary stored-carry-or-borrow)    2    1         2         2
SCB (stored-carry-or-borrow)            r    1         r         2
Minimally redundant symmetric SD        r    r/2       r/2       1
Minimally redundant OSD                 r    ⌊r/2⌋+1   ⌊r/2⌋+1   2⌊r/2⌋+3-r
Maximally redundant OSD                 r    r-1       r-1       r-1
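The redundancy indices ρ in Table 1 all follow from the definition ρ = α + β + 1 - r, i.e. the size of the digit set minus the radix. A quick check, with r = 4 standing in arbitrarily for the general-radix rows:

```python
def redundancy_index(r, alpha, beta):
    """GSD redundancy index: size of the digit set {-alpha, ..., beta} minus the radix r."""
    return alpha + beta + 1 - r

# (radix, alpha, beta) triples taken from Table 1; r = 4 is an arbitrary sample radix
systems = {
    "BSC":  (2, 0, 2),
    "SC":   (4, 0, 4),
    "BSD":  (2, 1, 1),
    "SB":   (4, 1, 3),   # beta = r - 1
    "BSCB": (2, 1, 2),
    "SCB":  (4, 1, 4),
}
for name, (r, a, b) in systems.items():
    print(name, redundancy_index(r, a, b))  # 1 for the first four rows, 2 for the last two
```

A non-redundant positional system (ρ = 0) has exactly r digit values; every system in the table carries at least one extra digit value, which is what makes limited-carry addition possible.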
Fig. 1. Hierarchical relationships of GSD number systems (GSD splits into minimal and non-minimal GSD; symmetric minimal GSD covers BSD/BSB, asymmetric minimal GSD covers SC, SB (non-binary) and BSC (binary), symmetric non-minimal GSD covers the OSD family with its minimally and maximally redundant members, and asymmetric non-minimal GSD covers SCB and BSCB)

III. REDUNDANT SIGNED DIGIT NUMBER REPRESENTATION

The arithmetic of the signed-digit number system was proposed by Avizienis in 1961. Unlike the conventional binary number system, where the digit set {0, 1} is used to represent numbers, the redundant binary signed digit number system allows the digit set {-1, 0, 1} and has a fixed radix of 2. The radix-2 SD code representation of a fractional number X has the general form

X = Σ_{i=1}^{N} x_i 2^(-i),   x_i ∈ {-1, 0, 1},

where N is the number of ternary digits. The RB representation allows redundancy: using the RBSD number system, one number can be represented in more than one way. This redundant number representation gives the technique of carry-free addition. For example, the integer -3 can be expressed as (-1 0 1), i.e. -4 + 1, or as (0 -1 -1), i.e. -2 - 1.

IV. STRUCTURE OF RBSD ADDER

The logic design for the redundant binary signed digit (r = 2) adder [3] is shown in Fig. 2. The operand digits are restricted to the symmetric digit set {-1, 0, 1}. The redundancy provided in the signed-digit representation allows fast addition or subtraction, because for a radix of 2 the sum or difference digit is a function of only the digits in three adjacent digit positions. The algebraic relationships for the RBSD adder are given for the upper section by equation (1) and for the lower section by equation (2).

Fig. 2. Logic design for RBSD adder (a lower section forms the intermediate signals a_i, b_i, m_i and d_i from the inputs x_i and y_i, and an upper section forms the sum digits s_i)

The addition of two RBSD numbers X and Y is described by equations (3), (4), (5), (6) and (7), where a_i, b_i, m_i and d_i are binary variables with digit set {0, 1}, whereas x_i and y_i are the RBSD input digits with digit set {-1, 0, 1}, encoded by {(10), (00), (01)} in binary form. The input of the adder, i.e. X, is thus represented by two bits.
V. HYBRID SIGNED DIGIT NUMBER SYSTEM
The well-known signed-digit number representation makes it possible to perform addition with carry propagation chains limited to a single digit position. In the addition of two numbers represented in the conventional binary number system, the carry propagates all the way from the least significant digit to the most significant digit; the addition time is thus dependent on the word length (linear in ripple-carry adders, logarithmic in carry-lookahead adders). The speedup in addition time in the SD number system does not come without cost, however, since two bits are needed to represent a binary signed digit. Thus more area is traded off for the constant addition time. The SD and the two's complement number representations are at two extremes: in the SD number system more bits, switching devices and routing are required per digit, and in return carry propagation is limited to a single digit position; in the conventional number system fewer bits, switching devices and routings are needed per digit, but the carry propagates across the entire word length.

In the hybrid signed-digit (HSD) number representation [5], the maximum carry propagation length can be set to any desired value between one and the full word length. The area required decreases as the length of the carry propagation chain increases. In an HSD representation, some of the digits are signed and the others are left unsigned; for example, every alternate, every third or every fourth digit can be signed while the remaining ones are unsigned. The maximum length of a carry propagation chain equals (d + 1), where d is the longest distance between neighbouring signed digits.

Addition using the hybrid signed-digit number system is performed in two steps:

Step 1: The signed digit positions generate a carry-out and an intermediate sum based only on the two input signed digits and the two bits at the neighbouring lower-order unsigned digit position. Let x_i and y_i be the signed digits to be added in the i-th position, and a_{i-1} and b_{i-1} be the unsigned digits in the (i-1)-th position.

Step 2: The carries generated out of the signed digit positions ripple through the unsigned digits all the way up to the next higher-order signed digit position, where the propagation stops. This step can also be carried out in parallel, i.e., all the carry propagation chains between the signed digit positions are executed simultaneously.
VI. CONCLUSION
The logic design of the RBSD adder and the theory of the hybrid signed-digit number system have been presented. In the redundant binary signed digit number system, the carry propagation chain is eliminated by the two-stage addition, and the final sum fits into the allowable digit set {-1, 0, 1}. In the RBSD representation all bits are signed, whereas in the HSD number system a few bits are defined as signed and the remaining bits as unsigned. There is thus a trade-off between area and speed between the two representations: the RBSD representation gives a shorter addition time but a larger area, whereas the HSD representation requires less area at a lower speed. The presented number systems can be used for the implementation of fast multipliers, and hence of processors.
VII. REFERENCES
[1] J. E. Robertson, "Redundant number systems for digital computer arithmetic," Summer Conference at Ann Arbor, Mich., July 6-10, 1959.
[2] A. Avizienis, "Signed-digit number representation for fast parallel arithmetic," IRE Trans. on Electronic Computers, EC-10, pp. 389-400, Sept. 1961.
[3] C. Y. Chow and J. E. Robertson, "Logical design of a redundant binary adder," Proceedings of the 4th Symposium on Computer Arithmetic, pp. 109-115, 1978.
[4] B. Parhami, "Generalized signed-digit number systems: a unifying framework for redundant number representations," IEEE Trans. Computers, 39, pp. 89-98, Jan. 1990.
[5] D. S. Pathak and I. Koren, "Hybrid signed digit number systems: a unified framework for redundant number representations with bounded carry propagation chains," IEEE Trans. Computers, 43, no. 8, pp. 880-891, Aug. 1994.
Design and Implementation of various loads for
on-chip voltage regulator and stability analysis
Vivek Kumar Saxena,
Department of Electronics and Electrical Engg
BCT- Kumaon Engineering College, Dwarahat
U.K, India 263653
vivekeng2006@gmail.com
Hemanta Mondal,
Department of Electronics and Communication Engg
ITS Engineering College, Greater Noida -201308
mondal.ece@gmail.com
Abstract-- An on-chip voltage regulator design is presented, based on a zero-pole cancellation scheme, with a minimum output-current requirement, low drop-out (LDO) voltage, fast transient response, improved steady-state response and reduced transient overshoots and undershoots; it has been simulated in a 180 nm process. The LDO voltage regulator provides a constant 900 mV output voltage for all load currents from 0 to 140 uA.

Keywords-- Low-dropout, low-voltage regulator, power supply circuits, regulators.
I. INTRODUCTION
The most basic function of a voltage regulator is voltage regulation: providing a clean, constant, accurate voltage to a circuit. Voltage regulators are a fundamental block in the power supplies of almost all electronic equipment. Key regulator benefits and applications include an accurate supply voltage, active noise filtering, protection from over-current faults, inter-stage isolation, generation of multiple output voltages from a single source [1], and use in constant current sources. A low-dropout regulator is a class of linear regulator designed to minimize the saturation of the output pass transistor and its drive requirements. A low-dropout linear regulator will operate with input voltages only slightly higher than the desired output voltage, so to make an on-chip voltage regulator, a low-dropout linear voltage regulator is preferred.
Fig.1 : Basic LDO Regulator
Conventional LDOs are inherently unstable at no load current [2]. A large output capacitor and its equivalent series resistance (ESR) are required to achieve dominant-pole compensation and to insert a zero that cancels the non-dominant pole. The most important challenge for this circuit is to keep the pass transistor always in the saturation region so that it operates properly, which becomes hard due to the significant variations of the load. These variations move the pole and can cause the LDO to become unstable. To minimize this problem, a large internal or external capacitor at the output node is used.

There are three categories of specifications for regulators: static-state, dynamic-state and high-frequency specifications. The static-state parameters include line regulation, load regulation and temperature-coefficient effects. The dynamic-state specifications determine the LDO regulator's capability to regulate the output voltage during load and line transient conditions [3]; the LDO regulator must respond quickly to transients in order to reduce variations in the output voltage. The high-frequency specification is the regulator's ability to reject high-frequency noise on the input line; it is a function of the parasitic capacitances and is proportional to the reciprocal of the loop gain.
The LDO voltage regulator circuitry includes a bandgap reference. Reference voltages or currents that exhibit little dependence on temperature prove essential in many analog circuits [2]. Since most process parameters vary with temperature, a temperature-independent reference remains constant over temperature; if two quantities having opposite temperature coefficients (TCs) are added with proper weighting, the result displays a zero TC. A bandgap reference offers inherently good voltage tolerance, good noise performance, operation at low voltages and high efficiency.

The output current is controlled by the error amplifier output, which produces an error signal whenever the fed-back sensed output differs from the reference voltage. In this circuit the error amplifier is a folded-cascode op-amp. As the output impedance, and hence the gain, of this op-amp is very high, it provides good load and line regulation.

The pass (output) transistor and its configuration determine the dropout voltage of a linear voltage regulator; the common output configurations, in descending order, correspond to the dropout voltage [4]. Depending on the transistor's characteristics, the dropout voltage normally varies between 1.6 and 2.5 V, which is about a VCE(sat) plus two VBE. A PNP output is what makes a "true LDO", with a dropout voltage that nominally ranges between 150 mV and 450 mV, which is about the saturation voltage.

In electronics, a voltage divider is a simple linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division refers to the partitioning of a voltage among the components of the divider. The input voltage is applied across the series impedances Z1 and Z2, and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors. Applying Ohm's law, the relationship between the input voltage Vin and the output voltage Vout is

Vout = Z2 / (Z1 + Z2) · Vin
All voltage regulators use a feedback loop to hold the output voltage constant. The feedback signal changes in both gain and phase as it goes around the loop, and the amount of phase shift that has occurred at the unity-gain (0 dB) frequency determines stability. The loop gain is defined as the ratio of the two voltages:

Loop Gain = VA / VB
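The stability criterion described here can be sketched numerically. The DC gain and the pole and zero frequencies below are illustrative assumptions, not values from the measured design; the sketch only shows how the unity-gain frequency and the phase margin fall out of a pole-zero loop gain:

```python
import cmath
import math

def loop_gain(f, a0=1000.0, f_p1=1e3, f_z=8e5, f_p2=1e6):
    """Illustrative loop gain: DC gain a0, dominant pole, ESR-style zero, second pole."""
    s = 2j * math.pi * f
    return (a0 * (1 + s / (2 * math.pi * f_z))
            / ((1 + s / (2 * math.pi * f_p1)) * (1 + s / (2 * math.pi * f_p2))))

# walk up in frequency until |T| drops to 1 (0 dB), then read off the phase margin
f = 1.0
while abs(loop_gain(f)) > 1.0:
    f *= 1.01
phase_margin = 180.0 + math.degrees(cmath.phase(loop_gain(f)))
print(f"unity-gain frequency ~ {f:.3g} Hz, phase margin ~ {phase_margin:.0f} deg")
```

Moving the zero above the unity-gain frequency, or letting the load shift the second pole downward, erodes the phase margin in this model in exactly the way the text describes for a light-load LDO.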
II. PROPOSED LDO STRUCTURE
The design issues of the LDO voltage regulator are stability and transient response.

A. Stability
LDO regulators are a necessary part of the power management system that provides constant voltage supply rails; they fall into the class of linear voltage regulators with improved power efficiency. The topology consists of a PMOS pass transistor controlled by an error amplifier which compares the voltage reference with the output voltage sensed through the feedback resistors R1 and R2. The error signal controls the gate of the pass transistor and forms the negative feedback loop. The most important challenge for this circuit is to keep the pass transistor always in the saturation region so that it operates properly, which becomes hard with this topology: load variations move the pole and can cause the LDO to become unstable. To minimize this problem, a large internal or external capacitor at the output node is used.

B. Transient response
The specifications consist of static-state, dynamic-state and high-frequency specifications. The static-state parameters include line regulation, load regulation and temperature-coefficient effects. The dynamic-state specifications determine the LDO regulator's capability to regulate the output voltage during load and line transient conditions; the LDO regulator must respond quickly to transients in order to reduce variations in the output voltage [6]. The high-frequency specification is the regulator's ability to reject high-frequency noise on the input line.

The low-dropout (LDO) linear regulator is widely used in power management due to its low noise, precise output and fast transient response. This drives the regulator specification toward very low dropout, low quiescent current and fast transient response. The present LDO is based on the pole-splitting method: it is a three-stage amplifier compensated by pole-splitting frequency compensation. The power dissipated in the power transistor increases its transconductance at moderate output current, resulting in higher non-dominant complex-pole frequencies. When the non-dominant complex poles are located far above the unity-gain frequency of the open-loop frequency response, the LDO is stable.
Fig 2: Schematics of the LDO circuit
III. EXPERIMENTAL RESULTS
The LDO has been implemented in 0.18-um CMOS technology. The threshold voltages of the nMOSFET and pMOSFET are 0.55 V and -0.75 V respectively. The output voltage of the LDO is a constant 900 mV and the load current is 140 uA.
(a) Input voltage vs. output voltage plot; the output voltage is constant at 900 mV.
Line regulation is the amount that the output voltage changes for a given change in input voltage. It depends on the size of the pass transistor, which is dedicated to the drop-out voltage.

(b) Load current increases with Vin, with a maximum of 140 uA.
The load current for each circuit was swept from 0 to 30 uA with the input voltage. Clearly, a minimum load current is needed for the LDO to be stable: when the load current decreases, the output impedance of the pass transistor increases, lowering the frequency of the second pole and decreasing the phase margin of the system. If the load current is too small, the second pole and the dominant pole will be too close together, causing the LDO to become unstable.

(c) Output voltage constant with time.
The transient specification is the maximum allowable output-voltage variation for a sudden step change of load current; while switching from one mode to another, a temporary glitch may appear at the output of the LDO regulator as a result of the loop response delay.

(d) Output voltage 900 mV and load current 140 uA.
(e) Temperature variation -50 to 100 °C; output voltage variation 1.05 to 0.85 V.
The absolute value of VT decreases with an increase in temperature; the variation is approximately -4 mV/°C for high substrate doping levels and -2 mV/°C for low doping levels. Increasing temperature:
- reduces mobility
- reduces Vt
- ION decreases with temperature
- IOFF increases with temperature
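The zero-TC weighting mentioned in the introduction (adding a CTAT V_BE term to a weighted PTAT thermal-voltage term) can be checked with first-order numbers; the V_BE value and its -2 mV/°C slope below are illustrative assumptions, not measurements from this design:

```python
K_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge, V/K

def v_ref(T, w, v_be0=0.75, T0=300.0, tc_vbe=-2e-3):
    """First-order bandgap-style reference: CTAT V_BE plus w times PTAT V_T."""
    v_be = v_be0 + tc_vbe * (T - T0)  # CTAT term: falls ~2 mV per kelvin
    v_t = K_OVER_Q * T                # PTAT term: thermal voltage rises with T
    return v_be + w * v_t

w = 2e-3 / K_OVER_Q  # weight (~23.2) chosen so the two slopes cancel
print(round(v_ref(300.0, w), 3))  # → 1.35, independent of T to first order
```

In a real bandgap the curvature of V_BE leaves a small residual TC, so the cancellation holds only to first order, but the sketch shows why the weighted sum is flat over temperature.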
(f) Output voltage phase: 10 degrees to -10 degrees.
The loop gain includes two main transfer functions: that of the error amplifier and that of the load. The first term of the equation expresses the voltage-gain numerator and the single-pole roll-off denominator of the error amplifier; the second term expresses the zero in the numerator and the pole in the denominator of the load, in combination with the open-loop output resistance of the regulator. A pole continuously decreases the amplitude by about 20 dB per decade with a corresponding negative phase shift; beyond its corner frequency, a zero continuously increases the amplitude by 20 dB per decade with a corresponding positive phase shift.

(g) Compensated LDO AC response.

TABLE I: SUMMARY OF MEASURED PERFORMANCE

Technology                        0.18 um
Output voltage                    900 mV
Input voltage                     1.5-5.0 V
Load current                      140 uA
Line regulation                   0.5 %
Drop-out range                    600 mV
Output voltage phase              10 to -10 degrees
Output voltage vs. time           900 mV
Output voltage over temperature   1.05-0.85 V

IV. CONCLUSION
An on-chip voltage regulator has been introduced, supported by circuit modeling and experimental results. The scheme is based on a simple pole-zero cancellation scheme. The LDO has small transient overshoots and undershoots, a lower minimum output-current requirement, excellent line and load regulation, and a low-noise output, which is important for an on-chip voltage regulator; it has been simulated in a 180 nm process.

REFERENCES
[1] Sreehari Rao Patri and K. S. R. Krishna Prasad, "A robust low-voltage on-chip LDO voltage regulator in 180 nm," IEEE Transactions, Volume 2008, Article ID 259281, 7 pages.
[2] S. K. Lau, P. K. T. Mok and K. N. Leung, "A low-dropout regulator for SoC with Q-reduction," IEEE J. Solid-State Circuits, vol. 42, no. 3, pp. 658-664, Mar. 2007.
[3] R. J. Milliken, J. Silva-Martinez and E. Sanchez-Sinencio, "Full on-chip CMOS low-dropout voltage regulator," IEEE Transactions on Circuits and Systems I, vol. 54, no. 9, pp. 1879-1890, 2007.
[4] C.-L. Chen, W.-J. Huang and S.-I. Liu, "CMOS low dropout regulator with dynamic zero compensation," IEEE Transactions on Solid-State Circuits, April 2007.
[5] Vishal Gupta and Gabriel A. Rincon-Mora, "A low-dropout CMOS regulator with high PSR over wideband frequencies," IEEE Transactions SoC, 2004.
[6] K. N. Leung and P. K. T. Mok, "A capacitor-free CMOS low-dropout regulator with damping-factor-control frequency compensation," IEEE J. Solid-State Circuits, vol. 38, Oct. 2003.
[7] K. C. Kwok and P. K. T. Mok, "Pole-zero tracking frequency compensation for low dropout regulator," 2002 IEEE International Symposium on Circuits and Systems, Scottsdale, Arizona, vol. 4, May 2002.
[8] Albert M. Wu and Seth R. Sanders, "An active clamp circuit for voltage regulation module (VRM) applications," IEEE Transactions on Power Electronics, vol. 16, no. 5, September 2001.
[9] G. A. Rincon-Mora and P. E. Allen, "A low-voltage, low quiescent current, low drop-out regulator," IEEE J. of Solid-State Circuits, vol. 33, no. 1, pp. 36-44, Jan. 1998.
Variation of Bandwidth with Feed Position of
Microstrip Patch Antenna
Anil Kumar, Vipul Gulati, Vineet Arora, Anand Abhishek, S.B.Kumar
Department of Electronics and Communications
Bharati Vidyapeeths College of Engineering, Paschim Vihar, New Delhi-110063
E mail: vistavineet@gmail.com, anandabhishek12@hotmail.com, sbkumar2010@gmail.com
ABSTRACT:-
This paper describes a microstrip patch antenna designed at an operating frequency of 3.4 GHz and discusses the feed position at different stages for improving bandwidth. It is seen that as the feed position increases up to a certain value, the bandwidth increases correspondingly. VSWR, return loss, the radiation pattern and the 3D gain are also discussed, and a plot of bandwidth versus feed position is shown.

Key words: Microstrip patch antenna, return loss, VSWR, radiation pattern.
1. INTRODUCTION
The demand for improved microstrip antennas has been increasing rapidly since their invention in 1953. Because of their extremely thin profile (0.01 to 0.05 free-space wavelength), printed antennas have found major applications in military aircraft, missiles, rockets and satellites [4]. The cost to develop microstrip antennas has dropped significantly due to the reduction in the cost of substrate material, the manufacturing process and the availability of computer-aided design (CAD) tools, making them more popular in the fields of cellular communication, direct broadcast television, digital audio broadcast, etc. [4]. Other advantages of microstrip antennas are light weight, conformability to the surfaces of substrates, low cost, versatility and possible integration with other circuits [2].

A patch antenna is a type of radio antenna with a low profile, which can be mounted on a flat surface. It consists of a flat rectangular sheet or "patch" of metal, mounted over a larger sheet of metal called a ground plane. The two metal sheets together form a resonant piece of microstrip transmission line with a length of approximately one-half wavelength of the radio waves. The radiation mechanism arises from discontinuities at each truncated edge of the microstrip transmission line. The radiation at the edges causes the antenna to act slightly larger electrically than its physical dimensions, so in order for the antenna to be resonant, a length of microstrip transmission line slightly shorter than one-half wavelength at the operating frequency is used.

In coaxial feeding, a coaxial line is set perpendicular to the ground plane, as shown in Fig. 2.1. In this case the inner conductor of the coaxial line is attached to the radiating patch while the outer conductor is connected to the ground plane. This feed has low spurious radiation because the radiating and feeding systems are disposed on the two sides of the ground plane and shielded from each other.
2. ANTENNA CONFIGURATION
The antenna was fed with the coaxial-cable feeding technique. The simulated design had the specifications εr = 4.4, tan δ = 0.02 and thickness (d) = 1.588 mm. Operations were performed at iteration level 0, as shown in Fig. 2.1.

Figure 2.1: Left, microstrip antenna fed with coaxial cable; right, iteration levels 0 and 1 of the patch
Sno. Distance of coaxial
cable from origin
(xp) (in mm)
Bandwidth
(BW)
(in MHZ)
1. 3.5684 170
2. 3.5754 170
3. 3.5848 170
4. 3.6004 180
5. 3.6244 180
6. 3.644 190
7. 3.744 200
8. 3.8 200
9. 3.9 210
10. 3.95 214
11. 4 214
12. 4.05 211
13. 4.1 209
14. 4.2 199
The Dimensions of Patch was decided on the
basis of formula as below [1]:
Width
(1)
The feeds were given at variable distance from
origin along the positive X-axis. The tabulation
is as below in Table-3.1 along with the pictorial
presentation of the data in fig.3.2
Effective Dielectric
Fringing Field Length
(2)
Length
(3)
(4)
The distance for coaxial feed from origin
(xp) were decided on the basis of
following formulae [3]:
(5)
Table-3.1: Table displaying xp & BW
Where: Z
a
= impedance of antenna.
R
in
=Input resistance (50 )
3. SIMULATION RESULTS
(6)
A rectangular patch was taken with dimensions: Length = 20.6016 mm, Width = 26.8481 mm, thickness = 1.588 mm. The resultant microstrip antenna design is shown in Fig. 3.1.
Figure 3.2: Bandwidth vs coaxial feeding position.
It is evident from Table 3.1 and Figure 3.2 that the bandwidth gradually increases from the distance of 3.5684 mm to 4 mm. The maximum bandwidth of 214 MHz and a gain of 5.366 dBi were obtained at the 4 mm distance. After that, the bandwidth started decreasing again.
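The feed-position sweep of Table 3.1 can be summarized programmatically; a minimal sketch (ties are broken toward the larger xp, which matches the reported 4 mm optimum):

```python
# (xp in mm, bandwidth in MHz) pairs transcribed from Table 3.1
sweep = [(3.5684, 170), (3.5754, 170), (3.5848, 170), (3.6004, 180),
         (3.6244, 180), (3.644, 190), (3.744, 200), (3.8, 200),
         (3.9, 210), (3.95, 214), (4.0, 214), (4.05, 211),
         (4.1, 209), (4.2, 199)]

# pick the feed position with maximum bandwidth; on a tie prefer larger xp
best_xp, best_bw = max(sweep, key=lambda p: (p[1], p[0]))
```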
Figure 3.1: Design of Patch antenna
The return loss of the antenna is -26.3 dB; the return loss and the radiation pattern of the microstrip antenna at xp = 4.0 mm are shown in Figures 3.3 and 3.4 below.
Figure 3.3: Return loss of the antenna
Figure 3.4: 3D Radiation Pattern with Gain of 5.36 dBi
The 2D polar-plot radiation pattern for the simulation performed is shown in Figure 3.5 at phi = 0, 90, 170 at a frequency of 3.33 GHz.
4. CONCLUSION
From the simulated results it is seen that the bandwidth of the antenna first increases gradually with increasing feed position up to a certain value, beyond which the bandwidth gradually decreases again. Fabrication of the antenna and comparison with measured results remain as future work.
REFERENCES
[1] M. F. Abd Kadir, A. S. Jaafar and M. Z. A. Abd Aziz, "Sierpinski Carpet Fractal Antenna," Dept. of Telecommunication, Faculty of Electronic and Computer Engineering, Univ. Teknikal Malaysia, Ayer Keroh, Melaka, 4-6 Dec. 2007, pp. 1-4, Print ISBN: 978-1-4244-1434-5, INSPEC Accession Number: 10183847.
[2] Zhi Ning Chen and Michael Y. W. Chia, Broadband Planar Antennas, Institute for Infocomm Research, Singapore, John Wiley and Sons, 2005.
[3] W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, Second Edition, John Wiley and Sons, 1981.
[4] Adel Abdel-Rahman, Ph.D. thesis, Microwave and Communication Engineering Department, University of Magdeburg, Magdeburg, Germany.
Figure 3.5: Polar plot
A Design Rule for Coaxial-fed Rectangular Microstrip Patch Antenna
Satya Deep Chatterjee#1, Richa Daga*2, Stanley K. Varkey$3, S. K. Kundu4
#1,*2,$3 Students, Electronics & Comm. Engg., 3rd Year
4 Assistant Professor, Electronics & Comm. Engg.
Bharati Vidyapeeth's College of Engineering
A-4 Paschim Vihar, New Delhi-110063, India
#1 satyadeep.c@gmail.com, *2 richa.daga@yahoo.com, $3 stanv1989@yahoo.com
Abstract- This paper presents a microstrip patch antenna with coaxial feed. The detailed design steps, using optimization programs coded in MATLAB, and the antenna simulated using the IE3D (Integral Equation in 3 Dimensions) software are also presented in this paper.
I. INTRODUCTION
A patch antenna, also known as a printed antenna, is a wide-beam but narrowband antenna fabricated by etching a pattern of metal on a substrate which has a continuous metal layer bonded to its back side. The substrate is an insulating dielectric while the continuous metal layer forms the ground plane. Patch antennas that use dielectric spacers instead of substrates are less rugged but have a wider bandwidth [1]. Because these antennas are small and can be made flexible, they find application in a wide range of fields including mobile radio communication devices, spacecraft, aircraft etc. Microstrip antennas are conformable to planar or non-planar surfaces [1], simple and inexpensive to manufacture, cost effective, compatible with MMIC designs, and, when a particular patch shape and excitation mode are selected, very versatile in terms of resonant frequency, polarization, radiation pattern and impedance [5,6].
A. Radiation mechanism
Microstrip antennas are essentially suitably shaped discontinuities that are designed to radiate [2,3]. The discontinuities represent abrupt changes in the microstrip line geometry. Discontinuities alter the electric and magnetic field distributions; this results in energy storage and sometimes radiation at the discontinuity [9]. As long as the physical dimensions and relative dielectric constant of the line remain constant, virtually no radiation occurs. However, the discontinuity introduced by the rapid change in line width at the junction between the feed line and patch radiates [4]. The other end of the patch, where the metallization abruptly ends, also radiates. Microstrip patch antennas radiate primarily because of the fringing fields between the patch edge and the ground plane [1]. Efficiency and bandwidth are improved with a thick substrate of low dielectric constant, but this leads to a larger antenna size [2]. Hence a compromise must be reached between antenna dimensions and antenna performance.
B. Coaxial feed
The coaxial feed or probe feed has been used for this design. As seen from the figure, the inner conductor of the coaxial connector extends through the dielectric and is soldered to the radiating patch, while the outer conductor is connected to the ground plane [13,14].
Fig 1: Probe fed to rectangular Patch
The main advantage of this type of feeding scheme is that the feed can be placed at any desired location inside the patch in order to match its input impedance. This feed method is easy to fabricate and has low spurious radiation. The major disadvantage is that it provides narrow bandwidth and is difficult to model, since a hole has to be drilled in the substrate and the connector protrudes outside the ground plane, making the structure not completely planar for thick substrates (h > 0.02 λo). Also, for thicker substrates, the increased probe length makes the input impedance more inductive, leading to matching problems [1].
II. DESIGN ANALYSIS
The microstrip is essentially a non-homogeneous line of two dielectrics, typically the substrate and air [5,6].
Fig 2: Antenna and Fringing Field
As seen from the figure, most of the electric field lines reside in the substrate and parts of some lines in air. As a result, this transmission line cannot support a pure transverse electromagnetic (TEM) mode of transmission, since the phase velocities would be different in the air and the substrate. Instead, the dominant mode of propagation is the quasi-TEM mode [12,15]. Hence, an effective dielectric constant (εreff) must be obtained in order to account for the fringing and the wave propagation in the line. The value of εreff is slightly less than εr because the fringing fields around the periphery of the patch are not confined in the dielectric substrate but are also spread in the air, as shown in the figure above. The expression for εreff is [3]:

εreff = (εr + 1)/2 + ((εr - 1)/2) * (1 + 12 h/W)^(-1/2)

where εreff = effective dielectric constant, εr = dielectric constant of the substrate, h = height of the dielectric substrate, and W = width of the patch.

Consider the figure below, which shows a rectangular microstrip patch antenna of length L and width W resting on a substrate of height h.
Fig 3 : Structure of Simple Patch
In order to operate in the fundamental TM10 mode, the length of the patch must be slightly less than λ/2, where λ is the wavelength in the dielectric medium and is equal to λo / sqrt(εreff), where λo is the free-space wavelength. The TM10 mode implies that the field varies one λ/2 cycle along the length, and there is no variation along the width of the patch. In the figure shown below, the microstrip patch antenna is represented by two slots, separated by a transmission line of length L and open-circuited at both ends. Along the width of the patch, the voltage is maximum and the current is minimum due to the open ends. The fields at the edges can be resolved into normal and tangential components with respect to the ground plane.
Fig 4: Radiation Pattern and Field Pattern of the Patch
It is seen from the figure that the normal components of the electric field at the two edges along the width are in opposite directions and thus out of phase, since the patch is λ/2 long, and hence they cancel each other in the broadside direction [7]. The tangential components (seen in the figure), which are in phase, combine to give a maximum radiated field normal to the surface of the structure. Hence the edges along the width can be represented as two radiating slots, which are λ/2 apart, excited in phase and radiating into the half space above the ground plane [9,11]. The fringing fields along the width can be modeled as radiating slots, and electrically the patch of the microstrip antenna looks greater than its physical dimensions [8,9]. The dimensions of the patch along its length have now been extended on each end by a distance ΔL, which is [10,11]:

ΔL = 0.412 h * ((εreff + 0.3)(W/h + 0.264)) / ((εreff - 0.258)(W/h + 0.8))

The effective length of the patch Leff now becomes:

Leff = L + 2ΔL

For a given resonance frequency fo, the effective length is given by:

Leff = c / (2 fo sqrt(εreff))

For a rectangular microstrip patch antenna, the resonance frequency for any TMmn mode is [1]:

fo = (c / (2 sqrt(εreff))) * sqrt( (m/L)² + (n/W)² )
where m and n are the mode numbers along L and W respectively. For efficient radiation, the width W is:

W = (c / 2fo) * sqrt(2 / (εr + 1))
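The TMmn resonance relation above can be evaluated directly; a minimal sketch of the cavity-model expression (fringing extension ignored, so real resonances sit slightly lower):

```python
import math

C = 3e8  # free-space speed of light (m/s)

def f_res(m, n, length, width, ereff):
    """Cavity-model resonance frequency of the TM_mn mode of a
    rectangular patch of the given length and width."""
    return C / (2 * math.sqrt(ereff)) * math.sqrt((m / length) ** 2 +
                                                  (n / width) ** 2)
```

For the dominant TM10 mode (n = 0) the expression reduces to c / (2 L sqrt(εreff)), consistent with the effective-length formula above.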
A. Calculating the input resistance
B. Calculating the bandwidth
C. Calculating the radiation efficiency
Fig 5 : Antenna Top View
III. RESULTS
Various graphs of antenna efficiency and bandwidth were plotted by varying frequency and substrate height. Bandwidth vs frequency and radiation efficiency vs frequency plots, keeping substrate height constant, are shown in Fig. 6 and Fig. 8. Also, bandwidth vs height (Fig. 7) and radiation efficiency vs height (Fig. 9) plots, keeping frequency constant, were plotted using MATLAB.
Fig 6: Bandwidth vs Freq using MATLAB Code
Fig 8: Radiation Efficiency vs Freq using MATLAB Code
Fig 7: Bandwidth vs substrate height using MATLAB Code
Fig 9: Radiation Efficiency vs Height of the substrate using MATLAB Code
Fig 10: Radiation pattern showing the gain of the proposed Single patch
Fig 13: Antenna & Radiation Efficiency vs freq of the proposed Single patch
Fig 11: VSWR vs freq of the proposed Single patch
Fig 12: Return Loss vs freq of the proposed Single patch antenna
The proposed antenna is designed at a frequency of 5 GHz, with a substrate height of 1.6 mm and a dielectric constant of 4.8. According to the graphs obtained in MATLAB, i.e. figures 6, 7, 8 and 9, the efficiency obtained is 76.96% and the bandwidth is 3.03%.
Fig 14: Total E-field 2D Polar Plot of the Proposed Single patch Antenna
The top view of the microstrip patch antenna is shown in figure 5. After simulation using the IE3D software package [16], the antenna exhibits a 150.63 MHz bandwidth (VSWR < 2), as shown in figures 11 and 12, and an operating frequency of 5.102 GHz, as depicted in figure 12. The antenna efficiency obtained was 72%, as shown in the efficiency vs frequency plot, and the bandwidth is 2.95%, which is in agreement with the analysis of the MATLAB code. The final antenna was built by etching on a metallized dielectric substrate (FR4; h = 1.6 mm, εr = 4.8 and tan δ = 0.0148).
V. CONCLUSION
A microstrip patch antenna was designed using a coaxial feed. Bandwidth vs frequency, radiation efficiency vs frequency, bandwidth vs height and radiation efficiency vs height graphs have been plotted for the 5 GHz design frequency. Simulation was done using IE3D [16], and return loss, VSWR, radiation pattern, antenna efficiency, radiation efficiency and bandwidth graphs were obtained. The values computed from MATLAB and IE3D were almost the same and acceptable, and a frequency shift of around 0.1 GHz was observed.
REFERENCES
[1] C. A. Balanis, Antenna Theory: Analysis and Design, Third Edition, John Wiley & Sons, Inc., 2005.
[2] Punit Shantilal Nakar, Design of a Compact Microstrip Patch Antenna for use in Wireless/Cellular Devices, 2004.
[3] I. J. Bahl and P. Bhartia, Microstrip Antennas, Dedham, MA: Artech House, 1980.
[4] J. R. James, P. S. Hall and C. Wood, Microstrip Antenna Theory and Design, London, UK: Peter Peregrinus, 1981.
[5] M. Amman, "Design of Microstrip Patch Antenna for the 2.4 GHz Band," Applied Microwave and Wireless, pp. 24-34, November/December 1997.
[6] K. L. Wong, Design of Nonplanar Microstrip Antennas and Transmission Lines, John Wiley & Sons, New York, 1999.
[7] W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, 2nd Edition, John Wiley & Sons, New York, 1998.
[8] M. Amman, "Design of Rectangular Microstrip Patch Antennas for the 2.4 GHz Band," Applied Microwave & Wireless, pp. 24-34, November/December 1997.
[9] K. L. Wong, Compact and Broadband Microstrip Antennas, Wiley, New York, 2002.
[10] G. S. Row, S. H. Yeh, and K. L. Wong, "Compact Dual Polarized Microstrip Antennas," Microwave & Optical Technology Letters, 27(4), pp. 284-287, November 2000.
[11] M. O. Ozyalcin, Modeling and Simulation of Electromagnetic Problems via Transmission Line Matrix Method, Ph.D. Dissertation, Istanbul Technical University, Institute of Science, October 2002.
[12] W. L. Stutzman and G. A. Thiele, Antenna Theory and Design, 2nd Ed., John Wiley & Sons, New York, 1998.
[13] A. Derneryd, "Linearly Polarized Microstrip Antennas," IEEE Trans. Antennas and Propagation, AP-24, pp. 846-851, 1976.
[14] M. Schneider, "Microstrip Lines for Microwave Integrated Circuits," Bell Syst. Tech. J., 48, pp. 1421-1444, 1969.
[15] E. Hammerstad and F. A. Bekkadal, Microstrip Handbook, ELAB report STF 44A74169, University of Trondheim, Norway, 1975.
[16] IE3D Software Release 12.21, Zeland Software Inc., Fremont, California, USA.
WEATHER FORECASTING USING RADAR ACTIVE ANTENNA: AN APPLICATION OF ACTIVE MICROSTRIP ANTENNAS
Avijit Gupta, Gaurav Pawar, Navdeep Dara & Radhika Rajdev
Department of Electronics and Communication Engineering
Galgotias College of Engineering and Technology
Email I.D.: pawar121@gmail.com
ABSTRACT
In this paper we propose an original antenna structure which may be used in weather forecasting radars. It is composed of microstrip radiating elements, active circuits and microstrip power splitters. This antenna may replace the heavy and expensive parabolic structures currently used in satellite communications and radar applications. We discuss the utility of its different components and the problems one may meet in realizing them.
Weather stations require stratospheric radar for measuring wind velocity in different atmospheric layers. The information captured by the radar is useful for weather forecasting and permits aircraft to avoid turbulent zones and reduce their fuel expenses. These radars use a great parabolic antenna to emit in a vertical direction and also in an oblique direction (15° to N, S, E or W) to obtain reflections on atmospheric layers which allow, after substantial signal processing, the wind velocity to be evaluated.
Microstrip antennas are well suited to replace the heavy and expensive parabolic antennas; a great amount of work must be done in amplifying, phase shifting and power weighting in antenna arrays. Associating an amplifier, phase shifter and power splitter behind each radiating element allows us to attain all these objectives and to remove heavy mechanical engines and expensive polished antenna surfaces.
INTRODUCTION
Weather forecasting is the application of science and technology to predict the state of the atmosphere for a given location. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to traders within commodity markets. Temperature forecasts are used by utility companies to estimate demand over the coming days. A forecast predicts the change of the atmosphere with time from its present state.
HISTORY OF EVOLUTION OF WEATHER FORECASTING
With the development of telegraph networks during the 1800s, meteorologists were able to take ground-level observations and assemble a synoptic, or wide-area, picture of surface atmospheric conditions. From this synoptic analysis an experienced forecaster could often produce an
accurate local forecast. In the 1940s, a group of
American computer scientists and meteorologists
began to develop computer weather "models",
collections of dynamical equations modified so
that computers can solve them, along with a set of
initial conditions obtained from observations.
The models divided the atmosphere into layers,
and the layers into a set of points making up a
grid, so that the dynamics equations could be
solved at each point. A model's ability to predict
how weather varies across space is termed its
resolution; the early models had very low
resolution, owing to the limitations of computer
speed and power at the time. The only computer
available at the time was so slow that it took 24
hours to produce a 24 hour forecast. Finally, by
1950, most of the problems had been worked out,
and simple research models were yielding
forecasts of upper level winds. In time, as digital
computers increased in speed and reliability, it
became possible to model the weather more
accurately. In December 1954, a Swedish group
led by Carl-Gustaf Rossby issued the world's first
real time numerical forecast.
RADAR ANTENNA
The most important tool needed for a weather station is a stratospheric radar for measuring wind velocity in different atmospheric layers. This information will reduce the fuel cost of aircraft, as the information provided by these radars will allow aircraft to avoid turbulent zones. These radars use a great parabolic antenna (frequency about 960 MHz or 400 MHz) to emit in a vertical direction. The parabolic antennas with mechanical engines are too heavy to be replaced easily. Thus microstrip patch antennas are used to replace these heavy and expensive parabolic antennas.

MICROSTRIP
Microstrip is a type of electrical transmission line which can be fabricated using printed circuit board (PCB) technology, and is used to convey microwave-frequency signals. It consists of a conducting strip separated from a ground plane by a dielectric layer known as the substrate. Microwave components such as antennas, couplers, filters, power dividers etc. can be formed from microstrip, the entire device existing as the pattern of metallization on the substrate. Microstrip is thus much less expensive than traditional waveguide technology, as well as being far lighter and more compact. Microstrip transmission lines consist of a conductive strip of width "W" and thickness "t" and a wider ground plane, separated by a dielectric layer (a.k.a. the "substrate") of thickness "H", as shown in the figure below:
FIG: Cross sectional view of a microstrip line

Basic Microstrip Patch Antenna
The conventional microstrip antenna consists of a pair of parallel conducting layers separated by a dielectric medium, referred to as the substrate, as shown in the figure. In this configuration, the upper conducting layer or patch is the source of radiation, where electromagnetic energy fringes off the edges of the patch and into the substrate. The lower conducting layer acts as a perfectly reflecting ground plane, bouncing energy back through the substrate and into free space.
Figure: Typical geometry of a micro strip antenna
PROPERTIES OF ACTIVE MICROSTRIP ANTENNAS

Polarization
Polarization of an antenna in a given direction is defined as the polarization of the wave transmitted (radiated) by the antenna. Polarization of the radiated wave is defined as that property of an electromagnetic wave describing the time-varying direction and relative magnitude of the electric field vector.
Figure: Types of antenna polarization

Radiation Pattern
An antenna radiation pattern or antenna pattern is defined as a mathematical function or a graphical representation of the radiation properties of the antenna as a function of space coordinates. In most cases, the radiation pattern is determined in the far-field region and is represented as a function of the directional coordinates. Radiation properties include power flux density, radiation intensity, field strength, directivity, phase or polarization. The radiation pattern can be divided into:
i. Main lobe: the radiation lobe containing the direction of maximum radiation.
ii. Side lobes: the minor lobes adjacent to the main lobe, separated by various nulls. Side lobes are generally the largest among the minor lobes.
iii. Back lobe: the minor lobe diametrically opposite the main lobe.
Figure: Radiation pattern of a generic directional antenna.

Half Power Beam Width (HPBW)
The half power beam width is defined as: in a plane containing the direction of the maximum of a beam, the angle between the two directions in which the radiation intensity is one half the maximum value of the beam.
Figure: Half Power Beam Width

Bandwidth
The bandwidth of an antenna is defined as the range of frequencies within which the performance of the antenna, with respect to some characteristic, conforms to a specified standard.

Substrate Characteristics
There are many substrates that can be used for the design of microstrip antennas, and their dielectric constants (εr) are usually in the range 2.2 ≤ εr ≤ 12. Thick substrates are most desirable for antenna performance, as their dielectric constants are in the lower end, which provides better efficiency, larger bandwidth and loosely bound fields for radiation into space (better radiation power).

Thick substrate                Thin substrate
Low dielectric constant        High dielectric constant
Better efficiency              Less efficiency
Larger bandwidth               Smaller bandwidth
Larger element size            Smaller element size
Increase in weight             Lighter in weight
Increase in dielectric loss    Minimum dielectric loss

Antenna arrays, radiation pattern and array factor
The antenna elements can be arranged to form a 1- or 2-dimensional antenna array. A number of antenna-array-specific aspects will be outlined; we used 1-dimensional arrays for simplicity reasons. Antennas exhibit a specific radiation pattern. The overall radiation pattern changes when several antenna elements are combined in an array. This is due to the so-called array factor. This factor quantifies the effect of combining radiating elements in an array without the element-specific radiation pattern taken
into account. The overall radiation pattern of an array is determined by this array factor combined with the radiation pattern of the antenna element. The overall radiation pattern results in a certain directivity and thus gain, linked to the directivity through the efficiency. Directivity and gain are equal if the efficiency is 100%.

Influence of the number of elements on the array factor
The array directivity increases with the number of elements. The figure shows the directivity of 3 arrays with 2 (red), 5 (green) and 10 (blue) elements. The element spacing is 0.4 times the wavelength (λ) for all the arrays in the figure. Note the presence of side lobes next to the main lobes; this is typical for arrays. The number of side lobes and the side lobe level increase with the number of elements. It is important to note that, due to the array factor definition, there are 2 main lobes: a main lobe at theta 0 (positive z-axis) and a main lobe at theta 180 (negative z-axis).
Figure: Directivity of a 2 (red), 5 (green) and 10 (blue) element array with 0.4λ element spacing.
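The array factor described above can be sketched numerically. The convention below (phase term proportional to sin θ, so the two main lobes fall at θ = 0° and 180°) and the uniform excitation are assumptions chosen for illustration:

```python
import cmath
import math

def array_factor(n_elem, d_over_lambda, theta_deg):
    """|AF| of an n_elem-element uniform linear array with element
    spacing given in wavelengths and uniform excitation."""
    psi = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * n * psi) for n in range(n_elem)))
```

The peak value of |AF| equals the number of elements, which is why directivity grows with N (the 2-, 5- and 10-element arrays in the figure), while off-peak angles produce the side-lobe structure.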
Antenna structure
It consists of 5 layers:
1) Radiating patch array: there are 276 microstrip patches arranged in panels. Their dimensions are calculated for the given frequency (961 MHz), beam aperture (5°) and spurious side lobe level (-20 dB).
2) Power distribution network: this layer is made of microstrip lines to feed the radiating elements of a sub-array in phase. The dimensions are computed to match four 50 Ω radiating elements to a 50 Ω driver.
3) Active circuits: this layer has amplifiers, emitter or receiver switches and controllable phase shifters.
4) Panel distributing network: microstrip lines are used to realize the unequal power distribution to the radiating elements and to match the input impedance of the amplifiers to the output of the drivers.
5) Primary power splitters: this layer is used to distribute the microwave power into 9 panels. The power required by each panel varies, and the power distribution is non-uniform.
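Layer 2 must match four 50 Ω elements, which in parallel present 12.5 Ω, to a 50 Ω driver. One standard way to do this in microstrip, shown here purely as an illustration (the paper does not specify the splitter topology), is a quarter-wave transformer:

```python
import math

def quarter_wave_z(z_in, z_load):
    """Characteristic impedance of a quarter-wave matching section
    between a source impedance z_in and a load impedance z_load."""
    return math.sqrt(z_in * z_load)

# four 50-ohm radiating elements fed in parallel present 12.5 ohm
z_section = quarter_wave_z(50.0, 50.0 / 4)
```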
MATHEMATICAL MODELLING OF MICROSTRIP ANTENNA FOR MILLIMETER WAVE ON THICK SUBSTRATE
Harsh Kumar1 and Gyanendra Singh2
harshkumar.bitm@gmail.com, gksingh24@sify.com
Abstract - Millimeter wave technology, being an emerging area, is still very much undeveloped. Substantial research needs to be done in this area as its applications are numerous. In the present endeavor, a circular patch antenna is designed on a thick substrate and simulated using the SONNET software.
Introduction
Millimeter waves can be classified as the part of the electromagnetic spectrum that spans 30 GHz to 300 GHz, which corresponds to wavelengths from 10 mm to 1 mm. Although millimeter wave (mmWave) technology has been known for many decades, it is still underdeveloped, yet available for a broad range of new products and services, including high-speed point-to-point wireless local area networks and broadband access. mmWave systems have mainly been deployed for military applications; with advances in process technologies and low-cost integration solutions, mmWave technology has started to gain a great deal of momentum from academia and industry. In this paper, however, we focus specifically on the 39.2 GHz frequency, which is used for high-speed microwave data links. Here a millimeter wave antenna is designed for 39.2 GHz on a thick substrate and compared with the calculated results. When a thick substrate is used, surface wave excitation takes place, so the surface wave loss must be taken into account. James & Henderson [3] propose thick-substrate criteria for εr = 2.32 and for εr = 10, where h is the thickness of the substrate and λ0 is the free-space wavelength. In a microstrip patch antenna various types of losses take place, such as dielectric loss, conductor loss, radiation loss and surface wave loss. In general the losses can be determined by using the formulae of [4,5],
where Pd is the dielectric loss, Pc is the conductor loss, Pr is the radiated loss, tan δ is the loss tangent, and WT is the total power absorbed at the resonant frequency.
Circular patch antenna
The far-field expressions obtained from the cavity model for the circular patch antenna (Fig. 1) are simple and adequate for practical purposes. Using the far-field expressions, eqs. (5) and (6), together with eq. (4), the radiated power and hence the radiation resistance are obtained [6] (eqs. (7) and (8)).
Fig 1: Circular patch
The existence of the dielectric substrate over the conducting ground plane in a microstrip antenna can cause surface wave excitation along the air-dielectric interface. The resistance due to surface
wave excitation, Rs, can be derived from the ratio of the power lost to surface waves, Ps, to the radiated power, as given by [7] (eq. (17)).

Design consideration
For a thick substrate, the height of the substrate is decided by the condition given above: h = 0.8 mm, εr = 2.32. The disk metallization radius a can be determined from the resonance condition Jn'(ka) = 0; for the lowest-order mode n = 1, the first root occurs at ka = 1.841 [9]. The feed point can be determined by the expression of [8], using the total energy stored in the patch at the resonance frequency of the circular patch. Using the above relations the dimensions of the patch were determined as radius a = 1.21 mm and feed point location = 0.46 for optimum matching. Substituting these values into eqs. (2) and (3) gives the power loss due to the conductor and the power loss due to the dielectric (eqs. (12), (13)), from which the resistance due to the dielectric and the conductor resistance can be determined (eqs. (14), (15)).
Fig. 2: Block diagram of designed antenna (SONNET)
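The resonance condition above fixes the disk radius; a minimal sketch follows (no fringing or thick-substrate correction is applied, so it overestimates the radius slightly compared with the quoted 1.21 mm):

```python
import math

C = 3e8  # free-space speed of light (m/s)

def disk_radius(f0, er):
    """Circular-patch radius from the TM11 resonance condition
    Jn'(ka) = 0 with n = 1, whose first root is ka = 1.841."""
    k = 2 * math.pi * f0 * math.sqrt(er) / C  # wavenumber in the dielectric
    return 1.841 / k

a = disk_radius(39.2e9, 2.32)  # metres
```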
RESULTS
The theoretical results for return loss were obtained using the above derived expressions, and the antenna was simulated using the SONNET software; the resulting return-loss graph is shown in Fig. 3. The total resistance can be determined from the above equations (eq. (16)) by combining Rr, the resistance due to radiation loss, Rs, the surface wave resistance, Rc, the conductor loss resistance, and Rd, the resistance due to dielectric loss.
Radiating efficiency: it is the ratio of radiated power to input power.
Fig. 3: Theoretical and simulated results of the variation of return loss with frequency
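The radiating efficiency defined above can be written out directly in terms of the power-loss components listed earlier:

```python
def radiating_efficiency(p_r, p_c, p_d, p_s):
    """Radiated power over the total input power, where the input
    power is the sum of the radiated, conductor, dielectric and
    surface-wave contributions."""
    return p_r / (p_r + p_c + p_d + p_s)
```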
The graphs show good agreement between the theoretical and simulated results, thus proving the correctness of the technique developed. Both graphs resonate at 39 GHz. The bandwidth shown by the theoretical results (320 MHz) is higher than that of the simulated results (250 MHz). The VSWR value at resonance comes out to be 1.38 for the theoretical analysis and 1.3 for the simulated results. The gain is calculated using the above equations and the results are plotted in Fig. 4, showing a maximum gain at resonance of 4.76 dB.
Fig. 4: Gain vs frequency (theoretical)
Fig. 5: Simulated radiation pattern
For the simulated result, the gain of the antenna is 5.7 dB at the 39 GHz frequency.
CONCLUSION
It can be concluded from the above analysis that an efficient technique has been developed for the analysis of circular patch antennas at millimeter wave frequencies on thick substrates. The feasibility of the technique is proved by simulations. The expressions to find the input resistance and gain for a circular patch antenna on a thick substrate have been developed and verified.
REFERENCES
[1] R. Piesiewicz, T. Kleine-Ostmann, N. Krumbholz, D. Mittleman, M. Koch, J. Schoebel and T. Kurner, "Short-range ultra-broadband terahertz communications: concepts and perspectives," IEEE Antennas Propag. Mag. 49(6), 24-38, Dec. 2007.
[2] P. Kumar, A. K. Singh, G. Singh, T. Chakravarty and S. Bhooshan, "Terahertz technology: a new direction," Proc. IEEE Int. Symp. Microwave, pp. 195-201, 2006.
[3] Y. P. Zhang, M. Sun and L. H. Guo, "On-chip antennas for 60-GHz radios in silicon technology," IEEE Trans. Electron Devices 52, 1664-1668, 2005.
[4] Mehmet Kara, "Empirical formulas for the computation of physical parameters of rectangular microstrip antennas with thick substrate," John Wiley & Sons, CCC 0895-2477/97.
[5] Y. P. Zhang, M. Sun, K. M. Chua, L. L. Wai, D. Liu and B. P. Gaucher, "Antenna-in-package in LTCC for 60-GHz radio," IEEE Int. Workshop on Antenna Technology, March 21-23, 2007, pp. 279-282.
[6] Z. Qi and B. Liang, "Design of microstrip antenna with broader bandwidth and beam," IEEE Antennas and Propagation Society International Symposium 3A, 617-620, July 2005.
[7] Shaoyong Wang, Qi Zhu and Shanjia Xu, Dept. of EEIS, Univ. of Sci. and Tech. of China, Hefei, 230027, Anhui, China. Received 28 January 2007; accepted 17 April 2007; published online 10 May 2007.
A New Improved Approach for Video Object Tracking using Adaptive Kalman Filter
Student, M.Tech (ALCCS)
IETE, New Delhi
esmita.singh07@gmail.com
Abstract
Object tracking is a familiar and potentially effective technique; as video acquisition equipment becomes better and cheaper, the use of digital video keeps increasing. Video sequences provide more natural scenes and more information about objects, and about how they change position and scenario over time, than still images. A new improved approach for tracking moving objects in video is proposed. In the initialization, a moving object is selected by the user; this object is segmented, and the dominant color is extracted from the segmented target. While tracking, a motion model is used to set the system model of an adaptive Kalman filter, and the dominant color of the moving object in the HSI color space is used to follow the moving object in the sequential video frames. The result is fed back as the measurement of the adaptive Kalman filter, and the estimated parameters of the adaptive Kalman filter are stabilized adaptively by the occlusion ratio. The proposed method has a strong capability to track any moving object in consecutive frames under complex real-world situations, such as the moving object disappearing totally or partially due to occlusion by other objects, rapid object motion, changing lighting, changes in the direction and orientation of the moving object, and sudden changes in the velocity of the moving object. The proposed method is an efficient video object tracking algorithm.
I. Introduction
Object tracking in video streams has many applications: security, smart spaces, and human-machine interfaces, to name a few. In these cases the objects are typically people or vehicles. What these objects have in common is that sooner or later they exhibit some movement, which differentiates them from the background and identifies them as foreground objects. The information technologies developed over the last several years have been heavily technology based, while decision-making has remained a human thinking process. Video tracking is the task of following a moving object (or multiple objects) over time using a camera. It has a variety of uses, including surveillance of airports and embassies and automated control systems (e.g., missiles). With the help of this technique we can answer questions such as what an object is doing at a particular time, what its actions are, and where it came from. Object tracking can be defined as a method of following an object through a sequence of images to determine its relative movement with respect to other objects. Tracking in video can be time consuming because of the amount of data that video contains. Adding further to the complexity is the possible need to use object recognition techniques for tracking.
Basic steps in object tracking can be listed as:
Image segmentation
Foreground/background extraction
Modeling
Feature extraction
This basic tracking is very fast to perform and allows:
1. Correction of segmentation problems: If one object of human size is split into several little parts, we can merge them back by comparing the
surfaces of these successor boxes to the human size and correct the segmentation. If a group of people is split into several persons, nothing is done, because each of them has a human-sized surface.
2. Detection of occlusion problems: If several little parts (each smaller than a human) merge into a bigger object, nothing is done. If several persons merge into a group, this group will have several predecessors and a size twice (or at least greater than) that of a human, so we may suspect occlusion problems.
Image Segmentation
The segmentation of foreground objects can be done by processing the difference of the current frame from a background image. This background image can be static or can be computed adaptively. Segmentation is the process of identifying components of the image. It involves operations such as boundary detection, connected-component labeling, and thresholding. Boundary detection finds edges in the image; any differential operator can be used for it.
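As an illustration, boundary detection with the Sobel differential operator can be sketched as follows. This is a minimal pure-Python example: the kernels are the standard Sobel masks, but the gradient threshold of 128 and the function name are our own illustrative choices, not values from the paper.

```python
def sobel_edges(img, thresh=128):
    """Detect boundary pixels with the Sobel differential operator.

    img: 2-D list of grayscale intensities; returns a same-sized
    binary map (1 = edge) with a zero border.
    """
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if abs(gx) + abs(gy) >= thresh:       # |G| approximated by |gx| + |gy|
                edges[y][x] = 1
    return edges
```

A vertical step edge in a synthetic image, for example, yields edge pixels along the intensity transition and zeros in the uniform regions.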
Foreground/background extraction
As the name suggests, this is the process of separating the foreground and background of the image. Here it is assumed that the foreground contains the objects of interest. Some of the methods for foreground extraction are:
Difference images: In this method we use subtraction of images to find objects that are moving and those that are not. The result of the subtraction is viewed as another grey image, called the difference image.
Kalman filter: The Kalman filter is one of the most popular estimation techniques in motion prediction because it provides an optimal estimation method for linear dynamic systems with white Gaussian noise. It also provides a generalized recursive algorithm that can be implemented easily on computers. This method employs a Kalman filter to predict the image at time t(i+1) based on some noise model. The difference between predicted and actual intensities is thresholded to classify each image pixel as foreground or background. The background can be estimated to help with some of the issues that arise from the standard method. This is a simple technique that uses a pixel-by-pixel statistical estimate of intensity. One advantage of this method is that it considers the effect of noise, which is very important in real-world applications: for example, an automatic road-traffic management system may detect false objects due to bad weather, wind, etc. Once the foreground is extracted, a simple subtraction operation can be used to extract the background.
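The pixel-wise statistical background estimate mentioned above can be sketched as a simple running average. This is a hedged illustration: the learning rate `alpha`, the threshold, and the function names are assumptions of ours, not values from the paper.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background estimate: bg <- (1-alpha)*bg + alpha*frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background estimate are labelled foreground (1)."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

A pixel that suddenly brightens relative to the slowly adapting background is flagged as foreground, while stable pixels are absorbed into the background estimate over time.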
Modeling
The camera model is an important aspect of any object-tracking algorithm. All existing object-tracking systems use a preset camera model; in other words, the camera model is derived directly from domain knowledge.
Feature extraction and tracking
Finally, features are extracted from the image and the object is tracked using the Kalman filter.
Video object tracking involves a comprehensive treatment of the fundamental aspects of algorithm and application development for the task of estimating, over time, the position of an object of interest seen through cameras. It is useful in a wide range of applications: surveillance cameras, vehicle navigation, perceptual user interfaces, and augmented reality. However, most research on object tracking performs well only with selective algorithms that are applicable to fixed settings. The focus of this paper is tracking a general object selected in real time.
There are several situations to consider for video in a real-world environment: whether the camera is fixed or not, multiple moving objects, rigid or non-rigid objects, occlusion by other objects, one or many cameras, fully automatic or semi-automatic semantic object tracking, etc. From the above discussion, we will encounter the following three problems when tracking a moving object:
Initial moving object segmentation.
Detection of the moving object.
Tracking the moving target under occlusion.
Methods that use the Kalman filter technique to predict the detection range and thereby reduce the computational complexity cannot handle a target under occlusion. Therefore, for tracking the moving target under occlusion, this paper proposes an adaptive Kalman filter to estimate the motion information under a deteriorating condition such as occlusion. The tracking algorithm needs to satisfy two qualities: simplicity and robustness. Simplicity implies that the algorithm is easy to implement and has the minimum number of parameters.
II. Proposed Method
The main steps of the proposed tracking algorithm are listed as follows:
Initialization
1. Moving object segmentation by frame difference and region growing.
2. Object feature extraction.
Tracking by adaptive Kalman filter
1. Motion model construction to build the system state model of the adaptive Kalman filter.
2. Moving object detection in consecutive frames for the correction step of the adaptive Kalman filter.
User interface
2.1.1. Feature extraction and moving object segmentation by frame difference and region growing
Frame difference: Change-detection methods separate the temporally changed and unchanged regions of two successive images based on the evaluation of the frame difference (FD). It is a simple method to segment the moving object in a video. After the user appoints a moving object as the target, the target is segmented by the differences of the frames at t-1, t, and t+1. The difference of consecutive frames is used to detect the changed area:
FD(x, y, t) = 0 if |f(x, y, t+1) - f(x, y, t)| <= threshold, and 1 otherwise.
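The frame-difference rule above, together with the intersection used to form the moving region MR from two successive FD masks, can be sketched as follows. This is a minimal pure-Python illustration; the threshold value and function names are our assumptions.

```python
def frame_difference(f_prev, f_next, threshold=25):
    """FD mask: 1 where the absolute inter-frame difference exceeds threshold."""
    return [[1 if abs(b - a) > threshold else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(f_prev, f_next)]

def moving_region(fd_prev, fd_curr):
    """MR(x, y, t): intersection (logical AND) of FD(t-1) and FD(t)."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(fd_prev, fd_curr)]
```

Only pixels that changed in both consecutive frame differences survive into MR, which suppresses transient noise responses.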
Region growing is applied to the segmented moving object that the user appoints. The region growing method consists of two basic morphological steps:
1. Select seed pixels within the image.
2. From each seed pixel, grow a region:
Set the region prototype to be the seed pixel;
Calculate the similarity between the region prototype and the candidate pixel;
Calculate the similarity between the candidate and its nearest neighbor in the region;
Include the candidate pixel if both similarity measures are higher than experimentally set thresholds;
Update the region prototype by calculating the new principal component;
Go to the next pixel to be examined.
Fig. 1: The flowchart of the adaptive Kalman filter tracking method (user appoints an object as target; moving object segmentation and feature extraction; system state model; prediction step; moving object detection; measurement and occlusion rate; correction step).
If the initial point is not located in one of the segmented regions, we select the segmented region that is closest to the user's locating point. A 3 x 3 block is used as the structuring element (SE) to group the object points. In the first iteration,
MO_k = (MO_{k-1} ⊕ SE) ∩ MRS, k = 1, 2, 3, ...
The iterations do not terminate until the connected region is extracted, i.e., MO_n = MO_{n-1} at the nth iteration, and then the other segmented objects are given up. In the second iteration, the points of the segmented moving object are defined as
MR(x, y, t) = FD(x, y, t-1) ∩ FD(x, y, t).
The set of segmented moving-object points is described as
MRS = {(x, y) | MR(x, y, t) = 1}.
Once a connected segmented region is found, another morphological method, called region filling, is applied to dilate MO_n. Region filling is expressed as
OR_k = (OR_{k-1} ⊕ SE) ∩ MO_n^c,
where OR_0 = MO_0 is the initial point and OR_k is the result of the kth iteration.
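The iterative grouping MO_k = (MO_{k-1} ⊕ SE) ∩ MRS can be sketched over point sets with a 3 x 3 structuring element as follows. This is a minimal pure-Python illustration under our own naming; the convergence test MO_n = MO_{n-1} is taken from the text.

```python
def dilate_into(points, mask):
    """One conditional-dilation step: (points ⊕ SE) ∩ mask, SE = 3x3 block."""
    grown = set()
    for (x, y) in points:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                p = (x + dx, y + dy)
                if p in mask:          # intersection with the mask set
                    grown.add(p)
    return grown

def grow_object(seed, mask):
    """Iterate MO_k = (MO_{k-1} ⊕ SE) ∩ MRS until MO_n == MO_{n-1}."""
    mo_prev = set()
    mo = {seed} if seed in mask else set()
    while mo != mo_prev:
        mo_prev, mo = mo, dilate_into(mo, mask)
    return mo
```

Region filling, OR_k = (OR_{k-1} ⊕ SE) ∩ MO_n^c, follows the same dilate-and-intersect pattern with the complement of MO_n as the mask.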
2.1.2. Color feature extraction
To find the dominant color, we perform feature extraction in the RGB color space. The RGB color space is represented by a 3-dimensional cube with red, green, and blue additive primaries. When color clustering by the K-means method operates in RGB space, it can use identical weighting in every dimension, since the RGB color space forms a cube and the clustering does not need to consider any special property of a dimension (for example, the cyclic property of the hue component in HSI color space). Therefore, the RGB color space is well suited to dominant color extraction.
First, the moving object colors in RGB components are classified by K-means. K is experimentally set to 5. The color pixels of the moving object are classified into K groups by color, and the average color of every group in frame t is defined as MC_k(t). If group d is the one with the highest density (minimum distance summation), then MC_d(t) is set as the dominant color feature.
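The K-means dominant-color step above can be sketched as follows. This pure-Python example is a hedged illustration: the deterministic initialisation (first k distinct colours) and the density score are our assumptions, chosen to keep the sketch reproducible.

```python
def kmeans_dominant_color(pixels, k=5, iters=20):
    """Classify RGB pixels into k groups with K-means and return the mean
    colour of the densest group (minimum intra-group distance summation)."""
    # deterministic initialisation: the first k distinct colours (our choice)
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in pixels:                       # assign to nearest centre
            j = min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
            groups[j].append(p)
        centers = [tuple(sum(c[d] for c in g) / len(g) for d in range(3))
                   if g else centers[i]        # keep empty groups' centres
                   for i, g in enumerate(groups)]

    def summation(i):                          # total member-to-centre distance
        return (sum(dist2(p, centers[i]) for p in groups[i])
                if groups[i] else float("inf"))

    d = min(range(len(centers)), key=summation)
    return centers[d]
```

With a set of mostly identical red pixels and a few spread-out blue ones, the tightly packed red group wins and its mean is returned as the dominant colour.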
2.2. Moving object tracking by the adaptive Kalman filter
For the tracking problem, if we have to estimate the motion information of an object under a deteriorating condition such as occlusion, a simple detection method will not be able to measure and track the moving object correctly. In this paper, we propose an adaptive Kalman filter for this purpose. In the proposed adaptive Kalman filter, a motion model is constructed to set the system state model, and the result of moving object detection in consecutive frames is fed back as the measurement for correction. In addition, the estimate parameters of the Kalman filter are adjusted adaptively. The Kalman filter has two steps:
Prediction step:
s^-(t) = Φ(t-1) s^+(t-1)
P^-(t) = Φ(t-1) P^+(t-1) Φ(t-1)^T + Q(t-1)
Correction step:
K(t) = P^-(t) H(t)^T (H(t) P^-(t) H(t)^T + R(t))^-1
s^+(t) = s^-(t) + K(t) (z(t) - H(t) s^-(t))
P^+(t) = (I - K(t) H(t)) P^-(t)
where s^-(t) is the a priori estimate, s^+(t) is the a posteriori estimate, P^-(t) is the a priori estimate error covariance, and P^+(t) is the a posteriori estimate error covariance.
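The prediction-correction cycle above can be sketched in scalar form as follows. This is a minimal pure-Python example; the state transition Φ, observation H, and noise values below are illustrative assumptions, not values from the paper.

```python
def kalman_step(s_post, p_post, z, phi=1.0, h=1.0, q=0.01, r=0.1):
    """One prediction-correction cycle of a scalar Kalman filter,
    following the equations above."""
    # prediction step
    s_prior = phi * s_post                     # s-(t) = Phi(t-1) s+(t-1)
    p_prior = phi * p_post * phi + q           # P-(t) = Phi P+ Phi^T + Q
    # correction step
    k = p_prior * h / (h * p_prior * h + r)    # Kalman gain K(t)
    s_post = s_prior + k * (z - h * s_prior)   # s+(t)
    p_post = (1.0 - k * h) * p_prior           # P+(t)
    return s_post, p_post
```

With equal prior and measurement uncertainty, the gain is 0.5 and the corrected estimate lands halfway between prediction and measurement.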
Therefore, the system will get a near-optimal result if we can decide which of the two to trust. The prediction-correction cycle of the Kalman filter is repeated. To develop the proposed adaptive Kalman filter, first the motion model of the object is constructed and used as the system state model of the Kalman filter; the motion model is applied in the prediction step. Then, for the correction step, a moving object detection method is proposed. After constructing the motion model and obtaining the measurement by moving object detection, we can apply adaptive Kalman filtering to track the object in video sequences. The system state model in adaptive Kalman filtering is built from the motion model and used in the prediction step. The so-called adaptive Kalman
filter lets the estimate parameters of the Kalman filter adjust automatically. Since the measurement error is directly proportional to the occlusion ratio, in the correction step the occlusion rate is used to adjust the estimate parameters of the adaptive Kalman filter.
According to the equations above, the Kalman gain K(t) is inversely proportional to the measurement error R(t). If the occlusion rate is less than the threshold, the value of R(t) is set to the occlusion rate a(t) and the prediction error Q(t-1) to 1 - a(t). Otherwise, it is reasonable to let the measurement error R(t) and the prediction error Q(t-1) be infinity and zero, respectively, so that the Kalman gain K(t) is zero. In this way the Kalman filter system is adjusted adaptively according to the occlusion rate: if the occlusion rate is less than the threshold, the measurement result is trusted more than the predicted one; otherwise, the system trusts the predicted result completely. In this manner the Kalman filter adjusts automatically while estimating the moving object.
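The occlusion-driven adjustment just described can be sketched as follows. The threshold value is an illustrative assumption; the rule itself (R = a(t), Q = 1 - a(t) below the threshold; R infinite, Q zero above it) follows the text.

```python
def adaptive_errors(occlusion_rate, threshold=0.6):
    """Set measurement error R(t) and prediction error Q(t-1) from the
    occlusion rate a(t), as described above."""
    if occlusion_rate < threshold:
        r_t = occlusion_rate            # trust the measurement more
        q_t = 1.0 - occlusion_rate
    else:                               # heavy occlusion: trust prediction only
        r_t = float("inf")
        q_t = 0.0
    return r_t, q_t
```

With R(t) infinite, the gain K(t) = P H / (H P H + R) evaluates to zero, so the corrected state equals the prediction, exactly the "trust the prediction completely" behaviour.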
Fig. 2: The flowchart of initial moving object segmentation (image sequence; user interface: the user appoints an object as the target; frame difference; region growing; feature extraction; dominant color feature of the moving object).
Finally, we summarize the proposed tracking algorithm using the adaptive Kalman filter. The initialization comprises moving object segmentation and feature extraction: first, frame difference and region growing are used to segment the moving object; then, the dominant color is extracted from the segmented moving object. In the tracking procedure, a motion model is constructed to build the system state and is applied in the prediction step, while the measurement of the system is provided by moving object detection using HSI matching. In addition, the occlusion ratio is applied to adjust the prediction and measurement errors adaptively. The two errors make the adaptive Kalman filter system trust the prediction or the measurement more.
III. Related Work
Many researchers have tried various approaches for object tracking. The nature of the technique used largely depends on the application domain. Some of the research work done in the field of object tracking includes:
Shiuh-Ku Weng, Chung-Ming Kuo, and Shu-Kang Tu [1] studied an object tracking algorithm in which a moving object selected by the user is segmented and the dominant color is extracted from the segmented target. In the tracking step, a motion model is first constructed to set the system model of an adaptive Kalman filter. Then the dominant color of the moving object in HSI color space is used as the feature to detect the moving object in the consecutive video frames. The detected result is fed back as the measurement of the adaptive Kalman filter, and the estimate parameters of the adaptive Kalman filter are adjusted adaptively by the occlusion ratio.
Ning Li, Lu Liu, and De Xu [3] studied the prototype of the Kalman filter and proposed a corner-feature-based adaptive Kalman filter (AKF) for moving object tracking. Unlike pixel-level features, the corner feature is insensitive to dynamic changes of outdoor scenes such as water ripples, plants, and illumination. Thus, the corner feature can robustly describe a moving object in an outdoor area. The main contribution is to take advantage of corner points to describe the moving object and then use the variation in the number of occluded corner points across consecutive frames to design an AKF.
Nimmakayala Ramakoti, Ari Vinay, and Ravi Kumar Jatoth [5] presented the idea of a Kalman filter that tracks the object by assuming the initial state and noise covariance. For efficient tracking by any filter such as the Kalman filter, the noise covariances must be optimized. They propose tuning the noise
covariances of the Kalman filter for object tracking using particle swarm optimization (PSO).
Mohand Said Allili and Djemel Ziou [2] studied a novel object tracking algorithm for video sequences based on active contours. The tracking is based on matching the object appearance model between successive frames of the sequence using active contours. They formulate the tracking as the minimization of an objective function incorporating region, boundary, and shape information. Further, in order to handle variation in object appearance due to self-shadowing, changing illumination conditions, and camera geometry, they propose an adaptive mixture model for the object representation.
One simple feature-based object tracking method was explained by Yiwei Wang, John Doherty, and Robert Van Dyck [4]. The method first segments the image into foreground and background to find objects of interest. Then four types of features are gathered for each object of interest, and for each consecutive frame the changes in features are calculated for various possible directions of motion.
IV. Conclusion and Discussion
In this paper, an effective adaptive Kalman filter is proposed to track a moving object. In the proposed adaptive Kalman filter, the occlusion rate is used to adjust the error covariance of the Kalman filter adaptively. The method can track the moving object in real time, and it successfully estimates the object's position in several kinds of real-world situations, such as a fast-moving object, partial occlusion, long-lasting occlusion, changing lighting, changes in the direction and orientation of the moving object, and sudden changes in its velocity. Furthermore, to track multiple objects, each object can be assigned its own adaptive Kalman filter. Since the processing time of the proposed method for tracking a single object is short, systems implemented with the proposed method can afford to track multiple objects in real time.
V. References
[1] Shiuh-Ku Weng, Chung-Ming Kuo, and Shu-Kang Tu, "Video object tracking using adaptive Kalman filter," Journal of Visual Communication and Image Representation, Vol. 17, Issue 6, Dec. 2006.
[2] Mohand Said Allili and Djemel Ziou, "Object tracking in videos using adaptive mixture models and active contours," Neurocomputing, Vol. 71, Issues 10-12, June 2008.
[3] Ning Li, Lu Liu, and De Xu, "Corner feature based object tracking using adaptive Kalman filter," IEEE International Conference on Signal Processing, 2008.
[4] Y. Wang, J. Doherty, and R. Van Dyck, "Moving object tracking in video," Proc. Conference on Information Sciences and Systems, Princeton, NJ, March 2004.
[5] Nimmakayala Ramakoti, Ari Vinay, and Ravi Kumar Jatoth, "Particle swarm optimization aided Kalman filter for object tracking," IEEE International Conference on Advances in Computing, Control, & Telecommunication Technologies, 2009.
[6] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Third Edition, Prentice-Hall, Englewood Cliffs, NJ, 2008.
[7] Vance Faber, "Clustering and the continuous K-means algorithm," Los Alamos Science, Second Edition, 2007.
[8] Junqiu Wang and Yasushi Yagi, "Integrating color and shape-texture features for adaptive real-time object tracking," IEEE Transactions on Image Processing, Vol. 17, No. 2, pp. 235-240, 2007.
[9] Wei Chen and Kangling Fang, "A hybrid clustering approach using particle swarm optimization for image segmentation," ICALIP 2008, pp. 1365-1368, 2008.
[10] J. Vermaak, P. Perez, M. Gangnet, and A. Blake, "Towards improved observation models for visual tracking: selective adaptation," Proceedings of the European Conference on Computer Vision, 2007.
NEURO-FUZZY BASED IMAGE PROCESSING SYSTEM
Supriya Gupta 1, Shreya Prakash 2, Vipra Tiwari 3
IT Department, Galgotias College of Engineering and Technology, Greater Noida
ABSTRACT - In this paper, we propose neuro-fuzzy techniques for various image processing schemes. Neuro-fuzzy refers to the combination of fuzzy set theory and neural networks, with the advantages of both. The noise is filtered step by step. In each step, noisy pixels are detected with the help of fuzzy rules, which are very useful for processing human knowledge where linguistic variables are used. Pixels that are detected as noisy are filtered; the others remain unchanged. We propose a generalized fuzzy inference system (GFIS) for noisy image processing.
The GFIS is a multi-layer neuro-fuzzy structure which combines the Mamdani model and the TS fuzzy model to form a hybrid fuzzy system. The GFIS can not only preserve the interpretability property of the Mamdani model but also keep the robust local stability criteria of the TS model. Simulation results indicate that the proposed model achieves higher-quality restoration of filtered images for the noise model than median or Wiener filters, in terms of peak signal-to-noise ratio (PSNR). A fuzzy logic reasoning strategy is also proposed for edge detection in digital images without determining a threshold value.
Keywords - Neuro-fuzzy techniques, neural networks, linguistic variables, GFIS.
Supriya Gupta 1, B.Tech (final year), is with the Information Technology Department, Galgotias College of Engineering and Technology, Greater Noida, India (corresponding author; phone: 9453534596; email: supriya.galgotia@gmail.com).
Shreya Prakash 2, B.Tech (final year), is with the Information Technology Department, Galgotias College of Engineering and Technology, Greater Noida, India (corresponding author; phone: 9212048907; email: shreya.srvstv@gmail.com).
Vipra Tiwari 3, B.Tech (final year), is with the Information Technology Department, Galgotias College of Engineering and Technology, Greater Noida, India (email: vipratiwari25@gmail.com; phone: 8802570343).
Piyush Gupta 4 is working as Assistant Professor in the IT Department, GCET, Greater Noida, India (email: collone.piyush@gmail.com; phone: 9410031072).
Fig: Showing filtered and noisy image
I. INTRODUCTION
We propose an alternative to crisp image processing algorithms, especially when subjective or very sensitive parameters or concepts related to the image need to be measured or defined. It involves an image fuzzification function, fuzzy operators, and an optional defuzzification function. The applicability of the scheme is illustrated in three applications: image binarization, edge detection, and geometric measurements. This paper also attempts to formulate a mathematical model for a fuzzy image processing approach, to provide guidance for performing fuzzy image processing, and to survey applications of fuzzy logic in the development of image processing.
II. LITERATURE REVIEW
Recent research has concerned using neural fuzzy features to develop edge detectors, after training on a relatively small set of prototype edges in sample images classifiable by classic edge detectors. This work was pioneered by Bezdek et al., who trained a neural net to give the same fuzzy output as a normalized Sobel operator. However, work by the writer and collaborators has shown that training NN classifiers to crisp values is a more effective variant of Bezdek's scheme.
The advantage of the neural fuzzy edge detector, even over the traditional edge detector on which the neural fuzzy form was based, is very apparent.
A dynamic channel assignment (DCA) technique has been proposed for large-scale cellular networks (LCNs) using a noisy chaotic neural network. In this technique, an LCN is first decomposed into many subnets, which are designated as decomposed cellular subnets (DCSs). The DCA process is independently performed in every subnet to alleviate the signaling
overheads and to apportion the DCA computational
load among the subnets.
III. BRIEF DESCRIPTION OF THE SYSTEM
Neuro-fuzzy hybridization is widely termed Fuzzy Neural Network (FNN) or Neuro-Fuzzy System (NFS) in the literature. Neuro-fuzzy refers to the combination of fuzzy set theory and neural networks, with the advantages of both:
i) Handle any kind of information (numeric, linguistic, logical, etc.)
ii) Manage imprecise, partial, vague or imperfect information.
iii) Resolve conflicts by collaboration and aggregation.
iv) Self-learning, self-organizing and self-tuning capabilities.
v) No need of prior knowledge of relationships of data.
vi) Mimic human decision making process.
vii) Fast computation using fuzzy number operations.
A new fuzzy filter for the removal of random impulse noise in color images is presented. By working with several successive filtering steps, a very good tradeoff between detail preservation and noise removal is obtained. One strong filtering step that removed all noise at once would inevitably also remove a considerable amount of detail; therefore, the noise is filtered step by step. In each step, noisy pixels are detected with the help of fuzzy rules, which are very useful for processing human knowledge where linguistic variables are used. Pixels that are detected as noisy are filtered; the others remain unchanged. Filtering of detected pixels is done by block matching based on a noise-adaptive mean absolute difference. The experiments show that the proposed method outperforms other state-of-the-art filters both visually and in terms of objective quality measures such as the mean absolute error (MAE), the peak signal-to-noise ratio (PSNR), and the normalized color difference (NCD).
The fuzzy technique is an operator introduced to simulate, at a mathematical level, the compensatory behavior in a process of decision making or subjective valuation. The edge pixels are mapped to a range of values distinct from each other. To assess the robustness of the proposed method, results for different captured images are compared with those obtained with the linear Sobel operator. The method gives a lasting improvement in line smoothness and straightness for straight lines and good roundness for curved lines; at the same time the corners get sharper and can be defined more easily.
In many image and video processing scenarios one is forced to operate under adverse conditions caused by various types of interfering signals. In simple cases the interfering signal can be as unstructured as white noise, whereas in more difficult cases the interference can be as structured as the class of signals one is interested in. We consider the scenario where a target image y is to be predicted using an anchor image x that is corrupted with such structured interference. Specifically, we look at prediction scenarios involving an anchor image of the form
x = D(y) + l + w
where D contains linear and multiplicative distortions, l is an interfering signal, and w is white noise. Our task is to obtain a close approximation to y using x and some helper information. We are particularly interested in noisy transitions, x -> y, that depict combinations of cross-fades, clutter, intensity modulations, focus changes, and so on.
Fig.: Transform coefficients for a toy example where two independent toy images, y and l, are composited to form x = y + l + w; the figure shows the analysis basis, and the noise w is assumed to result in mostly insignificant coefficients.
Many well-known transforms provide sparse decompositions over typical images and video frames. It is also well known that the locations of an image's significant coefficients provide a signature that delineates it from other images [44], [9], [52]. One hence expects easy separation of the component images from the composite in the transform domain since, typically, each significant coefficient of the composite belongs to one of the components. Locations where significant coefficients of y (color-coded blue) and significant coefficients of l (red) overlap are expected to be few (magenta).
Their goal is to recover the two natural images
assuming simple blending functions. Our
framework is different as we predict one image
based on the other. Our work can also handle
blends involving more than two images, cross-
fades depicting a transition from one blend to a
different blend so that the image to be predicted is
a blend itself, brightness and focus changes, clutter,
and many other complex inter-picture transitions.
In comparison to regularized de-noising
formulations we note that the noise that we
remove from the anchor frame during prediction is
highly structured and cannot be dealt with using
simple de-noising setups and
thresholding nonlinearities. Our work is also
single-pass and it can handle many cases that are
difficult to address with established iterative sparse
recovery algorithms. Despite
the substantial differences however, the
fundamental similarity between our work and these
methods is the reliance on the non-convex structure
of natural image sets.
Fig.: (a) Estimate of y in terms of a sequence of b x b blocks. (b) The neighborhood N around B establishes the training region T in conjunction with A.
IV. DEVELOPMENT AND IMPLEMENTATION OF NEURO-FUZZY LOGIC IN THE SYSTEM
In the field of artificial intelligence, Neuro-fuzzy
refers to combinations of artificial neural networks
and fuzzy logic. Neuro-fuzzy was proposed by J. S.
R. Jang. Neuro-fuzzy hybridization results in a
hybrid intelligent system that synergizes these two
techniques by combining the human-like reasoning
style of fuzzy systems with the learning and
connectionist structure of neural networks. Neuro-
fuzzy hybridization is widely termed as Fuzzy
Neural Network (FNN) or Neuro-Fuzzy System
(NFS) in the literature. Neuro-fuzzy system (the
more popular term is used henceforth) incorporates
the human-like reasoning style of fuzzy systems
through the use of fuzzy sets and a linguistic model
consisting of a set of IF-THEN fuzzy rules. The
main strength of Neuro-fuzzy systems is that they
are universal approximations with the ability to
solicit interpretable IF-THEN rules.
The strength of Neuro-fuzzy systems involves two
contradictory requirements in fuzzy modeling:
interpretability versus accuracy. In practice, one of
the two properties prevails. The neuro-fuzzy in
fuzzy modeling research field is divided into two
areas: linguistic fuzzy modeling that is focused on
interpretability, mainly the Mamdani model; and
precise fuzzy modeling that is focused on accuracy,
mainly the Takagi-Sugeno-Kang (TSK) model.
The Mamdani and TS models differ in the THEN-part only; therefore both models can be expressed in a more compact common form.
Fuzzy Image Processing
Fuzzy image processing is the collection of all
approaches that understand, represent and process
the images, their segments and feature as fuzzy
sets. The representation and processing depend on
the selected fuzzy technique and on the problem to
be solved. Fuzzy image processing has three main
stages.
1. Image fuzzification
2. Membership modification
3. Image defuzzification
Fuzzy techniques are important and powerful tools for knowledge representation and processing, and they manage subjectivity and uncertainty very efficiently, which is why they play an important role in image processing. Three kinds of imperfection motivate their use: grayness ambiguity, geometrical fuzziness, and vague knowledge. Fuzzy geometry, measures of fuzziness and image information, fuzzy inference systems, fuzzy clustering, fuzzy mathematical morphology, fuzzy measure theory, fuzzy grammars, and neural-fuzzy methods are some of the important theoretical components of a fuzzy image processing scheme.
Steps in a fuzzy image processing scheme
(a) Mapping the image into the fuzzy
domain
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
The mapping function F is defined such that image
characteristics or concepts that are of interest,
brightness, contrast, edges, regions, connectivity,
image complexity etc, could be better represented
in the new image model. The concept of a fuzzy processing scheme is to map the original image into the fuzzy domain, apply a fuzzy operator to the fuzzy image, and then defuzzify the fuzzy image to return to the original domain.
(b) Operations on the Fuzzy Domain
The fuzzy operators can be expressed through
mathematical expression or fuzzy rules. Some of
the operators on the Fuzzy domain are given by
(i) Contrast intensification
(ii) Filtering
(iii) Fuzzy entropy
(c) Defuzzification Function
If there is a coding (fuzzification) of image data, then there must also be a decoding (defuzzification) of the image. The defuzzification function maps the fuzzy values back to the allowed values of the visualization device.
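The three stages can be sketched in a few lines of Python/NumPy. This is an illustrative example only: the membership-modification rule chosen here (Zadeh's contrast-intensification INT operator) is one assumed possibility, not the operator used in this paper.

```python
import numpy as np

def fuzzify(img):
    """Stage 1: map an 8-bit image into the fuzzy domain [0, 1]."""
    return img.astype(float) / 255.0

def intensify(mu):
    """Stage 2 (membership modification): Zadeh's INT operator
    pushes memberships away from 0.5, raising contrast."""
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

def defuzzify(mu):
    """Stage 3: map fuzzy memberships back to displayable gray levels."""
    return np.clip(np.round(mu * 255.0), 0, 255).astype(np.uint8)

img = np.array([[40, 100], [160, 220]], dtype=np.uint8)
out = defuzzify(intensify(fuzzify(img)))
```

After the round trip, values below mid-gray are pushed toward black and values above it toward white, following exactly the fuzzification, membership-modification, defuzzification chain listed above.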
The above model provides a base for discussion; we note that the techniques of this paper can be used in scenarios that are difficult to represent with simple picture-wide parameterizations. As we will see, the localized algorithms we propose can easily handle many variations around (21), such as colored/quantization noise with spatially varying statistics, spatially varying parameters, and spatially varying correlations between the component signals. Our algorithms can also deal with cases involving sophisticated forms of interference where picture-wide parameters are not uniquely defined.
Fig. Transform coefficients (4x4 block DCT) for a real example where two images are composited. Only the sub-band corresponding to the first horizontal AC coefficient is shown; darker colors correspond to larger coefficients. The intuition from the toy example mostly translates: there are locations of overlapping significant coefficients, but most locations with significant coefficients do not overlap.
VI CONCLUSION
This paper proposed an adaptive Neuro-fuzzy approach for additive noise reduction. The main feature of the generalized fuzzy inference system (GFIS) is the hybridization of the Mamdani and TS models. Experimental results are
obtained to show the feasibility and robustness of
the proposed approach. These results are also
compared to other filters by numerical measures
and visual inspection. In the near future, we will
extend GFIS to process color images. Moreover,
the uniform distribution impulsive noise model will
be further studied.
REFERENCES
[1]. R. C. Gonzalez and R. E. Woods, Digital
Image processing, 2nd ed. Englewood Cliffs,
NJ: Prentice-Hall, 2001.
[2]. A. C. Bovik, T. S. Huang, and D. C. Munson.
A generalization of median filtering using
linear combinations of order statistics. IEEE
Transactions on Acoustics, Speech, and Signal
Processing.
[3]. A. Adler, Y. Hel-Or, and M. Elad, A
weighted discriminative approach for image
denoising with overcomplete representations,
in IEEE ICASSP, Dallas, TX, Mar. 14-19, 2010, pp. 782-785.
[4]. C. S. Burrus, J. A. Barreto, and I. W.
Selesnick, Iterative reweighted least-squares
design of FIR filters, IEEE Trans. Signal
Process., vol. 42, pp. 2926-2936, Nov. 1994.
[5]. E. Abreu, M. Lightstone, S. K. Mitra, and K.
Arakawa, A new efficient approach for the
removal of impulse noise from highly
corrupted images, IEEE Trans. Image
Process., vol. 5, no. 6, pp. 1012-1025, 1996.
[6]. I.M. Elewa, H.H Soliman and A.A.
Alshennawy. "Computer vision Methodology
for measurement and Inspection: Metrology in
Production area ". Mansoura Eng. First conf.
Faculty of Eng. Mansoura Univ., March 28-
30,1995,Pp. 473-444.
IMAGE COMPRESSION AND ENHANCEMENT TECHNIQUES
PRERANA
Email id: prerana6289@yahoo.com
Dehradun Institute of Technology
Dehradun (U.K)
ABSTRACT: Uncompressed multimedia
(graphics, audio and video) data requires
considerable storage capacity and
transmission bandwidth. Despite rapid
progress in mass-storage density, processor
speeds, and digital communication system
performance, demand for data storage
capacity and data-transmission bandwidth
continues to outstrip the capabilities of
available technologies. The recent growth of
data intensive multimedia-based web
applications have not only sustained the
need for more efficient ways to encode
signals and images but have made
compression of such signals central to
storage and communication technology.
In today's technological world as our use of
and reliance on computers continues to
grow, so too does our need for efficient
ways of storing large amounts of data and
due to the bandwidth and storage
limitations, images must be compressed
before transmission and storage. For
example, someone with a web page or
online catalog that uses dozens or perhaps
hundreds of images will more than likely
need to use some form of image
compression to store those images. This is
because the amount of space required to
hold unadulterated images can be
prohibitively large in terms of cost.
Fortunately, there are several methods of
image compression available today. These fall into two general categories: lossless and lossy image compression.
However, the compression will reduce the
image fidelity, especially when the images
are compressed at lower bit rates. The
reconstructed images suffer from blocking
artifacts and the image quality will be
severely degraded under the circumstance of high compression ratios.
IMAGE COMPRESSION: Image
compression is used to minimize the amount
of memory needed to represent an image.
Images often require a large number of bits
to represent them, and if the image needs to
be transmitted or stored, it is impractical to
do so without somehow reducing the
number of bits. The problem of transmitting
or storing an image affects all of us daily.
TV and fax machines are both examples of
image transmission, and digital video
players are examples of image storage.
Three techniques of image compression are
pixel coding, predictive coding, and
transform coding. The idea behind pixel
coding is to encode each pixel
independently. The pixel values that occur
more frequently are assigned shorter code
words (fewer bits), and those pixel values
that are more rare are assigned longer code
words. This makes the average code word
length decrease.
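This variable-length idea can be illustrated with a small Huffman coder; the sketch below is a hypothetical Python implementation, not code from this paper.

```python
import heapq
from collections import Counter

def huffman_codes(pixels):
    """Build a prefix code in which frequent pixel values
    receive shorter code words."""
    freq = Counter(pixels)
    # Heap items: (frequency, tie-breaker, {pixel value: code so far}).
    heap = [(f, i, {v: ""}) for i, (v, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)   # two least frequent groups
        fb, i, b = heapq.heappop(heap)
        merged = {v: "0" + c for v, c in a.items()}
        merged.update({v: "1" + c for v, c in b.items()})
        heapq.heappush(heap, (fa + fb, i, merged))
    return heap[0][2]

pixels = [7, 7, 7, 7, 3, 3, 1, 0]        # value 7 dominates
codes = huffman_codes(pixels)
coded_bits = sum(len(codes[p]) for p in pixels)
```

Here the dominant value 7 gets a 1-bit code word while the rare values get 3 bits, so the 8 pixels need 14 bits instead of the 24 that a fixed 3-bit code would use.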
Predictive coding is based upon the principle
that images are most likely smooth, so if
pixel b is physically close to pixel a, the
value of pixel b will be similar to the value
of pixel a. When compressing an image
using predictive coding, quantized past
values are used to predict future values, and
only the new info (or more specifically, the
error between the value of pixels a and b) is
coded. The image compression technique
most often used is transform coding. A
typical image's energy often varies
significantly throughout the image, which
makes compressing it in the spatial domain
difficult; however, images tend to have a
compact representation in the frequency
domain packed around the low frequencies,
which makes compression in the frequency
domain more efficient and effective.
Transform coding is an image compression
technique that first switches to the frequency
domain, then does its compressing. The
transform coefficients should be
decorrelated, to reduce redundancy and to
have a maximum amount of information
stored in the smallest space. These
coefficients are then coded as accurately as
possible to not lose information. In this
project, we will use transform coding.
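The energy-compaction idea can be demonstrated numerically. The sketch below (Python with SciPy, offered as an illustration rather than the coder used in this project) transforms a smooth 8x8 block, keeps only its four largest DCT coefficients, and inverts the transform; because the energy is packed into a few low frequencies, the reconstruction error stays small.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block: neighboring pixels vary slowly.
xx, yy = np.meshgrid(np.arange(8), np.arange(8))
block = (100 + 5 * xx + 3 * yy).astype(float)

coeffs = dctn(block, norm='ortho')

# "Compress" by keeping only the 4 largest-magnitude coefficients.
kept = np.zeros_like(coeffs)
top = np.unravel_index(np.argsort(np.abs(coeffs), axis=None)[-4:], coeffs.shape)
kept[top] = coeffs[top]

reconstructed = idctn(kept, norm='ortho')
max_error = np.max(np.abs(reconstructed - block))  # small despite dropping 60 of 64 coefficients
```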
Principles behind compression:
A common characteristic of most images is that the neighboring pixels are correlated and therefore contain redundant information. The foremost task then is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three types of redundancy can be identified:
1- Spatial redundancy, or correlation between neighboring pixel values.
2- Spectral redundancy, or correlation between different color planes or spectral bands.
3- Temporal redundancy, or correlation between adjacent frames in a sequence of images (in video applications).
Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible. Since we will focus only on still image compression, we will not worry about temporal redundancy.
APPLICATIONS OF DIGITAL IMAGE PROCESSING:
The field of digital image processing has
experienced continuous and significant
expansion in recent years. The usefulness of
this technology is apparent in many different disciplines, ranging from medicine to remote sensing. The advances in and wide availability of image processing hardware have further enhanced the usefulness of image
processing. The Application of Digital
Image Processing conference welcomes
contributions of new results and novel
techniques from this important technology.
Papers are solicited in the broad areas of
digital image processing applications,
including:
1- Medical applications
2- Multidimensional image processing
3- Image processing architectures and
workstations
4- Programmable DSPs for video coding
5- High-resolution display
6- High-quality color representation
7- Super-high-definition image processing
8- Pattern recognition
9- Video processing
10- Digital cinema
11- Image transmission and coding
12- Color processing
13- Remote sensing
IMAGE ENHANCEMENT: Nowadays digital images have enveloped the complete world. Digital cameras, the main source of digital images, are widely available in the market in cheap ranges. Sometimes the image taken from a digital camera is not of good quality and requires some enhancement. There exist many techniques that can enhance a digital image without spoiling it. The enhancement methods can broadly be divided into the following two categories:
1. Spatial Domain Methods
2. Frequency Domain Methods
In spatial domain techniques, we directly
deal with the image pixels. The pixel values
are manipulated to achieve desired
enhancement. In frequency domain methods, the image is first transferred into the frequency domain; that is, the Fourier Transform of the image is computed first. All the
enhancement operations are performed on the
Fourier transform of the image and then the
Inverse Fourier transform is performed to get
the resultant image.
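The frequency-domain recipe (forward transform, operate, inverse transform) looks like this in NumPy; the ideal low-pass filter used as the enhancement step is just an assumed example operation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a noisy gray-scale image

# 1. Compute the Fourier Transform of the image.
F = np.fft.fft2(img)

# 2. Perform the enhancement on the transform: here an ideal
#    low-pass filter keeping frequencies within `radius` of the origin.
radius = 8
u = np.fft.fftfreq(64) * 64         # frequency indices in FFT order
dist = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2)
F_filtered = F * (dist <= radius)

# 3. Inverse Fourier transform to get the resultant image.
result = np.real(np.fft.ifft2(F_filtered))
```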
FUTURE ASPECTS: The rapid growth
of digital imaging applications, including
desktop publishing, multimedia
teleconferencing, and high-definition
television (HDTV) has increased the need
for effective and standardized image
compression techniques. A lot of research work has been done on still image compression since the establishment of the JPEG standard in 1992. To bring these research efforts into focus, a new standard called JPEG-2000 for the coding of still images was developed and completed by the end of the year 2000. This
standard is intended to advance standardized
image coding systems to serve applications
into the next millennium. It will provide a
set of features vital to many high-end and
emerging image applications by taking
advantage of new modern technologies.
Specifically, this new standard will address
areas where current standards fail to produce
the best quality or performance. It will also
provide capabilities to markets that currently
do not use compression. In today's technological world as our use of and
reliance on computers continues to grow, so
too does our need for efficient ways of
storing large amounts of data and due to the
bandwidth and storage limitations, images
must be compressed before transmission and
storage. The image compression will reduce
the image fidelity, especially when the
images are compressed at lower bit rates.
The reconstructed images suffer from
blocking artifacts and the image quality will
be severely degraded under the circumstance
of high compression ratios.
Image Analysis Filtering and Enhancement
Neeraj Saini#, Siddhant Jain#, Rishabh Yadav#, Rajkumar Saini#, Tarun Kumar*
#Student, *Assistant Professor
Vidya College of Engineering
neerajsaini0@gmail.com, siddhant9jain@gmail.com, rajkumarsaini.rs@gmail.com
Abstract: Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next. Detecting edges in an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image. This is demonstrated in this article using DCT, Sobel, Canny and FFT based detection. Image enhancement operations such as blur, contrast, brightness, sharpness, and inversion are also shown. MATLAB 7.0 (2007b) was used for the implementation of all results.
Keywords: Canny, Sobel, FFT, DCT, Blur, Contrast,
Sharpness, Brightness, Inversion, image.
Introduction
Edge detection is a fundamental tool used in most image
processing applications to obtain information from the frames
as a precursor step to feature extraction and object
segmentation. This process detects outlines of an object and
boundaries between objects and the background in the image.
An edge-detection filter can also be used to improve the
appearance of blurred or anti-aliased video streams. Image
enhancement is the process of manipulating an image so that
the result is more suitable than the original for a specific
application. The output produced by filtering is inputted to the
next phase, image enhancement.
I. IMAGE TYPES
There are three types of images, which are described below [12].
A. True color image
It is also known as an RGB image. A true color image is an image in which each pixel is specified by three values, one each for the red, blue, and green components of the pixel's color. It is an m-by-n-by-3 array of class uint8, uint16, single, or double whose pixel values specify intensity values. For single or double arrays, values range from [0, 1]. For uint8, values range from [0, 255]. For uint16, values range from [0, 65535].
Fig. 1 RGB image.
B. Gray scale image
It is also known as an intensity, gray scale, or gray level image. It is an array of class uint8, uint16, int16, single, or double whose pixel values specify intensity values. For single or double arrays, values range from [0, 1]. For uint8, values range from [0, 255]. For uint16, values range from [0, 65535]. For int16, values range from [-32768, 32767].
Fig. 2 Gray scale image.
C. Binary image
A binary image is a logical array of 0s and 1s. Pixels with
the value 0 are displayed as black; pixels with the value 1 are
displayed as white.
Fig. 3 Binary image.
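The three image types and their value ranges can be reproduced with NumPy arrays (a small illustrative sketch mirroring the MATLAB classes described above):

```python
import numpy as np

# Gray scale image of class uint8: values in [0, 255].
gray_u8 = np.array([[0, 128], [200, 255]], dtype=np.uint8)

# The same image as class double: values in [0, 1].
gray_f = gray_u8.astype(float) / 255.0

# Binary image: a logical array of 0s (black) and 1s (white).
binary = gray_f > 0.5

# True color (RGB) image: an m-by-n-by-3 array.
rgb = np.stack([gray_u8] * 3, axis=-1)
```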
II. D.C.T. FILTER
With the advent of the Internet and multimedia systems,
the JPEG standard has gained widespread popularity for lossy
compression of still-frame, continuous-tone images. In this algorithm, the image is first divided into non-overlapping blocks of 8x8 pixels, where each block is then subjected to the discrete cosine transform (DCT) before quantization and
entropy coding. As the centrepiece of the compression
algorithm, the DCT has been extensively studied by various
researchers [5].
In fact, there has been interest in understanding the distributions of the DCT coefficients for more than 20 years.
DCT on each of the blocks and collected the corresponding
coefficients from them, what is the resulting statistical
distribution? Such knowledge would be useful, for instance, in
quantizer design and noise mitigation for image enhancement.
Transform coding has become the de facto standard paradigm in image compression. It is well acknowledged that hardware (or software) implementation of the DCT is less expensive than that of the wavelet transform.
Fig. 4 DCT filtered image.
The 2D-DCT is obtained by using the one-dimensional DCT in both the horizontal and vertical directions:
First direction: F = C X^T
Second direction: G = C F^T
We can therefore say the 2D-DCT is the matrix:
Y = C (C X^T)^T = C X C^T
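The separability Y = C(CX^T)^T = CXC^T can be checked numerically. The sketch below builds the orthonormal DCT-II matrix C explicitly and compares the two-pass matrix product against SciPy's dct applied along both axes (a verification sketch, not part of the original MATLAB tool):

```python
import numpy as np
from scipy.fft import dct

N = 8
n = np.arange(N)
# Orthonormal DCT-II matrix: row k holds the k-th cosine basis vector.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)

rng = np.random.default_rng(1)
X = rng.random((N, N))

F = C @ X.T      # first direction
Y = C @ F.T      # second direction: Y = C (C X^T)^T = C X C^T
```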
III. SOBEL FILTER
Based on this one-dimensional analysis, the theory can be
carried over to two-dimensions as long as there is an accurate
approximation to calculate the derivative of a two-
dimensional image. The Sobel operator performs a 2-D spatial
gradient measurement on an image. Typically it is used to find
the approximate absolute gradient magnitude at each point in
an input gray scale image. The Sobel edge detector uses a pair
of 3x3 convolution masks, one estimating the gradient in the
x-direction (columns) and the other estimating the gradient in
the y-direction (rows). A convolution mask is usually much
smaller than the actual image [8].
It is a classic edge detection filter. It can be defined by separate kernels wx, wy for the x and y directions (reconstructed here from the standard Sobel definition, since the original equation figures were lost):

wx = [-1 0 +1; -2 0 +2; -1 0 +1]
wy = [-1 -2 -1; 0 0 0; +1 +2 +1]

Convolving the image with wx gives the horizontal gradient estimate gx, and convolving with wy gives the vertical gradient estimate gy. The final output of the Sobel filter is the gradient magnitude:

|G| = sqrt(gx^2 + gy^2), often approximated as |G| = |gx| + |gy|
Sobel's filter detects horizontal and vertical edges separately
on a scaled image. Sobel horizontally- renders near horizontal
edges and Sobel vertically- renders near vertical edges.
Fig. 5 Sobel filtered image.
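The same 3x3 masks can be applied with SciPy (a Python stand-in for the paper's MATLAB implementation); on a synthetic vertical edge only the x-direction response fires:

```python
import numpy as np
from scipy.ndimage import convolve

wx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # x-direction (columns) mask
wy = wx.T                                  # y-direction (rows) mask

img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # dark left half, bright right half

gx = convolve(img, wx)
gy = convolve(img, wy)
magnitude = np.hypot(gx, gy)               # |G| = sqrt(gx^2 + gy^2)
```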
IV. CANNY FILTER
The Canny edge detection algorithm is known to many as
the optimal edge detector. Canny's intentions were to enhance
the many edge detectors already out at the time he started his
work [1]. The Canny edge-detection algorithm is a combination of four stages: Gaussian smoothing for noise reduction, finding the gradient using the derivative of the Gaussian, non-maximal suppression to thin the edges, and finally hysteresis thresholding [2].
According to Canny, the optimal filter that meets all three
criteria above can be efficiently approximated using the first
derivative of a Gaussian function.
This process alleviates problems associated with edge
discontinuities by identifying strong edges, and preserving the
relevant weak edges, in addition to maintaining some level of
noise suppression. While the results are desirable, the
hysteresis stage slows the overall algorithm down
considerably.
The first stage involves smoothing the image by
convolving with a Gaussian filter. This is followed by finding
the gradient of the image by feeding the smoothed image
through a convolution operation with the derivative of the
Gaussian in both the vertical and horizontal directions. The 2-D convolution operation is described by the following equation:

I'(x, y) = Σ (k = -N..N) Σ (l = -N..N) g(k, l) · I(x - k, y - l)

where:
g(k, l) = convolution kernel
I(x, y) = original image
I'(x, y) = filtered image
2N + 1 = size of the convolution kernel [10].
Results can be further improved by performing edge
detection at multiple resolutions using multi-scale
representations. Convolution at multiple resolutions with large
Gaussian filters requires even more computation power. This
may prove to be challenging to implement as a software
solution for real-time applications that select the User-defined
threshold tool box to define the low and high threshold
values[3].
Fig. 6 Canny filtered image.
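The stages above can be outlined in Python with SciPy. The sketch below is a simplified stand-in for the real Canny detector: non-maximum suppression is omitted and full hysteresis tracking is replaced by a one-pass neighbor check, so sigma and the thresholds (`low`, `high`) are assumed illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, binary_dilation

def mini_canny(img, sigma=1.0, low=0.1, high=0.3):
    smoothed = gaussian_filter(img.astype(float), sigma)  # stage 1: smoothing
    gx = sobel(smoothed, axis=1)                          # stage 2: gradient
    gy = sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    strong = mag >= high                                  # definite edge pixels
    weak = (mag >= low) & ~strong
    # Simplified hysteresis: keep weak pixels touching a strong one.
    return strong | (weak & binary_dilation(strong))

img = np.zeros((16, 16))
img[:, 8:] = 1.0                                          # one vertical edge
edges = mini_canny(img)
```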
V. F.F.T. FILTER
FFT stands for Fast Fourier Transformation. Fourier
theory states that any signal, in our case visual images, can be
expressed as a sum of a series of sinusoids. In the case of
imagery, these are sinusoidal variations in brightness across
the image. Each Fourier term encodes 1) the spatial frequency, 2) the magnitude (positive or negative), and 3) the phase. These three values capture all of the information in the sinusoidal image. The spatial frequency is the frequency across space (the x-axis here) with which the brightness modulates [4].
The magnitude of the sinusoid corresponds to its contrast,
or the difference between the darkest and brightest peaks of
the image. A negative magnitude represents a contrast-
reversal, i.e. the bright become dark, and vice-versa. The
phase represents how the wave is shifted relative to the origin,
in this case it represents how much the sinusoid is shifted left
or right [6].
A Fourier transform encodes not just a single
sinusoid, but a whole series of sinusoids through a range of
spatial frequencies from zero (i.e. no modulation, i.e. the
average brightness of the whole image) all the way up to the
"nyquist frequency", i.e. the highest spatial frequency that can
be encoded in the digital image, which is related to the
resolution, or size of the pixels. The Fourier transform
encodes all of the spatial frequencies present in an image
simultaneously as follows. A signal containing only a single
spatial frequency of frequency f is plotted as a single peak at
point f along the spatial frequency axis, the height of that peak
corresponding to the amplitude, or contrast of that sinusoidal
signal.
Fig. 7 Amplitude-Frequency graph.
There is also a "DC term" corresponding to zero frequency
that represents the average brightness across the whole image.
A zero DC term would mean an image with average
brightness of zero, which would mean the sinusoid alternated
between positive and negative values in the brightness image.
But since there is no such thing as a negative brightness, all
real images have a positive DC term, as shown here too [9].
For mathematical reasons beyond the scope of this paper, the Fourier transform also plots a mirror image of the spatial frequency plot reflected across the origin, with spatial frequency increasing in both directions from the origin. These two plots are always mirror-image reflections of each other, with identical peaks at f and -f, as shown below [11].
Fourier analysis is used in image processing in much the
same way as with one-dimensional signals. When the Fourier transform is taken of an audio signal, the confusing time domain waveform is converted into an easy-to-understand
frequency spectrum. In comparison, taking the Fourier
transform of an image converts the straightforward
information in the spatial domain into a scrambled form in the
frequency domain [7].
Fig. 8 Amplitude-Frequency graph with mirroring.
In imaging applications, the FT allows interconversion between the spatial domain representation and the frequency domain representation of images, making the FT one of the most powerful tools in image processing and filtering.
The Fourier Transformation F of an M x N image f is defined as:

F(u, v) = Σ (x = 0..M-1) Σ (y = 0..N-1) f(x, y) e^(-j2π(ux/M + vy/N))

To improve the computational speed, fast Fourier transforms (FFTs) emerged; the FFT evaluates this same transform efficiently. Here f(x, y) is the image of size M x N that is transformed into the coefficient spectrum image F(u, v); x and y are spatial samples (pixels); and u and v are frequency samples.
Fig. 9 FFT filtered image.
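These properties (DC term equal to the average brightness scaled by the image size, mirror-image peaks at +f and -f) are easy to verify numerically with NumPy on a single-sinusoid image:

```python
import numpy as np

N = 64
x = np.arange(N)
# Horizontal sinusoid (4 cycles across the image) plus a constant offset.
row = 0.5 + 0.25 * np.cos(2 * np.pi * 4 * x / N)
img = np.tile(row, (N, 1))

magnitude = np.abs(np.fft.fft2(img))

dc = magnitude[0, 0]            # zero-frequency (average brightness) term
peak_pos = magnitude[0, 4]      # the sinusoid's spatial frequency +f
peak_neg = magnitude[0, N - 4]  # and its mirror image at -f
```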
VI. BLUR
A blurred or degraded image can be approximately
described by this equation g = H f + n, where
g is the blurred image, H is the distortion operator, also
called the point spread function (PSF). This function, when
convolved with the image, creates the distortion. f is the
original true image and n is additive noise, introduced during
image acquisition, that corrupts the image.
The true image f is never actually observed; it represents what you would have under perfect image acquisition conditions [13].
Fig. 10 Blurred image.
Fig. 11 Deblurred image.
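The degradation model g = H f + n can be simulated directly; in the sketch below (an assumed illustration), H is a Gaussian point spread function and n is small Gaussian noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# The "perfect" true image f (never observed in practice).
f = np.zeros((32, 32))
f[12:20, 12:20] = 1.0

Hf = gaussian_filter(f, sigma=2.0)         # H f: blur by the PSF
n = 0.01 * rng.standard_normal(f.shape)    # n: acquisition noise
g = Hf + n                                 # the observed degraded image
```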
VII. INVERSION
The brightness value of each pixel in the image is
converted to the inverse value on the 256-step color-values
scale. For example, a pixel in a positive image with a value of
255 is changed to 0, and a pixel with a value of 5 is changed
to 250. This is like negative film of an image [13].
Fig. 12 Inverted image.
VIII. BRIGHTNESS
Relative lightness or darkness of the color, usually measured as a percentage from 0% (black) to 100% (white) [13].
Fig. 13 Bright image.
Fig. 14 Histogram equalization for Brightness.
IX. CONTRAST
It clips the shadow and highlight values in an image and
then maps the remaining lightest and darkest pixels in the
image to pure white (level 255) and pure black (level 0) [13].
Fig. 15 Contrast image.
Fig. 16 Histogram equalization for Contrast.
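Inversion, brightness and contrast are all simple per-pixel mappings; here is a NumPy sketch of the three operations described above (illustrative, not the MATLAB tool itself):

```python
import numpy as np

img = np.array([[30, 80], [120, 200]], dtype=np.uint8)

# Inversion: each value maps to its opposite on the 256-step scale.
inverted = (255 - img).astype(np.uint8)

# Brightness: shift every value up, clipping to stay inside [0, 255].
brighter = np.clip(img.astype(int) + 60, 0, 255).astype(np.uint8)

# Contrast stretch: darkest pixel -> pure black (0), lightest -> pure white (255).
lo, hi = int(img.min()), int(img.max())
stretched = ((img.astype(float) - lo) / (hi - lo) * 255).round().astype(np.uint8)
```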
Fig. 17 Image analysis tool.
X. CONCLUSION
We have carried out a comparative study of DCT, SOBEL,
CANNY and FFT for images. Based on empirical
performance results, we illustrate that the main factors in image coding are the quantizer and sampling rather than the difference among the above four filters. Image enhancement is done for the betterment of an image so it can be used from different perspectives in different fields. The objective was to design and implement an image analysis tool in MATLAB that will analyze the image from different perspectives. This is represented in Fig. 17.
XI. FUTURE SCOPE
The future work can be extended to build a system, which
can deal with face detection, face recognition and finger print
recognition.
REFERENCES
[1] Canny, J., A Computational Approach to Edge Detection, IEEE
Trans. Pattern Analysis and Machine Intelligence, 8:679-714,
November 1986.
[2] Advanced Edge Detection Technique: Techniques in Computational
Vision:
http://www.cpsc.ucalgary.ca/Research/vision/501/edgedetect.pdf
[3] AN333: Developing Peripherals for SOPC Builder:
http://www.altera.com/literature/an/an333.pdf Feb 2011.
[4] http://www.dspguide.com/ch24/6.htm, Feb 2011.
[5] J. M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Processing, vol. 41, pp. 3445-3463, Dec. 1993.
[6] A. Said and W. A. Pearlman, A new, fast, and efficient image codec based on set partitioning in hierarchical trees, IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243-250, June 1996.
[7] http://google.com/image%20processing/fft/Fourier%20Image%20Anal
ysis.htm.
[8] S. Wu and A. Gersho, Rate-constrained picture adaptive quantization for JPEG baseline coders, in Proc. ICASSP'93, Apr. 1993, vol. 5, pp. 389-392.
[9] K. Ramchandran and M. Vetterli, Rate-distortion optimal fast thresholding with complete JPEG/MPEG decoder compatibility, IEEE Trans. Image Processing, vol. 3, pp. 700-704, Sept. 1994.
[10] http://www.altera.com/literature/cp/gspx/edge-detection.pdf
[11] http://www.cs.virginia.edu/~gfx/Courses/2011/ComputerVision/slides/l
ecture03_filtering.pdf
[12] http://www.ijcaonline.org/volume7/number2/pxc3871493.pdf
[13] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education.
Nanosensors: Types, applications and future aspects
Vinith Chauhan1, Manoj Pandey2, Pradeep Kumar Nathaney3
1,2,3 Assistant Professor (ECE), St. Margaret Engineering College, Neemrana
vinithchauhan@rediffmail.com, mr_mkpandey@yahoo.co.in, pradeep_nathaney@rediffmail.com
Abstract-This paper comprehensively surveys the
past, present and future of nanosensors, provides a
snapshot of the fast expanding and burgeoning
research activities in this field, and studies the
overall impact of nanotechnology era on sensors.
Nanosensors are a wonderful application of nanotechnology. Nanosensors are any biological, chemical, or surgical sensory points used to convey information about nanoparticles to the macroscopic world.
world. Their use mainly includes various medicinal
purposes and as gateways to building other
nanoproducts, such as computer chips that work at
the nanoscale and nanorobots. Presently, there are
several ways proposed to make nanosensors,
including top-down lithography, bottom-up
assembly, and molecular self-assembly. This paper
covers the salient developments in nanosensors from
the viewpoints of materials used, device structures,
types, applications and emerging future prospects of
nanosensors.
1. Introduction
Another wonderful invention of nanotechnology is the nanosensor: a biological, chemical or surgical sensory edge or point used to detect nanoparticle information and transfer it to the microscopic/macroscopic world. Their use mainly includes various medicinal purposes and serving as gateways to building other nanoproducts, such as computer chips that work at the nanoscale and nanorobots. Nanosensors are nanotechnology-enabled sensors characterized by one of the following attributes: either the size of the sensor or its sensitivity is in the nanoscale, or the spatial interaction distance between the sensor and the object is in nanometers. Any device conforming to one of these properties will be designated as a nanosensor.
2. Working principle of nanosensor
Nanosensors work through a special sensing ability that can detect information and data. Their arrangement is like that of ordinary sensors, but the major difference between sensors and nanosensors is that nanosensors are developed at the nanoscale, which distinguishes them from ordinary ones. Nanosensors can accurately identify specific cells or the parts of the body having any deficiency. They work by measuring changes in displacement, dislocation, concentration, volume, acceleration, external force, pressure or temperature of each cell in the living body. Many nanosensors are designed to differentiate between normal and abnormal cells, such as sensors for detecting cancer in a living body and molecular controllers to deliver medicines in the human body. They can also detect macroscopic changes that arise from external interactions and communicate these variations to the other nanocomponents working alongside them.
3. Nanosensor Materials
Several nanomaterials offer very large surface areas. Nanoporous carbon can provide surface areas up to 2000 m²/g. Carbon nanotubes, exotic variations of common graphite, are molecular tubes made up of hexagonally bonded sp² carbon atoms, having dimensions in the range 1-50 nm depending on their structure, i.e., single-wall carbon nanotube (SWCNT) or multi-wall carbon nanotube (MWCNT). The surface area of as-grown single-walled carbon nanotubes lies in the range 400-900 m²/g. Zeolites, a range of naturally occurring or manufactured materials, have nanoscale pores. Anodic etching of monocrystalline Si using HF buffered with ethanol produces nanoporous Si, which has a high surface-area-to-volume ratio (hundreds of m²/cm³).
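The scaling behind these surface-area figures can be illustrated with the textbook surface-to-volume relation for an idealized spherical particle. This is a sketch for intuition only: the function name is ours, and real nanoporous materials are far from spherical.

```python
def surface_to_volume_ratio(radius_m):
    """Surface-area-to-volume ratio of an ideal sphere:
    (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r, in m^2 per m^3."""
    return 3.0 / radius_m

# Shrinking a particle from 1 mm to 10 nm raises the ratio by the same
# factor (1e5) that the radius shrinks, which is why nanoscale
# materials expose such large surface areas per unit of material.
nano = surface_to_volume_ratio(10e-9)   # ~3e8 m^2/m^3
bulk = surface_to_volume_ratio(1e-3)    # ~3e3 m^2/m^3
```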
4. Types of Nanosensors
There are many different types of nanosensors; a few of them are chemical sensors, biosensors, electrometers, and deployable nanosensors. The chemical sensor uses capacitive-readout cantilevers and electronics in order to analyze the signal. This type of sensor is sensitive enough to detect a single chemical or biological molecule. Another type of nanosensor is the
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
electrometer, a nanometer-scale mechanical electrometer that consists of a torsional mechanical resonator, a detection electrode, and a gate electrode which are used to couple charge to the mechanical element. One of the most heavily funded areas of research in nanosensors is biosensors, mostly because of the possibilities this technology opens up in early detection of cancer and of various other diseases. Biosensors can also be used to detect specific types of DNA and other biomaterials by means of antibodies encoded on them. One example is the use of carbon nanotubes as a biosensor, in which DNA molecules attach to the ends of vertical carbon nanotubes grown on a silicon chip. Other types of biosensors can be used in more specific applications, such as detecting asthma attacks up to three weeks before they happen and assisting astronauts on space missions. A different type of sensor is referred to as a deployable nanosensor. There is not a lot of research available on this type; these mostly refer to sensors that would be used by the military or in other forms of national security.
5. Manufacturing of Nanosensors
Nanosensors can be manufactured by a number of different methods. The three most commonly known are top-down lithography, bottom-up assembly, and molecular self-assembly. Researchers have also found a way to manufacture a nanosensor using semiconducting nanowires, which is said to be an easy-to-make way of producing one type of nanosensor. Other methods of creating the sensors include the use of carbon nanotubes (CNTs).
Top-down lithography
The top-down lithography method is quite simple in concept. Simply put, it is the method of starting with a larger block of material and carving out the desired form. The pieces that are carved out are used as components in specific microelectronic systems such as sensors; in this case, the components carved out are at the nanoscale. This is the method used in the creation of many integrated circuits. For nanosensors, it is common to use a silicon wafer as the base. A layer of photoresist is added to the wafer, and lithography is then used to shine light on parts of the wafer and carve away material to create the desired component. This piece of material can then be doped and modified with other materials for use in devices such as nanosensors.
Bottom-up assembly
The method of bottom-up assembly is more difficult to accomplish, though simple in concept. This method uses atomic-sized components as the basis of the sensor; these components are moved one by one into position to create the sensor. It is an extremely difficult method to use, especially in mass production, because at this point in time it has only been achieved in a laboratory using atomic force microscopes. This process would most likely serve as a basis for the next method of manufacturing, called self-assembly.
Self-assembly
There are two methods to the concept of molecular self-
assembly, also known as growing nanostructures. The
first of these methods uses a piece of previously created or even naturally formed nanostructure as the base and immerses it in free atoms of its own kind. Over time, the structure takes on a shape with an irregular surface that makes it more prone to attracting further molecules, continuing the pattern of capturing more of the free atoms and forming more of itself, creating a larger
component of the nanosensor. The second method of
self-assembly is more difficult. It begins with a
complete set of components that automatically
assemble themselves into the finished product, in this
case the nanosensor. This has only been accomplished
in the manufacturing of micro-sized computer chips,
and has yet to be accomplished at the nanoscale.
However, if this were perfected at the nanoscale, the sensors could be made accurately, more quickly, and at lower cost.
6. Nanosensor Applications
There is an endless list of ways that nanosensors can be
applied to our everyday lives for the simple reason that
sensors are everywhere we look. In transportation we
can see nanosensors being applied on land, at sea, as
well as in the air and even in space. Also in the field of
communications, nanosensors are likely to be seen in
wired and wireless technologies as well as optical and
RF technologies. They can even be seen in buildings
and facilities, which consists of factories, offices, and
even homes. Last but certainly not least, is perhaps the
largest field aside from medical, which is robotics of all
kinds. Medicinal uses of nanosensors mainly revolve
around the potential of nanosensors to accurately
identify particular cells or places in the body in need.
By measuring changes in volume, concentration,
displacement and velocity, gravitational, electrical, and
magnetic forces, pressure, or temperature of cells in a
body, nanosensors may be able to distinguish between
and recognize certain cells, most notably those of
cancer, at the molecular level in order to deliver
medicine or monitor development to specific places in
the body. In addition, they may be able to detect
macroscopic variations from outside the body and
communicate these changes to other nanoproducts
working within the body. Nanosensors may be used to build smaller integrated circuits, as well as being incorporated into various other commodities made using other forms of nanotechnology, for use in a variety of situations including transportation, communication, improvements in structural integrity, and robotics. Nanosensors may also eventually be valuable as more accurate monitors of material states for use in systems where size and weight are constrained, such as in satellites and other aeronautic machines. Amongst other applications they can be used:
- To detect various chemicals in gases for pollution monitoring
- For medical diagnostic purposes, either as blood-borne sensors or in lab-on-a-chip type devices
- To monitor physical parameters such as temperature, displacement and flow
- As accelerometers in MEMS devices such as airbag sensors
7. Economic and social Impacts of Nanosensors
Nanosensors are still considered new in the field of technology. They have strong economic effects because advanced production and high-precision fabrication are required to produce them. The market for products containing nanosensors ranges from $0.6 billion to $2.7 billion, which is a huge cost for adopting this technology. First, however, nanosensor developers must overcome the present high costs of production in order to become worthwhile for implementation in consumer products. Additionally, nanosensor reliability is not yet suitable for widespread use, and, because of their scarcity, nanosensors have yet to be marketed and implemented outside of research facilities. Consequently, nanosensors have yet to be made compatible with most of the consumer technologies which they are projected to eventually enhance. Ethical and social
impacts are harder to define and sort as good or bad
compared to health and environmental impacts. The
advancement in detecting and sensing different
biological and chemical species with increased capacity
and accuracy may transform societal mechanisms that
were originally designed on uncertainty and imprecise
information. For example, the ability to measure
extremely low amounts of air pollutants or toxic
materials in water raises questions and dilemmas of risk
thresholds especially if the advancement of such
technologies outpaces the ability of the public to
respond. As another example, medical sensors will not
only help in diagnoses and treatment but may also
predict the future profile of an individual. This will add
to the information used by health insurance companies
to grant or deny coverage. Other social issues resulting
from the widespread use of nanosensors and
surveillance devices include privacy invasion and
security issues. In future nanosensors would surely
improve the present world of technology.
8. Future scope
In the future we can see many of these advances
become realities. Some of them may happen within the
next five to ten years and some may not happen for
fifty, or even within our lifetime. With the increasing
research into this technology, it is hard to tell. As far as nanosensors go, we can expect to see them start to appear within our lifetime, even if we cannot yet go to a store as consumers and purchase them. Some of the advantages of nanosensors are their tiny size, the fact that they require less power to run, their greater sensitivity, and their better specificity compared with today's sensors. All of these advantages will allow us to accomplish things we could never imagine before, such as atomic-sized sensors flowing in our bloodstreams that could predict cancer and other diseases. Between the medical advancements that could be made with the use of nanosensors and the advancements in airborne chemical detection being used for national security, nanosensors could make our lives not only easier but also safer in more than one way.
9. Conclusion
Nanotechnology offers important new tools expected to
have a great impact on many areas in medical
technology. It provides extraordinary opportunities not
only to improve materials and medical devices but also
to create new smart devices and technologies where
existing and more conventional technologies may be
reaching their limits. It is expected to accelerate
scientific as well as economic activities in medical
research and development. Nanotechnology has the
potential to make significant contributions to disease
detection, diagnosis, therapy, and prevention. Tools are
important and integral parts for early detection. Novel
tools and tools complementing existing ones are
envisaged. It offers opportunities in multiple platforms
for parallel applications, miniaturization, integration,
and automation. Nanotechnology could have a
profound influence on disease prevention efforts
because it offers innovative tools for understanding the
cell as well as the differences between normal and
abnormal cells. It could provide insights into the
mechanism of transformation, which is fundamental in
designing preventive strategies. Further, it provides
novel non-invasive observation modalities into the
cellular machinery. It allows for the analysis of such
parameters as cellular mechanics, morphology, and
cytoskeleton, which have been difficult to achieve using
conventional technologies.
10. References
[1] J. W. Aylott, "Optical nanosensors - an enabling technology for intracellular measurements," Analyst, 128, 2003, pp. 309-312.
[2] V. K. Khanna, "Nanoparticle-based sensors," Defence Science Journal, Special Issue on Nanomaterials: Science & Technology-II, 58, 5, 2008, pp. 608-616.
[3] R. Bogue, "Nanosensors: a review of recent progress," Sensor Review, 28, 1, 2008, pp. 12-17.
[4] M. S. Dresselhaus, G. Dresselhaus and P. Avouris, Carbon Nanotubes: Synthesis, Structure, Properties, and Applications, Springer, Berlin, 2001.
[5] G. Binnig, H. Rohrer, C. Gerber and E. Weibel, "Surface studies by scanning tunneling microscopy," Phys. Rev. Lett., 49, 1982, pp. 57-61.
[6] http://forums.howwhatwhy.com/showflat.php?Cat=&Board=Chemistry&Number=146622&fpart=1
[7] F. J. Giessibl, "AFM's path to atomic resolution," Materials Today, 8, 5, 2005, pp. 32-41.
[8] B. Mahar, C. Laslau, R. Yip and Y. Sun, "Development of carbon nanotube-based sensors - a review," IEEE Sensors Journal, 7, 2, 2007, pp. 266-284.
[9] http://www.technologyreview.com/Nanotech/18127/
[10] http://en.wikipedia.org/wiki/Nanosensor
Design of a Heart Beat Measuring Device
Ritika Srivastava1, Nikita Agarwal2
B.Tech final year, Department of Electronics and Instrumentation
Galgotias College of Engineering and Technology (GCET), Greater Noida
{ritika.srivastava89@gmail.com1, nikita.agarwal2409@gmail.com2}
Abstract: This paper describes the design of a microcontroller-based heart beat and pulse rate measuring device using an optical sensor and an LCD. As heart-related diseases are increasing at a tremendous rate, there is an essential need for an accurate and low-cost heart beat measuring device. The heart beat signal is obtained by an LED and LDR combination. This project demonstrates a technique to measure the heart rate by sensing the change in blood volume in a finger artery while the heart is pumping the blood. Pulses from the hand interrupt the light reaching the LDR, and this signal is read by the microcontroller.
Keywords: Heart beat, Microcontroller, Pulse rate.
I. INTRODUCTION
Heart rate indicates the soundness of our heart and helps in assessing the condition of the cardiovascular system [1]. Heart diseases such as heart attack, coronary heart disease and congestive heart failure are the leading cause of death for men and women in many countries. The heart rate of a healthy adult at rest is around 72 beats per minute (bpm) [2]. The heart rate of babies is higher than that of adults, around 120 bpm, while older children have a heart rate of around 90 bpm. The heart rate gradually increases during exercise [3] and returns slowly to the rest value afterwards. A heart rate monitor is a device that takes a sample of heartbeats and calculates the beats per minute (bpm). A heart rate monitor can be constructed using two methodologies: the electrical and the optical method. The electrocardiogram (ECG) is one of the most widely used and accurate methods for calculating heartbeats [4], but an ECG is an expensive device and is not economical. Another method, known as auscultation, is a more accurate method for measuring heart rate [5]. There are many other methods for measuring heart rate, such as the phonocardiogram (PCG), the blood pressure waveform [6] and the pulse meter [7], but these methods are used in clinics and are expensive. Many low-cost devices are also available. A lower than normal heart rate indicates a condition known as bradycardia, and a higher than normal heart rate indicates a condition known as tachycardia. The average human heart rate is about 70 bpm for adult males and 75 bpm for adult females. Heart rate varies between individuals based on age and fitness.
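The bradycardia/tachycardia distinction above can be written as a tiny classifier. Note that the 60-100 bpm normal range used below is the commonly quoted adult resting range, an assumption for illustration rather than a figure from this paper:

```python
def classify_resting_heart_rate(bpm):
    """Label an adult resting heart rate. The 60 and 100 bpm
    thresholds are widely used textbook limits (assumed here)."""
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"

print(classify_resting_heart_rate(72))   # the healthy resting rate cited above
```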
This paper describes the design of a low-cost heart rate measuring device which measures the heart rate of a patient by placing the patient's fingertip between an LED and an LDR. When the heart pumps the blood, it flows inside the fingertip; pulses from the hand interrupt the light emitted from the LED from reaching the LDR, and this signal is then read by the microcontroller and displayed on the LCD as text. The advantage of this method is that it is economical; the proposed cost of this device is INR 800. The corresponding figure is shown in
Figure 1 {figure showing the finger placed between photodiode and LDR}
The paper is organized as follows. Section II presents the system methodology, Section III the component specifications, and Section IV the results. Section V concludes the paper.
II. SYSTEM METHODOLOGY
The system consists of an infrared transmitter section, an infrared receiver section, amplifiers, filters, a microcontroller and the display section, i.e., the LCD. The system layout is shown in Fig 2.
Fig 2 {system layout}
A. Transmitter and Receiver Section:
The transmitter and receiver section consists of an infrared LED and an infrared LDR, i.e., light dependent resistor. The index finger of the person whose heart rate is to be measured is kept between the transmitting and receiving sections. The LED emits light and the LDR measures the heart rate through the change of blood reflectivity at the index finger. The intensity with which the LED emits light is measured and compared with the light received at the LDR. The intensity of light reaching the LDR is attenuated through the index finger, and this changes the resistance of the LDR, which is then measured by transforming it into a voltage ranging from 1 mV to 10 mV. The attenuation varies with the person's heart rate; on average, attenuation is about 80% of the light transmitted.
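The resistance-to-voltage conversion mentioned above is typically done with a simple voltage divider. The sketch below assumes a divider topology and example component values, which are not specified in the paper:

```python
def ldr_output_voltage(v_supply, r_fixed, r_ldr):
    """Voltage across the fixed resistor of a divider formed by the
    LDR and a fixed resistor:
        Vout = Vs * R_fixed / (R_fixed + R_ldr).
    More light -> lower LDR resistance -> higher Vout, so the
    blood-volume pulses in the fingertip modulate Vout."""
    return v_supply * r_fixed / (r_fixed + r_ldr)

# Hypothetical values: 5 V supply, 10 kOhm fixed resistor.
dark = ldr_output_voltage(5.0, 10e3, 1e6)    # LDR ~1 MOhm in the dark
lit = ldr_output_voltage(5.0, 10e3, 10e3)    # LDR ~10 kOhm when lit
```

With these assumed values the output swings from about 0.05 V (dark) to 2.5 V (lit), which is why the small pulse-induced variation still needs amplification before it reaches the microcontroller.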
B. Signal Conditioning:
Signal conditioning is done using amplifiers and filters. A band-pass filter is used to remove interference caused by ambient light and level distortions. The filter has a cut-off frequency of 2.5 Hz to allow a maximum heart rate of 125 bpm. The signal is then sent through a DC-blocking stage to prevent immeasurable pulses caused by a high DC offset from ambient light. The signal is finally passed through an amplifier; an LM358 is used for the amplification. The LM358 provides a pulse of higher amplitude to be fed into the microcontroller input. It works by detecting the peak of every pulse and creating a corresponding pulse of higher amplitude. The signals are amplified twice to provide a signal of the necessary amplitude. The corresponding figure of the amplification stage is shown in fig 3.
Fig 3 { Circuit diagram of amplifier }
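As a software analogue of the low-frequency cut-off stage, here is a first-order low-pass filter applied to sampled data. The paper's filter is analog hardware; the discrete one-pole form below is only an illustrative stand-in, with the sampling rate our own assumption:

```python
import math

def lowpass(samples, fs, fc=2.5):
    """One-pole IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]),
    with alpha = dt / (RC + dt) and RC = 1 / (2*pi*fc). Passes the
    slow pulse waveform while attenuating faster fluctuations such
    as ambient-light flicker."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

A steady (DC) level passes essentially unchanged, while a 50 Hz flicker component is attenuated to a few percent of its input amplitude at a 2.5 Hz cut-off.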
C. Microcontroller and Display Section
The microcontroller is used to count the pulse rate and to display it on the LCD. Programming the microcontroller involves developing an algorithm to count the pulse rate. The microcontroller checks every time a signal is fed into it, starts counting using the algorithm, and finally displays the count on the LCD. A character LCD is used to show the formatted result of the heart rate.
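The counting algorithm itself is not listed in the paper. A minimal sketch of the idea is to count rising-edge threshold crossings over a known sampling window and scale to beats per minute; the threshold, sampling rate and function name here are our assumptions:

```python
def beats_per_minute(samples, fs, threshold=0.5):
    """Count rising edges (crossings of `threshold` from below) in a
    digitized pulse train and scale by the record length to bpm."""
    beats, above = 0, False
    for x in samples:
        if not above and x >= threshold:
            beats += 1
            above = True
        elif above and x < threshold:
            above = False
    duration_min = len(samples) / fs / 60.0
    return beats / duration_min

# A synthetic 1.2 Hz pulse train (72 bpm) sampled at 100 Hz for 10 s:
fs = 100.0
pulses = [1.0 if (i / fs * 1.2) % 1.0 < 0.2 else 0.0 for i in range(1000)]
```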
III. COMPONENT SPECIFICATIONS
A. Infrared LED
Infrared LEDs are a specific type of light-emitting
diode (LED) that produces light in the infrared
spectrum. Light in this range is not visible to the
human eye, but can be sensed by a variety of
electronic devices, making the LED ideal for items
like remote controls, where the LED does not need to
be seen in order to function. The wavelength of the
light emitted by an infrared LED falls into the
infrared spectrum. The figure is shown in fig4.
Fig 4.Infrared LED {courtesy Solarbotics Ltd. [8] }
B.Infrared LDR
A photo resistor or light dependent resistor
(LDR) is a resistor whose resistance decreases
with increasing incident light intensity. It can
also be referred to as a photoconductor. A photo
resistor is made of a high resistance
semiconductor. If light falling on the device is of
high enough frequency, photons absorbed by the
semiconductor give bound electrons enough
energy to jump into the conduction band. The
resulting free electron (and its hole partner)
conduct electricity, thereby lowering resistance.
A photoelectric device can be either intrinsic or
extrinsic. The circuit showing the connection of
infrared LDR is shown in fig. 5.
Fig 5. { LDR connection}
C.LM358
The LM358 is a low power dual operational
amplifier. It consists of two independent, high gain,
internally frequency compensated operational
amplifiers which were designed specifically to
operate from a single power supply over a wide range
of voltages. Operation from split power supplies is
also possible and the low power supply current drain
is independent of the magnitude of the power supply
voltage. Application areas include transducer
amplifiers, dc gain blocks and all the conventional op
amp circuits which now can be more easily
implemented in single power supply systems. For
example, the LM358 series can be directly operated
off of the standard +5V power supply voltage which
is used in digital systems and will easily provide the
required interface electronics without requiring the
additional ±15 V power supplies. The pin
configuration of LM358 is shown in the figure.6
Fig 6. { Pin diagram of LM358}
D.PIC 16F877A MICROCONTROLLER
The PIC16F877A is an 8-bit microcontroller with
14.3 Kbytes of flash memory. It features 256 bytes of
EEPROM data memory, self programming, an ICD, 2
Comparators, 8 channels of 10-bit Analog-to-Digital
(A/D) converter, 2 capture / compare / PWM
functions, the synchronous serial port can be
configured as either 3-wire Serial Peripheral Interface
or the 2-wire Inter-Integrated Circuit and a Universal
Asynchronous Receiver Transmitter (USART). All of
these features make it ideal for more advanced level
A/D applications in automotive, industrial, appliances
and consumer applications.The pin diagram of
microcontroller is shown in figure 7.
Fig. 7 {pin diagram of PIC16F877A}
E.Liquid Crystal Display
A liquid crystal display (LCD) is a thin, flat
electronic visual display that uses the light
modulating properties of liquid crystals (LCs). LCs
do not emit light directly. They are used in a wide
range of applications, including computer monitors,
television, instrument panels, aircraft cockpit
displays, signage, etc. They are common in consumer
devices such as video players, gaming devices,
clocks, watches, calculators, and telephones. LCDs
have displaced cathode ray tube (CRT) displays in
most applications. They are usually more compact,
lightweight, portable, less expensive, more reliable,
and easier on the eyes. They are available in a wider
range of screen sizes than CRT and plasma displays,
and since they do not use phosphors, they cannot
suffer image burn-in. LCDs are more energy efficient
and offer safer disposal than CRTs. Its low electrical
power consumption enables it to be used in battery-
powered electronic equipment. It is an electronically-
modulated optical device made up of any number of
pixels filled with liquid crystals and arrayed in front
of a light source (backlight) or reflector to produce
images in color or monochrome. The figure is shown
in figure 8.
fig 8. { LCD display }
IV. RESULTS
The signals after each stage are displayed on a CRO, i.e., cathode ray oscilloscope, and are then analyzed for proper functioning. The signal obtained after filtering is shown in fig. 9.
Fig 9 {diagram showing filtered signal}
The final signal obtained after the amplification section is shown in fig 10; it consists of pulses. An LED connected to the output of the amplifier flashes as the pulses are received and amplified by the circuit. The output of the amplifier and filter circuit is connected to one of the digital inputs of the PIC 16F877A microcontroller. The operating frequency is 4 MHz. The microcontroller output drives the LCD.
Fig 10 {signal obtained after filtering and amplification}
V. CONCLUSION
In this paper, the design and development of a heart rate measuring device is presented that measures the heart rate efficiently and at low expense, without using time-consuming and expensive clinical pulse detection systems. The heart beat signal is obtained by an LED and LDR combination. Pulses from the hand interrupt the light from reaching the LDR, and this signal is read by the microcontroller. Experimental results showed that the heart beat signal can be filtered and digitized so that it can be counted to calculate an accurate pulse rate. The device is able to detect and display the heart beat of the user economically.
Our future work will focus on generalizing the proposed model by performing measurements with different subjects and under different breathing circumstances.
ACKNOWLEDGEMENT
The authors would like to thank the Indian Institute of Technology Delhi, especially IDDC, for permission to use the MDIT Lab.
REFERENCES
[1] R. G. Landaeta, O. Casas, and R. P. Areny, "Heart rate detection from plantar bioimpedance measurements," 28th IEEE EMBS Annual International Conference, USA, 2006, pp. 5113-5116.
[2] S. Edwards, Heart Rate Monitor Book, Leisure Systems International, Dec. 1993.
[3] M. Malik and A. J. Camm, Heart Rate Variability, Futura Publishing Co. Inc., Sept. 1995.
[4] J. R. Hampton, The ECG in Practice, Churchill Livingstone, March 2003.
[5] Wikipedia, "Heart rate." Available at http://en.wikipedia.org/wiki/Heart_rate [2009].
[6] C. C. Tai and J. R. C. Chien, "An improved peak quantification algorithm for automatic heart rate measurements," IEEE 27th Annual Conference on Engineering in Medicine and Biology, China, pp. 6623-6626.
[7] Y. Chen, "Wireless heart rate monitor with infrared detecting module."
[8] http://www.solarbotics.com
Computer Aided Condition Monitoring Of Induction Motor: An
Experimental Study
Neelam Mehala
Department of Electronics Engineering,
YMCA University of Science and Technology, Faridabad-121006 (Haryana) INDIA
neelamturk@yahoo.co.in
Abstract - Preventive maintenance of electric drive systems with induction motors involves monitoring their operation for detection of abnormal electrical and mechanical conditions that indicate, or may lead to, a failure of the system. The monitoring of electrical machines can significantly reduce the costs of maintenance by allowing the early detection of faults which would be expensive to repair. This paper studies the effects of induction motor rotor faults on the motor terminal quantities (current, voltage) using the Fast Fourier Transform (FFT) based power spectrum. The technique utilizes the results of spectral analysis of the stator current. Reliable interpretation of the spectra is difficult, since distortions of the current waveform caused by abnormalities in the drive system are usually minute. In the present investigation, the frequency signature of the rotor fault is well identified using the power spectrum, leading to better interpretation of the motor current spectra. The power spectrum is obtained using a Virtual Instrument built by programming in LabVIEW 8.2. The experiments were conducted on a three-phase 0.5 hp, 415 V induction motor, and the rotor faults were replicated in the laboratory. The results obtained from laboratory experiments signify that the proposed technique is a reliable tool for diagnosis of rotor faults of induction motors.
Keywords - Rotor fault, Fast Fourier Transform (FFT), Induction motor, LabVIEW.
I. INTRODUCTION
The induction motor (IM) has been the workhorse of industry for many years. In an industrialized nation, induction motors can typically consume 40%-50% of all the generated capacity of that country. Effective online condition monitoring of induction motors is critical to improving productivity, reliability and safety and to avoiding unexpected downtime and expensive repair costs. Therefore, diagnosing the health condition of induction motors has received more and more attention from industry in the past decades, since it can detect an incipient fault at an early stage [1]. Because of natural
aging processes and other factors in practical applications,
induction motors are subject to various faults. These faults
disturb the safe operation of motors, threaten normal
manufacturing, and can result in substantial cost penalties.
The field of motor condition monitoring recognizes those
problems, and more and more relative research is being
devoted to it by industry and academia. With condition
monitoring, an incipient fault can be detected at an early stage.
Appropriate maintenance can then be scheduled at a planned
downtime, avoiding a costly emergency [2,3]. This reduces
downtime expense and reduces the occurrence of catastrophic
failures.
Broken rotor bars can be a serious problem with certain
induction motors due to arduous duty cycles. Although broken
rotor bars do not initially cause an induction motor to fail,
there can be serious secondary effects. The fault mechanism
can result in broken parts of the bar hitting the end winding or
stator core of a high voltage motor at a high velocity. This can
cause serious mechanical damage to the insulation and a
consequential winding failure may follow, resulting in a costly
repair and lost production [4, 5].
There are many condition monitoring methods, including
vibration monitoring, temperature monitoring, chemical
monitoring, acoustic emission monitoring, current monitoring,
etc [2]. Except for current monitoring, all these monitoring
methods require expensive sensors or specialized tools and are
usually intrusive. In current monitoring, no additional sensors
are necessary. This is because the basic electrical quantities
associated with electromechanical plants such as currents and
voltages are readily measured by tapping into the existing
voltage and current transformers that are always installed as
part of the protection system. As a result, current monitoring
is non-intrusive and may even be implemented in the motor
control center remotely from the motors being monitored.
Therefore, current monitoring offers significant
implementation and economic benefits.
In this paper, some results on non-invasive detection of
broken bars in induction motors are presented. FFT based
power spectrum is used to diagnose the rotor bar fault. The
diagnosis procedure was performed by using Virtual
Instruments (VIs). The Virtual Instrument was built up by
programming in LabVIEW 8.2.
II. ROTOR BAR ANALYSIS
Modern measurement techniques, in combination with advanced computerized data processing and acquisition, open new ways in the field of rotor bar analysis through the use of spectral analysis. The success of these techniques depends upon locating, by spectrum analysis, the specific harmonic components caused by faults. One of the most
harmonic components caused by faults. One of the most
frequently used fault detection methods is Fast Fourier
transform (FFT). This technique utilizes the spectral analysis
of motor current. It depends upon locating the specific
harmonic components in the line current produced of unique
rotating flux components caused by faults such as broken bars,
air gap eccentricity and shorted turn in stator windings. The
two slip frequency sidebands due to broken rotor bars near the
main harmonic can be observed.
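To make the sideband idea concrete, the sketch below computes a power spectrum in dB with a naive DFT (standing in for the FFT-based LabVIEW VI used in the paper) and locates the (1 ± 2s)f1 sideband frequencies. The 50 Hz, 4-pole assumption is ours; it is consistent with the 1380 r/min rating quoted in the experimental section, giving a slip of s = 0.08:

```python
import cmath
import math

def power_spectrum_db(samples, fs):
    """Naive O(n^2) DFT power spectrum, returned as (freq_Hz, dB)
    pairs for the positive-frequency bins."""
    n = len(samples)
    spec = []
    for k in range(n // 2):
        s = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        power = (abs(s) / n) ** 2
        spec.append((k * fs / n, 10.0 * math.log10(power + 1e-30)))
    return spec

# Assumed machine data: f1 = 50 Hz supply, slip s = 0.08
# (synchronous 1500 r/min for a 4-pole machine vs. 1380 r/min rated).
f1, slip = 50.0, 0.08
lo, hi = f1 * (1 - 2 * slip), f1 * (1 + 2 * slip)   # 42 Hz and 58 Hz

# Synthetic stator current: fundamental plus small broken-bar sidebands.
fs, n = 120.0, 120
sig = [math.cos(2 * math.pi * f1 * i / fs)
       + 0.05 * math.cos(2 * math.pi * lo * i / fs)
       + 0.05 * math.cos(2 * math.pi * hi * i / fs)
       for i in range(n)]
spectrum = dict(power_spectrum_db(sig, fs))
```

With these amplitudes the sideband bins stand roughly 26 dB below the fundamental, which is why the dB scale mentioned below is needed to see them at all.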
Usually a decibel (dB) versus frequency spectrum is
used in order to give a wide dynamic range and to detect the
unique current signature patterns that are characteristic of
different faults [6]. The fundamental reason for the
appearance of the above-mentioned sideband frequencies in
the power spectrum is discussed next. Under perfectly
balanced conditions, a forward rotating magnetic field is
produced in the induction motor, which rotates at the
synchronous speed

    n1 = 120 f1 / p                                    (1)

where f1 is the supply frequency and p the number of poles.
We know that the slip is

    s = (n1 - n) / n1                                  (2)

where n is the speed of the induction motor, and the slip
speed is

    n2 = n1 - n                                        (3)

Putting the value of n2 in equation (2) gives s = n2 / n1,
so that n2 = s.n1 and the rotor speed is

    n = n1 (1 - s)                                     (4)

with slip frequency f2 = s.f1. The backward rotating magnetic
field produced by the rotor due to broken bars rotates at
s.n1 with respect to the rotor; with respect to the stator
its speed is

    nb = n - s.n1 = n1 (1 - s) - s.n1 = n1 (1 - 2s)    (5)

It may be expressed in terms of frequency:

    fb = f1 (1 - 2s)                                   (6)

Classical twice-slip-frequency sidebands therefore occur at
±2s.f1 around the supply frequency [3]:

    fb = (1 ± 2s) f1                                   (7)

The lower sideband is specifically due to the broken bar,
while the upper sideband is due to the consequent speed
oscillation. In fact, researchers have shown that broken bars
actually give rise to a sequence of such sidebands given by
[6, 7, 8]:

    fb = (1 ± 2ks) f1,   k = 1, 2, 3, ...              (8)

III. EXPERIMENTAL SET UP
In order to diagnose the fault of an induction motor with
high accuracy, a modern laboratory test bench was set up. It
consists of an electrical machine coupled with a rope brake
dynamometer, a transformer, an NI data acquisition card
PCI-6251, a data acquisition board ELVIS and a Pentium-IV
personal computer running LabVIEW 8.2. The rated data of the
tested three-phase squirrel cage induction machine were
0.5 hp, 415 V, 1.05 A and 1380 r/min (full load). LabVIEW 8.2
software is used to analyze the signals; it makes it easy to
automate measurements from several devices and analyze the
data immediately. The data acquisition card PCI-6251 and the
acquisition board ELVIS are used to acquire the current
samples from the motor under load. The NI M Series high-speed
multifunction data acquisition (DAQ) device can measure the
signal with superior accuracy at fast sampling rates. It has
NI-MCal calibration technology for improved measurement
accuracy, six DMA channels for high-speed data throughput,
and an onboard NI-PGIA2 amplifier designed for fast settling
times at high scanning rates, ensuring 16-bit accuracy even
when measuring all channels at maximum speed. In addition, it
has a minimum of 16 analog inputs, 24 digital I/O lines,
seven programmable input ranges, analog and digital
triggering and two counter/timers. The NI ELVIS integrates 12
of the most commonly used instruments, including the
oscilloscope, DMM, function generator and Bode analyzer, into
a compact form factor ideal for the hardware lab. Based on NI
LabVIEW graphical system design software, NI ELVIS offers the
flexibility of virtual instrumentation and allows quick and
easy measurement acquisition and display. The speed of the
motor is measured by a digital tachometer. The virtual
instrument (VI) was built by programming in LabVIEW 8.2 and
was used both for controlling the test measurements and data
acquisition, and for the data processing. In order to test
the system in practical cases, several measurements were made
to read the stator current of the motor.
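To make the sideband expression fb = (1 ± 2ks).f1 of equation (8) concrete, the expected sideband locations can be computed directly. The following Python sketch is an illustration only (the authors' processing was done in LabVIEW); it lists the first three sideband pairs for a 50 Hz supply at the full-load slip of 0.08 used in the experiments:

```python
def broken_bar_sidebands(f1, s, k_max=3):
    """Expected broken-rotor-bar sidebands f_b = (1 +/- 2*k*s)*f1, eq. (8)."""
    sidebands = []
    for k in range(1, k_max + 1):
        lower = (1 - 2 * k * s) * f1   # lower sideband (broken bar)
        upper = (1 + 2 * k * s) * f1   # upper sideband (speed oscillation)
        sidebands.append((lower, upper))
    return sidebands

# Full-load slip s = 0.08 with a 50 Hz supply:
for k, (lo, hi) in enumerate(broken_bar_sidebands(50.0, 0.08), start=1):
    print(f"k={k}: {lo:.1f} Hz / {hi:.1f} Hz")
# -> k=1: 42.0 Hz / 58.0 Hz
#    k=2: 34.0 Hz / 66.0 Hz
#    k=3: 26.0 Hz / 74.0 Hz
```

Note that at this slip the k = 1 sidebands fall at 42 Hz and 58 Hz, inside the 30-70 Hz zoom window used for the spectra in the figures.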
A system for fault detection was designed to detect
the broken rotor bar fault. The stator current is first
sampled in the time domain; the power spectrum is then
calculated and analyzed in order to detect the specific
frequency components related to incipient faults. For each
rotor fault there is an associated frequency that can be
identified in the spectrum. Faults are detected by comparing
the amplitude of specific frequencies with that of the same
motor considered healthy. Based on the amplitude in dB, it is
also possible to determine the degree of the faulty condition.
In the described system, the data acquisition board was used
to acquire current samples from the motor operating under
different load conditions. The current signals are then
transformed to the frequency domain using an FFT-based power
spectrum, obtained by programming in LabVIEW 8.2.
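The authors obtained the power spectrum in LabVIEW 8.2; for readers without LabVIEW, the same dB-versus-frequency computation can be sketched in NumPy. This is an illustrative stand-in, not the authors' VI, and the synthetic test signal and window choice are assumptions:

```python
import numpy as np

def power_spectrum_db(current, fs):
    """FFT-based power spectrum of a sampled current signal, in dB.

    current: 1-D array of stator-current samples
    fs:      sampling rate in Hz
    Returns (frequencies, power in dB relative to the strongest line).
    """
    n = len(current)
    window = np.hanning(n)                      # reduce spectral leakage
    spectrum = np.fft.rfft(current * window)
    power = np.abs(spectrum) ** 2
    power_db = 10 * np.log10(power / power.max() + 1e-12)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, power_db

# Synthetic example: 50 Hz fundamental with weak sidebands at 42 and 58 Hz
fs = 1500.0
t = np.arange(0, 4.0, 1.0 / fs)
i_stator = (np.sin(2 * np.pi * 50 * t)
            + 0.01 * np.sin(2 * np.pi * 42 * t)
            + 0.005 * np.sin(2 * np.pi * 58 * t))
freqs, p_db = power_spectrum_db(i_stator, fs)
```

Normalizing to the strongest line and plotting in dB is what gives the wide dynamic range mentioned earlier: sidebands 40-50 dB below the fundamental remain visible.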
Current measurements were performed for a healthy rotor
and for the same motor with different numbers of broken rotor
bars. Initially, a test was conducted on the healthy motor.
Then tests were carried out at full load with faulty motors
having up to 12 broken rotor bars. The rotor faults were
provoked by interrupting the rotor bars, drilling into the
rotor. The slip was 0.01, 0.04 and 0.08 at no load, 50% load
and full load respectively. The power spectrum of the
measured phase currents was plotted. The results obtained for
the healthy motor and those having rotor faults were
compared, especially looking for the sideband components
having frequencies given by equation (8).
IV. EXPERIMENTAL RESULTS AND DISCUSSION
The motor was tested in the healthy working condition and
with broken rotor bars. The current measurements were made
for one phase at full load; due to the similarity of the
three phases, the results for only one of them are presented.
Cases were considered where the rotor has 1, 5 and 12 broken
bars, and figures 1 to 4 present the practical results for
these cases. The power spectrum in figure 1 represents the
spectral properties of a healthy motor in the frequency range
0-1500 Hz, together with the same spectrum zoomed into the
range 30-70 Hz; by zooming into the spectrum, more frequency
information appears per frequency resolution.

Figure 1: Power spectrum of healthy motor

The practical results show that the sidebands are not present
in the power spectrum of the faulty motor under the no-load
condition. Under no load it is almost impossible to detect
broken bar faults because the associated frequencies are very
close to the fundamental; therefore, the power spectra of the
faulty motor under no load are not shown in the paper. Broken
bar detection at full load could be performed in a more
reliable way: the frequency components related to broken bars
could be clearly recognized in the current spectrum, as shown
in figures 2, 3 and 4, where they are marked as FF (fault
frequency).

Figure 2: Power spectrum of faulty motor with 1 broken bar under full load
Figure 3: Power spectrum of faulty motor with 5 broken bars under full load
Figure 4: Power spectrum of faulty motor with 12 broken bars under full load

It can also be observed that the magnitude of the frequency
components increases as the number of broken bars increases.
Based on the results obtained with the system, it can be
stated that the method proved adequate for the cases and load
conditions considered, as the system was capable of detecting
the broken bar fault.
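The amplitude-comparison step described above can be sketched as follows. The function name, alarm threshold and search tolerance are illustrative assumptions, not values taken from the paper; in practice the threshold would be set from measurements on the same motor in healthy condition:

```python
import numpy as np

def broken_bar_alarm(freqs, power_db, f1, s, threshold_db=-45.0, tol=1.0):
    """Flag a broken-bar fault if either (1 +/- 2s)*f1 sideband rises
    above threshold_db (dB relative to the supply-frequency line).

    freqs, power_db: spectrum arrays; f1: supply frequency; s: slip.
    threshold_db and tol (Hz) are illustrative values.
    """
    alarms = []
    for f_b in ((1 - 2 * s) * f1, (1 + 2 * s) * f1):
        # strongest spectral line within +/- tol Hz of the expected sideband
        mask = (freqs >= f_b - tol) & (freqs <= f_b + tol)
        level = power_db[mask].max()
        alarms.append(level > threshold_db)
    return any(alarms)
```

With a normalized dB spectrum (0 dB at the fundamental), a sideband climbing from the noise floor toward the threshold tracks the growing fault severity reported in figures 2-4.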
V. CONCLUSION
In this paper, the effects of rotor faults on the motor current
spectrum of an induction machine have been investigated
through experiments. To diagnose the rotor fault, an FFT-based
power spectrum is implemented. Several experiments were
performed on the motor under no load and with a load coupled
to the shaft. The results show all the expected fault
frequencies due to the rotor fault. In this experimental
study, the severity of the fault was increased from one
broken bar to twelve broken bars, and the magnitude of the
fault frequencies increases with the severity of the rotor
fault. It has been observed that the proposed technique is
not effective under no-load and light-load conditions; at
high load, however, it gives better results. Based on the
results obtained from the experiments, it can be concluded
that the FFT-based power
spectrum can be effectively used for the diagnosis of rotor
faults in induction motors. The results obtained from the
experiments show a high degree of reliability, which enables
the proposed technique to be used as a monitoring tool for
similar motors.
REFERENCES
[1] Peter Vas, Parameter Estimation, Condition Monitoring, and Diagnosis
of Electrical Machines, Clarendon Press, Oxford, 1993.
[2] P. J. Tavner and J. Penman, Condition Monitoring of Electrical
Machines, Research Studies Press Ltd, Hertfordshire, England, ISBN
0863800610, 1987.
[3] W. T. Thomson, "A review of on-line condition monitoring techniques
for three phase squirrel cage induction motors - past, present and
future," in Proc. IEEE Int. Symp. on Diagnostics for Electrical Machines,
Power Electronics and Drives, pp. 3-18, 1999.
[4] P. C. Krause, Analysis of Electric Machinery, McGraw-Hill, New York,
1986.
[5] A. H. Bonnett and G. C. Soukup, "Rotor failures in squirrel cage
induction motors," IEEE Trans. Ind. Applications, vol. IA-22, pp.
1165-1173, Nov./Dec. 1986.
[6] W. T. Thomson and R. J. Gilmore, "Motor current signature analysis to
detect faults in induction motor drives - fundamentals, data
interpretation, and industrial case histories," in Proc. 32nd
Turbomachinery Symposium, Texas A&M University, USA, 2003.
[7] Richard G. Lyons, Understanding Digital Signal Processing, Pearson
Education, 2009.
[8] www.ni.com
Dr. Neelam Mehala is working as an Assistant Professor in the
Electronics and Communication Engineering Department, YMCA
University of Science and Technology, Faridabad (Haryana).
She received her B.E. degree in Electronics and Communication
from North Maharashtra University, Jalgaon (M.S.), and both
her M.Tech. (Electronics) and Ph.D. (Electronics) from the
National Institute of Technology, Kurukshetra (Haryana). She
has published several papers in national and international
conferences and journals. Her areas of interest are digital
signal processing, condition monitoring, fault diagnosis,
spectrum analysis and electrical machines.
Data Warehousing in Increasing Website
Popularity
Gunjan Malik#, Charu Rani#, Nakul Singh#, Tarun Kumar*
#Student, *Assistant Professor
CSED, Vidya College of Engineering, Meerut
gunjanskyblue4ever@gmail.com, tarunmalik124@gmail.com
Abstract
Website is a term that everybody is aware of. With
the increase in the popularity and use of the
internet, access to websites is also increasing.
Every organization, big or small, is heading towards
building its own website to globalize its products
and increase its sales. In this paper we use the
concepts of data warehousing and data mining to
raise the popularity of websites. Data warehousing
is a method of bringing together data from various
sources: the data warehouse connects different
databases in order to offer a more comprehensive
data set for making decisions. Data mining is the
extraction of hidden predictive information from
large databases. Various techniques and algorithms
are used in data warehousing and mining, including
classical techniques (statistics and clustering) and
modern techniques (trees, neural networks, graphs,
the Apriori algorithm, Quinlan's depth-first
strategy and many more). The paper explains the
application of the Apriori algorithm and how it can
profitably help increase the popularity of a
website, and shows how website developers can
benefit from these techniques.
Keywords: data warehousing, data mining,
clustering, Apriori algorithm.
1. Introduction
The terms data mining and data warehousing [1]
have evolved recently, though the concepts are
age-old. Organizations make extensive use of these
concepts in several different ways: to improve
productivity, analyze trends, support strategic
decision-making, generate wealth, etc. Data mining
looks for hidden patterns and trends in data that
are not immediately apparent from summarizing the
data. Data mining tools predict future trends and
behaviors, allowing businesses to make proactive,
knowledge-driven decisions. The automated,
prospective analyses offered by data mining move
beyond the analyses of past events provided by the
retrospective tools typical of decision support
systems. They can answer business questions that
traditionally were too time consuming to resolve,
and they scour databases for hidden patterns,
finding predictive information that experts may
miss because it lies outside their expectations
[2]. A data warehouse maintains its functions in
three layers: staging, integration, and access.
The staging layer stores raw data offloaded from
the operational systems for use by developers
(analysis and support); the data may pass through
an operational data store for additional operations
before it is used in the data warehouse (DW) for
reporting. The integration layer integrates data
and provides a level of abstraction from users. The
access layer is for getting data out to users. This
definition of the data warehouse focuses on data
storage: the data from the main sources is cleaned,
transformed, catalogued and made available for use
by developers for data mining and decision support.
However, the means to retrieve and analyze data, to
extract, transform and load data, and to manage the
data dictionary are also considered essential
components of a data warehousing system. Our main
focus here is on analyzing the type of targeted
customers and attracting them to access the sites.
2. Our Proposed Methodology
2.1 Architecture of Data Warehouse:
Operational database layer: the source data for the
data warehouse. An organization's Enterprise
Resource Planning systems fall into this layer [3].
Data access layer: the interface between the
operational and informational access layers. Tools
to extract, transform and load data into the
warehouse fall into this layer.
Metadata layer: the data dictionary. This is
usually more detailed than an operational system
data dictionary. There are dictionaries for the
entire warehouse and sometimes dictionaries for the
data that can be accessed by a particular reporting
and analysis tool [9].
Informational access layer: the data accessed for
reporting and analysis, and the tools for reporting
and analyzing data. This is also called the data
mart. Business intelligence tools fall into this
layer.
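A minimal sketch of the extract-transform-load path through these layers follows; the record structure and field names are invented for illustration and are not from the paper:

```python
def etl(operational_rows):
    """Toy extract-transform-load: clean operational records and load
    them into a warehouse table (a plain list stands in for the DW)."""
    warehouse = []
    for row in operational_rows:                  # extract
        if row.get("customer") is None:           # clean: drop bad rows
            continue
        warehouse.append({                        # transform + load
            "customer": row["customer"].strip().lower(),
            "amount": float(row.get("amount", 0)),
        })
    return warehouse

rows = [{"customer": " Alice ", "amount": "20"},
        {"customer": None, "amount": "5"}]
print(etl(rows))   # -> [{'customer': 'alice', 'amount': 20.0}]
```

Real warehouses run this cleaning and transformation in dedicated ETL tools, but the shape of the work, extract, validate, normalize, load, is the same.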
Fig. 1 - Architecture of the data warehouse (operational database,
data access, metadata and informational access layers)
Fig. 3 - Different phases of the model

2.2 Data Mining

Fig. 2 - Data mining: a type of data combined with a type of
interestingness criteria yields hidden patterns
2.3 CRISP-DM Process Model
CRISP-DM provides a uniform framework of guidelines,
experience and documentation, and is flexible enough
to account for differences between business/agency
problems and between varieties of data.

Phases of the CRISP-DM Model
The first two phases deal with understanding the
problem statement and the objective of the website
development; they also cover the statement of the
data mining objective and its success criteria. In
data understanding, we explore the data and verify
its quality. [4]
2.3.2 Phase 3
Data preparation generally takes over 90% of the
time. In this phase we collect, assess, consolidate
and clean the data: we create table links and
aggregation levels, check for missing values, etc.
In data selection, optimal data is selected from
samples, existing records and collected data,
thoroughly visualized using various tools, and
transformed to create new variables. [10]
2.3.3 Phase 4
This phase deals with model building. We select
suitable modeling techniques based on the data
mining objective. Modeling is an iterative process;
we may model for either description or prediction.
[8]

Types of models:
Predictive models, for predicting and classifying:
regression algorithms, neural networks.
Descriptive models, for grouping and finding
associations: clustering/grouping algorithms
(k-means, Kohonen) and association algorithms
(Apriori).
Neural Network
The neural network is a very powerful predictive
modeling technique. It creates very complex models,
represented by numeric values in a complex
calculation that requires all of the predictor
values to be in the form of numbers. The output of
the neural network is also numeric and needs to be
translated if the actual prediction value is
qualitative. [5]
Kohonen Network
It is used for clustering and segmenting the
database. The clusters are created by forcing the
system to compress the data by creating prototypes,
or by algorithms that guide the system toward
creating clusters that compete against each other
for the records they contain, thus ensuring that
the clusters overlap as little as possible.

Fig. 4 - Kohonen network
Fig. 5 - Types of clusters

There are two types of clustering. Hierarchical
clustering: clusters are formed at different levels
by merging clusters at a lower level. Partitional
clustering: clusters are formed at only one level.
Apriori
It seeks association rules in a dataset and performs
market basket analysis; it can then be used for
sequence discovery. [7]
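The level-wise idea behind Apriori can be shown in a short, self-contained sketch applied to page-visit "baskets". This is a simplified toy version: candidate generation joins only the itemsets that survived the previous level, without the full subset-pruning step of the published algorithm, and the page names are invented for illustration:

```python
def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with their support counts."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, level = {}, items
    while level:
        # count how many transactions contain each candidate itemset
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # next level: unions of survivors that are exactly one item larger
        keys = list(survivors)
        level = {a | b for a in keys for b in keys
                 if len(a | b) == len(a) + 1}
    return frequent

visits = [{"home", "products", "cart"},
          {"home", "products"},
          {"home", "blog"},
          {"products", "cart"}]
freq = apriori([frozenset(v) for v in visits], min_support=2)
```

On these four sessions, the pairs {home, products} and {products, cart} come out frequent, which is exactly the kind of co-visit pattern a site owner can turn into cross-links or targeted promotion.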
2.3.4 Phase 5
This phase deals first with evaluating the model,
i.e. how well it performs on test and analysis
data; the methods and criteria selected depend on
the model type. The second step is interpretation
of the model, where we analyze how important (or
not) the results are.
2.3.5 Phase 6
This is an important step in data mining. In
deployment, we determine how the evaluated results
need to be utilized: who needs to use them, and how
often they need to be used. We then deploy the data
mining results by scoring a database, utilizing the
results for website promotion. [6]
3. Conclusion and Future Work
The concept, usefulness and practicality of data
warehousing and data mining are already established,
and the returns on investment have well justified
building and maintaining them. Since websites are
increasing day by day, every organization aims at
making more money by globalizing its products. In
this paper we have presented a simple approach to
increasing the popularity of websites. In early
times, when the internet had no identity,
organizations and their products were known only to
a limited number of people. Through the internet,
organizations and their products can reach almost
the entire world; all they have to do is popularize
their websites. Through data warehousing and data
mining techniques, we can gather information on the
targeted customers and analyze their interests and
shopping habits. Through proper data collection and
the extraction of relevant patterns, we can
introduce different advertisement techniques to
reach out to the masses, making the products and
their presentation more appealing and attractive,
thus increasing product sales and moving towards
growth and success. Work is going on in this field
to explore better ways of extracting patterns in a
precise and more accurate manner.
References
[1] R. Agrawal, R. Srikant, "Mining Sequential Patterns," Proc. of the
Int'l Conference on Data Engineering (ICDE), Taipei, Taiwan, March 1995.
[2] R. Agrawal, A. Arning, T. Bollinger, M. Mehta, J. Shafer, R. Srikant,
"The Quest Data Mining System," Proc. of the 2nd Int'l Conference on
Knowledge Discovery in Databases and Data Mining, Portland, Oregon,
August 1996.
[3] A. Shoshani, "OLAP and Statistical Databases: Similarities and
Differences," Proc. of ACM PODS, 1997.
[4] Srinath Srinivasa, Myra Spiliopoulou, "Modeling Interactions Based on
Consistent Patterns," Proc. of CoopIS 1999, Edinburgh, UK.
[5] Srinath Srinivasa, Myra Spiliopoulou, "Discerning Behavioral Patterns
by Mining Transaction Logs," Proc. of ACM SAC 2000, Como, Italy.
[6] http://www.research.ibm.com/cs/quest/index.html
[7] http://fas.sfu.ca/cs/research/groups/DB/sections/publication/kdd/kdd.html
[8] http://www.dwinfocenter.org/
[9] http://datawarehouse.itoolbox.com/
[10] http://www.datawarehousing.com/
REMOTE HEART RATE MEASUREMENT & OBSERVATION USING A
MICROCONTROLLER-BASED ECG TELEMETRY SYSTEM

APURV ASHISH
Dept. of Electronics and Instrumentation Engineering,
BBDNITM, Lucknow
aashish25jan@gmail.com

BHOOPENDRA KUMAR
Dept. of Electronics and Instrumentation Engineering,
BBDNITM, Lucknow
Ilovebhoopendra@yahoomail.com
ABSTRACT: With today's fast-growing medical
technology, many once incurable diseases can now be
treated. But the advantages of modern medication and
health support are restricted mainly to the urban
and suburban population. In countries like India,
where 70% of the population still lives in rural and
remote areas, it is necessary to find other means to
provide these people with the health care and
support they need. This paper focuses on that
problem: we propose a possible design and
specifications for tele-monitoring the heart rate of
a remote patient, with the main focus on the
implementation of the system. Since the proposed
heart rate monitor is based on a microcontroller, it
offers the advantage of portability over tape-based
recording systems. The paper explains how a
single-chip microcontroller can be used to analyze
heart beat rate signals in real time. In addition,
the system allows doctors to obtain the ECG of the
patient through internet or mobile services, and it
can also be used to monitor patients or athletes
over a long period. The system reads, stores and
analyses the heart beat rate signals repetitively in
real time. The hardware and software design are
oriented towards a single-chip microcontroller-based
system, hence minimizing the size.
Keywords: microsystems, microcontroller, real-time,
heart rate monitoring
1. Introduction
Early diagnosis of heart disease is typically based on tape recording
of the electrocardiogram (ECG) signal, which is then studied and
analyzed using a microcomputer. This paper, however, presents the
design and implementation of a compact microcontroller-based portable
system used for monitoring heart rate in real time. Diagnosis of heart
disease using ECG signals may be achieved by correlating the pattern
of the ECG signal with a typical healthy signal [4], by characterizing
the typical ECG signal using basic logical decisions [9], or by more
complicated algorithms that process the heart disease in depth
[2, 3, 14, 19]. The first approach requires complicated mathematical
analysis to obtain the required diagnosis, while the second involves
only simple analysis in most cases. A long-term study of the ECG
signal during everyday activity is required to obtain a broad spectrum
of heart disease categories based on heart rate changes. Many
techniques have been implemented, such as the use of a minicomputer in
intensive care to observe patients [15], or a microprocessor-based
card in a portable system [11, 18]. In this case, the disadvantage is
the restriction of patient movement. A wire-free system connected to a
hospital minicomputer allows patient mobility within a restricted area
in the hospital. Tape systems for recording ECG signals are bulky,
heavy and prone to mechanical failure; in addition, these systems need
large batteries. In order to reduce the size, weight and power
consumption of the system, a single-chip Reduced Instruction Set
Computer (RISC) architecture microcontroller was chosen. To keep the
patient free to move at home [14, 19], a data transmission protocol
using e-mail is implemented in the system [2, 5]. Aspects that have
been carefully considered are: the logic and arithmetic involved in
the data acquisition and the analysis of the ECG signals; the nature
of the information to be stored; and the fact that most single-chip
microcontrollers are limited in their arithmetic instructions. It is
therefore advantageous to use a simple mathematical analysis of the
ECG signal. Regarding memory, representation of the complete ECG
signal by an equivalent diagnostic word appreciably reduces the memory
size required. Figure 1 shows the P, Q, R, S and T waves on an
electrocardiogram tracing (lead I), illustrating the three normally
recognizable deflection waves and the important intervals.
FIG. SYSTEM BLOCK DIAGRAM

Figure 1. An electrocardiogram tracing (lead I) illustrating the three
normally recognizable deflection waves and the important intervals.
The method of storing information related to the ECG signal is
considered in this paper. In this method, only the time and type of
variation compared to the reference heart rate are registered. At the
same time, the rate is processed to detect any disease, such as
bradycardia or tachycardia, in adults or children. The system,
connected to the parallel port of a microcomputer, is able to transmit
the collected data to the cardiologist by e-mail at the end of each
day. Furthermore, provision for storing a number of ECG signals
assists the cardiologist in formulating his personal analysis and
being more confident of system performance.
SYSTEM DESIGN:
The proposed system comprises a telemetering system incorporated in a
heart rate data acquisition system. It is essentially a data
acquisition system with the added feature of data transmission over a
radio link.
The transmitter section, at the patient's side, is an embedded
microcontroller-based system. It has to pick up, encode and transmit
the patient's ECG information over the air in the form of
electromagnetic waves, using ASK/RF signals. It also allows for the
selection of the desired ECG lead. The transmitter section comprises
clip-on ECG electrodes with a patient cable assembly, Wilson electrode
arrangement, lead selector, instrumentation amplifier, low pass
filter, high gain amplifier and patient isolation system,
analog-to-digital converter, embedded microcontroller, ASK/RF
modulator and antenna interface circuits.
The system uses the Wilson electrode arrangement for acquiring the
ECG of the patient, with the right leg of the patient as the driven
right leg lead. A summing network obtains the sum of the voltages from
all the other electrodes and drives an amplifier whose output is
connected to the right leg of the patient. This arrangement is known
as the Wilson electrode (8). Its effect is to force the reference
connection at the right leg of the patient to assume a voltage level
equal to the sum of the voltages at the other leads. This arrangement
increases the common mode rejection ratio (CMRR) of the overall
system, reduces noise interference (4), and also reduces the current
flow into the right leg electrode. Increased concern for the safety of
electrical connections to the patient has caused modern ECG designs to
abandon the principle of a ground reference altogether and use
isolated or floating amplifiers (5). The Wilson electrode is realized
with high slew rate FET-output op-amps available in the LM348 IC.
To record an electrocardiogram, five electrodes are affixed to the
body of the patient. The group of electrodes connected to the input of
the amplifier is called a lead. In the normal electrode placement
shown in the figure, five electrodes are used to record the
electrocardiogram; the electrode on the right leg is only for ground
reference. The 12 standard leads used most frequently are shown in the
figures. The three bipolar limb selections are as follows:
Lead I: left arm (LA) and right arm (RA)
Lead II: left leg (LL) and right arm (RA)
Lead III: left leg (LL) and left arm (LA)

Fig: Bipolar limb leads (upper half) and unipolar limb leads
(lower half)
Fig: The lead connections and colour code of the ECG leads
These three leads are called bipolar because the electrocardiogram is
recorded between two electrodes, with the third electrode not
connected. Of the three limb leads, lead II produces the greatest
R-wave potentials: when the amplitudes of the three limb leads are
measured, the R-wave amplitude of lead II is equal to the sum of the
R-wave amplitudes of lead I and lead III. For unipolar leads, the
electrocardiogram is recorded between a single exploratory electrode
and the central terminal, which has a potential corresponding to the
center of the body. This central terminal is obtained by connecting
the three active limb electrodes together through resistors of equal
size; the potential at the connection point of the resistors
corresponds to the mean of the potentials at the three electrodes. In
the unipolar limb leads, one of the limb electrodes is used as the
exploratory electrode. In the augmented limb leads, the limb carrying
the exploratory electrode is not used for the central terminal,
thereby increasing the amplitude of the ECG signal without changing
its waveform appreciably. These leads are designated aVR, aVL and aVF.
Analog multiplexers are used for lead selection in place of mechanical
rotary switches for smoother operation. These multiplexers select the
desired lead under the control of the microcontroller. Since the two
multiplexers operate simultaneously, two electrodes are selected, one
from each group. The selected lead number is shown on the LCD display.
The output from this section is connected to the input of the
instrumentation amplifier.
If the movement of the patient is not an issue and only a simple heart
rate measurement has to be performed, the whole system of lead
arrangements and amplifiers can be replaced by a much more compact IR
LED arrangement for the heart rate measurement.
MEASUREMENT DEVICE:
The figure below shows the block diagram of the proposed device.
Basically, the device consists of an infrared transmitter LED and an
infrared sensor photo-transistor. The transmitter-sensor pair is
clipped on one of the fingers of the subject (see Figure 2). The LED
emits infrared light into the finger; the photo-transistor detects
this light beam and measures the change of blood volume through the
finger artery. This signal, which is in the form of pulses, is
amplified, filtered suitably and fed to a low-cost microcontroller for
analysis and display. The microcontroller counts the number of pulses
over a fixed time interval and thus obtains the heart rate of the
subject. Several such readings are obtained over a known period of
time and the results are averaged to give a more accurate reading of
the heart rate. The calculated heart rate is displayed on an LCD in
beats per minute in the following format:
Rate = nnn bpm
Where nnn is an integer between 1 and 999.

FIG. IR HEART RATE MEASUREMENT
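The counting-and-averaging scheme described above can be sketched in Python for clarity. On the actual device this logic runs as microcontroller firmware; the window length and sample counts below are invented for illustration:

```python
def heart_rate_bpm(pulse_counts, window_s):
    """Average several pulse counts, each taken over a window of
    window_s seconds, and convert to beats per minute."""
    avg_pulses = sum(pulse_counts) / len(pulse_counts)
    return round(avg_pulses * 60.0 / window_s)

# e.g. four successive 5-second windows from the IR sensor
rate = heart_rate_bpm([6, 7, 6, 6], window_s=5.0)
print(f"Rate = {rate:03d} bpm")   # -> Rate = 075 bpm
```

Averaging over several windows smooths out the one-count quantization error that a single short counting window would otherwise introduce.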
Fig. Block diagram of measuring device
The circuit diagram of the measurement device is shown in Figure 3.
The circuit basically consists of two operational amplifiers, a
low-pass filter, a microcontroller, and an LCD. The first amplifier is
set for a gain of just over 100, while the gain of the second
amplifier is around 560. During the laboratory trials it was found
necessary to use a low-pass filter in the circuit to remove unwanted
high-frequency noise from nearby equipment; the cut-off frequency of
the filter was chosen as 2 Hz. Figure 4 shows the frequency and phase
responses of the amplifier together with the filter. The output time
response of the amplifier and filter circuit, which consists of
pulses, is shown in Figure 5. An LED connected to the output of the
operational amplifiers flashes as the pulses are received and
amplified by the circuit.
The output of the amplifier and filter circuit was fed to one of the
digital inputs of an AT8951-type microcontroller [8]. In order to
reduce the cost of the circuit, the microcontroller is operated from a
4 MHz resonator. The microcontroller output ports drive the LCD as
shown in Figure 3. The circuit operates when a push-button switch
connected to the RB1 port of the microcontroller is pressed.
The instrumentation amplifier is basically a differential amplifier that
amplifies the difference between the two input signals; hence the
common-mode signal is effectively eliminated. Two buffer amplifiers,
one at each signal input, are provided to offer very high input
impedance. The gain of the instrumentation amplifier is set at
around 1000.
The amplified ECG signal is passed through a low-pass filter to
remove noise and other high-frequency signals that might be picked
up by the cables. The pass band of this filter is set below 150 Hz,
since all the important components of the ECG lie below 150 Hz [6].
The signal from the low-pass filter is further amplified using a
high-gain amplifier.
Patient's Isolation:
The patient isolation stage is another important requirement, for the
safety of the patient. It uses an opto-coupler to transfer the
processed ECG signals to further stages without electrical connection,
i.e. the signals are carried by a modulated light beam, safeguarding
the patient against the risk of electric shock [7]. The ECG signal
might be attenuated slightly in the opto-isolator.
Sample and Hold:
A sample-and-hold circuit samples an input signal and holds it at its
last sampled value until the input is sampled again. Its purpose is to
sample fast-changing signals and present them to slower processing
circuits, such as ADCs, to match their conversion time.
The Analog to Digital Converter (ADC) encodes the ECG signal into
an 8-bit binary code, by taking rapid samples of the processed ECG
signal. Since the ADC input should be constant when a conversion is
in progress, the Sample and Hold amplifier is used to store the
sampled ECG signal and provide the instantaneous value of the
sample to the ADC.
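The 8-bit encoding step can be illustrated as follows (the 5 V reference is an assumed value, not stated in the paper):

```python
def adc_8bit(voltage, v_ref=5.0):
    """Encode a held sample into an 8-bit code (0..255), as the ADC stage does."""
    code = int(voltage / v_ref * 255)
    return max(0, min(255, code))  # clamp out-of-range inputs

print(adc_8bit(2.5))  # mid-scale input
```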
National Conference on Microwave, Antenna & Signal Processing, April 22-23, 2011
Microcontroller:
The key feature of a microcontroller-based computer system is that it
can be designed with great flexibility: the system can be configured
as large or as small as needed by adding or removing suitable
peripherals. The microcontroller incorporates all the features found
in a microcomputer, with added features that make it a complete
microcomputer system on its own; microcontrollers are therefore
sometimes called single-chip microcomputers. The microcontroller has
built-in ROM, RAM, parallel I/O, serial I/O, counters, interrupts and
a clock oscillator circuit.
The microcontroller receives the binary codes from the ADC, converts
them into a serial stream, and passes them to the ASK/RF modulator at
a fixed baud rate. The ASK/RF transmitter used here is an
industry-standard transmitter module operating in the 433 MHz UHF
band. The microcontroller controls lead selection, the LCD module and
the sampling rate of the ADC, and sets the required baud rate for
serial transmission. The ASK modulator converts the serial data stream
into two audio tones, corresponding to logic '1' and logic '0', that
fall in the voice band. The microcontroller in the transmitter is
programmed in assembly language.
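The bit-to-tone mapping described above can be sketched as follows (the two tone frequencies and the LSB-first bit order are illustrative assumptions, not values from the paper):

```python
# Assumed placeholder tones for logic 0 and logic 1 in the voice band.
TONE_HZ = {0: 1200, 1: 2200}

def byte_to_tones(value):
    """Serialize one byte LSB-first and map each bit to its audio tone."""
    bits = [(value >> i) & 1 for i in range(8)]
    return [TONE_HZ[b] for b in bits]

print(byte_to_tones(0b00000101))
```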
RECEIVER SIDE:
The receiver section at the doctor's side is a computer-based system.
The cardiologist can use a conventional desktop PC or a laptop. It
receives the electromagnetic waves transmitted from the patient-side
unit, recovers the ECG signal from the ASK/RF signals, and plots the
ECG on a personal computer. The receiver section comprises an antenna,
an ASK/RF demodulator, a low-pass filter, a serial port interface to
convert TTL signals to RS-232, and a personal computer to display the
ECG waveforms.
Band Pass Filter:
The ASK/RF signals arriving at the receiving end are filtered using an
active band-pass filter and fed to the input of a PLL to demodulate
the ASK signals. The PLL detects logic '1' and '0' from the tones and
thus reconstructs the serial stream.
The serial port on the personal computer receives the serial data
stream and reconstructs each byte. This byte is used to calculate the
position of each pixel on the graphical plot on the computer's monitor.
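A minimal sketch of how a received byte might be scaled to a vertical pixel position on the plot (the screen height and the top-of-screen orientation are assumptions, not details from the paper):

```python
def sample_to_pixel_y(code, screen_h=480):
    """Scale an 8-bit sample (0..255) so that 255 maps to the top row (y = 0)."""
    return (255 - code) * (screen_h - 1) // 255

print(sample_to_pixel_y(255), sample_to_pixel_y(0))
```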
CONCLUSION:
The future of this system lies in its practical application to the
cardiac evaluation of patients with suspected disease. The major value
of this system is in detecting disease states of the myocardium in
patients who are located in remote areas or travelling and are not in
a position to report to a doctor for immediate treatment. The ECG
signal can be transmitted to doctors using the telemetry system, and
advice can be sought to save the life of the patient.
Hence this system design can prove very advantageous and productive in
the future.
REFERENCES:
[1] Cromwell L., Bio-Medical Instrumentation.
[2] Arif C., Embedded Cardiac Rhythm Analysis and Wireless
Transmission (Wi-CARE), MS Thesis, School of Computing and
Software Engineering, Southern Polytechnic State University,
Marietta, Georgia, USA, 2004.
[3] Ayang-ang C. and Sison L., Electrocardiograph Pre-Filtering,
QRS Detection, and Palm Display Programming for Biomedical
Applications, in Proceedings of the ECE Conference, University of
St. Tomas, Manila, 2001.
[4] Balm G., Cross-Correlation Techniques Applied to the
Electrocardiogram Interpretation Microcontroller Based Heart Rate
Monitor 157 Problem, IEEE Transactions on Biomedical
Engineering, vol. 14, no. 4, pp. 258-262, 1979.
[5] Celler B., Remote Monitoring of Health Status of the Elderly at
Home, International Journal of Biomedical Computing, vol. 40, no.
2, pp. 147-153, 1995.
[6] Choi W., Kim H., and Min B., A New Automatic Cardiac Output
Control Algorithm for Moving Actuator Total Artificial Heart by
Motor Current Waveform Analysis, International Journal of
Artificial Organs, vol. 19, no. 3, pp. 189-197, 1996.
[7] Enderle J., Introduction to Biomedical Engineering, Academic
Press, USA, 2000.
[8] Mazidi M. A., 8051 Microcontroller Programming.
[9] Leffler C., Saul J., and Cohen R., Rate-Related and Autonomic
Effects on Atrioventricular Conduction Assessed Through Beat-to-
Beat PR Interval and Cycle Length Variability, Journal of
Cardiovascular Electrophysiology, vol. 5, no. 1, pp. 2-15, 1993.
[10] Microchip Manual, PIC16F87X Data sheet 28/40-Pin 8-bit
FLASH Microcontrollers, Microchip Technology Inc., 2001.
[11] Morizet-mahoudeaux P., Moreau C., Moreau D., and Qharante
J., Simple Microprocessor-Based System for on-Line ECG
Arrhythmia Analysis, Medical and Biological Engineering, pp. 497-
500, 1981.
[12] Mulder Van Roon A. and Schweizer D., Cardiovascular Data
Analysis Environment (CARSPAN) User's Manual, Groningen, The
Netherlands, 1995.
[13] Prieto A. and Mailhes C., Multichannel ECG Data Compression
Method Based on a New Modeling Method, Computers in
Cardiology, vol. 28, pp. 261-264, 2001.
[14] Prybys K. and Gee A., Polypharmacy in the Elderly: Clinical
Challenges in Emergency Practice, Part 1 Overview, Etiology, and
Drug Interactions, Emergency Medicine Reports, vol. 23, no. 11, pp.
145-151, 2002.
[15] Schamroth C., An Introduction to Electro Cardiography, Blackwell
Science Publishing, 7th edition, 2001.
[16] Shemwetta D. and Ole-Meiludie R., The Physical Workload of
Employees in Logging and Forest Industries, Wood for Africa Forest
Engineering Conference, pp. 178-185, 2002.
[17] Tura A., Lambert C., Davall A., and Sacchetti R., Experimental
Development of a Sensory Control System for an Upper Limb
Myoelectric Prosthesis with Cosmetic Covering, Journal of
Rehabilitation Research and Development, vol. 35, no. 1, pp. 14-26,
1998.
[18] Woodward B. and Habib R., The Use of Underwater
Biotelemetry for Monitoring the ECG of Swimming Patient, in
Proceedings of the 1st Regional IEEE-EMBS Conference, New Delhi,
India, pp. 107-108, 1995.
[19] Yang B., Rhee S., and Asada H., A Twenty- Four Hour Tele-
Nursing System Using a Ring Sensor, IEEE International
Conference on Robotics and Automation, Leuven, Belgium, 1998.
WEATHER MEASUREMENT
Temperature, Humidity, Air Speed
Uma Gupta 1, Deepika Singh 2
Department of Electronics and Instrumentation Engineering
Galgotias College of Engineering and Technology, Greater Noida, INDIA
1 umagupta14@gmail.com
2 deepika.singh306@gmail.com
Abstract: Weather and climate are major, uncontrolled driving forces;
they affect every process. Many modern-day practices pose risks to the
environment, so it is important to measure and record weather
conditions so that appropriate management practices can be applied
when conditions are favorable. This paper discusses a weather
measurement system in which data is measured and stored for analysis
and for the improvement of any process.
Key words: Weather, Humidity, Temperature and Air Speed Measurements.
1. INTRODUCTION
Weather is caused by the movement or transfer of energy. Energy is
transferred wherever there is a temperature difference between two
objects. There are three main ways in which energy can be transferred:
radiation, conduction and convection. The science of the study of
weather is called meteorology; the meteorologist measures temperature,
rainfall, pressure, humidity, sunshine and cloudiness, and makes
predictions and forecasts about what the weather will do in the future.
The hotness or coldness of a substance is called its
temperature and is measured with a thermometer. The
ordinary thermometer consists of a hollow glass bulb
attached to a narrow stem with a thread like bore. The bulb
is filled with liquid, usually mercury, which expands when
the temperature rises and contracts when the temperature
falls. Since the expansion and contraction of a thermometer depend on
its own temperature, while it is usually the temperature of the
surrounding air that is needed, the thermometer must be shaded from
sunlight and exposed to adequate ventilation to ensure it is at the
same temperature as the surrounding air [1][2].
A number of thermometers are relatively inexpensive. Digital
thermometers are now becoming easily
available. Many of them have the temperature probe on a
lead a few meters long, enabling the display to be inside the
classroom or house. Some varieties will store the maximum
and minimum temperature since last reset. The probe can
also be used to measure soil temperature and for example,
the temperature of a stream or pond [1].
The air is nearly always in motion, and this is felt as wind. Two
factors are necessary to specify wind: its speed and its direction.
The direction of the wind is expressed as the point of the compass
from which the wind is blowing; air moving from the north-east to the
south-west is called a north-east wind. Direction may also be
expressed in degrees from true north, so a north-east wind is 045
degrees. The wind speed can be expressed in miles or kilometers per
hour, meters per second, knots, or as a force on the Beaufort scale
[3]. The device for air-speed measurement is known as an anemometer
[1]. The cheapest sort of commercial
anemometer is known as a ventimeter, in which the wind
blows into an orifice at the bottom of a tube and raises a
plate up the tube. These devices cost about Rs 850 each.
Some water in the form of invisible vapour is
intermixed with the air throughout the atmosphere. It is the
condensation of this vapour which gives rise to most
weather phenomena: clouds, rain, snow, dew and fog. There
is a limit to how much water vapour the air can hold and this
limit varies with temperature. When the air contains the
maximum amount of vapour possible for a particular
temperature, the air is said to be saturated [4][5]. Warm air
can hold more vapour than cold air. In general the air is not
saturated, containing only a fraction of the possible water
vapour.
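The temperature dependence of saturation described above is often approximated by the Magnus formula; the sketch below uses one common set of constants (this formula is background material, not part of the paper's system):

```python
import math

def saturation_vapour_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Warm air holds more vapour than cold air: compare 10 C and 30 C.
print(round(saturation_vapour_pressure_hpa(10.0), 1),
      round(saturation_vapour_pressure_hpa(30.0), 1))
```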
All in all, there is a lot to be said for the simple dial
humidity gauge, costing only Rs100 and reading RH
directly. It will not be as accurate as the wet- and dry- bulb
device, but the strip of paper which it uses responds well to
changes in moisture and readily shows the way the relative
humidity varies during the day [1]. Most dials are clearly
marked in percentage relative humidity.
2. SYSTEM IMPLEMENTATION
The most commonly used electrical temperature sensors are difficult to
apply. For example, thermocouples have low output levels and require
cold-junction compensation, and thermistors are nonlinear devices. We
therefore use the LM35 precision Celsius temperature sensor.
The LM35 series are precision integrated-circuit temperature sensors
whose output voltage is linearly proportional to the Celsius
(Centigrade) temperature. The LM35 thus has an advantage over linear
temperature sensors calibrated in kelvin, as the user is not required
to subtract a large constant voltage from the output to obtain
convenient Centigrade scaling.
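Converting the LM35 output to a temperature is a single scaling step, since the device produces 10 mV per degree Celsius:

```python
def lm35_celsius(v_out):
    """Convert an LM35 output voltage to Celsius (scale factor: 10 mV/degree)."""
    return v_out / 0.010

print(lm35_celsius(0.275))  # 275 mV corresponds to 27.5 C
```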
Humidity Sensor used is HIH4000. The amount of vapour in
the air can be measured in a number of ways. The humidity
of a packet of air is usually denoted by the mass of vapour
contained within it, or the pressure that the water vapour
exerts. The variation of absolute humidity (vapour content of
saturated air) with temperature is shown in Figure 1.
Fig. 1: The variation of absolute humidity (vapour content of
saturated air) with temperature
Air flow is detected by the cooling effect of air movement across a
heated resistor, i.e. a Resistance Temperature Detector (RTD).
The block diagram works as follows: the temperature (LM35), humidity
and air-speed sensors give analog values at their outputs, which are
connected to an ADC (because the microcontroller reads only digital
values). The microcontroller is programmed to collect the data from
the various sensors, store their values in an interfaced memory, and
send the values of the various parameters to the LCD for display. The
memory is also used for data logging, so that the data may be useful
for future decisions.
Fig. 2: Weather measurement system
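The acquisition-and-logging cycle described above might be sketched as follows (the sensor names, record format and display string are illustrative assumptions, not the paper's firmware):

```python
# Hedged sketch: read each sensor's (already digitized) channel, append a
# timestamped record to the log, and format a line for the LCD.
def log_sample(read_adc, t, log):
    temp_c  = read_adc("temperature")
    rh_pct  = read_adc("humidity")
    wind_ms = read_adc("air_speed")
    log.append((t, temp_c, rh_pct, wind_ms))   # data logging for later analysis
    return f"T={temp_c}C RH={rh_pct}% V={wind_ms}m/s"

log = []
fake_adc = {"temperature": 27, "humidity": 55, "air_speed": 3}.get  # stand-in ADC
print(log_sample(fake_adc, 0, log))
```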
3. APPLICATIONS
The advantage of having a weather station in the backyard is that it
can help predict upcoming weather, which may help to increase
agricultural production. It can also be used for military purposes to
determine field conditions. As it is an intelligent system, no expert
is required to operate it, and it can be used in a telemetry system.
4. CONCLUSION & DISCUSSION
The system discussed works in real time. It has great advantages over
other weather measurement systems due to its data-logging capability:
a memory device is interfaced with the microcontroller for data
logging. As the system is based on a microcontroller, it is an
intelligent system and can be upgraded easily if required.
References
[1]. Chia-Yen Lee, Rong-Hua Ma, Yu-Hsing Wang, Microcantilever-based
weather station for temperature, humidity and wind velocity
measurement, DTIP, Italy, 25-27 April 2007.
[2]. Measuring Temperature with RTDs - A Tutorial, Application Note
046, National Instruments, www.ni.com.
[3]. Z. M. Rittersma, Recent achievements in miniaturized humidity
sensors - a review of transduction techniques, Sensors and Actuators,
vol. 96, 2002, pp. 196-210.
[4]. C. Y. Lee, G. B. Lee, Humidity sensors: A review, Sensor Letters,
vol. 3(1), 2005, pp. 1-14.
[5]. Wang, Y. H.; Lee, C. Y.; Chiang, C. M., A MEMS-based Air Flow
Sensor with a Free-standing Micro-cantilever Structure, Sensors 2007,
7, 2389-2401.
Controlling of Home Security System using Mobile
Ankit Mittal, Aditi Malik
Student -ECE Dept., A.B.E.S. Institute of Technology, NH-24, Vijay Nagar, Ghaziabad (UP) 201009,
(Tel: 09999970157; e-mail: ankitmittal1403@gmail.in, aditimalik0412@gmail.com )
ABSTRACT: This paper describes the design and development of a
security system, including various home appliances, controlled by
mobile phone from a remote distance. To ensure the security of home
and office, we are making a user-friendly security system which can
be extended according to user demand for different applications such
as fire control, LPG gas leakage, theft/intruder alarm, and
temperature regulation. With the help of a mobile phone we check and
examine the status of the house/office automatically. If there is a
mishap such as a gas leak or smoke due to fire, the mobile is switched
on automatically and the circuit triggers the redial button, which
dials the pre-dialed number and sends a pre-recorded voice message to
the receiving end.
Keywords: Temperature controller, fire alarm, LPG gas sensor, theft/intruder alarm, 89c51, LDR
Introduction
Security is a matter of great concern to all of us. To many people,
the subject of security sounds like science fiction, far beyond the
state of the art. Actually, recent advances in computers, sensors and
related technology have made such a system feasible in the relatively
near term. In today's world everything is advancing fast and getting
automated. The idea behind the smart household security system lies in
the satisfaction of people along with some added advantages.
The goal of this project is to utilize after-market parts and build an
integrated home security system. Besides the traditional magnetic
switches equipped on doors and windows, if there is any mishap the
system triggers the voice processor IC, which plays a pre-recorded
message according to the problem that occurred. At the same time it
triggers the relay driver circuit, which dials the pre-dialed number,
and a voice message is then automatically transferred through the
mobile from the microphone.
Once the user knows about the problem, he or she can perform certain
actions by sending commands to the controller from a remote distance,
using the same mobile through DTMF technology.
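The DTMF command path can be sketched as a simple digit-to-action dispatch (the digit assignments below are hypothetical, not taken from the paper):

```python
# Hypothetical mapping of decoded DTMF digits to controller actions.
COMMANDS = {
    "1": "fire alarm mute",
    "2": "LPG valve close",
    "3": "intruder alarm off",
    "4": "temperature regulator on",
}

def handle_dtmf(digit):
    """Dispatch a decoded DTMF digit to its command, ignoring unknown keys."""
    return COMMANDS.get(digit, "ignored")

print(handle_dtmf("2"))
print(handle_dtmf("9"))
```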
Different alarms
Fire alarm:- In the fire sensor we use bimetallic plates to sense
fire; when fire touches the bimetallic plates, the plates join
together and immediately provide a signal to the controller via an
IC 555.
LPG gas leakage detection:- We use an MQ5 gas sensor for the detection
of LPG gas leakage. When the gas sensor senses gas, it provides a
small signal to the controller via an IC 555.
A hot-wire sensor consists of two heated elements: a compensator,
which acts as a zero-leak reference resistance, and a catalyst-coated
active sensor head called the detector.
Temperature monitoring:- For over-temperature measurement we use an
LM35 sensor; its output is connected to the controller via an ADC
0804.
Intruder/theft:-
Sensitive vibration detector:- This vibration detector senses
vibration caused by activities like drilling and switches on the
connected load (bulb, buzzer) to alert you. The circuit works off a
6 V regulated supply and uses a piezoceramic element as the vibration
detector.
IR sensor:- In this circuit there is an infrared loop between an
infrared transmitter and receiver using a 555 timer; whenever this
loop breaks, it raises an alarm.
Shadow alarm:- This opto-sensitive circuit sounds an alarm whenever a
shadow falls on it. Dim lighting in the room is necessary to detect
the moving shadow. Unlike opto-interruption alarms based on
light-dependent resistors (LDRs), it does not require an aligned light
beam to illuminate the photo-sensors.
Power resumption alarm:- This circuit gives an audio-visual indication
of the failure and resumption of mains power. The circuit is built
around the dual timer IC LM556. When mains power is present, the
bicolour LED glows green; when mains fails, it turns red.
Access control logic:- In the access control logic we use 9 switches,
of which only 4 are for the password control logic. Only when the
proper password is entered does the door open; otherwise the alarm
turns on automatically. These switches are connected directly to the
controller.
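The 4-switch password logic can be sketched as a sequence comparison (the stored code here is an assumed example, not the paper's):

```python
# Assumed 4-press code for illustration only.
STORED_CODE = (1, 3, 2, 4)

def check_entry(presses):
    """Return 'open' for a correct 4-press sequence, otherwise 'alarm'."""
    return "open" if tuple(presses) == STORED_CODE else "alarm"

print(check_entry([1, 3, 2, 4]))
print(check_entry([4, 3, 2, 1]))
```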
BLOCK DIAGRAM
[Figure: block diagram showing the mobile handset kit, DTMF decoder,
APR9600 voice processor, and switches controlled through DTMF;
controller port pins are reserved for different alarms according to
the user's demand, such as fire control, LPG gas leakage,
theft/intruder alarm, and temperature regulation.]
REFERENCES
[1] Atmel Corporation, 89c51 data manual, 2000, http://www.atmel.com/
[2] Vishay Semiconductor GmbH, D-74025 Heilbronn, Germany,
http://www.vishay.com/
[3] National Semiconductor Corporation, TTL and CMOS data manual,
http://www.national.com/
[4] Muhammad Ali Mazidi et al., The 8051 Microcontroller and Embedded
Systems, Pearson Education Inc., Second Edition, 2006.
Real Time Vehicle Monitoring And Safety System Using GPS, GSM & GIS
Sumit Kumar 1, Md. Shameem 2, Gautam Srivastava 3
1 Asst. Professor, Electronics & Communication Dept., I.T.S
Engineering College, Gr. Noida
2,3 B. Tech. Final Year Student, Electronics & Instrumentation Engg.,
Galgotias College of Engg. & Technology, Gr. Noida
Abstract
The paper presents a method for the determination of
precise vehicle position and the calculation of the velocity
by using GPS, GSM and GIS. The system uses
geographical position and time information from the
Global Positioning Satellites. The system has an on-
board module which is placed in the vehicle(s) to be
tracked and a Base Station that receives and processes
the data from the concerned vehicle(s). The on-board
module consists of a Global Positioning System (GPS)
Receiver, a GSM module and a microcontroller. The
Base Station consists of a GSM mobile phone and a GIS workstation. The
system is also exploited for vehicle security, giving a remote server
the opportunity to secure the vehicle in case of theft through an
indispensable anti-theft device.
Key Words: global positioning system,
microcontroller, satellites, tracking.
1. Introduction
Vehicle Positioning Systems are the devices used for
locating vehicles in real time. This is made possible by
installing electronic devices in the vehicle, developed
specifically for this purpose. It is the signals sent out by
the devices that enable owners or other parties entrusted
with the positioning job to locate and follow the vehicle.
Global Positioning System (GPS) is the technology
which is most commonly used for vehicle positioning
these days. The GPS modules, along with their satellite-
linked positioning technique, make easy and accurate
localization of the vehicles possible.
The other technology being used in these systems is
Global System for Mobile communication (GSM). GSM
is required in these systems because GPS system can
normally only receive location information from satellites
but cannot communicate back with them. Hence, we need
some other communication system like GSM to send this
location information to the Base Station.
The GPS module also provides the speed of the tracked vehicle, so this
data can be sent to the microcontroller and the Base Station can
monitor the speed as well, generating an alarm when the speed exceeds
a predetermined level.
Also, a Geographical Information System (GIS) is used: software
consisting of specially developed, comprehensive and detailed maps of
the city with longitude and latitude information for each place,
street, junction and address. These maps are useful to locate the
address of a vehicle equipped with GPS system. GPS-
GSM system only provides the longitude and latitude
information of the vehicle but GIS software, if properly
developed, can provide details of exact or nearby address
where the vehicle is.
2. Global Positioning System
2.1. Introduction
GPS consists of three parts: the space segment, the
control segment, and the user segment. The U.S. Air
Force develops, maintains, and operates the space and
control segments.
GPS satellites send signals from space, which are used by each GPS
receiver to calculate its three-dimensional location (latitude,
longitude and altitude) plus the current time and the velocity of the
vehicle.
The space segment is composed of 24 to 32 satellites
in medium Earth orbit and also includes the boosters
required to launch them into orbit. The control segment is
composed of a master control station, an alternate master
control station, and a host of dedicated and shared ground
antennas and monitor stations. The user segment is
composed of hundreds of thousands of U.S. and allied
military users of the secure GPS Precise Positioning
Service, and tens of millions of civil, commercial, and
scientific users of the Standard Positioning Service.
2.2. Working
Each GPS satellite transmits radio signals that enable a GPS receiver
to calculate where it is located on Earth
and convert the calculations into geodetic latitude,
longitude and velocity. A receiver needs signals from at
least three GPS satellites to generate the accurate values
indicating its position.
The principle is very similar to that used in orienteering: if we can
identify three places on our map,
take a bearing to where they are, and draw three lines on
the map, then we can find out where we are on the map.
The lines will intersect, and, depending on the accuracy of
the bearings, the triangle formed by their intersection will
approximately indicate our position, within a margin of
error.
GPS performs a similar kind of exercise, using the
known positions of the satellites in space, and measuring
the time that the signal takes to travel from satellite to
Earth.
The result of this triangulation enables the GPS to
calculate where the device is located.
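The triangulation idea can be illustrated in two dimensions: subtracting the circle equations for three known reference points yields a linear system for the unknown position. This is a simplified sketch of the principle, not the receiver's actual algorithm (real GPS works in 3D with time corrections):

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Locate (x, y) from three known points and measured distances by
    subtracting the circle equations pairwise to get two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1                      # assumes non-collinear points
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A point at (1, 1) seen from three reference stations, each sqrt(2) away:
print(trilaterate((0, 0), 2**0.5, (2, 0), 2**0.5, (0, 2), 2**0.5))
```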
Figure 1. Working of GPS
3. Global System for Mobile Communication
3.1. An Overview
Global System for Mobile communication (GSM) is a
globally accepted standard for digital cellular
communication.
GSM is a cellular network. It means that mobile
phones connect to it by searching for cells in the
immediate vicinity. GSM networks operate in a number of
different carrier frequency ranges, with most 2G GSM
networks operating in the 900 MHz or 1800 MHz bands. Where
these bands are already allocated, the 850 MHz and 1900
MHz bands are used. The GSM network is divided into
three major systems: the switching system (SS), the base
station system (BSS), and the operation and support
system (OSS).
For Vehicle Positioning Systems, GSM is used to send
the latitude and longitude values to the Base Station via
SMS. So, a brief description of the SMS technology and
its working is important to mention here.
3.2. SMS network entities
SMS messages are created by mobile phones or other
devices (e.g. personal computers). These devices can send
and receive SMS messages by communicating with the
GSM network. All of these devices have at least one
MSISDN (Mobile Subscriber ISDN) number. They are
called Short Messaging Entities.
The messages are composed using PDU (Protocol Data Unit)
specifications. The text mode is just an
encoding of the bit stream represented by the PDU mode.
The following figure shows a typical organization of
network elements in a GSM network supporting SMS.
Figure 2. Elements in a GSM network supporting SMS
SME- Short Messaging Entity
SMSC- Short Message Service Centre
MSC- Mobile Switching Centre
BSS- Base Station System
SMS-GMSC- SMS Gateway Mobile Switching
Centre
VLR- Visitor Location Register
HLR- Home Location Register
MS- Mobile Station
The SMEs are the starting points (the source) and the
end points (the receiver) for SMS messages. The SMSC is
the entity which does the job of storing and forwarding of
messages to and from the mobile station. The SMS GMSC
is a gateway MSC that can also receive short messages
and is a mobile network's point of contact with other networks. The
HLR is the main database in a mobile network, containing information
such as the subscription profile of the mobile and the area where the
mobile is currently situated. The MSC is the entity in a GSM network
which does the
job of switching connections between mobile stations or
between mobile stations and the fixed network. A VLR
corresponds to each MSC and contains temporary
information about the mobile.
Table 1. SMS technology fact sheet
4. Geographical Information System
Geographical Information System (GIS) is any system
that captures, stores, analyzes, manages, and presents data
that are linked to location. In the simplest terms, GIS is
the merging of cartography, statistical analysis, and
database technology.
Modern GIS technologies use digital information, for
which various digitized data creation methods are used.
The most common method of data creation is digitization,
where a hard copy map or survey plan is transferred into a
digital medium through the use of a Computer-Aided
Design (CAD) program, and geo-referencing capabilities.
With the wide availability of ortho-rectified imagery (from both
satellite and aerial sources), heads-up digitizing is
becoming the main avenue through which geographic data
is extracted. Heads-up digitizing involves the tracing of
geographic data directly on top of the aerial imagery
instead of by the traditional method of tracing the
geographic form on a separate digitizing tablet (Heads-
down digitizing).
Figure 3. An example of GIS
5. Velocity Determination
Every GPS receiver gives out many data, including latitude, longitude,
time, altitude and velocity. We use the velocity data to provide for
the safety and security of the vehicle. The velocity data received by
the receiver is sent to the microcontroller; whenever the speed of the
vehicle exceeds the predetermined level, it generates an alarm to
alert the driver so that he can reduce speed.
The difference between the GPS-tracked speed (km/h) and the vehicle
speedometer reading (km/h) can be seen in Graph 1.
6. SystemDesign and Working
6.1. Block Diagram
Microcontroller unit forms the heart of on-board
module which acquires and processes the position data
and velocity data from the GPS module. From the position
parameter, it can calculate the cumulative distance
travelled. The accuracy of distance calculation depends on
the speed variation and gradient of terrain. This data is
stored in the memory. The frequency at which the GPS
parameters are updated to the memory can be configured.
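The cumulative distance computation can be sketched by summing great-circle distances between consecutive GPS fixes; the haversine formula below is one standard choice (the sample coordinates are made up, and terrain gradient is ignored, as the text notes):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0                                   # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

# Cumulative distance over a short, made-up track of logged fixes:
fixes = [(28.47, 77.50), (28.48, 77.50), (28.48, 77.51)]
total = sum(haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:]))
print(round(total))
```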
Figure 4. Block diagramof 'On-board' module
The device checks for the alarms based on pre-defined
configuration and in case of such events, the data and
alarm are sent to the base station through the GSM
interface on a periodic/demand basis.
One such alarm can be the speeding alert: if the speed crosses a
specified value, the on-board module sends an SMS alert to the base
station. The speeding alert threshold parameter can be configured from
the base station.
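The speeding check itself reduces to a threshold comparison (the 80 km/h default below is an assumed example; as stated above, the threshold is configurable from the base station):

```python
def speed_alert(speed_kmh, threshold_kmh=80):
    """Return an SMS-style alert string when the speed crosses the threshold."""
    if speed_kmh > threshold_kmh:
        return f"SPEED ALERT: {speed_kmh} km/h > {threshold_kmh} km/h"
    return None  # no alert needed

print(speed_alert(95))
```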
6.2. Working
The hardware interfaces to the microcontroller are the GPS receiver,
the GSM module and memory (which can be EEPROM). The GPS modem
continuously gives the data, i.e. the latitude, longitude and
velocity, indicating the position and speed of the vehicle. The same
data is sent to the mobile at the other end, from where the vehicle is
monitored. Memory is used to store the data received by the GPS
receiver.
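GPS modems typically stream NMEA sentences; a sketch of extracting position and speed from a $GPRMC line is shown below. Whether this particular sentence type is used depends on the receiver, the sample line is the well-known NMEA example, and no checksum validation is done:

```python
def parse_gprmc(sentence):
    """Extract latitude, longitude (decimal degrees) and speed (km/h)
    from a NMEA $GPRMC sentence."""
    f = sentence.split(",")

    def to_deg(v, hemi, width):
        # NMEA encodes angles as (d)ddmm.mmmm; split degrees from minutes.
        deg = float(v[:width]) + float(v[width:]) / 60.0
        return -deg if hemi in ("S", "W") else deg

    lat = to_deg(f[3], f[4], 2)       # ddmm.mmmm
    lon = to_deg(f[5], f[6], 3)       # dddmm.mmmm
    speed_kmh = float(f[7]) * 1.852   # knots -> km/h
    return lat, lon, speed_kmh

line = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(parse_gprmc(line))
```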
In order to interface GSM modem and GPS receiver to
the microcontroller, a MUX is used. The design uses RS-
232 protocol for serial communication between the
modules and the microcontroller. A serial driver IC is
used for converting TTL voltage levels to RS-232 voltage
levels.
When a request by the user is sent to the number at the
modem, the system automatically sends a return reply to
that particular mobile indicating the position of the vehicle
in terms of latitude and longitude and velocity of the
vehicle.
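GPS modems typically stream NMEA sentences over the serial link. Assuming the module emits standard $GPRMC sentences (the paper does not name the sentence type), the latitude, longitude and speed fields can be extracted as below. Names are illustrative and checksum verification is omitted:

```python
def parse_gprmc(sentence):
    """Extract (lat_deg, lon_deg, speed_kmh) from a $GPRMC sentence.

    Returns None if the fix is flagged invalid (status field != 'A').
    """
    fields = sentence.split(',')
    if fields[0] != '$GPRMC' or fields[2] != 'A':
        return None

    def dm_to_deg(dm, hemi):
        # NMEA encodes (d)ddmm.mmmm; split whole degrees from minutes.
        point = dm.index('.')
        deg = float(dm[:point - 2])
        minutes = float(dm[point - 2:])
        value = deg + minutes / 60.0
        return -value if hemi in ('S', 'W') else value

    lat = dm_to_deg(fields[3], fields[4])
    lon = dm_to_deg(fields[5], fields[6])
    speed_kmh = float(fields[7]) * 1.852  # knots -> km/h
    return lat, lon, speed_kmh
```

On the 89-series controller this parsing would be done field by field as bytes arrive on the UART; the structure of the computation is the same.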
7. Results
The vehicle tracking system produces consistent
results: the vehicle can be tracked with a maximum
deviation of 10 m. Once installed, the system completely
secures the car with a highly efficient anti-theft function.
GSM-based communication can also be used to
maintain proper communication between the base station
and the on-board module.
The system accurately determines the position and
generates an alarm signal whenever the speed exceeds the
set-point value.
Memory (such as EEPROM) is used to store the GPS
latitude and longitude values at regular intervals of time.
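Logging fixes at regular intervals into a fixed number of EEPROM slots is naturally organised as a circular buffer; a small sketch of that bookkeeping (illustrative, not the paper's implementation):

```python
class PositionLog:
    """Fixed-size circular log of (lat, lon) samples, mimicking EEPROM slots."""

    def __init__(self, slots):
        self.slots = slots
        self.buf = [None] * slots
        self.next = 0  # index of the slot to overwrite next

    def record(self, lat, lon):
        """Overwrite the oldest slot with the newest fix."""
        self.buf[self.next] = (lat, lon)
        self.next = (self.next + 1) % self.slots

    def latest(self, n):
        """Return up to n most recent samples, newest first."""
        n = min(n, self.slots)
        out = []
        i = (self.next - 1) % self.slots
        while len(out) < n and self.buf[i] is not None:
            out.append(self.buf[i])
            i = (i - 1) % self.slots
        return out
```

The wrap-around index means the oldest fix is silently overwritten once the memory is full, which is the usual trade-off for interval logging on a small EEPROM.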
8. Conclusion
The system, titled 'Real Time Monitoring and Security
System Using GPS and GSM', is a model of a real-time
vehicle tracking unit providing complete safety of the
vehicle with the help of GPS and GSM.
The system positions and tracks the vehicle with an
accuracy of 10 m.
Further work in this research leads toward complete
security of the vehicle by installing this module on the system.
9. Further Work
The future work will be done in the following directions:
- To reduce the size of the kit by using GPS+GSM on the
same module.
- To increase the accuracy up to 3m by using better GPS
receivers.
- To utilize the system for detection of accidents. This can
be done by incorporating high sensitivity vibration sensors
in the system.
10. References
[1] Mohinder S. Grewal, Lawrence R. Weill and Angus P. Andrews,
Global Positioning Systems, Inertial Navigation, and
Integration, John Wiley & Sons, Inc., 2001; Gregory T.
French, Understanding the GPS: An Introduction to
GPS, GeoResearch, Inc., 1996.
[2] James Bao-Yen Tsui, Fundamentals of Global Positioning
System Receivers: A Software Approach, John Wiley &
Sons, Inc., 2000.
[3] Jen-Yi Pan, Wei-Tsong Lee and Nen-Fu Huang,
"Providing multicast short message services over self-
routing mobile cellular backbone network," IEEE Trans.
Vehicular Technology, vol. 52, no. 1, pp. 240-253, Jan.
2003.
[4] Ioan Lita, Ion Bogdan Cioc and Daniel Alexandru Visan,
"A New Approach of Automobile Localization System
Using GPS and GSM/GPRS Transmission," Proc. ISSE
'06, pp. 115-119, 2006.
[5] Wen Leng and Chuntao Shi, "The GPRS-based location
system for the long-distance freight," ChinaCom '06, pp. 1-
5, Oct. 2006.
[6] M. Elena, J. M. Quero, S. L. Toral, C. L. Tarrida, J. A.
Segovia and L. G. Franquelo, "CARDIOSMART:
Intelligent cardiology monitoring system using GPS/GPRS
networks," IECON '02, vol. 4, pp. 3419-3424, Nov. 2002.
[7] Chia-Hung Lien, Chi-Hsiung Lin, Ying-Wen Bai, Ming-
Fong Liu and Ming-Bo Lin, "Remotely Controllable Outlet
System for Home Power Management," Proceedings of the
2006 IEEE Tenth International Symposium on Consumer
Electronics (ISCE 2006), St. Petersburg, Russia, pp. 7-12,
June 28-July 1, 2006.
[8] Available [online]: http://www.logixmobile.com/faq/show.asp?catid=1&faqid=3
[9] Available [online]: http://www.gsm-modem.de/sms-pdu-mode.html;
http://www.ozeki.hu/index.php?ow_page_number=489&page_name=sms_basic_concepts;
http://aprs.gids.nl/nmea/
[10] Available [online]: http://www.palowireless.com/gps/howgpsworks.asp;
http://www.garmin.com/products
Ultrasonic Surface Detection
Using 89S51 Microcontroller
Mr. Vinod Kumar 1, Rupendra K. Pachauri 2
Department of Electronics and Communication Engineering
ACME College of Engg., Muradnagar, Ghaziabad (INDIA)
1 kumarvinod2002@gmail.com
2 rupendra_pachauri@rediffmail.com
Abstract: A microcontroller-89S51-based ultrasonic surface
detector is developed to determine the distance of a flaw on the
specimen surface (if one exists) using ultrasonic sensors
(SRF04). The ultrasonic sensors exploit wave propagation
in different media (air, water, fluid, etc.) to measure the
distance of a flaw on the surface of the specimen.
These distance sensors detect echoes from objects and
evaluate their propagation time and amplitude. The
microcontroller applies the relation between time and speed
to determine the distance of the flaw, which is displayed on
an LCD. The range of the measurement depends upon the
type of ultrasonic transducers used. The designed surface
flaw detector can be used for a low range from 32 cm to
250 cm; this range can be extended depending on the
ultrasonic sensors used.
1. INTRODUCTION
Sensors act as the sensing organs of technical systems. They
collect information about variables in the environment as well
as non-electrical system parameters and provide the results as
electrical signals. Sensors are an essential part of power
generation and distribution systems, automated industrial
processes, traffic management systems, and environmental
and health maintenance systems. Even relatively complex
sensors, which previously could be realized only as scientific
instruments, are now feasible as compact devices at low cost.
Ultrasonic sensors are based on the propagation medium.
They are used to measure quantities which are often
immediately relevant for humans, such as short distances
(e.g. level measurement within a man's reach) or very long
distances (e.g. obstruction detection in a pipelining process).
Moreover, these sensors imply generic approaches which
could be used for other gases and even for liquids.
2. ULTRASONIC WAVES
Sound waves with frequencies from 20 Hz to 20 kHz are
audible to the human ear; vibrations above this frequency
are termed ultrasonic. Ultrasonic signals are affected by the
properties of the medium: while passing through a
particular medium these signals get attenuated. The
attenuation of the ultrasonic signal is taken as the means for
measuring the distance of the target and for various other
applications [1].
Ultrasonic distance sensors are used to detect the presence of
a surface flaw by measuring its distance. They do so by
evaluating the echo of a transmitted pulse with respect to its
travel time. Time-dependent control of sensitivity is used to
compensate the distance dependency of the echo amplitude,
while different reflection properties are compensated by an
automatic gain control, which holds the average echo
amplitude constant. Echo amplitude therefore has very little
influence on the accuracy of the distance measurement,
provided the signal-to-noise ratio is not very low. By
considering whether the echo has been received within a time
window, i.e. a time interval which can be preset by the user,
the distance range is given in which the sensor responds to the
presence of an object [2].
A variety of ultrasonic presence sensors with different
operating frequencies are designed for different distance
ranges and resolutions. Such sensors are employed in the
automation of industrial processes as well as in traffic control
systems, for example to monitor whether car parking places
are occupied. Ultrasonic distance meters are used to measure
the filling level in containers or the height of material on
conveyor belts.
Two types of ultrasonic waves are generally used:
(a) Longitudinal waves -
Longitudinal waves exist when the motion of the particles of
the medium is parallel to the direction of propagation of the
waves. These waves are referred to as L-waves; since they
can travel in solids, liquids and gases, they are easily
detected [3].
(b) Transverse waves -
In this case the particles of the medium vibrate at right angles
to the direction of propagation of the waves. These are also
called shear waves.
2.1 ULTRASONIC DISTANCE SENSORS
Ultrasonic sonar sensors actively transmit acoustic waves
and receive them later. This is done by ultrasonic
transducers, which transform an electrical signal into an
ultrasonic wave and vice versa. The ultrasound signal
carries the information about the variables to be
measured. As intelligent sensors they have to extract the
information carried by the ultrasonic signals efficiently
and with high accuracy. To achieve this performance, the
signals are processed, demodulated and evaluated by
dedicated hardware. Algorithms based on models for the
ultrasonic signal propagation and the interaction between
the physical or chemical variables of interest are
employed [4]. Furthermore, techniques of a sensor
specific signal evaluation are being applied. Ultrasonic
sensors can be embedded into a control system that
accesses additional sensors, combines information of the
different sensors, handles the bus protocols and initiates
actions.
Fig.1 - Receiving & transmitting process of ultrasonic echoes for surface flaw
detection
Distance sensors based on ultrasonic principles use the
travel time and amplitude of the received signal (e.g. the echo)
to derive the presence, distance, and type of a sound reflecting
object. Intelligent evaluation methods allow target objects to
be recognized and classified. Furthermore, lateral details can
be recognized by introducing defined relative movement
between the sensor and the object [4].
3. PRESENT WORK
3.1 Receiver & Transmitter
The designed circuit detects the ultrasonic wave returned
from a flaw on the surface of the object. The output of the
detection circuit is evaluated using a comparator; here, an
operational amplifier with a single power supply is used
instead of a dedicated comparator. The operational amplifier
amplifies and outputs the difference between the positive
input and the negative input. Since this operational amplifier
has no negative feedback, an inverter is used to drive the
ultrasonic sensor.
3.2 Signal amplification circuit
The ultrasonic signal received at the receiver sensor is
amplified 1000 times (60 dB) in voltage by a two-stage
operational amplifier. For the positive inputs of the
operational amplifiers, half of the supply voltage is applied
as the bias voltage, so the alternating-current signal can be
amplified around a 4.5 V centre voltage. When an
operational amplifier with negative feedback is used, the
voltages of the positive and negative input terminals become
approximately equal [5].
3.3 Resonator
In this system a 4-MHz resonator is used, giving 1
microsecond per timer count. Timer1, used for capture,
counts a maximum of 65535 counts (16 bits), so a maximum
of 65.535 milliseconds can be measured. The propagation
speed of sound in air is 343 m/s at 20 °C, so the time for the
wave to go and return over a 10 m distance is
20 m / 343 m/s = 0.0583 s (58.3 ms). For a range meter,
this is a well-matched value.
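With one count per microsecond, the 16-bit capture value converts directly to a one-way range; a sketch of the arithmetic (helper names are mine, not the paper's):

```python
TICK_US = 1.0    # one timer count per microsecond, as stated in the text
MAX_COUNTS = 65535  # 16-bit Timer1 capture limit

def range_cm(counts, speed_m_s=343.0):
    """Convert a round-trip echo time in timer counts to a one-way distance in cm."""
    t_s = counts * TICK_US * 1e-6
    return 100.0 * speed_m_s * t_s / 2.0  # halve for the round trip

def max_range_cm(speed_m_s=343.0):
    """One-way range corresponding to a full 16-bit count."""
    return range_cm(MAX_COUNTS, speed_m_s)
```

At 343 m/s the 65535-count ceiling corresponds to roughly 11.2 m of one-way range, comfortably above the 10 m figure quoted above.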
3.4 LCD display Unit
Frequently, an 89S51 microcontroller interacts with the
outside world using input and output devices. One of the most
common devices connected to the 8051 is an LCD display.
Some of the common LCDs connected to the 89S51 are 16x2
and 20x2 displays [6].
Fig.2 - Block diagram of ultrasonic surface flaw detector
3.5 Microcontroller
The 89S51, developed and launched in the early '80s, is one of
the most popular microcontrollers in use today. It has a
reasonably large amount of built-in ROM and RAM; in
addition, it has the ability to access external memory [7], [8].
Features:
(1) 8-bit CPU (consisting of the A and B registers)
(2) 4K on-chip ROM
(3) 128-byte on-chip RAM
(4) 32 I/O lines (four 8-bit ports, labeled P0-P3)
(5) Two 16-bit timers/counters
(6) Full-duplex serial data receiver/transmitter
(7) 5 interrupt sources with two priority levels (two
external and three internal)
(8) On-chip oscillator (frequency = 11.0592 MHz)
3.6 Power supply circuit
+9 V is used for the transmitter and the receiver, and a +5 V
supply is used for the LCD. A transistor converts the voltage
to provide control at the controller's +5 V operating voltage.
Because C-MOS inverters are used, comparatively high-speed
ON/OFF switching is possible.
Fig.3 - Power Supply for the Ultrasonic surface flaw detector
3.7 Ultrasonic wave propagation speed in different medium
(air, water, fluid, etc.)
The sound-wave propagation speed in a medium such as air,
water or another fluid changes with temperature. In air it is
331.5 m/s at 0 °C, 343 m/s at 20 °C and 355.5 m/s at 40 °C.
For example, the time a sound wave takes to go and return
over a 1 m path at 0 °C is 2 m / 331.5 m/s = 0.006033 s =
6.033 ms. As we see, the ultrasonic sound speed varies with
the temperature of the medium [9], [10].
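The three quoted speeds fit the common linear approximation v ≈ 331.5 + 0.6·T m/s (T in °C), which gives 343.5 m/s at 20 °C, close to the 343 m/s used above; a sketch (function names are illustrative):

```python
def speed_of_sound_m_s(temp_c):
    """Approximate speed of sound in air: v ~ 331.5 + 0.6*T, T in deg C."""
    return 331.5 + 0.6 * temp_c

def round_trip_ms(distance_m, temp_c):
    """Milliseconds for an echo to travel to a target and back."""
    return 1000.0 * 2.0 * distance_m / speed_of_sound_m_s(temp_c)
```

Compensating the assumed speed with a temperature reading is the usual way to keep the range error below the per-reading scatter seen in the results table.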
4. EXPERIMENTAL SET-UP & RESULTS
In the designed ultrasonic surface flaw detector, three tactile
buttons, A, B and D, are used to measure the distance of a
flaw or obstruction on the surface, as shown in the flow-chart
of operations. First switch ON the system; the LCD then
shows the instructions: press A for speed measurement,
press B for distance measurement of the flaw, and press D
to refresh the system.
Before measuring the distance of a flaw on the surface of the
specimen, first calibrate the system to define the speed in the
particular atmosphere. In this way the system determines the
speed of the ultrasonic waves, or echo, in the environment in
which it will calculate the distance of the flaw.
To measure the distance of the flaw, press the buttons A, B
and D one by one and read the result on the LCD in
centimetres. The measurement follows the mathematical
relationship between time, speed and distance:
Distance = Speed * Time
According to the above relationship, the time parameter is
obtained with the help of the timers/counters in the
microcontroller. The relation is programmed into ROM in
Assembly Language Programming (ALP), and finally the
result is shown on the LCD display device, for example by
taking readings at different times with the help of the system.
The algorithm of the system's operation is further shown in
the flow chart below:
Fig.4. Flow chart operation of the ultrasonic surface flaw detector for key
identification
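The calibrate-then-measure procedure described above reduces to two applications of Distance = Speed * Time; a sketch with illustrative names (the halving accounts for the echo's round trip):

```python
def calibrate_speed(known_distance_m, measured_round_trip_s):
    """Derive the local speed of sound from an echo off a target at a known distance."""
    return 2.0 * known_distance_m / measured_round_trip_s

def flaw_distance_cm(speed_m_s, round_trip_s):
    """Apply Distance = Speed * Time, halved for the round trip, in centimetres."""
    return 100.0 * speed_m_s * round_trip_s / 2.0
```

Calibrating against a target at a known distance folds the temperature dependence of the sound speed into the measured constant, so later readings need no separate temperature sensor.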
S.No.   Actual distance of specimen surface, X-axis (cm)   Surface flaw position measured by system, Y-axis (cm)
1       150                                                147
2       150                                                145
3       150                                                160
4       150                                                148
5       150                                                143
6       150                                                138
7       150                                                145
8       150                                                150
9       150                                                140
10      150                                                150
11      150                                                150
12      150                                                150
13      150                                                149
14      150                                                123
15      150                                                150
16      150                                                150
17      150                                                150
18      150                                                148
Fig.7. Graph between actual distance of flaw and distance measured by the
system
Fig.5. Flow chart operation for distance measurement of flaw on specimen
surface
5. RESULTS
The results obtained on the basis of the experimental work are
shown in the table:
Table-(1) Result between actual distance of specimen surface and distance
measured of flaw by system
6. CONCLUSION
Evidence has been given that many different ultrasonic
sensors can be developed for operation in air. Topics for
future research and development work include: advanced
physical models and algorithms to improve sensor
functionality and accuracy; application of digital signal
processors to provide improved sensor-signal evaluation at
competitive cost; and extension of the intelligent ultrasonic
sensor concept to a decentralized multiple-sensor system
with bus-communication capabilities.
REFERENCES:
[1] Murugavel Raju, "Ultrasonic Distance Measurement with the
MSP430," Application Report SLAA136A, October 2001, pp. 1-7.
[2] Satish Pandey, Dharmendra Mishra, Anchal Srivastava, Atul
Srivastava and R. K. Shukla, "Ultrasonic Obstruction Detection and
Distance Measurement Using AVR Micro Controller," Sensors &
Transducers Journal, vol. 95, issue 8, August 2008, pp. 49-57.
[3] Robert W. Weinert, "Very high frequency piezoelectric
transducers," IEEE Trans. on Sonics and Ultrasonics, vol. SU-24,
no. 1, January 1977, pp. 48-53.
[4] Valentin Magori, "Ultrasonic Sensors in Air," Corporate
Research and Development, Siemens AG, Munich, Germany, 1994
Ultrasonics Symposium, pp. 1-5.
[5] Yudhisther Kumar, Ashok Kumar and Rita Gupta, "Calibration of
ultrasonic flaw detector," National Seminar of ISNT, Chennai, 2002,
pp. 1-7.
[6] H. S. Kalsi, Transducer & Display Systems, Chapters 3-5,
TMH Publication.
[7] Kenneth J. Ayala, The 8051 Microcontroller: Architecture,
Programming and Applications, West Publishing Company, College
and School Division, 1996.
[8] http://www.8051projects.info/intro_11.asp
[9] Timothy L. J. Ferris, Jingsyan Torng and Grier C. I. Lin, "Design
of two tone ultrasonic distance measurement system," The First
Japanese-Australian Joint Seminar, March 2000, Adelaide, Australia,
pp. 1-6.
[10] Alessio Carullo and Marco Parvis, "An Ultrasonic Sensor for
Distance Measurement in Automotive Applications," IEEE Sensors
Journal, vol. 1, no. 2, August 2001, pp. 143-148.