
Available Online At: www.gtia.co.in

Jan-June 2016
Vol. 5, No. 1

Refereed Journal

ISSN: 2319-3344

Special Issue
Proceedings of National Conference on
Recent Innovations in Engineering &
Technology
NCRIET 2016
8th -9th April, 2016

Organised by
Electronics and Communication
Engineering Department
Northern India Engineering College
NAAC Accredited and Affiliated to
GGSIPU
New Delhi

INTERNATIONAL

JOURNAL OF
INNOVATIONS IN
ENGINEERING AND
MANAGEMENT

IJIEM Editorial Board


EDITOR IN CHIEF
Mr. Anshul Goel,
India

ASSOCIATE EDITOR
Dr. Sukumar Senthilkumar,
Center for Advanced Image and Information Technology,
Chon Buk National University, South Korea.
EDITORIAL BOARD

José Luis Calvo Rolle
University of A Coruña, Department of Industrial
Engineering, Spain

Jon Arambarri Basaez


Telecommunication Engineer, Spain
Prof. Nawab Ali Khan
Dept. of HRM, College of Business
Administration Salman Bin Abdulaziz
University, Al- Kharj, Saudi Arabia

H.O Nwankwoala
University of Port Harcourt, Nigeria
Dr. Lalan Kumar,
Senior Scientist, Central Institute of Mining
and Fuel Research (CIMFR)
Dhanbad, Jharkhand, India
Joefrelin C. Ines
Lecturer, Business Studies Dept., Shinas
College of Technology, Sultanate of Oman
Dr. Ananda. S.
Dept. of International Business
Administration, College of Applied
Sciences, Ministry of Higher Education,
Salalah, Al Sadaa, Sultanate of Oman
Dr. D.Thiyagarajan
Assistant Professor, K.S.Rangasamy College
of Technology, Thiruchengode, India

Dr Mohd Israil
Assistant Professor, Physics Department, Al
Jouf University, Kingdom of Saudi Arabia
Prof. Dr. Akin MARSAP
Istanbul Aydin University, Faculty of
Economics and Administrative Sciences,
Istanbul/TURKEY

Dr. Santosh Singh Bais


Assistant Professor, Dept. of Commerce and
Management, CHINCHOLI
Gulbarga, Karnataka, India
Dr. M. Rizwan
Assistant Professor, Department of
Electrical Engineering, DTU, Delhi, India
Dr. Balasundaram Nimalathasan
Department of Accounting, Faculty of
Management Studies & Commerce,
University of Jaffna, Jaffna, Sri Lanka.
Dr. Zakir Ali
Assistant Professor, Deptt. of ECE,
I.E.T. Bundelkhand University, India
Dr. Sandeep Bhongade
Assistant Professor, Dept. of E.Engg., G.S.
Institute of Technology & Science, Indore
(M.P), India

Global Technocrats & Intellectual's Association (GTIA)


A Registered Society under Societies Act 21 of 1860.
Published by Mr. Anshul Goel for GTIA
Mobile: +91-9812667898
E-mail: gtia@rediffmail.com

Two Day National Conference on
Recent Innovations in Engineering and Technology
NCRIET-2016
8-9th April, 2016

Organized by
Northern India Engineering College
(Babu Banarasi Das Group of Educational Institutions)
Department of Electronics and Communication Engineering
FC-26, Shastri Park, New Delhi -53
Ph: 011-39905900-99

NAAC Accredited & AICTE Approved


Affiliated to GGSIP University, Delhi
ISO 9001:2008 & EN ISO 14001:2004 Certified Institute

All data, views, opinions, etc. being published
are the sole responsibility of the authors; neither
the publisher nor the organizer of the conference
is in any way responsible for them.

Organizing Committee
Chief Patron
Dr. Akhilesh Das Gupta (Chairman)
Ms. Alka Das Gupta (Vice Chairperson)
Patron
Mr. S. N. Garg, CEO, NIEC
Conference Chairperson
Prof.(Dr.) G. P. Govil, NIEC
Conference Convener
Prof.(Dr.) Rajiv Sharma, HOD (ECE)
Conference Co-Convener
Dr. Arti M.K.

Dr. Surender Dhiman


Coordinators
Ms. Pooja Mendiratta
Mr. Harsh Kumar
Mr. Surender
Ms. Suman Arora
Ms. Tanupreet Sabharwal
Technical Committee

Dr. Arti MK
Mr. Harsh Kumar
Ms. Preeti Singh

Dr. Surender Dhiman


Mr. Ankur Chaturvedi
Mr. Gaurav Verma
Organizing Committee

Mr. Kamal Singh


Ms. Neha Sharma
Ms. Swati Juneja
Ms. Richa Malhotra
Ms. Khushboo Verma
Mr. Varun Jain
Ms. Mohina Gandhi
Ms. Monika Jain
Ms. Pragya Srivastava
Ms. Sapna Aggarwal

Ms. Shilpa Jain


Ms. Neha Gupta
Ms. Neeru Bala
Mr. Manoranjan Kr.
Ms. Divya Arora
Ms. Amrita Kaul
Ms. Neha Srivastava
Ms. Prachi Punyani
Mr. Devraj Gautam
Ms. Medha Hooda

Editorial Board

We are pleased to present the proceedings of the two-day National Conference
on Recent Innovations in Engineering & Technology (NCRIET 2016).
The objective of the conference is to bring together leading researchers and
developers from the Electronics and Communication discipline to discuss the
overall growth of Electronics and Communication technologies. It also aims to
promote the research and practice of new strategies, tools, techniques and
technologies for the design, development and implementation of Electronics and
Communication systems.
Future Electronics and Communication technologies will encompass all of the
continuously evolving and converging Electronics and Communication
technologies, including Wireless Sensor Networks, Mobile Communication,
Satellite Communication, Radar Communication, Acoustic Signal Processing,
Digital and Analog Circuit Design, and so on, for satisfying our ever-changing
needs.
We are sure that these proceedings will go a long way towards motivating the
Electronics and Communication researchers of this country. We also hope that
they will be a valuable addition to the libraries of reputed universities and
colleges.
We look forward to suggestions from all sections of researchers, practitioners,
students and delegates to make this endeavor fruitful for all.
We wish to thank the management and the various faculty members of
Northern India Engineering College, New Delhi, and all those who have directly or
indirectly helped us in organizing this conference.

Conference Convener

Conference Chairperson

Prof.(Dr.) Rajiv Sharma, HOD (ECE)

Prof.(Dr.) G. P. Govil, NIEC

Dr. Akhilesh Das Gupta


LL.B., MBA, Ph. D.

Message
It is a matter of great pleasure that Northern India Engineering College is
organizing a National Conference on Recent Innovations in Engineering &
Technology (NCRIET 2016) on 8th & 9th April, 2016, which will provide an
integrated platform for various ideas, disciplines and technologies related to current
and future trends in Electronics and Communication Engineering.
NIEC has a rich tradition of pursuing academic excellence, value-based education
and providing a conducive environment for the overall personality development of its
students.
I express my best wishes for the success of the conference and hope that the
illuminating, thought-provoking and path-breaking presentations will make a real
contribution to the advancement of knowledge and its practical application.

DR. AKHILESH DAS GUPTA


CHAIRMAN,
BBD GROUP OF EDUCATION

Mrs. Alka Das Gupta


LL.B., MBA

Message
It gives me immense pleasure to know that the ECE department is organizing a
National Conference on Recent Innovations in Engineering &
Technology (NCRIET 2016) on 8th & 9th April, 2016.
Our vision is to create an institution par excellence with innovative concepts for
imparting quality education to enable our students to serve the society better.
Electronics and Communication Engineering discipline is at the forefront of
continuing development and evolution of our modern technological society.
I am sure this will help to inculcate the requisite technical knowledge,
competencies, and the right kind of culture and values among the students, who
will be an asset to the industry and the country.
I express my heartiest wishes for the grand success of NCRIET-2016 and hope that
the extensive, valuable and up-to-date knowledge provided by the participants
will make a real contribution to the advancement of research and technology.
MRS. ALKA DAS GUPTA,
VICE CHAIRPERSON,
BBD GROUP OF EDUCATION

Sh. S. N. Garg

Message
Greetings from NIEC.
With a vision to provide quality technical education, NIEC emphasizes the overall
development of students in all spheres.
The innovation of great ideas and their implementation can change the view of the
world. Keeping this in mind, the ECE department is organizing a National Conference
on Recent Innovations in Engineering & Technology (NCRIET-2016). NCRIET
offers a track for quality research and development updates from researchers,
scientists, engineers and students. This conference will provide an opportunity to
bring in the new technologies and perspectives that will contribute to Electronics
and Communication Engineering over the next few years.
I express my heartiest wishes for the grand success of the National Conference and
hope that the participants will benefit from the valuable presentations and
discussions.

SH. S. N. GARG
CHIEF EXECUTIVE OFFICER
NIEC

Prof. (Dr.) G.P.Govil

Message
Warm and happy greetings to all.
It gives me immense pleasure to announce that the ECE department of our college is
organizing a National Conference on Recent Innovations in Engineering &
Technology (NCRIET 2016) on 8th-9th April, 2016.
Under the able guidance of our Hon'ble Chairman and Hon'ble Vice Chairperson,
NIEC continues to march on the path of success with confidence.
We are indeed fortunate to have a team of highly dedicated HODs and faculty
members who are the driving force behind the overall development of our students.
The role of students in nation building cannot be overlooked, and the students of
NIEC are trained in all aspects to become successful engineers and good citizens. I
sincerely appreciate the untiring efforts made by the HOD, staff and students of the
ECE department in organizing this conference and wish it every success.

PROF. (DR.) G.P.GOVIL


DIRECTOR
NIEC

Prof. (Dr.) Rajiv Sharma

Message
I am indeed very happy to note that the National Conference on Recent Innovations
in Engineering & Technology (NCRIET 2016) on 8th-9th April, 2016 is being
organized by the ECE department of NIEC.
A conference of this nature provides a great opportunity to update our knowledge of
the latest technologies. The department is committed to adding value to the
intellectual, moral, social and technological capabilities of our students.
I place on record with appreciation the hard work, involvement and effort of the
team of faculty members and students in organizing this conference. I
congratulate all concerned with gratitude and wish the conference a grand
success.

PROF. (DR.) RAJIV SHARMA


HOD - ECE
NIEC

CONTENTS
List of Papers

1. Data Communication in Linear Wireless Sensor Networks via Unmanned Aerial Vehicles using FSO Communication
Akansha Solanki, Nitin Garg, Mona Aggarwal and Swaran Ahuja ..... 1
2. VHDL Implementation of Built-in Self-Test for 2D-CWT Still Image Compression Algorithm
Tanupreet Sabharwal, Prachi Punyani ..... 7
3. Speed Control of DC Motor Based on Temperature
Neha Srivastava, Shalini Kumari ..... 12
4. A Comparison Study of Appearance Feature Descriptors for Age Estimation from Face Images
Prachi Punyani, Tanupreet Sabharwal ..... 14
5. Underground Platform Cooling System for Delhi Metro
Jatin Gaur, Prateek Mishra ..... 18
6. Li-Fi Visible Light Communication
Amrita Kaul, Divya Punyia ..... 22
7. Pre-existing EDA Tool Limitations and Mixed Modeling Approach for IP-based SoCs
Abhishek Anand, Taran Aggarwal, Medha Chhillar Hooda ..... 25
8. Image Processing on Low-cost Embedded Systems
Utkarsh Gupta, Medha Chhillar Hooda ..... 28
9. Analysis of C-Band Erbium-Doped Fiber Amplifier for WDM Network
Monika Jain, Shrinikesh Yadav ..... 31
10. Variability Analysis of Binary to Reflected Code Converter at 16-nm Technology Node
Pragya Srivastava, Neha Sharma, Deepali Jain ..... 35
11. Generation of Electricity by Geothermal Energy
Rajkumar Kaushik, Nainy Chauhan, Shweta Mishra ..... 44
12. A gm-C Quadrature Oscillator Based on DO-VDBA
Priyanka Gupta, Chetna Malhotra, Varun Kumar Ahalawat, V. Venkatesh Kumar, Rajeshwari Pandey ..... 49
13. Design and Analysis of a Low Power Ternary Content Addressable Memory
Ankaj Gupta ..... 53
14. Characterization and Simulation of Semiconductor Thin Films Using Quantitative Mobility Spectrum Analysis (QMSA)
Nisha Chugh, A.K. Vishwakarma, S. Sitharaman ..... 55
15. Performance Evaluation of Cascaded Optical Filters on 400Gbps PM-16QAM Coherent Communication Systems
Sapna Aggarwal, Varun Jain ..... 60
16. Image Compression Using Neural Networks
Neeru Bala ..... 64
17. Determining Shape and Fringe Count in a Holographic Recording Medium
Dheeraj, Devanshi Chaudhary, Vivek Kumar, Sandeep Sharma ..... 69
18. Prediction of Forest Fires Using Artificial Neural Networks
Ishita Aggarwal, Harsh Joshi, Divya Arora, Sandeep Sharma ..... 71
19. Efficient Video Facial Analysis and Face Recognition
Harsh Pandey, Vivek Kumar, Sandeep Sharma, Divya Arora ..... 74
20. Design of a Multipurpose Orthosis Assistant and Prosthetic Limb
Nipun Sachdeva, Garvit Dahiya, Pratyush Gupta, Divya Arora ..... 77
21. Channel Capacity Comparison of MIMO Systems with Rician and Rayleigh Distributions
Manoranjan Kumar, Harsh Kumar ..... 80
22. A Review of CFOA-based Single Element Controlled Oscillators
Surendra Kumar ..... 83
23. Comparison of Classical and Dynamic Time Warping Time Series Clustering Algorithms
Neha Sharma, Rumita Sharma, Jagmale Singh ..... 88
24. Review of Substrate Integrated Waveguide Antenna
Harsh Kumar, Manoranjan ..... 95
25. Image Quality Assessment for Fake Biometric Systems: A Review of Fingerprint, Iris and Face Recognition
Sunil Nijhawan, Jitender Khurana ..... 99
26. A Study and Analysis of the Booth Multiplication Algorithm
Tarun Damani, Preeti Singh, Richa Malhotra ..... 105
27. Electrooculography: A Review
Devraj Gautam, Varshika Valluri, Akriti Agarwal, Neelakshi Rana ..... 108
28. Memristors: The Fourth Basic Circuit Element
Khushboo, Mohina Gandhi ..... 111
29. An Exemplar in Telecom: MIMO
Rajeev Sharma, Surender Kumar ..... 115
30. Designing of Combinational and Sequential Logic Circuits Using Precomputation Technique
Neha Gupta, Pooja Mendiratta ..... 121
31. Issues and Challenges Faced in Wireless Sensor Networks
Pooja Mendiratta, Neha Gupta, Ashish Singh Rawat ..... 127
32. Challenges to Mockup Times in Calculation to Interrupt Latency Using RTDM
Hirender, Sunil Dalal ..... 133
33. Role of the Private Sector in India's Power Transmission System: A Review
Neeraj Kumar, Rohit Verma, Subham Gandhi ..... 137
34. Innovation Technique of Denoising of Ultrasonographic Images Using Dual Tree Complex Wavelet Transform
Anil Dudy, Subham Gandhi, Jitender Khurana ..... 140
35. Modeling Setup for Next Generation Wireless System Using MIMO-STBC
Niranjan Yadav, Subham Gandhi ..... 143
36. (CoMP) Techniques in 4G-LTE-Advanced
Deepak Kumar Gahlot, Vijay Nandal ..... 150
37. Graphene: Emerging Technology in Nanoelectronics
Pratibha, Vijay Nandal ..... 153
38. The Memristor: Revolution in Electronics
Isha, Vijay Nandal, Manisha ..... 158
39. Rectenna Design and Modelling for Wireless Power Generation
Karuna, Manisha ..... 163
40. Review Paper on Watermarking with DWT and RDWT Using SVD
Kanchan, Sonal, Pankaj Bhatia ..... 167
41. Review Paper on Prolongation of PR Interval and Denoising ECG Signals Using Wavelets
Kanika Tayal, P.M. Arivananthi, Vijay Gill, K. Deepa, Nitika ..... 171
42. Designing of IR Transmitter Using Multisim and its Applications
Bhargava Yasasvi, Puja Acharya, Shilpa Mehta ..... 175
43. Challenges in NAND Flash Memory
Manisha Sharma, Munesh Devi ..... 178
44. Strained Silicon Complementary Metal Oxide Semiconductor: A Review
Anita, Vanita Batra, Ritu Pahwa, Jyoti Sehgal ..... 184
45. Tunnel Field Effect Transistor: A Review
Sawan, Dhiraj Kapoor, Rajiv Sharma ..... 190
46. Load Balancing and QoS in MANET
Sakshi Dhawan, Sudhir Vasesi ..... 194
47. 'Human Detection' through Computer Vision as a Means for Fighting Poaching
Paurush Dube, Harsh Joshi, Sandeep Sharma, Divya Arora ..... 198
48. Spectrum Sensing and Utilization Techniques for Cognitive Radio Systems: A Review
Kamal Singh, Pradeep Kumar Gupta ..... 201
49. ECG Signal as a Biometric
Bashrat Bahir, K. Deepa, Nitika, Vijay Gill, P.M. Arivananthi ..... 207
50. Huffman Coding and Its Application in Image Compression
Kartik Kumar Attree, Mahima Singh Choudhary, Kanika Sharma, Rishu, Manuj Gupta ..... 213
51. Performance Analysis of Functional Parameters of Solar Power Generation
Shashi Gaurav, Binit Ranjan, Umang Goyal, Nishu Jain, Shawet Mittal ..... 216
52. A Comparative Study of FPGA Implementation of I2C & SPI Protocols
Richa Malhotra, Preeti Singh ..... 219
53. Heart Beat Monitoring System Through Fingertip Sensor
Shilpa Jain, Suman Arora ..... 224
54. Comparison of Various Memory Architectures in Quantum Dot Cellular Automata
Sunita Rani, Naresh Kumar ..... 227
55. Designing of Digital FIR Filter Using CORDIC Algorithm
Shilpa Jain, Suman Arora ..... 232
56. A Study of Image Resolution Techniques for Satellite Images
Neha Gupta, Ashutosh Kharb, Seema Kharb ..... 238
57. Thyristor Controlled Series Capacitor: A FACTS Device
Neeraj Ku. Jain ..... 242
58. SiC JFET: A Review
Priya Sharma, Vanita Batra, Jyoti Sehgal, Ritu Pahwa ..... 246
59. Fractal Geometry
Mohina Gandhi, Khushboo ..... 250
60. Full Reference and Non-Reference Quantitative Measures for Evaluating the Performance of Image Fusion Algorithms: A Review
Meenu Manchanda, Rajiv Sharma ..... 255
61. Gate All Around MOSFET: A Review
Renu Ahlawet, Dhiraj Kapoor, Rajiv Sharma ..... 260
62. A Microstrip Patch Antenna to Harvest RF Energy from RF Signals at GSM-950 MHz
Deepak Vats, Jayant Dhondiyal, Archana Mongia ..... 264
63. Localization in Wireless Sensor
Archana Mongia, Deepak Vats ..... 267
64. Automated Sorting of Object Rejection and Counting Machine
Akshita, Alis, Prateek, Khushboo ..... 272
65. Study on Energy Harvesting Methods for Wireless Sensor Networks
Gaurav Verma, Ashish Rawat, Vidushi Sharma ..... 277
66. Design and Power of Flip Flops
Rishu, Kanika Sharma, Mahima Singh Choudhary, Manuj Gupta, Kartik Kumar Attree ..... 283
67. Investigation of Interaction of Three Solitary Waves in Optical Fiber
Rajeev Sharma, Surender Kumar ..... 288
68. Estimation Techniques of Path Loss
Charu, Jyoti Sehgal, Meenu Manchanda ..... 290
69. Implementation of Image Compression Using Discrete Cosine Transform
Varun Jain, Sapna Aggarwal, Ankur Chatturvedi ..... 295
70. Power Scenario of India and Technological Trends
Srishti, Shalini Shukla, Vishal, Amruta Pattnaik ..... 298
71. Design, Simulation and Synthesis of Generic Synchronous FIFO Architecture
Prateek Singh, Anmol Sharma, Surender Kumar ..... 303
72. Review of Low Power 4:1 Multiplexer Circuit Design for CMOS Logic Styles at 90nm Technology
Anmol Sharma, Prateek Singh ..... 308
73. Implementation of Synchronous FIFO Using Verilog
Rohan Jain ..... 313
74. Uncovering the Dark Energy of the Universe
Rohan Jain ..... 318

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Data Communication in Linear Wireless Sensor Networks via Unmanned Aerial Vehicles using FSO Communication
Akansha Solanki, Nitin Garg, Mona Aggarwal and Swaran Ahuja
EECE Department, NorthCap University, Gurgaon, Haryana-122017, India
E-mail: akansha4777@gmail.com, nitingarg@ncuindia.edu, ermonagarg24@gmail.com and
swaranahuja@ncuindia.edu
Abstract: This paper gives a review of how
unmanned aerial vehicles (UAVs) can be used for data
communication in wireless sensor networks.
Conventionally, wireless sensor networks have used
the multi-hop method to share data from different
sensors to sink nodes; using UAVs for this purpose
could be a more feasible and faster method, free of
interference. Moreover, many UAVs can be employed
over large areas, and we propose the use of free-space
optical links for communication between the UAVs. The
paper concludes by explaining the methodology by which
this type of communication can be made possible.

Keywords: Free Space Optical Communication,
Unmanned Aerial Vehicle, Wireless Sensor Network

I. INTRODUCTION

Drones, or unmanned aerial vehicles (UAVs), can
operate without an on-board pilot: they can be
programmed before launch and are controlled by
radio links. They differ from missiles in that they can
be reused, whereas a missile can be used only once
[1]. They can operate beyond line of sight and at
altitudes where they are not visible to the operator.
They have lightweight frames, protected data links,
and advanced control systems and payloads. All
operations of a UAV are controlled from ground
stations. UAVs have a wide range of applications in
different areas [2]: they can be used for remote
sensing, for the production of oil, gas and minerals, in
hazard-prone areas, and for domestic policing. The
remote sensors could be temperature sensors, air
sensors, sound-detecting devices, or spectrum-detection
sensors. A spectrum-detection sensor consists of
infrared sensors and a camera that detect the
spectrum. Air-composition sensors use laser
spectroscopy and can estimate the composition of the
elements and gases in the air. Temperature detectors
measure the temperature of the surroundings. In the
case of oil and gas production, an unmanned vehicle
can conduct geographical surveys, from which
knowledge of the underlying rock structure, and
hence the composition and concentration of minerals
in an area, can be obtained. Minerals and oil can then
be excavated with the help of an in-view unmanned
vehicle. During a hazard, it may be impossible to
reach some locations manually. A UAV can reach
altitudes and areas that a person cannot, and can help
rescue people affected by the disaster; it can also
carry food, medicines and other supplies to any area
affected. UAVs can likewise be used for surveillance
by civilians and police, including home security,
guarding pipelines and tracking criminals [3]. Other
applications include commercial and motion-picture
filming, target practice for pilots during training,
search and rescue operations during disasters,
archaeological surveys, etc.

Fig. 1 FSO Link

These applications of UAVs can be further
extended over large areas when used along with free
Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at: www.gtia.co.in


Fig. 2 Network Architecture

space optical technology. That way many aerial


vehicles could communicate amongst each other
using the laser optical beam to transmit and receive
data with the help of transceivers that could be
installed on all the vehicles. The combination of both
the technologies could prove to be highly efficient as
free space communication requires no backbone
architecture and can be utilised anywhere as the only
requirement is line of sight communication between
any two vehicles. The Free Space Optical
Communication (FSO) refers to the transmission of
data from source to destination via unguided optical
medium that propagates in free space. The optical
carriers through which the data is transmitted in free
space could be visible, infrared or ultraviolet bands.
FSO has many advantages over conventional RF
links [4]. It allows transmission of data at rates of up
to 10 Gbps, owing to the availability of high optical
bandwidth, whereas RF links scale only to hundreds
of Mbps. These systems make use of narrow laser
beams, which provide a high reuse factor and
immunity to electromagnetic interference. The
frequencies utilized in FSO communication lie above
300 GHz, an unlicensed spectrum that is not easy to
intercept. FSO is highly economical and portable, and
even offers a solution to the last-mile bottleneck of
copper-cable communication, namely its inability to
provide high bandwidth. Applications of Free Space
Optical communication include building connectivity,
video monitoring, cellular system backbone, reliable
links during disasters, security and broadcasting
[5]-[7]. These systems can be used to connect
buildings in an area or campus for sharing data at
ultra-high speeds without the need for dedicated
wired connections such as fibre optics or copper
cables. The network traffic shared could be voice,
video, audio or data.
Various commercial, military and traffic-management
applications require monitoring of the surroundings,
for which surveillance cameras are needed, and this
technology proves to be a good option for transmitting
data at high speed without compromising video
quality. Wired connections provide links between the
base stations and mobile switching centres in cellular
systems, but these conventional connections give a
lower throughput; FSO systems can provide high
throughput by comparison and can handle the
increasing mobile traffic. It is easier to form an FSO
link during a disaster or emergency than any wired
connection, as it only requires line-of-sight between a
transmitter and a receiver, with optical light as the
medium carrying the information. Temporary links
can thus be formed within hours, with few
infrastructure requirements, and prove to be a quite
reliable form of communication. These links are also
highly reliable for secure transfer of data from one
point to another: the data could only be extracted by
making contact with the optical wireless link, but
such contact would break the link, so data extraction
proves to be a difficult task. FSO is economical
compared to fibre-optic connections, since fibre
optics require the installation of good quantum
cryptography systems to secure their data. It can also
provide a good link between cameras and
broadcasting vehicles while broadcasting live
information such as sports, ceremonies or television
reporting during war, and it provides high-quality
transmission that meets the requirements of high


Fig. 3. Data communication using UAVs.

definition television or HDTV broadcasting applications.
Other applications in the fields of air traffic
management, maritime operations and the mobile
research community have led to research on collision
avoidance in UAV systems. Collision Avoidance
Systems (CASs) use cooperative schemes in which
inter-agent communication between different points
is possible [8]. In a CAS monitoring system, the
environment is monitored to avoid any kind of
encounter between the vehicles and other obstacles,
which may be moving or stationary. If a vehicle
encounters an obstacle in the shared airspace, a
collision might occur. Continuous sensing is therefore
done to collect information about potential encounters
at regular intervals, with the help of active and
passive sensors. The information collected includes
the positions of the vehicles and surrounding
obstacles, and the speed and height of the vehicles.
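The encounter-monitoring loop described above can be sketched in a few lines of Python. This is a minimal illustration only, assuming constant-velocity position reports and a fixed safety radius; the names (`Track`, `encounter`, `SAFETY_RADIUS_M`) and all parameter values are hypothetical, not taken from the paper.

```python
import math

# Sketch of CAS-style encounter detection: each vehicle reports position
# and velocity, and predicted pairwise separation over a short horizon is
# checked against an assumed minimum safe separation.

SAFETY_RADIUS_M = 50.0  # assumed minimum safe separation, metres

class Track:
    def __init__(self, pos, vel):
        self.pos = pos  # (x, y, z) in metres
        self.vel = vel  # (vx, vy, vz) in m/s

    def predict(self, t):
        # Constant-velocity prediction t seconds ahead.
        return tuple(p + v * t for p, v in zip(self.pos, self.vel))

def separation(a, b, t):
    # Euclidean distance between predicted positions at time t.
    return math.dist(a.predict(t), b.predict(t))

def encounter(a, b, horizon=10.0, step=1.0):
    # Sample the prediction horizon; flag the pair if the tracks ever
    # come closer than the safety radius.
    t = 0.0
    while t <= horizon:
        if separation(a, b, t) < SAFETY_RADIUS_M:
            return True
        t += step
    return False
```

For instance, two UAVs flying head-on at 20 m/s from 400 m apart would be flagged, while parallel vehicles 500 m apart would not.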
II. SURVEY
A UAV-based network architecture has been
explained that elaborates how communication
between multiple vehicles and infrastructure can take
place, how data traffic is classified in these networks,
and, through a case study, how data communication
and data collection can take place in a wireless sensor
network using UAVs [9]. The types of communication
are UAV-assisted sensing, where multiple vehicles
use different sensors to give accurate information
about an area, and UAV-based data storage, where
the vehicle first stores the data and then sends it to
the base station. The reasons for storing data are the
high bandwidth required for the transfer, the absence
of an immediate need to transfer the data, or data that
has to be processed first and then transferred to the
base station. Data processing includes the
collaboration of many aerial vehicles for
high-performance computing such as image
processing, video processing, pattern recognition, etc.
As shown in Fig. 2, the networking architecture can
be of many types: direct, satellite, cellular and mesh.
Another study shows how aerial vehicles can form
different team configurations to work as a MANET:
intra-team, inter-team and global inter-networking
teams. In an intra-networking team, there is wireless
communication among the vehicles with the help of
equipment installed within them [10]. Performing
wireless communication together, they form a team
and constitute a mobile ad hoc network, and can send
information to the other nodes of the team
configuration. An inter-networking team is used to
further expand the range of data sharing and
communication: many intra-networking teams spread
over a large distance work together in coordination.
To maintain this coordination, every intra-team allots
one node as the head of all its other UAVs. The head
node works as a gateway for its intra-team network,
and head nodes communicate with the head nodes of
other teams. All nodes assigned as head nodes send
advertisement messages through beacons to identify
the head UAVs of other teams for further
communication. Global inter-networking comes into
play when individual vehicles move from one location
to another, thereby entering a foreign intra-networking
scheme. In that case, all the intra-networking teams
work under an IPv6 scheme. Under this scheme, all
the vehicles have their



Fig. 4. Data communication using UAVs

specific home addresses, and when they move into a
foreign location they are assigned a Care-of Address
(CoA) by the head node. The home network team
can track the location and functioning of the vehicle
through its care-of address, while its identity, that is,
its home address, remains unchanged.
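The home-address/care-of-address scheme described above can be illustrated with a small Mobile-IPv6-style binding table: the fixed home address identifies the UAV, and a registry maps it to whatever care-of address the foreign head node has assigned. This is a hedged sketch; the class, method names and addresses below are illustrative assumptions, not taken from the cited study.

```python
# Sketch of home-address resolution for roaming UAVs: the identity
# (home address) never changes, only the routing destination does.

class HomeRegistry:
    def __init__(self):
        self._bindings = {}  # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        # Called when a roaming UAV obtains a CoA from a foreign head node.
        self._bindings[home_addr] = care_of_addr

    def deregister(self, home_addr):
        # UAV returned home; route to the home address directly again.
        self._bindings.pop(home_addr, None)

    def resolve(self, home_addr):
        # Route to the CoA while roaming, else to the home address.
        return self._bindings.get(home_addr, home_addr)

registry = HomeRegistry()
registry.register("2001:db8:home::7", "2001:db8:team3::42")
```

Traffic addressed to `2001:db8:home::7` would then be forwarded to the team-3 care-of address until the binding is removed.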
UAV swarms can be formed in different
architectures such as ring, star and mesh. The mesh
network architecture is the most reliable and
combines the advantageous features of the ring and
star architectures: it provides high security and a low
chance of failure, since data can be transmitted in any
direction, to any node present in the architecture,
through any route. Another study also mentions the
use of these swarms to form different types of
network configurations, namely ring, star and mesh,
to increase the performance of the communication
channel [11]. Multi-carrier transmission based on
Orthogonal Frequency Division Multiplexing
(OFDM) technology has also been discussed as a
means of communication among UAVs [12], but it
faces the challenge of the continuous movement of
the vehicles: this mobility leads to inter-carrier
interference in the OFDM technique and degrades its
performance.
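The mobility problem noted above can be made concrete with a back-of-the-envelope calculation: motion produces a Doppler shift f_d = v f_c / c, and once this offset becomes a noticeable fraction of the subcarrier spacing, power leaks between subcarriers. The small-offset approximation P_ICI ≈ (π ε)² / 3 used below is a standard textbook bound, not taken from the paper, and all parameter values are assumptions for illustration.

```python
import math

# Rough sketch of Doppler-induced inter-carrier interference (ICI) in
# OFDM links between moving UAVs.

C = 3.0e8  # speed of light, m/s

def doppler_shift(speed_mps, carrier_hz):
    # Maximum Doppler shift for a platform moving at speed_mps.
    return speed_mps * carrier_hz / C

def normalized_offset(speed_mps, carrier_hz, subcarrier_spacing_hz):
    # Doppler shift as a fraction of the subcarrier spacing.
    return doppler_shift(speed_mps, carrier_hz) / subcarrier_spacing_hz

def ici_power_approx(eps):
    # Standard small-offset approximation, valid for eps << 1.
    return (math.pi * eps) ** 2 / 3.0

# Assumed example: a 30 m/s UAV at a 5.8 GHz carrier with 15 kHz
# subcarrier spacing gives a 580 Hz shift, about 3.9% of the spacing.
eps = normalized_offset(30.0, 5.8e9, 15e3)
```

Even this modest normalized offset produces a measurable ICI floor, which grows quadratically with speed and carrier frequency.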
Free Space Optical communication was carried out
in the past with sunlight as the optical carrier, used
to send signals by reflection. One such invention was
the heliograph of Carl Friedrich Gauss, which
transmitted signals to different base stations through
a beam of sunlight [13]. Another was the Photophone
of Alexander Graham Bell, known as the first
wireless telephone [14]. This device used the
vibrations generated by voice on a mirror, reflected
and carried by sunlight to the receiver, where the
vibrations were converted back into voice signals
[15]. This type of communication falls under the
category of long-range optical wireless
communication, which deals with terrestrial links and
line-of-sight communication between multiple
buildings over a range [16]. The signal undergoes
various losses in the channel. As the laser beam
travels along the optical channel it spreads, resulting
in a loss of optical power known as geometrical loss.
Another type of loss is atmospheric loss, which
results from many atmospheric factors: fog, dust,
aerosols, smoke, etc. affect the visibility of the laser
light, as they absorb energy and cause attenuation
and broadening of the optical pulses [17]-[19].
Another factor causing atmospheric loss is
scintillation [20]: atmospheric pressure and
temperature variations change the refractive index of
air, so the light intensity at the receiver varies in time
and space; this is called scintillation or fading. It is
more advantageous to use an optical link for data
exchange between two UAVs than an RF link
because optical communication requires line of sight
[21] and is therefore more secure: to gain any
information from an optical link, the beam has to be
interrupted directly, which can be easily detected as
the line-of-sight communication would break.
Another paper describes how link between many
aerial vehicles have to be setup before data
transmission [22]. Initially, an align-ment has to be
made between two FSO systems which come under
pointing phase. Then this alignment has to be
adjusted for communication to start which is
acquisition phase and simultaneously tracking phase
begins that keeps the systems in alignment. Studies
have also been carried out on horizontal links in high
altitudes which are more commonly known as High
Altitude Platforms (HAPs) [23]. HAPs also fall under
the category of unmanned vehicles but can also come

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at: www.gtia.co.in

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

under category of manned vehicle where it provides a


permanent service by using circling aircraft
techniques. Studies also show that during the
movement of the vehicles, it is important to maintain
good performance in the communication channel as
the coherence time be-comes short whereas Doppler
spread becomes large in the fading channel due to
high speed [24]. Thus, inter- carrier interference is
introduced in the channel which degrades its
performance. Previous investigations proved how
IEEE 802.11 was not suitable for WSN-UAV
systems. The reason behind that was the hidden
terminal effect and abundance of overheads on low
power sensors. Although Time Division Multiple
Access (TDMA) was used with WSNs but it only
worked towards reducing energy consumption.
Similarly, Frequency Division Multiple Access
(FDMA) could also not prove to be worthwhile due
to synchronization problems in frequency domain
[25] - [26].
According to the figure3, the unmanned vehicle
generates a beacon signal with the help of a beacon
generator. When the beacon signal is received by the
sensors, the sensors get activated and share their data
to other vehicles [27]. Based on the applications, the
path of travelling could be random or a fixed path.
The data from the sensors is shared between the
vehicles at regular intervals of time. Studies have
shown that multi hop communication between sensor
to sensor is quite inefficient because the signal power
decays to the fourth power of distance whereas direct
communications between the UAVs and sensors help
in saving energy consumed by the sensors to perform
tasks [28].
III. METHODOLOGY
It is shown in figure4, that the data collection in
wireless sensor network is done using unmanned
aerial vehicles. They are are used with different types
of sensors working on it. If we have to collect data of
an area giving all the information about it
environment, we take temperature sensors, radiators
and gas monitors [29]- [30]. The data from all the
vehicles combine to give accurate information about
an area. The method for this data communication is
shown in figure elaborating the communication
through wireless ad-hoc networking. In any area,
there are various sensors placed and these sensors are
called sensor nodes and all these sensors form a
cluster. As the area increases, we have many clusters.
The head of each cluster is the relay node that collects

information from all the sensor nodes of that cluster.


Now, the vehicle detects the data from the relay node
and transport it to the sinks located at the ends of the
wireless sensor networks. Thus, data is
communicated in wireless sensor network. Similarly,
if many such vehicles have been employed between
two or more sinks, then the communication between
them also play a major role as they could share data
among each other and the vehicle closer to the sink
nodes can finally give the data to the sink nodes. The
communication between UAVs comes into play when
the distance between sink nodes is large and not
feasible for one vehicle to travel back and forth to
transmit data.
According to the figure5, an FSO link is formed
be-tween the two aerial vehicles. Free space optical
link is advantageous due to high bandwidth as more
data can be transmitted and also because there is line
of sight communication between the vehicles. Both of
them behave like transceivers. For transmission,
optical sources such as Lasers or LEDs are used that
carry information through light by modulation
through modulators. For reception, photo detectors
are employed which detect the information and
convert it back into electrical signal which further
demodulates the original data.
Through the above survey, it is seen that UAV has
a high scope considering its applications. It can be
used for military operations as well as can be used for
commercial home security purposes by the civilians.
More technology advancement in UAVs can prove to
be very beneficial. The unmanned air vehicle can
make tasks easy for humans like one of the example
is of UAV based pesticide sprayer which could
pesticides over the crops and make the work easier
and complete it faster. Also, by the use of Free Space
Optical link, the UAVs can be used for data sharing
and communication over large areas with less
probability of error in the data collected.

Fig. 5. FSO link between two UAVs.

Special Issue: National Conference on Recent Innovations In Eng ineering & Technology
(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

[16]

References
[1] I. Jawhar, N. Mohamed and J. Al-Jaroodi, Data
communication in linear wireless sensor networks using
Unmanned Aerial Vehicles, International Conference on
Unmanned Aircraft Systems (ICUAS), Orlando, pp 43-51,
May 2014.
[2] I. Jawhar, N. Mohammed, J. Al-Jaroodi and S. Zhang, A
framework for using unmanned aerial vehicles for data
collection in linear wireless sensor networks, International
Conference on Unmanned Aircraft Systems (ICUAS), Atlanta,
pp 492-499, May 2013.
[3] A. R. Girard, A. S. Howell, K. Hedrick, Border patrol and
surveil-lance missions using multiple unmanned air vehicles,
43rd IEEE Conference on Decision and Control, Nassau, pp
620-625, December 2004.
[4] D. Rodewald, MRV introduces industrys first 10G ethernet
wireless point-to-point system, MRV Communications,
Chatsworth, CA, USA, September 2008.
[5] V. W. S. Chan, Free-space optical communications, Journal
of Lightwave Technology, vol 24, no. 12, pp 4750-4762,
December 2006.
[6] E. Leitgeb, M. S. Awan, P. Brandl, T. Plank, C. Capsoni and
R. Nebuloni, Current optical technologies for wireless
access,
10th
International
Conference
on
Telecommunications, Zagreb, pp 7-17, June 2009.
[7] D. Kedar and S. Arnon, Urban optical wireless
communication networks: The main challenges and possible
solutions, in Commu-nications Magazine, IEEE, vol 42, no.5,
pp 2-7, May 2004.
[8] B. M. Albaker, N. A. Rahim, A survey of collision avoidance
approaches for unmanned aerial vehicles, in International
Conference for Technical Postgraduates (TECHPOS), pp 1-7,
Kuala Lumpur, December 2009.
[9] M. A. Khalighi and M. Uysal, Survey on Free Space Optical
Commu-nication: A Communication Theory Perspective, in
Communications Surveys and Tutorials, IEEE, vol 16, no. 4,
pp 2231-2258, June 2014.
[10] B. R. Bellur, Mark G.Lewis and F. L Templin, Tactical
informa-tion operations for autonomous teams of unmanned
aerial vehicles (UAVs), in Aerospace Conference
Proceedings, IEEE, Menlo Park, USA, pp 2741-2756, May
2002.
[11] K. Zettl, S. S. Muhammad, C. Chlestil, E. Leitgeb, N. P.
Schmitt and W. Rehm, Reliable Optical Wireless Links
within UAV Swarms, In-ternational Conference on
Transparent Optical Networks, Nottingham, pp 39-42, June
2006.
[12] W. Zhiqiang, H. Kumar and A. Davari, Performance
evaluation of OFDM transmission in UAV wireless
communication, Proceedings of the Thirty-Seventh
Southeastern Symposium in System Theory, Montgomery,
USA, pp 6-10, March 2005.
[13] A. A. Huurdeman, The Worldwide History of
Telecommunications, Wiley-Interscience Conference, USA,
pp 269-293, 23
[14] D.J Phillipson, Alexander Graham Bell The Canadian
Encylopedia.
[Online]
availaible
:
http://www.thecanadianencylopedia.com/articles/alaxandergraham-bell
[15] M. Groth, Photophones Revisited, [Online]. Available:
http://www.bluehaze.com.au/modlight/GrothArticle1.htm

[17]

[18]

[19]

[20]

[21]

[22]

[23]

[24]

[25]

[26]

[27]

[28]

[29]

[30]

M. C. Jeung et al, 8 10-Gb/s terrestrial optical free-space


transmis-sion over 3.4 km using an optical repeater, in
Photonics Technology Letters, IEEE, vol 15, pp 171173,
South Korea, January 2003.
M. Grabner, V. Kvicera, Multiple scattering in rain and fog
on freespace optical links, in Journal of Lightwave
Technology, vol 32, no. 3, pp 513520, February 2014.
Ronald L Fante, Electromagnetic beam propagation in
turbulent media, Proceedings of IEEE, Bedford, pp 16691692, December 1975.
L. B. Pedireddi, B. Srinivasan, Characterization of
atmospheric turbulence effects and their mitigation using
wavelet-based signal processing, IEEE Transactions in
Communications, Chennai, India, vol 58, no. 6, pp 17951802, June 2010.
L. C. Andrews, R. L. Phillips, C. Y. Hopen, M. A. AlHabash, Theory of optical scintillation, Journal on Optical
Image Science, vol 16, no. 6, pp 14171429, June 1999.
S. Arnon, Effects of atmospheric turbulence and building
sway on optical wireless communication systems, Optical
Letters, IEEE, vol 28, pp 129 131, January 2003.
H. Yuksel, S. Milner, C. C. Davis, Aperture averaging for
optimizing receiver design and system performance on freespace optical commu-nication links, Journal on Optical
Network, vol 4, no. 8, pp 462475, August 2005.
D. Grace, N. E. Daly, T. C. Tozer, A. G. Burr, D. A. J.
Pearce, Providing multimedia communications services
from high altitude platforms, in Proceedings of International
Journal of Satellite Com-munications, vol 19, pp 559-580,
September 2001.
E. Leitgeb, K. Zettl, S. Muhammad, N. Schmitt, W. Rehm,
Investiga-tion in Free Space Optical Communication Links
Between Unmanned Aerial Vehicles (UAVs), 9th
International Conference on Transparent Optical Networks,
Rome, pp 152-155, July 2007.
M. Salajegheh, H. Soroush and A. Kalis, HYMAC: Hybrid
TDMA/FDMA Medium Access Control Protocol for
Wireless Sensor Networks, in 18th International symposium
on Personal,Indoor and Mobile Radio Communications,
Athens, Greece, pp 1-5, September 2007.
Tu Dac Ho, Jingyu Park, S. Shimamoto, Novel multiple
access scheme for wireless sensor network employing
unmanned aerial vehi-cle, 29th conference in Digital
Avionics Systems Conference(DASC), IEEE, Salt Lake City,
UT, pp 5-8, October 2010.
K. Sohrabi, B. Manriquez, G. J. Pottie, Near Ground
Wideband Channel Measurement, in Vehicular Technology
Conference, IEEE, Houston, Texas, July 1999.
S. M. Adams, C. Friedland, A survey of umanned aerial
vehicle(uav) usage for imagery collection in disaster research
and management, The 9th International Workshop on
Remote Sensing for Disaster Response, September 2011.
D. Jea, A. A. Somasundara and M. B. Srivastava, Multiple
controlled mobile elements (data mules) for data collection in
sensor networks, in International Conference on Distributed
Computating in Sensor Systems, Pasadena, CA, pp 10301035, July 2005.
W. Zhao and M. Ammar, Message ferrying: Proactive
routing in highly-partitioned wireless ad hoc networks, In
Proceedings of IEEE Workshop on Future Trends in
Distributed Computing Systems, Atlanta, USA, pp 308-314,
May 2003.

Special Issue: National Conference on Recent Innovations In Eng ineering & Technology
(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

VHDL Implementation of Built in Self-Test


For 2D-CWT Still Image Compression
Algorithm
Tanupreet Sabharwal, Prachi Punyani

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: tanupreet.sabharwal@niecdelhi.ac.in, prachipunyani90@gmail.com
Abstract Due to increasing system complexity,
fabrication and manufacturing defects, shrinking
device geometrics day by day, the number of faults
are also increasing in FPGAS, ASICS and
SOCS, thus making testing and fault tolerance
difficult. These constraints can be overcome by
BIST algorithms. BIST gives at speed testing,
eliminating the use of external test equipment. In
this paper various BIST algorithms are compared
and BIST algorithm for 2D-CWT still image
compression algorithm is proposed for real time
still image compression and restoration for
effectiveness of entire soc system. Instead of using
LFSR for pattern generation pseudo exhaustive
pattern generation is used to give better testing
and fault coverage. Growing gap between SOC
external and internal speeds makes BIST a
promising solution for at speed and economy
testing and fault tolerance within the SOC system.
2D-CWT
(Complex
Wavelet
Transform), ATE (Automatic Test Equipment),
CUT (Circuit under Test), BIST (Build in SelfTest), TPG (Test Pattern Generator), ORA
(Output Response Analyser), SOC (System on
Chip), LFSR (Linear Feedback Shift Register),
MISR (Multiple Input Shift Register), Random
Pattern generation (RPG)
Keywords

I.

INTRODUCTION

For the storage and transmission of information data


compression is necessary. Data can be any form it
may be text, audio, video, still or moving image,
graphics etc. Compression of still images is a major
image processing method. The advent of multimedia
computing has led to an increased demand for digital
images. The storage and manipulation of these
images in their raw form is very expensive. There is a
need, therefore, for high quality image compression
[1].

To make widespread use of digital imagery practical,


some form of data compression must be used. Digital
images can be compressed by eliminating redundant
information. There are three types of redundancy that
can be exploited by image compression systems:
Spatial Redundancy:- In almost all natural images,
the values
of neighboring pixels are strongly
correlated.
Temporal Redundancy:- Adjacent frames in a video
sequence often show very little change [1].
Spectral Redundancy:- In images composed of more
than one spectral band, the spectral values for the
same pixel location are often correlated.
The wavelet transform is applied to the entire image
thus no blocking artifacts, which are not desirable.
DCT did not have good directional and orientation
properties which led to the development of new
wavelet transforms such as DWT and CWT.
Basic BIST process can be understood as follows:The test Machine applies input test patterns to CUT
and compares CUT output response to known good
circuit output response. For the given set of input test
patterns usually obtained from simulation CUT gives
correct response to all test vectors, assumed to be
fault-free CUT gives incorrect response to one or
more text vectors assumed to be faulty. The ability of
logic to verify a failure-free status automatically,
without the need for externally applied test stimuli
(other than power and the clock), and without the
need for the logic to be part of a running system. Any
of the methods of testing an integrated circuit (IC)
that uses special circuits designed into the IC,
performs test functions on the IC, and signals whether
the parts of the IC covered by the BIST circuits are
working properly. The basic idea of BIST, in its most
simple form, is to design a circuit so that the circuit

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
7

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

can test itself and determine whether it is good or


bad.

Fig. 1 Basics of Testing

II.

PROPOSED WORK

The work proposed here is to choose best pattern


generation technique, and test the 2D-CWT
compression algorithm (CUT) and compare the
results with an output response analyser. Deeply
embedded SOC systems have low controllability and
observability. This requires test access mechanism to
transport test from source to core and from core to
sink for protection of intellectual property. Limited
core design knowledge by chip integrator need selfcontained test. Without core design knowledge,
limited input/output bandwidth, fast data flow rates
between cores inside the soc compared to lower rates
between ip cores and the external environment need
built-in-self-test. This BIST can be used as a real time
application by capturing images as inputs in real time
by a camera and then compressing these and storing
these. In the proposed BIST technique pseudo
random pattern generation is to be used, the DUT is
2D-CWT compression algoritm.

complete test coverage. The LFSR based on primitive


polynomial generates maximum-length PRPG. Builtin self-test (BIST) is a commonly used design
technique that allows a circuit to test itself. BIST has
gained popularity as an effective solution over circuit
test cost; test quality and test reuse problems. There
are other test pattern generation techniques such as
Exhaustive Pattern Generation, Pseudo Exhaustive
Pattern Generation, Random Pattern generation
(RPG), Pseudo Random Pattern Generation,
Weighted Pseudo Random Pattern Generation,
Cellular Automata Pattern Generation. LFSR is a
random pattern generator, complex in design, it
generates unwanted patterns which increases the
switching activity of the CUT leads to increase in
testing period. For these reasons exhaustive and
pseudo exhaustive testing are preferred. In exhaustive
testing number of test vector are more which is
minimized in pseudo exhaustive testing. BIST
techniques usually combine a built-in binary pattern
generator with circuitry for compressing the
corresponding response data produced by the circuit
under test. The compressed form of the response data
is compared with a known fault-free response.
IV.

CIRCUIT UNDER TEST

The DUT for which the BIST is to be designed is


shown in the following figure. It is a 3 Level 2DCWT using Q-shift and biorthogonal filters in which
still image compression is obtained by convolution of
image pixels and filter coefficients.

Fig. 2 Proposed BIST using PRPG and MISR

III.

ALGORITHM DESIGN USING BIST

The standard LFSR (linear feedback shift register)


used for pattern generation may give repetitive
patterns, which are in certain cases not efficient for

Fig. 3 Compression Algorithm (2D-CWT)

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
8

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Fig 3 shows a standard BIST process including linear


feedback shift register, the circuit under test and
output response analyzer.

Fig. 6 Proposed BIST using PRPG AND MISR

Fig 4. BIST process using LFSR, DUT and ORA

Fig. 7 Technology Schematic for DUT

Fig.5 Pseudo Exhaustive Pattern Generator

V.

RESULTS

The Synthesis Report, Map Report, RTL Schematics


and technology schematic for CUT are generated
using Xilinx 13.1. The simulation results are
generated and verified.
Fig. 8 ISIM 13.4 Behavioural Simulation Results For Design
Under Test i.e Modified 2D-CWT Algorithm

MISR (MULTIPLE INPUT SIGNATURE REGISTER)


For BIST operations, it is impossible to store all
output responses on-chip, on-board, or in-system to
perform bit-by-bit comparison. An output response
analysis technique must be employed such that output
responses can be compacted into a signature and
compared with a golden signature for the fault-free
Special Issue: National Conference on Recent Innovations In Engineering & Technology
(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
9

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

circuit either embedded on-chip or stored off chip


compaction differs from compression in tha
compression is loss-less, while compaction is lossy.
Compaction is a method for dramatically reducing the
number of bits in the original circuit response during
testing in which some information is lost.
Compression is a method for reducing the number of
bits in the original circuit response in which no
information is lost, such that the original output
sequence can be fully regenerated from the
compressed sequence.

will be the same as a good circuit. When longer


sequence is used signature analysis gives high fault
coverage.
REFERENCES
[1]

[2]

[3]
[4]

[5]
Fig 9. Multiple Input Signature Analysis
[6]

VI.

CONCLUSION

The research motivation for this work is to develop


VHDL BIST for 2D-CWT image compression
algorithm. BIST provides vertical testability from
wafer to system, high diagnostic resolution, at-speed
testing, reduced need for external test equipment.
Disadvantages of BIST techniques are area overhead,
performance penalties, additional design time &
effort, lack of orthogonal testing, reduced test
development time & effort, more economical burn-in
testing reduced manufacture test time & cost, reduced
time-to-market. Growing gap between SoC internal
and external speed makes Built-In Self-Test (BIST) a
promising solution for testing. BIST uses high speed
testing to detect the faults in embedded memory. This
is the only solution that allows at speed testing for
embedded memories. BISR is combining BIST with
efficient and low cost repair schemes in order to
improve the yield and system reliability as well. This
BIST algorithm with 2D-CWT can be used as a real
time application by capturing images as inputs in real
time by a camera and then compressing these and
storing them in an externally interfaced memory and
then consecutively checking the compression
characteristics before storing i.e checking the image
compression parameters such as BPP, CR, PSNR
without much affecting the image quality. The above
system can be implemented as a SOC. Signature
analysis is used to make verification of the circuit.
Signature mismatch with the reference signature
means that the circuit is faulty. However there is a
small probability that the signature of a bad circuit

[7]

[8]

[9]

[10]

[11]

[12]

[13]

[14]
[15]

[16]

[17]

Hilton, M. L.; Jawerth, B. D. and Sengupta, A. (1994),


Compressing Still and Moving Images with Wavelets,
appeared in Multimedia Systems, Vol. 2, No. 3.
Nagabushanam, M.; Raj p, C. P. and Ramachandran, S.
(2011), Design and FPGA Implementation of Modified
Distributive Arithmetic Based DWT IDWT Processor for
Image Compression, Communications and signal processing
proceedings, pp. 1-4.
Mulcahy, C. (1995), Image compression using the Haar
wavelet transform, Spelman Science and Math Journal.
Kingsbury, N. G.(2003) ,Minimization Design of Q-shift
Complex Wavelets for Image Processing Using Frequency
Domain Energy, proceedings of the IEEE Conf. on Image
Processing, vol. 1, pp. 1013-16.
Muhit, A. A.; Islam, M. S. and Othman, M. (2004), VLSI
Implementation of Discrete Wavelet Transform (DWT) for
Image Compression, proceedings of 2nd International
Conference on Autonomous Robots.
Shukla, P. D. (2003), Complex Wavelet Transforms and
their Applications, M-Phil thesis.
S. Hamdioui, Z. Al-Ars, A.J. van de Goor, Testing Static
and Dynamic Faults in Random Access Memories, In Proc.
of IEEE VLSI Test Symposium, pp. 395-400, 2002.
N. Z. Haron, S.A.M. Junos, A.S.A. Aziz, Modelling and
Simulation of Microcode Built-In Self-test Architecture for
Embedded Memories, In Proc. of IEEE International
Symposium on Communications and
Information
Technologies pp. 136-139, 2007.
R. Dean Adams, High Performance Memory Testing:
Design Principles, Fault Modeling and Self-Test, Springer
US, 2003.
J. Rajski and J. Tyszer, Recursive pseudoexhaustive test
pattern generation, IEEE Trans. Comput., vol. 42, no. 12,
pp. 15171521, Dec. 1993.
P.Ravinder and N.Uma Rani, Design and Implementation
of Built-in-Self Test and Repair, International Journal of
Engineering Research and Applications, (IJERA) ISSN:
2248-9622 www.ijera.comVol. 1, Issue 3, pp.778-785.
S.Pushpraj and S.Priyanka, VHDL Implementation of Logic
BIST (Built In Self Test) Architecture for Multiplier Circuit
for High Test Coverage in VLSI Chips, International Journal
of Advanced Research in Electrical, Electronics and
Instrumentation Engineering (An ISO 3297: 2007 Certified
Organization) Vol. 3, Issue 11, November 2014.
S. Tanupreet and R. Munish, Automation of HDL and C++
Code Generation for Pipelined and Multiplier Free 2D-CWT
Image Compression Algorithm, REACT 2013.
S. Tanupreet and R. Munish, Review Article on Wavelet
Based Still Image Compression Techniques, REACT 2013.
F. Yang, S. Chakravarty, N. Devta-Prasanna, S.M.
Reddy1and I. Pomeran, 2008 An Enhanced Logic BIST
Architecture for Online Testing 978-0-7695-3264-6/08, 14th
IEEE International On-Line Testing Symposium 2008.
Mohammed F. AlShaibi, Charles R.Kime, MFBIST: A BIST
Method For Random Pattern Resistant Circuits 0-7803-35406196 1996 IEEE.
Yuejain, W., S. Thomson, D Mutcher and E. Hall, 2011.
Built-In Functional test for silicon validation and system

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
10

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

integration of telecom SOC designs. IEEE trans. Very large


scale integration (VLSI) Syst., 19(4): 629-637.
[18] Lusco, M.A., J.L. Dailey and C.E. stourd, 2011 BIST for
multipliers in altera cyclone II field Programmable gate
arrays. IEEE 43rd system theory (SSST), Mar, 14-16, pp:
214-219.

[19] Tseng, T. W., L. Jin Fu and C.C. Hsu 2010. Re BISR: A


Reconfigurable bilt in self repair scheme for random access
memories in SOCs. IEEE trans. Very Large scale integration
(VLSI) Syst., 18(6): 921-932.
[20] International SEMATECH, International Technology
Roadmap for Semiconductors (ITRS): Edition 2001

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
11

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Speed Control of DC Motor Based On


Temperature
Neha Srivastava, Shalini Kumari

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: nehamit0505@gmail.com, shalini.kumari@hotmail.com

Abstract The idea which gave us an instinct to


develop this project is the work of heavy motors in
an industry at different temperature conditions;
generally we see that motor has an internal coil
winding which gets heated on continuously
moving of the motor and as the temperature
changes there is an adverse effect in the motors
which are not visible by our naked eyes so we
thought of designing this project which will help
the motor to control its speed automatically
according to change in temperature. In this
project our aim is to design and implement DC
motor speed. Temperature sensing of material.
This project is mainly concerned on DC motor
speed control system by using microcontroller
8051.
Keywords DC motor, change in temperature,

microcontroller
I.

INTRODUCTION

The 8051 series of microcontrollers are very useful in


electronics engineering department.It can be used for
industrial and commercial control applications,
appliances
control,
temperature
sensing,
instrumentation, etc. Direct current (DC) motors have
variable characteristics and are used extensively in
variable speed devices .DC motor can provide a high
starting torque and it is possible to obtain speed
control over wide range .DC motor plays an
important role in modern industry.There are several
types of applications of a DC motor like in home
appliances,washers dryers and compressors.
Temperature is one of the main parameter to control
in most of the manufacturing industries like
chemical,food processing ,pharmaceutical etc.In these
kinds of industries some product need the required
temperature to be maintained at highest priority
otherwise the product will fail.So the temperature
controller is most widely used in almost all the

industries and is also the initial part of our project.In


our project our main concern is to control DC motor
speed control system by using microcontroller 89s52
which is of the 8051 family .Motor speed can be
controlled with variable resistor. so this
microcontroller 89s52 which is a programming
device can be used to control the speed of motor.
II.

PRINCIPAL

The main principle of the project is to control the


DC motor speed when the temperature is greater or
lower than the threshold value.The microcontroller
continuously reads the temperature from its
surroundings .The temperature sensor acts as a
transducer and converts the sensed temperature to
electrical value this is analog value which is applied
to the adc which converts it into digital and applies it
to the microcontroller and this is shown in the lcd
screen which is the output device,also at the output of
the microcontroller is connected DC motor which
controls its speed as per the temperature sensed by
the sensor.

Fig. 1 Circuit Diagram

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
12

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

III. ADVANTAGES AND DISADVANTAGES

1. It increases the durability of the motor.
2. It is a low-budget kit and can be easily assembled anywhere.

The main disadvantage of this project is that if the circuit gets damaged for any reason, it might not control the speed of the motor.
IV. APPLICATIONS

1. It can be used for industrial purposes wherever motors are employed.
2. It can be used for domestic purposes, such as in a ceiling fan where there is a chance of the coil getting burnt.

The actual temperature and the set value of the temperature are displayed on the LCD screen.

V. CONCLUSION

Recent developments in science and technology provide a wide scope for applications of high-performance DC motor drives. Areas such as rolling mills, chemical processing, electric trains, robotic manipulators and home electric appliances require speed controllers to perform their tasks, and the DC motor has speed-controlling capabilities. The goal of this project was to design a DC motor speed control based on temperature variations. The controller maintains the motor at the desired speed.

REFERENCES
[1] R. Krishnan, Electric Motor Drives: Modelling, Analysis, and Control, Prentice Hall International Inc., New Jersey, 2001.
[2] Duane, H., Brushless Permanent Magnet Motor Design, University of Maine, Orono, USA, 2nd edition, 2002.
[3] F. Luo, X. Zhao, and Y. Xu, "A new hybrid elevator group control system scheduling strategy based on particle swarm simulated annealing optimization algorithm", Intelligent Control and Automation (WCICA), pp. 5121-5124, 2010.
[4] G. K. Dubey, Fundamentals of Electric Drives, Narosa Publishing House, New Delhi, 1989.
[5] Z. Ahmad and M. N. Taib, "A study on the DC motor speed control by using back-EMF voltage", Asia SENSE SENSOR, pp. 359-364, 2003.
[6] John, PIC Microcontroller Project Book, second ed., McGraw-Hill, Singapore, 2000; Microcontroller Technology: The 68HC11, Prentice Hall, 1992.
[7] M. H. Rashid, Power Electronics: Circuits, Devices and Applications, third ed., Prentice Hall, United States of America, 2004.
[8] S. Ravi and P. A. Balakrishnan, "Design and development of a microcontroller based neuro fuzzy temperature controller", IEEE/OSA/IAPR International Conference on Informatics, Electronics & Vision.


A Comparison Study of Appearance Feature Descriptors for Age Estimation from Face Images
Prachi Punyani, Tanupreet Sabharwal

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: prachipunyani90@gmail.com, tanupreet.sabharwal@niecdelhi.ac.in
Abstract: Automatic age estimation from face images is one of the latest fields of research, after face recognition, in the area of soft biometrics. Age estimation remains a challenging research problem because it depends on both intrinsic and extrinsic factors. In this paper, we compare four appearance feature extraction techniques used to extract features from a face image for estimating a person's age. The four descriptors compared are gradient based encoded aging features (GEF), intensity based encoded aging features (IEF), biologically inspired aging features (BIF) and local binary patterns (LBP). The results show that the mean absolute error (MAE) of biologically inspired features is better than that of the other three.

Keywords: face recognition, biologically inspired features, age estimation, appearance features.

I. INTRODUCTION

Age estimation is one of the most active research topics these days because of its major applications in forensics, entertainment, biometrics and security. It is an interesting research problem and an extension of the field of soft biometrics. To date, face images have been the preferred means of estimating a person's age, and other biometric traits are yet to be tested deeply for age estimation.

Age estimation using face images is challenging because the apparent age of a person is affected not only by intrinsic but also by extrinsic factors. In terms of intrinsic factors, there are significant differences in the aging process of males and females. Weather conditions, lifestyle and health also affect the apparent age of a person. The presence of scars, facial hair, moles and cosmetics on the face are other factors that create challenges while estimating a person's age.
Automated age estimation by computer systems is hard to achieve because of the variations caused by the illumination, pose and expression of a person. Illumination effects such as background light or back-lit views act as noise over the

image. There may be pose variations: the image could be taken from a side view that does not show the whole face, preventing proper estimation. The different expressions of a person in different situations also pose a major hindrance while estimating age; a smile, for example, creates wrinkles near the lips and eyes, and a disgusted expression creates lines on the forehead, making the process of age estimation difficult.
There are three different methods of estimation: the anthropometry-based approach, the image-based approach and the appearance-based approach.
In this paper, we have discussed and compared four variations of the appearance-based approach for age estimation, which uses facial features such as texture and shape. The four variations are gradient based encoded aging features (GEF), intensity based encoded aging features (IEF), biologically inspired aging features (BIF) and local binary patterns (LBP).
The work is carried out on 50 frontal face images acquired from the FGNET database. Some sample images are shown below in figure 1. All simulations are done in Matlab 2012b.

Fig. 1: Face collection from FGNET database

The remainder of the paper summarizes the related work in section II and the comparison study in section III; experimental results and conclusions are given in sections IV and V respectively.
II. RELATED WORK

Various studies have been done in the field of


biometrics to estimate the age of a person. All the


works are focused on finding the appropriate facial


features for age estimation. For instance, O'Toole et
al. [1] use 3D models of faces to apply caricaturing
processes in order to describe age variations between
samples. Wu et al. [2] develop a system for the
simulation of wrinkles and skin aging for facial
animation. Suo et al. [3] present a model for face
aging by analyzing it as a Markov process through a
graph representing different age groups. Tiddeman et
al. [4] also develop prototype models for face aging
using texture information. In [5], a quantitative
approach to face evolution of aging is presented.
The results of these studies show that the
craniofacial development and skin texture are the
most important features for age estimation. In fact,
one of the first approaches for age estimation is
proposed by Kwon and da Vitoria Lobo [6], where
individual faces are classified into three age groups
(baby, young and senior). This classification is
performed using the theory of craniofacial
development [7] and facial skin wrinkle analysis.
Lanitis et al. [8] propose an age estimation method
based on regression analysis of the aging function.
During the training procedure, a quadratic function of
facial features is fitted to each individual in the
training set as his/her aging function. As for age
estimation, they propose four approaches to
determine the proper aging function for the unseen
face image. The Weighted Person Specific (WPS)
approach achieves the best performance in the
experiments. This function, however, relies on
profiles of the individual containing external
information such as gender, health, living style, etc.
The appearance-based approach, such as in [8], [9],
[10], [11], [12], [13], [14], [15], utilizes facial
appearance (both texture and shape) to differentiate
faces among individual demographic groups. Active
Appearance Model (AAM) and its variations [16],
[17] are widely used to model the facial texture and
shape. Similar to the anthropometry-based approach,
the appearance-based approach demands highly
accurate facial landmark localization.
III. COMPARISON STUDY

The whole sequence of steps followed for automated age estimation is shown in figure 2. We have implemented the whole process four times, using a different feature extraction technique each time. The final results are then evaluated by computing the mean absolute error for each technique.

Fig. 2: Block diagram of Sequence of steps followed

A. Face Preprocessing
Face preprocessing is the first and most important step, applied to every face image in the database to remove variations and unwanted noise. First, the coloured image is converted into a grey-scale image to remove the effect of inconsistent colours using the formula:

I = 0.298*R + 0.587*G + 0.114*B    (1)

where R, G and B are the red, green and blue channels of the colour image and I is the final grey-scale image. The image is cropped to 60 x 60 pixels with a 32-pixel interpupillary distance and centralized to remove the effects of scale, rotation and translation variations, as shown in figure 3. Finally, Gaussian filtering is applied to remove the remaining unwanted noise.

Fig. 3: Preprocessed Image
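Equation (1) amounts to a weighted sum over the three colour channels. A minimal NumPy sketch, using a toy 2x2 white image purely as an illustration:

```python
import numpy as np

# Grey-scale conversion per Eq. (1): I = 0.298*R + 0.587*G + 0.114*B.
def to_grayscale(rgb):
    """rgb: H x W x 3 array -> H x W grey-scale array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.298 * r + 0.587 * g + 0.114 * b

img = np.ones((2, 2, 3))       # a pure-white toy image
grey = to_grayscale(img)       # each pixel: 0.298 + 0.587 + 0.114
```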

B. Features Representation
To describe the features of the image we have used four different types of appearance features and compared them to identify the best of the four. The four feature descriptors described below are intensity based encoded aging features (IEF), gradient based encoded aging features (GEF), biologically inspired aging features (BIF) and local binary patterns (LBP).
1) Intensity based encoded aging features
These are low-level features based on learning-based encoding. First, a discriminative low-level feature is computed for each pixel. Second, these computed features are encoded using a PCA tree-based codebook [23]. The face is divided into patches; codes are computed for each patch and


described using a histogram. These histograms are finally concatenated to form an age descriptor. IEF-based intensity sampling is used to capture skin texture and wrinkle details. The 25 neighbouring intensity values around each pixel are sampled in a ring-based pattern having two rings with r=1 (8 values) and r=2 (16 values), including the central pixel value itself. The gradient orientations are binned into equally spaced bins over 0 to 360 degrees, where the gradient magnitudes are accumulated. As in [24], Gaussian derivatives are chosen for calculating the gradient and the number of bins equals eight.
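The ring-based sampling described above can be sketched as follows. The nearest-pixel rounding of the ring coordinates is an implementation assumption, since the paper does not state how sub-pixel ring positions are handled.

```python
import numpy as np

# Ring-based intensity sampling: the centre pixel plus 8 samples at
# r=1 and 16 samples at r=2, giving 25 values per pixel.
def ring_samples(image, y, x):
    values = [image[y, x]]                       # central pixel
    for r, n in ((1, 8), (2, 16)):               # the two rings
        for k in range(n):
            theta = 2 * np.pi * k / n
            yy = int(round(y + r * np.sin(theta)))
            xx = int(round(x + r * np.cos(theta)))
            values.append(image[yy, xx])
    return np.array(values)

img = np.arange(49, dtype=float).reshape(7, 7)
v = ring_samples(img, 3, 3)                      # 25 sampled intensities
```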
2) Gradient based encoded aging features
GEF is based on the same concept as IEF, but here gradient histograms are computed to capture wrinkle details. For extraction, gradient directions are computed in an 8x8 neighborhood of each pixel.
3) Biologically inspired aging features
Guo et al. introduced biologically inspired features [20] for feature extraction. Features are extracted using a two-layer filter bank. The first layer applies Gabor filters at different scales and orientations. The second layer pools the first-layer responses with the same direction and adjacent scale bands into a single value using the standard deviation or max function. In this experiment, we use 16 orientations and eight bands to build the descriptor.
4) Local binary patterns
The local binary patterns descriptor was proposed by Ojala et al. [21]. It takes the intensity value of the center pixel as a threshold to convert the neighborhood pixels into a binary code. The computed binary code describes the ordered pattern around the center pixel. This procedure is
repeated for each pixel on the image and the
histogram of the resultant 256 labels can then be used
as a texture descriptor. In [22], Ojala et al. show that
a large number of the local binary patterns contain at
most two bitwise transitions from 0 to 1 or 1 to 0,
which is called a uniform pattern. Therefore, during
the computation of the histograms, the size of the
feature vector can be significantly reduced by
assigning different bins for each of the 58 uniform
patterns and one bin for the rest. Uniform local binary
patterns are used in experiments, and are hereafter
referred to as LBP. Eight neighborhood pixels (on a
circle with a radius of 1 pixel) are used to extract the
LBP features.
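As a concrete illustration of the LBP computation described above, the following sketch derives the 8-neighbour code for one pixel and checks the uniform-pattern condition; the clockwise bit ordering is an arbitrary choice of this sketch.

```python
import numpy as np

# Basic 8-neighbour LBP: each neighbour whose intensity is >= the
# centre pixel contributes one bit to the code.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]    # clockwise ring, r = 1

def lbp_code(image, y, x):
    center = image[y, x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if image[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code

def is_uniform(code):
    """At most two 0/1 transitions on the circular 8-bit pattern."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

img = np.array([[9, 9, 9],
                [1, 5, 1],
                [1, 1, 1]], dtype=float)
code = lbp_code(img, 1, 1)   # only the top row exceeds the centre
```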
C. Quality Assessment
The performance of any recognition system depends on the quality of its images. Quality assessment of any such system depends on factors such as IPD, pose, blur, illumination, etc. [18], [19]. We apply the same principle to demographic estimation.


In this section, we present a learning-based quality
assessment method to identify and reject low-quality
face images. Low quality face images due to blur,
pose and illumination variations are detected and
removed for the further process.
In the case of age estimation, this partition of low
quality and high quality face images is decided on the
basis of a threshold P.
To address the problem of imbalance, resampling
with replacement is used for the positive samples. In
each resampling of positive samples, the same
number of positive samples is drawn as the negative
samples. This resampling is performed K times to
build an ensemble classifier to distinguish between
high quality and low-quality face images,

QA(x) = (1/K) Σ Qk(x), summed over k = 1, ..., K    (2)

where Qk(.) is a binary SVM classifier with the RBF kernel: Qk(x) = 1 if the face image is of high quality and Qk(x) = 0 otherwise. Based on the ensemble classifier, we reject a test face image xt only when QA(xt) = 0. Such a conservative rejection criterion assures a low rejection rate during testing but is still effective in improving the overall accuracy of demographic estimation. In our age estimation experiments, K = 5 is used, and the threshold P is set so that we reject about 5 percent of the test images.
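The ensemble rule of Eq. (2) and the conservative rejection criterion can be sketched as below. The scalar "blur score" classifiers are hypothetical stand-ins for the trained binary SVMs; only the voting and rejection logic is the point of the sketch.

```python
# Ensemble quality assessment per Eq. (2): average K binary votes and
# reject only when every classifier votes 0.
def quality_assessment(x, classifiers):
    K = len(classifiers)
    return sum(q(x) for q in classifiers) / K

def should_reject(x, classifiers):
    """Conservative rule: reject only on a unanimous zero vote."""
    return quality_assessment(x, classifiers) == 0

# Hypothetical stand-ins: each "classifier" thresholds a sharpness score.
classifiers = [lambda x, t=t: 1 if x > t else 0
               for t in (0.1, 0.2, 0.3, 0.4, 0.5)]   # K = 5

sharp_ok = not should_reject(0.9, classifiers)   # all five vote 1
blurry_rejected = should_reject(0.05, classifiers)
```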
D. Classification and Regression
For efficient classification of face images a single generic classifier is not a good idea; instead a two-level classifier and regressor is used. The first level of age estimation classifies the face images into age groups such as (8-16, 17-24, 24-32, 33-40, ...). In the second level the exact age is predicted.
In this two-level architecture we use SVM classifiers in the first level for age-group classification. In the second stage we use SVM regressors to fine-tune the age to a specific number; these fine-tuning SVM regressors are defined separately for each age group. To optimize the SVM configuration, different kernels with different parameters are tested on the face database and the configuration with the minimum validation error is selected.
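The two-level architecture can be sketched as follows. The stand-in scoring functions replace the trained SVM classifier and regressors (training them is outside the scope of this sketch), but the control flow, a group decision first and then a fine-grained age within the group, follows the description above.

```python
# Two-level age estimation: a first-level group classifier followed by
# a per-group regressor.  The score functions are hypothetical.
GROUPS = [(8, 16), (17, 24), (24, 32), (33, 40)]

def classify_group(features):
    """First level: pick an age group (stand-in for the SVM classifier)."""
    score = sum(features)               # hypothetical aging score
    index = min(int(score // 10), len(GROUPS) - 1)
    return GROUPS[index]

def regress_age(features, group):
    """Second level: fine-tune within the group (stand-in for SVM regression)."""
    lo, hi = group
    frac = (sum(features) % 10) / 10.0  # hypothetical within-group position
    return lo + frac * (hi - lo)

features = [4.0, 8.5]                   # hypothetical feature vector
group = classify_group(features)        # score 12.5 -> group (17, 24)
age = regress_age(features, group)
```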


IV. EXPERIMENTAL RESULTS

All the experiments are performed on the FGNET database and the implementations are done in Matlab. Table I presents the mean absolute error (MAE) of all four appearance feature descriptors.
TABLE I: Mean Absolute Error (MAE) of appearance features on the database

Features            MAE (years)
Appearance: None    N/A
Appearance: IEF     4.70 (±4.76)
Appearance: GEF     5.45 (±5.56)
Appearance: BIF     5.75 (±6.14)
Appearance: LBP     5.41 (±5.57)

Biologically inspired features are found to have the minimum MAE of 5.75, and hence BIF is a better appearance feature descriptor than the other three.
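The MAE reported in Table I is the average absolute deviation between predicted and true ages, with the bracketed value being the standard deviation of the errors. A small sketch with made-up ages (not database results):

```python
# Mean absolute error and its standard deviation, as used in Table I.
def mae(predicted, actual):
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return mean, var ** 0.5

predicted = [25, 31, 18, 44]          # hypothetical estimated ages
actual = [22, 35, 18, 40]             # hypothetical true ages
mean_err, std_err = mae(predicted, actual)   # errors: 3, 4, 0, 4
```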
V. CONCLUSION

Our study presents a comparison of four different feature descriptors, namely gradient based encoded aging features (GEF), intensity based encoded aging features (IEF), biologically inspired aging features (BIF) and local binary patterns (LBP). First, face preprocessing is performed by conversion into a grey-scale image and cropping. Feature encoding is then done using the four techniques GEF, IEF, BIF and LBP. In the first level, classification is done using SVM classifiers and, in the second level, regression is done using SVM regressors. Finally the age of the person is estimated and the accuracy is evaluated for all four descriptors individually in terms of mean absolute error (MAE). Biologically inspired features are found to be a better descriptor than the other three because of the minimum MAE.
REFERENCES
[1] J. O'Toole, T. Vetter, H. Volz, and E. M. Salter, "Three-dimensional caricatures of human heads: Distinctiveness and the perception of facial age," Perception, vol. 26, no. 6, pp. 719-732, 1997.
[2] Y. Wu, N. M. Thalmann, and D. Thalmann, "A dynamic wrinkle model in facial animation and skin ageing," J. Vis. Comput. Animation, vol. 6, no. 4, pp. 195-205, 1995.
[3] J. Suo, S.-C. Zhu, S. Shan, and X. Chen, "A compositional and dynamic model for face aging," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 3, pp. 385-401, Mar. 2010.
[4] Tiddeman, M. Burt, and D. Perrett, "Prototyping and transforming facial textures for perception research," IEEE Comput. Graph. Appl., vol. 21, no. 5, pp. 42-50, Sep./Oct. 2001.
[5] M. Ortega, L. Brodo, M. Bicego, and M. Tistarelli, "On the quantitative estimation of short-term aging in human faces," in Proc. 15th ICIAP, 2009, pp. 575-584.
[6] Y. H. Kwon and N. da Vitoria Lobo, "Age classification from facial images," Comput. Vis. Image Understand., vol. 74, no. 1, pp. 1-21, 1999.
[7] T. R. Alley, Social and Applied Aspects of Perceiving Faces. Hillsdale, NJ, USA: Lawrence Erlbaum Associates, 1988.
[8] Lanitis, C. J. Taylor, and T. F. Cootes, "Toward automatic simulation of aging effects on face images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 442-455, Apr. 2002.
[9] X. Geng, Z.-H. Zhou, and K. Smith-Miles, "Automatic age estimation based on facial aging patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 12, pp. 2234-2240, Dec. 2007.
[10] Y. Fu and T. S. Huang, "Human age estimation with regression on discriminative aging manifold," IEEE Trans. Multimedia, vol. 10, no. 4, pp. 578-584, Jun. 2008.
[11] S. E. Choi, Y. J. Lee, S. J. Lee, K. R. Park, and J. Kim, "Age estimation using a hierarchical classifier based on global and local facial features," Pattern Recognit., vol. 44, no. 6, pp. 1262-1281, Jun. 2011.
[12] K. Luu, K. Seshadri, M. Savvides, T. Bui, and C. Suen, "Contourlet appearance model for facial age estimation," in Proc. Int. Conf. Biometrics, 2011, pp. 1-8.
[13] X. Geng, C. Yin, and Z.-H. Zhou, "Facial age estimation by learning from label distributions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 10, pp. 2401-2412, Oct. 2013.
[14] Y. Saatci and C. Town, "Cascaded classification of gender and facial expression using active appearance models," in Proc. 4th Int. Conf. Automat. Face Gesture Recognit., 2006, pp. 393-400.
[15] Y. Wang, K. Ricanek, C. Chen, and Y. Chang, "Gender classification from infants to seniors," in Proc. 4th Int. Conf. Biometrics: Theory Appl. Syst., 2010, pp. 1-6.
[16] K. Luu, K. Seshadri, M. Savvides, T. Bui, and C. Suen, "Contourlet appearance model for facial age estimation," in Proc. Int. Joint Conf. Biometrics, 2011, pp. 1-8.
[17] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," in Proc. Eur. Conf. Comput. Vis., 1998, pp. 484-498.
[18] Tabassi, C. Wilson, and C. Watson, "Fingerprint image quality," Nat. Inst. Standards Technol., Gaithersburg, MD, USA, The Image Group of Information Access Division, Tech. Rep. NISTIR 7151, Aug. 2004.
[19] P. Grother and E. Tabassi, "Performance of biometric quality measures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 531-543, Apr. 2007.
[20] Guo, G. Mu, Y. Fu, and T. S. Huang, "Human age estimation using bio-inspired features," in Proc. IEEE CVPR, Jun. 2009, pp. 112-119.
[21] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognit., vol. 29, no. 1, pp. 51-59, 1996.
[22] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971-987, Jul. 2002.
[23] Y. Freund, S. Dasgupta, M. Kabra, and N. Verma, "Learning the structure of manifolds using random projections," in Advances in Neural Information Processing Systems 20. Red Hook, NY, USA: Curran Associates, 2007, pp. 473-480.
[24] Alnajar, C. Shan, T. Gevers, and J.-M. Geusebroek, "Learning-based encoding with soft assignment for age estimation under unconstrained imaging conditions," Image Vis. Comput., vol. 30, no. 12, pp. 946-953, 2012.


Underground Platform Cooling System for Delhi Metro
Jatin Gaur, Prateek Mishra

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: jatin.gaur@niecdelhi.ac.in
Abstract: The sweltering summers of Delhi call for air-conditioning of the underground platforms of the Delhi Metro. D.M.R.C. has already come up with a model to do so, but this research paper suggests an alternative method of cooling the air on underground platforms. The method involves a simple approach of cooling air by making highly efficient use of the Joule-Thomson effect. The suggested process takes many other physical aspects into consideration to set up a system which is more efficient and incurs less expenditure.

Keywords: Joule-Thomson effect, inversion temperature, liquefaction of gas, Claude's process.

I. INTRODUCTION

"We cannot do great things on this Earth, but only small things with great love."

The Delhi Metro is India's first modern public transportation system, which has revolutionized travel by providing a fast, reliable, safe, and comfortable means of transport. The network consists of six lines with a total length of 189.63 kilometres (117.83 mi) and 142 stations, of which 35 are underground, five are at grade, and the remainder are elevated. All stations have escalators, elevators, and tactile tiles to guide the visually impaired from station entrances to trains.
The Delhi Metro is built and operated by the Delhi Metro Rail Corporation Limited (DMRC). The metro has an average daily ridership of 1.8 million commuters and, as of July 2011, had carried over 1.25 billion commuters since its inception. As the network has expanded, high ridership in new trains has led to increasing instances of overcrowding and delays. To alleviate the problem, 8-coach trains have been introduced and an increase in the frequency of trains has been proposed [5].
The Delhi Metro has also been working to make its ride a green one. It has won awards for environmentally friendly practices from the United Nations and the International Organization for Standardization, becoming the second metro in the world to be ISO 14001 certified for environmentally friendly construction. Most of the Metro stations on the Blue Line conduct rainwater harvesting. It is also the first railway project in the world to earn carbon credits after being registered with the UN under the Clean Development Mechanism, and it has so far earned 400,000 carbon credits by saving energy through the use of regenerative braking systems on its trains [5].
All these recognitions to the credit of the Delhi Metro deserve great appreciation, but there is still a lot to be achieved.
"Earthlings have forgotten how to be good guests, how to walk lightly on the earth as its other creatures do."
It is high time to realize that we have to work towards development while keeping the earth green. "Never doubt that a small group of thoughtfully committed citizens can change the world. Indeed, it's the only thing that ever has caused any change." That is the thought which could act as a catalyst towards a greener world.
This is the main force guiding this research paper: an attempt to realize an air-conditioning system meant for the underground platforms of the Metro with minimal adverse effect on the environment. If the conditions of the process are satisfied, it could help achieve cheaper, greener and more efficient cooling. This paper takes a general-to-specific approach, along with the scientific background of the terms involved. It gives a detailed description of the suggested process with a proper diagram. All the simple questions, such as "Why this system?" and "How

can it be achieved?", have been answered. The scope for future development and improvement has also been stressed.
II. METHOD

1. The following basic theoretical points form the basis of the procedure used for air-conditioning an underground platform.

A. Controlled Liquefaction of Gases

Liquefaction of a gas is simply its conversion into the liquid state [1]. Controlled liquefaction, however, stops short of full conversion into the liquid state and merely cools the gas. This is achieved with the help of the Joule-Thomson effect.
B. Joule-Thomson Effect
This refers to the change in temperature that accompanies the expansion of a gas without production of work or transfer of heat. At ordinary temperatures and pressures, all real gases except hydrogen and helium cool upon such expansion; this phenomenon is often utilized in liquefying gases. The cooling occurs because work must be done to overcome the long-range attraction between the gas molecules as they move farther apart [2].

TABLE 1: IMPORTANT PROPERTIES OF GASES

Gas    Boiling Point (°C)    Freezing Point (°C)    Maximum Inversion Temperature (°C)
Air    -191                  -212.3                 330
O2     -183                  -218.8                 620
N2     -196                  -210                   347.8
H2     -252.8                -259.2                 -77.8
CO2    -78.3                 --                     1230

C. Inversion Temperature
The inversion temperature is the critical temperature below which a real gas expanding at constant enthalpy experiences a temperature decrease, and above which it experiences a temperature increase, in accordance with the Joule-Thomson effect.

Figure 1: Diagram of the Joule-Thomson experiment

D. Claude's Process for Liquefaction of Air
The air is first compressed to about 200 atmospheres by a compressor. The compressed air then passes through a tube which bifurcates the incoming gas: part of the air (at X) goes into a cylinder fitted with an air-tight piston, and the rest passes through a spiral coil ending in a jet. The air that enters the cylinder pushes the piston and thus does some external work. As a result, the internal energy of the air falls and hence its temperature falls. The cooled air then enters the chamber at the lower end. The air that passes through the spiral coil is cooled by the Joule-Thomson effect as it emerges through the jet into the low-pressure region (50 atm) of the chamber. The cooled air is circulated again and again with the incoming air, and further cooling of the air takes place [3].

Figure 2: Diagram of Claude's Process
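The inversion-temperature rule above can be stated compactly in code. This sketch uses the maximum inversion temperatures from Table 1 to decide whether throttling cools or warms a gas; it is an illustration of the rule, not a thermodynamic calculation.

```python
# Sign of the Joule-Thomson temperature change: expansion cools a gas
# below its maximum inversion temperature and warms it above.
# Maximum inversion temperatures (degC) taken from Table 1.
MAX_INVERSION_C = {"air": 330.0, "O2": 620.0, "N2": 347.8,
                   "H2": -77.8, "CO2": 1230.0}

def cools_on_expansion(gas, temp_c):
    """True if throttling at temp_c lowers the gas temperature."""
    return temp_c < MAX_INVERSION_C[gas]

room = 25.0
air_cools = cools_on_expansion("air", room)   # 25 < 330: air cools
h2_cools = cools_on_expansion("H2", room)     # hydrogen warms instead
```

This is why the platform system works with ordinary air at room temperature, while hydrogen would first have to be pre-cooled below -77.8 °C.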

2. The System For Achieving Cooling Of Platforms


III. WORKING OF THE SYSTEM

The setup of this air-cooling system consists of an apparatus capable of carrying out liquefaction of air using Claude's Process; the arrangement shown in figure 2 above is a miniature version of the process. First the air is passed through a compressor. A part of the compressed air passes through a narrow tube, which cools it on the basis of the Joule-Thomson effect. The other part is passed through a piston arrangement, which cools it further. The cooled gas then passes through the same cycle again.
However, the aim of this system is not to liquefy the gas but only to cool it. With reference to Table 1, the inversion temperature of air is 330 °C and its critical temperature is -140.2 °C. If the temperature before expansion is below the inversion temperature, expansion results in cooling. Cooling of the air can therefore take place, since the gas is initially at room temperature, which is far below 330 °C. The system has to be configured so that only cooling takes place and not liquefaction of the air, which can easily be achieved with a temperature controller. The cooled air can then be passed through the ventilator openings, as shown in figure 3, to achieve even cooling of the underground platform.

IV. DISCUSSION

The system works on the simple principle of the Joule-Thomson effect. The air could be cooled during the night hours, and a sufficient amount of cooled air could be prepared for use in the morning. The system could then be operated again during the afternoon hours, producing cooled air once more. This system is meant only for underground metro platforms and not for elevated ones.

A. Requirements for the realization of the system
1. An apparatus to realize Claude's Process as shown in figure 3, the important parts being a compressor with an ecological cooling agent (for example CS ANTIFREEZE EKO or CS EKOTERM) [4], a throttle, and a piston system to compress the gas.
2. A temperature control system to maintain a temperature that causes cooling but not liquefaction.
3. An exit for the cooled air through the ventilators, as shown in figure 3.
4. A mechanism to store the cooled air produced during the two cycles of operation of the system.

Figure 3: Platforms equipped with Claude's apparatus to expel cooled air through ventilators as shown

B. Advantages of the System
1. Reduced energy consumption.
2. The ability to draw power during off-peak hours, when the rate of energy consumption is low.

C. Interpreting Outcomes
1. Claude's system is an efficient cooling process if all the conditions are satisfied.
2. The obvious outcomes are the result of the Joule-Thomson effect.

D. Future Work and Possible Improvements
1. The system could be used in winter as well. In accordance with the Joule-Thomson effect, the air could be heated in the same manner if its initial temperature is above the inversion temperature of the gas. The system would, however, have to be modified to exploit this side of the Joule-Thomson effect.
2. The system could be installed in the upcoming Phase 3 underground Metro stations. The existing underground stations could also be equipped with these systems.
3. Ways could be developed to mechanize the process, such as improving the compression technique and the storage of the cooled gas.
V. RESULTS
1. An efficient cooling system for underground platforms is achieved by making use of Claude's process, which works on the basis of the Joule-Thomson effect.
2. At present, the chiller plant first cools water, which is then discharged through coils to the air-handling unit; the unit uses the chilled water to produce cool air, which is then used to air-condition the station. The suggested upgrade to the present method makes the air-conditioning faster and more efficient, with a significant reduction in electrical energy consumption.
VI. CONCLUSION
As required, a simple and cheap system to achieve cooling at metro stations has been suggested, which could in certain ways help reduce the shortcomings of the present system. The future of the Delhi Metro could be made greener and better while still being equipped with modern technology. The main objective of this system is to make our Metro not only passenger-friendly but environment-friendly as well. This is not merely a vision to be achieved; the Metro is already working towards it in many ways, and this system is only another way to take the initiative forward. The need of the hour is a green earth whose inhabitants are environmentally sound.
REFERENCES
[1] http://en.wikipedia.org/wiki/Liquefaction_of_gases
[2] http://www.britannica.com/EBchecked/topic/306635/Joule-Thomson-effect
[3] http://chemistry-desk.blogspot.in/2011/06/claudesprocess.html
[4] http://www.classic-oil.cz/en/ecological-coolant
[5] http://en.wikipedia.org/wiki/Delhi_Metro
[6] Radhey Sham, T. K. Jindal, and B. S. Pabla, "Cryogenic Processes - A Review", I.J.E.S.T., Jan. 2011.
[7] Paul J. Gans, "Joule-Thomson Expansion", Sep. 1992.


Li-Fi: Visible Light Communication

1Amrita Kaul, 2Divya Punyia

1Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
2Ph.D Scholar, NIT Kurukshetra, Haryana, India
E-mail: amrita.kaul@niecdelhi.ac.in, dpunia92@gmail.com
Abstract — When using wireless internet at a public spot, one has probably been frustrated by the slow speed of the web when many devices are connected to a single router. Dr. Harald Haas has come up with a solution he calls "data through illumination". Li-Fi, or Light Fidelity, is a 5G visible light communication system that uses light-emitting diodes as a medium for high-speed data transfer. Li-Fi is a new generation of high-intensity, solid-state light source that delivers clean lighting solutions for general and specialty lighting. The concept of Li-Fi is data communication over rapid flickering of light that cannot be distinguished by the human eye but is focused on a photodetector, which converts the on-off states into binary digital data.

Keywords — LED (Light Emitting Diode), Wi-Fi (Wireless Fidelity), Li-Fi (Light Fidelity), VLC (Visible Light Communication), RF (Radio Frequency).

I. INTRODUCTION
The idea of Li-Fi is presently attracting a good deal of interest, not least because it offers a genuine and very efficient alternative to RF. As a growing number of people and their devices access the wireless internet, the airwaves are becoming more and more congested, and free bandwidth is not available for every device, making it harder and harder to get a reliable, high-speed signal. The chance to exploit a completely different part of the spectrum is therefore extremely appealing. Li-Fi has several advantages over Wi-Fi; for example, it is safe to use at nuclear power plants and thermal power stations, where Wi-Fi cannot be used [1]. In such stations RF waves can be harmful and can cause accidents, so only the visible light spectrum is safe for communication in those regions. Li-Fi is available wherever there is light, removing the need for hotspots at selected places. There are four criteria on which the working of Li-Fi and Wi-Fi can be judged: capacity, efficiency, availability and security. Both Li-Fi and Wi-Fi use the electromagnetic spectrum for data transmission; however, while Wi-Fi uses radio waves, Li-Fi uses visible light communication, with data rates of the order of 100 Mbps. The present paper deals with VLC, which can provide wide and fast data rates of around 500 Mbps. A comparison is made between Wi-Fi and Li-Fi technology, and the paper also examines the working, usage and improvements of Li-Fi technology.

II. STANDPOINTS OF LI-FI

Fig. 1 Li Fi Communication with other devices

Figure 1 demonstrates how the Li-Fi cloud communicates with other devices. Li-Fi uses visible light rather than gigahertz radio waves. Currently there are about 1.4 million base stations, which consume a great deal of energy at an efficiency of under 5 percent, while a total of around 5 billion mobile phones transfer more than 600 terabytes of data every month, which shows the extent to which wireless has become a utility. Li-Fi is free of the complex network of wires and boxes that a Wi-Fi installation requires. It is a digital framework that translates the classic binary language of zeros and ones into light pulses, off or on respectively, by switching tiny LED bulbs on and off millions of times each second. The pioneers of data transmission through blinking LEDs can, theoretically, create wireless internet access with data transmission rates of nearly 10 Gbit/s,


permitting a high-definition film to be downloaded in 30
seconds, which is 250 times faster than superfast
broadband. These advantages come at speeds fivefold what
fibre-optic lines at present offer; to profit from this
technology requires a light source (which can fit
efficiently and effortlessly into any ordinary electric
bulb) that is capable of radiating the binary signal.
[12][15]
III. WORKING OF LI-FI

Fig. 2: Data communication

Fig. 3: Association of the web with an LED

Figure 2 depicts how the binary data is captured: only a few light receptors are required, and they can be fitted on a wide range of connected devices, from PCs and tablets to phones, TVs and appliances. Subject-matter experts explain that the light pulses are imperceptible to the human eye and cause no harm or discomfort of any kind. Furthermore, any light or spotlight can become a hotspot. How Li-Fi functions is straightforward: there is a light at one end (an LED) and a photodetector (light sensor) at the other. If the LED is on, the photodetector registers a binary one; otherwise it registers a binary zero. Flash the LED enough times and you build up a message. Use an array of LEDs, and perhaps a few different colours, and soon you are dealing with data rates in the range of hundreds of megabits per second. This is accomplished by flickering the LED lights to create binary code (on = 1, off = 0) at rates higher than the human eye can detect. The more LEDs in your lamp, the more data it can handle [10].
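The on = 1 / off = 0 scheme just described can be modelled in a few lines. This is only an illustration of the idea; the encode/decode names are our own and no real Li-Fi modulation standard is implied.

```python
# Illustrative on-off keying: LED on = 1, LED off = 0 (names are ours).

def encode(message: str) -> list[bool]:
    """Turn a text message into a sequence of LED on/off states."""
    bits = ''.join(f'{byte:08b}' for byte in message.encode('ascii'))
    return [bit == '1' for bit in bits]   # True = LED on, False = LED off

def decode(states: list[bool]) -> str:
    """Photodetector side: map on/off states back to bytes, then text."""
    bits = ''.join('1' if on else '0' for on in states)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('ascii')

states = encode("Li-Fi")
print(decode(states))  # -> Li-Fi
```

A real link would clock these states out through an LED driver and recover them with a photodetector and threshold circuit; the mapping between bits and light states stays exactly this simple.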
Figure 3 demonstrates a basic connection of the web to an LED, with the data recovered on a PC. One LED transfers data at a slower rate, so a huge number of micron-sized LEDs are fitted in the bulb. Reducing the size of the LEDs does not diminish their capacity to transfer data or power; on the contrary, it increases the efficiency with which one lamp can transmit data, at unexpectedly higher rates.

Moreover, these miniaturized LEDs are ultimately just pixels, and at one micron they would be far smaller than those in a smartphone's retina display. One could have an immense array of these LEDs that doubles as a room's light source and a display, and provides networking capability on the side. Perhaps a next-generation console would communicate with your gamepad, smartphone and other peripherals by means of a Li-Fi-equipped TV. It could likewise provide street lighting that illuminates the road, gives real-time traffic information and notices, and provides web access to your car and most of the devices on board.
Transmission of data is done by a single LED or multiple LEDs through visible light, as shown in figure 3. On the receiver side there is a photodetector, which converts this light into electrical signals that are passed to the device connected to it. Voltage regulator and level shifter circuits are used on both sides to convert or maintain the voltage level between transmitter and receiver. The Li-Fi source has a very high luminous intensity, i.e. the amount of light emitted per second per unit solid angle from a uniform source. A single source only a few millimetres in size can deliver 2300 lumens of bright white light. In most cases only one light source per street light will be needed, which makes the mechanical and optical implementation of the light much simpler and less costly.
IV. APPLICATIONS
Airways: We face problems with communication media while travelling by air, because all airline communications are performed on the basis of radio waves. This drawback can be overcome by using Li-Fi technology.


Green data technology: Li-Fi produces no side effects on any living thing, unlike radio waves and other communication waves, which affect birds, human bodies, and so on.
Free from frequency bandwidth problems: Li-Fi is a visible light communication medium, so it does not require any kind of spectrum licence, i.e. no payment is needed for communication or licensing.
Smarter power plants: Power plants require fast, interconnected data systems to monitor things like grid integrity, demand and (in nuclear plants) core temperature. Wi-Fi cannot work properly in these areas, which are sensitive to radio frequency, as in petrochemical plants. Li-Fi can work properly in these sensitive areas and also saves money.
Increased communication security: Light cannot penetrate walls, so in visible light communication security is higher than in any other communication technology.
V. COMPARISON WITH WI-FI

CHARACTERISTICS        WI-FI                LI-FI
FREQUENCY              2.4-5 GHz            No frequency for light
RANGE                  100 meters           Based on LED light falling
DATA RATE              11 Mbps              >1 Gbps
COST                   Medium               Low
SECURITY               Medium               High
STANDARD               IEEE 802.11b         IEEE 802.15
OPERATING BAND         RF band              Visible light
WORKING CONCEPT        Various topologies   Direct binary data
PRIMARY APPLICATION    WLAN                 Serving wherever light is there

VI. CONCLUSION

In this paper, an overview of Li-Fi technology has been given. From this 5G Li-Fi technology, we can see that Li-Fi is an advanced approach to design, offering one of the best designs for internet access: it greatly reduces the amount of equipment needed to transfer data; in terms of deployment, more than 1.4 million lights all over the world could be replaced by such LEDs to provide feasible access; and, last but not least, it has enormous applications in various fields compared with other networks, beyond what can be imagined with the networks in use today. Although there are some disadvantages, they can be eliminated through careful further research. Li-Fi is a step forward in the world of ever-growing demand for communication; it is safe for all biodiversity, including people, and moves us towards a greener, cheaper and brighter future of technology.
REFERENCES
[1]. Nitin Vijaykumar Swami, "Li-Fi (Light Fidelity) - The Changing Scenario of Wireless Communication", International Journal of Research in Engineering and Technology 4.3 (2015): 435-438.
[2]. Mushreq Abdulhussain Shuriji, "An Extensive Comparison of the Next Generation of Wireless Communication Technology: Light-Fidelity (Li-Fi) versus Wireless-Fidelity (Wi-Fi)", GSTF Journal on Media & Communications 2.1 (2014).
[3]. I. S. Jacobs and C. P. Bean, "Fine particles, thin films and exchange anisotropy", in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271-350.
[4]. R. Singh, T. O'Farrell, and David, "Color Intensity Modulation for Multicolored Visible Light Communications", IEEE Photonics Technology Letters 24.24 (2012): 2254-2257.
[5]. R. P. Gilliard et al., "Operation of the LiFi Light Emitting Plasma in Resonant Cavity", IEEE Trans. Plasma Sci. 39.4 (2011): 1026-1033.
[6]. S. H. Lee and J. K. Kwon, "IEEE 802.15.7 Visible Light Communication: Modulation Schemes and Dimming Support", IEEE Communications Magazine 50.3 (2012): 72-82.
[7]. V. Jungnickel et al., "A Physical Model of the Wireless Infrared Communication Channel", IEEE J. Select. Areas Commun. 20.3 (2002): 631-640.
[8]. B. Narmada, P. Srinivasulu, and P. Prasanna Murali Krishna, "High Efficient Li-Fi and Wi-Fi Technologies in Wireless Communication by Using Finch Protocol", IJETT 12.5 (2014): 219-223.
[9]. M. Audeh and J. Kahn, "The New Era of Transmission and Communication Technology: Li-Fi (Light Fidelity) LED & TED Based Approach", International Journal of Advanced Research in Computer Engineering & Technology 3.2 (2014).
[10]. Masaki Waki et al., "Optical Fiber Connection Navigation System Using Visible Light Communication in Central Office with Economic Evaluation", IEICE Transactions on Communications E95.B.5 (2012): 1633-1642.
[11]. Zhitong Huang and Yuefeng Ji, "Design and Demonstration of Room Division Multiplexing-Based Hybrid VLC Network", Chinese Optics Letters 11.6 (2013): 060603.


Pre-existing EDA Tools Limitations and Mixed Modeling Approach for IP Based SoCs

Abhishek Anand, Taran Aggarwal, Medha Chhillar Hooda

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: abhishekkanand@yahoo.in, taran3aggarwal@gmail.com, medha.chhillar@gmail.com

Abstract — The concept of SoCs (System-on-Chip) came into existence once the demand for more powerful devices was felt. The main considerations while designing an SoC are power consumption, speed, timing and area. These demands are contradictory, as the speed is to be enhanced while the area is reduced. This has increased the complexity of the circuit, and hence verification is the top-level challenge in the development of an SoC. The verification of an SoC involves multiple levels: IP-level verification, chip verification and hardware-software co-verification. The last level, hardware-software co-verification, holds major importance. A few tools have been provided by EDA vendors for verification. In this paper we analyse the architecture of co-verification and the weaknesses of some of the pre-existing EDA tools. We also develop a mixed verification strategy on a platform designed by the user.

in the figure. As we saw, there are a large number of components fabricated on an SoC, which has increased its complexity, both in terms of design and in terms of verification. When designing an SoC, different components are placed and connected; but when it comes to verifying that chip, the task becomes more difficult, because the functionality of a very diverse system has to be verified [1].
When using the IP cores provided by an IP vendor, their correctness has to be verified in advance by the system designer. This separate verification of the different IP cores consumes a lot of time, approximately 70% of the whole design period.
There is a difference between the verification of an SoC and that of an ASIC. An SoC contains CPUs and memory, which an ASIC does not have, and based on the CPU the system also has software. So beyond the hardware, the verification of an SoC needs to involve the software as well. This verification process is known as hardware/software co-verification.

Keywords — SoC (System-on-Chip); FPGA (Field Programmable Gate Array); EDA (Electronic Design Automation); IP core; verification; chip tape-out (chip ready for production after final testing)

I. INTRODUCTION

The semiconductor industry has seen a lot of changes in the field of integrated circuits. The number of gates that can be fabricated on a chip has increased substantially, while the size of integrated circuits keeps shrinking. The concept of the system-on-a-chip (SoC) is now in practice. An SoC contains one or more processors along with other digital peripherals such as memory, a bus-based architecture, coprocessors, I/O channels, timing sources and external interfaces. All these functional blocks are called IP cores. The SoC architecture is shown later

Fig 1- Architecture of SoC [1]

The figure describes the architecture of an SoC: it shows the way the various components are connected through buses [1].
There are several co-verification EDA (Electronic Design Automation) tools provided by different EDA

vendors, but there are some problems with these tools. This paper aims to point out the problems in the existing EDA tools and to develop a mixed verification strategy on a platform designed by the user.

II. VERIFICATION
There are two kinds of verification methods that can be used to verify the functionality of any digital circuit: dynamic functional simulation and STA (Static Timing Analysis).
(A) Dynamic Functional Verification:
In dynamic functional simulation, a testbench is written that defines the external stimuli of the DUT (Design Under Test), which are then applied to the DUT as inputs. After simulation, the generated waveforms are checked for the correctness of the design [2].
(B) STA (Static Timing Analysis):
This is another method to check the correctness of the design, in terms of timing constraints. STA tools calculate the setup time, hold time and timing paths of the inner logic of the design, and the results thus obtained are compared with timing-analysis script files created in advance. The result tells us whether the design fulfils the timing specifications or not [3].
An SoC can be verified by the two methods discussed above. Generally speaking, SoC verification involves three levels: IP-separate verification, whole-chip verification and HW/SW co-verification.
2.1) IP-Separate Verification:
IP vendors provide many IP cores that are of general use. In IP design, the common design flow starts at the RTL level. The design involves three levels, i.e. RTL level, gate level and physical level, so IP verification covers all three levels.
Dynamic simulation is used to check the design at the RTL level to determine the correctness of the logic function. We can get the netlist after compilation; the netlist gives us the delay information of the circuit. We simulate the IP core to judge its correctness and use an STA tool to see whether the timing constraints are met or not. In case the timing is not met, we re-simulate the design, and this cycle continues.
Fig. 2: IP-separate simulation [4]

BFMs (Bus Functional Models) are created for the chip cores to substitute for them while the IP-separate simulation is running, because most of the inputs and outputs of the IP cores are the inputs and outputs of the chip, and some of them are the interconnect interfaces of the IP cores [5].
2.2) Whole Chip Verification:
Even after performing separate IP verification successfully, there are chances that the circuit will not work when combined, so we need to carry out full-chip verification. Full-chip verification is also carried out with dynamic simulation and STA tools [5].
2.3) Hardware-Software Co-verification:
This is the most important part of SoC verification. SoCs have both software and hardware embedded in them. In the past, hardware and software verification were separated, as their platforms were totally different: software verification was done after the chip tape-out (the final stage of chip development, when the chip is ready for production after final testing). This increased the design time; there was also the possibility that, if the software verification failed, the chip would have to be redesigned from scratch. The best solution to this problem appeared to be hardware/software co-verification. HW/SW co-verification makes both verifications run simultaneously on the same platform. This shortens the verification time, and bugs can be detected at an early stage, which saves verification time [5].
III. ARCHITECTURE OF CO-VERIFICATION
Figure 3 describes the architecture of HW/SW co-verification, which gives a platform for the verification to be carried out. There are tools


that can be used to carry out the verification, for example ModelSim (a hardware debugging tool) and the XRAY debugger (a software debugging tool). The architecture includes a co-simulation kernel, a debugger interface, bus interface models and memory models. The co-simulation kernel controls the communication between the hardware simulation and the software simulation; both simulations are carried out simultaneously. HW/SW co-verification can be carried out either in a pure software environment provided by the EDA vendors or on a hardware platform designed by oneself [6].

Fig. 3: HW/SW co-verification architecture [4]

IV. PROBLEMS WITH PRE-EXISTING EDA TOOLS
There are many co-verification environments provided by EDA vendors, but some restrictions limit their use:
- These EDA tools are very expensive, which increases the verification cost; one must also acquire special skill sets to use them.
- Each of these EDA tools has a different feature set and requires suitable models of the IP cores defined by itself, without an accepted standard. It is also not possible for an IP vendor to develop custom solutions for each EDA vendor, so compatibility is restricted and should be improved in the future [7].

V. MIXED VERIFICATION METHOD
In the mixed verification method, we do the IP-separate verification and the whole-chip verification in software tools such as Design Compiler or PrimeTime, and for co-verification we develop a mixed verification strategy.
In the mixed verification method, we customize a debugging PCB in advance. The PCB contains an FPGA, a test chip for the CPU and other functional chips. Because the FPGA is programmable and takes a hardware description language (VHDL or Verilog) as its main design source, we download the netlist, which contains the delay information associated with the corresponding manufacturing process, into the FPGA; the FPGA then effectively becomes a full-custom IC. Because the MCU is much more complicated than any other IP core, we prefer a hard macro to a soft macro, so it cannot be downloaded into the FPGA; we therefore use a test chip provided by the IP vendor for this verification, and this test chip is identical to the IP core in both form and functionality.
This method of verification enhances the efficiency and code coverage by approximately 20% [6].

VI. CONCLUSION
The verification of an SoC is done in multiple aspects, i.e. both dynamic simulation and STA are needed. For verification we have different tools provided by the EDA vendors, but due to some weaknesses in them co-verification becomes a difficult task. The mixed modelling method is one of the alternative approaches that can be adopted instead of a software-based co-verification tool.

REFERENCES
[1]. URL: http://electronicdesign.com/dsps/what-s-dealsocverification
[2]. URL: http://www.xilinx.com/support/documentation/sw_manuals/xilinx11/ise_c_simulation_functional.html
[3]. URL: http://asicsoc.blogspot.in/2008/08/dynamic-vs-static-timing-analysis.html
[4]. Chen Wenwei, Zhang Jinyi, Li Jiao, Ren Xiaojun, Liu Jiwei, "Study on a Mixed Verification Strategy for IP-Based SoC Design". URL: http://www.design-reuse.com/articles/16358/a-new-methodology-for-hardware-software-coverification.html
[5]. Chang H, Cooke L, Hunt M, "Surviving the SOC Revolution: A Guide to Platform-Based Design", Kluwer Academic Publishers, New York, 1999.
[6]. URL: http://www.informit.com/articles/article.aspx?p=102613&seqNum=3

Image Processing on Low-Cost Embedded Systems

Utkarsh Gupta, Medha Chhillar Hooda

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: utkarshgupta1871@gmail.com, medha.chhillar@gmail.com
Abstract — This paper deals with a major problem encountered in image processing: the memory (RAM) space required for real-time image processing. Modern robots such as the autonomous drones used by the military, unmanned aerial vehicles and autonomous courier-service robots depend on real-time image processing to perform their tasks. In this text, a brief insight into embedded systems and image processing is provided. Thereafter, the problem of image processing on low-cost embedded systems is elaborated, and a working solution to this problem is proposed.
Keywords — image processing; embedded computer systems; image color depth; frames per second (fps); lag; Raspberry Pi

I. INTRODUCTION

In imaging science, image processing means processing images using mathematical or logical operations, i.e. any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or a video frame. The output of an image-processing operation may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it [1]. Most image-processing platforms (like computer vision libraries) treat the image as a two-dimensional array of pixels in which each pixel is itself a 1x3 array of the colour channels, although the dimension of each pixel depends on the colour model in use, i.e. RGB, HSV, gray, etc.
An embedded system is a small computer system with a single dedicated function (or functions) within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded in the device, often including the necessary hardware and mechanical parts. Embedded systems are deployed in

almost all smart devices in common use today; at present, 98 percent of all microprocessors are manufactured as components of embedded systems [2].
But nothing is perfect, even in the digital world. Embedded systems usually have low specifications that are not enough to fulfil the requirements of image processing, which was realized while working on a courier-service simulation project. The project incorporated a camera for detecting the environment of a courier-service robot: on the basis of the input video stream from the camera, the robot had to decide its next course of action. The robot was supposed to go around an arena (a model of a city), and on the basis of the image captured by the camera it would decide when to move and where to go. The brain of the robot was a Raspberry Pi board, a low-cost credit-card-sized computer running an ARMv6 microprocessor with 512 MB of RAM. This small RAM of the Raspberry Pi board was insufficient for real-time image processing. As a result, the robot suffered from a lot of lag, i.e. there was a delay in the response to the image captured by the robot. For instance, if the robot detected a red traffic signal at an instant 't', it would stop moving at an instant 't+10' seconds; by the time the robot stopped, it had already crossed the traffic signal. Although the same program executed normally on a PC, it suffered from a terrible lag on the Raspberry Pi B+ board used in the project. It appeared that the only solution to this problem was to use a high-end embedded computer with higher technical specifications (such as more RAM and a multicore processor).
But after some research and analysis, a better solution was found using some existing tools of Python (a programming language) and OpenCV (an image-processing module for Python). By the end of this paper, the reader will be able to use a low-cost embedded computer (having a small RAM) for image processing.


II. MEMORY (RAM) REQUIREMENT FOR IMAGE PROCESSING

Before calculating the memory space required to
store an image, it is important to understand how
exactly an image is stored in the computer. Every
image has two important parameters that decide the
amount of memory it requires: resolution
(width and height in pixels) and color depth (number
of bits per pixel). The color depth is expressed in bits
and usually takes the values 16, 24 and 32; it
indicates the number of bits required to store
one pixel of the image. So a 16-bit image (which can
represent 2^16 colors) requires 2 bytes (16/8 = 2 bytes)
of memory per pixel [3]. The total memory
space required by an image is calculated as follows:

M = (P × B) / 2^20        (1)

Here M represents the required memory in MB
(megabytes), P is the total number of pixels in the image
and B is the number of bytes required per pixel. For
instance, an 8-megapixel (3264x2448) 24-bit image
(P = 7990272, B = 3) requires about 23 MB of memory.
This does not look very big, but in real-time image
processing one has to deal with videos at about
30 fps (for standard video quality). This would
require 23 x 30, i.e. approximately 690 MB of RAM
per second. Embedded-system developers, however,
usually incorporate RAM in the range of 128 MB to
512 MB in their products. So a developer is certain to
experience lag unless an expensive embedded system
with at least 1 GB (gigabyte) of RAM is used.
On the other hand, if the video being
processed is of HD quality, it would have a frame rate
of 60 fps, so the same image quality would require
approximately 1.4 GB of RAM.
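The calculation in Eq. (1) can be reproduced with a few lines of Python (the function name here is illustrative, not from the paper):

```python
def image_memory_mb(width, height, color_depth_bits):
    """Memory for one uncompressed frame, per Eq. (1): M = P*B / 2^20."""
    pixels = width * height                  # P: total number of pixels
    bytes_per_pixel = color_depth_bits // 8  # B: bytes per pixel
    return pixels * bytes_per_pixel / 2**20  # bytes -> megabytes

frame_mb = image_memory_mb(3264, 2448, 24)  # the 8 MP, 24-bit example above
print(round(frame_mb))                      # -> 23 (MB per frame)
print(round(frame_mb * 30))                 # -> 686 (MB/s at 30 fps; ~690 when rounding per frame first)
```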
III. THE SOLUTION

The essence of writing a research paper lies in
providing a solution to an existing problem, so this is
the most important part of any research paper. In the
previous section it was seen that the memory space (M)
required to store an image depends on its resolution
and color depth. To reduce the required memory space,
one of these two parameters must be changed. Most image
processing tasks, such as object detection, depend
on the values of all color channels of a pixel, so it is
a prerequisite to keep the color depth unchanged. Any
possible reduction can therefore be made only in the
resolution of the image. This is easily achieved in any
open-source image processing software such as OpenCV
(which is compatible with many languages, including C++
and Python). The function for reducing the resolution in
OpenCV is cv2.resize() for Python and void resize() for
C++ [4]. Upon resizing the above image to a resolution of
1000x1000, the required memory calculated from the
above formula is approximately 2.8 MB for an image and
84 MB for a video of standard quality (frame rate = 30 fps).
This can easily be provided by an embedded system
containing 128 MB of RAM (or more).
Another improvement can be made by deleting
and recreating the container (variable) holding the
image in every iteration of the video capture loop,
instead of merely updating it in every iteration. This is
easily done in modern programming languages; in Python
the required statement is 'del var_name'. As a result,
only one image is stored in RAM at any particular
instant.

IV. EXPERIMENTAL RESULTS

Using the above two methods, it was found that
the lag had completely vanished and there was no
delay in the response of the robot. There was no
visible change in the video quality, except that the size
of the video had shrunk.
These two methods not only optimized the available
memory usage but also reduced the time required for
image processing. For instance, the time required to
display an image was measured before and after changing
the resolution of the image. For this measurement, the
time.time() function of Python was used, and it was found
that the time taken to display an image was reduced by
approximately 28% after using these two approaches.
Fig. 1. Python Shell showing time taken before resizing the image


Fig. 2. Python Shell showing time taken after resizing the image

REFERENCES
[1] https://en.wikipedia.org/wiki/Image_processing
[2] https://en.wikipedia.org/wiki/Embedded_system
[3] http://kias.dyndns.org/comath/44.html
[4] http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#resize


Analysis of C-Band Erbium-Doped Fiber Amplifier for WDM Network

1Monika Jain, 2Shrinikesh Yadav
1ECE Dept., Northern India Engineering College, New Delhi, India
2ABES Institute of Technology, Ghaziabad, Uttar Pradesh, India
E-mail: monika_j_89@yahoo.in, shrinikeshin@gmail.com

Abstract: As the world grows with technology,
many more devices are being invented for long-distance
communication. Optical fiber is one of the
most widely used media for communication.
Optical fiber has many distinct advantages over
the conventional copper wire system, such as wider
bandwidth, better signal-to-noise ratio and lower
signal losses. Unlike the earlier technology,
which used electrical amplifiers, optical
amplifiers have been used for amplifying the signal
over the last decades. Optical amplification is
not only cost-effective but also minimizes the
losses and gives better amplification of the signal. With
the advancement of technology in recent years, the
erbium-doped fiber amplifier (EDFA) has
emerged, enabling optical signals in an
optical fiber to be amplified directly at very high
bit rates, beyond terabits. Optical amplifiers
use pumping techniques for the amplification of the
signal, which introduce various impairments; one of
them is amplified spontaneous emission (ASE) noise. Fiber
loss is a fundamental limitation in realizing long-distance
communication over optical fiber. In this
paper, we calculate the gain and noise figure over the
C-band (1525-1565 nm) using OptiSystem
11.0 (a licensed product of a Canadian-based
company).
Keywords: image processing; embedded computer
systems; image color depth; frames per second (fps); lag;
Raspberry Pi

I. INTRODUCTION

Optical communication has many advantages over
conventional copper wire communication, such as
enormous bandwidth, better signal-to-noise ratio,
better gain and less noise. Due to these advantages,
optical fiber is widely used for long-haul
communication. In its early days,
optical fiber systems used electrical amplifiers, which
convert an electrical signal into an amplified electrical
signal. Electrical-to-electrical amplification was not only
bulky but also less efficient in operation. Over
the last decades, the scenario of optical amplification
has completely changed. Three types of
amplifiers are now mainly used in optical fiber systems:
optical amplifiers, semiconductor optical
amplifiers (SOA), and doped-fiber amplifiers (EDFA and
Raman amplifiers). These amplifiers changed the
concept of repeaters in optical communication. Since
its introduction in 1986, the erbium-doped fiber
amplifier (EDFA) has made tremendous changes,
resulting in the almost complete replacement of
repeater-based schemes. The EDFA is an optical repeater
device that is used to boost the intensity of optical
signals being carried through a fiber-optic
communications system. An optical fiber is doped
with the rare-earth element erbium so that the glass
fiber can absorb light at one frequency and emit light
at another frequency. An external semiconductor
laser couples light into the fiber at infrared
wavelengths of either 980 or 1480 nanometers, which
excites the erbium atoms. Additional optical
signals at wavelengths between 1530 and 1620
nanometers enter the fiber and stimulate the excited
erbium atoms to emit photons at the same wavelength
as the incoming signal. This amplifies a weak
optical signal to a higher power, effecting a boost in
the signal strength. The use of fiber optics in the 1980s
required the light signals to be converted back into
electronic signals at the data's final destination. The EDFA
removes this step from the process: all the steps of its
operation are actions of photons, so there is no
conversion of optical signals to electronic signals.
Erbium had little commercial use before the age of
fiber-optic telecommunications; now it is an
important constituent of signal repeaters in long-distance telephone cables.

A. Structure of the erbium-doped fiber amplifier (EDFA)


Fig. 1. Basic structure of EDFA amplifier

Basically, EDFA technology uses erbium-doped fiber
(EDF), which is conventional silica fiber doped with
erbium. When the erbium is illuminated with light energy
at a suitable wavelength (either 980 nm or 1480 nm), it
is excited to a long-lifetime intermediate state and
then decays to the ground state by emitting light
within the band of 1525-1565 nm. If light energy
already exists within the 1525-1565 nm band, for
example due to a signal channel passing through the EDF,
it stimulates the decay process (so-called
stimulated emission), resulting in additional light
energy [2]. Thus, if a pump wavelength is
simultaneously propagating through an erbium-doped
fiber, energy transfer will occur via the erbium from
the pump wavelength to the signal wavelength,
resulting in signal amplification. The erbium-doped
fiber amplifier can be better understood using a
three-level energy diagram. The important
characteristics of the amplifier can be analyzed from
this simple model and its assumptions. Fig. 2 shows
the electron transition diagram of the erbium-doped fiber
amplifier.

The transition rates between energy levels 1 and 3 are
proportional to the populations in those levels and to the
product of the pump flux and the pump cross-section; the
transition rates between the signal levels can be found
from the product of the signal flux and the signal
cross-section. The spontaneous transition rates are also
indicated in the figure. Three energy levels can be seen:
the ground state is denoted by level 1, the intermediate
pump state by level 3, and between these two lies level 2,
whose long electron lifetime is responsible for good
amplification. The electron populations are represented by
n1, n2 and n3. Population inversion between levels 2 and 1
must be achieved for amplification; since level 1 is the
ground state, better amplification requires population
inversion at level 2, and pumping methods are needed to
obtain it. Basically, three types of pumping are used in
the erbium-doped fiber amplifier: forward pumping,
backward pumping and bidirectional pumping. Different
pumping methods affect the amplification differently, so
by using the appropriate pumping method in each situation
we can achieve better gain, noise figure and signal
amplification.

Fig. 2. Energy diagram of EDFA amplifier.

Fig. 3. Different pumping techniques in EDFA.

II. SIMULATION OF GAIN AND ASE

For the simulation of the gain and noise figure of the
erbium-doped fiber amplifier, OptiSystem 11.0 is used.
The parameters used for the simulation are shown in
Fig. 4.

Fig. 4. Simulation of gain by using OptiSystem [8].

III. RESULTS AND DISCUSSION

The variation of gain with fiber length is evaluated using
OptiSystem for different pump powers at constant signal
input power and doping. The gain of the amplifier varies
with the pump power: for a given fiber length, the gain
first increases roughly exponentially up to a certain
point and then saturates beyond a certain level of pump
power.

Fig. 5. Gain vs length of fiber for fixed pump power.

The output pump power is also measured as a function of
fiber length at constant input power. We find from the
figure that the output power increases as the pump power
increases. The optimum value of the fiber length is found
to be 7 m; the variation of gain with fiber length can be
investigated to find the fiber length giving the best
gain.

Fig. 6. Output pump power vs fiber length for different input pump power.

From Fig. 7, which shows the variation of gain with input
power for different fiber lengths, we find that the gain
increases as the pumping power increases.

Fig. 7. Gain vs input power for different lengths of fiber.

The variation of the noise figure plays a very important
role in determining the characteristics of the
erbium-doped fiber amplifier (EDFA). Here we investigated
the noise figure over wavelength for different levels of
input power.

Fig. 8. Noise figure (NF) vs wavelength for different pump power.


From Fig. 8 we find that with the increase in input
power and wavelength the noise figure decreases, which
shows that the signal-to-noise ratio decreases,
indicating the dominance of noise at higher power
and higher wavelength in the erbium-doped fiber amplifier
(EDFA). Similarly, using OptiSystem, we evaluated
the noise figure with respect to the length of fiber and
found that, for a fixed length of fiber (say 10 to 50)
and different levels of input pumping power, the noise
figure increases almost linearly.
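The qualitative behavior described above, gain rising with pump power and then flattening out, can be sketched with a toy saturating model. This is purely illustrative: the functional form and the constants g_max_db and p_half_mw are assumed example values, not the paper's OptiSystem parameters.

```python
def toy_gain_db(pump_mw, g_max_db=35.0, p_half_mw=5.0):
    """Toy saturating gain curve: rises with pump power, levels off at g_max_db."""
    return g_max_db * pump_mw / (pump_mw + p_half_mw)

# Gain grows monotonically with pump power but never exceeds g_max_db.
for p in (1, 5, 20, 100):
    print(p, round(toy_gain_db(p), 1))
```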

IV. CONCLUSION

In this paper, we investigated the optical fiber
amplifier (EDFA) with different parameters using
OptiSystem. The performance characteristics of the
erbium-doped fiber amplifier were investigated for the
C-band, pumped at 980 nm, and the gain and noise figure
were simulated. We found that, for a fixed value of input
pump power, the gain decreases as the length of fiber
increases; the maximum gain can be obtained by operating
the EDFA in the saturation region. Similarly, while
investigating the noise figure, it was found that
increasing the wavelength and pump power decreases the
noise figure, so the minimum noise figure can also be
obtained when the erbium-doped fiber is operated in the
saturation region.

REFERENCES
[1] G. P. Agrawal, Fiber-Optic Communication Systems, John Wiley & Sons, New York, 1997.
[2] M. A. Othman, M. M. Ismail et al., "Erbium doped fiber amplifier (EDFA) for C-band optical communication system," IJET-IJENS, vol. 12, no. 4, August 2012.
[3] A. Cem Cokrak, Ahmet Altuncu, "Gain and noise figure performance of erbium doped fiber amplifier (EDFA)."
[4] Banza O. Rasid, Perykhan M. Jaff, "Gain and noise figure performance of erbium doped fiber amplifiers at 10 Gbps," 2006.
[5] J. M. Senior, Optical Fiber Communications, third edition, 2010.
[6] Bo-Hun Choi, Hyo-Hoon Park, Moojung Chu and Seung Kwan Kim, "High gain coefficient long wavelength erbium doped fiber amplifier using 1530 nm band pump," IEEE, 2001.
[7] R. Deepa, R. Vijaya, "Influence of bidirectional pumping in high power EDFA on single channel, multichannel and pulsed signal amplification."
[8] Optiwave Corporation, component library, 2012.
[9] Paul Urquhart, Oscar Garcia Lopez, "Optical Amplifiers for Telecommunication," IEEE, 2007.
[10] Gerd Keiser, Optical Fiber Communications, McGraw Hill International Edition, 2010, pp. 398-421.
[11] J. Hansryd, P. A. Andrekson, M. Westlund, J. Li, P. O. Hedekvist, "Fiber-Based Optical Parametric Amplifiers and Their Applications," IEEE J. Selected Topics in Quantum Electronics, 8(3), 506-520, 2002.
[12] E. Desurvire, Erbium-Doped Fiber Amplifiers: Principles and Applications, Wiley, 1994.
[13] Dr. Neena Gupta, Inderpreet Kaur, "A Novel approach for performance improvement of DWDM systems using TDFA-EDFA configuration," IJECT, vol. 1, no. 1, December 2010.


Variability Analysis of Binary to Reflected Code Converter at 16-nm Technology Node

1Pragya Srivastava, 2Neha Sharma, 3Deepali Jain
1ECE Dept., Northern India Engineering College, New Delhi, India
2Uttarakhand Technical University, Dehradun, India
3Dr. M. C. Saxena Group of Colleges, Lucknow, India
E-mail: 1Pragyasrivastava.19jun@gmail.com, 2deeptiyadav8@yahoo.co.in, 3india.deepali@rediffmail.com

Abstract: CMOS technology scaling, driven by the
benefits of integration density, higher speed of operation
and lower power dissipation, has crossed many barriers
over the past 40 years. Due to aggressive technology
scaling, it is now facing even more obstacles, which are
more severe than earlier. One of them is variability,
which is becoming a metric of equal importance to
power, delay, and area. This paper carries out a
variability analysis of various exclusive-OR circuits at
the transistor level in terms of propagation delay, average
power and power-delay product (PDP) at the 16-nm
technology node. The objective of this analysis is to
determine the circuit with the least variability of PDP.
Finally, it implements the best XOR circuit in the emerging
CNFET technology to achieve even better results in
terms of propagation delay and PDP. The proposed
CNFET-based implementation of the XOR circuit offers a 3.7x
improvement in PDP compared to its CMOS
counterpart. The best XOR circuit, with the least variability
of PDP, is then used to realize a binary to Gray code
converter.

Keywords: variability; propagation delay; power-delay
product (PDP); binary to Gray code converter.

I. INTRODUCTION

An exclusive-OR gate acts as a buffer when one of its
inputs is low; on the other hand, it acts as an inverter
when one of its inputs is high. Therefore, the XOR gate
is used as a controlled inverter. The XOR gate implements
the odd function.
The XOR gate finds application in various logic
circuits such as shift registers, parity generators/checkers
for error detection and correction, and Gray-to-binary/binary-to-Gray
code converters [8]. The XOR gate is
an integral component of arithmetic circuits such as the
full adder and the multiplier. It is also extensively used in
compressors, comparators and the phase detector circuit
in a PLL [6]. Circuits like full adders are especially
critical, as they play an important role in many useful
operations like multiplication and division; they are part
of the critical path and thereby influence the overall
performance of the entire system.
Desired features for the design of an XOR cell are a
small number of transistors and low power dissipation.
NAND/NOR functions have a compact implementation in the
well-established CMOS technology [7]. However, XOR
circuits have various realizations. For instance, a
direct realization of an XOR function using static
CMOS logic requires 12 MOSFETs [2]. Circuits
based on pass transistors can be a solution to this
problem. Since the XOR circuit is the basic building block
of many useful circuits, proper selection of this
circuit can enhance the performance of the larger
circuits that it is part of. Therefore, it
is extremely desirable to select an XOR circuit with
optimum design metrics. What is meant by optimum
design is to avoid degradation of the output level and to
have lower propagation delay (tP) and PDP [5].
This paper makes the following contributions:
- It analyzes 5 different XOR circuits, their output
  levels, and the variability of parameters like tP,
  average power (PWR), and PDP.
- It implements the XOR circuit in the emerging CNFET
  technology and carries out its variability analysis.
- It realizes a binary to Gray code converter with the
  best XOR circuit and carries out its variability
  analysis.
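Since the binary-to-Gray conversion realized later in the paper is a column-wise XOR of adjacent bits, its logic can be sanity-checked in a few lines of Python (a functional check only, not the transistor-level circuit):

```python
def binary_to_gray(b: int) -> int:
    """Reflected (Gray) code: each output bit is the XOR of adjacent input bits."""
    return b ^ (b >> 1)

# XOR as a controlled inverter: a 0 input buffers the other input,
# a 1 input inverts it.
assert 0 ^ 1 == 1 and 1 ^ 1 == 0

# Consecutive Gray codes differ in exactly one bit position.
for i in range(63):                               # 6-bit range, as in Section VI
    diff = binary_to_gray(i) ^ binary_to_gray(i + 1)
    assert diff != 0 and diff & (diff - 1) == 0   # exactly one bit set

print(format(binary_to_gray(0b101100), '06b'))    # -> 111010
```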

The remainder of this paper is organized as follows.
Various XOR designs are briefly reviewed in Section
II. Section III describes the simulation results for a
fixed supply voltage. Variability analysis results
are discussed and compared in Section IV.
Implementation of the best XOR circuit using CNFETs is

brought up in Section V. The simulation setup and
framework of a new 6-bit binary to reflected (Gray)
code converter is presented in Section VI. Finally, the
conclusion of the paper appears in Section VII.
II. ANALYSIS OF VARIOUS XOR GATES

The XOR circuit presented in [5] (Fig. 1) is based on the
GDI technique. This technique maintains low complexity
of the logic design and allows reduced power
consumption, propagation delay and area [9].
However, the GDI-based XOR circuit gives a bad '1' for
the '10' (AB = 10) input pattern. At the '10' input
condition, P1 turns ON unlike N1; at the same time P2
turns OFF but N2 conducts, which connects the XOR output
to VDD via N2 and leads to a bad '1' due
to the Vtn2 drop across N2. Moreover, its voltage level
remains Vtp2 above the ground level for the '00' input
pattern, as P2 stops conducting when the XOR output
falls just below Vtp2.
The XOR circuit shown in Fig. 2 uses pass-transistor logic
with an output buffer. Here transistors act as switches
that pass logic levels between nodes of the circuit, rather
than as switches connected directly to the supply voltages.
This can reduce the number of active
devices, but at the same time it has the disadvantage
that the difference between the high and
low logic levels decreases at each stage. The PTL XOR
cell [8] provides good output voltage levels for all
input patterns except the '11' input pattern. For the
'11' input pattern, P1 and P2 turn OFF while N1 and N2
turn ON; due to the Vtn2 (or Vtp1) drop, node 1
settles at a potential Vtn lower than VDD, thereby
leaving the XOR output slightly above 0 V.

The 7T XOR circuit [1] (Fig. 3) comes up with a
solution to the threshold-voltage loss and achieves a
full voltage swing. In other words, unlike the 6T XOR
circuits, its internal nodes have a full voltage
swing, so it shows a perfect response for all
possible input patterns.
The low-power XOR circuit presented in Fig. 4 is based
on an optimized implementation of the XOR function [4].
It employs the high functionality of the PTL style. Though
the circuit has a non-full voltage swing at the output
node, it is characterized by its low power
consumption. This XOR circuit gives a bad '0' for the
'00' input pattern. For '00' inputs, P1 and P2 turn ON and
N1 and N2 turn OFF, thereby disconnecting ground
from the output, and its voltage level remains Vtp2 (or
Vtp1) above the ground level because P2 (or P1) stops
conducting when the XOR output falls just below Vtp2 (or
Vtp1).
The CMOS inverter XOR circuit presented in Fig. 5
contains an inverter, which provides signal-level
restoration and better driving capability, but with the
drawback of extra power consumption. This XOR
circuit [3] yields a bad '0' and a bad '1' for the '00' and
'10' input patterns respectively. For '00' inputs, P1 and P2
turn ON unlike N1 and N2, therefore ground gets
disconnected and the XOR output drops down to a
voltage Vtp2 above the ground, below which
P2 stops conducting, thereby giving a bad '0'.
For the '10' input condition, the XOR output gets connected
to VDD via N2, which leads to a bad '1' due to the Vtn2 drop
across N2.

Fig. 1. GDI based XOR cell [5]

Fig. 2. PTL XOR cell [8]


TABLE I. Simulation results showing tP, PWR and PDP of the XOR circuits

S.No. | XOR Circuit          | tp (s)     | PWR (W)    | PDP (J)    | VDD (V)
1     | GDI XOR circuit      | 2.2849E-09 | 9.5669E-09 | 2.1860E-17 | 0.7
2     | PTL XOR cell         | 2.3106E-09 | 1.5687E-07 | 3.6245E-16 | 0.7
3     | 7T XOR circuit       | 2.3016E-09 | 1.0275E-08 | 2.3649E-17 | 0.7
4     | Low power XOR        | 2.2938E-09 | 6.6038E-10 | 1.5148E-18 | 0.7
5     | CMOS inverter XNOR   | 2.2849E-09 | 9.5669E-09 | 2.1860E-17 | 0.7

Fig. 3. 7T XOR circuit [1]

Fig. 4. Low Power XOR circuit [4]

Fig. 5. CMOS Inverter XOR circuit [3]

III. SIMULATION RESULTS AND DISCUSSION

The circuits discussed above were successfully
simulated in HSPICE using the 16-nm technology node
at a supply voltage of 0.7 V. Simulation results
depicting the values of tP, PWR and PDP are reported in
Table I.

IV. VARIABILITY ANALYSIS OF XOR GATES

Variability analysis of tP, PWR and PDP, with the supply
voltage varied by ±10% using the Monte Carlo
simulation technique, shows the impact of changing
supply voltage on circuit performance.
Design metrics are estimated with Monte Carlo (MC)
simulation using the 16-nm technology node. As per
ITRS 2009, the expected variation in VDD is ±10% in
future technology generations [12]. Hence, most of
the design metrics are estimated by varying the
supply voltage by ±10% around the nominal VDD of
0.7 V. A sample size of 2000 ensures less than
4% inaccuracy in the estimation of the standard deviation
[13]; design metrics in this work are estimated with a
sample size of 5000 to achieve even higher accuracy.
In digital electronics, the power-delay product (PDP)
is a figure of merit correlated with the energy
efficiency of a logic gate or logic family. It is the
product of the average power consumption (PWR) and the
propagation delay (tP). It has the dimension of
energy, and measures the energy consumed per
switching event. Ideally, a circuit should have
minimum PDP.
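The estimation procedure can be sketched in software. This is a hedged illustration, not the paper's HSPICE flow: the alpha-power-law delay model and every constant in toy_metrics are assumed example values, used only to show how variability (std. dev./mean) of PDP is computed from Monte Carlo samples over a ±10% VDD spread.

```python
import random, statistics

def toy_metrics(vdd, vth=0.3, alpha=1.3, c_eff=1e-15, f=1e9):
    """Illustrative delay/power model (assumed constants, not the paper's)."""
    tp = vdd / (vdd - vth) ** alpha * 1e-9   # delay grows as VDD approaches Vth
    pwr = c_eff * vdd ** 2 * f               # dynamic power ~ C * V^2 * f
    return tp, pwr

random.seed(1)
pdps = []
for _ in range(5000):                        # sample size used in the paper
    vdd = random.uniform(0.63, 0.77)         # nominal 0.7 V, varied by +/-10%
    tp, pwr = toy_metrics(vdd)
    pdps.append(tp * pwr)                    # PDP = PWR x tP, per sample

variability = statistics.stdev(pdps) / statistics.mean(pdps)
print(round(variability, 3))
```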

- The GDI XOR gate shown in Fig. 1 is estimated by
  scaling the supply voltage (VDD) from 770
  mV down to 630 mV (nominal VDD ±10%);


  it exhibits low variability of tP as compared to
  PWR and PDP. A comparative study of all
  three parameters can be made by observing
  Tables II, III, and IV.
- The PTL XOR cell shown in Fig. 2 shows low
  variability of tP. A clear observation of Fig. 6
  shows that its variability of PDP ranges from
  0.718 to 0.788, which is lower than that of the
  GDI XOR circuit.
- Though the 7T XOR circuit shown in Fig. 3
  provides a perfect response for all possible
  input patterns, it has the drawback of high
  variability of PDP, which is undesirable. For a
  supply voltage of 0.63 V it reaches 3.02,
  the highest value in the complete
  analysis carried out in this paper. The
  estimated values are reported in Tables V,
  VI, and VII.
TABLE II. GDI XOR circuit - tp

VDD (V) | Std. dev. of tp (s) | Mean of tp (s) | Variability (std. dev./mean)
0.63    | 8.795e-12 | 2.291e-09 | 0.004
0.665   | 8.349e-12 | 2.287e-09 | 0.004
0.7     | 8.109e-12 | 2.285e-09 | 0.004
0.735   | 7.900e-12 | 2.283e-09 | 0.003
0.77    | 7.614e-12 | 2.282e-09 | 0.003

TABLE III. GDI XOR circuit - PDP

VDD (V) | Std. dev. of PDP (J) | Mean of PDP (J) | Variability (std. dev./mean)
0.63    | 7.774e-17 | 2.640e-17 | 2.945
0.665   | 1.149e-16 | 4.151e-17 | 2.768
0.7     | 1.654e-16 | 6.398e-17 | 2.585
0.735   | 2.327e-16 | 9.680e-17 | 2.404
0.77    | 3.208e-16 | 1.439e-16 | 2.23

TABLE IV. GDI XOR circuit - PWR

VDD (V) | Std. dev. of PWR (W) | Mean of PWR (W) | Variability (std. dev./mean)
0.63    | 3.371e-08 | 1.152e-08 | 2.926
0.665   | 4.993e-08 | 1.813e-08 | 2.754
0.7     | 7.197e-08 | 2.798e-08 | 2.572
0.735   | 1.013e-07 | 4.236e-08 | 2.391
0.77    | 1.398e-07 | 6.303e-08 | 2.218

TABLE V. 7T XOR circuit - tp

VDD (V) | Std. dev. of tp (s) | Mean of tp (s) | Variability (std. dev./mean)
0.63    | 1.005e-11 | 2.317e-09 | 0.004
0.665   | 8.987e-12 | 2.310e-09 | 0.004
0.7     | 8.120e-12 | 2.305e-09 | 0.004
0.735   | 7.706e-12 | 2.301e-09 | 0.003
0.77    | 7.571e-12 | 2.298e-09 | 0.003


TABLE VI. 7T XOR circuit - PWR

VDD (V) | Std. dev. of PWR (W) | Mean of PWR (W) | Variability (std. dev./mean)
0.63    | 3.469e-08 | 1.149e-08 | 3.019
0.665   | 5.137e-08 | 1.809e-08 | 2.84
0.7     | 7.378e-08 | 2.799e-08 | 2.636
0.735   | 1.033e-07 | 4.241e-08 | 2.436
0.77    | 1.417e-07 | 6.305e-08 | 2.247

TABLE VII. 7T XOR circuit - PDP

VDD (V) | Std. dev. of PDP (J) | Mean of PDP (J) | Variability (std. dev./mean)
0.63    | 8.043e-17 | 2.663e-17 | 3.02
0.665   | 1.188e-16 | 4.181e-17 | 2.841
0.7     | 1.704e-16 | 6.454e-17 | 2.64
0.735   | 2.385e-16 | 9.765e-17 | 2.442
0.77    | 3.268e-16 | 1.450e-16 | 2.254

- The low-power XOR circuit shown in Fig. 4 has the
  least variability of PDP, and can therefore be
  titled the best of the five circuits under
  consideration. The maximum variability of PDP
  obtained for this circuit is 0.248 (Fig. 7), which
  is lower than that of any of its counterparts.

Fig. 7. Variability analysis of Low power XOR circuit

There is a basic requirement to design low-power
VLSI systems, due to the fast-growing technology in
mobile computation and communication. In the present
technology era, the increasing demand for battery-operated
devices insists on high-speed as well as low-energy
operation [10].

- For the CMOS inverter XOR circuit (Fig. 5), the
  presence of the inverter improves the driving
  capability of the circuit but simultaneously
  suffers from extra power consumption. Study of
  Tables VIII, IX, and X shows that even though its
  variability of tP is low, its variability of PDP
  goes high because of the higher variability of
  PWR.

TABLE VIII. CMOS inverter XOR circuit - tp

VDD (V) | Std. dev. of tp (s) | Mean of tp (s) | Variability (std. dev./mean)
0.63    | 8.795e-12 | 2.291e-09 | 0.004
0.665   | 8.349e-12 | 2.287e-09 | 0.004
0.7     | 8.109e-12 | 2.285e-09 | 0.004
0.735   | 7.900e-12 | 2.283e-09 | 0.003
0.77    | 7.614e-12 | 2.282e-09 | 0.003

TABLE IX. CMOS inverter XOR circuit - PWR

VDD (V) | Std. dev. of PWR (W) | Mean of PWR (W) | Variability (std. dev./mean)
0.63    | 3.371e-08 | 1.152e-08 | 2.93
0.665   | 4.993e-08 | 1.813e-08 | 2.754
0.7     | 7.197e-08 | 2.798e-08 | 2.572
0.735   | 1.013e-07 | 4.236e-08 | 2.391


TABLE X. CMOS inverter XOR circuit - PDP

VDD (V) | Std. dev. of PDP (J) | Mean of PDP (J) | Variability (std. dev./mean)
0.63    | 7.774e-17 | 2.640e-17 | 2.945
0.665   | 1.149e-16 | 4.151e-17 | 2.768
0.7     | 1.654e-16 | 6.398e-17 | 2.585
0.735   | 2.327e-16 | 9.680e-17 | 2.404
0.77    | 3.208e-16 | 1.439e-16 | 2.23

V. NEED FOR EMERGING TECHNOLOGY AND ITS VARIABILITY ANALYSIS

Recent technological developments such as the CNFET
(carbon nanotube FET), the DG MOSFET (double-gate
MOSFET, also known as FinFET) and ultrathin-body
(UTB) devices are the promising technologies of
choice to replace the traditional MOSFET (i.e., bulk
MOSFET) in nanoscale design [12]. Of the
nanoelectronic devices researched to date, the CNFET
seems to have the brightest prospects on account of its
better electronic characteristics. The speed enhancement
from scaling down to the 16-nm and 10-nm technology
nodes has given impetus to its use. A CNT is basically a
long, thin allotropic carbon tube which provides a
single path between source and drain. CNTs are
sheets of graphite rolled into hollow cylinders with
diameters varying from 0.4 nm to 4 nm [15]. Although
CNFET circuits show some imperfections such as
misalignment and diameter and doping variations,
progress in CNFET device technology is promising.
We found that the best circuit to emerge in this
paper, the low-power XOR circuit (Fig. 4), whose
variability analysis in Section IV shows the least
variability of propagation delay and PDP, could also be
implemented using CNFETs instead of MOSFETs. The leakage
current, high-field effects, short-channel effects and
lithographic limits associated with the MOSFET are
largely taken care of in the CNFET [16]. The power-delay
product of the CNFET-based low-power XOR circuit is up to
3.5 times better than that of the CMOS MOSFET-based
low-power XOR circuit, which is similar to previous
research results in the literature. Moreover, variability
analysis shows that the variability of PDP for the
CNFET-based low-power XOR circuit (Fig. 8) is lower than
that of the MOSFET-based low-power XOR circuit (Fig. 7).
This is because the conducting channel of a CNFET is made
of carbon nanotubes: the intrinsic delay (CV/I) of CNFETs
is very low, and they show higher electron mobility than
bulk silicon, therefore providing a better power-delay
product.

Fig. 8. Variability Analysis of CNFET based XOR circuit

CNTs exhibit unique electrical properties and extraordinary strength. They derive their name from their size, as the diameter of a nanotube is on the order of a few nanometres, while they can be up to 18 centimetres in length (as of 2010) [17], [18]. The literature shows that a CNFET circuit with one to ten CNTs per device is about two to ten times faster than a comparable CMOS circuit [19], [20]. Fig. 10 illustrates a typical structure of a CNFET with multiple CNTs. The CNTs are placed on a substrate with dielectric constant Ksub = 4. The doping varies along the length of each CNT: the channel region is undoped, while the other regions are heavily doped. A high-k dielectric, hafnium dioxide (HfO2), with dielectric constant Kox = 16 and thickness tox of 4 nm, separates the tubes from the gate. The effective width of a multi-tube CNFET (Wg) is defined as Wg = Pitch x NCNT + DCNT, where Pitch is the distance between the centres of two adjacent tubes (see Fig. 10), NCNT is the number of tubes and DCNT is the diameter of a tube. Other important device and


technology parameters related to the CNFET are tabulated in Table XI.
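The geometry above can be sketched numerically. The diameter of a CNT follows from its chirality indices (n1, n2) via the standard relation DCNT = a * sqrt(n1^2 + n1*n2 + n2^2) / pi, with graphene lattice constant a of about 0.246 nm (this formula is from the general CNT literature, not stated in the paper), and the effective width uses the Wg expression quoted above. The pitch and tube count below are illustrative values, not figures from the paper:

```python
import math

A_GRAPHENE_NM = 0.246  # graphene lattice constant, nm

def cnt_diameter_nm(n1: int, n2: int) -> float:
    """CNT diameter from chirality indices (standard relation)."""
    return A_GRAPHENE_NM * math.sqrt(n1**2 + n1 * n2 + n2**2) / math.pi

def cnfet_effective_width_nm(pitch_nm: float, n_cnt: int, d_cnt_nm: float) -> float:
    """Effective width Wg = Pitch * NCNT + DCNT, as quoted in the text."""
    return pitch_nm * n_cnt + d_cnt_nm

d = cnt_diameter_nm(19, 0)  # the (19,0) chirality listed in Table XI
wg = cnfet_effective_width_nm(20.0, 3, d)  # hypothetical 20 nm pitch, 3 tubes
print(round(d, 2), round(wg, 1))  # prints: 1.49 61.5
```

For the (19,0) tube of Table XI this gives a diameter of about 1.49 nm, consistent with the 0.4-4 nm range quoted in Section V.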

TABLE XI. DEVICE AND TECHNOLOGY PARAMETERS FOR CNFET

Parameter | Description | Value
Lch | Physical channel length | 16 nm
Wg | The width of metal gate (sub_pitch) | 6.4 nm
tox | The thickness of high-k top gate dielectric material (planar gate) | 4 nm
Kox | Dielectric constant of high-k gate oxide | 16
(n1, n2) | Chirality of the tube | (19,0)
n_CNT | Number of tubes per device | -

VI. SIMULATION SETUP AND FRAMEWORK OF 6-BIT BINARY TO GRAY CODE CONVERTER

This paper analyzes the XOR gate in terms of figures of merit such as propagation delay, power dissipation and power-delay product when it operates as a binary to reflected (Gray) code converter. The simulation framework shown in Fig. 11 is made by cascading the Low Power XOR circuit presented in Fig. 4 (the variability analysis carried out in Section IV shows that it gives the least variability of propagation delay and PDP). This section presents the simulation setup for estimating design metrics such as propagation delay (tP), average power dissipation (PWR) and power-delay product (PDP); the simulation framework is also explained.

Fig. 9 XOR Gate

A. Simulation Setup

Monte Carlo analysis performs numerous simulations with different boundary conditions. For each run it randomly chooses process parameters within the worst-case deviations from the nominal conditions, allowing statistical interpretation of the results. In addition to the process-parameter variations, mismatch can be taken into account as well, providing a more sophisticated estimate of the overall stability of the performance with respect to variations in the processing steps [11].

B. Simulation Framework

The proposed circuit shown in Fig. 11 is a binary to Gray code converter. XOR 3 is the XOR gate under test and XOR 4 is its load (which is further loaded by XOR 5). To first order, these two gates (XOR 3 and XOR 4) would be enough for estimating the design metrics. However, the delay of XOR 3 also depends on the input slope, and one way to obtain a realistic input slope is to cascade the circuit under test. This also makes the setup representative of practical applications, since standalone circuits have little application on their own.

Fig. 10. A typical CNFET structure with multiple CNTs [19]

C. Simulation Result and Variability Analysis

The focus of this work is on the challenges faced in designing logic circuits in the nanometre regime, where variations occur due to process and environmental parameters such as operating voltage and temperature. The root cause of variations is scaling. The problem of variability (defined as the standard deviation (σ) to mean (μ) ratio of a design metric) becomes more severe with greater miniaturization and hence it is imperative that this problem is addressed [14].
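The Monte Carlo procedure and the σ/μ variability metric can be illustrated with a toy sketch. The delay model below is a hypothetical first-order expression, not the 16-nm SPICE setup used in the paper; only the ±10% VDD spread, the 5000-sample count and the variability definition mirror the text:

```python
import random
import statistics

def mc_variability(metric, nominal_vdd, spread=0.10, samples=5000, seed=42):
    """Sample VDD uniformly within +/-spread of nominal and return the
    variability (std. dev. / mean) of the given metric."""
    rng = random.Random(seed)
    vals = [metric(rng.uniform(nominal_vdd * (1 - spread),
                               nominal_vdd * (1 + spread)))
            for _ in range(samples)]
    return statistics.pstdev(vals) / statistics.fmean(vals)

# Hypothetical alpha-power-law style delay: tp grows as VDD nears Vth.
def toy_delay(vdd, vth=0.2):
    return vdd / (vdd - vth) ** 2

print(round(mc_variability(toy_delay, nominal_vdd=0.7), 3))
```

A metric that is insensitive to VDD returns zero variability, while the toy delay model shows a small but non-zero σ/μ, which is the quantity tabulated throughout Sections IV and V.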


Fig. 11. Simulation framework to measure variability in terms of propagation delay, average power and PDP.

The estimated variability of tP, PWR and PDP is reported in Fig. 12. It is notable that the variability of PDP has dropped even further compared with the parent circuit (the Low Power XOR circuit of Fig. 4). The lowest value of PDP variability for the Low Power XOR circuit was found to be 0.154 (see Section IV), which is higher than the highest value of PDP variability (0.116) of the 6-bit binary to Gray code converter. This is because the propagation delay of XOR 1 is not carried forward by XOR 2, the propagation delay of XOR 2 is not carried forward by XOR 3, and so on. Moreover, propagation delay is not only a function of the circuit technology and topology but depends on other factors as well; most importantly, the delay is a function of the slopes of the input and output signals of the gate, which are independent at each level. Since propagation delay and PDP are directly proportional, PDP is also reduced for the binary to reflected (Gray) code converter.
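The binary to reflected (Gray) conversion realized by the XOR cascade of Fig. 11 can be modelled behaviourally: each Gray output bit is the XOR of two adjacent binary input bits, with the MSB passed through. A 6-bit functional sketch (not the transistor-level circuit):

```python
def binary_to_gray(b: int, bits: int = 6) -> int:
    """Reflected (Gray) code of a binary value: g = b XOR (b >> 1).
    Each Gray bit g[i] = b[i] ^ b[i+1], which is exactly the
    per-bit XOR cascade used in the converter."""
    assert 0 <= b < (1 << bits)
    return b ^ (b >> 1)

# Defining property: successive Gray codes differ in exactly one bit.
for n in range(1, 64):
    diff = binary_to_gray(n) ^ binary_to_gray(n - 1)
    assert bin(diff).count("1") == 1

print(format(binary_to_gray(0b101011), "06b"))  # prints: 111110
```

The single-bit-change property is what makes the Gray code attractive in the first place, and the one-XOR-per-bit structure is why a chain of the proposed low-variability XOR gates suffices for the 6-bit converter.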
Fig. 12. Variability analysis of 6 bit Binary to Gray code converter

VII. CONCLUSION

In this paper a thorough analysis of five XOR circuits has been presented. Variability analysis with ±10% variation in VDD was carried out for all XOR/XNOR circuits by MC (Monte Carlo) simulation at the 16-nm technology node with a sample size of 5000 to achieve high accuracy. We then proposed the Low Power XOR circuit (Fig. 4) as the circuit with the least variability of PDP. In terms of output levels it may not be as good as the 7T XOR circuit, but in terms of power-delay product and propagation delay it stands well above all its counterparts. This paper also proposes a CNFET implementation of the best XOR circuit; the proposed CNFET realization proves to be better than its CMOS counterpart. This work also carries out variability analysis of the best XOR circuit using the CNFET implementation, which shows its robustness against supply-voltage variation, an indirect outcome of process, voltage and temperature (PVT) variations. The proposed CNFET implementation is therefore an attractive choice for achieving higher immunity against PVT variations. The Low Power XOR circuit is then used to construct a 6-bit binary to Gray code converter by cascading the circuit; the simulation setup and results are discussed in brief, along with variability analysis.

REFERENCES

[1] Hanho Lee and Gerald E. Sobelman, "New low voltage circuits for XOR and XNOR," Proc. IEEE, pp. 225-229, April 1997.
[2] M. Vesterbacka, "A new six transistor CMOS XOR circuit with complementary output," in Circuits and Systems 1999, 42nd Midwest Symposium, vol. 2, pp. 796-799, Aug 1999.
[3] Sumeer Goel, Mohamed A. Elgamel and Magdy A. Bayoumi, "Novel design methodology for high performance XOR-XNOR circuit design," in Integrated Circuits and Systems Design, 2003, SBCCI 2003, pp. 71-76, Sept 2003.
[4] Sumeer Goel, Mohammed A. Elgamel, Magdy A. Bayoumi, and Yasser Hanafy, "Design methodologies for high performance noise tolerant XOR-XNOR circuits," IEEE Trans., vol. 53, pp. 867-878, April 2006.
[5] Aminul Islam, A. Imran and Mohd. Hasan, "Variability analysis and FinFET based design of XOR and XNOR circuit," in Computer and Communication Technology (ICCCT), 2011 2nd International Conf., pp. 239-235, Sept 2011.
[6] Rajeev Kumar and Vimal Kant Pandey, "A new 5-transistor XOR-XNOR circuit based on the pass transistor logic," in Information and Communication Technologies (WICT), 2011 World Congress, pp. 667-671, Dec 2011.
[7] Luca Amaru, Pierre-Emmanuel Gaillardon and Giovanni De Micheli, "MIXSyn: An efficient logic synthesis methodology for mixed XOR-AND/OR dominated circuits," in Design Automation Conference (ASP-DAC), 2013 18th Asia and South Pacific, pp. 133-138, Jan 2013.
[8] Shinichi Nishizawa, Tohru Ishihara, and Hidetoshi Onodera, "Analysis and comparison of XOR cell structures for low voltage circuit design," in Quality Electronic Design (ISQED), 2013 14th International Symposium, pp. 703-708, March 2013.


[9] Arkadiy Morgenshtein, Alexander Fish, and Israel A. Wagner, "Gate-diffusion input (GDI): a power efficient method for digital combinatorial circuits," IEEE Trans., vol. 10, pp. 566-581, October 2002.
[10] Aminul Islam, M.W. Akram and Mohd. Hasan, "Variability immune FinFET based full adder design in subthreshold region," in Devices and Communications (ICDeCom), 2011 International Conference, pp. 1-5, Feb 2011.
[11] K.G. Verma, B.K. Kaushik, and R. Singh, "Deviation in propagation delay due to process induced driver width variation," in Emerging Trends in Networks and Computer Communications (ETNCC), 2011 International Conference, pp. 89-92, April 2011.
[12] http://www.itrs.net/Links/2009ITRS/Home2009.htm
[13] M. Alioto, G. Palumbo, and M. Pennisi, "Understanding the effect of process variations on the delay of static and domino logic," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 18, pp. 697-710, May 2010.
[14] Aminul Islam and Mohd. Hasan, "Leakage characterization of 10T SRAM cell," IEEE Trans. Electron Devices, vol. 59, no. 3, pp. 631-638, March 2012.
[15] A. Kumar, S. Prasad and A. Islam, "Realization and optimization of CNFET based operational amplifier at 32-nm technology node," in Proc. International Conclave 2013, Innovations in Engineering & Management (ICIEM 2013), Feb 2013.
[16] Farhad Ali Usmani and Mohammad Hasan, "Carbon nanotube field effect transistors for high performance analog applications: an optimum design approach," Microelectronics Journal, vol. 41, pp. 395-402, July 2010.
[17] Aminul Islam and Mohd. Hasan, "Design and analysis of power and variability aware digital summing circuits," ACEEE Int. J. on Communication, vol. 02, July 2011.
[18] X. Wang, et al., "Fabrication of ultralong and electrically uniform single-walled carbon nanotubes on clean substrates," Nano Lett., vol. 9, pp. 3137-3141, Aug. 2009.
[19] J. Deng and H.-S. P. Wong, "A compact SPICE model for carbon-nanotube field-effect transistors including nonidealities and its application - part I: model of the intrinsic channel region," IEEE Trans. Electron Devices, vol. 54, pp. 3186-3194, Dec. 2007.
[20] J. Deng and H.-S. P. Wong, "A compact SPICE model for carbon-nanotube field-effect transistors including nonidealities and its application - part II: full device model and circuit performance benchmarking," IEEE Trans. Electron Devices, vol. 54, pp. 3195-3205, Dec. 2007.


Generation of Electricity by Geothermal Energy

1Rajkumar Kaushik, 2Nainy Chauhan, 3Shweta Mishra
1M.Tech (Power System), JIT Group of Institute and Technology, Kalwara, Jaipur (Rajasthan)
2,3B.Tech (ECE), Jayoti Vidhyapeeth Womens University, Jaipur (Rajasthan)
E-mail: rajkumarkaushikbsacet@gmail.com, nainychauhan@gmail.com, aasthamishra08@gmail.com

Abstract - In this paper we discuss the different sources of geothermal energy and the different techniques for generating electricity from it. A huge amount of conventional fuel is currently consumed to generate electrical energy, and the generation cost of electricity from conventional sources is very high because of their limited availability. At some places, however, the interior of the earth reaches very high temperatures, which appear in various forms such as fire, steam and hot water. These different forms of heat can be used to generate electricity. The complete set-up of a geothermal power station is the same as that of a steam power station; the only difference is the source of heat. As a result of continued consumption, fossil fuel resources have been brought to the edge of exhaustion. Geothermal energy is an important source for the future, and the aim of this study is an efficient energy analysis of an existing geothermal power system.
Keywords - geothermal energy, electricity generation, steam, geothermal power

I. INTRODUCTION - GEOTHERMAL ENERGY

The word geothermal originates from two Greek roots: geo, meaning earth, and thermo, meaning heat. Geothermal energy is thus the heat energy held within the earth. It can be used to generate electricity by extracting heat from the rocks in the earth's interior. The technologies developed to exploit this energy are flexible and fall into two types:
i. Near-surface geothermal energy: used for heating and cooling (e.g. multi-family buildings).
ii. Deep geothermal energy: used for electricity generation in power plants.

Geothermal power generation was pioneered by a group of Italians who built an electric generator at Larderello in 1904. It was later developed in the United States at the Geysers steam field in Northern California. In ancient times the Romans and the Chinese used hot springs for bathing, cooking and heating, and hot springs are still used today for heating buildings, as people believe hot mineral springs have natural healing power. Nowadays people around the world use this energy to produce electricity and to heat buildings, greenhouses and more. Geothermal energy is regarded as renewable because the water is replenished by rainfall and the heat is continuously produced by the earth.
II. DESCRIPTION OF GEOTHERMAL ENERGY RESOURCES

Geothermal resources can be divided, on the basis of increasing heat-energy level, into four types:
* Hydrothermal
* Geo-pressured resources
* Hot dry rock resources
* Magma
To date, hydrothermal is the only resource that is economically recoverable, as it is the only one in commercial use so far. The following subsections briefly discuss each of these types.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
44

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

a. Hydrothermal Energy:
Hydrothermal energy is produced where, in the interior of the earth, water comes into contact with hot rock. The water collects in an underground reservoir and is heated by the surrounding rocks; magma below the reservoir supplies heat through the solid rock. As the pressure increases the temperature rises, and the water can reach temperatures as high as 350 °C. It may emerge in hot springs or geysers. These reservoirs are found at depths of 100 m to 500 m in the interior of the earth.

Fig. 1 Basic diagram of hydrothermal resources

b. Geo-pressured Energy:
Geo-pressured resources are found at depths of 3-5 km in crude-oil reservoirs where gas and salt water are also present. The water is held at high pressure and a moderate temperature of 90-200 °C; because of the high pressure of the water in the deep reservoir layer, this energy is referred to as geo-pressured energy. Methane gas is dissolved in the water, usually from 1.9 to 3.8 per m^3; as the water is brought to the surface the pressure level decreases and the methane gas is released from the solution.

c. Hot Dry Rock:
Hot dry rock resources are found at depths of 5-10 km in regions with little or no water. Exploiting them involves a man-made reservoir created by drilling a deep cavity; this is called an enhanced geothermal system or engineered geothermal system. The rock can be fractured by
1) nuclear explosion, or
2) hydraulic fracturing.
Hydraulic fracturing is the method commonly and efficiently used, for nuclear-security reasons.

d. Magma resources:
Magma resources are found at depths of up to about 10 km as molten or partially molten rock at temperatures of 650-1200 °C. These resources are situated mainly in areas of recent volcanic activity. Because of the very high temperature and large volume, magma is potentially the largest of all geothermal resources. However, no drilling technology for it has yet been established, and extracting energy from magma is expected to be the most difficult of all the resource types utilized.

III. ELECTRICITY GENERATION BY GEOTHERMAL ENERGY

Basic power station:
A geothermal power station is broadly the same as any other steam-turbine thermal power station and generates electric energy in the same way as a steam power plant. Here water is converted into steam by contact with geothermal heat; the remaining station components are the same as in a fossil-fuel-based steam power station. The components used are:
I. Geothermal extraction
II. Boiler
III. Steam turbine
IV. Condenser
V. Normal water feeding pump

Fig. 2 Basic Geothermal Power Station

Functioning of a geothermal steam power station:
Heat energy is extracted from sources in the earth's interior. This heat comes into contact with the water present in the boiler, converting it into steam, which is passed over a steam turbine that converts the kinetic energy of the steam into mechanical energy. The turbine is coupled to a generator that produces electrical energy; after the turbine, the steam is condensed in a condenser.
Geothermal steam power stations are further divided into three main subcategories, described below.

1. Dry Steam Power Station:
* This is the earliest and simplest design.
* A temperature of 150 degrees or more is needed to start the turbine.
* The hydrothermal fluid used is dry steam, which is sent directly to a turbine that drives a generator to produce electricity.
* It emits smaller amounts of non-condensable gases.
* This technique is the most commonly used and the most cost-effective.

Fig. 3 Dry Steam Power Station

Working of a dry steam power station:
In this type of plant only steam is extracted from the depths of the earth, and this steam is injected directly over the steam turbines; as a result, dry steam power plants are the most common and effective, and the lowest-cost, type of steam power plant. Dry steam requires no vaporization process: injected over the steam turbine, it rotates the alternator, and the alternator generates the electrical energy. After passing over the turbine the steam is condensed in a condenser, with cooling water supplied from a river, the sea or drains.

2. Flash Steam Power Station:
In this type of station the heat energy arrives as a hot liquid; this "flash steam" fluid is put into a flash tank held at lower pressure, where part of the fluid vaporizes into steam that drives the turbines and hence the generator, producing electrical energy.
* The temperature required for the fluid is about 180 degrees.
* It is the most widely used type of station in operation today.
* The geothermal reservoir water temperature is greater than 360 degrees.
* Leftover water and steam are sent back to the reservoir, making it a sustainable resource.

3. Binary Cycle Power Station:
* This is the most recent development.
* The fluid temperature can be as low as 57 degrees.


* A binary fluid is used in place of water to generate steam when the geothermal resource has a low temperature.
* The thermal efficiency of this station is between 10 and 13%.
* Binary fluids such as isobutane and isopentane have low boiling temperatures, so they can be vaporized into steam by contact with low-temperature geothermal energy.

Fig. 4 Flash Steam Thermal Power Station

Fig. 5 Binary Fluid Power Station

Hybrid geothermal-fossil fuel system:
In such systems geothermal energy resources are used to enhance the generation of a fossil-fuel-based steam power station. Depending on how the geothermal heat is applied, the system is categorized in two parts:

* Geothermal Preheat Hybrid System:
In this case the geothermal heat is used to raise the temperature of the water present in the boiler. Because the water is pre-heated by the geothermal resources, less fuel is required to convert the water into steam, and the efficiency of the power station is improved.

Fig. 6 Geothermal Preheat Hybrid System

* Fossil Super Heat Hybrid System:
In this type of system the steam is not generated in the boiler but from geothermal heat energy; this steam is then converted into dry steam with the help of a superheater, which removes the moisture. This reduces the fuel needed for steam generation and improves the system. Note: here the fossil fuel is used in the superheater, after the geothermal heat has raised the water temperature.

IV. ENVIRONMENTAL AND INDUSTRIAL PROBLEMS

Geothermal power plants create some problems, which are listed below:
i. Some products contain boron, fluorine and arsenic, which are very harmful to plant and animal life.


ii. Some products are highly mineralized and can pollute groundwater streams; to avoid this it is necessary to reinject the product into the reservoir, which also takes care of thermal pollution.
iii. The emission of water vapour and other toxic gases into the atmosphere is a source of problems.

All these effects of geothermal power generation are at a very low level, and no solid pollutant is emitted into the atmosphere.

APPLICATION:
Geothermal energy can be used for the following applications: fish farming, dried-milk production, industrial use (e.g. textile industries), drying agricultural products, maintaining the temperature of swimming pools, generation of electrical energy, process heating for industry, cooling buildings, etc.
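The 10-13% thermal efficiency quoted for binary-cycle stations can be sanity-checked against the Carnot limit for the temperature ranges mentioned in Section III. The specific temperatures below are illustrative choices within those ranges, not figures from the paper:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Carnot limit 1 - Tc/Th, with temperatures converted to kelvin."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# A low-temperature geothermal resource at 150 C rejecting heat at 30 C:
eta = carnot_efficiency(150.0, 30.0)
print(round(eta, 3))  # prints: 0.284
```

The practical 10-13% figure is well below this roughly 28% ideal bound, as expected for a real plant with pumping, heat-exchanger and turbine losses.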


A gm-C Quadrature Oscillator Based on DO-VDBA

Priyanka Gupta, Chetna Malhotra, Varun Kumar Ahalawat, V. Venkatesh Kumar, Rajeshwari Pandey
Department of Electronics and Communication, Delhi Technological University, New Delhi, India
E-mail: priyankagupta09@gmail.com
Abstract - This paper presents an application of DO-VDBAs in the field of analog signal generation. An integrable quadrature oscillator (QO) circuit is proposed that uses two active elements and two grounded capacitors for its realization and provides two quadrature outputs of equal magnitude. Simulation results on the QO verify the theory. The features of the proposed circuit are discussed and verified by computer simulations with appropriate low-voltage 0.18 µm CMOS technology models.

Keywords - quadrature oscillator, DO-VDBA, FO.

I. INTRODUCTION
Active devices have been central to electronics for the last few decades, from the transistor onward, in amplifiers, impedance converters, filters, oscillators and more. Current-mode (CM) circuits are useful for low-voltage operation and have therefore received a great deal of interest as an alternative to voltage-mode circuits, especially for analog signal generation and processing applications. Current-mode active blocks have been increasingly used to realize active filters, sinusoidal oscillators and immittances. A variety of sinusoidal oscillators using op-amps, the representative voltage-mode analog building block, are available; however, these configurations are limited in high-frequency operation by the low slew rate and constant gain-bandwidth product of the op-amps.
QOs produce outputs with a phase-locked sine-cosine relationship, which finds applications in the fields of communication, instrumentation, measurement and control systems [1-10].
Over the last few decades current-mode processing has emerged as an alternative design technique using current signals for signal processing and generation

[11]. CM circuits are low-impedance-node networks and hence have low time constants, which improves system performance in terms of speed and slew rate. The CM approach to signal processing has often been claimed to provide one or more of the following advantages: a higher frequency range of operation, lower power consumption, higher slew rates, improved linearity and better accuracy.
Current-mode signal processing has resulted in the emergence of numerous analog active building blocks (ABBs), as mentioned in [12] and the references cited therein, which are used for the realization of various signal processing and generation circuits. The voltage differencing buffered amplifier (VDBA) [12] is one of these and is a suitable choice for voltage-output circuits.
This paper presents a VDBA-based QO that uses the well-known approach of generating quadrature oscillations by cascading two lossless integrator circuits and forming a closed loop with unity feedback. The paper is organized as follows: Section 2 describes the VDBA element; details of the proposed design are presented in Section 3; Section 4 presents the simulation results; and concluding remarks are given in Section 5.

II. DO-VDBA
The VDBA is a four-terminal ABB characterized by two high-impedance voltage-differencing input terminals (P and N), one high-impedance current-output terminal (Z) and one buffered low-impedance voltage-output terminal (W). By using an additional voltage inverter, the VDBA may provide both inverting and non-inverting outputs [13-17], which is very useful for differential-mode signal operations. This modified block is termed the dual-output VDBA (DO-VDBA). The symbol and behavioral model of the DO-VDBA are shown in Fig. 1.
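The ideal port behaviour just described can be captured in a few lines (a behavioral sketch assuming the standard VDBA definition of [12]: the Z terminal sources a current gm*(V_P - V_N), and the buffered outputs copy and invert V_Z):

```python
def do_vdba(v_p: float, v_n: float, v_z: float, gm: float):
    """Ideal DO-VDBA port equations: returns (I_Z, V_W_plus, V_W_minus)."""
    i_z = gm * (v_p - v_n)   # voltage-differencing transconductor stage
    return i_z, v_z, -v_z    # buffered non-inverting and inverting outputs

i_z, w_pos, w_neg = do_vdba(v_p=0.5, v_n=0.2, v_z=1.0, gm=1e-3)
print(w_pos, w_neg)  # prints: 1.0 -1.0
```

Note that gm is treated here as a constant; in the actual device it is set by the external biasing current Ib, which is what makes the oscillator electronically tunable.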


Fig. 1: Dual output VDBA (a) Symbol (b) Behavioral model [12]

The relation between the terminals can be written in hybrid matrix form as

I_Z = gm (V_P - V_N), V_W+ = V_Z, V_W- = -V_Z (1)

where gm represents the transconductance of the DO-VDBA. As may be observed from the behavioral model shown in Fig. 1(b), gm can be tuned by the external biasing current Ib.

III. PROPOSED STRUCTURE

Although many circuits using the VDBA are available in the literature, oscillator circuits need more attention. The proposed QO circuit configuration is shown in Fig. 2. It consists of two lossless integrators forming a closed loop, connected in inverting and non-inverting modes respectively.

Fig 2: Proposed quadrature oscillator circuit

The characteristic equation of the circuit can be deduced as

s^2 C1 C2 + gm1 gm2 = 0 (2)

The frequency of oscillation (FO) from (2) can be determined as

FO = (1/2π) sqrt(gm1 gm2 / (C1 C2)) (3)

It is worth observing that the FO can be tuned by varying the biasing current, thus enabling electronic tuning of the circuit.

IV. SIMULATION RESULTS

The proposed QO is verified through simulations using the CMOS implementation of the DO-VDBA shown in Fig. 3 [16]. The SPICE simulations are performed using 0.18 µm CMOS process parameters provided by MOSIS (AGILENT). The supply voltages taken are ±1 V. The W/L ratios of the transistors are shown in Table 1.
Table 1: Aspect Ratios

S No.  Transistor        W/L
1.     M3, M4            14/1
2.     M1, M2            4/1
3.     M5, M6, M7, M8    36/0.18
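As a numerical cross-check of the design relation in Section III, the standard two-integrator-loop expressions can be inverted to size the transconductance for a target FO. This is a sketch under the assumption of matched transconductances (gm1 = gm2 = gm) and matched capacitors (C1 = C2 = C); the values are illustrative, not the authors' extracted design parameters:

```python
import math

def fo_from_gm_c(gm1, gm2, c1, c2):
    """Oscillation frequency of the two-lossless-integrator loop (eq. (3))."""
    return math.sqrt(gm1 * gm2 / (c1 * c2)) / (2 * math.pi)

def gm_for_target_fo(f0, c):
    """Matched case gm1 = gm2 = gm, C1 = C2 = C: gm = 2*pi*f0*C."""
    return 2 * math.pi * f0 * c

# Illustrative target taken from Section IV: f0 = 310 kHz with C = 0.20 pF
c = 0.20e-12
f0_target = 310e3
gm = gm_for_target_fo(f0_target, c)
print(f"required gm = {gm:.3e} S")                                # ~3.9e-07 S
print(f"check: f0 = {fo_from_gm_c(gm, gm, c, c) / 1e3:.1f} kHz")  # 310.0 kHz
```

The electronic tunability noted above follows directly: any change in the biasing current that scales gm scales f0 proportionally in the matched case.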

The QO is designed for an FO of 310 kHz, for which
the capacitance values are taken as 0.20 pF. The
simulated FO was observed to be 307 kHz. The
simulated steady state time domain output and
corresponding frequency spectrum are shown in
Fig. 4. The percentage total harmonic distortion
(THD), as recorded in Table 2, is 3.53% for the
proposed circuit.

Fig. 3: CMOS implementation of DO-VDBA [16]

Table 2: Total harmonic distortion for the proposed circuit

HARMONIC  FREQUENCY  FOURIER    NORMALISED  PHASE      NORMALISED
NO        (HZ)       COMPONENT  COMPONENT   (DEG)      PHASE (DEG)
1         3.07E+05   4.82E-01   1.00E+00    1.02E+02   0.00E+00
2         6.14E+05   3.48E-03   7.22E-03    1.02E+02   -1.01E+02
3         9.21E+05   1.51E-02   3.13E-02    -4.66E+01  -3.51E+02
4         1.23E+06   3.30E-03   6.85E-03    -1.76E+02  -5.82E+02
5         1.54E+06   6.22E-03   1.29E-02    -1.88E+01  -5.26E+02

TOTAL HARMONIC DISTORTION = 3.532807E+00 PERCENT
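The 3.53% figure can be cross-checked directly from the normalised Fourier components in Table 2, since THD = √(Σ hₙ²)/h₁ with h₁ = 1. A quick sketch using the five listed harmonics (PSPICE sums further harmonics, hence the tiny difference in the last digits):

```python
import math

# Normalised Fourier components of harmonics 2..5 from Table 2 (fundamental = 1.00E+00)
harmonics = [7.22e-3, 3.13e-2, 6.85e-3, 1.29e-2]

thd_percent = 100 * math.sqrt(sum(h * h for h in harmonics))
print(f"THD from the listed harmonics = {thd_percent:.2f}%")  # 3.53%
```

The third harmonic (3.13E-02) clearly dominates the distortion budget here.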

V. CONCLUSION
A topology of VDBA based sinusoidal oscillators is
presented. The topology makes use of lossless
integrators. The oscillator is electronically tunable,
and no external resistor is required, making it suitable
for integration. Workability of the proposed oscillator
is verified through PSPICE simulations using 0.18 µm
AGILENT CMOS process parameters. The total
harmonic distortion (THD) for the proposed design
is found to be quite low.
REFERENCES

Fig. 4: Output of Circuit I (a) Steady state (b) Frequency spectrum

[1]. P. Prommee and K. Dejhan, "An integrable electronic-controlled quadrature sinusoidal oscillator using CMOS operational transconductance amplifier", International Journal of Electronics, vol. 89, no. 5, pp. 365-379, 2002.
[2]. B. Linares-Barranco, T. Serrano-Gotarredona, J. Ramos-Martos, J. Ceballos-Caceres, J. Mora, and A. Linares-Barranco, "A precise 90° quadrature OTA-C VCO between 50-130 MHz", 2004 IEEE International Symposium on Circuits and Systems, pp. 642-66.
[3]. M. Siripruchyanun and W. Jaikla, "Cascadable Current-Mode Biquad Filter and Quadrature Oscillator Using DO-CCCIIs and OTA", Circuits, Systems & Signal Processing, vol. 28, no. 1, pp. 99-110, 2008.
[4]. S. Maheshwari and I. Khan, "Current controlled third order quadrature oscillator", IEE Proceedings - Circuits, Devices and Systems, vol. 152, no. 6, pp. 605-607, 2005.
[5]. W. Tangsrirat, W. Tanjaroen and T. Pukkalanun, "Current-mode multiphase sinusoidal oscillator using CDTA-based allpass sections", AEU - International Journal of Electronics and Communications, vol. 63, no. 7, pp. 616-622, 2009.
[6]. J. Vavra, J. Bajer, and D. Biolek, "Differential-input buffered and transconductance amplifier-based all-pass filter and its application in quadrature oscillator", 2012 35th International Conference on Telecommunications and Signal Processing (TSP), pp. 411-415, 2012.


[7]. Hou, J. Wu, J. Hwang and H. Lin, "OTA-based even-phase sinusoidal oscillators", Microelectronics Journal, vol. 28, no. 1, pp. 49-54, 1997.
[8]. M. Ahmed, I. Khan and N. Minhaj, "On transconductance-C quadrature oscillators", International Journal of Electronics, vol. 83, no. 2, pp. 201-207, 1997.
[9]. Khan and S. Khwaja, "An integrable gm-C quadrature oscillator", International Journal of Electronics, vol. 87, no. 11, pp. 1353-1357, 2000.
[10]. R. Pandey, N. Pandey, G. Komanapalli, A. K. Singh, and R. Anurag, "New realizations of OTRA based sinusoidal oscillator", 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 913-916, 2015.
[11]. C. Toumazou, F. J. Lidgey, and D. G. Haigh, Analogue IC Design: The Current-Mode Approach, IEEE Circuits and Systems Series 2, Peter Peregrinus Ltd., 1990.
[12]. D. Biolek, R. Senani, V. Biolkova, and Z. Kolka, "Active Elements for Analog Signal Processing: Classification, Review, and New Proposals", Radioengineering, vol. 17, pp. 15-32, 2008.
[13]. O. Onjan, T. Pukkalanun, and W. Tangsrirat, "SFG realization of general nth-order allpass voltage transfer functions using VDBAs", 2015 12th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015.
[14]. V. Biolkova, Z. Kolka, and D. Biolek, "Fully Balanced Voltage Differencing Buffered Amplifier and its applications", 2009 52nd IEEE International Midwest Symposium on Circuits and Systems, pp. 45-48, 2009.
[15]. N. Khatib and D. Biolek, "New voltage mode universal filter based on promising structure of Voltage Differencing Buffered Amplifier", 2013 23rd International Conference Radioelektronika (RADIOELEKTRONIKA), pp. 177-181, 2013.
[16]. R. Sotner, J. Jerabek, and N. Herencsar, "Voltage Differencing Buffered/Inverted Amplifiers and Their Applications for Signal Generation", Radioengineering, vol. 22, no. 2, pp. 490-504, 2013.
[17]. F. Kacar, A. Yesil, and A. Noori, "New CMOS Realization of Voltage Differencing Buffered Amplifier and Its Biquad Filter Applications", Radioengineering, vol. 21, no. 1, pp. 333-339, 2012.


Design and Analysis of a Low Power Ternary Content Addressable Memory

Ankaj Gupta
Research Scholar, Poornima University, Jaipur
E-mail: ankajgarg87@hotmil.com
Abstract: Ternary content addressable memories
(TCAMs), or associative memories, have their
primary application in network routers. Today,
TCAM memory is employed in all web-based search
engines, but power consumption is a major issue with
TCAMs. In this paper a low power ternary content
addressable memory having very low leakage is
proposed. Simulation results show up to 30%
reduction in power. The circuit has been designed
and implemented in 0.35 µm CMOS technology.
The circuit dissipates a maximum of 10.5 nW of
power and is suitable for low power applications.

accelerators [5], data compression [6], and image
processing [7]. Recent applications of TCAMs
include real-time pattern matching in virus/intrusion-detection
systems and gene pattern searching in
bioinformatics [8-9]. Since the capacities and word-sizes
of TCAMs used in most of these applications
are much smaller than those of the TCAMs used in
networking equipment, current TCAM research is
primarily driven by networking applications,
which require high capacity TCAMs with low-power
and high-speed operation.
Keywords: Ternary content addressable memory,
power dissipation, leakage

I. INTRODUCTION
An efficient hardware solution to perform table
lookup is the ternary content addressable memory
(TCAM). TCAM searches for matching data by
content and returns the address at which the matching
data is found. TCAMs are used extensively today in
applications such as network address translation,
pattern recognition, and data compression. In these
applications, there is a steady demand for TCAMs
with higher density and higher search speed, but at
constant power. Currently, commercial TCAMs are
limited to 18 Mb of storage and 100 million searches
per second on a 144-bit search word, at typically 5 W
per TCAM chip. Compared to the conventional
memories of similar size, TCAMs consume
considerably larger power. This is partly due to the
fully parallel nature of the search operation, in which
a search word is compared in parallel against every
stored word in the entire TCAM array..
A TCAM can be used as a co-processor for the
network processing unit (NPU) to offload the table
lookup tasks. Besides the networking equipment,
TCAMs are also attractive for other key applications
such as translation look-aside buffers (TLBs) in
virtual memory systems [1-2], tag directories in
associative cache memories [3-4], database

II. PROPOSED LOW POWER TCAM CIRCUIT

The proposed low power circuit is shown in Fig. 1.
This TCAM cell uses two independent cells for
storing 1, 0 and X, where X is the mask (don't-care)
state. The circuit uses an AND-type match line.
Transistors 11 and 12 store complementary values
when the stored word is either 0 or 1. Transistors 5
and 7 are used to charge the node between the search
lines. Transistors 11 and 12 perform the comparison,
i.e. an XOR operation.
The static power of a CMOS circuit is given by
equation (1):

PS = IL × VDD    (1)

where IL is the leakage current of the circuit.
The subthreshold leakage current of an NMOS
transistor with zero VGS voltage and full-swing VDS
is given by equation (2) [9]:

ISV = I0 exp[(−K1·√φs + K2·φs + η·VDD) / (n·VT)]    (2)

where

I0 = µ0 Cox (Weff / Leff) VT² e^1.8

η = barrier lowering (DIBL) parameter
K1, K2 = non-uniform doping effect parameters
VT = thermal voltage = kT/q
µ0 = mobility of the electron
Leff = channel length of the transistor
Weff = channel width of the transistor

Equation (2) shows an exponential relation between
ISV and VDD.

Fig. 1: Proposed circuit

Fig. 2: TCAM cell leakage (nA) vs. VDD (V) for 0.65 µm technology
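Equations (1) and (2) combine into a quick numerical sketch of how static power scales with supply voltage. The parameter values below (I0, η, n, VT) are illustrative assumptions chosen only to exhibit the exponential trend of Fig. 2; they are not fitted to the process used in the paper:

```python
import math

def leakage_current(vdd, i0=1e-12, eta=0.05, n=1.5, vt=0.026):
    """Subthreshold leakage dominated by the DIBL factor exp(eta*VDD/(n*VT)) of eq. (2).
    The doping-dependent K1/K2 terms are folded into the prefactor i0 (assumed value)."""
    return i0 * math.exp(eta * vdd / (n * vt))

def static_power(vdd):
    """Eq. (1): PS = IL * VDD."""
    return leakage_current(vdd) * vdd

# Leakage grows exponentially with VDD, so static power grows faster than linearly
for vdd in (1.0, 1.5, 2.0, 2.5):
    print(f"VDD = {vdd:.1f} V  IL = {leakage_current(vdd):.3e} A  PS = {static_power(vdd):.3e} W")
```

This super-linear growth of PS with VDD is why lowering the supply is the first lever for reducing the static power of a TCAM array.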

The current density due to direct tunneling is given
by (3):

JDT = A · Eox² · exp{ −B · [1 − (1 − Vox/φox)^(3/2)] / Eox }    (3)

where

A = q³ / (16π²ħφox)

B = 4√(2m*) · φox^(3/2) / (3ħq)

Vox = voltage drop across the oxide
Eox = electric field in the oxide
Tox = oxide thickness
φox = oxide barrier height
m* = effective mass of an electron
ħ = reduced Planck constant
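The Schuegraf-Hu form of eq. (3) can be evaluated numerically to see the steep field dependence of direct tunneling. The oxide parameters below (barrier height 3.1 eV, effective mass 0.42·m0) are illustrative assumptions for a thin SiO2 film, not values taken from the paper:

```python
import math

Q = 1.602e-19              # electron charge (C)
HBAR = 1.055e-34           # reduced Planck constant (J*s)
M_EFF = 0.42 * 9.109e-31   # assumed effective electron mass in the oxide (kg)
PHI_V = 3.1                # assumed Si/SiO2 barrier height (V)
PHI_OX = PHI_V * Q         # barrier height in joules

A = Q ** 3 / (16 * math.pi ** 2 * HBAR * PHI_OX)
B = 4 * math.sqrt(2 * M_EFF) * PHI_OX ** 1.5 / (3 * HBAR * Q)

def j_direct_tunnel(e_ox, v_ox):
    """Direct-tunneling current density of eq. (3), in A/m^2 (valid for v_ox < PHI_V)."""
    return A * e_ox ** 2 * math.exp(-B * (1 - (1 - v_ox / PHI_V) ** 1.5) / e_ox)

# The tunneling current density rises steeply with the oxide field
for e_ox in (5e8, 8e8, 1e9):   # oxide fields in V/m
    print(f"E_ox = {e_ox:.1e} V/m  J_DT = {j_direct_tunnel(e_ox, 1.0):.3e} A/m^2")
```

Doubling the oxide field changes JDT by several orders of magnitude, which is why gate-oxide leakage becomes a concern only at the thinnest oxides and highest fields.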
III. RESULTS AND DISCUSSION

The proposed low power ternary content addressable
memory circuit has been designed and implemented
in 0.35 µm CMOS technology. The minimum supply
voltage is 1.5 V, the maximum supply current is
11.5 µA at the maximum supply voltage of 2.5 V,
and the maximum temperature of operation is 300 K.
The circuit simulation results are presented in Fig. 2.
IV. CONCLUSION

A TCAM circuit has been proposed and the simulated
results have been discussed. The simulation results
show that the circuit is highly immune to supply and
temperature variation, with up to 30% reduction in
leakage and cell area over the conventional TCAM
cell. This circuit can be used in network routers,
search engines and other low-power applications.
REFERENCES

[1]. H. Higuchi, S. Tachibana, M. Minami, and T. Nagano, "A 5-mW, 10-ns cycle TLB using a high-performance CAM with low-power match detection circuits", IEICE Transactions on Electronics, vol. E79-C, no. 6, Jun. 1996.
[2]. M. Sumita, "An 800 MHz single cycle access 32 entry fully associative TLB with a 240ps access match circuit", Digest of Technical Papers of the Symposium on VLSI Circuits, pp. 231-232, Jun. 2001.
[3]. P.-F. Lin and J. B. Kuo, "A 1-V 128-kb four-way set-associative CMOS cache memory using wordline-oriented tag-compare (WLOTC) structure with the content addressable memory (CAM) 10-transistor tag cell", IEEE Journal of Solid-State Circuits, vol. 36, no. 4, pp. 666-675, Apr. 2001.
[4]. P.-F. Lin and J. B. Kuo, "A 0.8-V 128-kb four-way set-associative two-level CMOS cache memory using two-stage wordline/bitline-oriented tag-compare (WLOTC/BLOTC) scheme", IEEE Journal of Solid-State Circuits, vol. 37, no. 10, pp. 1307-1317, Oct. 2002.
[5]. J. P. Wade and C. G. Sodini, "A ternary content-addressable search engine", IEEE Journal of Solid-State Circuits, vol. 24, no. 4, Aug. 1989.
[6]. K.-J. Lin and C.-W. Wu, "A low-power CAM design for LZ data compression", IEEE Transactions on Computers, vol. 49, no. 10, Oct. 2000.
[7]. T. Ogura, M. Nakanishi, T. Baba, Y. Nakabayshi, and R. Kasai, "A 336-kb content addressable memory for highly parallel image processing", Proceedings of the IEEE Custom Integrated Circuits Conference (CICC 1996), pp. 273-276, May 1996.
[8]. F. Yu, R. H. Katz, and T. V. Lakshman, "Gigabit rate packet pattern-matching using TCAM", Proceedings of the IEEE International Conference on Network Protocols (ICNP'04), Berlin, Germany, pp. 5.1.1-5.1.10, Oct. 5-8, 2004.
[9]. F. Yu and R. H. Katz, "Efficient multi-match packet classification with TCAM", Proceedings of the IEEE Symposium on High Performance Interconnects (HOTI'04), Stanford, CA, pp. 2.1.1-2.1.7, Aug. 25-27, 2004.
[10]. R. X. Gu and M. I. Elmasry, "Power dissipation analysis and optimization of deep submicron CMOS digital circuits", IEEE Journal of Solid-State Circuits, vol. 31, no. 5, pp. 707-713, May 1996.
[11]. K. F. Schuegraf and C. Hu, "Hole injection SiO2 breakdown model for very low voltage lifetime extrapolation", IEEE Transactions on Electron Devices, vol. 41, no. 5, May 1994.
[12]. The MOSIS Service, www.mosis.org


Characterization and Simulation of Semiconductor Thin Films Using Quantitative Mobility Spectrum Analysis (QMSA)

1Nisha Chugh, 2A.K. Vishwakarma, 3S. Sitharaman

1Asstt. Prof., Department of ECE, Jagannath University, Delhi
2Sc. B, Micro-Bolometer Group, MEMS Division, SSPL (DRDO), Delhi
3Scientist F, LPE Group, IR Division, SSPL (DRDO), Delhi

E-mail: nishachugh0711@gmail.com, amit.spl.drdo@gmail.com, sramanindia@yahoo.com

Abstract A quantitative mobility spectrum


analysis (QMSA) technique of characterizing
mobility and concentration of individual carrier
species for modern semiconductor heterostructures by experimentally generated Hall and
resistivity data as a function of magnetic field is
presented. QMSA enables the conductivity
contribution of bulk majority carriers to be
separated from that of carriers present at the
surface or at the spacer/barrier layer of GaAs-AlGaAs based HEMTs, for characterizing carriers
developed in 2 DEG region. Beck and Anderson
mobility spectrum analysis (MSA) technique was
considered as the first trial function. A variation
on the iterative procedure of Dziuba and Gorska is
used to obtain the mobility spectrum which shows
the quantitatively accurate mobility distribution.
QMSA is advantageous because, in comparison to
its previous counterparts such as MCF, MSA and the
Dziuba and Gorska procedure, it does not make any prior
assumption about the number of electron and hole
species as well as their approximate mobilities. A
ghost hole along with surface and bulk electron
was found in the 2 DEG region developed in
HEMTs. In this article we apply QMSA to both
analytical data of Hg1-xCdxTe and real
experimental data of AlGaAs obtained from the
Hall measurement system.
Keywords: QMSA algorithm, Spline Interpolation,
Pinning Point, Jacobi Iteration, Gauss-Seidel Iteration
Procedure, Van-der Pauw Technique, Resistivity, Hall
Coefficient, Carrier concentration and mobility.

I.

INTRODUCTION

A compound semiconductor is formed from two or
more elements, such as Si, Ge, In, Al, As and B,
drawn from different groups of the periodic table.
These elements can form binary (e.g. gallium(III)
arsenide(V), GaAs), ternary (e.g. aluminium gallium
arsenide (AlGaAs), indium gallium arsenide
(InGaAs), cadmium zinc telluride (CdZnTe),
mercury cadmium telluride (HgCdTe), also called
MCT) and quaternary (e.g. aluminium gallium
indium phosphide (AlGaInP)) alloys.
HgCdTe is an alloy of CdTe and HgTe and is
sometimes claimed to be third semiconductor of
technological importance after silicon (IV) and
gallium (III) arsenide. Resistivity and Hall coefficient
measurements [1] at a single magnetic field are of
limited use when mixed conduction systems are
implemented, since they provide only averaged
values of the carrier concentration and mobility,
which do not represent any of the individual species.
More detailed information becomes available if the
magneto-transport experiments are performed as a
function of magnetic field, because it is then possible
to simultaneously characterize the densities and
mobilities of each of the multiple electron and hole
species. When a compound semiconductor having
multiple carriers is grown over another compound
semiconductor having multiple carriers, the hetero-structure
will contain a large number of carriers in
total, causing mixed conduction. In order to characterize
their transport properties, an analysis over a range of
large magnetic fields, rather than the classic Hall
measurement at a single magnetic field, is required.
Such analyses generally consist of a multi-carrier fit
(MCF) to the experimental Hall data under

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
55

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

the applied magnetic field which is not unique since


the starting parameters such as number and type of
carriers, and corresponding mobilities and
concentrations need to be assumed [3]. Building on
all the previous attempts (MCF, MSA [4], and the
Dziuba and Gorska iterative procedure for obtaining
an accurate mobility spectrum), an approach described
by Antoszewski, Faraone et al. [5], known as
quantitative mobility spectrum analysis (QMSA), has
been developed and then systematically tested and
improved [6-9].
improved [6-9]. By using the MS of Beck and
Anderson as an initial function for a modified Gauss
Seidel iterative procedure the inherent instability of
previous iterative procedures during the convergence
process is removed and a good fit to the experimental
data is obtained. QMSA gave a perfect fit for 2
carriers in the analytical data of Hg1-xCdxTe. In this
paper, we present and discuss the performance of
QMSA for analytical data of Hg1-xCdxTe and for
evaluating mobility spectra of carriers developed in
2 DEG region of GaAs/AlGaAs hetero-structure
HEMTs.

II. VAN-DER PAUW TECHNIQUE

The method was first propounded by Leo J. van der
Pauw in 1958 [1]. Its power lies in its ability to
accurately measure the properties of a sample of any
arbitrary shape, so long as the sample is
approximately two-dimensional (i.e. it is much
thinner than it is wide) and the electrodes are placed
on its perimeter. From the measurements made, the
following properties of the material can be calculated:

- The resistivity of the material
- The doping type (i.e. P-type or N-type)
- The sheet carrier density of the majority carrier (the number of majority carriers per unit area)
- The mobility of the majority carriers

2.1 Conditions and Sample Preparation

In order to use the Van-der Pauw method, the sample
thickness must be much less than the width and
length of the sample. In order to reduce errors in the
calculations, it is preferable to take a symmetrical
sample, i.e. one that is uniform all across. There are
five conditions for using the Van-der Pauw technique:

- The sample must have a flat shape of uniform thickness all around.
- The sample must not have any isolated holes.
- The sample must be homogeneous and isotropic.
- All four contacts must be located at the edges/corners of the sample.
- The area of any individual contact should be at least an order of magnitude smaller than the area of the entire sample.

Figure 1: Some possible contact placements in VdP: (a) Preferred, (b) Acceptable, (c) Not Recommended. The panels show a square or rectangle with contacts at the corners, a square or rectangle with contacts at the edges or inside the perimeter, and a cloverleaf.

Figure 1 above shows the possible contact placement
configurations in the VdP technique. The
measurements require that four ohmic contacts be
placed on the sample.

III. MIXED-CONDUCTION MULTI-CARRIER SYSTEMS

In a multi-carrier system having more than one type
of carrier, a conventional Hall measurement
technique provides the resistivity ρ(B) and Hall
coefficient RH(B). The Hall measurement system
directly determines Rxx and Rxy (the magneto-transport
resistances) in the longitudinal and
transverse directions, from which ρ(B) and RH(B)
are calculated as formalized in eq. (1):

ρexp(B) = 1.13 f t Rxx  (Ω·cm)

RHexp(B) = (VH t / I B) × 2.5 × 10⁷ cm³/C    (1)

where Rxy = VH / I is the tensor component of
resistance in ohms, t = 100 (100 × 10⁻⁸) is the
thickness of the AlGaAs epi layer grown on the GaAs
buffer layer in our experiment, f is the form factor
(f = 1 for perfect geometry), and B is the magnetic
field applied along the z axis in units of Gauss
(1 Tesla = 10⁴ Gauss). So finally RH(B) becomes as
given in eq. (2):


RHexp(B) = (Rxy t / B) × 2.5 × 10⁷ cm³/C    (2)
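From the single-field quantities of eqs. (1) and (2), the averaged carrier concentration and mobility follow as n = 1/(e·|RH|) and µ = |RH|/ρ; as noted in the introduction, these averages do not represent any individual species under mixed conduction. A small sketch with illustrative, assumed values:

```python
E = 1.602e-19  # electron charge (C)

def averaged_n_and_mu(rho, r_hall):
    """Single-field Hall averages: n = 1/(e*|R_H|), mu = |R_H|/rho.
    With R_H in cm^3/C and rho in ohm*cm, n is in cm^-3 and mu in cm^2/(V*s)."""
    n = 1.0 / (E * abs(r_hall))
    mu = abs(r_hall) / rho
    return n, mu

# Illustrative (assumed) values: rho = 0.01 ohm*cm, R_H = -50 cm^3/C (n-type)
n, mu = averaged_n_and_mu(0.01, -50.0)
print(f"n  = {n:.3e} cm^-3")     # ~1.25e17 cm^-3
print(f"mu = {mu:.3e} cm^2/Vs")  # 5.000e+03 cm^2/Vs
```

The sign of RH gives the doping type (negative for n-type), which is all a single-field measurement can say about the carrier mix.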

The diagonal (σxx) and Hall (σxy) conductivity tensor
components in the analytical data of a multi-carrier
system can be expressed as a sum over the m species
(m stands for multi-carriers) in the system, as shown
in eq. (3):

σxx(B) = Σᵢ₌₁ᵐ e nᵢ µᵢ / [1 + (µᵢB)²]

σxy(B) = Σᵢ₌₁ᵐ Sᵢ e nᵢ µᵢ² B / [1 + (µᵢB)²]    (3)

where nᵢ and µᵢ are the concentration and mobility of
the ith carrier species, respectively, and Sᵢ is +1 for
holes and −1 for electrons. The diagonal (σxx) and
Hall (σxy) conductivity tensor components for the
experimental data are calculated from the
experimental resistivity and Hall coefficient values
obtained from the Hall measurement, as given in
eq. (4):

σxx(B) = ρexp(B) / { [ρexp(B)]² + [RHexp(B)·B]² }

σxy(B) = RHexp(B)·B / { [ρexp(B)]² + [RHexp(B)·B]² }    (4)
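Equations (3) and (4) are mutual inverses: tensors synthesized from a two-carrier model via (3) can be converted to ρ(B) and RH(B) and then recovered exactly through (4). A minimal round-trip sketch with illustrative carrier parameters (not the Table 1 values):

```python
import math

E = 1.602e-19  # electron charge (C)

def sigma_tensor(carriers, b):
    """Eq. (3): sum over species (n_i, mu_i, s_i), s_i = +1 for holes, -1 for electrons."""
    sxx = sum(E * n * mu / (1 + (mu * b) ** 2) for n, mu, s in carriers)
    sxy = sum(s * E * n * mu ** 2 * b / (1 + (mu * b) ** 2) for n, mu, s in carriers)
    return sxx, sxy

def sigma_from_rho_rh(rho, rh, b):
    """Eq. (4): experimental tensor components from rho(B) and R_H(B)."""
    denom = rho ** 2 + (rh * b) ** 2
    return rho / denom, rh * b / denom

# Illustrative two-carrier system: (concentration, mobility, sign)
carriers = [(1e15, 2.0, -1), (5e14, 0.3, +1)]

for b in (0.1, 1.0, 4.0, 8.0):
    sxx, sxy = sigma_tensor(carriers, b)
    # Invert the tensor to rho and R_H, then recover the tensor through eq. (4)
    denom = sxx ** 2 + sxy ** 2
    rho, rh = sxx / denom, sxy / (denom * b)
    rxx, rxy = sigma_from_rho_rh(rho, rh, b)
    assert math.isclose(rxx, sxx, rel_tol=1e-12) and math.isclose(rxy, sxy, rel_tol=1e-12)

print("eq. (3) -> (rho, R_H) -> eq. (4) round trip reproduces the tensors")
```

This is the sense in which QMSA works "in the conductivity domain": the conversion between (ρ, RH) and (σxx, σxy) is lossless, so all the difficulty lies in inverting the sum over species.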

Eq. (3) is modified into a discrete mobility form,
termed the Dziuba and Gorska Jacobi iterative
procedure, given in eq. (5):

σxx(Bⱼ) = Σᵢ₌₁ᵐ [sp(µᵢ) + sn(µᵢ)] / (1 + µᵢ²Bⱼ²)

σxy(Bⱼ) = Σᵢ₌₁ᵐ [sp(µᵢ) − sn(µᵢ)] µᵢBⱼ / (1 + µᵢ²Bⱼ²)    (5)

The ultimate goal is to calculate the conductivity
density functions sp(µ) [where sp(µ) = e·µ·p(µ)] and
sn(µ) [where sn(µ) = e·µ·n(µ)], which represent the
mobility spectra of the individual carrier species
present in the sample; p(µ) and n(µ) are the hole and
electron concentration density functions.
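Eq. (5) is a linear system in the unknown spectrum values s(µᵢ), solved in QMSA by a damped iterative procedure under non-negativity of sp and sn. The sketch below uses a simplified damped, non-negativity-projected iteration on synthetic single-species σxx data; it is a stand-in for the modified Gauss-Seidel update of the QMSA literature, and the grid sizes and step size are illustrative assumptions:

```python
import math

# Magnetic field samples and a log-spaced mobility grid (illustrative sizes)
b_fields = [0.1 * k for k in range(1, 41)]                # 0.1 .. 4.0 T
mob_grid = [10 ** (-1 + 2 * i / 49) for i in range(50)]   # 0.1 .. 10 (1/T)

# Kernel of the diagonal component of eq. (5): K[j][i] = 1 / (1 + mu_i^2 B_j^2)
K = [[1.0 / (1.0 + (mu * b) ** 2) for mu in mob_grid] for b in b_fields]

# Synthetic sigma_xx from a single species of unit conductivity at mu = 1.0
true_mu = 1.0
sigma = [1.0 / (1.0 + (true_mu * b) ** 2) for b in b_fields]

def residual(s):
    """RMS misfit between the measured and reconstructed sigma_xx."""
    return math.sqrt(sum((sigma[j] - sum(K[j][i] * s[i] for i in range(len(s)))) ** 2
                         for j in range(len(b_fields))))

# Damped, non-negativity-projected iteration: s <- max(0, s + w * K^T (sigma - K s))
s = [0.0] * len(mob_grid)
w = 5e-4
r0 = residual(s)
for _ in range(1500):
    err = [sigma[j] - sum(K[j][i] * s[i] for i in range(len(s)))
           for j in range(len(b_fields))]
    s = [max(0.0, s[i] + w * sum(K[j][i] * err[j] for j in range(len(b_fields))))
         for i in range(len(s))]
r1 = residual(s)
print(f"residual before/after: {r0:.3e} / {r1:.3e}")
```

The small damping factor here plays the same stabilizing role as the convergence coefficients wx and wy discussed in Section IV: without damping, the ill-conditioned kernel makes the iteration unstable.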
IV.

QMSA IMPLEMENTATION AND


RESULTS

The starting point for the QMSA procedure is to


allow for the existence of mobility distribution of
hole-like and electron-like species in the
semiconductor material. The QMSA uses the
Mobility spectrum of Beck & Anderson [4] in which
experimental conductivity tensor vs. magnetic field
data is transformed into a continuous profile of carrier
mobilities present in the sample as first trial function
in order to solve eq. (5) through Gauss Seidel
iterative procedure. The ultimate goal is to
characterize [10] the semiconductor materials causing
mixed conduction so that the mobility spectra of
individual carrier species can be known. Convergence
coefficients wx=0.03 and wy=0.003 minimize the
error, speed up the rate of convergence and yield a
stable, quantitatively accurate solution, as compared
to having wx=wy=1 in the standard iteration
procedure. We use 100 mobility points µi per decade,
and extend the mobility range considered by Dziuba
and Gorska by more than an order of magnitude in
each direction by interpolating at low magnetic fields
to the data including B=0, and by extrapolating the
experimental data at high magnetic fields. In order to
verify the ability of QMSA to transform experimental
Hall data in the magnetic field domain to mobility
domain to have meaningful mobility spectra,
synthetic data set of two carriers with their carrier
concentration and mobility have been generated by
substituting synthetic values of the parameters in eq.
(3) and (5). In order to simulate experimental
conditions, a 1% random error is then superimposed
onto the synthetic data. The maximum magnetic field
is taken to be 8 T (0.1 to 8 Tesla), which is typical for
electromagnets used in Hall systems. An example of
n-Hg1-xCdxTe conductivity vs. magnetic field at T = 80 K
is shown in figure 2(a), and mobility spectra at
T = 80 K and 110 K are shown in figure 2(b) below.

The parameter m now defines the number of points
in the final mobility spectrum, not the number of
carrier species as in eq. (3), and Bⱼ is the spline-interpolated,
experimentally applied varying magnetic
field.
Special Issue: National Conference on Recent Innovations In Engineering & Technology
(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
57

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Table 1: Input values in an analytical QMSA for n-Hg1-xCdxTe at T = 80 K

Figure 2: QMSA for n-HgCdTe: (a) QMSA fit (dashed curves) to the experimental diagonal and Hall conductivities (solid curves) vs. magnetic field for an n-type sample (x = 0.224) at 80 K [6]; (b) mobility spectrum (QMSA) for LPE HgCdTe at 80 and 150 K, and the B & A envelope at 110 K [5]

Figure 2(b) shows three electrons E1, E2 and E3 in


the mobility spectra. The 80 K spectrum displays two
sharp peaks (E1 and E2), while the 110 K spectrum
contains two peaks (E1 and E3) plus a broad
shoulder. The solid line indicated as B & A is the
initial, discrete form of the Beck and Anderson
envelope for the iterative procedure.
4.1 QMSA for Analytical Data: Results
For the analytical data for two carriers of n-HgCdTe,
the carrier concentration and mobility values used are
given in Table 1.

Figure 3: Analytical results of Hg1-xCdxTe (x = 0.286) for two carriers: (a) analytically (sigmaxx and sigmaxy) and theoretically (sigcxx and sigcxy) calculated conductivity components vs. magnetic field; (b) simulated and QMSA-generated mobility spectrum of HgCdTe (hole and electron density functions vs. mobility)

As shown in figure 3(a), the analytically (blue and
red curves) and theoretically (green and violet
curves) calculated conductivity components overlap
with each other, giving a verification of QMSA, and
figure 3(b) shows the mobility spectra of the two
carriers given as input in Table 1.


V. CONCLUSIONS

The extensive testing of the QMSA algorithm, first
on analytical data, has shown it to be an extremely
accurate, reliable, and convenient technique for
analyzing magnetic field-dependent experimental
data of GaAs/AlGaAs hetero-structure. QMSA is
ideally suited for the routine electrical
characterization of different semiconductor materials
and devices, particularly narrow band-gap IR
materials such as HgCdTe. MCT is a material whose
magneto-transport properties tend to be strongly
influenced by mixed conduction of multiple carriers,
i.e. conduction by holes, light holes and electrons,
which are well separated by QMSA. For
GaAs/AlGaAs hetero-structure high doping of Si as
donor impurities is done to make N+ AlGaAs layer
above the undoped AlGaAs spacer layer and capping
layer of GaAs is also highly N+ Si doped. The QMSA
spectrum of the 2-DEG contains electrons as the
majority carriers; no minority carriers are seen in the
mobility spectrum except a ghost hole, which is
approximately the same in mobility as the fastest
electron in the 2-DEG region. The high mobility
carriers developed in HEMTs have electrical
applications in RADARs and communication, and
optical applications in LEDs and lasers.
ACKNOWLEDGMENTS
The authors would like to thank all the scientists and
technicians at the Solid State Physics Laboratory
(SSPL), Defence Research and Development
Organisation (DRDO), for providing all the facilities
pertaining to the experiment and the MBE-grown
GaAs-AlGaAs sample, and for allowing practical
work on the Hall measurement system.

REFERENCES
[1]. L. J. van der Pauw, "A method of measuring specific resistivity and Hall effect of discs of arbitrary shape", Feb. 1958.
[2]. J. W. Allen, "Gallium Arsenide as a semi-insulator", Nature, vol. 187, issue 4735, pp. 403-405, 1960.
[3]. M. C. Gold and D. A. Nelson, "Variable magnetic-field Hall effect measurements and analyses of high purity Hg vacancy (p-type) HgCdTe", Journal of Vacuum Science & Technology A, vol. 4, pp. 2040, 1986.
[4]. W. A. Beck and J. R. Anderson, "Determination of electrical transport using a novel magnetic field-dependent Hall technique", Journal of Applied Physics, vol. 62, pp. 541, 1987.
[5]. J. Antoszewski, D. J. Seymour, L. Faraone, J. R. Meyer, and C. A. Hoffman, "Magneto-transport characterization using quantitative mobility spectrum analysis", Journal of Electronic Materials, vol. 24, pp. 1255, 1995.
[6]. J. Antoszewski, J. R. Meyer, C. A. Hoffman, F. J. Bartoli, L. Faraone, S. P. Tobin, P. W. Norton, C. K. Ard, D. J. Reese, L. Colombo, and P. K. Liao, "Advanced magneto-transport characterization of LPE-grown HgCdTe by QMSA", Journal of Electronic Materials, vol. 25, pp. 1157, 1996.
[7]. J. R. Meyer, C. A. Hoffman, J. Antoszewski, and L. Faraone, "Quantitative mobility spectrum analysis of multicarrier conduction in semiconductors", Journal of Applied Physics, vol. 81, pp. 709, 1997.
[8]. J. R. Meyer, C. A. Hoffman, F. J. Bartoli, J. Antoszewski, and L. Faraone, "Mobility spectrum analysis for magnetic-field dependent Hall and resistivity data", US Patent 5,789,931, 1998.
[9]. J. Antoszewski and L. Faraone, "Quantitative mobility spectrum analysis (QMSA) in multi-layer semiconductor structures", Opto-Electronics Review, vol. 12, no. 4, pp. 347-352, 2000 (presented at the 4th International Conference on Solid State Crystals).
[10]. D. K. Schroder, Semiconductor Material and Device Characterization, 3rd edition, John Wiley & Sons / IEEE Press, Wiley-Interscience, 2006.

Performance Evaluation of Cascaded Optical Filters on 400Gbps PM-16QAM Coherent Communication Systems
Sapna Aggarwal, Varun Jain
Department of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: sapnaggarwal18@gmail.com, varunjain1202@gmail.com

Abstract: This paper investigates the effect of cascaded optical filters on 400Gbps polarization-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM) with 100, 50 and 25 GHz channel spacing (Δf). Q-factor and BER are used as performance evaluation metrics, and four types of filters are considered: Thin Film Filter (TFF), Fiber Bragg Grating (FBG), Bessel Optical Filter (BOF) and Optical Band Pass Filter (OBPF). Simulation results indicate that the TFF and FFP filter cascade successfully improved the system parameters for all three channel spacings (Δf). Q factor versus transmission distance graphs have also been plotted for distances up to 750 km.

Keywords: Coherent detection; Polarization-Multiplexed 16-ary Quadrature Amplitude Modulation (PM-16QAM); Wavelength Division Multiplexing (WDM); BER; Cascaded filters.

I. INTRODUCTION

When different optical signals are transmitted over optical links, the various wavelength components of the signals usually experience different propagation times, because the transmitting medium has a different effective refractive index for each wavelength. In recent times, with the rapid growth of network traffic, people urgently need more capacity from network systems. The main goal of any communication system is to maximize the transmission distance. Loss and dispersion are the major factors that affect fiber-optic communication. With the advent of the EDFA, a major change in fiber-optic communication systems, loss is no longer the main factor restricting fiber-optic transmission. But since the EDFA works in the 1550 nm band, where the average dispersion of Single Mode Fiber (SMF) is very large, approximately 15-20 ps/(nm·km), dispersion becomes the major factor that restricts long-distance fiber-optic communication.
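To see the scale of the problem, the dispersion-induced spread of a signal grows linearly with distance and source linewidth, Δτ = D·L·Δλ. A back-of-the-envelope check, with illustrative numbers of our own rather than the paper's simulation parameters:

```python
# Back-of-the-envelope pulse spreading due to chromatic dispersion.
# delta_tau = D * L * delta_lambda, with D in ps/(nm*km).

def pulse_broadening_ps(D_ps_nm_km, length_km, delta_lambda_nm):
    """Dispersion-induced spread (ps) for a source of the given linewidth."""
    return D_ps_nm_km * length_km * delta_lambda_nm

# SMF near 1550 nm: D ~ 17 ps/(nm*km); 0.1 nm linewidth; 100 km link.
spread = pulse_broadening_ps(17.0, 100.0, 0.1)
print(spread)  # 170.0 ps of spreading over a single 100 km link
```

At multi-gigabaud symbol rates a spread of this size covers many symbol periods, which is why dispersion management dominates the link design below.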
In the search for even higher bit rates and spectral efficiencies, polarization division multiplexing (PDM) and multilevel modulation have recently been very actively investigated. These require coherent detection, enabled by high-speed analog-to-digital converters (ADCs) and digital signal processing (DSP). The most studied and successful implementation of such a format has been polarization-multiplexed quadrature phase-shift keying (PM-QPSK). Today's technology allows a maximum symbol rate of about 25 Gbaud, allowing operation up to 100 Gb/s for PM-QPSK while fitting comfortably within a 50-GHz wavelength-division-multiplexing (WDM) grid. Beyond 100 Gb/s, more complex formats, like 16-level quadrature amplitude modulation (16QAM), are required. This format permits double the bit rate (up to about 200 Gb/s) and achieves remarkable spectral efficiency over 50-GHz and 25-GHz WDM grids. Wavelength division multiplexing (WDM) networks transmit multiple wavelengths simultaneously at high data rates.

The aim of this paper is to study the impact of cascading multiple optical filters on 400Gb/s PM-16QAM systems using 100, 50 and 25 GHz channel spacing. Q factor and BER (Bit Error Rate), or logBER, are used as performance evaluation metrics. In this paper, we focus on various dispersion compensation techniques, including Dispersion Compensation Fiber (DCF), Fiber Bragg Grating (FBG) and various optical filters such as Thin Film Filter (TFF), Optical Band Pass Filter (OBPF) and Bessel


Optical Filter (BOF). Both FBG and TFF filters are extensively used for dispersion compensation because of their flat pass band and low insertion loss. Individually, the FBG has the advantage of low manufacturing cost, but it has a higher dispersion value than the TFF. The TFF filters used in our simulation have been modeled as Butterworth filters whose orders are adjusted to match the other filters' amplitude response at the same bandwidth (BW). Likewise, the OBPF filters used in our simulation have been modeled as Fiber Fabry-Perot (FFP) optical filters whose BW is adjusted to the amplitude response of the other cascaded filters at the same BW. Due to the periodic nature of Fabry-Perot interferometry, Fabry-Perot etalon-based devices are ideally suited to wavelength locking of DWDM system lasers. They also provide frequency stabilization, ensuring that the transmitting laser will not interfere with other wavelengths in DWDM systems.
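One practical consequence of cascading such filters is bandwidth narrowing: each added stage tightens the overall 3-dB bandwidth. The sketch below is our own numerical illustration using first-order Butterworth magnitude responses (the model used for the TFF here); the 50-GHz value is just an example figure, not a result from the paper:

```python
import numpy as np

def butterworth_mag(f, f3dB, order=1):
    """Magnitude response of a low-pass-equivalent Butterworth filter."""
    return 1.0 / np.sqrt(1.0 + (f / f3dB) ** (2 * order))

def cascaded_3dB_bw(f3dB, n_filters, order=1):
    """One-sided 3-dB bandwidth after n identical filters in cascade."""
    f = np.linspace(0.0, 2.0 * f3dB, 200001)
    total = butterworth_mag(f, f3dB, order) ** n_filters
    # First frequency where the cascaded response falls below 1/sqrt(2).
    idx = np.argmax(total < 1.0 / np.sqrt(2.0))
    return f[idx]

bw1 = cascaded_3dB_bw(50.0, 1)  # single filter: ~50 GHz by construction
bw2 = cascaded_3dB_bw(50.0, 2)  # two identical filters: ~32 GHz
print(bw1, bw2)
```

This narrowing is why the filter orders and bandwidths in the simulation are matched across the cascade.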

Fig. 1. (a) Layout of the system and (b) of the coherent receiver.

II. SIMULATION SETUP
We analyze the system setup [1] shown in Fig. 1(a): a multi-span link carrying WDM coherent 400Gb/s PM-16QAM signals with channel spacing Δf = 100, 50 and 25 GHz respectively. A receiver (Rx) was placed after N spans (N varying from 1 to 5). Each span was composed of 120 km of transmission optical fiber, a dispersion-compensating unit (DCU), and an erbium-doped fiber amplifier (EDFA) with noise figure F = 4 dB. A variable optical attenuator (VOA) may also be used to increase the span loss without altering any propagation effects. We assumed that the DCUs are linear and lossless. We used standard single-mode fiber (SSMF) with dispersion D = 16 ps/(nm·km) and dispersion compensation fiber (DCF) with dispersion D = -80 ps/(nm·km).
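With these span parameters, the DCF length needed for full per-span compensation follows from D_SMF·L_SMF + D_DCF·L_DCF = 0. A quick check of the arithmetic (our own helper, not part of the simulator):

```python
def dcf_length_km(d_smf, l_smf_km, d_dcf):
    """DCF length that zeroes the residual dispersion of one span."""
    return -d_smf * l_smf_km / d_dcf

# Span: 120 km SSMF at D = 16 ps/(nm*km), DCF at D = -80 ps/(nm*km).
l_dcf = dcf_length_km(16.0, 120.0, -80.0)
residual = 16.0 * 120.0 + (-80.0) * l_dcf
print(l_dcf, residual)  # 24.0 km of DCF, 0.0 ps/nm residual per span
```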

We used transmitters based on nested Mach-Zehnder modulators (MZM), followed by super-Gaussian optical filters, which help shape the channel spectra. We chose this specific optical filter transfer function because of its commercial availability. The 3-dB bandwidth of the optical filters was set equal to the channel spacing Δf in each case, i.e. 100, 50 and 25 GHz respectively. This turned out to be the best trade-off between intrachannel ISI (Inter-Symbol Interference) and interchannel linear crosstalk.

The coherent receiver (Rx) structure, shown in Fig. 1(b), includes a local oscillator (LO) that is mixed with the incoming signal, separated into X- and Y-polarization components by a polarization beam splitter (PBS), in two 90° hybrids, one for each polarization. The alignment of the LO to the channel center frequency is assumed to be ideal, as is its linewidth. The received signal components are detected by four balanced photodetectors (BPDs). The signals are then filtered by a low-pass Bessel filter with cut-off frequency equal to 0.75 times the bit rate of the system. Chromatic dispersion is fully recovered by a DSP electronic dispersion compensation (EDC) stage based on finite impulse response (FIR) filters. BERs were evaluated using direct error counting on the center channel. The total number of transmitted symbols was 65536. We simulated eight channels and probed the performance of the center one.
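Chromatic dispersion acts as an all-pass phase distortion, H(ω) = exp(−j·β₂·L·ω²/2), so an ideal EDC stage simply applies the conjugate phase (cf. [9]). A minimal frequency-domain sketch of this idea, with an approximate β₂ for SSMF; this is our own illustration, not the FIR implementation used in the simulator:

```python
import numpy as np

# Chromatic dispersion is an all-pass phase distortion; the EDC stage
# applies the conjugate phase so that H_fiber * H_edc = 1 at every frequency.

beta2 = -20.4e-27   # s^2/m, roughly D = 16 ps/(nm*km) at 1550 nm
length = 120e3      # one 120 km span
omega = 2 * np.pi * np.linspace(-25e9, 25e9, 1001)  # baseband grid, rad/s

h_fiber = np.exp(-0.5j * beta2 * length * omega**2)
h_edc = np.conj(h_fiber)        # ideal electronic compensator
product = h_fiber * h_edc

print(np.max(np.abs(product - 1.0)))  # ~0: dispersion fully removed
```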
The simulation has been modeled in five cases, designed to study the impact of various filter cascade systems used to compensate for chromatic dispersion (CD). First, the WDM system is studied in the absence of any filter cascade. In the second case, one Thin Film Filter (TFF) is used along with in-line dispersion compensation; the TFF is modeled as a Butterworth filter of order 1 with a 3-dB bandwidth of Δf. In the third case, a cascade of one TFF and one Fiber Bragg Grating (FBG) is used, with the TFF again modeled as a first-order Butterworth filter and both filters having a 3-dB bandwidth of Δf. In the fourth case, a cascade of one TFF and one Bessel Optical Filter (BOF) is used. Lastly, a cascade of one TFF and one Optical Band Pass Filter (OBPF) is used, with the OBPF modeled as a Fiber Fabry-Perot (FFP) optical filter and both filters again having a 3-dB bandwidth of Δf.

III. RESULTS

When the 400Gbps 8-channel WDM PM-16QAM system was simulated, the following results were obtained, as listed in the tables and graphs below. In the case with Δf = 100 GHz, the system had a Q factor below 6 when no filter was present. When one TFF filter was used, the Q factor improved up to 7.34, 7.64 and 8.6. Moving on to the 1TFF + 1FBG filter cascade, the Q factor improved up to 12.04. The 1TFF + 1BOF cascade gave a remarkable improvement at the shorter distance but did not improve the system any further at longer transmission distances, the Q factor remaining in the normal range of 8.07 to 8.66. Finally, the 1TFF + 1FFP filter cascade remarkably improved the system metrics, with Q factors up to 24.75 and 14.24 at the longer distances, proving to be a good filter cascade configuration for the system.

Table 1: 8-channel 400Gbps WDM PM-16QAM at Δf = 100 GHz (Q factor)

Scenario                                 At 150 km   At 450 km   At 750 km
System with no filter                    3.8         3.11        3.49
System with 1TFF filter                  7.34        7.64        8.6
System with 1TFF + 1FBG filter cascade   7.83        12.04       8.67
System with 1TFF + 1BOF filter cascade   15.12       8.66        8.07
System with 1TFF + 1FFP filter cascade   11.06       24.75       14.24

Figure 2. Fiber length vs Q factor for 8-channel 400Gbps WDM PM-16QAM at Δf = 100 GHz

Similarly, at Δf = 50 GHz, the system had a Q factor below 6 when no filter was present. When one TFF filter was used, the Q factor improved up to 8.18, 7.9 and 7.05. Moving on to the 1TFF + 1FBG filter cascade, the Q factor improved up to 9.02. The 1TFF + 1BOF cascade gave an improvement at the shorter distance but did not improve the system at longer transmission distances, the Q factor remaining in the normal range of 6.29 to 8.16. The 1TFF + 1FFP filter cascade again remarkably improved the system metrics, with Q factors up to 22.7 and 12.8 at the longer distances, again proving to be a good filter cascade configuration for the system.

Table 2: 8-channel 400Gbps WDM PM-16QAM at Δf = 50 GHz (Q factor)

Scenario                                 At 150 km   At 450 km   At 750 km
System with no filter                    5.6         5.5         3.55
System with 1TFF filter                  8.18        7.9         7.05
System with 1TFF + 1FBG filter cascade   9.02        6.41        6.06
System with 1TFF + 1BOF filter cascade   8.16        6.29        5.67
System with 1TFF + 1FFP filter cascade   9.5         22.7        12.8

Figure 3. Fiber length vs Q factor for 8-channel 400Gbps WDM PM-16QAM at Δf = 50 GHz

Also, at Δf = 25 GHz, the system had a Q factor below 6 when no filter was present. When one TFF filter was used, the Q factor improved up to 14.93, 8.15 and 6.9 (i.e. for short distances only). Moving on to the 1TFF + 1FBG filter cascade, the Q factor improved up to 13.37, but again for short distances only. The 1TFF + 1BOF cascade gave an improvement neither at shorter distances nor at longer transmission distances. The 1TFF + 1FFP filter cascade remarkably improved the system metrics, with Q factors up to 17.25 and 10.65 at the longer distances, yet again proving to be a good filter cascade configuration for the system.
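The Q factors and BERs quoted throughout are linked, under Gaussian noise statistics, by BER = ½·erfc(Q/√2). A small helper makes the correspondence explicit (our own utility using the standard relation, not the simulator's internal definition):

```python
import math

def q_to_ber(q):
    """BER corresponding to a linear Q factor, assuming Gaussian noise."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 is roughly the BER = 1e-9 landmark often quoted for optical links.
print(q_to_ber(6.0))    # ~1e-9
print(q_to_ber(24.75))  # effectively error-free
```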


Table 3: 8-channel 400Gbps WDM PM-16QAM at Δf = 25 GHz (Q factor)

Scenario                                 At 150 km   At 450 km   At 750 km
System with no filter                    3.6         3.7         3.6
System with 1TFF filter                  14.93       8.15        6.9
System with 1TFF + 1FBG filter cascade   13.37       8.09        8.35
System with 1TFF + 1BOF filter cascade   4.5         5.7         5.0
System with 1TFF + 1FFP filter cascade   9.73        17.25       10.65

Figure 4. Fiber length vs Q factor for 8-channel 400Gbps WDM PM-16QAM at Δf = 25 GHz

IV. COMMENTS AND CONCLUSION

The channel data rates of 10 Gb/s and 40 Gb/s will be upgraded in the near future to 100 Gb/s and 400 Gb/s with 50 and 100 GHz channel spacing respectively, to match the increase in traffic demands of recent years. 100 Gb/s systems are already available as commercial products, while 400 Gb/s systems are still in research labs. In this paper, we have proposed to compensate for linear CD electronically using dispersion compensation techniques and digital filters for 400 Gb/s systems. The result analysis of all the systems shows that for a 400Gbps PM-16QAM WDM system, the TFF and FFP filter cascade successfully improved the system parameters for all three channel spacings, i.e. Δf = 100, 50 and 25 GHz. While Δf = 100 GHz showed a maximum Q factor of 24.75, Δf = 50 GHz showed a maximum Q factor of 22.7, and Δf = 25 GHz showed a maximum Q factor of 17.25. Q factor versus transmission distance graphs have also been plotted for all three channel spacings, varying the transmission distance from 150 to 750 km.

The spectral efficiencies obtained for the systems are:
1. 4 Gbits/sec/GHz (for Δf = 100 GHz)
2. 8 Gbits/sec/GHz (for Δf = 50 GHz)
3. 16 Gbits/sec/GHz (for Δf = 25 GHz)

Note that we have used only single filters in our filter cascade systems. These can be further extended and manipulated as per the need.
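The spectral efficiencies listed above are simply the per-channel bit rate divided by the channel spacing, as a quick check confirms:

```python
def spectral_efficiency(bit_rate_gbps, spacing_ghz):
    """Spectral efficiency in Gbit/s per GHz (equivalently bit/s/Hz)."""
    return bit_rate_gbps / spacing_ghz

for spacing in (100, 50, 25):
    print(spacing, spectral_efficiency(400, spacing))
# 100 -> 4.0, 50 -> 8.0, 25 -> 16.0
```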

REFERENCES
[1] V. Curri, P. Poggiolini, A. Carena, and F. Forghieri, "Performance Analysis of Coherent 222-Gb/s NRZ PM-16QAM WDM Systems Over Long-Haul Links," IEEE Photonics Technology Letters, vol. 22, no. 5, March 1, 2010.
[2] A. Carena, V. Curri, P. Poggiolini, G. Bosco, and F. Forghieri, "Maximum Reach Versus Transmission Capacity for Terabit Superchannels Based on 27.75-GBaud PM-QPSK, PM-8QAM, or PM-16QAM," IEEE Photonics Technology Letters, vol. 22, no. 11, June 1, 2010.
[3] R. Al-Dalky, A. Elrefaie, T. Landolsi, and M. Hassan, "Performance Degradation of 100 Gb/s PM-QPSK and 400 Gb/s PM-16QAM Dual Carrier Coherent Systems Due to Cascaded Optical Filters," IEEE.
[4] V. Curri, P. Poggiolini, A. Carena, and F. Forghieri, "Dispersion compensation and mitigation of non-linear effects in 111 Gb/s WDM coherent PM-QPSK systems," IEEE Photon. Technol. Lett., vol. 20, no. 17, pp. 1473-1475, Sep. 1, 2008.
[5] D. van den Borne, V. A. J. M. Sleiffer, M. S. Alfiad, S. L. Jansen, and T. Wuth, "POLMUX-QPSK modulation and coherent detection: The challenge of long-haul 100G transmission," in Proc. ECOC 2009, Vienna, Sep. 20-24, 2009, Paper 3.4.1.
[6] A. H. Gnauck and P. J. Winzer, "10 x 112-Gb/s PDM 16-QAM transmission over 1022 km of SSMF with a spectral efficiency of 4.1 b/s/Hz and no optical filtering," in Proc. ECOC 2009, Vienna, Sep. 20-24, 2009, Paper 8.4.2.
[7] M. Kuznetsov, N. M. Froberg, S. R. Henion, and K. A. Rauschenbach, "Dispersion-induced power penalty in fiber-Bragg-grating WDM filter cascades using optically preamplified and nonpreamplified receivers," IEEE Photonics Technology Letters, vol. 12, no. 10, pp. 1406-1408, Oct. 2000.
[8] M. Kuznetsov, N. M. Froberg, S. R. Henion, and K. A. Rauschenbach, "Power penalty for optical signals due to dispersion slope in WDM filter cascades," IEEE Photonics Technology Letters, vol. 11, no. 11, pp. 1411-1413, Nov. 1999.
[9] S. J. Savory, G. Gavioli, R. I. Killey, and P. Bayvel, "Electronic compensation of chromatic dispersion using a digital coherent receiver," Opt. Express, vol. 15, no. 5, pp. 2120-2126, 2007.
[10] A. H. Gnauck, P. J. Winzer, C. R. Doerr, and L. L. Buhl, "10 x 112-Gb/s PDM 16-QAM transmission over 630 km of fiber with 6.2-b/s/Hz spectral efficiency," in Proc. OFC 2009, San Diego, CA, Mar. 22-26, 2009, Paper PDPB8.
[11] A. Carena, V. Curri, P. Poggiolini, and F. Forghieri, "Non-linear propagation limits and optimal dispersion map for 222 Gbit/s WDM coherent PM-16QAM transmission," in Proc. ECOC 2009, Vienna, Sep. 20-24, 2009, Paper P4.11.
[12] S. Kumar, A. K. Jaiswal, M. Kumar, and R. Saxena, "Performance Analysis of Dispersion Compensation in Long Haul Optical Fiber with DCF," IOSR-JECE, vol. 6, issue 6 (Jul.-Aug. 2013), pp. 19-23.
[13] B. Hu, W. Jing, W. Wei, and R. Zhao, "Analysis on Dispersion Compensation with DCF based on Optisystem," 2nd International Conference on Industrial and Information Systems, pp. 40-43, 2010.
[14] J. D. Downie, J. Hurley, D. Pikula, S. Ten, and C. Towery, "Transmission Reach Study of Three Optical Fibers for 200 Gb/s PM-16QAM Systems with 100 km Spans," Optical Society of America, 2013.


Image Compression Using Neural Networks
Neeru Bala
Department of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: neeru.panghal1@gmail.com

Abstract: One of the major difficulties encountered in image processing is the huge amount of data used to store an image. Thus, there is a pressing need to limit the resulting data volume. Image compression techniques aim to remove the redundancy present in data in a way which makes image reconstruction possible. This paper presents the use of neural networks in image compression.

Keywords: Image Processing; Image Compression; Neural Networks

I. INTRODUCTION

Image processing is a very interesting and active area where day-to-day improvement is quite remarkable, and it has become an integral part of our lives. Image processing is the analysis, manipulation, storage, and display of graphical images. An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk. This digitization can be done by a scanner, or by a video camera connected to a frame-grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations. Image processing is primarily used to enhance the quality and appearance of black and white images. It also enhances the quality of scanned or faxed documents by performing operations that remove imperfections. Image processing operations can be roughly divided into three major categories: image enhancement, image restoration and image compression. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image.

Digital image representation requires a large amount of data, and its transmission over communication channels is time consuming. To rectify this problem, a large number of techniques to compress the amount of data representing a digital image have been developed, to make its storage and transmission economical. Image compression continues to be an important subject in many areas such as communication, data storage and computation, and various algorithms have been developed in the past to achieve useful compression. A compression algorithm has a corresponding decompression algorithm that, given the compressed file, reproduces the original file.

Many types of compression algorithms have been developed. They fall into two broad types: 1) lossless algorithms, and 2) lossy algorithms [1]. A lossless algorithm reproduces the original exactly, whereas a lossy algorithm, as its name implies, loses some data.

Data loss may be unacceptable in many applications. For example, text compression must be lossless, because a very small difference can result in statements with totally different meanings. There are also many situations where loss may be either unnoticeable or acceptable, but various applications, such as medical image processing, require accurate retrieval of the image. Image compression thus enhances progress in communication.
II. NEURAL NETWORK

Artificial neural networks are simplified models of


the biological neuron system. A neural network is a
highly interconnected network with a large number of
processing elements called neurons in an architecture
inspired by the brain. Artificial neural networks are
massively parallel adaptive networks which are
intended to abstract and model some of the


functionality of the human nervous system in an


attempt to partially capture some of its computational
strengths [2]. They are considered as the possible
solutions to problems and for the applications where
high computation rates are required.
Construction of a neural network involves three tasks:
- Determine the network properties: the network topology, the type of connection, the order of connection, and the weight range.
- Determine the node properties: the activation range and the transfer function.
- Determine the system dynamics: the weight initialization scheme, the activation calculating formula, and the learning rule.

Fig. 1. Basic neural network

III. BACK PROPAGATION ALGORITHM [3]

Many hundreds of neural network types have been proposed over the years. In fact, because neural nets are so widely studied (for example, by computer scientists, electronic engineers, biologists and psychologists), they are given many different names. You'll see them referred to as Artificial Neural Networks (ANNs), Connectionism or Connectionist Models, Multi-layer Perceptrons (MLPs) and Parallel Distributed Processing (PDP). However, despite all the different terms and different types, there is a small group of classic networks which are widely used and on which many others are based: Back Propagation, Hopfield Networks, Competitive Networks and networks using Spiking Neurons. There are many variations even on these themes. We have used the Back Propagation algorithm in this work.

Most people would consider the Back Propagation network to be the quintessential neural net. Actually, Back Propagation is the training or learning algorithm rather than the network itself. A Back Propagation network learns by example: you give the algorithm examples of the output you want the network to produce (called the Target) for a particular input, and it changes the network's weights so that, when training is finished, it gives the required output for that input. An input and its corresponding target are called a Training Pair. Back Propagation networks are ideal for simple pattern recognition and mapping tasks.

The training using the Back Propagation algorithm involves four stages:
1. Initialization of weights.
2. Feed forward.
3. Back propagation of errors.
4. Updating of weights and biases.
Fig. 2. Back Propagation Architecture [4]
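The four training stages can be sketched end-to-end for a tiny network. The following is an illustrative Python sketch with arbitrary dimensions and a single training pair; it is not the network trained in this work:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Stage 1: initialization of weights (a tiny 4-2-4 network, chosen arbitrarily).
V = rng.normal(0.0, 0.5, (4, 2))   # input -> hidden weights
W = rng.normal(0.0, 0.5, (2, 4))   # hidden -> output weights

x = np.array([[0.1, 0.9, 0.3, 0.7]])  # one training pair: target equals input
target, lr = x, 1.0

for _ in range(5000):
    # Stage 2: feed forward.
    h = sigmoid(x @ V)
    y = sigmoid(h @ W)
    # Stage 3: back propagation of errors (delta rule with sigmoid derivative).
    err_out = (target - y) * y * (1.0 - y)
    err_hid = (err_out @ W.T) * h * (1.0 - h)
    # Stage 4: updating of weights (no biases in this minimal sketch).
    W += lr * h.T @ err_out
    V += lr * x.T @ err_hid

print(np.max(np.abs(target - y)))  # small reconstruction error after training
```

The 4-2-4 shape mirrors the compression idea used later: the 2-unit hidden layer is a narrow bottleneck the input must pass through.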

A. Node Properties
The activation levels of nodes can be discrete (e.g., 0 and 1), continuous across a range (e.g., [0, 1]), or unrestricted. This depends on the transfer (activation) function chosen. If it is a hard-limiting function, the activation levels are 0 (or -1) and 1. Transfer functions calculate a layer's output from its net input. Many transfer functions are included in the Neural Network Toolbox software.

Hard limit transfer function: The hard-limit transfer function shown below limits the output of the neuron to either 0, if the net input argument n is less than 0, or 1, if n is greater than or equal to 0. This function is used in perceptrons, to create neurons that make classification decisions.
The syntax for assigning this transfer function to layer i of a network is given below.


net.layers{i}.transferFcn = 'hardlim';
Algorithm: hardlim(n) = 1 if n >= 0
                        0 otherwise

Algorithm: purelin(n) = n

Fig. 3. (c) Pure linear transfer function


Fig. 3. (a) Hard limit transfer function

Symmetric hard limit transfer function: The syntax for assigning this transfer function to layer i of a network is given below.
net.layers{i}.transferFcn = 'hardlims';
Algorithm: hardlims(n) = 1 if n >= 0
                        -1 otherwise

Log sigmoid transfer function: The log-sigmoid transfer function shown below takes the input, which can have any value between plus and minus infinity, and squashes the output into the range 0 to 1. The syntax for assigning this transfer function to layer i of a network is given below.
net.layers{i}.transferFcn = 'logsig';
Algorithm: logsig(n) = 1 / (1 + exp(-n))
This transfer function is commonly used in back propagation networks, in part because it is differentiable.

Fig. 3. (b) Symmetric hard limit transfer function

Pure linear transfer function: The syntax for assigning this transfer function to layer i of a network is given below.

Fig. 3. (d) Log sigmoid transfer function

net.layers{i}.transferFcn = 'purelin';

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Hyperbolic tangent sigmoid transfer function: The syntax for assigning this transfer function to layer i of a network is given below.
net.layers{i}.transferFcn = 'tansig';
Algorithm: a = tansig(n) = 2/(1+exp(-2*n))-1
This is mathematically equivalent to tanh(n). It differs in that it runs faster than the MATLAB implementation of tanh, but the results can have very small numerical differences. This function is a good trade-off for neural networks, where speed is important and the exact shape of the transfer function is not.
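For reference outside MATLAB, the same transfer functions are straightforward to reproduce; the following are our own Python equivalents of the Toolbox definitions above:

```python
import math

# Python equivalents of the MATLAB NN Toolbox transfer functions above.
def hardlim(n):  return 1 if n >= 0 else 0
def hardlims(n): return 1 if n >= 0 else -1
def purelin(n):  return n
def logsig(n):   return 1.0 / (1.0 + math.exp(-n))
def tansig(n):   return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

print(hardlim(-0.5), hardlims(-0.5), purelin(2.5))   # 0 -1 2.5
print(round(logsig(0.0), 3), round(tansig(1.0), 4))  # 0.5 0.7616
```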

B. Compression Process
Step 1: Read the image pixels and normalize them from the range [0, 255] to the range [0, 1].
Step 2: Divide the image into non-overlapping blocks.
Step 3: Rasterize the image blocks.
Step 4: Apply each rasterized vector to the input layer units.
Step 5: Compute the outputs of the hidden layer units by multiplying the input vector by the weight matrix (V).
Step 6: Store the outputs of the hidden layer units, after renormalizing them, in a compressed file.
Step 7: If there are more image vectors, go to Step 4.
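Steps 1-7 amount to a block-wise projection through the input-to-hidden weight matrix. A compact sketch, using a small stand-in image and a hypothetical random matrix V in place of the trained weights:

```python
import numpy as np

rng = np.random.default_rng(1)

image = rng.integers(0, 256, (8, 8))        # small stand-in image
norm = image / 255.0                        # Step 1: normalize to [0, 1]

blocks = [norm[i:i+4, j:j+4].reshape(-1)    # Steps 2-3: 4x4 blocks,
          for i in range(0, 8, 4)           # rasterized to 16-vectors
          for j in range(0, 8, 4)]

V = rng.normal(0.0, 0.1, (16, 8))           # trained weights would go here
compressed = [b @ V for b in blocks]        # Steps 4-6: hidden-layer outputs

print(len(blocks), compressed[0].shape)     # 4 blocks -> 4 codes of length 8
```

With a hidden layer half the size of the input block, each 16-pixel block is stored as 8 numbers, which is where the compression comes from.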

Fig. 4. Compression Process


Fig. 3. (e) tan Sigmoid Transfer Function

IV. SUMMARY OF WORK

A. NN Learning Algorithm Steps [5]
Step 1: Initialize the network weights, learning rate and threshold error. Set iterations to zero.
Step 2: Total error = zero; iterations = iterations + 1.
Step 3: Feed one vector to the input layer.
Step 4: Initialize the target output of that vector.
Step 5: Calculate the outputs of the hidden layer units.
Step 6: Calculate the outputs of the output layer units.
Step 7: Calculate the error (desired output - actual output) and the total error.
Step 8: Calculate the new error of the output layer units and adjust the weights between the output and hidden layers.
Step 9: Calculate the new error of the hidden layer units and adjust the weights between the hidden and input layers.
Step 10: While there are more vectors, go to Step 3.
Step 11: If threshold error >= total error then stop; otherwise go to Step 2.

C. Decompression Process
Step 1: Take vectors one by one from the compressed image.
Step 2: Normalize each vector (it represents the outputs of the hidden layer units).
Step 3: Compute the outputs of the output layer units by multiplying the outputs of the hidden layer units by the weight matrix.
Step 4: Derasterize the outputs of the output layer units to build the sub-image.
Step 5: Return this sub-image to its proper location.
Step 6: Renormalize this block and store it in the reconstructed file.
Step 7: If there are more vectors, go to Step 1.
V. RESULTS

The following results were observed by testing our algorithm on different standard images using MATLAB R2014b, with a 256×256 grayscale input image, a block size of 4×4, and a hidden-layer

size of 8 for the multilayer neural network. The default trainlm algorithm was used to train the back-propagation neural network for 100 epochs. Two image-quality parameters were evaluated [6],[7]:

MSE = (1/(m·n)) · Σᵢ Σⱼ [I(i,j) − K(i,j)]²

(m and n are the numbers of rows and columns of the test image, I is the original image and K the reconstructed image)

PSNR = 10·log₁₀(MAX_I² / MSE)

Results are shown in Table 1.
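Both metrics are easy to compute directly; the following is our own NumPy version of the formulas above, with MAX_I = 255 for 8-bit images:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr_db(original, reconstructed, max_i=255.0):
    """Peak signal-to-noise ratio in dB (MAX_I = 255 for 8-bit images)."""
    return 10.0 * np.log10(max_i ** 2 / mse(original, reconstructed))

a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)   # every pixel off by 5 -> MSE = 25
print(mse(a, b), round(psnr_db(a, b), 2))  # 25.0 34.15
```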

Table 1: MSE and PSNR for the five test images (original, decompressed and error images not reproduced)

S.No.   MSE      PSNR (dB)
1.      325.07   23.01
2.      169.32   25.84
3.      176.12   25.67
4.      182.26   25.52
5.      386.17   22.26

REFERENCES
[1] K. S. N. Reddy, "Image Compression and Reconstruction Using a New Approach by Artificial Neural Network," no. 6, pp. 68-85.
[2] D. P. Dutta, S. D. Choudhury, M. A. Hussain, and S. Majumder, "Digital Image Compression Using Neural Networks," 2009 Int. Conf. Adv. Comput. Control. Telecommun. Technol., 2009.
[3] J. Jiang, "Image compression with neural networks - a survey," Signal Process. Image Commun., vol. 14, no. 9, pp. 737-760, 1999.
[4] B. Anjana and R. Shreeja, "Image Compression: An Artificial Neural Network Approach," vol. 2, pp. 53-58, 2012.
[5] F. B. Ibrahim, "Image Compression using Multilayer Feed Forward Artificial Neural Network and DCT," vol. 6, no. 10, pp. 1554-1560, 2010.
[6] B. P. Karthikeyan and N. Sreekumar, "A Study on Image Compression with Neural Networks Using Modified Levenberg-Marquardt Method," vol. 11, no. 3, 2011.
[7] B. K. Patel, "Image Compression Techniques Using Artificial Neural Network," vol. 2, no. 9098972681, 2013.
[8] Simon Haykin, Neural Networks and Soft Computing (book).
[9] Christopher M. Bishop, Neural Networks for Pattern Recognition (book).


Determining Shape and Fringe Count in a Holographic Recording Media
Dheeraj, Devanshi Chaudhary, Vivek Kumar, Sandeep Sharma
Department of Electronics and Communication Engineering, DIT University, Dehradun, 248009, India
Email: yadavdheeraj129@gmail.com, devanshi0114@gmail.com
Abstract— Holographic projection is a new wave of technology that is changing the way we see things today. It has gained tremendous traction in major fields of life including business, education, science, arts and healthcare. Digital holograms inherently have a large information content, and lossless coding of holographic data is therefore possible to some extent with the advent of speckled interference fringes. Our work reflects the key idea of determining the shape and count of the fringes that can be seen in a holographic recording media from a certain threshold distance. Applications are found in data security and data compression.

waves meet, they form a bright, constructive-interference fringe; but when a crest and a trough of two waves meet, a dark, destructive fringe pattern forms [2,4].
The paper starts by briefing about holography in Section I. Section II describes the recording of a hologram with the mathematical equations involved, along with our approach regarding the structure of the fringes so formed. Section III presents the number of fringes that can be obtained on a holographic recording medium. Section IV gives the conclusion of the paper, and the references follow in Section V.

Keywords— Holographic Recording Media, Interference Fringes, Holograms, Data Security, Data Compression

A. Determination of structure of fringes

During the recording process a standing-wave interference (fringe) pattern is generated where the reference beam and object beam interfere within the recording layer. This pattern can be recorded by a high-resolution photosensitive plate. The fringe pattern's orientation, or fringe angle, is described by
θf = (θobj + θref)/2
where θf is the angle at which the fringes are oriented in the photosensitive layer, and θobj and θref are the angles of the object and reference waves with the recording media [7].
Let the distance between sources S1 and S2 be d; then from the point O both sources will be at distance d/2 (Fig. 1).
For the 2-D plane: the Y axis will be parallel to S1S2.

I. INTRODUCTION

Holography is a 3-D imaging method in which the complex light wavefront reflected by objects is recorded. Image views of the object can then be obtained by reconstructing the recorded wavefront. Holography is a method evolved by Gabor in 1947, in which one records not only the amplitude but also the phase of the light wave. It is simply interference. During the recording process a standing-wave interference (fringe) pattern is generated where the reference beam and object beam interfere within the recording layer. This pattern can be recorded by a high-resolution photosensitive plate [1,5]. The orientation of the reference wavefront with respect to the object wavefront determines the physical parameters of the reconstruction geometry and the structure of the fringe pattern produced.
In Fig. 1, S is an isotropic source which radiates monochromatic light of uniform intensity; a slit divides this source into two parts, S1 and S2. P is a point at which these two waves, i.e. the reference wave and the object wave, interfere. When the waves interfere we get a pattern of fringes on the recording media. These fringes can be bright or dark. When the crests or troughs of two or more

II. FRINGE STRUCTURE

Let the coordinates of S1 be (0, d/2, D) and S2 be (0, −d/2, D), where D is the distance between the slits and the recording media.
Now, the path difference (Δ) will be
S2P − S1P = Δ = [x² + (y + d/2)² + D²]^(1/2) − [x² + (y − d/2)² + D²]^(1/2)
with P(x, y, 0); since we work in the 2-D plane, the z-coordinate is 0.
The shape of the reference and object wavefronts will affect the fringe structure, but the angular separation of the wavefronts is the main determinant of the fringe orientation [3,6].


maxima, and then we have the first maxima after the first minima, and so on. This shows that if we take any point on vertical line (a) we will always get a maxima; the same holds for the other lines.

Fig. 1: Interference fringe formation [3]

B. Recording of a hologram
Let us take an isotropic point source (one that radiates the same intensity of radiation in all directions). The source is divided into two parts to obtain two individual sources, for the reference wave and the object wave, of the same phase and the same frequency. The amplitude of the wave is represented by A. The complex amplitude of a wave is
A(r, t) = (A/r) e^{i(ωt − kr)}
where r = (x² + y² + z²)^(1/2) [5,6].
From this amplitude equation we observe that the amplitude decreases as r increases.
[x² + (y + d/2)² + D²] = {Δ + [x² + (y − d/2)² + D²]^(1/2)}²
On solving the above equation we get
(2yd − Δ²)² = (2Δ)²[x² + D² + (y − d/2)²]
which reduces to
y² = [Δ²/(d² − Δ²)] · [x² + D² + (d² − Δ²)/4]    (1)

Eq. 1 represents a HYPERBOLA. However, when we assume x² << D², i.e. the slits are far away from the screen, it forms a STRAIGHT-LINE fringe pattern [8]. Therefore, from the above derivation we conclude that the fringes are hyperbolic in shape, but when the screen is far from the slits they form a straight-line fringe pattern.
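As a numeric sanity check of Eq. 1, every point (x, y) it generates should keep the path difference S2P − S1P equal to the same Δ. The values of d, D and Δ below are illustrative assumptions, not from the paper:

```python
import math

# Illustrative values (assumed): slit separation d, slit-to-plate distance D,
# and a fixed path difference delta (arbitrary units, with delta < d).
d, D, delta = 2.0, 50.0, 0.5

for x in (0.0, 5.0, 10.0):
    # y on the fringe from Eq. 1: y^2 = delta^2/(d^2-delta^2) * (x^2 + D^2 + (d^2-delta^2)/4)
    y = math.sqrt(delta**2 / (d**2 - delta**2) * (x**2 + D**2 + (d**2 - delta**2) / 4))
    s2p = math.sqrt(x**2 + (y + d / 2)**2 + D**2)   # distance from S2(0, -d/2, D)
    s1p = math.sqrt(x**2 + (y - d / 2)**2 + D**2)   # distance from S1(0,  d/2, D)
    assert abs((s2p - s1p) - delta) < 1e-9          # path difference stays delta
```

The constancy of S2P − S1P along the curve is exactly what makes the fringe a hyperbola branch.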

III. FRINGE COUNTS

Let us determine the number of fringes in a small part of a hologram. We can decide the number of fringes in a small part of a hologram by calculating the fringe width. Fringe width is the distance between two successive bright fringes or two successive dark fringes. In an interference pattern the fringe width is constant for all the fringes; all the bright fringes as well as the dark fringes are equally spaced. Consider the fringe pattern in Fig. 2: there are both bright fringes and dark fringes. Here bright fringes are shown by black solid lines and dark fringes are represented by the blank white space. When a wave meets at the center of the fringe pattern, i.e. at line (a), we get the central maxima (or zero maxima) of the fringe. We have a dark fringe, or first minima, adjoining the central

Fig. 2: Fringe pattern [9]

Suppose we have a recording plate of dimension n×n. Next, we take a small part of the recording plate of dimension (say) a×a. Thereby, we can calculate the fringe width using the formula
W = λD/2d
(for both bright and dark fringes).
With the number of countable integer fringes we can then estimate the number of fringes in the area a×a and later in the whole n×n plate.
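A sketch of this count estimate, taking the fringe-width formula W = λD/2d from the text; the wavelength, geometry and patch sizes below are illustrative assumptions, not values from the paper:

```python
# Estimate the number of fringes in an a-by-a patch of an n-by-n plate,
# using the fringe width W = lambda*D/(2*d) (straight-line fringe regime).
lam = 633e-9      # He-Ne laser wavelength (m), assumed
D = 0.5           # slit-to-plate distance (m), assumed
d = 1e-3          # source separation (m), assumed
a = 5e-3          # side of the small a*a patch (m), assumed
n_side = 0.05     # side of the full n*n plate (m), assumed

W = lam * D / (2 * d)                 # fringe width (bright-to-bright spacing)
fringes_in_patch = int(a / W)         # countable integer fringes across the patch
fringes_in_plate = int(n_side / W)    # scaled estimate for the whole plate
```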

IV. CONCLUSION

In this paper, we have presented a key aspect of counting the number of fringes in a given holographic recording medium of certain area, by assuming a linear shape of the fringes, which is obtained when the distance between the recording medium and the slits is quite long. The application of this finding can be stretched to lossless/lossy coding of data by saving the data as holograms and further using a dictionary technique to store the linear interference patterns.
REFERENCES
[1] J. Goodman, Introduction to Fourier Optics, Chapter 9: Holography.
[2] Andrew Chan, "Digital hologram".
[3] Guy E. Blelloch, "Introduction to optics".
[4] H. C. Verma, Concepts of Physics, Volume 1.
[5] Tung H. Jeong, Fundamentals of Photonics, Module 1.10: Basic principles and applications of holography.
[6] S. Maniloff, D. Vacar, D. McBranch, et al., Optical Holography (Academic, New York, 1971).
[7] N. V. Kamanina, L. N. Kaporskii, and B. V. Kotov, Optical Communication, 152(46), 280 (1997).
[8] S. Maniloff, D. Vacar, D. McBranch, et al., Optical Theory.
[9] Huai M. Shang, Cheng Quan, Cho J. Tay, and Yua Y. Hung, "Generation of carrier fringes in holography and shearography," Vol. 39, Issue 16, pp. 2638-2645 (2000).


Prediction of Forest Fires Using Artificial Neural Networks
Ishita Aggarwal1, Harsh Joshi1, Divya Arora2, Sandeep Sharma1
1 Deptt. of Electronics and Communication Engineering, DIT University, Dehradun, 248009, India
2 NIEC, New Delhi, India
Email: yadavdheeraj129@gmail.com, devanshi0114@gmail.com

Abstract— Forest fires are a major environmental issue, creating economic and ecological damage while endangering human lives. Fast detection is a key element for controlling such phenomena. In this work we explore a Data Mining approach to predict the burned area of forest fires using a neural network. It is tested on recent real-world data collected from the northeast region of Portugal.

adopted by Hsu et al. [4] to detect forest fire spots in satellite images. In 2005 [5], satellite images from North American forest fires were fed into a Support Vector Machine (SVM). Stojanova et al. [6] applied Logistic Regression, Random Forest (RF) and Decision Trees (DT) to detect fire occurrence in Slovenian forests, using both satellite-based and meteorological data.

Keywords— Neural Networks, Fire Science, Data Mining Application

In contrast with these previous works, we present a novel DM forest fire approach, where the emphasis is the use of real-time and non-costly meteorological data. We will use recent real-world data, collected from the northeast region of Portugal, with the aim of predicting the burned area (or size) of forest fires.

I. INTRODUCTION

One major environmental concern is the occurrence of forest fires (also called wildfires), which affect forest preservation, create economic and ecological damage and cause human suffering. Such phenomena are due to multiple causes (e.g. human negligence and lightning), and despite increasing state expenditure to control this disaster, each year millions of forest hectares (ha) are destroyed around the world. In particular, Portugal is highly affected by forest fires [1]. From 1980 to 2005, over 2.7 million ha of forest area (equivalent to the land area of Albania) were destroyed. The 2003 and 2005 fire seasons were especially dramatic, affecting 4.6% and 3.1% of the territory, with 21 and 18 human deaths. Fast detection is a key element for successful firefighting. Since traditional human surveillance is expensive and affected by subjective factors, there has been an emphasis on developing automatic solutions. These can be grouped into three major categories [2]: satellite-based, infrared/smoke scanners, and local sensors (e.g. meteorological). Indeed, several DM techniques have been applied to the fire detection domain. For example, Vega-Garcia et al. [3] adopted Neural Networks (NN) to predict human-caused wildfire occurrence. Infrared scanners and NN were combined in [2] to reduce forest fire false alarms. A spatial clustering (FASTCiD) was

II. FOREST FIRE DATABASE

The forest Fire Weather Index (FWI) is the Canadian system for rating fire danger, and it includes six components [7]: Fine Fuel Moisture Code (FFMC), Duff Moisture Code (DMC), Drought Code (DC), Initial Spread Index (ISI), Buildup Index (BUI) and FWI. The first three are related to fuel codes: the FFMC denotes the moisture content of surface litter and influences ignition and fire spread, while the DMC and DC represent the moisture content of shallow and deep organic layers, which affect fire intensity. The ISI is a score that correlates with fire velocity spread, while BUI represents the amount of available fuel. The FWI index is an indicator of fire intensity and combines the two previous components. Although different scales are used for each of the FWI elements, high values suggest more severe burning conditions. Also, the fuel moisture codes require a memory (time lag) of past weather conditions: 16 hours for FFMC, 12 days for DMC and 52 days for DC.
The two databases were stored in tens of individual
spreadsheets, under distinct formats, and a substantial
manual effort was performed to integrate them into a


single dataset with a total of 517 entries. This data is available at: http://www.dsi.uminho.pt/pcortez/forestfires/. The burned-area distribution shows a positive skew, with the majority of the fires presenting a small size. It should be noted that this skewed trait is also present in other countries, such as Canada. Regarding the present dataset, there are 247 samples with a zero value. As previously stated, all entries denote fire occurrences, and a zero value means that an area lower than 1 ha/100 = 100 m² was burned. To reduce skewness and improve symmetry, the logarithm function
y = ln(x + 1),
a common transformation that tends to improve regression results for right-skewed targets, was applied.
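The transformation and its inverse (which would be applied to model outputs to recover areas in hectares) can be sketched as:

```python
import math

def transform(area):
    """Apply y = ln(x + 1) to reduce the positive skew of burned-area values."""
    return math.log(area + 1)

def inverse(y):
    """Undo the transform on model predictions: x = exp(y) - 1."""
    return math.exp(y) - 1
```

Note that a zero burned area maps to exactly zero, so the 247 zero-valued samples are unaffected.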
III. DATA MINING MODEL

NN are connectionist models inspired by the behavior of the human brain. In particular, the multilayer perceptron is the most popular NN architecture. It consists of a feedforward network where processing neurons are grouped into layers and connected by weighted links [8]. This study considers multilayer perceptrons with one hidden layer of H hidden nodes with logistic activation functions, and one output node with a linear function [9]. Since the NN cost function is nonconvex (with multiple minima), NR runs are applied to each neural configuration, and the NN with the lowest penalized error is selected. Under this setting, the NN performance depends on the value of H.

Fig. 1: Number of hidden nodes = 100; number of learning iterations = 50; learning rate = 0.1

A regression dataset D is made up of k ∈ {1, ..., N} examples, each mapping an input vector (x_k1, ..., x_kA) to a given target y_k. The error is given by e_k = y_k − ŷ_k, where ŷ_k represents the predicted value for the k-th input pattern. The overall performance is computed by a global metric, namely the Mean Absolute Deviation (MAD) and the Root Mean Squared Error (RMSE), which can be computed as:
MAD = (1/N) · Σ_k |y_k − ŷ_k|
RMSE = ( Σ_k (y_k − ŷ_k)² / N )^(1/2)

In both metrics, lower values correspond to better predictive models; however, the RMSE is more sensitive to large errors. Another possibility for comparing regression models is the Regression Error Characteristic (REC) curve [10], which plots the error tolerance (x-axis), given in terms of the absolute deviation, versus the percentage of points predicted within the tolerance (y-axis). The ideal regressor should present a REC area close to 1.0.
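A minimal sketch of these three evaluation tools, assuming targets and predictions held in plain lists:

```python
import math

def mad(y, y_hat):
    """Mean Absolute Deviation over N examples."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    """Root Mean Squared Error; penalizes large errors more than MAD."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def rec_point(y, y_hat, tol):
    """One point of the REC curve: fraction of predictions with |error| <= tol."""
    return sum(abs(a - b) <= tol for a, b in zip(y, y_hat)) / len(y)
```

Sweeping `tol` from 0 upward and integrating `rec_point` traces the REC curve whose area the text refers to.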
IV. EXPERIMENTAL RESULT

All experiments reported in this study were conducted using the Azure ML web service. The results were extracted from the spatial, temporal, and meteorological features.


Azure Machine Learning, a fully-managed cloud service for building predictive analytics solutions, helps overcome the challenges most businesses have in deploying and using machine learning by delivering a comprehensive machine learning service with the benefits of the cloud. In mere hours, with Azure ML, customers and partners can build data-driven applications to predict, forecast and change future outcomes, a process that previously took weeks and months, bringing together the capabilities of new analytics tools, powerful algorithms developed for Microsoft products like Xbox and Bing, and years of machine learning experience into one simple and easy-to-use cloud service.

V. CONCLUSION

Forest fires cause significant environmental damage while threatening human lives. In the last two decades, a substantial effort was made to build automatic detection tools that could assist Fire Management Systems (FMS). The three major trends are the use of satellite data, infrared/smoke scanners and local sensors (e.g. meteorological). In this work, we propose a Data Mining (DM) approach that uses meteorological data, as detected by local sensors in weather stations, which is known to influence forest fires. The advantage is that such data can be collected in real time and at very low cost when compared with the satellite and scanner approaches. Recent real-world data from the northeast region of Portugal was used in the experiments. The database included spatial and temporal attributes, components from the Canadian Fire Weather Index (FWI), and four weather conditions. The problem was modeled as a regression task, where the aim was the prediction of the burned area.

REFERENCES
[1] European-Commission, Forest Fires in Europe, Technical report N-4/6, 2003/2005.
[2] Arrue, A. Ollero, and J. Matinez de Dios, "An Intelligent System for False Alarm Reduction in Infrared Forest-Fire Detection," IEEE Intelligent Systems, 15(3):64-73, 2000.
[3] Vega-Garcia, B. Lee, P. Woodard, and S. Titus, "Applying neural network technology to human-caused wildfire occurrence prediction," AI Applications, 10(3):9-18, 1996.
[4] W. Hsu, M. Lee, and J. Zhang, "Image Mining: Trends and Developments," Journal of Intelligent Information Systems, 19(1):7-23, 2002.
[5] Mazzoni, L. Tong, D. Diner, Q. Li, and J. Logan, "Using MISR and MODIS Data For Detection and Analysis of Smoke Plume Injection Heights Over North America During Summer 2004," AGU Fall Meeting Abstracts, pages B853+, December 2005.
[6] Stojanova, P. Panov, A. Kobler, S. Dzeroski, and K. Taskova, "Learning to Predict Forest Fires with Different Data Mining Techniques," in D. Mladenic and M. Grobelnik, editors, 9th International Multiconference Information Society (IS 2006), Ljubljana, Slovenia, 2006.
[7] S. Taylor and M. Alexander, "Science, technology, and human factors in fire danger rating: the Canadian experience," International Journal of Wildland Fire, 15:121-135, 2006.
[8] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, New Jersey, 2nd edition, 1999.
[9] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer-Verlag, NY, USA, 2001.
[10] J. Bi and K. Bennett, "Regression Error Characteristic curves," in Proceedings of the 20th International Conference on Machine Learning (ICML), pages 43-50, Washington DC, USA, 2003.


Efficient Video Facial Analysis and Face Recognition
Harsh Pandey1, Vivek Kumar1, Sandeep Sharma1, Divya Arora2
1 Deptt. of Electronics and Communication Engineering, DIT University, Dehradun, 248009, India
2 NIEC, New Delhi, India
Email: harsh020596@gmail.com

Abstract— As one of the most important applications of image analysis and understanding, facial recognition has received significant attention during the past few years. The problem of effective facial recognition remains attractive to researchers across multiple disciplines such as image processing, machine learning, pattern recognition and computer vision. Research efforts are ongoing to advance the state of the art in various aspects of face processing. The proposed system employs a robust face analysis algorithm for facial recognition in a video in real time. This algorithm uses skin tone as a feature to track the face across video frames. The system demonstrates an effective way to tackle the limitations of the adapted Viola-Jones face detection algorithm.
Keywords Image Processing, Video Processing,
Facial Recognition
I. INTRODUCTION

Facial recognition was not much explored until the last few years, when researchers started taking a keen interest in the underlying discipline. The concept finds application in today's latest technologies and has contributed to the advancement of technology to a great extent. In recent years face detection has attracted much attention not only in the field of object detection and facial analysis but also in the field of computer vision and automatic access control systems, since it has potential applications not only for engineers but also for neuroscientists. Facial recognition can be used in biometric and security systems, and to provide information regarding a desired test subject. Facial detection and recognition are not simple tasks to perform, and individuals new to this field find it complex to implement the facial recognition concept in the real world. Facial recognition in an image poses quite a

problem when the output is required to be highly accurate, and when the facial analysis is to be performed on a video the implementation complexity increases tenfold. Video analysis poses a greater problem, as different videos are made at different frames per second (fps), so developing a robust algorithm for facial recognition across videos of different fps is a complex task. Facial analysis is not just about detecting and tracking faces in the image; it also involves many variations of image appearance, such as pose variation (frontal, non-frontal), occlusion, image orientation, illumination conditions and facial expression. Many attempts have been made to provide efficient and accurate face detection and face recognition techniques, such as:
Low level analysis - This technique is based on the concept of analyzing low-level visual features by using pixel properties like intensity levels, edges, and color properties [2].
Motion based face detection - When a video sequence is available, motion information can be used to locate moving objects. Moving silhouettes like the face and body parts can be extracted by simply thresholding accumulated frame differences. Besides face regions, facial features can be located by frame differences [2].
Level based face detection - The gray information of an image can also be considered as a feature. For example, facial features like eyebrows, pupils, and lips are usually darker than their surrounding regions. This property can be used to differentiate various facial parts. Several recent facial feature extraction algorithms basically search for local gray minima within segmented facial regions. All the approaches discussed so far are rigid in nature, and hence fail to solve some problems like locating faces of various poses in complex backgrounds, and approaches such as motion-based face detection techniques tend to lose track of


the face when the face is tilted or rotated to some angle [2].

Once the weak classifiers are provided, AdaBoost iteratively combines them to form a linear combination:
C(x) = Σ_t α_t h_t(x) + b    (2)
This linear combination drives the training error toward zero.
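A toy sketch of the boosted combination in Eq. 2; the threshold weak classifiers and the α weights below are invented for illustration, not learned by an actual AdaBoost training run:

```python
def strong_classify(x, weak_learners, alphas, b=0.0):
    """Evaluate the boosted linear combination C(x) = sum_t alpha_t * h_t(x) + b,
    then threshold it to a +1/-1 label.

    weak_learners: functions h_t returning +1 or -1, as in Eq. 1
    alphas       : per-round weights (chosen by AdaBoost during training)
    """
    score = sum(a * h(x) for a, h in zip(alphas, weak_learners)) + b
    return 1 if score >= 0 else -1

# Toy weak classifiers: threshold tests on a scalar feature (illustrative only)
h1 = lambda x: 1 if x > 0.3 else -1
h2 = lambda x: 1 if x > 0.6 else -1
```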

Fig 1. Face to be tracked

II. METHODOLOGY

The methodology includes the following steps:
1. Detect the face to track using the modified Viola-Jones algorithm.
   1.1 Testing of the algorithm by training the classifier using a dataset in Microsoft Azure ML.
   1.2 Implementation of the algorithm.
2. Identify the facial features to track the face.
3. Efficient tracking of the face.

1. Detect the face to track:

Face detection is the primary step in detecting the face and then tracking the detected face. Many different algorithms can be used for face detection, but the algorithm used here is the Viola-Jones algorithm with modified features, which provides a robust algorithm; the modified features remove the limitations faced by the Viola-Jones algorithm.
1.1 Viola-Jones algorithm
The Viola-Jones algorithm is a widely used algorithm for real-time object detection. Training in this algorithm is a slow process, but this limitation is compensated by its fast detection capabilities. The algorithm is provided with a large dataset which contains faces as well as non-faces, where the faces come with many variations such as pose, illumination and variation across individuals, and the classifier is made to learn on this labeled data.
AdaBoost is initially given a set of weak classifiers:
h_j(x) ∈ {+1, −1}    (1)

Fig 2. Viola-Jones Algorithm

1.2 Implementation of the Algorithm:
The algorithm is implemented by first testing its underlying features by passing datasets through the algorithm. Primarily, a dataset of around 350 images was collected, including different faces, non-faces and faces with different variations; the collected dataset was then used as labeled data to train the classifier when passed through the algorithm in Microsoft Azure ML. The output was not satisfactory due to the limited amount of data provided during the training session, but it was enough to conclude that much better results would follow if a larger dataset were provided.
2. Identify the features to track the face:
This is the second step, after the face is detected. The system needs to continuously track the face in successive video frames and should not lose track of it even if the fps is increased. For this task we need to choose a parameter which should not change with the movement of the object in the video, i.e. it should remain a static parameter [3]. The static parameter used here is the skin tone of the person whose face is to be detected. Skin tone provides a good deal of contrast between the face and the background and does not change with the motion of the object. The RGB image captured from the video frame is converted to the HSV color space, as the RGB color space describes colors in terms of the amount of red, green, and blue present. The HSV color space describes colors in terms of


the Hue, Saturation, and Value. In situations where


color description plays an integral role, the HSV
color model is often preferred over the RGB model.
The HSV model describes colors similarly to how the
human eye tends to perceive color. RGB defines
color in terms of a combination of primary colors,
whereas, HSV describes color using more familiar
comparisons such as color, vibrancy and brightness.
The skin tone information is obtained by extracting
the hue from the video frames.
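The hue extraction described above can be sketched with the standard library's colorsys module, assuming a frame stored as rows of (R, G, B) tuples with 0-255 values:

```python
import colorsys

def hue_channel(rgb_frame):
    """Extract the hue channel from an RGB frame (pixel values 0..255).

    Returns hue in [0, 1) per pixel; skin pixels cluster in a narrow hue band,
    which is the information the tracker's histogram is later built from.
    """
    return [[colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
             for (r, g, b) in row]
            for row in rgb_frame]
```

In a real pipeline a library such as OpenCV would perform this conversion per frame; the list-based version here only illustrates the color-space step.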
3. Tracking the face across successive video frames:

With skin tone selected as the tracking parameter, the skin tone information is obtained by extracting the hue from the video frames. The system tracks the face using this information with the help of a histogram-based tracker, which is based on the CamShift algorithm. The Continuously Adaptive Mean Shift (CamShift) algorithm is an adaptation of the mean shift algorithm for object tracking, intended as a step towards head and face tracking algorithms for perceptual user interfaces [1]. The primary difference between CamShift and the Mean Shift algorithm is that CamShift uses continuously adaptive probability distributions, while mean shift is based on static distributions, which are not updated unless the target experiences a significant change in shape, size or color [1]. In this system, hue channel pixels are extracted from the nose region and used to initialize the histogram for the tracker.

Fig 4. Hue Channel Data

III. FUTURE WORK

Due to the pilot nature of the system investigated in this project, some aspects are suggested for further development.
1. Future work will focus on minimizing the effects of poor illumination in different video frames during skin-tone information extraction.
2. Minimizing the false positive rate to an almost negligible level will be a task for future optimization.
3. Training on datasets should be made faster, and the object detection rate should be increased for video with high fps.
IV. CONCLUSION

This paper aims to develop a robust algorithm for efficient face detection and face recognition. The algorithm used in the overall system design increases efficiency and is feasible as well as easy to install. The paper provides an approach to tackle the drawbacks of previously proposed approaches.
Fig 3. Detected Face in Video

REFERENCES
[1] John G. Allen and Richard Y. D. Xu, "Object tracking using Camshift algorithm and multiple quantized feature spaces," School of Information Technologies, University of Sydney.
[2] Gary Chern, Paul Gurney, and Jared Starman, "Face detection analysis."
[3] Inseong Kim, Joon Hyung Shim, and Jinkyu Yang, "Face Detection."
[4] Wenyi Zhao, "Face Processing: Advanced Modeling and Methods."
[5] Robert M. Gray and Lee D. Davisson, "An Introduction to Statistical Signal Processing."
[6] M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.


Design of A Multipurpose Orthosis Assistant and Prosthetic Limb
Nipun Sachdeva, Garvit Dahiya, Pratyush Gupta, Divya Arora
Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: divyaa.mtech@gmail.com
Abstract— The solution proposed in this paper aims to improve the quality of life of physically challenged people who struggle to carry out even their daily work. These disabilities often cannot be treated, and even when treatment is possible, it costs far more than a normal person can actually afford. Our solution cannot treat the disability medically, but addresses it very well in a different way, through the involvement of orthosis and prosthesis. This paper aims to design easily affordable prosthetics/orthotics that can become a solution to these problems, built using complex robotic systems. The presence of robotic systems in today's world may lead to complex technical problems. In the current scenario we need a reliable, efficient and affordable product which disabled people can use in their daily lives. We are making this possible by taking in suggestions from some renowned orthopedists and doctors.
Keywords - Prosthetics, Orthotics, Open source, Handicapped.

I. INTRODUCTION

The type of engineering work involved is called rehabilitative engineering. The main objective of this class of engineering is to reduce or eliminate the negative impact of non-reversible physical disabilities on the quality of life of these patients. The device is a multipurpose limb that can be used in cases of both orthosis and prosthesis. The parts are designed to be generic in order to widen the field of use.
The paper also justifies the use of prosthetics by able-bodied people for various household tasks that involve lifting heavy loads: it can help laborers who struggle with heavy loads and army soldiers who carry heavy weapons; with the use of our design they can easily lift these weapons, bombs, heavy bags, etc. The technology used in the development of the limb is completely open source, which allows other people to develop it further.

II. MECHANICAL DESIGN

The lengths of the arm and its joints were collected from 10 test subjects in the age group 20-40, and the mean of that data is taken as the standard for the design, with scope for small adjustments.

Table 1: Length of arm (mean of 10 subjects)

Shoulder to elbow   Elbow to wrist   Wrist to fingers   Total length
150 mm              250 mm           120 mm             520 mm

The human arm is very complex in its operation, and no current technology comes close to replicating it. The design philosophy is therefore centered on giving the amputee just the bare minimum functionality.

The basic functions of an arm are:
1. Picking and placing
2. Holding an object by hand
3. Rotating and revolving
4. Providing support to the body
5. Giving stability to the body while walking

Of these, functions 4 and 5 are the most basic and must be addressed first. Functions 2 and 3 are advanced functions that are good to have but not essential. Function 1 is second only to the basics: the ability of an arm to pick and place an object is the capability most desired and used by humans.
The different elements of the design of the limb are:
1. Body support
2. Upper arm support
3. Lower arm support


4. Gripper to pick an object

Designing of the parts:

A. CIRCUIT DESIGN: The design involves self-designed circuitry with an onboard controller. It is designed keeping in mind compatibility with almost all controllers, which helps meet our vision of contributing to open-source development. The self-designed PCB layout of the circuit is shown in Fig. 2.

Fig. 2. Circuit Diagram

B. DC GEARED MOTORS: These motors provide a sufficient amount of torque as well as RPM.
1. FSR: A Force Sensing Resistor placed in the palm of the user to provide the actuation signal to the motors.
2. ATMEGA16: The microcontroller IC used in the project. It is an 8-bit microcontroller, chosen for the following reasons:

- High performance, low power consumption
- Advanced RISC architecture
- High-endurance, non-volatile memory segments
- 32 programmable I/O lines
- Operating voltage: 2.7-4.5 V
- Power consumption:
  - Active: 1.1 mA
  - Idle mode: 0.35 mA
  - Power-down mode: < 1 uA
Fig. 1. Designing of Parts

I. ULN2004: Also called a noise bridge. These are high-voltage, high-current Darlington drivers comprising seven Darlington pairs. Features of this IC are:


1. Output current: 500 mA
2. High sustaining output voltage: 50 V minimum
3. Output clamp diodes
4. Inputs compatible with various types of logic

II. Voltage Regulators: Since the power source is DC, voltage regulators of the 78XX class have been used in this circuit.

III. RELAYS: Some motors in the multipurpose limb do not require any speed control; these are controlled using relays. Relays are electromagnetic switches that do not have any fixed actuation polarity. The type of relay used is SPDT, i.e. Single Pole Double Throw.
III. COST EVALUATION

Cost is the deciding factor in the success of the prosthetic arm for the masses. An expensive arm is practically infeasible and will not find favor with the masses.
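The vendor comparison in this section uses a quality-for-money (QFM) metric, QFM = cost at quality x life cycle / labour cost. As a quick sketch of that formula (Python; the figures are vendor A's quotation from Table 2, and the formula as printed reproduces vendor A's tabulated QFM value):

```python
def qfm(cost, life_cycle_days, labour_cost):
    """Quality-for-money metric: cost at quality x life cycle / labour cost."""
    return cost * life_cycle_days / labour_cost

# Vendor A's quotation: cost 7000, life cycle 3500 days, labour cost 2500
print(qfm(7000, 3500, 2500))  # 9800.0, matching vendor A's QFM entry
```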

Various vendors were contacted and quotations collected from them on the basis of a minimum order of 1000 pcs.

Table 2: Cost Comparison

Vendor   Cost   Life cycle (days)   Labour cost
A        7000   3500                2500
B        6800   3200                3000
C        6500   3000                3200

The basis for selection of the vendor is QFM (quality for money): the quality of product delivered with respect to the labour cost, multiplied by the number of days of operation. The formula used is

QFM = cost at quality x life cycle / labour cost

Vendor   QFM
A        9800
B        6375
C        6093

As seen from the table, the QFM factor of vendor A is the highest, so vendor A is selected for the manufacturing of the prosthetic arm.

IV. CONCLUSION

There is a great need for enhancement in prosthetics. This project is a step towards making the lives of several people affected by various disabilities easier. Advancements in this technique will help them live their lives normally.

REFERENCES

[1]. "A Brief History of Prosthetics," inMotion, November-December 2007. Retrieved 23 November 2010.
[2]. "How artificial limb is made - Background, Raw materials, The manufacturing process of artificial limb, Physical therapy, Quality control," Madehow.com, 1988-04-04. Retrieved 2010-10-03.
[3]. G. Smit, R. M. Bongers, C. K. Van der Sluis, and D. H. Plettenburg, "Efficiency of voluntary opening hand and hook prosthetic devices: 24 years of development?," Journal of Rehabilitation Research and Development, 49 (4): 523-534, 2012.
[4]. Open Prosthetics website; V. Cheng (2004), "A victim assistance solution," http://www.ispo.ca/files/bicycleprosthesis.pdf; permanently attached robotic arm operated by mind control.


Channel Capacity Comparison of MIMO Systems with Rician Distributions and Rayleigh Distributions
Manoranjan Kumar, Harsh Kumar

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: mksharma0343@gmail.com, harsh.mitp@gmail.com
Abstract - This paper presents the MIMO channel capacity over Rician and Rayleigh fading channels. Here the Rician fading model employs a zero-mean stochastic sinusoid as the line-of-sight component. The paper offers analysis and simulations of the behavior of a MIMO system and its expected capacity for various channel distributions under flat fading. Several types of distributions (Rician and Rayleigh) are considered with different parameters to generate the channel matrix and determine the capacity for several combinations of antenna numbers on both the transmitter and receiver sides.

Keywords - Fading distributions, Rayleigh distribution, Rician distribution, MIMO channels.

I. INTRODUCTION

Multiple-antenna wireless terminals, used along with special signal-processing techniques to achieve diversity and multiplexing benefits, characterize multiple-input multiple-output (MIMO) wireless technology. MIMO technology exploits the space dimension, in addition to the time and frequency dimensions, to deliver data rates and a quality of service otherwise unmatched with comparable spectral resources. A MIMO channel is represented by a channel matrix whose elements are the channel gains between transmitter-receiver antenna pairs; thus, mathematical tools such as random matrix theory help the analysis. Many different techniques have been proposed for the modeling and simulation of mobile radio channels.

II. THEORY

The general MIMO system is shown in Fig. 1, with N_T transmitter antennas and N_R receiver antennas. The signal model [2] is represented as

r = H x + n    (1)

where r is the (N_R × 1) received signal vector, x is the (N_T × 1) transmitted signal vector, n is the (N_R × 1) complex additive white Gaussian noise (AWGN) vector with variance σ², and H is the (N_R × N_T) channel matrix. The channel matrix H represents the effect of the medium on the transmitter-receiver links and can be written element-wise as

H = [h_ij], i = 1, ..., N_R; j = 1, ..., N_T    (2)

Fig. 1. MIMO channel model

The channel matrix may offer K equivalent parallel sub-channels with different mean gains [4], where

K = rank(H) ≤ min(N_T, N_R)    (3)

A singular value decomposition (SVD) can be used to demonstrate the effect of the channel matrix H on the capacity. The channel matrix H [6] can then be expressed as

H = U B V^H    (4)

where the columns of the unitary matrix U (N_R × N_R) contain the eigenvectors of H H^H, the columns of the unitary matrix V (N_T × N_T) contain the eigenvectors of H^H H, and the diagonal matrix B (N_R × N_T) has non-negative, real-valued elements (called singular values), equal to the square roots of the eigenvalues of H H^H.
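The relations in eqs. (3)-(4) can be checked numerically. The sketch below (NumPy; the antenna counts are arbitrary illustrative values, not from the paper) verifies that the rank bound holds and that the singular values of H are the square roots of the eigenvalues of H H^H:

```python
import numpy as np

rng = np.random.default_rng(1)
nr, nt = 2, 3  # illustrative N_R, N_T (assumption for demonstration)
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

U, svals, Vh = np.linalg.svd(H)           # H = U B V^H, eq. (4)
K = np.linalg.matrix_rank(H)
print(K <= min(nt, nr))                    # eq. (3): K = rank(H) <= min(N_T, N_R)

# Singular values equal the square roots of the eigenvalues of H H^H
eig = np.sort(np.linalg.eigvalsh(H @ H.conj().T))[::-1]
print(np.allclose(svals ** 2, eig))
```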

Assuming that the channel is known at both the transmitter and the receiver (full or perfect channel state information, CSI), the maximum normalized capacity with respect to bandwidth (in terms of b/s/Hz spectral efficiency) of the K parallel sub-channels [3] equals

C = Σ_{i=1}^{K} log₂(1 + P_i λ_i / σ²)    (5)

where P_i is the power allocated to sub-channel i, determined to maximize the capacity using the water-filling theorem such that each sub-channel is filled up to a common level D:

P_i + σ² / λ_i = D    (6)

or

P_i = D − σ² / λ_i    (7)

subject to the condition that the sub-channel powers sum to the total transmitted power [5]:

Σ_{i=1}^{K} P_i = P    (8)

If σ² / λ_i > D, then P_i is set to zero.

A brief overview of the random distributions used in this work follows.

RICIAN DISTRIBUTION:
The Rician distribution is appropriate when the receiver's position is in line of sight (LOS) with respect to the transmitter, so that the received multipath signal contains an LOS component. The density function of this distribution [1] is given by

f(x) = (x / b²) exp(−(x² + s²) / (2b²)) I₀(xs / b²),  x ≥ 0    (9)

where I₀ is the zero-order modified Bessel function of the first kind, s (s ≥ 0) is the non-centrality parameter and b (b > 0) is the scale parameter. The Rician distribution is used to generate the channel matrix and determine the related capacity for the system:

H = [h_ij], |h_ij| ~ Rice(s, b)    (10)

RAYLEIGH DISTRIBUTION:
Rayleigh distributions are used to model scattered signals that reach a receiver by multiple paths. The Rayleigh distribution is a special case of the Weibull distribution, whose distribution function [5] is given by

f(x | a, β) = (β / a)(x / a)^(β−1) exp(−(x / a)^β),  x > 0    (11)

A Weibull distribution with shape β = 2 and scale a = √2·b reduces to a Rayleigh distribution with scale parameter b, whose probability density function [3] is given by

f(x | b) = (x / b²) exp(−x² / (2b²)),  x ≥ 0    (12)

This Rayleigh distribution is used to generate the channel matrix and determine the related capacity for the system:

H = [h_ij], |h_ij| ~ Rayleigh(b)

III. SIMULATION RESULTS

In this paper, MATLAB m-files and Simulink are used to verify the model and simulate the effects of several types of distributions (Rician and Rayleigh) on the channel matrix of a MIMO system under flat fading. The antenna configurations considered are listed in Table 1.

Table 1: Antenna configurations

Case   Transmitter antennas (N_T)   Receiver antennas (N_R)
1st    1                            1
2nd    2                            2
3rd    3                            3
4th    4                            4
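As an illustrative sketch of eqs. (5)-(8) and of the channel generation in eqs. (10) and (12), the following Python/NumPy fragment (not the authors' MATLAB code; all parameter values are assumptions for demonstration) estimates the average water-filling capacity for Rayleigh and Rician channel matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def waterfill_capacity(H, total_power):
    """Capacity (b/s/Hz) of a MIMO channel with water-filling, noise variance 1.

    Sub-channel gains lam_i are the eigenvalues of H H^H (squared singular
    values); powers p_i = max(0, D - 1/lam_i) with sum(p_i) = total_power,
    following eqs. (5)-(8).
    """
    lam = np.linalg.svd(H, compute_uv=False) ** 2   # gains, descending
    lam = lam[lam > 1e-12]
    inv = 1.0 / lam                                  # ascending "floor" levels
    for k in range(len(lam), 0, -1):
        D = (total_power + inv[:k].sum()) / k        # candidate water level D
        if D > inv[k - 1]:                           # k sub-channels stay active
            break
    p = np.maximum(D - inv, 0.0)
    return float(np.log2(1.0 + p * lam).sum())

def rayleigh_H(nr, nt, b=1.0):
    # |h_ij| ~ Rayleigh(b): zero-mean complex Gaussian entries, eq. (12)
    return b * (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt)))

def rician_H(nr, nt, s=1.0, b=1.0):
    # |h_ij| ~ Rice(s, b): complex Gaussian with a non-zero (LOS) mean of
    # magnitude s, eq. (10)
    return (s / np.sqrt(2)) * (1 + 1j) + rayleigh_H(nr, nt, b)

snr_db = 20
P = 10 ** (snr_db / 10)
for n in (1, 2, 4):
    c = np.mean([waterfill_capacity(rayleigh_H(n, n), P) for _ in range(300)])
    print(f"{n}x{n} Rayleigh: {c:5.1f} b/s/Hz")
```

Averaging over many channel draws reproduces the qualitative trend reported below: capacity grows with the number of antenna pairs at a fixed SNR.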


A. RICIAN DISTRIBUTION

The first distribution considered is the Rice distribution with three different sets of the non-centrality parameter s and the scale parameter b. The capacity of the system (in b/s/Hz) for each set of Rician distribution parameters is calculated for each case in Table 1 over a wide range of SNR (-10 dB to 30 dB). Each case is represented by a capacity curve using a different color and marker symbol. The first set of parameters is unity non-centrality parameter (s = 1) and unity scale parameter (b = 1). The achieved results are shown in Fig. 2.

Fig. 2. The channel capacity with Rician distribution (s = 1, b = 1)

From inspection of Fig. 2, for the 1st case (N_T = 1, N_R = 1) it is obvious that the capacity increases as the signal-to-noise ratio (SNR) increases, in accordance with eq. (5), with the channel matrix H generated by the Rician distribution as in eq. (9).
For the 2nd case (N_T = 2, N_R = 2), the capacity is improved for the same values of SNR compared to the 1st case because of the increased number of antennas on both the transmitter and receiver sides.
The 3rd case (N_T = 3, N_R = 3) shows that the capacity is increased for the same values of SNR compared to the 1st and 2nd cases, the increase approximating exponential behavior.
The 4th case (N_T = 4, N_R = 4) shows that the capacity is increased for the same values of SNR compared to the previous cases, in still more nearly exponential fashion.

B. RAYLEIGH DISTRIBUTION

The capacity of the system (in b/s/Hz) for each set of Weibull distribution parameters is calculated for each case in Table 1 over a wide range of SNR (-10 dB to 30 dB). The results in Fig. 3 illustrate the variation of capacity with the number of employed antennas. The capacity is an increasing function of the number of antennas on both the transmitter and receiver sides, in a manner similar to that of the Rician distribution.

Fig. 3. The channel capacity with Rayleigh distribution

Compared with the results in Fig. 2, the capacity with the Rayleigh distribution is lower than that with the Rician distribution (s = 1, b = 1).

IV. CONCLUSION

The obtained results give insight into the influence of the chosen distribution on the estimated capacity of a multiple-input multiple-output (MIMO) system, and lead to a better understanding of the effect of each distribution and how it can be used to approximate different environments. Changing the parameters of each distribution, for the same number of antenna pairs at receiver and transmitter and the same SNR, leads to different capacity values, since the parameters affect the generation of the H matrix. Investigating further channel distributions would also be beneficial, leading to better channel models for different operation scenarios and various environments.
REFERENCES
[1]. B. P. Lathi and Zhi Ding, Modern Digital and Analog Communication Systems.
[2]. C. E. Shannon, "A Mathematical Theory of Communication."
[3]. John G. Proakis and Masoud Salehi, Communication Systems Engineering, 2nd Ed., PHI Learning Private Limited, 2009.
[4]. P. Dent, G. E. Bottomley, and T. Croft, "Jakes fading model revisited," Electron. Lett., vol. 29, no. 13, pp. 1162-1163, June 1993.
[5]. Bengt Holter, "The capacity of the MIMO channel."
[6]. Jia-Chin Lin (Ed.), Recent Advances in Wireless Communications and Networks, 1st edition, InTech, 2011.
[7]. Ali E. (Ed.), Wireless Communications and Networks - Recent Advances, 1st edition, InTech, 2012.


A Review of CFOA-based Single Element Controlled Oscillators
Surendra Kumar

Dept. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: skladhoria88@gmail.com
Abstract - This paper presents a review of single-element controlled sinusoidal oscillators based on the Current Feedback Operational Amplifier (CFOA). A general CFOA-based topology and the important advantages of CFOA-based oscillators are highlighted. AD844-type CFOAs are commercially available, confirming the practical workability of the oscillator configurations.

Keywords - Analog circuits, current feedback operational amplifiers (CFOA), sinusoidal oscillators, single element controlled oscillator (SECO).

I. INTRODUCTION

Single-element controlled sinusoidal oscillators have numerous applications in instrumentation and measurement systems and may be used as, or in, test oscillators and signal generators for testing radio receivers, measurement of standing-wave ratio, signal-to-noise ratio, etc. Traditionally, the IC operational amplifier (op-amp) has been the workhorse of sinusoidal oscillators. Lately, however, the current-feedback operational amplifier (CFOA), particularly the four-terminal type such as the AD844, which provides an externally accessible compensation terminal, has been attracting prominent attention as an alternative building block for analog circuit design because of the several advantages it offers over the traditional op-amp. CFOAs do usually have DC offset and gain errors and are therefore not favored for DC/VLF applications. Interest in designing sinusoidal oscillators using CFOAs grew when it was realized that CFOA-based oscillators can offer improved performance in terms of frequency accuracy, dynamic range, distortion level, and frequency span compared to their VOA-based counterparts [3]. Studies show that CFOA-based oscillators have high noise immunity, low distortion, relatively large slew rate, extended bandwidth, low sensitivity, low power consumption, and greater precision and reliability with respect to the conventional amplifier [4].

II. VARIOUS METHODS TO DESIGN SECOs

A. Oscillator with three CFOAs

A canonic second-order oscillator (i.e., one employing only two capacitors) can, in general, be characterized by an autonomous state equation, from which the characteristic equation (CE) follows, giving the condition of oscillation (CO) and the frequency of oscillation (FO).

Fig. 1. Exemplary CFOA-based SRCO evolved through the state-variable synthesis [1]

The proposed methodology involves selecting the parameters a_ij (i = 1; j = 1, 2) in accordance with the


required features (e.g., non-interacting controls of FO and CO through separate resistors), conversion of the resulting state equation into node equations (NEs) and, finally, constructing a physical circuit from these NEs. Thus, to have FO controlled by one resistor (say R1) and CO controlled by another resistor (say R3), the required oscillator should have a CE of the appropriate form. One of the several possible choices of the parameters a_ij implied by that equation (obtained by matching eqs. 2 and 3) is substituted into eq. 1 to generate the NEs (eqs. 5 and 6).
From these NEs, a physical circuit can be formulated by noting the following:
(a) Using two grounded capacitors C1 and C2, with voltages across them of x1 and x2, the terms on the left-hand side of eqns. 5 and 6 represent the two capacitor currents.
(b) The currents x2/R1, x2/R3 and (x1 - x2)/R2 can be generated by devising inverting/non-inverting/differential voltage-controlled current sources (VCCS).
(c) Summation of the currents on the right-hand side of eqn. 6 can be accomplished, without requiring any additional hardware, merely by joining the relevant current-output terminals.
The final CFOA-based SRCO obtained by implementing the NEs of eqns. 5 and 6 as per (a)-(c) is shown in Fig. 1. Apart from providing the intended non-interacting controls of FO and CO, the other novel features of the structure are:
(i) the availability of buffered outputs;
(ii) accountability of the parasitic z-terminal capacitances (which appear in parallel with C1 and C2);
(iii) the feasibility of designing external-capacitor-less active-R SRCOs (by deleting C1 and C2 and instead employing only the parasitic z-terminal capacitances);
(iv) the availability of quadrature outputs (x1 and x2 are in quadrature; see eqn. 5) [1, Section II].

Fig. 2. Variation of FO with respect to R1 [1]

B. Oscillator with two CFOAs

Fig. 3. Single element controlled oscillator [3]

The condition of oscillation (CO) and frequency of oscillation (FO) are given in [3].


Fig. 5. Most general six-node structure [2]

Fig. 4. Variation of oscillation frequency with resistance R1 of the circuit in Fig. 3 [3]

It is useful to examine the effect of the various CFOA non-ideal parameters on the FO of the circuit, taking into consideration the finite X-terminal input resistance rx and the parasitic impedance at the Z-terminal, consisting of a resistance Rp in parallel with a capacitance Cp. The influence of the CFOA parasitics on the performance of the oscillators can be reduced by choosing external resistances much greater than rx and much smaller than Rp, and external capacitors much larger than Cp.
The variability of the oscillation frequency with resistor R1 for the oscillator circuit is shown in Fig. 4. The component values were C1 = 2 nF, C2 = 1 nF, R2 = R3 = 4.7 kΩ and Vcc = 15 V DC, and R1 was varied from 100 Ω to 100 kΩ. Fig. 4 shows that, over the frequency range of interest, the percentage error in frequency due to the CFOA parasitics is not more than 2.5%, which explains the close correspondence between the experimental data and the non-ideal curve in Fig. 4 [3, Section II].
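To get a feel for single-resistance frequency control of the kind shown in Fig. 4, the short sweep below uses the generic two-capacitor expression f0 = 1/(2π√(R1·R2·C1·C2)). Both this expression and the component pairing are assumptions for illustration only, not the exact FO formula of the circuit in Fig. 3; the C and R values echo the experiment quoted above:

```python
import numpy as np

# Assumed generic SRCO expression: f0 = 1 / (2*pi*sqrt(R1*R2*C1*C2)).
# Component values follow the quoted experiment (C1 = 2 nF, C2 = 1 nF,
# R2 = 4.7 kohm); R1 is swept over the same 100 ohm - 100 kohm range.
C1, C2 = 2e-9, 1e-9
R2 = 4.7e3

for R1 in (100.0, 1e3, 10e3, 100e3):
    f0 = 1.0 / (2 * np.pi * np.sqrt(R1 * R2 * C1 * C2))
    print(f"R1 = {R1:>8.0f} ohm -> f0 = {f0 / 1e3:8.1f} kHz")
```

Under this assumed formula the frequency falls as 1/√R1, spanning roughly two decades over the swept range, which is the qualitative behaviour a single-resistance-controlled oscillator exhibits.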
C. Oscillator with single CFOA

Fig. 6. An SRCO derived from the structure of Fig. 5 [2]

The most general six-node structure, containing twelve branches to enable enumeration of the complete set of canonic SRCOs, is shown in Fig. 5
and has been obtained by first making a structure in which every node has admittances connected to all remaining nodes and then eliminating the admittances which become redundant due to the constraints imposed by the terminal characteristics of the CFOA (i.e., iy = 0, vx = vy, iz = ix and vw = vz). Thus, in arriving at the circuit of Fig. 5, an admittance connected between W and ground, an admittance connected between W and Z, and an admittance connected between X and Y have been omitted.
A systematic search was then made from this structure by considering all possible combinations of five branches (where a branch may be either a resistance or a capacitance) out of the twelve which may lead to a second-order characteristic equation


(CE) with the capability of having a pair of imaginary conjugate roots, and also resulting in single-resistance control of the frequency of oscillation (FO), or of both FO and the condition of oscillation (CO). This search led to a number of SRCOs, of which the circuit in Fig. 6 is one. The non-ideal expressions for the SRCO are:

[2, Section II]

Fig. 7. Variation of frequency versus resistance R6 (122 Ω - 9.3 kΩ) for the SRCO of Fig. 6 (R3 = 2 kΩ, R10 = 1 kΩ + 10 kΩ pot, C9 = C11 = 0.01 µF) [3]

It can be expected that, because of the CFOA parasitics, the CO and FO of the circuit will no longer be independent. Considering the x-port parasitic input resistance Rx and the z-port parasitic impedance (Rp || Cp), it may be noted that the effect of Rx, Rp and Cp is not very critical on the CO of the circuit. Since, in practice, COs are always adjusted through some variable resistor in the circuit, the influence of the parasitics only results in an altered value of this condition-setting resistor, which is not a problem. From the expressions for CO and FO, it can be seen that the effect of the parasitics on the CO, as well as the percentage error in FO, can both be kept small if the external capacitors are chosen to be much larger than Cp (typically 4 to 5 pF) and the external resistors are chosen such that their values are much larger than Rx (typically 50 Ω) and much smaller than Rp (typically 3 MΩ) [2, Section III].

D. Oscillator capable of absorbing all parasitic impedances

Fig. 8. The proposed oscillator [7]

Assuming an ideal CFOA characterized by iy = 0, vx = vy, iz = ix and vw = vz, a straightforward analysis gives the condition of oscillation (CO) of the circuit of Fig. 8 as eq. (7), whereas the frequency of oscillation (FO) is given by eq. (8).
It is seen that the circuit can be adjusted to produce oscillation by varying the resistor R4, which does not feature in the expression for the oscillation frequency. A non-ideal model of the CFOA, showing the various parasitics, is shown in Fig. 9.

Fig. 9. Non-ideal model of the CFOA [7]

For an AD844-type CFOA, the typical values of the various parasitic impedances are Rx = 50 Ω, Ry = 2 MΩ, Cy = 2 pF, Rp = 3 MΩ, Cp = 4.5 pF. On careful inspection of the proposed circuit, it is observed that Rx, Cy and Ry can be absorbed in the external elements R0 and C0 respectively. Thus a pre-distorted design of the circuit can very


well incorporate these parasitic elements. A re-analysis of the circuit after incorporating all the parasitic impedances shows that the non-ideal CO and FO are now given by eqs. (9) and (10), where the various non-ideal component values follow from the parasitics listed above.

The circuit is realized using an AD844-type CFOA biased with 12 V DC power supplies. Ten different sets of component values were considered for designing the circuit to generate different frequencies. The component values, the oscillation frequencies calculated from equation (8) and the practically observed frequencies are shown in Table 1.

Table 1. Comparison between theoretical and practical values

Thus, it is seen from Table 1 that, because the parasitic component values are incorporated into the design (all of them being absorbed in the various external RC components employed), the non-ideal values of the oscillation frequencies are very close to the practically observed values. In fact, the percentage error between the observed oscillation frequencies and those calculated from the practical formula (8) is in the range of -0.001% to 0.0012% only, and is thus extremely small, as expected from the theoretical analysis [7, Section II].

III. CONCLUSION

A review of several SECOs has been carried out on the basis of the effects of the CFOA parasitics, and approximate formulas for the percentage error caused in FO were determined. The parasitic capacitance of the CFOA affects the circuit and has been taken into account. By judicious choice of the external resistors and capacitors (Rx << Rexternal << Rp; Cexternal >> Cp) the errors may be kept small. Absorbing the parasitic impedances of the CFOA in the external RC components shown in Fig. 7 ensures that the error between the practically observed oscillation frequencies and those calculated from the non-ideal formula is extremely small. Simulation or experimental results for most of the oscillator circuits reviewed are available in the references indicated below.

REFERENCES

[1] R. Senani and S. S. Gupta, "Synthesis of single-resistance-controlled oscillators using CFOAs: simple state-variable approach," IEE Proc. Circuits, Devices and Systems, Vol. 144, April 1997, pp. 104-106.
[2] V. K. Singh, R. K. Sharma, D. R. Bhaskar and R. Senani, "Two new canonic single-CFOA oscillators with single-resistor controls," IEEE Transactions on Circuits and Systems, Vol. 52, December 2005, pp. 860-864.
[3] D. R. Bhaskar and R. Senani, "New CFOA-based single-element-controlled sinusoidal oscillators," IEEE Trans. Instrum. Meas., Vol. 55, No. 6, December 2006, pp. 2014-2020.
[4] Sahaj Saxena and Prabhat Kumar Mishra, "A novel equi-amplitude quadrature oscillator based on CFOA," International Journal of Advanced Science and Technology, Vol. 31, June 2011, pp. 93-98.
[5] D. K. Srivastava and V. K. Singh, "Single-capacitor-controlled oscillator using a single CFOA," International Conference on Circuits, System and Simulation, IPCSIT Vol. 7, 2011, pp. 23-27.
[6] M. Soliman, "New grounded capacitor single resistance controlled sinusoidal oscillators using two CFOAs," J. of Active and Passive Electronic Devices, Vol. 7, 2012, pp. 209-213.
[7] D. K. Srivastava, V. K. Singh and R. Senani, "Novel single-CFOA-based sinusoidal oscillator capable of absorbing all parasitic impedances," American Journal of Electrical and Electronic Engineering, Vol. 3, No. 3, May 2015, pp. 71-74.


Comparison of Classical and Dynamic Time Warping Time Series Clustering Algorithms
Neha Sharma(1), Rumita Sharma(2), Jagmale Singh(3)

(1) Asstt. Prof., Department of ECE, Northern India Engg. College, Delhi
(2, 3) Lecturer, Department of Computer Engg., Govt. Polytechnic, N. Chopta (Sirsa)

E-mail: mrs.nehakaushik@gmail.com, rrumita@gmail.com, jagmel.danoda@gmail.com


Abstract - This paper compares two data-mining techniques for clustering. A classical hierarchical clustering algorithm is compared with a newly proposed DTW (dynamic time warping) clustering algorithm on time-series data. Time-series clustering is one of the important concepts of data mining, used to gain insight into the mechanism that generates a time-series and to predict its future values. Time-series data are frequently very large, and their elements have a temporal ordering. The proposed work follows the raw-data-based approach: DTW is used as the distance/similarity measure in a hierarchical clustering algorithm with an inter/intra-cluster-distance-based swap. DTW uses dynamic programming to find all possible warping paths and selects the one that results in the minimum distance between two time-series. The inter/intra-cluster-distance-based swap is used to refine the output of the hierarchical clustering. The performance of the proposed work is evaluated using clustering validity indices. Validity indices provide a way to judge the optimal number of clusters and also help identify which clustering procedure gives compact and well-separated clusters. The proposed algorithm gives consistent results on data sets of varying sizes.
Keywords CVAP, clustering, DTW, validity
indices, time series database,
I. INTRODUCTION

A time-series database [1] consists of sequences of data points measured typically at successive times spaced at uniform time intervals (e.g., hourly, daily, weekly, and monthly). A time-series database is also a

sequence database [1]. Sequence database is any


database that consists of sequences of ordered events,
with or without concrete notions of time e.g. web
page traversal sequences and customer shopping
transaction sequences are sequence data, but they
may not be time-series data.

Fig. 1.Sequence Database


1.1 Taxonomy of Time-Series Clustering Algorithms
The existing time-series clustering algorithms are categorized as temporal-proximity-based, model-based, and representation-based [2].
1.1.1 Temporal-Proximity-Based Clustering
Methods that work with raw data, either in
time domain or frequency domain, are placed into
this category. It is also known as raw-data-based
approach. It uses distance measures which consider
temporal relations, e.g. Dynamic Time Warping,
Temporal-means/Hierarchical Clustering.
Advantages: It is a direct way to capture dynamic behavior and prevents the loss of information. It provides a flexible means to deal with temporal data of variable length.
Disadvantages: This approach is sensitive to
initialization and has high computational complexity.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
88

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

1.1.2 Representation-Based Clustering


This approach is also known as feature-based
clustering. In this, a set of features are extracted from
raw temporal data and then any of the existing
clustering algorithm can be applied on the feature
space, e.g. Discrete Wavelet Transform, Adaptive
Piecewise Constant Approximation, and Curvature
Based PCA Segments etc.
Advantages: It is compatible with the existing static
data clustering algorithms. It significantly reduces the
computational cost.
Disadvantages: Loss of useful information conveyed in the original data can occur, and there may be difficulty in the selection of an appropriate extraction method.
II. DYNAMIC TIME WARPING

Among the methods commonly used for similarity measurement is the dynamic time warping (DTW) distance, proposed by Berndt and Clifford in the context of speech recognition to account for differences in speaking rates between speakers and utterances [6].

- It is often used for time-series similarity measurement and classification, and to find corresponding regions between two time-series.
- Unlike the Minkowski distance function, it breaks the limitation of one-to-one alignment and also supports time-series of unequal length. The Euclidean distance (a limiting case of the Minkowski distance) can also be used [4].

The Euclidean distance between two time-series is simply the sum of the squared distances from each nth point in one time-series to the nth point in the other. The main disadvantage of using the Euclidean distance for time-series data is that its results can be very unintuitive: if two time-series are identical but one is shifted slightly along the time axis, the Euclidean distance may consider them to be very different from each other. DTW was introduced to overcome this limitation and to give intuitive distance measurements between time-series by ignoring both global and local shifts in the time dimension.

DTW uses a dynamic programming technique to find all possible paths and selects the one that results in the minimum distance between the two time-series, using a distance matrix in which each element is the cumulative distance of the minimum of its three surrounding neighbors.

Let X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) be two given time-series. DTW aligns the two series so that their difference is minimized and finds the warping path that has the minimum distance between the two time-series. A warping path W = (w1, w2, ..., wK), where max(m, n) <= K <= m + n - 1, is a set of matrix elements that


satisfies three constraints: the boundary condition, continuity, and monotonicity [3].

- The boundary condition requires the warping path to start and finish in diagonally opposite corner cells of the matrix, i.e. w1 = (1, 1) and wK = (m, n).
- The continuity constraint restricts the allowable steps to adjacent cells.
- The monotonicity constraint forces the points in the warping path to be monotonically spaced in time.

Mathematically,

    DTW(X, Y) = min{ sqrt( sum_{k=1..K} w_k ) }    (1)

Dynamic programming can be used to efficiently find this path by evaluating the following recurrence. First, we create an n-by-m matrix, where every (i, j)th element of the matrix is the cumulative distance of the distance at (i, j) and the minimum of the three elements neighboring the (i, j)th element, where 0 <= i <= n and 0 <= j <= m:

    gamma(i, j) = d(i, j) + min{ gamma(i-1, j), gamma(i, j-1), gamma(i-1, j-1) }    (2)

It must be noted that during the DTW calculation in the above equation there could be ties in selecting the minimum value from the three surrounding elements. In this case, the algorithm arbitrarily chooses any neighbor in the tie, thus producing different optimal warping paths, even though the warping distance always turns out to be the same.
The following algorithm [6] illustrates the implementation of dynamic time warping when the two sequences are strings of discrete symbols, where d(i, j) is a distance between symbols, e.g. d(i, j) = |s[i] - t[j]|:

int DTWDistance(char s[1..n], char t[1..m]) {
    declare int DTW[0..n, 0..m]
    declare int i, j, cost

    for i := 1 to m
        DTW[0, i] := infinity
    for i := 1 to n
        DTW[i, 0] := infinity
    DTW[0, 0] := 0

    for i := 1 to n
        for j := 1 to m
            cost := d(s[i], t[j])
            DTW[i, j] := cost + minimum(DTW[i-1, j],    // insertion
                                        DTW[i, j-1],    // deletion
                                        DTW[i-1, j-1])  // match

    return DTW[n, m]
}
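The same recurrence can be written as a runnable sketch for numeric series (the function name `dtw_distance` and the absolute-difference cost d(i, j) = |x_i - y_j| are our illustrative choices, not from the paper):

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two numeric sequences,
    following the cumulative-distance recurrence described above."""
    n, m = len(x), len(y)
    INF = float("inf")
    # (n+1) x (m+1) matrix; row 0 and column 0 form the 'infinity' border.
    dtw = [[INF] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])            # d(i, j)
            dtw[i][j] = cost + min(dtw[i - 1][j],      # insertion
                                   dtw[i][j - 1],      # deletion
                                   dtw[i - 1][j - 1])  # match
    return dtw[n][m]
```

Note that a series and a locally stretched copy of itself get a distance of zero, which is exactly the behavior the one-to-one Euclidean alignment lacks.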
DTW has a quadratic time and space complexity that limits its use to small time-series data sets. FastDTW [61], an approximation of DTW, has been proposed that has linear time and space complexity. Moreover, varieties of DTW such as Re-sampled DTW, Hybrid DTW and CEDTW exist that have greater accuracy and higher performance [34].
2.1 Inter/Intra-Cluster-Distance-Based Swap
This approach is used to refine the output of the hierarchical clustering method, and it removes the inability of hierarchical clustering to reconsider an object once it has been assigned to a cluster. Once we get K clusters as the output of hierarchical clustering, we apply the inter/intra-cluster-distance-based swap to refine these K clusters and get a new set of clusters as output.

Given two clusters A and B, A = {a1, a2, ..., am} and B = {b1, b2, ..., bn}, where m and n are the sizes of A and B, let W be the similarity matrix among the time-series, with entries Wuv (u != v). The distance between clusters A and B is:

    dist(A, B) = ( sum_{u in A, v in B} Wuv ) / ( |A| x |B| )    (3)

where |A| and |B| are the sizes of A and B respectively.

For a single time-series u in A, its average linkage to cluster A is L(u, A) = ( sum_{v in A} Wuv ) / |A|, and its linkage to cluster B is L(u, B) = ( sum_{v in B} Wuv ) / |B|. From these two equations we calculate the linkage difference diff(u) = L(u, A) - L(u, B). In this way, the linkage difference diff(u) can be calculated for an arbitrary time-series u in A. For every u in A, there are two conditions on the value of diff(u):

- diff(u) = L(u, A) - L(u, B) < 0. This means series u has a relatively larger linkage to cluster B even though it is located in cluster A; we then move series u to cluster B.
- diff(u) = L(u, A) - L(u, B) >= 0. This means series u has a relatively larger linkage to its initial cluster A; we then do nothing in this situation.

Similarly, for every v in B, there are two conditions on diff(v) = L(v, B) - L(v, A):

- diff(v) < 0. This means series v has a relatively larger linkage to cluster A even though it is located in cluster B; we then move series v to cluster A.
- diff(v) >= 0. This means series v has a relatively larger linkage to its initial cluster B; we then do nothing in this situation.
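The two-cluster swap described above can be sketched as follows (an illustrative sketch: the names `avg_linkage` and `swap_refine` are ours, and W is assumed to be a symmetric similarity matrix in which a larger entry means a stronger linkage):

```python
def avg_linkage(u, cluster, W):
    # Average similarity of series u to the members of `cluster` (excluding u itself).
    members = [v for v in cluster if v != u]
    return sum(W[u][v] for v in members) / len(members) if members else 0.0

def swap_refine(A, B, W):
    """One pass of the inter/intra-cluster-distance-based swap over clusters A and B."""
    new_A, new_B = [], []
    for u in A:
        # diff(u) = L(u, A) - L(u, B); negative => u links more strongly to B.
        (new_B if avg_linkage(u, B, W) > avg_linkage(u, A, W) else new_A).append(u)
    for v in B:
        (new_A if avg_linkage(v, A, W) > avg_linkage(v, B, W) else new_B).append(v)
    return new_A, new_B
```

A misplaced series (one whose average similarity to the other cluster exceeds its similarity to its own) is moved, and everything else stays put.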

III. PROPOSED ALGORITHM

1. Start by assigning each item to its own cluster, so that if there are N items, there are N clusters, each containing just one item.
2. Use Dynamic Time Warping as the time-series similarity measure, and let the similarities between the clusters equal the similarities between the items they contain.
3. Find the most similar pair of clusters and merge them into a single cluster, so that the number of clusters is reduced by one.
4. Compute the average linkage as the similarity between the new cluster and each of the old clusters.
5. Repeat steps 3 and 4 until K clusters are obtained.
6. Adopt the inter/intra-cluster-distance-based swap to refine the K clusters from step 5 and then get the new K clusters.
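Steps 1 to 5 can be sketched as plain average-linkage agglomeration over a precomputed pairwise DTW distance matrix D, where D[u][v] holds the DTW distance between series u and v (an illustrative sketch: the name `agglomerate` and the brute-force search over cluster pairs are ours):

```python
def agglomerate(D, K):
    """Average-linkage agglomeration on a precomputed distance matrix D,
    merging until K clusters remain (the swap of step 6 is applied afterwards)."""
    clusters = [[i] for i in range(len(D))]  # step 1: one item per cluster

    def linkage(a, b):
        # Average pairwise distance between clusters a and b (steps 2 and 4).
        return sum(D[u][v] for u in a for v in b) / (len(a) * len(b))

    while len(clusters) > K:  # steps 3 and 5: repeatedly merge the closest pair
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))
    return clusters
```

The brute-force pair search keeps the sketch short; a production implementation would cache linkages between merges.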

3.1 Evaluation Criteria
Cluster validation is an important and necessary step in cluster analysis. There are various validation indices and functions that provide validity measures for each partition. The validation indices [7] also provide a clear picture of the optimal number of clusters. Some of the indices used are discussed in brief:
Silhouette index: Better clustering quality is indicated by a larger Silhouette value.
Davies-Bouldin index: The lower the value, the better the cluster structure.
Calinski-Harabasz index: It evaluates the clustering solution by looking at how similar the objects are within each cluster and how dissimilar the different clusters are. It is also called a pseudo-F statistic.
Krzanowski-Lai index: Optimal clustering is indicated by the maximum value.
Dunn's index: This index is proposed for the identification of compact and well-separated clusters. Large values indicate the presence of compact and well-separated clusters.
Alternative Dunn index: The aim of modifying the original Dunn's index was to make the calculation simpler.
In the case of overlapping clusters the above index values are not very reliable, because of the repartitioning of the results with the hard partition method, so the indices below are more relevant for fuzzy clustering.
Xie and Beni's index: It aims to quantify the ratio of the total variation within clusters to the separation of clusters. The optimal number of clusters should minimize the value of this index.
Partition index: It is the ratio of the sum of compactness and separation of the clusters: a sum of individual cluster validity measures normalized through division by the fuzzy cardinality of each cluster. A lower value of the Partition index indicates a better partition.
In our work we give more importance to the Silhouette, Davies-Bouldin and Dunn's indices, and the measurement of these indices is done with the help of CVAP (Cluster Validity and Analysis Platform).
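As a concrete example of one of these measures, Dunn's index can be computed directly from pairwise distances (an illustrative sketch; in this work the indices are actually measured with CVAP):

```python
def dunn_index(clusters, dist):
    """Dunn's index: smallest inter-cluster distance divided by the largest
    intra-cluster diameter. Larger values mean compact, well-separated clusters."""
    inter = min(dist(u, v)
                for i, a in enumerate(clusters)
                for b in clusters[i + 1:]
                for u in a for v in b)
    intra = max((dist(u, v) for c in clusters for u in c for v in c if u != v),
                default=1e-12)  # guard against all-singleton clusterings
    return inter / intra
```

A well-separated two-cluster partition scores high, while scattering the same points across the clusters drives the index down.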
3.2 CVAP
CVAP (Cluster Validity and Analysis Platform) is a visual cluster validation tool based on MATLAB. It provides important tools and a convenient analysis environment for the validity evaluation of clustering algorithms. It includes 4 external validity indices and 14 internal validity indices. It supports other clustering algorithms via loading a solution file with class labels or by adding new code, and the similarity metrics of Euclidean distance and Pearson correlation coefficient are supported.
IV. PERFORMANCE INVESTIGATION AND RESULTS

After preprocessing and normalization of the stock data, experiments are performed on the stock data using the Cluster Validity and Analysis Platform (CVAP), a MATLAB tool. The first step is to find the clustering of the given data using both the classical and the improved hierarchical clustering algorithms. In the next step, we calculate the validity indices.


Fig. 4. Davies-Bouldin Index of Classical Hierarchical Clustering for Stock Data

Fig. 2. Classical Hierarchical Clustering Dendrogram for Stock Data

4.1 Clustering and Calculation of Validity Indices for Stock Data
Both algorithms are performed on the given stock data set and their results are presented in this section. Hierarchical clustering (Fig. 2) is represented by a two-dimensional figure called a dendrogram, a tree-like structure. Validity indices are used to present a clear picture of the optimal number of clusters. There are many validity indices but, in our work, the Silhouette, Davies-Bouldin and Dunn's indices are given importance.

Fig. 5. Dunn's Index of Classical Hierarchical Clustering for Stock Data

Fig. 3. Silhouette Index of Classical Hierarchical Clustering for Stock Data

Table 4.1. Validity Indices for Classical Hierarchical Clustering for Stock Data

K  | Silhouette Index | Davies-Bouldin Index | Dunn's Index
2  | 0.46413 | 0.80669 | 2.2112
3  | 0.31945 | 1.0837  | 1.0219
4  | 0.25678 | 1.1163  | 1.2001
5  | 0.24476 | 0.92565 | 1.2203
6  | 0.20217 | 1.0808  | 0.97968
7  | 0.22244 | 1.0385  | 1.0898
8  | 0.21411 | 1.0334  | 1.0898
9  | 0.23006 | 1.0469  | 1.3167
10 | 0.20921 | 1.0213  | 1.2856

Since the Silhouette and Dunn's indices are higher-the-better indices, and from Table 4.1 these indices have their highest values at K=2, while Davies-Bouldin is a lower-the-better index and is lowest at K=2, all these indices show good clustering at K=2. So, the optimal number of clusters is 2 for the given stock data by classical hierarchical clustering.

Fig. 9. Dunn's Index of Proposed Hierarchical Clustering for Stock Data

Fig. 6. Proposed Hierarchical Clustering Dendrogram for Stock Data

Fig. 7. Silhouette Index of Proposed Hierarchical Clustering for Stock Data

Table 2. Comparison Indices

K  | Silhouette Index | Davies-Bouldin Index | Dunn's Index
2  | 0.62694  | 0.2979492 | 6.4047
3  | 0.47244  | 2.00695   | 0.35494
4  | 0.31902  | 10.46175  | 0.35494
5  | 0.13596  | 14.68835  | 0.037149
6  | 0.078199 | 35.6478   | 0.018048
7  | 0.02902  | 29.8439   | 0.006852
8  | 0.023598 | 27.47298  | 0.0068984
9  | -0.03432 | 175.1764  | 0.00059638
10 | -0.03392 | 224.7963  | 0.00059638

From Table 2, it is clear that all these indices show good clustering at K=2. So, the optimal number of clusters is 2 for the given stock data by the proposed hierarchical clustering. A higher value of Dunn's index indicates compact and well-separated clusters. The value of Dunn's index by the proposed method is 6.4047, which is much higher than 2.2112, the value

of Dunn's index by the classical hierarchical method. So, the clustering by the proposed method is efficient and makes compact clusters as compared to classical hierarchical clustering.
4.2 Major Contribution of Proposed Approach

Fig. 8. Davies-Bouldin Index of Proposed Hierarchical Clustering for Stock Data

The proposed methodology uses the hierarchical clustering algorithm with DTW and the inter/intra-cluster-distance-based swap method. The main features of the proposed work are as follows:


Scalability: Most time-series data sets are high dimensional, and clustering high-dimensional data is a challenging task. The proposed method gives consistent results on large as well as small datasets, while the classical hierarchical clustering methods do not scale to large datasets.

Consistently high clustering accuracy: Experimental results show that the improved hierarchical clustering method outperforms well-known clustering algorithms in terms of accuracy. It is robust and consistent even when applied to large datasets.

Number of clusters as an optional parameter: Many existing clustering algorithms, like K-means, require the user to specify the desired number of clusters as a parameter. The proposed algorithm treats it as an optional parameter. By using validity indices for the evaluation of the proposed clustering algorithm, the optimal number of clusters and the clustering quality can be identified.

V. CONCLUSIONS

In this work, the hierarchical clustering algorithm is improved by using DTW as the distance/similarity measure. DTW uses dynamic programming to find the optimal distance and is often used for time-series data. It is an elastic measure and breaks the limitation of the one-to-one alignment of the Euclidean distance. Moreover, the inter/intra-cluster-distance-based swap removes the limitation of hierarchical clustering, refines the clusters, and makes the proposed algorithm more scalable and efficient. The proposed algorithm has been tested on the Stock and Intel sensor network datasets. The experimental results show that the proposed method performs better than the classical hierarchical clustering algorithm in terms of dimensionality, scalability and efficiency.

The proposed method shows a higher Dunn's index as compared to classical hierarchical clustering; hence it gives more accurate and efficient clustering output. The improved hierarchical clustering algorithm has shown high performance independent of the size of the data sets, and is therefore more scalable than the classical hierarchical clustering algorithm.

REFERENCES

[1] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, San Francisco, 2001.
[2] Y. Yang and K. Chen, "Time-Series Clustering via RPCL Network Ensemble with Different Representations", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 41, No. 2, March 2011, pp. 190-199.
[3] T.W. Liao, "Clustering of time series data: a survey", Pattern Recognition, Vol. 38, 2005, pp. 1857-1874.
[4] V. Niennattrakul and C.A. Ratanamahatana, "On Clustering Multimedia Time Series Data Using K-Means and Dynamic Time Warping", International Conference on Multimedia and Ubiquitous Engineering (MUE '07), pp. 733-738, 26-28 April 2007.
[5] S. Ongwattanakul and D. Srisai, "Contrast Enhanced Dynamic Time Warping Distance for Time Series Shape Averaging Classification", ICIS 2009, Seoul, Korea, November 24-26, 2009.
[6] D.J. Berndt and J. Clifford, "Finding patterns in time series: a dynamic programming approach", in Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, Cambridge, Mass., USA, 1996, pp. 229-248.
[7] S.R. Nanda, B. Mahanty and M.K. Tiwari, "Clustering Indian stock market data for portfolio management", Expert Systems with Applications, Vol. 37, Issue 12, December 2010.

Review of Substrate Integrated Waveguide Antenna

Harsh Kumar, Manoranjan
Asstt. Prof., Department of ECE, Northern India Engg. College (Delhi)
E-mail: harsh.mitp@gmail.com, mksharma0343@gmail.com
Abstract: Substrate-integrated waveguide (SIW) technology is an emerging and very promising technique for the development of circuits and components operating in the microwave and millimetre-wave region. This technology is also used to enhance the performance of antennas. In this technique, rows of conducting cylinders or slots embedded in a dielectric substrate produce a cavity on the back side of the patch. This study aims to provide an overview of recent advances in the modeling, design and technological implementation of SIW antennas.

Keywords: SIW Antenna, CBA, HMSIW

I. INTRODUCTION

A microstrip antenna is mostly a broadside radiating antenna, radiating perpendicular to the patch. The radiation pattern of this type of antenna has major and minor lobes, and the minor lobes are mostly unwanted radiation that reduces the performance of the antenna. To overcome this limitation, a technique was introduced for antenna design: the metallic cavity backed antenna. Conventional metallic cavity backed antennas (CBA) have been extensively presented for their satisfactory radiation performance [10, 11]. Figure 1 shows the classical conventional CBA; the depths of their metallic backed cavities are roughly one-quarter wavelength. Their bulky volumes make the conventional metallic CBA unsuitable for some practical applications.

Figure 1: Conventional metallic cavity backed antennas

To overcome the limitations of conventional metallic cavity backed antennas, a new technique, the substrate integrated waveguide (SIW), has been used for antenna design. The SIW structure keeps the advantages of conventional metallic waveguides, such as high Q-factor, high selectivity, cutoff frequency characteristic, and high power capacity. It also has the advantages of low profile, light weight, conformability to planar or curved surfaces, and easy integration with planar circuits. In order to make the SIW cavity equivalent to a conventional metallic cavity, the conditions d/d0 <= 0.5 and d/lambda0 <= 0.1 must be satisfied, where d, d0, and lambda0 are the metalized via diameter, the spacing between two neighboring vias, and the free-space wavelength, respectively [12-14]. There are commonly three types of SIW antenna: first, when the SIW cavity is placed on the back side of a patch; second, when the SIW cavity is placed on the back side of a slot; and third, when the antenna is formed with the help of the SIW itself (the SIW horn antenna), shown in Figure 2(a), (b) and (c) respectively.
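The two via conditions quoted above can be checked numerically for a candidate design (a minimal illustrative sketch: the function name is ours, and any consistent length unit can be used):

```python
def siw_vias_ok(d, d0, lam0):
    """Check the SIW-to-metallic-cavity equivalence conditions:
    d/d0 <= 0.5 (via diameter vs. via spacing) and
    d/lam0 <= 0.1 (via diameter vs. free-space wavelength)."""
    return d / d0 <= 0.5 and d / lam0 <= 0.1
```

For example, 0.5 mm vias on a 1.5 mm pitch at a 10 mm free-space wavelength satisfy both conditions, while the same vias fail once the wavelength shrinks to 4 mm.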


Figure 2: Low profile SIW cavity backed antennas: (a) patch antenna; (b) slot antenna; (c) SIW horn antenna array.

This type of antenna is fed by both planar (microstrip line, coplanar waveguide) and non-planar transmission lines (probe and waveguide). SIW cavity backed antennas with different configurations and different performances, presented by international researchers, are reviewed, and their performance improvement methods are also discussed.
II. SIW ANTENNA

The increasing development of wireless communication has made high-performance antennas attractive for low-cost applications. Cavity-backed antennas have been greatly investigated by numerous researchers for their high radiation and low back-side radiation performance. A novel low profile cavity backed
circularly polarized antenna has been proposed in [4].
Its backed cavity can be realized on a single-layer

substrate by channel milling followed by copper


electro plating to define the cavity edges. A single
probe along the cavity diagonal line is used as the
feeding element. In order to reduce fabrication
complexity and facilitate integration with planar
circuits, the substrate integrated waveguide (SIW)
technique is applied to cavity backed antenna design.
The first SIW-based antenna was based on a four-slot SIW array operating at 10 GHz [1]. This antenna is obtained by etching longitudinal slots in the top metal surface of the SIW. The feed network of this antenna is based on a microstrip power divider, integrated on the same substrate as the SIW antenna. A microstrip fed slot
antenna backed by a conventional metallic cavity has
been presented in [2], whose bandwidth and gain are
more than 28% and 5 dBi, respectively. A cavity-backed linear tapered slot antenna fabricated by using
low-temperature cofired ceramic (LTCC) process has
been developed in [3]. Its bandwidth is more than
50% and its gain higher than 4.9 dBi. A low profile,
cavity backed, linearly polarized antenna based on
SIW was first presented in [5]. The entire antenna,
including the radiating element, the feeding element,
and the backed cavity, can be completely fabricated
on a single layer substrate. Low-profile cavity-backed
slot antennas based on substrate integrated waveguide
(SIW) technology have been presented in[6] and [7].
The whole antennas, including their backed cavities and feeding elements, can be completely constructed on a single substrate. These novel antennas have high radiation performance and retain the advantages of low-profile, low-cost fabrication, and seamless planar
integration. A size reduced SIW cavity-backed slot
antenna has been studied in [8], in which meandered
slots inductively loaded with lateral slots are used as
its radiating elements. A tunable active antenna
oscillator comprised of a SIW cavity-backed slot
antenna has been investigated in [9], in which a
dogbone-shaped slot is adopted as its radiating
element. However, these antennas have a drawback of narrow bandwidth (about 2%) because they are basically resonating at a single frequency. SIW cavity backed circularly polarized antennas were first discussed in [15, 16, 17], whose SIW backed cavity with two-dimensional rotational symmetry was requisite. A crossed slot at the center of one metal
surface of the SIW backed cavity was adopted as a
radiating element. A single GCPW or probe located
at one diagonal line of the crossed slot was used to
excite the cavity. In order to simultaneously excite
the two degenerate modes in the cavity by a single
feed, a small perturbation of lengths of the crossed
slot two arms must be introduced. When two


orthogonal degenerate cavity modes (TM110 or


TE120 and TE210) were successfully excited,
circularly polarized radiation was produced by tuning
the length difference of the two arms to achieve a 90°
phase difference between the radiations from the two
arms.
III. BANDWIDTH ENHANCEMENT TECHNIQUES

The SIW antennas discussed in the previous section mostly have low bandwidth; this is a limitation of the SIW antenna. To overcome this limitation, different techniques have been used by different researchers. A
bandwidth-enhanced linearly polarized SIW cavity
backed slot antenna was presented in [18], which was
a single GCPW fed rectangular SIW cavity backed
long slot antenna. The long slot was a completely matching slot, rather than the previously used resonating slot, and its length is far more than half a resonant wavelength. After proper setting, two hybrid modes can be excited in the SIW cavity, which were two different combinations of the TE110 and
TE120 resonances. One is a combination of a strong
TE120 resonance and a weak TE110 resonance and
the other is a combination of a strong TE110
resonance and a weak TE120 resonance. Although
the fields distributing at the two sides of the slot are
in phase, their huge magnitude difference also can
make the slot radiate electromagnetic wave.
Radiations generated from the two hybrid modes are
consistent because their equivalent currents
distributed in the slot are nearly the same. Compared
with those of the previously presented SIW cavity
backed slot antenna, fractional impedance bandwidth
of the proposed antenna was improved by about 4 times. Its gain and radiation efficiency are also slightly improved, by about 0.6 dB and 8%, respectively, and its SIW cavity size is reduced by about 30%. A bandwidth-enhanced SIW cavity backed patch array was presented in [19]. The proposed array consisted of two
stacked substrates. Patch elements were printed on
the top microstrip substrate and the SIW backed
cavities were constructed by many via holes spaced
along circular openings at the bottom
cavity
substrate. The bottom metal surface of the microstrip
substrate and the top metal surface of the cavity
substrate had common circular openings underneath
the patch. The top microstrip substrate was kept thin
in order to minimize the surface wave and the
associated feeding network losses. The bottom cavity
substrate was relatively thick for bandwidth
enhancement. Since the required fractional matching

impedance bandwidth of the proposed antenna was


inversely proportional to the square root of the
dielectric constant, increment of the bottom cavity
substrate thickness was used to improve bandwidth.
IV. GAIN ENHANCEMENT OF SIW ANTENNA

Gain is a very important parameter of any antenna, and in most practical applications high antenna gain is required. A gain-enhanced cavity backed slot antenna
required. A gain enhanced cavity backed slot antenna
using high order cavity resonance was proposed in
[20]. A GCPW line located at one diagonal line of the
SIW cavity was used as the feeding element to excite
the TE220 resonance in the cavity. Triple parallel
slots, parallel to the two cavity edges, were used as
the radiating elements. One of the triple parallel slots
was etched at one center line of the SIW cavity. The
other two slots were symmetrically distributed about
the SIW cavity center line and they were close to the
SIW cavity walls. Three groups of tuning posts
located at the two center lines of the SIW cavity were
used as the auxiliary tuning elements to excite the
required TE220 resonance. This design method can
be extended to be used at the higher order cavity
resonances, such as TE230, TE330, and TE440, to get
a much higher gain radiation. An SIW cavity backed
wide slot antenna array operating at 60GHz was
studied in [21]. The proposed antenna exhibited a dual-resonance operating mechanism. Its backed
cavities were not only reflectors for the radiating slots
but also radiating elements. Measured results showed
that the proposed 2×4 antenna array had a gain of 12
dBi and a cross polarization level lower than 25 dB
over its whole operating bandwidth of about 11.6%.
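The mode frequencies named in this section follow from the standard resonance formula for a dielectric-filled rectangular cavity, f_mn0 = c/(2*sqrt(eps_r)) * sqrt((m/W)^2 + (n/L)^2). A minimal sketch with hypothetical dimensions is given below; real SIW cavities additionally need an effective-width correction for the via side walls, which this sketch ignores.

```python
import math

def te_mn0_resonance(m, n, width_m, length_m, eps_r):
    """Resonant frequency (Hz) of the TE_mn0 mode of a rectangular cavity
    of inner dimensions width x length, filled with a dielectric of
    relative permittivity eps_r (the height does not matter for p = 0)."""
    c = 299792458.0  # speed of light in vacuum, m/s
    return (c / (2.0 * math.sqrt(eps_r))) * math.sqrt(
        (m / width_m) ** 2 + (n / length_m) ** 2)

# Hypothetical 20 mm x 20 mm cavity on a substrate with eps_r = 2.2:
f110 = te_mn0_resonance(1, 1, 0.020, 0.020, 2.2)
f220 = te_mn0_resonance(2, 2, 0.020, 0.020, 2.2)
print(f110 / 1e9, f220 / 1e9)  # the TE220 mode resonates at twice the TE110 frequency
```

The square cavity makes the degeneracy explicit: TE220 lands at exactly twice the TE110 frequency, which is why exciting the higher order mode yields a physically larger (higher gain) aperture at a given frequency.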
V. MINIATURIZATION OF SIW ANTENNA

The size of the antenna is an important factor for the feasibility of an antenna, and researchers have proposed different methods to miniaturize antennas. A size-reduced SIW cavity backed slot antenna was proposed in [22], whose slot consisted of a pair of meandered slots. Its size reduction of 50% was accomplished by inserting a capacitive element into the SIW cavity, which increases the stored electric energy while leaving the stored magnetic energy nearly unaffected. A cylindrical
metal post, through the thickness of the cavity and
aligned parallel to the electric field, was implemented
as the capacitive element. The post was isolated from
the top metal surface of the SIW cavity by using a
ring slot. A size-reduced cavity backed antenna was

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
97

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

proposed in [23] by using the half mode substrate integrated waveguide (HMSIW) technique. The
backed cavity of the antenna is triangular in shape,
which was realized by metalized vias array through
the substrate, the bottom ground plane, and the top
triangular patch. This triangular cavity was a half
mode cavity in which the field distribution was
almost half of that in the original SIW cavity. The
resonant frequency of the proposed triangular HMSIW cavity was equal to that of its corresponding square SIW one. The radiating field was generated by the dominant TE110 cavity mode through a dielectric aperture created by the HMSIW. The size of the proposed antenna was reduced to half that of its corresponding SIW cavity backed antenna.
VI. CONCLUSIONS

A review of cavity backed antennas based on the SIW technique has been presented in this paper. Performance improvements of SIW cavity backed antennas have been investigated, including bandwidth enhancement, size reduction, and gain enhancement.
REFERENCES
[1] Y. Liu, Z. Shen, and C. L. Law, "A compact dual-band cavity-backed slot antenna," IEEE Antennas Wireless Propag. Lett., vol. 5, pp. 4-6, 2006.
[2] Kim, N. Kidera, S. Pinel, J. Papapolymerou, J. Laskar, J. Yook, and M. M. Tentzeris, "Linear tapered cavity backed slot antenna for millimeter wave LTCC modules," IEEE Antennas Wireless Propag. Lett., vol. 5, pp. 175-178, 2006.
[3] Sievenpiper, H. Hsu, and R. M. Riley, "Low-profile cavity-backed crossed-slot antenna with a single-probe feed designed for 2.34-GHz satellite radio applications," IEEE Trans. Antennas Propag., vol. 52, no. 3, pp. 873-879, Mar. 2004.
[4] G. Q. Luo, Z. F. Hu, L. X. Dong, and L. L. Sun, "Planar slot antenna backed by substrate integrated waveguide cavity," IEEE Antennas Wireless Propag. Lett., vol. 7, pp. 236-239, Aug. 2008.
[5] G. Q. Luo, Z. F. Hu, Y. Liang, L. Y. Yu, and L. L. Sun, "Development of low profile cavity backed crossed slot antenna for planar integration," IEEE Trans. Antennas Propag., vol. 57, no. 10, pp. 2972-2979, Oct. 2009.
[6] J. C. Bohorquez, H. A. F. Pedraza, I. C. H. Pinzon, J. A. Castiblanco, N. Pena, and H. F. Guarnizo, "Planar substrate integrated waveguide cavity backed antenna," IEEE Antennas Wireless Propag. Lett., vol. 8, pp. 1139-1142, 2009.
[7] Giuppi, A. Georgiadis, A. Collado, M. Bozzi, and L. Perregrini, "Tunable SIW cavity backed active antenna oscillator," Electron. Lett., vol. 46, no. 15, pp. 1053-1055, Jul. 2010.
[8] J. Sarrazin, Y. Mahe, S. Avrillon, and S. Toutain, "Investigation on cavity/slot antennas for diversity and MIMO systems: the example of a three-port antenna," IEEE Antennas Wireless Propag. Lett., vol. 7, pp. 414-417, 2008.
[9] M. Manteghi and Y. Rahmat-Samii, "Multiport characteristics of a wide-band cavity backed annular patch antenna for multipolarization operations," IEEE Trans. Antennas Propag., vol. 53, no. 1, pp. 466-474, 2005.
[10] Xu and K. Wu, "Guided-wave and leakage characteristics of substrate integrated waveguide," IEEE Trans. Microw. Theory Tech., vol. 53, no. 1, pp. 66-73, Jan. 2005.
[11] Q. Luo, W. Hong, Q. H. Lai, K. Wu, and L. L. Sun, "Design and experimental verification of thin frequency selective surface with quasi-elliptic bandpass response," IEEE Trans. Microw. Theory Tech., vol. 55, no. 12, pp. 2481-2487, Dec. 2007.
[12] Uchimura, T. Takenoshita, and M. Fujii, "Development of laminated waveguide," IEEE Trans. Microw. Theory Tech., vol. 46, no. 12, pp. 2438-2443, Dec. 1998.
[13] Q. L. Guo and L. S. Ling, "Circularly polarized antenna based on dual-mode circular SIW cavity," in Proceedings of the International Conference on Microwave and Millimeter Wave Technology (ICMMT 08), pp. 1077-1079, Nanjing, China, April 2008.
[14] G. Q. Luo, L. L. Sun, and L. X. Dong, "Single probe fed cavity backed circularly polarized antenna," Microwave and Optical Technology Letters, vol. 50, no. 11, pp. 2996-2998, 2008.
[15] G. Q. Luo, Z. F. Hu, Y. Liang, L. Y. Yu, and L. L. Sun, "Development of low profile cavity backed crossed slot antennas for planar integration," IEEE Transactions on Antennas and Propagation, vol. 57, no. 10, pp. 2972-2979, 2009.
[16] G. Q. Luo, Z. F. Hu, W. J. Li, X. H. Zhang, L. L. Sun, and J. F. Zheng, "Bandwidth-enhanced low-profile cavity-backed slot antenna by using hybrid SIW cavity modes," IEEE Transactions on Antennas and Propagation, vol. 60, no. 4, pp. 1698-1704, 2012.
[17] K. J. Lee, J. A. Lee, and M. Kim, "Multilayer dielectric cavity antenna design for wide bandwidth," Microwave and Optical Technology Letters, vol. 54, no. 9, pp. 2046-2049, 2012.
[18] G. Q. Luo, X. H. Zhang, L. X. Dong, W. J. Li, and L. L. Sun, "A gain enhanced cavity backed slot antenna using high order cavity resonance," Journal of Electromagnetic Waves and Applications, vol. 25, no. 8-9, pp. 1273-1279, 2011.
[19] K. Gong, Z. N. Chen, X. Qing, P. Chen, and W. Hong, "Substrate integrated waveguide cavity-backed wide slot antenna for 60-GHz bands," IEEE Transactions on Antennas and Propagation, vol. 60, no. 12, pp. 6023-6026, 2012.
[20] C. A. T. Martinez, J. C. B. Reyes, O. A. N. Manosalva, and N. M. P. Traslavina, "Volume reduction of planar substrate integrated waveguide cavity-backed antennas," in Proceedings of the 6th European Conference on Antennas and Propagation, pp. 2919-2923, March 2012.
[21] S. A. Razavi and M. H. Neshati, "Development of a low profile circularly polarized cavity backed antenna using HMSIW technique," IEEE Transactions on Antennas and Propagation, vol. 61, no. 3, pp. 1041-1047, 2013.

Image Quality Assessment for Fake Biometric System: A Review of Fingerprint, Iris, and Face Recognition

Sunil Nijhawan(1), Jitender Khurana(2)
(1) Research Associate, Baba Mast Nath University, Rohtak
(2) ECE Deptt., Baba Mast Nath University, Rohtak
E-mail: sunil_nijhawan@yahoo.co.in

Abstract - A typical biometric sensing system consists of sensing, feature extraction, and matching modules. But nowadays fake biometrics attack biometric systems. Fake biometrics create fake identities from human identification characteristics, like iris, fingerprint, and face, on printed paper. This paper presents a software-based fake detection method which can be used in multiple biometric systems to detect different types of fraudulent access attempts. By using the predictable quality of real images, we can easily differentiate between real and fake samples. This paper presents the image quality assessment technique for liveness detection, which uses multiple sources of information for person authentication, as a single biometric system is less reliable than a multi-biometric system. To detect fake biometrics, the liveness detection technique of image quality assessment is used.

Image quality assessment is driven by the supposition that a fake image and a real sample will have different quality at acquisition. For example, Fig. 1 shows that iris images captured from a printed paper are more likely to be fuzzy or out of focus due to shaking; face images captured from a mobile device will almost certainly be over- or under-exposed; and it is not rare that fingerprint images, as shown in Fig. 2, are captured from a dummy finger.

Fig. 1 Image capturing from a printed paper

Keywords - Image processing, biometric system, fingerprint, face recognition, image quality assessment.

I. INTRODUCTION

Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological versus behavioral characteristics. Many different aspects of human physiology, chemistry, or behavior can be used for biometric authentication. Biometric systems are of different types: face recognition systems, fingerprint recognition systems, and iris recognition systems. A multi-biometric system is one that uses more than one biometric system. In this survey, the image quality assessment technique for liveness detection is used

Fig. 2 Image captured from a dummy finger

II. IMAGE QUALITY ASSESSMENT

Image quality assessment is a most important topic in the image processing area. Image quality is a trait of an image, usually compared with an ideal or perfect image. Expected quality differences between real and fake samples may include: degree of sharpness, color and luminance levels, local artifacts, amount of information found in both types of images (entropy),


structural distortions or natural appearance. For example, iris images captured from a printed paper
are more likely to be blurred or out of focus due to
trembling; face images captured from a mobile
device will probably be over- or under-exposed; and
it is not rare that fingerprint images captured from a
gummy finger present local acquisition artifacts such
as spots and patches.
Fingerprint quality assessment
Every fingerprint of each person is unique; even twins have different fingerprints. Fingerprint recognition is the most accepted biometric recognition method. Poor quality fingerprint images can lead to incorrect spurious feature (minutia) detection (illustrated in Fig. 3), thereby degrading the performance of a fingerprint recognition system.

Fig. 3 Incorrect spurious feature (minutia) detection

Fingerprints consist of ridges and furrows on the surface of a fingertip. For proper functioning, assessment of fingerprint ridge quality is essential.
Most fingerprint quality assessment metrics compute
image properties in local regions and pool these
metrics to present a single quality score. Fingerprint
quality is also used to find local unrecoverable
regions of the fingerprint, as enhancement of these
regions
may be counter-productive for ridge
information. A detailed review of some seminal
techniques is presented here. In literature, the most
popular fingerprint quality assessment algorithm is
the National Institute of Standards and Technology
(NIST) Fingerprint Image Quality (NFIQ). NFIQ
produces a value between 1-5 from a fingerprint
image that is directly predictive of expected matching
performance. A feature vector v consisting of 11 quality features is obtained on the basis of a localized quality map per fingerprint image. This map is computed based on the local orientation, contrast, and curvature of each region of a tessellated fingerprint image (blocks of size 3x3). Recently, NFIQ 2.0 was introduced with added features; NFIQ 2.0 aims to provide a higher resolution quality score (in the range 0-100), lower computational complexity, as well as support for quality assessment on mobile platforms. This technique converts the image into an orientation tensor representation. The orientation tensors are computed in both horizontal and vertical directions and then combined to encode the edge information obtained from the horizontal, vertical, or parabolic tensors. The total information present in each local region is combined to obtain the final quality score.
The other technique is a local-feature-based quality metric that computes the orientation certainty level (OCL), ridge thickness, ridge frequency, and ridge-to-valley thickness ratio. It uses Gabor filters for quality assessment: the fingerprint image is tessellated into blocks, and Gabor filters with different orientations are applied on each block. For high-quality blocks, the response from filters of some orientations is significantly higher than that of others, whereas for low-quality blocks, the difference in responses from the filters is generally low. These fingerprint quality assessment techniques measure the consistency and strength of the ridge patterns. A direct association is made between the properties of the ridge patterns and the recognition performance of the sample. Finally, quality assessment of 3D fingerprints, which are obtained either from a 3D sensor or reconstructed from multiple 2D views, is an open research problem.
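As a rough illustration of the orientation certainty level (OCL) mentioned above, the sketch below (a simplified stand-in, not the NFIQ implementation) derives an OCL-like score from the eigenvalues of a block's gradient covariance matrix: a block with one dominant ridge orientation scores near 1, a noisy block near 0.

```python
import numpy as np

def ocl(block):
    """Orientation certainty level of an image block: the normalized
    difference of the eigenvalues of the gradient covariance matrix.
    Close to 1 for a single dominant orientation, close to 0 for noise."""
    gy, gx = np.gradient(block.astype(float))
    # entries of the 2x2 covariance of the gradient field
    a, b, c = np.mean(gx * gx), np.mean(gy * gy), np.mean(gx * gy)
    trace = a + b
    if trace == 0:
        return 0.0
    # eigenvalue gap of [[a, c], [c, b]] via the closed-form discriminant
    gap = np.sqrt((a - b) ** 2 + 4 * c ** 2)
    return gap / trace  # (lambda_max - lambda_min) / (lambda_max + lambda_min)

rng = np.random.default_rng(0)
stripes = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32)), (32, 1))  # ridge-like block
noise = rng.standard_normal((32, 32))                              # junk block
print(ocl(stripes), ocl(noise))  # the ridge-like block scores much higher
```

Pooling such per-block scores into a single number is the pattern the local-feature-based metrics described above follow.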

III. IMAGE QUALITY ASSESSMENT FOR LIVENESS DETECTION

The use of image quality assessment for liveness detection is motivated by the following assumption: it is expected that a fake image captured in an attack attempt will have a different quality than a real sample acquired in the normal operation scenario for which the sensor was designed. The quality differences between real and fake samples may include: general artifacts, color and luminance levels, degree of sharpness, amount of information found in both types of images, structural distortions, or natural appearance.
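One of the quality cues listed above, the amount of information (entropy), can be measured directly from the grey-level histogram. A minimal sketch follows; it is a generic measurement, not tied to any specific liveness detector from the literature.

```python
import numpy as np

def shannon_entropy(image_u8):
    """Shannon entropy (bits per pixel) of an 8-bit greyscale image's
    histogram.  Low-information samples, e.g. washed-out recaptures of a
    printed photo, tend to score lower than real acquisitions."""
    hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins: 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
rich = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # detailed sample
flat = np.full((64, 64), 128, dtype=np.uint8)               # featureless sample
print(shannon_entropy(rich), shannon_entropy(flat))  # close to 8 bits vs 0 bits
```

In a quality-based detector this score would be one feature among many (sharpness, luminance statistics, structural measures) feeding a real-versus-fake classifier.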


Fig. 5 Liveness detection in fingerprint

Fig. 4 Liveness detection method for face

Face liveness detection is a process to determine whether a detected face is real or not before a face recognition system identifies the face. It prevents the face recognition system from making a wrong decision. There are several types of fake faces, such as 2D printed photos, videos, high-definition (HD) tablets, 3D masks, and many more. Among them, 2D photos are widely used because they are easily available and cheap to obtain. To minimize attacks with 2D photos, researchers have shown steady progress in developing anti-spoofing technologies based on features of these photos. In recaptured 2D photos, some special characteristics, like detailed components and sharpness, are lost. In this case, researchers analyze texture and frequency components in the input data. In order to represent the textural feature, local binary patterns (LBP) are often used. Some methods detect high frequency components and look into the power spectrum. Although the feature domains are different, those studies approach the solution in terms of texture. The second characteristic difference is a difference in the light distribution on a face. This approach focuses on the skin reflectance of real and fake faces. This information can be extracted for finding fake faces and can be a clue to distinguish motionless fake faces.
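The local binary pattern (LBP) descriptor mentioned above can be sketched in a few lines. This is only the basic 8-neighbour, radius-1 variant, shown as an illustration rather than a complete anti-spoofing pipeline.

```python
import numpy as np

def lbp(image):
    """Basic 3x3 local binary pattern: each interior pixel becomes an
    8-bit code, one bit per neighbour whose value is >= the centre pixel."""
    img = image.astype(int)
    c = img[1:-1, 1:-1]  # centre pixels
    # the 8 neighbours in clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

flat = np.full((5, 5), 7)
print(lbp(flat))  # a textureless patch maps everywhere to the all-ones code 255
```

Anti-spoofing methods typically histogram these codes over the face region and classify the histogram, since recaptured photos lose the fine micro-texture that shifts the code distribution.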

IV. FACE RECOGNITION AND ATTACK ON SYSTEM

The most acceptable biometric is face recognition, because it is one of the most universal methods of identification that humans use in their visual interactions and acquisition of faces. To make face recognition a successful biometric technology, the spoofing attack problem needs to be overcome. Liveness detection is the only method of solution to this problem; it gives a strong guarantee for a reliable face recognition system, so liveness detection plays an important role here. Liveness detection denotes the methods capable of discriminating real human traits from synthetic human traits made of gelatin, silicone, or Play-Doh, etc., and from photos or videos of someone else. This detection can take place either at the processing stage or at the acquisition stage. Research on liveness detection is highly desirable. Liveness detection using human facial features in a biometric system is a method that captures the image of the person and tests for liveness after the person is authenticated. There are three
ways of introducing liveness detection:
1. Using extra hardware: This approach is expensive but fast (deploying costly system configurations like several cameras, including stereo and heat-sensitive cameras, etc.).
2. Using software: This is done at the processing stage. It is less costly but takes much more time in comparison to the hardware method.
3. Combination of hardware and software: This is time consuming as well as expensive, but it provides a good high-end solution for liveness detection which is otherwise difficult to obtain.
The approaches for detection of liveness can be classified into four groups.


1. Exploiting inherent characteristics of a live face, like eye blinking: a few images containing some natural motion are sufficient for reliable liveness detection.
2. Using additional light sources or sensing devices, e.g., thermal imaging sensors.
4. User interactions, i.e., demanding real-time responses, e.g., mouth movement.

for future use in comparing it every time a user wants access to the system, as we can see in Fig. 7.

Fig. 7 Uniqueness of the human iris

To detect fake irises, it is necessary to find a feature that distinguishes between live irises and fake ones. Fake iris detection methods can be divided into two broad categories: passive and dynamic methods. The block diagram of our robust iris recognition system with a fake iris detection module is shown in Fig. 8.

Fig. 6 Facial features of a person

The system makes use of human facial features, like valleys, peaks, and landmarks, and treats these as nodes that can be compared and measured against those which are stored in the system's database. Approximately 80 nodes are available to comprise the face print, as we can see in Fig. 6; these include the eye socket depth, distance between the eyes, jaw line length, the width of the nose, and cheekbone shape.
V. IRIS RECOGNITION AND ATTACK ON SYSTEM

The iris part of the eye is considered the most reliable source for biometric operation. Biometric methods based on the spatial pattern of the iris are believed to allow very high accuracy, so here we discuss the various systems with whose help we can catch iris spoofing. Usually, for better identification of the iris, a noiseless iris image is required, but in a non-cooperative iris recognition system, recognition works even if the iris has noise. The system analyzes over 200 points of the iris, including rings, furrows, freckles, the corona, and other characteristics. After recording data from each individual, it will save the information in a database

Fig. 8 Iris detection module

It consists of an image selection module; a preprocessing module, which is the iris segmentation unit that extracts the iris accurately from the eye image; and a last module that tests whether the iris is fake or alive by using


any one of three methods, namely the active, passive, or composite method.

that covers and detects all four classes of fake iris. This is a combination of previous fake iris detection techniques with certain modifications required to improve the efficiency of the algorithm as part of a software implementation.
APPLICATIONS
- The biometric system is used in India for making the Aadhaar card; this multi-biometric system uses face recognition, iris recognition, and fingerprint recognition.
- Biometric systems are used in airports.
- Biometric systems are used in banking.
Fig. 9 Different methods for iris liveness detection

Iris Segmentation and Active Method (Pupil Dynamics)
In this method of iris segmentation, the outer boundary of the iris is calculated by tracing objects of various shapes and structures. This detection method also detects the intersection of the iris with the lower and upper eyelids to ensure minimum loss of iris information. For the inner iris boundary, two eye images of the same subject at different intensities are used; these images are compared with each other to detect the variation in pupil size, or pupil dynamics. The variation in pupil size is also used for aliveness detection of the iris, so this approach is very useful for making iris recognition systems more efficient. The success rate of accurate iris localization from the eye image is 99.48%, with minimal loss of iris features in the spatial domain as compared to all existing techniques.
Passive Method (Wavelength Reflection Coefficient Method)
This method is based on the reflectance properties of the iris. In existing reflectance-based methods, two images are used, whereas in this work three images have been used instead of two; the third image is the original image, termed the base or reference image, which is captured in visible light.

VI. CONCLUSIONS

Biometrics refers to the automatic recognition of an individual based on her/his behavioral and physiological features. In the future, various business applications will rely on biometrics, since it is the only way to guarantee the presence of the owner when a transaction is made. The image quality assessment technique for liveness detection is used to detect fake biometrics. With image quality measurements it is easy to distinguish real and fake users, because fake identities always differ in color and luminance levels, amount of information, degree of sharpness, general artifacts, structural distortions, or natural appearance. The multi-biometric system is a challenging system, and it is more secure than a single biometric system. This paper studied three biometric systems, namely face recognition, fingerprint recognition, and iris recognition, and the attacks on these three systems. Multi-biometric systems are used in various applications.
REFERENCES
[1] Rohit Kumar and Vishal Moyal, SSCET, CSVTU Bhilai, India, "Visual Image Quality Assessment Technique using FSIM," International Journal of Computer Applications Technology and Research, vol. 2, issue 3, pp. 250-254, 2013.

[2] G. L. Marcialis, A. Lewicke, B. Tan, P. Coli, D. Grimberg, A. Congiu, et al., "First international fingerprint liveness detection competition LivDet 2009," in Proc. IAPR ICIAP, Springer LNCS-5716, 2009, pp. 12-23.

[3] Annu, Chander Kant, "Liveness Detection in Face Recognition Using Euclidean Distances," International Journal for Advance Research in Engineering and Technology, vol. 1, issue IV, pp. 1-5, May 2013.

Composite Method
From the results we can easily say that it is not possible to detect every kind of false iris with the help of either the active or the passive method alone. So, here the composite method is used by developing a new algorithm

[4] M Vatsa, R Singh, A Tiwari, S Bharadwaj, HS Bhatt,


Analyzing fingerprint of Indian population using image
quality: a UIDAI case study, in Proceedings of International


Workshop on Emerging Trends and Challenges in Hand-Based Biometrics (Hong Kong, 26-29 September 2010), pp. 1-5.

[5] Gang Pan, Lin Sun, Zhaohui Wu, and Yueming Wang, "Monocular camera-based face liveness detection by combining eyeblink and scene context," Telecommun. Syst. (2011), published online 4 August 2010.

[6] Mohmad Kashif Qureshi, "Liveness Detection of Biometric Traits," International Journal of Information Technology and Knowledge Management, vol. 4, no. 1, pp. 293-295, January-June 2011.

[7] T. Matsumoto, "Impact of Artificial Gummy Fingers on Fingerprint Systems," Proc. SPIE, vol. 4677, pp. 275-289, 2002.

[8] Richard P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, no. 9, September 1997.

[9] J. Daugman, "Iris Recognition and Anti-Spoofing Countermeasures," in 7th International Biometrics Conference, London, 2004.

[10] Mahdi S. Hosseini, Babak N. Araabi, and Hamid Soltanian-Zadeh, "Pigment Melanin: Pattern for Iris Recognition."

[11] Wojciech Wojcikiewicz, "Hough Transform: Line Detection in Robot Soccer."

[12] R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. IEEE, vol. 85, no. 9, pp. 1348-1363, Sept. 1997.

[13] Smita S. Mudholkar, Pradnya M. Shende, and Milind V. Sarode, Department of Computer Science & Engineering, Amravati University, India.

[14] Anil K. Jain, "Biometrics Technology for Human Recognition," Michigan State University, http://biometrics.cse.msu.edu, October 15, 2012.


A Study and Analysis of Booth Multiplication Algorithm

Tarun Damani, Preeti Singh, Richa Malhotra
Department of Electronics and Communication, Northern India Engineering College, New Delhi, India
(2) M.Tech. Scholar at Gateway Institute of Engineering and Technology, Sonepat
E-mail: preeti.vlsi@gmail.com

Abstract - The Booth multiplier makes use of the Booth encoding algorithm in order to reduce the number of partial products by considering a certain number of bits of the multiplier at a time, thereby achieving a speed advantage over other multiplier architectures. This algorithm is valid for both signed and unsigned numbers. It can handle signed binary multiplication by using 2's complement representation.

Keywords - Booth multiplier, partial products, Baugh-Wooley multiplier, FFT.

I. INTRODUCTION

Multipliers are key components of many high performance systems such as FIR filters, microprocessors, digital signal processors, etc. Multiplication-based operations such as multiply and accumulate (MAC) and inner product are among the frequently used computation-intensive arithmetic functions currently implemented in many digital signal processing (DSP) applications such as convolution, fast Fourier transform (FFT), and filtering, and in the arithmetic and logic units of microprocessors. Since multiplication dominates the execution time of most DSP algorithms, there is a need for a high speed multiplier. Currently, multiplication time is still the dominant factor in determining the instruction cycle time of a DSP chip. The demand for high speed processing has been increasing as a result of expanding computer and signal processing applications. Reducing the time delay and power consumption are very essential requirements for many applications. A system's performance is generally determined by the performance of the multiplier, because the multiplier is generally the slowest element in the system.

II. BINARY MULTIPLIER

In order to achieve signed number multiplication, partial products are generated. After generation, the partial products are reduced using adders. For the generation of partial products, Booth's recoding algorithm is used; for the reduction of partial products, a carry save adder is used to generate intermediate partial products and a carry ripple adder is used to generate the final sum and carry. For partial product generation, Radix-2, Radix-4, and Radix-8 Booth's recoding algorithms are studied. The Booth multiplier makes use of the Booth encoding algorithm in order to reduce partial products by considering certain bits at a time, thereby achieving a speed advantage over other multiplier architectures.
The entire process of multiplication is divided into 3 parts:
1) Generation of the partial products.
2) Partial product reduction.
3) Final stage carry propagate adder.
III. BOOTH ALGORITHM

In digital multiplication, as an initial step, one needs to generate n shifted copies of the multiplicand, which may be added in the coming stage. The value of the multiplier bit determines whether the shifted copy is to be added or not: if the ith bit of the multiplier is 1, then the shifted copy of the multiplicand is added; if the bit is 0, it is not added. A logical AND gate can implement this operation, and the resulting values are called partial products. Conventional array multipliers, like the Braun multiplier and the Baugh-Wooley multiplier, achieve comparatively good performance, but they require a large area of silicon, unlike the add-shift algorithms, which require less hardware and exhibit poorer performance. Here the Booth algorithm is used for the generation of partial products.
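The AND-gate view of partial product generation described above can be illustrated for unsigned operands; this is a plain shift-and-add sketch in Python, shown before any Booth recoding is applied (the function name is ours).

```python
def partial_products(multiplicand, multiplier, n_bits):
    """Unsigned partial products: row i is the multiplicand AND-ed with
    bit i of the multiplier, shifted left by i positions - exactly the
    array-multiplier view described in the text."""
    rows = []
    for i in range(n_bits):
        bit = (multiplier >> i) & 1          # the AND gate select line
        rows.append((multiplicand if bit else 0) << i)
    return rows

rows = partial_products(13, 11, 4)   # 1101 x 1011
print(rows, sum(rows))               # summing the rows gives 13 * 11 = 143
```

The n rows produced here are what the reduction stage (carry save adders) then compresses; Booth recoding attacks the same problem by producing fewer rows in the first place.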


Various algorithms of Booth recoding are discussed as under:
A) Radix-2 Booth's Algorithm
In order to start the algorithm, an imaginary 0 is appended to the right of the multiplier. Subsequently, the current bit xi and the previous bit xi-1 of the multiplier xn-1 xn-2 ... x1 x0 are examined in order to yield the ith bit yi of the recoded multiplier yn-1 yn-2 ... y1 y0. At this point, the previous bit xi-1 serves only as a reference bit. In its turn, xi-1 will be recoded to yield yi-1, with xi-2 acting as the reference bit. For i = 0, the corresponding reference bit x-1 is defined to be zero. Table 1 presents a summary of the recoding method used by Booth's theorem.
Table 1. Recoding method used by Booth's algorithm

xi  xi-1  yi   Operation           Comments
0   0     0    Shift only          String of zeroes
0   1     1    Add and shift       End of a string of ones
1   0     -1   Subtract and shift  Beginning of a string of ones
1   1     0    Shift only          String of ones

B) Radix-4 Booth's Algorithm

Table 2. Radix-4 recoding of the partial product

xi  xi-1  xi-2  yi  yi-1  Partial product  Comments
0   0     0     0   0     +0               String of zeroes
0   0     1     0   1     +A               End of 1s
0   1     0     1   -1    +A               A single 1
0   1     1     1   0     +2A              End of 1s
1   0     0     -1  0     -2A              Beginning of 1s
1   0     1     -1  1     -A               A single 0
1   1     0     0   -1    -A               Beginning of 1s
1   1     1     0   0     +0               String of 1s
The Booth encoding algorithm is a bit-pair encoding algorithm used to select multiples of the multiplicand {-2X, -X, 0, +X, +2X}. The three multiplier bits consist of a new bit pair [Y(i+1), Y(i)] and the leftmost bit from the previously encoded bit pair [Y(i-1)]. The modified Booth algorithm (radix-4 recoding) starts by appending a zero to the right of x0 (the multiplier LSB). Triplets are taken beginning at position x1 and continuing to the MSB, with one bit overlapping between adjacent triplets. If the number of bits in X (excluding the appended x-1) is odd, the sign (MSB) is extended one position to ensure that the last triplet contains 3 bits. In every step we obtain a signed digit that multiplies the multiplicand to generate a partial product entering the reduction tree.
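The recoding steps above can be sketched in software. The following Python snippet (an illustrative sketch, not code from the paper) extracts the radix-4 Booth digits of a two's-complement multiplier and verifies that they reconstruct its value:

```python
def booth_radix4_digits(x, n):
    """Radix-4 Booth digits (LSB first) of the n-bit two's-complement value x.
    Each digit is x[2i-1] + x[2i] - 2*x[2i+1], with the appended x[-1] = 0."""
    def bit(i):
        # Python ints behave as infinite two's complement, so (x >> i) & 1
        # also yields the sign extension for i >= n.
        return 0 if i < 0 else (x >> i) & 1
    return [bit(2 * i - 1) + bit(2 * i) - 2 * bit(2 * i + 1)
            for i in range((n + 1) // 2)]

def digits_to_value(digits):
    # A digit at position i weighs 4**i (two bit positions per step).
    return sum(d * 4 ** i for i, d in enumerate(digits))

digits = booth_radix4_digits(-9, 8)    # the multiplier X = -9 of Example B
print(digits)                          # [-1, 2, -1, 0]
print(digits_to_value(digits))         # -9
```

Each digit d selects the multiple d*A, shifted left by 2i bit positions, as the partial product for step i, so an n-bit multiplier needs only about n/2 partial products.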

C) Radix-8 Booth's Algorithm

Recoding is extended to 3 bits at a time, taken as overlapping groups of 4 bits each. Only n/3 partial products are generated, but the multiple 3A is needed, so the basic step is more complex.
Radix-8 recoding applies the same algorithm as
radix-4, but now we take quartets of bits instead of
triplets. Consequently, a multiplier based on this
radix-8 scheme generates fewer partial products than
a radix-4 multiplier, but the computation of each
partial product is more complex. In particular, a
partial product corresponding to an encoding x=+3
requires the computation of 3x, and therefore a full
addition.
Table 3. Radix-8 quartet recoding

Quartet value  Signed-digit value
0000           0
0001           +1
0010           +1
0011           +2
0100           +2
0101           +3
0110           +3
0111           +4
1000           -4
1001           -3
1010           -3
1011           -2
1100           -2
1101           -1
1110           -1
1111           0
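The mapping in Table 3 follows from weighting the quartet bits as d = -4·b3 + 2·b2 + b1 + b0, where b0 is the overlap bit shared with the previous group. A small Python check (illustrative, not part of the paper) regenerates the table:

```python
def radix8_digit(quartet):
    """Signed digit for a radix-8 Booth quartet b3 b2 b1 b0 (b0 = overlap
    bit): d = -4*b3 + 2*b2 + b1 + b0, giving values in {-4, ..., +4}."""
    b3, b2 = (quartet >> 3) & 1, (quartet >> 2) & 1
    b1, b0 = (quartet >> 1) & 1, quartet & 1
    return -4 * b3 + 2 * b2 + b1 + b0

# Regenerate Table 3: quartets 0000..0111 map to 0..+4, 1000..1111 to -4..0
table = {q: radix8_digit(q) for q in range(16)}
print(table[0b0101], table[0b1000], table[0b1111])   # 3 -4 0
```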



IV. EXAMPLES

A) Example of the Radix-2 Algorithm
Multiplicand A = 1011 (-5), multiplier X = 0001 (+1). The recoded multiplier is Y = 0 0 1 -1. Processing the recoded digits with the corresponding add/subtract-and-shift steps accumulates the partial products to give the product -5.

B) Example of the Radix-4 Algorithm
Multiplicand A = 010001 (+17), multiplier X = 110111 (-9). Radix-4 recoding of X gives the digits -1, +2, -1 (LSB first), i.e. the operations add -A, 2-bit shift, add +2A, 2-bit shift, add -A. The accumulated result is 1101100111 (-153).

C) Example of the Radix-8 Algorithm
Multiplicand A = +17, multiplier X = +10. Radix-8 recoding of X gives the digits +2 and +1 (LSB first), i.e. add 2A, 3-bit shift, add A, so the product is 2*17 + 8*17 = +170.

V. COMPARISON

Synthesis results on FPGA: Xilinx ISE 8.2i is used for the implementation of all the circuits.

Table 4. Synthesis results

Parameter          Radix-2   Radix-4   Radix-8
No. of Slices      397       71        46
No. of LUTs        184       100       88
Minimum Period     5.45 ns   4.75 ns   3.98 ns
Maximum Frequency  185 MHz   210 MHz   251 MHz
The shortcoming of the Radix-2 Booth algorithm is that it becomes inefficient when there are isolated 1s. For example, 001010101 (decimal 85) is recoded as 0 1 -1 1 -1 1 -1 1 -1, requiring eight operations instead of four. This problem can be overcome by using higher-radix Booth algorithms.
As we move towards Radix-8, fewer partial products are generated, but more operations are required to generate the multiples {+1, +2, +3, +4, -1, -2, -3, -4}; in Radix-4 we only need to generate {+2, -2, +1, -1}. The speed of Radix-8 is the highest among Radix-2, 4 and 8, but its complexity also increases.
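The isolated-1s behaviour is easy to reproduce. This short Python sketch (illustrative, not from the paper) applies the recoding rule yi = xi-1 - xi to 001010101 (decimal 85):

```python
def booth_radix2_recode(bits):
    """Radix-2 Booth recoding of an MSB-first bit list: y_i = x_{i-1} - x_i,
    with an imaginary x_{-1} = 0 appended on the right of the multiplier."""
    x = list(bits) + [0]                      # append the imaginary 0
    return [x[i + 1] - x[i] for i in range(len(bits))]

y = booth_radix2_recode([0, 0, 1, 0, 1, 0, 1, 0, 1])   # 001010101 = 85
value = sum(d * 2 ** (len(y) - 1 - i) for i, d in enumerate(y))
ops = sum(1 for d in y if d)     # non-zero digits = add/subtract operations
print(y, value, ops)             # [0, 1, -1, 1, -1, 1, -1, 1, -1] 85 8
```

Every isolated 1 turns into a +1/-1 pair, so the recoded form still evaluates to 85 but needs eight add/subtract operations rather than four.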
REFERENCES

[1] Soojin Kim and Kyeongsoon Cho, "Design of High-Speed Modified Booth Multipliers Operating at GHz Ranges," World Academy of Science, Engineering and Technology, no. 61, 2010.
[2] Young-Ho Seo and Dong-Wook Kim, "A New VLSI Architecture of Parallel Multiplier-Accumulator Based on Radix-2 Modified Booth Algorithm," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 18, no. 2, February 2010.
[3] Behrooz Parhami, Computer Arithmetic: Algorithms and Hardware Designs, Oxford University Press, 2000.
[4] K. C. Chang, Digital Systems Design with VHDL and Synthesis: An Integrated Approach, IEEE Computer Society, pp. 408-437, 1999.



Electrooculography - A Review
Devraj Gautam, Varshika Valluri, Akriti Agarwal, Neelakshi Rana

Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: devrajgautam10@gmail.com
Abstract: Electrooculography (EOG) is one of the techniques used in human-computer interface (HCI) systems for smart control of appliances. The electrooculogram, the bio-potential generated around the eyes by movement of the eyeball, can be used to track eye manoeuvres. The electrooculogram is acquired using a data acquisition system and various signal features are extracted. These signals classify the movements of the eyeball in the horizontal and vertical directions. The main objective of measuring and processing these signals is to help people with physical disabilities deal with the inconveniences of the physical world, particularly those who are paralyzed. This is achieved by placing electrodes and sensing the cornea-retinal potential (CRP), the resting potential between the cornea and the retina of the eye. The potential so obtained is proportional to the motion of the eye. The electrodes convert the ion current obtained from the skin into an electron current. The acquired signal has a low voltage and is therefore amplified, filtered and processed to eliminate accidental blinks, noise and other vestigial signals.

Electrooculography (EOG) is a technique in which electrodes are placed on the user's forehead and around the eyes to register eye movements. The electrodes detect the minute electrical potential between the eyes. The cornea is in fact positive relative to the posterior part of the sclera, these being the front-most and rear-most parts of the eyeball respectively. This is called the cornea-retinal potential (CRP). This potential difference, measured between the cornea and the retina, is known as the resting potential and ranges from 0.4 mV to 1 mV. Either of the electrodes is compared with the ground electrode, so the recorded signal can be positive or negative. Since the development of the bio-amplifier, digital electronics and software engineering have gone hand in hand in the development of human-computer interface systems.

Keywords: Appliance control, electrooculography, electrooculogram, eye movement, human-computer interface

I. INTRODUCTION

People with physical disabilities encounter several problems in communicating with the outside world. Persons with disabilities use computers for communication, environmental control, and as a source of information and entertainment; however, a suitable human-computer interface must be provided so that the user can manipulate data. Very few eye-movement tracking systems are available for cursor control, and a good portion of them are sophisticated systems with convoluted designs and exorbitant hardware. The goal of this project is to provide a reliable system at a very affordable price for these people.

Fig. 1 Block diagram

II. ENGINEERING ANALYSIS

A. Block Diagram
The real execution of the device is divided into three distinct stages:
Stage 1 - A pair of electrodes is placed to capture the potential difference with reference to the ground
Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
108

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

electrode. The obtained signal is amplified and filtered by the electronic circuit.
Stage 2 - A PIC microcontroller converts the incoming analog signal into a digital signal and a pattern is identified. If the recognized pattern is found, the USB controller transmits the data to the computer through RS-232.
Stage 3 - As the computer receives the information, it is fed into the graphical user interface.
B. Instrumentation Amplifier INA118
The INA118 is a low-power amplifier that gives excellent accuracy. It is a 3-op-amp design, small in size and usable in many applications, with a linear gain of approximately 5000-7500. An external resistor connected to the INA118 sets the gain. It introduces very low noise and has a high common-mode rejection ratio of 110 dB at G = 1000.
C. Filters
A filter is an electrical circuit used to modify, reshape, or reject unwanted frequency components and pass the wanted signals. In electrooculography we use a low-pass filter and a notch filter.
Low-pass filter: A low-pass filter passes only the low-frequency range from 0 Hz up to its cut-off frequency and blocks higher frequencies. It is built from combinations of resistance, capacitance and inductance, and can be classified as first-order or second-order. Its output voltage is essentially constant from DC up to the cut-off. The cut-off frequency is the point at which the output falls to 0.707 of the pass-band value; the range below this point is known as the pass band, whereas the range above it is known as the stop band.
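As a numerical illustration of the 0.707 cut-off point, the sketch below evaluates the magnitude response of a first-order RC low-pass filter; the component values are hypothetical, not those of the paper's circuit:

```python
import math

def rc_lowpass_gain(f, r, c):
    """Magnitude response |H(f)| = 1 / sqrt(1 + (f/fc)^2) of a first-order
    RC low-pass filter whose cut-off is fc = 1 / (2*pi*R*C)."""
    fc = 1 / (2 * math.pi * r * c)
    return 1 / math.sqrt(1 + (f / fc) ** 2)

# Hypothetical values: R = 16 kOhm, C = 100 nF -> fc of about 99.5 Hz
r, c = 16e3, 100e-9
fc = 1 / (2 * math.pi * r * c)
print(round(fc, 1))                              # ~99.5 Hz
print(round(rc_lowpass_gain(fc, r, c), 3))       # 0.707 at the cut-off
print(round(rc_lowpass_gain(10 * fc, r, c), 3))  # ~0.1 deep in the stop band
```

At f = fc the output is exactly 1/sqrt(2) of the pass-band value, which is the 0.707 figure quoted above.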

high output impedance is transferred to a second circuit with a low input impedance. The follower isolates the two circuits so that the second draws very little power from the first.
E. PIC16F886
Peripheral interface controllers (PICs) are electronic circuits that can be programmed to carry out a vast range of tasks; they can act as timers, control a production line, and much more. The PIC16F886 is an 8-bit RISC microcontroller that is powerful yet easy to program, as there are only 32 single-word instructions to learn. Key features include wide availability, low cost, ease of reprogramming with built-in EEPROM (electrically erasable programmable read-only memory), and abundant development tools.
F. Human-Computer Interface
Human-computer interaction (HCI) is the design and use of computer technology, focusing on the interfaces between users and computers. Researchers in the field of HCI observe the ways in which humans interact with computers and design technologies that let humans interact with computers. Here, an RS-232 cable provides the human-computer interface; it is used for the serial transmission of data and is a common way of connecting peripherals to a computer. It is chosen because the PIC16F886 is compatible with RS-232 only.

Notch filter: A major problem is power-line noise in the circuit, which can be removed using a notch filter. The notch filter blocks the 50 Hz power-line component so that it does not affect the sensitivity of the EOG electrodes.
Fig. 2 Placement of electrodes

D. Voltage Follower
A voltage follower is an op-amp circuit that acts as a buffer, providing no amplification or attenuation; it is designed to maintain a voltage gain of 1. It is called a voltage follower because the output directly follows the input: the voltage from a first circuit having
III. CONCLUSION

In this work, an EOG-based human-computer interface has been described. This EOG technique has resulted in an inexpensive as well as reliable device

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
109

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

with no side effects on the user. The input from the user is received as a variation in skin potential generated by eye movement, which is amplified by an instrumentation amplifier and then filtered to remove unwanted artifacts. The microcontroller converts this analog signal to a digital signal. A virtual USB port in the computer receives the signal through RS-232 and drives a graphical user interface, thus realizing a human-computer interface. The results obtained were accurate and reliable. This system therefore proves to be propitious for immobile people to interact with the physical world.
REFERENCES

[1] Mechatronics and its Applications (ISMA '09), 6th International Symposium, March 2009.
[2] Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control (ICSCCW 2009).
[3] Shackel, "Pilot Study in Electro-oculography," EMI Electronics Ltd., 1959.
[4] F. C. C. Riemslag, H. F. E. Verduyn Lunel and H. Spekreijse, "The Electrooculogram: A Refinement of the Method," Documenta Ophthalmologica, 1990.
[5] Arthi S. V., "Analysis of Electrooculography Signals for the Interface and Control of Appliances," February 2015.
[6] Duane Denney and Colin Denney, "The Eye-Blink Electrooculogram," Department of Psychiatry, Oregon Health Sciences University, 1984.



Memristors: The Fourth Basic Circuit Element
Khushboo, Mohina Gandhi

Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: kkhushboo_2008@yahoo.com, mohina.gandhi@gmail.com
Abstract: A memristor is a 2-terminal electrical circuit element first postulated in 1971 [1] as the fourth basic circuit element, in addition to the three classic elements: the resistor, capacitor and inductor [2], [3]. This fourth fundamental passive element has recently been fabricated, even though it was conceived several decades ago. The element is named memristor because it combines the behavior of a memory and a resistor. The memristor is a two-terminal element whose resistance depends on the magnitude, direction and duration of the applied voltage. A memristor remembers its most recent memristance when the voltage is turned off, until the next time it is turned on, and can provide dynamical negative resistance. It thus has promising characteristics that could revolutionize nanoelectronics. It can find applications in the analog and digital circuits that are part of everyday systems such as sensors and mobile phones. We extend the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors whose properties depend on the state and history of the system. All these elements show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor.

Keywords: Memory, resistance, modelling, characteristics, simulation, memristive system

I. INTRODUCTION

Circuit elements that store information without the need of a power source would represent a paradigm change in electronics, allowing for low-power computation and storage. In addition, if that information spans a continuous range of values, analog computation may replace the present digital one. Such a concept is also likely to be at the origin of the workings of the human brain, and possibly of many other mechanisms in living organisms, so that

such circuit elements may help us understand


adaptive and spontaneous behavior, or even learning.
One such circuit element is the memory-resistor
(memristor for short) which was postulated by Chua
in 1971 by analyzing mathematical relations between
pairs of fundamental circuit variables. The memristor
is characterized by a relation between the charge and
the flux, defined mathematically as the time integral
of the voltage, which need not have a magnetic flux
interpretation. This relation can be generalized to
include any class of two-terminal devices (which are
called memristive systems) whose resistance depends
on the internal state of the system [4]. His seminal
paper challenged the well-established perception of
traditional electronics which consist of three
fundamental circuit elements: the capacitor, the
resistor and the inductor [1].
The fundamental circuit elements involve four variables: flux (φ), charge (q), voltage (v) and current (i). In principle a relation should exist between each pair of these variables; by symmetry there should be a total of six relations between the four variables, yet only five had been identified. Chua therefore suggested that a fourth element should exist. He named this fourth element the memristor, holding a relationship between flux and charge, and mathematically proved its rationality.
he predicted the memristor existence, a working
solid-state memristor based on a nanoscale thin-film
of titanium dioxide was created by a team headed by
Williams at Hewlett-Packard. In order to improve computing performance, outstanding technological progress has been driven by inventions such as the integrated circuit and the transistor. Among several
emerging technologies, the memristor has become
one of the most promising candidates in building a
better storage structure with higher capacity and more
efficient performance. Memristor and memristive
system could be a very suitable technology for such
demanding procedures and all the requirements that

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
111

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

were mentioned could be provided by computing


technologies in an efficient and energy-saving way.
Memristor offers non-volatility, fast switching, low
energy cost and high density, in particular,
memristors can be scaled down to less than 10 nm
[5]. Memristors have been widely used in many
applications, such as storage elements in the content
addressable memory and synapses in neural network.
HP has announced that it intends to bring 100 TB memristor drives into its products within five years. The memristor behaves like a resistor that remembers the history of the current through it, while exhibiting many peculiar, unconventional non-linear features. These distinctive characteristics have been developed into a number of models for different applications [6].
Modelling the memristor is important to design
memristive systems, memristor circuits and analyse
their performance. In particular, the following
resistance switching devices are memristors:
RRAM: Resistance switching RAM
ReRAM: Resistive RAM
PCRAM: Phase-Change RAM
MRAM: Magnetoresistive RAM
MIM: Metal-Insulator-Metal memory cell

Fig. 2 Symbols

In this paper we show that the above concept of a memory device is not necessarily limited to resistances but can in fact be generalized to capacitive and inductive systems. Quite generally, if x denotes a set of n state variables describing the internal state of the system, u(t) and y(t) are any two complementary constitutive variables [7] (i.e., current, charge, voltage, or flux) denoting the input and output of the system, and g is a generalized response, we can define a general class of nth-order u-controlled memory devices as those described by the following relations

y(t) = g(x, u, t) u(t)    (1)
ẋ = f(x, u, t)            (2)

where f is a continuous n-dimensional vector function, and we assume on physical grounds that, given an initial state x(t = t0) at time t0, Eq. (2) admits a unique solution [8]. Memcapacitive and meminductive systems are special cases of Eqs. (1) and (2), where the two constitutive variables that define them are charge and voltage for the memcapacitance, and current and flux for the meminductance.

II. MEMORY RESISTOR SYSTEMS

For completeness, let us introduce first the notion of memory-resistive systems [9]. We also provide a simple analytical example of memristive behavior which has been employed in describing learning circuits [10]. For a broad range of memristive system models see, e.g., Ref. [11]. Definition - From Eqs. (1) and (2), an nth-order current-controlled memristive system is described by the equations

VM(t) = R(x, I, t) I(t)    (3)
ẋ = f(x, I, t)             (4)

Fig. 1. Memristor missing relationship

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
112

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

with x a vector representing n internal state variables; VM(t) and I(t) denote the voltage and current across the device, and R is a scalar, called the memristance (for memory resistance), with the physical units of ohms. The equation for a charge-controlled memristor is a particular case of Eqs. (3) and (4), when R depends only on charge, namely

VM = R(q(t)) I,    (5)

with the charge related to the current via the time derivative I = dq/dt. Note that the analytical equation describing the TiO2 device derived in the work by the Hewlett-Packard group [7] has precisely this form, and it therefore represents an ideal memristor. We can also define an nth-order voltage-controlled memristive system from the relations

I(t) = G(x, VM, t) VM(t)    (6)
ẋ = f(x, VM, t)             (7)

where we call G the memductance (for memory conductance).
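To make Eqs. (3)-(5) concrete, the following Python sketch integrates a linear-drift memristor model in the spirit of the HP TiO2 device; all parameter values (RON, ROFF, D, MU) are illustrative assumptions, not figures taken from this paper:

```python
import math

RON, ROFF = 100.0, 16e3   # resistance when fully doped / undoped (ohms)
D, MU = 10e-9, 1e-14      # film thickness (m), dopant mobility (m^2 V^-1 s^-1)

def simulate(v_of_t, t_end, dt=1e-6, w0=0.5 * D):
    """Euler-integrate the state equation dw/dt = MU*RON/D * I(t), with the
    memristance M(w) = RON*(w/D) + ROFF*(1 - w/D); returns (t, v, i) samples."""
    w, t, samples = w0, 0.0, []
    while t < t_end:
        m = RON * (w / D) + ROFF * (1 - w / D)           # memristance, Eq. (3)
        i = v_of_t(t) / m                                # V = M(w) * I
        w = min(max(w + MU * RON / D * i * dt, 0.0), D)  # clamp state to film
        samples.append((t, v_of_t(t), i))
        t += dt
    return samples

trace = simulate(lambda t: math.sin(2 * math.pi * 50 * t), 0.02)
```

Plotting v against i over one period would show the pinched hysteresis loop: the curve always passes through the origin because i = 0 whenever v = 0, while the slope (the memristance) depends on the history of the current.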

where q(t) is the charge on the capacitor at time t, VC(t) is the corresponding voltage, and C is the memcapacitance (for memory capacitance), which depends on the state of the system. Similarly, we can define an nth-order charge-controlled memcapacitive system from the equations

VC(t) = C^-1(x, q, t) q(t)    (10)
ẋ = f(x, q, t)                (11)

where C^-1 is the inverse memcapacitance.


From a microscopic point of view, a change in capacitance can occur in two ways: i) due to a geometrical change of the system (e.g., a variation in its structural shape), or ii) due to a change in the quantum-mechanical properties of the carriers and bound charges of the materials composing the capacitor (manifested, e.g., in a history-dependent permittivity ε), or both. In either case, inelastic (dissipative) effects may be involved in changing the capacitance of the system upon application of the external control parameter.
These dissipative processes release energy in the
form of heating of the materials composing the
capacitor. However, this heat may not be simply
accounted for as a (time-dependent) resistance in
series with the capacitor. Similarly, there may be
situations in which energy not from the control
parameter but from sources that control the equation
of motion for the state variable, Eq. (11), is needed to
vary the capacitance of the system (e.g., in the form
of elastic energy or via a power source that controls,
say, the permittivity of the system via a polarization
field). This energy can then be released in the circuit
thus amplifying the current. Therefore, Eqs. (10) and
(11) for memcapacitive systems postulated above
could, in principle, describe both active and passive
devices. However, starting from a fully discharged
state, the amount of removed energy from a passive
memcapacitive system cannot exceed the amount of
previously added energy.

Fig. 3 Voltage Controlled Memristor

III. MEMCAPACITIVE SYSTEMS

Fig. 4 Memristor, Memcapacitor, Meminductor

We define an nth-order voltage-controlled memcapacitive system by the equations

q(t) = C(x, VC, t) VC(t)    (8)
ẋ = f(x, VC, t)             (9)

Fig. 5 Memcapacitive range

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
113

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

IV. MEMINDUCTIVE SYSTEMS

Fig. 5 Meminductive System

Let us now introduce the third class of memory devices. We call an nth-order current-controlled meminductive system one described by the equations

φ(t) = L(x, I, t) I(t)    (12)
ẋ = f(x, I, t)            (13)

where L is the meminductance, and an nth-order flux-controlled meminductive system one described by

I(t) = L^-1(x, φ, t) φ(t)    (14)
ẋ = f(x, φ, t)               (15)

within a certain time scale. Apart from the obvious use of these devices in non-volatile memories, several applications can already be envisioned for these systems, especially in neuromorphic devices to simulate learning, adaptive and spontaneous behavior. For instance, the identification of memristive behavior in primitive organisms such as amoebas [10] opens up the possibility of relating physiological processes that occur in cells to the theory of memristive systems.

Acknowledgment
I would like to acknowledge the contributions of my parents and my colleagues in supporting me while writing this paper.
with L^-1 the inverse meminductance.
In electronics, inductors are primarily of the solenoid type, consisting of a coil of conducting material wrapped around a core. The inductance of a solenoid is proportional to the relative permeability μr of the material within the solenoid and also depends on the geometrical parameters of the system. The simplest way to introduce memory effects in such a system is to use a core material whose response to the applied magnetic field depends on its history; for example, ferromagnetic materials exhibiting magnetic hysteresis, such as iron. In fact, an electronic circuit with such an inductor was recently analyzed. Another way to introduce memory effects is by varying the inductor's shape. A circuit model for a meminductive system can be formulated similarly to the models of memristive and memcapacitive systems discussed above.
V. CONCLUSION

We have extended the notion of memory devices to both capacitive and inductive systems. These devices have specific properties that appear most strikingly as a pinched hysteretic loop in the two constitutive variables that define them: current-voltage for memristive systems, charge-voltage for memcapacitive systems, and current-flux for meminductive systems. Many systems belong to these classes, especially those of nanoscale dimensions. Indeed, with advances in the miniaturization of devices, these concepts are likely to become more relevant, since at the nanoscale the dynamical properties of electrons and ions may strongly depend on the history of the system, at least
REFERENCES

[1] L. O. Chua, "Memristor - The Missing Circuit Element," IEEE Trans. Circuit Theory, vol. 18, pp. 507-519, 1971.
[2] L. O. Chua and S. M. Kang, "Memristive devices and systems," Proc. IEEE, vol. 64, pp. 209-223, 1976.
[3] M. Sapoff and R. M. Oppenheim, "Theory and application of self-heated thermistors," Proc. IEEE, vol. 51, pp. 1292-1305, 1963.
[4] Y. Chen, G. Y. Jung, D. A. A. Ohlberg, X. M. Li, D. R. Stewart, J. O. Jeppesen, K. A. Nielsen, J. F. Stoddart, and R. S. Williams, "Nanoscale molecular-switch crossbar circuits," Nanotech., vol. 14, pp. 462-468, 2003.
[5] Yu. V. Pershin and M. Di Ventra, "Spin memristive systems: Spin memory effects in semiconductor spintronics," Phys. Rev. B, Condens. Matter, vol. 78, p. 113309/1-4, 2008.
[6] Yu. V. Pershin and M. Di Ventra, "Current-voltage characteristics of semiconductor/ferromagnet junctions in the spin-blockade regime," Phys. Rev. B, Condens. Matter, vol. 77, p. 073301/1-4, 2008; "Spin blockade at semiconductor/ferromagnet junctions," Phys. Rev. B, Condens. Matter, vol. 75, p. 193301/1-4, 2007.
[7] Batas and H. Fiedler, "A Memristor SPICE Implementation and a New Approach for Magnetic Flux-Controlled Memristor Modeling," IEEE Transactions on Nanotechnology, vol. 10, no. 2, pp. 250-255, March 2011.
[8] Y. V. Pershin and M. Di Ventra, "Memristive Circuits Simulate Memcapacitors and Meminductors," Electronics Letters, vol. 46, no. 7, pp. 517-518, April 2010.
[9] O. Garitselov, S. P. Mohanty, and E. Kougianos, "A Comparative Study of Metamodels for Fast and Accurate Simulation of Nano-CMOS Circuits," IEEE Transactions on Semiconductor Manufacturing, vol. 25, no. 1, pp. 26-36, February 2012.
[10] W. Fei, H. Yu, W. Zhang, and K. S. Yeo, "Design Exploration of Hybrid CMOS and Memristor Circuit by New Modified Nodal Analysis," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. PP, no. 99, 2011.
[11] Lehtonen and M. Laiho, "Stateful Implication Logic with Memristors," in Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures, 2009, pp. 33-36.



An Exemplar in Telecom: MIMO
1Rajeev Sharma, 2Surender Kumar

1TIT&S, Bhiwani
2Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: rajeevsharma78@yahoo.com, suren001@gmail.com


Abstract: The use of multiple antennas at both ends of a wireless link (MIMO technology) holds the potential to drastically improve the spectral efficiency and link reliability of future wireless communication systems. The 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. The majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple-input multiple-output (MIMO) technologies introduced in LTE, such as spatial multiplexing, transmit diversity, and beamforming, are key components for providing a higher peak rate at better system efficiency, which is essential for supporting future broadband data services over wireless links. In this paper we discuss the basics of MIMO systems.

Keywords: SISO, SIMO, MISO, MIMO, LTE, 4G

I. INTRODUCTION

The foremost challenges in future wireless communication system design are increased spectral efficiency and enhanced link reliability. The wireless channel constitutes a hostile propagation medium, which suffers from fading (caused by destructive addition of multipath components) and interference from other users. Diversity provides the receiver with several (ideally independent) replicas of the transmitted signal and is therefore a powerful means to combat fading and interference and thereby improve link reliability. Common forms of diversity are time diversity (due to Doppler spread) and frequency diversity (due to delay spread). In recent years the use of spatial (or antenna) diversity has become extremely popular, largely because it can be provided without loss of spectral efficiency. Receive diversity, that is, the use of multiple antennas on the receive side of a wireless link, is a well-studied subject [1]. Driven by mobile wireless applications, where it is difficult to deploy multiple antennas in the handset, the use of multiple antennas on the transmit side combined with signal processing and coding has become known under the name of space-time coding [2]-[4] and is currently an active area of research. The use of multiple antennas at both ends of a wireless link (multiple-input multiple-output (MIMO) technology) has recently been demonstrated to have the potential of achieving extraordinary data rates [5]-[9]. The corresponding technology is known as spatial multiplexing [5], [9] or BLAST [6], [10] and yields an impressive increase in spectral efficiency.
II. MIMO BASICS

As a result of the use of multiple antennas, MIMO wireless technology is able to considerably enhance the capacity of a given channel. By increasing the number of receive and transmit antennas, it is possible to linearly increase the throughput of the channel with every pair of antennas added to the system. This makes MIMO wireless technology one of the most vital wireless techniques to be employed in recent years. As spectral bandwidth is becoming an ever more valuable commodity for radio communications systems, techniques are needed to use the available bandwidth more effectively. MIMO wireless technology is one of these techniques.
2.1 MIMO - SISO

The simplest form of radio link can be defined in MIMO terms as SISO - Single Input Single Output. This is effectively a standard radio channel: the transmitter operates with one antenna, as does the receiver. There is no diversity and no additional processing is required.

Fig. 1 SISO - Single Input Single Output


The benefit of a SISO system is its simplicity. SISO requires no processing in terms of the various forms of diversity that may be used. However, the SISO channel is limited in its performance, as interference and fading will impact the system more than a MIMO system using some form of diversity. The throughput depends upon the channel bandwidth and the signal-to-noise ratio.
2.2 MIMO - SIMO

The SIMO or Single Input Multiple Output version of MIMO occurs where the transmitter has a single antenna and the receiver has multiple antennas. This is also known as receive diversity. It is often used to enable a receiver system that receives signals from a number of independent sources to combat the effects of fading. It has been used for many years with short-wave listening/receiving stations to combat the effects of ionospheric fading and interference.

Fig. 2 SIMO - Single Input Multiple Output

SIMO has the advantage that it is relatively easy to implement, although it does have the disadvantage that the processing is required in the receiver. The use of SIMO may be quite acceptable in many applications, but where the receiver is located in a mobile device such as a cellphone handset, the levels of processing may be limited by size, cost and battery drain.
There are two forms of SIMO that can be used:
1. Switched diversity SIMO: This form of SIMO looks for the strongest signal and switches to that antenna.
2. Maximum ratio combining SIMO: This form of SIMO takes both signals and sums them to give a combination. In this way, the signals from both antennas contribute to the overall signal.

2.3 MIMO - MISO

Multiple Input Single Output (MISO) is also termed transmit diversity. In this case, the same data is transmitted redundantly from the two transmitter antennas. The receiver is then able to receive the optimum signal, which it can then use to extract the required data.

Fig. 3 MISO - Multiple Input Single Output

The advantage of using MISO is that the multiple antennas and the redundancy coding/processing are moved from the receiver to the transmitter. In instances such as cellphone UEs, this can be a significant advantage in terms of space for the antennas, and it decreases the level of processing required in the receiver for the redundancy coding. This has an affirmative impact on size, cost and battery life, as the lower level of processing requires less battery consumption.
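The switched-diversity and maximum-ratio-combining forms of SIMO described above can be contrasted in a small sketch. This is a simplified model of our own, assuming each receive branch is summarized by a single linear-scale SNR value; the function names are not from any standard:

```python
def switched_diversity_snr(branch_snrs):
    # Switched (selection) diversity: listen only to the strongest branch,
    # so the output SNR is simply the best single-branch SNR.
    return max(branch_snrs)

def mrc_snr(branch_snrs):
    # Maximum ratio combining: each branch is weighted in proportion to its
    # own signal quality before summing; the classical result is that the
    # combined output SNR equals the sum of the branch SNRs.
    return sum(branch_snrs)

branches = [4.0, 1.0]  # linear-scale SNRs seen on two receive antennas
print(switched_diversity_snr(branches))  # 4.0
print(mrc_snr(branches))                 # 5.0
```

Because MRC sums the branch SNRs, it can never do worse than switching to the best single antenna, which is why it is the preferred scheme when the receiver can afford the extra processing.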

2.4 MIMO

MIMO is effectively a radio antenna technology, as it uses multiple antennas at the transmitter and receiver to facilitate a variety of signal paths to carry the data, choosing separate paths for each antenna to permit multiple signal paths to be used.

Fig.4 MIMO - Multiple Input Multiple Output

One of the core ideas behind MIMO wireless systems is space-time signal processing, in which time is complemented with the spatial dimension inherent in the use of multiple spatially distributed antennas, i.e. the use of multiple antennas located at different points. Accordingly, MIMO wireless systems can be viewed as a logical extension to the smart antennas that have been used for many years to improve wireless links. Between a transmitter and a


receiver, the signal can take many paths. Additionally, by moving the antennas even a small distance, the paths used will change. The variety of paths available occurs as a result of the number of objects that appear to the side of, or even in, the direct path between the transmitter and receiver. Previously these multiple paths only served to introduce interference. By using MIMO, these additional paths can be used to advantage. They can be used to provide additional robustness to the radio link by improving the signal-to-noise ratio, or by increasing the link data capacity.
The two main formats for MIMO are given below:
1. Spatial diversity: Spatial diversity used in this narrower sense often refers to transmit and receive diversity. These two methodologies are used to provide improvements in the signal-to-noise ratio, and they are characterised by improving the reliability of the system with respect to the various forms of fading.
2. Spatial multiplexing: This form of MIMO is used to provide additional data capacity by utilising the different paths to carry additional traffic, i.e. increasing the data throughput capability. One of the key advantages of MIMO spatial multiplexing is that it is able to provide additional data capacity. MIMO spatial multiplexing achieves this by utilizing the multiple paths and effectively using them as additional "channels" to carry data. The maximum amount of data that can be carried by a radio channel is limited by the physical boundaries defined under Shannon's Law. Multiple-input, multiple-output (MIMO) antenna systems are used in modern wireless standards, including IEEE 802.11n, 3GPP LTE, and mobile WiMAX systems. The technique supports enhanced data throughput even under conditions of interference, signal fading, and multipath. The demand for higher data rates over longer distances has been one of the primary motivations behind the development of MIMO orthogonal frequency-division multiplexing (OFDM) communications systems. Shannon's law defines the maximum rate at which error-free data can be transmitted over a given bandwidth in the presence of noise. It is usually expressed in the form:
C = BW log2(1 + SNR)    (1)

where C is the channel capacity in bits per second, BW is the bandwidth in hertz, and SNR is the signal-to-noise ratio. As the above equation shows, an increase in a

channel's SNR results in marginal gains in channel throughput. As a result, the traditional way to achieve higher data rates is by increasing the signal bandwidth. Unfortunately, increasing the signal bandwidth of a communications channel by increasing the symbol rate of a modulated carrier increases its susceptibility to multipath fading. For wide-bandwidth channels, one partial solution to the multipath challenge is to use a series of narrowband overlapping subcarriers. Not only does the use of overlapping OFDM subcarriers improve spectral efficiency, but the lower symbol rates used by narrowband subcarriers also reduce the impact of multipath signal products. MIMO communications channels provide an interesting solution to the multipath challenge by requiring multiple signal paths. In effect, MIMO systems use a combination of multiple antennas and multiple signal paths to gain knowledge of the communications channel. By using the spatial dimension of a communications link, MIMO systems can achieve significantly higher data rates than traditional single-input, single-output (SISO) channels. In a 2 x 2 MIMO system, signals propagate along multiple paths from the transmitter to the receiver antennas. Using this channel knowledge, a receiver can recover independent streams from each of the transmitter's antennas. A 2 x 2 MIMO system produces two spatial streams to effectively double the maximum data rate of what might be achieved in a traditional 1 x 1 SISO communications channel. The channel capacity of a MIMO system can be estimated as a function of N spatial streams. A basic approximation of MIMO channel capacity as a function of spatial streams, bandwidth, and signal-to-noise ratio (SNR) is shown in the following equation:
C = N BW log2(1 + SNR)    (2)
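Equations 1 and 2 can be evaluated directly. The sketch below is our own illustration (not from the paper), taking SNR in dB and converting it to the linear scale the formulas expect, to show the linear scaling with the number of spatial streams N:

```python
import math

def channel_capacity(bw_hz, snr_db, n_streams=1):
    # Shannon capacity (Eq. 1), extended by the MIMO approximation of
    # Eq. 2: capacity scales linearly with the number of spatial streams.
    snr_linear = 10 ** (snr_db / 10.0)  # convert dB to a linear ratio
    return n_streams * bw_hz * math.log2(1 + snr_linear)

siso = channel_capacity(20e6, 25)               # one stream, 20 MHz, 25 dB
mimo = channel_capacity(20e6, 25, n_streams=4)  # four spatial streams
print(mimo / siso)  # 4.0 -- four streams give four times the SISO capacity
```

Note that the N-fold gain in Eq. 2 comes without any increase in bandwidth or SNR, which is precisely what makes spatial multiplexing attractive.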

Given the equation for MIMO channel capacity, it is possible to investigate the relationship between the number of spatial streams and the throughput of various implementations of SISO and MIMO configurations. As an example, the IEEE 802.11g specs prescribe that a wireless-local-area-network (WLAN) channel uses a SISO configuration. With this standard, the maximum coded data rate of 54 Mb/s requires use of a 64-QAM modulation scheme and a code rate of 3/4. As a result, the uncoded bit rate is 72 Mb/s (4/3 x 54 Mb/s). With minimum transmitter error vector magnitude (EVM) at -25 dB, an SNR of 25 dB can be estimated as the requirement


for a 64-state quadrature amplitude modulation (64QAM) scheme. While EVM and SNR are not equivalent in all cases, we can assume that the magnitude error of a symbol will dominate the signal error as the SNR approaches its lower limit. The maximum data rate of IEEE 802.11g maps closely with the maximum channel capacity dictated by the Shannon-Hartley theorem. According to this theorem, a Gaussian channel with an SNR of 25 dB should produce an uncoded data rate of 94 Mb/s in a 20-MHz channel bandwidth. By contrast, Eq. 2 would suggest that a MIMO channel with four spatial streams should be capable of four times the capacity of the SISO channel. A 20-MHz channel with a signal-to-noise ratio (SNR) of 25 dB and four spatial streams should therefore have an uncoded bit rate of 4 x 94 Mb/s = 376 Mb/s. This estimation maps closely with the expected data rates of the draft IEEE 802.11n physical-layer specs. IEEE 802.11n is designed to support MIMO configurations with as many as four spatial streams. At the highest data rate, bursts using a 64QAM modulation scheme with a 5/6 channel code rate produce a data rate of 288.9 Mb/s and an uncoded bit rate of 346.68 Mb/s. At the highest data rate, the IEEE 802.11n channel with four spatial streams thus produces a data rate that is comparable to the theoretical limit of 376 Mb/s. It can be observed that the bit rate of a 4 x 4 (four-spatial-stream) MIMO configuration exceeds that of the Shannon-Hartley limit at all data rates, making MIMO systems attractive for higher data throughput. While MIMO systems provide users with clear benefits at the application level, the design and test of MIMO devices is not without significant challenges.
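The coded-versus-uncoded rate arithmetic used above is simply a division by the code rate, as this small check illustrates (a helper of our own, not part of any standard):

```python
from fractions import Fraction

def uncoded_bit_rate(coded_mbps, code_rate):
    # A forward-error-correction code of rate k/n carries k payload bits
    # per n transmitted bits, so the raw (uncoded) bit rate on air is the
    # coded data rate divided by the code rate.
    return coded_mbps / float(code_rate)

# IEEE 802.11g: 54 Mb/s at code rate 3/4 -> 72 Mb/s uncoded
print(uncoded_bit_rate(54, Fraction(3, 4)))            # 72.0
# IEEE 802.11n (four streams): 288.9 Mb/s at rate 5/6 -> 346.68 Mb/s uncoded
print(round(uncoded_bit_rate(288.9, Fraction(5, 6)), 2))  # 346.68
```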
III. BENEFITS OF MIMO TECHNOLOGY

(1) Multiple antenna configurations can be used to overcome the unfavorable effects of multipath and fading when trying to achieve high data throughput in limited-bandwidth channels. Multiple-input, multiple-output (MIMO) antenna systems are used in modern wireless standards, including IEEE 802.11n, 3GPP LTE, and mobile WiMAX systems. The technique supports enhanced data throughput even under conditions of interference, multipath and fading. The demand for higher data rates over longer distances has been one of the primary motivations behind the development of MIMO orthogonal frequency-division multiplexing (OFDM) communications systems.

(2) Superior Data Rates, Range and Reliability

Systems with multiple antennas at the transmitter and receiver, also referred to as Multiple Input Multiple Output (MIMO) systems, offer superior data rates, range and reliability without requiring additional bandwidth or transmit power. By using several antennas at both the transmitter and receiver, MIMO systems create multiple independent channels for sending multiple data streams.

Fig. 5 Stream combining for enhanced reliability

A 4x4 MIMO system supports up to four independent data streams. Some of these streams can be combined through dynamic digital beamforming and MIMO receiver processing (shown in the red oval in Fig. 5), which results in increased reliability and range. The number of independent channels, and associated data streams, that can be supported over a MIMO channel is equal to the minimum of the number of antennas at the transmitter and at the receiver. Thus, a 2x2 system can support at most two streams, a 3x3 system can support three streams and a 4x4 system can support four streams.
IV. LTE MIMO CONCEPTS

MIMO systems form an essential part of LTE in order to achieve the ambitious requirements for throughput and spectral efficiency. MIMO refers to the use of multiple antennas at the transmitter and receiver side.
4.1 Downlink MIMO
For the LTE downlink, a 2x2 configuration for MIMO is assumed as the baseline configuration, i.e. 2 transmit antennas at the base station and 2 receive antennas at the terminal side. Configurations with 4 antennas are also being considered. Different MIMO modes are envisaged. It has to be differentiated


between spatial multiplexing and transmit diversity, and it depends on the channel condition which scheme to select.
4.1.1 Spatial Multiplexing

Spatial multiplexing allows transmitting different streams of data simultaneously on the same downlink resource block(s). These data streams can belong to one single user (single-user MIMO / SU-MIMO) or to different users (multi-user MIMO / MU-MIMO). While SU-MIMO increases the data rate of one user, MU-MIMO allows increasing the overall capacity. Spatial multiplexing is only possible if the mobile radio channel allows it.

Fig. 6 Spatial multiplexing

Figure-6 shows the principle of spatial multiplexing, exploiting the spatial dimension of the radio channel, which allows the different data streams to be transmitted simultaneously. Each transmit antenna transmits a different data stream. Each receive antenna may receive the data streams from all transmit antennas. The channel (for a specific delay) can thus be described by a channel matrix H.
In this general description, Nt is the number of transmit antennas and Nr is the number of receive antennas, resulting in a 2x2 matrix for the baseline LTE scenario. The coefficients hij of this matrix are called the channel coefficients from transmit antenna i to receive antenna j, thus describing all possible paths between transmitter and receiver side. The number of data streams that can be transmitted in parallel over the MIMO channel is given by min{Nt, Nr} and is limited by the rank of the matrix H. The transmission quality degrades significantly in case the singular values of matrix H are not sufficiently strong. This can happen in case the two antennas are not sufficiently de-correlated, for example in an environment with little scattering or when the antennas are too closely spaced. In LTE, up to 2 code words can be mapped onto different so-called layers. The number of layers for transmission is equal to the rank of the matrix H. There is a fixed mapping between code words and layers. Figure-7 below describes how precoding on the transmitter side is used to support spatial multiplexing. This is achieved by applying a precoding matrix W to the signal before transmission.

Fig. 7 Pre-coding principle

The optimum pre-coding matrix W is selected from a predefined codebook which is known at both eNodeB and UE side. Unitary pre-coding is used, i.e. the pre-coding matrices are unitary: W^H W = I. The UE estimates the radio channel and selects the optimum pre-coding matrix, i.e. the one which offers maximum capacity. The UE provides feedback on the uplink control channel regarding the preferred pre-coding matrix (a pre-coding vector as a special case). Ideally, this information is made available per resource block, or at least per group of resource blocks, since the optimum pre-coding matrix varies between resource blocks.
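The dependence of the stream count on the rank and singular values of H, noted above, can be checked numerically for the 2x2 baseline case. The sketch below is our own real-valued illustration (an actual LTE channel matrix is complex-valued), computing the singular values from the Gram matrix H^T H:

```python
import math

def singular_values_2x2(h11, h12, h21, h22):
    # Singular values of H = [[h11, h12], [h21, h22]], obtained as the
    # square roots of the eigenvalues of the Gram matrix G = H^T H.
    a = h11 * h11 + h21 * h21            # G[0][0]
    b = h11 * h12 + h21 * h22            # G[0][1] == G[1][0]
    c = h12 * h12 + h22 * h22            # G[1][1]
    mean = (a + c) / 2.0
    half = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return math.sqrt(mean + half), math.sqrt(max(mean - half, 0.0))

def usable_streams(h11, h12, h21, h22, tol=1e-9):
    # Parallel streams are limited by the rank of H, i.e. the number of
    # singular values meaningfully above zero.
    return sum(1 for s in singular_values_2x2(h11, h12, h21, h22) if s > tol)

print(usable_streams(1, 0, 0, 1))  # 2: de-correlated antennas, full rank
print(usable_streams(1, 1, 1, 1))  # 1: fully correlated paths, rank one
```

The second case models the "too closely spaced antennas" situation: both receive antennas see the same linear combination of the transmitted signals, so only one layer survives.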
4.1.2 Transmit Diversity

Instead of increasing data rate or capacity, MIMO can be used to exploit diversity. Transmit diversity schemes are already known from WCDMA Release 99 and will also form part of LTE as one MIMO mode. In case the channel conditions do not allow spatial multiplexing, a transmit diversity scheme will be used instead; switching between these two MIMO modes is possible depending on channel conditions. Transmit diversity is used when the selected number of streams (rank) is one.
4.2 Uplink MIMO

Uplink MIMO schemes for LTE will differ from downlink MIMO schemes to take into account terminal complexity issues. For the uplink, MU-MIMO can be used: multiple user terminals may transmit simultaneously on the same resource block. This is also referred to as spatial domain multiple


access (SDMA). The scheme requires only one transmit antenna at the UE side, which is a big advantage. The UEs sharing the same resource block have to apply mutually orthogonal pilot patterns. To exploit the benefit of two or more transmit antennas but still keep the UE cost low, antenna subset selection can be used. In the beginning, this technique will be used: e.g. a UE will have two transmit antennas but only one transmit chain and amplifier. A switch will then choose the antenna that provides the best channel to the Node B.
V. CONCLUSION

Multiple-input multiple-output, or MIMO, is a radio communications or RF technology that is being mentioned and used in many new technologies these days. Wi-Fi, LTE (3G Long Term Evolution) and many other radio, wireless and RF technologies are using the new MIMO wireless technology to provide increased link capacity and spectral efficiency combined with improved link reliability, using what were previously seen as interference paths.
REFERENCES

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley, 1991.
[2] Schlegel, Statistical Communication Theory. Lecture Notes for ELEN 7950-4, University of Utah, Spring 2002.
[3] P. Viswanath, D. N. C. Tse, and R. Laroia, "Opportunistic beamforming using dumb antennas," IEEE Transactions on Information Theory, vol. 48, pp. 1277-1294, June 2002.
[4] Tse and P. Viswanath, Fundamentals of Wireless Communication. Lecture Notes for EE 290S, U.C. Berkeley, Fall 2002.
[5] Telatar, "Capacity of multi-antenna Gaussian channels," European Transactions on Telecommunications, vol. 10, pp. 585-595, November 1999.
[6] S. Haykin, Adaptive Filter Theory. Prentice Hall, 4th ed., 2002.
[7] T. James, "Distributions of matrix variates and latent roots."
[8] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE Journal on Selected Areas in Communications, vol. 16, pp. 1451-1458, August 1998.
[9] Tarokh, N. Seshadri, and A. R. Calderbank, "Space-time codes for high data rate wireless communication: Performance criterion and code construction," IEEE Transactions on Information Theory, vol. 44, pp. 744-765, March 1998.
[10] Tarokh, H. Jafarkhani, and A. R. Calderbank, "Space-time block codes from orthogonal designs," IEEE Transactions on Information Theory, vol. 45, pp. 1456-1467, July 1999.


Designing of Combinational and Sequential Logic Circuits Using Precomputation Technique

Neha Gupta, Pooja Mendiratta

Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: pjmendiratta@yahoo.co.in
Abstract— A recently proposed logic optimization technique called precomputation is discussed, which selectively disables the inputs of a sequential logic circuit, thereby reducing switching activity and power dissipation. In this paper, we present new precomputation architectures for both combinational and sequential logic and describe new precomputation-based logic synthesis methods that optimize logic circuits for low power.
We present a general precomputation architecture for sequential logic circuits and show that it is significantly more powerful than the architectures previously treated in the literature. In this architecture, output values required in a particular clock cycle are selectively precomputed one clock cycle earlier, and the original logic circuit is turned off in the succeeding clock cycle.
We introduce a powerful precomputation architecture for combinational logic circuits that uses transmission gates or transparent latches to disable parts of the logic.
Keywords— Precomputing, sequential logic circuits

I. INTRODUCTION

Average power dissipation has recently emerged as an important parameter in the design of general-purpose and application-specific integrated circuits. Optimization for low power can be applied at many different levels of the design hierarchy. In CMOS circuits, the probabilistic average switching activity of the circuit is a good measure of the average power dissipation of the circuit. Average power dissipation can thus be computed by estimating the average switching activity. Several methods to estimate power dissipation for CMOS combinational circuits have been developed. More recently, efficient and accurate methods of power dissipation estimation for sequential circuits have been developed.

In this work, we are concerned with the problem of optimizing logic-level circuits for low power. Recently, a new sequential logic optimization method has been presented that is based on selectively precomputing the output logic values of the circuit one clock cycle before they are required, and using the precomputed values to reduce internal switching activity in the succeeding clock cycle [1]. The primary optimization step is the synthesis of the precomputation logic, which computes the output values for a subset of input conditions. If the output values can be precomputed, the original logic circuit can be turned off in the next clock cycle and will not have any switching activity. The precomputation logic adds to the circuit area and can also result in an increased clock period.
We introduce two new precomputation architectures in this paper. The first architecture targets sequential logic circuits and allows the precomputation logic to be a function of a subset or all of the input variables. The second precomputation architecture targets combinational circuits. The reduction in switching activity is achieved by introducing transmission gates or transparent latches in the circuit which can be disabled when the signal going through them is not necessary to determine the output values. This architecture is more flexible than any of the sequential architectures since we are not limited to precomputation over primary inputs.
II. PREVIOUS WORK

In this section we describe the Subset Input Disabling precomputation architecture introduced in [1]. Consider the circuit of Figure 1. We have a combinational logic block A that is separated by registers R1 and R2. While R1 and R2 are shown as distinct registers in Figure 1, they could, in fact, be the same register.
In Figure 2 the Subset Input Disabling precomputation architecture is shown. The inputs to the block A have been partitioned into two sets,


corresponding to the registers R1 and R2. The output of the logic block A feeds the register R3. Two Boolean functions g1 and g2 are the predictor functions. We require:

g1 = 1 => f = 1    (1)
g2 = 1 => f = 0    (2)

Fig. 1 Original circuit

Fig. 2 Subset Input Disabling precomputation architecture

Therefore, during clock cycle t, if either g1 or g2 evaluates to a 1, we set the load enable signal of the register R2 to be 0. This implies that the outputs of R2 during clock cycle t + 1 do not change. However, since the outputs of register R1 are updated, the function f will evaluate to the correct logical value. A power reduction is achieved because only a subset of the inputs to block A change, implying reduced switching activity.
The choice of g1 and g2 is critical. We wish to include as many input conditions as we can in g1 and g2. In other words, we wish to maximize the probability of g1 or g2 evaluating to a 1. To obtain a reduction in power with marginal increases in circuit area and delay, g1 and g2 have to be significantly less complex than f. This architecture achieves this by making g1 and g2 depend on significantly fewer inputs than f.

III. NEW PRECOMPUTATION ARCHITECTURES

We describe two new precomputation architectures, the first targeting sequential circuits, which is more general than the architecture presented in the previous section, and the second targeting combinational circuits.

3.1: New Sequential Precomputation Architecture

The basic limitation of the Subset Input Disabling architecture is that, having chosen a subset of inputs for the precomputation logic, we can only disable the input registers when the output is the same for all combinations over all inputs not in the selected subset. Thus, even if there is only one combination for which this is not true, we cannot precompute output values, since we need to know the value of input variables that are not in the precomputation logic. The Complete Input Disabling precomputation architecture proposed in the following section is able to handle these cases.

3.1.1 Complete Input Disabling Precomputation Architecture

Fig. 3 Complete Input Disabling precomputation architecture

In Figure 3, the new precomputation architecture for sequential circuits is shown. The functions g1 and g2 satisfy the conditions of Equations 1 and 2 as before. During clock cycle t, if either g1 or g2 evaluates to a 1, we set the load enable signal of the register R1 to be 0. This means that in clock cycle t + 1 the inputs to the combinational logic block A do not change. If g1 evaluates to a 1 in clock cycle t, the input to register R2 is a 1 in clock cycle t + 1, and if g2 evaluates to a 1, then the input to register R2 is a 0. Note that g1 and


g2 cannot both be 1 during the same clock cycle, due to the conditions imposed by Equations 1 and 2.
The important difference between this architecture and the Subset Input Disabling architecture shown in Figure 2 is that the precomputation logic can be a function of all input variables, allowing us to precompute any input combination. We have additional logic corresponding to the two flip-flops marked FF and the AND-OR gate shown in the figure. Also, the delay between R1 and R2 has increased due to the addition of this gate.
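As a concrete illustration of the predictor contract of Equations 1 and 2, consider a hypothetical example (ours, not taken from the paper): let f be a 2-bit magnitude comparator, f = (A > B), with g1 and g2 looking only at the most significant bits. The sketch checks the two conditions exhaustively and reports how often the input register could be disabled:

```python
import itertools

def f(a, b):
    # Hypothetical target function: 2-bit magnitude comparator, f = (A > B).
    return int(a > b)

def g1(a, b):
    # Predicts f = 1 from the MSBs alone: msb(A) = 1 and msb(B) = 0.
    return int((a >> 1) & 1 == 1 and (b >> 1) & 1 == 0)

def g2(a, b):
    # Predicts f = 0 from the MSBs alone: msb(A) = 0 and msb(B) = 1.
    return int((a >> 1) & 1 == 0 and (b >> 1) & 1 == 1)

def disabled_fraction():
    # Fraction of input combinations for which the load-enable of the
    # input register can be de-asserted because g1 or g2 fires.
    hits = 0
    for a, b in itertools.product(range(4), repeat=2):
        assert not (g1(a, b) and g2(a, b))  # Eqs. 1 and 2 forbid overlap
        if g1(a, b):
            assert f(a, b) == 1             # Eq. 1: g1 = 1 => f = 1
        if g2(a, b):
            assert f(a, b) == 0             # Eq. 2: g2 = 1 => f = 0
        hits += g1(a, b) or g2(a, b)
    return hits / 16.0

print(disabled_fraction())  # 0.5 -- the register sleeps half the time
```

With uniformly random inputs, the MSBs differ half the time, so the full comparator (and its input register) can be shut off in half of all clock cycles while only two small two-input predictor gates stay active.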
3.2: Combinational Precomputation Architecture

Given a combinational circuit, any sub-circuit within the original circuit can be selected. Assume that this sub-circuit has n inputs and m outputs as shown in Figure 4. In an effort to reduce switching activity, the algorithm will "turn off" a subset of the n inputs using the circuit shown in Figure 5. The figure shows p inputs being turned off, where 1 <= p < n.
The term "turn off" means different things according to the type of circuit style that is being used. If the circuit is built using static logic gates, then "turn off" means preventing changes at the inputs from propagating through block B to the sub-circuit (block A), thus reducing the switching activity of the sub-circuit. In this case block B may be implemented using one of the transparent latches shown in Figure 6. If the circuit is built using Domino logic, then "turn off" means preventing the outputs of block B from evaluating high, no matter the value of the inputs. This can be implemented using 2-input AND gates as shown in Figure 7.
Blocks g1 and g2 determine when it is appropriate to turn off the selected inputs. The selected inputs may be turned off if the static values of all the outputs, f1 through fm, are independent of the selected inputs. To fulfill this requirement, outputs g1 and g2 are required to satisfy Equations 1 and 2.
If either g1 or g2 is high, the inputs may be turned off. If they are both low, then the selected inputs are needed to determine the outputs, and the circuit is allowed to work normally.
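In the static-gate variant, block B's hold behavior can be modeled as a transparent latch. The toy model below (our own sketch, not from the paper) shows how disabling the latch freezes the value seen by block A, suppressing downstream transitions:

```python
class TransparentLatch:
    # Models one block-B element: transparent while enabled, holding its
    # last value while disabled, so block A sees no input transitions.
    def __init__(self):
        self.q = 0

    def step(self, d, enable):
        if enable:
            self.q = d
        return self.q

latch = TransparentLatch()
inputs  = [0, 1, 0, 1, 1, 0]
enables = [1, 1, 0, 0, 1, 1]   # g1 or g2 fired in cycles 2-3: disable
outputs = [latch.step(d, en) for d, en in zip(inputs, enables)]
print(outputs)  # [0, 1, 1, 1, 1, 0] -- held at 1 while disabled
```

The input sequence toggles four times, but block A sees only two transitions; the difference is the switching activity (and hence dynamic power) saved by the precomputation logic.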

Fig. 4 Sub-circuit with input disabling circuit

Given the sequential and combinational architectures, algorithms are needed to determine which inputs to turn off and to determine the functions g1 and g2 so that the power consumption of the combinational or sequential circuit is reduced. The details of these algorithms are given in the next two sections.
IV.

COMPLETE INPUT DISABLING


PRECOMPUTATION

In this section, we describe methods to determine the


functionality of the precomputation logic for the
Complete Input Disabling architecture targeting
sequential circuits.
4.1: Precomputation Logic for Single Output
Functions
The key tradeoff in selecting the precomputation
logic is that we want to include in the logic as many
input combinations as possible but at the same time
keep the logic simple. The Subset Input Disabling
precomputation architecture ensures that the
precomputation logic is significantly less complex
than the combinational logic in the original circuit by
restricting the search space to identifying g1 and g2
such that they depend on a relatively small subset of
the inputs to the logic block A.
By making the precomputation logic depend on all
inputs, the Complete Input Disabling architec-ture
allows for a greater flexibility but also makes the
problem much more complex.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
123

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

We will be searching for the subset of inputs that are


necessary, a large fraction of the time, to determine
what the value of f is. We follow a strategy of
keeping the precomputation logic simple by making
the logic depend mostly on a small subset of inputs.
The difference is that now we are not going to restrict
ourselves to those input combinations for which this
subset of inputs defines f , we will allow for some
input combinations that need inputs not in the
selected set.
4.1.1

Selecting a Subset of Inputs

Given a function f we are going to select the best


subset of inputs S of cardinality k such that we
minimize the number of times we need to know the
value of the other inputs to evaluate f . For each
subset of size k, we compute the cofactors of f with
respect to all combinations of inputs in the subset. If
the probability of a cofactor of f with respect to a
cube c is close to 1 (or close to 0), that means that for
the combination of input variables in c the value of f
will be 1 (or 0) most of the time.
Let us consider f with inputs x1; x2; ; xn and that we
have selected the subset x1; ; xk . If the probability of
the cofactor of f with respect to x1 xk being all 1's is
high (prob(fx1 xk ) 1), then over all combinations of
xk+1; ; xn there are only a few for which f is not 1. So
we can include x1 xk fx1 xk in g1. Similarly if the
probability of the fx1 xk is low (prob(fx1 xk ) 0), then
over all combinations of xk+1; ; xn there are only a few
for which f is not 0, so we include x1 xk fx1 xk in g2.
Note that in the Subset Input Disabling architecture
we would only do this if fx1 xk = 1 or fx1 xk = 0.
Since there is no limit on the number of inputs that
the precomputation logic is a function of, we need to
monitor its size in order to ensure it does not get very
large.
The procedure SELECT LOGIC receives as
arguments the function f and the desired number of
inputs k to select. SELECT LOGIC calls the
recursive procedure SELECT RECUR with four
arguments. The first is the function to precompute.
The second argument D corresponds to the set of
input variables currently selected. The third argument
Q corresponds to the set of active variables, which
may be selected or discarded. Finally, the argument k
corresponds to the number of variables we want to
select.
If jDj + jQj < k it means that we have dropped too
many variables in the earlier levels of recursion and
we will not be able to select a subset of k input
variables.
We store the selected set corresponding to the
maximum value of the cost function.

4.1.2

Implementing the Logic

The Boolean operations of OR and cofactoring


required in the input selection procedure can be
carried out efficiently using reduced, ordered Binary
Decision Diagrams (ROBDDs) [4]. In the pseudocode of Figure 8 we show how to obtain the g 1 + g2
function. We also need to compute g1 and g2
independently. We do this in exactly the same way,
by including in g1 the cofactors corresponding to
probabilities close to 1 and in g2 the cofactors
corresponding to probabilities close to 0.
Once we have ROBDDs for g1 and g2, these can be
converted into a multiplexor-based network.
SELECT LOGIC( f , k ): f
/* f = function to precompute */ /* k = # of inputs to
select */ BEST IN COST = 0 ; SELECTED SET = ;
SELECT RECUR( f , , X , k ) ; return( SELECTED
SET ) ;
g
SELECT RECUR( f , D, Q, k ): f
if( jDj + jQj < k )
return ; if( jDj == k) f
exact = approx = 0;
BDD1 = BDD2 = 0;
For each combination c over all variables in D f
if(prob(fc) == 1 or prob(fc) == 0) f
exact = exact + 1; BDD1 = BDD1 + c; BDD2 =
BDD2 + c;
g
if(prob(fc) > 1 ) f approx = approx + 1; BDD2 =
BDD2 + c fc;
g
if(prob(fc) < ) f approx = approx + 1;
BDD2 = BDD2 + c fc;
g
g
cost = (exact + size(BDD1)size(BDD2) approx)=2jDj ; if( cost
> BEST IN COST) f
BEST IN COST = cost ;
SELECTED SET = D ;
g
return ;
g
choose xi 2 Q such that i is minimum ;
SELECT RECUR( f , D [ xi, Q xi, k ) ; SELECT
RECUR( f , D, Q xi, k ) ;
g
Fig. 6 Procedure to determine the precomputation Logic

4.1.3
Simplifying the Original Combinational
Logic Block
Whenever g1 or g2 evaluate to a 1, we will not be
using the result produced by the original combinational logic block A, since the value of f will be

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
124

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

set by either g1 or g2. Therefore all input


combinations in the precomputation logic are new
don't-care conditions for this circuit and we can use
this information to simplify this logic block, thus
leading to a reduction in area and consequently to a
further reduction in power dissipation.
4.2: Multiple-Output Functions
In general, we have a multiple-output function f1; ; fm
that corresponds to the logic block A in Figure 1. All
the procedures described thus far can be generalized
to the multiple-output case.
The functions g1i and g2i are obtained by computing
the cofactors of fi separately. The function g whose
complement drives the load enable signal is obtained
as:
mg=(g1i+ g2i)
(3)
i=1
The function g corresponds to the set of input
conditions that control the values of all the fi's.
4.2.1

Selecting a Subset of Outputs

We describe an algorithm, which given a multipleoutput function, selects a subset of outputs and the
corresponding precomputation logic so as to
maximize a given cost function that is dependent on
the probability of the precomputation logic and the
number of selected outputs. This algorithm is
described in pseudo-code in Figure 9.
The inputs to procedure SELECT OUTPUTS are the
multiple-output function F , and a number k
corresponding to the size of the set in the input
selection.
The procedure SELECT ORECUR receives as
inputs two sets G and H , which correspond to the
current set of outputs that have been selected and the
set of outputs which can be added to the selected set,
respectively. Initially, G = and H = F . The cost of a
particular selection of outputs, namely G, is given by
prG gates(F H )/total gates, where prG corresponds to
the signal probability of the precomputation logic,
gates(F H ) corresponds to the number of gates in the
logic corresponding to the outputs in G and not
shared by any output in H , and total gates
corresponds to the total number of gates in the
network (across all outputs of F ).
There are two pruning conditions that are checked for
in the procedure SELECT ORECUR. The first
corresponds to assuming that all the outputs in H can
be added to G without decreasing the probability of
the precomputation logic. This is a valid condition
because the quantity proldG in each recursive call can
only decrease with the addition of outputs to G. We
then set a lower bound on the probability of the
precomputation logic prior to calling the input
selection procedure. Optimistically assuming that all
the outputs in H can be added to G without lowering
the precom-putation logic probability, we are not

interested in a precomputation logic probability for G


that would result in a cost that is equal to or lower
than BEST OUT COST.
SELECT OUTPUTS( F = ff1;

; fmg, k ):

f
/* F = multi-output function to precompute */ /* k =
size of set in the input selection */ BEST OUT COST
=0;
SEL OP SET = ;
SELECT ORECUR( , F , 1, k ) ; return( SEL OP
SET ) ;
g
SELECT ORECUR( G, H , proldG, k ): f
lf = gates(F H )/total gates proldG ; if( lf BEST OUT
COST )
return ; if( G 6= )
if( SELECT LOGIC( G, k ) == ) return ;
prG = BEST IN COST ; /* BEST IN COST is set in
SELECT LOGIC */ cost = prG gates(G)/total gates ;
if( cost > BEST OUT COST) f BEST OUT COST =
cost ;
SEL OP SET = G ;
g
choose fi 2 H such that i is minimum ;
SELECT ORECUR( G [ fi, H
fi, prG, k ) ;
SELECT ORECUR( G, H
fi, prG, k ) ;
g
Fig. 7 Procedure to determine the set of outputs to precompute

4.2.2

Logic Duplication

Since we are only precomputing a subset of outputs,


we may incorrectly evaluate the outputs that we are
not precomputing as we disable certain inputs during
particular clock cycles. If an output that is not being
precomputed depends on an input that is being
disabled, then the output will be incorrect. However,
an appropriate duplication of registers and logic will
ensure that the outputs which are not selected are still
implemented correctly (as described in [1]). The
algorithm of Figure 9 attempts to minimize this
duplication.

VI.

COMBINATIONAL PRECOMPUTATION

In order to synthesize precomputation logic for the


architecture of Figure 4, algorithms are needed to
accomplish several tasks. First, an algorithm must
divide the circuit into sub-circuits. Then for each subcircuit, algorithms must: a) select the subset of inputs
to turn off, and b) given these inputs, produce the
logic for g1 and g2 in Figure 4. For each of these
steps, the goal is to maximize the savings function
net

(savings(A)cost(B)

(3)

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
125

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

saving cost(g))
s =
all subcircuits
where g is defined as g = g1 + g2.
We must divide the original combinational circuit
into sub-circuits so that Equation 3 is max-imized.
Note that the original circuit can be divided into a set
of maximum-sized, single-output sub-circuits. A
maximum-sized, single-output sub-circuit is a singleoutput sub-circuit such that no set of nodes from the
original circuit can be added to this sub-circuit
without creating a multi-output sub-circuit. An
equivalent way of saying this is, the circuit can be
divided into a minimum number of single-output subcircuits. Such a set exists and is unique for any legal
circuit. A linear time algorithm for determining this
set is given in Figure 10.
Next, note that there is no need to analyze any subcircuit that is composed of only a part of one of these
maximum-sized, single-output sub-circuits. If a part
of a single-output sub-circuit including the output
node is in some sub-circuit to be analyzed, then the
rest of the nodes of the single-output sub-circuit can
be added to the sub-circuit at no cost since the
outputs remain the same. Adding these nodes can
only result in more savings. Further, if a part of a
single-output sub-circuit not including the output
node is in some sub-circuit to be analyzed, then the
rest of the nodes of the single-output sub-circuit can
be added to the sub-circuit because the
precomputability of the outputs can only become less
restrictive. Therefore, even in the worst case, the
disable logic can be left the same so that there is no
additional cost yet additional savings are achieved
because of the additional nodes.
Based upon this theory, an algorithm to synthesize
precomputation logic would 1) create the set of
maximum-sized, single-output sub-circuits, 2) try
different combinations of these sub-circuits, and 3)
determine the combinations that yield the best net
savings. Given the maximum-sized single-output subcircuits, we use the algorithms of the previous section
to determine a subset of the sub-circuits and a
selection of inputs to each sub-circuit that results in
relatively simple precomputation logic and maximal
power savings.
GET SINGLE OUTPUT SUBCIRCUITS( circuit ):
f
arrange nodes of circuit in depth-first order outputs to
inputs; foreach node in depth order ( node ) f

if ( node is a primary output ) f subcircuit = create


new subcircuit(); mark node as part of subcircuit;
g
else f
check every fanout of node;
if ( all fanouts are part of the same sub-circuit )
subcircuit = sub-circuit of the fanouts;
else
subcircuit = create new subcircuit(); mark node as
part of subcircuit;
g
g
g
Fig. 8. Procedure to find the minimum set of single-output subcircuits

REFERENCES
[1]

Mazhar Alidina, Jose Monteiro, Srinivas Devadas, Abhijit


Ghosh, and Marios Papaefthymiou. Precomputation-based
sequential logic optimization for low power. In International
Workshop on Low Power Design, pages 5762, April 1994.
[2] P. Ashar, S. Devadas, and K. Keutzer. Path-Delay-Fault
Testability Properties of Multiplexor-Based Networks.
INTEGRATION, the VLSI Journal, 15(1):123, July 1993.
[3] R. Brayton, R. Rudell, A. Sangiovanni-Vincentelli, and A.
Wang. MIS: A Multiple-Level Logic Optimization System.
In IEEE Transactions on Computer-Aided Design, volume
CAD-6, pages 10621081, November 1987.
[4] R. Bryant. Graph-Based Algorithms for Boolean Function
Manipulation. IEEE Transactions on Computers, C35(8):677691, August 1986.
[5] A. Chandrakasan, T. Sheng, and R. W. Brodersen. Low
Power CMOS Digital Design. In Journal of Solid State
Circuits, pages 473484, April 1992.
[6] A. Ghosh, S. Devadas, K. Keutzer, and J. White. Estimation
of Average Switching Activity in Combinational and
Sequential Circuits. In Proceedings of the 29 th Design
Automation Conference, pages 253259, June 1992.
[7] J. Monteiro, S. Devadas, and A. Ghosh. Retiming Sequential
Circuits for Low Power. In Proceedings of the Int'l
Conference on Computer-Aided Design, pages 398402,
November 1993.
[8] F. Najm. Transition Density, A Stochastic Measure of
Activity in Digital Circuits. In Pro-ceedings of the 28th
Design Automation Conference, pages 644649, June 1991.
[9] K. Roy and S. Prasad. SYCLOP: Synthesis of CMOS Logic
for Low Power Applications. In Proceedings of the Int'l
Conference on Computer Design: VLSI in Computers and
Processors, pages 464467, October 1992.
[10] E. M. Sentovich, K. J. Singh, C. Moon, H. Savoj, R. K.
Brayton, and A. Sangiovanni-Vincentelli. Sequential Circuit
Design Using Synthesis and Optimization. In Proceedings of
the Int'l Conference on Computer Design: VLSI in
Computers and Processors, pages 328 333, October 1992.
[11] A. Shen, S. Devadas, A. Ghosh, and K. Keutzer. On Average
Power Dissipation and Random Pattern Testability of
Combinational Logic Circuits. In Proceedings of the Int'l
Conference on Computer-Aided Design, pages 402407,
November 1992.
[12] C-Y. Tsui, J. Monteiro, M. Pedram, S. Devadas, A. Despain,
and B. Lin. Exact and Ap-proximate Methods for Switching
Activity Estimation in Sequential Logic Circuits. IEEE
Transactions on VLSI Systems, 3(1), March 1995. to appear

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
126

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Issues and Challenges Faced in Wireless


Sensors Networks
Pooja Mendiratta, Neha Gupta, Ashish Singh Rawat

Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: pjmendiratta@yahoo.co.in
Abstract Sensor network consists of tiny sensors
with general purpose computing elements to
cooperatively monitor physical or environmental
conditions, such as temperature, pressure, etc.
They have a great potential for long term
applications and also have the ability to transform
human lives in various aspects. However, there
have been resources constraints problems such as
memor, power consumption of nodes in WSNs.
Depending on the resources limitations and used
applications of WSNs, security is very important
and big challenge in WSNs. In this paper we
investigate application issues and challenges
associated with development of wireless sensor
networks
Keywords WSN, RSSI, Wireless application,
Biological Application, Security In Wireless
Sensor Network
I.

INTRODUCTION

There is a rapid growth in the wireless network in the


last decade. Wireless communication is extensively
used in cellular telephony, wireless internet and
wireless home networking arenas. New generations
of handheld devices allowed users access to stored
data even when they travel. Users could set their
laptops down anywhere and instantly be granted
access to all networking resources. But still today,
while wireless networks have seen widespread
adoption in the home user markets, widely reported
and easily exploited holes in the standard security
system have stunted wireless deployment rate in
enterprise environments. Over time, it became
apparent that some form of security was required to
prevent outsiders from exploiting the connected
resources. We believe that the current wireless access
points present a larger security problem than the early
internet connections.

Fig. 1 Infrastructure wireless network


Wireless sensor networks usually comprise a number
of sensors with limited resources. Each sensor
includes sensing equipment, a data processing unit, a
short range radio device and a battery. These
networks have been considered for various purposes
including border security, military target tracking and
scientific research in dangerous environments. Since
the sensors may reside in an unattended or hostile
environment, security is a critical issue. An adversary
could easily access the wireless channel and intercept
the transmitted information, or distribute false
information in the network. Under such
circumstances, authentication and confidentiality
should be used to achieve network security. Since
authentication and confidentiality protocols require a
shared key between entities, key management is one
of the most challenging issues in wireless sensor
networks (WSNs).
Wireless sensors are small and cheap devices
powered by low-energy batteries, equipped with radio
transceivers, and responsible for responding to
physical stimuli, such as pressure, magnetism and
motion, by producing radio signals. They are featured
with resource (e.g., power, storage, and computation
capacity) constraints and low transmission rates.
Wireless sensor networks (WSNs) are collections of
such wireless sensors that are deployed (e.g., using
aircraft) in strategic areas to gather data about the
changes in their surroundings, to report these changes
to a data-processing center (which is also called a
data sink), and possibly to respond to these changes.
The processing center can be a specialized device or
just one of the sensors, and its function is to analyze
the collected data to determine the characteristics of

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
127

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

the environment or to detect events. In this paper we


are trying to arise the applications and problems
occurs in the wireless sensor network.
II.

APPLICATIONS OF WIRELESS
NETWORK

(1) Biological Application


The WSN based applications have made tremendous
impact for biological problems. Some of these
include biological task mapping and scheduling,
biomedical signal monitoring etc. A description of
these applications has been presented in this section.
(A)BIOLOGICAL TASK MAPPING
WSNs find widespread applications in the area of
biological sensing. Specifically, there is recent
research going on in the concept of labs on a chip,
supported by latest technologies like nano-techniques.
The use of WSNs for biological applications have
been accelerated due to the advancements in Micro
Electro-Mechanical Systems (MEMS), embedded
systems, microcontrollers and various wireless
communication technologies. Y.E.M. Hamouda and
C. Phillips [1] presented a BTMS (Biological Task
Mapping and Scheduling) algorithm, in which a
group of nodes was used to execute an application. In
this work, it was assumed that the application could
be broken down into smaller tasks with different
weights and hence a general model was considered
for complex applications. In order to achieve and
enhance the desired performance objectives,
assigning of resources to tasks is known as Task
mapping and the sequence of execution of the tasks is
known as task scheduling. Task mapping and
scheduling are of much importance in high
performance computing. A near-optimal solution for
task mapping can be obtained using heuristic
techniques. But the constrained resources of WSNs
require the design objectives to be different. However
the simulation model that was built was applicable
only if the nodes in the WSN were separated with a
distance set to 150m.
(B) BIOMEDICAL APPLICATION
WSNs have revolutionized the field of medicine in
many ways. Telemedicine is the field which involves
the treatment and care of patients from a distance and
also aids in biomedical diagnosis. The application of
WSNs has significantly improved this field. The basic
principles and features required at the time of
biological signals have been presented in [2]. To

develop modern equipment for monitoring patients in


remote places using wireless technologies, the
network topology, sensors specific signal reception
and analysis has been considered.
(2) COMMERCIAL APPLICATIONS
Some of the commercial applications of WSNs
include vehicular monitoring, cultural property
protection, event detection and structural health
monitoring. These applications have a profound
impact on ordinary day-to-day affairs.
(A) SMART PARKING SYSTEM
Detection of vehicles in a parking lot using magnetic
sensors along with ultrasonic sensors together has
been presented by S.Lee, D.Yoon and A. Ghosh in
[3]. It was proved that accurate vehicular detection
was possible with the combined use of ultrasonic
sensors and magnetometers but it did not provide any
solution for better parking management. A WSN
based Smart Parking System (SPARK) management
system has been presented in [4]. Monitoring of
remote parking, mechanism for parking reservation
and automated guidance are some of the latest
features provided by the system. However, the system
should be made fault tolerant, by incorporating
mechanisms for identifying defaulters.
(B) SECURITY OF INTRA-CAR
Fuel efficiency and reduction in the weight of
automotive can be achieved by replacing wired
sensors and their cables with wireless sensors.
However, the inherent vulnerability of the wireless
platform makes the security issues of such a
replacement, highly questionable. Security problems
for intra-car wireless sensor networks have been
addressed in [5]. In this work, selection of appropriate
security algorithms for WSNs using a systematic
methodology and determination of the best
combination with regard to execution time and
security has been presented.
(3) ENVIRONMENTAL APPLICATION
Environmental applications include the monitoring of
atmospheric parameters, tracking of the movements
of birds and animals, forest fire detection, habitat
surveillance etc.
A. Greenhouse Monitoring

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
128

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

To ensure that the automation system in a greenhouse


works properly, it is necessary to measure the local
climate parameters at various points of observation in
different parts of the big greenhouse. This work if
done using a wired network will make the entire
system clumsy and costly. However, a WSN based
application for the same purpose using several small
size sensor nodes equipped with radio would be a
cost effective solution. Such an application has been
developed in [6]. Data analysis, DSP based control
solutions and more complex network setups are the
areas yet to be explored.
B. Habitat Surveillance
WSNs find widespread application in habitat
surveillance compared to other monitoring methods
due to high deployment density and self- organization
of the sensor nodes. The advantage with WSN is that
the invisible placement of sensor nodes in the habitat
does not leave any noticeable mark which might
affect the behavior pattern of the inhabitants. A WSN
based application in combination with
General Packet Radio Service (GPRS) for habitat
monitoring is introduced in [7]. The details of a
sensor node that made use of the combination of
ARM technology and IEEE 802.15.4 has been given.
This paper addressed the energy management issue
and developed a low-weight, constant duty cycle
policy for energy management. However, developing
a WSN based application that will never affect the
biological behavior of the inhabitant species is very
important, and hence a challenge to be considered.
III.

HEALTHCARE APPLICATIONS

WSNs are very efficient in supporting various day-today applications. WSN based technologies have
revolutionized home and elderly healthcare
applications. Physiological parameters of patients can
be monitored remotely by physicians and caretakers
without affecting the patients activities. This has
resulted in reduction of costs, improvement of
equipment and better management of patients reaping
huge commercial benefits. These technologies have
significantly minimized human errors, allowed better
understanding into origin of diseases and has helped
in devising methods for rehabilitation, recovery and
the impacts of drug therapy. The recent developments
in the application of WSN in healthcare are being
presented. The implementation and analysis of a
WSN based e-Health application has been described
in [8]. The main research issue to be addressed is to
increase the degree of awareness of home assistants,

caregivers, primary healthcare centers, to understand


the patients health and activity status to quickly
discern and decide on the required action. A simple
localization algorithm based on sensor data and
Received Signal Strength Indicator (RSSI) was
presented. This algorithm was proved experimentally
to work fine in home environment. However, the use
of multi-sensor analysis, which is expected to give
better accuracy, is an area yet to be explored. A
qualitative research on the perceptions and
acceptance, of elderly persons regarding the usage of
WSN for assisting their healthcare is done in [9]. A
light-weight, low-cost WSN based home healthcare
monitor has been developed in [10]. An attempt to
integrate the WSN technology and public
communication networks in order to develop a
healthcare system for elderly people at home without
disturbing their routine activities has been presented
in [11]. Improved performance with minimum
decision delay and good accuracy using Hidden
Markov Model is yet to be addressed. A WSN based
home healthcare application is developed in [12]. The
main issue that was considered in this research is the
development of a working model of home healthcare
monitoring system with efficient power, reliability
and bandwidth. A WSN based prototype sensor
network for monitoring of health, with sensors for
heart activity, using 802.15.4 complaint network
nodes is described in [13]. The issues regarding its
implementation have also been discussed. The paper
also describes the hardware and software organization
of the presented system and provides solutions for
synchronization of time, management of power and
on-chip signal processing. However, the areas that are
yet to be addressed are improvement in QoS of
wireless communication, standardization of interfaces
and interoperability. Specific limitations and new
applications of the technology can be determined by
in-depth study of different medical conditions in
clinical and ambulatory settings.
IV.

MILITARY APPLICATIONS

WSNs play a vital role in military Command,


Control, Communications, Computing, Intelligence,
Surveillance,
Reconnaissance
and
Targeting
(C4ISRT) systems. Few challenges faced by WSNs
on the battlefield are addressed in [14]. In the
battlefield, the WSNs are prone to the attacks, where
either the data or corrupting control devices are
attacked, leading to large amount of energy
consumption and finally to the exit of nodes from

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
129

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

work. The energy efficiency of sensor nodes and the


correct modeling of energy consumption are the
research issues yet to be explored. WSN based
collaborative target detection with reactive mobility
has been presented in [15]. A sensor movement
scheduling algorithm was developed and its
effectiveness was proved using extensive simulations.
WSNs have found application in very critical
applications such as object detection and tracking.
These applications require high detection probability,
low false alarm rate and bounded detection delay.
V.

ISSUES OF SECURITY IN WIRELESS


SENSOR NETWORK

The requirement of security not only affects the


operation of the network, but also is highly important
in maintaining the availability of the whole network
.It is necessary to know and understand these security
requirements first before implementing security
scheme for WSN.WSN should take the following
major security requirements which are basic
requirements for any network into Consideration of
secure mechanism:
A. Data Integrity
Data integrity in sensor networks is needed to ensure
the reliability of the data. It ensures that data packets
received by destination is exactly the same with
transferred by the sender and any one in the middle
cannot alter that packet [16].The techniques like
message digest and MAC are applied to maintain
integrity of the data. By providing data integrity we
are able to solve the Data integrity attacks. Data
integrity is achieved by means of authentication the
data content.
B. Data Confidentiality
Confidentiality protects data during communication so that it cannot be understood by anyone other than the intended recipient. Data confidentiality is the most fundamental issue in network security, and every network with any security focus will typically address this problem first. Confidentiality means that data transferred between sender and receiver is fully protected and no third party can access it (neither read nor write). It is achieved using cryptography: either symmetric or asymmetric keys can be used to protect the data.
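To illustrate the symmetric-key idea in the abstract, the toy construction below XORs the payload with a hash-derived keystream. It is a sketch for illustration only, not a vetted cipher, and every name in it is hypothetical; a real WSN would use an established cipher such as AES:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key || nonce || counter.
    Toy construction for illustration only -- NOT a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call also decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"shared-key", b"\x00\x01"
ct = xor_crypt(key, nonce, b"reading=42")
assert ct != b"reading=42"                       # ciphertext hides the plaintext
assert xor_crypt(key, nonce, ct) == b"reading=42"  # symmetric decryption
```

Because both endpoints derive the same keystream from the shared key and nonce, only they can recover the plaintext.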
C. Data Availability
Availability ensures that network services remain accessible even under an attack such as a Denial of Service (DoS) attack, and researchers have proposed different mechanisms to achieve this goal. Availability is of primary importance for maintaining an operational network: it determines whether a node has the ability to use its resources and whether the network is available for messages to be communicated. It also ensures that sensor nodes remain active in the network to fulfil the functionality of the network.
D. Data Authentication
Data authentication ensures the receiver that the data has not been modified during transmission [17]. It is achieved through symmetric or asymmetric mechanisms in which the sending and receiving nodes share secret keys. In asymmetric cryptographic communication, digital signatures are used to check the authenticity of a message or user, while with symmetric keys a MAC (Message Authentication Code) is used for authentication.
E. Data Freshness
Data freshness is very important in wireless sensor networks, because an attacker can replay an expired packet to waste network resources and shorten the network lifetime. Freshness ensures that the data received by the receiver is recent, and that no adversary can replay old data. It is achieved by adding a mechanism such as a nonce or a timestamp to each data packet.
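The freshness check described above can be sketched with a per-node sequence counter; the node names and packet fields are hypothetical:

```python
# Highest sequence number accepted from each node so far.
last_seen: dict[str, int] = {}

def accept(node_id: str, seq: int, payload: str) -> bool:
    """Accept a packet only if its sequence number is fresher than
    anything previously seen from that node; replays are dropped."""
    if seq <= last_seen.get(node_id, -1):
        return False                  # stale or replayed packet
    last_seen[node_id] = seq
    return True

assert accept("node3", 1, "t=20")      # fresh
assert accept("node3", 2, "t=21")      # fresh
assert not accept("node3", 1, "t=20")  # replayed old packet rejected
```

A timestamp plays the same role as the counter when nodes share a (loosely) synchronized clock.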
VII. ATTACKS IN WSN
This paper focuses on the security of WSNs, on providing security services in these networks, and on preventing DoS attacks, which are among the most challenging security issues for these networks. The attack most damaging in terms of resource exhaustion in a WSN is the Denial of Service (DoS) attack. Denial of Service attacks attempt to prevent legitimate users from accessing networks, servers, services or other resources by sending extra, unnecessary packets.
A. Black hole attack
Also known as the sinkhole attack, this occurs at the network layer. A compromised node makes itself appear very attractive by advertising zero-cost routes to neighbouring nodes with respect to the routing algorithm. As a result, maximum traffic flows towards this fake node. Nodes adjoining the malicious node contend for its immense bandwidth, resulting in resource contention and message destruction.
Special Issue: National Conference on Recent Innovations In Engineering & Technology (NCRIET-2016), 8-9th April, 2016 held at Northern India Engineering College, New Delhi.
B. Wormhole attack
In the wormhole attack, a pair of malicious nodes first establishes a wormhole at the network layer. Traffic from one part of the network is tunnelled to a distant location, depriving other parts of the network of the data, and the packets are then replayed locally. This creates the false impression that the original sender is only one or two hops away from the remote location, which may cause congestion and retransmission of packets, squandering the energy of innocent nodes.
C. Selective forwarding attack
Selective forwarding is a network layer attack in which an adversary compromises a node so that it forwards some messages and drops the others. This hampers the quality of service in the WSN. If the attacker dropped all packets, the adjoining nodes would become aware of it and might judge the node to be faulty; to avoid this, the attacker smartly forwards only selected data. Detecting this type of attack is therefore a very tedious job.
D. Flooding
Flooding also occurs at the network layer. An adversary constantly sends connection-establishment requests to a selected node. To serve each request, the targeted node allocates some resources to the adversary, which may exhaust the memory and energy resources of the node being bombarded.
E. Sybil attack
This again is a network layer attack, in which a malicious node presents more than one identity in the network. It was originally described as an attack able to defeat the redundancy mechanisms of distributed data storage systems in peer-to-peer networks. The Sybil attack is also effective against other fault-tolerance schemes such as dispersity, multipath routing, data aggregation, voting, fair resource allocation, topology maintenance and misbehaviour detection. The fake node presents its various identities to other nodes in the network and thus appears to be in more than one place at a time. In this way it disturbs geographical routing protocols, and it can defeat routing algorithms by constructing many routes from only one node.
F. Node replication attack
Every sensor node in a network has a unique ID. An attacker can duplicate this ID and assign it to a newly added malicious node, which then passes as a member of the network, with potentially calamitous effects on the sensor network. Using the replicated node, packets passing through the malicious node can be dropped, misrouted or modified, resulting in corrupted packet information, loss of connectivity, data loss and high end-to-end latency. The malicious node can also gain access to sensitive information and thus harm the network.
VIII. CONCLUSION
In this paper we have described the various applications of wireless sensor networks along with the issues and attacks in WSNs. These interesting applications are possible due to the flexibility, fault tolerance, low cost and rapid deployment characteristics of sensor networks, but security remains an important requirement that is complicated to set up across the different domains of WSNs. The applications of WSNs are not limited to the areas mentioned in this paper, and their future prospects are highly promising, with the potential to revolutionize our everyday lives. There is currently enormous research potential in the field of WSNs.
REFERENCES
[1] Y. E. M. Hamouda and C. Phillips, "Biological task mapping and scheduling in wireless sensor networks," in Proceedings of ICCTA, pp. 914-919, 2009.
[2] T. Camilo, R. Oscar, and L. Carlos, "Biomedical signal monitoring using wireless sensor networks," IEEE Latin-American Conf. on Communications, pp. 1-6, 2009.
[3] S. Lee, D. Yoon, and A. Ghosh, "Intelligent parking lot application using wireless sensor networks," Intl. Symposium on Collaborative Technologies and Systems, pp. 48-57, 2008.
[4] S. V. Srikanth, P. J. Pramod, K. P. Dileep, S. Tapas, M. U. Patel, and S. C. Babu, "Design and implementation of a prototype smart PARKing (SPARK) system using wireless sensor networks," Intl. Conf. on Advanced Information Networking and Applications Workshop, pp. 401-406, 2009.
[5] H. Lee, H. M. Tsai, and O. K. Tonguz, "On the security of intra-car wireless sensor networks," IEEE 70th Vehicular Technology Conf., pp. 1-5, 2009.
[6] T. Ahonen, R. Veirrankoski, and M. Elmusrati, "Greenhouse monitoring with wireless sensor network," IEEE/ASME Intl. Conf. on Mechatronics and Embedded Systems and Applications, pp. 403-408, 2008.
[7] G. Y. Ming and J. Rencheng, "A novel wireless sensor network platform for habitat surveillance," Intl. Conf. on Computer Science and Software Engineering, pp. 1028-1031, 2008.
[8] H. Yan, Y. Xu, and M. Gidlund, "Experimental e-health applications in wireless sensor networks," Intl. Conf. on Communications and Mobile Computing, pp. 563-567, 2009.
[9] R. Steele, A. Lo, C. Secombe, and Y. K. Wong, "Elderly persons' perception and acceptance of using wireless sensor networks to assist healthcare," International Journal of Medical Informatics, vol. 78, pp. 788-801, December 2009.
[10] L. Xuemei, J. Liangzhong, and L. Jincheng, "Home healthcare platform based on wireless sensor networks," in Proc. of the Fifth Intl. Conf. on Information Technology and Application in Biomedicine, pp. 263-266, 2008.
[11] H. Huo, Y. Xu, H. Yan, S. Mubeen, and H. Zhang, "An elderly health care system using wireless sensor networks at home," Third Intl. Conf. on Sensor Technologies and Application, pp. 158-163, 2009.
[12] R. A. Rashid, S. H. S. Arifin, M. R. A. Rahim, M. A. Sarijari, and N. H. Mahalin, "Home healthcare via wireless biomedical sensor network," IEEE International RF and Microwave Conf. Proceedings, pp. 511-514, 2008.
[13] Milenkovic, C. Otto, and E. Jovanov, "Wireless sensor networks for personal health monitoring: Issues and an implementation," Computer Communications, vol. 29, pp. 2521-2533, 2006.
[14] N. Alsharabi, L. R. Fa, F. Zing, and M. Ghurab, "Wireless sensor networks of battlefields hotspot: challenges and solutions," Sixth Intl. Symposium on Modeling and Optimization in Mobile Ad Hoc and Wireless Networks and Workshops, pp. 192-196, April 2008.
[15] R. Tan, G. Xing, J. Wang, and H. C. So, "Collaborative target detection in wireless sensor networks with reactive mobility," 16th Intl. Workshop on Quality of Service, pp. 150-159, 2008.
[16] Tin Win Maw and Myo Hein Jaw, "A secure scheme for mitigation of DoS attack in cluster based wireless sensor networks," IJCCER, vol. 1, issue 3, 2013.
[17] Prajeet Sharma, Niresh Sharma, and Rajdeep Singh, "A secure intrusion detection system against DDOS attack in wireless mobile ad-hoc network," IJCA, vol. 41, no. 21, March 2012.

Challenges to Mockup Times in Calculation to Interrupt Latency Using RTDM
Hirender#, Sunil Dalal*
#Student M.Tech ECE, M.R.I.E.M Rohtak
*AP, ECE Department, M.R.I.E.M Rohtak
E-mail: hirender@gmail.com, sunil1dalal@gmail.com
Abstract: In modern-age communication, interrupt latency may become an emerging issue, so that a proper conformity of delay procedure among conflicting interrupts must be reached for proper synchronization of tasks, driving the system towards excellence. A real-time task-handling environment possesses plenty of conflicts, and these may peak when a call or invitation message is broadcast. A proper intimation of the duration of interrupt latency, with its cause, can save a user's precious time. We therefore design an RTDM, a real-time driver model, to calculate interrupt latency based on a dummy of the user's real-time multitasking environment along with its real-time constraints. This helps us to determine the situations of interrupts, with their priorities and sequences of occurrence, so that we are able to execute a plan that can resolve the conflicts that occur in real-time processing of events. The RTDM is designed with the ability to run at one or more mockup times. It affords significant tractability in raising multi-rate schemes, that is, schemes with more than one mockup time. However, this same flexibility also allows us to construct simulation models for which the code generator cannot generate real-time code for execution in a multitasking environment. To make multi-rate models operate as expected in real time, a modification is to be made with deadline labels depending upon the real-time constraint situation. In general, the amendments encompass the insertion of rate-switch slabs between lumps that have unequal sample times. This paper deliberates disputes that arise in using a multi-rate model in a multitasking environment.
Keywords: Interrupt latency, real time driver model, real time constraints, real time processing.

I. INTRODUCTION

The mockup time is a bound that indicates when, during simulation, a lump yields outputs and, if applicable, updates its internal state. The internal state includes, but is not limited to, the continuous and discrete states that are logged. In trade terms, the mockup time refers to the rate at which a discrete system samples its inputs. RTDM allows us to estimate single-rate and multi-rate discrete systems, and hybrid continuous-discrete systems, through the appropriate setting of lump mockup times, which control the rate of lump execution, i.e. of calculations. For various business solicitations we must control the rate of lump execution. RTDM can also determine an implicit mockup time that is not set or chosen by the user, decided from the context of the lump in the system. Mockup times can be port-based or chunk-based: for chunk-based mockup times, all of the inputs and outputs of the chunk run at the same rate, while for port-based mockup times the input and output ports can run at different rates. Mockup times can also be distinct, unceasing, static in slight step, congenital, persistent, fickle, prompted, or asynchronous. The subsequent section discusses these mockup time kinds, as well as mockup time propagation and rate shifts between chunk-based or port-based mockup times. We can use these facts to control our lump execution rates, debug our RTDM, and validate our development model.
II. MOCKUP TIME TYPES
Following are the eight main types of mockup time (MT): (1) Distinct MT, (2) Unceasing MT, (3) Static in slight step MT, (4) Congenital MT, (5) Persistent MT, (6) Fickle MT, (7) Prompted MT, (8) Asynchronous MT.
1. Distinct mockup time
Given a lump with a distinct mockup time, RTDM performs the lump output or update routine at the instants

tn = n * TMu + |To|        (1)

where the mockup time period TMu is always larger than zero and less than the imitation time TIm, and the number of periods n is an integer that must satisfy

0 ≤ n ≤ TIm / TMu        (2)

As the imitation progresses, RTDM calculates lump outputs only once at each of these fixed time intervals tn. These imitation times, at which RTDM fulfils the output process of a lump for a given mockup time, are referred to as mockup time hits. Distinct mockup times are the only type for which the mockup time hits are known a priori. If we require the initial mockup time hit to be deferred to some extent, we can define an offset To.
2. Unceasing mockup time
Unlike distinct mockup time hits, unceasing mockup time hits are divided into chief time steps and trivial time steps, where the trivial steps represent subdivisions of the chief steps. We can build and use a conventional differential equation routine, named the CDE solver, that assimilates all unceasing states from the imitation start time to a given chief or trivial time step. The solver may standardize the times of the trivial steps, and it uses the results at the trivial time steps to mend the accuracy of the results at the chief time steps; however, we are able to see the lump output only at the chief time steps.
3. Static in slight step mockup time
If the mockup time of a lump is set to [0, 1], the lump becomes static in slight step. For this setting, RTDM does not execute the lump at the trivial time steps; updates ensue only at the chief time steps. This progression jettisons gratuitous reckonings for lumps whose output cannot change between chief steps, meaning that a constant factor contributes negligible value to the chief result. Though we can overtly set a lump to be static in trivial step, more characteristically RTDM derives this circumstance either from a congenital mockup time or as an adaptation of a user description of steadiness as a value of zero. This setting is comparable to, and therefore converted to, the fastest discrete rate when we use a static-step solver.
4. Congenital mockup time
If a lump mockup time is set to [-1, 0] or -1, the mockup time is congenital and RTDM concludes the best mockup time for the lump based on the lump milieu within the model. RTDM accomplishes this task all through the constituting juncture; the novel congenital setting never appears in an amassed model.
5. Persistent mockup time
Postulating a persistent mockup time is an entreaty for an optimization under which the lump executes only once, during model initialization. RTDM honours such appeals if all of the following circumstances hold: the lump has no unceasing or distinct states and allows for a persistent mockup time, it does not drive an output port of a provisionally executed subsystem, and it has no tunable strictures. The only exemption is an empty subsystem: a subsystem that has no lumps, not even an input or output chunk, always has a persistent mockup time irrespective of the status of the conditions listed above.
6. Fickle mockup time
Lumps that use a fickle mockup time have a couched mockup time constraint that the lump postulates; the lump tells RTDM when to run it. The hoarded mockup time is [-2, Tvo], where Tvo is a unique fickle offset. The pulse generation scheme is an example of a lump that possesses a fickle mockup time. Since RTDM supports fickle mockup times for variable-step solvers only, the pulse generation lump assumes a distinct mockup time if we use a static-step solver.
7. Prompted mockup time
If a lump lies inside a prompted-type (e.g., function-call, empowered and prompted, or iterator) subsystem, the lump may have either a prompted or a persistent mockup time. We are not allowed to postulate the prompted mockup time overtly; however, to achieve a prompted type during compilation, we must set the lump mockup time to inherited (-1). RTDM then determines the precise times at which the lump figures its output during simulation. The only omission is if the subsystem is an asynchronous function call.

8. Asynchronous mockup time
An asynchronous mockup time is comparable to a prompted mockup time. In both circumstances, it is essential to state a congenital mockup time, because RTDM does not execute the lump at regular intervals; instead, a run-time situation governs when the lump executes. For the case of an asynchronous mockup time, a special function might make an asynchronous function call.
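As an illustration, the distinct mockup time hits defined by equations (1) and (2) can be enumerated with a short sketch; the function name and sample values are hypothetical:

```python
def mockup_hits(t_mu: float, t_offset: float, t_im: float) -> list:
    """Return the mockup-time hits t_n = n*T_Mu + |T_o|
    for integers n with 0 <= n <= T_Im / T_Mu (equations (1)-(2))."""
    hits = []
    n = 0
    while n * t_mu <= t_im:
        hits.append(n * t_mu + abs(t_offset))
        n += 1
    return hits

# Period 0.5 s, no offset, imitation time 2 s:
assert mockup_hits(0.5, 0.0, 2.0) == [0.0, 0.5, 1.0, 1.5, 2.0]
# A non-zero offset To shifts every hit by |To|:
assert mockup_hits(0.5, -0.25, 1.0) == [0.25, 0.75, 1.25]
```

Since these hits are known a priori, a scheduler can precompute them, which is exactly what distinguishes the distinct type from the other mockup time kinds.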
III. CHALLENGES TO MOCKUP TIME
There are mainly two challenges, discussed as follows.
1. Deed to be done perpetually more in less time
The first defy to mockup time is that ever more work must be done perpetually in less time. System sensitivity is a driving force in real-time solicitations: how rapidly and reliably can a system react to real-time events? Can the system achieve its essential errands within a precise, constrained time, every single time? We repeatedly pursue ever more urbane functions and candidness, but in less total time. To illustrate with a specific example: to begin with, the embedded hardware executed a meek proportional-integral-derivative (PID) electrically powered rheostat. Over time, electrically powered rheostats became more stylish, comprising real-time driver-model-based electrically powered rheostat solutions. A signal-adaptive electrically powered rheostat consents the scheme to perceptively acclimatise to fluctuating environments and to arrange rheostat restrictions differently based on sensor response. To sum up, in a workshop computerisation milieu, compound electric motors communicate electronically to synchronize their rejoinder and to perform intricate actions. For specimen, a safety-allied immunity may elicit a stoppage sequence that necessitates synchronised actions of a diversity of utensils to shield both the machinist and the downstream machinery and to minimize system stoppage. Unsurprisingly, all of this chic computing chances in ever-decreasing volumes of time.
2. Challenge to abide the scheduling clashes
Scheduling clashes are an additional inexorable challenge in real-time system design. In outmoded intention tactics, each of the four notable electrically powered drive roles shown in Figure 1 has its peculiar steadfast processor, and each essentially drives individualistically.

Fig.1 Notable electrically powered drive roles

In a congregated explanation, these four serviceable assemblages are joined into a solitary organisation, but each is still able to operate asynchronously. Latent scheduling clashes ensue because all of the interrupts are channelled to a solitary manoeuvre. If they are not controlled suitably, the unplanned and asynchronous environment of the interrupts theoretically causes scheduling clashes within the solicitation program, eventuating in dwindled sensitivity. Supervision to restrict jitter and ensuring more-deterministic deeds are key influences to dodge schedule clashes.
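A toy illustration of serving asynchronous interrupts from several drive roles on a single processor, strictly by priority, is sketched below; the interrupt names and priority values are hypothetical and this is not the authors' RTDM implementation:

```python
import heapq

# Pending interrupts as (priority, arrival_order, name);
# a lower priority number means more urgent.
pending: list = []
order = 0

def raise_interrupt(priority: int, name: str) -> None:
    """Record an asynchronous interrupt arrival."""
    global order
    heapq.heappush(pending, (priority, order, name))
    order += 1

def dispatch() -> list:
    """Serve all pending interrupts strictly by priority, so that
    arrivals channelled to one processor cannot clash."""
    served = []
    while pending:
        _, _, name = heapq.heappop(pending)
        served.append(name)
    return served

raise_interrupt(2, "position-feedback")
raise_interrupt(0, "safety-stop")      # must pre-empt everything else
raise_interrupt(1, "velocity-loop")
assert dispatch() == ["safety-stop", "velocity-loop", "position-feedback"]
```

Deterministic, priority-ordered dispatch is one way to bound jitter when several asynchronous sources share one manoeuvre.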
IV. EVALUATING SCHEME TO RECEPTIVENESS

Studying and adapting all the actualities and dependencies discussed so far brings us to a spot where we can use an evaluating scheme to generate the quantifying parameters that are essential to measure receptiveness. Let us proceed to the primary question: how is real-time receptiveness dignified? Researching the problem, we find that receptiveness entails two rudiments:
1. Interrupt latency
2. Execution time
Interrupt latency may be defined as follows: once an event ensues, how hurriedly can the system spot it? For processor- or DSP-based solicitations, the interrupt latency (Ilt) is the period of time from the jiffy an interrupt is asserted to the spot at which the processor concludes its presently executing contraption instruction and twigs to the first line of the interrupt service routine (ISR).
Execution time may be defined as the time applied after the event is acknowledged: how speedily can the organization route it? For processor- or DSP-based solicitations, the execution time (Et) is the volume of time essential for the processor to complete all the instructions within a precise ISR and then return to customary action.
The total rejoinder time (Rt) adds the interrupt latency to the interrupt execution time:

Rt = Ilt + Et        (3)

as exemplified in Figure 2.
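Equation (3) can be sketched directly; the microsecond figures below are hypothetical examples, not measurements from the paper:

```python
def rejoinder_time(interrupt_latency_us: float, execution_time_us: float) -> float:
    """Total rejoinder (response) time per equation (3): Rt = Ilt + Et."""
    return interrupt_latency_us + execution_time_us

# Hypothetical figures: 12 us to reach the first ISR instruction,
# 48 us to complete the ISR and return to customary action.
assert rejoinder_time(12.0, 48.0) == 60.0
```

Reducing either term, latency through faster interrupt entry or execution time through a shorter ISR, reduces the total rejoinder time.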

Fig. 2 Pictorial to rejoinder time calculation

V. CONCLUSION
With the experience of many precious thoughts, perceptions, texts, ideas, advices and suggestions from professional writers, speakers and our supervisors, we conclude with a basic real-time driver model whose core part can be upgraded and evaluated as external environmental influences change. Priorities, with essentiality and deadlines, can be assigned to the tasks available to the device driver scheme. Due to time and resource constraints, this is just a generic perception of an RTDM with limited block modules of interacting schemes. Coding and implementation for a specific case study can be done in future work.

VI. FUTURE WORK
Resolving conflicts opens new life in the future; thus, for this proposed real-time driver model, future work will be based on the feedback to the proposed model, and significant suggestions will be welcomed to make this RTDM a most venerable piece of work. After resolving such clashes, code will be adapted to perform the resolution electronically so that human error can be omitted and an optimization of speed may be achieved. Different contexts of processing environment will have different responses and different rejoinder times, which will be tabled as statistics and studied for improvement in future work.

Role of Private Sector in Indian Power Transmission System: A Review
Neeraj Kumar1, Rohit Verma2, Subham Gandhi3
1Research Scholar, B.M.N. University, Rohtak
2Associate Professor, NPTI Faridabad
3Associate Professor, B.M.N. University, Rohtak
E-mail: neerajgian77@gmail.com, subhamsavailable@gmail.com
Abstract: India is one of the few countries where the transmission sector has been opened up for private participation, and it has attracted significant interest from private players. India is able to meet a peak demand of only 128 GW despite an installed capacity of over 250 GW; the reason for this gap is the lack of a strong transmission network. India now plans to connect all regional grids into a national grid by the year 2016 to improve the transmission of power across the country. The Electricity Act 2003 opened the doors for private sector participation in the power sector, and private investment in the transmission system started in the year 2006 with the framing of a policy under which power transmission projects are awarded through the tariff based competitive bidding route. This paper reviews the approaches made by the Government of India regarding private sector participation in the transmission sector.
Keywords: Transmission system, tariff based competitive bidding, private investor.
I INTRODUCTION
A developing country like India is still unsuccessful in bridging the gap between power demand and supply [1]. This gap largely affects the economy and development of the country. Despite a total installed capacity of approximately 250 GW, India is able to meet a demand of only 128 GW. The reason for this is the lack of a strong transmission network: during the 10th and 11th plan periods, transmission infrastructure development did not match generation capacity, and as per a report of Power System Corporation Limited around 30 transmission lines in the country are overloaded and stressed. Transmission planning is therefore an important part of power system planning.
Power transmission in India was restricted to central and state utilities until the year 2006. Private participation in this sector has been increasing steadily, and with its huge potential for investment it is now inevitable. The increase in power generation capacity will result in a steady order flow for T&D, providing ample opportunities for private players [2]. Partnership between private players and central/state utilities will ensure efficient and optimum operation of the transmission system, thereby reducing transmission losses, power pilferage and voltage fluctuation. Transmission projects worth an estimated Rs 256 billion, aggregating over 4,600 km of transmission line length, were awarded through the tariff based competitive bidding (TBCB) route last year. The projects will be taken up for global bidding by the nodal agencies PFC and REC. Several 765 kV and 400 kV lines and associated infrastructure are proposed to be set up under the identified projects; these would help in evacuating power from conventional and renewable power generation projects. The private sector presence is set to increase further during the thirteenth plan period.
II PRIVATE PARTICIPATION SO FAR
The National Tariff Policy 2006 introduced mandatory TBCB for all transmission projects, with the objectives of promoting competitive procurement of transmission services, encouraging greater investment by private players in the transmission sector, and increasing transparency and fairness in the process. During the year 2014-15, six interstate projects were awarded through the bidding route. Of the projects newly awarded in 2014, Power Grid won one, while Sterlite Grid Limited (SGL) and the Spain based Instalaciones Inabensa SA secured one each. SGL won the 441 km long transmission system with a total investment of Rs 80 billion, which also made it the biggest private player in the transmission segment. Other private companies that have won transmission projects through bidding are:


a. Essel Infrastructure Limited (EIL).
b. Techno Electric and Engg. Company Limited (TEECL).
c. Larsen and Toubro Infrastructure Projects Limited.
d. Reliance Power Transmission Limited.
At the intrastate level too, a few states such as Maharashtra, Rajasthan, Uttar Pradesh, Haryana and Madhya Pradesh have taken the lead in awarding transmission projects. The newly approved transmission system projects send a positive signal to investors.
III KEY CHALLENGES IN TRANSMISSION PROJECTS
Transmission networks face various issues and challenges during the stages of project implementation. These challenges can be classified as technical and administrative [5]. Technical challenges include a lack of skilled manpower; because transmission projects are spread widely over areas that include difficult terrain and remote locations, the power system developer faces issues in transporting equipment and manpower to the work site. Administrative challenges arise due to a lack of government support and the absence of necessary laws: currently there is no proper compensation policy for right of way (R.O.W.), which is a major issue in transmission projects, and in comparison to public sector companies the private developers face larger problems. Availability of cheap labour, land acquisition and social activism are other issues faced by developers besides the two main challenges above.
IV FARE BIDDING PROCESS BEWEEN GOVT.
UTILITES AND PRIVATE SECTOR
In order to promote greater private participation in the
power transmission sector, it is important that private
players be given a level playing field along with state
owned players such as PGCIL. PGCIL currently plays a dual role, transmission planning (as the CTU) and execution of interstate transmission projects. In the course of discharging its duties as the CTU and as a member of the Empowered Committee (EC), PGCIL is privy to material non-public and commercially cost-sensitive information, apart from having rights to influence
decision making in the EC. It is therefore recommended that the CTU be hived off from PGCIL and that, to ensure fairness in the bidding process, an independent and impartial Empowered Committee without any representation from PGCIL decide whether projects should be done through tariff-based bidding or under the cost-plus route. State entities and private players should be treated at par, with similar norms and processes for securing forest clearance [4].
V PROPOSED TRANSMISSION PROJECTS TAKEN UP THROUGH COMPETITIVE BIDDING
Since the competitive bidding guidelines for transmission were notified by the Ministry of Power in 2006, the following projects will be carried out through the bidding process:
- Interstate transmission system for renewables, Western Region I: 765 kV Bhuj Pool (New)-Banaskantha (Sankhari) D/C line, estimated line length 350 km, estimated cost Rs 24.87 billion.
- 765 kV Banaskantha-Chittorgarh D/C line, approximate line length 300 km, cost Rs 18.48 billion.
- 765 kV Chittorgarh (New)-Ajmer (New) D/C transmission line, 190 km, approximate cost Rs 13.56 billion.
- Interstate project for renewables, Northern Region III: 765 kV Ajmer (New)-Suratgarh (New) D/C line, approximate length 380 km, cost Rs 22.77 billion.
- Transmission system associated with the 2x800 MW Gadarwara STPS of NTPC: Gadarwara STPS-Jabalpur Pool D/C line (120 km) and 765 kV Gadarwara STPS-new pooling station near Warora D/C line (200 km), cost Rs 25.25 billion.
- 400 kV Dinchang-Rangia/Rowta pooling point D/C quad line (120 km), approximate cost Rs 8.73 billion.
- Additional system strengthening scheme for Chhattisgarh IPPs: 765 kV Raipur-Rajnandgaon D/C line, line length 370 km, approximate cost Rs 21.91 billion.
VI CONCLUSIONS
Despite the challenges in the transmission segment, significant investment in transmission infrastructure is expected in India over the next 5-6 years. Around 62,800 ckt. km of line length and 15,000 MW of HVDC capacity will be added at voltage levels of 400 kV and above during the Thirteenth Plan. The private sector plays an important role in achieving


this target. In this paper, the role of the private sector and the challenges during implementation of transmission projects have been discussed. The paper will be helpful for power system designers and private investors. To ensure greater participation of private investors, steps must be taken for timely clearances and for removal of right-of-way (R.O.W.) constraints.
REFERENCES
[1] Seyed Mohammad Ali Hossein, "Transmission network expansion planning in the competitive environment: a reliability based approach," IEEE Transactions, vol. 13, July 2011.
[2] Raminder Kaur and Maneesh Kumar, "Transmission expansion planning in Indian context: a review," International Conference on Recent Advances and Trends in Electrical Engineering (RATEE-2014).
[3] http://www.cea.nic.in
[4] http://www.powergridindia.com
[5] Power Line magazine, vol. 18, no. 10, June 2014.
[6] Power Line magazine, vol. 19, no. 10, June 2015.


Innovation Technique of Denoising of Ultrasonographic Images Using Dual Tree Complex Wavelet Transform
Anil Dudy(1), Subham Gandhi(2), Jitender Khurana(3)
(1) Research Scholar, B.M.N. University, Rohtak
(2),(3) Associate Professor, B.M.N. University, Rohtak
E-mail: anildudy10@gmail.com, subhamsavailable@gmail.com, jitukhurana@gmail.com


Abstract: Ultrasound imaging is a non-invasive, non-destructive and low-cost technique used for imaging organs and soft tissue structures in the human body. Digital image acquisition and processing play a very important role in current medical diagnosis techniques. Medical images are corrupted by noise during acquisition and transmission. In addition to the system noise, a significant noise source is the speckle phenomenon: ultrasound has historically suffered from an inherent imaging artifact known as speckle, which significantly degrades image quality and makes it more difficult for the observer to discriminate fine details of the images in diagnostic examination. Accordingly, speckle filtering or reduction is a central pre-processing step for feature extraction, analysis, and recognition from medical imagery. The dual-tree complex wavelet transform is an efficient method for denoising ultrasound images: it not only reduces the speckle noise but also preserves the detail features of the image. In this paper, denoising of ultrasound images has been performed using the dual-tree complex wavelet transform (DTCWT). The results achieved with the DTCWT are better than those of other existing methods such as the discrete wavelet transform (DWT).
Keywords DTCWT, DWT, PSNR, MSE

I. INTRODUCTION

Medical images are usually corrupted by noise during their acquisition and transmission. Noise tends to degrade the resolution and contrast of ultrasound images and may lead to the elimination of useful and important diagnostic information. The main objective of image-denoising techniques is to remove such noise while retaining the information signal. Speckle cannot be directly correlated with specific reflectors or cells in the body; it is necessary to analyze an ultrasound system to understand the origins of speckle. Various techniques have been proposed for ultrasound denoising. Conventional speckle suppression methods are based on temporal averaging and median filtering. Adaptive filters are widely used in ultrasound image restoration because they are easy to implement and control. Speckle Reducing Anisotropic Diffusion (SRAD) was introduced and involves a noise-dependent instantaneous coefficient of variation [1], [2]. The adaptive weighted median filter [3] can reduce speckle but does not properly preserve useful details such as image edges. The dual-tree complex wavelet transform (DTCWT) is a relatively recent enhancement to the discrete wavelet transform (DWT) with important additional properties: it is nearly shift invariant and directionally selective in two and higher dimensions, and it achieves this with a redundancy factor of only 2^d for d-dimensional signals, substantially lower than the undecimated DWT. It is an appropriate method for speckle reduction, enhancing the signal-to-noise ratio while conserving the edges and lines in the images.

II. TOOLS AND METHODOLOGY

The dual-tree complex wavelet transform is preferred to improve the human interpretation of ultrasound images. Speckle reduction makes


an ultrasound image cleaner with clearer boundaries. Denoising is a preprocessing step for many ultrasound image processing tasks such as segmentation and registration, and speckle reduction improves the speed and accuracy of automatic and semiautomatic segmentation and registration. For the denoising of ultrasound images in this paper, the wavelet transform has proved a more effective tool than the Fourier transform. The discrete wavelet transform, however, lacks the shift-invariance property, and in multiple dimensions it does a poor job of distinguishing orientations, which is important in image processing. For these reasons, to obtain improvements in some applications, the separable DWT is replaced by the complex dual-tree DWT, implemented here using a self-built function.
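The shift-variance that motivates replacing the separable DWT can be demonstrated with a one-level Haar transform. The following is a minimal NumPy sketch (a simplified stand-in for the paper's MATLAB implementation, with illustrative names; it is not the self-built DTCWT function itself):

```python
import numpy as np

def haar_level1(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

# A signal with a single step edge aligned to a pair boundary.
x = np.zeros(16)
x[8:] = 1.0

# The same signal delayed by one sample.
x_shift = np.roll(x, 1)

_, d0 = haar_level1(x)
_, d1 = haar_level1(x_shift)

# Energy in the detail (highpass) band changes with a 1-sample shift:
# the critically sampled DWT is not shift invariant.
print(np.sum(d0 ** 2), np.sum(d1 ** 2))
```

A one-sample delay moves the edge off the pair boundary, so the detail-band energy jumps from zero to a nonzero value; the DTCWT's near shift invariance avoids exactly this behaviour.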
The peak signal-to-noise ratio (PSNR) and mean square error (MSE) are the parameters used to compare the performance of the discrete wavelet transform and the dual-tree complex wavelet transform. The MSE measures the difference between the original image I and the recovered image K and should be as low as possible. For an m x n image it is given by:

MSE = (1/(m*n)) * sum_i sum_j [I(i,j) - K(i,j)]^2    (1)

The PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise. It is defined as:

PSNR = 10*log10(MAX_I^2 / MSE) = 20*log10(MAX_I / sqrt(MSE))    (2)

where MAX_I is the maximum possible pixel value of the image. MATLAB is used as the toolbox for ultrasound image denoising, and testing is performed on a set of medical images. The flowchart of the DTCWT method used to despeckle medical images is given in Fig. 1.
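Equations (1) and (2) translate directly into code; the following is a minimal NumPy sketch (the paper's experiments use MATLAB; the function and array names here are illustrative):

```python
import numpy as np

def mse(original, recovered):
    """Mean square error between two equally sized images, Eq. (1)."""
    diff = original.astype(np.float64) - recovered.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, recovered, max_i=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (2)."""
    return 10.0 * np.log10(max_i ** 2 / mse(original, recovered))

# Toy 8-bit images: recovered differs from original by 5 grey levels everywhere.
orig = np.full((4, 4), 100, dtype=np.uint8)
rec = np.full((4, 4), 105, dtype=np.uint8)
print(mse(orig, rec))   # 25.0
print(psnr(orig, rec))  # about 34.15 dB
```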
III. RESULTS

The analysis has been carried out in terms of PSNR and MSE. The calculated values of both parameters for the image of the brain are given in Table 1.
Table 1: Calculated MSE and PSNR values for image of brain

Performance parameter    DWT         DTCWT
MSE                      0.252252    2.2483
PSNR                     56.1419     83.8521

On comparison of the performance parameters MSE and PSNR shown in Table 1, it is found that the DTCWT is more robust and efficient than the DWT. The simulation results of applying the DWT and DTCWT on the image of the brain are shown in Fig. 2.

Fig. 1: Flow chart for dual tree complex wavelet transform

(a) (b) (c)

Fig. 2: Simulation results of applying DWT and DTCWT on image of brain

The calculated values of MSE and PSNR for the image of the lung are given in Table 2.


Table 2: Calculated MSE and PSNR values for image of lung

Performance parameter    DWT       DTCWT
MSE                      1.6814    5.6735
PSNR                     45.908    78.6394

On comparison of the performance parameters MSE and PSNR shown in Table 2, it is found that the DTCWT is more robust and efficient than the DWT. The simulation results of applying the DWT and DTCWT on the image of the lung are shown in Fig. 3.

(a) (b) (c)

Fig. 3: Simulation results of applying DWT and DTCWT on image of lung

IV. CONCLUSIONS

Speckle reduction improves the speed and accuracy of automatic and semiautomatic segmentation and registration. In this paper, denoising of ultrasound images has been performed using the dual-tree complex wavelet transform, which has proved to be an efficient technique for this purpose. From the experimental results it is concluded that the performance in terms of PSNR and MSE for a set of acquired medical images is better with the DTCWT than with the DWT.

REFERENCES
[1] Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process., vol. 11, pp. 1260-1270, 2002.
[2] Y. Yu, J. A. Molloy and S. T. Acton, "Three-dimensional speckle reducing anisotropic diffusion," in Proc. 37th Asilomar Conf. Signals, Systems and Computers, vol. 2, pp. 1987-1991, 2003.
[3] T. Loupas, W. N. McDicken and P. L. Allan, "An adaptive weighted median filter for speckle suppression in medical ultrasonic images," IEEE Trans. Circuits Syst., vol. 36, pp. 129-135, Jan. 1989.
[4] N. K. Ragesh, A. R. Anil and R. Rajesh, "Digital image denoising in medical ultrasound images: a survey," ICGST, pp. 12-14, April 2011.
[5] International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 2, no. 3, September 2009.
[6] Deka and P. K. Bora, "Despeckling of medical ultrasound images using sparse representation," International Conference on Signal Processing and Communication, pp. 1-5, 2010.
[7] Ashish Khare, Manish Khare, Yongyeon Jeong, Hongkook Kim and Moongu Jeon, "Despeckling of medical ultrasound images using Daubechies complex wavelet transform," Signal Processing, pp. 428-439, 2010.
[8] Parisa Gifani, Hamid Behnam, Ahmad Shalbaf and Zahra, "Noise reduction of echocardiography images using Isomap algorithm," 1st Middle East Conference on Biomedical Engineering, pp. 150-153, 2011.
[9] Chen Binjin, Xiao Yang, Yu Jianguo and Xue Haihong, "Ultrasonic speckle suppression based on a novel multiscale thresholding technique," 5th International Symposium on Communication and Mobile Network, pp. 1-5, 2010.
[10] Di Lai, Navalgund Rao, Chung-hui Kuo, Shweta Bhatt and Vikram Dogra, "An ultrasound image despeckling method using independent component analysis," IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 658-661, 2009.
[11] Sindhu Ramachandran S and Manoj G Nair, "Ultrasound speckle reduction using nonlinear Gaussian filters in Laplacian pyramid domain," 3rd International Congress on Image and Signal Processing, vol. 2, pp. 771-776, 2010.
[12] Sheng Yan, Jianping Yuan, Minggang Liu and Chaohuan Hou, "Speckle noise reduction of ultrasound images based on an undecimated wavelet packet transform domain non-homomorphic," BMEI, pp. 1-5, 2009.
[13] M. I. H. Bhuiyan, M. O. Ahmad and M. N. S. Swamy, "Spatially adaptive thresholding in wavelet domain for despeckling of ultrasound images," Image Processing, vol. 3, pp. 147-162, 2009.
[14] Arash Vosoughi and Mohammad B. Shamsollahi, "Speckle noise reduction of ultrasound images using M-band wavelet transform and Wiener filter in a homomorphic framework," International Conference on Biomedical and Informatics, vol. 2, pp. 510-515, 2008.
[15] R. K. Mukkavilli, J. S. Sahambi and P. K. Bora, "Wang modified homomorphic wavelet based despeckling of medical ultrasound images," Canadian Conference on Electrical and Computer Engineering, pp. 887-890, 2003.
[16] S. Kother Mohideen, S. Arumuga Perumal and M. Mohamed Sathik, "Image de-noising using discrete wavelet transform," IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 1, pp. 213-215, Jan. 2008.
[17] Ricardo G. Dantas and Eduardo T. Costa, "Ultrasound speckle reduction using modified Gabor filters," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 530-538, 2007.


Modeling Setup for Next Generation Wireless System Using MIMO-STBC

Niranjan Yadav(1), Subham Gandhi(2)
(1) Research Scholar, B.M.N. University, Rohtak
(2) Associate Professor, B.M.N. University, Rohtak
E-mail: niranjnayadav97@gmail.com, subhamsavailable@gmail.com


Abstract: Multiple input multiple output (MIMO) systems in wireless communications refer to any wireless communication system in which more than one antenna is used at both sides of the communication path. High-performance 4th generation (4G) broadband wireless communication systems are enabled by the use of multiple antennas not only at the transmitter but also at the receiver end. A MIMO system provides multiple independent transmission channels, leading, under certain conditions, to a channel capacity that increases linearly with the number of antennas. Orthogonal frequency division multiplexing (OFDM) is known as an effective technique for high-data-rate wireless mobile communication. By combining these two promising techniques, MIMO and OFDM, we can significantly increase the data rate, which is justified by improved bit error rate (BER) performance. In this paper, we briefly describe the concept of the MIMO system; through comparison with a CDMA system, its key benefits are discussed.

Keywords: MIMO, 4G, OFDM, BER
I. INTRODUCTION

A multiple-input multiple-output (MIMO) system


consists of multiple antennas at the receiver and
transmitter. These multiple antennas can be used to
improve the performance of the system through
spatial diversity or increase the data rates by spatial
multiplexing. One can also use some of the antennas
for diversity and some for spatial multiplexing. The
number used for diversity and spatial multiplexing
depends on the application. MIMO systems can
support higher data rates at the same transmission
power and bit error-rate (BER) requirements i.e. for
the same throughput requirement, MIMO systems
require less transmission energy. Hence it is tempting

to believe that MIMO systems are more energy


efficient than single-input single-output (SISO)
systems. However the circuit energy consumption of
a MIMO system is more than for a SISO system as it
has multiple RF chains and requires more signal
processing. Several studies on the energy efficiency
of MIMO systems have been done. Traditional
wireless communication systems with one
transmit and one receive antenna are denoted as
single input single output (SISO) systems, whereas
systems with one transmit and multiple receive
antennas are denoted as single input multiple output
(SIMO) systems, and systems with multiple transmit
and one receive antenna are called multiple input
single output (MISO) systems. Conventional smart
antenna systems have only a transmit side or only a
receive side equipped with multiple antennas, so they
fall into one of last two categories. Usually, the base
station has the antenna array, as there is enough space
and since it is cheaper to install multiple antennas at
base stations than to install them in every mobile
station. Strictly speaking, only systems with multiple antennas at both ends can be classified as MIMO systems, although SIMO and MISO systems are sometimes referred to as MIMO systems. In the terminology of smart antennas, SIMO and MISO systems are also called antenna arrays.
We conduct a literature survey on the
energy efficiency of MIMO systems. We study the
results of the work done in [2], [3] and [4] and break
down these various studies in terms of the different
systems, their energy consumption for transmission
and by the circuit, and the diversity gain and/or
multiplexing gain achieved. With this we come up
with general system model, which we use to study the
impact of increasing the rate on energy consumption
at different distances.
II. OVERVIEW OF THE MIMO SYSTEM

The structure of a MIMO system consists of seven modules, each of which incorporates an important part of any simple communication system. The output of each module is cumulative and is used as an input to


the next module in the system. The diagram below briefly shows how these modules are arranged:

Create Signal -> Mapping -> Space-Time Coding -> Pulse Shaping -> Channel -> Receiving

This package implements the simple communication system shown above and allows the user to implement and simulate MIMO channels as well (the Transmitter and Receiver blocks are explained in more detail further in the manual). In this paper the different modules are implemented such that the inputs and outputs are organized in an object-oriented style. Each module is fed one or two objects containing the parameters that need to be used in that specific module, and the output data of each module is also generated as an object. This technique makes it easier for the user to locate and use any of the parameters in each module. Below is a breakdown of the different objects and where and how each is used.

In order to test or simulate different MIMO transmission schemes, the user needs only to edit the Transmit and Receiver modules to reflect the new scheme. These two modules currently
new scheme. These two modules currently
correspond to a 2x2 MIMO system. If the user wishes
to change other system parameters such as
Modulation scheme, pulse shaping, etc., the user has
complete control of these features in the
CreateSignal module. The different global variables
needed within the communication system are set here.
The user must change these values depending on the
type of simulation being conducted. Some important
variables such as the length of the signal, number of
iterations, number of transmit and receive antennas,
bit period, symbol frequency, etc. must be set in order
to have a functioning communication system. The
user might also wish to add variables other than what
is already available.

Fig. 1: Simple communication system

Modules and types of objects: This object is first created in the signal module, in which the user sets the parameters used in creating the input signal, the pulse shaping filters, etc. The parameters involved in this object are listed below.

The input signal is randomly generated here using a random number generator, producing a stream of bits (0s and 1s). The size of the signal is equal to L*itt*k. The user can substitute a signal of their own if desired; the signal must be a column vector of bits.

Table 1: Types of objects

System Parameters: Numtx (number of transmit antennas); Numrx (number of receive antennas); SNR (a vector of SNR values); M (M-ary: binary (2), quad (4) or 2^n); K (number of bits per symbol, calculated from M); Block (string for block type); BlockL (length of blocks).

Signal Parameters: L (length of signal); itt (number of iterations); Px (power of the signal: 1 for PSK, different for QAMs).

Filter Parameters: FltrType (type of filter to be used); r (roll-off factor); T (bit period); dur (duration); nsamp (oversampling rate); Delay (delay of the filter).
B. Mapping
This module provides a selection of mapping techniques. For this specific MIMO project only a few modulation techniques are operational. Apart from the fully implemented modulation techniques, this file contains many other techniques that are not fully implemented. Each of the cases calls other functions found in the folder Modulation; the user can understand how each modulation technique is implemented by inspecting those functions. Below is a table that lists the fully implemented modulation techniques that do not require training blocks [1]:


Table 2: Types of modulation

Modulating Technique: BPSK, QPSK, 8PSK, 16PSK, 16QAM, 64QAM

Fig. 3: Transmission Medium [8]
C. Pulse Shaping
This module implements a case hierarchy that allows the user to select different types of pulse shaping filters as required for the simulation, chosen through the input parameter FltrType. The following filters can be used for pulse shaping:
- sqrtrcos: generates a Square Root Raised Cosine filter.
- sqrttrrcos: generates a Truncated Square Root Raised Cosine filter.
- sqrtmrcos: generates a Modified Square Root Raised Cosine filter.
- sqrtsfrcos: generates a Shifted Square Root Raised Cosine filter.
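As a rough illustration of the pulse family behind these options, the closed-form impulse response of the (non-root) raised cosine pulse can be sketched in NumPy. The roll-off factor beta here corresponds to the table's parameter r; the function name is illustrative, not taken from the package:

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised cosine impulse response (non-root variant, unit peak).

    A simplified illustration; the package's sqrt* filters are the
    square-root counterparts of this pulse.
    """
    t = np.asarray(t, dtype=np.float64)
    h = np.empty_like(t)
    # Points where the closed-form denominator 1 - (2*beta*t/T)^2 vanishes.
    sing = np.isclose(np.abs(t), T / (2.0 * beta))
    # L'Hopital limit at the singular points.
    h[sing] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    ts = t[~sing]
    h[~sing] = np.sinc(ts / T) * np.cos(np.pi * beta * ts / T) / (
        1.0 - (2.0 * beta * ts / T) ** 2)
    return h

t = np.linspace(-4, 4, 801)  # 801 points, so t = 0 lies on the grid
h = raised_cosine(t)
print(h[400])                   # unit peak at t = 0
print(np.allclose(h, h[::-1]))  # even symmetry
```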
D. Space-Time Coding
This module implements a transmit technique in which the partitioned signal is processed and filtered. The user can implement any transmit scheme within this module and must correspondingly implement the same scheme on the receiver [4] [5]. The current file implements the Alamouti transmit scheme for a 2x2 MIMO system [6].
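The Alamouti scheme named above can be sketched for a single receive antenna over a noiseless flat-fading channel; this is a simplified NumPy illustration with assumed variable names, not the package's Transmit/Receiver code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two complex QPSK symbols to send over two symbol periods.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading channel gains from the two transmit antennas, assumed
# constant over the two periods and known at the receiver.
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Alamouti encoding: period 1 sends (s1, s2); period 2 sends (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                      # received in period 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)   # received in period 2

# Linear combining recovers the symbols up to the channel power gain.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose(s1_hat, s1), np.allclose(s2_hat, s2))  # True True
```

The orthogonal code structure is what lets a simple linear combiner, rather than joint detection, separate the two symbols.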
Fig. 2: Transmitter Block Diagram [4]

E. Channel
This module acts as the transmission medium for the communication system. It is the centerpiece of the package; here the user can test or simulate different coefficient-generation methods, noise structures, and encoding and transmitting signal schemes. First the module generates the channel coefficients that will be used for each transmission path: ch0, ch1, ch2 and ch3. Random noise (Additive White Gaussian Noise) is then generated and added to the sequences transmitted from each antenna, according to the Signal to Noise Ratio (SNR) range that has been provided; the simulation runs for all input values of SNR. The SNR value is used to calculate the noise variance needed to create the random receiver noise. This part of the module generates a random noise sequence ten times longer than required and randomly selects a vector from it; by doing so we allow for less error within the MATLAB random generator and minimize the correlation between the coefficients of the random noise. The two for loops in this section encode the signals received at the receive antennas by accounting for channel coefficients and noise. The final received signals are saved in the object ch, variable R, to be used in the next module, as follows:

R1 = Ch1*Tx1 + Ch2*Tx2 + ... + Ch(Numtx)*Tx(Numtx)
R2 = Ch(Numtx+1)*Tx1 + Ch(Numtx+2)*Tx2 + ... + Ch(Numtx*2)*Tx(Numtx)
...
R(Numrx) = ...

F. Receiving
The receiver in a communication system implements a decoding and decryption scheme designed in parallel with the transmit scheme; by doing so the maximum likelihood detector in the receiver can retrieve the original signal with the least error.

Fig. 4: Receiver Block diagram [6]
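The received-signal relations R1, R2, ... described for the channel module amount to a matrix-vector product plus noise. A minimal NumPy sketch of this mixing for a 2x2 case follows (names are illustrative and unit signal power is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
numtx, numrx = 2, 2

# Symbols transmitted from each antenna, and a complex channel matrix:
# entry [i, j] is the gain from transmit antenna j to receive antenna i.
tx = np.array([1 + 0j, 0 - 1j])
H = (rng.standard_normal((numrx, numtx))
     + 1j * rng.standard_normal((numrx, numtx))) / np.sqrt(2)

# AWGN whose variance follows from a target SNR in dB (unit signal power).
snr_db = 20.0
noise_var = 10 ** (-snr_db / 10)
n = np.sqrt(noise_var / 2) * (rng.standard_normal(numrx)
                              + 1j * rng.standard_normal(numrx))

# R1 = Ch1*Tx1 + Ch2*Tx2 + noise, etc., written as one matrix product.
r = H @ tx + n
print(r.shape)  # one received sample per receive antenna
```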


III. WIRELESS MIMO SYSTEM FOR NEXT GENERATION

This module acts as the data processing and assessment stage. It currently generates a graph that shows the results of the simulation: the implemented technique takes the average of the bit error rate over all iterations and plots it against the signal-to-noise ratio range. The user can use the results as they wish, since the output is saved in a .mat file that stores the input and output results of the simulation.

The idea of using multiple receive and multiple transmit antennas has emerged as one of the most significant technical breakthroughs in modern wireless communications. Theoretical studies and initial prototyping of MIMO systems have shown order-of-magnitude spectral efficiency improvements. As a result, MIMO is considered a key technology for improving the throughput of future wireless broadband data systems. MIMO is the use of multiple antennas at both the transmitter and receiver to improve communication performance; it is one of several forms of smart antenna technology. MIMO technology has attracted attention in wireless communications because it offers significant increases in data throughput and link range without requiring additional bandwidth or transmit power. This is achieved through higher spectral efficiency and improved link reliability or diversity (reduced fading). Because of these properties, MIMO is an important part of modern wireless communication standards such as IEEE 802.11n (Wi-Fi), IEEE 802.16e (WiMAX), 3GPP Long Term Evolution (LTE), 3GPP HSPA+, and the 4G systems to come. Radio communication using MIMO systems enables increased spectral efficiency for a given total transmit power by introducing additional spatial channels, which can be made available by using space-time coding. In this section, we survey the environmental factors that affect MIMO performance: channel complexity, external interference, and channel estimation error. The multichannel term indicates that the receiver incorporates multiple antennas by using space-time-frequency adaptive processing. Single-input single-output (SISO) is the well-known wireless configuration; single-input multiple-output (SIMO) uses a single transmit antenna and multiple receive antennas; multiple-input single-output (MISO) has multiple transmit antennas and one receive antenna; and multiuser MIMO (MU-MIMO) refers to a configuration comprising a base station with multiple transmit/receive antennas interacting with multiple users, each with one or more antennas.

Fig. 5: Transmitter model for MIMO system in block form [2]

Fig. 6: MIMO

G. Array gain
Array gain can be made available through processing at the transmitter and/or the receiver, and results in an increase in the average received signal-to-noise ratio (SNR) due to a coherent combining effect. Transmit and receive array gain require channel knowledge at the transmitter and receiver, respectively, and depend on the number of transmit and receive antennas. Channel knowledge at the receiver is typically available, whereas channel state information at the transmitter is in general more difficult to obtain. Array gain is thus a power gain achieved by using multiple antennas at the transmitter and/or receiver: the average increase in SNR at the receiver arising from the coherent combining effect of multiple antennas at the receiver, the transmitter, or both. If the channel is known to a transmitter with multiple antennas, the transmitter can apply appropriate weights to the transmission so that the signals combine coherently at the receiver; the array gain in this case is called transmitter array gain. Alternately, if we have only one antenna at the transmitter and no knowledge of the channel, the receiver can suitably weight the incoming signals so that they coherently add up at the output, thereby enhancing the signal. This is called receiver array gain, which can be exploited in the SIMO case. Essentially, multiple-antenna systems require some level of channel knowledge at the transmitter, the receiver, or both to achieve this array gain.
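The receiver array gain described above can be sketched with maximum ratio combining, a standard coherent-combining method (the NumPy names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_rx = 4          # receive antennas
noise_var = 1.0   # per-branch noise power

# Random complex channel gains to each receive antenna (unit average power).
h = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)

# Maximum ratio combining: weight each branch by its conjugate channel gain,
# so the branch signals add coherently while the noise adds incoherently.
snr_branches = np.abs(h) ** 2 / noise_var
snr_mrc = np.sum(np.abs(h) ** 2) / noise_var

# The combined SNR is the sum of the branch SNRs: the receiver array gain.
print(snr_mrc >= snr_branches.max())  # combining never loses SNR
```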


Diversity gain
In a wireless channel, signals experience fading. When the signal power drops significantly, the channel is said to be in a fade, and this gives rise to a high BER. Diversity is a powerful technique for mitigating fading in wireless links. Diversity techniques rely on transmitting the signal over multiple (ideally) independently fading paths in time, frequency, space, or other dimensions. Spatial (or antenna) diversity is preferred over time/frequency diversity because it incurs no expenditure in transmission time or bandwidth. A diversity scheme is a method for improving the reliability of a message signal by using two or more communication channels with different characteristics. Diversity plays an important role in combating fading and co-channel interference and in avoiding error bursts. It is based on the fact that individual channels experience different levels of fading and interference. Multiple versions of the same signal may be transmitted and/or received and combined in the receiver. Alternatively, a redundant forward error correction code may be added and different parts of the message transmitted over different channels. Diversity techniques may exploit multipath propagation, resulting in a diversity gain, often measured in decibels.
H. The following classes of diversity schemes can be identified:
Time diversity: Multiple versions of the same signal are transmitted at different time instants. Alternatively, a redundant forward error correction code is added and the message is spread in time by means of bit-interleaving before it is transmitted. Thus, error bursts are avoided, which simplifies the error correction.

Frequency diversity: This type of diversity provides replicas of the original signal in the frequency domain. The signals are transmitted using several frequency channels, or the signals are spread over a wide spectrum that is affected by frequency-selective fading. The former method can be found in coded-OFDM systems such as IEEE 802.11a/g/n, WiMAX, and LTE, and the latter method can be found in CDMA systems such as 3GPP WCDMA.

Multiuser diversity: Multiuser diversity is obtained by opportunistic user scheduling at either the transmitter or the receiver. In opportunistic user scheduling, the transmitter selects the best user among the candidate receivers according to the quality of each channel between the transmitter and each receiver. In FDD systems, a receiver typically feeds back the channel quality information to the transmitter with a limited level of resolution.

Space diversity (antenna diversity): The signal is transmitted over several different propagation paths. In the case of wired transmission, this can be achieved by transmitting via multiple wires. In the case of wireless transmission, it can be achieved by antenna diversity using multiple transmit antennas (transmit diversity) and/or multiple receive antennas (receive diversity). In the latter case, a diversity combining technique is applied before further signal processing takes place. If the antennas are far apart, for example at different cellular base station sites or WLAN access points, this is called macro diversity or site diversity. If the antennas are at a distance of the order of one wavelength, this is called micro diversity. A special case is phased antenna arrays, which can also be used for beamforming, MIMO channels and space-time coding (STC). Space diversity can be further classified as follows.

Receive diversity: Maximum ratio combining is a frequently applied diversity scheme in receivers to improve signal quality.

Transmit diversity: In this case we introduce controlled redundancies at the transmitter, which can then be exploited by appropriate signal processing techniques at the receiver. There is open-loop transmit diversity, where the transmitter does not require channel information, and closed-loop transmit diversity, where the transmitter requires channel information; closed-loop transmit diversity is sometimes regarded as beamforming. Space-time codes for MIMO exploit both transmit and receive diversity, yielding a high quality of reception.

Polarization diversity: Multiple versions of a signal are transmitted and/or received via antennas with different polarization. A diversity combining technique is applied on the receiver side.

Cooperative diversity: Achieves antenna diversity gain by using the cooperation of distributed antennas belonging to each node.
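As a small numerical illustration of diversity order (not part of the paper's simulations; the threshold and trial count are illustrative), the sketch below compares the simulated outage probability of selection combining over N independent Rayleigh-faded branches against the closed form P_out = (1 - exp(-gamma_th))^N. The rapidly shrinking outage probability as N grows is the diversity gain described above.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_th, n_trials = 0.1, 200000  # outage threshold relative to the mean branch SNR

results = {}
for n_branches in (1, 2, 4):
    # i.i.d. exponential branch SNRs model Rayleigh fading with unit mean SNR
    snr = rng.exponential(1.0, size=(n_trials, n_branches))
    p_sim = np.mean(snr.max(axis=1) < gamma_th)        # selection combining outage
    p_theory = (1 - np.exp(-gamma_th)) ** n_branches   # closed-form outage probability
    results[n_branches] = (p_sim, p_theory)
    print(n_branches, p_sim, p_theory)
```

On a log scale the outage curve steepens with N; the slope is the diversity order of the scheme.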

I. Multiplexing gain

Spatial multiplexing gain is achieved when a system transmits different streams of data over the same radio resource in separate spatial dimensions. Data is hence sent and received over multiple channels, linked to different pilot signals, over multiple antennas. This results in a capacity gain at no additional power or bandwidth cost. Spatial multiplexing is a transmission technique in MIMO wireless communication in which independent data signals are transmitted from each of the multiple transmit antennas; the space dimension is therefore reused, or multiplexed, more than once. If the transmitter is equipped with NT antennas and the receiver has NR antennas, the maximum spatial multiplexing order is Ns = min(NT, NR). If a linear receiver is used, this means that Ns streams can be transmitted in parallel, ideally leading to an Ns-fold increase of the spectral efficiency. The practical multiplexing gain can be limited by spatial correlation and the rank property of the channel, which means that some of the parallel streams may have very weak or no channel gains.
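The multiplexing order Ns = min(NT, NR) can be illustrated numerically. The sketch below (illustrative parameters; an i.i.d. Rayleigh channel and equal power per transmit antenna are assumed) estimates the ergodic capacity C = E[log2 det(I + (SNR/NT) H H^H)], which at high SNR grows roughly as Ns bits/s/Hz per 3 dB of SNR.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_r, snr = 4, 4, 10.0  # transmit/receive antennas, linear SNR (10 dB)

ns = min(n_t, n_r)  # maximum spatial multiplexing order N_s = min(N_T, N_R)

caps = []
for _ in range(2000):
    # i.i.d. Rayleigh channel matrix with unit-variance entries
    h = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    # log-det capacity with equal power allocation across transmit antennas
    caps.append(np.log2(np.linalg.det(np.eye(n_r) + (snr / n_t) * h @ h.conj().T).real))

print(ns, np.mean(caps))  # ergodic capacity well above the SISO value log2(1 + snr)
```

Repeating the experiment with a rank-deficient (fully correlated) H shows the capacity collapsing toward the SISO curve, which is the rank limitation mentioned above.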

The MIMO channel model is discussed first, covering deterministic and frequency-flat or frequency-selective fading channels. This study carries out a mathematical derivation of the capacity of each MIMO channel. We begin with basic system capacities, comparing SISO, SIMO and MIMO, and then explore the general case in which the system has MT transmit antennas and NR receive antennas. Finally, fundamental capacity limits for transmission over MIMO channels are discussed. Many kinds of signal encoding schemes that support multiple-antenna systems have been well studied [2]. Among them, the primary ones include Bell Labs Layered Space Time (BLAST), space-time trellis codes (STTC), space-time block codes (STBC) and cyclic delay diversity (CDD). Accordingly, in the latter part of this chapter we introduce STBC and STTC signal models for the transmitter/receiver structure in a MIMO system.

IV. RESULTS AND CONCLUSION

In order to demonstrate the possible performance difference between the different pulse-shaping techniques, three sets of simulations were conducted for the following three square-root raised-cosine pulse-shape filters: sqrtrcos (normal), sqrttrrcos (truncated), and sqrtmrcos (modified). Alamouti's 2x2 transmission technique is implemented in the simulation. The three simulations were conducted with a length of 5000, an iteration count of 500, an SNR range of 0 to 12 dB and BPSK modulation. The filters in the three simulations had a period of 0.01, a duration of 0.05, a rolloff factor of 0.25 and an oversampling rate of 4.

Fig. 7: MIMO Interference Channel

Fig. 8: SNR vs BER for BPSK sqrtmrcos

Fig. 9: SNR vs BER for BPSK sqrtrcos
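The Alamouti block code used in these simulations can be sketched as follows. This is a minimal noiseless NumPy illustration, not the simulation code itself: two symbols are sent over two time slots from two transmit antennas, and linear combining at each of the two receive antennas recovers both symbols scaled by the total channel energy.

```python
import numpy as np

rng = np.random.default_rng(3)

# BPSK symbols to send in one Alamouti block
s1, s2 = 1.0 + 0j, -1.0 + 0j

# Flat-fading channel: h[i, j] from transmit antenna j to receive antenna i
h = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Alamouti encoding: slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1))
x1 = np.array([s1, s2])
x2 = np.array([-np.conj(s2), np.conj(s1)])

r1 = h @ x1  # received vector in slot 1 (noise omitted for clarity)
r2 = h @ x2  # received vector in slot 2

# Linear combining per receive antenna, summed across both antennas
z1 = sum(np.conj(h[i, 0]) * r1[i] + h[i, 1] * np.conj(r2[i]) for i in range(2))
z2 = sum(np.conj(h[i, 1]) * r1[i] - h[i, 0] * np.conj(r2[i]) for i in range(2))

g = np.sum(np.abs(h) ** 2)  # combined channel gain sum_ij |h_ij|^2
print(z1 / g, z2 / g)       # recovers s1 and s2 exactly in the noiseless case
```

With noise added, z1/g and z2/g become soft estimates whose effective SNR carries the full fourth-order diversity of the 2x2 link, which is what the BER curves above measure.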

Fig. 10: SNR vs BER for BPSK sqrttrrcos

Fig. 11: Comparison of SNR vs BER for the different pulse shapes

We can observe that better results are produced by the system which uses a larger number of receive antennas. This is because, as the number of receive antennas increases, the diversity of the system increases, and higher diversity gives better performance. So, when designing an STBC for a particular application, one needs to select the number of antennas at both ends of the communication link, the modulation and the rate of transmission. By using the proper STBC technique, it is possible to improve the data rate and range of wireless communication systems.

REFERENCES
[1] M. Nakagami, "The m-distribution, a general formula of intensity distribution of rapid fading," in Statistical Methods in Radio Wave Propagation, W. G. Hoffman, Ed. Oxford, England: Pergamon, 1960.
[2] H. Suzuki, "A statistical model for urban radio propagation," IEEE Trans. Commun., vol. 25, pp. 673-680, July 1977.
[3] H. Bölcskei and A. Paulraj, "Multiple-Input Multiple-Output (MIMO) Wireless Systems," in The Communications Handbook, 2nd ed., J. Gibson, Ed.
[4] C. Julian and B. Norman, "Maximum-Likelihood Based Estimation of the Nakagami m Parameter," IEEE Commun. Letters, vol. 5, no. 3, March 2001.
[5] F. Boixadera Espax and J. Boutros, "Capacity considerations for wireless MIMO channels," http://www.com.enst.fr/publications/publi/MMT99_boixadera.ps, 2002-01-08.
[6] M. Wennström, M. Helin and T. Öberg, "On the Optimality and Performance of Transmit and Receive Space Diversity in Rayleigh Fading Channels," IEE Seminar on MIMO Systems, London, Dec. 12, 2001.
[7] T. Svantesson and A. Ranheim, "Mutual coupling effects on the capacity of multielement antenna systems," Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP) 2001, vol. 4, pp. 2485-2488, 2001.
[8] A. Mirbagheri, K. N. Plataniotis and S. Pasupathy, "An enhanced widely linear CDMA receiver with OQPSK modulation," IEEE Trans. Commun., vol. 54, no. 2, pp. 261-272, February 2006.
[9] S. W. L. Poon, K. N. Plataniotis and S. Pasupathy, "Superimposed asymmetric modulation in narrow-band fading channels with orthogonal codes," IEEE Trans. Wireless Commun., vol. 5, no. 6, pp. 1260-1265, May 2006.
[10] A. C. C. C. Lam, A. Elkhazin, S. Pasupathy and K. N. Plataniotis, "Pulse shaping for differential offset-QPSK," IEEE Trans. Commun., vol. 54, no. 10, pp. 1731-1734, October 2006.
[11] S. Lam, K. N. Plataniotis and S. Pasupathy, "Self-matching space-time block codes for matrix Kalman estimator based ML detector in MIMO fading channels," IEEE Trans. Veh. Technol., vol. 56, no. 4, pp. 2130-2142, July 2007.
[12] A. Elkhazin, K. N. Plataniotis and S. Pasupathy, "Reduced dimension MAP turbo-BLAST detection," IEEE Trans. Commun., vol. 54, no. 1, pp. 108-118, January 2006.
[13] S. M. Alamouti, "A Simple Transmit Diversity Technique for Wireless Communications," IEEE J. Sel. Areas Commun., vol. 16, no. 8, pp. 1451-1458, October 1998.
[14] D. J. Young and N. C. Beaulieu, "The Generation of Correlated Rayleigh Random Variates by Discrete Fourier Transform," IEEE Trans. Commun., vol. 48, pp. 1114-1227, July 2000.
[15] M. Haque, S. E. Ullah and J. J. Sadique, "Secure Text Message Transmission in MC-CDMA Wireless Communication System with Implementation of STBC and MIMO Beamforming Schemes," International Journal of Mobile Network Communications & Telematics (IJMNCT), vol. 3, no. 1, February 2013.


Coordinated Multipoint (CoMP) Techniques in 4G-LTE-Advanced

Deepak Kumar Gahlot¹, Vijay Nandal²
¹ECE Department, MRIEM, Rohtak
²A.P., MRIEM, Rohtak
E-mail: dks.gahlot@gmail.com, vijay.nandal@yahoo.co.in


Abstract: LTE (Long Term Evolution) is a standard for wireless communication of high-speed data for mobile devices with low delay, and it is a purely packet-switched radio access technology. The standard is developed by the 3GPP (3rd Generation Partnership Project). The LTE-Advanced standard formally satisfies the ITU-R requirements to be considered IMT-Advanced [3]. LTE uses different radio access technologies for the uplink and the downlink: OFDMA (orthogonal frequency division multiple access) is employed in the downlink to provide high data rates and spectral efficiency, while SC-FDMA (single-carrier frequency division multiple access) is used in the uplink because of its efficient use of the UE's battery. Coordinated multipoint (CoMP) transmission and reception is used in LTE-Advanced to improve system throughput, cell-edge throughput and spectral efficiency; it is also called cooperative MIMO [1]. In this paper, we show how the CoMP technique improves the performance of 4G-LTE.

Keywords: CoMP transmission/reception; 3GPP LTE-Advanced; SC-FDMA; OFDMA.

I. INTRODUCTION

The coordinated multipoint (CoMP) technique is based on network MIMO. By combining and coordinating signals from multiple antennas, CoMP provides mobile users with uniform performance and quality when they access video, text and bandwidth-sharing services, whether they are close to the LTE cell centre or at its outer edge. This technique is considered by 3GPP to improve system throughput, cell-edge throughput and spectral efficiency [2]. CoMP communication can occur as intra-site and inter-site CoMP.

LTE-A was proposed to improve the LTE system to meet the IMT-Advanced requirements issued by the ITU-R [3]. CoMP transmission and reception is used in LTE-A to improve the system throughput, spectral efficiency and cell-edge throughput. Joint Processing (JP), Coordinated Scheduling/Beamforming (CS/CB) and Joint Transmission (JT) are the types of CoMP currently under evaluation. CS/CB supports a single data transmission point with UE scheduling/beamforming decisions [3]. JP provides multiple data transmission points among multiple coordinated eNBs for each UE. With JT, multiple cells can transmit the same data concurrently by using the same radio resources (frequency or time). Inter-eNB and intra-eNB coordination are employed to use the spectrum properly. The radio architecture of LTE-Advanced is shown in Fig. 1.

Fig. 1: LTE-Advanced System Architecture [4]

The main idea of CoMP is that when a UE is in a cell-edge region, it may be able to receive signals from multiple cell sites, and the UE's transmission may be received at multiple sites regardless of system load. If the signals transmitted from the sites are coordinated, the DL performance can be significantly increased. For the UL, since the signal can be received by multiple cell sites, if the scheduling is coordinated among the different cell sites, the system can take advantage of this multiple reception to significantly improve the link performance. CoMP communication can occur as intra-site and inter-site CoMP, as shown in Fig. 2.


II. CHANNEL INFORMATION USED IN CoMP

Channels are the routes for transmitting data between the Tx antenna and the Rx antenna through the air. If base stations know the UEs' channel information beforehand, they can transmit precoded data so that the UEs get better reception. For this purpose, UEs measure their channels and report the resulting channel state information (CSI) to their base stations. In general, CSI includes the channel quality indicator (CQI), the Precoding Matrix Indicator (PMI) and the Rank Indicator (RI) [5].

CQI: An indicator of channel quality, reported as the highest modulation and coding rate (MCR) value that satisfies the condition channel block error rate (BLER) < 0.1. It is set as a value ranging from 0 to 15 (4 bits); the better the channel quality, the higher the MCR used.

RI: Indicates the number of data streams being delivered in the DL.

PMI: Base stations deliver more than one data stream through the Tx antennas. The precoding matrix shows how the individual data streams are mapped to antennas. To calculate the precoding matrix, UEs obtain channel information by measuring the channel quality of each DL antenna.

III. CoMP CATEGORIES IN TERMS OF 3GPP

In this section, we introduce the different CoMP types depending upon whether the backhaul is ideal or non-ideal, whether CoMP between two eNBs is supported or not, whether the MIMO antenna supports one user or multiple users, and whether it is applied to the DL or the UL. Although different types of CoMP can be used together, we explain each of them separately below.

Fig. 2: Intra-site and Inter-site CoMP [3]

Table 1: Summary of the characteristics of each type of CoMP architecture [5]

A. Coordinated Scheduling / Coordinated Beamforming (CS/CB)

In the CS/CB scheme, CoMP eNBs within a cluster share only their scheduling information. To minimize interference among cell-edge UEs, CS/CB CoMP selects one of the cooperating cells as the transmission cell and uses it in communicating with the UE. CS reduces inter-cell interference by allocating different frequency resources to cell-edge UEs, while CB allocates different spatial resources to UEs at cell edges by using smart antenna technology.

Fig. 3: CS/CB Architecture [5]
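The CQI report described in Section II can be sketched as a simple SINR-to-index mapping. The threshold values below are invented for the illustration: the CQI-to-modulation tables themselves are defined in the 3GPP specifications, but the SINR thresholds a UE applies are implementation-specific.

```python
# Hypothetical SINR-to-CQI mapping sketch; thresholds are illustrative only.
# A real UE derives the CQI so that the transport BLER stays below 0.1.
def sinr_to_cqi(sinr_db, thresholds=None):
    if thresholds is None:
        # 15 ascending SINR thresholds (dB), one per non-zero CQI index
        thresholds = [-6.7 + 1.9 * i for i in range(15)]
    cqi = 0
    for i, t in enumerate(thresholds, start=1):
        if sinr_db >= t:
            cqi = i  # keep the highest index whose threshold is met
    return cqi       # 4-bit value in 0..15, as in the CQI description above

print(sinr_to_cqi(-10), sinr_to_cqi(0), sinr_to_cqi(30))  # 0 4 15
```

The base station would then map the reported index to a modulation and coding rate, choosing a higher MCR for a higher CQI.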

CS and CB are used together: Cell A and Cell B cooperate with each other to allocate different frequency resources (f3, f2) and different spatial resources (beam 1 pattern, beam 2 pattern) to A1 and B1, respectively. This cooperation is more effective because CS alone can easily take care of interference issues, and in addition CB can ensure better reception quality. If used with CB, CS can achieve better cell-edge throughputs, because CB helps A1 and B1 avoid the signal sent to the other.
B. Joint Transmission

Multiple cells can transmit the same data concurrently by using the same radio resources (frequency and time). Because the same data is sent, the data rate does not double, but the reception quality is improved.

Fig. 4: Joint Transmission (data is sent from the DU to RU1 and RU2) [5]

A1, at the edge of cell A, receives data from its serving cell (Cell A) and the same data from Cell B as well, and this leads to the best reception quality at A1. The signals received from the other cell do not cause interference; instead, they actually make the signal destined for A1 stronger [4]. In order for JT CoMP to work effectively, tight synchronization between the JT cells is required, and the transmission latency between JT cells should be sufficiently low. When data is delivered from multiple cells/base stations, HARQ is performed at the serving cell/base station only.
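The SNR benefit of JT can be illustrated with a small Monte Carlo sketch. Ideal co-phasing between the two cells is assumed here (each cell pre-rotates its signal so the two copies add constructively at the UE), and all parameters are illustrative rather than taken from the paper: with two unit-power Rayleigh links, the mean post-combining SNR more than triples relative to single-cell transmission.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, noise_power = 50000, 1.0

snr_single, snr_jt = [], []
for _ in range(n_trials):
    # independent Rayleigh links from Cell A and Cell B to the UE
    h_a, h_b = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    snr_single.append(abs(h_a) ** 2 / noise_power)
    # coherent JT with co-phasing: the two envelopes add at the receiver
    snr_jt.append((abs(h_a) + abs(h_b)) ** 2 / noise_power)

print(np.mean(snr_single), np.mean(snr_jt))  # JT mean SNR is clearly higher
```

Even without co-phasing, non-coherent JT still delivers a diversity benefit; the coherent case shown here is simply the upper bound on the combining gain.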
C. Dynamic Point Selection

DPS works the same way as JT in that multiple cells share the same data. First, the channel quality of the UE is checked in each subframe, and the data is sent by the one cell that has the minimum path loss; the cells which are not selected do not transmit to that UE. For example, Cell A and Cell B cooperate with each other to allocate the same frequency resource (f3) to A1, share the same data, and dynamically select the transmitting cell in each subframe, which leads to better reception with low distortion: A1 receives data from Cell A or Cell B, whichever has the better channel quality. Here F = {f1, f2, ..., fn}, where each fi is an RB (Resource Block) or subcarrier; RU = Radio Unit; DU = Digital Unit.

Fig. 5: DPS Architecture [5]
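The per-subframe DPS selection step described above can be sketched as follows; the log-distance path-loss model, its exponent, and the UE-to-cell distances are invented for the illustration.

```python
import math

# Illustrative DPS step: each subframe, pick the cell with minimum path loss.
# Log-distance path-loss model with made-up parameters (not from the paper).
def path_loss_db(distance_m, exponent=3.5, pl0_db=34.0):
    return pl0_db + 10 * exponent * math.log10(distance_m)

cells = {"Cell A": 120.0, "Cell B": 310.0}  # hypothetical UE-to-cell distances (m)
serving = min(cells, key=lambda c: path_loss_db(cells[c]))
print(serving)  # Cell A: the nearer cell has the smaller path loss
```

In a real system the selection would use reported channel quality rather than raw distance, but the decision rule (transmit from the strongest point, mute the rest) is the same.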

IV. CONCLUSION

In this paper, we have studied the different CoMP techniques in the LTE-A system in terms of 3GPP, through which the system throughput, spectral efficiency and cell-edge throughput can be improved; in other words, the performance of the 4G-LTE network is increased.
REFERENCES
[1] Cheng-Chung Lin, Kumbesan Sandrasegaran and Scott Reeves, "Handover Algorithm with Joint Processing in LTE-Advanced," Phetchaburi Conference, 2012.
[2] R. Divya and A. Hüseyin, "3GPP Long Term Evolution: A Technical Study," Spring 2009.
[3] Zahid Ghadialy, "CoMP Transmission and Reception," Google 3G4G Blog, Feb 2010.
[4] C. Xiaodong, C. Xin and L. Jingya, "Handover Mechanism in Coordinated Multi-Point Transmission/Reception System," ZTE Communications, vol. 1, 2010, p. 5.
[5] Dr. Michelle M. Do and Dr. Harrison J. Son, "CoMP Categories," August 2014.


Graphene: Emerging Technology in Nanoelectronics

Pratibha¹, Vijay Nandal²
¹ECE Department, MRIEM, Rohtak
²A.P., MRIEM, Rohtak
E-mail: vijay.nandal@yahoo.co.in

Abstract: In this paper, the novel material graphene and its extraordinary characteristics are presented. Hailed as a rapidly rising star on the horizon of materials science, graphene holds the potential to overhaul the current standards of technological and scientific efficiency and marks a new era of flexible, widely applicable materials science and nano-electronic circuits. Graphene is a native one-atom-thick crystal which consists of a single layer of carbon atoms. Until its discovery in 2004, graphene had been hiding in plain sight, tucked away as one of millions of layers forming the graphite commonly found in the lead of pencils. Graphene owes its amazing abilities to its unique band structure, which gives it enhanced electrical capabilities and the highest carrier mobility known to exist at room temperature. Thus, graphene is a very promising material for high-frequency nano-electronic devices such as GFETs, switches and oscillators.

Keywords: Graphene; GNRs (Graphene Nano Ribbons); GFET (Graphene Field Effect Transistor); CNTs; Graphene characteristics and applications.

I. INTRODUCTION

Over the past few decades, the fabrication process for integrated circuits (ICs) has scaled well according to Moore's law, which states that the number of transistors that can fit on a chip doubles about every two years. The continuous scaling has benefited the industry with increasing data processing capability on a chip at a decreasing cost. Moore's law [1] has dictated an ambitious innovation cycle in silicon technology over the last four decades. Along the way, it has provided the fundamental CMOS technology for today's global information society. While the end of silicon technology has been predicted a number of times for technological reasons, it has not only persisted but is in fact set to remain the driving technology for at least 15 more years. Even beyond the silicon horizon, a demise of CMOS technology is unlikely. Instead, a range of add-on technologies is envisioned to boost the silicon workhorse. One of the most promising future options to enhance silicon is the introduction of carbon-based electronics. In recent years, intriguing electrical properties have been found in carbon nanotubes (CNTs) [2]. The major disadvantage of CNTs, however, is their random distribution, which clearly hampers their utilization as a replacement for silicon as a substrate. This leaves two options for carbon electronics: either self-organization methods for CNTs, or carbon substrates, i.e. thin layers with properties similar to those of CNTs.

Graphene, a two-dimensional single sheet of carbon atoms, has recently attracted great interest among physicists and engineers. It has recently been demonstrated to be thermodynamically stable. Monolayer graphene consists of sp2-bonded carbon atoms arranged in a dense honeycomb crystal structure, as shown in figure 1. The symmetry of its honeycomb lattice structure confers on graphene very exceptional transport properties [3].

Fig 1 : Monolayer model of sp2 hybridization of carbon atoms in Graphene


Physicists Andre Geim and Konstantin Novoselov from Manchester University were the first to successfully obtain single-sheet graphene from bulk graphite, an accomplishment for which they were awarded the Nobel Prize in Physics in 2010 [4].

Figure 1 shows that a monolayer of graphene consists of sp2-hybridized carbon atoms arranged in the repetitive hexagonal honeycomb lattice of graphene, also known as the "chicken wire" lattice, in which six carbon atoms form each vertex, as shown in figure 2.

Fig 2 : Hexagonal lattice of Graphene

Graphene, an allotrope of carbon, takes its name from graphite + -ene, where -ene is a suffix denoting a double-bonded atom. Allotropes are the different structural forms of the same element in the same physical state; allotropy means the atoms are bonded together in a different manner. There are many allotropes of carbon, such as diamond, graphite, fullerenes and CNTs (carbon nanotubes), all of which have different atomic structures [5].

Since 2004, the field of graphene research has exploded, with over 200 companies involved in research and more than 3000 papers published in 2010 alone [6].

II. PROPERTIES OF GRAPHENE

Many researchers proclaim graphene as the 21st century's miracle material, as it possesses powerful properties that other compounds do not: immense physical strength and flexibility, unparalleled conducting capabilities, and a diverse range of academic and mainstream applications.

Physical Attributes

Graphene is the basic structural element of other allotropes, including graphite, charcoal, carbon nanotubes (CNTs) and fullerenes. It can be wrapped up into zero-dimensional (0D) fullerenes, rolled into one-dimensional (1D) nanotubes or stacked into three-dimensional (3D) graphite [4], as shown in Fig 3. Graphene can be considered as an indefinitely large aromatic molecule, the limiting case of the family of flat polycyclic aromatic hydrocarbons [5].

Fig 3 : Various Forms of Graphene: 2-D Graphene, 3-D Graphite, 1-D CNT, 0-D Fullerenes [4]

Graphene boasts a one-atom-thick, two-dimensional structure, making it the thinnest material in the known universe [7]. A single layer of graphene is so thin that it would require three million sheets stacked on top of one another to make a pile just one millimetre high [6]. In fact, graphene is so thin that the scientific community has long debated whether its

independent existence is even possible. More than 70 years ago, the band structure of graphite was derived, revealing to the scientific community that graphite is composed of closely packed monolayers of graphene held together by weak intermolecular forces. In the 3-D graphite structure, graphene sheets are weakly coupled between the layers by van der Waals forces. However, scientists at the time argued that two-dimensional structures like that of graphene were thermodynamically unstable and thus could exist only as a part of three-dimensional atomic crystals [8]. This belief was well established and widely accepted until the experimental discovery of graphene and the subsequent isolation of other freestanding two-dimensional crystals in 2004 [8], which showed that graphene is thermodynamically stable due to its tightly packed carbon atoms and sp2 orbital hybridization [5]. With its very discovery, graphene began to push the limits of traditional materials science. Conventional wisdom dictates that thin implies weak; yet graphene defies expectations. According to mechanical engineering professor and graphene researcher James Hone of Columbia University, "Our research establishes graphene as the strongest material ever measured, some 200 times stronger than structural steel" [9]. Recent research has also shown that it is several times tougher than diamond, and suggests that it would take "an elephant balanced on a pencil" to break through a sheet of graphene the thickness of a piece of plastic wrap [6]. The enormous strength of graphene is attributed to both the powerful atomic bonds between carbon atoms in the two-dimensional plane and the high flexibility of those bonds, which allows a sheet of graphene to be stretched by up to 20% of its equilibrium size without sustaining any damage [6]. With the development of this new "wonder material" with such unique properties, one might expect unreasonably high prices and relative inaccessibility for industrial applications. However, one of graphene's most exciting features is its cost. Graphene is made by chemically processing graphite, the same inexpensive material that composes the lead in pencils [9]. Every few months, researchers develop new, cheaper methods of mass-producing graphene, and experts predict prices to eventually reach as low as $7 per pound for the material [6].
Experiments show that the charge carriers in free-standing graphene lose their effective mass and behave as massless Dirac fermions [4]. These can be described by a Dirac-like equation [10] instead of by the Schrodinger equation used for traditional semiconductors. A Dirac fermion is a fermion that is not its own antiparticle; all fermions in the Standard Model (except possibly neutrinos) are Dirac fermions.

Ballistic transport of charge [11] takes place in graphene, meaning there is no collision, or only negligible collision, of electrons during charge transport. This property allows the design of low-power, faster-switching transistors. Graphene is a semimetal with an extremely small overlap between the valence and the conduction band. It is also called a zero-band-gap material: the conduction and valence bands touch at the Dirac points in the Brillouin zone, and the energy increases linearly for energies above and below them, as shown in fig. 4, which allows for carrier modulation; consequently, the electron has zero effective mass [10]. Graphene can be used as a metal or as a semiconductor material depending on different factors, such as the width of the GNRs.

Fig 4 : Band structure of 2D Graphene [10].
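The zero-gap band structure of Fig. 4 can be reproduced with the standard nearest-neighbour tight-binding model. The sketch below is illustrative: the hopping energy t ≈ 2.8 eV and the C-C distance are representative literature values, not taken from this paper. The two bands E(k) = ±t|f(k)| are widely separated at the zone centre but touch at the Dirac point K, which is the zero-band-gap behaviour described above.

```python
import numpy as np

# Nearest-neighbour tight-binding sketch of the graphene pi bands:
# E(k) = +/- t * |f(k)|, with f(k) = 1 + exp(i k.a1) + exp(i k.a2).
t, a = 2.8, 1.42e-10  # hopping energy (eV) and C-C distance (m), representative values

a1 = np.array([1.5 * a, np.sqrt(3) / 2 * a])   # honeycomb lattice vectors
a2 = np.array([1.5 * a, -np.sqrt(3) / 2 * a])

def bands(k):
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * abs(f), -t * abs(f)  # conduction and valence bands

gamma = np.array([0.0, 0.0])  # zone centre
k_dirac = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])  # K point

print(bands(gamma))    # bands widely split at the zone centre (about +/- 3t)
print(bands(k_dirac))  # bands touch at the Dirac point: zero band gap
```

Expanding E(k) around K gives the linear Dirac cone E ≈ ħ v_F |k - K|, which is why the carriers behave as massless Dirac fermions.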


Graphene is nearly transparent, and it exhibits the bipolar transistor effect. A bipolar transistor is a semiconductor device used for amplification, oscillation and switching; it is a type of transistor that uses both electrons and holes as charge carriers. Graphene likewise shows ambipolar behavior [12] and a high intrinsic carrier concentration, as shown in Fig. 5. Because graphene uses both charge carriers, electrons and holes, for current conduction, it is an excellent conductor, better than copper.
Graphene's powerful conducting ability also makes it an ideal candidate material for the next generation of semiconductor devices.
The 2-D (two-dimensional) nature of graphene is confirmed by the quantum Hall effect, and graphene also shows very high carrier mobility (for both electrons and holes): in excess of 200,000 cm²/V·s at T = 5 K and in excess of 100,000 cm²/V·s at T = 240 K in suspended graphene, the highest ever reported for any semiconductor [3].

Graphene's unique quantum properties make it an excellent choice for VLSI technology.

III. APPLICATION OF GRAPHENE IN FET

Graphene is considered a wonder element for electronic science, and hence it has many applications: it can be used in thin, flexible and durable display screens, in solar cells, in electric circuits such as switches and oscillators, in GFETs, as well as in medical, chemical and industrial applications [5]. IBM announced in December 2008 that it had fabricated and characterized graphene transistors operating at GHz frequencies. In 2009, both n- and p-type transistors had been created [5].
Device structure for simulation
The simplest graphene-channel FET has a source and drain formed by direct metal contact to the GNRs or graphene thin film, as shown in Fig. 6.

Fig 6 : Device Structure for Graphene Channel FET with Direct Metal Contacts.

Fig 5 : Ambipolar conductivity in Graphene

Among several emerging nanotechnologies, graphene has become a promising candidate for building various nano-electronic devices with high efficiency and good performance.

Here, both charge carriers (electrons and holes) contribute to the drain-source current IDS, showing the ambipolar characteristics. Because the intrinsic carrier concentration is very large, the minimum current of such a transistor is also very large.
The IDS-VG curve of a GFET (graphene-channel FET) with direct metal contacts to the graphene film is shown in Fig. 7. The ambipolar behavior of the FET device can be clearly observed. The


minimum drain current is large due to the high intrinsic carrier concentration in graphene [11].

Fig 7 : IDS-VG for graphene channel FET with direct metal contacts.

The ambipolar behavior can be greatly suppressed by using a Schottky tunneling source/drain in the GFET.

IV. CONCLUSION

This paper has demonstrated that graphene plays an important role in new, innovative high-frequency nanoelectronic devices. The unique and remarkable properties of graphene have motivated intense work among physicists and engineers, who see in this material new opportunities to replace silicon-technology-based nanoelectronic circuits such as low-power switches, solar cells, transistors and oscillators. In this paper, graphene and its application as the graphene channel with highly doped silicon source/drain has been reviewed. The transistors can be made p-type or n-type by applying a negative or positive gate voltage, respectively. This feature allows graphene-based transistors to be easily integrated into nanoscale circuits. Overall, this paper illustrates that graphene-based electronics is an emerging circuit paradigm for nanoscale electronics.

REFERENCES
[1] G. E. Moore, "Cramming more components onto integrated circuits," Electronics, vol. 38, no. 8, pp. 114-117, Apr. 1965.
[2] Y.-M. Lin, J. Appenzeller, C. Zhihong, Z.-G. Chen, H.-M. Cheng, and P. Avouris, "High-performance dual-gate carbon nanotube FETs with 40-nm gate length," IEEE Electron Device Lett., vol. 26, no. 11, pp. 823-825, Nov. 2005.
[3] T. Palacios, A. Hsu, and H. Wang, "Applications of graphene devices in RF communications," IEEE Communications Magazine, 2010.
[4] Z. F. Wang, H. Zheng, Q. W. Shi, and J. Chen, "Emerging nanocircuits paradigm: graphene-based electronics for nanoscale computing," IEEE International Symposium on Nanoscale Architectures (NANOARCH 2007), 2007.
[5] www.wikipedia.org
[6] D. Derbyshire, "The wonder stuff that could change the world: graphene is so strong a sheet of it as thin as clingfilm could support an elephant," Daily Mail, Science & Tech (2010). http://www.dailymail.co.uk/sciencetech/article2045825/Graphene-strong-sheet-clingfilm-supportelephant.html (October 2011).
[7] M. J. Allen, V. C. Tung, and R. B. Kaner, "Honeycomb carbon: a review of graphene," Chem. Rev., pp. 132-145, 2010.
[8] S. Gladstone, "Graphene and its applications: the miracle material of the 21st century," Dartmouth Undergraduate Journal of Science.
[9] A. Hudson, "Is graphene a miracle material?" BBC News (2011). Available at http://news.bbc.co.uk/2/hi/programmes/click_online/9491789.stm (May 2011).
[10] V. Sharma and A. Kumar, "Role of nanotechnology in radio frequency communications," Weekly Science Research Journal, vol. 3, Sept. 2015.
[11] G. Liang, N. Neophytou, D. E. Nikonov, and M. S. Lundstrom, "Performance projections for ballistic graphene nanoribbon field-effect transistors," IEEE Transactions on Electron Devices, vol. 54, no. 4, Apr. 2007.
[12] M. C. Lemme, T. J. Echtermeyer, M. Baus, and H. Kurz, "A graphene field-effect device," IEEE Electron Device Letters, vol. 28, no. 4, Apr. 2007.
[13] K. S. Novoselov, A. K. Geim, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, "Electric field effect in atomically thin carbon films," Science, vol. 306, pp. 666-669, 2004.
[14] M. Dragoman, D. Dragoman, and A. A. Muller, "High frequency devices based on graphene," IEEE, 2007.

The Memristor: Revolution in Electronics

Isha1, Vijay Nandal2, Manisha3
1ECE Department, MRIEM, Rohtak
2,3A.P., MRIEM, Rohtak
E-mail: vijay.nandal@yahoo.co.in

Abstract: This paper presents the memristor as the fourth fundamental element of electronics, along with its capabilities. By definition, a memristor relates charge and flux. The existence of the memristor was predicted in 1971, and the device was physically realized in 2008. According to Moore's law, the number of transistors that can fit on a chip doubles about every two years; now it is time to stop shrinking. The memristor is an ideal device with non-volatile memory that can replace RAM, flash and disk, while also performing internal computation using logic; it can hold multiple petabits of persistent storage and can be configured to be either memory or CPU. It is also being explored for the emulation of brains. It complements Mr. Grey's model.
Keywords: Memristor, pinched hysteresis loop, implementation and functions of memristor.
I. INTRODUCTION

Memristor theory was formulated and named by Leon Chua in a 1971 paper [1]. Chua derived, by mathematical means, properties showing that the memristor enables unique applications that cannot be obtained with simple RLC circuits. A device relating charge and flux (defined as the time integrals of current and voltage), the memristor has non-volatile memory, i.e. it retains its resistance value even when power is turned off [8]. In fact, its resistance depends on the charge that has flowed through the circuit: when current flows in one direction the resistance increases, whereas when the current flows in the opposite direction the resistance decreases; however, the resistance cannot go below zero. When the current is stopped, the resistance remains at the value it had reached. Since HP Labs' discovery of a switching memristor [2] made of a thin film of titanium dioxide, it has been presented as an approximately ideal device [3], [4], [5]. Chua also theorized that memristors may be beneficial in constructing artificial neural networks [3], [4].
The memristor is a successor to the transistor generation. The emphasis in electronics design will have to shift to devices that are not just increasingly infinitesimal but increasingly capable [6]. A transistor toggles between OFF and ON states, whereas a memristor, like analog devices, can occupy a range of in-between states.

II. REVIEW OF MEMRISTOR

Passive circuit theory can be thought of as the relationships between the following electromagnetic quantities [9]:
1) Voltage v, defined as the change in magnetic flux φ with respect to time t;
2) Current i, defined as the change in electric charge q with respect to time t;
3) Resistance R, defined as a linear relationship between voltage and current (dv = R di);
4) Capacitance C, defined as a linear relationship between electric charge and voltage (dq = C dv);
5) Inductance L, defined as a linear relationship between magnetic flux and current (dφ = L di).

Fig 1: Memristor, the missing relationship

However, in 1971 Leon Chua [1] speculated mathematically that a fourth fundamental passive circuit element could exist, called the memristor, relating the charge q to the flux linkage φ (see Fig. 1, where the electrical symbol of the memristor is also indicated):

dφ = M dq
In particular, the memristor was originally defined in terms of a non-linear functional relationship between the so-called flux linkage φ(t)

and the amount of electric charge that has flowed through the device, q(t):

f(φ(t), q(t)) = 0

where φ(t) and q(t) are the time-domain integrals of the memristor voltage v and current i, respectively.
The magnetic flux linkage φ is conceptualized from the circuit characteristics of an inductor; it does not represent a magnetic field here, and its physical meaning is discussed below. The symbol φ may simply be regarded as the integral of voltage over time. A memristor is characterized by its memristance function, which describes the charge-dependent rate of change of flux with charge:

M(q) = dφ/dq

Substituting the flux as the time integral of the voltage, and the charge as the time integral of the current, the more convenient form is

M(q(t)) = (dφ/dt) / (dq/dt) = V(t) / I(t)
Fig 3: pinched hysteresis loop

To relate the memristor to the resistor, capacitor and inductor, it is convenient to visualize them using differential equations:

Device          Characteristic property (units)    Differential equation
Resistor (R)    Resistance (V/A, or ohm)           R = dV/dI
Capacitor (C)   Capacitance (C/V, or farad)        C = dq/dV
Inductor (L)    Inductance (Wb/A, or henry)        L = dΦm/dI
Memristor (M)   Memristance (Wb/C, or ohm)         M = dΦm/dq

Fig 2: Differential equations [8]

Chua's general memristance definition has two parts. The first equation defines how the memristor's voltage depends on the current and on a state variable w, that is, a quantity that measures some physical property of the device [1]:

V = R(w)I

The second equation expresses how the changing state variable (the TiO2 layer's thickness) depends on the amount of charge flowing through the device [1]:

dw/dt = f(i)

From these equations we obtain the pinched hysteresis loop shown in Fig. 3.

The graph describes the current-voltage (I-V) characteristics that Chua had plotted for his memristor. Chua called them pinched hysteresis loops; we called our I-V characteristics "bow ties". A pinched hysteresis loop looks like a diagonal infinity symbol centered at the origin when current is plotted against voltage: the voltage is first increased from zero to a positive maximum value, then decreased to a negative minimum value, and finally returned to zero. The bow ties on our graphs were nearly identical [see graphic, "Bow Ties"] [10].
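The pinched hysteresis loop can be reproduced with a minimal numerical sketch. The Python below uses the linear ionic-drift model (discussed later in this paper for the HP device); all parameter values are illustrative assumptions, not figures from the paper:

```python
import math

# Minimal simulation of a sinusoidally driven memristor using the linear
# ionic-drift model (illustrative, assumed parameters):
#   M(x) = R_ON*x + R_OFF*(1 - x), with normalized state x = w/D in [0, 1]
#   dx/dt = mu_v * (R_ON / D**2) * i(t)
R_ON, R_OFF = 100.0, 16e3   # on/off resistances, ohms (assumed)
D = 10e-9                   # film thickness, m (assumed)
MU_V = 1e-14                # average ion mobility, m^2/(V*s) (assumed)
V0, FREQ = 1.0, 1.0         # drive amplitude (V) and frequency (Hz) (assumed)

def simulate(steps=100_000, x0=0.1):
    """Euler-integrate one drive period; return (v, i, x) samples."""
    dt = 1.0 / (FREQ * steps)
    x, out = x0, []
    for n in range(steps):
        v = V0 * math.sin(2 * math.pi * FREQ * n * dt)
        m = R_ON * x + R_OFF * (1.0 - x)                   # instantaneous memristance
        i = v / m
        x = min(1.0, max(0.0, x + MU_V * (R_ON / D**2) * i * dt))  # clipped drift
        out.append((v, i, x))
    return out

samples = simulate()
# Whenever v = 0 the current is also 0 (the loop is "pinched" at the origin),
# yet the state x, and hence the resistance, evolves over the cycle.
print("x after 1/4 cycle:", round(samples[len(samples) // 4][2], 3))
print("x after 3/4 cycle:", round(samples[3 * len(samples) // 4][2], 3))
```

Plotting i against v for these samples traces the "bow tie" shape: the curve always passes through the origin, but with different slopes (resistances) on the rising and falling sweeps.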
III. IMPLEMENTATION OF MEMRISTOR

The memristor is fabricated in crossbar-array form. The crossbar architecture is a fully connected mesh of perpendicular wires, with any two crossing wires connected by a switch. To close a switch, a positive voltage is applied across the two wires to be connected; to open it, the voltage is reversed. Conceptually, a memristor is a tiny sandwich. The HP device is composed of a thin (50 nm) titanium dioxide film between two electrodes, one titanium, the other platinum. Initially there are two layers of titanium dioxide film, one of which has a


slight depletion of oxygen atoms. The oxygen vacancies carry charge, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift, changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole depends on how much charge has been passed through it in a particular direction, and this is reversible by changing the direction of the current. Since the HP device displays fast ion conduction at the nanoscale, it is considered a nanoionic device.
Memristance is displayed only when both the doped layer and the depleted layer contribute to the resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis [8].

Such switching devices can be characterized by the time and energy that must be spent to achieve a desired change in resistance, assuming that the applied voltage remains constant. Solving for the energy dissipated during a single switching event reveals that, for a memristor to switch from Ron to Roff in time Ton to Toff, the charge must change by ΔQ = Qon - Qoff:

E_switch = V² ∫ dt / M(q(t)) = V² ∫ dq / (I(q)·M(q)) = V ∫ dq = V·ΔQ

Substituting V = I(q)M(q), and then ∫dq/V = ΔQ/V for constant V, produces the final expression. This power characteristic differs fundamentally from that of a metal-oxide-semiconductor transistor, which is capacitor-based: unlike the transistor, the final state of the memristor in terms of charge does not depend on the bias voltage.
The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis; this is also called the "hard-switching regime". Another kind of switch would have a cyclic M(q), so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical.

Fig 4: crossbar array[8]

The microscopic nature of resistance switching and charge transport in such devices is still under debate, but one proposal is that the hysteresis requires some sort of atomic rearrangement that modulates the electronic current [11].

Fig 5: Working of memristor

IV. FUNCTIONS OF MEMRISTOR

Memristors have many functions; some of them are discussed in this paper.
Memristor as a switch:
For some memristors, an applied current or voltage causes a substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance.
Fig 6: Memristor as a switch


On the basis of this proposition, we consider a thin semiconductor film of thickness D sandwiched between two metal contacts, as shown in Fig. 5. The total resistance of the device is determined by two variable resistors connected in series, where the resistances are given for the full length D of the device. Specifically, the semiconductor film has one region with a high concentration of dopants (in this example assumed to be positive ions) and low resistance RON, while the remainder has a low (essentially zero) dopant concentration and a much higher resistance ROFF.
The application of an external bias v(t) across the device will move the boundary between the two regions by causing the charged dopants to drift. For the simplest case of ohmic electronic conduction and linear ionic drift in a uniform field with average ion mobility μv, we obtain

v(t) = ( RON·w(t)/D + ROFF·(1 - w(t)/D) )·i(t)    (1)

with the state equation dw(t)/dt = μv·(RON/D)·i(t), which yields the following formula for w(t):

w(t) = μv·(RON/D)·q(t)    (2)

By inserting equation (2) into equation (1) we obtain the memristance of this system, which for RON << ROFF simplifies to:

M(q) = ROFF·(1 - μv·(RON/D²)·q(t))

The q-dependent term in parentheses on the right-hand side of this equation is the crucial contribution to the memristance, and it becomes larger in absolute value for higher dopant mobilities μv and smaller semiconductor film thicknesses D. For any material, this term is 1,000,000 times larger in absolute value at the nanometre scale than it is at the micrometre scale, because of the factor of 1/D², and the memristance is correspondingly more significant. Thus, memristance becomes more important for understanding the electronic characteristics of any device as the critical dimensions shrink to the nanometre scale [11].
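The 1/D² scaling claim is easy to check numerically. The values below are illustrative assumptions; only the ratio between the two length scales matters:

```python
# The charge-dependent memristance term scales as mu_v * R_ON * q / D**2, i.e.
# as 1/D**2, so shrinking the film thickness D from 1 um to 1 nm multiplies
# the term by (1e-6 / 1e-9)**2 = 10**6.
MU_V = 1e-14   # average ion mobility, m^2/(V*s) (illustrative)
R_ON = 100.0   # on-state resistance, ohms (illustrative)
Q = 1e-4       # charge passed through the device, C (illustrative)

def q_term(d):
    """Magnitude of the charge-dependent term mu_v * R_ON * q / D**2."""
    return MU_V * R_ON * Q / d**2

ratio = q_term(1e-9) / q_term(1e-6)   # nanometre-scale vs micrometre-scale film
print(f"nm-scale / um-scale = {ratio:.0e}")
```

This is why memristance went unnoticed for decades: in micrometre-scale devices the term is a million times smaller and is swamped by ordinary resistance.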
It replaces RAM, flash and disk:
Memristors are nanodevices that remember information permanently, switch in nanoseconds, are super-dense, and are power-efficient. That makes memristors potential replacements for DRAM, flash and disk.
Williams projects that we will reach the end of our ability to scale RAM, flash and disk in the next few years. Fortunately, the memristor is here to save the day. Memristors have the power and speed of a DRAM cell and the (potential) lifetime of a hard disk. Currently the memristor has a lifetime greater than flash, though endurance is still being improved. In the coming years the memristor could completely replace DRAM and disk, and eventually CDs and DVDs: it is a universal non-volatile memory.
This allows multiple petabits of memory (1 petabit = 128 TB) to be addressed in one square centimetre of space. To get a feel for how much memory this is, consider that 1 terabyte is equal to 128 DVDs or 250,000 4-MB images.
Memristor computes, learns and flattens the CPU-memory hierarchy:
Two-terminal nanodevices are advantageous compared to three-terminal ones because they can be fabricated without nanoscale alignment. However, they are passive and therefore need to be accompanied by devices with drive capability. A memristor is a non-volatile two-terminal device whose resistance can be programmed; it was described theoretically and has recently been demonstrated experimentally by many groups. So memristors do not just remember; they perform logic. Memristors naturally implement material implication logic, which can be interconnected to create any logical operation, much the same way NAND gates were used to build early computers because they were easier to build.
Memristors are naturally suited to performing implication logic (a combination of the implication and FALSE operations) instead of Boolean logic. It should also be noted that a memristor can be used as both a logic gate and a latch (stateful logic). Being functionally complete, implication logic can be used to compute any Boolean function. However, when performing implication logic with stateful devices, storing intermediate results requires additional memristors to keep data yet to be used from being written over [11], [1]. A memristor can be a memory, a switching network, or logic.
Williams claims that dynamically changing memristors between memory and logic operations constitutes a new computing paradigm, enabling calculations to be performed in the same chips where data is stored, rather than in a specialized central processing unit. This is quite a different picture from the Tower of Babel memory hierarchy that exists today.
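The functional completeness of implication logic mentioned above can be checked in a few lines. The sketch below models only the truth functions (IMP and FALSE), not the memristor circuit itself; the NAND-from-IMPLY sequence mirrors the standard stateful-logic construction:

```python
# Material implication: p IMP q == (NOT p) OR q. In memristor stateful logic
# the result of IMP is written back into the second memristor; here we model
# just the truth functions and build NAND out of IMP plus FALSE (a "clear").

def IMP(p: bool, q: bool) -> bool:
    """Material implication truth function."""
    return (not p) or q

def FALSE() -> bool:
    """The unconditional clear: always writes logic 0."""
    return False

def NAND(a: bool, b: bool) -> bool:
    s = FALSE()     # step 1: clear the working memristor, s = 0
    s = IMP(a, s)   # step 2: s = (NOT a) OR 0 = NOT a
    s = IMP(b, s)   # step 3: s = (NOT b) OR (NOT a) = NAND(a, b)
    return s

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", NAND(a, b))
```

Since NAND alone is functionally complete, this three-step sequence shows that any Boolean function can in principle be computed with only IMP and FALSE operations, which is the basis of the stateful-logic claim.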


Memristors are also used in the emulation of brains, because the properties of the memristor apparently mimic neurons and can learn without supervision. Data storage in computers is physically separated from where the data is processed, whereas every synapse in the brain is both an element of computation and an element of memory; a memristor can likewise perform computation along with memory. DARPA's SyNAPSE program is a practical application of memristors in the biological field.
V. BENEFITS OF MEMRISTOR
1) Provides greater resiliency and reliability when power is interrupted in data centers.
2) Has great data density.
3) Combines the jobs of working memory and hard drives into one tiny device.
4) Faster and less expensive than MRAM.
5) Uses less energy and produces less heat.
6) Would allow a quicker boot-up, since information is not lost when the device is turned off.
7) Operating outside of 0s and 1s allows it to imitate brain functions.
8) Eliminates the need to write computer programs that replicate small parts of the brain.
9) Makes possible a computer that never has to boot up.
10) Does not lose information when turned off.
11) Its density allows more information to be stored.
12) Has the capacity to remember the charge that flows through it at a given point in time.
CONCLUSION

In this paper we have shown that the memristor is the fourth fundamental circuit element, with its striking pinched hysteresis loop and a charge-flux memristive relationship that gives it its own memory, enabling it to replace RAM, flash and disk. For some memristors, an applied current or voltage causes a great change in resistance; such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance [6]. The memristor is a device that can completely revolutionize electronics with its high computational and storage capabilities.

VI. REFERENCES
[1] L. O. Chua, "Memristor - the missing circuit element," IEEE Trans. Circuit Theory, vol. CT-18, pp. 507-519, 1971.
[2] L. O. Chua and S. M. Kang, "Memristive devices and systems," Proc. IEEE, vol. 64, pp. 209-223, 1976.
[3] L. O. Chua, "Synthesis of new nonlinear network elements," Proc. IEEE, vol. 56, pp. 1325-1340, Aug. 1968.
[4] L. O. Chua, "Memristor - the missing circuit element," Sch. Elec. Eng., Purdue Univ., Lafayette, Ind., Tech. Rep. TR-EE 70-39, Sept. 15, 1970.
[5] R. J. Duffin, "Nonlinear networks I," Bull. Amer. Math. Soc., vol. 52, pp. 836-838, 1946.
[6] M. Stork and J. Hrusak, "Memristor based feedback systems."
[7] K. Eshraghian, K. R. Cho, O. Kavehei, S. K. Kang, D. Abbott, and S. M. Steve Kang, "Memristor MOS content addressable memory (MCAM): hybrid architecture for future high performance search engines," vol. 19, no. 8, pp. 1407-1417, 2011.
[8] Wikipedia.org
[9] R. Marani, G. Gelao, and G. Perri, "A review on memristor applications."
[10] R. Williams, "How we found the missing memristor," IEEE Spectrum, vol. 45, no. 12, pp. 28-35, 2008.
[11] D. Strukov et al., "The missing memristor found," Nature, vol. 453, pp. 80-83, May 2008.
[12] P. Kuekes, "Material implication: digital logic with memristors," Memristor and Memristive Systems Symposium, UC Berkeley, 2008.
[13] E. Lehtonen, "Stateful implication logic with memristors," 978-1-4244-4958-3/09, IEEE, 2009.


Rectenna Design and Modelling for Wireless Power Generation

Karuna1, Manisha2
1ECE Department, MRIEM, Rohtak
2A.P., MRIEM, Rohtak
E-mail: manishabangar@gmail.com

Abstract: A new rectenna topology consisting only of an antenna, a matching circuit, a Schottky diode and a DC filter has been modeled using a global simulation. A circular aperture-coupled patch antenna is proposed to suppress the first filter in the rectenna device and, in addition, the losses associated with this filter. Energy harvesting techniques form a suitable alternative to existing energy resources; these include rectennas, solar cells, harvesting of human energy, and wind power. The harmonic rejection of the antenna is primarily used to reduce the rectenna size. In addition, the proposed architecture is uni-planar, robust and compact, which leads to an easy design and realization at the required frequency ranges at very low cost. A 2.45 GHz rectenna system is designed and measured to show its microwave performance.
Keywords: Rectenna, Schottky diode, low pass filter (LPF), matching circuit.

I. INTRODUCTION

There are many energy harvesting techniques that form a good alternative to existing energy resources. These include energy harvesting from geothermal, hydro, tidal, wind and solar power.
Solar rectennas can be used to harvest solar power, but those rectennas operate at GHz/THz frequencies and have many disadvantages, such as power fading, a complicated design procedure and demanding fabrication technology. It has therefore been suggested to use rectennas operating at a lower range of frequencies, using fractal antennas, not to harvest solar power but to harvest electromagnetic waves instead, because such rectennas are simpler and cheaper to construct. To maintain the rectenna's advantages, fractal antennas are used. DC up-converter circuits can be used to raise the voltage harvested by the rectenna.
At low RF frequencies (kilohertz to low megahertz), both p-n diodes and transistors are used as rectifiers. At microwave frequencies (1 GHz and higher), Schottky diodes (GaAs or Si) with shorter transit times are required. In the present case, silicon was chosen based on availability, low cost and simulated performance. As in low-frequency, high-power applications, the diode is driven as a half-wave rectifier.
For low-power applications, as is the case for the small amounts of collected energy, there is generally not enough power to drive the diode in a high-efficiency mode. The diode is not externally biased in this application, so it is important to use a diode with a low trigger voltage. Furthermore, rectification over multiple octaves requires a different approach from standard matching techniques: in a rectenna, the antenna itself can be used as the matching mechanism instead of a transmission-line matching circuit, so the antenna design depends heavily on the diode characteristics. The following sections present various techniques for analyzing diode operation at microwave frequencies; the results are then used to design the antenna and integrated rectenna for relevant ambient power levels.
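The half-wave rectification described above can be sketched numerically. The switch-plus-threshold diode model and the 0.2 V trigger voltage below are illustrative assumptions, not the paper's diode model, but they show why a low trigger voltage matters at small signal amplitudes:

```python
import math

# Half-wave rectification with an idealized diode: the diode conducts only
# when the instantaneous voltage exceeds a fixed trigger voltage VT, and the
# load then sees (v - VT); otherwise it sees 0. Values are assumptions.
VT = 0.2      # trigger voltage, V (low, as for a Schottky diode; assumed)
V0 = 1.0      # RF amplitude at the diode, V (assumed)
N = 100_000   # samples over one full RF cycle

rectified = [max(0.0, V0 * math.sin(2 * math.pi * n / N) - VT) for n in range(N)]
v_dc = sum(rectified) / N   # the DC component is the time average

print(f"dc output with VT={VT} V: ~{v_dc:.3f} V")
# For reference, an ideal half-wave rectifier (VT = 0) averages V0/pi ~ 0.318*V0;
# the trigger voltage eats a sizeable fraction of a small input signal.
```

Doubling VT (or halving V0) cuts the DC output disproportionately, which is the quantitative reason the text insists on low-trigger-voltage diodes for ambient-power harvesting.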

II. RECTENNA DESIGN

A rectifying antenna (rectenna) receives a microwave signal at the antenna and converts it to a DC voltage. It should do this as efficiently as possible and provide a clean, constant, low-ripple voltage. A rectenna is composed of four components:
1. An antenna,
2. A pre-rectification filter,
3. A rectification diode,
4. A post-rectification filter.


The rectenna can eliminate the need for a low-pass filter (LPF) placed between the antenna and the diode, and can achieve a conversion efficiency of 77.8% at 2.4 GHz. The low-pass filter is the pre-rectification filter used in this rectenna circuit, as illustrated in the following diagram.

The use of full-wave rectification for RF-to-DC conversion showed that a rectenna efficiency of 70.69% could be achieved with an input power of 45 mW. The conversion efficiency depends on the load resistance, due to the internal resistance of the rectenna system.

Fig. 1 Block diagram of the conventional and the proposed rectenna.

A. Antenna
A microstrip patch antenna is suitable for rectenna design: it is compact, and interfacing it with planar circuits is simple. Different antennas for rectennas have been researched extensively. A patch antenna with an inset feed is selected for this rectenna design. The antenna is initially analyzed using the cavity model, then designed and optimized with EM simulation tools; parametric analysis is done with Ansoft HFSS, and the results are verified with Agilent ADS Momentum.

III. DESIGNING OF RECTENNA USING ADS

A full-wave rectenna circuit will provide a more stable DC output voltage than a half-wave rectenna of the same chip area. A rectenna with an efficiency of 53% at an incident radiation power density of 30 W/cm² and a frequency of 35 GHz has been shown to be feasible. The rectenna comprised a power-receiving linear tapered slot antenna (LTSA), a slot line (SL) to finite-width ground coplanar waveguide (FGCPW) transition, a band-pass filter (BPF), a full-wave rectifier for RF-to-DC conversion, a DC bypass capacitor, and a resistive load. The fabricated rectenna with off-chip lumped elements is depicted in the following figure.

Fig 3: Block diagram of full-wave rectifier


Fig. 2 Patch Antenna
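As a rough starting point for such a patch before EM optimization in HFSS or ADS, the standard transmission-line-model formulas can be scripted. The substrate parameters below (FR-4, 1.6 mm height) are assumptions for illustration, not values taken from the paper:

```python
import math

# Transmission-line-model sizing of a rectangular microstrip patch for the
# paper's 2.45 GHz band. Substrate values are assumed (FR-4), not the paper's.
C = 299_792_458.0   # speed of light, m/s
F = 2.45e9          # design frequency, Hz
ER = 4.4            # substrate relative permittivity (assumed FR-4)
H = 1.6e-3          # substrate height, m (assumed)

W = C / (2 * F) * math.sqrt(2 / (ER + 1))                     # patch width
e_eff = (ER + 1) / 2 + (ER - 1) / 2 / math.sqrt(1 + 12 * H / W)  # effective permittivity
# Length extension due to fringing fields, then the physical patch length:
dL = 0.412 * H * ((e_eff + 0.3) * (W / H + 0.264)) / ((e_eff - 0.258) * (W / H + 0.8))
L = C / (2 * F * math.sqrt(e_eff)) - 2 * dL

print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm, eps_eff = {e_eff:.2f}")
```

These closed-form dimensions (roughly 37 mm by 29 mm on FR-4) are only the seed for the parametric HFSS/ADS Momentum optimization the text describes; the inset-feed position is then tuned for the 50 Ω match.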

A rectenna system combining a two-stage, zero-bias Schottky diode with a miniature antenna can achieve 70% efficiency at 2.4 GHz. The diodes are in parallel with the signal source but appear in series to the DC circuit, in order to produce double the voltage.

As in the previous circuit, elimination of the low-pass filter is considered in order to minimize size and weight. For our design, a microstrip-fed dipole is used in place of the LTSA. It is a lighter-weight structure that can be tuned to 10 GHz, whereas the LTSA is a broadband structure. The

three different circuit designs for a full-wave rectenna are shown below:
Full-wave rectenna design using a pre-LPF
Full-wave rectenna design using a post-LPF
Full-wave rectenna design without an LPF

Fig. 4 Voltage doubler

Fig. 5 Output power (watts) versus time for a full-wave rectenna design with a pre-LPF

In the ADS model, the antenna is represented by a power source with 50 Ω impedance. The diodes are matched using shorted stubs placed at the required distance from the diodes. The stub locations and lengths were computed using the standard stub-matching procedure, and the ADS optimization program was then used to adjust the line lengths so that the output power was maximized.

The simulated conversion efficiencies of the three full-wave rectenna designs are given in the figure shown below. The efficiencies of the full-wave rectenna with post-LPF and with no LPF are the same, and higher than that with pre-LPF. In comparison with the previous result, the design with no LPF is able to convert wireless power to DC power with an efficiency of 68.1% at an input power of 170 mW, whereas the design with pre-LPF only achieves an efficiency of 56.6% at the same input power.

Fig. 6

IV. EVALUATION OF HARMONICS

The reflected harmonic energy from the input or


output side of the diode can alter the voltage
across the diode. The diode also begins to bias
itself as it produces more dc current, thus
moving the dc operating point of the I-V curve

in a nonlinear fashion. The diode's harmonic frequency components can possibly be radiated by the antenna, causing interference with other systems.
Based on the properties of the diode at microwave frequencies, we simulate and analyze the radiated harmonics and dc power of different rectenna designs for an input microwave power at 10 GHz. This can be accomplished using the harmonic balance (HB) nonlinear-circuit analysis module of the ADS software.
The full-wave rectenna ideally converts all power, due to its architecture. We also observe that the full-wave rectenna without LPF converts more dc power than the other designs, and the full-wave rectifier design eliminates the fundamental frequency and third harmonic frequency at the output. In order to have a lightweight rectenna, the full-wave rectenna without LPF is selected.

V. PERFORMANCE MEASURE

The simulated conversion efficiency of the full-wave rectenna as a function of input power is shown in the figure. The full-wave design is able to convert microwave power to dc power with an efficiency of 65.9% at an input power of 200 mW. The simulated output power of the full-wave rectenna design is also shown; the full-wave design is able to produce 132 mW at an input power of 200 mW.
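These figures are consistent with the usual definition of conversion efficiency, η = P_dc / P_in: 132 mW out of 200 mW is 66%, matching the reported 65.9% up to rounding. A quick check:

```python
def conversion_efficiency(p_dc, p_in):
    # RF-to-dc conversion efficiency as a fraction
    return p_dc / p_in

eta = conversion_efficiency(132e-3, 200e-3)  # powers in watts
```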

REFERENCES
[1] W. C. Brown, "The History of Power Transmission by Radio Waves," IEEE Transactions on Microwave Theory and Techniques, vol. MTT-32, no. 9, September 1984.
[2] U. Olgun, C.-C. Chen and J. L. Volakis, "Investigation of Rectenna Array Configurations for Enhanced RF Power Harvesting," IEEE Antennas and Wireless Propagation Letters, vol. 10, no. 1, pp. 262-265, April 2011.
[3] M. T. L. Meng, "Efficient Rectenna Design for Wireless Power Transmission for MAV Application," Naval Postgraduate School, December 2005.
[4] J. A. Hagerty, F. B. Helmbrecht, W. H. McCalpin, R. Zane and Z. B. Popovic, "Recycling Ambient Microwave Energy With Broad-Band Rectenna Arrays," IEEE Transactions on Microwave Theory and Techniques, vol. 52, no. 3, pp. 1014-1024, March 2004.
[5] Y.-H. Suh and K. Chang, "A High-Efficiency Dual-Frequency Rectenna for 2.45- and 5.8-GHz Wireless Power Transmission," IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 7, pp. 1784-1789, July 2002.
[6] Strassner, S. Kokel and K. Chang, "5.8 GHz Circularly Polarized Low Incident Power Density Rectenna Design and Array Implementation," IEEE Antennas and Propagation Society International Symposium, vol. 3, pp. 950-953, June 2003.
[7] T. Yamamoto, K. Fujimori, M. Sanagi and S. Nogi, "The Design of mW-Class RF-DC Conversion Circuit using the Full-Wave Rectification," The 37th European Microwave Conference Proceedings, 9-12 October 2007, pp. 905-908.
[8] T.-W. Yoo and K. Chang, "Theoretical and Experimental Development of 10 and 35 GHz Rectennas," IEEE Transactions on Microwave Theory and Techniques, vol. 40, no. 6, pp. 1259-1266, June 1992.
[9] H.-K. Chiou and I.-S. Chen, "High-Efficiency Dual-Band On-Chip Rectenna for 35- and 94-GHz Wireless Power Transmission in 0.13-µm CMOS Technology," IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 12, pp. 3598-3606, December 2010.

Fig. 7 Conversion efficiency of final full-wave rectenna design


Review paper on Watermarking with DWT


and RDWT Using SVD
Kanchan1, Sonal2, Pankaj Bhatia3
1,2 Students of ECE Department, Manav Rachna University, Faridabad
3 Assistant Professor, Department of ECE, Manav Rachna University, Faridabad
E-mail: kanchan193mishra@gmail.com, sonal123@gmail.com, pankajbhatia@mru.edu.in
Abstract: Advances in digital technology make it possible to create and transmit digital images quickly. The unfavorable side effect is that hackers can easily tamper with a given image and use it anywhere for their own purposes. It is therefore necessary to use techniques with whose help data, images and important documents can be transmitted digitally without being altered. Watermarking is one of the popular and well-known technologies for data hiding. For hiding data, the watermarked image should be robust so that no one can extract the watermark. Various techniques are available for this, such as DWT and RDWT, with RDWT performing better than DWT. In this paper we describe the results of both DWT and RDWT using SVD. The original image is first transformed using RDWT, and then the grayscale watermark image is embedded in the bidiagonal singular values of the low-frequency sub-band of the host image. Because the extraction phase is performed without the original image, the scheme is blind. Experimental results on benchmark images demonstrate that the proposed scheme is able to withstand a variety of attacks, in addition to its good invisibility.
Keywords: RDWT, SVD, DWT

I. INTRODUCTION

In the technological environment prevailing nowadays, the digital revolution has caused an outburst of awareness and has led to inspiration and support for the digitization of original works. Due to a lack of safety measures, images can be easily copied and circulated without the owner's consent. Digital image watermarking is an alteration of the original image data by incorporating a watermark containing important information. It basically hides the data and provides authenticity to the document to be transmitted. It is always desired that the data to be transmitted be secure and travel safely from one device to another. In watermarking, the image, text or any other data such as a logo or signature is hidden, and with the help of algorithms or techniques such as DCT, DWT or RDWT the original data can be retrieved. The motivation behind work in this area is the desire to achieve information security, information hiding, authentication and fingerprinting. Several approaches have been proposed for digital image watermarking. One such approach is the discrete wavelet transform (DWT). The DWT enjoys great popularity in the field of watermarking, as it decomposes the available images into sub-bands in which watermarks can be embedded selectively [1, 2, 3].

II. DIGITAL WATERMARKING

A digital watermark is a marker covertly embedded in a noise-tolerant signal such as a video, audio or image signal. It is basically used to identify ownership of the copyright of such a signal. "Watermarking" is the process of hiding digital information in a carrier signal. Digital watermarks are generally used to verify the genuineness or reliability of the carrier signal or to show the identity of its owners. It is chiefly used for tracing copyright infringements or finding out whether any tampering of the document has taken place. There are various applications of watermarking, listed as follows [4].
1. Copyright Protection: Copyright information can be inserted as a watermark whenever a new work is produced. This watermark can serve as evidence if any dispute over possession or rights occurs.


2. Broadcast Monitoring: This application can be used for monitoring unofficial or illegal broadcast stations, and for verifying whether content has actually been broadcast or not.
3. Tamper Detection: Fragile watermarks can be used for tamper detection. If tampering damages or corrupts the watermark, the digital content is shown to be inauthentic and hence cannot be relied upon.
4. Authentication and Integrity Verification: Any change in the digital content can be detected by content authentication. By the use of a fragile or semi-fragile watermark, a change in the image can be detected.
5. Fingerprinting: Fingerprints are exclusive to the holder of the digital content and can easily reveal the presence of an illegal copy.
6. Content Description: This watermark can contain detailed information about the host image, such as labeling and captioning.
7. Covert Communication: This includes the exchange of messages secretly embedded within images. In this case, the main requirement is that the hidden data should not raise any suspicion that a secret message is being communicated.

III. TECHNIQUE OF WATERMARKING

The technique of watermarking consists of the following:

IV. DISCRETE WAVELET TRANSFORMATION

DWT stands for discrete wavelet transform. It is a modern technique frequently used in digital image processing, compression, watermarking, etc. The transforms are based on small waves, called wavelets, of varying frequency and limited duration. A wavelet series is a representation of a square-integrable function by a certain orthonormal series generated by a wavelet. The original signal can be decomposed into wavelet transform coefficients which contain the position information, and it can be completely reconstructed by performing the inverse wavelet transformation on these coefficients. Watermarking in the wavelet transform domain is generally a problem of embedding the watermark in the sub-bands of the cover image [4].
1. DWT Analysis

y_low[k] = (x[k] * h[-k]) ↓2
y_high[k] = (x[k] * g[-k]) ↓2

where * indicates convolution and ↓2 indicates downsampling by two; that is, if y[n] = x[n] ↓2, then y[n] = x[2n].

2. DWT Synthesis

x[k] = (y_low[k] ↑2) * h[k] + (y_high[k] ↑2) * g[k]

where ↑2 is the upsampling process; that is, if y[n] = x[n] ↑2, then y[n] = x[n/2] for n even and y[n] = 0 for n odd [5].

Fig. 1 Watermarking

Fig. 2 DWT decomposition (two-level sub-bands: LL2, HL2, LH2, HH2, HL1, LH1, HH1)
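For the Haar wavelet, with analysis filters h = [1, 1]/√2 (low-pass) and g = [1, −1]/√2 (high-pass), the convolve-and-downsample analysis and the upsample-and-filter synthesis reduce to pairwise sums and differences. A minimal NumPy sketch verifying perfect reconstruction:

```python
import numpy as np

def haar_analysis(x):
    # convolution with h[-k], g[-k] followed by downsampling by 2,
    # specialized to the Haar filter pair
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (high-pass) band
    return a, d

def haar_synthesis(a, d):
    # upsample by 2 and filter: reconstructs even/odd samples, interleaved
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.arange(8, dtype=float)
a, d = haar_analysis(x)
```

Each band is half the input length, which is what makes the multi-level sub-band decomposition of Fig. 2 possible.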


V. REDUNDANT DISCRETE WAVELET TRANSFORM

RDWT stands for redundant discrete wavelet transform. DWT is the most frequently used method in watermarking; however, it suffers from a lack of shift invariance. This problem occurs because the down-sampling that follows each level of filtering causes a considerable change in the wavelet coefficients of the host image even for minor shifts in it [5], which leads to inaccurate extraction of the watermark. To overcome this problem the RDWT was established, which eliminates the down-sampling and up-sampling of coefficients during each filter bank iteration [4]. The RDWT and its inverse are given by [10]:

y_low[k] = x[k] * h[-k]    (1)
y_high[k] = x[k] * g[-k]   (2)
x[k] = (1/2)(y_low[k] * h[k] + y_high[k] * g[k])   (3)

Equations (1) and (2) are the RDWT analysis and equation (3) is the synthesis, where h[-k] and g[-k] are the low-pass and high-pass analysis filters respectively, and h[k] and g[k] are the corresponding low-pass and high-pass synthesis filters [5].
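Assuming the Haar filter pair and circular boundary handling (both illustrative choices, not mandated by the paper), the redundant transform and its inverse can be checked in NumPy. Note that the sub-bands keep the full signal length and shift along with the input, which is exactly the shift invariance the decimated DWT lacks:

```python
import numpy as np

def rdwt_haar(x):
    # undecimated Haar analysis: no downsampling, circular extension
    a = (x + np.roll(x, -1)) / np.sqrt(2)  # low-pass band
    d = (x - np.roll(x, -1)) / np.sqrt(2)  # high-pass band
    return a, d

def irdwt_haar(a, d):
    # inverse: average the two redundant reconstructions (the 1/2 factor)
    return 0.5 * ((a + d) + (np.roll(a, 1) - np.roll(d, 1))) / np.sqrt(2)

x = np.arange(8, dtype=float)
a, d = rdwt_haar(x)
a_shifted, _ = rdwt_haar(np.roll(x, 1))  # coefficients of a shifted input
```

Because no samples are discarded, shifting the host image merely shifts the coefficients, so the embedded watermark stays where it was put.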

VI. SVD

Fig. 3 Block diagram of embedding process using RDWT with SVD

Fig. 4 Block diagram of extracting process using RDWT with SVD

Figs. 3 and 4 show the RDWT-SVD technique used for embedding and extraction of the image. This method (RDWT-SVD) works better than DWT-SVD for all attacks except median filter, cut and JPEG; the comparison is taken from [7].

Singular value decomposition (SVD) is a technique used for many kinds of applications. It involves the factorization of a real or complex rectangular matrix and is used in applications such as signal processing, image processing and statistics. It factorizes the matrix into the product of three matrices [6]:

A = U Σ V^T

where A is an M×N matrix, Σ is a diagonal matrix of singular values, and U and V are unitary matrices.
One of the differences between BSVD and SVD is that BSVD gives four keys while SVD gives two. Another difference is that BSVD uses a single routine matrix calculation and its performance is better than that of SVD.
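The factorization can be illustrated with NumPy; a generic random matrix stands in here for the host-image sub-band:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # stand-in M x N matrix

# A = U * Sigma * V^T, with singular values on the diagonal of Sigma
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)
A_rec = U @ Sigma @ Vt  # reconstruct A from the three factors
```

The singular values in s are non-negative and sorted in decreasing order, which is why embedding in them perturbs the image so little.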

VII. COMPARISON OF DWT-SVD AND RDWT-SVD

Table 1. Comparison of DWT-SVD and RDWT-SVD

Attack                          | DWT-SVD (PSNR) | RDWT-SVD (PSNR)
Gaussian noise (var 0.001)      | 29.09          | 29.27
Salt and pepper (density 0.005) | 27.19          | 30.25
Histogram equalization          | 4.73           | 7.66
Speckle noise (var 0.04)        | 13.32          | 18.01
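The PSNR values in Table 1 presumably follow the standard definition, PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit images; a minimal sketch:

```python
import numpy as np

def psnr(original, distorted, max_val=255.0):
    # peak signal-to-noise ratio in dB between two equal-size images
    err = original.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.full((8, 8), 100.0)
value = psnr(img, img + 5)  # a uniform +5 error gives MSE = 25
```

Higher PSNR means the extracted watermark (or attacked image) stays closer to the reference; the histogram-equalization row scores low because that attack remaps pixel values globally.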


VIII. CONCLUSION

In this paper, the difference between DWT and RDWT using SVD is shown. For these techniques the embedding and extraction processes are described, and the results are compared after testing with different types of attacks applied to the image. RDWT shows good results compared with DWT, but for some attacks the results are not up to the mark. The results can be improved further by using more advanced algorithms, and thus a robust algorithm can be designed to guard against various attacks.

REFERENCES
[1] Hanaa A. Abdallah, Mohiy M. Hadhoud, Abdalhameed A. Shaalan and Fathi E. Abd El-Samie, "Blind Wavelet-based Image Watermarking," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 4, no. 1, March 2011.
[2] R. Dugad, K. Ratakonda and N. Ahuja, "A new wavelet-based scheme for watermarking images," Proc. IEEE Intl. Conf. on Image Processing, ICIP'98, Chicago, IL, USA, pp. 419-423, Oct. 1998.
[3] Miyazaki, A. Yamamoto and T. Katsura, "A digital watermarking technique based on the wavelet transform and its robustness on image compression and transformation," IEICE Trans., Special Section on Cryptography and Information Security, E82-A, no. 1, pp. 2-10, Jan. 1999.
[4] Vaishali S. Jabade, Sachin R. Gengaje, "Literature Review of Wavelet Based Digital Image Watermarking Techniques," International Journal of Computer Applications (0975-8887), vol. 31, no. 1, October 2011.
[5] Nasrin M. Makbol, Bee Ee Khoo, "Robust Blind Image Watermarking Scheme Based on Redundant Discrete Wavelet Transform and Singular Value Decomposition," International Journal of Electronics and Communications (AEU), 67 (2013) 102-112.
[6] Loganathan Agliandeeswari and Kumaravel Muralibabu, "A Robust Video Watermarking Algorithm for Content Authentication using Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD)," International Journal of Security and Its Applications, vol. 7, no. 4, July 2013.
[7] Samira Lagzian, Mohsen Soryani, Mahmood Fathy, "Robust Watermarking based on RDWT-SVD: Embedding Data in All Subbands," IEEE, 2011.


A Review on Prolongation of PR Interval


and Denoising ECG Signal Using Wavelets
Kanika Tayal1, P. M. Arivananthi2, Vijay Gill2, K. Deepa2, Nitika2
1 M.Tech. Scholar, ECE Department, Manav Rachna University, Faridabad
2 Assistant Professor, Department of ECE, Manav Rachna University, Faridabad
E-mail: tayal_kanika@yahoo.in, pmarivananthi@mru.edu.in, vijay@mru.edu.in, kdeepa@mru.edu.in, nitika@mru.edu.in
Abstract: An ECG is the graphical recording of the electrical activity produced by the heart. The ECG is an important tool for diagnosing whether the heart is functioning properly or suffering from any abnormality. However, ECG signals recorded from an electrocardiograph are usually corrupted by noise attributed to several factors, such as baseline wandering, muscle artifacts, etc. The objective is to denoise the ECG signal, especially for feature extraction, and to locate the characteristic points of interest that can be used to detect possible cardiovascular abnormalities. To address these problems, we develop a simple, inexpensive and easy-to-implement MATLAB model that generates ECG signals and gives us mathematical control over the ECG signal. The denoising of the ECG signal can be done by various techniques such as FFTs, STFTs, wavelets, etc.
Keywords: ECG, MATLAB, Wavelets

I. INTRODUCTION

An electrocardiogram (ECG) is a diagnostic tool that represents the electrical activity of the heart recorded with the help of skin electrodes. The morphology and heart rate of the recorded ECG signal reflect the cardiac health of the human heart. It is a non-invasive technique, meaning the signal is measured on the surface of the human body, and it is used for identification of heart diseases. Any disorder of heart rate or rhythm, or change in the morphological pattern, is an indication of cardiac arrhythmia, which can be detected by analysis of the recorded ECG waveform. The amplitude and duration of the P-QRS-T wave contain useful information about the nature of the disease afflicting the heart. The electrical wave is due to the depolarization and repolarization of Na+ and K+ ions in the blood.
A typical ECG signal shows the oscillations between cardiac contraction (systole) and relaxation (diastole) states as reflected in the heart rate (HR) [1]; thus the ECG signal determines the number of heart beats per minute. A number of important events characterize cardiac function. Atrial and ventricular depolarization/repolarization take place for each heartbeat. The cardiac cycle is associated with portions of the heart becoming positively charged while the remaining parts become negatively charged, interchangeably. The potential difference generated in this way initiates the flow of current.

II. ECG DESCRIPTION

A typical ECG signal depicts a series of waveforms that occur in a repetitive order. The waveforms begin at the isoelectric line, from which a deflection indicates electrical activity. One normal heartbeat is represented by a set of three recognizable waveforms that start with the P-wave, followed by the QRS complex, and end with the T-wave. The relatively small P-wave is initiated by the depolarization of the atrial muscles and is related to their contraction. The large QRS complex, made up of three waves, is caused by the depolarization of the ventricles and is connected to their contraction.

Fig. 1 ECG waveform


Atrial repolarization happens during the depolarization of the ventricles, but its weak signal is undetected on an ECG. The T-wave is caused by currents flowing during the repolarization of the ventricles. A normal cardiac cycle of an individual at rest, consisting of all waveforms (from the P to the T wave), spans 0.8 sec.
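That 0.8 s resting cycle maps directly to heart rate, since HR in beats per minute is 60 divided by the cycle duration in seconds:

```python
def heart_rate_bpm(cycle_s):
    # heart rate from the duration of one complete cardiac cycle
    return 60.0 / cycle_s

hr = heart_rate_bpm(0.8)  # the 0.8 s cycle above gives 75 bpm
```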

III. ECG INTERPRETATION

The ECG waveform is used for diagnosing various diseases by analyzing variations in the individual waves and segments of the waveform. The ECG waveform consists of small individual waves; deviation in them from standardized values indicates improper functioning of the heart. It is decomposed into the P wave, the QRS complex and the T wave.
Table 1

Electrical activity        | Associated pattern | Duration (sec)
Atrial depolarization      | P wave             | < 0.12
Delay at AV node           | PR segment         | 0.12-0.20
Ventricular depolarization | QRS complex        | 0.08-0.10
Ventricular repolarization | T wave             | 0.2
No electrical activity     | Isoelectric line   | -

IV. ECG SIGNAL PROCESSING

ECG signal processing is divided into two stages: preprocessing and feature extraction [5]. Preprocessing reduces the noise in the raw ECG signal, and feature extraction extracts the diagnostic information from the ECG signal. Preprocessing of ECG signals involves various stages. The Pan and Tompkins detection algorithm identifies the QRS complexes based upon digital analysis of the slope, amplitude and width of the ECG data [2, 3, 4].

Fig. 2 ECG signal preprocessing

Feature extraction involves extracting the diagnostic information from the ECG signal for determining the functioning of the heart. The preprocessed signal is fed to the feature extraction part, which analyzes the resulting output signal and then extracts the information from the received signal. Feature extraction falls into two categories: temporal and morphological [7].

V. PROPOSED WORK: ETIOLOGIES OF PROLONGATION OF PR INTERVAL

The PR interval is measured from the start of the P wave to the onset of the QRS complex in lead II [6]. A normal ECG signal defines the standard values of the amplitudes and durations of the different waves that combine to form the ECG signal. The PR interval is the time taken by the impulse to move from the atria to the ventricles; it usually has a time span of 120 to 200 ms [10]. A PR interval greater than 200 ms indicates prolongation.
Etiologies of a prolonged PR interval are:
1. Hypokalemia
2. Acute rheumatic fever
3. Carditis associated with Lyme disease
4. Congenital heart disease
5. Coronary heart disease
Symptoms of a prolonged PR interval are chest pain, fainting, loss of memory, shortness of breath and body pain. The high occurrence of coronary heart disease indicates the importance of prolongation of the PR interval, and the occurrence of a prolonged PR interval is doubly significant in the older age group.
Risk factors associated with a prolonged PR interval are:
1. High blood pressure
2. Excessive alcohol
3. Smoking
4. Diabetes
5. Lack of exercise
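The slope/amplitude/width analysis of the Pan and Tompkins algorithm can be sketched in NumPy as a derivative, squaring, moving-window integration and thresholding pipeline. The window length, threshold fraction and the synthetic impulse-train input below are illustrative assumptions, not the algorithm's published parameters:

```python
import numpy as np

def detect_qrs(x, fs, win_s=0.15, thresh_frac=0.6):
    deriv = np.diff(x, prepend=x[0])  # derivative emphasizes steep QRS slopes
    squared = deriv ** 2              # squaring makes all samples positive
    win = max(1, int(win_s * fs))     # moving-window integration smooths the result
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # simple fixed-fraction threshold on the integrated waveform
    mask = integrated > thresh_frac * integrated.max()
    rises = np.flatnonzero(np.diff(mask.astype(int)) == 1)
    return len(rises) + (1 if mask[0] else 0)  # one rising edge per beat

fs = 200                    # assumed sampling rate, Hz
x = np.zeros(5 * fs)
x[fs // 2 :: fs] = 1.0      # synthetic train of 5 sharp "beats", 1 s apart
beats = detect_qrs(x, fs)
```

A real implementation would add the band-pass stage and adaptive dual thresholds of the original algorithm; this sketch only shows the chain of operations.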
VI. DENOISING ECG SIGNAL

Fig. 4 Steps involved in denoising of ECG signal using wavelets

Denoising is one of the important problems that arise when diagnosing from the ECG signal. The ECG is a non-stationary biological signal, so an effective technique is required for denoising it [8]. Various denoising techniques are used for noise reduction [11]. Recently, the wavelet transform has been used for non-stationary signals. Unlike the Fourier transform, the wavelet transform provides information in both domains, i.e. the time and frequency domains. The discrete wavelet transform is the most popular for signal processing [9]. The wavelet transform is based on a set of analyzing wavelets (small waves) allowing the decomposition of the ECG signal into a set of coefficients. Each analyzing wavelet has its own time duration, time location and frequency band. For example, Mallat developed a very simple and efficient algorithm to compute the DWT [8].

The discrete wavelet transform is performed on the noisy ECG signal. For noise reduction, a threshold value is then used to make the final decision, using either hard or soft thresholding. The threshold values are chosen according to the noise level of the respective signal. SureShrink and NeighBlock are two famous thresholding methods that use wavelets for the estimation of an unknown signal in the presence of noise.
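A minimal one-level sketch of this soft-thresholding step, using the Haar wavelet in NumPy. The threshold value and the sinusoidal test signal are illustrative assumptions; SureShrink and NeighBlock instead derive the threshold from the data:

```python
import numpy as np

def haar_denoise(x, thresh):
    # one-level Haar analysis: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    # soft thresholding shrinks the noisy detail coefficients toward zero
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    # Haar synthesis from the modified coefficients
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# smooth test signal plus small zero-mean noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.1 * rng.standard_normal(512)
denoised = haar_denoise(noisy, thresh=0.2)
```

Because the smooth signal's detail coefficients are tiny while the noise spreads evenly across bands, shrinking the details removes mostly noise.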
Fig. 3 Two-level Mallat algorithm for decomposition

Wavelets are used to remove 50/60 Hz power-line interference from the ECG signal. The HPFs and LPFs are used for calculating the detail [n] and approximation [n] coefficients when decomposing the ECG signal into wavelets. After decomposition into coefficients, down-sampling is done to remove redundant samples and to keep the total number of samples the same. The approximation coefficients [n] result from the down-sampling of the output of the low-pass filter and are related to the low-frequency part of the signal; they include the main features and information of the signal. Similarly, the detail coefficients result from the down-sampling of the output of the high-pass filter; they are required to preserve the exact shape of the signal when it is reconstructed.

VIII. CONCLUSION

As the use of wavelets in the denoising of ECG signals is a new concept, many methodological aspects of the wavelet technique will require further investigation in order to improve the usefulness of ECG signal processing in the medical field. In this paper we reviewed the prolongation of the PR interval in the ECG signal, its causes, symptoms and consequences, and the denoising of the ECG signal using discrete wavelet transforms, which removes 50/60 Hz power-line interference from the ECG signal.

REFERENCES
[1] P. E. McSharry, G. Clifford, L. Tarassenko, and L. A. Smith, "A dynamical model for generating synthetic electrocardiogram signals," IEEE Trans. Biomed. Eng., vol. 50, no. 3, pp. 289-294, 2003.
[2] M. K. Islam, A. N. M. M. Haque, G. Tangim, T. Ahammad, and M. R. H. Khondokar, "Study and Analysis of ECG Signal Using MATLAB & LABVIEW as Effective Tools," International Journal of Computer and Electrical Engineering, vol. 4, no. 3, June 2012.
[3] "Analyzing of an ECG signal mathematically by generating synthetic ECG," The International Journal of Engineering and Science, vol. 4, pp. 39-44, 2015.
[4] Shital L. Pingale, "Using Pan Tompkins' method, ECG signal processing and diagnose various diseases in MATLAB," Department of Instrumentation and Control Engineering, Cummins College of Engineering for Women, Karvenagar, Pune, India.
[5] Sachin Singh and Netaji Gandhi N, "Pattern analysis of different ECG signal using Pan Tompkins algorithm," International Journal of Computer Science and Engineering, vol. 2, 2502-2502, 2010.
[6] H. B. Calleja and M. X. Guerrero, "Prolonged PR interval and coronary artery disease," British Heart Journal, vol. 35, pp. 372-376, 1973.
[7] Akinlolu A. Ponnle, Oludare Y. Ogundepo, "Development of a computer aided application for analyzing ECG signals & detection of cardiac arrhythmia using back propagation neural network - Part I," International Journal of Applied Information Systems, vol. 9, no. 3, 2015.
[8] Mohammad AlMahamdy, H. Bryan Riley, "Performance study of different denoising methods for ECG signal," Elsevier Procedia Computer Science, 37 (2014) 325-332.
[9] Galya Georgieva-Tsaneva, Krassimir Tcheshmedjiev, "Denoising of electrocardiogram data with methods of wavelet transform," CompSysTech'13, pp. 9-16, 2013.
[10] Patient.info/health/abnormal-heart-rhythms-arrhythmias.
[11] Aswathy Velayudhan, Soniya Peter, "Study of different ECG signal denoising techniques," International Journal of Innovative Research in Computer and Communication Engineering, vol. 3, issue 8, August 2015.


Designing of IR Transmitter Using Multisim and its Applications

Bhargava Yasasvi1, Puja Acharya2, Shilpa Mehta2
1 Student of ECE Department, K R Mangalam University, Gurgaon, India
2 Assistant Professor, Department of ECE, K R Mangalam University, Gurgaon, India
E-mail: byasasvi@gmail.com, puja.acharya@krmangalam.edu.in, shilpa.mehta@krmangalam.edu.in
Abstract: The electromagnetic spectrum is the range of all possible frequencies and includes gamma rays, X-rays, ultraviolet, visible, infrared, microwaves and radio waves; the bands differ in frequency, wavelength and energy. Most parts of the electromagnetic spectrum are used in science, in ways to study and characterize matter; in addition, some are used for communications and manufacturing. The aim of this paper is to see how to transmit infrared waves over long distances. One of the many applications considered in this paper is wireless charging using these transmitted waves. The design here is done using Multisim software.
Keywords: ECG, MATLAB, Wavelets
I. INTRODUCTION

Infrared radiation lies in the region of the electromagnetic spectrum at wavelengths greater than those of visible light but shorter than those of radio waves; its frequencies are higher than those of microwaves but lower than those of visible light. Infrared light was discovered in 1800 by Sir Frederick William Herschel, who concluded that infrared radiation can be absorbed, transmitted, reflected and refracted. Infrared technology is growing in mainstream applications. It will help people with different disabilities to access information resources. Some of its applications are in remote controls for TVs and environmental control systems, VCRs and CD players, personal computers, wireless charging and talking signs, as well as more independent access to electronic information systems such as ATMs and fare machines (ticket machines, etc.). As we know, humans are surrounded by technology, and a major drawback is that it needs to be charged; wired charging limits the area of use. This can be overcome by wireless charging.

II. DESIGNING

Fig. 1 Circuit diagram of IR transmitter

IR Transmitter and Receiver pair can be designed


using following components like 555 timer, IR LED
and TSOP1738 IR Receiver. This can be used for
remote controls, burglar alarms etc. TSOP1738 is a
very commonly used IR receiver for PCM remote
control systems. It has only 3 pins, Vcc, GND and
Output which can be powered using a 5V power
supply and its active low output can be directly
connected to a microcontroller or microprocessor. It
has high immunity against ambient light and other
electrical disturbances.[5] It is able to transfer data up
to 2400 bits per second. The PCM carrier frequency
of TSOP1738 is 38 KHz, so we want to design an
astable multivibrator of 38 KHz. This can be done by
using 555 Timer. In the above circuit, 555 Timer is
wired as an Astable multivibrator. The 100F
capacitor (C1) is used to reduce ripples in the power
supply. 1st and 8th pins of 555 are used to give power
Vcc and GND respectively. 4th pin is the reset pin
which is active low input, hence it is connected to
Vcc. 5th pin is the Control Voltage pin which is not
used in this application. Hence it is grounded via a
capacitor to avoid high frequency noises through that
pin. Capacitor C2 and resistors R1 and R2 determine
the time period of oscillation. Capacitor C2 charges to
Vcc via resistors R1 and R2 [6]. It discharges through
Resistor R2 and 7th pin of 555. The voltage across
capacitor C2 is connected to the internal comparators
via 2nd and 6th pins of 555. Output is taken from the
3rd pin of the IC. (See the article "Astable
Multivibrator using 555 Timer" for a more detailed
description of its working.)

Special Issue: National Conference on Recent Innovations in Engineering & Technology (NCRIET-2016), 8th-9th April, 2016, held at Northern India Engineering College, New Delhi. Available online at: www.gtia.co.in

International Journal of Innovations in Engineering and Management, Vol. 5, No. 1; ISSN: 2319-3344 (Jan-June 2016)

The charging time constant of the capacitor (output
HIGH period) is given by 0.693(R1 + R2)C2, and the
discharging time constant (output LOW period) by
0.693 R2 C2. The two are approximately equal when
R1 is much smaller than R2.
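As a quick check of the timing expressions above, the HIGH and LOW periods and the resulting frequency can be computed directly. The component values below are illustrative assumptions, not the article's exact parts, picked to land near the 38 kHz carrier that the TSOP1738 expects:

```python
# 555 astable timing check. R1, R2, C2 are illustrative assumptions,
# not values taken from the article's circuit diagram.
R1 = 1_000    # ohms, in the charge path only
R2 = 18_000   # ohms, in both the charge and discharge paths
C2 = 1e-9     # farads (1 nF timing capacitor)

t_high = 0.693 * (R1 + R2) * C2   # capacitor charges via R1 + R2
t_low = 0.693 * R2 * C2           # capacitor discharges via R2
freq = 1.0 / (t_high + t_low)

print(f"HIGH {t_high * 1e6:.2f} us, LOW {t_low * 1e6:.2f} us, "
      f"f = {freq / 1e3:.1f} kHz")
```

With R1 much smaller than R2, the HIGH and LOW periods come out nearly equal, as the text notes.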
III. COMPONENTS

LEDs and the NE555 IC are the main components
used in the circuit.
LEDs
An electroluminescent IR LED is designed using
narrow-band heterostructures with an energy gap
from 0.25 to 0.4 eV. Infrared rays spread in all
directions and propagate along straight lines in the
forward direction. When IR rays are emitted from the
LED, they travel in the direction the LED is angled
[3]. These rays have the characteristic of producing
secondary wavelets when they collide with any
obstacle in their path. When an obstacle interferes
with the path, the IR rays are cut and produce
secondary wavelets that propagate in the opposite
direction to the primary waves; the net result is a
reflection of the IR rays.
D. NE555 IC
A monostable multivibrator, also known as a one-shot
multivibrator, is a pulse generator circuit in
which the duration of the pulse is determined by an
R-C network connected externally to the 555 timer.
Here, one state of the output is stable while the other
is quasi-stable. For automatic return of the output
from the quasi-stable state to the stable state, energy
is stored to a reference level by an externally
connected capacitor C [1]. The time taken in this
storage determines the pulse width. The transition of
the output from the stable state to the quasi-stable
state is accomplished by external triggering.
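For the one-shot behaviour described above, the standard 555 monostable pulse-width relation T = 1.1RC (a textbook 555 result, not stated in the article) can be sketched with assumed component values:

```python
# 555 monostable pulse width: T = 1.1 * R * C (standard 555 relation).
# R and C below are illustrative assumptions, not the article's values.
R = 100_000   # ohms, external timing resistor
C = 10e-6     # farads, external timing capacitor

T = 1.1 * R * C   # seconds the output stays in the quasi-stable state
print(f"one-shot pulse width: {T:.2f} s")
```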
Fig 2: Pin Configuration of NE555 IC

E. Functions of Each Pin
Pin 1: Ground Terminal: All voltages are measured
with respect to this terminal.
Pin 2: Trigger Terminal: This pin feeds the trigger
input; when triggered, the 555 IC is set as a
monostable multivibrator. It is the inverting input of a
comparator which is responsible for the transition of
the flip-flop from set to reset. The output of the timer
depends on the amplitude of the external trigger pulse
applied to this pin: a negative pulse with a dc level
greater than Vcc/3 is applied to this terminal. On the
negative edge, as the trigger passes through Vcc/3, the
output of the lower comparator becomes high and the
complement of Q becomes zero. Thus the 555 IC
output goes to a high voltage, i.e., the quasi-stable
state.
Pin 3: Output Terminal: The output of the timer is
available at this pin. There are two ways in which a
load can be connected to the output terminal: between
pin 3 and the ground pin (called the normally off
load), or between pin 3 and the supply pin, pin 8
(called the normally on load).
Pin 4: Reset Terminal: Whenever the timer IC is to
be reset or disabled, a negative pulse is applied to this
pin, hence its name. When this pin is not used for
reset purposes, it should be connected to +Vcc to
avoid false triggering.
Pin 5: Control Voltage Terminal: The threshold
and trigger levels are controlled using this terminal.
The pulse width of the output waveform is determined
by connecting a potentiometer or providing an
external voltage to this pin, which can be used to
modulate the output waveform.
Pin 6: Threshold Terminal: This is the non-inverting
input terminal of comparator 1, which compares the
voltage applied to the terminal with a reference
voltage of 2/3 Vcc.
Pin 7: Discharge Terminal: This pin is connected
internally to the collector of a transistor, and usually a
capacitor is connected between this terminal and
ground. It is called the discharge terminal because
when the transistor saturates, the capacitor discharges
through the transistor. When the transistor is cut off,
the capacitor charges at a rate determined by the
external resistor and capacitor.
Pin 8: Supply Terminal: A supply voltage of +5 V
to +18 V can be applied to this terminal with respect
to ground.

IV. SIMULATION RESULTS

Fig 3: Designing using Multisim


Fig. 4: Waveform

V. APPLICATION

WIRELESS CHARGING
This technology consists of a receiver with a
photovoltaic (PV) cell and a transmitter with an
infrared light-emitting diode (IR LED) that users can
install by plugging it into an electrical socket [2]. Both
are backed with retro-reflective mirrors that reflect
light, or in this case send the IR energy back to its
source. This allows the transmitter and receiver to
form a beam of resonating IR light. This self-alignment
system allows wireless charging from almost
anywhere within the radius, without aiming. The
transmitter delivers power only to receivers it has
found.

VI. SURVEY

Five students, among them Kanika, Ridhi, Om, and
Lakshay, were asked three questions: Q1) What do
you mean by wireless charging? Q2) What are the
limitations of this technique? Q3) Is this technique
efficient or not? Their views were as follows.
On Q1: there is no wired connection between the
phone and the charger; it is charging without wires;
it is charging a device without any physical linkage
with the charger in any port; it is the process of
charging any electronic device by surging power into
it without using any sort of physical medium such as
wires.
On Q2: the base of the charger still needs to be
attached to a socket by a wire, hence it is a
contradiction of wireless charging; it is expensive and
has health issues; it is costly and the device can't be
used during charging; in case of any repair, it takes
time to get it repaired.
On Q3: it is not efficient until a modem gets installed
at every place; it is the same as wired charging,
because the phone has to be kept over the charger
base, which is connected to a socket, so movement of
the phone is restricted, and therefore it is not
efficient; yes it is, as it helps us send information
more efficiently; it is efficient but not cost-efficient.

VII. ADVANTAGES

1. Due to its low power requirements it is suitable for
laptops, telephones, and personal digital assistants.
2. Circuit design cost is low.
3. The circuitry is simple.
4. It is secure.
5. It is portable.

VIII. LIMITATIONS

1. Transmitters and receivers must be directly aligned
in order to communicate.
2. Transmission can be blocked by materials such as
walls and plants.
3. It is suitable for short distances only.
4. It is sensitive to light and weather.

REFERENCES
[1] Malik Tubaishat, Qi, Yi Shang, and Hongchi Shi, "Wireless Sensor-Based Traffic Light Control," IEEE, 2008.
[2] Ramesh S and Yuvaraj S, "Improved Response Time on Safety Mechanism Based on PIR," 2012.
[3] N. W. Lo and K. H. Yeh, "Novel RFID Authentication Schemes for Security Enhancement and System Efficiency," Lecture Notes in Computer Science, Secure Data Management, vol. 4721/2007, pp. 203-212, 2007.
[4] http://www.ijater.com/Files/IJATER_03_06.pdf
[5] Eason, B. Noble, and I. N. Sneddon, "On certain integrals of Lipschitz-Hankel type involving products of Bessel functions," Phil. Trans. Roy. Soc. London, vol. A247, pp. 529-551, April 1955.
[6] http://www.eeweb.com/blog/extreme_circuits/long-range-ir-transmitter


Challenges in NAND Flash Memory

1Manisha Sharma, 2Munesh Devi
Assistant Prof., ECE Department, Gateway Institute of Engineering & Technology
E-mail: 1sharmma9mani@gmail.com, 2muneshsangwan@gmail.com

Abstract: The demand for memory devices is
increasing day by day. They are used in various
applications such as digital still cameras, digital
music players, personal digital assistants, electronic
books, and cell phones. To fulfill the increasing
demand for memory devices, high-density flash
memories have been developed; they can be used for
enormous data storage. This paper gives a review of
NAND flash memories and the various problems and
interference effects associated with these memory
devices.
Keywords: Flash Memory, Floating Gate
Coupling Effect, Multilevel Cell, NAND Flash
Memory, etc.
I. INTRODUCTION

Memory devices have been increasingly used for
portable mass storage applications, such as in digital
still cameras, digital music players, personal digital
assistants, electronic books, and cell phones. Due to
the great demand, the development of high-density
flash memories has accelerated. NAND flash memory
is mainly used for massive data storage. To increase
the storage capacity of a flash memory, multi-level
cells are used to store more than one bit in each cell
by programming the cell threshold voltage. Cost is
reduced because an MLC flash stores two or more
bits per cell, while a single-level cell (SLC) stores
one bit on the same cell size. However, the threshold
voltage margin is tight in an MLC technique, so the
program performance is inferior to that of an SLC
NAND flash memory [23]. In the multilevel cell
technique there are multiple values that an MLC can
represent; the values for a two-bit cell, ranging from
fully programmed to fully erased, are given in Table 1.
Regardless of this impressive growth, flash memory
is also facing technological challenges in scaling,
because as the cell size shrinks the sensitivity
between the levels increases.
NAND-type flash memory was first introduced by
Toshiba in the late 1980s, following NOR-type flash
memory by Intel [1]. In recent years, non-volatile
memories, in particular flash memories, have
attracted considerable attention due to their high
data-transfer rate and low power consumption [1].
Flash memory cells consist of floating gate
transistors, in which the amount of trapped charge
determines the cell voltage, referred to as the cell
level. A flash memory cell is written to, or
programmed, by applying a suitable voltage to the
cell in order to inject the desired amount of charge to
reach a certain cell level. Programming accuracy is
an important factor in achieving the capacity of flash
memory storage. Parasitic capacitance is responsible
for random telegraph noise, the direct effect, and
inter-cell interference. These parasitic capacitances
between adjacent cells result in changes of threshold
voltage: the threshold voltage of a so-called victim
cell may be increased when a high voltage is applied
to neighboring cells [2], [3]. Beyond these, further
problems in NAND flash memories include cell-to-cell
interference, random telegraph noise, floating
gate interference, adjacent bit line cell interference,
the direct effect, etc., which we have to consider
while designing the multi-level cell.
A flash memory cell is a single-transistor cell using a
dual-gate MOS device. A floating gate exists between
the control gate and the silicon substrate. The floating
gate is completely isolated by dielectrics and can
therefore trap electrons and keep its charge [5].
Table 1: Two-bit MLC values and states

Value   State
00      Fully Programmed
01      Partially Programmed
10      Partially Erased
11      Fully Erased
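The state mapping of Table 1, together with the general relation bits per cell = log2(levels), can be sketched as:

```python
# Two-bit MLC: four threshold-voltage states encode two bits per cell
# (Table 1). In general, an n-level cell stores log2(n) bits.
import math

MLC_STATES = {
    "00": "Fully Programmed",
    "01": "Partially Programmed",
    "10": "Partially Erased",
    "11": "Fully Erased",      # an erased cell is read as logic 1
}

def bits_per_cell(levels):
    """Number of bits a cell with `levels` distinguishable states stores."""
    return math.log2(levels)

print(MLC_STATES["11"])
print(bits_per_cell(4))   # a 4-level MLC stores 2 bits
print(bits_per_cell(2))   # an SLC stores 1 bit
```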
For programming the memory cell, NOR flash uses
channel hot-electron (CHE) injection, while NAND
flash uses Fowler-Nordheim (FN) tunneling for both
programming and erase. With the CHE injection
method, the MOSFET is properly biased at the drain
and gate and a large current flows into the cell. Due to
this large current, electrons in the channel gain sufficient

energy to overcome the gate oxide barrier and get
trapped in the floating gate. In FN tunneling, only the
drain of the MOS device is biased and less current is
used for programming. Programming by FN
tunneling therefore takes longer than CHE injection,
but it allows many cells to be programmed
simultaneously. Once electrons are trapped in the
floating gate, they cannot escape the high-energy
silicon dioxide barrier even after the device is
powered off. When a flash memory cell is
programmed, it is considered logic 0, because when
it is read it cannot conduct a current, due to the
threshold voltage increased by the trapped charges in
the floating gate [4, 5, 6]. When it is erased, it is
considered logic 1.

In both NAND and NOR flash, the cell is erased by FN
tunneling. With negative biasing of the cell gate, a
high electric field is formed across the gate oxide, which
helps trapped electrons to overcome the high energy
barrier and depart the floating gate [4, 5, 6]. An
overview of NAND and NOR flash memory trends is
given in Table 2.

Table 2: NAND and NOR flash memory trends
One big disadvantage of the floating gate technique is
that charges can be trapped in the floating gate during
processing, which can result in a change of threshold
voltage and inaccuracy in the circuit. The
Semi-Floating-Gate uses a clock for initialization and is
thus not suited for pure analog circuit design, but is
suited for Multi-Valued-Logic digital circuit design.
The Pseudo-Floating-Gate (PFG) borrows ideas from
both the Nonvolatile Floating Gate and the Semi
Floating Gate without being purely floating. Large-value
resistors weakly control the offset voltage at the
floating gate node, thus avoiding problems like
trapped charges and the need for programming.

II. FLOATING GATE & TYPES

A floating gate is a polysilicon gate surrounded by
silicon dioxide. Charge on the floating gate is stored
permanently, providing a long-term memory, because
it is completely surrounded by a high-quality
insulator. The floating gate is a polysilicon layer that
has no contacts to other layers. This floating gate can
be the gate of a MOSFET and can be capacitively
connected to other layers. In circuit terms, a floating
gate occurs when there is no DC path to a fixed
potential; no DC path implies only capacitive
connections to the floating node. The figure below
shows the basic floating gate.

Fig.1 Basic floating gate

Programming a floating-gate transistor involves
setting the DC voltage of the floating node to any
desired value by adding charge to, or removing
charge from, the floating gate. Charge modification is
achieved using two physical phenomena, namely
hot-electron injection and Fowler-Nordheim tunneling
[17]. Tunneling-based programming involves the
application of high-voltage pulses of both positive
and negative polarities to modify the floating-gate
charge. The logarithmic nature of tunneling, however,
makes this programming highly time consuming.
Faster programming times can be achieved by using
special processing steps, such as an ultra-thin
tunneling oxide or textured polysilicon, or by
increasing the tunneling voltage even further.
Hot-electron injection, on the other hand, is a
phenomenon that involves adding electrons onto the
floating gate. Hot-electron injection occurs when the
electric field in the channel is high enough to
accelerate channel electrons to energies higher than
the Si-SiO2 barrier.
Fast and accurate programming is achieved by using
a combination of hot-electron injection and tunneling,
where tunneling is used primarily as a global erase
for all floating gates in the circuit. Once the charge on
all floating gates has been normalized, hot-electron
injection is used to individually program each
floating gate to the desired value. This is achieved by
first isolating the floating-gate transistor from the rest
of the circuitry and then applying a sufficient
source-to-drain voltage for a specific period of time,
based on the desired floating-gate target current.

Quasi Floating Gates

(a) Quasi floating gate (b) symbolic representation

Fig 2: (a) shows a quasi floating gate inverter and (b) its
symbolic representation. It can be recharged by short-circuiting
the floating gate output. A semi floating gate is recharged to
VDD/2 instead of to either rail.

By changing the way of recharge we get another gate,
known as the pseudo floating gate. The main
difference between the quasi and pseudo floating gate
is the use of a feedback buffer instead of a switch: a
feedback buffer is used in place of the recharge
switch, so recharge clocks and periods are eliminated.
In the pseudo floating gate there is no recharge mode
as in the semi floating gate, and the device operates
in continuous mode.

Hybrid Floating Gate
As we move to the multilevel cell approach, in
which we can store more than one bit per cell as
compared to the single-level cell approach, we are in
parallel scaling the device [21]. A limitation in scaling
the conventional NAND flash memory cell is the loss
of the control gate (CG) to floating gate (FG) sidewall
capacitance when the space between neighboring cells
becomes so small that the inter-poly dielectric (IPD)
and the CG can no longer be wrapped around the FG,
finally leading to a planar structure. This results in a
reduction of the CG-to-FG coupling ratio, causing
severe program saturation, even with high-k IPD.
Replacing the n-type poly-Si FG with a p-type metal
FG would reduce program saturation because of the
increased electron tunneling barrier at the FG/CG
interface. However, this barrier would then also be
present at the FG/tunnel oxide interface, and
therefore erase performance is degraded. To reduce
program saturation without degrading erase
performance, a dual-layer (or hybrid) FG stack (HFG),
which combines an n-type poly-Si at the tunnel oxide
interface with a p-type metal at the IPD interface
(Fig. 4, given below), has recently been proposed. In
this stack, the p-type metal at the IPD interface
increases the tunneling barrier for leakage through the
IPD, therefore postponing program saturation. A thin
poly-Si layer is maintained at the tunnel oxide
interface in order to avoid increasing the barrier at
that interface for tunnel-erasing the memory cell.

Fig. 4: Band diagram of (left) a normal poly floating gate flash
cell and (right) a cell with a poly/metal dual-layer floating gate in
the programming condition. Due to the larger work function of the
metal layer in the floating gate, the tunneling leakage through the
inter-poly dielectric is strongly reduced [21].

III. CHALLENGES IN NAND FLASH MEMORY

As technology scaling of NAND flash memory
continues, major threats have to be considered in
addition to the multilevel cell approach. The main
issues are cell-to-cell interference, inter-cell
interference, random telegraph noise, floating gate
interference, adjacent bit line cell interference, the
direct effect, etc.,
since they produce new issues during the common
operation of the NAND flash array [18].

Fig. 3: The quasi floating gate has a smaller noise effect than the
pseudo floating gate.

Major sources of distortion are:
1. Electron capture and emission events at charge
trap sites near the interface, developed over P/E
cycling, directly result in memory cell threshold
voltage fluctuation, which is referred to as random
telegraph noise (RTN).
2. Interface trap recovery and electron detrapping
gradually reduce the memory cell threshold voltage,
leading to the data retention limitation. Moreover,
electrons trapped in the oxide over P/E cycling make
it difficult to erase the memory cells, leading to a
longer erase time; equivalently, under the same erase
time, those trapped electrons make the threshold
voltage of the erased state increase. Most commercial
flash chips employ an erase-and-verify operation to
prevent the increase of the erase-state threshold
voltage, at the penalty of a gradually longer erase
time with P/E cycling.
3. Parasitic capacitances between cells shift the Vth
of adjacent cells. This change in Vth gives rise to
floating gate interference.
4. Parasitic capacitance also gives rise to one more
effect, known as the direct field effect.
Floating gate interference
As cell integration density is increased, the NAND
flash memory cell suffers from increased parasitic
capacitance between the cells, which poses a major
threat to multilevel cell operation. Floating gate
interference results from capacitive coupling through
the parasitic capacitors surrounding the floating gate,
and it degrades cell characteristics such as current,
speed, and the cell Vth distribution. The cell shift
caused by floating-gate interference is linearly
proportional to the adjacent cell Vt change. Hence,
floating-gate interference affects multilevel cell
operation seriously, because programmed-cell Vts are
relatively high in the multilevel cell scheme to ensure
gaps between the cell levels. The interference can be
reduced significantly with a silicon oxide spacer due
to its lower parasitic capacitance. Therefore, a low-k
dielectric material should be adopted for multilevel
cell integration [18].
Vt distributions widen, and the gaps in the multilevel
cell are reduced, by floating-gate interference. Thus,
failure may occur in multilevel cell operation through
cell-level overlap, because there are additional shifts
due to program/erase cycling, data retention, and
programming interference. Floating-gate interference
also affects the programming speed. In the
programming operation, a programming voltage is
applied to the control gate of a cell and a pass voltage
is applied to the other cells. If a lower voltage is
applied to nearby control gates, capacitive coupling
lowers the floating-gate voltage of the programming
cell. To reduce floating-gate interference, it is
necessary to thin the floating gate, in addition to
adopting a low-k dielectric material. So, to minimize
floating gate interference, the floating gate height
should be lowered; a low-k dielectric material as well
as very thin floating gate structures will help to avoid
floating gate interference [18].
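The linear-coupling claim above (the victim cell's shift is linearly proportional to the adjacent cells' Vt changes) can be sketched as a toy model; the coupling-ratio values below are assumptions for illustration, not measured data from the cited work:

```python
# Toy linear model of floating-gate interference: the victim cell's Vth
# shift is a weighted sum of its neighbors' Vt changes, weighted by
# parasitic coupling ratios. Gamma values are illustrative assumptions.
GAMMA = {"x": 0.10, "y": 0.08, "xy": 0.02}  # assumed coupling ratios

def victim_vth_shift(neighbor_dvt):
    """neighbor_dvt maps direction -> that neighbor's Vt change (volts)."""
    return sum(GAMMA[d] * dv for d, dv in neighbor_dvt.items())

# Programming the x- and y-direction neighbors by 2 V each shifts the
# victim cell's threshold voltage:
shift = victim_vth_shift({"x": 2.0, "y": 2.0})
print(f"victim Vth shift: {shift:.2f} V")
```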
Direct field effect
The direct field effect of an adjacent cell transistor is
another source of cell-to-cell interference. As cell
transistor sizes are scaled down below 50 nm, a
selected cell transistor gets nearer to neighboring cell
transistors, so that they influence each other directly
and indirectly. The indirect effect is due to the
well-known parasitic capacitance-coupling effect; the
intrinsic Vth shift caused directly by a neighboring cell
transistor is the direct effect. Due to the short distance
between transistors in the sub-50-nm region, the
electric field of the adjacent cell transistor directly
affects the channel edge of a selected cell transistor,
provoking a cell Vth shift. Moreover, based on the fact
that most of the cell Vth is determined at the channel
edge, due to severe boron segregation and electric
field crowding there, the cell transistor suffers an
intense Vth shift, particularly in the x-direction (Vx).
The direct field effect of an adjacent cell transistor
becomes prominent as the cell size is reduced below
50 nm, and its strong influence on cell-to-cell
interference is in the x-direction. Therefore, to reduce
this effect in NAND flash cell arrays below 50 nm, it
is necessary to maximize the field oxide recess [20].
Random telegraph noise
Random telegraph noise (RTN), caused by the
capture/emission of an electron at a trap, has become
one of the critical issues, because it can cause an error
during the read operation and make the memory
unreliable. As is well known, the effect of RTN is
more severe in floating-gate NAND flash memories,
because they have thicker tunnel oxide compared
with other CMOS devices. Since there is a position
effect in a NAND cell string, a cell located closer to
the bit line (BL) has a higher RTN threshold voltage
fluctuation (ΔVth), because it has a lower
transconductance due to a higher equivalent source
resistance. Furthermore, it has been reported that the
low-frequency noise power and ΔVth of a cell string
are influenced by the state of the cells in the BL
direction, bias conditions, and program/erase (P/E)
cycling [22].
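The capture/emission picture of RTN can be illustrated with a toy simulation; the toggle probability and the Vth step below are arbitrary assumptions, not device data:

```python
# Toy random-telegraph-noise model: a single trap randomly captures or
# emits an electron at each read, toggling the cell Vth between two
# levels. The probability and Vth step are illustrative assumptions.
import random

random.seed(0)

def simulate_rtn(steps, p_toggle=0.1, dvth=0.05):
    """Return the Vth offset (0 or dvth volts) seen at each of `steps` reads."""
    trapped = False
    trace = []
    for _ in range(steps):
        if random.random() < p_toggle:   # a capture or emission event
            trapped = not trapped
        trace.append(dvth if trapped else 0.0)
    return trace

trace = simulate_rtn(1000)
print("fraction of reads with shifted Vth:",
      sum(v > 0 for v in trace) / len(trace))
```

A read that lands while the trap is occupied sees a shifted threshold, which is how RTN produces read errors.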

IV. TECHNIQUES TO REDUCE CHALLENGES
IN NAND FLASH MEMORY

As the number of levels increases, the correspondingly
rising cell-to-cell interference becomes the major
challenge for further NAND flash memory
technology scaling and more aggressive use of MLC.
Therefore, it is important to develop techniques that
can minimize or compensate cell-to-cell interference.
Cell-to-cell interference compensation schemes using
a reduced symbol pattern of interfering cells for
multilevel cell (MLC) NAND flash memory have
been proposed. The proposed schemes consist of
three procedures: estimation of cell-to-cell
interference, compensation for cell-to-cell
interference, and generation of the log-likelihood
ratio (LLR). First, the reduced symbol pattern of
interfering cells is used to estimate cell-to-cell
interference by reducing the levels of the threshold
voltage shift from multi-page programming to two
levels. Second, based on this estimation, cell-to-cell
interference is compensated by modifying the read
voltage according to the estimated cell-to-cell
interference in the proposed scheme 1, and by
subtracting the estimated cell-to-cell interference
from the sensed voltage in the proposed scheme 2.
Finally, after compensation, the LLR is calculated for
low-density parity-check (LDPC) codes under the
assumption of no cell-to-cell interference, since
interference between cells is mitigated by the
compensation procedure. Using these techniques,
cell-to-cell interference can be relaxed with a simple
structure and high reliability.
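The scheme-2 step, subtracting an estimated cell-to-cell interference from the sensed voltage, can be sketched as follows; the coupling ratio and voltages are illustrative assumptions, not values from the cited scheme:

```python
# Data post-compensation sketch: estimate the cell-to-cell interference
# from the neighbors' programmed shifts and subtract it from the sensed
# voltage before threshold detection. All values are illustrative.
GAMMA = 0.1  # assumed aggregate coupling ratio to neighboring cells

def compensate(sensed_v, neighbor_shifts):
    """Return the sensed voltage corrected for estimated interference."""
    interference = GAMMA * sum(neighbor_shifts)
    return sensed_v - interference

# A cell sensed at 2.35 V whose two neighbors were programmed by
# +2.0 V and +1.5 V is corrected back toward its true level:
print(compensate(2.35, [2.0, 1.5]))
```

This mirrors the signal-equalization view mentioned in the text: the interference is treated as an additive distortion to be estimated and removed.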

The read voltage is shifted based on the estimated
cell-to-cell interference. With the modified read
voltage, the threshold detection for the target cell can
be performed. In the proposed scheme 2, interference
between cells is compensated by subtracting the
estimated cell-to-cell interference from the sensed
voltage. In order to provide sufficient accuracy to
compensate cell-to-cell interference and to generate
the LLR, fine-grained cell-threshold-voltage sensing
is used.
One more method proposed to reduce cell-to-cell
interference is data post-compensation and
pre-distortion, which reduces induced bit errors. The
first technique, called data post-compensation, aims
to estimate and subtract cell-to-cell interference when
we read data from NAND flash memories, which
essentially follows the concept of signal equalization
in digital communication [19]. The second technique,
called data pre-distortion, aims to predict cell-to-cell
interference and accordingly pre-distort the data when
we write data to NAND flash memories, which
essentially follows the concept of signal pre-distortion
in digital communication.

V. CONCLUSION

Nowadays memory devices are increasingly used for
portable mass storage applications. Due to the great
demand, the development of high-density flash
memories has accelerated. NAND flash memory is
mainly used for massive data storage. There are many
challenges and issues related to NAND flash
memories: cell-to-cell interference, inter-cell
interference, random telegraph noise, floating gate
interference, adjacent bit line cell interference, the
direct effect, etc. There are also various techniques to
reduce these challenges, such as those for multilevel
cell NAND flash memory. In this paper we have
provided a review of the various issues related to
these memory devices and of the available solutions
to these challenges.

REFERENCES
[1] P. Cappelletti, C. Golla, P. Olivo, and E. Zanoni, Flash Memories. Kluwer Academic Publishers, 1st Edition, 1999.
[2] J.-D. Lee, S.-H. Hur, and J.-D. Choi, "Effects of floating-gate interference on NAND flash memory cell operation," IEEE Electron Device Lett., vol. 23, no. 5, pp. 264-266, May 2002.
[3] G. Dong, S. Li, and T. Zhang, "Using data postcompensation and predistortion to tolerate cell-to-cell interference in MLC NAND flash memory," IEEE Trans. Circuits Syst., vol. 57, no. 10, pp. 2718-2728, October 2010.
[4] NAND Flash Applications Design Guide. Toshiba America Electronic Components, Inc., http://www.dataio.com/pdf/NAND/Toshiba/NandDesignGuide.pdf.pdf, April 2003.
[5] R. Bez, E. Camerlenghi, A. Modelli, and A. Visconti, "Introduction to Flash Memory," Proceedings of the IEEE, vol. 91, no. 4, pp. 489-502, April 2003.
[6] J. E. Brewer and M. Gill, Nonvolatile Memory Technologies with Emphasis on Flash: A Comprehensive Guide to Understanding and Using NVM Devices. Wiley-IEEE Press, 2007.


[7] T. Krazit, "Intel flashes ahead to 1 Gb memory," CNET News, http://news.cnet.com/Intel-flashes-ahead-to-1Gb-memory/2100-1006_3-6057216.html?tag=nw.11, 2006.
[8] MacGillivray, "Inside Intel's 65-nm NOR flash," Techonline, http://www.techonline.com/article/printArticle.jhtml?articleID=196600919, December 2006.
[9] "STMicroelectronics Offers Automotive-Grade 32 Mbit NOR Flash Memory," IHS, http://parts.ihs.com/news/stmicroelectronics-32mbitnor.htm, 2007.
[10] Duncan, "Samsung Reveals 30nm, 64 Gb Flash," Digital Trends, http://news.digitaltrends.com/newsarticle/14582/samsung-reveals-30nm-64gb-flashprinterfriendly, 2007.
[11] Cormier, "Hynix is the NAND flash memory engraved in 48 nm," PCInpact, http://www.pcinpact.com/actu/news/40473-Hynixmemoire-flash-NAND48nm.htm, 2007.
[12] S. Mutschler, "Toshiba touts 43-nm CMOS 16-Gb NAND flash," EDN: Electronics Design, Strategy, News, http://www.edn.com/index.asp?layout=articlePrint&articleID=CA6529998, 2008.
[13] M. LaPedus, "Intel, Micron roll 34-nm NAND device," EE Times, http://www.eetimes.com/showArticle.jhtml?articleID=208400713, 2008.
[14] S. Seguin, "Toshiba Launches First 512 GB SSD," Tom's Hardware, http://www.tomshardware.com/news/Toshiba-512GB-SSD,6716.html, 2008.
[15] Walko, "NOR flash parts move to 65 nm processing," EE Times Asia, http://www.eetasia.com/ART_8800556954_480200_NP_509eba15.HTM, 2008.

[16] M. LaPedus. SanDisk, Toshiba to ship 32-nm NAND in '09. EE Times, http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=212800210&printable=true&printable=true, 2009.
[17] M. Lezlinger and E. Snow, "Fowler-Nordheim tunneling in thermally grown SiO2," Journal of Applied Physics, vol. 40, pp. 278-283, Jan. 1969.
[18] Jae-Duk Lee, Sung-Hoi Hur, and Jung-Dai Choi, "Effects of Floating Gate Interference on NAND Flash Memory Cell Operation," IEEE Electron Device Letters, vol. 23, no. 5, May 2002.
[19] Taehyung Kim, Gyuyeol Kong, Xi Weiya, and Sooyong Choi, "Cell-to-Cell Interference Compensation Schemes Using Reduced Symbol Pattern of Interfering Cells for MLC NAND Flash Memory," IEEE Transactions on Magnetics, vol. 49, no. 6, June 2013, pp. 256-2573.
[20] Mincheol Park, Keonsoo Kim, Jong-Ho Park, and Jeong Hyuck Choi, "Direct Field Effect of Neighboring Cell Transistor on Cell-to-Cell Interference of NAND Flash Cell Arrays," IEEE Electron Device Letters, vol. 30, no. 2, February 2009.
[21] P. Blomme, A. Cacciato, D. Wellekens, L. Breuil, M. Rosmeulen, G. S. Kar, S. Locorotondo, C. Vrancken, O. Richard, I. Debusschere, and J. Van Houdt, "Hybrid Floating Gate Cell for Sub-20-nm NAND Flash Memory Technology," IEEE Electron Device Letters, vol. 33, no. 3, March 2012, pp. 333-335.
[22] Sung-Min Joe, Min-Kyu Jeong, Bong-Su Jo, Kyoung-Rok Han, Sung-Kye Park, and Jong-Ho Lee, "The Effect of Adjacent Bit-Line Cell Interference on Random Telegraph Noise in NAND Flash Memory Cell Strings," IEEE Transactions on Electron Devices, vol. 59, no. 12, December 2012, pp. 3568-3573.
[23] Donghyuk Park and Jaejin Lee, "Floating-Gate Coupling Canceller for Multi-Level Cell NAND Flash," IEEE Transactions on Magnetics, vol. 47, no. 3, March 2011, pp. 624-628.


Strained Silicon Complementary Metal Oxide Semiconductor: A Review

1Anita, 2Vanita Batra, 2Ritu Pahwa, 2Jyoti Sehgal
1M.Tech Scholar, ECE, Vaish College of Engineering, Rohtak, Haryana
2Assistant Professor, ECE, Vaish College of Engineering, Rohtak, Haryana
E-mail: anitachhillar20@gmail.com, vanita.batra@rediffmail.com, ritumtech@gmail.com, legendjyoti@rediffmail.com

Abstract: To enhance integrated circuit performance, the introduction of strain in the channel of a CMOS silicon transistor has been widely accepted. Strain helps carriers travel faster: the increased performance is achieved through higher carrier mobility and reduced source/drain resistance. In this review paper we study the mechanism and role of strained layers in boosting modern CMOS technology.

Keywords: Strained CMOS, mobility enhancement, global strain, local strain

I. INTRODUCTION

The key to the tremendous success of CMOS technology is performance enhancement through downscaling of transistor feature size [1]. At the 90 nm technology node, stress techniques were introduced while keeping the MOSFET design intact. To enable the design of new device structures based on strained Si, a reliable set of models for parameters such as mobility, energy bandgap, and relaxation times is required.

The reason for the mobility enhancement is the stress-induced band structure modification. Stress causes a deviation of the silicon lattice constant from its equilibrium value, which modifies the electronic band structure. Mechanical stress in silicon can be generated either globally, by growing an epitaxial layer on a relaxed SiGe substrate [2][3] or by mechanical deformation, or locally, induced during the processing steps.

Biaxially strained silicon layers grown on relaxed SiGe substrates have shown large enhancements of electron mobility. This method, however, suffers from several integration issues. There has thus been a growing interest in uniaxially strained silicon, which delivers superior mobilities for both electrons and holes [4]. The strain in the channel region can be obtained by optimizing the stress introduced by the individual process steps, and can as well be implemented by using strained-Si substrates. Process-induced strain engineering needs to be taken into account at the transistor level and may limit the flexibility and add further complexity to transistor architectures. There are basically two approaches, classified by strain direction, to introduce strain into the transistor channel.

Global (biaxial) strain, also referred to as substrate-induced strain, is created over the whole wafer. Local (uniaxial) strain is realized locally in the transistor channel, either by using compressive or tensile contact etch stop layers for pMOS and nMOS devices respectively, or by the integration of silicon germanium (SiGe) in the source and drain regions of the pMOS transistors [13]. Fig. 1 illustrates the structure of a strained-Si MOSFET, in which a thin layer of Si is pseudomorphically grown on a thick, relaxed SiGe layer [5].

Fig. 1. Typical structures of strained Si/relaxed SiGe bulk MOSFETs [5].


Strain can also be classified by crystal orientation. Tensile strain along the <110> orientation is applied to nMOS devices: it increases electron mobility by lowering the effective mass and the scattering rate of electrons. Compressive strain along the <100> orientation is applied to pMOS devices: it increases hole mobility by lowering the effective mass and the scattering rate of holes.

Tensile strain increases both electron and hole mobilities, whereas compressive strain decreases electron mobility but increases hole mobility. The carrier mobility is affected by the strain-induced change in electron and hole effective masses, while the hole mobility is also affected by the strain-induced suppression of interband and intraband scattering. Strain-induced carrier transport enhancement is maintained with gate-length scaling. This paper gives an overview of carrier mobility enhancement in strained CMOS technologies and MOS structures with strained layers.
II. MECHANISMS FOR MOBILITY ENHANCEMENT

The performance of the MOS transistor is improved by the possibility of changing the properties of materials. From the band structures, the effective masses of electrons and holes at the valence- and conduction-band edges and the band-splitting energies can be studied. Measurements performed on Hall structures in strained silicon layers at room temperature showed large electron mobility, which at very low temperature (0.4 K) reaches extremely high values, up to 500,000 cm2/Vs [2]. The carrier mobility of electrons and holes as a function of strain is supported by theoretical studies [4].

Fig. 2 shows that in the conduction band, tensile strain splits the six-fold degeneracy and lowers the two-fold degenerate perpendicular Δ2-valleys with respect to the four-fold in-plane Δ4-valleys in energy space. Such energy splitting suppresses inter-valley carrier scattering between the two-fold and four-fold degenerate valleys and causes preferential occupation of the two-fold valleys, where the in-plane conduction mass is lower. These two effects lead to increased electron mobility.

Fig. 2. Schematic diagram of silicon structure on SiGe composition (a), and conduction zones of unstrained (b) and strained (c) silicon [2].

Three-dimensional electrons in silicon have an anisotropic effective mass, composed of the light transversal effective mass, mt (= 0.19 m0), where m0 is the electron mass in free space, and the heavy longitudinal effective mass, ml (= 0.916 m0). As a result, the two-fold degenerate valleys have the effective mass mt parallel and ml perpendicular to the MOS interface, while the four-fold degenerate valleys have the effective masses mt and ml parallel and mt perpendicular to the MOS interface [6].
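As a back-of-the-envelope illustration (not part of the cited works), the in-plane conductivity effective masses of the Δ2 and Δ4 valleys can be computed directly from mt and ml; the lighter Δ2 mass is what drives the mobility gain:

```python
# Illustrative sketch: in-plane conductivity effective masses of the
# Delta-2 and Delta-4 valleys, using mt = 0.19 m0 and ml = 0.916 m0.
mt, ml = 0.19, 0.916  # in units of the free-electron mass m0

# Delta-2 valleys: both in-plane directions see the light mass mt.
m_delta2 = mt

# Delta-4 valleys: one in-plane direction sees mt, the other ml;
# the conductivity mass is the harmonic mean of the two.
m_delta4 = 2.0 / (1.0 / mt + 1.0 / ml)

# Mobility scales roughly as 1/m*, so preferential occupation of the
# Delta-2 valleys improves the in-plane electron mobility.
mobility_gain = m_delta4 / m_delta2

print(f"Delta-2 in-plane mass: {m_delta2:.3f} m0")
print(f"Delta-4 in-plane mass: {m_delta4:.3f} m0")
print(f"Relative mobility gain: {mobility_gain:.2f}x")
```

This simple estimate ignores scattering-rate changes, which add to the enhancement in practice.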

Fig. 3. Biaxial tension-induced changes in the conduction band of strained Si [6].

The difference in effective mass causes different physical properties in the Δ2 and Δ4 valleys. The conduction electron mass parallel to the MOS interface is smaller in the Δ2 valleys than in the Δ4 valleys, and therefore the electron mobility is greater in the Δ2 valleys. Also, since the inversion-layer thickness and the subband energies are determined by the effective mass in the direction normal to the MOS interface, the inversion layer is thinner and the subband energy is lower for the Δ2 valleys, whose effective mass in that direction is higher [2].

The strain in the channel can be obtained by the individual process steps and by the use of strained-silicon substrates.

The effect of strain on the electron mobility can be explained by the change in subband energies, i.e., the energy levels in the conduction band. The increase of the sublevel splitting results in increased mobility through two mechanisms: the average mobility in the Δ2 valleys rises because a larger number of electrons occupy the higher-mobility Δ2 subbands, and the modification of the bottom of the conduction band leads to a reduction of carrier scattering on phonons; together these increase the electron mobility even when the electric field has low values [7].

In the global (biaxial) strain approach, stress is introduced across the entire substrate. This is done by epitaxially growing a SiGe buffer layer on top of the silicon substrate. There are also other structures with a strained silicon layer on insulator (Strained-Si On Insulator, SSOI) [8], where strained/relaxed layers are formed on the buried oxide, and structures where the strained silicon layer is directly bonded to the buried oxide (Strained-Si Directly On Insulator, SSDOI) [5]. Common structures and substrates using biaxially strained layers are shown in Fig. 4.

When a silicon layer is grown on top of a relaxed SiGe layer, the atoms in the silicon layer align with those in the SiGe layer, which has a slightly larger crystalline lattice (since germanium atoms are larger than silicon atoms). The resulting increase in spacing (4% at most) between the silicon atoms produces biaxial strain in the silicon channel, which changes the shape of the energy bands for both electrons and holes. The result is increased mobility and increased channel drive current for a given device design, leading to improved performance. The most important advantage of this approach is that it creates biaxial stress, which can be used for both pMOS and nMOS devices [2][10].
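The "4% at most" figure follows from the Si and Ge lattice constants; a small sketch using Vegard's law (a linear interpolation of lattice constants, used here as an illustrative assumption) estimates the misfit strain for a given Ge fraction x:

```python
# Illustrative sketch: misfit strain of thin Si grown on relaxed
# Si(1-x)Ge(x), assuming Vegard's law for the alloy lattice constant.
A_SI = 5.431  # Si lattice constant, angstroms
A_GE = 5.658  # Ge lattice constant, angstroms

def sige_lattice_constant(x: float) -> float:
    """Lattice constant of relaxed Si(1-x)Ge(x) under Vegard's law."""
    return A_SI + x * (A_GE - A_SI)

def misfit_strain(x: float) -> float:
    """Biaxial tensile strain of a thin Si layer on relaxed Si(1-x)Ge(x)."""
    return (sige_lattice_constant(x) - A_SI) / A_SI

# A pure-Ge substrate gives the maximum mismatch, about 4.2%;
# a typical x = 0.3 buffer gives roughly 1.25% strain.
print(f"strain on Ge:         {misfit_strain(1.0):.2%}")
print(f"strain on Si0.7Ge0.3: {misfit_strain(0.3):.2%}")
```

Real buffers deviate slightly from Vegard's law, so these numbers are estimates only.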

III. IMPLEMENTATION TECHNOLOGIES FOR MOS STRUCTURES WITH STRAINED LAYERS

Strained-Si CMOS technology has been regarded as mandatory for future technology nodes because of the necessity to maintain high current drive. Strain improves CMOS performance by a simple change in materials, requiring less geometrical scaling of the transistor's gate length and oxide thickness.

A. Global Strain Technology

There are basically two ways, global and local, to introduce strain into the transistor channel. Biaxial strain is also referred to as global strain and is introduced by epitaxial growth of Si and SiGe layers (substrate engineering); the strain is induced by the lattice mismatch between Si and SiGe. Uniaxial strain is generated by local structural elements near the channel region and is also referred to as process-induced strain (PIS). The biaxially strained layer showed the best results in the case of long-channel transistors, but the contact resistance of the source and drain, the saturation velocity, self-heating, and relaxation of the strained layer complicate the improvement of performance in nano devices.
The uniaxial strain is introduced during production of CMOS circuits. However, scalability and geometrical dependence affect the efficiency and strength of strain in the strained layer. This type of strained layer increases the mobility of both types of carriers and the drain current in nMOS and pMOS transistors.


1) Local strain based on stress liners:
Local strain based on stress liners refers to a technique where stress is induced into the transistor channel by dielectric (e.g., nitride) layers. A thin layer of SiN can be deposited around the gate after front-end-of-line processing of the MOS structure, as shown in Fig. 5, which introduces strain in the channel. The stress along the channel direction has a different impact on holes and electrons; for this reason, two types of strain-inducing films are required: tensile films for nMOS and compressive films for pMOS. The deposition parameters determine which type of strain is created by the films. The integration of dielectric layers has increased nMOS and pMOS device performance by about 10-30% compared to unstrained reference transistors [10]. Since these methods are used in the production of logic LSI circuits for technologies below 90 nm, local strain technology has a growing practical importance in CMOS technology [11][12].

Fig. 4. Cross-sectional view of MOS structures using biaxial strain [9].

With a global strained silicon layer, many researchers have achieved a drive-current improvement in the range of 10-25% at gate lengths smaller than 100 nm [5][8]. There has also been success in the design of CMOS circuits with gate lengths of 25 nm and in the integration of strained silicon layers with high-dielectric-constant gate oxides and a metal gate [6].

The main advantages of global strain technology are the strength and uniformity of the strained layers and the possibility of implementing standard CMOS process steps with minimal modifications.
B. Local Strain Technology
This technique eliminates the limitations of global strain technology; the main focus is on the local introduction of structures and materials that cause strain in the channel of MOS transistors. This can be done in two ways:

Fig. 5. Device structures using different local strain techniques. STI means Shallow Trench Isolation. Black and white arrows indicate compressive and tensile strain, respectively [8].

2) Local strain based on SiGe in the source and drain:
Another method to introduce local strain is to replace the conventional silicon source and drain (S/D) regions by SiGe source and drain regions. The source and drain of the transistor are formed by etching a recess into the Si and selectively growing an epitaxial SiGe layer, as shown in Fig. 6. Because the lattice constant of SiGe is larger than that of Si, these regions induce a compressive stress in the channel.


Fig. 7. Drive current enhancement in strained SiGe (S/D) NMOS [12].

Fig. 6. Cross-sectional view of MOS structures using local strain in the source and drain regions [11].

It is also possible to introduce compressive and tensile strain simultaneously by using SiC together with SiGe or SiGeC as stressors for CMOS performance enhancement. The basic advantages of local strain technology are that strained CMOS is produced from a standard CMOS process with only slight modifications, and at low cost.
IV. EFFECT OF STRAIN ON ELECTRICAL CHARACTERISTICS

The strain in the channel region can be obtained by optimizing the stress introduced by the individual process steps and can as well be implemented by using strained-Si substrates. It is important to observe the electrical characteristics under combined strain, i.e., uniaxial (process-induced) together with biaxial (substrate-induced). Figs. 7 and 8 show the combined strain effects on the electrical characteristics of strained-Si MOS devices. Combining substrate-induced and process-induced strain in MOSFETs down to 45 nm gate lengths shows a performance enhancement of more than 62% with respect to conventional silicon MOSFETs [12].

Fig. 8. Transconductance enhancement in strained-Si NMOS [12].

V. CONCLUSION
Strain engineering is widely accepted as a promising technique to improve CMOS performance through significant mobility enhancement. Applying appropriate strain provides higher carrier velocity in MOS channels, resulting in higher drive current under a fixed supply voltage and gate oxide thickness. Development includes the optimal design of strain profiles in future CMOS structures and their realization through global strain techniques, local strain techniques, or their combination. The improvement in electrical characteristics of strained over unstrained CMOS transistors has been reviewed; future CMOS can therefore rely on the most favorable and reliable design of strained structures.
REFERENCES
[1] R. Chau, B. Doyle, S. Datta, J. Kavalieros, and K. Zhang, "Integrated nanoelectronics for the future," Nature Materials, vol. 6, pp. 810-812, 2007.


[2] T. P. Branin and B. L. Doki, "Strained Silicon Layer in CMOS Technology," Electronics, vol. 18, no. 2, December 2014.
[3] K. Rim, J. L. Hoyt, and J. F. Gibbons, "Transconductance enhancement in deep submicron strained Si N-MOSFETs," in IEDM Tech. Dig., pp. 707-710, 1998.
[4] S. Dhar, H. Kosina, V. Palankovski, E. Ungersboeck, and S. Selberherr, "Electron mobility model for strained-Si devices," IEEE Trans. Electron Devices, vol. 52, no. 4, pp. 527-533, Apr. 2005.
[5] K. Rim, E. P. Gusev, C. D'Emic, T. Kanarsky, H. Chen, J. Chu, J. Ott, K. Chan, D. Boyd, V. Mazzeo, B.-H. Lee, A. Mocuta, J. Welser, S. L. Cohen, M. Leong, and H.-S. Wong, "Mobility enhancement in strained Si NMOSFETs with HfO2 gate dielectrics," 2002 Symposium on VLSI Technology, pp. 12-13, June 2002.
[6] S. Takagi, T. Mizuno, N. Sugiyama, T. Tezuka, and A. Kurobe, "Strained-Si-on-Insulator (Strained-SOI) MOSFETs: Concepts, Structures and Device Characteristics," IEICE Transactions on Electronics, vol. 84, pp. 1043-1050, 2001.
[7] K. Rim, J. Hoyt, and J. Gibbons, "Fabrication and analysis of deep submicron strained-Si N-MOSFETs," IEEE Transactions on Electron Devices, vol. 47, pp. 1406-1415, 2000.
[8] S. Takagi, "Strain-Si CMOS Technology," in Advanced Gate Stacks for High-Mobility Semiconductors, Springer Berlin Heidelberg, ch. 1, 2007.
[9] K. J. Kuhn, A. Murthy, R. Kotlyar, and M. Kuhn, "Past, Present and Future: SiGe and CMOS Transistor Scaling," The Electrochemical Society Transactions, vol. 33, pp. 3-17, 2010.
[10] K. W. Ang, K. J. Chui, V. Bliznetsov, A. Du, N. Balasubramanian, M.-F. Li, G. Samudra, and Y.-C. Yeo, "Enhanced performance in 50 nm NMOSFETs with silicon-carbon source/drain regions," IEEE International Electron Devices Meeting (IEDM), pp. 1069-1071, Dec. 2004.
[11] C. K. Maiti, N. B. Chakrabarti, and S. K. Ray, Strained Silicon Heterostructures: Materials and Devices, IEE Circuits, Devices and Systems Series 12, The Institution of Engineering and Technology, 2001.
[12] S. S. Mahato, T. K. Maiti, R. Arora, A. R. Saha, S. K. Sarkar, and C. K. Maiti, "Strain Engineering for Future CMOS Technologies," International Conference on Computers and Devices for Communication (CODEC-06), 2006.
[13] E. Parton and P. Verheyen, "Strained silicon: the key to sub-45 nm CMOS," IMEC, Belgium.


Tunnel Field Effect Transistor: A Review

1Sawan, 2Dhiraj Kapoor, 3Rajiv Sharma
1M.Tech Scholar, ECE, Vaish College of Engineering, Rohtak, Haryana
2Assistant Professor, ECE, Vaish College of Engineering, Rohtak, Haryana
3Professor & HOD, ECE, Northern India Engineering College, New Delhi, India
E-mail: sawangupta412@gmail.com, kapoordhiraj79@gmail.com, rsap70@rediffmail.com


Abstract: The Tunnel Field Effect Transistor (TFET) is considered a very promising device for ultra-low power applications because of its prospect of a steep subthreshold slope. In this paper, the authors review TFET aspects such as the device structure, various configurations, and I-V characteristics, and compare these characteristics with those of conventional MOSFETs. The source-to-channel tunnel barrier height is lower in III-V heterojunction TFETs, which allows them to attain a steeper subthreshold slope and improved drive current. However, the ION/IOFF ratio of the TFET is lower than that of the MOSFET; it can be further improved by using a lower-bandgap channel material, higher doping levels, and abrupt doping profiles.

Keywords: TFET, Band-to-Band Tunneling (BTBT), Subthreshold Swing (SS), ON current (ION), OFF current (IOFF).

I. INTRODUCTION

MOSFET scaling into the nanoscale region is affected by various factors such as short channel effects, non-scalability of the subthreshold swing, threshold voltage roll-off, and high standby leakage current [1, 2]. These factors degrade the performance of MOS devices. One of the best alternatives to the standard MOS device in low-power applications is the Tunnel Field Effect Transistor (TFET). TFETs enable power-supply scaling to below 0.5 V [3]. A TFET is simply a gated p-i-n diode which operates in reverse-bias mode, so current flows in only one direction, and it uses band-to-band tunneling as the source carrier injection mechanism instead of the thermal carrier injection of a MOSFET [4]. In TFETs, the reverse-biased tunnel junction eliminates the high-energy tail present in the Fermi-Dirac distribution of the valence band electrons in the p+ source region and allows a subthreshold swing lower than 60 mV/dec at room temperature; swings of nearly 25 mV/dec have been reported [3, 5]. This reduces the power dissipation, so TFETs can achieve a much higher ION/IOFF ratio over a specified gate voltage swing compared to MOSFETs, making them attractive for low-VDD operation. Due to the reduced supply voltage (VDD) while keeping leakage current low, energy consumption can be minimized, which improves mobile device battery life [6]. TFETs have better immunity against short channel effects due to a built-in tunnel barrier. This gate-modulated tunnel barrier controls the I-V characteristics and allows TFETs to circumvent the thermionic limit. The Vt roll-off with scaling is also very small in TFETs, because the threshold voltage depends on the band bending in the small tunnel region rather than in the whole channel region [7]. In order to achieve a high band-to-band tunneling generation rate and large ION, small-bandgap materials should be used; these typically have low effective carrier masses, as in Ge and III-V materials. The ION/IOFF ratio can also be improved by using a germanium-source TFET, a SiGe-source TFET, or a III-V compound semiconductor in the source-to-channel region of the TFET [8, 9].

The direct BTBT and high electron mobility of III-V materials allow n-channel TFETs to achieve drive currents above 100 µA/µm [8]. However, p-channel TFETs (pTFETs) still require more research, because III-V materials have a high direct band-to-band tunneling (BTBT) rate but low hole mobilities, which leads to high channel resistance. SiGe or Ge might be used for pTFETs because they have high hole mobility, but their BTBT is indirect and the ON-state current is low [10]. Various multi-gate architectures such as Double Gate TFETs and Gate-All-Around (GAA) TFETs were proposed in order to improve the current driving capability and to achieve better gate control over the channel [11]. The fundamental challenge for realizing commercially competitive TFETs is the limited ON-current level, which is typically addressed by creating higher doping levels and abrupt doping profiles [12].
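The 60 mV/dec figure is the room-temperature thermionic limit, (kT/q)·ln(10). A short sketch (illustrative, not from the paper) computes this limit and shows how many decades of current modulation a given supply swing buys at different subthreshold swings:

```python
import math

# Thermionic limit of the subthreshold swing: SS = (kT/q) * ln(10).
K_B = 1.380649e-23    # Boltzmann constant, J/K
Q = 1.602176634e-19   # elementary charge, C

def ss_limit_mv_per_dec(temp_k: float = 300.0) -> float:
    """Minimum subthreshold swing of a thermionic (MOSFET) device, mV/dec."""
    return (K_B * temp_k / Q) * math.log(10.0) * 1e3

def decades_of_swing(vdd: float, ss_mv: float) -> float:
    """Decades of current modulation available over a supply swing vdd (V)."""
    return vdd * 1e3 / ss_mv

ss_300k = ss_limit_mv_per_dec()  # about 59.5 mV/dec at 300 K
print(f"MOSFET limit at 300 K: {ss_300k:.1f} mV/dec")

# At VDD = 0.5 V, a 25 mV/dec TFET spans 20 decades versus about 8 for a
# MOSFET, which is why steep-slope devices tolerate aggressive VDD scaling.
print(f"MOSFET decades at 0.5 V: {decades_of_swing(0.5, ss_300k):.1f}")
print(f"TFET decades at 0.5 V:   {decades_of_swing(0.5, 25.0):.1f}")
```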


II. TUNNEL FIELD EFFECT TRANSISTOR (TFET)

The basic TFET structure is similar to that of a MOSFET, except that the source and drain terminals of a TFET are doped with opposite types. The source and drain in an n-TFET are highly doped p-type and n-type regions, respectively; the intermediate channel region is a moderately doped p-type layer. TFET configurations include homojunction and heterojunction TFETs. If two materials in contact have the same band gap, their interface is known as a homojunction; if they have different band gaps, their interface is known as a heterojunction. In heterojunction TFETs, the source-to-channel barrier height is smaller than in homojunction TFETs, which increases the drive current and hence makes them a more suitable option for designers [13]. According to the ITRS roadmap, the 2018 guidelines for low-power applications require a gate length Lg = 13 nm. A GaSb/InAs Het-j TFET with a gate length (Lg) of 13 nm is shown in Fig. 1; a wire thinner than the ITRS guidelines is chosen to achieve close-to-ideal characteristics. The GaSb/InAs Het-j TFET has very high drive current because of its broken band-gap alignment, which makes it one of the leading TFET options [3].

tunneling process, the tunneling electron acquires a change in momentum by absorbing or emitting a phonon. Indirect tunneling is the main tunneling process in indirect-bandgap materials like silicon, because there the direct tunneling process is negligible: the higher barrier width decreases the transmission probability rapidly. The tunneling electrons are then transported to the drain through drift-diffusion and give the ON current. The energy band diagram for the ON/OFF states of the n-TFET is shown in Fig. 2.

In the OFF state of the TFET, when no gate voltage is applied, electrons do not accumulate in the intrinsic region and the tunneling barrier width becomes very large. Due to this large potential barrier, electrons from the valence band of the source cannot tunnel to the conduction band of the intrinsic region, giving an extremely low OFF current.
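The exponential sensitivity of the tunneling current to barrier width can be illustrated with a simple rectangular-barrier WKB estimate. The effective mass and barrier height below are assumed, purely illustrative numbers, not values from the paper:

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M0 = 9.10938e-31       # free-electron mass, kg
EV = 1.602176634e-19   # joules per eV

def wkb_transmission(width_m: float, barrier_ev: float = 0.5,
                     m_eff: float = 0.05 * M0) -> float:
    """WKB transmission through a rectangular barrier: T = exp(-2*kappa*w)."""
    kappa = math.sqrt(2.0 * m_eff * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# A gate bias that narrows the barrier from 10 nm to 2 nm raises the
# tunneling probability by several orders of magnitude, which is the
# essence of the TFET ON/OFF switching mechanism.
t_on, t_off = wkb_transmission(2e-9), wkb_transmission(10e-9)
print(f"T(2 nm)  = {t_on:.2e}")
print(f"T(10 nm) = {t_off:.2e}")
print(f"on/off ratio ~ {t_on / t_off:.1e}")
```

Real devices have triangular, bias-dependent barriers, so only the exponential trend carries over.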

Fig. 2. Energy band diagram in the ON/OFF states of the n-channel TFET [4].

Fig. 1. Schematic diagram of conventional n-TFET [14].

A. Working Principle
On applying a sufficient positive gate voltage (above the threshold voltage), electron accumulation occurs in the intrinsic region and the tunneling barrier width reduces, which increases the electric field near the p-n junction. Band-to-band tunneling (BTBT) occurs when the electric field across the p-n junction is sufficiently large: electrons from the valence band of the source region tunnel to the conduction band of the intrinsic region without the assistance of traps. The tunneling process in which an electron travels from the valence band to the conduction band without the absorption or emission of a phonon is known as direct tunneling. On the other hand, in the indirect

B. I-V Characteristics
The I-V characteristics of the heterojunction TFET proposed by S. Chander et al., for different values of drain-to-source voltage (VDS), are shown in Fig. 3 [1]. The authors show that the ION and IOFF of the TFET depend strongly on VDS: as VDS increases, ION increases without any increase in subthreshold swing, and IOFF also shows a strong VDS dependence.

Fig. 3. IDS-VGS characteristics of conventional Het-j TFET for different VDS [1].

For the Het-j TFET, as VDD increases beyond 0.3 V, IOFF starts increasing significantly even with a 10 nm drain underlap. The TFET can support IOFF < 10 pA/µm at 0.3 V; the lowest IOFF achievable at 0.5 V is 1 nA/µm [6]. The TFET also shows a lower ION/IOFF ratio than the conventional MOSFET. The ION is low because of the different carrier injection mechanism of the TFET compared with the MOSFET. There are various methods to increase the ON current. One of them is using a lower equivalent oxide thickness (EOT): the thickness, usually given in nanometers (nm), that an oxide film would need to produce the same effect as the high-k material. A lower EOT increases the coupling between the gate voltage and the channel potential, which gives a higher ION. Another method to increase the ON current is to use a lower-bandgap material, which increases the BTBT generation rate. The ON current can also be increased by using a more abrupt source doping profile, which reduces the tunneling barrier width [4].
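The bandgap dependence of the ON current follows from the Kane-type BTBT rate, G proportional to (E^2/sqrt(Eg))*exp(-B*Eg^1.5/E). The sketch below uses an assumed constant B and field E for illustration, so only the relative ordering of materials is meaningful:

```python
import math

def kane_btbt_rate(e_field_v_per_cm: float, eg_ev: float,
                   b: float = 2.0e7) -> float:
    """Relative Kane-model BTBT generation rate (arbitrary units).

    b is an illustrative constant in V/(cm*eV^1.5); real values are
    material- and process-dependent.
    """
    return (e_field_v_per_cm ** 2 / math.sqrt(eg_ev)) * \
        math.exp(-b * eg_ev ** 1.5 / e_field_v_per_cm)

E = 1.0e6  # junction field, V/cm (illustrative)
# Smaller bandgaps give exponentially higher tunneling generation rates,
# which is why Ge and III-V channels boost ION over silicon.
for name, eg in [("InAs", 0.354), ("Ge", 0.66), ("Si", 1.12)]:
    print(f"{name:4s} (Eg = {eg:.3f} eV): relative rate {kane_btbt_rate(E, eg):.3e}")
```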
III. TFET vs. CONVENTIONAL MOSFET
The comparison between the I-V characteristics of a Het-j n-channel TFET and a Si MOSFET is shown in Fig. 4. The TFET and MOSFET characteristics have subthreshold swings (in their steepest regions) of 41 mV/dec and 63 mV/dec, respectively [6]. Het-j TFETs have high tunneling current capability, and their performance is very close to that of CMOS. Among the Het-j TFETs, GaSb/InAs has the highest drive current; for this reason, Het-j TFETs are preferred by designers [6].

Fig. 5. Comparison of gate capacitance characteristics between N-TFET and conventional MOSFET [14].

Aside from the numerous advantages of the TFET, there are some limitations as well, which degrade device performance. The major shortcoming is the low ION. This is caused by the small BTBT rate; it can be overcome by using narrow-bandgap materials, but their performance is still not comparable to standard CMOS [15]. Since the TFET is a gated p-i-n diode which operates only in the reverse-biased condition, current flows in only one direction; because of this asymmetry in current flow, the area required by a TFET-based SRAM cell is larger than that of a standard CMOS-based SRAM cell [9]. This asymmetry also causes ambipolar behavior. Ambipolarity is the conduction of current in two different directions (i.e., for positive as well as negative gate voltage). It occurs when the tunnel junction is transferred from the source side to the drain side for gate voltages Vgs < 0 in an N-TFET; this ambipolar leakage degrades the subthreshold swing and hence increases IOFF [15].
IV. COMPARISON BETWEEN N-TFET AND
P-TFET

Fig. 4. I-V characteristics of Het-j N-TFET compared to Si MOSFET [6].

Another important parameter is the gate capacitance Cg. Dynamic power can be reduced by using Het-j N-TFETs, and they also minimize circuit delay because their gate capacitance is low compared to the Si MOSFET, as shown in Fig. 5, due to the low density of states (DOS) in the conduction band. To achieve steep-subthreshold-slope TFETs, a low-defect channel material is required, but other intrinsic issues such as phonon scattering also have to be considered [14].

The comparison between N-TFET and P-TFET drive currents is shown in Fig. 6. P-TFETs fabricated from III-V materials have a subthreshold swing of nearly 60 mV/dec, because the large Fermi degeneracy of the source creates a region where the subthreshold swing is determined by the thermal tail [14]. However, by lowering the source doping, the subthreshold swing of the P-TFET can be improved and the Fermi degeneracy reduced. The doping level must be adjusted very carefully, though; otherwise the electric field at the source-channel junction will decrease significantly, which reduces the drive current.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
192



Fig. 6. Comparison of N-TFET and P-TFET drain currents for different materials to Si MOSFET [14].

GeSn has symmetric P-TFET and N-TFET characteristics, but with a lower IDsat than the Het-j N-TFET.
V. CONCLUSION
The authors demonstrated the beneficial aspects of the TFET: the device is almost free from short-channel effects, performs better in terms of low subthreshold swing, higher ION and reduced leakage current, and is compatible with the CMOS process. Based on the simulation studies carried out by different authors, a subthreshold swing of less than 60 mV/dec and an ION/IOFF ratio of around 10^7 have been observed. Realization of TFETs faces certain challenges: high-quality III-V materials are required, and very thin body dimensions are needed to achieve good electrostatics. It is expected that the performance of TFETs can be further improved by reducing diameters, using lower-bandgap channel materials, optimizing the doping profile and improving the fabrication process.

REFERENCES
[1] S. Chander, B. Bhowmick and S. Baishya, "Heterojunction fully depleted SOI-TFET with oxide/source overlap," Superlattices and Microstructures, vol. 86, pp. 43-50, 2015.
[2] K. Tomioka, M. Yoshimura and T. Fukui, "Steep-slope tunnel field-effect transistors using III-V nanowire/Si heterojunction," in Proc. VLSI Technology (VLSIT) Symp., pp. 47-48, 2012.
[3] Sharma, A. A. Goud and K. Roy, "GaSb-InAs n-TFET with doped source underlap exhibiting low subthreshold swing at sub-10-nm gate-lengths," IEEE Electron Device Letters, vol. 35, no. 12, pp. 1221-1223, 2014.
[4] W. Y. Choi, B. G. Park, J. D. Lee and T. J. K. Liu, "Tunneling field-effect transistors (TFETs) with subthreshold swing (SS) less than 60 mV/dec," IEEE Electron Device Letters, vol. 28, no. 8, pp. 743-745, 2007.
[5] R. Vishnoi and M. J. Kumar, "An accurate compact analytical model for the drain current of a TFET from sub-threshold to strong inversion," IEEE Trans. on Electron Devices, vol. 62, pp. 478-484, 2015.
[6] U. E. Avci, D. H. Morris and I. A. Young, "Tunnel field-effect transistors: prospects and challenges," IEEE Journal of the Electron Devices Society, vol. 3, no. 3, pp. 88-95, 2015.
[7] T. S. Arun Samuel and N. B. Balamurugan, "Analytical modeling and simulation of germanium single gate silicon on insulator TFET," Journal of Semiconductors, vol. 35, no. 3, pp. 034002-1 to 034002-6, 2014.
[8] P. Guo, Y. Yang, Y. Cheng, G. Han, C. K. Chia and Y. C. Yeo, "Tunneling field-effect transistor (TFET) with novel Ge/In0.53Ga0.47As tunneling junction," ECS Transactions, vol. 50, no. 9, pp. 971-978, 2012.
[9] H. Nam, M. H. Cho and C. Shin, "Symmetric tunnel field-effect transistor (S-TFET)," Current Applied Physics, vol. 15, pp. 71-77, 2015.
[10] Y. Yang et al., "Towards direct band-to-band tunneling in p-channel tunneling field effect transistor (TFET): technology enablement by germanium-tin (GeSn)," IEEE International Electron Devices Meeting (IEDM), pp. 16.3.1-16.3.4, 2012.
[11] M. Saxena, Upasana, R. Narang and M. Gupta, "Simulation study for dual material gate hetero-dielectric TFET: static performance analysis for analog applications," Annual IEEE India Conference (INDICON), pp. 1-6, 2013.
[12] Ganjipour, J. Wallentin, M. T. Borgström, L. Samuelson and C. Thelander, "Tunnel field-effect transistors based on InP-GaAs heterostructure nanowires," ACS Nano, vol. 6, no. 4, pp. 3109-3113, 2012.
[13] Dewey et al., "Fabrication, characterization, and physics of III-V heterojunction tunneling field effect transistors (H-TFET) for steep sub-threshold swing," IEEE International Electron Devices Meeting (IEDM), pp. 33.6.1-33.6.4, 2011.
[14] U. E. Avci, D. H. Morris, S. Hasan, R. Kotlyar, R. Kim, R. Rios, D. E. Nikonov and I. A. Young, "Energy efficiency comparison of nanowire heterojunction TFET and Si MOSFET at Lg = 13 nm, including P-TFET and variation considerations," IEEE International Electron Devices Meeting (IEDM), pp. 33.4.1-33.4.4, 2013.
[15] P. Wang, Y. Zhuang, C. Li and Z. Jiang, "Analytical modeling for double-gate TFET with tri-material gate," IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), pp. 1-3, 2014.


Load Balancing and QoS in MANET

1Sakshi Dhawan, 2Sudhir Vasesi
1M.Tech Scholar, ECE, BMIET, Sonepat, Haryana
2Assistant Professor, ECE, BMIET, Sonepat, Haryana
E-mail: sakshidhawan8@yahoo.in
Abstract - A MANET is an infrastructure-less network whose nodes can exchange information with each other using limited resources. In MANETs it is a challenge to provide a QoS guarantee to the user or application because of several disadvantages, such as unstable wireless links and irregular availability. MANETs require an efficient routing protocol that achieves the quality-of-service (QoS) mechanism. It is also essential to consider the load-balancing issue in the routing mechanism. A balanced load distribution is difficult to achieve with shortest-path routing, because it creates a heavier load on the central node. In this paper, we discuss the various challenges that occur in providing QoS.
Keywords - MANETs, QoS, Load balancing, Routing, DSR

I. INTRODUCTION

There are areas with no communication infrastructure or permanent signal coverage, where it is inconvenient for mobile users to communicate. This problem can be overcome by the formation of an infrastructure-less, or ad hoc, network [1]. A Mobile Ad Hoc Network (MANET) is formed dynamically by independent mobile hosts without the assistance of a centralised infrastructure. The major application areas of MANETs are military operations, emergency rescue operations, law enforcement, vehicular networking, deep rural areas and convention centres [4].
A MANET is a self-organising network that merges wireless communication with a high degree of node mobility; together, the nodes form an arbitrary topology. In other networks, basic operations like routing and packet forwarding are performed by a single dedicated node, but in MANETs every node can perform these operations. MANETs are multihop: nodes within radio range communicate directly over the wireless link, whereas nodes out of range rely on intermediate nodes to act as routers for them. A node in a MANET can move, leave and join the network at any time, but routes have to be updated after every node movement [2].
Routing in MANETs means selecting an efficient path to send traffic from the source to the destination node. Each node decides its own route, and the routing tables maintain a record of each node's routes to different destinations [5]. Providing a route to traffic in MANETs is an extreme challenge because of problems like limited communication resources, node mobility and a large number of nodes [3]. These resource limitations call for an efficient routing protocol that uses the resources skilfully. While providing routes to the nodes, the central node sometimes has to pass more traffic, which causes the load-balancing problem.
QoS in MANETs is the network's ability to satisfy its users. QoS can be measured by parameters such as bandwidth, throughput, end-to-end delay and jitter. To provide QoS we need a real-time routing protocol [11].
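The QoS parameters just listed can be computed directly from a packet trace. The sketch below is our own illustration (the helper name, timestamps and packet sizes are hypothetical, not taken from any simulator used in the paper); it derives average end-to-end delay, jitter and throughput from per-packet send/receive times:

```python
# Compute basic QoS metrics from a list of (send_time, recv_time, size_bytes)
# tuples, one per successfully delivered packet; times are in seconds.

def qos_metrics(packets):
    delays = [rx - tx for tx, rx, _ in packets]          # end-to-end delays (s)
    avg_delay = sum(delays) / len(delays)
    # Jitter: mean absolute difference between consecutive packet delays.
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1)) if len(delays) > 1 else 0.0
    duration = max(rx for _, rx, _ in packets) - min(tx for tx, _, _ in packets)
    throughput = sum(size for _, _, size in packets) * 8 / duration  # bits/s
    return avg_delay, jitter, throughput

# Three packets of 500 bytes each, delivered over roughly one second.
trace = [(0.0, 0.05, 500), (0.5, 0.58, 500), (1.0, 1.06, 500)]
avg_delay, jitter, throughput = qos_metrics(trace)
```

A routing protocol that balances load well should lower the delay and jitter figures while keeping throughput high.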
II. QoS IN MANETS

QoS is defined as a set of services provided to the user and application to meet certain requirements while the network transports a packet stream from one point to another. The network has to guarantee the user that these necessities will be met after accepting a connection request from the user [7].
QoS routing protocols build mainly on two architectures, IntServ and DiffServ. In IntServ (Integrated Services), a reservation method at the nodes of a path maintains QoS. DiffServ (Differentiated Services) classifies the traffic travelling into the ad hoc network; there are one or more traffic classes, and these classes are treated according to their priority.
QoS parameters in the wired network are characterised by multimedia traffic. In the ad hoc case, new QoS constraints are required due to the dynamic


network topology, traffic load conditions, and time-variant QoS parameters like throughput, latency and power capacity [8].

Fig. 1: Some challenges when providing QoS in MANETs [5]

In wireless communication it is more difficult and complex to guarantee QoS to a user or application than in a wired network. Various characteristics of MANETs make QoS support complicated: bandwidth is often too limited, nodes are free to move, join or leave the network, energy is low, and power is less than required [7]. To overcome these challenges, a routing protocol is designed as per the network requirements. Ad hoc routing protocols control how efficiently the nodes decide their path from the source to the destination. Most of the conventional QoS routing protocols, like DSDV (Destination-Sequenced Distance Vector Routing), AODV (Ad hoc On-Demand Distance Vector Routing) and DSR (Dynamic Source Routing), select the transmission path for nodes according to network resource availability and QoS requirements. But when the route selection involves two or more additive parameters, the selection of a route becomes a problem [9].
By adopting the path-reservation method, IntServ can provide the service of a circuit-switched network within a packet-switched network. With the help of IntServ and DiffServ, the QoS routing protocols are divided into three groups: routing protocols extended to support QoS, routing protocols designed for QoS, and QoS routing protocols designed for real-time applications.

A. Routing protocols extended to support QoS
In this group, existing protocols are enhanced to improve QoS. QOLSR is an enhancement of the OLSR protocol, which is a proactive protocol. There are also various enhancements of the DSR and AODV protocols, which find routes only on demand.
B. Routing protocols designed for QoS
These work directly on the IntServ and DiffServ architectures of QoS and are divided into three categories: protocols based on IntServ, protocols based on DiffServ, and protocols based on both. The first category includes CEDAR (Core Extraction Distributed Ad Hoc Routing) and AQOR (Ad hoc QoS On-demand Routing). The second category focuses on DiffServ and includes MQRD (Multipath QoS Routing protocol supporting DiffServ). In the third category, protocols combine both IntServ and DiffServ principles, like FQMM (Flexible QoS Model for MANETs).
C. QoS routing protocols designed for real-time applications
Transmitting real-time data in ad hoc networks is a challenge, and finding a solution is a difficult task. Many researchers have tried to solve this problem by enhancing existing protocols or making them more efficient through different techniques [10].
In MANETs it is difficult to provide guaranteed QoS according to the application or user requirements. This is due to drawbacks of MANETs like unstable wireless links, restricted capacity and irregular availability, which affect the performance of the QoS parameters [11].
III. LOAD BALANCING IN MANETS
Load balancing refers to transferring traffic from the source to the destination without burdening a particular node; no node should have to transmit more traffic than the others. In ad hoc networks there are several types of load balancing. The first is path-based: fewer routes are selected so as to balance the load. The second is delay-based: nodes with a high delay in transmitting data are avoided. The third is traffic-based: the main aim is to distribute the traffic equally among the nodes [12].
The cause of the load-balancing problem can be finding


the shortest route in a scenario that makes the central node transmit more traffic. If the load is not balanced, it causes various problems: delay in packet delivery, an increased packet-drop ratio, reduced overall throughput and increased end-to-end delay [12]. So a routing protocol is desired that overcomes this problem by defining the right route for the nodes to use.
As per the analysis done in [16], DSR almost always has a lower routing load than AODV, which is a reason to prefer DSR over AODV.
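The idea of avoiding an overloaded central node can be illustrated with a toy route selector. The sketch below is our own illustration (the node names and load table are hypothetical); among candidate routes it prefers the one whose most heavily loaded intermediate node carries the least traffic, rather than simply the shortest route:

```python
def pick_balanced_route(routes, load):
    """Choose the route minimising the worst intermediate-node load;
    break ties by route length (shorter is better)."""
    def cost(route):
        intermediates = route[1:-1]          # exclude source and destination
        worst = max((load.get(n, 0) for n in intermediates), default=0)
        return (worst, len(route))
    return min(routes, key=cost)

# Candidate routes from S to D; node C is the congested central node.
routes = [["S", "C", "D"],                   # shortest, but through hot spot C
          ["S", "A", "B", "D"]]              # longer, but lightly loaded
load = {"C": 90, "A": 10, "B": 15}
best = pick_balanced_route(routes, load)
```

Here the longer route through A and B wins because its worst intermediate load (15) is far below that of the congested central node C (90).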
IV. COMPARISON BETWEEN VARIOUS ROUTING PROTOCOLS

Table 1: Comparison between AODV, DSR and DSDV routing protocols [17, 18]

Parameter | AODV | DSR | DSDV
Protocol type | Ad hoc on-demand distance vector routing | Dynamic source routing | Destination-sequenced distance vector
Routing approach | Reactive | Reactive | Proactive
Route | Single route | Multiple routes | Multiple routes
Throughput | Best | Better than DSDV | Low
Packet delivery ratio | High | Performs well when the number of nodes is small, but declines drastically as the number of nodes increases | Comparatively less than DSR and AODV
Routing overhead | More than DSR | Lower | Higher
Normalised routing load (NRL) | Consistent, but worse as the number of nodes increases | Much higher than AODV when the network load is increased | Higher routing load than AODV and DSR

As the comparison between DSDV, AODV and DSR shows, the routing overhead is lower in DSR, which helps in creating fewer stale routes; DSR therefore requires fewer routing packets to maintain the transmission of data packets [17, 18].

V. DSR (DYNAMIC SOURCE ROUTING)
Dynamic Source Routing is a reactive, on-demand routing protocol: it works only when a request is made. DSR has two main operations, Route Discovery and Route Maintenance. In Route Discovery, a node that does not have a valid route discovers a path by broadcasting a route request containing its own address and the destination address. Each node that receives the request checks whether it has a path to the requested destination; if it does, it forwards the packet to the destination and the source gets a route reply. If it does not have a valid route, the node inserts its own address and rebroadcasts the request. Route Maintenance gives intimation of a link breakage in the network: when a node discovers a link breakage, it sends a route error to the source node [11]. There are various best-effort routing protocols such as DSDV, DSR, AODV and TORA [17]. By modifying the DSR routing protocol, we can try to overcome the problem of load balancing.
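The DSR route-discovery flooding described in this paper can be sketched in a few lines. This toy simulation is our own illustration (it ignores timing, duplicate-suppression subtleties and Route Maintenance): a route request accumulates node addresses until it reaches the destination, just as each intermediate node appends its own address and rebroadcasts:

```python
from collections import deque

def dsr_route_discovery(topology, source, dest):
    """Breadth-first flood of a DSR-style route request.

    topology maps each node to the set of neighbours within radio range.
    The request carries the list of addresses traversed so far; the first
    copy to reach `dest` yields the route returned in the route reply."""
    queue = deque([[source]])
    visited = {source}                          # nodes that already rebroadcast
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dest:
            return route                        # route reply sent back to source
        for neighbour in topology[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(route + [neighbour])   # append address, rebroadcast
    return None                                 # no route: source gets no reply

topology = {"S": {"A", "B"}, "A": {"S", "C"}, "B": {"S", "C"},
            "C": {"A", "B", "D"}, "D": {"C"}}
route = dsr_route_discovery(topology, "S", "D")
```

In this hypothetical topology the discovered route passes through the central node C, which is exactly the kind of hot spot a load-balancing extension of DSR would try to avoid.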

VI. CONCLUSION
In this paper we reviewed the challenges that occur while providing QoS in MANETs, the parameters affecting QoS, and the various routing protocols designed to improve QoS in MANETs. Further, we discussed load balancing: an unbalanced load degrades the performance of MANETs by making the centralized node heavier, delaying its transmissions and affecting the QoS parameters end-to-end delay, throughput, jitter and packet loss. The DSR routing protocol is preferred for the load-balancing problem as it has a lower routing load than AODV.
VII. FUTURE WORK

We propose to implement the Effectual DSR algorithm, which will try to balance the traffic-based load and also improve quality-of-service metrics such as end-to-end delay, packet loss and jitter. We will also try to improve the throughput in MANETs.


REFERENCES
[1] Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu and Jorjeta Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," Proceedings of the Fourth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '98), October 25-30, 1998, Dallas, Texas, USA.
[2] Security of Self-Organizing Networks: MANET, WSN, WMN, VANET, edited by Al-Sakib Khan Pathan.
[3] Xiaoyan Hong, Kaixin Xu and Mario Gerla, "Scalable Routing Protocols for Mobile Ad Hoc Networks," ONR MINUTEMAN project under contract N00014-01-C-0016, in part by DARPA under contract DAAB07-97-C-D321.
[4] Manu J. Pillai, M. P. Sebastian and S. D. Madhukumar, "Dynamic Multipath Routing for MANETs: A QoS Adaptive Approach," IEEE, 2013.
[5] "Routing in Mobile Ad Hoc Networks," InTech.
[6] Prasant Mohapatra, Jian Li and Chao Gui, "QoS in Mobile Ad Hoc Networks," IEEE Wireless Communications, June 2003.
[7] Gabriel Ioan Ivascu, Samuel Pierre and Alejandro Quintero, "QoS routing with traffic distribution in mobile ad hoc networks," Mobile Computing and Networking Research Laboratory (LARIM), Department of Computer Engineering, École Polytechnique de Montréal, Canada.
[8] Masoumeh Karimi, "Quality of Service (QoS) Provisioning in Mobile Ad-Hoc Networks (MANETs)."
[9] Meng Limin and Song Wenbo, "Routing Protocol Based on Grover's Searching Algorithm for Mobile Ad-hoc Networks," China Communications, March 2013.
[10] Sofiane Ouni, Jihen Bokri and Farouk Kamoun, "DSR based Routing Algorithm with Delay Guarantee for Ad Hoc Networks," Journal of Networks, vol. 4, no. 5, July 2009.
[11] Hanif Maleki, Mehdi Kargahi and Sam Jabbehdari, "RTLB-DSR: a Load-Balancing DSR Based QoS Routing Protocol in MANETs," 4th International Conference on Computer and Knowledge Engineering (ICCKE), 2014.
[12] Zhijing Xu, Kang Wang and Liu Qi, "Adaptive Threshold Routing Algorithm with Load-balancing for Ad Hoc Networks," Proceedings of the 2009 International Symposium on Web Information Systems and Applications.
[13] www1.i2r.a-star.edu.sg/~winston
[14] Silvia Giordano, "Mobile Ad-Hoc Networks."
[15] Stefano Basagni, Marco Conti, Silvia Giordano and Ivan Stojmenovic, Mobile Ad Hoc Networking.
[16] Mamoun Hussein Mamoun, "Important Characteristic of Differences between DSR and AODV Routing Protocol," MCN 2007 Conference, November 7-10, 2007.
[17] Ramandeep and Sangeeta Monga, "Comparison of MANET Routing Protocol: A Review," IOSR Journal of Electronics and Communication Engineering (IOSR-JECE).
[18] Shabana Sultana and C. Vidya Raj, "Packet Delivery Ratio and Normalized Routing Load Analysis on Ad-hoc Network Protocols," Elsevier.


'Human Detection' Through Computer Vision as a Means for Fighting Poaching
Using Raspberry Pi B+ and GSM GPRS SIM900A module

1Paurush Dube, 1Harsh Joshi, 1Sandeep Sharma, 2Divya Arora
1Electronics and Communication Engineering Department, DIT University, Dehradun, India
2Assistant Professor, ECE, NIEC, New Delhi, India
E-mail: paurushdube17@gmail.com
Abstract - As the number of species entering the list of "endangered animals" continues to increase, and the population levels of almost all wildlife species continue to decrease by the day, it becomes imperative for us to find innovative ways and means to curb this phenomenon, lest an ecological imbalance be just around the corner.
Through this paper we present a device that would help put a check on poaching, which without a doubt is the biggest reason behind these declining numbers. The device aims at facilitating surveillance of forest areas by detecting the presence of poachers in reserved forest areas and, in turn, alerting the concerned authorities about the same.
'Human detection' comprises the core of this project, and we employ computer vision algorithms for realizing it.
Keywords OpenCV, Imutils, Raspberry Pi B+,
GSM Module, non-maxima suppression
I. INTRODUCTION

Despite the numerous measures adopted by forest officials for preventing the entry of poachers into reserved forest areas (viz. barbed wires, patrolling, CCTV cameras), it is usually found that poachers manage to find their way into the forest. The reason is that all of these conventional methods of surveillance call for continuous monitoring, which, in pragmatic terms, isn't always possible. As a result, no matter how efficiently the area under surveillance is monitored, there are always certain 'gaping holes' which the poachers use well to their advantage. In order to make our surveillance effective we must try to eliminate the need for continuous monitoring. This may be achieved by using algorithms which specifically look for 'human figures', which otherwise would not be present in the premises of the forest. This would not only provide us with timely information about the presence of poachers, but would also greatly reduce the time and effort being put into surveillance. In addition, this method proves to be rather cost-effective compared to other conventional means.
There have been several developments along the same lines. For example, Margarita Mulero-Pázmány and Roel Stolper used drones to monitor rhinoceros populations and also to detect poaching activities [1]. However, due to stability issues this approach isn't being used as widely as it could be; there is also the problem of short flight time. Google's popular Google Earth platform is also being used for the same purposes, although the image processing technique employed is largely different.
What we present here is more of a "modified stealth camera" which can remotely transmit signals and specifically send an alert signal in case human activity is detected. The major components and libraries used are as follows:

1. Raspberry Pi B+ board: a popular single-board computer with wide-ranging applications in domains such as image processing and robotics. Given its small size, it provides a decent amount of processing power with a good degree of reliability. Here it serves as the processing unit and synchronizes the actions of the attached devices.
2. Raspberry Pi camera module: the module is 25 mm x 20 mm x 9 mm in size and can produce 1080p, 720p and 640x480p videos [3].
3. GSM GPRS SIM900A module: the module may be used to send and receive SMS messages by means of a SIM card. It can easily be interfaced with the Raspberry Pi board.
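The SMS path through the SIM900A can be sketched with standard GSM text-mode AT commands (AT+CMGF/AT+CMGS). The helper below is our own illustration, not code from the paper; the serial-port name and phone number are placeholders:

```python
def build_sms_commands(number, text):
    """Return the AT command sequence that sends `text` to `number`
    in GSM text mode; Ctrl-Z (character 26) terminates the message body."""
    return [
        "AT\r",                               # sanity check: modem replies OK
        "AT+CMGF=1\r",                        # select SMS text mode
        'AT+CMGS="{}"\r'.format(number),      # start message to this number
        text + chr(26),                       # message body, ended by Ctrl-Z
    ]

commands = build_sms_commands("+911234567890", "ALERT: human detected")

# On the Raspberry Pi this sequence would be written to the module over the
# serial port, e.g. with pyserial (not executed here):
#   import serial, time
#   port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1)
#   for cmd in commands:
#       port.write(cmd.encode()); time.sleep(1)
```

A short delay between commands gives the module time to answer each one before the next is sent.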


4. OpenCV: OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, originally developed by the Intel research center in Nizhny Novgorod (Russia), later supported by Willow Garage and now maintained by Itseez [4].
5. Imutils package: a series of convenience functions that make basic image processing operations such as translation, rotation, resizing, skeletonization, and displaying Matplotlib images easier with OpenCV and Python.
III. HUMAN DETECTION

As already mentioned, the basic aim of this project is to detect the presence of humans. For this we identify certain features which are specific to humans, and based on the analysis the algorithm determines whether a human is present or not. Images for this purpose are obtained using the Raspberry Pi camera module. The technique may be broken down into the following steps:

A. Non-Maxima Suppression Algorithm
The gist of the non-maxima suppression algorithm is to take multiple, overlapping bounding boxes and reduce them to a single bounding box:

Fig 1: (Left) Multiple bounding boxes are falsely detected for the
person in the image. (Right) Applying non-maxima suppression
allows us to suppress overlapping bounding boxes, leaving us with
the correct final detection.

This helps reduce the number of false positives reported by the final object detector.
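A minimal greedy version of non-maxima suppression can be sketched as follows (a simplified illustration of the technique, not the exact code used by the authors; the overlap threshold is an assumed value):

```python
import numpy as np

def non_max_suppression(boxes, overlap_thresh=0.65):
    """Greedy NMS: keep the bottom-most box, drop boxes overlapping it
    by more than overlap_thresh, repeat. Boxes are (x1, y1, x2, y2)."""
    if len(boxes) == 0:
        return np.empty((0, 4))
    boxes = np.asarray(boxes, dtype=float)
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = np.argsort(y2)                        # process by bottom coordinate
    keep = []
    while len(idxs) > 0:
        i = idxs[-1]
        keep.append(i)
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[idxs[:-1]])
        yy1 = np.maximum(y1[i], y1[idxs[:-1]])
        xx2 = np.minimum(x2[i], x2[idxs[:-1]])
        yy2 = np.minimum(y2[i], y2[idxs[:-1]])
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        overlap = (w * h) / area[idxs[:-1]]
        idxs = idxs[:-1][overlap <= overlap_thresh]  # drop heavy overlaps
    return boxes[keep].astype(int)

# Two heavily overlapping detections collapse to one box; the third survives.
picked = non_max_suppression([(10, 10, 60, 110), (12, 12, 62, 112),
                              (200, 50, 250, 150)])
```

The two near-duplicate boxes around the same person are merged into a single detection, while the distant box is untouched.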
B. Initializing the Human Detector
We first initialize the Histogram of Oriented Gradients (HOG) descriptor. Then we set the Support Vector Machine to be a pre-trained human detector, loaded via the cv2.HOGDescriptor_getDefaultPeopleDetector() function.
C. Loading our image off disk and resizing it to a maximum width of 400 pixels
The reason we reduce the image dimensions is two-fold:
i) Reducing the image size ensures that fewer sliding windows in the image pyramid need to be evaluated (i.e., have HOG features extracted and then passed on to the linear SVM), thus reducing detection time (and increasing overall detection throughput).
ii) Resizing our image also improves the overall accuracy of the human detection (i.e., fewer false positives) [5].
D. Constructing an Image Pyramid
The detectMultiScale method constructs an image pyramid with scale=1.05 and a sliding-window step size of (4, 4) pixels in the x and y directions, respectively. The size of the sliding window is fixed at 64 x 128 pixels, as suggested by Dalal and Triggs [6].
The detectMultiScale function returns a 2-tuple of rects, the bounding-box (x, y)-coordinates of each person in the image, and weights, the confidence value returned by the SVM for each detection. A larger scale will evaluate fewer layers in the image pyramid, which can make the algorithm faster to run. However, too large a scale (i.e., too few layers in the image pyramid) can lead to pedestrians not being detected. Similarly, too small a scale dramatically increases the number of image-pyramid layers that need to be evaluated. Not only can this be computationally wasteful, it can also dramatically increase the number of false positives detected by the pedestrian detector. That said, the scale is one of the most important parameters to tune when performing pedestrian detection [7].
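The trade-off around the scale parameter can be quantified with a small counting sketch (our own illustration, using the standard 64 x 128 Dalal-Triggs window; the image sizes are hypothetical):

```python
def pyramid_layers(image_w, image_h, win_w=64, win_h=128, scale=1.05):
    """Number of image-pyramid layers a sliding-window detector evaluates:
    the image shrinks by `scale` per layer until the 64x128 detection
    window no longer fits inside it."""
    layers = 0
    w, h = float(image_w), float(image_h)
    while w >= win_w and h >= win_h:
        layers += 1
        w, h = w / scale, h / scale
    return layers

# A finer scale means many more layers, hence more windows to evaluate:
fine = pyramid_layers(400, 300, scale=1.05)    # many layers, slow but thorough
coarse = pyramid_layers(400, 300, scale=1.5)   # few layers, fast but may miss people
```

This makes concrete why resizing the input to a 400-pixel maximum width and choosing the scale carefully together control the detection time.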
E. Drawing the finalized bounding boxes
After applying non-maxima suppression, we draw the finalized bounding boxes.
The results of this algorithm are shown below. The
images used here have been randomly picked from


VI. SENDING THE ALERT SIGNAL

If the human-detection algorithm makes a detection, it sets a flag, upon which the Raspberry Pi performs the following two functions:
i) It instructs the GSM module to send a predefined alert signal to a predefined phone number using the SIM card installed on the GSM module.
ii) The camera module saves the images it takes over the next ten minutes onto the memory of the Raspberry Pi board, so that the poachers may later be identified once they have been apprehended.
The images so obtained may easily be extracted by connecting the Raspberry Pi to a computer.

VII. EXPERIMENTAL RESULTS

We tested the device on a set of 122 images. Humans were correctly detected, and the alert signal sent, in about 81% of the images, in spite of overlapping boundaries in most of the cases. The efficiency of the algorithm is expected to improve with better tuning of the parameters of the detectMultiScale function. The test images were taken from the internet, and the results shown here are as seen on a computer screen.

Fig. 2 Result 1 of Algorithm

Fig. 3 Result 2 of Algorithm

VIII. CONCLUSION

Poaching poses a serious threat not just to the existence of wildlife but to the very survival of the human race. It is more than just our moral responsibility to try to stop the atrocities being committed against animals. In this work we have presented a simple, inexpensive and easy-to-implement methodology for catching poachers 'in the act' and alerting the concerned authorities as soon as a poacher is spotted. The effectiveness of this approach does not depend on strict monitoring but on effective deployment: the device must be deployed at places where animals are most likely to be poached, such as near water bodies. With improvements in the technique used for tuning the parameters, the efficiency of the device may be improved further.
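The two actions in steps (i) and (ii) above can be sketched as a small event handler. This is only an illustrative sketch: `send_alert` and `frame_source` are hypothetical stand-ins for the GSM module's SMS command and the Pi camera capture, and the phone number is a placeholder.

```python
ALERT_NUMBER = "+910000000000"   # placeholder; the real number is configured at deployment
SAVE_WINDOW_S = 600              # keep capturing for ten minutes after a detection

def send_alert(gsm_send, number=ALERT_NUMBER):
    # (i) predefined alert SMS through the GSM module's SIM card
    return gsm_send(number, "ALERT: possible poacher detected")

def save_window(frame_source, clock, storage, window_s=SAVE_WINDOW_S):
    # (ii) store every frame taken during the next ten minutes on the Pi's memory
    deadline = clock() + window_s
    while clock() < deadline:
        storage.append(frame_source())
    return storage

# Simulated run with a fake one-frame-per-second camera and clock
t = [0]
def clock():
    return t[0]
def frame_source():
    t[0] += 1
    return f"frame_{t[0]:04d}.jpg"

frames = save_window(frame_source, clock, [], window_s=SAVE_WINDOW_S)
print(len(frames))   # 600 frames saved over the ten-minute window at 1 fps
```

On the real device, the detection flag would come from the HOG-based detector, and the frames would be written to the SD card rather than kept in a list.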

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
200


Spectrum Sensing and Utilization Techniques for Cognitive Radio Systems: A Review

1Kamal Singh, 2Pradeep Kumar Gupta
1Research Scholar, Sharda University, Greater Noida
2M.Tech Scholar, Somani (PG) Institute of Tech. and Management, Rewari, Haryana
E-mail: kamalsliet@gmail.com
Abstract — Spectrum sensing is performed so that the radio spectrum can be utilized efficiently. The spectrum can be utilized better by increasing the number of users in the assigned band without causing any significant disturbance to the primary users. Cognitive radios sense how heavily the spectrum is being used by the primary users and, based on the data received, decide whether to transmit a packet. Two sensing schemes, namely cooperative sensing and eigenvalue-based sensing, are studied, and their various advantages and disadvantages are highlighted. Based on this study, cooperative spectrum sensing is proposed for spectrum sensing in wideband cognitive radio systems. Cognitive radio may help improve spectrum management by moving it from a strict framework of regulations to the flexible realm of networks and devices, thereby enabling dynamic spectrum sharing and improving spectrum utilization.

Keywords — Primary radios, spectrum sharing, data fusion, spectrum sensing, wideband sensing.
I. INTRODUCTION

Spectrum sensing is the process of performing measurements on a part of the spectrum and making a decision about spectrum usage based on the measured data. Studies have shown that most of the spectrum is under-utilized, and some of it goes to waste despite technological advancement [1]. Cognitive radio is a modern means of relieving the traffic congestion of today's networks. In this paper we describe mechanisms for spectrum sensing and sharing that improve spectrum utilization.

II. COMPLEXITY OF SPECTRUM SENSING

A spectrum opportunity is conventionally defined in the literature as a band of frequencies that is not used by the primary user of that band at a particular time and specific geographic location [2]. This definition introduces multi-dimensional spectrum awareness, since a spectrum hole is a function of frequency, time and geo-location. Because noise is present all the time across the entire radio spectrum, a truly empty frequency bin does not exist [3]. It is therefore important to be able to differentiate a band occupied by a primary user (PU) signal from a spectrum hole that contains a noise-only signal. The traditional definition of spectrum sensing exploits only these three dimensions of the spectrum space. Sensing spectrum holes is an important requirement of a cognitive radio network, and detecting primary users is the most efficient way to detect them [4]. Spectrum sensing techniques can be classified into three categories:
(1) Transmitter detection: cognitive radios must be able to determine whether a signal from a primary transmitter is locally present in a certain spectrum. Several approaches have been proposed, including (a) matched filter detection and (b) energy detection.
(2) Cooperative detection: spectrum sensing methods in which information from multiple cognitive radio users is combined for primary user detection.
(3) Interference-based detection.
(ii) Spectrum management: the task of capturing the best available spectrum to meet user communication requirements. Cognitive radios should decide on the best spectrum band to meet quality-of-service requirements over all available spectrum bands; therefore, spectrum management


functions are required for cognitive radios. These management functions can be classified as (a) spectrum analysis and (b) spectrum decision.
(iii) Spectrum mobility: the process by which a cognitive radio user changes its frequency of operation. Cognitive radio networks aim to use the spectrum dynamically by allowing the radio terminals to operate in the best available frequency band, maintaining seamless communication during the transition to better spectrum.
(iv) Spectrum sharing: providing a fair spectrum scheduling method; spectrum sharing is one of the major challenges in open spectrum usage. Cognitive radios can sense their surroundings and allow an intended secondary user to increase its QoS by opportunistically using unutilized spectrum holes [5]. If a secondary user senses available spectrum, it can use it after the primary licensed user vacates it.
III. OPPORTUNISTIC SPECTRUM ACCESS

This section presents spectrum sensing techniques that require knowledge of both the source signal and the noise power. Some of the most common techniques in this category are explained below.
1. Parametric methods of spectrum sensing
Three basic parametric methods of spectrum sensing are explained as follows:
a) Optimal LRT-Based Sensing:
The Neyman-Pearson lemma states that, for a given probability of false alarm, the test statistic that maximizes the probability of detection is the likelihood ratio test (LRT) [6], defined as

T_LRT(x) = P(x|H1) / P(x|H0)    (1)

where P(.) denotes the probability density function (PDF) and x denotes the received signal vector, the aggregation of x(n), n = 0, 1, ..., N-1. The likelihood ratio test decides H1 when T_LRT(x) exceeds a threshold value, and H0 otherwise. The main challenge in implementing the LRT is that it requires the distributions appearing in equation (1).
b) Matched Filter:
A matched filter (MF) is a linear filter designed to maximize the output signal-to-noise ratio (SNR) for a given input signal [7]. Matched filtering is known as the optimal method for detection of primary users when the transmitted signal is known [8]. It assumes that the cognitive radio has prior knowledge of the primary user signal at both the PHY and MAC layers, such as bandwidth, frequency and modulation type, so that it can demodulate the received signals [9]. A matched filter detector has a high processing gain, but the sensing device has to achieve coherency with, and demodulate, the primary user signal. This is feasible because most wireless networks have pilot patterns (or symbols) and preambles that can be used for coherent detection: for example, TV signals have narrowband pilots for the audio and video carriers, and CDMA systems have dedicated spreading codes for pilot and packet acquisition. The operation of matched filter detection is expressed as

Y[n] = Σ_{k=0}^{N} h[n-k] x[k]    (2)

where x is the unknown signal (vector) and is convolved with the impulse response h of the matched filter. Matched filtering is useful only in cases where the information about the primary users is known to the cognitive users [10].
The drawback of the matched filter is that it requires prior knowledge of every primary signal; if this information is not accurate, the MF performs poorly. Its most significant disadvantage is that a cognitive radio would need a dedicated receiver for every type of primary user [11]. To avoid interference to the primary user, the sensing threshold should follow the relation given in (3), where α is the path loss exponent [8]; Rcr can then easily be derived, as in (4). A CR device located at a distance r can use the frequencies associated with a TV station located at R

only if condition (5) is satisfied.
c) Cyclostationary-Based Detection:
Cyclostationary-based detection detects primary users by exploiting the cyclostationary features of the received signals [12, 13]. Modulated signals are in general coupled with sine-wave carriers, pulse trains, repeated spreading or hopping sequences, or cyclic prefixes; such signals are known as cyclostationary, since their statistics (mean and autocorrelation) exhibit periodicity. Cyclostationarity can also be induced intentionally to assist spectrum sensing [14]. A cyclostationary detection algorithm can differentiate noise from a primary user's signal, because noise is wide-sense stationary with no correlation, while modulated signals are cyclostationary with spectral correlation due to the redundancy of signal periodicities [15]. This periodicity is used in signal processing tasks such as detection, recognition and estimation of the received signals. Even though cyclostationary feature detection has high computational complexity, it performs satisfactorily at low SNR owing to its robustness against an unknown noise level. Free bands in the spectrum are detected through the following hypothesis testing problem on the received signal x(t) [16]:

x(t) = h s(t) + w(t)    (6)

where s(t) is the modulated signal, h is the channel coefficient and w(t) is additive white Gaussian noise (AWGN). Under H0, x(t) is not cyclostationary and the band is considered free; under H1, x(t) is cyclostationary and the band is considered occupied. A modulated signal x(t) is considered cyclostationary in the wide sense if its mean and autocorrelation exhibit periodicity, as shown in [17]. Although cyclostationary detection has certain advantages, such as its robustness to uncertainty in the noise power and the propagation channel, it has the following disadvantages:
- It needs a very high sampling rate.
- The computation of the spectral correlation density (SCD) function requires a large number of samples and is therefore complex.
- The strength of the SCD can be affected by the unknown channel.
- Sampling time error and frequency offset can affect the cyclic frequency.
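The cyclic-feature idea can be sketched numerically: squaring a signal that carries a sinusoid at frequency f0 produces a tone at the cycle frequency 2·f0, which white noise does not. The statistic below (the Fourier coefficient of x²(n) at a candidate cycle frequency) is a simplified stand-in for the full spectral correlation density; all parameters are illustrative.

```python
import cmath
import math
import random

def cyclic_feature(x, alpha):
    # |(1/N) * sum_n x(n)^2 * exp(-j 2*pi*alpha*n)| -- strength of the
    # second-order periodicity at candidate cycle frequency alpha
    N = len(x)
    return abs(sum((v * v) * cmath.exp(-2j * math.pi * alpha * n)
                   for n, v in enumerate(x)) / N)

random.seed(2)
f0, N = 0.1, 4000
noise = [random.gauss(0, 1) for _ in range(N)]
carrier = [math.cos(2 * math.pi * f0 * n) + w for n, w in zip(range(N), noise)]

# x^2 of a cosine at f0 contains a tone at 2*f0, so the feature peaks at alpha = 0.2
present = cyclic_feature(carrier, 2 * f0)
absent = cyclic_feature(noise, 2 * f0)
print(present, absent)
```

The feature is large only when the cyclostationary signal is present, illustrating why this detector separates modulated signals from stationary noise even without knowing the noise level.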
d) Energy Detection:
Energy detection is the optimal way to detect primary signals when no prior information about the primary signal is available to the secondary users. It measures the energy of the received waveform over a specified observation time [18, 19], and the receiver requires no knowledge of the primary user's signal. The signal is detected by comparing the output of the energy detector with a threshold that depends on the noise floor. The energy detector, also known as a radiometer, has been investigated and widely used for signal detection owing to its simple circuitry and practical implementation [20]. Before energy detection was proposed for spectrum sensing, much work had been done on energy-detection-based schemes in the radar and secure-communications areas. Energy detection has several advantages that motivate research in this area:
- It is generic, since receivers need no knowledge of the primary user's signal.
- It is very simple to implement.
- Signals can be detected at low SNR provided the detection interval is adequately long and the noise power spectral density is known.
The study of energy detection that takes into account the dynamic traffic patterns of primary users, in the form of random signal arrivals and departures, is of theoretical and practical importance. However, some existing techniques resort to approximations to characterize the detection performance. To improve this technique, we would propose a Bayesian-based energy detection algorithm. Recent works have addressed the effect of primary user traffic patterns on the performance of energy detectors. In [21], the random arrival or departure of the primary user's signal is considered by exploiting the distributions of the arrival and


departure times. The effect of primary user traffic on the detection performance is investigated in [22]. However, to improve the robustness of energy detection, we would propose a Bayesian-based energy detection scheme that exploits this statistical knowledge.
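The energy detector's statistic is simply the average power of the observation window, compared against a threshold set from the (assumed known) noise floor. The sketch below is illustrative: the window length, SNR and threshold margin are assumptions, not values from the paper.

```python
import math
import random

def energy_statistic(x):
    # Average power over the observation window
    return sum(v * v for v in x) / len(x)

random.seed(3)
N, noise_power = 2000, 1.0
noise = [random.gauss(0, math.sqrt(noise_power)) for _ in range(N)]
rx = [w + math.sin(0.3 * n) for n, w in enumerate(noise)]  # signal power 0.5 -> -3 dB SNR

threshold = 1.25 * noise_power   # illustrative margin above the known noise floor
print(energy_statistic(noise) < threshold, energy_statistic(rx) > threshold)
```

Note how the threshold depends directly on the noise power: if the noise floor is misjudged, the detector's false-alarm and detection rates degrade, which is exactly the noise-uncertainty weakness the eigenvalue methods of Section IV avoid.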
e) Waveform-Based Sensing:
In wireless systems, known patterns such as preambles, midambles, regularly transmitted pilot patterns and spreading sequences are usually transmitted to aid synchronization [23]. Energy detection suffers from false detections and from difficulty in differentiating modulated signals from interference; both of these problems are addressed by waveform-based sensing, which exploits such known patterns. Waveform-based sensing is performed in the time domain on the received signal y(n), given by

y(n) = x(n) + z(n)    (7)

where x(n) is the signal to be detected and z(n) is additive white Gaussian noise (AWGN).
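Waveform-based sensing can be sketched as a normalized correlation of the received block y(n) against the known pattern; the preamble and noise level below are illustrative assumptions.

```python
import random

def waveform_statistic(y, pilot):
    # Normalized correlation of the received block with the known pattern:
    # near 1 when the pattern is present, near 0 for noise alone
    return abs(sum(a * b for a, b in zip(y, pilot))) / sum(p * p for p in pilot)

random.seed(4)
pilot = [1, -1, -1, 1, 1, 1, -1, 1] * 4                 # known preamble (illustrative)
noise = [random.gauss(0, 0.3) for _ in pilot]
with_pilot = [p + w for p, w in zip(pilot, noise)]

print(waveform_statistic(with_pilot, pilot), waveform_statistic(noise, pilot))
```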
IV. TOTALLY BLIND DETECTION

This section presents spectrum sensing techniques that require no information whatsoever about the source signal or the noise power.
a) Eigenvalue-Based Sensing
This subsection reviews two sensing algorithms in the totally blind category. The first is based on the ratio of the maximum eigenvalue to the minimum eigenvalue, and the second on the ratio of the average signal energy to the minimum eigenvalue. Two major eigenvalue-based detection techniques are studied in this paper:
1) Maximum-Minimum Eigenvalue detection (MME)
This method generalizes energy detection, since it operates on a similar basis, but it is unique in that it requires no prior knowledge of the signal or the channel. It also eliminates the susceptibility of energy detection to synchronization error, since it does not require synchronization. It has been shown that the ratio of the maximum eigenvalue to the minimum eigenvalue of the sample covariance matrix can be used to detect a signal [24]. Using results from random matrix theory (RMT), this ratio can be quantified and a threshold found; the probability of false alarm can also be found using random matrix theory [25, 4]. This technique overcomes the noise-uncertainty difficulty peculiar to energy detection while keeping its advantages, and it can even perform better than energy detection when the signals to be detected are highly correlated. As noted at the beginning of this paper, there are two hypotheses: H0, the signal does not exist, and H1, the signal exists. The received signal under each hypothesis is given as follows [26, 13]:

H0: x(n) = η(n)
H1: x(n) = s(n) + η(n)    (8)

where s(n) is the transmitted signal sample and η(n) is white noise that is independent and identically distributed (i.i.d.). Two probabilities are of interest for channel sensing: the probability of detection Pd, the probability under hypothesis H1 that the sensing algorithm detects the presence of the primary signal, and the probability of false alarm Pfa, the probability that the algorithm declares H1 when no signal is present [27]. The major advantage of maximum-minimum eigenvalue detection is that it does not need the noise power for detection. Its major similarity with the energy detector is that both use only the received signal for detection; no information about the transmitted signal or the channel is needed.
2) Energy with Minimum Eigenvalue based Detection (EME)
In this algorithm, the ratio of the signal energy to the minimum eigenvalue of the sample covariance matrix is used for detection of the primary user signal, as discussed in [28]. The difference between conventional energy detection and EME is that energy detection compares the signal energy to the noise power, which has to be estimated in advance, whereas EME compares the signal energy to the minimum eigenvalue of the sample covariance matrix, which is computed from the received signal only. Despite this difference, EME is otherwise similar to energy detection: like MME, it uses only the received signal samples for detection and requires no information about the transmitted signal or the channel. The major advantage of EME over energy detection is that EME does not require the noise power. The major complexity of EME lies in computing the sample covariance matrix and its eigenvalue decomposition. From the work done by Zeng et al. [29], EME is worse than ideal energy detection but better than energy detection with a noise uncertainty of 0.5 dB. In the experiments of Zeng et al. [30], MME performs better than EME, although no theoretical proof has yet appeared in the literature. The eigenvalue-based methods can be used for different signal detection applications without knowledge of the signal, channel or noise power, such as DTV signals and wireless microphone signals.
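Both eigenvalue statistics can be sketched on a toy covariance of consecutive sample pairs (smoothing factor L = 2, so the 2x2 eigenvalues have a closed form). White noise gives a near-identity covariance and an MME ratio near 1, while a correlated signal inflates the ratio. The signal model and thresholds here are illustrative assumptions, not values from the cited works.

```python
import math
import random

def sensing_ratios(x):
    # Sample covariance of (x[n], x[n+1]) pairs; for a 2x2 symmetric matrix
    # the eigenvalues have a closed form.  MME uses lmax/lmin and EME uses
    # (average signal energy)/lmin -- neither needs the noise power.
    pairs = list(zip(x, x[1:]))
    n = len(pairs)
    m0 = sum(a for a, _ in pairs) / n
    m1 = sum(b for _, b in pairs) / n
    c00 = sum((a - m0) ** 2 for a, _ in pairs) / n
    c11 = sum((b - m1) ** 2 for _, b in pairs) / n
    c01 = sum((a - m0) * (b - m1) for a, b in pairs) / n
    tr, det = c00 + c11, c00 * c11 - c01 * c01
    lmax = (tr + math.sqrt(tr * tr - 4 * det)) / 2
    lmin = (tr - math.sqrt(tr * tr - 4 * det)) / 2
    energy = sum(v * v for v in x) / len(x)
    return lmax / lmin, energy / lmin     # (MME statistic, EME statistic)

random.seed(5)
noise = [random.gauss(0, 1) for _ in range(4000)]
rx = [2 * math.sin(0.2 * n) + w for n, w in zip(range(4000), noise)]

mme_noise, _ = sensing_ratios(noise)
mme_rx, eme_rx = sensing_ratios(rx)
print(mme_noise, mme_rx, eme_rx)
```

For white noise the sample covariance is close to a scaled identity, so both eigenvalues are nearly equal; a correlated signal makes the maximum eigenvalue dominate, which is exactly the property the threshold test exploits.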
The local SNR of the spectrum at the CR can be calculated as given in equation (9).

V. COGNITION AND DYNAMIC SPECTRUM SHARING
SDR and CR technology advancements have the potential to alleviate limitations in the frequency, spatial and temporal domains and to provide real-time spectrum access negotiation and transactions, thus facilitating dynamic spectrum sharing. The cellular and Industrial, Scientific and Medical (ISM) bands are presently over-utilized. In contrast, the television and land mobile radio bands are most often under-utilized, primarily during off-peak periods. Studies show that spectrum is not used uniformly across the space and time domains [31]; the FCC Spectrum Task Force reports utilization rates between 15 and 85 percent. These numbers underline the need for technology that exploits areas of low utilization. Dynamic Spectrum Sharing (DSS) is a collection of techniques for better utilizing radio spectrum as a function of time, space and context. There are two distinct ways of sharing radio spectrum:
1. Underlay: in this model, spectrum is used by a second party at the same time as the primary licensee, but with the intent of causing as little interference as possible. Ultra-Wideband (UWB) technologies are particularly suited to this type of sharing, because their signals are spread over large swaths of spectrum with signal strength around the RF noise level, allowing a UWB signal to operate on occupied spectrum with a very low power output without causing interference. This model relies on measuring the ambient noise and the interference caused within the operating range, and maintaining the total under a predefined threshold (the interference temperature threshold).
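The underlay admission rule (keep total interference under the interference temperature threshold) can be sketched as a simple dB-domain check; all power levels below are illustrative, not regulatory values.

```python
import math

def underlay_admissible(ambient_dbm, tx_dbm, threshold_dbm):
    # The secondary transmission is admitted only if the total interference
    # (ambient noise plus its own contribution at the protected receiver)
    # stays below the interference temperature threshold.  dBm values must
    # be summed in linear units (mW), not added directly.
    total_mw = 10 ** (ambient_dbm / 10) + 10 ** (tx_dbm / 10)
    return 10 * math.log10(total_mw) <= threshold_dbm

# Illustrative numbers: -90 dBm threshold over a -95 dBm ambient floor
print(underlay_admissible(-95.0, -110.0, -90.0))  # UWB-like low-power contribution
print(underlay_admissible(-95.0, -85.0, -90.0))   # contribution too strong
```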
2. Overlay: in this model, spectrum is shared explicitly in one of three ways: (a) opportunistic, where spectrum is used whenever the licensee does not use it; (b) cooperative, where frequencies are allocated centrally based on real-time negotiation with the licensee; and (c) mixed, where sharing is cooperative when possible and opportunistic otherwise. Spectrum awareness alone, while essential in all the sharing models, is not sufficient for the more sophisticated cooperative sharing mode [32]; higher levels of cognition are necessary. With radio self-awareness, the radio has knowledge of its internal and network architectures and can make flexible decisions on how best to take advantage of them. With planning capabilities, the radio follows goals as a function of time, space and context. With negotiation capabilities, the radio can negotiate alternatives with other radios in its environment. At this level of cognition, the radio could modify its physical-layer behavior not only to use available spectrum but also to satisfy higher-level application requirements. Networks would be able to leverage the radios' cognitive abilities to achieve an unprecedented level of flexibility and reconfigurability; such a network could achieve a high degree of cooperation among all its components and thereby utilize spectral resources more efficiently and effectively.
VI. IMPLEMENTATION OBSTACLES

Several obstacles must be overcome before cognitive radio implementation can become practical. In the underlay sharing model, the radio needs to be aware of the ambient RF noise level as a function of both space and time, and it is fairly difficult to measure the interference temperature threshold given that, in real-life situations, directional and omni-directional RF sources can coexist. The overlay sharing mode calls for higher computing power and more sophisticated radio reconfigurability; in addition, more


efficient spectrum negotiation algorithms need to be developed. Security is also a major concern: in an environment where radios can decide how spectral resources are used, proper authentication is vital. Rogue players have the potential to cause major communication errors, so mechanisms must be developed to identify and incapacitate such devices, and robust security measures are necessary to accomplish this goal.
VII. CONCLUSION

Cognitive radio technology appears to be an effective means of dealing with high bandwidth demand. By exploiting the time and space dimensions, a huge number of subscribers can be served. Once the hurdles mentioned above are overcome, this technology will become a boon for the communications community.
REFERENCES
[1] J. Mitola, "Cognitive radio for flexible mobile multimedia communications," in Proc. Sixth International Workshop on Mobile Multimedia Communications, San Diego, CA, 1999.
[2] J. Mitola, Cognitive Radio Architecture: The Engineering Foundations of Radio XML, John Wiley and Sons, USA, 2006.
[3] Fette (ed.), Cognitive Radio Technology, Elsevier, Amsterdam, 2006.
[4] S. Haykin, "Cognitive radio: Brain-empowered wireless communications," IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201-219, 2005.
[5] M. Nekovee, "Dynamic spectrum access: concepts and future architectures," BT Technology Journal, vol. 24, no. 2, pp. 111-116, 2006.
[6] W. D. Horne, "Adaptive spectrum access: Using the full spectrum space," Technical Report, The MITRE Corporation, 2004.
[7] P. Marques, A. Gameiro and L. Doyle, "SDR for opportunistic use of UMTS licensed bands," in Proc. 51st SDR Forum General Meeting and Technical Conference, Orlando, Florida, USA, 2006.
[8] T. Rappaport, Wireless Communications: Principles and Practice, Prentice-Hall, 2000.
[9] Cordeiro, K. Challapali, D. Birru and S. Shankar, "IEEE 802.22: The first worldwide wireless standard based on cognitive radios," in Proc. IEEE DySPAN 2005, Baltimore, Maryland, USA, November 2005.
[10] Hossain and V. K. Bhargava, Cognitive Wireless Communication Networks, New York, NY: Springer Science+Business Media, LLC, 2007.
[11] S. Haykin, "Cognitive radio: Brain-empowered wireless communications," IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, February 2005.
[12] K. Hamdi and K. B. Letaief, "Cooperative communications for cognitive radio networks," in Proc. 8th Annual Postgraduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting, Liverpool John Moores University, June 2007.
[13] S. M. Mishra, A. Sahai and R. Brodersen, "Cooperative sensing among cognitive radios," in Proc. IEEE International Conference on Communications, Istanbul, Turkey, June 2006.
[14] "End to End Efficiency (E3)," 2009. [Online]. Available: http://ict-e3.eu.
[15] J. Mitola, "Cognitive radio: Making software radios more personal," IEEE Personal Communications, vol. 6, no. 4, pp. 13-18, Aug. 1999.
[16] FCC, "Spectrum policy task force report," Technical Report 02-135, Federal Communications Commission, Nov. 2005.
[17] A. M. Shahzad, M. A. Shah, A. H. Dar, A. Haq, A. U. Khan, T. Javed and S. A. Khan, "Comparative analysis of primary transmitter detection based spectrum sensing techniques in cognitive radio systems," Australian Journal of Basic and Applied Sciences, vol. 4, no. 9, pp. 4522-4532, 2010.
[18] W. Wang, "Spectrum sensing for cognitive radio," in Proc. Third International Symposium on Intelligent Information Technology Application Workshops, pp. 410-412, 2009.
[19] V. Stoianovici, V. Popescu and M. Murroni, "A survey on spectrum sensing techniques in cognitive radio," Bulletin of the Transilvania University of Brasov, vol. 15, no. 50.
[20] "802.22 operation," IEEE Communications Magazine, vol. 45, no. 5, pp. 80-87, May 2007.
[21] E. Visotsky, S. Kuffner and R. Peterson, "On collaborative detection of TV transmissions in support of dynamic spectrum sharing," in Proc. IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Baltimore, Nov. 2005.
[22] T. Weiss, J. Hillenbrand and F. Jondral, "A diversity approach for the detection of idle spectral resources in spectrum pooling systems," in Proc. 48th International Scientific Colloquium, Ilmenau, Germany, 2003.
[23] Z. Chair and P. K. Varshney, "Optimal data fusion in multiple sensor detection systems," IEEE Transactions on Aerospace and Electronic Systems, vol. 22, no. 1, pp. 98-101, 1986.
[24] M. Gandetto, A. F. Cattoni, C. S. Regazzoni and M. Musso, "Distributed cooperative mode identification for cognitive radio applications," in Proc. International Radio Science Union (URSI), New Delhi, India, 2005.
[25] M. Gandetto, A. F. Cattoni and C. S. Regazzoni, "Distributed approach to mode identification and spectrum monitoring for cognitive radios," in Proc. SDR Forum Technical Conference, Orange County, California, USA, Nov. 2005.
[26] A. F. Cattoni, I. Minetti, M. Gandetto, R. Niu, P. K. Varshney and C. S. Regazzoni, "A spectrum sensing algorithm based on distributed cognitive models," in Proc. SDR Forum Technical Conference, Orlando, Florida, USA, Nov. 2006.
[27] M. Gandetto and C. S. Regazzoni, "Spectrum sensing: A distributed approach for cognitive terminals," IEEE Journal on Selected Areas in Communications, vol. 25, no. 3, pp. 546-557, 2007.
[28] P. Pawelczak, G. J. Janssen and R. V. Prasad, "Performance measures of dynamic spectrum access networks," in Proc. IEEE Global Telecommunications Conference (Globecom), San Francisco, California, USA, Nov. 2006.
[29] R. Chen and J. M. Park, "Ensuring trustworthy spectrum sensing in cognitive radio networks," in Proc. IEEE Workshop on Networking Technologies for Software Defined Radio Networks (held in conjunction with IEEE SECON 2006), 2006.


ECG Signal as a Biometric

1Bashrat Bahir, 2K. Deepa, 2Nitika, 2Vijay Gill, 2P. M. Arivananthi
1M.Tech. Scholar, Department of ECE, Manav Rachna University, Faridabad
2Assistant Professor, Department of ECE, Manav Rachna University, Faridabad
E-mail: basharatsofi19@gmail.com, kdeepa@mru.edu.in, nitika@mru.edu.in, vijay@mru.edu.in, pmarivananthi@mru.edu.in
Abstract — The electrocardiogram (ECG, also called EKG) trace expresses cardiac features that are unique to an individual. Biometrics are used extensively for security purposes: biometric recognition provides strong security by identifying an individual based on feature vectors derived from physiological and/or behavioral characteristics. It has been shown that the human ECG exhibits patterns sufficiently unique for biometric recognition, so an individual can be identified once an ECG signature has been formulated. This paper presents a systematic template-matching approach for identifying individuals from ECG data, and establishes that the ECG signal is a signature, like a fingerprint or retinal pattern, usable for individual identification.

Keywords — Biometric, Electrocardiogram (ECG), QRS complex, amplitude features, template matching
I. INTRODUCTION

Conventional methods of identity verification based on strategies such as ID cards, social security numbers and passwords provide limited security, and there are many applications for a more secure, easily applied, low-cost method of identifying (or verifying) individuals. Human identification plays an important role in many applications, especially in biometric security systems. Recent advances have made identification of people based on their biological, physiological or behavioral qualities a reasonable approach for access control [35]. Establishing human identity reliably and conveniently has become a major challenge for modern-day society.
Biometrics provide an automatic, real-time, non-forensic means of human identification, and biometric identification or verification shows great potential in bridging some of the existing security gaps. To reach

a higher security level, specific features from the


human must be selected to recognize a person.
Biometrics uses anatomical, physiological or
behavioral characteristics that are significantly
different from person to person and are difficult to
forge. This is useful in security applications and
authentication devices, offering an alternative to
conventional methods. A number of biometric
modalities have been investigated in the past,
examples of which include physiological traits such
as the face, fingerprint and iris, and behavioral
characteristics like gait and keystroke. Human
identification through ECG is feasible and highly
effective [36-37]. The way the heart beats is a unique
and private characteristic of an individual; different
people have similar, but not identical, ECGs. Figure 1
(database taken from HAH Centenary Hospital Delhi) shows an
example of two persons with exactly the same age,
sex, weight and height who have completely different
ECG patterns.
There is strong evidence that the heart's electrical
activity embeds highly unique characteristics,
suitable for applications such as the recognition of
humans [38-39]. The validity of using ECG for
biometric recognition is supported by the fact that the
physiological and geometrical differences of the heart
in different individuals display certain uniqueness in
their ECG signals [40-41].

Fig. 1 Two persons having different ECG patterns

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
207

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

The advantage of ECG in biometric systems is its
robust nature as a life indicator: it can be used as a tool
for liveness detection, increasing system reliability.
ECG-based person identification relies on the
all-or-nothing phenomenon of action potentials and
assumes that the PQRST waveform shape remains
relatively constant over a reasonable time period.
II. ECG BASICS
In remote health monitoring, the ECG
(electrocardiogram) is an essential physiological
parameter that needs to be monitored in almost all
critical heart-related diseases. Authentication of
patients during monitoring should not require any
additional parameter to be recorded. The human
ECG, an electrical signal associated with the
electrical activity of the heart, offers several benefits as a
biometric: it is universal, continuous and difficult to
falsify. The ECG signal from different individuals
conforms to a fundamental morphology but also
exhibits several personalized traits, such as the relative
timings of the various peaks, beat geometry, and
responses to stress and activity. With this concept in
mind, the primary focus of the literature review has been
the ECG.
The human electrocardiogram reflects the specific
pattern of electrical activity of the heart throughout
the cardiac cycle, and can be seen as changes in
potential difference. The ECG is affected by a
number of physiological factors including age, body
weight, and cardiac abnormalities. A typical beat in
an electrocardiogram consists of:
1. A low-amplitude P-wave, representing atrial
depolarization.
2. The QRS complex, of much higher amplitude than
the P-wave, representing ventricular depolarization.
3. A T-wave of smaller amplitude and larger duration
than the QRS complex, representing ventricular repolarization.
III. ECG

The electrocardiogram (ECG) is a method to measure and
record the different electrical potentials of the heart. The
ECG may roughly be divided into the phases of
depolarization and repolarization of the muscle fibers
making up the heart. The depolarization corresponds
to the P-wave (atrial depolarization) and the QRS wave
(ventricular depolarization); the repolarization
corresponds to the T-wave. The ECG is measured by
placing electrodes on selected spots on the human body
surface.
IV. PROPOSED WORK
The schematic description of the ECG-based individual
identification system is shown in Figure 3. The
method is implemented in a series of steps: (1)
preprocessing, which includes correction of the signal for
noise artifacts and classification of waveforms; (2)
feature extraction, which includes recognition of dominant
features between the diagnostic points; (3)
identification; and (4) decision making using the
technique of template matching and adaptive
thresholding.
ECG data is acquired from the individuals and
subsequently digitized. Preprocessing of the ECG
involves the correction of the signal for low and high
frequency noise. Low frequency noise results
from baseline oscillations, body movements and
respiration, while high frequency noise results
from power line interference and digitization of the
analog potential [14]. Digital filters with linear phase
characteristics are employed in the experiment, followed by
the detection of the dominant complexes: the QRS
complex and the P and T waves.
Feature extraction is concerned with the detection of
differences in transmembrane voltages in myocardial
cells that occur during depolarization and
repolarization. These differences are classified into
interval, amplitude and angle features. From the
extracted features, selected features form a
feature vector. In identification, the distance (e.g., the
Euclidean distance) between the feature vector of a test
template and each template stored in the database is
computed. Finally, the decision-making process decides
how well the claimed template matches its
counterpart stored in the database, using an adaptive
thresholding criterion.
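The identification and decision-making steps just described can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature values, the enrolled templates and the threshold below are all hypothetical.

```python
import math

def euclidean(u, v):
    # Distance between a test feature vector and a stored template.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(test_vector, templates, threshold):
    """Return the enrolled identity whose template is nearest to the
    test vector, or None when even the best match exceeds the
    (adaptive) threshold."""
    best_id, best_dist = None, float("inf")
    for subject_id, template in templates.items():
        d = euclidean(test_vector, template)
        if d < best_dist:
            best_id, best_dist = subject_id, d
    return best_id if best_dist <= threshold else None

# Hypothetical amplitude/interval features for two enrolled subjects.
templates = {"subject_A": [0.80, 0.12, 0.36],
             "subject_B": [0.65, 0.18, 0.40]}
print(identify([0.79, 0.13, 0.35], templates, threshold=0.05))
```

In a real system the threshold would be adapted per subject from the spread of that subject's enrolment beats rather than fixed.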
We propose a new technique in which the ECG data is
acquired at the fingers using a single lead.

Fig. 2. Basic shape of an ECG


V. METHODOLOGY
A biometric system is essentially a pattern
recognition system that operates by acquiring
biometric data from an individual, extracting a
feature set from the acquired data, and comparing this
feature set against the template set in the database.
Figure 3 shows the architecture of a biometric
identification system using the ECG signal.
VI. PRE-PROCESSING
Generally, the presence of noise corrupts the signal
and makes feature extraction and classification less
accurate. The collected ECG data in raw format
usually contains a lot of noise, which includes low
frequency components that cause baseline wander,
and high frequency components such as power-line
interference.

Fig 3 Biometric Identification system

VII. BASELINE DRIFT REMOVAL

Baseline wander is one of the noise artifacts that
affect ECG signals. We use median filters (200-ms
and 600-ms) to eliminate the baseline drift of the ECG signal
[18]. The process is as follows:
1. The original ECG signal is processed with a
median filter of 200-ms width to remove QRS
complexes and P waves.
2. The resulting signal is then processed with a
median filter of 600-ms width to remove T
waves. The signal resulting from the second
filter operation contains the baseline of the ECG
signal.
3. By subtracting the filtered signal from the
original signal, a signal free of baseline drift
is obtained.
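The two-pass median filtering can be sketched as below. This is a simplified, pure-Python illustration (a sliding-window median rather than an optimized filter); the window widths in samples depend on the sampling rate, which the paper does not state, so the values in the usage note are assumptions.

```python
from statistics import median

def median_filter(signal, width):
    # Sliding-window median; at the edges the window is truncated.
    half = width // 2
    return [median(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def remove_baseline(ecg, w_qrs, w_t):
    """Two-pass median filtering: the first pass (w_qrs samples,
    ~200 ms) suppresses QRS complexes and P waves, the second
    (w_t samples, ~600 ms) suppresses T waves; what survives both
    passes is the baseline, which is subtracted from the original."""
    baseline = median_filter(median_filter(ecg, w_qrs), w_t)
    return [x - b for x, b in zip(ecg, baseline)]
```

At, say, a 360 Hz sampling rate, the 200-ms and 600-ms widths correspond to roughly 72 and 216 samples.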
VIII. NOISE REMOVAL
After removing the baseline wander, the resulting
ECG signal is more stationary and explicit than the
original signal. However, some other types of noise
might still affect feature extraction from the ECG
signal. To remove this noise, we use the Discrete Wavelet
Transform: the ECG signal is first decomposed
into several subbands, each
wavelet coefficient is then modified by applying a threshold function,
and finally the denoised signal is reconstructed. The high
frequency content of the ECG signal decreases
as the lower details are removed from the original signal.
As the lower details are removed, the signal becomes
smoother and the noise disappears, since noise is
marked by high frequency components picked up
during transmission.
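A minimal sketch of the decompose-threshold-reconstruct idea, assuming a single-level Haar transform with soft thresholding. The paper does not specify the wavelet, the decomposition depth or the threshold rule, so all three are illustrative choices.

```python
import math

def haar_forward(x):
    # One level of the Haar DWT: approximation and detail coefficients
    # (x is assumed to have even length).
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft_threshold(coeffs, t):
    # Shrink each coefficient toward zero by t (soft thresholding).
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    # Threshold only the high-frequency (detail) subband.
    approx, detail = haar_forward(x)
    return haar_inverse(approx, soft_threshold(detail, t))
```

A practical implementation would decompose over several levels (e.g. with a Daubechies wavelet) and pick the threshold from a noise estimate, but the structure is the same.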

1. Related Work
Although extensive studies have been conducted for
ECG based clinical applications, the research for
ECG-based biometric recognition is still in its infant
stage. In this section, we provide a review of the
related works. Biel et al. [2] made one of the earliest
efforts to demonstrate the possibility of utilizing the
ECG for human identification purposes. A set of
temporal and amplitude features is extracted directly from
SIEMENS ECG equipment. A feature
selection algorithm based on a simple analysis of the
correlation matrix is employed to reduce the
dimensionality of the features. Further selection of the
feature set is based on experiments. A multivariate
analysis-based method is used for classification. The
system was tested on a database of 20 persons, and a
100% identification rate was achieved by using
empirically selected features. A major drawback of
Biel et al.'s method is the lack of automatic
recognition due to the employment of specific
equipment for feature extraction. This limits the
scope of applications. Irvine et al. [3] introduced a


system to utilize heart rate variability (HRV) as a
biometric for human identification. Israel et al. [4]
subsequently proposed a more extensive set of
descriptors to characterize the ECG trace. An input ECG
signal is first preprocessed by a bandpass filter. The
peaks are established by finding the local maximum
in a region surrounding each of the P, R, T complexes,
and minimum radius of curvature is used to find the
onset and end of the P and T waves. A total of
15 features, which are time durations between detected
fiducial points, are extracted from each heartbeat. The
Wilks' Lambda method is applied for feature
selection and linear discriminant analysis for
classification. This system was tested on a database
of 29 subjects; a 100% human identification rate
and around an 81% heartbeat recognition rate were
achieved. In a later work, Israel et al. [5] presented a
multimodality system that integrates face and ECG
signals for biometric identification. Israel et al.'s
method provides automatic recognition, but the
identification accuracy with respect to heartbeats is
low due to the insufficient representation of the
feature extraction methods. Shen et al. [6] introduced
a two-step scheme for identity verification from one-lead ECG. A template matching method is first used
to compute the correlation coefficient for comparison
of two QRS complexes. A decision-based neural
network (DBNN) approach is then applied to
complete the verification from the possible
candidates selected with template matching. The
inputs to the DBNN are seven temporal and
amplitude features extracted from the QRS and T waves.
Template matching and mean square error (MSE)
methods were compared for prescreening, and
distance classification and DBNN were compared for
second-level classification. The features employed
for the second-level classification are seventeen
temporal and amplitude features. The best
identification rate for 168 subjects is 95.3%, using
template matching and distance classification. In
summary, existing works utilize feature vectors
measured from different parts of the ECG signal
for classification. These features are either time
durations or amplitude differences between fiducial
points. However, accurate fiducial detection is a
difficult task, since current fiducial detection
machines are built solely for the medical field, where
only the approximate locations of fiducial points
are required for diagnostic purposes. Even if these
detectors are accurate in identifying exact fiducial
locations validated by cardiologists, there is no
universally acknowledged rule for defining exactly
where the wave boundaries lie [14]. In this paper, we
first generalize existing works by applying similar
analytic features, that is, temporal and amplitude
distance attributes. Our experimentation shows that
by using analytic features alone, reliable performance
cannot be obtained. To improve the identification
accuracy, an appearance-based approach which only
requires detection of the R peak is introduced, and a
hierarchical classification scheme is proposed to
integrate the two streams of features. Finally, we
present a method that does not need any fiducial
detection. This method is based on classification of
coefficients from the discrete cosine transform (DCT)
of the autocorrelation (AC) sequence of windowed
ECG data segments. As such, it is insensitive to heart
rate variations, simple and computationally efficient.
Computer simulations demonstrate that it is possible
to achieve high recognition accuracy without pulse
synchronization. Abdelraheem et al. [34]
used only the main loop of the ECG for
identification. Two different algorithms are used: in
the first, coefficients from a specially developed
descriptor (the equal-distance descriptor) are used for
identification, and in the second, selected Fourier
descriptor coefficients of the main loop of the ECG
are used as biometric data. In both methods feed-forward
neural networks are used as classifiers for
identification. However, the system was not suitable for
long-term heartbeats or for physical activities like
running, walking, etc.
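The fiducial-free AC/DCT feature extraction described above (a DCT of the autocorrelation of a windowed ECG segment) can be sketched as follows. The window length, lag count and number of retained coefficients are illustrative assumptions, and the DCT-II is written out from its definition rather than taken from a signal-processing library.

```python
import math

def autocorrelation(x, max_lag):
    # Autocorrelation of a windowed ECG segment, normalized by the
    # zero-lag value so the features are amplitude-invariant.
    r0 = sum(v * v for v in x)
    return [sum(x[i] * x[i + m] for i in range(len(x) - m)) / r0
            for m in range(max_lag)]

def dct_ii(x):
    # DCT-II of a short sequence, in its definitional (unscaled) form.
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def ac_dct_features(segment, max_lag=8, num_coeffs=4):
    """Fiducial-free feature vector: the first few DCT coefficients of
    the normalized autocorrelation of a windowed ECG segment."""
    return dct_ii(autocorrelation(segment, max_lag))[:num_coeffs]
```

Because the autocorrelation is computed over a window rather than an aligned heartbeat, no R-peak synchronization is needed before comparing feature vectors.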
Wang et al. [23] were the first to propose an approach
that did not entirely rely on fiducial-based features,
combining a set of analytic features derived from
fiducial points with appearance features obtained
using PCA and LDA (principal component analysis
and linear discriminant analysis) for feature
extraction and data reduction. The accuracy for 13
subjects was 84% using analytic features alone and
96% using LDA with K-NN (K-nearest neighbors).
The combination of the two types of features was used to
achieve 100% accuracy. Janani et al. [28] handle
activity-induced ECG variation by extracting a set of
accelerometer features that characterize different
physical activities, along with fiducial and non-fiducial
ECG features. This is the first paper that
involves the use of ECG data when the subject is in
motion. Bayesian classification uses the
accelerometer data accurately, which improves the
performance, with a high accuracy rate of 88%. It used
the SHIMMER platform developed by the Intel Digital


Health Advanced Technology Group. The
SHIMMER is a compact sensing platform with an
integrated 3-axis accelerometer. Chan et al. [25]
proposed another non-fiducial feature extraction
framework using a set of distance measures, including
a novel wavelet transform distance. Data was
collected from 50 subjects using button electrodes
held between the thumb and finger. The wavelet
transform distance outperforms the other measures, with
an accuracy of 89%. Can Ye et al. [30] use two-lead
ECG signals for human identification. Wavelet
Transform and Independent Component Analysis are
applied to each single-lead signal to extract
morphological information. The information from the
two leads is fused by rejecting the heartbeat segments
that are inconsistently classified between the two leads.
This decision-based fusion significantly enhanced the
identification accuracy.
The subject identity is finally determined based on
majority voting among multiple consecutive
consistently classified heartbeats. The methodology
has been validated over three public ECG databases
and substantially high rank-1 identification accuracies (as high as
99.6%) are achieved in the short term as well as in
the long term, with or without the presence of
arrhythmia. The result demonstrates the great
potential of ECG signals and the proposed method in
biometric systems.
IX. CONCLUSION
This paper reviewed the processing, exploitation,
and dissemination of heartbeat data for biometric
applications. We laid out the system's functional
blocks and the researchers working in these areas. The
limitations of the data and algorithms used to
characterise individuals are being reduced, though
ECG signals require supplementary understanding of the
operational environment.
The expected performance of a biometric system
will depend on the nature of the biometric task, the
sensing and processing system, the system enrolment
procedures, and the sensing environment. For
example, identity verification of cooperative
individuals using contact measurements appears
within reach for a modest number of enrolled
individuals (Irvine and Israel, 2009). Extending these
methods to the general identification problem will
require additional development, but current methods
hold promise. Three important issues, however,
require further investigation: stability of the
signatures over long periods (e.g. years), robustness to
variation in mental and emotional state, and
scalability to larger populations. The initial analysis
of these issues suggests that robustness and
scalability can be addressed (Irvine et al., 2008; Israel
et al., 2009). Extensions to non-contact sensing
methods, especially with non-cooperative subjects,
will require more development to ensure reliable
acquisition of the cardiac signal.
REFERENCES
[1] E. J. Lind, S. Sundaresan, and T. McKee, "A sensate liner for personnel monitoring applications," Proceedings of the 1st IEEE International Symposium on Wearable Computers (ISWC '97), pp. 98-105, 1997.
[2] B.-H. Yang, H. H. Asada, and Y. Zhang, "Cuff-less continuous monitoring of beat-to-beat blood pressure using a Kalman filter and sensor fusion," First Joint BMES/EMBS Conference, IEEE, 1999.
[3] L. Biel, O. Pettersson, L. Philipson, and P. Wide, "ECG analysis: a new approach in human identification," Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference, vol. 1, 1999.
[4] J. C. D. Conway, C. J. N. Coelho, and L. C. G. Andrade, "Wearable computer as a multi-parametric monitor for physiological signals," IEEE International Conference on Bioinformatics and Biomedical Engineering, pp. 236-242, 2000.
[5] M. Sanjeev Dasrao, Y. J. Hock, and E. K. W. Sim, "Diagnostic blood pressure wave analysis and ambulatory monitoring using a novel, non-invasive portable device," Proceedings of the International Conference on Biomedical Engineering, pp. 267-272, 2001.
[6] P. Varady and B. Benyo, "An open architecture patient monitoring system using standard technologies," IEEE Transactions on Information Technology in Biomedicine, vol. 6, 2002.
[7] P. Lukowicz, U. Anliker, J. Ward, and G. Troster, "AMON: a wearable medical computer for high risk patients," 6th IEEE International Symposium on Wearable Computers (ISWC '02), 2002.
[8] T. W. Shen, W. J. Tompkins, and Y. H. Hu, "One-lead ECG for identity verification," Proceedings of the 24th Annual Conference on Engineering in Medicine and Biology and the Annual Fall Meeting of the Biomedical Engineering Society, vol. 57, issue 2, 2002.
[9] A. F. Cardenas, R. K. Ron, and R. B. Cameron, "Management of streaming body sensor data for medical information systems," pp. 186-191, 2003.
[10] I. Korhonen, L. Wang, T. Boon, and L. Cui, "Health monitoring in the home of the future," IEEE Engineering in Medicine and Biology Magazine, pp. 66-73, 2003.
[11] E. A. Johannessen et al., "Implementation of multichannel sensors for remote biomedical measurements in a microsystem," IEEE Transactions on Biomedical Engineering, vol. 51, no. 3, pp. 525-535, 2003.
[12] F. Michahelles and R. Wicki, "Heart-rate detection without even touching the user," Proceedings of the 8th International Symposium on Wearable Computers, IEEE, 2004.
[13] W. Chen, D. Wei, M. Cohen, S. Ding, and S. Toxinoya, "Development of a scalable healthcare monitoring platform," Fourth International Conference on Computer and Information Technology (CIT '04), 2004.
[14] R. Jafari, A. Encarcacao, A. Zahoory, F. Dabiri, H. Noshadi, and M. Sarrafzadeh, "Wireless sensor networks for health monitoring," Second Annual International Conference on Mobile and Ubiquitous Systems: Networks and Services, pp. 479-481, 2005.
[15] S. A. Israel, J. M. Irvine, A. Cheng, and M. D. Wiederhold, "ECG to identify individuals," Pattern Recognition, vol. 38, issue 1, 2005.
[16] S. Krco, "Health care sensor networks: architecture and protocols," Ad Hoc & Sensor Wireless Networks, vol. 1, pp. 1-25, 2005.
[17] K. M. Alajel, K. B. Yousuf, A. R. Ramji, and E. S. Ahmed, "Remote electrocardiogram monitoring based on the internet," KMITL Science Journal, vol. 5, pp. 493-501, 2005.
[18] P. de Chazal, C. Heneghan, E. Sheridan, R. Reilly, P. Nolan, and M. O'Malley, "Automated processing of the single-lead electrocardiogram for the detection of obstructive sleep apnoea," IEEE Transactions on Biomedical Engineering, vol. 50, no. 6, pp. 686-689, 2003.
[19] H. K. Kim and S. J. Biggs, "Continuous shared control for stabilizing reaching and grasping with brain-machine interfaces," IEEE Transactions on Biomedical Engineering, vol. 53, no. 6, pp. 1164-1173, 2006.
[20] B. Qureshi and M. Tounsil, "A Bluetooth enabled mobile intelligent remote healthcare monitoring system in Saudi Arabia: analysis and design issues," 18th National Computer Conference, 2006.
[21] C.-H. Lin, S.-T. Young, and T.-S. Kuo, "A remote data access architecture for home-monitoring," Journal of Medical Engineering & Physics, vol. 29, pp. 199-204, 2007.
[22] M. R. Yuce, R. C. Ng, C. K. Lee, J. Y. Khan, and W. Liu, "A wireless medical monitoring over a heterogeneous sensor network," 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 5894-5898, 2007.
[23] Y. Wang, F. Agrafioti, D. Hatzinakos, and K. N. Plataniotis, "Analysis of human electrocardiogram for biometric recognition," EURASIP Journal on Advances in Signal Processing, 2008.
[24] J. Khan, F. Karami, and M. R. Yuce, "Performance evaluation of a wireless body area sensor network for remote patient monitoring," 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1266-1269, 2008.
[25] C. Chan, M. M. Hamdy, A. Badre, and V. Badee, "Wavelet distance measure for person identification using electrocardiograms," IEEE Transactions on Instrumentation and Measurement, vol. 57, issue 2, 2008.
[26] C. Chiu, C. Chuang, and C. Hsu, "A novel personal identity verification approach using a discrete wavelet transform of the ECG signal," Proceedings of the International Conference on Multimedia and Ubiquitous Engineering, vol. 6, issue 4, 2008.
[27] G. Bernardo, P. Jose, V. A. Rodrigo, and G. Giancarlo, "ECG data provisioning for telehomecare monitoring," ACM, vol. 5, 2008.
[28] S. Janani, S. Minho, C. Tanzeem, and K. David, "Activity-aware ECG-based patient authentication for remote health monitoring," International Conference on Mobile Systems, ACM, Nov. 2009.
[29] C.-C. Chiu, C.-M. Chuang, and C.-Y. Hsu, "Discrete wavelet transform applied on personal identity verification with ECG signal," vol. 7, issue 3, 2009.
[30] C. Ye, M. Coimbra, and B. V. K. Vijaya Kumar, "Investigation of human identification using two-lead electrocardiogram signals," Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies, IEEE, vol. 50, 2010.
[31] S. Fahim, K. Ibrahim, and H. Jiankun, "ECG-based authentication," Springer, vol. 1, 2010.
[32] S. M. Wendelken, S. P. McGrath, and G. T. Blike, "Agent based casualty care: a medical expert system for modern triage," Journal of the American Medical Association, 2010.
[33] R. M. Arthur, S. Wang, and J. W. Trobaugh, "Changes in body-surface electrocardiograms from geometric remodeling with obesity," IEEE Transactions on Biomedical Engineering, vol. 58, no. 6, June 2011.
[34] M. M. T. Abdelraheem, H. Selim, and T. K. Abdelhamid, "Human identification using the main loop of the vectorcardiogram," American Journal of Signal Processing, 2(2): 23-29, 2012.


Huffman Coding and its Application in Image Compression

1Kartik Kumar Attree, 2Mahima Singh Choudhary, 3Kanika Sharma, 4Rishu, 5Manuj Gupta
Deptt. of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: mahimasinghchoudhary789@gmail.com

Abstract: Compression is a technique to reduce
the quantity of data without compromising quality
beyond an acceptable level. As the number of
bits is reduced, we obtain more efficient utilization
of memory and faster transmission over a
medium. In this paper we focus on Huffman
coding used as a lossless data compression
algorithm. We give an example of image
compression using Huffman coding.
Keywords: huffman coding; image compression

I. INTRODUCTION
Data is not the same as information; data is the
means to express information. The amount of data
in a transmission or reception can exceed its
information content while providing no additional
information. Compression aims to eliminate this
redundancy, i.e. to keep only the data that provides
some information. [1]
Huffman coding is a lossless data compression
algorithm. It is based on the frequency of occurrence
of data items, e.g. pixels in images. The technique
uses fewer bits to encode the data items that occur
more frequently. [2]
Huffman's algorithm generates minimum-redundancy
codes compared to other algorithms. Huffman
coding is effectively used in text, image and video
compression and in conferencing systems.
II. TYPES OF COMPRESSION
A. Lossy Compression
In this type of compression the exact data is not
retrieved, i.e. some part of the data is lost after
compression. This type of compression works where some
loss of fidelity is acceptable.
B. Lossless Compression
In this type of compression no part of the data is lost,
i.e. the exact data is retrieved after decompression. The
main challenge for a lossless compression algorithm is
to recover the original data in minimum time.
Some of the lossless compression techniques are:
1. Huffman coding
2. Lempel-Ziv-Welch coding
3. Arithmetic coding
4. Adaptive Huffman coding
III. HUFFMAN CODING
Huffman encoding results in an optimum uniquely
decodable code; thus it is the code with the
highest efficiency. The idea is to assign variable-length
codes to input characters, where the lengths of the
assigned codes are based on the probabilities of the
corresponding characters: the higher the probability,
the shorter the code sequence for that character.
Hence, the probabilities of the characters must be
known a priori. [3]

Huffman encoding procedure [4]:
1. List the source symbols in order of decreasing
probability.
2. Combine the probabilities of the two symbols
having the lowest probabilities, and reorder the
resultant probabilities; this step is called
reduction 1. The same procedure is repeated until
there are two ordered probabilities remaining.


3. Start encoding with the last reduction, which
consists of exactly two ordered probabilities.
4. Assign 0 as the first digit in the code words for
all the source symbols associated with the first
probability; assign 1 to the second probability.
5. Now go back and assign 0 and 1 to the second
digit for the two probabilities that were combined
in the previous reduction step, retaining all
assignments made in step 4.
6. Keep regressing this way until the first column is
reached.
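The reduction procedure above maps directly onto a small implementation. The sketch below is an illustration in Python (the paper's later example uses MATLAB); it builds the code table with a priority queue, which is equivalent to repeatedly merging the two least-probable symbols.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data` by
    repeatedly merging the two least-probable entries."""
    freq = Counter(data)
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate one-symbol input
        _, _, table = heap[0]
        return {s: "0" for s in table}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, i2, t2 = heapq.heappop(heap)
        # Prefix 0 to one branch and 1 to the other, then re-insert.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    # Prefix-freeness guarantees greedy matching decodes correctly.
    rev = {c: s for s, c in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)
```

For example, for the input "aaaabbc" the most frequent symbol "a" receives the shortest code, and decoding the encoded bit string recovers the input exactly.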
Properties [5]
1. Unique prefix property: the variable-length
codes assigned to input characters are prefix
codes, meaning that the code assigned to one character is never
the prefix of the code assigned to any other
character. This is how Huffman coding makes
sure that there is no ambiguity when decoding
the generated bit stream.
2. Optimality: minimum-redundancy code, proved
for a given data model.
a. The two least frequent symbols will have the
same length for their Huffman codes, differing
only in the last bit.
b. Symbols that occur more frequently will
have shorter Huffman codes than symbols that
occur less frequently.

Advantages [6]
1. The Huffman algorithm is easy to implement.
2. It produces an exact duplicate of the original
data.
3. It can be extended via a slightly different
algorithm called adaptive Huffman coding.

Disadvantages [7]
1. The probabilities must be known a priori.
2. It cannot achieve high levels of compression.
3. Huffman coding requires two passes, one to build
a statistical model of the data and a second to
encode it, so it is a relatively slow process.
4. The binary strings and codes in the encoded data
are all of different lengths. This makes it difficult
for decoding software to determine when it has
reached the last bit of data, and if the encoded
data is corrupted it will be decoded incorrectly
and the output will be nonsense.

Applications [8]
1. Huffman coding is widely used in all the
mainstream compression formats that we might
encounter, from GZIP, PKZIP and BZIP2 to
image formats such as JPEG and PNG.
2. It is the base of JPEG compression.
3. It is the work-horse of the compression industry
and is used as a backend to some other
compression methods.
4. Almost all communication with and from the
internet is at some point Huffman encoded.

IV. IMAGE COMPRESSION

Compression of an image is significantly different from
compression of raw binary data. The primary
objective of image compression is to minimize the
average number of bits and to achieve a reasonable
compression ratio as well as better quality of
reproduction of the image with low power
consumption.
For a compressed image, the bit rate achieved by the
compressor, measured in bits/pixel, is defined as the
number of bits used in the compressed representation
of the image divided by the number of pixels in the
image. [5]
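The bits/pixel definition above reduces to one line of arithmetic; the image size and bit counts below are made-up values for illustration.

```python
def bit_rate(compressed_bits, width, height):
    # Bits per pixel of the compressed representation.
    return compressed_bits / (width * height)

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

# An 8-bit 256x256 grayscale image compressed to 262144 bits:
bpp = bit_rate(262144, 256, 256)                  # 4.0 bits/pixel
ratio = compression_ratio(256 * 256 * 8, 262144)  # ratio of 2.0
```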


Fig.1 Process of Image Compression [1]

Huffman coding and decoding algorithm for image


compression is as given below [9]:1.

Read the image on to the work space of the


MATLAB.
2. Convert the given color image into a gray
level image.
3. Call a function which finds the symbols
(i.e. the non-repeated pixel values).
4. Call a function which calculates the
probability of each symbol.
5. Arrange the probabilities of the symbols in
decreasing order and merge the lowest
probabilities; continue this step until only
two probabilities are left. Codes are
assigned according to the rule that the most
probable symbol gets the shortest code.
6. Perform Huffman encoding, i.e. map the
code words to the corresponding symbols,
which results in compressed data.
7. The original image is reconstructed, i.e.
decompression is done using Huffman
decoding.
8. Generate a tree equivalent to the encoding
tree.
9. Read the input bit by bit, descending the
tree until a leaf is reached.
10. Output the character encoded in that leaf,
return to the root, and repeat step 9 until
all the codes of the corresponding symbols
are known.
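The steps above can be sketched as follows (a minimal Python re-implementation for illustration; the paper's implementation is in MATLAB): count symbol probabilities, repeatedly merge the two least probable nodes, assign shorter codes to more probable symbols, encode, then decode by walking the prefix-free code.

```python
# Minimal Huffman coding sketch of steps 3-10; illustrative only.
import heapq
from collections import Counter

def huffman_codes(symbols):
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(n, j, sym) for j, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    parent, nxt = {}, len(heap)              # child -> (parent node, bit)
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)       # merge the two lowest probabilities
        n2, _, b = heapq.heappop(heap)
        node = ("node", nxt)
        parent[a], parent[b] = (node, "0"), (node, "1")
        heapq.heappush(heap, (n1 + n2, nxt, node))
        nxt += 1
    codes = {}
    for sym in freq:                         # walk up to the root per symbol
        bits, cur = [], sym
        while cur in parent:
            cur, bit = parent[cur]
            bits.append(bit)
        codes[sym] = "".join(reversed(bits))
    return codes

def encode(symbols, codes):
    return "".join(codes[s] for s in symbols)

def decode(bits, codes):
    rev = {c: s for s, c in codes.items()}   # prefix-free: greedy match works
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return out

pixels = [12, 12, 12, 40, 40, 200]           # hypothetical gray-level values
codes = huffman_codes(pixels)
assert decode(encode(pixels, codes), codes) == pixels
```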

Fig. 2 Original Gray Scale Image [10]

V. CONCLUSION


Data compression is a key concern in the modern
world, as the data we want to store exceeds the space
we have. As social media has caught the fancy of the
majority of the population, transfer of data has become
an indispensable part of our daily lives, requiring
ever faster transfer rates.
From the examples above we conclude that
Huffman coding is an efficient technique for data
compression, and for image compression to some extent.
We also note that the decoded image is quite close to
the original image. Image compression using
Huffman coding depends on the number of pixels in
the image, the size of the image and the compression
ratio. Huffman coding performs very well on data
that repeats often and uses only a subset of the
character space. It converts fixed length codes into
variable length codes, which results in lossless
compression.
REFERENCES
[1] Robin Strand, Image Coding and Compression, Lecture 7,
Centre for Image Analysis, Swedish University of
Agricultural Sciences, Uppsala University
[2] Mamta Sharma, Compression Using Huffman Coding,
International Journal of Computer Science and Network
Security, Vol. 10 no. 5, May 2010
[3] Greedy Algorithms | Set 3 (Huffman Coding),
GeeksforGeeks, www.geeksforgeeks.org
[4] Dr. Sanjay Sharma, Entropy Coding, Information Theory,
Communication Systems (Analog and Digital), sixth edition,
September 2012
[5] Aarti, Performance Analysis of Huffman Coding
Algorithm, International Journal of Advanced Research in
Computer Science and Software Engineering, vol. 3, issue 5,
May 2013
[6] What are the advantages of Huffman Coding?, Answers,
www.answers.com
[7] David Dunning, The Disadvantages of Lossless Encoding
Techniques, eHow, www.ehow.com
[8] What are the real-world applications of Huffman Coding?,
Stack Overflow, www.stackoverflow.com
[9] Jagadish H. Pujar, Lohit M. Kadlaskar, A New Lossless
Method of Image Compression And Decompression Using
Huffman Coding Techniques, Journal of Theoretical and
Applied Information Technology, 2005-2010
[10] Mridul Kumar Mathur, Seema Loonkar, Dr. Dheeraj Saxena,
Lossless Huffman Coding Technique for Image
Compression and Reconstruction Using Binary Trees,
International Journal of Computer Technology and
Applications, vol. 3(1), 76-79, Jan-Feb 2012
[11] Wikipedia The free encyclopedia, en.m.wikipedia.org
Fig. 3 Reconstructed Gray Scale Image [10]


Performance Analysis of Functional Parameters of Solar Power Generation
*Shashi Gaurav, *Binit Ranjan, *Umang Goyal, *Nishu Jain, **Shawet Mittal
Deptt. of Electronics and Electrical Engineering, Northern India Engineering College, New Delhi
E-mail: shashigaurav21@gmail.com , ranjanbinit2012@gmail.com , ugoel14@gmail.com ,
jain.nishu95@gmail.com shawet.mittal@niecdelhi.ac.in
Abstract: This paper describes a method of
modeling and simulating a photovoltaic (PV)
module that is implemented in Simulink/Matlab.
A circuit-based simulation model of a PV cell is
required in order to allow interaction with a
power converter. Characteristics of PV cells that
are affected by irradiation and temperature are
captured by a circuit model; a simplified PV
equivalent circuit with a diode equivalent is used
as the model. The simulation results are compared
with datasheets of different types of PV module.
The results indicate that the simulation blocks
created in Simulink/Matlab behave like actual PV
modules and can be tuned to different types of PV
module. The proposed model is designed with a
user-friendly icon and a dialog box like the
Simulink block libraries.
Keywords: PV Cell, Solar Cell

The introduction on PV devices is followed by the
modeling and simulation of PV arrays.
II. WORKING OF PV CELL


A photovoltaic cell is fundamentally a semiconductor
diode whose p-n junction is exposed to sunlight [1],
[2]. Several types of semiconductors are used to
manufacture photovoltaic cells; at commercial scale,
monocrystalline and polycrystalline silicon are used.
The I-V characteristic of a solar cell in darkness has
an exponential shape similar to that of a diode [3].
When photons hit the solar cell with energy above
the band gap energy of the semiconductor, electrons
are released from its atoms, creating electron-hole
pairs [4]. The charge carriers move under the
influence of the internal electric field of the p-n
junction, and hence a current proportional to the
incident photon radiation is developed. This
phenomenon is called the photovoltaic effect.

I. INTRODUCTION
Solar photovoltaic generation systems are becoming
increasingly essential as a renewable energy source,
since they offer many advantages: no fuel costs, no
pollution, little maintenance and no noise, among
others. The basic unit of a PV array is the solar cell,
which is a p-n semiconductor junction fabricated on a
thin wafer of semiconductor. Electrical energy is
obtained from photovoltaic cells from the
electromagnetic radiation of the sun through the
photovoltaic effect.
In this paper a simulation of a solar panel is presented
in which various input parameters are considered and
their characteristic curves are established; it is
implemented using Matlab/Simulink, and these
parameters are studied along with other parameters.

III. MODEL OF PV CELL

A simple PV cell model can be represented by a
current source with a diode in parallel. The output of
the PV cell is directly proportional to the number of
photons striking it. In darkness it produces neither
voltage nor current; however, in the presence of an
external source it produces a current known as the
diode current (Id). The diode determines the I-V
characteristics of the cell.

Fig. 1 Circuit diagram of PV cell


The following parameters are considered for accurate
simulation:
- Temperature dependence of the diode reverse
saturation current Is [5].
- Temperature dependence of the photocurrent Iph.
- Series resistance Rs [8] (internal losses due to the
current flow), which gives a more accurate shape
between the maximum power point and the open
circuit voltage.
- Shunt resistance Rsh [9], in parallel with the
diode; this corresponds to the leakage current [10]
to the ground.
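Putting these four effects together gives the usual single-diode model, sketched below in Python for illustration. The equation is implicit in the current because of Rs, so it is solved here by bisection. Isc, Voc and Ki are the Solarex MSX-60 datasheet values quoted in this paper; Ns, A, Rs and Rsh are assumed values, not parameters given in the paper.

```python
# Single-diode PV model sketch: temperature-dependent Is and Iph, series
# resistance Rs and shunt resistance Rsh. Ns, A, Rs, Rsh are assumptions.
import math

q, k = 1.602e-19, 1.381e-23                # electron charge (C), Boltzmann (J/K)
Isc, Voc = 3.8, 21.1                       # datasheet short-circuit A, open-circuit V
Ki = 0.065e-2 * Isc                        # 0.065 %/degC expressed in A/K
Tref, Eg = 298.15, 1.12                    # 25 degC reference, silicon bandgap (eV)
Ns, A, Rs, Rsh = 36, 1.3, 0.18, 360.0      # assumed: cells in series, ideality, ohms

def cell_current(V, T=298.15, Ir=1000.0, Irref=1000.0):
    """Module current at voltage V; implicit in I because of Rs, so bisect."""
    Vt = Ns * A * k * T / q                          # modified thermal voltage
    Iph = (Isc + Ki * (T - Tref)) * Ir / Irref       # photocurrent vs T, irradiance
    Irs = Isc / (math.exp(Voc / (Ns * A * k * Tref / q)) - 1.0)
    Is = Irs * (T / Tref) ** 3 * math.exp(q * Eg / (A * k) * (1.0 / Tref - 1.0 / T))
    def residual(I):                                 # Iph - Id - Ish - I
        return Iph - Is * (math.exp((V + I * Rs) / Vt) - 1.0) - (V + I * Rs) / Rsh - I
    lo, hi = -2.0 * Isc, Iph + 1.0                   # residual is decreasing in I
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(cell_current(0.0), 2))    # close to Isc = 3.8 A
```

Sweeping V from 0 to Voc and plotting I and V*I reproduces I-V and P-V curves of the kind shown later; halving Ir roughly halves the current, matching the insolation study described in the simulation section.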

Table-1 Solarex MSX-60 specifications (1 kW/m², 25 °C) [7]

Characteristics                                         Specifications
Typical peak power (Pmpp)                               60 W
Voltage at peak power (Vmp)                             17.1 V
Current at peak power (Imp)                             3.5 A
Short circuit current (Isc)                             3.8 A
Open circuit voltage (Voc)                              21.1 V
Temperature coefficient of open circuit voltage (Kv)    -(80±10) mV/°C
Temperature coefficient of short circuit current (Ki)   (0.065±0.01) %/°C
Approximate effect of temperature on power              -(0.5±0.015) %/°C
Nominal operating cell temperature (NOCT)               (47±2) °C

IV. SIMULATION

Fig. 3 Simulation

In the above Simulink model we considered the main
functional parameters on which the module's operation
depends, i.e. temperature and insolation. Here we kept
the temperature constant, varied the insolation with
respect to its standard value, and determined the output
of the PV panel.
V. SIMULATION RESULT
The I-V, P-I and P-V curves are obtained after
simulation at different solar insolations.
Fig. 2 Characteristic I-V curve from Iph and Id [6].

Fig. 4 I-V curve

Ish is the shunt current.
Iph is the photocurrent.
Irs is the reverse saturation current at Tref.
Is is the reverse saturation current.
Id is the diode current.
I is the load current.
V is the load voltage.
Tref is the reference temperature.
Q is the electron charge.
Irref is the reference irradiance.
KI is the short circuit current temperature coefficient.
Eg is the bandgap energy.
Isc is the short circuit current.
Voc is the open circuit voltage.
Ir is the actual irradiation.
A is the ideality factor.
K is the Boltzmann constant (1.38e-23 J/K).

REFERENCES

Fig. 5 P-I curve

VI. CONCLUSION


Here we implemented our concept using a standard
basic model; the output for some constant values is
obtained and plotted accordingly.
In the end we conclude that the power output
performance of the PV system is primarily influenced
by the weather, i.e. the solar irradiance and the module
temperature.
It is also observed that the solar panel works at its
full potential only under certain specific conditions,
namely:
- when the solar irradiation is 1000 W/m²;
- when the temperature is 25 °C.
VII. TERMINOLOGY


[1]. S. Sedra and K. C. Smith, Microelectronic Circuits. London,


U. K.: Oxford Univ. Press, 2006.
[2]. H. J. Moller, Semiconductors for Solar Cells. Norwood,
MA: Artech House, 1993.
[3]. G. Walker, "Evaluating MPPT converter topologies using a
Matlab PV model, Journal of Electrical & Electronics
Engineering, Australia, IEAust, vol.21, No. 1, 2001, pp.4956.
[4]. Lorenzo, E. (1994). Solar Electricity Engineering of
Photovoltaic Systems. Artes Graficas Gala, S. L., Spain.
[5]. Francisco M. González-Longat, Model of Photovoltaic
Module in Matlab, 2do Congreso Iberoamericano de
Estudiantes de Ingeniería Eléctrica, Electrónica y
Computación (II CIBELEC 2005).
[6]. Marcelo Gradella Villalva, Jonas Rafael Gazoli and Ernesto
Ruppert Filho. Comprehensive Approach to Modeling and
Simulation of Photovoltaic Arrays -IEEE Transactions on
power electronics, vol. 24, no. 5, May 2009
[7]. Dominique Bonkoungou, Zacharie Koalaga, Donatien Njomo,
Modeling and Simulation of photovoltaic module considering
single-diode equivalent circuit model in MATLAB, (IJETAE),
Iss. 3, Vol. 3, March 2013.
[8]. Huan-Liang Tsai, Ci-Siang Tu, and Yi-Jie Su, Member,
IAENG, Development of generalized photovoltaic model
using MATLAB /SIMULINK, Proceedings of the World
Congress on Engineering and Computer Science 2008,
WCECS 2008, October 22 - 24, 2008, San Francisco.
[9]. Savita Nema, R. K. Nema, Gayatri Agnihotri, Matlab/
Simulink based study of photovoltaic cells / modules / array
and their experimental verification, International Journal of
Energy and Environment, Volume 1, Issue 3, 2010, pp. 487-500.
[10]. Development of Generalized Photovoltaic Model Using
MATLAB/SIMULINK Huan-Liang Tsai, Ci-Siang Tu, and
Yi-Jie Su, Member, IAENG.

PV cell is photovoltaic cell.


A Comparative Study of FPGA Implementation of I2C & SPI Protocols
Richa Malhotra, Preeti Singh
Department of Electronics and Communication, Northern India Engineering College, New Delhi
E-mail: Malhotraricha11@gmail.com
Abstract: I2C and SPI are the most commonly
used serial protocols for both inter-chip and
intra-chip low/medium bandwidth data transfers.
This paper contrasts and compares physical
implementation aspects of the two protocols
across a number of recent Xilinx FPGA families,
showing which protocol features are responsible
for substantial area overhead. This valuable
information helps designers to make careful and
tightly tailored architecture decisions. For a
comprehensive comparative study, both protocols
are implemented as general purpose IP solutions,
incorporating all the features required by modern
ASIC/SoC applications according to a recent
market investigation of a significant number of
commercial I2C and SPI devices. The RTL code is
technology independent, inducing around 25%
area overhead for I2C over SPI, with almost the
same delays for both designs.

In our attempt to implement universal I2C/SPI IP
cores [9], we first made a market study of a
significant number of recent commercial I2C/SPI
devices (datasheets) from different vendors [10] to
identify the requirements and the features to be
included to satisfy modern ASIC/SoC applications.
The key features required for I2C/SPI IP cores as a
result of the market investigation are summarized in
Table II, and their translation into architectures is
depicted in Figures 1 and 2, respectively. It is
noteworthy that only the slave side of the protocols
is dealt with in this paper.
The paper is organized as follows. In this section, we
showed the requirement specifications from a recent
market investigation for modern I2C/SPI IPs and their
corresponding architectures. Section two contrasts
and compares the implementation results, and finally
some concluding remarks are given.

Keywords: Inter Integrated Circuit (I2C), Serial
Peripheral Interface (SPI), Intellectual Property
(IP), System-on-Chip (SoC).

I. INTRODUCTION
Today, at the low end of the communication
protocols we find two worldwide standards: I2C and
SPI [1]. Both protocols are well suited for
communications between integrated circuits for
low/medium data transfer speed with on-board
peripherals. The two protocols coexist in modern
digital electronics systems, and they probably will
continue to compete in the future, as both I2C and
SPI are actually quite complementary for this kind of
communication [2][3][4][5][6].
The I2C and SPI protocol specifications are
meticulously defined in [7] and [8], respectively;
consequently, they will not be discussed here.
Instead, a quick overview is provided in Table I.

* : N is the number of devices connected to a single
master on the bus.
+ : Feature inducing substantial area overhead.



Fig. 1 I2C-Slave Transceiver Architecture

Fig. 2. SPI-Slave Transceiver Architecture

III. COMPARISON OF IMPLEMENTATION RESULTS


For a more precise comparison, early in the design
process some precautions were taken to put both
implementations under the same conditions. Starting
from the initial specifications (Table II), we first built
up the I2C-Slave architecture [10], from which we
derived the SPI-Slave one, keeping exactly the same
architectural topology with minor modifications


except for the Low-Level Protocol and FSM units.
Afterwards, the I2C-Slave architecture was translated
into high-quality technology-independent RTL code.
The same code was used for the SPI-Slave with the
necessary modifications. Both RTL codes were
written by the same designer to preserve the same
coding style.
The whole design code, for both synthesis and
functional verification, is implemented in Verilog
2001 (IEEE 1364). The synthesis design code is
technology independent and was simulated at both
RTL and gate level (post place & route netlist) with
timing back-annotation using ModelSim SE 6.3f, and
mapped onto Xilinx FPGAs using Foundation ISE
version 10.1. Both designs underwent a severe
functional verification procedure according to our own
IP development methodology summarized in [9]. As
for physical test, both designs were integrated into a
Microblaze SoC environment using the V2MB1000
demonstration board [13] with Xilinx EDK version 9.1i.
The RTL code size of the I2C-Slave is about 1.44 times
the code size of the SPI-Slave (Table III). The 44%
extra code size is mainly due to the additional logic
required by the I2C-Slave to handle the software
addressing (7/10 bits), the control flow, and the clock
stretching features (Table I).
The mapping of the RTL code, including two 4-byte
FIFOs and digital filters, onto Xilinx FPGA devices
(Table IV) exhibits an average slice utilization of
around 500 and 360 for the I2C-Slave and SPI-Slave
respectively, except for Virtex 5 devices where the
utilization is around 185 and 140, respectively. This
difference is due to the number of look-up tables
(LUTs) per slice: 2 LUTs of 4 inputs each for Spartan
2-3 and Virtex 2-4 devices, and 4 LUTs of 6 inputs
each for Virtex 5 devices. Note that some slices are
used only for routing. Nevertheless, whatever the
FPGA device used, the I2C-Slave induces an average
of 25% area overhead over the SPI-Slave, which is the
counterpart of higher usage flexibility and a more
secure transfer.
It is noteworthy that all results, for both slice
occupation and delays, were obtained using the
default options of the implementation software
(Foundation ISE 10.1) with the fastest speed grade
selected for each FPGA device.

TABLE III. COMPARISON OF RTL-CODES

Architecture Units       | I2C-Slave           | SPI-Slave
                         | Lines     Size (Ko) | Lines     Size (Ko)
Top Module (HSI + FSM)   | 598       22        | 434       15
Low-level Protocol       | 472       19        | 217
FIFO                     | 232                 | 232
Filter                   | 67                  | 67
Total                    | 1369*     52        | 950*      34

* : (1369-950)/950 = 0.44

There is almost no significant difference in terms of
delays (Table V). Delays are calculated for two types
of paths: Clock-To-Setup, and all paths together
(Pad-To-Setup, Clock-To-Pad and Pad-To-Pad). The
Clock-To-Setup figure gives more precise information
on the delays than the remaining paths, which in fact
depend on the I/O Block (IOB) configuration (low/high
fanout, CMOS, TTL, LVDS).
The transfer rate for the I2C-Slave is fixed, while for
the SPI-Slave it is unlimited; but for both, a timing
relationship between the master clock and the
synchronous transfer clock must be known to enable
the sampling of the smallest events, depending on the
timing constraints of each protocol [7][8]. The master
clock (clk) must be at least 5 and 4 times faster than
the transfer clock (scl, sclk) for the I2C-Slave and
SPI-Slave respectively. In fact only a ratio of 2 is
required for the SPI-Slave, but the RTL coding style
requires 2 additional clock cycles. For instance, if we
consider the Clock-To-Setup delay of the SPI-Slave
mapped onto a Virtex-5 device (3.303 ns), a bit period
of 13.212 ns can be achieved, corresponding to
roughly 75 MBPS.
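The figure quoted above can be reproduced directly (a Python sketch of the arithmetic; the clock ratio of 4 and the 3.303 ns Clock-To-Setup delay are the values given in the text):

```python
# Reproducing the transfer-rate arithmetic above: the SPI-Slave needs a
# master clock 4 times faster than sclk, so the minimum bit period is
# 4 x (Clock-To-Setup delay).

def max_transfer_rate_mbps(clock_to_setup_ns: float, clock_ratio: int) -> float:
    bit_period_ns = clock_ratio * clock_to_setup_ns   # 4 x 3.303 = 13.212 ns
    return 1e3 / bit_period_ns                        # period in ns -> Mbit/s

rate = max_transfer_rate_mbps(3.303, 4)
print(round(rate, 1))   # 75.7, reported as ~75 MBPS
```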
IV. CONCLUSION

A practical comparative study of the I2C and SPI
protocols has been presented. Our primary concern
was the accuracy of the comparison, for which
careful measures were taken at each development
step. While this comparison is limited to the
slave side of the protocols, logical predictions can
easily be stated for the master side: approximately the
same delays, with a greater area overhead to cope with
the multi-master feature of the I2C protocol. For an
SPI-Master, there will be no significant area overhead
compared to the SPI-Slave if a simple counter-based
baud-rate generator is integrated; this will not be the
case if a digital frequency synthesizer is used instead.


TABLE V. COMPARISON OF DELAY

[3] Leens, Solutions for SPI Protocol Testing and Debugging in
Embedded System, Byte Paradigms White Paper, pp. 1-9,
Revision 1.00, June 2008.
[4] L. Bacciarelli et al., Design, Testing and Prototyping of a
Software Programmable I2C/SPI IP on AMBA Bus,
Conference on Ph.D. Research in MicroElectronics and
Electronics (PRIME'2006), pp. 373-376, ISBN: 1-4244-0157-7,
Ortanto, Italy, June 2006.
[5] R. Hanabusa, Comparing JTAG, SPI and I2C, Spansion's
application note, pp. 1-7, revision 01, April 2007.
[6] P. Myers, Interfacing Using Serial Protocols: Using SPI
and I2C. Available:
http://intranet.daiict.ac.in/~ranjan/esp2005/paper/i2c_spi_341.pdf
[7] Philips Semiconductors, The I2C-Bus Specification,
version 2.1, January 2000.
[8] Motorola Inc., SPI Block Guide V03.06, February 2003.
[9] A.K. Oudjida et al., Front-End IP Development: Basic
Know-How, Revue Internationale des Technologies
Avancées, No. 20, pp. 23-30, December 2008, ISSN 1111-0902,
Algeria.
[10] A.K. Oudjida et al., Universal Low/Medium Speed I2C
Slave Transceiver: A Detailed FPGA Implementation,
Journal of Circuits, Systems and Computers (JCSC), Vol. 17,
No. 4, pp. 611-626, August 2008, ISSN: 0218-1266, USA.
[11] R. Usselmann, OpenCores SoC Bus Review, revision 1.0,
January 2001.
[12] A.K. Oudjida et al., Master-Slave Wrapper Communication
Protocol: A Case Study, Proceedings of the 1st IEEE
International Computer Systems and Information Technology
Conference ICSIT'05, pp. 461-467, 19-21 July 2006, Algiers,
Algeria.
[13] Xilinx Inc., Virtex-II V2MB1000 Development Board
User's Guide. Available:
http://ww.cs.lth.se/EDA385/HT06/doc/restricted/V2MB_User_Guide_3_0.pdf

The paper has presented the results of an up-to-date
FPGA implementation of the slave side of the two
standard protocols I2C and SPI, which are:
- a utilization ratio of 3% and 2% respectively for
the I2C-Slave and SPI-Slave on the smallest
Virtex-5 FPGA device;
- a maximum transfer rate of 75 MBPS for the
SPI-Slave;
- an area overhead of 25% for the I2C-Slave over
the SPI-Slave.
As the RTL code is technology independent, a much
faster transfer rate can be obtained for the SPI-Slave
with an ASIC implementation using a standard cell
library.
REFERENCES
[1] J.M. Irazabel & S. Blozis, Philips Semiconductors, I2C-
Manual, Application Note, ref. AN10216-0, March 24, 2003.
[2] Leens, An Introduction to I2C and SPI Protocols, IEEE
Instrumentation & Measurement Magazine, pp. 8-13,
February 2009.


Heart Beat Monitoring System through Fingertip Sensor
Suman Arora, Shilpa Jain
Department of Electronics and Communication, Northern India Engineering College, Delhi, India
Email : tell_suman@yahoo.com, shilpa41083@gmail.com

Abstract: Heart rate is a vital health parameter
that is directly related to the soundness of the
human cardiovascular system. This paper presents
the design and development of a reliable, cheap
and accurate heart beat monitoring system built
around a heart beat sensor. It deals with signal
conditioning and data acquisition of the heart rate
signal. The hardware and software are designed
around a microcontroller-based system,
minimizing the complexity of the system. The
paper describes a technique of measuring the
heart rate through a fingertip using a
microcontroller.
Keywords: Heart beat Sensor, microcontroller,
heart beat monitor
I. INTRODUCTION
Technology is used everywhere in our daily life to
fulfill our requirements [1]. One of the ideal ways of
using technology is to employ it to sense serious
health problems, so that efficient medical services can
be provided to the patient in time. Changes in
lifestyle and unhealthy habits have increased the
incidence of heart disease, and coronary heart disease
is the leading cause of death. Hence there is a need
for patients to be able to measure their heart rate in a
home environment as well.
The heart rate of a healthy adult at rest is around 72
bpm. Athletes normally have lower heart rates than
less active people. Babies have a much higher heart
rate, at around 120 bpm, while older children have
heart rates of around 90 bpm.
This heart beat monitor and display system is
portable and a good replacement for the traditional
stethoscope, which is less efficient. With a
stethoscope the heart beat rate is counted manually,
and the probability of error is high because the heart
beat rate lies in the range of 70 to 90 beats per
minute, with each beat lasting less than 1 s; so this
device can be considered a very good alternative to a
stethoscope. The functioning of this device is based
on the fact that blood circulates with every heart
beat, which can be sensed by a circuit formed by
the combination of an LDR and an LED. From the
rate of circulation of blood, the heart beat rate per
minute is calculated. The device consists of a
microcontroller [2] which takes the input from the
fingertip sensor and calculates the heart rate of the
patient. The microcontroller also displays the result
on an LCD, which is interfaced to it through LCD
drivers.
The next section gives a hardware system overview;
Section III introduces the software used for the
implementation of the prototype fingertip heartbeat
sensor; results and conclusion are given in Section IV,
while future advancements are given at the end.
II. HARDWARE SYSTEM


The hardware design is based on an embedded system
implementation using a PIC16F877A microcontroller
from Microchip. The block diagram of the hardware
system is shown in Fig. 1.

Fig 1: Block diagram of microcontroller based heartbeat monitor with display on LCD


The block diagram consists of the PIC16F877A
microcontroller, heart beat sensor, reset, crystal
oscillator, LCD driver, LCD display, LCD intensity
control and LED indicators.
2.1 Microcontroller PIC16F877A
A Microchip PIC16F877A microcontroller [3] is
used to collect and process data. It has 256 bytes of
EEPROM data memory, 2 comparators, an 8-channel
10-bit analog-to-digital converter, 3 on-chip timers
and 8k of flash program memory. The heart beat
sensor is interfaced to the microcontroller via port
pins; the output of the sensor is fed to the
microcontroller via the ADC (analog-to-digital
converter). An LCD is used to display the data.

comparator output consists of positive pulses
corresponding to the blood pulses. The comparator
output is given to the microcontroller, which
calculates the time duration between two successive
pulses and then computes the instantaneous heart
rate. The microcontroller then displays the calculated
heart rate on the LCD, as shown in Fig. 3.
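The interval-to-rate computation described above can be sketched as follows (illustrative host-side Python, not the Embedded C firmware; the sampled waveform, 10 ms tick and 0.5 V threshold are synthetic values):

```python
# Sketch of the rate computation: detect rising edges of the comparator
# output, take the interval between successive pulses, and convert each
# interval to beats per minute. All input values below are synthetic.

def pulse_times(samples, times, threshold):
    """Timestamps where the signal crosses the threshold from below."""
    return [times[i] for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

def instantaneous_bpm(beat_times):
    """60 / interval, one reading per pair of successive beats."""
    return [60.0 / (t2 - t1) for t1, t2 in zip(beat_times, beat_times[1:])]

times = [i * 0.01 for i in range(300)]                       # 3 s at 10 ms sampling
samples = [1.0 if i % 80 < 5 else 0.0 for i in range(300)]   # a pulse every 0.8 s
bpm = instantaneous_bpm(pulse_times(samples, times, 0.5))
print(round(bpm[0]))   # 75
```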

Fig 2: Placing the finger on heart beat sensor

2.2 Heart Beat Sensor
The heart beat sensor consists of a super-bright LED
and an LDR. It works on the principle of light
modulation by the blood flow through the finger at
each pulse. The finger is inserted into the probe,
shown in Fig. 2, and red light from the high-intensity
LED is allowed to fall on the finger. The amount of
red light absorbed by the finger varies according to
the pulsatile blood flow in the finger, and therefore
the amount of light transmitted varies according to
the blood flow. The LDR placed on the opposite side
of the LED detects the transmitted light: with an
increase in transmitted light its resistance decreases,
and vice versa. A voltage divider circuit is employed
to get a voltage signal proportional to the resistance
of the LDR. This voltage signal consists of AC and
DC components. Non-moving structures (veins, blood
capillaries, bones, soft tissues, non-pulsatile blood)
absorb a constant amount of light and hence
contribute to the DC component of the voltage signal.
As it provides no information about the blood pulses,
the DC component is not needed. Pulsatile blood
absorbs a varying amount of light and hence
contributes to the AC component of the voltage
signal; the AC component is our required signal. The
magnitude of the DC component is almost 100-1000
times higher than that of the AC component, so it
needs to be removed in order for the AC component
to be conditioned properly further on. Therefore, a
high pass filter circuit is employed after the voltage
divider network to block the DC component of the
signal. The AC signal is then amplified from the mV
range to the V range. The amplified signal is given to
a comparator where it is compared against a set
threshold value. The
III. SOFTWARE SYSTEM

This work is implemented using the following
software: Proteus for circuit design and simulation,
MPLAB for compilation, Embedded C for the
programming code, and PICkit 2 for dumping the
code into the microcontroller (burner).
3.1 Microcontroller Software
The method consists of computing the heart rate of
the person each minute. A pre-processing step is
needed to amplify the signal, and hardware filtering
removes unwanted components. The programming
language used to program the microcontroller is
Embedded C. Many algorithms were investigated to
choose the method best suited to the microcontroller.
The microcontroller is programmed in such a way
that it takes input from the heart beat sensor when a
finger is inserted into it and displays the value on the
LCD continuously.

Fig 3: Hardware Design


IV. RESULT AND CONCLUSION


In this paper, the implementation of a
microcontroller-based embedded system for real-time
analysis of heart beat rate has been investigated. The
system has been tested successfully on subjects of
different age groups. The heart beat sensor which
detects the heart beat is interfaced to the
microcontroller along with the LCD, which displays
the heart beat rate. The goal of the paper is to reduce
hospitalization and assistance costs.
The pulse rate can be used to check overall heart
health and fitness level. Besides, it can prove to be a
boon for senior citizens, who won't have to travel
long distances or wait in long queues at hospitals and
clinics to get a measure of their heart beat; they can
handle this device easily by themselves, sitting at
home. The low cost of this device can make it a
household name.
V. FUTURE SCOPE

The work can be extended to improve the health care system by transmitting patients' physiological signals wirelessly. A wireless technology such as ZigBee can be used to eliminate the wired mechanism [4]. Also, a GSM module can be used to send the monitored heart beat values for the doctor's reference. The work can also be extended to measure other vital body signals, such as blood pressure, and transmit them wirelessly.
REFERENCES
[1] Dhvani Parekh, "Designing heart rate, blood pressure, body temperature sensors for mobile on-call system," Electrical Engineering Biomedical Capstones, Paper 39, 2010.
[2] Mohamed Fezari, Mounir Bousbia-Salah, and Mouldi Beddas, "Microcontroller based heart rate monitor," The International Arab Journal of Information Technology, vol. 5, no. 4, October 2008.
[3] Microchip Manual, "PIC16F87X Data Sheet: 28/40-Pin 8-bit FLASH Microcontrollers," Microchip Technology Inc., 2001.
[4] Ming-Zher Poh, Daniel J. McDuff, and Rosalind W. Picard, "Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam," IEEE Transactions on Biomedical Engineering, vol. 58, no. 1, January 2011.


Comparison of Various Memory Architectures in Quantum Dot Cellular Automata

Sunita Rani, Naresh Kumar

Department of Electronics and Communication Engineering, BPSM Visvavidyalya, Sonepat, India
Department of Electronics & Communication Engineering, SGT FET, SGT University, Gurgaon, India
Email: er.sunitarani@gmail.com, engg.tanwar86@gmail.com

Abstract: Quantum Cellular Automata (QCA) is a nanotechnology that is an alternative to transistor-based technologies in the near future. It is also a potentially attractive technology for implementing computing architectures at the nanoscale. The basic Boolean primitive in QCA is the majority gate. Using majority gates, the hardware requirements for a QCA design can be reduced, and circuits can be simpler in terms of levels and logic gate counts. These circuits require low power for their operation, along with the potential for high density and reliability. Various memory architectures have been designed in quantum dot cellular automata. In this paper, previously designed memory architectures and a proposed memory architecture are compared on the basis of area, latency, and complexity.

In QCA, logic states are not stored as voltage levels but rather as the positions of individual electrons.
The basic elements of QCA are the QCA cell, the majority gate, and the inverter. A QCA cell can be viewed as a set of four dots positioned at the corners of a square [5]. A quantum dot is a site in a cell in which a charge can be localized. The cell contains two extra mobile electrons that can quantum-mechanically tunnel between dots, but not between cells. A QCA cell works on the Coulombic interaction of electrons between the cells [5]. The locations of the electrons determine the binary states. Fig. 1(a), Fig. 1(b), and Fig. 1(c) show the QCA cell diagrams.

Keywords: QCA Cell, CMOS

I. INTRODUCTION

In the past decades, as per Moore's law, it was found that the number of transistors on integrated circuits doubles roughly every eighteen months. The exponential scaling in feature size and the increase in processing power have been successfully achieved by VLSI, mainly using CMOS technology. But power consumption is the main problem in CMOS technology, and current chip-integration technology is reaching its physical limits. So there are many proposed technologies that can be used at the nanoscale. One of these is quantum dot cellular automata (QCA). The theory of QCA was proposed by Lent et al. in 1993 (Lent et al. 1993). QCA encodes binary information in the charge configuration within a cell. Coulombic interaction between cells is sufficient to accomplish the computation in QCA arrays; thus no interconnect wires are needed between cells. No current flows out of the cell, so low power dissipation is possible.

Fig. 1: (a) QCA cell with polarization -1 for the binary logic 0 level; (b) QCA cell with polarization +1 for the binary logic 1 level; (c) QCA cell with polarization 0.


II. LITERATURE SURVEY

Various authors have discussed memory architectures using QCA technology. In 2003, K. Walus implemented a memory cell in the paper "RAM design using quantum dot cellular automata." This QCA memory is made of QCA logic gates, that is, AND, OR, and NOT gates. Each memory cell consists of 158 cells. Since the design consists of a simple two-dimensional grid structure, an n-bit memory would have O(n) cells. If we assume that cells are spaced 10 nm apart, the design has a storage capacity of over 1.6 Gbit/cm². Further optimization could significantly increase this figure, possibly by an order of magnitude. One such optimization would be to store two or more bits per loop and maintain a parallel architecture. The circuit schematic for this QCA memory cell is shown in Fig. 2.
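The gate-level construction mentioned above rests on a standard QCA identity: the three-input majority function yields AND or OR when one input is fixed. A small sketch of this identity (illustrative, not the paper's layout):

```python
# MAJ(a, b, c) = ab + bc + ca is QCA's Boolean primitive; fixing one
# input at 0 or 1 reduces it to AND or OR, which is how AND/OR/NOT
# gate-level designs are built.
def maj(a, b, c):
    return (a & b) | (b & c) | (a & c)

def and_gate(a, b):
    return maj(a, b, 0)  # third input fixed at logic 0

def or_gate(a, b):
    return maj(a, b, 1)  # third input fixed at logic 1

truth = [(a, b, and_gate(a, b), or_gate(a, b))
         for a in (0, 1) for b in (0, 1)]
```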
Fig. 3. QCA memory cell layout (memory loop with majority gates M1-M6; inputs I/P, Row Select, W/R; output O/P)

The memory value is constantly circulated inside the memory loop until the write/read and row select wires are polarized to 1, at which time the incoming input is fed into the memory loop and circulated. If Row Select is polarized to 1 and write/read is polarized to 0, the current memory value inside the loop is fed to the output.
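The read/write behavior just described can be modeled at the behavioral level; this sketch models the control logic only, not the QCA clock zones, and the names are illustrative:

```python
# Behavioral model of the loop-based cell: the bit is written when both
# Row Select and W/R are 1, and read to the output when Row Select is 1
# and W/R is 0.
def memory_cell(stored, row_select, write_read, data_in):
    """Return (new stored bit, output bit) for one access."""
    if row_select == 1 and write_read == 1:
        stored = data_in  # incoming bit enters the memory loop
    output = stored if (row_select == 1 and write_read == 0) else 0
    return stored, output

state = 0
state, _ = memory_cell(state, row_select=1, write_read=1, data_in=1)   # write 1
state, out = memory_cell(state, row_select=1, write_read=0, data_in=0)  # read
```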
Diwakar Agrawal has discussed two types of memory in the paper "Quantum dot cellular automata memories": parallel memory and serial memory. Both types have their merits as well as demerits. Parallel memories have multiple 1-bit memory loops, so all the bits of a word can be accessed simultaneously, which results in low latency. The disadvantage of parallel memory is that a lot of circuitry is repeated for the bits in a word; hence parallel memories are not preferred where area is more important than latency. Fig. 4 shows a one-bit memory loop used for implementing the parallel memory. The memory unit contains a multiplexer whose output is fed back into one of its inputs. During the read operation, with the Rd/Wt signal low, the multiplexer is in feedback mode, thus functioning as a memory loop. When the Rd/Wt signal is high, new data is written into the memory unit.
Fig. 2. Memory cell schematic (inputs I/P, Row Select, W/R; output O/P)

The QCA layout of the memory cell discussed above is shown in Fig. 3, which has six majority gates.

Fig. 4. Schematic of parallel memory


The schematic of the QCA layout of the parallel memory is shown in Fig. 5, which has five majority gates.

Fig. 5. Schematic of the layout of parallel memory with QCA majority gates (M1-M5; inputs I/P, Row Select, R/W; output O/P)

Serial memories, on the other hand, require less area but have high latency. Since multiple bits are stored in a serial memory loop, the bits reach the output one by one, so the latency is much higher than in parallel memories. But since there is no repetition of circuitry for the different bits, the area consumed is less.
Serial memories thus offer an important advantage over parallel memories: they are more compact due to less duplication of circuitry. Many architectures have been proposed so far to emulate a serial memory [9, 10, 11]. But most of these works do not use the conventional clocking mechanism and instead deploy clocks which are complex in nature. Obviously, a circuit with multiple clocking mechanisms will be more complex and difficult to implement than a circuit architecture which uses conventional clocking and is thus similar in functioning to other circuits developed.
The present architecture uses the memory-in-motion paradigm for the implementation of serial memory. In this paradigm, the memory loop contains the same number of bits looping in it as the word size of the memory. For this, the feedback loop has a number of clock zones equal to 4 times the number of bits in a word (4 clock phases per bit).
To remove the clocking problem in serial memory, Moein Kianpour discussed a memory in the paper "Novel design and simulation of 16-bit RAM implementation in quantum-dot cellular automata." In this design, the memory is a collection of storage cells together with the necessary circuits to transfer information to and from them. These architectures are based on the memory-in-motion paradigm [1]. Such memory architectures have different features, such as the number of bits stored in a loop, the access type (serial or parallel), and the cell arrangement for the memory bank. It is a unit memory cell, which means it is used to store 1-bit data only.
The data bit is stored in a loop while the WR/RD control signal is low. When Read-bar/Write is raised high, the input bit is stored in the loop and the data is written into the memory cell. When Read-bar/Write is low, the previously written data is read from the memory cell. The loop must be implemented using all zones of the four-phase adiabatic switching technique for the clock, thus allowing the motion of the stored bit.
The right AND gate A3 is called an enable gate and operates independently from the remaining gates of the circuit. In read or write mode, the enable gate outputs the stored value when EN is '1'; this indicates that the memory cell is selected to be read. Otherwise, when EN is '0', the output is '0', which means that the memory cell is not selected to be read. The basic (conventional) memory cell of this architecture is shown in Fig. 6.
Fig. 6. Basic conventional memory cell (inputs Enable, Data I/O, Read/Write; gates A1, A2, A3, OR, inverter N; Output)

It uses three AND gates, one OR gate, and one inverter, and it has been designed in QCA Designer with three-input majority gates.

Fig. 7. Schematic of the layout of the QCA conventional cell (inputs Enable, Data I/O, Read/Write; majority gates; Output)

The schematic of the layout of the QCA-designed conventional memory cell is shown in Fig. 7, which has four majority gates. However, it faces one problem: when the memory is read in the cycle after Enable goes to 0, the output should be zero, but the output is instead the data previously written into the memory.


IV. PROPOSED MEMORY CELL

A. Implementation of proposed memory cell

To remove the problem of the basic conventional memory cell discussed above, a memory cell is designed here with fewer majority gates. The schematic of this proposed memory cell is shown in Fig. 8.
Fig. 8. Proposed memory cell (inputs Enable, Data I/O, Read/Write; gates A1, A2, OR, inverter N; Output)

As shown in Fig. 8, the cell has four logic gates: two AND gates, one OR gate, and one inverter. One AND gate has been eliminated, so latency and complexity are reduced. This proposed memory is designed in QCA Designer with a new five-input majority gate.
For read/write operation, Enable should be 1; otherwise the output is always zero. When Read-bar/Write is high, data is written into the memory cell, and when Read-bar/Write is low, data is read out from the memory. The schematic of the layout of the proposed memory cell is shown in Fig. 9, which has only three majority gates.
Fig. 9. Schematic of the layout of the proposed memory cell (inputs Enable, Data I/O, Read/Write; majority gates M1-M3; Output)
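The Enable gating of the proposed cell can be sketched behaviorally; the point is that Enable = 0 forces the output to 0 rather than leaking the stored bit. This is a behavioral model only, with illustrative names:

```python
# Behavioral sketch of the proposed cell: Enable masks both access and
# output, so a deselected cell never exposes previously written data.
def proposed_cell(stored, enable, read_write, data_in):
    """read_write high writes data_in; read_write low reads the stored bit."""
    if enable == 1 and read_write == 1:
        stored = data_in
    output = stored if (enable == 1 and read_write == 0) else 0
    return stored, output

state = 0
state, _ = proposed_cell(state, enable=1, read_write=1, data_in=1)      # write 1
state, out_en = proposed_cell(state, enable=1, read_write=0, data_in=0)  # read, selected
state, out_dis = proposed_cell(state, enable=0, read_write=0, data_in=0)  # read, deselected
```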

V. COMPARISON OF VARIOUS MEMORY CELLS

From a comparative study of the proposed memory cell with the other available cells, it is easy to construct a parametric comparison table. The analysis is done in terms of volatile nature, read/write feature, number of majority gates required, and number of memory cells required to construct a 16-bit-wide RAM.


Table 1. Comparison between memory cells

Design    | Read/Write feature | Volatile | No. of majority gates | No. of cells
Paper 1   | Yes                | No       | 6                     | 158
Paper 2   | Yes                | No       | 5                     | 140
Paper 3   | Yes                | No       | 4                     | 130
Proposed  | Yes                | Yes      | 3                     | 124

VI. CONCLUSION & FUTURE SCOPE

From the analysis of the memory cell proposed in this paper and of the memory cells designed in previous papers, it is clear that the proposed memory cell is better than the others in terms of its volatile nature, the number of majority gates needed, and the number of cells required to construct it.

REFERENCES
[1] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface," IEEE Transl. J. Magn. Japan, vol. 2, pp. 740-741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
[2] H. Miller, "A note on reflector arrays," IEEE Trans. Antennas Propagat., in press.
[3] J. L. Alqueres and J. C. Praca, "The Brazilian power system and the challenge of the Amazon transmission," in Proc. 1991 IEEE Power Engineering Society Transmission and Distribution Conf., pp. 315-320.
[4] S. Hwang, "Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients," Ph.D. dissertation, Dept. Aerosp. Eng., Univ. Maryland, College Park, 1997.
[5] IEEE Guide for Application of Shunt Power Capacitors, IEEE Std. 1036-2010, Sep. 2010.
[6] Brandli and M. Dick, "Alternating current fed power supply," U.S. Patent 4 084 217, Nov. 4, 1978.


Designing of Digital FIR Filter using CORDIC Algorithm

Shilpa Jain, Suman Arora
Assistant Professor, ECE, NIEC, New Delhi, India
E-mail: shilpa41083@gmail.com, tell_suman@yahoo.com

Abstract: CORDIC (Coordinate Rotation Digital Computer) is an algorithm used in the implementation of various kinds of digital signal processing architectures, robotics, image processing, etc. The use of CORDIC-based systems is increasing day by day because of their simple shift-and-add operations, and because CORDIC reduces the number of iterations and the mean absolute percentage error. CORDIC is also used in the design of digital filters, FFT computation, and the multiplication of real numbers. In this paper we describe how to increase the performance of an FIR filter by using the CORDIC algorithm to reduce power consumption.
Keywords: IIR Filter, FIR Filter, Iteration period, Critical path, Rotation mode.
I. INTRODUCTION

Finite impulse response (FIR) filters are digital filters whose response is finite in duration. This is in contrast to infinite impulse response (IIR) filters, whose response is infinite in duration. The methods for designing and implementing FIR and IIR filters differ considerably. The main characteristic describing the effectiveness of a digital filter system is the iteration period of the IIR or FIR system.
The CORDIC algorithm was first developed by Jack E. Volder [1] in 1959 and later generalized to hyperbolic functions by Walther [7]. The CORDIC algorithm is extremely useful in the efficient and effective implementation of DSP systems. Two basic CORDIC modes, the rotation mode and the vectoring mode, are used for the computation of different functions. The algorithm allows the implementation of trigonometric functions like sine, cosine, magnitude, and phase with great precision by using just simple shift-and-add operations. Although the same functions can be implemented using multipliers, variable shift registers, etc., CORDIC can implement these functions more efficiently.
The paper is organized as follows. In Section II, the basics of filters (LTI and adaptive) are described; the architecture (direct and transposed form) of the FIR filter is also presented in the same section. The basics, rotation matrix, modes, and operation of CORDIC as a multiplier are described in Section III, and the architecture (transposed and parallel) of the filter with a CORDIC subsystem (for multiplication) is presented in Section IV. Then, the calculation of power saving among the architectures of the FIR filter (with CORDIC as multiplier) is described in Section V. Based on the calculation, some conclusions are given in Section VI.
II. FILTER

Filtering is the most common form of signal processing, used to remove the frequencies in certain parts of a signal and to improve the magnitude, phase, or group delay. In signal processing, a filter is a system or process that removes some unwanted component or feature from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. The essence of a digital filter is that it directly implements a mathematical algorithm, corresponding to the desired filter transfer function, in its programming or microcode.
A linear, time-invariant, causal filter is described by a difference equation.

A. Types of Filters
The digital filter can be classified as


1) LTI (Linear Time Invariant) filters:

i. FIR filter
Finite impulse response filters, as suggested by the name, have a finite impulse response. A 4-tap FIR filter can be represented by the equation

Y(n) = a x(n) + b x(n-1) + c x(n-2) + d x(n-3)

i.e., the recursive coefficients bk = 0 for 1 ≤ k ≤ N.

Fig. 1. Direct Form-I architecture of FIR filter

An FIR filter is an all-zero system. Other salient features of FIR filters are given below:
(i) Linear phase: FIR filters can have an exact linear-phase response, resulting in a constant group delay over the frequency range of interest. Therefore no phase distortion is introduced by the filter.
(ii) Guaranteed stability: FIR filters are always stable due to their non-recursive realization.

ii. IIR filter
If the impulse response of a filter is not a finite-length sequence, the filter is called an IIR filter, and it is represented by a difference equation with nonzero recursive coefficients bk.

Such a filter is a single-input single-output (SISO) system.

Fig. 2. Single Input Single Output System

Iteration period: the time required for the execution of one iteration of the algorithm:

Titer = Tsample = Tclk

Based on the equation, we can see that the IIR filter does not have a simple relation between its coefficients and its impulse response. Other important features of IIR filters are as follows:
(i) Nonlinear phase: IIR filters have a nonlinear-phase response over the frequency range of interest. Therefore, the group delay varies at different frequencies, which results in phase distortion.
(ii) Stability issue: IIR filters are not always stable due to their recursive realization. Therefore a careful design approach is needed to ensure that all of the poles of an IIR filter lie inside the unit circle to guarantee a stable filter, especially for fixed-point implementations.

The iteration rate is the reciprocal of the iteration period. During each iteration, the 4-tap FIR filter processes one input sample, completes 4 multiplications and 3 additions, and generates 1 output sample.
The critical-path computation time determines the minimum feasible clock period of the DSP system:

Tclk = TM + 3TA
Titer = Tsample = Tclk = TM + 3TA

To reduce Tclk (the critical path) of the filter architecture, the Direct Form-I architecture is replaced by the transposed architecture.
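The per-sample computation of the 4-tap direct-form filter (4 multiplications, 3 additions) can be sketched as follows; the coefficients and input are illustrative:

```python
# Direct-form computation of Y(n) = a x(n) + b x(n-1) + c x(n-2) + d x(n-3):
# 4 multiplications and 3 additions per output sample.
def fir_direct(x, coeffs):
    taps = [0.0] * len(coeffs)  # delay line holding x(n), x(n-1), ...
    y = []
    for sample in x:
        taps = [sample] + taps[:-1]
        y.append(sum(c * t for c, t in zip(coeffs, taps)))
    return y

impulse_response = fir_direct([1, 0, 0, 0, 0], coeffs=[0.5, 0.25, 0.125, 0.0625])
# an impulse input reproduces the coefficients
```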


The angles αk are stored constants which depend on the value of m.

B. ROTATION MATRIX
A rotation matrix is a matrix that is used to perform a rotation in Euclidean space. It turns the whole space around the origin.

Fig. 3. Transposed architecture of 4-tap FIR filter

Tclk = TM + TA
Titer = Tsample = Tclk = TM + TA
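The transposed architecture above can be sketched in software; each stage adds its product to a carried partial sum, so the per-sample critical path is one multiply plus one add (TM + TA) rather than TM + 3TA. Coefficients are illustrative:

```python
# Transposed-form FIR: pipeline registers sit between adder stages, so no
# sample passes through more than one multiplier and one adder per clock.
def fir_transposed(x, coeffs):
    state = [0.0] * (len(coeffs) - 1)  # registers between the adder stages
    y = []
    for sample in x:
        products = [c * sample for c in coeffs]
        y.append(products[0] + (state[0] if state else 0.0))
        # each register loads its product plus the next register's old value
        state = [products[i + 1] + (state[i + 1] if i + 1 < len(state) else 0.0)
                 for i in range(len(state))]
    return y

out = fir_transposed([1, 0, 0, 0, 0], coeffs=[0.5, 0.25, 0.125, 0.0625])
```

The output sequence matches the direct-form filter; only the order of the additions changes.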

There are two basis vectors. The first one, (1, 0), goes to (cos θ, sin θ), whose length is still one; it lies on the θ-line. The second basis vector, (0, 1), rotates into (-sin θ, cos θ).

III. CORDIC ALGORITHM

A. Basic of CORDIC

CORDIC (Coordinate Rotation Digital Computer) is a simple and efficient algorithm to calculate hyperbolic and trigonometric functions. The trigonometric CORDIC algorithms were originally developed as a digital solution for real-time navigation problems. It is used when no hardware multiplier is available, as the only operations it requires are addition, subtraction, bit shifts, and table lookup.

X' = X cos θ - Y sin θ
Y' = X sin θ + Y cos θ

In matrix representation, the rotation matrix is

R(θ) = [ cos θ  -sin θ ; sin θ  cos θ ]
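As a quick numeric check of the rotation equations above (illustrative, not part of the paper's hardware), the two basis vectors map exactly as stated:

```python
import math

# Apply the 2-D rotation matrix to a point (x, y).
def rotate(x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

theta = math.pi / 6
e1 = rotate(1.0, 0.0, theta)  # -> (cos(theta), sin(theta))
e2 = rotate(0.0, 1.0, theta)  # -> (-sin(theta), cos(theta))
```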

The simple form of CORDIC is based on the observation that if a unit-length vector with end point at (x, y) = (1, 0) is rotated by an angle z, its new end point will be at (x, y) = (cos z, sin z).

Fig. 4. Calculation of the desired angle z

The generalized CORDIC algorithm consists of three iterative equations:

xk+1 = xk - m·σk·yk·2^-k
yk+1 = yk + σk·xk·2^-k
zk+1 = zk - σk·αk
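A rotation-mode circular CORDIC (m = 1) can be built directly from these iterative equations, with angle constants αk = atan(2^-k). Floating point and the iteration count here are illustrative; hardware would use fixed-point shift-and-add:

```python
import math

# Rotation-mode circular CORDIC: starting from (K, 0) and rotating by z
# yields (cos z, sin z), where K corrects the accumulated CORDIC gain.
def cordic_sincos(z, iterations=32):
    gain = 1.0
    for k in range(iterations):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * k))  # K = prod 1/sqrt(1 + 2^-2k)
    x, y = gain, 0.0  # pre-scale so the final vector has unit length
    for k in range(iterations):
        sigma = 1.0 if z >= 0 else -1.0  # sigma_k = sgn(z_k)
        x, y = x - sigma * y * 2.0 ** -k, y + sigma * x * 2.0 ** -k
        z -= sigma * math.atan(2.0 ** -k)
    return x, y  # (cos z0, sin z0) within the convergence range |z0| < 1.74

c, s = cordic_sincos(0.7)
```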

Fig. 5. Representation of Rotation Matrix


xk+1 = xk = x0            ... (i)
yk+1 = yk + σk·x0·2^-k    ... (ii)
zk+1 = zk - σk·2^-k       ... (iii)

C. Modes of CORDIC
CORDIC iterations can be used in two operating modes, namely the rotation mode (RM) and the vectoring mode (VM); they differ only in how the direction of the micro-rotations is chosen.

(with αk = 2^-k)
Let the starting values be x0, y0 = 0, and z0. Then equations (ii) and (iii) imply that

yn+1 = Σ(k=0..n) σk·x0·2^-k

and

zn+1 = z0 - Σ(k=0..n) σk·2^-k

so that

z0 = zn+1 + Σ(k=0..n) σk·2^-k.

Hence, when the iterations drive zn+1 to zero, yn+1 = x0·Σ σk·2^-k = x0·z0; that is, the iterations compute the product of x0 and z0.
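The derivation above can be checked numerically: driving z toward zero accumulates y = x0·z0 using only shifts and adds. This sketch assumes |z0| ≤ 2, the convergence range of the 2^-k sequence, and is illustrative only:

```python
# Linear-mode (m = 0) CORDIC multiplication: each step adds or subtracts
# x0 * 2^-k from y while z is driven toward zero by the same 2^-k term.
def cordic_multiply(x0, z0, iterations=32):
    y, z = 0.0, z0
    for k in range(iterations):
        sigma = 1.0 if z >= 0 else -1.0  # sigma_k = sgn(z_k)
        y += sigma * x0 * 2.0 ** -k      # y accumulates x0 * (selected 2^-k)
        z -= sigma * 2.0 ** -k           # z is driven toward zero
    return y  # approximately x0 * z0

product = cordic_multiply(0.75, 0.5)  # approximately 0.375
```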

(i) Rotation mode: In rotation mode, a vector V0 is rotated by an angle z to obtain a new vector V1. In this mode, the direction of each micro-rotation σk is determined by the sign of zk:

σk = sgn(zk) = +1 if zk ≥ 0, and -1 if zk < 0.

(ii) Vectoring mode: In vectoring mode, the CORDIC rotator rotates the input vector towards the positive x-axis in order to minimize the y-component of the residual vector:

σk = -sgn(yk) = +1 if yk < 0, and -1 if yk ≥ 0.

IV. ARCHITECTURES OF FIR FILTER USING CORDIC SUBSYSTEM

A. Transposed Architecture:
The CORDIC algorithm is suitable for performing the multiplication operation, so the multiplier in the FIR architecture is replaced by a CORDIC subsystem (for the multiplication operation). The redrawn transposed architecture of the FIR filter is shown in Fig. 6.

D. CORDIC as Multiplier
For multiplication, CORDIC is operated in rotation mode. In this mode, m = 0, σk = sgn(zk), and αk = 2^-k. Then the CORDIC equations are rewritten as:

Fig. 6. Transposed architecture of FIR filter using CORDIC subsystem


B. To increase the sampling frequency (parallel processing technique is used):

Parallel processing is one of the techniques used for increasing the processing speed of a non-recursive digital filter. In parallel processing, the hardware for the original serial system is duplicated, and the resulting system is a multiple-input multiple-output (MIMO) system. The clock frequency stays the same, while the sampling frequency is increased. To obtain a parallel processing structure, the SISO system must be converted into a MIMO system.

Fig. 8. Parallel architecture (2-level) of FIR filter (with CORDIC subsystem)

The critical path of the block or parallel processing


system has remained unchanged and the clock period
must satisfy:

y(2k) = a x(2k) + b x(2k-1) + c x(2k-2) + d x(2k-3)
y(2k+1) = a x(2k+1) + b x(2k) + c x(2k-1) + d x(2k-2)

These equations describe a parallel system with 2 inputs per clock cycle (i.e., a level of parallel processing L = 2). Here k denotes the clock cycle. As can be seen, at the k-th clock cycle the 2 inputs x(2k) and x(2k+1) are processed and 2 samples are generated at the output. Parallel processing systems are also referred to as block processing systems, and the number of inputs processed in a clock cycle is referred to as the block size.
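The two block-processing equations above can be sketched as follows; the coefficients and the impulse input are illustrative:

```python
# 2-parallel (L = 2) FIR: each iteration consumes x(2k) and x(2k+1)
# and emits y(2k) and y(2k+1), directly implementing the two equations.
def fir_2parallel(x, a, b, c, d):
    pad = [0, 0, 0] + list(x)  # x(-1) = x(-2) = x(-3) = 0
    y = []
    for k in range(len(x) // 2):
        i = 2 * k + 3  # index of x(2k) in the padded list
        y.append(a * pad[i] + b * pad[i - 1] + c * pad[i - 2] + d * pad[i - 3])
        y.append(a * pad[i + 1] + b * pad[i] + c * pad[i - 1] + d * pad[i - 2])
    return y

out = fir_2parallel([1, 0, 0, 0], a=0.5, b=0.25, c=0.125, d=0.0625)
# matches the serial filter's impulse response
```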

Fig. 7. Multiple Input Multiple Output system

But since 2 samples are processed in 1 clock cycle, the iteration period is half the clock period:

Titer = Tsample = Tclk / 2

V. REDUCTION IN POWER CONSUMPTION

Parallel processing can reduce the power consumption of a system by allowing the supply voltage to be reduced. In an L-parallel system, the charging capacitance does not usually change, while the total capacitance is increased by L times. In order to maintain the same sample rate, the clock period of the L-parallel circuit must be increased to L·Tseq, where Tseq is the propagation delay of the sequential circuit given in Fig. 5. This means that Ccharge is charged in time L·Tseq rather than Tseq, so the supply voltage can be reduced from V0 to β·V0, where β < 1.
The propagation delay of the original filter is

The parallel processing architecture (2-level) is drawn by using the equations for y(2k) and y(2k+1), as shown in Fig. 8.


while the propagation delay of the L-parallel system is

From equations (1) and (2),

The only feasible supply voltage for the 2-parallel filter is β·V0 = 2.21496 V. (The other root, β = 0.0277, is discarded since β·V0 = 0.09141 V is less than the threshold voltage.) Power saving = β² = 0.450509 = 45.05%.
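The numbers above can be reproduced by solving the quadratic in β; the equation and constants come from the paper, and the script itself is just a numeric check:

```python
import math

# Solve 76.232 b^2 - 53.28 b + 1.4175 = 0 for the voltage scaling factor
# beta, then form the power ratio beta^2.
A, B, C = 76.232, -53.28, 1.4175
disc = math.sqrt(B * B - 4 * A * C)
roots = ((-B + disc) / (2 * A), (-B - disc) / (2 * A))
beta = max(roots)  # the root 0.0277 is rejected: 0.0277 * 3.3 V < Vt
power_ratio = beta ** 2
```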
where Pseq is the power consumption of the sequential system, calculated as

Pseq = Ctotal · V0² · f

Therefore, the power consumption of the L-parallel system has been reduced by a factor of β² compared with the original sequential system.
Power calculation: Figs. 6 and 8 above show the 4-tap FIR filter architecture (with CORDIC subsystem) and its 2-parallel version, respectively. For power-calculation purposes, assume that the capacitance of the CORDIC subsystem (for multiplication) is 6 times that of an adder, and that the parallel filter is operated at the supply voltage β·V0. So,

Ccharge,seq = Ccordic + CA = 7CA
Ccharge,parallel = Ccordic + 2CA = 8CA

VI. CONCLUSION

By using the transposed structure, the critical path is reduced in comparison to the direct-form structure of an FIR filter. The parallel processing architecture of the transposed structure with a CORDIC subsystem gives further improved performance in terms of speed and power saving. In the future, the performance may be improved by using a more reliable version of the CORDIC multiplier.
The propagation delays of the two filters follow from equation (2); substituting V0 = 3.3 V and Vt = 0.45 V gives

76.232 β² - 53.28 β + 1.4175 = 0

and solving yields β = 0.6712 or β = 0.0277.

REFERENCES
[1] J. E. Volder, "The CORDIC Trigonometric Computing Technique," IRE Trans. Electronic Computers, vol. EC-8, pp. 330-334, Sept. 1959.
[2] E. Antelo, J. Villalba, J. D. Bruguera, and E. L. Zapata, "High performance rotation architecture based on the radix-4 CORDIC algorithm," IEEE Trans. Computers, vol. 46, no. 8, pp. 855-870, Aug. 1997.
[3] J.-M. Muller, Elementary Functions: Algorithms and Implementation, Boston, MA: Birkhauser, 2006.
[4] Pramod K. Meher, "50 years of CORDIC: algorithms, architectures and applications," IEEE Trans. on Circuits & Systems-I: Regular Papers, vol. 56, no. 9, Sep. 2009.
[5] J. E. Volder, "The Birth of CORDIC," J. VLSI Signal Processing, vol. 25, pp. 101-105, 2000.
[6] Y. H. Hu and S. Naganathan, "An angle recoding method for CORDIC algorithm implementation," IEEE Trans. Comput., vol. 42, no. 1, pp. 99-102, Jan. 1993.
[7] J. S. Walther, "A Unified Algorithm for Elementary Functions," in Proc. 38th Spring Joint Computer Conference, Atlantic City, NJ, 1971, pp. 379-385.
[8] J. S. Walther, "The Story of Unified CORDIC," J. VLSI Signal Processing, vol. 25, no. 2, pp. 107-112, June 2000.
[9] D. S. Cochran, "Algorithms and Accuracy in the HP-35," Hewlett-Packard J., pp. 1-11, June 1972.
[10] Vitit Kantabutra, "On hardware for computing exponential and trigonometric functions," IEEE Trans. Computers, vol. 45, no. 3, pp. 328-339, 1996.


A Study of Image Resolution Techniques for Satellite Images

Neha Gupta1, Ashutosh Kharb2, Seema Kharb3
1M.Tech Scholar, Department of ECE, BMIET, Sonepat
2Assistant Professor, ECE, BMIET, Sonepat
3Research Scholar, CSED, DCRUST Murthal
E-mail: gneha9045@gmail.com
Abstract Image processing is a research area whose popularity is increasing rapidly. It is a wide area consisting of various sub-branches, with applications in fields such as remote sensing, graphics and the printing industry. Image enhancement is one such branch of image processing and is especially popular among researchers. In image processing, an image with higher resolution gives better results and is desirable in many applications; a good quality, i.e. high resolution, image produces excellent results. This paper provides a brief overview of image enhancement using various resolution techniques.

Keywords Digital image processing, Image enhancement, Resolution techniques, DCT, DWT, SVD.
I. INTRODUCTION
Digital image processing is a subfield of digital signal processing that deals with the manipulation of digital images by the use of digital computers [5]. In digital image processing, computer algorithms are used to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing.
Digital image processing provides improved pictorial information for human interpretation and processing of image data for storage and transmission [2]. Image processing has a wide range of applications such as remote sensing, medical imaging, forensic studies, textiles, material science, military, the film industry, document processing and many more.

Image processing is not a one-step process [1]. It includes: image preprocessing, image enhancement, image segmentation, feature extraction and image classification.
The first step of digital image processing comprises a number of different operations known as image preprocessing. The next step is image enhancement, which is used to emphasise and sharpen image features for display and analysis; it improves the quality and appearance of an image that suffers from low contrast and noise [3].
Image enhancement methods for a dimmed image are broadly divided into two categories, i.e. direct and indirect enhancement methods. In direct enhancement methods the contrast of an image is specified directly with contrast terms, while in indirect enhancement methods the image contrast is enhanced by redistributing the probability density, for example by histogram modification [6].
Image enhancement techniques can be applied in two domains, viz. the spatial domain and the frequency domain. The spatial domain deals with the image pixels directly, i.e. the pixel values are manipulated to achieve the desired results [4]. The image enhancement process in the spatial domain is divided into four categories: 1) contrast manipulation/intensity transformation, 2) image smoothing, 3) image sharpening, 4) image resampling. In the frequency domain the Fourier transform is used; it is performed in order to modify the image brightness, contrast or distribution of gray levels [1]. Conversion between the spatial representation and the wave-number representation is known as the Fourier transform [5]. It is divided into three categories: 1) image filtering, 2) image smoothing, 3) image sharpening.


Figure 1: Effect of image enhancement

This paper focuses on various image resolution techniques used in image enhancement and is organised as follows. Section II describes the various image enhancement methods and their comparison, followed by the conclusion and future scope.

II. IMAGE RESOLUTION ENHANCEMENT METHODS

Resolution is an important parameter in image and video processing applications such as resolution enhancement of video and feature extraction [9]. Some images captured over a narrower spectral band have low spatial resolution, so there is a need to increase the resolution for better results. Interpolation and the Fourier transform are two conventional techniques used for resolution enhancement. Interpolation is a method in image processing that increases the number of pixels. Interpolation-based resolution enhancement loses high frequency components, which is why Fourier and wavelet transforms are used instead. Interpolation techniques are of three types: nearest neighbour, bilinear and bicubic. The Fourier transform is used to improve the resolution of images that are degraded due to blurring. This section discusses some techniques that deal with the problem of low resolution, low contrast images or images having blurring effects.

A. Discrete Wavelet Transform Based Resolution (DWT)

The DWT is a technique that captures both the frequency and location information of an image [10]. The DWT technique has been employed as a solution to preserve the high frequency components of satellite images, and is broadly utilised for performing image interpolation. The DWT decomposes a low resolution image into four sub-bands: LL (low-low), LH (low-high), HL (high-low) and HH (high-high). All these sub-bands are then combined or interpolated. A difference image is generated by discarding the LL band from the original low resolution image. This difference image is then added to the higher band components of the image. Finally, the IDWT is applied to combine the estimated high resolution images with the input images to get the HR image.
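The sub-band split and perfect reconstruction that DWT-based enhancement relies on can be illustrated with a one-level 2-D Haar transform. This is a minimal NumPy sketch, not the longer wavelet filters used in the cited works, and the function names are our own:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: split an image into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]  # even rows, even columns
    b = img[0::2, 1::2]  # even rows, odd columns
    c = img[1::2, 0::2]  # odd rows, even columns
    d = img[1::2, 1::2]  # odd rows, odd columns
    LL = (a + b + c + d) / 4   # coarse approximation
    LH = (a + b - c - d) / 4   # detail sub-band
    HL = (a - b + c - d) / 4   # detail sub-band
    HH = (a - b - c + d) / 4   # detail sub-band
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: rebuild the image from its four sub-bands."""
    h, w = LL.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)          # each sub-band is 2x2
restored = haar_idwt2(LL, LH, HL, HH)    # equals img exactly
```

An enhancement scheme of the kind described above would interpolate the detail sub-bands before applying the inverse transform; here only the invertible decomposition itself is shown.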

B. Stationary Wavelet Transform Based Resolution (SWT)

The SWT is also used in various image processing applications. It is used to overcome the lack of translation invariance of the DWT, and it is redundant. The SWT is another variant of the wavelet transform, the same as the DWT except that it does not down-sample the image [11]; hence all the frequency bands have the size of the original image. Like the DWT, it divides the input image into different sub-bands: high-pass and low-pass filters are applied to the data at each level, producing two new sequences of the same length as the original. The filters are modified at each level by padding them with zeros. After that, the images are combined using the inverse process to get the resolution-enhanced image; the technique thus generates a super-resolved image [12].
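The key property, that an undecimated transform keeps every band at the input size while still reconstructing perfectly, can be shown with a toy 1-D Haar pair. This is a simplified stand-in for the zero-padded filter banks described above; circular extension is an assumption of the sketch:

```python
import numpy as np

def swt_haar_level1(x):
    """One-level undecimated Haar split: no down-sampling, so both
    output bands have the same length as the input signal."""
    shifted = np.roll(x, -1)      # circular (periodic) extension
    approx = (x + shifted) / 2    # low-pass band
    detail = (x - shifted) / 2    # high-pass band
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 2.0])
approx, detail = swt_haar_level1(x)
print(approx.shape == x.shape)          # True: band size equals input size
print(np.allclose(approx + detail, x))  # True: perfect reconstruction
```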
C. Singular Value Decomposition Transform Based Resolution (SVD)

SVD is used to improve the brightness of an image [18]. The SVD method transforms a matrix A into the product USVᵀ, where U and V are orthogonal matrices of size m×m and n×n and S is an m×n diagonal matrix. This allows us to refactor a digital image into three matrices; such refactoring using the singular values allows us to represent the image with a smaller set of values, which can preserve useful features of the original image but use less space in memory. SVD provides intensity information and is used for data reduction, feature detection and enhancement purposes [8].
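Because scaling the singular values scales the reconstructed intensities, SVD-based brightening can be sketched in a few lines of NumPy. The gain value and function name below are illustrative assumptions, not taken from the cited method:

```python
import numpy as np

def svd_brighten(image, gain=1.2):
    """Brighten an intensity image by scaling its singular values.
    Factor A = U S V^T, multiply S by `gain`, then rebuild."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    brightened = U @ np.diag(s * gain) @ Vt
    return np.clip(brightened, 0, 255)   # keep the valid 8-bit intensity range

img = np.array([[50.0, 80.0], [120.0, 200.0]])
out = svd_brighten(img, gain=1.2)        # intensities scaled by 1.2
```

Uniformly scaling all singular values, as here, is mathematically equivalent to a global gain; practical schemes such as the DWT-SVD methods cited in this paper typically adjust the singular values non-uniformly relative to a reference image.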


Table 1: Basic advantages and disadvantages of DWT, SVD, DT-CWT and SWT [11, 12, 15, 16, 17]

Resolution Technique | Advantage                           | Disadvantage
DWT                  | Gives a sharper image               | Loses high frequency contents
SVD                  | Improves the brightness of an image | Cannot give a clear image
DT-CWT               | Reduces artifacts                   | Not very suitable for hyperspectral images
SWT                  | Redundant                           | Distortion may occur to the image

Figure 2: Factoring A into USVᵀ

D. Dual Tree Complex Wavelet Transform Based Resolution (DT-CWT)

The DT-CWT is used to obtain the real and imaginary parts of complex wavelet coefficients [12]. It overcomes the problems of the SWT: it is shift-invariant, directionally selective, and computationally efficient. In this technique, the DT-CWT is used to decompose an input low resolution image into different sub-band images [15]. Then the high frequency sub-band images and the input image are interpolated, followed by combining all these images to generate a new high resolution image using the inverse DT-CWT.
E. Curvelet Transform Based Resolution (CVT)

The wavelet transform fails to represent objects having randomly oriented edges and curves, and it has been found poor at representing line singularities [12]. The CVT was thus developed to overcome these limitations. It uses only a small number of coefficients and handles curve discontinuities well. This transform can be decomposed into four steps [14]: sub-band decomposition, smooth partitioning, renormalisation, and ridgelet analysis. By reversing the step sequence the original image can be reconstructed, which is called the inverse curvelet transform. CVT coefficients can be modified in order to enhance edges in an image. Table 1 lists the basic advantages and disadvantages of the resolution techniques discussed so far.

F. Contourlet Transform Based Resolution (CT)

The CT is a discrete-domain multiresolution and multidirection expansion using non-separable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and thus it is named the Contourlet Transform [16]. It is a true two-dimensional transform that can capture the intrinsic geometrical structure of an image. Like the DWT, however, the CT is not shift invariant because of its down-sampling operation.
G. Non Subsampled Contourlet Transform Based Resolution (NSCT)

The NSCT, the non-subsampled variant of the CT, is implemented via the à trous algorithm, resulting in a flexible multiscale and multidirection expansion [12]. It is built upon non-subsampled multiscale pyramids and non-subsampled directional filter banks, so a fully shift-invariant version of the CT is achieved [17]. It is found to be very efficient in image denoising and image enhancement.

Table 2 below provides a comparison of the various resolution techniques based on several performance parameters.


Table 2: Comparison between DT-CWT, SWT, CT and NSCT [11, 12, 15, 16, 17]

PARAMETERS              | DT-CWT                       | SWT                               | CT                                                        | NSCT
Redundancy              | Limited redundancy           | Redundant                         | Redundant                                                 | Highly redundant
Efficiency              | More efficient than DWT, SWT | Inefficient in multiple dimensions| Efficient in directional multiresolution image representation | Efficient in image denoising and enhancement
Shift variant/invariant | Shift invariant              | Shift invariant                   | Not shift invariant                                       | Fully shift invariant
Interpolability         | Inherently interpolable      | Interpolable                      | Not interpolable                                          | Highly interpolable

III. CONCLUSION

This paper provides a brief overview of various image resolution techniques that can be used for image enhancement of low resolution satellite images. Based on the discussion in the previous section, it can be concluded that the wavelet domain is used for improving the resolution of images, and the complex wavelet transform is used for satellite image resolution improvement. When the DWT is combined with other techniques it gives more accuracy and better results. Among the above techniques, the SVD transform is simple, robust, and easy and fast to implement; the DWT down-samples the image; the CVT is invertible, stable and redundant.

IV. FUTURE WORK

In future work, these resolution techniques and their combinations, such as DWT-SWT, DWT-SVD and DWT-DCT, will be combined with bio-inspired algorithms for optimisation of the results. By applying the metaheuristic algorithm that is best for image enhancement, we will try to optimise the contrast and brightness of an image, measuring quality by the mean, variance, peak signal to noise ratio and mean square error.

REFERENCES
[1] Bernd Jahne, Digital Image Processing, 6th Revised and Extended Edition, Springer, 2005.
[2] B. Chitravedi, P. Srimathi, "An Overview on Image Processing Technique," International Journal of Innovative Research in Computer and Communication Engineering, Vol. 2, Issue 11, November 2014.
[3] Jasbir Singh Lamba, Rajiv Kapoor, "A Study on Various Image Enhancement Techniques: A Critical Review," Journal of Indian Research (ISSN: 2321-4155), Vol. 3, No. 1, January-March 2015.
[4] Apurba Das, Guide to Signal and Patterns in Image Processing, Springer, 2015.
[5] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Third Edition, Pearson, 2008.
[6] Shih-Chia Huang et al., "Efficient Contrast Enhancement Using Adaptive Gamma Correction with Weighting Distribution," IEEE Transactions on Image Processing, Vol. 22, No. 3, March 2013.
[7] Meenakshi Chaudhary, Anupma Dhamija, "A Brief Study of Various Wavelets," Journal of Global Research in Computer Science, Vol. 4, No. 4, April 2013.
[8] A.K. Bhandari et al., "Cuckoo Search Algorithm Based Satellite Image Contrast and Brightness Enhancement Using DWT-SVD," ISA Transactions, 2014.
[9] G.M. Khaire, R.P. Shelkikar, "Resolution Enhancement of Images with Interpolation and DWT-SWT Wavelet Domain Components," International Journal of Application or Innovation in Engineering and Management, Vol. 2, Issue 9, September 2013.
[10] O. Harikrishna, A. Maheshwari, "Satellite Image Resolution Enhancement Using DWT Technique," International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Vol. 2, Issue 5, November 2015.
[11] P. Bala Srinivas et al., "Comparative Analysis of DWT, SWT, DWT & SWT and DTCWT Based Satellite Image Resolution Enhancement," IJECT, Vol. 5, Issue 4, Oct-Dec 2014.
[12] Shutao Li, Bin Yang, Jianwen Hu, "Performance Comparison of Different Multiresolution Transforms for Image Fusion," Information Fusion 12 (2011) 74-84.
[13] K. Narasimhan et al., "Comparison of Satellite Image Enhancement Techniques in Wavelet Domain," Research Journal of Applied Sciences, Engineering and Technology 4(24): 5492-5496, 2012.
[14] Jean-Luc Starck et al., "Gray and Color Image Contrast Enhancement by the Curvelet Transform," IEEE Transactions on Image Processing, Vol. 12, No. 6, June 2003.
[15] Hasan Demirel and Gholamreza Anbarjafari, "Satellite Image Resolution Enhancement Using Complex Wavelet Transform," IEEE Geoscience and Remote Sensing Letters, Vol. 7, No. 1, January 2010.
[16] Minh N. Do et al., "The Contourlet Transform: An Efficient Directional Multiresolution Image Representation," IEEE Transactions on Image Processing.
[17] Arthur L. da Cunha et al., "The Nonsubsampled Contourlet Transform: Theory, Design, and Applications," IEEE Transactions on Image Processing, Vol. 15, No. 10, October 2006.
[18] Lijie Cao, "Singular Value Decomposition Applied to Digital Image Processing," Division of Computing Studies, Arizona State University Polytechnic Campus, Mesa, Arizona 85212.


Thyristor Controlled Series Capacitor: A FACTS Device

Neeraj Ku. Jain
M.Tech. Student, EEE, AL FALAH University, Faridabad, Haryana
E-mail: neeraj.jain2163@gmail.com
Abstract Nowadays power systems are undergoing numerous changes and becoming more complex from the operation, control and stability maintenance standpoint as they meet ever increasing load demand. As modern power systems keep growing to meet our increasing needs, it is getting more and more complex to provide stability and control. In this paper an overview of the general types of FACTS devices and the performance of the TCSC is given. FACTS devices play an important role in improving the performance of a power system. In this paper we discuss FACTS devices, of which the TCSC is one; such a FACTS device, a Thyristor Controlled Series Capacitor (TCSC), is implemented for a transmission line. The TCSC circuit and characteristics are discussed in brief.

Keywords: FACTS, TCSC, power quality, firing angle
I. INTRODUCTION
The potential benefits of Flexible AC Transmission Systems (FACTS) are now widely acknowledged by the power system engineering community [1, 2]. Two Thyristor Controlled Series Compensation devices (TCSC) [3, 4], along with a Thyristor Switched Series Capacitor (TSSC), have been in operation for some time in North America [5]. Two other TCSCs were commissioned in early 1999 in South America [6]. The short-term need to assess the impact of FACTS technology has led to R&D efforts on modeling, methodologies and software for static and dynamic analyses, and control strategies. Dynamic studies must contemplate both low and high frequency phenomena, calling for the use of different computer tools.
II. INTRODUCTION OF FLEXIBLE AC TRANSMISSION SYSTEMS (FACTS)
Series devices compensate reactive power; through their influence on the effective impedance of the line they have an influence on stability and power flow [7]. The SSSC is a device which has so far not been built at transmission level, because series compensation and the TCSC fulfil all of today's requirements more cost-efficiently. But series applications of the Voltage Source Converter have been implemented for power quality applications at distribution level, for instance to secure factory infeeds against dips and flicker. These devices are called the Dynamic Voltage Restorer (DVR) or Static Voltage Restorer (SVR) [7]. A capacitive reactance compensator consists of a series capacitor bank shunted by a thyristor controlled reactor in order to provide a smoothly variable series capacitive reactance [8].
The basic applications and advantages of FACTS
devices are[7]:
1. Power flow control.
2. Increase of transmission capability.
3. Voltage control.
4. Reactive power compensation.
5. Stability improvement.
6. Power quality improvement.
7. Power conditioning.
8. Flicker mitigation.
9. Interconnection of renewable and distributed
generation and storages [7].
10. Rapid, continuous control of the transmission line
reactance [9].
III. FACTS CONTROLLERS
For maximum utilization of any FACTS device in power system planning, operation and control, a power flow solution of the network that contains any of these devices is a fundamental requirement [10]. FACTS controllers can be divided into four main categories, as shown in Figure 1 [8]. It is significant to appreciate that the series connected controller impacts the driving voltage and


hence the current and power directly. Therefore, if the purpose of the application is to control the current/power flow and damp oscillations, the series controller for a given MVA size is several times more powerful than the shunt controller. A FACTS controller may be based on thyristor devices with no gate turn-off capability (only turn-on) [8].

Table 1: Constraint equations and control variables for FACTS controllers [14].

Fig. 1. Classification of FACTS controllers [8].

Fig. 2. General symbol of a FACTS controller [8].

In principle all the series controllers inject a voltage in series with the line, as shown in Figure 2. A series connected controller impacts the driving voltage and hence the current and power flow directly. The Static Synchronous Series Compensator (SSSC) and Thyristor Controlled Series Compensator (TCSC) are examples of series controllers [15].

IV. OPERATION OF TCSC

The basic operation of the TCSC can be easily explained from circuit analysis. It consists of a series compensating capacitor shunted by a thyristor controlled reactor (TCR). The TCR is a variable inductive reactance XL (Figure 3) controlled by the firing angle α. The variation of XL with respect to α is given by equation (1).

Fig. 3. Equivalent circuit of TCR

XL(α) = XL π / [2(π − α) + sin 2α]                (1)

As α varies over the range 90° to 180°, XL(α) varies from the actual reactance XL up to infinity. This controlled reactor is connected across the series capacitor, so that a variable capacitive reactance (Figure 2) is available across the TCSC to modify the transmission line impedance. The effective TCSC reactance XTCSC with respect to α is given by equation (2).

XTCSC(α) = −XC + C1{2(π − α) + sin[2(π − α)]} − C2 cos²(π − α){ϖ tan[ϖ(π − α)] − tan(π − α)}                (2)


where
XLC = XC XL / (XC − XL),
C1 = (XC + XLC) / π,
C2 = 4 XLC² / (XL π),
ϖ = (XC / XL)^(1/2).

A TCSC is a series controlled capacitive reactance that can provide continuous control of power on the ac line over a wide range. It can provide many benefits for a power system, including controlling power flow in the line. The TCSC concept uses an extremely simple main circuit: the capacitor is inserted directly in series with the transmission line and the thyristor-controlled inductor is mounted directly in parallel with the capacitor. This makes the TCSC simple, and its operation easy to understand.

Fig. 4 Equivalent circuit of TCSC

No interfacing equipment, e.g. high voltage transformers, is required. This makes the TCSC much more economical than some other competing FACTS technologies.
TCSC controllers use a thyristor-controlled reactor (TCR) in parallel with capacitor segments of the series capacitor bank. The combination of TCR and capacitor allows the capacitive reactance to be smoothly controlled over a wide range.
Figure 5 shows the impedance characteristic curve of a TCSC device. It is drawn between the effective reactance of the TCSC and the firing angle α [16].

Fig. 5. Impedance vs. firing angle characteristic curve

The net reactance of the TCR, XL(α), is varied from its minimum value XL up to a maximum value of infinity. Likewise the effective reactance of the TCSC starts increasing from the XL value until the occurrence of the parallel resonance condition XL(α) = XC, where theoretically XTCSC is infinite. This region is the inductive region. Further increase of XL(α) gives the capacitive region, which starts decreasing from the infinity point down to the minimum value of capacitive reactance XC. The impedance characteristic of the TCSC shows that both capacitive and inductive regions are possible by varying the firing angle α as follows:

90° < α < αL : Inductive region
αL < α < αC : Resonance region
αC < α < 180° : Capacitive region

While selecting the inductance, XL should be sufficiently smaller than the capacitive reactance XC to obtain both effective inductive and capacitive reactance regions across the device. If XC were smaller than XL, only the capacitive region would be possible in the impedance characteristic: in any shunt network, the effective reactance follows the smaller reactance present in the branch, so only one capacitive reactance region would appear. Also, XL(α) should not be equal to XC, or else a resonance develops that results in infinite impedance, an unacceptable condition. Note that while varying XL(α), the condition XL(α) = XC should not be allowed to occur. The resonance condition causes a high voltage drop across the TCSC, which can be used as a constraint in load flow analysis.
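Equation (2) can be explored numerically to confirm the region behaviour described above. The XC and XL values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical component values (ohms), chosen so that XL < XC.
XC, XL = 15.0, 2.5
XLC = XC * XL / (XC - XL)      # combined reactance term
C1 = (XC + XLC) / np.pi
C2 = 4 * XLC**2 / (XL * np.pi)
w = np.sqrt(XC / XL)           # the factor written as "varpi" in equation (2)

def x_tcsc(alpha_deg):
    """Effective TCSC reactance from equation (2), alpha in degrees."""
    s = np.pi - np.deg2rad(alpha_deg)
    return (-XC + C1 * (2 * s + np.sin(2 * s))
            - C2 * np.cos(s) ** 2 * (w * np.tan(w * s) - np.tan(s)))

print(x_tcsc(100.0) > 0)   # True: inductive region near alpha = 90 deg
print(x_tcsc(179.9))       # close to -XC: the TCR barely conducts
```

Sweeping α between 90° and 180° with these values reproduces the inductive, resonance and capacitive regions of Figure 5.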
V. CONCLUSION

Series capacitors have been successfully utilized for many years in electric power networks. With series compensation, it is possible to increase the transfer capability of power transmission systems at a favourable asset cost and in a short time compared to the building of additional lines. This is due to the inherent ability of series capacitors to achieve increased dynamic stability of power transmission systems. Series compensation is also capable of enhancing voltage regulation and the reactive power balance, which improves load sharing between parallel lines. In addition to the above advantages, the TCSC is capable of reducing the risk of subsynchronous resonance. The TCSC also offers dynamic power flow control, which can be used to damp active power oscillations. It is also highly efficient in post-contingency stability improvement.


REFERENCES
[1] IEEE FACTS Working Group 15.05.15 in cooperation with CIGRE, "FACTS Overview," IEEE Special Publication 96-TP-108, 1996.
[2] Task Force on FACTS Applications of the IEEE FACTS Working Group 15.05.15, "FACTS Applications," IEEE Special Publication 96-TP-116-0, 1996.
[3] R.J. Piwko, C.A. Wegner, B.L. Damsky, B.C. Furumasu, J.D. Eden, "The Slatt Thyristor Controlled Series Capacitor Project: Design, Installation, Commissioning, and System Testing," CIGRE paper 14-104, Paris, 1994.
[4] N. Christl, R. Hedin, K. Sadek, P. Lutzelberger, P.E. Krause, S.M. McKenna, A.H. Montoya, D. Torgerson, "Advanced Series Compensation (ASC) with Thyristor Controlled Impedance," CIGRE Paper 14/37/38-05, Paris, 1992.
[5] A.J.F. Keri, B.J. Ware, R.A. Byron, M. Chamia, P. Halvarsson, L. Angquist, "Improving Transmission System Performance Using Controlled Series Capacitors," CIGRE Paper 14/37/38-07, Paris, 1992.
[6] C. Gama, R.L. Leoni, J.C. Salomiio, J.B. Gribel, R. Fraga, M.J.X. Eiras, W. Ping, A. Ricardo, J. Cavalcanti, "Brazilian North-South Interconnection: Application of Thyristor Controlled Series Compensation (TCSC) to Damp Inter-Area Oscillation Mode," SEPOPE Conference, Brazil, May 1998.
[7] Venu Yarlagadda, B.V. Sankar Ram and K.R.M. Rao, "Automatic Control of Thyristor Controlled Series Capacitor (TCSC)," Vol. 2, Issue 3, May-Jun 2012, pp. 444-449.
[8] Narayana Prasad Padhyay and M.A. Abdel Moamen, "Power flow control and solutions with multiple and multi-type FACTS devices," Electric Power Systems Research, Vol. 74, 2005, pp. 341-351.
[9] Douglas J. Gotham, G.T. Heydt, "Power flow control and power flow studies for systems with FACTS devices," IEEE Transactions on Power Systems, Vol. 13, No. 1, February 1998, pp. 60-65.
[10] P. Yan and A. Sekar, "Steady state analysis of power system having multiple FACTS devices using line flow based equations," IEE Proc. Transm. Distrib., Vol. 152, January 2005, pp. 31-39.
[11] C.R. Fuerte-Esquivel, E. Acha, "Newton-Raphson algorithm for the reliable solution of large power networks with embedded FACTS devices," IEE Proc. Gener. Transm. Distrib., Vol. 143, No. 5, Sep 1996, pp. 447-454.
[12] Ying Xiao, Y.H. Song and Y.Z. Sun, "Power flow control approach to power systems with embedded FACTS devices," IEEE Transactions on Power Systems, Vol. 17, No. 4, Nov. 2002, pp. 943-950.
[14] Preeti Singh, Lini Mathew, S. Chatterji, "MATLAB Based Simulation of TCSC FACTS Controller," Proceedings of the 2nd National Conference on Challenges & Opportunities in Information Technology (COIT-2008), RIMT-IET, Mandi Gobindgarh, March 29, 2008.
[15] K.R. Padiyar, FACTS Controllers in Power Transmission and Distribution, New Age International, 2007, ISBN: 978-81-224-2142-2.
[16] Ying Xiao, Y.H. Song, "Power Flow Control Approach to Power Systems with Embedded FACTS Devices," IEEE Transactions on Power Systems, Vol. 17, No. 4, December 2002.
[17] C.A. Canizares and Z.T. Faur, "Analysis of SVC and TCSC Controllers in Voltage Collapse," IEEE Transactions on Power Systems, Vol. 14, No. 1, February 1999, pp. 158-165.


SiC JFET: A Review

Priya Sharma*, Vanita Batra, Jyoti Sehgal, Ritu Pahwa
Department of Electronics and Communication Engineering
Vaish College of Engineering, M. D. University, Rohtak-124001, India
E-mail: *priyasharma15august@gmail.com, vanita.batra@rediff.com, legendjyoti@gmail.com, ritumtech@gmail.com
Abstract In this paper, various SiC power devices are presented. Due to their mature stage of development, Silicon Carbide (SiC) devices are easily available in the market. SiC devices have numerous advantages compared to silicon (Si) devices, such as a wider band gap, higher breakdown electric field, higher blocking capability, higher thermal conductivity and faster transitions, which make them more suitable for high-power and high-frequency converters. The SiC Junction Field Effect Transistor (JFET) is a voltage controlled device without oxide reliability issues, which makes it a very promising power switch compared to the SiC Metal Oxide Semiconductor Field Effect Transistor (MOSFET) and Bipolar Junction Transistor (BJT). The authors present the Enhancement-Mode SiC JFET and its operation, the SiC normally-off JFET and its operation, and compare their characteristics with other power devices: SiC Schottky Barrier Diodes (SBD), SiC MOSFETs and SiC BJTs.

Keywords SiC SBD, SiC MOSFET, SiC BJT, SiC JFET, EM mode and SiC Normally Off JFET.
I.

INTRODUCTION

Silicon (Si) is considered as the preferred material for


all semiconductors because the processing of silicon
is cost efficient [1]. Si power devices were
evolutionary improved through device design,
process techniques which led to great advancements
in power device and material quality. Si-based
technology has been more stable and less expensive,
thus providing large range of process possibilities.
While using Si material, performance limitations
occurs in the terms of blocking high voltage, low on
state drop, and switch at high frequency, requires
high voltage and high temperature operation [2].
Semiconductor materials such as silicon carbide (SiC)

and gallium nitride (GaN) are used to improve the


performance of power devices and they are referred
as the wide-band gap (WBG) semiconductors. Si is
replaced by the more promising SiC in the fabrication of power semiconductor devices because of the wide band gap of SiC. SiC is more suitable for high-voltage power devices (1200 V) due to its vertical device structure and superior material quality. SiC is said to be polytypic because it has a wide variety of crystalline structures. SiC offers a larger energy band gap than silicon, resulting in lower leakage current, higher operating temperature, and higher radiation hardness [3, 4]. Its thermal conductivity is several times higher than that of silicon, which eliminates the need for a bulky and expensive cooling system. The breakdown electric field of SiC is about 10 times higher than that of Si, which provides higher voltage-blocking capability and faster switching transitions [2, 4].
SiC also has a higher saturation velocity, roughly twice that of Si, which determines the maximum current density of the device. Although SiC presents various benefits, it also has limitations: material defects, a lack of high-quality SiC wafers and epitaxial layers, and on-state instability in bipolar SiC devices. SiC allows devices to operate at high temperatures, up to 250 °C. SiC power semiconductor devices enable high efficiencies, high operating temperatures, and high power densities [5]. MOSFETs, JFETs, and cascodes are the main SiC transistor configurations. JFETs come in two varieties, normally-on and normally-off devices; of these, the more widespread configurations are the normally-on JFET and the cascode.
II. TYPES OF SiC POWER DEVICES

A. SiC Schottky Barrier Diodes (SBDs)
SiC Schottky Barrier Diodes (SBDs) are also called Junction Barrier diodes. The 4H-SiC junction barrier Schottky (JBS) diode is a kind of Schottky rectifier structure, as shown in Figure 1. In the SBD structure, p-n

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
246

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

junction grids are integrated into the drift region [6]. Because the high electric field is shielded away from the Schottky contact, the leakage current of the JBS rectifier is lower than that of a conventional Schottky rectifier.

Fig. 1. Basic structure of the SiC JBS-type SBD [5].

Fig. 2. SiC planar MOSFET [5].

The main advantage of the JBS rectifier is that it operates with a Schottky-like on-state and has fast switching characteristics in both the on-state and the off-state, the off-state having a low leakage current. The conduction loss of a SiC JBS rectifier can be much lower than that of a PiN diode. It has a breakdown voltage of less than 3 kV due to the high (2.7 V) turn-on voltage.

2) SiC trench MOSFET: A "trench" refers to a groove. In this configuration, a groove (trench) is formed in the chip surface and the MOSFET gate is formed along its sidewall. The structure of the SiC trench MOSFET is shown in Figure 3. The SiC trench MOSFET can be made smaller than a planar device, and no JFET region exists in the trench-gate MOSFET structure [7].

B. SiC Metal-Oxide Semiconductor Field Effect Transistors (MOSFETs)

SiC MOSFETs are normally-off devices. Silicon (Si) power MOSFETs have fast switching capability, so they are widely used in high-frequency power converters. SiC power MOSFETs have become competitive because of their superior material properties: higher blocking voltage, higher operating temperature, and higher switching frequency.
1) SiC planar MOSFET: The SiC planar MOSFET requires a higher positive gate voltage in the on-state. Its structure is shown in Figure 2. The high gate voltage increases the channel charge and compensates for the low channel mobility compared to silicon. Significant challenges remain with this device: the SiC/SiO2 interface quality is poorer than that of Si/SiO2, and in the planar type a JFET resistance is structurally present.

Fig. 3. Trench SiC MOSFET [5].

C. SiC BJT
Bipolar devices such as the power bipolar junction transistor (BJT) and the insulated-gate bipolar transistor (IGBT) extend the use of SiC to high-power and high-temperature applications. BJTs based on 4H-SiC have no gate oxide, and double-sided high-level injection in the lightly doped drift region gives them a low on-state voltage, as shown in Figure 4. The BJT is a current-controlled device: despite the absence of an oxide layer in the main active cell, it still requires a significant base current during operation, which results in high power consumption in the base-drive unit.

Fig. 4. SiC BJT [5].

The SiC Bipolar Junction Transistor (BJT) offers conductivity modulation and good high-temperature operation, and its major advantage is a low on-state resistance. In SiC BJTs the emitter, base, and collector layers are continuously epitaxially grown with a minimum of crystal defects.
D. SiC Junction Field Effect Transistors (JFETs)
Being a voltage-controlled device without oxide reliability issues, the SiC junction field-effect transistor (JFET) is a very promising power switch compared to the SiC metal-oxide semiconductor field-effect transistor (MOSFET) and the bipolar junction transistor (BJT). Its channel is controlled by a p-n junction. A JFET has no SiO2-SiC interface, which makes it well suited to being a commercial power device. The basic difference between the JFET and the MOSFET is that the MOSFET is a normally-off device, whereas the JFET is a normally-on device. Normally-on SiC JFETs are robust and have a low input capacitance. The drawbacks of normally-off JFETs lie in their on-state performance and gate-drive requirements, which can make them less attractive than normally-on devices [8]. Because the MOS structure is absent in the SiC JFET, there are no oxide reliability issues. The SiC JFET also has a high switching speed, and, in contrast to the SiC BJT, bipolar degradation issues can be neglected.
1) Lateral-channel JFET: This JFET is similar to the SiC double-implanted MOSFET (DMOSFET), except that the oxide layer below the gate contact, which controls the inversion layer, is replaced with bulk material, as shown in Figure 5.

This structure provides an integral body diode that can be used as an anti-parallel diode in power electronic circuits.
2) SiC buried-gate JFET: In the buried-grid configuration the channel is vertical, as shown in Figure 6. A large portion of the vertical cross-sectional area can be used for current conduction, which results in very low values of RDS(on).

Fig. 6. SiC Buried Grid [5].

For the optimization of the buried-grid (BG) SiC JFET, each layer of the device can be controlled separately in terms of doping concentration and thickness.
III. ENHANCEMENT-MODE SiC JFET
The EM SiC JFET is capable of blocking 1200 V between its drain and source terminals with the gate and source short-circuited. Unlike the enhancement-mode silicon power metal-oxide semiconductor field-effect transistor (MOSFET), the EM SiC JFET does not contain a parasitic body diode, as shown in Figure 7. In applications such as high-frequency synchronous rectification (SR), fast body-diode reverse recovery is needed to achieve high conversion efficiency, so the EM SiC JFET's lack of a body diode has important performance implications [9].

Fig. 7. Cross-section of the EM SiC JFET [9].


Fig. 5. SiC Lateral-Channel JFET [5].
A. Working Principle
1) Under gate-source short-circuit conditions: The EM SiC JFET conducts current in the source-drain direction. The characteristic is neither purely ohmic nor like that of a diode. The source-to-drain voltage drop observed with current flowing through the EM SiC JFET channel in the source-to-drain direction decreases when a positive gate-source voltage is applied.
2) When a negative gate-source voltage is applied: The channel voltage drop increases. Over a range of gate-source voltages and light drain currents, the source-drain voltage drop is typically about 0.5 V larger than the applied negative gate-source voltage.
3) When a fixed voltage is applied: The gate current drawn varies with the drain current, with a particularly steep increase in gate-source current when large channel currents flow in the source-drain direction (negative IDS). Limiting the gate current to some arbitrary magnitude increases the drain-source voltage drop for large source-to-drain currents in exchange for lower drive losses.
B. SiC Normally Off JFET

Fig. 8. SiC normally off 1200 V JFET: cross section [10].

This device makes special demands on the gate-driver circuit compared to other unipolar SiC or Si devices. To fully exploit the potential of SiC normally-off JFETs, conventional gate-driver circuits for unipolar switches need to be adapted for use with these switches.
1) During on-state operation: The gate-source voltage must not exceed 3 V, while a current of around 300 mA (depending on the desired on-resistance) must be fed into the gate.
2) During switching operation: The transient gate voltage should be around 15 V, and the low threshold voltage of less than 0.7 V demands high noise immunity. This is a severe challenge, as the device has a comparably low gate-source capacitance but a high gate-drain capacitance.

IV. CONCLUSION
This paper deals with various types of SiC power devices: SiC SBDs, SiC MOSFETs, SiC BJTs, and SiC JFETs. It has examined the enhancement-mode SiC JFET, which, unlike the SiC MOSFET, contains no parasitic body diode. It has also presented the switching characteristics of normally-off SiC devices, which make special demands on the gate-driver circuit compared to other unipolar SiC devices. The SiC JFET does not require a significant input current to the gate, needs comparably few fabrication steps, and offers a low voltage drop and a high switching speed.
REFERENCES
[1] Z. Xu, S. Pan, "Design and Analyse of Silicon Carbide JFET Based Inverter," International Journal of Digital Content Technology and its Applications, vol. 6, pp. 98-106, 2012.
[2] A. Platania, Z. Chen, F. Chimento, A. E. Grekov, R. Fu, L. Lu, "A Physics-Based Model for a SiC JFET Accounting for Electric-Field-Dependent Mobility," IEEE Transactions on Industry Applications, vol. 47, no. 1, pp. 199-211, 2011.
[3] B. Ållebrand and H.-P. Nee, "On the possibility to use SiC JFETs in Power Electronic circuits," KTH Royal Institute of Technology, Sweden, 2016.
[4] R. Alonso, M. F. Díaz, D. G. Lamar, M. A. P. de Azpeitia, M. M. Hernando, J. Sebastián, "Switching Performance Comparison of the SiC JFET and SiC JFET/Si MOSFET Cascode Configuration," IEEE Transactions on Power Electronics, vol. 29, no. 5, pp. 2428-2440, 2014.
[5] J. K. Lim, "Simulation and Electrical Evaluation of 4H-SiC Junction Field Effect Transistors and Junction Barrier Schottky Diodes with Buried Grids," 2015.
[6] L. Zhu and T. P. Chow, "Analytical Modeling of High-Voltage 4H-SiC Junction Barrier Schottky (JBS) Rectifiers," IEEE Transactions on Electron Devices, vol. 55, no. 8, pp. 1857-1863, 2008.
[7] T. Nakamura, Y. Nakano, M. Sasagawa, T. Otsuka, M. Aketa, M. Miura, "High-performance SiC Power Devices and Modules with High Temperature Operation," International Symposium on VLSI Design, Automation and Test (VLSI-DAT), pp. 1-2, 2011.
[8] P. J. Garsed, R. A. McMahon, "Optimising the Dynamic Performance of an All-Wide-Bandgap Cascode Switch," IEEE Annual Conference of the Industrial Electronics Society (IECON), pp. 1112-1117, 2013.
[9] R. Shillington, P. Gaynor, M. Harrison, W. Heffernan, "Silicon carbide JFET reverse conduction characteristics and use in power converters," IET Power Electronics, vol. 5, no. 8, pp. 1282-1290, 2011.
[10] B. Wrzecionko, D. Bortis, J. Biela, J. W. Kolar, "Novel AC-Coupled Gate Driver for Ultrafast Switching of Normally Off SiC JFETs," IEEE Transactions on Power Electronics, vol. 27, no. 7, pp. 3452-3463, 2011.

Fractal Geometry
Mohina Gandhi, Khushboo
Electronics and Communication Dept. Northern India Engineering College, New Delhi, India.
E-mail: mohina.gandhi@gmail.com, kkshushboo_2008@yahoo.com
Abstract: Many patterns in nature are either irregular or fragmented to such an extreme degree that Euclidean geometry cannot describe their form. Fractal-geometry-based analysis has therefore received increasing attention, as a number of studies have shown fractal-based measures to be useful for characterizing complex structures. We calculate the fractal dimension of complex structures found in nature and then try to find the relationship between the fractal dimension and some property of nature. Fractal geometry has become an exciting frontier between mathematics and information technology, and it has had a significant impact on many aspects of society such as fashion design, art, and culture [1].
Keywords: Fractal geometry; box dimension.

I. INTRODUCTION

Natural objects are not unions of exact reduced copies of the whole. A magnified view of one part will not precisely reproduce the whole object, but it will have the same qualitative appearance. Although fractal geometry is closely connected with computer techniques, some people worked on fractals long before the invention of computers: British cartographers, who encountered a problem in measuring the length of the British coastline. The coastline measured on a large-scale map was approximately half the length of the coastline measured on a detailed map. The closer they looked, the more detailed and longer the coastline became. They did not realize that they had discovered one of the main properties of fractals [2].
II. FEATURES OF FRACTALS

The feature of "self-similarity" is easily understood by analogy to zooming in with a lens or other device that zooms in on digital images to uncover finer, previously invisible structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over or, for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive: people have pondered self-similarity informally, as in the infinite regress of parallel mirrors or the homunculus, the little man inside the head of the little man inside the head, and so on. The difference for fractals is that the pattern reproduced must be detailed.

This idea of being detailed relates to another feature that can be understood without mathematical background: having a fractional or fractal dimension greater than the topological dimension, which refers to how a fractal scales compared to how geometric shapes are usually perceived. A regular line, for instance, is conventionally understood to be one-dimensional; if such a curve is divided into pieces each 1/3 the length of the original, there are always 3 equal pieces. In contrast, consider the curve in Figure 1. It is also one-dimensional for the same reason as the ordinary line, but it has, in addition, a fractal dimension greater than 1 because of how its detail can be measured: divided into parts 1/3 the length of the original line, the fractal curve becomes 4 pieces rearranged to repeat the original detail, and this unusual relationship is the basis of its fractal dimension.

Fig. 1 Iterations of the Koch snowflake

This also leads to a third feature: fractals as mathematical equations are "nowhere differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways. To elaborate, in trying to find the length of a

wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring a wavy fractal curve such as the one in Figure 2, one would never find a straight segment small enough to conform to the curve, because the wavy pattern would always reappear, albeit at a smaller size, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it more tightly to the curve. This is perhaps counter-intuitive, but it is how fractals behave.
III. NATURAL FRACTALS

A. Branching
Fractals are found all over nature, spanning a huge
range of scales. We find the same patterns again and
again, from the tiny branching of our blood vessels
and neurons to the branching of trees, lightning bolts,
and river networks. Regardless of scale, these
patterns are all formed by repeating a simple
branching process.

Fig. 3 Spiral fractal in nature [source: http://fractalfoundation.org]

IV. GEOMETRIC FRACTALS

Purely geometric fractals can be made by repeating a simple process.
A. Cantor Set
The Cantor set's major characteristic is that the middle third of the pattern is removed repeatedly, arranging the pattern from large to small.

Fig. 4 Iterations of the Cantor set
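The middle-third removal can be sketched directly; in this minimal illustration (the function name and interval representation are ours, not the paper's), the intervals surviving n steps are returned:

```python
def cantor_intervals(n):
    """Intervals of [0, 1] remaining after n middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))   # keep the left third
            nxt.append((b - third, b))   # keep the right third
        intervals = nxt
    return intervals
```

After n steps there are 2^n intervals, each of length 3^-n, so the total remaining length (2/3)^n shrinks toward zero.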

B. Sierpinski Triangle
It refers to a process in which each original triangle is replaced by three triangles of half the height of the original.

Fig. 2 Branching fractals in nature [source: http://fractalfoundation.org]

Fig. 5 Iterations of the Sierpinski set

B. Spirals

The spiral is another extremely common fractal in


nature, found over a huge range of scales. Biological
spirals are found in the plant and animal kingdoms,
and non-living spirals are found in the turbulent
swirling of fluids and in the pattern of star formation
in galaxies. All fractals are formed by simple
repetition, and combining expansion and rotation is
enough to generate the ubiquitous spiral.

C. Koch Snowflake
The Koch Snowflake is a mathematical curve and one
of the earliest fractal curves to have been described.
The Koch snowflake can be constructed by starting
with an equilateral triangle, then recursively altering
each line segment as follows:

1. Divide the line segment into three segments of equal length.
2. Draw an equilateral triangle that has the middle segment from step 1 as its base and points outward.
3. Remove the line segment that is the base of the triangle from step 2.
After one iteration of this process, the resulting shape is the outline of a hexagram. The Koch snowflake is the limit approached as the above steps are followed over and over again.
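Steps 1-3 turn every segment into four segments, each one third as long, so the snowflake's perimeter grows by a factor of 4/3 per iteration; a small sketch (the function name is ours):

```python
def koch_perimeter(side, n):
    """Perimeter of the Koch snowflake after n iterations,
    starting from an equilateral triangle of the given side length."""
    segments = 3        # the starting triangle has 3 sides
    length = float(side)
    for _ in range(n):
        segments *= 4   # steps 1-3 turn each segment into 4
        length /= 3.0   # each new segment is 1/3 as long
    return segments * length
```

Because the perimeter grows without bound while the enclosed area stays finite, this little calculation already exhibits the "nowhere measurable by tape" behaviour described earlier.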

Fig. 6 Iterations of Koch snowflake

The Koch curve originally described by Koch is constructed with only one of the three sides of the original triangle; in other words, three Koch curves make a Koch snowflake. In fashion design it appears as a panel-style line in mould-making; it can exist in the form of a line or a volume and appear in the fly facing, hemline, and elsewhere.
D. Box Fractal

A. Julia Set
A Julia set can be represented using the quadratic form f(Z) = Z^2 + C. Here Z represents a variable of the form a + ib (a and b are real numbers) which can take on all values in the complex plane. The quantity C is also defined as a complex number, but for any given Julia set it is held constant. We can therefore say that there are an infinite number of Julia sets, each defined for a given value of C [3], [4].
To work on a Julia set, fix a point C on the complex plane. The algorithm determines whether a point Z of the complex plane belongs to the Julia set associated with C, and determines the color that should be assigned to it. To see if Z belongs to the set, we iterate the function as per (1):

Z_{n+1} = Z_n^2 + C, starting from Z_1 = Z_0^2 + C    (1)

To produce an image of the whole Julia set associated with C, we must repeat this process for all points Z [5]. The value of C determines the shape of the Julia set at each point of the complex plane, as shown in Fig. 8.
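The escape-time test just described can be sketched in a few lines; the function name, bailout radius, and iteration cap below are illustrative choices of ours, not values from the paper:

```python
def julia_escape_time(z, c, max_iter=100, bailout=2.0):
    """Iterate z -> z*z + c from the test point z.  Returns the step at
    which |z| exceeds the bailout radius, or max_iter if the orbit stays
    bounded (such points are taken to lie in the filled Julia set)."""
    for k in range(max_iter):
        if abs(z) > bailout:
            return k
        z = z * z + c
    return max_iter
```

A pixel's colour is then chosen from the returned step count; points that reach max_iter are drawn as members of the filled Julia set.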

The box fractal, also called the anti-cross-stitch curve, can be constructed using string rewriting. The basic square is decomposed into nine smaller squares in a 3-by-3 grid. The four squares at the corners and the middle square are kept, the other squares being removed. The process is repeated recursively for each of the five remaining subsquares.
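The keep-five rule above can be sketched by testing the base-3 digits of each cell's coordinates; a hedged illustration (the function name and grid representation are ours):

```python
def box_fractal(n):
    """Boolean grid of side 3**n for the box fractal: at every scale,
    only the four corner cells and the centre cell of each 3x3
    subdivision are kept."""
    keep = {(0, 0), (0, 2), (2, 0), (2, 2), (1, 1)}

    def filled(r, c):
        # Check the base-3 digit pair of the coordinates at every level.
        for _ in range(n):
            if (r % 3, c % 3) not in keep:
                return False
            r, c = r // 3, c // 3
        return True

    size = 3 ** n
    return [[filled(r, c) for c in range(size)] for r in range(size)]
```

After n steps the grid of side 3^n contains 5^n filled cells.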
Fig. 8 Julia set at different values of C

Fig. 7 Iterations of the box fractal

V. ALGEBRAIC FRACTALS

We can also create fractals by repeatedly calculating a simple equation over and over. Because the equation must be calculated thousands or millions of times, we need computers to explore such fractals. Not coincidentally, the Mandelbrot set was discovered in 1980, shortly after the invention of the personal computer.
B. Mandelbrot Set

Julia sets are connected with the Mandelbrot set. The iterative function used to produce a Julia set is the same as for the Mandelbrot set; the only difference is the way (1) is used. In order to draw a picture of the Mandelbrot set, we iterate the formula for each point C of the complex plane, always starting with Z_0 = 0 [5], [6].
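A matching sketch of the Mandelbrot membership test (again with an illustrative function name and iteration cap), showing the only difference from the Julia iteration, the fixed starting point Z_0 = 0:

```python
def in_mandelbrot(c, max_iter=200):
    """Mandelbrot membership: the same iteration z -> z*z + c as for
    Julia sets, but always started from z0 = 0, with c the point
    of the complex plane under test."""
    z = 0j
    for _ in range(max_iter):
        if abs(z) > 2.0:
            return False   # the orbit escaped; c is outside the set
        z = z * z + c
    return True
```

With a finite iteration cap this is necessarily an approximation: points very close to the boundary may be misclassified as members.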

The beauty of the Mandelbrot set is quite striking: as one zooms in on the perimeter of its shape, it does not get simpler, but repeats the original shape again and again along the perimeter [7]. Fig. 9 shows the Mandelbrot set for the iteration Z_{n+1} = Z_n^2 + C.

Fig. 9 Mandelbrot set at different iterations

VI. CONCEPT OF FRACTAL DIMENSION

A fractal dimension is a ratio providing a statistical index of complexity, comparing how detail in a pattern changes with the scale at which it is measured. It has also been characterized as a measure of the space-filling capacity of a pattern that tells how a fractal scales differently from the space it is embedded in; a fractal dimension does not have to be an integer, and it reflects the degree of irregularity over multiple scales [8]. For an object composed of N identical parts, each scaled down from the whole by a ratio r, the self-similarity dimension D can be obtained as

D = log N / log(1/r)    (2)

Here D is also known as the fractal dimension (Luo, 1998).
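Equation (2) translates directly into code; a small sketch (the function name is ours):

```python
import math

def similarity_dimension(n_parts, scale_ratio):
    """Self-similarity dimension D = log N / log(1/r) for a shape made
    of N copies of itself, each scaled down by the ratio r."""
    return math.log(n_parts) / math.log(1.0 / scale_ratio)
```

For the Koch curve (N = 4, r = 1/3) this gives D ≈ 1.26, for the Cantor set (N = 2, r = 1/3) D ≈ 0.63, and for a filled square (N = 9, r = 1/3) the expected integer D = 2.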
VII. APPLICATIONS OF FRACTALS

Fractals can be used to model the underlying process in a variety of applications. A range of fractal analytical methods are used to characterize the fractal behavior of World Wide Web traffic, and a realistic queuing model of Web traffic, developed on the basis of fractal theory, provides analytical guidance on network bandwidth dimensioning for Internet service providers [9]. Another application is characterizing the fractal nature of an entire system working in a LAN environment. Research is currently going on in

the design of an airborne conformal antenna using a fractal structure that offers multiband operation [10]. Fractal geometry is used for understanding and planning the physical form of cities, and it helps to simulate cities through computer graphics. The structural properties of fractals can be used in architectural design and to model the morphology of surface growth. Fractal theory can be applied to a wide range of issues in the chemical sciences, such as aggregation phenomena, deposition and diffusion processes, and chemical reactivity. Geological features such as rock breakage, ore and petroleum concentrations, seismic activity and tectonics, and volcanic eruptions can be studied using their fractal characteristics. In biology, it has been found that the DNA of plant and animal cells does not contain a complete description of all growth patterns, but rather a set of instructions for cell development that follows a fractal pattern. Fractal geometry can be used in an analytical way to predict outcomes, to generate hypotheses, and to design experiments in biological systems in which fractal properties are most evident [11].
Man-made objects are well described by Euclidean geometry, whereas natural objects are better modeled by fractal geometry. After the introduction of fractal geometry, its effects on various natural phenomena were studied. Goodchild (1980) studied the relation between fractals and geographical measures and pointed out that the fractal dimension can be used to predict the effect of cartographic generalization and spatial sampling. In cartography, Dutton (1981) used the irregularity and self-similarity properties of fractals to develop an algorithm that enhances the detail of digitized curves by altering their dimensionality in a parametrically controlled, self-similar fashion. Batty (1985) showed a number of examples of simulated landscapes, mountains, and other graphics generated using fractals. Fractals are widely used in bio-signal analysis and pattern recognition; they are used to study the structure, complexity, and chaos in tumors. Aging, immunological response, and autoimmune and chronic diseases can be better analyzed through fractal geometry (Klonowski, 2000). The study of turbulence in flows has also been adapted to fractals: turbulent flows are chaotic and very difficult to model correctly, and a fractal representation of them helps engineers and physicists to better understand complex

flows. Fractal techniques are used in histopathology to interpret histological images, make a diagnosis, and select a treatment (Gabriel, 1996). Fractal models have been found useful in describing and predicting the location and timing of earthquakes (Hastings and Sugihara, 1993). Astronomy, in particular cosmology, is another field where fractals were found and applied to study various phenomena; the largest subsystem studied by means of fractals is the distribution of galaxies. The map of the distribution of galaxies obeys a power-law decrease of density in concentric spheres as a function of radius, which is the characteristic behavior expected of hierarchical fractals (Perdang, 1990; Heck and Perdang, 1991; Elmegreen and Elmegreen, 2001) [12], [13], [14].
REFERENCES
[1] Xiaotian Long, Wen Li, Weiyan Luo, "Design and Application of Fractal Pattern Art in the Fashion Design," International Workshop on Chaos-Fractals Theories and Applications, 2009.
[2] B. Mandelbrot, The Fractal Geometry of Nature, New York: W. H. Freeman and Company, 1977 (21st printing, 2006).
[3] K. Falconer, "Iteration of complex functions - Julia sets," in Fractal Geometry: Mathematical Foundations and Applications, England: John Wiley & Sons Ltd., 2003, pp. 215-240.
[4] M. Barnsley, Fractals Everywhere, 2nd ed., San Diego, CA: Academic Press, 1993.
[5] M. McGoodwin, "Julia Jewels: An Exploration of Julia Sets," March 2000.
[6] B. B. Mandelbrot, "Fractal geometry: what is it, and what does it do?," Proc. of the Royal Society A 423, pp. 3-16, 1989.
[7] B. Mandelbrot, The Fractal Geometry of Nature, San Francisco, CA: W. H. Freeman and Company, 1982.
[8] R. T. Stevens, Understanding Self-Similar Fractals, Prentice Hall of India, 1995.
[9] http://www.cs.bu.edu/faculty/crovella/paper-archive/self-sim/paper.html
[10] http://www.fractenna.com
[11] E. Saar, "Towards a Fractal Description of Structure," Morphological Cosmology Proceedings, Cracow, Poland, pp. 205-218, 1988.
[12] S. G. Thankee, "Classification of Galaxies," Degree Thesis, University of Nevada, Las Vegas, 1999.
[13] P. A. Burrough, "Fractal dimensions of landscapes and other environmental data," Nature, vol. 294, pp. 240-242, 1981.
[14] Dejian Lai, Marius-F. Danca, "Fractal and statistical analysis on digits of irrational numbers," Chaos, Solitons and Fractals, 2006.

Full-Reference and Non-Reference Quantitative Measures for Evaluating the Performance of Image Fusion Algorithms: A Review
1Meenu Manchanda, 2Rajiv Sharma
1Research Scholar, MDU, Rohtak, Haryana, India.
2Professor & HOD - ECE, Northern India Engineering College, New Delhi, India
Email: meenumanchanda73@gmail.com, rsap70@rediffmai.com

Abstract- The widespread usage of image fusion has increased the importance of assessing the performance of different fusion algorithms. The problem of introducing a suitable quality measure for image fusion lies in the difficulty of defining an ideal fused image. Thus, a review of various non-reference as well as full-reference image quality measures is presented.
Keywords- Image fusion, image processing.

I. INTRODUCTION

Multiple imaging devices are generally used to capture different images of the same scene or object. However, viewing and analyzing these images simultaneously leads to confusion and places an unnecessary burden on the observer, while combining information from a group of observers is almost impossible. This problem can be efficiently solved by fusing the important information present in the various images into a single image; this process is called image fusion. Researchers have introduced a large number of image fusion algorithms, and evaluating their performance has become an important issue. Subjective evaluation involves human observers who judge the quality of the fused image; since this evaluation depends highly on human visual characteristics, it is difficult to distinguish between fused images that are approximately similar. Although subjective evaluation is very important in characterizing algorithm performance, objective evaluation can be regarded as equally important.

Thus, this paper presents an analysis of some of the commonly used objective measures for the evaluation of fusion algorithms.
II. Non-Reference Image Quality Measures

As there is no ideal fused image in experimental image fusion, most objective measures do not assume any ground-truth image; performance is instead calculated from the input images and the fused image. In this section, a brief overview of the commonly used non-reference image fusion measures is presented.
Standard deviation: Standard deviation (SD_F) [3] is used to determine the information contained in an image by measuring its contrast level. A high value of standard deviation indicates better clarity and contrast of an image. Mathematically,

SD_F = sqrt( (1/(M N)) Σ_{i=1..M} Σ_{j=1..N} ( F(i,j) − μ_F )² )

where F(i,j) is the intensity of the pixel located at (i,j) of the M × N image F and μ_F is the mean of the image.
Entropy: Entropy (H_F) of an image is utilized to measure the quantity of information present in it. An image with a higher value of entropy contains a larger amount of information. Mathematically,

H_F = − Σ_{m=0..L−1} p_m log2(p_m)

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at: www.gtia.co.in


International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

where p_m is the probability associated with the m-th gray level and L is the total number of gray levels in the image.
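For illustration, both measures can be computed directly from the pixel data. The sketch below is a minimal NumPy illustration (assuming an 8-bit grayscale image stored as an array; it is not taken from the papers under review):

```python
import numpy as np

def std_dev(F):
    """Contrast-based information measure: standard deviation of intensities."""
    F = np.asarray(F, dtype=np.float64)
    return np.sqrt(np.mean((F - F.mean()) ** 2))

def entropy(F, levels=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(F, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 to the sum
    return -np.sum(p * np.log2(p))
```

A constant image gives zero for both measures; an image that uses two gray levels equally often has an entropy of exactly 1 bit.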
Edge strength: Xydeas and Petrovic [6] proposed an edge strength measure (Q_F^XY) to determine the amount of edge information that has been transferred from the source images into the fused image. For a perfect representation of an input edge, both its strength and its orientation should be effectively preserved in the fused image; a variation in either parameter results in a degraded fused image. Applying a Sobel edge operator to the input images and the fused image (X, Y and F) provides, for the pixel situated at (i,j), the edge strength s(i,j) as well as the edge orientation o(i,j) ∈ [0, π]. Hence, for an image X,

s_X(i,j) = sqrt( h_X(i,j)² + v_X(i,j)² )

and

o_X(i,j) = tan⁻¹( v_X(i,j) / h_X(i,j) )

where h_X(i,j) and v_X(i,j) are the outputs of the horizontal and vertical Sobel templates centred at pixel (i,j) and convolved with the corresponding pixels of image X. Using these parameters, the relative strength S^XF(i,j) and relative orientation O^XF(i,j) between the source image X and the fused image F can then be obtained as:

S^XF(i,j) = s_F(i,j)/s_X(i,j) if s_X(i,j) > s_F(i,j), and s_X(i,j)/s_F(i,j) otherwise

O^XF(i,j) = 1 − |o_X(i,j) − o_F(i,j)| / (π/2)

The strength and orientation preservation values are modelled by sigmoid functions,

Q_s^XF(i,j) = Γ_s / (1 + e^{κ_s (S^XF(i,j) − σ_s)})

Q_o^XF(i,j) = Γ_o / (1 + e^{κ_o (O^XF(i,j) − σ_o)})

where the constants Γ_s, κ_s, σ_s and Γ_o, κ_o, σ_o exactly define the sigmoid functions. The edge information preservation value Q^XF(i,j) is then determined as:

Q^XF(i,j) = Q_s^XF(i,j) Q_o^XF(i,j)

Generally 0 ≤ Q^XF(i,j) ≤ 1. Q^XF(i,j) = 1 indicates fusion with no loss of edge information, whereas Q^XF(i,j) = 0 indicates fusion with complete loss of edge information. Finally, a normalized weighted edge performance measure Q_F^XY is obtained from the edge preservation values Q^XF(i,j) and Q^YF(i,j) of the source images X and Y using:

Q_F^XY = Σ_i Σ_j [ Q^XF(i,j) w_X(i,j) + Q^YF(i,j) w_Y(i,j) ] / Σ_i Σ_j [ w_X(i,j) + w_Y(i,j) ]

where w_X(i,j) and w_Y(i,j) are the weights assigned to the edge preservation values Q^XF(i,j) and Q^YF(i,j), respectively. A high value of Q_F^XY indicates better edge information in the resultant image.
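The Sobel strength and orientation maps that this measure is built on can be sketched as follows. This is a minimal NumPy illustration; the sigmoid constants and the weighting step from [6] are deliberately omitted:

```python
import numpy as np

SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_V = SOBEL_H.T

def _filter2(img, k):
    """'Same'-size 2-D cross-correlation with zero padding (kernel not
    flipped; the sign is irrelevant for the strength magnitude)."""
    p = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def edge_strength_orientation(img):
    """Per-pixel edge strength s(i,j) and orientation o(i,j) in [0, pi]."""
    h, v = _filter2(img, SOBEL_H), _filter2(img, SOBEL_V)
    return np.hypot(h, v), np.abs(np.arctan2(v, h))
```

Applying the same function to X, Y and F yields the s and o maps from which S, O and finally Q are computed.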

Fusion artifacts: Sometimes the fusion process itself creates undesired artifacts in the fused image; these are indicated by a gradient strength in the fused image that is stronger than its value in either input image. Following [4], such locations can be marked by an indicator AM(i,j) that equals 1 where s_F(i,j) > s_X(i,j) and s_F(i,j) > s_Y(i,j), and 0 elsewhere. The total fusion artifacts introduced through the fusion of X and Y into F are then obtained as:

N_F^XY = Σ_i Σ_j AM(i,j) [ (1 − Q^XF(i,j)) w_X(i,j) + (1 − Q^YF(i,j)) w_Y(i,j) ] / Σ_i Σ_j [ w_X(i,j) + w_Y(i,j) ]

Fusion loss: In practice, no fusion process is completely perfect; some information of the input images is certainly lost in the process. A gradient strength in the fused image that is weaker than in the input images indicates this loss of information. With an indicator r(i,j) = 1 where the fused gradient is weaker than that of an input image (and AM(i,j) = 0), and r(i,j) = 0 otherwise, the total fusion loss is obtained as:

L_F^XY = Σ_i Σ_j r(i,j) [ (1 − Q^XF(i,j)) w_X(i,j) + (1 − Q^YF(i,j)) w_Y(i,j) ] / Σ_i Σ_j [ w_X(i,j) + w_Y(i,j) ]

Feature mutual information: Haghighat [2] proposed the feature mutual information (FMI) measure to calculate the amount of information related to image features transferred from the source images into the fused image. For an input image X, the marginal distribution, defined as the normalized gradient magnitude, is given as:

p_X(i,j) = ∇X(i,j) / Σ_i Σ_j ∇X(i,j)

where ∇X(i,j) is the gradient magnitude of X at (i,j).


The joint distribution between an input image X and the fused image F is obtained from these marginal distributions; following [2], it is estimated with the help of the cross-correlation coefficient corresponding to Frechet's upper (lower) bound. The same construction applies to the joint distribution between Y and F. The amount of feature information passed from X and Y into F is measured individually using the mutual information:

MI_XF = Σ_i Σ_j p_XF(i,j) log2( p_XF(i,j) / (p_X(i,j) p_F(i,j)) )

and similarly for MI_YF. Finally, the feature mutual information (FMI_F^XY) metric is:

FMI_F^XY = MI_XF + MI_YF

Fusion factor: Fusion factor (FF_F^XY) [1] determines the amount of mutual information between each individual source image and the fused image; it measures the contribution of each input image towards the resultant fused image. A large value of FF_F^XY indicates that a large amount of information has been transferred from the source images into the fused image. Mathematically,

FF_F^XY = MI_XF + MI_YF

where MI_XF and MI_YF are the mutual information between the input images, X and Y, and the fused image, F.

Fusion symmetry: A large value of FF_F^XY does not mean that the source images are fused symmetrically. Thus, another measure called fusion symmetry (FS_F^XY) [1] was introduced to indicate the symmetry of the fusion process with respect to the source images:

FS_F^XY = | MI_XF / (MI_XF + MI_YF) − 0.5 |

A small value of FS_F^XY indicates better performance of the fusion process.
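Both the fusion factor and fusion symmetry reduce to mutual-information computations. The sketch below uses a plain joint-histogram estimate of MI; this is a simplification, since Haghighat's FMI instead estimates joint distributions of feature (gradient) images between Frechet's bounds:

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Histogram-based MI (bits) between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                      # skip zero cells (0*log 0 -> 0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def fusion_factor(x, y, f):
    return mutual_information(x, f) + mutual_information(y, f)

def fusion_symmetry(x, y, f):
    mi_xf, mi_yf = mutual_information(x, f), mutual_information(y, f)
    return abs(mi_xf / (mi_xf + mi_yf) - 0.5)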

III. Full-Reference Image Quality Metrics

Some metrics need a ground-truth image to evaluate the performance, which is generally unknown. Therefore, these metrics are evaluated with the help of the input images (X and Y), and the average of these values is then taken as the final result.
Peak signal-to-noise ratio: The peak signal-to-noise ratio (PSNR) between X and F is defined as:

PSNR_XF = 10 log10( L² / MSE_XF ),  MSE_XF = (1/(M N)) Σ_i Σ_j ( f(i,j) − x(i,j) )²

where L is the maximum intensity in an image, and f(i,j) and x(i,j) are the pixel intensities of the fused image F and the input image X, respectively. Among various fusion techniques, the one that possesses the higher PSNR value is considered to perform better. The PSNR between Y and F (PSNR_YF) can be calculated similarly. The overall PSNR (PSNR_F^XY) is given as:

PSNR_F^XY = ( PSNR_XF + PSNR_YF ) / 2
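A direct implementation of this definition might look as follows (a minimal sketch assuming 8-bit images, so a peak value of 255):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an image."""
    ref = np.asarray(ref, float)
    img = np.asarray(img, float)
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def overall_psnr(x, y, f, peak=255.0):
    """Average of PSNR(X,F) and PSNR(Y,F), as used when no ground truth exists."""
    return 0.5 * (psnr(x, f, peak) + psnr(y, f, peak))
```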
Structural content: Structural content (SC) measures the closeness between two images. SC between X and F is defined as:

SC_XF = Σ_i Σ_j x(i,j)² / Σ_i Σ_j f(i,j)²

The SC between Y and F can be calculated similarly. The overall SC (SC_F^XY) is calculated as:

SC_F^XY = ( SC_XF + SC_YF ) / 2

Correlation coefficient: The correlation coefficient (CC) between X and F is defined as:

CC_XF = σ_XF / ( σ_X σ_F )

where σ_XF is the covariance between X and F, and σ_X (σ_F) is the standard deviation of X (F). The overall correlation between the input images and the fused image is obtained as:


CC_F^XY = ( CC_XF + CC_YF ) / 2

where CC_YF is the correlation between Y and F. For similar images, CC approaches unity.
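Both measures are one-line computations over the pixel arrays; a minimal sketch (note the ddof=1 choice, which only needs to be consistent between the covariance and the standard deviations):

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two equally sized images."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    return np.cov(a, b)[0, 1] / (a.std(ddof=1) * b.std(ddof=1))

def structural_content(ref, img):
    """Structural content: ratio of summed squared intensities."""
    ref = np.asarray(ref, float)
    img = np.asarray(img, float)
    return np.sum(ref ** 2) / np.sum(img ** 2)
```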
Structural similarity index measure: The structural similarity index measure (SSIM) [5] is another popular image quality parameter, used for determining the structural similarity between two images; it takes into account the characteristics of the human visual system. The SSIM parameter is obtained by sliding two windows x and f, of size N × N, pixel by pixel over the images X and F, respectively, and is calculated using the formula:

SSIM(x, f) = [ (2 μ_x μ_f + C1)(2 σ_xf + C2) ] / [ (μ_x² + μ_f² + C1)(σ_x² + σ_f² + C2) ]

where μ_x and μ_f are the local means, σ_x and σ_f the standard deviations, and σ_xf the cross-covariance for the windows x and f, respectively. The constants C1 = (k1 L)² and C2 = (k2 L)² are introduced to avoid any division by zero in dark and flat areas of the image; L is the dynamic range of the pixel values, determined using L = 2^n − 1 where n is the number of bits per pixel, and the default values of k1 and k2 are 0.01 and 0.03, respectively, for 8-bit grayscale images. The value of this parameter generally lies between −1 and +1. A large value indicates the ability to preserve the structural similarity of the original images, and the maximum value (+1) is obtained when both images are identical. The generally preferred window size is 8 × 8. Further, instead of sliding the windows pixel by pixel, only a subgroup of windows may be chosen to reduce the computational complexity. The overall structural similarity of the whole image is defined as the mean SSIM (MSSIM) index measure and is given by:

MSSIM(X, F) = (1/W) Σ_{j=1..W} SSIM(x_j, f_j)

where x_j and f_j are the contents of the images in the j-th local window and W is the total number of windows chosen in the image. The overall MSSIM index measure for images X, Y and F is then obtained as:

MSSIM_F^XY = ( MSSIM(X, F) + MSSIM(Y, F) ) / 2

where MSSIM(Y, F) is the mean SSIM index measure for images Y and F.

Feature similarity index measure: Zhang [7] proposed the feature similarity index measure (FSIM) for the assessment of image quality. FSIM measures the similarity between a pair of images based on the combination of phase congruency (PC) and gradient magnitude (GM). PC and GM provide complementary information: the former describes the local structures in an image and the latter provides the contrast information. The feature similarity index for grayscale images X and F is defined as:

FSIM = Σ_{(i,j)} S_L(i,j) PC_m(i,j) / Σ_{(i,j)} PC_m(i,j)

where PC_m(i,j) = max( PC_X(i,j), PC_F(i,j) ), PC_X and PC_F are the local phase congruency values determined for the input image X and the fused image F, respectively, and S_L(i,j) is the local similarity value defined as:

S_L(i,j) = S_PC(i,j) S_G(i,j)

with

S_PC(i,j) = ( 2 PC_X(i,j) PC_F(i,j) + T1 ) / ( PC_X(i,j)² + PC_F(i,j)² + T1 )

S_G(i,j) = ( 2 G_X(i,j) G_F(i,j) + T2 ) / ( G_X(i,j)² + G_F(i,j)² + T2 )

where T1 and T2 are small stabilizing constants; the value of T1 depends on the dynamic range of the PC values. The gradient magnitude values can be obtained using gradient convolution operators such as the Scharr operator, the Sobel operator or the Prewitt operator. The overall feature similarity index for grayscale images X, Y and F is then obtained as:

FSIM_F^XY = ( FSIM_XF + FSIM_YF ) / 2

where FSIM_YF is the feature similarity index for grayscale images Y and F.
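A minimal sketch of the windowed SSIM/MSSIM computation, using non-overlapping 8 × 8 windows rather than the sliding subgroup described above, with the k1 and k2 defaults as given:

```python
import numpy as np

def ssim_window(x, f, bits=8, k1=0.01, k2=0.03):
    """SSIM for one pair of local windows (plain NumPy sketch)."""
    L = 2 ** bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    x = np.asarray(x, float)
    f = np.asarray(f, float)
    mx, mf = x.mean(), f.mean()
    vx, vf = x.var(), f.var()
    cov = ((x - mx) * (f - mf)).mean()
    return ((2 * mx * mf + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + mf ** 2 + c1) * (vx + vf + c2))

def mssim(X, F, win=8):
    """Mean SSIM over non-overlapping win x win windows."""
    vals = [ssim_window(X[i:i + win, j:j + win], F[i:i + win, j:j + win])
            for i in range(0, X.shape[0] - win + 1, win)
            for j in range(0, X.shape[1] - win + 1, win)]
    return float(np.mean(vals))
```

Identical images score exactly 1; any pair of windows stays within the [−1, +1] range stated above.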

IV. Conclusion

Fusion performance evaluation is a challenging task, as ground truth is not available in most applications. Researchers have proposed and used various parameters to evaluate the performance of fusion. A review of various objective parameters used for analyzing the performance of image fusion algorithms has therefore been presented in this paper.


REFERENCES
[1] T. Arathi and K. P. Soman, Performance evaluation of information theoretic image fusion metrics over quantitative metrics, in International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom '09), pp. 225-227, 2009.
[2] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, A non-reference image fusion metric based on mutual information of image features, Computers & Electrical Engineering, 37(5):744-756, 2011.
[3] K. Kotwal and S. Chaudhuri, A novel approach to quantitative evaluation of hyperspectral image fusion techniques, Information Fusion, 14(1):5-18, 2013.
[4] B. K. Shreyamsha Kumar, Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform, Signal, Image and Video Processing, 7(6):1125-1143, 2013.
[5] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[6] C. S. Xydeas and V. Petrovic, Objective image fusion performance measure, Electronics Letters, 36(4):308-309, 2000.
[7] L. Zhang, D. Zhang, and X. Mou, FSIM: a feature similarity index for image quality assessment, IEEE Transactions on Image Processing, 20(8):2378-2386, 2011.


Gate All Around MOSFET: A Review

Renu Ahlawat1, Dhiraj Kapoor1, Rajiv Sharma2
1 Deptt. of ECE, Vaish College of Engineering, Rohtak-124001, India
2 Professor, Deptt. of ECE, Northern India Engineering College, New Delhi (India)
E-mail: renuahlawat263@gmail.com, kapoordhiraj79@gmail.com, rsap70@rediffmail.com


Abstract- In this paper, the authors review Gate All Around (GAA) MOSFET characteristics such as the sub-threshold swing (SS), ON current (ION), threshold voltage and transconductance. Its I-V characteristics are compared with those of the conventional DG MOSFET. The GAA structure improves the electrostatic control of the channel, and ION is independent of channel-length scaling because of its tunnelling-voltage counterpart. A comparison between the double-gate MOSFET and the gate-all-around MOSFET is included.
Keywords- Gate-All-Around (GAA), performance analysis, Si-NanoWire (NW), MOSFET, Drain Induced Barrier Lowering (DIBL), Equivalent-Oxide-Thickness (EOT).
I. INTRODUCTION

SOI (Silicon on Insulator) MOSFETs have evolved from single-gate structures to multigate (double-gate, triple-gate and gate-all-around) structures [1]. Increasing the effective number of gates improves the electrostatic control of the channel by the gate and therefore reduces short-channel effects [2]. In a GAA device, the gate material surrounds the channel region on all sides. The channel material is Si or SiC and the gate oxide material is SiO2 or HfO2. SiC (Silicon Carbide) NWFETs (nanowire FETs) and Si NWFETs show similar electrical characteristics, such as sub-threshold swing (SS) and ON current (ION), while SiC offers higher thermal conductivity, a wider band gap, higher electron drift velocity, a higher breakdown electric field, and better physical and chemical stability [3]. SiO2 has a dielectric constant about 2.5 times smaller than that of SiC. GAA silicon nanowire FETs have good gate controllability, low leakage current, a high ON/OFF ratio and enhanced carrier transport [4]; they are largely immune to short-channel effects. The cylindrical nanoscale GAA MOSFET can be seen as a quantum wire in which the electrons are confined within a cylindrical potential well. Among III-V materials, Indium Gallium Arsenide (InGaAs) is considered one of the most capable materials for N-channel MOSFETs, due to its high electron mobility, high electron velocity and unique band alignment [5]. To improve high-K/InGaAs interface quality and achieve sub-1-nm Equivalent-Oxide-Thickness (EOT), various high-K dielectric integration schemes and interface passivation techniques have been investigated [6]. Gate-All-Around FETs (GAA-FETs) improve the 3-D electrostatic control of the channel, but they also increase self-heating, so a compromise between performance and reliability is needed; hot spots and heat-dissipation pathways may lead to localized heating and damage to gate insulators [7]. In the tunnelling-junction model, the transport of carriers is defined by tunnelling across a barrier, whereas in a conventional FET diffusion over the barrier occurs. Simulation of the new tunnelling-junction model is similar to a TFET (Tunnel FET), in which tunnelling occurs in the direction normal to the gate [8]. Scaling of the channel length does not affect ION, owing to its tunnelling-voltage counterpart; ION is independent of channel length [8]. The implementation of nanowires as the channel in TFTs (thin-film transistors) shows superior performance, owing to their small volume and the accompanying reduction in defects [9]. The triple-material cylindrical GAA (TM-CGAA) has been used for the calculation of the threshold voltage; the electric field density in the CGAA increases greatly with scaling in the axial direction, resulting in the formation of highly energetic, accelerated hot carriers [10]. Vertical nanowires with a high aspect ratio (up to 50:1) and a diameter of 20 nm can be achieved using lithography- and dry-etch-defined Si pillars with subsequent oxidation; this approach is simple and flexible for further scaling of transistor devices [11]. Fabrication of vertically stacked double-gate (DG) silicon nanowire FETs includes two Gate-All-Around (GAA) electrodes: the Control Gate (CG) and the Polarity Gate (PG) [12]. As the scaling of interconnects goes down to the sub-10-nm nodes, interconnect performance becomes primarily dominated by the resistance rather than the


capacitance, due to the ever-increasing size effects of copper and the higher input capacitance of the devices [13, 14].
II. DIFFERENT GATE STRUCTURES

Fig. 1. Single Gate Structure [1]

Fig. 2. DELTA/FinFET Structure (Double Gate Structure) [1]

Fig. 3. Triple-Gate Structure [1]

The different gate structures are shown in Figs. 1, 2 and 3 [1]. MOSFETs have evolved from single-gate structures to multigate (double-gate, triple-gate and gate-all-around) structures. A single-gate device has one gate, a double-gate device has two gates, a triple-gate device has three gates, and then comes the Gate All Around (GAA) structure. The current drive of multiple-gate MOSFETs is essentially proportional to the total gate width; the current drive of a double-gate device is double that of a single-gate transistor having the same gate length and width [1].

III. STRUCTURE OF GAA (GATE ALL AROUND)

Fig. 4. GAA Structure [3]

The GAA structure is shown in Fig. 4 [3]. In a gate-all-around device, the gate material surrounds the channel region on all sides, which improves the electrostatic control of the channel and hence reduces short-channel effects [3]. The most widely used gate insulator material for NWFETs is SiO2. Compared with SiC it has some disadvantages: its dielectric constant is about 2.5 times smaller, and the weak SiC/SiO2 interface results in an undesirable increase in the gate-oxide electric field compared with the semiconductor. HfO2, which has a higher dielectric constant, is therefore used in place of SiO2 [3].

A further improvement is the fabrication of vertically stacked double-gate (DG) silicon nanowire (SiNW) FETs, which include two Gate-All-Around (GAA) electrodes, the control gate and the polarity gate, as shown in Fig. 5 [13]. The Control Gate (CG) switches the device on and off, while the Polarity Gate (PG) acts on the side regions of the channel; the polarity of the device switches dynamically between n- and p-type.

Fig. 5. SiNWFET S/D pillars support a vertical stack of nanowires. The nanowires are surrounded by the GAA Polarity Gate and GAA Control Gate [13].


IV. COMPARISON BETWEEN DOUBLE GATE MOSFET AND GATE ALL AROUND MOSFET

J. Y. Song, W. Y. Choi, et al. analysed the drain current, transconductance, threshold voltage, SS (sub-threshold swing) and DIBL (drain-induced barrier lowering) characteristics. The transfer characteristics of the basic DG MOSFET and GAA MOSFET are shown in Fig. 6 [15], in which the GAA MOSFET has lower SS and DIBL values than the DG MOSFET.

Fig. 6. Transfer characteristics of basic DG and GAA MOSFET [15].

The GAA MOSFET has a higher transconductance, which is good for its operation, as shown in Fig. 7 [15].

Fig. 7. Transconductance (gm) characteristics of DG and GAA MOSFET [15].

The threshold-voltage characteristics of DG and GAA MOSFETs versus the gate length are shown in Fig. 8 [15]. The reduction rate of the threshold voltage in GAA MOSFETs is smaller than that in DG MOSFETs as the gate length decreases. This means that GAA MOSFETs handle short-channel effects better than DG MOSFETs. In a sub-100-nm MOSFET, channel doping leads to fluctuation, sub-threshold-swing degradation, mobility reduction and a high sensitivity of the threshold voltage to the fin width. To avoid these negative effects, the silicon body usually remains lightly doped and the threshold voltage is determined only by the gate work function [15].

Fig. 8. Threshold voltage characteristics of DG and GAA MOSFETs versus the gate length [15].

Table 1 shows the results of DG and GAA MOSFET simulations [15]. GAA MOSFETs have smaller SS and DIBL values as well as high ON and OFF currents in comparison with DG MOSFETs [15].

Table 1. Results of DG and GAA MOSFET Simulations [15]

Gate-all-around (GAA) is the optimum device structure to electrostatically control a transistor with the narrowest channel length and to minimize the leakage current when the device is in the off-state, making the device


operate with less dissipation per switching event. Several GAA geometries are possible and have been demonstrated in either horizontal or vertical configurations [16]. The junctionless (JL) FET has advantages over the conventional FET, such as reduced fabrication complexity due to a low thermal budget and elimination of the requirement for an abrupt junction, improved immunity against short-channel effects (SCEs), and a less stringent demand to reduce the gate dielectric thickness [17]. A disadvantage is that the scaling of lateral nanowire devices is limited by process-related issues, resulting in devices with high extension resistance.
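The SS and DIBL figures compared above can be extracted from transfer characteristics using their standard definitions, SS = dVG/d(log10 ID) and DIBL = ΔVth/ΔVDS. The sketch below uses hypothetical data, not the simulation results of [15]:

```python
import numpy as np

def subthreshold_swing(vg, id_):
    """SS in mV/decade from the steepest part of a transfer curve."""
    logi = np.log10(id_)
    slopes = np.diff(vg) / np.diff(logi)   # volts per decade of current
    return 1e3 * slopes.min()              # steepest region -> smallest V/dec

def dibl(vth_lin, vth_sat, vds_lin, vds_sat):
    """DIBL in mV/V: threshold-voltage shift per unit increase in drain bias."""
    return 1e3 * (vth_lin - vth_sat) / (vds_sat - vds_lin)
```

An ideal 60 mV/decade curve returns SS = 60, and a 50 mV threshold shift over a 1 V drain-bias step gives DIBL = 50 mV/V.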
V. CONCLUSION

The authors have reviewed the beneficial aspects of GAA MOSFETs, such as their lower SS and DIBL values, well-controlled threshold voltage, and better physical and chemical stability. The GAA device has very strong electrostatic control of the channel, a strong current drive, and better combinations of performance and energy efficiency. The GAA MOSFET also has a higher transconductance value than the DG MOSFET.

REFERENCES
[1]. J. P. Colinge, Multiple-gate SOI MOSFETs, Solid-State Electronics, vol. 48, 2004.
[2]. Benfdila, Beyond CMOS: Materials and Engineering; J. P. Colinge, Multi-gate SOI MOSFETs, 2007.
[3]. T. N. Fashtami and S. Z. S. Ali, Performance Investigation of Gate-All-Around Nanowire FETs for Logic Applications, Indian Journal of Science and Technology, vol. 8, no. 3, pp. 231-236, 2015.
[4]. A. Sharma and S. Akashe, Performance Analysis of Gate-All-Around Field Effect Transistor for CMOS Nanoscale Devices, International Journal of Computer Applications (0975-8887), vol. 84, no. 10, pp. 44-48, 2013.
[5]. D. Jimenez, J. J. Saenz, B. Iniguez, J. Sune, L. F. Marsal, and J. Pallares, Modeling of Nanoscale Gate-All-Around MOSFETs, IEEE Electron Device Letters, vol. 25, no. 5, pp. 314-316, 2004.
[6]. N. Conrad, S. H. Shin, J. Gu, M. Si, H. Wu, M. Masuduzzaman, M. A. Alam, and P. D. Ye, Performance and Variability Studies of InGaAs Gate-All-Around Nanowire MOSFETs.
[7]. S. H. Shin, M. A. Wahab, M. Masuduzzaman, K. Maize, J. Gu, M. Si, A. Shakouri, P. D. Ye, and M. A. Alam, Direct Observation of Self-Heating in III-V Gate-All-Around Nanowire MOSFETs, IEEE Transactions on Electron Devices, vol. 62, no. 11, pp. 3516-3523, 2015.
[8]. A. Sharma and S. Akashe, Analyze the Tunneling Effect on Gate-All-Around Field Effect Transistor, International Journal of Advanced Science and Technology, vol. 63, pp. 9-22, 2014.
[9]. C. J. Su, T. I. Tsai, Y. L. Liou, Z. M. Lin, H. C. Lin, and T. S. Chao, Gate-All-Around Junctionless Transistors With Heavily Doped Polysilicon Nanowire Channels, IEEE Electron Device Letters, vol. 32, no. 4, pp. 521-523, 2011.
[10]. S. Dubey, A. Santra, G. Saramekala, M. Kumar, and P. K. Tiwari, An Analytical Threshold Voltage Model for Triple-Material Cylindrical Gate-All-Around (TM-CGAA) MOSFETs, IEEE Transactions on Nanotechnology, vol. 12, no. 5, pp. 766-773, 2013.
[11]. B. Yang, K. D. Buddharaju, S. H. G. Teo, N. Singh, G. Q. Lo, and D. L. Kwong, Vertical Silicon-Nanowire Formation and Gate-All-Around MOSFET, IEEE Electron Device Letters, vol. 29, no. 7, pp. 791-793, 2008.
[12]. M. D. Marchi, D. Sacchetto, S. Frache, J. Zhang, P. E. Gaillardon, Y. Leblebici, and G. De Micheli, Polarity Control in Double-Gate, Gate-All-Around Vertically Stacked Silicon Nanowire FETs, IEEE Electron Devices Meeting, pp. 8.4.1-8.4.4, 2012.
[13]. C. Pan and A. Naeemi, A Paradigm Shift in Local Interconnect Technology Design in the Era of Nanoscale Multigate and Gate All Around Devices, IEEE Electron Device Letters, vol. 36, no. 3, pp. 274-276, 2015.
[14]. M. Khaouani, A. G. Bouazza, et al., 3D Quantum Numerical Simulation of Horizontal Rectangular Dual Metal Gate/Gate All Around MOSFETs, International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering, vol. 8, no. 4, pp. 719-722, 2014.
[15]. J. Y. Song, W. Y. Choi, et al., Design Optimization of Gate-All-Around (GAA) MOSFETs, IEEE Transactions on Nanotechnology, vol. 5, no. 3, pp. 186-191, 2006.
[16]. N. Clement, X. L. Han, and G. Larrieu, Electronic transport mechanisms in scaled gate-all-around silicon nanowire transistor arrays, Applied Physics Letters, vol. 103, pp. 263504.1-263504.5, 2013.
[17]. D. Moon, S. J. Choi, et al., Investigation of Silicon Nanowire Gate-All-Around Junctionless Transistors Built on a Bulk Substrate, IEEE Transactions on Electron Devices, vol. 60, no. 4, pp. 1355-1360, 2013.


A Micro Strip Patch Antenna to Harvest RF Energy from RF Signal at GSM-950 MHz

Deepak Vats1, Jayant Dhondiyal2, Archana Mongia3
1, 3 Deptt. of ECE, Northern India Engineering College, New Delhi (India)
2 Department of RF & Microwave, AIACTR, Geeta Colony, Delhi
E-mail: deepakvatsrohini@gmail.com, jayantdhondiyalhcl@gmail.com


Abstract RF energy harvesting is one of the most
important techniques to harvest maximum power
from RF signal. In this paper, there is focus on
advance RF energy harvesting system. This system
harvests the energy from multiband RF signal.
GSM-950 MHz frequency band has been targeted
to harvest energy. RF energy can be harvested
from BTS Tower. A Micro Strip patch antenna
has been designed to enhance the input power at
RF energy harvester. A Proposed antenna is
designed and fabricated for deployment in Micro
Strip Patch antenna. The proposed antenna is
fabricated on FR-4 material. The antenna
chartectristics are evolved and measured in return
loss, VSWR, Gain & radiation pattern.
Keywords Micro Strip Patch antenna, RF
energy harvesting wireless power transmission,
Metamaterial (MTM), Return Loss, VSWR, Gain,
& Radiation Pattern, Microwave Rectifier
I. INTRODUCTION

Currently, there is an active research area investigating a number of ways to extract energy from RF signals and convert it into electrical energy. Cleaner, more sustainable forms of electrical power are needed in order to keep costs lower and to ensure a healthier environment for future generations. This work is being carried out by many researchers for the following reasons: it complements the low-power sources used for energizing low-power electronic devices, as an application of green technology, and the energy is freely available in space. RF energy harvesting from ambient sources has great potential for cellular phones and portable electronic devices. This concept needs an efficient antenna along with a circuit capable of converting the RF signal into electrical energy.

In this paper we review and detail some of the main points of RF harvesting using a micro strip patch antenna, starting from its fundamental working. The design and development of a patch antenna and a single-stage rectifier for RF energy harvesting is presented; the proposed antenna resonates at 950 MHz. The basic block diagram of the RF energy harvesting system is given in Fig. 1. There are three elements in the system: the antenna, an impedance matching circuit and a rectifier.

Fig-1: Block diagram of RF harvesting system

II. FUNDAMENTALS OF MICRO STRIP PATCH ANTENNA

A micro strip patch antenna consists of a radiating patch on one side of a dielectric substrate, with a ground plane on the other side, as shown in Fig. 2. The patch is normally made of a conducting material such as copper or gold and can take any possible shape. The radiating patch and the feed lines are usually photo-etched on the dielectric substrate. In order to simplify analysis and performance estimation, the patch is generally square, rectangular, circular or elliptical in shape.

Fig-2: Micro strip patch antenna structure


For a rectangular patch, the length L of the patch is usually 0.3333λo < L < 0.5λo, where λo is the free-space wavelength. The patch is selected to be very thin such that t << λo (where t is the patch thickness). The height h of the dielectric substrate is usually 0.003λo ≤ h ≤ 0.05λo. The dielectric constant εr is typically in the range 2.2 ≤ εr ≤ 12.

For good performance of an antenna, a thick dielectric substrate having a low dielectric constant is desirable, since it provides larger bandwidth, better radiation and better efficiency. However, such a configuration leads to a larger antenna size. In order to reduce the size of the microstrip patch antenna, substrates with higher dielectric constants must be used, which are less efficient and result in narrower bandwidth.

III. PERFORMANCE PARAMETERS

(a) Radiation Pattern: The antenna pattern is a three-dimensional graphical representation of the radiation of the antenna as a function of direction. It is a plot of the power radiated from the antenna per unit solid angle, which gives the intensity of radiation from the antenna. If the total power radiated by an isotropic antenna is P, the power is spread over a sphere of radius r, so that the power density S at this distance in any direction is given as:

S = P / (4πr²)

Then the radiation intensity for this isotropic antenna can be written as:

U = r²S = P / 4π

Isotropic antennas are not realizable in practice but can be used as a reference to compare the performance of practical antennas. The radiation pattern provides information on the antenna beamwidth, side lobes and, to a large extent, antenna resolution.
The E-plane pattern is a graphical representation of antenna radiation as a function of direction in a plane containing a radius vector from the centre of the antenna to the point of maximum radiation and the electric field intensity vector. Similarly, the H-plane pattern can be drawn considering the magnetic field intensity vector.
(b) Gain: Antenna gain is the ratio of the maximum radiation intensity at the peak of the main beam to the radiation intensity in the same direction which would be produced by an isotropic radiator having the same input power. An isotropic antenna is considered to have a gain of unity. The gain function can be described as:

G(θ, φ) = 4πU(θ, φ) / P

where U(θ, φ) is the power radiated per unit solid angle in the direction (θ, φ) and P is the total radiated power. Microstrip antennas, because of their poor radiation efficiency, have poor gain. Numerous studies have been conducted in various parts of the world in order to obtain high-gain antennas.
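As a numeric illustration of the gain function above, the following sketch evaluates G = 4πU/P and converts it to dBi (the helper names and example values are ours, not from the paper):

```python
import math

def gain(U, P):
    """Gain function G(theta, phi) = 4*pi*U(theta, phi) / P."""
    return 4 * math.pi * U / P

def to_dbi(g):
    """Convert a linear gain value to decibels relative to isotropic (dBi)."""
    return 10 * math.log10(g)

# Hypothetical antenna radiating U = 0.5 W/sr at the beam peak,
# with P = 2 W total radiated power:
g = gain(0.5, 2.0)          # equals pi, about 3.14
print(round(to_dbi(g), 2))  # about 4.97 dBi
```

An isotropic radiator (U = P/4π) gives g = 1, i.e. 0 dBi, matching the unity-gain reference in the text.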

(c) Return Loss: Return loss, or reflection loss, is the loss of signal power resulting from reflection at the insertion point of a device in a transmission line or optical fiber. It is expressed as a ratio in dB relative to the transmitted signal power. The return loss is given by:

RL = 10 log10(Pi / Pr) dB

where Pi is the power supplied by the source and Pr is the power reflected. If Vi is the amplitude of the incident wave and Vr that of the reflected wave, then the return loss can be expressed in terms of the reflection coefficient Γ as:

RL = −20 log10 |Γ| dB

and the reflection coefficient can be expressed as:

Γ = Vr / Vi

For an antenna to radiate effectively, the reflection level (S11) should be less than −10 dB.
(d) VSWR: A standing wave in a transmission line is a wave in which the distribution of current, voltage or field strength is formed by the superposition of two waves of the same frequency propagating in opposite directions. The voltage along the line then produces a series of nodes and antinodes at fixed positions.
If V(z) represents the total voltage on the line, then

V(z) = V+ e^(−jβz) + V− e^(+jβz)

Then the Voltage Standing Wave Ratio (VSWR) can be defined as:

VSWR = Vmax / Vmin = (1 + |Γ|) / (1 − |Γ|)

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering Coll ege, New Delhi.
Available online at:www.gtia.co.in
265

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

The value of VSWR should be between 1 and 2 for efficient performance of an antenna.
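A quick numeric check of the relations above (the helper names are ours): for |Γ| = 0.1 the reflection level is −20 dB, below the −10 dB threshold, and the VSWR is about 1.22, inside the 1 to 2 range.

```python
import math

def s11_db(gamma):
    """Reflection level in dB: 20*log10(|Gamma|) (negative for a good match)."""
    return 20 * math.log10(abs(gamma))

def vswr(gamma):
    """VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    g = abs(gamma)
    return (1 + g) / (1 - g)

print(s11_db(0.1))          # -20.0
print(round(vswr(0.1), 3))  # 1.222
```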
IV. ADVANTAGES OF MICROSTRIP PATCH ANTENNA

Microstrip antennas are used as embedded antennas in handheld wireless devices such as cellular phones, and are also employed in satellite communications. Some of their principal advantages are given below:
a) Light weight and low fabrication cost.
b) Supports both linear and circular polarization.
c) Can be easily integrated with microwave integrated circuits.
d) Capable of dual- and triple-frequency operation.
e) Mechanically robust when mounted on rigid surfaces.
V. RECTIFIER CIRCUIT DESIGN

Fig. 3 shows the block diagram of the energy harvesting system. The antenna, the impedance matching circuit and the rectifier circuit are connected back to back. The impedance matching circuit is used to match the impedance of the antenna to the impedance of the rectifying circuit. The rectifier circuit is used to convert RF power into DC electric power.

Fig.-3 Matching Network

A silicon-based Schottky diode having a threshold voltage of 230 mV and a diode capacitance of 0.26 pF is chosen for the rectifier. Here C is a charging capacitor. At microwave frequencies, the non-linear capacitance of the diode governs the maximum power transfer to the load and the amplitude of the rectifier output, as the input impedance of the rectifier changes with frequency.

VI. CONCLUSION

In this paper a theoretical model for an energy harvesting system using a microstrip patch antenna has been presented. It is evident that harnessing energy through a microstrip patch antenna provides a cleaner way of powering lighting systems and other equipment. It is a new approach to lead the world into implementing greener technologies that are aimed at protecting the environment. A modified patch antenna is presented for an RF energy harvesting system to harvest energy from a 950 MHz electromagnetic wave source. This system consists of a rectifying circuit for converting power from RF to DC voltage and amplifying the same.

REFERENCES
[1] Paing, T., J. Morroni, A. Dolgov, J. Shin, "Wirelessly powered wireless sensor platform," Proc. of the 37th European Microwave Conference, Munich, 999-1002, Oct. 2007.
[2] Olgun, U., C.-C. Chen, "Design of an efficient ambient WiFi energy harvesting system," IET Microw. Antennas Propag., Vol. 5, No. 11, 1200-1206, 2012.
[3] Hong, S. S. B., R. Ibrahim, M. H. M. Khir, "Rectenna architecture based energy harvester for low power RFID application," 4th International Conference on Intelligent and Advanced Systems, 382-387, Petronas, Malaysia, Jun. 12-14, 2012.
[4] J. O. McSpadden, L. Fan, and K. Chang, "Design and experiments of a high conversion efficiency 5.8 GHz rectenna," IEEE Trans. Microwave Theory Tech., Vol. 46, No. 12, pp. 2053-2060, Dec. 1998.
[5] Ren, Y.-J. and K. Chang, "5.8 GHz circularly polarized dual-diode rectenna and rectenna array for microwave power transmission," IEEE Trans. Microw. Theory Tech., Vol. 54, No. 1, 1495-1503, Apr. 2006.
[6] Douyère, A., J. D. Lan Sun Luk, and F. Alicalapa, "High efficiency microwave rectenna circuit: modelling and design," Electronics Letters, Vol. 44, No. 24, 1409-1410, Nov. 2008.
[7] Shailendra Singh Ojha, P. K. Singhal, Anshul Agrawal, "2 GHz dual diode dipole rectenna for wireless power transmission," International Journal of Microwave and Optical Technology, Vol. 8, No. 2, March 2013.
[8] Walid Haboubi, Hakim Takhedmit, Jean-Daniel Lan Sun Luk, "An efficient dual circularly polarized rectenna for RF energy harvesting in the 2.45 GHz ISM band," Progress in Electromagnetics Research, Vol. 148, 31-39, 2014.


Localization in Wireless Sensor Networks

1Archana Mongia, 2Deepak Vats
1M.Tech. Student, MSIT, Sonipat, Haryana
2M.Tech. Student, DRCUST, Sonipat, Haryana
E-mail: archanamongia@yahoo.co.in

Abstract: Wireless sensor networks (WSNs) are widely used in many different scenarios. Localization information is crucial for the operation of a WSN. There are mainly two types of localization algorithms. Range-based localization algorithms have strict requirements on hardware and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. In this paper, we locate unknown nodes by incorporating the advantages of these two types of methods and propose a new algorithm named the RSSI-based DV-hop algorithm (RDV-hop). Localization is the process of finding a sensor node's position in space. This paper explains the procedure for locating nodes in a wireless sensor network, including the techniques for estimating inter-node distances and how nodes compute their positions using trilateration or triangulation. It focuses on the mathematical concepts underlying localization, detailing the computational steps involved in trilateration and triangulation, the steps necessary to compensate for inexact distance estimates, and the derivation of the linear systems for calculating nodal coordinates in 2D or higher space dimensions.
Keywords: WSN, RSSI, RDV-hop, DV-hop

I. INTRODUCTION
A wireless sensor network (WSN) is composed of a large number of sensor nodes. These nodes have the ability of sensing, computation, and wireless communication. Due to its powerful functionality and low energy cost, the WSN has been widely used. In various domains, such as national defense and military affairs, environment inspection, traffic management, long-distance control of dangerous regions, and so on, the WSN has shown its significance and capability in application. In a WSN, position information is crucial. When an abnormal event occurs, the sensor node detecting the event needs position information to locate the abnormal event and report to the base station. Therefore, the position information is usually embedded in the report message generated by the sensor node. Without position information, a WSN cannot work properly. In practice, sensor nodes are often deployed by random scattering (deployment from an airplane, for example), and because of the high cost, only a few nodes are equipped with a Global Positioning System (GPS) receiver that can capture their position after deployment.
In wireless sensor networks (WSNs), localization is the process of finding a sensor node's position in space. There are two main techniques for computing node positions: trilateration and triangulation. Both techniques need anchor nodes, which know their accurate positions in space, to locate other sensors. Trilateration uses the distances to three different anchors to compute a node's 2D position. Conversely, triangulation relies on the angular separation between three different pairs of anchors to locate a node in 2D space. Depending on the information available at each node, a choice is made between the two techniques. Trilateration can also be used to calculate 3D positions provided a node knows the distances to four anchors, i.e. one more anchor than the number of space dimensions. Similarly, triangulation requires the angular separation between four different pairs of anchors to determine 3D positions. Existing localization algorithms use a combination of trilateration, triangulation, and different techniques for estimating distances and angles to compute nodal coordinates in a WSN. The more successful strategies implement methods to reduce the propagation of errors within the


network, thus increasing the positioning system's accuracy.
Current papers on positioning systems for WSNs typically focus on isolated parts of the localization process, concentrating on: methods for estimating internode ranges and angles, mathematical techniques for determining the position of a single node, and algorithms for computing the positions of nodes in an entire WSN.
II. RANGE-BASED LOCALIZATION ALGORITHMS
Time of Arrival (TOA), Time Difference of Arrival (TDOA), and Angle of Arrival (AOA) are all popular range-based methods. They require additional hardware support and are thus very expensive to use in large-scale sensor networks. RSSI is the most fundamental method.
Both theoretical and empirical models are used to translate signal strength into estimated distance. Because it is easy to implement and needs no additional hardware, RSSI has been widely used; it is also used in this paper. In the RSSI method, the sender's transmitting power is known and the receiver can compute the signal loss after receiving a message.
III. RANGE-FREE LOCALIZATION ALGORITHMS

The Centroid, Approximate Point In Triangle test (APIT), Coordinate, DV-hop, Amorphous and similar algorithms are all range-free. In the Centroid algorithm, the anchors send out beacons, which include their position information, to neighbor nodes at periodic intervals. A receiver node infers proximity to a collection of anchor nodes. The position of the node is then estimated to be the centroid of the anchor nodes from which it can receive beacon packets. This algorithm is simple, but it needs too many anchors. The authors of APIT proposed that, given three anchor nodes, any unknown node can determine whether it lies inside the triangle composed of the three anchors. In this localization scheme each sensor node performs numerous APIT tests with different combinations of audible anchor nodes, and infers its location as the center of gravity of the intersection area of all the triangles in which the node lies. Coordinate is a GPS-free algorithm with no overall reference-frame information in localization, because no anchor exists.
DV-hop was proposed by D. Niculescu and B. Nath. Anchor nodes generate packets including their position information and a flag, initialized to 1, which records the number of hops away from them. These packets are flooded through the WSN. When they are forwarded by relay nodes, the hop number is increased by 1. In this way, any node can determine the hop number from itself to a certain anchor node. Similarly, the anchor nodes can compute their hop counts to other anchors as well.
The average distance per hop can be determined by a simple formula and is then broadcast. When an unknown node receives it, the receiver will estimate its distance to the anchor as (average distance per hop × hop number). After it obtains three or more estimated values from anchor nodes, its location can be figured out. Our algorithm is a combination of RSSI and DV-hop.
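The Centroid estimate described above reduces to averaging the positions of the anchors a node can hear; a minimal sketch (the function name and coordinates are ours, not from the paper):

```python
def centroid(anchors):
    """Estimate an unknown node's position as the centroid of the
    anchors from which it receives beacon packets."""
    n = len(anchors)
    return (sum(x for x, y in anchors) / n,
            sum(y for x, y in anchors) / n)

# Beacons heard from three anchors:
print(centroid([(0, 0), (10, 0), (0, 10)]))  # roughly (3.33, 3.33)
```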

Fig. 1 An example to show the error of DV-HOP

IV. RSSI-BASED DV-HOP ALGORITHM

The DV-hop localization algorithm can work out the position of unknown nodes which are beyond the anchors' transmission radius, and it does not need the exact metrical information described above. However, the error of the average distance per hop is large if it is computed only from hop counts. An unknown node that is only one hop away from an anchor still needs to


compute its own location according to the average distance per hop. The error will differ based on how much the route bends, as in the example of Fig. 1.
Assume L1, L2 and L3 are all anchor nodes. Node A is an unknown node which needs to be located. The three anchors know their distances from each other, which are 30, 30 and 40 respectively. The actual distance from A to L1 is 15 and the hop number is 1. The hop numbers from A to L2 and L3 are both 3. Every edge length is assumed to be 10. The DV-hop algorithm works as follows. Firstly, anchor nodes broadcast beacons including their position information and a flag, initialized to 1, recording the number of hops to the other nodes. When the beacons are forwarded, the hop number is increased by 1, so each node will know the hop distances from itself to all anchors. Secondly, the anchor nodes will compute the average distance per hop after receiving the beacon messages. In Fig. 1, L1, L2 and L3 will compute as follows:
L1: (30+30)/(4+4) = 7.5, L2: (30+40)/(4+6) = 7, L3: (30+40)/(4+6) = 7
After computing the average distance per hop, the anchor nodes will broadcast the value through the network, and an unknown node will take the first value it receives as the average distance per hop. That is to say, L1, L2 and L3 will broadcast their computed results 7.5, 7 and 7, respectively. Because the distance from A to L1 is only one hop, node A will receive 7.5 and regard it as the average distance per hop. Finally, A will calculate the distances from itself to the three anchors: the distance to L1 is 7.5 × 1 = 7.5, and the distances to L2 and L3 are both 7.5 × 3 = 22.5. After obtaining these distances, the trilateral method is used to localize A. The actual distance is 15, but the distance estimated by the DV-hop algorithm is 7.5. The actual distance is thus twice the estimate, so after using the trilateral method, the estimated position of node A will be deflected from the actual position. There is a straight line between node A and L1, but DV-hop uses the curvilinear average distance instead of the straight-line distance. Therefore, we propose an RSSI-based DV-hop localization algorithm, which incorporates RSSI and DV-hop to implement localization together, aiming to reduce the estimation error of nodes near anchors as calculated by the DV-hop algorithm.
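The worked example above can be reproduced in a few lines; the helper name and data layout are ours, not from the paper:

```python
def avg_distance_per_hop(others):
    """An anchor's correction: the sum of its known distances to the other
    anchors divided by the sum of hop counts to them."""
    total_dist = sum(d for d, h in others)
    total_hops = sum(h for d, h in others)
    return total_dist / total_hops

# Fig. 1 numbers: L1 is 30 units / 4 hops from both L2 and L3, etc.
l1 = avg_distance_per_hop([(30, 4), (30, 4)])  # 7.5
l2 = avg_distance_per_hop([(30, 4), (40, 6)])  # 7.0
l3 = avg_distance_per_hop([(30, 4), (40, 6)])  # 7.0

# Node A takes the first value it hears (from L1, one hop away):
d_to_l1 = l1 * 1  # 7.5, versus the true distance of 15
d_to_l2 = l1 * 3  # 22.5
print(l1, l2, l3, d_to_l1, d_to_l2)
```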
V. ESTIMATING DISTANCES

The distance estimation phase is the initial step performed when locating a node's position in space. By estimating distances to neighbors with known coordinates, a node can determine its own position using trilateration or triangulation. Once a node determines its position, it becomes an anchor node and can then help other neighbors find their positions. Alternatively, nodes attached to GPS devices can rely on this instrument for obtaining accurate coordinates, without needing to estimate distances to their neighbors.
VI. RECEIVED SIGNAL STRENGTH

By definition, the received signal strength is the voltage measured by the receiver's received signal strength indicator (RSSI) circuit [Patwari et al. 2005]. RSS-based localization systems do not require hardware components in addition to the radio transceiver. Moreover, no dedicated packets need to be sent over the network for such systems to function. However, RSS measurements are very unreliable, even when both sender and receiver are stationary. Ranging errors of 50% have been observed, leading to inaccurate distance estimates. Hence, it is important to understand the sources of error before relying on this technique for locating nodes.
Assuming that the transmission power (Ptx), the path-loss model, and the path-loss coefficient


(α) are known, it is possible to estimate the distance between a sender and a receiver using the power of the received signal (Prx):

Prx = c · Ptx / d^α

where c is a constant dependent on the path-loss model. In free space, the received power is inversely proportional to the square of the distance between sender and receiver (i.e. α = 2). When considering an obstructed channel, and assuming multipath effects are mitigated using a spread-spectrum technique, α typically ranges between two and four.
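Inverting the path-loss relation above yields a distance estimate; a minimal sketch with hypothetical power values (not measurements from the paper):

```python
def estimate_distance(p_rx, p_tx, c=1.0, alpha=2.0):
    """Invert Prx = c * Ptx / d**alpha to solve for d (free space: alpha = 2)."""
    return (c * p_tx / p_rx) ** (1.0 / alpha)

# Free-space example: received power is 1/400 of the transmitted power,
# so d = sqrt(400) = 20 distance units.
print(estimate_distance(p_rx=0.25, p_tx=100.0))  # 20.0
```

Given the 50% ranging errors noted above, such estimates should be treated as rough rather than exact distances.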
VII. TIME-OF-ARRIVAL

The time-of-arrival (ToA) is the instant in time when a signal first arrives at the receiver. If the receiver knows when the signal was sent (T1) and the signal's propagation speed in the medium (Vp), the distance to the source can be calculated as follows:

d = (T2 − T1) · Vp

where T2 is the ToA. This technique requires the clocks of both the sender and receiver to be synchronized and, depending on the signal's propagation speed, high-resolution clocks may be needed to obtain accurate distance estimates. Further, these distance estimates are hindered by additive noise and multipath effects. If sound waves are used, the timing precision requirements are less stringent. However, the propagation speed of sound waves is affected by ambient factors, such as temperature and pressure, requiring prior calibration of the senders and receivers.
A technique known as two-way time-of-arrival does not require clock synchronization between sender and receiver. In this case, the propagation time is calculated as half the round-trip time (RTT) between both nodes. One system modified the request-to-send (RTS) and clear-to-send (CTS) messages, employed in the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) MAC protocol, to transmit ToA information. Both the sender and receiver undergo an initial calibration phase to estimate internal processing delays. Once this phase is complete, the sender is capable of estimating its distance to the receiver to an accuracy of 1 m, even in the presence of multipath interference.

VIII. LOCATION ESTIMATION IN THREE-DIMENSIONAL SPACE

The principle of trilateration in two-dimensional space is that when an unknown node can obtain the distances to three neighbor anchors which are not collinear, it can obtain its own location by trilateral measurement. According to mathematical knowledge, in a plane, if there are four non-collinear points A, B, C, D, then as long as the distances between D and the others (A, B, C) are known, and their coordinates are known, we can get the coordinates of point D.
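The one-way and two-way ToA computations above can be sketched as follows; the function names and the acoustic example values are ours, not from the paper:

```python
def toa_distance(t1, t2, v_p):
    """One-way ToA: d = (T2 - T1) * Vp, assuming synchronized clocks."""
    return (t2 - t1) * v_p

def two_way_toa_distance(rtt, v_p, processing_delay=0.0):
    """Two-way ToA: propagation time is half the round-trip time,
    after subtracting the calibrated internal processing delay."""
    return 0.5 * (rtt - processing_delay) * v_p

SPEED_OF_SOUND = 343.0  # m/s near 20 C; an RF signal would use ~3e8 m/s

print(toa_distance(0.0, 0.1, SPEED_OF_SOUND))     # about 34.3 m
print(two_way_toa_distance(0.2, SPEED_OF_SOUND))  # about 34.3 m
```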

Fig. 2 Trilateration

In this paper, the RSSI algorithm is improved and extended into three-dimensional space. The relative location of two points is expressed by the vector connecting them. When no reference point is given, the corresponding solution can be rotated and mirrored about an arbitrary axis. Adding a known location to the system's solution each time reduces the degrees of freedom. In two-dimensional space, an unknown node requires at least three anchors to be located; however, it needs at least four anchors in three-dimensional space, because when an axis through a reference point is used, two mirror solutions will be produced.
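The trilateral measurement discussed above can be sketched by subtracting the first circle equation from the other two, which leaves a 2-by-2 linear system in (x, y); the code below is our own illustration, not the paper's implementation:

```python
def trilaterate(anchors, dists):
    """Solve (x - xi)^2 + (y - yi)^2 = di^2 for three non-collinear anchors."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting circle 1 from circles 2 and 3 gives the linear equations
    # 2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Node at (3, 4) with anchors at three corners of a 10 x 10 field:
anchors = [(0, 0), (10, 0), (0, 10)]
dists = [5.0, 65 ** 0.5, 45 ** 0.5]
print(trilaterate(anchors, dists))  # close to (3.0, 4.0)
```

The 3D case adds a fourth anchor and a third unknown z, exactly as the mirror-image argument in the text requires.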


Fig. 3 Quadrilation

Fig. 4 Mirror Point

As we know, when the four nodes lie in one plane there will also be mirror solutions and the specific location cannot be determined; however, checking whether the four nodes are in the same plane is computationally expensive. There is no need to consider whether the four anchors are in the same plane, because with a random distribution the probability of them lying in the same plane is small. Instead, it is judged by whether the location result for the unknown node is unique. In three-dimensional space, if the distances between an unknown node N and four neighbor anchors A(xA, yA, zA), B(xB, yB, zB), C(xC, yC, zC), D(xD, yD, zD) are known, and the locations of the four anchors are known, the unknown node N(xN, yN, zN) can be located uniquely. When the distances are obtained using RSSI, unknown nodes with at least four neighbor anchors can be located.

IX. CONCLUSION

Current papers on positioning systems for WSNs typically focus on isolated parts of the localization process, concentrating on methods for estimating internode ranges and mathematical techniques for determining the position of a single node. We also propose an RSSI-based DV-hop localization algorithm.

REFERENCES
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "Wireless sensor networks: a survey," Computer Networks Journal, 38(4) (2002): 393-422.
[2] J. Beutel, "Geolocation in a PicoRadio environment," M.S. Thesis, ETH Zurich, Electronics Laboratory, December 1999.
[3] J. Caffery, "A new approach to the geometry of TOA location," Proc. of IEEE Vehicular Technology Conference (VTC), September 2000, pp. 1943-1950.
[4] S. Basagni, M. Conti, S. Giordano, and I. Stojmenovic, Mobile Ad Hoc Networking, John Wiley & Sons, 2004.


Automated Sorting of Object Rejection and Counting Machine

Akshita, Alis, Prateek, Khushboo
ECE Dept., Northern India Engineering College, Delhi, India
Email: kkshushboo_2008@yahoo.com
Abstract: This paper presents an effective solution for determining characteristic features of objects from a simple measurement set-up. A low-cost IR diode array, working on a reflection light scanner principle, has been designed. Essentially, arrays of several emitter-receiver pairs are mounted on different sides of the area of observation, enabling the estimation of the size of the object in different dimensions and its reflection coefficient. The emitters are driven successively in time; hence no signal overlapping and cross-talk occur. The results show that from simple light intensity measurements, a variety of objects can be reliably recognized. As an example, the problem of determining the number of people getting into or out of a room is addressed. Two arrays of 3 diode pairs each are mounted on both sides of a doorway. With the proposed sensor, people can be recognized easily and are well separable from other echoes (motion of hands etc.), making the performance far more reliable than that of ordinary light barriers.
Keywords: IR diode array, Sensor, Emitter-Receiver, conveyor belt, motors

I. INTRODUCTION
Auto-motion first opened its doors in 1967 as a distributor of conveyors and conveyor accessories. It did not take long to realize that one could provide far greater service to the customers if one could also control the manufacturing aspects of the conveyor equipment. Auto-motion understood the value of providing service in every facet, from design and production to installation, training and ongoing factory-trained technical support. Though it is suggested that ancient civilizations such as the Egyptians used conveyors in major construction projects, the history of the modern conveyor dates back to the late 17th century. These early conveyor systems were typically composed of a belt that travelled over a flat wooden bed. The belt was usually made from leather, canvas or rubber and was used for transporting large bulky items. Conveyor belts were later made of layers of cotton with rubber coverings. During the manufacturing boom of World War II, manufacturers created synthetic materials for belting because of the scarcity of natural components.
Today's conveyor belting is made from an almost endless list of synthetic polymers and fabrics and can be tailored to any requirements. Possible uses of conveyors have broadened considerably since the early days, and they are used in almost any industry where materials have to be handled, stored or dispensed. The longest conveyor belt currently in use operates in the phosphate mines of the Western Sahara and is over 60 miles long. With the increasing demand in the market, many synthetic polymers and fabrics began to be used in the manufacture of conveyor belts. Today, cotton, canvas, EPDM, leather, neoprene, nylon, polyester, polyurethane, urethane, PVC, rubber, silicone and steel are commonly used in conveyor belts. Nowadays, the material used for making a conveyor belt is determined by its application.
II. DESIGNED CIRCUITS
Counting circuit
Object counters are extensively used in industries. Normally these are built around expensive microcontrollers or processors. For small-scale industries, this means a significant cost. Here is a simple circuit that can count objects without using any costly microcontroller. The circuit suits those who are not familiar with microcontroller programming.
Rejection circuit
A mechanical rejection mechanism for an automatic sorting machine has an inclined surface along which the objects to be sorted travel. A separate section of the


surface is moved outward by an operating mechanism, responsive to an upstream sensor, to push rejected objects off the surface towards a reject collecting station.
Control Section
The working of the object sorting system using a height sensor and digital counter is described in steps as follows: objects on the running conveyor are classified into two categories based on height. When an object passes through the sensing circuit, the circuit identifies the height of the object on the conveyor and sends signals to the micro-controller.
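The classify-and-count behaviour described above can be sketched in software; the threshold value and names below are illustrative assumptions, not values from the paper:

```python
# Objects taller than the threshold are rejected, shorter ones accepted,
# and a running count of each category is kept, as the micro-controller would.
HEIGHT_THRESHOLD_CM = 10.0

counts = {"accepted": 0, "rejected": 0}

def handle_object(height_cm):
    """Classify one object by height and update the counters."""
    category = "rejected" if height_cm > HEIGHT_THRESHOLD_CM else "accepted"
    counts[category] += 1
    return category

for h in [4.0, 12.5, 9.9, 15.0]:
    handle_object(h)

print(counts)  # {'accepted': 2, 'rejected': 2}
```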
Power supply
When the anode of the diode is positive with respect to its cathode, it is forward biased, allowing current to flow. But when its anode is negative with respect to the cathode, it is reverse biased and does not allow current to flow. With alternating current the electron flow alternates: it increases to a maximum in one direction, decreases back to zero, then increases in the other direction and decreases to zero again. Direct current flows in one direction only. A rectifier converts alternating current to direct current; this unidirectional property of the diode is useful for rectification. A single diode allows the electrons to flow during positive half cycles only and suppresses the negative half cycles. Double diodes arranged back-to-back can act as full-wave rectifiers, as they allow the electron flow during both positive and negative half cycles. Four diodes can be arranged to make a full-wave bridge rectifier.
NEED OF POWER SUPPLY:
Perhaps all of you are aware that a power supply is a primary requirement for the test bench of a home experimenter's mini lab. A battery eliminator can eliminate or replace the batteries of solid-state electronic equipment, and the equipment can thus be operated from 230 V AC mains instead of batteries or dry cells. Nowadays, the use of a commercial battery eliminator or power supply unit has become increasingly popular as a power source for household appliances like transceivers, record players, cassette players, digital clocks etc.
Sensor circuit
This circuit can be used to sense and differentiate between different heights. It demonstrates the principle and operation of a simple height sensor using an LDR. The circuit is divided into three parts: detector (LDR), comparator and output. When light of a particular color falls on the LDR, its resistance decreases and an output voltage is produced.
Transformer
A transformer is a static device that transfers electrical energy from one circuit to another through inductively coupled conductors, the transformer's coils. A varying current in the first or primary winding creates a varying magnetic flux in the transformer's core and thus a varying magnetic field through the secondary winding. This varying magnetic field induces a varying electromotive force (EMF) or "voltage" in the secondary winding. This effect is called mutual induction.
Rectifier Circuit
The signals from the micro-controller are given to the object rejecter through the switching circuit. These signals control the arm and rejecter movement and place the object picked from the conveyor belt in three different places in order to segregate the objects. The switching circuit gives the option of manual operation of the arm movement as well as rejecter operation. The automation switch on the board operates the system automatically. A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction, a process known as rectification. A full-wave rectifier is used, which converts the whole of the input waveform to one of constant polarity (positive or negative) at its output. Full-wave rectification converts both polarities of the input waveform to DC (direct current) and is more efficient. 6 A diodes are used for voltage rectification. The AC voltage is thereby converted to pulsating DC. While half-wave and full-wave rectification suffice to deliver a form of DC output, neither produces constant-voltage DC. In order to produce steady DC from a rectified AC supply, a smoothing circuit or filter is required. This pulsation is removed by a 1000 µF capacitor filter circuit. Sizing of the capacitor represents a tradeoff.
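That tradeoff is commonly estimated with the full-wave ripple approximation Vr ≈ I / (2fC); the sketch below uses an assumed load current, not a value from the paper:

```python
def ripple_voltage(load_current_a, mains_frequency_hz, capacitance_f):
    """Approximate peak-to-peak ripple of a capacitor-filtered
    full-wave rectifier: Vr = I / (2 * f * C)."""
    return load_current_a / (2 * mains_frequency_hz * capacitance_f)

# 1000 uF filter on 50 Hz mains with a 0.5 A load:
print(ripple_voltage(0.5, 50.0, 1000e-6))  # about 5.0 V peak-to-peak
```

A larger capacitor reduces the ripple proportionally, but costs more and draws larger charging current peaks.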
Relays
A relay is an electrically operated switch. Relays are used where it is necessary to control a circuit by a low-power signal or where several circuits must be controlled by one signal. The first relays were used in long-distance telegraph circuits, repeating the signal coming in from one circuit and re-transmitting it to another. Relays were used extensively in telephone exchanges and early computers to perform logical
Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
273

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

operations. A type of relay that can handle the high


power required to directly control an electric motor is
called a contractor. In this system DC Motors for
rejecter, turn table, object rejecter and conveyor belt
are connected through the relay circuit.
DC Motors
The DC motors are used to control the conveyor belt
and object a rejecter movement are connected to
controller circuit and receives signals from microcontroller. There are IR sensors installed in order to
accurately identify ground and drop places. An
electric motor converts electrical energy into
mechanical energy. DC motor design generates an
oscillating current in a wound rotor, or armature, with
a split ring commutator, and either a wound or
permanent magnet stator. A rotor consists of one or
more coils of wire wound around a core on a shaft; an
electrical power source is connected to the rotor coil
through the commutator and its brushes, causing
current to flow in it, producing electromagnetism.
Conveyor Belt
A conveyor belt consists of two or more pulleys,
with a continuous loop of material - the conveyor belt
- that rotates about them. One or both of the pulleys
are powered, moving the belt and the material on the
belt forward. The powered pulley is called the drive
pulley while the unpowered pulley is called the idler.
There are two main industrial classes of belt
conveyors; those in general material handling such as
those moving boxes along inside a factory and bulk
material handling such as those used to transport
industrial and agricultural materials, such as grain,
coal, ores, etc. generally in outdoor locations.
Infrared Array
The optical system consists of n pairs of a highly
directional IR emitter diode and a shielded sensitive
phototransistor, located close to one another. Due to
the face-to-face mounting, the maximum range R
max can be reduced to almost the half of typical door
widths what is still challenging for the detection of
some materials and dark colors. A high transmitter
power is one of the key elements. Pulsed emitter
diodes sfh 415-u diodes With Integrated preamplifier
and a maximum sensitivity at Approx. 950 nm have
been chosen. The emitters are driven at temporally
successive instances, hence overlapping and mutual
influence of the echo signals is excluded. All received
impulses are then demultiplexed into one channel.
The processing is performed on a low-power 8 bit
micro-controller which generates the pulses, handles

the timing of Emission, determines and stores the


received echo features and makes the decision about
the object. For the specific application as a people
counter, two Arrays will be mounted vertically.

Fig. 1 Experimental set up

III.

EXPERIMENTAL SETUP

Frames
Standard gravity conveyor frame widths are 305 mm,
460 mm and 610 mm overall. Conveyor frames are
stocked in both 1.5 meter and 3 meter lengths.
Frames are supplied with either butting plates
(standard) or hook and bar attachments to secure each
segment together
Rollers
Standard rollers for the conveyor frames are 50.8 mm
diameter. They are available in PVC (25kg capacity),
Black Steel and Galvanized Steel in both Medium
Duty (140 kg capacity) and Heavy Duty (200 kg
capacity) versions to suit varying loads or conditions.
Stainless steel rollers for wash-down or corrosive
applications are used. Spring loaded axles slot into
holes along the frame. On PVC and Medium Duty
rollers one end is a D shape whilst the other is round.
This allows for easy replacement of damaged rollers.
Supports
Two types of standard supports are available. Both
styles provide adjustment from 600 1000mm to
Top of Roller. Other support styles and complete
frames are used to special support. RHS Supports are
bolted to the underside of the conveyor frame via a
crescent (smiley) plate. This plate provides allowance
for any angular misalignment. Normally, supports are
only placed on every conveyor join (3 stands for 2
frames). Curves always require 2 stands for proper
stability.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
274

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Belt Conveyor
A conveyor style utilizes a flat belt running on a flat
fabricated steel deck or over rollers. They are used
where smooth and quiet transport of product is
desirable, and is ideally suited to irregular shaped
product that cannot easily be moved on other
conveyor styles.Line shaft is an economical method
of conveying flat bottomed product. A series of
rollers, each driven by a polyurethane band connected
to a single rotating shaft, mounted within the
conveyor body, drive the product through the system.
Line shaft conveyors are made as standard in widths
of 245mm, 398mm and 550mm (measured between
frames).It provides minimum pressure accumulation,
quiet operation and easy installation. It is suitable for
transportation of products within warehouse or
manufacturing operations where lighter weight
cartons, tote bins and other products need to be
moved, allowing for a variety of situations requiring
directional changes. Limited, minimal pressure
accumulation of product can be obtained with this
style of conveyor.Interfacing of Microcontroller to
Relay Circuit by Darlington Array (ULN Driver)
One option for driving relays would be to use a highvoltage, high-current, Darlington array driver IC such
as the ULN2803. The ULN2803 can directly
interface to the data outputs of the 8051 pins, and
provides much higher drive-current. The ULN2803
also has internal diode protection that eliminates the
need for the fly-back diode as shown in the above
relay driver schematics. One can connect 8 relay
using this IC. It is always best connecting the switch
to ground with a pull-up resistor as shown in the
"Good" circuit. When the switch is open, the 10k
resistor supplies very small current needed for logic
1. When it is closed, the port pin is short to ground.
The voltage is 0V and the entire sinking current
requirement is met, so it is logic 0. The 10k resistor
will pass 0.5 mA (5 Volt/10 k ohms). The drawback
is that the closure of switch gives logic 0 and people
like to think of switch closure gives logic 1. The
ULN2003 is a monolithic high voltage and high
current Darlington transistor arrays. It consists of
seven NPN Darlington pairs that feature high-voltage
outputs with common-cathode clamp diode for
switching inductive loads. The collector-current
rating of a single Darlington pair is 500mA. The
Darlington pairs may be paralleled for higher current
capability. Applications include relay drivers,
hammer drivers, lamp drivers, display drivers (LED

gas discharge), line drivers, and logic buffers. The


ULN2003 has a 2.7kW series base resistor for each
Darlington pair for operation directly with TTL or 5V
CMOS device.

Fig. 2 Relay Driving Circuit

This project uses relay circuit board to control various


parameters of project. It uses 8 relay boards. The
relay acts as a switch for parameters like turn table,
shoulder of robot and rejecter. A relay is usually an
electromechanical device that is actuated by an
electrical current. The current flowing in one circuit
causes the opening or closing of another circuit.
Relays are like remote control switches and are used
in many applications because of their relative
simplicity, long life, and proven high reliability.
Although relays are generally associated with
electrical circuitry, there are many other types, such
as pneumatic and hydraulic. Input may be electrical
and output directly mechanical, or vice versa.
Generally relay coils are designed to operate from a
particular supply volt. Small relay have operation
between 12V and 5V.
IV. FUTURE SCOPE
This project involves the sorting of objects through
color sensors the future advancements can be done by
increasing the efficiency of the color sensor. The
sensor is key component of project which aides in
distinguishing the objects. Failing of which may
result in wrong material handling. Thus it becomes
vital that the sensor had a very high sense of
sensitivity and ability to distinguish between colors.
Another area of improvement is design of efficient
rejecter of Digital Image Processing (DIP) is a
multidisciplinary science. The applications of image
processing include: astronomy, ultrasonic imaging,
remote sensing, medicine, space exploration,
surveillance, automated industry inspection and many
more areas. Different types of an image can be
discriminated using some image classification
algorithms using spectral features, the brightness and

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
275

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

"color" information contained in each pixel. The


Classification procedures can be "supervised" or "un
supervised. With supervised classification, identified
examples of the Information classes (i.e., land cover
type) of interest in the image. These are called
"training sites. The image processing software
system is then used to develop a statistical
characterization of the reflectance for each
information class. Genetic algorithm has the merits of
plentiful coding, and decoding, conveying complex
knowledge flexibly. An advantage of the Genetic
Algorithm is that it works well during global
optimization especially with poorly behaved
objective functions such as those that are
discontinuous or with many local minima. MATLAB
genetic algorithm toolbox is easy to use, does not
need to write long codes, the run time is very fast.
The results can be visual. The aim of this work was to
realize the image classification using Matlab
software. Matlab is a widely used software
environment for research and teaching applications
on robotics and automation, mainly because it is a
powerful linear algebra tool, with a very good
collection of toolboxes that extend Matlab basic
functionality, and because it is an interactive open
environment. The paper presents a toolbox that
enables access to real robotic and automation (R&A)
equipment from the Matlab shell. If used in
conjunction with a robotics toolbox it will extend
significantly their application, i.e., besides robotic
simulation and data analysis the user can interact online with the equipment. The objective of the
approach are; firstly to sort the objects by their colors
precisely; secondly to detect any irregularity of the
colors surrounding the apples efficiently. An
experiment has been conducted and the results have
been obtained and compared with that has been
performed by human sorting process and by color
sensor sorting devices. Existing sorting method uses a
set of inductive, capacitive and optical sensors do
differentiate object color. Advanced mechatronics
color sorting system solution with the application of
image processing. Supported by Open CV, image
processing procedure senses the circular objects in an
image captured in real time by a webcam and then
extracts color and position information out of it. This
information is passed as a sequence of sorting
commands to the manipulator that does pick-andplace mechanism. Extensive testing proves that this
color based object sorting system works 100%

accurate under ideal condition in term of adequate


illumination, circular objects shape and color. The
circular objects tested for sorting are silver, red and
black. For non-ideal condition, such as unspecified
color the accuracy reduces to 80%.
V.
CONCLUSION
The project works successfully and separates
different heighted objects using HEIGHT sensor. The
height sensor result was converted chiefly to the
command that drive the handling systems which drive
the reject and count machine to pick up the object and
place it into its designated place. There are two main
steps in height sensing part, objects detection and
height recognition. The system has successfully
performed handling station task, namely pick and
place mechanism with help of height sensor. Thus a
cost effective Mechatronics system was designed
using the simplest concepts and efficient result was
being observed. This system is a depicting the
prototype of sorting systems which are used in
industries.
REFERENCES
[1]
[2]

[3]

[4]

[5]

rspublication.com/ijeted/2014/MAY14/27.pdf
Huang, T, Wang, P.F., Mei, J.P., Zhao, X.M.,Time
Minimum Trajectory Planning of a 2-DO Translational
Parallel Robot for Pick-and-place OperationsIEEE
Computer Magazine,Vol. 56, No. 10, pp. 365-368, 2007.
Sahu, S., Lenka, P.; Kumari, S.; Sahu, K.B.; Mallick, B.;
Design a color sensor: Application to robot handling
radiation work, Industrial. Engineering, Vol. 11, No. 3, pp.
77-78, 2010.
Khojastehnazhand, M., Omid, M., and Tabatabaeefar, A.,
Development of a lemon sorting system based on color and
size Journal of Plant Science, Vol. 4, No. 4, pp. 122-127,
2010.
Dogan Ibrahim Microcontroller Based Applied Digital
Control, International Journal of Science, Vol. 23, No. 5,
pp.1000- 1010,20.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
276

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Study on Energy Harvesting Methods for


Wireless Sensor Network
Gaurav Verma1, Ashish Rawat2, Vidushi Sharma3
Electronics and Communication Engineering Department, 2Electrical and Electronics
Engineering, Northern India Engineering College, Delhi, India
3
SOICT, Gautam Budhaa University, Greater Noida, UP,India
Email : Gaurav.gspindia@gmail.com,rehanlovemoney@gmail.com,svidushee@gmail.com
1

AbstractIn this paper, various method of energy


harvesting for wireless sensor node has been discussed.
A study on renewable and non-renewable sources has
been analyzed theoretically. The principle of energy
harvesting for solar energy, wind energy thermoelectric
energy and RF energy has been studied and analyzed.
This is investigated theoretically that which energy
harvesting system is suitable to which kind of
application. In this paper a review has been provided
for different available energy harvesting system for the
Wireless sensor network node. Here different areas of
research for various energy harvesting system
mechanisms have been discussed.

These are the three major area dedicated to enhancing


the life time of the node.

Index
TermsWSN,
Energy
Harvesting,
Comparative study, Energy Harvesting Methods,
Principle of Energy Harvesting (EH)

I. INTRODUCTION
Recently WSN is one of the important fields of
research due to its enormous applications like
environmental monitoring, animal control, structural
health monitoring (SHM), Body Area Network
(BAN), military application etc.[1]. The Wireless
Sensor Network works on the basis of cluster of
nodes. Powerful network developed by the nodes and
the occurred events are transferred to the base station.
The sensor node consists of the MCU, sensor and one
RF transceiver. The whole assembly is powered by
battery or Energy harvesting/Scavenging unit. The
battery power specify the life time of the node. Life
Time of node defined as the time for which the node
actively participated in the network. Once the Life
Time of the node finishes the node is no longer of use
and the whole network may suffer problem in
transferring the information from one point to another
point. Recently lot of research has been carried out to
prolong the life time of the node. The research is
focused on the area of Energy Management, Energy
Harvesting/scavenging and Energy Optimization.

Figure 1: Energy harvesting sources and their energy


conversion devices. [15]

In this research paper, various energy harvesting


for WSN are discussed. It focuses on how the energy
is transferred from one type of energy source
(Renewable or non-renewable) into electrical energy.
This electrical energy can be used for various
operations for dc circuit.
In this paper, the sources of Energy are discussed
in Section II. The principle of energy harvesting
techniques and previous work is discussed in section
III. The Application mapping with various energy
harvesting techniques is discussed in the Section IV
and finally Section V provide the conclusion.
II. PRINCIPLE AND SURVEY OF ENERGY HARVESTING
Energy harvesting is also called power harvesting
or energy scavenging. Energy scavenging is defined
as the power harvesting from the surrounding sources
which are not knowingly created from the system.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
277

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

The circuits which harvest energy from its


surrounding are known as anticipated methods or
Energy scavenging. Table 1 shows the difference
between the Energy Harvesting and Energy
scavenging.
The circuits which harvest energy from its
surrounding are known as anticipated methods or
Energy scavenging. Following is the Table 1 which
shows the difference between the Energy Harvesting
and Energy scavenging.
Table 1: Comparison of energy scavenging and energy
harvesting [16]

Energy
Sources
Photonic
Thermal
Kinetic Flow

Electromagn
etic

Harvestin
g
Daily
Solar Cycle
Furnace
Covers
Air
conditioning
ducts
Dedicated
Transmitters

Scavenging
Random
Lights
Forest Fires
Winds

GSM
stations/WLAN

In this section, the principles of different energy


harvesting techniques presented
A. Solar energy harvesting
Solar energy can be converted into electrical
energy using a solar cell. The solar cell is basically a
semiconductor diode consisting of a large-area p-n
junction. When the cell is illuminated with light
(photons) having energy greater than the band gap
energy of the semiconductor, electron-hole pairs are
generated due to the absorption of photons. These
electron-hole pairs are separated by a built-in electric
field created by the cell junction, with the electrons
swept towards the n-side and holes towards the pside. The electrons and holes are then collected by the
contacts of each side of the cell, thus forming an
electrical potential. If an external load is connected to
the cell, the current will flow and thereby generating
electrical power. This process is known as the
photovoltaic effect. Figure 2 shows the current versus
voltage (I-V) characteristic of a typical solar cell,
with (illuminated) and without (dark) incident
radiation.

Figure 2: I-V characteristics of a typical solar cell [17].


Table 2: Specifications for several commercially available solar
cells with 1000 W/m-2 incident radiation [18].
Cell
Voc
Isc
FF
Dimensio
Thick
(%
ns
ness
)
(V)
(A)
(um)
(/cm
square)
100
330
0.59
3.15
0.77
14.5
Schott
EFG
1030
102
0.60
3.57
0.73
15.4
Photowat
300
t Af
50
SunPowe
r A300
SunPowe
rPegas
us

156

270
40

0.67

5.90

0.78

21.5

21.9

160

0.68

0.88

0.82

22.5

WSN require small scale energy harvesting


whereas energy harvesting from solar energy is
functional at large scale. This has been shown that
many research groups are here which are developing
the architecture for plug and play energy harvesting
module.
In [21], [22], Heliomote are developed. These are
solar powered WSN system developed on Mica2
platform. They used single stage energy storage as
NiMH batteries and a hardware-controlled battery
charging circuit. One component is used as to energy
monitoring.
In [23],[24], presented Prometheus using TelosB
platform. This architecture used one more stage of
energy storage and protocol based charging control
mechanism. A super capacitor is used as the primary
buffer to power the sensor node and to charge
secondary buffer i.e. Li-ion rechargeable battery
when the charge are in excess.
Simjee et al. [25,26] developed a solar power
operated WSN called Everlast. To maximize the
energy the Everlast is equipped with MPPT i.e.
Maximum Power Point Tracking system. Super
capacitor is charged with feed forward pulse
frequency modulated. Table 3 compares the solar
energy harvesting systems discussed above, in terms

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
278

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

of their solar panel power rating, storage type and


storage capacity.

Nodes

Heliom
ote [28]
Promet
heus[30
]

Everlast
[33]

Table 3: Comparison of Solar EH WSN nodes


Solar
Storage
Sensor
MPP
Storag
Power e
Capacity
Node
Trac
Type
k
(mW)

210

NiMH

1800mAh

Mica2

No

190

Two
Superca
pacitor
and Liion
Superca
pacitor

22F
200mAh

Telos

No

450

100F

Integrated

Yes

B. Thermoelectric Energy Harvesting


In [30], [31], recent research has been updated
regarding thermoelectric materials. The Systems
generated in [32] to [33] are used for macro power
generation using industrial processes and exhaust
gases of automobile. Such kind of architectures
cannot be used for the WSN node due to its low
[power consumption and very small size.
In [34], 30 x 34 x 3.2 mm thermo electric modules
are used to charge a NiMH battery from solar
radiation. The system consists of a Diode, a NiMH
battery and TEGs. Figure 7 shows the schematic for
such case. A Plexiglas window was placed above a
heat sink to trap the thermal energy to accumulate
and to increase the thermal gradient for the TEGs.

Following is the Table 4 shows comparative study


of different TEGs.
Table 4: Output of TEG modules

Set
Up

Total
Harvested
Energy (J)

1
2
3

9.03
28.12
227.70

2.4
4.2
4.7

Lawrence et al. [35] provide a TEG system using


the natural temperature gradient between air and soil
to generate small amount of electrical power.
Lawrence et al. [35] shows that on an average the
total value is between 0 to 100 W and at day time
this value is 350W in afternoon. The average output
power is 50W.

0.0228
0.071
0.575

Surface
Area
(per cm
square)
9
33
131

C. Wind Energy
Wind energy harvesting can be realized using a
wind turbine. The turbine converts the wind flow into
a shaft rotation using a rotor consisted of one or more
airfoil blades. The shaft is attached to a generator
which contains strong magnets and coils inside.
Weimer et al. [37] proposed anemometer based
solution for wind energy harvesting. A power control
circuit is used for the maximum optimization of
generated power. The output power is 650 W at
high wind speed of 8m/s whereas 5 to 80 W was
obtained a lower wind speed of 2 to 3.5m/s. In [37]
no clear information is provided for the anemometer
cups design.
In [38], small scale wind energy has been shown
for the EHWSN based in a four bladed, horizontal
axis wind turbine with a diameter of 6cm. The Carli
et al. [38] developed a buck-boost converter-based
Maximum power point circuit. Author provided an
effective power saving scheme by using an ultra-low
power comparator which switch off the energy
harvesting when the wind is calm. Other works
shown in [39], [40] based on turbine principle.
Air Flow
Speed (m/s)

Figure 3: Thermal Energy harvesting device for WSN using


Solar rays

Average
Power(mW)

Table 5: Characterization of data[38]


Optimum
Voltage
Load
(V)
RL(ohm)
715
2.40
559
4.21
549
4.68

Max.
Power
(mW)
2.02
7.93
9.95

In [50-52], factors affecting the power generation


through wind energy are the efficiency of generator,
power converters used, airflow speed and total swept
area.
D. Radio Frequency (RF) Energy Harvesting
Another possible alternative to power outdoor
WSN nodes is by RF energy harvesting [3]. In [42],
the energy of 60W is harvested from TV towers, 4.1
km away, and is able to operate small electronic

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
279

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

device. Wireless battery charging system using radio


frequency energy harvesting is discussed in [43]. RF
energy harvesting with ambient source is presented in
[44] where energy harvester can obtain 109W of
power from daily routine in Tokyo. Recently
prototypes for such RF harvesters have been
developed in the academia [45], [46]. Ambient RF
energy harvesting with two systems has been studied
in [47]. The first is broadband system without
matching while the second is narrow band with
matching. Commercial products have been introduced
by the industry [48] for RF energy harvesting
recently. In [49], the authors investigate the
feasibility and potential benefits of using passive
RFID as a wake-up radio. The results show that using
a passive RFID wake-up radio offers significant
energy efficiency benefits at the expense of delay and
the additional low-cost RFID hardware. Too much
research and VLSI engineers are still going on for the
more efficient generation of circuits for RF energy
harvesting.
Antenna selection plays an important role for the
RF energy harvesting as it is also an important factor
to achieve gain the energy from the transmitter for the
receiver. In [50], has created a Patch antenna used for
receiving power from surrounding. Catherine M.
Kruesi et al. presented a Nobel 3-D antenna technique
for Wireless Sensor Network and RF ID for RF
energy Harvesting [51]. [52] - [55] shown the
development of the Antenna for the Wireless Sensor
network RF energy harvesting. [56] - [58] provided
Rectenna (Rectifier and Antenna) design to achieve
efficient RF energy harvesting techniques.
The model derived in [59] proposed for the
hardware application for the RF energy Harvesting
and analyzed that properly but not provide any
protocol information for RF energy harvesting issue.
Prusayon Nintanavongsa et al [59] proposed the
RF Energy harvesting circuit and proposed a method
to optimize the circuit for RF energy harvesting from
multiple antennas. Using Mica2 the design is
fabricated and analyzed. The design is dual stage and
experiments and characterization plots reveal
approximately 100% improvement over other
existing designs in the power range of -20 to 7 dBm.
There is a range of EIRP which is allowed for all over
the world and this is 4W EIRP for Asian/Europe
Countries 2W EIRP is being used in USA/Canada.
N. M. Din et al. [60] designed three modules: a
single wideband 377 E-shaped patch antenna, a pi

matching network and a 7-stage voltage doubler


circuit. The design authentication for the EHWSN is
not proper because of bigger size of antenna.
Guocheng Liu et al. [61] provided a solution for
RF EHWSN. The design is dedicated for structural
health monitoring (SHM) applications to develop a
way of supplying power to sensor nodes in an
efficient and reliable manner. The design used for the
RF EHWSN as well as data comm
funication.
In [62], an integration scheme for RF energy
harvesting has been shown. The proposed system
designed over SoI technology. The technology
provides 0.8V @-20dBm input energy level at 868.3
MHz ISM band. The design is adjusted for a 10m
distance from power transmitter.
The SoG technology has an advantage of no subthreshold current in this case which minimize the
leakage current and hence improve the circuit
performance. The author used this advantage and
simulates the circuit which provides the better outputs
than that of conventional technology.
Alanson Sample et al. [63] describe two wireless
power transfer systems. The Wireless Identification
and Sensing Platform (WISP) is a platform for
sensing and computation that is powered and read by
a commercial off-the-shelf UHF (915MHz) RFID
reader. WISPs are small sensor devices that consume
on the order of 2uW to 2mW, and can be operated at
distances of up to several meters from the reader. An
experiment performed by the author in which 60uW
is harvested at a range of about 4km.
In [64], the author provided a better view for RF
EHWSN for smart home application. The author
makes a good comparison of the RF antennas. By
means of antennas, the author providing the
information about the role of antenna for the RF
energy harvesting for the household application.
Another aspect of RF energy harvesting also provided
in the article [65] by showing the application used for
the cellular phone charging system.
Bin Zhu et al. [67] propose secure multicasting
for simultaneous wireless information and power
transfer (SWIPT) in the presence of multiple energy
receivers who have potential to eavesdrop on the
messages of information receivers. Simulation results
are then provided to demonstrate the efficacy of the
proposed design in power saving. Some derivation
provided in [67] which provide the system modeling
structure.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineeri ng College, New Delhi.
Available online at:www.gtia.co.in
280

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

There is a lot of hardware structures are provided


for the Energy harvesting but the RF energy
harvesting can only provide us the energy on demand
model. The architecture provided in the
[59,60,61,62,63,64,65,66,67] are based on the
commercially available hardware like schottky diodes
capacitors etc. but no one tried to optimize the
component structures of the diodes under the solid
state technology. The part of VLSI design is very
less. Only [62], tried new technology for the
implementation of diodes on the basis of Silicon on
Glass and achieved good results. But [62] also only
providing the simulated results. A few authors
explain the protocol issue for their dedicated
hardware which may arise complication on run time.
Although, some authors, providing the generalized
model for EHWSN.
III. CONCLUSION
In this paper we have discussed the available
energy harvesting renewable and non-renewable
energy sources. The solar, wind energy sources are
completely renewable energy sources. Thermal EH
sources can be renewable when used naturally and can
be non-renewable when used by manmade system
dedicated for TH EH. Moreover we have studied the
previous work in the field of the available EH for
WSN node. RF EHWSN is studied deeply as it can
provide the Energy on demand issue. The EH systems
are discussed with their important applications and it
has been discussed that which EHWSN should be
used in the various applications.
REFERENCES
[1]
James M. Gilbert, Farooq Balouchi, Comparison of
Energy Harvesting Systems for Wireless Sensor Networks,
International Journal of Automation and Computing, pp: 334-347,
2008.
[2]
Z.G. Wan1, Y.K. Tan2 and C. Yuen3, Review on
Energy Harvesting and Energy Management for Sustainable
Wireless SensorNetworks, IEEE transaction, pp: 362-367, 2011.
[3]
Zhi Wei Sim, Radio Frequency Energy Harvesting for
Embedded Sensor Networks in the Natural Environment, Thesis
for School of Electrical and Electronic Engineering The
University of Manchester for the degree of Master of Philosophy,
2011.
[4]
A. H. Sellers, P. J. Robinson. Contemporary
Climatology, Longman Scientific & Technical, Essex, UK, 1986.
[5]
J. L.Monteith, M. H. Unsworth. Principles of
Environmental Physics, Edward Arnold, London, UK, 1990.
[6]
S. J. Roundy. Energy Scavenging for Wireless Sensor
Nodes with a Focus on Vibration to Electricity Conversion, Ph.D.
dissertation, University of California, Berkeley, USA, 2003.

[7]
K. Finkenzeller. RFID Handbook: Fundamentals and
Applications in Contactless Smart Cards and Identification, John
Wiley & Sons, 2003.
[8]
T. Starner. Human-powered Wearable Computing. IBM
Systems Journal, vol. 35, no. 3, pp. 618-629, 1996.
[9]
Highway Energy Systems Ltd., [Online], Available:
http://www.hughesresearch.co.uk/, March 6, 2008.
[10]
M. Trew, T. Everett. Human Movement: An
Introductory Text, Churchill Livingstone, 2001.
[11]
S. J. Roundy, P. K. Wright, J. Rabaey. A Study of Low
Level Vibrations as a Power Source for Wireless Sensor Nodes.
Computer Communications, vol. 26, no. 11, pp. 1131-1144, 2003.
[12]
F. M. Discenzo, D. Chung, K. A. Loparo. Power
Scavenging Enables Maintenance-free Wireless Sensor Nodes. In
Proceedings of the 6th International Conference on Complex
Systems, Boston, USA, 2006,
[13]
M. A. Gree. Third Generation Photovoltaics:
Advanced Solar Energy Conversion, Springer, Germany, 2005.
[14]
T. Starner, J. A. Paradiso. Human-generated Power for
Mobile Electronics. Low-Power Electronics Design, C. Piguet
(ed.), CRC Press, Chapter 45, pp. 135, 2004.
[15]
J. P. Thomas, M. A. Qidwai, and J. C. Kellogg, "Energy
scavenging for small-scale unmanned systems," Journal of Power
Sources, vol. 159, pp. 1494-1509, 2006.
[16]
D. Steingart, S. Roundy, P. K.Wright, and J. W. Evans,
"Micropower Materials Development for Wireless Sensor
Networks," MRS BULLETIN, vol. 33, pp. 408-409, April 2008.
[17]
M. R. Patel, Wind and Solar Power Systems: CRC
Press, 1999.
[18]
J. P. Thomas, M. A. Qidwai, and J. C. Kellogg, "Energy
scavenging for small-scale unmanned systems," Journal of Power
Sources, vol. 159, pp. 1494-1509, 2006.
[19]
A. Reinders, "Options for Photovoltaic Solar Energy
Systems in Portable Products " TMCE, 2002.
[20]
C. Knight, J. Davidson, and S. Behrens, "Energy
Options for Wireless Sensor Nodes," Sensors, vol. 8, pp. 8037-8066, 2008.
[21]
V. Raghunathan, A. Kansal, J. Hsu, J. Friedman, and M.
Srivastava, "Design considerations for solar energy harvesting
wireless embedded systems," in Fourth International Symposium
on Information Processing in Sensor Networks, IPSN 2005, 2005,
pp. 457-462.
[22]
Crossbow Technology Website [Online]. Available:
www.xbow.com.
[23]
X. Jiang, J. Polastre, and D. Culler, "Perpetual
environmentally powered sensor networks," in Fourth
International Symposium on Information Processing in Sensor
Networks, 2005, pp. 463-468.
[24]
"Telos: Ultra low power IEEE 802.15.4 compliant
wireless sensor module," [Online].
[25]
F. Simjee and P. H. Chou, "Everlast: Long-life,
Supercapacitor-operated Wireless Sensor Node," in Low Power
Electronics and Design, 2006. ISLPED'06. Proceedings of the
2006 International Symposium on, 2006, pp. 197-202.
[26]
F. I. Simjee and P. H. Chou, "Efficient Charging of
Supercapacitors for Extended Lifetime of Wireless Sensor Nodes,"
Power Electronics, IEEE Transactions on, vol. 23, pp. 1526-1536,
2008.
[27]
S. Dalola, M. Ferrari, V. Ferrari, M. Guizzetti, D.
Marioli, and A. Taroni, "Characterization of Thermoelectric
Modules for Powering Autonomous Sensors," Instrumentation and
Measurement, IEEE Transactions on, vol. 58, pp. 99-107, 2009.
[28]
H. J. Goldsmid, "Conversion Efficiency and Figure-of-Merit," in CRC Handbook of Thermoelectrics, 1995.
[29]
J. P. Carmo, L. M. Goncalves, and J. H. Correia,
"Thermoelectric Microconverter for Energy Harvesting Systems,"
Industrial Electronics, IEEE Transactions on, vol. 57, pp. 861-867.
[30]
N. S. Hudak and G. G. Amatucci, "Small-scale energy
harvesting through thermoelectric, vibration, and radiofrequency
power conversion " Journal of Applied Physics, vol. 103, pp.
101301 - 101301-24 2008.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET-2016), 8-9th April, 2016 held at Northern India Engineering College, New Delhi.

[31]
J.-c. Zheng, "Recent advances on thermoelectric
materials," Frontiers of Physics in China, vol. 3, pp. 269-279,
2008.
[32]
K. Matsubara, "Development of a high efficient
thermoelectric stack for a waste exhaust heat recovery of vehicles,"
in 21st International Conference on Thermoelectrics, 2002.
Proceedings ICT '02. , 2002, pp. 418-423.
[33]
J. Vzquez, M. A. Sanz-Bobi, R. Palacios, and A.
Arenas, "State of the art of thermoelectric generators based on heat
recovered from the exhaust gases of automobiles," in Proceedings
of the 7th European Workshop on Thermoelectrics, Pamplona,
Spain, 2002.
[34]
H. A. Sodano, G. E. Simmers, R. Dereux, and D. J.
Inman, "Recharging batteries using energy harvested from thermal
gradients," Journal of Intelligent Material Systems and Structures,
vol. 18, pp. 3-10, 2007.
[35]
E. E. Lawrence and G. J. Snyder, "A study of heat sink
performance in air and soil for use in a thermoelectric energy
harvesting device," in 21st International Conference
on Thermoelectrics, 2002. Proceedings ICT '02, 2002, pp. 446-449.
[36]
R. Morais, S. G. Matos, M. A. Fernandes, A. L. G.
Valente, S. F. S. P. Soares, P. J. S. G. Ferreira, and M. J. C. S.
Reis, "Sun, wind and water flow as energy supply for small
stationary data acquisition platforms," Computers and Electronics
in Agriculture, vol. 64, pp. 120-132, 2008.
[37]
M. A. Weimer, T. S. Paing, and R. A. Zane, "Remote
area wind energy harvesting for low-power autonomous sensors,"
in 37th IEEE Power Electronics Specialists Conference, 2006, pp.
1-5.
[38]
D. Carli, D. Brunelli, D. Bertozzi, and L. Benini, A
high-efficiency wind-flow energy harvester using micro turbine,"
in Power Electronics Electrical Drives Automation and Motion
(SPEEDAM), 2010 International Symposium on, pp. 778-783.
[39]
C. C. Federspiel and J. Chen, "Air-powered sensor," in
Sensors, 2003. Proceedings of IEEE, 2003, pp. 22-25 Vol.1.
[40]
A. Flammini, D. Marioli, E. Sardini, and M. Serpelloni,
"An autonomous sensor with energy harvesting capability for
airflow speed measurements," in Instrumentation and
Measurement Technology Conference (I2MTC), 2010 IEEE, pp.
892-897.
[41]
N. Tesla, The transmission of electric energy without
wires, in 13th Anniversary Number of the Electrical World and
Engineer, 1904.
[42]
A. Sample and J. R. Smith, Experimental results with
two wireless power transfer systems, in IEEE Radio Wireless
Symp., pp. 16-18, Jan. 2009.
[43]
D. W. Harrist, Wireless battery charging system using
radio frequency energy harvesting,M.S. thesis, Univ. Pittsburgh,
Pittsburgh, PA, 2004.
[44]
M. M. Tentzeris and Y. Kawahara, Novel energy
harvesting technologies for ICT applications, in IEEE Int. Symp.
Appl. Internet, pp. 373-376, 2008.
[45]
T. Ungan and L. M. Reindl, Harvesting low ambient
RF-sources for autonomous measurement systems, in Proc. IEEE
Int. Instrum. Meas. Technol. Conf., pp. 62-65, May 2008.
[46]
H. Javaheri and G. Noubir, iPoint: A platform-independent passive information kiosk for cell phones, in Proc.
7th IEEE SECON 2010, pp. 1-9, Jun. 2010.
[47]
D. Bouchouicha, F. Dupont, M. Latrach, and L.
Ventura, Ambient RF energy harvesting, in IEEE Int. Conf.
Renewable Energies Power Quality (ICREPQ'10), pp. 486-495,
Mar. 2010.
[48]
P2000 Series 902-928 MHz Powerharvester
Development Kit, Powercast Corp.
[49]
Wireless Identification and Sensing Platform (WISP).
[50]
Gianni Giorgetti, Alessandro Cidronali, Sandeep K.S.
Gupta, Gianfranco Manes,
Exploiting Low-Cost Directional
Antennas in 2.4 GHz IEEE 802.15.4 Wireless Sensor Networks,
European Conference on Wireless Technologies, pp:2098-2101,
2007

[51]
Catherine M. Kruesi, Rushi J. Vyas, Manos M.
Tentzeris, Design and Development of a Novel 3-D Cubic
Antenna for Wireless Sensor Networks (WSNs) and RFID
Applications, IEEE transaction on antennas and propagation
VOL. 57, NO. 10, pp: 3293-3299, OCTOBER 2009.
[52]
The Good Food European Project (FP6-IST-1-508744-IP), http://www.goodfood-project.org, 2004.
[53]
C. Santivanez and J. Redi, On the use of directional
antennas for sensor networks, IEEE MILCOM, vol.1, pp. 670
675, 2003.
[54]
D. Leang and A. Kalis, Smart sensor dvb: sensor
network development boards with smart antennas, in
International Conference on Communications, Circuits and
Systems, vol. 2, p. 1476-1480, 2004.
[55]
D'Souza, M., Bialkowski, K., Postula, A. & Ros, M.
(2007) A wireless sensor node architecture using remote power
charging, for interaction applications. In Proceedings 10th
Euromicro Conference on Digital System Design Architectures,
Methods and Tools DSD, (pp. 485-492), 2007.
[56]
Hamid Jabbar, Young. S. Song, RF Energy Harvesting
System and Circuits for Charging of Mobile Devices, IEEE
Transactions on Consumer Electronics, Vol. 56, No. 1,
FEBRUARY 2010, pp: 247-253
[57]
Hiroshi Nishimoto Yoshihiro Kawahara Tohru Asami,
Prototype Implementation of Ambient RF Energy Harvesting
Wireless Sensor Networks, IEEE SENSORS 2010 Conference, pp.
1282-1287, 2010.
[58]
Prusayon Nintanavongsa, Ufuk Muncuk, David Richard
Lewis, and Kaushik Roy Chowdhury, Design, optimization and
implementation for RF Energy Harvesting Circuits, IEEE Journal
on emerging and selected topics in circuit and systems, VOL. 2,
NO. 1, pp.24-34, MARCH 2012.
[59]
N. M. Din, C. K. Chakrabarty, A. Bin Ismail, K. K. A.
Devi and W.-Y. Chen, Design of RF EH System for energizing
low power devices, Progress In Electromagnetics Research, Vol.
132,pp. 49-69, 2012.
[60]
Guocheng Liu, Nezih Mrad, George Xiao,
Zhenzhong Li, and Dayan Ban, RF-based Power
Transmission for Wireless Sensors Nodes, International
Workshop on smart materials, structures & NDT in aerospace
Conference, Canada 2011.
[61]
H. Yan, J. G. Macias Montero, A. Akhnoukh, L. C. N.
de Vreede and J. N. Burghartz, An Integration Scheme for RF
Power Harvesting, pp: 64-66, 2011.
[62]
Alanson Sample and Joshua R. Smith, Experimental
Results with two Wireless Power Transfer Systems, IEEE
Transactions on Instrumentation and Measurement, Vol. 57, No.
11, 2008
[63]
Zahriladha Zakaria, Nur Aishah Zainuddin, Mohd Nor
Husain, Mohamad Zoinol Abidin Abd Aziz, Mohamad Ariffin
Mutalib, Abdul Rani Othman, Current Developments of RF
Energy Harvesting System for Wireless Sensor Networks,
Advances in information Sciences and Service Sciences(AISS)
Volume 5, Number 11, pp. 328-338, June 2013.
[64]
Kaibin Huang and Vincent K. N. Lau, Enabling
Wireless Power Transfer in Cellular Networks: Architecture,
Modeling and Deployment, IEEE transaction on wireless
communication, VOL. 13, NO. 2, pp: 902-912, 2014.
[65]
Ryo Shigeta, Tatsuya Sasaki, Duong Minh Quan,
Yoshihiro Kawahara, Rushi J. Vyas, Tohru Asami, Ambient-RF-Energy-Harvesting Sensor Device with Capacitor-Leakage-Aware
Duty Cycle Control, IEEE sensors journal, pp: 1-10, JULY 2013.
[66]
Bin Zhu, Jianhua Ge, Yunxia Huang, Ye Yang, and
Meilu Lin, Rank-Two Beamformed Secure Multicasting for
Wireless Information and Power Transfer, IEEE signal processing
letters, VOL. 21, NO. 2, pp. 199-203, 2014.
[67]
Triet Le, Karti Mayaram, and Terri Fiez, Efficient Far-Field Radio Frequency Energy Harvesting for Passively Powered
Sensor Networks, IEEE Journal of Solid-State Circuits, VOL. 43,
NO. 5, pp. 1247-1302, MAY 2008.


Design and Power of Flip Flops


Rishu, Kanika Sharma, Mahima Singh Choudhdary, Manuj Gupta, Kartik Kumar Attree
Dept. of Electronics and Communication Engineering, Northern India Engineering College, New Delhi, India
Email: kanika8sharma@gmail.com
Abstract: This paper presents an exhaustive study of the flip flop and its operation. One of the biggest challenges of our times is to limit the power consumed in electronic devices, which leads to longer battery life while maintaining high performance. Several different design methodologies and low power techniques are being probed with this end in mind. This paper presents the information related to the history, usage, types and uses of flip flops.

systems. However, the circuit configurations of JK flip flops are more complex and their power dissipation is higher than that of other types of flip flops. The circuit complexity and high power dissipation of JK flip-flops limit their implementation in large scale integration [3].

Keywords: Latches, flip flop power, design, low power techniques.
I. INTRODUCTION

In electronics, a flip-flop or latch is a circuit that has


two stable states and can be used to store state
information. A flip-flop is a bistable multivibrator. The
circuit can be made to change state by signals applied to
one or more control inputs and will have one or two
outputs. It is the basic storage element in sequential
logic. Flip-flops and latches are a fundamental building
block of digital electronics systems used in computers,
communications, and many other types of systems.
Flip-flops and latches are used as data storage elements.
A flip-flop stores a single bit (binary digit) of data; one
of its two states represents a "one" and the other
represents a "zero". Such data storage can be used for
storage of state, and such a circuit is described as
sequential logic. When used in a finite-state machine,
the output and next state depend not only on its current
input, but also on its current state (and hence, previous
inputs). It can also be used for counting of pulses, and
for synchronizing variably-timed input signals to some
reference timing signal.[1][2]
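The storage behaviour described above can be illustrated with a tiny behavioural model (a Python sketch invented for illustration, not a circuit description): a rising-edge D flip-flop samples its input only on the clock edge, so its output depends on the stored state rather than just the present input:

```python
class DFlipFlop:
    """Behavioural model of a rising-edge-triggered D flip-flop (one bit of state)."""
    def __init__(self, q=0):
        self.q = q            # stored bit
        self._prev_clk = 0    # previous clock level, for edge detection

    def tick(self, clk, d):
        # Capture D only on a 0 -> 1 clock transition; otherwise hold state.
        if clk == 1 and self._prev_clk == 0:
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
ff.tick(1, 1)         # rising edge: Q captures 1
out = ff.tick(1, 0)   # no edge: Q holds 1 even though D has changed
```

The second call shows the defining property of sequential logic: with no clock edge, the output reflects the previously stored state, not the current D input.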
A JK flip flop combines the characteristics of an RS, a T, and a D flip flop, and is widely applied in digital
Fig. 1 Circuit and symbol of a JK flip flop

II. REGIONS OF FLIP-FLOP OPERATION

There are three regions of flip-flop operation [4,5], of which only one region is acceptable for a sequential design to function correctly.

Special Issue: National Conference on Recent Innovations In Engineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
283

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

These regions are:

Stable region: where the setup and hold times of a flip-flop are met and the Clock-to-Q delay does not depend on the D-to-Clock delay. This is the required region of operation.

Metastable region: as the D-to-Clock delay decreases, at a certain point the Clock-to-Q delay starts to rise exponentially and ends in failure. The Clock-to-Q delay is nondeterministic, and this can cause intermittent failures and behaviors that are very difficult to debug in real circuits.

Failure region: where changes in data can no longer be transferred to the output of the flip-flop.

Fig. 2 Flip Flop regions of operation

Figure 2 illustrates the different regions of flip-flop operation. The optimal setup time noted on the graph would be the highest-performance D-to-Clock delay that accomplishes the fastest D-to-output delay. Due to the steep curve to the left of that point, not all library developers would target this value. Instead, they would prefer adding guard bands to any library cell or design to guarantee stability and reliability.
III. POWER CONSUMPTION IN LOGIC CIRCUITS

The instantaneous power [6] of any circuit is calculated as follows:

P(t) = idd(t) Vdd                                           (1)

The above equation assumes that the supply voltage is stable and constant throughout operation. The energy consumed over the time interval T is the integral of the instantaneous power:

E = ∫0^T idd(t) Vdd dt                                      (2)

The average power used over the interval is just the energy divided by the time:

Pavg = E/T = (1/T) ∫0^T idd(t) Vdd dt                       (3)

For CMOS digital circuits, equation (3) can be further expressed as:

Pavg = pt (CL V Vdd fclk) + ISC Vdd + Ileakage Vdd          (4)

The above equation consists of three terms, and hence illustrates that there are three major sources of power consumption in a digital CMOS circuit. The first term represents the switching component of power, where CL is the effective switched load capacitance, fclk is the clock frequency and pt is the probability that a power-consuming transition occurs. In most cases, the voltage swing V is the same as the supply voltage Vdd. However, in some logic design styles, such as pass-transistor logic, the voltage swing on some internal nodes may be slightly less. It is important to point out that the effect of internal glitching should be included as a component of the switching power consumption. The second term is caused by the direct-path short-circuit current ISC, which arises when both the NMOS and PMOS transistors or networks are simultaneously active (on), conducting current from the supply Vdd to ground. Finally, a factor that is growing more and more important as we develop deep submicron technologies is the leakage current Ileakage, which can arise from substrate injection, gate leakage and subthreshold effects. Ileakage is primarily determined by the CMOS fabrication process technology and is modeled based on its characterization.

We can observe from (4) that the power consumption of a circuit depends strongly on its structure and input data statistics.
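As a quick illustration of the three components of equation (4), they can be evaluated numerically. The Python sketch below uses purely hypothetical parameter values (the load capacitance, clock rate and currents are assumptions for the example, not measurements from this paper):

```python
def avg_power(pt, C_L, V, Vdd, f_clk, I_sc, I_leak):
    """Average CMOS power per eq. (4): switching + short-circuit + leakage."""
    P_switch = pt * C_L * V * Vdd * f_clk  # dynamic (switching) component
    P_short = I_sc * Vdd                   # direct-path short-circuit component
    P_leak = I_leak * Vdd                  # leakage component
    return P_switch + P_short + P_leak

# Hypothetical values: 10 fF load, full 1.2 V swing, 1 GHz clock,
# 10% transition probability, 1 uA short-circuit, 100 nA leakage.
p = avg_power(pt=0.1, C_L=10e-15, V=1.2, Vdd=1.2, f_clk=1e9,
              I_sc=1e-6, I_leak=100e-9)
print(p)  # total average power in watts
```

Note how the switching term scales with both fclk and Vdd·V, which is why supply-voltage scaling and clock gating are the primary low-power levers discussed in the next section.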

IV. TYPES OF LOW POWER FLIP FLOP TECHNIQUES

Conditional Clocking Flip Flop Techniques

Conditional Pre-charging Technique
This technique is used for controlling the internal node in the pre-charging path in a sequential element. It has been seen to reduce the system power [7]. Referring to

Fig. 3, we can see that the D input is given to the first NMOS in the PDN network (CMOS). When this input is high, the output should be high too. The clk input to the PMOS will charge the output node high when clk is low. If the D input is already high, there is no need to charge the output high again. Thus, if one can control this behaviour, there can be a power reduction in the flop. To control the internal node in the precharge path, a control switch is used, as shown in Fig. 3. Only a transition that is going to change the state of the output is allowed. As one of the inputs to flops is the clock, and considering the clock signal is the element that makes the most transitions in a system [8], a technique such as conditional precharging can significantly help reduce power.

Fig. 3 Conditional Pre Charging technique

Fig. 4 Conditional Capture technique

Conditional Capture
This technique looks to prevent any unnecessary internal node transition by looking at the input and output and checking whether there is any need to switch states. From Fig. 4, we can see there is a control signal applied to control the switching of the internal nodes. The clock is supplied to two NMOS transistors in series, and the discharge path is not complete until the control signal turns on the last NMOS. This control signal could be generated by simple circuitry whose inputs are the present output, the input, and the state of the clock (high or low). If the output of the flop is low, and a high clock pulse is applied while the input is a low pulse, then there is no need to cause a state transition. The extra computation to sample the inputs causes an increase in the setup time of the flop, which is a disadvantage of this technique [7]. A further insight into this technique is given in "High-Performance and Low-Power Conditional Discharge Flip-Flop" by Tarek K. Darwish and Magdy A. Bayoumi [9].
Data Transition Look-Ahead Flip-Flop

In Fig. 5 [10,11], the circuit shows how the data-transition look-ahead technique can be beneficial for power saving. The XNOR logical function is performed on the input of the D flip-flop and the output Q. When Q and D are equal, the output of the XNOR will be zero, thereby gating the internal clock and generating no internal clock. Referring to Fig. 5, we can see that the circuit can be broken down into three parts, namely the data-transition look-ahead, the pulse generator, and the clock generator. The pulse generator output is fed into the clock generator, which is used to clock the D flip-flop. Based on the input and output signals, if there is a need to change the state of the D flop, then the clock is allowed to switch and cause a transition; otherwise the clock is not allowed to transition. When the clock is not to make a transition, some time has already been spent computing the logic, and data from the D input may make it through the first stage of the D flop, so some power is consumed. This power consumption is still less than what an ordinary flop would have consumed with a clock transition and no change in output. As can be noticed, the pulse generator still generates a pulse at every external clock edge. This too can be controlled, and a technique that controls this part of the circuit is Clock On Demand.
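The gating decision described above can be mimicked with a small behavioural model. This Python sketch (an illustration only; the function name and input format are invented for the example, not the circuit itself) counts how many internal clock pulses the XNOR gating actually lets through:

```python
def dff_with_lookahead(data_stream, q0=0):
    """Behavioural sketch of a data-transition look-ahead D flip-flop:
    the internal clock pulses only when D differs from Q (XNOR gating)."""
    q, internal_clocks = q0, 0
    for d in data_stream:      # one entry per external clock edge
        if d != q:             # XNOR(D, Q) == 0 -> allow an internal pulse
            internal_clocks += 1
            q = d              # capture the new value
    return q, internal_clocks

q, pulses = dff_with_lookahead([0, 1, 1, 1, 0, 0, 1])
# 7 external clock edges, but only the 3 edges where D != Q
# produce an internal clock pulse (0->1, 1->0, 0->1).
```

The saving grows with input-data correlation: the more often D equals Q, the more internal clock transitions (and hence switching power) are avoided.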


Fig. 5 Data transition look ahead technique

Clock On Demand Flip Flop

Fig. 6 [10,11] shows the COD (Clock On Demand) technique. The clock generator and pulse generator are combined in this implementation. The advantage is a reduction in area and thus more energy efficiency. If the XNOR output is zero, then the pulse generator will not generate any internal signal derived from the external clock. If the output Q and input D do not match, then the pulse generator will generate an internal clock to aid the state transition and change the value of the output.

Fig. 6 Clock on demand technique

VI. APPLICATION OF FLIP FLOP

Parallel data storage
Several bits of data can be stored simultaneously in a group of flip flops. Each of four parallel data lines is connected to the D input of a flip flop. The clock inputs of all flip flops are connected to a common clock input, so that each flip flop is triggered at the same time. As positive-edge-triggered flip flops are used, the data on the D inputs are stored simultaneously by the flip flops on the positive edge of the clock. The clear inputs are connected to a common CLR line, which resets all the flip flops.

Fig. 7 4-bit parallel data input

Shift register
Shift registers are used to transfer the contents of one register to another register, or within the same register, one bit at a time.

Frequency division
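The parallel-load, shift-register and frequency-division behaviours can be sketched in a few lines of Python (a behavioural illustration only; the function names are invented for this example):

```python
def parallel_load(data):
    """Four D flip-flops on a common clock: all bits captured on one edge."""
    return list(data)

def shift_right(reg, serial_in=0):
    """One clock of a shift register: each stage passes its bit to the next."""
    return [serial_in] + reg[:-1]

def divide_by_two(clock_edges):
    """A T flip-flop toggling on every clock edge halves the clock frequency."""
    q, out = 0, []
    for _ in range(clock_edges):
        q ^= 1            # toggle on each edge
        out.append(q)
    return out

reg = parallel_load([1, 0, 1, 1])  # all four bits stored on the same edge
reg = shift_right(reg)             # one-bit right shift per clock
half = divide_by_two(8)            # output toggles at half the input rate
```

Cascading divide-by-two stages gives division by 4, 8, and so on, which is how flip-flops are used as ripple frequency dividers and counters.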

VII. CONCLUSION

This paper has provided a survey of low power consumption techniques that encourage high performance in any flip flop. The basic features of the flip flop are highlighted on the basis of timing characteristics and power consumption.


REFERENCES
[1] Pedroni, Volnei A. (2008). Digital Electronics and Design with VHDL. Morgan Kaufmann. p. 329.
[2] Latches and Flip Flops (EE 42/100 Lecture 24 from Berkeley): "...Sometimes the terms flip-flop and latch are used interchangeably..."
[3] Upwinder Kaur, Rajesh Mehra, Low Power CMOS Counter Using Clock Gated Flip-Flop, International Journal of Engineering and Advanced Technology, Vol. 2, Issue 4, pp. 796-798, 2013.
[4] R. H. Katz, Contemporary Logic Design, Benjamin/Cummings Publishing Company, Inc., 1994.
[5] J. McCluskey, Logic Design Principles, Prentice Hall, 1986.
[6] A. Sayed and H. Al-Asad, A new power high performance flip flop, Proc. International Midwest Symposium on Circuits and Systems, 2006.
[7] Nikola Nedovic, Marko Aleksic and Vojin G. Oklobdzija, Conditional Techniques for Low Power Consumption Flip-Flops.
[8] Comparison of Conditional Internal Activity Techniques for Low Power Consumption and High Performance Flip-Flops (PDF).
[9] Tarek K. Darwish and Magdy A. Bayoumi, High-Performance and Low-Power Conditional Discharge Flip-Flop.
[10] Jorge Alberto Galvis, Low-Power Flip-Flop Using Internal Clock Gating and Adaptive Body Bias.
[11] Yuan Yongyi, Investigation and implementation of data transmission look-ahead D flip flops.
[12] Dhar K., Design of a high speed, low power synchronously clocked NOR based JK flip-flop using modified GDI technique in 45nm technology, IEEE Conference on Advances in Computing, Communications and Informatics, pp. 600-606, 2014.
[13] Priyanka Sharma and Rajesh Mehra, True Single Phase Clocking Based Flip-Flop Design Using Different Foundries, International Journal of Advances in Engineering & Technology, Vol. 7, Issue 2, pp. 352-358, 2014.
[14] Suresh Kumar, Power and Area Efficient Design of Counter for Low Power VLSI System, International Conference on Computer Science and Information Technology (IJCSMC), Vol. 2, Issue 6, pp. 435-443, June 2013.
[15] J. Diamond, W. Pedrycz, D. McLeod, Fuzzy JK Flip-Flops as Computational Structures: Design and Implementation, IEEE Transactions on Circuits and Systems, Vol. 41, pp. 215-226, 1994.
[16] Tania Gupta, Rajesh Mehra, Low Power Explicit Pulsed Conditional Discharge Double Edge Triggered Flip-Flop, International Journal of Scientific & Engineering Research, Vol. 3, Issue 11, pp. 1-6, 2012.
[17] Zhao Xianghong, Guo Jiankang, Song Guanghui, An improved low power clock gating pulse triggered JK flip flop, IEEE International Conference on Information Networking and Automation, Vol. 2, pp. 489-491, 2010.
[18] Yuejun Zhang, Pengjun Wang, Design of multi-valued double-edge-triggered JK flip flop based on neuron MOS transistor, IEEE International Conference, pp. 58-61, 2009.
[19] Varun I., Gupta T. K., Ultra-low power NAND based multiplexer and flip flop, IEEE International Conference, pp. 1-5, 2013.
[20] S. Salivahanan, S. Arivazhagan (2012). Digital Circuits and Design, p. 298.


Investigation of Interaction of Three Solitary Waves in Optical Fiber
1Rajeev Sharma, 2Surender Kumar
1TIT&S, Bhiwani
2Department of Electronics and Communication, Northern India Engineering College, New Delhi, India
E-mail: rajeevsharma78@yahoo.com, suren001@gmail.com

Abstract: Optical solitons are well thought of as natural bits for telecommunications, as they have the inclination to preserve their shape over transoceanic distances. In order to achieve stability, there should be a balance between the group velocity dispersion (GVD), which causes pulse broadening, and the self-phase modulation (SPM), which causes spectrum broadening. The time interval TB between two adjacent bits or pulses determines the bit rate of a communication system as B = 1/TB. It is thus imperative to study how close two solitons can come without affecting each other. In this paper, soliton collisions in nonlinear optical fibers are investigated.
Keywords: SPM, GVD, SOLITON
between co-propagating soliton pulses. In this work, various simulation experiments have been performed using Matlab to examine the interaction involving two neighboring solitons of equal and unequal amplitude by varying the phase [8-9] between them. The interaction between neighboring solitons under varying spacing and amplitude has also been investigated.
II. MATHEMATICAL MODELLING FOR SOLITON INTERACTION

The equation which governs solitons is the nonlinear Schrodinger equation (NLSE) for an optical pulse with the field envelope u(z,t) propagating in the optical fiber with no loss and no higher order dispersion, given in [3,4,5] as

I. INTRODUCTION

The increased bandwidth demand has attracted the attention of researchers to discover new avenues to streamline the suffocated bits in the bandwidth pipeline. Fiber losses, dispersion and fiber nonlinearities are the limiting factors of optical communication system design. Higher bit rates require the use of short pulses, which have inherent nonlinearities. Ultimately, optical solitons [1-2] will be the eventual candidate, using nonlinear self-phase modulation to counteract the group velocity dispersion (GVD). For a precise system, the fiber nonlinearities can be balanced by GVD, whereas fiber losses can be compensated by periodic or distributed amplification. However, increased channel capacity requires the pulses to be closely spaced. It is therefore necessary to investigate the interaction [3-7]

i ∂u/∂z − (β2/2) ∂²u/∂t² + |u|² u = 0        (1)

where β2 is the second-order dispersion parameter. By solving the equation numerically with an input amplitude consisting of a soliton pair [1], we can find the outcome of the interaction on solitons. The solutions of the NLSE [12-14] allow investigation of the different amplitudes and phases associated with the interaction between three solitons by using the following form at the input end of the fiber:
u(0,τ) = sech(τ) + sech(τ − q0) + r sech(τ + q0) exp(iθ)        (2)

where q0 is the center of the pulse, r is the relative amplitude, and θ is the relative phase between neighboring solitons.
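The numerical solution referred to above is typically obtained with the split-step Fourier method. The Python/NumPy sketch below (the grid sizes, step counts and the normalized form i u_z + (1/2) u_tt + |u|² u = 0 are assumptions for illustration, not the authors' Matlab code) propagates the three-soliton input of equation (2) and checks that the pulse energy is conserved:

```python
import numpy as np

N, T = 1024, 40.0                                  # grid points, half-window
t = np.linspace(-T, T, N, endpoint=False)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)            # angular frequency grid

q0, r, theta = 3.5, 1.0, 0.0                       # spacing, amplitude, phase
u = (1 / np.cosh(t) + 1 / np.cosh(t - q0)
     + r / np.cosh(t + q0) * np.exp(1j * theta))   # eq. (2) input field

power_in = float(np.sum(np.abs(u) ** 2) * dt)

dz, steps = 0.01, 500
for _ in range(steps):
    # linear (dispersion) step of the NLSE, applied in the Fourier domain
    u = np.fft.ifft(np.exp(-0.5j * w ** 2 * dz) * np.fft.fft(u))
    # nonlinear (self-phase modulation) step, applied in the time domain
    u = u * np.exp(1j * np.abs(u) ** 2 * dz)

power_out = float(np.sum(np.abs(u) ** 2) * dt)     # conserved by both steps
```

Plotting |u|² against z for different q0, r and θ reproduces the kind of attraction/collision plots shown in the figures below; both substeps are norm-preserving, so the total pulse energy is conserved to machine precision.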


III. SIMULATION RESULT

The effect of varying the phase, amplitude and spacing between neighboring solitons has been studied with Matlab using the input form of equation (2); the results are:

Fig. 1 r = 1, θ = 0

The periodic collision of neighboring solitons is undesirable from a practical point of view. One way to avoid collisions is to increase the soliton separation, but this has a great impact on bandwidth; another way is to change the phase and amplitude of the neighboring solitons.
IV. CONCLUSION
In this paper, we have revealed through simulation two important phenomena: first, that solitons are stable pulses and maintain their shapes even after interacting with each other. Due to this stability, they are resistant to fiber losses and are becoming the basis for very high speed optical networks. Second, in order to avoid interaction, the choice of phase, amplitude and spacing should be taken into consideration, since interaction decreases the efficiency of soliton transmission.
REFERENCES

Fig. 2 r = 1.1, θ = 0

When the input pulse is of the form u(0,τ) = sech(τ) + r sech(τ − q0) + r sech(τ + q0), the simulation result shows:


[1] HASEGAWA, A., and TAPPERT, F.: 'Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers', Appl. Phys. Lett., 1973, 23, pp. 142-144.
[2] HASEGAWA, A., and KODAMA, Y.: 'Signal transmission by optical solitons in monomode fibre', Proc. IEEE, 1981, 63, (9), pp. 1145-1150.
[3] CHU, P.L., and DESEM, C.: 'Optical fibre communication using solitons', 4th International Conference on Integrated Optics and Optical Fiber Communication, IOOC'83, Tokyo, Japan, June 1983, Tech. Dig., 27-30, pp. 52-53.
[4] KODAMA, Y., and HASEGAWA, A.: 'Amplification and reshaping of optical solitons in glass fiber—II', Opt. Lett., 1982, 7, pp. 339-341.
[5] BLOW, K.J., and DORAN, N.J.: 'Bandwidth limits of nonlinear (soliton) optical communication systems', Electron. Lett., 1983, 19, (11), pp. 429-430.
[6] GORDON, J.P.: 'Interaction forces among solitons', Opt. Lett., 1983, 8, (11), pp. 596-598.
[7] KARPMAN, V.I., and SOLOV'EV, V.V.: 'A perturbational approach to the two-soliton systems', Physica D, 1981, 3, pp. 487-502.
[8] HERMANSSON, B., and YEVICK, D.: 'Numerical investigation of soliton interaction', Electron. Lett., 1983, 19, (15), pp. 570-571.
[9] SHIOJIRI, E., and FUJII, Y.: 'Transmission capability of an optical fibre communication system using index nonlinearity', Appl. Opt., 1985, 24, (3), pp. 358-360.

Fig. 3 r =1.1, = 0


Estimation Techniques of Path Loss

1Charu, 2Jyoti Sehgal, 3Meenu Manchanda
1M.Tech Scholar, V.C.E. Rohtak, Electronics & Communication, Haryana
2Assistant Professor, V.C.E. Rohtak, Electronics & Communication, Haryana
3Associate Professor, V.C.E. Rohtak, Electronics & Communication, Haryana
E-mail: ccharudua@gmail.com, legendjyoti@gmail.com, Meenumanchanda73@gmail.com


Abstract— In this paper, various propagation models of path loss are discussed and compared. In urban areas, the propagation loss depends on the height of the antenna and the distance between the transmitting and receiving stations. Propagation models are used in network planning for conducting feasibility studies and performing interference studies.

Keywords:— Path Loss, ECC-33 model, Walfisch-Ikegami model, Lee's model

I. INTRODUCTION

Wireless communication is the fastest growing segment of the communications industry. The transmission path between transmitter and receiver can change from line of sight to one obstructed by buildings, mountains, etc. The mechanisms that govern radio propagation are complex and diverse, and can be classified into the four basic propagation mechanisms shown in Fig. 1: reflection, diffraction, refraction and scattering.

The rest of the paper is organized as follows: Section II gives details of the propagation mechanisms (refraction, reflection, diffraction and scattering). Section III gives details of path loss models: the free space path loss model, the Hata-Okumura model, the extended Hata-Okumura (ECC-33) model, the Walfisch-Ikegami model and Lee's model. Section IV shows the graphical results for all these models. Finally, Section V gives the conclusion.

II. PROPAGATION MECHANISMS

Propagation characteristics play an important role in implementing designs of wireless communication systems, which provide two types of parameters, corresponding to the large-scale path loss and the small-scale fading statistics. A single channel model cannot describe radio propagation between transmitter and receiver, so various models are required for a variety of environments to enable system design.

Fig. 1. Propagation Mechanisms (reflection, refraction, diffraction, scattering)

Refraction [4]
Radio waves can be refracted in the same way as light waves: the direction of an electromagnetic wave changes as it moves from a region of one refractive index to another. The angle of incidence and the angle of refraction are linked by Snell's law, which states:

n1 sin(θ1) = n2 sin(θ2)

When radio signals move from a region with one refractive index to a region with another through a comparatively gradual change, the direction of the signal bends rather than undergoing an abrupt change in direction.
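Snell's law is easy to check numerically. A minimal sketch (the function name and the sample values are illustrative, not from the paper):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Angle of refraction (degrees) from Snell's law n1 sin(θ1) = n2 sin(θ2).
    Returns None when total internal reflection occurs (no refracted ray)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# A wave entering a denser medium bends towards the normal:
theta2 = refraction_angle(1.0, 1.5, 30.0)  # about 19.5 degrees
```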

Special Issue: National Conference on Recent Innovations In En gineering & Technology


(NCRIET- 2016), 8- 9th April, 2016 held at Northern India Engineering College, New Delhi.
Available online at:www.gtia.co.in
290

International Journal of Innovations in Engineering and Management, Vol. 5; No. 1: ISSN: 2319-3344 (Jan-June 2016)

Reflection [4]
Reflection occurs when a propagating electromagnetic wave impinges on an object with dimensions large compared to the wavelength of the propagating wave. Reflected waves can be produced by the surface of the earth, the walls of buildings, ceilings and floors. During reflection the ray is attenuated by a factor which depends on the angle of incidence and the properties of the medium.

Diffraction [4]
Diffraction is an important phenomenon in microcellular regions (outdoors) where line-of-sight signal transmission is not possible. It occurs when the propagation path between the transmitter and receiver is obstructed by the edges of buildings, walls and large objects, which act as sources of secondary waves. These secondary waves give rise to bending of the waves around the obstacles. Diffraction also explains how radio frequency (RF) energy can travel to regions which are not in the LOS of the transmitter. This phenomenon is also called shadowing.
Scattering [4]
Scattering follows the same principles as diffraction: energy from a transmitter is reradiated in many different directions. Scattering occurs when the dimensions of objects are of the order of the wavelength or less. It is produced by irregular objects such as rough surfaces, vehicles, foliage and lamp posts, and it results in reduced power levels.

III. PATH LOSS (PL) MODELS

A path loss model relates the loss of signal strength to the distance between two terminals. Using a path loss model, the received signal level can be estimated and the SNR can be predicted for mobile communication systems; the coverage area of wireless base stations and access points can also be calculated [2]. Radio propagation between transmitter and receiver cannot be explained using a single channel model, so various models are required for different environments to enable system design. A path loss model also estimates the area covered by a wireless base station or access point, and the maximum distance between terminals in ad hoc networks.

Free Space Path Loss Model (FSPL)

Free space path loss is defined as the signal strength lost during propagation from transmitter to receiver. FSPL is proportional to the square of the distance between transmitter and receiver and to the square of the frequency of the radio signal. For typical radio applications, it is common to measure fc in MHz and d in km, in which case the FSPL equation becomes

PLfs(dB) = -Gte - Gre + 32.44 + 20 log d + 20 log fc    (1)

where
Gte is the transmitting antenna gain in dB,
Gre is the receiving antenna gain in dB,
d is the T-R separation in km,
fc is the frequency in MHz.

Outdoor Propagation Models

Path loss can be estimated using the profile of a particular area, varying from a curved-earth profile to a mountainous profile including various obstacles. Various models are available to predict path loss over irregular terrain; they predict the strength of the signal in the receiving area. Cells are typically classified roughly according to size as macrocells and microcells. Earlier planning was mainly based on empirical formulas with an experimental background. Empirical models are sets of equations derived from extensive field measurements and are simple and efficient to use. The input parameters for the empirical models are usually qualitative and not very specific, e.g., a dense urban area or a rural area.
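Equation (1) translates directly into a short function. A sketch (parameter names are illustrative):

```python
import math

def fspl_db(d_km, f_mhz, g_te_db=0.0, g_re_db=0.0):
    """Free space path loss of equation (1):
    PLfs = -Gte - Gre + 32.44 + 20 log10(d) + 20 log10(fc),
    with d in km and fc in MHz."""
    return (-g_te_db - g_re_db + 32.44
            + 20.0 * math.log10(d_km) + 20.0 * math.log10(f_mhz))

# Doubling the distance (or the frequency) adds about 6 dB of loss:
loss_1km = fspl_db(1.0, 2400.0)
loss_2km = fspl_db(2.0, 2400.0)
```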


The drawback of empirical models is that an empirical model for macrocells cannot be used for indoor picocells.

HATA-OKUMURA MODEL [5]

The Hata model is an empirical formulation of the graphical path-loss data provided by Okumura and is valid over roughly the same range of frequencies, 150-1500 MHz. This empirical formula simplifies the calculation of path loss because it is a closed-form formula, not based on empirical curves for the different parameters. The Okumura-Hata model is the combination of both models. The standard formula for the empirical path loss [dB] in urban areas under the Okumura-Hata model is given by [5]:

L50(urban)(dB) = 69.55 + 26.16 log fc - 13.82 log hte - a(hre) + (44.9 - 6.55 log hte) log d    (2)

where
L50 is the 50th percentile value of the propagation path loss,
fc is the frequency, from 150 MHz to 1500 MHz,
hte is the effective transmitter antenna height (30 m to 200 m),
hre is the effective receiver antenna height (1 m to 10 m),
d is the T-R separation distance (km),
a(hre) is the correction factor for the effective mobile antenna height (a function of the size of the coverage area).

For small to medium size cities the mobile antenna correction factor is given by

a(hre) = (1.1 log fc - 0.7) hre - (1.56 log fc - 0.8) dB

This model is well suited for large-cell mobile systems, but not for cells of the order of 1 km radius, such as personal communication systems. This disadvantage is compensated by the extended Hata model.

HATA-OKUMURA EXTENDED MODEL OR ECC-33 MODEL [4]

The ECC-33 path loss model was developed from the original measurements by Okumura, with modified assumptions so that it more closely represents a fixed wireless access (FWA) system [4]. The most extensively used empirical propagation model is the Hata-Okumura model, which is a well-established model for the Ultra High Frequency (UHF) band. The original Okumura model does not provide any data above 3 GHz. Based on prior knowledge of the Okumura model, an extrapolation method is applied to predict the model for frequencies greater than 3 GHz. The resulting propagation model is referred to as the ECC-33 model. In this model the path loss is given by

L50(dB) = Amu + Abm - G(hte) - G(hre)

where
Amu is the free space attenuation [3],
Abm is the basic median path loss,
G(hte) is the BS height gain factor,
G(hre) is the received antenna height gain factor:

Amu = 92.4 + 20 log d + 20 log fc
Abm = 20.41 + 9.83 log d + 7.894 log fc + 9.56 [log fc]^2
G(hte) = log(hte/200) {13.958 + 5.8 [log d]^2}
G(hre) = [42.57 + 13.7 log fc] [log(hre) - 0.585]

where fc is the frequency in GHz and d is the distance in km.
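The urban Okumura-Hata formula of equation (2), with the small/medium-city correction factor a(hre), can be sketched as follows (parameter values in the example are illustrative only):

```python
import math

def hata_urban_db(f_mhz, hte_m, hre_m, d_km):
    """Median urban path loss of equation (2), using the
    small/medium-city mobile-antenna correction factor a(hre)."""
    a_hre = ((1.1 * math.log10(f_mhz) - 0.7) * hre_m
             - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(hte_m) - a_hre
            + (44.9 - 6.55 * math.log10(hte_m)) * math.log10(d_km))

# Loss grows with distance and shrinks with base-station height:
loss = hata_urban_db(f_mhz=900.0, hte_m=50.0, hre_m=1.5, d_km=5.0)
```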
THE WALFISCH-IKEGAMI MODEL

This model is advantageous in dense urban environments. It is related to various urban parameters such as building density, street width and average building height. An average building height greater than the antenna height helps in guiding signals along the street, simulating an urban canyon type of environment. The model considers rooftop and building-height impacts, with the help of diffraction, in order to find the average signal strength at street level.

The path loss is

WPL = PS Q2 P1

where
PS is the free space path loss between isotropic antennas, and
Q2 is the reduction in the rooftop signal because of the row of buildings which immediately shadow the receiver at street level.

For line-of-sight propagation the path loss is given by

WPL(LOS) = 42.6 + 20 log(fc) + 26 log(d)    (3)

where fc is the frequency and d is the distance of the mobile from the base station.

THE LEE'S MODEL [6]

This model is applied on flat terrain, being based on empirical data, but cannot be applied to non-flat terrain due to the occurrence of various errors. It is also called the North American model. With it the propagation path loss can be found in a general mobile radio environment, such as a suburban area. The standard path loss formula is given by:

PL(dB) = 107.7 + 38.4 log(d/1600) - 20 log(hte/30) - 10 log(hre/3) - Gte - Gre    (4)

where
hte is the effective transmitter/base station antenna height (in meters),
hre is the effective receiver/mobile antenna height (in meters),
d is the transmitter-receiver separation distance (in meters) from the base station,
Gte is the base station (transmitter) antenna gain (in dB),
Gre is the mobile terminal (receiver) antenna gain (in dB).

IV. RESULTS

The following parameter values are used: fc = 1500 MHz, d = 2 km, d0 = 30 m, Gte = 6 dB/dipole, Gre = 0 dB/dipole, hte = 200 m, hre = 5 m.

Using equation (2), the path loss for the Hata-Okumura model is computed and shown in Fig. 2.

Fig. 2. Path loss for the Hata-Okumura model

The path loss for the Walfisch model is computed using equation (3) and is shown in Fig. 3.

Fig. 3. Path loss for the Walfisch model

The path loss for the Lee model is computed using equation (4) and is shown in Fig. 4.

Fig. 4. Path loss for the Lee model

The path loss for the free space model is computed using equation (1) and is shown in Fig. 5.

Fig. 5. Path loss for the Free Space model

The comparison of path loss for the various models, computed with the above data, is shown in Fig. 6.

Fig. 6. Comparison of path loss for the various models (ECC model, Hata-Okumura model, Walfisch model, Lee model, free space path loss model)

V. CONCLUSION

Path loss is the power reduction of an electromagnetic wave as it propagates through space. It is important in the analysis and design of a communication system. It depends on frequency, antenna height, receive terminal location relative to obstacles and reflectors, and distance, among other factors. Various empirical path loss models have been determined for macrocells. Out of the several propagation models, those discussed above are the most significant ones, providing the foundation of mobile communication services. These prediction models are based on experimental data and statistical analysis, which enable us to compute the received signal level for a given propagation medium. It is concluded that ECC-33 gives the best results in urban areas, while ECC-33 and COST-231 give better results in suburban areas. The HATA model is better for determining path loss in rural areas, whereas the OKUMURA model shows better results in urban and suburban areas.

REFERENCES

[1] Electronic Communication Committee (ECC) within the European Conference of Postal and Telecommunications Administrations (CEPT), 'The analysis of the coexistence of FWA cells in the 3.4-3.8 GHz band', tech. rep., ECC Report 33, May 2003.
[2] COST Action 231, 'Digital mobile radio towards future generation systems', final report, tech. rep., European Communities, EUR 18957, 1999.
[3] R. K. Crane, 'Prediction of attenuation by rain', IEEE Transactions on Communications, vol. COM-28, pp. 1727-1732, September 1980.
[4] T. S. Rappaport, Wireless Communications: Principles & Practice, Upper Saddle River, NJ, Prentice Hall PTR, 1996.
[5] M. Hata, 'Empirical Formula for Propagation Loss in Land Mobile Radio Services', IEEE Transactions on Vehicular Technology, VT-29, 3, 1980, pp. 317-325.
[6] W. C. Y. Lee, Mobile Cellular Telecommunications Systems, New York, McGraw Hill, 1989.
[7] Y. Okumura, 'Field strength and its variability in VHF and UHF land-mobile radio service', Review of the Electrical Communication Laboratory, vol. 16, September-October 1968.
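The Lee model of equation (4), evaluated with the parameter set used in Section IV, can be sketched as below. Note that the minus sign before the 10 log(hre/3) term follows the standard form of the Lee model; it is not legible in the scanned original.

```python
import math

def lee_db(d_m, hte_m, hre_m, g_te_db, g_re_db):
    """Lee model path loss of equation (4); d is in meters."""
    return (107.7 + 38.4 * math.log10(d_m / 1600.0)
            - 20.0 * math.log10(hte_m / 30.0)
            - 10.0 * math.log10(hre_m / 3.0)
            - g_te_db - g_re_db)

# Parameter set from Section IV: hte = 200 m, hre = 5 m,
# Gte = 6 dB, Gre = 0 dB, distances from 200 m to 2000 m.
losses = [lee_db(d, 200.0, 5.0, 6.0, 0.0) for d in range(200, 2001, 200)]
```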

Implementation of Image Compression using Discrete Cosine Transform

Varun Jain, Sapna Aggarwal, Ankur Chatturvedi
Assistant Professors, Dept. of Electronics and Communication, Northern India Engineering College, New Delhi (India)
Email: varunjain1202@gmail.com, sapnaggarwal18@gmail.com, ankurchaturvedi.ec05@gmail.com
Abstract— Compression of digital images is needed in various applications, such as transmission of images over communication channels of widely varying bandwidths, and display at different resolutions depending on the resolution of the display device. In this work we propose some modifications to the compression algorithms given by Jayanta Mukherjee and Sanjit K. Mitra. We have also extended this approach to different image formats such as .bmp, .tiff and .png. Our proposed modifications perform compression according to need, which we can control manually.

Keywords— Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), Joint Photographic Experts Group (JPEG), Direct Quantization Matrix (DQM), Peak Signal to Noise Ratio (PSNR), Bits Per Pixel (bpp) and Compression Ratio (CR).

I. INTRODUCTION

Spatial scalability of an image representation is required in various applications, such as transmission, storage, retrieval and display of digital images. For example, the same image may be transmitted at different spatial resolutions over channels with varying bandwidths [1]. In internet applications also, when browsing a remote image database, downsampled images may be sent initially, and images of larger size are sent later depending on the interest and request of the client. For efficient storage, images are usually represented in the transform domain as compressed data. As the DCT-based JPEG standard is widely used for image compression, a number of approaches have been advanced to compress images in the DCT space. The principle behind the algorithms developed by Mukherjee and Mitra is similar to subband DCT computation. In this work, we propose a modification to their algorithms: we use a DQM and multiply this quantization matrix with the compressed 8 x 8 block of the gray image. We have also studied the performance of their scheme along with ours at varying compression rates for typical images and different thresholds [3].

II. SUBBAND DCT COMPUTATION

We present here briefly the technique for computing the DCTs of a signal from its subbands. Let x(n), n = 0, 1, 2, ..., N-1 be an N-point data sequence with N even. Let the sequence x(n) be decomposed into subsequences xL(n) and xH(n) of length N/2 each as follows:

xL(n) = {x(2n) + x(2n + 1)} / 2    (1)

xH(n) = {x(2n) - x(2n + 1)} / 2,  n = 0, 1, ..., N/2 - 1    (2)

Subband computation of the DCT of x(n) is performed using the DCT and DST of xL(n) and xH(n), respectively. The DCT is defined as [1]

C(k) = α(k) Σ x(n) cos[(2n + 1)kπ / 2N],  k = 0, 1, ..., N-1    (3)

Similarly, the DST of x(n) is defined as

S(k) = β(k) Σ x(n) sin[(2n + 1)kπ / 2N],  k = 1, 2, ..., N    (4)

where the sums run over n = 0 to N-1 and α(k), β(k) are the usual normalization factors.
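The sum/difference decomposition of equations (1)-(2) is two lines of NumPy. A sketch, assuming the conventional averaging factor of 1/2 (the factor is not legible in the scanned original):

```python
import numpy as np

def split_subbands(x):
    """Decompose an even-length sequence into the subsequences of
    equations (1)-(2): xL(n) = (x(2n) + x(2n+1))/2 and
    xH(n) = (x(2n) - x(2n+1))/2."""
    x = np.asarray(x, dtype=float)
    xl = (x[0::2] + x[1::2]) / 2.0  # low band: pairwise averages
    xh = (x[0::2] - x[1::2]) / 2.0  # high band: pairwise half-differences
    return xl, xh

x = np.arange(8.0)
xl, xh = split_subbands(x)
# The decomposition is invertible: x(2n) = xL + xH and x(2n+1) = xL - xH.
```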


Let CL(k) be the N/2-point DCT of xL(n) and SH(k) be the N/2-point DST of xH(n). Then the DCT of x(n) can be computed from the CL(k)s and SH(k)s as follows:

C(k) = cos(kπ/2N) CL(k) + sin(kπ/2N) SH(k),  0 ≤ k ≤ N-1    (5)

(with CL(k) and SH(k) suitably extended beyond N/2, as in [1]).

III. ALGORITHM USED FOR IMAGE COMPRESSION USING DCT

In this section, we describe the algorithm that we used. In this algorithm, the image is represented by the DCT-based JPEG standard: each 8 x 8 block of the image in the spatial domain has its DCT coefficients encoded as an 8 x 8 block in the compressed domain [2]. Q is the quantization matrix used in the following algorithm:

Q = | 16  11  10  16  24  40  51  61 |
    | 12  12  14  19  26  58  60  55 |
    | 14  13  16  24  40  57  69  56 |
    | 14  17  22  29  51  87  80  62 |
    | 18  22  37  56  68 109 103  77 |
    | 24  35  55  64  81 104 113  92 |
    | 49  64  78  87 103 121 120 101 |
    | 72  92  95  98 112 100 103  99 |

Algorithm for Image Compression using DCT DQM:
Input: acquisition of a gray scale image; if the input is a color image, first convert it to a gray image.
A = dctmtx(8);
1) Apply 8 x 8 block-based DCT encoding:
   a) K = A * (8 x 8 block) * A'
   b) K = K./Q
   c) Set a threshold; coefficients whose values are less than the threshold are made zero.
2) Reconstruct the image:
   a) Multiply Q with the compressed 8 x 8 block previously obtained.
   b) Apply K = A' * (result of step 2-a) * A
   c) Arrange back into the block of the full image.

IV. EXPERIMENT AT DIFFERENT COMPRESSION LEVELS

Figure 1. Compressed image of 'lena.jpg' of size 256 x 256 at threshold value of 0.01

Figure 2. Compressed image of 'lena.jpg' of size 256 x 256 at threshold value of 0.02

Figure 3. Compressed image of 'cameraman.bmp' of size 256 x 256 at threshold value of 0.01

Figure 4. Compressed image of 'cameraman.bmp' of size 256 x 256 at threshold value of 0.02
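The algorithm above can be sketched in NumPy. The `dctmtx` helper is re-implemented here since it is a MATLAB function, and the transposes follow the standard orthonormal-DCT convention (the original listing omits them):

```python
import numpy as np

# Standard JPEG luminance quantization matrix Q used by the algorithm.
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def dctmtx(n=8):
    """Orthonormal DCT-II matrix, equivalent to MATLAB's dctmtx(n)."""
    k = np.arange(n).reshape(-1, 1)
    j = np.arange(n).reshape(1, -1)
    a = np.sqrt(2.0 / n) * np.cos((2 * j + 1) * k * np.pi / (2 * n))
    a[0, :] = np.sqrt(1.0 / n)
    return a

def compress_block(block, threshold):
    """Steps 1a-1c: 2-D DCT, divide by Q, zero small coefficients."""
    A = dctmtx(8)
    k = (A @ block @ A.T) / Q
    k[np.abs(k) < threshold] = 0.0
    return k

def reconstruct_block(k):
    """Steps 2a-2c: multiply back by Q, then apply the inverse 2-D DCT."""
    A = dctmtx(8)
    return A.T @ (k * Q) @ A
```

With a zero threshold the round trip is exact; raising the threshold discards small DCT coefficients and trades PSNR for compression, as Table I shows.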


Figure 5. Compressed image of 'peppers.png' of size 256 x 256 at threshold value of 0.01

Figure 6. Compressed image of 'baboon.tif' of size 256 x 256 at threshold value of 0.01

The objective quality is evaluated using the Peak SNR (PSNR), which is defined as [2]

PSNR = 10 log10 (Peak Signal Value^2 / Mean Square Error)

where
Mean Square Error = (1/N^2) Σ (x_i,j - y_i,j)^2, summed over all pixels (i, j),
Peak Signal Value = 255 for an 8 bit per pixel image,
x_i,j, y_i,j are the values of pixel (i, j) in the original and the reconstructed images, respectively,
N^2 is the number of pixels in the image.

TABLE I. OUTPUT OF THE DCT DQM ALGORITHM USING LENA.JPG

Threshold | PSNR (dB) | bpp  | CR
0.01      | 28.4934   | 0.66 | 12.12
0.02      | 28.4929   | 0.47 | 17.11
0.03      | 28.4925   | 0.38 | 21.17
0.04      | 28.4921   | 0.33 | 24.53
0.05      | 28.4917   | 0.29 | 27.19
0.06      | 28.4913   | 0.27 | 29.53
0.07      | 28.4910   | 0.25 | 31.45
0.08      | 28.4907   | 0.24 | 33.03

V. CONCLUSION

We have evaluated the performance of image compression using the DCT DQM algorithm. The approach works for other stored image formats as well, and compresses images according to our need, or as per the transmission bandwidth available.

REFERENCES

[1] Jayanta Mukherjee and Sanjit K. Mitra, 'Image Resizing in the Compressed Domain Using Subband DCT', IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 7, July 2002.
[2] Rakesh Dugad and Narendra Ahuja, 'A Fast Scheme for Image Size Change in the Compressed Domain', IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 4, April 2001.
[3] Tamal Bose, Digital Signal and Image Processing, Asia: Wiley, 2004.
[4] Gonzalez and Woods, Digital Image Processing, Prentice Hall India, 2010 edition.
[5] https://en.wikipedia.org/wiki/Discrete_cosine_transform
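The PSNR definition used for Table I is a one-liner in NumPy; a sketch:

```python
import numpy as np

def psnr_db(original, reconstructed, peak=255.0):
    """PSNR = 10 log10(peak^2 / MSE), where
    MSE = (1/N^2) * sum of (x_ij - y_ij)^2 over all pixels."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images a uniform error of one gray level (MSE = 1) gives 20 log10(255) ≈ 48.13 dB, a handy sanity check.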


Power Scenario of India and Technological Trends

Srishti*, Shalini Shukla*, Vishal*, Amruta Pattnaik**
*Department of EEE, Northern India Engineering College, Shastri Park, New Delhi, India
**Assistant Professor (EEE), Northern India Engineering College, Shastri Park, New Delhi, India
Email: srishtisingh700@gmail.com, shalinishukla808@gmail.com, vishal.mirg93@gmail.com

Abstract— This paper presents the need for resources which are renewable and eco-friendly to meet the continuous growth in global population and economy, considering the fact that fossil fuels cannot be relied upon. A global revolution is needed in the way energy is generated, supplied and used. Carbon footprint reduction is a current global concern. The paper gives an idea of how much India and some developed countries have succeeded in harnessing renewable energy. Technological trends and future prospects are discussed, along with the present scenario and future prospects of renewable energy resources.

Keywords— Renewable and Non-Renewable sources, efficiency, Smart Grid, National Solar Mission, sustainable development.

I. INTRODUCTION

"Our goal is to fundamentally change the way the world uses energy. We want to change the entire energy infrastructure of the world to zero carbon." - Elon Musk

The world is now facing serious challenges in energy. The global economy is set to grow fourfold in the next 40 years, which promises economic benefits and huge improvements in people's standard of living. Energy development, industrial development and economic development are interlinked. Economic development is always associated with an increase in demand for and consumption of energy. Fossil fuel is not a permanent and sufficient solution to the present and increasing global demand. We need to switch to more environment-friendly and more reliable sources to meet our increasing demand. The answer to these problems lies in the use of renewable resources, which are present in abundance in India due to its location.

India's electrical network has been divided into five regions: Northern, Eastern, Western, North-Eastern and Southern. India's energy mix comprises both non-renewable (coal, lignite, petroleum and natural gas) and renewable energy resources (wind, solar, small hydro, biomass, bagasse cogeneration, etc.).

II. PRESENT SCENARIO

The present scenario of Indian energy sources (as on 31.08.2013) is mainly divided into four sources: thermal, hydro, nuclear and renewable. Amongst these, thermal accounts for the greatest portion, which shows India's overdependence on coal-based energy.

Figure 1: Total installed capacity in India, 31.01.2016 [1] (coal 61%, hydro 15%, RES/MNRE 14%, gas 9%, nuclear 2%, oil 1%)

In India, the Tehri Hydroelectric Power plant has the largest capacity, at 2.4 GW [6]. Considering the growth in energy requirement and the widening gap, several authors have discussed various issues and solutions in terms of renewable energies, improvement of technologies and optimized transmission.
Table 1: Power supply position [1]

Year     | Requirement (MU) | Availability (MU) | Surplus(+)/Deficit(-) (MU) | %
2009-10  | 830,594          | 746,644           | -83,950                    | -10.1
2010-11  | 861,591          | 788,355           | -73,236                    | -8.5
2011-12  | 937,199          | 857,866           | -79,313                    | -8.5
2012-13  | 995,557          | 908,652           | -86,905                    | -8.7
2013-14  | 1,002,257        | 959,829           | -42,428                    | -4.2
2014-15  | 1,068,923        | 1,030,785         | -38,138                    | -3.6
2015-16  | 837,958          | 819,225           | -18,733                    | -2.2
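The table's surplus/deficit columns can be sanity-checked with a few lines: the deficit is availability minus requirement, and the percentage column is that deficit relative to the requirement (values below are copied from two rows of Table 1):

```python
# Requirement and availability in MU for two sample years of Table 1.
rows = {
    "2009-10": (830_594, 746_644),
    "2014-15": (1_068_923, 1_030_785),
}

def deficit_mu(requirement, availability):
    """Surplus (positive) or deficit (negative) in MU."""
    return availability - requirement

def deficit_pct(requirement, availability):
    """Deficit as a percentage of the requirement."""
    return 100.0 * (availability - requirement) / requirement
```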

The power generation during March 2015 was 861,000.64 MU. The all-India per capita consumption in the year 2014-15 was 1010 kWh (provisional), against a target of 1000 kWh [2].
The following graph shows a clear picture of the exponential increase in the demand for energy.

Figure 2: Energy required and demand met [3]

Electricity consumption is a sign of economic development; the greater the consumption, the better developed the economy. With the increase in consumption there is an increase in the demand for power. India's energy use has increased 16 times and its installed electricity capacity 84 times. India's energy use was the fourth highest in the world in 2014. For the accomplishment of this power goal, more power plants and new reliable sources have been studied. Comparing India's power scenario with those of Japan and Switzerland helps us understand the important steps we need to take to match our requirements and future goals.

Japan

Japan is one of the leading economies in the world. With the proper utilization of every possible source available to them, they are leading a green energy program all over the country to promote the use of renewable energy.

Figure 3: Power generation of Japan (oil 42%, coal 22%, natural gas 22%, nuclear 8%, hydro 4%, other renewables 2%)

Japan uses oil as its major generation source, contributing 42% of total generation, followed by coal and natural gas at 22% each.

Switzerland

Switzerland relies mainly on hydroelectricity. Although Switzerland has almost negligible fossil resources, it generates most of its power from renewable sources and surpasses its requirement. The main source of power generation is hydro, followed by nuclear. It is among the most eco-friendly nations.


5%

1%

1%
Hydro
Nuclear

39%

Fossil
54%

Solar
Wind

Space Solar Power


Solar power is in abundance in nature. We can take
wireless power transmission to a new scale by
resonating electromagnetic waves in combination
with space solar cells. Solar satellites in geostationary
orbits would be illuminated 99% of the time by solar
rays. An overview of system design:
i. Solar panel
ii. Spacetenna(antenna on satellite)
iii. Rectenna(rectifying antenna)
iv. Orbit selection

Figure 4: Switzerland power production

From the efficiency chart shown below, we conclude


that wind energy is the most efficient resources
among non-conventional resources
Energy Efficiency
Wind
Hydro
Solar
Natural Gas
Coal

Energy Efficiency
Figure 6: Space Solar power system
0% 500%1000%1500%

Figure 5: Efficiency of energy sources

III. TECHNOLOGICAL TRENDS AND FUTURE PROSPECTS

Today electricity is considered essential to life. Technological development has been fueled by electricity since its first applications, dating back to the 16th century. This wonderful phenomenon comes with a price. During the last 30 years there have been major changes that are detrimental to the future of our planet. Scientists have predicted that if this path is left unaltered, certain parts of the world will be uninhabitable by 2050. To combat this situation, we discuss the following technological trends in our paper:
Smart grids
To optimize the conservation and delivery of power, new grids have evolved from the current electrical grid. A smart grid is self-healing, self-balancing, self-optimizing and resistant to attacks.

To control the microwave beam direction accurately and quickly, a phased-array antenna is used in the SPS. Microwave tubes (magnetrons) and semiconductor amplifiers are the two types of microwave generators used [4].
Hydrogen power
Hydrogen produces almost no pollution, and a hydrogen fuel cell produces a clean byproduct: pure water. Since 1970 NASA has been using liquid hydrogen to propel the space shuttle. The fuel cell is a promising technology used as a source of electricity and heat. NREL (National Renewable Energy Laboratory, US) hydrogen and fuel cell research supports the development of cost-effective, high-performance fuel cell systems. Researchers at NREL are developing advanced techniques to generate hydrogen economically from sustainable sources [5]. NREL's hydrogen generation and delivery R&D efforts, which are led by Huyen Dinh, focus on the following methods to produce hydrogen:
i. Biological Water Splitting


ii. Fermentation
iii. Conversion of Biomass and Wastes
iv. Photoelectrochemical Water Splitting
v. Solar Thermal Water Splitting
vi. Renewable Electrolysis
Solar floating water panels
A floating solar power plant is a smart way to install distributed solar power on small inland bodies of water like ponds and reservoirs. The solar power company Kyocera has recently launched a solar power plant that floats on a reservoir and will produce about 2,680 MWh per year, enough for 820 households; the installation consists of almost 9,100 waterproof solar panels atop floats made of high-density polyethylene. This system is easy to install and dismantle, can be adapted to any electrical configuration, is scalable from low to high power generation, and requires no heavy equipment. It is also eco-friendly, fully recyclable, has low environmental impact and is cost-effective. So far, the system has been installed in the UK, and a Japanese system will be installed by March 2016.
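As a quick sanity check on the figures quoted above, the output and household numbers can be combined (the per-household consumption below is an inference for illustration, not a figure from the paper):

```python
# Back-of-the-envelope check of the quoted Kyocera floating-plant figures:
# 2,680 MWh/year serving 820 households.
annual_output_mwh = 2680    # plant output quoted in the text
households = 820            # households served, quoted in the text

per_household_mwh = annual_output_mwh / households
print(round(per_household_mwh, 2))   # -> 3.27 MWh per household per year
```

That implied consumption of roughly 3.3 MWh per household per year is consistent with typical Japanese residential usage, so the quoted figures are plausible.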
Green steam energy
This advanced technology has been developed by Robert Green, an American inventor. It produces kinetic energy by converting the waste heat of engines. The green steam engine is a piston engine that converts reciprocating motion into rotary motion. It requires no lubrication, runs on very low steam pressure and volume, is extremely lightweight, has high torque at low speeds, is highly versatile, and can run in any position like an electric motor.
Thermo-chemical solar power
Thermo-chemical technology differs from the PV technique in that it traps solar energy and stores it as heat in the molecules of chemicals. Heat stored using a thermo-chemical fuel is stable, whereas in a conventional solar system the heat dissipates with time.

Energy from pollution
An amazing new technology is being developed by researchers at the Center for Biotechnology at The Biodesign Institute of Arizona State University to extract electricity from pollution and organic waste materials using microbial fuel cells (MFCs) that can oxidize organic pollutants and generate electricity. The microbial fuel cell is powered by bacteria growing as a biofilm on a conductive solid surface acting as an electrode in a pool of organic waste.
KymoGen wave energy generator
A new wave power technology has been designed by mechanical engineer David Hartmann which can produce clean, low-cost energy using the constant motion of waves. The name KymoGen arises from the word "Generator" combined with Kymopoleia, "the wave walker," the Greek goddess of waves.
Spherical sun power generator
A spherical sun power generator prototype, the beta.ray, has been created by Rawlemon, a German company founded by architect Andre Broessel. Through this technology, more power can be squeezed from the sun, even during night hours and in low-light regions. The technology combines spherical geometry principles with a dual-axis tracking system. The beta.ray comes with a hybrid collector that produces electricity and thermal energy at the same time.

Figure 7: Spherical sun power generator

IV. DISCUSSION

From the above data we are now able to analyze the position India holds as a developing country in the

field of electrical power generation, the future of India in using both conventional and non-conventional resources, and the need for new innovative resources for the growing economies of India and the world. The utility electricity sector in India had an installed capacity of 288 GW as of 31 January 2016. Renewable power plants constituted 28% of total installed capacity and non-renewable power plants the remaining 72% [6]. Remarkably, India was the first country to set up a Ministry of Non-Conventional Energy Resources, in the 1980s. India's wind power generation capacity is the fifth largest in the world, yet the country is still not able to harness all possible energy; moreover, a fact to be kept in mind is that wind energy has the highest efficiency among all renewable sources. The biggest power failure in the history of India was the blackout of 30th and 31st July 2012, which is a matter of great concern. In order to address the lack of adequate electricity availability to all people in our country by the platinum jubilee (2022) of India's independence, the Government of India has launched a scheme called "Power for All". This scheme will ensure 24/7 continuous supply to all households, industries and commercial establishments by creating and improving infrastructure. The Government of India is trying its utmost to keep power prices low, and knowledge of the above-mentioned technologies will be helpful in reducing costs.

Smart grids provide two-way communication between the user and electronic components. Smart grid projects are being initiated by the government in Odisha and Chhatarpur. The solar floating panel system has been installed in the UK, and a Japanese system will be installed by March 2016. India's leading hydropower generator, NHPC, is planning to set up solar photovoltaic projects over water bodies in states such as West Bengal, Orissa and Kerala. India targets renewable energy development through the National Solar Mission, an ambitious programme of the Government of India which aims at an ultimate capacity of 20,000 MW by the year 2020. SPS is expected to be operational around 2030. A rectenna has already been set up in the US which receives 5,000 MW of power from SPS, with an estimated 85% efficiency if properly channelized.

V. CONCLUSION

We are at the peak of using our non-renewable resources, and one day they will ultimately vanish from the world. The benefits of non-conventional energy resources are innumerable, including the fact that they can provide power even to villages where the electricity grid cannot extend. This report discusses issues, challenges and opportunities particular to India and the scope of its development with respect to other countries. We have also discussed briefly the recent technological trends and future prospects. For sustainable development and a better future, we need to steer clearly towards alternate energy, which is the need of the hour. It is not just an issue for India or other developing countries but a matter of concern for all nations across the globe.

REFERENCES
[1] Power sector at a glance, Ministry of Power, Government of India.
[2] Growth of Electricity Sector in India from 1947-2013 (PDF), Central Electricity Authority, Ministry of Power, Government of India, 2014.
[3] Prutha Khazode, Siddhartha Nigam, S. Prabhakaran and K. Sathish Kumar, "Indian power scenario - A road map to 2020."
[4] S. T. Hasarmani, "Wireless power transmission for solar power satellite."
[5] Hydrogen production and delivery, National Renewable Energy Laboratory, US.
[6] All India installed capacity (in MW) of Power Stations (PDF).


Design, Simulation and Synthesis of Generic Synchronous FIFO Architecture

Prateek Singh, Anmol Sharma, Surender Kumar
Electronics and Communication Department, Northern India Engineering College, New Delhi, India
Email: prateek10singh@gmail.com, adwatsharma@gmail.com

Abstract: FIFOs are being increasingly regarded as an important component in the modular design approach. Recently published works have discussed various single-clock and dual-clock FIFO architectures and multi-asynchronous clock design techniques to pass data safely from one clock domain to another. This paper focuses on the design, simulation and synthesis of a generic synchronous FIFO architecture in a circular mode of operation using a RAM generation program. The behavioral description of the design module has been written in Verilog HDL. The design has been synthesized using Xilinx ISE Design Suite 14.3 and simulated in the vsim simulator of ModelSim SE 6.2b. Waveforms obtained during simulation have been analyzed to verify the functionality of the design.

Keywords: FIFO, RAM, synchronous, Verilog, RTL, Test Bench.

I. INTRODUCTION

As systems on a chip (SoC) become larger and faster, it is becoming increasingly difficult to distribute a single synchronous clock to the entire chip [1], [2]. Any communication crossing two asynchronous clock domains requires careful synchronization, and the first-in first-out (FIFO) buffer working across asynchronous clock domains is well suited to the job [3]. However, most existing FIFOs achieve high throughput at the cost of high forward and reverse latency [4]. The delay from the input to the output in an empty FIFO is defined as the forward latency, and the delay from the output to the input in a full FIFO is defined as the reverse latency. Existing FIFOs working in asynchronous clock domains can be designed with synchronous and asynchronous circuits.
A FIFO implements a first-in first-out queue methodology for memories, in order to read and write information and data using some control logic. The operation of a FIFO is completely dependent on the control circuitry and the clock domain [5]. It is often used to control the flow of data from source to destination on every clock transition. Basically, FIFOs can be differentiated by clock domain as either synchronous or asynchronous. In a synchronous FIFO, write operations to the FIFO buffer and read operations from the same buffer occur in the same clock domain. But in an asynchronous FIFO, these two operations of writing to and reading from the FIFO buffer occur in different clock domains [6].
Section II presents the design methodology, consisting of the problem statement, architecture and design algorithm. Simulation results, including testbench simulation results, the RTL schematic and waveforms, are given in Section III. Section IV discusses some applications of the FIFO architecture. Conclusions are finally drawn in Section V.
II. DESIGN METHODOLOGY

A. Problem Statement

The idea is to take a memory array (of any kind) and use it to implement a generic FIFO. Two address pointers are used to define the head and the tail of the FIFO. The queue is implemented in a cyclic mode of operation, and the FIFO is defined to be full when the head pointer and the tail pointer are equal. The FIFO is initialized using a reset signal. The FIFO asserts a signal when it is full and does not accept any further write commands. It likewise asserts a signal when it is empty and does not accept any further read commands.
The memory array will be automatically generated by a RAM generation program. The width and depth are supplied to the program, which then generates a single-cycle RAM memory array of the requested size.


Implementation of the read and write commands of the
FIFO is always done at the next clock edge. The RTL is
synthesized and the HDL code is simulated. A
testbench is also written for the FIFO design to test the
system.
B. Architecture

The port list for the required design is as follows:

dout
It is the output port used to send the data stored in the FIFO to the other module. It is high impedance when the system is reset. It outputs new data to the external environment when the re input is high and the FIFO is not empty.

empty
It is the output port which indicates whether the FIFO is empty (empty is high) or not (empty is low).

full
It is the output port which indicates whether the FIFO is full (full is high) or not (full is low).

clk
It is the input signal used to synchronize all the system operations.

din
It is the input port used to accept data from the user and store it into the FIFO. New data is only accepted if the full signal is low; otherwise the data is rejected.

cs
It is the most important port of the system. It is the input port which acts as the chip select (enable) for the system. All actions in the system happen only if this port is high. If this port is low, the FIFO is cleared and no input/output or read/write instruction can take place.

we
It is the input port used to instruct the system to accept new data and store it into the FIFO. However, data is stored if and only if the FIFO is not full.

re
It is the input port used to instruct the system to output the least recently stored valid data from the FIFO. However, this happens if and only if the FIFO is not empty.

Four internal registers are used in the system to implement the design. These registers are not visible to the system user; they are accessible and visible only to the designer.

mem
It is a two-dimensional array acting as the FIFO, used to store and extract the data.

r_addr
It is the register used to store the address of the FIFO word to be read.

w_addr
It is the register used to store the address of the slot in the FIFO into which the word is to be written.

count
It is used to keep a count of the number of FIFO slots holding valid data yet to be read out.
C. Design Algorithm

The behavioral style of modeling has been followed, so the algorithm plays a major role in designing the system. All tasks are performed only when the cs pin is high. The system is always reset (by making the cs pin low for some duration) before its first use. When the system is reset, the FIFO is cleared, the data pointers are set to point to the first location, and the count is also reset. When cs is high and the we pin is high, data is written into the FIFO if full is low; count and w_addr are incremented. When cs is high and the re pin is high, data is read out from the FIFO if empty is low; r_addr is incremented whereas count is decremented. However, when cs, we and re are all high, the system ports and internal registers behave as above, except that count is not affected.
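The control flow described above can be mirrored in a small software model. The following Python sketch is purely illustrative (the paper's actual design is a Verilog HDL module); data widths are not modeled, and only the pointer and count bookkeeping of the algorithm is reproduced:

```python
# Behavioral software model of the circular FIFO control algorithm.
class SyncFifoModel:
    def __init__(self, depth=4):
        self.depth = depth
        self.reset()

    def reset(self):
        """cs held low: clear the FIFO, reset pointers and count."""
        self.mem = [0] * self.depth
        self.r_addr = 0   # address of the next word to read
        self.w_addr = 0   # address of the next slot to write
        self.count = 0    # slots holding valid data yet to be read

    @property
    def full(self):
        return self.count == self.depth

    @property
    def empty(self):
        return self.count == 0

    def clock(self, cs, we, re, din=0):
        """One rising clock edge; returns dout, or None for high impedance."""
        if not cs:
            self.reset()
            return None
        dout = None
        if re and not self.empty:                      # read side
            dout = self.mem[self.r_addr]
            self.r_addr = (self.r_addr + 1) % self.depth
            self.count -= 1
        if we and not self.full:                       # write side
            self.mem[self.w_addr] = din
            self.w_addr = (self.w_addr + 1) % self.depth
            self.count += 1
        return dout
```

When cs, we and re are all high, the read and write paths each fire once, so count is left unchanged, matching the algorithm above; note that this sketch distinguishes full from empty using the count register rather than pointer equality alone.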


III. SIMULATION

A. Simulation Environment and RTL Schematic

The Verilog code has been simulated in the vsim simulator of ModelSim SE 6.2b and synthesized using Xilinx ISE Design Suite 14.3. The RTL schematic of the top module and the design summary of the synthesized FIFO are shown in Fig. 1 and Fig. 2 respectively. The internal gate-level netlist is shown in Fig. 3. The synthesized RTL shows that the Verilog code is hardware-realizable and can be used to make a chip or any real-world device.

Fig. 1. RTL of FIFO


Fig. 3. Gate-level netlist of FIFO

B. Test Bench Simulation Results

The testbench is a feature incorporated into the tool to allow basic logical and functional verification. However, the RTL for the testbench does not have any ports, which implies that the testbench does not have any real-world physical or hardware realization. It is only a piece of code used to stimulate the DUT.

Fig. 2. Design Summary of FIFO (Synthesized)

In the testbench shown in Fig. 4, the address bus is defined to be 2 bits wide and the data bus 8 bits wide. Different combinations of test inputs have been applied arbitrarily in order to verify the design functionality. Fig. 5 shows the contents of the memory implemented as a FIFO circular queue, which consists of 4 memory locations, each 8 bits wide.


C. Waveform

module fifo_tb();
`define abus 2
`define dbus 8
reg clk,cs,we,re;
reg [`dbus-1:0]din;
wire [`dbus-1:0]dout;
wire full,empty;
fifo UUT (dout,empty,full,clk,din,cs,re,we);
initial
begin
clk=1; cs=1; we=0; re=0; din=3;
#5 cs=0; #2 cs=1;
re=1;
#10 re=0; #10 we=1;
#10 din=5; #10 din=8; #10 din=12;
#10 din=6; #10 din=10; #10 din=3;
we=0; re=1;
#10 din=14; #10 din=2;
we=1;
#10 din=4; #10 din=6;
//re=0;
#10 din=0; #10 din=1;
we=1;
#10 din=1; #10 din=3;
#10 re=1;
#10 din=4; #10 din=6; #10 din=13;
#10 re=0;
#10 din=15;
we=1;
#10 din=96; #10 din=34; #10 din=17;
#10 din=55; #10 din=66; #10 we=0;
#10 re=1;
end
always
#5 clk=~clk;
endmodule

Fig. 4. Verilog code of Testbench


Fig. 6. Waveform of FIFO obtained during simulation

IV. APPLICATIONS

FIFOs are used in designs to safely pass multi-bit data words from one clock domain to another. In an asynchronous FIFO, data words are placed into a FIFO buffer memory array by control signals in one clock domain, and the data words are removed from another port of the same FIFO buffer memory array by control signals from a second clock domain.

Fig. 5. Contents of memory implemented as FIFO obtained during simulation


In a synchronous FIFO, writes to and reads from the FIFO buffer are conducted in the same clock domain.

A FIFO is also used as a buffer to store data in any system in which the read frequency of the receiving system is lower than the write frequency of the transmitting system. It prevents flooding of the receiver with more data than it can accept.

V. CONCLUSION

A FIFO is usually a small memory which operates on a first-in first-out basis, so this memory does not have any address input. Although internally the addresses may be generated automatically, from the user's point of view only data and read/write signals must be provided. In this paper, the authors have designed, simulated and synthesized a generic synchronous FIFO architecture in a circular mode of operation. The advantage is that the data bus and address bus are defined using macros, due to which only one change is required to alter the data and address bus widths. Also, since it is a synchronous device, every activity takes place at a fixed instant of time, which reduces the unwanted effects of noise and glitches on the output.

However, this FIFO architecture cannot be used when the read and write clocks have different frequencies. Improvements can therefore be incorporated in the future by making it respond to two different clocks: one for the write cycle and the other for the read cycle.

REFERENCES
[1] G. Friedman, "Clock distribution networks in synchronous digital integrated circuits," Proceedings of the IEEE, 2001, pp. 665-692.
[2] J. Martin and M. Nystrom, "Asynchronous Techniques for System-on-Chip Design," Proceedings of the IEEE, 2006, pp. 1089-1120.
[3] T. Chelcea and S. M. Nowick, "Robust interfaces for mixed-timing systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 12, no. 8, pp. 857-873, Aug. 2004.
[4] Y. Xiao and R. Zhou, "Low Latency High Throughput Circular Asynchronous FIFO," Tsinghua Science and Technology, vol. 13, no. 6, pp. 812-816, Dec. 2008.
[5] Harish Sharma and Charu Rana, "Designing of 8-bit Synchronous FIFO Memory using Register File," International Journal of Computer Applications, vol. 63, no. 16, Feb. 2013.
[6] Clifford E. Cummings, "Simulation and Synthesis Techniques for Asynchronous FIFO Design," SNUG (Synopsys Users Group Conference), San Jose, CA, 2002.
[7] Clifford E. Cummings, "Clock Domain Crossing (CDC) Design and Verification Techniques Using SystemVerilog," SNUG (Synopsys Users Group Conference), Boston, 2008.


Review of Low Power 4:1 Multiplexer Circuit Design for CMOS Logic Styles at 90nm Technology

Anmol Sharma, Prateek Singh
Deptt. of Electronics and Communication Engg., Northern India Engineering College, Delhi, India
Email: prateek10singh@gmail.com, adwatsharma@gmail.com

Abstract: A multiplexer, sometimes referred to as a MUX, is a device that selects between a number of input signals and gives a single output signal. It is a unidirectional combinational device used in applications where data must be switched from multiple sources to a destination. This paper presents the simulation of different 4:1 multiplexers designed using various CMOS logic styles, and a comparative analysis of their power consumption over a supply voltage range of 1.6 V to 2.4 V. All simulations have been carried out on BSIM3v3 90nm technology using the Tanner EDA tool.

Keywords: CMOS, VLSI, low-voltage, low-power logic styles, Multiplexer.

I. INTRODUCTION

Advances in CMOS technology have led to a renewed interest in the design of basic functional units
for digital systems. The use of integrated circuits in
high performance computing, telecommunications,
and consumer electronics has been growing at a very
fast pace. This trend is expected to continue, with very
important implications for power-efficient VLSI and
systems designs. Digital integrated circuits commonly
use CMOS circuits as building blocks. The continuing
decrease in feature size of CMOS circuits and
corresponding increase in chip density and operating
frequency have made power consumption a major
concern in VLSI design. Excessive power dissipation
in integrated circuits not only discourages their use in
portable environment but also causes overheating
which reduces chip life and degrades performance [1],
[2].

II. LOGIC STYLES

A. Impact of Logic Style

The logic style used in logic gates basically influences the speed, size, power dissipation and wiring complexity of a circuit. The circuit delay is determined by the number of inversion levels, the number of transistors in series, the transistor sizes (i.e., channel widths) and the intra- and inter-cell wiring capacitances. Circuit size depends on the number of transistors, their sizes and the wiring complexity. Power dissipation is determined by the switching activity and the node capacitances (made up of gate, diffusion and wire capacitances), the latter of which is in turn a function of the same parameters that also control circuit size. Finally, the wiring complexity is determined by the number of connections and their lengths, and by whether single-rail or dual-rail logic is used. All these characteristics may vary considerably from one logic style to another and thus make the proper choice of logic style crucial for circuit performance. As far as cell-based design techniques (e.g., standard cells) and logic synthesis are concerned, ease of use and generality of logic gates are important as well. Robustness with respect to voltage and transistor scaling, as well as to varying process and operating conditions, and compatibility with surrounding circuitry are important aspects influenced by the implemented logic style.
B. Logic Style Requirements for Delay
According to the formula

tpd ∝ (C / I) · V

the delay (tpd) of a logic gate depends on its output current I, load capacitance C and output voltage

swing V. Faster circuit families attempt to reduce one of these terms. nMOS transistors provide more current than pMOS transistors of the same size and capacitance, so nMOS networks are preferred. Observe that the logical effort is proportional to the C/I term, because it is determined by the input capacitance of a gate that can deliver a specified output current.
C. Logic Style Requirements for Low Power
According to a formula of the form

Pdyn = Σ(n=1..N) αn · fclk · Cn · Vdd² + Psc

the dynamic power dissipation of a digital CMOS circuit depends on the supply voltage Vdd, the clock frequency fclk, the node switching activities αn, the node capacitances Cn, the node short-circuit currents Isc,n (which determine the short-circuit component Psc) and the number of nodes N. A reduction of any of these parameters results in a reduction of dissipated power.
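The switched-capacitance term of this expression can be illustrated numerically. The node values in the sketch below are invented for illustration; they are not measurements from this paper:

```python
# Illustration of the switched-capacitance term of the dynamic-power
# expression above: sum of alpha_n * C_n * Vdd^2 * fclk over all nodes.
def switching_power(vdd, fclk, nodes):
    """nodes: list of (alpha_n, c_n) pairs per circuit node."""
    return sum(alpha * c * vdd ** 2 * fclk for alpha, c in nodes)

# Two example nodes switching at 100 MHz with Vdd = 1.8 V:
# activities 0.1 and 0.25, capacitances 10 fF and 20 fF.
p = switching_power(1.8, 100e6, [(0.1, 10e-15), (0.25, 20e-15)])
print(round(p * 1e6, 2), "microwatts")   # -> 1.94 microwatts
```

The quadratic dependence on Vdd in this term is what makes supply voltage reduction, discussed below, the most effective low-power lever.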
However, clock frequency reduction is only feasible
at the architecture level, whereas at the circuit level
frequency fclk is usually regarded as constant in order
to fulfil some given throughput requirement. All the
other parameters are influenced to some degree by the
logic style applied. Thus, some general logic style
requirements for low-power circuit implementation
can be stated at this point.
1) Switched capacitance reduction:
Capacitive load, originating from transistor
capacitances (gate and diffusion) and interconnect
wiring, is to be minimized. This is achieved by
having as few transistors and circuit nodes as possible
and by reducing transistor sizes to a minimum. In
particular, the number of (high capacitive) inter-cell
connections and their length (influenced by the circuit
size) should be kept minimal. Another source for
capacitance reduction is found at the layout level [3],
which, however is not discussed in this paper.
Transistor downsizing is an effective way to reduce the switched capacitance of logic gates on noncritical signal paths [4]. For that purpose, a logic style should be robust against transistor downsizing, i.e., correct functioning of logic gates with minimal or near-minimal transistor sizes must be guaranteed (ratioless logic).
2) Supply voltage reduction:
The supply voltage and the choice of logic style are
indirectly related through delay-driven voltage
scaling. That is, a logic style providing fast logic
gates to speed up critical signal paths allows a
reduction of the supply voltage in order to achieve a
given throughput. For that purpose, a logic style must
be robust against supply voltage reduction, i.e.,
performance and correct functioning of gates must be
guaranteed at low voltages as well. This becomes a
severe problem at very low voltages of around 1V
and lower, where noise margins become critical [5],
[6].
3) Switching activity reduction:
Switching activity of a circuit is predominantly
controlled at the architectural and registers transfer
level (RTL). At the circuit level, large differences are
primarily observed between static and dynamic logic
styles. On the other hand, only minor transition
activity variations are observed among different static
logic styles and among logic gates of different
complexity, also if glitching is concerned.
4) Short-circuit current reduction:
Short-circuit currents (also called dynamic leakage currents or overlap currents) may vary by a considerable amount between different logic styles. They also strongly depend on the input signal slopes (i.e., steep and balanced signal slopes are better) and thus on transistor sizing. Their contribution to the overall power consumption is rather limited but still not negligible (10-30%), except at very low voltages Vdd < Vtn + |Vtp|, where the short-circuit currents disappear. A low-power logic style should have minimal short-circuit currents and, of course, no static currents besides the inherent CMOS leakage currents.
D. Multiplexers
A multiplexer is a combinational circuit that selects binary information from one of many input lines and directs it to a single output line. The selection of a particular input line is controlled by a set of selection lines. Normally, there are 2^n input lines and n selection lines whose bit combinations determine which input is selected.
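The selection behavior just described can be sketched as a functional model. The Python below is for illustration only; the circuits studied in this paper are transistor-level schematics, not software:

```python
# Functional model of a 4:1 multiplexer: n = 2 select bits
# choose one of 2^n = 4 inputs.
def mux4to1(inputs, s1, s0):
    """Return the input selected by the two select bits (s1 = MSB)."""
    assert len(inputs) == 4
    return inputs[(s1 << 1) | s0]

# Example: select combination s1=1, s0=0 picks input index 2.
print(mux4to1([0, 1, 1, 0], 1, 0))   # -> 1
```

Every logic style compared below (CMOS, pseudo-nMOS, LEAP, dual-rail domino) realizes this same selection function; they differ only in the transistor-level implementation and hence in power and delay.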


The different CMOS logic styles used to design multiplexer circuits are the following:

1. CMOS: This logic style is widely accepted by many VLSI circuit designers for designing any arbitrary circuit because of lucrative features of CMOS like full swing, theoretically no steady-state losses, and robustness against transistor downsizing (ratioless) and voltage scaling. However, it must be noted that the optimum logic style for any circuit is technology independent.

2. Pseudo nMOS: It is a ratioed circuit with a MOSFET count half that of CMOS. It has a pMOS load in an always-ON condition, which leads to steady-state losses. However, the absence of the dual pMOS network, unlike in CMOS, gives a smaller effective load capacitance, leading to lower delays that compensate for the power losses. In certain situations a pseudo-nMOS circuit can be five times faster than the equivalent CMOS circuit.

3. LEAP: LEAP PTL is a single-rail logic which uses a feedback pull-up pMOS. Reliable operation of this circuit is only possible for very low-threshold devices. It is a single-rail logic with reduced transistor count and low input loads. However, the nMOS network causes 1s to be passed inefficiently, leading to poor operation for Vdd > Vtn + |Vtp|. Conventional logic styles can be easily mapped onto the chip, with appealing implementations of mux-like circuits.

4. Dual-rail Domino
III. SIMULATION AND ANALYSIS

A. Simulation Environment

All the circuits have been simulated using BSIM3v3 90nm technology on the Tanner EDA tool, with exactly the same input patterns applied to each, to ensure an impartial testing environment. Every simulation has been performed over a voltage range of 0.8 V to 2.4 V. The W/L ratio for nMOS transistors was kept at 1.8/1.2, and ((W/L)p / (W/L)n) = 2.
B. Schematics

Schematics of the 4:1 multiplexer designed in S-Edit using all four different logic styles discussed in Section II have been presented.

Fig. 1. Schematic of CMOS.

4. Dual rail Domino - Dual rail domino is just like CVSL logic, where the gates of the pMOS loads are connected to the clock rather than to complementary outputs. Its function is similar to the Domino logic style; however, it faces difficulties working at low thresholds. Additionally, an nMOS footer transistor can be placed to guard against output discharging during the precharge phase. Dynamic losses can be reduced by reducing the duty cycle of the clock to less than 50%.

Fig. 2. Schematic of Pseudo nMOS


Fig. 6. Maximum Power Consumption vs Vdd for CMOS, pseudo nMOS, LEAP, dual-rail domino based multiplexer circuits.

Fig. 3. Schematic of LEAP.

Fig. 7. Minimum Power Consumption vs Vdd for CMOS, pseudo nMOS, LEAP, dual-rail domino based multiplexer circuits.

Fig. 4. Schematic of Dual-Rail Domino.

C. Performance Analysis

This work includes a power analysis of the logic styles discussed above. Fig. 5 depicts the power consumption vs Vdd of the CMOS, pseudo nMOS, LEAP and dual rail domino logic styles for the 4:1 multiplexer circuit. Fig. 6 shows the maximum power consumption vs Vdd, and Fig. 7 shows the minimum power consumption vs Vdd, of the same logic styles for the 4:1 multiplexer circuit.

D. Results and Discussion

Waveform analysis of the different logic styles realizing the 4:1 multiplexer circuit has been done using W-Edit. While performing waveform analysis of the designed circuits, we applied all the possible input values individually using the bit pattern interface and also provided random values using the pulse interface as the voltage source. Fig. 8 shows the simulation of the 4:1 multiplexer circuit. Tables I, II and III depict the power readings analyzed for a voltage supply varying from 1.6 V to 2.4 V.
TABLE I. POWER ANALYSIS FOR LEAP BASED 4:1 MULTIPLEXER CIRCUIT

Fig. 5. Power Consumption vs Vdd for CMOS, pseudo nMOS, LEAP, dual rail domino based multiplexer circuits.


TABLE II. POWER ANALYSIS FOR PSEUDO NMOS BASED 4:1 MULTIPLEXER CIRCUIT

TABLE III. POWER ANALYSIS FOR DUAL RAIL DOMINO BASED 4:1 MULTIPLEXER CIRCUIT

* indicates: does not work (Vdd < Vtn + |Vtp|)

Fig. 8. Simulation result for input signals v(A), v(B), v(C), v(D), v(S1), v(S0) and output signal v(Out) for 4:1 multiplexer based circuits.

IV. CONCLUSION

Earlier research work suggests that the optimum logic style for any circuit is technology independent and that different CMOS logic styles can be used to implement arbitrary logic. However, the study conducted by the authors in this work suggests that the LEAP, pseudo nMOS and dual rail domino logic styles are not an appropriate choice for low-power multiplexer circuit implementation at 90 nm technology, since the delay analysis of these logic styles indicates large logical effort and short-circuit current. Moreover, it was found that for a voltage supply in the range of 1.6 V to 2.4 V, leakage power increases with transistor down-sizing and voltage scaling (Vdd < Vtn + |Vtp|) for these logic styles. This paper concludes that CMOS is the logic style of choice for the implementation of arbitrary combinational circuits if low voltage, low power, and small power-delay products are of concern. The power analysis comparison of the four logic styles also shows that CMOS outperforms all the other logic styles.

REFERENCES

[1] A. Bellaouar and M. I. Elmasry, Low-Power Digital VLSI Design: Circuits and Systems, 2nd Edition.
[2] S.-M. Kang and Y. Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, McGraw-Hill International Editions, Boston, 2nd Edition, 1999.
[3] J. Yuan and C. Svensson, "New single-clock CMOS latches and flip-flops with improved speed and power savings," IEEE J. Solid-State Circuits, vol. 32, pp. 62-69, Jan. 1997.
[4] C. Piguet, J.-M. Masgonty, P. Mosch, C. Arm, and V. von Kaenel, "Low-power low-voltage standard cell libraries," in Proc. Low Voltage Low Power Workshop, ESSCIRC'95, Lille, France, Sept. 1995.
[5] R. Rogenmoser, H. Kaeslin, and N. Felber, "The impact of transistor sizing on power efficiency in submicron CMOS circuits," in Proc. 22nd European Solid-State Circuits Conf., Neuchatel, Switzerland, Sept. 1996, pp. 124-127.
[6] C. Piguet, J.-M. Masgonty, S. Cserveny, and E. Dijkstra, "Low-power low-voltage digital CMOS cell design," in Proc. PATMOS'94, Barcelona, Spain, Oct. 1994, pp. 132-139.
[7] S. T. Ventrone, "Low power multiplexer circuit," United States Patent 6,054,877, Apr. 2000.

Implementation of Synchronous FIFO Using Verilog

Rohan Jain
Electronics and Communication Department, NIEC, Delhi, India
E-mail: rohanjain.06.01@gmail.com
Abstract—In this paper a synchronous FIFO has been implemented. The implementation was done using Verilog HDL. Not only was the RTL code written, the design was also tested a number of times using different test cases; a testbench was made for this purpose. Since it is a synchronous FIFO, every activity takes place at a fixed instant of time. Since the FIFO realized is hardware, once the system is synthesized its bus width cannot be changed. The data bus and address bus widths are `define macros, so only one change is required to change the data and address width. The RTL design, testbench design, and the input and corresponding output waveforms have been shown.

Keywords—FIFO, RTL, HDL, testbench.
I. INTRODUCTION

Digital design is a plan or model produced to show the look, function and workings of complex digital and mixed-signal electronic systems with optimized power, performance, and reliability metrics. Digital designs consist of combinational and sequential circuits. From the age of vacuum tubes and transistors we evolved through SSI to MSI to LSI to VLSI (and to ULSI, still a topic of research). Because of the complexity of present-day circuits we exploit CAD tools. In this paper, the implementation of a synchronous FIFO has been shown using ModelSim. The implementation was done on Xilinx ISE v14.3 too. In Section 2, some basic concepts required to understand RTL designing and related terms are explained. Section 3 describes the tools used to carry out the work. Section 4 describes the implementation of the synchronous FIFO. In Sections 5 and 6 the results are analyzed and finally some conclusions are drawn.

II. EASE OF UNDERSTANDING

A. Hardware Description Language

HDL is an abbreviation for Hardware Description Language. It is one of the latest developments in the electronics industry, contributing to the CAD designing of circuits. HDL is a part of the EDA tools required for structuring, designing and operating digital electronic circuits. Verilog is the HDL used to implement the synchronous FIFO. It was introduced by Gateway Design Automation, now part of Cadence Design Systems, Inc.
B. Register Transfer Level

It is a design abstraction level in which we model a digital circuit, whether synchronous or asynchronous, in terms of the flow of data (digital signals) between various hardware registers, together with all the logical operations performed on that data. HDLs like Verilog and VHDL are used to create the top modules of the circuits, and the smaller inner modules are implemented later on.
C. The Design Flow

The design flow determines the steps we need to follow in order to make a finished product which can be shipped to the users. Figure 1 shows a pictorial representation of the design flow needed to make a VLSI chip. Initially, the specifications of the desired circuit are needed. We make the HDL design and simulate it to verify the functionality of the circuit. After successful simulations, synthesis is done, followed by post-silicon validation. This design flow needs to be followed to make a flawless product.


ModelSim is supported on Microsoft Windows and Linux, in 32-bit and 64-bit architectures.

A. RTL Design

Designs using RTL specify the characteristics of a circuit in terms of operations and the flow of data or signals between registers. A clock is explicitly used. Exact timing is present in an RTL design; operations are scheduled to occur at certain times. A modern definition of RTL code is: "any code that is synthesizable is called RTL code".
B. Testbench

A testbench is a particular type of HDL code that is used to provide an organized set of multiple inputs to stimulate an RTL design. A testbench code should be portable across the various families of simulators present. It is a very simple HDL code including clocks and other inputs; these are switched in order to form various input combinations to stimulate the design. A more complicated file that assists in error checking, file input and output, and conditional testing can additionally be included along with the testbench.

Fig. 1. VLSI Design Flow Chart [1]

III. TOOLS USED


Mentor Graphics has designed and contributed to the development of many HDLs, such as VHDL, Verilog and SystemC, for the structuring, designing and operation of digital logic structures. ModelSim is a multi-language HDL simulation environment by Mentor Graphics for the simulation of hardware description languages such as VHDL, Verilog and SystemC, and includes a built-in C debugger. ModelSim can be used independently, or in conjunction with Altera Quartus or Xilinx ISE. Simulation is performed using the graphical user interface (GUI), or automatically using scripts.
ModelSim SE offers high-performance and advanced debugging capabilities and is used in large multi-million-gate designs.

Fig. 2. The test bench Design

A test bench has four components:

Input: These are the stimulus to the design.
Procedures to convert: The tasks or processes that will convert or process the input into the output.
Procedures to check: They determine whether the output obtained is the desired one, meeting all the standards, or not.


Output: It is the required exit criterion from the testbench.
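The four components can be seen in a minimal testbench sketch for a hypothetical 2:1 mux DUT (all names here are illustrative, not the FIFO testbench of this paper):

```verilog
`timescale 1ns/1ps

// Hypothetical device under test: a 2:1 mux (illustrative only).
module mux2 (input wire a, b, sel, output wire y);
    assign y = sel ? b : a;
endmodule

// The testbench itself has no ports: stimulus and checking only.
module tb;
    reg  a, b, sel;     // Input: stimulus to the design
    wire y;             // Output: the observed exit criterion

    mux2 dut (.a(a), .b(b), .sel(sel), .y(y));

    initial begin
        // Procedure to convert: drive the input combinations
        a = 0; b = 1; sel = 0; #10;
        // Procedure to check: compare against the expected value
        if (y !== a) $display("FAIL: expected %b, got %b", a, y);
        sel = 1; #10;
        if (y !== b) $display("FAIL: expected %b, got %b", b, y);
        $finish;
    end
endmodule
```

Note that the testbench module has no port list, which is why (as discussed in the results) its synthesized view has no ports.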

A. Problem Statement

a) The idea is to take a memory array (of any kind) and use it to implement a generic FIFO.
b) Two address pointers are used to define the head and the tail of the FIFO.
c) The queue will be implemented in a cyclic mode of operation, and the FIFO is defined to be full when the head pointer and the tail pointer are equal.
d) The FIFO will be initialized using a reset signal.
e) The FIFO will send a signal if it is full and will not accept any further write commands.
f) The FIFO will send a signal if it is empty and will not accept any further read commands.
g) The memory array will be automatically generated by a RAM generation program. The width and depth are supplied to the program, which then generates a single-cycle RAM memory array of the requested size.

Fig. 3. Architecture of the Test bench

IV. FIFO IMPLEMENTATION


A FIFO is usually a small memory which operates on a first-in first-out basis, so this memory won't have any address input. Although internally the addresses may be generated automatically, from the user's point of view the user provides data and read/write signals only.
Async FIFO: used to transfer data from one clock domain to another.
Sync FIFO: usually used for buffering up some command/data.
For example, if a microprocessor is writing to an interface, say an SPI interface, then it would normally write commands to a FIFO in a burst, and then it will get busy with something else. The SPI interface would be slow to respond, so it will take its own time to collect commands/data from that FIFO. This allows the processor to get free while the SPI is taking its own time to do things.

h) Implementation of the read and write commands of the FIFO is always done at the next clock edge.
i) Synthesize the RTL and simulate the HDL code.
j) Write a testbench for the FIFO design to test the system.
B. About the Ports

dout
It is the output port which is used to send the data stored in the FIFO to the other module. It will be high impedance when the system is reset. It will output new data to the world when the re input is high and the FIFO is not empty.

empty
It is the output port which indicates whether the FIFO is empty (empty is high) or not (empty is low).

full
It is the output which indicates whether the FIFO is
full (full is high) or not (full is low).

clk
It is the input which is used to synchronize all the system operations.


din
It is the input port which is used to accept data from
the user and store it into the FIFO. New data is only
accepted if full signal is low. Otherwise the data is
rejected.

cs
It is the most important port of the system. It is the
input port which acts as the power supply to the
system. All the actions to the system will only happen
if this port is high. If this port is low then the FIFO is
cleared and no input/output or read/write instruction
can take place.

we
It is the input port which is used to instruct the system
to accept the new data and store it into the FIFO.
However it is stored if and only if the FIFO is not
full.

re
It is the input port which is used to instruct the system
to output the least recently stored valid data from the
FIFO. However it happens if and only if the FIFO is
not empty.
C.

Internal Registers

mem
It is an array acting as the FIFO. It is two dimensional
and is used to store in and extract out the data.

When the system is reset, the FIFO is cleared, the data pointers are set to point to the first location, and the count is also reset.

When cs is high and the we pin is high, data is written into the FIFO if full is low. count and w_addr are incremented.

When cs is high and the re pin is high, data is read out from the FIFO if empty is low. r_addr is incremented whereas count is decremented.

However, when cs, we and re are all high, the system ports and internal registers act as above, except that count is not affected.
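The ports, internal registers and algorithm described above can be sketched in Verilog as follows. This is a minimal illustrative model, not the authors' actual code; the widths and the exact reset behavior are assumptions based on the text:

```verilog
// Illustrative widths; as the paper notes, these come from `define
// macros so only one change is needed to resize the buses.
`define DATA_W 8
`define ADDR_W 4

module sync_fifo (
    input  wire               clk,
    input  wire               cs,    // active-high; low clears the FIFO
    input  wire               we,
    input  wire               re,
    input  wire [`DATA_W-1:0] din,
    output reg  [`DATA_W-1:0] dout,
    output wire               empty,
    output wire               full
);
    reg [`DATA_W-1:0] mem [0:(1<<`ADDR_W)-1];  // the FIFO array
    reg [`ADDR_W-1:0] r_addr, w_addr;          // read/write pointers
    reg [`ADDR_W:0]   count;                   // valid-slot counter

    assign empty = (count == 0);
    assign full  = (count == (1 << `ADDR_W));

    always @(posedge clk) begin
        if (!cs) begin                         // cs low clears the FIFO
            r_addr <= 0;
            w_addr <= 0;
            count  <= 0;
            dout   <= {`DATA_W{1'bz}};         // high impedance on reset
        end else begin
            if (we && !full) begin             // write: store and advance
                mem[w_addr] <= din;
                w_addr      <= w_addr + 1;
            end
            if (re && !empty) begin            // read: output and advance
                dout   <= mem[r_addr];
                r_addr <= r_addr + 1;
            end
            case ({we && !full, re && !empty})
                2'b10:   count <= count + 1;   // write only
                2'b01:   count <= count - 1;   // read only
                default: count <= count;       // both or neither: unchanged
            endcase
        end
    end
endmodule
```

Note that the count register resolves the ambiguity of problem statement (c): with a cyclic queue, equal head and tail pointers can mean either full or empty, and the counter distinguishes the two cases.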
V. RESULTS AND ANALYSIS

The synthesis of the design code is shown in Figure 4. Both the top module as well as its internal gate-level netlist are shown. Successful synthesis of the system shows that the Verilog code is hardware-realizable and can be used to make a chip or any real-world device. However, we see that the RTL view of the testbench does not have any ports. It implies that the testbench is not a real-world device; it is only a piece of code used to stimulate the DUT. The testbench does not have any physical or real-world realization. Hence it is a feature incorporated into the tool to allow basic-level logical and functional verification.

r_addr
It is the register used to store the address of the FIFO
word to be read.

w_addr
It is the register used to store the address of the slot in
FIFO in which the word is to be written.

count
It is used to keep a count of the number of slots of
FIFO having valid data yet to be read out.
D. Designing Algorithm

All the tasks should be done only when the cs pin is high.

The system is always reset (by making the cs pin low for some duration) before its first-time use.

Fig. 4. RTL of FIFO using XILINX ISE 14.3


Fig. 5. Gate level netlist of FIFO using XILINX ISE 14.3

VI. CONCLUSION

In this paper the implementation of a synchronous FIFO was done using Verilog HDL. The data bus and address bus widths are `define macros, so only one change is required to change the data and address width. Since it is a synchronous FIFO, every activity takes place at a fixed instant of time. Since the FIFO realized is hardware, once the system is synthesized its bus width cannot be changed. This kind of FIFO cannot be used in a scenario where the read and write clocks are of different frequencies. The FIFO designed has the same clock frequency for both the read and the write cycles, which limits its applicability; generally a buffer should also be usable in devices where the read and write frequencies differ. So this FIFO can be extended to respond to two different clocks: one for write cycles and one for read cycles.

Synthesis of the testbench code is shown in figure 6.

REFERENCES

[1] Queue (abstract data type), retrieved from https://en.wikipedia.org/wiki/Queue_(abstract_data_type), on 13 June 2015.
[2] FIFO (computing and electronics), retrieved from https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics), on 14 June 2015.
[3] Samir Palnitkar, Verilog HDL.
[4] Clifford E. Cummings and Peter Alfke, "Simulation and Synthesis Techniques for Asynchronous FIFO Design with Asynchronous Pointer Comparisons," SNUG 2002, San Jose, CA, 2002.
[5] Dadhania Prashant C., "Designing Asynchronous FIFO," Journal of Information, Knowledge and Research in Electronics and Communication Engineering, vol. 02, issue 2, pp. 561-563, Nov. 12 - Oct. 13.

Fig. 6. System of testbench using XILINX ISE 14.3

Simulation of the design is shown in Figure 7.

Fig. 7. Waveform using ModelSim SE 6.2b


Uncovering the Dark Energy of the Universe

Rohan Jain
Electronics and Communication Department, Northern India Engineering College, Delhi, India
E-mail: rohanjain.06.01@gmail.com

Abstract—Dark energy is a theoretical energy which has been considered the form of energy leading to the accelerated expansion of this universe since its inception. Many leading scientists, cosmologists and astronomers have accepted this hypothetical form of energy to be a major cause of this expansion of the universe. The standard model of cosmology claims that 68.3% of the total mass-energy of the universe is in the form of dark energy. More accurate data are required to analyze the reason for the expansion of the universe. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Determining the equation of state for this hypothetical energy is one of the biggest efforts in observational cosmology today. This paper reviews some of the very important works, as they touch upon the analysis already done, to give a clearer insight into this field of cosmology; the review suggests new research directions to strengthen and support the existing theories and/or identify patterns among existing research studies.

Keywords—dark energy, Lambda CDM model, baryons
I. INTRODUCTION

The Lambda CDM model is the result of combining cosmology's standard FLRW metric with the cosmological constant. Because of the precise agreement of its results with the observations, the Lambda CDM model has been recognized as the standard model of cosmology. Dark energy has been used as a crucial ingredient in a recent attempt to formulate a cyclic model for the universe.
In this paper, various works in the field of cosmology pertaining to dark matter, dark energy and the expansion of the universe are reviewed. Section 2 contains some of the key terms required to understand the dark energy of the universe. Some key concepts and models that have tried to uncover the dark mask on the universe are mentioned in Section 3. Finally some results are drawn in Section 4.
II. KEY TERMS

A. Dark Energy

Michael Turner used "dark energy" as a new technical term for the first time in 1998; the term was, however, inspired by the term "dark matter" coined by Fritz Zwicky in the 1930s. Dark energy is a theoretical energy which has been considered the form of energy leading to the accelerated expansion of this universe since its inception. Many leading scientists, cosmologists and astronomers have accepted this hypothetical form of energy to be a major cause of this expansion. By the 1930s, the missing mass problem of big bang nucleosynthesis and large-scale structure was established. It was theorized by the majority of scientists and cosmologists that, apart from all the known sources of energy, there was some unknown component to our universe. Initially, observations of supernovae and the data related to their accelerated expansion formed strong evidence for dark energy. After a series of arduous testing procedures and cosmological observations, the Lambda CDM model was made. For the past 9 billion years or so, this kind of energy has been present; Hubble Space Telescope observations have confirmed this fact. Figure 1 reveals changes in the rate of expansion since the universe's birth 15 billion years ago. The shallower the curve, the quicker the rate of expansion. About 8 billion years ago the objects began flying apart more quickly; as a result, the curve started changing appreciably at that point in time. It has been theorized by cosmologists that an enigmatic, dark and unknown kind of force is pulling galaxies apart.


ρ (rho) is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ [1].
Fig. 1. Pictorial representation of the change in the rate of expansion of the universe since its inception 15 billion years ago.

Einstein was the first person to really step into this field, saying that dark energy is nothing but an anti-gravity effect, or that it could simply be a characteristic property of free space. Since free space is expanding and its energy density, i.e. the cosmological constant, remains constant, with increasing bulk of free space the energy possessed by free space is also increasing. Hence this could be virtually true to some extent. It could be estimated that more and more free space is coming into existence; therefore the energy of empty space is also increasing, becoming a reason for the accelerated expansion of the universe.
B. Integrity of Cosmological Pressure Specifications

The expansion of the universe has been accelerating ever since. This can be easily understood by virtue of pressure dynamics. The energy density and the cosmological pressure are two quantities which are equal in strength but opposite in functionality. The conventions of classical thermodynamics unveil the reason for the cosmological constant having negative pressure. If work is to be done on the body, then energy needs to be dissipated from inside the body. A change in volume dV requires work done equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a body full of vacuum actually increases when the volume increases (dV is positive), because the energy is equal to ρV.
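In units with c = 1, this argument can be written compactly (a standard textbook derivation, not specific to the works reviewed here):

```latex
E = \rho V, \qquad dE = -P\,dV
\;\Longrightarrow\; \rho\,dV = -P\,dV
\;\Longrightarrow\; P = -\rho,
\qquad w \equiv \frac{P}{\rho} = -1 .
```

A vacuum of constant density ρ gains energy as its volume grows, and the first law can accommodate this only if the pressure is negative, which is what drives accelerated expansion in the Lambda CDM model.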

C. Dark Matter

More about dark matter is unknown than is known. At this point we could imagine the amount of knowledge we have to be analogous to an atom in the vast sea of knowledge yet to be known. Stars and planets emit some form of energy which can be detected. The energy in the scope of our study is dark, which means celestial bodies like stars and planets can't be dark. However, bodies like dead stars and black holes can very well fall into this category. It has been determined that a very small amount of visible matter is present to make up the 25% of observations. Baryons are the particles which make up this kind of dark matter. Also, it has been claimed that there are actually no such things as dark clouds made up of dark matter: baryonic particle clouds could have been detected if any form of energy had been emitted or absorbed by them. Any kind of antimatter emits gamma rays; no such rays have been detected for the presence of dark matter. This fact undermines the belief that dark matter is some kind of antimatter. The last remaining possibility for dark matter could be large black holes, which can be ruled out on account of the detection of the many gravitational lenses visible to us.
D. Dark Matter Core Defies Einstein's Assumptions

Figure 2 shows the distribution of dark matter, galaxies, and hot gas in the core of the merging galaxy cluster Abell 520. These findings could potentially pose a challenge to the canons. Apart from classical thermodynamics, quantum mechanics too has given several theories about the subject matter. In this theory, the void of space is actually full of temporary ("virtual") particles that continually form and then disappear. And certainly, when the energy possessed by this vacuum was calculated theoretically, an error occurred; an error that was not too small to be neglected but too big to be calculated. That error was of the order of 10^120. Another explanation for dark energy is that of a dynamical energy fluid or field, something that fills all of space but whose effect on the

accelerated swelling up of the universe is the opposite of that of matter and normal energy. It could be something like black-hole-like wave matter, because it is dark and possesses its own energy, except that instead of imparting energy it absorbs it.

A new branch of physics needs to be developed to really understand the core of this. The whole universe is dark, i.e. no matter is present there that would absorb light and emit colours. But still there is something that has been the reason behind the accelerated expansion.
Fig. 2. Distribution of dark matter, galaxies, and hot gas in the core of the merging galaxy cluster Abell 520.

E. Newtonian and Einsteinian Constant

It has been known for years that our universe exploded out of nothing, as discussed and elaborated by the Big Bang theory, about 15 billion years ago. It is believed and experimentally confirmed that our universe is expanding, but the question is whether it is expanding in an accelerated or a retarded manner. Later on, Newtonian classical mechanics and the Einsteinian relativistic universe gave way to studies that could really answer this question. Frequent experiments and hypotheses have been put up, and amazingly a very curious result has come up. In the early 1950s the talk was that the universe is expanding very slowly. It was believed that the universe consists of blank free space and matter. Matter has attracting properties; accordingly, due to the attraction of matter, the universe should one day collapse. But to the amazement of the scientists, in the 1990s the Hubble Space Telescope discovered that actually it is completely opposite of what was believed. The universe is expanding, and at a faster rate. The force of matter gravity is being overshadowed by some other force which is completely dominating this universe, and both the reigning legends Newton and Einstein have been sidelined.

III. IMPORTANT THEORIES

A. Foundation of the Theory of Dark Energy

The basis of the theory of universe dark energy, a solution of Einstein's cosmological constant problem, a physical interpretation of universe dark energy and of Einstein's cosmological constant Lambda (= 0.29447×10^-52 m^-2), and values of the universe dark energy density (= 1.2622×10^-26 kg/m^3 = 6.8023 GeV), the universe critical density (= 1.8069×10^-26 kg/m^3 = 9.7378 GeV), the universe matter density (= 0.54207×10^-26 kg/m^3 = 2.9213 GeV), and the universe radiation density (= 2.7103×10^-31 kg/m^3 = 1.4558 MeV) have been given [2].
Foundation of theory of dark energy was laid down
after the space-time was perfectly modeled
geometrically. This modeling was based on geometric
four-dimensional continuum cosmic fluid and
inferred that time generated the momentum.
Considering the fact that momentum is a mechanical
concept, it must rather be equal to energy of the
universe but negatively. Such a theory can thrive only
if time is considered to be mechanical in nature rather
something else. Einstein once considered the spacetime itself to be the dark energy; however no
substantiating evidence has been found. Dark energy
has been considered to be fluidic in nature, the fourth
law of thermodynamics is proposed, a new
formulation and physical interpretation of Kepler's
Three Laws are presented [2]. Furthermore, based on
the fact that it is being observed that it is just the
history of our universe, on the Big Bang Theory,
Einstein's General Relativity, Hubble Parameter, the
estimated age of the universe, cosmic inflation theory
and on NASA's observation of supernova la [2].
The accelerated expansion of the universe can be plotted from the data inferred from the above-mentioned theories using a second-order (parabolic) parametric model. The foundation of dark energy goes on to show that the universe is approaching the cosmic horizon line or, in other words, the vanishing point. If the calculations are correct, then the universe is approaching a point where its fate will be in danger earlier than expected because of the accelerated expansion of the universe. Considering the symmetry-breaking model and the variational principle of mechanics, the universe will witness an infinitesimally stationary state and a breaking of symmetry. As a result of that, a very massive impulse (a Big Impulse of magnitude ~10³³ times the linear momentum of the universe) will occur soon and, correspondingly, the universe will collapse [2].
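The second-order (parabolic) parametric model mentioned above can be illustrated with a toy fit: given samples of the scale factor over time, a quadratic fit whose second derivative is positive signals accelerated expansion. The data below are invented for illustration only; they are not the supernova measurements used in [2].

```python
import numpy as np

# Synthetic (invented) scale-factor samples a(t); a real analysis
# would use calibrated supernova distance data instead.
t = np.linspace(0.0, 10.0, 50)        # arbitrary time units
a = 1.0 + 0.05 * t + 0.01 * t**2      # toy expansion history

# Fit a second-order (parabolic) parametric model: a(t) = c2 t^2 + c1 t + c0
c2, c1, c0 = np.polyfit(t, a, deg=2)

# The acceleration of the expansion is the second derivative, 2*c2
acceleration = 2.0 * c2
print(f"fitted model: a(t) = {c2:.3f} t^2 + {c1:.3f} t + {c0:.3f}")
print("expansion is accelerating" if acceleration > 0 else "not accelerating")
```

A positive fitted quadratic coefficient is what "accelerated expansion" means in this parametric picture.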
B. Composition of the Universe

Amalgamation of sound theoretical models and the cosmological observations gives us a very unique composition of the universe: ~70% dark energy, ~25% dark matter, and ~5% normal matter. This unique composition has brought several new theories into the picture. What is needed to decide among the dark energy possibilities - a property of space, a new dynamic fluid, or a new theory of gravity - is more data, and better data.

C. Dark Matter with Weak Gravitational Lensing

The domain of weak lensing has deduced that only 4-5% of the universe is made up of visible matter. This is restated in another way by the theory that the universe is composed of far more matter, of which the 4-5% of visible matter is merely the part detected so far; the rest of the universe today appears dark and undetectable to us. Recent developments in astronomical statistics have enabled the reconstruction of maps in this domain of study, which analyses dark matter. The universe is now thought to be mostly composed of an invisible, pressureless matter, potentially a relic from higher-energy theories, called "dark matter" (20-21%), and of an even more mysterious term, described in Einstein's equations as a vacuum energy density, called "dark energy" (70%) [3]. This dark universe seems an unsolvable mystery today, so it could be the next breakthrough in cosmology.
IV.

RESULTS AND CONCLUSION

If baryonic particles are supposed to be the dark matter, then they would need to be present in dark chunks such as brown dwarfs. Even more exotic particles, such as axions or WIMPs (Weakly Interacting Massive Particles), known as halo objects that are part of certain celestial bodies, could make up this dark matter alongside baryons or even independently. It could also be an antimatter-like entity, perhaps a virtual one, which could behave in two ways. Firstly, it could possess properties opposite to those of matter: where matter interacts attractively, this antimatter could interact repulsively, causing the universal expansion. Secondly, it could be antimatter in the sense that it annihilates with matter but, instead of producing gamma radiation, produces some other form of energy wave which, in our view, is the reason why the universe is expanding.

REFERENCES
[1] Rupert W. Anderson, The Cosmic Compendium: The Ultimate Fate of the Universe.
[2] M. Shibli, "The Foundation of the Theory of Dark Energy: Einstein's Cosmological Constant, Universe Mass-Energy Densities, Expansion of the Universe, a New Formulation of Newtonian Kepler's Laws and the Ultimate Fate of the Universe," 3rd International Conference on Recent Advances in Space Technologies (RAST '07), pp. 788-799, 14-16 June 2007.
[3] S. Pires, J. L. Starck, A. Refregier, "Light on dark matter with weak gravitational lensing," IEEE Signal Processing Magazine, vol. 27, no. 1, pp. 76-85, January 2010.
[4] "Dark Energy, Dark Matter," retrieved from http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/ on 13 April 2013.
[5] Andreas Albrecht, "Dark Energy Task Force (DETF) and follow-up work," retrieved from http://albrecht.ucdavis.edu/special-topics/dark-energy-task-force on 30 March 2013.
[6] Brandon Bozek, Augusta Abrahamse, Andreas Albrecht, Michael Barnard, "Exploring Parameter Constraints on Quintessential Dark Energy: The Exponential Model," Phys. Rev.
[7] U. F. S. U. Ibrahim, "Determining the dark matter content of dwarfs and dwarf spheroidal galaxies in the local group cluster using the CAS parameters," IEEE Conference on Space Science and Communication (IconSpace), pp. 182-184, 12-13 July 2011.

Special Issue: National Conference on Recent Innovations in Engineering & Technology (NCRIET-2016), 8-9th April, 2016, held at Northern India Engineering College, New Delhi.
Available online at: www.gtia.co.in

NIEC AT A GLANCE
ISO 9001:2008 & EN ISO 14001:2004 Certified
NAAC Accredited
Northern India Engineering College (NIEC), New Delhi was
established by BBDES, LUCKNOW in the year 2003. NIEC
offers Under Graduate and Post Graduate level full time
Professional programs approved by AICTE, New Delhi in
affiliation with Guru Gobind Singh Indraprastha University
(GGSIPU), New Delhi.
Under the visionary and dynamic guidance of Honorable Chairman, Dr. Akhilesh Das Gupta, and Honorable Vice Chairperson, Mrs. Alka Das Gupta, the college has won laurels and is one of the top institutes across India.

Northern India Engineering College


FC- 26, Shastri Park
New Delhi- 110053
Ph. 011 39905900-99, 32526261-64
Website : www.niecdelhi.ac.in

Published By

Global Technocrats and Intellectuals Association


Website: www.gtia.co.in
