
MOHANDAS COLLEGE OF ENGINEERING AND TECHNOLOGY

ANAD, NEDUMANGAD, THIRUVANANTHAPURAM, KERALA

COLLOQUIUM 11
A NATIONAL LEVEL TECHNICAL FESTIVAL ON 11th AND 12th FEBRUARY 2011

--PROCEEDINGS--

BIOTECHNOLOGY & BIOCHEMICAL ENGINEERING


BT01  GENETIC ENGINEERING OF HUMAN STEM CELLS FOR ENHANCED ANGIOGENESIS USING BIODEGRADABLE POLYMERIC NANOPARTICLES - Rohit Mohan & Liju J. Kumar, Mohandas College of Engineering and Technology (p. 1)
BT02  BIOREMEDIATION - Jayakrishnan U. & Meria M. Dan, Mohandas College of Engineering and Technology
BT03  DESIGNER BABIES - Athira S. & Sanjana A. Nair, Mohandas College of Engineering and Technology
BT04  HUMAN ARTIFICIAL CHROMOSOME - Shreya Sara Ittycheria & Sherin J.S., Mohandas College of Engineering and Technology (p. 14)
BT05  MEDICAL APPLICATIONS OF WIRELESS BODY AREA NETWORKS - Nandana Sasidhar & Meenakshi Sudhakaran, Mohandas College of Engineering and Technology (p. 17)
BT06  CANCER CHEMOPREVENTION BY RESVERATROL - Anushka Bejoy & Gayathri S., Mohandas College of Engineering and Technology (p. 23)
BT07  PERIPHERAL NERVE INJURY AND REPAIR - Alisha Asif & Reshma Nair U., Mohandas College of Engineering and Technology (p. 33)
BT08  HUMAN GENOME PROJECT - Vimal Jayaprakash & Sandhya K., Mohandas College of Engineering and Technology (p. 39)
BT09  REDUCTION OF ALCOHOL INTOXICATION IN EXPERIMENTAL ANIMALS BY RESVERATROL - Parvathy S. Nair & Parvathy R., Mohandas College of Engineering and Technology (p. 50)

COMPUTER SCIENCE AND INFORMATION TECHNOLOGY


CS01  3D INTERNET - Abin Rasheed & Manju N., Mohandas College of Engineering and Technology (p. 56)
CS02  ANALYSIS AND IMPLEMENTATION OF MESSAGE DIGEST FOR NETWORK SECURITY - S. Naresh Kumar & G. Karanveer Dhiman, Adhiyamaan College of Engineering (p. 60)
CS03  ARTIFICIAL INTELLIGENCE - S. Gokul & F. Ivin Prasanna, Adhiyamaan College of Engineering (p. 66)
CS04  ARTIFICIAL INTELLIGENCE IN VIRUS DETECTION AND RECOGNITION - Parvathy Nair & Parvathy R. Nair, Mohandas College of Engineering and Technology (p. 70)
CS05  AUGMENTED REALITY - Renjith R. & Bijin V.S., Mohandas College of Engineering and Technology (p. 76)
CS06  BRAIN FINGERPRINTING TECHNOLOGY - Shalini J. Nair & Anjitha Pillai, Mohandas College of Engineering and Technology (p. 80)
CS07  FACE DETECTION THROUGH NEURAL ANALYSIS - Akhil G. S. & Gibu George, Muslim Association College of Engineering (p. 87)
CS08  ENERGY-EFFICIENT MANAGEMENT OF DATA CENTER RESOURCES FOR CLOUD COMPUTING: A VISION, ARCHITECTURAL ELEMENTS, AND OPEN CHALLENGES - Aby Mathew C. & Arjun Karat, College of Engineering, Thiruvananthapuram (p. 90)
CS09  HOLOGRAPHIC MEMORY - Susan & Parvathy, Mohandas College of Engineering and Technology (p. 99)
CS10  A NOVEL TECHNIQUE FOR IMAGE STEGANOGRAPHY BASED ON BLOCK-DCT AND HUFFMAN ENCODING - Arunima Kurup P. & Poornima D. Sreenagesh, Mohandas College of Engineering and Technology (p. 104)
CS11  NANO TECHNOLOGY - Sreeja S.S., Mohandas College of Engineering and Technology (p. 110)
CS12  REALISTIC SKIN MOVEMENT FOR CHARACTER ANIMATION - Malu G. Punnackal, Mohandas College of Engineering and Technology (p. 113)
CS13  SKINPUT: THE HUMAN ARM TOUCH SCREEN - Manisha Nair, Mohandas College of Engineering and Technology (p. 121)
CS14  TABLET COMPUTING - Anita Bhattacharya & Beeba Mary Thomas, Mohandas College of Engineering and Technology (p. 129)
CS15  THE DEVELOPMENT OF ROAD LIGHTING INTELLIGENT CONTROL SYSTEM BASED ON WIRELESS NETWORK CONTROL - Lekshmi R. Nair & Laxmi Laxman, Mohandas College of Engineering and Technology (p. 132)
CS16  ECONOMIC VIRTUAL CAMPUS SUPERCOMPUTING FACILITY BOINC - Aravind Narayanan P. & Karthik Hariharan, College of Engineering, Thiruvananthapuram (p. 137)
CS17  PROJECT SILPA: PYTHON BASED INDIAN LANGUAGE PROCESSING FRAMEWORK - Anish A. & Arun Anson, Mohandas College of Engineering and Technology (p. 140)

ELECTRICAL AND ELECTRONICS ENGINEERING


EE01  AUDIO SPOTLIGHTING - Liji Ramesan Santhi & Sreeja V., Mohandas College of Engineering and Technology
EE02  AUGMENTED REALITY - Amy Sebastian & Jacqueline Rebeiro, Mohandas College of Engineering and Technology
EE03  CLAYTRONICS - Arun K. & Joseph Mattamana, College of Engineering, Thiruvananthapuram
EE04  CONTROLLING HOUSEHOLD APPLIANCES USING DIGITAL PEN AND PAPER - Arun Antony, Govt. Engineering College, Barton Hill
EE05  FIRE-FIGHTING ROBOT - Abhimanyu Sreekumar & Murali Krishnan, Mohandas College of Engineering and Technology
EE06  FULLY AUTOMATIC ROAD NETWORK EXTRACTION FROM SATELLITE IMAGES - Sameeha S. & Sreedevi D.V., LBS Institute of Technology for Women
EE07  MEMRISTOR - P. Balamurali Krishna & Rajesh R., M. G. College of Engineering
EE08  MOBILE AUTONOMOUS SOLAR COLLECTOR - Anooj A. & Vishnu R. Nair, Mohandas College of Engineering and Technology
EE09  MODERN POWER SYSTEM MODERNISED WAVE ENERGY CONVERTER - Vinod Kumar K. & Vinod A.M., Noorul Islam College of Engineering
EE10  NEW WAY TO MAKE ELECTRICITY - Reshma Ittiachan & Reshma A.R., Mohandas College of Engineering and Technology
EE11  SILICON PHOTONICS - Prasad V.J. & Vishnu R.C., Mohandas College of Engineering and Technology
EE12  SWARM INTELLIGENCE - Aravind Raj & Sarath B.V., Mohandas College of Engineering
EE13  TELEPORTATION - Lekshmy Vijayakumar & Sreedhanya M. Unnithan, LBS College of Technology for Women
EE14  INTRODUCTION TO THE WORLD OF SPIN (A CONCEPT BASED INNOVATIVE TECHNOLOGY OF FUTURE THROUGH ELECTRONICS) - Nikhil G.S. & Rony Renjith, Maria College of Engineering and Technology
EE15  THE ELECTRIC MICROBE - Vishnu R. & Ramesh K. R., Noorul Islam College of Engineering
EE16  WIRELESS HOME AUTOMATION NETWORK - Sreevas S., College of Engineering, Thiruvananthapuram

MECHANICAL ENGINEERING & CIVIL ENGINEERING


ME01  COMPRESSED AIR ENGINES - Vivek S. Nath & Saju Joseph, Mohandas College of Engineering and Technology (p. 228)
ME02  DEVELOPMENT OF BASIC AGGLOMERATED FLUX FOR SUBMERGED ARC WELDING - Deviprakash K.J. & Anand Krishnan O.K., Mohandas College of Engineering and Technology (p. 232)
ME03  MAGNETIC REFRIGERATION - Feby Philip Abraham & Ananthu Sivan, Mohandas College of Engineering and Technology (p. 237)
ME04  MICRO-CONTROLLER AIDED GEARBOX AND CHAIN DRIVE FAULT RECOGNITION SYSTEM - Rahul R., Mohandas College of Engineering and Technology (p. 243)
ME05  REVERSE ENGINEERING OF MECHANICAL DEVICES - Libin K. Babu & Karthik A.S., Mohandas College of Engineering and Technology (p. 246)
ME06  SMART CARS - Athul Vijay & Arjun Sreenivas, Mohandas College of Engineering and Technology (p. 251)
ME07  MORPHING AIRCRAFT TECHNOLOGY & NEW SHAPES FOR AIRCRAFT DESIGN - V. Vikram, Mohandas College of Engineering and Technology (p. 256)

STREAM 1
BIO-TECHNOLOGY & BIO-CHEMICAL ENGINEERING

Genetic Engineering of Human Stem Cells for Enhanced Angiogenesis Using Biodegradable Polymeric Nanoparticles
Rohit Mohan & Liju J. Kumar
(Sixth Semester, Biotechnology & Biochemical Engineering, Mohandas College of Engineering & Technology)

Abstract
Stem cells hold great potential as cell-based therapies to promote vascularization and tissue regeneration. Vascular endothelial growth factor (VEGF) high-expressing, transiently modified stem cells promote angiogenesis. Nonviral, biodegradable polymeric nanoparticles were developed to deliver the hVEGF gene to human mesenchymal stem cells (hMSCs) and human embryonic stem cell-derived cells (hESdCs). Treated stem cells demonstrated markedly enhanced hVEGF production, cell viability, and engraftment into target tissues. S.c. implantation of scaffolds seeded with VEGF-expressing stem cells (hMSCs and hESdCs) led to 2- to 4-fold-higher vessel densities 2 weeks after implantation, compared with control cells or cells transfected with VEGF by using Lipofectamine 2000, a leading commercial reagent. Four weeks after intramuscular injection into mouse ischemic hindlimbs, genetically modified hMSCs substantially enhanced angiogenesis and limb salvage while reducing muscle degeneration and tissue fibrosis. These results indicate that stem cells engineered with biodegradable polymer nanoparticles may be therapeutic tools for vascularizing tissue constructs and treating ischemic disease. Genetic modification of stem cells to express angiogenic factors enhances the efficacy of stem cells for therapeutic angiogenesis. However, previous studies have largely relied on viral vectors to deliver the therapeutic genes to stem cells, which are associated with safety concerns. Nonviral delivery systems, such as polyethylenimine and Lipofectamine, are often associated with toxicity and provide significantly lower transfection efficiency than viral-based approaches.

Key Words: VEGF, hMSC, hESdC, angiogenic, DNA nanoparticles.

Introduction


Angiogenesis is the physiological process involving the growth of new blood vessels from pre-existing vessels. It is an important process occurring in the body to heal wounds and to restore blood flow to tissues after injury. However, some injuries or genetic modifications may demand enhanced angiogenesis. Human stem cells modified using optimized poly(β-amino ester)-DNA nanoparticles to express an angiogenic gene encoding VEGF can enhance angiogenesis.

Angiogenesis
The process of angiogenesis occurs as an orderly series of events:
1. Diseased or injured tissues produce and release angiogenic growth factors that diffuse into the nearby tissues.
2. The angiogenic growth factors bind to specific receptors located on the endothelial cells (EC) of nearby pre-existing blood vessels.
3. Once growth factors bind to their receptors, the endothelial cells become activated. Signals are sent from the cell's surface to the nucleus, and the endothelial cell's machinery begins to produce new molecules, including enzymes.
4. These enzymes dissolve tiny holes in the sheath-like covering (basement membrane) surrounding all existing blood vessels.
5. The endothelial cells begin to divide (proliferate) and migrate out through the dissolved holes of the existing vessel towards the diseased tissue.
6. Specialized adhesion molecules called integrins serve as grappling hooks to help pull the sprouting new blood vessel forward.
7. Additional enzymes are produced to dissolve the tissue in front of the sprouting vessel tip in order to accommodate it. As the vessel extends, the tissue is remodeled around the vessel.
8. Sprouting endothelial cells roll up to form a blood vessel tube.
9. Individual blood vessel tubes connect to form blood vessel loops that can circulate blood.
10. Finally, newly formed blood vessel tubes are stabilized by specialized muscle cells that provide structural support. Blood flow then begins.

Materials and Methods


Transfection
Bone marrow-derived hMSCs and hESdCs were obtained and cultured as previously described. Cells were transfected with VEGF plasmid or control plasmid (EGFP or luciferase) using optimized poly(β-amino ester) transfection conditions. Lipofectamine 2000 (Invitrogen), a commercially available transfection reagent, was used for control transfections.

S.C. Implantation Of Stem Cell-Seeded Scaffolds


All procedures for surgery were approved by the Committee on Animal Care of the Massachusetts Institute of Technology. All constructs were implanted into the s.c. space in the dorsal region of athymic mice. Three experimental groups were studied for hMSCs transfected with: (i) C32-103/VEGF, (ii) C32-117/VEGF, and (iii) C32-122/VEGF. Three control groups included (i) hMSC-C32-103/Luc, (ii) hMSC-Lipo/VEGF, and (iii) acellular scaffold alone. For hESdCs, cells were transfected using either C32-117/VEGF or C32-117/Luc, and the acellular scaffold group was examined as a blank control. All tissue constructs were harvested at 2 or 3 weeks after implantation for analyses.

Polymer Synthesis
Poly(β-amino esters) (PBAEs) were synthesized following a two-step procedure, in which C32-Ac was first prepared by polymerization using excess diacrylate over amine monomer, and C32-Ac was then reacted with various amine reagents to generate amine-capped polymer chains (Fig. 1B). Here, we chose three leading end-modified C32 polymers (C32-103, C32-117, and C32-122), which demonstrated high transfection efficiency in stem cells.

Transplantation Of Stem Cells Into Ischemic Mouse Hindlimb


Hindlimb ischemia was induced in a mouse model as previously described (6). Immediately after arterial dissection, cells were suspended in 100 μL of hMSC growth medium and injected intramuscularly into two sites of the gracilis muscle in the medial thigh. Five experimental groups were examined as follows: (i) PBS, (ii) no transfection, (iii) hMSC-C32-122/EGFP, (iv) hMSC-Lipo/VEGF, and (v) hMSC-C32-122/VEGF.

Results
In Vitro VEGF Production
VEGF production by transfected stem cells was examined by measuring the VEGF concentration in the supernatant of transfected cells using ELISA. Four days after transfection, VEGF secretion from PBAE-transfected hMSCs or hESdCs was 1- to 3-fold higher than from their respective untransfected controls and 1- to 2-fold higher than from cells transfected with Lipofectamine 2000. VEGF secretion from day 4 to day 9 decreased slightly but remained significantly higher in the PBAE-transfected groups than in the control groups. Cell viability after PBAE-mediated transfection was 80-90% in both stem cell types.

Figure 1.
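The fold-difference figures quoted here are simple ratios of mean VEGF concentrations measured by ELISA. As a minimal illustration only (the function name and the replicate numbers below are hypothetical, not data from this study), such a fold-change can be computed as follows.

```python
# Illustrative only: fold-change of secreted VEGF relative to an untransfected
# control, as reported from ELISA supernatant measurements. Numbers are made up.
from statistics import mean

def fold_change(treated_pg_ml, control_pg_ml):
    """Ratio of mean treated to mean control VEGF concentration (pg/mL)."""
    return mean(treated_pg_ml) / mean(control_pg_ml)

if __name__ == "__main__":
    pbae_transfected = [420.0, 455.0, 390.0]   # hypothetical replicate readings
    untransfected    = [150.0, 140.0, 160.0]
    print(f"{fold_change(pbae_transfected, untransfected):.1f}-fold higher")  # ~2.8-fold
```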

Enhancement Of Angiogenesis In The S.C. Space

Angiogenesis in the s.c. space was examined 2 or 3 weeks after implantation. Compared with acellular scaffold controls, scaffolds seeded with VEGF-transfected hMSCs using the three poly(β-amino esters) (PBAEs) led to markedly increased blood vessel migration into the constructs from adjacent tissues (Fig. 2A), whereas the control groups (hMSCs transfected with C32-103/Luc or Lipo/VEGF) did not appear much different from the acellular control. H&E and mouse endothelial cell antigen (MECA) staining of the harvested tissue sections demonstrated 3- to 4-fold-higher vessel density in the hMSC-PBAE/VEGF groups compared with the controls (Fig. 2 A and B), and a similar trend was observed with the hESdC groups.

Figure 2.

Enhanced VEGF Production And Homing Factor Expressions

VEGF production by the transplanted stem cells in vivo was examined by hVEGF ELISA. Two days after transplantation, C32-122/VEGF-transfected hMSCs produced 6-fold-higher VEGF than did untransfected cells or cells transfected with EGFP, and 1-fold higher than the hMSC-Lipo/VEGF group (Fig. 3A). Compared with the normal limb tissues, the ischemic mouse limbs also demonstrated a >20-fold increase in SDF-1 expression (P = 0.001) (Fig. S2A), a chemokine that has previously been shown to stimulate the recruitment of progenitor cells to ischemic tissues (23). Meanwhile, expression of the SDF-1 receptor CXCR4 by the transplanted C32-122/VEGF-modified hMSCs was 3-fold higher than by the Lipo-VEGF-modified hMSCs.

Figure 3.

Enhanced Cell Survival And Localization Of Transplanted Cells

RT-PCR for the human-specific chromosome 17 satellite region confirmed the presence and engraftment of transplanted hMSCs in ischemic tissues, and cell survival was markedly increased in the C32-122/VEGF treatment group (Fig. 3B). Immunofluorescent staining of HNA showed significantly higher localization and retention of transplanted hMSCs in the C32-122/VEGF-treated group compared with the untransfected cells alone or the Lipo/VEGF-modified group.

Improved Ischemic Limb Salvage

The therapeutic efficacy of genetically engineered hMSCs in limb salvage was examined by evaluating the physiological status of ischemic limbs 4 weeks after surgery. The outcome was rated at three levels: limb salvage (similar limb integrity and morphology as the normal limb control of the same animal), foot necrosis, or limb loss. Overall, the control groups demonstrated extensive limb loss and foot necrosis, whereas C32-122/VEGF-transfected hMSCs greatly improved limb salvage (Fig. 4A). Triphenyltetrazolium chloride (TTC) staining of muscle samples harvested from the ischemic limbs also showed more viable tissue in the C32-122/VEGF-treated group, which resembled the appearance of the normal muscle control (Fig. 4B). Compared with the untransfected hMSCs, cells transfected with VEGF by using our polymer increased the percentage of limb salvage from 12.5% to 50% and decreased the percentage of limb loss from 60% to 20%. In contrast, groups treated with untransfected hMSCs alone, hMSCs modified with EGFP, or Lipo/VEGF-transfected hMSCs still showed substantial limb loss (50%) and varying degrees of foot necrosis (25% to 40%) (Fig. 4C).

Figure 4.

Reduced Muscle Degradation And Fibrosis In Ischemic Limbs

Ischemic limbs harvested 4 weeks after cell transplantation were used for histological analyses. H&E and Masson's Trichrome staining of the control group (PBS injection) showed muscle degeneration and fibrosis in the ischemic regions (Fig. 4 D and E). Transplantation of untreated hMSCs alone attenuated tissue degeneration to some degree but failed to maintain the large muscle fibrils seen in normal tissue. In contrast, ischemic limbs treated with C32-122/VEGF-transfected hMSCs displayed substantially reduced tissue degeneration (Fig. 4D) and minimal fibrosis (Fig. 4E and Fig. S5).

Discussion

Transplantation of PBAE/VEGF-modified stem cells significantly enhanced angiogenesis in a mouse s.c. model and in a hindlimb ischemia model. In contrast, vessel density in the control groups (untransfected hMSCs or hMSCs transfected with C32-122/EGFP) was 50% lower than in the experimental group (C32-122/VEGF). This indicates that cells transfected with polymer alone do not have significant effects on angiogenesis. Furthermore, cells transfected with VEGF by using Lipofectamine 2000 showed only modest efficacy in angiogenesis in both models. ELISA data showed that Lipo-VEGF only slightly increased VEGF protein production (40%), whereas the polymers of interest led to 3-fold-higher VEGF secretion compared with the untransfected controls. These results suggest that a critical threshold of VEGF dose may be required to achieve significant angiogenesis.

Limb ischemia not only led to impaired angiogenesis but also caused abnormal tissue fibrosis. Imaging analysis of tissue sections stained for collagen showed that the fibrotic area in the ischemic region was markedly reduced by injection of hMSCs transfected with C32-122/VEGF nanoparticles, compared with all of the controls. The observed enhanced angiogenesis and reduced tissue necrosis are likely a result of enhanced paracrine signaling from the stem cells. Previous work has shown that untransfected stem cells themselves may secrete a broad spectrum of cytokines (e.g., FGF2 and Sfrp2) that can mediate ischemic tissue survival and repair. Together with the up-regulated production of VEGF by PBAE/VEGF transfection (Fig. S1), these paracrine factors secreted by the stem cells may lead to enhanced angiogenesis, decreased cell apoptosis, and better tissue survival relative to VEGF protein alone. This hypothesis is supported by our in vitro conditioned-medium study: conditioned medium from PBAE/VEGF-transfected stem cells led to increased viability of human endothelial cells under hypoxic (1% oxygen) and serum-free conditions, an in vitro model mimicking ischemia.

Efficient cell engraftment and retention is critical for successful cell-based therapy to promote angiogenesis. To assess the engraftment and survival of transplanted human stem cells in ischemic mouse limbs, we measured human-specific gene expression (chromosome 17 satellite region) in the target mouse tissues, which should be directly proportional to the engraftment and survival of the transplanted human cells. We observed significantly enhanced human chromosome 17 satellite expression and human nuclear antigen (HNA) staining in the hMSC-C32-122/VEGF group (Fig. 3 B and C), which suggests enhanced localization and engraftment of the genetically engineered stem cells at ischemic sites. This is also supported by the significantly up-regulated gene expression of two stem cell homing factors: SDF-1 and its receptor CXCR4. The observed enhanced CXCR4 expression is probably due to VEGF-mediated angiogenic signaling and enhanced cell survival. Previous work reported the use of 3D matrices to facilitate localization of transplanted cells and more sustained delivery of angiogenic factors for revascularization. Injection of alginate microparticles with VEGF protein was shown to enhance the in vivo survival of transplanted cells and the subsequent angiogenesis in hindlimb ischemic tissue. However, alginate microparticles are nondegradable and may not be cleared. In contrast, ex vivo genetic modification allows for a transient, matrix-free approach. Our results suggest that PBAE/VEGF-modified stem cells alone, without matrices, are sufficient to achieve satisfactory cell engraftment and retention.

Conclusion

In summary, this study suggests that stem cells transiently modified with biodegradable polymeric nanoparticles can promote therapeutic angiogenesis. This technology may facilitate engineering and regeneration of large masses of various tissues such as bone and muscle, as well as complex structures that encompass multiple tissue types. We further hypothesize that this approach could be useful in treating other types of ischemic diseases such as myocardial infarction and cerebral ischemia.

References

1. Fan Yang et al., Applied Biological Sciences Special Feature, Proc Natl Acad Sci USA 107(8):3317-3322.
2. (2008) Induction of angiogenesis in tissue-engineered scaffolds designed for bone repair: A combined gene therapy-cell transplantation approach. Proc Natl Acad Sci USA 105:11099-11104.
3. (2002) Angiogenesis by implantation of peripheral blood mononuclear cells and platelets into ischemic limbs. Circulation 106:2019-2025.
4. (2007) Transplantation of nanoparticle-transfected skeletal myoblasts overexpressing vascular endothelial growth factor-165 for cardiac repair. Circulation 116:I113-I120.
5. (2003) Stem-cell homing and tissue regeneration in ischaemic cardiomyopathy. Lancet 362:675-676.
6. (2001) Neovascularization of ischemic myocardium by human bone-marrow-derived angioblasts prevents cardiomyocyte apoptosis, reduces remodeling and improves cardiac function. Nat Med 7:430-436.
7. (2007) Improvement of postnatal neovascularization by human embryonic stem cell-derived endothelial-like cell transplantation in a mouse model of hindlimb ischemia. Circulation 116:2409-2419.

Bioremediation
Jayakrishnan U and Meria M Dan
Third Semester, Biotechnology and Biochemical Engineering, Mohandas College of Engineering and Technology

Abstract
Bioremediation means to use a biological remedy to abate or clean up contamination. This makes it different from remedies where contaminated soil or water is removed for chemical treatment or decontamination, incineration, or burial in a landfill. Microbes are often used to remedy environmental problems found in soil, water, and sediments. Plants have also been used to assist bioremediation processes; this is called phytoremediation. Biological processes have been used for some inorganic materials, like metals, to lower radioactivity and to remediate organic contaminants. With metal contamination the usual challenge is to accumulate the metal into harvestable plant parts, which must then be disposed of in a hazardous waste landfill before or after incineration to reduce the plant to ash. Two exceptions are mercury and selenium, which can be released as volatile elements directly from plants to the atmosphere. The concept and practice of using plants and microorganisms to remediate contaminated soil have developed over the past thirty years.

Key Words: Phytoremediation, contamination, microorganism.

Introduction
* Bioremediation is defined as the process whereby organic wastes are biologically degraded under controlled conditions to an innocuous state, or to levels below concentration limits established by regulatory authorities.
* This process is mainly carried out by biological agents such as plants, microorganisms, fungi, etc.
* Contaminant compounds are transformed by living organisms through reactions that take place as part of their metabolic processes, i.e., they detoxify substances hazardous to human health and the environment.
* Not all contaminants are easily treated by bioremediation using microorganisms; heavy metals such as cadmium and lead are not readily absorbed or captured by organisms, and the assimilation of metals such as mercury into the food chain may worsen matters. In such cases phytoremediation is used: plants bioaccumulate these toxins in their above-ground parts, which are then harvested for removal.

The organisms absorb the contaminants, induct them into their metabolic pathways and thereby enzymatically transform them into harmless substances. Bioremediation can be effective only where environmental conditions permit microbial growth and activity; its application often involves the manipulation of environmental parameters to allow growth and degradation to proceed at a faster rate. As most bioremediation activities take place in the open environment, the main type of reaction is aerobic, and the rate depends on it.

Principles Of Bioremediation

The name itself gives the idea that this technique uses only living organisms and no other modern technologies.

Types Of Bioremediation

Based on the site of application, bioremediation is classified as:

1. In situ: the application of bioremediation techniques to soil and groundwater at the site. It involves lower cost and less disturbance, since the treatment is provided in place, avoiding excavation and transport of contaminants. The treatment is limited by the depth of soil that can be effectively treated: in many soils, effective oxygen diffusion at rates suitable for bioremediation extends over a range of only a few centimetres to about 30 cm, though depths of 60 cm and greater have been effectively treated in some cases. The most important land treatments are bioventing, biodegradation, biosparging and bioaugmentation.

2. Ex situ: the application of bioremediation techniques to soil and groundwater that have been removed from the actual site. The particular region to be remediated is removed via excavation or pumping. Important ex situ techniques are land farming, composting, biopiles and bioreactors.

Based on the type of organisms used, bioremediation is classified as microbial bioremediation, mycoremediation and phytoremediation.

1. Microbial bioremediation mostly occurs naturally, through the natural microbial flora that suits the conditions and performs its life activities, thereby consuming or transforming the contaminants. Sometimes, in order to boost growth, fertilizers are added (biostimulation). The bioremediators are further boosted to break down contaminants when matched strains are added. Deinococcus radiodurans, the most radioresistant organism known, has been modified to consume and digest toluene and ionic mercury from highly radioactive nuclear waste. On the basis of how the process takes place, microbial bioremediation is divided into:
   - Natural bioremediation, the biodegradation of dead organisms, which is a natural part of the carbon, sulphur and nitrogen cycles. The microbes utilize the chemical energy from the waste to grow, mainly converting the carbon and hydrogen in contaminants to carbon dioxide.
   - Managed bioremediation, which is applied by people; the main condition is the presence of a favourable environment. R. L. Raymond was awarded a patent for the bioremediation of gasoline.

2. Mycoremediation is the use of fungal mycelia to remediate contamination. One of the primary roles of fungi in the ecosystem is decomposition, which is performed by the mycelium. Mycelia secrete extracellular enzymes and acids that break down lignin and cellulose, the two main building blocks of plant fibres. The natural microbial community participates with the fungi to break down contaminants, eventually into carbon dioxide and water. Wood-degrading fungi are particularly effective in breaking down aromatic pollutants as well as chlorinated compounds.

3. Phytoremediation is the use of plants to remove contaminants from soil and water. It is useful where there are widespread heavy organic and metallic pollutants. The plants act as a filter to capture the contaminants and then metabolize them. There are five types of phytoremediation techniques, classified based on the fate of the contaminant: phytoextraction, phytodegradation, phytostabilization, rhizodegradation and rhizofiltration.

Factors Affecting Bioremediation


- Microbial population: a population that suits the environment and can biodegrade all of the contaminants.
- Oxygen: enough to support aerobic biodegradation.
- Temperature: appropriate temperatures for microbial growth (approximately 0-40 °C).
- pH: the best range is from 6.5 to 7.5.
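As a rough illustration of how the favourable ranges listed above might be applied in practice, the following Python sketch flags which measured site parameters fall outside them. The function names are hypothetical, and the dissolved-oxygen threshold is an assumption added for the example (the list above only says "enough" oxygen); the temperature and pH ranges are taken directly from the list.

```python
# Illustrative sketch only: checks measured site conditions against the
# favourable ranges listed above (temperature 0-40 degrees C, pH 6.5-7.5,
# and sufficient dissolved oxygen for aerobic biodegradation).

FAVOURABLE = {
    "temperature_c": (0.0, 40.0),   # microbial growth range from the list above
    "ph": (6.5, 7.5),               # best pH range from the list above
    "dissolved_oxygen_mg_l": (2.0, float("inf")),  # assumed threshold for "enough" oxygen
}

def check_site(readings: dict) -> list:
    """Return a list of parameters outside the favourable ranges."""
    problems = []
    for name, (low, high) in FAVOURABLE.items():
        value = readings.get(name)
        if value is None:
            problems.append(f"{name}: no measurement")
        elif not (low <= value <= high):
            problems.append(f"{name}: {value} outside {low}-{high}")
    return problems

if __name__ == "__main__":
    site = {"temperature_c": 18.0, "ph": 5.9, "dissolved_oxygen_mg_l": 3.1}
    print(check_site(site))   # -> ['ph: 5.9 outside 6.5-7.5']
```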

Advantage Of Bioremediation
As it is a natural process, bioremediation is generally accepted by the public. Theoretically, bioremediation is useful for the complete destruction of a wide variety of contaminants or their transformation into harmless products.

Bioremediation can often be carried out on site, often without causing a major disruption of normal activities. Bioremediation can prove less expensive than other technologies that are used for clean-up of hazardous waste.

Disadvantage Of Bioremediation

Bioremediation is limited to compounds that are biodegradable, and not all compounds are susceptible to rapid and complete degradation. There are concerns that the products of biodegradation may be more persistent or toxic than the parent compound. Biological processes are often highly specific, and it is difficult to extrapolate from bench- and pilot-scale studies to full-scale field operations. Bioremediation also takes longer than other treatment options. There is no accepted definition of "clean"; evaluating the performance of bioremediation is difficult, and there are no acceptable end points for bioremediation treatments.

References

1. M. Vidal, Dipartimento di Chimica Inorganica, Metallorganica e Analitica, Università di Padova, via Loredan, Padova, Italy. Bioremediation: An Overview, 2001.
2. Bioremediation - Answer.com.
3. Bioremediation - Wikipedia.

Designer Babies
Athira S. and Sanjana A Nair
(S6, Biotechnology and Biochemical Engineering, Mohandas College of Engineering and Technology, Anad, Nedumangad - 695544)

ABSTRACT
Every parent wants the best in the world for their child... but how would it be if parents could choose the sex of their child? How would it be if intelligence, beauty, even body type, colour of hair and eyes, etc. were just a matter of choice? With the advances in genetic engineering, this is now becoming a reality. A "designer baby" refers to a baby whose genetic makeup has been artificially selected by genetic engineering, using methods like PGD combined with in vitro fertilisation, to ensure the presence or absence of particular genes or characteristics. This process rules out any unknowns in childbirth, allowing us to determine not only the child's sex but also its future! Although commendable when used to detect or prevent diseases and disorders, the creation of designer babies becomes questionable when used to choose the sex or IQ of a child. Objections to the idea of designer babies include the termination of embryos, and many disapprove of such methods on moral and religious grounds. In the end it is for us to choose: should we design 'perfect' babies or leave it to God?

Keywords: IVF, PGD, Genetic engineering


Introduction

During the last few decades, research in genetic engineering has been advancing at lightning speed. Recent breakthroughs have presented us with unforeseen promises, yet at the same time with complex predicaments. The promises of genetically modifying humans to improve their well-being and to treat debilitating illnesses are becoming a reality. However, our newfound knowledge in genetics may also enable us to engineer our own genetic blueprints: to enhance our muscles, height, and intelligence, to choose the sex, hair colour, and even personality of our children, and to create super-humans that seem perfect. The ethical predicaments surrounding genetic engineering are vast, complex, and profound.

What Is A Designer Baby?

A "designer baby" refers to a baby whose genetic makeup has been artificially selected by genetic engineering combined with in vitro fertilisation to ensure the presence or absence of particular genes or characteristics. The term was popularized by the media and was not originally used by scientists.

Historical Background

In 1976, the first successful genetic manipulation took place on mice, in an effort to produce more accurate disease models and test subjects. These mice were modified at the germline stage: that is, permanent genetic changes were induced by transplanting new genes into the mouse's embryo. The real breakthrough happened twenty-five years later, on January 11, 2001, when scientists in Oregon unveiled ANDi, a baby rhesus monkey carrying a new jellyfish gene in his genome. The birth of a genetically modified primate, one of the closest relatives to mankind, heralded the arrival of a new era in human genetic research. One month later, scientists announced in Nature the completion of sequencing, or mapping, of over 97% of the entire human genome, roughly five years ahead of schedule. This represented a crucial step in our march toward fully understanding human disease. Equipped with the new dictionary of the human genome, all we have to do is learn how to use and modify it at our will. In early 2003, New Jersey fertility doctor Jacques Cohen reported the first modification in the human genome. According to Cohen, his pioneering infertility treatments produced two babies with DNA from two different mothers, which represented the first case of human germline genetic modification resulting in normally healthy children. Although such changes in the genetic makeup were minuscule, their implications were symbolically profound. Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania, called it "an ethically momentous shift".

Why Do We Design Babies??

1. Medical reasons:
(a) To prevent a genetic disorder being passed on. Such disorders may be:
- Sex-linked disorders: PGD can be used to determine the sex of the embryo for sex-linked disorders where the specific genetic defect is unknown, variable, or unsuitable for testing on single cells, e.g. Duchenne muscular dystrophy.
- Molecular disorders: PGD can be used to identify single-gene defects such as cystic fibrosis, where the molecular abnormality is testable with molecular techniques.
- Chromosomal disorders: a variety of chromosomal rearrangements, including translocations, inversions, and chromosome deletions, can be detected using Fluorescence In Situ Hybridisation (FISH).
(b) Saviour siblings: a child who is born to provide an organ or cell transplant to a sibling affected with a fatal disease, such as cancer or Fanconi anaemia, that can best be treated by hematopoietic stem cell transplantation. The saviour sibling is conceived through in vitro fertilization. Fertilized zygotes are tested for genetic compatibility (human leucocyte antigen (HLA) typing) using preimplantation genetic diagnosis (PGD), and only zygotes that are compatible with the existing child are implanted. Zygotes are also tested to make sure they are free of the original genetic disease.

2. Non-medical reasons: to choose the sex of a child, or to attempt to produce a preferable offspring.

How Do We Design Babies??

Embryo biopsy, or preimplantation genetic diagnosis (PGD), is a diagnostic procedure used in genetic screening, in which a single cell is removed from an embryo two or three days after it has been conceived through in vitro fertilization. At this age the embryo consists of about eight genetically identical cells. The embryo itself is unaffected and continues to grow while the selected cell's genes are replicated using the polymerase chain reaction and then studied for genetic defects. The procedure allows an embryo to be tested before it is implanted into the womb and ensures that the most suitable embryo develops into a baby. Owing to limitations on the number of tests that can be performed on an embryo and the complexity of the relationship between single genes and physical characteristics, the number of traits that can be tested is severely limited. Nevertheless, several applications of this technology have been especially effective in screening for severe medical conditions. One can detect potential genetic predispositions for Down syndrome and Huntington's disease by analyzing cells containing the embryo's genetic information, even before pregnancy begins. This prevents women from having to decide whether to abort an abnormal fetus, and eliminates the deep grief and economic difficulties that many families are forced to cope with.


Can We Design Babies??

This only works for characteristics controlled by one or two genes. Most traits in human beings, however, are controlled by a range of genes and so cannot be selected for. We cannot yet add genes inside an embryo, i.e. if you do not carry the genes, you cannot have an embryo with those traits. Further, creating intelligent babies, or beautiful babies, is still a very far-off possibility.

Ethical Issues

Even with the enormous amount of progress in the field, our moral understanding and awareness is still limited in scope. Our ethical vocabulary does not provide us with adequate tools to address the problems posed by advances in genetics. As science outpaces moral understanding, scientists and ethicists do not communicate sufficiently and struggle to articulate their concerns. Proponents argue that genetic engineering can cure more diseases, from cystic fibrosis to spina bifida, than other methods of therapy. Screening embryos for predispositions and risks in genetic diseases is also possible. Advocates of genetic engineering argue that this would enable parents to avoid the emotional hardships and economic burdens that accompany the birth of a child with an incurable disease. As the new technology becomes more widely available, new and better genes will be passed on to others. The social gap between the naturally endowed and everyone else, between those who can afford the technology and those who cannot, will ultimately narrow and disappear, creating a new age of human beings who are happier, smarter, and healthier.

However, some critics and ethicists disagree. They mention the tremendous human safety risks, and argue that one cannot prevent the misuse of the technology for non-medical purposes, such as enhancing one's athletic performance. Altering a baby's genetic traits and manipulating our own nature, in this view, demeans the uniqueness of each individual and thus undermines our humanity. Ethicists contend that genetic engineering devalues the meaning of parenthood, where children become merely consumer goods and properties of their parents. Moreover, opponents argue that advances in genetics are not fuelled by justifiable societal needs, but by novel biomedical opportunities. Those who can pay for the new technology will make themselves "better than well", widening the existing social gap between them and those who cannot afford it. No one knows for sure what the social consequences are if we play our own God. Should we allow humans the choice of being genetically modified? Should parents have the right to design and alter their children at will? Should current research in human genomics be banned completely? What other options are available? Ultimately, who finally decides on these matters?

Designer Babies - In The News!!

- US, MAY 2000: The Nash family made medical history by having a baby boy who had been selected using PGD to be a perfect tissue match for his very ill older sister. His sister suffered from a rare genetic disease, so tissue from her new-born brother's placenta was used to restore her to health.
- UK, APRIL 2003: An Appeal Court decision overturned a ban on the use of IVF treatment to help save a critically ill boy. Zain Hashmi, six, from Leeds, requires the treatment for beta thalassaemia major, a debilitating genetic blood disorder.
- 18 JAN 2011: The Leopoldina, Germany's national academy of sciences, published a report strongly recommending that PGD of early embryos be allowed by law when couples know they carry genes that could cause a serious incurable disease if passed on to their children.
- 8 JAN 2011: At a Thai fertility clinic in the Phuket International Hospital, clients can have babies "made to order" in the gender of the parents' choice and, it is claimed, risk-free of hereditary disease. In Australia, the UK, Sweden and France, genetically predetermining a child's gender is banned unless it is necessary to avoid passing on a serious genetic abnormality or disease.

Future Implications

In the future, the new technologies may offer parents the possibility to enhance their children. As more and more genes are discovered to be associated with specific functions, parents could potentially examine the genetic makeup of their fetuses and modify them by inducing changes in their embryonic stem cells. This could enhance a child's mental and physical abilities, from being taller to having the potential to master music and chess. Most parents simply want their children to be the best they could be. With the new genetics, their dreams may finally be realized. Lee Silver, a professor of molecular biology and public affairs at Princeton, called this "no different than giving your child advantages like piano lessons or private school".

From a child's point of view, the genetic enhancements imposed upon him or her by parents may pose a threat to freedom of action. Whether the child succeeds in life is not wholly determined by his or her own efforts, but rather by parental decisions made prior to birth. The child might no longer accept responsibility for the things they do. There are many social ramifications of manipulating a child's genetic makeup. According to Kass, "It's naive to think that you can go in there with the traits that deal with higher human powers without [causing] real changes in other areas." The ripple effects of adding a new gene are unknown. What would our society become if the next generation can live up to 120 years? There would be a population crisis, the impacts of which would be felt everywhere, but especially severely in developing countries where food and housing are scarce. As science outpaces social development, the complications of elderly care also arise. Improvements in medical technologies demand higher costs, and as people live longer, healthcare costs would skyrocket.

The ultimate consequence, as proponents of gene therapy predict, is the narrowing of divisions currently in place in our society: from social, to ethical, to economical. Many social divides exist simply because some of us are genetically better endowed than others, and are doing jobs with better compensation. Some children are born with better athletic prowess, quicker mathematical minds, and more acute visual senses than others. As a result, those lacking the genes will be at a disadvantage. With the new technology, the boundaries between different classes, be they social, economical, or personal, will blur as time progresses. Proponents of genetic engineering argue that the technologies may be expensive initially, but just like all other important technologies such as telephones and computers, they will not be out of reach for long. As the techniques become widely available, enhanced genes will become more ubiquitous through new therapy and traditional means, and genetic gaps will close as a result.

But critics adopt the opposite view: genetic engineering, they say, would not only deepen existing class divisions, but also create new ones. Unlike other ubiquitous technologies such as refrigerators and televisions, the benefits of gene therapy will be out of reach for most. Many people will refuse to accept gene therapy even if the enhancements are made free, due to religious and other personal reasons. Others caution that it may take longer than usual before the technology becomes widely available, due to high cost and lack of efficacy. Therefore, the economic gap between those who can afford this technology and those who cannot will only deepen in the meantime. Looking into the future, our new society may start to resemble those dreaded worlds from science fiction novels. We must thus carefully consider the consequences of the technology before widely implementing it. Otherwise, we will be facing more ethical and moral questions than we can ever imagine.

References
1. Tom Abate. "First gene-altered monkey hailed as research tool / opponents concerned about ethical issues." The San Francisco Chronicle, Jan 12, 2001.
2. E. S. Lander et al. "Initial sequencing and analysis of the human genome." Nature, 2001, 860-921.
3. McGee, Glenn (2000). The Perfect Baby: A Pragmatic Approach to Genetics. Rowman & Littlefield. ISBN 0-8476-8344-3.
4. Stephen L. Baird. "Designer Babies: Eugenics Repackaged or Consumer Options?" (April 2007), available through Technology Teacher Magazine.
5. Silver, Lee M. (1998). Remaking Eden: Cloning and Beyond in a Brave New World. Harper Perennial. ISBN 0-380-79243-5.
6. Hughes, James (2004). Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. Westview Press. ISBN 0-8133-4198-1.
7. Kass, Leon, et al. Report from the President's Council on Bioethics: Regulation of New Technologies. Washington, DC, March 2004.


Human Artificial Chromosome


Shreya Sara Ittycheria & Sherin J.S.
(S6, Biotechnology and Biochemical Engineering, Mohandas College of Engineering and Technology, Anad, Nedumangad, Trivandrum - 695544)

Abstract
The human artificial chromosome (HAC) is a microchromosome that can act as a new chromosome in a population of human cells. Yeast artificial chromosomes and bacterial artificial chromosomes were created before the human artificial chromosome, which first appeared in 1997. HACs are useful in expression studies as gene transfer vectors and are a tool for elucidating human chromosome function. Grown in HT1080 cells, they are mitotically and cytogenetically stable for up to six months. The use of a similar strategy in human cells to produce human artificial chromosomes (HACs) might be expected to provide an important tool for the manipulation of large DNA sequences in human cells. However, of the three required chromosomal elements, only telomeres have been well defined in human cells to date. It has been demonstrated that telomeric DNA, consisting of tandem repeats of the sequence T2AG3, can seed the formation of new telomeres when reintroduced into human cells (4-6). Recently, two telomeric binding proteins, TRF1 and TRF2, have been described (7-9). The second required element, a human centromere, is thought to consist mainly of repeated DNA, specifically the alpha satellite DNA family, which is found at all normal human centromeres. The production of HACs from cloned DNA sources should help to define the elements necessary for human chromosomal function and to provide an important vector suitable for the manipulation of large DNA sequences in human cells.
Key Words: Artificial chromosome, microchromosome, gene transfer vector

Introduction
A human artificial chromosome (HAC) is a microchromosome that can act as a new chromosome in a population of human cells. That is, instead of 46 chromosomes the cell could have 47, with the 47th being very small, roughly 6-10 megabases in size, and able to carry new genes introduced by human researchers. Yeast artificial chromosomes and bacterial artificial chromosomes were created before the human artificial chromosome, which first appeared in 1997. HACs are useful in expression studies as gene transfer vectors and are a tool for elucidating human chromosome function. Grown in HT1080 cells, they are mitotically and cytogenetically stable for up to six months.

History
John J. Harrington, Gil Van Bokkelen, Robert W. Mays, Karen Gustashaw & Huntington F. Willard of Case Western Reserve University School of Medicine published the first report of human artificial chromosomes in 1997. They were first synthesized by combining portions of alpha satellite DNA with telomeric DNA and genomic DNA into linear microchromosomes.

Construction using Yeast Artificial Chromosome


A human artificial chromosome (HAC) vector was constructed from a 1-Mb yeast artificial chromosome (YAC) that was selected based on its size from among several YACs identified by screening a randomly chosen subset of the Centre d'Étude du Polymorphisme Humain (CEPH) (Paris) YAC library with a degenerate alpha satellite probe. This YAC, which also included non-alpha satellite DNA, was modified to contain human telomeric DNA and a putative origin of replication from the human β-globin locus. The resultant HAC vector was introduced into human cells by lipid-mediated DNA transfection, and HACs were identified that bound the active kinetochore protein CENP-E and were mitotically stable in the absence of selection for at least 100 generations. Microdissected HACs used as fluorescence in situ hybridization probes localized to the HAC itself and not to the arms of any endogenous human chromosomes, suggesting that the HAC was not formed by telomere fragmentation. Our ability to manipulate the HAC vector by recombinant genetic methods should allow us to further define the elements necessary for mammalian chromosome function.

As the time rapidly approaches when the complete sequence of a human chromosome will be known, it is striking how little is known about how human chromosomes function. In contrast, the necessary elements for chromosomal function in yeast have been defined for several years. Three important elements appear to be required for the mitotic stability of linear chromosomes: centromeres, telomeres, and origins of replication. The ascertainment of these elements in Saccharomyces cerevisiae provided the basis for the construction of yeast artificial chromosomes (YACs), which have proven to be important tools both for the study of yeast chromosomal function and as large-capacity cloning vectors. The use of a similar strategy in human cells to produce human artificial chromosomes (HACs) might be expected to provide an important tool for the manipulation of large DNA sequences in human cells. However, of the three required chromosomal elements, only telomeres have been well defined in human cells to date. It has been demonstrated that telomeric DNA, consisting of tandem repeats of the sequence T2AG3, can seed the formation of new telomeres when reintroduced into human cells. And recently, two telomeric binding proteins, TRF1 and TRF2, have been described. The second required element, a human centromere, is thought to consist mainly of repeated DNA, specifically the alpha satellite DNA family, which is found at all normal human centromeres. However, normal human centromeres are large in size and complex in organization, and sequences lacking alpha satellite repeats have also been shown to be capable of human centromere function. As for the third required element, the study of origins of DNA replication has also led to conflicting reports, with no apparent consensus sequence having yet been determined for the initiation of DNA synthesis in human cells.

The production of HACs from cloned DNA sources should help to define the elements necessary for human chromosomal function and to provide an important vector suitable for the manipulation of large DNA sequences in human cells. Two approaches to generate chromosomes with the "bottom-up" strategy in human cells from human elements have recently been described. Harrington et al. (17) synthesized arrays of alpha satellite DNA, which were combined in vitro with telomeres and fragmented genomic DNA, and transfected into HT1080 cells. The undefined genomic DNA component appeared to play an important role in the ability to form HACs, leaving unanswered questions as to what sequences, other than telomeres and alpha satellite DNA, were necessary for chromosome formation. Ikeno et al. (18) used two 100-kb YACs containing alpha satellite DNA from human chromosome 21 propagated in a recombination-deficient strain, which necessitated transient expression of a recombination protein (Rad52) to modify the YAC with telomere sequences and selectable markers (19). Only one of the two YACs was able to form HACs in HT1080 cells, suggesting that not all alpha satellite sequences may be able to form centromeres. Here we report construction of functional HACs from a YAC that was propagated in a recombination-proficient yeast strain and was chosen solely for its size (1 Mb) and the presence of alpha satellite DNA. This YAC contains both alpha satellite and non-alpha satellite DNA and was modified to include a putative human origin of replication and human telomeric DNA. The function and stability of HACs generated from this 1-Mb YAC in a human cell line are described.
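Since the text above emphasises that human telomeres are built from tandem T2AG3 (TTAGGG) repeats, a small illustrative Python sketch is included here: it assembles a synthetic telomeric seed of N repeats and counts the longest tandem run of TTAGGG in a given sequence. The function names and the example sequence are hypothetical and are not part of the cited work.

```python
# Illustrative sketch: working with the human telomeric repeat unit TTAGGG
# (written T2AG3 above). Names and the example sequence are hypothetical.
import re

TELOMERE_UNIT = "TTAGGG"

def telomeric_seed(n_repeats: int) -> str:
    """Build a synthetic telomeric seed sequence of n tandem TTAGGG repeats."""
    return TELOMERE_UNIT * n_repeats

def longest_tandem_run(seq: str) -> int:
    """Length (in repeat units) of the longest uninterrupted TTAGGG run in seq."""
    runs = re.findall(r"(?:TTAGGG)+", seq.upper())
    return max((len(r) // len(TELOMERE_UNIT) for r in runs), default=0)

if __name__ == "__main__":
    seed = telomeric_seed(4)                        # "TTAGGG" repeated 4 times
    example = "GATC" + seed + "ACGTTTAGGGTTAGGGCC"  # hypothetical sequence
    print(longest_tandem_run(example))              # -> 4
```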

Advances in HAC Technology


Human artificial chromosome (HAC) technology has developed rapidly over the past four years. Recent reports show that HACs are useful gene transfer vectors in expression studies and important tools for determining human chromosome function. HACs have been used to complement gene deficiencies in human cultured cells by transfer of large genomic loci also containing the regulatory elements for appropriate expression. And they now offer the possibility to express large human transgenes in animals, especially in mouse models of human genetic diseases.

Conclusion

Artificial chromosomes (ACs) are highly promising vectors for use in gene therapy applications. They are able to maintain expression of genomic-sized exogenous transgenes within target cells without integrating into the host genome. Although these vectors have huge potential and benefits when compared against normal expression constructs, they are highly complex, technically challenging to construct and difficult to deliver to target cells.

References

1. Nature Genetics 15:345-355 (1997), Harrington and Van Bokkelen et al.
2. Grimes et al., Genome Biology 2004, 5:R89.
3. Formation of de novo centromeres and construction of first-generation human artificial microchromosomes. Nature Genetics 15:345-355 (1997), Harrington and Van Bokkelen et al.
4. Advances in human artificial chromosome technology. Larin Z, Mejia J E.
5. Murray A W, Szostak J W (1983) Nature (London) 305:189-193, pmid:6350893.
6. Murray A W, Szostak J W (1985) Annu Rev Cell Biol 1:289-315, pmid:3916318.
7. Burke D T, Carle G F, Olson M V (1987) Science 236:806-812, pmid:3033825.
8. Farr C, Fantes J, Goodfellow P, Cooke H (1991) Proc Natl Acad Sci USA 88:7006-7010, pmid:1871116.
9. Barnett M A, Buckle V J, Evans E P, Porter A C G, Rout D, Smith A G, Brown W R A (1993) Nucleic Acids Res 21:27-36, pmid:8441617.
10. Hanish J P, Yanowitz J L, De Lange T (1994) Proc Natl Acad Sci USA 91:8861-8865, pmid:8090736.
11. Bilaud T, Brun C, Ancelin K, Koering C E, Laroche T, Gilson E (1997) Nat Genet 17:236-239, pmid:9326951.
12. Chong L, van Steensel B, Broccoli D, Erdjument H, Hanish J, Tempst P, de Lange T (1995) Science 270:1663-1667, pmid:7502076.
13. van Steensel B, Smogorzewska A, de Lange T (1998) Cell 92:401-413, pmid:9476899.
14. Lee C, Wevrick R, Fisher R B, Ferguson-Smith M A, Lin C C (1997) Hum Genet 100:291-304, pmid:9272147.
15. Manuelidis L (1978) Chromosoma 66:23-32, pmid:639625.
16. Willard H F (1985) Am J Hum Genet 37:524-532, pmid:2988334.


Medical Applications of Wireless Body Area Networks


Nandana Sasidhar & Meenakshi Sudhakaran
Department of Bio-technology & Bio-Chemical Engineering, Mohandas College of Engineering and Technology, Anad, Nedumangad

Abstract
Wireless Body Area Networks (WBANs) provide efficient communication solutions to ubiquitous healthcare systems. Health monitoring, telemedicine, military, interactive entertainment, and portable audio/video systems are some of the applications where WBANs can be used. Miniaturized sensors together with advanced micro-electro-mechanical systems (MEMS) technology create a WBAN that continuously monitors the health condition of a patient. This paper presents a comprehensive discussion of the applications of WBANs in smart healthcare systems. We highlight a number of projects that enable WBANs to provide unobtrusive long-term healthcare monitoring with real-time updates to the health center. In addition, we list many potential medical applications of a WBAN, including epileptic seizure warning, glucose monitoring, and cancer detection.

Key Words: Wireless Sensor Networks (WSNs), Body Area Network (BAN), Body Sensor Networks (BSNs), healthcare and medical applications, smart biosensor.

Introduction
Advances in wireless communication and micro-electro-mechanical systems (MEMS) allow the establishment of large-scale, low-power, multifunctional, and (ideally) low-cost networks. Wireless sensor networks (WSNs) are finding applications in many areas, such as medical monitoring, emergency response, security, industrial automation, environment and agriculture, seismic detection, infrastructure protection and optimization, automotive and aeronautic applications, building automation, and military applications. Wireless sensor networks can be effectively used in healthcare to enhance the quality of life provided for patients and also the quality of healthcare services. For example, patients equipped with a wireless body area network (WBAN) need not be physically present at the physician's office for their diagnosis. A body sensor network proves to be adequate for emergency cases, where it autonomously sends data about patient health so that the physician can prepare for treatment immediately. The biosensor-based approach to medical care makes it more efficient by decreasing the response time and reducing the heterogeneousness of the application. These sensors are implanted in the human body and form a wireless network among themselves and with entities external to the body. A wired network would require laying wires within the human body, which is not desirable; therefore, a wireless network is the most suitable option. Such a network can be used for a variety of applications.

These include both data aggregation and data dissemination applications. Biosensors may be used for monitoring physiological parameters like blood pressure and glucose levels and for collecting data for further analysis. Wearable health monitoring systems allow an individual to closely monitor changes in her or his vital signs and provide feedback to help maintain an optimal health status. If integrated into a telemedical system, these systems can even alert medical personnel when life-threatening changes occur. During the last few years there has been a significant increase in the number of wearable health monitoring devices, ranging from simple pulse monitors, activity monitors, and portable Holter monitors, to sophisticated and expensive implantable sensors.

Architecture
The proposed wireless body area sensor network for health monitoring, integrated into a broader multi-tier telemedicine system, is illustrated in Figure 1. The telemedical system spans a network comprised of individual health monitoring systems that connect through the Internet to a medical server tier that resides at the top of this hierarchy. The top tier, centered on a medical server, is optimized to service hundreds or thousands of individual users, and encompasses a complex network of interconnected services, medical personnel, and healthcare professionals. Each user wears a number of sensor nodes that are strategically placed on her body. The primary functions of these sensor nodes are to unobtrusively sample vital signs and transfer the relevant data to a personal server through a wireless personal network implemented using ZigBee (802.15.4) or Bluetooth (802.15.1). The personal server, implemented on a personal digital assistant (PDA), cell phone, or home personal computer, sets up and controls the WBAN, provides a graphical or audio interface to the user, and transfers the information about health status to the medical server through the Internet or mobile telephone networks (e.g., GPRS, 3G). The medical server keeps electronic medical records of registered users and provides various services to the users, medical personnel, and informal caregivers. It is the responsibility of the medical server to authenticate users, accept health monitoring session uploads, format and insert this session data into corresponding medical records, analyze the data patterns, recognize serious health anomalies in order to contact emergency caregivers, and forward new instructions to the users, such as physician-prescribed exercises.
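To make the three-tier data flow concrete, the sketch below shows, in Python, how a personal server might batch readings from body sensor nodes and upload a monitoring session to the medical-server tier. This is only an illustrative sketch of the architecture described above; the sensor interface, the server URL, and the JSON session format are hypothetical assumptions, not details taken from the paper or the projects it cites.

```python
import json
import time
import urllib.request

MEDICAL_SERVER_URL = "https://medical-server.example.org/sessions"  # hypothetical endpoint


def read_sensor(sensor_id):
    """Placeholder for a ZigBee/Bluetooth read from a body sensor node."""
    # A real personal server would receive this over 802.15.4 or 802.15.1.
    return {"sensor": sensor_id, "timestamp": time.time(), "value": 72.0}


def collect_session(sensor_ids, samples_per_sensor=3):
    """Sample each sensor a few times and assemble one monitoring session."""
    return {
        "user": "patient-001",  # assumed identifier
        "readings": [read_sensor(s) for s in sensor_ids for _ in range(samples_per_sensor)],
    }


def upload_session(session):
    """Forward the session to the medical server over the Internet (top tier)."""
    request = urllib.request.Request(
        MEDICAL_SERVER_URL,
        data=json.dumps(session).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:  # would require a real server
        return response.status


if __name__ == "__main__":
    session = collect_session(["ecg", "spo2", "temperature"])
    print(json.dumps(session, indent=2))  # in practice, upload_session(session)
```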

Biosensors

A biosensor is an analytical device for the detection of an analyte that combines a biological component with a physicochemical detector component. It consists of three parts: the sensitive biological element or biological material (e.g. tissue, microorganisms, receptors, enzymes, antibodies), which can be created by biological engineering; the transducer or detector element (working in a physicochemical way: optical, piezoelectric, electrochemical, etc.), which transforms the signal resulting from the interaction of the analyte with the biological element; and the associated electronics or signal processors, which are primarily responsible for displaying the results in a user-friendly way. This last part sometimes accounts for the most expensive portion of the sensor device.

Data Monitoring

The patient's physician can access the data from his or her office via the Internet and examine it to ensure the patient is within expected health metrics (heart rate, blood pressure, activity), ensure that the patient is responding to a given treatment, or verify that the patient has been performing the prescribed exercises. A server agent may inspect the uploaded data and create an alert in the case of a potential medical condition. The large amount of data collected through these services can also be utilized for knowledge discovery through data mining. Integration of the collected data into research databases and quantitative analysis of conditions and patterns could prove invaluable to researchers trying to link symptoms and diagnoses with historical changes in health status, physiological data, or other parameters (e.g., gender, age, weight). In a similar way this infrastructure could significantly contribute to monitoring and studying of drug therapy effects. A vital sign monitoring system is shown in Figure 2.
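As a toy illustration of the kind of server-side analysis described above (checking uploaded sessions against expected health metrics and raising alerts), the following Python sketch flags readings that fall outside configurable limits. The thresholds and the record format are hypothetical examples chosen for illustration, not values taken from this paper.

```python
# Hypothetical per-metric limits; a real system would use clinician-defined,
# patient-specific ranges stored with the electronic medical record.
EXPECTED_RANGES = {
    "heart_rate": (50, 110),   # beats per minute
    "systolic_bp": (90, 140),  # mmHg
    "spo2": (94, 100),         # percent oxygen saturation
}


def find_anomalies(session_readings):
    """Return readings whose value lies outside the expected range for its metric."""
    anomalies = []
    for reading in session_readings:
        low, high = EXPECTED_RANGES.get(reading["metric"], (float("-inf"), float("inf")))
        if not (low <= reading["value"] <= high):
            anomalies.append(reading)
    return anomalies


if __name__ == "__main__":
    uploaded = [
        {"metric": "heart_rate", "value": 64},
        {"metric": "spo2", "value": 89},  # should trigger an alert
    ]
    for alert in find_anomalies(uploaded):
        print("ALERT: abnormal", alert["metric"], "=", alert["value"])
```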

Bioreporters
Bioreporters refer to intact, living microbial cells that have been genetically engineered to produce a measurable signal in response to a specific chemical or physical agent in their environment (Figure 3). Bioreporters contain two essential genetic elements, a promoter gene and a reporter gene. The promoter gene is turned on (transcribed) when the target agent is present in the cell's environment. The promoter gene in a normal bacterial cell is linked to other genes that are then likewise transcribed and then translated into proteins that help the cell in either combating or adapting to the agent to which it has been exposed. In the case of a bioreporter, these genes, or portions thereof, have been removed and replaced with a reporter gene. Consequently, turning on the promoter gene now causes the reporter gene to be turned on. Activation of the reporter gene leads to production of reporter proteins that ultimately generate some type of detectable signal. Therefore, the presence of a signal indicates that the bioreporter has sensed a particular target agent in its environment.

Requirements

Widespread BASN adoption and diffusion will depend on a host of factors that involve both consumers and manufacturers. User-oriented requirements include the following. Value: perceived value can depend on many factors, such as assessment ability, but overall the BASN must improve its user's quality of life. Safety: wearable and implanted sensors will need to be biocompatible and unobtrusive to prevent harm to the user, and safety-critical applications must have fault-tolerant operation. Security: unauthorized access to or manipulation of system functions could have severe consequences; security measures such as user authentication will prevent such consequences. Privacy: BASNs will be entrusted with potentially sensitive information about people, and protecting user privacy will require both technical and non-technical solutions. BASN packaging will need to be inconspicuous to avoid drawing attention to medical conditions, and encryption will be necessary to protect sensitive data, with encryption mechanisms that are resource-aware. Compatibility: BASN nodes need to interoperate with other BASN nodes, existing inter-BASN networks, and even with electronic health record systems, which will require standardization of communication protocols and data storage formats. Ease of use: wearable BASN nodes will need to be small, unobtrusive, ergonomic, easy to put on, few in number and even stylish.

Applications of WBAN

Current healthcare applications of wireless sensor networks target heart problems, asthma, emergency response, and stress monitoring. Cardiovascular diseases: smart sensor nodes that can be installed on the patient in an unobtrusive way can prevent a large number of deaths caused by cardiovascular diseases. The corresponding medical staff can prepare treatment in advance as they receive vital information regarding heart rate and irregularities of the heart while monitoring the health status of the patient. Cancer detection: a sensor with the ability to detect nitric oxide (emitted by cancer cells) can be placed in suspect locations; such sensors have the ability to differentiate cancerous cells from other types of cells. Alzheimer's disease and depression: a wireless sensor network can help homebound and elderly people who often feel lonely and depressed by detecting any abnormal situation and alerting neighbours, family or the nearest hospital. Glucose level monitoring: a biosensor implanted in the patient could provide a more consistent, accurate, and less invasive method by monitoring glucose levels, transmitting the results to a wireless PDA or a fixed terminal, and injecting insulin automatically when a threshold glucose level is reached. Asthma: a wireless sensor network can help the millions of patients suffering from asthma by having sensor nodes that can sense allergic agents in the air and report the status continuously to the physician and/or to the patient himself. Preventing medical accidents: approximately 98,000 people die every year due to medical accidents caused by human error; a sensor network can maintain a log of previous medical accidents and can notify the recurrence of the same accident, and thus reduce many medical accidents. Epileptic seizure and stroke early warning: strokes affect 700,000 people each year in the US and about 275,000 die from stroke each year; a wearable sensor system has the ability to monitor homebound people by measuring motor behavior at home over longer periods and can be used to predict clinical scores.

HipGuard system: the HipGuard system is developed for patients who are recovering from hip surgery. It monitors the patient's leg and hip position and rotation with embedded wireless sensors, and alarm signals can be sent to the patient's wrist unit if hip or leg positions or rotations are incorrect. MobiHealth: MobiHealth aims to provide continuous monitoring to patients outside the hospital environment [36]. MobiHealth targets improving the quality of life of patients by enabling new value-added services in the areas of disease prevention, disease diagnosis, remote assistance, physical state monitoring and even clinical research. Therefore, a patient who requires monitoring for short or long periods of time does not have to stay in hospital. Figure 4 shows the typical structure of the MobiHealth project. UbiMon: UbiMon (Ubiquitous Monitoring Environment for Wearable and Implantable Sensors) aims to provide a continuous and unobtrusive monitoring system for patients in order to capture transient events. This is shown in Figure 5.
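The glucose-monitoring application above is essentially threshold-driven closed-loop control. The short Python sketch below illustrates that idea only in outline; the threshold value, the dosing rule, and the device interfaces are invented for illustration and are not taken from any of the systems or projects mentioned in this paper.

```python
GLUCOSE_THRESHOLD_MG_DL = 180.0  # hypothetical trigger level, not a clinical recommendation


def insulin_dose(glucose_mg_dl, sensitivity=0.02):
    """Toy proportional dosing rule: larger excursions above threshold -> larger dose."""
    excess = glucose_mg_dl - GLUCOSE_THRESHOLD_MG_DL
    return round(max(0.0, excess) * sensitivity, 2)  # units of insulin (illustrative only)


def control_step(glucose_mg_dl, notify, inject):
    """One step of the monitor: report the reading, and inject only above threshold."""
    notify(f"glucose={glucose_mg_dl:.0f} mg/dL")
    dose = insulin_dose(glucose_mg_dl)
    if dose > 0:
        inject(dose)


if __name__ == "__main__":
    # Stand-ins for the wireless PDA display and the implanted pump interface.
    control_step(150, notify=print, inject=lambda d: print("inject", d, "U"))
    control_step(210, notify=print, inject=lambda d: print("inject", d, "U"))
```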

Advantages of WBAN
The biosensor-based approach to medical care makes it more efficient by decreasing the response time and reducing the heterogeneousness of the application. The following are some of the advantages of a WBAN. Small and efficient: wearable BASN nodes will be small, unobtrusive, ergonomic, easy to put on, few in number, stylish and efficient. Remote health monitoring: since the proposed WBAN is connected to a medical server through GPRS, or positioned with the help of a GSM network, remote health monitoring is possible. Low cost: advances in wireless communication and micro-electro-mechanical systems (MEMS) allow the establishment of a large-scale, low-power, multifunctional, and (ideally) low-cost network. Easy to implement: since a three-layer architecture is used, the complexity involved in implementing the proposed WBAN is comparatively low when compared with other personal area networks (PANs). Scope: the scope of this technology is not confined to providing efficient communication solutions for healthcare systems; it also provides efficient solutions for telemedicine, military applications, interactive entertainment, etc.


Advanced diagnosis: the WBAN allows physicians to monitor the health of a patient in real time, and this allows the physicians or the caregiver to provide a more advanced and timely diagnosis.

Challenges

For a quality life, healthcare is always a big concern for an individual. Generally, health monitoring is performed on a periodic check basis, where the patient must remember his or her symptoms; the doctor performs some tests and makes a diagnosis, then monitors the patient's progress along the treatment. Challenges in healthcare applications include: low power, limited computation, security and interference, material constraints, robustness, continuous operation, and regulatory requirements. Power: as most wireless-network-based devices are battery operated, the power challenge is present in almost every area of application of wireless sensor networks, but the limitations of a smart sensor implanted in a person pose an even greater challenge. To deal with these power issues, developers have to design better scheduling algorithms and power management schemes. Computation: due to both limited power and limited memory, computation is also limited. Unlike conventional wireless sensor network nodes, biosensors do not have much computational power. A solution is that sensors with varying capabilities communicate with each other and send out one collaborative data message. Security and interference: one of the most important issues to consider, especially for medical systems, is security and interference. It is critical, and in the interest of the individual, to keep this information from being accessed by unauthorized entities. This is referred to as confidentiality, which can be achieved by encrypting the data with a key during transmission. Material constraints: a biosensor is implanted within the human body; therefore the shape, size, and materials must be harmless to the body tissue. Chemical reactions with body tissue and the disposal of the sensor are also of extreme importance. Continuous operation: continuous operation must be ensured along the life cycle of a biosensor, as it is expected to operate for days, sometimes weeks, without operator intervention. Regulatory requirements: regulatory requirements must always be met; there must be some assurance that these devices will not harm the human body, so it is imperative to have diligent oversight of these testing operations. Robustness: whenever sensor devices are deployed in harsh or hostile environments, the rate of device failure becomes high. Protocol designs must therefore have built-in mechanisms so that the failure of one node does not cause the entire network to cease operation. A possible solution is a distributed network where each sensor node operates autonomously while still cooperating when necessary.
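As a minimal sketch of the "encrypt the data with a key during transmission" idea mentioned under security and interference, the Python snippet below encrypts a reading before it leaves the personal server and decrypts it on the receiving side. It assumes the third-party `cryptography` package; the choice of Fernet (an AES-based recipe), the key-distribution step, and the message format are illustrative assumptions, not a scheme prescribed by this paper.

```python
import json

from cryptography.fernet import Fernet  # requires the `cryptography` package


def make_shared_key():
    """In practice the key would be provisioned securely to both node and server."""
    return Fernet.generate_key()


def encrypt_reading(key, reading):
    """Serialize a sensor reading and encrypt it before it leaves the personal server."""
    return Fernet(key).encrypt(json.dumps(reading).encode("utf-8"))


def decrypt_reading(key, token):
    """Medical-server side: recover the reading from the encrypted upload."""
    return json.loads(Fernet(key).decrypt(token).decode("utf-8"))


if __name__ == "__main__":
    key = make_shared_key()
    token = encrypt_reading(key, {"metric": "heart_rate", "value": 68})
    print(decrypt_reading(key, token))  # -> {'metric': 'heart_rate', 'value': 68}
```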

Future
These advances in diagnostics, medicine and therapy can only become reality by bringing life sciences research to the next level. Life sciences can benefit from progress in nanoelectronics, which is now working at dimensions and with a precision equalling those of biology. In the future, autonomous sensor nodes can be used to create a body area network that is worn on the body and that monitors vital body parameters in an unobtrusive way during daily life. When alarming values are reached, a caregiver can be contacted. Modeling and visualization of WBANs is another future application. Another future concept is the immersive visualization system, which presents to the user a 3D virtual world within which the user can move and interact with virtual objects.

Conclusion
This paper demonstrates the use of wearable and implantable wireless body area networks as a key infrastructure enabling unobtrusive, continuous, and ambulatory health monitoring. This new technology has the potential to offer a wide range of benefits to patients, medical personnel, and society through continuous monitoring in the ambulatory environment, early detection of abnormal conditions, supervised restoration, and potential knowledge discovery through data mining of all gathered information. This paper shows that wireless sensor networks can be widely used in healthcare applications. We believe that the role of wireless sensor networks and body sensor networks in medicine can be further enlarged, and we expect to have a feasible and proactive prototype for a wearable/implantable WBAN system, which could improve the quality of life.

References
[1] D. Estrin, "Embedded Networked Sensing Research: Emerging System Challenges," in NSF Workshop on Distributed Communications and Signal Processing for Sensor Networks, Evanston, Illinois, USA, 2010.
[2] D. Estrin, R. Govindan, J. Heidemann, and S. Kumar, "Next Century Challenges: Scalable Coordination in Sensor Networks," in IEEE/ACM International Conference on Mobile Computing and Networking, Seattle, Washington, USA, 2009, pp. 263-270.
[3] K. W. Goh, J. Lavanya, Y. Kim, E. K. Tan, and C. B. Soh, "A PDA-based ECG Beat Detector for Home Cardiac Care," in IEEE Engineering in Medicine and Biology Society, Shanghai, China, 2008, pp. 375-378.
[4] Cardio Micro Sensor, available at: http://www.cardiomems.com, accessed June 2009.
[5] TMSI, http://www.tmsi.com/?id=2, accessed May 2009.


Cancer Chemoprevention by Resveratrol


Anushka Bejoy and Gayathri.S
S8, Biotechnology and Biochemical Engineering, Mohandas College of Engineering and Technology, Anad, Nedumangad - 695544

Abstract
Cancer, next only to heart diseases, is the second leading cause of deaths in the United States of America and many other nations in the world. The prognosis for a patient with metastatic carcinoma of the lung, colon, breast, or prostate (four of the most common and lethal forms of cancer, which together account for more than half of all deaths from cancer in the USA) remains dismal. Conventional therapeutic and surgical approaches have not been able to control the incidence of most of the cancer types. Therefore, there is an urgent need to develop mechanism-based approaches for the management of cancer. Chemoprevention via non-toxic agents could be one such approach. Many naturally occurring agents have shown cancer chemopreventive potential in a variety of bioassay systems and animal models having relevance to human disease. It is appreciated that an effective and acceptable chemopreventive agent should have certain properties: (a) little or no toxic effects in normal and healthy cells; (b) high efficacy against multiple sites; (c) capability of oral consumption; (d) known mechanism of action; (e) low cost; and (f) acceptance by the human population. Resveratrol, a naturally occurring polyphenolic antioxidant compound present in grapes, berries, peanuts and red wine, is one such agent. In some bioassay systems resveratrol has been shown to afford protection against several cancer types. The mechanisms of resveratrol's broad cancer chemopreventive effects are not completely understood. In this review, we present the cancer chemopreventive effects of resveratrol in an organ-specific manner. The mechanisms of the antiproliferative/cancer chemopreventive effects of resveratrol are also presented. We believe that continued efforts are needed, especially well-designed pre-clinical studies in animal models that closely mimic human disease, to establish the usefulness of resveratrol as a cancer chemopreventive agent. This should be followed by human clinical trials in appropriate cancer types in suitable populations. Key words: Resveratrol, Cancer, Chemoprevention

Introduction
Cancer, next only to heart diseases, is the second leading cause of deaths in the United States of America and many other nations in the world. Despite immense efforts to improve treatment and find cures for it, overall mortality rates have not significantly reduced in the past 25 years. The prognosis for a patient with metastatic carcinoma of the lung, colon, breast or prostate is dismal. Conventional medicine treats cancer along the lines of an infection. This has led to radical attempts to get rid of the disease through the cut, burn and poison technique of surgery, radiation and chemotherapy. This approach has not been successful in cancer management and has been criticized. When pre-cancerous cells are formed in the body, the immune system detects and destroys them before they become a problem. If the immune system is weak, cancer appears. The answer to combating cancer lies in prevention rather than cure. Chemoprevention via non-toxic agents could be such an approach. Chemoprevention is defined as the use of pharmacological and natural agents to prevent, arrest or reverse the process of cancer development before invasion and metastasis occur. Dietary factors contribute to one-third of potentially preventable cancers. Many naturally occurring agents have shown cancer chemopreventive potential in a variety of bioassay systems and animal models having relevance to human disease. It is appreciated that an effective and acceptable chemopreventive agent should have some properties: (a) little or no toxic effects in normal and healthy cells; (b) high efficacy against multiple sites; (c) capability of oral consumption; (d) known mechanism of action; (e) low cost; and (f) acceptance by the human population. Resveratrol is one such agent, which has been shown to possess many biological activities relevant to human diseases. Resveratrol, chemically known as 3,5,4'-trihydroxy-stilbene, is a naturally occurring polyphenolic antioxidant compound present in grapes, berries, peanuts and red wine. Some epidemiological studies have indicated that red wine protects against many diseases including cancer. The traditional Japanese and Chinese folk medicines have used root extract of the weed Polygonum cuspidatum, which contains resveratrol, to fight liver, skin and circulatory diseases. Resveratrol possesses cancer chemopreventive effects against all three major stages of carcinogenesis, i.e. initiation, promotion and progression.

Cancer

What is Cancer? Cancer (medical term: malignant neoplasm) is a class of diseases in which a group of cells display uncontrolled growth (division beyond the normal limits), invasion (intrusion on and destruction of adjacent tissues), and sometimes metastasis (spread to other locations in the body via lymph or blood). These three malignant properties of cancers differentiate them from benign tumors, which are self-limited and do not invade or metastasize. Most cancers form a tumor but some, like leukemia, do not. The branch of medicine concerned with the study, diagnosis, treatment, and prevention of cancer is oncology. Cancer affects people at all ages, with the risk for most types increasing with age. Cancer caused about 13% of all human deaths in 2007. Cancers are caused by abnormalities in the genetic material of the transformed cells. These abnormalities may be due to the effects of carcinogens such as tobacco smoke, radiation, chemicals, or infectious agents. Other cancer-promoting genetic abnormalities may randomly occur through errors in DNA replication, or are inherited, and thus present in all cells from birth. The heritability of cancers is usually affected by complex interactions between carcinogens and the host's genome. Genetic abnormalities found in cancer typically affect two general classes of genes. Cancer-promoting oncogenes are typically activated in cancer cells, giving those cells new properties, such as hyperactive growth and division, protection against programmed cell death, loss of respect for normal tissue boundaries, and the ability to become established in diverse tissue environments.

Classification

Cancers are classified by the type of cell that resembles the tumor and, therefore, the tissue presumed to be the origin of the tumor. These are the histology and the location, respectively. Examples of general categories include:

Carcinoma: malignant tumors derived from epithelial cells. This group represents the most common cancers, including the common forms of breast, prostate, lung and colon cancer.

Sarcoma: malignant tumors derived from connective tissue, or mesenchymal cells.

Lymphoma and leukemia: malignancies derived from hematopoietic (blood-forming) cells.

Germ cell tumor: tumors derived from totipotent cells. In adults these are most often found in the testicle and ovary; in fetuses, babies, and young children, most often on the body midline, particularly at the tip of the tailbone; in horses, most often at the poll (base of the skull).

Blastic tumor or blastoma: a tumor (usually malignant) which resembles an immature or embryonic tissue. Many of these tumors are most common in children.

Causes

A mutation in the error-correcting machinery of a cell might cause that cell and its children to accumulate errors more rapidly. A mutation in the signaling (endocrine) machinery of the cell can send error-causing signals to nearby cells. A mutation might cause cells to become neoplastic, causing them to migrate and disrupt more healthy cells. A mutation may cause the cell to become immortal (see telomeres), causing it to disrupt healthy cells forever.

Signs And Symptoms

Symptoms of cancer metastasis depend on the location of the tumor. Roughly, cancer symptoms can be divided into three groups:

Local symptoms: unusual lumps or swelling (tumor), hemorrhage (bleeding), pain and/or ulceration. Compression of surrounding tissues may cause symptoms such as jaundice (yellowing of the eyes and skin).

Symptoms of metastasis (spreading): enlarged lymph nodes, cough and hemoptysis, hepatomegaly (enlarged liver), bone pain, fracture of affected bones and neurological symptoms. Although advanced cancer may cause pain, it is often not the first symptom.

Systemic symptoms: weight loss, poor appetite, fatigue and cachexia (wasting), excessive sweating (night sweats), anemia and specific paraneoplastic phenomena, i.e. specific conditions that are due to an active cancer, such as thrombosis or hormonal changes.

What Is Resveratrol?

Resveratrol (3,5,4'-trihydroxy-trans-stilbene) is a phytoalexin produced naturally by several plants when under attack by pathogens such as bacteria or fungi, as a defence against diseases. Resveratrol is a stilbenoid, a type of natural phenol. It is a naturally occurring polyphenolic antioxidant compound in grapes, berries, peanuts and red wine. Resveratrol is found in the skin of red grapes and is a constituent of red wine. Resveratrol has also been produced by chemical synthesis and is sold as a nutritional supplement derived primarily from Japanese knotweed. Resveratrol was first mentioned in a Japanese article in 1939 by M. Takaoka, who isolated it from the poisonous but medicinal Veratrum album, variety grandiflorum.

Chemical Structure Of Resveratrol

The chemical structure is important because information regarding the biological activity may be obtained from it. Because it has more than one phenol group, resveratrol is classified as a polyphenol. Polyphenols are antioxidants, as they react with free radicals to form a stable molecule that is less toxic than the original compound.

trans-Resveratrol

cis-Resveratrol

Chemoprevention
Cancer chemoprevention is defined as the use of natural, synthetic, biological or chemical agents to reverse, suppress, or prevent carcinogenic progression to invasive cancer. Chemoprevention is also called chemoprophylaxis. It is the use of natural or laboratory-made substances to prevent a disease such as cancer. The regular use of aspirin is known to reduce the risk of the polyps from which colorectal cancer arises; this is an instance of chemoprevention. The term chemoprevention was coined to parallel the term chemotherapy: chemoprevention prevents and chemotherapy treats.

Chemoprevention By Resveratrol

Resveratrol acts as an antioxidant and antimutagen. It induces phase II drug-metabolising enzymes (anti-initiation activity). It inhibits cyclooxygenase and hydroperoxidase functions (anti-promotion activity). It mediates anti-inflammatory effects. It induces human promyelocytic leukemia cell differentiation (anti-progression activity). In addition, it has inhibited the development of preneoplastic lesions in carcinogen-treated mouse mammary glands in culture and inhibited tumorigenesis in a mouse skin cancer model. It has been shown to impart anti-proliferative effects on human breast epithelial cells. Resveratrol induced significant dose-dependent inhibition of human oral squamous carcinoma cell (SCC-25) growth and DNA synthesis. Resveratrol reduced the viability and DNA synthesis capability of human promyelocytic leukemia (HL-60) cells via an induction of apoptosis through the Bcl-2 pathway. Resveratrol has also been shown to regulate PSA gene expression by an AR-independent mechanism. Many antioxidant polyphenols present in wine, including resveratrol, inhibit the proliferation of human prostate cancer cell lines. Resveratrol has shown strong anti-proliferative properties that have been attributed to its ability to efficiently scavenge the essential tyrosyl radical of the small protein of ribonucleotide reductase and, consequently, to inhibit deoxyribonucleotide synthesis. It was also observed that resveratrol non-competitively inhibited cyclooxygenase (COX)-2 transcription and activity in human mammary epithelial cells and colon cancer cells. Resveratrol was found to act as a potential inhibitor of inducible NO synthase (iNOS) and inducible COX-2. Resveratrol also demonstrated anti-inflammatory effects and inhibited the activity of hydroperoxidase enzymes (suggestive of anti-promotion activity), in addition to causing the differentiation of human promyelocytic leukemia cells, indicating that this compound may also depress the progression phase of cancer. Several studies have shown that the cancer preventive activity of resveratrol could be attributed to its ability to trigger apoptosis in carcinoma cells. A few researchers have shown that resveratrol is metabolized by the enzyme cytochrome P450 (CYP)-1B1, which is found in a variety of different tumors. When resveratrol is metabolized by CYP1B1, piceatannol, which has previously been identified as an anti-leukemic agent, is formed. This observation provides a novel explanation for the cancer preventive properties of resveratrol. These studies suggest that resveratrol may be developed as a potential cancer chemopreventive agent.

Resveratrol And Cancer


Many studies in cell culture systems as well as in animal models have shown the cancer chemopreventive as well as cancer therapeutic effects of resveratrol. The organ-specific cancer chemopreventive effects of this polyphenolic antioxidant, which may be a constituent of the diet and/or beverages consumed by the human population, are summarized below.

Resveratrol And Skin Cancer

In the USA, non-melanoma skin cancer, which includes basal and squamous cell carcinoma, is the most frequently diagnosed form of cancer, accounting for nearly half of all cancer types. According to an estimate, more than a million new cases of skin cancer are diagnosed annually in the USA. Depending on the cellular origin, human skin cancer is classified as melanocytic or epithelial; melanomas are less common but more lethal than epithelial skin cancers. Studies have shown that resveratrol prevents the development of skin cancer. In fact, the first study showing the cancer chemopreventive effect of resveratrol demonstrated that resveratrol acts as an effective agent for the prevention of chemically induced skin carcinogenesis. The application of TPA to mouse skin induces oxidative stress, as evidenced by several biochemical responses, including enhanced levels of myeloperoxidase and oxidized glutathione reductase activities and decreases in glutathione levels and superoxide dismutase activity. TPA treatment was also found to elevate the expression of cyclooxygenase-1 (COX-1), COX-2, c-myc, c-fos, c-jun, transforming growth factor-β1 (TGF-β1) and tumor necrosis factor (TNF). The pre-treatment of mouse skin with resveratrol negated several of these TPA-induced effects, including the effects on glutathione reductase and superoxide dismutase activities, in a dose-dependent manner. Reverse transcriptase-polymerase chain reaction (RT-PCR) analysis showed that the TPA-induced increases in the expression of c-fos and TGF-β1 were inhibited by resveratrol. In addition, resveratrol inhibited the de novo formation of inducible nitric oxide synthase (iNOS) in mouse macrophages stimulated with lipopolysaccharide. It was shown that, in the mouse JB6 epidermal cell line, resveratrol activated extracellular-signal-regulated protein kinases (ERKs), c-Jun NH2-terminal kinases (JNKs), and p38 kinase and induced serine-15 phosphorylation of p53. Pretreatment of the cells with PD98059 or SB202190, or stable expression of a dominant negative mutant of ERK2 or p38 kinase, impaired resveratrol-induced p53-dependent transcriptional activity and apoptosis, whereas constitutively active MEK1 increased the transcriptional activity of p53. These data strongly suggest that both ERKs and p38 kinase mediate resveratrol-induced activation of p53 and apoptosis through phosphorylation of p53 at serine 15. It was further determined that JNKs are involved in resveratrol-induced p53 activation and induction of apoptosis in JB6 mouse epidermal cells; resveratrol activated JNKs dose-dependently. These data suggested that JNKs act as mediators of resveratrol-induced activation of p53 and apoptosis, which may occur partially through p53 phosphorylation. The chemopreventive potential of stilbenes (trans-resveratrol) is reported to be the most effective among several red wine polyphenols, viz. flavanols [(+)-catechin], flavonols (quercetin) and hydroxybenzoic acids (gallic acid); it was concluded that trans-resveratrol may be the most effective anticancer polyphenol present in red wine. In another study, the chemopreventive capability of resveratrol was found to be most effective compared to sesamol, sesame oil and sunflower oil. It has been suggested that non-melanoma skin cancer is related to cumulative exposure to solar ultraviolet radiation. Resveratrol treatment of keratinocytes was also found to inhibit UVB-mediated (i) phosphorylation and degradation of IκB and (ii) activation of IKK. Based on these data, it was suggested that the NF-κB pathway plays a critical role in the chemopreventive effects of resveratrol against the adverse effects of UV radiation, including photocarcinogenesis.

Resveratrol And Liver Cancer

Few studies have evaluated the chemopreventive effects of resveratrol against liver cancer. It was demonstrated that resveratrol administration to rats inoculated with a fast-growing tumour (the Yoshida AH-130 ascites hepatoma) caused a very significant decrease (25%) in the tumor cell content. This effect was found to be associated with an increase in the number of cells in the G2/M phase of the cell cycle. Resveratrol causes apoptosis in the tumor cell. Resveratrol inhibited the growth of the hepatoma cell line H22 in a dose- and time-dependent manner via the induction of apoptosis. Resveratrol suppressed the invasion of the hepatoma cells even at low concentrations. Resveratrol and resveratrol-loaded rat serum suppressed reactive oxygen species-mediated invasive capacity, and resveratrol was also found to strongly inhibit cell proliferation.

Resveratrol And Blood Cancer

Resveratrol inhibits CYP1A1 expression in vitro by preventing the binding of the AHR to promoter sequences that regulate CYP1A1 transcription. This was suggested to be important for the chemopreventive activity of resveratrol. Resveratrol inhibits aryl hydrocarbon-induced CYP1A activity in vitro by inhibiting CYP1A1/CYP1A2 enzyme activity and by inhibiting the signal transduction pathway that upregulates the expression of carcinogen-activating enzymes. The anti-proliferative and chemopreventive effects of resveratrol against leukemia have been evaluated. Resveratrol reduced the viability and DNA synthesis capability of cultured human promyelocytic leukemia (HL-60) cells. In this study the growth inhibitory and antiproliferative properties of resveratrol were suggested to be attributable to its induction of apoptotic cell death, as determined by morphological and ultrastructural changes, internucleosomal DNA fragmentation and an increased proportion of the sub-diploid cell population. Further, resveratrol treatment resulted in a gradual decrease in the expression of anti-apoptotic Bcl-2. Involvement of caspases and the CD95-CD95 ligand pathway has been reported in resveratrol-mediated induction of apoptosis in myeloid leukemia HL-60 cells. Resveratrol-treated tumor cells exhibited a dose-dependent increase in externalization of inner-membrane phosphatidylserine and in cellular content of sub-diploid DNA, indicating loss of membrane phospholipid asymmetry and DNA fragmentation. Resveratrol-induced cell death was found to be mediated by intracellular caspases, as observed by the dose-dependent increase in proteolytic cleavage of the caspase substrate poly(ADP-ribose) polymerase (PARP) and the ability of caspase inhibitors to block resveratrol cytotoxicity. These data showed specific involvement of the CD95-CD95L system in the anti-cancer activity of resveratrol and highlight the chemotherapeutic potential of this natural product, in addition to its recently reported chemopreventive activity. Another study has shown that resveratrol induces Fas signaling-independent apoptosis in THP-1 human monocytic leukaemia cells (125). Bernhard demonstrated that resveratrol induced arrest in the S phase and apoptosis. Resveratrol induced extensive apoptotic cell death not only in CD95-sensitive leukemia lines, but also in B-lineage leukemic cells that are resistant to CD95 signaling. Some studies revealed the pro-apoptotic potential of resveratrol and its hydroxylated analog piceatannol; these agents are potent inducers of apoptotic cell death in Burkitt-like lymphoma cells. Resveratrol imparts anti-leukemic activity against mouse and human leukemic cell lines by inhibiting proliferation, and the inhibition of leukemic cell lines by resveratrol was due to induction of apoptosis.

Resveratrol And Lung Cancer

The chemopreventive activities of butylated hydroxyanisole (BHA), myo-inositol, curcumin, esculetin, resveratrol and lycopene-enriched tomato oleoresin (LTO) against lung tumor induction in A/J mice by the tobacco smoke carcinogens benzo[a]pyrene (BaP) and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) were evaluated. In this study, the mice treated with BHA and NNK had reduced lung tumor multiplicity. The effects of stilbene glucosides isolated from medicinal plants and grapes on tumor growth and lung metastasis in mice bearing highly metastatic Lewis lung carcinoma (LLC) tumors were also studied. Stilbene glucosides are naturally occurring phytoalexins found in a variety of medicinal plants. Among the stilbene derivatives, resveratrol 3-O-D-glucoside is found in grapes and wine. Tumor growth in the right hind paw and lung metastasis were inhibited by oral administration of resveratrol 3-O-D-glucoside and 2,3,5,4'-tetrahydroxystilbene-2-O-D-glucoside for 33 consecutive days in LLC-bearing mice. In addition, both stilbene glucosides inhibited angiogenesis of HUVECs. The authors of this study suggested that the antitumour and antimetastatic activity of resveratrol 3-O-D-glucoside and 2,3,5,4'-tetrahydroxystilbene-2-O-D-glucoside might be due to the inhibition of DNA synthesis in LLC cells.

The French Paradox

Despite the heavy consumption of cheese, butter, eggs, rich creamy sauces, and other fat-containing foods, the French population appears to be surprisingly healthy, with a low incidence of coronary heart disease and certain types of cancer. Although a typical French diet contains approximately 15% more saturated fat than an American diet, and even though the French exercise less than Americans, the rate of heart disease for the French people is 60% lower than that of Americans. Similarly, the incidence of certain cancer types is much lower in the French population than in the American. This has been attributed to the high consumption of red wine by French people, which in fact ranks as the highest per capita consumption in the world. This phenomenon has been dubbed the French Paradox.

Conclusion

A structured path for developing diet-derived agents as cancer chemopreventives is emerging fast. Resveratrol, a polyphenolic compound present in grapes, red wine, nuts and berries, has shown promise against certain cancer types like liver, lung, prostate, blood, skin, colorectal and intestinal cancers. In various in vitro and in vivo models, resveratrol has proved capable of retarding steps of carcinogenesis. The compound also bears a simple chemical structure that is capable of interacting with a variety of receptors and enzymes and serving as an activator or inhibitor in a number of pathways. No toxicity reports have been published with respect to resveratrol in animals; resveratrol has proven to be non-toxic even at high doses (3,000 mg/kg diet for 120 days in rats). Since resveratrol is an active ingredient of several traditional medicines used for centuries in India, China, and Japan, the general medicinal value and safety of this compound may be suggested.

Reference

Hong WK and Sporn MB: Recent advances in chemoprevention of cancer. Science 278: 1073-1077, (2009).
Ries LAG, Kosary CL, Hankey BF, Miller BA and Edwards BK (eds.): SEER Cancer Statistics Review: 1973-1996. National Cancer Institute, Bethesda, MD, (2009).
Jang M and Pezzuto JM: Cancer chemopreventive activity of resveratrol. Drugs Exp Clin Res 25: 65-77, (2008).
Creasy LL and Coffee M: Phytoalexin production potential of grape berries. J Am Soc Hortic Sci 113: 230-234, (2007).
Gusman J, Malonne H and Atassi G: A reappraisal of the potential chemopreventive and chemotherapeutic properties of resveratrol. Carcinogenesis 22: 1111-1117, (2006).
Wolter F, Clausnitzer A, Akoglu B and Stein J: Piceatannol, a natural analog of resveratrol, inhibits progression through the S phase of the cell cycle in colorectal cancer cell lines. J Nutr 132: 298-302, (2001).
Ciolino HP and Yeh GC: The effects of resveratrol on CYP1A1 and aryl hydrocarbon receptor function in vitro. Adv Exp Med.
Park JW, Choi YJ, Jang MA, Lee YS, Jun DY, Suh SI, Baek WK, Suh MH, Jin IN and Kwon TK: Chemopreventive agent resveratrol, a natural product derived from grapes, reversibly inhibits progression through S and G2 phases of the cell cycle in U937 cells. Cancer Lett 163: 43-49, (2001).
Tsan MF, White JE, Maheshwari JG and Chikkappa G: Anti-leukemia effect of resveratrol. Leuk Lymphoma 43: 983-987, (2001).
Delmas D, Jannin B, Malki MC and Latruffe N: Inhibitory effect of resveratrol on the proliferation of human and rat hepatic derived cell lines. Oncol Rep 7: 847-852, (2004).
Gao X, Xu YX, Divine G, Janakiraman N, Chapman RA and Gautam SC: Disparate in vitro and in vivo antileukemic effects of resveratrol, a natural polyphenolic compound found in grapes. J Nutr 132: 2076-2081, (2002).
Pervaiz S: Resveratrol - from the bottle to the bedside? Leuk Lymphoma 40: 491-498, (2001).
Kapadia GJ, Azuine MA, Tokuda H, Takasaki M, Mukainaka T, Konoshima T and Nishino H: Chemopreventive effect of resveratrol, sesamol, sesame oil and sunflower oil in the Epstein-Barr virus early antigen activation assay and the mouse skin two-stage carcinogenesis. Pharmacol Res 45: 499-505, (2001).
Shih A, Davis FB, Lin HY and Davis PJ: Resveratrol induces apoptosis in thyroid cancer cell lines via a MAPK- and p53-dependent mechanism. J Clin Endocrinol Metab 87: 1223-1232, (2001).
Kimura Y and Okuda H: Resveratrol isolated from Polygonum cuspidatum root prevents tumor growth and metastasis to lung and tumor-induced neovascularization in Lewis lung carcinoma-bearing mice. J Nutr 131: 1844-1849, (2001).
She QB, Huang C, Zhang Y and Dong Z: Involvement of c-jun NH(2)-terminal kinases in resveratrol-induced activation of p53 and apoptosis. Mol Carcinog 33: 244-250, (2000).
Roman V, Billard C, Kern C, Ferry-Dumazet H, Izard JC, Mohammad R, Mossalayi DM and Kolb JP: Analysis of resveratrol-induced apoptosis in human B-cell chronic leukaemia. Br J Haematol 117: 842-851, (2000).
Bravo L: Polyphenols: chemistry, dietary sources, metabolism, and nutritional significance. Nutr Rev 56: 317-333, (1998).
Vinson JA: Flavonoids in foods as in vitro and in vivo antioxidants. Adv Exp Med Biol 439: 151-164, (1998).
Yang CS, Landau JM, Huang MT and Newmark HL: Inhibition of carcinogenesis by dietary polyphenolic compounds. Annu Rev Nutr 21: 381-406, (1998).
Fremont L: Biological effects of resveratrol. Life Sci 66: 663-673, (1998).
Renaud S and De Lorgeril M: Wine, alcohol, platelets, and the French paradox for coronary heart disease. Lancet 339: 1523-1526, (1992).
Szende B, Tyihak E and Kiraly-Veghely Z: Dose-dependent effect of resveratrol on proliferation and apoptosis in endothelial and tumor cell cultures. Exp Mol Med 32: 88-92, (1992).
Surh Y: Molecular mechanisms of chemopreventive effects of selected dietary and medicinal phenolic substances. Mutat Res 428: 305-327, (1992).
Soleas GJ, Grass L, Josephy PD, Goldberg DM and Diamandis EP: A comparison of the anticarcinogenic properties of four red wine polyphenols. Clin Biochem 35: 119-124, (1992).


Peripheral Nerve Injury and Repair


Alisha Asif & Reshma Nair U BT & BCE, MCET-Trivandrum

Abstract
Peripheral nerve injuries are common, and there is no easily available formula for successful treatment. Incomplete injuries are most frequent. Seddon classified nerve injuries into three categories: neurapraxia, axonotmesis, and neurotmesis. After complete axonal transection, the neuron undergoes a number of degenerative processes, followed by attempts at regeneration. A distal growth cone seeks out connections with the degenerated distal fiber. The current surgical standard is epineurial repair with nylon suture. To span gaps that primary repair cannot bridge without excessive tension, nerve-cable interfascicular autografts are employed. Unfortunately, results of nerve repair to date have been no better than fair, with only 50% of patients regaining useful function. There is much ongoing research regarding pharmacologic agents, immune system modulators, enhancing factors, and entubulation chambers. Clinically applicable developments from these investigations will continue to improve the results of treatment of nerve injuries.

Introduction
Peripheral nerves were first distinguished from tendons by Herophilus in 300 BC. By meticulous dissection, he traced nerves to the spinal cord, demonstrating the continuity of the nervous system. In 900 AD, Rhazes made the first clear reference to nerve repair. However, not until 1795 did Cruikshank demonstrate nerve healing and recovery of distal extremity function after repair. In the early 1900s, Cajal pioneered the concept that axons regenerate from neurons and are guided by chemotrophic substances. In 1945, Sunderland promoted microsurgical techniques to improve nerve repair outcomes. Since that time, there have been a number of advances and new concepts in peripheral nerve reconstruction. Research regarding the molecular biology of nerve injury has expanded the available strategies for improving results. Some of these strategies involve the use of pharmacologic agents, immune system modulators, enhancing factors, and entubulation chambers. A thorough understanding of the basic concepts of nerve injury and repair is necessary to evaluate the controversies surrounding these innovative new modalities.

Within and through the epineurium lie several fascicles, each surrounded by a perineurial sheath. The perineurial layer is the major contributor to nerve tensile strength. The endoneurium is the innermost loose collagenous matrix within the fascicles. Axons run through the endoneurium and are protected and nourished by this layer.

Anatomy
The epineurium is the connective tissue layer of the peripheral nerve, which both encircles and runs between fascicles. Its main function is to nourish and protect the fascicles. The outer layers of the epineurium are condensed into a sheath.

Sunderland has demonstrated that fascicles within major peripheral nerves repeatedly divide and unite to form fascicular plexuses. This leads to frequent changes in the cross-sectional topography of fascicles in the peripheral nerves. In general, the greatest degree of fascicular cross-branching occurs in the lumbar and brachial plexus regions. Several studies have demonstrated greater uniformity of fascicular arrangement in the major nerves of the extremities; in fact, the palmar cutaneous and motor branches of the median nerve may be dissected proximally for several centimeters without significant cross-branching. In nerve repair, fascicular matching is critical to outcome, and strategies for achieving this will be discussed. The blood supply of peripheral nerves is a complex anastomotic network of blood vessels.


There are two major arterial systems and one minor longitudinal system linked by anastomoses. The first major system lies superficially on the nerve, and the second lies within the interfascicular epineurium. The minor longitudinal system is located within the endoneurium and perineurium. The major superficial longitudinal vessels maintain a relatively constant position on the surface of the nerve. The segmental vascular supply consists of a number of nutrient arteries that vary in size and number and enter the nerve at irregular intervals. They repeatedly branch and anastomose with the internal longitudinal system to create an interconnected system. Injection studies have revealed the relative tortuosity of the blood vessels, which accommodates strain and gliding of the nerve during motion.

Endoneurial capillaries have the structural and functional features of the capillaries of the central nervous system and function as an extension of the blood-brain barrier. The endothelial cells within the capillaries of the endoneurium are interconnected by tight junctions, creating a system that is impermeable to a wide range of macromolecules, including proteins. This barrier is impaired by ischemia, trauma, and toxins, as well as by the mast-cell products histamine and serotonin.

Injury Classification
Seddon2 classified nerve injuries into three major groups: neurapraxia, axonotmesis, and neurotmesis. Neurapraxia is characterized by local myelin damage, usually secondary to compression. Axon continuity is preserved, and the nerve does not undergo distal degeneration. Axonotmesis is defined as a loss of continuity of axons, with variable preservation of the connective tissue elements of the nerve. Neurotmesis is the most severe injury, equivalent to physiologic disruption of the entire nerve; it may or may not include actual nerve transection. After injury (short of transection), function fails sequentially in the following order: motor, proprioception, touch, temperature, pain, and sympathetic. Recovery occurs sequentially in the reverse order. Sunderland1 further refined this classification on the basis of the realization that axonotmetic injuries had widely variable prognoses. He divided Seddon's axonotmesis grade into three types, depending on the degree of connective tissue involvement. Neurapraxia is equivalent to a Sunderland type 1 injury. Complete recovery follows this injury, which may take weeks to months.

Physiology of Nerve Degeneration

Following axonal transection, a sequence of pathologic events occurs in the cell body and axon. The cell body swells and undergoes chromatolysis, a process in which the Nissl granules (i.e., the basophilic neurotransmitter synthetic machinery) disperse, and the cell body becomes relatively eosinophilic. The cell nucleus is displaced peripherally. This reflects a change in metabolic priority from production of neurotransmitters to production of structural materials needed for axon repair and growth, such as messenger RNA, lipids, actin, tubulin, and growth-associated proteins. Shortly after axonal transection, the proximal axon undergoes traumatic degeneration within the zone of injury. In most instances, the zone of injury extends proximally from the injury site to the next node of Ranvier, but death of the cell body itself may occur, depending on the mechanism and energy of injury. Wallerian degeneration (i.e., breakdown of the axon distal to the site of injury) is initiated 48 to 96 hours after transection. Deterioration of myelin begins, and the axon becomes disorganized. Schwann cells proliferate and phagocytose myelin and axonal debris. Nerve injury may also disrupt the nerve-blood barrier. Incompletely injured nerves may then be exposed to unfamiliar proteins, which may act as antigens in an autoimmune reaction. This mechanism may propagate the cycle of nerve degeneration.

Physiology of Nerve Regeneration

After wallerian degeneration, the Schwann cell basal lamina persists. The Schwann cells align themselves longitudinally, creating columns of cells called Büngner bands, which provide a supportive and growth-promoting microenvironment for regenerating axons. Endoneurial tubes shrink as well, and Schwann cells and macrophages fill the tubes.

At the tip of the regenerating axon is the growth cone, a specialized motile exploring apparatus. The growth cone is composed of a structure of flattened sheets of cellular matrix, called lamellipodia, from which multiple fingerlike projections, called filopodia, extrude and explore their microenvironment. The filopodia are electrophilic and attach to cationic regions of the basal lamina. Within the filopodia are actin polypeptides, which are capable of contraction to produce axonal elongation. The cone releases protease, which dissolves matrix in its path to clear a way to its target organ. The growth cone responds to four classes of factors: (1) neurotrophic factors, (2) neurite-promoting factors, (3) matrix-forming precursors, and (4) metabolic and other factors. Neurotrophic factors are macromolecular proteins present in denervated motor and sensory receptors. They are also found within the Schwann cells along the regeneration path. These factors aid in neurite survival, extension, and maturation. The original neurotrophic factor is nerve growth factor. This protein was seen to be released by a murine sarcoma and, when transplanted into chick embryos, caused sensory and sympathetic axons to grow toward the tumor. In addition to being trophic (i.e., promoting survival and growth), nerve growth factor is chemotropic (i.e., it guides the axon) and also affects growth-cone morphology. Other neurotrophic factors include ciliary neurotrophic factor3 and motor nerve growth factor,4 which also have an important role in the survival and regeneration of damaged neurons. Unlike the neurotrophic factors, the neurite-promoting factors are substrate-bound glycoproteins that promote neurite (axonal) growth. Laminin, a major component of the Schwann cell basal lamina, is bound to type IV collagen, proteoglycan, and entactin, and has been shown to accelerate axonal regeneration across a gap.5 Fibronectin is another neurite-promoting factor that has been shown to promote neurite growth,6 as have neural cell adhesion molecule and N-cadherin.7 Fibrinogen, a matrix-forming precursor, polymerizes with fibronectin to form a fibrin matrix, which is an important substrate for cell migration in nerve regeneration.8 The fourth class comprises a variety of factors that enhance nerve regeneration but cannot appropriately be placed in any of the first three classes. Among them are acidic and basic fibroblast growth factors,9 insulin and insulin-like growth factor, leupeptin, glia-derived protease inhibitor, electrical stimulation, and hormones such as thyroid hormone, corticotropin, estrogen, and testosterone.

Nerve Grafting: Autografts


When primary repair cannot be performed without undue tension, nerve grafting is required. Autografts remain the standard for nerve-grafting material. Allografts have not shown recovery equivalent to that obtained with autogenous nerve and are still considered experimental. The three major types of autograft are cable, trunk, and vascularized nerve grafts. Cable grafts are multiple small-caliber nerve grafts aligned in parallel to span a gap between fascicular groups. Trunk grafts are mixed motor-sensory whole-nerve grafts (e.g., an ulnar nerve in the case of an irreparable brachial plexus injury). Trunk grafts have been associated with poor functional results, in large part due to the thickness of the graft and consequent diminished ability to revascularize after implantation. Vascularized nerve grafts have been used in the past, but with conflicting results. They may be considered if a long graft is needed in a poorly vascularized bed. Because donor-site morbidity is an issue, vascularized grafts have been most widely utilized in irreversible brachial plexus injuries. The most common source of autograft is the sural nerve, which is easily obtainable, the appropriate diameter for most cable grafting needs, and relatively dispensable. Other graft sources include the anterior branch of the medial antebrachial cutaneous nerve, the lateral femoral cutaneous nerve, and the superficial radial sensory nerve.1 The technique of nerve grafting involves sharply transecting the injured nerve ends to excise the zone of injury. The nerve ends should display a good fascicular pattern. The defect is measured, and the appropriate length of graft is harvested to allow reconstruction without tension. If the injured nerve has a large diameter relative to the nerve graft, several cable grafts are placed in parallel to reconstruct the nerve. The grafts are matched to corresponding fascicles and sutured to the injured nerve with epineurial sutures, as in the primary neurorrhaphy technique. Fibrin glue may be used to connect the cable grafts, thus decreasing the number of sutures and minimizing additional trauma to the nerve grafts. The surgeon can make fibrin glue intraoperatively by mixing thrombin and fibrinogen in equal parts, as originally described by Narakas. Although nerve grafts have not generally been considered polarized, it is recommended that the graft be placed in a reversed orientation in the repair site. Reversal of the nerve graft decreases the chance of axonal dispersion through distal


nerve branches. A well-vascularized bed is critical for nerve grafting. The graft should be approximately 10% to 20% longer than the gap to be filled, as the graft inevitably shortens with connective tissue fibrosis. The graft repair site and the graft itself have been demonstrated to regain the same tensile strength as the native nerve by 4 weeks; therefore, the limb is usually immobilized during this initial period to protect the graft

Rehabilitation of Nerve Injuries

The preoperative goals in a denervated extremity are to protect it and to maintain range of motion, so that it will be functional when reinnervated. Splinting is useful to prevent contractures and deformity. Range-of-motion exercises are imperative while awaiting axonal regeneration, so as to maintain blood and lymphatic flow and prevent tendon adherence. The extremity must be kept warm, as cold exposure damages muscle and leads to fibrosis. Judicious bandaging protects and limits venous congestion and edema. Direct galvanic stimulation reduces muscle atrophy and may be of psychological benefit to the patient during the prolonged recovery phase, but has not been unequivocally demonstrated to enhance or accelerate nerve recovery or functional outcome. During reinnervation of the limb, continued motor and sensory rehabilitation are critical. Pool therapy can be helpful to improve joint contractures and eliminate the effects of gravity during initial motor recovery, thereby enhancing muscular performance. Biofeedback may provide sensory input to facilitate motor reeducation. Early-phase sensory reeducation decreases mislocalization and hypersensitivity and reorganizes tactile submodalities, such as pressure and vibration. Later goals include recovery of tactile gnosis.

Evaluation of Recovery

The most widely used grading system for nerve recovery is that developed by the Medical Research Council for the evaluation of both motor and sensory return (Table 2). Motor recovery is graded M0 through M5, and sensory recovery is graded S0 through S4 on the basis of the physical examination. An excellent result is described as M5,S4; a very good result, M4,S3+; good, M3,S3; fair, M2,S2-2+; poor, M0-1,S0-1. Objective measurement of sensory recovery includes density testing by use of moving and static two-point discrimination and threshold testing by use of Frey or Semmes-Weinstein filaments. Measurement of grip and pinch strength is of limited use because of its inability to discriminate among early levels of recovery and the fact that both the median and the ulnar nerve contribute to pinch and grip function.

Allografts

Allografts have several potential clinical advantages: (1) grafts can be banked; (2) there is no need for sacrifice of a donor nerve; and (3) surgical procedures are quicker without the need to harvest a graft. However, allografts are not as effective as autografts, mainly due to the immunogenic host response. Ansselin and Pollard17 studied rat allograft nerves and found an increase in helper T cells and cytotoxic/suppressor T cells, implying immunogenic rejection. The cellular component of allografts, and with it their immunogenicity, can be destroyed by freeze-thawing. This leads to the production of cell debris, which in turn impairs neurite outgrowth. Dumont and Hentz18 reported on a biologic detergent technique that removes the immunogenic cellular components without forming cell debris. Their experiments in rats have shown that allografts processed with this detergent had postrepair results equivalent to those of autografts.

Results

The first large series of results of nerve repairs came from Woodhall and Beebe in 1956; they reported on 3,656 injuries sustained during World War II, with an average 5-year follow-up.19 The results were relatively poor, tainting the concept of nerve repair in the minds of surgeons for years. It must be remembered that these injuries were pre-antibiotic-era war injuries with large areas of soft-tissue destruction and wound contamination. Repairs were performed without the benefit of modern microsurgical technique. The results from subsequent studies in which modern surgical techniques were used have been more encouraging. In a large compilation of data from a 40-year period, Mackinnon and Dellon19 reported that very good results (M4,S3+) were obtained in approximately 20% to 40% of cases. Very few injuries recovered fully, and war injuries generally did worse. A more recent series of primary repairs and fascicular grafts in 132 patients with median nerve injuries showed good to excellent results in 47 of 98 patients (48%) treated with grafting and in 17 of 34 patients (50%) treated with secondary neurorrhaphy.20 Overall, 65 of 132 patients (49%) had good to excellent results,


14 (11%) had fair results, and 53 (40%) had poor results. Results were poor in four situations: (1) the patient was more than 54 years old; (2) the level of injury was proximal to the elbow; (3) the graft length was greater than 7 cm; or (4) the surgery was delayed more than 23 months. In a separate series of 33 radial nerve repairs treated with grafting or secondary neurorrhaphy, Kallio et al21 demonstrated useful (good to excellent) results in 21 patients. Grafting was done in 21 cases and resulted in useful recovery in 8. Vastamäki et al22 reviewed the data on 110 patients after ulnar nerve repair and demonstrated useful recovery in 57 patients (52%). In a study by Wood23 of 11 peroneal nerve reconstructions, 9 were treated with nerve grafting and 2 with direct neurorrhaphy. In the 9 patients treated with grafting, the results were excellent in 2, good in 2, fair in 3, and poor in 2. The only statistically significant prognostic factor was nerve graft length. All 4 patients with nerve grafts measuring 6 cm or less had good or excellent results; in contrast, all 5 patients with grafts longer than 6 cm had fair or poor results. Of the 2 patients treated with direct neurorrhaphy, 1 had an excellent result, and 1 had a good result. On the basis of 40 years' experience with nerve repairs, Sunderland1 made a number of generalizations regarding nerve reconstruction results. He found that (1) young patients generally do better than old patients; (2) early repairs do better than late repairs; (3) repairs of single-function nerves do better than mixed-nerve repairs; (4) distal repairs do better than proximal repairs; and (5) short nerve grafts do better than long nerve grafts.

Strategies to Improve Results


Because of the relatively large number of fair to poor results still being obtained in civilian injuries with modern microsurgical technique, much research is being done to alter regeneration mechanisms and improve results of nerve repair. The strategies to improve results fall into four major categories: pharmacologic agents, immune system modulators, enhancing factors, and entubulation chambers. Pharmacologic agents work on the molecular level to alter nerve regeneration. Horowitz24 has shown the positive effects of gangliosides on rat sciatic nerve regeneration. Gangliosides are neurotrophic (i.e., they aid in the survival and maintenance of neurons) and neuritogenic (i.e., they aid in increasing the number and size of branching neural processes). Klein et al25 have shown

forskolin to be an activator of adenylate cyclase that increases neurite outgrowth in vivo. Wong and Mattox26 have shown that polyamines work on the molecular level to increase the functional recovery of rat sciatic nerve. Immune system modulators work by decreasing fibrosis and/or the histiocytic response. In a murine model, ganglioside-specific autoantibodies have been demonstrated after nerve injury. Given that gangliosides are neurotrophic and neuritogenic, it is evident that antibodies to them would be deleterious to nerve regeneration.27 Azathioprine and hydrocortisone decrease the levels of these autoantibodies, thereby imparting a protective effect on gangliosides after nerve-blood barrier disruption. Regarding other modulators, Sebille and Bondoux-Jahan28 have shown that cyclophosphamides increase motor recovery in rat sciatic nerve. Bain et al29 have shown that cyclosporin A increases nerve recovery in primate and rat models. The numerous enhancing factors include nerve growth factor, ciliary neurotrophic factor, motor nerve growth factor, laminin, fibronectin, neural cell adhesion molecule, N-cadherin, acidic and basic fibroblast growth factor, insulin-like growth factor, and leupeptin. Nerve growth factor is chemotropic to regenerating neurons, as demonstrated by the classic experiments first done by Cajal in the early 1900s. Recent studies lend support to these original theories. In animal studies similar to those of Cajal, a transected nerve is allowed to regenerate toward appropriate and inappropriate receptor nerve segments on either end of a Y-shaped tubing. Axons have been demonstrated to grow preferentially in a ratio of 2:1 to the appropriate nerve end.30 Other studies have used Y chambers to show that nerves preferentially grow toward their distal stump, rather than toward tendon.31 Proximal motor axons have been shown to grow preferentially toward their distal motor axons instead of their sensory axons.32 Although trophic factors undoubtedly play a role in nerve regeneration specificity, proper end-organ reinnervation is essential to ultimate function. A considerable pruning effect has been demonstrated to occur after axonal mismatch and initial reinnervation. Entubulation chambers are an intriguing concept, and extensive research is under way to better our understanding of their effects. These chambers are hollow cylindrical tubes that serve as the conduit for loosely approximated nerve ends. They allow decreased surgical handling of nerve ends and thus decreased scarring. Use of entubulation chambers leaves a small intentional gap between nerve ends, which allows fascicular rerouting. Entubulation chambers may also allow local introduction of some of the previously


mentioned pharmacologic agents, immune system modulators, and enhancing factors.33 Entubulation chambers can be made from a variety of materials. Some that are currently being investigated include silicone, Gore-Tex, autogenous vein or dura, and polyglycolic acid.34 Hentz et al33 have stated that tubularization offers no advantage over epineurial repair. Lundborg et al12 reported on the treatment of 18 patients with silicone tubes and a 3- to 4-mm repair gap. They stressed the importance of using slightly larger tubes to prevent nerve compression. Sensory and motor testing after 1 year showed improvement of tactile sensation with tubularization; other variables were not statistically different. Research is under way to find a material that will allow diffusion of nutrients, blood, and locally introduced factors; will prevent aberrant sprouting; and will resorb with time.

Summary

Despite more than 100 years of intense laboratory and clinical investigations, results of nerve repairs are somewhat discouraging, with only about 50% of patients regaining useful function. The current standard of treatment is immediate epineurial repair with nylon suture. If primary repair would place more than modest tension on the anastomosis, nerve-cable autografts are employed to bridge the gap. At this time, there is much research under way, and pharmacologic agents, immune system modulators, enhancing factors, and entubulation chambers offer promise for future improvement in nerve repair outcomes.

References

1. Sunderland S: Nerve Injuries and Their Repair: A Critical Appraisal. New York: Churchill Livingstone, 1991.
2. Seddon HJ: Surgical Disorders of the Peripheral Nerves. Baltimore: Williams & Wilkins, 1972.
3. Manthorpe M, Skaper SD, Williams LR, Varon S: Purification of adult rat sciatic nerve ciliary neuronotrophic factor. Brain Res 1986.
4. Slack JR, Hopkins WG, Pockett S: Evidence for a motor nerve growth factor. Muscle Nerve 1983.
5. Madison R, da Silva CF, Dikkes P, Chiu TH, Sidman RL: Increased rate of peripheral nerve regeneration using bioresorbable nerve guides and a laminin-containing gel. Exp Neurol 1985.
6. Gundersen RW: Response of sensory neurites and growth cones to patterned substrata of laminin and fibronectin in vitro. Dev Biol 1987.
7. Dodd J, Jessell TM: Axon guidance and the patterning of neuronal projections in vertebrates. Science 1988.
8. Williams LR, Varon S: Modification of fibrin matrix formation in situ enhances nerve regeneration in silicone chambers. J Comp Neurol 1985.
9. Cordeiro PG, Seckel BR, Lipton SA, D'Amore PA, Wagner J, Madison R: Acidic fibroblast growth factor enhances peripheral nerve regeneration in vivo. Plast Reconstr Surg.
10. Mackinnon SE: New directions in peripheral nerve surgery. Ann Plast Surg 1989.


The Human Genome Project
Vimal Jayaprakash & Sandhya K.


S1S2 Biotechnology and Biochemical Engineering Mohandas College of Engineering & Technology Nedumangad

Abstract
The Human Genome Project (HGP) is an international scientific research project with a primary goal of determining the sequence of chemical base pairs which make up DNA and of identifying and mapping the approximately 20,000–25,000 genes of the human genome from both a physical and functional standpoint. The project began in 1990 and was initially headed by the Office of Biological and Environmental Research in the U.S. Department of Energy's Office of Science. In summary, the best estimates of total genome size indicate that about 92.3% of the genome has been completed, and it is likely that the centromeres and telomeres will remain un-sequenced until new technology is developed that facilitates their sequencing. Most of the remaining DNA is highly repetitive and unlikely to contain genes, but this cannot be known with certainty until it is entirely sequenced. The roles of junk DNA, the evolution of the genome, the differences between individuals, and many other questions are still the subject of intense interest to laboratories all over the world. This mega project is co-ordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the project, the Wellcome Trust (U.K.) became a major partner, and other countries like Japan, Germany, China and France contributed significantly. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. The project's goals included not only identifying all of the approximately 24,000 genes in the human genome, but also addressing the ethical, legal, and social issues (ELSI) that might arise from the availability of genetic information. Five percent of the annual budget was allocated to address the ELSI arising from the project. Keywords: HGP - Human Genome Project; ELSI - Ethical, Legal and Social Issues

Introduction
The Human Genome Project (HGP) is an international scientific research project with a primary goal of determining the sequence of chemical base pairs which make up DNA and of identifying and mapping the approximately 20,000–25,000 genes of the human genome from both a physical and functional standpoint. The project began in 1990 and was headed by the Office of Biological and Environmental Research in the U.S. Department of Energy's Office of Science. Francis Collins directed the National Institutes of Health National Human Genome Research Institute efforts. A working draft of the genome was announced in 2000 and a complete one in 2003, with further, more detailed analysis still being published. A parallel project was conducted outside of government by the Celera Corporation, which was

formally launched in 1998. Most of the government-sponsored sequencing was performed in universities and research centers from the United States, the United Kingdom, Japan, France, Germany, and China. The mapping of human genes is an important step in the development of medicines and other aspects of health care. While the objective of the Human Genome Project is to understand the genetic makeup of the human species, the project has also focused on several other nonhuman organisms such as E. coli, the fruit fly, and the laboratory mouse. It remains one of the largest single investigative projects in modern science. The Human Genome Project is called a mega project mainly because of the scale of its aims: the human genome is said to have approximately 3 x 10^9 bp, and if the cost of sequencing is US $3 per bp (the estimated cost at the beginning), the total estimated cost of the project would be 9 billion US dollars. Further, if the obtained sequence were to be stored in typed form in books, and if each page of the book contained 1000 letters and each book


contained 1000 pages, then 3300 such books would be required to store the information of DNA sequence from a single human cell. The enormous amount of data expected to be generated also necessitated the use of high speed computational devices for data storage and retrieval and analysis. The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). Several groups have announced efforts to extend this to diploid human genomes including the International HapMap Project, Applied Biosystems, Perlegen, Illumina, JCVI, Personal Genome Project, and Roche-454. The "genome" of any given individual (except for identical twins and cloned organisms) is unique; mapping "the human genome" involves sequencing multiple variations of each gene. The project did not study the entire DNA found in human cells; some heterochromatic areas (about 8% of the total genome) remain un-sequenced.

Background

The project began with the culmination of several years of work supported by the United States Department of Energy. A 1987 report stated boldly, "The ultimate goal of this initiative is to understand the human genome" and "knowledge of the human [genome] is as necessary to the continuing progress of medicine and other health sciences as knowledge of human anatomy has been for the present state of medicine." Candidate technologies were already being considered for the proposed undertaking at least as early as 1985. The $3-billion project was formally founded in 1990 by the United States Department of Energy and the U.S. National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Germany, Japan, China, and India. Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by then US President Bill Clinton and British Prime Minister Tony Blair on June 26, 2000). Ongoing sequencing led to the announcement of the essentially complete genome in April 2003, two years earlier than planned. In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published.

History
In 1976, the genome of the RNA virus Bacteriophage MS2 was the first complete genome to be determined, by Walter Fiers and his team at the University of Ghent (Ghent, Belgium). The idea for the shotgun technique came from the use of an algorithm that combined sequence information from many small fragments of DNA to reconstruct a genome. This technique was pioneered by Frederick Sanger to sequence the genome of the phage ΦX174, a virus (bacteriophage) that primarily infects bacteria and whose genome, in 1977, was the first DNA genome to be fully sequenced. The technique was called shotgun sequencing because the genome was broken into millions of pieces as if it had been blasted with a shotgun. In order to scale up the method, both the sequencing and genome assembly had to be automated, as they were in the 1980s.
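To make the shotgun idea just described concrete, the toy sketch below reassembles a short sequence from overlapping fragments by greedily merging the pair with the largest suffix-to-prefix overlap. It is only an illustration of the principle under simplifying assumptions (error-free reads, unambiguous overlaps); the function names are invented for this example, and real assemblers are far more sophisticated.

# Toy greedy shotgun assembly: repeatedly merge the two fragments
# with the largest suffix/prefix overlap until one contig remains.
# Illustrative sketch only; real assemblers use overlap graphs,
# quality scores, and paired-read constraints.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for size in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i in range(len(frags)):
            for j in range(len(frags)):
                if i != j:
                    olen = overlap(frags[i], frags[j])
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:              # no overlaps left: just concatenate
            return "".join(frags)
        merged = frags[i] + frags[j][olen:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# "Reads" sampled from the sequence ATGCGTACGTTAGC
reads = ["ATGCGTACG", "GTACGTTAG", "CGTTAGC"]
print(greedy_assemble(reads))      # prints ATGCGTACGTTAGC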


Those techniques were shown applicable to sequencing of the first free-living bacterial genome (1.8 million base pairs) of Haemophilus influenzae in 1995, and subsequently to the first animal genome (~100 Mbp). The approach involved the use of automated sequencers producing longer individual reads of approximately 500 base pairs at that time, together with paired sequences separated by a fixed distance of around 2,000 base pairs, which were critical elements enabling the development of the first genome assembly programs for reconstruction of large regions of genomes (known as 'contigs'). Three years later, in 1998, the announcement by the newly formed Celera Genomics that it would scale up the pairwise end sequencing method to the human genome was greeted with skepticism in some circles. The shotgun technique breaks the DNA into fragments of various sizes, ranging from 2,000 to 300,000 base pairs in length, forming what is called a DNA "library". Using an automated DNA sequencer, the DNA is read in 800-bp lengths from both ends of each fragment. Using a complex genome assembly algorithm and a supercomputer, the pieces are combined and the genome can be reconstructed from the millions of short, 800-base-pair fragments. The success of both the publicly and privately funded efforts hinged upon a new, more highly automated capillary DNA sequencing machine, called the Applied Biosystems 3700, that ran the DNA sequences through an extremely fine capillary tube rather than a flat gel. Even more critical was the development of a new, larger-scale genome assembly program, which could handle the 30–50 million sequences that would be required to sequence the entire human genome with this method. At the time, such a program did not exist. One of the first major projects at Celera Genomics was the development of this assembler, which was written in parallel with the construction of a large, highly automated genome sequencing factory. Development of the assembler was led by Brian Ramos. The first version of this assembler was demonstrated in 2000, when the Celera team joined forces with Professor Gerald Rubin to sequence the fruit fly Drosophila melanogaster using the whole-genome shotgun method. At 130 million base pairs, it was at least 10 times larger than any genome previously shotgun assembled. One year later, the Celera team published their assembly of the three billion base pair human genome. The Human Genome Project was a 13-year mega project, launched in 1990 and completed in 2003. The project is closely associated with the branch of biology called bioinformatics. The Human Genome Project international consortium

announced the publication of a draft sequence and analysis of the human genome, the genetic blueprint for the human being. An American company, Celera, led by Craig Venter, and a huge international collaboration of distinguished scientists led by Francis Collins, director of the National Human Genome Research Institute, U.S., both published their findings. This mega project is co-ordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the project, the Wellcome Trust (U.K.) became a major partner, and other countries like Japan, Germany, China and France contributed significantly. Already the atlas has revealed some startling facts. The two factors that made this project a success are:
1. Genetic engineering techniques, with which it is possible to isolate and clone any segment of DNA.
2. The availability of simple and fast technologies for determining DNA sequences.

Being the most complex organisms, human beings were expected to have more than 100,000 genes, or combinations of DNA that provide commands for every characteristic of the body. Instead, the studies show that humans have only about 30,000 genes, around the same as mice, three times as many as flies, and only five times more than bacteria. Scientists note that not only are the numbers similar; the genes themselves, barring a few, are alike in mice and men. In a companion volume to the Book of Life, scientists have created a catalogue of 1.4 million single-letter differences, or single-nucleotide polymorphisms (SNPs), and specified their exact locations in the human genome. This SNP map, the world's largest publicly available catalogue of SNPs, promises to revolutionize both mapping diseases and tracing human history. The sequence information from the consortium has been immediately and freely released to the world, with no restrictions on its use or redistribution. The information is scanned daily by scientists in academia and industry, as well as commercial database companies providing key information services to bio-technologists. Already, many genes have been identified from the genome sequence, including more than 30 that play a direct role in human diseases. By dating the three million repeat elements and examining the pattern of interspersed repeats on the Y-chromosome, scientists estimated the relative mutation rates in the X and the Y chromosomes and in the male and the female germ lines. They found that the ratio of mutations in male


vs. female is 2:1. Scientists point to several possible reasons for the higher mutation rate in the male germ line, including the fact that there are a greater number of cell divisions involved in the formation of sperm than in the formation of eggs.

State Of Completion
There are multiple definitions of the "complete sequence of the human genome". According to some of these definitions, the genome has already been completely sequenced, and according to other definitions, the genome has yet to be completely sequenced. The genome has been completely sequenced using the definition employed by the International Human Genome Project. A graphical history of the human genome project shows that most of the human genome was complete by the end of 2003. However, there are a number of regions of the human genome that can be considered unfinished:
First, the central regions of each chromosome, known as centromeres, are highly repetitive DNA sequences that are difficult to sequence using current technology. The centromeres are millions (possibly tens of millions) of base pairs long and for the most part these are entirely unsequenced.
Second, the ends of the chromosomes, called telomeres, are also highly repetitive, and for most of the 46 chromosome ends these too are incomplete. It is not known precisely how much sequence remains before the telomeres of each chromosome are reached, but as with the centromeres, current technological constraints are prohibitive.
Third, there are several loci in each individual's genome that contain members of multigene families that are difficult to disentangle with shotgun sequencing methods; these multigene families often encode proteins important for immune functions.
Other than these regions, there remain a few dozen gaps scattered around the genome, some of them rather large, but there is hope that all these will be closed in the next couple of years.

In summary, the best estimates of total genome size indicate that about 92.3% of the genome has been completed, and it is likely that the centromeres and telomeres will remain un-sequenced until new technology is developed that facilitates their sequencing. Most of the remaining DNA is highly repetitive and unlikely to contain genes, but this cannot be known with certainty until it is entirely sequenced. Understanding the functions of all the genes and their regulation is far from complete. The roles of junk DNA, the evolution of the genome, the differences between individuals, and many other questions are still the subject of intense interest to laboratories all over the world.

Goals

The sequence of the human DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the University of California, Santa Cruz, and Ensembl, present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data itself is difficult to interpret without such programs. The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. The best current technologies for annotation make use of statistical models that take advantage of parallels between DNA sequences and human language, using concepts from computer science such as formal grammars. Another, often overlooked, goal of the HGP is the study of its ethical, legal, and social implications. It is important to research these issues and find the most appropriate solutions before they become large dilemmas whose effects will manifest in the form of major political concerns. All humans have unique gene sequences; therefore the data published by the HGP does not represent the exact sequence of each and every individual's genome. It is the combined "reference genome" of a small number of anonymous donors. The HGP genome is a scaffold for future work in identifying differences among individuals. Most of the current effort in identifying differences among individuals involves single-nucleotide polymorphisms and the HapMap.
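As a toy illustration of the annotation task mentioned above (locating gene-like features in raw sequence), the sketch below scans the forward strand of a DNA string for simple open reading frames, that is, a start codon followed in frame by a stop codon. This is only a minimal, assumed example with an invented function name; real gene finders rely on statistical models such as hidden Markov models and on evidence from known transcripts.

# Minimal open-reading-frame (ORF) scan on the forward strand.
# Real annotation pipelines use statistical gene models and
# experimental evidence; this only shows the basic idea of finding
# start/stop codons in each of the three reading frames.

START = "ATG"
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Yield (start_index, end_index, orf_sequence) for simple ORFs."""
    seq = seq.upper()
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == START:
                # extend codon by codon until an in-frame stop codon
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq) and (j - i) // 3 >= min_codons:
                    yield (i, j + 3, seq[i:j + 3])
                i = j              # continue scanning after this ORF
            i += 3

dna = "CCATGGCTTGATTTATGAAATAACC"
for start, end, orf in find_orfs(dna):
    print(start, end, orf)         # 2 11 ATGGCTTGA, then 14 23 ATGAAATAA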

Findings

Key findings of the draft (2001) and complete (2004) genome sequences include:
1. There are approximately 20,500 genes in human beings, the same range as in mice and twice that of roundworms. Understanding how these genes express themselves will provide clues to how diseases are caused.
2. Between 1.1% and 1.4% of the genome's sequence codes for proteins.
3. The human genome has significantly more segmental duplications (nearly identical, repeated sections of DNA) than other mammalian genomes. These sections may underlie the creation of new primate-specific genes.
4. At the time the draft sequence was published, less than 7% of protein families appeared to be vertebrate specific.

How It Was Accomplished

Figure: The first printout of the human genome to be presented as a series of books, displayed at the Wellcome Collection, London.

The Human Genome Project was started in 1989 with the goal of sequencing and identifying all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes. It was far too expensive at that time to think of sequencing patients' whole genomes. So the National Institutes of Health embraced the idea of a "shortcut", which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that, since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes.) The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were then ligated into a type of vector known as "bacterial artificial chromosomes", or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The larger, 150,000-base-pair pieces are then joined together to reconstruct the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing. Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at the Whitehead Institute, the Sanger Centre, Washington University in St. Louis, and Baylor College of Medicine.

The Human Genome Project is considered a Mega Project because the human genome has approximately 3.3 billion base-pairs; if the cost of sequencing is US $3 per base-pair, then the approximate cost will be US $10 billion.


If the sequence obtained was to be stored in book form, and if each page contained 1000 base-pairs recorded and each book contained 1000 pages, then 3300 such books would be needed in order to store the complete genome. However, if expressed in units of computer data storage, 3.3 billion base-pairs recorded at 2 bits per pair would equal 786 megabytes of raw data. This is comparable to a fully data loaded CD.
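The storage figures quoted above are easy to verify with a few lines of arithmetic; the short, purely illustrative script below recomputes the cost, the book count, and the 2-bits-per-base estimate.

# Back-of-the-envelope checks for the figures quoted above.
GENOME_BP = 3.3e9        # approximate haploid genome size in base pairs

# Cost at the early estimate of US$3 per base pair
print(f"Sequencing cost: ${GENOME_BP * 3 / 1e9:.1f} billion")   # about $9.9 billion

# Printed storage: 1000 base-pairs per page, 1000 pages per book
books = GENOME_BP / (1000 * 1000)
print(f"Books needed: {books:.0f}")                             # 3300

# Digital storage: 2 bits per base (A, C, G, T), reported in MiB
megabytes = GENOME_BP * 2 / 8 / (1024 ** 2)
print(f"Raw size: {megabytes:.0f} MB")                          # about 787 MB, matching the ~786 MB figure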

Public Versus Private Approaches

In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300,000,000 Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. Celera used a technique called whole-genome shotgun sequencing, employing pairwise end sequencing, which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome. Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement," by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitor UC Santa Cruz was compelled to publish the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded one-half trillion bytes of information from the UCSC genome server in the first 24 hours of free and unrestricted access to the first ever assembled blueprint of our human species. In March 2000, President Clinton announced that the genome sequence could not be patented and should be made freely available to all researchers. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq; the biotechnology sector lost about $50 billion in market capitalization in two days. Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) and Science (which published Celera's paper) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, filling in to approximately 92% of the sequence. The competition proved to be very good for the project, spurring the public groups to modify their strategy in order to accelerate progress. The rivals initially agreed to pool their data, but the agreement fell apart when Celera refused to deposit its data in the unrestricted public database GenBank. Celera had incorporated the public data into their genome, but forbade the public effort to use Celera data. The HGP is the most well known of many international genome projects aimed at sequencing the DNA of a specific organism. While the human DNA sequence offers the most tangible benefits, important developments in biology and medicine are predicted as a result of the sequencing of model organisms, including mice, fruit flies, zebrafish, yeast, nematodes, plants, and many microbial organisms and parasites. In 2004, researchers from the International Human Genome Sequencing Consortium (IHGSC) of the HGP announced a new estimate of 20,000 to 25,000 genes in the human genome. Previously 30,000 to 40,000 had been predicted, while estimates at the start of the project reached as high as 2,000,000. The number continues to fluctuate, and it is now expected that it will take many years to agree on a precise value for the number of genes in the human genome.


Method

The IHGSC used paired-end sequencing plus whole-genome shotgun mapping of large (about 100 kbp) plasmid clones and shotgun sequencing of smaller plasmid sub-clones, plus a variety of other mapping data, to orient and check the assembly of each human chromosome. The Celera group emphasized the importance of the whole-genome shotgun sequencing method, relying on sequence information to orient and locate their fragments within the chromosome. However, they used the publicly available data from the HGP to assist in the assembly and orientation process, raising concerns that the Celera sequence was not independently derived.

Genome Donors

In the IHGSC international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. The donor identities were protected, so neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Dr. Pieter J. de Jong. It has been informally reported, and is well known in the genomics community, that much of the DNA for the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11). HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each), each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both. Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or haps). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe. In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use. On September 4, 2007, a team led by Craig Venter published his complete DNA sequence, unveiling the six-billion-nucleotide genome of a single individual for the first time.


Benefits
The work on interpretation of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, disorders of hemostasis, cystic fibrosis, liver diseases and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management. There are also many tangible benefits for biological scientists. For example, a researcher investigating a certain form of cancer may have narrowed down his/her search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, diseases associated with this gene or other datatypes. Further, deeper understanding of the disease processes at the level of molecular biology may determine new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without them. The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the

similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data from this project. The Human Genome Diversity Project (HGDP), spinoff research aimed at mapping the DNA that varies between human ethnic groups, which was rumored to have been halted, actually did continue and to date has yielded new conclusions. In the future, HGDP could possibly expose new data in disease surveillance, human development and anthropology. HGDP could unlock secrets behind and create new strategies for managing the vulnerability of ethnic groups to certain diseases .It could also show how human populations have adapted to these vulnerabilities.

Advantages
1. Knowledge of the effects of variation of DNA among individuals can revolutionize the ways to diagnose, treat and even prevent a number of diseases that affect human beings.
2. It provides clues to the understanding of human biology.

HapMap


The DNA sequence of any two people is 99.9 percent identical. The variations, however, may greatly affect an individual's disease risk. Sites in the DNA sequence where individuals differ at a single DNA base are called Single Nucleotide Polymorphisms (SNPs). Sets of nearby SNPs on the same chromosome are inherited in blocks. This pattern of SNPs on a block is a haplotype. Blocks may contain a large number of SNPs, but a few SNPs are enough to uniquely identify the haplotypes in a block. The HapMap is a map of these haplotype blocks and the specific SNPs that identify the haplotypes are called tag SNPs. The HapMap should be valuable by reducing the number of SNPs required to examine the entire genome for association with a phenotype from the 10 million SNPs that exist to roughly 500,000 tag SNPs. This will make genome scan approaches to finding regions with genes that affect diseases much more efficient and comprehensive, since effort will not be wasted typing more SNPs than necessary and all regions of the genome can be included.
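The tag-SNP idea can be sketched in a few lines: within a haplotype block, SNPs whose allele patterns across the sampled chromosomes are identical carry redundant information, so one representative per distinct pattern is enough. The example below uses made-up SNP names and data and ignores the correlation thresholds used in practice; it illustrates the principle only, not the HapMap consortium's actual selection algorithm.

# Simplified tag-SNP selection within one haplotype block.
# Each SNP is summarized by its allele pattern across sampled
# chromosomes (0 = reference allele, 1 = variant allele).  SNPs with
# identical patterns carry the same information, so one tag per
# distinct pattern suffices.  Toy data; real tagging also accepts
# near-perfect correlation rather than exact identity.

block = {
    "rsA": (0, 0, 1, 1, 0, 1),
    "rsB": (0, 0, 1, 1, 0, 1),   # same pattern as rsA -> redundant
    "rsC": (1, 0, 0, 1, 1, 0),
    "rsD": (1, 0, 0, 1, 1, 0),   # same pattern as rsC -> redundant
    "rsE": (0, 1, 1, 0, 0, 1),
}

def tag_snps(snps):
    tags, seen = [], set()
    for name, pattern in snps.items():
        if pattern not in seen:
            seen.add(pattern)
            tags.append(name)
    return tags

print(tag_snps(block))   # ['rsA', 'rsC', 'rsE']: 3 tags cover 5 SNPs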

Ethical, Legal & Social Issues

The project's goals included not only identifying all of the approximately 24,000 genes in the human genome, but also addressing the ethical, legal, and social issues (ELSI) that might arise from the availability of genetic information. Five percent of the annual budget was allocated to address the ELSI arising from the project. Debra Harry, Executive Director of the U.S. group Indigenous Peoples Council on Biocolonialism (IPCB), says that despite a decade of ELSI funding, the burden of genetics education has fallen on the tribes themselves to understand the motives of the Human Genome Project and its potential impacts on their lives. Meanwhile, the government has been busily funding projects studying indigenous groups without any meaningful consultation with the groups. The main criticism of ELSI is the failure to address the conditions raised by population-based research, especially with regard to unique processes for group decision-making and cultural worldviews. Genetic variation research such as the HGP is group population research, but most ethical guidelines, according to Harry, focus on individual rights instead of group rights. She says the research represents a clash of cultures: indigenous people's life revolves around collectivity and group decision making, whereas Western culture promotes individuality. Harry suggests that one of the challenges of ethical research is to include respect for collective review and decision making, while also upholding the Western model of individual rights.

Conclusion

Deriving meaningful knowledge from the DNA sequences will define research through the coming decades, leading to our understanding of biological systems. This enormous task will require the expertise and creativity of tens of thousands of scientists from varied disciplines in both the public and private sectors worldwide. One of the greatest impacts of having the human genome sequence may well be enabling a radically new approach to biological research. In the past, researchers studied one or a few genes at a time. With whole-genome sequences and new high-throughput technologies, researchers can approach questions systematically and on a much broader scale: they can study all the genes in a genome, for example, all the transcripts in a particular tissue or organ or tumor, or how tens of thousands of genes and proteins work together in interconnected networks to orchestrate the chemistry of life.


References

Robert Krulwich (2001-04-17). Cracking the Code of Life [Television show]. PBS. ISBN 1-5375-16-9.
"It's personal: Individualised genomics has yet to take off". The Economist, 2010-06-17.
Barnhart, Benjamin J. (1989). "DOE Human Genome Program". Human Genome Quarterly.
DeLisi, Charles (2001). "Genomes: 15 Years Later. A Perspective by Charles DeLisi, HGP Pioneer". Human Genome News 11: 34.
Noble, Ivan (2003-04-14). "Human genome finally complete". BBC News.
"Guardian Unlimited". The Guardian (London). Archived from the original on October 12, 2007.
Human Genome Project Race: http://www.cbse.ucsc.edu/research/hgp_race
Adams, MD, et al. (2000). "The genome sequence of Drosophila melanogaster". Science 287 (5461): 2185–2195.
IHGSC (2004). "Finishing the euchromatic sequence of the human genome". Nature 431 (7011): 931–945.
Waterston RH, Lander ES, Sulston JE (2003). "More on the sequencing of the human genome". Proc Natl Acad Sci U S A 100.
Kennedy D (2002). "Not wicked, perhaps, but tacky". Science 297 (5585).


Reduction Of Alcohol Intoxication In Experimental Animals By Resveratrol


Parvathy S. Nair & Parvathy R.
S8 Biotechnology & Biochemical Engineering Mohandas College of Engineering & Technology, Anad

Abstract
Chronic alcohol consumption induces an increase in oxidative stress. As polyphenolic compounds are potent antioxidants, the experiment aimed to examine whether dietary supplementation of resveratrol (a polyphenol) may attenuate lipid peroxidation (the major end-point of oxidative damage), liver problems, and alcohol-induced mortality resulting from chronic alcohol administration. Three groups of experimental animals (rats or mice) were used. The first group served as the control. The second and third groups were injected daily with 35% ethanol at 3 g/kg body weight, or received up to 40% v/v in drinking water. The third group was supplemented with resveratrol (5 g/kg) in the standard diet or 10 mg/ml in drinking water. Malondialdehyde (MDA), an indicator of oxidative stress, was measured in the liver, heart, brain, and testis. Also, blood levels were determined for transaminase and IL-1. At the end of a 6-week treatment period, MDA, transaminase and IL-1 levels were significantly increased in the liver, heart, brain, and testis. However, when alcohol-treated animals were given resveratrol, the increase in MDA, transaminase and IL-1 levels was significantly reduced to nearly those of control animals. Mortality in the third group was 22% compared to 78% in the second group. Thus the results obtained show that resveratrol is able to alleviate alcohol-induced liver problems and mortality and has a protective effect against oxidative injury. Keywords: IL-1, MDA, Lipid peroxidation, Oxidative stress, Polyphenols, Transaminase.

Introduction
Alcohol Consumption
The average person metabolizes about 10 grams of alcohol (1 standard drink) per hour. There are many harmful effects of alcohol consumption. The short-term effects include behavioral or physical abnormalities, while chronic alcohol consumption causes long-term adverse effects. Some of the common effects include alcoholism, malabsorption, chronic pancreatitis, liver cirrhosis, and cancer. Cirrhosis is a condition in which the

liver slowly deteriorates and malfunctions due to chronic injury. Scar tissue replaces healthy liver tissue, partially blocking the flow of blood through the liver. Alcohol also causes damage to the central nervous system and the peripheral nervous system. In short, alcohol in excessive quantities is capable of damaging nearly every organ and system in the body. It has been found that intake of resveratrol helps reduce the toxic effects caused by excessive alcohol consumption.
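Using the rate of roughly 10 grams of alcohol per hour quoted above, a rough clearance-time estimate can be made as shown below. This is only a back-of-the-envelope illustration; real elimination rates vary with body mass, sex, and liver health, and the drink size used here is the figure assumed in the text.

# Rough clearance-time estimate using the ~10 g of ethanol per hour
# figure quoted above.  Purely illustrative; actual rates vary widely.
GRAMS_PER_STANDARD_DRINK = 10.0   # as assumed in the text
METABOLIC_RATE_G_PER_H = 10.0

def hours_to_clear(drinks):
    return drinks * GRAMS_PER_STANDARD_DRINK / METABOLIC_RATE_G_PER_H

for n in (1, 3, 5):
    print(f"{n} drinks -> about {hours_to_clear(n):.0f} h to metabolize")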


Classification of resveratrol as a polyphenol: base unit, flavone; class/polymer, flavonoid; phenolic subcomponent, resorcinol; example, resveratrol.

Resveratrol
Resveratrol (3,5,4'-trihydroxy-trans-stilbene) is a polyphenolic phytoalexin. Phytoalexins are antimicrobial substances synthesized de novo by plants that accumulate rapidly at areas of incompatible pathogen infection. Phytoalexins produced in plants act as toxins to the attacking organism. Polyphenols are a group of chemical substances, found in plants, characterized by the presence of more than one phenol unit. Resveratrol is present in many plants and fruits, including red grapes, eucalyptus, spruce, blueberries, mulberries, peanuts, and giant knotweed; red wine also contains a significant amount. The longer the grape juice is fermented with the grape skins, the higher the resveratrol content will be.

Resveratrol is an antioxidant, possesses anticancer properties, inhibits lipid peroxidation of low-density lipoprotein (LDL), and prevents the cytotoxicity of oxidized LDL. Resveratrol also increases the activity of some antiretroviral drugs in vitro, and it appears to mimic several of the biochemical effects of calorie restriction. Its antioxidant, anticancer, and antitoxic properties may contribute to increased lifespan and heart health.
Experiment
Animals used for the experiment were either mice or rats. The male Wistar variety was used in the case of rats and the male Balb/c variety in the case of mice, each weighing about 26 g. The animals were divided into 3 groups of 12 animals each. They were maintained on a regular 12-hour light period at a controlled temperature (25 ± 2 °C), with free access to food and water. The mice were adapted for 2 to 5 days prior to initiation of the


experimental protocol. The diet consisted of 58.5% carbohydrates, 15.5% proteins, 2.7% fat, 5.5% minerals, 3.7% fiber, and 12% humidity. The caloric content was 3000 kcal/kg.
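As a rough consistency check on the stated diet composition, applying the standard Atwater factors (about 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat, an assumption not stated in the original) gives an energy density of the same order as the quoted 3000 kcal/kg. The snippet below is purely illustrative.

# Approximate energy density of the stated diet using Atwater factors
# (4 kcal/g carbohydrate, 4 kcal/g protein, 9 kcal/g fat).  Fiber,
# minerals, and moisture are treated as contributing no energy.
composition_pct = {"carbohydrate": 58.5, "protein": 15.5, "fat": 2.7}
kcal_per_g = {"carbohydrate": 4, "protein": 4, "fat": 9}

kcal_per_kg = sum(composition_pct[k] * 10 * kcal_per_g[k]   # 1% of 1 kg = 10 g
                  for k in composition_pct)
print(f"Estimated energy density: {kcal_per_kg:.0f} kcal/kg")   # ~3200 kcal/kg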


Experimental Procedure
The first group of animals was taken as the control. In order to induce alcoholic intoxication, the second group was administered pure alcohol diluted in the drinking water: 10% in the first week, 20% in the second, 30% in the third, and 40% in the subsequent weeks until the end of the study. The third group was administered alcohol in the same manner; in addition, resveratrol was added to the drinking water at 10 mg per 1 ml. Liquids and food were changed twice a week, and the animals were monitored daily for general health. A treatment period of six weeks was given. The timing of sacrifice for the study of liver damage was determined from previous trials in which mortality was seen to be very high after the sixth week. The animals were sacrificed by decapitation. A few animals from each group, destined for the evaluation of mortality, were allowed to live and were followed until death in the 7th week. The mortality and liver-damage studies were each repeated on three occasions to confirm the histological and laboratory alterations (12 mice per group, for a total of 36 animals per group) and again on three occasions to confirm the mortality curves (12 mice per group, for a total of 36 animals per group). Animals from each group were tested for transaminase and IL-1 levels in blood. MDA (malondialdehyde) was tested in the liver, heart, brain and testis.

Transaminase
Transaminase or aminotransferase is an enzyme that catalyzes a reaction between an amino acid and a keto acid. The reaction involves removing the amino group from the amino acid, leaving behind an α-keto acid, and transferring it to the reactant α-keto acid, converting it into an amino acid. The reaction is called transamination. The presence of elevated transaminases can be an indicator of liver damage, and measuring the concentrations of various transaminases in the blood is important in diagnosing and tracking many diseases. In this experiment, aspartate transaminase (AST) and alanine transaminase (ALT) are considered. Transaminases require the coenzyme pyridoxal phosphate, which is converted into pyridoxamine in the first phase of the reaction, when an amino acid is converted into a keto acid. Enzyme-bound pyridoxamine in turn reacts with pyruvate, oxaloacetate, or alpha-ketoglutarate, giving alanine, aspartic acid, or glutamic acid, respectively.

Laboratory Tests
Alcohol in blood was determined by an REA assay, a quantitative reagent system for the measurement of ethanol in murine whole blood. Transaminase (both AST and ALT) was determined on a computer-controlled biochemical analyzer, which uses a colored reaction scheme to detect transaminase enzymatic activity in serum samples.

AST shown in fig.


Interleukins
IL-1 refers to Interleukin-1, a family of cytokines secreted by the immune system. They are a group of three polypeptides (IL-1α, IL-1β and the interleukin-1 receptor antagonist, IL-1Ra). The term interleukin derives from "inter-" (as a means of communication) and "-leukin" (deriving from the fact that many of these proteins are produced by leukocytes and act on leukocytes). As the source of secretion suggests, they play a central role in the regulation of immune and inflammatory responses. A peculiar behavior of interleukins is that low levels of interleukins help the liver to repair damage, while high levels can cause injury and death of liver cells.

Crystal structure of IL-1 shown in fig.

Lipid Peroxidation
In simple terms, lipid peroxidation is the oxidative degradation of lipids. It is a process whereby free radicals "steal" electrons from the lipids in cell membranes, resulting in cell damage. The reaction proceeds by a free-radical chain reaction mechanism. It most often affects polyunsaturated fatty acids, because they contain multiple double bonds between which lie methylene (-CH2-) groups that possess especially reactive hydrogens. As with any radical reaction, it consists of three major steps: initiation, propagation and termination. Initiation is the step in which a fatty acid radical is produced; the initiators in living cells are most notably reactive oxygen species (ROS), such as the hydroxyl radical, which combines with a hydrogen atom to make water and a fatty acid radical. The fatty acid radical is not a very stable molecule, so it reacts readily with molecular oxygen, creating a peroxyl fatty acid radical. This too is an unstable species that reacts with another free fatty acid, producing a different fatty acid radical and a lipid peroxide (or a cyclic peroxide if it reacts with itself). The cycle continues as the new fatty acid radical reacts in the same way. The radical reaction stops when two radicals react and produce a non-radical species; this happens only when the concentration of radical species is high enough for two radicals to be likely to collide. One important antioxidant that terminates the chain is vitamin E; other antioxidants made within the body include the enzymes superoxide dismutase, catalase and peroxidase. If the reaction is not terminated fast enough, there will be damage to the cell membrane, which consists mainly of lipids. In addition, end products of lipid peroxidation may be mutagenic and carcinogenic; for instance, the end product malondialdehyde reacts with deoxyadenosine and deoxyguanosine in DNA, forming DNA adducts, primarily M1G. A DNA adduct is a piece of DNA covalently bonded to a (cancer-causing) chemical, and this has been shown to be the start of a cancerous cell, or carcinogenesis.

Mechanism of Lipid Peroxidation shown in fig.

MALONDIALDEHYDE (MDA)

Fig. showing chemical structure of MDA

Malondialdehyde is an organic compound with the chemical formula CH2(CHO)2. The structure of this species is more complex than the formula suggests. This reactive species occurs naturally and is a marker for oxidative stress. Reactive oxygen species degrade polyunsaturated lipids, forming malondialdehyde. This compound is a reactive aldehyde and is one of the many reactive electrophile species that cause toxic stress in cells and form covalent protein adducts, referred to as advanced lipoxidation end products (ALE), in analogy to advanced glycation end products (AGE). The production of this aldehyde is used as a biomarker to measure the level of oxidative stress in an organism.

Results
The mortality curves were similar in all three test series. The mice in the alcohol group began to die after the second week of alcohol intoxication, with a survival of 22% (4/18) in the seventh week; none of the mice survived beyond eight weeks. The animals belonging to the alcohol plus resveratrol group began to die later (after the fourth week, involving a single mouse), with a survival of 78% (14/18) in the seventh week. The control group in turn presented a survival of 100% (17/18) in the seventh week. Survival was significantly lower in the alcohol group than in the other groups. The mice subjected to alcohol intoxication showed a poorer general condition after the second week, as reflected by decreased activity, immobility, grouping and coarse hair. There were no such observations in the other two groups (control and alcohol plus resveratrol), with no differences between them. The average food intake among the control animals was 4.27 ± 0.86 g/day, and a significant decrease was observed in food ingestion and body weight among the alcohol-consuming mice.

Fig. shows survival of the different groups of mice over time (weeks). In short, alcohol-treated animals showed high levels of transaminase and IL-1 in the bloodstream, and high amounts of MDA in the liver, heart, brain and testis. Alcohol-treated animals administered resveratrol showed transaminase, IL-1 and MDA levels nearly as low as the control. Mortality as of the 7th week in the resveratrol plus ethanol group was 22%, compared to 78% in the alcohol-only group.

Conclusion
From the results we can conclude that resveratrol is able to alleviate alcohol-induced liver damage and mortality, and that it has a protective effect against oxidative injury.

References
Borra MT, Smith BC, Denu JM. J Biol Chem, Mechanism of human SIRT1 activation by resveratrol, 280: 17187-17195 (2007).
Bujanda L, García-Barcina M, Gutiérrez-de Juan V, Bidaurrazaga J, de Luco MF, Gutiérrez-Stampa M, Larzabal M, Hijona E, Sarasqueta C, Echenique-Elizondo M, Arenas JI. BMC Gastroenterol, Effect of resveratrol on alcohol-induced mortality and liver lesions in mice, 6: 19 (2008).
Kaeberlein M, McDonagh T, Heltweg B, Hixon J, Westman EA, Caldwell SD, Napper A, Curtis R, DiStefano PS, Fields S, Bedalov A, Kennedy BK. J Biol Chem, Substrate-specific activation of sirtuins by resveratrol, 280: 17038-17045 (2006).
Tilg H, Diehl AM. N Engl J Med, Cytokines in alcoholic and nonalcoholic steatohepatitis, 343: 1467-1476 (2006).
Martínez J, Moreno JJ. Biochem Pharmacol, Effect of resveratrol, a natural polyphenolic compound, on reactive oxygen species and prostaglandin production, 59: 865-870 (2002).
Bujanda L. Am J Gastroenterol, The effects of alcohol consumption upon the gastrointestinal tract, 95: 3374-3382 (2000).
Yin M, Gabele E, Wheeler MD, Connor H, Bradford BU, Dikalova A, Rysyn I, Mason R, Thurman RG. Hepatology, Alcohol-induced free radicals in mice: direct toxicants or signaling molecules?, 34: 935-942 (2001).
Cai YJ, Fang JG, Yang L et al. Biochimica et Biophysica Acta, Inhibition of free radical-induced peroxidation of rat liver microsomes by resveratrol and its analogues, 1637: 31-38 (2003).


STREAM 2
COMPUTER SCIENCE & INFORMATION TECHNOLOGY

3D Internet
Abin Rasheed & Manju N
S4, Department of Information Technology, Mohandas College of Engineering and Technology, Anad, Thiruvananthapuram. rasheedabin@yahoo.com

Abstract
This is an attempt on our part to present the future of the Internet, which will be in 3D. At a time when 3D televisions and 3D movies are a reality, the 3D Internet should be a topic of discussion.

Introduction
In today's ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. Also known as virtual worlds, the 3D Internet is a powerful new way to reach consumers, business customers, co-workers, partners, and students. It combines the immediacy of television, the versatile content of the Web, and the relationship-building strengths of social networking sites like Facebook. Yet unlike the passive experience of television, the 3D Internet is inherently interactive and engaging. Virtual worlds provide immersive 3D experiences that replicate (and in some cases exceed) real life. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca-Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State.

Augmented Reality
Augmented Reality (AR) technology is a field of computer research which functions by enhancing one's current perception of reality through the combination of real-world and computer-generated data. The elements of a physical real-world environment are augmented by virtual computer-generated imagery; real or fictitious information is mapped onto the real world to create new experiences. Currently, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. In 1990 Tom Caudell, a researcher at aircraft manufacturer Boeing, coined the term "augmented reality"; he applied it to a head-mounted digital display that guided workers through assembling electrical wires in aircraft. Augmented reality is a term for a live direct or indirect view of a physical real-world environment whose elements are augmented by computer-generated imagery. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. In the case of AR, the augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real-world view. The early definition of augmented reality was an intersection between virtual and physical reality, where digital visuals are blended into the real world to enhance our perception. Augmented reality research explores the application of computer-generated imagery in live video streams as a way to expand the real world. Advanced research includes the use of head-mounted displays and virtual retinal displays for visualization purposes, and the construction of controlled environments containing any number of sensors and actuators. There are two commonly accepted definitions of augmented reality; one was given by Ronald Azuma in 1997. According to Azuma's definition, augmented reality:
Combines real and virtual
Is interactive in real time
Is registered in 3D
This technology is about to be combined with 3D web browsing possibilities to develop the Internet/virtual world. Mobile phones will be the first to commercially offer AR: smartphones, which include GPS hardware and cameras, are crucial to driving the evolution of augmented reality (e.g. movie posters will trigger interactive experiences such as a trailer on an iPhone). Developers have recently released augmented reality apps for the Google Android-powered HTC G1 handset. Brian Selzer is a co-founder of Ogmento, a company that creates augmented reality products; Layar, a company based in Amsterdam, released an AR browser for Android. Nokia is currently developing an AR application named Point and Find.

General Working
Another perspective on the 3D Internet can be taken from the production of 3D images and videos. Taking the well-known ways of doing this, using various cameras located in different positions, as a starting point, one can extend this view to a much broader approach. Many young people are prepared to put their own lives on the internet, e.g. by uploading pictures and videos to popular web sites. This concept can be extended further: people witnessing an event can make a video of it using a mobile phone and send it in real time via a mobile or wireless network to a web site hosting a 3D construction program. This program takes the images shot from different locations and creates a 3D video which is made available on the net. 2D images could be converted to 3D images by using simple software like Photoshop, which is explained in detail in the presentation. This concept could be used in the various 3D construction programs present on 3D-supporting web sites. Similarly, 2D videos could be converted to 3D videos. Such software provides the opportunity for any internet user to connect with friends, meet new people with the same interests and create a virtual home. An easy decoration tool allows anyone to build a unique 3D version of their web site or social network page; see them, wave to them and chat with them with 3D avatar multi-user chat. Any 3D designer or business can create places and worlds that can be visited with ExitReality, as well as widgets, gadgets and other applications that can be used to decorate 3D web spaces. Users would normally spend no longer than a couple of minutes on a 2D web site; in a 3D environment this time can extend much further, which creates huge potential for the web site owner to maximize user engagement. It is compatible with the Firefox, Internet Explorer and Chrome browsers, and ExitReality includes a 3D search engine which gives the user access to the biggest repository of 3D objects and worlds on the internet.

Advantages
Indeed, practically anything that can be done in the real world can be reproduced in the 3D Internet, with added benefits.

Web Based Learning System
Universities can now deliver their entire campus from their website and provide immersive 3D education outcomes using existing learning management systems. ExitReality-powered virtual worlds are hosted and controlled on the university's own servers, so that the utmost security is maintained and peace of mind is ensured.

3D Navigation System

Virtual Worlds
Virtual worlds provide immersive 3D experiences that replicate (and in some cases exceed) real life. People who take part in virtual worlds stay online longer with a heightened level of interest. The most well-known of the 40-some virtual world platforms today is Second Life; its in-world residents number in the millions. As residents, they can: remotely attend group meetings, training sessions and educational classes; engage in corporate or community events; view and manipulate statistical information and other data, such as biological or chemical processes, in three dimensions; try out new products, electronic devices and gadgets; take part in virtual commerce; and participate in brand experiences that carry over to the real world. In 2008 Hip Digital Media from Canada reputedly acquired vSide for USD $40 million; today ExitReality (a 3D Internet company) announced the acquisition of San Francisco-based vSide to create a global 3D teen experience with its existing groundbreaking technology. vSide is a rich virtual world where every user gets an apartment to decorate and throw parties. Users can hang out with their friends, shop, listen to music, play games, watch videos or go out clubbing in one of three stunning neighborhoods.

3D Video Conferencing
Employees will be able to meet in an open or secure environment. Product demonstrations. All in real time. Reduced travel expenses.


Obstacles to Commercial Success in 3D Worlds
Advertisers, marketers and organizations have yet to capitalize on the vast potential of the 3D Internet. Factors inhibiting the commercial usability of virtual worlds include:
The limited effectiveness of traditional media techniques, such as fixed-location billboards, when applied to virtual worlds. In the 3D Internet, participants have complete control over where they go and what they do, and can move their avatars instantly through virtual space. What is required is a means of making content readily available to people not only at specific points, but throughout virtual worlds.
The lack of an effective way of enabling people in virtual worlds to encounter commercial content that enhances their virtual experience. Because participants have a choice in whether to interact with an offering, it is essential that it be viewed as relevant and valuable to their particular goals in the 3D Internet.
An inconsistent means for enabling in-world participants to easily interact with and access video, rich multimedia, and Web content.
The lack of a cohesive means for advertisers and content providers to receive the detailed metrics required to measure success.

Reference
[1] www.google.com
[2] www.wikipedia.com


Analysis And Implementation Of Message Digest For Network Security


S.Naresh Kumar, G.Karanveer Dhiman
3rd Year B.Tech, Department of Information Technology, Adhiyamaan College of Engineering. sungonaresh@gmail.com, karanveerdhiman14@gmail.com

Abstract
With communication playing a vital role in our day-to-day life, computers are becoming more and more important, and networking of computers has become essential. Most people rely on the Internet for various communication needs. Though authentication techniques have grown to a great extent, hacking and cracking have become very common in the Internet world, and one cannot be sure that the information received is valid; the integrity of data has become a great question mark. A common practice for maintaining the integrity of data is to apply a message digest algorithm to obtain the message digest for the required data, digitally sign it with a digital signature algorithm, and then transmit it. In this process, a certificate is issued for validation and a private key is given to the signer, with a common public key (for decryption purposes). This clearly reveals that the message digest is what provides data integrity. Various message digest algorithms are used, such as SHA, MD5, etc. Our approach is to take the MD5 algorithm and to improve the security of data by eliminating its possible weaknesses. We have chosen the MD5 algorithm because it is the most widely used and it is the one proposed for the emerging IPv6 standard.

Contents
Introduction
What is network security?
  1. Secrecy
  2. Authentication
Message integrity
  1. Digital signature
  2. Message digest
  3. Key management
Key distribution and certification
Intrusion
Message digests
Message digest algorithm
Terminology and notation
MD5 algorithm description
Error detection
Merits
Conclusion
Bibliography

Introduction
In a race to improve security infrastructures faster than hackers and stealers, who keep inventing ways to penetrate passwords and firewalls, new technologies are being developed to confirm authentication, secure information transactions, etc. Some of the factors characterizing network security are secrecy, authentication, message integrity, key distribution and certification. Of these, message integrity is of main concern because it helps in accurate information transmission. This is where the MESSAGE DIGEST comes into action. After an overview of the present scenario in network security and its threats, we will go on to highlight what message digests are, the MD5 algorithm, and the improvement to the message digest scheme with MD5 proposed by us.

What is Network Security?
Secrecy: only the sender and the intended receiver should understand the message contents; the sender encrypts the message and the receiver decrypts it.
Authentication: sender and receiver want to confirm each other's identity.
Message integrity: sender and receiver want to ensure the message is not altered (in transit, or afterwards) without detection.


What Is Network Security?
The above block indicates the various strategies in network security. We shall discuss each one of them separately as follows.
Secrecy: This is achieved through cryptography, the process of securely transmitting data over a network in such a way that if the data is intercepted, it cannot be read by unauthorized users. Cryptography involves two complementary processes: encryption is the process of taking data and modifying it so that it cannot be read by untrusted users, and decryption is the process of taking encrypted data and rendering it readable for trusted users. Encryption and decryption are performed using algorithms and keys. An algorithm, a series of mathematical steps that scrambles data, is the underlying mathematical process behind encryption.
Authentication: The process of validating users' credentials to allow them access to resources on a network. Authentication can be classified according to how the credentials are passed over the network and includes the following methods:
1. Anonymous access: This method is supported by Microsoft Internet Information Services (IIS) and allows anonymous users on the Internet access to Web content on your server.
2. Basic Authentication: This method transmits passwords as clear text and is often used in UNIX networks and for File Transfer Protocol (FTP) services.
3. Windows NT Challenge/Response Authentication: This is the standard secure authentication method for Windows NT domain controllers.
4. Kerberos v5 Security Protocol: This is the standard secure authentication method for Windows 2000 domain controllers.
Message Integrity: This is managed through the following.
Digital Signatures: A digital signature is an encrypted file accompanying a program that indicates exactly where the file is coming from. In a digital signature, a private key is used for encryption and a public key is used for decryption. While signing an object, the signer calculates a digest of the object using a message digest algorithm such as MD5. The digest is used as a fingerprint for the object. This digest is in turn encrypted using the private key to produce the object's digital signature. The signature is verified by decrypting the signature using the signer's public key; as a result of this decryption, the digest value is produced. The object's digest value is calculated and compared with the decrypted digest value. If both values match, the signature is verified. The document representing this signature is called a Certificate.
Message Digests: Cryptographically secure message digests, such as MD5 and SHA-1. These algorithms, also called one-way hash algorithms, are useful for producing "digital fingerprints" of data, which are frequently used in digital signatures and other applications that need unique and unforgeable identifiers for digital data.
Key Management: A set of abstractions for managing principals (entities such as individual users or groups), their keys, and their certificates. It allows applications to design their own key management systems and to interoperate with other systems at a high level.
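As a minimal sketch of the sign-then-verify flow described above (not the paper's own implementation), the following Python snippet assumes the third-party cryptography package and uses SHA-256 in place of MD5, since modern libraries discourage MD5 for signatures:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    message = b"important document"

    # Signer: hash the message and encrypt the digest with the private key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verifier: recompute the digest and compare it against the decrypted signature.
    try:
        private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("signature verified")
    except InvalidSignature:
        print("message or signature was tampered with")

The point of the sketch is only the flow: the digest (not the whole message) is what gets signed, and verification fails if either the message or the signature changes.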


Key Distribution And Certification:
In order to establish a shared secret key over the network, Key Distribution Centers (KDCs) are used, which act as intermediaries between two entities. For trusted transaction of keys, certification authorities are used. A certification authority (CA) binds a public key to a particular entity. An entity (person, router, etc.) can register its public key with the CA by providing proof of identity; the CA creates a certificate binding the entity to the public key, and the certificate is then digitally signed by the CA. In short, a certificate is a digitally signed statement from one entity saying that the public key of some other entity has some particular value. At present the following companies offer certificate authentication services: VeriSign http://www.verisign.com and Thawte Certification http://www.thawte.com.

Intrusion:
Have you checked your website lately? Does it still have the content that you put there? Website defacement is the most common type of attack; it accounted for 64% of the attacks reported, by far exceeding proprietary information theft at 8%. An intruder may use an anonymous FTP area as a place to store illegal copies of commercial software, consuming disk space and generating network traffic, which may also result in denial of service. Knowing merely that an intrusion has occurred is worth less than knowing that an intrusion has occurred and which data was tampered with. Our implementation was designed to serve both purposes.

Message Digests:
Message digests are used to calculate a checksum from any kind of data. The calculation is one-way, meaning it cannot be reversed. The basic idea is that such a checksum is completely different if even a single byte in the original data is changed, and that it is not possible (or at least very expensive) to calculate two identical checksums from two different pieces of data. Therefore message digests are used to verify data, usually files. Normally, hash functions are used to calculate the checksum of data; their properties are summarized below.

Hash function properties:
It is computationally expensive to public-key-encrypt long messages. The goal is a fixed-length, easy-to-compute digital signature or fingerprint (the message digest): apply a hash function H to m to get a fixed-size message digest H(m).
The function is many-to-1 and produces a fixed-size message digest (fingerprint).
Given a message digest x, it is computationally infeasible to find m such that x = H(m); it is also computationally infeasible to find any two messages m and m' such that H(m) = H(m').

Message Digest Algorithm (Or One-Way Hash Function):
A function that takes arbitrary-sized input data (referred to as a message) and generates a fixed-size output, called a digest (or hash). A digest has the following properties: it should be computationally infeasible to find another input string that will generate the same digest, and the digest does not reveal anything about the input that was used to generate it. Message digest algorithms are used to produce unique and reliable identifiers of data; the digests are sometimes called the "digital fingerprints" of data. Some digital signature algorithms use message digest algorithms for parts of their computations. Some digital signature systems compute the digest of a message and digitally sign the digest rather than signing the message itself. This can save a lot of time, since digitally signing a long message can be time-consuming. The two most popular message digests are the Secure Hash Algorithm (SHA) and MD5.

MD5 Algorithm: Most widely used. Computes a 128-bit message digest in a 4-step process. Given an arbitrary 128-bit string x, it appears difficult to construct a message m whose MD5 hash is equal to x. Maximum size of data: unlimited.
SHA-1 Algorithm: Also used; a US standard. Produces a 160-bit message digest. Maximum size of data: 2^64 bits.

Secure Hash Algorithm: The SHA algorithm was developed by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA). Actually there exists more than one SHA algorithm now (SHA-256, SHA-384, and SHA-512 for 256, 384 and 512-bit digests respectively), but SHA usually means SHA-1, which is a 160-bit digest (20 bytes). SHA is slower but more secure than MD5. Popular tools using SHA are the jarsigner tool (shipped with Java) and Pretty Good Privacy (PGP, used for digitally signing mails).

MD5 Digest: The MD5 digest was developed by Ron Rivest of MIT in 1991. It is faster than the Secure Hash Algorithm and has no limitation on the size of the data to digest, but it is less secure.

The MD5 Algorithm:
The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. The MD5 algorithm is intended for digital signature applications, where a large file must be "compressed" in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA. The MD5 algorithm is designed to be quite fast on 32-bit machines. In addition, the MD5 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly.

Terminology And Notation:
In this document a "word" is a 32-bit quantity and a "byte" is an eight-bit quantity. A sequence of bits can be interpreted in a natural manner as a sequence of bytes, where each consecutive group of eight bits is interpreted as a byte with the high-order (most significant) bit of each byte listed first. Similarly, a sequence of bytes can be interpreted as a sequence of 32-bit words, where each consecutive group of four bytes is interpreted as a word with the low-order (least significant) byte given first. Let x_i denote "x sub i". If the subscript is an expression, we surround it in braces, as in x_{i+1}. Similarly, we use ^ for superscripts (exponentiation), so that x^i denotes x to the i-th power. Let the symbol "+" denote addition of words (i.e., modulo-2^32 addition). Let X <<< s denote the 32-bit value obtained by circularly shifting (rotating) X left by s bit positions. Let not(X) denote the bit-wise complement of X, and let X v Y denote the bit-wise OR of X and Y. Let X xor Y denote the bit-wise XOR of X and Y, and let XY denote the bit-wise AND of X and Y.
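As a quick, hedged illustration of the fingerprint properties discussed above (not part of the original paper), this Python snippet uses only the standard hashlib module to show the 128-bit MD5 and 160-bit SHA-1 digests, and how a single changed byte yields a completely different checksum:

    import hashlib

    message = b"The quick brown fox jumps over the lazy dog"
    tampered = b"The quick brown fox jumps over the lazy cog"  # one byte changed

    # MD5 produces a 128-bit (16-byte) digest; SHA-1 produces a 160-bit (20-byte) digest.
    print("MD5   :", hashlib.md5(message).hexdigest())   # 32 hex characters
    print("SHA-1 :", hashlib.sha1(message).hexdigest())  # 40 hex characters

    # Changing a single byte yields an unrelated checksum (the avalanche effect).
    print("MD5 of tampered message:", hashlib.md5(tampered).hexdigest())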

MD5 Algorithm Description:

We begin by supposing that we have a b-bit message as input, and that we wish to find its message digest. Here b is an arbitrary nonnegative integer; b may be zero, it need not be a multiple of eight, and it may be arbitrarily large. We imagine the bits of the message written down as follows: m_0 m_1 ... m_{b-1}. The following five steps are performed to compute the message digest of the message.

Step 1: Append Padding Bits
The message is "padded" (extended) so that its length (in bits) is congruent to 448, modulo 512. That is, the message is extended so that it is just 64 bits shy of being a multiple of 512 bits long. Padding is always performed, even if the length of the message is already congruent to 448, modulo 512. Padding is performed as follows: a single "1" bit is appended to the message, and then "0" bits are appended so that the length in bits of the padded message becomes congruent to 448, modulo 512. In all, at least one bit and at most 512 bits are appended.

Step 2: Append Length
A 64-bit representation of b (the length of the message before the padding bits were added) is appended to the result of the previous step. In the unlikely event that b is greater than 2^64, only the low-order 64 bits of b are used. (These bits are appended as two 32-bit words, low-order word first, in accordance with the previous conventions.) At this point the resulting message (after padding with bits and with b) has a length that is an exact multiple of 512 bits. Equivalently, this message has a length that is an exact multiple of 16 (32-bit) words. Let M[0 ... N-1] denote the words of the resulting message, where N is a multiple of 16.

Step 3: Initialize MD Buffer
A four-word buffer (A, B, C, D) is used to compute the message digest. Here each of A, B, C, D is a 32-bit register. These registers are initialized to the following values in hexadecimal (low-order bytes first):
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10

Step 4: Process Message in 16-Word Blocks
We first define four auxiliary functions that each take as input three 32-bit words and produce as output one 32-bit word:
F(X,Y,Z) = XY v not(X)Z
G(X,Y,Z) = XZ v Y not(Z)
H(X,Y,Z) = X xor Y xor Z
I(X,Y,Z) = Y xor (X v not(Z))
In each bit position F acts as a conditional: if X then Y else Z. (The function F could have been defined using + instead of v, since XY and not(X)Z will never have 1's in the same bit position.) It is interesting to note that if the bits of X, Y, and Z are independent and unbiased, then each bit of F(X,Y,Z) will be independent and unbiased. The functions G, H, and I are similar to F, in that they act in "bitwise parallel" to produce their output from the bits of X, Y, and Z, in such a manner that if the corresponding bits of X, Y and Z are independent and unbiased, then each bit of G(X,Y,Z), H(X,Y,Z), and I(X,Y,Z) will be independent and unbiased. The function H is the bitwise "xor" or "parity" function of its inputs. This step uses a 64-element table T[1 ... 64] constructed from the sine function. Let T[i] denote the i-th element of the table, which is equal to the integer part of 4294967296 times abs(sin(i)), where i is in radians.
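To make Steps 1, 2 and the sine table above concrete, here is a small Python sketch of those rules (an illustration only, with names of our own choosing, not an official implementation):

    import math

    def md5_pad(message: bytes) -> bytes:
        # Step 1: append a single '1' bit (the byte 0x80) followed by '0' bits
        # until the length in bits is congruent to 448 modulo 512 (56 bytes mod 64).
        bit_length = len(message) * 8
        padded = message + b"\x80"
        while len(padded) % 64 != 56:
            padded += b"\x00"
        # Step 2: append the original length in bits as a 64-bit little-endian value.
        padded += (bit_length % 2 ** 64).to_bytes(8, "little")
        return padded

    # Step 4's sine table: T[i] = integer part of 4294967296 * abs(sin(i)), i in radians.
    T = [int(4294967296 * abs(math.sin(i))) for i in range(1, 65)]

    assert len(md5_pad(b"abc")) % 64 == 0  # padded length is a multiple of 512 bits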


The following shows the four rounds of operation involved:

/* Process each 16-word block. */
For i = 0 to N/16 - 1 do
    /* Copy block i into X. */
    For j = 0 to 15 do
        Set X[j] to M[i*16 + j].
    End /* of loop on j */

The general representative equation for all the rounds can be given as follows:
    a = b + ((a + Q(b,c,d) + X[k] + T[i]) <<< s)
where a, b, c, d are 32-bit registers, X[k] is a word of the input data block, T[i] is a table element, s is the number of bit positions to be shifted, and Q(b,c,d) represents the function used in each round:
Round 1: F(b,c,d)
Round 2: G(b,c,d)
Round 3: H(b,c,d)
Round 4: I(b,c,d)
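A hedged Python sketch of the auxiliary functions and the generic round step above, with all arithmetic reduced modulo 2^32 as required (helper names such as rol32 are ours, not from the original):

    MASK = 0xFFFFFFFF  # all additions are modulo 2^32

    def rol32(x, s):
        # circular left shift of a 32-bit word by s positions (the <<< operator)
        return ((x << s) | (x >> (32 - s))) & MASK

    def not32(x):
        return ~x & MASK

    # The four auxiliary functions used in rounds 1-4.
    def F(x, y, z): return (x & y) | (not32(x) & z)
    def G(x, y, z): return (x & z) | (y & not32(z))
    def H(x, y, z): return x ^ y ^ z
    def I(x, y, z): return y ^ (x | not32(z))

    def round_step(a, b, c, d, x_k, t_i, s, q):
        # a = b + ((a + Q(b,c,d) + X[k] + T[i]) <<< s), where q is one of F, G, H, I
        return (b + rol32((a + q(b, c, d) + x_k + t_i) & MASK, s)) & MASK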

Step 5: Output
The message digest produced as output is A, B, C, D. That is, we begin with the low-order byte of A and end with the high-order byte of D.

Error Detection:
Consider the case where the original data (for which the message digest has been computed) is damaged or modified. As expected, the digest generated on the receiver side is different, and the user can tell that the contents he or she is viewing are not valid. In order to find where the data was tampered with, message digests corresponding to every 100 characters are generated as in the first phase. These are then compared with the digests of the original data, which indicates approximately where the error has happened. Thus our model provides a way to identify where the error has occurred.

Merits:
1. The generation of a digest is very fast, and the digest itself is very small and can easily be encrypted and transmitted over the internet.
2. It is very easy and fast (and therefore cheap) to check some data for validity.
3. The algorithms are well known and implemented in most major programming languages, so they can be used in almost all environments.

Conclusion:
We believe that one of the noteworthy things in our contribution is that it has very well removed the general opinion that "MD5 is fast but not secure". Thus the implementation of the MD5 message digest algorithm, incorporated with our prescribed improvement, will definitely move a step ahead in maintaining the security of information. We sincerely hope that, with necessary modifications and future improvements, our model will contribute to a minor extent in maintaining the integrity of data for network security.

Bibliography:
http://www.jguru.com/faq/view.jsp?EID=3822
http://www.java.sun.com/products/JDK/1.2/docs/guide/security
http://www.infoserversecurity.org/files/misc/rfc/rfc1321.txt
http://www.rsasecurity.com/rsalabs/faq/3-6-6.html


Artificial Intelligence
S.Gokul, F.Ivin Prasanna
3rd Year B.Tech, Department of Information Technology, Adhiyamaan College of Engineering, Hosur-635109. handrope10@gmail.com, ivinprasanna@gmail.com

Abstract
Artificial Intelligence is the study and design of machines and of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. This paper throws light on implementing artificial intelligence in word processing and word editing software. This can be accomplished by teaching the machine both the logic and the sense that we possess over the language. It is a very tedious job, yet it can be achieved through the concept of "Patternisation". This means that the language is stored as a set of rules and regulations, so that the machine can automatically generate its own sentences and can also easily identify incorrect ones. This is a key concept which is going to make the ultimate dream of humans (AI) come true. These ideas are purely original; this is a research work.

Introduction
It is artificially imparting the intelligence of humans to machines, so that they can learn and act by themselves like us. Major AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. AI has been a major plot device for fiction writers since the start of the research; particularly in the field of robotics, AI finds its applications to be extraordinary. Generally, the fiction works are more widely known than the reality of AI. In contrast to fiction, AI has been stuck on a basic-level problem which is strong enough to hold it back until it is solved properly. Apart from this, its development is also held back by many other long-standing problems, which will not let many generations enjoy AI. In short, we can simply say that it is almost impossible to replicate the actions of the human brain artificially using the range of machines present now. The central problems of AI mostly include brain-involving activities such as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence is the most difficult part to impart.

Problems of AI
We have almost picked out all the basic-level problems in developing AI. The basic-level problems are emphasized because there is never a strong building without a good basement. Let us discuss all the materials required for a good basement of AI.

Basic Problems
Lack Of Sense: AI uses computers because they are the best available tool, not because they are the object of study. Now let us discuss the problems, keeping MS-WORD's Autocorrect option as the plot; this highly famous option has never reached people as a basic and efficient tool of AI. Coming back to the problems, the chief problems are concerned with language. The stumbling block is in making the computer understand the language. We can program the entire language into the machine but cannot make it understand the language. Let us consider the following situation: we have programmed the entire English grammar into a computer, with all grammatical rules and regulations. Now let us see how the computer reacts to the following sentences.
1) Cat ate rat
2) Rat ate cat
If we instruct the computer to check the correctness of the two sentences, the computer will surely report that both sentences are correct. This is a big drawback. We humans can easily sort out that the second sentence is grammatically correct but not sensible. Because of the sense that we possess, we can understand that the sentence is wrong, but it is nearly impossible to program that sense into machines. This is where computers are one step behind humans.


For the time being, we have not yet solved this problem even for phrases. Example:
1) Leave bus
2) Leave letter
So far, scientists have not been able to program a machine even to find the difference in the grammatical sense of the above phrases. It is very crucial for a machine to understand the sense in the language. Now let us consider a third example.
1) It is midnight.
2) It is dark.
These two sentences are correct. But the problem that crops up is that the computer cannot understand the coherence between these two sentences. Machines totally ignore that these two sentences have some connection between them. This leads us to the point that machines do whatever they are programmed to do. They do not understand and then act; they simply convert the user's data into processed output.

Intelligence
Intelligence is the linking of present events with experience and coming out with new ideas. Humans naturally possess this ability. Our task, as discussed above, is to impart this ability artificially to machines. I am mentioning this again because there is a point to note: the linking process is the key. Let us consider this example: I saw TOM & ________. Ans: ? Your mind would have answered in a fraction of a second, without any hesitation, that the answer is JERRY. Yes, this is intelligence. It becomes possible for us to do this because of a simple mechanism: as soon as the brain receives a question, it searches the records saved in it. There is memory of more than 1 TB in the brain, and in a fraction of a second our brain finishes analyzing all this huge data and finds the answer by linking the given question to each possibly related piece of data. Having analyzed the brain mechanism, let us discuss whether it is possible to achieve this mechanism in machines. Consider our two eyes as a video camera and the brain as the processing and storage unit. Let us assume that the eyes capture video for 12 hours a day and that the videos are stored in the standard high-quality AVI format. Let us do a simple calculation: a 12-hour movie file in standard AVI format is about 10 GB (approx.), so the movie files stored in one year come to about 3650 GB. Can a machine be efficient enough to sort the data, delete the unwanted, remember the most wanted, process the bulk of data in a fraction of a second and always give us the correct information? What would be the size of the storage unit and the processing speed of a machine designed to do so? Now you can estimate the might of the brain, which does all this and more, and keeps silent inside us with a negligible weight of 1.5 kg. Can anyone replicate it artificially, at least making the machine perform 1/100th of its work? For everything, the answer is NO at present.

Betty Crows Hook:
Two crows named BETTY and ABEL learnt to use bent wire to fish a bucket of food from a vertical tube (as in the picture). Then ABEL flew off with a hook. BETTY tried to use a single piece of wire for a while and then failed. The next thing she did was a great example of intelligence: she pushed one end of the wire into the tape holding the tube and moved the other end using her beak, making a hook. She then used the hook to lift the bucket. She did this correctly 9 times out of 10.
http:\news.bbc.co.uk\hi\sci\tech\2178920.stm (To find more, search GOOGLE for: Betty crow hook.) This was reported and shown on BBC in August 2002. This is one of the examples portraying the intelligence of living organisms. A simple question can be raised: can a robot replicate BETTY's mental process?


The answer is no. Machines just do what they are programmed to do. You can even program a machine to do this work, but it won't come under AI; AI is more about making machines learn by themselves. Let us consider the same event and analyze it a bit differently, taking the crow to be a machine. In its first attempt with a straight wire, it could not produce the desired output, so it would either produce an error statement or give improper output; there is no chance for it to take the effort to prepare a hook. What actually happens inside the crow is that it learns that the output is improper, identifies the required output and changes its own code to obtain it. Can a machine edit its own code according to the output? The reasons for the behavior of the crow are: 1) innate behavior, 2) learnt adaptation, and 3) self-knowledge. Now let us see a simple way to overcome this problem to a certain level.

Patternisation
Mentioning it again and again, AI is simply making machines replicate the human brain. So let us discuss how language is handled by the brain: it uses the concept of patternisation. For example, let us consider a sentence: 1) Ram is a teacher. The sentence pattern of this sentence is S+V+C. To be straightforward, the brain recognizes sentences in patterns like this, and it finds an error if any part of a sentence mismatches the pattern. So our task is to code the patterns and rules into the computer. Rules cover an important portion of this, due to the flexibility of language. Flexibility leads to a lot of exceptions, which can all be translated to the machine as rules; whenever there is a special case or an exception, a rule must be inserted to maintain stability. For example, let us consider the pattern S+V. In this pattern, a sentence is generated when a subject and a verb are fed. But the sentence generated is correct only when the subject is animate, so we need to insert a rule there which states: "If the subject is inanimate, the sentence is wrong; else it is correct." The machine can thereby generate and check all the sentences of this pattern, and this can be implemented for all sentence patterns. This is a successful first step, though it is only at the sentence level.
Below there is a schematic procedure for the process mentioned above. The diagram clearly portrays the flow of data and the flow of controls to generate the sentences of the pattern S+V.
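In the same spirit as the schematic, here is a tiny, hypothetical Python sketch of the S+V rule described above (the word lists and function names are invented purely for illustration):

    # Toy illustration of the rule: "If the subject is inanimate, the sentence is wrong."
    ANIMATE_SUBJECTS = {"Ram", "the cat", "the teacher"}
    INANIMATE_SUBJECTS = {"the stone", "the table"}
    VERBS = {"runs", "sleeps", "eats"}

    def check_sv_sentence(subject, verb):
        """Return True if 'subject verb' is an acceptable S+V sentence."""
        if verb not in VERBS:
            return False
        return subject in ANIMATE_SUBJECTS  # inanimate subjects are rejected

    def generate_sv_sentences():
        """Generate every acceptable sentence of the pattern S+V."""
        for subject in ANIMATE_SUBJECTS:
            for verb in VERBS:
                yield f"{subject} {verb}"

    print(check_sv_sentence("Ram", "sleeps"))        # True
    print(check_sv_sentence("the stone", "sleeps"))  # False: inanimate subject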

(This is the schematic representation of the algorithm for the automatic generation of an infinite number of sentences of the pattern "S+V".) Apart from these, the problems in the development of AI are discussed below:

Planning
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices. In some planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if this is not true, it must periodically check whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal.

General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project. Many of the problems discussed above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow Reason (the author's argument), Knowledge (know what it's talking about), and Social Intelligence (faithfully reproduce the author's intention). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

What Is Artificial Intelligence?
It is more general than some definitions imply: AI is a (relatively) new approach to some very old problems about the nature of mind and intelligence. It combines with and contributes to several other disciplines, including psychology, philosophy, linguistics, biology, anthropology, logic, mathematics, computer science and software engineering, and other subjects that study humans and other animals. AI is neither a branch of Computer Science, nor a purely engineering discipline.

The Change Required:
A tiny group of scientific fields can never achieve this task. It requires the involvement and teamwork of numerous scientific fields, and also the effective participation of the linguistics community; without this teamwork, artificial intelligence can never be achieved. AI is a difficult dream of humans. In the present context, what we can do is simply enjoy and admire the efficiency of AI in fiction movies and novels; the next few generations have no chance of even smelling the sweet fragrance of AI, even in their graves. Patternisation is the present hold-up in the development of AI. If patternisation is achieved, AI can be achieved at the interaction level, thus leading us to the hopeful and stimulated development of the next step.

References
[1] Artificial Intelligence [Hardcover] by Kevin Knight
[2] en.wikipedia.org/wiki/Artificial_intelligence
[3] www.encyclopedia.com/topic/artificial_intelligence.aspx
[4] en.wikipedia.org/wiki/Microsoft_Word


ARTIFICIAL INTELLIGENCE IN VIRUS DETECTION AND RECOGNITION


Parvathy Nair & Parvathy.R.Nair
S8, Computer Science Department, Mohandas College of Engineering and Technology, Trivandrum. mystra2@gmail.com

Abstract
Artificial intelligence (AI) techniques have played an increasingly important role in virus detection. At present, several principal artificial intelligence techniques applied in virus detection are proposed, including heuristic techniques, data mining, agent techniques, artificial immune systems, and artificial neural networks. It is believed that they will improve the performance of virus detection systems and promote the production of new artificial intelligence algorithms. This paper introduces the main artificial intelligence technologies which have been applied in anti-virus systems (heuristic scanning). Virus detection is based on recognition of a signature or string of code which identifies a certain virus. Similar to how investigators use characteristics to identify criminals, antivirus software looks for digital footprints in order to recognize a virus. Nevertheless, to detect an unknown virus, a particular signature or recognized code does not yet exist; for this reason a heuristic scan is used. Heuristic methods are based on the piece-by-piece examination of a virus, looking for a sequence or sequences of instructions that differentiate the virus from normal programs.

Introduction To Heuristic Scanning

Malware
Malware, short for malicious software, is software designed to infiltrate a computer system without the owner's informed consent. The expression is a general term used by computer professionals to mean a variety of forms of hostile, intrusive, or annoying software or program code. Software is considered to be malware based on the perceived intent of the creator rather than any particular features. Malware includes computer viruses, worms, trojan horses, spyware, dishonest adware, crimeware, most rootkits, and other malicious and unwanted software.

Types Of Malware
A computer virus is a computer program that can copy itself and infect a computer. The term "virus" is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer. A computer worm is a self-replicating malware computer program; it uses a computer network to send copies of itself to other nodes (computers on the network) and it may do so without any user intervention. Trojan horses are created for the purpose of running code on the user's computer that he otherwise would not have consented to, allowing the author of the Trojan access to a number of personally desired purposes. Adware: a Trojan horse may modify the user's computer to display advertisements in undesirable places, such as the desktop or in uncontrollable pop-ups, or it may be less notorious, such as installing a toolbar onto the user's Web browser without prior mention. A backdoor in a computer system is a method of bypassing normal authentication, securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. Keystroke logging (often called keylogging) is the action of tracking (or logging) the keys struck on a keyboard, typically in a covert manner so that the person using the keyboard is unaware that their actions are being monitored. A typical hoax is an email message warning recipients of a non-existent threat, usually quoting spurious authorities such as Microsoft and IBM.

Malware != Virus
Due to their different behaviors, each malware group uses alternative ways of remaining undetected. This forces anti-virus software producers to develop numerous solutions and countermeasures for computer protection. This presentation focuses on methods used especially for virus detection, not necessarily effective against other types of malicious software.

and characteristics for detecting viruses and other forms of malware.

Infection Strategies
Nonresident viruses Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file. Resident viruses Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. The virus loads the replication module into memory when it is executed instead and ensures that this module is executed each time the operating system is called to perform a certain operation. Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. A fast infector, for instance, can infect every potential host file that is accessed. This poses a special problem when using anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Slow infectors, on the other hand, are designed to infect hosts infrequently. Some slow infectors, for instance, only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably and will, at most, infrequently trigger anti-virus software that detects suspicious behavior by programs.

General Meta Heuristics


In computer science, pattern matching is the act of checking some sequence of tokens for the presence of the constituents of some pattern. In contrast to pattern recognition, the match usually has to be exact. The patterns generally have the form of either sequences or tree structures. The process of emulation is just like hitchhiking. The emulator convinces the viral code that it is actually executing, and it hitchhikes to the point where the virus passes control to the original program.

Lacks In Specific
Generally speaking, there are two basic methods to detect viruses - specific and generic. Specific virus detection requires the anti-virus program to have some pre-defined information about a specific virus (like a scan string). The anti-virus program must be frequently updated in order to make it detect new viruses as they appear. Generic detection methods however are based on generic characteristics of the virus, so theoretically they are able to detect every virus, including the new and unknown ones. Why is generic detection gaining importance? There are four reasons: 1) The number of viruses increases rapidly. Studies indicate that the total number of viruses doubles roughly every nine months. The amount of work for the virus researcher increases, and the chances that someone will be hit by one of these unrecognizable new viruses increases too. 2) The number of virus mutants increases. Virus source codes are widely spread and many people can't resist the temptation to experiment with them, creating many slightly modified viruses. These modified viruses may or may not be recognized by the anti-virus product. Sometimes they are, but unfortunately often they are not. 3) The development of polymorphic viruses. polymorphic viruses like MtE and TPE are more difficult to detect with virus scanners. It is often months after a polymorphic virus has been discovered before a reliable detection algorithm has been developed. In the meantime many users have an increased chance of being infected by that virus. 4) Viruses directed at a specific organization or company. It is possible for individuals to utilize viruses as weapons. By creating a virus that only works on machines owned by a specific organization or company it is very unlikely that the virus will

Metaheuristics and Heuristics


A metaheuristic is a heuristic method for solving a very general class of computational problems by combining user-given black-box procedures in a hopefully efficient way. Metaheuristics are generally applied to problems for which there is no satisfactory problem-specific algorithm or heuristic. In computer science, a heuristic is a technique designed to solve a problem that ignores whether the solution can be proven to be correct, but which usually produces a good solution or solves a simpler problem that contains or intersects with the solution of the more complex problem. Most real-time, and even some on-demand, anti-virus scanners use heuristic signatures to look for specific attributes.



Heuristic Scanning

One of the most time consuming tasks that a virus researcher faces is the examination of files. People often send files to researchers because they believe the files are infected by a new virus. Sometimes these files are indeed infected, sometimes not. Every researcher is able to determine very quickly what is going on by loading the suspected file into a debugger.

One of the many differences between viruses and normal programs is that normal programs typically start by searching the command line for options, clearing the screen, etc. Viruses never search for command line options or clear the screen; instead they start by searching for other executable files, by writing to the disk, or by decrypting themselves. A researcher who has loaded the suspected file into a debugger can notice this difference at a glance. Heuristic scanning is an attempt to put this experience and knowledge into a virus scanner. The word 'heuristic' means (according to a Dutch dictionary) 'the self finding' and 'the knowledge to determine something in a methodic way'. A heuristic scanner is a type of automatic debugger or disassembler: the instructions are disassembled and their purposes are determined. If a program starts with the sequence

MOV AH,5
INT 13h

which is a disk format instruction for the BIOS, this is highly suspect, especially if the program does not process any command line options or interact with the user. In reality, heuristics is much more complicated. The heuristic scanners that I am familiar with are able to detect suspicious instruction sequences, like the ability to format a disk, the ability to search for other executables, the ability to remain resident in memory, the ability to issue non-standard or undocumented system calls, etc. Each of these abilities has a value assigned to it. The values assigned to the various suspicious abilities depend on various facts. A disk format routine doesn't appear in many normal programs, but often appears in viruses, so it gets a high value. The ability to remain resident in memory is found in many normal programs, so despite the fact that it also appears in many viruses it doesn't get a high value. If the total of the values for one program exceeds a predefined threshold, the scanner yells "Virus!". A single suspected ability is never enough to trigger the alarm; it is always the combination of suspected abilities which convinces the scanner that the file is a virus.

Heuristic Flags

Some scanners set a flag for each suspected ability which has been found in the file being analyzed. This makes it easier to explain to the user what has been found. TbScan, for instance, recognizes many suspected instruction sequences, and every suspected instruction sequence has a flag assigned to it:

F = Suspicious file access. Might be able to infect a file.
R = Relocator. Program code will be relocated in a suspicious way.
A = Suspicious memory allocation. The program uses a non-standard way to search for, and/or allocate, memory.
N = Wrong name extension. Extension conflicts with program structure.
S = Contains a routine to search for executable (.COM or .EXE) files.

The more flags that are triggered by a file, the more likely it is that the file is infected by a virus. Normal programs rarely trigger a flag, while at least two flags are required to trigger the alarm. To make it more complicated, not all flags carry the same 'weight'.

After specific scanning was introduced into anti-virus software, malware authors were obliged to introduce new techniques for remaining undetected. Besides polymorphism and mutation engines, viruses started to use various stealth techniques which basically hooked interrupts and took control over them. This allowed them to be invisible to traditional scanners. Moreover, most of them started using real-time encryption, which made them look like totally harmless programs.


Mixing stealth techniques with encryption and anti-heuristic sequences allowed viruses to remain unseen even by signature and heuristic scanning combined. It was obvious that a new solution was needed. The idea came from virtual machine concepts: why not create an artificial runtime environment and let the virus do its job? This approach found implementation in environment-emulation engines, which became a standard anti-virus weapon.

Virtual Reality

The idea of environment emulation is simple. The anti-virus program provides a virtual machine with an independent operating system and allows the virus to perform its routines. Its behaviour and characteristics are continuously examined, while the virus is not aware that it is working on a fake system. This leads it to run its decryption routines and reveal its true nature. Stealth techniques are also useless, because the whole virtual machine is monitored by the anti-virus software.

False Positives

Just like all other generic detection techniques, heuristic scanners sometimes blame innocent programs for being contaminated by a virus. This is called a "false positive" or "false alarm". The reason is simple: some programs happen to have several suspected abilities. For instance, the LOADHI.COM file of QEMM has the following suspected abilities (according to an older, now obsolete version of TbScan):

A = Suspicious memory allocation. The program uses a non-standard way to search for, and/or allocate, memory.
M = Memory resident code. This program may be a TSR, but also a virus.
U = Undocumented interrupt/DOS call. The program might just be tricky, but it could also be a virus using a non-standard way to detect itself.
Z = EXE/COM determination. The program tries to check whether a file is a COM or EXE file. Viruses need to do this to infect a program.
O = Found code that can be used to overwrite/move a program in memory.

All of these abilities are available in LoadHi, and the flags are enough to trigger the heuristic alarm. Whether we call it a false positive or a false suspicion doesn't matter: we do not want the scanner to yell every time we scan, so we need to avoid this situation. How do we achieve this?

1) Definition of (combinations of) suspicious abilities. The scanner does not issue an alarm unless at least two separate suspected program abilities have been found.
2) Recognition of common program code. Some known compiler code, or run-time compression or decryption routines, can cause false alarms. These specific routines can be recognized by the scanner to avoid false alarms.
3) Recognition of specific programs. Some programs which normally cause a problem (like the LoadHi program used in the example) can be recognized by the heuristic scanner.
4) Assumption that the machine is initially not infected. Some heuristic scanners have a 'learn' mode, i.e. they are able to learn that a file causing a false alarm is not a virus.
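The weighted-flag idea described above can be made concrete with a small sketch. The weights, threshold and whitelist below are made-up example values; a real scanner such as TbScan derives its rules from analysing many clean and infected programs.

```python
# Illustrative sketch of combining weighted heuristic flags into a verdict.
# Flag letters follow the text above; weights and threshold are invented.
FLAG_WEIGHTS = {
    "F": 3,  # suspicious file access
    "R": 2,  # suspicious relocation of program code
    "A": 1,  # non-standard memory allocation (common in normal programs: low weight)
    "N": 2,  # name extension conflicts with program structure
    "S": 3,  # contains a routine to search for .COM/.EXE files
    "O": 2,  # code that can overwrite/move a program in memory
}
ALARM_THRESHOLD = 5                      # total weight needed to raise the alarm
KNOWN_FALSE_POSITIVES = {"LOADHI.COM"}   # recognised programs, never alarmed on

def heuristic_verdict(filename, flags_found):
    """Mirror the rules in the text: whitelist known clean programs, never alarm
    on a single flag, and require the weighted total to pass a threshold."""
    if filename.upper() in KNOWN_FALSE_POSITIVES:
        return "clean (recognised program)"
    score = sum(FLAG_WEIGHTS.get(f, 0) for f in flags_found)
    if len(flags_found) >= 2 and score >= ALARM_THRESHOLD:
        return f"probably infected (flags {sorted(flags_found)}, score {score})"
    return "no alarm"

print(heuristic_verdict("LOADHI.COM", {"A", "M", "U", "Z", "O"}))
print(heuristic_verdict("GAME.EXE", {"F", "S"}))
```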

What Can Be Expected From It In The Future? The Development Continues


Most anti-virus developers still do not supply a ready-to-use heuristic analyzer, and those who already have heuristics available are still improving them. It is, however, unlikely that the detection rate will ever reach 100% without a certain amount of false positives; on the other hand, it is unlikely that the amount of false positives will ever reach 0%. Maybe you wonder why it isn't possible to achieve 100% correct results. There is a large grey area between viruses and non-viruses, and even for humans it is hard to describe what is a virus and what is not. An often used definition of a computer virus is this: "A virus is a program that is able to copy itself."


Evolution
Based on these early attempts, a first generation of scanners with minor heuristic capabilities was developed. The heuristics they used were very basic and usually generated warnings about peculiar file dates and time stamps, changes to file lengths, strange headers, etc. Some examples:


EXAMPLE1.COM  12345  01-01-1995  12:02:62  (an impossible time stamp)

The heuristics of the current, second generation of scanners are much better. All the capabilities of the first-generation scanners have been retained, but many new heuristic principles have been added: code analysis, code tracing, strange opcodes, etc. For example:

0F  POP CS  (a strange opcode: an 8086-only instruction)

A (third generation?) scanner type based exclusively on heuristics exists, performing no signature, algorithmic or other checks. Maybe this is the future, but the risk of a false alarm (false positive) is quite high at the moment. In large corporations, false alarms can cost a lot of time and thus money.


Emulator Design Issues


When designing a code emulator for forensic purposes, a number of special requirements must be met. One problem to tackle is the multiple-opcodes and multiple-instructions issue:

87 C3    XCHG AX,BX
93       XCHG BX,AX
87 D8    XCHG BX,AX

The result is the same, but different opcodes are used.

PUSH AX
PUSH BX
POP AX
POP BX

PUSH AX
MOV AX,BX
POP BX

These give the same result as well. More than the five different code sequences shown above exist to exchange the contents of registers AX and BX. The technique of expressing the same functionality using many different sets of opcode sequences is used by the encryptors generated by polymorphic engines: some are over 200 bytes in size, yet contain only the functionality of a cleanly coded decryptor of 25 bytes. Most of the remaining code is redundant, but sometimes seemingly redundant code is used to initialize registers for further processing. It is the job of the emulator to make sure that the rule-based analyzer gets the correct information, i.e. that the behaviour characteristics passed to the analyser reflect the actual facts. No matter which series of instructions/opcodes is used to perform a function such as opening a file via function 3D02h of INT 21h, the analyser only has to know that the behaviour of that piece of code is: open a file for (both reading and) writing.

On the one hand, this may not seem that difficult: most viruses do perform interrupt calls, and when they do, we just have to evaluate the contents of the registers to derive the behaviour characteristic. On the other hand, this is only correct for simple, straightforward viruses. For viruses using different techniques (hooking different interrupts, using call/jmp far constructions) it may be very difficult for the emulator to keep track of the instruction flow. In any case, the emulator must be capable of reducing instruction sequences to their bare functionality in a well-defined manner. We call the result of this reduction a behaviour characteristic if it can be found in a pre-compiled list of characteristics to which we attach particular importance. Another problem is that the emulator must be capable of making important decisions, normally based on incomplete evidence (we obviously want to emulate as little code as possible before reaching a conclusion regarding the potential maliciousness of the software in question). Let us illustrate this with a small example:

MOV AX, 4567
INT 21
CMP AX, 7654
JNE jmp-1
JMP jmp-2

This is an example of an 'Are you there?' call used by a virus. When tracing through the code, the emulator obviously does not know whether jmp-1 or jmp-2 leads to the code which installs the virus in case it is not already there. So, should the emulator continue with the jmp-1 flow or the jmp-2 flow? A simple execution of the code would result in just one of these flows being relevant, whereas a forensic emulator must be able to follow all possible program flows simultaneously, until either a flow leads to a number of relevant behaviour characteristics being detected, at which time the information is passed to the analyser, or a flow has been followed to a point where one of the stop-criteria built into the emulator is met. The strategy used in this part of the emulator is a determining factor when it comes to obtaining an acceptable scanning speed.
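A very rough sketch of reducing emulated interrupt calls to behaviour characteristics is given below. The tiny lookup table and the characteristic names are invented for illustration and are far simpler than what a real forensic emulator tracks.

```python
# Toy reduction of DOS INT 21h calls to behaviour characteristics.
# Only a handful of functions are modelled; the names are illustrative.
BEHAVIOURS = {
    0x3D02: "open file for reading and writing",
    0x4567: "'Are you there?' self-recognition call",
    0x4B00: "execute another program",
}

def characterise_int21(ax_register):
    """Return the behaviour characteristic for an INT 21h call, if tracked."""
    # Full AX match first (e.g. 3D02h), then fall back to the AH function code.
    if ax_register in BEHAVIOURS:
        return BEHAVIOURS[ax_register]
    if (ax_register >> 8) == 0x3D:
        return "open file"
    return None

# A trace of AX values observed at INT 21h during emulation (made-up example).
trace = [0x4567, 0x3D02]
characteristics = [c for ax in trace if (c := characterise_int21(ax)) is not None]
print(characteristics)
```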


Hopefully, this has illustrated some of the problems associated with designing a forensic emulator. It is a very difficult and complex part of this set-up. Once the emulator has finished its job, it passes its information, a list of behaviour characteristics which it has found in the code, on to the analyser.

Conclusion
The number of viruses is increasing rapidly: this is a known fact. The time will soon arrive when scanning using signatures and dedicated algorithms will either use too much memory or just become too slow. With storage media prices dropping fast, lots of systems now come equipped with very large hard disks, which will take more and more time, and thus money, to scan using traditional techniques. A properly designed rule-based analyzing system feeding suspicious code into a scanner, which can identify the suspicious code as a known virus or Trojan, or perhaps dangerous code needing further investigation, is bound to save a lot of time. Although it is impossible to prove that code is not malicious without analyzing it from one end to the other, we in Computer Security Engineers Ltd believe it possible to reduce significantly the time used to check files by using all the available system knowledge instead of only small bits of it, as it is done today. Using virus scanning as the primary, or in many cases the only, anti-virus defense is an absurd waste of time and money, and furthermore blatantly insecure!

References
[1] Mathew G. Schultz, "Data Mining Methods for Detection of New Malicious Executables".
[2] Righard Zwienenberg, "Heuristic Scanners: Artificial Intelligence".
[3] Frans Veldman, "Heuristic Anti-Virus Technology".


Augmented Reality (AR)


Renjith R. & Bijin V.S.
Department of Computer Applications (MCA), Mohandas College of Engineering and Technology, Anad, Trivandrum

Abstract
Technology has advanced to the point where realism in virtual reality is very achievable. However, in our obsession to reproduce the world and human experience in virtual space, we overlook the most important aspect of what makes us who we are: our reality. On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptics and smell to the natural world as it exists. Augmented reality will truly change the way we view the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. In this article, we will take a look at this future technology, its components and how it will be used. Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general it superimposes graphics over a real world environment in real time. Augmented reality is far more advanced than any technology you've seen in television broadcasts, although early versions of augmented reality are starting to appear in televised races and football games. These systems display graphics for only one point of view; next-generation augmented-reality systems will display graphics for each viewer's perspective.
1. INTRODUCTION

1.1. DEFINITION

Augmented reality (AR) is a field of computer research which deals with the combination of real world and computer generated data. Augmented reality refers to computer displays that add virtual information to a user's sensory perceptions. It is a method for visual improvement or enrichment of the surrounding environment by overlaying spatially aligned computer-generated information onto a human's view (eyes). Augmented Reality was introduced as the opposite of virtual reality: instead of immersing the user into a synthesized, purely informational environment, the goal of AR is to augment the real world with information handling capabilities. AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general it superimposes graphics over a real world environment in real time. An AR system adds virtual computer-generated objects, audio and other sense enhancements to a real-world environment in real time. These enhancements are added in a way that the viewer cannot tell the difference between the real and the augmented world.

1.2 PROPERTIES

An AR system is expected to have the following properties:

1. Combines real and virtual objects in a real environment;
2. Runs interactively, and in real time; and
3. Registers (aligns) real and virtual objects with each other.

We do not restrict the definition of AR to particular display technologies, such as a head mounted display (HMD), nor do we limit it to our sense of sight. AR can potentially apply to all senses, including hearing, touch, and smell.
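Registration (property 3 above) can be illustrated with a minimal pinhole-camera sketch: given the camera pose and intrinsics, a virtual 3D point is projected to the pixel where its overlay graphic should be drawn. The function name, pose and numbers below are illustrative, not taken from any particular AR toolkit.

```python
import numpy as np

def project_point(point_world, R, t, fx, fy, cx, cy):
    """Project a 3D point (world frame) into pixel coordinates using a
    pinhole camera model: the registration of a virtual object with the view."""
    p_cam = R @ point_world + t          # world -> camera coordinates
    x, y, z = p_cam
    if z <= 0:
        return None                      # behind the camera: nothing to draw
    u = fx * (x / z) + cx                # perspective projection
    v = fy * (y / z) + cy
    return u, v

# Example values: identity pose, virtual object 2 m in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(project_point(np.array([0.1, 0.0, 0.0]), R, t, fx=800, fy=800, cx=320, cy=240))
```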

2. AUGMENTED REALITY VS VIRTUAL REALITY

The term Virtual Reality has been defined as "a computer generated, interactive, three-dimensional environment in which a person is immersed." There are three key points in this definition.


First, this virtual environment is a computer generated three-dimensional scene which requires high performance computer graphics to provide an adequate level of realism. The second point is that the virtual world is interactive: a user requires real-time response from the system to be able to interact with it in an effective manner. The last point is that the user is immersed in this virtual environment. One of the identifying marks of a virtual reality system is the head mounted display worn by users. These displays block out all of the external world and present to the wearer a view that is under the complete control of the computer. The user is completely immersed in an artificial world and becomes divorced from the real environment.

A very visible difference between these two types of systems is the immersiveness of the system. Virtual reality strives for a totally immersive environment: the visual, and in some systems aural and proprioceptive, senses are under the control of the system. In contrast, an augmented reality system augments the real world scene, necessitating that the user maintain a sense of presence in that world. The virtual images are merged with the real view to create the augmented display, so there must be a mechanism to combine the real and virtual that is not present in other virtual reality work. Developing the technology for merging the real and virtual image streams is an active research topic.

3. Different AR Techniques

There are two basic techniques for combining real and virtual objects: optical and video techniques. While the optical technique uses an optical combiner, the video technique uses a computer to combine video of the real world (from video cameras) with virtual images (computer generated). AR systems use either a Head Mounted Display (HMD), which can be a closed-view or see-through HMD, or a monitor-based configuration. While closed-view HMDs do not allow a direct view of the real world, see-through HMDs allow it, with virtual objects added via optical or video techniques.

4. What Makes AR Work?

The main components that make an AR system work are:

1. Display: This corresponds to the head mounted devices where images are formed. Many objects that do not exist in the real world can be put into this environment, and users can view and examine these objects. Properties such as complexity and physical behaviour are just parameters in the simulation.
2. Tracking: Getting the right information at the right time and the right place is the key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices.
3. Environment sensing: This is the process of viewing or sensing the real world scene or physical environment, which can be done using an optical combiner, a video combiner, or simply the retinal view.
4. Visualization and rendering: Some emerging trends in the recent development of human-computer interaction (HCI) can be observed: augmented reality, computer supported cooperative work, ubiquitous computing, and heterogeneous user interfaces. AR is a method for visual improvement or enrichment of the surrounding environment by overlaying spatially aligned computer-generated information onto a human's view (eyes).

This is how AR works:
1. Pick a real world scene (the user's view of the real world through the see-through head-worn display, for example two struts and a node without any overlaid graphics).
2. Add your virtual objects to it (the virtual imagery intended to overlay the view of the real world).
3. Delete real world objects where required. This is not virtual reality, since the environment remains real.

5. Augmented Reality Application Domains
Only recently have the capabilities of real-time video image processing, computer graphics systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. Researchers working with augmented reality systems have proposed them as solutions in many domains, ranging from entertainment to military training. Many of the domains, such as medicine, have also been proposed for traditional virtual reality systems.

5.1. Medical



This domain is viewed as one of the more important ones for augmented reality systems. Most of the medical applications deal with image guided surgery. Pre-operative imaging studies, such as CT or MRI scans, of the patient provide the surgeon with the necessary view of the internal anatomy, and from these images the surgery is planned. Visualization of the path through the anatomy to the affected area where, for example, a tumor must be removed is done by first creating a 3D model from the multiple views and slices in the pre-operative study. Being able to accurately register the images at this point will enhance the performance of the surgical team and eliminate the need for painful and cumbersome stereotactic frames.

Figure: Simulated AR medical image of a brain.

5.2 Entertainment

A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever we watch the evening weather report, the weather reporter is shown standing in front of changing weather maps; in the studio, the reporter is actually standing in front of a blue or green screen. This real image is augmented with computer generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors appear to be positioned in a studio with computer generated decor. In this approach the environments are carefully modeled ahead of time, and the cameras are calibrated and precisely tracked. For some applications, augmentations are added solely through real-time video tracking. Delaying the video broadcast by a few video frames eliminates the registration problems caused by system latency. Furthermore, the predictable environment (uniformed players on a green, white, and brown field) lets the system use custom chroma-keying techniques to draw the yellow line only on the field rather than over the players. With similar approaches, advertisers can embellish broadcast video with virtual ads and product placements.
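The chroma-keying idea used in the virtual-studio example can be sketched as follows: pixels of the real frame that are close to the key colour are replaced by the computer-generated content. The key colour, tolerance and tiny arrays are illustrative values only.

```python
import numpy as np

def chroma_key(real_frame, virtual_frame, key=(0, 255, 0), tol=80):
    """Composite a real video frame with virtual content: wherever the real
    frame is close to the key colour (e.g. a green screen), show the virtual
    frame instead. Frames are HxWx3 uint8 arrays."""
    diff = real_frame.astype(int) - np.array(key)
    mask = np.linalg.norm(diff, axis=-1) < tol        # True where the screen is green
    out = real_frame.copy()
    out[mask] = virtual_frame[mask]
    return out

# Tiny synthetic example: a 2x2 "real" frame with two green-screen pixels.
real = np.array([[[0, 255, 0], [200, 50, 50]],
                 [[10, 240, 12], [30, 30, 30]]], dtype=np.uint8)
virtual = np.full((2, 2, 3), 128, dtype=np.uint8)     # grey virtual background
print(chroma_key(real, virtual))
```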

5.3 Military Training

The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or on the visor of the flight helmet; this is a form of augmented reality display. By equipping military personnel with helmet mounted visor displays or a special purpose rangefinder, the activities of other units participating in an exercise can be imaged. In wartime, the display of the real battlefield scene could be augmented with annotation information or highlighting to emphasize hidden enemy units.

5.4. Engineering Design

Imagine that a group of designers are working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room equipped with an augmented reality display, this could be accomplished: the physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D, and the clients can walk around the display looking at different aspects of it.

5.5. Robotics and Telerobotics

In the domain of robotics and telerobotics an augmented display can assist the user of the system. A telerobotic operator uses a visual image of the remote workspace to guide the robot, and annotation of the view would still be useful just as it is when the scene is in front of the operator. There is an added potential benefit: the robot motion could then be executed directly, which in a telerobotics application would eliminate any oscillations caused by long delays to the remote site.

5.6. Manufacturing, Maintenance and Repair

Recent advances in computer interface design, and the ever increasing power and miniaturization of computer hardware, have combined to make the use of augmented reality possible in demonstration test beds for building construction, maintenance and renovation. When a maintenance technician approaches a new or unfamiliar piece of equipment, instead of opening several repair manuals they could put on an augmented reality display in which the image of the equipment is augmented with annotations and information pertinent to the repair. The military has developed a wireless vest worn by personnel that is attached to an optical see-through display; the wireless connection allows the soldier to access repair manuals and images of the equipment. Future versions might register those images on the live scene and provide animation to show the procedures that must be performed.

5.7. Consumer Design


Virtual reality systems are already used for consumer design, using perhaps more of a graphics system than virtual reality: when you go to the typical home store wanting to add a new deck to your house, they will show you a graphical picture of what the deck will look like. When you head into some high-tech beauty shops today you can see what a new hair style would look like on a digitized image of yourself. But with an advanced augmented reality system you would be able to see the view as you moved. If the dynamics of hair are included in the description of the virtual object, you would also see the motion of your hair as your head moved.



5.8. Augmented Mapping

Paper maps can be brought to life using hardware that adds up-to-the-minute information, photography and even video footage. Using AR techniques, a system which augments an ordinary tabletop map with additional information by projecting it onto the map's surface can be implemented. It would help emergency workers, and a simulation has been developed that projects live information about flooding and other natural calamities. The system makes use of an overhead camera and image recognition software on a connected computer to identify the region from the map's topographical features. An overhead projector then overlays relevant information, such as the location of a traffic accident or even the position of a moving helicopter, onto the map.

6. Challenges

Technological limitations

Although there is much progress in the basic enabling technologies, their limitations still prevent the deployment of many AR applications. Displays, trackers, and AR systems in general need to become more accurate, lighter, cheaper, and less power consuming. Since the user must wear the PC, sensors, display, batteries, and everything else required, the end result is a heavy backpack, and laptops today have only one CPU, limiting the amount of visual and hybrid tracking that can be done.

User interface limitations

We need a better understanding of how to display data to a user and how the user should interact with the data. AR introduces many high-level tasks, such as the need to identify what information should be provided, what the appropriate representation for that data is, and how the user should make queries and reports. Recent work suggests that the creation and presentation of narrative performances and structures may lead to a more realistic and richer AR experience.

Social acceptance

The final challenge is social acceptance. Given a system with ideal hardware and an intuitive interface, how can AR become an accepted part of a user's everyday life, just like a mobile phone or a personal digital assistant? Through films and television, many people are familiar with images of simulated AR. However, persuading a user to wear a system means addressing a number of issues, ranging from fashion to privacy concerns. To date, little attention has been paid to these fundamental issues, yet they must be addressed before AR becomes widely accepted.

7. Conclusion

The research topic "Augmented Reality" (AR) is receiving significant attention due to striking progress in many subfields, triggered by advances in computer miniaturization, speed, and capabilities, and by fascinating live demonstrations. AR, by its very nature, is a highly inter-disciplinary field, and AR researchers work in areas such as signal processing, computer vision, graphics, user interfaces, human factors, wearable computing, mobile computing, computer networks, distributed computing, information access, information visualization, and hardware design for new displays. Augmented reality is a term created to identify systems which are mostly synthetic with some real world imagery added, such as texture mapping video onto virtual objects. This is a distinction that will fade as the technology improves and the virtual elements in the scene become less distinguishable from the real ones.

8. References: www.sciencedirect.com

www.augmentedreality.com www.newscientist.com www.howstuffworks.com www.citeseer.ist.psu.edu. www1.cs.columbia.edu www.lsi.upc.es. www.cs.ualberta.com



BRAIN FINGERPRINTING TECHNOLOGY


Shalini J Nair & Anjitha Pillai
S6 Information Technology, Mohandas College of Engineering & Technology, Anad, Thiruvananthapuram

Abstract
Brain Fingerprinting is a new computer-based technology to identify the perpetrator of a crime accurately and scientifically by measuring brain-wave responses to crime-relevant words or pictures presented on a computer screen. Brain Fingerprinting has proven 100% accurate in over 120 tests, including tests on FBI agents, tests for a US intelligence agency and for the US Navy, and tests on real-life situations including felony crimes .

Why Brain Fingerprinting?


Brain Fingerprinting is based on the principle that the brain is central to all human acts. In a criminal act, there may or may not be many kinds of peripheral evidence, but the brain is always there, planning, executing, and recording the crime. The fundamental difference between a perpetrator and a falsely accused, innocent person is that the perpetrator, having committed the crime, has the details of the crime stored in his brain, and the innocent suspect does not. This is what Brain Fingerprinting detects scientifically

The Secrets of Brain Fingerprinting

Matching evidence at the crime scene with evidence in the brain: when a crime is committed, a record is stored in the brain of the perpetrator. Brain Fingerprinting provides a means to objectively and scientifically connect evidence from the crime scene with evidence stored in the brain. (This is similar to the process of connecting DNA samples from the perpetrator with biological evidence found at the scene of the crime; only the evidence evaluated by Brain Fingerprinting is evidence stored in the brain.) Brain Fingerprinting measures electrical brain activity in response to crime-relevant words or pictures presented on a computer screen, and reveals a brain MERMER (memory and encoding related multifaceted electroencephalographic response) when, and only when, the evidence stored in the brain matches the evidence from the crime scene. Thus, the guilty can be identified and the innocent can be cleared in an accurate, scientific, objective, non-invasive, non-stressful, and non-testimonial manner.

MERMER Methodology

The procedure used is similar to the Guilty Knowledge Test: a series of words, sounds, or pictures is presented via computer to the subject for a fraction of a second each. Each of these stimuli is organised by the test-giver to be a Target, an Irrelevant, or a Probe. The Target stimuli are chosen to be relevant information to the tested subject, and are used to establish a baseline brain response for information that is significant to the subject being tested. The subject is instructed to press one button for Targets, and another button for all other stimuli.


Most of the non-Target stimuli are Irrelevants, totally unrelated to the situation the subject is being tested for. The Irrelevant stimuli do not elicit a MERMER, and so establish a baseline brain response for information that is insignificant to the subject in this context. Some of the non-Target stimuli, the Probes, are relevant to the situation under investigation and significant to the subject, and will elicit a MERMER, signifying that the subject has recognised the stimulus as significant. For a subject lacking this information in their brain, the response to a Probe stimulus is indistinguishable from the response to an Irrelevant stimulus: no MERMER is elicited, indicating that the information is absent from their mind. Note that there does not have to be an emotional response of any kind to the stimuli; the test relies entirely upon a difference in recognition, hence the association with the Oddball effect.
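A hedged sketch of the kind of comparison the text describes is given below: average the responses to Targets, Irrelevants and Probes, and call the result "information present" when the probe waveform resembles the target waveform more than the irrelevant one. The correlation rule and the synthetic data are purely illustrative; the actual system uses its own proprietary analysis and statistical confidence computation.

```python
import numpy as np

def classify_probe(target_trials, irrelevant_trials, probe_trials):
    """Each argument is an (n_trials, n_samples) array of single-trial responses.
    Decide 'information present' or 'information absent' by whether the averaged
    probe waveform correlates more with the target or the irrelevant average."""
    avg = lambda trials: trials.mean(axis=0)
    t, i, p = avg(target_trials), avg(irrelevant_trials), avg(probe_trials)
    corr_target = np.corrcoef(p, t)[0, 1]
    corr_irrelevant = np.corrcoef(p, i)[0, 1]
    return "information present" if corr_target > corr_irrelevant else "information absent"

# Synthetic demonstration data: targets and a "guilty" probe share a
# MERMER-like deflection, irrelevants are pure noise.
rng = np.random.default_rng(0)
t_axis = np.linspace(0, 1, 100)
bump = np.exp(-((t_axis - 0.3) ** 2) / 0.005)
target = bump + rng.normal(0, 0.5, (20, 100))
irrelevant = rng.normal(0, 0.5, (20, 100))
probe = bump + rng.normal(0, 0.5, (20, 100))
print(classify_probe(target, irrelevant, probe))   # expected: information present
```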
The Fantastic Four: The Four Phases of Brain Fingerprinting

1. Brain Fingerprinting Crime Scene Evidence Collection;
2. Brain Fingerprinting Brain Evidence Collection;
3. Brain Fingerprinting Computer Evidence Analysis; and
4. Brain Fingerprinting Scientific Result.

In the Crime Scene Evidence Collection, an expert in Brain Fingerprinting examines the crime scene and other evidence connected with the crime to identify details of the crime that would be known only to the perpetrator. The expert then conducts the Brain Evidence Collection in order to determine whether or not the evidence from the crime scene matches evidence stored in the brain of the suspect. In the Computer Evidence Analysis, the Brain Fingerprinting system makes a mathematical determination as to whether or not this specific evidence is stored in the brain, and computes a statistical confidence for that determination. This determination and statistical confidence constitute the Scientific Result of Brain Fingerprinting: either "information present" ("guilty"), meaning the details of the crime are stored in the brain of the suspect, or "information absent" ("innocent"), meaning the details of the crime are not stored in the brain of the suspect.

Scientific Procedure, Research, and Applications

In fingerprinting and DNA fingerprinting, evidence recognized and collected at the crime scene, and preserved properly until a suspect is apprehended, is scientifically compared with evidence on the person of the suspect to detect a match that would place the suspect at the crime scene. Brain Fingerprinting works similarly, except that the evidence collected both at the crime scene and on the person of the suspect (i.e., in the brain as revealed by electrical brain responses) is informational evidence rather than physical evidence. There are four stages to Brain Fingerprinting, which are similar to the steps in fingerprinting and DNA fingerprinting.

Informational Evidence Detection.

The detection of concealed information stored in the brains of suspects, witnesses, intelligence sources, and others is of central concern to all phases of law enforcement, government and private investigations, and intelligence operations.

Brain Fingerprinting presents a new paradigm in forensic science. This new system detects information directly, on the basis of the electrophysiological manifestations of information-processing brain activity, measured non-invasively from the scalp. Since Brain Fingerprinting depends only on brain information processing, it does not depend on the emotional response of the subject.
The Brain Mermer

Brain Fingerprinting utilizes multifaceted electroencephalographic response analysis (MERA) to detect information stored in the human brain. A memory and encoding related multifaceted electroencephalographic response (MERMER) is elicited when an individual recognizes and processes an incoming stimulus that is significant or noteworthy. When an irrelevant stimulus is seen, it is insignificant and not noteworthy, and the MERMER response is absent. The MERMER occurs within about a second after the stimulus presentation, and can be readily detected using EEG amplifiers and a computerized signal-detection algorithm.
Scientific Procedure

Brain Fingerprinting incorporates the following procedure. A sequence of words or pictures is presented on a video monitor under computer control. Each stimulus appears for a fraction of a second. Three types of stimuli are presented: "targets," "irrelevants," and "probes." The targets are made relevant and noteworthy to all subjects: the subject is given a list of the target stimuli and instructed to press a particular button in response to targets, and to press another button in response to all other stimuli.

Since the targets are noteworthy for the subject, they elicit a MERMER. Most of the non-target stimuli are irrelevants, having no relation to the crime; these irrelevants do not elicit a MERMER. Some of the non-target stimuli are relevant to the crime or situation under investigation. These relevant stimuli are referred to as probes. For a subject who has committed the crime, the probes are noteworthy due to his knowledge of the details of the crime, and therefore the probes elicit a brain MERMER. For an innocent subject lacking this detailed knowledge of the crime, the probes are indistinguishable from the irrelevant stimuli; for such a subject the probes are not noteworthy, and thus do not elicit a MERMER.

Computer Controlled

The entire Brain Fingerprinting system is under computer control, including presentation of the stimuli and recording of electrical brain activity, as well as a mathematical data analysis algorithm that compares the responses to the three types of stimuli and produces a determination of "information present" ("guilty") or "information absent" ("innocent"), and a statistical confidence level for this determination. At no time during the testing and data analysis do any biases or interpretations of a system expert affect the stimulus presentation or brain responses.

The Devices Used in Brain Fingerprinting

Scientific Experiments, Field Tests, and Criminal Cases

Scientific studies, field tests, and actual criminal cases involving over 120 individuals, described in various scientific publications and technical reports, verify the extremely high level of accuracy and overall effectiveness of Brain Fingerprinting. The system produced 100% accurate scientific results in all studies and field tests.

Using Brain Waves to Detect Guilt


How it works

A suspect is tested by looking at three kinds of information, represented by different coloured lines:

Red: information the suspect is expected to know.
Green: information not known to the suspect.
Blue: information about the crime that only the perpetrator would know.

NOT GUILTY: because the blue and green lines closely correlate, the suspect does not have critical knowledge of the crime.

GUILTY: because the blue and red lines closely correlate, the suspect has critical knowledge of the crime.

Terry Harrington's Brain-Wave Responses

Results of the Brain Fingerprinting test on Terry Harrington: for the test on the Schweer murder, the determination of Brain Fingerprinting was "information absent," with a statistical confidence of 99.9%. The information stored in Harrington's brain did not match the scenario in which Harrington went to the crime scene and committed the murder. The determination of the Brain Fingerprinting test for alibi-relevant information was "information present," with a confidence of 99.9%. The information stored in Harrington's brain did match the scenario in which Harrington was elsewhere (at a concert and with friends) at the time of the crime.

(Brain-wave plots: Y-axis, voltage in microvolts at the parietal (Pz) scalp site; X-axis, time in milliseconds, with the stimulus presented at 0 msec. Crime-scenario test: information absent, statistical confidence 99.9%. Alibi-scenario test: information present, statistical confidence 99.9%.)

Conclusion

Brain Fingerprinting is a revolutionary new scientific technology for solving crimes, identifying perpetrators, and exonerating innocent suspects, with a record of 100% accuracy in research with US government agencies, actual criminal cases, and other applications. The technology fulfills an urgent need for governments, law enforcement agencies, corporations, investigators, crime victims, and falsely accused innocent suspects.

References

1. www.google.com
2. www.brainfingerprint.org
3. www.brainfingerprint.pbwiki.com



Face Detection Through Neural Analysis


Akhil G S, Gibu George
S4 Mechanical Engineering, Muslim Association College Of Engineering, Venjaramoodu akhilsiva.gs.99@gmail.com

Abstract
The aim of this paper is to implement an effective system to locate upright frontal faces in monochromatic images with the use of a neural network-based classifier. In this paper, a new approach to reduce the computation time taken by fast neural nets for the searching process is presented. The principle of the divide and conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately using a fast neural network. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. Moreover, the problem of sub-image centring and normalization in the Fourier space is solved.

Introduction
An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just neural network (NN) is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. A NN is a massively parallel distributed processing system made up of highly interconnected neural computing elements. It has the ability to learn and thereby acquire Knowledge and make it available for use. The Neural Network is the imitation of the Central nervous system.

Overall View

Face Detection Pre-Locating Module

87

By applying Sobel masks to the given image, the system retrieves an appropriate edge image. Since at each iteration only 20x20 pixel faces are intended to be detected, the system removes all edges that are quite large, on the assumption that they represent large objects which belong to the image's background, or large faces which would be detected at later iterations. Of course there is the risk of removing objects which are in fact faces; however, experiments have shown that this case can be treated as an exception. By involving this module the system avoids analysing all possible 20x20 pixel windows at each location (pixel after pixel), which is a quite time consuming task. The number of remaining 20x20 pixel windows that need to be individually analysed by the detector in the later steps is only about 20 percent of the whole.
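A rough sketch of this pre-locating step, assuming scipy is available, is shown below: compute a Sobel edge image, discard very large connected edge regions, and keep only the 20x20 windows that still contain edges. All thresholds are invented for illustration and would need tuning.

```python
import numpy as np
from scipy import ndimage

def candidate_windows(gray, win=20, edge_thresh=100, max_region=400, min_edges=30):
    """Return top-left corners of win x win windows worth passing to the detector."""
    # Sobel gradient magnitude as the edge image.
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    # Remove very large edge regions (assumed background or larger-scale faces).
    labels, n = ndimage.label(edges)
    sizes = ndimage.sum(edges, labels, range(1, n + 1))
    for region, size in enumerate(sizes, start=1):
        if size > max_region:
            edges[labels == region] = False
    # Keep windows that still contain enough edge pixels.
    corners = []
    for r in range(0, gray.shape[0] - win, win // 2):
        for c in range(0, gray.shape[1] - win, win // 2):
            if edges[r:r + win, c:c + win].sum() >= min_edges:
                corners.append((r, c))
    return corners

print(len(candidate_windows(np.random.randint(0, 256, (120, 160)))))
```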

Bootstrapping
Generating a training set for the SVM/neural network is a challenging task because of the difficulty in placing characteristic non-face images in the training set. Getting a representative sample of face images is not much of a problem; however, choosing the right combination of non-face images from the immensely large set of such images is a complicated task. For this purpose, after each training session, non-faces incorrectly detected as faces are placed in the training set for the next session. This bootstrap method overcomes the problem of using a huge set of non-face images in the training set, many of which may not influence the training. The face images were taken from a face database and a database of Indian faces generated here. In each image to be placed in the training set, the eyes, nose and the left, right and centre of the mouth were marked. With these markings, the face was transformed into a 20x20 window with the marked features at predetermined positions. Initially, for negative samples, random images were created and added to the training set; the set was subsequently enhanced with bootstrapping of scenery and false-detected images. To make the system somewhat invariant to changes such as rotation of the face, random transformations (rotation by 15 degrees, mirroring) were applied to images in the training set. The last training set used (including bootstrapping) had 8982 input vectors.
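The bootstrap loop can be sketched as follows. The classifier, data and class names here are placeholders invented to make the sketch runnable, not the authors' actual code; the real system trains the neural network described below.

```python
import numpy as np

class MeanThresholdClassifier:
    """Toy stand-in for the real face/non-face classifier: accepts anything
    within the radius of the training faces. Used only to make the sketch run."""
    def fit(self, X, y):
        self.mean_face = X[y == 1].mean(axis=0)
        d = np.linalg.norm(X - self.mean_face, axis=1)
        self.thresh = d[y == 1].max()
    def predict(self, X):
        return (np.linalg.norm(X - self.mean_face, axis=1) <= self.thresh).astype(int)

def bootstrap_train(clf, faces, nonfaces, scenery_windows, rounds=3):
    """Retrain repeatedly, adding scenery windows wrongly accepted as faces
    (false positives) to the negative set after each round."""
    negatives = list(nonfaces)
    for _ in range(rounds):
        X = np.vstack([faces] + [np.atleast_2d(n) for n in negatives])
        y = np.concatenate([np.ones(len(faces)), np.zeros(len(X) - len(faces))])
        clf.fit(X, y)
        false_pos = scenery_windows[clf.predict(scenery_windows) == 1]
        negatives.extend(false_pos)
    return clf

rng = np.random.default_rng(1)
faces = rng.normal(1.0, 0.2, (50, 400))       # stand-in flattened 20x20 face windows
nonfaces = rng.normal(0.0, 0.2, (50, 400))
scenery = rng.normal(0.5, 0.5, (200, 400))    # pool that yields false positives
bootstrap_train(MeanThresholdClassifier(), faces, nonfaces, scenery)
```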

Backpropagation Algorithm

Multi-layer networks use a variety of learning techniques, the most popular being back propagation. Here the output values are compared with the correct answer to compute the value of some predefined error function. By various techniques the error is then fed back through the network. Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network will usually converge to some state where the error of the calculations is small; in this case one says that the network has learned a certain target function. To adjust the weights properly one applies a general method for non-linear optimization called gradient descent: the derivative of the error function with respect to the network weights is calculated, and the weights are then changed such that the error decreases (thus going downhill on the surface of the error function). For this reason back-propagation can only be applied to networks with differentiable activation functions.

Network Structure
This implementation is a crude version of the system described in [rowley98]. Arbitration amongst multiple networks is not implemented, and the size of the training set used was significantly smaller. The neural network is a two-layer (one hidden, one output) feed-forward network. There are 400 input neurons, 26 hidden neurons and 1 output neuron. Each hidden neuron is not connected to all the input neurons.
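The 400-26-1 feed-forward network and the back-propagation update described above can be sketched as follows. For simplicity this sketch is fully connected (unlike the sparse connections just described), and the learning rate and toy data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Weights for a 400-26-1 network (fully connected in this simplified sketch).
W1 = rng.normal(0, 0.05, (26, 400)); b1 = np.zeros(26)
W2 = rng.normal(0, 0.05, (1, 26));   b2 = np.zeros(1)

def train_step(x, target, lr=0.1):
    """One back-propagation step on a single 20x20 window flattened to 400 values."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)            # hidden activations
    y = sigmoid(W2 @ h + b2)            # network output (face score)
    # Gradients of squared error through the sigmoids.
    delta2 = (y - target) * y * (1 - y)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1
    return float(y)

# Toy usage: one "face" window (all ones) and one "non-face" window (all zeros).
for _ in range(200):
    train_step(np.ones(400), 1.0)
    train_step(np.zeros(400), 0.0)
print(train_step(np.ones(400), 1.0))    # should now be close to 1
```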

Figure 2: System's architecture

Results for Standard Pictures of Other Species

Here we have shown some images of animal faces to see if the network learnt to recognize faces in general (two eyes, a nose and a mouth) or was able to detect something unique about human faces. Note that none of these animal faces were in the training set. Some interesting results obtained were: the application (screenshots above) did not draw a rectangle around the chimp, so it did not think it was a face. However, when inspected more closely, we saw that this chimp, and some others too, had a network output quite close to 0.5 (the demarcating limit we used between a face and a non-face).


This dog's face was detected by the network. The region after all does have two eyes, and the fur of the dog is dark in the middle, which makes it appear somewhat like a nose. However, many other dog faces were categorically rejected by the system.

Outputs
The networks above, with only one output, gave a few false detections and on rare occasions missed a face. A common strategy used in many neural network based classifiers is a two-output system; some believe that neural networks work better with sparse input/output schemes. A two-output system was tried, where the first output gives a measure of how likely the given image is to be a face, while the second output gives a measure of how likely it is not to be a face. Again, such a structure seemed to be no better than the original, more compact network with one output.

Conclusion
The neural network works much like biological neurons and produces results the way humans do. In today's highly integrated world, where solutions to problems are cross-disciplinary in nature, soft computing promises to become a powerful means for obtaining solutions to problems quickly, yet accurately and acceptably. Though the technology is still in its infancy, it is certain to take the world by storm soon; expect great things to happen within a decade or two.

Fully Connected Network


After reading about the aforementioned network, an obvious question that arose was the effect on the network of such restricted connections between the hidden neurons and the others. Rowley proposed 1426 different edges, while if we fully connect all 400 inputs to all 26 hidden neurons and those to the output neuron we end up with 10426 edges. To test this, a fully connected network was trained on the same training set. We observed that the results were quite similar; however, the time taken to process an image with the fully connected network was much larger (420% extra edges). Since this slower performance did not translate to more accurate detection, we concluded that Rowley's construction was quite appropriate.

Reference
Kunihiko Fukushima, "Neural Network Model for Selective Attention and Associative Recall", 1987.
Robert Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1990.
James A. Freeman & David M. Skapura, Neural Networks: Algorithms, Applications and Programming Techniques.


Energy-Efficient Management of Data Center Resources for Cloud Computing:


A Vision, Architectural Elements, and Open Challenges
Aby Mathew C & Arjun Karat
Department of Computer Science College of Engineering , Trivandrum

Abstract
Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering quality-of-service expectations and devices' power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential as it offers significant performance gains as regards response time and cost saving under dynamic workload scenarios.

Introduction
Computing Utilities, Data Centers and Cloud Computing: Vision and Potential
In 1969, Leonard Kleinrock [1], one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) which seeded the Internet, said: As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of computer utilities which, like present electric and telephone utilities, will service individual homes and offices across the country. This vision of computing utilities based on a service provisioning model anticipated the massive transformation of the entire computing industry in the 21st century whereby computing services will be readily available on demand, like other utility services available in todays society.

Similarly, users (consumers) need to pay providers only when they access the computing services. In addition, consumers no longer need to invest heavily or encounter difficulties in building and maintaining complex IT infrastructure. In such a model, users access services based on their requirements without regard to where the services are hosted. This model has been referred to as utility computing, or recently as Cloud computing [5]. The latter term denotes the infrastructure as a Cloud from which businesses and users can access applications as services from anywhere in the world on demand. Hence, Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services supported by state-of-the-art data centers that usually employ Virtual Machine (VM) technologies for consolidation and environment isolation purposes [11]. Many computing service providers including Google, Microsoft, Yahoo, and IBM


are rapidly deploying data centers in various locations around the world to deliver Cloud computing services. The potential of this trend can be noted from the statement: The Data Center Is The Computer, by Professor David Patterson of the University of California, Berkeley, an ACM Fellow, and former President of the ACM CACM [2]. Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available to consumers as subscription-based services under the payas-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) respectively. A recent Berkeley report [23] stated Cloud Computing, the longheld dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service. Clouds aim to drive the design of the next generation data centers by architecting them as networks of virtual services (hardware, database, user-interface, application logic) so that users can access and deploy applications from anywhere in the world on demand at competitive costs depending on their QoS (Quality of Service) requirements [3]. Developers with innovative ideas for new Internet services no longer require large capital outlays in hardware to deploy their service or human expense to operate it [23]. Cloud computing offers significant benefits to IT companies by freeing them from the low-level task of setting up basic hardware and software infrastructures and thus enabling focus on innovation and creating business value for their services. The business potential of Cloud computing is recognised by several market research firms. According to Gartner, Cloud market opportunities in 2013 will be worth $150 billion. Furthermore, many applications making use of utilityoriented computing systems such as Clouds emerge simply as catalysts or market makers that bring buyers and sellers together. This creates several trillion dollars worth of business opportunities to the utility/pervasive computing industry as noted by Sun cofounder Bill Joy [24]. He said It would take time until these markets mature to generate this kind of value. Predicting now

which companies will capture the value is impossible. Many of them have not even been created yet."

Cloud Infrastructure: Requirements and Challenges

Modern data centers, operating under the Cloud computing model, are hosting a variety of applications ranging from those that run for a few seconds (e.g. serving requests of web applications such as e-commerce and social network portals, with transient workloads) to those that run for longer periods of time (e.g. simulations or large data set processing) on shared hardware platforms. The need to manage multiple applications in a data center creates the challenge of on-demand resource provisioning and allocation in response to time-varying workloads. Normally, data center resources are statically allocated to applications, based on peak load characteristics, in order to maintain isolation and provide performance guarantees. Until recently, high performance has been the sole concern in data center deployments, and this demand has been fulfilled without paying much attention to energy consumption. The average data center consumes as much energy as 25,000 households [20]. As energy costs are increasing while availability dwindles, there is a need to shift the focus from optimising data center resource management for pure performance to optimising it for energy efficiency while maintaining high service-level performance. The total estimated energy bill for data centers in 2010 is $11.5 billion, and energy costs in a typical data center double every five years, according to a McKinsey report [19]. Data centers are not only expensive to maintain but also unfriendly to the environment; they are now responsible for more carbon emissions than both Argentina and the Netherlands [20]. High energy costs and huge carbon footprints are incurred due to the massive amounts of electricity needed to power and cool the numerous servers hosted in these data centers. Cloud service providers need to adopt measures to ensure that their profit margin is not dramatically reduced due to high energy costs. For instance, Google, Microsoft, and Yahoo are building large data centers in barren desert land surrounding the Columbia


River, USA, to exploit cheap and reliable hydroelectric power [4]. There is also increasing pressure from governments worldwide to reduce carbon footprints, which have a significant impact on climate change. For example, the Japanese government has established the Japan Data Center Council to address the soaring energy consumption of data centers [6]. Leading computing service providers have also recently formed a global consortium known as The Green Grid [7] to promote energy efficiency for data centers and minimise their environmental impact. Lowering the energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them fast enough within the required time period. Green Cloud computing is envisioned to achieve not only efficient processing and utilisation of computing infrastructure, but also to minimise energy consumption. This is essential for ensuring that the future growth of Cloud computing is sustainable. Otherwise, Cloud computing with increasingly pervasive front-end client devices interacting with back-end data centers will cause an enormous escalation of energy usage. To address this problem, data center resources need to be managed in an energy-efficient manner to drive Green Cloud computing.

Green Cloud Architectural Elements

The aim of this paper is to address the problem of enabling energy-efficient resource allocation, hence leading to Green Cloud computing data centers, in order to satisfy competing applications' demand for computing services and save energy. Figure 1 shows the high-level architecture for supporting energy-efficient service allocation in a Green Cloud computing infrastructure. There are basically four main entities involved:

a) Consumers/Brokers: Cloud consumers or their brokers submit service requests from anywhere in the world to the Cloud. It is important to notice that there can be a difference between Cloud consumers and users of deployed services. For instance, a consumer can be a company deploying a Web application, which presents a varying workload according to the number of users accessing it.

Figure 1: High-level system architectural framework.

b) Green Resource Allocator: Acts as the interface between the Cloud infrastructure and consumers. It requires the interaction of the following components to support energy-efficient resource management:

Green Negotiator: Negotiates with the consumers/brokers to finalise the SLA, with specified prices and penalties (for violations of the SLA) between the Cloud provider and consumer, depending on the consumer's QoS requirements and energy saving schemes. In the case of Web applications, for instance, a QoS metric can be 95% of requests being served in less than 3 seconds.

Service Analyser: Interprets and analyses the service requirements of a submitted request before deciding whether to accept or reject it. Hence, it needs the latest load and energy information from the VM Manager and Energy Monitor respectively.

Consumer Profiler: Gathers specific characteristics of consumers so that important consumers can be granted special privileges and prioritised over other consumers.

Pricing: Decides how service requests are charged in order to manage the supply and demand of computing resources and facilitate effective prioritisation of service allocations.

Energy Monitor: Observes and determines which physical machines to power on/off.

Service Scheduler: Assigns requests to VMs and determines resource entitlements for allocated VMs. It also decides when VMs are to be added or removed to meet demand.

VM Manager: Keeps track of the availability of VMs and their resource entitlements. It is also in charge of


migrating VMs across physical machines. Accounting: Maintains the actual usage of resources by requests in order to compute usage costs. Historical usage information can also be used to improve service allocation decisions. c) VMs: Multiple VMs can be dynamically started and stopped on a single physical machine to meet accepted requests, hence providing maximum flexibility to configure various partitions of resources on the same physical machine to the different specific requirements of service requests. Multiple VMs can also concurrently run applications based on different operating system environments on a single physical machine. In addition, by dynamically migrating VMs across physical machines, workloads can be consolidated and unused resources can be put into a low-power state, turned off, or configured to operate at low-performance levels (e.g. using DVFS) in order to save energy. d) Physical Machines: The underlying physical computing servers provide the hardware infrastructure for creating virtualised resources to meet service demands.
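As a purely illustrative sketch of the separation of concerns described above, the fragment below models a few of the Green Resource Allocator components as small Python classes. All class and method names here are invented for illustration; the paper does not prescribe any implementation.

# Illustrative sketch only: an admission check in which the Service Analyser
# consults VM capacity information before a request is accepted.

class EnergyMonitor:
    def __init__(self):
        self.power_by_host = {}              # host id -> current power draw (W)

    def report(self, host, watts):
        self.power_by_host[host] = watts


class VMManager:
    def __init__(self):
        self.free_capacity = {}              # host id -> spare CPU share (0..1)

    def host_with_capacity(self, required):
        for host, spare in self.free_capacity.items():
            if spare >= required:
                return host
        return None


class ServiceAnalyser:
    def __init__(self, vm_manager, energy_monitor):
        self.vm_manager = vm_manager
        self.energy_monitor = energy_monitor

    def admit(self, request_cpu_share):
        # Accept the request only if some host can currently serve it.
        return self.vm_manager.host_with_capacity(request_cpu_share) is not None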

Early Experiments and Results

In this section, we discuss some of our early performance analysis of the energy-aware allocation heuristics described in the previous section. As the targeted system is a generic Cloud computing environment, it is essential to evaluate it on a large-scale virtualised data center infrastructure. However, it is difficult to conduct large-scale experiments on a real infrastructure, especially when it is necessary to repeat the experiment under the same conditions (e.g. when comparing different algorithms). Therefore, simulations have been chosen as the way to evaluate the proposed heuristics. The CloudSim toolkit [34] has been chosen as the simulation platform, as it is a modern simulation framework aimed at Cloud computing environments. In contrast to alternative simulation toolkits (e.g. SimGrid, GangSim), it supports modeling of on-demand virtualization-enabled resource and application management. It has been extended in order to enable power-aware simulations, as the core framework does not provide this capability. Apart from the power consumption modeling and accounting, the ability to simulate service applications with workloads that vary over time has been incorporated.

Power Model

Power consumption by computing nodes in data centers consists of consumption by the CPU, disk storage and network interfaces. In comparison to other system resources, the CPU consumes the largest amount of energy, and hence in this work we focus on managing its power consumption and efficient usage. Recent studies [28], [29], [30], [31] show that the application of DVFS on the CPU results in an almost linear power-to-frequency relationship. The reason lies in the limited number of states that can be set for the frequency and voltage of the CPU and the fact that DVFS is not applied to other system components apart from the CPU. Moreover, these studies show that on average an idle server consumes approximately 70% of the power consumed by the server running at full CPU speed. This fact justifies the technique of switching idle servers off to reduce total power consumption. Therefore, in this work we use the power model defined as

P(u) = k * Pmax + (1 - k) * Pmax * u,

where Pmax is the maximum power consumed when the server is fully utilised, k is the fraction of power consumed by the idle server, and u is the CPU utilisation. The utilisation of the CPU may change over time due to the variability of the workload; thus, the CPU utilisation is a function of time and is represented as u(t). Therefore, the total energy consumption (E) of a physical node can be defined as an integral of the power consumption function over a period of time:

E = integral over time of P(u(t)) dt.

Experimental Setup

We simulated a data center that comprises 100 heterogeneous physical nodes. Each node is modeled to have one CPU core with performance equivalent to 1000, 2000 or 3000 Million Instructions Per Second (MIPS), 8 GB of RAM and 1 TB of


storage. Power consumption by the hosts is defined according to the model described in the Power Model section above. According to this model, a host consumes from 175 W at 0% CPU utilization up to 250 W at 100% CPU utilization. Each VM requires one CPU core with 250, 500, 750 or 1000 MIPS, 128 MB of RAM and 1 GB of storage. The users submit requests for the provisioning of 290 heterogeneous VMs, which fills the full capacity of the simulated data center. Each VM runs a web application or any kind of application with a variable workload, which is modeled to create CPU utilization according to a uniformly distributed random variable. The application runs for 150,000 MIPS, which equals 10 minutes of execution on a 250 MIPS CPU at 100% utilization. Initially, VMs are allocated according to the requested characteristics assuming 100% utilization. Each experiment has been run 10 times and the presented results are built upon the mean values.
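As a rough numerical illustration of the linear power model and the energy integral given above, the sketch below computes the power draw and energy of a single simulated host from a CPU-utilisation trace. The constants (Pmax = 250 W, k = 0.7, i.e. 175 W when idle) are taken from the experimental setup; the function names and the example trace are our own and are not part of the original simulator.

# Minimal sketch of the linear power model P(u) = k*Pmax + (1-k)*Pmax*u
# and of energy as the integral of power over time.

PMAX = 250.0   # W, fully utilised host (from the experimental setup above)
K = 0.7        # fraction of Pmax drawn by an idle host (175 W at 0% CPU)

def power(u, p_max=PMAX, k=K):
    """Power draw (W) of a host at CPU utilisation u in [0, 1]."""
    return k * p_max + (1.0 - k) * p_max * u

def energy_kwh(utilisation_trace, step_seconds=300):
    """Approximate E = integral of P(u(t)) dt with a simple Riemann sum.

    utilisation_trace -- CPU utilisation samples, one per time step.
    step_seconds      -- length of each time step in seconds.
    """
    joules = sum(power(u) * step_seconds for u in utilisation_trace)
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

if __name__ == "__main__":
    trace = [1.0, 0.5]                   # hypothetical 10-minute trace
    print(power(0.0), power(1.0))        # 175.0 250.0
    print(round(energy_kwh(trace), 4))   # energy in kWh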

Simulation Results

For the benchmark experimental results we have used a Non Power Aware (NPA) policy. This policy does not apply any power-aware optimizations and implies that all hosts run at 100% CPU utilization and consume maximum power. The second policy applies DVFS, but does not perform any adaptation of the allocation of VMs at run-time. For the simulation setup described above, using the NPA policy leads to a total energy consumption of 9.15 kWh, whereas DVFS allows decreasing this value to 4.40 kWh.

To evaluate the ST policy we conducted several experiments with different values of the utilization threshold. The simulation results are presented in Figure 2. The results show that energy consumption can be significantly reduced relative to the NPA and DVFS policies, by 77% and 53% respectively, with 5.4% of SLA violations. They also show that with the growth of the utilization threshold energy consumption decreases, whereas the percentage of SLA violations increases. This is due to the fact that a higher utilization threshold allows more aggressive consolidation of VMs, at the cost of an increased risk of SLA violations.

Figure 2. Energy consumption and SLA violations by ST policy.

To evaluate two-threshold policies it is necessary to determine the best values for the utilization thresholds in terms of power consumption and QoS provided. Therefore, at first we simulated the MM policy with different values of the thresholds, varying the absolute values of the thresholds as well as the interval between the lower and upper thresholds. The results showing the energy consumption achieved by using this policy are presented in Figure 3.

Figure 3. Energy to thresholds relationship for the MM policy: (a) surface view, (b) top view.

The lowest values of energy consumption


can be gained by setting the lower threshold from 10% to 90% and the upper threshold from 50% to 100%. However, the obtained intervals of the thresholds are wide. Therefore, to determine concrete values, we have compared the thresholds by the percentage of SLA violations caused, as rare SLA violations ensure high QoS. The experimental results have shown that the minimum values of both characteristics can be achieved using 40% as the interval between the utilization thresholds.

Figure 4. Comparison of two-threshold algorithms: (a) energy consumption, (b) SLA violations, (c) number of VM migrations, (d) average SLA violation.

We have compared the MM policy with the HPG and RC policies, varying the exact values of the thresholds but preserving a 40% interval between them. The results (Figure 4) show that these policies achieve approximately the same values of energy consumption and SLA violations, whereas the number of VM migrations produced by the MM policy is reduced in comparison to the HPG policy by a maximum of 57% and 40% on average, and in comparison to the RC policy by a maximum of 49% and 27% on average.

Table 1. The final simulation results.

Policy     | Energy, kWh | SLA, % | VM migrations | Avg. SLA, %
NPA        | 9.15        | -      | -             | -
DVFS       | 4.40        | -      | -             | -
ST 50%     | 2.03        | 5.41   | 35,226        | 81
ST 60%     | 1.50        | 9.00   | 34,231        | 89
MM 30-70%  | 1.48        | 1.11   | 3,359         | 56
MM 40-80%  | 1.27        | 2.75   | 3,241         | 65
MM 50-90%  | 1.14        | 6.69   | 3,120         | 76

Final results comparing all the policies with different values of the thresholds are presented in Table 1 and in Figure 5. The results show that dynamic reallocation of VMs according to current CPU utilization provides higher energy savings compared to static allocation policies. The MM policy leads to the best energy savings: by 83%, 66% and 23% less energy consumption relative to the NPA, DVFS and ST policies respectively with thresholds of 30-70%, while ensuring a percentage of SLA violations of 1.1%; and by 87%, 74% and 43% with thresholds of 50-90% and 6.7% of SLA violations. The MM policy also leads to more than 10 times fewer VM migrations than the ST policy. The results show the flexibility of the algorithm, as the thresholds can be adjusted according to SLA requirements. A strict SLA (1.11%) allows an energy consumption of 1.48 kWh to be achieved. However, if the SLA is relaxed (6.69%), the energy consumption is further reduced to 1.14 kWh.
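To make the threshold idea concrete, the sketch below shows one simplified way such a reallocation decision could be expressed: hosts above the upper utilisation threshold offload VMs until they drop below it, while hosts below the lower threshold are fully evacuated so they can be switched off. This is only an illustration of the general principle with invented helper names; it is not the authors' MM heuristic, which additionally minimises the number of migrations.

# Simplified two-threshold consolidation check (illustration only).

def select_migrations(hosts, lower=0.3, upper=0.7):
    """Return a list of (vm, source_host) pairs proposed for migration.

    hosts -- dict mapping host id to a dict of {vm_id: cpu_share}, where
             cpu_share is the fraction of the host's CPU the VM uses.
    """
    to_migrate = []
    for host_id, vms in hosts.items():
        utilisation = sum(vms.values())
        if utilisation > upper:
            # Offload the smallest VMs first until the host drops below the upper threshold.
            for vm_id, share in sorted(vms.items(), key=lambda kv: kv[1]):
                if utilisation <= upper:
                    break
                to_migrate.append((vm_id, host_id))
                utilisation -= share
        elif 0 < utilisation < lower:
            # Under-utilised host: evacuate everything so it can be switched off.
            to_migrate.extend((vm_id, host_id) for vm_id in vms)
    return to_migrate

hosts = {"h1": {"vm1": 0.5, "vm2": 0.4}, "h2": {"vm3": 0.1}}
print(select_migrations(hosts))  # e.g. [('vm2', 'h1'), ('vm3', 'h2')]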

Open Challenges
In this section, we identify key open problems that can be addressed at the level of management of system resources. Virtualisation technologies, which Cloud computing environments heavily rely on, provide the ability to transfer VMs between physical nodes using live or offline migration. This enables the technique of dynamic consolidation of VMs to a minimal number of nodes according to current resource requirements. As a result, idle nodes can be switched off or put into a power-saving mode (e.g. sleep, hibernate) to reduce the total energy consumption of the data center. In order to validate the approach, we have proposed several resource allocation algorithms discussed in Section 4 and evaluated them by the extensive simulation studies presented in Section 5. Despite the energy savings, aggressive consolidation of VMs may lead to performance degradation and thus result in SLA violations. Our resource management algorithms effectively address the trade-off between energy consumption and the performance delivered by the system.

Figure 5. The final simulation results: (a) energy consumption, (b) SLA violations, (c) number of VM migrations, (d) average SLA violation.


Energy-aware Dynamic Resource Allocation

Recent developments in virtualisation have resulted in its proliferation across data centers. By supporting the movement of VMs between physical nodes, it enables dynamic migration of VMs according to QoS requirements. When VMs do not use all of the provided resources, they can be logically resized and consolidated on a minimal number of physical nodes, while idle nodes can be switched off. Currently, resource allocation in a Cloud data center aims to provide high performance while meeting SLAs, without a focus on allocating VMs to minimise energy consumption. To explore both performance and energy efficiency, three crucial issues must be addressed. First, excessive power cycling of a server could reduce its reliability. Second, turning resources off in a dynamic environment is risky from a QoS perspective: due to the variability of the workload and aggressive consolidation, some VMs may not obtain the required resources under peak load, thus failing to meet the desired QoS. Third, ensuring SLAs brings challenges to accurate application performance management in virtualised environments. A virtual machine cannot exactly reproduce the timing behaviour of a physical machine; this leads to timekeeping problems resulting in inaccurate time measurements within the virtual machine, which can lead to incorrect enforcement of the SLA. All these issues require effective consolidation policies that can minimise energy consumption without compromising the user-specified QoS requirements. To achieve this goal, we will develop novel QoS-based resource selection algorithms and mechanisms that optimise VM placements with the objective of minimising communication overhead, as described below.

QoS-based Resource Selection and Provisioning

Data center resources may deliver different levels of performance to their clients; hence, QoS-aware resource selection plays an important role in Cloud computing. Additionally, Cloud applications can present varying workloads. It is therefore essential to carry out a study of Cloud services and their workloads in order to identify common behaviours and patterns, and to explore load forecasting approaches that can potentially lead to more efficient resource provisioning and consequent energy efficiency. In this context, we will research sample applications and correlations between workloads, and attempt to build performance models that can help explore the trade-offs between QoS and energy saving. Further, we will investigate a new online approach to the consolidation strategy of a data center that allows a reduction in the number of active nodes required to process a variable workload without degrading the offered service level. The online method will automatically select a VM configuration while minimising the number of physical hosts needed to support it. Moreover, another goal is to provide the broker (or consumers) with resource-selection and workload-consolidation policies that exploit the trade-offs between performance and energy saving.

Optimisation of Virtual Network Topologies

In virtualised data centers VMs often communicate with each other, establishing virtual network topologies. However, due to VM migrations or non-optimised allocation, the communicating VMs may end up hosted on logically distant physical nodes, making data transfer between them costly. If the communicating VMs are allocated to hosts in different racks or enclosures, the network communication may involve network switches that consume a significant amount of power. To eliminate this data transfer overhead and minimise power consumption, it is necessary to observe the communication between VMs and place them on the same or closely located nodes. To provide effective reallocations, we will develop power consumption models of the network devices and estimate the cost of data transfer depending on the traffic volume. As migrations consume additional energy and have a negative impact on performance, before initiating a migration the reallocation controller has to ensure that the cost of migration does not exceed the benefit.
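The trade-off mentioned at the end of the previous paragraph can be phrased as a simple admission test for a proposed reallocation: estimate the energy spent by the migration itself and compare it with the energy expected to be saved on network transfers once the communicating VMs are co-located. The constants below are illustrative placeholders, not measurements from the paper.

# Illustrative cost/benefit check for migrating a VM closer to its communication peer.

def migration_worthwhile(vm_ram_gb,
                         traffic_gb_per_day,
                         days_colocated,
                         migration_j_per_gb=2000.0,   # assumed energy to copy VM state
                         network_j_per_gb=150.0):     # assumed energy per GB crossing switches
    """Return True if the estimated transfer savings exceed the migration cost."""
    migration_cost = vm_ram_gb * migration_j_per_gb
    transfer_saving = traffic_gb_per_day * days_colocated * network_j_per_gb
    return transfer_saving > migration_cost

print(migration_worthwhile(vm_ram_gb=4, traffic_gb_per_day=20, days_colocated=7))  # True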


Concluding Remarks and Future Directions


This work advances the Cloud computing field in two ways. First, it plays a significant role in the reduction of data center energy consumption costs, and thus helps to develop a strong, competitive Cloud computing industry. This is especially important in the context of Australia, as a recent Frost & Sullivan report shows that Australia is emerging as one of the preferred data center hubs among the Asia-Pacific countries [25]. Second, consumers are increasingly becoming conscious about the environment. In Australia, a recent study shows that data centers represent a large and rapidly growing energy consumption sector of the economy and are a significant source of CO2 emissions [26]. Reducing greenhouse gas emissions is a key energy policy focus of many countries, including Australia. Therefore, we expect researchers worldwide to put a strong thrust on the open challenges identified in this paper in order to enhance the energy-efficient management of Cloud computing environments.

References
[1] L. Kleinrock. A Vision for the Internet. ST Journal of Research, 2(1):4-5, Nov. 2005.
[2] D. A. Patterson. The Data Center Is The Computer. Communications of the ACM, 51(1):105-105, Jan. 2008.
[3] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic. Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25(6):599-616, Elsevier, June 2009.
[4] J. Markoff and S. Hansell. Hiding in Plain Sight, Google Seeks More Power. New York Times, June 14, 2006.


Holographic Memory
Susan S6 IT and Parvathy S6 CS Mohandas College of Engineering and Technology, Anad

Abstract
Holographic memory is a developing technology that has promised to revolutionise storage systems. It is a technique that can store information at high density inside crystals; data from more than 1000 CDs can fit into a holographic memory system. Holographic storage has the potential to become the next generation of storage media. Conventional memories use only the surface to store data, but holographic data storage systems use the volume to store data, and so have advantages over conventional storage systems. It is based on the principle of holography. This paper provides a description of the Holographic Data Storage System (HDSS), a three-dimensional data storage system which has a fundamental advantage over conventional read/write memory systems.

Introduction
With its omnipresent computers, all connected via the Internet, the Information Age has led to an explosion of information available to users. The decreasing cost of storing data, and the increasing storage capacities for the same small device footprint, have been key enablers of this revolution. While current storage needs are being met, storage technologies must continue to improve in order to keep pace with the rapidly increasing demand. However, both magnetic and conventional optical data storage technologies, where individual bits are stored as distinct magnetic or optical changes on the surface of a recording medium, are approaching physical limits beyond which individual bits may be too small or too difficult to store. Storing information throughout the volume of a medium, not just on its surface, offers an intriguing high-capacity alternative. Holographic data storage is a volumetric approach which, although conceived decades ago, has made recent progress toward practicality with the appearance of lower-cost enabling technologies, significant results from longstanding research efforts, and progress in holographic recording materials. Hence holographic memory has become a great white whale of technology research.

Concept Of Holographic Memory


Holography is a technique which allows the recording and playback of 3-dimensional images. Such a recording is called a hologram. Unlike other 3-dimensional pictures, holograms provide what is called parallax. Parallax allows the viewer to move back and forth, up and down, and see different perspectives as if the object were actually there. In holography, the aim is to record the complete wave field (both amplitude and phase) as it is intercepted by the recording medium; the recording plane may not even be an image plane. The light scattered or reflected by the object is intercepted by the recording medium and recorded completely, in spite of the fact that the detector is insensitive to the phase differences among the various parts of the optical field. The record, known as a hologram (whole record), captures the complete wave, which can be viewed at a later time by illuminating the hologram with an appropriate light beam. To this day holography continues to provide the most accurate depiction of 3-dimensional images in the world. In a holographic memory device, a laser beam is split in two, and the two resulting beams interfere in a crystal medium to store a holographic recreation of a page of data.

What Is Holography?
While a photograph has an actual physical image, a hologram contains information about the size, shape, brightness and contrast of the object being recorded. This information is stored in a very microscopic and complex pattern of interference. The interference pattern is made possible by the properties of light generated by a laser. In order to record the whole pattern, the light used must be highly directional and must be of one color. Such light is called coherent. Because the light from a laser is of one color and leaves the laser with its waves in perfect step with one another, it is ideal for making holograms. When we shine a light on the hologram, the information that is stored as an interference pattern takes the incoming light and recreates the original optical wavefront that was reflected off the object; hence the eyes and brain perceive the object as being in front of us once again.

Technique Of Storing Data On A Holographic Material


To record on the hologram, a page composer converts the data, in the form of electrical signals, into optical signals, and the controller generates the address to access the desired page. This results in the exposure of a small area of the recording medium through an aperture. The optical output signal is directed to the exposed area by the deflector. When the Blue-Argon laser is focused, a beam splitter splits it into two: a reference beam and a signal beam. The signal beam passes through a spatial light modulator (SLM), where digital information, organized in a page-like format of ones and zeros, is modulated onto the signal beam as a two-dimensional pattern of brightness and darkness. This signal beam is then purified using different crystals. When the signal beam and reference beam


meet, the interference pattern created stores the data carried by the signal beam. Different data pages are recorded in the same volume of the medium, depending on the angle at which the reference beam meets the signal beam. A holographic data storage system is fundamentally page-oriented, with each block of data defined by the number of data bits that can be spatially impressed onto the object beam. The total storage capacity of the system is then equal to the product of the page size (in bits) and the number of pages that can be recorded. The theoretical limit for the storage density of this technique is approximately several tens of terabytes (1 terabyte = 1024 gigabytes) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 Gb/in2. From this figure we can deduce that a regular disk (with a 4 cm radius of writing area) could hold up to a maximum of 3895.6 Gb.
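The disk-capacity figure quoted above can be reproduced with a short back-of-the-envelope calculation: multiply the reported areal density by the area of a disc with a 4 cm writing radius (using the standard conversion of 6.4516 cm2 per square inch).

import math

areal_density_gb_per_in2 = 500.0               # InPhase figure quoted above (Gb per square inch)
radius_cm = 4.0                                # radius of the writing area
area_in2 = math.pi * radius_cm ** 2 / 6.4516   # cm^2 -> in^2

capacity_gb = areal_density_gb_per_in2 * area_in2
print(round(capacity_gb, 1))                   # ~3895.6 Gb, matching the text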

Spatial Light Modulator (SLM)

A spatial light modulator is used for creating binary information out of laser light. The SLM is a 2-D plane, consisting of pixels which can be turned on and off to create 1s and 0s. An illustration of this is a window and a window shade: it is possible to pull the shade down over the window to block incoming sunlight, and if sunlight is desired again, the shade can be raised. A spatial light modulator contains a two-dimensional array of such windows, which are only microns wide. These windows block some parts of the incoming laser light and let other parts go through. The resulting cross section of the laser beam is a two-dimensional array of binary data, the same as what was represented on the SLM. After the laser beam is manipulated, it is sent into the hologram to be recorded. This data is written into the hologram in page form, so called because of its representation as a two-dimensional plane, or page, of data. Holographic memory reads data in the form of pages as well. For example, if a stream of 32 bits is sent to a processing unit by a conventional read head, a holographic memory system would in turn send 32 x 32 bits, or 1024 bits, due to its added dimension. This provides very fast access times for volumes far greater than serial access methods. The volume could be one megabit per page using an SLM resolution of 1024 x 1024 bits at 15-20 microns per pixel.

Reading Data

The stored data is read through the reproduction of the same reference beam used to create the hologram. The reference beam's light is focused on the photosensitive material, illuminating the appropriate interference pattern; the light diffracts on the interference pattern and projects the pattern onto a detector. The detector is capable of reading the data in parallel, over one million bits at once, resulting in a fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.


Technical Aspects
Like other media, holographic media is divided into write once (where the storage medium undergoes some irreversible change), and rewritable media (where the change is reversible). Rewritable holographic storage can be achieved via the photorefractive effect in crystals: Mutually coherent light from two sources creates an interference pattern in the media. These two sources are called the reference beam and the signal beam. Where there is constructive interference the light is bright and electrons can be promoted from the valence band to the conduction band of the material (since the light has given the electrons energy to jump the energy gap). The positively charged vacancies they leave are called holes and they must be immobile in rewritable holographic materials. Where there is destructive interference, there is less light and few electrons are promoted. Electrons in the conduction band are free to move in the material. They will experience two opposing forces that determine how they move. The first force is the coulomb force between the electrons and the positive holes that they have been promoted from. This force encourages the electrons to stay put or move back to where they came from. The second is the pseudo-force of diffusion that encourages them to move to areas where electrons are less dense. If the coulomb forces are not too strong, the electrons will move into the dark areas. Beginning immediately after being promoted, there is a chance that a given electron will recombine with a hole and move back into the valence band. The faster the rate of recombination, the fewer the number of electrons that will have the chance to move into the dark areas. This rate will affect the strength of the hologram. After some electrons have moved into the dark areas and recombined with holes there, there is a permanent space charge field between the electrons that moved to the dark spots and the holes in the bright spots. This leads to a change in the index of refraction due to the electro-optic effect.

When the information is to be retrieved or read out from the hologram, only the reference beam is necessary. The beam is sent into the material in exactly the same way as when the hologram was written. As a result of the index changes in the material that were created during writing, the beam splits into two parts. One of these parts recreates the signal beam where the information is stored. Something like a CCD camera can be used to convert this information into a more usable form. Holograms can theoretically store one bit per cubic block the size of the wavelength of the light used in writing. For example, light from a helium-neon laser is red, with a 632.8 nm wavelength. Using light of this wavelength, perfect holographic storage could store about 4 gigabits per cubic millimetre. In practice, the data density would be much lower, for at least four reasons:
1. The need to add error correction.
2. The need to accommodate imperfections or limitations in the optical system.
3. Economic payoff (higher densities may cost disproportionately more to achieve).
4. Design technique limitations: a problem currently faced in magnetic hard drives, wherein the magnetic domain configuration prevents the manufacture of disks that fully utilize the theoretical limits of the technology.

Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.
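The "one bit per wavelength-sized cube" limit quoted above is easy to check numerically; the sketch below simply counts how many cubes with a side of 632.8 nm fit into a cubic millimetre.

wavelength_nm = 632.8                 # helium-neon laser, as in the text
cube_side_mm = wavelength_nm * 1e-6   # nm -> mm

bits_per_mm3 = (1.0 / cube_side_mm) ** 3
print(f"{bits_per_mm3 / 1e9:.1f} gigabits per cubic millimetre")   # ~3.9, i.e. about 4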

Two-Color Coding

For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity. Readout is achieved by illumination with the reference beam alone. Hence the readout beam, with its longer wavelength, is not able to excite the recombined electrons from the deep trap centers during readout, as they need the sensitizing light of shorter wavelength to erase them. Usually, for two-color holographic recording, two different dopants are required to promote trap centers; these belong to the transition metals and rare earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers are created in the lithium niobate crystal, namely a shallow and a deep trap. The concept is to use the sensitizing light to excite electrons from the deep trap, farther from the conduction band, to the conduction band, and then let them recombine at the shallow traps nearer to the conduction band. The reference and signal beams are then used to excite the electrons from the shallow traps back to the deep traps. The information is hence stored in the deep traps. Reading is done with the reference beam, since the electrons can no longer be excited out of the deep traps by the long-wavelength beam. Effect of annealing: in a doubly doped LiNbO3 crystal there exists an optimum oxidation/reduction state for the desired performance. This optimum depends on the doping levels of the shallow and deep traps as well as the annealing conditions for the crystal samples. This optimum state generally occurs when 95-98% of the deep traps are filled. In a strongly oxidized sample, holograms cannot be easily recorded and the diffraction efficiency is very low. This is because the shallow trap is completely empty and the deep trap is


also almost devoid of electrons. In a highly reduced sample on the other hand, the deep traps are completely filled and the shallow traps are also partially filled. This results in very good sensitivity (fast recording) and high diffraction efficiency due to the availability of electrons in the shallow traps. However during readout, all the deep traps get filled quickly and the resulting holograms reside in the shallow traps where they are totally erased by further readout. Hence after extensive readout the diffraction efficiency drops to zero and the hologram stored cannot be fixed.

Phase-conjugate readout
As described in the previous sections on tester platforms, the need for both high density and excellent imaging requires an expensive short-focal-length lens system corrected for all aberrations (especially distortion) over a large field, as well as a storage material of high optical quality. Several authors have proposed bypassing these requirements by using phase-conjugate readout of the volume holograms [12-15]. After the object beam is recorded from the SLM with a reference beam, the hologram is reconstructed with a phase conjugate (time-reversed copy) of the original reference beam. The diffracted wavefront then retraces the path of the incoming object beam in reverse, canceling out any accumulated phase errors. This should allow data pages to be retrieved with high fidelity with a low-performance lens, from storage materials fabricated as multimode fibers [12, 13], or even without imaging lenses [14, 15] for an extremely compact system. Most researchers have relied on the visual quality of retrieved images or the detection of isolated fine structure in resolution targets as proof that phase-conjugate retrieval provides high image fidelity. This, however, is no guarantee that the retrieved data pages will be correctly received by the detector array. In fact, the BER of pixel-matched holograms can be used as an extremely sensitive measure of the conjugation fidelity of volume holograms. Any errors in rotation, focus, x-y registration, magnification, or residual aberrations will rapidly increase the measured bit-error rate (BER) for the data page. Using the pixel-matched optics in both the DEMON I platform and the PRISM tester, we have implemented low-BER phase-conjugate readout of large data pages. On the PRISM tester, phase conjugation allowed the readout of megapel pages through

much smaller apertures than in the original megapel experiment mentioned above, which was performed without phase conjugation. This demonstrates a thirtyfold increase in areal density per hologram. Figure a shows a simplified diagram of the PRISM tester, modified for this phase-conjugate experiment. The Fourier lenses were removed, and the object beam was focused by a lens through the megapel mask onto a mirror placed halfway between the mask and the CCD. After deflection by this mirror, the object beam was collected by a second lens, forming an image of the mask. Here an Fe-doped LiNbO3 crystal was placed to store a hologram in the 90-degree geometry. After passing through the crystal, the polarization of the reference beam was rotated and the beam was focused into a self-pumped phase-conjugate mirror using a properly oriented, nominally undoped BaTiO3 crystal. In such a configuration, the input beam is directed through the BaTiO3 crystal and into the far corner, creating random backscattering throughout the crystal. It turns out that counterpropagating beams (one scattered upon input to the crystal, one reflected from the back face) are preferentially amplified by the recording of real-time holograms, creating the two pump waves for a four-wave-mixing process. Since momentum (or wavevector) must be conserved among the four beams (energy is already conserved because all four wavelengths are identical), and since the two pump beams are already counterpropagating, the output beam generated by this process must be the phase conjugate of the input beam. The crystal axes of the LiNbO3 were oriented such that the return beam from the phase-conjugate mirror wrote the hologram, and the strong incoming reference beam was used for subsequent readout. (Although both mutually phase-conjugate reference beams were present in the LiNbO3 during recording, only the beam returning from the phase-conjugate mirror wrote a hologram, because of the orientation of the LiNbO3 crystal axes. For readout, the phase-conjugate mirror was blocked, and the incoming reference beam read this hologram, reconstructing a phase-conjugate object beam.) By turning the mirror by 90 degrees, this phase-conjugate object beam was deflected to strike the pixel-matched CCD camera. We were able to store and retrieve a megapel hologram with only 477 errors (BER of about 5 x 10^-4) after applying a single global threshold. The experiment was repeated with a square aperture of 2.4 mm on a side placed in the object beam at the LiNbO3 crystal, resulting in 670 errors. Even with the large spacing between the SLM and CCD, this is already an areal density of 0.18 bits per square micron per hologram. In contrast, without phase-conjugate readout, an aperture of 14 mm x 14 mm was needed to produce low BERs with the custom optics. The use of phase-conjugate readout allowed mapping of SLM pixels to detector pixels over data pages of 1024 x 1024 pixels without the custom imaging optics, and provided an improvement in areal density (as measured at the entrance aperture of the storage material) of more than 30. In a second experiment, we modified the DEMON I platform in an analogous manner, using a BaTiO3 crystal for phase conjugation and LiNbO3 for recording data-bearing holograms of 320 x 240 pixels. To demonstrate the phase-conjugation properties, the two retrieved pages of Figure b illustrate the results of passing the object beam through a phase aberrator


Limitations

1. In any holographic data storage system, the angle at which the second reference beam is focused on the crystal to retrieve a page of data is the crucial component. It must match the original beam exactly, without deviation; a difference of even a thousandth of a millimetre will result in failure to retrieve that page of data.
2. If too many pages are stored in one crystal, the strength of each hologram is diminished.
3. If there are too many holograms stored on a crystal and the reference beam used to retrieve a hologram is not focused at the precise angle, it will pick up a lot of background from the other holograms stored around it.

Figure b: Portion of a data page holographically reconstructed through a phase aberration, without phase-conjugate readout (BER of about 5 x 10^-2) and with phase-conjugate readout (BER < 10^-5), thus cancelling out the accumulated phase error.

Advantages
1. An entire page of data can be retrieved quickly and at one time. Since this memory is not serially or sequentially operated like most memory, a page of data can be read out in parallel.
2. It provides very high storage density, of the order of terabytes, stored in small cubic devices.
3. High data transfer rates can be achieved with a perfect holographic set-up, with data transfer rates between 1 and 10 GB per second.

References
1. www.wikipedia.com
2. www.signallake.com
3. www.computerweekly.com

A Novel Technique for Image Steganography Based On Block-DCT and Huffman Encoding
Arunima Kurup P& Poornima D Sreenagesh
S8, Department of Information Technology, Mohandas College Of Engineering And Technology,Anad,Thiruvananthapuram

Abstract
Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on block DCT, where the DCT is used to transform original image (cover image) blocks from the spatial domain to the frequency domain. First, a gray-level image of size M x N is divided into non-overlapping 8 x 8 blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the P = MN/64 blocks. Huffman encoding is then performed on the secret message/image before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has a high capacity and good invisibility. Moreover, the PSNR of the cover image with the stego-image shows better results in comparison with other existing steganography approaches. Furthermore, satisfactory security is maintained, since the secret message/image cannot be extracted without knowing the decoding rules and the Huffman table.

Introduction
With the development of Internet technologies, digital media can be transmitted conveniently over the Internet. However, message transmissions over the Internet still have to face all kinds of security problems. Therefore, how to protect secret messages during transmission becomes an essential issue for the Internet. Encryption is a well-known procedure for secure data transmission. The commonly used encryption schemes include DES (Data Encryption Standard), AES (Advanced Encryption Standard) and RSA. These methods scramble the secret message so that it cannot be understood. However, this makes the message suspicious enough to attract an eavesdropper's attention. Hence, a new scheme, called steganography, arises to conceal secret messages within some other ordinary media (i.e. images, music and video files) so that they cannot be observed. Steganography differs from cryptography in the sense that where cryptography focuses on concealing the contents of a message, steganography focuses on concealing the existence of a message. Two other technologies that are closely related to steganography are watermarking and fingerprinting. Watermarking is a protecting technique which protects (claims) the owner's property right for digital media (i.e. images, music, video and software) by some hidden watermarks. Therefore, the goal of steganography is to protect the secret messages, while the goal of watermarking is to protect the cover object itself. Steganography is the art and science of hiding information in a cover document, such as digital images, in a way that conceals the existence of hidden data. The word steganography in Greek means covered writing (from the Greek words stegos, meaning cover, and grafia, meaning writing). The main objective of steganography is to communicate securely in such a way that the true message is not visible to the observer. That is, unwanted parties should not be able to distinguish in any sense between a cover-image (an image not containing any secret message) and a stego-image (a modified cover-image containing a secret message). Thus the stego-image should not deviate much from the original cover-image. Today steganography is mostly used on computers, with digital data being the carriers and networks being the high-speed delivery channels.

Related Work

Steganography is a branch of information hiding in which secret information is camouflaged within other information. A simple way of doing steganography is based on modifying the least significant bit layer of images, known as the LSB technique. The LSB technique directly embeds the secret data within the pixels of the cover image. In some cases, the LSBs of pixels are


visited at random or in certain areas of the image, and sometimes the pixel value is incremented or decremented. Some recent research has studied the nature of the stego-image and suggested new methodologies for increasing the capacity. Habes proposed a new method (4 Least Significant Bits) for hiding a secret image inside a carrier image. In this method each individual pixel in an image is made up of a string of bits; he took the 4 least significant bits of an 8-bit true-color image to hold 4 bits of the secret message/image by simply overwriting the data that was already there. Schemes of the second kind embed the secret data within a cover image that has been transformed, such as with the DCT (discrete cosine transform). The DCT transforms a cover image from an image representation into a frequency representation, by grouping the pixels into non-overlapping blocks of 8 x 8 pixels and transforming the pixel blocks into 64 DCT coefficients each. A modification of a single DCT coefficient will affect all 64 image pixels in that block. The DCT coefficients of the transformed cover image are quantized, and then modified according to the secret data. Tseng and Chang proposed a novel steganography method based on JPEG; the DCT for each block of 8 x 8 pixels was applied in order to improve the capacity and control the compression ratio. Capacity, security and robustness are the three main aspects affecting steganography and its usefulness. Capacity refers to the amount of data bits that can be hidden in the cover medium. Security relates to the ability of an eavesdropper to figure out the hidden information easily. Robustness concerns the ability to resist modification or destruction of the hidden data.
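For contrast with the transform-domain approach proposed later in this paper, the spatial-domain LSB insertion mentioned at the start of this section can be written in a few lines. This is the generic textbook technique, not any of the cited authors' specific variants.

import numpy as np

def lsb_embed(pixels, bits):
    """Overwrite the least significant bit of successive pixels with message bits."""
    flat = pixels.flatten().astype(np.uint8)      # copy of the cover pixels
    for i, bit in enumerate(bits[:flat.size]):
        flat[i] = (flat[i] & 0xFE) | int(bit)
    return flat.reshape(pixels.shape)

cover = np.array([[100, 101], [102, 103]], dtype=np.uint8)
print(lsb_embed(cover, "1010"))   # [[101 100] [103 102]]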

Proposed Image Steganography Algorithm

Image steganography schemes can be classified into two broad categories: spatial-domain based and transform-domain based. In spatial-domain approaches, the secret messages are embedded directly. In the spatial domain, the most common and simplest steganographic method is the least significant bit (LSB) insertion method. In the LSB technique, the least significant bits of the pixels are replaced by the message bits, which are permuted before embedding. However, the LSB insertion method is easy to attack. A new steganography technique, named the modified side match scheme, was proposed; it preserves the image quality and increases the embedding capacity, but is not robust against attack because it is a spatial-domain approach and no transform is used. Based on the same embedding capacity, our proposed method improves both image quality and security. A secret message/image hidden in the spatial domain can easily be extracted by an unauthorized user. In this paper, we propose a frequency-domain steganography technique for hiding a large amount of data with high security, good invisibility and no loss of the secret message. The basic idea of hiding information in the frequency domain is to alter the magnitude of all of the DCT coefficients of the cover image. The 2-D DCT converts the image blocks from the spatial domain to the frequency domain. The schematic/block diagram of the whole process is given in Figure 2 ((a) and (b)).

PSNR (Peak Signal to Noise Ratio)

The PSNR is expressed in dB. A larger PSNR indicates higher image quality, i.e. there is only a little difference between the cover-image and the stego-image. On the other hand, a smaller PSNR means there is large distortion between the cover-image and the stego-image.
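For reference, the PSNR between a cover image and a stego-image is conventionally computed from the mean squared error; the sketch below uses the usual definition for 8-bit images (peak value 255). This is the standard formula rather than anything specific to the proposed scheme.

import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)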


Figure 2: Block diagram of the proposed steganography technique: (a) insertion of a secret image (or message) into a cover image; (b) removal of a secret image (or message).

8-bit block preparation


The Huffman code H is decomposed into 8-bit blocks B. Let the length of the Huffman-encoded bit stream be LH. Thus, if LH is not divisible by 8, then the last block contains r = LH % 8 bits.

Embedding of Secret Message / Image


The proposed secret message/image embedding scheme comprises the following five steps:

Step 1: DCT. Divide the carrier image into non-overlapping blocks of size 8 x 8 and apply the DCT on each of the blocks of the cover image f to obtain F.

Step 2: Huffman encoding. Perform Huffman encoding on the 2-D secret image S of size M2 x N2 to convert it into a 1-D bit stream H.

Step 3: 8-bit block preparation. The Huffman code H is decomposed into 8-bit blocks B.

Step 4: Bit replacement. The least significant bit of each of the DCT coefficients inside an 8 x 8 block is changed to a bit taken from each 8-bit block B, from left to right: for k = 1, 2, ..., set LSB((F(u,v))2) to B(k), where B(k) is the kth bit (from left to right) of a block B and (F(u,v))2 is the DCT coefficient in binary form.

Step 5: IDCT. Perform the inverse block DCT on F and obtain a new image f1 which contains the secret image.


Discrete Cosine Transform


Let I(x,y) denote an 8-bit grayscale cover-image with x = 1, 2, ..., M1 and y = 1, 2, ..., N1. This M1 x N1 cover-image is divided into 8 x 8 blocks and a two-dimensional (2-D) DCT is performed on each of the L = M1N1/64 blocks.
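A minimal sketch of the block transform described above, assuming NumPy and SciPy are available: the cover image is split into 8 x 8 blocks and a 2-D DCT is applied to each block. The type-II DCT with orthonormal scaling is used here; the paper does not specify the normalisation.

import numpy as np
from scipy.fftpack import dct, idct

def blockwise_dct(image, block=8):
    """Apply a 2-D DCT to each non-overlapping block x block tile of a grayscale image."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = image[i:i + block, j:j + block].astype(np.float64)
            out[i:i + block, j:j + block] = dct(dct(tile, axis=0, norm="ortho"),
                                                axis=1, norm="ortho")
    return out

def blockwise_idct(coeffs, block=8):
    """Inverse of blockwise_dct."""
    h, w = coeffs.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = coeffs[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = idct(idct(tile, axis=0, norm="ortho"),
                                                 axis=1, norm="ortho")
    return out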

Huffman Encoding and Huffman Table (HT)

Before embedding the secret image into the cover image, it is first encoded using Huffman coding. Huffman codes are optimal codes that map one symbol to one code word. For an image, Huffman coding assigns a binary code to each intensity value of the image, and a 2-D M2 x N2 image is converted into a 1-D bit stream of length LH < M2 x N2. The Huffman table (HT) contains the binary codes for each intensity value. The Huffman table must be the same in both the encoder and the decoder; thus the Huffman table must be sent to the decoder along with the compressed image data.
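The sketch below builds a Huffman-style code table from symbol frequencies using Python's heapq, illustrating how each intensity value is mapped to a variable-length prefix-free code. The paper does not specify which Huffman variant its table uses, so this is only an illustration.

import heapq
from collections import Counter
from itertools import count

def huffman_table(symbols):
    """Map each symbol (e.g. pixel intensity) to a prefix-free binary code string."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    tiebreak = count()                       # avoids comparing dicts when frequencies tie
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

table = huffman_table([0, 0, 0, 255, 255, 128])
bitstream = "".join(table[s] for s in [0, 0, 0, 255, 255, 128])
print(table, bitstream)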

Embedding Algorithm
Input: An M1 x N1 carrier image and a secret message/image.
Output: A stego-image.
1. Obtain the Huffman table of the secret message/image.
2. Find the Huffman-encoded binary bit stream of the secret image by applying the Huffman encoding technique using the Huffman table obtained in step 1.
3. Calculate the size of the encoded bit stream in bits.
4. Divide the carrier image into non-overlapping blocks of size 8 x 8 and apply the DCT on each of the blocks of the cover image.
5. Repeat for each bit obtained in step 3:
   (a) Insert the bits into the LSB position of each DCT coefficient of the 1st 8 x 8 block found in step 4.
6. Decompose the encoded bit stream of the secret message/image obtained in step 2 into 1-D blocks of size 8 bits.
7. Repeat for each 8-bit block obtained in step 6:
   (a) Change the LSB of each DCT coefficient of each 8 x 8 block (excluding the first) found in step 4 to a bit taken from left (LSB) to right (MSB) from each 8-bit block B.
8. Repeat for each bit of the Huffman table:
   (a) Insert the bits into the LSB position of each DCT coefficient.
9. Apply the inverse DCT using the identical block size.
10. End.
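A hedged end-to-end sketch of the embedding idea, reusing the blockwise_dct/blockwise_idct and huffman_table helpers sketched earlier: the Huffman bit stream of the secret data is written into the least significant bit of the (integer-rounded) DCT coefficients, one bit per coefficient, skipping the first block, which the algorithm reserves for the length field. The exact coefficient ordering, rounding and length encoding of the published scheme are not reproduced here.

import numpy as np
# Assumes blockwise_dct / blockwise_idct from the earlier sketch are in scope.

def embed_bits(cover, bits, block=8):
    """Write one bit into the LSB of each rounded DCT coefficient,
    starting from the second 8x8 block, then invert the transform."""
    coeffs = np.rint(blockwise_dct(cover, block)).astype(np.int64)
    h, w = coeffs.shape
    positions = []
    for bi in range(0, h - h % block, block):
        for bj in range(0, w - w % block, block):
            if bi == 0 and bj == 0:
                continue                      # first block reserved for the size field
            for u in range(block):
                for v in range(block):
                    positions.append((bi + u, bj + v))
    if len(bits) > len(positions):
        raise ValueError("secret data too large for this cover image")
    for bit, (r, c) in zip(bits, positions):
        coeffs[r, c] = (coeffs[r, c] & ~1) | int(bit)
    stego = blockwise_idct(coeffs.astype(np.float64), block)
    return np.clip(np.rint(stego), 0, 255).astype(np.uint8)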



Extraction of the secret message / Image


The stego-image is received in the spatial domain. The DCT is applied on the stego-image using the same block size of 8 x 8 to transform the stego-image from the spatial domain to the frequency domain. The size of the encoded bit stream and the encoded bit stream of the secret message/image are extracted, along with the Huffman table of the secret message/image.

Extraction Algorithm
Input: An M1 x N1 stego-image.
Output: The secret image.
1. Divide the stego-image into non-overlapping blocks of size 8 x 8 and apply the DCT on each of the blocks of the stego-image.
2. The size of the encoded bit stream is extracted from the 1st 8 x 8 DCT block by collecting the least significant bits of all of the DCT coefficients inside the 1st 8 x 8 block.
3. The least significant bits of all of the DCT coefficients inside each 8 x 8 block (excluding the first) are collected and added to a 1-D array.
4. Repeat step 3 until the size of the 1-D array becomes equal to the size extracted in step 2.
5. Construct the Huffman table by extracting the LSBs of all of the DCT coefficients inside the 8 x 8 blocks, excluding the first block and the blocks used in step 3.
6. Decode the 1-D array obtained in step 3 using the Huffman table obtained in step 5.
7. End.

Conclusion

In this paper, we propose a steganography process in the frequency domain to improve security and image quality compared to existing algorithms, which normally operate in the spatial domain. According to the simulation results, the stego-images of our method are almost identical to other methods' stego-images, and it is difficult to differentiate between them and the original images. Our proposed algorithm also provides three additional layers of security by means of the transformation (DCT and inverse DCT) of the cover image and the Huffman encoding of the secret image. The demand for robustness in the image steganography field is not as strong as it is in the watermarking field; as a result, image steganography methods usually neglect the basic demand of robustness. In our proposed method, the embedding process is hidden under the transformation, i.e. the DCT and inverse DCT. These operations and the Huffman encoding of the secret image keep the images safe from being stolen or destroyed by unintended users, and hence the proposed method may be more robust against brute-force attack.

References
[1] Data Encryption Standard (DES), National Bureau of Standards (U.S.), Federal Information Processing Standards Publication 46, National Technical Information Service, Springfield, VA, 1997. [2] Daemen, J., and Rijmen, V. Rijndael: The Advanced Encryption Standard, Dr. Dobb's Journal, March 2001.


[3] R. Rivest, A. Shamir, and L. Adleman, 1978. A method for obtaining digital signatures and public-key cryptosystems. Communication of the ACM: 120-126. [4] Pfitzmann, B. 1996. Information hiding terminology, Proc. First Workshop of Information Hiding Proceedings, Cambridge, U.K., Lecture Notes in Computer Science, Vol.1174: 347-350. [5] Wang, H & Wang, S, Cyber warfare: Steganography vs. Steganalysis, Communications of the ACM, 47:10, October 2004 [6] Jamil, T., Steganography: The art of hiding information is plain sight, IEEE Potentials, 18:01, 1999. [7] Moerland, T, Steganography and Steganalysis, Leiden Institute of Advanced Computing Science, www.liacs.nl/home/ tmoerl/privtech.pdf [8] N. F. Johnson and S. Katzenbeisser, A survey of steganographic techniques., in S. Katzenbeisser and F. Peticolas (Eds.): Information Hiding, pp.43-78. Artech House, Norwood, MA, 2000. [9] Li, Zhi., Sui, Ai, Fen., and Yang, Yi, Xian. 2003 A LSB steganography detection algorithm, IEEE Proceedings on Personal Indoor and Mobile Radio Communications: 2780-2783. [10] J. Fridrich and M. Goljan, "Digital image steganography using stochastic modulation", SPIE Symposium on Electronic Imaging, San Jose, CA, 2003. [11] Alkhrais Habes , 4 least Significant Bits Information Hiding Implementation and Analysis , ICGST Int. Conf. on Graphics, Vision and Image Processing (GVIP-05), Cairo, Egypt, 2005. [12] Krenn, R., Steganography and Steganalysis, http://www.krenn.nl/univ/cry/steg/article.pdf [13] C.-C. Chang, T.-S. Chen and L.-Z. Chung, A steganographic method based upon JPEG and

quantization table modification, Information Sciences, vol. 141, 2002, pp. 123-138. [14] R. Chu, X. You, X. Kong and X. Ba, A DCT-based image steganographic method resisting statistical attacks, InProceedings of (ICASSP '04), IEEE International Conferenceon Acoustics, Speech, and Signal Processing, 17-21 May.vol.5, 2004, pp V-953-6. [15] H.-W. Tseng and C.-C. Chang, Steganography using JPEG-compressed images, The Fourth International Conference on Computer and Information Technology, CIT'04, 14-16 Sept 2004, pp. 12-17. [16] Chen, B. and G.W. Wornell, 2001. Quantization index modulation: A class of provably good methods for digital watermarking
and information embedding.IEEE Trans. Inform. Theor., 47: 1423-1443. DOI: 10.1109/18.923725. [17] Chan, C.K. and Cheng. L.M. 2003. Hiding data in image by simple LSB substitution. Pattern Recognition, 37: 469 474. [18] Chang, C.C and Tseng, H.W. 2004. A Steganographic method for digital images using side match. Pattern Recognition Letters, 25: 1431 1437.

[19] SWANSON, M.D., KOBAYASHI, M., and TEWFIK, A.H.: 'Multimedia data embedding and watermarking technologies', Proc. IEEE, 1998, 86(6), pp. 1064-1087 [20] Chen, T.S., Chang C.C., and Hwang, M.S. 1998. A virtual image cryptosystem based upon vector quantization. IEEE transactions on Image Processing, 7,10: 1485 1488. [21] Chung, K.L., Shen, C.H. and Chang, L.C. 2001. A novel SVD- and VQ-based image hiding scheme. Pattern Recognition Letters, 22: 1051 1058. [22] Iwata, M., Miyake, K., and Shiozaki, A. 2004. Digital Steganography Utilizing Features of JPEG Images, IEICE Transfusion Fundamentals, E87-A, 4:929 936. International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 112 [23] Chen, P.Y. and Wu, W.E. 2009. A Modified Side Match Scheme for Image Steganography, International Journal of Applied Science and Engineering, 7,1: 53 60.


Nano Technology

Sreeja S.S.
Department of Computer Applications (MCA), Mohandas College of Engineering and Technology, Anad, Trivandrum

Abstract
Nanotechnology is engineering and manufacturing at the molecular scale, taking advantage of the unique properties that exist at that scale. The application of nanotechnology to medicine is called nanomedicine. This paper reviews different aspects of nanotechnology in curing different types of diseases. Nanotechnology is concerned with the molecular-scale properties and applications of biological nanostructures, and as such it sits at the interface between the chemical, biological and physical sciences. Applications in the field of medicine are especially promising: areas such as disease diagnosis, drug delivery and molecular imaging are being intensively researched. Special stress is given in this paper to the application of nanorobots in medicine. The paper also proposes the use of nanorobots, based on nanotechnology, to replace existing surgeries that involve many risks to the patient. No matter how highly trained the specialists may be, surgery can still be dangerous; a nanorobot is not only a safer but also a faster and better technique for removing the plaque deposited on the internal walls of arteries.

1. Introduction
Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, 'nanotechnology' refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high-performance products.

2. Nanotechnology in Medicine

A. Drug delivery


Nanomedical approaches to drug delivery center on developing nanoscale particles or molecules to improve drug bioavailability. Bioavailability refers to the presence of drug molecules where they are needed in the body and where they will do the most good. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting with nanoengineered devices; it is all about targeting the molecules and delivering drugs with cellular precision. More than $65 billion is wasted each year because of poor bioavailability. In vivo imaging is another area where new tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a favorable distribution and improved contrast. Newly developed nanoengineered materials might be effective in treating illnesses and diseases such as cancer. What nanoscientists will be able to achieve in the future is beyond current imagination; this might be accomplished by self-assembled biocompatible nanodevices that detect, evaluate, treat and report to the clinical doctor automatically.

B. Protein and peptide delivery


Proteins and peptides exert multiple biological actions in the human body, and they have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of these biopharmaceuticals using nanomaterials such as nanoparticles and dendrimers is an emerging field called nanobiopharmaceutics, and these products are called nanobiopharmaceuticals.

Cancer
Figure: A schematic illustration of how nanoparticles or other cancer drugs might be used to treat cancer.


The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. These nanoparticles are much brighter than organic dyes and need only one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher-contrast image at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements. Another nanoproperty, the high surface-area-to-volume ratio, allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites, because tumors lack an effective lymphatic drainage system. A very exciting research question is how to make these imaging nanoparticles do more things for cancer. For instance, is it possible to manufacture multifunctional nanoparticles that would detect, image, and then proceed to treat a tumor? This question is under vigorous investigation, and the answer could shape the future of cancer treatment. [11] A promising new cancer treatment that may one day replace radiation and chemotherapy is edging closer to human trials. Kanzius RF therapy attaches microscopic nanoparticles to cancer cells and then "cooks" tumors inside the body with radio waves that heat only the nanoparticles and the adjacent (cancerous) cells.


Nanoparticle targeting
Nanoparticles are promising tools for the advancement of drug delivery, medical imaging, and diagnostic sensors. However, the biodistribution of these nanoparticles is mostly unknown because of the difficulty of targeting specific organs in the body. Current research on the excretory systems of mice, however, shows the ability of gold composites to selectively target certain organs based on their size and charge. These composites are encapsulated by a dendrimer and assigned a specific charge and size. Positively charged gold nanoparticles were found to enter the kidneys, while negatively charged gold nanoparticles remained in the liver and spleen. It is suggested that the positive surface charge of the nanoparticle decreases the rate of opsonization of nanoparticles in the liver, thus affecting the excretory pathway. Even at a relatively small size of 5 nm, though, these particles can become compartmentalized in the peripheral tissues and will therefore accumulate in the body over time. While this research shows that targeting and distribution can be augmented by nanoparticles, understanding the dangers of nanotoxicity is an important next step for their medical use.

Surgery
A greenish liquid containing gold-coated nanoshells is dribbled along the seam of a cut artery, and an infrared laser is traced along it, causing the two sides to weld together. This could solve the difficulties and blood leaks caused when a surgeon tries to restitch arteries that have been cut during a kidney or heart transplant; this "flesh welder" could weld the artery perfectly.

Visualization

Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size and made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce; both groups can then be lit with a single light source.

B. Medical applications of molecular nanotechnology

Nanorobots

The somewhat speculative claims about the possibility of using nanorobots [17] in medicine would, advocates say, totally change the world of medicine once realized. Nanomedicine [1][16] would make use of these nanorobots (e.g., Computational Genes), introduced into the body, to repair or detect damage and infections. According to Robert Freitas of the Institute for Molecular Manufacturing, a typical blood-borne medical nanorobot would be between 0.5 and 3 micrometres in size, because that is the maximum size possible due to the capillary passage requirement. Carbon could be the primary element used to build these nanorobots, owing to the inherent strength and other characteristics of some forms of carbon (diamond/fullerene composites), and nanorobots would be fabricated in desktop nanofactories specialized for this purpose. Nanodevices could be observed at work inside the body using MRI, especially if their components were manufactured using mostly 13C atoms rather than the natural 12C isotope of carbon, since 13C has a nonzero nuclear magnetic moment.


Medical nanodevices would first be injected into a human body, and would then go to work in a specific organ or tissue mass. The doctor will monitor the progress and make certain that the nanodevices have reached the correct target treatment region. The doctor will also be able to scan a section of the body and actually see the nanodevices congregated neatly around their target (a tumor mass, etc.), confirming that the procedure was successful.

Cell repair machines


Using drugs and surgery, doctors can only encourage tissues to repair themselves. With molecular machines, there will be more direct repairs. Cell repair will utilize the same tasks that living systems already prove possible. Access to cells is possible because biologists can stick needles into cells without killing them. Thus, molecular machines are capable of entering the cell. Also, all specific biochemical interactions show that molecular systems can recognize other molecules by touch, build or rebuild every molecule in a cell, and can disassemble damaged molecules. Finally, cells that replicate prove that molecular systems can assemble every system found in a cell. Therefore, since nature has demonstrated the basic operations needed to perform molecular-level cell repair, in the future, nanomachine based systems will be built that are able to enter cells, sense differences from healthy ones and make modifications to the structure. The healthcare possibilities of these cell repair machines are impressive. Comparable to the size of viruses or bacteria, their compact parts would allow them to be more complex. The early machines will be specialized. As they open and close cell membranes or travel through tissue and enter cells and viruses, machines will only be able to correct a single molecular disorder like DNA damage or enzyme deficiency. Later, cell repair machines will be programmed with more abilities with the help of advanced AI systems. Nanocomputers will be needed to guide these machines. These computers will direct machines to examine, take apart, and rebuild damaged molecular structures. Repair machines will be able to repair whole cells by working structure by structure. Then by working cell by cell and tissue by tissue, whole organs can be repaired. Finally, by working organ by organ, health is restored to the body. Cells damaged to the point of inactivity can be repaired because of the ability of molecular machines to build cells from scratch. Therefore, cell repair machines will free medicine from reliance on self repair alone.


Nanonephrology
Nanonephrology is a branch of nanomedicine and nanotechnology that deals with 1) the study of kidney protein structures at the atomic level; 2) nano-imaging approaches to study cellular processes in kidney cells; and 3) nano-medical treatments that utilize nanoparticles to treat various kidney diseases. The creation and use of materials and devices at the molecular and atomic levels for the diagnosis and therapy of renal diseases is also a part of nanonephrology, and will play a role in the management of patients with kidney disease in the future. Advances in nanonephrology will be based on discoveries in the above areas that can provide nano-scale information on the cellular molecular machinery involved in normal kidney processes and in pathological states. By understanding the physical and chemical properties of proteins and other macromolecules at the atomic level in various cells in the kidney, novel therapeutic approaches can be designed to combat major renal diseases. The nano-scale artificial kidney is a goal that many physicians dream of. Nano-scale engineering advances will permit programmable and controllable nano-scale robots to execute curative and reconstructive procedures in the human kidney at the cellular and molecular levels. Designing nanostructures compatible with kidney cells that can safely operate in vivo is also a future goal. The ability to direct events in a controlled fashion at the cellular nano-level has the potential to significantly improve the lives of patients with kidney diseases.


Realistic Skin Movement for Character Animation


Malu G. Punnackal
S6, Computer Science Department, Mohandas College of Engineering and Technology, Trivandrum
malupunnackal@gmail.com

Abstract
Skin movement modeling is a crucial element for realistic animation of both human and creature characters. In this paper, we are primarily concerned with a special skin movement effect, skin sliding. Although physical properties such as skin elasticity are usually regarded as important in physical simulation, simulating them often leads to an increased computational load, and a completely physical solution also limits the animator's ability to fine-tune the visual effects. In this paper we present a physically based skin sliding modeling method that borrows an idea from structural mechanics: bar networks. The advantage of using a bar network system to simulate the physical properties of skin surfaces is that only a set of sparse linear equations needs to be solved, making the method much faster than a simulation-based technique. As only one additional step is added to the animation pipeline to restore the skin's elastic property, the algorithm is compatible with all existing skin deformation methods, such as smooth skinning, muscle-based skin deformation, clusters and free-form deformations, and it therefore does not disturb the existing production pipeline.

Introduction
Realistic skin movement has attracted a great deal of research effort in recent years; it is crucial for convincing character animation. This seminar concentrates mainly on modeling the skin sliding phenomenon, which, compared with other skinning-related deformation effects, has received relatively little attention in computer animation. Human and animal skin exhibits complex physical characteristics, making it more dynamic than kinematic. The skin forms the outermost layer over multiple underlying anatomical structures, and its movement during body motion depends heavily on those structures, making it essential to examine the physical aspects of the skin-sliding phenomenon. This is where physics-based simulation approaches come into play. Incorporating the physical properties of anatomical structures can potentially improve realism: physics can be used at the level of muscles, bones, fat or the elastic skin layer. Some work on skin sliding has been carried out in this category using spring-mass systems to link the skin surface with the underlying anatomical structure. Spring networks usually do not consider the elasticity of the skin itself. One possible way to simulate that would be to replace the edges of the skin mesh with springs as well. Though this would generate a good effect of

skin slide, the large number of springs on the skin model would increase the computation significantly. Skin sliding can essentially be thought of as a three-dimensional problem with two-dimensional attributes: relative to the surface, the movement of the skin is parallel to the surface, with little or no perpendicular motion. In this paper we present a novel method to model skin sliding. Bar-nets are useful in form finding; by considering the skin as a sheet and reducing the problem to 2D, it becomes very easy to take into account the physical property of skin elasticity. In addition, our method can easily be integrated into the animation pipeline without any change to the traditional methods of skinning, giving the animator the freedom to control and design actively during the skinning phase. Attacking skin sliding as a 2D problem makes our method very efficient and fast. The parameters of the bar-nets can be adjusted to achieve the effect of different skin elasticities.

Old School Methods


The very first animated characters were 2D sprites, just like traditional animation or flip books. When we moved to 3D, our first animated characters were jointed: each limb or part of a limb was a separate rigid object. The problem with that is interpenetration at the joints and a lack of accuracy.


Physics Based Animation


Create a model based on the physics of a situation, and solve equations for what happens. Such models are physically meaningful and have the capacity to generate more realistic simulation. Physics-based animation has been applied to rigid objects, cloth, water, smoke, explosions, and so on.

Spring Mass Systems


Model objects as systems of springs and masses. The springs exert forces, and you control them by changing their rest lengths. One physically based approach, built on anatomical knowledge, targets real-time animation with a reasonable but simple physical model for muscles: the skin is modeled by a mass-spring system with nonlinear springs whose stress-strain relationship simulates the elastic dynamics of real skin, and muscles are modeled as forces deforming the spring mesh. Based on action units (AUs), various expressions and deformations can be generated by combining the contractions of a set of muscles.
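To make the idea concrete, here is a generic mass-spring sketch, not the paper's (or any cited system's) model: Hooke's-law springs over mesh edges and a simple explicit Euler step. The stiffness, damping and time-step values are illustrative only.

import numpy as np

def spring_forces(x, edges, rest_len, k):
    # Hooke's-law forces for a mass-spring mesh (illustrative sketch).
    # x: (n,3) vertex positions; edges: (m,2) index pairs; rest_len: (m,) rest lengths; k: stiffness.
    f = np.zeros_like(x)
    for (i, j), L0 in zip(edges, rest_len):
        d = x[j] - x[i]
        L = np.linalg.norm(d)
        if L > 1e-9:
            fij = k * (L - L0) * d / L      # pulls i toward j when the spring is stretched
            f[i] += fij
            f[j] -= fij
    return f

def euler_step(x, v, edges, rest_len, k=50.0, mass=1.0, dt=1e-3, damping=0.98):
    # One explicit Euler time step; a toy integrator, not the paper's method.
    a = spring_forces(x, edges, rest_len, k) / mass
    v = damping * (v + dt * a)
    return x + dt * v, v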

Skeletons and Skins

The modern approach is called skinning. Here the skin is a 3D model made out of triangles. The skeleton is invisible and only the skin is seen by the player. Each vertex of each triangle is attached to one or more bones, and weights define each bone's influence; the weights at a joint must always add up to 1. Skeletons have two kinds of poses:

Bind Pose: the skeleton's pose when the skin was first attached. Current Pose: any other pose of the skeleton, usually a frame of an animation.

The bind pose is like a home base for the character's skeleton. If you drew the mesh without its skeleton, it would appear in its bind pose.

Smooth Skinning

Skin deformation, or skinning, has time and again proven to be an indispensable part of character animation. In the current scenario of computer animation, where realism is paramount, efficient and visually believable techniques of skin deformation are essential. An intuitive attempt to deform a character involves a skeleton in the skin deformation. This approach treats the skin as a shell that moves by an explicit function of the skeleton: vertices of the skin are deformed by a weighted combination of the joint transformations of the character's skeleton. Collectively, such methods are known as smooth skinning. During smooth skinning, you bind a model's deformable objects to a skeleton. After smooth skinning, the deformable objects are called smooth skin objects (or skin objects, or skin), and the points (CVs, vertices, or lattice points) of the deformable objects are then referred to as smooth skin points, or skin points. If you want to change the results of smooth skinning to create unique skeletal deformation effects, you can edit or paint the smooth skinning weights at the point level (the CV, vertex, or lattice point level). Additionally, to add further deformation effects to smooth skin, you can use Maya's deformers and smooth skin influence objects. Joints closer to a smooth skin point have a greater influence than joints far from the skin point, and the joint closest to a smooth skin point has the greatest influence. Which joints have the next greatest influence can depend on whether Maya considers or ignores the skeleton's hierarchy during binding.

Physics-Based Approaches

The joint-based system is popular owing to its interactivity and use of minimal animation data. There are basically two main approaches to modeling skin deformations: physics-based approaches (also known as anatomy-based approaches) and example-based skinning. Physics-based methods are based on the anatomy, elastic mechanics, or biomechanics of skin deformation originating from the movements of muscles and tendons. They are physically meaningful and can generate more realistic simulation. The example-based approach is a suitable alternative where computational expenses are to be minimized: an artist models certain key poses of the character, and new poses are interpolated from these key poses. However, because each example skin shape is modeled separately, it is impossible to realize a smooth sliding effect using the example-based method. Because of the heavy computation involved in simulating the skin sliding effect, no commercial animation software provided this function until 2009. Maya 2009, in its muscle package, gives the animator an option to paint the sliding weight in the muscle simulation; however, all it does is push out the skin surface wherever there is a collision between the muscle and the skin surface mesh.
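As a concrete illustration of the smooth (linear blend) skinning described in the Smooth Skinning subsection above, the following minimal Python sketch blends joint transforms by per-vertex weights. The function name and the matrix conventions are illustrative assumptions, not taken from the paper or from Maya.

import numpy as np

def linear_blend_skinning(rest_verts, weights, bind_inv, joint_world):
    # Smooth (linear blend) skinning sketch.
    # rest_verts:  (n,3) vertices in the bind pose
    # weights:     (n,j) per-vertex joint weights, each row summing to 1
    # bind_inv:    (j,4,4) inverse bind-pose transforms of the joints
    # joint_world: (j,4,4) current world transforms of the joints
    n = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((n, 1))])           # homogeneous coordinates
    skinned = np.zeros((n, 3))
    for j in range(weights.shape[1]):
        M = joint_world[j] @ bind_inv[j]                      # bind space -> current pose
        skinned += weights[:, [j]] * (homo @ M.T)[:, :3]      # weighted sum per vertex
    return skinned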


Fast Skin Sliding Method


Our fast skin-sliding method is developed by adapting an idea from structural mechanics. Structural mechanics employs the bar-net, which is basically a network of nodes whose connections consist of rigid bars. The shape of the network depends on the structural and material properties and also on the forces acting on it. In order to convert the skinning problem from the 3D realm to the 2D realm, we develop an unwrapping technique which can flatten the skin using the network shape finding technique. With this enhanced unwrapping method, we map a skin patch to a 2D plane. The process is fast because only a set of sparse linear equations needs to be solved, and only once. To speed up the sliding simulation, we present a 2D image-based look-up table for the point-in-triangle test: for each vertex in the sliding area, a small set of triangles is selected to perform the test and compute the barycentric coordinates. In geometry, the barycentric coordinate system is a coordinate system in which the location of a point is specified as the center of mass, or barycenter, of masses placed at the vertices of a simplex (a triangle, tetrahedron, etc.); barycentric coordinates are a form of homogeneous coordinates. This reduces the computational complexity to O(n). In this section, we give a brief overview of the structural-mechanics-based skin sliding simulation. The entire skin surface of the character need not be considered for the skin sliding computation; only the area with muscle deformation is required. So, initially, the animator marks the area of the skin surface (Figure 1) to be used in the skin sliding operation. At the binding pose, that is, during the pre-animation phase, we implement a bar-net based unwrapping method to map the 3D mesh to a 2D square. This method is applied to each skin patch undergoing sliding. The 2D bar-net is created as a pre-processing step to encode the physical properties of the original skin surface. The animator can then use various skin deformation methods, including smooth skinning, wire deformers, clusters and free-form deformation, to animate the character. Using the same unwrap method, the deformed skin surface in each key frame is mapped to the same 2D square. By comparing these two 2D bar-nets (before and after skin deformation), the skin sliding can be generated on the deformed skin surfaces. The workflow of this skin-sliding method is shown in Figure 2. Our sliding methodology works on the key frames of the skin deformation rather than on the deformation method itself. This keeps the sliding technique independent of the skin deformation, making it compatible with all existing skinning methods.

Figure 1. Skin patches selected on the skin surface to simulate skin sliding.

Figure 2. Workflow of the skin sliding algorithm.

Bar Network

A bar-net, or force density mechanical network, is a structure commonly used in structural engineering. Its shape depends on the structural and material properties and the forces acting upon it. A bar network connects ns points in three-dimensional space with straight-line segments, called bars. These points on the net are known as nodes. A node can be either fixed or free: fixed nodes do not change position regardless of whether they are subject to external forces, while free nodes can move to balance the forces acting on the net. Each bar connects two nodes. The bars can be stretched and squashed, repositioning their end nodes, but they remain topologically static, i.e. they retain their bar shape. When external forces act on free nodes, the bar network may take different shapes. The final shape represents the rest shape of the network and results from the balance of all external and internal forces.

Branch-Node Matrix

The network described above is in fact a graph with links connecting pairs of nodes. Assume there are ns nodes (n free nodes and nf fixed nodes) and m bars in the network. A matrix Cs (m x ns), called the branch-node matrix, represents the graph of the network in tabular form: in each row, which represents a bar, the elements in columns i and j (the indices of the two linked nodes) are set to 1 and -1 respectively, and all other elements are set to 0. This branch-node matrix can be further subdivided into two sub-matrices, C and Cf, by grouping the free-node columns and fixed-node columns of the original matrix respectively. These matrices are used in computing the rest shape of a bar-net. In our case, the effect of the stiffness of the network can be approximated by the force-length ratios of all the bars. An added advantage is that the form finding problem can then be solved with a set of sparse linear equations.
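The form finding described above reduces to a sparse linear solve. The sketch below, written with NumPy/SciPy and illustrative variable names (not the authors' code), assembles the branch-node matrix, splits it into the free and fixed blocks C and Cf, and solves (C^T Q C) x = p - C^T Q Cf xf for the free-node coordinates, where Q is the diagonal matrix of force densities (force-length ratios).

import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import factorized

def force_density_equilibrium(bars, q, n_free, xyz_fixed, p=None):
    # Force density form finding: solve (C^T Q C) x = p - C^T Q C_f x_f.
    # bars:      (m,2) node index pairs; free nodes are assumed numbered 0..n_free-1,
    #            fixed nodes n_free..n_free+nf-1 (an illustrative convention).
    # q:         (m,) force-length ratios (force densities) of the bars.
    # xyz_fixed: (nf,d) coordinates of the fixed nodes (d = 2 or 3).
    # p:         (n_free,d) external loads; zero by default, as in the 2D unwrap.
    bars = np.asarray(bars)
    m, nf = len(bars), len(xyz_fixed)
    rows = np.repeat(np.arange(m), 2)
    cols = bars.flatten()
    vals = np.tile([1.0, -1.0], m)
    Cs = coo_matrix((vals, (rows, cols)), shape=(m, n_free + nf)).tocsc()
    C, Cf = Cs[:, :n_free], Cs[:, n_free:]          # free / fixed column blocks
    Q = diags(q)
    D = (C.T @ Q @ C).tocsc()                       # sparse, symmetric system matrix
    rhs = -(C.T @ Q @ Cf @ np.asarray(xyz_fixed))
    if p is not None:
        rhs = rhs + p
    solve = factorized(D)                           # sparse LU, reused per coordinate
    return np.column_stack([solve(rhs[:, k]) for k in range(rhs.shape[1])])

In the unwrapping step, xyz_fixed would hold the square-boundary positions of the fixed nodes and p would be zero; during animation the same solve is repeated with the force densities recomputed from the new edge lengths.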

Unwrapping from 3D to 2D

There are many different unwrapping methods for mapping a 3D mesh to a 2D plane for the purpose of texture UV definition. Texture mapping is a method for adding detail, surface texture, or color to a computer-generated graphic or 3D model; multitexturing is the use of more than one texture at a time on a polygon; UV mapping is the 3D modeling process of making a 2D image representation of a 3D model. Based on the target they pursue, unwrapping methods can be categorized into area-, angle- and edge-length-preserving methods. In our skin sliding simulation, the unwrapping is merely used to find the correspondence between the original mesh and the deformed mesh.

Finding the Mapping between the 3D Mesh and a 2D Square

The crucial part of our method is to find a mapping between a 3D mesh and the 2D square. Before going into the detail of our unwrapping method, we need to build up the correspondence between these two meshes. At the binding pose, the animator needs to mark out where on the skin surface he or she expects the sliding effect, by selecting patches on the mesh. Sliding will not cross the boundary of a patch, so each patch's sliding can be handled separately. The topology of a patch might be very complicated, but it is always possible to construct a mapping from a 3D patch to a simple square on the 2D plane; that is, there is always a cutting method that can unwrap the patch into one piece on a 2D plane (note that distortion is permitted). The first thing to do is to find out the complexity of each patch, i.e. the number of boundary loops. Normally there will be at least one boundary in the patch. We find the longest boundary Bi, which will be mapped to the 2D square's four edges; all other holes or boundaries will be mapped onto the inside area of the 2D square. The reason for choosing the longest boundary is to minimize the distortion during the unwrapping. On the boundary Bi we automatically choose four points evenly distributed along the edge loop; these four points will be mapped to the four corners of the 2D square, and all vertices on the boundary loop will be mapped to the four edges of the square.

If the selected skin patch has more than one boundary loop, all the vertices on the other boundaries (except Bi) are set free at this first unwrapping step. The vertices on Bi are fixed during the bar-net deformation, while all the interior vertices are left to settle freely inside the square. From the topological linkage information of the bars and the nodes within the network, we can define C and Cf, which preserves the topology information of the 3D skin patch and keeps this mapping consistent with the mapping after deformation. The external force vectors are px = py = pz = 0, so all the free nodes stay on the 2D plane and are kept inside the 2D square. The final equilibrium state of this bar-net is the 2D mapping of the 3D skin patch. Fig. 3a shows an example of the unwrapping of the front part of a human face, which has only one boundary edge loop; Fig. 3b shows an example of the unwrapping of a mesh with five boundaries, where the neck boundary was mapped to the edges of the 2D square. The reasons we choose a bar network for the unwrapping are: (1) bar-network-based unwrapping will not create folds, which must be guaranteed in order to find the correspondence between the two unwrapped meshes (for each free node, the forces acting on it are balanced); (2) the spring-based pelting solution makes use of springs, requiring a numerical solution of the force equations, whereas our bar-net-based solution needs to solve only a set of sparse linear equations and is hence faster by a great margin.

Skin Deformation

Skin deformation is crucial for skin sliding to work. As stated earlier, it is the outcome of the deformation that is significant, not the technique used to achieve it. Regardless of the type of skin deformation used, the inputs to our skin sliding method are the original mesh and the deformed mesh.

Mapping the Deformed Skin to 2D

After skin deformation using the traditional methods, we are left with a deformed skin patch that corresponds to the original patch prior to deformation. Topologically the skin patch remains the same, with only the positions of the vertices modified to conform to the new deformed shape. Using the same mapping method, the deformed 3D skin patch can be mapped to the 2D plane. The bar-node connection information is still the same as in the first bind-pose mapping; however, we need to change the force density between the nodes based on the new lengths of the edges. If the sliding patch has more than one boundary, all inside-boundary vertices are fixed and keep their positions from the first bind-pose mapping on the 2D plane. By solving a set of sparse linear equations, the new bar-net can be settled in no time. We use indexed storage of sparse matrices to avoid holding the full matrix in memory.

Generating the 3D Skin Slide Effect from the 2D Bar Network

Suppose the original 3D mesh is M1, the deformed 3D mesh is M2, the bar-net from the original mesh M1 is B1, and the bar-net from the deformed mesh M2 is B2. In order to generate the 3D mesh after the skin slide process, two steps need to be performed: computing the barycentric coordinates of each mapped vertex in the 2D plane, and interpolating the 3D vertex in the deformed mesh M2 using those barycentric coordinates to get the new position of the vertex. Note that if the bar-nets are not triangle meshes, the polygons must first be triangulated. To find the corresponding triangle Tj in B2 for each vertex ui in B1, we have to perform a point-in-triangle test.

Since it is time-consuming to process all the triangles for the test, we present an image-based look-up table to speed it up. To facilitate this process, the 2D bar network B2 is rendered as a 1024x1024 color image I, where the color of each triangle is set to the triangle's id in B2. For efficiency, if the number of triangles in the patch is very small, we can even use an indexed image or a grey image. The rendering of triangle meshes is already highly optimized in current graphics hardware, allowing the 2D image look-up table to be created quickly. To find the triangles where ui lies, we use the 2D coordinate (x, y) of ui to find the closest pixel and its neighbours, which together cover ui. From these selected pixels, we get the indices of the triangles covering the point ui. In practice, the number of triangles involved depends on the connectivity of the original bind-pose mesh M1.
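The look-up table can be sketched in software as follows; this toy rasterizer stands in for the hardware rendering the method relies on, and the [0, 1]^2 coordinate convention and function names are assumptions made for the example.

import numpy as np

def _cross2(u, v):
    # z-component of the 2D cross product
    return u[0] * v[1] - u[1] * v[0]

def rasterize_triangle_ids(verts2d, tris, size=1024):
    # Image-based look-up table: pixel (y, x) stores the id of the triangle of the
    # 2D bar-net covering it, or -1. verts2d are assumed to lie in [0, 1]^2.
    img = np.full((size, size), -1, dtype=np.int32)
    for tid, (ia, ib, ic) in enumerate(tris):
        pa, pb, pc = verts2d[ia] * size, verts2d[ib] * size, verts2d[ic] * size
        lo = np.floor(np.minimum(np.minimum(pa, pb), pc)).astype(int)
        hi = np.ceil(np.maximum(np.maximum(pa, pb), pc)).astype(int)
        for y in range(max(lo[1], 0), min(hi[1] + 1, size)):
            for x in range(max(lo[0], 0), min(hi[0] + 1, size)):
                p = np.array([x + 0.5, y + 0.5])
                d1 = _cross2(pb - pa, p - pa)
                d2 = _cross2(pc - pb, p - pb)
                d3 = _cross2(pa - pc, p - pc)
                # inside test via signed areas (either winding accepted)
                if (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0):
                    img[y, x] = tid
    return img

def candidate_triangles(img, u, size=1024):
    # Ids of triangles touching the 3x3 pixel neighbourhood of a 2D point u in [0, 1]^2.
    x, y = min(int(u[0] * size), size - 1), min(int(u[1] * size), size - 1)
    patch = img[max(y - 1, 0):min(y + 2, size), max(x - 1, 0):min(x + 2, size)]
    ids = np.unique(patch)
    return ids[ids >= 0]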

VA, VB and VC are the coordinates in the mesh M2 of the vertices A, B and C. Using the 3D coordinates from M2 and the mesh configuration from M1, we combine the deformed skin patch and the original patch. We have thus developed an intuitive mechanism in which fast skin sliding is achieved by understanding the phenomenon of sliding in relation to computer-animated characters, rather than by purely simulating it. Skin sliding through structural mechanics can therefore be summarized as follows:
1. Initially, the animator marks the area of the skin surface to be used in the skin sliding operation.
2. At the binding pose, i.e. during the pre-animation phase, we implement a bar-net based unwrapping method to map the 3D mesh to a 2D square.
3. This method is applied to each skin patch undergoing sliding.

Figure: Finding the face enclosing a vertex; all polygons in the 3x3 pixel grid around the pixel closest to the vertex are tested.

4. The 2D bar-net is created as a pre-processing step to encode the physical properties of the original skin surface.
5. The animator can then use various skin deformation methods, including smooth skinning, wire deformers, clusters and free-form deformation, to animate the character.
6. Using the same unwrap method, the deformed skin surface in each key frame is mapped to the same 2D square.

The maximum number of candidate triangles, though, is the maximum number of edges linked to a single vertex in the mesh M1.

If the sum of the areas of the sub-triangles PAB, PBC and PCA is greater than the area of triangle ABC, the point P is definitely outside the triangle; if the sum is equal to the area of triangle ABC, then P is inside and the triangle is the correct one. Following this, the barycentric coordinates of P in the triangle are computed from these sub-triangle areas.
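The area test and the barycentric coordinates it yields can be written, under the usual definitions, as the following sketch (hypothetical helper names; the paper's own equations are not reproduced here).

import numpy as np

def tri_area(a, b, c):
    # Unsigned area of the 2D triangle abc.
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def barycentric_if_inside(p, a, b, c, eps=1e-9):
    # Return (alpha, beta, gamma) if p lies inside triangle abc, else None.
    # p is inside exactly when area(pbc) + area(pca) + area(pab) equals area(abc).
    area = tri_area(a, b, c)
    a1, a2, a3 = tri_area(p, b, c), tri_area(p, c, a), tri_area(p, a, b)
    if a1 + a2 + a3 > area + eps:
        return None                       # sum of sub-areas exceeds the triangle: outside
    return a1 / area, a2 / area, a3 / area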

7. By comparing these two 2D Bar-nets, the skin sliding can be generated on deformed skin surfaces.

Experimental Results
Here we demonstrate three applications of the proposed skin sliding method: finger skin deformation, elbow skin deformation and facial animation.

For each vertex P of the original mesh, its new (slid) position P' is calculated by barycentric interpolation of the corresponding triangle's vertex positions VA, VB and VC in the deformed mesh M2.
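In code, this interpolation is simply a weighted sum of the deformed-mesh vertex positions with the barycentric weights; the sketch below reuses the hypothetical barycentric helper from the earlier listing and is illustrative only.

def slide_vertex(p2d, tri_2d, tri_3d_deformed):
    # New 3D position of a vertex: barycentric weights computed in the 2D bar-net,
    # applied to the triangle's 3D vertex positions VA, VB, VC taken from M2.
    w = barycentric_if_inside(p2d, *tri_2d)   # from the sketch above
    if w is None:
        raise ValueError("p2d is not inside this triangle; test another candidate")
    alpha, beta, gamma = w
    VA, VB, VC = tri_3d_deformed
    return alpha * VA + beta * VB + gamma * VC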


Observations
We made the following observations from the above experiments. The algorithm is fast, as it involves only simple computations in the skin sliding simulation. The animator only needs to mark out some areas to define the skin patches involved in the sliding simulation before animation; a pelting computation is then carried out on each patch, requiring the solution of a set of sparse linear equations. During the animation, the bar-network-based unwrap computation is carried out again for the deformed patches, together with a rendering of the deformed two-dimensional bar-net; this is much faster than the pre-processing stage and requires only a few iterations. For each vertex, normally one point-inside-triangle test is needed, followed by a simple linear interpolation to compute the new slid mesh. This reduces the complexity of the whole algorithm.

We have implemented this technique in a prototype program in the form of a plug-in for the Autodesk Maya software.

Applications
Some applications of this skin sliding algorithm are the following:

Many popular computer-animated films could be made much more efficiently using this technique. It provides low-cost, real-time facial expressions. Other applications include video games, virtual social environments, simulations and entertainment.

Future Enhancements
There are several avenues for future work stemming from this technique.

It is possible to support overlapping skin patches with this system, although handling the blending between multiple re-sampling results may require significant user intervention. Wrinkling is a closely related secondary animation phenomenon, and incorporating a wrinkling simulation would be particularly useful. We also hope to extend the user interface to make the specification of patch boundaries and edge forces more intuitive. In addition, the performance of this method could conceivably be improved through further optimizations or implementation on graphics hardware, making it applicable to real-time systems such as games.

Advantages
The advantages of this skin sliding algorithm are the following. Since the overall shape remains the same even after the skin sliding, the animator's creative work is well protected. With the bar-net form finding method, only sparse linear equations need to be solved, leading to fast computations and near real-time performance for artist interactivity. Integration into the standard industry animation pipeline is straightforward, as the sliding process is independent of the skin deformation technique. The method preserves the modeled skin shape during animation, and the properties of the bar-net naturally prevent unwanted creases, wrinkles and folds.

Conclusion
An efficient skin sliding technique has been developed for realistic animation of virtual human and animal characters. By considering the bind pose skin surface as the elastic target, we are able to simulate skin elasticity with good results. With our method, the animator can still use their familiar deformation method to achieve the result they expected. Our method basically adds an additional step after the deformation to restore the skin elasticity in order to achieve the skin sliding effect. Our skin


sliding simulation is compatible with any kind of skin deformation method, irrespective of the type of skinning used. Only an extra step is added to the deformed shape to restore the elasticity of the skin surface, and since the overall shape remains the same even after the skin sliding, the animator's creative work is well protected.

References
[1] H.J. Schek, "The force density method for form finding and computation of general networks," Computer Methods in Applied Mechanics and Engineering, vol. 3, 1974, pp. 115-134.
[2] J.J. Zhang, X. Yang, and Y. Zhao, "Bar-net driven skinning for character animation," Computer Animation and Virtual Worlds, vol. 18, 2007, pp. 437-446.
[3] N. Magnenat-Thalmann, R. Laperrière, and D. Thalmann, "Joint-dependent local deformations for hand animation and object grasping," Proceedings on Graphics Interface '88, 1988, pp. 26-33.


Skinput: The Human Arm Touch Screen


Manisha Nair
S8 CS, Mohandas College of Engineering and Technology

Abstract
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, one prior technique allows a small mobile device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context users are unlikely to want to carry appropriated surfaces with them (at that point, one might as well just have a larger device). There is, however, one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. Skinput, a technology that appropriates the human body for acoustic transmission, allows the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally portable, on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a two-part, twenty-participant user study.

Introduction
Touch screens may be popular both in science fiction and in real life as the symbol of next-generation technology, but an innovation called Skinput suggests the true interface of the future might be us. The technology was developed by Chris Harrison, a third-year Ph.D. student in Carnegie Mellon University's Human-Computer Interaction Institute (HCII), along with Desney Tan and Dan Morris of Microsoft Research. A combination of simple bio-acoustic sensors and some sophisticated machine learning makes it possible for people to use their fingers or forearms, and potentially any part of their bodies, as touch pads to control smart phones or other mobile devices. Skinput turns your own body into a touch-screen interface. It uses a different and novel technique: it listens to the vibrations in your body. It could help people take better advantage of the tremendous computing power and various capabilities now available in compact devices that can be easily worn or carried. The diminutive size that makes smart phones, MP3 players and other devices so portable also severely limits the size, utility and functionality of the keypads, touch screens and jog wheels typically used to control them. Thus, we can use our own skin, the body's largest organ, as an input canvas, because it always travels with us and makes the ultimate interactive touch surface. Skinput is a revolutionary input technology that uses the skin as the tracking surface and input device, and it has the potential to change the way humans interact with electronic gadgets. It has been used to control several mobile devices, including a mobile phone and a portable music player.


The Skinput system listens to the sounds made by tapping on parts of the body and pairs those sounds with actions that drive tasks on a computer or cell phone. When coupled with a small projector, it can simulate a menu interface like the ones used in other kinds of electronics. Tapping on different areas of the arm and hand allows users to scroll through menus and select options. Skinput could also be used without a visual interface. For instance, with an MP3 player one does not need a visual menu to stop, pause, play, advance to the next track or change the volume: different areas on the arm and fingers correspond to common commands for these tasks, and a user could tap them without even needing to look. Skinput uses a series of sensors to track where a user taps on the arm. The system is simple and remarkably accurate.

Primary Goals
Always-Available Input: The primary goal of Skinput is to provide an always-available mobile input system, that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed in this space. Techniques based on computer vision are popular; however, they are computationally expensive and error-prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input is a logical choice for always-available input, but it is limited in precision in unpredictable acoustic environments and suffers from privacy and scalability issues in shared environments. Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one's clothing. For example, glove-based input systems allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth present a smart fabric system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive. The Sixth Sense project proposes a mobile, always-available input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations; for example, determining whether a finger has tapped a button, or is merely hovering above it, is extraordinarily difficult.

Bio-Sensing: Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work on the use of biological signals for computer input. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user's emotional state. These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state; this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients, but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction. Researchers have also harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which limits its acceptability for most users. The input technology most closely related to our own is that of Amento et al., who placed contact microphones on a user's wrist to assess finger movement. However, this work was never formally evaluated and is constrained to finger motions of one hand. The Hambone system employs a similar setup. Moreover, both techniques require the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility. Finally, bone conduction microphones and headphones, now common consumer technologies, represent an additional bio-sensing technology relevant to the present work. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission through the air and outer ear and leaving an unobstructed path for environmental sounds. Acoustic Input: Our approach is also inspired by systems that leverage acoustic transmission through (non-body) input surfaces. Paradiso et al. measured the arrival time of a sound at multiple sensors to locate hand taps on a glass window.


Ishii et al. used a similar approach to localize a ball hitting a table, for computer augmentation of a real-world game. Both of these systems use acoustic time-of-flight for localization, which we explored but found to be insufficiently robust on the human body, leading to the fingerprinting approach described in this paper.

How Skinput Achieves The Goals


Skin: To expand the range of sensing modalities for always available input systems, we introduce Skinput, a novel input technique that allows the skin to be used as a finger input surface. In our prototype system, we choose to focus on the arm (although the technique could be applied elsewhere). This is an attractive area to appropriate as it provides considerable surface area for interaction, including a contiguous and flat area for projection. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyesfree input characteristic and provide such a large interaction area. Also the forearm and hands contain a complex assemblage of bones that increases acoustic distinctiveness of different locations. To capture this acoustic information, we developed a wearable armband that is non-invasive and easily removable. In this section, we discuss the mechanical phenomenon that enables Skinput, with a specific focus on the mechanical properties of the arm. Bio-Acoustics: When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact. When shot with a high-speed camera, these appear as ripples, which propagate outward from the point of contact. The amplitude of these ripples is correlated to both the tapping force and to the volume and compliance of soft tissues under the impact area. In general, tapping on soft regions of the arm creates

higher-amplitude transverse waves than tapping on bony areas (e.g., wrist, palm, fingers), which have negligible compliance. In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton. These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bones, which are much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates the soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin. We highlight these two separate forms of conduction (transverse waves moving directly along the arm surface, and longitudinal waves moving into and out of the bone through soft tissues) because these mechanisms carry energy at different frequencies and over different distances. Roughly speaking, higher frequencies propagate more readily through bone than through soft tissue, and bone conduction carries energy over larger distances than soft tissue conduction. While we do not explicitly model the specific mechanisms of conduction, or depend on these mechanisms for our analysis, we believe the success of our technique depends on the complex acoustic patterns that result from mixtures of these modalities. Similarly, we also believe that joints play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters: in some cases they may simply dampen acoustics, while in other cases they selectively attenuate specific frequencies, creating location-specific acoustic signatures.

Sensing: To capture the rich variety of acoustic information, we evaluated many sensing technologies, including bone conduction microphones, conventional microphones coupled with stethoscopes, piezo contact microphones, and accelerometers. However, these transducers were engineered for very different applications than measuring acoustics transmitted through the human body. As such, we found them lacking in several significant ways. Foremost, most mechanical sensors are engineered to provide relatively flat response curves over the range of frequencies relevant to our signal. This is a desirable property for most applications, where a faithful representation of the input signal, uncolored by the properties of the transducer, is desired. However, because only a specific set of frequencies is conducted through the arm in response to tap input, a flat response curve leads to the capture of irrelevant


frequencies and thus to a poor signal-to-noise ratio. While bone conduction microphones might seem a suitable choice for Skinput, these devices are typically engineered for capturing human voice and filter out energy below the range of human speech (whose lowest frequency is around 85 Hz). Thus most sensors in this category were not especially sensitive to lower-frequency signals, which our empirical pilot studies showed to be vital in characterizing finger taps. To overcome these challenges, we moved away from a single sensing element with a flat response curve to an array of highly tuned vibration sensors. Specifically, we employ small, cantilevered piezo films (MiniSense 100, Measurement Specialties, Inc.). By adding small weights to the end of the cantilever, we are able to alter the resonant frequency, allowing each sensing element to be responsive to a unique, narrow, low-frequency band of the acoustic spectrum. Adding more mass lowers the range of excitation to which a sensor responds; we weighted each element such that it aligned with particular frequencies that pilot studies showed to be useful in characterizing bio-acoustic input. Additionally, the cantilevered sensors are naturally insensitive to forces parallel to the skin (e.g., shearing motions caused by stretching). Thus, the skin stretch induced by many routine movements (e.g., reaching for a doorknob) tends to be attenuated. However, the sensors are highly responsive to motion perpendicular to the skin plane, perfect for capturing transverse surface waves and longitudinal waves emanating from interior structures. Finally, our sensor design is relatively inexpensive and can be manufactured in a very small form factor (e.g., MEMS), rendering it suitable for inclusion in future mobile devices (e.g., an arm-mounted audio player).
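The effect of weighting the cantilever tips can be illustrated with the textbook spring-mass resonance relation f = (1/(2*pi)) * sqrt(k/m): more tip mass means a lower resonant frequency. The stiffness and mass values in the sketch below are arbitrary illustrative numbers, not measurements from the Skinput prototype.

import math

def resonant_frequency_hz(stiffness_n_per_m, mass_kg):
    # Resonance of an idealized spring-mass oscillator: f = (1/(2*pi)) * sqrt(k/m).
    # A simple model of why adding tip mass to a cantilevered piezo film lowers
    # the band of vibrations it responds to (illustrative values only).
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

k = 40.0                                   # hypothetical cantilever stiffness, N/m
for tip_mass_mg in (10, 20, 40, 80):       # heavier tips -> lower resonant frequency
    m = tip_mass_mg * 1e-6                 # mg -> kg
    print(tip_mass_mg, "mg ->", round(resonant_frequency_hz(k, m), 1), "Hz")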

Tiny Pico projectors: A handheld projector (also known as a pocket projector, mobile projector or Pico projector) is an emerging technology that applies the use of a projector in a handheld device. It is a response to the emergence of compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but little space to accommodate an attached display screen. Handheld projectors involve miniaturized hardware and software that can project digital images onto any nearby viewing surface, such as a wall. The system comprises four main parts: the electronics, the laser light sources, the combiner optic, and the scanning mirrors. First, the electronics system turns the image into an electronic signal. Next, the electronic signals drive laser light sources with different colors and intensities down different paths. In the combiner optic, the different light paths are combined into one path, producing a palette of colors. Finally, the mirrors scan out the image pixel by pixel and project it. This entire system is compacted into one very tiny chip. An important design characteristic of a handheld projector is the ability to project a clear image regardless of the physical characteristics of the viewing surface.

An Acoustic Detector: An acoustic detector can detect the acoustic signals generated by such actions as flicking and convert them to electronic signals, enabling users to perform simple tasks such as browsing through a mobile phone menu, making calls, controlling portable music players, etc. It recognizes skin taps on corresponding locations of the body based on bone and soft tissue variation. It detects the ultralow-frequency sounds using 10 sensors. The sensors are cantilevered piezo films which are responsive to a particular frequency range and are arranged as two arrays of five sensing elements each.

Technologies Used
Skinput, the system, is a marriage of two technologies: the ability to detect the ultralow-frequency sound produced by tapping the skin with a finger, and the microchip-sized Pico projectors now found in some cell phones. The system beams a keyboard or menu onto the user's forearm from a projector housed in an armband. An acoustic detector, also in the armband, then calculates which part of the display you want to activate. It turns your largest organ, the skin, into a workable input device. Tiny Pico projectors display choices onto your forearm, and an acoustic detector in the armband detects the ultralow-frequency sounds produced by tapping the skin with your finger. These sensors capture sound generated by such actions as flicking or tapping fingers together, or tapping the forearm.

Armband Prototype
Our final prototype, shown in the figures, features two arrays of five sensing elements, incorporated into an armband form factor. The decision to have two sensor packages was motivated by our focus on the arm for input. In particular, when placed on the upper arm (above the elbow), we hoped to collect acoustic information from the fleshy bicep area in addition to the firmer area on the underside of the arm, with better acoustic coupling to the Humerus, the main bone that runs from shoulder to elbow. When the sensor was placed below the elbow, on the forearm,


one package was located near the Radius, the bone that runs from the lateral side of the elbow to the thumb side of the wrist, and the other near the Ulna, which runs parallel to this on the medial side of the arm, closest to the body. Each location thus provided slightly different acoustic coverage and information, helpful in disambiguating input location. Based on pilot data collection, we selected a different set of resonant frequencies for each sensor package. We tuned the upper sensor package to be more sensitive to lower-frequency signals, as these were more prevalent in fleshier areas. Conversely, we tuned the lower sensor array to be sensitive to higher frequencies, in order to better capture signals transmitted through (denser) bones.


Analysis
To evaluate the performance of our system, a trial involving 20 subjects was conducted. We selected three input groupings to test from the multitude of possible location combinations. Subjects were of different ages and sexes. From these three groupings, five different experimental conditions were derived: fingers (five locations), whole arm (five locations) and forearm (ten locations).

Fingers (five locations): The participants were asked to tap on the tips of each of their five fingers. The fingers provide clearly discrete interaction points and exceptional finger-to-finger dexterity, and they are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control (e.g., volume) and menu selection. At the same time, the fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure. This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five finger and wrist joints to reach the forearm, which further dampens the signal. Despite these difficulties, the finger flicks could be identified with 97% accuracy.

Whole arm (five locations): The participants were asked to tap on five input locations on the forearm and hand: arm, wrist, palm, thumb and middle finger. These locations were selected because they are distinct and named parts of the body, so they could be accurately tapped without training, and they are acoustically distinct.

Forearm (ten locations): The participants were asked to tap on ten different locations on the forearm. This relied on an input surface with a high degree of physical uniformity, but one with a large, flat surface area and immediate accessibility, which also makes an ideal projection surface for dynamic interfaces. Accuracy depended in part on the proximity of the sensors to the input; forearm taps could be identified with 96% accuracy when the sensors were attached below the elbow and 88% accuracy when the sensors were above the elbow. The system was thus able to classify the inputs with 88% accuracy overall. It produces a unique acoustic signature for each tapped location that machine learning programs can learn to identify.

How Skinput Works


Skinput is a technology which transforms the human body into a display and input surface that can interact with electronic gadgets. To see how Skinput performs this functionality, an armband prototype is required. The user wears an armband which contains a very small Pico-projector that projects a menu or keypad onto the person's hand or forearm. It also contains an acoustic detector; tapping on different parts of the body produces unique sounds owing to each area's bone density, soft tissues, joints and other factors. The sounds are not transmitted through the air, but by transverse waves through the skin and longitudinal (compressive) waves through the bones. The armband is connected to a computer via a large receiver to process the sounds. When the different sounds are analyzed by the computer, different wave patterns are formed. By analyzing 186 different features of the acoustic signals, including frequencies and amplitudes, a unique acoustic signature is created for each tap location. Controls are then assigned to each location. The projector projects the menu or keypad of the gadget which is to be controlled onto the person's hand. The user then taps on different parts of the body, and various acoustic signals are generated from the different parts. A bio-acoustic sensing array picks up the different signals and delivers them to the computer, where they are analyzed. The custom-built software listens to the different acoustic variations and determines which button the user has just tapped. Bluetooth wireless technology then transmits the information to the device and controls it. So if you have tapped out a phone number, the wireless technology would send the data to your phone to make the call. The system has achieved accuracies ranging from 81.5% to 96.8% and supports enough buttons to control many devices. It takes a minute or two to calibrate the system for each new user. That is done by choosing one of the six tasks that the prototype can now handle (up, down, left, right, enter, cancel) and pairing the choice with a tap on the arm or the hand. This explains the basic working of the present prototype of the Skinput system.
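The classification step described above can be pictured with a small sketch: extract a feature vector from each tap window recorded by the sensor array and train a classifier on the calibration taps. The real system analyzes 186 features; the handful of features below, the assumed 5.5 kHz sampling rate and the SVM choice are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
from sklearn.svm import SVC

def tap_features(window: np.ndarray, sample_rate: float = 5500.0) -> np.ndarray:
    """Small illustrative feature vector for one tap window of shape (channels, samples)."""
    spectrum = np.abs(np.fft.rfft(window, axis=1))
    dominant_hz = spectrum.argmax(axis=1) * sample_rate / window.shape[1]
    return np.concatenate([
        np.abs(window).mean(axis=1),   # average amplitude per sensing channel
        window.std(axis=1),            # amplitude spread per channel
        dominant_hz,                   # strongest frequency per channel
    ])

def train_location_classifier(windows, labels):
    """windows: list of (channels, samples) arrays from calibration taps; labels: location names."""
    X = np.vstack([tap_features(w) for w in windows])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)

# Usage during live input:
#   clf = train_location_classifier(calibration_windows, calibration_labels)
#   location = clf.predict(tap_features(new_window).reshape(1, -1))[0]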



Additional Analysis
Walking and Jogging: As discussed previously, acoustically driven input techniques are often sensitive to environmental noise. In regard to bio-acoustic sensing, with sensors coupled to the body, noise created during other motions is particularly troublesome, and walking and jogging represent perhaps the most common types of whole-body motion. This experiment explored the accuracy of our system in these scenarios. Each participant trained and tested the system while walking and jogging on a treadmill. Three input locations were used to evaluate accuracy: arm, wrist, and palm. Participants provided only ten examples for each of the three tested input locations. Furthermore, the training examples were collected while participants were jogging. Thus, the resulting training data was not only highly variable but also sparse, neither of which is conducive to accurate machine learning classification.

Single-Handed Gestures: In the experiments discussed thus far, we considered only bimanual gestures, where the sensor-free arm, and in particular the fingers, are used to provide input. However, there is a range of gestures that can be performed with just the fingers of one hand. We conducted three independent tests to explore one-handed gestures. The first had participants tap their index, middle, ring and pinky fingers against their thumb; the second used flicks instead of taps. This motivated us to run a third, independent experiment that combined taps and flicks into a single gesture set. Participants re-trained the system and completed an independent testing round. Even with eight input classes in very close spatial proximity, the system was able to achieve a remarkable 87.3% accuracy. This result is comparable to the aforementioned ten-location forearm experiment (which achieved 81.5% accuracy), lending credence to the possibility of having ten or more functions on the hand alone. Furthermore, proprioception of our fingers on a single hand is quite accurate, suggesting a mechanism for high-accuracy, eyes-free input.

Surface and Object Recognition: During piloting, it became apparent that our system had some ability to identify the type of material on which the user was operating. Using a similar setup to the main experiment, we asked participants to tap their index finger against 1) a finger on their other hand, 2) a paper pad approximately 80 pages thick, and 3) an LCD screen. Results show that we can identify the contacted object with about 87.1% (SD = 8.3%, chance = 33%) accuracy. This capability was never considered when designing the system, so superior acoustic features may exist. Even as accuracy stands now, there are several interesting applications that could take advantage of this functionality, including workstations or devices composed of different interactive surfaces, or recognition of different objects grasped in the environment.

Identification of Finger Tap Type: Users can tap surfaces with their fingers in several distinct ways. For example, one can use the tip of the finger (potentially even the fingernail) or the pad (flat, bottom) of the finger. The former tends to be quite boney, while the latter is more fleshy. It is also possible to use the knuckles (both major and minor metacarpophalangeal joints). We evaluated our approach's ability to distinguish these input types. A classifier trained on this data yielded an average accuracy of 89.5% during the testing period. This ability has several potential uses. Perhaps the most notable is the ability for interactive touch surfaces to distinguish different types of finger contacts (which are indistinguishable in, e.g., capacitive and vision-based systems). One example interaction could be that double-knocking on an item opens it, while a pad tap activates an options menu.

Segmenting Finger Input: A pragmatic concern regarding the appropriation of fingertips for input was that other routine tasks would generate false positives. For example, typing on a keyboard strikes the fingertips in a very similar manner to the finger-tip input we proposed previously. Thus, we set out to explore whether finger-to-finger input sounded sufficiently distinct that other actions could be disregarded. As an initial assessment, we asked participants to tap their index finger 20 times with a finger on their other hand, and 20 times on the surface of a table in front of them. This data was used to train our classifier. This training phase was followed by a testing phase, which yielded a participant-wide average accuracy of 94.3%.
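Before any of these classifications can run, individual tap events have to be segmented out of the continuous sensor stream. A common way to do this is a short-term energy threshold, as in the sketch below; the window length and threshold factor are illustrative assumptions rather than parameters of the actual prototype.

import numpy as np

def find_tap_onsets(signal: np.ndarray, sample_rate: float,
                    window_s: float = 0.02, factor: float = 4.0) -> np.ndarray:
    """Return sample indices where short-term energy rises above `factor` times the median energy."""
    win = max(1, int(window_s * sample_rate))
    energy = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    above = energy > factor * (np.median(energy) + 1e-12)
    # keep only rising edges so that each tap is reported once
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1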

Example Interfaces And Interactions


We conceived and built several prototype interfaces that demonstrate our ability to appropriate the human


body, in this case the arm, and use it as an interactive surface. While the bio-acoustic input modality is not strictly tethered to a particular output modality, we believe the sensor form factors we explored could be readily coupled with visual output provided by an integrated Pico-projector. There are two nice properties of wearing such a projection device on the arm that permit us to sidestep many calibration issues. First, the arm is a relatively rigid structure; the projector, when attached appropriately, will naturally track with the arm. Second, since we have fine-grained control of the arm, making minute adjustments to align the projected image with the arm is trivial (e.g., projecting horizontal stripes for alignment with the wrist and elbow). To illustrate the utility of coupling projection and finger input on the body (as researchers have proposed to do with projection and computer vision-based techniques), we developed three proof-of-concept projected interfaces built on top of our system's live input classification. In the first interface, we project a series of buttons onto the forearm, on which a user can finger-tap to navigate a hierarchical menu. In the second interface, we project a scrolling menu, which a user can navigate by tapping at the top or bottom to scroll up and down one item, respectively; tapping on the selected item activates it. In the third interface, we project a numeric keypad on a user's palm and allow them to tap on the palm to, e.g., dial a phone number. To emphasize the output flexibility of the approach, we also coupled our bio-acoustic input to audio output. In this case, the user taps on preset locations on their forearm and hand to navigate and interact with an audio interface.
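Once the classifier reports a tap location, wiring it to these interfaces is essentially a dispatch table from location labels to actions. The sketch below shows the idea for the scrolling-menu prototype; the location names and menu entries are made up for illustration, not the labels used in the actual system.

from typing import Callable, Dict

def make_dispatcher(actions: Dict[str, Callable[[], None]]) -> Callable[[str], None]:
    def dispatch(location: str) -> None:
        handler = actions.get(location)
        if handler is not None:
            handler()   # scroll, select, dial a digit, ...
    return dispatch

menu = ["Music", "Phone", "Maps"]
state = {"index": 0}

dispatch = make_dispatcher({
    "forearm_top":    lambda: state.update(index=(state["index"] - 1) % len(menu)),  # scroll up
    "forearm_bottom": lambda: state.update(index=(state["index"] + 1) % len(menu)),  # scroll down
    "palm_center":    lambda: print("activate:", menu[state["index"]]),              # select item
})

dispatch("forearm_bottom")
dispatch("palm_center")   # activates "Phone"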


History
Scratch Input: Scratch Input allows mobile devices to appropriate horizontal surfaces for gestural finger input. It works by placing a specialized microphone on the backside of a device; gravity provides just enough force to acoustically couple the device to whatever hard surface it is resting on. Lots of things happen on tables that we want to ignore; the system filters these out by listening exclusively to the frequency range human fingernails produce when running over a textured surface such as wood, paint, linoleum and many other materials (not glass or marble, however, which are too smooth). Taps and flicks are easily detected as well. The sensor is very small (just a single microphone) and can be easily integrated into even the smallest devices. This means Scratch Input capability goes wherever the device goes; no infrastructure is necessary. It also requires no special or permanent augmentation of surfaces: you can set your phone down on a table at your local coffee house, and you've instantly got an ad hoc gestural finger-input surface. When you're done, simply pick up your phone and off you go.

Minput: Minput was born from a desire to experiment with high-precision spatial tracking. Specifically, it incorporates optical tracking sensors in the back of a device: the same cheap, small, high-precision sensors used in optical mice. Two sensors capture not only up, down, left, and right motions, but also twisting gestures. This configuration lets a device track its own relative movement on surfaces, especially large ad hoc ones like tables, walls, and furniture, but also your palm or clothes if nothing else is around. Minput can be used in several ways. One is gestural: a user can grasp a device like a tool and gesture with it. Like brushstrokes on a canvas, these gestures can be big and bold and in general are not limited by the device's diminutive form. This also keeps the user's fingers off the tiny display, eliminating interface occlusion, a problem in touch-screen interaction. The Minput technique can also be used as a peephole display. This effect is somewhat like reading a newspaper in a dark room with only a small flashlight for illumination. Although only a fraction of the entire canvas is visible at any given moment, the whole document is immediately accessible, similar to scrolling through a webpage on a smart phone. We augment this interaction by using twisting gestures to zoom, an analog motion to which it is well suited. Finally, Minput can transform a device's sensor data into a cursor, which could allow small devices to run very complex widget-driven interfaces. Much like a mouse, the control-device gain can be manipulated. This enables extremely precise, pixel-level accuracy. Minput provides low-cost and high-precision pointing for gadgets.
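How two mouse-style sensors yield both translation and twist can be sketched with a little geometry: the average of the two motion deltas gives translation, and their difference across the sensor baseline gives rotation. The function and variable names below are our own illustration, not Minput's implementation.

import math

def minput_step(delta_a, delta_b, baseline_mm: float):
    """delta_a, delta_b: (dx, dy) reported by the two optical sensors; returns (tx, ty, twist_rad)."""
    tx = (delta_a[0] + delta_b[0]) / 2.0
    ty = (delta_a[1] + delta_b[1]) / 2.0
    # opposite y-motion of the two sensors (separated along x by `baseline_mm`)
    # means the device is rotating about its midpoint
    twist = math.atan2(delta_b[1] - delta_a[1], baseline_mm)
    return tx, ty, twist

print(minput_step((3.0, 1.0), (3.0, 1.0), baseline_mm=40.0))   # pure translation, twist = 0
print(minput_step((0.0, -2.0), (0.0, 2.0), baseline_mm=40.0))  # twisting gesture, twist > 0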

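Returning to Scratch Input's filtering step, it can be sketched as a band-pass filter followed by an energy test. The 2 to 6 kHz band and the threshold below are assumptions chosen only to illustrate the idea; Scratch Input's actual band and detector are not specified in this paper.

import numpy as np
from scipy.signal import butter, sosfilt

def scratch_energy(mic_samples: np.ndarray, sample_rate: float,
                   band_hz=(2000.0, 6000.0)) -> float:
    """Energy remaining after keeping only the assumed fingernail-scratch band."""
    sos = butter(4, band_hz, btype="bandpass", fs=sample_rate, output="sos")
    return float(np.mean(sosfilt(sos, mic_samples) ** 2))

def is_scratch(mic_samples: np.ndarray, sample_rate: float, threshold: float = 1e-4) -> bool:
    # assumes sample_rate is comfortably above 12 kHz so the band is valid
    return scratch_energy(mic_samples, sample_rate) > threshold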
How Skinput Is Better


Skinput is a revolutionary input technology which uses the skin as the tracking surface or the unique input device and has the potential to change the way humans interact with electronic gadgets. It is used to control several mobile devices including a mobile phone and a portable music player. It could help people to take better advantage of the tremendous computing power now available in compact devices that can be easily worn and carried. The diminutive size that makes smart phones, MP3 players and other devices so portable also severely limits the size, utility and functionality of the keypads, touch screens


and jog wheels typically used to control them. Skinput uses the largest organ of the human body as an input canvas, one which always travels with us and makes the ultimate interactive touch surface. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. Thus, Skinput can also be used without a visual interface. Skinput doesn't require any markers to be worn, and it is more suitable for persons with sight impairments, since it is much easier to operate even with your eyes closed. The system can even be used to pick up very subtle movements such as a pinch or muscle twitch. The amount of testing can be increased, and accuracy would likely improve as the machine learning programs receive more training under different conditions. It analyzes 186 different features of the acoustic signals and thus can produce a unique acoustic signature for various locations on the body. It works with good accuracy even when the body is in motion, and it takes only a minute or two to calibrate the system for each new user. Future work could reduce the bulky prototype and scale it down to a watch-sized device worn on the wrist.


Conclusion
In this paper I have presented the approach of appropriating the human body as an input surface. A novel, wearable bio-acoustic sensing array is described, built into an armband in order to detect and localize finger taps on the forearm and hand. Results from the experiments show that the system performs very well for a series of gestures, even when the body is in motion. Its accuracy, though affected by age and sex, can be improved as the machine learning programs receive more training under such conditions. The system can even be used to pick up very subtle movements such as a pinch or muscle twitch. Additionally, initial results demonstrate other potential uses of the approach, including single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. Several other approaches have been made in this field of technology, such as Sixth Sense. While Sixth Sense may perform better in loud environments and offers more features, Skinput doesn't require any markers to be worn, is more suitable for persons with sight impairments (since it is much easier to operate even with eyes closed) and makes use of proprioception. To conclude, several prototype applications have been described that demonstrate the rich design space Skinput enables. This system is quite amazing and certainly shows what can be achieved with a bit of thought.

Future Enhancements
This technology is unique and simple, but its current prototype is enclosed in a bulky cuff. The future prospect is to miniaturize the sensor array, scale it down and put it into a gadget which could be worn much like a wrist watch. In the future your hand could be your iPhone and your handset could be watch-sized on your wrist. The miniaturization of the projectors would make Skinput a complete and portable system that could be hooked up to any compatible electronics no matter where the user goes. Besides being bulky, the prototype has a few other glitches that need to be worked out. For instance, over time the accuracy of interpreting where the user taps can degrade, which is why the system needs to be re-trained occasionally. As more data is collected and the machine learning classifiers are made more robust, this problem will hopefully reduce. The system would also be made more usable, and its functionality would be increased to control many more electronic gadgets more effectively and efficiently. Mr. Harrison said he envisages the device being used in three distinct ways. Firstly, the sensors could be coupled with Bluetooth to control a gadget such as a mobile phone in a pocket, or a music player strapped to the upper arm. Secondly, he said, the sensors could work with a Pico projector that uses the forearm or hand as a display surface; this could show buttons, a hierarchical menu, a number pad or a small screen. Finally, Skinput can even be used to play games such as Tetris by tapping on fingers to rotate blocks. Thus it has all the capability to become a commercial product some day.

References
Chris Harrison, Carnegie Mellon University
http://computingnow.computer.org
www.chrisharrison.net/projects/skinput/
research.microsoft.com/enus/um/.../cue/.../HarrisonSkinputCHI2010.pdf
www.physorg.com/news186681149.html
www.msnbc.msn.com/id/35708587/
www.inhabitat.com/.../microsofts-skinput-systemturns-skin-into-a-touchscreen
www.cmu.edu/homepage/computing/2010/winter/skinput.shtml


Tablet Computing
Anita Bhattacharya & Beeba Mary Thomas
S6 Department of Computer Science Engineering Mohandas College of Engineering and Technology

Abstract
The idea of tablet computing is generally credited to Alan Kay of Xerox, who sketched out the idea in 1971. The first widely-sold tablet computer was Apple Computer's Newton, which was not a commercial success. Technological advances in battery life, display resolution, handwriting recognition software, larger memory, and wireless Internet access have since made tablets a viable computing option. A tablet PC is a wireless, portable personal computer with a touch screen interface. The tablet form factor is typically smaller than a notebook computer but larger than a smart phone. There are primarily two styles of tablets: the convertible tablet and the slate tablet. New marketable applications are being produced at an incredible rate. In April 2010, Apple released the iPad, which is controlled by a multitouch display (a departure from most previous tablet computers, which used a pressure-triggered stylus) as well as a virtual onscreen keyboard in lieu of a physical keyboard.

Introduction
A tablet personal computer (tablet PC) is a portable personal computer equipped with a touchscreen as its primary input device and designed to be operated and owned by an individual. The term was made popular as a concept presented by Microsoft in 2001, but tablet PCs now refer to any tablet-sized personal computer, even if it is not using Windows but another PC operating system. Tablets may use virtual keyboards and handwriting recognition for text input through the touchscreen. All tablet personal computers have a wireless adapter for Internet and local network connection. Software applications for tablet PCs include office suites, web browsers, games and a variety of other applications. However, since portable computer hardware components are low powered, demanding PC applications may not provide an ideal experience to the user.


Types of Tablet PCs


There are three types of Tablet PCs: the convertible, the slate and the rugged. Each type is designed with a specific user in mind.



How Does The Tablet PC Work?


Apply a Stimulus: Tablet PCs are best thought of as touch-screen PCs. Touch screens begin operation as soon as a stimulus is applied. Generally, touch screens can operate with the use of any stimulus, but some require specific stimuli. In the case of tablet PCs, this stimulus is generally a stylus pen.

Change Electrical Field: Conductive and resistive metal plates in the screen act together to register where on the screen you touched. The plates each have a respective electrical field running through them. When pressure is applied, the plates meet in the exact same spot and the electrical field alters. The computer recognizes this change in the electrical field as the beginning of an action and immediately starts processing the event.

Determine Coordinates: The alteration of the electrical field allows the computer processor to determine the coordinates of your touch. This lets the computer know the exact location on the screen where you put your stylus, and is what causes the computer to select the proper file, open the proper application, or even put your handwriting in the correct field.

Translate Information: Once the computer has the coordinates of your touch, a special driver translates the impulse into something the computer's processor can understand. This is very similar in action to what happens when you click your mouse or keyboard. You can add a keyboard to your tablet PC if you desire, and even add a mouse, though this does somewhat defeat the purpose of a touch screen. Some people prefer to treat tablet PCs as standard PCs, and only use the touch screen when doing things like drawing or writing quick notes.

Complete the Action: The computer processes the translated information and acts accordingly, whether it be a click, a slide, or writing. The entire process happens in a fraction of a second, allowing you to do multiple actions at once. For instance, double-clicking an application to open it can be done with the speed and precision of using a standard mouse; the computer processes both clicks independently and is able to recognize the intent of the user.

Hand Writing: The handwriting recognition feature of tablet PCs functions by converting the handwriting into an image. The computer logs this image and then analyzes it. Any future writing is converted into an image and checked against all the other images that have been stored and analyzed. This allows the computer to recognize what you are writing when it converts your handwriting into type.
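The "determine coordinates" and "translate information" steps amount to mapping raw readings from the touch plates onto screen pixels. The sketch below shows that mapping for a generic resistive panel; the ADC range, calibration values and screen size are illustrative assumptions, not figures for any particular tablet.

def adc_to_screen(adc_x: int, adc_y: int,
                  adc_min=(200, 180), adc_max=(3900, 3850),
                  screen=(1024, 768)):
    """Linearly map raw touch-plate readings onto screen pixel coordinates."""
    def scale(value, lo, hi, out_max):
        value = min(max(value, lo), hi)    # clamp noisy readings to the panel range
        return round((value - lo) * (out_max - 1) / (hi - lo))
    return (scale(adc_x, adc_min[0], adc_max[0], screen[0]),
            scale(adc_y, adc_min[1], adc_max[1], screen[1]))

print(adc_to_screen(2050, 2000))   # a touch near the middle of the panel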

Tablet PC Operating Systems


Some of the operating systems used by tablet computers are:
- GNU Linux
- MeeGo
- Android
- Apple iOS
- Windows Tablet PC Edition

Some Popular Tablet PCs


- Apple iPad
- Samsung Galaxy Tab
- BlackBerry PlayBook
- HP Slate, PalmPad
- Dell Streak 7, 10
- ViewSonic ViewPad
- Notion Ink Adam Tablet
- Motorola Xoom

Advantages Of Tablet PCs Vs. Laptops


The primary advantage of the tablet PC is that it is lighter than most laptops. Tablets are also smaller in size, which means you can take them quite easily tucked under your arm from one place to another, whereas most laptops need to be lugged wherever you go. Tablet PCs score on account of their lightness of weight. The tablet PC can also be laid flat on the working surface, which is ideal when you are in a conference; a laptop screen needs to be kept vertical, and that might obstruct the clear view of the person sitting in front of you. Tablet PCs take their input basically with the help of a special pen: your handwriting is the input. That can be a good choice if you are doing something artistic, since you can fine-tune your input better with a pen than with a mouse or a touch pad. Over time, you will learn to use the pen in the right manner, and even customize the pen to your tablet PC. There are different gestures that you can apply to the pen, which will produce different kinds of results, and eventually this will become easier for you than creating results with the keyboard and mouse. Finally, it must be said that a tablet PC becomes more personal to the user than a laptop. Since everyone has a different style of holding and using pens, the tablet PC will become unique to the user, and even the user will become unique to the pen. In fact, there are handwriting recognition applications that will train the tablet PC to understand your handwriting and convert it into text with up to 99% accuracy.




Disadvantages Of Tablet PCs Vs. Laptops


Some people might find the screen size of the tablet PC too small in comparison with a laptop; the maximum size a tablet PC screen goes up to is 14.1 inches. Another handicap of a tablet PC is that it does not have an inbuilt optical drive, though you can connect one externally; this could be a deterrent to some users. The reason for not including the optical drive is to maintain the low mass of the device. A tablet PC is not good if other people besides you are planning to share it. The tablet PC learns your handwriting and writing gestures, and it may not understand those of others. For that reason, tablet PCs are best suited to a single user who intends to use the device for its entire lifetime, whereas laptops can be used by any number of users without such concerns. There are more chances of screen damage on tablet PCs than on laptops because of the kind of input they take, with the pen device; you will need a special screen guard installed for the tablet PC when you purchase it. Technically, input on tablet PCs is slower than on laptops, because tablet PCs take handwriting input, which cannot match the speed of the keyboard and mouse that laptops use. Tablet PCs are also costlier than laptops. That may be one of the clinching points in making your decision, but you do need to check out the features that they provide too.

Conclusion
The tablet computer has met with both bouquets and brickbats since it made its foray into the market. There are people who consider it to be the next best invention.

References

Tablet PC Quick Reference by Jeff Van West
Teach Yourself Visually iPad by Lonzell Watson
www.ehow.com
www.google.com
www.wikipedia.com


The Development of Road Lighting Intelligent Control System Based on Wireless Network Control
Lekshmi R. Nair & Laxmi Laxman
S8, Computer Science Department, Mohandas College of Engineering and Technology, Trivandrum
lechu_nair_2006@yahoo.com

Abstract
With the development of science and technology, information technologies have been applied in many fields. The energy problem is a social focus nowadays, and energy saving and environmental protection is the policy of China. Under our national conditions, road lighting control is still an emerging technology. To reach the goal of energy saving, this paper introduces an information-technology method to deal with the problem of road lighting. The road lighting intelligent control system based on wireless network control can implement real-time monitoring of road lighting and work intelligently without manual intervention, saving energy and working efficiently. This paper discusses the design method of the road lighting intelligent control system, which is built on wireless personal area network technology, GPRS, microprocessors and host computers, and investigates the system structure, key technologies, software design and encryption. Finally, future research plans are given.

1. Introduction
As country urbanization accelerates, road lighting develops quickly. However, green road lighting is just starting. It is important to lower the cost, reduce the pollution, cut down the energy consumption and improve the efficiency of road lighting. The characteristics of an intelligent road lighting control system differ according to time and area. TPO (Time/Place/Occasion) management is the requirement of road lighting, which should be controlled automatically and intelligently in accordance with the occasion. Thanks to the energy saving of high-frequency electronic ballasts and their great potential market, a new lighting industry has been born. Compared with traditional ballasts, high-frequency electronic ballasts are more convenient for adjusting lighting and establishing a control network. Therefore, how to realize an intelligent control system based on integrated bus control technology has become a hot topic. Recently, there have been two ways to build a road lighting intelligent control system: 1) Control by communication cable. A communication line is set up in the control system, which is centrally controlled by the controlling center. The advantage is accurate and reliable control, and several control schemes can be applied; the disadvantages are high cost, more faults and difficult maintenance. 2) Automatic control, including timing control and luminance control. It has low cost, easy installation and simple maintenance; however, timing control cannot adapt to environmental change, luminance control is easily affected by the environment, and neither is flexible enough.

To improve the road lighting control level, this paper discusses the design method of a road lighting intelligent control system built on wireless personal area network technology, GPRS, microprocessors and a host computer, and investigates the system structure, key technologies, software design and encryption.

2. The whole design of the Road Lighting Intelligent Control System


Every road lamp in the road lighting intelligent control system has the functions of controlling, monitoring, information management, etc. To meet the above requirements, the system is composed of a field wireless controlling net and controlling terminals. The whole system configuration block diagram is shown in Fig. 1. The wireless communication net of the system is divided into two parts: 1) the GPRS network is used as the long-distance communication net between the controlling room and the controlling field; 2) the ZigBee net is used as the controlling net in the controlling field. ZigBee networks have the characteristics of easy network adaptation, low power consumption, great network capacity, reliable communication and low cost. Therefore, the ZigBee wireless network is the best choice for the field controlling network. The system can have 4,096 road lighting intelligent controlling fields (road sections), and every section can have 65,535 road lighting controlling terminals (road lamps), which meets the quantity requirements for road lighting in a city.

Figure 1: The whole system configuration block diagram

The controlling terminal of the road lighting intelligent control system includes the ZigBee controlling terminal and a high-frequency electronic ballast. The block diagram of the electronic ballast connected to the controlling terminal is shown in Fig. 2.

Figure 2: Electronic ballast connected to the controlling terminal (block diagram)

From this framework, several issues need to be solved to develop the road lighting intelligent control system, including the terminal controller technology, the interface module between the GPRS network and the ZigBee network, and the system management software.

3. The terminal controller of the road lighting intelligent control system

3.1. The hardware design of the terminal controller
The terminal controller of the road lighting intelligent control system is the controlling and actuating equipment that implements the road lighting control. It consists of three parts: a microprocessor, a ZigBee wireless module and a control module. In this terminal controller, Freescale's MC9S08QG8, a member of the low-cost, high-performance HCS08 family of 8-bit microcontroller units, was adopted as the microprocessor. The MC9S08QG8 has 24 interrupt sources, one serial communications interface (SCI) and an internal phase-locked loop that turns a 32.768 kHz external clock into an 8 MHz internal bus clock. Therefore, the MC9S08QG8 not only meets the processing-speed requirement but also has good anti-interference ability. The large number of interrupt resources and high stability ensure reliable data transmission, working-status queries of the road lighting and normal operation of the whole road lighting system. The microprocessor module implements overall control, communication and working-status queries, which are fed back to the controlling center over the ZigBee wireless network. The wireless module adopts ZigBee technology, since it has a self-organizing network function and flexible networking without manual interference, and one network node can perceive the existence of other nodes and determine the connection relationship. The international general free frequency band (2.4 to 2.48 GHz ISM) is adopted, and direct sequence spread spectrum (DSSS), which has high anti-interference ability, is adopted as the transmission method to support good network physical performance. Above all, ZigBee technology is the best way to build the network of the road lighting control system. In the practical design, Freescale's MC13214, which incorporates a low-power 2.4 GHz radio frequency transceiver and an 8-bit microcontroller, is adopted as the wireless transceiver chip. The RF transceiver is an 802.15.4 standard compliant radio that operates in the 2.4 GHz ISM frequency band, so it can support many network connection manners such as point to point, mesh network, star network and tree network, which enhances the flexibility and adaptability of the road lighting network. The MC13214's configuration is shown in Fig. 3.



Figure 3: MC13214's configuration

The control module includes a light-adjusting module and a switching module. In the light-adjusting module, the TLC5615, a D/A chip, is adopted as the main chip; it transforms the digitized signals into analog control signals. In this system, the analog control signals have three control grades: all-light mode, reducing-light mode and half-light mode. The analog voltage values of the three control grades are 5 V, 3.75 V and 2.5 V. In the switching module, because the interface of the electronic ballast has an internal pull-up, the switching control uses an open-collector (OC) output. Besides, the terminal controller has a function to collect the status of the road lighting equipment: when the test port is at a high level, the equipment is in a normal state, and a low level stands for an abnormal state.
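The light-adjusting logic can be sketched as a small table from lighting mode to DAC code. The full-scale voltage and the SPI helper below are assumptions for illustration; the TLC5615 data sheet should be consulted for the real transfer frame.

MODES_V = {"all_light": 5.0, "reducing_light": 3.75, "half_light": 2.5}
VREF_FULL_SCALE = 5.0    # assumed full-scale output voltage of the DAC stage
DAC_BITS = 10            # the TLC5615 is a 10-bit DAC

def mode_to_dac_code(mode: str) -> int:
    """Convert a lighting mode to the 10-bit code that produces its control voltage."""
    return round(MODES_V[mode] / VREF_FULL_SCALE * (2 ** DAC_BITS - 1))

def set_lighting_mode(mode: str, spi_write) -> None:
    """spi_write: placeholder for the microcontroller's SPI transfer routine."""
    spi_write(mode_to_dac_code(mode))

print(mode_to_dac_code("half_light"))   # 2.5 V corresponds to roughly half of full scale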

3.2. The software design of the terminal controller


The software of the terminal controller includes the ZigBee protocol stack and the application program. The ZigBee protocol stack consists of the physical layer, link layer and network layer. The duty of the physical layer is to manage data transmitting and receiving; the link layer provides reliable data transmission service and manages the link; the network layer constructs the network and manages the network members. The application program is designed in the CodeWarrior environment, and the application program's interface to the ZigBee protocol stack is the RS232 bus. The application program works as follows. First, the microprocessor initializes all registers and function modules, such as the watchdog, timer, interrupts, SPI module, I/O ports, etc. Second, it moves to the main cycle to collect the status of the road lighting system and wait for valid data from the ZigBee network. Then the received data are processed and analyzed. Last, according to the received valid data, the corresponding command is executed and the result is delivered to the controlling centre. The application program flow chart is shown in Fig. 4.

Figure 4: The application program flow chart

For practical applications in the road lighting system, security is very important. Although the AES-128 encryption algorithm is adopted, the ciphertexts that come from the same plaintext are the same, so this alone cannot meet our security requirements: it is easy to catch the ciphertexts from the air and break into the control system using those ciphertexts. Therefore, before the control code becomes the plaintext of the AES-128 encryption algorithm, an encoding process


and a synchronous counter process are required. The concrete process is shown in Fig. 5. After adopting this method, the same plaintexts (control codes or data) have totally different ciphertexts. If eight bytes are used as the synchronous counter, the situation in which the same plaintext produces the same ciphertext will only happen after 4,294,967,295 processes. If 1,000 control codes or data items are transmitted, that situation will only happen after 4,294 years. In all, the method meets our security needs.
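The counter idea can be sketched in a few lines: prepend an incrementing counter to the control code before AES-128 encryption, so two transmissions of the same command never produce the same ciphertext. The sketch uses the pycryptodome package; the key, the 4-byte counter, the padding and the block mode are illustrative choices, not the actual protocol of this system.

from Crypto.Cipher import AES

KEY = bytes(range(16))    # 128-bit demonstration key; never hard-code a real key
counter = 0

def encrypt_control_code(code: bytes) -> bytes:
    """Counter-prefixed AES-128 encryption of a short control code (one 16-byte block)."""
    global counter
    counter = (counter + 1) % (1 << 32)
    frame = counter.to_bytes(4, "big") + code    # synchronous counter + control code
    frame = frame.ljust(16, b"\x00")             # assumes the frame fits in one block
    return AES.new(KEY, AES.MODE_ECB).encrypt(frame)

# The same command now yields a different ciphertext on every transmission.
print(encrypt_control_code(b"LAMP042_HALF").hex())
print(encrypt_control_code(b"LAMP042_HALF").hex())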


Figure 5: Concrete process of encoding and encryption

5. Management software of the road lighting intelligent control system


The management software is designed with the idea that it is a management and operation platform for users. In the practical design, the host computer software is divided into a monitoring subsystem, a control subsystem, a query subsystem and an automatic control subsystem. The main functions are as follows:
1) It can monitor and display the working condition of all the lights in real time, and supports automatic alarms when it discovers abnormal events;
2) According to the users' real needs, the software can enable single-light control, group control, fancy control and brightness regulation;
3) It can run queries according to the users' conditions, and the results can be shown in a visual interface;
4) The system can realize a control scheme composed of time control and brightness control, which means that, according to the users' real needs, it can realize timed switching and timed brightness regulation to save energy and prolong the natural life of the lamps;
5) The system acquires the total power information of the road lighting, and so on, in real time.
As the control and monitoring window of the road lighting intelligent control system based on wireless network control, the host computer software provides a simple, convenient and friendly operation interface for users to carry out control operations stably, and it encrypts and decrypts communication information, which ensures the normal operation of the system. The software interface is shown in Fig. 6.

4. The GPRS and ZigBee interface module of the road lighting intelligent control system
4.1. The hardware design of the GPRS and ZigBee interface module
The main function of the GPRS and ZigBee interface module of the road lighting intelligent control system is to exchange information between the GPRS network and the ZigBee network. The module contains a microprocessor module, a GPRS module and a ZigBee module. In this module, Freescale's MC9S08DZ60, a member of the low-cost, high-performance HCS08 family of 8-bit microcontroller units, has been adopted as the microprocessor. The MC9S08DZ60 has two serial communications interfaces (SCI) and one Serial Peripheral Interface (SPI). Therefore, the MC9S08DZ60 can meet our communication and processing requirements.

4.2. The software design of the GPRS and ZigBee interface module
The programming idea of the GPRS and ZigBee interface module of the road lighting intelligent control system is the same as that of the terminal controller. This program also includes the ZigBee protocol stack and an application program, and the ZigBee protocol stack is the same as in the terminal controller. The application program works as follows. First, the microprocessor initializes all registers and function modules, such as the watchdog, timer, interrupts, SCI module, I/O ports and GPRS module. Second, it goes to the main cycle and waits for valid data from the ZigBee network and the GPRS network. Then the received data are processed and analyzed. Finally, valid data are transmitted to the corresponding network (GPRS or ZigBee).


Figure 6: The management software interface

6. Road lighting intelligent control system practical application effect


The road lighting intelligent control system based on wireless network control designed in this paper has been tested on a small scale for more than two months. The results show that it has a remarkable energy-saving effect and can save up to 40% of energy; the system is stable and reliable, and the communication data loss rate is lower than 3%; the software has a friendly interface and convenient operation. The system reaches its energy-saving, intelligent and practical goals. The actual effect is shown in Fig. 7 and Fig. 8.

Figure 7: Full light scene

Figure 8: Half light scene

7. Conclusion
The road lighting intelligent control system based on wireless network control can decrease energy consumption, reduce pollution and improve efficiency for the city and the road lighting system. This paper proposed a design method for a road lighting intelligent control system built on wireless personal area network technology, GPRS, microprocessors and host computers, investigated the structure, key technologies, software design and encryption, and obtained some achievements. In the future, the research work will be carried out as follows: 1) improving system stability further, to ensure that the links of the road lighting intelligent control system work stably for a long time; 2) decreasing communication delay, optimizing algorithms and improving the system communication efficiency; 3) enhancing the system expansibility: the system must connect to the control systems of other management departments, such as traffic control and power control, and provide interfaces for information interaction and communication control for them.


Acknowledgement
It is a project supported by the Hi-Tech Research and Development Program (863) of China (No. 2006AA11Z215).

References
[1] Wang Jingmin, Zhang Shizhao. Implementation status and effect analysis of green lighting in different countries, Power Demand Side Management, Vol. 9, No. 6, 2007, pp. 70-72.
[2] Chen Ming. Management technologies: towards distribution, integrated, dynamic and intelligence, Journal of China Institute of Communications, Vol. 21, No. 11, 2000, pp. 75-80.
[3] Freescale Microcontrollers. MC9S08QG8 data sheet, 2008.
[4] Freescale Microcontrollers. MC1321x data sheet, 2008.


Economic Virtual Campus Super Computing Facility BOINC


Aravind Narayanan P & Karthik Hariharan
College of Engineering & Technology, Thiruvananthapuram aravindun@gmail.com, karthikhariharan99@gmail.com

ABSTRACT
A supercomputing facility on campus can contribute to scientific and academic research with its enormous computational power. But setting up a supercomputing center based on off-the-shelf server-class microprocessors as a cluster requires huge investment and maintenance cost. We therefore introduce a variant of the conventional technology, modeled for a campus environment, which drastically reduces the cost involved without significant loss in computing power. We term it the Economic Virtual Campus Supercomputing Facility (EVCSF). Even though the major benefit is cost reduction, other advantages that come with this are decentralization, load balancing and high availability, low power consumption and guaranteed uptime. Our calculations indicate that a 5000-computer EVCSF with 50% uptime will cost around Rs 2,50,000, whereas a cluster-based supercomputer of the same performance (requiring approximately 2500 nodes) costs Rs 1,10,00,000, assuming the same processor type.

Introduction
What is EVCSF
Economic Virtual Campus Supercomputing Facility, abbreviated EVCSF, is an idea, rather a vision, to implement a supercomputer in a campus. Instead of going for the conventional notion of multiprocessor computational power, we attempt to utilize the idle CPU cycles of student laptops, desktops and systems in department labs. It is a collection of a few dedicated servers, desktop grids and volunteer computers integrated to form a virtual supercomputing facility. The main advantages are cost effectiveness and better utilization of otherwise untapped CPU capacity.
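A rough capacity estimate makes the idea concrete: the pool of machines, weighted by how often they are on and idle, behaves like a smaller number of fully dedicated nodes. The per-node speed below is an assumed figure, chosen only to mirror the 5000-machine, 50%-uptime example from the abstract.

def effective_nodes(machines: int, avg_uptime: float, avg_idle_fraction: float = 1.0) -> float:
    """Roughly how many fully dedicated nodes the volunteer pool is equivalent to."""
    return machines * avg_uptime * avg_idle_fraction

def effective_gflops(machines: int, avg_uptime: float, gflops_per_node: float) -> float:
    return effective_nodes(machines, avg_uptime) * gflops_per_node

print(effective_nodes(5000, 0.5))          # about 2500 dedicated-node equivalents
print(effective_gflops(5000, 0.5, 10.0))   # about 25000 GFLOPS at an assumed 10 GFLOPS per node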

Technical Procedure to Set Up a VCSF


The major aim of this paper is to formulate a technical procedure to set up a virtual supercomputer inside the campus. So far we have explained the essential features required for this; this section explains the technical procedure for doing so. There are four main steps involved:
1. Setting up a BOINC server
2. Creating a grid of trusted nodes
3. Setting up the volunteer computing segment
4. Integration and finalization

Setting up BOINC server


We need a server dedicated to managing the virtual supercomputer. An Intel dual Xeon or AMD Opteron machine will be a good choice. The Internet connection should be reliable and the server must have a static IP. Provide at least 2 GB of RAM and 40 GB of free disk space, a UPS power supply, RAID disk configuration, hot-swappable spares, a temperature-controlled machine room, etc., and do everything to make it secure. A midrange server computer like a Dell PowerEdge will do. Put the entire system behind a firewall and switch off unused ports such as FTP and telnet.

Main Features of EVCSF


The main features are:
1. Integration of volunteer computing and grid computing
2. Use of the BOINC middleware
3. Economic viability
4. Incrementally growing computational power


Software requirements:
- VMware Player
- BOINC server virtual machine

VMware Player is a freeware virtualization software product from VMware, Inc. (vmware.com). The player can run virtual machines, i.e., it will create a virtual environment in the system. For example, you can virtually run Windows in Linux or vice versa, provided you have the appropriate virtual machines. You can download the BOINC server virtual machine from boinc.berkeley.edu. Download and run the BOINC VM (847 MB) in VMware Player on the server to get started. Now that we have a server with the BOINC virtual machine running on it, it's time to move on to the grid creation part.

Following a similar procedure, set up another custom installer with:
- Account creation enabled
- Redundancy set to a desired value
- Other preference parameters set to suit specific needs
Ask students and faculty to install this custom client.

Integration and Finalization


Connect the systems to form the desktop grid, and let the lab systems stay on whenever computing power is desired. Distribute the volunteer client to all non-trusted units in the VCSF (e.g., student laptops) and let them connect when they power on their systems. The whole network is connected by wired or Wi-Fi LAN. Now a virtual supercomputer for the campus is ready.

Creating grid of trusted nodes


Although BOINC was originally designed for volunteer computing, it can be configured to work for grid computing. The steps in creating a BOINC-based grid are:
- Modify the preferences of the work unit (the computation to be performed) from the BOINC server to disable redundant processing. Since a grid will contain only trusted nodes, redundancy is not necessary.
- Create an account with the general preferences enforced for the desktop grid. Clients can be remotely monitored and controlled if necessary.
- Configure the project to disable account creation. New account creation is for the volunteer computing segment and is not required here.
- Create a custom installer that includes the desired configuration files.
- Deploy the installer on each system in the lab and on other trusted computers.

So now we have set up each node in the grid segment. Note that our Economic Virtual Campus Supercomputing Facility combines the benefits of both desktop grid computing and volunteer computing: we connect the trusted systems (such as lab machines) to the desktop grid part and the other, non-trusted systems (student laptops and miscellaneous PCs) to the volunteer computing segment. Now we move on to setting up the volunteer computing segment.

The Client Side


The volunteers who are ready to contribute to the project should be aware of the CPU usage of BOINC. A screenshot of the CPU usage of my system before installing BOINC shows that the average CPU usage of a computer will be less than approximately 20% in Windows Vista and less than 5% in Windows XP. Since this processor idle time is used for processing supercomputing tasks, the usage will rise after BOINC is installed.

Creating the volunteer computing segment


As BOINC is specially designed for volunteer computing, not much change needs to be made to the BOINC client.

Advantages


- Cost effective supercomputer inside campus
- Empowers scientific and academic research
- Youngsters contributing to indigenous projects
- Efficient resource utilization
- Students exposed to the supercomputing arena

Limitations
- Fluctuating computational power
- If the computer acts only as a node of the EVCSF, the power utilization is not efficient

References

CERN. Grid Cafe - The place for everybody to learn about the Grid. http://gridcafe.web.cern.ch/gridcafe/
Foster and C. Kesselman, eds. The Grid 2: Blueprint for a New Computing Infrastructure. 2nd ed. San Francisco, CA: Elsevier, 2004.
TeraGrid. Retrieved Jan. 10, 2007 from http://www.teragrid.org/
Volunteer computing on Wikipedia: http://en.wikipedia.org/wiki/Volunteer_computing
Southeastern Universities Research Association. (2007). SURA | Information Technology | SURAgrid. http://sura.org/programs/sura_grid.html
Berkeley University BOINC resource website. http://boinc.berkeley.edu/
Association for Computing Machinery and IEEE Computer Society (2001). Computing Curricula 2001 Computer Science. http://www.computer.org/education/cc2001/
LHC@home, the grid computing project for the Large Hadron Collider experiment. http://lhcathome.cern.ch/


Project Silpa
Python Based Indian Language Processing Framework
Prepared by Anish A & Arun Anson
S6 CSE, Mohandas College of Engineering and technology, Anad, Thiruvananthapuram

aneesh.nl@gmail.com, arunanson@gmail.com

Abstract
Silpa, Swathanthra Indian Language Processing Applications, is a web platform to host free (as in freedom) software language processing applications easily. It is a web framework and a set of applications for processing Indian languages in many ways; in other words, it is a platform for porting existing and upcoming language processing applications to the web. Silpa can also be used as a Python library or as a web service from other applications.

Introduction
Silpa is the abbreviation of Swathanthra Indian Language Processing Applications. Silpa can be used as:
- A web framework for hosting Indian language processing applications
- A JSON[1]-RPC[2] based web service for using the Silpa services from other applications
- A Python library for Indian language processing
The Silpa project is released under the GNU Affero General Public License Version 3[3][4]. It is free (as in freedom) software. Its lead developer is Santhosh Thottingal. I am also a developer.


Architecture
The silpa architecture components. consists of following

Figure 1: Silpa Components
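As a rough illustration of the JSON-RPC usage mode listed above, the Python sketch below posts a JSON-RPC request to a Silpa installation. The endpoint path and the method name used here are placeholders invented for this example, not confirmed parts of the Silpa API; a real deployment documents its own RPC names.

import json
import urllib.request

# Hypothetical Silpa JSON-RPC endpoint; the real path and method names
# depend on the deployment and are assumptions for illustration only.
SILPA_URL = "http://localhost:8080/JSONRPC"

def call_silpa(method, params):
    """Send a simple JSON-RPC style request and return its 'result' field."""
    payload = json.dumps({"method": method, "params": params, "id": 1}).encode("utf-8")
    req = urllib.request.Request(
        SILPA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["result"]

if __name__ == "__main__":
    # Example: ask a (hypothetical) transliteration method to convert text.
    print(call_silpa("transliterate", ["namaste", "ml_IN"]))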

Modules
Silpa contains many modules, some of which are stable and some of which are still experimental. Modules extend the functionality of Silpa; in other words, modules make Silpa usable.

Spell checker
Spell check[7] provides a language-independent spell checking service across Indian languages and English. The spell check service looks the word up in the dictionary; if it is found, it is reported as correct, and if not, similar words are fetched from the dictionary and offered as suggestions. A spell checker customarily consists of two parts:
1. A set of routines for scanning text and extracting words, and
2. An algorithm for comparing the extracted words against a known list of correctly spelled words (i.e., the dictionary).

Sort
Unicode Collation Algorithm (UCA)[6] based sorting for all languages defined in Unicode. The collation weights used in this application are a modified version of the Default Unicode Collation Element Table (DUCET). The current version is modified only for Malayalam; for other languages it uses the default weights defined by Unicode. Malayalam sorting is compatible with the GNU C library collation definition.

Dictionary
The Dictionary module provides a dictionary service for Indic languages. It also looks up meanings in Wiktionary, so results include definitions from the SILPA dictionary as well as definitions from Wiktionary.

Text Similarity
This module compares two texts for their similarity and, based on the similarity, gives a number between 0 and 1: 1 means both texts are similar, 0 means the texts are completely different, and a value in between indicates how similar they are. The algorithm uses an n-gram[8] model and cosine similarity[9].

Indic Soundex
Soundex is a phonetic indexing algorithm. It is used to search for and retrieve words having similar pronunciation but slightly different spelling. Soundex was developed by Robert C. Russell and Margaret K. Odell. The soundex code for a word is an English letter followed by a number of digits. By this algorithm, if a name is written as Santhosh, Santosh, Santhos or Santos, the soundex code remains the same: a5B20000. The original Soundex algorithm is not multilingual; the Indic Soundex algorithm is not an exact copy of the English soundex, but uses the same concept.
Algorithm:
1. For each letter in the word except the first, get the corresponding soundex digit from the character map, which is simply a lookup table.
2. If the letter is not found in the character map, the soundex digit for that letter is 0.
3. Duplicate consecutive soundex digits are skipped.
4. Replace the first digit with the first alphabetic character of the word.
5. Remove all 0s from the soundex code.
6. Return the soundex code padded to the required length (i.e., if the required length is 5 and the soundex is BCD, then BCD0 is returned). Differently spelled Malayalam forms of the same word all map to one code, for example APKBF00.
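The six steps above can be illustrated with a minimal, self-contained Python sketch. The character map below covers only a handful of Latin letters and is an assumption for demonstration; the real Indic Soundex uses per-script character maps and will not produce the codes quoted above.

# Illustrative soundex-style encoder following the six steps above.
# CHAR_MAP is a toy table; the real Indic Soundex maps characters of
# each Indian script (that table is not reproduced here).
CHAR_MAP = {"b": "1", "f": "1", "p": "1", "v": "1",
            "c": "2", "g": "2", "j": "2", "k": "2", "q": "2",
            "s": "2", "x": "2", "z": "2",
            "d": "3", "t": "3", "l": "4",
            "m": "5", "n": "5", "r": "6"}

def soundex(word, length=5):
    digits = []
    for ch in word.lower()[1:]:
        digits.append(CHAR_MAP.get(ch, "0"))        # step 2: unknown letter -> 0
    deduped = []
    for d in digits:
        if not deduped or deduped[-1] != d:         # step 3: skip consecutive duplicates
            deduped.append(d)
    code = word[0].upper() + "".join(d for d in deduped if d != "0")  # steps 4-5
    return (code + "0" * length)[:length]           # step 6: pad to required length

print(soundex("Santhosh"), soundex("Santos"))       # similar-sounding names get the same code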

Transliteration
Transliteration is the practice of converting a text from one writing system into another in a systematic way[10]. This application helps you transliterate text from any Indian language to any other Indian language. The language of each word is detected, so you can give the text in any language, even with mixed languages. Transliteration to the International Phonetic Alphabet[11] is also available.

Stemmer
Stemming[15] is the process of reducing inflected (or sometimes derived) words to their stem, base or root form, generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. Stemming programs are commonly referred to as stemming algorithms or stemmers.

Syllabification
A syllable[12] is a unit of organization for a sequence of speech sounds. Syllabification is the separation of a word into syllables, whether spoken or written. In most languages, the actually spoken syllables are the basis of syllabification in writing too.

Approximate String Search


This is a fuzzy string search[16] application. It illustrates the combined use of edit distance[17] and the Indic Soundex algorithm; by mixing both "written like" (edit distance) and "sounds like" (soundex) comparisons, we achieve an efficient approximate string search. The application is capable of cross-language string search too: you can search for Hindi words in a Malayalam text, and if there is any Malayalam word which is an approximate transliteration of the Hindi word, or which sounds like it, it will be returned as an approximate match. The "written like" algorithm used here is a bigram average algorithm: the ratio of the number of common bigrams in the two strings to the average number of bigrams gives a factor greater than zero and less than 1. Similarly, the soundex algorithm gives a weight. By selecting words whose comparison weight is more than the threshold weight (which is 0.6), we get the search results. Eg: in a given Malayalam text, searching for a Hindi word highlights the approximately matching words.
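The "written like" part of this comparison can be sketched in a few lines of Python. The bigram function below is a straightforward reading of the description above; how the bigram score and the soundex weight are actually combined in Silpa is not specified in this text, so the simple averaging at the end is an assumption for illustration only.

def bigrams(word):
    """Return the set of adjacent character pairs in a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def bigram_average(a, b):
    """Ratio of common bigrams to the average bigram count (between 0 and 1)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return len(ba & bb) / ((len(ba) + len(bb)) / 2.0)

def approx_match(a, b, soundex_weight, threshold=0.6):
    # Assumed combination rule: average the "written like" and
    # "sounds like" weights, then compare against the 0.6 threshold.
    score = (bigram_average(a, b) + soundex_weight) / 2.0
    return score >= threshold

print(bigram_average("santhosh", "santosh"))  # close spellings give a high score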

Web Fonts
Web fonts allow web designers to use custom fonts in their pages without having the fonts installed on users' computers. The technique makes use of the @font-face feature, and any modern web browser is capable of using web fonts. Silpa provides a set of Indic fonts that you can use in your web pages, so users can see pages in Indic languages even if the font is not available on their computer. The Silpa web fonts module simplifies the usage of web fonts for developers by hosting available open source fonts on our server and providing easy-to-use CSS links.

Katapayadi
The Katapayadi system of numerical notation[13] is an ancient Indian system for mapping letters to numerals, allowing easy remembrance of numbers as words or verses. By assigning more than one letter to each numeral and nullifying certain other letters as valueless, the system provides the flexibility to form meaningful words out of numbers which can be easily remembered. In Malayalam it is also known as Paralperu[14]. Eg: a verse can encode the number 31415926536, the leading digits of pi.

Indic Ngram Library
An n-gram model[8] is a type of probabilistic model for predicting the next item in a sequence. n-grams are used in various areas of statistical natural language processing and genetic sequence analysis. An n-gram is a subsequence of n items from a given sequence. The items in question can be phonemes, syllables, letters, words or base pairs according to the application. An n-gram of size 1 is referred to as a "unigram"; size 2 is a "bigram" (or, less commonly, a "digram"); size 3 is a "trigram"; and size 4 or more is simply called an "n-gram".

Another functionality is conversion of Indic text to images. The images can be customized, and PNG, SVG and PDF outputs are supported.

Fortune Cookies (Random Quotes)

Fortune[18] is a simple program that displays a pseudorandom message from a database of quotations. Here, we get Malayalam proverbs, Thirukural quotes and Chanakya quotes.

Guess the language

The Guess the Language module provides the functionality to guess the language of a given text. It allows all the other modules to detect the input language and take appropriate decisions.

Text to Speech

Speech synthesis[19] is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or hardware. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The text-to-speech module uses dhvani[20], an Indian language text-to-speech system, as the back-end for speech synthesis.

Hyphenate Text

Hyphenation[21][22] is the process of inserting hyphens between the syllables of a word so that, when the text is justified, maximum space is utilized. An example is available online[23].

Script Renderer

This is an experimental online service based on the pypdflib library, which is under development. The objective of this library project is to develop an open source PDF rendering library which can support all complex scripts. The script renderer can generate a PDF file of an Indic language Wikipedia page.

Conclusion

What you can do:
Use Silpa for your applications.
Test the experimental versions of modules and report any bugs found.
Join the Silpa mailing list[24] and keep updated.
Check the task list[25] of Silpa and undertake any task.
Code on Silpa.
Design artworks, themes, graphics, etc. for Silpa.
Speak about Silpa and help spread the word.

Silpa is an emerging project in the field of Indic language computing. Use your imagination to use it to its full extent.

References
1. http://en.wikipedia.org/wiki/JSON
2. http://en.wikipedia.org/wiki/Remote_procedure_call
3. http://www.gnu.org/licenses/agpl.html
4. http://en.wikipedia.org/wiki/Affero_General_Public_License
5. http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface
6. http://en.wikipedia.org/wiki/Unicode_collation_algorithm
7. http://en.wikipedia.org/wiki/Spell_checker
8. http://en.wikipedia.org/wiki/N-gram
9. http://en.wikipedia.org/wiki/Cosine_similarity
10. http://en.wikipedia.org/wiki/Transliteration
11. http://en.wikipedia.org/wiki/IPA
12. http://en.wikipedia.org/wiki/Syllable
13. http://en.wikipedia.org/wiki/Katapayadi_sankhya
14. http://ml.wikipedia.org/wiki/Paralperu
15. http://en.wikipedia.org/wiki/Stemming
16. http://en.wikipedia.org/wiki/Fuzzy_string_searching
17. http://en.wikipedia.org/wiki/Levenshtein_distance
18. http://en.wikipedia.org/wiki/Fortune_(Unix)
19. http://en.wikipedia.org/wiki/Speech_synthesis
20. http://dhvani.sourceforge.net/
21. http://en.wikipedia.org/wiki/Hyphenation_algorithm
22. http://thottingal.in/blog/2008/12/16/hyphenation-of-indian-languages-in-webpages/
23. http://ftp.twaren.net/Unix/NonGNU/smc/hyphenation/web/example.html
24. http://lists.nongnu.org/mailman/listinfo/silpa-discuss
25. https://savannah.nongnu.org/task/?group=silpa

This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/

STREAM 3 ELECTRONICS & ELECTRICAL ENGINEERING

Audio Spotlighting
Liji Ramesan Santhi & Sreeja V
Electrical and Electronics Department Mohandas College of Engineering and Technology

Abstract
In the present world, sound that spills over and annoys people in the vicinity is a familiar problem, and the audio spotlighting system was designed as a solution to this challenge. Audio spotlighting is a very recent technology that creates focused beams of sound, similar to light beams coming out of a flashlight. By shining sound at one location, specific listeners can be targeted without others nearby hearing it. The acoustic device comprises a speaker that fires inaudible ultrasound pulses with very small wavelengths, which act in a manner very similar to a narrow column. The ultrasound beam acts as an airborne speaker, and as the beam moves through the air, gradual distortion takes place in a predictable way due to the non-linearity of air. This gives rise to audible components that can be accurately predicted and precisely controlled. Targeted or directed audio technology is heading for a huge commercial market in entertainment and consumer electronics, and technology developers are scrambling to tap into it. Being the most recent and dramatic change in the way we perceive sound since the invention of the coil loudspeaker, audio spotlight technology can work many miracles in various fields. In areas where headsets have previously been needed, why not use multiple audio spotlights, so that customers can hear naturally?

Introduction
Engineers have struggled for nearly half a century to produce a speaker design that covers the 20 Hz to 20,000 Hz range of human hearing while also producing a narrow beam of audible sound. The directivity of any wave-producing source depends on the size of the source compared to the wavelength it generates. Inherent properties of the air cause ultrasound to distort in a predictable way as it propagates; this distortion gives rise to frequency components in the audio bandwidth, which can be predicted and precisely controlled. By generating the correct ultrasonic signal, we can therefore create, within the air itself, essentially any sound desired: the ultrasound column acts as an airborne speaker.

Loudspeakers vs. Audio Spotlighting


All loudspeakers today have one thing in common: they are direct radiating, that is, they are fundamentally piston-like devices designed to directly pump air molecules into motion to create the audible sound waves we hear. The audible portions of sound tend to spread out in all directions from the point of origin. They do not travel as narrow beams, which is why you don't need to be right in front of a radio to hear music. In fact, the beam angle of audible sound is very wide, just about 360 degrees. This effectively means the sound that you hear is propagated through the air equally in all directions.


Conventional loudspeakers suffer from amplitude distortion, harmonic distortion, phase distortion, crossover distortion and so on. In order to focus sound into a narrow beam, you need to maintain a low beam angle, and the beam angle is dictated by wavelength: the smaller the wavelength, the smaller the beam angle and the more focused the sound. The aperture size of the source also matters: a large loudspeaker will focus sound over a smaller area. If the source loudspeaker can be made several times bigger than the wavelength of the sound transmitted, then a finely focused beam can be created. The problem is that this is not a very practical solution. To ensure that the shortest audible wavelengths are focused into a beam, a loudspeaker about 10 meters across would be required, and to guarantee that all audible wavelengths are focused, even bigger loudspeakers would be needed. A low beam angle can therefore be achieved only by making the wavelength smaller, and this is done by making use of ultrasonic sound. The audio spotlight looks like a disc-shaped loudspeaker, trailing a wire, with a small laser guide-beam mounted in the middle. When someone points the flat side of the disc in your direction, you hear whatever sound he has chosen to play for you; but when he turns the disc away, the sound fades almost to nothing.
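As a rough numerical illustration of the wavelength argument above (not taken from the paper), the snippet below compares the far-field spread of a 0.3 m source radiating ordinary audio at 1 kHz with the same source radiating ultrasound at 24 kHz, using the crude order-of-magnitude estimate that the divergence angle is about wavelength divided by aperture.

import math

SPEED_OF_SOUND = 343.0          # m/s in air at room temperature

def divergence_deg(freq_hz, aperture_m):
    """Order-of-magnitude beam divergence: angle ~ wavelength / aperture."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return math.degrees(wavelength / aperture_m)

for f in (1_000, 24_000):       # audible tone vs ultrasonic carrier
    print(f"{f:>6} Hz from a 0.3 m source: ~{divergence_deg(f, 0.3):.1f} degrees")

For the ultrasonic case this crude estimate comes out at roughly 3 degrees, which is consistent with the beam angle quoted later in this paper.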

Point-'N'-Shoot Sound Makes Waves


Researchers have developed technology that can project a beam of sound so narrow that only one person can hear it. "Directed" audio sounds as if it is coming from right in front of you even when transmitted from a few hundred meters away. Inventors of the new "ventriloquist" technology say it could provide an added dimension to entertainment; the military, however, is investigating using it to confuse opponents or even inflict pain. The Audio Spotlight is one of two competing audio transmission systems that emit a one-foot-square column of sound that can only be heard by people in its direct path. Joseph Pompei, a PhD student at the MIT Media Lab, decided to develop it while working at the audio company Bose, which he joined at 16 as its youngest-ever engineer.

Working
It's markedly different from a conventional speaker, whose orientation makes much less difference.

The original low-frequency sound wave, such as human speech or music, is applied to the audio spotlight emitter device. This low-frequency signal is frequency modulated with ultrasonic frequencies ranging from 21 kHz to 28 kHz.

The output of the modulator is a modulated form of the original sound wave. Since an ultrasonic frequency is used, the wavelength of the combined signal is of the order of a few millimeters.


Since the wavelength is small, the beam angle is only around 3 degrees; as a result the sound beam is a narrow one with small dispersion. While the frequency modulated signal travels through the air, the non-linear property of air comes into action and slightly changes the sound wave: new sound waves are formed within the wave. The new sound signal generated within the ultrasonic sound wave corresponds to the original information signal. Since we cannot hear the ultrasonic sound wave, we only hear the new sounds that are formed by the non-linear action of air. Thus in audio spotlighting there is no actual speaker that produces the sound; the ultrasonic envelope acts as an airborne speaker.
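To make the parametric-array idea above concrete, here is a small numerical sketch (using NumPy, not part of the paper) showing how a non-linear medium turns an ultrasonically modulated signal back into audible sound. The paper describes frequency modulation in the 21-28 kHz band; for brevity this sketch uses the double-sideband amplitude modulation mentioned in the Components section, and a simple square-law term as a stand-in for the non-linearity of air.

import numpy as np

fs = 192_000                      # sample rate high enough for a 24 kHz carrier
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
audio = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz stand-in for speech or music
carrier = np.sin(2 * np.pi * 24_000 * t)       # ultrasonic carrier in the 21-28 kHz band

# Double-sideband modulation of the audio onto the ultrasonic carrier.
emitted = (1 + 0.5 * audio) * carrier

# Crude stand-in for the non-linear response of air: squaring the signal
# produces, among other terms, components back in the audible band.
demodulated = emitted ** 2

spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), 1 / fs)
band = (freqs > 500) & (freqs < 2_000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"strongest audible component near {peak:.0f} Hz")

Running this prints a peak near 1000 Hz, i.e. the non-linearity regenerates the original audio tone from the purely ultrasonic emission.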

The new sound produced has virtually no distortion associated with it, and faithful reproduction of sound is freed from bulky enclosures: there are no woofers or crossovers. The technology is such that you can direct the ultrasonic emitter towards a hard surface, a wall for instance, and the listener perceives the sound as coming from the spot on the wall. The listener does not perceive the sound as emanating from the face of the transducer, but only from the reflection off the wall. For the maximum volume that trade show use demands, it is recommended that the audio spotlight speaker (more accurately called a transducer) is mounted no more than three meters from the average listener's ears, or five meters in the air. The mounting hardware is constructed with a ball joint so that the audio spotlight can easily be aimed wherever the sound is desired.

Components
Power Supply: The audio spotlighting system works off DC voltage. The ultrasonic amplifier requires a 48 V DC supply, and a low-voltage supply is needed for the microcontroller and other process management.
Frequency Oscillator: Generates ultrasonic frequency signals in the range of 21 kHz to 28 kHz, which are required for modulating the information signal.
Modulator: In order to convert the source signal into an ultrasonic signal, a modulation scheme is required, which is achieved through a modulator. In addition, error correction is needed to reduce distortion without loss of efficiency.
Audio Signal Processor: The signal is sent to an electronic signal processor circuit where equalization and distortion control are performed in order to produce a well-equalized signal.
Microcontroller: A dedicated microcontroller circuit takes care of the functional management of the system.


In a future version it is expected that the whole process, including functional management, signal processing, double-sideband modulation and even the mode of power supply, would be effectively taken care of by a single embedded IC.
Ultrasonic Amplifier: A high-efficiency ultrasonic power amplifier amplifies the frequency modulated wave in order to match the impedance of the integrated transducers, so that the output of the emitter is more powerful and can cover a greater distance.
Transducer: It is 1.27 cm thick and 17 inches in diameter. It is capable of producing audibility up to 200 m with good clarity of sound, and has the ability of real-time sound reproduction with zero lag. These transducers are arranged in the form of an array, called a parametric array, in order to propagate the ultrasonic signals from the emitter and thereby exploit the non-linearity of air.

Advantages
1. Can focus sound only at the place you want.
2. Ultrasonic emitter devices are thin and flat and do not require a mounting cabinet.
3. The focused or directed sound travels much farther in a straight line than sound from conventional loudspeakers.
4. Dispersion can be controlled to be very narrow, or wider to cover more listening area.
5. Reduces feedback from microphones.
6. Highly cost effective, as the maintenance required is less than for conventional loudspeakers, and the emitters have a longer lifespan.
7. Requires only the same power as regular speakers.
8. There is no lag in reproducing the sound.

Applications

Modes of Listening


Direct mode: This requires a clear line of approach from the sound system unit to the point where the listeners can hear the audio. This method is appropriate when the audio is to be restricted to a specific area.
Projected mode: This mode requires an unbroken line of approach from the emitter of the audio spotlighting system, so the emitter is pointed at the spot where the sound is to be heard. For this mode of operation, the sound beam from an emitter is made to reflect from a reflecting surface such as a wall or a diffuser; the resulting virtual sound source creates the illusion of sound emanating from a direction where no physical loudspeaker is present.

Automobiles: Beamed alert signals can be propagated directly from an announcement device in the dashboard to the driver. Presently, Mercedes-Benz buses are fitted with audio spotlighting speakers so that individual travellers can enjoy music of their own choice.
Retail sales: Provide targeted advertisements directly at the point of purchase.
Public announcements: Highly focused announcements in noisy environments such as subways, airports, amusement parks, traffic, etc.
Emergency rescue: Rescuers can communicate with endangered people far from reach.
Entertainment systems: In home theater systems, speakers can be eliminated by the implementation of audio spotlighting and the properties of sound can be improved.


Museums: In museums this can be used to describe a particular object to the person standing in front of it, so that another person standing in front of a different object does not hear the description.
Audio/video conferencing: Project the audio from a conference in four different languages from a single central device, without the need for headphones.
Sound bullets: Jack the sound level up to 50 times the human threshold of pain, and an offshoot of audio spotlighting technology becomes a non-lethal weapon.

Conclusion
Although the project has not yet fully succeeded, there is much to be gained from it. It is really going to bring about a revolution in sound transmission, and the user can decide the path along which the audio signal should propagate. It is going to shape the future of sound and will serve our ears with a magical experience.

Future
It holds the promise of replacing conventional speakers. It allows the user to control the direction of propagation of sound. Audio spotlighting will force people to rethink their relationship with sound: it puts sound where you want it.

References
1. F. Joseph Pompei, "The use of airborne ultrasonics for generating audible sound beams", Journal of the Audio Engineering Society.
2. P. J. Westervelt, "Parametric Acoustic Array", Journal of the Acoustical Society of America.
3. "The Past, Present and Future of Audio Signal Processing", IEEE Signal Processing Magazine (pages 30-57, Sept. 1999).
4. www.silentsound.co.za - Silent Sound.
5. www.techalone.com - Audio Spotlighting.
6. www.holosonics.com
7. Electronics For You, Volume 40 (January 2008).
8. www.fileguru.com/apps/audio_spotlighting_ - IEEE paper (2009).


Augmented Reality
Amy Sebastian and Jacqueline Rebeiro
S6, Department of Electrical Engineering Mohandas College of Engineering and Technology, Thiruvananthapuram

Abstract
Augmented reality is one of the newest innovations in the electronics industry. Augmented reality systems superimpose graphics for every perspective and adjust to every movement of the user's head and eyes. Development of the technology needed for augmented reality systems, however, is still underway within the laboratories of both universities and high-tech companies. It is forecast that by the end of this decade, the first mass-produced augmented reality systems will hit the market. An augmented reality system supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. Augmented reality can be thought of as a middle ground between a virtual environment (completely synthetic) and tele-presence (completely real). Augmented reality systems in combination with other technologies such as WiFi could also be used to provide instant information to their users. Students could use such a system to gain a deeper understanding of things. Medically, augmented reality systems could be used to give the surgeon a better sensory perception of the patient's body during an operation. Augmented reality systems can be used in almost any field or industry. The novelty of instant information coupled with enhanced perception will ensure that augmented reality systems play a big role in how people live in the future.

Definition:
An Augmented Reality (AR) system supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. We can generally expect an Augmented Reality system to have the following properties: 1) it combines real and virtual objects in a real environment; 2) it runs interactively, and in real time; 3) it aligns real and virtual objects with each other. Augmented Reality can be thought of as a middle ground between a Virtual Environment (completely synthetic) and Tele-presence (completely real). Virtual reality (VR) is defined as a computer-generated interactive 3-D environment in which a person is immersed; the user is completely immersed in an artificial world and is divorced from the real environment. VR strives for a totally immersive environment, while AR augments real-world scenes. Augmented reality is changing the way its users see the world.

Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. Similar devices and applications already exist, particularly on smartphones like the iPhone.

Augmented Reality vs. Virtual Reality

Virtual reality is a technology that encompasses a broad spectrum of ideas. The term is defined as "a computer generated, interactive, three-dimensional environment in which a person is immersed." There are three key points in this definition. First, this virtual environment is a computer generated three-dimensional scene, which requires high performance computer graphics to provide an adequate level of realism. The second point is that the virtual world is interactive: a user requires real-time response from the system to be able to interact with it in an effective manner. The last point is that the user is immersed in this virtual environment.


One of the identifying marks of a virtual reality system is the head mounted display worn by users. These displays block out all the external world and present to the wearer a view that is under the complete control of the computer. The user is completely immersed in an artificial world and becomes divorced from the real environment. For this immersion to appear realistic the virtual reality system must accurately sense how the user is moving and determine what effect that will have on the scene being rendered in the head mounted display. The discussion above highlights the similarities and differences between virtual reality and augmented reality systems. A very visible difference between these two types of systems is the immersiveness of the system. Virtual reality strives for a totally immersive environment. In contrast, an augmented reality system is augmenting the real world scene necessitating that the user maintains a sense of presence in that world. The virtual images are merged with the real view to create the augmented display. There must be a mechanism to combine the real and virtual that is not present in other virtual reality work. The computer generated virtual objects must be accurately registered with the real world in all dimensions. Errors in this registration will prevent the user from seeing the real and virtual images as fused. The correct registration must also be maintained while the user moves about within the real environment.

Milgram's Reality-Virtuality Continuum

The real world and a totally virtual environment are at the two ends of this continuum, with the middle region called Mixed Reality. Augmented reality lies near the real-world end of the line, with the predominant perception being the real world augmented by computer generated data. Augmented Virtuality is a term created by Milgram to identify systems which are mostly synthetic with some real-world imagery added, such as texture-mapping video onto virtual objects. This is a distinction that will fade as the technology improves and the virtual elements in the scene become less distinguishable from the real ones.

Components of an Augmented Reality System:
1. Head Mounted Display
2. Tracking System (GPS)
3. Mobile Computing Power

Head Mounted Displays

Head mounted displays (HMDs) enable us to view the graphics and text created by the augmented reality system. There are two basic types of head mounted displays in use.

Video See-Through Display
The "see-through" designation comes from the need for the user to be able to see the real-world view that is immediately in front of him even when wearing the HMD. This system blocks out the wearer's surrounding environment, using small cameras attached to the outside of the goggles to capture images. On the inside of the display, the video image is played in real time and the graphics are superimposed on the video. One problem with the use of video cameras is that there is more lag, meaning that there is a delay in image adjustment when the viewer moves his or her head. The graphics system produces the virtual objects, which are aligned to the real objects; the virtual objects are then merged with the real imagery from the video camera and sent to the monitor, from where the result is displayed to the user.

Optical See-Through Display
The optical see-through HMD eliminates the video channel that is looking at the real scene. Instead, merging of the real world and the virtual augmentation is done optically in front of the user.


There are advantages and disadvantages to each of these types of displays. With both of the displays that use a video camera to view the real world there is a forced delay of up to one frame time to perform the video merging operation. At standard frame rates that is potentially a 33.33 millisecond delay in the view seen by the user. Since everything the user sees is under system control, compensation for this delay could be made by correctly timing the other paths in the system; alternatively, if other paths are slower, the video of the real scene could be delayed. With an optical see-through display the view of the real world is instantaneous, so it is not possible to compensate for system delays in other areas. On the other hand, with monitor-based and video see-through displays a video camera is viewing the real scene; an advantage of this is that the image generated by the camera is available to the system to provide tracking information. The optical see-through display does not have this additional information; the only position information available with that display is what position sensors mounted on the head mounted display itself can provide. The major advantage of optical see-through displays is that they can be made very small; the biggest constraint in using this technology is the prohibitive cost.

Tracking and Orientation

Another component of an augmented reality system is its tracking and orientation system. This system pinpoints the user's location in reference to his surroundings and additionally tracks the user's eye and head movements. The complicated procedures of tracking overall location, tracking user movement and adjusting the displayed graphics accordingly are some of the major hurdles in developing this technology. So far, even the best systems developed still present a lag or delay between the user's movement and the display of the image.

Portable Computer

Augmented reality systems will need highly mobile computers. As of now, available mobile computers that can be used for this new technology are still not sufficiently powerful to create the needed stereo 3-D graphics. Graphics processing units like the NVidia GPU by Toshiba and ATI Mobility 128 16MB graphics chips are, however, being integrated into laptops to bring current computer technology closer to the needs of augmented reality systems.

Applications:
Advertising: Marketers have started to use AR to promote products via interactive AR applications.
Support with complex tasks: Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of view.
Navigation devices: AR can augment the effectiveness of navigation devices for a variety of applications.
Industrial applications: AR can be used to compare the data of digital mock-ups with physical mock-ups for efficiently finding discrepancies between the two sources.
Military and emergency services: AR can be applied to military and emergency services as wearable systems to provide information such as instructions, maps, enemy locations, and fire cells.
Prospecting: In the fields of hydrology, ecology, and geology, AR can be used to display an interactive analysis of terrain characteristics.
Art: AR can be incorporated into artistic applications that allow artists to create art in real time over reality, such as painting, drawing and modelling.
Architecture: AR can be employed to simulate planned construction projects.


Sightseeing: Models may be created that include labels or text related to the objects or places visited. With AR, users can rebuild ruins, buildings, or even landscapes as they previously existed.
Collaboration: AR can help facilitate collaboration among distributed team members via conferences with real and virtual participants.
Entertainment and education: AR can be used in the fields of entertainment and education to create virtual objects in museums and exhibitions, theme park attractions, etc.

Conclusion:
Thus, augmented reality is a growing technology. One area where a breakthrough is required is tracking an HMD outdoors at the accuracy required by augmented reality; if this is achieved, several interesting applications will become possible. Augmented reality is thus very useful.

References:
Taxonomy of Mixed Reality Visual Displays, P. Milgram and A. F. Kishino
How Augmented Reality Works, Kevin Bonsor
Augmented reality set for major growth, C. Harnick
Technology enhanced learning and augmented reality, A. Dias

Future applications:
Expanding a PC screen into the real environment
Virtual devices of all kinds
Enhanced media applications
Replacement of cellphone and car navigator screens
Virtual plants, wallpapers, panoramic views, artwork, decorations, illumination etc., enhancing everyday life
With AR systems getting into the mass market, we may see virtual window dressings, posters, traffic signs, Christmas decorations, advertisement towers and more. These may be fully interactive even at a distance, by eye pointing for example.
Virtual gadgetry becomes possible.
Subscribable group-specific AR feeds. For example, a manager on a construction site could create and dock instructions, including diagrams, in specific locations on the site. The workers could refer to this feed of AR items as they work.
AR systems can help the visually impaired navigate in a much better manner (combined with text-to-speech software).
Computer games which make use of position and environment information to place virtual objects, opponents, and weapons overlaid on the player's visual field.


Claytronics
Arun K, Joseph Mattamana
Electronics And Communication Department College Of Engineering, Thiruvananthapuram

Abstract
Claytronics is a concept for the future which aims to break the barriers in transferring and transforming tangible 3D objects. The basic idea is to make an object composed of millions of programmed nanoscale robots and to move them relative to each other in a controlled, coordinated manner to change the shape and other properties of the body. Claytronics consists of individual components called claytronic atoms, or catoms. As the actual hardware has to manipulate itself into whatever form is desired, each catom should consist of a CPU, a network device for communication, a single-pixel display, sensors, a means to adhere to other catoms, and a power source. Organizing all of the communication and actions between millions of catoms also requires highly advanced algorithms and programming languages. This idea is broadly referred to as programmable matter. Claytronics has the potential to greatly affect many areas of daily life, such as telecommunication, human-computer interfaces and entertainment.

Introduction
Claytronics is a form of programmable matter that takes the concept of modular robots to a new extreme and is expected to bring a new revolution to the communication sector. The concept of modular robots has been around for some time; in general, the goal of these projects was to adapt to the environment to facilitate, for example, improved locomotion. One of the primary goals of claytronics is to form the basis for a new media type, pario. Pario, a logical extension of audio and video, is a media type used to reproduce moving 3D objects in the real world. A direct result of this goal is that claytronics must scale to millions of micron-scale units. Having scaling (both in number and size) as a primary design goal impacts the work significantly. The long-term goal is to render physical artifacts with such high fidelity that our senses will easily accept the reproduction as the original. When this goal is achieved we will be able to create an environment, which could be called synthetic reality, in which a user can interact with computer generated artifacts as if they were the real thing. Synthetic reality has significant advantages over virtual reality or augmented reality: there is no need for the user to wear any form of sensory augmentation (e.g., head mounted displays or haptic feedback devices), and the user will be able to see, touch, pick up, or even use the rendered artifacts.

Claytronics is made up of individual components, called catoms (for claytronic atoms), that can move in three dimensions (in relation to other catoms), adhere to other catoms to maintain a 3D shape, and compute state information (with possible assistance from other catoms in the ensemble). Each catom is a self-contained unit with a CPU, an energy store, a network device, a video output device, one or more sensors, a means of locomotion, and a mechanism for adhering to other catoms. A claytronics system forms a shape through the interaction of the individual catoms. For example, suppose we wish to synthesize a physical copy of a person. The catoms would first localize themselves with respect to the ensemble. Once localized, they would form a hierarchical network in a distributed fashion. The hierarchical structure is necessary to deal with the scale of the ensemble; it helps to improve locality and to facilitate the planning and coordination tasks. The goal (in this case, mimicking a human form) would then be specified abstractly, perhaps as a series of snapshots or as a collection of virtual deforming forces, and then broadcast to the catoms. Compilation of the specification into local actions would then provide each catom with a local plan for achieving the desired global shape. At this point, the catoms would start to move around each other using forces generated on board, either magnetically or electrostatically, and adhere to each other using, for example, a nanofiber-adhesive mechanism. Finally, the catoms on the surface would display an image, rendering the colour and texture characteristics of the source object. Except for taste and smell, it would be an exact replica: for the other three senses there would be no difference between the original and the replica. If the source object begins to move, a concise description of the movements would be broadcast,

Fig.1. Creating a claytronics replica of the man


allowing the catoms to update their positions by moving around each other. The end result will be a real-time replica of the object, and thus the next leap in the communication industry.

Claytronic Hardware
A fundamental requirement of claytronics is that the system must scale to very large numbers of interacting catoms; the hardware part of the project deals with the design of the catoms.

Fig.2. Comparison of catoms designed at various scales

The design of catoms should be simple, and each will have at least the following four capabilities:
1) Computation: It is believed that catoms could take advantage of existing microprocessor technology. Given that some modern microprocessor cores are now under a square millimeter, a reasonable amount of computational capacity should fit on the several square millimetres of surface area potentially available in a 2 mm-diameter catom.
2) Motion: Although they will move, catoms will have no moving parts. This will enable them to form connections much more rapidly than traditional micro robots, and it will make them easier to manufacture in high volume. Catoms will bind to one another and move via electromagnetic or electrostatic forces, depending on the catom size. Imagine a catom that is close to spherical in shape, and whose perimeter is covered by small electromagnets. A catom will move itself around by energizing a particular magnet and cooperating with a neighbouring catom to do the same, drawing the pair together. If both catoms are free, they will spin equally about their axes, but if one catom is held rigid by links to its neighbours, the other will swing around the first, rolling across the fixed catom's surface and into a new position. Electrostatic actuation will be required once catom sizes shrink to less than a millimeter or two. The process will be essentially the same, but rather than electromagnets, the perimeter of the catom will be covered with conductive plates. By selectively applying electric charges to the plates, each catom will be able to move relative to its neighbours.
3) Power: Catoms must be able to draw power without having to rely on a bulky battery or a wired connection. Under a novel resistor-network design the researchers have developed, only a few catoms must be connected in order for the entire ensemble to draw power. When connected catoms are energized, this triggers active routing algorithms which distribute power throughout the ensemble.
4) Communications: Communications is perhaps the biggest challenge that researchers face in designing catoms. An ensemble could contain millions or billions of catoms, and because of the way in which they pack, there could be as many as six axes of interconnection.

At present a lot of emphasis is placed on the hardware, and with the development of nanotechnology the hardware will become a reality; the next challenge is the software.

Fig.3. Nano-scale MEMS sphere (above) and planar catoms (below)


The following are some catom prototypes:
Planar catoms
Electrostatic latches
Stochastic catoms
Giant helium catoms
MEMS spheres
In the future, with the development of nanotechnology, the hardware hurdle will be crossed, and the next hurdle will be software.

Claytronic Software
The usual programming languages like C++ or Java are not suitable for a massively distributed system composed of numerous resource-limited catoms. It is also difficult to reason about programs written in these languages, and debugging errors is even harder; for this, a special high-level language with a more abbreviated syntax and a different style of command is required. The goal of a claytronics matrix is to dynamically form three-dimensional shapes. However, the vast number of catoms in this distributed network makes micro-management of each individual catom too complex, so each catom must perceive accurate position information and coordinate its cooperation with its neighbors. In this environment, a software language for operating the matrix must convey concise statements of high-level commands so that they can be universally distributed. Specifically for this purpose, two new programming languages are being developed: 1) Meld and 2) Locally Distributed Predicates (LDP).

Meld
Meld is a declarative language, a logic programming language developed for programming catoms. By using logic programming, the code for an ensemble of robots can be written from a global perspective, enabling the programmer to concentrate on the overall performance of the claytronics matrix rather than writing individual instructions for every one of the thousands to millions of catoms in the ensemble. This dramatically simplifies the thought process for programming the movement of a claytronics matrix, and Meld programs also consume about 20 times less memory than equivalent C++ programs.

Meld uses a collection of facts and a set of production rules for combining existing facts to produce new ones. Each rule specifies a set of conditions (expressions relating facts and pieces of facts) and a new fact that can be proven (i.e., generated safely) if these conditions are satisfied. As a program is executed, the facts are combined to satisfy the rules and produce new facts, which are in turn used to satisfy additional rules. This process, called forward chaining, continues until all provable facts have been proven. A logic program therefore consists of the rules for combining facts, while the execution environment is the set of base facts that are known to be true before execution begins.

Example: a walking program. Let
Dist(S,D) - gives the distance D between the target and catom S
At(S,P) - gives the current location of S, where P = (X,Y)
Farther(S,D) - states that S is farther from the target than D
destination() - gives the target destination
Neighbor(S,T) - states that S and T are neighbours
MoveAround(S,T,U) - causes S to roll around the outside of T until it touches U

The program's rules over these predicates are applied repeatedly and the positions updated; this continues until the target destination is reached. This is a very simple example which is not used in practice as such; when practically implemented, many more additions are required for better accuracy, reliability and efficiency.
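The Meld rules themselves are not reproduced in this text. As a rough illustration of the forward-chaining execution model described above, the Python sketch below repeatedly applies a rule to a set of base facts until no new facts can be derived. The single rule used here (deriving Farther facts from distances) is an invented stand-in for demonstration, not part of Meld or of the paper's walking program.

# Toy forward chaining: repeatedly apply rules to known facts until
# no new facts can be derived. Facts are tuples like ("dist", "S1", 4.0).
facts = {("dist", "S1", 4.0), ("dist", "S2", 2.5), ("neighbor", "S1", "S2")}

def rule_farther(fs):
    """Invented rule: Farther(A, B) if Dist(A) > Dist(B) for neighbors A, B."""
    derived = set()
    dist = {f[1]: f[2] for f in fs if f[0] == "dist"}
    for f in fs:
        if f[0] == "neighbor":
            a, b = f[1], f[2]
            if dist.get(a, 0) > dist.get(b, 0):
                derived.add(("farther", a, b))
    return derived

changed = True
while changed:                       # forward-chaining loop
    new = rule_farther(facts) - facts
    changed = bool(new)
    facts |= new

print(sorted(facts))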

Fig.4. Program length comparison, C++ vs Meld
Fig.5. Illustration of robots collectively walking


Fig.6. Messages sent (shows efficiency), comparison of C++ vs Meld

Locally Distributed Predicates (LDP)

Traditional imperative programming languages, such as C/C++ and Java, do little to address the ensemble-level issues involved in programming modular robots. These languages are inherently oriented towards a single processing node, and require significant additional effort when used in a distributed setting. In addition to creating a representation of the data needed for an algorithm, the programmer must determine what information is available locally and what must be obtained from remote nodes, the messages and protocol used to transfer this data, mechanisms to route or propagate information through multiple hops as needed, and a means to ensure the consistency of this data. Furthermore, in algorithms to control ensembles, it is often necessary to express and test conditions that span multiple modules. Languages that constrain the programmer to the perspective of a single node make such algorithms difficult to implement.

Related work in modular robot programming can be roughly divided into three categories: logical declarative languages for programming distributed systems, reactive programming techniques for robots, and functional approaches with roots in sensor network research. Meld is the solution for the first category and LDP is the solution for the latter two. In contrast to classical global predicate evaluation, which attempts to detect conditions over entire distributed systems, LDP operates on fixed-size, connected subgroups of modules. The advantages of such an approach are twofold. First, searching in fixed-size, connected subgroups is a significantly less expensive operation than searching the entire ensemble, allowing more searches to be executed more frequently. Second, the notion of small, connected groups of modules reflects the natural structure of distributed programs written for large modular robots, where global decisions are expensive and rare.

The LDP technique:
1) LDP Syntax: An LDP program consists of data declarations and a series of statements, each of which has a predicate clause and a collection of action clauses. When a predicate matches on a particular sub-ensemble, the actions are carried out on that sub-ensemble. LDP has no explicit control structures, such as looping or function calls, though these can be emulated with the use of flag and counter variables. Each predicate begins with a declaration for each module involved in the statement. These modules are searched for in the order listed and, most importantly, there must be a path between all modules in a matching sub-ensemble. The condition itself is composed of numeric state variables (expressed as module.variableName), temporal offsets (the operators prev() and next()), and topology restrictions (via the neighbor relation neighbors(moduleA,moduleB)). The core language of LDP extends the condition grammar of distributed watchpoints with the addition of set variables (variables prefixed with a $ are set variables, as in moduleName.$setVar) and the requisite operators for manipulating these variables.
2) Distributed Predicate Detection: The core of the LDP execution model is the Pattern Matcher, a mobile data structure that encapsulates one distributed search attempt for a particular statement. This object migrates around the sub-ensemble until it either fails to match or matches. Every Pattern Matcher contains an expression tree, which encodes the Boolean condition that the LDP is attempting to match. This expression tree contains storage for state variable values, to allow for comparison of state between multiple modules. Pattern Matchers provide numerous opportunities for optimization, allowing for Boolean short-circuiting as well as more intelligent search strategies than spreading to all neighbors. Additionally, Pattern Matchers allow for backtracking in search paths.
3) Triggering Actions: By themselves, distributed watchpoints were insufficient to serve as a programming language, as they could not trigger arbitrary actions on predicate matches. For LDP, a final clause is added to the predicate: the trigger. Three types of triggers are defined: (1) setting a state variable to a value, (2) changing the topology of the system, and (3) calling an arbitrary function implemented by the robot's runtime. Any predicate may have more than one trigger action; however, all the actions must be executed on the same module. This eliminates the need for locking or synchronization across multiple actions and/or modules.
4) Implementing LDP: Using LDP in any given modular robotic system is straightforward. The system must call an LDP initialization function to set up various data structures. The runtime requires the implementation of three basic routines which (1) enumerate a module's current neighbors, (2) transmit Pattern Matchers between neighboring robots, and (3) invoke the statement threads at appropriate intervals. Finally, the system must ensure that incoming LDP messages trigger the appropriate callback. Each application that uses LDP must additionally implement variable initialization, access, and modification for any state variables used in the program. The programmer must also implement any custom library functions that will be called from LDP actions.

Example

Fig.7. A sample claytronic 2D matrix

forall(a,b) where (a.val=1) & (b.val=3)
This is true for three cases in the matrix shown in Fig.7. We can initiate some action whenever this happens, so if we want we can avoid the situation or have some other statement executed.
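As a rough, centralized illustration of the kind of condition this statement expresses (the real LDP evaluation is distributed via Pattern Matchers, and only requires the modules to be connected rather than adjacent), the Python sketch below scans a small grid of values and reports every pair of 4-connected neighbours whose values are 1 and 3. The grid contents are invented for this example and do not correspond to Fig.7.

# Toy grid of module state values; contents are invented for illustration.
grid = [
    [1, 3, 0],
    [2, 1, 3],
    [1, 3, 2],
]

def matches(grid):
    """Yield pairs of 4-connected neighbour coordinates with values 1 and 3."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):          # right and down neighbours
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    a, b = grid[r][c], grid[nr][nc]
                    if {a, b} == {1, 3}:
                        yield (r, c), (nr, nc)

for pair in matches(grid):
    print("predicate holds for modules at", pair)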

Algorithms
For implementing programmable matter one more step is required: developing efficient algorithms. Two important classes of claytronics algorithms are shape sculpting and localization algorithms. The ultimate goal of claytronics research is creating dynamic motion in three-dimensional poses. All the research on catom motion, collective actuation and hierarchical motion planning requires shape sculpting algorithms to convert catoms into the necessary structure, which will give structural strength and fluid movement to the dynamic ensemble. Meanwhile, localization algorithms enable catoms to localize their positions in an ensemble; a localization algorithm should provide accurate relational knowledge of catoms to the whole matrix, based on noisy observations, in a fully distributed manner. Some promising algorithms are:
1) Hole Motion: This is a novel shape formation algorithm, inspired by holes in semiconductors, for ensembles of 2-dimensional lattice-arrayed modular robots, based on the manipulation of regularly shaped voids within the lattice (holes).


The algorithm is massively parallel and fully distributed. Constructing a goal shape requires time proportional only to the complexity of the desired target geometry. Construction of the shape by the modules requires neither global communication nor broadcast floods after distribution of the target shape. This can be extended to 3D as well in the future. For expansion, holes are injected, while for contraction, holes are ejected.

Medicine: A replica of your physician could appear in your living room and perform an exam. The virtual doctor would precisely mimic the shape, appearance and movements of your "real" doctor, who is performing the actual work from a remote office.
Disaster relief: Human replicas could serve as stand-ins for medical personnel, fire-fighters, or disaster relief workers. Objects made of programmable matter could be used to perform hazardous work and could morph into different shapes to serve multiple purposes: a fire hose could become a shovel, a ladder could be transformed into a stretcher.
Sports instruction: A renowned tennis teacher, golf instructor, or soccer coach could "appear" at clinics in multiple locations.
Entertainment: A football game, ice skating competition or other sporting event could be replicated in miniature on your coffee table. A movie could be recreated in your living room, and you could insert yourself into the role of one of the actors.
3D physical modelling: Physical replicas could replace 3D computer models, which can only be viewed in two dimensions and must be accessed through a keyboard and mouse. Using claytronics, you could reshape or resize a model car or home with your hands, as if you were working with modelling clay. As you manipulated the model directly, aided by embedded software similar to the drawing tools found in office software, the appropriate computations would be carried out automatically. You would not have to work at a computer at all; you would simply work with the model. Using claytronics, multiple people at different locations could work on the same model: as a person at one location manipulated the model, it would be modified at every location.
3D printers and fax: Send a 3D object as signals and receive it on the other side.
Some changes may occur in our basic lifestyle itself with the emergence of claytronics. You could carry a tab with a large display in your pocket and simply expand it when you need it, or, if unexpected people arrive at a meeting, convert part of the table into a chair (the table might simply become a few millimetres thinner).

Fig.8. expansion and contraction using hole motion

2) Metamodules: The idea is to have catoms form larger structures (modules), which help change shape through localized movements of catoms. Modules can expand by creating hollow space inside themselves, and similarly contract by filling the hollow space. The algorithm is easy to implement because of its localized nature.

Fig.9. metamodule expanding to double the size

3) Collective Actuation: This is a method inspired by the muscles of our body and the way they flex. Here the shape change is achieved by coordinated movements of a stack of catoms. This could be used to mimic muscle-like movements in pario communication.

Fig.10. shape changing by collective actuation

Future Applications of Claytronics


The potential applications of dynamic physical rendering are limited only by the imagination. As the capabilities of computing continue to develop and robotic modules shrink, claytronics will become useful in many applications. Following are a few of the possibilities:


Conclusion
Expect the revolution to occur anywhere from a few years to half a century from now; only a few barriers remain to be broken, and then humanity will not look back. As giant companies like Intel compete to crack the problems of nanotechnology and as the algorithms improve, this will become a reality.

References
1. www.cs.cmu.edu/~claytronics/
2. en.wikipedia.org/wiki/Claytronics
3. techresearch.intel.com/articles/Exploratory/1500.htm
4. http://www.postgazette.com/pg/05136/505033.stm
5. www.jumpingelectrons.com/Science/Claytronics-Synthetic-Reality.asp
6. www.youtube.com/watch?v=bcaqzOUv2Ao


Controlling Household Appliances Using Digital Pen and Paper


Arun Antony
Electronics And Communication Department Govt. Engineering College, Barton Hill

Abstract
User interfaces to control networked household appliances are often inadequate: either they are too simplistic, or they are too complex, like PC-based interfaces. A user interface that supports complex functionality and is easy to use is therefore required. In this report, digital pen and paper is presented as a suitable interface. Three components make up the system: pen, paper and software. Digital pen and paper technology captures handwritten data and transfers it instantaneously to the digital world. Digital pens are marginally larger than their traditional counterparts; each records every pen stroke with a built-in camera and stores it. Digital paper is ordinary paper with a unique pattern printed on it, consisting of dots barely visible to the human eye, called the Anoto pattern. The stored information is sent via Bluetooth to the local area network, from where it is sent to the household for control. The whole approach is integrated with an Open Services Gateway initiative (OSGi) gateway, which gives flexibility to appliances using different protocols.

Networked Appliances
Networked appliances are typically deployed inside the home and are connected to a local area network (LAN). These appliances use a large number of protocols to communicate, such as Universal Plug and Play (UPnP), X-10, Bluetooth, Home Audio Video Interoperability (HAVi) and Multimedia Home Platform (MHP), so appliances using different protocols cannot communicate with each other. This issue is solved by a residential gateway, which acts as glue, or a bridge, between protocols. The residential gateway links the home with the internet and offers services such as network address translation and firewall functionality. The gateway also hosts the software services which control the appliances.

Digital Pen and Paper

The Paper
To properly use the digital functions of the pen, writing has to occur on specially prepared paper. The paper consists of dots barely visible to the naked eye, which form a grid and act as map coordinates pinpointing the exact location of every pen stroke on the paper. The dot pattern is like a two-dimensional barcode with small dots spaced 0.3 mm apart. The complete pattern space is divided into various domains, and each domain is given a special purpose such as memo formatting, personal planning, notebook paper, etc. The Anoto pattern can be printed onto any paper using a standard printing process with at least 600 dpi resolution and carbon-based black ink.

Digital Pen
Apart from the mundane duty of writing in ink, the digital pen records every pen stroke through a built-in camera. When we write with the digital pen, snapshots of the writing are taken at about 50 shots per second. Every snapshot contains enough information to calculate the exact position of the pen. Recorded pen strokes, including drawings and sketches, are stored in the pen's memory, which holds 856 KB of stroke data, or the equivalent of about 40 written pages. Memory use as well as battery life is indicated


by two separate light indicators. The battery used is a lithium-ion battery with up to 3 hours of charge retention. The pen is recharged in a cradle that hooks up to a personal computer via USB; this cradle also acts as an automatic download point for recorded pen strokes. The available pen brands are Logitech IO pens, Maxwell digital pens, Adapx pens and the Nokia SU-1B.

How Does It Work


Once a page is completed, the information is transferred to the mobile phone using Bluetooth, from where it is sent to the service provider. The service provider processes the information and sends it to the target destination, which is the user's home. After a document is downloaded onto a PC, it can be stored, modified or even e-mailed automatically. Text conversion can also be accomplished: written notes can be converted to a text file with about 90% accuracy. The system also has a second piece of software, a text (handwriting) recognition module, with which the pen identifies its user's handwriting so that a second person will not be able to use it. Each pen is assigned a unique ID, and while transferring data the pen's ID is sent along with it in order to find the correct destination. The mobile phone that transfers this information should support Bluetooth 1.1 and the Bluetooth dial-up networking (DUN) and object push (OPP) profiles. The mobile acts like a modem and sends the data on to the service provider using GPRS.
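As a purely illustrative sketch of the routing step described above, the following toy Python model tags a stroke record with the pen's unique ID and uses that ID to pick the destination gateway. It is not the actual Anoto or service-provider software; every name, field and address in it is hypothetical.

from dataclasses import dataclass

@dataclass
class StrokeMessage:
    pen_id: str          # unique ID stored in the pen, sent with every upload
    page_domain: str     # which Anoto pattern domain the strokes came from
    command: str         # interpreted command, e.g. "living_room_lamp: on"

# Service-provider side: pen ID -> registered home gateway address (hypothetical).
GATEWAY_BY_PEN = {"PEN-0001": "home-gateway.example:8080"}

def route(message: StrokeMessage) -> str:
    # Pick the destination gateway for an incoming stroke message.
    gateway = GATEWAY_BY_PEN.get(message.pen_id)
    if gateway is None:
        raise ValueError("unknown pen - ID check failed")
    return f"forward '{message.command}' to {gateway}"

print(route(StrokeMessage("PEN-0001", "appliance-control", "living_room_lamp: on")))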

The OSGi specifications create an open standard for services that bridges the external and internal networks, linking client devices in the home or office to external service providers. They provide a central point from which services can be deployed and managed. OSGi is neutral with respect to the services it offers; it is only concerned with the remote management of services, such as installing, starting, stopping and updating them, without the need to restart the gateway. Thus OSGi has a built-in ability to integrate a wide range of different network protocols and allows applications to control appliances within the home without being aware of the underlying communication protocols.

OSGi - Open Services Gateway Initiative

In our homes we use a number of appliances with different protocols, so for the proper functioning of the pen and paper technology there should be a system which can integrate all the protocols. This system is the Open Services Gateway initiative (OSGi). It is an industry group working to define an open standard for connecting the next generation of smart consumer and small-business appliances with commercial internet services. As residential telecom and datacom services converge, homes and small offices will be equipped with service gateways that function as the platform for communication-based services.

The Framework
The core component of the OSGi specifications is the OSGi Framework. The framework provides a standardized environment to applications (called bundles) and is divided into a number of layers:
L0: Execution Environment
L1: Modules
L2: Life Cycle Management
L3: Service Registry
A ubiquitous security system is deeply interwoven with all the layers. The L0 execution environment is the specification of the Java environment.


Java 2 configurations and profiles, such as J2SE, CDC, CLDC and MIDP, are all valid execution environments. The OSGi platform has also standardized an execution environment based on the Foundation Profile, and a smaller variation that specifies the minimum requirements on an execution environment to be useful for OSGi bundles.


Modules
The L1 Modules layer defines the class-loading policies. The OSGi Framework has a powerful and rigidly specified class-loading model; it is built on top of Java but adds modularization. In Java there is normally a single class path that contains all the classes and resources. The OSGi Modules layer adds private classes for a module as well as controlled linking between modules. The module layer is fully integrated with the security architecture, enabling the option to deploy closed systems, walled gardens, or completely user-managed systems at the discretion of the manufacturer.

Life Cycle
The L2 Life Cycle layer adds bundles that can be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules in run time. The lifecycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture, making it virtually impossible to be attacked by viruses.

Service Registry
The L3 layer adds a Service Registry. The service registry provides a cooperation model for bundles that takes these dynamics into account. Bundles can cooperate via traditional class sharing, but class sharing is not very compatible with dynamically installing and uninstalling code. The service registry therefore provides a comprehensive model to share objects between bundles, and a number of events are defined to handle the coming and going of services. Services are Java objects that can represent anything. Many services are server-like objects, such as an HTTP server, while other services represent an object in the real world, such as a nearby Bluetooth phone. The service model is fully security instrumented: the service security model provides an elegant way to secure the communication between bundles.
OSGi is creating an end-to-end service delivery architecture to enable the home and small-business markets for internet and e-commerce services such as security alarm and safety services, energy management and metering services, entertainment services, health care and patient monitoring services, appliance monitoring and repair services, home automation and networking services, and one-point internet access. Power companies can deliver energy management and load management for homes and businesses. Utility providers such as gas, water and electricity companies will have automated meters. Home security is improved: messages can be sent to mobile phones with simultaneous triggering of alarms, so the police are informed. Care for an elderly parent can be enhanced through low-cost patient-monitoring devices that continuously transmit critical-care or emergency information to a hospital.

Bibliography
[1] www.ieee.org
[2] www.anotopattern.com
[3] www.logitech.com
[4] www.havi.com
[5] www.mhp.com
[6] www.ea.knowledge.com
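The OSGi service registry itself is a Java framework; purely as a conceptual illustration of the register/look-up/unregister pattern with service events described above, here is a minimal Python analogue (all names hypothetical, not the OSGi API).

class ServiceRegistry:
    def __init__(self):
        self._services = {}      # interface name -> service object
        self._listeners = []     # callbacks notified of REGISTERED / UNREGISTERING

    def add_listener(self, callback):
        self._listeners.append(callback)

    def register(self, name, service):
        self._services[name] = service
        for cb in self._listeners:
            cb("REGISTERED", name)

    def get(self, name):
        return self._services.get(name)   # None if the service has gone away

    def unregister(self, name):
        if name in self._services:
            for cb in self._listeners:
                cb("UNREGISTERING", name)
            del self._services[name]

registry = ServiceRegistry()
registry.add_listener(lambda event, name: print(event, name))
registry.register("LampControl", object())   # a bundle publishes a service
lamp = registry.get("LampControl")           # another bundle looks it up
registry.unregister("LampControl")           # departure is announced to listeners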


Fire-Fighting Robot
Abhimanyu Sreekumar and Murali Krishnan
S8 Department of Electrical and Electronics Mohandas College of Engineering and Technology

Abstract
The security of homes, laboratories, offices, factories and buildings is important to human life. We are developing an intelligent, multi-sensor-based fire-fighting robot for daily life. The fire detection system uses four flame sensors on the robot, and the fire detection and fighting procedure is programmed using a sensor-based method. The robot receives detection signals and, if a fire is confirmed, uses the four flame sensors to locate the fire source by the proposed method and moves to the source to fight the fire with its extinguisher. It is more advantageous than a smoke-detector-based system because it can extinguish the fire at its inception rather than waiting for an object to burn and produce smoke; when a smoke detector triggers, water is sprayed all over the place instead of at the particular point of the source. The robot autonomously detects and extinguishes fire without human aid. The fire sensor used is the IRD300 and the microcontroller is the PIC16F877. Two motor driving units are used, one for the movement of the robot and the other for the water pump. Two stepper motors from the VEGA Robo Kit, rated 100 rpm at 12 V, are at the sides, with one free ball-bearing wheel at the front of the robot, while a 12 V DC motor pumps the water. The robot detects fire with the fire sensor and moves towards it; after reaching a certain distance from the fire, it sprays water at the fire, thereby extinguishing it, and then returns to its initial position.

Objectives
The security of homes, laboratories, offices, factories and buildings is important to human life. We are developing an intelligent, multi-sensor-based fire-fighting robot for daily life. We design the fire detection system using four flame sensors, programmed for the fire detection and fighting procedure using a sensor-based method, and a low-cost obstacle detection module using IR sensors and ultrasonic sensors on the mobile robot. The robot receives detection signals; if a fire is confirmed, it uses the four flame sensors to locate the fire source by the proposed method and moves to the source to fight the fire using its extinguisher.

Justification
It is more advantageous than a smoke-detector-based system because it can extinguish the fire at its inception rather than waiting for an object to burn and produce smoke. When a smoke detector triggers, water is sprayed all over the place instead of at the particular point of the source. The robot autonomously detects and extinguishes fire without human aid.

3. Methodology


Fire sensor (IRD300): a photodiode that detects flame with a radiation flux density of 5 mW/cm2 and a colour temperature of 2870 K. The sensor has a range of up to 1 metre.
Microcontroller (PIC16F877A): PIC stands for Peripheral Interface Controller. The PIC uses the Harvard architecture, and the 16F87X series contains flash memory. The Harvard architecture keeps program and data memory separate and accesses them over separate buses, which improves speed over the traditional Von Neumann architecture.
Motor driving units: two motor driving units are used, one for the movement of the robot and the other for the water pump. The motor driver IC used for movement is the L293D and for pumping water the ULN2003.
Buzzer: sounds an alarm when fire is detected by the robot.
LCD display: a 16x2 line display used to show the name of the room where the fire broke out.
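The decision logic described above can be summarised in a short, hardware-agnostic sketch. The real firmware runs on the PIC16F877 and drives the L293D and ULN2003 directly; the Python below only illustrates the sensor-to-action mapping, with normalised sensor values and thresholds that are illustrative assumptions, not the actual firmware.

FLAME_THRESHOLD = 0.3   # assumed normalised reading above which we treat it as a fire
NEAR_THRESHOLD = 0.9    # assumed reading above which we are close enough to spray

def decide(front, left, right, back):
    # Map four flame-sensor readings to (left_motor, right_motor, pump, buzzer).
    strongest = max(front, left, right, back)
    if strongest < FLAME_THRESHOLD:
        return (0, 0, False, False)      # no fire: idle
    if strongest >= NEAR_THRESHOLD:
        return (0, 0, True, True)        # at the source: stop and spray
    if strongest == front:
        return (1, 1, False, True)       # drive forward toward the flame
    if strongest == left:
        return (-1, 1, False, True)      # turn left
    if strongest == right:
        return (1, -1, False, True)      # turn right
    return (-1, 1, False, True)          # flame behind: rotate in place

# Example: ambient only, flame to the right, flame directly ahead and close.
for reading in [(0.1, 0.05, 0.1, 0.0), (0.2, 0.1, 0.6, 0.1), (0.95, 0.3, 0.3, 0.1)]:
    print(reading, "->", decide(*reading))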


Applications
Different kinds of accidents are possible in a tunnel, but accidents involving fire are the most dangerous of all. If it is not possible to extinguish the fire within minutes, it becomes so hot that human life is at risk. One of the biggest fears among emergency personnel who respond to tunnel fires is the possibility of finding hazardous material fuelling the fire. In such situations it is best to leave the job to robots.

Conclusions
If we do this innovative project on a large scale, it will surely save many lives. As we go into the future, we will be entering a technological era where humans and robots co-exist.

References
1. Chien, T.L.; Guo, H.; Su, K.L.; Shiau, S.V.: "Develop a Multiple Interface Based Fire Fighting Robot", Mechatronics, ICM2007 4th IEEE International Conference on. DOI: 10.1109/ICMECH.2007.4280040, 2007, pp. 1-6.
2. Su, K.L.: "Automatic Fire Detection System Using Adaptive Fusion Algorithm for Fire Fighting Robot", Systems, Man and Cybernetics, 2006 (SMC '06), IEEE International Conference on, Volume 2. DOI: 10.1109/ICSMC.2006.384525, 2006, pp. 966-971.
3. Altaf, K.; Akbar, A.; Ijaz, B.: "Design and Construction of an Autonomous Fire-Fighting Robot", Information and Emerging Technologies, 2007 (ICIET 2007), International Conference on. DOI: 10.1109/ICIET.2007.4381341, 2007, pp. 1-5.
4. Amano, H.: "Present Status and Problems of Fire Fighting Robots", SICE 2002, Proceedings of the 41st SICE Annual Conference, Volume 2. DOI: 10.1109/SICE.2002.1195276, 2002, pp. 880-885.


Fully Automatic Road Network Extraction From Satellite Images


Sameeha S & Sreedevi D V
S8, Applied Electronics and Instrumentation LBS Institute of Technology for Women, Poojapura, TVM

Abstract
Our paper deals with the automatic detection of roads in satellite images. The suggested approach comprises preprocessing the satellite image via a series of wavelet-based filter banks, each associated with the frequency response of the corresponding FIR filter. We use the à trous algorithm twice, with two different wavelet bases, in order to filter and denoise the satellite image. The resulting two images are fused into a single image of the same size as the original using the Karhunen-Loève transform (KLT), which is based on principal component analysis (PCA). A fuzzy inference algorithm is then used to detect roads based on statistical information and on geometry; it classifies each pixel as road or non-road according to the fuzzy inference rules, yielding a binary image. This output can then be fed to a geographical information system (GIS) for cartographic or other purposes.

Introduction
Our paper deals with a fully automatic road detection algorithm. Road detection algorithms can be classified into two major groups: semi-automatic and automatic. The first approach requires the user to specify some initial conditions, usually in the form of seed points entered manually by a human operator through some graphical user interface (GUI). The second approach, which is fully automatic, does not require input from an operator and works on its own. Here a fully automatic approach is implemented. The suggested approach comprises preprocessing the image via a series of wavelet-based filter banks, reducing the resulting data into a single image of the same size as the original satellite image, and then utilizing a fuzzy inference algorithm to carry out the road detection, whose output can be used as input to a geographical information system (GIS) for cartographic or other purposes. We use the à trous algorithm twice, with two different wavelet bases, in order to filter and de-noise the satellite image. Each wavelet function resolves features at a different resolution level associated with the frequency response of the corresponding FIR (finite impulse response) filter. The resulting two images are fused together using the Karhunen-Loève transform (KLT). This process underlines the prominent features of the original image as well as denoising it, since the prominent features appear in both wavelet-transformed images while noise does not correlate well between high- and low-resolution scales, as it lacks coherence. On the image obtained through wavelet filtering and KLT, road detection is carried out using a fuzzy-logic inference algorithm. The linguistic variables used for this task are the mean and standard deviation, which are computed within a 5x5-pixel image window, together with another linguistic variable based on geometry. The inference algorithm then classifies each pixel as road or non-road according to the fuzzy inference rules, yielding a binary image. Besides road detection, fuzzy logic is a powerful and intuitive (in the sense of human-friendliness) tool for identifying other features such as runways, moving and/or stationary targets, other man-made objects on earth, or any combination of these. Parallelizability of a detection algorithm is consequently of utmost importance should it be intended for military use. The images used have 512 x 512 pixel resolution, with a spatial resolution on the earth's surface of about 1.5 m/pixel for each image, varying only slightly from one image to another.

Wavelet Filtering
Since satellite images are finite-energy (i.e. square-integrable) functions, a wavelet transform exists. The numerical algorithm used to decompose the images is referred to as the algorithme à trous. It is a translation-invariant form of the DWT (discrete wavelet transform) since it does not involve any decimation during down-sampling. Because of its good time-frequency localization characteristics, wavelet analysis finds wide application. The wavelet transform decomposes a signal into a set of basis functions called wavelets, which are obtained from a single prototype, the mother wavelet, by dilations and shifts. The wavelet transform is computed separately for different segments of the time-domain signal at different frequencies; it is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies. Here we use the à trous algorithm. The wavelet basis is given by two functions, a scaling function and a wavelet function, which represent a low-pass filter and a high-pass filter respectively. The wavelet functions used for implementing the wavelet filtering are:
- db1 (the Haar wavelet function)
- db8

Haar Wavelet function


The Haar function is also known as the db1 wavelet. The Haar system is the unique one that satisfies both biorthogonality and symmetry: it has a symmetric scaling function, an antisymmetric wavelet function, a single vanishing moment, and finite support on the interval [0, 1].

Db8 wavelet function


Db8 is a Daubechies wavelet with more coefficients than the Haar wavelet. As the number of coefficients increases, the wavelet becomes smoother.

Wavelet Filtering Algorithm


In an ordinary wavelet decomposition, the image is fed to a low-pass as well as a high-pass wavelet filter. The image, represented as a matrix, is convolved with the FIR coefficients of each filter. First the row-wise convolution takes place, halving the number of columns; this occurs for both the low-pass and the high-pass filter. Each row-filtered output from the two filters is then fed again to the scaling filter and the wavelet filter for column-wise filtering. The result is four output images, each one quarter the size of the original image. This can be continued for any number of levels, the number of levels depending on the resolution required. The problem with this ordinary algorithm is that the image shrinks because of the decimation during down-sampling. This limitation is overcome by the à trous algorithm.

Trous Algorithm
The à trous algorithm is similar to the fast biorthogonal wavelet transform, but without subsampling. For any filter h[n], hj[n] is the filter obtained by inserting 2^j - 1 zeros between each sample of h[n]; inserting zeros in the filters creates holes ("trous" in French). Filtering is then performed in the same way with these new coefficients.

Why Trous Algorithm?
1. No need for down-sampling.
2. It obeys a linear additive reconstruction: the original image is reconstructed by adding the detail coefficients of each level to the smoothed image.
3. Ease of interpretation: if the smoothing operation is stopped at resolution p, reconstruction of the original image I is achieved by adding the detail coefficients wj of each level to the smoothed image cp, i.e. I = cp + (w1 + ... + wp). This simple additive reconstruction formula, together with translation invariance, is the unique convenience of the à trous wavelet transform and accounts for its ease of interpretation; as a consequence of these two properties the algorithm is often employed in object detection. Its only drawback is redundancy, which requires (p + 1) times more storage space than the original image.
4. The convolution can be cast into a discrete form in which j denotes the scale, (x1, x2) are the pixel coordinates, (k1, k2) are the dummy indices over which the summations are made, and g is the FIR filter associated with the scaling function.
Here the à trous algorithm is truncated at the level p = 4, which gives adequate information. The wavelet planes shown are the accumulation of w1...w4; due to the additive reconstruction property this is equivalent to I - c4. In the implementation we therefore compute the coarse-scale image c4 and subtract it from the original image. For Haar, the output image is I1 = I - c4; for db8, the output image is I2 = I - c4 (with h and g taken from the respective wavelet filters).

Karhunen-Loève Transform (KLT)
The Karhunen-Loève transform (KLT) is used for image fusion; it tends to decorrelate the components of a given signal. After the two wavelet transforms we have two images, I1 and I2. We define a matrix X, of size 2 x N^2, whose two rows hold the information of these two images, where vec denotes vectorization of a matrix:
X = [vec(I1); vec(I2)]
Given X, the next step is to compute the Karhunen-Loève transform of X and collapse the information into half along its principal components, calculating first the mean column vector mx = (1/2) * sum of the xi = [m1, ..., m(512^2)].
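The following is a minimal NumPy/PyWavelets sketch of the pipeline just described: an undecimated (à trous) smoothing with two wavelet bases, the detail image I - c4 for each, and a PCA/KLT fusion of the two results. It is a sketch under stated assumptions (filter normalisation, boundary handling), not the authors' implementation.

import numpy as np
import pywt
from scipy.ndimage import convolve1d

def atrous_detail(image, wavelet_name, levels=4):
    # Return I - c_levels, i.e. the accumulated wavelet (detail) planes.
    h = np.array(pywt.Wavelet(wavelet_name).dec_lo, dtype=float)
    h /= h.sum()                              # normalise so smoothing preserves the mean
    c = image.astype(float)
    for j in range(levels):
        hj = np.zeros((len(h) - 1) * 2**j + 1)
        hj[:: 2**j] = h                       # insert 2^j - 1 zeros between the taps
        c = convolve1d(convolve1d(c, hj, axis=0, mode='mirror'),
                       hj, axis=1, mode='mirror')   # separable, no decimation
    return image - c

def klt_fuse(i1, i2):
    # Fuse two equal-sized images along their first principal component.
    x = np.vstack([i1.ravel(), i2.ravel()])           # 2 x N^2 data matrix
    xc = x - x.mean(axis=1, keepdims=True)            # subtract the mean column vector
    cov = xc @ xc.T / x.shape[1]                      # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs[:, np.argmax(eigvals)]                # dominant eigenvector
    if w.sum() < 0:                                   # fix the arbitrary sign
        w = -w
    weights = w / w.sum()                             # assumes strongly correlated inputs
    return weights[0] * i1 + weights[1] * i2

img = np.random.rand(512, 512)                        # stand-in satellite image
i1 = atrous_detail(img, 'db1')                        # Haar basis
i2 = atrous_detail(img, 'db8')                        # db8 basis
fused = klt_fuse(i1, i2)
print(fused.shape)                                    # same size as the original image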


Fuzzy Logic Inference Algorithm


Mathematically, a fuzzy set is defined as a set whose members have a degree of membership, represented by the membership function μ(x). By combining statistical radiometric information with geometric information, the false alarm rate (i.e. assigning road values to non-road pixels) can be greatly reduced. The Hough transform is an excellent tool for detecting the linear features of an image and is quite commonly used in computer vision applications. To reduce the false alarm rate, the fuzzy rule base is developed in the form:
If (Statistical Radiometry) And (Geometry) Then Assign Road
where the statistical radiometry inputs are the mean and standard deviation, and the geometry input comes from the Hough transform.

Statistical Radiometry:
For each pixel, a 5x5-pixel window is centred on the pixel of interest, and within this window the mean and standard deviation are calculated:
Mean: M = (1/n) * sum of the pixel values in the window
Standard deviation: S.D. = [ (1/n) * sum of (pixel value - M)^2 ]^0.5
where n is the number of pixels in the window.

Hough Transform:
The Hough transform is an excellent tool for detecting linear features, and road segments are linear features in the image. The image is transformed from the (x, y) domain into the (rho, theta) domain. For the fuzzy-inference geometry input, instead of transforming the whole image into Hough space, the image is broken into windows and the Hough transform is computed separately in each window. In the Hough domain one then looks for significant peaks, which correspond to strong linear features of the image. By selecting these peaks, the linear features can be reconstructed in the (x, y) domain through an inverse transformation. During this transformation, pixels where linear features are identified are assigned a value of one and the others a value of zero. The mean, standard deviation and Hough transform evaluated for each pixel are then given as inputs to the fuzzy logic system.
Fig. 6. Fused image transformed into the Hough domain.

Fuzzy logic
Fuzzy inference is the process of formulating the mapping from a given input to an output using fuzzy logic.
Step 1. Fuzzify inputs: take the inputs and determine the degree to which they belong to each of the appropriate fuzzy sets via membership functions.
Step 2. Apply the fuzzy operator: if the antecedent of a rule has more than one part, the fuzzy operator is applied to obtain one number that represents the result of the antecedent for that rule.
Step 3. Apply the implication method: the input for the implication process is the single number given by the antecedent, and the output is a fuzzy set. Implication is implemented for each rule.
Step 4. Aggregate all outputs: aggregation is the process by which the fuzzy sets representing the outputs of each rule are combined into a single fuzzy set, one for each output variable.
Step 5. Defuzzify: the centroid method is used for defuzzification.
A set of six rules has been formulated in order to make the decision for the road extraction:
- If Mean is average & S.D. is low & Hough is Line, assign Road.
- If Mean is average & S.D. is high & Hough is Line, assign Road Unlikely.
- If Mean is high & S.D. is low & Hough is Line, assign Road Unlikely.
- If Mean is low & S.D. is low & Hough is Line, assign Road Unlikely.
- If Mean is low & S.D. is high & Hough is Not a Line, assign Not Road.
- If Mean is average & S.D. is high & Hough is Not a Line, assign Not Road.
After defuzzification, pixels are segmented into road and non-road pixels to form a binary image.
Fig. 7. Results of fuzzy classification.
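As an illustration of the six rules above, here is a minimal Mamdani-style sketch in Python. The paper does not give the membership-function shapes or universes, so the triangular memberships, ranges and thresholds below are assumptions chosen only to make the example self-contained.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify_pixel(mean, std, hough_is_line):
    # Fuzzify the inputs (ranges assume 8-bit imagery and are illustrative).
    m_low = tri(mean, -100.0, 0.0, 100.0)
    m_avg = tri(mean, 50.0, 128.0, 200.0)
    m_high = tri(mean, 150.0, 255.0, 360.0)
    s_low = tri(std, -30.0, 0.0, 30.0)
    s_high = tri(std, 20.0, 120.0, 220.0)
    line, not_line = (1.0, 0.0) if hough_is_line else (0.0, 1.0)

    # Rule strengths (min for AND), grouped by output fuzzy set (max for OR).
    road = min(m_avg, s_low, line)
    road_unlikely = max(min(m_avg, s_high, line),
                        min(m_high, s_low, line),
                        min(m_low, s_low, line))
    not_road = max(min(m_low, s_high, not_line),
                   min(m_avg, s_high, not_line))

    # Implication (min), aggregation (max) and centroid defuzzification over a
    # "road-ness" universe in [0, 1].
    y = np.linspace(0.0, 1.0, 101)
    agg = np.maximum.reduce([np.minimum(not_road, tri(y, -0.4, 0.0, 0.4)),
                             np.minimum(road_unlikely, tri(y, 0.2, 0.5, 0.8)),
                             np.minimum(road, tri(y, 0.6, 1.0, 1.4))])
    if agg.sum() == 0.0:
        return 0
    centroid = float((y * agg).sum() / agg.sum())
    return int(centroid > 0.5)   # 1 = road pixel, 0 = non-road pixel

# A homogeneous pixel lying on a Hough-detected line is classified as road.
print(classify_pixel(mean=120.0, std=8.0, hough_is_line=True))    # -> 1
print(classify_pixel(mean=120.0, std=60.0, hough_is_line=False))  # -> 0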

References
1. C. Peng and A. Chen, "Speckle Noise Removal in SAR Image Based on SOT Structure in Wavelet Domain", Geosciences and Remote Sensing Symposium, 2001 (IGARSS '01), IEEE International, 7, pp. 3039-3041.
2. M. S. Moore, "Model Integrated Program Synthesis for Real Time Image Processing", PhD Dissertation, Vanderbilt University, May 1997, Nashville, TN, USA.
3. M. Holschneider, R. Kronland-Martinet, J. Morlet and P. Tchamitchian, "A Real-Time Algorithm for Signal Analysis with the Help of the Wavelet Transform", in Wavelets, Time-Frequency Methods and Phase Space, Springer-Verlag, Berlin, Germany, 1989.
4. S. Mallat, "A Theory of Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Trans. Pattern Anal. Machine Intell., 1989, 11, pp. 674-693.

Conclusion
An automated method for extracting road networks from satellite images has been developed in this project. Wavelet-based filter banks using the Haar and db8 wavelets are used for pre-processing the image. In order to eliminate the disadvantages of an ordinary decomposition, the à trous algorithm is used with two different wavelet bases to filter and de-noise the satellite images, and the Karhunen-Loève transform is employed for image fusion. Road detection is carried out by a fuzzy inference algorithm that mimics human logical reasoning well; for this, six rules are formulated. The approach introduced is structured such that it is quite possible to introduce very efficient parallelization in order to process the images in real time.


Memristor
P.Balamurali Krishna & Rajesh.R
Department of Electronics and Communication Engineering M.G. College of Engineering balanthegreat@gmail.com

Abstract
A memristor is a passive two-terminal circuit element in which the resistance is a function of the history of the current through and voltage across the device. Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.

Introduction
A memristor is a passive two-terminal electronic component for which the resistance (dV/dI) depends in some way on the amount of charge that has flowed through the circuit. When current flows in one direction through the device, the resistance increases; and when current flows in the opposite direction, the resistance decreases, although it must remain positive. When the current is stopped, the component retains the last resistance that it had, and when the flow of charge starts again, the resistance of the circuit will be what it was when it was last active.[8] More generally, a memristor is a two-terminal component in which the resistance depends on the integral of the input applied to the terminals (rather than on the instantaneous value of the input as in a varistor). Since the element "remembers" the amount of current that has passed through it in the past, it was tagged by Chua with the name "memristor." Another way of describing a memristor is that it is any passive two-terminal circuit element that maintains a functional relationship between the time integral of current (called charge) and the time integral of voltage (often called flux, as it is related to magnetic flux). The slope of this function is called the memristance M and is similar to variable

resistance. Batteries can be considered to have memristance, but they are not passive devices. The definition of the memristor is based solely on the fundamental circuit variables of current and voltage and their time-integrals, just like the resistor, capacitor, and inductor

Fig1. The Simplest Chuas Circuit


Need For Memristor


Memristance (memory + resistance) is a property of an electrical component that describes the variation in resistance of the component with the flow of charge. Any two-terminal electrical component that exhibits memristance is known as a memristor. Memristance is becoming more relevant and necessary as circuits get smaller; at some point, when we scale into nano-electronics, we will have to take memristance into account in our circuit models to simulate and design electronic circuits properly. An ideal memristor is a passive two-terminal electronic device that is built to express only the property of memristance (just as a resistor expresses resistance and an inductor expresses inductance). In practice, however, it may be difficult to build a 'pure memristor', since a real device may also have a small amount of some other property, such as capacitance (just as any real inductor also has resistance). A common analogy for a resistor is a pipe that carries water: the water itself is analogous to electrical charge, the pressure at the input of the pipe is similar to voltage, and the rate of flow of the water through the pipe is like electrical current. Just as with an electrical resistor, the flow of water through the pipe is faster if the pipe is shorter and/or has a larger diameter. An analogy for a memristor is an interesting kind of pipe that expands or shrinks when water flows through it. If water flows through the pipe in one direction, the diameter of the pipe increases, enabling the water to flow faster; if water flows in the opposite direction, the diameter decreases, slowing the flow down. If the water pressure is turned off, the pipe retains its most recent diameter until the water is turned back on. Thus, the pipe does not store water like a bucket (or a capacitor); it remembers how much water flowed through it. Possible applications of a memristor include non-volatile random access memory (NVRAM), which can retain information even after being switched off, unlike conventional DRAM, which erases itself when switched off. Another interesting application is

analog computation where a memristor will be able to deal with analog values of data and not just binary 1s and 0s.

Figure 4. Fundamental circuit variables and elements.
Memristor Theory And Its Properties:


Definition of Memristor
The memristor is formally defined as a two-terminal element in which the magnetic flux Φm between the terminals is a function of the amount of electric charge q that has passed through the device.

Figure 5. Symbol of Memristor. Chua defined the element as a resistor whose resistance level was based on the amount of charge that had passed through the memristor

Memristance
Memristance is the property of an electronic component that lets it retain its resistance level even after the power has been shut down, i.e. it remembers (or recalls) the last resistance it had before being shut off.


Theory
Each memristor is characterized by its memristance function, which describes the charge-dependent rate of change of flux with charge:

M(q) = dΦm/dq

Noting from Faraday's law of induction that magnetic flux is simply the time integral of voltage, and charge is the time integral of current, we may write the more convenient form

M(q(t)) = (dΦm/dt) / (dq/dt) = V(t) / I(t)

It can be inferred from this that memristance is simply charge-dependent resistance, i.e.

V(t) = M(q(t)) · I(t)

This equation reveals that memristance defines a linear relationship between current and voltage, as long as charge does not vary. Of course, non-zero current implies an instantaneously varying charge. Alternating current, however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement, as long as the maximum change in q does not cause much change in M.

Working of Memristor

Figure 8(a). Al/TiO2 or TiOx/Al sandwich.
The memristor is composed of a thin (5 nm) titanium dioxide film between two electrodes, as shown in the figure above. Initially there are two layers to the film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift, changing the boundary between the high-resistance and low-resistance layers.


Implementations

Titanium Dioxide Memristor


This is a solid-state device that uses nanoscale thin films to produce a memristor. The device consists of a thin titanium dioxide film (50 nm) between two electrodes (5 nm), one titanium and the other platinum. Initially there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, which implies that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift, changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole depends on how much charge has been passed through it in a particular direction, and the change is reversible by reversing the direction of the current.
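To make the drift picture above concrete, here is a minimal simulation sketch of the widely used linear ion-drift model of a TiO2 memristor. The model and the parameter values are standard textbook assumptions chosen for illustration, not figures taken from this paper.

import numpy as np

R_ON, R_OFF = 100.0, 16e3   # resistances of the doped / undoped regions (ohms)
D = 10e-9                   # film thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, t):
    # Integrate the state w (width of the oxygen-deficient region, 0 <= w <= D)
    # under a driving voltage v(t), returning the resulting current.
    w = 0.1 * D
    dt = t[1] - t[0]
    current = np.zeros_like(t)
    for k, v in enumerate(voltage):
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # memristance M(w)
        i = v / m
        current[k] = i
        w += MU_V * (R_ON / D) * i * dt              # linear drift of the boundary
        w = min(max(w, 0.0), D)                      # keep the state physical
    return current

t = np.linspace(0.0, 2.0, 20000)
v = np.sin(2 * np.pi * 1.0 * t)      # 1 Hz, 1 V amplitude sinusoidal drive
i = simulate(v, t)
print(i.min(), i.max())
# Plotting i against v traces the pinched hysteresis loop that is the
# characteristic fingerprint of a memristor.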
3-Terminal Memristor (Memistor)
Although the memristor is defined as a two-terminal circuit element, there was an implementation of a 3-terminal device called a memistor, developed by Bernard Widrow in 1960. Memistors formed basic components of a neural network architecture called ADALINE, developed by Widrow and Ted Hoff (who later invented the microprocessor at Intel). In one of the technical reports the memistor was described as follows: like the transistor, the memistor is a 3-terminal element. The conductance between two of the terminals is controlled by the time integral of the current in the third, rather than by its instantaneous value as in the transistor. Reproducible elements have been made which are continuously variable (thousands of possible analog storage levels), which typically vary in resistance from 100 ohms to 1 ohm, and which cover this range in about 10 seconds with several milliamperes of plating current. Adaptation is accomplished by direct current, while sensing the neuron logical structure is accomplished non-destructively by passing alternating currents through the arrays of memistor cells.

Polymeric memristor
In July 2008, Victor Erokhin and Marco P. Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor. In 2004, Juri H. Krieger and Stuart M. Spitzer published a paper, "Non-traditional, Non-volatile Memory Based on Switching and Retention Phenomena in Polymeric Thin Films", at the IEEE Non-Volatile Memory Technology Symposium, describing the process of dynamically doping polymer and inorganic dielectric-like materials in order to improve the switching characteristics and retention required to create functioning non-volatile memory cells. They describe the use of a special passive layer between the electrode and the active thin films, which enhances the extraction of ions from the electrode. A fast ion conductor can be used as this passive layer, which allows the ionic extraction field to be significantly decreased.

Spintronic Memristor
Yiran Chen and Xiaobin Wang, researchers at diskdrive manufacturer Seagate Technology, in Bloomington, Minnesota, described three examples of possible magnetic memristors in March, 2009 in IEEE Electron Device Letters. In one of the three, resistance is caused by the spin of electrons in one section of the device pointing in a different direction than those in another section, creating a "domain wall," a boundary between the two states. Electrons flowing into the device have a certain spin, which alters the magnetization state of the device. Changing the magnetization, in turn, moves the domain wall and changes the device's resistance. This work attracted significant attention from the electronics press, including an interview by IEEE Spectrum.


Applications

Crossbar Latches


Williams' solid-state memristors can be combined into devices called crossbar latches, which could replace transistors in future computers while taking up a much smaller area. They can also be fashioned into non-volatile solid-state memory, which would allow greater data density than hard drives, with access times potentially similar to DRAM, replacing both components. HP prototyped a crossbar-latch memory using the devices that can fit 100 gigabits in a square centimetre, and has designed a highly scalable 3D design (consisting of up to 1000 layers, or 1 petabit per cm3). HP has reported that its version of the memristor is currently about one-tenth the speed of DRAM. The devices' resistance would be read with alternating current so that the stored value would not be affected. Some patents related to memristors appear to include applications in programmable logic, signal processing, neural networks and control systems. Recently, a simple electronic circuit consisting of an LC network and a memristor was used to model experiments on the adaptive behaviour of unicellular organisms. It was shown that the electronic circuit, subjected to a train of periodic pulses, learns and anticipates the next pulse to come, similarly to the behaviour of the slime mould Physarum polycephalum subjected to periodic changes of environment. Such a learning circuit may find applications, for example, in pattern recognition. DARPA's SyNAPSE project has funded HP Labs, in collaboration with the Boston University Neuromorphics Lab, to develop neuromorphic architectures which may be based on memristive systems. In 2010, Massimiliano Versace and Ben Chandler co-wrote an article describing the MoNETA (Modular Neural Exploring Traveling Agent) model. MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent compatibly with memristive hardware computations.

New 'Memristor' Could Make Computers Work like Human Brains


After the resistor, capacitor and inductor comes the memristor. Researchers at HP Labs have discovered a fourth fundamental circuit element that cannot be replicated by any combination of the other three. The memristor (short for "memory resistor") is unique because of its ability to, in HP's words, "retain a history of the information it has acquired." HP says the discovery of the memristor paves the way for anything from instant-on computers to systems that can "remember and associate series of events in a manner similar to the way a human brain recognizes patterns." Such brain-like systems would allow for vastly improved facial or biometric recognition, and they could be used to make appliances that "learn from experience." In PCs, HP foresees memristors being used to make new types of system memory that can store information even after they lose power, unlike today's DRAM. With memristor-based system RAM, PCs would no longer need to go through a boot process to load data from the hard drive into memory, which would save time and power, especially since users could simply switch off systems instead of leaving them in a "sleep" mode.

Memristors Make Chips Cheaper


The first hybrid memristor-transistor chip could be cheaper and more energy efficient. Entire industries and research fields are devoted to ensuring that, every year, computers continue getting faster. But this trend could begin to slow down as the components used in electronic circuits are shrunk to the size of just a few atoms. Researchers at HP Labs in Palo Alto, CA, are betting that a new fundamental electronic component, the memristor, will keep computer power increasing at this rate for years to come. Memristors are nanoscale devices with unique properties: a variable resistance and the ability to remember that resistance even when the power is off. Increasing performance has usually meant shrinking components


so that more can be packed onto a circuit. But instead, Williams's team removes some transistors and replaces them with a smaller number of memristors. "We're not trying to crowd more transistors onto a chip or into a particular circuit," Williams says. "Hybrid memristor-transistor chips really have the promise for delivering a lot more performance." A memristor acts a lot like a resistor but with one big difference: it can change resistance depending on the amount and direction of the voltage applied and can remember its resistance even when the voltage is turned off. These unusual properties make memristors interesting from both a scientific and an engineering point of view. A single memristor can perform the same logic functions as multiple transistors, making them a promising way to increase computer power. Memristors could also prove to be a faster, smaller, more energy-efficient alternative to flash storage.


Future of Memristor
Although memristor research is still in its infancy, HP Labs is working on a handful of practical memristor projects, and Williams's team has now demonstrated a working memristor-transistor hybrid chip. "Because memristors are made of the same materials used in normal integrated circuits," says Williams, "it turns out to be very easy to integrate them with transistors." His team, which includes HP researcher Qiangfei Xia, built a field-programmable gate array (FPGA) using a new design that includes memristors made of the semiconductor titanium dioxide and far fewer transistors than normal. Engineers commonly use FPGAs to test prototype chip designs because they can be reconfigured to perform a wide variety of different tasks; in order to be so flexible, however, FPGAs are large and expensive, and once the design is done, engineers generally abandon FPGAs for leaner application-specific integrated circuits. In the new chip, the reconfiguration tasks are performed by memristors. According to Williams, using memristors in FPGAs could help significantly lower costs. "If our ideas work out, this type of FPGA will completely change the balance," he says. Ultimately, the next few years could be very important for memristor research.

Memristor as Digital and Analog


A memristive device can function in both digital and analog forms, both having very diverse applications. In digital mode, it could substitute conventional solid-state memories (Flash) with highspeed and less steeply priced nonvolatile random access memory (NVRAM). Eventually, it would create digital cameras with no delay between photos or computers that save power by turning off when not needed and then turning back on instantly when needed.

No Need of Rebooting
The memristor's memory has consequences: the reason computers have to be rebooted every time they are turned on is that their logic circuits are incapable of holding their bits after the power is shut off. But because a memristor can remember voltages, a memristor-driven computer would arguably never need a reboot. "You could leave all your Word files and spreadsheets open, turn off your computer, and go get a cup of coffee or go on vacation for two weeks," says Williams. "When you come back, you turn on your computer and everything is instantly on the screen exactly the way you left it." HP says memristor-based RAM could one day replace DRAM altogether.

Conclusion
By redesigning certain types of circuits to include memristors, it is possible to obtain the same function with fewer components, making the circuit itself less expensive and significantly decreasing its power consumption. One can even hope to combine memristors with traditional circuit-design elements to produce a device that does computation. The Hewlett-Packard (HP) group is looking at developing a memristor-based non-volatile memory that could be 1000 times faster than magnetic disks and use much less power. As rightly said by Leon Chua and R. Stanley Williams (the originators of the memristor), memristors are


so significant that it would be mandatory to re-write the existing electronics engineering textbooks.


Mobile Autonomous Solar Collector


Anooj.A,Vishnu.R.Nair
Sixth Semester, Electronics and Communication Engineering Mohandas College of Engineering and Technology

Abstract
MASC stands for Mobile Autonomous Solar Collector. MASC is a combination of sensors, analog-to-digital converters, motors and microcontrollers that enables the automatic collection of solar energy via solar panels without human involvement. MASC can be adopted in small-scale domestic applications and large-scale industrial applications; the scale of application is identified by the net solar panel area. The panels are made mobile by placing them on the MASC, in which autonomy is achieved with the help of proximity sensors, light sensors, ADCs, motors and a microcontroller. The sensors gather data which are provided to the ADCs; based on these inputs, the microcontroller produces an output that is compared with a predefined baseline value. The microcontroller then provides the necessary outputs, which in turn drive the motors, physically moving the panels from one point to another for maximum solar energy output. Thus the net efficiency increases. In industrial applications, as the net panel area is quite large, it is difficult to move an entire panel physically; therefore the panels are made to tilt at an angle using the circuitry mentioned above. Thus MASC enables the collection of solar power, which can be stored in batteries for domestic and industrial use, without any human involvement in the collection or storage process. Unlike an idle solar panel, MASC increases the net solar power efficiency, which can lead to reduced dependence on fossil-fuel-based electricity generation, thereby contributing to the concept of a green planet.

Introduction
Solar energy has been harnessed by humans since ancient times using a range of ever-evolving technologies. Nowadays only a minuscule fraction of the available solar energy is used. This is where the significance of MASC comes in. MASC- Increases the net solar power efficiency which can lead to reduced dependence on fossil fuels.

Flow Chart
The basic flow diagram model of MASC is:

MASC
MASC stands for Mobile Autonomous Solar Collector. It is a combination of sensors, ADC, DAC, motors and microcontrollers. It enables automatic collection of solar energy via solar panels without the involvement of human beings.


Sensors
Two types of sensors are used in MASC: proximity sensors and sunlight sensors. Proximity sensors help in determining the direction of motion, while sunlight sensors help in identifying the regions with maximum sunlight.

Conclusion
Unlike an idle solar panel, MASC increases the net solar power efficiency. Thus it leads to reduced dependence on fossil fuel based electricity generation. Thereby it contributes to the concept of a GREENER PLANET.

Microcontroller
Microcontroller is the brain of the system. It receives the information from the sensors. This information is combined with efficiency enhancer logic. Based on these computations, microcontroller gives the driving instructions to the motor.

Reference
1. 2. www.electronicsforyou.com www.ermicro.com/blog

Efficiency Enhancer Logic


Based on the outputs from the sensors, the solar panels have to be moved so that maximum efficiency can be obtained from the available sunlight. Solar panels must be centered with respect to the sunlight. This is performed using EEL. Current data from the sensors are further analyzed with the already obtained data to move to a new physical location. This ensures maximum exposure to the sunlight. Thus maximum efficiency is obtained.

Working
The sensors gather data, which are provided to the ADCs. Based on these inputs, the microcontroller produces an output that is compared with a predefined baseline value. The microcontroller then combines this with the efficiency enhancer logic and provides the necessary outputs to drive the motors, thus physically moving the panels from one point to another for maximum solar energy output.
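A minimal sketch of one sensing/decision cycle is given below. The left/centre/right sensor layout, the normalised readings and the baseline value are illustrative assumptions, and the hardware access is reduced to plain function arguments rather than real ADC or motor calls.

BASELINE = 0.6   # assumed normalised light level the microcontroller compares against

def masc_step(left, centre, right, path_clear):
    # Return the drive command for this cycle: 'stay', 'left' or 'right'.
    if centre >= BASELINE:
        return 'stay'                              # panel already well exposed
    direction = 'left' if left > right else 'right'
    return direction if path_clear(direction) else 'stay'

# Example: brighter on the right and the proximity sensors report a clear path.
print(masc_step(0.3, 0.4, 0.8, lambda d: True))    # -> 'right'
print(masc_step(0.3, 0.7, 0.8, lambda d: True))    # -> 'stay'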

Applications
The scale of application is identified by the net solar panel area. In domestic applications, solar panels are made mobile by placing them on the MASC; where the panels are relatively small and the area for light exposure is limited, the full potential of MASC can be realized. In industrial applications, the solar panels are made to tilt at an angle using the same circuitry.


Modern Power System Modernised Wave Energy Converter


Vinod Kumar.K , Vinod.A.M.
Department Of Electrical and Electronics Engineering Noorul Islam College Of Engineering Kumaracoil, Thuckalay vinod.aver@gmail.com, lifeamvinod@gmail.com

Abstract
A variety of technologies have already been proposed to capture energy from ocean waves, but this one requires a minimum of material and is cheap and robust. Rather than looking at the up and down movement of waves, the proposed method lets the circular water currents beneath the waves directly drive rotors. For reasons of economic and social cohesion, the European Union is promoting improvements in the production of electrical energy from renewable energy sources. Sea waves carry a form of renewable energy which can be captured by a hydro-mechanical device that in turn drives an electrical generator to produce electrical energy. After a brief description of wave formation and a quantification of the power across each metre of wave front, the paper describes several devices presently used to extract mechanical energy from waves; their advantages and disadvantages are presented as conclusions. In particular, the modern Pelamis system is described in some detail, and the wave energy market is also discussed.

Introduction
To protect the environment for future generations it is vital that we move rapidly to a more sustainable lifestyle, reducing emissions of greenhouse gases and consumption of limited resources. Offshore wave energy has the potential to be one of the most environmentally benign forms of electricity generation, with minimal visual impact from the shore. Wave energy is essentially stored, concentrated wind energy, the waves being created by the progressive transfer of energy from the wind as it blows over the surface of the water. Wave energy could play a major part in the world's efforts to combat climate change, potentially displacing 1-2 billion tonnes of CO2 per annum from conventional fossil-fuel generating sources. Such installations would also provide many employment opportunities in construction, operations and maintenance. The Pelamis Wave Energy Converter is a technology that uses the motion of ocean surface waves to create electricity. The machine is made up of connected sections which flex and bend as waves pass, and it is this motion which is used to generate electricity. Developed by the Scottish company Pelamis Wave Power (formerly Ocean Power Delivery), the Pelamis became the world's first wave machine to generate electricity into the grid from offshore wave energy when it was connected to the UK grid in 2004. Pelamis Wave Power has since gone on to build and test four additional Pelamis machines: three first-generation P1 machines, which were tested in a farm off the coast of Portugal in 2009, and the first of a second generation of machines, the P2, which started tests off Orkney in 2010.

The Pelamis is an attenuating wave energy converter designed with survivability at the fore. The Pelamis's long, thin shape means it is almost invisible to the hydrodynamic forces, namely inertia, drag and slamming, which in large waves give rise to large loads. Its novel joint configuration is used to induce a tunable cross-coupled resonant response. Control of the restraint applied to the joints allows this resonant response to be turned up in small seas, where capture efficiency must be maximised, or turned down to limit loads and motions in survival conditions. It is named after a shallow-dwelling sea snake and appears just as serpentine at a length of 140 metres. It is the latest innovation to tap nature's power for our own use. Pelamis is the world's first commercial wave power project, set up off the Portuguese coast. The Scottish engineering company Pelamis Wave Power Limited is the brains behind the devices, technically called Pelamis Wave Energy Converters (PWEC). The idea is simple in itself: the machine uses the motion of the waves on the ocean surface to create power. It is a series of semi-submerged cylindrical sections linked by hinged joints. The motion of these sections due to the waves drives hydraulic rams which pump high-pressure oil through hydraulic motors, and the hydraulic motors drive electrical generators to produce electricity. Power from all the joints is fed down a single umbilical cable to a junction on the sea bed. Several devices can be connected together and linked to shore through a single seabed cable.


Each Pelamis machine measures 120 m long by 3.5 m wide (about the size of four train carriages) and weighs 750 tons fully ballasted. The Pelamis Wave Energy Converter is a semi-submerged, articulated structure composed of cylindrical sections linked by hinged joints. The wave-induced motion of these joints is resisted by hydraulic rams, which pump high-pressure fluid through hydraulic motors via smoothing accumulators. The hydraulic motors drive electrical generators to produce electricity. Power from all the joints is fed down a single umbilical cable to a junction on the sea bed, and several devices can be connected together and linked to shore through a single seabed cable.

In the USA, the Electric Power Research Institute (EPRI) has run a public/private project, part-funded by the DOE, NREL and individual states, covering wave energy sites in five states (Maine, Oregon, Washington, Hawaii and Massachusetts) plus the city of San Francisco. Pelamis was selected by EPRI as the system currently recommended for deployment, with a target installation date of 2007, and funding for ocean energy was approved in a recent Energy bill.

Conclusion
This technology might also create additional markets for rubber, plastic, aluminium, metal frames, cables, subsea connectors, electromagnetic equipment and power inverters. Let's stop global warming!

Advantages
Pelamis offers technological, economic and environmental advantages. Current production machines are 180 m long and 4 m in diameter, with four power conversion modules per machine, and each machine is rated at 750 kW. The energy produced by Pelamis depends on the conditions of the installation site: depending on the wave resource, machines will on average produce 25-40% of the full rated output over the course of a year. Each machine can provide sufficient power to meet the annual electricity demand of approximately 500 homes.
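A rough back-of-the-envelope check of these figures (a minimal sketch; the 30% capacity factor and the resulting per-home consumption are illustrative assumptions within the range quoted above, not values given in the paper):

    # Rough annual-energy estimate for one Pelamis machine.
    rated_kw = 750          # rated output, from the text
    capacity_factor = 0.30  # assumed, within the 25-40% range quoted above
    hours_per_year = 8760

    annual_mwh = rated_kw * capacity_factor * hours_per_year / 1000
    homes = 500             # homes served, from the text
    per_home_mwh = annual_mwh / homes

    print(f"Annual energy: {annual_mwh:.0f} MWh")       # ~1970 MWh
    print(f"Energy per home: {per_home_mwh:.1f} MWh")   # ~3.9 MWh, a plausible annual household demand

The result, roughly 4 MWh per home per year, is consistent with the claim that one machine can supply about 500 homes.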

References
1. http://en.wikipedia.org/wiki/Pelamis_Wave_Energy_Converter
2. http://www.pelamiswave.com/
3. http://www.renewableenergyworld.com/rea/news/article/2004/03/pelamis-wave-energy-converter-is-launched-10639
4. http://www.changingideas.com/Pelamis-Wave-Energy-Converter/Electricity.html
5. http://theirearth.com/index.php/news/pelamis-wave-power-pelamis-wave-energy-converter
6. http://www.bionomicfuel.com/is-the-pelamis-wave-energy-converter-at-the-forefront-of-wave-energy/


New Way To Make Electricity


Reshma Ittiachan & Reshma A R
S6 Department Of Electrical And Electronics Mohandas College Of Engineering And Technology, Thiruvananthapuram

Abstract
A new area of energy research, producing electrical energy from carbon nanotubes, has recently emerged at the Massachusetts Institute of Technology. The phenomenon is described as thermopower waves: a carbon nanotube can produce a very rapid wave of power when it is coated with a layer of fuel and ignited, so that heat travels along the tube. The key ingredient in the recipe is carbon nanotubes, sub-microscopic hollow tubes made of a chicken-wire-like lattice of carbon atoms. These tubes, just a few billionths of a meter (nanometers) in diameter, are part of a family of novel carbon molecules, including buckyballs and graphene sheets, which have been the subject of intensive worldwide research over the last two decades. After further development, the system now puts out, in proportion to its weight, about 100 times more energy than an equivalent weight of lithium-ion battery. In the experiments, each of these electrically and thermally conductive nanotubes was coated with a layer of a reactive fuel that can produce heat by decomposing. This fuel was then ignited at one end of the nanotube using either a laser beam or a high-voltage spark, and the result was a fast-moving thermal wave travelling along the length of the carbon nanotube like a flame speeding along a lit fuse. Heat from the fuel goes into the nanotube, where it travels thousands of times faster than in the fuel itself. As the heat feeds back into the fuel coating, a thermal wave is created that is guided along the nanotube. With a temperature of 3,000 kelvin, this ring of heat speeds along the tube 10,000 times faster than the normal spread of the chemical reaction. The heating produced by the combustion, it turns out, also pushes electrons along the tube, creating a substantial electrical current, and the amount of power released is much greater than that predicted by thermoelectric calculations. Many semiconductor materials can produce an electric potential when heated through the Seebeck effect, a phenomenon in which a temperature difference between two dissimilar electrical conductors or semiconductors produces a voltage, but that effect is very weak in carbon. Something different is happening here, since part of the current appears to scale with wave velocity and so drives the electrons in a different way: the thermal wave appears to drag the electrical charge carriers (either electrons or electron holes) along with it, just as an ocean wave can pick up and carry a collection of debris along the surface. This property is responsible for the high power produced by the system. The researchers coated the nanotubes with a fuel, such as gasoline or ethanol, and applied heat to one end; the fuel reacts and produces more heat, which ignites more fuel to create even more heat.


Energy

Energy has different forms, and electrical energy is one of them. It is among the most reliable and flexible forms, and there is little need to elaborate on its uses, because we are all heavy users of it.

Generation
There are different ways to generate electrical energy. You will have heard of a number of alternative ways of producing electricity; most of them are conventional methods, and even nuclear reactors are a common means of power generation today. All these methods of generation have their merits and demerits.

Theory: Carbon Nano Tubes

A carbon nanotube is a sub-microscopic hollow tube made of a chicken-wire-like lattice of carbon atoms, with a diameter about 30,000 times smaller than a strand of hair. These tubes, just a few billionths of a meter (nanometers) in diameter, are part of a family of novel carbon molecules which have been the subject of intensive worldwide research over the last two decades. Carbon nanotubes and graphene are both nanoscale structures consisting of carbon atoms: graphene is a sheet-like hexagonal lattice of carbon atoms, while nanotubes can be described as graphene wrapped into a cylinder with a nanoscale diameter. Nanotubes are already finding applications elsewhere: carbon nanotube transistors suit printed electronics and run at frequencies as high as 30 GHz, the elastic properties of nanotubes enhance LCD TV quality, and carbon nanotube arrays take heat off chips. A carbon nanotube can also produce a very rapid wave of power when it is coated with a layer of fuel and ignited, so that heat travels along the tube.

Thermo Power Waves

The phenomenon, described as thermopower waves, works like a collection of flotsam (debris floating in the ocean) propelled along the surface by waves travelling across the ocean: a thermal wave, that is, a moving pulse of heat travelling along a microscopic wire, can drive electrons along with it, creating an electrical current. This is the first work to predict that such waves could be guided by a nanotube or nanowire and that this wave of heat could push an electrical current along that wire.

Seebeck Effect

These thermopower waves should not be confused with the Seebeck effect, which is very weak in carbon. The Seebeck effect is a phenomenon in which a temperature difference between two dissimilar electrical conductors or semiconductors produces a voltage; many semiconductor materials can produce an electric potential when heated in this way.
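To put the Seebeck effect in perspective, a short illustrative calculation (a minimal sketch; the Seebeck coefficient used here is an assumed order-of-magnitude value for nanotube material, not a figure reported in this work):

    # Conventional Seebeck voltage: V = S * delta_T
    seebeck_uV_per_K = 50        # assumed ~50 microvolts per kelvin, a typical order for nanotube films
    delta_T = 3000 - 300         # hot wave front (~3000 K) versus room temperature (~300 K)

    voltage_mV = seebeck_uV_per_K * 1e-6 * delta_T * 1e3
    print(f"Static Seebeck voltage: about {voltage_mV:.0f} mV")  # ~135 mV

    # The thermopower-wave experiments report output far above what this static
    # estimate suggests, which is why the moving wave is thought to drag charge
    # carriers along rather than act as an ordinary thermoelectric.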

Reactive Fuel Coating

In the experiments, each of these electrically and thermally conductive nanotubes was coated with a layer of a reactive fuel, such as gasoline or ethanol, that can produce heat by decomposing. The fuel was ignited at one end of the nanotube using either a laser beam or a high-voltage spark, and the result was a fast-moving thermal wave travelling along the length of the carbon nanotube like a flame speeding along the length of a lit fuse. Heat from the fuel goes into the nanotube, where it travels thousands of times faster than in the fuel itself; as the heat feeds back into the fuel coating, a thermal wave is created that is guided along the nanotube. With a temperature of about 3,000 kelvin, this ring of heat speeds along the tube 10,000 times faster than the normal spread of the chemical reaction and squeezes electrons out of the nanotube like toothpaste out of a tube. The heating produced by the combustion also pushes electrons along the tube, creating a substantial electrical current. Something different from ordinary thermoelectric behaviour is happening here, since part of the current appears to scale with wave velocity and so drives the electron flow in a different way.

Alternating Current & Wave Front Oscillation

Another aspect of the theory is that, by using different kinds of reactive material for the coating, the wave front could be made to oscillate, thus producing an alternating current (AC). That would open up a variety of possibilities.

Mathematical Study & Quantity Of Power


The phenomenon has been studied mathematically for more than 100 years, but this is the first work to predict that such waves could be guided by a nanotube or nanowire and that this wave of heat could push an electrical current along that wire. The system now puts out, in proportion to its weight, about 100 times more energy than an equivalent weight of lithium-ion battery.
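A quick sense of scale for the "100 times" claim (a minimal sketch; the lithium-ion specific energy is an assumed reference value of roughly 200 Wh/kg, not a number taken from this paper):

    # What the claimed ratio would imply per unit weight.
    li_ion_wh_per_kg = 200   # assumed typical specific energy of Li-ion cells
    ratio = 100              # "about 100 times greater", from the text

    implied_wh_per_kg = li_ion_wh_per_kg * ratio
    print(f"Implied output: about {implied_wh_per_kg} Wh/kg "
          f"({implied_wh_per_kg * 3.6 / 1000:.0f} MJ/kg)")   # ~20,000 Wh/kg, ~72 MJ/kg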


Application
Nanotube-based power sources could be used in devices ranging from cell phones to hybrid-electric vehicles.

Advantages
The main advantages are: environment-friendly and non-toxic construction, a very small power source, indefinite shelf life, and a choice of AC or DC output. The carbon nanotube variety would not contain any toxic metals, so it is environmentally friendly. These power devices could be made 10 times smaller than today's cell-phone batteries but still hold the same amount of power. In theory, such devices could maintain their power indefinitely until used, unlike batteries whose charges leak away gradually as they sit unused. And while the individual nanowires are tiny, they could be made in large arrays to supply significant amounts of power for larger devices. Present energy-storage systems all produce direct current; a choice of AC or DC power can be achieved by changing the coating on the nanotubes, as described above. Carbon nanotubes also have a large number of useful electrical and structural properties: they are used to reinforce high-end tennis rackets and bicycle handlebars, to craft Lilliputian nanomotors and to modulate signals in electronics. Potential applications include transistors for computer circuits (demonstrated by IBM), computer memories (Nantero), solar cells, loudspeakers and nano radio. In the experiments, multiwalled carbon nanotubes were coated with cyclotrimethylene trinitramine, a chemical fuel. Electricity is produced by the movement of electrons; the thermal waves travel 10,000 times faster than they would in the fuel itself, and the nanotubes can conduct heat more than 100 times faster than a metal. The device has zero self-discharge and a very large power density.

References
1. IEEE Spectrum.
2. MIT News, Massachusetts Institute of Technology, 8 March 2010.
3. Nature Materials, 7 March 2010 issue.

Conclusion
Continuous efforts are being made to improve the setup enough for commercial use in the near future. This would lead to an era of miniature power sources for the entire gadget world, which today carries the burden of heavy battery arrays as power sources.


Silicon Photonics
Prasad.V.J, Vishnu.R.C
Mohandas College of Engineering and Technology, Anad, Thiruvananthapuram vishnurc.333@gmail.com spideycod@gmail.com

Abstract
In its everlasting quest to deliver more data faster and on smaller components, the silicon industry is moving full steam ahead towards its final frontiers of size, device integration and complexity. As the physical limitations of metallic interconnects begin to threaten the semiconductor industry's future, research is concentrating heavily on advances in photonics that will lead to combining the existing silicon infrastructure with optical communications technology, and to a merger of electronics and photonics into one integrated, dual-function device. Optical technology has always suffered from its reputation for being an expensive solution. This prompted research into using more common materials, such as silicon, for the fabrication of photonic components, hence the name silicon photonics.

Introduction
During the past few years, researchers at Intel have been actively exploring the use of silicon as the primary basis of photonic components. This research has established Intel's reputation in a specialized field called silicon photonics, which appears poised to provide solutions that break through longstanding limitations of silicon as a material for fiber optics. In a major advancement, Intel researchers have developed a silicon-based optical modulator operating at 50 GHz, an increase of over 50 times the previous research record of about 1 GHz (up from an initial 20 MHz). This is a significant step towards building optical devices that move data around inside a computer at the speed of light. It is the kind of breakthrough that ripples across an industry over time, enabling other new devices and applications. It could help make the Internet run faster, build much faster high-performance computers and enable high-bandwidth applications like ultra-high-definition displays or vision recognition systems. Intel's research into silicon photonics is an end-to-end program to extend Moore's Law into new areas. In addition to this research, Intel's expertise in fabricating processors from silicon could enable it to create inexpensive, high-performance photonic devices that comprise numerous components integrated on one silicon die. Siliconizing photonics, that is, developing and building optical devices in silicon, has the potential to bring PC economics to high-bandwidth optical communications. Another advancement in silicon photonics is the demonstration of the first continuous silicon laser based on the Raman Effect. This research breakthrough paves the way for making optical amplifiers, lasers and wavelength converters to switch a signal's color in low-cost silicon. Fiber-optic communication is well established today due to the great capacity and reliability it provides. However, the technology has suffered from a reputation as an expensive solution. This view is based in large part on the high cost of the hardware components, which are typically fabricated using exotic materials that are expensive to manufacture and tend to be specialized, requiring complex steps to assemble and package. These limitations prompted Intel to research the construction of fiber-optic components from other materials, such as silicon. The vision of silicon photonics arose from the research performed in this area. Its overarching goal is to develop high-volume, low-cost optical components using standard CMOS processing, the same manufacturing process used for microprocessors and semiconductor devices.

What Is Silicon Photonics?


Photonics is the field of study that deals with light, especially the development of components for optical communications. It is the hardware aspect of fiber optics, and due to commercial demand for bandwidth, it has enjoyed considerable expansion and development during the past decade. Fiber-optic communication, as most people know, is the process of transporting data at high speeds using light, which travels to its destination on a glass fiber. Fiber optics is well established today due to the great capacity and reliability it provides. However, fiber optics has suffered from its reputation as an expensive solution. This view is based in large part on the high price of the hardware components. Optical devices typically have been made from exotic materials such as gallium arsenide, lithium niobate, and indium phosphide that are complicated to process. In addition, many photonic devices today are hand assembled and often require active or manual alignment to connect the components and fibers onto the devices. This nonautomated process tends to contribute significantly to the cost of these optical devices.


Silicon photonics research at Intel hopes to establish that manufacturing processes using silicon can overcome some of these limitations. Intel's goal is to manufacture and sell optical devices that are made out of easy-to-manufacture silicon. Silicon has numerous qualities that make it a desirable material for constructing small, low-cost optical components: it is a relatively inexpensive, plentiful and well-understood material for producing electronic devices. In addition, due to the longstanding use of silicon in the semiconductor industry, the fabrication tools by which it can be processed into small components are commonly available today. Because Intel has more than 35 years of experience in silicon and device fabrication, it finds a natural fit in exploring the design and development of silicon photonics. Silicon photonics is the study and application of photonic systems which use silicon as an optical medium; it can be simply defined as photonic technology based on silicon chips. More precisely, silicon photonics can be defined as the utilization of silicon-based materials for the generation (electrical-to-optical conversion), guidance, control and detection (optical-to-electrical conversion) of light to communicate information over distance. The most advanced extension of this concept is to have a comprehensive set of optical and electronic functions available to the designer as monolithically integrated building blocks upon a single silicon substrate. The goal is to siliconize photonics, specifically to build in silicon all the functions necessary for optical transmission and reception of data, and then to integrate the resulting devices onto a single chip. An analogy can be made that such optical chips hold the same relationship to the individual components as integrated circuits do to the transistors that constitute them: they provide a complete unit that can be manufactured easily and inexpensively using standard silicon fabrication techniques. Intel has recently been able to demonstrate the basic feasibility of siliconizing many of the components needed for optical communication. The most recent advance involves encoding high-speed data on an optical beam. There are two parallel approaches being pursued for achieving optoelectronic integration in silicon. The first is to look for specific cases where close integration of an optical component and an electronic circuit can improve overall system performance; one such case would be to integrate a Si-Ge photo-detector with a Complementary Metal-Oxide-Semiconductor (CMOS) trans-impedance amplifier. The second is to achieve a high level of photonic integration, with the goal of maximizing the level of optical functionality and optical performance. This is possible by increasing the light-emitting efficiency of silicon.

Why Silicon Photonics?


Fiber-optic communication is the process of transporting data at high speeds on a glass fiber using light. It is well established today due to the great capacity and reliability it provides. However, the technology has suffered from a reputation as an expensive solution, a view based in large part on the high cost of the hardware components. These components are typically fabricated using exotic materials that are expensive to manufacture; in addition, they tend to be specialized and require complex steps to assemble and package. These limitations prompted Intel to research the construction of fiber-optic components from other materials, such as silicon. The vision of silicon photonics arose from the research performed in this area. Its overarching goal is to develop high-volume, low-cost optical components using standard CMOS processing, the same manufacturing process used for microprocessors and semiconductor devices. Silicon presents a unique material for this research because the techniques for processing it are well understood and it demonstrates certain desirable behaviors. For example, while silicon is opaque in the visible spectrum, it is transparent at the infrared wavelengths used in optical transmission, hence it can guide light. Moreover, manufacturing silicon components in high volume to the specifications needed by optical communication is comparatively inexpensive. Silicon's key drawback is that it cannot emit laser light, and so the lasers that drive optical communications have been made of more exotic materials such as indium phosphide and gallium arsenide. However, silicon can be used to manipulate the light emitted by inexpensive lasers so as to provide light that has characteristics similar to more-expensive devices. This is just one way in which silicon can lower the cost of photonics. Silicon photonic devices can be made using existing semiconductor fabrication techniques, and because silicon is already used as the substrate for most integrated circuits, it is possible to create hybrid devices in which the optical and electronic components are integrated onto a single microchip. The propagation of light through silicon devices is governed by a range of nonlinear optical phenomena including the Kerr effect, the Raman effect, two-photon absorption and interactions between photons and free charge carriers. The presence of nonlinearity is of fundamental importance, as it enables light to interact with light, thus permitting applications such as wavelength conversion and all-optical signal routing, in addition to the passive transmission of light. Within the range of fiber-optic telecommunication wavelengths (1.3 µm to 1.6 µm), silicon is nearly transparent and generally does not interact with the light, making it an exceptional medium for guiding optical data streams between active components. Optical data transmission also allows for much higher data rates and would at the same time eliminate problems resulting from electromagnetic interference. The technology may also be useful for other areas of optical communications, such as fiber to the home.
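The transparency window quoted above follows directly from photon energies (a standard textbook estimate, not a calculation from the paper; the silicon band-gap value of about 1.12 eV is assumed):

$$E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV·nm}}{\lambda}
\quad\Rightarrow\quad
E_{1550\,\text{nm}} \approx 0.80\ \text{eV},\qquad E_{1310\,\text{nm}} \approx 0.95\ \text{eV}.$$

Both values lie below the roughly 1.12 eV band-gap of silicon, so single photons at the telecom wavelengths cannot excite electrons across the gap. Silicon therefore guides this light with very little linear absorption, which is also why detecting it later requires either two-photon processes or additional materials such as germanium.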

Physical Properties

A. Optical Guiding and Dispersion Tailoring

Silicon is transparent to infrared light with wavelengths above about 1.1 microns. Silicon also has a very high refractive index, of about 3.5. The tight optical confinement provided by this high index allows for microscopic optical waveguides, which may have cross-sectional dimensions of only a few hundred nanometers. This is substantially less than the wavelength of the light itself, and is analogous to a sub-wavelength-diameter optical fiber. Single-mode propagation can be achieved, thus (like single-mode optical fiber) eliminating the problem of modal dispersion. The strong dielectric boundary effects that result from this tight confinement substantially alter the optical dispersion relation. By selecting the waveguide geometry, it is possible to tailor the dispersion to have desired properties, which is of crucial importance to applications requiring ultra-short pulses. In particular, the group velocity dispersion (that is, the extent to which group velocity varies with wavelength) can be closely controlled. In bulk silicon at 1.55 microns, the group velocity dispersion (GVD) is normal, in that pulses with longer wavelengths travel with higher group velocity than those with shorter wavelength. By selecting a suitable waveguide geometry, however, it is possible to reverse this and achieve anomalous GVD, in which pulses with shorter wavelengths travel faster. Anomalous dispersion is significant, as it is a prerequisite for modulation instability. In order for the silicon photonic components to remain optically independent from the bulk silicon of the wafer on which they are fabricated, it is necessary to have a layer of intervening material. This is usually silica, which has a much lower refractive index (of about 1.44 in the wavelength region of interest), and thus light at the silicon-silica interface will (like light at the silicon-air interface) undergo total internal reflection and remain in the silicon (a worked example of this condition is given at the end of this section). This construct is known as silicon on insulator. It is named after the technology of silicon on insulator in electronics, whereby components are built upon a layer of insulator in order to reduce parasitic capacitance and so improve performance.

B. Kerr Nonlinearity

Silicon has a focusing Kerr nonlinearity, in that the refractive index increases with optical intensity. This effect is not especially strong in bulk silicon, but it can be greatly enhanced by using a silicon waveguide to concentrate light into a very small cross-sectional area. This allows nonlinear optical effects to be seen at low powers. The nonlinearity can be enhanced further by using a slot waveguide, in which the high refractive index of the silicon is used to confine light into a central region filled with a strongly nonlinear polymer. Kerr nonlinearity underlies a wide variety of optical phenomena. One example is four-wave mixing, which has been applied in silicon to realize both optical parametric amplification and parametric wavelength conversion. Kerr nonlinearity can also cause modulation instability, in which it reinforces deviations from an optical waveform, leading to the generation of spectral sidebands and the eventual breakup of the waveform into a train of pulses.

C. Two-Photon Absorption

Silicon exhibits Two-Photon Absorption (TPA), in which a pair of photons can act to excite an electron-hole pair. This process is related to the Kerr effect and, by analogy with the complex refractive index, can be thought of as the imaginary part of a complex Kerr nonlinearity. At the 1.55 micron telecommunication wavelength, this imaginary part is approximately 10% of the real part. The influence of TPA is highly disruptive, as it both wastes light and generates unwanted heat. It can be mitigated, however, either by switching to longer wavelengths (at which the TPA-to-Kerr ratio drops), or by using slot waveguides (in which the internal nonlinear material has a lower TPA-to-Kerr ratio). Alternatively, the energy lost through TPA can be partially recovered by extracting it from the generated charge carriers.

D. Free Charge Carrier Interactions

The free charge carriers within silicon can both absorb photons and change its refractive index. This is particularly significant at high intensities and for long durations, due to the carrier concentration being built up by TPA. The influence of free charge carriers is often (but not always) unwanted, and various means have been proposed to remove them. One such scheme is to implant the silicon with helium in order to enhance carrier recombination. A suitable choice of geometry can also be used to reduce the carrier lifetime. Rib waveguides (in which the waveguides consist of thicker regions in a wider layer of silicon) enhance both the carrier recombination at the silica-silicon interface and the diffusion of carriers from the waveguide core. A more advanced scheme for carrier removal is to integrate the waveguide into the intrinsic region of a PIN diode, which is reverse biased so that the carriers are attracted away from the waveguide core. A more sophisticated scheme still is to use the diode as part of a circuit in which voltage and current are out of phase, thus allowing power to be extracted from the waveguide. The source of this power is the light lost to two-photon absorption, and so by recovering some of it, the net loss (and the rate at which heat is generated) can be reduced. As mentioned above, free charge carrier effects can also be used constructively, in order to modulate the light.

E. The Raman Effect

Silicon exhibits the Raman Effect, in which a photon is exchanged for a photon with a slightly different energy, corresponding to an excitation or a relaxation of the material. Silicon's Raman transition is dominated by a single, very narrow frequency peak, which is problematic for broadband phenomena such as Raman amplification, but is beneficial for narrow-band devices such as Raman lasers. Consequently, all-silicon Raman lasers have been fabricated.
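As a quick check of the silicon-on-insulator confinement argument in A above (a standard Snell's-law estimate using the refractive indices quoted in the text, not a calculation given in the paper):

$$\theta_c = \arcsin\!\left(\frac{n_{\text{silica}}}{n_{\text{Si}}}\right) = \arcsin\!\left(\frac{1.44}{3.5}\right) \approx 24^{\circ}.$$

Any ray striking the silicon-silica boundary at more than about 24 degrees from the normal is totally internally reflected, so guided light travelling nearly parallel to the waveguide axis stays confined in the silicon core.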

Challenges

Although the possible merits of silicon-based photonics are huge, there are also very substantial challenges for such a technology. Having an indirect band-gap, silicon is a very inefficient light emitter; although various tricks have been developed to get around this, the laser or amplifier performance of silicon-based devices cannot compete with that of approaches based on, for example, gallium arsenide or indium phosphide. No practical modification to silicon has yet been conceived which gives efficient generation of light, so the light source must be supplied as an external component, which is a drawback. The band-gap of silicon is also larger than desirable, making it impossible to detect light in the telecom spectral regions around 1.5 and 1.3 µm. The heat dissipated by a laser source on a chip might well be more than is convenient. Finally, optical connections often require very precise alignment, which demands improved alignment technologies for efficient mass production.

Silicon Light Source

While a silicon laser is still out of reach, work is being done worldwide on silicon light emitters that emit both visible and infrared radiation. A silicon emitter is the missing piece for monolithic integration, as it would enable all optical elements and drive electronics to be fabricated on a common substrate. Because silicon waveguides are used to guide the light, the emitter must operate in the infrared region of the wavelength spectrum (above 1.1 µm), where optical absorption loss is low. We first summarize the different paths researchers are investigating to achieve electrically pumped light emission, known as electroluminescence (EL), from silicon. Until a reliable and efficient silicon emitter can be produced, hybrid integration must be considered (i.e., using a non-silicon-based light source coupled to silicon waveguides). The difficulty in making a silicon light emitter arises from silicon's indirect band-gap. This indirect band-gap results in radiative (light-emitting) decay being less likely compared to other non-radiative routes (e.g., Auger recombination), and thus in less efficient light emission. Forming a laser or even a light emitter from silicon is therefore difficult, although not impossible, and research worldwide has shown light emission from silicon and silicon-based materials by a wide variety of different methods. These range from photoluminescence in textured bulk silicon, to fabrication of nanoscale or porous silicon, to doping with exotic ions, to Raman emission. To achieve infrared light emission from silicon, the silicon must be doped with a suitable material, such as β-FeSi2 or erbium, and erbium-doped silicon waveguides have indeed shown infrared light emission. These kinds of doped bulk silicon devices suffer from a major problem: although emission can be relatively strong below 100 K, the emission intensity falls rapidly when the device is heated to room temperature, which greatly limits their application. A different approach to enhancing the efficiency of light emission in silicon is to reduce the other, non-radiative mechanisms for electron-hole recombination. This can be done by restricting carrier diffusion to the non-radiative recombination centers in the lattice, which increases the probability of radiative transitions and hence the light emission efficiency. Silicon nanocrystals suspended in silicon-rich oxide restrict carrier movement while still allowing electrical pumping.


Other means to obtain carrier confinement and efficient emission at infrared wavelengths include using Ge/Si quantum dots or crystalline defects. Doping with other ions, e.g., ytterbium or terbium, allows emission at 0.980 and 0.540 µm in resonant-cavity silicon LEDs. For these devices to be used in practical applications, however, their lifetimes and reliability still need to be optimized. Another limitation of all forward-biased silicon light emitters is their low direct modulation speed (~1 MHz), which means that realistically this kind of silicon emitter will require an external modulator for high-speed communication links. Reverse biasing has the potential to achieve higher direct modulation speeds (~200 MHz), but at the moment this comes at the expense of light emission efficiency.

A. Continuous Silicon Laser

Researchers at Intel have announced another advance in silicon photonics by demonstrating the first continuous silicon laser based on the Raman Effect. This research breakthrough paves the way for making optical amplifiers, lasers and wavelength converters to switch a signal's color in low-cost silicon. It also brings Intel closer to realizing its vision of siliconizing photonics, which will enable the creation of inexpensive, high-performance optical interconnects in and around PCs, servers and other devices.

Raman Effect

The term laser is an acronym for Light Amplification through Stimulated Emission of Radiation. The stimulated emission is created by changing the state of electrons, the subatomic particles that make up electricity. As their state changes, they release a photon, which is the particle that composes light. This generation of photons can be stimulated in many materials, but not silicon, due to its material properties.

However, an alternate process called the Raman Effect can be used to amplify light in silicon and other materials, such as glass fiber. Intel has achieved a research breakthrough by creating an optical device based on the Raman Effect, enabling silicon to be used for the first time to amplify signals and create continuous beams of laser light. This breakthrough opens up new possibilities for making optical devices in silicon. The Raman Effect is widely used today to make amplifiers and lasers in glass fiber. These devices are built by directing a laser beam, known as the pump beam, into a fiber. As light enters, the photons collide with vibrating atoms in the material and, through the Raman Effect, energy is transferred to photons of longer wavelengths. If a data beam is applied at the appropriate wavelength, it will pick up additional photons. After traveling several kilometers in the fiber, the beam acquires enough energy to cause a significant amplification of the data signal (Figure 1a). By reflecting light back and forth through the fiber, the repeated action of the Raman Effect can produce a pure laser beam. However, fiber-based devices using the Raman Effect are limited because they require kilometers of fiber to provide sufficient amplification. The Raman Effect is more than 10,000 times stronger in silicon than in glass optical fiber, making silicon an advantageous material. Instead of kilometers of fiber, only centimeters of silicon are required (Figure 1b). By using the Raman Effect and an optical pump beam, silicon can now be used to make useful amplifiers and lasers.
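A crude illustration of why centimeters suffice (a naive proportionality sketch; the fiber length is an assumed example, and real devices also depend on pump intensity and waveguide confinement):

    # If the Raman effect is ~10,000x stronger in silicon than in glass fiber,
    # the interaction length needed for comparable gain shrinks roughly in proportion.
    fiber_length_m = 2000          # assumed example: ~2 km of fiber in a fiber Raman amplifier
    raman_strength_ratio = 10_000  # "more than 10,000 times stronger in silicon", from the text

    silicon_length_cm = fiber_length_m / raman_strength_ratio * 100
    print(f"Comparable silicon length: about {silicon_length_cm:.0f} cm")  # ~20 cm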


Two-Photon Absorption

Usually, silicon is transparent to infrared light, meaning atoms do not absorb photons as they pass through the silicon because the infrared light does not have enough energy to excite an electron. Occasionally, however, two photons arrive at an atom at the same time in such a way that the combined energy is enough to free an electron from the atom. Usually this is a very rare occurrence, but the higher the pump power, the more likely it is to happen. Eventually, these free electrons recombine with the crystal lattice and pose no further problem. At high power densities, however, the rate at which the free electrons are created exceeds the rate of recombination and they build up in the waveguide. Unfortunately, these free electrons begin absorbing the light passing through the silicon waveguide and diminish the power of these signals. The end result is a loss significant enough to cancel out the benefit of Raman amplification.

B. Break-Through Silicon Laser

The Challenge

The process of building a Raman amplifier or laser in silicon begins with the creation of a waveguide, a conduit for light in silicon. This can be done using standard CMOS techniques to etch a ridge or channel into a silicon wafer (Figure 1b). Light directed into this waveguide will be contained and channeled across the chip. In any waveguide, some light is lost through absorption by the material, imperfections in the physical structure, roughness of the surfaces and other optical effects. The challenge that Intel researchers surmounted was making a waveguide in which the amplification provided by the Raman Effect exceeds the loss in the silicon waveguide. In mid-2004, Intel researchers discovered that increasing the pump power beyond a certain point no longer increased the amplification and eventually even decreased it. The reason turned out to be a physical process called two-photon absorption, which absorbs a fraction of the pump beam and creates free electrons. These electrons build up over time and collect in the waveguide. The problem is that the free electrons absorb some of the pump and signal beams, reducing the net amplification; the higher the power density in the waveguide, the higher the loss incurred. Intel's breakthrough is a solution that minimizes the extra electrons caused by two-photon absorption so that an amplified, continuous laser beam can be generated. In fact, Intel recently demonstrated the first silicon device with a continuous net amplification, with a gain that more than doubled the input signal power. Intel's solution is to change the design of the waveguide so that it contains a semiconductor structure, technically called a PIN (P-type Intrinsic N-type) device. When a voltage is applied to this device, it acts like a vacuum and removes the electrons from the path of the light. Prior to this breakthrough, the two-photon absorption problem would draw away so many photons as to not allow net amplification; hence, maintaining a continuous laser beam would be impossible. Intel's breakthrough is the use of the PIN to make the amplification continuous. Figure 3 is a schematic of the PIN device. The PIN is represented by the p- and n-doped regions as well as the intrinsic (un-doped) silicon in between. This silicon device can direct the flow of current in much the same way as diodes and other semiconductor devices do today in common electronics.
Hence, the manufacture of this device relies on established manufacturing technologies and it reinforces the basic goal of silicon photonics: inexpensive, high-performance optical components. To create the breakthrough laser, Intel coated the ends of the PIN waveguide with mirrors to form a laser cavity (Figure 4). After applying a voltage and a pump beam to the silicon, researchers observed a steady beam of laser light of a different wavelength exiting the cavity - the first continuous silicon laser.


Applications

Fundamentally, Intel researchers have demonstrated silicon's potential as an optical gain material. This could lead to many applications, including optical amplifiers, wavelength converters and various types of lasers in silicon. An example of a silicon optical amplifier (SiOA) using the Raman Effect is shown in Figure 1b. Two beams are coupled into the silicon waveguide. The first is an optical pump, the source of the photons whose energy will cause the Raman Effect; the spectral properties of this pump determine the wavelengths that can be amplified. As the second beam, which contains the data to be amplified, passes through the waveguide, energy is transferred from the pump into the signal beam via the Raman Effect. The optical data exits the chip brighter than when it entered; that is, amplified. Optical amplifiers such as this are most commonly used to strengthen signals that have become weak after traveling a great distance. Because silicon Raman amplifiers are so compact, they could be integrated directly alongside other silicon photonic components, with a pump laser attached directly to the silicon through passive alignment. Since any optical device (such as a modulator) introduces losses, an integrated amplifier could be used to negate these losses; the result could be lossless silicon photonic devices. The Raman Effect could also be used to generate lasers of different wavelengths from a single pump beam. As the pump beam enters the material, the light splits off into different laser cavities with mirrors made from integrated silicon filters (Figure 5). The use of lasers at multiple wavelengths is a common way of sending multiple data streams on a single glass fiber. In such a scenario, Intel's silicon components could be used to generate the lasers and to encode the data on each wavelength.

The encoding could be performed by a silicon modulator. This approach would create an inexpensive solution for fiber networking that could scale with the data loads of large enterprises. In addition to communications, there are other potential applications for silicon Raman lasers. Because the Raman Effect involves the conversion of a pump beam to a longer wavelength, it could be used to create new laser beams at wavelengths that cannot be attained by compact semiconductor lasers at room temperature. Lasers with wavelengths greater than 2 µm have applications in medical spectroscopy but must currently be built from bulky bench-top components because semiconductor lasers are not available at such long wavelengths. A compact silicon Raman laser could be made to reach these wavelengths, enabling the creation of more portable medical devices.

High Speed Silicon Modulator


Modulators encode bits onto a wavelength by turning the light on and off. Since the earliest days of fiber-optic communications, different approaches to modulation have been tried with varying degrees of success. The simplest form is to turn the laser itself on and off very quickly. While effective, this approach has severe drawbacks: as the laser turns on and off, the light tends to shift from its fundamental wavelength, so that the received light is less clear and becomes more difficult to detect accurately. This drift, known as chirping, long ago led vendors to use an external modulator, a device that is independent of the laser itself. The simplest type of external modulator uses a shutter to interrupt the light; however, the use of moving parts such as the shutter mechanism makes the modulator much too slow for data communication. Historically, silicon photonics has relied on injecting current into the silicon waveguides to encode the data on the light traversing the waveguides. While this approach is more effective and removes the need for moving parts, it is still limited by a top speed of about 20 MHz. At this speed, data throughput is below the capacity of most home networks today, hardly the speed we expect from fiber optics. For a long time, this ceiling of 20 MHz made silicon photonics unsuitable for commercial application. However, Intel's announcement of a silicon modulator capable of 1 GHz operation shattered this barrier and lifted silicon photonics into a realm of far greater relevance in commercial computing, and a 50 GHz modulator has now also been realized.

How Silicon Modulator Works

To understand how the modulator functions, we need to touch briefly on the nature of light. Light is a form of radiation that occurs at specific frequencies, some of which are visible and some, like ultraviolet and infrared, invisible. When light is emitted it travels in a pattern that looks very much like a sine wave (the top row of Figure 6). The total distance reached by the peaks and troughs of this sine wave is known as amplitude. When the sine wave is nearly flat, the light is at its dimmest and has low amplitude; when the peaks and troughs are very high and deep, the light shines brightly and has greater amplitude. When two wavelengths are combined, the resulting sine wave is the sum of the two constituent sine waves. For example, if two sine waves are perfectly in sync and added together (left column of Figure 6), the resulting sine wave has twice the amplitude of the individual waves. In contrast, when two waves are completely out of sync (right column of Figure 6), the resulting wave has no amplitude. In Figure 6, for example, see how the peak of the top wave on the right has a value of +1. When it is aligned with the trough of the lower wave, which has a value of -1, the net result is 0: the peak and trough offset each other exactly. In this case, the two light waves cancel each other out and the resulting light wave is off for the duration of these two sine waves. The degree to which two waves are in sync is called phase by optical engineers. When two waves are in phase, they are aligned so that their peaks line up, as do their troughs. When they are out of phase, their peaks and troughs offset each other and the light dims. These resulting changes in amplitude (the strength of the light) are the basis on which the photodetector recognizes 0s and 1s. Because the amplitude is being modulated (to encode the data), this technique is referred to as amplitude modulation (AM). A curious side note is that this AM is the same as that found in AM radios: radio transmitters use changes in amplitude to encode variations in pitch on a radio wave, and the AM radio receives the wave and, from the changes in amplitude, re-creates the sound that was encoded. Intel's breakthrough modulator (Figure 7) takes the incoming light beam and splits it into two beams. The two beams are then phase-shifted relative to each other, and when they are recombined the phase shift of one beam changes the amplitude of the resulting beam so that it goes bright and dark, thereby encoding the data.
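The brightness change produced by recombining the two phase-shifted beams follows the standard two-beam interference relation (a textbook result consistent with Figures 6 and 7, not a formula given in the paper):

$$I_{\text{out}} = I_{\text{max}}\cos^{2}\!\left(\frac{\Delta\varphi}{2}\right),$$

so a relative phase shift of $\Delta\varphi = 0$ gives full brightness (a "1"), while $\Delta\varphi = \pi$ makes the two waves cancel and the output go dark (a "0"). Driving $\Delta\varphi$ electrically at gigahertz rates is what encodes the data stream onto the light.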

Figure 6

Figure 7

However, the importance of this particular advance is that the Intel phase shifter can perform this modulation at speeds in excess of 1 GHz. In so doing, Intel has raised the previous performance ceiling by a factor of 50, and 50 GHz operation has now also been achieved. We expect to achieve even greater bandwidth by multiplexing these data streams. This approach could bring silicon photonics into an age where bandwidths of 1 Tbps or more are common. At these capacities, silicon photonics could present a compelling, inexpensive choice for commercial uses, especially as backbones and wiring for corporate campuses. In addition, through the availability of high-performance, low-cost silicon building blocks, connected devices, be they servers, desktop PCs, notebooks, or handheld devices, may one day sport optical ports for quick and easy connectivity to high-bandwidth connections. Should they do so, these devices will recognize in silicon photonics the enabling technology for inexpensive bandwidth on demand.

Futuristic Terabit Silicon Optical Transmitter

Concept image of a future integrated terabit silicon optical transmitter containing 25 hybrid silicon lasers, each emitting at a different wavelength, coupled into 25 silicon modulators, all multiplexed together into one output fiber.

Si-Based Photodetectors

Previous research focused on vertical detectors within sub-micron-scale waveguides to achieve high-speed operation. These typically exhibit high loss and are hard to integrate with the waveguide geometries needed for other functions such as WDM (wavelength division multiplexing) multiplexer and de-multiplexer devices. The new structure developed by Kotura supports high-speed operation and yet is compatible with a variety of waveguide heights, including the larger waveguides needed for high-performance WDM operation. These structures allow standard silicon processing techniques to be used to couple waveguides and photodetectors on the same chip with extremely low loss and high performance. Devices with more than 32 GHz optical bandwidth at 1 V bias, a responsivity of 1.1 A/W, a dark current below 300 nA and a fiber coupling loss of less than 1.2 dB have been demonstrated. A low-loss, high-speed, easy-to-manufacture detector is a key component for optical interconnects. This horizontal junction detector is a huge improvement for several reasons, not least among which is that it can be readily coupled to single-mode fiber. This opens the door for wavelength-multiplexed silicon-based optical interconnects that will reduce the complexity of connectors and cabling in high-performance systems. WDM and detection functionality can now easily be integrated into one chip: a single silicon photonics device can take a single input stream of light carrying 100 WDM channels, demultiplex the wavelengths and route each wavelength to its own detector. We can envision integrating 100 receiver channels, each operating at 40 Gb/s, on a single chip. Researchers at Intel have also pushed the boundaries of silicon photonics by developing the first avalanche photodetector (APD). The silicon-based device, which Intel claims could reduce costs and improve the performance of existing commercially available optical devices, promises to revolutionize how multiple processor cores communicate within computing systems. The APD's underlying technology uses standard silicon to transmit and receive optical data among computers and other electronic devices, aiming to provide a reliable platform for the future bandwidth needs of data-intensive computing applications such as remote medicine and 3D virtual worlds. According to Intel engineers, silicon-photonics-based technology is the optimal cost-efficient way to dramatically increase communication speeds between devices powered by multiple processor cores. The APD is a light sensor that amplifies weak signals as light is directed onto silicon; Intel's APD converts the light beams into electrical signals. The device is specially designed to be robust, hence the use of silicon, a relatively inexpensive and tested commodity that can produce devices equivalent to mature, commercially available indium phosphide (InP) APDs. The development of a photodetector made from germanium and silicon has also been announced. The device, which has a bandwidth of 31 gigahertz, makes use of germanium's capability to efficiently detect light in the near infrared, which is the standard for communications. However, design defects compromised the product's electrical performance and prompted a slightly different approach. The new photodetector has built-in amplification, which makes it much more useful in detecting signals when minimal light falls on the detector. First, a negative and a positive charge (electrons and holes, in semiconductor terminology) are created when the light strikes the detector.


The electron is then accelerated by an electric field until it attains a high enough energy to slam into a silicon atom and create another pair of positive and negative charges. Each time this happens the total number of electrons doubles, until this avalanche of charges is collected by the detection electronics. It is largely accepted in the electronics industry that less-expensive silicon photonics produce inferior results. While this is true in many cases, it is not so with APDs: silicon's properties allow for higher gain with less excess noise than that recorded in InP devices. Moreover, the new approach also results in higher sensitivity, a metric defined as the smallest amount of optical power falling on the detector needed to maintain a low bit error rate. The APD utilizes silicon and CMOS processing to attain a gain-bandwidth product of 340 GHz, a breakthrough achievement. The gain-bandwidth product is a standard measure of APD performance that multiplies the device's amplification capability (gain) by the fastest signal speed that can be detected (bandwidth). This opens the door to lowering the cost of optical links running at data rates of 40 gigabits per second or faster and proves, for the first time, that a silicon photonics device can exceed the performance of a device made with traditional, more expensive optical materials such as indium phosphide. Higher speeds, along with lower power and noise levels, are essential in applications related to supercomputing, data center communications, consumer electronics, automotive sensors and medical diagnostics. This research demonstrates once again how silicon can be used to create very high-performing optical devices. Apart from optical communication, silicon-based APDs could be employed in other areas such as sensing, imaging, quantum cryptography and biological applications. This APD utilizes the inherently superior characteristics of silicon for high-speed amplification to create world-class optical technology. Two potential extensions to the new technology are now needed. The first would be to develop a waveguide-based APD, which could improve the absorption at wavelengths up to about 1600 nanometers and allow for easy integration with other optical devices, such as de-multiplexers and attenuators. Researchers believe that commercial optics is just a couple of years away.
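Two small numerical illustrations of the detector figures quoted above (a minimal sketch; the incident optical power and the 10 GHz operating bandwidth are assumed example values, while the 1.1 A/W responsivity and 340 GHz gain-bandwidth product come from the text):

    # Photocurrent from the waveguide detector's quoted responsivity.
    responsivity_a_per_w = 1.1      # A/W, from the text
    optical_power_w = 1e-3          # assumed 1 mW incident power
    photocurrent_ma = responsivity_a_per_w * optical_power_w * 1e3
    print(f"Photocurrent: {photocurrent_ma:.1f} mA")                  # ~1.1 mA

    # Avalanche gain available at an assumed operating bandwidth,
    # using gain x bandwidth = 340 GHz.
    gain_bandwidth_ghz = 340        # from the text
    operating_bandwidth_ghz = 10    # assumed
    available_gain = gain_bandwidth_ghz / operating_bandwidth_ghz
    print(f"Available avalanche gain: about {available_gain:.0f}x")   # ~34x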

Optical Coupling And Packaging

One of the most difficult challenges facing high-index-contrast optical systems is efficiently coupling light into and out of the chip. Particularly difficult is the coupling of light from a standard optical fiber or external light source to a silicon waveguide. Overcoming these challenges requires the development of processes and structures in addition to the core device. Tapers: A single-mode fiber core (n = 1.5) usually has a diameter of 8 µm with a symmetric mode, while a silicon waveguide (n = 3.45) is typically only a few micrometers in width with an asymmetric mode. To overcome these large differences in effective index, core size and symmetry, one frequently used method is to employ a waveguide taper. Tapers allow for a reduction in coupling loss through an adiabatic modal transformation and can also be used to increase the alignment tolerance of other optical devices, such as III-V lasers. Fiber Attach: In order to integrate the optical devices into optical networks, they must be integrated with fibers. The small waveguide dimensions and high index contrast of the silicon system lead to a fundamental difference in the optical mode profile between the waveguide and the fiber; the integration of waveguide tapers at the waveguide/fiber interface can solve this problem. Current fiber-attach techniques are "active," relying upon closed-loop optimization of the fiber position in order to ensure low-loss coupling. This technique is time consuming, however, and hence costly. Passive alignment techniques for fiber attachment remove the need for closed-loop optimization by creating highly precise, lithographically defined structures on the silicon surface in order to align the fiber to the waveguide aperture. A scanning electron micrograph shows several U-grooves, two of which are populated with optical fibers and aligned to silicon waveguides. Active alignment techniques are typically capable of placement tolerances better than 1 µm. The accuracy required of a passive alignment technique will depend upon the mode-field overlap of the fiber and waveguide modes, which can be controlled by the waveguides and tapers.

50 Gbps Optical Data Connection

Reaching a reproducible 50 Gbps link between two modules is a milestone in the development of silicon photonics. This was demonstrated by Intel on 27th July, 2010. The process to get where Intel is today did not happen overnight, and there were many milestones along the way that were necessary before the culmination could work reliably. Back in 2006, Intel and the University of California, Santa Barbara developed the first hybrid silicon laser that enabled a silicon-only solution to start the data transmission process. The modulators responsible for converting data into light have slowly increased from 1 GHz to 40 Gbps over a few years, and multiplexers are necessary to combine the silicon laser data. Along with the transmission, the development of photodetectors to receive and decode the data was being fleshed out in mid-2007, with some incredibly fast versions as recent as winter 2008.

Applications
The main applications include:
- High-speed data communication
- Optical amplifiers
- Wavelength converters
- Silicon lasers
- Low-cost lasers for biomedical applications
- Portable medical devices
- Optical debug of high-speed data
- Computing industry: more powerful supercomputers
- 3D virtual worlds

All of these components (the hybrid lasers, modulators, multiplexer, demultiplexer and photodetectors) are combined into a single piece of silicon on either side of a fiber optic cable to offer incredible data transmission speed. The transmitter die on the left in the diagram shows a set of four hybrid lasers, each tuned to a different wavelength of light, that run into modulators that act as optical shutters. The current iteration of these modulators is capable of running at 12.5 Gbps each, and when they are combined by the multiplexer a total of 50 Gbps is available to output through a single fiber optic cable. On the other end of the connection lies the integrated receiver chip, which uses a demultiplexer to separate the light into the four wavelengths once again and then into the above-mentioned photodetectors, which convert the 12.5 Gbps streams of data back into 1s and 0s to be used with currently existing electrical systems. Achieving a 50 Gbps data rate is impressive, but it isn't the goal of Intel to stop there; instead they have plans to scale silicon photonics both up and out. Scaling up involves increasing the throughput of the modulators above 12.5 Gbps to 25 and even 40 Gbps, based on research Intel has already started. By doing so with the current 4-laser implementation we would see as much as 100 Gbps total speeds. Scaling out simply means Intel could increase the number of lasers of different wavelengths on the silicon to 8, 16, or 25 and increase bandwidth that way. Or, better yet, combine the two methods and have a 40 x 25 connection capable of reaching the 1 Terabit per second mark.
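The aggregate-rate arithmetic described above can be checked directly; the numbers below are the ones quoted in the text, and the snippet is only a back-of-the-envelope sketch.

# Current demonstration: four wavelengths, each modulated at 12.5 Gbps.
lanes, rate_gbps = 4, 12.5
print(lanes * rate_gbps)        # 50.0 Gbps aggregate

# Scaling up (faster modulators) and scaling out (more wavelengths):
print(4 * 25)                   # 100 Gbps with 25 Gbps modulators
print(25 * 40)                  # 1000 Gbps (1 Tbps) with 25 wavelengths at 40 Gbps each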

The most obvious place we could see this applied is in the server space, where communication between servers and data centers could be drastically improved, enabling interesting paradigm shifts in data sharing and audio/visual communication. Obviously, the shorter the cable/connection between the silicon photonics transmitter and receiver, the easier this change will be, but Intel tells us that because light can travel so much further on much lower-cost infrastructure, this could be used for long-distance connections as well. Silicon photonics and data rates of this speed are not simply going to stay in the enterprise market. Imagine downloading 1000 high-resolution photos, 100 hours of digital music or an HD movie in less than a second; that is what is possible with the current 50 Gbps rates Intel is demonstrating. With the theoretical 1 Tbps speeds mentioned, we could back up the entire contents of our hard drive or 2-3 seasons of an HD TV drama in that same second. Obviously there are a lot of other infrastructure issues that would need to be addressed before anything like this would actually be feasible, but removing bottlenecks is the key to advancement. Looking even further down the road, it is likely we will see photonic communication replace the intra-chip communications used for motherboards or even multi-chip packaging. It seems amazing that when the laser was invented in 1960 no one really knew what the technology would be good for. Today, 50 years later, not only can we use them to read bar codes but we can also transmit data at previously unheard-of speeds.

Future
As Moore's Law continues to push microprocessor performance, and as increasing volumes of data are sent across the Internet, the demands placed on network infrastructure will increase significantly. Optical communications and silicon photonics technology will allow enterprises to scale bandwidth availability to meet


this demand. In addition, due to the low cost of silicon solutions, servers and high-end PCs might one day come standard with an optical port for high-bandwidth communication. Likewise, other devices will be able to share in the bandwidth explosion provided by the optical building blocks of silicon photonics. The goal is not only achieving high performance in silicon photonics, but doing so at a price point that makes the technology a natural fit, even an automatic feature, for all devices that consume bandwidth. Over time, Intel's vision is to develop integrated, high-volume silicon photonic chips that could dramatically change the way that enterprises use photonic links for their systems and networks. Simply having photonics could eliminate bandwidth and distance limitations, allowing for radically new, flexible architectures capable of processing data more efficiently. Silicon photonics may even have applications beyond digital communications, including optical debug of high-speed data, expanding wireless networks by transporting analog RF signals, and enabling lower-cost lasers for certain biomedical applications.

Bibliography
1. Whitepaper on the Continuous Silicon Laser, presented by Sean Koehl, Victor Krutul and Dr. Mario Paniccia
2. Whitepaper: Introducing Intel's Advances in Silicon Photonics, presented by Dr. Mario Paniccia, Victor Krutul and Sean Koehl
3. Silicon Photonics, by Bahram Jalali, Fellow, IEEE, and Sasan Fathpour, Member, IEEE, Journal of Lightwave Technology, Vol. 24, No. 12, December 2006
4. Silicon Photonics: An Introduction, John Wiley and Sons
5. techresearch.intel.com
6. www.intel.com
7. www.kotura.com
8. www.photonics.com
9. www.rp-photonics.com
10. www.biztechmagazine.com
11. www.pcper.com
12. www.research.ibm.com
13. www.nanowerk.com

Conclusion
It is clear that an enormous amount of work, corresponding to huge capital investments, is still required before silicon photonics can be established as a key technology. However, the potential merits motivate big players such as Intel to pursue this development seriously. If it is successful, it can lead to a very powerful technology with huge benefits for photonics and microelectronics and their applications. Although research in the area of planar optics in silicon has been underway for several decades, recent efforts at Intel Corporation have provided better understanding of the capabilities of such devices as silicon modulators, ECLs and SiGe detectors. Silicon modulators operating at 50 GHz have demonstrated several orders of magnitude improvement over other known Si-based modulators, with theoretical modeling indicating performance capabilities beyond 1 THz. Through further research and demonstration of novel silicon photonics devices, integrated silicon photonics has a viable future in commercial optoelectronics.


Swarm Intelligence
Aravind Raj D & Sarath B V
Electronics & Communication Department Mohandas College Of Engineering&Technology

Abstract
Swarm intelligence (SI) involves multiple simple agents interacting with each other and with the environment to solve complex problems through their collective global behaviour. It is inspired by the intelligent behaviour seen in swarms of animals such as colonies of ants, flocks of birds or schools of fish. SI systems can handle many problems that are not easily tackled by traditional means, including problems that are dynamic, unpredictable, ill-defined or computationally hard. SI systems have a number of attractive features, such as flexibility, robustness, decentralisation and self-organisation.

Introduction
As SI systems are inspired by natural biological swarms, the standard algorithms are modelled on the search for food. Differences in food-searching techniques lead to different SI algorithms, including the following.


Particle Swarm Optimisation (PSO)


This form of swarm intelligence is based on schools of fish and flocks of birds finding food. PSO is used to find an optimal point in space. Agents begin by being randomly spread out in the environment with random velocities. As the agents move they examine the area around them and communicate their evaluations to the other agents. This communication can be either global or restricted to a local neighbourhood. Based on their own findings and the findings communicated to them, agents adjust their velocities to follow better solutions. As a result, agents begin to head into the areas where the best solutions are being found, and this leads to an optimal solution.
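A minimal sketch of the velocity-update loop just described, written as standard textbook PSO on a toy two-dimensional objective; the parameter values and the objective function are illustrative assumptions, not part of the paper.

import random

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise `objective` with a basic particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # each particle's best-known position
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # best position found by the swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull towards personal best + pull towards global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy usage: the swarm converges towards the minimum at (1, 3).
print(pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 3) ** 2))

Global communication corresponds to every particle reading gbest; a local-neighbourhood variant would simply restrict which particles' bests each agent is allowed to see.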

Ant Colony Optimisation (ACO)


ACO replicates the natural behaviour of ants. Ants randomly spread out and search for food. When food is discovered, an ant returns to its base leaving a pheromone trail. Upon finding a pheromone trail, another ant will follow that trail, and if it finds food on this trail it too will return to base, leaving its own pheromone trail. If an ant on a pheromone trail crosses a stronger pheromone trail, it will follow the stronger trail. Pheromones decay over time, allowing the removal of non-optimal solutions. The ACO algorithm finds optimal solutions because shorter paths are travelled over faster, and hence more often, quickly leading to strong pheromone trails. Introducing new ants randomly over time allows responses to dynamic changes in the environment. ACO is typically used to find an optimal path.
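The pheromone bookkeeping described above can be sketched as follows; this is a deliberately simplified ACO for a small tour-finding problem, with arbitrary parameter values, not a production implementation.

import random

def aco_tour(dist, n_ants=10, iters=50, evaporation=0.5, alpha=1.0, beta=2.0, q=1.0):
    """Tiny Ant Colony Optimisation: ants build tours, short tours get more pheromone."""
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]               # pheromone on each directed edge
    best_tour, best_len = None, float("inf")

    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                # next city chosen with probability ~ pheromone^alpha * (1/distance)^beta
                choices = list(unvisited)
                weights = [(pher[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta)
                           for j in choices]
                nxt = random.choices(choices, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length

        # pheromone decay (evaporation), then reinforcement by each ant's tour
        pher = [[p * (1.0 - evaporation) for p in row] for row in pher]
        for tour, length in tours:
            for i in range(n - 1):
                pher[tour[i]][tour[i + 1]] += q / length
    return best_tour, best_len

# Toy usage on a 4-node distance matrix.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(aco_tour(dist))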


Intelligent Water Drops


The Intelligent Water Drops algorithm (IWD) is a swarm-based, nature-inspired optimisation algorithm inspired by natural rivers and how they find almost optimal paths to their destination. These near-optimal or optimal paths follow from actions and reactions occurring among the water drops, and between the water drops and their riverbeds. In the IWD algorithm, several artificial water drops cooperate to change their environment in such a way that the optimal path is revealed as the one with the lowest soil on its links. Solutions are incrementally constructed by the IWD algorithm; consequently, IWD is generally a constructive, population-based optimisation algorithm.
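The "lowest soil wins" idea can be caricatured in a few lines. The sketch below is a heavily simplified illustration of that central idea (probabilistic preference for low-soil links plus erosion along travelled paths) and not the full published IWD algorithm; the graph and parameter values are made up.

import random

def iwd_like_path(graph, source, target, n_drops=20, iters=30, erosion=0.1, eps=0.01):
    """Caricature of IWD: drops prefer links with little soil and erode the links they use."""
    soil = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_soil = None, float("inf")

    for _ in range(iters):
        for _ in range(n_drops):
            node, path, visited = source, [source], {source}
            while node != target:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:                      # dead end: abandon this drop
                    break
                weights = [1.0 / (eps + soil[(node, v)]) for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if node != target:
                continue
            total = sum(soil[(path[i], path[i + 1])] for i in range(len(path) - 1))
            for i in range(len(path) - 1):           # erosion makes this path more attractive
                soil[(path[i], path[i + 1])] *= (1.0 - erosion)
            if total < best_soil:
                best_path, best_soil = path, total
    return best_path

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(iwd_like_path(graph, "a", "d"))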


System Design
The difficult task in swarm intelligence is to answer the question: how do we program an individual agent so that the entire global system behaves as we want it to? The techniques used to design and control individual agents are a standard AI problem, and methods such as reinforcement learning, fuzzy logic and neural networks can be used. When designing an SI system, both the individual agent's ability to search and evaluate its area and a means of communication need to be considered. Many of the global emergent behaviours are difficult to predict. The major steps in SI system design are:
- Identification of analogies between swarm biology and IT systems
- Understanding: computer modelling of realistic swarm biology
- Engineering: model simplification and tuning for IT applications


Applications & Scope
Swarm Intelligence is utilised in the following areas:

Swarm robotics: This is a new approach to the coordination of multi-robot systems consisting of large numbers of mostly simple physical robots. It is supposed that a desired collective behaviour emerges from the interactions between the robots and the interactions of the robots with the environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants and other systems in nature where swarm behaviour occurs. One project that might deploy such methods in the near future is ANTS, the Autonomous Nano Technology Swarm. The acronym is apt, because ANTS is all about the collective, emergent intelligence of the sort that appears in insect colonies. What scientists at NASA's Goddard Space Flight Center envision is a massive cluster of tiny probes that use artificial intelligence to explore the asteroid belt. Each probe, weighing perhaps 1 kilogram (2.2 pounds), would have its own role: while a small number of them direct the exploration, perhaps 900 of the probes would proceed to do the work, with only a few returning to Earth with data. One key factor here is redundancy; the mission succeeds even if a large number of individual probes are lost. ANTS could serve as a testbed for numerous technologies as it spreads computing intelligence across intelligent, networked spacecraft. In particular, computer autonomy would be critical to ensuring the success of the mission.


Crowd simulation: This is the process of simulating the movement of a large number of objects or characters, now often appearing in 3D computer graphics for film. While simulating these crowds, observed human interaction behaviour is taken into account to replicate the collective behaviour. The need for crowd simulation arises when a scene calls for more characters than can be practically animated using conventional systems such as skeletons/bones. Simulating crowds offers the advantage of being cost-effective, as well as allowing total control of each simulated character or agent. The actual movement and interactions of the crowd are typically done in one of two ways:

Particle Motion: The characters are attached to point particles, which are then animated by simulating wind, gravity, attractions and collisions. It is usually very inexpensive to implement, but the method is not very realistic because motion is generally limited to a flat surface.

Crowd AI: The entities, also called agents, are given artificial intelligence, which guides the entities based on one or more functions such as sight, hearing, basic emotion, energy level and aggressiveness level. The entities are given goals and then interact with each other as members of a real crowd would. They are often programmed to respond to changes in the environment.

Crowd simulation is often employed in public safety planning. Gaining insight into natural human behaviour under varying types of stressful situations allows better models to be created, which can be used to develop crowd-control strategies.
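A tiny sketch of the particle-motion approach, with each character reduced to a point particle integrated under wind, gravity, a simple attraction and a flat-ground collision; all numbers are illustrative assumptions.

def step_crowd(positions, velocities, dt=0.1, wind=(0.2, 0.0), gravity=(0.0, -9.8),
               attractor=(0.0, 0.0), attraction=0.05):
    """Advance point-particle 'characters' by one time step."""
    new_pos, new_vel = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        ax = wind[0] + gravity[0] + attraction * (attractor[0] - x)
        ay = wind[1] + gravity[1] + attraction * (attractor[1] - y)
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
        if y < 0.0:                # crude collision with a flat ground plane
            y, vy = 0.0, 0.0
        new_pos.append((x, y))
        new_vel.append((vx, vy))
    return new_pos, new_vel

pos, vel = [(0.0, 2.0), (1.0, 3.0)], [(0.0, 0.0), (0.0, 0.0)]
for _ in range(10):
    pos, vel = step_crowd(pos, vel)
print(pos)

A Crowd AI variant would replace the fixed forces with per-agent decision functions (sight, goals, emotion) that set each character's desired velocity instead.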

Advantages
- Distributed system of interacting autonomous agents
- Goals: performance optimisation and robustness
- Self-organised control and cooperation (decentralised)
- Division of labour and distributed task allocation
- Indirect interactions
Swarm intelligence is close to nature and studies the collective behaviour of agents interacting with their environment, causing complex spatio-temporal patterns to emerge. SI systems are easy to code because of the simplicity of their rules.

Ant-Based Routing


The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. Basically, this uses a probabilistic routing table rewarding/reinforcing the route successfully traversed by each "ant" (a small control packet), which flood the network. Reinforcement of the route in the forward direction, in the reverse direction, and in both simultaneously has been researched: backward reinforcement requires a symmetric network and couples the two directions together; forward reinforcement rewards a route before the outcome is known (but then you pay for the cinema before you know how good the film is). As the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment. Mobile media and new technologies have the potential to change the threshold for collective swarm intelligence. Airlines have also used ant-based routing in assigning aircraft arrivals to airport gates. Ant-based routing in a wireless network is shown below.
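A minimal sketch of such a probabilistic routing table with backward reinforcement; it is a toy model with invented parameters, not any specific published ant-routing protocol.

import random

class AntRoutingTable:
    """Per-destination next-hop probabilities, reinforced by returning 'ants'."""
    def __init__(self, neighbours):
        self.pher = {n: 1.0 for n in neighbours}       # equal initial pheromone per next hop

    def choose_next_hop(self):
        hops = list(self.pher)
        return random.choices(hops, weights=[self.pher[h] for h in hops])[0]

    def reinforce(self, hop, trip_cost, reward=1.0, evaporation=0.02):
        for h in self.pher:                            # slow evaporation everywhere
            self.pher[h] *= (1.0 - evaporation)
        self.pher[hop] += reward / trip_cost           # cheap trips earn more pheromone

table = AntRoutingTable(neighbours=["B", "C"])
for _ in range(200):
    hop = table.choose_next_hop()
    cost = 2.0 if hop == "B" else 5.0                  # pretend B is the shorter route
    table.reinforce(hop, cost)
print(table.pher)                                      # most pheromone ends up on B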

Disadvantages
As this field is showing tremendous advancements day by day, it is very difficult to point out any limitations or disadvantages of swarm intelligence.

References
1. www.wikipedia.org
2. Science Daily, 2008 (April 1), "Planes, Trains and Ant Hills: Computer scientists simulate activity of ants to reduce airline delays."


Teleportation
Lekshmy Vijayakumar, Sreedhanya M UnnithanT
Electronics And Communication Lbs Institute Of Technology For Women Poojappura, Thiruvananthapuram

Abstract
Ever since the wheel was invented more than 5,000 years ago, people have been inventing new ways to travel faster from one point to another. The chariot, bicycle, automobile, airplane and rocket have all been invented to decrease the amount of time we spend getting to our desired destinations. Yet each of these forms of transportation shares the same flaw: they require us to cross a physical distance, which can take anywhere from minutes to many hours depending on the starting and ending points. Scientists are working right now on a method of travel that combines properties of telecommunications and transportation to achieve such a system: teleportation.

Introduction
Teleportation provides a unique communication system which will enable a life-size image of a person to appear within a 3D environment. You can make eye contact with individuals, use props and hold true two-way conversations, communicating naturally with anyone or any group of people anywhere in the world, as you would if you were there. After all, 80% of communication is non-verbal; the only thing you cannot do is shake hands. This is more advanced than video conferencing. Video conferencing has never presented itself as a realistic alternative to face-to-face meetings because of its severe limitations: only one person can speak at any one time, creating an amplified feeling of distance between participants. Teleportation allows a more natural form of conversation due to the lack of latency, and people achieve a sense of presence that cannot be gained from any other technology.


Finally A Reality: Quantum Teleportation


In 1993, Charles Bennett (IBM, T.J. Watson Research Center) and colleagues theoretically developed a method for quantum teleportation. Quantum teleportation involves the utter destruction of an unknown physical entity and its reconstruction at a remote location. Using a phenomenon known as 'quantum entanglement', the researchers force a photon of light to project its unknown state onto another photon, with only a minuscule amount of information being sent between the two. This is the first time quantum teleportation has been performed with a high degree of 'fidelity'. The researchers explain that teleporting optical fields may someday be appropriate for use in communication technology. The general idea of teleportation seems to be that the original object is scanned in such a way as to extract all the information from it; this information is then transmitted to the receiving location and used to construct a replica, perhaps from atoms of the same kinds, arranged in exactly the same pattern as the original. Until recently, teleportation was not taken seriously by scientists, because it was thought to violate the uncertainty principle of quantum mechanics, which forbids any measuring or scanning process from extracting all the information in an atom or other object. But scientists found a way to make an end-run around this logic, using a feature of quantum mechanics known as the Einstein-Podolsky-Rosen effect.

History
Over the years, the great barrier for anyone who experimented on teleportation was Mr. Werner Heisenberg. Anton Zeilinger, De Martini and their colleagues demonstrated independently that it is possible to transfer the properties of one quantum particle (such as a photon) to another, even if the two are at opposite ends of the galaxy. Until recently, physicists had all but ruled out teleportation, in essence because all particles behave simultaneously like particles and like waves. They presumed that to produce an exact duplicate of any one particle, you would first have to determine both its particle-like properties and its wave-like properties; yet doing so would violate the Heisenberg uncertainty principle of quantum mechanics. The solution was based on a theorem of quantum mechanics dating to the 1930s called the Einstein-Podolsky-Rosen effect. It states that when two particles come into contact with one another, they can become "entangled". In an entangled state, both particles remain part of the same quantum system, so that whatever you do to one of them affects the other. This shows how, in principle, entangled particles might serve as "transporters" of sorts: by introducing a third "message" particle to one of the entangled particles, one could transfer its properties to the other one, without ever measuring those properties.


Scientists found a way to scan out part of the information from an object A, which one wishes to teleport, while causing the remaining, unscanned part of the information to pass, via the Einstein-Podolsky-Rosen effect, into another object C which has never been in contact with A. Later, by applying to C a treatment depending on the scanned-out information, it is possible to manoeuvre C into exactly the same state as A was in before it was scanned. A itself is no longer in that state, having been thoroughly disrupted by the scanning, so what has been achieved is teleportation, not replication.

The Innsbruck Experiment


In the quantum teleportation process, physicists take a photon (or any other quantum-scale particle, such as an electron or an atom) and transfer its properties (such as its polarization, the direction in which its electric field vibrates) to another photon even if the two photons are at remote locations. The scheme does not teleport the photon itself; only its properties are imparted to another, remote photon. Here is how it works: At the sending station of the quantum teleporter,

Alice encodes a "messenger" photon (M) with a specific state: 45-degree polarization. This travels towards a beam splitter. Meanwhile, two additional "entangled" photons (A and B) are created. The polarization of each photon is in a fuzzy, undetermined state, yet the two photons have a precisely defined interrelationship; specifically, they must have complementary polarizations. Entangled photon A arrives at the beam splitter at the same time as the message photon M. The beam splitter causes each photon either to continue toward detector 1 or to change course and travel to detector 2. In the 25% of all cases in which the two photons go off into different detectors, Alice does not know which photon went to which detector. This inability of Alice to distinguish between the two photons causes quantum weirdness to kick in. Just by the very fact that the two photons are now indistinguishable, the M photon loses its original identity and becomes entangled with A. The polarization value for each photon is now indeterminate, but since they travel toward different detectors Alice knows that the two photons must have complementary polarizations. Since message photon M must have a polarization complementary to photon A, the other entangled photon (B) must now attain the same polarization value as M. Therefore, teleportation is successful: indeed, Bob sees that the polarization value of photon B is 45 degrees, the initial value of the message photon.
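The polarization algebra behind this 25% post-selection can be written compactly. The following is the standard textbook identity (with photon M in an arbitrary polarization state and A, B prepared in the antisymmetric entangled state); it is background material assumed here, not a derivation given in the paper.

\begin{align*}
|\psi\rangle_M &= \alpha|H\rangle_M + \beta|V\rangle_M, \qquad
|\Psi^-\rangle_{AB} = \tfrac{1}{\sqrt{2}}\bigl(|H\rangle_A|V\rangle_B - |V\rangle_A|H\rangle_B\bigr),\\
|\psi\rangle_M \otimes |\Psi^-\rangle_{AB}
  &= \tfrac{1}{2}\Bigl[\,|\Psi^-\rangle_{MA}\bigl(-\alpha|H\rangle_B - \beta|V\rangle_B\bigr)
   + |\Psi^+\rangle_{MA}\bigl(-\alpha|H\rangle_B + \beta|V\rangle_B\bigr)\\
  &\qquad\; + |\Phi^-\rangle_{MA}\bigl(\beta|H\rangle_B + \alpha|V\rangle_B\bigr)
   + |\Phi^+\rangle_{MA}\bigl(-\beta|H\rangle_B + \alpha|V\rangle_B\bigr)\Bigr].
\end{align*}

When the beam splitter projects M and A onto the antisymmetric state (the coincidence events, occurring in 25% of the cases), photon B is left in the state proportional to alpha|H> + beta|V>, the original polarization up to an irrelevant global phase; this is exactly the outcome the Innsbruck experiment post-selects.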


The Makerbot Printer


Transmission of the information necessary to reconstruct an object is not a problem; what we need are 3D scanners and printers. There is a fascinating open-source effort going on now to develop a 3D printer, called the MakerBot. The MakerBot works like a computer-controlled hot-glue gun, squirting melted plastic onto a platform moved by stepper motors. Under software control, it can reproduce plastic objects up to about the size of a small milk bottle. The last piece of the teleportation puzzle is a 3D scanner that generates data in a form that the MakerBot can use. Such a scanner does not seem impossible, and the possibilities are endless.

Conclusion
Teleportation, as said, is the feat of making an object or person disintegrate in one place while an exact replica appears somewhere else. On 8th June 2010, teleportation over 9.9 miles, the longest distance over which photonic teleportation has been achieved to date (more than 20 times longer than previous implementations), was demonstrated. For a person to be teleported, a machine would have to scan and analyse more than a trillion atoms and send this information to a receiving station to reconstruct him with exact precision. Even if such a machine were possible, it is unlikely that the person being transported would actually be "transported" in the ordinary sense. Like all technologies, scientists are sure to continue to improve upon the ideas of teleportation, so that one of our descendants might finish his work at his space office several galaxies away, be told it is time to beam home for dinner on Earth, and sit down at the dining table as soon as the words leave his mouth.


Introduction To The World Of Spin (A Concept-Based Innovative Technology Of The Future Through Electronics)
Nikhil.G.S, Rony Renjith.
III Department of Electronics and Communication Maria College Of Engineering and Technology
Nikhil4011@gmail.com

Abstract
As spintronics goes nano, new phenomena are predicted resulting from the interplay between spin-dependent transport and single-electron physics. The long-term goal of manipulating spins one by one would open a promising path to quantum computing. Towards this end, there is an ever-growing effort to connect spin tanks (i.e. ferromagnetic leads) to smaller and smaller objects in order to study spintronics in reduced dimensions. Spin is not completely replacing charge but is giving new definitions and better stability. This paper discusses basic spintronics theory, its areas and developments, current research, spintronic devices and their applications, future concepts, limitations and advantages. Discussion of nano-level spintronic studies is also involved, along with brief future directions in the emerging field of nanospintronics towards quantum dots, carbon nanotubes and single-molecule magnets.

Introduction
Spintronics, spin-electronics or magnetoelectronics is currently the focus of an intense research effort. In spintronics, we manipulate the spin degree of freedom of an electron as opposed to, or in addition to, manipulating the charge as we do in conventional electronics. Spin is a quantum two-level system; "up" and "down" are the two basis states in which the system can be found. Spin reflects the quantum nature of an electron, so that the remarkable property of quantum parallelism, which is so useful in quantum computation, for instance, can be achieved with spin states. Although spintronics is not expected to replace traditional electronics, it will likely play a complementary role to electronics, in particular in the domain of quantum information. Research in spintronics has been proceeding at a rapid pace, both experimentally and theoretically. New effects have been predicted, and then detected in the lab. As new experimental data become available, new interpretations are required. Although the study of the motion of charges can serve as a guide, the methods need to be adapted to the specifics of the spin degree of freedom.

Definition

Spintronics (a neologism meaning "spin transport electronics"), also known as magnetoelectronics, is an emerging technology that exploits the intrinsic spin of electrons and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. Spin S is an angular momentum with the physical dimension of action, i.e., energy times time. Like charge, spin is an intrinsic property of an electron: its magnitude is fixed once and for all, but its states are quantum superpositions, and its physical orientation is not fixed. This is unlike the orbital angular momentum L, whose classical counterpart is familiar from merry-go-round physics, and which depends on the position r and the linear momentum P through the relation L = r x P and can therefore take on many values depending on the spatial distribution of the electron. The electron spin is expressed as S = (ℏ/2)σ, where ℏ, Planck's fundamental quantum of action, provides the dimension, the factor 1/2 determines the magnitude of the spin, and σ, the Pauli spin operator, determines its properties.

History

The research field of spintronics emerged from experiments on spin-dependent electron transport phenomena in solid-state devices done in the 1980s, including the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985), and the discovery of giant


magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origins can be traced back further to the ferromagnet/superconductor tunnelling experiments pioneered by Meservey and Tedrow, and to initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990.

Theory
Electrons are spin-1/2 fermions and therefore constitute a two-state system with spin "up" and spin "down". To make a spintronic device, the primary requirements are a system that can generate a current of spin-polarized electrons comprising more of one spin species (up or down) than the other (called a spin injector), and a separate system that is sensitive to the spin polarization of the electrons (a spin detector). Manipulation of the electron spin during transport between injector and detector (especially in semiconductors), via spin precession, can be accomplished using real external magnetic fields or effective fields caused by the spin-orbit interaction. Spin polarization in non-magnetic materials can be achieved either through the Zeeman effect in large magnetic fields and at low temperatures, or by non-equilibrium methods. In the latter case, the non-equilibrium polarization will decay over a timescale called the "spin lifetime". Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond), but in semiconductors the lifetimes can be very long (microseconds at low temperatures), especially when the electrons are isolated in local trapping potentials (for instance, at impurities, where lifetimes can be milliseconds).
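The decay of a non-equilibrium spin polarization over the spin lifetime is commonly modelled as a simple exponential; the phenomenological form below is an assumption of this write-up rather than an equation from the text.

\[
P(t) = P(0)\, e^{-t/\tau_s}
\]

Here τ_s is the spin lifetime, so the nanosecond lifetimes in metals versus microsecond (or longer) lifetimes in semiconductors quoted above translate directly into how far and how long spin information survives during transport.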

All spintronic devices act according to the same simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. Spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared with the tens of femtoseconds over which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications and, potentially, for quantum computing, where the electron spin would represent a bit (called a qubit) of information.

Principle
The basic action in a spin-polarized device is shown in the figure, where it is assumed that the electrons are traveling from a ferromagnetic metal, through a normal metal, and into a second ferromagnetic metal. When the magnetizations (or, equivalently, the magnetic moments) of the two ferromagnetic metals are in an aligned state, the resistance is low, whereas the resistance is high in the antialigned state. Actual devices are not generally fabricated in the orientation shown in the figure, because they are made from thin films and the resistance perpendicular to the plane is too low.
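The contrast between the aligned and antialigned configurations is usually quantified by a magnetoresistance ratio; one common definition (assumed here, since the paper itself gives no formula) is

\[
\mathrm{MR} = \frac{R_{AP} - R_{P}}{R_{P}},
\]

where R_P and R_AP are the resistances in the parallel (aligned) and antiparallel (antialigned) magnetization states, so a large positive MR corresponds to the low-resistance aligned state described above.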

Spintronics Effect
Spin-Orbit Coupling

In atomic physics, an orbiting electron experiences the electric field of the nucleus. As Einstein explained, considered from a relativistic point of view, the electron experiences a magnetic field in its rest frame. The magnetic moment of the spin of the electron can then interact with this magnetic field. This is spin-orbit coupling (SOC). In certain materials, the Rashba and Dresselhaus effects originate respectively in Structure Inversion Asymmetry (SIA) and Bulk Inversion Asymmetry (BIA). SIA arises from the asymmetric doping of the quantum well, which creates an electric field, while


BIA stems from the asymmetry of the zinc-blende crystal lattice structure. As a result of these two SOC-type effects, we can control the spin without using magnetic fields. The Rashba and Dresselhaus Hamiltonians, which describe the energy and evolution of electrons of mass m in two-dimensional materials, involve both the linear momentum P and the spin variables.
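For concreteness, the forms usually quoted for a two-dimensional electron gas are given below; these explicit expressions are standard literature results assumed here, not formulas stated in the paper.

\begin{align*}
H_R &= \alpha\,(\sigma_x k_y - \sigma_y k_x) && \text{(Rashba term, from SIA)}\\
H_D &= \beta\,(\sigma_x k_x - \sigma_y k_y) && \text{(linear Dresselhaus term, from BIA)}
\end{align*}

Here k = P/ℏ is the in-plane wave vector, σx and σy are Pauli matrices, and the coefficients α and β measure the strength of the structure and bulk inversion asymmetry respectively, so tuning them (for example with a gate voltage) steers the spin without any magnetic field.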

Spin Hall Effect

A second exciting area of spintronics centers on the Spin Hall Effect (SHE). In 1879, long before the advent of quantum mechanics, Edwin Hall, then a doctoral student, discovered what came to be known as the Hall effect.

(Figure: The Hall Effect, with charge separation and accumulation.)

In the original Hall effect, opposite charges accumulate on opposite edges of a conductor (top and bottom in the figure) because of the pull F of a magnetic field B (pointing into the page in the figure) which acts on the particles of charge q moving at velocity v (to the right in the figure), as described by the microscopic Lorentz force F = q (v x B). The potential resulting from this charge separation leads to the measurable Hall voltage. Fifty years later, and now retired, Edwin Hall was again studying electrons in metals, using quantum mechanics and the statistical properties that spin forces upon them. Could he have predicted that, several decades later, spintronics would open a new chapter in Hall effect physics? In 2003, Murakami et al. and Sinova et al. predicted the existence of an intrinsic Spin Hall Effect (SHE) as a result of Rashba SOC. Earlier, Hirsch had predicted an extrinsic SHE resulting from spin-dependent scattering due to defects in the sample. In 2004, Kato et al. observed the SHE in semiconductors, with spin accumulation on opposite edges of the semiconductor as illustrated in the figure.

(Figure: The Spin Hall Effect, with spin separation and accumulation.)


The Electric Microbe


Vishnu.R & Ramesh K.R
Department Of Electronics and Instrumentation Engineering Noorul Islam College Of Engineering vishnu057@gmail.com, rkrgkr@gmail.com

Abstract
Electric microbes are a recent discovery that can be used to generate electricity from mud and wastewater. Bacteria have always gotten a bad rap, but we should be thankful for one especially talented microbe, Geobacter, which has tiny hair-like extensions called pili that it uses to generate electricity from mud and wastewater.

Introduction:
Scientists have discovered a tiny biological structure that is electrically conductive, which could help clean up groundwater and produce electricity from renewable resources. The conductive structures, known as "microbial nanowires," are produced by a microorganism known as Geobacter. The very small nanowires are only 3-5 nanometers in width (20,000 times finer than a human hair), but quite durable and more than a thousand times as long as they are wide. Geobacter are the subject of intense investigation because they are potentially useful agents in the bioremediation of groundwater contaminated with pollutants such as toxins, radioactive metals or petroleum. They also have the ability to convert human and animal wastes or renewable biomass into electricity. This new research shows how Geobacter transfers electrons outside the cell onto metals or electrodes to achieve these processes. These results help us understand how Geobacter can live in environments that lack oxygen and carry out such unique phenomena as removing organic and metal pollution from groundwater.

Previous studies showed that Geobacter produces fine, hair-like structures, known as pili, on just one side of the cell. A team in 2009 speculated that the pili might be miniature wires extending from the cell that would permit Geobacter to carry out its unique ability to transfer electrons outside the cell onto metals and electrodes. This was confirmed in a study using an atomic force microscope, which found the pili to be highly conductive. Such long, thin conductive structures are unprecedented in biology. This completely changes our concept of how microorganisms can handle electrons, and it also seems likely that microbial nanowires could be useful materials for the development of extremely small electronic devices. Manufacturing nanowires from more traditional materials such as metals, silica or carbon is difficult and expensive. However, it is easy to grow billions of Geobacter cells in the laboratory and harvest the microbial nanowires that they produce. The researchers added that by altering the DNA sequence of the genes that encode for microbial nanowires, it may be possible to produce nanowires with different properties and functions. The remarkable and unexpected discovery of microbial structures comprising microbial


nanowires that may enable a microbial community in a contaminated waste site to form mini power grids could provide new approaches to using microbes to assist in the remediation of DOE waste sites, to support the operation of mini environmental sensors, and to nano-manufacture in novel biological ways. This discovery also illustrates the continuing relevance of the physical sciences to today's biological investigations," said Aristides Patrinos of the U.S. Department of Energy, which funds the Geobacter research.

Background of The Invention:


The basic hydrogen PEM fuel cell consists of two catalyst-loaded electrodes separated by a proton exchange membrane (PEM). Molecular oxygen supplied to the catalytically active cathode is dissociated and reduced (an energetically favored process). Molecular hydrogen supplied to the anode is dissociated and the hydrogen atoms oxidized to protons (H+), giving up their electrons to the anode. Those electrons propagate through the external circuit to the cathode, delivering work in the process. The protons generated at the anode meanwhile diffuse through the PEM to combine with the reduced oxygen, producing water and heat as the waste products. Both the anode and the cathode (in addition to the requirement that they be electrically conductive) are engineered with specific catalysts, commonly Pt, to facilitate the molecular dissociations and the respective electron transfers. Several mechanisms for electron transfer have been proposed: electron transfer via outer-surface c-type cytochromes, long-range electron transfer via microbial nanowires, electron flow through a conductive biofilm matrix containing cytochromes, and soluble electron shuttles. Which mechanisms are most important depends on the microorganisms and the thickness of the anode biofilm. Emerging systems-biology approaches to the study, design and evolution of microorganisms interacting with electrodes are expected to contribute to improved microbial fuel cells.
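For reference, the electrode chemistry described above corresponds to the standard PEM fuel-cell half-reactions (textbook chemistry, not equations given in the paper):

\begin{align*}
\text{Anode:}\quad & \mathrm{H_2 \;\rightarrow\; 2H^+ + 2e^-}\\
\text{Cathode:}\quad & \mathrm{O_2 + 4H^+ + 4e^- \;\rightarrow\; 2H_2O}\\
\text{Overall:}\quad & \mathrm{2H_2 + O_2 \;\rightarrow\; 2H_2O + heat}
\end{align*}

In a microbial fuel cell the anode half-reaction is replaced by microbial oxidation of organic matter (for example acetate), with the bacteria delivering the electrons to the anode by the mechanisms listed above.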

Alternative Energy Source:


Numerous investigators have suggested that microbial production of electricity may become an important form of bioenergy, because microbial fuel cells offer the possibility of extracting current from a wide range of complex organic wastes and renewable biomass. The limitation to widespread utilization of microbial fuel cells as an alternative energy source is that, at present, the power densities of microbial fuel cells are too low for most envisioned applications. The only practical applications are sediment microbial fuel cells that extract electrons from organic matter in marine sediments to power electronic monitoring devices, and possibly sediment fuel cells in a pot, which can serve as a light source or battery charger in off-grid areas. Substantial improvements will be required before other commonly projected uses of microbial fuel cells, such as large-scale conversion of organic wastes and biomass to electricity, or powering vehicles, mobile electronic devices, or households, will be possible.

Conversion of Organic Matter To Electricity:


Recent studies have greatly expanded the range of microorganisms known to function either as electrode-reducing microorganisms at the anode or as electrode-oxidising microorganisms at the cathode. Microorganisms that can completely oxidize organic compounds with an electrode serving as the sole electron acceptor are expected to be the primary contributors to power production.


Mechanisms for Microbe-Electrode Interaction:

A diversity of mechanisms by which microorganisms may transfer electrons to the anode of microbial fuel cells has been proposed. Initial investigations into the mechanisms of microbe-anode interactions have focused on studies with pure-culture models, because pure cultures can be genetically modified for functional studies, and genome-scale investigations of gene expression and proteomics are more readily interpretable with pure cultures. Pure-culture studies are likely to have the most relevance to power production in mixed communities if the pure culture: 1) is representative of those that predominate on anodes; 2) is capable of high current densities; and 3) completely oxidizes environmentally relevant organic electron donors, such as acetate. Two cultures, Rhodopseudomonas palustris and Geobacter sulfurreducens, have been reported to be capable of current densities comparable to mixed communities. Of these two, detailed investigations of the mechanisms for electron transfer to anodes have only been reported for Geobacter sulfurreducens. Studies on this organism have the additional benefit that it is closely related to organisms that, as noted above, often predominate on anodes, and that it is capable of completely oxidizing acetate with an electrode serving as the sole electron acceptor.

Future Shock from the Microbe Electric:

The study of microbial fuel cells and, more generally, microbe-electrode interactions is rapidly amping up, not only in power production but also in the number of investigators and areas of study. The most intense focus has been on wastewater treatment, and this is likely to continue for some time. It was probably safe to say five years ago that any compound that microorganisms can degrade could be converted to electricity in a microbial fuel cell, but if there ever was any doubt, this point has been proven over and over again in a plethora of recent studies. It is clear from this work that a major limitation in converting complex wastes to electricity is the initial microbial attack on the larger, difficult-to-access molecules, just as it is in any other treatment option. It may well be that the intense focus on the degradation of complex organic matter in other bioenergy fields will soon make a contribution here.

Conclusions:
Although the microbiology of microbe-electrode interactions is fascinating from a purely biological perspective, most research in this area is ultimately justified by the hope of increasing the power output of microbial fuel cells or developing additional microbe-electrode applications. Just as there is a wide phylogenetic diversity of microorganisms capable of extracellular electron transfer to Fe(III), it is likely that there is an equally diverse range of microorganisms capable of interacting with electrodes. If the appropriate strategies can be found, it may be possible to identify organisms capable of higher rates of electron transfer between microorganisms and electrodes than currently available strains. Genome-scale metabolic modelling coupled with genetic engineering may yield strains that can enhance current production. Increasing the capacity for current production is a promising approach for increasing the power output of microbial fuel cells. Furthermore, as the understanding of the range of reactions that microorganisms can carry out with electrodes serving either as the electron donor or as the electron acceptor continues to expand, applications of microbe-electrode interactions such as the production of commodity chemicals may eclipse power production as the most promising uses of this technology.


Touchscreen
Sudheesh.S & Dino P Ponnachan
Electronics & Communication Mohandas College of Engineering and Technology

Abstract
A touchscreen is an electronic visual display that can detect the presence and location of a touch within the display area. The term generally refers to touching the display of the device with a finger or hand. Touchscreens can also sense other passive objects, such as a stylus. Although touchscreen technology dates back about forty years, it has caused a revolution in the past five years through its application in devices that we deal with in our day-to-day lives. There are basically three underlying technologies used in touchscreens, namely resistive, capacitive and acoustic. The latest developments in touchscreen technology are multitouch and haptic touch.

Introduction
A touchscreen is an electronic visual display that can detect the presence and location of a touch within the display area. The touchscreen has two main attributes. First, it enables one to interact directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or touchpad. Secondly, it lets one do so without requiring any intermediate device that would need to be held in the hand. Such displays can be attached to computers, or to networks as terminals. They also play a prominent role in the design of digital appliances such as the personal digital assistant (PDA), satellite navigation devices, mobile phones, and video games.

The Main Components of a Touch Screen

Every touch screen has three main components: a touch-sensitive surface, a controller, and a software driver.

Touch Sensitive Surface: The touch-sensitive surface is an extremely durable and flexible glass or polymer touch-response surface, and this panel is placed over the viewable area of the screen. In most sensors there is an electrical signal going across the screen, and a touch on the surface causes a change in the signal depending on the touch-sensor technology used. This change allows the controller to identify the location of the touch.

The Controller: The controller is a device that acts as the intermediary between the screen and the computer. It interprets the electrical signal of the touch event and converts it into a digital signal that the computer can understand. The controller can be placed with the screen or housed externally.

The Software Driver: The software driver is an interpreter that converts the signal that comes from the controller into information that the operating system can understand.

History
The prototype x-y mutual capacitance touchscreen was developed at CERN in 1977 by Bent Stumpe, a Danish electronics engineer, for the control room of CERN's accelerator SPS (Super Proton Synchrotron). In 1971, the first "touch sensor" was developed by Doctor Sam Hurst (founder of Elographics) while he was an instructor at the University of Kentucky. This sensor, called the "Elograph," was patented by The University of Kentucky Research Foundation. The "Elograph" was not transparent like modern touch screens; however, it was a significant milestone in touch screen technology. In 1974, the first true touch screen incorporating a transparent surface was developed by Sam Hurst and Elographics. In 1977, Elographics developed and patented five-wire resistive technology, the most popular touch screen technology in use today. Touchscreens first gained some visibility with the invention of the computer-assisted learning terminal, which came out in 1975 as part of the PLATO project.


Technologies
1. Resistive: A resistive touchscreen panel is composed of several layers, the most important of which are two thin, electrically conductive layers separated by a narrow gap. When an object, such as a finger, presses down on a point on the panel's outer surface, the two metallic layers become connected at that point; the panel then behaves as a pair of voltage dividers with connected outputs. This causes a change in the electrical current, which is registered as a touch event and sent to the controller for processing.

2. Capacitive: A capacitive touchscreen panel consists of an insulator, such as glass, coated with a transparent conductor such as indium tin oxide (ITO). As the human body is also a conductor, touching the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a change in capacitance. Different technologies may be used to determine the location of the touch. The location is then sent to the controller for processing. The different technologies under capacitive sensing are surface capacitance, projected capacitance, mutual capacitance and self capacitance.

3. Surface Acoustic Wave: Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen panel. When the panel is touched, a portion of the wave is absorbed. This change in the ultrasonic waves registers the position of the touch event and sends this information to the controller for processing. Surface wave touch screen panels can be damaged by outside elements. Contaminants on the surface can also interfere with the functionality of the touchscreen.
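On the read-out side, the 4-wire resistive panel behaves as the pair of voltage dividers described above, so the touch position follows from two ADC readings. The sketch below is a generic illustration; the resolution, screen size and scaling are assumptions, not the interface of any particular controller chip.

def touch_position(adc_x, adc_y, adc_max=4095, width_px=800, height_px=480):
    """Map raw voltage-divider ADC readings to pixel coordinates (assumed 12-bit ADC)."""
    # Each axis is driven in turn; the measured wiper voltage is proportional to position.
    x = (adc_x / adc_max) * width_px
    y = (adc_y / adc_max) * height_px
    return int(x), int(y)

# Example: mid-scale readings land near the centre of an 800x480 panel.
print(touch_position(2048, 1024))   # (400, 120)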

finger, stylus or pen.Unlike capacitive touchscreenspatterning on the glass which increases durability and optical clarity of the overall system. Optical imaging This is a relatively modern development in touchscreen technology, in which two or more image sensors are placed around the edges (mostly the corners) of the screen. Infrared back lights are placed in the camera's field of view on the other side of the screen. A touch shows up as a shadow and each pair of cameras can then be triangulated to locate the touch or even measure the size of the touching object . This technology is growing in popularity, due to its scalability, versatility, and affordability, especially for larger units.

Dispersive signal technology This system uses sensors to detect the mechanical energy in the glass that occurs due to a touch. Complex algorithms then interpret this information and provide the actual location of the touch. The technology claims to be unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on screen, it also claims to provide excellent optical clarity. Also, since mechanical vibrations are used to detect a touch event, any object can be used to generate these events, including fingers and stylus. A downside is that after the initial touch the system cannot detect a motionless finger. Acoustic pulse recognition This system uses piezoelectric transducers located at various positions around the screen to turn the mechanical energy of a touch (vibration) into an electronic signal. The screen hardware then uses an algorithm to determine the location of the touch based on the transducer signals. The touchscreen itself is made of ordinary glass, giving it good durability and optical clarity. It is usually able to function with scratches and dust on the screen with good accuracy. The technology is also well suited to displays that are physically larger. As with the Dispersive Signal Technology system, after the initial touch, a motionless finger cannot be detected. However, for the same reason, the touch recognition is not disrupted by any resting objects.

Other Technologies
An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in vertical and horizontal patterns, which helps the sensors pick up the exact location of the touch. A major benefit of such a system is that it can detect essentially any input, including a finger, gloved finger, stylus or pen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass, which increases the durability and optical clarity of the overall system.


Construction
There are several principal ways to build a touchscreen. The key goals are to recognize one or more fingers touching a display, to interpret the command that this represents, and to communicate the command to the appropriate application. In the most popular techniques, the capacitive or resistive approach, there are typically four layers:
1. Top polyester layer coated with a transparent metallic conductive coating on the bottom
2. Adhesive spacer
3. Glass layer coated with a transparent metallic conductive coating on the top
4. Adhesive layer on the backside of the glass for mounting

When a user touches the surface, the system records the change in the electrical current that flows through the display. Dispersive-signal technology, which 3M created in 2002, measures the piezoelectric effect (the voltage generated when mechanical force is applied to a material) that occurs when a strengthened glass substrate is touched. There are two infrared-based approaches. In one, an array of sensors detects a finger touching or almost touching the display, thereby interrupting light beams projected over the screen. In the other, bottom-mounted infrared cameras record screen touches. In each case, the system determines the intended command based on the controls showing on the screen at the time and the location of the touch.
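The last step described above, deciding which command a touch represents from the controls showing on the screen, is essentially a hit test. The sketch below illustrates this under assumed control names and geometry; it is not taken from any specific touch controller or GUI toolkit.

```python
# Illustrative sketch: once the controller reports a touch coordinate, the
# system looks up which on-screen control (if any) the point falls inside.
# Control names and geometry are invented for the example.

from dataclasses import dataclass

@dataclass
class Control:
    name: str
    x: int      # top-left corner
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def dispatch_touch(px: int, py: int, controls: list[Control]) -> str:
    """Return the command associated with the touched control, if any."""
    for c in controls:
        if c.contains(px, py):
            return f"activate:{c.name}"
    return "ignore"

if __name__ == "__main__":
    ui = [Control("ok", 100, 300, 120, 60), Control("cancel", 260, 300, 120, 60)]
    print(dispatch_touch(150, 320, ui))   # -> activate:ok
    print(dispatch_touch(10, 10, ui))     # -> ignore
```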

Comparison of touchscreen technologies


The following information is supplied by Mass Multimedia Inc., a Colorado-based company selling touch screen technology.

Technology              | 4-Wire Resistive | Surface Acoustic Wave | 5-Wire Resistive | Infrared          | Capacitive
Durability              | 3 years          | 5 years               | 5 years          | 5 years           | 2 years
Stability               | High             | Higher                | High             | High              | Ok
Transparency            | Bad              | Good                  | Bad              | Good              | Ok
Installation            | Built-in/Onwall  | Built-in/Onwall       | Built-in/Onwall  | Onwall            | Built-in
Touch                   | Anything         | Finger/Pen            | Anything         | Finger/Pen        | Conductive
Intense light-resistant | Good             | Good                  | Good             | Bad               | Bad
Response time           | <10 ms           | 10 ms                 | <15 ms           | <20 ms            | <15 ms
Following speed         | Good             | Low                   | Good             | Good              | Good
Excursion               | No               | Small                 | Big              | Big               | Big
Monitor option          | CRT or LCD       | CRT or LCD            | CRT or LCD       | CRT or LCD or LED | CRT or LCD
Waterproof              | Good             | Ok                    | Good             | Ok                | Good

Haptictouch
HapticTouch technology from Pacinian provides tactile feedback at the touch surface itself: surface actuation delivers the tactile response at the point of contact, giving a more compelling user experience in a wide array of applications and industries. The HapticTouch system creates surface-level feedback using a simple approach based on the principle of electrostatics. A charge differential is generated between the touch surface and a sub-surface, creating an attractive force which causes motion of the touch surface. This movement can be precisely controlled to deliver a consistent, high-fidelity response. There are many factors to consider when implementing any tactile feedback technology, including power efficiency, scalability, and cost.

Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection. Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered and sent to the software, which then initiates a response to the gesture.

References
Shneiderman, B. (1991). "Touch screens now offer compelling uses". IEEE Software 8 (2): 93-94, 107.
Potter, R.; Weldon, L. & Shneiderman, B. (1988). "Improving the accuracy of touch screens: An experimental evaluation of three strategies". Proc. CHI '88. Washington, DC: ACM Press. pp. 27-32.
Sears, A.; Plaisant, C. & Shneiderman, B. (1992). "A new era for high precision touchscreens". In Hartson, R. & Hix, D. (eds.), Advances in Human-Computer Interaction, Vol. 3. Ablex, NJ.
http://en.wikipedia.org/wiki/Touchscreen
Howstuffworks - How do touchscreen monitors know where you're touching?
MERL - Mitsubishi Electric Research Lab (MERL)'s research on interaction with touch tables.
Jefferson Y. Han et al. Multi-Touch Interaction Research: Multi-Input Touchscreen using Frustrated Total Internal Reflection.

Multitouch
On touchscreen displays, multi-touch refers to the ability to simultaneously register three or more distinct positions of input touches. The term is often also used to describe other, more limited implementations, such as gesture-enhanced single-touch, dual-touch, or real multi-touch. The most popular forms are mobile devices (iPhone, iPod Touch), tables (Microsoft Surface) and walls. Both touch tables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs. When a finger or an object touches the surface, causing the light to scatter, the reflection is caught with sensors or cameras that send the data to software, which dictates the response to the touch depending on the type of reflection measured.


Wireless Home Automation Network


Sreevas. S
S6, Applied Electronics, College of Engineering, Trivandrum

Abstract
Wireless home automation networks (WHANs) comprise wireless embedded sensors and actuators that enable monitoring and control applications for home user comfort and efficient home management.

Need:
In recent years, the number of devices that need to be connected and controlled remotely in industrial and home environments has grown considerably. In smart homes and businesses there is a need to control these devices remotely and in a power-friendly manner. A WHAN is the solution to these requirements.

GSM Based Home Automation:

GSM (Global System for Mobile Communications; originally from Groupe Spécial Mobile) is the world's most popular standard for mobile telephony systems. When you press a DTMF key on the telephone keypad, a connection is made that generates a resultant signal of two tones at the same time. These two tones are taken from a row frequency and a column frequency; the resultant signal is called Dual Tone Multiple Frequency (DTMF), and each key produces a unique pair of tones. A DTMF signal is the algebraic sum of two different audio frequencies and can be expressed as:

f(t) = A0·sin(2π·fa·t) + B0·sin(2π·fb·t)    (1)

where fa and fb are two different audio frequencies with A0 and B0 as their peak amplitudes, and f(t) is the resultant DTMF signal. fa belongs to the low-frequency group and fb belongs to the high-frequency group; each group comprises four frequencies, and one frequency from each group is combined to represent the pressed key. The amplitudes of the two sine waves should be such that

0.7 < (A0/B0) < 0.9    (2)

The frequencies are chosen such that they are not harmonics of each other. The frequencies associated with the various keys on the keypad are shown in figure (A). When you send these DTMF signals to the telephone exchange through cables, the servers in the exchange identify them and make the connection to the person you are calling.

Figure (B): A typical DTMF signal

Along with these, the DTMF generator in the telephone set provides a group of special-purpose tones, normally not available on the keypad, identified as A, B, C and D. These tones use the same column frequency but the row frequencies given in the table in figure (A), and are used for communication signaling.

Figure (C): The frequency table

Due to their accuracy and uniqueness, DTMF signals are used for controlling systems over telephone lines. By using DTMF-generating ICs (e.g., UM91214) we can generate DTMF tones without depending on the telephone set.
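Equation (1) can be turned directly into a small tone generator. The sketch below synthesises the DTMF tone for a pressed key as the sum of one low-group and one high-group sine wave; the sample rate, duration and amplitude ratio are assumptions chosen so that condition (2) holds.

```python
# Minimal sketch of equation (1): the DTMF tone for a key is the sum of a
# low-group and a high-group sine wave.  Sample rate, duration and the
# amplitudes a0/b0 are illustrative assumptions (a0/b0 = 0.8 satisfies (2)).

import math

LOW  = {'1': 697, '2': 697, '3': 697, 'A': 697,
        '4': 770, '5': 770, '6': 770, 'B': 770,
        '7': 852, '8': 852, '9': 852, 'C': 852,
        '*': 941, '0': 941, '#': 941, 'D': 941}
HIGH = {'1': 1209, '4': 1209, '7': 1209, '*': 1209,
        '2': 1336, '5': 1336, '8': 1336, '0': 1336,
        '3': 1477, '6': 1477, '9': 1477, '#': 1477,
        'A': 1633, 'B': 1633, 'C': 1633, 'D': 1633}

def dtmf_samples(key, duration=0.1, fs=8000, a0=0.4, b0=0.5):
    """Return f(t) = a0*sin(2*pi*fa*t) + b0*sin(2*pi*fb*t) sampled at fs."""
    fa, fb = LOW[key], HIGH[key]
    n = int(duration * fs)
    return [a0 * math.sin(2 * math.pi * fa * i / fs) +
            b0 * math.sin(2 * math.pi * fb * i / fs) for i in range(n)]

if __name__ == "__main__":
    tone = dtmf_samples('5')
    print(len(tone), tone[:3])
```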

Main Features:
Light control.
Remote control.
Smart energy.
Remote care.
Security and safety.

Technologies:
Bluetooth: Used in PANs. Range is ~10 m. The ISM band at 2.4 GHz is employed. Supports only a star topology.
Infrared (IR): Line-of-sight (LOS) operation and limited range.
ZigBee: Supports star, mesh, and tree topologies. Multihop communication extends the range.
Radio Frequency (RF): 413 MHz or 315 MHz, range ~100 m. Supports only a star topology.

Conclusion:
The long-dreamt-of concept of the e-home can be realized through this project. Speech-controlled interactive wireless home automation can be made a reality, and the immediate future will see rapid strides in this area.

Bibliography:
1. Baris Yuksekkaya et al., "A GSM, Internet and Speech Controlled Wireless Interactive Home Automation System", IEEE Transactions on Consumer Electronics, Vol. 52, No. 3, August 2006.
2. Yu-Ping Tsou et al., "Building a Remote Supervisory Control Network System for Smart Home Applications", Proceedings of the 2006 IEEE International Conference on Systems, Man, and Cybernetics, October 8-11, 2006, Taipei, Taiwan.
3. Mohd Adib B. et al., "Wireless Home Security and Automation System Utilizing ZigBee based Multi-hop Communication", Proceedings of the IEEE 2008 6th National Conference on Telecommunication Technologies and IEEE 2008 2nd Malaysia Conference on Photonics, 26-27 August 2008, Putrajaya, Malaysia.


Brain-Machine Interface
Ajay J. & Ninan Lawrence
Sixth Semester, Electronics and Communication Department, Mohandas College of Engineering and Technology

Abstract
Brain-Machine Interface (BMI) is a communication system which enables the user to control special computer applications using only his or her thoughts. It allows the human brain to accept and control a mechanical device as a part of the body. Data can flow from the brain to outside machinery, or from outside machinery to the brain. Different research groups have examined and used different methods to achieve this; almost all of them are based on electroencephalography (EEG) recorded from the scalp. The major goal of such research is to create a system that allows patients whose sensory or motor nerves are severely damaged to activate outside mechanisms using brain signals. Cyberkinetics Inc., a leader in neurotechnology, has developed the first implantable brain-machine interface that can reliably interpret brain signals, and perhaps read decisions made in the brain, to develop a fast, reliable and unobtrusive connection between the brain of a severely disabled person and a personal computer.

Main Principle
The main principle behind this interface is the bioelectrical activity of nerves and muscles. It is now well established that the human body, which is composed of living tissues, can be considered a power station generating multiple electrical signals from two internal sources, namely muscles and nerves. The brain controls the emotions and functions of the human body and is composed of millions of neurons. These neurons work together in complex logic and produce the thoughts and signals that control our bodies. When a neuron fires, or activates, there is a voltage change across the cell (~100 mV) which can be read through a variety of devices. When we want to make a voluntary action, the command is generated in the frontal lobe. Signals are generated on the surface of the brain; these electric signals differ in magnitude and frequency. By monitoring and analyzing these signals we can understand the working of the brain. When we imagine ourselves doing something, small signals are generated in different areas of the brain. These signals are not large enough to travel down the spine and cause actual movement, but they are measurable. A neuron depolarizes to generate an impulse; this action causes small changes in the electric field around the neuron. These changes are measured as 0 (no impulse) or 1 (impulse generated) by the electrodes. We can also control brain functions by artificially producing these signals and sending them to the respective parts, through stimulation of the part of the brain responsible for a particular function using implanted electrodes.

Subject Detailing
A brain-machine interface (BMI) is an attempt to mesh our minds with machines. It is a communication channel from a human's brain to a computer which does not resort to the usual human output pathways such as muscles. It is about giving machine-like capabilities to intelligence, asking the brain to accommodate synthetic devices, and learning how to control those devices much the way we control our arms and legs today. These experiments lend hope that people with spinal injuries will someday be able to use their brain to control a prosthetic limb, or even their own arm. A BMI could, for example, allow a paralyzed patient to convey his or her intentions to a computer program. Applications in which healthy users benefit from direct brain-computer communication are also conceivable, e.g., to
speed up reaction times. Initially these interactions are with peripheral devices, but ultimately they may be interactions with another brain. The first peripheral devices were robotic arms. Our approach is based on an artificial neural network that recognizes and classifies different brain activation patterns associated with carefully selected mental tasks. Using a BMI, artificial electrical signals can stimulate the brain tissue in order to transmit particular sensory information.

Figure: The organization of a BMI

Electroencephalography (EEG)
Electroencephalography (EEG) is a method used to measure the electrical activity of the brain. The brain generates rhythmical potentials which originate in the individual neurons of the brain. These potentials are summated as millions of cells discharge synchronously and appear as a surface waveform, the recording of which is known as the electroencephalogram. When a neuron is exposed to a stimulus above a certain threshold, a nerve impulse, seen as a change in membrane potential, is generated; it spreads through the cell, resulting in depolarization, and shortly afterwards repolarization occurs. The EEG signal can be picked up with electrodes either from the scalp or directly from the cerebral cortex. As the neurons in our brain communicate with each other by firing electrical impulses, they create an electric field which travels through the cortex, the dura, the skull and the scalp. The EEG is measured from the surface of the scalp as the potential difference between the actual measuring electrode and a reference electrode. The peak-to-peak amplitude of the waves that can be picked up from the scalp is normally 100 microvolts or less, while that on the exposed brain is about 1 mV. The frequency varies greatly with different behavioural states. The normal EEG frequency content ranges from 0.5 to 50 Hz. Frequency information is particularly significant, since the EEG range is classified into five bands for purposes of EEG analysis. These bands are called brain rhythms and are named after Greek letters. The five brain rhythms are shown in the table below:

Band  | Frequency [Hz]
Delta | 0.5-4
Theta | 4-8
Alpha | 8-13
Beta  | 13-22
Gamma | 22-30

The alpha rhythm is one of the principal components of the EEG and is an indicator of the state of alertness of the brain.

Figure: Examples of alpha, beta, theta and delta rhythms.
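As a small illustration of the band table above, the helper below maps a frequency to its rhythm name; the band edges follow the table and the function itself is only an example.

```python
# Illustrative helper mapping an EEG frequency to the rhythm band defined in
# the table above.

BANDS = [("Delta", 0.5, 4), ("Theta", 4, 8), ("Alpha", 8, 13),
         ("Beta", 13, 22), ("Gamma", 22, 30)]

def rhythm(freq_hz: float) -> str:
    """Return the name of the EEG band containing freq_hz (or 'unknown')."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "unknown"

if __name__ == "__main__":
    print(rhythm(10.0))   # Alpha, the alertness indicator mentioned above
```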

BMI Approaches
BMI researches have used the knowledge they have had of the human brain and the EEG in order to design a BMI. There are basically two different approaches that have been used:


The first is called the pattern recognition approach and is based on cognitive mental tasks. Here the subject concentrates on a few mental tasks; concentration on these tasks produces different EEG patterns, and the BMI (or the classifier in particular) can then be trained to classify these patterns. The second, called the operant conditioning approach, is based on self-regulation of the EEG response. Unlike in the pattern recognition approach, the BMI itself is not trained; instead it looks for particular changes (for example, a higher amplitude at a certain frequency) in the EEG signal. This usually requires a long training period, because the entire training load is on the user.

Feedback training is essential for the user to acquire control of his or her EEG response. However, feedback can speed up the learning process and improve performance.

BMI Components
A brain-machine interface (BMI), in its scientific interpretation, is a combination of several hardware and software components that tries to enable its user to communicate with a computer by intentionally altering his or her brain waves. The task of the hardware is to record the brainwaves, in the form of the EEG signal, of a human subject, and the software has to analyze that data. In other words, the hardware consists of an EEG machine and a number of electrodes scattered over the subject's skull. The EEG machine, which is connected to the electrodes via thin wires, records the brain-electrical activity of the subject, yielding a multi-dimensional (analog or digital) output. The values in each dimension (also called a channel) represent the relative differences in the voltage potential measured at two electrode sites. The software system has to read, digitize (in the case of an analog EEG machine), and preprocess the EEG data (separately for each channel), understand the subject's intentions, and generate appropriate output. To interpret the data, the stream of EEG values is cut into successive segments, transformed into a standardized representation, and processed with the help of a classifier. There are several different possibilities for the realization of a classifier; one approach, involving the use of an artificial neural network (ANN), has become the method of choice in recent years.
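The segmentation step described above can be sketched as follows. The window length, overlap and channel count are assumptions for illustration; a real BMI would choose these to match its paradigm.

```python
# Sketch of the software step described above: cutting the multi-channel EEG
# stream into successive fixed-length segments (epochs) that are then passed
# to a classifier.  Window length and overlap are illustrative choices.

import numpy as np

def segment_eeg(eeg: np.ndarray, fs: int, win_s: float = 1.0,
                step_s: float = 0.5) -> np.ndarray:
    """Split eeg (channels x samples) into overlapping windows.

    Returns an array of shape (n_windows, channels, window_samples).
    """
    win = int(win_s * fs)
    step = int(step_s * fs)
    n_ch, n_samp = eeg.shape
    starts = range(0, n_samp - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

if __name__ == "__main__":
    fake = np.random.randn(8, 5 * 256)      # 8 channels, 5 s at 256 Hz
    print(segment_eeg(fake, fs=256).shape)  # (9, 8, 256)
```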

A Schematic Representation Of BMI

The BMI consists of several components:
1. the implant device, or chronic multi-electrode array,
2. the signal recording and processing section,
3. an external device the subject uses to produce and control motion, and
4. a feedback section to the subject.
The first component is an array of microelectrodes implanted into the frontal and parietal lobes, areas of the brain involved in producing the multiple output commands that control complex muscle movements. This device records the action potentials of individual neurons and represents the neural signal using a rate code. The second component consists of spike detection algorithms, neural encoding and decoding systems, data acquisition and real-time processing systems, etc.; a high-performance DSP architecture is used for this purpose. The external device that the subject uses may be a robotic arm, a wheelchair, etc., depending upon the application. Feedback is an important factor in BMIs, particularly in those based on the operant conditioning approach, and is discussed in the Feedback section below.

Figure: A BMI based on the classification of two mental tasks. The user is thinking of task number 2 and the BCI classifies it correctly and provides feedback in the form of cursor movement.

Implant Device
The EEG is recorded with electrodes placed on the scalp. Electrodes are small plates which conduct electricity; they provide the electrical contact between the skin and the EEG recording
apparatus by transforming the ionic current on the skin into the electrical current in the wires. To improve the stability of the signal, the outer layer of the skin, called the stratum corneum, should be at least partly removed under the electrode. Electrolyte gel is applied between the electrode and the skin in order to provide good electrical contact.

Figure: An array of microelectrodes

Usually small metal-plate electrodes are used in EEG recording. Neural implants can be used to regulate electric signals in the brain and restore it to equilibrium. The implants must be monitored closely, because there is a potential for almost anything when introducing foreign signals into the brain. A few major problems must be addressed when developing neural implants. They must be made of, or insulated with, biocompatible material that the body will not reject and isolate. They must be able to move inside the skull with the brain without causing any damage to the brain. The implant must be chemically inert so that it does not interact with the hostile environment inside the human body. If these factors are not addressed, the implant will stop sending useful information after a short period of time. Designs range from simple single-wire electrodes with a number of different coatings to complex three-dimensional arrays of electrodes encased in insulating biomaterials. Implant rejection and isolation are being addressed by developing biocompatible materials to coat or encase the implant; one option is a Teflon coating that protects the implant from the body, another is a cell-resistant synthetic polymer such as polyvinyl alcohol. To keep the implant from moving in the brain it is necessary to have a flexible electrode that will move with the brain inside the skull, which can make the electrode difficult to implant. Dipping the microdevice in polyethylene glycol, which temporarily makes the device less flexible, solves this problem: once in contact with the tissue, this coating quickly dissolves, allowing easy implantation of a very flexible implant. Three-dimensional arrays of electrodes are also under development. These devices are constructed as a two-dimensional sheet and then bent to form a 3D array, for example using a polymer substrate fitted with metal leads. They are more difficult to implement, but give a much greater range of stimulation or sensing than simple electrodes.

Figure: Block diagram of the neurotrophic electrode for implantation in human patients

A microscopic glass cone contains a neurotrophic factor that induces neurites to grow into the cone, where they contact one of several gold recording wires. Neurites that are induced to grow into the glass cone make highly stable contacts with the recording wires. Signal conditioning and telemetry electronics are fully implanted under the skin of the scalp. An implanted transmitter (TX) sends signals to an external receiver (RX), which is connected to a computer.

Signal Processing Section


Multichannel Acquisition Systems
Electrodes interface directly to the non-inverting op-amp inputs on each channel. In this section, amplification, initial filtering of the EEG signal and possible artifact removal take place. A/D conversion is also performed, i.e. the analog EEG signal is digitized. The voltage gain improves the signal-to-noise ratio (SNR) by reducing the relevance of electrical noise incurred in later stages. Processed signals are time-division multiplexed and sampled.

Figure: A BMI under design.


Spike Detection
Real-time spike detection is an important requirement for developing brain-machine interfaces. Incorporating spike detection allows the BMI to transmit only the action potential waveforms and their respective arrival times instead of the sparse, raw signal in its entirety. This compression reduces the transmitted data rate per channel, thus increasing the number of channels that may be monitored simultaneously. Spike detection can further reduce the data rate if spike counts are transmitted instead of spike waveforms. Spike detection will also be a necessary first step for any future hardware implementation of an autonomous spike sorter. Figure 6 shows its implementation using an application-specific integrated circuit (ASIC) with limited computational resources. A low-power implantable ASIC for detecting and transmitting neural spikes will be an important building block for BMIs. A hardware realization of a spike detector in a wireless BMI must operate in real time, be fully autonomous, and function at realistic signal-to-noise ratios (SNRs). An implanted ASIC conditions the signals from extracellular neural electrodes, digitizes them, and then detects action potential spikes. The spike waveforms are transmitted across the skin to a BMI processor, which sorts the spikes and then generates the command signals for the prosthesis.
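A very simple spike detector in the spirit of the description above can be sketched as a threshold test against a robust noise estimate. The threshold factor, refractory period and sampling rate are assumptions; practical systems add spike sorting and run on the implanted ASIC.

```python
# Minimal sketch of threshold-based spike detection: flag samples whose
# amplitude exceeds a multiple of a robust noise estimate and report spike
# times instead of the raw waveform.  All parameters are assumptions.

import numpy as np

def detect_spikes(signal: np.ndarray, fs: int, k: float = 4.5,
                  refractory_ms: float = 1.0) -> list:
    """Return sample indices of detected spikes in a single-channel trace."""
    # Robust noise estimate from the median absolute deviation.
    noise = np.median(np.abs(signal)) / 0.6745
    thresh = k * noise
    above = np.where(np.abs(signal) > thresh)[0]
    spikes, last = [], -np.inf
    refractory = int(refractory_ms * fs / 1000)
    for idx in above:
        if idx - last > refractory:      # ignore re-crossings of the same spike
            spikes.append(int(idx))
            last = idx
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = rng.normal(0, 1, 30000)       # 1 s of noise at 30 kHz
    trace[[5000, 12000, 25000]] += 12     # three artificial spikes
    print(detect_spikes(trace, fs=30000))
```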

Telemetry is handled by a wearable computer. The host station accepts the data via either a wireless access point or its own dedicated radio card.

External Device
The classifier's output is the input for the device control, which simply transforms the classification into a particular action. The action can be, e.g., an up or down movement of a cursor on the feedback screen or the selection of a letter in a writing application. However, if the classification was "nothing" or "reject", no action is performed, although the user may be informed about the rejection. The external device is the device with which the subject produces and controls motion; examples are a robotic arm, a thought-controlled wheelchair, etc.

Feedback
Real-time feedback can dramatically improve the performance of a brain-machine interface. Feedback is needed for learning and for control. In the brain, feedback normally allows for two corrective mechanisms. One is the online control and correction of errors during the execution of a movement. The other is learning: the gradual adaptation of motor commands, which takes place after the execution of one or more movements. In BMIs based on the operant conditioning approach, feedback training is essential for the user to acquire control of his or her EEG response. BMIs based on the pattern recognition approach and using mental tasks do not strictly require feedback training; however, feedback can speed up the learning process and improve performance. Cursor control has been the most popular type of feedback in BMIs. Feedback can have many different effects, some of them beneficial and some harmful. Feedback used in BMIs has similarities with biofeedback, especially EEG biofeedback.

Signal Analysis
Feature extraction and classification of the EEG are dealt with in this section. In this stage, certain features are extracted from the preprocessed and digitized EEG signal. In the simplest form, a certain frequency range is selected and its amplitude relative to some reference level is measured. Typically the features (for example, the frequency content of the EEG signal) can be calculated using the Fast Fourier Transform (FFT). If the feature sets representing the mental tasks overlap each other too much, it is very difficult to classify the mental tasks, no matter how good a classifier is used. On the other hand, if the feature sets are distinct enough, any classifier can separate them. The features extracted in the previous stage are the input for the classifier. The classifier can be anything from a simple linear model to a complex nonlinear neural network that can be trained to recognize different mental tasks. Nowadays real-time processing is widely used. Real-time applications provide an action or an answer to an external event in a timely and predictable manner, so by using this type of system we can get output nearly at the same time as the system receives input.
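A minimal sketch of the feature-extraction step described above: estimate the power in each rhythm band with an FFT and use the resulting vector as classifier input. The sampling rate and the synthetic test epoch are assumptions for illustration.

```python
# Sketch of FFT-based band-power feature extraction.  Band edges follow the
# table in the EEG section; the sampling rate is an assumption.

import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 22), "gamma": (22, 30)}

def band_powers(epoch: np.ndarray, fs: int) -> dict:
    """Return average spectral power per band for a 1-D EEG epoch."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = float(spectrum[mask].mean()) if mask.any() else 0.0
    return feats

if __name__ == "__main__":
    fs = 256
    t = np.arange(fs) / fs
    # Synthetic epoch dominated by a 10 Hz (alpha) oscillation plus noise.
    epoch = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(fs)
    print(band_powers(epoch, fs))
```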

Advantages
1. Linking people via chip implants to super-intelligent machines seems to be a natural progression, creating, in effect, super humans.
2. It provides better living, more features, and more advancement in technology.


3. Linking up in this way would allow computer intelligence to be hooked more directly into the brain, allowing immediate access to the internet and enabling phenomenal math capabilities and computer memory.

Challenges
1. Connecting to the nervous system could lead to permanent brain damage, resulting in the loss of feeling or movement, or continual pain.
2. In the networked-brain condition, what will it mean to be human?
3. Virus attacks on the brain could cause ill effects.

Mind-machine interfaces will be available in the near future, and several methods hold promise for implanting information. Linking people via chip implants to super-intelligent machines seems to be a natural progression, creating, in effect, super humans. These cyborgs will be one step ahead of humans, and just as humans have always valued themselves above other forms of life, it is likely that cyborgs will look down on humans who have yet to evolve. Technology moves at light speed now; in that accelerated future, today's hot neural interface could become tomorrow's neuro-trash. Thought communication will place telephones firmly in the history books.

Applications
The BMI technologies of today can be broken into three major areas:
1. Auditory and visual prostheses: cochlear implants, brainstem implants, synthetic vision, and the artificial silicon retina.
2. Functional neuromuscular stimulation (FNS): FNS systems are in experimental use in cases where spinal cord damage or a stroke has severed the link between the brain and the peripheral nervous system; patients can use the brain to control their own limbs with this system.
3. Prosthetic limb control: thought-controlled motorized wheelchairs, thought-controlled prosthetic arms for amputees, and various other neuroprosthetic devices.
Other applications include the "mental mouse" in technology products (e.g., a mobile phone attachment that allows a physically challenged user to dial a number without touching the phone or speaking into it), systems that let a user speak without saying a word, and the construction of unmanned systems for space missions and defense areas; NASA and DARPA have used this technology effectively. Communication over the internet can also be modified.

Conclusion
Cultures may have diverse ethics, but regardless, individual liberties and human life are always valued over and above machines. What happens when humans merge with machines? The question is not what the computer will be like in the future, but instead, what will we be like? What kind of people are we becoming? BMIs will have the ability to give people back their vision and hearing. They will also change the way a person looks at the world. Someday these devices might be more common than keyboards. Is someone with a synthetic eye less a person than someone without? Shall we process signals like ultraviolet, X-rays, or ultrasound as robots do? These questions will not be answered in the near future, but at some time they will have to be answered. What an interesting day that will be.

References
Websites:
1. www.betterhumans.com
2. www.popsci.com
3. www.ele.uri.edu
4. www.duke.edu
5. www.elecdesign.com
6. www.brainlab.org
7. www.howstuffworks.com
Books:
1. Handbook of Biomedical Instrumentation by R.S. Khandpur

Future Expansion
A new thought-communication device might soon help severely disabled people regain their independence by allowing them to steer a wheelchair with their mind.


STREAM 4 MECHANICAL ENGINEERING

COMPRESSED AIR ENGINES


Vivek S. Nath & Saju Joseph
S6 Mechanical Engineering, Mohandas College of Engineering and Technology

Abstract
A compressed air engine is primarily an engine that uses the energy stored in compressed air to do work. The expansion of compressed air, stored at high pressure in a storage tank, takes place in the engine cylinder to move a piston and do mechanical work. The main application of this engine is in the automobile industry, where the potential energy of the compressed air is converted into the kinetic energy of the linear motion of the piston and the rotary motion of the crank and crankshaft. This motion is transferred to the wheels using the usual transmission mechanisms. As the working fluid is compressed air, no fuel is required other than some electrical energy to compress the air in an electric compressor. The engine is free of emissions at the tailpipe, as the only exhaust is air, and is environmentally friendly. Even though it is below its counterparts in power, comfort and performance, its supporters believe that improved versions of this engine will come to dominate the automobile industry in the future.

Introduction
A compressed-air engine is a pneumatic actuator that creates useful work by expanding compressed air and converting its potential energy into motion. A pneumatic actuator is a device that converts energy into motion; the motion can be rotary or linear, depending on the type of actuator. Compressed air engines (CAEs) are fueled by compressed air, which is stored in a tank at high pressure, such as 30 MPa. The difference between a compressed air engine and an IC engine is that instead of mixing fuel with air and burning it to drive pistons with hot expanding gases, a compressed air engine uses the expansion of previously compressed air to drive its pistons. This technology has been used by companies such as MDI (Motor Development International) to develop cars and other vehicles running on compressed air engines.

There is a cylinder with a reciprocating piston, and a means to supply air from the tank to the cylinder to drive the piston. A crankshaft is coupled to the piston and is driven by the reciprocating motion of the piston; a suitable mechanical arrangement coupled to the crankshaft supplies power to the compressor, and there is also an independent means of powering the compressor. The means for supplying air to the cylinder comprises a cylinder head, an auxiliary chamber in the cylinder head, conduit means for connecting the tank to the auxiliary chamber, and an input valve that periodically admits air from the auxiliary chamber into the chamber formed by the cylinder head and the top of the piston, the periodicity of admission being synchronized with the rotation of the crankshaft. There is also an inlet valve to allow the entry of air from the surroundings, and carbon filters to eliminate dirt, dust, humidity, and other urban air impurities that could hamper the engine's performance. An exhaust valve lets the expanded air out. A lubricant compartment below the engine cylinder provides suitable lubrication for the engine. The basic parts of a compressed air engine are illustrated in the figure given below.

Parts
A basic compressed air engine primarily consists of a source of air under high pressure, a means for supplying air from the source to the engine cylinder, a cylinder system, and an exhaust system. There are also auxiliary parts, like the heater, which improve the power output and efficiency of the engine. The source is a storage tank where compressed air at pressures as high as 30 MPa is stored. The storage tank is likely to be made of carbon fiber in order to reduce its weight while achieving the necessary strength.


The compressed air engine works in four different modes according to requirements:
Mode A: Operating with compressed air from the air tank only, in town, at less than 30 kph. In this mode, high-pressure air from the storage tank expands in the cylinder and moves the piston; the linear motion of the piston is converted into the rotary motion of the crankshaft.
Mode B: Operating with compressed air from the air tank only, which is heated by the heater to expand its volume before entering the engine; this increases the power output.
Mode C: Operating with air from the intake, which is heated to expand its volume before entering the engine. This is used on the highway over 35 mph.
Mode D: Operating as in Mode C but also refilling the air tank while running.

Working
The compressed air from the storage tank is supplied to the cylinder system by the supply system. In the cylinder system the air first enters an auxiliary chamber, from where it is periodically admitted to the main cylinder. The auxiliary chamber produces some power in addition to improving the overall efficiency of the engine. The compressed air expanding in the cylinder moves the piston down; when the piston moves up, the exhaust valve opens and the expanded air is pushed out. In more evolved systems, the top portion of the main cylinder doubles as the compressor. The linear up-and-down motion of the piston is converted to the rotary motion of the crank and crankshaft, which is transferred to the wheels by the transmission.
Parked: The system automatically shuts down the engine when the car is stationary.
At lower speeds: Since the compressed air vehicle is running exclusively on compressed air, it emits only air. The air expelled from the tailpipe is actually cleaner than the air used to fill the tank, because before compression the air is run through carbon filters to eliminate dirt, dust, humidity, and other urban air impurities that could hamper the engine's performance.
At higher speeds: At speeds over 35 mph the compressed air vehicle uses small amounts of fuel (either gasoline, propane, ethanol or biofuel) to heat the air inside a heating chamber as it enters the engine (again, to expand its volume before entering the engine). This process produces emissions of only 0.141 lbs of CO2 per mile, which is up to four times less than the average vehicle and more than two times less than the cleanest vehicle available today.
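The following back-of-the-envelope sketch is not from the paper; it only illustrates the energy bookkeeping behind the working principle. It estimates the ideal (isothermal) work stored in a tank, W = p1·V1·ln(p1/p0); the tank volume and pressure are assumed values in the range quoted for air cars, and real engines recover considerably less.

```python
# Back-of-the-envelope sketch (not from the paper): the maximum work
# recoverable from a compressed-air tank if expansion were ideal and
# isothermal is W = p1*V1*ln(p1/p0).  The tank size and pressure below
# are illustrative assumptions.

import math

def isothermal_energy(p1_pa: float, v1_m3: float, p0_pa: float = 101_325.0) -> float:
    """Ideal isothermal expansion work of gas stored at p1 in volume v1 [J]."""
    return p1_pa * v1_m3 * math.log(p1_pa / p0_pa)

if __name__ == "__main__":
    w = isothermal_energy(30e6, 0.3)          # assumed 300 L tank at 30 MPa
    print(f"{w/1e6:.1f} MJ  (~{w/3.6e6:.1f} kWh)")
    # Real engines recover much less, because expansion cools the air unless
    # it is re-heated, as noted in the Disadvantages section.
```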

Advantages
The principal advantages of an air-powered vehicle are:
1) Refueling can be done at home using an air compressor or at service stations.
2) Reduced vehicle weight is the principal efficiency factor of compressed-air cars. Furthermore, they are mechanically simpler than traditional vehicles, as many conventional engine parts may be omitted. Some plans include motors built into the hubs of each wheel, thereby removing the need for a transmission, drive axles and differentials. A four-passenger vehicle weighing less than 800 pounds (360 kg) is a reasonable design goal.
3) One manufacturer promises a range of 200 kilometers by the end of the year at a cost of 1.50 per fill-up.
4) Compressed air engines reduce the initial cost of vehicle production by about 20%, because there is no need to build a cooling system, spark plugs, starter motor, or mufflers.
5) Expansion of the compressed air lowers its temperature; this may be exploited for air conditioning.
6) Compressed-air vehicles emit no pollutants.
7) The technology is simple to achieve with low-tech materials. This would mean that developing countries,
and rapidly growing countries like China and India, could easily implement the technology.
8) The price of fueling air-powered vehicles may be significantly cheaper than current fuels. Some estimates project $3.00 of electricity for filling a tank.
9) Reduction or elimination of hazardous chemicals such as gasoline or battery acids/metals.

Uses
Tools: Impact wrenches, drills, die grinders, dental drills and other pneumatic tools use a variety of air engines or motors, including vane-type pumps, turbines and pistons.
Torpedoes: Most successful early forms of self-propelled torpedoes used high-pressure compressed air, although this was superseded by internal or external combustion engines, steam engines, or electric motors.
Railways: Compressed air engines were used in trams and shunters, and eventually found a successful niche in mining locomotives, although they were eventually replaced by electric trains underground. Over the years designs increased in complexity, resulting in a triple-expansion engine with air-to-air reheaters between each stage.
Aircraft: Transport-category airplanes, such as commercial airliners, use compressed air starters to start the main engines. The air is supplied by the load compressor of the aircraft's auxiliary power unit, or by ground equipment.
Automotive: There is currently some interest in developing air cars. Several engines have been proposed for these, although none have yet demonstrated the performance and long life needed for personal transport.

Disadvantages
1) The principal disadvantage is the indirect use of energy. Energy is used to compress air, which in turn provides the energy to run the motor. Any conversion of energy between forms results in losses; for compressed-air cars, energy is lost when electrical energy is converted to compressed air.
2) When air expands in the engine it cools significantly and must be heated using a heat exchanger; the heating is necessary in order to obtain maximum efficiency. While the heat exchanger heats the stored air, the device itself gets very cold and may ice up in colder climates.
3) Refueling the storage tank using a home or low-end conventional air compressor may take as long as 4 hours, though specialized equipment at service stations may fill the tanks in only 3 minutes. Early tests have demonstrated the limited storage capacity of the tanks; the only published test of a vehicle running on compressed air alone was limited to a range of 7.22 km.

Conclusion
With gas prices soaring, as they have over the past two years, it might not be long before many motorists turn to vehicles powered by alternative fuels. Although airpowered vehicles are still behind their gasoline counterparts when it comes to power and performance, they cost less to operate and are arguably more environmentally friendly, which makes them attractive as the future of highway transportation.

References
Automobile Technology by John Hawkins.
Advanced Air Engine Technology by Guy Negre.
Air Engines by Franklin Newett.
New Age Technologies by Wivian Hurly.


Development Of Basic Agglomerated Flux For Submerged Arc Welding


Deviprakash K J & Anand Krishnan O K
Department of Mechanical Engineering Mohandas College of Engineering and Technology

Abstract
A significant percentage of the flux used in submerged arc welding gets converted into very fine particles, termed flux dust, due to transportation and handling. If these very fine particles are not removed from the flux before welding, they may result in defects like surface pitting and porosity. At the same time, dumping this flux dust creates pollution. Therefore, to reduce the cost of welding and pollution, the present study investigates the feasibility of using a developed basic agglomerated flux, produced by utilizing the wasted flux dust, in place of the parent commercial basic flux. The chemical composition, tensile strength, toughness and radiographic soundness of the all-weld metal prepared using the developed basic flux and the parent commercially available basic flux were compared. These properties for the all-weld metal prepared using the developed basic flux were found to be in the same range as those of the weld metal prepared from the parent basic flux.
Keywords: Submerged arc welding, Tensile properties, Toughness

Introduction

Submerged arc welding (SAW) produces coalescence of metals by heating them with an arc between a bare metal electrode and the work. The arc and molten metal are submerged in a blanket of granular fusible flux on the work. Submerged arc welding contributes approximately 10% of total welding. It is one of the most widely used processes for fabrication of thick plates, pipes, pressure vessels, rail tanks, ships, heat exchangers, etc. The submerged arc welding process is characterized by a higher metal deposition rate, deep weld penetration, high-speed welding of plates at over 2.5 m/min and minimum emission of welding fume or arc light [1]. Deposition rates approaching 45 kg/h have been reported [2]. This process is commercially suitable for welding of low carbon steel, high strength low alloy steel, nickel base alloys and stainless steel [3]. Shielding is obtained from a blanket of granular flux, which is laid directly over the weld area. Flux plays an important role in deciding the weld metal quality [4] and may account for 50% of the total welding cost in submerged arc welding. It influences the weld metal physically, chemically and metallurgically. Physically, it influences the bead geometry and shape relationships, which in turn affect the load carrying capacity of the weld metal [5]. Chemically, it affects the chemistry of the weld metal, which in turn influences its mechanical properties [6]. Metallurgically, it influences the microstructure and hence again affects the mechanical properties of the weld metal [7]. It has been reported that agglomerated fluxes produce weld deposits of better ductility and impact strength as compared with fused fluxes [8]. Alloy transfer efficiency is also better in the case of agglomerated fluxes. These fluxes are hygroscopic in nature, and therefore baking is essential for good weld metal integrity [9]. Prashad and Dwivedi [10] investigated the influence of submerged arc welding process parameters on the microstructure, hardness and toughness of HSLA steel weld joints. Datta and Bandyopadhyay [11] recycled slag generated during conventional submerged arc welding (SAW) by mixing varying percentages of crushed slag with fresh flux for use in subsequent runs. In the work of Datta [12], the Taguchi method in combination with grey relational analysis was applied to solve a multiple-criteria (objective) optimization problem in submerged arc welding. No work so far has been performed to develop a flux using waste flux dust. Approximately 10-15% of the flux used in submerged arc welding gets converted into very fine particles, termed flux dust, due to transportation and handling, and dumping this flux dust creates pollution. In the present study an attempt has been made to investigate the influence of a developed basic flux, prepared by utilizing wasted flux dust, on the tensile properties and toughness of welded joints. The chemical composition and properties, viz. tensile strength and toughness, of the all-weld metal using the developed basic flux and a commercially available flux of the same type were compared. Radiographic examinations of all the welded joints were conducted to check weld metal integrity. The developed flux prepared from the waste flux dust can therefore be used without any compromise in the mechanical properties and quality of the welded joint, reducing the cost of welding and pollution.

Experimental Work
In the present study, one cost-effective agglomerated basic flux was developed using the flux dust of the parent flux, with potassium silicate added as binder and aluminum powder as deoxidizer. The solution of potassium silicate binder (90 ml in 550 grams of flux dust) was added to the dry mixed powder of
the flux dust and aluminum powder (4% of the weight of the flux dust); the mixture was wet mixed for 10 minutes and then passed through a 10-mesh screen to form small pellets. Potassium silicate was added as binder because of its better arc stability. The pellets of the flux were dried in air for 24 hours and then baked in a muffle furnace between 650-750 °C for nearly 3 hours. After cooling, the pellets were crushed and subsequently sieved. After sieving, the fluxes were kept in airtight bags and baked again at 300 °C before welding. A constant-voltage DC submerged arc welding power source was used for preparing the joints of mild steel plates of dimensions 300 x 125 x 25 mm using a 4 mm diameter wire electrode of grade C (AWS-5.17-80 EH-14). DCEP polarity was used throughout the experimentation. The plates were cleaned mechanically and chemically to remove rust, oil and grease from the fusion faces before welding. The surfaces of the backing plates were also made free from rust and scale. Backing plates 12 mm thick were tack welded to the base plates. The plates were preset so that they would remain approximately flat after the welding operation was complete. The inter-pass temperature was maintained in the range 200-225 °C. Four-layer-high weld pads were made for the developed basic agglomerated flux and the parent flux as per the AWS AS.2390 standard with the same welding conditions. The chemical compositions of the all-weld metal were evaluated using a spectrometer. Two butt weld joints were made with mild steel as the base plate and backing strip. The welded assemblies were subjected to radiographic examination to ascertain weld integrity prior to mechanical testing; the backing plate was removed by machining before conducting the radiographic examination. Three all-weld-metal tensile test pieces were cut from each welded plate and machined. The tensile tests were carried out on a universal testing machine (Make: FIE, India). Scanning electron microscopy of the fractured surfaces of the tensile test specimens was carried out at 20 kV and 1500× on a microscope (Make: JEOL Japan, JSM-6100). A Charpy V-notch impact test was carried out to evaluate the toughness of the welded joints at 0 °C. Charpy impact tests were performed on standard notched specimens obtained from the welded joint. The notch was positioned in the centre of the weld and was cut in the face of the test specimen perpendicular to the surface of the plates. Five all-weld-metal impact test samples were cut from the welded joints and fine polished on a surface grinder. Among the five values of impact strength, the lowest and highest values were discarded and the average of the remaining three values was taken for the evaluation of the impact strength of the groove welds. The Charpy impact test results obtained from the weld metal showed rather good repeatability. The same procedure was applied to the developed flux and the commercially available parent flux to investigate the compatibility of the developed flux with the commercial flux.

The details of the joint are shown in Fig. 1. The chemical compositions of the mild steel base plate and the electrode are shown in Table 1.
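For illustration only, the batch proportions stated above (90 ml of potassium silicate per 550 g of flux dust, plus aluminium powder at 4% of the flux-dust weight) can be scaled to an arbitrary batch size with a small helper; the 1 kg batch in the example is an arbitrary choice.

```python
# Small helper, for illustration, scaling the batch proportions stated in the
# experimental work (90 ml binder per 550 g flux dust, Al powder at 4% of the
# flux-dust weight) to an arbitrary batch size.

def batch_proportions(flux_dust_g: float) -> dict:
    """Return binder volume (ml) and Al powder mass (g) for a given batch."""
    binder_ml = 90.0 * flux_dust_g / 550.0
    al_powder_g = 0.04 * flux_dust_g
    return {"flux_dust_g": flux_dust_g,
            "potassium_silicate_ml": round(binder_ml, 1),
            "aluminium_powder_g": round(al_powder_g, 1)}

if __name__ == "__main__":
    print(batch_proportions(1000))   # e.g. a 1 kg batch of flux dust
```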

Results and Discussions


The flux behavior of the developed basic flux was found to be satisfactory. The bead surface appearance was observed to be excellent and free from any visual defects, and is comparable with that of the parent flux.

As shown in Table 3, the compositions of the all-weld metal of the developed and parent fluxes are found to be in the same range. However, the manganese content of the weld metal laid using the developed flux is slightly lower than that of the weld metal laid using the parent flux. The silicon content of the weld metal laid using the
developed flux is higher than that of the weld metal laid using the parent flux. The carbon equivalent was computed from the following equation:
Ceq = C + Mn/6 + Si/24 + Ni/40 + Cr/5 + Mo/4 + V/4
where C, Mn, Si, Ni, Cr, Mo and V represent the respective metallic contents, expressed as percentages. The additional potassium silicate binder, which was added for agglomeration of the flux dust, contains silicon dioxide. The silicon dioxide dissociates into oxygen and silicon due to the heat generated during welding, introducing additional oxygen and silicon into the weld pool [15]. The additional oxygen results in oxidation of manganese and hence the lower manganese content in the weld metal laid using the developed flux compared to that laid using the parent flux. The additional silicon results in an increase of silicon content and hence the higher silicon content in the weld metal laid using the developed flux compared to that laid using the parent flux. The radiographs of the welded joints prepared using the developed flux were found to be acceptable as per 9.252 of the AWS radiographic standard for dynamic loading. The average values of the tensile properties (yield strength, ultimate strength, elongation percentage and area reduction percentage) and the average impact strength for the developed flux as well as the parent flux are shown in Table 4 and Table 5 respectively.
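The carbon-equivalent relation above translates directly into a small helper function; the sample composition passed to it below is invented purely to show the call and is not taken from the paper's tables.

```python
# Direct transcription of the carbon-equivalent formula given above.

def carbon_equivalent(c, mn, si, ni, cr, mo, v):
    """Ceq = C + Mn/6 + Si/24 + Ni/40 + Cr/5 + Mo/4 + V/4 (all in wt%)."""
    return c + mn / 6 + si / 24 + ni / 40 + cr / 5 + mo / 4 + v / 4

if __name__ == "__main__":
    # Hypothetical all-weld-metal analysis (wt%) for illustration only.
    print(round(carbon_equivalent(c=0.08, mn=1.4, si=0.35, ni=0.02,
                                  cr=0.03, mo=0.01, v=0.01), 3))
```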

Fig. 2 and Fig. 3 show the scanning electron micrographs of the fractured tensile test specimens of the welds laid at the same parameters using the developed as well as the parent basic flux. The micrographs of both specimens show a ductile mode of fracture.

Conclusion
The flux behavior of the developed flux was found to be satisfactory. The weld bead surface appearance obtained with the developed flux was observed to be excellent, free from any visual defects, and comparable with that of the parent commercial flux. The welded joint prepared using the developed flux was found to be radiographically sound. The chemical composition of the all-weld metal laid using the developed flux is comparable with that of the all-weld metal laid using the respective parent basic flux. The tensile strength and impact strength of the all-weld metal laid using the parent flux are slightly higher than those of the all-weld metal laid using the developed flux. Therefore the flux dust can be reused, after being developed into an agglomerated basic flux, without compromising quality. Thus the present study utilizes the concept of waste to wealth.

References
1. Kalpakjian, Serope, and Steven Schmid. Manufacturing Engineering and Technology, 5th ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2006.
2. Jeffus, Larry. Welding: Principles and Applications. Florence, KY: Thomson Delmar Learning, 2002.


Magnetic Refrigeration
Feby Philip Abraham & Ananthu Sivan
S4, Department of Mechanical Engineering, Mohandas College of Engineering and Technology

Abstract
A cooling system consists of a device or devices used to lower the temperature of a defined region in space through some cooling process. Currently, the most popular commercial cooling agent is the refrigerant. A refrigerant, in its general sense, is what makes a refrigerator cool food, and it also allows air conditioners and other appliances to perform their respective duties. A typical consumer refrigerator lowers temperatures by running a gas compression-expansion cycle to cool a refrigerant fluid which has been warmed by the contents of the refrigerator (i.e. the food inside). Typical refrigerants used in refrigerators include ammonia, methyl chloride, and sulfur dioxide, all of which are toxic. To mitigate the risks associated with toxic refrigerants, a collaboration by Frigidaire, General Motors, and DuPont led to the development of Freon (or R12), a chlorofluorocarbon. Freon is a non-flammable and non-toxic, but ozone-depleting, gas. Because of the damaging effects of Freon on the ozone layer, there has been much interest in finding other refrigerants. The popular refrigerant R134a (called Suva by DuPont) is currently used in most refrigerators, but American and international laws are beginning to phase out this refrigerant as well. The future seems ripe for new refrigeration technology. This has led the world to look for a better source of refrigeration, and magnetic refrigeration is certainly one of the best options if we consider the environmental aspects. There are two attractive reasons why magnetic refrigeration research continues. While a magnetic refrigerator would cost more than today's refrigerator at purchase, it could save well over 20% of the energy used by current expansion-compression refrigerators, drastically reducing operating costs. The other attraction of magnetic refrigeration is the ecological impact a magnetic refrigerator would bring should it supplant current technologies. Not only would ozone-depleting refrigerant concerns be calmed, but the energy savings themselves would lessen the strain our household appliances put on our environment.

Introduction

Magnetic refrigeration is a cooling technology based on the magnetocaloric effect. This technique can be used to attain extremely low temperatures (well below 1 Kelvin), as well as the ranges used in common refrigerators, depending on the design of the system. The fundamental principle was suggested by Debye (1926) and Giauque (1927), and the first working magnetic refrigerators were constructed by several groups beginning in 1933. Magnetic refrigeration was the first method developed for cooling below about 0.3 Kelvin (a temperature attainable by 3He/4He dilution refrigeration).

The Magnetocaloric Effect

The magnetocaloric effect (MCE, from magnet and calorie) is a magneto-thermodynamic phenomenon in which a reversible change in temperature of a suitable material is caused by exposing the material to a changing magnetic field. This is also known as adiabatic demagnetization by low-temperature physicists, due to the application of the process specifically to effect a temperature drop. In that part of the overall refrigeration process, a decrease in the strength of an externally applied magnetic field allows the magnetic domains of a chosen (magnetocaloric) material to become disoriented from the magnetic field by the agitating action of the thermal energy (phonons) present in the material. If the material is isolated so that no energy is allowed to migrate into the material during this time (i.e. an adiabatic process), the temperature drops as the domains absorb the thermal energy to perform their reorientation. The randomization of the domains occurs in a similar fashion to the randomization at the Curie temperature, except that magnetic dipoles overcome a decreasing external magnetic field while energy remains constant, instead of magnetic domains being disrupted by internal ferromagnetism as energy is added.

One of the most notable examples of the magnetocaloric effect is in the chemical element gadolinium and some of its alloys. Gadolinium's temperature is observed to increase when it enters certain magnetic fields. When it leaves the magnetic field, the temperature returns to normal. The effect is considerably stronger for the gadolinium alloy Gd5(Si2Ge2). Praseodymium alloyed with nickel (PrNi5) has such a strong magnetocaloric effect that it has allowed scientists to approach within one thousandth of a degree of absolute zero.
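The paper states the effect qualitatively; as a hedged supplement (standard thermodynamics, not taken from the text), the magnetocaloric effect is usually quantified by the isothermal magnetic entropy change and the adiabatic temperature change obtained from the Maxwell relation between magnetization and entropy (the placement of \mu_0 depends on the unit convention used):

\[
\Delta S_M(T, H_{\max}) = \mu_0 \int_0^{H_{\max}} \left(\frac{\partial M}{\partial T}\right)_H dH ,
\qquad
\Delta T_{ad}(T, H_{\max}) = -\,\mu_0 \int_0^{H_{\max}} \frac{T}{C(T, H)} \left(\frac{\partial M}{\partial T}\right)_H dH ,
\]

where C(T, H) is the heat capacity at constant field. For a ferromagnet near its Curie point, (\partial M / \partial T)_H is negative, so applying a field lowers the magnetic entropy and raises the temperature, which is the behaviour described above for gadolinium.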

Construction

Components required for construction:

Magnets: The magnets provide the magnetic field to the material so that it can lose heat to the surroundings or gain heat from the space to be cooled.

Hot heat exchanger: The hot heat exchanger absorbs heat from the material used and gives it off to the surroundings. It makes the transfer of heat much more effective.

Cold heat exchanger: The cold heat exchanger absorbs heat from the space to be cooled and gives it to the magnetic material. It helps to make the absorption of heat effective.

Drive: The drive provides the right rotation to the magnetocaloric wheel, so that heat flows in the desired direction.

Magnetocaloric wheel: The wheel forms the structure of the whole device and joins the two magnets so that they work properly.

Working

THERMODYNAMIC CYCLE

Magnetic Refrigeration Cycle


The cycle is performed as a refrigeration cycle, analogous to the Carnot cycle, and can be described at a starting point whereby the chosen working substance is introduced into a magnetic field (i.e. the magnetic flux density is increased). The working material is the refrigerant, and starts in thermal equilibrium with the refrigerated environment.


Adiabatic magnetization: The substance is placed in an insulated environment. The increasing external magnetic field (+H) causes the magnetic dipoles of the atoms to align, thereby decreasing the material's magnetic entropy and heat capacity. Since overall energy is not lost (yet) and therefore total entropy is not reduced (according to thermodynamic laws), the net result is that the substance heats up (T + ΔTad).

Isomagnetic enthalpic transfer: This added heat can then be removed by a fluid such as water or helium (-Q). The magnetic field is held constant to prevent the dipoles from reabsorbing the heat. Once sufficiently cooled, the magnetocaloric material and the coolant are separated (H = 0).

Adiabatic demagnetization: The substance is returned to another adiabatic (insulated) condition so the total entropy remains constant. However, this time the magnetic field is decreased, the thermal energy causes the domains to overcome the field, and thus the sample cools (i.e. an adiabatic temperature change). Energy (and entropy) transfers from thermal entropy to magnetic entropy (disorder of the magnetic dipoles).

Isomagnetic entropic transfer: The magnetic field is held constant to prevent the material from heating back up. The material is placed in thermal contact with the environment being refrigerated. Because the working material is cooler than the refrigerated environment (by design), heat energy migrates into the working material (+Q). Once the refrigerant and the refrigerated environment are in thermal equilibrium, the cycle begins anew.
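A compact way to see why the two adiabatic steps change the temperature (a schematic relation added here, not taken from the paper) is to split the total entropy into a lattice/electronic part and a magnetic part; in a reversible adiabatic step their sum is fixed:

\[
S_{\mathrm{lat}}(T_1) + S_M(T_1, H) = S_{\mathrm{lat}}(T_0) + S_M(T_0, 0) .
\]

Switching the field on lowers S_M, so S_lat (and hence the temperature) must rise to keep the sum constant, which is step 1; switching the field off while the material is isolated reverses the argument and the sample cools, which is step 3.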

Applied Technique

The basic operating principle of an ADR (adiabatic demagnetization refrigerator) is the use of a strong magnetic field to control the entropy of a sample of material, often called the "refrigerant". The magnetic field constrains the orientation of magnetic dipoles in the refrigerant. The stronger the magnetic field, the more aligned the dipoles are, and this corresponds to lower entropy and heat capacity because the material has (effectively) lost some of its internal degrees of freedom. If the refrigerant is kept at a constant temperature through thermal contact with a heat sink (usually liquid helium) while the magnetic field is switched on, the refrigerant must lose some energy because it is equilibrated with the heat sink. When the magnetic field is subsequently switched off, the heat capacity of the refrigerant rises again because the degrees of freedom associated with orientation of the dipoles are once again liberated, pulling their share of equipartitioned energy from the motion of the molecules, thereby lowering the overall temperature of a system with decreased energy. Since the system is now insulated when the magnetic field is switched off, the process is adiabatic, i.e. the system can no longer exchange energy with its surroundings (the heat sink), and its temperature decreases below its initial value, that of the heat sink.

The operation of a standard ADR proceeds roughly as follows. First, a strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. The heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off, increasing the heat capacity of the refrigerant and thus decreasing its temperature below the temperature of the helium heat sink. In practice, the magnetic field is decreased slowly in order to provide continuous cooling and keep the sample at an approximately constant low temperature. Once the field falls to zero (or to some low limiting value determined by the properties of the refrigerant), the cooling power of the ADR vanishes, and heat leaks will cause the refrigerant to warm up.

Working Materials

The magnetocaloric effect is an intrinsic property of a magnetic solid. This thermal response of a solid to the application or removal of magnetic fields is maximized when the solid is near its magnetic ordering temperature. The magnitudes of the magnetic entropy and the adiabatic temperature changes are strongly dependent on the magnetic ordering process: the magnitude is generally small in antiferromagnets, ferrimagnets and spin-glass systems; it can be substantial for normal ferromagnets which undergo a second-order magnetic transition; and it is generally the largest for a ferromagnet which undergoes a first-order magnetic transition. Crystalline electric fields and pressure can also have a substantial influence on magnetic entropy and adiabatic temperature changes. Currently, alloys of gadolinium producing 3 to 4 K of temperature change per tesla of change in magnetic field can be used for magnetic refrigeration or power generation purposes. Recent research on materials that exhibit a giant entropy change showed that Gd5(SixGe1-x)4, La(FexSi1-x)13Hx and MnFeP1-xAsx alloys, for example, are some of the most promising substitutes for gadolinium and its alloys (GdDy, GdTy, etc.). These materials are called giant magnetocaloric effect (GMCE) materials. Gadolinium and its alloys are the best materials available today for magnetic refrigeration near room temperature, since they undergo second-order phase transitions which have no magnetic or thermal hysteresis involved.

Paramagnetic Salts

The originally suggested refrigerant was a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms. In a paramagnetic salt ADR, the heat sink is usually provided by a pumped 4He (about 1.2 K) or 3He (about 0.3 K) cryostat. An easily attainable 1 tesla magnetic field is generally required for the initial magnetization. The minimum temperature attainable is determined by the self-magnetization tendencies of the chosen refrigerant salt, but temperatures from 1 to 100 mK are accessible. Dilution refrigerators had for many years supplanted paramagnetic salt ADRs, but interest in space-based and simple-to-use lab ADRs has recently revived the field. Eventually paramagnetic salts become either diamagnetic or ferromagnetic, limiting the lowest temperature which can be reached using this method.

Nuclear Demagnetisation

One variant of adiabatic demagnetization that continues to find substantial research application is nuclear demagnetization refrigeration (NDR). NDR follows the same principle described above, but in this case the cooling power arises from the magnetic dipoles of the nuclei of the refrigerant atoms, rather than their electron configurations. Since these dipoles are of much smaller magnitude, they are less prone to self-alignment and have lower intrinsic minimum fields. This allows NDR to cool the nuclear spin system to very low temperatures, often 1 µK or below. Unfortunately, the small magnitudes of nuclear magnetic dipoles also make them less inclined to align to external fields. Magnetic fields of 3 teslas or greater are often needed for the initial magnetization step of NDR. In NDR systems, the initial heat sink must sit at very low temperatures (10 to 100 mK). This precooling is often provided by the mixing chamber of a dilution refrigerator or a paramagnetic salt ADR stage.

Commercial Development

This refrigeration, once proven viable, could be used in any possible application where cooling, heating or power generation is used today. Since it is only at an early stage of development, there are several technical and efficiency issues that should be analyzed. The magnetocaloric refrigeration system is composed of pumps, electric motors, secondary fluids, heat exchangers of different types, magnets and magnetic materials. These processes are greatly affected by irreversibilities and should be adequately considered. Appliances using this method could have a smaller environmental impact if the method is perfected and replaces hydrofluorocarbon (HFC) refrigerators (some refrigerators still use HFCs, which have a considerable greenhouse effect). At present, however, the superconducting magnets that are used in the process have to themselves be cooled down to the temperature of liquid nitrogen, or with even colder, and relatively expensive, liquid helium. Considering that these fluids have boiling points of 77.36 K and 4.22 K respectively, the technology is clearly not yet cost-efficient for home appliances, but only for experimental, laboratory and industrial use.

Recent research on materials that exhibit a large entropy change showed that Gd5(SixGe1-x)4, La(FexSi1-x)13Hx and MnFeP1-xAsx alloys are some of the most promising substitutes for gadolinium and its alloys (GdDy, GdTy, etc.). Gadolinium and its alloys are the best materials available today for magnetic refrigeration near room temperature. There are still some thermal and magnetic hysteresis problems to be solved for them to become really useful, and scientists are working hard to achieve this goal; thermal hysteresis problems may be addressed by adding ferrite (5:4). A good review on magnetocaloric materials, entitled "Recent developments in magnetocaloric materials", has been written by Dr. Gschneidner et al. A recent development has succeeded in using commercial-grade materials and permanent magnets at room temperature to construct a magnetocaloric refrigerator which promises wide use. This technique has been used for many years in cryogenic systems for producing further cooling in systems already cooled to temperatures of 4 Kelvin and lower. In England, a company called Cambridge Magnetic Refrigeration produces cryogenic systems based on the magnetocaloric effect. On 20 August 2007, the Riso National Laboratory at the Technical University of Denmark claimed to have reached a milestone in its magnetic cooling research when it reported a temperature span of 8.7 °C. The laboratory hopes to introduce the first commercial applications of the technology by 2010.

Current And Future Uses

There are still some thermal and magnetic hysteresis problems to be solved for the first-order phase transition materials that exhibit the GMCE to become really useful; this is a subject of current research. A useful review on magnetocaloric materials is entitled "Recent developments in magnetocaloric materials" and written by Dr. Gschneidner et al. The magnetocaloric effect is currently being explored to produce better refrigeration techniques, especially for use in spacecraft. The technique is already used to achieve cryogenic temperatures in the laboratory setting (below 10 K). As an object displaying the MCE is moved into a magnetic field, the magnetic spins align, lowering the entropy. Moving that object out of the field allows the object to increase its entropy by absorbing heat from the environment and disordering the spins. In this way, heat can be taken from one area to another. Should materials be found to display this effect near room temperature, refrigeration without the need for compression may be possible, increasing energy efficiency.


Micro-Controller Aided Gearbox And Chain Drive Fault Recognition System


Rahul R
S8 Department Of Mechanical Engineering Mohandas College Of Engineering And Technology

Abstract
The meshing of gears produces a sound. This sound will normally differ between a perfect gear, a gear with one or more broken teeth, and a gear that has undergone wear and tear. Similarly, the sound of a chain drive will be different when the chain has slackness. The sound is captured by a microphone or piezoelectric crystal and fed into a computer to generate frequency-amplitude graphs in real time. These graphs serve as ideal references for finding faults in a similar gearbox without actually opening it. Using the pre-recorded data, a microcontroller also analyses the signal and indicates the type of error on a 7-segment display.

Introduction
This is a method of fault detection for a gearbox without actually opening it. The sound and vibration of gear meshing are different for a set of perfectly meshing gears, one that has one or more broken teeth, and one that has wear and tear. An experimental gearbox is set up to demonstrate this. Three sets of gears are fabricated and one pair is kept as a reference. A tooth is machined off the second set of gears to simulate a tooth fracture. The overall tooth thickness is reduced for the third set of gears to simulate wear and tear. Two microphones are used to capture the sound of the gearbox and the ambient sound respectively. The ambient sound is captured so that the ambient content can be removed from the main microphone signal. The signal is amplified and fed into a computer through the line-in port. A graph is generated in real time on the computer. The graphs for the perfect gear, the fractured gear and the wear-induced gear are noted. These act as references for fault analysis of similar gearboxes.

Pillars Of Reference

Perfect gear: It acts as a reference against which the faulty gears are compared.

Tooth fractured gear: This type of gear gives periodic disturbances. As the fracture rotates, the graph repeats its peak or peaks at regular intervals. The interval depends on the number of teeth fractured. A time-dependent amplitude graph is used.

Gear with induced wear and tear: When wear erodes the teeth, the average tooth thickness reduces, causing a clattering sound. This shows an increase in frequency as well as amplitude with reference to the perfect gear. A graph of amplitude against frequency is taken for analysis.

Classification Of Components

Mechanical components
Electrical components
Electronic components

Mechanical Components

The two major mechanical components are:
A fabricated gearbox
A chain drive with sprocket

Gearbox

The experimental gearbox is made from an angle-iron frame welded together. The shafts of the gears are mounted on bearing blocks. One shaft permits relative parallel motion with respect to the other; this allows the gears to be shifted. Six gears of equal dimensions are used: one set is the perfect gear, the second set is the one that has a fracture, and the third set has wear induced in it. The gears can be shifted to each position for each fault. Grooves are provided on one shaft to hold it in place during the drive.


Chain & Sprocket Drive

A chain and sprocket drive is set up for the experiment on the same frame housing the gearbox. The small sprocket drives the bigger one via a chain. The setup is such that the slackness can be adjusted; this is done with the help of a chain adjuster of the kind used in motorcycles. The graph is analyzed for all slackness settings of the chain and the results are noted. An indicator is made using the microcontroller that indicates when it is time to tighten the chain.

Signal Generation
Two microphones feed data from the gearbox and the ambient sound. The sound is first amplified using an operational amplifier IC LM741. A voltage comparator is used to remove the ambient content from the main microphone signal, leaving behind the sound from the gearbox or chain drive alone. The peak of this signal is then extracted using a Zener diode. The signal is given a negative gain using the same type of amplifier; this is done to reduce the peak of the voltage signal to a range of about 1 volt.

Electrical Components
An AC motor of 45 watts is used to drive the shaft carrying the gears as well as the shaft of the sprocket. A foot pedal with a variable resistance controls the speed of the gear. This makes it possible to shift the gear while running by momentarily releasing the foot pedal, eliminating the need for a clutch and making the experimental setup more economical as well as simpler in design.

Analysis And Detection Methodology


There are three possible types of display that the microcontroller produces for gear analysis and two for chain analysis. The display is made on a seven-segment LED display; the code generated for displaying each error on the 7-segment display is predefined. When it encounters cyclic or repeated disturbances, the microcontroller displays an error indicating a fractured gear. The analog signal from the circuit is fed into the analog-to-digital converter of the microcontroller. The value of the signal at any instant is noted and assigned to a variable; the variable is reset to zero every 3 seconds. The peak is noted, along with the number of times it is attained. Thus the controller identifies repeating disturbances. The average amplitude for a perfect gear is noted and compared, along with the frequency, for the analysis of a gear having wear and tear. The same methodology is used for fault analysis using the graphs: cyclic peaks and changes in amplitude and frequency are noted.
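The paper does not reproduce the firmware, so the following is only a minimal C++ sketch of the decision logic described above (peak counting over a 3-second window plus a comparison of the average amplitude with the perfect-gear reference). The thresholds, limits and function names are illustrative assumptions, not values taken from the experiment.

```cpp
#include <cstddef>
#include <vector>

// Possible outcomes shown on the 7-segment display (codes are hypothetical).
enum class GearFault { None, BrokenTooth, WearAndTear };

// Classify one 3-second window of ADC samples against a reference recording.
// 'threshold' marks a peak, 'refMean' is the average amplitude of the perfect
// gear, and 'maxPeaks'/'wearFactor' are illustrative tuning constants.
GearFault classifyWindow(const std::vector<int>& samples,
                         int threshold, double refMean,
                         int maxPeaks = 3, double wearFactor = 1.5)
{
    int peakCount = 0;
    long long sum = 0;
    bool abovePrev = false;

    for (std::size_t i = 0; i < samples.size(); ++i) {
        sum += samples[i];
        bool above = samples[i] > threshold;
        if (above && !abovePrev)      // rising edge over the threshold = one peak
            ++peakCount;
        abovePrev = above;
    }

    double mean = samples.empty() ? 0.0
                                  : static_cast<double>(sum) / samples.size();

    if (peakCount > maxPeaks)         // repeated spikes, once per revolution of the fracture
        return GearFault::BrokenTooth;
    if (mean > wearFactor * refMean)  // overall clatter louder than the reference
        return GearFault::WearAndTear;
    return GearFault::None;           // behaves like the perfect gear
}
```

In the real system the result of such a routine would simply be mapped to the predefined 7-segment code for that fault.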

Electronic Components
Analog components
Digital components

Analog Components
Analog components account for all the electronic components used to process the signal that has to be fed into the microcontroller. The components are:
Amplifier (using LM741)
Signal peak extraction system
Condenser microphone / piezoelectric crystal
Voltage comparator (LM311)
7-segment LED display

Application
This method of gearbox fault analysis can be used to check or predict faults in a gearbox without actually opening it, and it can be applied to similar types of gearbox without re-calibration. It can be used in production plants to check large numbers of gearboxes as they are being manufactured, before they leave the factory. It can be used in service centers where the same type of gearbox is serviced. The analysis of the chain can be built into an indicator in the instrument panel of a motorcycle so that the customer is informed when it is time to tighten the chain.

Digital Components
The microcontroller is the main digital part of the system. The microcontroller used is an ATMEL ATmega8. The program is written in C++ and compiled, and is then transferred to the controller using an interface programmer board connected to the RS-232 port of the computer.
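The firmware itself is not listed in the paper; for the ATmega8 mentioned above, reading the conditioned microphone signal could look like the following avr-gcc sketch. The channel, voltage reference and prescaler choices are assumptions made for illustration, not settings taken from the project.

```cpp
#include <avr/io.h>
#include <stdint.h>

// Configure ADC channel 0: AVcc reference, prescaler /64 (about 125 kHz ADC clock at 8 MHz).
static void adcInit()
{
    ADMUX  = (1 << REFS0);                              // AVcc reference, channel 0
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1); // enable ADC, prescaler 64
}

// Start one conversion and wait for the 10-bit result.
static uint16_t adcRead()
{
    ADCSRA |= (1 << ADSC);             // start conversion
    while (ADCSRA & (1 << ADSC)) {}    // busy-wait until the conversion completes
    return ADC;                        // combined ADCL/ADCH result
}

int main()
{
    adcInit();
    for (;;) {
        uint16_t sample = adcRead();   // pass 'sample' to the fault-analysis routine
        (void)sample;
    }
}
```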


Reverse Engineering of Mechanical Devices

Libin K Babu & Karthik S


S6, Department of Mechanical Engineering Mohandas College of Engineering and Technology

Abstract
Whether it's rebuilding a car engine or diagramming a sentence, people can learn about many things simply by taking them apart and putting them back together again. That, in a nutshell, is the concept behind reverse engineering: breaking something down in order to understand it, build a copy or improve it. As computer-aided design has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD (computer-aided design), CAM, CAE and other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. With the ever-increasing popularity of CAD, reverse engineering has proven to be a blessing for the creation of a 3D virtual model of an on-hand physical part to be used in 3D CAE, CAM, CAD and many other software packages. The measurement of the physical object can be done by making use of 3D scanning technologies such as computed tomography, structured light digitizers, laser scanners, and CMMs. The measured data are usually represented as a point cloud, which is devoid of topological information. That is why the data are processed and modelled into a usable format such as a triangular-faced mesh, a CAD model, or a collection of NURBS surfaces. Applications such as Polyworks, Imageware, Geomagic, or Rapidform are used for processing the point clouds into formats that can be used in applications like 3D CAE, CAM, CAD or visualization.

Introduction

The term "reverse engineering" includes any activity you do to determine how a product works, or to learn the ideas and technology that were originally used to develop the product. Reverse engineering is a systematic approach for analyzing the design of existing devices or systems. To be more precise, reverse engineering (RE) is the process of discovering the technological principles of a device, object or system through analysis of its structure, function and operation. It often involves taking something (e.g., a mechanical device, electronic component, or software program) apart and analyzing its workings in detail, to be used in maintenance, or to try to make a new device or program that does the same thing without copying anything from the original.

FORWARD ENGINEERING V/S REVERSE ENGINEERING :

The most traditional method of the development of a technology is referred to as "forward engineering." In the construction of a technology, manufacturers develop a product by implementing engineering concepts and abstractions. By contrast, reverse engineering begins with the final product and works backward to recreate the engineering concepts by analyzing the design of the system and the interrelationships of its components.

TYPES OF RE :

Black Box RE: In "black box" reverse engineering, systems are observed without examining their internal structure.

White Box RE: Here, the internal parts of the object being reverse engineered are examined carefully.

REASONS FOR RE :

Lost documentation: Reverse engineering often is done because the documentation of a particular device has been lost (or was never written), and the person who built it is no longer available. Integrated circuits often seem to have been designed on obsolete, proprietary systems, which means that the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then re-design it.

Product analysis: To examine how a product works, what components it consists of, estimate costs, and identify potential patent infringement.
Creation of unlicensed/unapproved duplicates.


Academic/learning purposes.
Curiosity.
Competitive technical intelligence (understanding what your competitor is actually doing versus what they say they are doing).

REVERSE ENGINEERING OF MECHANICAL DEVICES :

RE of mechanical devices mainly involves measuring an object and then reconstructing it as a 3D model. It is also used to analyze how a product works, what it does, and what components it consists of, to estimate costs, and to identify potential patent infringement.

REVERSE ENGINEERING PROCESS :

As computer-aided design has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE and other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured light digitizers or computed tomography. The measured data alone, usually represented as a point cloud, lack topological information and are therefore often processed and modeled into a more usable format such as a triangular-faced mesh, a set of NURBS surfaces or a CAD model.

There are two parts to any reverse engineering application: scanning and data manipulation. Scanning, also called digitizing, is the process of gathering the requisite data from an object. Many different technologies are used to collect three-dimensional data. They range from mechanical and very slow, to radiation-based and highly automated. Each technology has its advantages and disadvantages, and their applications and specifications overlap. What eventually comes out of each of these data collection devices, however, is a description of the physical object in three-dimensional space called a point cloud. Point cloud data typically define numerous points on the surface of the object in terms of x, y, and z coordinates. At each x, y, z coordinate in the data where there is a point, there is a surface coordinate of the original object.

However, some scanners, such as those based on X-rays, can see inside an object. In that case, the point cloud also defines interior locations of the object, and may also describe its density. There is usually far too much data in the point cloud collected from the scanner or digitizer, and some of it may be unwanted noise. Without further processing, the data isn't in a form that can be used by downstream applications such as CAD/CAM software or rapid prototyping. Reverse engineering software is used to edit the point cloud data, establish the interconnectedness of the points in the cloud, and translate it into useful formats such as surface models or STL files. It also allows several different scans of an object to be melded together so that the data describing the object can be defined completely from all sides and directions. Usually, the shortest part of any RE task is scanning or data collection. While there are exceptions, scanning might only require a few seconds or a few minutes. On the other hand, manipulating the data can be quite time-consuming and labor-intensive; it may even require days to complete this part of the job. The situation is analogous to scanning two-dimensional printed or photographic materials: it doesn't usually take very long to scan a picture or a diagram, but getting that picture into a presentable form can be quite a lot of work indeed.
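As a hedged illustration of the point-cloud representation just described (this is not code from any of the packages named above, and all names are invented for the example), each scanned point is simply an (x, y, z) triple, and even basic processing such as computing the cloud's bounding box works directly on that list:

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

struct Point3 { double x, y, z; };        // one scanned surface coordinate

struct BoundingBox { Point3 min, max; };  // axis-aligned extent of the cloud

// Compute the axis-aligned bounding box of a point cloud.
BoundingBox boundingBox(const std::vector<Point3>& cloud)
{
    const double inf = std::numeric_limits<double>::infinity();
    BoundingBox box{{inf, inf, inf}, {-inf, -inf, -inf}};
    for (const Point3& p : cloud) {
        box.min.x = std::min(box.min.x, p.x); box.max.x = std::max(box.max.x, p.x);
        box.min.y = std::min(box.min.y, p.y); box.max.y = std::max(box.max.y, p.y);
        box.min.z = std::min(box.min.z, p.z); box.max.z = std::max(box.max.z, p.z);
    }
    return box;
}

int main()
{
    std::vector<Point3> cloud = {{0, 0, 0}, {1.5, 0.2, 0.7}, {0.3, 2.0, 1.1}};
    BoundingBox b = boundingBox(cloud);
    std::cout << "extent in x: " << b.max.x - b.min.x << "\n";
}
```

Real reverse-engineering packages go far beyond this, of course, filtering noise and triangulating the points into meshes or NURBS surfaces.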

COORDINATE MEASURING MACHINE (CMM) :


A coordinate measuring machine (CMM) is a device for measuring the physical geometrical characteristics of an object. This machine may be manually controlled by an operator or it may be computer controlled. Measurements are defined by a probe attached to the third moving axis of the machine. Probes may be mechanical, optical, laser, or white light, among others. The typical CMM is composed of three axes, X, Y and Z, which are orthogonal to each other in a typical three-dimensional coordinate system. Each axis has a scale system that indicates the location of that axis. The machine reads the input from the probe, as directed by the operator or programmer, and then uses the X, Y, Z coordinates of each of these points to determine size and position. A CMM is also a device used in manufacturing and assembly processes to test a part or assembly against the design intent. By precisely recording the X, Y, and Z coordinates of the target, points are generated which can then be analyzed via regression algorithms for the construction of features. These points are collected by using a probe that is positioned manually by an operator or automatically via Direct Computer Control (DCC).
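The regression step mentioned above is not spelled out in the paper; as one hedged example of constructing a feature from probed points, the following C++ sketch performs a simple least-squares (Kasa) circle fit, the kind of calculation a CMM might use to report the centre and radius of a bore from a handful of touch points. All names and the sample data are illustrative.

```cpp
#include <array>
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

struct Circle { double cx, cy, r; };

// Least-squares (Kasa) circle fit: writes the circle equation as
// x^2 + y^2 = A*x + B*y + C and solves the 3x3 normal equations.
Circle fitCircle(const std::vector<std::array<double, 2>>& pts)
{
    double M[3][3] = {};   // normal-equation matrix
    double v[3]    = {};   // right-hand side

    for (const auto& p : pts) {
        double x = p[0], y = p[1], s = x * x + y * y;
        double row[3] = {x, y, 1.0};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) M[i][j] += row[i] * row[j];
            v[i] += row[i] * s;
        }
    }

    // Gaussian elimination with partial pivoting on the 3x3 system.
    for (int col = 0; col < 3; ++col) {
        int pivot = col;
        for (int r = col + 1; r < 3; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[pivot][col])) pivot = r;
        std::swap(M[col], M[pivot]);
        std::swap(v[col], v[pivot]);
        for (int r = col + 1; r < 3; ++r) {
            double f = M[r][col] / M[col][col];
            for (int c = col; c < 3; ++c) M[r][c] -= f * M[col][c];
            v[r] -= f * v[col];
        }
    }
    double sol[3];
    for (int r = 2; r >= 0; --r) {
        sol[r] = v[r];
        for (int c = r + 1; c < 3; ++c) sol[r] -= M[r][c] * sol[c];
        sol[r] /= M[r][r];
    }

    Circle out;
    out.cx = sol[0] / 2.0;
    out.cy = sol[1] / 2.0;
    out.r  = std::sqrt(sol[2] + out.cx * out.cx + out.cy * out.cy);
    return out;
}

int main()
{
    // Four probed points lying on a circle of radius 5 centred at (2, 3).
    std::vector<std::array<double, 2>> probed = {{7, 3}, {-3, 3}, {2, 8}, {2, -2}};
    Circle c = fitCircle(probed);
    std::cout << "centre (" << c.cx << ", " << c.cy << "), radius " << c.r << "\n";
}
```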

COMPUTER AIDED DESIGN (CAD) :

Computer-Aided Design (CAD) is the use of computer technology to aid in the design and particularly the drafting (technical drawing and engineering drawing) of a part or product, including entire buildings. It is both a visual (or drawing) and symbol-based method of communication whose conventions are particular to a specific technical field. CAD is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories). CAD is mainly used for detailed engineering of 3D models and/or 2D drawings of physical components, but it is also used throughout the engineering process, from conceptual design and layout of products, through strength and dynamic analysis of assemblies, to definition of manufacturing methods of components. CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.

NURBS SURFACES :

Non-uniform rational B-spline (NURBS) is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces. NURBS are nearly ubiquitous in computer-aided design (CAD), manufacturing (CAM), and engineering (CAE) and are part of numerous industry-wide standards, such as IGES, STEP, ACIS, and PHIGS. NURBS tools are also found in various 3D modeling and animation software packages, such as formZ, Maya and Rhino3D.

COMPUTER AIDED MANUFACTURING (CAM) :

Computer-aided manufacturing (CAM) is the use of computer-based software tools that assist engineers and machinists in manufacturing or prototyping product components. CAM is a programming tool that makes it possible to manufacture physical models using computer-aided design (CAD) programs. CAM creates real-life versions of components designed within a software package. CAM was first used in 1971 for car body design and tooling. Traditionally, CAM has been considered a numerical control (NC) programming tool wherein three-dimensional (3D) models of components generated in CAD software are used to generate CNC code to drive numerically controlled machine tools. Although this remains the most common CAM function, CAM functions have expanded to integrate CAM more fully with CAD/CAE PLM solutions. As with other computer-aided technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers and NC programmers. CAM, in fact, both leverages the value of the most skilled manufacturing professionals through advanced productivity tools, and builds the skills of new professionals through visualization, simulation and optimization tools.


RAPID PROTOTYPING :

Rapid prototyping is the automatic construction of physical objects using solid freeform fabrication. The first techniques for rapid prototyping became available in the late 1980s and were used to produce models and prototype parts. Today, they are used for a much wider range of applications and are even used to manufacture production-quality parts in relatively small numbers.

USES OF REVERSE ENGINEERING :

Understanding how a product works more comprehensively than by merely observing it.
Investigating and correcting errors and limitations in existing programs.
Studying the design principles of a product as part of an education in engineering.
Transforming obsolete products into useful ones by adapting them to new systems and platforms.

Widely used in mechatronics and robotics.
Used in dental applications.
In the medical field, mostly for surgical implants.
In the production sector, including the automobile industry.

REFERENCES :

T. Varady, R. R. Martin, J. Cox, "Reverse Engineering of Geometric Models: An Introduction", Computer Aided Design 29 (4), 255-268, 1997.
E. J. Chikofsky and J. H. Cross, II, "Reverse Engineering and Design Recovery: A Taxonomy", IEEE Software, vol. 7, no. 1, pp. 13-17, January 1990.
Grenda, E. (2006). The Most Important Commercial Rapid Prototyping Technologies at a Glance.

APPLICATIONS OF RE :

Original equipment manufacturer (OEM) unable or unwilling to provide replacement parts.
Prototypes with no models/drawings.
Worn or broken components for which there is no source of supply.
For base model geometry to edit and tailor for improved functionality or new application.
Design of parts and assemblies, tooling and jigs.

LIMITATIONS OF RE :

RE cannot be applied to all equipment.
Modifications dependent on point clouds are limited in certain cases.
The processes involved are costly.

FUTURE OF RE:


Smart Cars
Athul Vijay & Arjun Sreenivas
S4, Department of Mechanical Engineering Mohandas College of Engineering and Technology

Abstract
Science is a long way from producing machines as powerful as the human brain. However, the search for artificial intelligence has come a long way since the first robots. New technologies enable scientists to produce devices capable of a range of human-like actions, while many scientists now look to the insect world for inspiration for tomorrow's thinking machines. This paper addresses three basic concepts of driving: vehicle efficiency, driver comfort and eco-friendliness. The future is not something that we enter but something we create; hence the smart car. Smart cars do not just mean cars that run on artificial intelligence; a smart car is a combination of works assembled to make a masterpiece. Imagine a car with high efficiency, a car that can ease the driver's stress, increase safety and finally be eco-friendly; when all this comes in one bundle we get the smart car. Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it: the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. Artificial intelligence has been the subject of optimism, but it has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. In this paper we discuss the impact of AI on the automobile industry. The i-car is the latest emerging trend using AI as its base of operation. Most of the time, smart cars are mistaken for hybrid vehicles; smart cars are vehicles that use the latest technologies, along with AI and other ultra-modern technologies, to ease human control over vehicles.

Adaptive Cruise Control


Autonomous cruise control is an optional cruise control system appearing on some more upscale vehicles. The system goes under many different trade names according to the manufacturer. These systems use either a radar or laser setup allowing the vehicle to slow when approaching another vehicle and accelerate again to the preset speed when traffic allows. ACC technology is widely regarded as a key component of any future generation of intelligent cars. Laser-based systems are significantly lower in cost than radar-based systems; however, laser-based ACC systems do not detect and track vehicles well in adverse weather conditions, nor do they track extremely dirty (non-reflective) vehicles very well. Laser-based sensors must be exposed; the sensor (a fairly large black box) is typically found in the lower grille, offset to one side of the vehicle. Radar-based sensors can be hidden behind plastic fascias; however, the fascias may look different from those of a vehicle without the feature. For example, Mercedes packages the radar behind the upper grille in the center; however, the Mercedes grille on such applications contains a solid plastic panel in front of the radar with painted slats to simulate the slats on the rest of the grille. Radar-based systems are available on many luxury cars as an option for approximately 1000 to 3000 USD/EUR. Laser-based systems are available on some near-luxury and luxury cars as an option for approximately 400 to 600 USD. Two companies are developing a more advanced cruise control that can automatically adjust a car's speed to maintain a safe following distance. This new technology, called adaptive cruise control, uses forward-looking radar, installed behind


the grill of a vehicle, to detect the speed and distance of the vehicle ahead of it. Adaptive cruise control is similar to conventional cruise control in that it maintains the vehicle's pre-set speed. However, unlike conventional cruise control, this new system can automatically adjust speed in order to maintain a proper distance between vehicles in the same lane. This is achieved through a radar headway sensor, a digital signal processor and a longitudinal controller. If the lead vehicle slows down, or if another object is detected, the system sends a signal to the engine or braking system to decelerate. Then, when the road is clear, the system will re-accelerate the vehicle back to the set speed. The 77-GHz Autocruise radar system made by TRW has a forward-looking range of up to 492 feet (150 meters), and operates at vehicle speeds ranging from 18.6 miles per hour (30 kph) to 111 mph (180 kph). Delphi's 76-GHz system can also detect objects as far away as 492 feet, and operates at speeds as low as 20 mph (32 kph). Adaptive cruise control is just a preview of the technology being developed by both companies. These systems are being enhanced to include collision warning capabilities that will warn drivers through visual and/or audio signals that a collision is imminent and that braking or evasive steering is needed. The cruise control system actually has a lot of functions other than controlling the speed of the car. For instance, a typical cruise control can accelerate or decelerate the car by 1 mph with the tap of a button; hit the button five times to go 5 mph faster. There are also several important safety features: the cruise control will disengage as soon as the driver hits the brake pedal, and it will not engage at speeds less than 25 mph (40 kph). Adaptive cruise control is offered on vehicles such as the BMW 7 Series, 5 Series, 6 Series and 3 Series, and the Audi A4.
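As a rough, hedged illustration of the control loop just described (this is not TRW's or Delphi's algorithm, and every constant below is an assumption), an adaptive cruise controller can be reduced to two rules: track the set speed, but cap the command whenever the radar-measured headway falls below a chosen time gap.

```cpp
#include <algorithm>
#include <iostream>

// One update of a simplified adaptive-cruise-control law.
// Returns a commanded acceleration in m/s^2 (negative means braking).
double accCommand(double ownSpeed,       // m/s, current vehicle speed
                  double setSpeed,       // m/s, driver-selected cruise speed
                  double gapDistance,    // m, radar-measured distance to the lead car
                  double gapClosingRate) // m/s, positive when the gap is shrinking
{
    const double timeGap = 1.8;   // desired headway in seconds (assumed)
    const double kSpeed  = 0.4;   // proportional gains (assumed)
    const double kGap    = 0.25;
    const double kRate   = 0.5;

    // Rule 1: track the driver's set speed.
    double cruiseCmd = kSpeed * (setSpeed - ownSpeed);

    // Rule 2: keep at least 'timeGap' seconds behind the lead vehicle.
    double desiredGap = timeGap * ownSpeed;
    double followCmd  = kGap * (gapDistance - desiredGap) - kRate * gapClosingRate;

    // The more conservative (smaller) command wins, clamped to comfortable limits.
    return std::clamp(std::min(cruiseCmd, followCmd), -3.0, 1.5);
}

int main()
{
    // At 25 m/s with only a 30 m gap that is closing at 2 m/s, the controller brakes.
    std::cout << accCommand(25.0, 30.0, 30.0, 2.0) << " m/s^2\n";
}
```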

Lane Departure Warning System

If the car concludes that the driver is drowsing, it issues an audible alarm, and an icon depicting a cup of coffee flashes on the instrument panel. One manufacturer's Driver Attention Warning System uses a voice alarm: if a driver is nodding off, the car announces "You are tired," followed by "You are dangerously tired! Stop as soon as it is safe to do so!" The driver's seat also vibrates to help rouse him or her. Additional measures, like emitting puffs of air on the back of a dozing driver's neck, vibrating steering wheels, and automatic steering that takes over and gently guides you back into your lane when you drift, may all be found in driver alert systems soon. How can a car tell when you're nodding off? Researchers are tweaking already extant car safety technologies and applying them in new ways. For example, blind-spot warning systems in today's digital cars keep an eye out for other vehicles in places you can't see. They also analyze your car's relation to its lane and whether your turn signal is on or not. Add to this system automatic steering that kicks in when you drift, and you've got part of a drowsy driver alert system. An onboard computer uses facial recognition software to determine if you're drowsing. Night-vision cameras trained on your face analyze slackening facial muscles, your blinking patterns and how long your eyes stay closed between blinks. Once it concludes you're no longer awake, the system kicks in to rouse you from your dangerous slumber.
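The blink analysis described above can be reduced, as a hedged sketch, to a PERCLOS-style measure (a standard drowsiness metric not named in the paper): the fraction of recent video frames in which the eyes are judged closed. The camera and the eye-state detector are assumed to exist elsewhere; only the decision logic is shown, and the window size and threshold are illustrative.

```cpp
#include <cstddef>
#include <deque>
#include <iostream>

// Rolling drowsiness monitor: stores the last 'windowSize' eye-state samples
// (true = eyes closed) and raises an alert when the closed fraction is high.
class DrowsinessMonitor {
public:
    explicit DrowsinessMonitor(std::size_t windowSize, double alertFraction = 0.3)
        : windowSize_(windowSize), alertFraction_(alertFraction) {}

    // Feed one video frame's eye state; returns true if the driver seems drowsy.
    bool update(bool eyesClosed)
    {
        frames_.push_back(eyesClosed);
        if (frames_.size() > windowSize_) frames_.pop_front();

        std::size_t closed = 0;
        for (bool c : frames_) closed += c ? 1 : 0;
        double fraction = static_cast<double>(closed) / frames_.size();
        return frames_.size() == windowSize_ && fraction >= alertFraction_;
    }

private:
    std::deque<bool> frames_;
    std::size_t windowSize_;
    double alertFraction_;
};

int main()
{
    DrowsinessMonitor monitor(30);             // roughly one second of frames at 30 fps
    bool alert = false;
    for (int i = 0; i < 30; ++i)
        alert = monitor.update(i % 3 == 0);    // eyes closed in one of every three frames
    std::cout << (alert ? "wake-up alarm\n" : "driver alert\n");
}
```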


Adaptive Highbeam

Adaptive Highbeam Assist is the newest headlamp technology, introduced in spring 2009 in the new-generation Mercedes-Benz E-Class. It is based on a camera mounted behind the windshield and automatically and continuously adapts the headlamp range to the distance of vehicles ahead or oncoming. The same technology is also present in the BMW 7 Series. BMW's version of this technology, developed in cooperation with Mobileye, uses swiveling headlights that always point in the direction the vehicle is steering, so the road ahead is better illuminated and obstacles become visible sooner.

Even when the high beam is warranted by prevailing conditions, drivers generally do not use it. There have long been efforts, particularly in America, to devise an effective automatic beam selection system to relieve the driver of the need to select and activate the correct beam as traffic, weather, and road conditions change. Early systems like Cadillac's Autronic Eye appeared in 1952 with an electric eye atop the dashboard (later behind the radiator grille) which was supposed to switch between low and high beam in response to oncoming traffic. These systems could not accurately discern headlamps from non-vehicular light sources such as streetlights, they did not switch to low beam when the driver approached a vehicle from behind, and they spuriously switched to low beam in response to road-sign reflections of the vehicle's own headlamps. Present systems based on imaging CMOS cameras can detect and respond appropriately to leading and oncoming vehicles while disregarding streetlights, road signs, and other spurious signals. Camera-based beam selection was first released in 2005 on the Jeep Grand Cherokee and has since been incorporated into comprehensive driver assistance systems by automakers worldwide. Intelligent Light System is a headlamp beam control system introduced in 2006 which offers five different bi-xenon light functions, each of which is suited to typical driving or weather conditions:
Country mode
Motorway mode
Enhanced fog lamps
Active light function
Cornering light function

Magnetorheological Fluid

The term "magnetorheological fluid" comes from a combination of magneto, meaning magnetic, and rheo, the prefix for the study of deformation of matter under applied stress. A magnetorheological fluid is a fascinating smart fluid with the ability to switch back and forth from a liquid to a near-solid under the influence of a magnetic field. These fluids are mostly used as dampers in automobiles. If the shock absorbers of a vehicle's suspension are filled with magnetorheological fluid instead of plain oil, and the whole device is surrounded with an electromagnet, the viscosity of the fluid, and hence the amount of damping provided by the shock absorber, can be varied depending on driver preference or the weight being carried by the vehicle, or it may be dynamically varied in order to provide stability control. This is in effect a magnetorheological damper. For example, the MagneRide active suspension system permits the damping factor to be adjusted once every millisecond in response to external conditions. The application set for MR fluids is vast, and it expands with each advance in the dynamics of the fluid.

Regenerative Braking

The ability to harness power and energy which in many cases has been wasted in the past, as with solar panels, is an effective bonus for the electric car industry. One such method of harvesting waste power is regenerative braking, which effectively translates the power produced when braking into a system that allows the electric car battery to be recharged on an ongoing basis. When you consider how often you brake and how much energy and power is used to slow down your vehicle, the ability to literally harness this waste energy can make a significant difference to


the efficiency of your vehicle and the impact on the environment.

Hybrid Engines

In simple terms, a hybrid vehicle is a vehicle that uses a traditional internal combustion engine together with an electric backup system which can either take the lead or back up the more traditional engine fuel system. Depending upon what type of hybrid engine is available in your vehicle, you will notice a distinct difference in the noise when you drive and also in the efficiency of your vehicle, which is likely to offer significantly greater mileage than a traditional vehicle. A number of tests have been carried out with regard to hybrid vehicles and their impact on the environment, and they seem to show a reduction of around 25% in pollution. In many ways hybrid cars have helped overcome consumer concerns about the reliability and cost of electric cars. Hybrids also offer a significant reduction in environmental damage when compared to more traditional cars and can in many ways improve not only the efficiency of driving on the roads of today but also maximum journey lengths.

Future of Smart Cars

Smart cars of the future will use advanced technology to perform such functions as automatic cruise control, lane departure warnings and correction, hazardous object avoidance, driver awakenings, position and satellite monitoring, self-parking and driverless transportation. Researchers are developing automotive technology so that in the future, smart cars will be able to interface by wireless and infrared connections with road signs, signals and other roadside communication devices. This will enable computerized smart cars to automatically determine driving conditions such as traffic ahead, road hazards or steep curves and make adjustments ahead of time.

Some day, smart cars will be able to determine their own speeds, put themselves on cruise control, take themselves off, avoid hazards and park themselves with little driver interaction. If you think this is just a pipe dream, it is good to know that the European Union has already set forth its i2010 Intelligent Car Initiative. The initiative's goals are to develop safer, cleaner and smarter vehicles. These intelligent cars or smart vehicles will be safer to drive by using technology such as adaptive cruise control to keep a safe distance from other drivers, lane departure warnings and lane change assistants, hypovigilance systems for sleepy drivers and an alcohol lock for those over the DUI limits. Through advanced communication systems including computers, wireless networking and GPS, smart cars will also be able to interactively ease traffic congestion and take more favorable routes as traffic needs arise. Smart cars will also be intelligent enough to avoid pedestrians, bicyclers and others who are not driving automobiles. Hands-free motoring is another goal for smart car developers, who wish to create public transportation systems with individual cars, taxis, shuttle buses and large transport buses that will carry passengers without the need for drivers. With the ever-increasing need for newer safety measures and ways to decrease traffic congestion, it is most assured that smart cars will one day provide the relief and results that many are now seeking. This is what the experience of the intelligent car of tomorrow will be like, going by the technology of today.

References

1. Safety first: VDIM puts Toyota at the head of the safety technology pack in Japan, Automotive Industries (Find Articles at BNET).
2. Revolutionary Lane Departure System, Dr Kaimen James Jr.
3. AMC Smart Cars, Mark Donohue.
4. The Car That Changed The World, Bruce W. McCalley.


Morphing Aircraft Technology & New Shapes For Aircraft Design


V.Vikram
S6 Department of Mechanical Engineering Mohandas college of Engineering and Technology

Abstract
Morphing aircraft are multi-role aircraft that change their external shape substantially to adapt to a changing mission environment during flight. This creates superior system capabilities not possible without morphing shape changes. The objective of morphing activities is to develop high-performance aircraft with wings designed to change shape and performance substantially during flight, to create multiple-regime, aerodynamically efficient, shape-changing aircraft. Compared to conventional aircraft, morphing aircraft become more competitive as more mission tasks or roles are added to their requirements. This paper will review the history of morphing aircraft, describe a recently completed DARPA program, and identify critical technologies required to enable morphing. Morphing aircraft design features reconcile conflicting mission requirements so that an aircraft can perform several mission functions or roles. These functions could be as simple as being able to use a small engine but land and take off from a short field. Such conflicting requirements are why landing flaps appear on wings, to increase area and lift coefficient at low speed, despite the increased weight that they add to the system. If morphing devices are not added, then the wing design is compromised so that the aircraft may do one thing very well, but have problems executing other parts of the mission.

Introduction

Even before the official beginning of controlled human flight in 1903, radical shape-changing aircraft appeared and then disappeared, contributing little to aviation. Clement Ader conducted flight experiments in France as early as 1873 and proposed a wing morphing design as early as 1890. He developed ideas for the future of aviation for warfare beginning in the 1890s and described them in a short monograph published in 1909. He advocated three basic types of aircraft as parts of future military air fleets: Scouts, Bombers and Airplanes of the Line. Consider his description of the general military airplane and in particular, the Scout aircraft: "Whatever category airplanes might belong to, they must satisfy the following general conditions: their wings must be articulated in all their parts and must be able to fold up completely... When advances in aircraft design and construction permit, the frames will fold and the membranes will be elastic in order to diminish or increase the bearing surfaces at the wish of the pilot... The Scouts will be designed according to their function: everything should be sacrificed in the interest of speed and flying long distances. Their wings will be bat-type or preferably bird type, long and narrow, with the minimum of surface and hence a heavy load for each square meter. Moreover the wings will be adjustable, so that in flight they can be reduced by a half or a third or even less... Armament will be nonexistent or very little... The real weapon will be speed."

Morphed Designs
Makhonine's telescoping wing


Makhonine's telescoping wing had three major parts that slid over each other to change the wing span and area: in operation, this airplane changed its wing span to 162% of the retracted value (from 13 meters to 21 meters) while the wing area changed to 157% (from 21 to 33 square meters). Pneumatic actuators provided the energy for extension and contraction. The wing loading was about 30 lb per square foot and the airplane was considered to be underpowered, with a maximum speed of 186 mph with the wings retracted and 155 mph with the wings fully extended. Makhonine designed other successful variable-geometry aircraft. His last, the MAK-123, was first flown in 1947 in France and demonstrated extension and retraction of telescoping wings with no adverse effects.

Modern variable geometry aircraft

The variable sweep concept comes closest to the present-day morphing wing concepts as far as objectives are concerned. The interest in variable geometry wings in the 1950s and 60s arose because of aerodynamically dissimilar mission objectives. These objectives were:
Long-range subsonic cruise or long-endurance on station
High supersonic speed interception and low-altitude transonic strike
Operation from limited-length runways (or aircraft carriers)

Morphing Materials

Synthetic jets: Synthetic jets cover part of a wing and replicate the effect of bird feathers. This makes the airplane more comfortable and stable.

Microspheres: Microspheres imitate the pores inside birds' bones. Microspheres are injected into composite materials, which are heated to fuse them together. This helps to achieve a lightweight and strong structure.

Smart materials: Smart materials move according to command. These materials respond to stimuli such as electricity, heat and magnetism, and regain their original shape.

MAS Program

The MAS program had two primary technical goals:
1) To develop active wing structures that change shape to provide a wide range of aerodynamic performance and flight control not possible with conventional wings.
2) To enable development of air vehicle systems with fleet operational effectiveness not possible with conventional aircraft. This includes both Navy and Air Force operations.
The MAS effort was an extension of activities that began more than a decade ago with DARPA's development of smart materials and devices; this effort was led by Dr. Robert Crowe, then a Program Manager in the DARPA/Defense Sciences Office (DSO). He followed this with demonstration projects such as the Smart Wing Program, SAMPSON (an advanced inlet morphing program), and the Smart Rotor Program. The Compact Hybrid Actuator Program (CHAP) was developed by Dr. Ephrahim Garcia during his tenure as a DARPA Program Manager.


Conclusion
The need for new aircraft solutions: Morphing aircraft are distinguished by their ability to change shape to respond to new environments that are encountered as their missions unfold. It is fair to ask: so what, how will that increase my military capabilities? The answer to that question requires the questioner to accept several assumptions about the future. The first assumption is that enemies of the future will have a substantial, sophisticated air defense composed of a variety of radar-directed weapons that can acquire and target airborne assets. A second assumption is that the enemy of the future will be adept at moving and hiding valuable assets, making them harder to find and more difficult to strike. The answer to this problem is to develop systems that can search, locate, target and attack both air and ground targets, but can also survive and persist in the face of a dedicated, high-tech enemy.

References
1. Aeronautics and Astronautics Department, Purdue University, USA.
2. Official website of NASA.


EDITORIAL BOARD
Dr. Ashalatha Thampuran, Principal
Dr. S Dasgupta, HOD, Electrical & Electronics Engg. Dept.
Mrs. Deepa Nair (EC)
Mr. Rajesh (MCA)
Mrs. Sandhya S (EC)
Mr. Libin K Babu (S6 ME)
Mr. Vishnu S R (S6 EEE)
Mr. Rahul S (S6 EEE)
Mr. Vineed Vijay (S6 EEE)
Ms. Anita Bhattacharya (S6 CS)
Mr. Anil Aravind Menon (S6 CS)
Mr. Arun Anson (S6 CS)
Mr. Anish A (S6 CS)
Mr. Vivek S Nath (S6 ME)
Mr. Rohit (S6 BT)
