
Seminar Report

ANN (Artificial Neural Network)


[16th Nov., 2011]

CHAUDHARY DEVI LAL UNIVERSITY

Submitted To: Mr. Raghuwinder, Lecturer, CSA Dept., CDLU, Sirsa

Submitted By: Nitish Jalan, Roll No. 40, M.Tech. (FT) 1st, CDLU, Sirsa



Barnala Road, Sirsa-125055, Haryana, India Web: www.cdlu.in

Phone: 01666-239819, Fax: 01666- 247049

ABSTRACT

For many centuries, one of the goals of humankind has been to develop machines. We envisioned these machines as performing all cumbersome and tedious tasks so that we might enjoy a more fruitful life. The era of machine making began with the discovery of simple machines such as the lever, the wheel, and the pulley. Many equally ingenious inventions followed thereafter. Nowadays engineers and scientists are trying to develop intelligent machines. Artificial neural systems are present-day examples of such machines that have great potential to further improve the quality of our life. Artificial neural systems represent a promising new generation of information processing networks. Advances have been made in applying such systems to problems found intractable or difficult for traditional computation. Neural networks can supplement the enormous processing power of the von Neumann digital computer with the ability to make sensible decisions and to learn from ordinary experience, as we do. A neural network's ability to perform computation is based on the hope that we can reproduce some of the flexibility and power of the human brain by artificial means. Network computation is performed by a dense mesh of computing nodes and connections, which operate collectively and simultaneously on most or all data and inputs.

CONTENTS

1. INTRODUCTION
   1.1 What is ANN
   1.2 History of ANNs
   1.3 Overview of ANNs
   1.4 Analogy to Brain

2. BIOLOGICAL NEURAL NETWORK
   2.1 Biological Neuron

3. ARTIFICIAL NEURON MODELS
   3.1 McCulloch and Pitts Model
   3.2 Rosenblatt Model

4. NEURON FOR ARTIFICIAL NEURAL SYSTEM

5. DIFFERENCE WITH TRADITIONAL COMPUTING

6. APPLICATIONS OF ANNs

7. LIMITATIONS OF ANNs

8. REFERENCES

1. INTRODUCTION

1.1 What is ANN?
Artificial Neural Networks are synthetic networks that emulate the biological neural networks found in living organisms. Alternatively, they can be defined as a class of mathematical algorithms that produce solutions to a number of specific problems, where a network can be regarded essentially as a graphic notation for a large class of algorithms.

1.2 History of ANNs:
The year 1943 is often considered the initial year in the development of artificial neural systems. McCulloch and Pitts (1943) outlined the first formal model of an elementary computing neuron. The model included all the necessary elements to perform logic operations, and thus it could function as an arithmetic-logic computing element. The McCulloch-Pitts neuron model laid the groundwork for future developments. Donald Hebb (1949) first proposed a learning scheme for updating a neuron's connections that we now refer to as the Hebbian learning rule. He stated that information can be stored in connections, and postulated a learning technique that had a profound impact on future developments in this field. During the 1950s the first neurocomputers were built and tested (Minsky 1954); they adapted their connections automatically. During this stage, the neuron-like element called the perceptron was invented by Frank Rosenblatt in 1958. It was a trainable machine capable of learning to classify certain patterns by modifying the connections to its threshold elements. The idea caught the imagination of engineers and scientists and laid the groundwork for basic machine learning algorithms that we still use today.

In the early 1960s a device called ADALINE (for ADAptive LINEar combiner) was introduced, and a new, powerful learning rule called the Widrow-Hoff learning rule was developed by Bernard Widrow and Marcian Hoff (1960, 1962). The rule minimized the summed squared error during training involving pattern classification.

1.3 Overview of ANNs:
Artificial Neural Networks are synthetic networks that emulate the biological neural networks found in living organisms, and a network's ability to perform computation is based on the hope that we can reproduce some of the flexibility and power of the human brain by artificial means. Network computation is performed by a dense mesh of computing nodes and connections, which operate collectively and simultaneously on most or all data and inputs. The basic processing elements of neural networks are called artificial neurons, or simply neurons; often we simply call them nodes. Neurons perform as summing and nonlinear mapping junctions. In some cases they can be considered as threshold units that fire when their total input exceeds a certain bias level. Neurons usually operate in parallel and are configured in regular architectures. They are often organized in layers, and feedback connections both within a layer and toward adjacent layers are allowed. Each connection strength is expressed by a numerical value called a weight, which can be modified.

1.4 Analogy to Brain:
The most basic element of the human brain is a specific type of cell which, unlike the rest of the body, does not appear to regenerate. These cells are assumed to provide us with our abilities to remember, think, and apply previous experiences to our every action. These cells, all 100 billion of them, are known as neurons.

The power of the human brain comes from the sheer number of these basic components and the multiple connections between them.

The basic concept of multiple connections between individual components is the backbone of ANNs.

2. BIOLOGICAL NEURAL NETWORK

Initially, the term neural network referred to the biological neural networks found in living organisms. The most basic element of a biological neural network is a specific type of cell called the neuron, or elementary nerve cell.

2.1 Biological Neuron:


The elementary nerve cell, called a neuron, is the fundamental building block of the biological neural network. Its schematic diagram is shown in the figure below. A typical cell has three major regions:

1. Cell body or Soma
2. Dendrites
3. Axon

1. Cell body or Soma:
The cell body, which is also called the soma, is where the signals coming from the dendrites are processed.

2. Dendrites:

Dendrites branch extensively from the cell body. They form a dendritic tree, which is a very fine bush of thin fibers around the neuron's body, and perform two operations:
1. They receive information from other neurons through their axons.
2. They transmit the received information toward the cell body.

3. Axons:
Axons are long fibers that serve as transmission lines. An axon is a long cylindrical connection that carries impulses from the neuron. The end part of an axon splits into a fine arborization, and each branch of it terminates in a small end bulb almost touching the dendrites of neighboring neurons. The axon-dendrite contact organ is called a synapse.

4. Synapse:
The synapse is where the neuron introduces its signal to the neighboring neuron. The signals reaching a synapse, and received by the dendrites, are electrical impulses. The interneuronal transmission is sometimes electrical but is usually effected by the release of chemical transmitters at the synapse. Thus, terminal buttons generate chemicals that affect the receiving neuron, as shown in the figure.

The receiving neuron either generates an impulse to its axon, or produces no response.

The neuron is able to respond to the total of its inputs aggregated within a short time interval called the period of latent summation. The neuron's response is generated if the total potential of its membrane reaches a certain level. The membrane can be considered as a shell which aggregates the magnitude of the incoming signals over some duration. Specifically, the neuron generates a pulse response and sends it to its axon only if the conditions necessary for firing are fulfilled. Let us consider these conditions. Incoming impulses can be excitatory if they cause the firing, or inhibitory if they hinder the firing of the response. A more precise condition for firing is that the excitation should exceed the inhibition by the amount called the threshold of the neuron, typically a value of about 40 mV.
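
As a worked illustration (the figures here are assumed for the example, not measured values): if the excitatory impulses arriving within the period of latent summation add up to 60 mV and the inhibitory ones to 15 mV, the net excitation of 45 mV exceeds the 40 mV threshold and the neuron fires; with only 50 mV of excitation, the net 35 mV would fall short of the threshold and no impulse would be generated.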

3. ARTIFICIAL NEURON MODELS

3.1 McCulloch and Pitts Model:
The year 1943 is often considered the initial year in the development of artificial neural systems. In 1943 Warren McCulloch and Walter Pitts gave the first formal model of an elementary computing neuron. The model included all the necessary elements to perform logic operations, and thus it could function as an arithmetic-logic computing element. The McCulloch-Pitts model of the neuron is shown in the figure below.

The inputs x_i, for i = 1, 2, ..., n, are 0 or 1, depending on the absence or presence of an input impulse at instant k. The neuron's output signal is denoted as o. The firing rule for this model is defined as follows:

o^(k+1) = 1 if Σ_{i=1..n} w_i x_i^k ≥ T
o^(k+1) = 0 if Σ_{i=1..n} w_i x_i^k < T

where the superscript k = 0, 1, 2, ... denotes the discrete time instant, and w_i is the multiplicative weight connecting the i-th input with the neuron's membrane. Note that w_i = +1 for excitatory synapses and w_i = -1 for inhibitory synapses in this model, and T is the neuron's threshold value, which needs to be reached or exceeded by the weighted sum of signals for the neuron to fire. This model can perform the basic logic operations NOT, OR, and AND; however, no learning is incorporated in the system, since the weights are fixed. Some logic operations realized by the McCulloch-Pitts model are shown in the figure.
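
To make the firing rule concrete, here is a minimal Python sketch of a McCulloch-Pitts neuron realizing the NOT, OR, and AND operations. The function name mp_neuron and the specific threshold values are illustrative choices, not part of the original model description.

# Minimal sketch of a McCulloch-Pitts neuron (illustrative only).
def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) if the weighted sum of binary inputs reaches the threshold T.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: two excitatory weights (+1), threshold T = 2.
assert mp_neuron([1, 1], [1, 1], 2) == 1
assert mp_neuron([1, 0], [1, 1], 2) == 0

# OR: two excitatory weights (+1), threshold T = 1.
assert mp_neuron([0, 1], [1, 1], 1) == 1
assert mp_neuron([0, 0], [1, 1], 1) == 0

# NOT: one inhibitory weight (-1), threshold T = 0.
assert mp_neuron([1], [-1], 0) == 0
assert mp_neuron([0], [-1], 0) == 1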

3.2 Rosenblatt Model:
In 1958 Frank Rosenblatt proposed another neuron model, also called the perceptron model. The perceptron consists of outputs from sensory units fed to a fixed set of association units, the outputs of which are in turn fed to a neuron. The association units perform predetermined manipulations on their inputs. The Rosenblatt model of the neuron is shown in the figure below.

The main deviation from the McCulloch-Pitts model is that learning (adjustment of the weights) is incorporated in the operation of the unit.


The desired output is compared with the actual output, and the error is used to adjust the weights:

Error = d - o

where d is the desired output and o is the actual output. The weight change is

Δw_i = η (d - o) x_i

where η is the learning rate parameter.
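
The following Python sketch applies this rule to learn the AND function. The learning rate value, the epoch count, and the treatment of the threshold as a trainable quantity are assumptions made for the example, not specifics from the report.

# Sketch of perceptron training with the rule delta_w_i = eta * (d - o) * x_i.
def train_perceptron(samples, eta=0.1, epochs=20):
    n = len(samples[0][0])
    w = [0.0] * n        # weights, initialized to zero
    t = 0.0              # threshold, adjusted alongside the weights
    for _ in range(epochs):
        for x, d in samples:
            net = sum(wi * xi for wi, xi in zip(w, x))
            o = 1 if net >= t else 0          # actual output
            err = d - o                       # error = d - o
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            t -= eta * err                    # too-low output lowers the threshold
    return w, t

# Learn the AND function from its truth table.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, t = train_perceptron(data)
outputs = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0 for x, _ in data]
print(outputs)  # [0, 0, 0, 1]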

4. NEURON FOR ARTIFICIAL NEURAL SYSTEM


The general neuron model used for artificial neural systems is shown in the figure below. This representation shows a set of weights and the neuron's processing unit, or node. The major components of an artificial neuron are:

1. Weighting Factors
2. Summation Function
3. Activation (Transfer) Function
4. Output Function
5. Error Function
6. Learning Function

Fig.: General symbol of a neuron, consisting of a processing node and synaptic connections.

Weighting Factors:
Each neuron input has its own relative weight. Weights perform the same function as the varying synaptic strengths of biological neurons: they are adaptive coefficients within the network that determine the intensity of the input signal as registered by the artificial neuron. In the figure above, the weighting factors form the vector

W = [w1 w2 ... wn]^t

and X is the input vector:

X = [x1 x2 ... xn]^t

Summation Function:
The first step in processing is to compute the weighted sum of all of the inputs:

SUM = x1*w1 + x2*w2 + ... + xn*wn, or net = W^t X.

The summation function can also select the minimum, maximum, majority, or product of the inputs, or apply one of several normalizing algorithms.
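
A small Python sketch of this step; the weight and input values are arbitrary illustrations.

# Weighted sum net = W^t X (the numbers are made up for illustration).
weights = [0.5, -0.2, 0.1]
inputs  = [1.0,  2.0, 3.0]
net = sum(w * x for w, x in zip(weights, inputs))
print(net)  # 0.5 - 0.4 + 0.3 ≈ 0.4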

Activation Function:
The activation function allows the summation output to vary with respect to time. The summation total is compared with some threshold to determine the neuron output: if the sum is greater than the threshold value, the processing element generates a signal. The activation (transfer) function is generally non-linear. Typical activation functions used are:

Continuous Functions (Soft-Limiting Functions): These are of two types:

Bipolar Continuous Function: f(net) = 2/(1 + exp(-λ·net)) - 1

Unipolar Continuous Function: f(net) = 1/(1 + exp(-λ·net))

where λ is the steepness factor.

Signum Functions (Hard-Limiting Functions): These are also of two types:

Bipolar Signum Function: f(net) = sgn(net) = +1 if net > 0, -1 if net < 0

Unipolar Signum Function: f(net) = sgn(net) = 1 if net > 0, 0 if net < 0
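
A Python sketch of the four functions above; λ is written as lam, and its default value of 1.0 is an assumption made for illustration.

import math

def bipolar_continuous(net, lam=1.0):
    # f(net) = 2/(1 + exp(-lam*net)) - 1, range (-1, 1)
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

def unipolar_continuous(net, lam=1.0):
    # f(net) = 1/(1 + exp(-lam*net)), range (0, 1)
    return 1.0 / (1.0 + math.exp(-lam * net))

def bipolar_signum(net):
    # Hard limiter: +1 above zero, -1 below (the value at net == 0 is a convention).
    return 1 if net > 0 else -1

def unipolar_signum(net):
    return 1 if net > 0 else 0

print(bipolar_continuous(0.4))   # ~0.197
print(unipolar_continuous(0.4))  # ~0.599
print(bipolar_signum(0.4))       # 1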

Output Function:
Each processing element is allowed one output signal. Normally, the output is directly equivalent to the transfer function's result:

O = f(W^t X), or equivalently O = f(Σ_{i=1..n} w_i x_i).

Error Function:
The difference between the desired output and the actual output produces the raw error. The raw error is then transformed by the error function to match the particular network:

Error = d - o

Learning Function:
The learning (adaptation) function is used to modify the variable connection weights on the inputs of each processing element. The general learning rule adopted in neural network studies is given by Amari (1990), and it states that the weight vector w_i = [w_i1 w_i2 ... w_in]^t increases in proportion to the product of the input x and a learning signal r. The learning signal r is in general a function of w_i, x, and sometimes of the teacher's signal d_i:

r = r(w_i, x, d_i)

The increment of the weight vector w_i produced by the learning step according to the general learning rule is:

Δw_i = c · r(w_i, x, d_i) · x

where c is the learning constant.

Types of Learning: There are two types of learning methods:
1. Supervised Learning
2. Unsupervised Learning
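
As one concrete instance of the general rule above, the Python sketch below uses the Hebbian choice of learning signal, r = f(w^t x), which needs no teacher's signal. The learning constant c = 0.1 and the weight and input vectors are assumed values for illustration.

import math

def f(net):
    # Bipolar continuous activation, as defined earlier (steepness factor 1).
    return 2.0 / (1.0 + math.exp(-net)) - 1.0

def learning_step(w, x, c=0.1):
    # General rule: delta_w = c * r(w, x, d) * x, here with Hebbian r = f(w^t x).
    net = sum(wi * xi for wi, xi in zip(w, x))
    r = f(net)
    return [wi + c * r * xi for wi, xi in zip(w, x)]

w = [1.0, -1.0, 0.0]
x = [1.0, -2.0, 1.5]
w = learning_step(w, x)   # weights change in proportion to r and x
print(w)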

5. DIFFERENCE WITH TRADITIONAL COMPUTING


Programmed computing, which has dominated information processing for more than four decades, is based on decision rules and algorithms encoded into the form of computer programs. The algorithms and program-controlled computing necessary to operate conventional computers have their counterparts in the learning rules and information recall procedures of a neural network. These are not exact counterparts, however, because neural networks go beyond digital computers: they can progressively alter their processing structure in response to the information they receive. A brief comparison of traditional computing and neural network computing is given in the accompanying table.

6. APPLICATIONS OF ANNs


Neural networks are used in various areas, including:

1. Pattern Restoration
2. Classifiers
3. Language Processing
4. Character Recognition
5. Image (Data) Compression

7. LIMITATIONS OF ANNs

The major limitation of ANNs is the operational problem encountered when attempting to simulate the parallelism of neural networks on conventional serial hardware.

Solution:
One solution is to implement neural networks directly in hardware, but such implementations require a lot of development effort.

8. REFERENCES


1. Jacek M. Zurada, Introduction to Artificial Neural Systems.
2. M. Hajek, Neural Networks. http://www.cs.unp.ac.za/notes/NeuralNetworks2005.pdf
3. K. Ming Leung, Introduction to Artificial Neural Networks. http://cis.poly.edu/~mleung/CS6673/s09/introductionANN.pdf
4. Vincent Cheung and Kevin Cannons, An Introduction to Neural Networks. http://www2.econ.iastate.edu/tesfatsi/NeuralNetworks.CheungCannonNotes.pdf
5. Ajith Abraham, Artificial Neural Networks. http://www.softcomputing.net/ann_chapter.pdf
6. Introduction to Artificial Neural Networks. http://www.cse.unr.edu/~bebis/MathMethods/NNs/lecture.pdf

