CERTIFICATE
This is to certify that the project entitled "ARTIFICIAL NEURAL NETWORK" has been
carried out by BALVEER SINGH under my guidance in partial fulfillment of the degree of
Bachelor of Engineering in Computer Engineering, Rajasthan Technical University, Kota, during
the academic year 2009-2010. To the best of my knowledge and belief, this work has not been
submitted elsewhere for the award of any other degree.
Introduction:
A neural network is characterized by its pattern of connections between the neurons, referred to
as the network architecture, and by its method of determining the weights on the connections, called
the training or learning algorithm. The weights are adjusted on the basis of data. In other words,
neural networks learn from examples and exhibit some capability for generalization beyond the
training data. This feature makes such computational models very appealing in application
domains where one has little or incomplete understanding of the problem to be solved, but where
training data is readily available. Neural networks also have great potential for parallelism,
since the computations of the components are largely independent of each other.
Artificial neural networks are viable computational models for a wide variety of
problems. Useful applications have already been designed, built, and commercialized for
various areas in engineering, business, and biology. These include pattern classification, speech
synthesis and recognition, adaptive interfaces between humans and complex physical systems,
function approximation, image compression, associative memory, clustering, forecasting and
prediction, combinatorial optimization, nonlinear system modeling, and control. Although they
may have been inspired by neuroscience, the majority of these networks have close relevance or
counterparts to traditional statistical methods such as non-parametric pattern classifiers,
clustering algorithms, nonlinear filters, and statistical regression models.
NEURAL NETWORKS
Artificial neural networks have emerged from studies of how the brain works. The human
brain consists of many millions of individual processing elements, called neurons, that are highly
interconnected.
Information from the outputs of other neurons, in the form of electric pulses, is received by a cell
at connections called synapses. The synapses connect to the cell inputs, or dendrites, and the
single output of the neuron appears at the axon. An electric pulse is sent down the axon when the
total input stimulus over all of the dendrites exceeds a certain threshold.
Artificial neural networks are made up of simplified individual models of the biological neuron
that are connected together to form a network. Information is stored in the network in the form of
weights, or connection strengths, associated with the synapses in the artificial neuron
models.
Many different types of neural networks are available; multilayer neural networks are the
most popular and are extremely successful in pattern recognition problems. An artificial
neuron model is shown below. Each neuron input is weighted by W; changing the weights of an
element will alter the behavior of the whole network. The output y is obtained by summing the
weighted inputs to the neuron and passing the result through a nonlinear activation function, f().
Multilayer networks consist of an input layer, one or more hidden layers, and an output
layer, each made up of a number of nodes. Data flows through the network in one direction
only, from input to output; hence this type of network is called a feed-forward network.
A two-layered network is shown below.
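The weighted-sum-and-activation computation described above can be sketched in a few lines of Python. The weights, biases, and the choice of tanh as f() below are illustrative assumptions, not values from the text:

```python
import math

# Model of a single artificial neuron: weighted sum of inputs plus a bias,
# passed through a nonlinear activation f() (tanh here, an illustrative choice).
def neuron(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)

# A two-layer feed-forward pass: data flows from input to output only.
def feed_forward(x, hidden_layer, output_neuron):
    h = [neuron(x, w, b) for w, b in hidden_layer]
    out_w, out_b = output_neuron
    return neuron(h, out_w, out_b)

# Illustrative weights and biases; changing any of them alters the whole output.
hidden_layer = [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)]
output_neuron = ([1.0, -1.0], 0.0)
y = feed_forward([1.0, 0.5], hidden_layer, output_neuron)
```

Because tanh saturates, the output always lies between -1 and 1 regardless of the weights.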
Besides its overall architecture, a network is characterized by its:
• Unit characteristics (which may vary within the network and within subdivisions of
the network such as layers).
• Training procedures.
FEATURES OF ANN
• Their ability to represent nonlinear relations makes them well suited for
nonlinear modeling in control systems.
• Adaptation and learning in uncertain systems through off-line and on-line weight
adaptation.
• Neural networks can handle a large number of inputs and can have many outputs.
Each network architecture has a learning algorithm associated with it. The most
popular architecture used for control purposes is the multilayer neural network
(MLNN) trained with the error back-propagation (EBP) algorithm.
• VLSI implementability. The massively parallel nature of a neural network makes
it potentially fast for the computation of certain tasks. This same feature makes a neural
network well suited for implementation using very-large-scale integration (VLSI) technology.
One particularly beneficial virtue of VLSI is that it provides a means of capturing truly
complex behavior in a highly hierarchical fashion.
Artificial neural networks (ANNs) belong to the adaptive class of techniques in the
machine learning arena. ANNs are used as a solution to various problems; however, their
success as an intelligent pattern recognition methodology has been most prominently
advertised. Most models of ANNs are organized in the form of a number of processing units
called artificial neurons, or simply neurons, and a number of weighted connections, referred to
as artificial synapses, between the neurons. The process of building an ANN, similar to its
biological inspiration, involves a learning episode. During the learning episode, the network
observes a sequence of recorded data and adjusts the strengths of its synapses according to a
learning algorithm, based on the observed data. This process of adjusting the synaptic
strengths in order to be able to accomplish a certain task, much like in the brain, is called
"learning". Learning algorithms are generally divided into two types, supervised and
unsupervised. Supervised algorithms require labeled training data; in other words, they
require more a priori knowledge about the training set.
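As a minimal illustration of supervised learning from labeled data, a perceptron (the simplest trainable neuron) can learn the logical AND function; the data set, learning rate, and epoch count below are illustrative assumptions:

```python
# Supervised learning needs labeled data: each input vector comes with a
# target output, and the label drives every weight change.
def train_perceptron(samples, epochs=20, rate=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                 # requires the label: supervised
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Logical AND given as four labeled input/output pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

An unsupervised method would receive only the input vectors, with no targets, and would have to group them on its own.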
MODELS OF A NEURON
A neuron is an information-processing unit that is fundamental to the operation of a
neural network. Fig. 2.1a shows the model of a neuron, which forms the basis for designing
artificial neural networks. The three basic elements of the neuronal model are a set of
synapses, each characterized by a weight; an adder for summing the weighted input signals;
and an activation function for limiting the amplitude of the neuron's output.
NETWORK ARCHITECTURES
The manner in which the neurons of a neural network are structured is intimately
linked with the learning algorithm used to train the network. Network structures can be broadly
divided into three classes of network architectures.
In a layered neural network the neurons are organized in the form of layers. The
simplest form of a layered network consists of an input layer of source nodes that projects
onto an output layer of neurons, but not vice versa. In other words, this network is strictly of a
feed-forward, or acyclic, type.
Such a network is called a single-layer network, with the designation "single layer"
referring to the output layer of computational nodes. The input layer of source nodes is not
counted, as no computation is performed there.
In a rather loose sense the network acquires a global perspective despite its local
connections, due to the extra set of synaptic connections and the extra dimension of neural
interactions. The ability of hidden neurons to extract higher-order statistics is particularly
valuable when the size of the input is large.
Recurrent Networks
A recurrent network distinguishes itself from a feed-forward neural network in that it has
at least one feedback loop. For example, a recurrent network may consist of a single layer of
neurons with each neuron feeding its output signal back to the inputs of all the other neurons,
as illustrated in the architectural graph. The presence of feedback loops has a profound impact
on the learning capability of the network and on its performance. Moreover, the feedback
loops involve the use of particular branches composed of unit-delay elements (denoted by z^-1),
which result in nonlinear dynamical behavior, assuming that the neural network contains
nonlinear units.
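The effect of a unit-delay (z^-1) feedback branch can be sketched with a single recurrent neuron; the weights and the input pulse below are illustrative assumptions:

```python
import math

# A single recurrent neuron: its output is fed back to its input through a
# unit-delay element (z^-1), so y[n-1] influences y[n].
def recurrent_step(x, y_prev, w_in=0.8, w_fb=0.5):
    return math.tanh(w_in * x + w_fb * y_prev)

y = 0.0                         # content of the unit delay before the first step
outputs = []
for x in [1.0, 0.0, 0.0, 0.0]:  # a pulse followed by zero input
    y = recurrent_step(x, y)    # y_prev arrives through the z^-1 branch
    outputs.append(y)
```

After the pulse, the output decays gradually rather than dropping straight to zero: the feedback loop gives the (nonlinear) network dynamical behavior of its own.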
Neural networks are models that may be used to approximate, summarize, classify,
generalize or otherwise represent real situations. Before models can be used they have to be
trained, or made to 'fit', the representative data. The model parameters, e.g., the number of
layers, the number of units in each layer, and the weights of the connections between them, must be
determined. In ordinary statistical terms this is called regression. There are two fundamental
types of training with neural networks: supervised and unsupervised learning. For supervised
training, as in regression, the data used for training consist of independent variables (also
referred to as feature variables or predictor variables) and dependent variables (target values).
The independent variables (input to the neural network) are used to predict the dependent
variables (output from the network). Unsupervised training does not have dependent (target)
values supplied: the network is supposed to cluster the data automatically into meaningful sets.
The fundamental idea behind training, for all neural networks, is to pick a set of
weights, apply the inputs to the network, and gauge the network's performance with this set
of weights. If the network does not perform well, the weights are modified by an algorithm
specific to each architecture and the procedure is repeated. This iterative process continues
until some pre-specified criterion has been achieved. A training pass through all vectors of
the input data is called an epoch. Iterative changes can be made to the weights with each
input vector, or changes can be made after all input vectors have been processed. Typically,
weights are iteratively modified by epochs.
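This epoch-based procedure can be sketched with the delta rule on a single linear unit, modifying the weights after each input vector; the data set (points on y = 2x + 1) and the learning rate are illustrative assumptions:

```python
# Epoch-based training sketch: pick weights, apply the inputs, gauge the
# error, modify the weights, and repeat until the fit is good enough.
def train(samples, epochs=200, rate=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):       # one pass through all input vectors = one epoch
        for x, target in samples:
            y = w * x + b         # network output with the current weights
            err = target - y      # gauge the network's performance
            w += rate * err * x   # modify the weights per the delta rule
            b += rate * err
    return w, b

# Four training vectors sampled from y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
```

After enough epochs the unit recovers the underlying slope and intercept. Updating after every vector, as here, is the "iterative" mode; accumulating the changes and applying them once per epoch is the batch alternative mentioned above.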
LEARNING TECHNIQUES
Learning rules are algorithms for slowly altering the connection weights to achieve a
desirable goal, such as minimization of an error function. The following are the commonly
used learning algorithms for neural networks:
• Multilayer neural network (MLNN)
• Error back-propagation (EBP)
• Radial basis functions (RBF)
• Reinforcement learning
• Temporal difference learning
• Adaptive resonance theory (ART)
• Genetic algorithms
Modeling of a process using an ANN can be carried out in either of the following
two ways:
• Forward modeling
• Direct inverse modeling
FORWARD MODELING
The basic configuration is used for nonlinear system modeling and identification using a
neural network. The number of input nodes specifies the dimension of the network input. In
the system identification context, the network input and output are assigned to the network
input vector.
This approach employs the generalized learning model suggested by Psaltis et al. to learn the
inverse dynamic model of the plant as a feed-forward controller. Here, during the training stage,
the control inputs are chosen randomly within their working range, and the corresponding plant
output values are stored. Such training of the controller cannot guarantee the inclusion of all
possible situations that may occur in the future; thus, the developed model lacks robustness.
The design of the identification experiment used to generate data for training the neural
network models is crucial, particularly in nonlinear problems. The training data must contain
process input-output information over the entire operating range. In such an experiment, the
type of manipulated-variable signal used is very important.
The traditional pseudo-random binary sequence (PRBS) is inadequate because the resulting
training data set contains most of its steady-state information at only two levels, allowing only
a linear model to be fitted. To overcome this problem with binary signals and to provide data
points throughout the range of the manipulated variables, the PRBS must be replaced by a
multilevel sequence. This kind of modeling of the process plays a vital role in the ANN-based
direct inverse control configuration.
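A multilevel test sequence of the kind described can be sketched as follows; the levels, hold times, and seed are illustrative assumptions:

```python
import random

# Multilevel pseudo-random sequence sketch: unlike a binary PRBS, the
# manipulated variable is held for a random interval at a level drawn from
# several values spanning its operating range.
def multilevel_sequence(levels, n_steps, min_hold=3, max_hold=8, seed=0):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_steps:
        level = rng.choice(levels)              # one of several levels, not two
        hold = rng.randint(min_hold, max_hold)  # hold it for a few samples
        sequence.extend([level] * hold)
    return sequence[:n_steps]

u = multilevel_sequence(levels=[0.0, 0.25, 0.5, 0.75, 1.0], n_steps=100)
```

Exciting the plant at several levels gives the training set steady-state information across the whole operating range, so a nonlinear model can be fitted.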
This control configuration uses the inverse plant model. For direct inverse control,
the network is required to be trained offline to learn the inverse dynamics of the plant. The
networks are usually trained using the output errors of the networks, and not those of the plant.
The output error of the network is defined as
En = (1/2)(ud - on)^2
where En is the network output error, ud is the actual control signal required to get the desired
process output, and on is the network output. When the network is to be trained as a controller,
the output errors of the network are unknown. Once the network trained using direct inverse
modeling has learned the inverse system model, it is directly placed in series with the plant to be
controlled, in the configuration shown in the figure. Since the inverse model of the plant is an
off-line trained model, it lacks robustness.
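Minimizing En = (1/2)(ud - on)^2 by gradient descent can be sketched on a toy one-weight linear "network"; the network form, learning rate, and recorded data below are illustrative assumptions, not from the text:

```python
# Offline training of the inverse model: gradient descent on the network
# output error En = 0.5 * (ud - on)**2.
def network(w, y):
    return w * y                       # on: network output for plant output y

def train_inverse(data, w=0.0, rate=0.02, epochs=50):
    for _ in range(epochs):
        for y, ud in data:             # (plant output, control that produced it)
            on = network(w, y)
            w += rate * (ud - on) * y  # since dEn/dw = -(ud - on) * y
    return w

# Recorded pairs from a toy plant whose inverse law is u = 0.5 * y.
data = [(2.0, 1.0), (4.0, 2.0), (6.0, 3.0)]
w = train_inverse(data)
```

Here the recorded plant outputs serve as network inputs and the stored control signals as targets, so the network converges to the inverse mapping u = 0.5y.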
In direct adaptive control, the network is trained on-line, and the weights of the
connections are updated during each sampling interval. In this case, the cost function is the plant
output error rather than the network output error. The configuration of DAC is shown in the figure.
The limitation of this configuration is that one must have some knowledge of the plant
dynamics, i.e., the Jacobian matrix of the plant. To solve this problem, Psaltis et al. initially
proposed a technique for determining the partial derivatives of the plant at its operating
point; Xianzhang et al. and Yao Zhang et al. presented a simpler approach, in which the
modifications of the weights are carried out using only the sign of the plant Jacobian.
The IMC uses two neural networks for implementation. In this configuration, one
neural network is placed in parallel with the plant and the other neural network in series with
the plant. The structure of the nonlinear IMC is shown in Fig. 4.
The IMC provides a direct method for the design of nonlinear feedback controllers. If a
good model of the plant is available, the closed-loop system gives exact set-point tracking
despite unmeasured disturbances acting on the plant.
For the development of NN based IMC, the following two steps are required:
• Plant identification
• Plant inverse model
Plant identification is carried out using the forward modeling technique. Once the network
is trained, it represents the dynamics of the plant; the error signal used to adjust the
network weights is the difference between the plant output and the model output.
The neural network used to represent the inverse of the plant (NCC) is trained using
the plant itself. The error signal used to train the plant inverse model is the difference between
the desired plant output and the model output.
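The resulting loop can be sketched with a toy linear plant; the plant y = 2u, the (assumed perfectly trained) forward and inverse networks, and the disturbance value are all illustrative assumptions:

```python
# IMC loop sketch: one network sits in parallel with the plant, the inverse
# network sits in series with it as the controller.
def plant(u):
    return 2.0 * u                    # the real plant

def forward_model(u):
    return 2.0 * u                    # network trained in parallel with the plant

def inverse_model(v):
    return v / 2.0                    # network trained in series with the plant

setpoint = 1.0
disturbance = 0.3                     # unmeasured disturbance on the plant output
feedback = 0.0
for _ in range(20):
    v = setpoint - feedback           # model-mismatch feedback signal
    u = inverse_model(v)              # series (controller) network computes u
    y = plant(u) + disturbance        # actual plant output
    feedback = y - forward_model(u)   # parallel network isolates the disturbance
```

With a good model, the feedback signal equals the disturbance after one step and the plant output settles exactly on the set point, illustrating why IMC rejects unmeasured disturbances.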
DIRECT NEURAL NETWORK MODEL REFERENCE ADAPTIVE CONTROL:
The neural network approximates a wide variety of nonlinear control laws by adjusting
its weights in training to achieve the desired approximation accuracy. One possible MRAC
structure based on a neural network is shown below.
In this configuration, the control system attempts to make the plant output Yp(t) follow
the reference model output asymptotically. The error signal used to train the neural network
controller is the difference between the model and plant outputs; in principle, this network
works like the direct adaptive neural control system.
APPLICATIONS:
Aerospace
Automotive
Banking
Defense
Electronics
– Code sequence prediction, integrated circuit chip layout, process control, chip
failure analysis, machine vision, voice synthesis, nonlinear modeling
Financial
– Real estate appraisal, loan advisor, mortgage screening, corporate bond rating,
credit line use analysis, portfolio trading program, corporate financial analysis,
currency price prediction
Manufacturing
Medical
– Breast cancer cell analysis, EEG and ECG analysis, prosthesis design,
optimization of transplant times, hospital expense reduction, hospital quality
improvement, emergency room test advisement
Robotics
Speech
Telecommunications
Transportation
CONCLUSION
Artificial neural networks have thus paved the way for automatic analysis of
biological data. The simultaneous analysis of millions of genes at a time has driven the new
field of computational biology: genome informatics.
But, like every cloud with a silver lining, genomic engineering can be misused too,
which can be a real threat to mankind. Let us hope that man puts his brain to the welfare of
his fellow beings and not against them.