
(Approved by AICTE & affiliated to UPTU, Lucknow)

2015-2016
A Seminar Report On
Artificial Neural Network

Submitted in partial fulfillment of the requirement for the award of
the degree of B.Tech in Information Technology

DEPARTMENT OF COMPUTER SCIENCE &
INFORMATION TECHNOLOGY ENGG.

Submitted By:-
ANJALI
Branch- IT
Semester- 6th

Dr. Anand Sharma                     Mr. Konark Sharma
(HOD, CS/IT Dept.)                   (Seminar-in-Charge)
CERTIFICATE
This is to certify that the Seminar Report entitled “Artificial Neural Network”
submitted by Ms. ANJALI is a record of the student’s own work, carried out
individually under my guidance, in partial fulfillment of the degree of Bachelor
of Technology in Information Technology of Aligarh College of Engineering &
Technology during the 6th semester.
It is further certified to the best of my knowledge and belief that this work has
not been submitted elsewhere for the award of any other degree.

___________________
Mr. Konark Sharma
(Seminar In-charge)

ACKNOWLEDGEMENT

All praise to Almighty, the most beneficent, the most merciful, who bestowed
upon us the courage, patience and strength to embark upon this work and carry
it to the completion.
I feel privileged to express my deep sense of gratitude and highest appreciation
to
Mr. Konark Sharma,
Asst. Professor,
Dept. of CS/IT Engg.,
for his constant support and for providing me with invaluable suggestions and
guidance. I sincerely thank him for his help with the literature, his critical
comments and the moral support he rendered at all stages of the discussion, all
of which were deeply helpful.
I also thank my friends and parents for their moral support and timely ideas
during the completion of this seminar. I promise to repay their help and
guidance by supporting others in similar or even better ways throughout
my life.

___________________
Anjali

INDEX
1) Introduction

2) ANN’s Basic Structure

3) Types of ANNs

4) Machine Learning

5) Comparisons

6) Properties of ANNs

7) Applications of ANNs

8) Advantages

9) Disadvantages

10) Conclusion

11) References

INTRODUCTION
In machine learning and cognitive science, artificial neural networks (ANNs)
are a family of models inspired by biological neural networks (the central nervous
systems of animals, in particular the brain) and are used to estimate or
approximate functions that can depend on a large number of inputs and are
generally unknown. Artificial neural networks are generally presented as systems
of interconnected "neurons" which exchange messages between each other. The
connections have numeric weights that can be tuned based on experience, making
neural nets adaptive to inputs and capable of learning.

For example, a neural network for handwriting recognition is defined by a set of
input neurons which may be activated by the pixels of an input image. After being
weighted and transformed by a function (determined by the network's designer),
the activations of these neurons are then passed on to other neurons. This process
is repeated until finally, an output neuron is activated. This determines which
character was read.

Like other machine learning methods – systems that learn from data – neural
networks have been used to solve a wide variety of tasks that are hard to solve
using ordinary rule-based programming, including computer vision and speech
recognition.
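The forward pass described in the handwriting example above can be sketched in a few lines of Python. The layer sizes, pixel values and weights below are hypothetical, chosen only to illustrate how activations flow from layer to layer; a real handwriting network would learn its weights from data.

```python
import math

def sigmoid(x):
    # squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_rows, biases):
    # each neuron takes a weighted sum of all inputs, then a non-linear transform
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_rows, biases)]

# a hypothetical 4-pixel image flowing through a 4 -> 3 -> 2 network
pixels = [0.0, 1.0, 1.0, 0.0]
hidden = layer(pixels, [[0.2, -0.5, 0.8, 0.1],
                        [0.7, 0.3, -0.2, 0.4],
                        [-0.6, 0.9, 0.5, -0.1]], [0.0, 0.0, 0.0])
outputs = layer(hidden, [[1.0, -1.0, 0.5],
                         [-0.5, 1.0, 0.3]], [0.0, 0.0])
# the most strongly activated output neuron determines which character was read
predicted = outputs.index(max(outputs))
```

Each call to `layer` repeats the weight-and-transform step, exactly as the description above says, until an output neuron is activated.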

Background
Examinations of humans' central nervous systems inspired the concept of
artificial neural networks. In an artificial neural network, simple artificial nodes,
known as "neurons", "neurodes", "processing elements" or "units", are connected
together to form a network which mimics a biological neural network.
There is no single formal definition of what an artificial neural network is.
However, a class of statistical models may commonly be called "neural" if it
possesses the following characteristics:
1. Contains sets of adaptive weights, i.e. numerical parameters that are tuned
by a learning algorithm, and
2. Capability of approximating non-linear functions of their inputs.

The adaptive weights can be thought of as connection strengths between neurons,
which are activated during training and prediction.

Neural networks are similar to biological neural networks in the performing of
functions collectively and in parallel by the units, rather than there being a clear
delineation of subtasks to which individual units are assigned. The term "neural
network" usually refers to models employed in statistics, cognitive psychology
and artificial intelligence. Neural network models which emulate the central
nervous system and the rest of the brain are part of theoretical neuroscience and
computational neuroscience.

In modern software implementations of artificial neural networks, the approach
inspired by biology has been largely abandoned for a more practical approach
based on statistics and signal processing. In some of these systems, neural
networks or parts of neural networks (like artificial neurons) form components in
larger systems that combine both adaptive and non-adaptive elements. While the
more general approach of such systems is more suitable for real-world problem
solving, it has little to do with the traditional, artificial intelligence connectionist
models. What they do have in common, however, is the principle of non-linear,
distributed, parallel and local processing and adaptation. Historically, the use of
neural network models marked a directional shift in the late eighties from high-
level (symbolic) artificial intelligence, characterized by expert systems with
knowledge embodied in if-then rules, to low-level (sub-symbolic) machine
learning, characterized by knowledge embodied in the parameters of a dynamical
system.

The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a
neural network as:
"...a computing system made up of a number of simple, highly interconnected
processing elements, which process information by their dynamic state response
to external inputs.”

ANN’s BASIC STRUCTURE
The idea of ANNs is based on the belief that the working of the human brain can
be imitated, by making the right connections, using silicon and wires as living
neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons.
Each neuron is connected to thousands of other cells by axons. Stimuli from the
external environment, or inputs from sensory organs, are accepted by dendrites.
These inputs create electric impulses, which quickly travel through the neural
network. A neuron can then either pass the message on to other neurons or stop
it. The working of the human neural system is shown below:

ANNs are composed of multiple nodes, which imitate the biological neurons of
the human brain. The neurons are connected by links and they interact with each
other. The nodes can take input data and perform simple operations on the data.
The result of these operations is passed to other neurons. The output at each
node is called its activation or node value.

Each link is associated with a weight. ANNs are capable of learning, which takes
place by altering the weight values. The following illustration shows a simple ANN.

The basic artificial neuron is as follows:
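As a minimal sketch of such a basic neuron, the Python function below computes a weighted sum of its inputs and fires through a simple step activation. The weights, bias and threshold values are illustrative choices, not part of any standard model.

```python
def artificial_neuron(inputs, weights, bias, threshold=0.0):
    # net input: the weighted sum of the incoming links, plus a bias
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # step activation: the neuron either fires (1) or stays silent (0)
    return 1 if net > threshold else 0

# with these hand-picked weights the neuron behaves like an AND gate:
# only when both inputs fire does the net input (0.2) clear the threshold
assert artificial_neuron([1, 1], [0.6, 0.6], bias=-1.0) == 1
assert artificial_neuron([1, 0], [0.6, 0.6], bias=-1.0) == 0
```

Learning, as described above, would consist of altering the two weight values until the neuron produces the desired outputs.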

TYPES OF ANNs
There are two Artificial Neural Network topologies − FeedForward and
FeedBack.

FeedForward ANN
The information flow is unidirectional. A unit sends information to another unit
from which it does not receive any information. There are no feedback loops.
FeedForward ANNs are used in pattern generation, recognition and
classification. They have fixed inputs and outputs.

FeedBack ANN

Here, feedback loops are allowed. Such networks are used in content-addressable
memories.
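A classic example of a feedback network used as a content-addressable memory is the Hopfield network. The sketch below is a minimal, illustrative version: Hebbian training stores a ±1 pattern in the weight matrix, and repeated feedback updates recover it from a corrupted cue.

```python
def train_hopfield(patterns):
    # Hebbian learning: strengthen w[i][j] when units i and j agree (+/-1 values)
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    # feedback loop: each unit's output is fed back as input until it settles
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            net = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if net >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]   # the stored pattern with one bit flipped
```

Calling `recall(w, noisy)` returns the stored pattern: the memory is addressed by (partial) content rather than by location.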

Working of ANNs

In the topology diagrams shown, each arrow represents a connection between two
neurons and indicates the pathway for the flow of information. Each connection
has a weight, a number that controls the signal between the two neurons.

If the network generates a “good or desired” output, there is no need to adjust the
weights. However, if the network generates a “poor or undesired” output or an
error, then the system alters the weights in order to improve subsequent results.
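A common form of this weight alteration is the error-driven update rule sketched below. This is a generic illustration, not a specific published algorithm; the learning rate of 0.1 is an arbitrary choice.

```python
def update_weights(weights, inputs, target, output, rate=0.1):
    # how far the actual output missed the desired output
    error = target - output
    # nudge each weight in the direction that reduces the error,
    # proportionally to the input that flowed through that connection
    return [w + rate * error * x for w, x in zip(weights, inputs)]

# desired output 1, actual output 0: both active inputs get stronger weights
new = update_weights([0.0, 0.0], [1, 1], target=1, output=0)
```

When the output is already "good or desired", the error is zero and the weights are left unchanged, exactly as described above.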

MACHINE LEARNING
ANNs are capable of learning and they need to be trained. There are several
learning strategies −

 Supervised Learning − It involves a teacher that is more knowledgeable than
the ANN itself. For example, the teacher feeds some example data about which
the teacher already knows the answers.
Consider pattern recognition: the ANN comes up with guesses while
recognizing. The teacher then provides the ANN with the answers. The
network compares its guesses with the teacher’s “correct” answers and
makes adjustments according to the errors.
In supervised training, both the inputs and the outputs are provided. The
network then processes the inputs and compares its resulting outputs
against the desired outputs. Errors are then propagated back through the
system, causing the system to adjust the weights which control the
network. This process occurs over and over as the weights are continually
tweaked. The set of data which enables the training is called the "training
set." During the training of a network, the same set of data is processed
many times as the connection weights are continually refined.
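The supervised cycle described above — process the inputs, compare against the desired outputs, adjust the weights, and repeat over the training set — can be sketched with a single perceptron-style unit. The AND-function training set, learning rate and epoch count below are illustrative choices.

```python
def predict(weights, bias, inputs):
    # the unit fires if the weighted sum of its inputs exceeds zero
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(samples, epochs=20, rate=0.1):
    # samples: list of (inputs, desired_output) pairs -- the "training set"
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):              # the same data is processed many times
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # propagate the error back into the weights that caused it
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# learn the AND function from labelled examples
and_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_set)
```

After training, `predict` reproduces the desired output for every example in the training set.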

 Unsupervised Learning − It is required when there is no example data set
with known answers. For example, searching for a hidden pattern. In this
case, clustering, i.e. dividing a set of elements into groups according to
some unknown pattern, is carried out based on the existing data sets.
At the present time, unsupervised learning is not well understood. This
adaptation to the environment is the promise that would enable science-fiction
types of robots to continually learn on their own as they encounter
new situations and new environments. Life is filled with situations where
exact training sets do not exist. Some of these situations involve military
action, where new combat techniques and new weapons might be
encountered. Because of this unexpected aspect of life and the human
desire to be prepared, there continues to be research into, and hope for, this
field. Yet, at the present time, the vast bulk of neural network work is in
systems with supervised learning, and supervised learning is achieving results.
Unsupervised learning is also called Adaptive Learning.
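A minimal illustration of unsupervised clustering is the k-means procedure below: the data carry no answers, and the groups emerge from the data alone. The one-dimensional points and the naive initialisation are deliberate simplifications for illustration.

```python
def kmeans_1d(points, k=2, iters=10):
    # start the centres on the first k points (a naive initialisation)
    centres = points[:k]
    for _ in range(iters):
        # assign every point to its nearest centre ...
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            groups[nearest].append(p)
        # ... then move each centre to the mean of its group
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]   # two hidden groups, no labels given
centres = kmeans_1d(data)
```

Without ever being told the answers, the procedure settles on centres near 1.0 and 9.0, the two hidden groups in the data.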

 Reinforcement Learning − This strategy is built on observation. The ANN
makes a decision by observing its environment. If the observation is
negative, the network adjusts its weights so that it can make a different,
better decision the next time.
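A toy sketch of this idea follows: a simple value-learning agent (invented for illustration, not a standard library API) rates its two actions, observes the environment's feedback, and adjusts its values after each negative observation. The two-action environment and the learning rate of 0.5 are assumptions.

```python
def choose(values):
    # greedy decision: pick the action the network currently rates best
    return max(range(len(values)), key=lambda a: values[a])

def reinforce(values, action, reward, rate=0.5):
    # nudge the chosen action's value toward the observed reward
    values[action] += rate * (reward - values[action])
    return values

# hypothetical two-action environment: action 0 is always penalised
values = [0.1, 0.0]
for _ in range(5):
    a = choose(values)
    reward = -1.0 if a == 0 else 1.0   # feedback observed from the environment
    values = reinforce(values, a, reward)
```

After the first negative observation the agent's rating of action 0 drops below zero, so on every later step it makes a different, better decision.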

COMPARISONS
A comparison of the computing approaches is given in the table below:

CHARACTERISTICS      TRADITIONAL COMPUTING            ARTIFICIAL NEURAL
                     (including Expert Systems)       NETWORKS

Processing style     Sequential                       Parallel

Functions            Logically (left-brained),        Gestalt (right-brained),
                     via rules, concepts,             via images, pictures,
                     calculations                     controls

Learning method      By rules (didactically)          By example (Socratically)

Applications         Accounting, word processing,     Sensor processing, speech
                     math, inventory, digital         recognition, pattern
                     communications                   recognition, text recognition
A comparison of artificial intelligence's expert systems and neural networks is
contained in the table below:

Characteristics          Von Neumann Architecture        Artificial Neural
                         (used for Expert Systems)       Networks

Processors               VLSI                            Artificial neural networks;
                         (traditional processors)        variety of technologies;
                                                         hardware development is ongoing

Memory                   Separate                        The same

Processing approach      Processes problem one rule      Multiple, simultaneous
                         at a time; sequential

Connections              Externally programmable         Dynamically self-programming

Self-learning            Only algorithmic                Continuously adaptable
                         parameters modified

Fault tolerance          None without special            Significant, in the very nature
                         processors                      of the interconnected neurons

Neurobiology in design   None                            Moderate

Programming              Through a complicated           Self-programming; but the
                         rule-based shell                network must be set up properly

Ability to be fast       Requires big processors         Requires multiple custom-built chips

PROPERTIES OF ANNs
Computational power
The multilayer perceptron is a universal function approximator, as proven by the
universal approximation theorem. However, the proof is not constructive
regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a
specific recurrent architecture with rational-valued weights (as opposed to full-
precision real-valued weights) has the full power of a Universal Turing
Machine using a finite number of neurons and standard linear connections.
Further, it has been shown that the use of irrational values for weights results in
a machine with super-Turing power.

Capacity
Artificial neural network models have a property called 'capacity', which roughly
corresponds to their ability to model any given function. It is related to the amount
of information that can be stored in the network and to the notion of complexity.

Convergence
Nothing can be said in general about convergence since it depends on a number
of factors. Firstly, there may exist many local minima. This depends on the cost
function and the model. Secondly, the optimization method used might not be
guaranteed to converge when far away from a local minimum. Thirdly, for a very
large amount of data or parameters, some methods become impractical. In
general, it has been found that theoretical guarantees regarding convergence are
an unreliable guide to practical application.

Generalization and statistics
In applications where the goal is to create a system that generalizes well to unseen
examples, the problem of over-training has emerged. This arises in convoluted or
over-specified systems when the capacity of the network significantly exceeds
the needed free parameters. There are two schools of thought for avoiding this
problem. The first is to use cross-validation and similar techniques to check for
the presence of over-training and to select hyperparameters that minimize the
generalization error. The second is to use some form of regularization. This is a
concept that emerges naturally in a probabilistic (Bayesian) framework, where
regularization can be performed by assigning a larger prior probability to simpler
models; but also in statistical learning theory, where the goal is to minimize two
quantities: the 'empirical risk' and the 'structural risk', which roughly correspond
to the error over the training set and the predicted error on unseen data due to
overfitting.
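The second school of thought can be illustrated with the simplest form of regularization, weight decay (the L2 penalty folded into a gradient-style update). This is a generic sketch; the learning rate and decay strength are arbitrary illustrative values.

```python
def update_with_l2(weights, inputs, error, rate=0.1, lam=0.01):
    # the ordinary error-driven step, plus a decay term that pulls every
    # weight toward zero: large weights are penalised, simpler models favoured
    return [w + rate * error * x - rate * lam * w
            for w, x in zip(weights, inputs)]

# with zero error the update does nothing except shrink the weights slightly,
# which limits the network's effective capacity and so discourages over-training
shrunk = update_with_l2([1.0, -2.0], [0.0, 0.0], error=0.0)
```

The decay term plays the role of the "larger prior probability over simpler models" mentioned above: solutions with small weights are preferred unless the data pull strongly in the other direction.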

APPLICATIONS OF ANNs
They can perform tasks that are easy for a human but difficult for a machine −

 Aerospace − Aircraft autopilots, aircraft fault detection.
 Automotive − Automobile guidance systems.
 Military − Weapon orientation and steering, target tracking, object
discrimination, facial recognition, signal/image identification.
 Electronics − Code sequence prediction, IC chip layout, chip failure
analysis, machine vision, voice synthesis.
 Financial − Real estate appraisal, loan advisor, mortgage screening,
corporate bond rating, portfolio trading program, corporate financial
analysis, currency value prediction, document readers, credit application
evaluators.
 Industrial − Manufacturing process control, product design and analysis,
quality inspection systems, welding quality analysis, paper quality
prediction, chemical product design analysis, dynamic modeling of
chemical process systems, machine maintenance analysis, project bidding,
planning, and management.
 Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design,
transplant time optimizer.
 Speech − Speech recognition, speech classification, text to speech
conversion.
 Telecommunications − Image and data compression, automated
information services, real-time spoken language translation.
 Transportation − Truck brake system diagnosis, vehicle scheduling,
routing systems.
 Software − Pattern Recognition in facial recognition, optical character
recognition, etc.
 Time Series Prediction − ANNs are used to make predictions on stocks
and natural calamities.
 Signal Processing − Neural networks can be trained to process an audio
signal and filter it appropriately in the hearing aids.
 Control − ANNs are often used to make steering decisions of physical
vehicles.
 Anomaly Detection − As ANNs are expert at recognizing patterns, they
can also be trained to generate an output when something unusual occurs
that misfits the pattern.

ADVANTAGES
 They involve human-like thinking.
 They handle noisy or missing data.
 They can work with a large number of variables or parameters.
 They provide general solutions with good predictive accuracy.
 The system has the property of continuous learning.
 They deal with the non-linearity of the world in which we live.
 A neural network can perform tasks that a linear program cannot.
 When an element of the neural network fails, the network can continue
without any problem because of its parallel nature.
 A neural network learns and does not need to be reprogrammed.
 They can be applied to a wide range of applications.

DISADVANTAGES
 A neural network needs training to operate.
 The architecture of a neural network is different from the architecture of
microprocessors and therefore needs to be emulated.
 Large neural networks require long processing times.

CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn
by example makes them very flexible and powerful. Furthermore, there is no need
to devise an algorithm in order to perform a specific task; i.e. there is no need to
understand the internal mechanisms of that task. They are also very well suited
for real time systems because of their fast response and computational times
which are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology and
psychology. They are regularly used to model parts of living organisms and to
investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that
someday 'conscious' networks might be produced. A number of scientists
argue that consciousness is a 'mechanical' property and that 'conscious' neural
networks are a realistic possibility.
Finally, we can say that even though neural networks have a huge potential we
will only get the best of them when they are integrated with computing, AI, fuzzy
logic and related subjects.

REFERENCES
1) https://en.wikipedia.org/wiki/Artificial_neural_network
2) http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/neural3.html
3) http://www.slideshare.net/nilmani14/neural-network-3019822
4) http://studymafia.org/artificial-neural-network-seminar-ppt-with-pdf-report/
5) http://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_issues.htm

