
The human brain is the center of the human nervous system.
The human brain has been estimated to contain 80 to 90 billion (~86 × 10⁹) neurons.
These neurons pass signals to each other via as many as 1000 trillion (10¹⁵, 1 quadrillion) synaptic connections.
The Neuron

4/29/2011 3

        

 
1. Neurons encode their outputs as a series of brief electrical pulses.
2. The neuron's cell body processes the incoming activations and converts them into output activations.
3. Dendrites are fibres which originate from the cell body and provide the receptive zones that receive activation from other neurons.
4. Axons are fibres acting as transmission lines that send activation to other neurons.
5. The junctions that allow signal transmission between the axons and dendrites are called synapses.
6. The process of transmission is by diffusion of chemicals called neurotransmitters across the synaptic cleft.
Artificial Neuron

• A vastly simplified model of real neurons.
1. A set of synapses (i.e. connections) brings in activations from other neurons.
2. A processing unit sums the inputs, and then applies a non-linear activation function (i.e. a squashing or threshold function).
Artificial Neural Network

• Artificial neural networks (ANNs) can be considered as simplified mathematical models of brain-like systems, and they function as parallel distributed computing networks.
• However, in contrast to conventional computers, which are programmed to perform specific tasks, most neural networks must be taught, or trained.
• They can learn new associations, new functional dependencies, and new patterns.

Why are Artificial Neural Networks worth studying?
1. They are extremely powerful computational devices.
2. Massive parallelism makes them very efficient.
3. They can learn and generalize from training data – so there is no need for enormous feats of programming.
4. They are particularly fault tolerant – this is equivalent to the "graceful degradation" found in biological systems.
5. They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.
6. In principle, they can do anything a symbolic/logic system can do, and more. (In practice, getting them to do it can be rather difficult...)
What are Artificial Neural Networks used for?
• As with the field of AI in general, there are two basic goals for neural network research:
– Brain modelling: The scientific goal of building models of how real brains work. This can potentially help us understand the nature of human intelligence, and formulate better teaching strategies, or better remedial actions for brain-damaged patients.
– Artificial system building: The engineering goal of building efficient systems for real-world applications. This may make machines more powerful, relieve humans of tedious tasks, and may even improve upon human performance.
• These should not be thought of as competing goals. We often use exactly the same networks and techniques for both. Frequently progress is made when the two approaches are allowed to feed into each other. There are fundamental differences though, e.g. the need for biological plausibility in brain modelling, and the need for computational efficiency in artificial system building.

Artificial Neural Network

• There are two important aspects of the network's operation to consider:
– Learning: The network must learn decision surfaces from a set of training patterns so that these training patterns are classified correctly.
– Generalization: After training, the network must also be able to generalize, i.e. correctly classify test patterns it has never seen before.
• Usually we want our neural networks to learn well, and also to generalize well.

ANNs as Weighted Directed Graphs

• Mathematically, ANNs can be represented as weighted directed graphs.
• For our purposes, we can simply think in terms of activation flowing between processing units via one-way connections.
ANN architectures

• Three common ANN architectures are:
– Single-Layer Feed-forward NNs: One input layer and one output layer of processing units. No feed-back connections.
– Multi-Layer Feed-forward NNs: One input layer, one output layer, and one or more hidden layers of processing units. No feed-back connections. The hidden layers sit in between the input and output layers, and are thus hidden from the outside world.
– Recurrent NNs: Any network with at least one feed-back connection. It may, or may not, have hidden units.
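As a concrete illustration, a forward pass through a tiny multi-layer feed-forward network can be sketched in plain Python (the 2-2-1 layer sizes, the sigmoid activation, and all weight values below are arbitrary assumptions for illustration, not taken from the slides):

```python
import math

def sigmoid(x):
    # Non-linear activation ("squashing") function.
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each unit computes a weighted sum of the incoming activations
    # plus its bias, then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-2-1 multi-layer feed-forward network; no feed-back connections.
hidden_w = [[0.5, -0.4], [0.3, 0.8]]   # two hidden units, two inputs each
hidden_b = [0.1, -0.2]
output_w = [[1.0, -1.0]]               # one output unit, two hidden inputs
output_b = [0.0]

x = [0.6, 0.9]
h = layer_forward(x, hidden_w, hidden_b)     # hidden layer, hidden from outside
y = layer_forward(h, output_w, output_b)     # output layer
print(y)
```

Activation flows strictly one way, input to hidden to output; a recurrent network would additionally feed some outputs back in as inputs on the next step.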

 
[Figure: examples of the three architectures – Single-Layer Feed-forward, Multi-Layer Feed-forward, and Recurrent networks]

Mathematical description

• Each unit computes the weighted sum of the activations xᵢ arriving on its input connections, using the connection weights wᵢ:
  net = Σᵢ wᵢ xᵢ
• The unit's output is obtained by passing this sum, offset by a threshold θ, through a non-linear activation function f:
  y = f( Σᵢ wᵢ xᵢ − θ )

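A unit of this kind can be sketched in Python as a weighted sum compared against a threshold (the step activation and the particular weights and threshold are illustrative assumptions):

```python
def unit_output(inputs, weights, theta):
    # Weighted sum of the incoming activations, compared against
    # the threshold theta; a simple step ("threshold") activation.
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net - theta > 0 else 0

# Example: a unit with one excitatory and one inhibitory input.
print(unit_output([1, 0], [0.7, -0.3], 0.5))  # net = 0.7 > 0.5 -> fires: 1
print(unit_output([0, 1], [0.7, -0.3], 0.5))  # net = -0.3 -> silent: 0
```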

Perceptron Learning Rule

• The Perceptron Learning Rule is an algorithm for adjusting the network weights to minimise the difference between the actual outputs and the desired (target) outputs.
• This adjustment can be understood as a form of gradient descent.

• Suppose we have a function f(x), and we want to change the value of x to minimise f(x).
• What we need to do depends on the gradient (derivative) of f(x). There are three cases:
– If df/dx > 0, then f(x) increases as x increases, so we should decrease x.
– If df/dx < 0, then f(x) decreases as x increases, so we should increase x.
– If df/dx = 0, then we are at a minimum (or maximum), so we should leave x unchanged.
Gradient Descent

• In summary, we can decrease f(x) by changing x in a series of small steps:
  Δx = −η (df/dx)
• where η is a small positive constant specifying how much we change x by, and the derivative df/dx tells us which direction to go in.
• If we repeatedly apply this update, f(x) will (assuming η is sufficiently small) keep descending towards its minimum.

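This update rule can be sketched for a one-dimensional example in Python (the quadratic f(x) = (x − 3)² and the learning rate η = 0.1 are illustrative assumptions):

```python
def f(x):
    return (x - 3.0) ** 2        # minimum at x = 3

def df_dx(x):
    return 2.0 * (x - 3.0)       # derivative of f

eta = 0.1   # small positive constant (learning rate)
x = 0.0     # arbitrary starting point
for _ in range(100):
    x = x - eta * df_dx(x)       # delta-x = -eta * df/dx

print(round(x, 4))  # close to 3.0, the minimum of f
```

Each step moves x against the sign of the derivative, exactly matching the three cases above: x decreases where df/dx > 0, increases where df/dx < 0, and stops moving where df/dx = 0.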
Neural Network Learning

• The aim of learning is to minimise the error E of the network's outputs.
• The idea is to make a series of small adjustments to the weights w_ij until the error E is 'small enough'.
• A systematic procedure for doing this requires knowledge of how the error E varies as we change the weights, i.e. the gradients ∂E/∂w_ij.

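Combining the error-driven weight adjustments with the gradient descent idea, a perceptron-style training loop can be sketched as follows (the AND-gate training data, the fixed threshold, and the learning rate are illustrative assumptions):

```python
# Train a single threshold unit on the AND function: for each pattern,
# nudge each weight by eta * (target - output) * input.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = [0.0, 0.0]   # random/zero initial weights, adjusted in small steps
theta = 0.5      # fixed threshold for this sketch
eta = 0.1        # small positive learning rate

def output(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta > 0 else 0

for _ in range(50):                       # repeated small adjustments
    for x, target in data:
        err = target - output(x)          # difference from desired output
        for i in range(len(w)):
            w[i] += eta * err * x[i]

print([output(x) for x, _ in data])  # [0, 0, 0, 1] once trained
```

After a few passes through the data the weights settle where every training pattern is classified correctly, illustrating "adjust until the error is small enough".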
Artificial Neural Network
Historical Perspective
• The study of brain-style computation has its roots over 50 years ago in the work of McCulloch and Pitts (1943) and slightly later in Hebb's famous Organization of Behavior (1949).
• However, the 1980s showed a rebirth in interest in neural computing:

• Hopfield provided the mathematical foundation for understanding the dynamics of an important class of networks.
• Kohonen developed unsupervised learning networks for feature mapping into regular arrays of neurons.
• Rumelhart and McClelland introduced the backpropagation learning algorithm for complex, multilayer networks.

• Beginning in 1986-87, many neural network research programs were initiated.
• The list of applications that can be solved by neural networks has expanded from small test-size examples to large practical tasks.
• Very-large-scale integrated neural network chips have been fabricated.
• In the long term, we could expect that artificial neural systems will be used in applications involving vision, speech, decision making, and reasoning, but also as signal processors such as filters, detectors, and quality control systems.
Artificial Neural Systems

• Artificial neural systems, or neural networks, are physical cellular systems which can acquire, store, and utilize experiential knowledge.
• The knowledge is in the form of stable states or mappings embedded in networks that can be recalled in response to the presentation of cues.

• The basic processing elements of neural networks are called artificial neurons, or simply neurons or nodes.
• Each processing unit is characterized by an activity level (representing the state of polarization of a neuron), an output value (representing the firing rate of the neuron), a set of input connections (representing synapses on the cell and its dendrite), a bias value (representing an internal resting level of the neuron), and a set of output connections (representing a neuron's axonal projections).
• Each of these aspects of the unit is represented mathematically by real numbers.
• Thus, each connection has an associated weight (synaptic strength) which determines the effect of the incoming input on the activation level of the unit.
• The weights may be positive (excitatory) or negative (inhibitory).
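These aspects of a processing unit can be collected into a simple data structure (a sketch; the field names and the rectified output rule are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    # Each aspect of the unit is represented by real numbers.
    bias: float = 0.0                               # internal resting level
    in_weights: list = field(default_factory=list)  # synaptic strengths (+/-)
    activity: float = 0.0                           # state of polarization
    output: float = 0.0                             # firing rate

    def update(self, inputs):
        # Activity is the weighted sum of inputs plus the bias; the output
        # (firing rate) is clipped so it cannot go negative.
        self.activity = sum(w * x for w, x in zip(self.in_weights, inputs)) + self.bias
        self.output = max(0.0, self.activity)
        return self.output

# Positive weight = excitatory connection, negative = inhibitory.
u = Unit(bias=-0.5, in_weights=[0.8, -0.2])
print(u.update([1.0, 1.0]))   # 0.8 - 0.2 - 0.5 = 0.1
```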

• The general procedure is to have the network learn the appropriate weights from a representative set of training data.
• In all but the simplest cases, however, direct computation of the weights is intractable.
• Instead, we usually start off with random initial weights and adjust them in small steps until the required outputs are produced.

Hardware Implementation

• Most artificial neural network models have been implemented in software, but the size and complexity of many problems have quickly exceeded the power of conventional computer hardware.
• It is the goal of neural network engineers to transfer the progress made into new hardware systems.
• These are intended to accelerate future developments of algorithms and architectures, and to make possible the use of dedicated neural networks in industrial applications.
