Learning Tasks
Pattern association
Pattern recognition
Function approximation
Filtering
Beam forming
Identification and Control
Learning Methods
Supervised Learning
Unsupervised Learning
[Figure: perceptron model — inputs x1, x2, …, xm weighted by wkj, induced local field vk, activation φ(vk), output yk, desired response dk, and error ek = dk − yk]
Example
Consider a single perceptron with the set of input training
vectors (samples) and initial weight vector
x1 = [1, -2, 0, -1]^T,  x2 = [0, 1.5, -0.5, -1]^T,  x3 = [-1, 1, 0.5, -1]^T;  w(1) = [1, -1, 0, 0.5]^T

with desired responses d1 = -1, d2 = -1, d3 = 1 and learning rate c = 0.1.
Step 1:
v(1) = w(1)^T x1 = [1, -1, 0, 0.5][1, -2, 0, -1]^T = 2.5
Since y1 = sgn(2.5) = 1 ≠ d1 = -1, the weights are updated:
w(2) = w(1) + 0.1(d1 - y1) x1 = w(1) + 0.1(-1 - 1) x1 = [0.8, -0.6, 0, 0.7]^T

Step 2:
v(2) = w(2)^T x2 = [0.8, -0.6, 0, 0.7][0, 1.5, -0.5, -1]^T = -1.6
Since y2 = sgn(-1.6) = -1 = d2, no update is needed:
w(3) = w(2) = [0.8, -0.6, 0, 0.7]^T

Step 3:
v(3) = w(3)^T x3 = [0.8, -0.6, 0, 0.7][-1, 1, 0.5, -1]^T = -2.1
Since y3 = sgn(-2.1) = -1 ≠ d3 = 1, the weights are updated:
w(4) = w(3) + 0.1(1 - (-1)) x3 = w(3) + 0.2 x3 = [0.6, -0.4, 0.1, 0.5]^T
Remarks:
This is learning with a teacher.
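The steps above can be sketched in Python. This is a minimal illustration, not part of the original notes; the training vectors, desired responses d = [-1, -1, 1], and learning rate 0.1 are taken from the worked example:

```python
import numpy as np

def sgn(v):
    # Bipolar sign activation: +1 for v >= 0, -1 otherwise
    return 1.0 if v >= 0 else -1.0

# Training vectors, desired responses, and initial weights from the example
X = [np.array([1.0, -2.0, 0.0, -1.0]),
     np.array([0.0, 1.5, -0.5, -1.0]),
     np.array([-1.0, 1.0, 0.5, -1.0])]
d = [-1.0, -1.0, 1.0]
w = np.array([1.0, -1.0, 0.0, 0.5])
c = 0.1  # learning rate

for x, dk in zip(X, d):
    v = w @ x                 # induced local field
    y = sgn(v)                # perceptron output
    w = w + c * (dk - y) * x  # no change when y == dk

print(w)  # final weights w(4) = [0.6, -0.4, 0.1, 0.5]
```

Running one pass over the three samples reproduces the hand computation, including the untouched weights at step 2.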
Example:
Consider the same set of input training vectors and initial weight vector, now trained with the delta rule using the continuous bipolar activation function φ(v) = 2/(1 + e^(-v)) - 1, whose derivative is φ'(v) = (1/2)[1 - φ²(v)]:

x1 = [1, -2, 0, -1]^T,  x2 = [0, 1.5, -0.5, -1]^T,  x3 = [-1, 1, 0.5, -1]^T;  w(1) = [1, -1, 0, 0.5]^T
Step 2 (with w(2) obtained from step 1):
v(2) = (w(2))^T x2 = -1.948
y(2) = φ(v(2)) = -0.75
φ'(v(2)) = (1/2)[1 - φ²(v(2))] = 0.218
w(3) = w(2) + 0.1[d2 - φ(v(2))] φ'(v(2)) x2 = [0.974, -0.956, 0.002, 0.531]^T
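The delta-rule step above can be checked numerically. The sketch below is an illustration, not part of the notes; it assumes the continuous bipolar activation φ(v) = 2/(1 + e^(-v)) - 1 and the same desired responses d1 = d2 = -1 as in the perceptron example, and it reproduces v(2) ≈ -1.948 and w(3) ≈ [0.974, -0.956, 0.002, 0.531]:

```python
import numpy as np

def phi(v):
    # Continuous bipolar activation
    return 2.0 / (1.0 + np.exp(-v)) - 1.0

def dphi(v):
    # Derivative: phi'(v) = (1/2) * (1 - phi(v)**2)
    return 0.5 * (1.0 - phi(v) ** 2)

X = [np.array([1.0, -2.0, 0.0, -1.0]),
     np.array([0.0, 1.5, -0.5, -1.0])]
d = [-1.0, -1.0]  # desired responses for x1, x2 (assumed as in the perceptron example)
w = np.array([1.0, -1.0, 0.0, 0.5])
c = 0.1

fields = []
for x, dk in zip(X, d):
    v = w @ x
    fields.append(v)
    # Delta rule: w <- w + c * (d - phi(v)) * phi'(v) * x
    w = w + c * (dk - phi(v)) * dphi(v) * x
```

Small differences in the last digit versus the hand computation come from rounding intermediate values on the slides.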
4. Hebbian learning
Due to Donald Hebb, in his famous book The Organization of Behavior (1949):
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
The above statement is made in a neurobiological sense. Even for more complex kinds of learning, almost every learning model that has been proposed involves both output activity and input activity in the learning rule. The essential idea is that the amount of synaptic change is a function of both pre-synaptic and post-synaptic activity. On this basis, Hebbian learning is the oldest and most famous of all learning rules.
Example
Consider a single perceptron with the set of input training
vectors (samples) and initial weight vector
x1 = [1, -2, 1.5, 0]^T,  x2 = [1, -0.5, -2, -1.5]^T,  x3 = [0, 1, -1, 1.5]^T;  w(1) = [1, -1, 0, 0.5]^T
Step 1:
v(1) = (w(1))^T x1 = [1, -1, 0, 0.5][1, -2, 1.5, 0]^T = 3
w(2) = w(1) + sgn(v(1)) x1 = w(1) + x1 = [2, -3, 1.5, 0.5]^T

Step 2:
v(2) = (w(2))^T x2 = [2, -3, 1.5, 0.5][1, -0.5, -2, -1.5]^T = -0.25
w(3) = w(2) + sgn(v(2)) x2 = w(2) - x2 = [1, -2.5, 3.5, 2]^T

Step 3:
v(3) = (w(3))^T x3 = [1, -2.5, 3.5, 2][0, 1, -1, 1.5]^T = -3
w(4) = w(3) + sgn(v(3)) x3 = w(3) - x3 = [1, -3.5, 4.5, 0.5]^T
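The three Hebbian steps can be sketched as follows. This is an illustrative script, not from the notes; a learning rate η = 1 is implied by the updates above:

```python
import numpy as np

def sgn(v):
    # Bipolar sign activation
    return 1.0 if v >= 0 else -1.0

X = [np.array([1.0, -2.0, 1.5, 0.0]),
     np.array([1.0, -0.5, -2.0, -1.5]),
     np.array([0.0, 1.0, -1.0, 1.5])]
w = np.array([1.0, -1.0, 0.0, 0.5])
eta = 1.0  # learning rate (implied by the slide's updates)

for x in X:
    v = w @ x
    # Hebbian rule with sign activation: w <- w + eta * sgn(v) * x
    w = w + eta * sgn(v) * x

print(w)  # final weights w(4) = [1, -3.5, 4.5, 0.5]
```

Note that, unlike the perceptron rule, this unsupervised update fires on every sample: the weight vector always moves towards (or away from) the current input, with no desired response involved.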
Exercise:
Repeat the same problem using a continuous bipolar activation function instead of the sign function used above.
[Figure: competitive learning network with output neurons O1, …, Ok, …, Op]
Now consider three inputs that fall into the range [0, 1]. One can see that all the activity takes place on the surface of a unit sphere. The rule has the overall effect of moving the synaptic weight vector of the winning neuron towards the input pattern x. So the final result is that the weight vector of the winning neuron k orients itself towards the input pattern x.
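The behavior described above can be sketched with the standard winner-take-all update Δw_k = η(x − w_k), applied only to the winning neuron and followed by renormalization onto the unit sphere. The data below are made up for illustration; only the update rule itself is the technique under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # Project a vector onto the surface of the unit sphere
    return v / np.linalg.norm(v)

# Hypothetical setup: 4 output neurons, unit-length weights and input
W = np.array([normalize(rng.random(3)) for _ in range(4)])
x = normalize(np.array([0.9, 0.1, 0.2]))
eta = 0.5

for _ in range(20):
    k = np.argmax(W @ x)       # winner: largest activation w_k . x
    W[k] += eta * (x - W[k])   # move the winner's weights towards x
    W[k] = normalize(W[k])     # stay on the unit sphere

# The winning neuron's weight vector ends up aligned with x
k = np.argmax(W @ x)
print(np.dot(W[k], x))  # cosine similarity close to 1
```

Each iteration halves the gap between the winner's weight vector and x, so after a few passes the winner's weights point almost exactly at the input pattern, as stated above.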
Exercise