$$C_k \frac{du_k}{dt} = \sum_{j,l} T_{kjl} V_j V_l + \sum_j W_{kj} V_j - \frac{u_k}{R_k} + I_k \qquad (1)$$
In equation (1), $u_k$ is thought of as the mean soma potential of the $k$th neuron, $V_k$ is the short-term average of the neuron firing rate, $T_{kjl}$ and $W_{kj}$ are the strengths of the synapses, and $I_k$ is a fixed bias current.
For the higher order complex-valued Hopfield neural network the quantities $u_k$, $V_k$, $T_{kjl}$, $W_{kj}$ and $I_k$ can have non-zero imaginary components and the nonlinear transfer function $g_k(\cdot)$ is a complex function. The dynamics of this network are given by equation (2):
$$C_k \frac{du_k}{dt} = \sum_{j,l} T_{kjl} V_j V_l + \sum_j W_{kj} V_j - \frac{u_k}{R_k} + I_k \qquad (2)$$
and the complex nonlinear function is given by:
$$u_k = g_k^{-1}(V_k) \qquad (3)$$
where $C_k > 0\ \forall k$. The complex nonlinear function is chosen so that it is monotonically increasing and continuously differentiable for complex values as well. For our stability analysis we considered $g(\cdot) = \tanh(\cdot)$ as the complex-valued activation function for the following reasons:
1) In real space $\tanh(x)$ is a sigmoid function and has all the properties of a sigmoid function.
2) In the complex domain $\tanh(z)$ has several important properties; we define these properties in the next section.
We now define the Lyapunov energy function for the higher order complex-valued Hopfield neural network in the following equation:
$$E = -\frac{1}{3}\sum_{k,j,l} T_{kjl}\,\bar{V}_k V_j V_l - \frac{1}{2}\sum_{k,j} W_{kj}\,\bar{V}_k V_j + \mathrm{Re}\Big[\sum_k \frac{1}{R_k}\int_0^{V_k} g_k^{-1}(V)\,d\bar{V} - \sum_k I_k \bar{V}_k\Big] \qquad (4)$$
Equations (2), (3) and (4) describe the dynamics of the proposed Hopfield neural network model. For the binary-valued case we can determine the weight matrices for this network model by Hebb's rule:
$$T_{kjl} = \sum_p S_k^p \bar{S}_j^p \bar{S}_l^p, \qquad W_{kj} = \sum_p S_k^p \bar{S}_j^p \qquad (5)$$
where $S^p$ is the $p$th pattern to be stored in an $N$-neuron network, $S_k^p$ denotes the $k$th component of the $p$th pattern, and the bar denotes the complex conjugate. Alternatively, an optimization method such as the least squares method can be used to calculate the weights of the network.
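As an illustration, the Hebbian construction of equation (5) can be sketched in a few lines of NumPy; the pattern set below is a hypothetical placeholder:

```python
import numpy as np

# Hypothetical set of p complex-valued patterns, each with N components.
S = np.array([[1 + 0j, -1 + 0j, 0 + 1j],
              [0 - 1j, 1 + 0j, -1 + 0j]])   # shape (p, N)

# Second-order weights of eq. (5): W[k, j] = sum_p S[p, k] * conj(S[p, j])
W = np.einsum('pk,pj->kj', S, S.conj())

# Third-order weights: T[k, j, l] = sum_p S[p, k] * conj(S[p, j]) * conj(S[p, l])
T = np.einsum('pk,pj,pl->kjl', S, S.conj(), S.conj())

# W is Hermitian by construction, as the stability analysis below requires.
assert np.allclose(W, W.conj().T)
```

The same sums could of course be replaced by a least-squares fit, as noted above.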
The stability analysis of the proposed higher order complex-valued Hopfield neural network is carried out in the next section. Just as the weight matrix of the conventional Hopfield neural network is assumed to be symmetric, in the complex higher order Hopfield neural network the complex weight matrices $T_{kjl}$ and $W_{kj}$ are assumed to be Hermitian. The energy function described by equation (4) is then a real number.
3. Stability Analysis of Proposed Network
We first discuss some of the important properties considered while formulating the dynamics of the higher order complex-valued Hopfield neural network. In our stability analysis we considered $\tanh(z)$ as the complex-valued activation function. The parameters and variables of the model are complex-valued ($T, W, V, I \in \mathbb{C}^n$), where $\mathbb{C}^n$ denotes the space of complex-valued $n$-vectors. Any other complex-valued activation function can also be used, provided it satisfies the required properties and assumptions.
1) Let $z = x + iy$ and $g(z) = g_x + i g_y = \tanh(z)$. Then
a)
$$\frac{\partial g_x}{\partial x} > 0 \quad \text{for} \quad -\frac{\pi}{4} \le y \le \frac{\pi}{4} \qquad (6)$$
b)
$$g^{-1}(\bar{z}) = \overline{g^{-1}(z)} \qquad (7)$$
Proof: Condition 1:
$$\tanh(z) = g_x(x,y) + i\,g_y(x,y) = \frac{\sinh(2x)}{\cosh(2x)+\cos(2y)} + i\,\frac{\sin(2y)}{\cosh(2x)+\cos(2y)}$$
$$\frac{\partial g_x}{\partial x} = \frac{2\big(1+\cosh(2x)\cos(2y)\big)}{\big(\cosh(2x)+\cos(2y)\big)^2} \qquad (8)$$
The R.H.S. is $> 0$ for $-\frac{\pi}{4} \le y \le \frac{\pi}{4}$, since $\cos(2y) \ge 0$ there.
Condition 2:
$$g^{-1}(z) = \frac{1}{2}\log\frac{1+z}{1-z}, \qquad \overline{\log(w)} = \log(\bar{w}), \qquad \overline{\left(\frac{1+w}{1-w}\right)} = \frac{1+\bar{w}}{1-\bar{w}} \qquad (9)$$
so that $g^{-1}(\bar{z}) = \overline{g^{-1}(z)}$.
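Both conditions can be spot-checked numerically; the sketch below uses NumPy's complex `tanh` and `arctanh` with a finite-difference derivative (the sample ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 100)
y = rng.uniform(-np.pi/4 + 0.01, np.pi/4 - 0.01, 100)  # inside the strip of (6)
z = x + 1j * y

# Condition 1: d(Re tanh)/dx > 0 within |Im z| < pi/4
h = 1e-6
dgx_dx = (np.tanh(z + h).real - np.tanh(z - h).real) / (2 * h)
assert np.all(dgx_dx > 0)

# Condition 2: g^{-1}(conj(z)) == conj(g^{-1}(z)), with g^{-1} = arctanh
w = 0.3 - 0.2j
assert np.isclose(np.arctanh(np.conj(w)), np.conj(np.arctanh(w)))
```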
With these properties we now elaborate the stability analysis for the proposed model. To prove the convergence of the proposed energy function we will show that $\frac{dE}{dt} \le 0$. Consider the energy function shown in equation (10):
$$E = -\frac{1}{3}\sum_{k,j,l} T_{kjl}\,\bar{V}_k V_j V_l - \frac{1}{2}\sum_{k,j} W_{kj}\,\bar{V}_k V_j + \mathrm{Re}\Big[\sum_k \frac{1}{R_k}\int_0^{V_k} g_k^{-1}(V)\,d\bar{V} - \sum_k I_k \bar{V}_k\Big] \qquad (10)$$
We compute $\frac{dE}{dt}$ to show the monotonic decrease of $E$ with time. We break the proposed energy function into three parts:
$$E = E_1 + E_2 + E_3$$
$$E_1 = -\frac{1}{3}\sum_{k,j,l} T_{kjl}\,\bar{V}_k V_j V_l, \qquad E_2 = -\frac{1}{2}\sum_{k,j} W_{kj}\,\bar{V}_k V_j,$$
$$E_3 = \mathrm{Re}\Big[\sum_k \frac{1}{R_k}\int_0^{V_k} g_k^{-1}(V)\,d\bar{V} - \sum_k I_k \bar{V}_k\Big] \qquad (11)$$
We now calculate the derivative of each energy term, i.e. $E_1$, $E_2$ and $E_3$, with respect to time. The derivative $\frac{dE_1}{dt}$ gives rise, by the product rule, to terms of the form:
$$\frac{dE_1}{dt} = -\frac{1}{3}\sum_{k,j,l} T_{kjl}\left[\frac{d\bar{V}_k}{dt} V_j V_l + \bar{V}_k \frac{dV_j}{dt} V_l + \bar{V}_k V_j \frac{dV_l}{dt}\right] \qquad (12)$$
and the derivative $\frac{dE_2}{dt}$ gives rise to pairs of terms of the form:
$$\frac{dE_2}{dt} = -\frac{1}{2}\sum_{k,j} W_{kj}\left[\frac{d\bar{V}_k}{dt} V_j + \bar{V}_k \frac{dV_j}{dt}\right] \qquad (13)$$
By using the Hermitian property of the matrices $T_{kjl}$ and $W_{kj}$, relabelling the summation indices allows the terms in equation (12) to be grouped together, which results in:
$$\frac{dE_1}{dt} = -\sum_k \mathrm{Re}\Big[\frac{d\bar{V}_k}{dt}\sum_{j,l} T_{kjl} V_j V_l\Big] \qquad (14)$$
and the terms obtained from the derivative $\frac{dE_2}{dt}$ are grouped together in the same way as:
$$\frac{dE_2}{dt} = -\sum_k \mathrm{Re}\Big[\frac{d\bar{V}_k}{dt}\sum_{j} W_{kj} V_j\Big] \qquad (15)$$
The derivative $\frac{dE_3}{dt}$ results in the following term:
$$\frac{dE_3}{dt} = \sum_k \mathrm{Re}\Big[\Big(\frac{u_k}{R_k} - I_k\Big)\frac{d\bar{V}_k}{dt}\Big] \qquad (16)$$
Placing equations (14), (15) and (16) in $\frac{dE}{dt}$ results in the following:
$$\frac{dE}{dt} = -\mathrm{Re}\sum_k \frac{d\bar{V}_k}{dt}\Big[\sum_{j,l} T_{kjl} V_j V_l + \sum_j W_{kj} V_j - \frac{u_k}{R_k} + I_k\Big] = -\mathrm{Re}\sum_k C_k \frac{d\bar{V}_k}{dt}\frac{du_k}{dt} \qquad (17)$$
We assume the state of the network has the complex form given below:
$$V_k = g(u_k), \qquad g(u) = g_x(u_x, u_y) + i\,g_y(u_x, u_y) \qquad (18)$$
We now prove that the term $\mathrm{Re}\big[\frac{d\bar{V}_k}{dt}\frac{du_k}{dt}\big]$ in equation (17) is $\ge 0$:
$$\mathrm{Re}\Big[\frac{d\bar{V}_k}{dt}\frac{du_k}{dt}\Big] = \frac{dV_x}{dt}\frac{du_x}{dt} + \frac{dV_y}{dt}\frac{du_y}{dt}$$
$$= \Big(\frac{\partial g_x}{\partial u_x}\frac{du_x}{dt} + \frac{\partial g_x}{\partial u_y}\frac{du_y}{dt}\Big)\frac{du_x}{dt} + \Big(\frac{\partial g_y}{\partial u_x}\frac{du_x}{dt} + \frac{\partial g_y}{\partial u_y}\frac{du_y}{dt}\Big)\frac{du_y}{dt}$$
$$= \frac{\partial g_x}{\partial u_x}\Big(\frac{du_x}{dt}\Big)^2 + \frac{\partial g_y}{\partial u_y}\Big(\frac{du_y}{dt}\Big)^2 + \Big(\frac{\partial g_x}{\partial u_y} + \frac{\partial g_y}{\partial u_x}\Big)\frac{du_x}{dt}\frac{du_y}{dt} \qquad (19)$$
The function $g(\cdot)$ is analytic, so the Cauchy-Riemann equations give
$$\frac{\partial g_x}{\partial u_x} = \frac{\partial g_y}{\partial u_y}, \qquad \frac{\partial g_x}{\partial u_y} = -\frac{\partial g_y}{\partial u_x} \quad\Longrightarrow\quad \frac{\partial g_x}{\partial u_y} + \frac{\partial g_y}{\partial u_x} = 0 \qquad (20)$$
With the conditions shown in equation (20) we get the following equation:
$$\frac{dV_x}{dt}\frac{du_x}{dt} + \frac{dV_y}{dt}\frac{du_y}{dt} = \frac{\partial g_x}{\partial u_x}\Big[\Big(\frac{du_x}{dt}\Big)^2 + \Big(\frac{du_y}{dt}\Big)^2\Big] \qquad (21)$$
The expression in equation (21) is always non-negative if $\frac{\partial g_x}{\partial u_x}$ is positive; hence, to show that $\frac{\partial g_x}{\partial u_x}$ is always positive, we first show that the conjugate state of a stable state is also an attractor [2]. To proceed further we make the following assumptions:
1) $I_k$ and $R_k$ are real.
2) $g^{-1}(\bar{V}) = \overline{g^{-1}(V)}$.
If these two conditions are satisfied, the energy function $E$ given in equation (10) remains the same when $V_k$ is replaced by $\bar{V}_k$, and it satisfies the following inferences:
1) If $I_k$ is real then $\mathrm{Re}(I_k \bar{V}_k) = \mathrm{Re}(I_k V_k)$.
2) If $R_k$ is real and $g^{-1}(\bar{V}) = \overline{g^{-1}(V)}$ then
$$\mathrm{Re}\Big[\frac{1}{R_k}\int_0^{V_k} g_k^{-1}(V)\,dV\Big] = \mathrm{Re}\Big[\frac{1}{R_k}\int_0^{\bar{V}_k} g_k^{-1}(V)\,dV\Big]$$
The network dynamics can be written neuron-wise as
$$\frac{du_i}{dt} = -u_i + \sum_j W_{ij} V_j + \sum_{j,k} T_{ijk} V_j V_k + I_i, \qquad V_i = f(u_i) \qquad (22)$$
Let $m$ be the number of memory patterns to be stored, each memory pattern being an $n$-dimensional complex vector denoted by $a_i \in \mathbb{C}^n$, $i = 1, 2, \ldots, m$. If the memory vectors $a_1, a_2, \ldots, a_m$ are equilibrium points of the network then the expression $\frac{du_i}{dt}$, evaluated at $u_i = f^{-1}(a_i)$, is zero for the higher order complex-valued Hopfield neural network. At the given equilibrium points equation (22) is modified and is given by equation (23).
$$0 = -b_i + \sum_j W_{ij} a_j + \sum_{j,k} T_{ijk} a_j a_k + I_i, \qquad a_i = f(b_i) \qquad (23)$$
The nonlinear function $a_i = f(b_i)$ is a strictly monotonically increasing and continuously differentiable function. This nonlinear function must be invertible, i.e. $b_i = f^{-1}(a_i)$. In our analysis we have used the following nonlinear function:
$$a_i = f(b_i) = \frac{b_i}{1 + |b_i|}, \qquad a_i, b_i \in \mathbb{C}^n \qquad (24)$$
applied componentwise.
The inverse function for equation (24) is given by equation (25):
$$b_i = f^{-1}(a_i) = \frac{a_i}{1 - |a_i|}, \qquad a_i, b_i \in \mathbb{C}^n \qquad (25)$$
We now define the vectors $A = [a_1 - a_2,\ a_2 - a_3,\ \ldots,\ a_{m-1} - a_m]$ and $B = [b_1 - b_2,\ b_2 - b_3,\ \ldots,\ b_{m-1} - b_m]$, whose columns lie in $\mathbb{C}^n$.
These vectors are used for calculating the weights and biases of the higher order complex-valued Hopfield neural network. We first present the expression to determine the weights in equation (26); the calculated weights are then placed in equation (27) for finding the biases of the network.
$$B_i = \sum_j W_{ij} A_j + \sum_{j,k} T_{ijk} A_j A_k \qquad (26)$$
$$I_i = b_i - \Big[\sum_j W_{ij} a_j + \sum_{j,k} T_{ijk} a_j a_k\Big] \qquad (27)$$
The weight matrices $W$ and $T$ will satisfy equation (26) if and only if the following conditions hold:
$$\mathrm{Conj}(A)B = \mathrm{Conj}(B)A$$
$$\mathrm{rank}(A^T) = \mathrm{rank}(A^T, B_j^T), \qquad j = 1, 2, \ldots, n \qquad (28)$$
where $B_j$ is the $j$th row of $B$.
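The two conditions of equation (28) are straightforward to evaluate with NumPy. The sketch below uses random placeholder data, with $b_i$ taken as a simple scaling of $a_i$ rather than the true inverse activation; note that $\mathrm{Conj}(B)A$ is the conjugate transpose of $\mathrm{Conj}(A)B$, so condition 1 amounts to that product being Hermitian:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3                                   # m patterns of dimension n
a = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = 2.0 * a                                   # placeholder for f^{-1}(a)

# Columns hold successive differences (here a_{i+1} - a_i; the sign flip
# relative to the text does not affect either condition).
A = np.diff(a, axis=0).T                      # shape (n, m-1)
B = np.diff(b, axis=0).T

# Condition 1: Conj(A)B == Conj(B)A, i.e. A^H B is Hermitian.
P = A.conj().T @ B
cond1 = np.allclose(P, P.conj().T)

# Condition 2: appending the j-th row of B as a column to A^T must not
# increase its rank, for j = 1, ..., n.
rank_A = np.linalg.matrix_rank(A.T)
cond2 = all(
    np.linalg.matrix_rank(np.hstack([A.T, B[j].reshape(-1, 1)])) == rank_A
    for j in range(n)
)
assert cond1 and cond2
```

With the linear placeholder $b = 2a$ both conditions hold by construction; with a genuine nonlinear $f^{-1}$ they must be checked case by case, as done in the application example below.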
B. Steps for calculating weight matrix W and T
We have used a real-valued formulation of the complex-valued steepest descent method for determining the weights of the higher order complex-valued Hopfield neural network. To obtain the weight matrices we minimize the residual of equation (26) with respect to all the weights. We can define this by the following equation:
$$\min_{W,T}\Big\| B_i - \sum_j W_{ij} A_j - \sum_{j,k} T_{ijk} A_j A_k \Big\| = 0, \qquad A, B, W, T \in \mathbb{C}^n \qquad (29)$$
Using equation (29), we define the quadratic error function in equation (30), which has to be minimized with respect to all the weights of the network.
$$\mathrm{Error} = \frac{1}{2}\Big[\big(\Re(\mathrm{Target} - \mathrm{Actual})\big)^2 + \big(\Im(\mathrm{Target} - \mathrm{Actual})\big)^2\Big], \qquad \mathrm{Target}, \mathrm{Actual} \in \mathbb{C}^n,$$
$$\mathrm{Target} = B, \qquad \mathrm{Actual} = \sum_j W_{ij} A_j + \sum_{j,k} T_{ijk} A_j A_k \qquad (30)$$
The changes in the complex weights $W$ and $T$, i.e. $\Delta\Re(W_{ij})$, $\Delta\Im(W_{ij})$, $\Delta\Re(T_{ijk})$ and $\Delta\Im(T_{ijk})$, are obtained by calculating the partial derivatives of the error given in equation (30) with respect to $\Re(W_{ij})$, $\Im(W_{ij})$, $\Re(T_{ijk})$ and $\Im(T_{ijk})$ respectively. This is defined by the following equations:
$$\frac{\partial\,\mathrm{Error}}{\partial\,\Re(W_{ij})} = -\big[\Re(\mathrm{Target} - \mathrm{Actual})\,\Re(A_j) + \Im(\mathrm{Target} - \mathrm{Actual})\,\Im(A_j)\big]$$
$$\frac{\partial\,\mathrm{Error}}{\partial\,\Im(W_{ij})} = \Re(\mathrm{Target} - \mathrm{Actual})\,\Im(A_j) - \Im(\mathrm{Target} - \mathrm{Actual})\,\Re(A_j)$$
$$\frac{\partial\,\mathrm{Error}}{\partial\,\Re(T_{ijk})} = -\big[\Re(\mathrm{Target} - \mathrm{Actual})\,\Re(A_j A_k) + \Im(\mathrm{Target} - \mathrm{Actual})\,\Im(A_j A_k)\big]$$
$$\frac{\partial\,\mathrm{Error}}{\partial\,\Im(T_{ijk})} = \Re(\mathrm{Target} - \mathrm{Actual})\,\Im(A_j A_k) - \Im(\mathrm{Target} - \mathrm{Actual})\,\Re(A_j A_k) \qquad (31)$$
with $\Re(A_j A_k) = \Re(A_j)\Re(A_k) - \Im(A_j)\Im(A_k)$ and $\Im(A_j A_k) = \Re(A_j)\Im(A_k) + \Im(A_j)\Re(A_k)$.
We have used the following update rule for updating the weights of the higher order complex-valued Hopfield neural network:
$$\Re(W_{ij})^{new} = \Re(W_{ij})^{old} - \eta\,\frac{\partial\,\mathrm{Error}}{\partial\,\Re(W_{ij})}, \qquad \Im(W_{ij})^{new} = \Im(W_{ij})^{old} - \eta\,\frac{\partial\,\mathrm{Error}}{\partial\,\Im(W_{ij})}$$
$$\Re(T_{ijk})^{new} = \Re(T_{ijk})^{old} - \eta\,\frac{\partial\,\mathrm{Error}}{\partial\,\Re(T_{ijk})}, \qquad \Im(T_{ijk})^{new} = \Im(T_{ijk})^{old} - \eta\,\frac{\partial\,\mathrm{Error}}{\partial\,\Im(T_{ijk})} \qquad (32)$$
Here, $\eta$ is the learning rate, $\Re(X)$ represents the real part of the complex variable $X$ and $\Im(X)$ represents the imaginary part of the complex variable $X$.
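As a sanity check, the four real-valued updates of equations (31)-(32) can be folded into compact complex arithmetic: together they are equivalent to $W_{ij} \mathrel{+}= \eta\, e_i\, \bar{A}_j$ and $T_{ijk} \mathrel{+}= \eta\, e_i\, \overline{A_j A_k}$ with $e = \mathrm{Target} - \mathrm{Actual}$. A runnable sketch on random placeholder data (sizes and scaling are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, cols = 3, 2
A = 0.5 * (rng.standard_normal((n, cols)) + 1j * rng.standard_normal((n, cols)))
B = 0.5 * (rng.standard_normal((n, cols)) + 1j * rng.standard_normal((n, cols)))

W = np.zeros((n, n), dtype=complex)
T = np.zeros((n, n, n), dtype=complex)
eta = 0.01

def actual(col):
    # Actual_i = sum_j W_ij A_j + sum_jk T_ijk A_j A_k, eq. (30)
    return W @ col + np.einsum('ijk,j,k->i', T, col, col)

for step in range(20000):
    for c in range(cols):
        col = A[:, c]
        e = B[:, c] - actual(col)                    # Target - Actual
        W += eta * np.outer(e, col.conj())           # complex form of eq. (32)
        T += eta * np.einsum('i,j,k->ijk', e, col.conj(), col.conj())

residual = max(np.linalg.norm(B[:, c] - actual(A[:, c])) for c in range(cols))
print(f"residual after training: {residual:.2e}")
```

Because the model is linear in the weights, this is ordinary least-mean-squares descent and the residual shrinks toward zero for a sufficiently small learning rate.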
We have used the above-mentioned approach for calculating the weights $W$ and $T$ of the network. The obtained weights are then placed in equation (27) to obtain the bias values. In the next subsection we explain the strategy used for simulating the proposed network with the derived weights and biases.
C. Strategy for Simulating the Higher Order Complex-Valued Hopfield Neural Network
In this section we propose a real-valued approach for the simulation of the higher order complex-valued Hopfield neural network. The weights ($W$, $T$) and biases ($I$) derived in the previous subsection are used here in the simulation of the complex network. Equation (33) shows the dynamics of the complex-valued Hopfield neural network:
$$\frac{du_i}{dt} = -u_i + \sum_j W_{ij} V_j + \sum_{j,k} T_{ijk} V_j V_k + I_i, \qquad V_i = f(u_i) \qquad (33)$$
We separated the real and imaginary components of equation (33), as given by equation (34):
$$\frac{d\,\Re(u_i)}{dt} = -\Re(u_i) + \Re\Big(\sum_j W_{ij} V_j + \sum_{j,k} T_{ijk} V_j V_k + I_i\Big)$$
$$\frac{d\,\Im(u_i)}{dt} = -\Im(u_i) + \Im\Big(\sum_j W_{ij} V_j + \sum_{j,k} T_{ijk} V_j V_k + I_i\Big), \qquad V_i = f(u_i) \qquad (34)$$
Euler's method is used for finding the numerical solution of equations (34). The energy of the network is calculated using equation (35).
$$E = -\frac{1}{2}\sum_{i,j} W_{ij}\,\bar{V}_i V_j - \frac{1}{3}\sum_{i,j,k} T_{ijk}\,\bar{V}_i V_j V_k - \mathrm{Re}\Big[\sum_i I_i \bar{V}_i\Big] + \mathrm{Re}\Big[\sum_i \int_0^{V_i} f^{-1}(v)\,d\bar{v}\Big] \qquad (35)$$
The methodology explained above has been used for the simulation. The energy of the network is calculated throughout the entire simulation. In the next section we present the results obtained by following the proposed strategy.
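The simulation loop itself is short. The sketch below integrates equation (34) (equivalently, the complex form (33)) with Euler's method, using tanh and small random Hermitian-symmetrized weights as placeholders, and checks that the state settles at an equilibrium; the energy bookkeeping of equation (35) is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = 0.1 * (W + W.conj().T)                    # Hermitian second-order weights
T = 0.05 * (rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n)))
I = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

dt = 0.01
u = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
for step in range(5000):
    V = np.tanh(u)                            # complex-valued activation
    du = -u + W @ V + np.einsum('ijk,j,k->i', T, V, V) + I
    u = u + dt * du                           # one Euler step of eq. (33)

print(np.linalg.norm(du))                     # small at an equilibrium
```

With weights this small the $-u$ term dominates, so the trajectory contracts to a fixed point regardless of the initial state; with trained weights the attractors are the stored patterns.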
5. Application Example
In this section, we present numerical results for the proposed higher order complex-valued Hopfield neural network. We construct a network of 3 neurons for storing complex-valued vectors. The nonlinear function given in equation (24) is used as the activation function for the analysis. Suppose the vectors given in equation (36) are to be stored in the network of complex-valued neurons.
$$a_1 = \begin{bmatrix} 0.69 + 0.4i \\ 0.63 + 0.63i \\ 0.108 + 0.7i \end{bmatrix}, \qquad a_2 = \begin{bmatrix} 0.8 + 0.0i \\ 0.45 - 0.77i \\ 0.49 - 0.049i \end{bmatrix} \qquad (36)$$
We now compute the vectors $A$ and $B$ from the following equations:
$$A = [a_1 - a_2,\ a_2 - a_3,\ \ldots,\ a_{m-1} - a_m]$$
$$B = [b_1 - b_2,\ b_2 - b_3,\ \ldots,\ b_{m-1} - b_m]$$
where $b_i = f^{-1}(a_i)$. $\qquad (37)$
The conditions mentioned in equation (28) are evaluated next, which results in the following:
$$\mathrm{Conj}(A)B = \mathrm{Conj}(B)A = 43.2868$$
$$\mathrm{rank}(A^T) = 1, \quad \mathrm{rank}(A^T, B_1^T) = 1, \quad \mathrm{rank}(A^T, B_2^T) = 1, \quad \mathrm{rank}(A^T, B_3^T) = 1 \qquad (38)$$
This satisfies the above-mentioned conditions (equation (28)) [3], [10], [11]. In our case it is found that when these conditions are satisfied there exist connection weight matrices $W$ and $T$ such that the stored vectors become equilibrium points of the network.
After obtaining the vectors $A$ and $B$ we determine the connection weight matrices $W$ and $T$ by using the approach described in the previous section. The values of the matrices $W$ and $T$ are then used for calculating the bias values $I_i$. The values of matrix $W$, matrix $T$ and the biases $I_i$ are shown in the appendix. We plotted the error curve obtained while calculating the desired weights for the proposed higher order complex-valued Hopfield neural network in Figure 1. It is evident from Figure 1 that the minimization of the function given in equation (29) takes only 1000 iterations to reach the desired minimum. In our simulation the learning rate is $\eta = 0.01$.
Figure 1. Error profile obtained while calculating the weights for the proposed higher order complex-valued Hopfield neural network
With the calculated weights and biases the higher order complex-valued Hopfield neural network is constructed and used for further analysis. We have used the real-valued approach for simulating the proposed higher order complex-valued Hopfield neural network. The real-valued formulation of the dynamics of the higher order complex-valued Hopfield neural network is shown in equations (34). These equations are simulated by using Euler's method.
A. The calculated weight matrix W
(39)
B. The calculated higher order weight matrix T
T(:, :, 1) =
T(:, :, 2) =
T(:, :, 3) =
(40)
C. The calculated bias matrix I
$$I = \begin{bmatrix} 0.1486 - 1.5213i \\ 0.9071 - 1.7542i \\ 1.2725 - 0.5693i \end{bmatrix} \qquad (41)$$
References
[1] T. Samad and P. Harper, High-order Hopfield and Tank optimization networks, Parallel Computing, Vol. 16, pp. 287-292, 1990.
[2] S.V. Chakravarthy and J. Ghosh, Studies on a network of complex neurons.
[3] Y. Kuroe, N. Hashimoto, and T. Mori, On Energy Function for Complex-Valued Neural Networks and Its Applications, Proceedings of ICONIP'02, Vol. 3, pp. 1079-1083.
[4] D. Hebb, Organization of Behavior, John Wiley and Sons, New York, 1949.
[5] A. Hirose, Dynamics of Fully Complex-valued Neural Network, Electronics Letters, Vol. 28, No. 16, pp. 1492-1494, 1992.
[6] A. Hirose (Ed.), Complex-Valued Neural Networks: Theories and Applications, World Scientific Series on Innovative Intelligence, Vol. 5, 2003.
[7] J.J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proc. Natl. Acad. Sci. USA, Vol. 79, pp. 2554-2558, April 1982.
[8] S. Jankowski, A. Lozowski, and J.M. Zurada, Complex-valued Multistate Neural Associative Memory, IEEE Trans. on Neural Networks, Vol. 7, No. 6, pp. 1491-1495, 1996.
[9] M. Kerem, C. Guzelis, and J.M. Zurada, A New Design for the Complex-valued Multistate Hopfield Associative Memory, IEEE Trans. on Neural Networks, Vol. 14, No. 4, pp. 891-899, 2003.
[10] J.H. Li and A.N. Michel, Qualitative Analysis and Synthesis of a Class of Neural Networks, IEEE Transactions on Circuits and Systems, Vol. 35, No. 8, pp. 976-986, 1988.

Figure 2. Total energy for the proposed higher order complex-valued Hopfield neural network during simulation

[11] S.R. Das, On the Synthesis of Nonlinear Continuous Neural Networks, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 21, No. 2, pp. 413-418, 1991.
[12] K. Mehrotra, C.K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, The MIT Press, 1996.
[13] J.M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House, Mumbai, 1997.
[14] J.J. Hopfield, Neurons with Graded Response have Collective Computational Properties like those of Two-State Neurons, Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 3088-3092, May 1984.
[15] W.S. McCulloch and W. Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, Bull. Math. Biophysics, Vol. 5, pp. 115-133, 1943.
[16] J.J. Hopfield and D.W. Tank, Neural Computation of Decisions in Optimization Problems, Biological Cybernetics, Vol. 52, pp. 141-152, 1985.
[17] V. Chande and P.G. Pooncha, On Neural Networks for Analog to Digital Conversion, IEEE Transactions on Neural Networks, Vol. 6, No. 5, pp. 1269-1274, 1995.
[18] D.W. Tank and J.J. Hopfield, Simple Neural Optimization: An A/D Converter, a Single Decision Circuit and Linear Programming Circuit, IEEE Transactions on Circuits and Systems, Vol. 33, pp. 137-142, 1991.
[19] W. Wan-Liang, X. Xin-Li, and W. Qi-Di, Hopfield Neural Networks Approach for Job Shop Scheduling Problems, Proceedings of the 2003 IEEE International Symposium on Intelligent Control, Houston, Texas, October 5-8, 2003, pp. 935-940.
[20] C. Bousoo and M.R.W. Manning, The Hopfield Neural Network Applied to the Quadratic Assignment Problem, Vol. 3, No. 2, 1995, pp. 64-72.
[21] K. Chakraborty, K. Mehrotra, C.K. Mohan, and S. Ranka, An Optimization Network for Solving a Set of Simultaneous Linear Equations, IEEE Proceedings, pp. 516-521, 1992.
[22] J.H. Park, Y.S. Kim, I.K. Eom, and K.Y. Lee, Economic Load Dispatch for Piecewise Quadratic Cost Function Using Hopfield Neural Network, IEEE Transactions on Power Systems, Vol. 8, No. 3, pp. 1030-1038, 1993.
[23] M. Atencia, G. Joya, and F. Sandoval, Hopfield Neural Networks for Parametric Identification of Dynamical Systems, Neural Processing Letters, Vol. 21, pp. 143-152, 2005.