
Stability Analysis for Higher Order Complex-Valued Hopfield Neural Network


Deepak Mishra, Abhishek Yadav, Amit Shukla, Prem K. Kalra
Department of Electrical Engineering
Indian Institute of Technology, Kanpur, India
E-mail: dkmishra@iitk.ac.in, ayadav@iitk.ac.in, kalra@iitk.ac.in
Abstract
In this paper, a stability analysis for the higher order complex-valued Hopfield neural network is proposed. As an application, complete recall of a stored complex-valued vector is demonstrated by computer simulations. A methodology for handling the complex-valued Hopfield neural network is explained.
1. Introduction
A human brain consists of approximately 10^11 computing elements called neurons. They communicate through a connection network of axons and synapses having a density of approximately 10^4 synapses per neuron. The neuron is the fundamental building block of the biological neural network. A typical neuron cell has three major regions: the cell body called the soma, the axon, and the dendrites [13]. Electrical signals are aggregated at the soma and travel via the axon. The ends of axons and dendrites are connected via synapses. McCulloch and Pitts used these biological findings about the human brain to propose the first formal definition of a synthetic neuron model, based on highly simplified considerations of the biological model [15]. This model allows binary {0, 1} states only. After this, several other neural network models were proposed by considering different features of biological neurons. These neural networks can be categorized as non-feedback neural networks, as they do not incorporate the dynamical behavior of neurons. In 1982, Hopfield proposed a feedback-type neural network known as the Hopfield neural network [7]. In his pioneering work, Hopfield designed recurrent neural networks as dynamical associative memories. He showed that a single layer of fully connected neurons is capable of restoring a previously learned static pattern, called a memory vector, ensuring its convergence from an initial condition. The real-valued Hopfield neural network has also been used for solving other optimization problems, such as the travelling salesman problem, finding solutions of linear and nonlinear equations, analog-to-digital (ADC) conversion, job scheduling, the assignment problem, and parametric identification [12], [16], [17], [18], [19], [21], [20], [23].
In order to use a neural network model to solve an optimization problem, the problem must be cast into the form of an energy function that the model minimizes. A limitation of conventional Hopfield-model-based optimization is that the energy function is restricted to quadratic form. To overcome this drawback, the authors in [1] proposed a real-valued high-order Hopfield and Tank optimization network and solved the triangle partitioning problem.
In previous work on the conventional Hopfield neural network approach, real-valued input and output vectors are processed using real-valued weighting factors and real nonlinear functions. On the other hand, as the application areas grow, neural networks dealing with input and output vectors expressed as complex numbers are strongly desired [5]. They can be applied to problems that use complex values in their computations. In [6], several applications of complex-valued neural networks are discussed. The authors in [8], [9] discussed the complex-valued multistate Hopfield associative memory as another application of the complex-valued Hopfield neural network. An energy-function-based approach and stability analysis for the first order complex-valued Hopfield neural network is discussed in [3], [2]. In this work we extend the previous work of [3], [2] to the higher order complex-valued Hopfield neural network.
In this paper we consider a class of fully connected complex-valued neural networks which are a complex-valued extension of higher order real-valued Hopfield-type neural networks. We propose an energy function for the higher order complex-valued Hopfield neural network and investigate the stability conditions. The proposed energy function formulation can be used for solving various problems, such as optimization and the synthesis of associative memories. As an application of the proposed method, we discuss the complete procedure and simulation details for recall of a stored complex-valued vector. We have used a real-valued approach for solving the higher order complex-valued Hopfield neural network and for determining the weights of the network.
The complex version of the higher order Hopfield continuous model is formulated in section 2. In section 3, the stability conditions for the network are derived. The formulation of complex-valued associative memories using the complex-valued higher order Hopfield neural network is explained in section 4. The simulation results are portrayed in section 5. We conclude our work with a discussion in section 6.
2. Higher Order Complex-Valued Hopfield Neural Network
Extending the continuous higher order Hopfield neural network with real parameters to a complex neural network seems a natural step in generalizing the basic Hopfield neural network [1]. A complex number provides a more powerful representation of the information transmitted over a synapse. The soma potential u_k can be imagined to carry phase information, which a model using real numbers fails to represent explicitly. The approach is also motivated by the observation that representing signals as complex quantities is an important and effective technique in signal and system theory.
The real-valued higher order Hopfield model can be represented by the equation of the dynamics of N interconnected neurons:

C_k du_k/dt = Σ_{j,l} T_{kjl} V_j V_l + Σ_j W_{kj} V_j - u_k/R_k + I_k    (1)
In equation (1), u_k is thought of as the mean soma potential of the kth neuron, V_k is the short-term average of the neuron firing rate, T_{kjl} and W_{kj} are the strengths of the synapses, and I_k is a fixed bias current.
For the higher order complex-valued Hopfield neural network, the quantities u_k, V_k, T_{kjl}, W_{kj} and I_k can have non-zero imaginary components, and the nonlinear transfer function g_k(.) is a complex function. The dynamics of this network are given by equation (2):
C_k du_k/dt = Σ_{j,l} T_{kjl} V_j V_l + Σ_j W_{kj} V_j - u_k/R_k + I_k    (2)
and the complex nonlinear function is given by:

u_k = g_k^{-1}(V_k)    (3)
where C_k > 0 ∀k. This complex nonlinear function is chosen such that it is monotonically increasing and continuously differentiable for complex values as well. For our stability analysis we considered g(.) = tanh(.) as the complex-valued activation function, for the following reasons:
1) In real space tanh(x) is a sigmoid function and holds all the properties of a sigmoid function.
2) In the complex domain tanh(z) has several important properties; we define these properties in the next section.
Now we define the Lyapunov energy function for the higher order complex-valued Hopfield neural network in the following equation:

E = -(1/3) Σ_{k,j,l} T_{kjl} V_k^* V_j V_l - (1/2) Σ_{k,j} W_{kj} V_k^* V_j + Re[ Σ_k (1/R_k) ∫_0^{V_k^*} g_k^{-1}(V) dV - Σ_k I_k V_k^* ]    (4)
Equations (2), (3) and (4) describe the dynamics of the proposed Hopfield neural network model. For the binary-valued case we can determine the weight matrices for this network model by Hebb's rule:

T_{kjl} = Σ_p S_k^p (S_j^p)^*,  (S_l^p)^*,  W_{kj} = Σ_p S_k^p (S_j^p)^*    (5)

where S^p is the pth pattern to be stored in an N-neuron network, S_k^p denotes the kth component of the pth pattern, and ^* denotes the complex conjugate. Alternatively, an optimization method such as the least squares method can be used for calculating the weights of the network.
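The Hebb-rule sums of equation (5) can be sketched with numpy. Placing the conjugates on the j and l factors is an assumption in this sketch, chosen so that W comes out Hermitian, as the analysis below requires:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebb-rule weights in the spirit of equation (5) for an N-neuron network.

    patterns: array of shape (p, N), one stored complex pattern per row.
    The conjugate placement (on the presynaptic factors) is an assumption;
    it makes W Hermitian, which the stability analysis requires.
    """
    S = np.asarray(patterns, dtype=complex)
    W = np.einsum('pk,pj->kj', S, S.conj())                 # W_kj = sum_p S_k^p (S_j^p)^*
    T = np.einsum('pk,pj,pl->kjl', S, S.conj(), S.conj())   # T_kjl analogously
    return T, W

# For binary (+1/-1) patterns this reduces to the classical real Hebb rule.
T, W = hebbian_weights(np.array([[1, -1, 1], [1, 1, -1]], dtype=complex))
assert np.allclose(W, W.conj().T)   # W is Hermitian
```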
The stability analysis of the proposed higher order complex-valued Hopfield neural network is carried out in the next section. As the weight matrix in the conventional Hopfield neural network is assumed to be symmetric, in the complex higher order Hopfield neural network the complex weight matrices T_{kjl} and W_{kj} are assumed to be Hermitian. The energy function described by equation (4) is a real number.
3. Stability Analysis of the Proposed Network
We first discuss some of the important properties considered while formulating the dynamics of the higher order complex-valued Hopfield neural network. In our stability analysis we considered tanh(z) as the complex-valued activation function. The parameters and variables of the model are complex-valued (T, W, V, I ∈ C^n, where C^n represents the space of complex-valued vectors). Any other complex-valued activation function can be used instead, provided it satisfies the required properties and assumptions.
1) Let z = x + iy and g(z) = g_x + i g_y = tanh(z). Then
a) ∂g_x/∂x > 0 for -π/4 ≤ y ≤ π/4    (6)
b) [g^{-1}(z)]^* = g^{-1}(z^*)    (7)
Proof: Condition 1:

tanh(z) = g_x(x, y) + i g_y(x, y) = sinh(2x)/(cosh(2x) + cos(2y)) + i sin(2y)/(cosh(2x) + cos(2y))

∂g_x/∂x = 2(1 + cosh(2x) cos(2y)) / (cosh(2x) + cos(2y))^2    (8)

The right-hand side is ≥ 0 for -π/4 ≤ y ≤ π/4.
Condition 2:

g^{-1}(z) = (1/2) log((1 + z)/(1 - z))

[log(w)]^* = log(w^*),  [(1 + w)/(1 - w)]^* = (1 + w^*)/(1 - w^*)    (9)
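The two tanh properties used in the stability proof can be checked numerically. This is a sketch; the sampling ranges below are illustrative, and the points are drawn from the strip |Im z| ≤ π/4 of property (a):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1000)
y = rng.uniform(-np.pi/4, np.pi/4, 1000)   # the strip where dg_x/dx >= 0
z = x + 1j * y

# Closed form of tanh(z) split into real and imaginary parts.
denom = np.cosh(2*x) + np.cos(2*y)
gx, gy = np.sinh(2*x) / denom, np.sin(2*y) / denom
assert np.allclose(np.tanh(z), gx + 1j*gy)

# Property (a): dg_x/dx = 2(1 + cosh(2x)cos(2y)) / (cosh(2x) + cos(2y))^2 > 0,
# cross-checked against a central finite difference.
dgx_dx = 2 * (1 + np.cosh(2*x) * np.cos(2*y)) / denom**2
assert np.all(dgx_dx > 0)
h = 1e-6
fd = (np.tanh(z + h).real - np.tanh(z - h).real) / (2*h)
assert np.allclose(dgx_dx, fd, atol=1e-5)

# Property (b): [tanh^{-1}(z)]^* = tanh^{-1}(z^*).
assert np.allclose(np.conj(np.arctanh(z)), np.arctanh(np.conj(z)))
```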
With these properties we now elaborate the stability analysis for the proposed model. In order to prove the convergence of the proposed energy function, we will show that dE/dt ≤ 0. Consider the energy function shown in equation (10):
E = -(1/3) Σ_{k,j,l} T_{kjl} V_k^* V_j V_l - (1/2) Σ_{k,j} W_{kj} V_k^* V_j + Re[ Σ_k (1/R_k) ∫_0^{V_k^*} g_k^{-1}(V) dV - Σ_k I_k V_k^* ]    (10)
We compute dE/dt to show the monotonic decrease of E with time. We break the proposed energy function into three parts:

E = E_1 + E_2 + E_3

E_1 = -(1/3) Σ_{k,j,l} T_{kjl} V_k^* V_j V_l
E_2 = -(1/2) Σ_{k,j} W_{kj} V_k^* V_j
E_3 = Re[ Σ_k (1/R_k) ∫_0^{V_k^*} g_k^{-1}(V) dV - Σ_k I_k V_k^* ]    (11)
We now calculate the derivatives of the respective energy terms E_1, E_2 and E_3 with respect to time. The derivative dE_1/dt gives rise to pairs of terms of the form:
-(1/3)(dV_k^*/dt) T_{kjl} V_j V_l - (1/3)(dV_k/dt) T_{kjl} V_j^* V_l - (1/3)(dV_k/dt) T_{kjl} V_j V_l^*
-(1/3)(dV_j^*/dt) T_{jkl} V_k V_l - (1/3)(dV_j/dt) T_{jkl} V_k^* V_l - (1/3)(dV_j/dt) T_{jkl} V_k V_l^*
-(1/3)(dV_l^*/dt) T_{ljk} V_j V_k - (1/3)(dV_l/dt) T_{ljk} V_j^* V_k - (1/3)(dV_l/dt) T_{ljk} V_j V_k^*    (12)
and the derivative dE_2/dt gives rise to pairs of terms of the form:

-(1/2)(dV_k^*/dt) W_{kj} V_j - (1/2)(dV_k/dt) W_{kj} V_j^* - (1/2)(dV_j^*/dt) W_{jk} V_k - (1/2)(dV_j/dt) W_{jk} V_k^*    (13)
By using the Hermitian property of the matrices T_{kjl} and W_{kj}, the above pairs can be grouped together, which results in:

Re[(dV_k^*/dt) T_{kjl} V_j V_l] = (1/3)(dV_k^*/dt) T_{kjl} V_j V_l + (1/3)(dV_k/dt) T_{kjl} V_j^* V_l + (1/3)(dV_k/dt) T_{kjl} V_j V_l^*
Re[(dV_j^*/dt) T_{jkl} V_k V_l] = (1/3)(dV_j^*/dt) T_{jkl} V_k V_l + (1/3)(dV_j/dt) T_{jkl} V_k^* V_l + (1/3)(dV_j/dt) T_{jkl} V_k V_l^*
Re[(dV_l^*/dt) T_{ljk} V_j V_k] = (1/3)(dV_l^*/dt) T_{ljk} V_j V_k + (1/3)(dV_l/dt) T_{ljk} V_j^* V_k + (1/3)(dV_l/dt) T_{ljk} V_j V_k^*    (14)
and the terms obtained from the derivative dE_2/dt are grouped together as:

Re[(dV_k^*/dt) W_{kj} V_j] = (1/2)(dV_k^*/dt) W_{kj} V_j + (1/2)(dV_k/dt) W_{kj} V_j^*
Re[(dV_j^*/dt) W_{jk} V_k] = (1/2)(dV_j^*/dt) W_{jk} V_k + (1/2)(dV_j/dt) W_{jk} V_k^*    (15)
The derivative dE_3/dt results in the following term:

dE_3/dt = Re Σ_k ( u_k/R_k - I_k ) (dV_k^*/dt)    (16)
Substituting equations (14), (15) and (16) into dE/dt results in the following:

dE/dt = -Re[ Σ_k (dV_k^*/dt) ( Σ_{j,l} T_{kjl} V_j V_l + Σ_j W_{kj} V_j - u_k/R_k + I_k ) ] = -Re[ Σ_k C_k (dV_k^*/dt)(du_k/dt) ]    (17)
We assume the state of the network has the complex form given below:

V_k = g(u_k),  g(u) = g_x(u_x, u_y) + i g_y(u_x, u_y)    (18)
We now prove that the term Re[(dV_k^*/dt)(du_k/dt)] in equation (17) is ≥ 0:
Re[(dV_k^*/dt)(du_k/dt)] = (dV_x/dt)(du_x/dt) + (dV_y/dt)(du_y/dt)
= ( ∂g_x/∂u_x du_x/dt + ∂g_x/∂u_y du_y/dt )(du_x/dt) + ( ∂g_y/∂u_x du_x/dt + ∂g_y/∂u_y du_y/dt )(du_y/dt)
= ∂g_x/∂u_x (du_x/dt)^2 + ∂g_y/∂u_y (du_y/dt)^2 + ( ∂g_x/∂u_y + ∂g_y/∂u_x )(du_x/dt)(du_y/dt)    (19)
The function g(.) is analytic; from the Cauchy-Riemann equations,

∂g_x/∂u_y + ∂g_y/∂u_x = 0,  since  ∂g_x/∂u_y = -∂g_y/∂u_x    (20)
With the condition shown in equation (20), and using ∂g_x/∂u_x = ∂g_y/∂u_y, we get the following equation:

(dV_x/dt)(du_x/dt) + (dV_y/dt)(du_y/dt) = (∂g_x/∂u_x) [ (du_x/dt)^2 + (du_y/dt)^2 ]    (21)
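The Cauchy-Riemann identities used in (20) and the positivity of ∂g_x/∂u_x in (21) can be checked numerically for g = tanh. The sample point below is arbitrary (inside the strip |Im u| < π/4), and the derivatives are approximated by central finite differences:

```python
import numpy as np

u, h = 0.3 + 0.2j, 1e-6   # arbitrary sample point and finite-difference step
gx_ux = (np.tanh(u + h).real - np.tanh(u - h).real) / (2*h)
gy_uy = (np.tanh(u + 1j*h).imag - np.tanh(u - 1j*h).imag) / (2*h)
gx_uy = (np.tanh(u + 1j*h).real - np.tanh(u - 1j*h).real) / (2*h)
gy_ux = (np.tanh(u + h).imag - np.tanh(u - h).imag) / (2*h)

assert abs(gx_ux - gy_uy) < 1e-6   # Cauchy-Riemann: dg_x/du_x = dg_y/du_y
assert abs(gx_uy + gy_ux) < 1e-6   # Cauchy-Riemann: dg_x/du_y = -dg_y/du_x
assert gx_ux > 0                   # the factor that makes (21) nonnegative
```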
The expression in equation (21) is always nonnegative if ∂g_x/∂u_x is positive. Hence, to show that ∂g_x/∂u_x is always positive, we first show that the conjugate of a stable state is also an attractor [2]. To proceed further we make the following assumptions:
1) I_k and R_k are real.
2) [g^{-1}(V)]^* = g^{-1}(V^*).
If these two conditions are satisfied, the energy function E given in equation (10) remains the same when V_k is replaced by V_k^*, and it satisfies the following inferences:
1) If I_k is real then Re(I_k V_k^*) = Re(I_k V_k).
2) If R_k is real and [g^{-1}(V)]^* = g^{-1}(V^*), then Re[ (1/R_k) ∫_0^{V_k^*} g_k^{-1}(V) dV ] = Re[ (1/R_k) ∫_0^{V_k} g_k^{-1}(V) dV ].
With the above inferences, ∂g_x/∂u_x is also positive, and if C_k > 0 ∀k, then Re[(dV_k^*/dt)(du_k/dt)] in equation (17) is nonnegative. With this analysis it is evident that, if the above-mentioned conditions are satisfied, E decreases monotonically and must reach a minimum.
We use the above analysis for complete recall of a stored complex-valued vector. It can be stated from the above analysis that the dynamics of the higher order complex-valued Hopfield neural network are stable if the conditions mentioned above are satisfied. In the next section we propose a methodology for handling the dynamics of the higher order complex-valued Hopfield neural network for applications such as complex-valued associative memory. We also present a real-valued approach for determining the weights of the network and for finding the desired solution of the proposed neural network model.
4. Complex-Valued Associative Memories Using the Higher Order Complex-Valued Hopfield Neural Network
In this section we demonstrate the application of the proposed approach for retrieval of a stored complex pattern from memory. For real-valued neural networks, several studies on the synthesis of associative memories using energy functions have been carried out [10], [11]. The author in [3] extended this approach to the complex-valued Hopfield neural network. In this section we extend the complex-valued Hopfield neural network approach to the higher order complex-valued Hopfield neural network.
A. Steps Involved in Recalling a Stored Complex-Valued Vector
The higher order complex-valued Hopfield neural network has nonlinear dynamics, and its solution contains multiple asymptotically stable equilibria. We can exploit this property for building an associative memory. In order to formulate the associative memory, the first order and higher order weights and the biases of the higher order complex-valued Hopfield neural network have to be determined so that each desired memory vector becomes an asymptotically stable equilibrium point of the network.
Equation (22) shows the expression for the higher order complex-valued Hopfield neural network, with u, W_{ij}, T_{ijk}, V ∈ C^n, where C^n represents the set of complex-valued vectors.

du_i/dt = -u_i + Σ_j W_{ij} V_j + Σ_{j,k} T_{ijk} V_j V_k + I_i,  V_i = φ(u_i)    (22)
Let m be the number of memory patterns to be stored, each of which is an n-dimensional complex vector, denoted by a_i ∈ C^n, i = 1, 2, ..., m. If the memory vectors a_1, a_2, ..., a_m are equilibrium points of the network, then the expression du_i/dt of the higher order complex-valued Hopfield neural network vanishes at the corresponding states. At the given equilibrium points, equation (22) is modified and is given by equation (23):
0 = -b_i + Σ_j W_{ij} a_j + Σ_{j,k} T_{ijk} a_j a_k + I_i,  a_i = φ(b_i)    (23)
The nonlinear function a_i = φ(b_i) is a strictly monotonically increasing and continuously differentiable function. This nonlinear function must be invertible, i.e. b_i = φ^{-1}(a_i). In our analysis we have used the following nonlinear function:

a_i = b_i / (1 + |b_i|),  a_i, b_i ∈ C^n    (24)
The inverse function for equation (24) is given by equation (25):

b_i = a_i / (1 - |a_i|),  a_i, b_i ∈ C^n    (25)
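Under our reading of equations (24) and (25) as a_i = b_i/(1 + |b_i|) with inverse b_i = a_i/(1 - |a_i|), the pair can be checked numerically. The function names phi and phi_inv below are ours:

```python
import numpy as np

def phi(b):
    """Activation of equation (24), read as a = b / (1 + |b|) (an assumption)."""
    return b / (1.0 + np.abs(b))

def phi_inv(a):
    """Inverse of equation (25): b = a / (1 - |a|), defined for |a| < 1."""
    return a / (1.0 - np.abs(a))

# Round-trip check on random complex values; phi maps C into the open unit disc.
rng = np.random.default_rng(1)
b = rng.normal(size=8) + 1j * rng.normal(size=8)
assert np.allclose(phi_inv(phi(b)), b)
assert np.all(np.abs(phi(b)) < 1)
```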
We now define the vectors A = [a_1 - a_2, a_2 - a_3, ..., a_{m-1} - a_m] ∈ C^n and B = [b_1 - b_2, b_2 - b_3, ..., b_{m-1} - b_m] ∈ C^n. These vectors are used for calculating the weights and biases of the higher order complex-valued Hopfield neural network. We first present the expression used to determine the weights in equation (26); the calculated weights are then placed in equation (27) for finding the biases of the network.
B = Σ_j W_{ij} A_j + Σ_{j,k} T_{ijk} A_j A_k    (26)

I_i = b_i - ( Σ_j W_{ij} a_j + Σ_{j,k} T_{ijk} a_j a_k )    (27)
The weight matrices W and T will satisfy equation (26) if and only if the following conditions hold:

Conj(A)B = Conj(B)A
rank(A^T) = rank(A^T, B_j^T),  j = 1, 2, ..., n    (28)

where B_j is the jth row of B.
B. Steps for Calculating the Weight Matrices W and T
We have used a real-valued approach to the complex-valued steepest descent method for determining the weights of the higher order complex-valued Hopfield neural network. To obtain the weight matrices we have to minimize the residual of equation (26) with respect to all the weights. We can define this by the following equation:

min ‖ B - ( Σ_j W_{ij} A_j + Σ_{j,k} T_{ijk} A_j A_k ) ‖ = 0,  A, B, W, T ∈ C^n    (29)
Using equation (29), we define in equation (30) a quadratic error function which has to be minimized with respect to all the weights of the network:

Error = (1/2) [ (ℜTarget - ℜActual)^2 + (ℑTarget - ℑActual)^2 ],  Target, Actual ∈ C^n,
Target = B,  Actual = Σ_j W_{ij} A_j + Σ_{j,k} T_{ijk} A_j A_k    (30)
The changes in the complex weights W and T, i.e. ΔℜW_{ij}, ΔℑW_{ij}, ΔℜT_{ijk} and ΔℑT_{ijk}, are obtained by calculating the partial derivatives of the error given in equation (30) with respect to ℜW_{ij}, ℑW_{ij}, ℜT_{ijk} and ℑT_{ijk} respectively. This is defined by the following equations:

∂Error/∂ℜW_{ij} = -(ℜTarget - ℜActual) ℜA_j - (ℑTarget - ℑActual) ℑA_j
∂Error/∂ℑW_{ij} = (ℜTarget - ℜActual) ℑA_j - (ℑTarget - ℑActual) ℜA_j
∂Error/∂ℜT_{ijk} = -(ℜTarget - ℜActual)(ℜA_j ℜA_k - ℑA_j ℑA_k) - (ℑTarget - ℑActual)(ℜA_j ℑA_k + ℑA_j ℜA_k)
∂Error/∂ℑT_{ijk} = (ℜTarget - ℜActual)(ℜA_j ℑA_k + ℑA_j ℜA_k) - (ℑTarget - ℑActual)(ℜA_j ℜA_k - ℑA_j ℑA_k)    (31)
We have used the following update rule for updating the weights of the higher order complex-valued Hopfield neural network:

ℜW_{ij}^new = ℜW_{ij}^old - η ΔℜW_{ij}
ℑW_{ij}^new = ℑW_{ij}^old - η ΔℑW_{ij}
ℜT_{ijk}^new = ℜT_{ijk}^old - η ΔℜT_{ijk}
ℑT_{ijk}^new = ℑT_{ijk}^old - η ΔℑT_{ijk}    (32)

Here, η is the learning rate, ℜX represents the real part of the complex variable X, and ℑX represents the imaginary part of the complex variable X.
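The separate real and imaginary updates of equations (31)-(32) can be written compactly as a single complex (Wirtinger-style) gradient step, a sketch of which is given below. The step size, iteration count, and random initialization are illustrative assumptions, and the function name fit_weights is ours:

```python
import numpy as np

def fit_weights(A, B, eta=0.05, iters=2000, seed=0):
    """Gradient-descent fit of W and T so that B ≈ W A + T : (A ⊗ A), as in eq. (26).

    A, B: arrays of shape (n, m-1). The complex update below is equivalent to
    the separate real/imaginary updates of equations (31)-(32); eta and iters
    are illustrative choices.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    W = 0.1 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    T = 0.1 * (rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n)))
    for _ in range(iters):
        actual = W @ A + np.einsum('ijk,jp,kp->ip', T, A, A)
        err = B - actual                      # residual, one column per pattern
        W += eta * err @ A.conj().T           # descent step for W
        T += eta * np.einsum('ip,jp,kp->ijk', err, A.conj(), A.conj())  # step for T
    residual = np.linalg.norm(B - (W @ A + np.einsum('ijk,jp,kp->ip', T, A, A)))
    return W, T, residual

# Illustrative usage on small synthetic data (values are arbitrary):
A = np.array([[0.3 + 0.1j], [-0.2 + 0.4j], [0.1 - 0.3j]])
B = np.array([[0.5 - 0.2j], [0.1 + 0.3j], [-0.4 + 0.1j]])
W, T, residual = fit_weights(A, B)
```

Because the model is linear in the entries of W and T, this is an ordinary least-squares descent and the residual shrinks geometrically for a sufficiently small step size.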
We have used the above-mentioned approach for calculating the weights W and T of the network. The obtained weights are then placed in equation (27) to obtain the bias values. In the next subsection we explain the strategy used for simulating the proposed network with the derived weights and biases.
C. Strategy for Simulating the Higher Order Complex-Valued Hopfield Neural Network
In this subsection we present the real-valued approach for the simulation of the higher order complex-valued Hopfield neural network. The weights (W, T) and biases (I) derived in the previous subsection are used here in the simulation of the complex network. Equation (33) shows the dynamics of the complex-valued Hopfield neural network.
du_i/dt = -u_i + Σ_j W_{ij} V_j + Σ_{j,k} T_{ijk} V_j V_k + I_i,  V_i = φ(u_i)    (33)
We separated the real and imaginary components of equation (33), as given by equation (34):

dℜu_i/dt = -ℜu_i + ℜ[ Σ_j W_{ij} V_j + Σ_{j,k} T_{ijk} V_j V_k + I_i ]
dℑu_i/dt = -ℑu_i + ℑ[ Σ_j W_{ij} V_j + Σ_{j,k} T_{ijk} V_j V_k + I_i ],  V_i = φ(u_i)    (34)
Euler's method is used for finding the numerical solution of equations (34). The energy of the network is calculated using equation (35):
E = -(1/2) [ Σ_{i,j} W_{ij} V_i V_j^* + Σ_{i,j,k} T_{ijk} V_i V_j^* V_k^* ] - (1/2) [ Σ_i I_i^* V_i + Σ_i I_i V_i^* ] + Σ_i ∫_0^{u_i} φ^{-1}(v) dv    (35)
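A minimal sketch of the Euler integration of the dynamics (33)/(34) is shown below. Integrating the complex ODE directly is equivalent to integrating the real and imaginary parts separately as in equation (34); the step size, horizon, and the toy parameters in the usage lines are illustrative assumptions:

```python
import numpy as np

def simulate(W, T, I, u0, phi, dt=0.01, steps=5000):
    """Euler integration of du/dt = -u + W V + T:(V ⊗ V) + I with V = phi(u)."""
    u = np.asarray(u0, dtype=complex).copy()
    for _ in range(steps):
        V = phi(u)
        dudt = -u + W @ V + np.einsum('ijk,j,k->i', T, V, V) + I
        u = u + dt * dudt
    return u, phi(u)

# Illustrative usage: with W = T = 0 the dynamics reduce to du/dt = -u + I,
# so u converges to I (the values below are arbitrary).
W = np.zeros((2, 2)); T = np.zeros((2, 2, 2))
I = np.array([0.5 + 0.2j, -0.3 + 0.1j])
u, V = simulate(W, T, I, np.zeros(2), np.tanh)
```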
The methodology explained above has been used for simulation. The energy of the network is calculated during the entire simulation. In the next section we present the results obtained by following the proposed strategy.
5. Application Example
In this section, we present numerical results for the proposed higher order complex-valued Hopfield neural network. We build a network of 3 neurons for storing complex-valued vectors. The nonlinear function mentioned in equation (24) is used as the activation function for the analysis. Suppose the vectors given in equation (36) are to be stored in the network of complex-valued neurons.
a_1 = [0.69 + 0.4i, -0.63 + 0.63i, 0.108 + 0.7i]^T,  a_2 = [0.8 + 0.0i, 0.45 - 0.77i, 0.49 - 0.049i]^T    (36)
We now compute the vectors A and B from the following equations:

A = [a_1 - a_2, a_2 - a_3, ..., a_{m-1} - a_m]
B = [b_1 - b_2, b_2 - b_3, ..., b_{m-1} - b_m],  where b_i = φ^{-1}(a_i)    (37)
The conditions mentioned in equation (28) are evaluated next, which results in the following:

Conj(A)B = Conj(B)A = 43.2868
rank(A^T) = 1
rank(A^T, B_1^T) = 1
rank(A^T, B_2^T) = 1
rank(A^T, B_3^T) = 1    (38)
This satisfies the conditions mentioned in equation (28) [3], [10], [11]. In our case it is found that, if the above-mentioned conditions are satisfied, there exist connection weight matrices W and T such that the stored vectors become equilibrium points of the network.
After obtaining the vectors A and B, we determine the connection weight matrices W and T by using the approach mentioned in the previous section. The values of the matrices W and T are used for calculating the bias values I_i. The values of the matrix W, the matrix T and the biases I_i are shown in the appendix. The error curve obtained while calculating the desired weights for the proposed higher order complex-valued Hopfield neural network is plotted in figure 1. It is evident from figure 1 that the minimization of the function given in equation (29) takes only 1000 iterations to reach the desired minimum. In our simulation the learning rate is η = 0.01.
Figure 1. Error profile obtained while calculating the weights for the proposed higher order complex-valued Hopfield neural network
With the calculated weights and biases, the higher order complex-valued Hopfield neural network is constructed and used for further analysis. We have used the real-valued approach for simulating the proposed higher order complex-valued Hopfield neural network. The real-valued formulation of the dynamics of the higher order complex-valued Hopfield neural network is shown in equations (34). These equations are simulated using Euler's numerical method with the integration step size Δt = 0.01. The total energy of the system, as formulated in equation (35), is plotted for the proposed complex-valued network in Figure 2. It is observed from the energy curve that the system energy is always decreasing. This shows that the proposed network is stable and will lead to the desired solution. Simulation results for complete recall of the first stored vector from one initial condition are shown in Table 1. We found that for any random initial condition the proposed network converges to the desired solution.
Table 1
RESULTS FOR THE PROPOSED HIGHER ORDER COMPLEX-VALUED HOPFIELD NEURAL NETWORK

a_1            u_Initial         u_Final
0.69+0.4i      0.4001+0.7334i    0.69+0.4i
-0.63+0.63i    0.1988+0.3759i    -0.63+0.63i
0.108+0.7i     0.6252+0.0099i    0.108+0.7i
6. Conclusion and Discussion
In this paper we proposed a novel higher order complex-valued Hopfield neural network. We presented a stability analysis and an energy function formulation for the proposed model. It is found that, in order to prove the convergence of the network as proven for the real-valued network, some extra assumptions must be made. We presented a real-valued approach for finding solutions of the proposed complex-valued network. As an application, we have shown that the network is able to recall a stored vector and can be used as a complex-valued associative memory. The proposed approach is also useful for solving other complex-valued optimization problems. We plotted the total energy curve for the proposed model, and it is evident that the network dynamics is stable. The results obtained with the proposed method are quite satisfactory, and the time required for the complete simulation is also small. Like the higher order real-valued Hopfield network, the proposed complex-valued network can be useful for solving other optimization problems. Other issues, such as the hardware implementation of the higher order complex-valued Hopfield neural network, remain to be explored.
Appendix
A. The calculated weight matrix W

W = [ 3.2667            0.8254 - 2.7447i   2.9678 + 0.6269i
      0.8254 + 2.7447i  7.2544             3.4983 + 3.2498i
      2.9678 - 0.6269i  3.4983 - 3.2498i   0.6578 ]    (39)

B. The calculated higher order weight matrix T

T(:, :, 1) = [ 0.2377            0.3683 + 0.8817i   0.2899 + 0.4348i
               0.2923 - 0.1302i  0.0980 + 1.2059i   0.3524 + 0.8796i
               0.1839 + 0.3643i  0.2713 + 1.5543i   0.3439 + 0.7599i ]

T(:, :, 2) = [ 0.5096 + 0.5110i  0.5108 + 1.4760i   0.3452 + 0.6946i
               0.3543 - 0.2935i  0.5108             0.2592 + 0.6537i
               0.0568 + 0.4055i  0.6584 + 0.6899i   0.2628 + 0.5835i ]

T(:, :, 3) = [ 0.3553 + 0.3474i  0.1899 + 0.8399i   0.1757 - 0.0325i
               0.1401 + 0.2566i  0.4349 + 0.9872i   0.2592 + 0.0130i
               0.1490 + 0.4769i  0.3142 + 0.7807i   0.1757 ]    (40)

C. The calculated bias vector I_1

I_1 = [ 0.1486 - 1.5213i
        0.9071 - 1.7542i
        1.2725 - 0.5693i ]    (41)
Figure 2. Total energy for the proposed higher order complex-valued Hopfield neural network during simulation

References
[1] T. Samad and P. Harper, "High-order Hopfield and Tank optimization networks," Parallel Computing, vol. 16, pp. 287-292, 1990.
[2] S.V. Chakravarthy and J. Ghosh, "Studies on a network of complex neurons."
[3] Y. Kuroe, N. Hashimoto, and T. Mori, "On energy function for complex-valued neural networks and its applications," Proceedings of ICONIP'02, vol. 3, pp. 1079-1083.
[4] D. Hebb, Organization of Behavior, John Wiley and Sons, New York, 1949.
[5] A. Hirose, "Dynamics of fully complex-valued neural networks," Electronics Letters, vol. 28, no. 16, pp. 1492-1494, 1992.
[6] A. Hirose (Ed.), Complex-Valued Neural Networks: Theories and Applications, World Scientific Series on Innovative Intelligence, vol. 5, 2003.
[7] J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, April 1982.
[8] S. Jankowski, A. Lozowski, and J.M. Zurada, "Complex-valued multistate neural associative memory," IEEE Transactions on Neural Networks, vol. 7, no. 6, pp. 1491-1495, 1996.
[9] M. Kerem, C. Guzelis, and J.M. Zurada, "A new design for the complex-valued multistate Hopfield associative memory," IEEE Transactions on Neural Networks, vol. 14, no. 4, pp. 891-899, 2003.
[10] J.H. Li and A.N. Michel, "Qualitative analysis and synthesis of a class of neural networks," IEEE Transactions on Circuits and Systems, vol. 35, no. 8, pp. 976-986, 1988.
[11] S.R. Das, "On the synthesis of nonlinear continuous neural networks," IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 2, pp. 413-418, 1991.
[12] K. Mehrotra, C.K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, The MIT Press, 1996.
[13] J.M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House, Mumbai, 1997.
[14] J.J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, vol. 81, pp. 3088-3092, May 1984.
[15] W.S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bull. Math. Biophysics, vol. 5, pp. 115-133, 1943.
[16] J.J. Hopfield and D.W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, pp. 141-152, 1985.
[17] V. Chande and P.G. Pooncha, "On neural networks for analog to digital conversion," IEEE Transactions on Neural Networks, vol. 6, no. 5, pp. 1269-1274, 1995.
[18] D.W. Tank and J.J. Hopfield, "Simple neural optimization: an A/D converter, a single decision circuit and linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, pp. 137-142, 1991.
[19] W. Wan-Liang, X. Xin-Li, and W. Qi-Di, "Hopfield neural networks approach for job shop scheduling problems," Proceedings of the 2003 IEEE International Symposium on Intelligent Control, Houston, Texas, October 5-8, 2003, pp. 935-940.
[20] C. Bousoo and M.R.W. Manning, "The Hopfield neural network applied to the quadratic assignment problem," vol. 3, no. 2, pp. 64-72, 1995.
[21] K. Chakraborty, K. Mehrotra, C.K. Mohan, and S. Ranka, "An optimization network for solving a set of simultaneous linear equations," IEEE Proceedings, pp. 516-521, 1992.
[22] J.H. Park, Y.S. Kim, I.K. Eom, and K.Y. Lee, "Economic load dispatch for piecewise quadratic cost function using Hopfield neural network," IEEE Transactions on Power Systems, vol. 8, no. 3, pp. 1030-1038, 1993.
[23] M. Atencia, G. Joya, and F. Sandoval, "Hopfield neural networks for parametric identification of dynamical systems," Neural Processing Letters, vol. 21, pp. 143-152, 2005.
