Advances in Science and Technology – Research Journal
Volume 7, No. 18, June 2013, pp. 20–27
Original Article
DOI: 10.5604/20804075.1049490
1 Faculty of Mathematics, IT and Landscape Architecture, The John Paul II Catholic University of Lublin, ul. Konstantynów 1H, 20-708 Lublin, Poland, e-mail: michal.dolecki@kul.pl
2 Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences – SGGW, ul. Nowoursynowska 159, 02-776 Warsaw, Poland, e-mail: ryszard_kozera@sggw.edu.pl, ryszard.kozera@gmail.com
…algorithms used for finding the optimal weights for a given neural network modify (via the updating procedure, see [8]) the corresponding values of multiple weights, which consequently leads to a very different output of the entire network. Additionally, a neural network is considered to be trained if the number of misclassified input vectors falls below an arbitrarily admitted threshold. Thus, upon completing the training phase, the network can still provide erroneous classifications. Typically, the initial values of the weights are determined randomly, and although the learning process is described by a deterministic algorithm, the trained network depends on the initial guesses for the network's weights. In particular, for different values of the optimal weights the same accuracy in the final classification can still be reached. This can easily be seen in the very simple example of a network implementing the logical AND function. Figure 1 shows the separation of the input signal space for the AND function achieved by a single artificial neuron with two different selected sets of weights. The so-called decision boundaries representing the two collections of weights are illustrated in red and green, respectively. The dotted black point (1,1) stands for the AND value equal to 1, while the remaining three encircled points correspond to the values equal to 0.

Fig. 1. The AND function realized by two weight sets

Evidently, in both cases this single-neuron network classifies the input vectors {(0,0), (0,1), (1,0), (1,1)} correctly, although the computed weights vary – each one is geometrically represented by a different decision boundary (modulo scaling, w = kw1 with k > 1). Additionally, for this particular example (forming a so-called linearly separable set of data), it is visible that there are infinitely many straight lines separating the above points. Thus, there are also infinitely many different weights for our single-neuron network to classify the Boolean AND function. Such ambiguity, while yielding a correct network output, extends also to the general class of networks with more complex topologies. The existence of multiple correct weights in neural networks pushes them to the fringe of cryptographers' attention. The latter comes from the fact that, in order to encrypt and to sign messages, it is usually necessary for both sender and receiver to possess a key number with the same value. This problem can also be seen in newer neuro-fuzzy systems [3]. However, despite the above-mentioned disadvantage, in certain applications neural networks are still applied as a sophisticated tool for cryptanalysis [9].
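The weight ambiguity just described is easy to verify directly. The minimal sketch below checks three weight sets on a single threshold neuron; the particular values are illustrative only (they are not the ones plotted in Figure 1), one set being a scaled copy of another and one defining a genuinely different separating line.

```python
def neuron(x1, x2, w1, w2, bias):
    """Single threshold neuron with a step activation."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

weight_sets = [
    (1.0, 1.0, -1.5),   # one separating line for AND
    (3.0, 3.0, -4.5),   # the same line scaled by k = 3 (w = k * w1)
    (1.0, 2.0, -2.5),   # a genuinely different separating line
]
for w1, w2, bias in weight_sets:
    outputs = [neuron(x1, x2, w1, w2, bias) for x1, x2 in inputs]
    print((w1, w2, bias), "->", outputs)   # always prints [0, 0, 0, 1]
```

All three weight sets classify the four input vectors identically, illustrating that infinitely many weight vectors realize the same AND function.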
The research conducted in [11-15] indicates the possibility of using artificial neural networks to create a secure cryptographic key exchange protocol. This work introduces specific conditions and modifications imposed on the network topology, the weight values, the network's learning procedure and, finally, on the activation function within the output neurons. Such a modified network is called a TPM (Tree Parity Machine), see e.g. [12, 13]. A characteristic fact for the TPM is that in the process of mutual network training a static learning set is substituted by randomly generated input vectors. These restrictions make the methods used so far to evaluate the accuracy of the network in question (by examining the error of the pertinent energy functions) inapplicable, as there is no a priori given learning set which could be used as a reference for such analysis. The proposed key exchange protocol is based on the phenomenon of synchronization of mutually learning neural networks. The sender and receiver create networks with the same topology and start with randomly chosen different weight values, which also remain confidential. Subsequently, both networks receive the same input vector and evaluate their output values, which are then exchanged. The sender's network treats the result of the recipient's network as the expected result, and in a similar fashion the recipient's network exploits the result of the sender's network. In the next step both networks modify their weights in accordance with the pre-selected learning method. Commonly used methods coincide with the Hebbian rule, the Anti-Hebbian rule or the Random Walk rule [8, 21-23]. In the subsequent step of the algorithm new input vectors common to both networks are randomly chosen.
As previously, both networks' results are calculated and mutually exchanged, and the networks' weights are modified accordingly. After a certain number of iterations of this procedure, the two networks reach a synchronization state, guaranteeing the same respective values of the two collections of weights. The latter can be used directly as cryptographic keys, or as the seed of an algorithm that generates pseudo-random numbers playing the role of the respective keys [2, 13].

It is shown in [15] that the bidirectional interaction between sender and receiver, which is achieved by the exchange of the TPMs' outputs, allows faster synchronization than the unidirectional learning which a potential attacker can perform. This difference in the time needed to finish synchronization is crucial for the security of the created key exchange protocol. Another strengthening of the proposed scheme can be a more precise determination of the point at which the TPMs are already synchronized, which in turn allows a quick termination of the learning process [4]. This makes a successful attack by a third party less likely, as the time available for the attack is reduced together with the amount of information potentially accessible to the attacker.
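The exchange-and-update loop described above is easy to sketch in code. The fragment below is a minimal illustration of the protocol flow only, assuming a TPM object with output and update methods (one such sketch is given after the TREE PARITY MACHINE section below); the names net_a, net_b and synchronize are ours, and this is not the authors' simulation program.

```python
import random

def synchronize(net_a, net_b, max_steps=1_000_000):
    """Mutual learning: common random inputs, publicly exchanged outputs,
    weight updates, repeated until both weight matrices coincide."""
    for step in range(1, max_steps + 1):
        # A new bipolar input vector common to both networks.
        x = [[random.choice((-1, 1)) for _ in range(net_a.N)]
             for _ in range(net_a.K)]
        tau_a = net_a.output(x)   # exchanged over the public channel
        tau_b = net_b.output(x)
        net_a.update(x, tau_b)    # partner's output taken as the expected one
        net_b.update(x, tau_a)
        if net_a.weights == net_b.weights:
            return step           # synchronization state reached
    return None
```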
Computers and the data stored on them are exposed to many attacks [6], and their protection is the main research area of cryptology. Cryptographic keys are numbers used in an algorithm as an additional input to the encryption and authentication of documents. In symmetric cryptography systems, the same key is applied for encryption and decryption. Security is based here on ensuring that the key is known only to the sender and receiver, who are both trusted parties. Asymmetric cryptography presents a different approach [20], in which both of these operations use a pair of keys. One of them is secret and is referred to as the private key, and the other one, which is made known, is coined the public key. Using this system, the sender first retrieves the recipient's public key and encrypts the message. Subsequently the receiver, having obtained the ciphertext, transforms it into the plaintext using his own private key. Asymmetric algorithms are slower in operation. Therefore, in practice they are used for establishing the key that is applied in further communication with the aid of symmetric algorithms, and also to encrypt small portions of data [25]. In addition, depending on various applications, the keys can be divided into different classes. Namely, they are applied either to encrypt messages, to decrypt cryptograms, to compute digital signatures, to verify signatures, to compute authentication codes from messages, to verify these codes, and finally also to establish keys in further communication [1].

OBJECTIVE AND METHODOLOGY

Synchronization of the TPMs is a stochastic process, and the time needed to reach the same values of the weights of a network with a given structure depends on the randomly selected initial weights and on the randomly generated input vectors, respectively. In fact, the size of the network also affects the network synchronization time. Naturally, bigger TPMs take longer to synchronize. The simulation results [12] (see also Figure 4) show that the distribution of the synchronization time for TPM networks with a given topology is asymmetric. Namely, the respective frequencies measuring how often both nets synchronize in a given number of steps are high on the left and skewed toward the right. In addition, the network's size does not affect this built-in characteristic of the distribution. The main task of this work is to determine the type of the observed distribution and its parameters for networks with different structures. The generated distribution is compared here with the Poisson distribution, which is well known and can be handled by using e.g. standard distribution tables. The latter permits, in turn, to determine the probability of the TPMs being synchronized in a given number of steps. A mathematical description of TPM synchronization with a simple Poisson distribution permits further theoretical research of this process, similarly to the application of mathematical models depicting various physical phenomena [16].

The simulations of the networks' synchronization for different topologies are carried out by the authors' computer program. The obtained results of the synchronization measurements are analyzed with MS Excel. Due to the fact that the range of the analyzed sample is very large, it is divided into respective intervals. Therefore, the analysis concerns the probability that the TPMs' synchronization time falls within a particular interval, instead of specifying the probability of an exact number of synchronization steps.
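As a small illustration of this interval analysis, the sketch below groups simulated synchronization times into equal-width classes and derives the empirical probabilities. This is an assumed re-creation in Python of a step the authors performed in MS Excel; the function name and the choice of class count are ours.

```python
def make_classes(times, num_classes):
    """Split synchronization times into equal-width classes; return the
    class width, the multiplicity of each class, and the empirical
    probability of each class (multiplicity divided by sample size)."""
    lo, hi = min(times), max(times)
    width = (hi - lo) / num_classes
    counts = [0] * num_classes
    for t in times:
        # The top edge of the range is folded into the last class.
        i = min(int((t - lo) / width), num_classes - 1)
        counts[i] += 1
    probs = [c / len(times) for c in counts]
    return width, counts, probs
```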
TREE PARITY MACHINE
The artificial neural network used in the synchronization resembles a tree-like structure with selected disjoint perceptron receptive input fields. An example of a TPM network is shown in Figure 2. Such a feed-forward, multi-layer network always has only one perceptron in its output layer. Alternatively, the disjoint input fields can be linked with all neurons in the hidden layer, but the unmarked connections between the input impulses and the first hidden neurons' layer should then have weights permanently equal to zero.

The TPM's hidden layer consists of $K$ neurons, $K \in \mathbb{N}$. Each of them is built on the basis of the McCulloch-Pitts model [19] with the bipolar, step activation function given by the following formula:

$$\sigma_{j,t} = f_t(\varphi_t) = \begin{cases} -1, & \text{if } \varphi_t \le 0, \\ 1, & \text{if } \varphi_t > 0, \end{cases} \qquad j = 1, 2, 3, \ldots, K.$$

The value of this function is the output of neuron $j$ at time $t$. Its argument is the value of the adder block at time $t$, determined according to the formula:

$$\varphi_t = \sum_{i=1}^{n} w_i x_i,$$

where $n \in \mathbb{N}$ is the number of input impulses to a single neuron, $x_i$ are the input signals and $w_i$ are the weights assigned to the corresponding inputs, $i \in [1; n]$. Each weight belongs to the set $\{-L, -L+1, \ldots, L-1, L\}$. The TPM network structure can thus be expressed with three parameters as a network of type K-N-L, where N stands for the number of inputs per hidden neuron. The last-layer neuron performs the multiplication operation, and its outcome is the output of the entire network:

$$\tau = \prod_{j=1}^{K} \sigma_j.$$

Learning a TPM proceeds in accordance with one of the following three methods [23] (a code sketch follows below):

1. Anti-Hebbian rule – the weights are modified if the outputs of both networks are different. This process leads to network synchronization with opposite vectors of weights. The weights' modification complies here with the following formula:
$$w_{kj}^{(t+1)} = w_{kj}^{(t)} - x_{kj}\,\sigma_k.$$
This method was originally used by Kanter et al. in [12]. However, as shown in [15], it is easier to apply the normal Hebbian rule, and the results generated by both methods are similar.

2. Hebbian rule – the weights are modified if the TPMs' results are equal. The synchronization process then ends with the same weights' values. The weights are now modified according to:
$$w_{kj}^{(t+1)} = w_{kj}^{(t)} + x_{kj}\,\sigma_k.$$

3. Random Walk rule – similarly to the Hebbian rule, the modification occurs when the results of the networks are equal, which leads to equal weights' values for both nets. In this method, the weights' adjustment does not depend on the output of the hidden layer neuron, but only on the input signal:
$$w_{kj}^{(t+1)} = w_{kj}^{(t)} + x_{kj}.$$

If a new weight value $w_{kj}^{(t+1)}$ is greater than $L$, it is replaced by $L$. Analogously, if the weight falls below $-L$, it is replaced by $-L$.
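As a minimal illustration of the formulas above, the sketch below implements a K-N-L machine with the bipolar step activation, the product output τ, and the Hebbian rule with clipping to {−L, …, L}. The class and method names are ours and this is a sketch, not the authors' program; together with the synchronize loop sketched earlier it runs a complete toy key exchange.

```python
import random

class TPM:
    """Tree Parity Machine of type K-N-L: K hidden McCulloch-Pitts
    neurons, N inputs each, integer weights in {-L, ..., L}."""

    def __init__(self, K, N, L):
        self.K, self.N, self.L = K, N, L
        # Secret, randomly chosen initial weights (one row per hidden neuron).
        self.weights = [[random.randint(-L, L) for _ in range(N)]
                        for _ in range(K)]

    def output(self, x):
        """sigma_j = bipolar sign of the adder value phi for hidden unit j;
        tau = product of all sigma_j."""
        self.sigma = [
            -1 if sum(w * xi for w, xi in zip(row, x_row)) <= 0 else 1
            for row, x_row in zip(self.weights, x)
        ]
        tau = 1
        for s in self.sigma:
            tau *= s
        return tau

    def update(self, x, tau_partner):
        """Hebbian rule: w_kj <- w_kj + x_kj * sigma_k when both TPMs'
        results are equal, then clip back into {-L, ..., L}.
        (Some TPM variants additionally restrict the update to hidden
        units whose sigma_k equals tau.)"""
        if self.output(x) != tau_partner:
            return                      # outputs differ: no modification
        for k in range(self.K):
            for j in range(self.N):
                w = self.weights[k][j] + x[k][j] * self.sigma[k]
                self.weights[k][j] = max(-self.L, min(self.L, w))
```

For example, `synchronize(TPM(3, 16, 3), TPM(3, 16, 3))` returns the number of steps after which the two weight matrices, i.e. the shared key material, coincide.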
Table 1. Minimum, maximum and average number of synchronization steps for TPM networks of type 3-16-L, with L ∈ {1, 2, …, 5, 10, 15, …, 50}

TPM parameters | Min | Max | Average
3-16-1 | 7 | 132 | 33.4
3-16-2 | 37 | 394 | 118.3
3-16-3 | 80 | 702 | 264.2
3-16-4 | 160 | 1469 | 469.7
3-16-5 | 263 | 2849 | 755.6
3-16-10 | 1159 | 8579 | 3239.3
3-16-15 | 2991 | 20263 | 7657.9
3-16-20 | 5044 | 46057 | 14444.6
3-16-25 | 8242 | 62486 | 23033.2
3-16-30 | 14520 | 92569 | 34104.3
3-16-35 | 19209 | 124496 | 48443.2
3-16-40 | 23919 | 161440 | 64018.9
3-16-45 | 26191 | 251386 | 82656.0
3-16-50 | 42856 | 288219 | 102382.7

Given the above-listed experimentally generated data, we next determined for each sample the range, the number of classes for the histogram preparation, and the width and the multiplicity of each of the classes' intervals. Dividing the multiplicity of a given class by the size of the sample, the empirical probabilities of the TPM's synchronization in each time interval are consequently obtained. A fragment of the resulting Table 2 is shown below.

Table 2 (fragment). Synchronization-time classes with their interval bounds, multiplicities and empirical probabilities

Class | Interval | Multiplicity | Empirical probability
6 | 199.9–232.4 | 38 | 0.038
7 | 232.4–265.0 | 12 | 0.012
8 | 265.0–297.6 | 8 | 0.008
9 | 297.6–330.2 | 2 | 0.002
10 | 330.2–362.7 | 1 | 0.001
11 | 362.7–395.3 | 1 | 0.001

The parameter λ of the Poisson distribution,

$$P[X = k] = \frac{\lambda^k e^{-\lambda}}{k!},$$

is estimated as the weighted average calculated on the basis of the empirical results, using the class number as the value and its multiplicity as the weight. Hence, the sum of the intervals' multiplicities is equal to the number of analyzed nets, so the weighted average for this net is equal to λ = 2.979. Subsequently, the above-mentioned hypothesis (that the synchronization time follows the Poisson distribution) is verified in compliance with the chi-square test, in which the following sum is calculated:

$$\chi^2 = \sum_{i=1}^{r} \frac{(O_i - E_i)^2}{E_i},$$

where $r$ is the number of classes, $O_i$ is the observed probability and $E_i$ stands for the expected probability. Recall that the chi-squared test requires the multiplicity of each class to be not less than 8 (see [7]). For this reason, the intervals 8–11 in Table 2 are merged to form a single class of sufficient multiplicity; for the example above this yields χ² ≈ 0.12.
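A sketch of this estimate and test follows. The function names are ours, and the class numbers and multiplicities passed in would come from the full Table 2, which is only partially reproduced above, so the arguments in the usage comment are placeholders; only the two formulas are taken from the text.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P[X = k] = lambda^k * e^(-lambda) / k!"""
    return lam ** k * exp(-lam) / factorial(k)

def lambda_and_chi_square(class_numbers, multiplicities):
    """lambda: weighted average of class numbers (weights = multiplicities);
    chi^2: sum over classes of (O_i - E_i)^2 / E_i, with O_i the empirical
    and E_i the Poisson probability of class i."""
    n = sum(multiplicities)          # equals the number of analyzed nets
    lam = sum(k * m for k, m in zip(class_numbers, multiplicities)) / n
    observed = [m / n for m in multiplicities]
    expected = [poisson_pmf(k, lam) for k in class_numbers]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return lam, chi2

# Placeholder usage (real multiplicities come from the full Table 2,
# with low-multiplicity classes merged beforehand):
# lam, chi2 = lambda_and_chi_square(list(range(1, 12)), multiplicities)
```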