
Advances in Science and Technology

Research Journal
Volume 7, No. 18, June 2013, pp. 20–27
Original Article
DOI: 10.5604/20804075.1049490

DISTRIBUTION OF THE TREE PARITY MACHINE SYNCHRONIZATION TIME

Michał Dolecki¹, Ryszard Kozera¹,²

¹ Faculty of Mathematics, IT and Landscape Architecture, The John Paul II Catholic University of Lublin, ul. Konstantynów 1H, 20-708 Lublin, Poland, e-mail: michal.dolecki@kul.pl
² Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences – SGGW, ul. Nowoursynowska 159, 02-776 Warsaw, Poland, e-mail: ryszard_kozera@sggw.edu.pl, ryszard.kozera@gmail.com

Received: 2013.03.08
Accepted: 2013.04.12
Published: 2013.06.10

ABSTRACT
Neural networks' synchronization by mutual learning, discovered and described by Kanter et al. [12], can be used to construct a relatively secure cryptographic key exchange protocol over an open channel. This phenomenon, based on simple mathematical operations, can be performed fast on a computer. The latter makes it competitive with the currently used cryptographic algorithms. An additional advantage is the ease of scaling the system by adjusting the neural network's topology, which results in a satisfactory level of security [24] despite different attack attempts [12, 15]. Previous experiments show that the above synchronization procedure is a stochastic process. Though the time needed to achieve compatible weight vectors in both partner networks depends on their topology, the histograms generated herein render similar distribution patterns. In this paper, simulations and an analysis of the synchronization times are performed to test whether these histograms comply with the histograms of a particular well-known statistical distribution. As verified in this work, they indeed coincide with the Poisson distribution. The corresponding parameters of the empirically established Poisson distribution are also estimated in this work. Evidently, the calculation of such parameters permits one to assess the probability of achieving both networks' synchronization in a given time merely by resorting to the generated distribution tables. Thus, there is no necessity of redoing time-consuming computer simulations.

Keywords: neural networks, neurocryptography.

INTRODUCTION

Neural networks represent a model of the functioning of living organisms' brains, and of the human brain in particular. Currently available technologies allow simulating the work of merely relatively small networks in comparison to the real brain size. There is still ongoing work on increasing the scale of possible experiments. In particular, the work of IBM engineers on the creation and use of chips operating like the brain [10] and The Human Brain Project [18] form the most advanced research topics within the discussed field. The functioning of artificial neural networks copying the human brain is based on processing incoming signals and classifying them into one of several groups. Each of the network's input signals is amplified or reduced by the corresponding weight value, which determines the significance of a given input. Such a classification mechanism is successfully applied to the recognition of bank or shop customers' behavior patterns, or to analyzing the results of medical examinations to discover patterns characteristic of some disease entities [17]. Before one uses a neural network, this network must first undergo a pertinent training process [21, 22]. In particular, the so-called supervised learning step of a feed-forward neural network consists of modifying the network's weights via a specific optimization process.

Algorithms used for finding the optimal weights for a given neural network modify (via the updating procedure – see [8]) the corresponding values of multiple weights, which consequently leads to a very different output of the entire network. Additionally, the neural network is considered to be trained if the number of misclassified input vectors falls below an arbitrarily admitted threshold. Thus, even upon completing the training phase, the network can still provide erroneous classifications. Typically, the initial values of the weights are determined randomly, and although the learning process is described by a deterministic algorithm, the trained network depends on the initial guesses for the network's weights. In particular, for different values of the optimal weights the same accuracy in final classification can still be reached. This can easily be seen in a very simple example of a network implementing the logical AND function. Figure 1 shows the separation of the input signal space for the AND function achieved by a single artificial neuron with two different selected sets of weights. The so-called decision boundaries representing the two collections of weights are illustrated in red and green, respectively. The dotted black point (1,1) stands for the AND value equal to 1, while the remaining three encircled points correspond to the values equal to 0.

Fig. 1. The AND function realized by two weights sets
Evidently, in both cases this single neuron network classifies the input vectors {(0,0), (0,1), (1,0), (1,1)} correctly, although the computed weights vary – each set is geometrically represented by a different decision boundary (modulo scaling by k, w₂ = kw₁, k > 1). Additionally, for this particular example (forming a so-called linearly separable set of data), it is visible that there are infinitely many straight lines separating the above points. Thus, there are also infinitely many different weights for our single neuron network to classify the Boolean AND function. Such ambiguity, yielding a correct network output, extends also to the general class of networks with more complex topologies. The existence of multiple correct weights shifts neural networks aside, to the fringe of cryptographers' attention. The latter comes from the fact that, in order to encrypt and to sign messages, it is usually necessary for both sender and receiver to possess a key number with the same value. This problem can also be seen in newer neuro-fuzzy systems [3]. However, despite the above mentioned disadvantage, in certain applications neural networks are still applied as a sophisticated tool for crypto-analysis [9].
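As a small illustration of this ambiguity, here is a minimal sketch (our own example; the weight and threshold values are hypothetical, the second set simply scaling the first):

```python
import numpy as np

def neuron(w, threshold, x):
    # single threshold unit: fires 1 when w.x exceeds the threshold, else 0
    return 1 if np.dot(w, x) > threshold else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
# two different weight/threshold sets, both realizing the logical AND
for w, t in [((1.0, 1.0), 1.5), ((2.0, 2.0), 3.0)]:
    print([neuron(np.array(w), t, np.array(x)) for x in inputs])
# both lines print [0, 0, 0, 1]
```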
The research conducted in [11-15] indicates the possibility of using artificial neural networks to create a secure cryptographic key exchange protocol. This work introduces specific conditions and modifications imposed on the network topology, the weight values, the network learning procedure and, finally, on the activation function within the output neurons. Such a modified network is called a TPM (Tree Parity Machine) – see e.g. [12, 13]. A characteristic fact for the TPM is that, in the process of mutual network training, a static learning set is substituted by randomly generated input vectors. These restrictions make the so-far used methods of evaluating the accuracy of the network in question (by examining the error of the pertinent energy functions) inapplicable, as there is no a priori given learning set which could be used as a reference for such analysis. The proposed key exchange protocol is based on the phenomenon of synchronization through the mutual learning of neural networks. The sender and receiver create networks with the same topology and start with randomly chosen, different weights' values, which also remain confidential. In the sequel, both networks receive the same input vector and evaluate their outcome values, which are then exchanged. The sender's network treats the result of the recipient's network as the expected result, and in a similar fashion the recipient's network exploits the result of the sender's network. In the next step both networks modify their weights in accordance with the pre-selected learning method. Commonly used methods coincide with the Hebbian rule, the Anti-Hebbian rule or the Random Walk rule [8, 21-23]. In the subsequent step of the algorithm, new input vectors common to both networks are randomly chosen.

As previously, both networks’ results are calcu- to compute the digital signature, to verify signa-
lated and mutually exchanged. The networks’ tures, to compute authentication code from mes-
weights are modified accordingly. Upon certain sages, to verify this code and finally also to estab-
number of iterations of this procedure, two net- lish keys in further communication [1].
works reach synchronization state, guaranteeing
the same respective values ​​of two collections of
weights. The latter can be used directly as crypto- OBJECTIVE AND METHODOLOGY
graphic keys, or as the seed of the algorithm that
generates pseudo-random numbers, forming the Synchronization of the TPMs is a stochastic
role of respective keys [2, 13]. process, and the time needed to reach the same
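To make the exchange concrete, here is a minimal sketch of one possible framing of the protocol loop just described. The TPM class with its output and update methods is a hypothetical stand-in (a sketch of the network internals follows in the TREE PARITY MACHINE section below), and all parameter names are our assumptions:

```python
import numpy as np

def key_exchange(tpm_a, tpm_b, K, N, max_steps=1_000_000, seed=None):
    """Mutual-learning loop: both parties see the same random input,
    exchange outputs over the open channel, and update until their
    weight matrices coincide."""
    rng = np.random.default_rng(seed)
    for step in range(1, max_steps + 1):
        # both networks receive the same random bipolar input vector
        x = rng.choice([-1, 1], size=(K, N))
        tau_a = tpm_a.output(x)      # outputs are evaluated and
        tau_b = tpm_b.output(x)      # exchanged publicly
        tpm_a.update(x, tau_b)       # each network treats the partner's
        tpm_b.update(x, tau_a)       # result as the expected one
        if np.array_equal(tpm_a.weights, tpm_b.weights):
            return step              # synchronization time in steps
    return None                      # no synchronization reached
```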
It is shown in [15] that the bidirectional interaction between sender and receiver, achieved by the exchange of the TPMs' outputs, allows faster synchronization than the unidirectional learning which a potential attacker can perform. This difference in the time needed to finish synchronization is crucial for the security of the created key exchange protocol. Another strengthening of the proposed scheme can be the most precise determination of the point at which the TPMs are already synchronized, which in turn allows a quick termination of the learning process [4]. This makes the protocol less susceptible to a potential attack by a third party, as the time available for being attacked is reduced together with the amount of information potentially accessed by the attacker.

Computers and the data stored on them are exposed to many attacks [6], and their protection is the main research area of cryptology. Cryptographic keys are numbers used in an algorithm as an additional input to the encryption and authentication of documents. In symmetric cryptography systems, the same key is applied for encryption and decryption. Security is based here on ensuring that the key is known only to the sender and receiver, who are both trusted parties. Asymmetric cryptography presents a different approach [20], in which these two operations use a pair of keys. One of them is secret and is referred to as the private key; the other one, which is known, is coined the public key. Using this system, the sender first retrieves the recipient's public key and encrypts the message. In the sequel the receiver, having obtained the ciphertext, transforms it into the plaintext using his own private key. Asymmetric algorithms are slower in action; therefore, in practice they are used for establishing the key that is applied in further communication with the aid of symmetric algorithms, and also to encrypt small parts of data [25]. In addition, depending on various applications, the keys can be divided into different classes. Namely, they are applied to either encrypt messages, to decrypt cryptograms, to compute digital signatures, to verify signatures, to compute authentication codes from messages, to verify these codes and, finally, to establish keys in further communication [1].

OBJECTIVE AND METHODOLOGY

Synchronization of the TPMs is a stochastic process, and the time needed to reach the same values of the weights of networks with a given structure depends on the randomly selected initial weights and on the randomly generated input vectors, respectively. In fact, the size of the network also affects the network synchronization time: naturally, bigger TPMs take longer to synchronize. The simulation results [12] (see also Figure 4) show that the distribution of the synchronization time for TPM networks with a given topology is asymmetric. Namely, the respective frequencies measuring how often both nets synchronize in a given number of steps are high on the left and skewed toward the right. In addition, the network's size does not affect the built-in distribution characteristics. The main task of this work is to determine the type of the observed distribution and its parameters for networks with different structures. A comparison of the generated distribution is accomplished here with the Poisson distribution, which is well known and can be exploited by using e.g. standard distribution tables. The latter permits, in turn, the determination of the probability for TPMs to be synchronized in a given number of steps. A mathematical description of TPM synchronization with the simple Poisson distribution permits further theoretical research of this process, similarly to the application of mathematical models depicting various physical phenomena [16].

The simulations of the networks' synchronization with different topologies are carried out by the authors' computer program. The obtained results of the synchronization measurements are analyzed with MS Excel. Due to the fact that the range of the analyzed sample is very high, it is divided into respective intervals. Therefore, the analysis of the TPMs' synchronization time is confined to a particular interval instead of specifying the probability of an exact number of synchronization steps.
TREE PARITY MACHINE

The artificial neural network used in the synchronization has a tree-like structure with certain selected disjoint receptive input fields of the perceptrons. An example of a TPM network is shown in Figure 2. This feed-forward, multi-layer network always has only one perceptron in the output layer. Alternatively, the disjoint input fields can be linked with all neurons in the hidden layer, but then the unmarked connections between the input impulses and the first hidden neurons' layer should have weights always equal to zero.

Fig. 2. Tree Parity Machine topology

The TPM's hidden layer consists of K neurons, K ∈ ℕ. Each of them is built on the basis of the McCulloch-Pitts model [19] with a bipolar, step activation function given by the following formula:

    σ_{j,t} = f_t(φ_t) = −1 if φ_t ≤ 0, and 1 if φ_t > 0,    j = 1, 2, 3, …, K.

The value of this function is the output of neuron j at time t. Its argument is the adder block value at time t, determined according to the formula:

    φ_t = Σ_{i=1}^{N} w_i x_i,

where N is the number of input impulses to a single neuron, x_i are the input signals and w_i the weights assigned to the corresponding inputs, i ∈ [1; N], N ∈ ℕ. Each weight belongs to the set {−L, −L + 1, …, L − 1, L}. The TPM network structure can thus be expressed with three parameters, as a network of type K-N-L. The last layer neuron performs the multiplication operation, and its outcome is the output of the entire network:

    τ = ∏_{j=1}^{K} σ_j.

Learning a TPM proceeds in accordance with one of three methods [23]:
1. Anti-Hebbian rule – weights are modified if the outputs of both networks are different. This process leads to network synchronization with opposite vectors of weights. The weights' modification complies here with the following formula:

       w_{kn}^{(t+1)} = w_{kn}^{(t)} − x_{kn} σ_k.

   This method was originally used by Kanter et al. in [12]. However, as shown in [15], it is easier to apply the normal Hebbian rule, and the results generated by both methods are similar.
2. Hebbian rule – weights are modified if the TPMs' results are equal. The synchronization process then ends with the same weights' values. The weights are now modified according to:

       w_{kn}^{(t+1)} = w_{kn}^{(t)} + x_{kn} σ_k.

3. Random Walk rule – similar to the Hebbian rule, the modification occurs when the results of the networks are equal, which leads to equal weights' values for both nets. In this method, the weights' adjustment does not depend on the output of the hidden layer neuron, but only on the input signal:

       w_{kn}^{(t+1)} = w_{kn}^{(t)} + x_{kn}.

If a new weight value is greater than L, it is replaced by L. Analogously, if the weight value is less than −L, it is substituted with −L.
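The structures and update rules above translate almost directly into code. Below is a minimal Python sketch, not the authors' program: it assumes the Hebbian variant, and the standard TPM restriction (cf. [23]) that only hidden units whose output σ_k matches the network output τ are updated; the initialization and the class and method names are our own choices.

```python
import numpy as np

class TPM:
    """Tree Parity Machine of type K-N-L, following the formulas above."""

    def __init__(self, K, N, L, rng):
        self.K, self.N, self.L = K, N, L
        # confidential integer weights drawn from {-L, ..., L}
        self.weights = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # adder block per hidden neuron, then the bipolar step activation
        phi = np.sum(self.weights * x, axis=1)
        self.sigma = np.where(phi <= 0, -1, 1)
        # the output neuron multiplies the hidden outputs: tau = prod sigma_j
        self.tau = int(np.prod(self.sigma))
        return self.tau

    def update(self, x, tau_partner):
        # Hebbian rule: weights change only when both networks agree
        if self.tau == tau_partner:
            for k in range(self.K):
                if self.sigma[k] == self.tau:
                    self.weights[k] += x[k] * self.sigma[k]
            # clip every weight back into {-L, ..., L}
            np.clip(self.weights, -self.L, self.L, out=self.weights)
```

Plugged into the `key_exchange` sketch shown earlier, two machines created as `TPM(3, 16, 3, np.random.default_rng())` should reach equal weights within a few hundred steps on average, in line with Table 1 below.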
Networks that have reached the synchronization status remain synchronized regardless of further learning time.

The Random Walk rule modifies the weights using only the input vector, which is randomly chosen at each step of the synchronization, so the values obtained with this method are closer to a uniform distribution than the weights gained by using the Hebbian method. Figure 3 shows a comparison of the distribution of the weights after synchronization of 1000 TPMs of the structure 3–16–5, learned with the Hebbian and Random Walk rules.

Fig. 3. Weights distribution for Hebbian and Random Walk rule

The time required by a network to achieve synchronization depends on the initial values of the weights and on the random input vectors chosen in every step of the synchronization. A typical histogram showing the number of synchronized networks in a specified number of learning cycles is shown in Figure 4.

Fig. 4. Synchronization time histogram for TPM 3–16–3

In this work we analyze 10 000 synchronizations for networks of type 3-16-3. The shortest observed synchronization time (measured in the corresponding number of steps) is 46 steps and the longest one is 1156, which gives a range of 1110 possible cycles. The average synchronization time equals 238 steps. The number of classes used to create the histogram was counted using the Huntsberger formula [5], k = 1 + 3.32 · log 10000 = 14.28, which is subsequently doubled for better graph readability.
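The class count quoted above is easy to check; a quick sketch, assuming the base-10 logarithm in the Huntsberger formula:

```python
import math

def huntsberger_classes(n):
    # Huntsberger formula: k = 1 + 3.32 * log10(n)
    return 1 + 3.32 * math.log10(n)

k = huntsberger_classes(10_000)   # 1 + 3.32 * 4 = 14.28
bins = round(k) * 2               # doubled for better graph readability
```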

RESULTS

The histograms showing the number of synchronized networks in a specified number of steps are similar for all networks with different parameters K-N-L. They all resemble the histogram in Figure 4; the main difference is just the number of steps needed to synchronize the TPMs. For a network with the number of neurons in the hidden layer equal to K = 3 and with the respective number of input signals for each of them equal to N = 16, 1000 synchronizations with different values of L ∈ {1, 2, …, 5, 10, 15, …, 50} are carried out in our tests. The Random Walk rule is invoked here as the learning method. Table 1 shows a summary of the numbers of steps needed to achieve synchronization: namely, the shortest and the longest ones observed, and the average ones.

Table 1. Summary of the number of TPM's synchronization steps

TPM parameters | Min | Max | Average
3-16-1 | 7 | 132 | 33.4
3-16-2 | 37 | 394 | 118.3
3-16-3 | 80 | 702 | 264.2
3-16-4 | 160 | 1469 | 469.7
3-16-5 | 263 | 2849 | 755.6
3-16-10 | 1159 | 8579 | 3239.3
3-16-15 | 2991 | 20263 | 7657.9
3-16-20 | 5044 | 46057 | 14444.6
3-16-25 | 8242 | 62486 | 23033.2
3-16-30 | 14520 | 92569 | 34104.3
3-16-35 | 19209 | 124496 | 48443.2
3-16-40 | 23919 | 161440 | 64018.9
3-16-45 | 26191 | 251386 | 82656.0
3-16-50 | 42856 | 288219 | 102382.7

Given the above listed experimentally generated data, we next determined for each sample the range, the number of classes for histogram preparation, and the width and multiplicity of each of the classes' intervals. Dividing the multiplicity of a given class by the size of the sample, the empirical probabilities of the TPM's synchronization in each time interval are consequently determined.

For a TPM of type 3-16-2 our results are as follows:
1. Sample size n = 1000.
2. Range R = 357.
3. Number of intervals k = 11.
4. Interval size d ≈ 32.6.

The corresponding boundaries of the intervals, their multiplicities, and the probabilities of the TPM 3-16-2 network synchronizing in a number of steps belonging to each interval are shown in Table 2.

Table 2. TPM 3-16-2 synchronization time summary

Interval | Boundaries | Multiplicity | Probability
1 | 37.0–69.6 | 96 | 0.096
2 | 69.6–102.1 | 338 | 0.338
3 | 102.1–134.7 | 292 | 0.292
4 | 134.7–167.3 | 137 | 0.137
5 | 167.3–199.9 | 75 | 0.075
6 | 199.9–232.4 | 38 | 0.038
7 | 232.4–265.0 | 12 | 0.012
8 | 265.0–297.6 | 8 | 0.008
9 | 297.6–330.2 | 2 | 0.002
10 | 330.2–362.7 | 1 | 0.001
11 | 362.7–395.3 | 1 | 0.001

Given the shape of the histogram, we formulate the hypothesis that the Poisson distribution, defined by the formula

    P[X = k] = λ^k e^{−λ} / k!,

is the probability function of the TPM synchronization time. The parameter λ is the weighted average calculated on the basis of the empirical results, using the class number as the value and its multiplicity as the weight. Since the sum of the intervals' multiplicities is equal to the number of analyzed nets, the weighted average for this net is equal to λ = 2.979. In the sequel, the above mentioned hypothesis is verified with the chi-square test, in which the following sum is calculated:

    χ² = Σ_{i=1}^{r} (O_i − E_i)² / E_i,

where r is the number of classes, O_i is the observed probability and E_i stands for the expected probability. Recall that the chi-square test requires the multiplicity of each class to be not less than 8 (see [7]). For this reason, the intervals 8–11 in Table 2 are merged to form one class of multiplicity 12 with the respective probability 0.012 = 0.008 + 0.002 + 0.001 + 0.001. Finally, the number of classes reads as 8, and the sum χ² ≈ 0.12. Since the number of classes is 8, the number of degrees of freedom is 7 = 8 − 1. From the chi-square statistical distribution table, the critical value for 7 degrees of freedom and reliability level 0.999 reads as 0.59849. Since the sum equal to 0.12 is much lower than the critical value 0.59849, even for this restrictive level of reliability there is no reason to reject the hypothesis. Hence the analyzed distribution is a Poisson distribution.
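The estimation and the test above can be reproduced directly from Table 2. The sketch below is our reconstruction: intervals 8–11 are merged into a single class before averaging, which yields the λ = 2.979 and χ² ≈ 0.12 quoted above.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def fit_and_test(class_numbers, multiplicities):
    """Estimate lambda as the multiplicity-weighted average of the class
    numbers, then form the chi-square sum over observed vs. expected
    class probabilities, as described above."""
    n = sum(multiplicities)
    lam = sum(k * m for k, m in zip(class_numbers, multiplicities)) / n
    observed = [m / n for m in multiplicities]
    expected = [poisson_pmf(k, lam) for k in class_numbers]
    return lam, sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Table 2 for TPM 3-16-2, intervals 8-11 merged into one class of multiplicity 12
lam, chi2 = fit_and_test(range(1, 9), [96, 338, 292, 137, 75, 38, 12, 12])
# lam == 2.979 and chi2 ≈ 0.12, in line with the text above
```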
Similar statistical analysis is performed for the other networks contained in Table 1; the results of this research are presented in Table 3. For each of the analyzed networks we specify here the observed minimal number of steps required to synchronize the network, the approximate width of the interval, and the number of intervals used in the chi-square test. As for the TPM described above, the last intervals are collated into one group in order to satisfy the requirement of a minimum multiplicity of elements in an interval; the latter accounts for some variation in the number of intervals for the different networks. The following columns contain the empirically determined weighted averages, which are used as the parameter λ of the Poisson distribution, the chi-square sum, and its critical value for the reliability level equal to 0.999. The resulting value of the sum is less than the critical value of the χ² distribution for all tabled levels of reliability; thus the table includes the most restrictive level.

Table 3. Chi-square test summary

TPM parameters | Min | Interval width | Interval count | Average (parameter λ) | Sum χ² | Critical value
3-16-1 | 7 | 11.4 | 7 | 2.81 | 0.13 | 0.381
3-16-2 | 37 | 32.6 | 8 | 2.98 | 0.12 | 0.598
3-16-3 | 80 | 56.8 | 10 | 3.74 | 0.09 | 1.152
3-16-4 | 160 | 119.4 | 9 | 3.09 | 0.08 | 0.857
3-16-5 | 263 | 236.0 | 7 | 2.58 | 0.13 | 0.381
3-16-10 | 1159 | 677.0 | 9 | 3.57 | 0.06 | 0.857
3-16-15 | 2991 | 1576.0 | 9 | 3.47 | 0.04 | 0.857
3-16-20 | 5044 | 3742.1 | 8 | 3.00 | 0.13 | 0.598
3-16-25 | 8242 | 4949.3 | 10 | 3.48 | 0.08 | 1.152
3-16-30 | 14520 | 7121.3 | 9 | 3.24 | 0.07 | 0.857
3-16-35 | 19209 | 9606.5 | 10 | 3.55 | 0.09 | 1.152
3-16-40 | 23919 | 12547.5 | 9 | 3.70 | 0.09 | 0.857
3-16-45 | 26191 | 20547.0 | 8 | 3.27 | 0.15 | 0.598
3-16-50 | 42856 | 22387.1 | 8 | 3.16 | 0.08 | 0.598

The data listed in Table 3 permit us to answer the question of what the probability is for a TPM of a given structure to synchronize within a certain time interval. For example, let the network parameters be 3-16-4 and the question be: "What is the probability that this TPM synchronizes in 600 cycles of learning?" First, one determines, using the shortest time and the width of the interval, to which interval the given synchronization time belongs. The first interval here is (160, 274), and our chosen number 600 does not belong to it. The fourth interval coincides with (519, 637) and contains the selected number 600, which is the searched length of the synchronization. Then, from the Poisson distribution tables the value for k = 4 and λ = 3.09 can be read, yielding a probability of 0.1728. This renders the answer to the above stated question. In addition, one can also calculate the cumulative probability value for k ≤ 4; the latter answers the question of what the probability is of network synchronization in up to 600 steps. It should be mentioned here that an alternative is to compute the probability of the opposite event, by calculating the probability of network synchronization in more than 600 steps.
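The table lookup in this example can equally be computed from the Poisson formula itself; a short sketch of the computation just described:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# TPM 3-16-4: 600 learning cycles falls into interval k = 4, with lambda = 3.09
p_at_k4 = poisson_pmf(4, 3.09)                         # ~0.1728, as read above
p_up_to = sum(poisson_pmf(k, 3.09) for k in range(5))  # cumulative P[X <= 4]
p_more = 1.0 - p_up_to                                 # opposite event: > 600 steps
```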
CONCLUSIONS

This work showed that the distribution of the TPM network's synchronization time is a Poisson distribution with parameter λ, which can be estimated as a weighted average using the observed synchronization times. The outcomes of the chi-square test of conformity lead to the recognition of the empirical synchronization time distribution as a Poisson distribution at the significance level equal to 0.999. The simulations and analysis performed herein allow the parameter λ to be determined for the networks in question with different structures and collected in simple tables. The results generated in the presented research permit further analysis of the TPM synchronization process based on the values of the Poisson distribution, without the necessity of undergoing long-term computer simulations and performing the analysis of the obtained results anew.

REFERENCES

1. Barker E., Barker W., Burr W., Polk W., Smid M.: Recommendation for Key Management, Part 1: General (Revision 3). National Institute of Standards and Technology Special Publication 800-57, 2012.
2. Bisalapur S.: Design of an efficient neural key distribution center. International Journal of Artificial Intelligence & Applications, 2(1), 2011, 60–69.
3. Charlak M., Jakubowski M.: Porównanie systemów rozmytych i sztucznych sieci neuronowych. Advances in Science and Technology – Research Journal, 4, 2010, 54–64.
4. Dolecki M.: Tree Parity Machine synchronization time – statistical analysis. Труды БГТУ, Серия Математика, Физика, Информатика, 6(153), Mińsk, 2012, 149–151.
5. Gardiner V., Gardiner G.: Analysis of Frequency Distributions. Geo Abstracts, University of East Anglia, 1979.
6. Gil A., Karoń T.: Analiza środków i metod ochrony systemów operacyjnych. Advances in Science and Technology – Research Journal, 12, 2012, 149–168.
7. Greń J.: Statystyka matematyczna. Modele i zadania. Warszawa, 1978.
8. Hassoun M.: Fundamentals of Artificial Neural Networks. MIT Press, 1995.
9. Ibrahim S., Maarof M.: A review on biological inspired computation in cryptology. Jurnal Teknologi Maklumat, 17(1), 2005, 90–98.
10. Imam N., Cleland T., Manohar R., Merolla P., Arthur J., Akopyan F., Modha D.: Implementation of olfactory bulb glomerular-layer computations in a digital neurosynaptic core. Frontiers in Neuroscience, 6(83), 2012.
11. Kanter I., Kinzel W.: The theory of neural networks and cryptography. Proceedings of the XXII Solvay Conference on Physics on the Physics of Communication, 2002, 631–644.
12. Kanter I., Kinzel W., Kanter E.: Secure exchange of information by synchronization of neural networks. Europhysics Letters, 57, 2002, 141–147.
13. Kinzel W., Kanter I.: Neural cryptography. arXiv: cond-mat/0208453, 2002.
14. Klein E., Mislovaty R., Kanter I., Ruttor A., Kinzel W.: Synchronization of neural networks by mutual learning and its application to cryptography. Advances in Neural Information Processing Systems, 17, MIT Press, Cambridge, 2005, 689–696.
15. Klimov A., Mityagin A., Shamir A.: Analysis of neural cryptography. In: Y. Zheng (ed.), Advances in Cryptology – ASIACRYPT 2002, Springer, 2003, 288–298.
16. Lenik K., Korga S.: FEM applications to model friction processes in plastic strain conditions. Archives of Materials Science and Engineering, 41, 2010, 121–124.
17. Lula P.: Sztuczne sieci neuronowe jako narzędzie analiz typu data mining. In: Data mining: metody i przykłady, StatSoft, 2002.
18. Markram H.: The Blue Brain Project. Nature Reviews Neuroscience, 7, 2006, 153–160.
19. McCulloch W., Pitts W.: A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 1943, 115–133.
20. Menezes A., Van Oorschot P., Vanstone S.: Handbook of Applied Cryptography. CRC Press, 1996.
21. Osowski S.: Sieci neuronowe w ujęciu algorytmicznym. WNT, 1996.
22. Rutkowski L.: Metody i techniki sztucznej inteligencji. PWN, Warszawa, 2006.
23. Ruttor A.: Neural Synchronization and Cryptography. PhD thesis, Würzburg, 2006.
24. Ruttor A., Kinzel W., Naeh R., Kanter I.: Genetic attack on neural cryptography. Physical Review E, 73(3), 036121, 2006.
25. Stokłosa J., Bilski T., Pankowski T.: Bezpieczeństwo danych w systemach informatycznych. PWN, Warszawa, 2001.