__________________________________________________________________________________________________
IJIRAE: Impact Factor Value – Citefactor 1.9 (2017) SJIF: Innospace, Morocco (2016): 3.916 | PIF: 2.469 |
Jour Info: 4.085 | ISRAJIF (2017): 4.011 | Indexcopernicus: (ICV 2016): 64.35
IJIRAE © 2014- 18, All Rights Reserved Page–383
International Journal of Innovative Research in Advanced Engineering (IJIRAE) ISSN: 2349-2163
Issue 11, Volume 5 (November 2018) www.ijirae.com
NOMENCLATURE
: input matrix of the network
: weight matrix
: bias matrix
: activation function
: thermal diffusivity coefficient
: thermal conductivity of the material
: convection coefficient of the outside air
: initial temperature of the wall and of the interior
: outside air temperature
: activation function of the hidden layer, differentiable in the Riemann sense
I. INTRODUCTION
Globally, the building sector accounts for 30 to 40% of total energy consumption [1] and a large proportion of
anthropogenic environmental impacts [2]. As a result, it has a high potential for improvement on both the energy
and the environmental fronts [3]. To meet these energy and environmental challenges, several solutions can be
implemented in a complementary way. Applied to the building, they involve working simultaneously on the
consumption of the building, its structure, and its various equipment, from the design phase onward [4]. The principle of
function approximation and its application to modeling the heat conduction phenomenon of a monolayer wall is
based here on the one-dimensional analytical solution of the problem [5][6]. To model an artificial neuron,
TensorFlow [7] adopts its own representation, as shown in Fig.1.
a = f(X W + b)    (2)
The approximation of the function f(x) then consists in determining the function g(x) that minimizes the
performance function. This approximation is possible with a neural network under the condition that the
function f(x) to be represented is continuous and defined on the domain considered.
II. NETWORK OF NEURONS
The GRBF network is a two-layer neural network whose first layer contains neurons [9] whose parameters are a
prototype vector P and a spreading coefficient. The second layer performs a linear combination of the outputs of
the first layer, and a bias is added to the total.
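A minimal NumPy sketch of this two-layer GRBF structure; the prototype vectors P, spreading coefficients, and output weights below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def grbf(x, P, sigma, w, b):
    # First layer: Gaussian response to the distance from each prototype
    a1 = np.exp(-((x[:, None] - P[None, :]) ** 2) / (2.0 * sigma ** 2))
    # Second layer: linear combination of the first-layer outputs plus a bias
    return a1 @ w + b

x = np.linspace(0.0, 1.0, 5)           # sample inputs (1-D for simplicity)
P = np.array([0.0, 0.5, 1.0])          # prototype vectors (assumed)
sigma = np.array([0.2, 0.2, 0.2])      # spreading coefficients (assumed)
w = np.array([1.0, -1.0, 1.0])         # output-layer weights (assumed)
y = grbf(x, P, sigma, w, b=0.5)
print(y.shape)                          # one output per input sample
```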
g(X) = Σ_i w_i exp( −‖X − P_i‖² / (2σ_i²) ) + b    (3)
The PMC (multilayer perceptron) can approximate a wide variety of functions; the transfer function of each neuron of a
hidden layer is defined by equation (1). It is generally organized in several layers, the outputs of one layer
constituting the inputs of the next.
A_k = f_k( A_{k−1} W_k + B_k ),  with A_0 = X    (4)
Monolayer wall modeling
Settings and transfer function
Let us consider in this part the analytical solution of the thermal conduction problem of a monolayer wall whose two
external faces are in contact, one with the atmospheric air and the other with the interior environment of a room; the
solution corresponds to that of a semi-infinite wall, given by:
(T(x,t) − T_i)/(T_a − T_i) = erfc( x / (2√(αt)) ) − exp( hx/k + h²αt/k² ) · erfc( x/(2√(αt)) + h√(αt)/k )    (5)
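The semi-infinite wall solution of equation (5) can be evaluated directly with the standard library. The sketch below assumes the classical semi-infinite-solid form with surface convection; the numerical values of α, k, and h are illustrative assumptions only, since the tables give them separately:

```python
import math

def wall_temp(x, t, alpha, k, h, Ti, Ta):
    """T(x, t) in a semi-infinite wall exchanging by convection with air at Ta."""
    u = x / (2.0 * math.sqrt(alpha * t))
    beta = h * math.sqrt(alpha * t) / k          # dimensionless convection group
    theta = math.erfc(u) - math.exp(h * x / k + beta ** 2) * math.erfc(u + beta)
    return Ti + (Ta - Ti) * theta                # theta = 0 initially, -> 1 at equilibrium

# Illustrative values (assumed): alpha = 1e-7 m^2/s, k = 0.12 W/m.K,
# h = 10 W/m^2.K, Ti = 20 degC, Ta = 32 degC, x = 0.05 m, t = 1 h
T = wall_temp(0.05, 3600.0, 1e-7, 0.12, 10.0, 20.0, 32.0)
print(20.0 <= T <= 32.0)
```

The temperature always lies between the initial wall temperature and the outside air temperature, and points closer to the exposed face warm up faster, which gives a quick sanity check on any neural approximation of this solution.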
From equation (5), we can conclude that our model has five parameters and two input variables. Under TensorFlow,
the input of our neural network is represented by a row matrix, labeled X, with seven columns, structured as follows:
The size of the matrix that contains the connection weights, the weight matrix of the hidden layer, is
determined by the number of inputs and the number of neurons that make up the hidden layer. We therefore have a
seven-row matrix whose number of columns equals the number of neurons in the layer. The weight matrix of
the hidden layer, denoted WC1, is represented by:
The bias matrix, denoted BC1, is a row matrix whose number of columns is also a function of the number of neurons in
the hidden layer.
The output matrix A1 of the first layer is given by the equation below:
A1 = f( X WC1 + BC1 )    (6)
In our case, the system to be modeled has only one output, so the second layer, which is the output layer, is represented by
a single neuron whose number of inputs is determined by the number of outputs of the layer upstream. The
weight matrix of the output layer, denoted WS, is therefore a column matrix whose number of rows is dictated by the
number of neurons of the hidden layer upstream.
The bias matrix of this output layer, denoted BS, is reduced to a single value:
The activation function of the output layer must be linear so that our PMC is a function-approximation
neural network. The function f(x) = x suffices as the activation function of the output layer. The output
equation is therefore written:
S = A1 WS + BS    (7)
S = f( X WC1 + BC1 ) WS + BS    (8)
Equation (8) represents the relationship between the inputs and the output of our multilayer neural network; it is
translated into Python code, where the functions tf.add() and tf.matmul() implement matrix addition and the
matrix product [11]. We use the function tf.nn.softplus() as the activation function.
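A minimal NumPy sketch of equation (8)'s forward pass; the @ operator, the addition, and the softplus below play the roles of tf.matmul(), tf.add(), and tf.nn.softplus(). The matrix sizes follow the text (seven inputs, one output); the weight values are random placeholders, not trained values:

```python
import numpy as np

def softplus(z):
    # equivalent of tf.nn.softplus: log(1 + exp(z))
    return np.log1p(np.exp(z))

def forward(X, WC1, BC1, WS, BS):
    A1 = softplus(X @ WC1 + BC1)   # hidden layer, equation (6)
    return A1 @ WS + BS            # linear output layer, equations (7)-(8)

rng = np.random.default_rng(0)
n_hidden = 100                      # number of hidden neurons
X = rng.normal(size=(1, 7))         # one sample: (x, t) plus the five parameters
WC1 = rng.normal(size=(7, n_hidden))
BC1 = np.zeros((1, n_hidden))
WS = rng.normal(size=(n_hidden, 1))
BS = np.zeros((1, 1))
S = forward(X, WC1, BC1, WS, BS)
print(S.shape)                      # (1, 1): a single temperature output
```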
The TensorFlow graph below summarizes the structure of the created PMC.
We varied the activation function and the number of neurons of the hidden layer, keeping the learning rate fixed.
The activation functions considered are the hyperbolic tangent function, the function tf.nn.relu(), and the
function tf.nn.softplus() [7]. The characteristics of the wall considered are summarized in the table below:
Table 1 Values of the coefficients of the wall
Designation                                   Notation   Value
Coefficient of diffusivity
Conductivity of the material                             0.12
Convection coefficient of the outside air                10
The other parameters taken in the simulations are set according to the following table:
Table 2 Parameter values for the simulations
Designation                            Notation   Value
Thickness of the wall                             0.1 m
Initial temperature inside the wall               20 °C
Outside air temperature                           32 °C
V. PERFORMANCE OF MODELS
We start our simulations with one hundred neurons in the hidden layer of each model. In the figure below
we notice that performance varies with the activation function used; the curves below summarize these
variations.
With five hundred and fifty neurons, performance fluctuates according to the graphs below:
In Fig.7, with seven hundred neurons, we see a marked improvement in the performance of the PMC using the
hyperbolic tangent function:
We can expand each view to see the data flows that interact with a node; in the following figure we show the flow of
data in the node representing the Adam algorithm:
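The Adam update itself is compact. The sketch below follows the standard formulation of the algorithm (not TensorFlow's internal node) and demonstrates it on a toy objective f(θ) = θ², chosen purely for illustration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = b1 * m + (1.0 - b1) * grad
    v = b2 * v + (1.0 - b2) * grad ** 2
    # Bias correction for the zero-initialized moments
    m_hat = m / (1.0 - b1 ** t)
    v_hat = v / (1.0 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2*theta), starting from theta = 5
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
print(abs(float(theta[0])))
```

The per-coordinate step normalization by √v̂ is what makes Adam robust to the choice of learning rate across the differently scaled inputs of the wall model.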
Fig.15 Output of the PMC with the softplus function and 1000 hidden neurons
VI. CONCLUSION
A two-layer perceptron can simulate heat transfer through a one-dimensional monolayer wall, provided that the
activation function of the hidden layer is a non-polynomial function differentiable in the Riemann sense and that of
the output layer is linear. The accuracy of the model depends largely on the number of neurons and the
activation function chosen for the hidden layer of the multilayer perceptron. The three models we have presented can
approximate our heat transfer phenomenon, but the result also depends on the number of neurons and the number of
training iterations.
BIBLIOGRAPHY