
International Journal of Innovative Research in Advanced Engineering (IJIRAE) ISSN: 2349-2163

Issue 11, Volume 5 (November 2018) www.ijirae.com

COGNITIVE MODELING OF 1D CONDUCTIVE THERMAL TRANSFER IN A MONOLAYER PLANE WALL: APPLICATION TO BUILDING WOOD
Ranoarison Haingo Hardy
Laboratory of Research in Cognitive Sciences and Applications (LR-SCA),
Ecole Doctorale en Sciences et Techniques de l’Ingénierie et de l’Innovation, Ecole Supérieure Polytechnique,
University of Antananarivo, Madagascar, East Africa
Randimbindrainibe Falimanana
Laboratory of Research in Cognitive Sciences and Applications (LR-SCA),
Ecole Doctorale en Sciences et Techniques de l’Ingénierie et de l’Innovation, Ecole Supérieure Polytechnique,
University of Antananarivo, Madagascar, East Africa
Robinson Matio
Laboratory of Research in Cognitive Sciences and Applications (LR-SCA),
Ecole Doctorale en Sciences et Techniques de l’Ingénierie et de l’Innovation, Ecole Supérieure Polytechnique,
University of Antananarivo, Madagascar, East Africa
Rajaonarison Eddie Franck
Sciences of Materials and Metallurgy, Ecole Supérieure Polytechnique,
University of Antananarivo, 101 Antananarivo, Madagascar, East Africa
Franck_eddieee@yahoo.fr
Manuscript History
Number: IJIRAE/RS/Vol.05/Issue11/NVAE10088
Received: 02, November 2018
Final Correction: 15, November 2018
Final Accepted: 22, November 2018
Published: November 2018
Citation: Hardy, Falimanana, Matio & Franck (2018). COGNITIVE MODELING OF 1D CONDUCTIVE THERMAL TRANSFER IN A MONOLAYER PLANE WALL: APPLICATION TO BUILDING WOOD. IJIRAE: International Journal of Innovative Research in Advanced Engineering, Volume V, 383-391. doi:10.26562/IJIRAE.2018.NVAE10088
Editor: Dr. A. Arul L.S., Chief Editor, IJIRAE, AM Publications, India
Copyright: ©2018 This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract - This article presents research work on the cognitive modeling of 1D conductive heat transfer in a monolayer plane wall. The general formulation assumes an unsteady regime in the solid region of the wall. Physical modeling is based on the one-dimensional thermal conduction equation with convective conditions at the outer surfaces. The analytical study yielded a solution that serves as the basis for the neural network we used. For function approximation, two large families of neural networks are potentially exploitable: the GRBF (Gaussian Radial Basis Functions) network and the PMC (multilayer perceptron). In our research, we used the multilayer perceptron to model thermal conduction through a wooden wall. Since performance is the indicator by which we choose the model that best represents the physical phenomenon being simulated, we present the variation of each model's performance as a function of the number of neurons and the number of learning loops. In the training phase, we supplied our model with the input and output data to which it must converge. The results show that this cognitive modeling offers several possibilities in its field of application, according to adjustable parameters such as the number of neurons and the number of learning loops.
Keywords: Thermal transfer; artificial neural network; Python; cognitive modeling; TensorFlow

NOMENCLATURE
X : input matrix of the network
W : weight matrix
B : bias matrix
f : activation function
α : coefficient of diffusivity
λ : conductivity of the material
h : convection coefficient of the outside air
T_i : initial temperature of the wall and inside temperature
T_∞ : outside air temperature
f_c : activation function of the hidden layer, integrable in the Riemann sense
I. INTRODUCTION
Globally, the building sector accounts for 30 to 40% of total energy consumption [1] and a large proportion of anthropogenic environmental impacts [2]. As a result, it has a high potential for improvement in both energy and environmental terms [3]. To meet these energy and environmental challenges, several solution elements can be implemented in a complementary way. Solutions applied to the building lead to working simultaneously on the consumption of the building, its structure and its various equipment, from the design phase onward [4]. The principle of function approximation and its application to the modeling of the heat conduction phenomenon in a monolayer wall is based here on the one-dimensional analytical solution of the problem [5] [6]. To model an artificial neuron, TensorFlow [7] adopts its own representation, as shown in Fig. 1.

Fig. 1: Graph of a neuron under TensorFlow


From the graph in Fig. 1, the output a_j is given by formula (1):

$a_j = f\left(\sum_{i} w_{ij}\, x_i + b_j\right)$    (1)
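As an illustration, here is a minimal sketch of equation (1) for a single neuron, written with the TensorFlow 1.x API used throughout this article; the tensor names and dimensions are our own assumptions, not taken from the original code.

```python
import tensorflow as tf  # TensorFlow 1.x API

# Three inputs feeding a single neuron (dimensions are illustrative).
x = tf.placeholder(tf.float32, shape=[1, 3], name="x")  # input row vector
w = tf.Variable(tf.random_normal([3, 1]), name="w")     # connection weights
b = tf.Variable(tf.zeros([1]), name="b")                # bias

# Equation (1): a_j = f(sum_i w_ij * x_i + b_j), with f = softplus here.
a = tf.nn.softplus(tf.add(tf.matmul(x, w), b), name="a")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(a, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```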
Function approximation
A physical phenomenon can be represented by a function f that defines the relation between the input vector x and the output vector y, written in the form y = f(x). Modeling this transfer function f(x) sometimes poses precision problems, so we resort to function approximation. If we denote by g(x) the new function that best represents the function f(x), we call performance the deviation between these two functions; the most commonly used measure is the quadratic error. If we do not have direct access to f(x), the performance is determined from a series of measurements of the physical phenomenon, represented by pairs (x_i, y_i), so the approximate performance is computed by the mean squared error formula [8]:

$E = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - g(x_i)\right)^2$    (2)

The approximation of the function f(x) then consists in determining the function g(x) that minimizes this performance function. This approximation is possible with a neural network, under the condition that the representative function f(x) is continuous and defined on a compact (closed and bounded) domain.
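As a quick illustration of equation (2), the sketch below computes the mean squared error between sampled values of a target function f and an approximation g; the two functions chosen here are arbitrary examples, not taken from the paper.

```python
import numpy as np

# Sampled physical phenomenon: pairs (x_i, y_i) with y = f(x).
x = np.linspace(0.0, 1.0, 100)
y = np.exp(-x)              # stand-in for the measured phenomenon f(x)
g = 1.0 - x + 0.5 * x**2    # stand-in approximation g(x) (truncated series)

# Equation (2): mean squared error between reference and approximation.
mse = np.mean((y - g) ** 2)
print(f"MSE = {mse:.6f}")
```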
II. NEURAL NETWORKS
The GRBF network is a two-layer neural network whose first layer contains neurons [9] whose parameters are a prototype vector P and a spreading coefficient σ. The second layer performs a linear combination of the outputs of the first layer, and a bias is added to the total.

Fig.2 Diagram of a GRBF network


The equation that defines an output of a GRBF network is given by:

$y = \sum_{j} w_j \exp\left(-\frac{\lVert x - P_j \rVert^2}{2\sigma_j^2}\right) + b$    (3)
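A minimal sketch of equation (3) follows, assuming Gaussian units with prototypes P_j and spreads σ_j; the shapes and values are illustrative, not taken from the paper's code.

```python
import numpy as np

def grbf_output(x, prototypes, sigmas, weights, bias):
    """Equation (3): linear combination of Gaussian radial units plus a bias."""
    # Squared distances between the input x and each prototype vector P_j.
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    hidden = np.exp(-d2 / (2.0 * sigmas ** 2))   # first-layer outputs
    return hidden @ weights + bias               # second layer: linear + bias

# Illustrative network: 4 Gaussian units over 2-dimensional inputs.
P = np.random.randn(4, 2)   # prototype vectors
sigma = np.ones(4)          # spreading coefficients
w = np.random.randn(4)      # output weights
print(grbf_output(np.array([0.5, -0.2]), P, sigma, w, bias=0.0))
```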
The PMC is able to approximate very diverse functions; the transfer function of each neuron of a hidden layer is defined by equation (1). It is generally organized in several layers, the outputs of one layer constituting the input of the next layer.

Fig. 3: Diagram of a two-layer PMC with one hidden layer


A two-layer PMC with non-polynomial, Riemann-integrable activation functions on the first layer and a linear activation function on the second layer is a universal approximation neural network [10]. A PMC is trained by gradient descent, and its performance is defined by:

$E = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - a_i\right)^2$    (4)

where y_i is the reference output and a_i the network output for the input x_i.
 Monolayer wall modeling
 Settings and transfer function
Let us consider in this part the analytical solution of the thermal conduction problem of a monolayer wall whose two external faces are in contact, one with the atmospheric air and the other with the interior environment of a room. The solution corresponds to that of a semi-infinite wall, given by:

$\frac{T(x,t) - T_i}{T_\infty - T_i} = \operatorname{erfc}\left(\frac{x}{2\sqrt{\alpha t}}\right) - \exp\left(\frac{h x}{\lambda} + \frac{h^2 \alpha t}{\lambda^2}\right)\operatorname{erfc}\left(\frac{x}{2\sqrt{\alpha t}} + \frac{h\sqrt{\alpha t}}{\lambda}\right)$    (5)
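A sketch of equation (5) in Python follows, using SciPy's complementary error function; the diffusivity value passed in the example call is an assumed placeholder, to be replaced by the values of Tables 1 and 2.

```python
import numpy as np
from scipy.special import erfc

def wall_temperature(x, t, alpha, lam, h, T_i, T_inf):
    """Equation (5): semi-infinite wall with a convective surface condition."""
    u = x / (2.0 * np.sqrt(alpha * t))
    theta = erfc(u) - np.exp(h * x / lam + h**2 * alpha * t / lam**2) \
                      * erfc(u + h * np.sqrt(alpha * t) / lam)
    return T_i + (T_inf - T_i) * theta

# Illustrative evaluation (alpha = 1.5e-7 m^2/s is an assumed placeholder).
print(wall_temperature(x=0.05, t=3600.0, alpha=1.5e-7,
                       lam=0.12, h=10.0, T_i=20.0, T_inf=32.0))
```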
From Equation (5), we can conclude that our model has five parameters and two input variables. Under TensorFlow, the input of our neural network is represented by a seven-column row matrix labeled X, structured as follows:

$X = \begin{pmatrix} x & t & \alpha & \lambda & h & T_i & T_\infty \end{pmatrix}$
The size of the matrix containing the connection weights of the hidden layer (its weight matrix) is determined by the number of inputs and by the number of neurons that make up the hidden layer. We therefore have a seven-row matrix whose number of columns n equals the number of neurons in the layer. The weight matrix of the hidden layer, denoted WC1, is represented by:

$WC1 = \begin{pmatrix} w_{1,1} & \cdots & w_{1,n} \\ \vdots & \ddots & \vdots \\ w_{7,1} & \cdots & w_{7,n} \end{pmatrix}$

The bias matrix, denoted BC1, is a row matrix whose number of columns is also equal to the number of neurons in the hidden layer:

$BC1 = \begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix}$
The output matrix A1 of the first layer is given by the equation below:

$A1 = f\left(X \cdot WC1 + BC1\right)$    (6)
In our case, the system to be modeled has only one output, so the second layer, which is the output layer, is represented by a single neuron whose number of inputs is determined by the number of outputs of the upstream layer. The weight matrix of the output layer, denoted WS, is therefore a column matrix whose number of rows is dictated by the number of neurons of the upstream hidden layer:

$WS = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}$
The bias matrix of this output layer, denoted BS, reduces to a single value:

$BS = \begin{pmatrix} b \end{pmatrix}$
The activation function of the output layer must be a linear function so that our PMC is a function-approximation neural network. The function f(x) = x suffices as the activation function of the output layer, so the output equation is written:

$A2 = f\left(A1 \cdot WS + BS\right)$    (7)

$Y = A1 \cdot WS + BS$    (8)

Equation (8) represents the relationship between the inputs and the output of our multilayer neural network; it is translated into Python code, where the functions tf.add() and tf.matmul() are the equivalents of matrix addition and the matrix product [11]. We use the function tf.nn.softplus() as the activation function of the hidden layer.
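Since the original listing is not reproduced in this extract, the following is a minimal sketch of equations (6) to (8) in the TensorFlow 1.x API; the variable names follow the notation above, and the number of hidden neurons n is an adjustable assumption.

```python
import tensorflow as tf  # TensorFlow 1.x API

n = 100                                                # hidden neurons (adjustable)
X = tf.placeholder(tf.float32, [None, 7], name="X")    # (x, t, alpha, lambda, h, Ti, Tinf)
y = tf.placeholder(tf.float32, [None, 1], name="y")    # reference temperature

# Hidden layer, equation (6): A1 = f(X.WC1 + BC1), with f = softplus.
WC1 = tf.Variable(tf.random_normal([7, n]), name="WC1")
BC1 = tf.Variable(tf.zeros([n]), name="BC1")
A1 = tf.nn.softplus(tf.add(tf.matmul(X, WC1), BC1), name="A1")

# Output layer, equation (8): single neuron with a linear activation.
WS = tf.Variable(tf.random_normal([n, 1]), name="WS")
BS = tf.Variable(tf.zeros([1]), name="BS")
pred = tf.add(tf.matmul(A1, WS), BS, name="pred")
```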
The TensorFlow graph below summarizes the structure of the created PMC.

Fig. 4: TensorFlow graph of the PMC with a ReLU activation function


III. TRAINING
The input data are used to excite our neural network, and the output data are used to compute the performance of our model, whose determination is defined by equation (2). The code tf.square(y - pred) computes the square of the difference between the reference value y and the prediction pred, and the tf.reduce_mean() function computes the average. The learning loop is managed by the ADAM algorithm [12], which is natively available in the TensorFlow library. The ADAM optimizer is invoked by means of the Python code below to control and minimize the performance.
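The original listing is not reproduced in this extract; the following is a plausible sketch of the training step with the TensorFlow 1.x ADAM optimizer, assuming the X, y and pred tensors from the previous sketch and training arrays X_train and y_train (their construction from the analytical solution is sketched in Section IV). The learning rate is an illustrative choice, since the value used in the paper is not stated here.

```python
# Performance, equation (2): mean squared error between reference and prediction.
cost = tf.reduce_mean(tf.square(y - pred), name="cost")

# ADAM optimizer [12]; the learning rate below is an illustrative value.
train_step = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(110000):                 # number of learning loops (adjustable)
        _, c = sess.run([train_step, cost],
                        feed_dict={X: X_train, y: y_train})
        if step % 10000 == 0:
            print(f"step {step}: cost = {c:.5f}")
```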
IV. RESULTS AND DISCUSSION
The results we will present come from several simulations based on the variation of the temperature as a function of
time. In our simulations,
we varied the activation function and the number of neurons of the hidden layer, and we fixed the learning rate. The activation functions considered are the hyperbolic tangent function tf.nn.tanh(), the ReLU function tf.nn.relu(), and the softplus function tf.nn.softplus(). The characteristics of the wall considered are summarized in the table below:
Table 1: Values of the coefficients of the wall

Designation | Notation | Value
Coefficient of diffusivity | α |
Conductivity of the material | λ | 0.12 W·m⁻¹·K⁻¹
Convection coefficient of the outside air | h | 10 W·m⁻²·K⁻¹
The other parameters used in the simulations are set according to the following table:

Table 2: Parameter values for the simulations

Designation | Notation | Value
Thickness of the wall | e | 0.1 m
Initial temperature inside the wall | T_i | 20 °C
Outside air temperature | T_∞ | 32 °C
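Using these values, a plausible sketch of how the training pairs could be generated from equation (5) is given below; the wall_temperature function is the one sketched in Section II, the diffusivity value is an assumed placeholder, and the sampling grid is illustrative.

```python
import numpy as np

# Parameter values from Tables 1 and 2 (alpha is an assumed placeholder).
alpha, lam, h = 1.5e-7, 0.12, 10.0
T_i, T_inf, e = 20.0, 32.0, 0.1

# Sample positions across the wall thickness and times over one day.
xs = np.linspace(0.0, e, 50)
ts = np.linspace(60.0, 86400.0, 50)
grid_x, grid_t = np.meshgrid(xs, ts)
n_pts = grid_x.size

# Seven-column input matrix, matching the network input structure X.
X_train = np.column_stack([
    grid_x.ravel(), grid_t.ravel(),
    np.full(n_pts, alpha), np.full(n_pts, lam), np.full(n_pts, h),
    np.full(n_pts, T_i), np.full(n_pts, T_inf),
]).astype(np.float32)

# Reference outputs from the analytical solution, equation (5).
y_train = wall_temperature(grid_x.ravel(), grid_t.ravel(),
                           alpha, lam, h, T_i, T_inf).reshape(-1, 1).astype(np.float32)
```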

V. PERFORMANCE OF MODELS
We start our simulations with one hundred neurons in the hidden layer of each model. In the figure below, we notice that the performance varies according to the activation function used; the curves summarize these variations.

Fig.5 Performance of models with 100 neurons in the hidden layer

With five hundred and fifty neurons, the performance fluctuates according to the graphs below:

Fig.6 Performance Variations for 550 Hidden Neurons

In Fig. 7, with seven hundred neurons, we see a marked improvement in the performance of the PMC using the hyperbolic tangent function:


Fig. 7: Performance of the models with 700 neurons


The table below shows some performance values at the end of the learning of each model according to the number of
neurons and the number of loops.
Table 3: Performance by number of neurons and learning loops

Number of neurons | Number of loops | PMC ReLU | PMC softplus | PMC tanh
100 | 110000 | 0.04016 | 0.04328 | 0.30409
150 | 150000 | 0.17291 | 0.03847 | 0.16978
200 | 120000 | 0.04322 | 0.05728 | 0.32089
300 | 130000 | 0.43194 | 0.03514 | 0.08481
350 | 135000 | 0.02809 | 0.03713 | 0.20383
400 | 140000 | 0.05711 | 0.02776 | 0.06263
550 | 155000 | 0.00909 | 0.01272 | 0.28921
600 | 160000 | 0.02847 | 0.24711 | 0.06575
700 | 170000 | 0.04362 | 0.03232 | 0.03834
Complete simulations
In this part we show the steps we take to simulate one-dimensional heat transfer through a monolayer wall. Here we present the manipulation of TensorFlow's activation function tf.nn.relu(), but the steps remain the same for the other two models. We run the program, specifying the number of neurons, in an MS-DOS command prompt or a terminal, depending on our operating system, as follows:

Fig.8 Visualization of the performance


C:\Simulation> python PMCrelu.py --nb_neurones 1000
Command 1: Running a thousand-neuron model. In the previous command, we used the Python interpreter to execute our program, which receives as argument the number of neurons in the hidden layer; a sketch of how this argument could be parsed is given below.
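The original script is not listed in the article; the following is a plausible sketch of how PMCrelu.py could parse the --nb_neurones argument, using Python's standard argparse module (the option name is taken from Command 1, the rest is assumed).

```python
import argparse

# Hypothetical argument parsing for PMCrelu.py.
parser = argparse.ArgumentParser(description="PMC with a ReLU hidden layer")
parser.add_argument("--nb_neurones", type=int, default=100,
                    help="number of neurons in the hidden layer")
args = parser.parse_args()

n = args.nb_neurones  # used to size WC1, BC1 and WS in the network definition
print(f"Building a PMC with {n} hidden neurons")
```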
To see the result of our simulation, we must launch a second command, which is the following:

C:\Simulation> tensorboard --logdir resultat_1000 --host 127.0.0.1


Command 2: Launching the visualization module
Command 2 receives two arguments, indicating the directory containing the results (after the --logdir parameter) and the address at which to serve them. In a browser, we enter the link http://127.0.0.1:6006/ and can go from one menu to another to see all the results of our simulation. To visualize the variation of the mean errors of our model and the numerical information that we need, we go to the "SCALARS" menu:
To visualize the output graph of the model we go to the "IMAGES" menu:

Fig. 9: Presentation of the PMC output curve


In Fig. 9, we see the representative curve of the phenomenon to be modeled (at the top) and the curve produced by the neural network. The structures and data flows of the artificial neural network are found in the "GRAPHS" menu:

Fig.10 General presentation of the model

We can expand each view to see the data flows that interact with a node. In the following figure, we show the flow of data in the node representing the ADAM algorithm:


Fig.11 Data Flows in ADAM


 PMC output graphs
The following graphs show the results of different simulations from the three models presented above and the
representative graph of the physical phenomenon to be modeled.

Fig.12 Graph representing the reference data

Fig. 13: Output of the ReLU PMC with 1000 hidden neurons

Fig.14: Hyperbolic tangent PMC output with 1000 hidden neurons



Fig. 15: Output of the softplus PMC with 1000 hidden neurons

VI. CONCLUSION
A two-layer perceptron can simulate thermal transfer through a one-dimensional monolayer wall, provided that the activation function of the hidden layer is a non-polynomial function integrable in the Riemann sense and that of the output layer is a linear function. The accuracy of our model depends largely on the number of neurons and the activation function chosen for the hidden layer of the multilayer perceptron. The three models we have presented can approximate our heat transfer phenomenon, but the result also depends on the number of neurons and on the number of learning loops.
BIBLIOGRAPHY

1. Dudley, B. (2018). BP’s Energy Outlook. Bp.com/energyoutlook.


2. van der Hoeven, M. (2015). Energy and Climate Change. World Energy Outlook Special Report, International Energy Agency. www.iea.org/t&c/
3. Ruuska, A., & Häkkinen, T. (2014). Material Efficiency of Building Construction. Buildings, 4, 266-294. http://dx.doi.org/10.3390/buildings4030266
4. Whitehead, B., Shah, A., Andrews, D., & Maidment, G. (2014). Assessing the environmental impact of data centres, part 1: Background, energy use and metrics. Building and Environment, 82, 151–159. http://dx.doi.org/10.1016/j.buildenv.2014.08.021
5. Davis, M. E., & Davis, R. J. (2003). Fundamentals of Chemical Reaction Engineering. McGraw-Hill Higher Education.
6. Antonovič, V., Pundiene, I., Stonys, R., Česniene, J., & Keriene, J. (2010). A review of the possible applications of
nanotechnology in refractory concrete. Journal of Civil Engineering and Management, 16(4), 595–602.
7. Yifei, L. (2017). Deep neural networks and fraud detection. Uppsala University.
8. Petersen, K. B., & Pedersen, M. S. (2012). The Matrix Cookbook. Technical report, Technical University of Denmark, Intelligent Signal Processing Group. http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=3274
9. Sayed, Y.M.V., & Reza, M.V. (2011). A novel multilayer neural network model for TOA-based localization in wireless sensor networks. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). http://dx.doi.org/10.1109/IJCNN.2011.6033628
10. Yue, W., Hui, W., Biaobiao, Z., & Du, K.-L. (2012). Using Radial Basis Function Networks for Function Approximation and Classification. International Scholarly Research Notices. http://dx.doi.org/10.5402/2012/324194
11. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., & Zheng, X. (2016). TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16). https://www.usenix.org/conference/osdi16/technical-sessions/presentation/abadi
12. Kingma, D. P., & Ba, J. L. (2017). Adam: A Method for Stochastic Optimization. ICLR conference paper. arXiv:1412.6980v9 [cs.LG].

