
Energy Conversion and Management 47 (2006) 2202–2210

www.elsevier.com/locate/enconman

Modelling of steam fired double effect vapour absorption chiller using neural network

H.J. Manohar, R. Saravanan *, S. Renganarayanan
Department of Mechanical Engineering, Institute for Energy Studies, Anna University, Sardar Patel Road, Chennai 600 025, India
Received 28 June 2005; accepted 12 December 2005
Available online 3 February 2006

Abstract
In this paper, steady state modelling of a double effect absorption chiller using steam as heat input is presented and discussed. The modelling is based on the artificial neural network (ANN) technique with a 6-6-9-1 configuration. The neural network is a fully connected feed forward configuration using the back propagation learning algorithm. The model predicts the chiller performance based on the chilled water inlet and outlet temperatures, cooling water inlet and outlet temperatures and steam pressure. The network was trained with one year of experimental data and predicts the performance within ±1.2% of the actual values.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Absorption chiller; Artificial neural network; Back propagation algorithm; Coefficient of performance

1. Introduction
Absorption cooling systems have become increasingly popular in recent years from the viewpoints of energy and environment. The advantages of using an absorption refrigeration system for comfort applications and process cooling stem from the reduced use of CFC refrigerants and reduced exposure to rising energy costs. The actual performance of the system at the field level depends on many factors, such as cooling demand, cooling water temperature availability, and the sources of heat input and their potential. Hence, detailed modelling is required for predicting the performance of the chiller considering all the above factors.
The artificial neural network (ANN) has received increased attention in the system modelling and simulation field due to its learning ability and versatile mapping capabilities. ANNs have been applied to various cooling system performance problems. An ANN predicting the time required by a controller for returning zones to desired room temperatures after night or weekend setback increased its accuracy by 50% and decreased the average amount of training time by 90% compared to traditional methods [1]. In-Ho Yang et al. [2] presented the application of an ANN in a building control system. They developed an optimized ANN model

* Corresponding author. Tel.: +91 44 22203268; fax: +91 44 2235 3637 / +91 44 22203269.
E-mail address: rsaravanan@annauniv.edu (R. Saravanan).

0196-8904/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.enconman.2005.12.003


Nomenclature

ANN      artificial neural network
COP      coefficient of performance
cp       specific heat capacity (kJ/kg K)
E        error between desired and trained output
Δh       change in enthalpy (kJ/kg)
i        index to input node
j        index to next successive node to input node
ṁ        mass flow rate (kg/s)
N        total number of training sets
NV       normalized input/output value
p        steam pressure (bar)
Q̇_e      cooling capacity (kW)
Q̇_in     heat input (kW)
r²       fraction of variance
RMS      root mean square error
t        temperature (°C)
T        trained output
V        input/output value
V_max    maximum value of entire data set
V_min    minimum value of entire data set
w_{j,i}  weight from ith node to jth node
Δw_{j,i} change in weight
Y        desired output

Greek symbols
δ        quantum of error
η        learning rate

Subscripts
Chi      chilled water inlet
Cho      chilled water outlet
Cwi      cooling water inlet
Cwo      cooling water outlet
s        steam

to determine the optimal start time for a heating system in a building. Control of a simulated dual-temperature
hydronic system has been done by Ding and Wong [3] using a neural network. The network was trained with
thermal demands and used to adjust the valve positions. A neural network developed by Ferrano and Wong
[4] predicts the next day's cooling load using the 24 h period temperature pattern. An ANN determines the delay time for an air conditioning plant to respond to control actions [5]. A neural network is applied to two different vapour-compression chillers, and it predicted the compressor work input and the COP within 5% for both chillers [6]. A neural network is used by Chow et al. [7] in modelling an absorption chiller system, and a genetic algorithm is used in optimal control of the system. Forecasting the daily electric load profile for an urban city using an ANN is formulated by Beccali et al. [8]. The actual forecast is obtained using a two layered, feed forward neural network, trained with the back propagation with momentum learning algorithm.
An artificial neural network learns the relationship between the controlled and uncontrolled variables by studying previously recorded data. It maps the input and output patterns and predicts the output for the


required input pattern. At any given time, it is possible for the system to predict the cooling load demand from
the neural network predicted values.
The majority of industries use a double effect vapour absorption system for their cold utility streams because of its high COP and low operational and maintenance cost. In this paper, the artificial neural network technique is used to predict the performance of a steam fired, double effect absorption chiller.
2. Experimental data reduction
The double effect, steam fired absorption chiller employed in a pharmaceutical industry for process cooling is considered in this study to compare with the developed ANN model. The cooling capacity is mainly controlled by the chilled water outlet temperature. The typical design conditions for the cooling water inlet temperature and the chilled water inlet temperature are 28 °C and 10 °C, respectively. Table 1 shows the general details of the chiller.
The experimental performance was calculated for the chiller from the mass flow rates and temperatures of the chilled water, cooling water and process steam. The errors in the chilled water flow rate, cooling water flow rate and steam flow rate are around ±2%. Temperatures were measured with built in RTD sensors with an uncertainty of ±0.75 °C. Steam pressure was measured by a Bourdon pressure gauge with an uncertainty of ±3%. The cooling capacity of the chiller is estimated as given below
Q̇_e = ṁ_ch · c_p · (t_Chi − t_Cho)    (1)

Q̇_in = ṁ_s · Δh    (2)
The actual coefficient of performance is calculated as

COP = Q̇_e / Q̇_in    (3)
Using the above expressions, the COP is estimated for all the experimental data for one year, and the values
are used to train the ANN model.
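As a rough sketch of this data reduction (Eqs. (1)–(3)), with illustrative flow rates, temperatures and steam enthalpy change rather than the paper's measured values:

```python
# Sketch of the data reduction in Eqs. (1)-(3); input values are
# illustrative placeholders, not the paper's measurements.
CP_WATER = 4.186  # specific heat of water, kJ/(kg K)

def cooling_capacity(m_ch, t_chi, t_cho):
    """Eq. (1): Q_e = m_ch * cp * (t_Chi - t_Cho), in kW."""
    return m_ch * CP_WATER * (t_chi - t_cho)

def heat_input(m_s, dh):
    """Eq. (2): Q_in = m_s * dh, in kW."""
    return m_s * dh

def cop(q_e, q_in):
    """Eq. (3): COP = Q_e / Q_in."""
    return q_e / q_in

# Hypothetical operating point near the chiller's 805 kW design capacity
q_e = cooling_capacity(m_ch=36.0, t_chi=10.0, t_cho=5.0)  # kg/s, degC
q_in = heat_input(m_s=0.35, dh=2048.0)                    # kg/s, kJ/kg
print(round(cop(q_e, q_in), 2))
```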
3. Neural network
The best known example of a neural network is the human brain. Artificial neural networks try to mimic this biological network in order to learn the solution to a physical problem from a given set of training examples. From these, the network acquires the ability to learn and predict the performance of the system. Table 2 shows the various input parameters of the absorption system used for training the neural network.
Table 1
Details of the absorption chiller

1. Capacity       805 kW
2. Working fluid  Water–lithium bromide
3. Model no.      C-211
4. Design         Double effect series flow
5. Heat input     Saturated steam @ 8 bar
Table 2
Input parameters for the ANN chiller model

Input variables                           Range
Time (h)                                  2.00–24.00
Chilled water inlet temperature (°C)      8.00–21.00
Chilled water outlet temperature (°C)     5.00–17.00
Cooling water inlet temperature (°C)      24.00–35.00
Cooling water outlet temperature (°C)     27.00–38.00
Steam pressure (bar)                      0.80–5.00


The input parameters are the ones that are easily measurable. The input variables required processing before being employed for training. The output considered is the COP, which is the accepted indicator of chiller performance.
The control parameters employed to obtain the best neural network are:

(1) Learning rate.
(2) Learning time.
(3) Biasing.
(4) Iterations.
(5) Hidden layers and hidden neurons per hidden layer.

3.1. Learning rate


The learning rate parameter can significantly impact how long a network takes to train and whether it will converge at all. The higher the learning rate, the more likely the network is to oscillate. If it is too small, the learning accuracy can suffer, resulting in the local minimum problem. The main effect of increasing the learning rate is to lower the learning time [9]. A learning rate of 0.01 was found to give the best ANN.
3.2. Learning time
The main aim is to lower the learning time. There are three techniques to minimize learning time [10]. The first technique uses the change of weight information from the previous learning pass to alter the basic back propagation rule; a new parameter, momentum, is added. The second technique, called flat spot elimination, alters the derivative of the sigmoid activation function to minimize a problem called sticking weights. The derivative of the logistic function goes to zero as the unit's output approaches 0.0 or 1.0. Adding a constant 0.1 to the derivative of the sigmoid function alters the formula, so that a useful error signal is back propagated even when the derivative of the sigmoid function is approaching zero. The third technique is a heuristic called descending epsilon. The number of presentations required for the network to learn an input pattern varies from pattern to pattern; some patterns are learned early in the training cycle. Descending epsilon stops training a pattern once the error signal for that pattern goes below a certain threshold. Flat spot elimination is implemented here as it increases the ability of the network to back propagate a useful error signal.
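A minimal sketch of flat spot elimination as described above, assuming the constant 0.1 given in the text:

```python
import math

def sigmoid(x):
    """Logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(y):
    """Standard derivative of the logistic function, written in
    terms of the unit's output y."""
    return y * (1.0 - y)

def sigmoid_deriv_fse(y, c=0.1):
    """Flat spot elimination [10]: add a small constant so the
    error signal does not vanish as y approaches 0.0 or 1.0."""
    return y * (1.0 - y) + c

y = sigmoid(8.0)             # output very close to 1 -> flat spot
print(sigmoid_deriv(y))      # nearly zero: weights would stick
print(sigmoid_deriv_fse(y))  # about 0.1: learning continues
```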
3.3. Biasing
The purpose of biasing is to skew the output from the sigmoid function [11]. There are two methods of biasing: fixed and variable. The first method is to subtract a fixed value from the summation of the inputs multiplied by the weights in the sigmoid function. The second method is to use variable biases by adding a node in each layer, except the output layer, that has a constant activation value of one. Its advantage is that the learning algorithm can adjust the amount of biasing needed by adapting the weight values from the bias nodes during training. Variable biases were employed here.
3.4. Iterations
The number of iterations required for a particular neural network depends upon the learning rate. Once the learning rate is fixed, the number of iterations is determined. Increasing the number of iterations initially decreases the square error, but increasing it further causes the square error to rise again, which, in turn, decreases the network accuracy [12]. The minimum square error is achieved after 10,000 iterations; increasing the number of iterations by a further 2000 increases the error by 0.3%.


[Fig. 1 depicts the network schematic: six input nodes (time, chilled water inlet and outlet temperatures, cooling water inlet and outlet temperatures, steam pressure), two hidden layers and a single output node (COP).]

Fig. 1. ANN model of the absorption chiller system.

3.5. Hidden layers and hidden neurons per hidden layer


There are no reliable guidelines for deciding the number of neurons in a hidden layer or how many hidden layers to use [13]. As a result, the number of hidden neurons and hidden layers were decided by trial and error. The convergence proofs merely state that convergence with a minimum of three layers (input, hidden and output) is possible. Two hidden layers may produce better results, and here a network with two hidden layers did produce better results. Networks with more than two hidden layers are rare, mainly due to the difficulty and time of training them. ANN applications that use hidden layers require an additional 3 µs processing cycle for each hidden layer [14].
The ANN developed, based on the control parameters, is a four layer, feed forward neural network with two hidden layers between the input and output layers. Fig. 1 shows the schematic diagram of the four layer, feed forward ANN model of the absorption system. The learning algorithm employed is the back propagation algorithm, chosen because it is well suited to the non-linear behaviour of the system.
The three stages in ANN modelling of the absorption system are:
(1) Training stage.
(2) Testing stage.
(3) Predicting stage.
In the training stage, input and output values of the training sets were supplied. Using these values, the network tries to learn the system by adjusting the weights between the four layers; the adjustment is carried out by the back propagation algorithm. About 1000 experimental training data sets were used to train the network. Once the network is trained, it moves to the testing stage. In the testing stage also, both input and output values were supplied; the only difference from the training stage is that no adjustment of weights takes place. In the predicting stage, only inputs are supplied and the output is predicted. Adjustment of weights, in other words learning, takes place only in the training stage. Adequate training of an ANN system model requires, first, a good representation of the available training data.
4. Calculation procedure
Step 1: Initialize and normalize the data. Before training is performed, the training and testing data must be normalized independently. All training and testing data were normalized between 0.1 and 0.9 as follows:

NV = 0.1 + [(0.9 − 0.1) / (V_max − V_min)] · (V − V_min)    (4)

The range of 0.1–0.9 rather than 0–1 was used to avoid saturating the weights as the inputs approach the asymptotes of the sigmoid [13].
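Step 1 above can be sketched as follows; the inverse mapping is an assumed convenience for reading predictions back in physical units, not a step stated in the paper:

```python
def normalize(v, v_min, v_max, lo=0.1, hi=0.9):
    """Eq. (4): map a raw value v from [v_min, v_max] into [lo, hi]."""
    return lo + (hi - lo) / (v_max - v_min) * (v - v_min)

def denormalize(nv, v_min, v_max, lo=0.1, hi=0.9):
    """Inverse of Eq. (4): map a network output back to physical units."""
    return v_min + (nv - lo) * (v_max - v_min) / (hi - lo)

# Cooling water inlet temperature range from Table 2: 24.00-35.00 degC
print(normalize(24.0, 24.0, 35.0))  # lower bound maps to 0.1
print(normalize(35.0, 24.0, 35.0))  # upper bound maps to 0.9
```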
Step 2: Initialize the weights between layers. Weights are initialized between layers using a random weight
generator by using one of the library functions in the C program.
Step 3: Calculate the output of the input layer. The output of the input layer is nothing but the normalized
input value.
Step 4: Calculate the output of the first hidden layer. Each normalized input value is multiplied by the corresponding initial weight, and these products are summed. The variable bias is added to this summation, and the resulting sum is applied to the activation function. The activation function employed is the sigmoid function.


F(x) = 1 / (1 + e^(−x))    (5)

The sigmoid function is used for both hidden layers and the output layer.
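Steps 4–6 share the same per-node computation, which can be sketched as follows; the input, weight and bias values are arbitrary illustrations:

```python
import math

def sigmoid(x):
    """Eq. (5): F(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights, bias_weight):
    """Steps 4-6: weighted sum of the incoming values plus the
    variable-bias contribution (bias node activation = 1),
    passed through the sigmoid activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias_weight * 1.0
    return sigmoid(s)

out = layer_output([0.5, 0.3], weights=[0.2, -0.4], bias_weight=0.1)
print(round(out, 3))
```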
Step 5: Calculate the output of the second hidden layer. The same operation as in step 4 is performed, but the input values are now the outputs of the first hidden layer.
Step 6: Calculate the output of the output layer. The same logic as in step 4 is applied: multiplication of weights with inputs, summation, and application of the activation function.
Step 7: Calculate the error of the ANN. The output of the output layer is compared with the desired output for each training pattern, and the difference between them is the error, which is calculated using

E = Y_{4,i} − T_j    (6)
Step 8: Calculate the square error of the ANN. The square error is calculated by squaring the error, E:

Square error = (Y_{4,i} − T_j)²    (7)
Step 9: Calculate the error for the node in hidden layer 2. The error is calculated by multiplying the derivative of the activation function with the error:

δ_{4,i} = Y_{4,i} · (1 − Y_{4,i}) · (Y_{4,i} − T_j)    (8)

The derivative of the activation function is represented by Y_{4,i} · (1 − Y_{4,i}).

Step 10: Calculate the error for the node in hidden layer 1. The error is calculated by multiplying the derivative of the activation function with the back propagated error:

δ_{3,i} = Y_{3,i} · (1 − Y_{3,i}) · δ_{4,i} · w_{3,i}    (9)

The derivative of the activation function is represented by Y_{3,i} · (1 − Y_{3,i}).

Step 11: Calculate the error for the node in the input layer. The error is calculated by multiplying the derivative of the activation function with the back propagated error:

δ_{2,i} = Y_{2,i} · (1 − Y_{2,i}) · δ_{3,i} · w_{2,i}    (10)

The derivative of the activation function is represented by Y_{2,i} · (1 − Y_{2,i}).

Step 12: Adjusting the weights. The weights are to be adjusted, and the adjustment is calculated using the learning rate:

Δw_{2,i} = η · δ_{3,i} · w_{2,i}    (11)

Δw_{3,i} = η · δ_{4,i} · w_{3,i}    (12)

η denotes the learning rate.


Step 13: Determine if the training should continue. Steps 1 to 12 are repeated until the specified number of iterations is exceeded. The optimized number of iterations was found to be 10,000.
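The whole procedure (Steps 1–13) can be sketched as a toy training loop. The layer sizes, learning rate and training patterns below are illustrative, not the paper's 6-6-9-1 network or data, and the weight update uses the conventional gradient descent form (learning rate × delta × incoming activation) rather than the literal form printed in Eqs. (11) and (12):

```python
import math
import random

random.seed(0)  # Step 2: weights come from a random generator

def sigmoid(x):
    """Eq. (5): logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

class TinyBackpropNet:
    """Toy fully connected feed forward net trained with plain back
    propagation (Steps 1-13); sizes here are illustrative only."""

    def __init__(self, sizes, eta=0.5):
        self.eta = eta
        # w[l][j][i]: weight into node j of layer l+1; the extra
        # index per node holds the variable-bias weight (Section 3.3)
        self.w = [[[random.uniform(-0.5, 0.5) for _ in range(sizes[l] + 1)]
                   for _ in range(sizes[l + 1])]
                  for l in range(len(sizes) - 1)]

    def forward(self, x):
        """Steps 3-6: propagate normalized inputs through all layers."""
        acts = [list(x)]
        for layer in self.w:
            prev = acts[-1] + [1.0]  # bias node with constant output 1
            acts.append([sigmoid(sum(w * p for w, p in zip(node, prev)))
                         for node in layer])
        return acts

    def train_step(self, x, target):
        """Steps 7-12: back propagate the error and adjust weights."""
        acts = self.forward(x)
        # output layer delta, Eq. (8): y(1 - y)(y - t)
        deltas = [y * (1 - y) * (y - t) for y, t in zip(acts[-1], target)]
        all_deltas = [deltas]
        # hidden layer deltas, Eqs. (9)-(10): derivative of the
        # activation times the weighted error from the layer above
        for l in range(len(self.w) - 1, 0, -1):
            deltas = [acts[l][i] * (1 - acts[l][i]) *
                      sum(d * self.w[l][k][i] for k, d in enumerate(deltas))
                      for i in range(len(acts[l]))]
            all_deltas.insert(0, deltas)
        # gradient descent update with learning rate eta
        for l, layer in enumerate(self.w):
            prev = acts[l] + [1.0]
            for j, node in enumerate(layer):
                for i in range(len(node)):
                    node[i] -= self.eta * all_deltas[l][j] * prev[i]
        return sum((y - t) ** 2 for y, t in zip(acts[-1], target))

# Two hypothetical normalized training patterns (inputs, target)
patterns = [([0.1, 0.1], [0.1]), ([0.9, 0.9], [0.9])]
net = TinyBackpropNet([2, 3, 1])
for _ in range(5000):  # Step 13: iterate a fixed number of passes
    err = sum(net.train_step(x, t) for x, t in patterns)
print(err)  # square error after training
```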


5. Results and discussion


The best ANN configuration is found to be 6-6-9-1, based on the control parameters: learning rate, learning time, biasing, number of iterations, and number of hidden neurons and hidden layers. It is a four layer, feed forward neural network, and the learning algorithm employed is the back propagation algorithm. The network is trained with 1036 training data sets. The higher the learning rate, the more likely the network is to oscillate; if it is too small, the learning accuracy can suffer, resulting in the local minimum problem.
From Fig. 2 it can be observed that the ANN with a learning rate of 0.1 seems more accurate than the ANN with a learning rate of 0.01 in the initial 1000 iterations. However, in the later iterations, the network with a learning rate of 0.01 predicts more accurately. The square error is quickly reduced from 0.325 to 0.09 in the first 34 iterations in the network with a learning rate of 0.1, whereas for 0.01, it takes 254 iterations. In the last 1000 iterations, the square error is almost the same and nearly equal to zero. The error percentage is lower for the 0.01 learning rate ANN. This is because it takes only a small fraction of the error from each training set for learning, and therefore, it adapts well to the training data set. Further reduction in the learning rate results in the local minimum problem.
The performance of the neural network is determined by statistical methods [6]. The statistical parameters
are:
(1) RMS (root mean square error). The root mean square error is one of the parameters used to determine the performance of the network. The RMS is calculated as follows:

RMS = √[ (1/N) · Σ_{j=1..N} (Y_j − T_j)² ]    (13)

It should be as small as possible.
(2) r² (fraction of variance). The fraction of variance gives the degree of variance of the output value from the desired value. It is calculated as follows:

r² = 1 − Σ_{j=1..N} (Y_j − T_j)² / Σ_{j=1..N} Y_j²    (14)

The best possible fraction of variance is r² equal to 1.
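The two statistics can be sketched as follows; the desired and trained COP values are hypothetical illustrations, not the paper's results:

```python
import math

def rms_error(desired, trained):
    """Eq. (13): root mean square of (Y_j - T_j) over N patterns."""
    n = len(desired)
    return math.sqrt(sum((y - t) ** 2
                         for y, t in zip(desired, trained)) / n)

def fraction_of_variance(desired, trained):
    """Eq. (14): r^2 = 1 - sum((Y - T)^2) / sum(Y^2)."""
    num = sum((y - t) ** 2 for y, t in zip(desired, trained))
    den = sum(y ** 2 for y in desired)
    return 1.0 - num / den

# Hypothetical desired (Y) vs. trained (T) COP values
desired = [1.00, 1.02, 1.05, 1.07]
trained = [1.01, 1.02, 1.04, 1.07]
print(rms_error(desired, trained))
print(fraction_of_variance(desired, trained))
```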

Fig. 2. ANN training result based on the 6-6-9-1 configuration for learning rates 0.1 and 0.01.


The input is normalized between 0.1 and 0.9. The technique employed to reduce the learning time is flat spot elimination, where the modified derivative of the sigmoid function is used to reduce the square error. The network is trained for about 10,000 iterations. The network was analyzed for learning rates of 0.1 and 0.01, and it is found that the learning rate of 0.01 gives better results, which are shown in Table 3. Table 3 shows how accurate the trained ANN is in terms of the performance parameters. The RMS and r² for the trained network are 0.00173916 and 0.999753, respectively.
Fig. 3 shows the comparison between the actual and predicted COP for the absorption chiller. The chilled water and cooling water flow rates were kept constant at 130 m³/h and 230 m³/h, respectively. The actual
Table 3
Results of the 6-6-9-1 ANN configuration

Learning rate   Error (%)   RMS          r²
0.01            1.15406     0.00173916   0.999753
0.1             1.22334     0.00181940   0.999730

[Fig. 3 plots predicted COP against actual COP over the range 0.99–1.08, with +1.2% and −1.2% error bands about the centerline.]

Fig. 3. Comparison between actual and predicted COP for the absorption system.

Fig. 4. Variation of COP with respect to chilled water temperature difference.


cooling capacity, heat input and COP were estimated as explained in Section 2. The predicted COP from the ANN model is compared with about 250 actual values. The closeness of the values to the centerline indicates the accuracy of the network prediction. The network error is within ±1.2% of the actual data. The accumulation of actual data points in a particular region indicates the part load performance of the chiller. The variation of the COP with respect to the chilled water temperature difference is shown in Fig. 4. The performance of the chiller increases with the increase in chilled water temperature difference, as expected. The model predicts the performance of the system in good agreement with the actual values.
6. Conclusion
In this paper, an approach to model a double effect, series flow absorption chiller using a neural network has been presented and compared with the actual values. Flat spot elimination was a useful learning technique that increases the stability of the training. This paper shows that the values predicted with the ANN, using the back propagation learning algorithm with a feed forward network, can be used to predict the performance of the absorption chiller quite accurately. The results are promising, as r² approaches 0.999753. The neural network predicted the COP within ±1.2% error. Though mapping can be done between any input and output variables, providing relevant information as inputs will improve the network accuracy. Future studies will concentrate on applications in fault diagnosis of absorption systems.
References
[1] Miller RC, Seem JE. Comparison of artificial neural network with traditional methods of predicting return time from night or weekend setback. ASHRAE Trans 1991;97(2):500–8.
[2] Yang I-H, Yeo M-S, Kim K-W. Application of artificial neural network to predict the optimal start time for heating system in building. Energ Convers Manage 2003;44(17):2791–809.
[3] Ding Y, Wong KV. Control of a simulated dual-temperature hydronic system using a neural network approach. ASHRAE Trans 1990;96(2):727–32.
[4] Ferrano FJ, Wong KV. Prediction of thermal storage loads using a neural network. ASHRAE Trans 1990;96(2):723–6.
[5] Huang SH, Nelson RM. Delay time determination using an artificial neural network. ASHRAE Trans 1994;100(1):831–40.
[6] Swider DJ, Browne MW, Bansal PK, Kecman V. Modelling of vapour-compression liquid chillers with neural networks. Appl Thermal Eng 2001;21:311–29.
[7] Chow TT, Zhang GQ, Lin Z, Song CL. Global optimization of absorption chiller system by genetic algorithm and neural network. Energ Buildings 2002;34:103–9.
[8] Beccali M, Cellura M, LoBrano V, Marvuglia A. Forecasting daily urban electric load profiles using artificial neural networks. Energ Convers Manage 2004;45(18–19):2879–900.
[9] Gill GS, Wong KV. Passive solar design for windows using a neural network. ASHRAE Trans 1991;97(2):780–3.
[10] Anstett M, Kreider JF. Application of neural networking models to predict energy use. ASHRAE Trans 1993;99(1):505–17.
[11] Scalabrin G, Cristofoli G. The viscosity surfaces of R152a in the form of multilayer feed forward neural networks. Int J Refrig 2003;26:302–14.
[12] Javadpour R, Knapp GM. A fuzzy neural network approach to machine condition monitoring. Comput Ind Eng 2003;45:323–30.
[13] Stevenson William J. Using artificial neural nets to predict building energy parameters. ASHRAE Trans 1994;100(2):1081–7.
[14] Mistry SI, Nair SS. Nonlinear HVAC computations using neural networks. ASHRAE Trans 1993;99(1):775–84.
