
20IO International Conference on Computer Application and System Modeling (ICCASM 2010)

Design and Realization of Optimization System of Urea Production Process Based on BP Neural Network

Zhang Yu
The Third Department
Southwest Institute of Technical Physics
Chengdu, China
musicsnow@163.com

Yu Liang
Computer School
Sichuan University
Chengdu, China

Abstract—Optimization of the urea production process is important for urea production in China, and it is widely needed: improving quality, raising output and reducing cost all require optimization of the production process. In this paper we therefore treat the urea production process as an unknown nonlinear function and obtain, by function approximation, a BP neural network model that can be used for optimization. The cyclic variable method is then applied to optimize this model, and the validity of the method is verified. Finally, the optimization system of the urea production process is designed and realized on the basis of these theories using software engineering methods. The system performs a large amount of calculation and is applied to offline optimization problems.

Keywords—urea production process; optimization; BP neural network; cyclic variable method

I. INTRODUCTION

Urea is applied to agricultural production mainly as a fertilizer. At present, urea accounts for more than one third of the world's nitrogen fertilizer production, China produces one third of the world's urea, and this amount is still increasing. China's urea industry was developed mainly around the circulating water process, which is shown in Figure 1 below.

Figure 1. Urea production process (flow diagram; stages include wastewater treatment and expelled products)

The urea production process is very complex, with many random factors and influences, so it is difficult to determine a precise mathematical model for it. As a nonlinear representation, the neural network can approximate any continuous function with arbitrary precision in information processing and pattern recognition, and it has therefore attracted increasing attention. In neural network modeling, the urea production process can be regarded as a black box that directly reflects the relation between the input and the output of the system while bypassing questions of internal detail.

II. NEURAL NETWORK METHOD REALIZING THE FUNCTION APPROXIMATION OF THE UREA PRODUCTION PROCESS

Traditional identification methods are very difficult to apply to general nonlinear systems, but neural networks provide a powerful tool. The essence of the approach is to properly select a neural network model to approximate the actual system. Establishing a neural network depends mainly on two factors, the network topology and the network learning rule, and together these determine the character of the network.

A. Specific improved algorithm

This paper uses an improved Back Propagation (BP) algorithm for modeling, and the network structure is of type 8 × 8 × 1. That is, it uses a three-layer structure (input layer, hidden layer and output layer): the input layer has 8 nodes, the output layer has 1 node, and the number of hidden nodes is determined in the practical application. The topological structure is shown in Figure 2 below.

978-1-4244-7237-6/10/$26.00 ©2010 IEEE

Figure 2. BP neural network topology structure
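As a black box, the trained 8 × 8 × 1 network simply maps the eight measured attributes to a predicted CO2 conversion. Below is a minimal Python sketch of this forward pass, with a sigmoid hidden layer and a linear (purelin) output as described above; the weights are random placeholders, not trained values.

```python
import numpy as np

# Minimal sketch of the 8-8-1 network's forward pass: sigmoid hidden layer,
# linear (purelin) output neuron. Weights are random placeholders standing
# in for trained parameters.

rng = np.random.default_rng(1)
iw1, b1 = rng.normal(size=(8, 8)), np.zeros(8)   # input -> hidden
iw2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def predict(x):
    """x: vector of the 8 input attributes; returns the network output."""
    h = 1.0 / (1.0 + np.exp(-(x @ iw1 - b1)))    # sigmoid hidden outputs
    return float(h @ iw2 - b2)                   # linear output

x = rng.random(8)      # one synthetic operating point
print(predict(x))
```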

The specific improved algorithm of the BP neural network is as follows:


(1) Create network
① The eight input neurons are: temperature on the top of the synthetic tower, temperature on the bottom of the synthetic tower, the NH3/CO2 ratio, the H2O/CO2 ratio, NH3 concentration at the exit, CO2 concentration at the exit, carbamate concentration at the exit, and CO2 concentration in liquid dimethylamine; they are represented by x1–x8 in the network. The one output neuron is the CO2 conversion, represented by y. The m hidden neurons are represented by h1–hm. The desired output of an original sample is represented by yd(p).
② The activation function of the hidden layer is f(x) = sigmoid(x), and the activation function of the output layer is g(x) = purelin(x).
③ The weights from the input layer to the hidden layer are iw1ij, the weights from the hidden layer to the output layer are iw2j, the thresholds of the hidden layer are b1j, and the threshold of the output layer is b2.
④ We then set the learning rate μ, the momentum factor α, the allowable error E, and the upper limit of iterations N.

(2) Train network
The BP network is activated by the applied inputs x1(p), x2(p), ..., x8(p) and the expected output yd(p); the weights and thresholds are then adjusted in the reverse direction.
① Calculate the actual outputs of the hidden neurons:
    hj(p) = f( Σ(i=1..8) xi(p)·iw1ij(p) − b1j )
② Calculate the actual output of the output neuron:
    y(p) = g( Σ(j=1..m) hj(p)·iw2j(p) − b2 )
③ Calculate the error slope of the output neuron:
    δ(p) = y(p)·[1 − y(p)]·e(p),  where e(p) = yd(p) − y(p)
④ Calculate the corrections of the weights of the output layer:
    Δiw2j(p) = μ·hj(p)·δ(p)
⑤ Update the weights of the output layer:
    iw2j(p+1) = iw2j(p) + α·Δiw2j(p)
⑥ Calculate the error slopes of the hidden neurons:
    δj(p) = hj(p)·[1 − hj(p)]·δ(p)·iw2j(p)
⑦ Calculate the corrections of the weights of the hidden layer:
    Δiw1ij(p) = μ·xi(p)·δj(p)
⑧ Update the weights of the hidden layer:
    iw1ij(p+1) = iw1ij(p) + α·Δiw1ij(p)

(3) Update learning strategy
① Calculate the network error:
    E(p) = Σ [yd(p) − y(p)]²
② Judge whether the network has converged: if E(p) ≤ E, learning has succeeded; we export the network parameters and the algorithm ends. Otherwise the algorithm continues to the next step.
③ Update the learning rate:
    if E(p) < E(p−1),        μ(p) = 1.05·μ(p−1);
    if E(p) > 1.04·E(p−1),   μ(p) = 0.7·μ(p−1).
④ Judge whether the upper limit of iterations has been reached: if it has, learning has failed; we export the network parameters and the algorithm ends. Otherwise, return to step (2) and continue the learning process.

B. Compare the improved algorithm and the standard algorithm

To verify that the improved BP algorithm performs better than the standard BP algorithm, we trained on 400 groups of production data with both algorithms five times each and compared the training results. The network structure was set to 8 × 8 × 1, the learning rate to μ = 0.1, the momentum factor to α = 0.95, the allowable error to E = 0.001, and the upper limit of iterations to N = 100. The comparison results of the network training are shown in Table I below.

TABLE I. THE COMPARISON RESULTS OF NETWORK TRAINING

Training algorithm         Run   Training error   Output absolute error of the optimal network
The improved BP algorithm   1    0.038036         0.0571
                            2    0.044992
                            3    0.045852
                            4    0.046915
                            5    0.044315
The standard BP algorithm   1    0.204823         0.1287
                            2    0.201024
                            3    0.194962
                            4    0.211278
                            5    0.196266
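The training procedure above can be sketched in Python with NumPy. This is an illustrative sketch, not the paper's implementation: the training data are synthetic placeholders, and the momentum update is written in the common form Δw(p) = μ·g(p) + α·Δw(p−1), while the adaptive learning-rate rule (×1.05 on improvement, ×0.7 when the error grows by more than 4%) follows the algorithm as stated.

```python
import numpy as np

# Sketch of the improved BP algorithm: an 8-m-1 network with a sigmoid hidden
# layer, a linear (purelin) output, momentum, and the adaptive learning rate
# described in step (3). Data below are synthetic stand-ins.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.random((400, 8))                  # 400 groups of 8 input attributes (synthetic)
yd = X.sum(axis=1, keepdims=True) / 8.0   # placeholder target (CO2 conversion stand-in)

m = 8                                     # hidden neurons
iw1 = rng.normal(0, 0.5, (8, m)); b1 = np.zeros(m)
iw2 = rng.normal(0, 0.5, (m, 1)); b2 = np.zeros(1)
d1 = np.zeros_like(iw1); d2 = np.zeros_like(iw2)   # previous corrections (momentum)
mu, alpha, E_allow, N = 0.1, 0.95, 0.001, 100
prev_E = np.inf

for epoch in range(N):
    for p in range(len(X)):
        x = X[p:p+1]
        h = sigmoid(x @ iw1 - b1)         # hidden outputs h_j(p)
        y = h @ iw2 - b2                  # linear output y(p)
        e = yd[p:p+1] - y
        delta = y * (1 - y) * e           # output error slope, as in the paper
        d2 = mu * h.T @ delta + alpha * d2
        iw2 += d2                         # update output weights with momentum
        delta_h = h * (1 - h) * (delta @ iw2.T)   # hidden error slopes
        d1 = mu * x.T @ delta_h + alpha * d1
        iw1 += d1                         # update hidden weights with momentum
    E = float(((yd - (sigmoid(X @ iw1 - b1) @ iw2 - b2)) ** 2).sum())
    if E <= E_allow:
        break                             # converged: learning succeeded
    if E < prev_E:
        mu *= 1.05                        # error fell: speed up
    elif E > 1.04 * prev_E:
        mu *= 0.7                         # error grew too much: slow down
    prev_E = E
```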

The comparison results of the two algorithms show that both the training error and the output absolute error of the improved BP algorithm are much smaller than those of the standard BP algorithm.

III. APPLICATION EXAMPLES AND ANALYSIS OF THE OPTIMIZATION SYSTEM OF THE UREA PRODUCTION PROCESS

This paper selects 498 groups of production data from a nitrogenous fertilizer factory in 2008 as the system application samples.

A. Examples of application

First, the original sample data are divided into two parts: four fifths of the samples are randomly selected as the training set, and the rest form the verification set.

Next, function approximation is executed with the training-set data used as discrete points. Based on experience, we set the learning rate to μ = 1, the momentum factor to α = 0.95, the allowable error to E = 0.001, the upper limit of iterations to n = 1000, the number of parallel runs to m = 20, and the number of hidden neurons to 8. After the network model is created and trained, the network errors of the 20 parallel training runs are shown in Table II below.

TABLE II. THE NETWORK ERRORS OF 20 PARALLEL TRAINING RUNS

Run   Training error
 1    0.037457
 2    0.044581
 3    0.050142
 4    0.044062
 5    0.045251
 6    0.044478
 7    0.044390
 8    0.048066
 9    0.044214
10    0.045163
11    0.044517
12    0.046298
13    0.044808
14    0.044012
15    0.044959
16    0.044855
17    0.046530
18    0.044355
19    0.044665
20    0.044524

From the results of the 20 parallel runs, the minimum network error, MSEmin = 0.037457, is obtained. This network model proves ideal, with correlation coefficient R = 0.9989 and mean absolute error MAE = 0.0529.

After the model is created, the maximum number of iterations is set to m = 1000. Then, according to the actual production conditions, the interval accuracies of the 8 input attributes are set to 1, 1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1. The optimization intervals after the optimization calculation are shown in Table III below.

TABLE III. OPTIMIZATION INTERVALS

Attribute (unit)                                       Lower limit   Upper limit
Temperature on the top of the synthetic tower (°C)     154.3970      155.2059
Temperature on the bottom of the synthetic tower (°C)  168.1734      169.0000
NH3/CO2 ratio (%)                                      3.8200        4.7700
H2O/CO2 ratio (%)                                      1.1500        1.9700
NH3 concentration at the exit (%)                      25.6000       26.3618
CO2 concentration at the exit (%)                      10.0400       10.7564
Carbamate concentration at the exit (%)                28.7981       29.5465
CO2 concentration in liquid dimethylamine (%)          26.1212       26.7500

B. The optimization results analysis

Optimization data are created within the optimization intervals of the previous section, and these data are then fed into the created model to simulate and obtain predictions. Whether the optimization effect is good is judged by whether the proportion of samples reaching the expected value, out of the total samples, has improved greatly relative to the original data. The specific procedure is as follows:

1) Create 5 groups of simulation samples, each containing 500 samples.
2) According to the requirements of optimized production, the optimal-class sample value is set to 62.0. The optimal-class rates in the calculation results are shown in Table IV below.

TABLE IV. THE OPTIMAL-CLASS RATES OF THE 5 GROUPS OF SIMULATION DATA

Group   Minimum value   Maximum value   Optimal-class rate
  1     61.64           64.69           99.4%
  2     61.35           64.69           99.8%
  3     61.36           64.69           99.0%
  4     60.46           64.69           99.4%
  5     60.87           64.69           99.0%

The optimal-class rate of the third group, 99.0%, is the minimum.
3) The number of original samples is 498, and the number of samples that reached the optimal-class sample value is 91, so the optimal-class rate of the original samples is 18.3%.
4) From steps 2) and 3), the ratio of the optimal-class rates is 5.41.
5) From the calculation in step 4), we know that the optimization effect is good, because the ratio of the optimal-class rates is much greater than 1.
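The evaluation in steps 1)–5) can be sketched as follows. The interval bounds are taken from Table III; `predict` is a hypothetical placeholder standing in for the trained BP model, so the simulated rate here does not reproduce the paper's figures, but the original-sample rate and the final ratio are the paper's own numbers.

```python
import numpy as np

# Sketch of steps 1)-5): sample operating points uniformly inside the
# Table III optimization intervals, predict CO2 conversion, and compare the
# optimal-class rate against that of the original samples. predict() is a
# placeholder, NOT the trained 8-8-1 BP network.

lower = np.array([154.3970, 168.1734, 3.8200, 1.1500,
                  25.6000, 10.0400, 28.7981, 26.1212])
upper = np.array([155.2059, 169.0000, 4.7700, 1.9700,
                  26.3618, 10.7564, 29.5465, 26.7500])

def predict(x):
    # Hypothetical stand-in for the trained model's CO2-conversion prediction.
    return 62.0 + 0.1 * (x - lower).sum()

rng = np.random.default_rng(0)
samples = rng.uniform(lower, upper, size=(500, 8))   # one group of 500 samples
pred = np.array([predict(x) for x in samples])
optimal_rate = (pred >= 62.0).mean()                 # optimal-class rate of this group

# Original data: 91 of 498 samples reached the optimal-class value of 62.0.
original_rate = 91 / 498                             # ~= 18.3%
ratio = 0.990 / round(original_rate, 3)              # worst simulated group: ~= 5.41
print(round(original_rate, 3), round(ratio, 2))
```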
IV. CONCLUSION

This paper takes the urea production process as its research object. From the study of the urea production system, a BP neural network model is established. On this basis, the optimization calculations of the eight attributes that influence CO2 conversion are completed. Finally, the model is proved effective by simulation.