Journal of Chemical and Petroleum Engineering, 50 (2), Feb. 2017, 9-17

Using Intelligent Methods and Optimization of the Existing Empirical Correlations for Iranian Dead Oil Viscosity

Kamyar Movagharnejad1* and Soraya Ghanbari2

1. Prof. of Chemical Eng., Thermokin. Dept., Chemical Eng. Faculty, Babol University of Technology, Babol.
2. M.S. Student of Chemical Eng., Thermokin. Dept., Chemical Eng. Faculty, Babol University of Technology, Babol.

(Received 30 June 2015, Accepted 16 May 2016)

Abstract

Numerous empirical correlations exist for the estimation of crude oil viscosities. Most of these correlations are not based on experimental and field data from the Iranian geological zone. In this study, several well-known empirical correlations, including those of Beal, Beggs, Glaso, Labedi, Schmidt, Alikhan and Naseri, were optimized and refitted with Iranian oil field data. The results showed that the Beal and Labedi methods were not suitable for estimating the viscosity of Iranian crudes, while the Beggs, Glaso and Schmidt methods gave reasonable results. Naseri's correlation and the present refitted method proved to be the best classical methods investigated in this study. Two new intelligent methods to predict the viscosity of Iranian crudes have also been introduced. The study showed that the neural network and SVM give much better results compared to the classical correlations, so it is claimed that this study may provide more exact results for the prediction of Iranian oil viscosity.

Keywords: Dead oil; Empirical correlation; Neural network; SVM; Viscosity.

1. Introduction

Middle East countries contain at least 60 percent of the world's proved oil reservoirs. Most of these reservoirs are located in the six oil-rich countries of Iran, Iraq, Saudi Arabia, Kuwait, Qatar and the United Arab Emirates. As the ratio of oil production to proved reservoirs in these countries is considerably lower than in other parts of the world, this share is expected to rise in the next few decades, so it is clear that the commercial oils of these countries will be more important in the future oil market and industry. Crude oil is a complicated mixture of a great many components and cannot be treated like a normal multi-component mixture. In most cases, process engineers have to calculate the various physical oil properties without even knowing the exact composition. Therefore, the conventional thermodynamic methods usually cannot be applied to crude oil and oil products, and the physical properties of these mixtures are instead estimated by means of numerous empirical correlations based on the limited experimental laboratory or field data of certain kinds of crude oils or oil products.

* Corresponding author. Tel/fax: 011-32337139; E-mail: movagharnejad@yahoo.com (K. Movagharnejad)
The accuracy of these empirical correlations generally depends on the degree of similarity between the sample crude oil and the original experimental data set forming the basis of the correlation. To put it another way, if the original correlation is mainly based on the experimental data of a certain kind of crude, such as West Texas Intermediate (WTI) or Brent oil of the North Sea, the estimation will be most accurate for these kinds of crude oils or some similar crudes, but the correlation would be expected to give poor results for a different kind of crude oil.

The composition and properties of different crude oils vary widely. Even different crudes from a single geographical region, such as the Persian Gulf or the Caspian Sea, differ from each other, although these differences are not usually so sharp. Therefore, the physical properties of the crude oils of the different Middle East countries differ from each other, but these differences are of course less distinct than those with respect to crudes from other geological locations such as the North Sea or Venezuela. Most of the empirical correlations used in the oil industry have been developed on the basis of experimental data from oil fields in other parts of the world, such as North America. It can therefore be expected that most of these correlations will give poor predictions of the physical properties of Middle East crudes.

As noted above, the Middle East's global share of oil production will rise considerably in the future decades, so the commercial importance of these kinds of crude oils justifies the reconsideration and re-establishment of the empirical correlations.

On the other hand, intelligent methods have been widely used in different sectors of the oil industry in recent years, and during the recent decades the petroleum industry has experienced a rapid increase in the applications of artificial intelligence methods. Mohaghegh et al. [1] introduced a new model that used the pattern-recognition capabilities of neural networks to characterize reservoir heterogeneity. They used a three-layer, feed-forward, back-propagation network whose input variables consisted of spatial coordinates, well logs and geological interpretations. Romero and Carter [2] used a genetic algorithm (GA) for reservoir characterization. They applied their GA model to a realistic complex synthetic reservoir and claimed that their GA-based method gives better results in comparison to other conventional methods. Nikravesh and Aminzadeh [3] studied intelligent reservoir characterization methods. They concluded that the soft computing methods, including neural networks, fuzzy logic, genetic algorithms and probabilistic reasoning, are more tolerant than the conventional hard computing techniques and may play a key role in this field. Esmaeilzadeh and Nourafkan [4] calculated the original oil in place (OOIP) in oil reservoirs using a genetic algorithm. They used a two-stage method, employing the genetic algorithm in the first stage to initialize and identify the search zone; the final results were then obtained from a simplex method. They applied their technique to three different cases and concluded that this method is able to solve the problem in cases that are not solvable by conventional methods. El-Sebakhy predicted the PVT properties of crude oils by means of support vector machines (SVM) and reported that the SVM performance was more reliable and accurate in comparison with other published correlations [5]. Emera and Sarma [6] used the genetic algorithm to estimate the CO2-oil minimum miscibility pressure and claimed that their GA-based method is superior to other methods. Alomair et al. [7] developed a new model for the dead oil viscosities of Kuwaiti heavy crude oils at elevated temperatures. Hemmati-Sarapardeh et al. [8] used the support vector machine (SVM) soft computing technique to provide predictive models for dead oil viscosities; their original data bank consisted of over 1500 data sets from different geological locations.

Viscosity is usually considered one of the most important physical properties of crude oil. Numerous processes, such as oil production and transportation, and transport phenomena, such as heat and mass transfer, depend on the precise evaluation of this property. Due to the complexity and unknown composition of crude oil and oil products, the petroleum engineer often uses empirical correlations to predict the oil viscosity. These empirical correlations are fitted with laboratory and field data which are usually gathered from geological locations very different from the Middle East oil producing countries. In this study, we aim to check the applicability of the common empirical correlations to estimate the viscosity of Iranian crudes. A large data bank of the viscosity of Iranian crudes was collected from different sources. The difference between the calculated and experimental results was used as a measure to compare the different correlations with each other. The coefficients of some well-known correlations were re-fitted using Iranian field data to improve their precision when applied to Iranian crudes. A completely new empirical correlation based on the dead oil viscosity of the Iranian crudes was also developed; of course, this new correlation is expected to be more precise in the prediction of the viscosity of Iranian and similar crudes. On the other hand, it was tried to use new intelligent methods to estimate the viscosity of the crude oils and compare their performance with the previous classical methods. These intelligent methods were trained using the collected viscosity data, so they are believed to give better results for Iranian and probably similar crudes of other Middle East countries. It is clear that, as the total share of the Middle East countries in commercial oil production is expected to rise continuously in the next few decades, the refitted and new methods introduced in this study may be more widely used in the near future. El-Hoshoudy et al. [9] used a similar procedure for the prediction of the viscosity and density of Egyptian oil reservoirs.

Oil viscosity is measured in four different regions, which are described as follows:
1. The oil viscosity at the bubble point pressure, which refers to oil at the reservoir temperature and the bubble point pressure. This kind of viscosity is usually called the crude oil viscosity.
2. The oil viscosity at a pressure lower than the bubble point pressure, where part of the dissolved gases release from the oil and the oil viscosity changes as a result.
3. The oil viscosity when the oil pressure falls below atmospheric pressure, so that nearly all of the dissolved gases release from the oil; the oil is then called dead oil or stock oil. Dead oil viscosity is another important kind of oil viscosity, and it is the kind we are going to consider in this paper.
4. The oil viscosity when the oil exists at a pressure higher than the bubble point pressure; such oil is called live (under-saturated) oil and its viscosity is known as the live oil viscosity.
All four of these kinds of oil viscosity have been studied in this research, but they cannot all be presented in a single paper, so only the dead oil viscosity is considered here; the other three kinds will be considered in other publications.
8 2.8177
M
2. The empirical correlations for dead oil
2. The empirical correlations for dead oil
and
 od
South America
 od  16 10 T f 8 2.8177 log A
and
 16 108 T f2.8177log A
M
viscosity logTTf f   26
2. The empirical correlations for dead oil
viscosity
Several empirical models exist for the
x od
  5.7526
16 10
x  5.7526 logT f   26.97
log A
.97
viscosity
Several empirical models exist for the x  5.7526 logT f   26.97
(10)
estimation of dead models
Several oil viscosity, some of (10)
estimationempirical
of dead oil viscosity, exist some
for the of The Elsharkawy and Alik
(10)
them
estimationhave been found to give reasonable The Elsharkawy and Alik
them have of been dead foundoil viscosity, some of
to give reasonable estimation
The Elsharkawy of dead and Alikoil
results
them for Iranian samples. Many new estimation of dead oil
resultshave for been
Iranian found to giveMany
samples. reasonable
new presented asof
estimation follow
dead [17]:
oil
empirical correlations
results have been introduced presented as follow [17]:
empiricalforcorrelations
Iranian samples. have beenMany new
introduced od  antias
presented log10 x  [17]: 1.0
inEngineering,
recent years
K. Movagharnejad and S. Ghanbari / Journal of Chemical and Petroleumempirical [10-11]
correlations
50 (2), Feb. but
have
2017
in recent years [10-11] but in this paper, we
in this
been
/ paper, we
introduced
9-17 11od  anti logfollow
10 x   1.0
 anti log  x 
have
in
have
focused
recent years on
focused
on those
[10-11]
thosebut
empirical
empirical
methods
in this paper,
methodswe x  
anti log
x od anti log10  y  .0
10 10 y  1
which
have have
whichfocused
been
have been
tested
ontested for Iranian
thoseforempirical crudes.
methods
Iranian crudes. xy  anti log  y0.02525 AP
2.16924
crudes. On the other hand, it was tried to use new In 1946,
In 1946,
1946, Beal
Bealintroduced
introduced foraa Iranian
graphical method
a graphical y  2.1692410 0.02525 AP
which
In haveBealbeen tested
introduced graphical method method
crudes. y  2.16924  0.02525 AP
(13)
intelligent methods to estimate the viscosity of the for
for In estimation
1946,
estimation of dead
Bealofintroduced
dead oilgraphical
aoil
oil viscosity viscosity [12].
method
[12]. This graph (13)
for estimation of dead viscosity [12]. Naseri
(13) etet al. introduced the
This
for graph
estimation was of mainly
dead oil based
viscosityon the
[12]. Naseri al. introduced the
crude oils and compare their performance with wasThismainly graph basedwasonmainly the Californianbased on field the data. In
formula
Naseri etforfor Iranian
al. Iranian
introduced dead o
the previous classical methods. These intelligent
Californian
This graph
Californian
1981, Standing
field
was
field
data.
mainly
data. In the
substituted
In 1981,
based
1981,
Standing
Beal’son the
Standing
graph by formula
the deadtheo
substituted the Beal’s graph by the following 2005[18].
formula for Iranian dead o
Californian field data. In by1981, Standing 2005[18].
methods were trained using the collected viscos- substituted
following
formula:formula:
the Beal’s graph the following od  anti log10 11.2699 
2005[18].
ity data, so they are believed to give better results substituted
formula: the Beal’s graph by the following  od  anti log 10 11 .2699 
formula:  1.8  1077  360  0.438.33API   anti log10 11.2699 
(14)
od(14)
for Iranian and probably similar crudes of other od   0.32  1.8  10  360 100.438.33API  (1)
There
(14) are many more cor
od   0.32  1API 4.53 
4
.8  10
.537  T 360 460 10 There are many more cor
Middle-East countries. It is clear that as the total od   0.32  API 4.53  T  460 100.43 API  formulas are usually us
8.33
There are
formulas are many more cor
usually use
share of the commercial oil production of Middle- In 1975(1)
 API   T  460  experts, they have been s
(1) Beggs and Robinson introduced an em- formulas
experts, they are have usually beenuses
East countries is expected to rise continuously in In
pirical 1975
(1) Beggs
correlation and Robinson introduced an the dead oil viscosity of
In 1975 Beggs andtoRobinson determine the dead
introduced an oil vis-
experts,
the deadthey have beenofs
oil viscosity
empirical
In 1975 correlation to determine the dead compare these empirical
the next few decades, the refitted and new meth- cosity [13].Beggs
empirical This and Robinson
empirical
correlation introduced
method
to determine an
is presented
the dead the
in dead oil
compare these viscosity
empirical of
oil viscosity
empirical [13]. This
correlation to empirical
determine method
the dead is following statistical mea
ods introduced in this study may be more widely oil viscosity
thepresented
following [13]. This empirical method is compare these empirical
oil viscosityin formulas:
the
[13].following
This empirical formulas: method is
following statistical mea
used:
used in near future. El-Hoshoudy et al. [9] used a presented x in the following formulas:
following
used: statistical mea
od  10x in
presented  1the following formulas: (2) 1. The average relative
used:
similar procedure for prediction of viscosity and   10  1 (2) 1.
(2) The average relative

x
od
 
y 
10
T
x
  1
460  1.163
(2)
(3) follow:
1. The average relative
density of Egyptian oil reservoirs. x  y T  4601.163
od  1.163
(3) follow:
x  y T  460 (3) follow:
Oil viscosity is measured in four different re- (3)
gions which are described as follows: 4
dissolved at gases 4
1. The oil viscosity therelease
bubble from
point the pressure
oil and the y  10 z (4) 4 (4)
dissolved
oil viscosity gases release
changes as afrom
result.the oil and the z
that refers to oil at the reservoir temperature zy  10 3.0324  0.02023API (4)
(5)
oil viscosity oilchanges
and bubble pressure. This kind of viscositythan
3- If the pressure as a result.
falls to lower is z  3.0324  0.02023 API based (5)
Glaso’s formulas were (5)on the field
3- If the gases
atmospheric
dissolved oil pressure,
pressure from falls to all
nearlythe lower of thanthe
saturatedrelease oil and z
usually called crude oil viscosity. data y  10
Glaso’s of North formulas America wereand basedcan (4)be on shown
the field by
atmospheric
dissolved
oil viscosity gases pressure,
release as afrom nearly the all of the
oil and
2. The oil viscosity
dissolved in changes
a pressure result.
lower than andthe data
the z following
Glaso’s 3of .0324 North
10 z formulas
America
 0formulas
.02023 wereAPI andbased
[14]: can (5)beonshownthe field by data of
oil
3- If the gases
is called release
dead orfrom
oilthepressure stock
falls the
tooil. oilDead
lower the
oil
than y following (4)
bubble pressure
oil viscosity
is
viscosity
dissolved which
called
atmospheric is is
changes
gasesthe dead
another called
release
pressure, as ora
important under-saturat-
result.
stock
from
nearly oil.
thekind Dead
ofof the
oil and
all oil
the
Glaso’s
Northod America formulas
 3z .141  10 andTcan
formulas
10
were
 460 be shown
[14]:
based
3.444
log API
on the
by thea following
field
a

data
 zy  103of .0324 141010
North .America
02023
10
T API and 3can
.444(5)
(4)  API  by
be shown
ed oil viscosity.
3- If
viscosity
dissolved theAs isoilthe
gases
and oil
pressure
another
werelease
are
oil viscosity changes as a result. pressure falls
important
from
going to to
the decreases
lower
kind
oil
consider of
and than
oil
the
this formulas
(6) od  3 z .[14]: 460 log
y following
Glaso’s 10.0324 formulas were based (4)on the field
atmospheric
from the oilbubble
viscosity
kind
3- viscosity
If
is of oil
the pressure,
calledand
oil pressure,
we
changes
viscosity
the are
pressure
dead inthe
as going
ora nearly
thisdissolved
to
result.
paper.
falls
stock to all
consider
lower
oil. of than
gases
Dead the
this
oil az 
the
(6)
data 3 10 of .z313
Northlog00formulas
.02023
T  460
America
API[14]:
  36
and 3can
.(5)
447 be (7) a by
dissolved
dissolved
release from the oil
gases
gases
and
release
release
the oil
from
from the
the
viscosity
oil and
oilchang-
and the
the z
Glaso’s 3 .
ay odfollowing
10 0324
 3.z313
10 formulas
.141   10
log
.02023
T
10
T460
were API
 was 
460 36
based (5)
.(4)
.444
447 on shown
log the
API  on
(7)field (6)
kind
3-
4- If
If of
the
atmospheric
viscosity
dissolved
oil oil
the
viscosity oilisviscosity
oil
gases pressure
exists
pressure,
another in
release
changes in a this paper.
falls
pressure
nearly
important
from
asora result. to
the lower
higher
all
kind
oil ofof
and than
the
oil Labedi’s
the
yz  10 correlationformulas [14]: mainly based
oil is the
called the dead stock oil. Dead oil Glaso’s
data 3 of
. 0324 formulas
North  0 America
. 02023 were API and basedcan (4)
(5) on
be the
shown field
by
4-
es as a result.If
atmospheric
bubble
dissolved
viscosity oil
pressure,
gases
and exists
pressure,
we it in
release
areis a pressure
nearly
called
from
going live
tothe higher
all
oil
oil
considerof
and
and than
the
its
this (6)
Labedi’s
the y  field
10 z correlation
data of
10 light was mainly
crudes frombased a on
North
oil
3- viscosity changes
If the isoilanother as
pressureimportant a result.
falls to kind lowerofthan 
data
the
z od of
following
310 3.North
.0324 141 010
log
America
formulas
.02023 TwereAPI and
460
[14]:
 can
3.444(4) log
be shown
API  by
field
viscosity
3. If the oilbubble
dissolved
viscosity
oil
kind
3- pressure
If
atmospheric pressure,
oilisgases
viscosity
isofcalled
the known
oil changes
viscosity
falls
the itto
deadasisinas
release
pressure
pressure, the called
afrom
live
this
lower
or live
the
oil
result.
paper.
stock
falls
nearly than
to oilDead
oil
oil.
lower
all andthan
and
viscosity.
atmo-
of
oil
its
the
oil
the
Glaso’s
the
a
Africa,
the z  field
3 .0324
following
formulas
data
especially
.313 0 . Tof
02023
formulas light
Lybia.
460 API  crudes
based
[15]
[14]: 36 .(5)
447
(5)
on
from the(7)
North (7)
viscosity
oil
All
3-
4- is
If
If
spheric pressure, of is
called
the
the and
these
is
oil known
oilthewe are
dead
four as
pressure
another
exists
nearly in thegoing
or
different live
important
all of a to
stock
falls
pressure
theoil
to consider
viscosity.
oil.
kinds
kind Dead
lower
higher of
of this
oil
than Glaso’s
(6)

data
Africa,
Labedi’s  3 .
od of especially
formulas
141
North  10
correlation
10
America  T
10 Lybia.
were
 460
and
was 
based
3.444
[15]
mainly
 on
log
3.444 on based

can be showna bythe
API 
field
a
on
atmospheric pressure, nearly thedissolved
all of the 9.224
10 log
dissolved
kind
All
viscosityof oil
atmospheric
bubble
gases
these
is
have
and
pressure,
release
viscosity
four
another
been
pressure,
we in
itareis different
from
this
important
studied
going
called paper.
in
nearly
tolive
oil and
kinds
kind
this of
the
of this
oilresearch,
all
considerof
and oil
the
its
Glaso’s
od
data
a
the
(6)  10of3.313
od following
formulas
1414log
.North 910 Tof0.6739
America
.formulas
were
T460 460 and36
based
was
[14]: can.447
(8) be shown
the
API(7) by
field
dissolved
gases release
oil is from
called gases
the
the release
oil
dead and or from
the
stock the
oil oil
is
oil. and
called
Dead the
oil the
Labedi’s
data field
ofAPI data
North10
correlation
.7013 224
T10f light
America and crudes
3can
from
mainly
be shown North
based by on the
4-
but Ifisall
viscosity
dissolvedthe ofoil
have
and exists
them
gases been
we inin
are
cannot
release agoing
studied pressure
be
from in
to the higher
this
consider
presented
oil and than
research,
this
inthe a the

a
(6)
Labedi’s following

10
 3. 313
. 141 
correlation
4log
. 10formulas
 T 0.  T 460
 [14]:
was
460  36 .444(8)
mainly
. 447log based
(7)
API aa on espe-
kind
oil
the dead viscosity of
or stock oil
calledis viscosity
known
isoil.the dead
Dead
another as the this
or
oil
important paper.
live
stock oil
viscosity viscosity.
oil.
kind Dead
isand
ofan- oil
oil Africa,
field
the
od
data
following especially
APIof light
7013
T
formulas Lybia.
crudes
6739 [15]
[14]:from
3.444 North Africa,
T460 36 log based athe
od
bubble
but
oil Ifisall
kind
single
4-
All of
the pressure,
of
oil
paper,
called
oil
these them
viscosity
sowe
the itcannot
only
existsdead
four is
in called
indifferent
the
a this
or dead
stock
pressurelive
bepaper.oil oilDead
presented
viscosity
oil.
higher
kinds of in its
isa
oil
than Kartoatmodjo
od 10
the
a
Labedi’s field
 3.313
.141 log
data  10
correlation Tand
10
of
f
 light  crudes
Schmidt
460
was .447presented
from (7)North
API
viscosity is another important kind oil (6) 10910 mainly on
.224
viscosity
other important
viscosity and
kind
is known are
of as going
oiltheviscosity to
livein oil consider
and we
viscosity. this cially  Lybia.
Kartoatmodjo
following  [15]
3.especially
141formulas 10
and 6739
Tin 460
Schmidt
1994
3.444
which log based
presented  the
APIbased
was
single
4- If of
considered
viscosity paper,
the oil heresoweonly
exists
and
isviscosity
another in the
theagoing dead
pressure
other
important oil
threeviscosity
higher
kind ofthan
kinds is
of
oil 
Africa,
Labedi’s
(6)  
correlation  Lybia. was
  [15]mainly (8) on
are goingbubble
viscosity pressure,
have
and beenitareisstudiedcalled tolivethisoilresearch,
considerand its
this the field data of light crudes from North
od
kind
to oil
consider this in
kind this paper.
of oil viscosity a 
od
following
on 10
3588 . 313 4log
APIformulas
.7013
measured T 0 . 460
T f inviscosity 1994  36 . 447
which data (7)
was(7) based
ofNorth
661
All
bubble
oil of
considered
viscosity
but all these
is
of andhere
pressure,
viscosity known
them four
weand
willitare
asis
be
cannot different
the
the other
called
considered
goinglive
be to
oil kinds
three
live oilin
consider ofother
kinds
and
viscosity.
presented oil
of
its
this
in (6)
the field 
data
10 9.224
 of light   crudes from
kind
4-
in this paper. of oil
If the oil viscosity
exists in
instudied this
a pressurepaper. higher thana a
Africa,
Labedi’s
od 3588
10 .
 samples
especially
313 log
correlation T  Lybia.
460 [15]
36
was mainly . 447
(8) based on (8)
viscosity
oil
kind
All
single
4- If
bubble viscosity
viscosity
publications.
of
the oilhave
is
these
paper,
oil knownbeen
will
viscosity
so four
only
exists asbe
inin the
a this
different
the inoil
considered
live
paper.
dead
pressure this
kinds
oil research,
in
viscosity.
of
viscosity
higher other
oil
than is on
crude
Kartoatmodjo
a
Africa,
Labedi’s 10 .API
313 4log
measured
especially
10
.7013
correlation
9.224 TTand
from
0. 460
Lybia.
6739  crudes
viscosity
Southwest
Schmidt
was 36
[15] .447
mainly
data
Asia,
presented ofNorth
(7)
based
661
the
on
4. If the oil exists
but Ifall inpressure,
publications.
All
4- of
viscositythe aoil
of
these
havepressure
them itcannot
four
exists
been
is higher
in
called
different be inlive
acalled
pressure than oilbubble
presented
kinds
higher
andin
of than
its
ofa
oil
the
crude
and

following
Labedi’s
field
South
 samples
od field correlation
data
America
formulas of
from
f light
and
in Southwest
Middle
1994
wascrudes which
mainly (8) from
Asia,
East.was
based
North
North
[16]
based on
considered
bubble
viscosity is here
pressure,
known and
it asisstudied
the
the other
live live
oil this
threeoilresearch,
kinds
and
viscosity. its the
Africa,
Kartoatmodjo 10
data 9 .
4.7013 of
especially
API 8 T
224
and0.6739 light
Lybia.Schmidt [15] from
presented Norththe the fol-
pressure,viscosity
itThe
isviscosity
single called
paper, live
so only oil and
the beits
dead viscosity
oil viscosity isits
isa and
onod 3588
KartoatmodjoSouth
 America
measured f and
2.8177 and Middle
Schmidt
viscosity xEast.
(8) [16]
presented
data ofNorth
661
viscosity
2.
bubble
but
oil
All all ofhave
of empirical
pressure,
is
these them
knownbeen
will
fourcannot
bestudied
itcorrelations
asis the called
different live inlive
oil this
for oil
presented
considered kinds research,
dead
inand oil
in
ofother
viscosity. oil the

Africa,
following
od field
 16 API  data
10
especially
4.7013
formulas
10 and 9T.224 Tf of0.6739 
light
Lybia. log
inSchmidt
1994 crudes
API
[15] 
which from (9)
wasbased
based
known asAllconsidered
2.the
but The
single live
all
viscosity here
oil
empirical
ofis
paper,
publications. them
known
so and
viscosity. the
correlations
cannot
only as the
the other
be
live
dead three
oilfor
presented
oil kinds
dead
viscosity.
viscosity oil
in ofa
is Kartoatmodjo
crude
lowing
Africa,
 samples
formulas especially
8
from
 2
in .8177
f 1994
Lybia. Southwest
Schmidtwhich
API (8)
[15] presented
Asia,
x was Norththe on 3588
oil
of
viscosity these
viscosityhave will four
been be different
studied considered
kinds
in
of
in this research, oil
other on
andx odod 5South
 3588
Kartoatmodjo
following
16
.7526
API
10 4log
10 9T
measured
formulas
.7013
America TTfand
.224
f0. in
6739 26
and
log.9718
viscosity
1994
Middle which data
presented
was
East.
(9)
(10)
ofbased
[16] 661
thesamples
All of these four
viscosity
single
Several
All of
considered different
paper, so
empirical
these here only
four
and kinds
the
models
different
the of
dead
other oil
oil
exist viscosity
viscosity
kinds
three forof
kinds theis
oil
of measured
 5 viscosity f0.6739data of (8) 661 (10) crude
8 T  in26Southwest
viscosity
but have been studied
all of them cannot be presented in a in this research, x od
crude
(10)  3588 .7526samples 10
4log
.7013
9 .224
Tfrom .9718which Asia, North
publications.
Several following
on   API formulas
measured 2f .8177 1994
viscosity (8) datawas ofbased
661
considered
estimation
viscosity
have been studied
oil
2. The
but
single ofempirical
in
viscosityhavehere
this
empirical
allpaper, them and
ofsowill
dead
been
research,
be models
the
oil
studied
correlations
cannot
only other
theconsidered
be
dead exist
three
viscosity,
inpresented
but this
all
for
oil of for
kinds
some
research,
in
deadthem
viscosity the
other
oil
in of
isa from  odSouthwest
Kartoatmodjo
(10)
and
The
od

South 16 API
Elsharkawy10 4.7013
America TTand
fAsia,
0.6739
f and and
North
Schmidt
log Middle
Alikhan’s
xand
 API presentedEast.
South
(9)
formulas[16] theAmerica
for
estimation
oil
them
but
cannot be presented
single viscosity
all
viscosity
considered have
publications. of
paper, of
been
them
inhere dead
will
a single
so only
and thebe
found
cannot oil
the viscosity,
considered
to
be
paper, give
presented
dead oil
other some
in
so viscosity
three onlykindsother
reasonablein
theofof
isa andon
crude
following
The x
3588
Kartoatmodjo
Middle

estimation 5 .
samples
7526
Elsharkawy
measured
formulas
East.
logof8  T
from
and

f
[16]
f and
dead
2 .  
8177 in26
viscosity
Southwest
Schmidt
1994
.
oil9718
Alikhan’s which
viscosity
data
Asia,
presented
was ofNorth
(10)
formulascan
661
basedthe
for
be
2.
themThe empirical
have
publications.
results
single
Several for
paper, been correlations
Iranian
so
empirical onlyfound the
models to
samples.
dead give for
oil
exist dead
reasonable
Many
viscosity
for oil
new
theis crude
Kartoatmodjo
and
following
on od 3588 South samples
 16 formulas America
10
measured Tf from
and and Southwest
inlog
Schmidt API
Middle
1994
viscosity which Asia,
 datawas
presented
xEast.
(9) North
[16]
ofbasedthe
661
dead oil viscosity
considered is considered
oil viscosity herewillandbe the here and
other
considered thein
three other
kindsotherof estimation
presented
(10)3588 as of
8 dead
follow [17]: oil viscosity can be
viscosity and
following
on South America
formulas
measured 2.8177 and
in Middle
1994
viscosity which xEast.
datawas [16]
ofbased
661
results
empirical
considered
2. The
estimation
oil for
empirical
viscosity
three kinds publications. of Iranian
correlations
here and
dead
will be the
correlations
of oil viscosity will be considered inoil samples.
have
otherbeen
viscosity,
considered Many
three
for introduced
kinds
dead
some
in new
oil
otherof crude
 x od samples
 5Elsharkawy
.7526
16  10 log T Tffrom  
26
[17]: 
Southwest
log.9718 API  Asia,(10)
(9) North (9)
10 from x   1.Southwest
presented
onod 3588
The  anti as
log follow
measured and
f .8177
Alikhan’s
0 MiddlexEast.
viscosity formulas
(11)[16] for
Several empirical models exist forotherthe  data of 661
8  2
empirical
2.
in
oil
other publications.The
recent
viscosity
viscosity
them have
publications. correlations
empirical
years been correlations
[10-11]
will be
found have
to inbeen
but
considered
give forintroduced
this dead
paper,
in
reasonableoil
we crude
and

(10)
x od 5South
od
estimation  16
.7526 samples
anti 10America
loglogof
TTf f and
 x
dead   log
26
1 .0 .  API
9718
oil viscosity
Asia, (9)North
(10)
(11) can be
10  yTf and
estimation of dead oil viscosity, some of crude
and  5South samples
logAmerica from Southwest Asia, North
in
2. recent
viscosity
have
Several
The focused
publications.
results foryears
empirical on[10-11]
Iranian
empirical those samples.
models
correlations but in exist
empiricalthis
forMany paper,
methods
for oil
dead newwe
the The
xx od
(10) 
presented
anti
Elsharkawy
.7526
16log 10
as log
810 2.8177
Tyf2.8177
8follow
 and log
26
[17]:
MiddlexEast.
 API xEast.
Alikhan’s
.9718 (12)
formulas
[16]
[16] for
(10)
(9) (10)
them
have
Several
which
estimation
empirical
viscosity focused have
have been
empirical
been on
2. The empirical correlations for dead oilof
of dead
correlations found
those
tested models
oil for
have to give
empirical
exist
Iranian
viscosity,
been reasonable
methods
for
crudes.
some
introduced the and xy 
estimation
 
South
anti
2 . 16924
16 
America
10 ofT 0 . 02525
dead
and

Middle
API
oil
log  0
viscosity
API .68875 (12) log10for
can
(9) be
T
(10) log10Txf and 
10
results for Iranian samples. Many new The
x od 5Elsharkawy
od
anti
.7526 log 8 f 2 .

8177
1
26 .Alikhan’s
0.9718 x formulas (10)
(11)
which
estimation
In
2.
them 1946,
in The
recent
viscosity
Several have
have Beal been
of
empirical
yearsbeen
empirical dead tested
introduced
[10-11] oil
correlations
found models for
buta Iranian
viscosity,
to graphical
ingivefor
this
exist crudes.
some
dead method
reasonable
paper,
for oilweof
the   52
xy(13)
presented 16
.7526
16924 10
as T0Tf dead
follow .f02525 log
[17]:
 and API API 0.68875
Alikhan’s (9)
log10forbe
T  for es-
2. The Empirical
empirical
In
them
for 1946,
estimation
viscosity
results
have focused have
for Correlations
correlations
Beal been introduced
Iranian
on of found
dead
those samples. have a
to for
been
graphical
oil give
viscosity
empirical Many Dead
introduced
method
reasonable
methods[12].
new
The
estimation
The
(10) x
od

Elsharkawy
.Elsharkawy
anti log
logof
 y 
and 26 Alikhan’s
oil
.9718 viscosity formulas
(10)
can
formulas
Several
estimation empirical
of dead models
oilbut exist
viscosity, for
some we the
of  x
Naseri
timation
(10) 
estimation
(13)
presented 5
 . anti
7526et log
al.
as logof
10
of dead oil T
introduced
follow x
dead   126.
[17]:
andviscosity
0.
oil9718
the viscosity
following
Alikhan’scan
(12)
(10)
(11) can
be presentedbe as
Oil Viscosity in recent years [10-11] in this paper, f
for
results
This
Several
which estimation
empirical for
graph
have Iranian
empirical
been was
correlations oftested
dead samples.
mainly
models for
have oilIranian
viscosity
based
exist
been Many on [12].
for
crudes.
introduced newthe The Elsharkawy
od 10 formulas for
estimation
them
have
This
empirical focused have
graph
of
been dead
on
was
correlations
found oil
those
mainly have
viscosity,
to give
empirical
based
some
reasonable
methods
on
of
the follow
The xyod 
presented
(10)
Naseri
formula
estimation
2Elsharkawy
anti
[17]:.anti
16924
etfor
logas
al.
log  y0.x02525
follow
introduced
Iranian
of 10 dead  dead
and [17]: API
the
1.Alikhan’s
0oil 0.68875
viscosity
following
oilviscosity formulas
(11)
(12) log10for
in
can
T 
be
Californian
estimation
in
In recent
them 1946, have of
years
Beal field
been dead data.
[10-11]
introduced
found oil In
but inbeen
1981,
viscosity,this
a graphical
to give introduced
Standing
some
paper,
method
reasonable weof
results
which
Californian
in
them
have
for recent
substituted
results focused
estimation
for
have
have
foryears
the
Iranian
been
field
been on tested
data.
[10-11]
Beal’s
Iranian of found
those
dead
samples.
graph
samples. forIn
butto Iranian
in1981,
by
empirical
oil this
give
Many
the
viscosity
Many crudes.
Standing
paper,
following
reasonable
methods
new
[12].
newwe
The

formula
2005[18].
(13)
estimation
presented
xyod Elsharkawy
 anti anti
2.16924
foras
loglog
10
Iranian
of .x02525
10y0dead
follow  dead
and 1.Alikhan’s
[17]:0oiloilviscosity
viscosity
APIviscosity
 0.68875
formulas
(12)
in for
(11)can
log10 be T
Several empirical
empirical models correlations exist for have the been estimation
introduced estimation
2005[18].
 10of dead oil  API be
can  2.052 log10 T f 
In
have 1946,
substituted
formula:
results
which
This
empirical
of dead oil viscosity,
in recentfocused Beal
for
have
graph the
been introduced
Beal’s
on
Iranian
was
correlations
some
years of those
tested
them
[10-11] graph
samples.
mainly for
have a Iranian
have
but
graphical
by
empirical
in the
based
been
been
thisMany method
following
methods
crudes.
on
introduced
found
paper, newthe
we
Naseri
presented
 x od
od

anti anti
antietlog log
al.
as
log 
follow
10 y 
introduced
100.11

11
x  .
 2699
[17]:
1 .0 
the 4 .298
following log (12)
(11)
10
T2.052(11)
yod 10 in 10  log10 T f 
for estimation of dead oil viscosity [12]. 
formula(13)
presented 2.16924
(14)
 anti
anti foras
log
log
10
follow
10
Iranian x02525
 [17]:
dead
.2699
 1 .0 API 4
oil 0.68875
viscosity
.298 log(11) log
API
formula:
which
empirical
Californian
In
in 1946,
recent  haveBeal been
correlations
years field
1 . tested
introduced
[10-11]
8  data.
10 7
 for
have
butIn
a Iranian
in
360 been
1981,
graphical
this crudes.
introduced
Standing
method
paper, 8.33we y
od
2 . 16924  0 
. 02525 API  0 .68875 log  T 
have
to give reasonable

This
In
in
for 
od1946,
recent
focused

graph
estimation
substituted 0. results
32
Beal
years
the
on
was
introduced
[10-11]
those
for
oftested
Beal’s 
mainly
7 
4dead for
graph but
empirical
Iranian based
aempirical
graphical
oil in
by this 10 methods
samples.

 paper,
viscosity
the
0 .
on
43
method
following[12].
API 
the
we x
Naseri
There 
(13)
2005[18]. anti

(14)
antietlog
arelogal.
log
many10
10
y 
introduced
10y.11 x   1 . 0 the
more correlations,(12) following (12)
(11)
but these
10
have focused 1 on
.8  those
10 360 methods x (13) anti
T2.052(12)
od
which have been API .53  T Iranian  crudes. formula
Many new empirical
od  graph correlations have   460been 0intro-
.43
Naseri
y od etforal. Iranian
introduced
10y0usually
 dead oil4following
the
API .viscosity in
log log 10 T f 
8.33
Californian
for
have
This estimation
formula:
which focused 0.32
have
field
been on
was data.
oftested
4dead
those
mainly forIn 1981,
oilIranian 10
viscosity
empirical
based Standing
methods
on
crudes. [12].
API
the 
There
formulas
x  2.16924
 anti are
anti logmany
log
are 10
02525
more .2699 correlations,
used 0by
298 .68875
log but
Iranian
(12)
10 these
API10oil
In
duced in recent
This
which
1946,
years
(1)graph
substituted  Beal[10-11]
the introduced
API
Beal’s
was
.53
but 
7graph
mainly
 Ta graphical
 460
in Iranian
this
by 
paper,
the
based
method
following
on we the
2005[18].
y
Naseri
formula
formulas
experts,  2
(14)
(13) . 16924
et al.
forare
they
10
 0
introduced
Iranian
have . 02525
usually dead
been API
the
usedoil  0 .
following68875
viscosity
selected by Iranian
to log
in
estimate  T
oil 
Californian have been
field 8oftested
data. for
 In 1981,  crudes.
Standing T2.052(13)
10
In
for 1946,
estimation Beal 1introduced
.and 10
dead aoil360
graphical
viscosity method
0.43[12]. y(13)
formula 2.16924
dead anti log 100.11
viscosity 02525 .2699 API 4
oil 0.68875
.viscosity
298 log in
log
API10To log10 T f 
(1) etfor Iranian dead
have focusedIn on 1975 those Beggs empirical Robinson methods introduced which 8.33
an 2005[18].
experts, they have been selected to 10estimate
formula:

In
for 
Californian
1946,
substituted
estimation  0 .32
Beal 
graphthewasfield data.
introduced
Beal’s
of 4dead graph 
  Ina
oil 1981,
graphical
by the 10
viscosity Standing
method
following[12].
API
the
There
Naseri od
are oil
many
al. introduced more ofthe Iranian
correlations,
following crudes.
but these
This
od
mainly based on the    10 T f 
.53
In 1975
empirical
have been tested for  Beggs API
correlation
Iranian and Robinson
to   T  460
determine introduced
 the dead an 2005[18].
 (13)
(14) anti API
substituted
for estimation
formula:
This
Californian graph the .8ofcrudes.
Beal’s
1was 10dead
7
graph
mainly  In oil
360by the  following
viscosity
based .43[12].
0on 8.33the
API 
the
Naseri
compare
formulas
formula 
od dead etfor log
oil
al.
these
are viscosity
introduced 11
10usually
Iranian . 2699
empirical
dead of 
the
used 4
oil .298
Iranian
following log
crudes.
correlations,
by Iranian
viscosity 10 in To
the
oil 2.052 log
 
empirical
oil viscosity
(1)  0.32 field
correlation
[13]. data.
This to
7 
  1981,
determine
empirical 10 Standing
the
method dead is Naseri
There

compare
formula
following
experts,  et
are
anti al.
formany
log
these introduced
Iranian
statistical 11 more . 2699
empirical
(14) they have been selected to estimate dead the
 4
oil
measures following
correlations,
.298 log
correlations,
viscosity have 
but these
API
in 
the
been  2.052 log 10 T f 
formula:
This graphthe
od
Californian
substituted 1was
fieldAPI 4.
.8following
Beal’s 10 mainly
data.
53
In360
T
empirical
graph by based
1981,
460 on the
Standing
 method
the following 2005[18].
od 10 10
oil viscosity
presented in [13].
the This formulas: 0.438.33APIis formula
formulas forare Iranian usually dead usedoil viscosity
by Iranianin oil
In
 1975

Californian
substituted  0
 x the. Beggs
32 field and
1Beal’s
Robinson
data.
4.53 graph
 
 TIn360 introduced
1981,  10 Standing an following
2005[18].
used:
There
the (14)
dead are many
oil statistical
viscosity more measures
od  anti log10 11.2699  4.298 log10  API   2.052 log10 T f 
correlations,
of Iranian havebut
crudes. been
theseTo
by the
 following
7
formula:
od
(1)10 .8  10
1theAPI 460 0.438.33API  2005[18].
experts,
presented
od 
empirical
substituted
formula:
 0.32 in
correlation
the following
Beal’s to
4.537graph
formulas:
 determine by(2)the 10following
the dead used:
There
1.
od(14)
formulas
compare The anti arethey loghave
many
average
are
these 11
10usually more been
.relative
2699
empirical selected
correlations,
used 4.error
298 bylog to
(ARE) API
estimate
but
Iranian
correlations, 10 theseas 2.052 log10 T f 
oil
the
In
od1975
oil viscosity
(1)
 10  xBeggs 1[13]. API
1.18and 10 Robinson
This T 
empirical
360 460
(2)introduced
 method anis the
1. 
formulas dead
The  anti oil
log
average
are 
viscosity 11
10usually .2699
relative of 
used Iranian
4 . 298
errorby log(ARE)
crudes.
Iranian API as 2.052 log10 T f 
To
oil
formula:

x od y T0 .32 4601 .8  10
.163
7  (3) 10the
0.438.33API follow:
experts,
following
There od(14) they
are these
many more have
statistical been selected
measures
correlations, to 10estimate
have been
but these
empirical
In (1)
1975
presented correlation
Beggs
in the APIand 4.53 to
Robinson
following  T 360
determine introduced
formulas:
 460 0.43dead an  compare
follow: empirical correlations, the
experts,
the (14)
dead they
areoil have
viscosity been of selected
Iranian to estimate
crudes. To
 y T0  1 .8  10 empirical (3) 10 used:
8.33
.32 7 
4.53  There many more correlations, but these
 1 .163
x odviscosity formulas are usually used by Iranian oil
API

oil 460 [13].


API This T 360
 460 method
0.438.33API is following statistical measures have been
In
od 1975
empirical

 10  0 xBeggs
.32correlation
1 and Robinson
to  determine introduced
(2) 
10 the dead an the
There
1. The
compare
formulas dead oil
aretheymany
average
these
are viscosity more relative
empirical
usually of Iranian
correlations,
used error crudes.
correlations,
by Iranian but
(ARE) theseTo
theas
oil
od (1)  in theAPI 4.53  experts, have been selected to estimate
presented following toempirical Tformulas:
 460  method used:
oil
In (1) xBeggs
empirical
viscosity
1975 correlation
[13]. .163This
1and Robinson determine introducedthe dead is
an 4 compare
formulas
following
follow:
experts, deadthey these
are statistical empirical
usually
have beenofselected used
measures correlations,
by Iranian
have
to the
been
estimate oil
x

oil  y
 10
viscosity
(1)
presented T  460
in 1 
[13].
the and This
following empirical (3)
(2)
formulas: method is
the
1.
4 following
experts, The oil viscosity
average
statistical relative Iranian
measures error have crudes.
(ARE) been
Toas
In od1975 Beggs
empirical correlation Robinson
to determine introducedthe dead an used:
the
compare deadthey oil
these have
viscosity been
empirical ofselected
Iranian to estimate
crudes.
correlations, To
the
presentedx in the1following .163 formulas: follow:
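To make the comparison concrete, the following minimal Python sketch shows how two of the above correlations, Glaso (Eqs. (6)-(7)) and Naseri et al. (Eq. (14)), and the statistical measures of Eqs. (15)-(17) can be evaluated. It is an illustration added here, not the authors' implementation: the function names are ours, T is the temperature in deg F as in the equations above, all logarithms are base 10, and the relative deviations in Eq. (17) are taken in percent so that subtracting the AARE is consistent with Eqs. (15)-(16).

import numpy as np

def glaso_mu_od(api, t):
    # Glaso dead oil viscosity, Eqs. (6)-(7); t in deg F, result in cp.
    a = 10.313 * np.log10(t + 460.0) - 36.447
    return 3.141e10 * (t + 460.0) ** -3.444 * np.log10(api) ** a

def naseri_mu_od(api, t_f):
    # Naseri et al. dead oil viscosity, Eq. (14); t_f in deg F, result in cp.
    return 10.0 ** (11.2699 - 4.298 * np.log10(api) - 2.052 * np.log10(t_f))

def are_percent(calc, exp):
    # Average relative error, Eq. (15), in percent.
    r = 100.0 * (np.asarray(calc) - np.asarray(exp)) / np.asarray(exp)
    return r.mean()

def aare_percent(calc, exp):
    # Average absolute relative error, Eq. (16), in percent.
    r = 100.0 * (np.asarray(calc) - np.asarray(exp)) / np.asarray(exp)
    return np.abs(r).mean()

def sd_percent(calc, exp):
    # Standard deviation of the percent relative errors about AARE, Eq. (17).
    r = 100.0 * (np.asarray(calc) - np.asarray(exp)) / np.asarray(exp)
    return np.sqrt(((r - aare_percent(calc, exp)) ** 2).sum() / (len(r) - 1))

Evaluating one correlation over the whole data bank and passing the predicted and measured viscosities through these three functions yields one row of Table 1.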
It is clear that the coefficients of these correlations are found by fitting curves to a certain set of experimental data. One of the main aims of this study was therefore to re-fit the coefficients of these correlations for the Iranian samples. This re-fit was done with the aid of the collected viscosity data bank. The re-constructed correlations are given as follows.

The optimized Glaso formula was found to be:

$$\mu_{od} = 212.5\,(T+460)^{-0.5495}\left(\log_{10}\mathrm{API}\right)^{a} \tag{18}$$

$$a = 2.66381\times10^{-8}\,\log_{10}(T+460) - 4.155 \tag{19}$$

The optimized Labedi formula is:

$$\mu_{od} = \frac{10^{10.04}}{\mathrm{API}^{2.764}\,T_f^{2.266}} \tag{20}$$

The Kartoatmodjo and Schmidt formula was optimized to:

$$\mu_{od} = 31.93\times10^{6}\,T_f^{-2.293}\left(\log_{10}\mathrm{API}\right)^{x} \tag{21}$$

$$x = 9.25\times10^{-9}\,\log_{10}(T_f) - 8.755 \tag{22}$$
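This excerpt does not spell out the numerical fitting procedure behind the re-fit, so the sketch below shows only one plausible way to perform it: a nonlinear least-squares fit of the free coefficients of a Glaso-type expression (cf. Eqs. (18)-(19)) against a viscosity data bank, using SciPy. The function name glaso_type is ours, and the data arrays are illustrative placeholders, not values from the authors' data bank.

import numpy as np
from scipy.optimize import curve_fit

def glaso_type(x, c1, c2, c3, c4):
    # Glaso-type dead oil viscosity with free coefficients c1..c4,
    # mirroring the algebraic form of Eqs. (18)-(19).
    api, t = x
    a = c3 * np.log10(t + 460.0) + c4
    return c1 * (t + 460.0) ** c2 * np.log10(api) ** a

# Placeholder data standing in for the collected Iranian viscosity bank.
api = np.array([25.6, 26.3, 27.0, 27.7, 28.4, 28.9])
temp = np.array([104.0, 122.0, 140.0, 158.0, 176.0, 194.0])   # deg F
mu_exp = np.array([21.4, 14.8, 10.6, 7.7, 5.8, 4.5])          # cp

# Starting from the original Glaso coefficients of Eqs. (6)-(7) helps
# the optimizer; poorly chosen starts can make the fit diverge.
p0 = [3.141e10, -3.444, 10.313, -36.447]
coeffs, _ = curve_fit(glaso_type, np.vstack([api, temp]), mu_exp,
                      p0=p0, maxfev=20000)
print(coeffs)   # refitted c1..c4 for the sample data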
Table 1. The comparison of different empirical prediction methods (values in percent).

Method               ARE%     AARE%    SD%
Beal                 263.45   266.81   369.86
Beggs                -36.3    52.55    26
Glaso                -37.89   53.21    33.14
Optimized Glaso      -19.6    45.01    32
Labedi               32.69    86.76    116
Optimized Labedi     25.06    65.07    77.8
Schmidt              -42.76   56.52    32.92
Optimized Schmidt    24.44    41.7     35
Alikhan              -11.25   69       47.5
Naseri               -65      55.32    23

Figure 1: The comparison of different empirical prediction methods (measured viscosity, cp, versus API gravity for the experimental data and the predictions of the Beal, Beggs, Glaso, optimized Glaso, Labedi, optimized Labedi, Schmidt, optimized Schmidt and Alikhan correlations).

3. The Neural Network Model

Neural networks generally consist of three main layers, called the input, hidden and output layers. These networks have been successfully used in many different types and architectures, and they consist of the same elements, such as nodes, layers and connections. The multi-layer perceptron and the radial basis function network are two of the most popular architectures for the prediction of the physical properties of fluids. It has been reported that a multi-layer feed-forward neural network with one hidden layer is able to approximate any complicated nonlinear function [19]; thus, a multi-layer feed-forward perceptron neural network with two hidden layers has been used in this
Figure 1: The comparison of different empirical prediction methods

K. Movagharnejad and S. Ghanbari / Journal of Chemical and Petroleum Engineering, 50 (2), Feb. 2017 / 9-17 13
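For convenience, the refitted correlations of Eqs. (18)-(22) can be evaluated with a short script. The sketch below is our own illustration, not the authors' code: it assumes base-10 logarithms, T in degrees Rankine in Eqs. (18)-(19), T_f in degrees Fahrenheit in Eqs. (20)-(22), and viscosity in cP, and the signs of the exponents reconstructed above should be verified against the printed equations.

```python
import math

def glaso_optimized(T_rankine, api):
    # Optimized Glaso correlation, Eqs. (18)-(19).
    a = 2.66381e-8 * math.log10(T_rankine - 460.0) - 4.155
    return 212.5 * (T_rankine - 460.0) ** -0.5495 * math.log10(api) ** a

def labedi_optimized(T_f, api):
    # Optimized Labedi correlation, Eq. (20).
    return 10.0 ** 10.04 / (api ** 2.764 * T_f ** 2.266)

def kartoatmodjo_schmidt_optimized(T_f, api):
    # Optimized Kartoatmodjo-Schmidt correlation, Eqs. (21)-(22).
    x = 9.25e-9 * math.log10(T_f) - 8.755
    return 31.93e6 * T_f ** -2.293 * math.log10(api) ** x

# Illustrative call for a 28 API crude at 140 °F (600 °R):
mu_od = glaso_optimized(600.0, 28.0)
```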

3. The Neural Network Model

Neural networks generally consist of three main layers, called the input, hidden and output layers, and are built from the same elements: nodes, layers and connections. The multi-layer perceptron and the radial basis function network are two of the most popular architectures for the prediction of the physical properties of fluids. It has been reported that a multi-layer feed-forward neural network with one hidden layer is able to approximate any complicated nonlinear function [19]; thus, a multi-layer feed-forward perceptron neural network with two hidden layers has been used in this work to correlate the dead oil viscosity of Iranian crudes. Figure 2 shows the general structure of the ANN architecture used in this work.

Weight factors connect the different neurons in each layer. Except for the input layer, every layer receives a weighted sum of inputs. The sigmoid transfer function is used in this study because its derivative is easy to evaluate [20].

It is necessary to train an artificial neural network before it is used for a specific application. This process is a step-by-step method for calculation of the optimized weight factors and biases.

Figure 2. The general architecture of the ANN model.
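The forward propagation just described can be written in a few lines. The following is a minimal sketch under our own naming (the paper gives no code); it assumes two inputs, for example API gravity and temperature, sigmoid hidden layers and a linear output, with randomly generated initial weights as stated in the text.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid transfer function; its derivative s(1 - s) is cheap to evaluate.
    return 1.0 / (1.0 + np.exp(-z))

def init_layers(sizes, rng):
    # Initial weights and biases are generated randomly at the first step.
    weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(m) for m in sizes[1:]]
    return weights, biases

def forward(x, weights, biases):
    # The inputs enter the input layer and move forward through the
    # hidden layers of neurons; each layer receives a weighted sum.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)
    # Linear ("purelin") output layer returns the predicted scaled viscosity.
    return weights[-1] @ a + biases[-1]

rng = np.random.default_rng(0)
# Hypothetical layout: 2 inputs, two hidden layers, 1 output.
weights, biases = init_layers([2, 20, 20, 1], rng)
y_hat = forward(np.array([0.4, 0.6]), weights, biases)
```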
During the training, the new outputs are generated by the network, iteratively. The sequential weight/bias rule is one of the most commonly used training methods and is also applied in this study. Initial weights are first generated randomly. Then the inputs are entered into the input layer and move forward through the hidden layers of neurons to the output layer, and the generated output is compared with the experimental dead oil viscosity. Changing the weight factors may decrease the calculated errors. After the training process, the network can be used to predict the dead oil viscosity.

Scaling of the input and output values may improve the network performance. In this work, both inputs and outputs are scaled between 0.1 and 0.9 as follows:

Scaled Value = 0.8 (Actual Value − Minimum Value)/(Maximum Value − Minimum Value) + 0.1    (23)

The dead oil viscosity data were randomly divided into two main groups consisting of 70% and 30% of the total data, respectively. The first group was used to train and the second group to test the trained network. The number of neurons in the hidden layer of the neural network was found to be equal to 20. Figure 3 shows the graphical representation of the neural network, and the characteristics of the optimized neural network are summarized in Table 2. Figure 4 compares the results of this section with the experimental data. The mean square error has been used to compare the different prediction methods:

mse = Σ (x_cal − x_exp)² / n    (24)

Figure 3. Graphical representation of the neural network.

Table 2. The characteristics of the optimized neural network for Iranian dead oil viscosity.
Number of neurons: 20
Transfer function of the input layer: Sigmoid
Transfer function of the output layer: Purelin
Learning algorithm: Sequential weight/bias rule
MSE: 0.014

4. Support Vector Machine

Vapnik [21] introduced the support vector machines (SVMs) in 1998. The advantage of this method is that it obtains the solution by solving a quadratic programming (QP) problem and so avoids local minima [22]. Obtaining the final SVM model can be very difficult because it is necessary to solve a set of nonlinear equations. A simplified version of SVM, called the least-squares support vector machine (LS-SVM), leads instead to a set of linear equations. The support vector machine is also used to recognize hidden patterns and to classify the input data by means of the least-squares method, but SVM is more generalized in comparison to ANN. SVM is considered to belong to the kernel methods.

Given a set of training data

{(x_1, y_1), …, (x_k, y_k)} ⊂ R^N × R    (25)

the following regression model is constructed by using the nonlinear mapping function φ(·):

y = Wᵀ φ(x) + b,  W ∈ R^M, b ∈ R, φ: R^N → R^M, M → ∞    (26)

where b is the bias and W is the weight vector. If the least-squares support vector machine is used as an approximation function, a new problem has to be solved. The optimization problem of LS-SVM is given as:

min J(W, e) = (1/2) Wᵀ W + (1/2) γ Σ_{k=1}^{N} e_k²    (27)

subject to the constraints:

y_k = Wᵀ φ(x_k) + b + e_k,  k = 1, …, N    (28)

where γ is the regularization parameter and e_k is the error variable. The problem may be solved by a Lagrangian function:

L(W, b, e, α) = J(W, e) − Σ_{k=1}^{N} α_k [Wᵀ φ(x_k) + b + e_k − y_k]    (29)

where α_k is called the support value. Partial differentiation with respect to each variable leads to the solution:

∂L/∂W = 0 → W = Σ_{k=1}^{N} α_k φ(x_k)    (30)

∂L/∂b = 0 → Σ_{k=1}^{N} α_k = 0    (31)

∂L/∂e_k = 0 → α_k = γ e_k,  k = 1, …, N    (32)

∂L/∂α_k = 0 → Wᵀ φ(x_k) + b + e_k − y_k = 0,  k = 1, …, N    (33)

The Karush-Kuhn-Tucker (KKT) linear system is obtained as the variables W and e are eliminated (semicolons separate matrix rows):

[0, 1_Nᵀ; 1_N, Z + γ^(−1) I] [b; α] = [0; y]    (34)

with

y = [y_1, …, y_N]    (35)
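A compact NumPy sketch of this procedure is given below. It is an illustration under assumed variable names, not the authors' implementation: the data are scaled with Eq. (23), the kernel matrix Z is built with the RBF kernel defined further below in Eq. (40), the linear KKT system of Eq. (34) is solved for (b, α), and predictions follow Eq. (41). The placeholder data stand in for the Iranian field measurements.

```python
import numpy as np

def scale_01_09(v, vmin, vmax):
    # Eq. (23): map values linearly into the [0.1, 0.9] range.
    return 0.8 * (v - vmin) / (vmax - vmin) + 0.1

def rbf_kernel(X1, X2, sigma):
    # Eq. (40): K(x, x_k) = exp(-||x - x_k||^2 / (2 sigma^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma, sigma):
    # Assemble and solve the linear KKT system of Eq. (34) for (b, alpha).
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # b, alpha

def lssvm_predict(X_new, X, b, alpha, sigma):
    # Eq. (41): y(x) = sum_k alpha_k K(x, x_k) + b.
    return rbf_kernel(X_new, X, sigma) @ alpha + b

rng = np.random.default_rng(0)
X_raw = rng.uniform([20.0, 100.0], [30.0, 200.0], size=(50, 2))  # placeholder (API, T_f)
y_raw = rng.uniform(2.0, 40.0, size=50)                          # placeholder viscosities
X = scale_01_09(X_raw, X_raw.min(axis=0), X_raw.max(axis=0))
y = scale_01_09(y_raw, y_raw.min(), y_raw.max())
b, alpha = lssvm_train(X, y, gamma=100.0, sigma=1.3096)  # hyper-parameters of Table 3
y_fit = lssvm_predict(X, X, b, alpha, sigma=1.3096)
```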
Figure 4. Experimental viscosity vs. neural network predictions (scaled calculated viscosity against scaled experimental viscosity).

Figure 5. Experimental viscosity vs. SVM predictions (scaled calculated viscosity against scaled experimental viscosity).
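The parity comparison of Figure 4 can be reproduced in outline with an off-the-shelf trainer. The sketch below only approximates the paper's setup: it uses scikit-learn's MLPRegressor with the 20-neuron sigmoid hidden layer and linear output of Table 2 and the 70/30 split, but a quasi-Newton optimizer instead of the sequential weight/bias rule, and random placeholder data in place of the Iranian field data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(100, 2))  # placeholder inputs, already scaled per Eq. (23)
y = rng.uniform(0.1, 0.9, size=100)       # placeholder scaled viscosities

# 70% of the data train the network, the remaining 30% test it.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

mse = np.mean((net.predict(X_te) - y_te) ** 2)  # Eq. (24)
```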
In Eq. (34), the vectors and the kernel matrix are defined as:

1_N = [1, …, 1]    (36)

0 = [0, …, 0]    (37)

α = [α_1, …, α_N]    (38)

Z = {Z_kj | k, j = 1, …, N},  Z_kj = φ(x_k)ᵀ φ(x_j) = K(x_k, x_j)    (39)

where K(x_k, x_j) is the RBF kernel function:

K(x, x_k) = exp(−‖x − x_k‖² / (2σ²))    (40)

The LS-SVM regression model can then be written as:

y(x) = Σ_{k=1}^{N} α_k K(x, x_k) + b    (41)

where (b, α) are found by solving the linear KKT system of Eq. (34). The experimental data were divided randomly into two groups, as for the neural network model: 70% of the experimental data were used for training and the remaining 30% for testing the trained SVM. All of the figures were normalized into the [0.1, 0.9] range to eliminate the effect of the figures' magnitude on their real values. The results of the SVM regression for Iranian dead oil viscosity are summarized in Table 3 and Fig. 5.

Table 3. The characteristics of the optimized SVM for Iranian dead oil viscosity.
Type of kernel function: Radial
γ: 100
σ: 1.3096
MSE: 0.0063

5. Comparison of the Methods

Figure 6 compares the experimental data with the predictions of the different methods. This figure is arranged in terms of the API gravity of the samples and clearly shows that the different prediction methods behave differently for Iranian crude samples. Most of the prediction methods, except Beal's, under-predict the viscosity, and it is clear that the optimized correlations are generally more accurate than the original ones. The least square errors of the different methods used in this study are also listed in Table 4. The results of this table show that the Beal and the Labedi methods are not suitable for estimation of the viscosity of the Iranian crudes, while the Beggs, Glasso and Schmidt methods give reasonable results. The least square errors of the Neural Network and SVM methods are much smaller than those of the classical methods. Comparing these two last methods with each other, it is clear that the SVM method is more exact than the neural network model.

Figure 6. Comparison of different correlations for prediction of Iranian dead oil viscosity.

6. Conclusions

The results of this study show that the classical correlations give huge errors for Iranian dead oil viscosity. Optimization of the original classical methods always gives better estimations, and this technique may be extended to other physical properties. The study also showed that the statistical methods of neural network and SVM give much better results compared to the classical correlations. The results in Table 4 showed that Naseri's equation is better than the other empirical equations, which may be due to the fact that it was originally based on Iranian dead oil viscosities. On the other hand, intelligent methods are more accurate than classical methods, and this research implies that the SVM method is much more accurate than the neural network model.

Table 4. Comparison of the different methods (minimum least square error).
Beal: 369.86
Beggs: 26.54
Glasso: 33.14
Labedi: 116.9
Schmidt: 32.92
Alikhan: 47.56
Naseri: 23.78
Neural Network: 1.959
SVM: 0.0739
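The error measures reported in Tables 1 and 4 can be computed as follows. This is a sketch under the usual definitions of the relative-error statistics (the section does not restate them), with the least square error taken as the sum of squared deviations in the spirit of Eq. (24).

```python
import numpy as np

def are_pct(calc, exp):
    # Average relative error (%), assuming the usual definition.
    return 100.0 * np.mean((np.asarray(calc) - np.asarray(exp)) / np.asarray(exp))

def aare_pct(calc, exp):
    # Average absolute relative error (%).
    return 100.0 * np.mean(np.abs(np.asarray(calc) - np.asarray(exp)) / np.asarray(exp))

def least_square_error(calc, exp):
    # Sum of squared deviations, as tabulated in Table 4.
    return float(np.sum((np.asarray(calc) - np.asarray(exp)) ** 2))
```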
References

1. Mohaghegh, Sh., Arefi, R., Ameri, S., Amini, Kh. and Nutter, R. (1996). “Petroleum reservoir characterization with the aid of artificial neural networks”, Journal of Petroleum Science and Engineering, Vol. 16, pp. 263-274.

2. Romero, C.E. and Carter, J.N. (2001). “Using genetic algorithms for reservoir characterization”, Journal of Petroleum Science and Engineering, Vol. 31, pp. 113-123.

3. Nikravesh, M. and Aminzadeh, F. (2001). “Past, present and future intelligent reservoir characterization trends”, Journal of Petroleum Science and Engineering, Vol. 31, pp. 67-69.

4. Esmaeilzadeh, F. and Nourafkan, E. (2009). “Calculation OOIP in oil reservoir by pressure matching method using genetic algorithm”, Journal of Petroleum Science and Engineering, Vol. 64, pp. 35-44.

5. El-Sebakhy, E.A. (2009). “Forecasting PVT properties of crude oil systems based on support vector machines modeling scheme”, Journal of Petroleum Science and Engineering, Vol. 64, pp. 25-34.

6. Emera, M.K. and Sarma, H.K. (2005). “Use of genetic algorithm to estimate CO2-oil minimum miscibility pressure - a key parameter in design of CO2 miscible floods”, Journal of Petroleum Science and Engineering, Vol. 46, pp. 37-52.

7. Alomair, A., Elsharkawy, A. and Alkandari, H. (2014). “A viscosity prediction model for Kuwaiti heavy crude oils at elevated temperatures”, Journal of Petroleum Science and Engineering, Vol. 120, pp. 102-110.

8. Hemmati-Sarapardeh, A., Aminshahidy, B., Pajouhandeh, A., Yousefi, S.A. and Hosseini-Kaldozakh, S.A. (2016). “A soft computing approach for the determination of crude oil viscosity: Light and intermediate crude oil systems”, Journal of the Taiwan Institute of Chemical Engineers, Vol. 59, pp. 1-10.

9. El-Hoshoudy, A.N., Farag, A.B., Ali, O.I.M., El-Batanoney, M.H., Desouky, S.E.M. and Ramzy, M. (2013). “New correlations for prediction of viscosity and density of Egyptian oil reservoirs”, Fuel, Vol. 112, pp. 277-282.

10. Sanchez-Minero, F., Sanchez-Reyna, G., Ancheyta, J. and Marroquin, G. (2014). “Comparison of correlations based on API gravity for predicting viscosity of crude oils”, Fuel, Vol. 138, pp. 193-199.

11. Al-Balushi, M., Mjalli, F.S., Al-Wahaibi, T. and Al-Hashmi, A.Z. (2014). “Parametric study to develop an empirical correlation for undersaturated crude oil viscosity based on the minimum measured input parameters”, Fuel, Vol. 119, pp. 111-119.

12. Beal, C. (1946). “Viscosity of air, water, natural gas, crude oil and its associated gases at oil field temperature and pressures”, Transactions of the AIME, Vol. 165, pp. 114-127.

13. Beggs, H.D. and Robinson, J.R. (1975). “Estimating the viscosity of crude oil systems”, Journal of Petroleum Technology, Vol. 9, pp. 1140-1141.

14. Glaso, O. (1980). “Generalized pressure-volume-temperature correlation for crude oil system”, Journal of Petroleum Technology, Vol. 2, pp. 785-795.

15. Labedi, R. (1992). “Improved correlations for predicting the viscosity of light crudes”, Journal of Petroleum Science and Engineering, Vol. 8, pp. 221-234.

16. Kartoatmodjo, F. and Schmidt, Z. (1994). “Large data bank improves crude physical property correlation”, Oil and Gas Journal, Vol. 4, pp. 51-55.

17. El-Sharkawy, A.M. and Alikhan, A.A. (1999). “Models for predicting the viscosity of Middle East crude oils”, Fuel, Vol. 78, pp. 891-903.

18. Naseri, A., Nikazar, M. and Mousavi Dehghani, S.A. (2005). “A correlation approach for prediction of crude oil viscosities”, Journal of Petroleum Science and Engineering, Vol. 47, pp. 163-174.

19. Haykin, S. (1994). Neural Networks: A Comprehensive Foundation, Prentice Hall.

20. Lekkas, D.F., Imrie, C.E. and Lees, M.J. (2001). “Improved non-linear transfer function and neural network methods of flow routing for real-time forecasting”, Journal of Hydroinformatics, Vol. 3, pp. 153-164.

21. Vapnik, V. (1998). Statistical Learning Theory, John Wiley, New York.

22. Li, J.Z., Liu, H.X., Yao, X.J., Liu, M.C., Hu, Z.D. and Fan, B.T. (2007). “Structure-activity relationship study of oxindole-based inhibitors of cyclin-dependent kinases based on least-squares support vector machines”, Analytica Chimica Acta, Vol. 581, pp. 333-342.
