
Prediction of coal grindability based on petrography, proximate and ultimate analysis using multiple regression and artificial neural network models

S. Chehreh Chelgani a, James C. Hower b, E. Jorjani a,*, Sh. Mesroghli a, A.H. Bagherieh a

a Department of Mining Engineering, Research and Science Campus, Islamic Azad University, Poonak, Hesarak, Tehran, Iran
b Center for Applied Energy Research, University of Kentucky, 2540 Research Park Drive, Lexington, KY 40511, USA
ARTICLE INFO
Article history:
Received 27 April 2007
Received in revised form 8 June 2007
Accepted 22 June 2007
ABSTRACT

The effects of proximate and ultimate analysis, maceral content, and coal rank (R_max) on the Hardgrove Grindability Index (HGI) have been investigated for a wide range of Kentucky coal samples, with calorific values from 4320 to 14,960 BTU/lb (10.05 to 34.80 MJ/kg), using multivariable regression and artificial neural network (ANN) methods. The stepwise least-squares mathematical method shows that linear relationships between HGI and the input sets (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max achieve correlation coefficients (R²) of 0.77, 0.75, and 0.81, respectively. The ANN, which adequately recognized the characteristics of the coal samples, can predict HGI with correlation coefficients of 0.89, 0.89, and 0.95, respectively, in the testing process. It was determined that ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max are the best predictors for the estimation of HGI by both multivariable regression (R² = 0.81) and artificial neural network methods (R² = 0.95). The ANN-based prediction method, as used in this paper, can be further employed as a reliable and accurate method for Hardgrove Grindability Index prediction.

© 2007 Elsevier B.V. All rights reserved.
Keywords:
Hardgrove grindability index
Coal petrography
Coal rank
Ultimate and proximate analysis
Artificial neural network
1. Introduction
Grindability of coal is an important technological parameter in
assessing the relative hardness of coals of varying ranks and
grades during comminution [1]. This is usually determined by the Hardgrove Grindability Index (HGI), which is of great interest since it is used as a predictive tool to determine the performance capacity of industrial pulverizers in power station boilers [2]. HGI reflects the coal hardness, tenacity, and fracture and is directly related to the coal rank, megascopic coal lithology, microscopic maceral associations, and the type and distribution of minerals [3]. Grinding properties are important in mining applications since lower-HGI (harder to grind) lithotypes will require a greater energy input [4–6].
Although the HGI testing device is not costly, the measuring procedure to obtain an HGI value is time consuming. Therefore, some researchers have investigated the prediction of HGI based on proximate analysis, petrography, and vitrinite maximum reflectance using regression [7–10].
An artificial neural network (ANN) is an empirical modeling tool whose behavior is analogous to that of biological neural structures [11]. Neural networks are powerful tools that can identify underlying highly complex relationships from input–output data alone [12]. Over the last 10 years, artificial neural networks (ANNs), and, in particular, feed-forward artificial neural networks (FANNs), have been extensively studied as process models, and their use in industry has been growing rapidly [13].
* Corresponding author. Tel.: +98 912 1776737; fax: +98 21 44817194. E-mail address: esjorjani@yahoo.com (E. Jorjani).
doi:10.1016/j.fuproc.2007.06.004
Li et al. [14] discussed neural network analyses using 67 coals covering a wide rank range of coal quality for the prediction of HGI on the basis of the proximate analysis. A problem with their analysis was the use of a rank range which spanned the reversal of HGI values in the medium volatile bituminous rank range. Bagherieh et al. [15] used vitrinite, inertinite, liptinite, R_max, fusinite, ash, and sulfur analyses on 195 sets of data and improved the ANN prediction to R² = 0.92 for the testing data.
The aim of the present work is to assess the properties of more than 600 Kentucky coals with reference to HGI and its possible variation with respect to vitrinite maximum reflectance, proximate and ultimate analysis, and petrography, using multivariable regression (SPSS software package), and to improve the results with an artificial neural network (MATLAB software package).
This work attempts to answer the following questions: (a) Is there a suitable multivariable relationship between vitrinite maximum reflectance, petrography, proximate and ultimate analysis, and HGI for a wide range of Kentucky coals? (b) Can the correlation of predicted HGI with actual measured HGI be improved by using an artificial neural network?
2. Experimental data
A mathematical model requires a comprehensive database covering a wide variety of coal types; such a model will then be capable of predicting HGI with a high degree of accuracy. The data used to test the proposed approaches are from studies conducted at the University of Kentucky Center for Applied Energy Research. A total of more than 600 sets of data were used.
3. Results and discussion
3.1. Multivariable correlation of HGI with macerals, R_max, proximate and ultimate analysis
3.1.1. Proximate analysis
Sengupta [1] examined the relation between the proximate analysis parameters (moisture, ash, volatile matter, and fixed carbon) and HGI and found a nonlinear second-order regression equation with a correlation coefficient (r) of 0.93. A problem with that analysis was the use of all four parameters: fixed carbon, moisture, volatile matter, and ash. It is not necessary to use all four parameters since, by definition, they form a closed system, adding to 100% [16].
In the current study, it was found that the use of moisture, ash, volatile matter, and total sulfur achieves the best results in predicting HGI. The ranges of the input variables for HGI prediction for the 633 Kentucky samples are shown in Table 1. By a least-squares mathematical method, the correlation coefficients of moisture (M), ash (A), volatile matter (V), and total sulfur with HGI were determined to be +0.184, +0.107, -0.160, and +0.619, respectively. The results show that higher moisture and total sulfur contents in coal result in higher HGI, whereas higher volatile matter content results in lower HGI.
The following equation resulted between HGI and the proximate analysis:
HGI = 102.69 + 4.227 S_total - 1.634 V - 0.569 A + 0.237 M,  R² = 0.77.   (1)
The distribution of the difference between HGI predicted from Eq. (1) and the actual determined HGI is shown in Fig. 1.
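For illustration only, the following minimal Python sketch evaluates the linear form of Eq. (1) for a single hypothetical analysis; the sample values are placeholders and are not taken from the data set, and the coefficients and signs are those of Eq. (1) as reconstructed above.

def hgi_from_proximate(moisture, ash, volatile_matter, total_sulfur):
    # Linear form of Eq. (1); all inputs in wt.% on an as-determined basis
    return (102.69
            + 4.227 * total_sulfur
            - 1.634 * volatile_matter
            - 0.569 * ash
            + 0.237 * moisture)

# Hypothetical analysis (wt.%), chosen only to show the calculation
print(hgi_from_proximate(moisture=3.0, ash=9.5, volatile_matter=35.0, total_sulfur=0.9))  # about 44.6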
3.1.2. Ultimate analysis and moisture
Vuthaluru et al. [17] studied the effects of moisture and coal blending on HGI for Collie coal of Western Australia, finding a significant effect of moisture content on HGI.
In the current study, the best correlation was found between ln S_total, ln ((oxygen + nitrogen)/carbon) (ln ((O+N)/C)), hydrogen (H), ash (A), and moisture (M) with HGI. By a least-squares mathematical method, the correlation coefficients of ln S_total, ln ((O+N)/C), and H with HGI were determined to be +0.662, +0.198, and -0.263, respectively. The results show that higher hydrogen content in coal results in lower HGI and higher ln S_total results in higher HGI.
Table 1 – The ranges of variables in coal samples (as determined)

Variable           Min    Max    Mean   Standard deviation
Moisture           0.80   13.2   3.95   2.66
Ash                0.64   59.8   10.3   8.55
Volatile matter    16.6   46.4   34.8   3.92
Total sulfur       0.30   7.60   1.84   1.56
Carbon             28.9   83.5   70.8   8.44
Hydrogen           2.48   6.05   5.15   0.52
Nitrogen           0.09   2.34   1.49   0.24
Oxygen             0.00   20.6   10.4   3.14
Resinite           0.00   7.8    0.64   0.64
Exinite            0.6    52.9   6.27   4.36
Macrinite          0.0    12.5   0.25   0.81
Micrinite          0.0    34.9   2.86   2.63
Semifusinite       0.0    47.1   5.52   5.70
R_max              0.4    1.1    0.87   0.16
Fig. 1 – Distribution of the difference between actual HGI and estimated HGI (Eq. (1)).
The following equation resulted between HGI and ultimate
analysis:
HGI = 77.162 + 3.994 ln S_total - 10.920 H + 1.904 M - 0.424 A - 11.765 ln ((O+N)/C),  R² = 0.75.   (2)
The distribution of the difference between HGI predicted from Eq. (2) and the actual determined HGI is shown in Fig. 2.
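The coefficients in Eqs. (1), (2), and (4) were obtained by stepwise least-squares fitting in SPSS. As a rough illustration only, the NumPy sketch below performs an ordinary least-squares fit on the transformed predictors of input set (b); the data are randomly generated placeholders rather than the Kentucky measurements, so the resulting coefficients carry no physical meaning.

import numpy as np

rng = np.random.default_rng(0)
n = 200                              # synthetic sample count (placeholder)

# Hypothetical, randomly generated analyses (placeholders, not the coal data)
S = rng.uniform(0.3, 7.6, n)         # total sulfur, wt.%
H = rng.uniform(2.5, 6.0, n)         # hydrogen, wt.%
M = rng.uniform(0.8, 13.0, n)        # moisture, wt.%
A = rng.uniform(1.0, 60.0, n)        # ash, wt.%
ONC = rng.uniform(0.08, 0.40, n)     # (oxygen + nitrogen)/carbon ratio
hgi = rng.uniform(30.0, 70.0, n)     # stand-in response values

# Design matrix: intercept plus the transformed predictors of model (b)
X = np.column_stack([np.ones(n), np.log(S), H, M, A, np.log(ONC)])

# Ordinary least-squares solution, analogous in form to the fit behind Eq. (2)
coef, *_ = np.linalg.lstsq(X, hgi, rcond=None)
print(coef)                          # intercept followed by five coefficients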
3.1.3. Petrography and R_max
The Hardgrove grindability index is primarily a function of the maceral composition, more precisely the mix of macerals. A greater amount of liptinite macerals such as sporinite, cutinite, resinite, and alginite (from spores, leaf cuticles, resins, and algae, respectively), particularly in combination with finely dispersed inertinite macerals, can result in a lower grindability index [18]. HGI is not simply a function of the maceral content, though. Through the rank range present through most of the Central Appalachians, HGI will increase with an increase in rank. The influence of mineral matter on HGI is also complex [18].
The relationship between HGI and coal petrography was
studied by Hsieh [19], Chandra and Maitra [20], Hower et al. [7],
Hower and Wild [8], Hower [9], and Trimble and Hower [10].
Trimble and Hower evaluated the influence of macerals and microlithotypes on HGI and on pulverizer performance in different reflectance ranges [10].
Hower and Wild examined 656 Kentucky coal samples to determine the relationship between proximate and ultimate analysis, petrography, and vitrinite maximum reflectance and HGI for both eastern and western Kentucky. For eastern Kentucky, the subject of the investigation in this paper, they found that HGI could be predicted by the following equation [8]:
HGI = 37.41 - 10.22 ln (liptinite) + 28.18 R_max S_total,  R² = 0.64.   (3)
In the present work, macerals and R_max were used as inputs to the SPSS software, and it was found that ln (exinite), semifusinite (SF), micrinite (MI), macrinite (MA), resinite (R), and R_max are the variables that are the best constituents of the multivariable regression. The ranges of the petrographic components for the Kentucky samples are shown in Table 1.
By a least-squares mathematical method, the correlation coefficients of ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max with HGI are -0.814, -0.360, +0.588, -0.090, -0.448, and -0.116, respectively. The results show that an increase in the ln (exinite), semifusinite, and resinite contents in coal can decrease HGI, whereas an increase in micrinite results in higher HGI.
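The single-variable correlation coefficients quoted in this section are Pearson correlations between each (possibly log-transformed) variable and HGI. The short Python fragment below shows that calculation on hypothetical arrays; the values are placeholders, not the measured Kentucky samples.

import numpy as np

rng = np.random.default_rng(1)
exinite = rng.uniform(0.6, 52.9, 50)   # hypothetical exinite contents
hgi = rng.uniform(30.0, 70.0, 50)      # hypothetical HGI values

# Pearson correlation of ln(exinite) with HGI, as reported for the real data
r = np.corrcoef(np.log(exinite), hgi)[0, 1]
print(round(r, 3))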
An equation relating the mentioned parameters to HGI can be written as follows:
HGI = 48.175 - 7.679 ln (Ex) + 13.269 R_max - 0.137 SF + 0.584 MI - 1.237 MA - 1.171 R,  R² = 0.81.   (4)
The distribution of difference between HGI predicted from
Eq. (4) and actual determined amounts of HGI is shown in
Fig. 3.
3.2. Artificial neural network
Neural networks can be seen as a legitimate part of statistics
that fits snugly in the niche between parametric and non-
parametric methods [21]. They are non-parametric, since they
generally do not require the specification of explicit process
models, but are not quite as unstructured as some statistical
methods in that they adhere to a general class of models. In
this context, neural networks have been used to extend, rather
than replace, regression models, principal component analysis
[22,23], principal curves [24], partial least squares methods [25],
as well as the visualization of process data in several major
ways, to name but a few. In addition, the argument that neural
networks are really highly parallelized neurocomputers or
hardware devices and should therefore be distinguished from
Fig. 2 – Distribution of the difference between actual HGI and estimated HGI (Eq. (2)).

Fig. 3 – Distribution of the difference between actual HGI and estimated HGI (Eq. (4)).
statistical or other pattern recognition algorithms is not entirely
convincing. In the vast majority of cases neural networks are
simulated on single processor machines. There is no reason
why other methods cannot also be simulated or executed in a
similar way (and are indeed) [21].
Artificial neural networks (ANNs) are simplified systems simulating the intelligent behavior exhibited by animals by mimicking the types of physical connections occurring in their brains [26]. Derived from their biological counterparts, ANNs are based on the concept that a highly interconnected system of simple processing elements (also called nodes or neurons) can learn complex nonlinear interrelationships existing between the input and output variables of a data set [27].
The main advantage of an ANN is the ability to model a problem from examples (i.e., data driven) rather than by describing it analytically. ANNs are also very powerful for representing complex nonlinear systems effectively, and they can be regarded as a nonlinear statistical identification technique [11].
To develop a nonlinear ANN model of a system, a feed-forward architecture, namely the multilayer perceptron (MLP), is most commonly used. This network usually consists of a hierarchical structure of three layers described as input, hidden, and output layers, comprising I, J, and K processing nodes, respectively. At times, two hidden layers (Fig. 4) are used between the input and output layers of the network. Each node in the input layer is linked to all the nodes in the hidden layer using weighted {w_ij} connections. Similar connections exist between the hidden and output layers, as well as between the hidden layer-I and hidden layer-II nodes [26]. Feed-forward networks consist of N layers using the dot product weight function, net-sum net input function, and the specified transfer functions [28]. The first layer has weights coming from the input. Each subsequent layer has weights coming from the previous layer. All layers have biases. The last layer is the network output [28].
Fig. 4 – FANN architecture with two hidden layers.
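As a schematic of the forward pass just described and of the two-hidden-layer architecture in Fig. 4, the Python sketch below propagates one input vector through two tanh hidden layers and a linear output node. The node counts, weights, and input are arbitrary placeholders, not the trained networks of Table 2.

import numpy as np

def tansig(x):
    # Hyperbolic-tangent transfer function, a common choice for MLP hidden layers
    return np.tanh(x)

def forward(x, weights, biases):
    # Dot-product net input plus bias at each layer, then the transfer function
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = tansig(W @ a + b)          # hidden layer-I, then hidden layer-II
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ a + b_out           # linear output layer (the network output)

rng = np.random.default_rng(0)
I, J, K = 6, 3, 5                      # illustrative node counts only
weights = [rng.standard_normal((J, I)),
           rng.standard_normal((K, J)),
           rng.standard_normal((1, K))]
biases = [rng.standard_normal(J), rng.standard_normal(K), rng.standard_normal(1)]

x = rng.standard_normal(I)             # one (already standardized) input pattern
print(forward(x, weights, biases))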
Table 2 – Details of ANN-based HGI models

Model I (as determined). Inputs: moisture, total sulfur, volatile matter, ash. Training set size: 400; test set size: 232; I = 4, J = 12.
Model II (as determined). Inputs: carbon, hydrogen, oxygen + nitrogen, ln (S_total), moisture. Training set size: 400; test set size: 200; I = 5, J = 12.
Model III (as determined). Inputs: resinite, micrinite, macrinite, ln (exinite), semifusinite, R_max. Training set size: 400; test set size: 201; I = 6, J = 3, K = 5.

I = No. of input nodes; J = No. of nodes in the first hidden layer; K = No. of nodes in the second hidden layer.
Table 3 – Statistical analysis of HGI generalization performance of the ANN-based models

Model   Correlation coefficient (train set)   Correlation coefficient (test set)
I       0.82                                  0.89
II      0.81                                  0.89
III     0.86                                  0.95
Back propagation can train multilayer feed-forward net-
works with differentiable transfer functions to perform
function approximation, pattern association, and pattern
classification. The term back propagation refers to the process
by which derivatives of network error, with respect to network
weights and biases, can be computed. This process can be
used with a number of different optimization strategies [28].
However, the numbers of nodes (J, K) in the hidden layers are adjustable parameters, whose magnitudes are governed by issues such as the desired prediction accuracy and generalization performance of the ANN model. In order for the MLP network to accurately approximate the nonlinear relationship existing between its inputs and outputs, it is trained such that a pre-specified error function is minimized. This training procedure essentially aims at obtaining an optimal set of network connection weights that minimizes the pre-specified error function [29].
In this study, two ANN models (models I and II) were developed with one hidden layer, and the third (Model III) with two hidden layers, in the MLP architecture, with training by the error back-propagation (EBP) algorithm (Table 2). According to Eqs. (1), (2), and (4), the selected variables were determined to be the best variables for the prediction of HGI; therefore, these variables were used as inputs to the ANN for the improvement of HGI prediction.
Neural network training can be made more efficient by certain pre-processing steps. In the present work, all inputs (before feeding to the network) and output data (in models I and III) in the training phase were scaled to a mean of 0 and a standard deviation of 1, using the mean and standard deviation:

pn = (Ap - mean(Ap)) / std(Ap),   (5)

where Ap is the actual parameter, mean(Ap) is the mean of the actual parameters, std(Ap) is the standard deviation of the actual parameters, and pn is the normalized parameter (input) [28].
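A minimal Python equivalent of the pre-processing step of Eq. (5), applied to placeholder values standing in for one network variable:

import numpy as np

def standardize(ap):
    # Eq. (5): subtract the mean and divide by the standard deviation of the raw values
    ap = np.asarray(ap, dtype=float)
    return (ap - ap.mean()) / ap.std()

print(standardize([46.0, 52.0, 39.0, 61.0]))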
While the training set was used in the EBP algorithm-based iterative minimization of the error, the test set was used after each training iteration to assess the generalization ability of the MLP model.
The prediction and generalization performances of ANN models I, II, and III were compared with the results of Eqs. (1), (2), and (4), respectively. The results are shown in Table 3. The training process was stopped after 3000 epochs for models I and II and 5000 epochs for Model III. The performance function used is the mean square error (MSE), the average squared error between the network-predicted outputs and the target outputs, which was 0.18, 6.47, and 0.14 for the training data of models I to III, respectively.
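To make the training setup concrete, the sketch below fits a two-hidden-layer feed-forward regressor with a mean-squared-error objective and reports performance on a held-out test set. It uses scikit-learn's MLPRegressor purely as a stand-in for the MATLAB toolbox and EBP training actually used here; the data, layer sizes, and iteration limit are placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical standardized inputs and targets (placeholders for the coal data)
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 6))
y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Two hidden layers with illustrative node counts, trained by back-propagation (SGD)
net = MLPRegressor(hidden_layer_sizes=(3, 5), activation='tanh', solver='sgd',
                   learning_rate_init=0.01, max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print('train MSE:', mean_squared_error(y_train, net.predict(X_train)))
print('test R^2:', r2_score(y_test, net.predict(X_test)))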
Figs. 5–7 and 8(a,b,c) show the predicted data using the FANN versus the actual data in the testing process. The distributions of the difference between the HGI calculated by the described ANN procedures and the actual determined HGI are shown in Figs. 9–11. The above-described results suggest that ANNs, owing to their excellent nonlinear modeling ability, are a better alternative to the linear models for the prediction of the HGI of coals.
4. Technical considerations
According to Eq. (1), which presents the relation of HGI to moisture, volatile matter, ash, and total sulfur for 632 coal samples, the correlation coefficient between the regression-estimated HGI and the actual determined HGI is R² = 0.77. Fig. 5 shows a better correlation coefficient (R² = 0.89) than the regression for estimating HGI with the test data sets using the FANN, for which 400 data sets were used for training and 232 data sets were used for testing.

Fig. 5 – Predicted HGI by neural network versus actual measured HGI in the testing process (Model I).

Fig. 6 – Predicted HGI by neural network versus actual measured HGI in the testing process (Model II).

Fig. 7 – Predicted HGI by neural network versus actual measured HGI in the testing process (Model III).

In work related to this study, Li et al. [14] applied neural network analyses, a generalized regression neural network (GRNN), using only 67 coal samples, with 61 data sets for training and six data sets for testing, in the prediction of HGI on the basis of the proximate analysis. As noted above, their study was flawed because of the use of coals on both sides of the medium volatile bituminous reversal of HGI. Also, Sengupta [1] examined the relation between proximate analyses and HGI and found a correlation coefficient (r) of 0.93. The problem with that work was the use of all four parameters (fixed carbon, moisture, volatile matter, and ash), which form a closed system, adding to 100% [16]. As a result, in the present work, the interrelationship between coal properties was considered, achieving a higher correlation (R² = 0.89) and avoiding the problems that were mentioned in the previous works.
In Eq. (2), the relation of HGI to ln S_total, hydrogen, moisture, ash, and ln ((oxygen + nitrogen)/carbon) was presented. To our knowledge, this is the first time that the mentioned parameters have been used to predict HGI using multivariable regression and ANNs (400 and 200 data sets were used for training and testing, respectively). The high correlation coefficient of R² = 0.89 for HGI prediction by the ANN is evidence that the proposed neural network model can accurately estimate HGI with the ultimate analysis and moisture as the predictors.
In Eq. (4), which presents the relation of HGI to ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max for 601 coal samples, the correlation coefficient between the regression-estimated HGI and the actual determined HGI is R² = 0.81.
In a related work, Hower and Wild [8] studied the relationship between sulfur, petrography, and vitrinite maximum reflectance and HGI for eastern Kentucky coals and found a correlation of 0.64, for which liptinite, reflectance, and sulfur emerged as significant predictors. In this work, a better correlation coefficient (0.81) was achieved with a linear equation in which ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max were the predictors.
Fig. 7 shows a better correlation coefficient (R² = 0.95) than the regression for estimating HGI with the test data sets using the FANN, in which 400 data sets were used for training and 201 data sets were used for testing. Bagherieh et al. [15] applied generalized regression neural network analyses using 195 coal samples, with 148 data sets for training and 33 data sets for testing, in the prediction of HGI on the basis of the petrography. The correlation coefficient (R²) of the predicted HGI with the actual determined HGI was 0.92 for the testing data. In the current work, a wider range of coal samples (201 data sets) was used for testing, and the results were improved by the FANN to R² = 0.95, which is the highest correlation coefficient reported to date.

Fig. 8 – Graphical comparison of experimental HGIs with those estimated by ANN model-I (panel a), ANN model-II (panel b), and ANN model-III (panel c).

Fig. 9 – Distribution of the difference between actual HGI and that estimated by the neural network (Model I).

Fig. 10 – Distribution of the difference between actual HGI and that estimated by the neural network (Model II).
According to the above results, it can be concluded that the proposed multiple regression formulas (Eqs. (1), (2), and (4)) and the ANN procedures yield significant predictions of HGI. As a comparison between the inputs to the models, the coal macerals and R_max are better predictors than the others in both the regression and ANN procedures (Table 3).
5. Conclusions
Three input sets, (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max, were found to be the best constituents of the multivariable regressions for the prediction of HGI.
Higher moisture content in coal can result in higher HGI, and higher volatile matter content in coal results in lower HGI. No other parameters of set (a) were significant.

An increase in the hydrogen content of coal can result in lower HGI, and higher ln (S_total) results in higher HGI.

Higher ln (exinite), semifusinite, and resinite contents in coal decrease HGI; an increase in micrinite results in higher HGI. No other macerals were significant.
The proposed multivariable equations performed as follows:

Eq. (1), with the moisture, ash, volatile matter, and total sulfur input set, achieved R² = 0.77.

Eq. (2), with the ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture input set, resulted in R² = 0.75.

Eq. (4), with the ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max input set, resulted in the best regression correlation reported to date (R² = 0.81).
The FANN procedures used to improve the correlation coefficients between the predicted and actual determined HGIs, with resulting R² values of 0.89, 0.89, and 0.95 for input sets (a), (b), and (c), respectively, had not been previously reported.

ln (exinite), semifusinite, micrinite, macrinite, resinite, and R_max are the best predictors for the estimation of HGI by both the multivariable regression and the artificial neural network methods.

Fig. 11 – Distribution of the difference between actual HGI and that estimated by the neural network (Model III).
REFERENCES

[1] A.N. Sengupta, An assessment of grindability index of coal, Fuel Processing Technology 76 (1) (2002) 1–10.
[2] X. Sun, Combustion Experiment Technology and Method for Coal Fired Furnace, China Electricity and Power Press, Beijing, 2001.
[3] S. Ural, M. Akyildiz, Studies of relationship between mineral matter and grinding properties for low-rank coal, International Journal of Coal Geology 60 (2004) 81–84.
[4] M.-Th. Mackrowsky, C. Abramski, Kohlenpetrographische Untersuchungsmethoden und ihre praktische Anwendung, Feuerungstechnik 31 (3) (1943) 49–64.
[5] J.T. Peters, N. Schapiro, R.J. Gray, Know your coal, Transactions of the American Institute of Mining and Metallurgical Engineers 223 (1962) 16.
[6] J.C. Hower, G.T. Lineberry, The interface of coal lithology and coal cutting: study of breakage characteristics of selected Kentucky coals, Journal of Coal Quality 7 (1988) 88–95.
[7] J.C. Hower, A.M. Graese, J.G. Klapheke, Influence of microlithotype composition on Hardgrove Grindability Index for selected Kentucky coals, International Journal of Coal Geology 7 (1987) 227–244.
[8] J.C. Hower, G.D. Wild, Relationships between Hardgrove Grindability Index and petrographic composition for high-volatile bituminous coals from Kentucky, Journal of Coal Quality 7 (1988) 122–126.
[9] J.C. Hower, Interrelationship of coal grinding properties and coal petrology, Minerals and Metallurgical Processing 15 (3) (1998) 116.
[10] A.S. Trimble, J.C. Hower, Studies of relationship between coal petrology and grinding properties, International Journal of Coal Geology 54 (2002) 253–260.
[11] H.M. Yao, H.B. Vuthaluru, M.O. Tade, D. Djukanovic, Artificial neural network-based prediction of hydrogen content of coal in power station boilers, Fuel 84 (2005) 1535–1542.
[12] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, USA, 1999.
[13] L.H. Ungar, E.J. Hartman, J.D. Keeler, G.D. Martin, Process modelling and control using neural networks, American Institute of Chemical Engineers Symposium Series 92 (1996) 57–66.
[14] P. Li, Y. Xiong, D. Yu, X. Sun, Prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 84 (2005) 2384–2388.
[15] A.H. Bagherieh, J.C. Hower, A.R. Bagherieh, E. Jorjani, Studies of the relationship between petrography and grindability for Kentucky coals using artificial neural network, International Journal of Coal Geology (in press).
[16] J.C. Hower, Letter to the editor, discussion: prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 85 (2006) 1307–1308.
[17] H.B. Vuthaluru, R.J. Brooke, D.K. Zhang, H.M. Yan, Effect of moisture and coal blending on Hardgrove Grindability Index of Western Australian coal, Fuel Processing Technology 81 (2003) 67–76.
[18] J.C. Hower, C.F. Eble, Coal quality and coal utilization, Energy Minerals Division Hourglass 30 (7) (February 1996) 18.
[19] S.-S. Hsieh, Effects of bulk-components on the grindability of coals, Ph.D. dissertation, The Pennsylvania State University, University Park, 1976.
[20] U. Chandra, A. Maitra, A study on the effect of vitrinite content on coal pulverization and preparation, Journal of Indian Academy of Geosciences 19 (2) (1976) 9.
[21] C. Aldrich, Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods, Elsevier, 2002, p. 5.
[22] M.A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal 37 (2) (1991) 233–243.
[23] M.A. Kramer, Autoassociative neural networks, Computers and Chemical Engineering 16 (4) (1992) 313–328.
[24] D. Dong, T.J. McAvoy, Nonlinear principal component analysis based on principal curves and neural networks, Computers and Chemical Engineering 20 (1996) 65–78.
[25] S. Qin, T.J. McAvoy, Nonlinear PLS modeling using neural networks, Computers and Chemical Engineering 16 (1992) 379–391.
[26] S.U. Patel, B.J. Kumar, Y.P. Badhe, B.K. Sharma, S. Saha, S. Biswas, A. Chaudhury, S.S. Tambe, B.D. Kulkarni, Estimation of gross calorific value of coals using artificial neural networks, Fuel 86 (2007) 334–344.
[27] S.S. Tambe, B.D. Kulkarni, P.B. Deshpande, Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical and Biological Sciences, Simulation and Advanced Controls, Louisville, KY, 1996.
[28] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB, Handbook, 2002.
[29] D. Rumelhart, G. Hinton, R. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533–536.