
Applying Neural Network in Hydrotreating Process

Raissa Maria Cotta Ferreira da Silva, Petróleo Brasileiro S.A., Brazil; Luciano Villanova de Oliveira, Petróleo Brasileiro S.A., Brazil; Joyce Stone de Souza Aires, Petróleo Brasileiro S.A., Brazil.

Abstract. Neural network technology is an approach for describing process data behavior using mathematical algorithms and statistical techniques. The use of neural networks for process modeling is increasing in several kinds of chemical industries. This paper comments on critical success factors, advantages and disadvantages of this methodology. Moreover, it presents some applications in the hydrotreating process of the petroleum refining industry. In feedstock hydrotreating, knowledge of the process characteristics regarding product property estimation, hydrogen chemical consumption and removal of contaminants (sulfur, nitrogen, aromatics) is very important for process optimization, product quality control and environmental protection. The neural network technique has been used to model the behavior of the hydrogen chemical consumption, the generation of light gas, the conversions of the hydrogenation of aromatic hydrocarbons (HDA), hydrodesulfurization (HDS) and hydrodenitrogenation (HDN) reactions, and product physical properties. Operating conditions and some relevant feedstock properties were selected as input variables. In addition, neural networks have been built to predict the cetane number and stability of feedstock and hydrogenated products. The models were developed with experimental data obtained in hydrogenation pilot plants from PETROBRAS. This paper presents a comparison between pilot plant data and estimated data.

1. Introduction
Neural network technology is used to describe the behavior of systems through mathematical algorithms and statistical techniques that try to mimic the human brain. The purpose of developing a neural network is to produce a tool that captures the essential relationships in the data. Artificial neural networks are computing tools composed of many interconnected elements called processing elements or neurons. A neuron performs a weighted summation of its input array and applies a non-linear transfer function to this summation to give an output. The output of a neuron can be connected to the inputs of other processing elements through weighted connections. Learning is the process of modifying the connection weights, i.e., the fitting of the model parameters. These weights are obtained through an optimization process whose objective is to minimize the differences between the predicted and the observed outputs. The trained network can then be used to estimate unknown output data: given the input data of a sample not included in the training set, the network calculates the corresponding output data.

Although a large variety of architectures exists, feed-forward architectures, in which data propagate only in the forward direction, are the most useful for steady-state modeling. At present, the literature is dominated by the supervised multilayer feed-forward architecture trained by backpropagation[1]. Supervised multilayer feed-forward networks built with alternative approaches, such as the cascade correlation architecture, are rarely considered. The cascade correlation algorithm[2] is an option that attempts to reduce the complexity of the network construction process by training and designing the network simultaneously. The procedure begins with a minimal network that has some inputs and one or more output nodes but no hidden nodes. Hidden neurons are then added to the network one by one, producing a multilayer structure. Each new hidden neuron receives a connection from each of the input nodes and also from each preexisting hidden neuron; in other words, this mode of construction adds hidden units one at a time, always connecting all the previous units to the current unit. The procedure of adding new hidden neurons continues until satisfactory performance is attained.

In the refining sector, there is growing pressure to produce higher-quality and cleaner products at lower cost. Refinery processes must therefore maintain great flexibility in their manufacturing operations in the face of tighter environmental requirements. In this sense, modeling, simulation and optimization are increasingly viewed as essential activities and as a significant competitive differentiator among refining companies in meeting these challenges.
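To make the neuron computation described above concrete, the minimal sketch below (in Python, with randomly chosen weights and a sigmoid chosen here only as an example of a non-linear transfer function; it is not the implementation used in this work) propagates an input vector through one hidden layer of a feed-forward network.

```python
import numpy as np

def sigmoid(x):
    # Non-linear transfer function applied to the weighted summation.
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single-hidden-layer feed-forward network.

    Each hidden neuron computes a weighted sum of the inputs plus a bias and
    passes it through the transfer function; the output layer repeats the
    operation on the hidden activations.
    """
    hidden = sigmoid(w_hidden @ inputs + b_hidden)
    return sigmoid(w_out @ hidden + b_out)

# Hypothetical example: 3 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])
w_h, b_h = rng.normal(size=(4, 3)), rng.normal(size=4)
w_o, b_o = rng.normal(size=(1, 4)), rng.normal(size=1)
print(feed_forward(x, w_h, b_h, w_o, b_o))
```

Training then consists of adjusting w_h, b_h, w_o and b_o so that the network outputs match the observed outputs over the training patterns.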

There is a variety of tools for building a model, using either on-line data sets, collected from real-time plant data, or off-line data sets, obtained from carefully conducted pilot plant tests. Depending on the intended end use, modeling approaches can vary extensively; they range from detailed representations of the unit based on first principles to empirical procedures. Normally, these models have associated optimizers, which are capable of predicting the most economic operating mode while respecting technical and commercial constraints. An important prerequisite for process optimization is the availability of a representative mathematical model. Process models based on first principles are developed from the fundamental laws of conservation of mass, energy and momentum and from other chemical engineering principles. However, this approach often leads to highly complicated nonlinear models that require simplifying assumptions for their solution. In many practical industrial situations, fundamental models are difficult to obtain because they are computationally time-consuming or because there is insufficient understanding of the process under consideration. For optimization purposes, which involve quite a high number of simulations, the model has to describe the system accurately and, at the same time, has to run fast. In this case, neural networks offer one of the most promising modeling tools.

2. Successful aspects in building neural networks


Some critical factors are in general very important to neural network projects, and plenty of information and guidance on this subject can be found in the literature. In fact, knowing them is the first step toward developing more reliable applications. Here, some of the most important issues are discussed in order to underline the procedures applied in this study.

The advantage of a neural network model over other empirical approaches will depend directly on the degree of non-linearity of the process[3]. It is convenient to make sure that the problem really requires a neural network; a neural network is likely to be the proper approach if the problem is genuinely non-linear[4]. The success in obtaining a reliable and robust network depends strongly on the choice of process variables that are relevant for the investigated system[3,4]. When data transformations are relevant, they must be considered in the study[5]. Process knowledge is essential for driving and validating each step of the model-building process. This is particularly important in the identification of the most pertinent process variables and their inter-relationships. It can also suggest more appropriate transformations or combinations of process variables and help in the detection of inconsistencies in the data set, which can lead to a lack of representative data[3,4,5,6,7]. Unfortunately, there are no rules for setting, in advance, the amount of data necessary to develop a neural network; one must rely on previous experience to decide whether there is enough data for training.

Knowledge of statistical techniques, such as multivariate data analysis, is an excellent preparation for appreciating the power and flexibility of neural networks. It is necessary to perform extensive data analysis beforehand to determine statistical correlations, main variables, data trends and outliers. It is not recommended to feed non-analyzed, non-preprocessed data to a neural network[4]. In this sense, statistics can be helpful for analyzing the data, which can lead to a better neural network. The quality of the information contained in the data is more important than the size of the data bank[7]. The data used for training must fairly represent the operating regions over which the neural network application is expected to perform; in addition, the data range and distribution must be adequate[3,4,5]. Neural networks do not extrapolate well into regions beyond those in which they have been trained. In fact, while the application is in use, it is wise to monitor the data to ensure that they remain within the boundaries of the training data; if not, the output of the application is doubtful[4] (a minimal range check of this kind is sketched below).

There are many different types of neural networks, and while a backpropagation neural network is a good general-purpose tool, an alternative one is often more suitable[4]. Although it has been reported that changes in network architecture have only limited impact on modeling accuracy[5], a more elegant use of this technology is to experiment with different types of architectures for a given problem. Sometimes, both the time required to train the neural networks and their maintenance cost are larger in problems that have multiple outputs and one neural network for each output. However, mapping multiple functions with a single neural network may not be convenient, since the network is forced to map more than one functional relationship within the same set of weights[4].
Normally, a single network mapping all the outputs can be used when the outputs come from the same physical or chemical information source; even in these cases, however, modeling the outputs with separate networks should also be tried.
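As an illustration of the range monitoring recommended above, the minimal sketch below (hypothetical function names and input values) flags new samples whose inputs fall outside the envelope of the training data, so that the corresponding network outputs can be treated as doubtful.

```python
import numpy as np

def training_bounds(x_train):
    """Column-wise minimum and maximum of the training inputs."""
    return x_train.min(axis=0), x_train.max(axis=0)

def within_training_range(x_new, lower, upper):
    """Flag new samples whose inputs fall inside the training envelope.

    Predictions for samples outside the envelope are extrapolations and
    should be treated as doubtful.
    """
    return np.all((x_new >= lower) & (x_new <= upper), axis=1)

# Hypothetical example: 3 input variables (e.g. pressure, temperature, LHSV).
x_train = np.array([[40.0, 340.0, 0.5],
                    [60.0, 380.0, 2.0],
                    [50.0, 360.0, 1.0]])
lo, hi = training_bounds(x_train)
x_new = np.array([[55.0, 370.0, 1.5],    # inside the training envelope
                  [70.0, 400.0, 3.0]])   # outside -> doubtful prediction
print(within_training_range(x_new, lo, hi))
```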

Neural networks first have to be trained with one set of data and then tested with another set, different from the one used in training, in order to assess their prediction capability. If the neural network is overtrained, the results on the test data will likely be unacceptable, leading to the conclusion that the neural network will not work adequately. A typical approach when developing a neural network is to keep an unused data set for final validation; its purpose is to provide an independent test of the neural network performance. Once the neural network has been trained and tested, if the results are not satisfactory, the developer may make some changes to the network topology and then retrain and test it again. The same data sets may be used for training and testing, or the data may be split into different random sets. In either case, the test data are used to modify the network architecture. Nevertheless, it is necessary to set aside a third data set that is not used during the development process[4].

The topology of the model has a significant effect on the training efficiency and the estimation accuracy. There is no rule for deciding on the number of processing elements; it is usually determined empirically through preliminary trial-and-error experimentation. However, it is recommended to use as few hidden units and training passes through the data as possible. The greater the number of neurons, the greater the number of input/output training patterns that must be used and the longer it takes to train the network. Generally, the use of only one hidden layer in backpropagation neural networks gives suitable results.
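The sketch below illustrates the three-way division of the data described above, using the 70/20/10 proportions later adopted with the NeuralSim package (Table I); the function name and example data are hypothetical.

```python
import numpy as np

def split_data(x, y, fractions=(0.7, 0.2, 0.1), seed=0):
    """Randomly split patterns into training, test and validation subsets.

    The training set fits the weights, the test set guides changes to the
    topology, and the validation set is held out for a final, independent
    check of the network performance.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    n = len(x)
    order = np.random.default_rng(seed).permutation(n)
    n_train = int(fractions[0] * n)
    n_test = int(fractions[1] * n)
    idx_train = order[:n_train]
    idx_test = order[n_train:n_train + n_test]
    idx_val = order[n_train + n_test:]
    return ((x[idx_train], y[idx_train]),
            (x[idx_test], y[idx_test]),
            (x[idx_val], y[idx_val]))

# Hypothetical example with 90 patterns and 5 input variables.
x = np.random.default_rng(1).normal(size=(90, 5))
y = x.sum(axis=1, keepdims=True)
train, test, val = split_data(x, y)
print(len(train[0]), len(test[0]), len(val[0]))   # 63 18 9
```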

3. Advantages and disadvantages of the methodology


Neural networks are able to learn any complex non-linear mapping. Their application seems to be a promising way to solve modeling problems in cases where, as a result of insufficient knowledge, the governing mechanism cannot be formulated[6]. On the other hand, neural networks lack a theoretical background concerning explanatory capabilities, so they are considered black boxes; in other words, the model itself is not particularly transparent[8]. Neural networks do not make prior assumptions about the probability distribution of the input-output mapping function[8]. In fact, their application demands that the data be distributed over the whole training range, and neural networks are extremely dependent on the quality and amount of available data. The selection of the network architecture and topology, as well as of its parameters, is still a trial-and-error matter: there is no explicit set of rules for selecting a suitable neural network or learning algorithm. Some commercial packages offer automatic selection of the topology according to previously established definitions, which should be varied along the study; in others it is necessary to define all the steps of the modeling procedure manually.

Nascimento et al.[3] presented an interesting approach to applying neural networks that illustrates the advantage of the execution speed of a neural network model, which has also been mentioned in other publications[7]. In the modeling step, the authors used a previously available phenomenological model to generate a large set of simulated data under different operating conditions; these simulated data were later used to train the neural network model. It was emphasized that, regardless of the origin of the neural network model obtained, two possible optimization approaches can be employed: implementing the neural network model in a conventional optimization method, or in an optimization procedure based on mapping all the solutions of the problem. The use of a neural network instead of the phenomenological model has the advantage of being comparatively faster in the simulation process[3,7]. In this way, even a detailed grid search can be carried out in reasonable time, as long as there are not too many variables being optimized, in which case problems of dimensionality arise[3].

Neural networks also have the advantage of being capable of carrying out process optimization directly through the inverse property, that is, the ability to solve the inverse problem. Thus, process optimization can be implemented by using the desired outputs to determine what inputs are required to achieve them. For meeting prescribed product specifications, this is a very powerful and useful operational tool. For example, a new product specification may be met in several ways, such as blending the feedstock with products of other processes, adjusting the process temperature, changing the residence time, or using suitable additives. Therefore, if it is considered necessary to blend the feedstock for operational stability, these variables are treated as the outputs of the network. The inversion of the neural network is then used to determine how to respond to changing market demands by providing suggestions on operational and control policies[9,10].
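The sketch below illustrates, in a simplified form, the grid-search use of a fast model mentioned above: a stand-in function plays the role of the trained neural network, and all combinations of candidate operating conditions are scanned to find the one whose predicted output is closest to a specified target. The surrogate function, variable names and target value are hypothetical.

```python
import itertools
import numpy as np

def surrogate_model(pressure, temperature, lhsv):
    # Stand-in for a trained neural network: any fast function of the
    # operating conditions returning a predicted output (e.g. a conversion).
    return 50.0 + 0.3 * pressure + 0.1 * temperature - 8.0 * lhsv

def grid_search_inverse(model, grids, target):
    """Scan all combinations of candidate inputs with the fast surrogate
    and return the combination whose prediction is closest to the target."""
    best_inputs, best_error = None, np.inf
    for combo in itertools.product(*grids):
        error = abs(model(*combo) - target)
        if error < best_error:
            best_inputs, best_error = combo, error
    return best_inputs, best_error

# Hypothetical candidate operating conditions and target conversion.
pressures = np.linspace(40.0, 80.0, 9)        # bar
temperatures = np.linspace(340.0, 390.0, 11)  # degrees C
lhsvs = np.linspace(0.5, 2.0, 7)              # 1/h
print(grid_search_inverse(surrogate_model,
                          (pressures, temperatures, lhsvs),
                          target=95.0))
```

Because each surrogate evaluation is cheap, the full grid can be scanned quickly, but the number of combinations grows rapidly with the number of variables being optimized, which is the dimensionality problem noted above.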

4. A brief review of applications in refining industry


Recently, the application of neural networks has become more popular in chemical engineering. More neural network applications are being implemented in oil refineries, as well as in many other industries. Neural networks have been successfully applied in chemistry, for correlating analytical method spectra with product properties; in catalysis, for determining relationships between catalyst structure and activity; and in process modeling, for estimating product yields and operating conditions and particularly for process control and fault diagnosis. The main reasons for their growing popularity are the lower cost compared with other technologies and the fact that they can overcome inherent complications of oil refinery process modeling: highly complex hardware, a broad range of feedstocks and operating conditions, and constantly changing product specifications.

In FCC modeling, the most widely used procedure is based on the lumping model. In an attempt to predict more information about the product distribution, the number of lumps was increased, but this did not overcome the limitations imposed by the additional computational effort and the need for a deeper understanding of the feed composition. On the other hand, the quantitative description of fluid bed behavior is very difficult and is usually expressed in terms of empirical correlations. McGreavy et al.[9] built backpropagation neural networks to predict each of the hydraulic parameters and another neural network to predict the product distribution. The network for predicting the product distribution was built using feedstock and catalyst properties and operating conditions as input variables. Two modes of operation of a hydrocracking unit processing Arabian light vacuum gas oil, producing maximum jet fuel and producing maximum middle distillate, were considered in the feedforward neural network application proposed by Elkamel et al.[10]. Two neural networks were built to predict the quality and yield of all products for each of the operating modes. The properties of the feedstock were used as inputs, and the output units corresponded to product yields and product properties. The authors concluded that feedforward neural networks could be successfully used to model the complex hydrocracking unit. According to the study, considering the difficulty and complexity of developing a phenomenological model of the hydrocracking unit based on the discrete or continuous lumping approach, neural networks can be an effective alternative.

Usually, the development of reaction kinetics is a highly complicated task, particularly for complex reacting systems such as multiphase systems, polymerization reactions or catalytic reactions. Sometimes the reactions occur competitively in such systems. In addition, the exact reaction mechanism is usually not known, owing to the difficulty of identifying and quantifying the intermediate compounds. Molga et al.[6] reported an application of neural networks to model the conversion rates of a heterogeneous oxidation reaction, the oxidation of 2-octanol with nitric acid. They concluded that the proposed approach based on neural networks is an efficient and accurate tool for solving modeling problems arising from the complex and unknown kinetics of the investigated reaction. Arai[11] presented an outline of recent advances in computational chemistry focusing on catalyst research and development.
Arai emphasized that the incorporation of empirical knowledge through artificial intelligence is one of the most important requirements for developing computational chemistry tools, and argued that the use of neural networks should be expanded. Hou et al.[12] noted that catalyst design largely depends on empirical and qualitative knowledge rather than on theoretical and quantitative relationships. In the design of multi-component catalysts, the synergistically generated catalytic functions have to be taken into consideration. This is especially the case for mixed oxide catalysts: although single-component oxides do not provide enough activity, mixed oxides can sometimes provide strong activity through a synergistic effect. The authors presented a backpropagation neural network applied to the design of a VSbWSn (P, K, Cr, Mo)/Al2O3/SiO2 catalyst for acrylonitrile synthesis from propane. The conversion of propane and the selectivity to acrylonitrile were calculated as functions of the catalyst components. The trained network was used as the objective subroutine of the optimization model, in order to choose the better catalyst components predicted by the optimization. The optimized results were tested and the model was modified; after repeating this procedure of learning, optimizing and testing, the predicted results agreed well with the experiments. Hattori[13] presented a similar approach to catalyst design, taking as an example the prediction of the catalytic performance of a series of lanthanide oxides in the oxidation of methane and butane.

Neural network technology has also proved to be a powerful alternative in product modeling. Yang et al.[14] reported a neural network to predict the cetane number of diesel fuel from its physical properties. A neural network with 8 inputs (density, viscosity, aniline point, and the initial (IBP), 10%, 50%, 90% and end (FBP) distillation temperatures by ASTM D86) was chosen as the best model. In another approach, Yang et al.[15] also used neural networks to correlate and predict the cetane number of diesel fuel from its chemical composition, determined by liquid chromatography (LC) and gas chromatography-mass spectrometry (GC-MS). In addition, van Leeuwen et al.[16] applied neural networks to the results of gas chromatographic analysis of 824 gasolines, each separated into 89 PIANO groups (paraffins, isoparaffins, aromatics, naphthenes, olefins), in order to model the relation between PIANO groups and the octane number of gasoline.

5. Describing the application in HDT process


In the application presented in this paper, two commercially available neural-network modeling software packages have been used: NeuralWare's Professional/II Plus and NeuralSim. The first modeling study was developed using the backpropagation neural network together with the gradient-descent learning rule offered by the Professional/II Plus package. The second study was developed using the cascade correlation architecture together with the adaptive gradient learning rule provided by the NeuralSim package. Transformations of the input-output variables were considered in the second case. This second study also used the variable selection option offered by the commercial package, which applies a genetic algorithm to search for good sets of input variables; for each candidate set, a logistic regression is applied to rank the subsets of inputs. The NeuralSim package differs from Professional/II Plus in that it automates several decisions, such as which and how many inputs the model should use, the network building process, and so on.

The hydrotreating data bank of middle distillates contains the results of various studies run in pilot plants at the PETROBRAS Research and Development Center (CENPES). The process modeling was performed with 90 hydrogenation pilot plant tests, obtained using a Ni-Mo/Al2O3 commercial catalyst and 17 different feedstocks. The tests were run in an isothermal reactor, with hydrogen and feed in the upflow direction in order to avoid distribution and catalyst wetting problems. Industrial plants operate in adiabatic mode, whereas the pilot plant operated in isothermal mode; therefore, in order to compare the performance of the two plants, it was necessary to use a weight average bed temperature (WABT) for the adiabatic reactor. The pilot plant tests were carried out changing only one variable at a time and keeping the others constant. The main variables of the hydrotreating process are the hydrogen partial pressure (PPH2), the liquid hourly space velocity (LHSV) and the weight average bed temperature (WABT). Table I shows a summary of the main procedures used, by type of application and commercial software package.

Using both commercial software packages, models were obtained for the prediction of the hydrogen chemical consumption and the conversions of the hydrogenation of aromatic hydrocarbons (HDA), hydrodesulfurization (HDS) and hydrodenitrogenation (HDN) reactions. In addition, neural networks were built to predict the difference between feedstock and hydrogenated product properties (ΔProperty = Feedstock Property − Hydrogenated Product Property). The properties considered in this study were the density @ 20/4 °C, the viscosities @ 20 and 50 °C, and the simulated distillation temperatures by the ASTM D-2887 method at 10, 30, 50, 70 and 90% weight recovery. Models were also built to predict the generation of light gas (C1, C2, C3 and C4, in weight percentage of feedstock) by applying the cascade correlation method. The input variables used in the process modeling were the three operating conditions and the feedstock properties. One network was built for each output variable, except in the case of the simulated distillation temperature differences between the feedstock and the product, for which a single network was enough to adjust all the output variables. This procedure was more appropriate because the temperatures in simulated distillation are correlated, unlike the other results.
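The minimal sketch below illustrates how the input/output patterns described above can be assembled: the inputs are the three operating conditions plus a feedstock property, and the target is the feedstock-minus-product property difference. The numerical values are taken from Table IV only for illustration; the code is a hypothetical sketch, not the data handling actually used in this work.

```python
import numpy as np

# Illustrative pilot plant test records: operating conditions, one feedstock
# property and the corresponding hydrogenated product property (values from
# Table IV, used here only as an example).
tests = [
    # PPH2 (bar), WABT (deg C), LHSV (1/h), feed density, product density
    {"pph2": 46.8, "wabt": 379.7, "lhsv": 2.0, "feed_d20": 0.8854, "prod_d20": 0.8730},
    {"pph2": 62.0, "wabt": 350.5, "lhsv": 1.0, "feed_d20": 0.9132, "prod_d20": 0.8934},
]

def build_patterns(records):
    """Inputs: operating conditions + feedstock property.
    Target: property difference (feedstock minus hydrogenated product)."""
    x = np.array([[r["pph2"], r["wabt"], r["lhsv"], r["feed_d20"]] for r in records])
    y = np.array([r["feed_d20"] - r["prod_d20"] for r in records])
    return x, y

x, y = build_patterns(tests)
print(x.shape, y)   # (2, 4) and the two density differences
```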
Specific neural networks were developed to determine the accelerated storage stability and to estimate the cetane number of diesel fuel, using the NeuralWare Professional/II Plus and NeuralSim commercial packages, respectively. In both cases, the input variables were the feedstock properties. During the building of the networks, various statistics were calculated in order to evaluate the model performance. The measures of performance were established as the linear correlation between the experimental target and the network output (R2), the average absolute error between the experimental result and the model output, the root mean square error between the experimental value and the network output, and so on. These measures were calculated for the complete data set; for the training, test and validation data subsets; and for data subsets built by type of feedstock.
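Assuming the experimental and predicted values are available as plain arrays, the sketch below shows one way of computing these performance measures, together with the intercept and slope of the calculated-versus-experimental fit used in Section 6; the example values are hypothetical.

```python
import numpy as np

def performance_measures(experimental, predicted):
    """Statistics used to evaluate a network: R2, average absolute error,
    root mean square error, and the intercept/slope of the least-squares
    line fitted to predicted versus experimental values (a perfect model
    gives intercept 0, slope 1 and R2 = 1)."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r2 = np.corrcoef(experimental, predicted)[0, 1] ** 2
    mae = np.mean(np.abs(experimental - predicted))
    rmse = np.sqrt(np.mean((experimental - predicted) ** 2))
    slope, intercept = np.polyfit(experimental, predicted, 1)
    return {"R2": r2, "MAE": mae, "RMSE": rmse,
            "intercept": intercept, "slope": slope}

# Hypothetical HDS conversion results (weight %): experimental vs predicted.
exp = [99.9, 98.1, 95.0, 92.5]
net = [97.6, 96.6, 94.1, 93.0]
print(performance_measures(exp, net))
```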

Table I  Summary of the Main Procedures Used for the Process and Product Neural Networks

Commercial Software Package | Transformation of variables | Selection of variables | Training data set | Test data set | Validation data set
Professional/II Plus | No | No | fixed random data sets, 80% of all | fixed random data sets, 20% of all | No
NeuralSim | Yes | Yes | different random data sets, 70% of all | different random data sets, 20% of all | different random data sets, 10% of all

The predictive capacity of the neural networks for a new feedstock depends strongly on the similarity between the new feedstock and the data set used in the neural network modeling. A hierarchical clustering method[17] was applied to the feedstocks, using the city-block (Manhattan) distance as similarity measure and the unweighted pair-group average linkage rule, in order to evaluate the reliability of the neural model predictions. In most practical applications of cluster analysis, the investigator has to know enough about the problem to select an appropriate final configuration. In this work, the input variables of the trained neural networks were used as the input variables of the cluster analysis. In fact, the input variables of the networks have different weights in the estimation of the output variables; the variables in the cluster analysis were nevertheless used in non-weighted form, because it is difficult to stipulate their weights.
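A minimal sketch of this clustering step is given below, assuming SciPy is acceptable for the purpose; the feedstock property matrix is hypothetical, while the distance metric and linkage rule follow the choices described above (city-block distance, unweighted pair-group average linkage).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical feedstock property matrix (rows: feedstocks, columns: the
# same input variables used by the trained networks, in non-weighted form).
feedstocks = np.array([
    [0.885, 13.9, 249.0],
    [0.913, 17.9, 247.0],
    [0.890, 14.5, 252.0],
    [0.905, 16.8, 245.0],
])
new_feed = np.array([[0.887, 14.1, 250.0]])

# City-block (Manhattan) distances and unweighted pair-group average linkage.
data = np.vstack([feedstocks, new_feed])
distances = pdist(data, metric="cityblock")
tree = linkage(distances, method="average")

# Cut the tree into two groups to see whether the new feedstock joins the
# cluster of feedstocks already represented in the data bank.
print(fcluster(tree, t=2, criterion="maxclust"))
```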

6. Analysis of performance of the neural networks


An analysis of the performance of the models was conducted after building the neural networks, using a data set not included in the data sets for training and validation of the networks. For this task, the results of 26 hydrogenation tests were employed, obtained with Ni-Mo/Al2O3 commercial catalysts and 5 different feedstocks. The performance of the neural networks is evaluated by comparing the experimental targets with the calculated results. The intercept, slope and linear correlation coefficient (R2) are measures of the quality of the fit in the plot of calculated versus experimental values, i.e., if the predicted results are in perfect agreement with the measured results, the intercept is 0, the slope is 1 and R2 = 1. Tables II and III show these performance measures for the neural networks built with the NeuralWare Professional/II Plus and NeuralSim commercial packages, respectively. The results for HDS conversion and the simulated distillation temperatures at 30 and 70% weight recovery using the NeuralSim package, as well as the results for the simulated distillation temperatures at 10 and 50% weight recovery using the Professional/II Plus package, show stronger biases than the other nets, as indicated by their intercept and slope values. The lowest linear correlation coefficients were obtained for the viscosities @ 20 and 50 °C using the Professional/II Plus package. The performance of the neural network models depends on the similarity of the new feedstock to the feedstock data bank. Table IV shows the results for two different feedstocks (FEED 1 and FEED 2), each hydrogenated under different conditions. It can be observed that the performance of the neural networks for FEED 1 is better than for FEED 2, with both the Professional/II Plus package (NET 1) and the NeuralSim package (NET 2). Figure 1 presents the two tree clustering diagrams for the 17 feedstocks of the data bank together with FEED 1 and FEED 2, respectively. In this figure it can be verified that FEED 1 is more similar to the data bank than FEED 2, using the city-block (Manhattan) distance as similarity measure and the unweighted pair-group average linkage rule as clustering configuration.

Table II  Performance Measures for the Backpropagation Neural Networks (Professional/II Plus package)

Measure | HDS | HDN | HDA | H2 Cons. | Viscosity @ 20 °C | Viscosity @ 50 °C
Intercept | -19.3 | -6.9 | -4.8 | -6.2 | -1.8 | -0.35
Slope | 1.17 | 1.02 | 0.99 | 0.95 | 1.03 | 1.00
R2 | 0.73 | 0.69 | 0.78 | 0.94 | 0.56 | 0.69

Measure | Density | D-2887 T10 | D-2887 T30 | D-2887 T50 | D-2887 T70 | D-2887 T90
Intercept | -0.06 | -50.2 | -12.5 | 32.0 | -6.5 | -11.1
Slope | 1.07 | 1.23 | 1.04 | 0.90 | 1.02 | 1.03
R2 | 0.95 | 0.92 | 0.95 | 0.95 | 0.97 | 0.99

Table III  Performance Measures for the Cascade Correlation Neural Networks (NeuralSim package)

Measure | HDS | HDN | HDA | H2 Cons. | Viscosity @ 20 °C | Viscosity @ 50 °C | C1 | C2 | C3 | C4
Intercept | -44.3 | -3.9 | -4.2 | -1.9 | -3.7 | -0.44 | 0.003 | 0.001 | 0.004 | 0.006
Slope | 1.43 | 1.04 | 1.03 | 0.95 | 1.33 | 1.08 | 0.99 | 0.96 | 0.89 | 0.88
R2 | 0.71 | 0.60 | 0.75 | 0.97 | 0.86 | 0.94 | 0.96 | 0.95 | 0.96 | 0.94

Measure | Density | Cetane No. | D-2887 T10 | D-2887 T30 | D-2887 T50 | D-2887 T70 | D-2887 T90
Intercept | 0.05 | 9.4 | -12.7 | -21.6 | 11.6 | 52.9 | 3.4
Slope | 0.94 | 0.76 | 1.06 | 1.08 | 0.97 | 0.86 | 0.99
R2 | 0.97 | 0.83 | 0.95 | 0.98 | 0.97 | 0.98 | 0.99

Table IV  Comparison between Pilot Plant Results and Process Neural Network Results

Conditions and Properties | FEED 1 (Exp.) | Product 1_1 (Exp.) | Product 1_1 (NET 1) | Product 1_1 (NET 2) | FEED 2 (Exp.) | Product 2_1 (Exp.) | Product 2_1 (NET 1) | Product 2_1 (NET 2)
PPH2, bar | -- | 46.8 | | | -- | 62 | |
WABT, °C | -- | 379.7 | | | -- | 350.5 | |
LHSV, h-1 | -- | 2.0 | | | -- | 1.0 | |
Density 20/4 °C | 0.8854 | 0.8730 | 0.8726 | 0.8731 | 0.9132 | 0.8934 | 0.8949 | 0.8922
Viscosity @ 20 °C, cSt | 13.93 | 10.95 | 10.17 | 11.01 | 17.93 | 14.59 | 12.19 | 15.67
Viscosity @ 50 °C, cSt | 5.39 | 4.70 | 4.21 | 4.67 | 6.240 | 5.50 | 5.02 | 5.60
T ASTM D2887, °C, 10 w% | 249.0 | 234.0 | 239.5 | 236.5 | 247 | 234.0 | 234.8 | 234.8
T ASTM D2887, °C, 30 w% | 297.0 | 287.0 | 287.6 | 290.4 | 298 | 283.0 | 285.5 | 288.0
T ASTM D2887, °C, 50 w% | 327.0 | 319.0 | 323.8 | 321.1 | 350 | 341.0 | 340.7 | 342.2
T ASTM D2887, °C, 70 w% | 361.0 | 353.0 | 356.1 | 356.2 | 383 | 377.0 | 374.7 | 377.1
T ASTM D2887, °C, 90 w% | 418.0 | 408.0 | 410.6 | 410.5 | 421 | 416.0 | 414.8 | 416.1
HDA conversion, w% | -- | 22.3 | 21.1 | 23.1 | -- | 35.2 | 27.3 | 26.7
HDN conversion, w% | -- | 81.2 | 73.9 | 78.2 | -- | 77.9 | 66.4 | 72.1
HDS conversion, w% | -- | 99.9 | 97.6 | 97.7 | -- | 98.1 | 95.1 | 96.6
H2 Consumption, Nl/l | -- | 46 | 39 | 45 | -- | 100 | 82 | 89
Stability | Unstable | Stable | Stable | -- | Unstable | Stable | Stable | --
Cetane Number | 40.8 | 43.5 | -- | 43.4 | 33.8 | 39.1 | -- | 37.4

Fig. 1. Tree clustering diagrams for the 17 feedstocks of the data bank together with (a) FEED 1 and (b) FEED 2 (unweighted pair-group average linkage, city-block (Manhattan) distances; vertical axis: (Dlink/Dmax)*100).

7. Conclusions
Neural network technology is a reliable tool for product and process modeling. It can be successfully applied in the hydrotreating process to predict the behavior of the hydrogen chemical consumption, the generation of light gas and the conversions of the HDA, HDS and HDN reactions. Another important application is related to the product physical properties (density, viscosity, simulated distillation temperatures), as well as to the cetane number and stability of feedstock and hydrogenated products. The models have been used in a process simulator and optimizer designed with a user-friendly interface. The network models can be used to properly plan the operation of a refinery according to fluctuating demand and to determine the blend of feedstock appropriate for obtaining a pre-specified product quality. In addition, neural networks can be applied in research projects and unit design, reducing the number of pilot plant tests required in such applications. One key element able to increase the benefits in the area of process modeling and optimization is the choice of a reduced number of feedstock properties that can be easily measured in the industrial plant. The performance of the neural network models depends strongly on the similarity of the new feedstock to the feedstock data bank; cluster analysis can therefore be used to increase the reliability of the model predictions.

References
[1] HAYKIN, S. Neural networks: a comprehensive foundation, 2nd ed., Prentice-Hall International Inc., 1999.
[2] FAHLMAN, S. E., LEBIERE, C. The cascade-correlation learning architecture, Advances in Neural Information Processing Systems 2, Morgan Kaufmann, 1988.
[3] NASCIMENTO, C. A. O., GIUDICI, R., GUARDANI, R. Neural network based approach for optimization of industrial chemical processes, Computers and Chemical Engineering, 2000, 24, 2303-2314.
[4] FREEMAN, J. Neural network development and deployment rules-of-thumb, Hydrocarbon Processing, 1999, October, 101-107.
[5] LENNOX, B., MONTAGUE, G. A., FRITH, A. M., GENT, C., BEVAN, V. Industrial application of neural networks - an investigation, Journal of Process Control, 2001, 11, 497-507.
[6] MOLGA, E. J., VAN WOEZIK, B. A. A., WESTERTERP, K. R. Neural networks for modeling of chemical reaction systems with complex kinetics: oxidation of 2-octanol with nitric acid, Chemical Engineering and Processing, 2000, 39, 323-334.
[7] NEELAKANTAN, R., GUIVER, J. Applying neural networks, Hydrocarbon Processing, 1998, September, 91-96.
[8] VELLIDO, A., LISBOA, P. J. G., VAUGHAN, J. Neural networks in business: a survey of applications (1992-1998), Expert Systems with Applications, 1999, 17, 51-70.
[9] McGREAVY, C., LU, M. L., WANG, X. Z., KAM, E. K. T. Characterization of the behavior and product distribution in fluid catalytic cracking using neural networks, Chemical Engineering Science, 1994, 49 (24A), 4717-4724.
[10] ELKAMEL, A., AL-AJMI, A., FAHIM, M. Modeling the hydrocracking process using artificial neural networks, Petroleum Science and Technology, 1999, 17 (9&10), 931-954.
[11] ARAI, Y. Computational chemistry on catalyst technology and international collaboration, Catalysis Today, 1995, 23, 439-448.
[12] HOU, Z.-Y., DAI, Q., WU, X.-Q., CHEN, G.-T. Artificial neural network aided design of catalyst for propane ammoxidation, Applied Catalysis A: General, 1997, 161, 183-190.
[13] HATTORI, T. Neural networks in catalyst design: an art turning into a science, Proceedings of the 15th World Petroleum Congress, 1998, 783-790.
[14] YANG, H., FAIRBRIDGE, C., RING, Z. Neural network prediction of cetane number for isoparaffins and diesel fuel, Petroleum Science and Technology, 2001, 19 (5&6), 573-586.
[15] YANG, H., RING, Z., BRIKER, Y., McLEAN, N., FRIESEN, W., FAIRBRIDGE, C. Neural network prediction of cetane number and density of diesel fuel from its chemical composition determined by LC and GC-MS, Fuel, 2002, 81, 65-74.
[16] VAN LEEUWEN, J. A., JONKER, R. J., GILL, R. Octane number prediction based on gas chromatographic analysis with non-linear regression techniques, Chemometrics and Intelligent Laboratory Systems, 1994, 25, 325-340.
[17] JOHNSON, R. A., WICHERN, D. W. Applied multivariate statistical analysis, 2nd ed., Prentice-Hall International Inc., 1988.
