
XI SEPOPE - XI SYMPOSIUM OF SPECIALISTS IN ELECTRIC OPERATIONAL AND EXPANSION PLANNING
March 17th to 20th, 2009 - BELÉM (PA), BRAZIL

ELECTRICITY SPOT PRICES - MODELING AND SIMULATION USING DESIGN OF EXPERIMENTS AND ARTIFICIAL NEURAL NETWORKS

1 A.R. QUEIROZ, 2 J.W. MARANGON LIMA, 2 F.A. OLIVEIRA, 2 P.P. BALESTRASSI, P.S. QUINTANILHA FILHO, 3 F. ZANFELICE
1 University of Texas at Austin, United States of America; 2 Universidade Federal de Itajubá, Brazil; 3 CPFL, Brazil

ABSTRACT

The deregulation process that started in the electricity sector in the last few decades created the electricity market. Together with this market the concept of electricity prices emerged, and since then it has been one of the most important variables for market participants. Efficient methods for forecasting electricity spot prices have therefore become crucial for maximizing agent benefits. Since the introduction of deregulation, much work has been done in many countries with the main objective of simulating or forecasting these prices. In Brazil, electricity prices are based on the marginal cost provided by an optimization software package (NEWAVE). The simulation process uses Stochastic Dual Dynamic Programming to obtain the Operational Marginal Cost (OMC). The computational effort required by the software and the volatility of its results have been two major issues faced by Brazilian market participants. This work presents a fast and efficient model to simulate the OMC using DOE (Design of Experiments) and ANN (Artificial Neural Networks) techniques. The objective is to use DOE to identify the principal NEWAVE input variables and to create an efficient set of scenarios to be used as the training sample for the ANN model. The paper shows that the combined techniques provide promising results and may be applied to risk management and investment analysis.

INDEX TERMS

Electricity Prices, Simulation, Design of Experiments and Artificial Neural Networks

University of Texas at Austin – Anderson Rodrigo de Queiroz – ar_queiroz@mail.utexas.edu – phone: 1 512 529-6841
1. Introduction
The introduction of deregulation in the electricity sector has turned the short-term electricity price into one of the most important variables in the power market. The basic objective of deregulation in these markets is to maximize the efficiency of electricity generation and transmission while reducing the price of electricity. Efficient methods for forecasting electricity spot prices have therefore become crucial. Producers and consumers rely on forecasts to prepare their bidding strategies: if producers have an accurate price forecast for the coming months, they can develop a strategy to maximize their benefits.
The changes introduced in the Brazilian market model in 2004 by Law 10848 established new rules for the commercialization environment: the law created the regulated market environment and kept the free wholesale market. It made clear that contracts in the regulated environment must be conducted by the Brazilian Electrical Energy Commercialization Chamber (CCEE) after an auction procedure regulated by the National Regulatory Agency (ANEEL). The prices in the free market, named PLD (clearing prices of the differences), are set by calculating the operational marginal cost (OMC) of energy derived from a hydrothermal dispatch optimization program (NEWAVE) [1]-[2].
The NEWAVE program provides the OMC as the Lagrange multiplier of the load balance constraint. The objective is to minimize production costs considering the operation of the power plant reservoirs and the thermal generators. Stochastic dual dynamic programming is used for this purpose because today's decisions affect future costs and the overall optimization. NEWAVE is therefore the main tool to compute the returns and risks of investment portfolios and of selling and buying contracts. However, the program is time consuming. Simulating the OMC and its volatility has been a major problem in Brazil: if, for instance, a Monte Carlo Simulation (MCS) is used to analyze risk and return in this market, NEWAVE becomes the critical point of the whole process because of the excessive number of iterations required by the MCS.
The first step to forecast free market prices in Brazil is thus to solve the optimization problem in a few seconds; in other words, a fast substitute for NEWAVE is needed. This paper shows a procedure to overcome this problem. In [3], a clone of NEWAVE using Design of Experiments (DOE) and Artificial Neural Networks (ANN) techniques was presented. The model was developed to simulate the OMC for the four regions of the Brazilian electricity market. In [4], the NEWAVE clone was used together with MCS to analyze the risk and returns of a contract portfolio for a generation company. This paper has three main objectives: to explain the model developed to clone the NEWAVE program; to show some results of electricity price simulation with this model, called PREVCM; and to validate the results of this simulator using statistical control charts.
A model with several neural networks that can reproduce the calculations of the NEWAVE program is presented. First, the input variables are analyzed with the DOE technique to separate the principal variables (those with strong effects on the results) from the secondary variables; DOE also designs the sample necessary to train the ANNs. Second, the structure of the ANNs is chosen and the training stage is performed.
Section 2 presents a brief overview of electricity price forecasting and some characteristics of the NEWAVE program. The most important variables that directly affect prices, such as energy demand, fuel costs for thermal generation, reservoir bulk storage, and others, are described. Changes in these variables produce great impacts on the final energy price.
Section 3 describes the DOE technique and the ANN design used to create the PREVCM simulator.

Section 4 presents PREVCM. First, it shows the process used to build the dispatch scenarios when there is no information about which variables have more influence on specific outputs. An efficient manipulation of the input variables is performed using DOE, which establishes the minimum number of cases to be simulated in NEWAVE and captures the weight of each variable. Second, the ANN training process and the best ANN configurations found are presented. At the end of this section PREVCM is illustrated.
Section 5 shows the results for the electricity prices. Since PREVCM is a clone of the NEWAVE program, it is necessary to ensure that its results are always consistent with NEWAVE outputs. A back-test procedure was also performed to validate the model's ability to respond to changes in NEWAVE, and it is presented in this section.
Finally, section 6 provides the conclusions of this research.

2. Electricity Prices Forecasting


Before the deregulation process, electricity prices were determined directly. A local utility provided its clients with all parts of the electric service, including power generation, transmission, distribution, and retail sales. This model was called monopolistic because there was no competition between market agents, and electricity sector planning and operation were centralized. In this model, the government controlled supply and hence prices. With the beginning of the deregulation process in the 1980s, in countries such as Chile, Norway, Argentina and Australia, electric power systems started to change. A very important change was that supply decisions started to be made by market agents instead of the government. During this process, the electricity price became an extremely important variable because it signals to market participants and system investors the expansion and operation costs of the system.
Together with the power system deregulation process, many methods and techniques have been developed to forecast or simulate such prices in competitive environments. An important work that classifies and compares different techniques is presented in [5]. Models using time series techniques were applied in many countries to predict electricity price behavior [6]; stochastic processes using demand and seasonality characteristics were used for the same purpose [7]; ANN models with different structures were applied to forecast electricity prices [8]-[9] and, in some cases, demand [10]; and simulation models using load flow, optimal dispatch and energy optimization were used to precisely represent the power system operation [11].
In many countries, electricity prices are obtained from the law of supply and demand and, in most cases, are determined over short time periods. In some markets, such as ERCOT (Electric Reliability Council of Texas), NYISO (New York Independent System Operator) and PJM (Pennsylvania-New Jersey-Maryland Interconnection), prices are defined in real time within 5- to 15-minute periods.
In most of the markets mentioned above, the largest share of power generation is provided by thermal plants, which is not the case in Brazil. The Brazilian system has several types of thermal plants using different fuels (natural gas, diesel, nuclear, oil and coal), but most of the energy (around 85%) is provided by hydro plants connected by cascaded reservoirs. In addition, the system has four distinct interconnected submarkets representing the country's regions (Southeast, South, Northeast and North) and many other particular features. For this power system, the optimization program NEWAVE establishes the economic dispatch with a joint representation of the hydro and thermal plants. The program also provides the OMC for each submarket.

Because NEWAVE represents a large number of factors through its input variables, only the most important variables that directly influence electricity prices were selected to build the model. Twenty-seven input variables were used to build the NEWAVE clone, and changes in them produce great impacts on the final OMC. The input variables selected were:
1. Electricity Demand (4 variables, one per submarket);
2. Load Growth Rate (1 variable);
3. Inflow Energy (4 variables, one per submarket);
4. Reservoir Bulk Storage (4 variables, one per submarket);
5. Fuel Costs (4 variables: coal, gas, diesel, oil);
6. Thermal Plants Unavailability (3 variables, one per submarket except North);
7. Thermal Plants Expansion (3 variables, one per submarket except North);
8. Hydro Plants Expansion (4 variables, one per submarket).
It is important to mention that the same load growth rate was considered for all submarkets; this assumption was made to avoid increasing the number of input variables of the model. Note also that there are only 3 variables for Thermal Plants Unavailability and 3 for Thermal Plants Expansion because no data were available for the North submarket in the NEWAVE case data for August 2006 (the base scenario).
For the outputs, besides the OMC, other NEWAVE outputs were selected to be part of the model, for a total of 32 output variables. The output variables selected were:
1. Operational Marginal Cost (12 variables, 3 load profiles per submarket);
2. Final Energy Storage (4 variables, one per submarket);
3. Hydro Generation (12 variables, 3 load profiles per submarket);
4. Energy Deficit Risk (4 variables, one per submarket).

3. Design of Experiments and Artificial Neural Networks

3.1. DOE Overview


Design of Experiments [12] is a technique created between 1920 and 1930 by Fisher and developed later by important statistics researchers such as Box, Hunter and Taguchi. DOE has a long history of theoretical developments and practical applications in many fields, including agriculture, medicine, industry and engineering.
The first practical application of DOE dates from the 1930s in the British textile industry. After the Second World War, DOE was introduced in the chemical industry and in industrial processes of US and European companies. Interest in DOE also grew in Brazil and many other countries, and nowadays many organizations increase their productivity using this technique.
Despite the many applications of DOE in the literature, the technique has not been used in simulation practice as effectively as it should be [13]. Moreover, the diffusion of the DOE methodology was slowed by its complicated calculations and difficulty of execution. With modern software such as MINITAB and STATISTICA, however, DOE has become more accessible and its use has increased.

3.2. DOE Process


In a DOE process, the input variables or parameters of a model are called factors. Each factor can assume two or more levels depending on the chosen design type. All the process factors arranged at given levels define an experiment. In the particular case of this paper, an experiment represents a scenario that needs to be simulated in NEWAVE.
In general, the effects caused by the inputs of a process on its outputs are obscured, which makes the relationships among factors hard to discover with a simplistic approach such as trial and error. Knowing which factors are relevant to the output response is very important for modeling a process and obtaining a good sample. The first DOE objective in this paper is to classify all factors as either control or secondary: control factors have the main influence on the process response, while secondary factors have only small effects on it.
The main difficulty is generating the dispatch scenarios when there is no information about which variables have more influence on specific outputs. Defining which factors have the larger influence on the outputs is important for building the simulator using ANNs. After this definition, an efficient manipulation of the input variable set is performed. DOE is used to establish the minimum number of cases to be simulated in NEWAVE without losing information about the optimization process.
DOE determines the scenarios necessary to assess the output of the NEWAVE program, i.e., the cases that must be simulated using NEWAVE. Then, based on the assumed input values (X), the output vector Y is generated; for example, Y contains the OMC. From the output values, it is possible to verify the dependence relation Y = f(X), as shown in Figure 1.

Figure 1 - DOE application on the NEWAVE problem


In general, a process with a large number of variables can be analyzed in steps, and in this work three different DOE applications are performed to define the final sample. For each application, the type of DOE experiment best suited to the objective of the problem was selected. The three steps are: screening, variable interactions and final sample determination.
The screening step detects the principal variables, which have strong effects on the outputs. The second step identifies whether the principal variables (obtained in the first step) have combined effects on the responses. The last step determines the final sample for the ANN training.
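The screening step is typically built on a two-level orthogonal design. As a minimal sketch (not the paper's actual 36-experiment design for 27 factors), the classical 12-run Plackett-Burman construction for up to 11 two-level factors can be generated as follows:

```python
import numpy as np

# Classical 12-run Plackett-Burman screening design (Plackett & Burman, 1946):
# 11 cyclic shifts of a generating row of +/-1 levels plus one all-low run.
def plackett_burman_12():
    row = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    shifts = np.array([np.roll(row, i) for i in range(11)])
    return np.vstack([shifts, -np.ones(11, dtype=int)])

D = plackett_burman_12()
print(D.shape)        # (12, 11): 12 scenarios for up to 11 factors
print(D.sum(axis=0))  # every column is balanced (six +1 and six -1 levels)
```

Each row is one experiment to be simulated; the columns are mutually orthogonal, which is what allows the main effect of each factor to be estimated independently from so few runs.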

3.3. ANN Overview


The first work on artificial neural networks was presented in 1943 by McCulloch and Pitts, possibly inspired by advances of Alan Turing and John von Neumann on the Boolean nature of intelligence. Much research in this field followed the Perceptron, created by Rosenblatt in 1958, and a similar model known as Adaline, created by Widrow and Hoff in 1960.

An ANN is a "machine" created with the intention of modeling the processing performed by the human brain when it carries out a specific function. Some important characteristics of ANNs are: learning from examples, adaptation to new situations, robustness, generalization to new examples, fast solutions without much process information, approximation of complex multivariate functions, computational efficiency, and the possibility of working in real time.
Because of these characteristics, ANNs have a vast range of applications, such as pattern recognition, data classification, forecasting, process control, function approximation, credit evaluation and others.
A neural network has two important elements: the architecture and the learning algorithm. Unlike a computer, which is programmed, an ANN is trained from examples and its "knowledge" is kept internally. The learning algorithm generalizes the data and memorizes the knowledge in the weights, which are the adaptable network parameters. An ANN is thus formed by a composition of neurons, whose processing consists of a linear combination of the inputs with the connection weights, the result of which is applied to an activation function. The remaining design problems are to determine the type of ANN and the learning algorithm.

3.4. ANN Training and Selection Process


In this work, the training process was carried out with the STATISTICA software using the Intelligent Problem Solver (IPS). The IPS is an easy-to-use tool with great analysis power that guides the user through a process that begins with the construction of different types of ANN and ends with the selection of the network with the best performance.
With the IPS it is possible to select many types of networks to build a model. For this type of problem, the architecture that best adapted to the process was the Multilayer Perceptron (MLP) network. The networks were trained with the backpropagation learning algorithm available in the IPS package. The MLP architecture is shown in Figure 2.

Figure 2 – MLP Architecture


The outputs of this type of ANN are obtained through equations (1) to (4) below.

$$(X_K \times W_{K \times J}) + B1_J = u_J \quad (1)$$

$$g_h(u) = \frac{e^{u} - e^{-u}}{e^{u} + e^{-u}} \quad (2)$$

$$(u'_J \times W_{J \times h}) + B2_h = d_h \quad (3)$$

$$Y_h = \frac{1}{1 + e^{-d}} \quad (4)$$

where $X_K$ is the input vector with $K$ elements; $W_{K \times J}$ is the weight matrix of the hidden layer with $J$ hidden neurons; $B1_J$ is the first vector of bias terms generated during the ANN training process; $u$ is the vector obtained from the weighted sum of the inputs plus the $B1_J$ values; $g_h$ is the first activation function, applied to vector $u$ to generate vector $u'$; $W_{J \times h}$ is the weight matrix of the output layer with $h$ output neurons; $B2_h$ is the second vector of bias terms; $d$ is the vector obtained from the second weighted sum, of vector $u'$ plus the $B2_h$ values; and $Y_h$ is the vector of ANN outputs, obtained by applying the second activation function to vector $d$.
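Equations (1) to (4) amount to a standard two-layer forward pass with a tanh hidden layer and a logistic output. The sketch below uses random placeholder weights, not the trained PREVCM parameters; the 14 inputs and 8 hidden neurons are illustrative choices.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    u = x @ W1 + b1                   # eq. (1): weighted sum of the inputs
    u_prime = np.tanh(u)              # eq. (2): hyperbolic tangent activation
    d = u_prime @ W2 + b2             # eq. (3): weighted sum of the hidden layer
    return 1.0 / (1.0 + np.exp(-d))   # eq. (4): logistic output activation

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=14)                    # one scaled input scenario
W1, b1 = rng.normal(size=(14, 8)), rng.normal(size=8)  # hidden-layer parameters
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)   # output-layer parameters
y = mlp_forward(x, W1, b1, W2, b2)                     # output in (0, 1)
```

A trained model would rescale the (0, 1) output back to physical units such as R$/MW.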
The procedure for selecting the ANNs with the best performance was based on a parameter called SD.Ratio, obtained from a statistical analysis comparing the real values obtained from NEWAVE against the simulated values obtained from the ANN. Equation (5) shows the SD.Ratio:

$$SD.Ratio = \frac{\sigma_R}{\sigma_S} \quad (5)$$

where $\sigma_S$ is the standard deviation of the real sample (NEWAVE values) and $\sigma_R$ is the standard deviation of the residuals, i.e., of the differences between the NEWAVE values and the ANN values. A smaller SD.Ratio denotes better ANN performance; the required precision varies with the problem type.
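Equation (5) can be computed directly; the sample values below are made up for illustration.

```python
import numpy as np

# SD.Ratio: standard deviation of the residuals (NEWAVE minus ANN values)
# divided by the standard deviation of the real NEWAVE sample.
def sd_ratio(real, simulated):
    real, simulated = np.asarray(real, float), np.asarray(simulated, float)
    return np.std(real - simulated, ddof=1) / np.std(real, ddof=1)

real = np.array([193.4, 210.7, 175.2, 240.1, 188.9])   # illustrative OMC sample
ann = real + np.array([1.2, -0.8, 0.5, -1.1, 0.9])     # illustrative ANN outputs
print(sd_ratio(real, ann) < 0.1)   # True: residuals are small vs. the sample
```

A constant (uninformative) predictor yields SD.Ratio = 1, which is why values well below 1, such as those later shown in Table 2, indicate a useful fit.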

4. Simulation Model PREVCM

4.1. Applications of DOE


The DOE process was carried out in three steps to define the main variables and their interactions (the definition of which factors have the larger influence on the outputs is important for building the simulator structured with ANNs); this procedure is essential to generate the sample necessary to build the ANN simulator.
In the first DOE application, since there were 27 input factors with no distinction yet between principal and secondary factors, 36 experiments were defined to be simulated in the NEWAVE program, using a Plackett-Burman design. Based on the NEWAVE simulation results, the 32 output variables were grouped by their correlation coefficients to simplify the analysis of the influence of the input factors, and the most correlated output variable of each group was analyzed. The factors with the largest influence on the output were taken as control factors of that group and the others as secondary factors. Using Pareto effects diagrams, the input set was separated into 14 principal and 13 secondary factors; Table 1 shows the principal variables.
Table 1 - Principal Input Variables

ReSub1  % of reservoir available Southeast    Sub2    Energy Demand South
ReSub2  % of reservoir available South        Sub3    Energy Demand North
ReSub3  % of reservoir available Northeast    LGR     Load Growth Rate
ReSub4  % of reservoir available North        CO      Oil Cost
VEaf1   Inflow Energy Southeast               ITS2    Thermal Plants Unavailability South
VEaf4   Inflow Energy North                   ExpTS1  Thermal Plants Expansion Southeast
Sub1    Energy Demand Southeast               ExpTS2  Thermal Plants Expansion South
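The effect estimates behind a Pareto diagram are simple to state: for a two-level design, a factor's main effect is the mean response at its high level minus the mean response at its low level, and factors are ranked by the magnitude of this effect. A toy sketch with illustrative data (not the NEWAVE responses):

```python
import numpy as np

# Main effects for a two-level design matrix D (rows = scenarios, entries +/-1)
# and a response vector y; ranking |effect| is what a Pareto diagram displays.
def main_effects(D, y):
    D, y = np.asarray(D, float), np.asarray(y, float)
    return np.array([y[D[:, j] > 0].mean() - y[D[:, j] < 0].mean()
                     for j in range(D.shape[1])])

# Toy 4-run design: factor 0 drives the response, factor 1 is inert
D = np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]])
y = np.array([10.0, 10.0, 2.0, 2.0])
print(main_effects(D, y))   # [8. 0.]: factor 0 is principal, factor 1 secondary
```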

After identifying the principal and secondary factors, a second DOE application was carried out. The new experiments were defined by a Fractional Factorial design which, unlike the first procedure, can capture the combined effects of the factors on the outputs. For this step, 36 more experiments were defined to be simulated in the NEWAVE program. The secondary factors were fixed at their mean values so as not to influence the principal effects. The same analysis as in the first application was performed on the simulation results to check whether other factors should be included in the principal set, but no change was needed.
The last DOE application combined two designs: a Taguchi design was used for the control factors and another Fractional Factorial design was used to incorporate the secondary factors into the analysis. For the 14 control factors, 16 experiments were defined with the Taguchi design, and for the 13 secondary factors an additional 16 experiments were defined with the Fractional Factorial design. Combining the two designs, 256 different cases were defined to be simulated in the NEWAVE program. At the end of this process, the final training sample for the ANNs comprised 328 cases (36 from the first DOE application, 36 from the second and 256 from the last one).

4.2. Simulation Model Development


The NEWAVE simulations serve as the data for the neural network training, i.e., their inputs and outputs are used to build the ANNs. The OMC outputs were clustered according to their correlation coefficients in order to define the number of neural networks needed to simulate each output variable.
The OMC output set of the Southeast submarket (3 variables comprising the OMC for low, medium and heavy load) presented perfect correlation, and therefore only one output was modeled by ANN. Figure 3 shows the NEWAVE OMC results in R$/MW for the Southeast under the simulated scenarios.

Figure 3 - OMC Results of the NEWAVE program
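The clustering of outputs by correlation can be sketched as a greedy grouping over the correlation matrix; the 0.99 threshold below is an illustrative choice, not a value stated in the paper.

```python
import numpy as np

# Group columns of Y (cases x outputs) whose pairwise correlation exceeds a
# threshold, so that one ANN can serve each group of near-identical outputs.
def correlation_groups(Y, threshold=0.99):
    corr = np.corrcoef(Y, rowvar=False)
    groups, assigned = [], set()
    for i in range(Y.shape[1]):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, Y.shape[1])
                       if j not in assigned and corr[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups

rng = np.random.default_rng(1)
a, b = rng.normal(size=100), rng.normal(size=100)
Y = np.column_stack([a, 2.0 * a + 3.0, b])   # columns 0 and 1 perfectly correlated
print(correlation_groups(Y))                 # [[0, 1], [2]]
```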

It was observed that, because of the large range of the OMC values, a model with just one ANN did not perform well. Figure 4 presents the training cases simulated with the single-ANN model. Because of the errors in the results with low OMC, two additional ANNs were created to improve the performance of the model.
The first ANN, the classification ANN (ANNQ), classifies the OMC as low, medium or high and indicates which of the other two ANNs, which provide the OMC for high values (ANNH) and for low values (ANNL), should be used to simulate the scenario. The criterion adopted for the response classification was based on the OMC median obtained from the results of the training step: if the OMC obtained from ANNQ is within a range of +10% or -10% of the median value, the output response is the OMC obtained from ANNQ itself, i.e., it is not necessary to use the other two ANNs.
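The routing rule just described can be written out explicitly. Here ann_q, ann_l and ann_h are stand-ins for the trained networks, and `median` is the training-sample OMC median.

```python
# Route a scenario x through the triple-ANN structure: keep the classification
# network's value when it falls within +/-10% of the median, otherwise call
# the low-range or high-range network.
def route_omc(x, ann_q, ann_l, ann_h, median):
    omc = ann_q(x)
    if 0.9 * median <= omc <= 1.1 * median:
        return omc
    return ann_l(x) if omc < 0.9 * median else ann_h(x)

# Stub networks for illustration; 193.43 R$/MW is the 2006 base-case median,
# giving the band [174.09, 212.77] R$/MW.
value = route_omc(None, lambda x: 180.0, lambda x: 0.0, lambda x: 0.0, 193.43)
print(value)   # 180.0: inside the band, so no second network is needed
```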

Figure 4 - OMC Results using one ANN
The values of SD.Ratio, σS and σR obtained during the training process of each ANN for the OMC of the Southeast submarket are given in Table 2. For the NEWAVE simulations with the 2006 base case, the OMC median was 193.43 R$/MW. Therefore, if the ANNQ output stays below 174.09 R$/MW (median - 10%), the OMC is simulated by ANNL; if the output stays above 212.77 R$/MW (median + 10%), the OMC is simulated by ANNH. Figure 5 shows the model structure elaborated to simulate the OMC using the triple ANN model.

Table 2 - ANN Model Parameters

Parameter   ANNQ     ANNL    ANNH
σS          351.9    62.19   374.27
σR          64.47    10.14   72.5
SD.Ratio    0.1832   0.163   0.1937
Figure 5 - Triple ANN OMC Model
Figure 6 shows the PREVCM software, which was developed in the C# language. The software simulates the OMC for the Southeast submarket as well as the other NEWAVE outputs described in this work.

Figure 6 – PREVCM Software

5. Results Obtained

5.1. Simulation Results


Figure 7 shows the results obtained using the model with three neural networks to forecast the OMC for the Southeast submarket for the sample cases. The results are much better than those of the single-ANN case, and the same holds for the OMC of the other submarkets. In order to analyze the performance of the ANN simulator, some tests were done using out-of-sample data, which were created by varying the 27 input variables in a random process between their minimum and maximum values. Figure 8 shows the OMC results for the out-of-sample scenarios.

Figure 7 - OMC Results using triple ANN Figure 8 - Out of Sample OMC Results

5.2. Tests
Since the ANN simulator is based on the NEWAVE program, it is necessary to ensure that its results are always consistent with NEWAVE. A back-test procedure was designed to validate the neural networks' ability to reproduce the NEWAVE results. For example, if a real OMC value is 1 R$/MW and the ANN value is 2 R$/MW, the relative error between them is 100%; if a real OMC value is 100 R$/MW and the ANN value is 200 R$/MW, the relative error is also 100%. Although the relative errors are the same, for the purposes of risk analysis the first error is not as harmful as the second one. To address this, a procedure was adopted that defines tolerance ranges for the OMC according to the values presented in Table 3.
Table 3 - OMC Tolerance Ranges [R$/MW]
From To Tolerance From To Tolerance
0,00 20,00 ± 10,00 300,00 400,00 ± 100,00
20,00 50,00 ± 15,00 400,00 600,00 ± 150,00
50,00 100,00 ± 25,00 600,00 800,00 ± 200,00
100,00 150,00 ± 35,00 800,00 1100,00 ± 250,00
150,00 250,00 ± 45,00
CMO > 1100,00 ± 350,00
250,00 300,00 ± 60,00
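The back-test check against Table 3 reduces to a range lookup; a sketch:

```python
# (upper bound of the OMC range, absolute tolerance), both in R$/MW, per Table 3
TOLERANCES = [
    (20.0, 10.0), (50.0, 15.0), (100.0, 25.0), (150.0, 35.0), (250.0, 45.0),
    (300.0, 60.0), (400.0, 100.0), (600.0, 150.0), (800.0, 200.0),
    (1100.0, 250.0), (float("inf"), 350.0),
]

def within_tolerance(omc_real, omc_ann):
    tol = next(t for upper, t in TOLERANCES if omc_real <= upper)
    return abs(omc_real - omc_ann) <= tol

print(within_tolerance(193.43, 230.0))   # True: deviation 36.57 <= 45
print(within_tolerance(193.43, 250.0))   # False: deviation 56.57 > 45
```

At a range boundary this sketch applies the lower range's tolerance; the paper does not specify which side of each boundary a range includes.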
The in-sample results with the triple ANN are very satisfactory (Figure 7), but the out-of-sample results show some deviations. The relative deviations were obtained using the tolerances of Table 3.

Figure 9 - Control Chart of the Deviations (individual values per observation, with center line X-bar = 0.577, UCL = 2.266 and LCL = -1.113)

The relative deviations were then analyzed with a statistical control chart of individual values to assess the performance of the model; the chart is shown in Figure 9. Based on this test with the out-of-sample cases, the process can be considered under control and the model can efficiently represent the NEWAVE program within the given tolerances.
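The limits in Figure 9 are consistent with the usual individuals (I) chart construction: center line at the mean and limits at the mean plus or minus 2.66 times the average moving range (0.577 ± 1.69 reproduces the chart's 2.266 and -1.113). A sketch on synthetic data, not the paper's deviations:

```python
import numpy as np

# Individuals control chart limits: mean +/- 2.66 * average moving range
# (2.66 = 3/d2 with d2 = 1.128 for moving ranges of two observations).
def i_chart_limits(x):
    x = np.asarray(x, float)
    mr_bar = np.mean(np.abs(np.diff(x)))   # average moving range
    center = x.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

lcl, center, ucl = i_chart_limits([0.5, 0.7, 0.4, 0.6, 0.5, 0.8])
print(lcl < center < ucl)   # True; points outside (lcl, ucl) would signal
                            # an out-of-control process
```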

6. Conclusion
One of the most important variables in the Brazilian market is the Operational Marginal Cost, because it is used to define the market clearing price. The main idea of this work was to classify the input variables according to their influence on the OMC. After this initial manipulation performed with DOE, unnecessary simulations are eliminated. The set of NEWAVE scenarios defined by the DOE is then used to train the neural networks with which the PREVCM software was built. As shown in the paper, the ANN simulator can substitute for NEWAVE in the OMC simulation with advantages in terms of computational effort. The deviations observed using PREVCM do not compromise its use in a risk analysis algorithm, which is the main purpose of the OMC calculations.

7. Acknowledgment
The authors would like to thank the financial support of CAPES (project #023/2005) and CPFL.
BIBLIOGRAPHY
[1] L.A. Terry, et al., "Coordinating the Energy Generation of the Brazilian National Hydrothermal
Electrical Generating System", Interfaces 16, Jan/1986, pp.16-38.
[2] M.V.F. Pereira, "Optimal Stochastic Operations of Large Hydroelectric Systems", Electrical Power & Energy Systems, Vol. 11, No. 3, pp. 161-169, July 1989.
[3] A.R. Queiroz, F.A. Oliveira, J.W.M. Lima, P.P. Balestrassi, “Simulating the Electricity Spot
Prices in Brazil Using Neural Network and Design of Experiments”, IEEE Power Tech, ID402,
Lausanne, July/2007.
[4] F.A. Oliveira, A.R. Queiroz, J.W. Marangon Lima and P.P. Balestrassi, “The Influence of
Operational Marginal Cost Simulation Methods on Electricity Contract Portfolio Strategies in
Brazil”, PSCC 2008.
[5] G. Li et al., "State-of-the-Art of Electricity Price Forecasting," IEEE 2005.
[6] F.J. Nogales, et al., “Forecasting Next-Day Electricity Prices by Time Series Models,” IEEE
Transactions on Power Systems, Vol. 17, No. 2, pp. 342-348, May 2002.
[7] P. Skantze, M. Ilic, and J. Chapman, “Stochastic Modeling of Electric Power Prices in A Multi-
Market Environment,” Proceedings of Power Engineering Winter Meeting, January 2000.
[8] B. R. Szkuta, L.A. Sanavria, and T.S. Dillon, “Electricity Price Short-Term Forecasting Using
Artificial Neural Networks,” IEEE Transactions On Power Systems, August 1999.
[9] J.J. Guo, and P.B. Luh, “Improving Market Clearing Price Prediction by Using a Committee
Machine of Neural Networks,” IEEE Transactions on Power Systems, November 2004.
[10] P. Mandal, T. Senjyu, and T. Funabashi, “Neural Network Models to Predict Short-Term
Electricity Prices and Loads”, IEEE 2005.
[11] J. Bastian, J. Zhu, V. Banunarayanan, and R. Mukerji, “Forecasting Energy Prices in a
Competitive Market,” IEEE Computer Applications in Power, Vol. 12, No.3, pp. 40-45, 1999.
[12] D. C. Montgomery, “Designs and Analysis of Experiments”. John Wiley & Sons, 1997.
[13] J.P.C. Kleijnen et al., "State-of-the-Art Review: A User's Guide to the Brave New World of Designing Simulation Experiments", INFORMS Journal on Computing, 2005, pp. 263-289.

