
Artificial Neural Networks

Outline

Introduction to Artificial Neural Network
Properties of Artificial Neural Network
Applications of Artificial Neural Network
Demo: Neural Network Toolbox
Case-1 Designing XOR network
Case-2 Power system security assessment



What are Neural Networks?
Models of the brain and nervous system
Highly parallel
Process information much more like the
brain than a serial computer
Learning

Very simple principles


Very complex behaviours
BIOLOGICAL NEURAL NETWORK

Figure 1 Structure of biological neuron



The Structure of Neurons
A neuron has a cell body, a branching input structure (the dendrite), and a branching output structure (the axon).

Axons connect to dendrites via synapses.


Electro-chemical signals are propagated
from the dendritic input, through the cell
body, and down the axon to other neurons



The Structure of Neurons

A neuron only fires if its input signal exceeds a certain amount (the threshold) in a short time period.
Synapses vary in strength:
Good connections allow a large signal.
Slight connections allow only a weak signal.
Synapses can be either excitatory or inhibitory.



The Artificial Neural Network

Figure 2 Structure of artificial neuron

Mathematically, the output of the network is given as

Y = F(S) = F( Σ_{K=1}^{N} X_K W_K + b )
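As a rough illustration (not part of the original slides), this expression can be evaluated in MATLAB for a single neuron; the input, weight, and bias values below are made up for the example, and tansig is used as the activation F:

X = [0.5; 0.2; 0.9];   % example inputs X_K (assumed values)
W = [0.4; -0.7; 0.1];  % example weights W_K (assumed values)
b = 0.3;               % example bias
S = W' * X + b;        % weighted sum over K = 1..N
Y = tansig(S)          % output Y = F(S); F could equally be logsig, purelin, etc.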
ANNs: The Basics
ANNs incorporate the two fundamental components of biological neural nets:

1. Neurons (nodes)
2. Synapses (weights)





Properties of Artificial Neural Nets (ANNs)

Many simple neuron-like threshold switching units
Many weighted interconnections among units
Highly parallel, distributed processing
Learning by tuning the connection weights



Appropriate Problem Domains
for Neural Network Learning

Input is high-dimensional discrete or real-valued (e.g. raw sensor input)
Output is discrete or real-valued
Output is a vector of values
Form of target function is unknown
Humans do not need to interpret the results (black box model)



Applications
Ability to model linear and non-linear systems without the need to make prior assumptions about the underlying relationships.
Applied in almost every field of science and engineering. A few examples are:
Function approximation, or regression analysis, including time series prediction and modelling.
Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
Data processing, including filtering, clustering, blind signal separation and compression.
Computational neuroscience and neurohydrodynamics
Forecasting and prediction
Estimation and control
Applications in Electrical Engineering
Load forecasting
Short-term load forecasting
Mid-term load forecasting
Long-term load forecasting
Fault diagnosis / fault location
Economic dispatch
Security Assessment
Estimation of solar radiation, solar heating, etc.
Wind speed prediction
Designing ANN models
Designing ANN models follows a number of systematic procedures. In general, there are five basic steps:
(1) collecting data,
(2) preprocessing the data,
(3) building the network,
(4) training the network, and
(5) testing the performance of the model,
as shown in Fig. 3.

Fig. 3. Basic flow for designing an artificial neural network model
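The five steps above can be outlined as a short MATLAB script. This is only a sketch under assumed names (mydata.mat, inputs, targets), not code from the slides:

data = load('mydata.mat');                       % (1) collect data (assumed file)
inputsN = mapminmax(data.inputs);                % (2) preprocess: normalize inputs
net = feedforwardnet(10);                        % (3) build a network with 10 hidden neurons
[net, tr] = train(net, inputsN, data.targets);   % (4) train (tr holds the training record)
testOut = net(inputsN(:, tr.testInd));           % (5) test on the held-out samples
testMSE = mean((data.targets(:, tr.testInd) - testOut).^2)   % test error (single-row targets assumed)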
Neural Network Problems

Many parameters to be set
Overfitting
Long training times
...



INTRODUCTION TO NN TOOLBOX

The Neural Network Toolbox is one of the most commonly used, powerful, commercially available software tools for the development and design of neural networks. The software is user-friendly and permits flexibility and convenience in interfacing with other toolboxes in the same environment to develop a full application.



Features
It supports a wide variety of feed-forward and
recurrent networks, including perceptrons, radial
basis networks, BP networks, learning vector
quantization (LVQ) networks, self-organizing
networks, Hopfield and Elman networks, etc.
It also supports the activation function types of
bi-directional linear with hard limit (satlins) and
without hard limit, threshold (hard limit), signum
(symmetric hard limit), sigmoidal (log-sigmoid),
and hyperbolic tan (tan-sigmoid).
Features
In addition, it supports unidirectional linear with hard limit (satlin) and without hard limit, radial
basis and triangular basis, and competitive and
soft max functions. A wide variety of training and
learning algorithms are supported.

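A quick way to see the shape of several of these transfer functions is to evaluate them over a range of inputs; this small sketch uses the standard toolbox function names (tansig, logsig, purelin, satlin, satlins, hardlim):

x = -5:0.1:5;   % input range
plot(x, tansig(x), x, logsig(x), x, purelin(x), x, satlin(x), x, satlins(x), x, hardlim(x));
legend('tansig', 'logsig', 'purelin', 'satlin', 'satlins', 'hardlim');
xlabel('input'); ylabel('output');
title('MATLAB built-in transfer functions');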


Case-1 Problem Definition
The XOR problem requires one hidden layer and one output layer, since it is not linearly separable.



Design Phase



NN Toolbox

The NN Toolbox can be opened by entering the command
>>nntool
It can also be opened as shown below.
This will open the NN Network/Data Manager screen.



Getting Started



NN Network/ Data Manager



Design
Let P denote the input and T denote the target/output.
In MATLAB these are expressed in the form of matrices:
P = [0 0 1 1; 0 1 0 1]
T = [0 1 1 0]
To use a network, first design it, then train it before starting simulation.
We follow the steps below to do this:
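The same XOR network can also be built, trained, and simulated from the MATLAB command line instead of nntool. The following is a minimal sketch (assuming a toolbox version that provides feedforwardnet), equivalent in spirit to the GUI steps that follow:

P = [0 0 1 1; 0 1 0 1];               % inputs
T = [0 1 1 0];                        % targets
net = feedforwardnet(2, 'trainlm');   % one hidden layer with 2 neurons, TRAINLM training
net.layers{1}.transferFcn = 'tansig'; % hidden-layer transfer function
net.layers{2}.transferFcn = 'tansig'; % output-layer transfer function
net = train(net, P, T);               % train the network
Y = sim(net, P)                       % outputs should be close to T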
Provide input and target data
Step-1: First we enter P and T into the NN Network/Data Manager. This is done by clicking New Data once.
Step-2: Type P as the Name and the corresponding matrix as the Value, select Inputs under DataType, then confirm by clicking on Create.
Step-3: Similarly, type in T as the Name and the corresponding matrix as the Value, select Targets under DataType, then confirm.
You should see screens like the following figures.
Providing input



Providing target data



Create Network

Step-4: Now we create the network XORNet. To do this, click on New Network.
You will see a screen like the following figure.
Now change all the parameters on the screen to the values indicated on the following screens:



Defining XORNet network



Setting network parameters

Make sure the parameters are as follows:

Network Type = Feedforward Backprop
Train Function = TRAINLM
Adaption Learning Function = LEARNGDM
Performance Function = MSE
Number of Layers = 2



Define network size

Step-5: Select Layer 1, type in 2 for the number of neurons, and select TANSIG as the Transfer Function.
Select Layer 2, type in 1 for the number of neurons, and select TANSIG as the Transfer Function.
Step-6: Then confirm by hitting the Create button, which concludes the XOR network implementation phase.



Step-7: Now highlight XORNet with a double click, then click on the Train button.
You will get the screen shown in the following figure.



Training network



Defining training parameters



Step-8: On Training Info, select P as Inputs,
T as Targets.
On Training Parameters, specify:
epochs = 1000
Goal = 0.000000000000001
Max fail = 50
After confirming that all the parameters have been specified as intended, hit Train Network.

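For reference, the same training parameters can be set on a network object from the command line (a sketch assuming the network variable is called net):

net.trainParam.epochs   = 1000;    % maximum number of epochs
net.trainParam.goal     = 1e-15;   % performance goal
net.trainParam.max_fail = 50;      % maximum validation failures
[net, tr] = train(net, P, T);      % train with the parameters above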


Training process



Various Plots

Now we can view the following plots:

Performance plot
You should get a decaying curve (since the training is minimizing the error).
Training state plot
Regression plot



Performance plot

Plots the training, validation, and test performances given the training record TR returned by the function train.



Training state plot

Plots the training state from a training record TR returned by train.



Regression plot

Plots the linear regression of targets relative to outputs.



View weights and bias
Now, to confirm the XORNet structure and the values of the various weights and biases of the trained network, click on View in the Network/Data Manager window.
NOTE: If for any reason you don't get the figure as expected, click on Delete and recreate the XORNet as described above.
The XORNet has now been trained successfully and is ready for simulation.

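If the trained network has been exported to the workspace (assumed here under the name XORNet), the same weights and biases can also be read directly off the network object:

XORNet.IW{1,1}   % input-to-hidden-layer weights
XORNet.LW{2,1}   % hidden-to-output-layer weights
XORNet.b{1}      % hidden-layer biases
XORNet.b{2}      % output-layer bias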


XORNet Structure



Network simulation
With a trained network, simulation is a way of testing the network to see if it meets our expectations.

Step-9: Now create new test data S (with the matrix [1; 0] representing a set of two inputs) in the NN Network Manager, following the same procedure indicated before (as for input P).



Step-10: Highlight XORNet again with one click, then click on the Simulate button in the Network Manager. Select S as the Inputs, type in XORNet_outputsSim as Outputs, then hit the Simulate Network button and check the result of XORNet_outputsSim in the NN Network Manager by clicking View.
This concludes the whole process of XOR network design, training and simulation.

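Steps 9 and 10 have a simple command-line counterpart (again assuming the trained network is available in the workspace as XORNet):

S = [1; 0];                           % test pattern: first input 1, second input 0
XORNet_outputsSim = sim(XORNet, S)    % expected to be close to 1, since XOR(1,0) = 1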


Simulated result



Case-2 Problem Definition
Power system security assessment determines the safety status of a power system in three steps: system monitoring, contingency analysis and security control.
Load flow equations are required to identify the power flows and voltage levels throughout the transmission system.
The contingencies can be a single-element outage (N-1), a multiple-element outage (N-2 or N-X) or a sequential outage.
Here only a single outage at a time is considered.
Data Collection
The input data are obtained from an offline Newton-Raphson load flow using MATLAB. The data matrix has size [12×65].

These input data are divided into three groups: training data, validation data, and test data. The training data matrix has size [12×32], while the test data matrix has size [12×23].



Data Collection
The bus voltages V1, V2 and V3 are not included in the training and test data because they are generator buses; they are controlled by the automatic voltage regulator (AVR) system.
In the training data, 10 samples are in the secure condition while 12 samples are in the insecure condition. In the test data, 1 sample has secure status while 10 samples have insecure status.



DATA COLLECTION



DATA COLLECTION



DATA PRE-PROCESSING

After data collection, three data preprocessing procedures are applied so that the ANNs can be trained more efficiently:
solve the problem of missing data,
normalize the data, and
randomize the data.

The missing data are replaced by the average of neighboring values.



Normalization

A normalization procedure before presenting the input data to the network is required, since mixing variables with large and small magnitudes will confuse the learning algorithm about the importance of each variable and may force it to finally reject the variable with the smaller magnitude.

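One common way to perform such normalization (a sketch, not necessarily the procedure used in this study) is mapminmax, which scales each row of the data to the range [-1, 1]; trainData and testData are assumed variable names:

[trainN, ps] = mapminmax(trainData);        % normalize the training data row-wise to [-1, 1]
testN = mapminmax('apply', testData, ps);   % apply the same scaling to the test data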


Building the Network

At this stage, the designer specifies the number of hidden layers, neurons in each layer, transfer function in each layer, training function, weight/bias learning function, and performance function.



TRAINING THE NETWORK
During the training process, the weights are adjusted to make the actual (predicted) outputs close to the target (measured) outputs of the network.
Fourteen types of training algorithms are available for developing the MLP network.
MATLAB provides the built-in transfer functions linear (purelin), hyperbolic tangent sigmoid (tansig) and logistic sigmoid (logsig). The graphical illustration and mathematical form of these functions are shown in Table 1.
TRAINING THE NETWORK

Table 1. MATLAB built-in transfer functions
Parameter setting

Number of layers
Number of neurons
too many neurons require more training time
Learning rate
from experience, the value should be small (~0.1)
Momentum term
...

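For a gradient-descent-with-momentum setup, the learning rate and momentum term are ordinary training parameters. The following is an illustrative sketch (the values are examples only, and the training function is assumed to be traingdm):

net.trainFcn = 'traingdm';   % gradient descent with momentum
net.trainParam.lr = 0.1;     % learning rate (small value, as suggested above)
net.trainParam.mc = 0.9;     % momentum constant (example value)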


TESTING THE NETWORK
The next step is to test the performance of the
developed model. At this stage unseen data are
exposed to the model.
In order to evaluate the performance of the developed ANN models quantitatively and verify whether there is any underlying trend in their performance, statistical analysis involving the correlation coefficient (R), the root mean square error (RMSE), and the mean bias error (MBE) is conducted.
RMSE

RMSE provides information on the short-term performance; it is a measure of the variation of predicted values around the measured data. The lower the RMSE, the more accurate the estimation.



MBE

MBE is an indication of the average deviation of the predicted values from the corresponding measured data and can provide information on the long-term performance of the models; the lower the MBE, the better the long-term model prediction.

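Both measures are straightforward to compute from vectors of predicted and measured values; the variable names below are assumed for illustration:

err  = predicted - measured;   % prediction errors (vectors of equal length)
RMSE = sqrt(mean(err.^2));     % root mean square error
MBE  = mean(err);              % mean bias error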


PROGRAMMING THE NEURAL
NETWORK MODEL
ANN implementation is a process that results in the design of the best ANN configuration.
The percentage classification accuracy and the mean square error are used to represent the performance of the ANN in predicting the security level of the IEEE 9-bus system.
The steps of the ANN implementation are shown in the following flow chart.



FLOW CHART



USING NN TOOLBOX
First run the MATLAB file testandtrain.m.
This file contains the input data and the target data.
The name of the input data is train.
The name of the target data is target.
The network fitting tool can be launched from the command prompt with
>>nftool
or by using the following steps.
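The same fitting workflow can be sketched without the wizard, assuming the data from testandtrain.m are available as matrices named inputs and targets (the slides call them train and target; they are renamed here so the data variable does not shadow the train function):

net = fitnet(10);                          % fitting network with 10 hidden neurons
net.divideParam.trainRatio = 0.70;         % proportion of samples used for training
net.divideParam.valRatio   = 0.15;         % proportion used for validation
net.divideParam.testRatio  = 0.15;         % proportion used for testing
[net, tr] = train(net, inputs, targets);   % train the fitting network
outputs = net(inputs);                     % simulate the trained network
perf = perform(net, targets, outputs)      % performance (MSE by default)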


OPENING nftool



NETWORK FITTING TOOL
The Network Fitting Tool appears as shown below.



PROVIDING INPUT AND TARGET
DATA
Clicking on the Next button provides the option to supply the input and target data.



VALIDATION AND TEST DATA
Here we define the training, validation, and test data.



DEFINING NETWORK SIZE
Here we set the number of neurons in the fitting network's hidden layer.



TRAIN NETWORK



TRAINING PROCESS
By clicking on the Train button, the training process starts.



PERFORMANCE PLOT



TRAINING STATE PLOT



REGRESSION PLOT



EVALUATE NETWORK



SAVE RESULTS



SIMULINK DIAGRAM
The following is the Simulink diagram of the network.



Query



Epoch: During iterative training of a neural network, an epoch is a single pass through the entire training set, followed by testing of the verification set.
Generalization: How well will the network make predictions for cases that are not in the training set?
Backpropagation: The method for computing the gradient of the case-wise error function with respect to the weights of a feedforward network.
Backprop: A training method that uses backpropagation to compute the gradient.
Backprop network: A feedforward network trained by backpropagation.
