Abstract: Artificial neural network (ANN), or neural network, is an emerging technology that is advancing rapidly and having an impact on many scientific and engineering applications. The frontier of power electronics and motor drives has recently been advanced considerably by the advent of this technology, and the future indicates a tremendous promise. Neural networks have brought a challenge to the expertise of power electronics engineers with a traditional background. This paper will discuss principles of neural networks and describe some of their applications in power electronics and motor drives. The specific applications will include static feedback signal estimation for a vector drive, space vector PWM for a two-level voltage-fed inverter, and voltage model flux vector estimation. Finally, some problems and challenges for practical applications will be discussed.

I. INTRODUCTION

Neural networks belong to the area of artificial intelligence (AI), like expert systems (ES), fuzzy logic (FL) and genetic algorithms (GA), but the neural network is a more generic form of AI compared to behavioral rule-based ES and FL systems. ANN, FL and GA belong to the area of 'soft computing', where approximate computation is acceptable, compared to the 'hard computing' or precise computation required in many applications. Broadly, the goal of AI is to imitate natural or human intelligence for emulation of human thinking. Natural intelligence is contributed by the biological neural network that gives human beings the capability to memorize, store knowledge, perceive, learn and take intelligent decisions. The ANN tends to emulate the biological neural network. When a child is born, its cerebral cortex is said to contain around 100 billion nerve cells, or biological neurons, which are interconnected to form the biological neural network. As the child grows in age, this network gets trained to give him all the expertise of adult life. Neurologists and behavioral scientists hardly understand the structure and functioning of the brain in spite of painstaking research for a long period of time. Therefore, the ANN model of the biological network is far from accurate. However inferior the model may be, the ANN (also called a neurocomputer or connectionist system in the literature) tends to solve many important problems in science and engineering. It can be shown that the ANN is particularly suitable for solving pattern recognition (or input-output mapping) and image processing type problems that are difficult to solve by a traditional digital computer. This mapping capability of the ANN is analogous to the associative memory property of the human brain. This property helps us to remember or associate the name of a person when we see his face, or recognize an apple by its color and shape. Just like a human brain, an ANN is trained (not programmed like a digital computer) to learn by example input-output patterns. This is like teaching alphabet characters to a child by a teacher (defined as supervised learning), where the characters are shown and their names are pronounced repeatedly.

Broadly, ANN technology has been applied in process control, identification, forecasting, diagnostics and robot vision, just to name a few areas. More specifically, in the power electronics area, ANN applications include look-up table functions (one- or multi-dimensional), PWM techniques, adaptive P-I closed-loop control, delayless harmonic filtering, model referencing adaptive control (MRAC), optimization control, vector control, drive feedback signal processing, on-line diagnostics, FFT signature analysis, estimation for distorted waves, etc. [1]. Within the length constraint of this paper, only a few applications will be discussed.

II. NEURAL NETWORK STRUCTURE

A. Biological and Artificial Neurons
The structure of an artificial neuron in a neural network, shown in Fig. 1(b), is inspired by the concept of the biological neuron shown in Fig. 1(a). Basically, a neuron is a processing element that operates like a network where the input signals (continuous variables or discrete pulses) pass through synaptic weights (or gains) and collect in a summing node before passing through a nonlinear activation or transfer function at the output. A bias signal (not shown) can be added in the summing node.
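The neuron computation just described, synaptic weights feeding a summing node followed by a nonlinear activation, can be sketched in a few lines. A minimal illustration (the input values, weights and bias below are made up, not taken from any figure in the paper):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Artificial neuron: weighted sum of inputs plus bias,
    passed through a nonlinear activation (tanh here)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias  # summing node
    return math.tanh(s)  # bipolar activation, output in (-1, +1)

# Hypothetical signals and synaptic weights for illustration only
y = neuron([0.5, -1.0, 0.25], [0.8, 0.1, -0.4], bias=0.2)
```

Whatever the inputs, the tanh activation keeps the output bounded between the two asymptotic values, which is the saturating behavior discussed next.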
Fig. 2 shows a few possible activation functions. The simplest of all is the linear activation function (not shown), which can be unipolar or bipolar and saturates at 0 and 1 (unipolar) or -1 and +1 (bipolar), respectively, at large signal amplitude. The threshold and signum functions can be added with a bias signal, and their outputs vary abruptly between 0 and +1 and between -1 and +1, respectively, as shown. The most commonly used activation functions are the sigmoidal (also called log-sigmoid) and hyperbolic tan (also called tan-sigmoid) types, which are nonlinear, differentiable and continuously varying between the two asymptotic values 0, 1 and -1, +1, respectively.
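These activation functions are easy to state directly. A small sketch of the threshold (unipolar), signum (bipolar), log-sigmoid and tan-sigmoid functions named above, written from their standard definitions:

```python
import math

def threshold(s):
    """Unipolar hard limiter: output switches abruptly between 0 and +1."""
    return 1.0 if s >= 0 else 0.0

def signum(s):
    """Bipolar hard limiter: output switches abruptly between -1 and +1."""
    return 1.0 if s >= 0 else -1.0

def log_sigmoid(s):
    """Smooth, differentiable; asymptotic values 0 and 1."""
    return 1.0 / (1.0 + math.exp(-s))

def tan_sigmoid(s):
    """Smooth, differentiable; asymptotic values -1 and +1."""
    return math.tanh(s)
```

At large signal amplitude the two smooth functions approach their asymptotes, while the hard limiters jump at the origin (or at the bias point, if a bias is added).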
Neural network models can be classified into feedforward and feedback (or recurrent) types. In a feedforward network, the signals flow from neuron to neuron only in the forward direction, whereas in a recurrent network the signals flow in the forward as well as backward or lateral directions. Examples of feedforward networks are the Perceptron, Adaline and Madaline, the back propagation network, the radial basis function network (RBFN), the general regression network, the modular neural network (MNN), the learning vector quantization (LVQ) network, the probabilistic neural network (PNN) and the fuzzy neural network (FNN). Examples of recurrent neural networks (RNN) are the Hopfield network, the Boltzmann machine, Kohonen's self-organizing feature map (SOFM), the recirculation network, brain-state-in-a-box (BSB), the adaptive resonance theory (ART) network and bi-directional associative memory (BAM).

Fig. 3 shows the structure of the most commonly used feedforward network. The input signals are normalized to per-unit values, and the output signals are converted from per-unit signals to actual signals by denormalization. A constant bias source (not shown) is often coupled to the hidden and output layers through weights. The architecture of the network indicates that it is a fast and massively parallel-input parallel-output computing system where computation is done in a distributed manner. The network can perform input-output static nonlinear mapping, i.e., for a signal pattern input, a corresponding pattern can be retrieved at the output. The pattern matching is possible due to the associative memory property contributed by the many distributed weights. A large number of input-output example data patterns are required to train the network. Usually, a computer program (such as the MATLAB Neural Network Toolbox by Mathworks) is used for such training. Once trained successfully, the network can not only recall the example input-output patterns, but can also interpolate or extrapolate them. The trained ANN can be implemented in parallel by an ASIC chip or serially by a high-speed DSP. An ANN is said to have a fault tolerance property because deterioration of a few weights or a few missing links does not affect the input-output pattern matching property. The three-layer network shown in Fig. 3 is often called a universal function approximator because it can map any continuous function at the input to any desired function at the output.
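The static input-output mapping performed by such a three-layer network is just two weight layers wrapped around a hidden nonlinearity. A minimal forward-pass sketch of a 4-20-4 topology (random weights stand in for trained ones here, so the output values are illustrative only):

```python
import math
import random

def forward(x, W1, b1, W2, b2):
    """Static mapping of a three-layer feedforward network:
    input -> tanh hidden layer -> linear output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]          # hidden layer activations
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]       # output layer

random.seed(0)
n_in, n_hid, n_out = 4, 20, 4                # a 4-20-4 network
# Untrained (random) weights and biases, for structure only
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [random.uniform(-1, 1) for _ in range(n_hid)]
W2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [random.uniform(-1, 1) for _ in range(n_out)]

y = forward([0.1, -0.2, 0.3, 0.4], W1, b1, W2, b2)
```

Training (e.g., by back propagation, as with the MATLAB toolbox mentioned above) would adjust the distributed weights so that the mapping reproduces the example input-output patterns.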
III. APPLICATIONS IN DRIVE SYSTEMS

ANN can be applied for various control, identification and estimation applications. Some applications were mentioned at the end of the Introduction. In this section, we will briefly discuss some example applications from the literature.

A. Vector Control Static Feedback Signals Estimation

A simple example of a feedforward ANN application is the feedback signal computation for a vector-controlled induction motor drive [5]. Fig. 5 shows the block diagram of a direct vector-controlled induction motor drive where the ANN estimates the rotor flux ($\psi_r$), the unit vector ($\cos\theta_e$, $\sin\theta_e$) and the torque ($T_e$) from the input fluxes ($\psi_{ds}^{s}$, $\psi_{qs}^{s}$) and currents ($i_{ds}^{s}$, $i_{qs}^{s}$) by solving the following set of non-dynamical equations:

$\psi_{dm}^{s} = \psi_{ds}^{s} - L_{ls}\, i_{ds}^{s}$ .......... (1)

$\psi_{qm}^{s} = \psi_{qs}^{s} - L_{ls}\, i_{qs}^{s}$ .......... (2)

$\psi_{dr}^{s} = \dfrac{L_r}{L_m}\, \psi_{dm}^{s} - L_{lr}\, i_{ds}^{s}$ .......... (3)

$\psi_{qr}^{s} = \dfrac{L_r}{L_m}\, \psi_{qm}^{s} - L_{lr}\, i_{qs}^{s}$ .......... (4)

$\psi_{r} = \sqrt{(\psi_{dr}^{s})^{2} + (\psi_{qr}^{s})^{2}}$ .......... (5)

$\cos\theta_e = \dfrac{\psi_{dr}^{s}}{\psi_{r}}$ .......... (6)

$\sin\theta_e = \dfrac{\psi_{qr}^{s}}{\psi_{r}}$ .......... (7)

A DSP-based estimator is also shown in the figure for comparison. Since a feedforward network cannot solve a dynamical system, the machine terminal voltages are integrated by a hardware low-pass filter (LPF) to generate the stator flux signals $\psi_{ds}^{s}$ and $\psi_{qs}^{s}$ as shown. Voltage integration for flux synthesis by using an RNN will be discussed later. The variable-frequency, variable-magnitude sinusoidal signals are then used to calculate the output parameters by a feedforward ANN as shown in Fig. 6. The topology of the network has three layers, and the hidden layer contains 20 neurons (a 4-20-4 network). The hidden and output layers have hyperbolic tan activation functions to produce bipolar output signals. The network was trained with the simulated input-output data of an equivalent DSP. The ANN-based estimator gives accurate performance and some amount of harmonic immunity. Noise or harmonic filtering is one of the advantages of ANN.

B. Space Vector PWM Controller

The space vector PWM (SVM) technique for a three-phase isolated-neutral machine with a two-level voltage-fed inverter is a well-discussed subject in the literature. Normally, SVM is implemented by a DSP, but it is also possible to implement it by a feedforward ANN [6] because the SVM algorithm can be looked upon as a nonlinear input-output mapping problem. The advantage of the ANN is fast implementation, permitting a much higher switching frequency. Fig. 7 shows the command voltage vector ($V^{*}$) trajectories.
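The static feedback-signal mapping of equations (1)-(7), which the ANN of Section III.A is trained to reproduce, can be written out directly. A sketch with made-up per-unit machine parameters and signal values (not taken from the cited drive):

```python
import math

def feedback_signals(psi_ds, psi_qs, i_ds, i_qs, L_ls, L_lr, L_m, L_r):
    """Rotor flux magnitude and unit vector from stator fluxes and
    currents, following the non-dynamical equations (1)-(7)."""
    psi_dm = psi_ds - L_ls * i_ds                   # (1) airgap flux, d-axis
    psi_qm = psi_qs - L_ls * i_qs                   # (2) airgap flux, q-axis
    psi_dr = (L_r / L_m) * psi_dm - L_lr * i_ds     # (3) rotor flux, d-axis
    psi_qr = (L_r / L_m) * psi_qm - L_lr * i_qs     # (4) rotor flux, q-axis
    psi_r = math.hypot(psi_dr, psi_qr)              # (5) rotor flux magnitude
    return psi_r, psi_dr / psi_r, psi_qr / psi_r    # (6) cos, (7) sin

# Hypothetical per-unit fluxes, currents and inductances for illustration
psi_r, cos_te, sin_te = feedback_signals(
    0.9, 0.1, 0.3, 0.5, L_ls=0.05, L_lr=0.05, L_m=2.0, L_r=2.05)
```

By construction the returned pair ($\cos\theta_e$, $\sin\theta_e$) is a unit vector, which is what the vector rotator in the drive requires.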
The network output gives the corresponding digital word for the three phases, given by a general expression of the form

$T_{ON} = \dfrac{T_s}{4} - V^{*} T_s\, f(V^{*})\, g(\alpha^{*})$ .......... (9)

A single up/down counter at the output helps to generate the symmetrical pulse widths for all the three phases.

C. Voltage Model Flux Vector Estimation

The programmable cascaded low-pass filter (PCLPF) method of integration of the machine terminal voltages for estimation of the flux vectors has been discussed in the literature [7]. Basically, the PCLPF permits offset-free integration of the machine voltage down to very low frequency (a fraction of a Hz) and permits direct vector control to be used at very low speed. Fig. 11 shows the two-stage PCLPF for estimation of the stator fluxes $\psi_{ds}^{s}$ and $\psi_{qs}^{s}$. Basically, the PCLPF can replace the LPF integrator shown in Fig. 5. Each voltage signal is filtered by a hardware LPF, and the stator resistance drop is subtracted as shown before passing through two identical stages of first-order programmable low-pass filter (PLPF). The time constant $\tau$ of each filter stage is a function of the frequency $\omega_e$ so as to give a predetermined phase shift angle of $-0.5(\pi/2 - \varphi_h)$, where $\varphi_h$ is the hardware LPF phase shift angle. If $\varphi_h = 0$, the per-stage phase shift angle is $-\pi/4$ irrespective of frequency. For ideal integration, an amplitude compensation as a function of frequency is also required. Each flux estimation channel is identical and produces the corresponding output flux wave at a particular frequency.
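The PCLPF tuning described above can be checked in the frequency domain: choosing each stage's time constant for a per-stage phase lag of $0.5(\pi/2 - \varphi_h)$ and compensating the amplitude should reproduce an ideal integrator (gain $1/\omega_e$, phase $-\pi/2$) at the operating frequency. A sketch assuming $\varphi_h = 0$, with an illustrative amplitude-compensation gain chosen to make the cascade match the ideal integrator:

```python
import cmath
import math

def pclpf_response(w_e, phi_h=0.0):
    """Frequency response of two cascaded first-order low-pass stages,
    tuned so the cascade emulates an ideal integrator at frequency w_e."""
    alpha = 0.5 * (math.pi / 2 - phi_h)        # required phase lag per stage
    tau = math.tan(alpha) / w_e                # programmable time constant
    gain = (1 + (w_e * tau) ** 2) / w_e        # amplitude compensation (assumed form)
    stage = 1 / (1 + 1j * w_e * tau)           # one first-order LPF at w_e
    return gain * stage * stage                # two identical stages

w_e = 2 * math.pi * 5.0                        # e.g., a 5 Hz stator frequency
H = pclpf_response(w_e)
# |H| should equal 1/w_e and arg(H) should equal -pi/2,
# i.e., ideal integration at the programmed frequency
```

With $\varphi_h = 0$ each stage contributes $-\pi/4$, so the cascade gives the full $-\pi/2$ of an integrator, and the compensation restores the $1/\omega_e$ magnitude.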
Fig. 13 shows the stator flux oriented vector controlled drive. Very low frequency (a fraction of a Hz) operation of the drive is required for the integrator. The stator flux orientation has the advantage of a minimal parameter variation problem, but it introduces some amount of coupling effect, which is compensated by a decoupler.

IV. CONCLUSION AND DISCUSSION

The paper has discussed neural network principles and a few selected applications related to the vector-controlled induction motor drive. For real-time applications, a neural network can be implemented either by a DSP or by an ASIC chip. The ASIC chip implementation is desirable because of its fast parallel computation capability. Unfortunately, the nonavailability of large and economical ASIC chips is hindering applications of ANN. Intel introduced an electrically trainable analog neural network (ETANN type 80170NX) but withdrew it from the market because of a drift problem. Large digital ASIC chips with high resolution are not yet available. ANN requires large data sets for training, and the training is time-consuming. Currently, a lot of research is going on to solve these problems. Fortunately, computer speed is also increasing to alleviate this problem. For a parameter-varying system such as a drive, the ANN weights have to be adapted by fast on-line training. On-line training has been attempted by various methods that include random weight change and EKF algorithms. As mentioned before, most of the current ANN applications are restricted to the feedforward back propagation type network. There is a tremendous opportunity for expanding the applications with other feedforward and feedback networks. Hybrid AI techniques where ANN can be

REFERENCES

[1] B. K. Bose, Modern Power Electronics and AC Drives, Prentice Hall, Upper Saddle River, 2001.
[2] B. K. Bose, "Neural network applications in power electronics and drives", Distinguished Lecture, Texas A&M University, College Station, Texas, November 12, 2001.
[3] B. K. Bose, "Expert systems, fuzzy logic, and neural network applications in power electronics and motion control", Proc. of the IEEE, vol. 82, pp. 1303-1323, August 1994.
[4] S. Haykin, Neural Networks, Macmillan, NY, 1994.
[5] M. G. Simoes and B. K. Bose, "Neural network based estimation of feedback signals for a vector controlled induction motor drive", IEEE Trans. on Ind. Appl., vol. 31, pp. 620-629, May/June 1995.
[6] J. O. P. Pinto, B. K. Bose, L. E. Borges and M. P. Kazmierkowski, "A neural network based space vector PWM controller for voltage-fed inverter induction motor drive", IEEE Trans. on Ind. Appl., vol. 36, pp. 1628-1636, Nov./Dec. 2000.
[7] J. O. P. Pinto, B. K. Bose and L. E. Borges, "A stator flux oriented vector controlled induction motor drive with space vector PWM and flux vector synthesis by neural networks", IEEE IAS Annu. Meet. Conf. Rec., pp. 1605-1612, 2000.