
www.rejinpaul.com

IC6601 ADVANCED CONTROL SYSTEMS

Get useful study materials from www.rejinpaul.com



MODERN CONTROL SYSTEM

Unit I
STATE VARIABLE DESIGN

State Variable Representation
The state variables may be totally independent of each other, leading to diagonal or normal form, or they could be derived as the derivatives of the output. If there is no direct relationship between the various states, we could use a suitable transformation to obtain the representation in diagonal form.
Phase Variable Representation
It is often convenient to consider the output of the system as one of the state variables and the remaining state variables as derivatives of this state variable. The state variables thus obtained from one of the system variables and its (n-1) derivatives are known as n-dimensional phase variables.
In a third-order mechanical system, the output may be displacement x1, with x1' = x2 = v and x2' = x3 = a in the case of translational motion, or angular displacement θ = x1, with x1' = x2 = ω and x2' = x3 = ω' if the motion is rotational, where v, ω, a, ω' respectively are velocity, angular velocity, acceleration, and angular acceleration.
Consider a SISO system described by an nth-order differential equation

where

u is, in general, a function of time.

The nth-order transfer function of this system is

With the states (each being a function of time) defined as

the equation becomes

Using the above equations, the state equations in phase variable form can be obtained as

where
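As a numerical sketch of the phase-variable (companion) realization, scipy can convert a transfer function to state space. The third-order transfer function below is an illustrative assumption, not one from the text:

```python
import numpy as np
from scipy import signal

# Hypothetical 3rd-order transfer function G(s) = 10 / (s^3 + 6s^2 + 11s + 6)
num = [10.0]
den = [1.0, 6.0, 11.0, 6.0]

# tf2ss returns a companion-form realization, a variant of the
# phase-variable representation discussed above.
A, B, C, D = signal.tf2ss(num, den)

# In any realization, the eigenvalues of A equal the poles of G(s).
poles = np.sort(np.linalg.eigvals(A).real)
```

Here den factors as (s+1)(s+2)(s+3), so the eigenvalues of A come out as -1, -2, -3 regardless of the particular state coordinates chosen.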

Physical Variable Representation
In this representation the state variables are real physical variables, which can be measured and used for manipulation or for control purposes. The approach generally adopted is to break the block diagram of the transfer function into subsystems in such a way that the physical variables can be identified. The governing equations for the subsystems can be used to identify the physical variables. To illustrate the approach, consider the block diagram of Fig.

One may represent the transfer function of this system as

Taking H(s) = 1, the block diagram can be redrawn as in Fig. The physical variables can be identified as x1 = y (the output), x2 = ω (the angular velocity), and x3 = ia (the armature current) in a position-control system.


where

The state space representation can be obtained by

and

State space models from transfer functions

A simple example of a system with an input u(t) and output y(t) is shown in Figure 1. This class of systems has the general form of model given in Eq.(1):

d^n y/dt^n + a_{n-1} d^{n-1}y/dt^{n-1} + ... + a_0 y(t) = b_{m-1} d^{m-1}u/dt^{m-1} + ... + b_0 u(t)    (1)

Models of this form have the superposition property:

u(t) = α1 u1(t) + α2 u2(t)  implies  y(t) = α1 y1(t) + α2 y2(t)    (2)

where (y1, u1) and (y2, u2) each satisfies Eq.(1).


A model of the form of Eq.(1) is known as a linear time-invariant (abbr. LTI) system. Assume the system is at rest prior to the time t0 = 0, and the input u(t) (0 ≤ t < ∞) produces the output y(t) (0 ≤ t < ∞); then the model of Eq.(1) can be represented by a transfer function in terms of the Laplace transform variable, i.e.:


y(s) = [(b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)] u(s)    (3)
Then applying the same input shifted by any amount τ of time produces the same output shifted by the same amount τ of time. The representation of this fact is given by the following transfer function:

y(s) = [(b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)] e^{-sτ} u(s)    (4)

For models of Eq.(1) having all b_i = 0 (i ≠ 0), a state space description arises out of a reduction to a system of first-order differential equations. This technique is quite general. First, Eq.(1) is written as:

y^(n) = f(t, u(t), y, y', y'', ..., y^(n-1))    (5)

with initial conditions: y(0) = y_0, y'(0) = y_1, ..., y^(n-1)(0) = y_{n-1}.

Consider the vector X ∈ R^n with x_1 = y, x_2 = y', x_3 = y'', ..., x_n = y^(n-1); Eq.(5) becomes:

dX/dt = [ x_2, x_3, ..., x_n, f(t, u(t), y, y', ..., y^(n-1)) ]^T    (6)

In the case of a linear system, Eq.(6) becomes:

         |  0    1    0  ...  0      |       | 0 |
         |  0    0    1  ...  0      |       | 0 |
dX/dt =  |  ...               1      | X  +  | : | u(t);   y(t) = [1 0 ... 0] X    (7)
         | -a0  -a1  ...  -a_{n-1}   |       | 1 |

It can be shown that the general form of Eq.(1) can be written with the same A and B matrices as in Eq.(7), and

y(t) = [b_0  b_1  ...  b_m  0  ...  0] X    (8)

and will be represented in the abbreviated form:


X' = AX + Bu;  y = CX + Du;  D = 0    (9)

Eq.(9) is known as the controller canonical form of the system.

Transfer function from state space models

We have just shown that a transfer function model can be expressed as a state space system in controller canonical form. In the reverse direction, it is also easy to see that each linear state space system of Eq.(9) can be expressed as an LTI transfer function. The procedure is to take the Laplace transform of both sides of Eq.(9) to give:

sX(s) = AX(s) + Bu(s);  y(s) = CX(s) + Du(s)    (10)

so that

y(s) = [C (sI - A)^{-1} B + D] u(s) = G(s) u(s),  G(s) = n(s)/d(s)    (11)
An algorithm to compute the transfer function from state space matrices is given by the Leverrier-Fadeeva-Frame formula:

(sI - A)^{-1} = N(s)/d(s)

N(s) = s^{n-1} N_0 + s^{n-2} N_1 + ... + s N_{n-2} + N_{n-1}
d(s) = s^n + d_1 s^{n-1} + ... + d_{n-1} s + d_n

where

N_0 = I,                          d_1 = -trace(A N_0)              (12)
N_1 = A N_0 + d_1 I,              d_2 = -(1/2) trace(A N_1)
...
N_{n-1} = A N_{n-2} + d_{n-1} I,  d_n = -(1/n) trace(A N_{n-1})
0 = A N_{n-1} + d_n I   (a useful check)

Therefore, according to the algorithm mentioned, the transfer function becomes:

n(s) = C N(s) B + d(s) D    (13)

or, G(s) = [C N(s) B + d(s) D] / d(s)    (14)
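The recursion of Eq.(12) is easy to implement directly. A minimal sketch; the 2x2 matrix below is an illustrative assumption used only to exercise the algorithm:

```python
import numpy as np

def leverrier_faddeev(A):
    """Compute N_0..N_{n-1} and d_1..d_n of Eq.(12), so that
    (sI - A)^-1 = (s^{n-1} N_0 + ... + N_{n-1}) / (s^n + d_1 s^{n-1} + ... + d_n)."""
    n = A.shape[0]
    N = np.eye(n)
    Ns, ds = [N], []
    for k in range(1, n + 1):
        AN = A @ N
        d = -np.trace(AN) / k          # d_k = -(1/k) trace(A N_{k-1})
        ds.append(d)
        N = AN + d * np.eye(n)         # N_k = A N_{k-1} + d_k I
        if k < n:
            Ns.append(N)
    # After the last step N should be the zero matrix: 0 = A N_{n-1} + d_n I
    return Ns, ds, N

# Illustrative 2x2 example with characteristic polynomial s^2 + 3s + 2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Ns, ds, check = leverrier_faddeev(A)
```

For this A the recursion yields d(s) = s^2 + 3s + 2, matching det(sI - A), and the final check matrix is zero.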

Eigen Values

Consider an equation AX = Y, which indicates the transformation of an n x 1 vector X into an n x 1 vector Y by the n x n matrix operator A.


If there exists a vector X such that A transforms it to the vector λX, then X is called the solution of the equation

AX = λX, i.e. (λI - A)X = 0    (1)

The set of homogeneous equations (1) has a nontrivial solution only under the condition

|λI - A| = 0    (2)

The determinant |λI - A| is called the characteristic polynomial, while equation (2) is called the characteristic equation.

After expanding, we get the characteristic equation as (3).

The n roots of equation (3), i.e. the values of λ satisfying the above equation, are called eigenvalues of the matrix A.

Equation (2) is similar to |sI - A| = 0, which is the characteristic equation of the system. Hence the values of λ satisfying the characteristic equation are the closed loop poles of the system. Thus eigenvalues are the closed loop poles of the system.

Eigen Vectors

Any nonzero vector Xi such that AXi = λi Xi is said to be an eigenvector associated with the eigenvalue λi. Thus let λi satisfy the equation

The solution of this equation is called the eigenvector of A associated with the eigenvalue λi and is denoted as Mi.

If the rank of the matrix [λi I - A] is r, then there are (n - r) independent eigenvectors. Similarly, another important point is that if the eigenvalues of matrix A are all distinct, then the rank r of the matrix [λi I - A] is (n - 1), where n is the order of the system.


Mathematically, the eigenvector can be calculated by taking cofactors of the matrix (λi I - A) along any row,

where Cki is the cofactor of the matrix (λi I - A) of the kth row.

Key Point: If the cofactors along a particular row give a null solution, i.e. all elements of the corresponding eigenvector are zero, then cofactors along any other row must be obtained. Otherwise the inverse of the modal matrix M cannot exist.
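As a numerical cross-check on the cofactor method, numpy computes the eigenvalues and the modal matrix directly. The matrix below is illustrative only, not the one from Example 1:

```python
import numpy as np

# Illustrative matrix with characteristic polynomial s^2 + 5s + 6
A = np.array([[0.0, 1.0],
              [-6.0, -5.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvector columns
eigvals, M = np.linalg.eig(A)   # M is the modal matrix

# Each column Mi satisfies A @ Mi = lambda_i * Mi
for lam, mi in zip(eigvals, M.T):
    assert np.allclose(A @ mi, lam * mi)
```

Any nonzero scalar multiple of a column of M (such as one built from cofactors) is an equally valid eigenvector.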
Example 1
Obtain the eigenvalues and eigenvectors for the matrix

Solution
The eigenvalues are the roots of

The eigenvalues are

To find the eigenvectors, let


where C = cofactor

For λ2 = -2

For λ3 = -3

Example 2
For a system with state model matrices

obtain the transfer function of the system.


Solution
The T.F. is given by,


Solution of State Equations
Consider the state equation of a linear time invariant system as,

X'(t) = AX(t) + BU(t)

The matrices A and B are constant matrices. This state equation can be of two types,
1. Homogeneous and
2. Nonhomogeneous

Homogeneous Equation
If A is a constant matrix and the input control forces are zero, then the equation takes the form

X'(t) = AX(t)

Such an equation is called a homogeneous equation. In such systems, the driving force is provided by the initial conditions of the system to produce the output. For example, consider a series RC circuit in which the capacitor is initially charged to V volts. The current is the output. Now there is no input control force, i.e. no external voltage is applied to the system. But the initial voltage on the capacitor drives the current through the system, and the capacitor starts discharging through the resistance R. Such a system, which works on the initial conditions without any input applied to it, is called a homogeneous system.
Nonhomogeneous Equation
If A is a constant matrix and the matrix U(t) is a non-zero vector, i.e. input control forces are applied to the system, then the equation takes the normal form,

X'(t) = AX(t) + BU(t)

Such an equation is called a nonhomogeneous equation. Most practical systems require inputs to drive them. Such systems are nonhomogeneous linear systems.

The solution of the state equation is obtained by considering the basic method of finding the solution of the homogeneous equation.
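For the homogeneous case the solution is x(t) = e^{At} x(0). A minimal numerical sketch using the matrix exponential; the A matrix and initial state are assumed for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable system (eigenvalues -1 and -2) and initial condition
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

def x_homogeneous(t):
    """Solution of x' = A x with x(0) = x0, via the state transition matrix e^{At}."""
    return expm(A * t) @ x0

# At t = 0 the state equals the initial condition; for a stable A it decays
x_start = x_homogeneous(0.0)
x_late = x_homogeneous(10.0)
```

This mirrors the RC-circuit discussion above: no input is applied, and the response is driven entirely by the initial condition.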
Controllability and Observability

More specifically, for the system of Eq.(1) there exists a similarity transformation that will diagonalize the system. In other words, there is a transformation matrix Q such that

X' = AX + Bu;  y = CX + Du;  X(0) = X0    (1)

X = Q X̄  or  X̄ = Q^{-1} X    (2)

X̄' = Λ X̄ + B̄ u;  y = C̄ X̄ + D u    (3)

where

Λ = diag(λ1, λ2, ..., λn)    (4)

Notice that by doing the diagonalizing transformation, the resulting transfer function between u(s) and y(s) is not altered.

Looking at Eq.(3), if b̄k = 0, then x̄k(t) is uncontrollable by the input u(t), since x̄k(t) is characterized by the mode e^{λk t} through the equation:

x̄k(t) = e^{λk t} x̄k(0)

The lack of controllability of the state x̄k(t) is reflected by a zero kth row of B̄, i.e. b̄k, which would cause a completely zero row in the following matrix (known as the controllability matrix):

C(Ā, b̄) = [B̄  ĀB̄  Ā^2 B̄  ...  Ā^{n-1} B̄] =

| b̄1  λ1 b̄1  λ1^2 b̄1  ...  λ1^{n-1} b̄1 |
| b̄2  λ2 b̄2  λ2^2 b̄2  ...  λ2^{n-1} b̄2 |
| ...                                     |    (5)
| b̄k  λk b̄k  λk^2 b̄k  ...  λk^{n-1} b̄k |
| ...                                     |
| b̄n  λn b̄n  λn^2 b̄n  ...  λn^{n-1} b̄n |

A C(Ā, b̄) matrix with all non-zero rows has a rank of n.



In fact, B̄ = Q^{-1} B or B = Q B̄. Thus, a non-singular C(Ā, b̄) matrix implies a non-singular matrix C(A, b) of the following:


C(A, b) = [B  AB  A^2 B  ...  A^{n-1} B]    (6)

It is important to note that this result holds in the case of non-distinct eigenvalues as well.
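A direct check of Eq.(6) in code; the [A, B] pair below is an illustrative assumption:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^{n-1} B] of Eq.(6)."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# Illustrative pair in controller canonical form, so it should be controllable
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

rank = np.linalg.matrix_rank(ctrb(A, B))
controllable = (rank == A.shape[0])
```

The pair is completely state controllable exactly when the rank equals n.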

[Remark 1]

If matrix A has distinct eigenvalues and is represented in controller canonical form, it is easy to show that the following identity holds for each i:

A [1  λi  λi^2  ...  λi^{n-1}]^T = λi [1  λi  λi^2  ...  λi^{n-1}]^T

Therefore the so-called Vandermonde matrix V, whose n columns are these eigenvectors of A, will diagonalize A, i.e.,

V^T = | 1  λ1  λ1^2  ...  λ1^{n-1} |
      | 1  λ2  λ2^2  ...  λ2^{n-1} |    (6)
      | ...                        |
      | 1  λn  λn^2  ...  λn^{n-1} |

and

V^{-1} A V = Λ,  or  A = V Λ V^{-1}

[Remark 2]

There is an alternative way to explain why C(A, b) should have rank n for state controllability. Let us start from the solution of the state space system:

X(t) = e^{At} X(0) + ∫_{t0}^{t} e^{A(t-τ)} B u(τ) dτ    (7)

State controllability requires that for each X(tf) nearby X(t0), there is a finite sequence of u(t), t ∈ [t0, tf].


X(tf) = e^{A tf} [ X0 + ∫_{t0}^{tf} e^{-Aτ} B u(τ) dτ ]

or

∫_{t0}^{tf} e^{-Aτ} B u(τ) dτ = e^{-A tf} X(tf) - X0

Splitting the interval into subintervals and expanding e^{-Aτ} by the Cayley-Hamilton theorem as a polynomial in A, e^{-Aτ} = Σ_{i=0}^{n-1} αi(τ) A^i, the left-hand side becomes

Σ_{i=0}^{n-1} A^i B ∫_{t0}^{tf} αi(τ) u(τ) dτ = Σ_{i=0}^{n-1} A^i B wi = [B  AB  A^2 B  ...  A^{n-1} B] [w1; w2; ...; wn]

Thus, in order that W = [w1, ..., wn]^T has a non-trivial solution, we need the C(A, b) matrix to have exact rank n.

There are several alternative ways of establishing state controllability:

The n rows of e^{At} B are linearly independent over the real field for all t.

The controllability grammian G_c(tf, t0) = ∫_{t0}^{tf} e^{-Aτ} B B^T e^{-A^T τ} dτ is non-singular for all tf > t0.

[Theorem 1] Replace B with b (i.e. Dim{b} = n x 1); a pair [A, b] is non-controllable if and only if there exists a row vector q ≠ 0 such that

qA = λq,  qb = 0    (8)

To prove the 'if' part:

If there is such a row vector, we have:


qA = λq and qb = 0
qAb = λqb = 0
qA^2 b = λqAb = λ^2 qb = 0
...
qA^{n-1} b = λ^{n-1} qb = 0

so that q [b, Ab, A^2 b, ..., A^{n-1} b] = 0. Since q ≠ 0, we conclude that [b, Ab, A^2 b, ..., A^{n-1} b] is singular, and thus the system is not controllable.

To prove the only if part:

If the pair is noncontrollable, then matrix A can be transformed into the non-controllable form:

Ā = | A_C   A_12 |      b̄ = | b_C |  } r
    | 0     A_NC |           | 0   |  } n - r    (9)

where r = rank C(A, b). (Notice that Eq.(9) is a well-known decomposition in linear system theory.)

Thus, one can find a row vector of the form q = [0  z], where z can be selected as a left eigenvector of A_NC (i.e. z A_NC = λz), for then:

qĀ = [0  z A_NC] = λ [0  z] = λq    (10)

Therefore, we have shown that if [A, b] is non-controllable, there is a non-zero row vector satisfying Eq.(8), which completes the proof.

In fact, using the modal decomposition,

e^{At} = V e^{Λt} V^{-1} = V e^{Λt} W^T = Σ_{i=1}^{n} vi wi^T e^{λi t}

where wi^T is the ith row of W^T = V^{-1}. With X(t) = e^{At} X0 for the unforced part, we have:

X(t) = e^{At} X0 + ∫_0^t e^{A(t-τ)} b u(τ) dτ = Σ_{i=1}^{n} vi wi^T X0 e^{λi t} + Σ_{i=1}^{n} vi wi^T b ∫_0^t e^{λi(t-τ)} u(τ) dτ


Thus, if b is orthogonal to wi, then the state associated with λi will not be controllable, and hence the system is not completely controllable. Another test for the controllability of the [A, b] pair is known as the Popov-Belevitch-Hautus (abbrv. PBH) test: check if rank [sI - A  b] = n for all s (not only at the eigenvalues of A). This test is based on the fact that if [sI - A  b] has rank n, there cannot be a nonzero row vector q satisfying Eq.(8). Thus by Theorem 1, the pair [A, b] must be controllable.
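A sketch of the PBH rank test; in code it suffices to check s at the eigenvalues of A, since sI - A already has full rank elsewhere. The matrices are illustrative assumptions:

```python
import numpy as np

def pbh_controllable(A, b):
    """PBH test: [A, b] is controllable iff rank [sI - A, b] = n
    at every eigenvalue s of A."""
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        M = np.hstack([s * np.eye(n) - A, b.reshape(n, 1)])
        if np.linalg.matrix_rank(M) < n:
            return False
    return True

# A controllable pair (companion form with input entering the last state)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
ok = pbh_controllable(A, b)

# An uncontrollable pair: diagonal A with b orthogonal to one mode
A2 = np.diag([-1.0, -2.0])
b2 = np.array([1.0, 0.0])
bad = pbh_controllable(A2, b2)
```

The second pair fails the test exactly at the eigenvalue whose mode receives no input, matching the modal argument above.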


Referring to the diagonalized system described above, the state x̄i(t) corresponding to the mode e^{λi t} is unobservable at the output y if c̄1i = 0 for any i = 1, 2, ..., n. The lack of observability of the state x̄i(t) is reflected by a completely zero (ith) column of the so-called observability matrix of the system, O(Ā, C̄):

O(Ā, C̄) = | C̄1           |   | c̄11            c̄12            ...  c̄1n            |
           | C̄1 Ā         | = | λ1 c̄11         λ2 c̄12         ...  λn c̄1n         |    (11)
           | ...           |   | ...                                                 |
           | C̄1 Ā^{n-1}   |   | λ1^{n-1} c̄11   λ2^{n-1} c̄12   ...  λn^{n-1} c̄1n   |

An observable state x̄i(t) corresponds to a nonzero column of O(Ā, C̄). In the case of distinct eigenvalues, each nonzero column increases the rank by one. Therefore, the rank of O(Ā, C̄), corresponding to the total number of modes that are observable at the output y(t), is termed the observability rank of the system. As in the case of controllability, it is not necessary to transform a given state-space system to modal canonical form in order to determine its rank. In general, the observability matrix of the system is defined as:

O(A, C) = | C          |
          | CA         |  = O(Ā, C̄) Q = O(Ā, C̄) V^{-1}
          | ...        |
          | CA^{n-1}   |

with Q = V^{-1} nonsingular. Therefore, the rank of O(A, C) equals the rank of O(Ā, C̄). It is important to note that this result holds in the case of non-distinct eigenvalues. Thus, a state-space system is said to be completely (state) observable if its observability matrix has full rank n. Otherwise the system is said to be unobservable.
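The observability rank test can be sketched as follows; the system matrices are assumed for illustration:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

# Illustrative system: only the first state is measured
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

rank = np.linalg.matrix_rank(obsv(A, C))
observable = (rank == A.shape[0])
```

Here [C; CA] is the 2x2 identity, so the pair is completely observable.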


In particular, it is well known that a state-space system is observable if and only if the following conditions are satisfied:

The n columns of C e^{At} are linearly independent over R for all t.

The observability grammian G_o = ∫_{t0}^{tf} e^{A^T τ} C^T C e^{Aτ} dτ is nonsingular for all tf > t0.

The (n+p) x n matrix [sI - A; C] has rank n at all eigenvalues λi of A.

Pole Placement Design
The conventional method of design of a single input single output control system consists of the design of a suitable controller or compensator in such a way that the dominant closed loop poles will have a desired damping ratio ζ and undamped natural frequency ωn. The order of the system in this case is increased by 1 or 2 if no pole zero cancellation takes place. It is assumed in this method that the effects on the responses of non-dominant closed loop poles are negligible. Instead of specifying only the dominant closed loop poles as in the conventional method of design, the pole placement technique specifies all the closed loop poles, which requires measurement of all state variables or the inclusion of a state observer in the system. The system closed loop poles can be placed at arbitrarily chosen locations with the condition that the system is completely state controllable. This condition can be proved, and the proof is given below. Consider a control system described by the following state equation

Here x is a state vector, u is a control signal which is scalar, A is an n x n state matrix, and B is an n x 1 constant matrix.

Fig. Open loop control system


The system defined by the above equation represents an open loop system. The state x is not fed back to the control signal u. Let us select the control signal to be u = -Kx. This indicates that the control signal is obtained from the instantaneous state. This is called state feedback. K is a matrix of order 1 x n called the state feedback gain matrix. Let us consider the control signal to be unconstrained. Substituting the value of u in equation 1,

The system defined by the above equation is shown in Fig. 5.2. It is a closed loop control system, as the system state x is fed back to the control signal u. Thus this is a system with state feedback.

The solution of equation 2 is, say,

x(t) = e^{(A - BK)t} x(0), where x(0) is the initial state    (3)

The stability and the transient response characteristics are determined by the eigenvalues of the matrix A - BK. Depending on the selection of the state feedback gain matrix K, the matrix A - BK can be made asymptotically stable, and it is possible to make x(t) approach zero as time t approaches infinity, provided x(0) ≠ 0. The eigenvalues of the matrix A - BK are called regulator poles. When these regulator poles are placed in the left half of the s-plane, x(t) approaches zero as time t approaches infinity. The problem of placing the closed loop poles at the desired locations is called a pole placement problem.
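A minimal sketch of pole placement using scipy's place_poles; the plant matrices and desired pole locations are illustrative assumptions:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative controllable plant
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# Desired regulator poles in the left half plane
desired = np.array([-2.0, -3.0])

# place_poles computes K such that eig(A - B K) equals the desired set
K = place_poles(A, B, desired).gain_matrix

closed_loop_poles = np.sort(np.linalg.eigvals(A - B @ K).real)
```

For a single-input controllable pair the gain achieving a given pole set is unique, so the computed closed-loop eigenvalues match the desired ones exactly (up to numerical precision).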
Design of State Observer
In the case of a state observer, the state variables are estimated based on the measurements of the output and control variables. The concept of observability plays an important part here in the case of the state observer.
Consider a system defined by the following state equations

Let us consider x̂ as the observed state vector. The observer is basically a subsystem which reconstructs the state vector of the system. The mathematical model of the observer is the same as that of the plant, except for the inclusion of an additional term consisting of the estimation error to compensate for inaccuracies in matrices A and B and for the initial error. The estimation error or the observation error is the difference between the measured output and the estimated output. The initial error is the difference between the initial state and the initial estimated state. Thus the mathematical model of the observer can be defined as,

Here x̂ is the estimated state and Cx̂ is the estimated output. The observer has as inputs the output y and the control input u. The matrix Ke is called the observer gain matrix. It is nothing but a weighting matrix for the correction term, which contains the difference between the measured output y and the estimated output Cx̂. This additional term continuously corrects the model output, and the performance of the observer is improved.

Full order state observer
The system equations are already defined as

The mathematical model of the state observer is taken as

To determine the observer error equation, subtracting the observer equation from the system equation we get


The block diagram of the system and the full order state observer is shown in the Fig.

The dynamic behavior of the error vector is obtained from the eigenvalues of the matrix A - KeC. If the matrix A - KeC is a stable matrix, then the error vector will converge to zero for any initial error vector e(0). Hence x̂(t) will converge to x(t) irrespective of the values of x(0) and x̂(0).

If the eigenvalues of the matrix A - KeC are selected in such a manner that the dynamic behavior of the error vector is asymptotically stable and is sufficiently fast, then any error vector will tend to zero with sufficient speed.

If the system is completely observable, then it can be shown that it is possible to select the matrix Ke such that A - KeC has arbitrarily desired eigenvalues, i.e. the observer gain matrix Ke can be obtained to get the desired matrix A - KeC.
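By duality, the observer gain Ke can be computed with the same pole-placement routine applied to (A^T, C^T); the plant and desired observer poles below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable plant
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Placing eig(A - Ke C) is the same as placing eig(A^T - C^T Ke^T),
# a state-feedback problem on the dual pair (A^T, C^T).
observer_poles = np.array([-8.0, -9.0])   # faster than the plant poles
Ke = place_poles(A.T, C.T, observer_poles).gain_matrix.T

error_poles = np.sort(np.linalg.eigvals(A - Ke @ C).real)
```

Choosing the observer poles well to the left of the plant poles makes the estimation error decay much faster than the plant dynamics, which is the usual rule of thumb.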


UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS

Sampled Data System
When the signal or information at any or some points in a system is in the form of discrete pulses, the system is called a discrete data system. In control engineering, the discrete data system is popularly known as a sampled data system.

Sampling process
Sampling is the conversion of a continuous time signal into a discrete time signal, obtained by taking samples of the continuous time signal at discrete time instants.

Thus if f(t) is the input to the sampler, the output is f(kT), where T is called the sampling interval.


The reciprocal of T, 1/T = Fs, is called the sampling rate. This type of sampling is called periodic sampling, since samples are obtained uniformly at intervals of T seconds.

Multiple order sampling - A particular sampling pattern is repeated periodically.

Multiple rate sampling - In this method two simultaneous sampling operations with different time periods are carried out on the signal to produce the sampled output.

Random sampling - In this case the sampling instants are random.

Sampling Theorem

A band limited continuous time signal with highest frequency fm hertz can be uniquely recovered from its samples, provided that the sampling rate Fs is greater than or equal to 2fm samples per second.
Signal Reconstruction

The signal given to the digital controller is a sampled data signal, and in turn the controller gives the controller output in digital form. But the system to be controlled needs an analog control signal as input. Therefore the digital output of the controller must be converted into analog form.

This can be achieved by means of various types of hold circuits. The simplest hold circuit is the zero order hold (ZOH). In ZOH, the reconstructed analog signal holds the same value as the last received sample for the entire sampling period.

The high frequency noises present in the reconstructed signal are automatically filtered out by the control system components, which behave like low pass filters. In a first order hold, the reconstructed signal is extrapolated from the last two samples over the current sampling period. Similarly, higher order hold circuits can be devised. First or higher order hold circuits offer no particular advantage over the zero order hold.
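A short sketch of ZOH discretization with scipy; the first-order plant and sampling period are illustrative assumptions:

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative first-order plant dx/dt = -x + u, y = x, sampled with T = 0.1 s
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
T = 0.1

# ZOH discretization: Ad = e^{AT}, Bd = (integral over one period of e^{At} dt) B
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), T, method='zoh')
```

The resulting (Ad, Bd) is the exact discrete model of the plant under the assumption that the input is held constant over each sampling period, which is exactly what the ZOH provides.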
Z-Transform
Definition of Z Transform
Let f(k) = discrete time signal
F(z) = Z{f(k)} = z transform of f(k)
The z transform of a discrete time signal or sequence is defined as the power series

F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}    (1)

where z is a complex variable.
Equation (1) is considered to be two sided, and the transform is called the two sided z transform, since the time index k is defined for both positive and negative values.
The one sided z transform of f(k) is defined as

F(z) = Σ_{k=0}^{∞} f(k) z^{-k}    (2)


Problem
1. Determine the z transform and ROC of the discrete sequence f(k) = {3, 2, 5, 7}.
Given f(k) = {3, 2, 5, 7}
where f(0) = 3, f(1) = 2, f(2) = 5, f(3) = 7, and f(k) = 0 for k < 0 and k > 3.
By definition

Z{f(k)} = F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}

The given sequence is a finite duration sequence. Hence the limits of summation can be changed to k = 0 to k = 3:

F(z) = Σ_{k=0}^{3} f(k) z^{-k}
     = f(0) z^0 + f(1) z^{-1} + f(2) z^{-2} + f(3) z^{-3}
     = 3 + 2z^{-1} + 5z^{-2} + 7z^{-3}

Here F(z) is bounded, except when z = 0.
The ROC is the entire z-plane except z = 0.
2. Determine the z transform of the discrete sequence f(k) = u(k).
Given f(k) = u(k)
u(k) is a discrete unit step sequence: u(k) = 1 for k ≥ 0, and u(k) = 0 for k < 0.
By definition

Z{f(k)} = F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k} = Σ_{k=0}^{∞} u(k) z^{-k} = Σ_{k=0}^{∞} (z^{-1})^k

F(z) is an infinite geometric series, and it converges if |z^{-1}| < 1, i.e. |z| > 1:

F(z) = 1 / (1 - z^{-1}) = z / (z - 1)
3. Find the one sided z transform of the discrete sequence generated by mathematically sampling the continuous time function f(t) = e^{-at} cos ωt.
Given
f(kT) = e^{-akT} cos ωkT
By definition

F(z) = Z{f(k)} = Σ_{k=0}^{∞} e^{-akT} cos(ωkT) z^{-k}

Using cos ωkT = (e^{jωkT} + e^{-jωkT})/2,

F(z) = (1/2) Σ_{k=0}^{∞} (e^{-aT} e^{jωT} z^{-1})^k + (1/2) Σ_{k=0}^{∞} (e^{-aT} e^{-jωT} z^{-1})^k

We know that Σ_{k=0}^{∞} c^k = 1/(1 - c) for |c| < 1, so

F(z) = (1/2) [1 / (1 - e^{-aT} e^{jωT} z^{-1})] + (1/2) [1 / (1 - e^{-aT} e^{-jωT} z^{-1})]

Combining over a common denominator and using e^{jωT} + e^{-jωT} = 2 cos ωT,

F(z) = ze^{aT} (ze^{aT} - cos ωT) / (z^2 e^{2aT} - 2ze^{aT} cos ωT + 1)

Inverse z transform
Methods: partial fraction expansion (PFE) and power series expansion.

Partial fraction expansion
Let f(k) = discrete sequence, F(z) = Z{f(k)} = z transform of f(k),

F(z) = (b_0 z^m + b_1 z^{m-1} + b_2 z^{m-2} + ... + b_m) / (z^n + a_1 z^{n-1} + a_2 z^{n-2} + ... + a_n),  where m ≤ n

The function F(z) can be expressed as a sum of terms by PFE:

F(z) = A_0 + Σ_{i=1}^{n} A_i / (z - p_i)    (3)

where A_0 is a constant, A_1, A_2, ..., A_n are residues, and p_1, p_2, ..., p_n are poles.

Power series expansion
Let f(k) = discrete sequence, F(z) = Z{f(k)} = z transform of f(k). By definition

F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}

On expanding,

F(z) = ... + f(-3)z^3 + f(-2)z^2 + f(-1)z + f(0) + f(1)z^{-1} + f(2)z^{-2} + ...    (4)

Problem
1. Determine the inverse z transform of the following function
(i) F(z) = 1 / (1 - 1.5z^{-1} + 0.5z^{-2})


Given

F(z) = 1 / (1 - 1.5z^{-1} + 0.5z^{-2}) = z^2 / (z^2 - 1.5z + 0.5) = z^2 / ((z - 1)(z - 0.5))

F(z)/z = z / ((z - 1)(z - 0.5))

By partial fraction expansion

F(z)/z = A1/(z - 1) + A2/(z - 0.5)

A1 = (z - 1) F(z)/z |_{z=1} = z/(z - 0.5) |_{z=1} = 1/(1 - 0.5) = 2

A2 = (z - 0.5) F(z)/z |_{z=0.5} = z/(z - 1) |_{z=0.5} = 0.5/(0.5 - 1) = -1

F(z)/z = 2/(z - 1) - 1/(z - 0.5)

F(z) = 2z/(z - 1) - z/(z - 0.5)

We know that Z{a^k} = z/(z - a) and Z{u(k)} = z/(z - 1).

On taking the inverse z transform,

f(k) = 2u(k) - (0.5)^k,  k ≥ 0
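The residues found above can be cross-checked with scipy.signal.residuez, which performs the same partial fraction expansion in powers of z^{-1}:

```python
import numpy as np
from scipy.signal import residuez

# F(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2), as in part (i)
b = [1.0]
a = [1.0, -1.5, 0.5]

# residuez returns residues r_i and poles p_i such that
# F(z) = sum_i r_i / (1 - p_i z^-1), so f(k) = sum_i r_i p_i^k for k >= 0
r, p, k = residuez(b, a)

# Sort by pole for a deterministic order: poles 0.5 and 1
order = np.argsort(p)
poles = p[order].real
residues = r[order].real
```

The residue 2 at the pole z = 1 and -1 at z = 0.5 reproduce f(k) = 2u(k) - (0.5)^k.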

(ii) F(z) = z^2 / (z^2 - z + 0.5)

Given

F(z) = z^2 / (z^2 - z + 0.5) = z^2 / ((z - 0.5 - j0.5)(z - 0.5 + j0.5))

F(z)/z = z / ((z - 0.5 - j0.5)(z - 0.5 + j0.5))

By partial fraction expansion

F(z)/z = A/(z - 0.5 - j0.5) + A*/(z - 0.5 + j0.5)

A = (z - 0.5 - j0.5) F(z)/z |_{z=0.5+j0.5} = z/(z - 0.5 + j0.5) |_{z=0.5+j0.5} = (0.5 + j0.5)/j = 0.5 - j0.5

A* = (z - 0.5 + j0.5) F(z)/z |_{z=0.5-j0.5} = (0.5 - j0.5)/(-j) = 0.5 + j0.5

F(z) = (0.5 - j0.5) z/(z - 0.5 - j0.5) + (0.5 + j0.5) z/(z - 0.5 + j0.5)

We know that Z{a^k} = z/(z - a).

On taking the inverse z transform,

f(k) = (0.5 - j0.5)(0.5 + j0.5)^k + (0.5 + j0.5)(0.5 - j0.5)^k

     = -j(0.5 + j0.5)^{k+1} + j(0.5 - j0.5)^{k+1},  k ≥ 0
2. Determine the inverse z transform of the z domain function

F(z) = (3z^2 - 2z + 1) / (z^2 + 3z + 2)

Given

F(z) = (3z^2 - 2z + 1) / (z^2 + 3z + 2)

By long division, dividing 3z^2 - 2z + 1 by z^2 + 3z + 2 gives quotient 3 and remainder -(11z + 5), so

F(z) = 3 - (11z + 5)/(z^2 + 3z + 2) = 3 - (11z + 5)/((z + 1)(z + 2))

By PFE

F(z) = 3 + A1/(z + 1) + A2/(z + 2)

A1 = -(z + 1)(11z + 5)/((z + 1)(z + 2)) |_{z=-1} = -(11z + 5)/(z + 2) |_{z=-1} = -(-11 + 5)/(-1 + 2) = 6

A2 = -(z + 2)(11z + 5)/((z + 1)(z + 2)) |_{z=-2} = -(11z + 5)/(z + 1) |_{z=-2} = -(-22 + 5)/(-2 + 1) = -17

F(z) = 3 + 6/(z + 1) - 17/(z + 2) = 3 + 6z^{-1} · z/(z + 1) - 17z^{-1} · z/(z + 2)

On taking the inverse z transform,

f(k) = 3δ(k) + 6(-1)^{k-1} u(k - 1) - 17(-2)^{k-1} u(k - 1),  for k ≥ 0
2. Determine the inverse z transform of the following:

F(z) = 1 / (1 - (3/2)z^{-1} + (1/2)z^{-2}),  where (i) ROC |z| > 1.0  (ii) ROC |z| < 0.5

Given

F(z) = 1 / (1 - (3/2)z^{-1} + (1/2)z^{-2})

(i) ROC |z| > 1.0

Dividing 1 by 1 - (3/2)z^{-1} + (1/2)z^{-2} in descending powers (i.e. in powers of z^{-1}) gives

F(z) = 1 + (3/2)z^{-1} + (7/4)z^{-2} + (15/8)z^{-3} + ...    (i)

F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}

For a causal signal

F(z) = Σ_{k=0}^{∞} f(k) z^{-k} = f(0) + f(1)z^{-1} + f(2)z^{-2} + f(3)z^{-3} + ...    (ii)

Comparing equations (i) and (ii),

f(0) = 1, f(1) = 3/2, f(2) = 7/4, f(3) = 15/8

f(k) = {1, 3/2, 7/4, 15/8, ...} for k ≥ 0

(ii) ROC |z| < 0.5

Dividing with the divisor arranged in ascending powers, (1/2)z^{-2} - (3/2)z^{-1} + 1, gives

F(z) = 2z^2 + 6z^3 + 14z^4 + 30z^5 + ...    (i)

F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}

For an anti-causal signal

F(z) = Σ_{k=-∞}^{0} f(k) z^{-k} = ... + f(-5)z^5 + f(-4)z^4 + f(-3)z^3 + f(-2)z^2 + f(-1)z + f(0)    (ii)

Comparing equations (i) and (ii),

f(-5) = 30, f(-4) = 14, f(-3) = 6, f(-2) = 2, f(-1) = 0, f(0) = 0

f(k) = {..., 30, 14, 6, 2, 0, 0}
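The causal long division used in part (i) can be sketched as a short routine (written for this example; the leading denominator coefficient must be nonzero):

```python
import numpy as np

def causal_power_series(b, a, n_terms):
    """First n_terms of the causal inverse z-transform of B(z^-1)/A(z^-1)
    by synthetic long division in powers of z^-1 (a[0] must be nonzero)."""
    b = list(b) + [0.0] * max(0, n_terms - len(b))
    f = []
    for k in range(n_terms):
        coeff = b[k] / a[0]
        f.append(coeff)
        # subtract coeff * A(z^-1), shifted by k, from the running numerator
        for i, ai in enumerate(a):
            if k + i < len(b):
                b[k + i] -= coeff * ai
    return f

# F(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2) from the worked example
f = causal_power_series([1.0], [1.0, -1.5, 0.5], 4)
```

The routine reproduces f(k) = {1, 3/2, 7/4, 15/8, ...}, i.e. the closed form 2 - (0.5)^k from the earlier PFE solution.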

Difference equation
Discrete time systems are described by a difference equation of the form (1)

If the system is causal, a linear difference equation provides an explicit relationship between the input and output. This can be seen by rewriting it.

Thus the nth value of the output can be computed from the nth input value and the N and M past values of the output and input, respectively.

Role of the z transform in linear difference equations
Equation (1) gives us the form of the linear difference equation that describes the system. Taking the z transform on either side and assuming zero initial conditions, we have

where H(z) is the z transform of the unit sample response h(n).


Stability analysis
Methods: Jury's stability test and the bilinear transformation.

Jury's stability test
Jury's stability test is used to determine whether the roots of the characteristic polynomial lie within the unit circle or not. It consists of two parts: one simple test for the necessary condition for stability, and another test for the sufficient condition for stability. Let us consider a general characteristic polynomial F(z):

F(z) = a_n z^n + a_{n-1} z^{n-1} + ... + a_1 z + a_0,  where a_n > 0

Necessary condition for stability

F(1) > 0;  (-1)^n F(-1) > 0

If this necessary condition is not met, then the system is unstable. We need not check the sufficient condition.


Sufficient condition for stability

|a_0| < a_n
|b_0| > |b_{n-1}|
|c_0| > |c_{n-2}|
...
|r_0| > |r_2|

If the characteristic polynomial satisfies the (n - 1) conditions, then the system is stable.

Jury's test

Bilinear transformation
The bilinear transformation maps the interior of the unit circle in the z plane into the left half of the r-plane.

z = (1 + r)/(1 - r)   or   r = (z - 1)/(z + 1)

Fig. Mapping of the unit circle in the z-plane into the left half of the r-plane.

Consider the characteristic equation

a_n z^n + a_{n-1} z^{n-1} + a_{n-2} z^{n-2} + ... + a_1 z + a_0 = 0;  a_n > 0    (i)

Substituting z = (1 + r)/(1 - r) in equation (i):

a_n ((1+r)/(1-r))^n + a_{n-1} ((1+r)/(1-r))^{n-1} + ... + a_1 ((1+r)/(1-r)) + a_0 = 0    (ii)

Equation (ii) can be simplified to

b_n r^n + b_{n-1} r^{n-1} + b_{n-2} r^{n-2} + ... + b_1 r + b_0 = 0

Problem

1. Check the stability of the sampled data control systems represented by the characteristic equation:
(i) 5z^2 - 2z + 2 = 0

Given
F(z) = 5z^2 - 2z + 2 = 0

F(z) = a_2 z^2 + a_1 z + a_0 = 5z^2 - 2z + 2

F(1) = 5(1)^2 - 2(1) + 2 = 5 - 2 + 2 = 5

(-1)^n F(-1) = (-1)^2 [5(-1)^2 - 2(-1) + 2] = 1(5 + 2 + 2) = 9

Here n = 2.
Since F(1) > 0 and (-1)^n F(-1) > 0, the necessary condition for stability is satisfied.
Check the sufficient condition: the Jury table consists of (2n - 3) rows; n = 2, so (2n - 3) = (2 x 2 - 3) = 1.
So it consists of only one row:

Row   z^0   z^1   z^2
1     a_0   a_1   a_2

a_0 = 2, a_1 = -2, a_2 = 5

The condition to be satisfied is |a_0| < a_2, i.e. 2 < 5.

The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.

(ii) F(z) = z^3 - 0.2z^2 - 0.25z + 0.05 = 0

F(z) = a_3 z^3 + a_2 z^2 + a_1 z + a_0 = z^3 - 0.2z^2 - 0.25z + 0.05

Method 1

Check the necessary condition:

F(1) = 1^3 - 0.2(1)^2 - 0.25(1) + 0.05 = 0.6

(-1)^n F(-1) = (-1)^3 [(-1)^3 - 0.2(-1)^2 - 0.25(-1) + 0.05] = 0.9

Here n = 3.
Since F(1) > 0 and (-1)^n F(-1) > 0, the necessary condition for stability is satisfied.
Check the sufficient condition: the table consists of (2n - 3) rows; n = 3, so (2n - 3) = (2 x 3 - 3) = 3.
So the table consists of three rows:

Row   z^0   z^1   z^2   z^3
1     a_0   a_1   a_2   a_3
2     a_3   a_2   a_1   a_0
3     b_0   b_1   b_2

a_0 = 0.05, a_1 = -0.25, a_2 = -0.2, a_3 = 1

b_0 = | a_0  a_3 |  = a_0^2 - a_3^2 = 0.05^2 - 1 = -0.9975
      | a_3  a_0 |

b_1 = | a_0  a_2 |  = a_0 a_1 - a_3 a_2 = 0.05(-0.25) - 1(-0.2) = 0.1875
      | a_3  a_1 |

b_2 = | a_0  a_1 |  = a_0 a_2 - a_3 a_1 = 0.05(-0.2) - 1(-0.25) = 0.24
      | a_3  a_2 |

Row   z^0       z^1      z^2     z^3
1     0.05     -0.25    -0.2     1
2     1        -0.2     -0.25    0.05
3     -0.9975   0.1875   0.24

The conditions to be satisfied are |a_0| < a_3 and |b_0| > |b_2|:
0.05 < 1,  0.9975 > 0.24

The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.


Method 2
F(z) z3 0.2z2 0.25z 0.05 0

1 r
Put z
1 r
1 1 r
F(r) 1 r3 0.2( r2 0.25( ) 0.05 0
( ) )
1 r 1 r 1 r

Onmultiplying throughoutby(1 r) 3 weget

39

Get useful study materials from www.rejinpaul.com


www.rejinpaul.com

(1 r) 3 0.2(1 r)2 (1 r) 0.25(1 r)(1 r)2 0.05(1 r)3 0


(1 r)(1 r2 2r) 0.2(1 r)(1 r2) 0.25(1 r)(1 r2) 0.05(1 r)(1 r2 2r) 0
2 2 2 2
(1 r)(1 r 2r 0.2 0.2r ) (1 r)( 0.25 0.25r 0.05 0.05r 0.1r) 0
2 2
(1 r)(1.2r 2r 0.8) (1 r)(0.3r 0.1r 0.2) 0
2 3 2 2
(1.2r 2r 0.8 1.2r 2r 0.8r) (0.3r 0.1r 0.2 0.3r3 1.1r2 1.2r) 0
3 2 3 2
(1.2r 3.2r 2.8r 0.8) ( 0.3r 0.4r 0.1 0.2) 0
3 2
0.9r 3.6r 2.9r 0.6 0
All the coefficients of the new characteristic equation are positive. Hence the necessary condition for stability is satisfied.
The sufficient condition for stability can be determined by constructing the Routh array as follows:
r^3 :   0.9     2.9       ...... row 1
r^2 :   3.6     0.6       ...... row 2
r^1 :   2.75              ...... row 3
r^0 :   0.6               ...... row 4

Column 1 entries:

r^1 : (3.6*2.9 - 0.9*0.6)/3.6 = 2.75

r^0 : (2.75*0.6 - 0*3.6)/2.75 = 0.6
There is no sign change in the elements of the first column of the Routh array. Hence the sufficient condition for stability is satisfied.
The necessary condition and sufficient condition for stability are satisfied. Hence the system is stable.
Pulse transfer function
It is the ratio of the z-transform of the discrete output signal of the system to the z-transform of the discrete input signal to the system. That is

H(z) = C(z)/R(z)    ----------------- (i)
Proof
Consider the z-transform of the convolution sum:

Z[c(k)] = sum_{k=0}^{inf} sum_{m=0}^{inf} h(k - m) r(m) z^(-k)    ----------------- (ii)

On interchanging the order of summation, we get

C(z) = sum_{m=0}^{inf} r(m) sum_{k=0}^{inf} h(k - m) z^(-k)    ----------------- (iii)

Let l = k - m. Then l = -m when k = 0 and l -> inf when k -> inf. Since the system is causal, h(l) = 0 for l < 0, so the inner sum can start at l = 0:

C(z) = sum_{m=0}^{inf} r(m) z^(-m) sum_{l=0}^{inf} h(l) z^(-l)    ----------------- (iv)

C(z) = R(z) . H(z)    ----------------- (v)

The pulse transfer function is therefore

H(z) = C(z)/R(z)    ----------------- (vi)

The block diagram for the pulse transfer function
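The convolution-sum identity proved above can be spot-checked numerically: for finite causal sequences, the coefficient array of H(z)R(z) in powers of z^-1 is exactly the discrete convolution of the two sequences. A hedged sketch (the sequences h and r below are arbitrary illustrative choices, not from the notes):

```python
import numpy as np

# h(k) = 0.5^k for k = 0..3 (truncated impulse response); r(k) = unit step samples
h = np.array([1.0, 0.5, 0.25, 0.125])
r = np.array([1.0, 1.0, 1.0, 1.0])

# Convolution sum: c(k) = sum_m h(k - m) r(m)
c_conv = np.convolve(h, r)

# Coefficients of the product H(z) R(z) in powers of z^{-1}: the same array
c_poly = np.polymul(h, r)
print(c_conv)
```

For instance c(2) = h(0) + h(1) + h(2) = 1.75, and the two arrays agree term by term, which is statement (v) for truncated signals.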

UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS
PART A
1. What is a sampled data control system?
2. Explain the terms sampling and sampler.
3. What is meant by quantization?
4. State (Shannon's) sampling theorem.
5. What is zero order hold?
6. What is region of convergence?
7. Define Z-transform of unit step signal?
8. Write any two properties of discrete convolution.
9. What is pulse transfer function?
10. What are the methods available for the stability analysis of sampled data control systems?
11. What is bilinear transformation?


PART B
1. (i) Solve the following difference equation:
2y(k) - 2y(k-1) + y(k-2) = r(k)
y(k) = 0 for k < 0 and r(k) = {1; k = 0, 1, 2
                               {0; k < 0    (8)
(ii) Check if all the roots of the following characteristic equation lie within the unit circle:
z^4 - 1.368z^3 + 0.4z^2 + 0.08z + 0.002 = 0    (8)
2. (i) Explain the concept of sampling process. (6)
(ii) Draw the frequency response of Zero-order Hold. (4)
(iii) Explain any two theorems on Z-transform. (6)
3. The block diagram of a sampled data system is shown in Fig. (a) Obtain the discrete-time state model for the system. (b) Obtain the equation for the intersample response of the system.

4. The block diagram of a sampled-data system is shown in Fig.

(a) Obtain the discrete-time state model for the system.
(b) Find the response of the system for a unit step input.
(c) What is the effect on system response (i) when T = 0.5 sec (ii) T = 1.5 sec?
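For question 1(ii), the root locations can be checked directly with numpy. This is a hedged helper, reading the garbled printed coefficient as -1.368 (minus signs are frequently lost in this printing):

```python
import numpy as np

# Characteristic equation z^4 - 1.368 z^3 + 0.4 z^2 + 0.08 z + 0.002 = 0
roots = np.roots([1.0, -1.368, 0.4, 0.08, 0.002])

# System is stable iff every root lies strictly inside the unit circle
inside = np.all(np.abs(roots) < 1.0)
print(roots, inside)
```

As a sanity check, Vieta's formulas give sum of roots = 1.368 and product of roots = 0.002 for this quartic.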


UNIT III
DESCRIBING FUNCTION ANALYSIS

State variables

Concepts of State and State Variables

State
The state of a dynamic system is the smallest set of variables (called state variables) such that the knowledge of these variables at t = t0, together with the knowledge of the inputs for t >= t0, completely determines the behaviour of the system for any time t >= t0.

The concept of state is not limited to physical systems. It is applicable to biological systems, economic systems, social systems, and others.
State variables
The state variables of a dynamic system are the smallest set of variables that determine the state of the dynamic system, i.e. the state variables are the minimal set of variables such that the knowledge of these variables at any initial time t = t0, together with the knowledge of the inputs for t >= t0, is sufficient to completely determine the behaviour of the system for any time t >= t0. If at least n variables x1, x2, ..., xn are needed to completely describe the behaviour of a dynamic system, then those n variables are a set of state variables.
The state variables need not be physically measurable or observable quantities. Variables that do not represent physical quantities and those that are neither measurable nor observable can also be chosen as state variables. Such freedom in choosing state variables is an added advantage of the state-space methods.
Canonical forms
There are four main canonical forms to be studied:
1. Controller canonical form
2. Observer canonical form
3. Controllability canonical form
4. Observability canonical form


Controller canonical form

Consider the transfer function of the following for illustration:

y(s)/u(s) = (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3)

The transfer function is firstly decomposed into two subsystems:

y(s)/u(s) = (y(s)/z(s)) (z(s)/u(s)) = (b1 s^2 + b2 s + b3) . 1/(s^3 + a1 s^2 + a2 s + a3)

In other words,

z(s)/u(s) = 1 / (s^3 + a1 s^2 + a2 s + a3)

and

y(s)/z(s) = b1 s^2 + b2 s + b3

It is easy to have the state-space equation of

Zdot = [  0    1    0        Z  +  [ 0
          0    0    1                0
         -a3  -a2  -a1 ]             1 ] u

y = b3 z1 + b2 z2 + b1 z3 = [ b3  b2  b1 ] Z
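The realization above can be checked numerically: C (sI - A)^(-1) B must reproduce the transfer function at any test frequency. This is a hedged numpy sketch; the coefficient values a1..a3, b1..b3 are arbitrary illustrative choices, not from the notes.

```python
import numpy as np

a1, a2, a3 = 6.0, 11.0, 6.0       # arbitrary example denominator coefficients
b1, b2, b3 = 1.0, 2.0, 3.0        # arbitrary example numerator coefficients

# Controller canonical form
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a3, -a2, -a1]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[b3, b2, b1]])

s = 2.0 + 1.0j                     # arbitrary complex test frequency
G_ss = (C @ np.linalg.inv(s*np.eye(3) - A) @ B)[0, 0]
G_tf = (b1*s**2 + b2*s + b3) / (s**3 + a1*s**2 + a2*s + a3)
print(G_ss, G_tf)
```

The two values agree to machine precision, confirming the state-space realization.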

Thus, for a general transfer function

y(s)/u(s) = (b0 s^m + b1 s^(m-1) + ... + bm) / (a0 s^n + a1 s^(n-1) + ... + an)

the same construction gives the controller canonical state-space representation. For convenience, we shall let a0 = 1 and m = n - 1. In other words, for a system of the following:

y(s) = (b1 s^(n-1) + b2 s^(n-2) + ... + bn) / (s^n + a1 s^(n-1) + ... + an)  u(s)

we have

A = [  0    1     0    ...   0
       0    0     1    ...   0
      ...
      -an  -a(n-1)     ...  -a1 ],   b = [ 0; 0; ...; 1 ]

C = [ bn  b(n-1)  ...  b1 ]

Observer canonical form

Now, we set n = 3 for illustration. By assuming all initial values are zero, the equation can be written as

s^3 y + a1 s^2 y + a2 s y + a3 y = b1 s^2 u + b2 s u + b3 u

Choosing the states

y = x1
x1dot = x2 - a1 y + b1 u = -a1 x1 + x2 + b1 u
x2dot = x3 - a2 y + b2 u = -a2 x1 + x3 + b2 u
x3dot =      - a3 y + b3 u = -a3 x1 + b3 u

In other words,

A = [ -a1  1  0        b = [ b1       C = [ 1  0  0 ]
      -a2  0  1 ],           b2
      -a3  0  0 ]            b3 ],

In general,

A = [ -a1  1  0  ...  0
      -a2  0  1  ...  0
      ...
      -an  0  0  ...  0 ],   b = [ b1; b2; ...; bn ],   C = [ 1  0  ...  0 ]

Controllability canonical form

Again, use n = 3 as illustration; the controllability canonical form is given as:

A = [ 0  0  -a3        b = [ 1       C = [ b3  b2  b1 ] [ a2  a1  1
      1  0  -a2 ],           0                            a1  1   0
      0  1  -a1 ]            0 ],                         1   0   0 ]^(-1)

In general,

A = [ 0  0  ...  0  -an
      1  0  ...  0  -a(n-1)
      0  1  ...  0  -a(n-2)
      ...
      0  0  ...  1  -a1 ],   b = [ 1; 0; ...; 0 ],

C = [ bn  b(n-1)  ...  b1 ] [ a(n-1)  a(n-2)  ...  a1  1
                              a(n-2)  a(n-3)  ...  1   0
                              ...
                              a1      1       ...  0   0
                              1       0       ...  0   0 ]^(-1)

The observability canonical form

A = [  0    1    0   ...   0
       0    0    1   ...   0
      ...
       0    0    0   ...   1
      -an  -a(n-1)   ...  -a1 ],   C = [ 1  0  0  ...  0 ],

b = [ 1        0        0   ...  0        ^(-1)  [ b1
      a1       1        0   ...  0                 b2
      a2       a1       1   ...  0                 ...
      ...                                          bn ]
      a(n-1)   a(n-2)   ...  a1  1 ]
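All four canonical forms realize the same transfer function. A hedged numpy check for the observer canonical form built above (the coefficients are arbitrary illustrative choices, not from the notes):

```python
import numpy as np

a1, a2, a3 = 6.0, 11.0, 6.0       # arbitrary example coefficients
b1, b2, b3 = 1.0, 2.0, 3.0

# Observer canonical form as derived in the text
Ao = np.array([[-a1, 1.0, 0.0],
               [-a2, 0.0, 1.0],
               [-a3, 0.0, 0.0]])
Bo = np.array([[b1], [b2], [b3]])
Co = np.array([[1.0, 0.0, 0.0]])

s = 1.0 + 2.0j                    # arbitrary complex test frequency
Go = (Co @ np.linalg.inv(s*np.eye(3) - Ao) @ Bo)[0, 0]
G_tf = (b1*s**2 + b2*s + b3) / (s**3 + a1*s**2 + a2*s + a3)
print(Go, G_tf)
```

Evaluating Go at the test point matches the original transfer function, as required for any valid realization.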

Controllability and Observability

The dynamics of a linear time (shift) invariant discrete-time system may be expressed in terms of the state (plant) equation and output (observation or measurement) equation as follows

where x(k) is an n-dimensional state vector at time t = kT; u(k), an r-dimensional control (input) vector; and y(k), an m-dimensional output vector, respectively.

The parameters (elements) of A, an n x n (plant parameter) matrix; B, an n x r control (input) matrix; C, an m x n output parameter matrix; and D, an m x r parametric matrix, are constants for the LTI system. Similar to the above equation, a state variable representation of a SISO (single input and single output) discrete-time system (with direct coupling of output with input) can be written as

where the input u, output y and d are scalars, and b and c are n-dimensional vectors.

The concepts of controllability and observability for discrete time systems are similar to the continuous-time system.

A discrete time system is said to be controllable if there exists a finite integer n and an input u(k); k in [0, n-1] that will transfer any state x(0) = x0 to the state x_n at k = n.

Controllability

Consider the state equation, whose solution can be obtained as

The equation can be written as

The state x can be transferred to some arbitrary state x* in at most n steps if p(U) = rank of [B  AB  A^2 B  ...  A^(n-1) B] = n.

Thus, a system is controllable if the rank of the composite (n x nr) matrix [B  AB  A^2 B  ...  A^(n-1) B] is n.
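The Kalman rank test above takes a few lines with numpy. A hedged sketch with an arbitrary illustrative second-order pair (A, B), not a system from the notes:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Composite controllability matrix [B, AB, ..., A^(n-1)B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(ctrb)
print(ctrb, rank)
```

Here the composite matrix is [[0, 1], [1, -3]], which has rank 2 = n, so this pair is controllable.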

Observability

Consider the output equation, from which we can obtain

Thus, we can write

If the rank of this composite matrix is n, then the initial state x(0) can be determined from at most n measurements of the output and input.

We can, therefore, state that a discrete time system is observable if the rank of the composite (nm x n) matrix

[ C
  CA
  ...
  CA^(n-1) ]

is n.
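The observability rank test is the dual computation. A hedged sketch with the same arbitrary illustrative system as before:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Composite observability matrix [C; CA; ...; CA^(n-1)]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
rank = np.linalg.matrix_rank(obsv)
print(obsv, rank)
```

Here the stacked matrix is the 2x2 identity, rank 2 = n, so the pair (A, C) is observable.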

Effect of sampling time on controllability

We have a continuous-time plant which is to be controlled. The control action may be either continuous or discrete and must make the plant behave in a desired manner. If discrete control action is thought of, then the problem of selection of sampling interval arises. The selection of the best sampling interval for a digital control system is a compromise among many factors. The basic motivation to lower the sampling rate 1/T is cost. A decrease in sampling rate means more time is available for control calculations; hence slower computers are possible for a given control function, or more control capacity is available for a given computer. Thus, economically, the best choice is the slowest possible sampling rate that meets all the performance specifications. On the other hand, if the sampling rate is too low, the sampler discards part of the information present in a continuous-time signal. The minimum sampling rate or frequency has a definite relationship with the highest significant signal frequency (i.e., signal bandwidth). This relationship is given by the Sampling Theorem, according to which the information contained in a signal is fully preserved in its sampled version so long as the sampling frequency is at least twice the highest significant frequency contained in the signal. This sets an absolute lower bound to the sample rate selection.
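A bad sampling period can even destroy controllability. A classic illustration (not from the notes; the oscillator and numbers are an assumed textbook example): sampling the undamped oscillator xdot = [[0, 1], [-1, 0]] x + [0, 1] u exactly at T = pi, half its natural period, makes the discretized pair uncontrollable, while a generic T does not.

```python
import numpy as np

def discretize(T):
    # For A = [[0,1],[-1,0]], b = [0,1] the matrix exponential is a rotation,
    # so e^{AT} and the input integral have closed forms:
    Ad = np.array([[np.cos(T),  np.sin(T)],
                   [-np.sin(T), np.cos(T)]])
    bd = np.array([[1.0 - np.cos(T)],     # integral of e^{A tau} b over [0, T]
                   [np.sin(T)]])
    return Ad, bd

def ctrb_rank(T):
    Ad, bd = discretize(T)
    return np.linalg.matrix_rank(np.hstack([bd, Ad @ bd]))

print(ctrb_rank(1.0), ctrb_rank(np.pi))   # 2, then 1
```

At T = pi, Ad = -I and bd = [2, 0]^T, so bd and Ad bd are collinear and the rank drops to 1: the discrete model can no longer reach the whole state space.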


We are usually satisfied with the trial and error method of selection of the sampling interval. We compare the response of the continuous-time plant with models discretized for various sampling rates. Then the model with the slowest sampling rate which gives a response within tolerable limits is selected for future work. However, the method is not rigorous in approach. Also, a wide variety of inputs must be given to each prospective model to ensure that it is a true representative of the plant.
Pole placement by state feedback
Consider a linear dynamic system in the state space form

In some cases one is able to achieve the goal by using the full state feedback, which represents a linear combination of the state variables, that is

so that the closed loop system given by

has the desired specifications.

If the pair (A, b) is controllable, the original system can be transformed into phase variable canonical form, i.e. there exists a nonsingular transformation

such that

where ai are the coefficients of the characteristic polynomial of A, that is

For single input single output systems the state feedback is given by

After
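For a controllable single-input system, the gain can be computed directly by Ackermann's formula, k = [0 ... 1] M^(-1) phi_d(A), where M is the controllability matrix and phi_d the desired characteristic polynomial. This is a standard route consistent with, but not derived in, these notes; the system and desired poles below are arbitrary illustrative choices.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = np.array([[0.0],
              [1.0]])

desired = [-5.0, -6.0]
phi = np.poly(desired)                         # s^2 + 11 s + 30
phiA = phi[0]*A@A + phi[1]*A + phi[2]*np.eye(2)

M = np.hstack([b, A @ b])                      # controllability matrix
k = np.array([[0.0, 1.0]]) @ np.linalg.inv(M) @ phiA

closed = np.linalg.eigvals(A - b @ k)
print(k, sorted(closed.real))                  # gain [28, 8]; poles -6, -5
```

The closed-loop matrix A - bk has exactly the requested eigenvalues, which is the whole point of state-feedback pole placement.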

Linear observer design

In the following, a linear time invariant observer for reconstruction of the crystal radius from the weighing signal is derived. As a starting point, a linear approximation of the system behaviour can be used. For this purpose the nonlinear equations required for observer design need to be linearized around some operating (steady state) values, i.e. the equations are expanded in a Taylor series which is truncated after the first-order terms

Can be approximated by

around some fixed operating values a0, v0, in the new (deviation) coordinates.

In the same way one can continue with the remaining equations needed for describing the process dynamics. For example, the linear model derived is

where x is the state vector. Furthermore, one has the 3 x 3 system matrix A, the 3 x 2 control matrix B and the 1 x 3 output matrix C. One has to keep in mind that the values of the state space vector x, the input vector u and the output y describe the deviation of the corresponding quantities from their operating values.
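Once a linear model (A, B, C) is available, a Luenberger observer xhat' = A xhat + B u + L(y - C xhat) has error dynamics e' = (A - LC)e, so the gain L is chosen to make A - LC stable. By duality this is pole placement on the pair (A^T, C^T). A hedged sketch with arbitrary illustrative second-order numbers, not the crystal-growth model discussed above:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Ackermann's formula on the dual pair (A^T, C^T), observer poles at -8, -9
phi = np.poly([-8.0, -9.0])                   # s^2 + 17 s + 72
At, bt = A.T, C.T
phiA = phi[0]*At@At + phi[1]*At + phi[2]*np.eye(2)
M = np.hstack([bt, At @ bt])                  # controllability matrix of the dual
kt = np.array([[0.0, 1.0]]) @ np.linalg.inv(M) @ phiA
L = kt.T                                      # observer gain

obs_eigs = np.linalg.eigvals(A - L @ C)
print(L, sorted(obs_eigs.real))               # gain [[14],[28]]; poles -9, -8
```

The estimation error then decays with the chosen observer eigenvalues, regardless of the initial estimate.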


Unit IV & V
Optimal control & Optimal Estimation
Optimal control theory, an extension of the calculus of variations, is a mathematical
optimization method for deriving control policies. The method is largely due to the work of Lev
Pontryagin and his collaborators in the Soviet Union[1] and Richard Bellman in the United
States.


General method


Optimal control deals with the problem of finding a control law for a given system such that a
certain optimality criterion is achieved. A control problem includes a cost functional that is a
function of state and control variables. An optimal control is a set of differential equations
describing the paths of the control variables that minimize the cost functional. The optimal
control can be derived using Pontryagin's maximum principle (a necessary condition also
known as Pontryagin's minimum principle or simply Pontryagin's Principle[2]), or by solving the
Hamilton-Jacobi-Bellman equation (a sufficient condition).

We begin with a simple example. Consider a car traveling on a straight line through a hilly
road. The question is, how should the driver press the accelerator pedal in order to minimize the
total traveling time? Clearly in this example, the term control law refers specifically to the way
in which the driver presses the accelerator and shifts the gears. The "system" consists of both
the car and the road, and the optimality criterion is the minimization of the total traveling time.
Control problems usually include ancillary constraints. For example the amount of available
fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed
limits, etc.

A proper cost functional is a mathematical expression giving the traveling time as a function of
the speed, geometrical considerations, and initial conditions of the system. It is often the case
that the constraints are interchangeable with the cost functional.

Another optimal control problem is to find the way to drive the car so as to minimize its fuel
consumption, given that it must complete a given course in a time not exceeding some amount.
Yet another control problem is to minimize the total monetary cost of completing the trip, given

assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

subject to the first-order dynamic constraints

the algebraic path constraints

and the boundary conditions

where is the state, is the control, is the independent variable (generally speaking,
time), is the initial time, and is the terminal time. The terms and are called the
endpoint cost and Lagrangian, respectively. Furthermore, it is noted that the path constraints
are in general inequality constraints and thus may not be active (i.e., equal to zero) at the
optimal solution. It is also noted that the optimal control problem as stated above may have
multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any
solution to the optimal control problem is locally minimizing.

Linear quadratic control


A special case of the general nonlinear optimal control problem given in the previous section is
the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows.
Minimize the quadratic continuous-time cost functional

Subject to the linear first-order dynamic constraints

and the initial condition

A particular form of the LQ problem that arises in many control system problems is that of the
linear quadratic regulator (LQR) where all of the matrices (i.e., , , , and ) are
constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit
(this last assumption is what is known as infinite horizon). The LQR problem is
stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional

Subject to the linear time-invariant first-order dynamic constraints

and the initial condition

In the finite-horizon case the matrices are restricted in that and are positive semi-definite
and positive definite, respectively. In the infinite-horizon case, however, the matrices and
are not only positive-semidefinite and positive-definite, respectively, but are also constant.
These additional restrictions on and in the infinite-horizon case are enforced to ensure that
the cost functional remains positive. Furthermore, in order to ensure that the cost function is
bounded, the additional restriction is imposed that the pair is controllable. Note that the
LQ or LQR cost functional can be thought of physically as attempting to minimize the control
energy (measured as a quadratic form).

The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless
because it assumes that the operator is driving the system to zero-state and hence driving the
output of the system to zero. This is indeed correct. However the problem of driving the output
to a desired nonzero level can be solved after the zero output one is. In fact, it can be proved
that this secondary LQR problem can be solved in a very straightforward manner. It has been
shown in classical optimal control theory that the LQ (or LQR) optimal control has the
feedback form

where is a properly dimensioned matrix, given as

and is the solution of the differential Riccati equation. The differential Riccati equation is
given as

For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the
terminal boundary condition

For the infinite horizon LQR problem, the differential Riccati equation is replaced with the
algebraic Riccati equation (ARE) given as

Understanding that the ARE arises from the infinite horizon problem, the matrices , , , and

are all constant. It is noted that there are in general multiple solutions to the algebraic Riccati
equation and the positive definite (or positive semi-definite) solution is the one that is used to
compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf Kalman.[3]
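The whole LQR pipeline is easiest to see in the scalar case. For xdot = a x + b u with cost integrand q x^2 + r u^2, the ARE reduces to 2aS - S^2 b^2 / r + q = 0, whose positive root has a closed form, and K = b S / r. This hedged sketch uses arbitrary unit parameters:

```python
import numpy as np

a, b, q, r = 1.0, 1.0, 1.0, 1.0   # arbitrary illustrative scalar parameters

# Positive root of 2 a S - S^2 b^2 / r + q = 0
S = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2
K = b * S / r                     # LQR feedback gain, u = -K x

residual = 2*a*S - (S**2) * b**2 / r + q    # ARE residual, should be ~0
closed = a - b*K                            # closed-loop pole, negative => stable
print(S, K, residual, closed)
```

With these numbers S = 1 + sqrt(2), and the closed-loop pole a - bK = -sqrt(2) is stable even though the open-loop plant (a = 1) is unstable, illustrating why the positive definite ARE solution is the one used for the feedback gain.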

Numerical methods for optimal control


Optimal control problems are generally nonlinear and therefore, generally do not have analytic
solutions (e.g., like the linear-quadratic optimal control problem). As a result, it is necessary to
employ numerical methods to solve optimal control problems. In the early years of optimal
control (circa 1950s to 1980s) the favored approach for solving optimal control problems was
that of indirect methods. In an indirect method, the calculus of variations is employed to obtain
the first-order optimality conditions. These conditions result in a two-point (or, in the case of a
complex problem, a multi-point) boundary-value problem. This boundary-value problem
actually has a special structure because it arises from taking the derivative of a Hamiltonian.
Thus, the resulting dynamical system is a Hamiltonian system of the form

where

is the augmented Hamiltonian and in an indirect method, the boundary-value problem is solved
(using the appropriate boundary or transversality conditions). The beauty of using an indirect
method is that the state and adjoint (i.e., ) are solved for and the resulting solution is readily
verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-
value problem is often extremely difficult to solve (particularly for problems that span large
time intervals or problems with interior point constraints). A well-known software program that
implements indirect methods is BNDSCO.[4]

The approach that has risen to prominence in numerical optimal control over the past two
decades (i.e., from the 1980s to the present) is that of so called direct methods. In a direct
method, the state and/or control are approximated using an appropriate function approximation
(e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the
cost functional is approximated as a cost function. Then, the coefficients of the function
approximations are treated as optimization variables and the problem is "transcribed" to a
nonlinear optimization problem of the form:

Minimize

subject to the algebraic constraints

Depending upon the type of direct method employed, the size of the nonlinear optimization
problem can be quite small (e.g., as in a direct shooting or quasilinearization method) or may be
quite large (e.g., a direct collocation method[5]). In the latter case (i.e., a collocation method),
the nonlinear optimization problem may be literally thousands to tens of thousands of variables
be literally thousands tens of thousands of variables

and constraints. Given the size of many NLPs arising from a direct method, it may appear
somewhat counter-intuitive that solving the nonlinear optimization problem is easier than
solving the boundary-value problem. It is, however, the fact that the NLP is easier to solve than
the boundary-value problem. The reason for the relative ease of computation, particularly of a
direct collocation method, is that the NLP is sparse and many well-known software programs
exist (e.g., SNOPT[6]) to solve large sparse NLPs. As a result, the range of problems that can be
solved via direct methods (particularly direct collocation methods which are very popular these
days) is significantly larger than the range of problems that can be solved via indirect methods.
In fact, direct methods have become so popular these days that many people have written
elaborate software programs that employ these methods. In particular, many such programs
written in FORTRAN include DIRCOL,[7] SOCS,[8] OTIS,[9] GESOP/ASTOS[10] and DITAN.[11]
In recent years, due to the advent of the MATLAB programming language, optimal control
software in MATLAB has become more common. Examples of academically developed
MATLAB software tools implementing direct methods include RIOTS,[12]DIDO,[13]
DIRECT,[14] and GPOPS,[15] while an example of an industry developed MATLAB tool is
PROPT.[16] These software tools have increased significantly the opportunity for people to
explore complex optimal control problems both for academic research and industrial-strength
problems. Finally, it is noted that general-purpose MATLAB optimization environments such
as TOMLAB have made coding complex optimal control problems significantly easier than was
previously possible in languages such as C and FORTRAN.

Discrete-time optimal control


The examples thus far have shown continuous time systems and control solutions. In fact, as
optimal control solutions are now often implemented digitally, contemporary control theory is
now primarily concerned with discrete time systems and solutions. The Theory of Consistent
Approximations[17] provides conditions under which solutions to a series of increasingly
accurate discretized optimal control problem converge to the solution of the original,
continuous-time problem. Not all discretization methods have this property, even seemingly
obvious ones. For instance, using a variable step-size routine to integrate the problem's dynamic
equations may generate a gradient which does not converge to zero (or point in the right
direction) as the solution is approached. The direct method RIOTS is based on the Theory of
Consistent Approximation.

Examples
A common solution strategy in many optimal control problems is to solve for the costate
(sometimes called the shadow price) . The costate summarizes in one number the marginal
value of expanding or contracting the state variable next turn. The marginal value is not only
the gains accruing to it next turn but associated with the duration of the program. It is nice when
can be solved analytically, but usually the most one can do is describe it sufficiently well
that the intuition can grasp the character of the solution and an equation solver can solve
numerically for the values.

Having obtained , the turn-t optimal value for the control can usually be solved as a
differential equation conditional on knowledge of . Again it is infrequent, especially in
continuous-time problems, that one obtains the value of the control or the state explicitly.
Usually the strategy is to solve for thresholds and regions that characterize the optimal control
and use a numerical solver to isolate the actual choice values in time.



Finite time

Consider the problem of a mine owner who must decide at what rate to extract ore from his
mine. He owns rights to the ore from date to date . At date there is ore in the ground,
and the instantaneous stock of ore declines at the rate the mine owner extracts it u(t). The
mine owner extracts ore at cost and sells ore at a constant price . He does not
value the ore remaining in the ground at time (there is no "scrap value"). He chooses the rate
of extraction in time u(t) to maximize profits over the period of ownership with no time
discounting.

1. Discrete-time version
The manager maximizes profit:

subject to the law of evolution for the state variable

Form the Hamiltonian and differentiate:

As the mine owner does not value the ore remaining at time ,

Using the above equations, it is easy to solve for the and series, and using the initial and turn-T conditions, the series can be solved explicitly, giving .

2. Continuous-time version
The manager maximizes profit:

subject to the law of evolution for the state variable

Form the Hamiltonian and differentiate:

As the mine owner does not value the ore remaining at time ,

Using the above equations, it is easy to solve for the differential equations governing and , and using the initial and turn-T conditions, the functions can be solved numerically.


Optimal control
Victor M. Becerra (2008), Scholarpedia, 3(1):5354. doi:10.4249/scholarpedia.5354
Victor M. Becerra, School of Systems Engineering, University of Reading, UK

Optimal control is the process of determining control and state trajectories for a dynamic
system over a period of time to minimise a performance index.


Origins and applications


Optimal control is closely related in its origins to the theory of calculus of variations. Some
important contributors to the early theory of optimal control and calculus of variations include
Johann Bernoulli (1667-1748), Isaac Newton (1642-1727), Leonhard Euler (1707-1783),

Ludovico Lagrange (1736-1813), Adrien Legendre (1752-1833), Carl Jacobi (1804-1851),


William Hamilton (1805-1865), Karl Weierstrass (1815-1897), Adolph Mayer (1839-1907),
and Oskar Bolza (1857-1942). Some important milestones in the development of optimal
control in the 20th century include the formulation of dynamic programming by Richard Bellman
(1920-1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908-
1988) and co-workers also in the 1950s, and the formulation of the linear quadratic regulator
and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s. See the review papers
Sussmann and Willems (1997) and Bryson (1996) for further historical details.

Optimal control and its ramifications have found applications in many different fields, including
aerospace, process control, robotics, bioengineering, economics, finance, and management
science, and it continues to be an active research area within control theory. Before the arrival
of the digital computer in the 1950s, only fairly simple optimal control problems could be
solved. The arrival of the digital computer has enabled the application of optimal control theory
and methods to many complex problems.

Formulation of optimal control problems


There are various types of optimal control problems, depending on the performance index, the
type of time domain (continuous, discrete), the presence of different types of constraints, and
what variables are free to be chosen. The formulation of an optimal control problem requires the
following:

a mathematical model of the system to be controlled,


a specification of the performance index,
a specification of all boundary conditions on states, and constraints to be satisfied by
states and controls,
a statement of what variables are free.

Continuous time optimal control using the variational approach
General case with fixed final time and no terminal or path constraints

If there are no path constraints on the states or the control variables, and if the initial and final
times are fixed, a fairly general continuous time optimal control problem can be defined as
follows:

Problem 1: Find the control vector trajectory to minimize the


performance index:

subject to:

where is the time interval of interest, is the state vector,


is a terminal cost function, is an

intermediate cost function, and is a vector field. Note that


equation (2) represents the dynamics of the system and its initial state condition. Problem 1 as
defined above is known as the Bolza problem. If then the problem is known
as the Mayer problem, if it is known as the Lagrange problem. Note that the
performance index is a functional, this is a rule of correspondence that assigns a
real value to each function u in a class. Calculus of variations (Gelfand and Fomin, 2000) is
concerned with the optimisation of functionals, and it is the tool that is used in this section to
derive necessary optimality conditions for the minimisation of J(u).

Adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector
function (also known as the co-state), to define an augmented
performance index

Define the Hamiltonian function H as follows:

such that can be written as:

Assume that and are fixed. Now consider an infinitesimal variation in that is
denoted as Such a variation will produce variations in the state history and a
variation in the performance index

Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of
and equal to zero, as follows:



This choice of results in the following expression for assuming that the initial

state is fixed, so that

For a minimum, it is necessary that This gives the stationarity condition:

Equations (2), (5), (6), and (7) are the first-order necessary conditions for a minimum of J.
Equation (5) is known as the co-state (or adjoint) equation. Equation (6) and the initial state
condition represent the boundary (or transversality) conditions. These necessary optimality
conditions, which define a two point boundary value problem, are very useful as they allow to
find analytical solutions to special types of optimal control problems, and to define numerical
algorithms to search for solutions in general cases. Moreover, they are useful to check the
extremality of solutions found by computational methods. Sufficient conditions for general
nonlinear problems have also been established. Distinctions are made between sufficient
conditions for weak local, strong local, and strong global minima. Sufficient conditions are
useful to check if an extremal solution satisfying the necessary optimality conditions actually
yields a minimum, and the type of minimum that is achieved. See (Gelfand and Fomin, 2003),
(Wan, 1995) and (Leitmann, 1981) for further details.

The theory presented above does not deal with the existence of an optimal control that
minimises the performance index J. See the book by Cesari (1983) which covers theoretical
issues on the existence of optimal controls. Moreover, a key point in the mathematical theory of
optimal control is the existence of the Lagrange multiplier function λ(t). See the book by
Luenberger (1997) for details on this issue.

The linear quadratic regulator

A special case of optimal control problem which is of particular importance arises when the
objective function is a quadratic function of x and u, and the dynamic equations are linear. The
resulting feedback law in this case is known as the linear quadratic regulator (LQR). The
performance index is given by:

J = \frac{1}{2} \mathbf{x}^T(T) \mathbf{S}(T) \mathbf{x}(T) + \frac{1}{2} \int_0^T \left( \mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{u}^T \mathbf{R} \mathbf{u} \right) dt \tag{8}

where S(T) and Q are positive semidefinite matrices, and R is a positive definite matrix,
while the system dynamics obey:

\dot{\mathbf{x}} = \mathbf{A} \mathbf{x} + \mathbf{B} \mathbf{u}

where A is the system matrix and B is the input matrix.

In this case, using the optimality conditions given above, it is possible to find that the optimal
control law can be expressed as a linear state feedback:

\mathbf{u}(t) = -\mathbf{K}(t)\,\mathbf{x}(t) \tag{10}

where the state feedback gain is given by:

\mathbf{K}(t) = \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S}(t)

and S(t) is the solution to the differential Riccati equation:

-\dot{\mathbf{S}} = \mathbf{A}^T \mathbf{S} + \mathbf{S} \mathbf{A} - \mathbf{S} \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S} + \mathbf{Q}

with terminal condition S(T) given by the terminal weight in (8).

In the particular case where T → ∞, and provided the pair (A,B) is stabilizable, the Riccati
differential equation converges to a limiting solution S, and it is possible to express the optimal
control law as a state feedback as in (10) but with a constant gain K, which is given by:

\mathbf{K} = \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S}

where S is the positive definite solution to the algebraic Riccati equation:

\mathbf{0} = \mathbf{A}^T \mathbf{S} + \mathbf{S} \mathbf{A} - \mathbf{S} \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S} + \mathbf{Q} \tag{12}

Moreover, if the pair (A,C) is observable, where Q = CᵀC, then the closed loop system

\dot \mathbf{x} = (\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x} \tag{13}

is asymptotically stable. This is an important result, as the linear quadratic regulator provides a
way of stabilizing any linear system that is stabilizable. It is worth pointing out that there are
well established methods and software for solving the algebraic Riccati equation (12). This
facilitates the design of linear quadratic regulators. A useful extension of the linear quadratic
regulator ideas involves modifying the performance index (8) to allow for a reference signal
that the output of the system should track. Moreover, an extension of the LQR concept to
systems with Gaussian additive noise, which is known as the linear quadratic Gaussian (LQG)
controller, has been widely applied. The LQG controller involves coupling the linear quadratic
regulator with the Kalman filter using the separation principle. See (Lewis and Syrmos, 1995)
for further details.
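As a numerical illustration of the constant-gain case, the sketch below (an illustrative script; the double integrator plant and the weights Q = I, R = 1 are chosen here for the example, not taken from the text) solves the algebraic Riccati equation with SciPy and checks that the closed loop is stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator plant: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting, positive semidefinite
R = np.array([[1.0]])  # control weighting, positive definite

# Positive definite solution S of 0 = A'S + SA - S B R^-1 B' S + Q
S = solve_continuous_are(A, B, Q, R)

# Constant state feedback gain K = R^-1 B' S
K = np.linalg.solve(R, B.T @ S)

# The closed loop A - BK should be asymptotically stable
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K)                      # for this plant, K = [1, sqrt(3)]
print(closed_loop_eigs.real)  # both real parts negative
```

For this particular plant the algebraic Riccati equation can be solved by hand, giving K = [1, √3], which the numerical solution reproduces.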

Case with terminal constraints

In case Problem 1 is also subject to a set of terminal constraints of the form:

\boldsymbol{\psi}(\mathbf{x}(T), T) = \mathbf{0} \tag{14}

where ψ is a vector function, variational analysis (Lewis and Syrmos,
1995) shows that the necessary conditions for a minimum of J are (7), (5), (2), and the
following terminal condition:

\left( \frac{\partial \varphi}{\partial \mathbf{x}} + \left( \frac{\partial \boldsymbol{\psi}}{\partial \mathbf{x}} \right)^T \boldsymbol{\nu} - \boldsymbol{\lambda} \right)^T \Bigg|_{t=T} \delta \mathbf{x}(T) + \left( \frac{\partial \varphi}{\partial t} + \boldsymbol{\nu}^T \frac{\partial \boldsymbol{\psi}}{\partial t} + H \right) \Bigg|_{t=T} \delta T = 0 \tag{15}




where ν is the Lagrange multiplier associated with the terminal constraint, δT is
the variation of the final time, and δx(T) is the variation of the final state. Note that if the
final time is fixed, then δT = 0 and the second term vanishes. Also, if the terminal constraint
is such that element j of x is fixed at the final time, then element j of δx(T) vanishes.

Case with input constraints - the minimum principle

Realistic optimal control problems often have inequality constraints associated with the input
variables, so that the input variable u is restricted to lie within an admissible compact region Ω,
such that:

\mathbf{u}(t) \in \Omega

It was shown by Pontryagin and co-workers (Pontryagin, 1987) that in this case, the necessary
conditions (2), (5) and (6) still hold, but the stationarity condition (7) has to be replaced by:

H(\mathbf{x}^*, \mathbf{u}^*, \boldsymbol{\lambda}^*, t) \le H(\mathbf{x}^*, \mathbf{u}, \boldsymbol{\lambda}^*, t)

for all admissible u, where * denotes optimal variables. This condition is known as Pontryagin's
minimum principle. According to this principle, the Hamiltonian must be minimised over all
admissible u for optimal values of the state and co-state variables.
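A minimal sketch of this principle (illustrative only; it assumes a Hamiltonian of the form H = ½u² + λ₁x₂ + λ₂u, as in the double integrator example later in this section, together with a hypothetical input bound |u| ≤ u_max): since H is a convex quadratic in u, the constrained minimiser is the unconstrained stationary point u = −λ₂ clipped to the admissible set.

```python
import numpy as np

def min_hamiltonian_u(lam2, u_max=1.0):
    """Minimise the u-dependent part of H(u) = 0.5*u**2 + lam2*u over
    the admissible set [-u_max, u_max]. The unconstrained minimiser is
    u = -lam2 (from dH/du = u + lam2 = 0); Pontryagin's minimum
    principle then selects the closest admissible value, i.e. the
    clipped one."""
    return float(np.clip(-lam2, -u_max, u_max))

# Interior minimiser when the bound is inactive, boundary value otherwise:
print(min_hamiltonian_u(0.5))   # -0.5
print(min_hamiltonian_u(3.0))   # -1.0
```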

Minimum time problems

One special class of optimal control problem involves finding the optimal input u(t) to reach a
terminal constraint in minimum time. This kind of problem is defined as follows.

Problem 2: Find u(t) and T to minimise:

J = \int_0^T dt = T

subject to:

\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t), \qquad \boldsymbol{\psi}(\mathbf{x}(T), T) = \mathbf{0}

See (Lewis and Syrmos, 1995) and (Naidu, 2003) for further details on minimum time
problems.
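As a concrete instance (a classic textbook result stated here for illustration; it is not derived in the text): for the double integrator ẋ₁ = x₂, ẋ₂ = u with |u| ≤ 1, the minimum-time control to the origin is bang-bang with a single switch, and for initial states above the switching curve x₁ = −½x₂|x₂| both the switching time and the total time have closed forms:

```python
import numpy as np

def min_time_to_origin(x1, x2):
    """Minimum time to drive the double integrator x1' = x2, x2' = u,
    |u| <= 1, from (x1, x2) to the origin, for states above the
    switching curve x1 = -0.5*x2*|x2| (first arc u = -1, then u = +1).
    Returns (switch_time, total_time)."""
    assert x1 > -0.5 * x2 * abs(x2), "state must lie above the switching curve"
    # With u = -1: x2(t) = x2 - t and x1(t) = x1 + x2*t - t**2/2.
    # The switch happens when the u = +1 arc through the origin is
    # reached, i.e. when x1(t) = 0.5*x2(t)**2 with x2(t) <= 0:
    t_switch = x2 + np.sqrt(0.5 * x2**2 + x1)
    # The final arc takes |x2(t_switch)| = t_switch - x2 time units:
    return t_switch, 2.0 * t_switch - x2

ts, T = min_time_to_origin(1.0, 1.0)
print(T)   # 1 + sqrt(6), about 3.449
```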

Problems with path constraints

Sometimes it is necessary to restrict the state and control trajectories such that a set of constraints is
satisfied within the interval of interest:

\mathbf{c}(\mathbf{x}(t), \mathbf{u}(t), t) \le \mathbf{0}, \qquad t \in [0, T]

where c is a vector function. Moreover, in some problems it may be
required that the state satisfies equality constraints at some intermediate point in time t₁ ∈ (0, T).
These are known as interior point constraints and can be expressed as follows:

\mathbf{N}(\mathbf{x}(t_1), t_1) = \mathbf{0}

where N is a vector function. See Bryson and Ho (1975) for a detailed treatment of
optimal control problems with path constraints.

Singular arcs

In some optimal control problems, extremal arcs satisfying (7) occur where the matrix
∂²H/∂u² is singular. These are called singular arcs. Additional tests are required to verify whether a
singular arc is optimising. A particular case of practical relevance occurs when the Hamiltonian
function is linear in at least one of the control variables. In such cases, the control is not
determined in terms of the state and co-state by the stationarity condition (7). Instead, the
control is determined by the condition that the time derivatives of ∂H/∂u must be zero
along the singular arc. In the case of a single control u, once the control is obtained by setting
the time derivative of ∂H/∂u to zero, then additional necessary conditions known as the
generalized Legendre-Clebsch conditions must be checked:

(-1)^k \frac{\partial}{\partial u} \left[ \frac{d^{2k}}{dt^{2k}} \frac{\partial H}{\partial u} \right] \ge 0, \qquad k = 0, 1, 2, \ldots

The presence of singular arcs may make it difficult for computational optimal control methods
to find accurate solutions if the appropriate conditions are not enforced a priori. See (Bryson
and Ho, 1975) and (Sethi and Thompson, 2000) for further details on the handling of singular
arcs.

Computational optimal control


The solutions to many optimal control problems cannot be found by analytical means. Over the
years, many numerical procedures have been developed to solve general optimal control
problems. With direct methods, optimal control problems are discretised and converted into
nonlinear programming problems of the form:

Problem 3: Find a decision vector \mathbf{y} to minimise F(\mathbf{y}) subject to:

\mathbf{g}(\mathbf{y}) = \mathbf{0}, \qquad \mathbf{h}(\mathbf{y}) \le \mathbf{0}

and simple bounds \mathbf{y}_l \le \mathbf{y} \le \mathbf{y}_u, where F is a differentiable scalar
function, and g and h are differentiable vector functions.
Some methods involve the discretization of the differential equations using, for example, Euler,
Trapezoidal, or Runge-Kutta methods, by defining a grid of N points covering the time interval
[0, T]. In this way, the differential equations become
equality constraints of the nonlinear programming problem. The decision vector y contains the
control and state variables at the grid points. Other direct methods involve a decision vector y
which contains only the control variables at the grid points, with the differential equations
solved by integration and their gradients found by integrating the co-state equations, or by finite
differences. Other direct methods involve the approximation of the control and states using
such basis functions as splines or Lagrange polynomials. There are well established numerical

techniques for solving nonlinear programming problems with constraints, such as sequential
quadratic programming (Bazaraa et al, 1993). Direct methods using nonlinear programming are
known to deal in an efficient manner with problems involving path constraints. See Betts (2001)
for more details on computational optimal control using nonlinear programming. See also
(Becerra, 2004) for a straightforward way of combining a dynamic simulation tool with
nonlinear programming code to solve optimal control problems with constraints.
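To make the direct approach concrete, here is a minimal sketch (an illustrative script, not a method taken from the text) that transcribes the minimum-energy double integrator problem solved analytically in the Examples section: the decision vector holds the control values at N grid points, the dynamics are propagated by forward Euler, and the terminal constraint x(1) = 0 is imposed as an equality constraint for SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

N, T = 100, 1.0
dt = T / N

def terminal_state(u):
    """Forward Euler integration of x1' = x2, x2' = u from x(0) = (1, 1);
    returns the state at t = T, which must equal (0, 0)."""
    x1, x2 = 1.0, 1.0
    for uk in u:
        x1, x2 = x1 + dt * x2, x2 + dt * uk
    return np.array([x1, x2])

def cost(u):
    # Discretised performance index J = 0.5 * integral of u(t)^2 dt
    return 0.5 * dt * np.sum(u**2)

result = minimize(cost, np.zeros(N), method="SLSQP",
                  constraints=[{"type": "eq", "fun": terminal_state}])
u_opt = result.x
print(result.success, cost(u_opt))   # cost close to the analytic value 14
```

The analytic optimum for this problem is u*(t) = 18t − 10 with a cost of 14 (obtained by evaluating ½∫₀¹(18t − 10)² dt), and the Euler transcription reproduces it to within discretisation error.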

Indirect methods involve iterating on the necessary optimality conditions to seek their
satisfaction. This usually involves attempting to solve nonlinear two-point boundary value
problems, through the forward integration of the plant equations and the backward integration
of the co-state equations. Examples of indirect methods include the gradient method and the
multiple shooting method, both of which are described in detail in the book by Bryson (1999).

Dynamic programming
Dynamic programming is an alternative to the variational approach to optimal control. It was
proposed by Bellman in the 1950s, and is an extension of Hamilton-Jacobi theory. Bellman's
principle of optimality is stated as follows: "An optimal policy has the property that regardless
of what the previous decisions have been, the remaining decisions must be optimal with regard
to the state resulting from those previous decisions". This principle serves to limit the number
of potentially optimal control strategies that must be investigated. It also shows that the optimal
strategy must be determined by working backward from the final time.

Consider Problem 1 with the addition of a terminal state constraint (14). Using Bellman's
principle of optimality, it is possible to derive the Hamilton-Jacobi-Bellman (HJB) equation:

-\frac{\partial J^*}{\partial t} = \min_{\mathbf{u}} \left[ L(\mathbf{x},\mathbf{u},t) + \left( \frac{\partial J^*}{\partial \mathbf{x}} \right)^T \mathbf{f}(\mathbf{x},\mathbf{u},t) \right]

where J* is the optimal performance index. In some cases, the HJB equation can be used to find
analytical solutions to optimal control problems.
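As a standard illustration (a textbook verification, not carried out in the text), for the linear quadratic problem described earlier, guessing a quadratic value function J*(x,t) = ½xᵀS(t)x turns the HJB equation into a Riccati equation:

```latex
% Substituting J* = (1/2) x^T S(t) x into the HJB equation:
%   -dJ*/dt = min_u [ (1/2)(x^T Q x + u^T R u) + (S x)^T (A x + B u) ]
% The minimising control is u = -R^{-1} B^T S x; substituting it back:
-\tfrac{1}{2}\,\mathbf{x}^T \dot{\mathbf{S}}\,\mathbf{x}
  = \tfrac{1}{2}\,\mathbf{x}^T \left( \mathbf{Q}
    + \mathbf{A}^T \mathbf{S} + \mathbf{S}\mathbf{A}
    - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T \mathbf{S} \right) \mathbf{x}
```

Since this must hold for every x, S(t) must satisfy the differential Riccati equation of the linear quadratic regulator, so the dynamic programming and variational routes agree for this problem.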

Dynamic programming includes formulations for discrete time systems as well as combinatorial
systems, which are discrete systems with quantized states and controls. Discrete dynamic
programming, however, suffers from the 'curse of dimensionality', which causes the
computations and memory requirements to grow dramatically with the problem size. See the
books (Lewis and Syrmos, 1995), (Kirk, 1970), and (Bryson and Ho, 1975) for further details
on dynamic programming.

Discrete-time optimal control


Most of the problems defined above have discrete-time counterparts. These formulations are
useful when the dynamics are discrete (for example, a multistage system), or when dealing with
computer controlled systems. In discrete-time, the dynamics can be expressed as a difference
equation:

\mathbf{x}(k+1) = \mathbf{f}(\mathbf{x}(k), \mathbf{u}(k), k)

where k is an integer index, x(k) is the state vector, u(k) is the control vector, and f is a vector
function. The objective is to find a control sequence u(0), u(1), ..., u(N-1) to
minimise a performance index of the form:

J = \varphi(\mathbf{x}(N)) + \sum_{k=0}^{N-1} L(\mathbf{x}(k), \mathbf{u}(k), k)

See, for example, (Lewis, 1995), (Bryson and Ho, 1975), and (Bryson, 1999) for further details.
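For the discrete-time linear quadratic case, the backward-in-time structure imposed by the principle of optimality reduces to a simple Riccati recursion. The sketch below (an illustrative script; the plant is a discretised double integrator chosen here, not one from the text) iterates the recursion backward from S_N = Q and checks it against SciPy's algebraic solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discretised double integrator with sampling time 0.1
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # stage state weighting
R = np.array([[1.0]])  # stage control weighting

# Backward dynamic-programming (Riccati) recursion:
#   K_k = (R + B' S_{k+1} B)^-1 B' S_{k+1} A
#   S_k = Q + A' S_{k+1} (A - B K_k)
S = Q.copy()
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)

# As the horizon grows, the recursion converges to the stationary
# solution of the discrete algebraic Riccati equation:
S_inf = solve_discrete_are(A, B, Q, R)
print(np.max(np.abs(S - S_inf)))   # essentially zero
```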

Examples
Minimum energy control of a double integrator with terminal constraint

Consider the following optimal control problem. Find u(t), t ∈ [0, 1], to minimise:

J = \frac{1}{2} \int_0^1 u(t)^2 \, dt

subject to:

\dot{x}_1 = x_2, \qquad \dot{x}_2 = u \tag{17}

x_1(0) = 1, \quad x_2(0) = 1, \quad x_1(1) = 0, \quad x_2(1) = 0

Figure 1: Optimal control and state histories for the double integrator example

The Hamiltonian function (4) is given by:

H = \frac{1}{2} u^2 + \lambda_1 x_2 + \lambda_2 u

The stationarity condition (7) yields:

u = -\lambda_2 \tag{18}

The co-state equation (5) gives:

\dot{\lambda}_1 = 0, \qquad \dot{\lambda}_2 = -\lambda_1

so that

\lambda_1(t) = a, \qquad \lambda_2(t) = b - a t \tag{19}

where a and b are constants to be found. Replacing (19) in (18) gives:

u(t) = a t - b \tag{20}



In this case, the terminal constraint function is:

\boldsymbol{\psi}(\mathbf{x}(1)) = \begin{bmatrix} x_1(1) \\ x_2(1) \end{bmatrix} = \mathbf{0}

so that the final value of the state vector is fixed, which implies that δx(1) = 0. Noting
that since the final time is fixed, then δT = 0, the terminal condition (15) is satisfied.
Replacing (20) into the state equation (17), and integrating both states gives:

x_2(t) = \frac{1}{2} a t^2 - b t + c, \qquad x_1(t) = \frac{1}{6} a t^3 - \frac{1}{2} b t^2 + c t + d \tag{21}

Evaluating (21) at t=0 and using the initial conditions gives the values c=1 and d=1. Evaluating
(21) at the terminal time t=1 gives two simultaneous equations:

\frac{1}{2} a - b + 1 = 0, \qquad \frac{1}{6} a - \frac{1}{2} b + 2 = 0

This yields a=18 and b=10. Therefore, the optimal control is given by:

u^*(t) = 18 t - 10
The resulting optimal control and state histories are shown in Fig 1.
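As a quick numerical check (an illustrative script, not part of the original example), integrating the state equations under the optimal control u*(t) = 18t − 10 should return the state from (1, 1) to the origin at t = 1; the corresponding cost, ½∫₀¹ u*(t)² dt, evaluates to 14:

```python
import numpy as np
from scipy.integrate import solve_ivp

u = lambda t: 18.0 * t - 10.0   # optimal control from the example

# Integrate x1' = x2, x2' = u from x(0) = (1, 1) over [0, 1]
sol = solve_ivp(lambda t, x: [x[1], u(t)], (0.0, 1.0), [1.0, 1.0],
                rtol=1e-10, atol=1e-10)
x_final = sol.y[:, -1]
print(x_final)   # both components close to 0

# Evaluate J = 0.5 * integral of u(t)^2 dt by the trapezoidal rule
ts = np.linspace(0.0, 1.0, 2001)
J = 0.5 * float(np.sum(0.5 * (u(ts[:-1])**2 + u(ts[1:])**2) * np.diff(ts)))
print(J)         # close to 14
```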

Computational optimal control: B-727 maximum altitude climbing turn manoeuvre

This example is solved using a gradient method in (Bryson, 1999). Here, a path constraint is
considered and the solution is sought by using a direct method and nonlinear programming. It is
desired to find the optimal control histories to maximise the altitude of a B-727 aircraft in a
given time with terminal constraints that the aircraft path be turned 60 degrees and the
velocity be slightly above the stall velocity. Such a flight path may be of interest to reduce
engine noise over populated areas located ahead of an airport runway. This manoeuvre can be
formulated as an optimal control problem, as follows.

subject to:

with initial conditions given by:




Figure 2: 3D plot of optimal B-727 aircraft trajectory

the terminal constraints:

and the path constraint:

where h is the altitude, x is the horizontal distance in the initial direction, y is the horizontal
distance perpendicular to the initial direction, V is the aircraft velocity, γ is the climb angle, and
ψ is the heading angle. The distance and time units in the above equations are
normalised; to obtain meters and seconds, the corresponding variables need to be multiplied by
10.0542 and 992.0288, respectively. There are two controls: the angle of attack α and the bank
angle σ. The functions T(V), C_D(α) and C_L(α) are given by:

The solution shown in Fig 2 was obtained by using sequential quadratic programming, where
the decision vector consisted of the control values at the grid points. The differential equations
were integrated using 5th-order Runge-Kutta steps with size Δt = 0.01 units, and the gradients
required by the nonlinear programming code were found by finite differences.
