
Advanced Control using

MATLAB
or Stabilising the
unstabilisable
David I. Wilson
Auckland University of Technology
New Zealand
April 12, 2013
Copyright 2013 David I. Wilson
Auckland University of Technology
New Zealand
Creation date: April, 2013.
All rights reserved. No part of this work may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without prior permission.
Contents
1 Introduction 1
1.1 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Matlab for computer aided control design . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Alternative computer design aids . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Economics of control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Laboratory equipment for control tests . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.1 Plants with one input and one output . . . . . . . . . . . . . . . . . . . . . . 6
1.4.2 Multi-input and multi-output plants . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Slowing down Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 From differential to difference equations 13
2.1 Computer in the loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Sampling an analogue signal . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.2 Selecting a sample rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 The sampling theorem and aliases . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.4 Discrete frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Finite difference models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.1 Difference equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 The z transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.1 z-transforms of common functions . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4 Inversion of z-transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.1 Inverting z-transforms symbolically . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.2 The partial fraction method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4.3 Long division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4.4 Computational approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.5 Numerically inverting the Laplace transform . . . . . . . . . . . . . . . . . . 31
2.5 Discretising with a sample and hold . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.5.1 Converting Laplace transforms to z-transforms . . . . . . . . . . . . . . . . 37
2.5.2 The bilinear transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 Discrete root locus diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.7 Multivariable control and state space analysis . . . . . . . . . . . . . . . . . . . . . . 43
2.7.1 States and state space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.2 Converting differential equations to state-space form . . . . . . . . . . . . . 47
2.7.3 Interconverting between state space and transfer functions . . . . . . . . . . 50
2.7.4 Similarity transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.7.5 Interconverting between transfer functions forms . . . . . . . . . . . . . . . 55
2.7.6 The steady state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.8 Solving the vector differential equation . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.8.1 Numerically computing the discrete transformation . . . . . . . . . . . . . . 61
2.8.2 Using MATLAB to discretise systems . . . . . . . . . . . . . . . . . . . . . . . 63
2.8.3 Time delay in state space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.9 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.9.1 Stability in the continuous domain . . . . . . . . . . . . . . . . . . . . . . . . 70
2.9.2 Stability of the closed loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.9.3 Stability of discrete time systems . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.9.4 Stability of nonlinear differential equations . . . . . . . . . . . . . . . . . . . 74
2.9.5 Expressing matrix equations succinctly using Kronecker products . . . . . . 80
2.9.6 Summary of stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3 Modelling dynamic systems with differential equations 85
3.1 Dynamic system models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.1.1 Steady state and dynamic models . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2 A collection of illustrative models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2.1 Simple models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3 Chemical process models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.3.1 A continuously-stirred tank reactor . . . . . . . . . . . . . . . . . . . . . . . 92
3.3.2 A forced circulation evaporator . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3.3 A binary distillation column model . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.4 Interaction and the Relative Gain Array . . . . . . . . . . . . . . . . . . . . . 103
3.4 Regressing experimental data by curve fitting . . . . . . . . . . . . . . . . . . . . . 107
3.4.1 Polynomial regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.4.2 Nonlinear least-squares model identification . . . . . . . . . . . . . . . . . . 112
3.4.3 Parameter confidence intervals . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.5 Numerical tools for modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.5.1 Differential/Algebraic equation systems and algebraic loops . . . . . . . . . 123
3.6 Linearisation of nonlinear dynamic equations . . . . . . . . . . . . . . . . . . . . . . 125
3.6.1 Linearising a nonlinear tank model . . . . . . . . . . . . . . . . . . . . . . . 127
3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4 The PID controller 131
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.1.1 P, PI or PID control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.2 The industrial PID algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.2.1 Implementing the derivative component . . . . . . . . . . . . . . . . . . . . 133
4.2.2 Variations of the PID algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.2.3 Integral only control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.3 Simulating a PID process in SIMULINK . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4 Extensions to the PID algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.4.1 Avoiding derivative kick . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.4.2 Input saturation and integral windup . . . . . . . . . . . . . . . . . . . . . . 140
4.5 Discrete PID controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.5.1 Discretising continuous PID controllers . . . . . . . . . . . . . . . . . . . . . 144
4.5.2 Simulating a PID controlled response in Matlab . . . . . . . . . . . . . . . . 146
4.5.3 Controller performance as a function of sample time . . . . . . . . . . . . . 148
4.6 PID tuning methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.6.1 Open loop tuning methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.6.2 Closed loop tuning methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.6.3 Closed loop single-test tuning methods . . . . . . . . . . . . . . . . . . . . . 159
4.6.4 Summary on closed loop tuning schemes . . . . . . . . . . . . . . . . . . . . 166
4.7 Automated tuning by relay feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.7.1 Describing functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.7.2 An example of relay tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.7.3 Self-tuning with noise disturbances . . . . . . . . . . . . . . . . . . . . . . . 173
4.7.4 Modifications to the relay feedback estimation algorithm . . . . . . . . . . . 176
4.8 Drawbacks with PID controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.8.1 Inverse response processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.8.2 Approximating inverse-response systems with additional deadtime . . . . 183
4.9 Dead time compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.10 Tuning and sensitivity of control loops . . . . . . . . . . . . . . . . . . . . . . . . . . 187
4.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5 Digital filtering and smoothing 193
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.1.1 The nature of industrial noise . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.1.2 Differentiating without smoothing . . . . . . . . . . . . . . . . . . . . . . . . 196
5.2 Smoothing measured data using analogue filters . . . . . . . . . . . . . . . . . . . . 197
5.2.1 A smoothing application to find the peaks and troughs . . . . . . . . . . . . 197
5.2.2 Filter types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.2.3 Classical analogue filter families . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.3 Discrete filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.3.1 A low-pass filtering application . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.3.2 Digital filter approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.3.3 Efficient hardware implementation of discrete filters . . . . . . . . . . . . . 214
5.3.4 Numerical and quantisation effects for high-order filters . . . . . . . . . . . 217
5.4 The Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.4.1 Fourier transform definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.4.2 Orthogonality and frequency spotting . . . . . . . . . . . . . . . . . . . . . . 224
5.4.3 Using MATLAB’s FFT function . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.4.4 Periodogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.4.5 Fourier smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.5 Numerically differentiating industrial data . . . . . . . . . . . . . . . . . . . . . . . 230
5.5.1 Establishing feedrates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6 Identification of process models 235
6.1 The importance of system identification . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.1.1 Basic definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.1.2 Black, white and grey box models . . . . . . . . . . . . . . . . . . . . . . . . 237
6.1.3 Techniques for identification . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.2 Graphical and non-parametric model identification . . . . . . . . . . . . . . . . . . 239
6.2.1 Time domain identification using graphical techniques . . . . . . . . . . . . 239
6.2.2 Experimental frequency response analysis . . . . . . . . . . . . . . . . . . . 246
6.2.3 An alternative empirical transfer function estimate . . . . . . . . . . . . . . 253
6.3 Continuous model identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
6.3.1 Fitting transfer functions using nonlinear least-squares . . . . . . . . . . . . 254
6.3.2 Identification using derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.3.3 Practical continuous model identification . . . . . . . . . . . . . . . . . . . . 258
6.4 Popular discrete-time linear models . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.4.1 Extending the linear model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.4.2 Output error model structures . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.4.3 General input/output models . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
6.5 Regressing discrete model parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.5.1 Simple offline system identification routines . . . . . . . . . . . . . . . . . . 268
6.5.2 Bias in the parameter estimates . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.5.3 Using the System Identification toolbox . . . . . . . . . . . . . . . . . . . . . 270
6.5.4 Fitting parameters to state space models . . . . . . . . . . . . . . . . . . . . 274
6.6 Model structure determination and validation . . . . . . . . . . . . . . . . . . . . . 276
6.6.1 Estimating model order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
6.6.2 Robust model fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.6.3 Common nonlinear model structures . . . . . . . . . . . . . . . . . . . . . . 280
6.7 Online model identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.7.1 Recursive least squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.7.2 Recursive least-squares in MATLAB . . . . . . . . . . . . . . . . . . . . . . . 286
6.7.3 Tracking the precision of the estimates . . . . . . . . . . . . . . . . . . . . . . 290
6.8 The forgetting factor and covariance windup . . . . . . . . . . . . . . . . . . . . . . 292
6.8.1 The influence of the forgetting factor . . . . . . . . . . . . . . . . . . . . . . . 294
6.8.2 Covariance wind-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.9 Identification by parameter optimisation . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.10 Online estimating of noise models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
6.10.1 A recursive extended least-squares example . . . . . . . . . . . . . . . . . . 302
6.10.2 Recursive identification using the SI toolbox . . . . . . . . . . . . . . . . . . 305
6.10.3 Simplified RLS algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.11 Closed loop identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.11.1 Closed loop RLS in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.12 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7 Adaptive Control 317
7.1 Why adapt? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7.1.1 The adaption scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
7.1.2 Classification of adaptive controllers . . . . . . . . . . . . . . . . . . . . . . . 319
7.2 Gain scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
7.3 The importance of identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7.3.1 Polynomial manipulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7.4 Self tuning regulators (STRs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
7.4.1 Simple minimum variance control . . . . . . . . . . . . . . . . . . . . . . . . 323
7.5 Adaptive pole-placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
7.5.1 The Diophantine equation and the closed loop . . . . . . . . . . . . . . . . . 326
7.5.2 Solving the Diophantine equation in Matlab . . . . . . . . . . . . . . . . . . 327
7.5.3 Adaptive pole-placement with identification . . . . . . . . . . . . . . . . . . 330
7.6 Practical adaptive pole-placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.6.1 Dealing with non-minimum phase systems . . . . . . . . . . . . . . . . . . . 335
7.6.2 Separating stable and unstable factors . . . . . . . . . . . . . . . . . . . . . . 338
7.6.3 Experimental adaptive pole-placement . . . . . . . . . . . . . . . . . . . . . 340
7.6.4 Minimum variance control with dead time . . . . . . . . . . . . . . . . . . . 341
7.7 Summary of adaptive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8 Multivariable controller design 351
8.1 Controllability and observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.1.1 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.1.2 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
8.1.3 Computing controllability and observability . . . . . . . . . . . . . . . . . . 355
8.1.4 State reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
8.2 State space pole-placement controller design . . . . . . . . . . . . . . . . . . . . . . 359
8.2.1 Poles and where to place them . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8.2.2 Deadbeat control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
8.3 Estimating the unmeasured states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8.4 Combining estimation and state feedback . . . . . . . . . . . . . . . . . . . . . . . . 367
8.5 Generic model control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
8.5.1 The tuning parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
8.5.2 GMC control of a linear model . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8.5.3 GMC applied to a nonlinear plant . . . . . . . . . . . . . . . . . . . . . . . . 375
8.6 Exact feedback linearisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
8.6.1 The nonlinear system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
8.6.2 The input/output feedback linearisation control law . . . . . . . . . . . . . 380
8.6.3 Exact feedback example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
8.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
9 Classical optimal control 387
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
9.2 Parametric optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
9.2.1 Choosing a performance indicator . . . . . . . . . . . . . . . . . . . . . . . . 388
9.2.2 Optimal tuning of a PID regulator . . . . . . . . . . . . . . . . . . . . . . . . 389
9.2.3 Using SIMULINK inside an optimiser . . . . . . . . . . . . . . . . . . . . . . . 395
9.2.4 An optimal batch reactor temperature policy . . . . . . . . . . . . . . . . . . 396
9.3 The general optimal control problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
9.3.1 The optimal control formulation . . . . . . . . . . . . . . . . . . . . . . . . . 399
9.3.2 The two-point boundary problem . . . . . . . . . . . . . . . . . . . . . . . . 401
9.3.3 Optimal control examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
9.3.4 Problems with a specified target set . . . . . . . . . . . . . . . . . . . . . . . 406
9.4 Linear quadratic control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
9.4.1 Continuous linear quadratic regulators . . . . . . . . . . . . . . . . . . . . . 409
9.4.2 Analytical solution to the LQR problem . . . . . . . . . . . . . . . . . . . . . 411
9.4.3 The steady-state solution to the matrix Riccati equation . . . . . . . . . . . . 415
9.4.4 The discrete LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
9.4.5 A numerical validation of the optimality of LQR . . . . . . . . . . . . . . . . 423
9.4.6 An LQR with integral states . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
9.5 Estimation of state variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.5.1 Random processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.5.2 Combining deterministic and stochastic processes . . . . . . . . . . . . . . . 437
9.5.3 The Kalman filter estimation scheme . . . . . . . . . . . . . . . . . . . . . . . 438
9.5.4 The steady-state form of the Kalman filter . . . . . . . . . . . . . . . . . . . . 442
9.5.5 Current and future prediction forms . . . . . . . . . . . . . . . . . . . . . . . 443
9.5.6 An application of the Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . 447
9.5.7 The role of the Q and R noise covariance matrices in the state estimator . . 448
9.5.8 Extensions to the basic Kalman filter algorithm . . . . . . . . . . . . . . . . . 452
9.5.9 The Extended Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
9.5.10 Combining state estimation and state feedback . . . . . . . . . . . . . . . . . 457
9.5.11 Optimal control using only measured outputs . . . . . . . . . . . . . . . . . 457
9.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
10 Predictive control 461
10.1 Model predictive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
10.1.1 Constrained predictive control . . . . . . . . . . . . . . . . . . . . . . . . . . 464
10.1.2 Dynamic matrix control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
10.2 A Model Predictive Control Toolbox . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
10.2.1 A model predictive control GUI . . . . . . . . . . . . . . . . . . . . . . . . . 474
10.2.2 MPC toolbox in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
10.2.3 Using the MPC toolbox in SIMULINK . . . . . . . . . . . . . . . . . . . . . . 476
10.2.4 Further readings on MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
10.3 Optimal control using linear programming . . . . . . . . . . . . . . . . . . . . . . . 478
10.3.1 Development of the LP problem . . . . . . . . . . . . . . . . . . . . . . . . . 479
11 Expert systems and neural networks 487
11.1 Expert systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
11.1.1 Where are they used? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
11.1.2 Features of an expert system . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
11.1.3 The user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.1.4 Expert systems used in process control . . . . . . . . . . . . . . . . . . . . . 490
11.2 Neural networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.2.1 The architecture of the neural network . . . . . . . . . . . . . . . . . . . . . . 495
11.2.2 Curve fitting using neural networks . . . . . . . . . . . . . . . . . . . . . . . 499
11.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
A List of symbols 505
B Useful utility functions in Matlab 507
C Transform pairs 509
D A comparison of Maple and MuPad 511
D.1 Partial fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
D.2 Integral transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
D.3 Differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
D.4 Vectors and matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
E Useful test models 515
E.1 A forced circulation evaporator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
E.2 Aircraft model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
List of Figures
1.1 Traditional vs. Advanced control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Economic improvements of better control . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Black-box configuration. The manual switch marked will toggle between either 7
or 9 low-pass filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 The “Black-box” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Balance arm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 The flapper wiring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 Flapper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.8 Helicopter plant with 2 degrees of freedom. See also Fig. 1.9(a). . . . . . . . . . . . 10
1.9 Helicopter control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.10 Helicopter ying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.11 Real-time Simulink simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1 The computer in the control loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 3 bit sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 A time series with unknown frequency components . . . . . . . . . . . . . . . . . . 18
2.4 The frequency component of a sampled signal . . . . . . . . . . . . . . . . . . . . . 18
2.5 Frequency aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6 The Scarlet Letter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Hénon’s attractor in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.8 Hénon’s attractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.9 Inverting z-transforms using dimpulse . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.10 Numerically inverting the Laplace transform using the Bromwich integral . . . . . 33
2.11 Numerically inverting the Laplace transform . . . . . . . . . . . . . . . . . . . . . . 33
2.12 Numerically inverting Laplace transforms . . . . . . . . . . . . . . . . . . . . . . . . 34
2.13 Ideal sampler and zeroth-order hold . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.14 Zeroth-order hold effects on the discrete Bode diagram . . . . . . . . . . . . . . . . 41
2.15 Bode plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.16 The discrete root locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.17 Various discrete closed loop responses . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.18 A binary distillation column with multiple inputs and multiple outputs . . . . . . 45
2.19 A block diagram of a state-space dynamic system, (a) continuous system: ẋ =
Ax + Bu, and (b) discrete system: x_{k+1} = Φx_k + Δu_k. (See also Fig. 2.20.) . . . . 47
2.20 A complete block diagram of a state-space dynamic system with output and direct
measurement feed-through, Eqn. 2.41. . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.21 Unsteady and steady states for level systems . . . . . . . . . . . . . . . . . . . . . . 57
2.22 Submarine step response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.23 Issues in assessing system stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.24 Nyquist diagram of Eqn. 2.94 in (a) three dimensions and (b) as typically presented
in two dimensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.25 Liapunov (1857–1918) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.26 Regions of stability for the poles of continuous (left) and discrete (right) systems . 82
3.1 A stable and unstable pendulum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.2 Simple buffer tank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.3 The UK growth based on the GDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4 A CSTR reactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.5 A forced circulation evaporator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.6 Schematic of a distillation column . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.7 Wood-Berry step response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.8 Wood-Berry column in SIMULINK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.9 Distillation tower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.10 Sparsity of the distillation column model . . . . . . . . . . . . . . . . . . . . . . . . 101
3.11 Open loop distillation column control . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.12 Distillation column control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.13 Distillation column control (in detail) . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.14 Distillation interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.15 Dynamic RGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.16 Dynamic RGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.17 Density of Air . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.18 Fitting a high-order polynomial to some physical data . . . . . . . . . . . . . . . . . 111
3.19 A bio-chemical reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.20 Model of compressed water . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.21 Experimental pressure-rate data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.22 Parameter condence regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.23 Linear and nonlinear trajectories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.24 Linearising a nonlinear tank model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.1 Comparing PI and integral-only control for the real-time control of a noisy flapper
plant with sampling time T = 0.08. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2 PID simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.3 PID internals in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.4 Block diagram of PID controllers as implemented in SIMULINK (left) and classical
text books (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.5 Realisable PID controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.6 PID controller in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.7 PID controller with anti-derivative kick. . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.8 Avoiding derivative kick . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.9 Illustrating the improvement of anti-derivative kick schemes for PID controllers
when applied to the experimental electromagnetic balance. . . . . . . . . . . . . . . 140
4.10 Derivative control and noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.11 Anti-windup comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.12 Discrete PID controller in SIMULINK . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.13 Headbox control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.14 Headbox controlled response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.15 A PID controlled process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.16 Sample time and discrete PID control . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.17 The parameters T and L to be graphically estimated for the openloop tuning method
relations given in Table 4.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.18 Cohen-Coon model fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.19 Cohen-Coon tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.20 PID tuning using a GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.21 Solving for the ultimate frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.22 Ziegler-Nichols tuned responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.23 Typical response of a stable system to a P-controller. . . . . . . . . . . . . . . . . . . 160
4.24 A Yuwana-Seborg closed loop step test . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4.25 Closed loop responses using the YS scheme . . . . . . . . . . . . . . . . . . . . . . . 164
LIST OF FIGURES xi
4.26 A self-tuning PID controlled process . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.27 A process under relay tuning with the PID regulator disabled. . . . . . . . . . . . . 167
4.28 An unknown plant under relay feedback exhibits an oscillation . . . . . . . . . . . 169
4.29 Nyquist & Bode diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.30 PID Relay tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.31 Relay tuning with noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.32 Relay tuning of the blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.33 Relay tuning results of the blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.34 A relay with hysteresis width h and output amplitude d. . . . . . . . . . . . . . . . 177
4.35 Relay feedback with hysteresis width h. . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.36 Relay feedback with hysteresis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.37 Relay feedback with an integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.38 2-point Relay identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.39 The J curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.40 An inverse response process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
4.41 A NMP plant controlled with a PI controller . . . . . . . . . . . . . . . . . . . . . . 183
4.42 Approximating inverse-response systems with additional deadtime . . . . . . . . . 184
4.43 The Smith predictor structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.44 The Smith predictor structure from Fig. 4.43 assuming no model/plant mis-match. 186
4.45 Smith predictor in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.46 Dead time compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
4.47 Deadtime compensation applied to the blackbox . . . . . . . . . . . . . . . . . . . . 188
4.48 Closed loop with plant G(s) and controller C(s) subjected to disturbances and
measurement noise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
4.49 Sensitivity transfer functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
4.50 Sensitivity robustness measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.1 A filter as a transfer function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.2 A noisy measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.3 Noise added to a true, but unknown, signal . . . . . . . . . . . . . . . . . . . . . . . 195
5.4 Derivative action given noisy data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.5 Smoothing industrial data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.6 Low-pass filter specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.7 Three single low-pass filters cascaded together to make a third-order filter. . . . . 199
5.8 Amplitude response for ideal, low-pass, high-pass and band-pass filters. . . . . . 200
5.9 Analogue Butterworth filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.10 Analogue Chebyshev filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.11 Butterworth and Chebyshev filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.12 Using a Butterworth filter to smooth noisy data . . . . . . . . . . . . . . . . . . . . 211
5.13 The frequency response for Butterworth filters . . . . . . . . . . . . . . . . . . . . 212
5.14 Various Butterworth filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.15 Advantages of frequency pre-warping . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.16 Hardware difference equation in Direct Form I . . . . . . . . . . . . . . . . . . . . . 215
5.17 An IIR filter with a minimal number of delays, Direct Form II . . . . . . . . . . . 216
5.18 Cascaded second-order sections to realise a high-order filter. See also Fig. 5.19. . . 217
5.19 A second-order section (SOS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.20 Comparing single precision second-order sections with filters in direct form II
transposed form. Note that the direct form II lter is actually unstable when run
in single precision. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.21 Approximating square waves with sine waves . . . . . . . . . . . . . . . . . . . . . 223
5.22 The Fourier approximation to a square wave . . . . . . . . . . . . . . . . . . . . . . 223
5.23 Two signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.24 Critical radio frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.25 Power spectrum for a signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.26 Smoothing by Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.27 Differentiating and smoothing noisy measurement . . . . . . . . . . . . . . . . . . . 232
5.28 Filtering industrial data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.1 The prediction problem: Given a model and the input, u, can we predict the
output, y? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.2 A good model, M, duplicates the behaviour of the true plant, S. . . . . . . . . . . . 237
6.3 An experimental setup for input/output identication. We log both the input and
the response data to a computer for further processing. . . . . . . . . . . . . . . . . 238
6.4 Typical open loop step tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.5 Areas method for model identification . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.6 Examples of the Areas method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
6.7 Identification of the Blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
6.8 Balance arm step test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.9 Random signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
6.10 A 5-element binary shift register to generate a pseudo-random binary sequence. . 245
6.11 Pseudo-random binary sequence generator in SIMULINK . . . . . . . . . . . . . . . 245
6.12 Black box experimental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.13 Black box response analysis using a series of sine waves . . . . . . . . . . . . . . . . 247
6.14 Black box response using an input chirp signal. . . . . . . . . . . . . . . . . . . . . . 249
6.15 Black box frequency response analysis using a chirp signal. . . . . . . . . . . . . . . 249
6.16 Flapper response to a chirp signal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.17 Experimental setup to subject a random input into an unknown plant. The
input/output data was collected, processed through Listing 6.2 to give the frequency
response shown in Fig. 6.18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.18 The experimental frequency response compared to the true analytical Bode
diagram. See the routine in Listing 6.2. . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.19 Black box response given a pseudo-random input sequence. . . . . . . . . . . . . . 252
6.20 Black box frequency response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
6.21 Empirical transfer function estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6.22 Experimental data from a continuous plant . . . . . . . . . . . . . . . . . . . . . . . 255
6.23 A continuous-time model fitted to input/output data . . . . . . . . . . . . . . . . 256
6.24 Continuous model identification strategy . . . . . . . . . . . . . . . . . . . . . . . 257
6.25 Continuous model identification simulation . . . . . . . . . . . . . . . . . . . . . . 259
6.26 Continuous model identification of the blackbox . . . . . . . . . . . . . . . . . . . 260
6.27 Identification using Laguerre functions . . . . . . . . . . . . . . . . . . . . . . . . 261
6.28 A signal flow diagram of an auto-regressive model with exogenous input or ARX
model. Compare this structure with the similar output-error model in Fig. 6.30. . . 262
6.29 A signal flow diagram of an ARMAX model. Note that the only difference between
this, and the ARX model in Fig. 6.28, is the inclusion of the C polynomial filtering
the noise term. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
6.30 A signal flow diagram of an output-error model. Compare this structure with the
similar ARX model in Fig. 6.28. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.31 A general input/output model structure . . . . . . . . . . . . . . . . . . . . . . . . . 265
6.32 ARX estimation exhibiting bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6.33 Offline system identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
6.34 Identification of deadtime from the step response . . . . . . . . . . . . . . . . . . 278
6.35 Deadtime estimation at fast sampling . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.36 Deadtime estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.37 Blackbox step model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.38 Hammerstein and Wiener model structures . . . . . . . . . . . . . . . . . . . . . . . 281
6.39 Ideal RLS parameter estimation. (See also Fig. 6.41(a).) . . . . . . . . . . . . . . . . 286
6.40 Recursive least squares estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.41 RLS under SIMULINK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
6.42 RLS under SIMULINK (version 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.43 Estimation of a two parameter plant . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.44 Confidence limits for estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.45 RLS and an abrupt plant change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.46 The memory when using a forgetting factor . . . . . . . . . . . . . . . . . . . . . . . 293
6.47 Identification using various forgetting factors . . . . . . . . . . . . . . . . . . . . . 294
6.48 Covariance wind-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.49 The MIT rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
6.50 Optimising the adaptation gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
6.51 Addition of coloured noise to a dynamic process. See also Fig. 6.31. . . . . . . . . . 301
6.52 RLS with coloured noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.53 Nonlinear parameter estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.54 Recursive extended least-squares estimation . . . . . . . . . . . . . . . . . . . . . . 305
6.55 A simplied recursive least squares algorithm . . . . . . . . . . . . . . . . . . . . . 307
6.56 Furnace input/output data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.57 Closed loop estimation using Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.58 RLS in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
7.1 The structure of an indirect adaptive controller . . . . . . . . . . . . . . . . . . . . . 319
7.2 Varying process gain of a spherical tank . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.3 Simple minimum variance control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.4 Simple minimum variance control (zoomed) . . . . . . . . . . . . . . . . . . . . . . 325
7.5 Adaptive pole-placement control structure . . . . . . . . . . . . . . . . . . . . . . . 325
7.6 Adaptive pole-placement control structure with RLS identification . . . . . . . . 331
7.7 Control of multiple plants with an adapting controller. We desire the same closed
loop response irrespective of the choice of plant. . . . . . . . . . . . . . . . . . . . . 332
7.8 Three open loop plants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
7.9 Desired closed loop response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
7.10 Adaptive pole-placement with identification . . . . . . . . . . . . . . . . . . . . . 334
7.11 Comparing the adaptive pole-placement with the reference trajectory . . . . . . . . 335
7.12 Bursting in adaptive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
7.13 A plant with poorly damped zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
7.14 Adaptive pole-placement with an unstable B . . . . . . . . . . . . . . . . . . . . . . 339
7.15 Pole-zero map of an adaptive pole-placement . . . . . . . . . . . . . . . . . . . . . . 340
7.16 Areas of well-damped poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.17 Adaptive pole-placement of the black-box . . . . . . . . . . . . . . . . . . . . . . . . 342
7.18 Adaptive pole-placement of the black-box . . . . . . . . . . . . . . . . . . . . . . . . 343
7.19 A non-minimum phase plant with an unstable zero which causes an inverse step
response. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
7.20 Moving average control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
8.1 Reconstructing states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
8.2 Pole-placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
8.3 Deadbeat control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8.4 Simultaneous control and state estimation . . . . . . . . . . . . . . . . . . . . . . . . 368
8.5 Control and estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
8.6 GMC tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.7 GMC tuning characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.8 Linear GMC response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
8.9 Linear GMC comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
8.10 GMC CSTR control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
8.11 A CSTR phase plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
8.12 The configuration of an input/output feedback linearisation control law . . . . . 381
8.13 Exact feedback linearisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
9.1 IAE areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
9.2 ITAE breakdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
9.3 Optimal responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
9.4 Optimal PID tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
9.5 Optimum PI tuning of the blackbox plant . . . . . . . . . . . . . . . . . . . . . . . . 394
9.6 SIMULINK model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
9.7 Production of a valuable chemical in a batch reactor. . . . . . . . . . . . . . . . . . . 396
9.8 Temperature profile optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
9.9 Temperature profile optimisation using 3 temperatures . . . . . . . . . . . . . . . 398
9.10 Optimum temperature profile comparison for different number of temperatures . 398
9.11 Optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
9.12 Optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
9.13 Optimal control with targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
9.14 Steady-state and time-varying LQR control . . . . . . . . . . . . . . . . . . . . . . . 414
9.15 Steady-state continuous LQR controller . . . . . . . . . . . . . . . . . . . . . . . . . 417
9.16 Comparing discrete and continuous LQR controllers . . . . . . . . . . . . . . . . . . 420
9.17 LQR control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
9.18 Pole-placement and LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
9.19 Pole-placement and LQR showing the input . . . . . . . . . . . . . . . . . . . . . . 425
9.20 Trial pole locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.21 Trial pole-placement performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
9.22 State feedback control system with an integral output state . . . . . . . . . . . . . . 428
9.23 State feedback with integral states . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
9.24 Black box servo control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
9.25 A state-based estimation and control scheme . . . . . . . . . . . . . . . . . . . . . . 431
9.26 PDF of a random variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.27 Correlated noisy x, y data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
9.28 2D ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9.29 Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
9.30 A block diagram of a steady-state prediction-type Kalman filter applied to a linear
discrete plant. Compare with the alternative form in Fig. 9.31. . . . . . . . . . . . . 444
9.31 A block diagram of a steady-state current estimator-type Kalman filter applied to
a linear discrete plant. Compare with the alternative prediction form in Fig. 9.30. . 445
9.32 Kalman filter demonstration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
9.33 The performance of a Kalman filter for different q/r ratios . . . . . . . . . . . . . 451
9.34 A random walk process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
9.35 LQG in SIMULINK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
10.1 Horizons used in model predictive control . . . . . . . . . . . . . . . . . . . . . . . 462
10.2 Predictions of the Reserve Bank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
10.3 Inverse plant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
10.4 Predictive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
10.5 Acausal response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
10.6 Varying the horizons of predictive control . . . . . . . . . . . . . . . . . . . . . . . . 468
10.7 MPC on the blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
10.8 Step response coefficients, g_i, for a stable system. . . . . . . . . . . . . . . . . . . 469
10.9 DMC control details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
10.10 DMC control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
10.11 Adaptive DMC of the blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
10.12 An MPC graphical user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
10.13 Multivariable MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
10.14 SIMULINK and MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
10.15 MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
10.16 LP constraint matrix dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
10.17 LP optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
10.18 LP optimal control with active constraints . . . . . . . . . . . . . . . . . . . . . . . 485
10.19 Non-square LP optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
10.20 LP optimal control showing acausal behaviour . . . . . . . . . . . . . . . . . . . . 486
11.1 Possible neuron activation functions. . . . . . . . . . . . . . . . . . . . . . . . . . . 495
11.2 A single neural processing unit with multiple inputs . . . . . . . . . . . . . . . . . . 496
11.3 Single layer feedforward neural network . . . . . . . . . . . . . . . . . . . . . . . . 497
11.4 A 3 layer fully interconnected feedforward neural network . . . . . . . . . . . . . . 497
11.5 Single layer network with feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
11.6 An unknown input/output function . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
11.7 Fitting a Neural-Network to an unknown input/output function . . . . . . . . . . 501
11.8 Tide predictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
List of Tables
1.1 Computer aids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Final and initial value theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Inverting a z transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Laplace transform pairs used for testing . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1 Standard nomenclature used in modelling dynamic systems . . . . . . . . . . . . . 90
3.2 Parameters of the CSTR model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3 The important variables in the forced circulation evaporator . . . . . . . . . . . . . 94
3.4 Compressed water . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.5 The parameter values for the CSTR model . . . . . . . . . . . . . . . . . . . . . . . . 122
3.6 The initial state and manipulated variables for the CSTR simulation . . . . . . . . . 123
4.1 Alternative PID tuning parameter conventions . . . . . . . . . . . . . . . . . . . . . 132
4.2 Ziegler-Nichols open-loop PID tuning rules . . . . . . . . . . . . . . . . . . . . . . . 152
4.3 PID controller settings based on IMC for a small selection of common plants where
the control engineer gets to choose a desired closed loop time constant, τ_c . . . . 154
4.4 Various alternative Ziegler-Nichols type PID tuning rules as a function of the
ultimate gain, K_u, and ultimate period, P_u . . . . . . . . . . . . . . . . . . . . . . 155
4.5 Closed-loop single-test PID design rules . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.6 Relay based PID tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.1 Filter transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.1 Experimentally determined frequency response of the blackbox . . . . . . . . . . . 248
6.2 Identification in state-space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
8.1 The relationship between regulation and estimation . . . . . . . . . . . . . . . . . . 367
8.2 Litchfield nonlinear CSTR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
9.1 Common integral performance indices . . . . . . . . . . . . . . . . . . . . . . . . . . 391
11.1 Comparing expert systems and neural networks . . . . . . . . . . . . . . . . . . . . 494
Listings
2.1 Symbolic Laplace to z-transform conversion . . . . . . . . . . . . . . . . . . . . . . 37
2.2 Symbolic Laplace to z-transform conversion with ZOH . . . . . . . . . . . . . . . . 37
2.3 Extracting the gain, time constants and numerator time constants from an arbitrary
transfer function format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.4 Submarine simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.5 Example of the Routh array using the symbolic toolbox . . . . . . . . . . . . . . . . 71
2.6 Solve the continuous matrix Lyapunov equation using Kronecker products . . . . 77
2.7 Solve the matrix Lyapunov equation using the lyap routine . . . . . . . . . . . . . 78
2.8 Solve the discrete matrix Lyapunov equation using Kronecker products . . . . . . 79
3.1 Computing the dynamic relative gain array analytically . . . . . . . . . . . . . . . . 105
3.2 Computing the dynamic relative gain array numerically as a function of ω. See
also Listing 3.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3 Curve fitting using polynomial least-squares . . . . . . . . . . . . . . . . . . . . . 109
3.4 Polynomial least-squares using singular value decomposition. This routine
follows from, and provides an alternative to Listing 3.3. . . . . . . . . . . . . . . . 111
3.5 Curve fitting using a generic nonlinear optimiser . . . . . . . . . . . . . . . . . . . 113
3.6 Curve fitting using the OPTI optimisation toolbox. (Compare with Listing 3.5.) . 113
3.7 Fitting water density as a function of temperature and pressure . . . . . . . . . . . 115
3.8 Parameter confidence limits for a nonlinear reaction rate model . . . . . . . . . . 118
3.9 Comparing the dynamic response of a pendulum to the linear approximation . . . 121
3.10 Using linmod to linearise an arbitrary SIMULINK module. . . . . . . . . . . . . . . 127
4.1 Constructing a transfer function of a PID controller . . . . . . . . . . . . . . . . . . 133
4.2 Constructing a discrete (filtered) PID controller . . . . . . . . . . . . . . . . . . . . 145
4.3 A simple PID controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.4 Ziegler-Nichols PID tuning rules for an arbitrary transfer function . . . . . . . . . . 159
4.5 Identifies the characteristic points for the Yuwana-Seborg PID tuner from a trial
closed loop response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.6 Compute the closed loop model from peak and trough data . . . . . . . . . . . . . 165
4.7 Compute the ultimate gain and frequency from the closed loop model parameters. 165
4.8 Compute the open loop model, G_m, Eqn. 4.31. . . . . . . . . . . . . . . . . . . . . 165
4.9 Compute appropriate PI or PID tuning constants based on a plant model, G_m,
using the IMC schemes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4.10 Calculates the period and amplitude of a sinusoidal time series using least-squares. 174
5.1 Designing Butterworth Filters using Eqn. 5.4. . . . . . . . . . . . . . . . . . . . . . . 203
5.2 Designing a low-pass Butterworth filter with a cut-off frequency of f_c = 800 Hz. . 203
5.3 Designing a high-pass Butterworth filter with a cut-off frequency of f_c = 800 Hz. 203
5.4 Designing Chebyshev Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5 Computing a Chebyshev Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.6 Converting a 7th-order Butterworth filter to 4 second-order sections . . . . . . . . 218
5.7 Comparing DFII and SOS digital filters in single precision. . . . . . . . . . . . . . 219
5.8 Routine to compute the power spectral density plot of a time series . . . . . . . . . 226
5.9 Smoothing and differentiating a noisy signal . . . . . . . . . . . . . . . . . . . . . . 232
6.1 Identification of a first-order plant with deadtime from an openloop step response
using the Areas method from Algorithm 6.1. . . . . . . . . . . . . . . . . . . . . . 241
6.2 Frequency response identification of an unknown plant directly from
input/output data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.3 Non-parametric frequency response identification using etfe. . . . . . . . . . . . 253
6.4 Function to generate output predictions given a trial model and input data. . . . . 255
6.5 Optimising the model parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6 Validating the fitted model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.7 Continuous model identification of a non-minimum phase system . . . . . . . . . 258
6.8 Generate some input/output data for model identification . . . . . . . . . . . . . 268
6.9 Estimate an ARX model from an input/output data series using least-squares . . . 269
6.10 An alternative way to construct the data matrix for ARX estimation using Toeplitz
matrices. See also Listing 6.9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.11 Offline system identification using arx from the System Identification Toolbox . 271
6.12 Offline system identification with no model/plant mismatch . . . . . . . . . . . . 271
6.13 Demonstrate the fitting of an AR model. . . . . . . . . . . . . . . . . . . . . . . . . 272
6.14 Create an input/output sequence from an output-error plant. . . . . . . . . . . . . 273
6.15 Parameter identication of an output error process using oe and arx. . . . . . . . 273
6.16 A basic recursive least-squares (RLS) update (without forgetting factor) . . . . . . . 284
6.17 Tests the RLS identification scheme using Listing 6.16. . . . . . . . . . . . . . . . . 286
6.18 A recursive least-squares (RLS) update with a forgetting factor. (See also
Listing 6.16.) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.19 Adaption of the plant gain using steepest descent . . . . . . . . . . . . . . . . . . . 299
6.20 Create an ARMAX process and generate some input/output data suitable for sub-
sequent identification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.21 Identify an ARMAX process from the data generated in Listing 6.20. . . . . . . . . 303
6.22 Recursively identify an ARMAX process. . . . . . . . . . . . . . . . . . . . . . . . . 303
6.23 Kaczmarz's algorithm for identification . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.1 Simple minimum variance control where the plant has no time delay . . . . . . . . 323
7.2 A Diophantine routine to solve FA+BG = T for the polynomials F and G. . . . . 328
7.3 Alternative Diophantine routine to solve FA+BG = T for the polynomials F and
G. Compare with Listing 7.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
7.4 Constructing polynomials for the Diophantine equation example . . . . . . . . . . 329
7.5 Solving the Diophantine equation using polynomials generated from Listing 7.4. . 330
7.6 Adaptive pole-placement control with 3 different plants . . . . . . . . . . . . . . . . 333
7.7 The pole-placement control law when H = 1/B . . . . . . . . . . . . . . . . . . . . 337
7.8 Factorising a polynomial B(q) into stable, B+(q), and unstable and poorly damped,
B−(q), factors such that B = B+B− and B+ is defined as monic. . . . . . . . . . . . 338
7.9 Minimum variance control design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
8.1 A simple state reconstructor following Algorithm 8.1. . . . . . . . . . . . . . . . . . 359
8.2 Pole-placement control of a well-behaved system . . . . . . . . . . . . . . . . . . . . 362
8.3 A deadbeat controller simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8.4 Pole placement for controllers and estimators . . . . . . . . . . . . . . . . . . . . . . 369
8.5 GMC on a Linear Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8.6 GMC for a batch reactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.7 The dynamic equations of a batch reactor . . . . . . . . . . . . . . . . . . . . . . . . 377
8.8 Find the Lie derivative for a symbolic system . . . . . . . . . . . . . . . . . . . . . . 383
8.9 Establish relative degree, r (ignore degree 0 possibility) . . . . . . . . . . . . . . . . 384
8.10 Design Butterworth filter of order r. . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
8.11 Symbolically create the closed loop expression . . . . . . . . . . . . . . . . . . . . . 384
9.1 Returns the IAE performance for a given tuning. . . . . . . . . . . . . . . . . . . . . 392
9.2 Optimal tuning of a PID controller for a non-minimum phase plant. This script file
uses the objective function given in Listing 9.1. . . . . . . . . . . . . . . . . . . . . . 392
9.3 Returns the ITSE using a SIMULINK model. . . . . . . . . . . . . . . . . . . . . . . . 395
9.4 Analytically computing the co-state dynamics and optimum input trajectory as a
function of states and co-states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
9.5 Solving the reaction prole boundary value problem using the boundary value
problem solver, bvp4c.m. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
9.6 Computes the full time-evolving LQR solution . . . . . . . . . . . . . . . . . . . . . 413
9.7 The continuous time differential Riccati equation. This routine is called from List-
ing 9.8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
9.8 Solves the continuous time differential Riccati equation using a numerical ODE
integrator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
9.9 Calculate the continuous optimal steady-state controller gain. . . . . . . . . . . . . 416
9.10 Closed loop simulation using an optimal steady-state controller gain. . . . . . . . . 416
9.11 Solving the algebraic Riccati equation for P∞ using Kronecker products and vec-
torisation given matrices A, B, Q and R. . . . . . . . . . . . . . . . . . . . . . . . . . 417
9.12 Calculate the discrete optimal steady-state gain by iterating until exhaustion.
Note it is preferable for numerical reasons to use lqr for this computation. . . . . 419
9.13 Comparing the continuous and discrete LQR controllers. . . . . . . . . . . . . . . . 419
9.14 An LQR controller for the blackbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
9.15 Comparing an LQR controller from Listing 9.14 with a pole-placement controller . 424
9.16 Computing the closed loop poles from the optimal LQR controller from Listing 9.14. 426
9.17 Comparing the actual normally distributed random numbers with the theoretical
probability density function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.18 Probability and inverse probability distributions for the F-distribution. . . . . . . . 434
9.19 Generate some correlated random data. . . . . . . . . . . . . . . . . . . . . . . . . . 434
9.20 Plot a 3D histogram of the random data from Listing 9.19. . . . . . . . . . . . . . . 435
9.21 Compute the uncertainty regions from the random data from Listing 9.20. . . . . . 436
9.22 Validating the uncertainty regions computed theoretically from Listing 9.21. . . . . 436
9.23 Solving the discrete time Riccati equation using exhaustive iteration around Eqn. 9.98
or alternatively using the dare routine. . . . . . . . . . . . . . . . . . . . . . . . . . 443
9.24 Alternative ways to compute the Kalman gain . . . . . . . . . . . . . . . . . . . . . 445
9.25 State estimation of a randomly generated discrete model using a Kalman filter. . . 447
9.26 Computing the Kalman gain using dlqe. . . . . . . . . . . . . . . . . . . . . . . . . 449
9.27 Demonstrating the optimality of the Kalman filter. . . . . . . . . . . . . . . . . . . . 450
9.28 Potter's algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
10.1 Predictive control with input saturation constraints using a generic nonlinear op-
timiser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
10.2 Objective function to be minimised for the predictive control algorithm with input
saturation constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
10.3 Dynamic Matrix Control (DMC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
10.4 Setting up an MPC controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
10.5 Optimal control using linear programming . . . . . . . . . . . . . . . . . . . . . . . 482
11.1 Generate some arbitrary data to be used for subsequent fitting . . . . . . . . . . . . 500
B.1 Polynomial addition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
B.2 Multiple convolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
B.3 Strip leading zeros from a polynomial. . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Chapter 1
Introduction
Mathematicians may flatter themselves that they possess new ideas which mere human language is as yet
unable to express. Let them make the effort to express those ideas in appropriate words without the aid of
symbols, and if they succeed, they will not only lay us laymen under a lasting obligation but, we venture to
say, they will find themselves very much enlightened during the process, and will even be doubtful
whether the ideas expressed as symbols had ever quite found their way out of the equations into their
minds.
James Clerk Maxwell, 1890
Control, in an engineering sense, is where actions are taken to ensure that a particular physi-
cal process responds in some desired manner. Automatic control is where we have relieved the
human operator from the tedium of consistently monitoring the process and supplying the nec-
essary corrections. Control as a technical discipline is therefore important not only in the elds
of engineering, but also in economics, sociology and indeed in most aspects of our life. When
studying control, we naturally assume that we do conceivably have some chance of inuencing
things. For example, it is worthwhile to study the operation of a coal red power plant in order
to minimise possibly polluting emissions, but it is not worth our time to save the world from the
next ice age, or as the results of a special study group who investigated methods designed to pro-
tect the world from a stray comet (such as the one postulated to have wiped out the dinosaurs 80
million years ago) concluded, there was nothing feasible we could do, such as change the earths
orbit, or blast the asteroid, to avoid the collision. In these latter examples, the problem exists, but
our inuence is negligible.
The teaching of control has changed in emphasis over the last decade from that of linear single-
input/single-output systems elegantly described in the Laplace domain, to general nonlinear
multiple-input/multiple-output systems best analysed in the state space domain. This change
has been motivated by the increasing demands by industry and public to produce more, faster or
cleaner and is now much more attractive due to the impressive improvements in computer aided
tools, such as MATLAB used in these notes. This new emphasis is called advanced (or modern)
control as opposed to the traditional or classical control. This set of notes is intended for students
who have previously attended a first course in automatic control covering the usual continuous
control concepts like Laplace transforms, Bode diagrams, stability of linear differential equations,
PID controllers, and perhaps some exposure to discrete time control topics like z transforms and
state space.
This book attempts to describe what advanced control is, and how it is applied in engineering
applications with emphasis on the construction of controllers using computer aided design tools
such as the numerical programming environment MATLAB from the MathWorks, [134]. With
this tool, we can concentrate on the intentions behind the design procedures, rather than the
mechanics to follow them.
1.1 Scope
Part one contains some revision material in z-transforms, modelling and PID controller tuning.
The discrete domain, z-transforms, and stability concepts with a brief discussion of appropriate
numerical methods are introduced in chapter 2. A brief potpourri of modelling is summarised
in chapter 3. Chapter 4 is devoted to the most common industrial controller, the three term PID
controller with emphasis on tuning, implementation and limitations. Some basic concepts from
signal processing such as filtering and smoothing are introduced in chapter 5. Identification and
the closely related adaptive control are together in chapters 6 and 7. State space analysis and
optimal control design are given in the larger chapters 8 and 9.
Notation conventions
Throughout these notes I have used some typographical conventions. In mathematical expres-
sions, scalar variables are written in italic such as a, b, c, or Greek letters, while vectors x, y are
upright bold lower case and matrices, A, are bold upper case. More notation is introduced as
required.
Computer commands, output and listings are given in a fixed-width font as A=chol(B'*B).
In some cases where you are to type an interactive command, the MATLAB prompt >> is given,
and the computed solution returned. If no ambiguity exists, such as in the case for functions, the
prompt is omitted.
1.2 Matlab for computer aided control design
Modern control design has heavy computing requirements. In particular one needs to:
1. manipulate symbolic algebraic expressions, and
2. perform intensive numerical calculations and simulations for prototyping and testing quickly
and reliably, and finally
3. to implement the controller at high speed in special hardware such as an embedded con-
troller or a digital signal processing (DSP) chip perhaps using assembler.
To use this new theory, it is essential to use computer aided design (CAD) tools efficiently as
real-world problems can rarely be solved manually. But as [170] point out, the use of computers
in the design of control systems has a long and fairly distinguished history. This book uses
MATLAB for the design, simulation and prototyping of controllers.
MATLAB, (which is short for MATrix LABoratory), is a programming environment that grew out
of an effort to create an easy user-interface to the very popular and well regarded public domain
FORTRAN linear algebra collection of programmes, LINPACK and EISPACK. With this direct inter-
pretive interface, one can write quite sophisticated algorithms in a very high level language, that
are consequently easy to read and maintain. Today MATLAB is a commercial package, (although
some public domain lookalikes exist), that is supported with a variety of toolboxes comprising
collections of source code subroutines organised in areas of specific interest. The toolboxes we
are most interested in, and used in this book are:
Control toolbox containing functions for controller design, frequency domain analysis, conver-
sions between various model forms, pole placement, optimal control etc. (Used throughout)
Symbolic toolbox which contains a gateway to the symbolic capabilities of MAPLE.
Signal processing toolbox containing filters, wave form generation and spectral analysis. (Used
principally in chapter 5.)
System identification toolbox for identifying the parameters of various dynamic model types.
(Used in chapter 6.) You may also find the following free statistics toolbox useful, available
at: www.maths.lth.se/matstat/stixbox/.
Real-time toolbox can be used to interface MATLAB to various analogue to digital converters.
The student version of MATLAB (at time of writing) has a special SIGNALS & SYSTEMS TOOLBOX
that has a subset of routines from the control and signal processing toolboxes. Other toolboxes
used for some sections of the notes are the OPTIMISATION TOOLBOX, used in chapter 9 and the
NEURAL NETWORK toolbox.
Additional documentation to that supplied with MATLAB is the concise and free summary notes
[183] or the more recent [68]. Recently there has been exponential growth of other texts that heav-
ily use MATLAB (such as this one), and a current list is available from the Mathworks anonymous
ftp server at www.mathworks.com. This server also contains many user contributed codes, as
well as updates, bug fixes etc.
If MATLAB, or even programming a high level language is new to you, then [201] is a cheap
recommended compendium, similar in form to this, covering topics in numerical analysis, again
with many MATLAB examples.
1.2.1 Alternative computer design aids
Table 1.1 lists a number of alternative computer-aided design and modelling environments sim-
ilar and complementary to MATLAB.
Product WWW site comment
SCILAB www.scilab.org Free Matlab/Simulink clone
OCTAVE www.octave.org Free Matlab clone, inactive
RLAB rlabplus.sourceforge.net Matlab clone, Linux
VISUALMODELQ www.qxdesign.com shareware Simulink clone
MATHVIEWS www.mathwizards.com shareware
MUPAD www.mupad.de Interfaces with SCILAB
MAPLE www.maplesoft.com commercial CAS
MATHEMATICA www.mathematica.com commercial CAS
Table 1.1: Shareware or freeware Matlab lookalikes and computer algebra systems
Unlike MATLAB, symbolic manipulators are computer programs that by manipulating symbols
can perform algebra. Such programs are alternatively known as computer algebra systems or
4 CHAPTER 1. INTRODUCTION
CAS. The most well known examples are MATHEMATICA, MAPLE, MUPAD, and MACSYMA,
(see Table 1.1). These programs can find analytical solutions to many mathematical problems
involving integrals, limits, special functions and so forth. They are particularly useful in the
controller design stage.
The Numerics in Control1 group in Europe has collected together a freeware FORTRAN subroutine
library, SLICOT, of routines relevant to systems and control.
Problem 1.1 1. Familiarise yourself with the fundamentals of MATLAB. Run the MATLAB
demo by typing demo once inside MATLAB.
2. Try the MATLAB tutorial (part 1).
3. Read through the MATLAB primer, [183] or [68], and you should get acquainted with the
MATLAB user's manual.
1.3 Economics of control
Most people would agree that Engineers apply technology, but what do these two words really
mean? Technology is derived from two Greek words, techne which means skill or art, and logia
which means science or study. The interesting point here is that the art component is included.
The English language unfortunately confuses the word engine with engineering so that many peo-
ple have the mistaken view that engineers drive engines (mostly). Actually engineer is derived
from the Latin ingeniatorium which means one who is ingenious at devising. A far cry from the
relatively simple act of piloting jumbos. An interesting American perspective of the professional
engineer and modern technology is given as light reading in [2] and Florman's The Existential
Pleasures of Engineering, [66].
Chemical engineering is, succinctly put, chemistry constrained by cost. The chemist wants the
reaction to proceed, the chemical engineer takes that for granted, but is interested in increasing
the rate, or pushing the equilibrium, or in most cases both at the same time. As the incentive to
produce better products increases, accompanied by an awareness of potent global competition
driving one to reduce costs, process control becomes an important aspect of engineering.
Obviously modern computerised process control systems are expensive. They are especially ex-
pensive compared with other computers such as office or financial computers because the market
is smaller, the environment harsher, the graphical requirements more critical, the duties more var-
ied, and the potential payoffs larger. Process control has at least two main duties to perform; first
to ensure that the plant is operated safely (that is protect the plant, environment and people), and
second that the product quality is consistent with some customer or regulatory body demanded
specifications. There is always a trade-off between how much control you apply and the bene-
fits that result. An automobile manufacturer could produce an almost totally indestructible car
(i.e., a tank), but the expense of raw materials required, and the high running costs would certainly
deem the project an economic failure.
On the other hand, in 1965 Ralph Nader complained in the aptly named Unsafe at Any Speed
about the poor quality of American automotive engineering, the lack of controls and the unsafe
result. This influential book challenged the balance between commercial profits and more
quality control. A product with less variation in quality may be worth more than a product that
has a higher average quality, but more variation. A potentially dangerous production facility that
regularly destroys equipment, people or surroundings is not usually tolerated by the licensing
authorities.
1 The home page is located at http://www.win.tue.nl/niconet/niconet.html
Safety concerns motivate better control.
Fig. 1.1, adapted from [6], gives an industrial perspective of the status of process control in 1994.
The techniques are divided into those considered classical or traditional, which demand only
modest digital online computing power, (if any), little in the way of explicit process models or
understanding, and those termed loosely advanced.
Figure 1.1: A comparison of traditional vs. advanced process control techniques (traditional:
the basic PID algorithm, analogue control, valves, transmitters and signal conditioning; ad-
vanced: multivariable and feedforward control, deadtime compensation, online simulation,
constraint handling and optimisation). Adapted from [6].
One of the major concerns for the process control engineer is to reduce the output variance. If
the variation about the setpoint is small, then the setpoint can be shifted closer to the operating
constraint, without increasing the frequency of alarms. Fig. 1.2 demonstrates the ideal case
which, while popular in the advertising literature, is harder to achieve unambiguously in practice.
Many text books in the control field are very vague about the actual configuration of real pro-
cess control systems used today. Other books that take a more trade-oriented approach are
vague about the academic side of the control performance. There are a number of reasons for
this. First many texts try hard to describe only the theoretical aspects of digital control, and any-
thing remotely applied is not considered worthy of their attention. Secondly, the control systems
are rapidly changing as the cost of micro-processors drops, and different programming
methods come into favour. Thirdly many industries are deliberately vague about publishing
the details of their control system since they perceive that this information could help their com-
petitors. However some information of this type is given in [63, pp131-149] and [17]. One good
Figure 1.2: Economic improvements owing to better control. If the control scheme can reduce
the variance, the setpoint can be shifted closer to the operating or quality constraint, thereby
decreasing operating costs.
balance for the practitioner is [143].
1.4 Laboratory equipment for control tests
Obviously if we are to study automatic control with the aim of eventually controlling chemical
plants, manufacturing processes, robots, undertaking filtering to do active noise cancellation and
so forth, we should practice, preferably on simpler, more well understood, and potentially less
hazardous equipment.
In the Automatic Control Laboratory in the Department of Electrical Engineering at Karlstad
University, Sweden we have a number of simple bench-scale plants to test identification and
control algorithms on.
1.4.1 Plants with one input and one output
The blackbox
Fig. 1.3 and Fig. 1.4(a) show what we perhaps unimaginatively refer to as a black-box. It is a
box, and it is coloured black. Subjecting the box to an input voltage from 0 to 5 volts delivers an
output voltage also spanning from around 0 to 5 volts, but lagging behind the input voltage since
the internals of the blackbox are simply either 7 or 9 (depending on the switch position) low-pass
passive filters cascaded together.
The blackbox is a relatively well behaved underdamped stable system with dominant time con-
stants of around 5 to 10 seconds. Fig. 1.4(b) shows the response of the blackbox to two input
steps. The chief disadvantage of using this device for control studies is that the output response
Figure 1.3: Blackbox configuration. The manual switch toggles between either 7 or 9 low-pass
filters.
Figure 1.4: The Black-box. (a) Black-box wiring to the National Instruments LabPC terminator.
(b) The response of the blackbox to 2 step inputs.
is not visible to the naked eye, and that we cannot manually introduce disturbances. One modi-
fication you can make is to cascade two blackboxes together to modify the dynamics.
Electro-magnetic balance arm
The electromagnetic balance arm shown in Fig. 1.5(a) is a fast-acting, highly oscillatory, plant
with little noise. The aim is to accurately weigh small samples by measuring the current required
to keep the balance arm level, or alternatively just to position the arm at different angles. The
output response to a step in input shown in Fig. 1.5(b) indicates how long it would take for the
oscillations to die away.
Figure 1.5: The electromagnetic balance arm. (a) The electromagnetic balance arm. (b) The
response of the arm to step changes in input (sample time T = 0.05 seconds).
Flapper
Contrary to the balance arm, the flapper in Fig. 1.6 and Fig. 1.7(b) has few dynamics, but signif-
icant low-pass filtered measurement noise. An interesting exercise is to place two flapper units
in close proximity. The air from one then disturbs the flapper of the other, which makes an
interacting multivariable plant.
Stepper motors
A stepper motor is an example of a totally discrete system.
1.4.2 Multi-input and multi-output plants
It is possible to construct multivariable interacting plants by physically locating two plants close
to each other. One possibility is to locate two flappers adjacent to each other; another possibility
is one flapper and one balance arm. The extent of the interaction can be varied by adjusting the
relative position of the two plants.
Helicopter
The model helicopter, Fig. 1.8, is an example of a highly unstable, multivariable (3 inputs, 2
outputs) nonlinear, strongly interacting plant. It is a good example of where we must apply
control (or crash and burn). Fig. 1.9(b) shows the controlled response using 2 PID controllers to
control the direction and altitude. Fig. 1.10(a) shows a 3-dimensional view of the desired and
actual flight path.
Figure 1.6: The flapper wiring.
Figure 1.7: The fan and flapper. (a) The fan/flapper equipment. (b) Step response of the flapper.
Figure 1.8: Helicopter plant with 2 degrees of freedom (control inputs: top rotor and side rotor;
plant outputs: elevation angle and azimuth). See also Fig. 1.9(a).
1.5 Slowing down Simulink
For some applications like the development of PID controllers you want to be able to slow down
the SIMULINK simulation to have time to manually introduce step changes, add disturbances,
switch from automatic to manual etc. If left alone in simulation mode, SIMULINK will run as fast
as possible, but it will slow down when it needs to sub-sample the integrator around discontinu-
ities or periods when the system is very stiff.
The SIMULINK Execution Control block allows you to specify that the simulation runs at a multi-
ple of real-time. This is most useful when you want to slow down a simulation, or ensure that it
runs at a constant rate.
The block is available from: http://www.mathworks.com/matlabcentral/fileexchange
The implementation is shown in Fig. 1.11 where the Simulation Execution Control block looks
like an A/D card but in fact is not connected to anything, although it does export some diagnostic
timing information.
An alternative method to slow down a Simulink simulation and force it to run at some set rate is
to use the commercial Humusoft real-time toolbox, but not with the A/D card actually interfaced
to anything.
Figure 1.9: Multivariable PID control of an unstable helicopter. (a) The flying helicopter bal-
anced using 2 PID controllers since it is openloop unstable and would otherwise crash. (b)
Multivariable PID control of the helicopter exhibiting mediocre controlled response and severe
derivative kick.
Figure 1.10: Helicopter flying results. (a) Multivariable helicopter control: the desired and
actual flight path. (b) Model helicopter in the trees.
Figure 1.11: Real-time Simulink simulations. A parameter in the SIMULINK EXECUTION CON-
TROL block sets the speed of the simulation.
Chapter 2
From differential to difference
equations
Excerpt from What a sorry state of affairs, Martin Walker, Guardian Weekly, June 29, 1997.
. . . But then he said something sensible, as he quite often does. . . .
We need to treat individuals as individuals and we need to address discrete problems for what they are,
and not presume them to be part of some intractable racial issue. Gingrich, properly understood, is a
national treasure, and not only because he is one of the few Americans who understand the difference
between discreet and discrete.
2.1 Computer in the loop
The cost and flexibility advantages of implementing a control scheme in software rather than
fabricating it in discrete components are today simply too large to ignore. However, inserting a
computer to run the software necessitates that we work with discrete, regularly sampled signals.
This added complexity, which by the way is more than compensated for by the above mentioned
advantages, introduces a whole new control discipline, that of discrete control.
Fig. 2.1 shows a common conguration of the computer in the loop. For the computer to respond
to any outside events, the signals must first be converted from an analogue form to a digital
signal, say 1 to 5 Volts, which can, with suitable processing, be wired to an input port of the
computer's processor. The device that accomplishes this conversion is called an Analogue to
Digital converter or A/D. Similarly, any binary output from the pins on a processor must first be
converted to an analogue signal using a Digital to Analogue converter, or D/A. In some micro-
controllers, (rather than micro-processors), such as Intel's 8048 or some versions of the 8051, these
converters may be implemented on the microcontroller chip.
Digital to analogue conversion is easy and cheap. One simply loads each bit across different re-
sistors, and sums the resultant voltages. The conversion is essentially instantaneous. Analogue
to digital is not nearly as easy nor cheap, and this is the reason that the common data acquisition
cards you can purchase for your PC will often multiplex the analogue input channel. There are
various schemes for the A/D, one using a D/A inside a loop using a binary search algorithm.
Obviously this conversion is not instantaneous, although this is not normally considered a prob-
lem for process control applications. Any introductory electrical engineering text such as [171]
Figure 2.1: The computer in the control loop.
will give further details on the implementation.
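The binary-search scheme mentioned above, known as successive approximation, can be sketched as follows. This is only an illustrative simulation; the input voltage, reference voltage and resolution are assumed values, not those of any particular converter chip.

```matlab
% Successive approximation A/D conversion: a binary search using an
% internal D/A converter and a comparator.
Vin = 3.2;  Vref = 5;  b = 12;  % input voltage, full scale & no. of bits
code = 0;
for k = b-1:-1:0                % test each bit, most significant first
  trial = code + 2^k;           % tentatively set bit k
  if trial*Vref/2^b <= Vin      % compare internal D/A output with input
    code = trial;               % keep the bit
  end                           % otherwise clear it again
end
Vquant = code*Vref/2^b;         % quantised estimate of Vin
```

Since each bit requires one comparison, the conversion time grows with the number of bits, which is one reason analogue to digital conversion is slower than the reverse.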
2.1.1 Sampling an analogue signal
It is the A/D converter that is the most interesting for our analysis of the discrete control loop.
The A/D converter will at periodic intervals dened by the computers clock sample the continu-
ous analogue input signal. The value obtained is typically stored in a device called a zeroth-order
hold until it is eventually replaced by the new sample collected one sample time later. Given con-
straints on cost, the A/D converter will only have a limited precision, or limited number of bits
in which to store the incoming discrete value. Common A/D cards such as the PCLabs card,
[3], use 12 bits giving 2^12, or slightly over 4 thousand, discretisation levels. The residual chopped
away is referred to as the quantisation error. For a given number of bits, b, used in the converter,
the amplitude quantisation is

    Δ = 2^(−b)
Low cost analogue converters may only use 8 bits, while digital audio equipment use between
16 and 18 bits.
Fig. 2.2 shows the steps in sampling an analogue signal with a three bit (8 discrete levels) A/D sampler. The dashed stair plot gives an accurate representation of the sampled signal, but owing to the quantisation error, we are left with the solid stair plot. You can reproduce Fig. 2.2 in MATLAB using the fix command to do the chopping, and stairs to construct the stair plot. While other types of hold are possible, anything higher than a first-order hold is rarely used.
2.1.2 Selecting a sample rate
Once we have decided to implement discrete control, rather than continuous control, we must decide on a reasonable sampling rate. This is a crucial parameter in discrete control systems. The sample time, (T or sometimes denoted ∆t), is measured in time units, say seconds or, in industrial applications, minutes. The reciprocal of the sample time is the sample frequency, f, and is usually measured in cycles/second or Hertz. The radial or angular velocity (which some confusingly also term frequency) is denoted ω and is measured in radians/second. The inter-relationships between these quantities are

    f = 1/T  [cycles/second] = ω/(2π)  [radians/s ÷ radians/cycle]        (2.1)
[Plot: a continuous signal over 0 ≤ t ≤ 12 together with its sampled and its sampled & quantised staircase versions, the signal axis spanning the 8 levels 0-7.]
Figure 2.2: Sampling an analogue signal (heavy solid) with a three bit (8 discrete levels) A/D converter and zeroth-order hold. The sampled values, •, are chopped to the next lowest discrete integer level giving the sampled and quantised output.
The faster the sampling rate, (the smaller the sampling time, T), the better our discretised signal approximates the real continuous signal. However, it is uneconomic to sample too fast, as the computing and memory hardware may become too expensive. When selecting an appropriate sampling interval, or sample rate, we should consider the following issues:

• The maximum frequency of interest in the signal
• The sampling theorem which specifies a lower limit required on the sampling rate to resolve any particular frequency unambiguously. (See §2.1.3 following.)
• Any analogue filtering that may be required (to reduce the problem of aliasing)
• The cost of the hardware and the speed of the A/D converters.
Ogata discusses the selection of a sample time qualitatively in [148, p38]. However for most chemical engineering processes, which are dominated by relatively slow and overdamped processes, the sample time should lie somewhere between ten times the computational delay of the hardware, t_c, and some small fraction of the process dominant time constant τ, say

    10 t_c ≤ T ≤ τ/10                                                     (2.2)

For most chemical engineering applications, the computational delay is negligible compared with the process time constant, (t_c ≈ 0), so we often choose T ≈ τ/10. Thus for a simple first order rise, we would expect to have about 20-30 data samples from 0 to 99%. Some may argue that even this sampling rate is too high, and opt for a more conservative (larger) sample time down to τ/6. Note that commonly used guidelines such as presented in Table 22.1 in [179, p535] span a wide range of recommended sample times.
Overly fast sampling
Apart from the high cost of fast A/D converters, there is another argument against fast sampling. When one samples a continuous system, the poles of a stable continuous system map to the poles of a stable discrete system as T goes to zero. However the zeros in the LHP of a continuous system may not map to zeros inside the unit disk of a discrete system as T tends to zero. This non-intuitive result could create problems if one samples a system, and then uses the inverse of this system within a one step controller, since now the zeros outside the unit circle become unstable poles inside the controller for small sample times. Note that the inverse continuous system is stable, and the discrete inverse system will be stable for large sample times; it will only be unstable for small sample times.
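This behaviour is easy to demonstrate numerically. The sketch below (in Python, using scipy's cont2discrete in place of MATLAB's c2d; the helper function name is invented) discretises the stable, minimum-phase plant G(s) = 1/(s+1)³ with a zero-order hold and inspects the magnitudes of the resulting discrete zeros:

```python
import numpy as np
from scipy.signal import cont2discrete

# G(s) = 1/(s+1)^3: stable, and with no finite zeros at all
num, den = [1.0], [1.0, 3.0, 3.0, 1.0]

def discrete_zero_radii(T):
    """Magnitudes of the zeros of the zero-order-hold discretisation of G(s)."""
    numd, dend, _ = cont2discrete((num, den), T, method='zoh')
    c = numd.flatten()
    while abs(c[0]) < 1e-12:      # strip numerically-zero leading coefficients
        c = c[1:]
    return np.abs(np.roots(c))

print(discrete_zero_radii(0.01))  # a "sampling zero" appears outside the unit circle
print(discrete_zero_radii(2.0))   # for a long sample time all zeros lie inside
```

For T = 0.01 one of the so-called sampling zeros sits near |z| ≈ 3.7, so inverting this discrete model gives an unstable controller pole, exactly as the paragraph above warns; at T = 2 the discrete inverse is stable.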
2.1.3 The sampling theorem and aliases
Dinsdale was a gentleman. And what's more he knew how to treat a female impersonator
— John Cleese in Monty Python (15-9-1970)
The sampling theorem gives the conditions necessary to ensure that the inverse sampling procedure is a one to one relationship. To demonstrate the potential problems when sampling, consider the case where we have two sinusoidal signals, but at different frequencies,¹

    y₁ = −sin(2π·(7/8)·t)   and   y₂ = sin(2π·(1/8)·t)
If we sample these two signals relatively rapidly, say at 25 Hz (T = 40 ms), then we can easily see two distinct sine curves. However if we sample at T = 1 s, we obtain identical points.
t = [0:0.04:8]';
y1 = -sin(2*pi*7/8*t);
y2 = sin(2*pi/8*t);
plot(t,[y1, y2])
As shown opposite, we note that y₁ (solid) and y₂ (dashed) just happen to coincide at t = 0, 1, 2, 3, . . . (•).

[Plot titled 'Aliasing': both sinusoids over 0-8 s, intersecting at every integer second.]
Consequently, at the slower sampling rate of 1 Hz, we cannot distinguish between the two different signals. This is the phenomenon known as aliasing since one of the frequencies 'pretends' to be another. In conclusion, two specific sinusoids of different frequencies can have identical sampled signals. Thus in the act of taking samples from a continuous measurement, we have lost some information. Since we have experienced a problem when we sample too slowly, it is reasonable to ask what the minimum rate is so that no aliasing occurs. This question is answered by the sampling theorem which states:

    To recover a signal from its sample, one must sample at least two times a period, or alternatively sample at a rate twice the highest frequency of interest in the signal.

Alternatively, the highest frequency we can unambiguously reconstruct for a given sampling rate, 1/T, is half of this, or 1/(2T). This is called the Nyquist frequency, f_N.
In the second example above when we sampled at 1 Hz, the sampling radial velocity was ω_s = 2π = 6.28 rad/s. This was satisfactory to reconstruct the low frequency signal (f₁ = 1/8 Hz) since 2ω₁ = 1.58 rad/s. We are sampling faster than this minimum, so we can reconstruct this signal. However for the faster signal (f₂ = 7/8 Hz), we cannot reconstruct this signal since 2ω₂ = 11.0 rad/s, which is faster than the sampling radial velocity.

¹ This example was adapted from Franklin & Powell, p81.
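We can confirm the aliasing numerically: sampled at exactly 1 Hz, the two sinusoids above are indistinguishable. A quick check (in Python rather than the book's MATLAB):

```python
import numpy as np

# Sample both sinusoids of the example at 1 Hz (T = 1 s)
t = np.arange(0, 9)                  # sample instants 0, 1, ..., 8 seconds
y1 = -np.sin(2*np.pi*(7/8)*t)
y2 = np.sin(2*np.pi*(1/8)*t)

# The 7/8 Hz signal aliases exactly onto the 1/8 Hz signal
print(np.max(np.abs(y1 - y2)))       # zero to machine precision
```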
2.1.4 Discrete frequency
If we sample the continuous signal,

    x(t) = A cos(ωt)

with a sample time of T,

    x(nT) = A cos(ωnT) = A cos(Ωn)

where the digital frequency, Ω, is defined as

    Ω ≝ ωT = 2πfT = 2πf/f_s

then the range of analogue frequencies is 0 < f < ∞, while the range of digital frequencies is limited by the Nyquist sampling limit, f_s/2, giving the allowable range for the digital frequency as

    0 ≤ Ω ≤ π

MATLAB unfortunately decided on a slightly different standard in the SIGNAL PROCESSING toolbox. Instead of a range from zero to π, MATLAB uses a range from zero to 2 where 1 corresponds to half the sampling frequency or the Nyquist frequency. See [42].
In summary:

                          symbol                units
    sample time           T or ∆t               s
    sampling frequency    f_s = 1/T             Hz
    angular velocity      ω = 2πf               rad/s
    digital frequency     Ω = ωT = 2πf/f_s      rad

where the allowable ranges are:

    0 ≤ ω < ∞,  continuous
    0 ≤ Ω ≤ π,  sampled

and the Nyquist frequency, f_N, and the dimensionless Nyquist frequency, Ω_N, are:

    f_N = f_s/2 = 1/(2T)   [Hz]
    Ω_N = 2πf_N/f_s = π
It is practically impossible to avoid aliasing problems when sampling using only digital filters. Almost all measured signals are corrupted by noise, and this noise usually has some high or even infinite frequency components; the noise is not band limited. With this noise, no matter how fast we sample, we will always have some reflection of a higher frequency component that appears as an impostor or alias frequency.
If aliasing is still a problem, and you cannot sample at a higher rate, then you can insert a low pass analogue filter between the measurement and the analogue to digital converter (sampler). The analogue filter, in this case known as an anti-aliasing filter, will band-limit the signal, but not corrupt it with any aliasing. Expensive high fidelity audio equipment will still use analogue filters in this capacity. Analogue and digital filters are discussed in more detail in chapter 5.
Detecting aliases
Consider the trend y(t) in Fig. 2.3 where we wish to estimate the important frequency components of the signal. It is evident that y(t) is comprised of one or two dominating harmonics.
Figure 2.3: Part of a noisy time series with unknown frequency components. The continuous underlying signal (solid) is sampled at T = 0.7 s (•), and at T = 1.05 s (○). [Plot: output versus time over 0-7 s.]
The spectral density when sampling at T = 0.7 s (• in Fig. 2.3), given in the upper trend of Fig. 2.4, exhibits three distinct peaks. These peaks are the principal frequency components of the signal and are obtained by plotting the absolute value of the Fourier transform of the time signal², |DFT{y(t)}|. Reading off the peak positions, and for the moment overlooking any potential
[Two power spectra on a logarithmic scale versus frequency 0-0.8 Hz: the upper for T_s = 0.7 s with f_N = 0.71 Hz, the lower for T_s = 1.05 s with f_N = 0.48 Hz.]
Figure 2.4: The frequency components of a signal sampled at T_s = 0.7 s (upper) and T_s = 1.05 s (lower). The Nyquist frequencies for both cases are shown as vertical dashed lines. See also Fig. 2.5.
problems with undersampling, we would expect y(t) to be something like

    y(t) ≈ sin(2π·0.1t) + sin(2π·0.5t) + sin(2π·0.63t)
However in order to construct Fig. 2.4 we had to sample the original time series y(t), possibly introducing spurious frequency content. The Nyquist frequency f_N = 1/(2T_s) is 0.7143 Hz and is shown as a vertical dashed line in Fig. 2.4(top). The power spectrum is reflected in this line, but is not shown in Fig. 2.4.

² More about the spectral analysis or the power spectral density of signals comes in chapter 5.
If we were to re-sample the process at a different frequency and re-plot the power density plot, then the frequencies that were aliased will move in this second plot. The ○ points in Fig. 2.3 are sampled at T = 1.05 s with the corresponding spectral power plot in Fig. 2.4. The important data from Fig. 2.4 is repeated below.

    Curve    T_s (s)   f_N (Hz)   peak 1   peak 2   peak 3
    top      0.7       0.7143     0.1      0.50     0.63
    bottom   1.05      0.4762     0.1      0.152    0.452
First we note that the low frequency peak (f₁ = 0.1 Hz) has not shifted from curve a (top) to curve b (bottom), so we would be reasonably confident that f₁ = 0.1 Hz and is not corrupted by the sampling process.
However, the other two peaks have shifted, and this shift must be due to the sampling process. Let us hypothesize that f₂ = 0.5 Hz. If this is the case, then it will appear as an alias in curve b since the Nyquist frequency for curve b (f_N(b) = 0.48) is less than the proposed f₂ = 0.5, but it will appear in the correct position on curve a. The apparent frequency f̂₂(b) on curve b will be

    f̂₂(b) = 2f_N(b) − f₂ = 2 × 0.4762 − 0.5 = 0.4524

which corresponds to the third peak on curve b. This would seem to indicate that our hypothesis is correct for f₂. Fig. 2.5 shows this reflection.
[The two power spectra of Fig. 2.4 redrawn, with the spectrum mirrored about the Nyquist frequency in each case.]
Figure 2.5: Reflecting the frequency response in the Nyquist frequency from Fig. 2.4 shows why one of the peaks is evident in the frequency spectrum at T_s = 1.05, but what about the other peak?
Now we turn our attention to the final peak: f₃(a) = 0.63 and f₃(b) = 0.152. Let us again try the hypothesis that f₃(a) is the true frequency, 0.63 Hz. If this were the case the apparent frequency on curve b would be f̂₃(b) = 2f_N(b) − f₃ = 0.3224 Hz. There is no peak on curve b at this frequency so our guess is probably wrong. Let us suppose that the peak at f₃(a) is the first harmonic. In that case the true frequency will be f₃ = 2f_N(a) − f̂₃(a) = 0.8 Hz. Now we check using curve b. If the true third frequency is 0.8 Hz, then the apparent frequency on curve b will be f̂₃(b) = 2f_N(b) − f₃ = 0.153 Hz. We have a peak here which indicates that our guess is a good one. In summary, a reasonable guess for the unknown underlying function is

    y(t) ≈ sin(2π·0.1t) + sin(2π·0.5t) + sin(2π·0.8t)

although we can never be totally sure of the validity of this model. At best we could either re-sample at a much higher frequency, say f_s > 10 Hz, or introduce an analogue low pass filter to cut out, or at least substantially reduce, any high frequencies that may be reflected.
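The alias arithmetic used above, f̂ = 2f_N − f whenever f exceeds f_N, can be packaged in a small folding helper (a hypothetical function, not from the book) and checked against the peak table:

```python
def apparent_freq(f, fs):
    """Fold a true frequency f (Hz) into the baseband [0, fs/2] seen after sampling at fs."""
    f = f % fs                       # remove whole multiples of the sampling frequency
    return fs - f if f > fs/2 else f # reflect about the Nyquist frequency if needed

# The section's hypothesised true components: 0.1, 0.5 and 0.8 Hz
for Ts in (0.7, 1.05):
    fs = 1/Ts
    print(Ts, [round(apparent_freq(f, fs), 3) for f in (0.1, 0.5, 0.8)])
```

Sampling at T_s = 0.7 s the three components appear at 0.1, 0.5 and 0.629 Hz, and at T_s = 1.05 s they appear at 0.1, 0.452 and 0.152 Hz, matching the peaks read off Fig. 2.4.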
2.2 Finite difference models
To compute complex mathematical operations, such as differentiation or integration, on a digital computer, we must first discretise the continuous equation into a discrete form. Discretising is where we convert a continuous time differential equation into a difference equation that is then possible to solve on a digital computer.
Figure 2.6: In Nathaniel Hawthorne's The Scarlet Letter, Hester Prynne is forced to wear a large, red 'A' on her chest when she is found guilty of adultery and refuses to name the father of her illegitimate child.
In control applications, we are often required to solve ordinary differential equations such as

    dy/dt = f(t, y)                                                       (2.3)

for y(t). The derivative can be approximated by the simple backward difference formula

    dy/dt ≈ (y_t − y_{t−T}) / T                                           (2.4)

where T is the step size in time or the sampling interval. Provided T is kept sufficiently small, (but not zero), the discrete approximation to the continuous derivative is reasonable. Inserting this approximation, Eqn. 2.4, into our original differential equation, Eqn. 2.3, and re-arranging for y_t results in the famous Euler backward difference scheme;

    y_t = y_{t−T} + T f(t − T, y_{t−T})                                   (2.5)

Such a scheme is called a recurrence scheme since a previous solution, y_{t−T}, is used to calculate y_t, and is suitable for computer implementation. The beauty of this crude method of differentiation is its simplicity and versatility. Note that we can discretise almost any function, linear or nonlinear, without needing to solve analytically the original differential equation. Of course whether our approximation to the original problem is adequate is a different story, and this complex but important issue is addressed in the field of numerical analysis.
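The recurrence of Eqn. 2.5 is only a few lines of code. Here is a minimal sketch (in Python rather than the book's MATLAB; the helper name is invented), applied to the test problem dy/dt = −y whose exact solution is e^(−t):

```python
import math

# Euler recurrence y_t = y_{t-T} + T*f(t-T, y_{t-T}) applied to dy/dt = -y,
# y(0) = 1, whose exact solution is y(t) = exp(-t)
def euler(f, y0, T, n):
    y = [y0]
    for k in range(n):
        y.append(y[-1] + T*f(k*T, y[-1]))  # step along using the previous solution
    return y

T, n = 0.01, 100                           # integrate out to t = 1
y = euler(lambda t, y: -y, 1.0, T, n)
print(y[-1], math.exp(-1.0))               # close, and closer still as T shrinks
```

Halving T roughly halves the error, which is the expected first-order behaviour of the Euler scheme.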
Problem 2.1
1. What is the finite difference approximation for a third order derivative?
2. Write down the second order finite difference approximation to

    τ² d²y/dt² + 2ζτ dy/dt + y = ku
2.2.1 Difference equations
Difference equations are the discrete counterpart to continuous differential equations. Often the
difference equations we are interested in are the result of the discretisation of a continuous dif-
ferential equation, but sometimes, as in the case below, we may be interested in the difference
equation in its own right. Difference equations are much easier to simulate and study in a digital computer than a differential equation, for example, since they are already written in a form where we can just step along following a relation

    x_{k+1} = f(x_k)

rather than resorting to integration. This step by step procedure is also known as a mathematical mapping since we map the old vector x_k to a new vector x_{k+1}.
Hénon's chaotic attractor

The system of discrete equations known as Hénon's attractor,

    x_{k+1} = (y_k + 1) − 1.4 x_k²,   x₀ = 1                              (2.6)
    y_{k+1} = 0.3 x_k,                y₀ = 1                              (2.7)

is an interesting example of a discrete mapping which exhibits chaotic behaviour.
We will start from the point x₀ = (x₀, y₀) = (1, 1) and investigate the subsequent behaviour using SIMULINK. Since Hénon's attractor is a discrete system, we will use unit delays to shift back from x_{k+1} to x_k, and to better visualise the results we will plot an (x, y) phase plot. The SIMULINK simulation is given in Fig. 2.7. Compare this SIMULINK construction (which I have drawn deliberately to flow from right to left) with the way we read the system equations, Eqn. 2.6-Eqn. 2.7.
[Block diagram built from unit delay (1/z), gain (1.4 and 0.3), product, sum and constant blocks, with To Workspace and XY Graph sinks.]
Figure 2.7: Hénon's attractor, (Eqns 2.6-2.7), modelled in SIMULINK. See also Fig. 2.8.
The time response of x and y, (left figures in Fig. 2.8), of this mapping is not particularly interesting, but the result of the phase plot (without joining the dots), (right figure in Fig. 2.8), is an interesting mapping.

Actually what the chaotic attractor 'means' is not important at this stage; this is, after all, only a demonstration of a difference equation. However the points to note in this exercise are that we never actually integrated any differential equations, but only stepped along using the unit delay blocks in SIMULINK. Consequently these types of simulations are trivial to program, and run very fast in a computer. Another example of a purely discrete system is given later in Fig. 6.10.
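The same mapping takes only a few lines in a conventional language. Here is a sketch in Python (the helper name is invented) that steps Eqns. 2.6-2.7 along exactly as the unit delay blocks do:

```python
# Step Henon's mapping (Eqns 2.6-2.7) along directly -- no integration needed
def henon(x, y, n):
    """Return the first n+1 points of the (x, y) orbit."""
    pts = [(x, y)]
    for _ in range(n):
        x, y = (y + 1) - 1.4*x**2, 0.3*x   # simultaneous update of both states
        pts.append((x, y))
    return pts

pts = henon(1.0, 1.0, 100)
print(pts[1])                              # approximately (0.6, 0.3)
print(max(abs(x) for x, _ in pts))         # the orbit stays bounded
```

Plotting the points as an (x, y) scatter, without joining the dots, reproduces the banana-shaped attractor of Fig. 2.8(b).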
[(a) The time responses of x_k and y_k over the first 35 iterations; (b) the (x, y) phase response with the start point marked.]
Figure 2.8: The time response (left) and phase plot (right) of Hénon's attractor as computed by the SIMULINK block diagram in Fig. 2.7.
2.3 The z transform
The sampling of a continuous function f(t) at regular sampling intervals equally spaced T time units apart creates a new function f*(t),

    f*(t) = Σ_{k=0}^{∞} f(kT) δ(t − kT)                                   (2.8)

where δ(x) is the Dirac delta function which is defined as being a perfect impulse at x = 0, and zero elsewhere. Note that the actual value of δ(0) is infinity, but the integral of δ(t) is 1.0. Expanding the summation term in Eqn 2.8 gives

    f*(t) = f₀ δ(t) + f_T δ(t − T) + f_{2T} δ(t − 2T) + f_{3T} δ(t − 3T) + ···

Now suppose we wish to know the value of the third sampled function, f*(t = 3T). For simplicity we will assume the sample time is T = 1.
    f*(3) = f₀δ(3) + f₁δ(2) + f₂δ(1) + f₃δ(0) + f₄δ(−1) + ···

All the terms except for the term containing δ(0) are zero, while δ(0) = ∞. Thus the value of the function is f*(3) = ∞. Often you will see a graph of f*(t) depicted where the heights of the values at the sample times are the same as the height of the continuous distribution such as sketched in Fig. 2.13. Strictly this is incorrect, as it is the strength or integral of f*(t) which is the same as the value of the continuous distribution f(t). I think of the function f*(t) as a series of impulses whose integral is equal to the value of the continuous function f(t).
If we take the Laplace transform of the sampled function f*(t), we get

    L{f*(t)} = L{ Σ_{k=0}^{∞} f_{kT} δ(t − kT) }                          (2.9)

Now the function f_{kT} is assumed constant so it can be factored out of the Laplace transform, and the Laplace transform of the impulse δ(t) is simply 1.0. If the impulse is delayed kT units, then the Laplace transform is e^{−kTs} · 1. Thus Eqn. 2.9 simplifies to

    L{f*(t)} = Σ_{k=0}^{∞} f_{kT} e^{−kTs} = Σ_{k=0}^{∞} f_k z^{−k}

where we have defined

    z ≝ e^{sT}                                                            (2.10)

In summary, the z-transform of the sampled function f*(t) is defined as

    F(z) ≝ Z{f*(t)} = Σ_{k=0}^{∞} f_k z^{−k}                              (2.11)

The function F(z) is an infinite series, but can be written in a closed form if f(t) is a rational function.
2.3.1 z-transforms of common functions
We can use the definition of the z-transform, Eqn. 2.11, to compute the z-transform of common functions such as steps, ramps, sinusoids etc. In this way we can build for ourselves a table of z-transforms such as those found in many mathematical or engineering handbooks.
Sampled step function
The unit sampled step function is simply s(kT) = 1, k ≥ 0. The z-transform of s(kT) following the definition of Eqn. 2.11 is

    S(z) = Σ_{k=0}^{∞} 1·z^{−k} = 1 + 1·z^{−1} + 1·z^{−2} + 1·z^{−3} + ··· (2.12)

By using the sum to infinity for a geometric series,³ we obtain the closed form equivalent for Eqn. 2.12 as

    S(z) = 1 / (1 − z^{−1})                                               (2.13)
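A quick numerical sanity check of Eqn. 2.13 (a Python sketch, not from the book): for any |z| > 1 the truncated series of Eqn. 2.12 converges to the closed form.

```python
# Check Eqn. 2.13 numerically: for |z| > 1 the series 1 + z^-1 + z^-2 + ...
# converges to 1/(1 - z^-1). Here we pick z = 2.
z = 2.0
partial = sum(z**-k for k in range(100))   # truncated version of Eqn. 2.12
closed = 1/(1 - 1/z)                       # Eqn. 2.13
print(partial, closed)                     # both essentially 2.0
```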
Ramp and exponential functions
For the ramp function, x(k) = kT for k ≥ 0, and the exponential function, x(k) = e^{−akT}, k ≥ 0, we could go through the same formal analytical procedure, but in this case we can use the ztrans command from the Symbolic Toolbox in MATLAB.
³ The sum of a geometric series of n terms is

    a + ar + ar² + ··· + ar^{n−1} = a(1 − rⁿ)/(1 − r),   r ≠ 1

and if |r| < 1, then the sum for an infinite number of terms converges to

    S_∞ = a/(1 − r)
The z-transform of the ramp function, Z{kT}, using the Symbolic Toolbox for MATLAB:

>> syms k T
>> ztrans(k*T)
ans =
T*z/(z-1)^2

The z-transform of the exponential function, Z{e^{−akT}}:

>> syms k T a
>> ztrans(exp(-a*k*T))
ans =
z/exp(-a*T)/(z/exp(-a*T)-1)
>> simplify(ans)
ans =
z*exp(a*T)/(z*exp(a*T)-1)
In a similar manner we can build our own table of z-transforms of common functions.
Final and initial value theorems
Tables and properties of z-transforms of common functions are given in many handbooks and texts such as [148, p49,67], but two of the more useful theorems are the initial value and the final value theorems given in Table 2.1. As in the continuous case, the final value theorem is only applicable if the transfer function is stable.

Table 2.1: Comparing final and initial value theorems for continuous and discrete systems

                                      Continuous          Discrete
    Initial value, lim_{t→0} y(t)     lim_{s→∞} sY(s)     lim_{z→∞} Y(z)
    Final value,   lim_{t→∞} y(t)     lim_{s→0} sY(s)     lim_{z→1} (1 − z^{−1}) Y(z)
2.4 Inversion of z-transforms


The usefulness of the z-transform is that we can do algebraic manipulations on discrete difference equations in the same way we can do algebraic manipulations on differential equations using the Laplace transform. Generally we:

1. Convert to z-transforms, then
2. do some relatively simple algebraic manipulations to make it easier to
3. invert back to the time domain.

The final step, the inversion, is the tricky part. The process of obtaining f*(t) back from F(z) is called inverting the z-transform, and is written as

    f*(t) = Z^{−1}{F(z)}                                                  (2.14)

Note that only the sampled function, f*(t), is returned, not the original f(t). Thus the full inversion process to the continuous domain f(t) is a one to many operation.
There are various ways to invert z-transforms:
1. Use tables of z-transforms and their inverses in handbooks, or use a symbolic manipulator. (See §2.4.1.)
2. Use partial-fraction expansion and tables. In MATLAB use residue to compute the partial fractions. (See §2.4.2.)
3. Long division for rational polynomial expressions in the discrete domain. In MATLAB use deconv to do the polynomial division. (See §2.4.3.)
4. The computational approach, which, while suitable for computers, has the disadvantage that the answer is not in a closed form. (See §2.4.4.)
5. An analytical formula from complex variable theory. (Not useful for engineering applications, see §2.4.5.)
2.4.1 Inverting z-transforms symbolically
The easiest, but perhaps not the most instructive, way to invert z-transforms is to use a computer algebra package or symbolic manipulator such as the Symbolic Toolbox or MUPAD. One simply enters the z-transform, then requests the inverse using the iztrans function in a manner similar to the forward direction shown on page 24.
>> syms z                          % Define a symbolic variable z
>> G = (10*z+5)/(z-1)/(z-1/4)      % Construct G(z) = (10z+5)/((z-1)(z-1/4))
G =
(10*z+5)/(z-1)/(z-1/4)
>> pretty(G)                       % check it
       10 z + 5
  -----------------
  (z - 1) (z - 1/4)
>> iztrans(G)                      % Invert the z-transform, Z^{-1}{G(z)}
ans =
20*kroneckerDelta(n, 0) - 40*(1/4)^n + 20
The Kronecker delta function, kroneckerDelta(n, 0), is a shorthand way of expressing piecewise functions in MATLAB: the expression kroneckerDelta(n, 0) returns 1 when n = 0, and 0 for all other values.

This is a rather messy and needlessly complicated artifact due to the fact that the Symbolic Toolbox does not know that n is defined as positive. We can explicitly inform MATLAB that n > 0, and then we get a cleaner solution.
>> syms z
>> syms n positive
>> G = (10*z+5)/(z-1)/(z-1/4);
>> y = iztrans(G)                  % Invert to get y[n] = Z^{-1}{G(z)}
y =
20-40*(1/4)^n
>> limit(y,n,inf)                  % y[inf]
ans =
20

The last command computed the steady-state by taking the limit of y[n] as n → ∞.
2.4.2 The partial fraction method
Inverting transforms by partial fractions is applicable for both discrete z-transforms and continuous Laplace transforms. In both cases, we find an equivalent expression for the transform in simple terms that are summed together. Hopefully we can then consult a set of tables, such as [148, Table 2.1 p49], to invert each term individually. The inverse of the sum is the sum of the inverses, meaning that these expressions when summed together give the full inversion. The MATLAB function residue forms partial fractions from a continuous transfer function, although the help file warns that this operation is numerically poorly conditioned. Likewise the routine residuez (that is, residue with a trailing 'z') is designed to extract partial fractions in a form suitable when using z-transforms.
As an example, suppose we wish to invert

    G(s) = (−4s² − 58s − 24)/(s³ + 2s² − 24s) = (−4s² − 58s − 24)/(s(s + 6)(s − 4))

using partial fractions, noting that it contains no multiple or complex roots.
It is easy to spot the pole at s = 0, and the two remaining poles can be found by synthetic division, or by factorising using the roots([1 2 -24 0]) command. The partial fraction decomposition of G(s) can be written as
    G(s) = (−4s² − 58s − 24)/(s(s + 6)(s − 4)) = A/s + B/(s + 6) + C/(s − 4)

where the coefficients A, B and C can be found either by equating the coefficients or using the 'cover-up' rule shown below.

    A = (−4s² − 58s − 24)/((s + 6)(s − 4)) |_{s=0}  = 1
    B = (−4s² − 58s − 24)/(s(s − 4))       |_{s=−6} = 3
    C = (−4s² − 58s − 24)/(s(s + 6))       |_{s=4}  = −8
In MATLAB we can use the residue command to extract the partial fractions.

>> B = [-4 -58 -24];          % Numerator of G(s) = (-4s^2 - 58s - 24)/(s(s+6)(s-4))
>> A = [ 1 2 -24 0];          % Denominator of G(s)
>> [r,p,k] = residue(B,A)     % Find partial fractions
r =                           % residues
     3
    -8
     1
p =                           % factors or poles
    -6
     4
     0
k =                           % No extra parts
    []
The order of the residue and pole coefficients produced by residue is the same for each, so the partial fraction decomposition is again

    G(s) = (−4s² − 58s − 24)/(s³ + 2s² − 24s) = 3/(s + 6) − 8/(s − 4) + 1/s
and we can invert each term individually, perhaps using tables, to obtain the time domain solution

    g(t) = 3e^{−6t} − 8e^{4t} + 1
If the rational polynomial has repeated roots or complex poles, slight modifications to the procedure are necessary, so you may need to consult the help file.
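For readers working outside MATLAB, scipy ships an equivalent residue routine. The following sketch (not from the book) cross-checks the expansion above; note that scipy does not document the ordering of the returned poles, so we pair each pole with its residue explicitly:

```python
from scipy.signal import residue

# G(s) = (-4s^2 - 58s - 24) / (s^3 + 2s^2 - 24s), the example above
r, p, k = residue([-4, -58, -24], [1, 2, -24, 0])

# pair each pole with its residue, since the returned order is not guaranteed
pairs = {int(round(pole.real)): res.real for pole, res in zip(p, r)}
print(pairs)   # residue 3 at s = -6, -8 at s = 4, and 1 at s = 0
```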
Special cases for z-transforms
We can invert z-transforms in a manner similar to Laplace transforms, but if you consult a table of z-transforms and their inverses in standard control textbooks, such as Table 2-1 in [149, p49], you will discover that the table is written in terms of factors of the form z/(z + a) (equivalently 1/(1 + az^{−1})), rather than say 1/(z + a). This means that we should first perform the partial fraction decomposition on X(z)/z, rather than just X(z) as in the continuous case. An outline to follow is given in Algorithm 2.1.
Algorithm 2.1 Inverting z-transforms by partial fractions
To invert a z-transform, X(z), using partial fractions, do the following:
1. Divide X(z) by z, obtaining X(z)/z.
2. Find the partial fractions of X(z)/z, perhaps using residue.
3. Multiply each partial fraction by z to obtain fractions of the form z/(z + a).
4. Find the inverse of each fraction separately using tables, and sum them together to find x(t).
Symbolic partial fractions in MATLAB
Suppose we want to invert

    G(z) = 10z/(z² + 4z + 3)

to g[n] using partial fractions. Unfortunately there is no direct symbolic partial fraction command, but Scott Budge from Utah University found the following sneaky trick which involves first integrating, then differentiating the polynomial. This round-about way works because MATLAB integrates the expression by internally forming partial fractions and then integrating term by term. Taking the derivative brings you back to the original expression, but now in partial fraction form.
>> syms z
>> G = 10*z/(z^2 + 4*z + 3);
>> G = diff(int(G/z))              % Extract partial fractions of G(z)/z
G =
5/(z+1)-5/(z+3)
>> G = expand(G*z)
G =
5*z/(z+1)-5*z/(z+3)                % Gives us G(z) = 5z/(z+1) - 5z/(z+3)
>> syms n positive; iztrans(G,z,n)
ans =
-5*(-3)^n+5*(-1)^n
While this strategy currently works, we should take note that in future versions of the Symbolic Toolbox it may not continue to work. In the above example we also have the option of using residuez.
2.4.3 Long division
Consider the special case of the rational polynomial where the denominator is simply 1,

    F(z) = c₀ + c₁z^{−1} + c₂z^{−2} + c₃z^{−3} + ···

The inverse of this transform is trivial since the time solution at sample number k, or t = kT, is simply the coefficient of the term z^{−k}. Consequently it follows that f(0) = c₀, f(T) = c₁, f(2T) = c₂, . . . , f(kT) = c_k, etc. Thus if we have a rational transfer function, all we must do is to synthetically divide out the two polynomials to obtain a single, albeit possibly infinite, polynomial in z.
Ogata refers to this approach as 'direct division', [148, p69]. In general the division will result in an infinite power series in z, and so is not a particularly elegant closed-form solution, but for most stable systems the power series can eventually be truncated with little error.
So to invert a z-transform using long division, we:

1. Convert the z-transform to a (possibly infinite) power series by dividing the denominator into the numerator using long division or deconv.
2. The coefficients of z^{−k} are the solutions at the sample times kT.

Long division by hand is tedious and error-prone, but we can use the polynomial division capabilities of MATLAB's deconv to do this long division automatically. Since deconv only performs 'integer' division returning a quotient and remainder, we must 'fool' it into giving us the infinite series by padding the numerator with zeros.
To invert

    Y(z) = (z² + z) / ((z − 1)(z² − 1.1z + 1)) = (z² + z) / (z³ − 2.1z² + 2.1z − 1)

using deconv for the long division, we pad the numerator on the right with say 5 zeros, (that is, we multiply by z⁵), and then do the division.
    (z⁷ + z⁶) / (z³ − 2.1z² + 2.1z − 1) = z⁴ + 3.1z³ + 4.41z² + 3.751z + 1.7161 (the quotient Q) + remainder

We are not particularly interested in the remainder polynomial. The more zeros we add, (5 in the above example), the more solution points we get.
>> Yn = [1,1,0];                        % Numerator of Y(z)
>> Yd = conv([1,-1],[1 -1.1 1]);        % Denominator of Y(z) is (z-1)(z^2 - 1.1z + 1)
>> [Q,R] = deconv([Yn,0,0,0,0,0],Yd)    % Multiply by z^5 to zero pad, then do long division
Q =
    1.0000    3.1000    4.4100    3.7510    1.7161
>> dimpulse(Yn,Yd,7);                   % Matlab's check
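numpy's polydiv plays the same role as deconv, so the long division above can be cross-checked outside MATLAB (a sketch, not from the book):

```python
import numpy as np

# Long division of (z^7 + z^6) by (z^3 - 2.1z^2 + 2.1z - 1); np.polydiv
# plays the role of MATLAB's deconv
num = [1, 1, 0, 0, 0, 0, 0, 0]             # [1,1,0] padded with five zeros
den = np.convolve([1, -1], [1, -1.1, 1])   # (z-1)(z^2 - 1.1z + 1)
Q, R = np.polydiv(num, den)
print(Q)    # the first five samples: 1, 3.1, 4.41, 3.751, 1.7161
```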
[Stem plot of the output over the first six samples.]
Figure 2.9: Inverting z-transforms using dimpulse for the first six samples. It is possible also to construct a discrete transfer function object and then use the generic impulse routine.
We can also numerically verify the inversion using dimpulse for six samples as shown in Fig. 2.9.
Problem 2.2
1. The pulse transfer function of a process is given by

    Y(z)/X(z) = 4(z + 0.3) / (z² − z + 0.4)

Calculate the response of y(nT) to a unit step change in x using long division.

2. Determine the inverse by long division of G(z)

    G(z) = z(z + 1) / ((z − 1)(z² − z + 1))
2.4.4 Computational approach
The computational method is so-called because it is convenient for a computer solution technique
such as MATLAB as opposed to an analytical explicit solution. In the computational approach we
convert the z-transform to a difference equation and use a recurrence solution. Consider the
following transform (used in the example from page 25) which we wish to invert back to the time
domain.
    X(z)/U(z) = (10z + 5) / ((z - 1)(z - 0.25))
              = (10z + 5) / (z^2 - 1.25z + 0.25)
              = (10z^-1 + 5z^-2) / (1 - 1.25z^-1 + 0.25z^-2)          (2.15)
The inverse is equivalent to solving for x(t) when u(t) is an impulse function. The transform can
be expanded and written as a difference equation

    x_k = 1.25x_{k-1} - 0.25x_{k-2} + 10u_{k-1} + 5u_{k-2}          (2.16)
Since U(z) is an impulse, U(z) = 1, which means that u(0) = 1 and u(k) = 0 for k > 0. Now we substitute k = 0 to start, and note that u(k) = x(k) = 0 when k < 0 by definition. We now have enough information to compute x(0).
    x_0 = 1.25x_{-1} - 0.25x_{-2} + 10u_{-1} + 5u_{-2}
        = 1.25(0) - 0.25(0) + 10(0) + 5(0)
        = 0
Continuing, we substitute k = 1 into Eqn 2.16 and, using our previously computed x_0, find the next term.

    x_1 = 1.25x_0 - 0.25x_{-1} + 10u_0 + 5u_{-1}
        = 1.25(0) - 0.25(0) + 10(1) + 5(0)
        = 10
and just to clarify this recurrence process further, we can try the next iteration of Eqn. 2.16 with
k = 2,
    x_2 = 1.25x_1 - 0.25x_0 + 10u_1 + 5u_0
        = 1.25(10) - 0.25(0) + 10(0) + 5(1)
        = 17.5
Now we can use the full recurrence relation, (Eqn 2.16), in a stepwise manner to build up the solution as shown in Table 2.2. All the terms on the right hand side of Eqn 2.16 are either known initially (u(t)), or involve past information, hence it is possible to solve. Note that in this inversion scheme, u(t) can be any known input series.

Table 2.2: The inversion of a z-transform using the computational method. Only the first 5 samples and the final value are calculated. The final value is as calculated using the symbolic manipulator earlier on page 25.
    time (kT)   u(k)    x(k)
        0        1       0
        1        0      10
        2        0      17.5
        3        0      19.375
        4        0      19.8438
        :        :       :
        oo       0      20
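The recurrence of Eqn. 2.16 is equally easy to code directly in any language; the short Python loop below (our sketch, not from the text) reproduces Table 2.2.

```python
# Recurrence x_k = 1.25*x_{k-1} - 0.25*x_{k-2} + 10*u_{k-1} + 5*u_{k-2}
# driven by an impulse input u = 1, 0, 0, ...
N = 6
u = [1.0] + [0.0] * (N - 1)
x = [0.0] * N
for k in range(N):
    xm1 = x[k - 1] if k >= 1 else 0.0  # past states are zero by definition
    xm2 = x[k - 2] if k >= 2 else 0.0
    um1 = u[k - 1] if k >= 1 else 0.0
    um2 = u[k - 2] if k >= 2 else 0.0
    x[k] = 1.25 * xm1 - 0.25 * xm2 + 10.0 * um1 + 5.0 * um2
print(x)  # 0, 10, 17.5, 19.375, 19.84375, ...
```

Each pass of the loop is exactly one row of Table 2.2, so the hand calculation and the program agree sample for sample.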
MATLAB is well suited for this type of computation. To invert Eqn. 2.15 we can use the discrete filter command. We rewrite Eqn. 2.15 in the form of a rational polynomial in z^-1,

    G(z) = (b_0 + b_1 z^-1 + b_2 z^-2) / (1 + a_1 z^-1 + a_2 z^-2)
where the numerator and denominator are entered as row vectors in decreasing powers of z. We
then create an impulse input vector u with say 6 samples and then compute the output of the
system
>> b = [0.0, 10.0, 5.0]    % Numerator (10z^-1 + 5z^-2)
>> a = [1.0, -1.25, 0.25]  % Denominator (1 - 1.25z^-1 + 0.25z^-2)
>> u = [1, zeros(1,5)]     % impulse input
u =
     1     0     0     0     0     0
>> x = filter(b,a,u)       % compute output
x =
         0   10.0000   17.5000   19.3750   19.8438   19.9609
which gives the same results as Table 2.2.
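SciPy's lfilter plays the same role as MATLAB's filter command, so the inversion above can be cross-checked in Python. This is a sketch assuming SciPy is available.

```python
import numpy as np
from scipy.signal import lfilter

b = [0.0, 10.0, 5.0]        # numerator (10z^-1 + 5z^-2)
a = [1.0, -1.25, 0.25]      # denominator (1 - 1.25z^-1 + 0.25z^-2)
u = np.r_[1.0, np.zeros(6)] # impulse input, 7 samples

x = lfilter(b, a, u)        # impulse response of G(z)
print(x)                    # 0, 10, 17.5, 19.375, 19.8438, 19.9609, 19.9902
```

The samples match Table 2.2 and visibly converge towards the final value of 20.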
The control toolbox has a special object for linear time invariant (LTI) systems. We can create a
discrete system with the transfer function command, tf, and by specifying a sampling time, we
imply to MATLAB that it is a discrete system.
>> sysd = tf([10 5],[1 -1.25 0.25],1) % System of interest (10z+5)/(z^2 - 1.25z + 0.25)
Transfer function:
      10 z + 5
-------------------
z^2 - 1.25 z + 0.25
Sampling time: 1

>> y = impulse(sysd) % compute the impulse response
y =
         0
   10.0000
   17.5000
   19.3750
   19.8438
   19.9609
Finally, the contour integral method can also be used to invert z-transforms, [148, 33], but Seborg et al. maintain it is seldom used in engineering practice, [179, p571], because it is fraught with numerical implementation difficulties. These difficulties are also present in the continuous equivalent; some of which are illustrated in the next section.
2.4.5 Numerically inverting the Laplace transform
Surprisingly, the numerical inversion of continuous transfer functions is considered far less important than the computation of the inverse of discrete transfer functions. This is fortunate because the numerical inversion of Laplace transforms is devilishly tricky. Furthermore, it is unlikely that you would ever want to numerically invert a Laplace transform in control applications, since most problems involve little more than a rational polynomial with a possible exponential term. For these problems we can easily factorise the polynomial and use partial fractions, or use the step or impulse routines for continuous linear responses.
However in the rare cases where we have a particularly unusual F(s) which we wish to convert back to the time domain, we might be tempted to use the analytical expression for the inverse directly,

    f(t) = (1/2*pi*j) * integral from (sigma - j*oo) to (sigma + j*oo) of F(s) e^{st} ds          (2.17)

where sigma is chosen to be larger than the real part of any singularity of F(s). Eqn. 2.17 is sometimes known as the Bromwich integral or Mellin's inverse formula.
The following example illustrates a straightforward application of Eqn. 2.17 to invert a Laplace transform. However be warned that for all but the most well behaved rational polynomial examples this strategy is not to be recommended as it results in severe numerical roundoff error due to poor conditioning.
Suppose we try to numerically invert a simple Laplace transform,

    F(s) = 3 / (s(s + 5)),    f(t) = (3/5)(1 - e^{-5t})

with a corresponding simple time solution. We can start by defining an anonymous function of the Laplace transform to be inverted.
Fs = @(s) 3.0./(s+5)./s;       % Laplace function to invert F(s) = 3/(s(s+5))
Fsexp = @(s,t) Fs(s).*exp(s*t) % and associated integrand F(s)e^{st}
Now since the largest singularity of F(s) is 0, we can choose the contour path safely to be say sigma = 0.1. We can also approximate the infinite integration interval by reasonably large numbers. It is convenient that the MATLAB routine integral works directly in the complex domain.
c = 0.1;                % Value > than any singularities of F(s)
a = c-100j; b = c+200j; % Contour path approx: c-j*oo to c+j*oo
t = linspace(0,8);      % time range of interest
% Numerically approximate Eqn. 2.17.
ft = 1./(2*pi*1j)*arrayfun(@(ti) integral(@(x) Fsexp(x,ti),a,b), t);
plot(t,ft)              % Compare results in Fig. 2.10(a).
Due to small numerical roundoff however, the returned solution has a small imaginary component, which we can in this instance safely ignore. In this rather benign example, where we know the analytical solution, we can validate the accuracy as shown in Fig. 2.10(a), which it has to be admitted, is not that wonderful. We should also note that in the simplistic implementation above we have not adequately validated that our algorithmic choices, such as the finite integration limits, are appropriate, so we should treat any subsequent result with caution. In fact if you repeat this calculation with a larger value of sigma, or a smaller integration range such as say 1 - 10j to 1 + 10j, then the quality of the solution drops alarmingly. One way to improve the integration accuracy is to use the waypoints option in the integral routine.
Over the years there have been many attempts to improve the inversion strategy as reviewed in [1], but nearly all of the proposals run into the problem of precision. For example, Fig. 2.11 shows the problem when attempting to invert F(s) = 1/sqrt(s^2 + 1) using the Gaver-Stehfest approximation algorithm with a varying number of terms. The Gaver-Stehfest family of schemes are well known to produce inaccurate results for underdamped systems, but when we increase the number of terms in an attempt to improve the accuracy, the results deteriorate due to numerical round off. Directly applying the Bromwich integral to this transform as shown in Fig. 2.10(b) is no better. While the strategy suggested in [1] seemingly circumvents the problem of precision, it does so by brute force using multi-precision arithmetic which is hardly elegant.
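To make the precision issue concrete, here is a minimal double-precision Gaver-Stehfest sketch in Python; the coefficient formula is the standard published one, but the function name and test transform are ours. It works well on the benign F(s) = 3/(s(s+5)), while degrading on oscillatory transforms exactly as Fig. 2.11 illustrates.

```python
import math

def stehfest(F, t, N=12):
    """Approximate f(t) = L^{-1}{F(s)} with the N-term Gaver-Stehfest scheme."""
    ln2t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j**(N // 2) * math.factorial(2 * j) /
                   (math.factorial(N // 2 - j) * math.factorial(j) *
                    math.factorial(j - 1) * math.factorial(k - j) *
                    math.factorial(2 * j - k)))
        total += (-1)**(N // 2 + k) * Vk * F(k * ln2t)
    return ln2t * total

F = lambda s: 3.0 / (s * (s + 5.0))  # the benign transform from above
print(stehfest(F, 1.0))              # close to 3/5*(1 - exp(-5)) = 0.596
```

Note that the scheme only ever evaluates F(s) at real s, which is precisely why it cannot capture oscillatory behaviour well, and why large N makes the alternating-sign coefficient sum blow up in double precision.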
An alternative numerical algorithm is the inverse Laplace transform function invlap contributed by Karl Hollenbeck from the Technical University of Denmark. You can test this routine using one of the Laplace transform pairs in Table 2.3. The first two are standard; those following are more unusual. A more challenging collection of test transforms is given in [1].
The numerical solution to L^{-1}{e^{-1/s}/sqrt(s)} is compared with the analytical solution from Table 2.3 in Fig. 2.12 using a modest tolerance of 10^-2. The routine splits the time axis up into decades, and inverts each separately, which is why we can see some numerical noise starting at t = 10.
All of these numerical inversion strategies, starting from the direct application of the Bromwich
Figure 2.10: Numerically inverting Laplace transforms by direct evaluation of the Bromwich integral: (a) a benign transfer function, L^{-1}{3/(s(s+5))}; (b) a challenging transfer function, L^{-1}{1/sqrt(s^2+1)}. Since this strategy is susceptible to considerable numerical errors, it is not to be recommended.
Figure 2.11: Demonstrating the precision problems when numerically inverting the Laplace transform using the Gaver-Stehfest algorithm with a varying number of terms (8, 16, 20 and 26). The exact inversion is the red solid line while the approximate inversion is given by the markers.
Table 2.3: A collection of Laplace transform pairs suitable for testing numerical inversion strategies.

    Description   F(s)                f(t)                           Comment
    Easy test     1/s^2               t
    Control TF    1/((s+2)(s+3))      e^{-2t} - e^{-3t}
    Oscillatory   1/sqrt(s^2 + 1)     J_0(t)                         Bessel function, see Fig. 2.10(b) & Fig. 2.11.
    Nasty         e^{-1/s}/sqrt(s)    cos(sqrt(4t))/sqrt(pi*t)       See Fig. 2.12.
Figure 2.12: Testing the numerical inversion of the Laplace transform e^{-1/s}/sqrt(s) using the invlap routine.
integral in Eqn. 2.17, through the Gaver-Stehfest algorithm and variants, to the invlap implementation, require some care to get reasonable results. Unfortunately in practical cases it can be difficult to choose suitable algorithmic tuning constants, which means that it is difficult to generate any sort of error bound on the computed curve.
2.5 Discretising with a sample and hold
In section 2.1.1 we saw how the continuous analogue signal must be sampled to be digestible by the process control computer. To analyse this sampling operation, we can approximate the real sampler with an ideal sampler in cascade with a hold element such as given in Fig. 2.13.
When we sample the continuous function f(t), we get a sampled function f*(t), that is, a series of spikes that exist only at the sampling instants as shown in the middle plot of Fig. 2.13. The sampled function f*(t) is undefined for the time between the sampling instants, which is inconvenient given that this is the bulk of the time. Obviously one sensible solution is to retain the last sampled value until a new one is collected. During the sample period, the held value is the same as the last sampled value of the continuous function; f(kT + t) = f(kT) where
Figure 2.13: Ideal sampler and zeroth-order hold: the analogue signal f(t) passes through an ideal sampler to give the sampled signal f*(t), which a hold element converts to the sample-and-held signal f_h(t).
0 <= t < T, shown diagrammatically in the third plot in Fig. 2.13. Since the last value collected is stored or held, this sampling scheme is referred to as a zeroth-order hold. The zeroth-order refers to the fact that the interpolating function used between adjacent values is just a horizontal line. Higher order holds are possible, but the added expense is not typically justified even given the improved accuracy.
We can find the Laplace transform of the zeroth-order hold element by noting that the input is an impulse function, and the output is a rectangular pulse of duration T and height y(t).
Directly applying the definition of the Laplace transform,

    L{zoh} = integral from 0 to oo of y(t) e^{-st} dt = y_t * integral from 0 to T of 1*e^{-st} dt
           = y_t * (1 - e^{-sT})/s
we obtain the transform for the zeroth-order hold. In summary, the transformation of a continuous plant G(s) with zero-order hold is

    G(z) = (1 - z^-1) Z{ G(s)/s }          (2.18)

Note that this is not the same as simply the z-transform of G(s).
Suppose we wish to discretise the continuous plant

    G(s) = 8 / (s + 3)          (2.19)

with a zero-order hold at a sample time of T = 0.1 seconds. We can do this analytically using tables, or we can trust it to MATLAB. First we try analytically and apply Eqn 2.18 to the continuous process. The difficult bit is the z-transformation. We note that
    Z{ G(s)/s } = (8/3) Z{ 3/(s(s + 3)) }          (2.20)

and that this is in the form given in tables of z-transforms such as [148, p50], part of which is repeated here

    X(s)            X(z)
    a/(s(s+a))      (1 - e^{-aT})z^-1 / ((1 - z^-1)(1 - e^{-aT}z^-1))
Inverting using the tables, and substituting in Eqn 2.18 gives

    G(z) = (1 - z^-1) Z{ G(s)/s }
         = (8/3)(1 - z^-1) Z{ 3/(s(s + 3)) }
         = (8/3)(1 - z^-1) * (1 - e^{-3T})z^-1 / ((1 - z^-1)(1 - e^{-3T}z^-1))
         = (8/3) * (1 - e^{-0.3})z^-1 / (1 - e^{-0.3}z^-1)
         = 0.6912z^-1 / (1 - 0.7408z^-1),   for T = 0.1          (2.21)
We can repeat this conversion numerically using the continuous-to-discrete function, c2d. We first define the continuous system as an LTI object, and then convert to the discrete domain using a zeroth-order hold.
>> sysc = tf(8,[1 3]) % Continuous system Gc(s) = 8/(s+3)
Transfer function:
  8
-----
s + 3

>> sysd = c2d(sysc,0.1,'zoh') % convert to discrete using a zoh
Transfer function:
  0.6912
----------
z - 0.7408
Sampling time: 0.1
which once again is Eqn. 2.21. In fact we need not specify the zoh option when constructing a
discrete model using c2d or even in SIMULINK, since a zeroth-order hold is employed by default.
Methods for doing the conversion symbolically are given next in section 2.5.1.
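The same zeroth-order hold conversion can be reproduced with SciPy's cont2discrete, a rough open-source equivalent of c2d. This is a sketch assuming SciPy is available.

```python
import numpy as np
from scipy.signal import cont2discrete

# Gc(s) = 8/(s + 3) discretised with a ZOH at T = 0.1
numd, dend, dt = cont2discrete(([8.0], [1.0, 3.0]), 0.1, method='zoh')

print(numd.ravel())  # approximately [0, 0.6912]
print(dend)          # approximately [1, -0.7408]
```

The numerator 0.6912 and pole at 0.7408 match Eqn. 2.21, since 0.7408 = e^{-0.3}.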
2.5.1 Converting Laplace transforms to z-transforms
In many practical cases we may already have a continuous transfer function of a plant or controller which we wish to discretise. In these circumstances we would like to convert from a continuous transfer function in s, (such as say a Butterworth filter), to an equivalent discrete transfer function in z. The two common ways to do this are:

1. Analytically, by first inverting the Laplace transform back to the time domain, and then taking the (forward) z-transform,

    G(z) := Z{ L^{-1}{G(s)} }          (2.22)

2. or by using an approximate method known as the bilinear transform.

The bilinear method, whilst approximate, has the advantage that it just involves a simple algebraic substitution for s in terms of z. The bilinear transform is further covered in section 2.5.2.
The formal method we have already seen previously, but since it is such a common operation, we can write a symbolic MATLAB script to do it automatically for us for any given transfer function. However we should be aware whether the transformation includes the sample and hold, or if that inclusion is left up to the user.
Listing 2.1 converts a Laplace expression to a z-transform expression without a zeroth-order hold. Including the zeroth-order hold option is given in Listing 2.2.
Listing 2.1: Symbolic Laplace to z-transform conversion
function Gz = lap2ztran(G)
% Convert symbolically G(s) to G(z)
syms t k T z
Gz = simplify(ztrans(subs(ilaplace(G),t,k*T),k,z));
return

Listing 2.2: Symbolic Laplace to z-transform conversion with ZOH
function Gz = lap2ztranzoh(G)
% Convert symbolically G(s) to G(z) with a ZOH.
syms s t k T z
Gz = simplify((1-1/z)*ztrans(subs(ilaplace(G/s),t,k*T),k,z));
return
We can test the conversion routines in Listings 2.1 and 2.2 on the trial transfer function G(s) = 1/s^2.

>> Gz = lap2ztran(1/s^2); % Convert G(s) = 1/s^2 to the discrete domain G(z).
>> pretty(Gz)
    T z
  --------
         2
  (z - 1)

>> Gz = lap2ztranzoh(1/s^2); % Do the conversion again, but this time include a ZOH.
>> pretty(Gz)
               2
      (z + 1) T
  1/2 ----------
             2
      (z - 1)
2.5.2 The bilinear transform
The analytical one step back, one step forward procedure, while strictly correct, is a little tedious, so a simpler, albeit approximate, way to transform between the Laplace domain and the z-domain is to use the bilinear transform method, sometimes known as Tustin's method. By definition, z := e^{sT} or z^-1 = e^{-sT}, (from Eqn. 2.10). If we substituted directly, natural logs would appear in the rational polynomial in z,

    s = ln(z)/T          (2.23)

making the subsequent analysis difficult. For example the resulting expression could not be transformed into a difference equation, which is what we desire.
We can avoid the troublesome logarithmic terms by using a Pade approximation for the exponential term as

    e^{-sT} = z^-1 ~= (2 - sT)/(2 + sT)          (2.24)

or alternatively

    s ~= (2/T) * (1 - z^-1)/(1 + z^-1)          (2.25)
This allows us to transform a continuous time transfer function G(s) to a discrete time transfer function G(z),

    G(z) = G(s) evaluated at s = (2/T)(1 - z^-1)/(1 + z^-1)          (2.26)
Eqn. 2.26 is called the bilinear transform owing to the linear terms in both the numerator and denominator, and it has the advantage that a stable continuous time filter will be stable in the discrete domain. The disadvantage is that the algebra required soon becomes unwieldy if attempted manually. Other transforms are possible and are discussed in section 4-2, p308 of Ogata [148]. Always remember that this transformation is approximate, being equivalent to a trapezoidal integration.
Here we wish to approximately discretise the continuous plant

    G(s) = 1 / ((s + 1)(s + 2))

at a sample time of T = 0.1 using the bilinear transform. The discrete approximate transfer function is obtained by substituting Eqn. 2.25 for s and simplifying.
    G(z) = 1/((s + 1)(s + 2)) evaluated at s = (2/T)(1 - z^-1)/(1 + z^-1)
         = 1 / [ ((2/T)(1 - z^-1)/(1 + z^-1) + 1) * ((2/T)(1 - z^-1)/(1 + z^-1) + 2) ]
         = T^2 (z + 1)^2 / [ (2T^2 + 6T + 4)z^2 + (4T^2 - 8)z + 2T^2 - 6T + 4 ]
         = (0.0022z^2 + 0.0043z + 0.0022) / (z^2 - 1.723z + 0.74),   for T = 0.1
Since this transformation is just an algebraic substitution, it is easy to execute it symbolically in
MATLAB.
>> syms s T z
>> G = 1/(s+1)/(s+2);
>> Gd = simplify(subs(G,s,2/T*(1-1/z)/(1+1/z)))
Gd =
1/2*T^2*(z+1)^2/(2*z-2+T*z+T)/(z-1+T*z+T)
>> pretty(Gd)
           2        2
          T  (z + 1)
  1/2 -------------------------------------
      (2 z - 2 + T z + T) (z - 1 + T z + T)
Alternatively we could use MATLAB to numerically verify our symbolic solution.
>> G = zpk([],[-1 -2],1)
Zero/pole/gain:
      1
-----------
(s+1) (s+2)

>> T = 0.1;
>> Gd = c2d(G,T,'tustin')
Zero/pole/gain:
   0.0021645 (z+1)^2
---------------------
(z-0.9048) (z-0.8182)
Sampling time: 0.1

>> tf(Gd)
Transfer function:
0.002165 z^2 + 0.004329 z + 0.002165
------------------------------------
        z^2 - 1.723 z + 0.7403
Sampling time: 0.1
The bilinear command in the SIGNAL PROCESSING toolbox automatically performs this mapping from s to z. This is used for example in the design of discrete versions of common analogue filters such as the Butterworth or Chebyshev filters. These are further described in section 5.2.3.
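SciPy offers the same substitution through cont2discrete with method='bilinear', which reproduces the Tustin discretisation worked through above. This is a sketch assuming SciPy is available.

```python
import numpy as np
from scipy.signal import cont2discrete

# G(s) = 1/((s+1)(s+2)) = 1/(s^2 + 3s + 2), Tustin discretised at T = 0.1
numd, dend, dt = cont2discrete(([1.0], [1.0, 3.0, 2.0]), 0.1,
                               method='bilinear')

print(numd.ravel())  # approximately [0.002165, 0.004329, 0.002165]
print(dend)          # approximately [1, -1.723, 0.7403]
```

The coefficients agree with both the hand algebra and the MATLAB c2d(...,'tustin') result.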
Problem 2.3 1. Use Tustin's method (approximate z-transform) to determine the discrete time response of

    G(s) = 4 / ((s + 4)(s + 2))

to a unit step change in input by long division. The sample time is T = 1; solve for 7 time steps. Compare with the exact solution.
The frequency response characteristics of a hold element
The zeroth-order hold element modifies the transfer function, so consequently influences the closed loop stability and frequency characteristics of the discrete process. We can investigate this influence by plotting the discrete Bode and Nyquist plots for a sampled process with and without a zeroth-order hold.
Suppose we have the continuous process

    Gp(s) = 1.57 / (s(s + 1))          (2.27)

which is sampled at a frequency of omega = 4 rad/s. (This corresponds to a sample time of T = pi/2 seconds.) First we will ignore the zeroth-order hold and transform Eqn 2.27 to the z domain by using tables such as say Table 2-1, p49, #8 in DCS.

    Gp(z) = 1.57 * (1 - e^{-aT})z^-1 / ((1 - z^-1)(1 - e^{-aT}z^-1))          (2.28)
          = 1.243z / ((z - 1)(z - 0.208))          (2.29)
To get the discrete model with the zeroth-order hold, we again use Eqn 2.18, and the tables. Doing this, in a similar manner to what was done for Eqn 2.21, we get

    G_h0 Gp(z) = (1.2215z + 0.7306) / ((z - 1)(z - 0.208))          (2.30)
Now we can plot the discrete frequency responses using MATLAB to duplicate the figure given in [110, p649] as shown in Fig. 2.14.
In the listing below, I first construct the symbolic discrete transfer function, substitute the sampling time, then convert to a numeric rational polynomial. This rational polynomial is now in a suitable form to be fed to the control toolbox routines such as bode.
syms s T z
G = 1.57/(s*(s+1))
Gd = lap2ztran(G)        % convert to a z-transform without ZOH
Gd2 = subs(Gd,T,pi/2)
[num,den] = numden(Gd2)  % extract top & bottom lines
B = sym2poly(num); A = sym2poly(den); % convert to polynomials
Gd = tf(B,A,pi/2)        % construct a transfer function
To generate the discretisation including the zeroth-order hold, we could use the symbolic lap2ztranzoh routine given in Listing 2.2, or we could use the built-in c2d function.

[num,den] = numden(G)    % extract numerator & denominator
Bc = sym2poly(num); Ac = sym2poly(den);
Gc = tf(Bc,Ac)
Gdzoh = c2d(Gc,pi/2,'zoh')

bode(Gc,Gd,Gdzoh)
legend('Continuous','No ZOH','with ZOH')
In this case I have used the vanilla bode function, which will automatically recognise the difference between discrete and continuous transfer functions. Note that it also automatically selects both a reasonable frequency spacing, and inserts the Nyquist frequency at f_N = 1/(2T) = 1/pi Hz, or omega_N = 2 rad/s.
These Bode plots shown in Fig. 2.14, or the equivalent Nyquist plots, show that the zeroth-order hold destabilises the system. The process with the hold has a larger peak resonance, and smaller gain and phase margins.
Figure 2.14: The Bode diagram showing the difference between the continuous plant, and a discretisation with and without the zeroth-order hold.
Of course we should compare the discrete Bode diagrams with that for the original continuous process. The MATLAB bode function is in this instance the right tool for the job, but I will construct the plot manually just to demonstrate how trivial it is to substitute s = i*omega, and compute the magnitude and phase of G(i*omega) for all frequencies of interest.
num = 1.57;              % Plant of interest G(s) = 1.57/(s^2 + s + 0)
den = [1 1 0];
w = logspace(-2,1)';     % Select frequency range of interest 1e-2 < w < 1e1 rad/s
iw = 1i*w;               % s = jw
G = polyval(num,iw)./polyval(den,iw); % G(s = jw)
loglog(w,abs(G))         % Plot magnitude |G(jw)|, see Fig. 2.15.
semilogx(w,angle(G)*180/pi) % and phase, in degrees
We can compare this frequency response of the continuous system with the discretised version
in Fig. 2.14.
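The manual frequency-response calculation translates almost line for line into Python, minus the plotting; the spot check below confirms |G(j*1)| = 1.57/sqrt(2) for the plant of Eqn. 2.27. This sketch assumes only NumPy.

```python
import numpy as np

num = [1.57]            # G(s) = 1.57/(s^2 + s)
den = [1.0, 1.0, 0.0]

w = np.logspace(-2, 1)  # frequency range 1e-2 .. 1e1 rad/s
s = 1j * w              # evaluate along the imaginary axis
G = np.polyval(num, s) / np.polyval(den, s)
mag, phase = np.abs(G), np.angle(G, deg=True)

# spot check at w = 1 rad/s: |G(j)| = 1.57/|(j)^2 + j| = 1.57/sqrt(2)
G1 = np.polyval(num, 1j) / np.polyval(den, 1j)
print(abs(G1))
```

Substituting s = j*omega and taking magnitude and angle is all a Bode plot requires; the plotting library is a presentation detail.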
Figure 2.15: A frequency response plot of G(s) where s = j*omega, constructed manually without using the bode command.
2.6 Discrete root locus diagrams
The root locus diagram is a classical technique used to study the characteristic response of transfer functions to a single varying parameter such as different controller gains. Traditionally the construction of root locus diagrams required tedious manual algebra. However now modern programs such as MATLAB can easily construct reliable root locus diagrams, which makes the design procedure far more attractive.
For many controlled systems, the loop is stable for small values of controller gain K_c, but unstable for a gain above some critical value K_u. It is therefore natural to ask how the stability (and response) varies with controller gain. The root locus gives this information in a plot which shows how the poles of a closed loop system change as a function of the controller gain K_c.
Given a process transfer function G and a controller K_c Q, the closed loop transfer function is the familiar

    G_cl = K_c Q(z)G(z) / (1 + K_c Q(z)G(z))          (2.31)

where the stability is dependent on the denominator (characteristic equation) 1 + K_c Q(z)G(z), which is of course itself dependent on the controller gain K_c. Plotting the roots of

    1 + K_c Q(z)G(z) = 0          (2.32)

as a function of K_c creates the root locus diagram.
In this section(4) we will plot the discrete root locus diagram for the process

    Gp(z) = 0.5(z + 0.6) / (z^2 (z - 0.4))          (2.33)

which is a discrete description of a first order process with dead time. We also add a discrete integral controller of the form

    Gc(z) = K_c z/(z - 1)

We wish to investigate the stability of the characteristic equation as a function of controller gain.

(4) Adapted from [70, p124]
With no additional arguments apart from the transfer function, rlocus will draw the root locus, selecting sensible values for K_c automatically. For this example, I will constrain the plots to have a square aspect ratio, and I will overlay a grid of constant shape factor zeta and natural frequency omega_n using the zgrid function.
Q = tf([1 0],[1 -1],-1);             % controller without gain
G = tf(0.5*[1 0.6],[1 -0.4 0 0],-1); % plant
Gol = Q*G;                           % open loop

zgrid('new');
xlim([-1.5 1.5]); axis equal
rlocus(Gol)   % plot root locus
rlocfind(Gol)
Once the plot such as Fig. 2.16 is on the screen, I can use rlocfind interactively to establish the values of K_c at different critical points in Fig. 2.16. Note that the values obtained are approximate.

    Pole description            Name              K_c
    border-line stability       ultimate gain     0.6
    cross the zeta = 0.5 line   design gain       0.2
    critically damped           breakaway gain    0.097
    overdamped, omega_n = 36 deg                  0.09
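These rlocfind gains can be sanity-checked without drawing the locus at all, by computing the roots of the characteristic equation 1 + K_c Q(z)G(z) = 0 directly. The helper below is our sketch, not from the text; the polynomial coefficients come from expanding Q*G for Eqn. 2.33.

```python
import numpy as np

def clpoles(Kc):
    """Closed-loop poles of 1 + Kc*Q(z)*G(z) = 0 for Eqn. 2.33 with Q = z/(z-1)."""
    # open loop Q*G: numerator 0.5z(z + 0.6), denominator (z-1)z^2(z - 0.4)
    num = np.array([0.0, 0.0, 0.5, 0.3, 0.0])
    den = np.array([1.0, -1.4, 0.4, 0.0, 0.0])
    return np.roots(den + Kc * num)

print(max(abs(clpoles(0.6))))  # near 1: border-line stability
print(max(abs(clpoles(0.2))))  # well inside the unit circle
```

The largest pole magnitude sits essentially on the unit circle at the ultimate gain of 0.6, and comfortably inside it at the design gain of 0.2, consistent with the table.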
Once I select the gain that corresponds to the zeta = 0.5 crossing, I can simulate the closed loop step response.

Gol.Ts = 1;  % Set sampling time
Kc = 0.2     % A gain where we expect zeta = 0.5
step(Kc*Gol/(1 + Kc*Gol),50) % Simulate closed loop
A comparison of step responses for various controller gains is given in Fig. 2.17. The curve in Fig. 2.17 with K_c = 0.2, where zeta ~= 0.5, exhibits an overshoot of about 17%, which agrees well with the expected value for a second order response with a shape factor of zeta = 0.5. The other curves do not behave exactly as one would expect since our process is not exactly a second order transfer function.
2.7 Multivariable control and state space analysis
In the mid 1970s, the western world suffered an oil shock when the petroleum producers and consumers alike realised that oil was a scarce, finite and therefore valuable commodity. This had a number of important implications, and one of these in the chemical processing industries was the increased use of integrated heat plants. This integration physically ties separate parts of the plant together and demands a corresponding integrated or plant-wide control system. Classical single-input/single-output (SISO) controller design techniques such as transfer functions, frequency analysis or root locus were found to be deficient, and with the accompanying development of affordable computer control systems that could administer hundreds of inputs and outputs, many of which were interacting, nonlinear and time varying, new tools were required and the systematic approach offered by state-space analysis became popular.
Figure 2.16: The discrete root locus of Eqn. 2.33 plotted using rlocus, with the ultimate gain, design gain and breakaway gain marked. See also Fig. 2.17 for the resultant step responses for various controller gains.
Figure 2.17: Various discrete closed loop responses for gains K_c = 0.6, 0.2 and 0.1. Note that with K_c = 0.6 we observe a response on the stability boundary, while for K_c = 0.1 we observe a critically damped response, as anticipated from the root locus diagram given in Fig. 2.16.
Many physical processes are multivariable. A binary distillation column such as given in Fig. 2.18 typically has four inputs: the feed flow and composition, and the reflux and boil-up rates; and at least four outputs: the product flows and compositions. To control such an interacting system, multivariable control such as state space analysis is necessary. The Wood-Berry distillation column model (Eqn 3.24) and the Newell & Lee evaporator model are other examples of industrial process orientated multivariable models.
Figure 2.18: A binary distillation column with multiple inputs and multiple outputs. The inputs comprise the manipulated variables (reflux rate, reboiler heat) and the disturbances (feed rate, feed composition); the outputs include the distillate rate D and composition x_D, the bottoms rate B and composition x_B, and the tray 5 and tray 15 temperatures, T_5 and T_15.
State space analysis only considers first order differential equations. To model higher order systems, one needs only to build systems of first order equations. These equations are conveniently collected, if linear, in one large matrix. The advantage of this approach is a compact representation, and a wide variety of good mathematical and robust numerical tools to analyse such a system.
A few words of caution
The state-space analysis is sometimes referred to as the modern control theory, despite the fact that it has been around since the early 1960s. However by the end of the 1970s, the promise of the academic advances made in the previous decade was turning out to be ill-founded, and it was felt that this new theory was ill-equipped to cope with the practical problems of industrial control. In many industries therefore, Process Control attracted a bad smell. Writing a decade still later in 1987, Morari in [136] attempts to rationalise why this disillusionment was around at the time, and whether the subsequent decade of activity alleviated any of the concerns. Morari summarises that commentators such as [69] considered theory such as linear multivariable control theory, (i.e. this chapter), which seemed to promise so much, actually delivered very little and had virtually no impact on industrial practice. There were other major concerns, such as the scarcity of good process models, the increasing importance of operating constraints, operator acceptance etc., but the poor track record of linear multivariable control theory landed top billing. Incidentally, Morari, writing almost a decade later still in 1994, [139], revisits the same topic giving the reader a nice linear perspective.
2.7.1 States and state space
State space equations are just a mathematical equivalent way of writing the original common dif-
ferential equation. While the vector/matrix construction of the state-space approach may initially
appear intimidating compared with high order differential equations, it turns out to be more con-
venient and numerically more robust to manipulate these equations when using programs such
as MATLAB. In addition the state-space form can be readily expanded into multivariable systems
and even nonlinear systems.
The state of a system is the smallest set of values such that, given knowledge of these values, of any future inputs, and of the governing dynamic equations, it is possible to predict everything about the future (and past) output of the system. The state variables are often written as a column vector of length n and denoted x. The state space is the n-dimensional coordinate space in which all possible state vectors must lie.
The input to a dynamic system is the vector of m input variables, u, that affect the state variables. Historically control engineers further subdivided the input variables into those they could easily and consciously adjust (known as control or manipulated variable inputs), and those variables that may change but are outside the engineer's immediate sphere of influence (like the weather), known as disturbance variables. The number of input variables need not be the same as the number of state variables, and indeed m is typically less than n.
Many mathematical control textbooks follow a standard nomenclature. Vectors are written using a lower case bold font such as x, and matrices are written in upper case bold such as A.
The state space equations are the n ordinary differential equations that relate the state derivatives
to the states themselves, the inputs (if any), and time. We can write these equations as

    \dot{x} = f(x, u, t)    (2.34)

where f(\cdot) is a vector function, meaning that both the arguments and the result are vectors. In
control applications, the input is given by a control law which is often itself a function of the
states,

    u = h(x)    (2.35)

which, if we substitute into Eqn. 2.34, gives the closed loop response

    \dot{x} = f(x, h(x), t)

For autonomous closed loop systems there is no explicit dependence on time, so the differential
equation is simply

    \dot{x} = f(x)    (2.36)
For the continuous linear time-invariant case, Eqn. 2.34 simplifies to

    \dot{x} = Ax + Bu    (2.37)

or in discrete form at time t = kT,

    x_{k+1} = \Phi x_k + \Delta u_k    (2.38)

where the state vector x is an (n \times 1) vector, or alternatively written as x \in \mathbb{R}^n, and the control input
vector has m elements, or u \in \mathbb{R}^m. The model (transition) matrix is (n \times n), A or \Phi \in \mathbb{R}^{n \times n}, and
the control or input matrix is (n \times m), B or \Delta \in \mathbb{R}^{n \times m}.
Block diagrams of both the continuous and discrete formulations of the state-space model are
shown in Fig. 2.19. Such a form is suitable to implement state-space systems in a simulator such
as SIMULINK for example.
2.7. MULTIVARIABLE CONTROL AND STATE SPACE ANALYSIS 47
Figure 2.19: A block diagram of a state-space dynamic system, (a) continuous system: \dot{x} = Ax + Bu, and (b) discrete system: x_{k+1} = \Phi x_k + \Delta u_k. (See also Fig. 2.20.)
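The discrete recursion of Fig. 2.19(b) is trivial to implement in any language. As a cross-check outside MATLAB, the following Python/NumPy sketch (the system matrices here are invented for illustration, not taken from the text) steps a stable two-state model forward until it settles:

```python
import numpy as np

def simulate_discrete(Phi, Gamma, x0, u_seq):
    """Step the discrete state-space model x[k+1] = Phi x[k] + Gamma u[k]."""
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for u in u_seq:
        x = Phi @ x + Gamma.flatten() * u   # single-input case
        traj.append(x)
    return np.array(traj)

# Stable example: Phi has eigenvalues 0.9 and 0.8, inside the unit circle
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Gamma = np.array([[0.0], [0.1]])
traj = simulate_discrete(Phi, Gamma, [0.0, 0.0], np.ones(200))

print(traj[-1])   # approaches the steady state (I - Phi)^{-1} Gamma u = [0.5, 0.5]
```

Because the eigenvalues lie inside the unit circle, the recursion converges to the discrete steady state regardless of the initial condition.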
The output or measurement vector, y, contains the variables that are directly measured from the operating
plant, since often the true states cannot themselves be directly measured. These outputs are
related to the states by

    y = g(x)    (2.39)

If the measurement relation is linear, then

    y = Cx    (2.40)

where the measurement vector has r elements, y \in \mathbb{R}^r, and the measurement matrix is sized C \in \mathbb{R}^{r \times n}.
Sometimes the input may directly affect the output, bypassing the process, in which case the full
linear system in state space is described by

    \dot{x} = Ax + Bu
    y = Cx + Du    (2.41)
Eqn. 2.41 is the standard starting point for the analysis of continuous linear dynamic systems, and
is shown in block diagram form in Fig. 2.20. Note that in the diagram, the internal state vector
(bold lines) typically has more elements than either the input or output vectors (thin lines). The
diagram also highlights the fact that as an observer of the plant, we can relatively easily measure
our outputs (appropriately also called measurements), and we presumably know our inputs to the
plant, but we do not necessarily have access to the internal state variables since they are always
contained inside the dashed box in Fig. 2.20. Strategies to estimate these hidden internal states are
known as state estimation and are described in section 9.5.
2.7.2 Converting differential equations to state-space form
If we have a collection or system of interlinked differential equations, it is often convenient and
succinct to group them together in a vector/matrix collection. Alternatively, if we start with an
nth order differential equation, for the same reasons it is advisable to rewrite this as a collection
of n first order differential equations, known as Cauchy form.
Given a general linear nth order differential equation

    y^{(n)} + a_1 y^{(n-1)} + a_2 y^{(n-2)} + \cdots + a_{n-1}\dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + \cdots + b_{n-1}\dot{u} + b_n u    (2.42)
Figure 2.20: A complete block diagram of a state-space dynamic system with output and direct measurement feed-through, Eqn. 2.41.
by inspection, we can create the equivalent transfer function as

    \frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1} s + b_n}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}    (2.43)
We can cast this transfer function (or alternatively the original differential equation) into state
space in a number of equivalent forms. They are equivalent in the sense that the input/output
behaviour is identical, but not necessarily the internal state behaviour.
The controllable canonical form is

    \dot{x} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u    (2.44)

    y = \begin{bmatrix} b_n - a_n b_0 & b_{n-1} - a_{n-1} b_0 & \cdots & b_1 - a_1 b_0 \end{bmatrix} x + b_0 u    (2.45)

which is useful when designing pole-placement controllers. The observable canonical form is

    \dot{x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & \cdots & 0 & -a_{n-1} \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix} x + \begin{bmatrix} b_n - a_n b_0 \\ b_{n-1} - a_{n-1} b_0 \\ \vdots \\ b_1 - a_1 b_0 \end{bmatrix} u    (2.46)

    y = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix} x + b_0 u    (2.47)

and is useful when designing observers.
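Building the controllable canonical quadruplet from the polynomial coefficients is mechanical, and easy to verify by converting back to a transfer function. The following Python/NumPy sketch (the helper name is ours; SciPy is used only as an independent cross-check) constructs Eqns. 2.44–2.45 and confirms the round trip for G(s) = (s+5)/(s^2+3s+2):

```python
import numpy as np
from scipy import signal

def controllable_canonical(num, den):
    """Controllable canonical (A,B,C,D) for G(s) = num/den, len(num) <= len(den)."""
    den = np.asarray(den, float)
    a = den[1:] / den[0]                                 # a_1 ... a_n (monic denominator)
    n = len(a)
    b = np.zeros(n + 1)
    b[n + 1 - len(num):] = np.asarray(num, float) / den[0]  # b_0 ... b_n, zero-padded
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                           # superdiagonal of ones
    A[-1, :] = -a[::-1]                                  # bottom row: -a_n ... -a_1
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = (b[1:] - a * b[0])[::-1].reshape(1, -1)          # [b_n - a_n b_0, ..., b_1 - a_1 b_0]
    D = np.array([[b[0]]])
    return A, B, C, D

A, B, C, D = controllable_canonical([1, 5], [1, 3, 2])   # G(s) = (s+5)/(s^2+3s+2)
num, den = signal.ss2tf(A, B, C, D)
print(num, den)   # recovers [[0, 1, 5]] and [1, 3, 2]
```

The input/output behaviour is identical to the original transfer function even though the internal states differ from, say, the observable form.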
If the transfer function defined by Eqn. 2.43 has real and distinct factors,

    \frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1} s + b_n}{(s + p_1)(s + p_2) \cdots (s + p_n)} = b_0 + \frac{c_1}{s + p_1} + \frac{c_2}{s + p_2} + \cdots + \frac{c_n}{s + p_n}

we can derive an especially elegant state-space form

    \dot{x} = \begin{bmatrix} -p_1 & & & 0 \\ & -p_2 & & \\ & & \ddots & \\ 0 & & & -p_n \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} u    (2.48)

    y = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix} x + b_0 u    (2.49)
that is purely diagonal. As is evident from the diagonal system matrix, this system is totally decoupled
and possesses the best numerical properties for simulation. For systems with repeated roots,
or complex roots, or both, the closest we can come to a diagonal form is the block Jordan form.
We can interconvert between all these above forms using the MATLAB canon command.
>> G = tf([5 6],[1 2 3 4]); % Define transfer function G = (5s+6)/(s^3 + 2s^2 + 3s + 4)
>> canon(ss(G),'compan') % Convert TF to the companion or observable canonical form
a =
        x1  x2  x3
    x1   0   0  -4
    x2   1   0  -3
    x3   0   1  -2
b =
        u1
    x1   1
    x2   0
    x3   0
c =
        x1  x2  x3
    y1   0   5  -4
d =
        u1
    y1   0
These transformations are discussed further in [148, p515]. MATLAB uses a variation of this form
when converting from a transfer function to state-space in the tf2ss function.
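The diagonal (modal) form that canon produces can be reproduced from first principles with an eigendecomposition. Here is an illustrative Python/NumPy equivalent, using the A matrix from the example in the next section, whose eigenvalues are -1 and -2:

```python
import numpy as np

# Modal (diagonal) form via the eigenvector matrix, analogous to canon(sys,'modal').
A = np.array([[-7.0, 10.0],
              [-3.0,  4.0]])           # characteristic polynomial s^2 + 3s + 2
evals, T = np.linalg.eig(A)            # columns of T are eigenvectors of A
Adiag = np.linalg.solve(T, A @ T)      # T^{-1} A T is diagonal for distinct eigenvalues

print(np.round(Adiag, 10))
```

Because the eigenvalues here are real and distinct, T is invertible and the transformed system matrix is exactly diagonal (to round-off).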
2.7.3 Interconverting between state space and transfer functions
The transfer function form and the state-space form are, by and large, equivalent, and we can convert
from one representation to the other. Starting with our generic state-space model, Eqn. 2.37,

    \dot{x} = Ax + Bu    (2.50)

with initial condition x(0) = 0, taking Laplace transforms gives

    s x(s) = A x(s) + B u(s)
    (sI - A) x(s) = B u(s)
    x(s) = \underbrace{(sI - A)^{-1} B}_{G_p(s)} \, u(s)    (2.51)

where G_p(s) is a matrix of expressions in s and is referred to as the transfer function matrix. The
MATLAB command ss2tf (state-space to transfer function) performs this conversion.
State-space to transfer function conversion
Suppose we wish to convert the following state-space description to transfer function form.

    \dot{x} = \begin{bmatrix} -7 & 10 \\ -3 & 4 \end{bmatrix} x + \begin{bmatrix} 4 & -1 \\ 2 & -3 \end{bmatrix} u    (2.52)

From Eqn. 2.51, the transfer function matrix is defined as

    G_p(s) = (sI - A)^{-1} B
           = \left( s \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} -7 & 10 \\ -3 & 4 \end{bmatrix} \right)^{-1} \begin{bmatrix} 4 & -1 \\ 2 & -3 \end{bmatrix}
           = \begin{bmatrix} s+7 & -10 \\ 3 & s-4 \end{bmatrix}^{-1} \begin{bmatrix} 4 & -1 \\ 2 & -3 \end{bmatrix}    (2.53)

Now at this point, the matrix inversion in Eqn. 2.53 becomes involved because of the presence of
the symbolic s in the matrix. Inverting the symbolic matrix expression gives the transfer function
matrix,

    G_p(s) = \frac{1}{(s+2)(s+1)} \begin{bmatrix} s-4 & 10 \\ -3 & s+7 \end{bmatrix} \begin{bmatrix} 4 & -1 \\ 2 & -3 \end{bmatrix}
           = \begin{bmatrix} \dfrac{4}{s+2} & -\dfrac{s+26}{s^2+3s+2} \\[2mm] \dfrac{2}{s+2} & -\dfrac{3(s+6)}{s^2+3s+2} \end{bmatrix}

We can directly apply Eqn. 2.51 using the symbolic toolbox in MATLAB.

>> syms s
>> A = [-7 10; -3 4]; B = [4 -1; 2 -3];
>> G = (s*eye(2)-A)\B % Transfer function matrix Gp(s) = (sI - A)^{-1} B
G =
[ 4/(s+2), -(s+26)/(s^2+3*s+2)]
[ 2/(s+2), -3*(s+6)/(s^2+3*s+2)]
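The same multivariable conversion can be checked numerically with SciPy's ss2tf, one input column at a time (an illustrative cross-check, not part of the original text). Before any pole-zero cancellation, the first column appears as (4s+4) and (2s+2) over the full characteristic polynomial:

```python
import numpy as np
from scipy import signal

A = np.array([[-7., 10.], [-3., 4.]])
B = np.array([[4., -1.], [2., -3.]])
C = np.eye(2)              # measure both states directly
D = np.zeros((2, 2))

# One column of the transfer-function matrix per input
num0, den0 = signal.ss2tf(A, B, C, D, input=0)
num1, den1 = signal.ss2tf(A, B, C, D, input=1)
print(num0, den0)   # numerators (4s+4) and (2s+2) over s^2 + 3s + 2
print(num1, den1)   # numerators (-s-26) and (-3s-18) over s^2 + 3s + 2
```

Cancelling the common (s+1) factor in the first column reproduces 4/(s+2) and 2/(s+2) exactly as the symbolic result above.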
The method for converting from a state-space description to a transfer function matrix described
by Eqn. 2.51 is not well suited to numerical computation owing to the symbolic matrix inversion
required. However [174, p35] describes a method due to Faddeeva that is suitable for
numerical computation in MATLAB.

Faddeeva's algorithm to convert a state-space description, \dot{x} = Ax + Bu, into transfer function
form, G_p(s):
1. Compute a, the characteristic polynomial of the n \times n matrix A, indexed as a(s) = a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 with a_n = 1. (Use poly in MATLAB.)

2. Set E_{n-1} = I.

3. Compute recursively the following n - 1 matrices

    E_{n-1-k} = A E_{n-k} + a_{n-k} I, \quad k = 1, 2, \ldots, n-1

4. The transfer function matrix is then given by

    G_p(s) = \frac{s^{n-1} E_{n-1} + s^{n-2} E_{n-2} + \cdots + E_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0} \, B    (2.54)
Expressed in MATLAB notation, an (incomplete) version of the algorithm is

a = poly(A); % characteristic polynomial coefficients, a(1) = a_n = 1
N = length(a); % one more than the system dimension
I = eye(N-1); E = I;
for k = 2:N-1
    E = A*E + a(k)*I % collect & print out each successive E matrix
end % for
Note however that it is also possible to use ss2tf, although since MATLAB (version 4) cannot use
3D matrices, we need to call this routine n times to build up the entire transfer function matrix.
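The recursion is just as easy to complete in Python/NumPy. This sketch (the function name is ours) mirrors the algorithm above and reproduces the worked example that follows:

```python
import numpy as np

def faddeeva_tf(A, B):
    """Faddeeva recursion: adj(sI - A) = sum_k s^k E[k], so that
    Gp(s) = (sum_k s^k E[k] B) / a(s).  Returns the matrix polynomial
    coefficients E[k] B and the monic characteristic polynomial a."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    n = A.shape[0]
    a = np.poly(A)                       # [1, a_1, ..., a_n] in descending powers
    E = [None] * n                       # E[k] multiplies s^k
    E[n - 1] = np.eye(n)
    for k in range(1, n):                # E_{n-1-k} = A E_{n-k} + a_k I
        E[n - 1 - k] = A @ E[n - k] + a[k] * np.eye(n)
    numers = [Ek @ B for Ek in E]        # numerator matrix coefficients, per power of s
    return numers, a

A = [[-7, 10], [-3, 4]]
B = [[4, -1], [2, -3]]
numers, a = faddeeva_tf(A, B)
print(a)            # characteristic polynomial s^2 + 3s + 2
print(numers[0])    # constant term of adj(sI - A) B
```

For this system numers[1] is simply B (the s coefficients of the numerator matrix) and numers[0] collects the constant terms, matching the symbolic result on the previous page.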
We can repeat the state-space to transfer function example, Eqn. 2.52, given on page 50 using
Faddeeva's algorithm.

    \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -7 & 10 \\ -3 & 4 \end{bmatrix} x + \begin{bmatrix} 4 & -1 \\ 2 & -3 \end{bmatrix} u

We start by computing the characteristic polynomial of A,

    a(s) = s^2 + 3s + 2 = (s+1)(s+2)

and with n = 2 we can compute

    E_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad E_0 = A E_1 + a_1 I = \begin{bmatrix} -4 & 10 \\ -3 & 7 \end{bmatrix}

Following Eqn. 2.54 gives a matrix of polynomials in s,

    (sI - A)^{-1} = \frac{1}{s^2 + 3s + 2} \left( s \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} -4 & 10 \\ -3 & 7 \end{bmatrix} \right) = \frac{1}{(s+1)(s+2)} \begin{bmatrix} s-4 & 10 \\ -3 & s+7 \end{bmatrix}
which we can recognise as the same expression as we found using the symbolic matrix inverse
on page 50. All that remains is to post-multiply by B to obtain the transfer function matrix.
Finally, we can also use the control toolbox for the conversion from state-space to transfer function
form. First we construct the state-space form,

A = [-7 10; -3 4]; B = [4 -1; 2 -3]; % Continuous state-space example from Eqn. 2.52
C = eye(size(A));
sys = ss(A,B,C,[]) % construct system object

where in this case, if we leave out the D matrix, MATLAB assumes no direct feed-through path. We
now can convert to the transfer function form:

>> sys_tf = minreal(tf(sys)) % Convert to transfer function & remember to cancel common factors

Transfer function from input 1 to output...
          4
 #1:    -----
        s + 2

          2
 #2:    -----
        s + 2

Transfer function from input 2 to output...
           -s - 26
 #1:    -------------
        s^2 + 3 s + 2

          -3 s - 18
 #2:    -------------
        s^2 + 3 s + 2

Once again we get the same transfer function matrix.
Transfer function to state space

To convert in the reverse direction, we can use the MATLAB function tf2ss (transfer function to
state space), thus converting an arbitrary transfer function description to the state space format of
Eqn. 2.37. For example, starting with the transfer function

    G(s) = \frac{(s+3)(s+4)}{(s+1)(s+2)(s+5)} = \frac{s^2 + 7s + 12}{s^3 + 8s^2 + 17s + 10}

we can convert to state-space form using tf2ss.

>> num = [1 7 12]; % Numerator: s^2 + 7s + 12
>> den = [1 8 17 10]; % Denominator: s^3 + 8s^2 + 17s + 10
>> [A,B,C,D] = tf2ss(num,den) % convert to state-space
which returns the following four matrices

    A = \begin{bmatrix} -8 & -17 & -10 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 7 & 12 \end{bmatrix}, \quad D = 0
This form is a variation of the controllable canonical form described previously in 2.7.2. This
is not the only state-space realisation possible however, as the CONTROL TOOLBOX will return a
slightly different A, B, C, D package:

>> Gc = tf([1 7 12],[1 8 17 10]) % Transfer function in polynomial form

Transfer function:
     s^2 + 7 s + 12
-----------------------
s^3 + 8 s^2 + 17 s + 10

>> [A,B,C,D] = ssdata(Gc) % extract state-space matrices
A =
   -8.0000   -4.2500   -1.2500
    4.0000         0         0
         0    2.0000         0
B =
     2
     0
     0
C =
    0.5000    0.8750    0.7500
D =
     0
Alternatively we could start with the zero-pole-gain form and obtain yet another equivalent state-space
form.

G = zpk([-3 -4],[-1 -2 -5],1) % Transfer function model in factored format
Gss = ss(G)
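SciPy's tf2ss follows the same controller-canonical convention as MATLAB's tf2ss, so the conversion above can be reproduced outside MATLAB (an illustrative cross-check):

```python
import numpy as np
from scipy import signal

num = [1, 7, 12]        # s^2 + 7s + 12 = (s+3)(s+4)
den = [1, 8, 17, 10]    # s^3 + 8s^2 + 17s + 10 = (s+1)(s+2)(s+5)

A, B, C, D = signal.tf2ss(num, den)
print(A)    # first row is the negated denominator coefficients -[8, 17, 10]
```

The first row of A carries the (negated) denominator coefficients and C carries the numerator coefficients, exactly as in the tf2ss result shown above.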
A later section, (2.7.4), shows how we can convert between equivalent dynamic forms. The four
matrices in the above description form the linear dynamic system as given in Eqn. 2.41. We can
concatenate these four matrices into one large block, thus obtaining a shorthand way of storing
these equations,

    G = \begin{bmatrix} A_{(n \times n)} & B_{(n \times m)} \\ C_{(p \times n)} & D_{(p \times m)} \end{bmatrix}

where n is the number of states, m the number of inputs, and p the number of measurements,
which is often called a packed matrix notation of the quadruplet (A, B, C, D). If you get confused
about the appropriate dimensions of A, B etc., then this is a handy check, or alternatively you
can use the diagnostic routine abcdchk. Note that in practice, the D matrix is normally the
zero matrix, since the input does not generally affect the output immediately without first passing
through the system dynamics.
2.7.4 Similarity transformations
The state space description of a differential equation is not unique. We can transform from one
description to another by using a linear invertible transformation such as x = Tz. Geometrically
in 2 dimensions, this is equivalent to rotating the axes or plane. When one rotates the axes, the
inter-relationship between the states do not change, so the transformation preserves the dynamic
model.
Suppose we have the dynamic system

    \dot{x} = Ax + Bu    (2.55)

which we wish to transform in some manner using the non-singular transformation matrix, T,
where x = Tz. Naturally the reverse transformation z = T^{-1}x exists because we have restricted
ourselves to consider only the cases when the inverse of T exists. Writing Eqn. 2.55 in terms of
our new variable z we get

    \frac{d(Tz)}{dt} = ATz + Bu
    \dot{z} = T^{-1}ATz + T^{-1}Bu    (2.56)
Eqn. 2.56 and Eqn. 2.55 represent the same dynamic system. They have the same eigenvalues,
hence the name similarity transform, just viewed from a different viewpoint. The mapping from A to T^{-1}AT
is called a similarity transform and preserves the eigenvalues. These two matrices are said to be
similar. The proof of this is detailed in [86, p300] and [148, pp513-514].
The usefulness of these types of transformations is that the dynamics of the states are preserved
(since the eigenvalues are the same), but the shape and structure of the system has changed.
The motivation is that for certain operations (control, estimation, modelling), different shapes
are more convenient. A pure, (or nearly), diagonal shape of the A matrix for example has much
better numerical properties than a full matrix. This also has the advantage that less computer
storage and fewer operations are needed.
To convert a system to a diagonal form, we use the transformation matrix T, where the columns
of T are the eigenvectors of A. Systems where the model (A) matrix is diagonal are especially
easy to manipulate. We can find the eigenvectors T and the eigenvalues e of a matrix A using
the eig command, and construct the new transformed, and hopefully diagonal, system matrix:

[T,e] = eig(A) % find eigenvectors & eigenvalues of A
V = T\A*T % New system matrix, T^{-1} A T, or use canon
Another use arises when testing new control or estimation algorithms: it is sometimes instructive to
devise non-trivial systems with specified properties. For example, you may wish to use as an
example a 3 x 3 system that is stable and interacting, and has one over-damped mode and two
oscillatory modes. That is, we wish to construct a full A matrix with specified eigenvalues. We
can use similarity transformations to obtain these models.
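Such a test system is easy to manufacture: place the desired modes in a (block) diagonal matrix and apply a random similarity transform. This Python/NumPy sketch (the specific eigenvalues are our own illustrative choices) builds a full, interacting 3 x 3 A matrix with one over-damped mode at -5 and an oscillatory pair at -1 +/- 2j:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real block-diagonal "modal" matrix: eigenvalues -5 and -1 +/- 2j
Lam = np.array([[-5.0,  0.0,  0.0],
                [ 0.0, -1.0,  2.0],
                [ 0.0, -2.0, -1.0]])   # 2x2 real block carries the complex pair
T = rng.standard_normal((3, 3))        # random transform, almost surely invertible
A = T @ Lam @ np.linalg.inv(T)         # similar to Lam, hence same eigenvalues

print(np.sort_complex(np.linalg.eigvals(A)))
```

Because A = T Lam T^{-1} is a similarity transform of Lam, the eigenvalues are preserved exactly, while the transformed A is full and the states all interact.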
Other useful transformations, such as the controllable and observable canonical forms, are covered
in [148, p646]. The MATLAB function canon can convert state-space models to diagonal
or observable canonical form (sometimes known as the companion form). Note however that the
help file for this routine discourages the use of the companion form due to its numerical ill-conditioning.

MATLAB comes with some utility functions to generate test models, such as ord2, which generates
stable second order models of a specified damping and natural frequency, and rmodel, a
flexible general stable random model builder of arbitrary order.
2.7.5 Interconverting between transfer function forms

The previous section described how to convert between different representations of linear dynamic
systems, such as differential equations, transfer functions and state-space descriptions.
This section describes the much simpler task of converting between the different ways we can
write transfer functions.

Modellers tend to think in continuous time systems, G(s), and in terms of process gain and time
constants, so will naturally construct transfer functions of the form

    K \, \frac{\prod_i (\beta_i s + 1)}{\prod_j (\tau_j s + 1)}    (2.57)

where all the variables of interest, such as the time constants \tau_j, are immediately apparent. On
the other hand, system engineers tend to think in terms of poles and zeros, so naturally construct
transfer functions in the form

    K \, \frac{\prod_i (s + z_i)}{\prod_j (s + p_j)}    (2.58)

where once again the poles, p_j, and zeros, z_i, are immediately apparent. This is the form that
MATLAB uses in the zero-pole-gain format, zpk.
Finally, the hardware engineer would prefer to operate in the expanded polynomial form (particularly
in discrete cases), where the transfer function is of the form

    \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0}    (2.59)
This is the form that MATLAB uses in the transfer-function format, tf. Note that the leading
coefficient in the denominator is set to 1. As a MATLAB user, you can define a transfer function
that does not follow this convention, but MATLAB will quietly convert to this normalised form if
you type something like G = tf(zpk(G)).

The inter-conversion between the forms is not difficult; converting between expressions Eqn. 2.57 and
Eqn. 2.58 simply requires some adjusting of the gains and factors, while converting from Eqn. 2.59
requires one to factorise polynomials.
For example, the following three transfer function descriptions are all equivalent:

    \underbrace{\frac{2(10s+1)(-3s+1)}{(20s+1)(2s+1)(s+1)}}_{\text{time constant}} = \underbrace{\frac{-1.5(s-0.3333)(s+0.1)}{(s+1)(s+0.5)(s+0.05)}}_{\text{zero-pole-gain}} = \underbrace{\frac{-60s^2+14s+2}{40s^3+62s^2+23s+1}}_{\text{expanded polynomial}}

It is trivial to interconvert between the zero-pole-gain, Eqn. 2.58, and the transfer function formats,
Eqn. 2.59, in MATLAB, but it is less easy to convert to the time constant description. Listing
2.3 extracts from an arbitrary transfer function form the time constants, the numerator time
constants, and the plant gain K.
Listing 2.3: Extracting the gain, time constants and numerator time constants from an arbitrary
transfer function format

Gplant = tf(2*mconv([10 1],[-3 1]), mconv([20 1],[2 1],[1 1])) % TF of interest
G = zpk(Gplant); % Convert TF description to zero-pole-gain

Kp = G.k;
p = cell2mat(G.p); z = cell2mat(G.z); % Extract poles & zeros
delay0 = G.iodelay; % Extract deadtime (if any)

tau = sort(-1./p,'descend'); % Convert (& sort) to time constants
ntau = sort(-1./z,'descend'); % Convert to numerator time constants
K = Kp*prod(tau)/prod(ntau); % Adjust plant gain
We could of course use the control toolbox functions pole and zero to extract the poles and
zeros from an arbitrary LTI model.
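The same extraction is a few lines of NumPy once the poles and zeros are available from the polynomial roots. This sketch (a Python analogue of Listing 2.3, not from the text) recovers K = 2 and the time constants for the worked example above:

```python
import numpy as np

# Build the expanded-polynomial form of 2(10s+1)(-3s+1) / ((20s+1)(2s+1)(s+1))
num = 2 * np.convolve([10, 1], [-3, 1])                   # -60s^2 + 14s + 2
den = np.convolve(np.convolve([20, 1], [2, 1]), [1, 1])   # 40s^3 + 62s^2 + 23s + 1

z, p = np.roots(num), np.roots(den)     # zeros & poles
Kp = num[0] / den[0]                    # zero-pole-gain form gain (-1.5 here)

tau = np.sort(-1.0 / p)[::-1]           # denominator time constants, sorted descending
ntau = np.sort(-1.0 / z)[::-1]          # numerator time constants
K = Kp * np.prod(tau) / np.prod(ntau)   # time-constant form gain

print(K, tau, ntau)
```

The negative numerator time constant (-3) flags the right-half-plane zero, which the zero-pole-gain form shows as the factor (s - 0.3333).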
2.7.6 The steady state

The steady-state, x_{ss}, of a general nonlinear system \dot{x} = f(x, u) is the point in state space such
that all the derivatives are zero, or the solution of

    0 = f(x_{ss}, u)    (2.60)

If the system is linear, \dot{x} = Ax + Bu, then the steady state can be evaluated algebraically in closed
form,

    x_{ss} = -A^{-1}Bu    (2.61)

Consequently, to solve for the steady-state one could invert the model matrix A, but this may be
ill-conditioned or computationally time consuming. Using MATLAB one should instead use xss = -A\B*u.
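The same advice applies in any numerical environment: solve the linear system rather than forming the inverse. A brief Python/NumPy sketch with an invented stable two-state example:

```python
import numpy as np

# Steady state of xdot = Ax + Bu: solve A x_ss = -B u instead of forming A^{-1}
A = np.array([[-2.0,  1.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
u = np.array([2.0])

x_ss = np.linalg.solve(A, -B @ u)   # equivalent of MATLAB's  xss = -A\B*u
print(x_ss)                          # [4/3, 2/3]
```

At the computed point, A x_ss + B u is zero to machine precision, confirming the derivatives vanish.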
If A has no inverse, then no (or alternatively infinite) steady states exist. An example of a process
that has no steady state is a tank-flow system where a pump withdraws fluid from
the outlet at a constant rate independent of liquid height, say just exactly balancing an input flow,
as shown in the left-hand schematic in Fig. 2.21. If the input flow suddenly increased, then the level
will rise until the tank eventually overflows. If instead the tank was drained by a valve partially
open at a constant value, then as the level rises, the increased pressure (head) will force more
material out through the valve (right-hand side of Fig. 2.21). Eventually the system will rise to a
new steady state. It may however overflow before the new steady state is reached, but that is a
constraint on the physical system that is outside the scope of the simple mathematical description
used at this time.

If the system is nonlinear, there is the possibility that multiple steady states may exist. To solve
for the steady state of a nonlinear system, one must use a nonlinear algebraic solver such as
described in chapter 3.
Example. The steady state of the differential equation

    \frac{d^2 y}{dt^2} + 7\frac{dy}{dt} + 12y = 3u

where dy/dt = y = 0 at t = 0 and u = 1, t \geq 0, can be evaluated using Laplace transforms and the
Figure 2.21: Unsteady and steady states for level systems. Left: a constant-flow pump withdrawal (flow = constant, no steady state); right: flow through a restriction (flow = f(height)), giving a self-regulating steady state.
final value theorem. Transforming to Laplace transforms we get

    s^2 Y(s) + 7s Y(s) + 12 Y(s) = 3 U(s)
    \frac{Y(s)}{U(s)} = \frac{3}{s^2 + 7s + 12} = \frac{3}{(s+3)(s+4)}

while for a step input

    Y(s) = \frac{1}{s} \cdot \frac{3}{s^2 + 7s + 12}

The final value theorem is only applicable if the system is stable. To check, we require that the
roots of the denominator, s^2 + 7s + 12, lie in the left-hand plane,

    s = \frac{-7 \pm \sqrt{7^2 - 4 \cdot 12}}{2} = -4 \text{ and } -3

Given that both roots have negative real parts, we have verified that our system is stable and we
are allowed to apply the final value theorem to solve for the steady-state, y(\infty),

    \lim_{t \to \infty} y(t) = \lim_{s \to 0} sY(s) = \lim_{s \to 0} \frac{3}{s^2 + 7s + 12} = \frac{3}{12} = 0.25
Using the state space approach to replicate the above, we first cast the second order differential
equation into two first order differential equations using the controllable canonical form given in
Eqn. 2.45. Let z_1 = y and z_2 = dy/dt, then

    \dot{z} = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix} z + \begin{bmatrix} 0 \\ 3 \end{bmatrix} u

and the steady state is now

    z_{ss} = -A^{-1}Bu = \begin{bmatrix} 7/12 & 1/12 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 3 \end{bmatrix} \cdot 1 = \begin{bmatrix} 0.25 \\ 0 \end{bmatrix}

Noting that z_1 = y, we see that the steady state is also at 0.25. Furthermore, the derivative term
(z_2) is zero, which is as expected at steady state.
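The same steady state falls out numerically in a couple of lines (a quick cross-check of the example above, not from the text):

```python
import numpy as np

# z_ss = -A^{-1} B u for the worked example, computed with a linear solve
A = np.array([[  0.0,  1.0],
              [-12.0, -7.0]])
B = np.array([0.0, 3.0])
u = 1.0

z_ss = np.linalg.solve(A, -B * u)
print(z_ss)   # [0.25, 0] -- matching the final value theorem result 3/12
```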
2.8 Solving the vector differential equation

Since we can solve differential equations by inverting Laplace transforms, we would expect to be
able to solve state-space differential equations such as Eqn. 2.37 in a similar manner. If we look
at the Laplace transform of a simple linear scalar differential equation, \dot{x} = ax + bu, we find two
terms,

    sX(s) - x_0 = aX(s) + bU(s)
    X(s) = \frac{x_0}{s - a} + \frac{b}{s - a} U(s)    (2.62)

One of these terms is the response of the system owing to the initial condition, x_0 e^{at}, and is
called the homogeneous solution; the other term is due to the particular input we happen
to be using. This is called the particular integral, and we must know the form of the input, u(t),
before we solve this part of the problem. The total solution is the sum of the homogeneous and
particular components.
The homogeneous solution

For the moment we will consider just the homogeneous solution to our vector differential equation.
That is, we will assume no driving input, or u(t) = 0. (In the following section we will add
in the particular integral due to a non-zero input.)

Our vector differential equation, ignoring any input, is simply

    \dot{x} = Ax, \qquad x(t=0) = x_0    (2.63)

Taking Laplace transforms, and not forgetting the initial conditions, we have

    sx(s) - x_0 = Ax(s)
    (sI - A)x(s) = x_0
    x(s) = (sI - A)^{-1} x_0

Finally, inverting the Laplace transform back to the time domain gives

    x(t) = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\} x_0    (2.64)
Alternatively we can solve Eqn. 2.63 by separating the variables and integrating, obtaining

    x(t) = e^{At} x_0    (2.65)

where the exponent of a matrix, e^{At}, is itself a matrix of the same size as A. We call this matrix
exponential the transition matrix because it transforms the state vector at some initial time, x_0, to
some point in the future, x_t. We will give it the symbol \Phi(t).

The matrix exponential is defined just as in the scalar case as a Taylor series expansion,

    e^{At} \text{ or } \Phi(t) \stackrel{\text{def}}{=} I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!}    (2.66)
2.8. SOLVING THE VECTOR DIFFERENTIAL EQUATION 59
although this series expansion method is not recommended as a reliable computational strategy.
Better strategies are outlined on page 62.

Comparing Eqn. 2.64 with Eqn. 2.65, we can see that the matrix e^{At} = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\}.
So to compute the solution, x(t), we need to know the initial condition and a strategy to numerically
compute a matrix exponential.
The particular solution

Now we consider the full differential equation with nonzero input, \dot{x} = Ax + Bu. Building on
the solution to the homogeneous part in Eqn. 2.65, we get

    x(t) = \Phi_t x_0 + \int_0^t \Phi_{t-\tau} B u_\tau \, d\tau = e^{At} x_0 + \int_0^t e^{A(t-\tau)} B u(\tau) \, d\tau    (2.67)

where now the second term accounts for the particular input vector u.

Eqn. 2.67 is not particularly useful as written, as both terms are time varying. However the continuous
time differential equation can be converted to a discrete time difference equation that
is suitable for computer control implementation, provided the sampling rate is fixed and the input
is held constant between the sampling intervals. We would like to convert Eqn. 2.37 to the
discrete time equivalent, Eqn. 2.38, repeated here

    x_{k+1} = \Phi x_k + \Delta u_k    (2.68)

where x_k is the state vector x at time t = kT, and T is the sample period. Once the sample period
is fixed, then \Phi and \Delta are also constant matrices. We have also assumed here that the input
vector u is constant, or held, over the sample interval, which is the norm for control applications.
So starting with a known x_k at time t = kT, we desire the state vector at the next sample time,
x_{k+1}, or

    x_{k+1} = e^{AT} x_k + e^{A(k+1)T} \int_{kT}^{(k+1)T} e^{-A\tau} B u(\tau) \, d\tau    (2.69)

But as we have assumed that the input u is constant using a zeroth-order hold between the
sampling intervals kT and (k+1)T, Eqn. 2.69 simplifies to

    x_{k+1} = e^{AT} x_k + \left( \int_0^T e^{A\lambda} \, d\lambda \right) B u_k    (2.70)

where \lambda = (k+1)T - \tau. For convenience, we can define two new matrices as

    \Phi \stackrel{\text{def}}{=} e^{AT}    (2.71)
    \Delta \stackrel{\text{def}}{=} \left( \int_0^T e^{A\lambda} \, d\lambda \right) B    (2.72)
which gives us our desired transformation in the form of Eqn. 2.68. In summary, to discretise
\dot{x} = Ax + Bu at sample interval T, we must compute a matrix exponential, Eqn. 2.71, and
integrate a matrix exponential, Eqn. 2.72.

Note that Eqn. 2.70 involves no approximation to the continuous differential equation, provided
the input is constant over the sampling interval. Also note that as the sample time tends to zero,
the state transition matrix \Phi tends to the identity, I.
If the matrix A is non-singular, then

    \Delta = \left( e^{AT} - I \right) A^{-1} B    (2.73)

is a simpler expression than the more general Eqn. 2.72.
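Eqn. 2.73 is a one-liner numerically once a matrix exponential routine is available. This Python/SciPy sketch discretises the invertible example used later in this section at an illustrative sample time T = 0.5, and the test checks the result against the closed-form expressions:

```python
import numpy as np
from scipy.linalg import expm

# Discretise xdot = Ax + Bu using Eqn. 2.73 (valid only when A is invertible)
A = np.array([[-1.5, -0.5],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
T = 0.5

Phi = expm(A * T)                                    # Phi = e^{AT}
Delta = (Phi - np.eye(2)) @ np.linalg.solve(A, B)    # Delta = (e^{AT} - I) A^{-1} B

print(Phi)
print(Delta)
```

Note the linear solve replaces the explicit inverse of A, for the same numerical-conditioning reasons discussed earlier for the steady state.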
Double integrator example. The single-input/single-output double integrator system, G(s) = 1/s^2,
can be represented in continuous state space using two states as

    \dot{x} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u    (2.74)
    y = \begin{bmatrix} 0 & 1 \end{bmatrix} x    (2.75)

At sample time T, the state transition matrix is, from Eqn. 2.71,

    \Phi = e^{AT} = \exp\left( \begin{bmatrix} 0 & 0 \\ T & 0 \end{bmatrix} \right) = \begin{bmatrix} 1 & 0 \\ T & 1 \end{bmatrix}

and the control matrix is given by Eqn. 2.72,

    \Delta = \left( \int_0^T e^{A\lambda} \, d\lambda \right) B = \left( \int_0^T \begin{bmatrix} 1 & 0 \\ \lambda & 1 \end{bmatrix} d\lambda \right) \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} T & 0 \\ T^2/2 & T \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} T \\ T^2/2 \end{bmatrix}
For small problems such as this, the symbolic toolbox helps do the computation.

>> A = [0 0; 1 0]; B = [1; 0];
>> syms T lambda % Symbolic sample time T and integration variable lambda
>> Phi = expm(A*T) % Phi(T)
Phi =
[ 1, 0]
[ T, 1]
>> Delta = int(expm(A*lambda),lambda,0,T)*B % Delta(T)
Delta =
[       T]
[ 1/2*T^2]
Symbolic example with invertible A matrix. We can discretise the continuous state-space system

    \dot{x} = \begin{bmatrix} -1.5 & -0.5 \\ 1 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u

analytically at a sample time T by computing matrix exponentials symbolically.

>> A = [-3/2, -1/2; 1, 0]; B = [1;0];
>> syms T
>> Phi = expm(A*T)
Phi =
[   -exp(-1/2*T)+2*exp(-T),   exp(-T)-exp(-1/2*T)]
[ -2*exp(-T)+2*exp(-1/2*T), 2*exp(-1/2*T)-exp(-T)]
Since A is invertible in this example, we can use the simpler Eqn. 2.73. (Note the parentheses
around A\B: without them, (Phi-eye(2))*A\B is parsed left to right as ((Phi-eye(2))*A)\B, which is
not Eqn. 2.73.)

>> Delta = simplify((Phi - eye(2))*(A\B)) % Delta = (e^{AT} - I) A^{-1} B
Delta =
[   2*exp(-1/2*T)-2*exp(-T)]
[ 2*exp(-T)-4*exp(-1/2*T)+2]
Of course it is evident from the above example that the symbolic expressions for \Phi(T) and \Delta(T)
rapidly become unwieldy for dimensions much larger than about 2. For this reason, analytical
expressions are of limited practical worth. The alternative numerical schemes are discussed in
the following section.
2.8.1 Numerically computing the discrete transformation

Calculating numerical values for the matrices \Phi and \Delta can be done by hand for small dimensions
by converting to a diagonal or Jordan form, or numerically using the exponential of a matrix.
Manual calculations are neither advisable nor enjoyable, but [19, p35] mention that if you first
compute

    \Psi = \int_0^T e^{A\lambda} \, d\lambda = IT + \frac{AT^2}{2!} + \frac{A^2 T^3}{3!} + \cdots + \frac{A^{k-1} T^k}{k!} + \cdots

then

    \Phi = I + A\Psi \quad \text{and} \quad \Delta = \Psi B    (2.76)
A better approach, at least when using MATLAB, follows from Eqn. 2.71 and Eqn. 2.72, where

    \frac{d\Phi}{dt} = A\Phi = \Phi A    (2.77)
    \frac{d\Delta}{dt} = \Phi B    (2.78)

These two equations can be concatenated to give

    \frac{d}{dt} \begin{bmatrix} \Phi & \Delta \\ 0 & I \end{bmatrix} = \begin{bmatrix} \Phi & \Delta \\ 0 & I \end{bmatrix} \begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix}    (2.79)

which is in the same form as da/dt = ab. Rearranging this to \int da/a = \int b \, dt leads to the analytical
solution

    \begin{bmatrix} \Phi & \Delta \\ 0 & I \end{bmatrix} = \exp\left( \begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix} T \right)    (2.80)

enabling us to extract the required \Phi and \Delta matrices, provided we can reliably compute the
exponential of a matrix. The MATLAB CONTROL TOOLBOX routine to convert from continuous
to discrete systems, c2d, essentially follows this algorithm.
We could try the augmented version of Eqn. 2.80 to compute both \Phi and \Delta with one call to the
matrix exponential function for the example started on page 60.

>> A = [-3/2, -1/2; 1, 0]; B = [1;0];
>> syms T
>> [n,m] = size(B); % extract dimensions
>> Aa = [A,B;zeros(m,n+m)] % Augmented matrix [A B; 0 0]
Aa =
   -1.5   -0.5    1.0
    1.0    0.0    0.0
    0.0    0.0    0.0
>> Phi_a = expm(Aa*T) % compute exponential exp(Aa*T)
Phi_a =
[   -exp(-1/2*T)+2*exp(-T),   exp(-T)-exp(-1/2*T),  -2*exp(-T)+2*exp(-1/2*T)]
[ -2*exp(-T)+2*exp(-1/2*T), 2*exp(-1/2*T)-exp(-T), 2*exp(-T)-4*exp(-1/2*T)+2]
[                        0,                     0,                         1]
>> Phi = Phi_a(1:n,1:n)
Phi =
[   -exp(-1/2*T)+2*exp(-T),   exp(-T)-exp(-1/2*T)]
[ -2*exp(-T)+2*exp(-1/2*T), 2*exp(-1/2*T)-exp(-T)]
>> Delta = Phi_a(1:n,n+1:end)
Delta =
[  -2*exp(-T)+2*exp(-1/2*T)]
[ 2*exp(-T)-4*exp(-1/2*T)+2]
Reliably computing matrix exponentials

There are in fact several ways to numerically compute a matrix exponential. Ogata gives three
computational techniques, [148, pp526-533], and a paper by Cleve Moler (one of the original
MATLAB authors) and Charles Van Loan is titled Nineteen dubious ways to compute the exponential
of a matrix⁵, and contrasts methods involving approximation theory, differential equations, matrix
eigenvalues and others. Of these 19 methods, two found their way into MATLAB, namely expm,
which is the recommended default strategy, and expm1, intended to accurately compute e^x - 1 for small x.
The one time when matrix exponentials are trivial to compute is when the matrix is diagonal.
Physically this implies that the system is totally decoupled, since the matrix A is comprised of
only diagonal elements, and the corresponding exponential matrix is simply the exponential of the
individual elements. So given the diagonal matrix

    D = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad \text{the matrix exponential is} \quad \exp(D) = \begin{bmatrix} e^{\lambda_1} & 0 & 0 \\ 0 & e^{\lambda_2} & 0 \\ 0 & 0 & e^{\lambda_3} \end{bmatrix}
which is trivial and reliable to compute. So one strategy then is to transform our system to a
diagonal form(if possible), and then simply nd the standard scalar exponential of the individual
elements. However some matrices, such as those with multiple eigenvalues, is is impossible to
convert to diagonal form, so in those cases the best we can do is convert the matrix to a Jordan
block form as described in [148, p527], perhaps using the jordan command from the Symbolic
toolbox.
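The diagonal case is a two-line check in any numerical language; a Python/NumPy sketch (the eigenvalues are arbitrary illustrative values):

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([-1.0, -2.0, -3.0])   # illustrative eigenvalues
D = np.diag(lam)

# expm of a diagonal matrix = scalar exponentials on the diagonal
print(np.allclose(expm(D), np.diag(np.exp(lam))))      # -> True

# Element-wise exp is NOT the matrix exponential: it turns the
# off-diagonal zeros into ones
print(np.allclose(np.exp(D), expm(D)))                 # -> False
```
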
However this transformation is very sensitive to numerical round-off and for that reason is not used for serious computation. For example the matrix

A = \begin{bmatrix} 1 & 1 \\ \epsilon & 1 \end{bmatrix}

with \epsilon = 0 has a Jordan form of

\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}

but for \epsilon \neq 0, the Jordan form drastically changes to the diagonal matrix

\begin{bmatrix} 1 + \sqrt{\epsilon} & 0 \\ 0 & 1 - \sqrt{\epsilon} \end{bmatrix}

⁵ SIAM Review, vol. 20, 1978, pp. 802-836, and reprinted in [157, pp. 649-680]. In fact 25 years later, an update was published with some recent developments.
In summary, for serious numerical calculation we should use the matrix exponential function expm. Remember not to confuse finding the exponential of a matrix, expm, with the MATLAB function exp, which simply finds the exponential of each individual element in the matrix.
2.8.2 Using MATLAB to discretise systems
All of the complications of discretising continuous systems to their discrete equivalent can be circumvented by using the MATLAB command c2d, which is short for "continuous to discrete". Here we need only pass the continuous system of interest and the sampling time. As an example, we can verify the conversion of the double integrator system shown on page 60.
G = tf(1,[1 0 0])   % Continuous system G(s) = 1/s^2 in transfer function form
Gc = ss(G)          % Convert to continuous state-space
Gd = c2d(Gc,2)      % Convert to discrete state-space with a sample time of T = 2
a =
        x1   x2
   x1    1    0
   x2    2    1
b =
        u1
   x1    2
   x2    2
c =
        x1   x2
   y1    0    1
d =
        u1
   y1    0
Sampling time (seconds): 2
Discrete-time state-space model.
Unlike in the analytical case presented previously, here we must specify a numerical value for
the sample time, T.
Example: Discretising an underdamped second order system with a sample time of T = 3
following Eqn. 2.80 using MATLAB to compute the matrix exponential of the augmented system.
[A,B,C,D] = ord2(3, 0.5);      % generate a second-order system
Ts = 3.0;                      % sample time
[na,nb] = size(B);
X = expm([A, B; zeros(nb,na+nb)]*Ts);   % matrix exponential
Phi = X(1:na,1:na);            % pull off blocks to obtain Phi & Delta
Del = X(1:na,na+1:end);
We can use MATLAB to numerically verify the expression found for the discretised double integrator on page 60 at a specific sample time, say T = 4.
The double integrator in input/output form is

G(s) = \frac{1}{s^2}

which in packed state-space notation is

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}

Using a sample time of T = 4, we can compute

\Phi = \begin{bmatrix} 1 & 0 \\ T & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 4 & 1 \end{bmatrix}, \quad \Delta = \begin{bmatrix} T \\ T^2/2 \end{bmatrix} = \begin{bmatrix} 4 \\ 8 \end{bmatrix}

We can verify this using c2d in MATLAB as demonstrated below.
>> Gc = tf(1,[1,0,0])   % G(s) = 1/s^2
>> Gc_ss = ss(Gc);      % state space
>> dt = 4;              % sample time
>> Gd_ss = c2d(Gc_ss,dt)
a =
        x1      x2
   x1   1.000   0
   x2   4.000   1.000
b =
        u1
   x1   4.000
   x2   8.000
c =
        x1      x2
   y1   0       1.000
d =
        u1
   y1   0
Once the two system matrices \Phi and \Delta are known, solving the difference equation for further values of k just requires the repeated application of Eqn. 2.38. Starting from known initial conditions x_0, the state vector x at each sample instant is calculated thus:

x_1 = \Phi x_0 + \Delta u_0
x_2 = \Phi x_1 + \Delta u_1 = \Phi^2 x_0 + \Phi \Delta u_0 + \Delta u_1
x_3 = \Phi x_2 + \Delta u_2 = \Phi^3 x_0 + \Phi^2 \Delta u_0 + \Phi \Delta u_1 + \Delta u_2
  \vdots
x_n = \Phi^n x_0 + \sum_{k=0}^{n-1} \Phi^{n-1-k} \Delta u_k    (2.81)
The MATLAB function ltitr, which stands for "linear time-invariant time response", or the more general dlsim, will solve the general linear discrete-time model as in Eqn. 2.81. In special cases such as step or impulse tests, you can use dstep or dimpulse.
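The recursion and the closed-form sum of Eqn. 2.81 must agree. A quick numerical check, sketched here in Python/NumPy using the discretised double integrator from above and an arbitrarily chosen input sequence:

```python
import numpy as np

# Discretised double integrator at T = 4 (from the example above)
Phi = np.array([[1.0, 0.0],
                [4.0, 1.0]])
Delta = np.array([[4.0],
                  [8.0]])

x0 = np.array([[1.0], [0.0]])
u = [1.0, -0.5, 2.0, 0.0, 1.5]           # arbitrary input sequence

# Repeated application of x_{k+1} = Phi x_k + Delta u_k
x = x0
for uk in u:
    x = Phi @ x + Delta * uk

# Closed-form solution, Eqn. 2.81: x_n = Phi^n x0 + sum Phi^{n-1-k} Delta u_k
n = len(u)
xn = np.linalg.matrix_power(Phi, n) @ x0
for k, uk in enumerate(u):
    xn = xn + np.linalg.matrix_power(Phi, n - 1 - k) @ (Delta * uk)

print(np.allclose(x, xn))   # both routes agree -> True
```
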
Problem 2.4
1. Evaluate A^n where

A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}

for different values of n. What is so special about the elements of A^n? (Hint: find the eigenvalues of A.)
2. What is the determinant of A^n?
3. Write a couple of MATLAB m-functions to convert between the A, B, C and D form and the packed matrix form.
4. Complete the state-space to transfer function conversion analytically started in section 2.7.3, Eqn. 2.53. Compare your answer with using ss2tf.
A submarine example  A third-order model of a submarine from [62, p416 Ex 9.17] is

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ -0.0071 & -0.111 & 0.12 \\ 0 & 0.07 & -0.3 \end{bmatrix} x + \begin{bmatrix} 0 \\ -0.095 \\ 0.072 \end{bmatrix} u    (2.82)

where the state vector x is defined as

x \overset{\text{def}}{=} \begin{bmatrix} \theta & \dfrac{d\theta}{dt} & \alpha \end{bmatrix}^T

The state \theta is the inclination of the submarine and \alpha is the angle of attack above the horizontal. The scalar manipulated variable u is the deflection of the stern plane. We will assume that of the three states we can only actually measure two of them, \theta and \alpha, thus

C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
In this example, we consider just three states x, one manipulated variable u, and two outputs y. The stability of the open-loop system is given by the eigenvalues of A. In MATLAB, the command eig(A) returns

eig(A) = \begin{bmatrix} -0.0383 + 0.07i \\ -0.0383 - 0.07i \\ -0.3343 \end{bmatrix}

showing that all eigenvalues have negative real parts, indicating that the submarine is oscillatory, though stable. In addition, the complex conjugate pair indicates that there will be some oscillation in the response to a step change in stern plane position.
We can simulate the response of our submarine system (Eqn. 2.82) to a step change in stern plane movement, u, using the step command. The smooth plot in Fig. 2.22 shows the result of this continuous simulation, while the staircase plot shows the result of the discrete simulation using a sample time of T = 5. We see two curves corresponding to the two outputs.
Listing 2.4: Submarine simulation
A = [0,1,0; -0.0071,-0.111,0.12; 0,0.07,-0.3]; B = [0,-0.095,0.072]';
C = [1 0 0; 0 0 1];

Gc = ss(A,B,C,0)   % Continuous plant, Gc(s)
dt = 5;            % sample time T = 5 (rather coarse)
Gd = c2d(Gc,dt)    % Create discrete model

step(Gc,Gd);       % Do step response and see Fig. 2.22.
Figure 2.22: Comparing the continuous and discrete step responses of the submarine model. (The upper axes show x_1 and the lower axes show x_2 against time, for both G_c and G_d.)
Fig. 2.22 affirms that the open-loop process response is stable, supporting the eigenvalue analysis. Notice how the step command automatically selected appropriate time scales for the simulation. How did it do this?
The steady state of the system given that u = 1 is, using Eqn. 2.61,

>> u = 1;
>> xss = -A\B*u;   % Steady states given by xss = -A^{-1} B u
>> yss = C*xss
yss =              % See steady-state outputs in Fig. 2.22.
   -9.3239
    0.2400

which corresponds to the final values given in Fig. 2.22.
We can verify the results from the c2d routine using Eqns. 2.71 and 2.72, or in this case since A is invertible, Eqn. 2.73.

>> Phi = expm(A*5.0);                  % Don't forget the m in expm()
>> Delta = (Phi - eye(size(A)))*(A\B)  % Delta = (e^{AT} - I) A^{-1} B

This script should give the same \Phi and \Delta matrices as the c2d function since A is non-singular.
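Both identities can also be cross-checked against each other numerically. The Python/NumPy sketch below (an illustration, not part of the MATLAB session) computes Φ and Δ by the augmented-matrix route of Eqn. 2.80 and by Δ = (e^{AT} − I)A⁻¹B, and confirms they agree for the submarine model:

```python
import numpy as np
from scipy.linalg import expm

# Submarine model, Eqn. 2.82
A = np.array([[ 0.0,     1.0,   0.0 ],
              [-0.0071, -0.111, 0.12],
              [ 0.0,     0.07, -0.3 ]])
B = np.array([[0.0], [-0.095], [0.072]])
T = 5.0

# Route 1: augmented-matrix exponential, Eqn. 2.80
n, m = B.shape
X = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * T)
Phi1, Delta1 = X[:n, :n], X[:n, n:]

# Route 2: Delta = (e^{AT} - I) A^{-1} B, valid because A is non-singular
Phi2 = expm(A * T)
Delta2 = (Phi2 - np.eye(n)) @ np.linalg.solve(A, B)

print(np.allclose(Phi1, Phi2), np.allclose(Delta1, Delta2))   # -> True True
```
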
Given a discrete or continuous model, we can compute the response to an arbitrary input sequence using lsim:

t = [0:dt:100]';   % time vector
U = sin(t/20);     % Arbitrary input U(t)

lsim(Gc,U,t)       % continuous system simulation
lsim(Gd,U)         % discrete system simulation
We could explicitly demand a first-order hold as opposed to the default zeroth-order hold by setting the options for c2d.

optn = c2dOptions('Method','foh')   % Ask for a first-order hold
Gd2 = c2d(Gc,5,optn)                % Compare the step response of this with a zeroth-order hold
Problem 2.5  By using a suitable state transformation, convert the following open-loop system

\dot{x}_1 = x_2
\dot{x}_2 = 10 - 2x_2 + u

where the input u is the following function of the reference r,

u = -10 + 9r - \begin{bmatrix} 9 & 4 \end{bmatrix} x

into a closed-loop form suitable for simulation in MATLAB using the lsim function (i.e. \dot{x} = Ax + Br). Write down the relevant MATLAB code segment. (Note: In practice, it is advisable to use lsim whenever possible for speed reasons.)
Problem 2.6
1. Write an m-file that returns the state-space system in packed matrix form that connects the two dynamic systems G_1 and G_2 together,

G_{all} = G_1 G_2

Assume that G_1 and G_2 are already given in packed matrix form.
2. (Problem from [182, p126]) Show that the packed matrix form of the inverse of Eqn. 2.41, u = G^{-1} y, is

G^{-1} = \begin{bmatrix} A - BD^{-1}C & BD^{-1} \\ -D^{-1}C & D^{-1} \end{bmatrix}    (2.83)

assuming D is invertible, and we have the same number of inputs as outputs.
2.8.3 The discrete state space equation with time delay
The discrete state-space equation, Eqn. 2.37, does not account for any time delay or dead time between the input u and the output x. This makes it difficult to model systems with delay. However, by introducing new variables we can accommodate dead time or time delays. Consider the continuous differential equation,

\dot{x}_t = A x_t + B u_{t-\theta}    (2.84)

where \theta is the dead time and, in this particular case, is exactly equal to 2 sample times (\theta = 2T). The discrete-time equivalent to Eqn. 2.84 for a two-sample-time delay is

x_{k+3} = \Phi x_{k+2} + \Delta u_k    (2.85)

which is not quite in our standard state-space form of Eqn. 2.38 owing to the difference in time subscripts between u and x.
Now let us introduce a new vector of state variables, z, where

z_k \overset{\text{def}}{=} \begin{bmatrix} x_k \\ x_{k+1} \\ x_{k+2} \end{bmatrix}    (2.86)
which is sometimes known as a shift register or tapped delay line of states. Using this new state vector, we can write the dynamic system, Eqn. 2.85, compactly as

z_{k+1} = \begin{bmatrix} x_{k+1} \\ x_{k+2} \\ x_{k+3} \end{bmatrix} = \begin{bmatrix} I x_{k+1} \\ I x_{k+2} \\ \Phi x_{k+2} + \Delta u_k \end{bmatrix}    (2.87)
or in standard state-space form as

z_{k+1} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & \Phi \end{bmatrix} z_k + \begin{bmatrix} 0 \\ 0 \\ \Delta \end{bmatrix} u_k    (2.88)

This augmented system, Eqn. 2.88 (with 2 units of dead time), is now larger than the original Eqn. 2.85, given that we have 2n extra states. Furthermore, note that the new augmented transition matrix in Eqn. 2.88 is now no longer of full rank (even if \Phi itself was). If we wish to incorporate a delay \theta of \delta sample times (\theta = \delta T), then we must introduce \delta dummy state vectors into the system, creating a new state vector z of dimension n(\delta + 1):

z_k = \begin{bmatrix} x_k^T & x_{k+1}^T & x_{k+2}^T & \cdots & x_{k+\delta}^T \end{bmatrix}^T
The augmented state transition matrix and augmented control matrix are of the form

\bar{\Phi} = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ 0 & 0 & 0 & \cdots & \Phi \end{bmatrix}, \quad \text{and} \quad \bar{\Delta} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \Delta \end{bmatrix}    (2.89)

Now the discrete-time equation is

z_{k+1} = \bar{\Phi} z_k + \bar{\Delta} u_k    (2.90)
Since all the introduced states are unobservable, we have

x_k = \begin{bmatrix} I & 0 & 0 & \cdots & 0 \end{bmatrix} z_k    (2.91)

and the output equation is amended to

y_k = C \begin{bmatrix} I & 0 & 0 & \cdots & 0 \end{bmatrix} z_k    (2.92)

If the dead time is not an exact multiple of the sample time, then more sophisticated analysis is required; see [70, p174]. Systems with a large value of dead time become very unwieldy, as a dummy state vector is required for each multiple of the sample time. This creates large systems with many states that may become numerically ill-conditioned and difficult to manipulate.
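To see the bookkeeping of Eqns. 2.88-2.91 in action, the following Python sketch uses a scalar system with assumed illustrative values (phi = 0.8, delta = 1.0) and a two-sample delay, and confirms that the augmented model reproduces the directly simulated delayed response:

```python
import numpy as np

# Scalar system with a two-sample delay: x_{k+1} = phi x_k + delta u_{k-2}
phi, delta = 0.8, 1.0            # assumed illustrative values
N = 12
u = np.zeros(N); u[0] = 1.0      # impulse input

# Direct simulation, keeping a two-element delay line for past inputs
x, x_hist, buf = 0.0, [], [0.0, 0.0]   # buf holds u_{k-1}, u_{k-2}
for k in range(N):
    x_hist.append(x)
    x = phi*x + delta*buf[-1]
    buf = [u[k], buf[0]]

# Augmented form, Eqn. 2.88, with z_k = [x_k, x_{k+1}, x_{k+2}]^T
Phi_aug = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, phi]])
Delta_aug = np.array([[0.0], [0.0], [delta]])

z, z_hist = np.zeros((3, 1)), []
for k in range(N):
    z_hist.append(z[0, 0])       # x_k is the first block of z_k, Eqn. 2.91
    z = Phi_aug @ z + Delta_aug*u[k]

print(np.allclose(x_hist, z_hist))   # -> True
```
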
Problem 2.7 1. Simulate the submarine example (Eqn 2.82) but now introducing a dead time
of 2 sample time units.
2. Explain how the functions step and dstep manage to guess appropriate sampling rates,
and simulation horizons. Under what circumstances will the heuristics fail?
Figure 2.23: Issues in assessing system stability. (The decision tree distinguishes transfer-function from nonlinear descriptions, and yes/no from marginal answers, via the roots of the denominator (poles) or eigenvalues, the Routh array, the Jury test, a Padé approximation for non-polynomial terms, Lyapunov methods, linearisation, and simulation.)
2.9 Stability
Stability is a most desirable characteristic of a control system. As a control designer, you should only design control systems that are stable. For this reason, one must at least be able to analyse a potential control system to see if it is indeed theoretically stable. Once the controller has been implemented, it may be too late and costly to discover that the system was actually unstable.
One definition of stability is as follows:

  A system of differential equations is stable if the output is bounded for all time for any given bounded input, otherwise it is unstable. This is referred to as bounded-input bounded-output (BIBO) stability.

While for linear systems the concept of stability is well defined and relatively straightforward to evaluate⁶, this is not the case for nonlinear systems, which can exhibit complex behaviour. The next two sections discuss stability for linear systems. Section 2.9.4 briefly discusses techniques that can be used for nonlinear systems.
Before any type of stability analysis can be carried out, it is important to define clearly what sort of analysis is desired. For example, is a yes/no result acceptable, or do you want to quantify how close to instability the current operating point is? Is the system linear? Are the nonlinearities "hard" or "soft"? Fig. 2.23 highlights some of these concerns.

⁶ Actually, the stability criterion can be divided into 3 categories: stable, unstable and critically (un)stable. Physically, critical stability never really occurs in practice.
2.9.1 Stability in the continuous domain
The most important requirement for any controller is that the controller is stable. Here we describe how one can analyse a continuous transfer function to determine whether it is stable or not.
First recall the conditions for BIBO stability. In the Laplace domain, the transfer function is stable if all the poles (roots of the denominator) have negative real parts. Thus if the roots of the denominator are plotted on the complex plane, then they must all lie to the left of the imaginary axis, that is, in the left-half plane (LHP). In other words, the time-domain solution of the differential equation must contain only e^{-at} terms and no terms of the form e^{+at}, assuming a is positive.
Why is the stability determined only by the denominator of the transfer function, and not by the numerator or the input?
1. First, the input is assumed bounded and stable (by the BIBO definition).
2. If the transfer function is expanded as a sum of partial fractions, then the time solution will be comprised of terms that are summed together. For the system to be unstable, at least one of these individual terms must be unstable. Conversely, if all the terms are stable themselves, then the summation of these terms will also be stable. The input will also contribute separate fractions to the sum, but these are assumed stable by definition (part 1).

Note that it is assumed, for the purposes of this analysis, that the transfer function is written in its simplest form, that it is physically realisable, and that any time delays are expanded as a Padé approximation.
In summary, to establish the stability of a transfer function we could factorise the denominator polynomial, computing the roots numerically with a routine such as MATLAB's roots. Alternatively, we can just hunt for the signs of the real parts of the roots, a much simpler operation, and this is the approach taken by the Routh and Jury tests described next.
Routh stability criterion
The easiest way to establish the absolute stability of a transfer function is simply to extract the roots of the denominator, perhaps using MATLAB's roots command. In fact this is overkill, since to establish absolute stability we need only know whether any roots have positive real parts, not their actual values.
Aside from efficiency, there are two cases where this strategy falls down. One is where we do not have access to MATLAB, or the polynomial is particularly ill-conditioned such that simple root extraction techniques fail for numerical reasons; the other case is where we have a free parameter such as a controller gain in the transfer function. This has prompted the development of simpler exact algebraic methods such as the Routh array or Lyapunov's method to assess stability.
The Routh criterion states that all the roots of the polynomial characteristic equation,

a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 = 0    (2.93)

have negative real parts if and only if all the elements of the first column of the Routh table have the same sign. Otherwise the number of sign changes is equal to the number of right-half-plane roots.
The Routh table is defined starting with the coefficients of the characteristic polynomial, Eqn. 2.93,

s^n      | a_n      a_{n-2}   a_{n-4}   \cdots
s^{n-1}  | a_{n-1}  a_{n-3}   a_{n-5}   \cdots
s^{n-2}  | b_1      b_2       b_3       \cdots
s^{n-3}  | c_1      c_2       c_3       \cdots
 \vdots  | \vdots   \vdots    \vdots

and where the new entries b and c are defined as

b_1 = \frac{a_{n-1} a_{n-2} - a_n a_{n-3}}{a_{n-1}}, \quad b_2 = \frac{a_{n-1} a_{n-4} - a_n a_{n-5}}{a_{n-1}}, \quad \text{etc.}

c_1 = \frac{b_1 a_{n-3} - a_{n-1} b_2}{b_1}, \quad c_2 = \frac{b_1 a_{n-5} - a_{n-1} b_3}{b_1}, \quad \text{etc.}

The Routh table is continued until only zeros remain.
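For numeric coefficients the tabulation is easy to mechanise. A Python sketch using exact rational arithmetic (it assumes the special case of a zero appearing in the first column does not arise):

```python
from fractions import Fraction

def routh_first_column(coeffs):
    """First column of the Routh table for a_n s^n + ... + a_0.
    A sketch: assumes no zero ever appears in the first column."""
    a = [Fraction(c) for c in coeffs]
    prev, cur = a[0::2], a[1::2]
    cur = cur + [Fraction(0)] * (len(prev) - len(cur))   # pad second row
    col = [prev[0]]
    for _ in range(len(a) - 2):
        col.append(cur[0])
        # each new row is the cross-product rule from the two rows above
        nxt = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(len(prev) - 1)]
        nxt.append(Fraction(0))
        prev, cur = cur, nxt
    col.append(cur[0])
    return col

def count_rhp_roots(coeffs):
    """Number of sign changes in the first column = number of RHP roots."""
    col = routh_first_column(coeffs)
    return sum(1 for x, y in zip(col, col[1:]) if x * y < 0)

# s^4 + 3 s^3 + 3 s^2 + 2 s + K is stable only for 0 < K < 14/9
print(count_rhp_roots([1, 3, 3, 2, 1]))   # K = 1 -> 0 (stable)
print(count_rhp_roots([1, 3, 3, 2, 2]))   # K = 2 -> 2 (unstable)
```
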
The Routh array is only applicable for continuous-time systems, but for discrete-time systems the Jury test can be used in a manner similar to the Routh table. The construction of the Jury table is slightly more complicated than the Routh table, and is described in [60, pp. 117-118].
The Routh array is most useful when investigating the stability as a function of a variable such as a controller gain. Rivera-Santos has written a MATLAB routine to generate the Routh array with the possibility of including symbolic variables.⁷
Suppose we wish to determine the range of stability for a closed-loop transfer function with characteristic equation

s^4 + 3s^3 + 3s^2 + 2s + K = 0

as a function of the gain K.⁸
Listing 2.5: Example of the Routh array using the Symbolic toolbox

>> syms K                      % Define symbolic gain K
>> ra = routh([1 3 3 2 K],K)   % Build Routh array for s^4 + 3s^3 + 3s^2 + 2s + K
ra =
[         1, 3, K]
[         3, 2, 0]
[       7/3, K, 0]
[  -9/7*K+2, 0, 0]
[         K, 0, 0]

Since all the elements in the first column of the table must be positive, we know that for stability -(9/7)K + 2 > 0 and that K > 0, or

0 < K < 14/9
Stability and time delays
Time delay or dead time does not affect the stability of the open-loop response, since it does not change the shape of the output curve, but only shifts it to the right. This is due to the non-polynomial term e^{-\theta s} in the numerator of the transfer function. Hence dead time can be ignored
⁷ The routine, routh.m, is available from the MathWorks user-contributed collection at www.mathworks.com/matlabcentral/fileexchange/.
⁸ Problem adapted from [150, p288].
for stability considerations in the open loop. However, in the closed loop the dead time term appears in both the numerator and the denominator; it now does affect the stability characteristics and can no longer be ignored. Since dead time is a non-polynomial term, we cannot simply find the roots of the denominator, since there are now an infinite number of them; instead we must approximate the exponential term with a truncated polynomial such as a Padé approximation, and then apply either a Routh array in the continuous-time case or a Jury test in the discrete-time case.
Note however that the Nyquist stability criterion yields exact results, even for those systems with time delays. The drawback is that the computation is tedious without a computer.
Problem 2.8  Given

G(s) = \frac{15(4s + 1)\, e^{-\theta s}}{(3s + 1)(7s + 1)}

find the value of dead time \theta such that the closed loop is just stable, using both a Padé approximation and a Bode or Nyquist diagram. A first-order Padé approximation is

e^{-\theta s} \approx \frac{1 - \frac{\theta}{2} s}{1 + \frac{\theta}{2} s}
2.9.2 Stability of the closed loop
Prior to the days of cheap computers that can readily manipulate polynomial expressions and extract the roots of polynomials, one could infer the stability of the closed loop from a Bode or Nyquist diagram of the open loop.
The open-loop system

G(s) = \frac{100(s + 2)}{(s + 5)(s + 4)(s + 3)} e^{-0.2s}    (2.94)

is clearly stable, but the closed-loop system, G/(1 + G), may or may not be stable. If we substitute s = i\omega and compute the complex quantity G(i\omega) as a function of \omega, we can apply either the Bode stability criterion or the equivalent Nyquist stability criterion to establish the stability of the closed loop, without deriving an expression for the closed loop and subsequently solving for the closed-loop poles.
The Bode diagram consists of two plots: the magnitude of G(i\omega) versus \omega, and the phase of G(i\omega) versus \omega. Alternatively we could plot the real and imaginary parts of G(i\omega) as a function of frequency, but this results in a three-dimensional plot. Such a plot for the system given in Eqn. 2.94 is given in Fig. 2.24(a).
However, three-dimensional plots are difficult to manipulate, so we normally ignore the frequency component and just view the "shadow" of the plot on the real/imaginary plane. The two-dimensional Nyquist curve corresponding to Fig. 2.24(a) is given in Fig. 2.24(b). It is evident from either plot that the curve does encircle the critical (-1, 0i) point, so the closed-loop system will be unstable.
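This conclusion can also be reached without plotting, by locating the phase crossover of Eqn. 2.94 numerically. A Python sketch using a simple dense grid search (not a polished root-finder):

```python
import numpy as np

# G(s) = 100 (s + 2) e^{-0.2 s} / ((s + 5)(s + 4)(s + 3)), Eqn. 2.94
def G(w):
    s = 1j*w
    return 100*(s + 2)*np.exp(-0.2*s) / ((s + 5)*(s + 4)*(s + 3))

w = np.linspace(0.01, 20.0, 200_000)
g = G(w)
phase = np.unwrap(np.angle(g))        # continuous phase, radians

i = np.argmax(phase < -np.pi)         # first -180 degree crossing
wc, gain = w[i], abs(g[i])
print(wc, gain)   # gain > 1 at the -180 deg crossover: closed loop unstable
```

The open-loop gain exceeds unity where the phase first passes through -180 degrees, which is the Bode statement of the same encirclement seen on the Nyquist curve.
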
Establishing the closed-loop stability using the Nyquist criterion is slightly more general than using the Bode criterion. This is because systems exhibiting non-monotonically decreasing curves may cross the critical lines more than once, leading to misleading results. An interesting example from [80] is used to illustrate this potential problem.
(a) The Nyquist diagram with frequency information presented in the third dimension. A "flagpole" is drawn at (-1, 0i). Since the curve encircles the flagpole, the closed loop is unstable.
(b) A 2D Nyquist diagram which shows the closed loop is unstable since it encircles the (-1, 0i) point. In this case both positive and negative frequencies are plotted.
Figure 2.24: Nyquist diagram of Eqn. 2.94 in (a) three dimensions and (b) as typically presented in two dimensions.
2.9.3 Stability of discrete time systems
The definition of stability for a discrete-time system is similar to that of the continuous-time system, except that the definition is only concerned with the values of the output and input at the sample times. Thus it is possible for a system to be stable at the sample points, but be unstable between the sample points. This is called a hidden oscillation, although it rarely occurs in practice. Also note that the stability of a discrete-time system is dependent on the sample time, T.
Recall that, by the definition z = exp(sT), the poles p_i of the continuous transfer function map to exp(p_i T) in discrete space. If p_i is negative, then exp(p_i T) will be less than 1.0. If p_i is positive, then the corresponding discrete pole z_i will be larger than 1.0. Strictly, if p_i is complex, then it is the magnitude of z_i that must be less than 1.0 for the system to be stable. If the discrete poles z_i are plotted on the complex plane, and they all lie within a circle centred on the origin with a radius of 1, then the system is stable. This circle of radius 1 is called the unit circle.
  The discrete-time transfer function G(z) is stable if G(z) has no discrete poles on, or outside, the unit circle.

Now the evaluation of stability proceeds in the same way as for the continuous case. One simply factorises the characteristic equation of the transfer function and inspects the roots. If any of the roots z_i lie outside the unit circle, the system will be unstable. If any of the poles lie exactly on the unit circle, there can be some ambiguity about the stability; see [119, p120]. For example, the transfer function with poles on the unit circle,

G(z) = \frac{1}{z^2 + 1}

given a bounded input such as a step, u(k) = {1, 1, 1, 1, ...}, produces

y(k) = {0, 0, 1, 1, 0, 0, 1, 1, 0, 0, ...}

which is bounded and stable. However if we try a different bounded input such as the cycle u(k) = {1, 0, -1, 0, 1, 0, -1, ...} we observe

y(k) = {0, 0, 1, 0, -2, 0, 3, 0, -4, 0, 5, 0, -6, 0, ...}

which is unbounded.
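Both response sequences follow directly from the difference equation y_{k+2} = u_k - y_k implied by G(z) = 1/(z^2 + 1). A short Python check:

```python
def simulate(u, N):
    # y_{k+2} + y_k = u_k, from G(z) = 1/(z^2 + 1), zero initial conditions
    y = [0.0, 0.0]
    for k in range(N - 2):
        y.append(u[k] - y[k])
    return y

N = 14
step  = [1.0] * N                                  # unit step input
cycle = [(1, 0, -1, 0)[k % 4] for k in range(N)]   # input at the pole frequency

print(simulate(step, N))    # bounded:  0, 0, 1, 1, 0, 0, 1, 1, ...
print(simulate(cycle, N))   # growing:  0, 0, 1, 0, -2, 0, 3, 0, -4, ...
```

The cycle excites the marginal poles at z = ±i, so the output amplitude grows without bound even though the input is bounded.
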
Just as in the continuous case, we can establish the absolute stability of a discrete transfer function without explicitly solving for the roots of the denominator D(z) = 0, which, in the days before computers, was a main consideration. The main methods for discrete systems are the Jury test, the bilinear transform coupled with the Routh array, and Lyapunov's method. All these methods are analytical and hence exact. The Jury test is the discrete equivalent of the Routh array and is covered in Ogata [148, p242]. Lyapunov's method has the special distinction that it is also applicable to nonlinear ODEs, and is covered in section 2.9.4.
Suppose we wish to establish the stability of the discretised version of

G(s) = \frac{1}{6s^2 + 5s + 1}

at a sampling period of T = 2 with a first-order hold.
The discrete transfer function is

G(z) = \frac{0.0746 + 0.2005 z^{-1} + 0.0324 z^{-2}}{1 - 0.8813 z^{-1} + 0.1889 z^{-2}}

with discrete poles given by the solution of

1 - 0.8813 z^{-1} + 0.1889 z^{-2} = 0

or

z = 0.5134, 0.3679

>> G = tf(1,[6 5 1]);
>> Gd = c2d(G,2,'foh')
>> pole(Gd)
ans =
    0.5134
    0.3679
>> pzmap(Gd); axis('equal');

Since both discrete poles lie inside the unit circle, the discretised transfer function with a first-order hold is still stable.
If you are using the state-space description, then simply check the eigenvalues of the homogeneous part of the equation (the A or \Phi matrix).
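The same check can be scripted with SciPy's cont2discrete. The sketch below takes the state-space route, since the discrete poles are just the eigenvalues of Φ = e^{AT}:

```python
import numpy as np
from scipy.signal import cont2discrete, tf2ss

# G(s) = 1/(6s^2 + 5s + 1), discretised at T = 2 with a first-order hold
A, B, C, D = tf2ss([1.0], [6.0, 5.0, 1.0])
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=2.0, method='foh')

poles = np.linalg.eigvals(Ad)
print(np.sort(np.abs(poles)))   # approx [0.3679, 0.5134], both inside |z| = 1
```

Note the discrete poles are exactly exp(-T/2) and exp(-T/3), the mapped images of the continuous poles at -1/2 and -1/3.
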
2.9.4 Stability of nonlinear differential equations
Despite the fact that nonlinear differential equations are in general very difficult, if not impossible, to solve, one can attempt to establish their stability without needing to find the solution. Studies of this sort fall into the realm of nonlinear systems analysis, which demands a high degree of mathematical insight and competence. [187] is a good introductory text for this subject that avoids much of the sophisticated mathematics.
Two methods due to the Russian mathematician Lyapunov⁹ address this nonlinear system stability problem. The indirect, or linearisation, method is based on the fact that the stability near

⁹ Some authors, e.g. Ogata, tend to spell it as Liapunov. Note however that MATLAB uses the lyap spelling.
the equilibrium point will be closely approximated by the linearised model at that point. However, it is the second, or direct, method which, being exact, is the far more interesting and powerful analysis. Since the Lyapunov stability method is applicable for the general nonlinear differential equation, it is of course applicable for linear systems as well.
Figure 2.25: Alexander Mikhailovich Liapunov (1857-1918) was a Russian mathematician and mechanical engineer. He had the very rare merit of producing a doctoral dissertation of lasting value. This classic work was originally published in 1892 in Russian, but is now available in an English translation, Stability of Motion, Academic Press, NY, 1966. Liapunov died by violence in Odessa, which cannot be considered a surprising fate for a middle-class intellectual in the chaotic aftermath of the Russian revolution.
(Excerpt from Differential Equations and historical applications, Simmons, p465.)
The Lyapunov stability theorem says that for a differential system

\dot{x} = f(x, t)    (2.95)

where f(0, t) = 0 for all t, if there exists a scalar function V(x) having continuous first partial derivatives such that V(x) is positive definite and the time derivative \dot{V}(x) is negative definite, then the equilibrium position at the origin is uniformly asymptotically stable. If in addition V(x) \to \infty as \|x\| \to \infty, then it is stable "in the large". V(x) is called a Lyapunov function of the system.
If the Lyapunov function is thought of as the total energy of the system, then we know that the total energy must always be positive, hence the restriction that V(x) is positive definite; and also, for the system to be asymptotically stable, this energy must slowly die away with time, hence the requirement that \dot{V}(x) < 0. The hard part about this method is finding a suitable Lyapunov function in the first place. Testing the conditions is easy, but note that if your particular candidate V(x) failed the requirements for stability, this either means that your system is actually unstable, or that you just have not found the right Lyapunov function yet. The problem is that you don't know which. In some cases, particularly vibrating mechanical ones, good candidate Lyapunov functions can be found by using the energy analogy; however, in other cases this may not be productive.
Algorithm 2.2 Lyapunov stability analysis of nonlinear continuous and discrete systems.
The second method of Lyapunov is applicable for both continuous and discrete systems, [187,
p65], [148, p557].
Continuous differential equations. Given a nonlinear differential equation,

\dot{x} = f(x, t)

and a scalar function V(x), then if
1. V(x) is positive definite, and
2. \dot{V}(x), the derivative of V(x), is negative definite, and
3. V(x) \to \infty as \|x\| \to \infty,
then the system \dot{x} = f(x, t) is globally asymptotically stable.

Discrete difference equations. Given a nonlinear difference equation,

x_{k+1} = f(x_k), \quad f(0) = 0

and a scalar function V(x), continuous in x, then if
1. V(x) > 0 for x \neq 0 (i.e. V(x) is positive definite), and
2. \Delta V(x), the difference of V(x), is negative definite, where

\Delta V(x_k) = V(x_{k+1}) - V(x_k) = V(f(x_k)) - V(x_k)

and
3. V(x) \to \infty as \|x\| \to \infty,
then the system x_{k+1} = f(x_k) is globally asymptotically stable.
Example of assessing the stability of a nonlinear system using Lyapunov's second method.
Consider the dynamic equations from Ogata [150, Ex 9-18, p729]

\dot{x}_1 = x_2 - x_1 (x_1^2 + x_2^2)
\dot{x}_2 = -x_1 - x_2 (x_1^2 + x_2^2)

and we want to establish whether the system is stable or not. The system is nonlinear and the origin is the only equilibrium state. We can propose a trial Lyapunov function V(x) = x_1^2 + x_2^2, which is positive definite. (Note that that was the hard bit!) Now differentiating gives

\dot{V}(x) = 2x_1 \frac{dx_1}{dt} + 2x_2 \frac{dx_2}{dt} = -2 \left( x_1^2 + x_2^2 \right)^2

which is negative definite. Since V(x) \to \infty as \|x\| \to \infty, the system is stable in the large.
Relevant problems in [150, p771] include B-9-15, B-9-16 and B-9-17.
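A crude numerical simulation supports the algebra: integrating the equations (with the signs as given in the worked example) from an arbitrary starting point, V(x) = x1^2 + x2^2 decays monotonically toward zero. A Python sketch using simple Euler steps:

```python
import numpy as np

# Dynamics from the worked example above
def f(x):
    r2 = x[0]**2 + x[1]**2
    return np.array([ x[1] - x[0]*r2,
                     -x[0] - x[1]*r2])

def V(x):                      # candidate Lyapunov function
    return x[0]**2 + x[1]**2

x = np.array([1.5, -0.8])      # arbitrary starting point
h = 1e-3                       # Euler step size
Vs = [V(x)]
for _ in range(20000):
    x = x + h*f(x)
    Vs.append(V(x))

print(Vs[0], Vs[-1])           # V decays toward zero
```

Along an exact trajectory V satisfies dV/dt = -2V^2, so V(t) = V(0)/(1 + 2V(0)t); the Euler trace tracks this decay closely.
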
The problem with assessing the stability of nonlinear systems is that the stability can change depending on the values of the states and/or inputs. This is not the case for linear systems, where the stability is a property of the system, not of the operating point. An important and relevant chemical engineering example of a nonlinear system that can be either stable or unstable is the continuously stirred tank reactor. The reactor can be operated in either the stable or the unstable regime depending on the heat-transfer characteristics. The stability of such reactors is important for exothermic reactions, since thermal runaway is potentially very hazardous.
Stability of linear time invariant systems using Lyapunov
Sections 2.9.1 and 2.9.3 established stability criteria for linear systems. In addition to these methods, the second method of Lyapunov can be used, since a linear system is simply a special case of the general nonlinear system. The advantages of using the method of Lyapunov are:
1. The Lyapunov method is analytical even for nonlinear systems, so the criterion is, in principle, exact.
2. It is computationally simple and robust, and does not require the solution of the differential equation.
3. We do not need to extract the eigenvalues of an n × n matrix, which is an expensive numerical computation (although we may need to check the definiteness of a matrix, which does require at least a Cholesky decomposition).
Of course similar advantages apply to the Routh array procedure, but unlike the Routh array, this approach can be extended to nonlinear problems.
To derive the necessary condition for stability of $\dot{x} = Ax$ using the method of Lyapunov, we follow the procedure outlined in Algorithm 2.2 where we choose a possible Lyapunov function in quadratic form,
$$V(x) = x^T P x$$
The Lyapunov function will be positive definite if the matrix $P$ is positive definite. The time derivative is given by
$$\dot{V}(x) = \dot{x}^T P x + x^T P \dot{x} = (Ax)^T P x + x^T P (Ax) = x^T \underbrace{\left(A^T P + P A\right)}_{-Q}\, x$$
So if $A^T P + PA$ is negative definite, or alternatively $Q$ is positive definite, $Q \succ 0$, then the system is stable at the origin and hence asymptotically stable in the large. Note that this solution procedure is analytical.
Unfortunately there is no cheap and easy computational way to establish if a matrix is positive definite or not, but using MATLAB we would usually attempt a Cholesky factorisation. Apart from the difficulty in deciding if a matrix is positive definite or not, we also have the problem that a failure of $Q$ being positive definite does not necessarily imply that the original system is unstable. All it means is that if the original system does turn out to be stable, then the postulated $P$ was actually not a Lyapunov function.
A solution procedure that avoids us testing many different Lyapunov candidates is to proceed in reverse; namely choose an arbitrary symmetric positive definite $Q$ (say the identity matrix), and then solve
$$A^T P + PA = -Q \tag{2.96}$$
for the matrix $P$. Eqn. 2.96 is known as the matrix Lyapunov equation, and the more general form, $AP + PB = -Q$, is known as Sylvester's equation.

One obvious way to solve for the symmetric matrix $P$ in Eqn. 2.96 is by equating the coefficients, which results in a system of $n(n+1)/2$ linear algebraic equations. It is convenient in MATLAB to use the Kronecker tensor product to set up the equations and then use MATLAB's backslash operator to solve them. This has the advantage that it is expressed succinctly in MATLAB, but it has poor storage and algorithmic characteristics and is recommended only for small dimensioned problems. Section 2.9.5 explains Kronecker tensor products and this analytical solution strategy in further detail.
Listing 2.6: Solve the continuous matrix Lyapunov equation using Kronecker products

n = size(Q,1);  % Solve A'P + PA = -Q, with Q = Q'
I = eye(n);     % Identity I_n, the same size as A
P = reshape(-(kron(I,A')+kron(A',I))\Q(:),n,n); % (kron(I,A') + kron(A',I))vec(P) = -vec(Q)
P = (P+P')/2;   % force symmetric (hopefully unnecessary)
The final line, while strictly unnecessary, simply forces the computed result to be symmetric, which of course it should be in the first place.

Less restrictive conditions on the form of $Q$ exist that could make the solution procedure easier to do manually; these are discussed in problem 9-18 of Ogata [150, p766], and in [148, p554]. Note however, using MATLAB, we would not normally bother with these modifications.
The CONTROL TOOLBOX for MATLAB includes the function lyap which solves the matrix Lyapunov equation, and which is more numerically robust and efficient than equating the coefficients or using the Kronecker tensor product method given on page 77. However note that the definition of the Lyapunov matrix equation used by MATLAB,
$$AP + PA^T = -Q \tag{2.97}$$
is not exactly the same as that defined by Ogata and used previously in Eqn. 2.96. (See the MATLAB help file for clarification of this and compare with example 9-20 in [150, p734].)
Suppose we wish to establish the stability of the continuous system
$$\dot{x} = Ax = \begin{bmatrix} 0 & 1 \\ -1 & -0.4 \end{bmatrix} x$$
using lyap, as opposed to say extracting eigenvalues.

Now we wish to solve $A^T P + PA = -Q$ for $P$ where $Q$ is some conveniently chosen positive definite matrix such as say $Q = I$. Since MATLAB's lyap function solves the equivalent matrix equation given by Eqn. 2.97, we must pass $A^T$ rather than $A$ as the first argument to lyap.
Listing 2.7: Solve the matrix Lyapunov equation using the lyap routine

>> A = [0,1;-1,-0.4];
>> P = lyap(A',eye(size(A)))  % Use A' instead of A
P =
    2.7000    0.5000
    0.5000    2.5000
>> A'*P + P*A + eye(size(A))  % Does this equal zero?
ans =
  1.0e-015 *
         0   -0.4441
   -0.4441    0.2220

>> Q = eye(size(A));  % alternative method
>> n = max(size(Q));  % solve A'P + PA = -Q where Q > 0
>> I = eye(n);
>> P = reshape(-(kron(I,A')+ kron(A',I))\Q(:),n,n)
P =
    2.7000    0.5000
    0.5000    2.5000
Both methods return the same matrix $P$,
$$P = \begin{bmatrix} 2.7 & 0.5 \\ 0.5 & 2.5 \end{bmatrix}$$
which has minor determinants of 2.7 and 6.5. Since both are positive, following Sylvester's criteria, $P$ is positive definite and so the system is stable.
We can use Sylvester's criteria to establish if $P$ is positive definite or not, and hence the system's stability. To establish if a symmetric matrix is positive definite, the easiest way is to look at the eigenvalues: if they are all positive, the matrix is positive definite. Unfortunately we were originally trying to avoid solving for the eigenvalues, so this defeats the original purpose somewhat. Another efficient strategy to check for positive definiteness with MATLAB is to attempt a Cholesky decomposition using the [R,p]=chol(A) command. The decomposition will be successful if A is positive definite, and will terminate early with a suitable error message if not. Refer to the help file for further details.

As a final check, we could evaluate the eigenvalues of $A$ in MATLAB by typing eig(A).
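To make the Cholesky-based test concrete, the following sketch repeats the worked example above and checks the definiteness of $P$ without computing any eigenvalues; the second output of chol suppresses the error on failure.

```matlab
% Lyapunov stability test via a Cholesky definiteness check
A = [0, 1; -1, -0.4];        % system matrix from the example above
P = lyap(A', eye(size(A)));  % solve A'P + PA = -I (note the transpose)
[R, p] = chol(P);            % p == 0 only if P is positive definite
if p == 0
    disp('P > 0, so the system is stable')
else
    disp('P is not positive definite, so the system is unstable')
end
```

Since $P$ here is positive definite, the first branch is taken, in agreement with the eigenvalue check.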
Discrete time systems

For linear discrete time systems, the establishment of stability is similar to the continuous case given previously. Given the discrete time system $x_{k+1} = \Phi x_k$, we will again choose a positive definite quadratic function for the trial Lyapunov function, $V(x) = x^T P x$ where the matrix $P$ is positive definite, and compute the forward difference
$$\Delta V(x) = V(x_{k+1}) - V(x_k) = x_{k+1}^T P x_{k+1} - x_k^T P x_k = (\Phi x_k)^T P \Phi x_k - x_k^T P x_k = x_k^T \underbrace{\left(\Phi^T P \Phi - P\right)}_{-Q}\, x_k$$
If the matrix $Q$,
$$Q = -\left(\Phi^T P \Phi - P\right) \tag{2.98}$$
is positive definite, then the system is stable.
The reverse solution procedure for $P$ given $Q$ and $\Phi$ is analogous to the continuous time case given in Listing 2.6 using the Kronecker tensor product.

Listing 2.8: Solve the discrete matrix Lyapunov equation using Kronecker products

n = max(size(Q));  % Solve Phi'*P*Phi - P = -Q, with Q > 0
I = eye(n);        % identity
P = reshape((kron(I,I)-kron(Phi',Phi'))\Q(:),n,n)
P = (P+P')/2;      % force symmetric (hopefully unnecessary)
Once again, the more efficient dlyap routine from the CONTROL TOOLBOX can also be used. In fact dlyap simply calls lyap after some minor pre-processing.

We can demonstrate this by establishing the stability of
$$x_{k+1} = \begin{bmatrix} 3 & -0.5 \\ 0 & 0.8 \end{bmatrix} x_k$$
In this case since $\Phi$ is upper triangular, we can read the eigenvalues by inspection (they are the diagonal elements, 3 and 0.8). Since one of the eigenvalues is outside the unit circle, we know immediately that the process is unstable. However, following the method of Lyapunov,
>> Phi = [3, -0.5; 0, 0.8]
>> Q = eye(size(Phi));  % Ensure Q > 0
>> P = dlyap(Phi',Q)    % Note Phi is transposed
P =
   -0.1250   -0.1339
   -0.1339    2.9886
>> [R,rc] = chol(P)     % Is P positive definite?
R =
     []
rc =
     1                  % No it isn't, so the system is unstable!

we verify that result.
We can check the definiteness of $P$ using Sylvester's criteria [150, pp924-926], by using the chol function as in the above example, or even by finding the eigenvalues of $P$. We can then compare the stability criteria thus established with that obtained by solving for the eigenvalues of $\Phi$.
Algorithm 2.3 Stability of linear systems using Lyapunov

We wish to establish the absolute stability of the dynamic linear continuous system, $\dot{x} = Ax$, or the discrete counterpart, $x_{k+1} = \Phi x_k$.

1. Choose a convenient positive definite $n \times n$ matrix $Q$, say the identity, $I_{n\times n}$, then

2. Solve either
$$A^T P + PA = -Q$$
in the case of a continuous system, or
$$\Phi^T P \Phi - P = -Q$$
in the case of a discrete system, for $P$ by either equating the coefficients or using lyap or dlyap.

3. The Lyapunov function is $V(x) = x^T P x$, so check the definiteness of $P$. The system is stable if and only if $P$ is positive definite. Check using Sylvester's criteria or attempt a Cholesky factorisation.
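Algorithm 2.3 can be collected into a small function. The name islyapstable is a hypothetical helper, not a toolbox routine; it follows the sign convention of Eqn. 2.96 by passing the transposed matrix to lyap and dlyap as discussed earlier.

```matlab
function stable = islyapstable(A, isdiscrete)
% ISLYAPSTABLE  Absolute stability of dx/dt = Ax, or x(k+1) = A*x(k),
% using the method of Lyapunov, following Algorithm 2.3.
Q = eye(size(A));           % Step 1: a convenient positive definite Q
if isdiscrete
    P = dlyap(A', Q);       % Step 2: solve A'PA - P = -Q
else
    P = lyap(A', Q);        % Step 2: solve A'P + PA = -Q
end
[R, p] = chol((P + P')/2);  % Step 3: stable iff P is positive definite
stable = (p == 0);
```

For example, islyapstable([3 -0.5; 0 0.8], true) returns false, in agreement with the eigenvalue test of the previous example.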
2.9.5 Expressing matrix equations succinctly using Kronecker products
The strategy employed in Listings 2.6 and 2.8 to solve the Lyapunov equation used Kronecker
products and vectorisation or stacking to express the matrix equation succinctly. This made it easy
to solve since the resulting expression was now reformulated into a system of linear equations
which can be solved using standard linear algebra techniques. Further details on the uses and
properties of Kronecker products (or tensor products) are given in [88, p256], [115, Chpt 13]
and [128]. The review in [34] concentrates specically on Kronecker products used in control
applications.
The Kronecker product, given the symbol $\otimes$, of two arbitrarily sized matrices, $A_{m\times n}$ and $B_{p\times q}$, results in a new (large) matrix
$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix} \tag{2.99}$$
of size $(mp \times nq)$. In MATLAB we would write kron(A,B). Note that in general $A\otimes B \neq B\otimes A$.
The vectorisation of a matrix $A$ is an operation that converts a rectangular matrix into a single column by stacking the columns of $A$ on top of each other. In other words, if $A$ is an $(n \times m)$ matrix, then $\mathrm{vec}(A)$ is the resulting $(nm \times 1)$ column vector. In MATLAB we convert block matrices or row vectors to columns simply using the colon operator, as in A(:).
We can combine this vectorisation operation with Kronecker products to express matrix multiplication as a linear transformation. For example, for the two matrices $A$ and $B$ of compatible dimensions,
$$\mathrm{vec}(AB) = (I\otimes A)\,\mathrm{vec}(B) = \left(B^T \otimes I\right)\mathrm{vec}(A) \tag{2.100}$$
and for the three matrices $A$, $B$ and $C$ of compatible dimensions,
$$\mathrm{vec}(ABC) = \left(C^T\otimes A\right)\mathrm{vec}(B) = (I\otimes AB)\,\mathrm{vec}(C) = \left(C^T B^T \otimes I\right)\mathrm{vec}(A) \tag{2.101}$$
Table II in [34] summarises these and many other properties of the algebra of Kronecker products
and sums.
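These identities are easily verified numerically; the following sketch checks the first form of Eqn. 2.101 using random matrices of compatible (and deliberately different) dimensions.

```matlab
% Numerical check of vec(ABC) = (C' kron A) vec(B), Eqn. 2.101
A = randn(3,4); B = randn(4,5); C = randn(5,2);
lhs = A*B*C;              % ordinary matrix product
rhs = kron(C', A)*B(:);   % Kronecker form acting on vec(B)
norm(lhs(:) - rhs)        % should be around machine precision
```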
This gives us an alternative way to express matrix expressions such as the Sylvester equation $AX - XA = Q$, where we wish to solve for the matrix $X$ given the matrix $A$. In this case, using Eqn. 2.100, we can write
$$\mathrm{vec}(AX - XA) = \left(I\otimes A - A^T\otimes I\right)\mathrm{vec}(X) = \mathrm{vec}(Q)$$
which is in the form of a system of linear equations $Gx = q$ where the vectors $x$ and $q$ are simply the stacked columns of the matrices $X$ and $Q$, and the matrix $G$ is given by $\left(I\otimes A - A^T\otimes I\right)$.

We first solve for the unknown vector $x$ using say $x = G^{-1}q$ or some numerically sound equivalent, and then we reassemble the matrix $X$ by un-stacking the columns from $x$. Of course this strategy is memory intensive because the size of the matrix $G$ is $(n^2 \times n^2)$. However [34] describes some modifications to this approach to reduce the dimensionality of the problem.
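As a sketch of this stacking strategy applied to the general Sylvester form $AX + XB = C$ mentioned earlier, the matrices below are chosen arbitrarily so that a unique solution exists (no eigenvalue of $A$ is the negative of an eigenvalue of $B$).

```matlab
% Solve the Sylvester equation AX + XB = C via Kronecker products
n = 3;
A = diag([1 2 3]); B = diag([4 5 6]);  % chosen so eig(A) + eig(B) has no zeros
C = randn(n);
I = eye(n);
G = kron(I, A) + kron(B', I);          % since vec(AX + XB) = G vec(X)
X = reshape(G\C(:), n, n);             % solve and un-stack the columns
norm(A*X + X*B - C)                    % residual should be near zero
```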
Using fsolve to solve matrix equations

The general nonlinear algebraic equation solver fsolve within the OPTIMISATION TOOLBOX has the nice feature that it can solve matrix equations such as the continuous time Lyapunov equation directly. Here we wish to find the square matrix $X$ such that
$$AX + XA^T + Q = 0$$
given an arbitrary square matrix $A$ and a positive definite matrix $Q$. For an initial estimate for $X$ we should start with a positive definite matrix such as $I$. We can compare this solution with the one generated by the dedicated lyap routine.
[Figure: regions of stability; in the continuous s-plane the stability boundary is the imaginary axis with the stable region being the left half-plane, while in the discrete z-plane the boundary is the unit circle with the stable region being its interior.]
Figure 2.26: Regions of stability for the poles of continuous (left) and discrete (right) systems
>> n = 3;
>> A = rand(n);                 % Create a random matrix A of dimensions (n x n)
>> Q1 = randn(n); Q = Q1'*Q1;   % Create a random positive definite matrix Q
>> LyEqn = @(X) A*X + X*A' + Q; % Matrix equation to be solved: AX + XA' + Q = 0
>> X = fsolve(LyEqn, eye(n));   % Solve the Lyapunov equation using fsolve
X =
   -0.8574   -0.6776    1.0713
   -0.6776    2.7742   -2.6581
    1.0713   -2.6581    0.7611

>> Xx = lyap(A,Q);              % Solve the Lyapunov equation using the Control Toolbox
Xx =
   -0.8574   -0.6776    1.0713
   -0.6776    2.7742   -2.6581
    1.0713   -2.6581    0.7611
2.9.6 Summary of stability analysis

The stability of a linear continuous time transfer function is determined by the sign of the real part of the poles. For the transfer function to be stable, all the poles must lie strictly in the left-hand plane. In the discrete case, the stability is determined by the magnitude of the possibly complex poles. To be stable, the discrete poles must lie within the unit circle. (See Fig. 2.26.) The key difference between the stability of discrete and continuous transfer functions is that the sample time plays an important role. Generally, as one increases the sample time, the discrete system tends towards instability.

To establish the stability of the transfer functions, one need not solve for the roots of the denominator polynomial since exact algebraic methods such as the Routh array, Jury blocks and Lyapunov methods exist. However with the current computer aided tools such as MATLAB, the task of reliably extracting the roots of high order polynomials is not considered the hurdle it once was. The Lyapunov method is also applicable for nonlinear systems. See Ogata's comments, [148, p250].
2.10 Summary

This chapter briefly developed the tools needed to analyse discrete dynamic systems. While most physical plants are continuous systems, most control is performed on a digital computer, and therefore the discrete description is far more natural for computer implementation or simulation. Converting a continuous differential equation to a discrete difference equation can be done either by using a backward difference approximation such as Euler's scheme, or by using z transforms. The z transform is the discrete equivalent to the Laplace transform in the continuous domain.

A vector/matrix approach to systems modelling and control has the following characteristics:

1. We can convert any linear time invariant differential equation into the state space form $\dot{x} = Ax + Bu$.

2. Once we have selected a sampling time, $T$, we can convert from the continuous time domain to the discrete form equivalent; $x_{k+1} = \Phi x_k + \Delta u_k$.

3. The stability of both the discrete and continuous time systems is determined by the eigenvalues of the $A$ or $\Phi$ matrix.

4. We can transform the mathematical description of the process to dynamically similar descriptions by adjusting our base coordinates, which may make certain computations easier.
Stability is one of the most important concepts for the control engineer. Continuous linear systems are stable if all the poles lie in the left hand side of the complex plane, i.e. they have negative real parts. For discrete linear systems, the poles must lie inside the unit circle. No easy check can be made for nonlinear systems, although the method due to Lyapunov can possibly be used, or one can approximate the nonlinear system as a linear system and check the stability of the latter.

When we deal with discrete systems, we must sample the continuous function. Sampling can introduce problems in that we may miss interesting information if we don't sample fast enough. The sampling theorem tells us how fast we must sample to reconstruct particular frequencies. Over-sampling is expensive, and could possibly introduce unwanted artifacts such as RHP zeros etc.
Chapter 3

Modelling dynamic systems with differential equations

The ability to solve so many problems with a small arsenal of mathematical methods gave rise to a very optimistic world view which in the 18th Century came to be called the Age of Reason. The essential point is that the world was felt to be predictable.
Hans Mark
3.1 Dynamic system models
Dynamic system models are groups of equations that involve time derivative terms that attempt
to reproduce behaviour that we observe around us. I admire the sense of optimism in the quote
given above by Hans Mark: if we know the governing equations, and the initial conditions, then
the future is assured. We now know that the 18th century thinkers were misguided in this sense,
but nevertheless engineers are a conservative lot, and still today with similar tools, they aim to
predict the known universe.
It is important to realise that with proven process models, control system design becomes sys-
tematic, hence the importance of modelling.
Physically, dynamic systems are those that change with time. Examples include vibrating structures (bridges and buildings in earthquakes), bodies in motion (satellites in orbit, fingers on a typewriter), changing compositions (chemical reactions, nuclear decay) and even human characteristics (attitudes to religion or communism throughout a generation, or attitudes to the neighbouring country throughout a football match), and of course there are many other examples. The aim of many modellers is to predict the future. For many of these types of questions, it is infeasible to experiment to destruction, but it is feasible to construct a model and test this. Today, it is easiest to construct a simulation model and test this on a computer. The types of models referred to in this context are mathematically based simulation models.
Black-box or heuristic models, where we just fit any old curve to the experimental data, and white-box or fundamental models, where the curves are fitted to well established physical laws, represent two possible extremes in modelling philosophy. In practice, most engineering models lie somewhere in the grey zone, known appropriately as grey-box models, where we combine our partial prior knowledge with black-box components to fit any residuals. Computer tools for grey-box modelling are developed in [30].
3.1.1 Steady state and dynamic models

Steady state models only involve algebraic equations. As such they are more useful for design rather than control tasks. However they can be used for control to give broad guidelines about the operating characteristics of the process. One example is Bristol's relative gain array (section 3.3.4) which uses steady state data to analyse for multivariable interaction. But it is the dynamic model that is of most importance to the control engineer. Dynamic models involve differential equations. Solving systems involving differential equations is much harder than for algebraic equations, but is essential for the study and application of control. A good reference for techniques of modelling dynamic chemical engineering applications is [196].
3.2 A collection of illustrative models

An economist is an expert who will know tomorrow why the things he predicted yesterday didn't happen today.
Laurence J. Peter

This section reviews the basic steps for developing dynamic process models and gives a few examples of common process models. There are many text books devoted to modelling for engineers, notably [33, 114] for general issues, [43, 195] for solution techniques and [129, 196] for chemical engineering applications. I recommend the following stages in the model building process:

1. Draw a diagram of the system with a boundary, labelling the input/output flows that cross the boundary.

2. Decide which are the state variables, parameters, manipulated variables, uncontrolled disturbances, etc.

3. Write down any governing equations that may be relevant, such as conservation of mass and energy.

4. Massage these equations into a standard form suitable for simulation.

The following sections will describe models for a flow system into and out of a tank with external heating, a double integrator such as an inverted pendulum or satellite control, a forced circulation evaporator, and a binary distillation column.
3.2.1 Simple models
Pendulums
Pendulums provide an easy and fruitful target to model. They can easily be built, are visually
impressive, and the governing differential equations derived from the physics are simple but
nonlinear. Furthermore, if you invert the pendulum such as in a rocket, the system is unstable,
and needs an overlaying control scheme.
[Figure: a hanging (stable) pendulum and an inverted (unstable) pendulum of length l and mass m, at inclination angle theta, driven by an applied torque T_c.]
Figure 3.1: A stable and unstable pendulum
Fig. 3.1 shows the two possible orientations. For the classical stable case, the equation of motion for the pendulum is
$$ml^2\frac{d^2\theta}{dt^2} + mgl\sin\theta = T_c \tag{3.1}$$
where $\theta$ is the angle of inclination, and $T_c$ is the applied torque. We can try to solve Eqn. 3.1 analytically using dsolve from the Symbolic Toolbox,
>> syms m l g positive
>> syms Tc theta
>> dsolve('m*l^2*D2theta + m*g*l*sin(theta) = Tc')
Warning: Explicit solution could not be found; implicit solution returned.
> In C:\MATLAB6p5p1\toolbox\symbolic\dsolve.m at line 292
ans =
[ Int(m/(m*(2*m*g*l*cos(a)+2*Tc*a+C1*m*l^2))^(1/2)*l,a = .. theta)-t-C2 = 0]
[ Int(-m/(m*(2*m*g*l*cos(a)+2*Tc*a+C1*m*l^2))^(1/2)*l,a = .. theta)-t-C2 = 0]
but the solution returned is not much help, indicating that this particular nonlinear differential equation probably does not have a simple explicit solution.

Eqn. 3.1 is a second order nonlinear differential equation, but by defining a new state variable vector, $x$, as
$$x_1 \overset{\text{def}}{=} \theta \tag{3.2}$$
$$x_2 \overset{\text{def}}{=} \dot{x}_1 = \frac{d\theta}{dt} \tag{3.3}$$
then we note that $\dot{x}_2 = \ddot{\theta}$, so the single second order differential equation can now be written as
two coupled first order equations
$$\dot{x}_1 = x_2 \tag{3.4}$$
$$\dot{x}_2 = -\frac{g}{l}\sin x_1 + \frac{T_c}{ml^2} \tag{3.5}$$
Linearising Eqn. 3.5 is trivial since $\sin\theta \approx \theta$ for small angles. Consequently, both the linear and nonlinear expressions are compared below,
$$\underbrace{\dot{x} = \begin{bmatrix} x_2 \\ -\frac{g}{l}\sin(x_1) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{T_c}{ml^2} \end{bmatrix}}_{\text{nonlinear}}, \qquad \underbrace{\dot{x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ \frac{T_c}{ml^2} \end{bmatrix}}_{\text{linear}} \tag{3.6}$$
while the numerical results are compared on page 120.
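A quick numerical comparison of the two models above can be made with ode45 for an unforced swing ($T_c = 0$); the values $g = 9.81$ and $l = 1$ are assumptions for illustration only.

```matlab
% Compare the nonlinear and linearised pendulum, Eqns 3.4-3.6, with Tc = 0
g = 9.81; l = 1;                       % assumed parameter values
fnl  = @(t,x) [x(2); -g/l*sin(x(1))];  % nonlinear model
flin = @(t,x) [x(2); -g/l*x(1)];       % linearised model
x0 = [1; 0];                           % release from rest at 1 radian
[t1,x1] = ode45(fnl,  [0 10], x0);
[t2,x2] = ode45(flin, [0 10], x0);
plot(t1, x1(:,1), t2, x2(:,1), '--')
xlabel('time (s)'); ylabel('\theta (radians)')
legend('nonlinear', 'linearised')
```

At a 1 radian swing the linearised model oscillates noticeably faster than the nonlinear one, which illustrates the limits of the small-angle approximation.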
In the inverted case, the equation for $\dot{x}_2$ is almost the same,
$$\dot{x}_2 = +\frac{g}{l}\sin x_1 + \frac{T_c}{ml^2} \tag{3.7}$$
but the sign change is enough to place the linearised pole in the right-hand plane.
A double integrator

The double integrator is a simple linear example from classical mechanics. It can describe, amongst other things, a satellite attitude control system,
$$J\frac{d^2\theta}{dt^2} = u \tag{3.8}$$
where $J$ is the moment of inertia, $\theta$ is the attitude angle, and $u$ is the control torque (produced by small attitude rockets mounted on the satellite's side). We can convert the second order system in Eqn. 3.8 to two first order systems by again defining a new state vector
$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \overset{\text{def}}{=} \begin{bmatrix} \theta \\ \dot{\theta} \end{bmatrix} \tag{3.9}$$
Substituting this definition into Eqn. 3.8, we get $J\dot{x}_2 = u$ and $\dot{x}_1 = x_2$, or in matrix-vector form
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{1}{J} \end{bmatrix} u \tag{3.10}$$
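With the Control Toolbox, the double integrator of Eqn. 3.10 can be entered directly as a state-space object and its open-loop response inspected; $J = 1$ is an assumed value for illustration.

```matlab
% The double integrator, Eqn. 3.10, as a state-space object
J = 1;                         % assumed moment of inertia
A = [0 1; 0 0];  B = [0; 1/J];
C = [1 0];  D = 0;             % we measure the attitude angle, theta
G = ss(A, B, C, D);
step(G, 5)                     % the attitude ramps away quadratically,
                               % showing why feedback control is needed
```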
A liquid level tank

Buffer storage vessels such as shown in Fig. 3.2 are commonly used in chemical plants between various pieces of equipment such as reactors and separators to dampen out fluctuations in production flows.

Our system boundary is essentially the figure boundary, so we have one input, the input flow rate, $F_{in}$, and one output, the output flow rate, $F_{out}$. What interests us is the amount of fluid in the tank, $M$, which physically must always be less than the full tank $M_{max}$, and greater than 0. The governing relation is the conservation of mass,
$$\frac{dM}{dt} = F_{in} - F_{out} \tag{3.11}$$
[Figure: a buffer tank with cross sectional area A, flow in F_in, flow out F_out through a valve, the true height h, and the measured height $\tilde{h}$ in a level leg.]
Figure 3.2: Simple buffer tank
If the tank has a constant cross sectional area, then the amount of water in the tank $M$ is proportional to the height of water $h$, since most fluids are assumed incompressible. In addition the mass $M$ is proportional to the volume, $M = \rho V = \rho A h$. Rewriting Eqn. 3.11 in terms of height,
$$\frac{dh}{dt} = \frac{F_{in} - F_{out}}{\rho A} \tag{3.12}$$
since the area and density are constant. If the liquid passes through a valve at the bottom of the tank, the flowrate out will be a function of height. For many processes, the flow is proportional to the square root of the pressure drop (Bernoulli's equation),
$$F_{out} \propto \sqrt{\Delta P} = k\sqrt{h}$$
This square root relation introduces a mild nonlinearity into the model.

The tank level is not actually measured in the tank itself, but in a level leg connected to the tank. The level in this level leg lags behind the true tank level owing to the restriction valve on the interconnecting line. This makes the system slightly more difficult to control owing to the added measurement dynamics. We assume that the level leg can be modelled using a first order dynamic equation, with a gain of 1, and a time constant of about 2 seconds. I can estimate this by simply watching the apparatus. Thus
$$\frac{d\tilde{h}}{dt} = \frac{h - \tilde{h}}{2} \tag{3.13}$$
The level signal is a current, $I$, between 4 and 20 mA. This current is algebraically related (by some function $f(\cdot)$, although often approximately linearly) to the level in the level leg. Thus the variable that is actually measured is:
$$I = f(\tilde{h}) \tag{3.14}$$
The current $I$ is called the output or measured variable. In summary, the full model equations are:
$$\frac{dh}{dt} = \frac{F_{in} - k\sqrt{h}}{\rho A} \tag{3.15}$$
$$\frac{d\tilde{h}}{dt} = \frac{h - \tilde{h}}{2} \tag{3.16}$$
$$I = f(\tilde{h}) \tag{3.17}$$
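A simulation sketch of Eqns 3.15-3.17 follows; the parameter values and the linear 4-20 mA calibration used for $f(\cdot)$ are illustrative assumptions only, not measured values.

```matlab
% Simulate the buffer tank with level-leg dynamics, Eqns 3.15-3.17
rho = 1000; A = 2; k = 5; Fin = 8;             % assumed parameter values
dxdt = @(t,x) [(Fin - k*sqrt(x(1)))/(rho*A);   % true level, h
               (x(1) - x(2))/2];               % level leg, 2 s time constant
[t, x] = ode45(dxdt, [0 5000], [1; 1]);
I = 4 + 16*x(:,2)/3;     % an assumed linear f(.): 4-20 mA over 0-3 m
plot(t, x); legend('h', 'measured h'); xlabel('time (s)')
```

With these assumed values the level settles where $F_{in} = k\sqrt{h}$, i.e. at about 2.56 m, with the measured level lagging slightly behind.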
The input variables (which we may have some hope of changing) are the input flow rate, $F_{in}$, and the exit valve position which changes $k$. The dependent (state) variables are the actual tank level, $h$, and the measurable level in the level leg, $\tilde{h}$. The output variable is the current, $I$. Constant parameters are the density and tank cross-sectional area. In Table 3.1, which summarises this nomenclature, I have written vectors in a bold face, and scalars in italics. The input variables can be further separated into variables that can be easily changed or changed on demand such as the outlet valve position (manipulated variables), and inputs that change outside our control (disturbance variables), in this case the inlet flowrate.

Table 3.1: Standard nomenclature used in modelling dynamic systems

type          symbol   variable
independent   t        time, t
states        x        $h$, $\tilde{h}$
inputs        u        $F_{in}$, $k$
outputs       y        $I$
parameters             $\rho$, $A$
A stirred tank heater

Suppose an electrical heater is added causing an increase in water temperature. Now in addition to the mass balance, we have an energy balance. The energy balance in words is: the rate of temperature rise of the contents in the tank is proportional to the amount of heat that the heater is supplying, plus the amount of heat that is arriving in the incoming flow, minus the heat lost with the outflowing stream, minus any other heat losses.

We note that for water the heat capacity $c_p$ is almost constant over the limited temperature range we are considering, and that the enthalpy of water in the tank is defined as
$$H = Mc_p\Delta T \tag{3.18}$$
where $\Delta T$ is the temperature difference between the actual tank temperature $T$ and some reference temperature, $T_{ref}$. Writing an energy balance gives
$$Mc_p\frac{dT}{dt} = F_{in}c_p(T_{in} - T_{ref}) - F_{out}c_p\Delta T + Q - q \tag{3.19}$$
where $Q$ is the heater power input (kW) and $q$ is the heat loss (kW). These two equations (Eqns 3.12 and 3.19) are coupled. This means that a change in mass in the tank $M$ will affect the temperature in the tank $T$ (but not the other way around). This is called one-way coupling.

For most purposes, this model would be quite adequate. But you may decide that the heat loss $q$ to the surroundings is not constant and the variation significant. A more realistic approximation of the heat loss would be to assume that it is proportional to the temperature difference between the vessel and the ambient (room) temperature $T_{room}$. Thus $q = k(T - T_{room})$. Note that the heat loss now can be both positive or negative. You may also decide that the heat capacity $c_p$ and density $\rho$ of water are a function of temperature. Now you replace these constants with functions as $c_p(T)$ and $\rho(T)$. These functions are tabulated in the steam tables¹. The more complete model

¹Steam tables for MATLAB are available from http://www.kat.lth.se/edu/kat061/MfileLib.html
would then be
$$\frac{dh}{dt} = \frac{F_{in} - F_{out}}{\rho(T)A} \tag{3.20}$$
$$\rho(T)Ah\,c_p(T)\frac{dT}{dt} = F_{in}c_p(T)\,(T_{in} - T_{ref}) - F_{out}c_p(T)\,\Delta T + Q - k(T - T_{room})$$
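Assuming constant physical properties, the coupled balances of Eqns 3.12 and 3.19 can be simulated directly; every numerical value below is an assumption chosen purely for illustration.

```matlab
% Sketch of the coupled mass and energy balances, Eqns 3.12 and 3.19
rho = 1000; A = 1; cp = 4180; Tref = 0;      % assumed constant properties
Fin = 2; Fout = 1.9; Tin = 20; Q = 100e3;    % assumed mass flows (kg/s) and heater power (W)
kloss = 200; Troom = 20;                     % assumed heat loss model, q = kloss*(T - Troom)
dxdt = @(t,x) [(Fin - Fout)/(rho*A); ...     % dh/dt, Eqn. 3.12
    (Fin*cp*(Tin - Tref) - Fout*cp*(x(2) - Tref) + Q ...
     - kloss*(x(2) - Troom))/(rho*A*x(1)*cp)];  % dT/dt, Eqn. 3.19 with M = rho*A*h
[t, x] = ode45(dxdt, [0 3600], [0.5; 20]);   % start at h = 0.5 m, T = 20 degC
plot(t, x(:,2)); xlabel('time (s)'); ylabel('T (degC)')
```

Note how the rising level $h$ enters the temperature equation through $M = \rho A h$, which is the one-way coupling described above.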
Caveat Emptor

Finally we should always state under what conditions the model is valid and invalid. For this example we should note that the vessel temperature should not vary outside the range $0 < T < 100\,^{\circ}$C since the enthalpy equation does not account for the latent heat of freezing or boiling. We should also note that $h$, $F_{in}$, $F_{out}$, $M$, $A$, $k$, $\rho$, $c_p$ are all assumed positive.

As a summary and a check on the completeness of the model, it is wise to list all the variables, and state whether the variable is a dependent or independent variable, whether it is a dynamic or constant variable, or whether it is a parameter. The number of degrees of freedom for a well posed problem should be zero. That is, the number of unknowns should equal the number of equations. Incomplete models, and/or bad assumptions, can give very misleading and bad results.
A great example highlighting the dangers of extrapolation from obviously poor models is the biannual UK government forecast of Britain's economic performance. This performance, or growth, is defined as the change in the Gross Domestic Product (GDP), which is the total amount of goods and services produced in the country that year. Figure 3.3 plots the actual growth over the last few years (solid line descending), compared with the forecasts given by the Chancellor of the Exchequer (dotted lines optimistically ascending). I obtained this information from a New Scientist article provocatively titled Why the Chancellor is always wrong, [49]. Looking at this performance, it is easy to be cynical about governments using enormous resources to model things that are essentially so complicated as to be unmodellable, and then produce results that are politically advantageous.
[Figure: UK GDP growth over 1989-1993, actual (descending) versus successive Treasury forecasts (ascending).]

Figure 3.3: The UK growth based on the GDP comparing the actual growth (solid) versus Treasury forecasts (dashed) from 1989-1992.
3.3 Chemical process models

The best material model of a cat is another, or preferably the same, cat.
Arturo Rosenblueth, Philosophy of Science, 1945

Modelling is very important in the capital-intensive world of chemical manufacture. Perhaps there is no better example of the benefits of modelling than in distillation. One industry magazine² reported that distillation accounted for around 3% of the total world energy consumption. Clearly this provides a large economic incentive to improve operation. Distillation models predict the extent of the component separation given the type of trays in the column, the column diameter and height, and the operating conditions (temperature, pressure etc). More sophisticated dynamic models predict the concentration in the distillate given changes in feed composition, boilup rate and reflux rate over time. An example of a simple distillation dynamic model is further detailed in section 3.3.3.

When modelling chemical engineering unit operations, we can consider (in increasing complexity):

1. Lumped parameter systems such as well-mixed reactors or evaporators.

2. Staged processes such as distillation columns, multiple effect evaporators, and flotation.

3. Distributed parameter systems such as packed columns, poorly mixed reactors, and dispersion in lakes & ponds.

An evaporator is an example of a lumped parameter system and is described in section 3.3.2, while a rigorous distillation column model described on page 98 is an example of a staged process. At present we will investigate only lumped parameter systems and staged processes, which involve only ordinary differential equations, as opposed to partial differential equations, albeit a large number of them. Problems which involve algebraic equations coupled to the ordinary differential equations are briefly considered in section 3.5.1.

A collection of good test industrial process models is available in the nonlinear model library from www.hedengren.net/research/models.htm.
3.3.1 A continuously-stirred tank reactor
Tank reactors are a common unit operation in chemical processing. A model presented in [85] considers a simple A $\rightarrow$ B reaction with a cooling jacket to adjust the temperature, and hence the reaction rate, as shown in Fig. 3.4.
The reactor model has two states: the concentration of compound A, given the symbol $C_a$ and measured in mol/m³, and the temperature of the reaction vessel liquid, $T$, measured in K. The manipulated variables are the cooling jacket water temperature, $T_c$, the temperature of the feed, $T_f$, and
² The Chemical Engineer, 21st October, 1999, p16.
3.3. CHEMICAL PROCESS MODELS 93
Figure 3.4: A CSTR reactor, showing the feed ($T_f$, $C_{af}$), the cooling jacket ($T_c$), the stirrer, and the product stream ($T$, $C_a$).
the concentration of the feed, $C_{af}$.
$$\frac{dC_a}{dt} = \frac{q}{V}\left(C_{af} - C_a\right) - k_0 \exp\left(-\frac{E}{RT}\right) C_a \tag{3.21}$$
$$\frac{dT}{dt} = \frac{q}{V}\left(T_f - T\right) + \frac{-\Delta H}{\rho C_p}\, k_0 \exp\left(-\frac{E}{RT}\right) C_a + \frac{UA}{V \rho C_p}\left(T_c - T\right) \tag{3.22}$$
The values for the states and inputs at steady-state are
$$\mathbf{x}_{ss} \overset{\text{def}}{=} \begin{bmatrix} T \\ C_a \end{bmatrix}_{ss} = \begin{bmatrix} 324.5 \\ 0.8772 \end{bmatrix}, \qquad \mathbf{u}_{ss} \overset{\text{def}}{=} \begin{bmatrix} T_c \\ T_f \\ C_{af} \end{bmatrix}_{ss} = \begin{bmatrix} 300 \\ 350 \\ 1 \end{bmatrix} \tag{3.23}$$
The values of the model parameters such as $\rho$, $C_p$ etc. are given in Table 3.2.

Table 3.2: Parameters of the CSTR model

name                                      variable            unit
Volumetric flowrate                       q = 100             m³/sec
Volume of CSTR                            V = 100             m³
Density of A–B mixture                    ρ = 1000            kg/m³
Heat capacity of A–B mixture              C_p = 0.239         J/kg-K
Heat of reaction for A → B                −ΔH = 5 × 10⁴       J/mol
Activation energy / gas constant          E/R = 8750          K
Pre-exponential factor                    k₀ = 7.2 × 10¹⁰     1/sec
Overall heat transfer coefficient × area  UA = 5 × 10⁴        W/K
Note: At a jacket temperature of $T_c = 305$ K, the reactor model has an oscillatory response. The oscillations are characterised by reaction run-away with a temperature spike. When the concentration drops to a low value, the reactor cools until the concentration builds up and there is another run-away reaction.
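The model above can be simulated in a few lines of code. The sketch below is written in Python with NumPy/SciPy (rather than the MATLAB used elsewhere in this book); it integrates Eqns. 3.21–3.22, confirms that the values in Eqn. 3.23 are indeed (very nearly) a steady state, and reproduces the temperature spike at $T_c = 305$ K. The function name and the simulation horizon are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from Table 3.2
q, V, rho, Cp = 100.0, 100.0, 1000.0, 0.239
mdelH, EoverR, k0, UA = 5e4, 8750.0, 7.2e10, 5e4

def cstr(t, x, Tc, Tf=350.0, Caf=1.0):
    """Eqns. 3.21-3.22: the states are Ca [mol/m3] and T [K]."""
    Ca, T = x
    r = k0 * np.exp(-EoverR / T) * Ca            # Arrhenius reaction rate
    dCa = q/V * (Caf - Ca) - r
    dT  = q/V * (Tf - T) + mdelH/(rho*Cp) * r + UA/(V*rho*Cp) * (Tc - T)
    return [dCa, dT]

# At the steady state of Eqn. 3.23 the derivatives are close to zero
x_ss = [0.8772, 324.5]
print(cstr(0.0, x_ss, Tc=300.0))

# At Tc = 305 K the response is oscillatory: the temperature spikes well
# above the nominal operating point before the concentration is depleted
sol = solve_ivp(cstr, [0, 10], x_ss, args=(305.0,), max_step=0.01)
print(sol.y[1].max())
```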
3.3.2 A forced circulation evaporator
Evaporators such as illustrated in Fig. 3.5 are used to concentrate fruit juices, caustic soda, alu-
mina and many other mixtures of a solvent and a solute. The incentive for modelling such unit
operations is that the model can provide insight towards better operating procedures or future
design alternatives. A relatively simple dynamic model of a forced circulation evaporator is de-
veloped in [145, chapter 2] is reproduced here as a test process for a wide variety of control tech-
niques of interest to chemical engineers. Other similar evaporator models have been reported
in the literature; [65] documents research over a number of years at the University of Alberta
on a pilot scale double effect evaporator, and [40] describe a linearised pilot scale double effect
evaporator used in multivariable optimal selftuning control studies.
Figure 3.5: A forced circulation evaporator from [145, p7], showing the evaporator, separator, condenser and circulation pump, the feed, product, steam, condensate, vapour and cooling water streams, and the pressure (P), level (L) and composition (A) measurements.
The nonlinear evaporator model can be linearised into a state space form with the variables described in Table 3.3. The parameters of the linearised state space model are given in Appendix E.1.
Table 3.3: The important variables in the forced circulation evaporator

type          name               variable   unit     range
State         level              L2         m        0–2
              steam pressure     P2         kPa      ??
              product conc       x2         %        0–100
Manipulated   product flow       F2         kg/min   0–10
              steam pressure     P100       kPa      100–350
              c/w flow           F200       kg/min   0–300
Disturbance   circulating flow   F3         kg/min   30–70
              feed flow          F1         kg/min   5–15
              feed composition   x1         %        0–10
              feed temperature   T1         °C       20–60
              c/w temperature    T200       °C       15–35
The distinguishing characteristics of this evaporator system from a control point of view are that the detailed plant model is nonlinear, the model is non-square (the number of manipulated variables, u, does not equal the number of state variables, x), and that one of the states, level, is an integrator. The dynamic model is both observable and controllable, which means that by measuring the outputs only, we can, at least in theory, control all the important state variables by changing the inputs.
Problem 3.1 1. Look at the development of the nonlinear model given in [145, chapter 2]. What extensions to the model would you suggest? Particularly consider the assumption of constant mass (M = 50 kg) in the evaporator circuit and the observation that the level in the separator changes. Do you think this change will make a significant difference?
2. Describe in detail what tests you would perform experimentally to verify this model if you had an actual evaporator available.
3. Construct a MATLAB simulation of the nonlinear evaporator. The relevant equations can be found in [145, Chpt 2].
3.3.3 A binary distillation column model
Distillation columns, which are used to separate mixtures of components with different vapour pressures into almost pure products, are an important chemical unit operation. The columns are expensive to manufacture, and the running costs are high owing to the high heating requirement. Hence there is much incentive to model them with the aim of operating them efficiently. Schematics of two simple columns are given in Figures 3.6 and 3.9.
Wood and Berry experimentally modelled a binary distillation column in [202] that separated methanol from water as shown in Fig. 3.6. The transfer function model they derived by step testing a real column has been extensively studied, although some authors, such as [185], point out that the practical impact of all these studies has been, in all honesty, minimal. Other distillation column models that are not obtained experimentally are called rigorous models and are based on fundamental physical and chemical relations. Another more recent distillation column model, developed by Shell and used as a test model, is discussed in [137].
Columns such as that given in Fig. 3.6 typically have at least 5 control valves, but because the hydrodynamics and the pressure loops are much faster than the composition dynamics, we can use the bottoms exit valve to control the level at the bottom of the column, the distillate exit valve to control the level in the separator, and the condenser cooling water valve to control the column pressure. This leaves the two remaining valves (steam reboiler and reflux) to control the top and bottoms compositions.
The Wood-Berry model is written as a matrix of Laplace transforms in deviation variables,
$$\mathbf{y} = \underbrace{\begin{bmatrix} \dfrac{12.8e^{-s}}{16.7s+1} & \dfrac{-18.9e^{-3s}}{21s+1} \\[2ex] \dfrac{6.6e^{-7s}}{10.9s+1} & \dfrac{-19.4e^{-3s}}{14.4s+1} \end{bmatrix}}_{\mathbf{G}(s)} \mathbf{u} + \begin{bmatrix} \dfrac{3.8e^{-8s}}{14.9s+1} \\[2ex] \dfrac{4.9e^{-3.4s}}{13.2s+1} \end{bmatrix} d \tag{3.24}$$
where the inputs $u_1$ and $u_2$ are the reflux and reboiler steam flowrates respectively, the outputs $y_1$ and $y_2$ are the mole fractions of methanol in the distillate and bottoms, and the disturbance variable, $d$, is the feed flowrate.
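Each element of Eqn. 3.24 is a first-order-plus-deadtime transfer function, whose step response is easy to reproduce even without the Control System Toolbox. The following sketch (in Python rather than the MATLAB used in this book; the function name and simulation settings are my own) simulates the (1,1) element, $12.8e^{-s}/(16.7s+1)$, by Euler integration with a transport delay, and the response settles at the steady-state gain of 12.8.

```python
import numpy as np

def fopdt_step(K, tau, theta, t_end=100.0, dt=0.01):
    """Unit-step response of K*exp(-theta*s)/(tau*s + 1) by Euler integration."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    y = np.zeros(n)
    for k in range(1, n):
        u = 1.0 if t[k-1] >= theta else 0.0   # input step, delayed by theta
        y[k] = y[k-1] + dt * (K * u - y[k-1]) / tau
    return t, y

# G11 of the Wood-Berry model: 12.8 e^{-s} / (16.7 s + 1)
t, y = fopdt_step(K=12.8, tau=16.7, theta=1.0)
print(y[-1])   # approaches the steady-state gain of 12.8
```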
It is evident from the transfer function model structure in Eqn. 3.24 that the plant will be interacting, and that all the transfer functions have some time delay associated with them, with the off-diagonal terms having the slightly larger time delays. Both these characteristics will make controlling the column difficult.

Figure 3.6: Schematic of a distillation column, showing the feed ($F$, $x_F$), the condenser with cooling water, the distillate ($D$, $x_D$), the reflux ($R$), the steam-heated reboiler ($S$), and the bottoms ($B$, $x_B$).
In MATLAB, we can construct such a matrix of transfer functions (ignoring the feed) with a matrix of deadtimes as follows:

>> G = tf({12.8, -18.9; 6.6 -19.4}, ...
       {[16.7 1],[21 1]; [10.9 1],[14.4 1]}, ...
       'ioDelayMatrix',[1,3; 7,3], ...
       'InputName',{'Reflux','Steam'}, ...
       'OutputName',{'Distillate','bottoms'})

Transfer function from input "Reflux" to output...
                           12.8
 Distillate:  exp(-1*s) * ----------
                          16.7 s + 1

                        6.6
 bottoms:  exp(-7*s) * ----------
                       10.9 s + 1

Transfer function from input "Steam" to output...
                          -18.9
 Distillate:  exp(-3*s) * --------
                          21 s + 1

                        -19.4
 bottoms:  exp(-3*s) * ----------
                       14.4 s + 1
Once we have formulated the transfer function model G, we can perform the usual types of analysis, such as step tests, as shown in Fig. 3.7. An alternative implementation of the column model in SIMULINK is shown in Fig. 3.8.
Figure 3.7: Step responses of the four transfer functions that make up the Wood-Berry binary distillation column model in Eqn. 3.24.
A 3-input/3-output matrix of transfer functions model of a 19-plate ethanol/water distillation column with a variable side stream draw-off from [151] is
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} =
\begin{bmatrix}
\dfrac{0.66e^{-2.6s}}{6.7s+1} & \dfrac{-0.61e^{-3.5s}}{8.64s+1} & \dfrac{-0.0049e^{-s}}{9.06s+1} \\[2ex]
\dfrac{1.11e^{-6.5s}}{3.25s+1} & \dfrac{-2.36e^{-3s}}{5s+1} & \dfrac{-0.012e^{-1.2s}}{7.09s+1} \\[2ex]
\dfrac{-34.68e^{-9.2s}}{8.15s+1} & \dfrac{46.2e^{-9.4s}}{10.9s+1} & \dfrac{0.87(11.61s+1)e^{-2.6s}}{(3.89s+1)(18.8s+1)}
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} +
\begin{bmatrix}
\dfrac{0.14e^{-1.2s}}{6.2s+1} & \dfrac{-0.0011(26.32s+1)e^{-2.66s}}{(3.89s+1)(14.63s+1)} \\[2ex]
\dfrac{0.53e^{-10.5s}}{6.9s+1} & \dfrac{-0.0032(19.62s+1)e^{-3.44s}}{(7.29s+1)(8.94s+1)} \\[2ex]
\dfrac{-11.54e^{-0.6s}}{7.01s+1} & \dfrac{0.32e^{-2.6s}}{7.76s+1}
\end{bmatrix}
\begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \tag{3.25}$$
which provides an alternative model for testing multivariable control schemes. In this model the three outputs are the overhead ethanol mole fraction, the side stream ethanol mole fraction, and the temperature on tray 19; the three inputs are the reflux flow rate, the side stream product rate and the reboiler steam pressure; and the two disturbances are the feed flow rate and feed temperature. This system is sometimes known as the OLMR model after the initials of the authors.
Problem 3.2 1. Assuming no disturbances (d = 0), what are the steady-state gains of the Wood-Berry column model, Eqn. 3.24? Use the final value theorem.
2. Sketch the response for $y_1$ and $y_2$ for:
(a) a change in reflux flow of +0.02;
(b) a change in the reflux flow of −0.02 and a change in reboiler steam flow of +0.025 simultaneously.
3. Modify the SIMULINK simulation to incorporate the feed dynamics.

Figure 3.8: Wood-Berry column model implemented in SIMULINK: (a) the overall column mask, and (b) inside the mask, where the four transfer function blocks and four transport delays may be compared with Eqn. 3.24.
More rigorous distillation column models
Distillation column models are important to chemical engineers involved in the operation and maintenance of these expensive and complicated units. While the behaviour of an actual multi-component tower is very complicated, models that assume ideal binary systems are often good approximations for many columns. We will deal in mole fractions of the more volatile component, x for liquid and y for vapour, and develop a column model following [130, p69].
A generic simple binary component model of a distillation column such as shown in Fig. 3.9 assumes:
1. Equal molar overflow applies (heats of vapourisation of both components are equal, and mixing and sensible heats are negligible).
2. Liquid level on every tray remains above weir height.
3. Relative volatility and the heat of vapourisation are constant. In fact we assume a constant relative volatility, $\alpha$. This simplifies the vapour-liquid equilibria (VLE) model to
$$y_n = \frac{\alpha x_n}{1 + (\alpha - 1)x_n}$$
on tray $n$, with typically $\alpha \approx 1.2$–$2$.
4. Vapour phase holdup is negligible and the feed is a saturated liquid at the bubble point.
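The constant relative volatility VLE expression in assumption 3 is simple enough to check numerically. A minimal sketch in Python (the helper name is my own; the book's own code is in MATLAB):

```python
def vle_y(x, alpha=1.5):
    """Vapour mole fraction in equilibrium with liquid mole fraction x,
    for a constant relative volatility alpha (assumption 3 above)."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

# alpha = 1 gives no separation: y = x
print(vle_y(0.5, alpha=1.0))   # 0.5
# alpha > 1 enriches the vapour in the more volatile component
print(vle_y(0.5, alpha=1.5))   # 0.6
```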
Figure 3.9: Distillation tower with trays numbered #1 to #N, the feed ($F$, $x_F$) entering on the feed tray $N_F$, a condenser (with cooling water) and collector producing the distillate ($D$, $x_D$) and reflux ($R$, $x_D$), and a reboiler producing the bottoms ($B$, $x_B$).
The N trays are numbered from the bottom to the top (tray 0 is the reboiler and tray N+1 is the condenser). We will develop separate model equations for the following parts of the column, namely:
Condenser is a total condenser, where the reflux is a liquid and the reflux drum is perfectly mixed.
General tray has liquid flows from above and vapour flows from below. It is assumed to be perfectly mixed with a variable liquid holdup, but no vapour holdup as the vapour dynamics are assumed very fast.
Feed tray Same as a general tray, but with an extra (liquid) feed term.
Top & bottom trays Same as a general tray, but with one of the liquid (top) or vapour (bottom) flows missing, but recycle/reflux added.
Reboiler is a perfectly mixed thermo-siphon reboiler with holdup $M_b$.
The total mass balance in the condenser and reflux drum is
$$\frac{dM_d}{dt} = V - R - D$$
and the component balance on the top tray is
$$\frac{d(M_d x_d)}{dt} = V y_{N_T} - (R + D)\,x_D \tag{3.26}$$
For the general $n$th tray, that is trays #2 through $N-1$, excepting the feed tray, the total mass balance is
$$\frac{dM_n}{dt} = L_{n+1} - L_n + \underbrace{V_{n-1} - V_n}_{0} = L_{n+1} - L_n \tag{3.27}$$
and the component balance is
$$\frac{d(M_n x_n)}{dt} = L_{n+1}x_{n+1} - L_n x_n + V y_{n-1} - V y_n \tag{3.28}$$
For the special case of the feed tray, $n_F$,
$$\frac{dM_{n_F}}{dt} = L_{n_F+1} - L_{n_F} + F, \qquad \text{mass balance}$$
$$\frac{d(M_{n_F} x_{n_F})}{dt} = L_{n_F+1}x_{n_F+1} - L_{n_F}x_{n_F} + Fx_F + V y_{n_F-1} - V y_{n_F}, \qquad \text{component balance}$$
and for the reboiler and column base
$$\frac{dM_B}{dt} = L_1 - V - B$$
$$\frac{d(M_B x_B)}{dt} = L_1 x_1 - Bx_B - V y_B$$
In summary, the variables for the distillation column model are:

VARIABLES
Tray compositions, x_n, y_n                          2N_T
Tray liquid flows, L_n                               N_T
Tray liquid holdups, M_n                             N_T
Reflux comp., flow & holdup, x_D, R, D, M_D          4
Base comp., flow & holdup, x_B, y_B, V, B, M_B       5
Total # of variables                                 4N_T + 9

and the number of equations are:

EQUATIONS
Tray component balances                              N_T
Tray mass balances                                   N_T
Equilibrium (tray + base)                            N_T + 1
Hydraulic relations                                  N_T
Reflux comp. & flow                                  2
Base comp. & flow                                    2
Total # of equations                                 4N_T + 7
This leaves two degrees of freedom. From a control point of view we normally fix the boilup rate, $\dot{Q}$, and the reflux flow rate (or ratio) with some sort of controller,
$$R = f(x_D), \qquad V \propto \dot{Q} = f(x_B)$$
Our dynamic model of a binary distillation column is relatively large, with two inputs ($R$, $V$), two outputs ($x_B$, $x_D$) and $2N_T + 4$ states. Since a typical column has about 20 trays, we will have around $n = 44$ states, which means 44 ordinary differential equations. However the (linearised) Jacobian for this system, while a large $44 \times 44$ matrix, is sparse. In our case the percentage of non-zero elements, or sparsity, is 9%.
$$\left[\; \underbrace{\frac{\partial \dot{\mathbf{x}}}{\partial \mathbf{x}}}_{44\times 44} \;\middle|\; \underbrace{\frac{\partial \dot{\mathbf{x}}}{\partial \mathbf{u}}}_{44\times 2} \;\right]$$
The structure of the $\mathbf{A}$ and $\mathbf{B}$ matrices is, using the spy command, given in Fig. 3.10.
Figure 3.10: The incidence of the Jacobian and B matrix for the ideal binary distillation column model, with the ODE equations ($\dot{z} = \cdots$) plotted against the states and inputs ([holdup, comp·holdup | R, V]). Over 90% of the elements in the matrix are zero.
There are many other examples of distillation column models around (e.g. [140, pp 459]). This model has about 40 trays, and assumes a binary mixture at constant pressure and constant relative volatility.
Simulation of the distillation column
The simple nonlinear dynamic simulation of the binary distillation column model can be used in a number of ways, including investigating the openloop response and interactions, and quantifying the extent of the nonlinearities. It can be used to develop simple linear approximate transfer function models, or we could pose "What if?" type questions, such as quantifying the response given feed disturbances.
An openloop simulation of a distillation column gives some idea of the dominant time constants and the possible nonlinearities. Fig. 3.11 shows an example of one such simulation where we step change:
1. the reflux from R = 128 to R = 128 + 0.2, and
2. the reboiler from V = 178 to V = 178 + 0.2
Figure 3.11: Open loop response of the distillation column: the distillate and base concentrations following separate step changes in reflux and vapour boilup.
From Fig. 3.11 we can note that the open loop step results are overdamped, and that the steady-state gains are very similar in magnitude. Furthermore the response looks very like a $2 \times 2$ matrix of second order overdamped transfer functions,
$$\begin{bmatrix} x_D(s) \\ x_B(s) \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} R(s) \\ V(s) \end{bmatrix}$$
So it is natural to wonder at this point if it is possible to approximate this response with a low-order model, rather than requiring the entire 44 states and associated nonlinear differential equations.
Controlled simulation of a distillation column
A closed loop simulation of the 20 tray binary distillation column with a feed disturbance from $x_F = 0.5$ to $x_F = 0.54$ at $t = 10$ minutes is given in Fig. 3.12.
We can look more closely in Fig. 3.13 at the distillate and base compositions to see if they really are in control, $x_b^{\ast} = 0.02$, $x_d^{\ast} = 0.98$.
Distillation columns are well known to interact, and these interactions cause difficulties in tuning. We will simulate in Fig. 3.14 a step change in the distillate setpoint from $x_D^{\ast} = 0.98$ to $x_D^{\ast} = 0.985$ at $t = 10$ min, and a step change in the bottoms concentration setpoint at 150 minutes. The interactions are evident in the base composition transients owing to the changing distillate composition, and vice versa. These interactions can be minimised by either tightly tuning one of the loops, consequently leaving the other loose, or by using a steady-state or dynamic decoupler, or even a multivariable controller.
Figure 3.12: A closed loop simulation of a binary distillation column given a feed disturbance. The upper plot shows the concentration on all the 20 trays; the lower plots show the manipulated variables (reflux and boil-up) response.

Figure 3.13: Detail of the distillate and base concentrations from a closed loop simulation of a binary distillation column given a feed disturbance. (See also Fig. 3.12.)
3.3.4 Interaction and the Relative Gain Array
The Wood-Berry distillation column model is an example of a multivariable, interacting process. This interaction is evident from the presence of the non-zero off-diagonal terms in Eqn. 3.24. A quantitative measure of the extent of interaction in a multivariable process is the Relative Gain Array (RGA) due to Bristol, [35]. The RGA is only applicable to square systems (the number of manipulated variables equals the number of controlled variables), and is a steady-state measure.
The RGA, given the symbol $\Lambda$, is an $(n \times n)$ matrix where each element is the ratio of the open loop gain divided by the closed loop gain. Ideally, if you are the sort who does not like interactions, you would like $\Lambda$ to be diagonally dominant. Since all the rows and all the columns must sum to 1.0, a system with no interaction will be such that $\Lambda = \mathbf{I}$.
The open loop steady-state gain is easy to determine. Taking the Wood-Berry model, Eqn. 3.24, as an example, the final steady-state as a function of the manipulated variable $\mathbf{u}$ is
$$\mathbf{y}_{ss} = \mathbf{G}_{ss}\mathbf{u} = \begin{bmatrix} 12.8 & -18.9 \\ 6.6 & -19.4 \end{bmatrix} \mathbf{u} \tag{3.29}$$

Figure 3.14: Distillation interactions are evident when we step change the distillate ($x_D$) and bottoms ($x_B$) setpoints independently; the lower plots show the reflux and reboiler responses.
The $ij$th element of $\mathbf{G}_{ss}$ is the open loop gain between $y_i$ and $u_j$. The $\mathbf{G}_{ss}$ matrix can be evaluated experimentally or formed by applying the final value theorem to Eqn. 3.24. Now the closed loop gain is a function of the open loop gain, so only the open loop gains are needed to evaluate the RGA. Mathematically the relative gain array, $\Lambda$, is formed by multiplying together $\mathbf{G}_{ss}$ and $\mathbf{G}_{ss}^{-\top}$ elementwise,³
$$\Lambda = \mathbf{G}_{ss} \otimes \left(\mathbf{G}_{ss}^{-1}\right)^{\top} \tag{3.30}$$
where the special symbol $\otimes$ means to take the Hadamard product (also known as the Schur product), or simply the elementwise product of two equally dimensioned matrices, as opposed to the normal matrix multiplication.
In MATLAB, the evaluation of the relative gain array is easy.

>> Gss = [12.8, -18.9; 6.6, -19.4]  % Steady-state gain from Eqn. 3.29
>> L = Gss.*inv(Gss)'               % See Eqn. 3.30. Don't forget the dot-times (.*)

which should return something like
$$\Lambda = \begin{bmatrix} 2.0094 & -1.0094 \\ -1.0094 & 2.0094 \end{bmatrix}$$
Note that all the columns and rows sum to 1.0. We would expect this system to exhibit severe interactions, although the reverse pairing would be worse.
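The same computation can be cross-checked outside MATLAB; a sketch in Python, where NumPy's `*` is already elementwise so Eqn. 3.30 is a one-liner (the function name is my own):

```python
import numpy as np

def rga(G):
    """Relative gain array (Eqn. 3.30): elementwise product of G with
    the transpose of its inverse."""
    return G * np.linalg.inv(G).T

Gss = np.array([[12.8, -18.9],
                [ 6.6, -19.4]])   # Steady-state gains from Eqn. 3.29
L = rga(Gss)
print(L)                          # diagonal approx 2.0094, off-diagonal approx -1.0094
print(L.sum(axis=0), L.sum(axis=1))   # every row and column sums to 1
```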
The usefulness of the RGA is in choosing which manipulated variable should go with which controlled variable in a decentralised control scheme. We juggle the manipulated/controlled variable pairings until $\Lambda$ most approaches the identity matrix. This is an important and sensitive topic in process control and is discussed in more detail in [191, pp494–503], and in [179, p457]. The most frequently discussed drawback of the relative gain array is that the technique only addresses the steady-state behaviour and ignores the dynamics. This can lead to poor manipulated/output pairings in some circumstances where the dynamic interaction is particularly strong. The next section further illustrates this point.
³ Note that this does not imply that $\Lambda = \mathbf{G}_{ss}\left(\mathbf{G}_{ss}^{-1}\right)^{\top}$, i.e. without the Hadamard product. See [179, p456] for further details.
The dynamic relative gain array
As mentioned above, the RGA is only a steady-state interaction indicator. However we could use the same idea to generate an interaction matrix, but this time consider the elements of the transfer function matrices as a function of frequency by substituting $s = j\omega$. This now means that the dynamic relative gain array, $\Lambda(\omega)$, is a matrix whose elements are functions of $\omega$.
Consider the $(2 \times 2)$ system from [181],
$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \underbrace{\begin{bmatrix} \dfrac{2.5e^{-5s}}{(15s+1)(2s+1)} & \dfrac{5}{4s+1} \\[2ex] \dfrac{1}{3s+1} & \dfrac{-4e^{-5s}}{20s+1} \end{bmatrix}}_{\mathbf{G}(s)} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \tag{3.31}$$
which has as its distinguishing feature significant time delays on the diagonal elements. We could compute the dynamic relative gain array matrix using the definition
$$\Lambda(s) = \mathbf{G}(s) \otimes \mathbf{G}(s)^{-\top} \tag{3.32}$$
perhaps using the symbolic toolbox to help us with the possibly unwieldy algebra.
Listing 3.1: Computing the dynamic relative gain array analytically

>> syms s
>> syms w real
>> G = [2.5*exp(-5*s)/(15*s+1)/(2*s+1), 5/(4*s+1); ...
       1/(3*s+1), -4*exp(-5*s)/(20*s+1)]  % Plant from Eqn. 3.31
>> RGA = G.*inv(G')                       % Lambda(s), Eqn. 3.32
>> DRGA = subs(RGA,'s',1j*w)              % Substitute s = jw
>> abs(subs(DRGA,w,1))                    % Magnitude of the DRGA matrix at w = 1 rad/s
ans =
    0.0379    0.9793
    0.9793    0.0379
You can see from the numerical values of the elements of $\Lambda$ at $\omega = 1$ rad/s that this system is not diagonally dominant at this important frequency. Fig. 3.15 validates this observation.
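The symbolic result can also be reproduced purely numerically by evaluating the plant at $s = j\omega$ and applying the elementwise formula to the complex matrix; a sketch in Python (function names are my own):

```python
import numpy as np

def G_eval(s):
    """The plant of Eqn. 3.31 evaluated at a complex frequency s."""
    return np.array([
        [2.5*np.exp(-5*s)/((15*s+1)*(2*s+1)),  5/(4*s+1)],
        [1/(3*s+1),                           -4*np.exp(-5*s)/(20*s+1)]])

def drga(s):
    """Dynamic RGA (Eqn. 3.32): elementwise product of G(s) with the
    (plain, non-conjugate) transpose of its inverse."""
    G = G_eval(s)
    return G * np.linalg.inv(G).T

L = np.abs(drga(1j*1.0))   # magnitudes at omega = 1 rad/s
print(L)                   # approx [[0.0379, 0.9793], [0.9793, 0.0379]]
```

Note that the complex RGA still has rows and columns summing to exactly 1, which is a useful sanity check on the computation.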
An alternative numerical way to generate the elements of the DRGA matrix as a function of frequency is to compute the Bode diagram for the multivariable system, extract the current gains, and then form the RGA from Eqn. 3.30.
Listing 3.2: Computing the dynamic relative gain array numerically as a function of $\omega$. See also Listing 3.1.
G = tf({2.5, 5; 1 -4}, ...
    {conv([15 1],[2,1]),[4 1]; [3 1],[20 1]}, ...
    'ioDelayMatrix',[5,0; 0,5])        % Plant from Eqn. 3.31
[~,~,K] = zpkdata(G); sK = sign(K);    % get sign of the gains
[Mag,Ph,w] = bode(G);                  % Compute Bode diagram

DRGA = NaN(size(Mag));
for i=1:length(w)
    K = Mag(:,:,i).*sK;                % Gain, including signs
    DRGA(:,:,i) = K.*(inv(K))';        % Lambda(w), Eqn. 3.30
end
OnDiag = squeeze(DRGA(1,1,:)); OffDiag = squeeze(DRGA(1,2,:));
semilogx(w, OnDiag,'-',w,OffDiag,'--');  % See Fig. 3.15
The trends of the diagonal and off-diagonal elements of $\Lambda$ are plotted in Fig. 3.15. What is interesting about this example is that if we only consider the steady-state case, $\omega = 0$, then $\Lambda$ is diagonally dominant, and our pairing looks suitable. However what we should really be concentrating on are the values around the corner frequency at $\omega \approx 0.1$, where now the off-diagonal terms start to dominate.

Figure 3.15: The diagonal ($\Lambda_{1,1}$) and off-diagonal ($\Lambda_{1,2}$) elements of the RGA matrix as a function of frequency.
For comparison, Fig. 3.16 shows the dynamic RGA for the $(3 \times 3)$ OLMR distillation column model from Eqn. 3.25. In this case the off-diagonal elements do not dominate at any frequency.

Figure 3.16: The diagonal ($\Lambda_{11}$, $\Lambda_{22}$, $\Lambda_{33}$) and off-diagonal ($\Lambda_{12}$, $\Lambda_{13}$, $\Lambda_{23}$) elements of the $(3 \times 3)$ RGA matrix from Eqn. 3.25 as a function of frequency.
3.4 Regressing experimental data by curve fitting
In most cases of industrial importance, the models are never completely white-box, meaning that there will always be some parameters that need to be fitted, or at least fine-tuned, to experimental data. Generally the following three steps are necessary:
1. Collect N experimental independent $x_i$ and dependent $y_i$ data points.
2. Select a model structure, $\mathcal{M}(\theta)$, with a vector of parameters $\theta$ that are to be estimated.
3. Search for the parameters $\theta$ that optimise some sort of performance index, such as maximising the goodness of fit, or equivalently minimising the sum of squared residuals.
A common performance index or loss function is to select the adjustable parameters so that the sum of the differences squared between the raw data and your model predictions is minimised. This is the classical Gaussian least-squares minimisation problem. Formally we can write the performance index as,
$$J = \sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2 \tag{3.33}$$
where the $i$th model prediction, $\hat{y}_i$, is a function of the model, parameters and input data,
$$\hat{y}_i = f\left(\mathcal{M}(\theta), x_i\right) \tag{3.34}$$
We want to find the best set of parameters, i.e. the set $\theta^{\ast}$ that minimises $J$ in Eqn. 3.33,
$$\theta^{\ast} = \arg\min_{\theta} \left\{ \sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2 \right\} \tag{3.35}$$
The "arg min" part of Eqn. 3.35 can be read as "the argument that minimises . . .", since we are not too interested in the actual value of the performance function at the optimum, $J^{\ast}$, but rather the value of the parameters at the optimum, $\theta^{\ast}$. This least-squares estimation is an intuitive performance index, and has certain attractive analytical properties. The German mathematician Gauss is credited with popularising this approach, and indeed with using it for analysing data taken while surveying Germany and making astronomical observations. Note however that other objectives than Eqn. 3.33 are possible, such as minimising the sum of the absolute values of the deviations, or minimising the single maximum deviation. Both these latter approaches have seen a resurgence of interest over the last few years, since computer tools have enabled investigators to by-pass the difficulty in analysis.
There are many ways to search for the parameters, and any statistical text book will cover these. However the least squares approach is popular because it is simple, requires few prior assumptions, and typically gives good results.
3.4.1 Polynomial regression
An obvious way to fit smooth curves to experimental data is to find a polynomial that passes through the cloud of experimental data. This is known as polynomial regression. The $n$th order polynomial to be regressed is
$$\mathcal{M}(\theta): \quad \hat{y} = \theta_n x^n + \theta_{n-1} x^{n-1} + \cdots + \theta_1 x + \theta_0 \tag{3.36}$$
where we try different values of the $(n+1)$ parameters in the vector $\theta$ until the difference between the calculated dependent variable $\hat{y}$ and the actual measured value $y$ is small. Mathematically the vector of parameters $\theta$ is obtained by solving a least squares optimisation problem.
Given a set of parameters, we can compute the $i$th model prediction using
$$\hat{y}_i = \begin{bmatrix} x^n & x^{n-1} & \cdots & x & 1 \end{bmatrix} \begin{bmatrix} \theta_n \\ \theta_{n-1} \\ \vdots \\ \theta_0 \end{bmatrix} \tag{3.37}$$
or, written in compact matrix notation,
$$\hat{y}_i = \mathbf{x}_i \theta$$
where $\mathbf{x}_i$ is the data row vector for the $i$th observation. If all $N$ observations are stacked vertically together, we obtain the matrix system
$$\begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_N \end{bmatrix} = \underbrace{\begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ x_2^n & x_2^{n-1} & \cdots & x_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ x_N^n & x_N^{n-1} & \cdots & x_N & 1 \end{bmatrix}}_{\mathbf{X}} \theta \tag{3.38}$$
or $\hat{\mathbf{y}} = \mathbf{X}\theta$ in a more compact matrix notation. The matrix comprised of the stacked rows of measured independent data, $\mathbf{X}$, in Eqn. 3.38 is called the Vandermonde matrix or data matrix, and can be easily constructed in MATLAB using the vander command, although it is well known that this matrix can become very ill-conditioned.
We can search for the parameters in a number of ways, but minimising the squared error is a common and easy approach. In this case our objective function that we want to minimise (sometimes called the cost function) is the summed squared error. In matrix notation, the error vector is
$$\epsilon \overset{\text{def}}{=} \mathbf{y} - \mathbf{X}\theta \tag{3.39}$$
and the scalar cost function then becomes
$$J = \epsilon^{\top}\epsilon = \left(\mathbf{y} - \mathbf{X}\theta\right)^{\top}\left(\mathbf{y} - \mathbf{X}\theta\right) = \mathbf{y}^{\top}\mathbf{y} - 2\mathbf{y}^{\top}\mathbf{X}\theta + \theta^{\top}\mathbf{X}^{\top}\mathbf{X}\theta$$
We want to choose $\theta$ such that $J$ is minimised, i.e. a stationary point, thus we can set the partial derivatives of $J$ with respect to the parameters $\theta$ to zero,⁴
$$\frac{\partial J}{\partial \theta} = \mathbf{0} = -2\mathbf{X}^{\top}\mathbf{y} + 2\mathbf{X}^{\top}\mathbf{X}\theta \tag{3.40}$$
which we can now solve for $\theta$ as
$$\theta = \underbrace{\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}}_{\text{pseudo-inverse}} \mathbf{y} \tag{3.41}$$
As a consequence of the fact that we carefully chose our model structure in Eqn. 3.36 to be linear in the parameters, $\theta$, the solution given by Eqn. 3.41 is analytic and therefore very reliable,
⁴ Ogata, [148, p938], gives some helpful rules when using matrix-vector differentials.
and straightforward to implement, as opposed to nonlinear regression which requires iterative solution techniques.
This method of fitting a polynomial through experimental data is called polynomial least-squares regression. In general, the number of measurements must be greater than (or equal to) the number of parameters. Even with that proviso, the data matrix $\mathbf{X}^{\top}\mathbf{X}$ can get very ill-conditioned, and hence it becomes hard to invert in a satisfactory manner. This problem occurs more often when high order polynomials are used, or when you are trying to over-parameterise the problem. One solution to this problem is given in section 3.4.1.
The matrix $\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}$ is called the left pseudo-inverse of $\mathbf{X}$, and is sometimes denoted $\mathbf{X}^{+}$. The pseudo-inverse is a generalised inverse applicable even for non-square matrices and is discussed in [150, p928]. MATLAB can compute $\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}$ with the pseudo-inverse command, pinv, enabling the parameters to be evaluated simply by typing theta = pinv(X)*y, although it is more efficient to simply use the backslash command, theta = X\y.
An example of polynomial curve fitting using least-squares
Tabulated below is the density of air as a function of temperature. We wish to fit a smooth quadratic curve to this data.

Temperature [°C]       −100   −50    0      60     100    160    250    350
Air density [kg/m³]    1.98   1.53   1.30   1.067  0.946  0.815  0.675  0.566
Figure 3.17: The density of air as a function of temperature: experimental data and a fitted quadratic curve. (See also Fig. 3.18 following for a higher-order polynomial fit.)
To compute the three model parameters from the air density data we can run the following m-file.
Listing 3.3: Curve fitting using polynomial least-squares

T = [-100 -50 0 60 100, 160 250 350]';  % Temperature, T
rho = [1.98, 1.53, 1.30, 1.067 0.946, 0.815, 0.675, 0.566]';  % Air density, rho

X = vander(T);         % Vandermonde matrix
X = X(:,end-2:end);    % Keep only last 3 columns
theta = X\rho;         % Solve for theta, where rho = theta1*T^2 + theta2*T + theta3

Ti = linspace(-130,400);  % validate with a plot
rho_pred = polyval(theta,Ti);
plot(T,rho,'o',Ti,rho_pred,'r-')  % See Fig. 3.17.
The resulting curve is compared with the experimental data in Fig. 3.17.
In the example above, we constructed the Vandermonde data matrix explicitly and solved for the parameters using the pseudo-inverse. In practice, however, we would normally simply use the built-in MATLAB command polyfit, which essentially does the same thing.
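For readers working outside MATLAB, the same regression is equally brief in Python: numpy.polyfit plays the role of polyfit, and the explicit Vandermonde route gives identical parameters. A sketch, reusing the air density data above:

```python
import numpy as np

T = np.array([-100, -50, 0, 60, 100, 160, 250, 350], dtype=float)
rho = np.array([1.98, 1.53, 1.30, 1.067, 0.946, 0.815, 0.675, 0.566])

# Explicit Vandermonde matrix, keeping only the quadratic, linear and
# constant columns, solved by least squares (cf. Listing 3.3)
X = np.vander(T, 3)
theta, *_ = np.linalg.lstsq(X, rho, rcond=None)

# The built-in polynomial fit gives the same three parameters
theta_pf = np.polyfit(T, rho, 2)
print(np.allclose(theta, theta_pf))   # True
```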
Improvements to the least squares estimation algorithm

There are many extensions to this simple multi-linear regression algorithm that try to avoid the poor numerical properties of the scheme given above. If you consider pure computational speed, then solving a set of linear equations is about twice as fast as inverting a matrix. Therefore instead of Eqn. 3.41, the equivalent
$$ X^\top X\,\theta = X^\top y \tag{3.42} $$
is the preferred scheme to calculate θ. MATLAB's backslash operator, \, used in the example above follows this scheme internally. We write it as if we expect an inverse, but it actually solves a system of linear equations.
A numerical technique that uses singular value decomposition (SVD) gives better accuracy (for the same number of bits to represent the data) than just applying Eqn 3.41, [161, 176]. Singular values stem from the property that we can decompose any matrix T into the product of two orthogonal matrices (U, V) and a diagonal matrix Σ,
$$ T = U\Sigma V^\top \tag{3.43} $$
where due to the orthogonality,
$$ UU^\top = I = U^\top U \quad\text{and}\quad VV^\top = I = V^\top V \tag{3.44} $$
The diagonal matrix Σ consists of the square roots of the eigenvalues of $T^\top T$, which are called the singular values of T, and the number that differ significantly from zero is the rank of T. We
can make one further modification to Eqn 3.41 by adding a weighting matrix W. This makes the solution more general, but often in practice one sets W = I. Starting with Eqn 3.42 and including the weighting matrix $W^{-1} = G^\top G$,
$$ X^\top W^{-1} X\,\theta = X^\top W^{-1} y \tag{3.45} $$
$$ X^\top G^\top G X\,\theta = X^\top G^\top G\, y \tag{3.46} $$
Now defining $T \stackrel{\text{def}}{=} GX$ and $z \stackrel{\text{def}}{=} Gy$ gives
$$ T^\top T\,\theta = T^\top z \tag{3.47} $$
Now we take the singular value decomposition (SVD) of T in Eqn 3.47 giving
$$ \left(V\Sigma U^\top\right)\left(U\Sigma V^\top\right)\theta = V\Sigma U^\top z \tag{3.48} $$
$$ V\Sigma^2 V^\top\theta = V\Sigma U^\top z \tag{3.49} $$
3.4. REGRESSING EXPERIMENTAL DATA BY CURVE FITTING 111
Multiplying both sides by $V^\top$ and noting that Σ is diagonal we get,
$$ \Sigma^2 V^\top\theta = \Sigma U^\top z \tag{3.50} $$
$$ \underbrace{VV^\top}_{\text{identity}}\theta = V\Sigma^{-1}U^\top z \tag{3.51} $$
$$ \theta = \underbrace{V\Sigma^{-1}U^\top}_{\text{pseudo-inverse}} z \tag{3.52} $$
The inverse of Σ is simply the inverse of each of the individual diagonal (and non-zero) elements, since it is diagonal. The key point here is that we never needed to calculate the possibly ill-conditioned matrix $T^\top T$ anywhere in the algorithm. Consequently we should always use Eqn. 3.52 in preference to Eqn. 3.41 due to the more robust numerical properties, as shown in Listing 3.4.
Listing 3.4: Polynomial least-squares using singular value decomposition. This routine follows from, and provides an alternative to, Listing 3.3.
[U,S,V] = svd(X,0); % Use the economy-sized SVD, U*S*V' = X
theta2 = V*(S\U')*rho % theta = V*inv(S)*U'*rho, Eqn. 3.52.
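That the SVD route and the backslash (least-squares) route agree is easy to verify numerically. A small Python/NumPy sketch, added here as an illustration (not from the text):

```python
import numpy as np

# Quadratic data matrix for the air density example (last 3 Vandermonde columns)
T = np.array([-100, -50, 0, 60, 100, 160, 250, 350], dtype=float)
rho = np.array([1.98, 1.53, 1.30, 1.067, 0.946, 0.815, 0.675, 0.566])
X = np.column_stack([T**2, T, np.ones_like(T)])

# Economy-sized SVD solution, theta = V * inv(Sigma) * U' * rho  (Eqn. 3.52)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
theta_svd = Vt.T @ ((U.T @ rho) / s)

# Standard least-squares solution (the analogue of MATLAB's backslash)
theta_ls, *_ = np.linalg.lstsq(X, rho, rcond=None)
print(theta_svd, theta_ls)
```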
It does not take much for the troublesome $X^\top X$ matrix to get ill-conditioned. Consider the temperature/density data for air from Fig. 3.17, but in this case we will use temperature in degrees Kelvin (as opposed to Celsius), and we will take data over a slightly larger range of temperatures. This is a reasonable experimental request, and so we should expect to be able to fit a polynomial in much the same way we did in Listing 3.3.

However as shown in Fig. 3.18, we cannot reliably fit a fifth-order polynomial to this data set using standard least-squares, although we can reliably fit such a 5th-order polynomial using the SVD strategy of Eqn. 3.52. Note that MATLAB's polyfit uses the reliable SVD strategy.
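How bad the conditioning gets is easy to quantify: the condition number of the fifth-order Vandermonde matrix built from kelvin temperatures is astronomical, and forming $X^\top X$ squares it, far beyond double precision. A Python/NumPy illustration (not from the text; the temperature points are representative values in the range of Fig. 3.18):

```python
import numpy as np

# Temperatures in kelvin spanning roughly the range of Fig. 3.18
TK = np.array([173.0, 223.0, 273.0, 333.0, 373.0, 433.0, 523.0, 623.0])

# Fifth-order polynomial data matrix, columns [T^5, T^4, ..., 1]
X = np.vander(TK, 6)

# The columns are nearly parallel, so the condition number is enormous, and
# cond(X'X) = cond(X)^2 is far beyond double precision accuracy (~1e16)
print(f"cond(X) = {np.linalg.cond(X):.3e}")
```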
Figure 3.18: Fitting a high-order polynomial to some physical data, in this case the density of air as a function of temperature in degrees Kelvin. Experimental data, a fitted quintic curve using standard least squares, and one using the more robust singular value decomposition. (See also Fig. 3.17.)
A further refined regression technique is termed partial least squares (PLS), and is summarised in [74]. Typically these schemes decompose the data matrix X into other matrices that are ordered in some manner that roughly corresponds to information. The matrices with the least information (and typically the most noise) are omitted, and the subsequent regression uses only the remaining information.

The example on page 109 used the least squares method for estimating the parameters of an algebraic equation. However the procedure for estimating a dynamic equation remains the same. This will be demonstrated later in section 6.7.
Problem 3.3 Accuracy and numerical stability are very important when we are dealing with computed solutions. Suppose we wish to invert
$$ A = \begin{bmatrix} 1 & 1 \\ 1 & 1+\epsilon \end{bmatrix} $$
where ε is a small quantity (such as nearly the machine eps).

1. What is the (algebraic) inverse of A?
2. Obviously if ε = 0 we will have trouble inverting since A is singular, but what does the pseudo-inverse, $A^{+}$, converge to as ε → 0?
3.4.2 Nonlinear least-squares model identification

If the model equation is nonlinear in the parameters, then the solution procedure to find the optimum parameters requires a nonlinear optimiser rather than the relatively robust explicit relation given by Eqn. 3.41 or equivalent. Nonlinear optimisers are usually based on iterative schemes, additionally often require good initial parameter estimates, and even then may quite possibly fail to converge to a sensible solution. There are many algorithms for nonlinear optimisation problems including exhaustive search, the simplex method due to Nelder-Mead, and gradient methods.

MATLAB provides a simple unconstrained optimiser, fminsearch, which uses the Nelder-Mead simplex algorithm. A collection of more robust algorithms, including algorithms to optimise constrained systems possibly with integer constraints, is the OPTI toolbox.⁵
A nonlinear curve-fitting example

The following biochemical example illustrates the solution of a nonlinear algebraic optimisation problem. Many biological reactions are of the Michaelis-Menten form where the cell number y(t) is given at time t by the relation
$$ y = \frac{\alpha t}{\beta t + 1} \tag{3.53} $$
Suppose we have some experimental data for a particular reaction given as

time, t        0     0.2    0.5    1      1.5    2      2.5
cell count, y  0     1.2    2.1    2.43   2.52   2.58   2.62

where we wish to estimate the parameters α and β in Eqn 3.53 using nonlinear optimisation techniques. Note that the model equation (3.53) is nonlinear in the parameters. While it is impossible to write the equation in a form linear in the parameters as done in section 3.4, it is possible to linearise the equation (Lineweaver-Burk plot) by transforming it. However this will also transform and bias the errors around the parameters. This is not good practice as it introduces bias in the estimates, but it does give good starting estimates for the nonlinear estimator. Transforming Eqn 3.53 we get
$$ y = \frac{\alpha}{\beta} - \frac{1}{\beta}\,\frac{y}{t} $$
⁵ The OPTI toolbox is available from www.i2c2.aut.ac.nz
Thus plotting y against y/t should result in a straight line with an intercept of α/β and a slope of −1/β.

Assuming we have the experimental data stored as column vectors [t,y] in MATLAB, we could plot

t = [0, 0.2, 0.5:0.5:2.5]';
y = [0,1.2,2.1,2.43,2.52,2.62]';
% ignore the divide-by-zero at t=0
plot(y./t,y,'o-')
which should give an approximately straight line (ignoring the first and possibly the second points). To find the slope and intercept, we use the polyfit function to fit a line, and we get a slope of -0.2686 and an intercept of 2.98. This corresponds to α = 11.1 and β = 3.723.
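The linearisation step is easily reproduced outside MATLAB. The following Python/NumPy sketch (an added illustration, not from the text) drops the undefined t = 0 point, fits the straight line, and back-calculates the starting estimates:

```python
import numpy as np

t = np.array([0, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([0, 1.2, 2.1, 2.43, 2.52, 2.58, 2.62])

# y = alpha/beta - (1/beta)*(y/t): fit y against y/t, skipping the t = 0 point
x = y[1:] / t[1:]
slope, intercept = np.polyfit(x, y[1:], 1)

beta = -1.0 / slope        # slope = -1/beta
alpha = intercept * beta   # intercept = alpha/beta
print(slope, intercept, alpha, beta)
```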
Now we can refine these estimates using the fminsearch Nelder-Mead nonlinear optimiser. We must first construct a small objective function which evaluates the sum of the squared errors for a given set of trial parameters and the experimental data. This is succinctly coded as an anonymous function in Listing 3.5 following.

Since we require the experimental data variables y and t in the objective function, but we are not optimising with respect to them, we must pass these variables as additional parameters to the optimiser.
Listing 3.5: Curve fitting using a generic nonlinear optimiser
% Compute sum of squared errors given trial theta and (t, y) data.
sse = @(theta,t,y) sum((y-theta(1)*t./(theta(2)*t+1)).^2);

optns = optimset('Diagnostics','on','Tolx',1e-5);
theta = fminsearch(sse,[11.1 3.723]',optns,t,y) % Polish the estimates.

plot(t,y,'o',t,theta(1)*t./(theta(2)*t+1)) % Check final fit in Fig. 3.19.
Listing 3.5 returns the refined estimates for θ as
$$ \theta = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 12.05 \\ 4.10 \end{bmatrix} $$
and a comparison of both the experimental data and the model's predictions is given in Fig. 3.19.
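The same polish can be cross-checked with SciPy's least-squares curve fitter (a Python illustration added here, not from the text); starting from the linearised estimates, it lands on essentially the same optimum as fminsearch:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([0, 1.2, 2.1, 2.43, 2.52, 2.58, 2.62])

# Michaelis-Menten style model, y = alpha*t/(beta*t + 1), Eqn. 3.53
def model(t, alpha, beta):
    return alpha * t / (beta * t + 1)

# Start from the linearised estimates and polish by least squares
theta, _ = curve_fit(model, t, y, p0=[11.1, 3.723])
print(theta)   # approximately [12.05, 4.10]
```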
Curve fitting using the OPTI optimisation toolbox

The MATLAB OPTIMISATION toolbox contains more sophisticated routines specifically intended for least-squares curve fitting. Rather than write an objective function to compute the sum of squares and then subsequently call a generic optimiser as we did in Listing 3.5, we can solve the problem in a much more direct manner. The opti_lsqcurvefit function is the equivalent routine in the OPTI toolbox that solves least-squares regression problems.
Listing 3.6: Curve fitting using the OPTI optimisation toolbox. (Compare with Listing 3.5.)
>> theta0 = [11,3]'; % Initial guess theta0
>> F = @(x,t) x(1)*t./(x(2)*t+1); % f(theta,t) = theta1*t/(theta2*t + 1)
>> theta = opti_lsqcurvefit(F,theta0,t,y)
theta =
   12.0440
    4.1035

Figure 3.19: The fitted (dashed) and experimental data for a bio-chemical reaction. The asymptotic final cell count, α/β, is given by the dashed horizontal line.
Note that in Listing 3.6 we encapsulate the model in an anonymous function which is then passed to the least-squares curve fit routine, opti_lsqcurvefit.
Higher dimensional model fitting

Searching for parameters where we have made two independent measurements is just like searching when we have made one independent measurement, except that to plot the results we must resort to contour or three-dimensional plots. This makes the visualisation a little more difficult, but changes nothing in the general technique.

We will aim to fit a simple four-parameter, two-variable function to model the compressibility of water. Water, contrary to what we were taught in school, is compressible, although this is only noticeable at very high pressures. If we look up the physical properties for compressed water in steam tables,⁶ we will find something like Table 3.4. Fig. 3.20(a) graphically illustrates the strong temperature influence on the density compared with pressure. Note how the missing data, represented by NaNs in MATLAB, is ignored in the plot.

Table 3.4: Density of compressed water, ρ (×10³ kg/m³)

                        Temperature, deg C
Pressure (bar)    0.01     100      200      250      300      350      374.15
100               1.0050   0.9634   0.8711   0.8065   0.7158   -        -
221.2             1.0111   0.9690   0.8795   0.8183   0.7391   0.6120   -
500               1.0235   0.9804   0.8969   0.8425   0.7770   0.6930   0.6410
1000              1.0460   1.0000   0.9242   0.8772   0.8244   0.7610   0.7299
Our model relates the density of water as a function of temperature and pressure. The proposed
⁶ Rogers & Mayhew, Thermodynamic and Transport Properties of Fluids, 3rd Edition, (1980), p11
(a) True density of compressed water as a function of temperature and pressure.
(b) Model fit.

Figure 3.20: A 2D model for the density of compressed water. In Fig. 3.20(b), the markers show the experimental data points used to construct the 2-D model given as contour lines.
model structure is
$$ \rho = P^{k} \exp\!\left(a + \frac{b}{T} + \frac{c}{T^3}\right) \tag{3.54} $$
where the constants (model parameters θ) need to be determined. We define the parameters as
$$ \theta \stackrel{\text{def}}{=} \begin{bmatrix} a & b & c & k \end{bmatrix}^\top \tag{3.55} $$
Again the approach is to numerically minimise the sum of the squared errors using an optimiser such as fminsearch. We can check the results of the optimiser using a contour plot with the experimental data from Table 3.4 superimposed.

The script file in Listing 3.7 calls the minimiser which in turn calls the anonymous function J_rhowat which, given the experimental data and proposed parameters, returns the sum of squared errors. This particular problem is tricky, since it is difficult to know appropriate starting guesses for θ, and the missing data must be eliminated before the optimisation. Before embarking on the full nonlinear minimisation, I first try a linear fitting to obtain good starting estimates for θ. I also scale the parameters so that the optimiser deals with numbers around unity.
Listing 3.7: Fitting water density as a function of temperature and pressure
rhowat = @(a,P,T) P.^a(1).*exp(a(2) + ...
    1e2*a(3)./T + 1e7*a(4)./T.^3); % Assumed model rho = P^k*exp(a + b/T + c/T^3)
J_rhowat = @(a,P,T,rho) ...
    norm(reshape(rho,[],1)-reshape(rhowat(a,P,T),[],1)); % SSE (as a 2-norm)

T = [0.01, 100, 200, 250, 300, 350, 374.15]'; % Temp [deg C]
P = [100 221.2 500 1000]'; % Pressure [Bar]
rhof = 1.0e3*[1.0050, 0.9634, 0.8711, 0.8065, 0.7158, NaN, NaN; ...
    1.0111, 0.9690, 0.8795, 0.8183, 0.7391, 0.6120, NaN; ...
    1.0235, 0.9804, 0.8969, 0.8425, 0.7770, 0.6930, 0.6410; ...
    1.0460, 1.0000, 0.9242, 0.8772, 0.8244, 0.7610, 0.7299]; % density kg/m^3

[TT,PP] = meshgrid(T+273,P); Tv = TT(:); Pv = PP(:); % vectorise

% Scaled data matrix; columns ordered [k a b c] to match rhowat
A = [log(Pv), ones(size(Tv)), 1.0e2./Tv, 1.0e7./Tv.^3];
idx = isnan(rhof(:)); % find missing data points
rhofv = rhof(:); rhofv(idx) = []; % remove bad points
Tv(idx) = []; Pv(idx) = []; A(idx,:) = [];
theta = A\log(rhofv); % first (linear) estimate

% Do nonlinear fit
theta_opt = fminsearch(@(theta) J_rhowat(theta,Pv,Tv,rhofv),theta);

[Ti,Pi] = meshgrid([0.01:10:370]+273,[100:100:1000]); % Compare fit with data
rho_est = rhowat(theta_opt,Pi,Ti);
Fig. 3.20(b) compares contour plots of the density of water as a function of pressure and temperature derived from the experimental data and the fitted model. The markers show the location of the experimental data points. The solid contour lines give the predicted density of water compared with the contours derived from the experimental data (dashed lines). The optimum parameters found above are
$$ \theta = \begin{bmatrix} a \\ b \\ c \\ k \end{bmatrix} = \begin{bmatrix} 5.5383 \\ 5.8133\times 10^{2} \\ -1.8726\times 10^{7} \\ 0.0305 \end{bmatrix} \tag{3.56} $$
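The two-stage strategy of Listing 3.7 (a linear fit in log space for starting values, followed by a nonlinear polish) can be cross-checked in Python with NumPy/SciPy. This is an illustrative sketch added here, not from the text; it works with the unscaled parameters (a, b, c, k) directly, and the final polish should never be worse than the linear starting point:

```python
import numpy as np
from scipy.optimize import minimize

# Compressed-water density data from Table 3.4 (NaN marks missing entries)
T = np.array([0.01, 100, 200, 250, 300, 350, 374.15]) + 273.0   # [K]
P = np.array([100, 221.2, 500, 1000.0])                          # [bar]
rho = 1e3 * np.array([[1.0050, 0.9634, 0.8711, 0.8065, 0.7158, np.nan, np.nan],
                      [1.0111, 0.9690, 0.8795, 0.8183, 0.7391, 0.6120, np.nan],
                      [1.0235, 0.9804, 0.8969, 0.8425, 0.7770, 0.6930, 0.6410],
                      [1.0460, 1.0000, 0.9242, 0.8772, 0.8244, 0.7610, 0.7299]])

TT, PP = np.meshgrid(T, P)
ok = ~np.isnan(rho)
Tv, Pv, rv = TT[ok], PP[ok], rho[ok]

# Model rho = P^k * exp(a + b/T + c/T^3); log(rho) is linear in (a, b, c, k)
model = lambda th, P, T: P**th[3] * np.exp(th[0] + th[1]/T + th[2]/T**3)
A = np.column_stack([np.ones_like(Tv), 1/Tv, 1/Tv**3, np.log(Pv)])
th0, *_ = np.linalg.lstsq(A, np.log(rv), rcond=None)   # linear starting estimate

# Nonlinear polish of the sum of squared density errors
sse = lambda th: np.sum((rv - model(th, Pv, Tv))**2)
th = minimize(sse, th0, method='Nelder-Mead',
              options={'maxiter': 20000, 'xatol': 1e-8, 'fatol': 1e-8}).x
print(th)
```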
3.4.3 The confidence of the optimised parameters

Finding the optimum model parameters to fit the data is only part of the task. A potentially more difficult objective is to try to establish how precise these optimum parameters are, or what the confidence regions of the parameters are. Simply expressing the uncertainty of your model as parameter values ± some interval is a good first approximation, but it does neglect the interaction of the other parameters, the so-called correlation effect.

To establish the individual confidence limits of the parameters, we need to know the following:

1. The m optimum parameters θ from the n experimental observations. (For the nonlinear case, these can be found using a numerical optimiser.)
2. A linearised data matrix X centered about the optimised parameters.
3. The measurement noise $s_{Y_i}$. (This can be approximated from the sum of the squared error terms, or from prior knowledge.)
4. Some statistical parameters such as the t-statistic as a function of confidence level (90%, 95% etc.) and degrees of freedom. This is easily obtained from statistical tables or using the qt routine from the STIXBOX collection mentioned on page 3 for m-file implementations of some commonly used statistical functions.
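If SciPy is available, the t-statistic in step 4 can be looked up directly rather than via statistical tables or the STIXBOX qt routine. A small illustration (not from the text):

```python
from scipy.stats import t

# Two-sided 95% confidence with nu = 5 degrees of freedom
# (e.g. n = 7 observations and m = 2 parameters)
alpha = 0.05
nu = 5
t_crit = t.ppf(1 - alpha/2, nu)
print(round(t_crit, 4))   # 2.5706
```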
The confidence interval for parameter $\theta_i$ is therefore
$$ \theta_i \pm t_{(1-\alpha/2),\,\nu}\; s_{Y_i} \sqrt{\mathcal{P}_{i,i}} \tag{3.57} $$
where $t_{(1-\alpha/2),\,\nu}$ is the t-statistic evaluated at ν = n − m (number of observations less the number of parameters) degrees of freedom, and $\mathcal{P}_{i,i}$ are the diagonal elements of the covariance matrix P. The measurement noise (if not already approximately known) is
$$ s_{Y_i} \approx \sqrt{s_r^2} = \sqrt{\frac{\epsilon^\top \epsilon}{n-m}} \tag{3.58} $$
and the covariance matrix P is obtained from the data matrix. Note that if we assume we have perfect measurements and a perfect model, then we would expect that the sum of the errors will be exactly zero given the true parameters. This is consistent with Eqn 3.58 when ε = 0, although of course this occurs rarely in practice!
An example of establishing confidence intervals for a nonlinear model

This section was taken from [87, p198], but with some modifications and corrections. Himmelblau, [87, p198], gives some reaction data as:

Pressure, p   20      30      35      40      50      55      60
rate, r       0.068   0.0858  0.0939  0.0999  0.1130  0.1162  0.1190

and proposes a 2-parameter nonlinear model of the form
$$ r = \frac{\theta_0\, p}{1 + \theta_1\, p} \tag{3.59} $$
Now suppose that we use a nonlinear optimiser program to search for the parameters θ such that the sum of squared errors is minimised,
$$ \min_\theta \mathcal{J} = \sum_{i=1}^{n} \left(r_i - \hat r_i\right)^2 = (\mathbf{r} - \hat{\mathbf{r}})^\top (\mathbf{r} - \hat{\mathbf{r}}) \tag{3.60} $$
If we do this using say the same technique as in section 3.4.2, we will find that the optimum parameters are approximately
$$ \theta = \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 5.154\times 10^{-3} \\ 2.628\times 10^{-2} \end{bmatrix} \tag{3.61} $$
and the raw data and model predictions are given in Fig 3.21.
Now the linearised data matrix is an n by m matrix where n is the number of observations and m is the number of parameters. The partial derivatives of Eqn. 3.59 with respect to the parameters are
$$ \frac{\partial r}{\partial \theta_0} = \frac{p}{1+\theta_1 p}, \qquad \frac{\partial r}{\partial \theta_1} = -\frac{\theta_0\, p^2}{\left(1+\theta_1 p\right)^2} $$
and the data matrix X is defined as
$$ X_{i,j} \stackrel{\text{def}}{=} \frac{\partial r_i}{\partial \theta_j} \tag{3.62} $$
Thus each row of X contains the partial derivatives of that particular observation with respect to the parameters. The covariance matrix P is defined as
$$ \mathcal{P} \stackrel{\text{def}}{=} \left(X^\top X\right)^{-1} \tag{3.63} $$
and should be positive definite and symmetric, just as the variance should always be greater than, or equal to, zero. However in practice this requirement may not always hold owing to poor numerical conditioning. The diagonal elements are the variances we will use for the individual confidence limits. When using MATLAB, it is better to look at the singular values, or check the rank of $X^\top X$ before doing the inversion.

Listing 3.8 fits parameters to the nonlinear reaction rate model and also illustrates the uncertainties.
Listing 3.8: Parameter confidence limits for a nonlinear reaction rate model
p = [20,30,35,40,50,55,60]'; % Pressure, p
r = [0.068,0.0858,0.0939,0.0999,0.1130,0.1162,0.1190]'; % Reaction rate, r

Rxn_rate = @(x,p) x(1)*p./(1+x(2)*p); % r = theta0*p/(1+theta1*p)
theta = lsqcurvefit(Rxn_rate,[5e-3 2e-2]',p,r); % Refine estimate of parameters

nobs = length(p); mpar = length(theta); % # of observations & parameters

r_est = Rxn_rate(theta,p); % Predicted reaction rate
j = sum((r-r_est).^2); % sum of squared errors

d1 = 1+theta(2)*p;
X = [p./d1, -theta(1)*p.*p./d1.^2]; % Data gradient matrix, Eqn. 3.62.
C = inv(X'*X); % not numerically sound ??

% t-distribution statistics
pt = @(x,v) (x>=0).*(1-0.5*betainc(v./(v+x.^2),v/2,0.5)) + ...
    (x<0).*(0.5*betainc(v./(v+x.^2),v/2,0.5));
qt = @(P,v) fsolve(@(x) pt(x,v)-P,sqrt(2)*erfinv(2*P-1), ...
    optimset('display','off'));

alpha = 0.05; % confidence level, 95%
nu = nobs-mpar; % Degrees of freedom, nu = n-m
t = qt(1-alpha/2,nu); % t statistic (97.5 percentile) from approx function
sy = sqrt(j/nu); % approx measurement noise std
b_ci = t*sy*sqrt(diag(C)); % confidence intervals, Eqn. 3.57
This gives 95% confidence limits for this experiment⁷ as
$$ \begin{bmatrix} 5.1061\times 10^{-3} \\ 2.2007\times 10^{-2} \end{bmatrix} < \theta < \begin{bmatrix} 5.2019\times 10^{-3} \\ 3.0553\times 10^{-2} \end{bmatrix} \tag{3.64} $$
A plot of the raw data, the model with the optimum estimated parameters, and the associated error bounds using Eqn. 3.64 is given in Fig. 3.21. Note that while the model prediction is quite good, the region defined by the 95% confidence on the parameters is surprisingly large. The error bounds were generated by simulating the model with the lower and upper bounds of Eqn 3.64 respectively.

The confidence region (an ellipse in the two variable case) can also be plotted. This gives a deeper understanding of the parameter interactions, but the visualisation becomes near impossible as the number of parameters gets much larger than 3. The region for this example is evaluated in the following section.
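As a cross-check (an added Python illustration, not from the text), the optimum of Eqn. 3.61 and the individual confidence intervals of Eqn. 3.57 can be reproduced with SciPy, which supplies both the least-squares fit and the t-statistic directly:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as t_dist

# Himmelblau's reaction-rate data
p = np.array([20, 30, 35, 40, 50, 55, 60.0])
r = np.array([0.068, 0.0858, 0.0939, 0.0999, 0.1130, 0.1162, 0.1190])

model = lambda p, th0, th1: th0 * p / (1 + th1 * p)       # Eqn. 3.59
theta, _ = curve_fit(model, p, r, p0=[5e-3, 2e-2])        # ~ Eqn. 3.61

# Linearised data matrix, Eqn. 3.62, and covariance matrix, Eqn. 3.63
d1 = 1 + theta[1] * p
X = np.column_stack([p / d1, -theta[0] * p**2 / d1**2])
C = np.linalg.inv(X.T @ X)

nu = len(p) - len(theta)                                  # degrees of freedom
sy = np.sqrt(np.sum((r - model(p, *theta))**2) / nu)      # Eqn. 3.58
t_crit = t_dist.ppf(0.975, nu)                            # two-sided 95%
ci = t_crit * sy * np.sqrt(np.diag(C))                    # Eqn. 3.57
print(theta, ci)
```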
The confidence regions of the parameters

An approximate confidence region can be constructed by linearising the nonlinear model about the optimum parameters. The covariance of the parameter estimate is
$$ \text{cov}(\mathbf{b}) \approx \left(X^\top X\right)^{-1} \sigma_{Y_i}^2 \tag{3.65} $$
where b is the estimated parameter vector and θ is the true, (but unknown), parameter vector. Now the confidence region, an ellipse in two-parameter space (an ellipsoid in three-parameter space), is the region

⁷ These figures differ from the worked example in Himmelblau. I think he swapped two matrices by mistake, then continued with this error.
Figure 3.21: The experimental raw data, model, and associated 95% error bounds (dotted) for the optimised model fit.
that with a confidence limit of say 95% we are certain that the true parameter (θ) lies within.
$$ (\theta - \mathbf{b})^\top \left(X^\top X\right) (\theta - \mathbf{b}) = s_{Y_i}^2\, m\, F_{1-\alpha}[m,\, n-m] \tag{3.66} $$
where $F_{1-\alpha}[m, n-m]$ is the upper limit of the F distribution for m and n − m degrees of freedom. This value can be found in statistical tables. Note that the middle term of Eqn 3.66 is $X^\top X$ and not the inverse of this.

For the previous example, we can plot the ellipse about the optimised parameters. Fig 3.22 shows the 95% confidence ellipse about the optimised parameter and the individual confidence lengths superimposed from Eqn. 3.64. Since the ellipse is slanted, we note that the parameters are correlated with each other.
Figure 3.22: The approximate 95% confidence region in 2-parameter space with the individual confidence intervals superimposed.
The ellipse defines an area in which the probability that the true parameter vector lies is better than 95%. If we use the parameter points marked in Fig. 3.22, then the model prediction will differ from the prediction using the optimised parameter vector. This difference is shown as the errorbars in the lower plot of Fig. 3.22. Note that these errorbars are much smaller than the errorbars I obtained from considering only the confidence interval for each individual parameter given in Fig 3.21. Here, owing to the correlation (slanting of the ellipse in Fig 3.22), the parameters obtained when neglecting the correlation lie far outside the true 95% confidence region. Hence the bigger errorbars.
3.5 Numerical tools for modelling

Unfortunately there are no known methods of solving Eqn 1 (a nonlinear differential equation). This, of course, is very disappointing.
M. Braun, [33, p493]

Since differential equations are difficult to solve analytically, we typically need to resort to numerical methods as described in many texts such as [39, 41, 46, 94, 104, 169] and in addition to [201].

Differential equations are broadly separated into two families:

ODEs Ordinary differential equations (ODEs) where time is the only independent variable. The solution to ODEs can be described using standard plots where the dependent variables are plotted on the ordinate or y axis against time on the x axis.

PDEs Partial differential equations (PDEs) are where space (and perhaps time) are independent variables. To display the solution of PDEs, contour maps or 3D plots are required in general. Generally PDEs are much harder to solve than ODEs and are beyond the scope of this text.
The solution of nonlinear differential equations

One of the most important tools in the desk-top experimenter's collection is a good numerical integrator. We need numerical integrators whenever we are faced with a nonlinear differential equation which is intractable by any other means. MATLAB supplies a number of different numerical integrators optimised for different classes of problems. Typically the 4th/5th dual order Runge-Kutta implementation, ode45, is a good first choice.

To use the integrators, you must write a function subroutine that contains your differential system to be solved, which will be called by the numerical integration routine. Two files are needed:

1. The script file that calls the integrator (e.g. ode45) with the name of the function subroutine to be integrated and parameters including initial conditions, tolerance and time span.
2. The function subroutine (named in the script file) that calculates the derivatives as a function of the independent variable (usually time), and the dependent variable (often x).

We will demonstrate this procedure by constructing templates of the above two files to solve a small nonlinear differential equation. One can reuse this template as the base for solving other ODE systems.
ODE template example

Suppose our task is to investigate the response of the nonlinear pendulum system given in Eqn. 3.6 and compare this with the linearised version. The script file given in Listing 3.9 computes the angle θ = x₁(t) trajectory for both the nonlinear and linear approximation.
Listing 3.9: Comparing the dynamic response of a pendulum to the linear approximation
g = 9.81; l = 1; m = 5; T_c = 0; % constants
xdot = @(t,x) [x(2); -g/l*sin(x(1)) + T_c/m/l^2]; % dx/dt
x0 = [1,1]'; % start position x(0)
[t,x] = ode45(xdot,[0,10],x0); % Integrate nonlinear

% Now try linear system
A = [0 1; -g/l 0]; B = [0;0]; C = [1,0]; D = 0;
tv = linspace(0,10)';
[y2,t2] = lsim(ss(A,B,C,D),0*tv,tv,x0);

plot(t,x(:,1),t2,y2) % See Fig. 3.23.

Note how the trajectories for the nonlinear (solid) result as calculated from ode45 and the linear approximation calculated using lsim gradually diverge in Fig. 3.23.
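A quick cross-check of this divergence outside MATLAB (an added Python illustration, not from the text), using SciPy's solve_ivp for the nonlinear pendulum and the closed-form solution of the linearised equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0                     # same constants as Listing 3.9 (T_c = 0)
x0 = [1.0, 1.0]                      # theta(0) = 1 rad, thetadot(0) = 1 rad/s

# Nonlinear pendulum: thetaddot = -(g/l)*sin(theta)
f = lambda t, x: [x[1], -g/l * np.sin(x[0])]
t_eval = np.linspace(0, 10, 500)
sol = solve_ivp(f, [0, 10], x0, t_eval=t_eval, rtol=1e-8, atol=1e-10)

# The linearised pendulum thetaddot = -(g/l)*theta has the analytic solution
w = np.sqrt(g / l)
theta_lin = x0[0] * np.cos(w * t_eval) + x0[1] / w * np.sin(w * t_eval)

# The solutions drift apart as the slower nonlinear swing loses phase
drift = np.max(np.abs(sol.y[0] - theta_lin))
print(drift)
```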
Figure 3.23: The trajectory of the true nonlinear pendulum compared with the linearised approximation.
Problem 3.4 1. One of my favourite differential equations is called the Van der Pol equation;
$$ m\frac{d^2y}{dt^2} - b\left(1-y^2\right)\frac{dy}{dt} + ky = 0 $$
where m = 1, b = 3 and k = 2. Solve the equation for 0 < t < 20 using both ode23 and ode45. Sketch the solution. Which routine is better? You will need to create a function called vandepol(t,y) which contains the system of equations to be solved.
2. A damped forced pendulum is an example of a chaotic system. The dynamic equations are
$$ \frac{d\omega}{dt} = -\frac{\omega}{q} - \sin\theta + g\cos\phi \tag{3.67} $$
$$ \frac{d\theta}{dt} = \omega \tag{3.68} $$
$$ \frac{d\phi}{dt} = \omega_d \tag{3.69} $$
where there are three states, $x = \begin{bmatrix}\omega & \theta & \phi\end{bmatrix}^\top$, and three parameters, $\beta = \begin{bmatrix} q & g & \omega_d \end{bmatrix}^\top$. Simulate this system for 0 ≤ t ≤ 500 starting from $x = \begin{bmatrix} 1 & 2 & 0.3\end{bmatrix}^\top$ and using $\beta = \begin{bmatrix} 2 & 1.5 & 2/3 \end{bmatrix}^\top$ as parameters. Be warned, this may take some computer time!
Problem 3.5 A particularly nasty nonlinear chemical engineering type model is found in [83]. This is a model of two CSTRs⁸ in series, cooled with a co-current cooling coil. The model equations
are:
$$ \dot C_{a1} = \frac{q}{V_1}\left(C_{af} - C_{a1}\right) - k_0\, C_{a1} \exp\!\left(-\frac{E}{RT_1}\right) $$
$$ \dot T_1 = \frac{q}{V_1}\left(T_f - T_1\right) + \frac{(-\Delta H)\,k_0\, C_{a1}}{\rho\, c_p}\exp\!\left(-\frac{E}{RT_1}\right) + \frac{\rho_c\, c_{pc}}{\rho\, c_p\, V_1}\, q_c \left[1 - \exp\!\left(-\frac{hA_1}{q_c\, \rho_c\, c_{pc}}\right)\right]\left(T_{cf} - T_1\right) $$
$$ \dot C_{a2} = \frac{q}{V_2}\left(C_{a1} - C_{a2}\right) - k_0\, C_{a2} \exp\!\left(-\frac{E}{RT_2}\right) $$
$$ \dot T_2 = \frac{q}{V_2}\left(T_1 - T_2\right) + \frac{(-\Delta H)\,k_0\, C_{a2}}{\rho\, c_p}\exp\!\left(-\frac{E}{RT_2}\right) + \frac{\rho_c\, c_{pc}}{\rho\, c_p\, V_2}\, q_c \left[1 - \exp\!\left(-\frac{hA_2}{q_c\, \rho_c\, c_{pc}}\right)\right]\left[T_1 - T_2 + \exp\!\left(-\frac{hA_1}{q_c\, \rho_c\, c_{pc}}\right)\left(T_{cf} - T_1\right)\right] $$
where the state, control, disturbance and measured variables are defined as
$$ x \stackrel{\text{def}}{=} \begin{bmatrix} C_{a1} & T_1 & C_{a2} & T_2 \end{bmatrix}^\top, \quad u \stackrel{\text{def}}{=} q_c, \quad d \stackrel{\text{def}}{=} \begin{bmatrix} C_{af} & T_{cf}\end{bmatrix}^\top, \quad y \stackrel{\text{def}}{=} C_{a2} $$
and the parameter values are given in Table 3.5.
Table 3.5: The parameter values for the CSTR model

description                  variable        value          units
reactant flow                q               100            l/min
conc. of feed reactant A     C_af            1.0            mol/l
temp. of feed                T_f             350            K
temp. of cooling feed        T_cf            350            K
volume of vessels            V1 = V2         100            l
heat transfer coeff.         hA1 = hA2       1.67 x 10^5    J/min.K
pre-exp. constant            k0              7.2 x 10^10    min^-1
E/R                          E/R             1.0 x 10^4     K
reaction enthalpy            Delta H         -4.78 x 10^4   J/mol
fluid density                rho = rho_c     1.0            g/l
heat capacity                c_p = c_pc      0.239          J/g.K
Note that the model equations are nonlinear since the state variable T₁ (amongst others) appears nonlinearly in the differential equations. In addition, this model is referred to as control nonlinear, since the manipulated variable enters nonlinearly. Simulate the response of the concentration in the second tank (C_a2) to a step change of 10% in the cooling flow using the initial conditions given in Table 3.6.

You will need to write a .m file containing the model equations, and use the integrator ode45. Solve the system over a time scale of about 20 minutes. What is unusual about this response? How would you expect a linear system to behave in similar circumstances?
Problem 3.6 Develop a MATLAB simulation for the high purity distillation column model given in [140, pp459]. Verify the open loop responses given on page 463. Ensure that your simulation is easily expanded to accommodate a different number of trays, different relative volatility etc.

⁸ Continuously stirred tank reactors.
Table 3.6: The initial state and manipulated variables for the CSTR simulation

description          variable   value          units
coolant flow         q_c        99.06          l/min
conc. in vessel 1    C_a1       8.53 x 10^-2   mol/l
temp. in vessel 1    T_1        441.9          K
conc. in vessel 2    C_a2       5.0 x 10^-3    mol/l
temp. in vessel 2    T_2        450            K
Problem 3.7 A slight complexity to the simple ODE with specified initial conditions is where we still have the ODE system, but not all the initial conditions. In this case we may know some of the end conditions instead; thus the system is in principle able to be solved, but not in the standard manner, and we require a trial and error approach. These types of problems are called two point boundary problems and we see them arise in heat transfer problems and optimal control. They are so called two point boundary problems because for ODEs we have two boundaries for the independent variable: the start and the finish. Try to solve the following from [196, p178].

A fluid enters an immersed cooling coil 10 m long at 200 deg C and is required to leave at 40 deg C. The cooling medium is at 20 deg C. The heat balance is
$$ \frac{d^2T}{dx^2} = 0.01\,(T-20)^{1.4} \tag{3.70} $$
where the initial and final conditions are
$$ T\big|_{x=0} = 200, \qquad T\big|_{x=10} = 40 $$
To solve the system, we must rewrite Eqn 3.70 as a system of two first-order differential equations, and supply a guess for the missing initial condition $\left.\frac{dT}{dx}\right|_{x=0} = \,?$ We can then integrate the system until x = 10 and check that the final condition is as required. Solve the system.

Hint: Try $-47 < \left.\frac{dT}{dx}\right|_{x=0} < -42$.
3.5.1 Differential/Algebraic equation systems and algebraic loops
In many cases when deriving models from rst principals, we end up with a coupled system
of some differential equations, and some algebraic equations. This is often due to our practice
of writing down conservation type equations (typically dynamic), and constraint equations, say
thermodynamic, which are typically algebraic. Such a system in general can be written as
f ( x, x, t) = 0 (3.71)
and is termed a DAE or differential/algebraic equation system. If we assume that our model
is well posed, that is we have some hope of nding a solution, then we expect that we have
the same number of variables as equations, and it follows that some of the variables will not
appear in the differential part, but only in the algebraic part. We may be able to substitute those
algebraic variables out, such that we are left with only ODEs, which can then be solved using
normal backward difference formula numerical schemes such as rk2.m. However it is more
likely that we cannot extract the algebraic variables out, and thus we need special techniques to
solve these sorts of problems (Eqn. 3.71) as one.
In a now classic article titled Differential/Algebraic Equations are not ODEs, [158] gives some in-
sight into the problems. It turns out that even for linear DAE systems, the estimate of the error,
124 Modelling
which is typically derived from the difference between the predictor and the corrector, does not decrease as the step size is reduced. Since most normal ODE schemes are built around this assumption, they will fail. This state of affairs will only occur with certain DAE structures where the nilpotency or index is 3 or higher. Index 2 systems also tend to cause BDF schemes to fail, but can be tackled using other methods. The index problem is important because in many cases the index can be changed (preferably reduced to less than two) by rearranging the equations. Automated modelling tools tend, if left alone, to create overly complex models with a high index that are impossible to solve. However if we rearrange these equations, either by using an intelligent symbolic manipulator or by hand, we may be able to reduce the index.
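When the algebraic variables can be substituted out, the DAE reduces to a plain ODE that any standard scheme can integrate. The sketch below illustrates the idea on a made-up semi-explicit index-1 system (plain Python, not from this text): the constraint is solved for the algebraic variable z and substituted into the differential equation before integrating with explicit Euler.

```python
# Illustrative semi-explicit index-1 DAE:
#     dx/dt = -x + z,      0 = x + z - 1
# The constraint gives z = 1 - x, so substitution yields the pure ODE
#     dx/dt = 1 - 2x, which any normal ODE scheme can handle.

def solve_dae_by_substitution(x0, t_end, dt=1e-4):
    x = x0
    for _ in range(int(round(t_end / dt))):
        z = 1.0 - x          # algebraic variable from the constraint
        x += dt * (-x + z)   # explicit Euler step of dx/dt = -x + z
    return x, 1.0 - x

x, z = solve_dae_by_substitution(x0=0.0, t_end=5.0)
# Analytic solution: x(t) = 0.5 + (x0 - 0.5) exp(-2t), so x tends to 0.5
```

The constraint 0 = x + z − 1 is satisfied exactly at every step by construction, which is precisely what a general DAE solver has to work hard to achieve when no such substitution exists.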
The difficulties of algebraic loops
One reason that computer aided modelling tools such as SIMULINK have taken so long to mature is the problem of algebraic loops. This is a particular problem when the job of assembling the many different differential and algebraic equations in an efficient way is left to a computer.
Suppose we want to simulate a simple feedback process where the gain is a saturated function of
the output, say
K(y) = max(min(y, 5), 0.1)
If we simulate this in Simulink using
[Simulink diagram: a Step input drives the transfer function 1/(s + 1) to a Scope, with the output fed back through a Saturation block and a Product block realising the state-dependent gain K(y)]
we run into an Algebraic Loop error. MATLAB returns the following error diagnostic (or something
similar):
Warning: Block diagram sgainae contains 1 algebraic loop(s).
Found algebraic loop containing block(s):
sgainae/Gain1
sgainae/Saturation (discontinuity)
sgainae/Product (algebraic variable)
Discontinuities detected within algebraic loop(s), may have trouble solving
and the solution stalls.
The simplest way to avoid these types of problems is to insert some dynamics into the feedback loop. In the example above, we could place a transfer function with a unit gain and very small time constant in place of the algebraic gain in the feedback loop. While we desire the dynamics of the gain to be very fast so that it approximates the original algebraic gain, overly fast dynamics cause numerical stability problems, hence there is a trade-off.
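The same trick can be demonstrated outside Simulink. In the sketch below (an illustrative Python example with an assumed loop, not the Simulink model above), an algebraic loop v = φ(v) containing a saturated gain is broken by giving v fast first-order dynamics, ε·dv/dt = φ(v) − v. As ε shrinks the lag approximates the algebraic relation ever more closely, at the price of a stiffer integration.

```python
def sat(v, lo=0.1, hi=5.0):
    """Saturation element, as in K(y) = max(min(y, 5), 0.1)."""
    return max(lo, min(hi, v))

def solve_loop(phi, eps=0.01, dt=1e-3, t_end=1.0, v0=0.0):
    """Relax the algebraic loop v = phi(v) via eps * dv/dt = phi(v) - v."""
    v = v0
    for _ in range(int(round(t_end / dt))):
        v += dt * (phi(v) - v) / eps
    return v

# Example loop: v = 0.5 * sat(v) + 1 has the fixed point v = 2
v = solve_loop(lambda v: 0.5 * sat(v) + 1.0)
```

Note the ratio dt/eps governs stability of the explicit integration here, which is exactly the numerical trade-off mentioned above: the smaller the artificial time constant, the smaller the step size must be.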
3.6 Linearisation of nonlinear dynamic equations
While most practical engineering problems are nonlinear to some degree, it is often useful to be able to approximate the dynamics with a linear differential equation, which means we can apply linear control theory. While it is possible to design compensators for the nonlinear system directly, this is in general far more complicated, and one has far fewer reliable guidelines and recipes to follow. One particular version of nonlinear controller design called exact nonlinear feedback is discussed briefly in §8.6.
Common nonlinearities can be divided into two types; hard nonlinearities such as hysteresis, stiction, and dead zones, and soft nonlinearities such as the Arrhenius temperature dependency, power laws, etc. Hard nonlinearities are characterised by functions that are not differentiable, while soft nonlinearities are. Many strategies for compensating for nonlinearities are only applicable for systems which exhibit soft nonlinearities.
We can approximate soft nonlinearities by truncating a Taylor series approximation of the original system. The success of this approximation depends on whether the original function has any non-differentiable terms such as hysteresis or saturation elements, and how far we deviate from the point of linearisation. This section follows the notation of [76, §3.10].
Suppose we have the general nonlinear dynamic plant

ẋ = f(x(t), u(t))    (3.72)
y = g(x, u)          (3.73)

and we wish to find an approximate linear model about some operating point (xa, ua). A first-order Taylor series expansion of Eqns 3.72–3.73 is

ẋ ≈ f(xa, ua) + ∂f/∂x |x=xa,u=ua (x(t) − xa) + ∂f/∂u |x=xa,u=ua (u(t) − ua)    (3.74)
y ≈ g(xa, ua) + ∂g/∂x |x=xa,u=ua (x(t) − xa) + ∂g/∂u |x=xa,u=ua (u(t) − ua)    (3.75)

where the ijth element of the Jacobian matrix ∂f/∂x is ∂fi/∂xj. Note that for linear systems, the Jacobian is simply A in this notation, although some authors define the Jacobian as the transpose of this, or Aᵀ.
The linearised system Eqns 3.74–3.75 can be written as

ẋ = Ax(t) + Bu(t) + E    (3.76)
y = Cx(t) + Du(t) + F    (3.77)

where the constant matrices A, B, C and D are defined as

A = ∂f/∂x |x=xa,u=ua ,    B = ∂f/∂u |x=xa,u=ua
C = ∂g/∂x |x=xa,u=ua ,    D = ∂g/∂u |x=xa,u=ua

and the bias vectors E and F are

E = f(xa, ua) − ∂f/∂x |x=xa,u=ua xa − ∂f/∂u |x=xa,u=ua ua
F = g(xa, ua) − ∂g/∂x |x=xa,u=ua xa − ∂g/∂u |x=xa,u=ua ua
Note that Eqns 3.76–3.77 are almost in the standard state-space form, but they include the extra bias constant matrices E and F. It is possible, by introducing a dummy unit input, to convert this form into the standard state-space form,

ẋ = Ax + [ B  E ] [ u ; 1 ]    (3.78)
y = Cx + [ D  F ] [ u ; 1 ]    (3.79)

which we can then directly use in standard linear controller design routines such as lsim.
In summary, the linearisation requires one to construct matrices of partial derivatives with respect to state and input. In principle this can be automated using a symbolic manipulator provided the derivatives exist. In both the SYMBOLIC TOOLBOX and MAPLE we can use the jacobian command.
Example: Linearisation using the SYMBOLIC toolbox. Suppose we want to linearise the nonlinear system

ẋ = [ a x₁ exp(1 − b/x₂) ]
    [ c x₁ (x₂ − u²)     ]

at an operating point xa = [1, 2]ᵀ, ua = 20.
First we start by defining the nonlinear plant of interest,

>> syms x1 x2 a b c u real
>> x = [x1 x2]';
>> fx = [a*x(1)*exp(1-b/x(2)); c*x(1)*(x(2)-u^2)]
fx =
[ a*x1*exp(1-b/x2)]
[    c*x1*(x2-u^2)]
Now we are ready to construct the symbolic matrix of partial derivatives,

>> Avar = jacobian(fx,x)
Avar =
[ a*exp(1-b/x2), a*x1*b/x2^2*exp(1-b/x2)]
[    c*(x2-u^2),                    c*x1]

>> Bvar = jacobian(fx,u)
Bvar =
[         0]
[ -2*c*x1*u]

>> Evar = fx - Avar*x - Bvar*u
Evar =
[ -a*x1*b/x2*exp(1-b/x2)]
[  -c*x1*x2+2*u^2*c*x1]
We can substitute a specific set of constants, say, a = 5, b = 6, c = 7, and an operating point x = [1, 2]ᵀ, u = 20, into the symbolic matrices to obtain the numeric matrices.

>> A = subs(Avar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
A =
  1.0e+003 *
    0.0007    0.0010
   -2.7860    0.0070

>> B = subs(Bvar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
B =
     0
  -280

>> E = subs(Evar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
E =
  1.0e+003 *
   -0.0020
    5.5860
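As a cross-check on the symbolic result, the same Jacobians can be approximated numerically by central finite differences, which is essentially what linmod does below. The following sketch (illustrative Python, not part of the toolbox) reproduces A, B and E for the constants above.

```python
import numpy as np

a, b, c = 5.0, 6.0, 7.0

def f(x, u):
    """The nonlinear plant from the example above."""
    return np.array([a * x[0] * np.exp(1 - b / x[1]),
                     c * x[0] * (x[1] - u**2)])

def jacobian(fun, v0, h=1e-6):
    """Central finite-difference Jacobian of fun with respect to v0."""
    v0 = np.asarray(v0, dtype=float)
    cols = []
    for i in range(v0.size):
        dv = np.zeros_like(v0)
        dv[i] = h
        cols.append((fun(v0 + dv) - fun(v0 - dv)) / (2 * h))
    return np.column_stack(cols)

xa, ua = np.array([1.0, 2.0]), 20.0
A = jacobian(lambda x: f(x, ua), xa)          # d f / d x at (xa, ua)
B = jacobian(lambda u: f(xa, u[0]), [ua])     # d f / d u at (xa, ua)
E = f(xa, ua) - A @ xa - B @ [ua]             # bias vector
# A ~ [[0.677, 1.015], [-2786, 7]], B ~ [0, -280], E ~ [-2.03, 5586]
```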
At this point we could compare in simulation the linearised version with the full nonlinear model.
3.6.1 Linearising a nonlinear tank model
Suppose we wish to linearise the model of the level in a tank given on page 88 where the tank geometry is such that A = 1. The nonlinear dynamic system for the level h then simplifies to

dh/dt = −k√h + F_in
For a constant flow in, F_in^ss, the resulting steady-state level is given by noting that dh/dt = 0, and so

h_ss = (F_in^ss / k)²
We wish to linearise the system about this steady state, so we will actually work with deviation variables,

x ≜ h − h_ss ,    u ≜ F_in − F_in^ss
Now following Eqn. 3.74, we have

ḣ = ẋ = f(h_ss, F_in^ss) + (∂f/∂h)(h − h_ss) + (∂f/∂F_in)(F_in − F_in^ss)

where the first term f(h_ss, F_in^ss) = 0 at steady state.
Note that since ∂f/∂h = −k/(2√h_ss), our linearised model about (F_in^ss, h_ss) is

ẋ = −( k/(2√h_ss) ) x + u

which is in state-space form in terms of the deviation variables x and u.
We can compare the linearised model with the nonlinear model in Fig. 3.24 about a nominal input flow of F_in^ss = 2 and k = 1, giving a steady-state level of h_ss = 4. Note that we must subtract and add the relevant biases when using the linearised model.
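The comparison in Fig. 3.24 can be reproduced in a few lines. The sketch below (plain Python rather than Simulink, with an assumed small step in the feed flow) integrates both the nonlinear tank and the linearised deviation model about h_ss = 4 and checks that they agree closely near the operating point.

```python
import math

k, Fin_ss = 1.0, 2.0
h_ss = (Fin_ss / k)**2                 # steady-state level = 4
a = -k / (2.0 * math.sqrt(h_ss))       # linearised pole = -0.25

def simulate(dFin=0.2, dt=0.01, t_end=60.0):
    """Step the inflow by dFin and integrate both models (explicit Euler)."""
    h, x = h_ss, 0.0                   # nonlinear level, linear deviation
    for _ in range(int(round(t_end / dt))):
        h += dt * (-k * math.sqrt(h) + Fin_ss + dFin)   # nonlinear model
        x += dt * (a * x + dFin)                        # linearised model
    return h, h_ss + x                 # add the bias back to the linear level

h_nl, h_lin = simulate()
# Steady states: nonlinear ((Fin_ss + dFin)/k)^2 = 4.84, linearised 4.8
```

The small residual gap (4.84 versus 4.8) is exactly the truncation error of the first-order Taylor expansion, and it grows quickly for larger steps away from the linearisation point.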
An alternative, and much simpler, way to linearise a dynamic model is to use linmod, which extracts the Jacobians from a SIMULINK model by finite differences.
Listing 3.10: Using linmod to linearise an arbitrary SIMULINK module.
k = 1; Fin_ss = 2;       % Model parameters and steady-state input
hss = (Fin_ss/k)^2       % Steady-state level, h_ss
[A,B,C,D] = linmod('sNL_tank_linmod',hss,Fin_ss)  % Linearise
[Simulink diagram: a Repeating Sequence Stair inflow drives the nonlinear tank (integrator 1/s with gain k and sqrt feedback, giving level h) and, via a bias u − Fin_ss, a linear State-Space model whose output is re-biased by u + hss; both levels go to a Scope and a To Workspace block]
(a) A SIMULINK nonlinear tank model and linear state-space model for comparison
[Plot: level versus time over 0–50, comparing the linearised response with the full nonlinear response]
(b) Nonlinear and linearised model comparison
Figure 3.24: Comparing the linearised model with the full nonlinear tank level system
A =
   -0.2500     % Note −k/(2√hss) = −1/4
B =
    1
C =
    1
D =
    0
Quantifying the extent of the nonlinearity
It is important for the control designer to be able to quantify, if only approximately, the extent of the open-loop process nonlinearity. If, for example, the process was deemed only marginally nonlinear, then one would be confident that a controller designed assuming a linear underlying plant would perform satisfactorily. On the other hand, if the plant was strongly nonlinear, then such a linear controller may not be suitable. Ideally we would like to be able to compute a nonlinear metric, say from 0 (identically linear) to 1 (wildly nonlinear), that quantifies this idea simply by measuring the open-loop input/output data. This of course is a complex task, and is going to be a function of the type of input signals, the duration of the experiment, whether the plant is stable or unstable, and if feedback is present.
One such strategy is proposed in [82] and used to assess the suitability of linear control schemes in [177]. The idea is to compute the norm of the difference between the best linear approximation and the true nonlinear response for the worst input trajectory within a predetermined set of trajectories. This is a nested optimisation problem with a min-max construction. Clearly the choice of linear model family from which to choose the best one, and the choice of the set of input trajectories, will have an effect on the final computed nonlinear measure.
3.7 Summary
Stuart Kauffman in a New Scientist article⁹ succinctly paraphrased the basis of the scientific method. He said

    . . . state the variables, the laws linking the variables, and the initial and boundary conditions, and from these compute the forward trajectory of the biosphere.

In actual fact he was lamenting that this strategy, sometimes known as scientific determinism and voiced by Laplace in the early 19th century, was not always applicable to our world as we understand it today. Nonetheless, for our aim of modelling for control purposes, this philosophy has been, and will remain for some time I suspect, remarkably successful.
Modelling of dynamic systems is important in industry. These types of models can be used for design and/or control. Effective modelling is an art. It requires mathematical skill and engineering judgement. The scope and complexity of the model is dependent on the purpose of the model. For design studies a detailed model is usually required, although normally only steady-state models are needed at this stage. For control, simpler models (although dynamic) can be used since the feedback component of the controller will compensate for any model-plant mismatch. However most control schemes require dynamic models which are more complicated than the steady-state equivalent.
Many of the dynamic models used in chemical engineering applications are built from conservation laws with thermodynamic constraints. These are often expressed as ordinary differential equations where we equate the rate of accumulation of something (mass or energy) to the inputs, outputs and generation in a defined control volume. In addition there may be some restrictions on allowable states, which introduces some accompanying algebraic equations. Thus general dynamic models can be expressed as a combination of dynamic and algebraic relations

dx/dt = f(x, u, θ, t)    (3.80)
    0 = g(x, u, t)       (3.81)

which are termed DAEs (differential and algebraic equations), and special techniques have been developed to solve them efficiently. DAEs crop up frequently in automated computer modelling packages, and can be numerically difficult to solve. [133] provides more details in this field.
Steady-state models are a subset of the general dynamic model where the dynamic term, Eqn. 3.80, is set equal to zero. We now have an augmented problem of the form of Eqn. 3.81 only.

⁹ Stuart Kauffman, God of creativity, New Scientist, 10 May 2008, pp 52–53.

Linear dynamic models are useful in control because of the many simple design techniques that exist. These models can be written in the form

ẋ = Ax + Bu    (3.82)

where the model structure and parameters are linear and often time invariant.
Models are obtained, at least in part, by writing the governing equations of the process. If these are not known, experiments are needed to characterise fully the process. If experiments are used, the model is said to be heuristic. If the model has been obtained from detailed chemical and physical laws, then the model is said to be mechanistic. In practice, most models are a mixture of these two extremes. However, whatever model is used, it is still only an approximation to the real world. For this reason, the assumptions that are used in the model development must be clearly stated and understood before the model is used.
Chapter 4
The PID controller
4.1 Introduction
The PID controller is the most common general purpose controller in the chemical process industry today. It can be used as a stand-alone unit, or it can be part of a distributed computer control system. Over 30 years ago, PID controllers were pneumatic-mechanical devices, whereas nowadays they are implemented in software in electronic controllers. The electronic implementation is much more flexible than the pneumatic devices since the engineer can easily re-program it to change the configuration of things like alarm settings, tuning constants etc.
Once we have programmed the PID controller, and have constructed something, either in software or hardware, to control, we must tune the controller. This is surprisingly tricky to do successfully, but some general hints and guidelines will be presented in §4.6. Finally, the PID controller is not perfect for everything, and some examples of common pitfalls when using the PID are given in §4.8.
4.1.1 P, PI or PID control
For many industrial process control requirements, proportional-only control is unsatisfactory since the offset cannot be tolerated. Consequently the PI controller is probably the most common controller, and is adequate when the dynamics of the process are essentially first or damped second order. PID is satisfactory when the dynamics are second or higher order. However the derivative component can introduce problems if the measured signal is noisy. If the process has a large time delay (dead time), derivative action does not seem to help much. In fact PID control finds it difficult to control processes of this nature, and generally a more sophisticated controller such as a dead time compensator or a predictive controller is required. Processes that are highly underdamped with complex conjugate poles close to the imaginary axis are also difficult to control with a PID controller. Processes with this type of dynamic characteristic are rare in the chemical processing industries, although more common in mechanical or robotic systems comprising flexible structures.
My own impression is that derivative action is of limited use since industrial measurements such as level, pressure and temperature are usually very noisy. As a first step, I generally only use a PI controller; the integral part removes any offset, and the two-parameter tuning space is sufficiently small that one has a chance to find reasonable values for them.
4.2 The industrial PID algorithm
This section describes how to implement a simple continuous-time PID controller. We will start with the classical textbook algorithm, although for practical and implementation reasons industrially available PID controllers are never this simple, for reasons which will soon become clear. Further details on the subtlety of implementing a practically useful PID controller are described in [53] and the texts [13, 15, 98].
The purpose of the PID controller is to measure a process error and calculate a manipulated action u. Note that while u is referred to as an input to the process, it is the output of the controller.
The textbook non-interacting continuous PID controller follows the equation

u = Kc ( ε + (1/τi) ∫ ε dt + τd dε/dt )    (4.1)

where ε is the error, and the three tuning constants are the controller proportional gain, Kc, the integral time, τi, and the derivative time, τd, the latter two constants having units of time, often minutes for industrial controllers. Personally, I find it more intuitive to use the reciprocal of the integral time, 1/τi, which is called reset and has units of repeats per unit time. This nomenclature scheme has the advantage that no integral action equates to zero reset, rather than the cumbersome infinite integral time. Just to confuse you further, for some industrial controllers to turn off the integral component, rather than type in something like 99999, you just type in zero. It is rare in engineering that we get to approximate infinity with zero! Table 4.1 summarises these alternatives.
Table 4.1: Alternative PID tuning parameter conventions

  Parameter        symbol  units         |  alternative        symbol  units
  Gain             Kc      input/output  |  Proportional band  PB      %
  integral time    τi      seconds       |  reset              1/τi    seconds⁻¹
  derivative time  τd      seconds       |
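A discrete implementation of Eqn. 4.1 written in terms of reset makes the point concrete: with reset = 1/τi, setting reset to zero cleanly switches off the integral action, with no need to approximate an infinite integral time. The sketch below is illustrative Python, not an industrial algorithm.

```python
def make_pid(Kc, reset, taud, dt):
    """Textbook PID of Eqn. 4.1, with the integral term written as reset = 1/tau_i."""
    state = {"integral": 0.0, "prev_err": None}

    def controller(err):
        state["integral"] += err * dt                      # rectangular integration
        d = 0.0 if state["prev_err"] is None else (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return Kc * (err + reset * state["integral"] + taud * d)

    return controller

# reset = 0 gives proportional(-derivative) action only, no 99999 needed
p_only = make_pid(Kc=2.0, reset=0.0, taud=0.0, dt=0.1)
```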
The Laplace transform of the ideal PID controller given in Eqn. 4.1 is

U(s)/E(s) = C(s) = Kc ( 1 + 1/(τi s) + τd s )    (4.2)
and the equivalent block diagram in parallel form is

[Block diagram: the error input feeds, in parallel, the integrator 1/(τi s), a direct path, and the differentiator τd s; their sum is multiplied by the gain Kc to give the controller output u]

where it is clear that the three terms are computed in parallel, which is why this form of the PID controller is sometimes also known as the parallel PID form.
We could rearrange Eqn. 4.2 in a more familiar numerator/denominator transfer function format,

C(s) = Kc ( τi τd s² + τi s + 1 ) / ( τi s )    (4.3)
where we can clearly see that the ideal PID controller is not proper, that is, the order of the
numerator (2), is larger than the order of the denominator (1) and that we have a pole at s = 0.
We shall see in section 4.2.1 that when we come to fabricate these controllers we must physically
have a proper transfer function, and so we will need to modify the ideal PID transfer function
slightly.
4.2.1 Implementing the derivative component
The textbook PID algorithm of Eqn. 4.2 includes a pure derivative term τd s. Such a term is not physically realisable, nor would we really want to implement it anyway, since abrupt changes in setpoint would cause extreme values in the manipulated variable.
There are several approximations that are used in commercial controllers to address this problem. Most schemes simply add a small factory-set lag term to the derivative term. So instead of simply τd s, we would use

derivative term = τd s / ( (τd/N) s + 1 )    (4.4)
where N is typically set to a large number somewhere between 10 and 100. Using this derivative term modifies the textbook PID transfer function of Eqn. 4.2 or Eqn. 4.3 to
C(s) = Kc ( 1 + 1/(τi s) + τd s / ((τd/N) s + 1) )                            (4.5)
     = Kc ( (τi τd + τi τd N) s² + (τi N + τd) s + N ) / ( τi s (τd s + N) )  (4.6)

which is now physically realisable. Note that as N → ∞, the practical PID controller of Eqn. 4.6 converges to the textbook version of Eqn. 4.3. The derivative-filtered PID controller of Eqn. 4.6 collapses to a standard PI controller if τd = 0.
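Both limiting behaviours are easy to check numerically by evaluating the frequency responses at a test frequency. Below is a small Python sketch with assumed tuning constants; nothing here depends on MATLAB.

```python
def C_ideal(s, Kc, ti, td):
    """Textbook PID, Eqn. 4.3 (improper)."""
    return Kc * (1 + 1 / (ti * s) + td * s)

def C_filtered(s, Kc, ti, td, N):
    """Derivative-filtered PID, Eqn. 4.5."""
    return Kc * (1 + 1 / (ti * s) + td * s / ((td / N) * s + 1))

Kc, ti, td = 2.0, 3.0, 1.0
s = 1j                      # evaluate at omega = 1 rad/s
gap_small_N = abs(C_filtered(s, Kc, ti, td, 10) - C_ideal(s, Kc, ti, td))
gap_large_N = abs(C_filtered(s, Kc, ti, td, 1e6) - C_ideal(s, Kc, ti, td))
# gap_large_N is tiny: the filtered controller approaches the ideal one as N grows
```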
We can fabricate in MATLAB the transfer function of this D-filtered PID controller with the pidstd command, which MATLAB refers to as a PID controller in standard form.
Listing 4.1: Constructing a transfer function of a PID controller

>> C = pidstd(1,2,3,100)  % Construct a standard form PID controller

Continuous-time PIDF controller in standard form:

  C(s) = Kp ( 1 + 1/(Ti*s) + Td*s/((Td/N)*s + 1) )

  with Kp = 1, Ti = 2, Td = 3, N = 100
The generated controller is of a special class known as a pidstd, but we can convert that to the more familiar transfer function with the now-overloaded tf command.
>> tf(C)

Transfer function:
101 s^2 + 33.83 s + 16.67
-------------------------
      s^2 + 33.33 s
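The coefficients MATLAB reports can be reproduced directly from Eqn. 4.6. With Kp = 1, Ti = 2, Td = 3, N = 100, normalising the polynomials so the denominator is monic gives exactly the values shown above. A quick Python check:

```python
Kc, ti, td, N = 1.0, 2.0, 3.0, 100.0

# Eqn. 4.6: C(s) = Kc [(ti*td + ti*td*N)s^2 + (ti*N + td)s + N] / [ti*s*(td*s + N)]
num = [Kc * (ti * td + ti * td * N), Kc * (ti * N + td), Kc * N]
den = [ti * td, ti * N, 0.0]

lead = den[0]                       # normalise to a monic denominator
num = [coef / lead for coef in num]
den = [coef / lead for coef in den]
# num -> [101, 33.83, 16.67], den -> [1, 33.33, 0], matching tf(C)
```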
A related MATLAB function, pid, constructs a slight modification of the above PID controller in what MATLAB refers to as parallel form,

C_parallel(s) = P + I/s + D s/(τf s + 1)

where in this case, as the derivative filter time constant, τf, approaches zero, the derivative term approaches the pure differentiator.
4.2.2 Variations of the PID algorithm
The textbook algorithm of the PID controller given in Eqn. 4.2 is sometimes known as the parallel or non-interacting form; however, due to historical reasons there is another form of the PID controller that is sometimes used. This is known as the series, cascade or interacting form,

G′c(s) = K′c ( 1 + 1/(τ′i s) ) ( 1 + τ′d s )    (4.7)

where the three series PID controller tuning constants, K′c, τ′i and τ′d, are related to, but distinct from, the original PID tuning constants, Kc, τi and τd. A block diagram of the series PID controller is given below.
[Block diagram: the error input passes through the PI section K′c (1 + 1/(τ′i s)) and then, in series, through the (1 + τ′d s) differentiator section to give the controller output u]
A series PID controller in the form of Eqn. 4.7 can always be represented in parallel form

Kc = K′c (τ′i + τ′d)/τ′i ,    τi = τ′i + τ′d ,    τd = τ′i τ′d/(τ′i + τ′d)    (4.8)

but not necessarily the reverse,

K′c = (Kc/2) ( 1 + √(1 − 4τd/τi) )    (4.9)
τ′i = (τi/2) ( 1 + √(1 − 4τd/τi) )    (4.10)
τ′d = (τi/2) ( 1 − √(1 − 4τd/τi) )    (4.11)

since a series form only exists if τi ≥ 4τd. For this reason, the series form of the PID controller is
less commonly used, although some argue that it is easier to tune [186]. Note that both controller forms are the same if the derivative component is not used. A more detailed discussion of the various industrial controllers and associated nomenclatures is given in [98, pp. 32–33].
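These conversion formulas are easy to verify with a round-trip numerical check. The Python sketch below picks parallel constants satisfying τi ≥ 4τd, converts to series form with Eqns 4.9–4.11, and recovers the original parallel constants with Eqn. 4.8.

```python
from math import sqrt

def parallel_to_series(Kc, ti, td):
    """Eqns 4.9-4.11; only valid when ti >= 4*td."""
    r = sqrt(1.0 - 4.0 * td / ti)
    return Kc / 2 * (1 + r), ti / 2 * (1 + r), ti / 2 * (1 - r)

def series_to_parallel(Kcs, tis, tds):
    """Eqn. 4.8."""
    return Kcs * (tis + tds) / tis, tis + tds, tis * tds / (tis + tds)

Kc, ti, td = 2.0, 8.0, 1.0            # note ti >= 4*td, so a series form exists
Kcs, tis, tds = parallel_to_series(Kc, ti, td)
Kc2, ti2, td2 = series_to_parallel(Kcs, tis, tds)   # round trip recovers (2, 8, 1)
```

At the boundary τi = 4τd the square root vanishes and the series integral and derivative times coincide at τi/2; for τi < 4τd the square root is imaginary and no real series form exists.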
[Figure: output and input trends over 0–200 seconds for two real-time experiments, sample time dt = 0.08]
(a) PI control of a flapper with K = 1 and reset = 0.2
(b) Integral-only control of a flapper.
Figure 4.1: Comparing PI and integral-only control for the real-time control of a noisy flapper plant with sampling time T = 0.08.
4.2.3 Integral only control
For some types of control, integral-only action is desired. Fig. 4.1(a) shows the controlled response of the laboratory flapper, (§1.4.1), controlled with a PI controller. The interesting characteristic of this plant's behaviour is the significant disturbances. Such disturbances make it both difficult to control and demand excessive manipulated variable action.
If however we drop the proportional term, and use only integral control, we obtain much the same response but with a far better behaved input signal, as shown in Fig. 4.1(b). This will decrease the wear on the actuator.
If you are using Eqn. 4.1 with the gain Kc set to zero, then no control at all will result, irrespective of the integral time. For this reason, controllers either have four parameters as opposed to the three in Eqn. 4.1, or we can follow the SIMULINK convention shown in Fig. 4.4.
4.3 Simulating a PID process in SIMULINK
SIMULINK is an ideal platform to rapidly simulate control problems. While it does not have the complete flexibility of raw MATLAB, this is more than compensated for by the ease of construction and good visual feedback necessary for rapid prototyping. Fig. 4.2 shows a SIMULINK block diagram for the continuous PID control of a third-order plant.
[Simulink diagram: Step → PID Controller → Plant 3/(3s³ + 5s² + 6s + 1.6) → output]
Figure 4.2: A SIMULINK block diagram of a PID controller and a third-order plant.
The PID controller block supplied as part of SIMULINK is slightly different from the classical description given in Eqn. 4.2. SIMULINK's continuous PID block uses the complete parallel skeleton,

Gc(s) = P + I/s + D ( Ns/(s + N) )    (4.12)

where we choose the three tuning constants, P, I and D, and optionally the filter coefficient N, which typically lies between 10 and 100. You can verify this by unmasking the PID controller block (via the options menu) to exhibit the internals as shown in Fig. 4.3.
Figure 4.3: An unmasked view of a PID controller block that comes supplied in SIMULINK. Note how the configuration differs from the classical form. See also Fig. 4.4.

[Simulink diagram: the input feeds a Proportional Gain P, an Integral Gain I followed by an Integrator 1/s, and a Derivative Gain D whose filter is built from a Filter Coefficient N in a feedback loop around an Integrator 1/s; the three paths are summed to form the output]
Note how the derivative component of the SIMULINK PID controller in Eqn. 4.12 follows the
realisable approximation given in Eqn. 4.4 by using a feedback loop with an integrator and gain
of N = 100 as a default.
Block diagrams of both the SIMULINK implementation and the classical PID scheme are compared in Fig. 4.4. Clearly the tuning constants for both schemes are related as follows:

P = Kc ,    I = Kc/τi ,    D = Kc τd    (4.13)

or alternatively

Kc = P ,    τi = Kc/I ,    τd = D/Kc    (4.14)
The SIMULINK scheme has the advantage of allowing integral-only control without modification.
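Eqns 4.13–4.14 amount to a two-line conversion. For example, the SIMULINK constants used later in this chapter, P = 2, I = 1.5, D = 2, correspond to Kc = 2, τi ≈ 1.33, τd = 1, as the following Python sketch confirms.

```python
def simulink_to_classical(P, I, D):
    """Eqn. 4.14: Kc = P, tau_i = Kc/I, tau_d = D/Kc."""
    Kc = P
    return Kc, Kc / I, D / Kc

def classical_to_simulink(Kc, ti, td):
    """Eqn. 4.13: P = Kc, I = Kc/tau_i, D = Kc*tau_d."""
    return Kc, Kc / ti, Kc * td

Kc, ti, td = simulink_to_classical(2.0, 1.5, 2.0)   # -> Kc = 2, ti = 1.33, td = 1
```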
[Block diagrams: the SIMULINK PID block sums the parallel paths P, I/s and Ds directly, while the classical block sums 1, 1/(τi s) and τd s and multiplies the result by the gain Kc]
Figure 4.4: Block diagram of PID controllers as implemented in SIMULINK (left) and classical textbooks (right).
4.3. SIMULATING A PID PROCESS IN SIMULINK 137
If you would rather use the classical textbook-style PID controller, then it is easy to modify the SIMULINK PID controller block. Fig. 4.5 shows a SIMULINK implementation which includes the filter on the derivative component following Eqn. 4.6. Since PID controllers are very common, you may like to mask the controller as illustrated in Fig. 4.6, add a suitable graphic, and add this to your SIMULINK library.
[Simulink diagram: the error is multiplied by the gain Kc and split into a direct path, a reset 1/τi path through an Integrator 1/s, and a filtered derivative τd s/((τd/N)s + 1) path, which are summed to give u]
Figure 4.5: A realisable continuous PID controller implemented in SIMULINK with a filter on the derivative action.
[Simulink diagram: Step → masked PID Controller → Plant 3/(3s³ + 5s² + 6s + 1.6) → output]
Figure 4.6: A SIMULINK block diagram of a classical PID controller as a mask
Note that to have a reverse acting controller, we either require all three constants to be negative,
or just add a negative gain to the output of the controller.
Reasonable SIMULINK controller tuning constants for this example are

P = 2, I = 1.5, D = 2

or in classical form

Kc = 2, τi = 1.33, τd = 1

which gives a step response as shown opposite.

[Plot: output y and setpoint (upper) and input u (lower) versus time over 0–10]
Because the derivative term in the PID controller acts on the error rather than the output, we see a large derivative kick in the controller output.¹ We can avoid this by using the PID block with anti-windup, or by modifying the PID block itself. Section 4.4.1 shows how this modification works.

¹ Note that I have re-scaled the axis in the simulation results.
4.4 Extensions to the PID algorithm
Industrial PID controllers are in fact considerably more complicated than the textbook formula
of Eqn. 4.1 would lead you to believe. Industrial PID controllers have typically between 15 and
25 parameters. The following describes some of the extra functionality one needs in an effective
commercial PID controller.
4.4.1 Avoiding derivative kick
If we implement the classical PID controller such as Eqn. 4.3 with significant derivative action, the input will jump excessively every time either the output or the setpoint changes abruptly. Under normal operating conditions, the output is unlikely to change rapidly, but during a setpoint change, the setpoint will naturally change abruptly, and this causes a large, though brief, spike in the derivative of the error. This spike is fed to the derivative part of the PID controller, and causes unpleasant transients in the manipulated variable. If left unmodified, this may cause excessive wear in the actuator. Industrial controllers and derivative kick are further covered in [179, p. 191].
It is clear that there is a problem with the controller giving a large kick when the setpoint abruptly changes. This is referred to as derivative kick and is due to the near-infinite derivative of the error at the instant the setpoint changes. One way to avoid problems of this nature is to use the derivative of the measurement y, rather than the derivative of the error ε = y⋆ − y. If we do this, the derivative kick is eliminated, and the input is much less excited. Both the classical and the anti-kick PID controller equations are compared below.
Normal:     u = Kc ( ε + (1/τi) ∫₀ᵗ ε dτ + τd dε/dt )    (4.15)

Anti-kick:  u = Kc ( ε + (1/τi) ∫₀ᵗ ε dτ − τd dy/dt )    (4.16)
The anti-derivative-kick controller is sometimes known as a PI-D controller, with the dash indicating that the PI part acts on the error, and the D part acts on the output.
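The difference between the two derivative terms is starkest at the instant of a setpoint step. The following illustrative Python sketch (PD action only, with assumed constants) computes both controller outputs for one discrete sample across a unit setpoint step while the measurement has not yet moved.

```python
def pd_outputs(Kc=1.0, taud=1.0, dt=0.1):
    """One sample across a unit setpoint step with the output y frozen at 0."""
    e_prev, e_now = 0.0, 1.0         # error jumps with the setpoint
    y_prev, y_now = 0.0, 0.0         # measurement has not moved yet
    u_on_error = Kc * (e_now + taud * (e_now - e_prev) / dt)    # cf. Eqn. 4.15
    u_on_output = Kc * (e_now - taud * (y_now - y_prev) / dt)   # cf. Eqn. 4.16
    return u_on_error, u_on_output

kick, no_kick = pd_outputs()         # -> (11.0, 1.0)
```

The derivative-on-error controller demands eleven times the proportional action for this one sample, which is exactly the spike visible in the left-hand input trend of Fig. 4.8.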
In SIMULINK you can build a PID controller with anti-kick by modifying the standard PID con-
troller block as shown in Fig. 4.7.
Fig. 4.8 compares the controlled performance for the third-order plant and tuning constants given previously in Fig. 4.2 where the derivative term uses the error (left-hand simulation), with the modification where the derivative term uses the measured variable (right-hand simulation).
Evident from Fig. 4.8 is that the PID controller using the measurement rather than the error behaves better, with much less controller action. Of course, the performance improvement is only evident when the setpoint is normally stationary, rather than in a trajectory-following problem. Stationary setpoints are the norm for industrial applications, but if, for example, we subjected the closed loop to a sine wave setpoint, then the PID that employed the error in the derivative term would perform better than the anti-kick version.
An electromagnetic balance arm
The electromagnetic balance arm described previously in §1.4.1 is extremely oscillatory as shown in Fig. 1.5(b). The overshoot is about 75%, which corresponds to a damping ratio of ζ ≈ 0.1
reset
1/taui
input/output
gain
Kc
derivative filter
s
taud/N.s+1
deriv
taud
Sum1 Sum
Pulse
Generator
Plant
3
(s+4)(s+1)(s+0.4)
Integrator
1
s
Figure 4.7: PID controller with anti-derivative kick. See also Fig. 4.8.
[Plots: output & setpoint (upper) and input (lower) trends over 0–40 for the normal PID (left) and the no-derivative-kick version (right)]
Figure 4.8: PID control of a plant given in Fig. 4.7. The derivative kick is evident in the input (lower trend of the left-hand simulation) of the standard PID controller. We can avoid this kick by using the derivative of the output rather than the error, as seen in the right-hand trend. Note that the output controlled performance is similar in both cases.
assuming a prototype second-order process model.
To stabilise a system with poles so close to the imaginary axis requires substantial derivative action. Without derivative action, the integral component needed to reduce the offset would cause instability. Unfortunately, however, the derivative action causes problems with noise and abrupt setpoint changes, as shown in Fig. 4.9(a).
[Plots: flapper angle and input trends from the electromagnetic balance experiments]
(a) PID control exhibiting significant derivative kick.
(b) PID control with anti-kick.
Figure 4.9: Illustrating the improvement of anti-derivative kick schemes for PID controllers when applied to the experimental electromagnetic balance.
The controller output, u(t), exhibits a kick every time the setpoint is changed. So instead of the normal PID controller used in Fig. 4.9(a), an anti-kick version is tried in Fig. 4.9(b). Clearly there are no spikes in the input signal when Eqn. 4.16 is used, and the controlled response is slightly improved.
Abrupt setpoint changes are not the only thing that can trip up a PID controller. Fig. 4.10 shows
another response from the electromagnetic balance, this time with even more derivative action.
At low levels the balance arm is very oscillatory, although this behaviour tends to disappear at
the higher levels owing to the nonlinear friction effects.
4.4.2 Input saturation and integral windup
Invariably in practical cases the actual manipulated variable value, u, demanded by the PID
controller is impossible to implement owing to physical constraints on the system. A control
valve, for example, cannot shut less than 0% open or open more than 100%. Normally clamps are placed on the manipulated variable to prevent unrealisable input demands occurring, such as

    u_min < u < u_max,   or   u = min(u_max, max(u_min, u))

or in other words, if the input u is larger than the maximum allowable input u_max, it will be reset to that maximum input, and similarly the input is saturated if it is less than the minimum
allowable limit. In addition to an absolute limit on the position of u, the manipulated variable cannot instantaneously change from one value to another. This can be expressed as a limit on the derivative of u, such as |du/dt| < c_d.
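These two constraints are easy to express in code. The following Python sketch (the function names and limit values are illustrative assumptions, not from the text) implements the position clamp and a simple rate limit:

```python
def clamp(u, u_min=0.0, u_max=100.0):
    """Saturate the manipulated variable: enforce u_min <= u <= u_max."""
    return min(u_max, max(u_min, u))

def rate_limit(u, u_prev, dt, c_d):
    """Enforce |du/dt| < c_d by limiting the change per sample to c_d*dt."""
    step = max(-c_d * dt, min(c_d * dt, u - u_prev))
    return u_prev + step
```

A real actuator imposes both limits regardless, so applying them in software simply keeps the controller's internal picture of u honest.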
[Experimental trend: setpoint & output (upper) and input (lower) versus time]
Figure 4.10: Derivative control and noise. Note the difference in response character once the setpoint is increased over 3000. This is due to nonlinear friction effects.
However the PID control algorithm presented so far assumes that the manipulated variable is unconstrained, so when the manipulated variable does saturate, the integral error term continues to grow without bound. When finally the plant catches up and the error is reduced, the integral term is still large, and the controller must work off this accumulated error. The result is an overly oscillatory controlled response.
This is known as integral windup. Historically, with analogue controllers, integral windup was not much of a problem since the pneumatic controllers had only a limited integral capacity. However, this limit is effectively infinite in a digital implementation.
There are a number of ways to prevent integral windup; these are discussed in [15, pp. 10-14] and more recently in [138, p. 60] and [31]. The easiest way is to check the manipulated variable position, and ignore the integral term if the manipulated variable is saturated.
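A minimal Python sketch of this conditional-integration idea (the argument names and limits are illustrative assumptions of this sketch):

```python
def pid_step(err, err_prev, integral, Kc, taui, taud, dt, u_min, u_max):
    """One PID update that freezes the integral term whenever the
    computed input saturates (conditional-integration anti-windup)."""
    candidate = integral + err * dt               # tentative integral update
    u = Kc * (err + candidate / taui + taud * (err - err_prev) / dt)
    if u_min < u < u_max:
        return u, candidate                       # unsaturated: accept the update
    u = u_max if u > u_max else u_min             # clamp the input ...
    return u, integral                            # ... and discard the update
```

While the input sits on a limit, the stored integral stops growing, so the controller recovers promptly once the error changes sign.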
An alternative modification is to compute the difference between the desired manipulated variable and the saturated version and to feed this value back to the integrator within the controller. This is known as anti-windup tracking and is shown in Fig. 4.11(a). When the manipulated input is not saturated, there is no change to the normal PID algorithm. Note that in this modified PID configuration we differentiate the measured value (not the error), we approximate the derivative term as discussed in section 4.2.1, and we can turn off the anti-windup component with the manual switch.
As an example (adapted from [19, Fig 8.9]), suppose we are to control an integrator plant, G(s) =
1/s, with tight limits on the manipulated variable, |u| < 0.1. Without anti-windup, the controller
output rapidly saturates, and the uncompensated response shown in Fig. 4.11(b) is very oscilla-
tory. However with anti-windup enabled, the controlled response is much improved.
[(a) Simulink block diagram of a PID controller with anti-windup tracking: the difference between the controller output and the actuator model output is fed back to the integrator via a manual switch. (b) Simulation trends of output & setpoint (upper) and input (lower): anti-windup off until t = 660, then on]
Figure 4.11: The advantages of using anti-windup are evident after t = 660.
4.5 Discrete PID controllers
To implement a PID controller as given in Eqn. 4.1 on a computer, one must first discretise or approximate the continuous controller equation. If the error at time t = kT is ε_k, then the continuous expression

    u(t) = K_c ( e(t) + (1/τ_i) ∫ e(t) dt + τ_d de(t)/dt )                              (4.17)

can be approximated with

    u_k = K_c ( ε_k + (T/τ_i) Σ_{j=0}^{k} ε_j + (τ_d/T)(ε_k − ε_{k−1}) )                (4.18)

where the middle term approximates the integral and the final term the differential.
The integral in Eqn. 4.17 is approximated in Eqn. 4.18 by the rectangular rule and the derivative
is approximated as a first-order difference, although other discretisations are possible. Normally
the sample time, T, used by the controller is much faster than the dominant time constants of the
process so the approximation is satisfactory and the discrete PID controller is, to all intents and
purposes, indistinguishable from a continuous controller.
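The position form of Eqn. 4.18 translates almost directly into code. A Python sketch (the text itself works in MATLAB), keeping the full error history to mirror the rectangular-rule sum:

```python
def pid_position(errors, Kc, taui, taud, T):
    """Position-form discrete PID, Eqn. 4.18.
    `errors` holds the sampled errors eps_0 .. eps_k; returns u_k."""
    eps_k = errors[-1]
    eps_km1 = errors[-2] if len(errors) > 1 else 0.0
    integral = (T / taui) * sum(errors)           # rectangular rule
    derivative = (taud / T) * (eps_k - eps_km1)   # first-order difference
    return Kc * (eps_k + integral + derivative)
```

In practice one keeps a running sum rather than the whole history; the explicit sum here simply mirrors the formula term by term.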
It is considerably more convenient to consider the discrete PID controller as a rational polynomial expression in z. Taking z-transforms of the discrete position form of the PID controller, Eqn. 4.18, we get

    U(z) = K_c ( E(z) + (T/τ_i)(1 + z⁻¹ + z⁻² + ···) E(z) + (τ_d/T)(1 − z⁻¹) E(z) )

where the second term is the integral and the third the differential. This shows that the transfer function of the PID controller is

    G_PID(z) = K_c ( 1 + (T/τ_i) · 1/(1 − z⁻¹) + (τ_d/T)(1 − z⁻¹) )                     (4.19)

             = K_c/(Tτ_i) · [ (T² + τ_i T + τ_d τ_i) − τ_i(T + 2τ_d) z⁻¹ + τ_i τ_d z⁻² ] / (1 − z⁻¹)    (4.20)
Eqn. 4.19 is the discrete approximation to Eqn. 4.2 and the approximation is reasonable provided the sample time, T, is sufficiently short. A block diagram of this discrete approximation is

[Block diagram of Eqn. 4.19: three parallel paths — a direct path, the integral path (T/τ_i)/(1 − z⁻¹) and the derivative path (τ_d/T)(1 − z⁻¹) — summed and scaled by K_c]

which we could further manipulate using block diagram algebra rules to use only delay elements to give

[Equivalent block diagram built from the gains K_c, T/τ_i and τ_d/T, two z⁻¹ delay elements and summing junctions]
There are other alternatives for a discrete PID controller depending on how we approximate the integral part. For example, we could use a forward difference,

    u_kT = u_(k−1)T + T y_(k−1)T          G_i(z) = T/(z − 1)

(which is not to be recommended due to stability problems), a backward difference,

    u_kT = u_(k−1)T + T y_kT              G_i(z) = Tz/(z − 1)

or the trapezoidal approximation,

    u_kT = u_(k−1)T + (T/2)(y_kT + y_(k−1)T)          G_i(z) = T(z + 1)/(2(z − 1))

where in each case the new integral is the old integral plus an add-on term.
We can insert any one of these alternatives to give the overall discrete z-domain PID controller, although the trapezoidal scheme

    G_PID(z) = K_c ( 1 + T(1 + z⁻¹)/(2τ_i(1 − z⁻¹)) + (τ_d/T)(1 − z⁻¹) )                (4.21)

             = K_c/(2Tτ_i) · [ (T² + 2τ_i T + 2τ_d τ_i) + (T² − 2τ_i T − 4τ_i τ_d) z⁻¹ + 2τ_i τ_d z⁻² ] / (1 − z⁻¹)    (4.22)

is the most accurate and therefore the preferred implementation.
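Since Eqn. 4.22 is just Eqn. 4.21 multiplied out over a common denominator, the two must agree at any z; a short Python sketch makes that check numerically:

```python
def gpid_trapezoid(z, Kc, taui, taud, T):
    """Trapezoidal PID in factored form, Eqn. 4.21."""
    zi = 1.0 / z
    return Kc * (1 + T * (1 + zi) / (2 * taui * (1 - zi)) + (taud / T) * (1 - zi))

def gpid_expanded(z, Kc, taui, taud, T):
    """The same controller multiplied out over a common denominator, Eqn. 4.22."""
    zi = 1.0 / z
    num = ((T**2 + 2*taui*T + 2*taud*taui)
           + (T**2 - 2*taui*T - 4*taui*taud) * zi
           + 2*taui*taud * zi**2)
    return Kc / (2 * T * taui) * num / (1 - zi)
```

Evaluating both forms at a few test points (away from z = 1, where the integrator pole sits) confirms the algebra.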
A SIMULINK discrete PID controller with sample time T using the trapezoidal approximation of Eqn. 4.21 is given in Fig. 4.12. Note that, as opposed to the continuous-time implementation of the PID controller, the derivative and integral gain values are now functions of the sample time, T. While continuous versions of PID controllers exist in SIMULINK, discrete versions simulate much faster.
4.5.1 Discretising continuous PID controllers
The easiest way to generate a discrete PID controller is to simply call the standard MATLAB function pidstd with a trailing argument to indicate that you want a discrete controller. Since the default
[Simulink block diagram: error → gain Kc; integral path with gain T/(2*taui) and discrete filter (1 + z⁻¹)/(1 − z⁻¹); derivative path with gain taud/T and discrete filter (1 − z⁻¹); all summed to give u]
Figure 4.12: A discrete PID controller in SIMULINK using a trapezoidal approximation for the integral with sample time T following Eqn. 4.21. This controller block is used in the simulation presented in Fig. 4.13.
discretisation strategy uses the forward Euler, it would be prudent to explicitly state the stable backward Euler option for both the integration and differentiation.
Listing 4.2: Constructing a discrete (filtered) PID controller

>> C = pidstd(1,2,3,100,0.1, ...
       'IFormula','BackwardEuler','DFormula','BackwardEuler')

Discrete-time PIDF controller in standard form:

  C(z) = Kp * [ 1 + (1/Ti)*(Ts*z/(z-1)) + Td/(Td/N + Ts*z/(z-1)) ]

  with Kp = 1, Ti = 2, Td = 3, N = 100, Ts = 0.1

>> tf(C)

Transfer function:
24.13 z^2 - 47.4 z + 23.31
--------------------------
  z^2 - 1.231 z + 0.2308

Sampling time: 0.1
Velocity form of the PID controller
Eqn. 4.18 is called the position form implementation of the PID controller. An alternative form is the velocity form, which is obtained by subtracting the previous control input u_{k−1} from the current input u_k to get

    Δu_k = u_k − u_{k−1} = K_c [ (1 + T/τ_i + τ_d/T) ε_k − (1 + 2τ_d/T) ε_{k−1} + (τ_d/T) ε_{k−2} ]    (4.23)
The velocity form in Eqn. 4.23 has three advantages over the position form (see [191, pp. 636-637]): it requires no initialisation (the computer does not need to know the current u), it has no integral windup problems, and it offers some protection against computer failure in that if the computer crashes, the input remains at the previous, presumably reasonable, value.
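A single velocity-form update (Eqn. 4.23) needs only the three most recent errors and the previous input; a Python sketch:

```python
def pid_velocity(u_prev, eps_k, eps_km1, eps_km2, Kc, taui, taud, T):
    """One velocity-form PID update, Eqn. 4.23: returns u_k = u_{k-1} + du_k."""
    du = Kc * ((1 + T/taui + taud/T) * eps_k
               - (1 + 2*taud/T) * eps_km1
               + (taud/T) * eps_km2)
    return u_prev + du
```

Note that with zero errors the input simply stays where it was, which is exactly the crash-protection property mentioned above.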
One drawback, however, is that it should not be used for P or PD controllers since the controller
is unable to maintain the reference value.
Fig. 4.13 shows a SIMULINK block diagram of a pressure control scheme on the headbox of a paper machine. The problem was that the pressure sensor was very slow, only delivering a new pressure reading every 5 seconds. The digital PID controller, however, was running with a frequency of 1 Hz. The control engineer in this application, faced with poor closed-loop performance, was concerned that the pressure sensor was too slow and therefore should be changed. In Fig. 4.13,
the plant is a continuous transfer function while the discrete PID controller runs with sample
period of T = 1 second, and the pressure sensor is modelled with a zeroth-order hold with T = 5
seconds. Consequently the simulation is a mixed continuous/discrete example, with more than
one discrete sample time.
[Simulink block diagram: Signal Generator → Discrete PID → headbox plant 3.5/(200s² + 30s + 1) → pressure sensor (zero-order hold, T = 5 s) → feedback; a Scope records the error and controller output]
Figure 4.13: Headbox control with a slow pressure transducer. The discrete PID controller was given in Fig. 4.12.
Fig. 4.14 shows the controlled results. Note how the pressure signal to the controller lags behind
the true pressure, but the controller still manages to control the plant.
4.5.2 Simulating a PID controlled response in Matlab
We can simulate a PID controlled plant in MATLAB by writing a small script file that calls a general PID controller function. The PID controller is written in the discrete velocity form following Eqn. 4.23 in the MATLAB function file pidctr.m shown in Listing 4.3.
Listing 4.3: A simple PID controller

function [u,e] = pidctr(err,u,dt,e,pidt)
% [u,e] = pidctr(err,u,dt,e,pidt)
% PID controller (velocity form)
% err = current error; u = u(t), current manipulated variable
% dt = T, sample time; e = row vector of past 3 errors
% pidt = [Kc, 1/taui, taud] = tuning constants
k = pidt(1); rs = pidt(2); td2 = pidt(3)/dt;
e = [err,e(1:2)];                        % error shift register
du = k*[1+rs*dt+td2, -1-2*td2, td2]*e';  % velocity form, Eqn. 4.23
u = du + u;                              % update control: u_new = u_old + du
return
This simple PID controller function is a very naive implementation of a PID controller without any of the necessary modifications common in robust commercial industrial controllers as described in section 4.4.
[Simulation trends: y & setpoint (upper) and controller output u (lower) versus time]
Figure 4.14: Headbox control with a slow pressure transducer measurement sample time of T = 5 while the control sample time is T = 1. Upper: the pressure setpoint (dotted), the actual pressure and the sampled-and-held pressure fed back to the PID controller. Lower: the controller output.
The plant to be controlled for this simulation is

    G_p(q⁻¹) = q⁻ᵈ · 1.2 q⁻¹ / (1 − 0.25 q⁻¹ − 0.5 q⁻²)                                 (4.24)

where the sample time is T = 2 seconds and the dead time is d = 3 sample time units, and the setpoint is a long-period square wave. For this simulation, we will try out the PID controller with tuning constants of K_c = 0.3, 1/τ_i = 0.2 and τ_d = 0.1. How I arrived at these tuning constants is discussed later in section 4.6.
The MATLAB simulation using the PID function from Listing 4.3 is given by the following script file:

a = [0.25,0.5]; b = 1.2; theta = [a b]'; dead = 3;  % plant G(q), Eqn. 4.24
dt = 2.0; t = dt*[0:300]'; yspt = square(t/40);     % time vector & setpoint
y = zeros(size(yspt)); u = zeros(size(y));          % initialise y(t) & u(t)
pid = [0.3, 0.2, 0.1]; % PID tuning constants: Kc = 0.3, 1/taui = 0.2, taud = 0.1
e = zeros(1,3);        % initialise error history
for i=3+dead:length(y)
    X = [y(i-1), y(i-2), u(i-1-dead)];      % collect i/o
    y(i) = X*theta;                         % system prediction
    err = yspt(i) - y(i);                   % current error
    [u(i),e] = pidctr(err,u(i-1),dt,e,pid); % PID controller from Listing 4.3
end % for
plot(t,yspt,'--',t,[y,u]) % plot results in Fig. 4.15
Figure 4.15 shows the controlled response of this simulation. Note how I have plotted the input (lower plot of Fig. 4.15), using the stairs function, as a series of horizontal lines to show that the input is actually a piecewise-constant zeroth-order hold for this discrete case.
[Simulation trends, "PID control": y & setpoint (upper) and input (lower) versus time]
Figure 4.15: The output (upper solid), setpoint (upper dashed) and discretised input (lower) of a PID controlled process.
4.5.3 Controller performance as a function of sample time
Given that the discrete PID controller is an approximation to the continuous controller, we must expect a deterioration in performance with increasing sample time. Our motivation to use coarse sampling times is to reduce the computational load. Fig. 4.16 compares the controlled response of the continuous plant,

    G_p(s) = (s + 3) / ( (s + 4)(τ²s² + 2τζs + 1) )

with τ = 4, ζ = 0.4, given the same continuous controller settings at different sampling rates. Note that the reset and derivative controller settings for the discrete controller are functions of the sample time, and must be adjusted accordingly. Fig. 4.16 shows that the controller performance improves as the sampling time is decreased and converges to the continuous case. However if the sampling time is too small, the discrete PID controller is then susceptible to numerical problems.
4.6 PID tuning methods
Tuning PID controllers is generally considered an art and is an active research topic both in academia and in industry. Typical chemical processing plants have hundreds or perhaps thousands of control loops, the majority of which are non-critical loops of the PID type, and all these loops require tuning. Årzén [8] asserts that it is a well known fact that many
[Four simulation panels comparing discrete and continuous PID control at sample times T_s = 4, 2, 1 and 0.1, each showing y (upper) and u (lower)]
Figure 4.16: The effect of varying sampling time, T, when using a discrete PID controller compared to a continuous PID controller. As T → 0, the discrete PID controller converges to the continuous PID controller.
control loops in (the) process industry are badly tuned, or run in manual mode. Supporting this claim, here is a summary of the rather surprising results that Ender, [64], found after investigating thousands of control loops over hundreds of plants²:

- More than 30% of the controllers are operating in manual.
- More than 60% of all installed loops produce less variance in manual than in automatic.
- The average loop costs $40,000 to design, instrument and install.
Note however that it is not only industry that seems unable to tune PID controllers since many
publications in the academic world also give mis-tuned PID controllers. This is most common
when comparing the PID with some other sort of more sophisticated (and therefore hopefully better performing) controller. So it appears that it would be worthwhile to look more closely at
the tuning of PID regulators.
There are two possibilities that we face when tuning PID controllers. One is that we have a model
of the plant to be controlled, perhaps as a transfer function, so then we need to establish a suitable
PID controller such that when combined with the plant, we obtain an acceptable response. The
second case is where we don't even have a model of the plant to be controlled, so our task is also
to identify (implicitly or explicitly) this plant model as well.
Tuning PID controllers can be performed in the time domain or in the frequency domain with the
controller either operating (closed loop), or disconnected (open loop), and the tuning parameter
calculations can be performed either online or offline. The online tuning technique is the central
component of automated self tuners, discussed more in chapter 7. Section 4.6.1 considers the two
classical time domain tuning methods, one closed loop, the other open loop.
² Mainly in the US.
4.6.1 Open loop tuning methods
Open loop tuning methods are where the feedback controller is disconnected and the experimenter excites the plant and measures the response. The key point here is that since the controller is now disconnected, the plant is clearly no longer strictly under control. If the loop is critical, then this test could be hazardous. Indeed, if the process is open-loop unstable, then you will be in trouble before you begin. Notwithstanding this, for many process control applications, open loop type experiments are usually quick to perform, and deliver informative results.
To obtain any information about a dynamic process, one must excite it in some manner. If the system is steady at setpoint, and remains so, then you have no information about how the process behaves. (However you do have good control, so why not quit while you are ahead?) The type of excitation is again a matter of choice. For time domain analysis, there are two common types of excitation signal, the step change and the impulse test, and for more sophisticated analysis one can try a random input test. Each of the three basic alternatives has advantages and disadvantages associated with it, and the choice is a matter for the practitioner.
Step change The step change method is where the experimenter abruptly changes the input to the process. For example, if you wanted to tune a level control of a buffer tank, you could sharply increase (or decrease) the flow into the tank. The controlled variable then slowly rises (or falls) to the new operating level. When I want a quick feel for a new process, I like to perform a step test; it quickly gives me a graphical indication of the degree of damping, overshoot, rise time and time constants better than any other technique.
Impulse test The impulse test method is where the input signal is abruptly changed to a new value, then immediately, equally abruptly, changed back to the old value. Essentially you are trying to physically alter the input such that it approximates a Dirac delta function. Technically both types of inputs are impossible to perform perfectly, although the step test is probably easier to approximate experimentally.
The impulse test has some advantages over the step test. First, since the input after the experiment is the same as before the experiment, the process should return to the same process value. This means that the time spent producing off-specification (off-spec) product is minimised. If the process does not return to the same operating point, then this indicates that the process probably contains an integrator. Secondly, the impulse test (if done perfectly) contains a wider range of excitation frequencies than the step test. An excitation signal with a wide frequency range gives more information about the process. However the impulse test requires slightly more complicated analysis.
Random input The random input technique assumes that the input is a random variable approximating white noise. Pure white noise has a wide (theoretically infinite) frequency range, and can therefore excite the process over a similarly wide frequency range. The step test, even a perfect step test, has a limited frequency range. The subsequent analysis of this type of data is much more tedious, though not really any more difficult, but it does require a data logger (rather than just a chart recorder) and a computer with simple regression software. Building on this type of process identification, where the input is assumed, within reason, arbitrary, are methods referred to as Time Series Analysis (TSA) or spectral analysis, both of which are dealt with in more detail in chapter 6.
Open-loop or process reaction curve tuning methods
There are various tuning strategies based on an open-loop step response. While they all follow the same basic idea, they differ slightly in how they extract the model parameters from the
[Sketch of an open-loop step response: a sigmoidal output of gain K, with a tangent drawn at the inflection point defining the apparent deadtime L and the time T on the time axis]
Figure 4.17: The parameters T and L to be graphically estimated for the open-loop tuning method relations given in Table 4.2.
recorded response, and also differ slightly in how they relate appropriate tuning constants to the model parameters. This section describes three alternatives: the classic Ziegler-Nichols open loop test, the Cohen-Coon test, and the Åström-Hägglund suggestion.
The classic way of open loop time domain tuning was first published in the early 1940s by Ziegler and Nichols³, and is further described in [150, pp. 596-597] and in [179, chap 13]. Their scheme requires you to apply a unit step to the open-loop process and record the output. From the response, you graphically estimate the two parameters T and L as shown in Fig. 4.17. Naturally if your response is not sigmoidal or S-shaped such as that sketched in Fig. 4.17, and exhibits overshoot or an integrator, then this tuning method is not applicable.
This method implicitly assumes the plant can be adequately approximated by a first-order transfer function with time delay,

    G_p ≈ K e^(−θs) / (τs + 1)                                                          (4.25)

where L is approximately the dead time θ, and T is the open loop process time constant τ. Once you have recorded the open-loop input/output data, and subsequently measured the times T and L, the PID tuning parameters can be obtained directly from Table 4.2.
A similar open loop step tuning strategy due to Cohen and Coon, published in the early 1950s, is where you record the time taken to reach 50% of the final output value, t₂, and the time taken to reach 63% of the final value, t₃. You then calculate the effective deadtime with

    θ = ( t₂ − ln(2) t₃ ) / ( 1 − ln(2) )

and time constant,

    τ = t₃ − θ

The open loop gain can be calculated by dividing the final change in output by the change in the input step.
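This two-point calculation is easy to verify: for a unit step on G = K e^(−θs)/(τs + 1), the output crosses 50% at t₂ = θ + τ ln 2 and 63.2% at t₃ = θ + τ, so the formulas should recover θ and τ exactly. A Python sketch:

```python
import math

def two_point_fit(t2, t3):
    """Cohen-Coon style two-point fit from the 50% and 63.2% response times."""
    theta = (t2 - math.log(2) * t3) / (1 - math.log(2))  # effective deadtime
    tau = t3 - theta                                     # time constant
    return theta, tau
```

For example, a process with θ = 2 and τ = 5 crosses 50% at t₂ = 2 + 5 ln 2 ≈ 5.47 s and 63.2% at t₃ = 7 s, and the fit returns those parameters back.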
³ Ziegler & Nichols actually published two methods, one open loop and one closed loop, [204]. However it is only the second, closed loop, method that is generally remembered today as the Ziegler-Nichols tuning method.
Once again, now that you have a model of the plant to be controlled in the form of Eqn. 4.25, you can use one of the alternative heuristics given in Table 4.2. The recommended range of values for the deadtime ratio for the Cohen-Coon values is 0.1 < θ/τ < 1. Also listed in Table 4.2 are the empirical suggestions from [16] known as AMIGO, or approximate M-constrained integral gain optimisation. These values have the same form as the Cohen-Coon suggestions but perform slightly better.
Table 4.2: The PID tuning parameters as a function of the open-loop model parameters K, τ and θ from Eqn. 4.25 as derived by Ziegler-Nichols (open loop method), Cohen and Coon, or alternatively the AMIGO rules from [16].

                    Controller   K_c                        τ_i                          τ_d
Ziegler-Nichols     P            τ/(Kθ)                     -                            -
(open loop)         PI           0.9 τ/(Kθ)                 θ/0.3                        -
                    PID          1.2 τ/(Kθ)                 2θ                           0.5θ

Cohen-Coon          P            (τ/(Kθ))(1 + θ/(3τ))       -                            -
                    PI           (τ/(Kθ))(0.9 + θ/(12τ))    θ(30 + 3θ/τ)/(9 + 20θ/τ)     -
                    PID          (τ/(Kθ))(4/3 + θ/(4τ))     θ(32 + 6θ/τ)/(13 + 8θ/τ)     4θ/(11 + 2θ/τ)

AMIGO               PID          (1/K)(0.2 + 0.45 τ/θ)      θ(0.4θ + 0.8τ)/(θ + 0.1τ)    0.5θτ/(0.3θ + τ)
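The Ziegler-Nichols open-loop column of Table 4.2 is simple enough to encode directly; a Python sketch (the dictionary layout is an assumption of this sketch, not from the text):

```python
def zn_open_loop(K, tau, theta, mode="PID"):
    """Ziegler-Nichols open-loop rules from Table 4.2.
    Returns (Kc, taui, taud); None marks a term the controller does not use."""
    a = tau / (K * theta)
    rules = {"P":   (a,       None,        None),
             "PI":  (0.9 * a, theta / 0.3, None),
             "PID": (1.2 * a, 2 * theta,   0.5 * theta)}
    return rules[mode]
```

So a plant fitted with K = 1, τ = 10 and θ = 2 would get a PID gain of K_c = 6 with τ_i = 4 and τ_d = 1.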
Fig. 4.18 illustrates the approximate first-order plus deadtime model fitted to a higher-order overdamped process using the two points at 50% and 63%. The subsequent controlled response using the values derived from Table 4.2 is given in Fig. 4.19.
[Step response plot marking the times t₂ and t₃ on the actual and fitted responses]
Figure 4.18: Fitting a first-order model with deadtime using the Cohen-Coon scheme. Note how the fitted model is a reasonable approximation to the actual response just using the two data points and gain. See Fig. 4.19 for the subsequent controlled response.
Conventional thought now considers that both the Ziegler-Nichols scheme in Table 4.2 and the Cohen-Coon scheme give controller constants that are too oscillatory, and hence other modified tuning parameters exist, [178, p. 329]. Problem 4.1 demonstrates this tuning method.
[Three simulation panels, "P only", "PI control" and "PID control", each showing y & setpoint (upper) and u (lower) versus time]
Figure 4.19: The closed loop response for a P, PI and PID controlled system using the Cohen-Coon strategy from Fig. 4.18.
Problem 4.1 Suppose you have a process that can be described by the transfer function

    G_p = K / ( (3s + 1)(6s + 1)(0.2s + 1) )

Evaluate the time domain response to a unit step change in input and graphically estimate the parameters L and T. Design a PI and PID controller for this process using Table 4.2.
Controller settings based on the open loop model
If we have gone to all the trouble of estimating a model of the process, then we could in principle use this model for controller design in a more formal manner than just relying on the suggestions given in Table 4.2. This is the thinking behind the Internal Model Control or IMC controller design strategy. The IMC controller is a very general controller, but if we restrict our attention to just controllers of the PID form, we can derive simple relations between the model parameters and appropriate controller settings.
The nice feature of the IMC strategy is that it provides the scope to adjust the tuning with a single parameter, the desired closed loop time constant, τ_c, something that is missing from the strategies given previously in Table 4.2. A suitable starting guess for the desired closed loop time constant is to set it equal to the dominant open loop time constant.
Table 4.3 gives the PID controller settings based on various common process models. For a more complete table containing a larger selection of transfer functions, consult [179, p. 308].
A simplification of this IMC idea, in an effort to make the tuning as effortless as possible, is given in [186].
Perhaps the easiest way to tune a plant when the transfer function is known is to use the MATLAB
function pidtune, or the GUI pidtool, as shown in Fig. 4.20.
Table 4.3: PID controller settings based on IMC for a small selection of common plants where the control engineer gets to choose a desired closed loop time constant, τ_c.

Plant                         K_c                            τ_i          τ_d
K/(τs + 1)                    τ/(K τ_c)                      τ            -
K/((τ₁s + 1)(τ₂s + 1))        (τ₁ + τ₂)/(K τ_c)              τ₁ + τ₂      τ₁τ₂/(τ₁ + τ₂)
K/s²                          1/(K τ_c²)                     -            2τ_c
K/(s(τs + 1))                 (2τ_c + τ)/(K τ_c²)            2τ_c + τ     2τ_c τ/(2τ_c + τ)
K e^(−θs)/(τs + 1)            (τ + θ/2)/(K(τ_c + θ/2))       τ + θ/2      τθ/(2τ + θ)
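For the common first-order-plus-deadtime row of Table 4.3, the IMC settings can be computed directly; a Python sketch:

```python
def imc_pid_fopdt(K, tau, theta, tauc):
    """IMC-PID settings for G = K*exp(-theta*s)/(tau*s + 1), Table 4.3.
    tauc is the single tuning knob: the desired closed loop time constant."""
    Kc = (tau + theta / 2) / (K * (tauc + theta / 2))
    taui = tau + theta / 2
    taud = tau * theta / (2 * tau + theta)
    return Kc, taui, taud
```

Increasing the single knob tauc slows the loop down and lowers the controller gain, which is precisely the tuning handle the heuristic tables lack.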
Figure 4.20: PID tuning of an arbitrary transfer function using the MATLAB GUI.
4.6. PID TUNING METHODS 155
4.6.2 Closed loop tuning methods
The main disadvantage of the open loop tuning methods is that they are performed with the controller switched to manual, i.e. leaving the output uncontrolled in open loop. This is often unreasonable for systems that are open-loop unstable, and impractical for plants where there is a large vested interest and the operating engineers are nervous. The Ziegler-Nichols continuous cycling method described next is a well-known closed loop tuning strategy used to address this problem, although a more recent single response strategy given later in section 4.6.3 is faster, safer, and easier to use.
Ziegler-Nichols continuous cycling method
The Ziegler-Nichols continuous cycling method is one of the best known closed loop tuning strategies. The controller is left on automatic, but the reset and derivative components are turned off. The controller gain is then gradually increased (or decreased) until the process output continuously cycles after a small step change or disturbance. At this point, the controller gain you have selected is the ultimate gain, K_u, and the observed period of oscillation is the ultimate period, P_u.
Ziegler and Nichols originally suggested PID tuning constants in 1942 as a function of the ultimate gain and ultimate period, as shown in the first three rows of Table 4.4. While these values give near-optimum responses for load changes, practical experience and theoretical considerations (i.e. [15, 29]) have shown that these tuning values tend to give responses that are too oscillatory for setpoint changes due to the small phase margin. For this reason, various people have subsequently modified the heuristics slightly, as listed in the remaining rows of Table 4.4, which is expanded from that given in [178, p. 318].
Table 4.4: Various alternative Ziegler-Nichols type PID tuning rules as a function of the ultimate gain, K_u, and ultimate period, P_u.

Response type                       K_c              τ_i           τ_d
Ziegler-Nichols      P              0.5 K_u          -             -
                     PI             0.45 K_u         P_u/1.2       -
                     PID            0.6 K_u          P_u/2         P_u/8
Modified ZN          No overshoot   0.2 K_u          P_u/2         P_u/2
                     Some overshoot 0.33 K_u         P_u/2         P_u/3
Tyreus-Luyben        PI             0.31 K_u         2.2 P_u       -
                     PID            0.45 K_u         2.2 P_u       P_u/6.3
Chien-Hrones-Reswick PI             0.47 K_u         P_u           -
Åström-Hägglund      PI             0.32 K_u         0.94 P_u      -
Specified phase-margin, φ_m  PID    K_u cos(φ_m)     f(τ_d)        Eqn. 4.56
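The classic Ziegler-Nichols rows of Table 4.4 in code form; a Python sketch (the dictionary layout is an assumption of this sketch):

```python
def zn_closed_loop(Ku, Pu, mode="PID"):
    """Ziegler-Nichols continuous-cycling rules, first rows of Table 4.4.
    Returns (Kc, taui, taud); None marks a term the controller does not use."""
    rules = {"P":   (0.5 * Ku,  None,     None),
             "PI":  (0.45 * Ku, Pu / 1.2, None),
             "PID": (0.6 * Ku,  Pu / 2,   Pu / 8)}
    return rules[mode]
```

For instance, a loop cycling at K_u = 2 with P_u = 10 s would receive K_c = 1.2, τ_i = 5 s and τ_d = 1.25 s.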
Experience has shown that the Chien-Hrones-Reswick values give an improved response on the original Ziegler-Nichols, but the Åström-Hägglund values tend, like the ZN, to be overly oscillatory. While the Tyreus-Luyben values deliver very sluggish responses, they exhibit very little overshoot and are favoured by process engineers for that reason.
Algorithm 4.1 summarises the ZN ultimate oscillation tuning procedure.
Algorithm 4.1 Ziegler-Nichols closed loop PID tuning
To tune a PID controller using the closed-loop Ziegler-Nichols method, do the following:
1. Connect a proportional controller to the plant to be controlled. I.e. turn the controller on automatic, but turn off the derivative and integral action. (If your controller uses integral time, you will need to set τ_i to the maximum allowable value.)
2. Choose a sensible trial controller gain, K_c, to start.
3. Disturb the process slightly and record the output.
4. If the response is unstable, decrease K_c and go back to step 3; otherwise increase K_c and repeat step 3 until the output response is a steady sinusoidal oscillation. Once a gain K_c has been found to give a stable oscillation, proceed to step 5.
5. Record the current gain, and measure the period of oscillation in time units (say seconds). These are the ultimate gain, K_u, and corresponding ultimate period, P_u.
6. Use Table 4.4 to establish the P, PI, or PID tuning constants.
7. Test the closed loop response with the new PID values. If the response is not satisfactory, further manual fine-tuning may be necessary.
This method has proved so popular that automatic tuning procedures based around this theory have been developed, as detailed in chapter 7. Despite the fact that this closed loop test is difficult to apply experimentally and gives only marginal tuning results in many cases, it is widely used and very well known. Much of the academic control literature uses the Z-N tuning method as a basis on which to compare more sophisticated schemes, often conveniently forgetting in the process that the Z-N scheme was really developed for load disturbances as opposed to setpoint changes. Finally, many practicing instrument engineers (who are the ones actually tuning the plant) know only one formal tuning method: this one.
Consequently it is interesting to read the following quote from a review of a textbook in process control⁴:

". . . The inclusion of tuning methods based on the control loop cycling (Ziegler-Nichols method) without a severe health warning to the user reveals a lack of control room experience on behalf of the author."
Ziegler-Nichols continuous cycling example
Finding the ultimate gain and frequency in practice requires a tedious trial-and-error approach. If, however, we already have a model of the process, say in the form of a transfer function, then establishing the critical frequency analytically is much easier, although we may still need to solve a nonlinear algebraic equation.
Suppose we have identified a model of our plant as

    G(s) = e^(−3s) / (6s² + 7s + 1)        (4.26)
⁴ Review of A Real-Time Approach to Process Control by Svrcek, Mahoney & Young, reviewed by Patrick Thorpe in The Chemical Engineer, January 2002, p30.
4.6. PID TUNING METHODS 157
and we want to control this plant using a PID controller. To use Ziegler-Nichols settings we need to establish the ultimate frequency, ωu, where the angle of G(s = iωu) is −π radians, or solve the nonlinear expression

    ∠G(iωu) = −π        (4.27)

for ωu. In the specific case given in Eqn. 4.26, we have
    ∠[ e^(−3iωu) / (1 − 6ωu² + 7iωu) ] = −3ωu − tan⁻¹( 7ωu / (1 − 6ωu²) ) = −π        (4.28)
which is a non-trivial function of the ultimate frequency, ωu. However it is easy to graph this relation as a function of ωu and look for the zero crossing such as shown in Fig. 4.21, or to use a numerical technique such as Newton-Raphson to establish that ωu ≈ 0.4839 rad/s, implying an ultimate period of Pu = 2π/ωu = 12.98 seconds per cycle.
Figure 4.21: Solving F(ω) = −3ω − tan⁻¹(7ω/(1 − 6ω²)) + π = 0 for the ultimate frequency. In this case the ultimate frequency is ωu ≈ 0.48 rad/s.
A quick way to numerically solve Eqn. 4.28 is to first define an anonymous function and then to use fzero to find the root.

>> F = @(w) -3*w - atan2(7*w,1-6*w.^2)+pi; % F(ω) = −3ω − tan⁻¹(7ω/(1−6ω²)) + π
>> ezplot(F,[0 1])   % See Fig. 4.21.
>> wu = fzero(F,0.1) % Solve F(ω) = 0 for ωu
wu =
    0.4839
Once we have found the critical frequency, ωu, we can establish the magnitude at this frequency by substituting s = iωu into the process transfer function, Eqn. 4.26,

    |G(iωu)| = |G(0.48i)| ≈ 0.2931

which gives a critical gain, Ku = 1/0.2931 = 3.412. An easy way to do this calculation in MATLAB is to simply use bode at a single frequency, as in

>> [M,ph] = bode(G,wu) % Compute G(s = jωu); should obtain φ = −180° and Ku
M =
    0.2931
ph =
 -180.0000
Now that we have both the ultimate gain and frequency we can use the classic Ziegler-Nichols rules in Table 4.4 to obtain our tuning constants and simulate the controlled process as shown in Fig. 4.22.
Figure 4.22: The step and load responses of a P (left), PI (middle) and PID (right) controlled process using the closed loop Ziegler-Nichols suggestions from Table 4.4.
The tuned response to both a setpoint change and a load disturbance at t = 100 of all three candidates shown in Fig. 4.22 is reasonable, but as expected the P response exhibits offset, and the PI is sluggish. We would expect a good response from the PID controller because the actual plant model structure (second order plus deadtime) is similar to the model structure assumed by Ziegler and Nichols, which is first order plus deadtime. While it could be argued that the step response is still too oscillatory, the response to the load disturbance is not too bad.
In the very common case of a first order plus deadtime structure, Eqn. 4.25, we can find the ultimate frequency by solving the nonlinear equation

    −θωu − tan⁻¹(τωu) + π = 0        (4.29)

for ωu, and then calculate the ultimate gain from a direct substitution,

    Ku = 1 / |G(iωu)| = √(1 + τ²ωu²) / K        (4.30)
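As a concrete illustration of Eqns. 4.29 and 4.30, consider the first order plus deadtime plant with K = τ = θ = 1, i.e. G(s) = e^(−s)/(s + 1). The following sketch (an illustrative Python translation; the book's own computations use MATLAB) solves Eqn. 4.29 for ωu and then evaluates Eqn. 4.30:

```python
import numpy as np
from scipy.optimize import brentq

K, tau, theta = 1.0, 1.0, 1.0   # G(s) = K exp(-theta*s)/(tau*s + 1)

# Eqn. 4.29: -theta*wu - atan(tau*wu) + pi = 0
f = lambda w: -theta*w - np.arctan(tau*w) + np.pi
wu = brentq(f, 1e-6, 10.0)            # ultimate frequency [rad/s]

# Eqn. 4.30: Ku = sqrt(1 + tau^2 wu^2)/K
Ku = np.sqrt(1.0 + (tau*wu)**2)/K     # ultimate gain
```

For this classic benchmark plant the result is ωu ≈ 2.03 rad/s and Ku ≈ 2.26, values which can then be fed directly into the Ziegler-Nichols rules of Table 4.4.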
The tedious part of the above procedure is generating the nonlinear equation in ωu for the specific plant structure of interest. We can however automate this equation generation for standard polynomial transfer functions with deadtime as shown in the listing below.

[num,den] = tfdata(G,'v'); % Or use: cell2mat(G.num)
iodelay = G.iodelay;
% Construct the equation ∠G(iω) = −π and solve for ω
F = @(w) angle(polyval(num,1i*w))-iodelay*w-angle(polyval(den,1i*w))+pi;
wc = fzero(F,0) % Now solve for ωc
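The same automation can be sketched outside MATLAB; here is an illustrative Python translation (not the book's code) using numpy and scipy, applied to the plant used in Listing 4.4, e^(−5s)/((6s+1)(s+1)):

```python
import numpy as np
from scipy.optimize import brentq

def ultimate_frequency(num, den, iodelay, wmax=10.0):
    """Solve angle(G(iw)) = -pi for a polynomial transfer function with deadtime."""
    F = lambda w: (np.angle(np.polyval(num, 1j*w)) - iodelay*w
                   - np.angle(np.polyval(den, 1j*w)) + np.pi)
    return brentq(F, 1e-6, wmax)      # bracket the first -180 degree crossing

# G(s) = exp(-5s)/((6s+1)(s+1))
num, den, L = [1.0], np.convolve([6.0, 1.0], [1.0, 1.0]), 5.0
wc = ultimate_frequency(num, den, L)  # critical frequency [rad/s]
Ku = 1.0/abs(np.polyval(num, 1j*wc)/np.polyval(den, 1j*wc))  # ultimate gain
```

The bracketing interval is an assumption: it must contain exactly one crossing of −180°, which holds here because the phase decreases monotonically for this plant.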
Rather than solve for the critical frequency algebraically, we could alternatively rely on the MATLAB margin routine which tries to extract the gain and phase margins and associated frequencies for arbitrary transfer functions. Listing 4.4 shows how we can tune a PID controller for an arbitrary transfer function, although note that because this strategy attempts to solve a nonlinear expression for the critical frequency using a numerical iterative scheme, this routine is not infallible.
Listing 4.4: Ziegler-Nichols PID tuning rules for an arbitrary transfer function

G = tf(1,conv([6 1],[1 1]),'iodelay',5); % Plant of interest G = e^(−5s)/((6s+1)(s+1))
[Gm,Pm,Wcg,Wcp] = margin(G);             % Establish critical points
Ku = Gm; Pu = 2*pi/Wcg;                  % Critical gain, Ku, & period, Pu
Kc = 0.6*Ku; taui = Pu/2; taud = Pu/8;   % PID tuning rules (Ziegler-Nichols)
Gc = tf(Kc*[taui*taud taui 1],[taui 0]); % PID controller Kc(τiτd s² + τi s + 1)/(τi s)
Note that while the use of the ultimate oscillation Ziegler-Nichols tuning strategy is generally discouraged by practitioners, it is the oscillation at near instability which is the chief concern, not the general idea of relating the tuning constants to the ultimate gain and frequency. If, for example, we already have a plant model in the form of a transfer function, perhaps derived from first principles or from system identification techniques, then we can by-pass the potentially hazardous oscillation step and compute the tuning constants directly as a function of ultimate gain and frequency.
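This model-based shortcut can be sketched end-to-end for the plant of Eqn. 4.26 (an illustrative Python translation; the text's computations use MATLAB): solve for ωu and Ku, then apply the classic Ziegler-Nichols rules of Table 4.4.

```python
import numpy as np
from scipy.optimize import brentq

# Plant of Eqn. 4.26: G(s) = exp(-3s)/(6s^2 + 7s + 1)
F = lambda w: -3.0*w - np.arctan2(7.0*w, 1.0 - 6.0*w**2) + np.pi  # Eqn. 4.28
wu = brentq(F, 0.1, 1.0)              # ultimate frequency, ~0.4839 rad/s
Ku = abs(1.0 - 6.0*wu**2 + 7.0j*wu)   # ultimate gain = 1/|G(i wu)|, ~3.41
Pu = 2.0*np.pi/wu                     # ultimate period, ~12.98 s

# Classic Ziegler-Nichols PID settings (Table 4.4)
Kc, taui, taud = 0.6*Ku, Pu/2.0, Pu/8.0
```

No plant was disturbed in the making of these tuning constants: the entire "experiment" happens inside the model.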
4.6.3 Closed loop single-test tuning methods
Despite the fact that the Ziegler-Nichols continuous cycling tuning method is performed in closed loop, the experiment is both tedious and dangerous. The Yuwana-Seborg tuning method described here, [203], retains the advantages of the ZN tuning method, but avoids many of the disadvantages. Given the attractions of this closed-loop reaction curve tuning methodology, many extensions have subsequently been proposed, some of which are summarised in [48, 99]. The following development is based on the modifications of [47] with the corrections noted by [193].
The Yuwana-Seborg (YS) tuning technique is based on the assumption that the plant transfer function, while unknown, can be approximated by the first order plus deadtime structure,

    Gm(s) = Km e^(−θm s) / (τm s + 1)        (4.31)

with three plant parameters: process gain Km, time constant τm, and deadtime θm.
Surrounding this process with a trial, but known, proportional-only controller Kc gives a closed loop transfer function between reference r(t) and output y(t) of

    Y(s)/R(s) = Kc Km e^(−θm s) / ( τm s + 1 + Kc Km e^(−θm s) )        (4.32)
A Padé approximation can be used to expand the non-polynomial term in the denominator, although various people have subsequently modified this aspect of the method, such as by using a different deadtime polynomial approximation. Using some approximation for the deadtime, we can approximate the closed loop response in Eqn. 4.32 with

    Gcl ≈ K e^(−θs) / (τ² s² + 2τζ s + 1)        (4.33)

and we can extract the model parameters in Eqn. 4.33 from a single closed loop control step test.
Figure 4.23: Typical response of a stable system to a P-controller. A setpoint change from r0 to r1 produces an underdamped output with first peak yp1, trough ym, second peak yp2, period P, final value yss, and a steady-state offset.
Suppose we have a trial proportional-only feedback controlled response with a known controller gain Kc. It does not really matter what this gain is so long as the response oscillates sufficiently, as shown in Fig. 4.23.

If we step change the reference setpoint from r0 to r1, we are likely to see an underdamped response such as that shown in Fig. 4.23. From this response we measure the initial output y0, the first peak yp1, the first trough ym1, the second peak yp2, and the associated times. Under these conditions we expect to see some offset given the proportional-only controlled loop, and the controller gain should be sufficiently high such that the response exhibits some underdamped (oscillatory) behaviour.
The final value of the output, y∞, is approximately

    y∞ = (yp1 yp2 − ym²) / (yp1 + yp2 − 2 ym)        (4.34)

or alternatively, if the experimental test is of sufficient duration, y∞ could simply be the last value of y collected.
The closed loop gain K is given by

    K = y∞ / (r1 − r0)        (4.35)
and the overshoot is given by

    H = (1/3) [ (yp1 − y∞)/y∞ + (y∞ − ym)/(yp1 − y∞) + (yp2 − y∞)/(y∞ − ym) ]        (4.36)
The deadtime is

    θ = 2 tp1 − tm        (4.37)

the shape factor is

    ζ = −ln(H) / √( π² + ln²(H) )        (4.38)
and the time constant is

    τ = (tm − tp1) √(1 − ζ²) / π        (4.39)
Now that we have fitted a closed loop model, we can compute the ultimate gain, Ku, and ultimate frequency, ωu, by solving the nonlinear expression

    −θωu − tan⁻¹( 2ζτωu / (1 − τ²ωu²) ) = −π        (4.40)

for ωu, perhaps using an iterative scheme starting with ωu = 0. It follows then that the ultimate gain is

    Ku = Kc ( 1 + 1/|Gcl(iωu)| )        (4.41)

where

    |Gcl(iωu)| = K / √( (1 − τ²ωu²)² + (2ζτωu)² )        (4.42)
Alternatively, of course, we could try to use margin to extract Ku and ωu.

At this point, given Ku and ωu, we can tune PID controllers using the ZN scheme directly following Table 4.4.
Alternatively we can derive the parameters for the assumed open loop plant model, Eqn. 4.31, as

    Km = |y∞ − y0| / ( Kc ( |r1 − r0| − |y∞ − y0| ) )        (4.43)

    τm = (1/ωu) √( Ku² Km² − 1 )        (4.44)

    θm = (1/ωu) ( π − tan⁻¹(τm ωu) )        (4.45)
and then subsequently we can tune it either using the ZN strategy given in Listing 4.4, or using the internal model controller strategy from Table 4.3, or perhaps using ITAE optimal tuning constants. The optimal ITAE tuning parameters for P, PI and PID controllers are derived from

    Kc = (A/Km) (θm/τm)^(−B),    1/τi = (1/(C τm)) (θm/τm)^(−D),    τd = E τm (θm/τm)^F        (4.46)
where the constants A, B, C, D, E, F are given in Table 4.5 for P, PI and PID controllers. We are now in a position to replace our trial Kc with the hopefully better controller tuning constants.

Table 4.5: Tuning parameters for P, PI, and PID controllers using an ITAE optimisation criterion, where the constants A through F are used in Eqn. 4.46.

    mode     A       B       C       D       E       F
    P        0.49    1.084
    PI       0.859   0.977   1.484   0.680
    PID      1.357   0.947   1.176   0.738   0.381   0.99
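As a worked numerical example of Eqn. 4.46, the sketch below (an illustrative Python translation, and assuming the ITAE relations take the reconstructed form Kc = (A/Km)(θm/τm)^(−B), 1/τi = (1/(C τm))(θm/τm)^(−D)) computes PI constants for a model with Km = 1.003, τm = 3.128 and θm = 2.24, the model identified later in this section:

```python
def itae_pi(Km, taum, thetam, A=0.859, B=0.977, C=1.484, D=0.680):
    # PI row of Table 4.5 applied to Eqn. 4.46 (exponent signs assumed):
    #   Kc = (A/Km)*(thetam/taum)**(-B)
    #   1/taui = (1/(C*taum))*(thetam/taum)**(-D)
    r = thetam/taum
    Kc = (A/Km)*r**(-B)
    taui = 1.0/((1.0/(C*taum))*r**(-D))
    return Kc, taui

Kc, taui = itae_pi(1.003, 3.128, 2.24)
```

The same pattern extends to the P and PID rows by swapping in the appropriate table constants.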
Note that the change in setpoint, |r1 − r0|, must be larger than the actual response |y∞ − y0| owing to the offset exhibited by a proportional only controller. If the system does not respond in this manner, then a negative process gain is implied, which in turn implies nonsensical imaginary time constants. Another often overlooked point is that if the controlled process response does not oscillate, then the trial gain Kc is simply not high enough. This gain should be increased, and the test repeated. If the process never oscillates, no matter how high the gain, then one can use an infinite gain controller, at least in theory.
Algorithm 4.2 summarises this single-test closed loop tuning strategy.
Algorithm 4.2 Yuwana-Seborg closed loop PID tuning
Tuning a PID controller using the closed-loop Yuwana-Seborg method is very similar to Algorithm 4.1, but avoids the trial and error search for the ultimate gain and period.
1. Connect a proportional only controller to the plant.
2. Choose a sensible trial controller gain, Kc, introduce a setpoint change, and record the output.
3. Check that the output is underdamped and shaped similar to Fig. 4.23, and if not, increase the trial controller gain, Kc, and repeat step 2.
4. From the recorded underdamped response, measure the peaks and troughs, and note the demanded setpoint change. You may find Listing 4.5 helpful here.
5. Compute the parameters of the closed loop plant and solve for the ultimate frequency, Eqn. 4.40, and gain, Eqn. 4.41.
6. Optionally compute the open loop plant model using Eqn. 4.43 to Eqn. 4.45.
7. Compute the P, PI or PID tuning constants using Table 4.5, or perhaps using the model and the ZN strategy in Listing 4.4.
8. Test the closed loop response with the new PID values. If the response is not satisfactory, further manual fine-tuning may be necessary.
A MATLAB implementation to compute the tuning constants given response data is given in the listings starting on page 164.

A consequence of the assumption that the plant is adequately modelled using first-order dynamics with deadtime is that the Yuwana-Seborg tuning scheme is not suitable for highly oscillatory processes, nor for plants with inverse response behaviour as illustrated in section 4.8.1. Furthermore, a consequence of the low order Padé approximation is that the strategy fails for processes with a large relative deadtime. Ways to use this tuning scheme in the presence of excessive measurement noise are described in section 5.2.1.
An example of a closed loop response tuner
Suppose we are to tune a PID controller for an inverse response process,

    G(s) = (−s + 1) / ( (s + 1)² (2s + 1) )

Such processes with right-half plane zeros are challenging because of the potential instability owing to the initial response heading off in the opposite direction, as described further in section 4.8.1. The response to a unit setpoint change is given in the top plot of Fig. 4.24. By recording the values of the peaks and troughs, we can estimate the parameters of an approximate model of the
Figure 4.24: A Yuwana-Seborg closed loop step test. The top plot shows the response given a unit change in setpoint with the key points marked (trial closed loop step test with Kc = 1). The middle plot compares the estimated with the actual closed loop data, while the bottom plot compares the estimated open loop plant model with the true plant.
closed loop response, Eqn. 4.33. A comparison of this model with the actual data is given in the middle plot of Fig. 4.24. It is not a perfect fit because the underlying plant is not a pure first order with deadtime, and because Eqn. 4.33 is only approximate anyway.

Given the closed loop model, we can now extract the open loop model of the plant, which is compared with the true plant in the bottom trend of Fig. 4.24. Note how the first-order plus deadtime model, specifically in this case

    Ĝ = 1.003 e^(−2.24s) / (3.128s + 1)

approximates within reason the higher order true plant dynamics.
Finally, now that we have a plant model, Ĝ, we can use either the ITAE optimal or the IMC tuning scheme, as shown in Fig. 4.25.

As evident in Fig. 4.25, the ITAE responses are overly oscillatory, which tends to be a common failing of PID tuning schemes that are designed to satisfy a performance objective.

In the IMC case, the desired closed loop time constant was set equal to the sum of the deadtime and the open loop time constant, τc = θm + τm, which as it turns out is perhaps overly conservative in this case. Better performance would be achieved by reducing τc.
Figure 4.25: The PI (left) and PID (right) closed loop responses using the ITAE (upper) and IMC (lower) tuning schemes based on the identified model but applied to the actual plant.
Automating the Yuwana-Seborg tuner
Because this is such a useful strategy for PID tuning, it is helpful to have a semi-automated procedure. The first step, once you have collected the closed loop data with a trial gain, is to identify the values and timing of the peaks and troughs, yp1, ym1 and yp2. Listing 4.5 is a simple routine that attempts to automatically extract these values. As with all small programs of this nature, the identification part can fail, particularly given excessive noise or unusual response curves.

Listing 4.5: Identifies the characteristic points for the Yuwana-Seborg PID tuner from a trial closed loop response
function [yp1,tp1,ym1,tm1,yp2,tp2,yss_est] = ys_ident(t,Y)
% Identify the peaks & troughs from an underdamped closed loop test
% Used for Yuwana-Seborg PID tuning

y0 = Y(1); n = length(Y);  % start point & total # of points
if n < 20
    fprintf('Probably need more points\n');
end % if

Y = Y*((Y(n) > y0)-0.5)*2; % work out direction (1 if positive)
yss_est = mean(Y(n:-1:n-round(n/50))); % average pt at the end (robust)

[yp1,idx] = max(Y); tp1 = t(idx); % 1st maximum point & time at occurrence
Y = Y(idx:n); t = t(idx:n);       % chop off to keep searching
[ym1,idx] = min(Y); tm1 = t(idx); % 1st minimum point & time
Y(1:idx) = []; t(1:idx) = [];
[yp2,idx] = max(Y); tp2 = t(idx); % 2nd maximum point & time at occurrence
return % end ys_ident.m
Using this peak and trough data, we can compute the parameters in the closed loop model following Listing 4.6.

Listing 4.6: Compute the closed loop model from peak and trough data

yinf = (yp1*yp2 - ym1^2)/(yp1+yp2-2*ym1); % Steady-state output, y∞, Eqn. 4.34
K = yinf/A;                          % Closed loop gain, K, Eqn. 4.35 (A = setpoint change)
H = 1/3*((yp1-yinf)/yinf+(yinf-ym1)/(yp1-yinf)+(yp2-yinf)/(yinf-ym1)); % Overshoot, H, Eqn. 4.36
d = 2*tp1 - tm1;                     % Deadtime, θ, Eqn. 4.37
zeta = -log(H)/sqrt(pi^2+log(H)^2);  % shape factor, ζ, Eqn. 4.38
tau = (tm1-tp1)*sqrt(1-zeta^2)/pi;   % time constant, τ, Eqn. 4.39

Gcl = tf(K,[tau^2 2*zeta*tau 1],'iodelay',d); % Closed loop model, Gcl(s), Eqn. 4.33
Now that we have the closed loop model, we can compute the ultimate gain, Ku, and ultimate frequency, ωu.

Listing 4.7: Compute the ultimate gain and frequency from the closed loop model parameters.

fwu = @(w) pi - d*w - atan2(2*zeta*tau*w,1-tau^2*w.^2); % Eqn. 4.40
[Gm,Pm,Wcg,Wcp] = margin(Gcl)  % Use as a good starting estimate for ωu
wu = fsolve(fwu,Wcg);          % Solve nonlinear Eqn. 4.40 for ωu

B = K/sqrt((1-tau^2*wu^2)^2+(2*tau*zeta*wu)^2); % |Gcl(iωu)|, Eqn. 4.42
Kcu = Kc*(1+1/B);              % Ultimate gain, Ku, Eqn. 4.41
Finally we can now extract the open loop first-order plus deadtime model using Listing 4.8.

Listing 4.8: Compute the open loop model, Gm, Eqn. 4.31.

Km = yinf/(Kc*(A-yinf));           % Plant gain, Km, Eqn. 4.43
taum = 1/wu*sqrt(Kcu^2*Km^2-1);    % Plant time constant, τm, Eqn. 4.44
dm = 1/wu*(pi - atan(taum*wu));    % Plant deadtime, θm, Eqn. 4.45
Gm = tf(Km,[taum 1],'iodelay',dm); % Open loop model estimate, Gm(s), Eqn. 4.31
Now that we have an estimate of the plant model, it is trivial to use, say, the IMC relations to compute reasonable PI or PID tuning constants. We do however need to decide on an appropriate desired closed loop time constant.

Listing 4.9: Compute appropriate PI or PID tuning constants based on a plant model, Gm, using the IMC schemes.

tauc = (taum+dm)/3;  % IMC desired closed loop time constant, τc
controller = 'PID';  % Type of desired controller, PI or PID
switch controller    % See IMC tuning schemes in Table 4.3.
    case 'PI'
        Kc = 1/Km*taum/(tauc+dm);
        taui = taum; rs = 1/taui; % Integral time, τi, and reset, 1/τi.
        taud = 0;
    case 'PID'
        Kc = 1/Km*(taum+dm/2)/(tauc+dm/2);
        taui = taum+dm/2; rs = 1/taui;
        taud = taum*dm/(2*taum+dm);
end % switch

Now all that is left to do is load the controller tuning constants into the PID controller, and set the controller to automatic.
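The whole pipeline of Listings 4.6 to 4.8 can be condensed into one short sketch; the following is an illustrative Python translation (not the book's code), run on hypothetical measured data rather than the example of Fig. 4.24:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical closed loop test data: trial gain Kc, setpoint step A,
# first peak (yp1, tp1), first trough (ym1, tm1), second peak yp2
Kc, A = 1.0, 1.0
yp1, tp1, ym1, tm1, yp2 = 0.7, 5.0, 0.35, 9.0, 0.55

yinf = (yp1*yp2 - ym1**2)/(yp1 + yp2 - 2*ym1)        # Eqn. 4.34
K = yinf/A                                           # Eqn. 4.35
H = ((yp1-yinf)/yinf + (yinf-ym1)/(yp1-yinf)
     + (yp2-yinf)/(yinf-ym1))/3.0                    # Eqn. 4.36
d = 2*tp1 - tm1                                      # Eqn. 4.37
zeta = -np.log(H)/np.sqrt(np.pi**2 + np.log(H)**2)   # Eqn. 4.38
tau = (tm1 - tp1)*np.sqrt(1 - zeta**2)/np.pi         # Eqn. 4.39

# Ultimate frequency and gain, Eqns 4.40-4.42
fwu = lambda w: np.pi - d*w - np.arctan2(2*zeta*tau*w, 1 - (tau*w)**2)
wu = brentq(fwu, 1e-6, 5.0)
B = K/np.sqrt((1 - (tau*wu)**2)**2 + (2*zeta*tau*wu)**2)
Ku = Kc*(1 + 1/B)

# Open loop plant model, Eqns 4.43-4.45
Km = yinf/(Kc*(A - yinf))
taum = np.sqrt((Ku*Km)**2 - 1)/wu
thetam = (np.pi - np.arctan(taum*wu))/wu
```

From here the constants can go into the ITAE relations of Eqn. 4.46 or the IMC rules of Table 4.3, exactly as in Listing 4.9.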
4.6.4 Summary on closed loop tuning schemes
These automated PID tuners are, not surprisingly, quite popular in industry and also in academic circles. Most of the schemes are based on what was described above; that is, the unknown process is approximated by a simple transfer function for which the ultimate gain and frequency are known as a function of the parameters. Once these parameters are curve fitted, the PID controller is designed via a Ziegler-Nichols technique. There are numerous modifications to this basic technique, although most of them are minor. One extension is reported in [48], while [193] summarises subsequent modifications, corrects errors, and compares alternatives. The relay feedback method of Hägglund and Åström, [15], described next in section 4.7, is also a closed loop single test method.
Problem 4.2
1. Modify the ys_ident.m function to improve the robustness. Add improved error checking, and try to avoid other common potential pitfalls such as negative controller gain, integrator responses, severe nonlinearities, excessive noise, spikes etc.

2. Test your auto closed loop tuner on the following three processes Gp(s) (adapted from [48]):

    G1(s) = e^(−4s) / (s + 1)        (4.47)

    G2(s) = e^(−3s) / ( (s + 1)² (2s + 1) )        (4.48)

    G3(s) = e^(−0.5s) / ( (s − 1)(0.15s + 1)(0.05s + 1) )        (4.49)

G1 and G2 show the effect of some dead time, and G3 is an open loop unstable process.

3. Create an m-file similar to the one above that implements the closed loop tuning method as described by Chen, [48]. You will probably need to use the MATLAB fsolve function to solve the nonlinear equation.
4.7 Automated tuning by relay feedback
Because the manual tuning of PID controllers is troublesome and tedious, it is rarely done, which is the motivation behind the development of self-tuning controllers. This tuning on demand behaviour, after which the controller reverts back to a standard PID controller, is different from adaptive control, where the controller is consistently monitoring and adjusting the controller tuning parameters or even the algorithm structure. Industrial acceptance of this type of smart controller was much better than for the fully adaptive controller, partly because the plant engineers were far more confident about the inherent robustness of a self-tuning controller.
4.7. AUTOMATED TUNING BY RELAY FEEDBACK 167
One method to self-tune, employed by the ECA family of controllers manufactured by ABB shown in Fig. 4.26, uses a nonlinear element, the relay. The automated tuning by relays is based on the assumption that the Ziegler-Nichols style of controller tuning is a good one, but is flawed in practice since finding the ultimate frequency ωu is a potentially hazardous, tedious, trial and error experiment. The development of this algorithm was a joint project between SattControl and Lund University, Sweden [15]. This strategy of tuning by relay feedback is alternatively known as the Auto Tune Variation or ATV method.

Figure 4.26: A PID controller with a self-tuning option using a relay from ABB.
The PID controller with self-tuning capability using relay feedback is really two controllers in one, as shown in Fig. 4.27. Here the PID component is disabled, and the relay is substituted. Once the executive software is confident that the updated controller parameters are an improvement, the switch is toggled, and the controller reverts back to a normal PID regulator.

Figure 4.27: A process under relay tuning with the PID regulator disabled.
A relay can be thought of as an on/off controller, or a device that approximates a proportional controller with an infinite gain but hard limits on the manipulated variable. Thus the relay has two outputs: if the error is negative, the relay sends a correcting signal of −d units, and if the error is positive, the relay sends a correcting signal of +d units.
4.7.1 Describing functions
A relay is a nonlinear element where, unlike the situation with a linear component, the output to a sinusoidal input will not in general be sinusoidal. However for many plants that are low-pass in nature, the higher harmonics are attenuated, and we need only consider the fundamental harmonic of the nonlinear output.

The describing function N of a nonlinear element is defined as the complex ratio of the fundamental harmonic of the output to the input, or

    N ≝ (Y1/X1) ∠φ        (4.50)

where X1, Y1 are the amplitudes of the input and output, and φ is the phase shift of the fundamental harmonic component of the output, [150, p652]. If there is no energy storage in the nonlinear element, then the describing function is a function only of the input amplitude.
By truncating a Fourier series of the output, and using Eqn. 4.50, one can show that the describing function for a relay with amplitude d is

    N(a) = 4d / (πa)        (4.51)

where a is the input amplitude.
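Eqn. 4.51 is easy to verify numerically: drive an ideal relay with a sinusoid of amplitude a and compute the fundamental Fourier coefficient of the resulting square wave. A short illustrative Python sketch (with arbitrarily chosen a and d):

```python
import numpy as np

a, d = 0.5, 2.0                    # input amplitude and relay amplitude
w = 1.0                            # any frequency will do
t = np.linspace(0.0, 2*np.pi/w, 20001)
x = a*np.sin(w*t)                  # sinusoidal input to the relay
y = d*np.sign(x)                   # relay output (square wave, in phase)

# Fundamental (in-phase) Fourier coefficient of the output over one period
b1 = 2.0*np.mean(y*np.sin(w*t))
N = b1/a                           # describing function, should equal 4d/(pi*a)
```

Since the square wave is in phase with the input, the describing function for an ideal relay is purely real, which is what allows the geometric argument that follows.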
Now a sustained oscillation or limit cycle of a linear plant G(iω) together with a nonlinear element with describing function N will occur when 1 + N G(iω) = 0, or

    G(iω) = −1/N        (4.52)

Now since in the case of a relay, N given by Eqn. 4.51 is purely real, the intersection of G(iω) and −1/N is somewhere along the negative real axis. This will occur at a gain of −πa/(4d), so the ultimate gain, Ku, which is the reciprocal of this, is

    Ku = 4d / (πa)        (4.53)
The ultimate frequency, ωu = 2π/Pu (in rad/s), is simply the frequency of the observed output oscillation and can be calculated by counting the time between zero crossings, with the amplitude, a, obtained by measuring the range of the output. We now know from this simple single experiment one point on the Nyquist curve, and a reasonable PID controller can be designed based on this point using, say, the classical Ziegler-Nichols table.
In summary, under relay feedback control, most plants will exhibit some type of limit oscillation, such as a sinusoidal-like wave with a phase lag of −180°. This is at the point where the open loop transfer function crosses the negative real axis on a Nyquist plot, although not necessarily at the critical −1 point. A typical test setup and response is given in Fig. 4.28.
Given that we know a single point on the Nyquist curve, how do we obtain controller parameters? It is convenient to restrict the controller to the PID structure since that is what is mostly used in practice. The original ZN tuning criteria gave responses that should give a specified gain and phase margin. For good control, a gain margin of about 1.7 and a phase margin of about φm = 30° is recommended. Note that by using different settings of the PID controller, one can move a single point on the Nyquist curve to any other arbitrary point. Let us assume that we know the point where the open loop transfer function of the process, Gp(s), crosses the negative
Figure 4.28: An unknown plant under relay feedback exhibits an oscillation. The relay output has amplitude 2d; the resulting plant output oscillates with amplitude 2a and period P.
real axis, ωu. This point has a phase (or argument) of −π, or −180°. We wish the open loop transfer function, Gc Gp, to have a specified phase margin of φm by choosing appropriate controller tuning constants. Thus equating the arguments of the two complex numbers gives

    arg( 1 + 1/(iωu τi) + iωu τd ) = φm        (4.54)

which simplifies to⁵

    ωu τd − 1/(ωu τi) = tan φm
Since we have two tuning parameters (τi, τd) and one specified condition (φm), we have many solutions. Since there are many PID controller constants that could satisfy this phase margin, Åström, [11], chose that the integral time should be some factor of the derivative time,

    τi = f τd        (4.55)

Then the derivative time, which must be real and positive, is

    τd = ( f tan φm + √( f² tan² φm + 4f ) ) / ( 2 f ωu )        (4.56)

where the arbitrary factor is f ≈ 4. Given the ultimate gain Ku, the controller gain is

    Kc = Ku cos φm        (4.57)
⁵ Remember that for a complex number z ≝ x + iy, the argument of z is the angle z makes with the real axis: arg(z) = arg(x + iy) = tan⁻¹(y/x).
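These relations can be checked numerically. Using the ultimate gain and frequency from the relay example of section 4.7.2 (Ku = 3.978, ωu ≈ 4 rad/s) with φm = 30° and f = 4, an illustrative Python sketch gives essentially the Hägglund column of Table 4.6:

```python
import numpy as np

Ku, wu = 3.978, 4.0               # ultimate gain & frequency (relay example)
phi_m, f = np.radians(30.0), 4.0  # chosen phase margin and factor

tanp = np.tan(phi_m)
taud = (f*tanp + np.sqrt(f**2*tanp**2 + 4*f))/(2*f*wu)  # Eqn. 4.56
taui = f*taud                                           # Eqn. 4.55
Kc = Ku*np.cos(phi_m)                                   # Eqn. 4.57
```

As a consistency check, these constants satisfy ωuτd − 1/(ωuτi) = tan φm, the simplified phase condition above.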
Algorithm 4.3 Relay auto-tuning
In summary, to tune a system by relay feedback, you must:
1. Turn off the current PID controller, and swap over to a relay feedback controller with known relay gain d.
2. Apply some excitation if necessary to get the process to oscillate under relay control. Measure the resultant error amplitude a and period Pu. Calculate the ultimate gain from Eqn. 4.53 and the associated frequency ωu.
3. Decide on a phase margin (φm ≈ 30°) and the factor f ≈ 4, and calculate the PID controller tuning constants using Eqns 4.55 to 4.57.
4. Download these parameters to the PID controller, turn off the relay feedback, and swap back over to the PID controller.
5. Wait until the operator again initiates the self-tuning program, after which return to step 1.
4.7.2 An example of relay tuning
We can compare the experimentally determined ultimate gain, Ku, and ultimate frequency, ωu, with the theoretical values for the plant

    G(s) = 1 / (τs + 1)⁴        (4.58)

where the time constant is τ = 0.25. The ultimate gain and frequency are given by solving
    arg[ Ku / (τs + 1)⁴ ] = −π   and   | Ku / (τs + 1)⁴ | = 1,   both evaluated at s = iωu

which is rather involved. An alternative graphical solution is obtained using Bode and/or Nyquist diagrams in MATLAB.

G = zpk([],[-4 -4 -4 -4],4^4);  % Plant: G(s) = 1/(0.25s+1)^4
nyquist(G,logspace(-1,3,200));
bode(G)                         % Bode diagram is an alternative
The Nyquist diagram given in the upper plot of Fig. 4.29 shows the curve crossing the negative real axis at −1/Ku ≈ −0.25, thus Ku ≈ 4. To find the ultimate angular frequency, we need to plot the phase lag of the Bode diagram shown in the bottom plot of Fig. 4.29. The phase angle crosses the −180° point at a frequency of about ωu = 4 rad/s. Thus the ultimate period is Pu = 2π/ωu = 1.57 seconds. Note that it was unnecessary to plot the Nyquist diagram since we could also extract the ultimate gain information from the magnitude part of the Bode diagram.
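For this particular plant the crossing can also be found by hand: the phase is −4 tan⁻¹(0.25ω), which equals −π exactly when tan⁻¹(0.25ω) = π/4, i.e. ω = 4 rad/s, and there |G(4i)| = 1/|1 + i|⁴ = 1/4, so Ku = 4 exactly. A two-line illustrative check in Python:

```python
import numpy as np

w = 4.0                        # -4*atan(0.25*w) = -pi  =>  w = 4 rad/s
G = 1.0/(0.25j*w + 1.0)**4     # G(4i) = 1/(1+i)^4 = -1/4, on the negative real axis
Ku = 1.0/abs(G)                # ultimate gain
```

This closed-form answer is a useful sanity check on both the graphical estimates and the relay experiment that follows.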
Estimating Ku and ωu by relay experiment

Now we can repeat the evaluation of Ku and ωu, but this time by using the relay experiment. I will use a relay with a relay gain of d = 2 but with no hysteresis (h = 0), and a sample time of T = 0.05 seconds. To get the system moving, I will choose a starting value away from the setpoint, y0 = 1.
Figure 4.29: Nyquist (top) and Bode (bottom) diagrams for 1/(0.25s + 1)⁴. The Nyquist curve crosses the negative real axis at about −0.25. The frequency at which the Bode curve passes through φ = −180° is ωu ≈ 4 rad/s. (Gm = 12 dB at 4 rad/s, Pm = 180 deg at 0 rad/s.)
G = zpk([],[-4 -4 -4 -4],4^4);   % Plant: G(s) = 1/(0.25s+1)^4
Ts = 0.05; t = [0:Ts:7]';        % sample time
u = 0*t; y = NaN*t;              % initialise u & y
d = 2;                           % relay gain
[Phi,Del,C,D] = ssdata(c2d(G,Ts));
x = [0,0,0,0.1]';                % some non-zero initial state

for i=1:length(t)-1              % start relay experiment
    x = Phi*x + Del*u(i);        % integrate model 1 step
    y(i) = C*x;                  % output
    u(i+1) = -d*sign(y(i));      % Relay controller
end % for
plot(t,y,t,u,'r--')
The result of the relay simulation is given in Fig. 4.30 where we can see that once the transients have died out, the amplitude of the oscillation of the output (or error, since the setpoint is constant) is a = 0.64 units, with a period of P = 1.6 seconds. Using Eqn. 4.53, we get estimates for the ultimate gain and angular frequency

    P̂u = P = 1.6 ≈ 1.57 s,   and   K̂u = 4d/(πa) = 3.978 ≈ 4

which approximately equal the estimates that I graphically found using the Bode and Nyquist diagrams of the true plant.
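The same relay experiment can be sketched in Python with scipy's discretisation routines (an illustrative translation of the MATLAB script above; the precise measured values depend slightly on the simulation details such as the initial kick used here):

```python
import numpy as np
from scipy import signal

# Plant G(s) = 1/(0.25s+1)^4 = 256/(s+4)^4, discretised at Ts = 0.05 s
A, B, C, D = signal.tf2ss([256.0], [1.0, 16.0, 96.0, 256.0, 256.0])
Ts, d = 0.05, 2.0
Ad, Bd, Cd, Dd, _ = signal.cont2discrete((A, B, C, D), Ts)  # zero-order hold

x = np.zeros((4, 1)); u = d          # kick the loop into motion
y = np.zeros(600)
for k in range(600):                 # relay feedback loop
    x = Ad @ x + Bd*u                # integrate model 1 step
    y[k] = (Cd @ x).item()           # output
    u = -d*np.sign(y[k]) if y[k] != 0 else d  # relay controller

yss = y[300:]                        # discard the start-up transient
a = 0.5*(yss.max() - yss.min())      # oscillation amplitude, ~0.64
rising = np.where(np.diff(np.sign(yss)) > 0)[0]
P = np.mean(np.diff(rising))*Ts      # oscillation period, ~1.6 s
Ku_est = 4*d/(np.pi*a)               # Eqn. 4.53, ~4
```

The amplitude and period recovered from the limit cycle reproduce the a = 0.64 and P = 1.6 s quoted above to within the sampling resolution.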
Figure 4.30: PID tuning using relay feedback. For the first 10 seconds the relay is enabled, and the process oscillates at the ultimate frequency (a = 0.64, Pu = P = 1.6 s). After 10 s, the relay is turned off, the PID tuner with the updated constants is enabled, and the plant is under control.
Once we have the ultimate gain and frequency, the PID controller constants are obtained from Eqn. 4.55 to 4.57 and are shown in Table 4.6. Alternatively we can use any Z-N based tuning scheme.

Table 4.6: The PID controller tuning parameters obtained from a relay-based closed-loop experiment

    PID parameter   Åström-Hägglund   Ziegler-Nichols   units
    K_c             3.445             0.6 K_u = 3.4
    τ_i             0.866             P_u/2 = 0.8       s
    τ_d             0.217             P_u/8 = 0.2       s
Now that we have the PID tuning constants, we can simulate the closed loop response to setpoint changes. We could create the closed loop transfer function using G*Gpid/(1 + Gpid*G), however the preferred approach is to use the routines series and feedback. This has the advantage that it will cancel common factors and perform a balanced realisation.

G = zpk([],[-4 -4 -4 -4],4^4);    % Plant to be controlled, Eqn. 4.58.
K = 3.4; ti = 0.8; td = 0.2;      % Controller constants from Table 4.6.
Gpid = tf(K*[ti*td ti 1],[ti 0]);

Gcl = feedback(series(G,Gpid),1); % Create closed loop for simulation
R = [zeros(10,1);ones(100,1);-ones(100,1)]; % Design setpoint vector
T = 0.05*[0:1:length(R)-1]';      % time scale
y = lsim(Gcl,R,T);                % Simulate continuous
plot(T,[y,R])
The remainder of Fig. 4.30, for the period after 10 seconds, shows the quite reasonable response using the PID constants obtained via a relay experiment. Perhaps the overshoot at around 40% and the oscillation are a little high, but this is a well known problem of controller tuning via the classical ZN approach, and can be addressed by using the modified ZN tuning constants.
4.7.3 Self-tuning with noise disturbances
The practical implementation of the relay feedback identification scheme requires that we measure the period of oscillation P, and the error amplitude a, automatically. Both these parameters are easily established manually by visual inspection if the process response is well behaved and noise free, but noisy industrial outputs can make the identification of even these simple characteristics difficult to obtain reliably and automatically.

When considering disturbances, [11] develops two modifications to the period and amplitude identification part of the self-tuning algorithm. The first involves a least-squares fit (described below), and the second involves an extended Kalman filter described in problem 9.7. Of course hybrids of both these schemes with additional heuristics could also be used. Using the same process and relay as in §4.7.2, we can introduce some noise into the system as shown in the SIMULINK model given in Fig. 4.31(a). Despite the low noise power (0.001), and the damping of the fourth-order process, the oscillations vary in amplitude and frequency as shown in Fig. 4.31(b).
To obtain reliable estimates of the amplitude and period we can use the fact that a sampled sinusoidal function with period P and sample time T satisfies

    y(t) − θ_1 y(t − T) + y(t − 2T) + θ_2 = 0        (4.59)

with

    θ_1 = 2 cos(2πT/P)        (4.60)

The extra constant, θ_2, is introduced to take into account non-zero means. Eqn. 4.59 is linear in the parameters, and we can solve the least-squares regression problem in MATLAB using the backslash command,

    θ = [θ_1; θ_2] = [y_{t−T}, −1]⁺ (y_t + y_{t−2T})        (4.61)

where ⁺ denotes the pseudo-inverse. Once we have fitted θ_1, (we are uninterested in θ_2), the period is obtained by inverting Eqn. 4.60,

    P = 2πT / cos⁻¹(θ_1/2)        (4.62)
Now that we know the period, we can solve another linear least-squares regression problem given the N output data points,

    min_θ Σ_{k=1}^{N} [ y(kT) − θ_3 sin(2πkT/P) − θ_4 cos(2πkT/P) − θ_5 ]²        (4.63)

where the amplitude is given by

    a = √(θ_3² + θ_4²)

By doing the regression in this two step approach, we avoid the nonlinearity of Eqn. 4.63 if we were to estimate P as well in an attempt to do the optimisation in one step.
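The two-step regression is easy to verify on synthetic data before trusting it on a noisy plant. The following Python sketch (a translation of the idea behind Listing 4.10; the helper routines solve and lstsq are my own, and the data is an artificial noise-free offset sinusoid) recovers the period and amplitude:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lstsq(X, y):
    """Least squares via the normal equations, X'X theta = X'y."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    return solve(A, b)

# Synthetic oscillation: amplitude 0.64, period 1.6 s, offset 0.3
T, P_true, a_true, offset = 0.05, 1.6, 0.64, 0.3
y = [offset + a_true * math.sin(2 * math.pi * k * T / P_true) for k in range(200)]

# Step 1: fit theta_1, theta_2 as in Eqn. 4.61, then invert Eqn. 4.60
n = len(y)
X1 = [[y[k - 1], -1.0] for k in range(2, n)]
rhs = [y[k] + y[k - 2] for k in range(2, n)]
th = lstsq(X1, rhs)
P = 2 * math.pi * T / math.acos(th[0] / 2)          # Eqn. 4.62

# Step 2: with P known, fit the sin/cos/offset basis of Eqn. 4.63
X2 = [[math.sin(2 * math.pi * k * T / P), math.cos(2 * math.pi * k * T / P), 1.0]
      for k in range(n)]
th2 = lstsq(X2, y)
a = math.hypot(th2[0], th2[1])
print(P, a)   # recovers P = 1.6 and a = 0.64 on noise-free data
```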
Running the SIMULINK simulation given in Fig. 4.31(a) gives a trend something like Fig. 4.31(b).
Figure 4.31: Relay oscillations with noise. (a) Relay feedback of the fourth-order plant 256/((s+4)(s+4)(s+4)(s+4)) with band-limited white noise added. (b) The controlled output and relay response.
The only small implementation problem is that SIMULINK does not necessarily return equally sampled data for continuous system simulation. The easiest work-around is to define a regularly spaced time vector and do a table lookup on the data using interp1. The script file given in Listing 4.10 calculates the period and amplitude using least-squares.

Listing 4.10: Calculates the period and amplitude of a sinusoidal time series using least-squares.
N = length(t);                 % Length of data series
tr = linspace(t(1),t(length(t)),1.5*N)'; dt = tr(2)-tr(1);
yr = interp1(t,y,tr,'linear'); % Simulink is not regular in time
y = yr; t = tr;                % Now sample time T is regularly spaced.

% Construct data matrix
npts = length(y);              % # of data points
rhs = y(3:npts) + y(1:npts-2);
X = [y(2:npts-1),-ones(size(rhs))];
theta = X\rhs;                 % Solve LS problem, Eqn. 4.61.
P = 2*pi*dt/acos(theta(1)/2);  % Now find period from Eqn. 4.62.

omega = 2*pi/P;                % 2nd round to find amplitude .....
k = [1:npts]';
X = [sin(omega*k*dt) cos(omega*k*dt) ones(size(k))]; % data matrix
theta = X\y;                   % Solve 2nd LS problem, Eqn. 4.63.
a = norm(theta(1:2));          % Extract amplitude.
In my case, the computed results are:
P = 1.822, a = 0.616
which is not far off the values obtained with no disturbances (from page 171) of P = 1.6 and a = 0.64.

Figure 4.32: Experimental implementation of relay tuning of the black box. (a) A SIMULINK implementation of the relay feedback experiment, with a switch selecting between the relay and the PID controller driving the real-time input and output plugs. (b) The output of the relay experiment, showing the input and the output & setpoint trends.
Relay based tuning of a black-box
Using a data acquisition card and a real-time toolbox extension⁶ to MATLAB, we can repeat the trial of the relay based tuning method, this time controlling a real process, in this case a black box. The SIMULINK controller and input/output data are given in Fig. 4.32.

From Fig. 4.32, we can measure the period and amplitude of the oscillating error,

    P_u = 7 seconds,  a = 0.125

giving an approximate ultimate gain as

    K_u = 4d/(πa) = 4/(π × 0.125) = 10.2,  with d = 1

which leads to appropriate K_c, τ_i, τ_d, using the modified ZN rules:

⁶Available from Humusoft
    K_c               τ_i (s)        τ_d (s)
    0.33 K_u = 3.4    P_u/2 = 3.5    P_u/3 = 2.3
Since SIMULINK uses a slightly different PID controller scheme from the classical formula,

    P = K_c = 3.4,  I = K_c/τ_i = 0.97,  D = K_c τ_d = 7.8
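The conversion from the classical constants to the SIMULINK parallel gains can be sketched in a few lines of Python (used here only for illustration; simulink_pid is my own name for the conversion, and the numbers come from the relay experiment above):

```python
import math

def simulink_pid(Kc, taui, taud):
    """Convert classical (Kc, taui, taud) to the parallel P, I, D gains
    used by the SIMULINK PID block: u = P*e + I*integral(e) + D*de/dt."""
    return Kc, Kc / taui, Kc * taud

# Relay results for the black box: d = 1, a = 0.125, Pu = 7 s
d, a, Pu = 1.0, 0.125, 7.0
Ku = 4 * d / (math.pi * a)                    # approximately 10.2
Kc, taui, taud = 0.33 * Ku, Pu / 2, Pu / 3    # modified ZN rules
P, I, D = simulink_pid(Kc, taui, taud)
print(P, I, D)   # approximately 3.4, 0.97, 7.8
```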
Using these values in the PID controller with the black box, we obtain the controlled results shown in Fig. 4.33 which, apart from the noisy input and derivative kick (which is easily removed, refer §4.4.1), does not look too bad.
Figure 4.33: Experimental PID control of the black box using parameters obtained by the relay-based tuning experiment, showing output & setpoint (top) and input (bottom). (See also Fig. 4.32.)
4.7.4 Modifications to the relay feedback estimation algorithm

The algorithm using a relay under feedback as described in section 4.7 establishes just one point on the Nyquist curve. However it is possible, by using a richer set of relay feedback experiments, to build up a more complex model, and perhaps better characterise a wider collection of plants.
Relays with hysteresis
Physical relays always have some hysteresis h, to provide mechanical robustness when faced with noise. By adjusting the hysteresis width we can excite the plant at different points on the Nyquist curve.

The response of a relay with hysteresis is given in Fig. 4.34. Note that the relay will not flip to +d from the −d position until we have a small positive error, in this case h. Likewise, from the reverse direction, the error must drop below a small negative error, −h, before the relay flips from +d to −d.
Figure 4.34: A relay with hysteresis width h and output amplitude d: the output switches from −d to +d at error +h, and back from +d to −d at error −h.
The describing function for a relay with amplitude d and hysteresis width h is

    −1/N(a) = −(π/(4d))√(a² − h²) − j πh/(4d)        (4.64)

which is a line parallel to the real axis in the complex plane. Compare this with the describing function for the pure relay in Eqn. 4.51.
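The geometry of Eqn. 4.64 can be confirmed numerically; the Python sketch below (the function name neg_inv_N is my own) evaluates −1/N(a) for several oscillation amplitudes and shows that the imaginary part is fixed at −πh/(4d), i.e. the locus is a horizontal line:

```python
import math

def neg_inv_N(a, d, h):
    """-1/N(a) for a relay of amplitude d with hysteresis width h (Eqn. 4.64).
    Valid for oscillation amplitudes a >= h."""
    return complex(-math.pi * math.sqrt(a**2 - h**2) / (4 * d),
                   -math.pi * h / (4 * d))

d, h = 2.0, 0.4
points = [neg_inv_N(a, d, h) for a in (0.5, 1.0, 2.0)]
# The imaginary part is independent of a: the locus is a line parallel
# to the real axis, a distance pi*h/(4d) below it.
print([p.imag for p in points])
```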
The intersection of this line and G(iω) is the resultant steady-state oscillation due to the relay, refer Fig. 4.35. By adjusting the relay hysteresis width, h, we can move the −1/N(a) line up and down, thereby establishing different points on the Nyquist diagram. Of course if we increase the hysteresis too much, then we may shift the −1/N(a) curve too far down so that it never intercepts the G(iω) path. In this situation, we will not observe any limit cycle in the closed loop experiment.

Fig. 4.36 shows the estimated frequency response for the plant G(s) = 256/(s + 4)⁴ calculated using a relay with d = 2 and varying amounts of hysteresis from h = 0 to h = 1.4. Note that the calculated points do follow the true frequency response of the plant, at least for small values of h. Obviously if h is too large, then the line −1/N(a) will not intersect the frequency response curve, G(iω), so there will be no oscillations.
Inserting known dynamics into the feedback loop

A second simple modification is to insert an integrator prior to the relay. The integrator subtracts π/2, or 90°, from the phase, and multiplies the gain by a factor of 1/ω. In this way, under relay feedback we can estimate the point where the Nyquist curve crosses the negative imaginary axis as well as the negative real axis. Such a scheme is shown in Fig. 4.37 and is described by [199], which is a specific case of the general schemes described by [117, 144]. A survey of other modifications to the basic relay feedback idea for controller tuning is given in [44].
Figure 4.35: Relay feedback with hysteresis width h. As h increases, the −1/N(a) line moves down from the real axis; its intersection with G(iω) is the operating point under relay feedback.

Figure 4.36: Using a relay feedback with varying amounts of hysteresis, from h = 0 to h = 1.4, to obtain multiple points on a Nyquist curve. The true frequency response G(iω) is the solid line.

Figure 4.37: Relay feedback with an integrator option. A manual switch selects either the relay alone, or the relay preceded by an integrator 1/s, in series with the plant transfer function and transport delay.
Establishing multiple points on the Nyquist curve enables one to extract transfer function models. In the case of a first-order plus deadtime model

    G(s) = K_p e^{−Ls} / (τs + 1)        (4.65)

we need two arbitrary distinct points. That is, we establish the magnitude and phase of G(iω) at two frequencies ω_1 and ω_2 via two relay experiments.

Defining the magnitudes and angles to be

    |G(iω_1)| = 1/k_1,  |G(iω_2)| = 1/k_2,  ∠G(iω_1) = φ_1,  ∠G(iω_2) = φ_2
then the plant gain is

    K_p = √[ (ω_2² − ω_1²) / (k_1²ω_2² − k_2²ω_1²) ]        (4.66)

the time constant is

    τ = (1/ω_1) √(k_1²K_p² − 1)        (4.67)

and the deadtime is

    L = −(1/ω_1) [ φ_1 + tan⁻¹(ω_1 τ) ]        (4.68)

In the case of a pure relay and a pure relay with an integrator,

    φ_1 = −π,  φ_2 = −π/2.
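The extraction formulae can be exercised on a known plant. In the Python sketch below (illustrative only; the plant parameters and frequencies are my own choices, and the magnitude/phase data stands in for what the two relay experiments would deliver), Eqns 4.66 to 4.68 recover the first-order plus deadtime parameters exactly:

```python
import cmath, math

# True (but notionally 'unknown') first-order plus deadtime plant, Eqn. 4.65
Kp_true, tau_true, L_true = 2.0, 3.0, 1.5
def G(w):
    return Kp_true * cmath.exp(-1j * L_true * w) / (1j * w * tau_true + 1)

# Magnitude and phase at two frequencies, as two relay experiments would give
w1, w2 = 0.5, 1.0
k1, k2 = 1 / abs(G(w1)), 1 / abs(G(w2))
phi1 = cmath.phase(G(w1))

# Eqns 4.66 - 4.68
Kp = math.sqrt((w2**2 - w1**2) / (k1**2 * w2**2 - k2**2 * w1**2))
tau = math.sqrt(k1**2 * Kp**2 - 1) / w1
L = -(phi1 + math.atan(w1 * tau)) / w1
print(Kp, tau, L)   # recovers 2.0, 3.0, 1.5
```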
The case for a second order plant

    G(s) = K_p e^{−Ls} / (s² + as + b)        (4.69)

is more complicated. The following algorithm, due to [117], first estimates the deadtime via a nonlinear function, then extracts the coefficients K_p, a and b using least-squares.
First we define the matrices

    A = [A_1; A_2; A_3; A_4] = [ −ω_1²   0     1
                                 −ω_2²   0     1
                                  0      ω_1   0
                                  0      ω_2   0 ],

    X = [ 1/K_p
          a/K_p
          b/K_p ],        Y = [ sin Lω_1
                                cos Lω_1
                                sin Lω_2
                                cos Lω_2 ]        (4.70)

and

    B = [B_1; B_2; B_3; B_4]
      = [  ℑ{G⁻¹(jω_1)}    ℜ{G⁻¹(jω_1)}    0               0
           0               0               ℑ{G⁻¹(jω_2)}    ℜ{G⁻¹(jω_2)}
          −ℜ{G⁻¹(jω_1)}    ℑ{G⁻¹(jω_1)}    0               0
           0               0              −ℜ{G⁻¹(jω_2)}    ℑ{G⁻¹(jω_2)} ]        (4.71)
Now provided we know the deadtime L, we can solve for the remaining parameters in X using

    X = [A_1; A_2; A_3]⁻¹ [B_1; B_2; B_3] Y        (4.72)
Figure 4.38: Identification of transfer function models using a two-point relay experiment. (a) The resultant Nyquist plots, ℑ{G(iω)} versus ℜ{G(iω)}. (b) The resultant step responses of the true plant G and the estimated models G_1est and G_2est.
However since we do not know L, we must solve the nonlinear relation

    (A_4 C − B_4) Y = 0,  where C = [A_1; A_2; A_3]⁻¹ [B_1; B_2; B_3]        (4.73)
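The linear half of the algorithm is easy to check. The Python sketch below (illustrative only; it assumes the ℜ/ℑ layout reconstructed in Eqn. 4.71, uses plant parameters of my own choosing, and takes the deadtime L as known rather than solving Eqn. 4.73 for it) builds A, B and Y for a known second-order plus deadtime plant, solves Eqn. 4.72, and confirms that the Eqn. 4.73 residual vanishes:

```python
import cmath, math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# True second-order plus deadtime plant, Eqn. 4.69
Kp_t, a_t, b_t, L = 2.0, 0.5, 1.0, 0.8
def Ginv(w):   # G^{-1}(jw)
    return (-w**2 + 1j * a_t * w + b_t) * cmath.exp(1j * L * w) / Kp_t

w1, w2 = 0.4, 0.9
R1, I1 = Ginv(w1).real, Ginv(w1).imag
R2, I2 = Ginv(w2).real, Ginv(w2).imag
A = [[-w1**2, 0, 1], [-w2**2, 0, 1], [0, w1, 0], [0, w2, 0]]     # Eqn. 4.70
B = [[I1, R1, 0, 0], [0, 0, I2, R2],
     [-R1, I1, 0, 0], [0, 0, -R2, I2]]                           # Eqn. 4.71
Y = [math.sin(L * w1), math.cos(L * w1),
     math.sin(L * w2), math.cos(L * w2)]
BY = [sum(B[r][c] * Y[c] for c in range(4)) for r in range(3)]
X = solve(A[:3], BY)                                             # Eqn. 4.72
Kp, a, b = 1 / X[0], X[1] / X[0], X[2] / X[0]
resid = A[3][1] * X[1] - sum(B[3][c] * Y[c] for c in range(4))   # Eqn. 4.73
print(Kp, a, b, resid)   # 2.0, 0.5, 1.0 and a residual of zero
```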
Once given the model, any standard model-based controller tuning scheme such as internal model control (IMC) can be used to establish appropriate PID tuning constants.

Fig. 4.38 illustrates the procedure for when the true, but unknown, plant is the third-order system

    G = e^{−6s} / [(9s + 1)(6s + 1)(4s + 1)]

with estimated first and second order models.
Problem 4.3 1. Draw a Nyquist diagram for the transfer function G_p(s). Investigate what happens to the Nyquist diagram when a PI controller is added. How is the curve shifted?

    G_p(s) = 4 / (τs + 1)³,  where τ = 2.0.

2. Write down pseudo code for the automatic tuning (one button tuning) of a PID controller using the ZN tuning criteria. Ensure that your algorithm will work under a wide variety of conditions and will not cause too much instability, if any. Consider safety and robustness aspects for your scheme.

3. Write a .m file to implement a relay feedback tuning scheme using the Åström and Hägglund tuning method. Use the pidsim file as a starting point. Pass as parameters the relay gain d, and the hysteresis delay h. Return the ultimate period and gain and controller tuning parameters.
4.8 Drawbacks with PID controllers

The PID controller, despite being a simple, intuitive and flexible controller, does have some drawbacks. Some of these limitations are highlighted when trying to control an inverse response as demonstrated in §4.8.1, and a simple add-on for deadtime processes is given in §4.9.
4.8.1 Inverse response processes

An inverse response is where the response to a step change first heads in one direction, but then eventually arrives at a steady state in the reverse direction, as shown in Fig. 4.40(b). Naturally this sort of behaviour is very difficult to control, and if one tends to overreact, perhaps because the controller's gain is too high, then excessive oscillation is likely to result. Inverse responses are common in the control of the liquid water level in a steam drum boiler, as described in [20], due to a phenomenon known as the swell-and-shrink effect. For example, the water level first increases when the steam valve is opened because the drum pressure will drop, causing a swelling of the steam bubbles below the surface.
Transfer functions that have an odd number of right-hand-plane zeros have inverse responses. Inverse response processes belong to a wider group of transfer functions called Non-Minimum Phase (NMP) transfer functions. Other NMP examples are those processes that have dead time, which are investigated later in §4.9. The term non-minimum phase derives from the fact that for these transfer functions, there will exist another transfer function that has the same amplitude ratio at all frequencies, but a smaller phase lag.

Figure 4.39: The J curve by Ian Bremmer popularised the notion that things like private investments, or even the openness/stability state of nations, often get worse before they get better. J curves are inverse responses.
I like to think of inverse response systems as the sum of two transfer functions: one that goes downwards a short way quickly, and one that goes up further but much more leisurely. The addition of these two gives the inverse response. For example,

    G(s) = K_1/(τ_1 s + 1) − K_2/(τ_2 s + 1)        (4.74)

(the first term being G_1 and the second G_2)

         = [ (K_1τ_2 − K_2τ_1)s + K_1 − K_2 ] / [ (τ_1 s + 1)(τ_2 s + 1) ]        (4.75)
If K_1 > K_2, then for the system to have right-hand plane zeros (numerator polynomial equals zero in Eqn. 4.75), we require

    K_1 τ_2 < K_2 τ_1        (4.76)

If K_1 < K_2, then the opposite of Eqn. 4.76 must hold.
A SIMULINK simulation of a simple inverse response process,

    G(s) = 4/(3s + 1) − 3/(s + 1) = (−5s + 1) / [(3s + 1)(s + 1)]        (4.77)

(with G_1 = 4/(3s + 1) and G_2 = 3/(s + 1)), of the form given in Eqn. 4.74 and clearly showing the right-hand plane zero at s = 1/5, is shown in Fig. 4.40(a). The simulation results for the two individual transfer functions that combine to give the overall inverse response are given in Fig. 4.40(b).
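The zero location and the sign condition of Eqn. 4.76 follow directly from the numerator in Eqn. 4.75, and can be checked in a few lines of Python (illustrative only; zero_location is my own helper name):

```python
# Inverse response as the difference of two first-order lags, Eqn. 4.74/4.75:
# the zero sits where the numerator (K1*tau2 - K2*tau1)*s + (K1 - K2) vanishes.
def zero_location(K1, tau1, K2, tau2):
    return -(K1 - K2) / (K1 * tau2 - K2 * tau1)

# The plant of Eqn. 4.77: 4/(3s+1) - 3/(s+1)
K1, tau1, K2, tau2 = 4.0, 3.0, 3.0, 1.0
z = zero_location(K1, tau1, K2, tau2)
print(z)                       # 0.2, i.e. the RHP zero at s = 1/5
print(K1 * tau2 < K2 * tau1)   # True: the condition of Eqn. 4.76 holds
```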
Figure 4.40: An inverse response process is comprised of two component curves, G_1 + G_2. (a) A SIMULINK simulation of the inverse response process described by Eqn. 4.77. (b) The step responses of G_1, G_2 and the inverse response G_1 + G_2. Note that the inverse response first decreases to a minimum y = −1 at around t = 1.5, but then increases to a steady-state value of y_∞ = 1.
Controlling an inverse response process

Suppose we wish to control the inverse response of Eqn. 4.77 with a PI controller tuned using the ZN relations based on the ultimate gain and frequency. The following listing adapts the strategy described in Listing 4.4 for the PID tuning of an arbitrary transfer function.

G = tf(4,[3 1]) - tf(3,[1 1]); % Inverse response plant, Eqn. 4.77.
[Ku,Pm,Wcg,Wcp] = margin(G);   % Establish critical gain, Ku, and frequency, wu.
Pu = 2*pi/Wcg;                 % Critical period, Pu
Kc = 0.45*Ku; taui = Pu/1.2;   % PI tuning rules (Ziegler-Nichols)
Gc = tf(Kc*[taui 1],[taui 0]);

Gcl = feedback(G*Gc,1);        % Closed loop
step(Gcl);                     % Controlled response as shown in Fig. 4.41.
The controlled performance in Fig. 4.41 is extremely sluggish, taking about 100 seconds to reach the setpoint, while the open loop response as shown in Fig. 4.40(b) takes only about one tenth of that to reach steady-state. The controlled response also exhibits an inverse response.

Figure 4.41: A NMP plant controlled with a PI controller is very sluggish but is stable, and does eventually reach the setpoint.
The crucial problem with using a PID controller for an inverse response process is that if the controller is tuned too tightly, in other words the gain is too high and the controller too fast acting, then the controller will look at the early response of the system, which being an inverse system is going the wrong way, and then try to correct it, thus making things worse. If the controller were more relaxed about the correction, then it would wait and see the eventual reversal in gain, and not give the wrong action.

In reality, when engineers are faced with inverse response systems, the gains are deliberately detuned, giving a sluggish response. While this is the best one can do with a simple PID controller, far better results are possible with a model based predictive controller such as Dynamic Matrix Control (DMC). In fact this type of process behaviour is often used as the selling point for these more advanced schemes. Processes G(s) that have RHP zeros are not open-loop unstable (they require RHP poles for that), but the inverse process, G(s)⁻¹, will be unstable. This means that some internal model controllers, which attempt to create a controller that approximates the inverse process, will be unstable, and thus unsuitable in practice.
4.8.2 Approximating inverse-response systems with additional deadtime

We cannot easily remove the inverse response, but we can derive an approximate transfer function that has no right-hand plane zeros using a Padé approximation. We may wish to do this to avoid the problem of an unstable inverse in a controller design, for example. The procedure is to replace the non-minimum phase term (τs − 1) in the numerator with [(τs − 1)/(τs + 1)] (τs + 1), and then approximate the first factor with a deadtime element. In some respects we are replacing one evil (the right-hand plane zeros) with another, an additional deadtime.
For example, the inverse response plant

    G = (2s − 1) e^{−3s} / [(3s + 1)(s + 1)]        (4.78)

has a right-hand zero at s = 1/2 which we wish to approximate as an additional deadtime element. Now we can re-write the plant as

    G = [ (2s + 1) / ((3s + 1)(s + 1)) ] · [ (2s − 1)/(2s + 1) ] · e^{−3s}

where the middle factor is the Padé term,
and approximate this second factor with a Padé approximation⁷ in reverse,

    G ≈ −[ (2s + 1) / ((3s + 1)(s + 1)) ] e^{−4s} e^{−3s} = −[ (2s + 1) / ((3s + 1)(s + 1)) ] e^{−7s}        (4.79)

Note that the original system had an unstable zero and 3 units of deadtime, while the approximate system has no unstable zeros, but its deadtime has increased to 7.
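The quality of the substitution rests on the 1-1 Padé approximation being accurate at low frequency. A short Python check (illustrative; pade11 is my own name) compares the Padé factor against the true deadtime e^{−4s} used in Eqn. 4.79:

```python
import cmath

def pade11(theta, s):
    """1-1 Pade approximation of exp(-theta*s)."""
    return (1 - theta / 2 * s) / (1 + theta / 2 * s)

# At low frequency the approximation of e^{-4s} is accurate
s = 0.05j
exact = cmath.exp(-4 * s)
approx = pade11(4.0, s)
print(abs(exact - approx))   # under 1e-3 at this frequency
```

At higher frequencies the two diverge, which is why the approximation in Fig. 4.42 fails to capture the (fast) inverse response itself.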
We can compare the two step responses with:

>> G = tf([2 -1],conv([3 1],[1 1]),'iodelay',3);  % Original system, Eqn. 4.78.
>> G2 = tf(-[2 1],conv([3 1],[1 1]),'iodelay',7); % No inverse response, Eqn. 4.79.
>> step(G,G2); legend('G','G_{pade}')             % Refer Fig. 4.42.

in Fig. 4.42 which, apart from the fact that the approximate version failed to capture the inverse response, look reasonably similar.
Figure 4.42: Approximating inverse-response systems with additional deadtime: step responses of the true system G and the Padé approximation G_pade.
Problem 4.4 A model of a flexible beam pinned to a motor shaft with a sensor at the free end, repeated in [52, p12], is approximated by the non-minimum phase model,

    G_p(s) = (1/s) · (−6.475s² + 4.0302s + 175.77) / (5s³ + 3.5682s² + 139.5021s + 0.0929)        (4.80)

1. Verify that G_p(s) is non-minimum phase.

2. Plot the root locus diagram of G_p(s) for the cases (a) including the integrator, and (b) excluding the integrator. Based on the root locus, what can you say about the performance of stabilising the system with a proportional controller?

3. Simulate the controlled response of this system with an appropriately tuned PID controller.

⁷Recall that e^{−θs} ≈ (1 − (θ/2)s) / (1 + (θ/2)s) using a 1-1 Padé approximation.
4.9 Dead time compensation

Processes that exhibit dead time are difficult to control and unfortunately all too common in the chemical processing industries. The PID controller, when controlling a process with deadtime, falls prey to the sorts of problems exhibited by the inverse response processes simulated in §4.8.1, and in practice the gain must be detuned so much that the controlled performance suffers. One successful technique for controlling processes with significant dead time, and that has subsequently spawned many other types of controllers, is the Smith predictor.

To implement a Smith predictor, we must have a model of the process, which we can artificially split into two components, one being the estimated model without deadtime, Ĝ(s), and one component being the estimated dead time,

    G_p(s) = G(s) e^{−θs} ≈ Ĝ(s) e^{−θ̂s}        (4.81)
The fundamental idea of the Smith predictor is that we do not control the actual process, G_p(s), but we control a model of the process without the deadtime, Ĝ(s). Naturally in real life we cannot remove the deadtime easily (otherwise we would of course), but in a computer we can easily remove the deadtime. Fig. 4.43 shows a block diagram of the deadtime compensator, where D(s) is our standard controller such as, say, a PID controller. The important feature of the Smith predictor scheme is that the troublesome pure deadtime component is effectively removed from the feedback loop. This makes the controller's job much easier.
Figure 4.43: The Smith predictor structure. The controller D(s) drives the plant G(s)e^{−θs}; the dead time compensator feeds back the difference between the delay-free model Ĝ(s) and the full model Ĝ(s)e^{−θ̂s}, in addition to the measured output y.
Using block algebra on the Smith predictor structure given in Fig. 4.43 we can easily show that the closed loop transfer function is

    Y(s)/R(s) = D(s) G(s) e^{−θs} / [ 1 − D(s) Ĝ(s) e^{−θ̂s} + D(s) G(s) e^{−θs} + Ĝ(s) D(s) ]        (4.82)

and, assuming our model closely approximates the true plant,

              = [ D(s) G(s) / (1 + G(s) D(s)) ] e^{−θs}    if G = Ĝ and θ = θ̂        (4.83)

The nice property of the Smith predictor is now apparent in Fig. 4.44 in that the deadtime has been cancelled from the denominator of Eqn. 4.83, and now sits outside the feedback loop.
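The collapse of Eqn. 4.82 into Eqn. 4.83 under a perfect model can be verified numerically at any frequency. A Python sketch (illustrative only; the evaluation frequency is my own choice, and the PID constants are those quoted for the example below) using complex arithmetic:

```python
import cmath

# Plant, controller and deadtime from the example of Eqn. 4.84
def G(s): return 2 / (100 * s**2 + 6.8 * s + 1)
def D(s): return 0.35 + 0.008 / s + 0.8 * s    # parallel PID, K, I, D
theta = 50.0

s = 0.03j                      # an arbitrary test frequency s = jw
E = cmath.exp(-theta * s)
# Eqn. 4.82 with a perfect model (Ghat = G, thetahat = theta) ...
lhs = D(s) * G(s) * E / (1 - D(s) * G(s) * E + D(s) * G(s) * E + G(s) * D(s))
# ... collapses to Eqn. 4.83: the deadtime sits outside the loop
rhs = D(s) * G(s) / (1 + G(s) * D(s)) * E
print(abs(lhs - rhs))          # zero, to rounding
```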
Figure 4.44: The Smith predictor structure from Fig. 4.43 assuming no model/plant mis-match: a conventional loop around D(s) and G(s), with the deadtime e^{−θs} outside the feedback loop.
For a successful implementation, the following requirements should hold:
1. The plant must be stable in the open loop.
2. There must be little or no model/plant mismatch.
3. The delay or dead time must be realisable. In a discrete controller this may just be a
shift register, but if the dead time is equivalent to a large number of sample times, or a
non-integral number of sample times, or we plan to use a continuous controller, then the
implementation of the dead time element is complicated.
Smith predictors in SIMULINK

Suppose we are to control a plant with significant deadtime,

    G_p = 2 e^{−50s} / (τ²s² + 2ζτs + 1)        (4.84)

where τ = 10, ζ = 0.34. A PID controller with tuning constants K = 0.35, I = 0.008, D = 0.8 is unlikely to manage very well, so we will investigate the improvement using a Smith predictor.

A SIMULINK implementation of a Smith predictor is given in Fig. 4.45, which closely follows the block diagram of Fig. 4.43. To turn off the deadtime compensation, simply double click on the manual switch. This will break the compensation loop, leaving just the PID controller.
Figure 4.45: A SIMULINK implementation of a Smith predictor: the process 2/(100s² + 6.8s + 1) with delay 50, the model 2/(100s² + 6.8s + 1) with model delay 50, a PID controller and a manual switch. As shown, the deadtime compensator is activated. Refer also to Fig. 4.43.
The configuration given in Fig. 4.45 shows the predictor active, but if we change the constant block to zero, the switch will toggle, removing the compensating Smith predictor, and we are left with the classical feedback controller. Fig. 4.46 highlights the usefulness of the Smith predictor, showing what happens when we turn on the predictor at t = 2100. Suddenly now the controller finds it easier to drive the process since it need not worry about the deadtime. Consequently the controlled performance improves. We could further improve the tuning when using the Smith predictor, but then in this case the uncompensated controlled response would be unstable.

Figure 4.46: Dead time compensation, showing output & setpoint (top) and input u (bottom). The Smith predictor is turned on at t = 2100 and the controlled performance subsequently rapidly improves.

In the simulation given in Fig. 4.46, there is no model/plant mis-match, but it is trivial to re-run the simulation with a different dead time in the model, and test the robustness capabilities of the Smith predictor.
Fig. 4.47 extends this application to the control of an actual plant (but with artificially added deadtime, simply because the plant has little natural deadtime) and a deliberately poor model (green box in Fig. 4.47(a)). Fig. 4.47(b) demonstrates that turning on the Smith predictor at t = 100 gives a big improvement in the controlled response. However the control input shows a little too much noise.
4.10 Tuning and sensitivity of control loops
Fig. 4.48 depicts a plant G controlled by a controller C to a reference r in the presence of process
disturbances v and measurement noise w. A good controller must achieve a number of different
aims. It should encourage the plant to follow a given setpoint, it should reject disturbances
introduced by possible upstream process upsets and minimise the effect of measurement noise.
Finally the performance should be robust to reasonable changes in plant dynamics.
The output Y(s) from Fig. 4.48 is dependent on the three inputs, R(s), V(s) and W(s), and, using block diagram simplification, is given by

    Y(s) = [CG/(1 + CG)] R(s) + [G/(1 + CG)] V(s) − [CG/(1 + CG)] W(s)        (4.85)
Figure 4.47: Deadtime compensation applied to the black box. (a) Deadtime compensation using SIMULINK and the real-time toolbox: the black box with an added transport delay, a first-order model 1/(s + 1) with a model delay, a PID controller and a switch. (b) Controlled results without, and with, deadtime compensation: the compensator is activated at time t = 100, after which the controlled performance improves.
Ideally a successful control loop is one where the error is small, where the error is E = R − Y, or

    E(s) = [1/(1 + CG)] R(s) − [G/(1 + CG)] V(s) + [CG/(1 + CG)] W(s)        (4.86)

We can simplify this expression by defining the open loop transfer function, sometimes referred to as just the loop transfer function,

    L(s) = C(s)G(s)        (4.87)

which then means Eqn. 4.86 is now

    E(s) = [1/(1 + L)] R(s) − [G/(1 + L)] V(s) + [L/(1 + L)] W(s)        (4.88)

By rearranging the blocks in Fig. 4.48, we can derive the effect of the output on the error, the input on the setpoint, and so on. These are known as sensitivity functions and they play an important role in controller design.
Figure 4.48: Closed loop with plant G(s) and controller C(s) subjected to a process disturbance v and measurement noise w. The controller acts on the error e between the reference r and the measured output to produce the input u; the controlled output is y.
The sensitivity function is defined as the transfer function from setpoint to error,

    S(s) = 1/(1 + L(s)) = G_er(s)        (4.89)

The complementary sensitivity function is defined as the transfer function from reference to output,

    T(s) = L(s)/(1 + L(s)) = G_yr(s) = −G_yw(s)        (4.90)

The disturbance sensitivity function is

    GS(s) = G(s)/(1 + L(s)) = G_yv(s)        (4.91)

and the control sensitivity function is

    CS(s) = C(s)/(1 + L(s)) = G_ur(s) = −G_uw(s)        (4.92)

The error from Eqn. 4.88 (which we desire to keep as small as practical), now in terms of the sensitivity and complementary sensitivity transfer functions, is

    E(s) = S(s)R(s) − S(s)G(s)V(s) + T(s)W(s)        (4.93)

If we are to keep the error small for a given plant G(s), then we need to design a controller C(s) to keep both S(s) and T(s) small. However there is a problem, because a direct consequence of the definitions in Eqn. 4.89 and Eqn. 4.90 is that

    S(s) + T(s) = 1
which means we cannot keep both small simultaneously. However we may be able to make S(s) small over the frequencies when R(s) is large, and T(s) small when the measurement noise W(s) dominates. In many practical control loops the servo response R(s) dominates at low frequencies, and the measurement noise dominates the high frequencies. This is known as loop shaping.

We are primarily interested in the transfer function T(s), since it describes how the output changes given changes in the reference command. At low frequencies we demand that T(s) = 1, which means that we have no offset at steady-state and that we have an integrator somewhere in the process. At high frequencies we demand that T(s) ≈ 0, which means that fast changes in the command signal are ignored.

Correspondingly we demand that, since S(s) + T(s) = 1, S(s) should be small at low frequencies, and ≈ 1 at high frequencies, such as shown in Fig. 4.49.
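This behaviour is easy to exhibit numerically. The Python sketch below (illustrative only; the plant and PI controller are my own choices, not from the text) evaluates S and T along the imaginary axis and confirms both the algebraic constraint S + T = 1 and the low/high frequency shapes:

```python
# Sensitivity S = 1/(1+L) and complementary sensitivity T = L/(1+L) for a
# loop L = C*G with an integrating PI controller: S is small at low
# frequency, T is small at high frequency, and S + T = 1 everywhere.
def L(s):
    G = 1 / (s + 1)**2     # an illustrative second-order plant
    C = 1 + 1 / (2 * s)    # PI controller, Kc = 1, taui = 2
    return C * G

def S(s): return 1 / (1 + L(s))
def T(s): return L(s) / (1 + L(s))

for w in (1e-3, 1e-1, 1e1, 1e3):
    s = 1j * w
    assert abs(S(s) + T(s) - 1) < 1e-9   # the fundamental constraint
print(abs(S(1e-3j)), abs(T(1e3j)))       # both near zero
```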
Figure 4.49: Typical sensitivity transfer functions S(s) and T(s): a Bode magnitude plot against frequency ω in rad/s. |S| is small at low frequencies and tends to 1 at high frequencies, while |T| is the complement. The maximum of |S|, namely ‖S‖∞, is marked.
The maximum values of |S| and |T| are also useful robustness measures. The maximum sensitivity function

    ‖S‖∞ = max_ω |S(jω)|        (4.94)

is marked in Fig. 4.49 and is inversely proportional to the minimal distance from the loop transfer function to the critical (−1, 0i) point. Note how the circle of radius 1/‖S‖∞ centered at (−1, 0i) just touches the Nyquist curve shown in Fig. 4.50. A large peak in the sensitivity plot in Fig. 4.49 corresponds to a small distance between the critical point and the Nyquist curve, which means that the closed loop is sensitive to modelling errors and hence not very robust. Values of ‖S‖∞ ≈ 1.7 are considered reasonable, [108].
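The reciprocal relationship between ‖S‖∞ and the distance to the critical point follows directly from |S(jω)| = 1/|1 + L(jω)|. A quick numerical check, again using a made-up stable loop transfer function as an illustrative assumption:

```python
import numpy as np

# Hypothetical stable loop for illustration: L(s) = 10/((s+1)(0.5s+1)(0.1s+1))
def L(s):
    return 10.0 / ((s + 1) * (0.5 * s + 1) * (0.1 * s + 1))

w = np.logspace(-2, 2, 20000)
Ljw = L(1j * w)
S = 1.0 / (1.0 + Ljw)              # sensitivity function

Smax = np.max(np.abs(S))           # ||S||_inf = max_w |S(jw)|, Eqn. 4.94
dmin = np.min(np.abs(Ljw + 1.0))   # closest approach of the Nyquist curve to (-1, 0i)

# Since |S(jw)| = 1/|1 + L(jw)|, the peak sensitivity is exactly the
# reciprocal of the minimum distance to the critical point.
print(Smax, 1.0 / dmin)
```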
Figure 4.50: A circle of radius 1/‖S‖∞ centered at (−1, 0i) just touches the Nyquist curve of L(s), plotted as ℑ{L(iω)} against ℜ{L(iω)}. See also Fig. 4.49.
4.11 Summary
PID controllers are simple and common in process control applications. The three terms of a PID controller relate to the magnitude of the current error, the history of the error (integral), and the current direction of the error (derivative). The integral component is necessary to remove offset, but can destabilise the closed loop. While this may be countered by adding derivative action, noise and abrupt setpoint changes may cause problems. Commercial PID controllers usually add a small time constant to the derivative component to make it physically realisable, and often only differentiate the process output, rather than the error.

Many industrial PID controllers are poorly tuned, which is clearly uneconomic. Tuning controllers can be difficult and time consuming, and there is no correct, fail-safe, idiot-proof procedure. The single experiment in closed loop (Yuwana-Seborg) is the safest way to tune a critical loop, although the calculations required are slightly more complicated than for the traditional methods, and it is designed for processes that approximate a first-order system with time delay. The tuning parameters obtained by all these methods should only be used as starting estimates. The fine tuning of the parameters is best done by an enlightened trial and error procedure on the actual equipment.
Chapter 5
Digital ltering and smoothing
Never let idle hands get in the way of the devil's work.
Basil Fawlty (circa '75)
5.1 Introduction
This chapter is concerned with the design and use of filters. Historically, analogue filters were the types of filters that electrical engineers have been inserting for many years into radios, fabricated out of passive components such as resistors, capacitors and inductors, or more recently and reliably out of operational amplifiers. When we filter something, be it home-made wine, sump oil, or undesirable material on the Internet, we are really interested in purifying, or separating the noise or unwanted from the desired.

The classical approach to filtering assumes that the useful signals lie in one frequency band, and the unwanted noise in another. Then, simply by constructing a transfer function as our filter, we can pass the wanted band (signal), while rejecting the unwanted band (noise), as depicted in Fig. 5.1.
Unfortunately, even if the frequency bands of the signal and noise are totally distinct, we still cannot effect complete separation in an online filtering operation. In practice, however, we can design the filters with such sharp cut-offs that a practical separation is achieved. More commonly the signal is not distinct from the noise (since the noise tends to inhabit all frequency bands), so now the filter designer must start to make some trade-offs.

Figure 5.1: A filter as a transfer function. A noisy input, u(t), passes through a low-pass filter, H(s), to produce a smooth output, y(t).
The more sophisticated model-based filters that require computer implementation are described in chapter 9. While the classical filters can be realised (manufactured) with passive analogue components such as resistors, capacitors, and more rarely inductors, today they are almost always implemented digitally. This chapter describes these methods. The SIGNAL PROCESSING toolbox contains many useful filter design tools which we will use in this chapter, although the workhorse function for this chapter, filter, is part of MATLAB's kernel.
5.1.1 The nature of industrial noise
Real industrial measurements are always corrupted with noise. This noise (or error) can be attributed to many causes, but even if these causes are known, the noise is usually still unpredictable. Causes for this noise can include mechanical vibrations, poor electrical connections between instrument and transducer sensor, electrical interference from other equipment, or a combination of the above. Whatever the cause, these errors are usually undesirable. Remember: if it is not unpredictable, then it is not noise.

An interesting example of the debilitating effect of noise is shown in the controlled response in Fig. 5.2. In this real control loop, the thermocouple measuring the temperature in a sterilising furnace was disturbed by a mobile phone a couple of meters away receiving a call. Under normal conditions the standard deviation of the measurement is around 0.1 degrees, which is acceptable, but with the phone operating, this increases to 2 or 3 degrees.
Figure 5.2: A noisy temperature measurement due to a nearby mobile phone causes problems in the feedback control loop. (Upper: temperature [deg C]; lower: power [%]; both against time [min]. The mobile phone is in range for part of the trend.)
We can mathematically describe the real measurement process as

    y = y* + v

where y* is the true value, which is corrupted with some other factor, v, to give the as-received measured value, y.
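This additive measurement model is trivial to simulate. The sketch below (the signal, noise level and random seed are all arbitrary choices for illustration) generates a "true" signal, corrupts it with Gaussian noise of known standard deviation, and confirms that the noise statistics are stable even though each individual sample is unpredictable.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 10.0, 1001)
y_true = np.sin(0.5 * t)                  # the true (but unknowable) signal, y*
v = rng.normal(0.0, 0.1, t.shape)         # Gaussian measurement noise, std 0.1
y_meas = y_true + v                       # the as-received measurement, y = y* + v

# The noise itself is unpredictable, but its statistics are stable:
# the sample standard deviation recovers the true noise std of 0.1.
print(np.std(y_meas - y_true))
```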
Figure 5.3: Noise, v, added to a true, but unknown, signal, y*, gives the corrupted measured signal, y.
Naturally we never know precisely either v or y*. The whole point of filtering or signal processing is to try and distinguish the signal, y*, from the noise, v. There is another reason why one should study filtering, and that is to disguise horrible experimental data. As one operator said, "I control the plant by looking at trends of the unfiltered data, but I only give filtered data to management."
While the noise itself is unpredictable (non-deterministic), it often has constant, known or predictable statistical properties. The noise is often described by a Gaussian distribution, with known and stable statistical properties such as mean and variance.

Noise is generally considered a nuisance because:

1. It is not a true reflection of the actual process variables, since the transducer, or other external factors, have introduced unwanted information.

2. It makes it difficult to differentiate the data accurately, such as say for the D part of a PID controller, or when extracting the flowrate from the rate of change of a level signal.

3. Often for process management applications, only an average value or smoothed characteristic value is required.

4. Sometimes noise is not undesirable! In some specialised circumstances noise can actually, counter-intuitively, improve the strength of a signal. See the article entitled The Benefits of Background Noise [142] for more details of this paradox.
Some care should be taken with point 1. No one really knows what the true value of the process is (except possibly Mother Nature), so the assertion that a particular time series is not true is difficult to rigorously justify. However for most applications, it is a reasonable and defendable assumption. When filtering the data for process control applications, it is often assumed that the true value is dominated by low frequency dynamics (large time constants etc.), and that anything at a high frequency is unwanted noise. This is a fair assumption for measurements taken on most equipment in the processing industries. Large industrial-sized vessels, reactors, columns etc. often have large holding volumes, large capacities, and hence slow dynamics. This tendency to favour the low frequencies by low-pass filtering everything, while common for chemical process engineers, is certainly not the norm in, say, telecommunications.

Differentiating (i.e. estimating the slope of) noisy data is difficult. Conversely, integrating noisy data is easy. Differentiating experimental data is required for the implementation of the D part of PID controllers, and in the analysis of rate data in chemical reaction control, amongst other things. Differentiation of actual industrial data from a batch reaction is demonstrated in §5.5, and as a motivating example, differentiating actual data without smoothing first is given in §5.1.2.
5.1.2 Differentiating without smoothing
As seen in chapter 4, the Proportional-Integral-Derivative controller uses the derivative component, or D part, to stabilise the control scheme and offer improved control. However in practice, the D part is rarely used in industrial applications, because of the difficulty in differentiating the raw measured signal in real-time.

The electromagnetic balance arm shown in Fig. 1.5 is a very oscillatory device that, if left uncompensated, would wobble for a considerable time. If we were to compensate it with a PID controller such that the balance arm closely follows the desired setpoint, we find that we would need substantial derivative action to counter the oscillatory poles of the plant. The problem is how we can reliably generate a derivative of a noisy signal for our PID controller.
Fig. 5.4 demonstrates experimentally what happens when we attempt to differentiate a raw and noisy measurement, using a crude backward difference with a sample time T,

    dy/dt ≈ (y_t − y_{t−1}) / T
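The noise amplification caused by this backward difference is easy to demonstrate. In the Python sketch below (the sine-wave signal, noise level and sample time are arbitrary illustrative choices), even a 1% noise level renders the raw derivative estimate useless, because differencing scales the noise standard deviation by roughly √2/T.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 0.01                                       # sample time [s]
t = np.arange(0.0, 10.0, T)
y_true = np.sin(t)                             # smooth underlying signal
y = y_true + rng.normal(0.0, 0.01, t.shape)    # add a little measurement noise

# Crude backward difference: dy/dt ~ (y_t - y_{t-1}) / T
dydt = np.diff(y) / T

# The true derivative is cos(t); the estimate is swamped by noise because
# differencing scales the noise std by roughly sqrt(2)/T = 141 here.
err = dydt - np.cos(t[1:])
print(np.std(err))    # large compared to the unit signal amplitude
```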
The upper plot shows the actual position of the balance arm under real-time closed-loop PID control using a PC with a 12-bit (4096 discrete levels) analogue input/output card. In this instance, I tuned the PID controller with a large amount of derivative action, and it is this that dominates the controller output (lower plot). While the controlled response is reasonable (the arm position follows the desired setpoint), the input is too noisy, and will eventually wear the actuators if used for any length of time. Note that this noise is a substantial fraction of the full scale input range.
Figure 5.4: The significant derivative action of a PID controller, given the slightly noisy data from an electromagnetic balance arm (solid line in upper plot, together with the setpoint), causes the controller input signal (lower plot) to exhibit very high frequencies.
One solution to minimise the excitation of the noisy input signal, whilst still retaining the differentiating characteristic of the controller, is to low-pass filter (smooth) the measured variable before we differentiate it. This is accomplished with a filter, and the design of these filters is what this chapter is all about. Alternatively the controller could be completely redesigned to avoid the derivative step.
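The benefit of smoothing before differentiating can be sketched with a simple discrete first-order lag (an exponential filter). The signal, noise level and filter time constant below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(2)

T = 0.01                                       # sample time [s]
t = np.arange(0.0, 10.0, T)
y = np.sin(t) + rng.normal(0.0, 0.01, t.shape) # noisy measurement

# Discrete first-order low-pass filter (a single lag with time constant tau)
tau = 0.1
a = T / (tau + T)
yf = np.empty_like(y)
yf[0] = y[0]
for k in range(1, len(y)):
    yf[k] = yf[k - 1] + a * (y[k] - yf[k - 1])  # exponential smoothing update

raw_d = np.diff(y) / T                          # derivative of the raw signal
filt_d = np.diff(yf) / T                        # derivative of the filtered signal

# Filtering first dramatically reduces the noise in the derivative estimate,
# at the cost of a small phase lag of roughly tau seconds.
print(np.std(raw_d - np.cos(t[1:])), np.std(filt_d - np.cos(t[1:])))
```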
5.2 Smoothing measured data using analogue filters

If we have all the data at our disposal, we can draw a smooth curve through the data. In the past, draughtsmen may have used a flexible ruler, or spline, to construct a smoothing curve by eye. Techniques such as least-squares splines or other classes of piecewise low-order polynomials could be used if a more mathematically rigorous fitting is required. Techniques such as these fall into the realm of regression, and examples using MATLAB functions such as least-squares fitting, splines, and smoothing splines are covered further in [201]. If the noise is to be discarded and information about the signal in the future is used, this technique is called smoothing. Naturally this technique can only be applied off-line after all the data is collected, since the future data must be known. A smoothing method using the Fourier transform is discussed in §5.4.5.

Unfortunately, for real-time applications such as the PID controller application in §5.1.2, we cannot look into the future and establish what the signal will be for certain. In this case we are restricted to smoothing the data using only historical output values. This type of noise rejection is called real-time filtering, and it is this aspect of filtering that is of most importance to control engineers. Common analogue filters are discussed in §5.2.3, but to implement them using digital hardware we would rather use an equivalent digital description. The conversion from the classical analogue filter to the equivalent digital filter using the bilinear transform is described in §5.3.

Finally we may wish to predict data in the future. This is called prediction.
5.2.1 A smoothing application to find the peaks and troughs

Section 4.6.3 described one way to tune a PID controller where we first subject the plant to a closed loop test with a trial gain, and then record the peaks and troughs of the resultant curve. However, as shown in the actual industrial example in Fig. 5.5(a), due to excessive noise on the thermocouple (introduced by mobile phones in the near vicinity!), it is difficult to extract the peaks and troughs from the measured data. It is especially difficult to write a robust algorithm that will do this automatically with such noise.

The problem is that to achieve a reasonable smoothing we also introduce a large lag. This probably does not affect the magnitude of the peaks and troughs, but it does affect the timing, so our estimates of the model parameters will be wrong.

One solution is to use an acausal filter which does not exhibit any phase lag. MATLAB provides a double-pass filter called filtfilt which is demonstrated in Fig. 5.5(b). Using the smooth data from this acausal filter generates reasonable estimates for both the magnitude and timing of the characteristic points needed for the tuning. The drawback is that the filtering must be done off-line.
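The forward-backward filtering idea is available outside MATLAB too. The Python/SciPy sketch below (the test signal and noise level are made up for illustration) contrasts a single-pass causal filter, which lags, with scipy.signal.filtfilt, which cancels the phase lag by filtering the record forwards and then backwards:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

t = np.linspace(0.0, 8.0, 801)                 # 100 Hz sampling
clean = np.sin(2 * np.pi * 0.5 * t)            # slow 0.5 Hz oscillation
noisy = clean + rng.normal(0.0, 0.3, t.shape)

b, a = signal.butter(5, 0.05)                  # 5th-order digital low-pass filter

causal = signal.lfilter(b, a, noisy)           # one pass: smooth but lagged
acausal = signal.filtfilt(b, a, noisy)         # two passes: smooth AND zero phase lag

# The acausal (forward-backward) filter tracks the peaks at the right time,
# so it suits off-line analysis such as extracting PID tuning points.
print(np.std(acausal - clean), np.std(causal - clean))
```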
Figure 5.5: In this industrial temperature control of a furnace, we need to smooth the raw measurement data in order to extract the characteristic points required for PID tuning. (a) To extract the key points needed for PID tuning, we must first smooth the noisy data. Unfortunately, to achieve good smoothing using a causal filter such as the 5th-order Butterworth (ωc = 0.03) used here, we introduce a large lag. (b) Using an acausal filter removes the phase lag problem.

5.2.2 Filter types

There are four common filter types: the low-pass, high-pass, band-pass and notch filters. Each of these implementations can be derived from the low-pass filter, meaning that it is only necessary to study in detail the design of the first. A comparison of the magnitude response of these filters is given in Fig. 5.8.
Low-pass filters

A linear filter is mathematically equivalent to a transfer function, just like a piece of processing equipment such as a series of mixing tanks, or distillation columns. Most of the filter applications used in the process industry are low-pass in nature. That is, ideally the filter passes all the low frequencies, and above some cut-off frequency, ωc, it attenuates the signal completely. These types of filters are used to smooth out noisy data, or to retain the long term trends, rather than the short term noise-contaminated transients.

The ideal low-pass filter is a transfer function that has zero phase lag at any input frequency, and an infinitely sharp amplitude cut-off above a specified frequency, ωc. The Bode diagram of such a filter would have a step function down to zero on the amplitude ratio plot and a horizontal line on the phase angle plot, as shown in Fig. 5.8. Clearly it is physically impossible to build such a filter. However, there are many alternatives for realisable filters that approximate this behaviour. Fig. 5.6 shows a reasonable attempt to approximate the ideal low-pass filter. Typically the customer will specify a cut-off frequency, and it is the task of the filter designer to select sensible values for the pass- and stop-band ripple and the pass- and stop-band frequencies, balancing filter performance against hardware cost.
Figure 5.6: Low-pass filter specification. The magnitude |H(iω)| must remain within the pass-band ripple up to the pass-band frequency, ωp, fall through the transition region around the corner frequency, and stay below the specified attenuation in the stop band.
The most trivial low-pass filter approximation to the ideal filter is a single lag with a cut-off frequency, ωc, of 1/τ. If you wanted a higher order filter, say an nth order filter, you could simply cascade n of these filters together as shown in Fig. 5.7,

    G_n(s) = 1 / (s/ωc + 1)^n        (5.1)
Figure 5.7: Three single low-pass filters, each 1/(s/ωc + 1), cascaded together to make a third-order low-pass filter from input u to output y.
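A quick numerical comparison hints at why the cascaded lag of Eqn. 5.1 is a poor stand-in for the ideal filter: at the nominal cut-off frequency a third-order cascade is already about 9 dB down, whereas a 3rd-order Butterworth filter (introduced in §5.2.3) is only 3 dB down. A Python sketch:

```python
import numpy as np

wc = 1.0          # cut-off frequency [rad/s]
n = 3             # filter order
w = np.logspace(-2, 2, 500)

# Cascade of n identical first-order lags, Eqn 5.1: G_n(s) = 1/(s/wc + 1)^n
G_cascade = 1.0 / (1j * w / wc + 1.0) ** n

# nth-order Butterworth magnitude for comparison: |H|^2 = 1/(1 + (w/wc)^(2n))
H_butter = 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))

# At the cut-off frequency the Butterworth filter is 3 dB down (1/sqrt(2)),
# whereas the cascaded lag is already (1/sqrt(2))^n down: a droopy passband.
i = np.argmin(np.abs(w - wc))
print(abs(G_cascade[i]), H_butter[i])
```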
Actually Eqn 5.1 is a rather poor approximation to the ideal filter, and better approximations exist for the same complexity or filter order, although they are slightly more difficult to design and implement in hardware. Two classic analogue filter designs are described in §5.2.3 following.
High-pass filters

There are, however, applications where high-pass filters are desired, although these are far more common in the signal processing and electronic engineering fields. These filters do the opposite of the low-pass filter; namely they attenuate the low frequencies and pass the high frequencies unchanged. This can be likened to the old saying "If I've already seen it, I'm not interested!" These filters remove the DC component of the signal and tend to produce the derivative of the data. You might use a high-pass filter if you wanted to remove long term trends from your data, such as seasonal effects for example.
Band-pass and notch filters

The third type of filter is a combination of the high- and low-pass filters. If these two are combined, they can form a band-pass or alternatively a notch filter. Notch filters are used to remove a particular frequency, such as the common 50 Hz hum caused by the mains power supply. Band-pass filters are used in the opposite manner, and are used in transistor radio sets so that the listener can tune into one particular station without hearing all the other stations. The difference between the expensive radios and the cheap ones is that the expensive radios have a sharp, narrow pass band so as to reduce the amount of extraneous frequency content. Oddly enough, for some tuning situations a notch filter is also used to remove neighbouring disturbing signals. Some satellite TV stations introduce into the TV signal a disturbing component to prevent non-subscribers from watching their transmission without paying for the secret decoder ring. This disturbance, which essentially makes the TV unwatchable, must of course be within the TV station's allowed frequency range. The outer band-pass filter will tune the TV to the station's signal, and a subsequent notch filter will remove the disturbing signal. Fig. 5.8 contrasts the amplitude response of these filters.
Figure 5.8: Amplitude response (AR) against frequency, ω [rad/s], for the ideal filter, and for low-pass, high-pass and band-pass filters, with the cut-off frequencies ωc, ωl and ωh marked.
Filter transformation

Whatever the type of filter we desire (high-pass, band-pass, notch etc.), they can all be obtained by first designing a normalised low-pass filter prototype as described by [159, p415]. This filter is termed normalised because it has a cutoff frequency of 1 rad/s. Then the desired filter, high-pass, band-pass or whatever, is obtained by transforming the normalised low-pass filter using the relations given in Table 5.1, where ωu is the upper cutoff and ωl is the lower cutoff frequency. This is very convenient, since we need only be able to design a low-pass filter, from which all the others can be obtained using a variable transformation. Low-pass filter design is described for the common classical analogue filter families in §5.2.3. MATLAB also uses this approach in the SIGNAL PROCESSING toolbox with what it terms analogue low-pass prototype filters.

Table 5.1: The filter transformations needed to convert from the prototype filter to other general filters. To design a general filter, replace the s in the prototype filter with one of the following transformed expressions.

    Desired filter    transformation
    low-pass          s -> s/ωu
    band-pass         s -> (s² + ωu ωl) / ( s (ωu − ωl) )
    band-stop         s -> s (ωu − ωl) / (s² + ωu ωl)
    high-pass         s -> ωu/s
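SciPy exposes these same prototype transformations. The sketch below (the order and cut-off frequency are arbitrary illustrative choices) designs a normalised low-pass Butterworth prototype and converts it to a high-pass filter via the s → ωu/s substitution from Table 5.1:

```python
import numpy as np
from scipy import signal

# Normalised 2nd-order low-pass Butterworth prototype (cutoff 1 rad/s)
z, p, k = signal.buttap(2)
b, a = signal.zpk2tf(z, p, k)

# Transform to a high-pass filter with cutoff wu, i.e. s -> wu/s (Table 5.1)
wu = 10.0
bh, ah = signal.lp2hp(b, a, wo=wu)

# Check the expected high-pass behaviour: blocks DC, passes high frequencies
w, h = signal.freqs(bh, ah, worN=[0.01, 1000.0])
print(abs(h[0]), abs(h[1]))   # tiny at low frequency, near 1 at high frequency
```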
5.2.3 Classical analogue filter families

Two common families of analogue filters superior to the simple cascaded first-order filter network are the Butterworth and the Chebyshev filters. Both these filters require slightly more computation to design, but once implemented, require no more computation time or hardware components than any other filter of the same order. Historically these were important continuous-time filters, and to use discrete filter designs we must convert the description into the z plane by using the bilinear transform.

In addition to the Butterworth and Chebyshev filters, there are two slightly less common classic analogue filters; the Chebyshev type II (sometimes called the inverse Chebyshev filter), and the elliptic or Cauer filter. All four filters are derived by approximating the sharp ideal filter in different ways. For the Butterworth, the filter is constructed in such a way as to maximise the number of derivatives set to zero at ω = 0 and ω = ∞. The Chebyshev filter exhibits minimum error over the passband, while the type II version minimises the error over the stop band. Finally the elliptic filter uses a Chebyshev mini-max approximation in both pass-band and stop-band. The calculation of the elliptic filter requires substantially more computation than the other three. The SIGNAL PROCESSING toolbox contains both discrete and continuous design functions for all four analogue filter families.
Butterworth filters

The Butterworth filter, H_B, is characterised by the following magnitude relation

    |H_B(ω)|² = 1 / ( 1 + (ω/ωc)^(2n) )        (5.2)

where n is the filter order and ωc is the desired cut-off frequency. If we substitute s = iω, we find that the 2n poles of the squared magnitude function, Eqn. 5.2, are

    s_p = (−1)^(1/(2n)) (iωc)
That is, they are spaced equally at π/n radians apart around a circle of radius ωc. Given that we desire a stable filter, we choose just the n stable poles of the squared magnitude function for H_B(s). The transfer function form of the general Butterworth filter in factored form is

    H_B(s) = ∏_{k=1}^{n} 1/(s − p_k)        (5.3)

where

    p_k = ωc exp( −iπ( 1/2 + (2k − 1)/(2n) ) )        (5.4)

Thus all the poles are equally spaced on the stable half-circle of radius ωc in the s plane.
The MATLAB function buttap, (Butterworth analogue low-pass filter prototype), uses Eqn. 5.4 to generate the poles.

We can plot the poles of a 5th order Butterworth filter after converting the zero-pole-gain form to a transfer function form, using the pole-zero map function, pzmap, or of course just plot the poles directly.

    [z,p,k] = buttap(5)    % Design a 5th order Butterworth proto-type filter
    pzmap(zpk(z,p,k))      % Look at the poles & zeros
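The same pole placement is easy to reproduce and check outside MATLAB. The Python/SciPy sketch below evaluates Eqn. 5.4 directly and confirms the poles agree with SciPy's own Butterworth prototype routine, scipy.signal.buttap:

```python
import numpy as np
from scipy import signal

n, wc = 5, 1.0

# Poles from Eqn. 5.4: p_k = wc * exp(-i*pi*(1/2 + (2k-1)/(2n))), k = 1..n
k = np.arange(1, n + 1)
p = wc * np.exp(-1j * np.pi * (0.5 + (2 * k - 1) / (2.0 * n)))

# Every pole lies on the circle of radius wc, in the stable left half-plane
assert np.allclose(np.abs(p), wc)
assert np.all(p.real < 0)

# Compare against scipy's Butterworth analogue prototype through the
# characteristic polynomial, which is independent of pole ordering
z_ref, p_ref, k_ref = signal.buttap(n)
print(np.real(np.poly(p)), np.real(np.poly(p_ref)))
```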
Note how the poles of the Butterworth filter are equally spread around the stable half circle in Fig. 5.9. Compare this circle with the ellipse of the pole-zero map for the Chebyshev filter shown in Fig. 5.10.
Figure 5.9: Note how the poles of the Butterworth filter are equally spaced on the stable half circle in the (ℜ(p), ℑ(p)) plane. See also Fig. 5.10.
Alternatively, using only real arithmetic, the n poles of the Butterworth filter are p_k = a_k ± b_k i, where

    a_k = −ωc sin θ,    b_k = ωc cos θ        (5.5)

with

    θ = π(2k − 1)/(2n)        (5.6)
Expanding the polynomial given in Eqn. 5.3 gives the general Butterworth filter template in the continuous time domain,

    H_B(s) = 1 / ( c_n (s/ωc)^n + c_{n−1} (s/ωc)^{n−1} + ··· + c_1 (s/ωc) + 1 )        (5.7)

where the c_i are the coefficients of the filter polynomial. Using Eqn 5.4, the coefficients for the general nth order Butterworth filter can be evaluated.
Taking advantage of the complex arithmetic abilities of MATLAB, we can generate the poles by typing Eqn. 5.4 almost verbatim. Note the use of 1i, (that is, the numeral 1 followed by an i with no intervening space), to ensure we get a complex number.

Listing 5.1: Designing Butterworth filters using Eqn. 5.4.

    >> n = 4;    % Filter order
    >> wc = 1.0; % Cut-off frequency, ωc
    >> k = [1:n]';
    >> p = wc*exp(-pi*1i*(0.5 + (2*k-1)/2/n)); % Poles, Eqn. 5.4
    >> c = poly(p); % should force real
    >> c = real(c)  % look at the coefficients
    c =
        1.0000    2.6131    3.4142    2.6131    1.0000
    >> [B,A] = butter(n,wc,'s'); % compare with toolbox direct method
Once we have the poles, it is a simple matter to expand the polynomial out and find the coefficients. In this instance, owing to numerical roundoff, we are left with a small complex residual which we can safely delete. The 4th order Butterworth filter with a normalised cut-off frequency is

    H_B(s) = 1 / (s⁴ + 2.6131s³ + 3.4142s² + 2.6131s + 1)

Of course using the SIGNAL PROCESSING toolbox, we could duplicate the above with a single call to butter, given as the last line in the above example.
We can design a low-pass filter at an arbitrary cut-off frequency by scaling the normalised analogue prototype filter using Table 5.1. Listing 5.2 gives the example of the design of a low-pass filter with a cut-off of f_c = 800 Hz.

Listing 5.2: Designing a low-pass Butterworth filter with a cut-off frequency of f_c = 800 Hz.

    ord = 2;                                    % Desired filter order
    [Z,P,K] = buttap(ord); Gc = tf(zpk(Z,P,K)); % 2nd-order Butterworth filter prototype
    Fc = 800;                                   % Desired cut-off frequency in [Hz]
    wc = Fc*pi*2;                               % [rad/s]

    n = [ord:-1:0];                             % Scale filter by substituting s for s/ωc
    d = wc.^n;
    Gc.den{:} = Gc.den{:}./d

    [B,A] = butter(2,wc,'s'); Gc2 = tf(B,A);    % Alternative continuous design (in MATLAB)

    p = bodeoptions; p.FreqUnits = 'Hz';        % Set Bode frequency axis units as Hz (not rad/s)
    bode(Gc,Gc2,p)
    hline(-90); vline(Fc)
A high-pass filter can be designed in much the same way, but this time substituting ωc/s for s.

Listing 5.3: Designing a high-pass Butterworth filter with a cut-off frequency of f_c = 800 Hz.

    ord = 2; [Z,P,K] = buttap(ord); Gc = tf(zpk(Z,P,K)); % Low-pass Butterworth prototype
    Fc = 800; wc = Fc*pi*2;       % Frequency specifications

    n = [0:ord]; d = wc.^n;       % Scale filter by converting s to ωc/s
    Gc.den{:} = Gc.den{:}.*d;
    Gc.num{:} = [1,zeros(1,ord)];
    bode(Gc,p); title('High pass')
The frequency response of the Butterworth filter has a reasonably flat pass-band, and then falls away monotonically in the stop band. The high frequency asymptote has a slope of −n on a log-log scale. Other filter types such as the Chebyshev (see following section) or elliptic filters are contained in the SIGNAL PROCESSING toolbox, and [42] details the use of them.
Algorithm 5.1 Butterworth filter design

Given a cut-off frequency, ωc, (in rad/s), and desired filter order, n,

1. Compute the n poles of the filter, H_B(s), using

       p_k = ωc exp( −iπ( 1/2 + (2k − 1)/(2n) ) ),    k = 1, 2, ..., n

2. Construct the filter H_B(s) either using the poles directly in factored form, or by expanding the polynomial,

       H_B(s) = 1 / poly(p_k)

3. Convert the low-pass filter to high-pass, band-pass etc. by substituting for s as given in Table 5.1.

4. Convert to a discrete filter using c2dm if desired, ensuring that the cut-off frequency is much less than the Nyquist frequency; ωc ≪ ωN = π/T.
Algorithm 5.1 suffers from the flaw that it assumes the filter designer has already selected a cut-off frequency, ωc, and filter order, n. However for practical cases, it is characteristics such as pass-band ripple and stop-band attenuation that are specified by the customer, such as given in Fig. 5.6, and not the filter order. We need some way to calculate the minimum filter order (to construct the cheapest filter) from the specified magnitude response characteristics. Once that is established, we can proceed with the filter design calculations using the above algorithm. Further descriptions of the approximate order are given in [92] and [194]. The functionally equivalent routines, buttord and cheb1ord, in the SIGNAL PROCESSING toolbox attempt to find a suitable order for a set of specified design constraints.

In the case of Butterworth filters, the filter order is given by

    n = ⌈ log10( (10^(Rp/10) − 1) / (10^(As/10) − 1) ) / ( 2 log10(ωp/ωs) ) ⌉        (5.8)

where Rp is the pass-band ripple, As is the stop-band attenuation, and ωp, ωs are the pass- and stop-band frequencies respectively. In general, the order given by the value inside the ceiling brackets in Eqn. 5.8 will not be an exact integer, so we should select the next largest integer to ensure that we meet or exceed the customer's specifications. MATLAB's ceiling function ceil is useful here.
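Eqn. 5.8 is straightforward to evaluate, and agrees with SciPy's buttord routine. The specifications below (pass-band edge 1 rad/s with 1 dB ripple, stop-band edge 3 rad/s with 40 dB attenuation) are made-up illustrative numbers:

```python
import numpy as np
from scipy import signal

# Design specs for an analogue filter: pass-band edge wp with at most Rp dB
# ripple, stop-band edge ws with at least As dB attenuation
wp, ws = 1.0, 3.0
Rp, As = 1.0, 40.0

# Eqn 5.8: n = ceil( log10( (10^(Rp/10)-1) / (10^(As/10)-1) ) / (2 log10(wp/ws)) )
n = int(np.ceil(np.log10((10**(Rp / 10) - 1) / (10**(As / 10) - 1))
                / (2 * np.log10(wp / ws))))

# scipy's buttord should agree on the minimum order
n_ref, wn = signal.buttord(wp, ws, Rp, As, analog=True)
print(n, n_ref)
```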
Problem 5.1

1. Derive the coefficients for a sixth order Butterworth filter.

2. Draw a Bode diagram for a fourth order Butterworth filter with ωc = 1 rad/s. On the same plot, draw the Bode diagram for the filter

       H(s) = 1 / (s/ωc + 1)⁴

   with the same ωc. Which filter would you choose for a low-pass filtering application?
   Hints: Look at problem A-4-5 (DCS) and use the MATLAB function bode. Plot the Bode diagrams especially carefully around the corner frequency. You may wish to use the MATLAB commands freqs or freqz.

3. Find and plot the poles of a continuous time 5th order Butterworth filter. Also plot a circle centered on the origin with a radius ωc.

4. Plot the frequency response (Bode diagram) of both a high-pass and a low-pass 4th order Butterworth filter with a cut-off frequency of 2 rad/s.
Chebyshev filters

Related to the Butterworth filter family is the Chebyshev family. The squared magnitude function of the Chebyshev filter is

    |H_C(ω)|² = 1 / ( 1 + ε² C_n²(ω/ωc) )        (5.9)

where again n is the filter order, ωc is the nominal cut-off frequency, and C_n is the nth order Chebyshev polynomial. (For a definition of Chebyshev polynomials, see for example [201], and in particular the cheb_pol function.) The design parameter ε is related to the amount of allowable passband ripple, δ,

    δ = 1 − 1/√(1 + ε²)        (5.10)

Alternatively the filter design specifications may be given in decibels of allowable passband ripple¹, r_dB,

    δ = 1 − 10^(−r_dB/20)        (5.11)

This gives the Chebyshev filter designer two degrees of freedom, (filter order n and δ, or equivalent), rather than just the order as in the Butterworth case. Similar to the Butterworth filter, the poles of the squared magnitude function, Eqn. 5.9, are located equally spaced along an ellipse in the s plane with a minor axis radius of αωc aligned along the real axis, where

    α = ( γ^(1/n) − γ^(−1/n) ) / 2        (5.12)

with

    γ = 1/ε + √(1 + 1/ε²)        (5.13)

and a major axis radius of βωc aligned along the imaginary axis, where

    β = ( γ^(1/n) + γ^(−1/n) ) / 2        (5.14)

¹ The functions in the SIGNAL PROCESSING TOOLBOX use r_dB rather than ε.
Since the poles p_k of the Chebyshev filter, H_c(s), are equally spaced along the stable half of the ellipse, (refer Fig. 5.10 for a verification of this), the real and imaginary parts are given by:

    ℜ(p_k) = αωc cos( π/2 + π(2k+1)/(2n) )
    ℑ(p_k) = βωc sin( π/2 + π(2k+1)/(2n) ),    k = 0, 1, ..., n−1        (5.15)

and the corresponding transfer function is given by

    H_c(s) = K / ∏_{k=0}^{n−1} (s − p_k)

where the gain K is chosen to make the steady-state gain equal to 1 if n is odd, or 1/√(1 + ε²) for even n. Algorithm 5.2 summarises the design of a Chebyshev filter with ripple in the passband.
Algorithm 5.2 Chebyshev type I filter design
Given a cut-off frequency, $\omega_c$, passband ripple, $\delta$ or $r_{dB}$, and filter order $n$,

1. Calculate the ripple factor
$$\epsilon = \sqrt{10^{r_{dB}/10} - 1} = \sqrt{\frac{1}{(1-\delta)^2} - 1}$$

2. Calculate the radii of the minor, $a\omega_c$, and major, $b\omega_c$, axes of the ellipse using Eqns 5.12–5.14.

3. Calculate the $n$ stable poles from Eqn. 5.15 and expand out to form the denominator of $H_c(s)$.

4. Choose the normalising gain $K$ such that at steady-state (DC gain)
$$H_c(s=0) = \begin{cases} 1/\sqrt{1+\epsilon^2}, & n \text{ even} \\ 1, & n \text{ odd} \end{cases}$$
We can calculate the gain of a transfer function using the final value theorem which, assuming a unit step, gives
$$K = \begin{cases} a_0/\sqrt{1+\epsilon^2}, & n \text{ even} \\ a_0, & n \text{ odd} \end{cases}$$
where $a_0$ is the coefficient of $s^0$ in the denominator of $H_c(s)$.
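As a cross-check of Algorithm 5.2, the pole placement of steps 1–3 can be sketched in a few lines of plain Python (an illustrative translation, not the toolbox code):

```python
import math

def cheby1_poles(n, r_dB, wc):
    """Stable poles of a type I Chebyshev filter (Algorithm 5.2, steps 1-3)."""
    eps = math.sqrt(10**(r_dB/10) - 1)        # ripple factor
    g = 1/eps + math.sqrt(1 + 1/eps**2)       # gamma, Eqn 5.13
    a = (g**(1/n) - g**(-1/n))/2              # minor axis radius / wc, Eqn 5.12
    b = (g**(1/n) + g**(-1/n))/2              # major axis radius / wc, Eqn 5.14
    poles = []
    for k in range(n):                        # Eqn 5.15
        theta = math.pi/2 + (2*k + 1)*math.pi/(2*n)
        poles.append(complex(a*wc*math.cos(theta), b*wc*math.sin(theta)))
    return a, b, poles

# the 4th-order, 3 dB ripple, wc = 2 rad/s design used in the text
a, b, poles = cheby1_poles(4, 3, 2)
```

Every pole returned has a negative real part, and each satisfies the ellipse equation $(\Re(p)/a\omega_c)^2 + (\Im(p)/b\omega_c)^2 = 1$, which is exactly the geometric picture verified in Fig. 5.10.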
We can test this algorithm by designing a Chebyshev type I filter with 3 dB ripple in the passband, ($r_{dB} = 3$ dB, or $\delta = 1 - 10^{-3/20}$), and a cross-over frequency of $\omega_c = 2$ rad/s.
Listing 5.4: Designing Chebyshev Filters
n = 4; wc = 2;             % Order & cut-off frequency, n, wc
r = 3;                     % Ripple. Note 3dB = 1/sqrt(2)
rn = 1/n;
e = sqrt(10^(r/10)-1);     % delta = 1-10^(-r/20), epsilon = sqrt(1/(1-delta)^2-1)
g = 1/e + sqrt(1 + 1/e/e); minora = (g^rn - g^(-rn))/2;
majorb = (g^rn + g^(-rn))/2;
k = [0:n-1]'; n2 = 2*n;
realp = minora*wc*cos(pi/2 + pi*(2*k+1)/n2); % real parts of the poles
imagp = majorb*wc*sin(pi/2 + pi*(2*k+1)/n2); % imaginary parts
polesc = realp + 1i*imagp; % poles of the filter
Ac = real(poly(polesc));   % force real
Bc = Ac(n+1);              % overall DC gain = 1 (n odd)
if ~rem(n,2)               % if filter order n is even
    Bc = Bc/sqrt(1+e^2);   % DC gain = 1/sqrt(1+epsilon^2)
end % if
You may like to compare this implementation with the equivalent SIGNAL PROCESSING toolbox version of the type 1 Chebyshev analogue prototype by typing type cheb1ap.
We can verify in Fig. 5.10 that the computed poles do indeed lie on the stable half of the ellipse with major and minor axes $b$ and $a$ respectively. I used the axis('equal') command for this figure, forcing the x and y axes to be equally, as opposed to conveniently, spaced to highlight the ellipse. Compare this with the poles of a Butterworth filter plotted in Fig. 5.9.
Figure 5.10: Verifying that the poles of a fourth-order analogue type 1 Chebyshev filter are equally spaced around the stable half of an ellipse. (Axes: $\Im(p)$ versus $\Re(p)$.)
As the above relations indicate, the generation of Chebyshev filter coefficients is slightly more complex than for Butterworth filters, although they are also part of the SIGNAL PROCESSING toolbox. Further design relations are given in [152, p221]. We can compare our algorithm with the equivalent toolbox function cheby1 which also designs type I Chebyshev filters. We want the continuous description, hence the optional 's' parameter. Alternatively we could use the more specialised cheb1ap function, which designs an analogue low-pass filter prototype and which is in fact called by cheby1.
Listing 5.5: Computing a Chebyshev Filter
n = 4; wc = 2;  % Order of Chebyshev filter; cut-off wc = 2 rad/s
r = 3;          % dB of ripple, 3dB = 1/sqrt(2)
[Bc,Ac] = cheby1(n,r,wc,'s'); % design a continuous filter
[Hc,w] = freqs(Bc,Ac);        % calculate frequency response
semilogx(w,abs(Hc));          % magnitude characteristics
These filter coefficients should be identical to those obtained following Algorithm 5.2. You should exercise some care with attempting to design extremely high order IIR filters. While the algorithm is reasonably stable from a numerical point of view if we restrict ourselves just to the factored form, one may see unstable poles once we have expanded the polynomial, owing to numerical round-off, for orders larger than about 40.
>> [Bc,Ac] = cheby1(50,3,1,'s'); % Very high order IIR filter
>> max(real(roots(Ac)))
ans =
    0.0162       % +ve, unstable RHP pole
>> [z,p,g] = cheb1ap(50,3);  % repeat, but only calculate poles
>> max(real(p))              % are any RHP?
ans =
   -5.5478e-004  % All poles stable
Of course in applications, while it is common to require high order for FIR filters, it is unusual that we would need larger than about $n = 10$ for IIR filters.
Fig. 5.11 compares the magnitude of both the Butterworth and Chebyshev filters and was generated using, in part, the above code. For this plot, I have broken from convention and plotted the magnitude on a linear rather than logarithmic scale to better emphasize the asymptote towards zero. Note that for even order Chebyshev filters, such as the 4th order presented in Fig. 5.11, the steady-state gain is not equal to 1, but rather 1 minus the allowable pass-band ripple. If this is a problem, one can compensate by increasing the filter gain. For odd order Chebyshev filters, the steady-state gain is 1, which perhaps is preferable.
Figure 5.11: Comparing the magnitude characteristics of 4th order Butterworth and Chebyshev filters. The Chebyshev filter is designed to have a ripple equal to 3 dB, which is approximately 30%. Both filter cut-offs are at $\omega_c = 2$ rad/s. (Linear magnitude $|H(\omega)|$ versus $\omega$ [rad/s]; curves: Chebyshev, Butterworth and the ideal filter.)
The difference between a maximally flat passband filter such as the Butterworth and an equal ripple filter such as the Chebyshev is clear. [131, p176] claim that most designers prefer the Chebyshev filter owing to the sharper transition across the stopband, provided the inevitable ripple is acceptable.
In addition to the Butterworth and Chebyshev filters, there are two more classic analogue filters: the Chebyshev type II and the elliptic or Cauer filter. All four filters are derived by approximating the sharp ideal filter in different ways. For the Butterworth, the filter is constructed in such a way as to maximise the number of derivatives equal to zero at $\omega = 0$ and $\omega = \infty$. The Chebyshev filter exhibits minimum error over the passband, while the type II version minimises the error over the stop band. Finally the elliptic filter uses a Chebyshev mini-max approximation in both passband and stop-band. The calculation of the elliptic filter requires substantially more computation than the other three, although all four can be computed using the SIGNAL PROCESSING toolbox.
5.3 Discrete filters
Analogue filters are fine for small applications and where aliasing may be a problem, but they are more expensive and more inflexible than the discrete equivalent. We would much rather fabricate a digital filter where the only difference between filter orders is a slightly longer tapped delay or shift register, and proportionally slower computation, albeit with a longer word length. Discrete filters are also easy to implement in MATLAB using the filter command.
To design a digital filter, one option is to start the design from the classical analogue filters and use a transformation such as the bilinear transform described in chapter 2 to get to the discrete domain. In fact this is the approach that MATLAB uses, in that it first designs the filter in the continuous domain, then employs bilinear to convert to the discrete domain. Digital filter design is discussed further in section 5.3.2.
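As an illustration of that analogue-first route (a hand-worked sketch, not the toolbox code), substituting the bilinear map $s = (2/T)(1-z^{-1})/(1+z^{-1})$ into a first-order low-pass prototype $H(s) = \omega_c/(s+\omega_c)$ gives a discrete filter whose coefficients can be computed directly:

```python
# Discretise H(s) = wc/(s + wc) with the bilinear transform
# s -> (2/T)(1 - z^-1)/(1 + z^-1). Working through the algebra gives
# H(z) = (b0 + b1 z^-1)/(1 + a1 z^-1) with the coefficients below.
def bilinear_first_order(wc, T):
    c = wc*T
    b0 = c/(2 + c)        # numerator coefficients
    b1 = c/(2 + c)
    a1 = (c - 2)/(2 + c)  # denominator coefficient (a0 normalised to 1)
    return b0, b1, a1

b0, b1, a1 = bilinear_first_order(2.0, 0.1)  # wc = 2 rad/s, T = 0.1 s
dc_gain = (b0 + b1)/(1 + a1)                 # evaluate H(z) at z = 1
```

Whatever the sample time, the DC gain of the discretised filter remains exactly 1, one reason the bilinear transform is popular; frequencies away from $\omega = 0$ are warped, as section 5.3.2 discusses.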
Traditionally workers in the digital signal processing (DSP) area have referred to two types of l-
ters, one where the response to an impulse is nite, the other where the response is theoretically
of innite duration. This latter recursive lter or Innite Impulse Response (IIR), is one where
previous ltered values are used to calculate the current ltered value. The former non-recursive
lter or Finite Impulse Response, (FIR) is one where the ltered value is only dependent on past
values of the input hence A(z) = 1. FIR lters nd use in some continuous signal processing
applications where the operation is to be run for very long sustained periods where the alterna-
tive IIR lters are sensitive to numerical blow-up owing to the recursive nature of the ltering
algorithm. FIR lters will never go unstable since they have only zeros which cannot cause in-
stability by themselves. FIR lters also exhibit what is known as linear phase lag, that is each
of the signals frequencies are delayed by the same amount of time. This is a desirable attribute,
especially in communications and one that a causal IIR lter cannot match.
However better lter performance is generally obtained by using IIR lters for the same number
of discrete components. In applications where the FIR lter is necessary, 50 to 150 terms are com-
mon. To convert approximately from an IIR to a FIR, divide A(z) into B(z) using long division,
and truncate the series at some suitably large value. Normally, the lter would be implemented
in a computer in the digital form as a difference equation, or perhaps in hardware using a series
of delay elements and shift registers.
5.3.1 A low-pass filtering application
Consider the situation where you are asked to filter a measurement signal from a sensitive pressure transducer in a level leg in a small tank. The level in the tank is oscillating about twice a second, but this measurement is corrupted by the mains electrical frequency of 50 Hz. Suppose we are sampling at 1 kHz ($T = 0.001$ s), and one notes that this sampling rate is sufficient to avoid aliasing problems since it is above twice the highest frequency of interest, ($2 \times 50$). We can simulate this in MATLAB. First we create a time vector t and then the two signals; one is the true measurement $y_1$, and the other is the corrupting noise from the mains, $y_2$. The actual signal, as received by your data logger, is the sum of these two signals, $y = y_1 + y_2$. The purpose of filtering is to extract the true signal $y_1$ from the corrupted $y$ as closely as possible.
t = [0:0.001:1]';        % time vector
y1 = sin(2*pi*2*t);      % true signal
y2 = 0.1*sin(2*pi*50*t); % noise
y = y1+y2;               % what we read
plot(t,y)
Clearly, even from just the time domain plot, the signal $y$ is composed of two dominant frequencies. If a frequency domain plot were to be created, it would be clearer still. To reconstruct the true level reading, we need to filter the measured signal $y$. We could design a filter with a cut-off frequency of $f_c = 30$ Hz. This should in theory pass the slower $y_1$ signal, but attenuate the corrupting $y_2$.
To implement such a digital filter we must:
1. select a filter type, say Butterworth, (refer page 201)
2. select a filter order, say 3, and
3. calculate the dimensionless frequency related to the specified frequency cutoff $f_c = 30$ Hz and the sample time (or Nyquist frequency $f_N$),
$$\frac{f_c}{f_N} = 2Tf_c = 2 \times 10^{-3} \times 30 = 0.06$$
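A quick back-of-the-envelope check (plain Python, using the ideal Butterworth magnitude $|H(f)| = 1/\sqrt{1+(f/f_c)^{2n}}$ from the previous section) confirms that a 3rd-order filter with $f_c = 30$ Hz barely touches the 2 Hz level signal while attenuating the 50 Hz mains component substantially:

```python
import math

def butter_gain(f, fc, n):
    """Magnitude of an ideal nth-order Butterworth low-pass at frequency f."""
    return 1/math.sqrt(1 + (f/fc)**(2*n))

g_signal = butter_gain(2, 30, 3)   # the 2 Hz level oscillation: ~1.0
g_mains = butter_gain(50, 30, 3)   # the 50 Hz mains corruption: ~0.21
```

The mains component survives at about 21% of its amplitude, which is why the 3rd-order result in Fig. 5.12 still looks rough, and why the 6th-order filter (gain of roughly 0.05 at 50 Hz) does noticeably better.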
We are now ready to use the SIGNAL PROCESSING TOOLBOX to design our discrete IIR filter, or perhaps build our own starting with a continuous filter and transforming to the discrete domain as described on page 203.
>> wc = 2*1.0e-3*30;    % Dimensionless cut-off frequency
>> [B,A] = butter(3,wc) % Find A(q^-1) & B(q^-1) polynomials
B =
    0.0007    0.0021    0.0021    0.0007
A =
    1.0000   -2.6236    2.3147   -0.6855
>> yf = filter(B,A,y);      % filter the measured data
>> plot(t,y,t,yf,'--',t,y1) % any better ?
Once having established the discrete filter coefficients, we can proceed to filter our raw signal and compare with the unfiltered data as shown in Fig. 5.12.
The filtered data in Fig. 5.12 looks smoother than the original, but it really is not smooth enough. We should try a higher order filter, say a 6th order filter.
[B6,A6] = butter(6,wc); % 6th order, same cut-off
yf6 = filter(B6,A6,y);  % smoother ?
plot(t,[y,yf,yf6])      % looks any better ?
Figure 5.12: Comparing 3rd and 6th order discrete Butterworth filters to smooth noisy data. (Curves: raw data, 3rd order Butterworth, 6th order Butterworth.)
This time the 6th order filtered data in Fig. 5.12 is acceptably smooth, but there is quite a noticeable phase lag such that the filtered signal is behind the original data. This is unfortunate, but shows that there must be a trade-off between smoothness and phase lag for all causal or online filters.
The difference in performance between the two digital filters can be shown graphically by plotting the frequency response of the filter. The easiest way to do this is to convert the filter to a transfer function and then use bode.
Ts = t(2)-t(1); % Sampling time, T, for the correct frequency scale
H3 = tf(B,A,Ts);
H6 = tf(B6,A6,Ts);
bode(H3,H6);    % See frequency response plot in Fig. 5.13.
Alternatively we could use freqz and construct the frequency response manually.
[h,w] = freqz(B,A,200);      % discrete frequency response
[h6,w6] = freqz(B6,A6,200);
w = w/pi*500;                % convert from rad/sample to Hz
w6 = w6/pi*500;
loglog(w,abs(h), w6,abs(h6)) % See magnitude plot in Fig. 5.13.
The sixth order filter in Fig. 5.13 has a much steeper roll-off after 30 Hz, and thus more closely approximates the ideal filter. Given only this curve, we would select the higher order (6th) filter. To plot the second half of the Bode diagram, we must unwrap the phase angle, and for convenience we will also convert the phase angle from radians to degrees.
ph = unwrap(angle(h))*360/2/pi;
ph6 = unwrap(angle(h6))*360/2/pi;
semilogx(w,ph,w6,ph6) % plot the 2nd half of the Bode diagram
The phase lag at high frequencies for the third order filter is $3 \times 90° = 270°$, while the phase lag for the sixth order filter is $6 \times 90° = 540°$. The ideal filter has zero phase lag at all frequencies.
Figure 5.13: The frequency response for a 3rd and 6th order discrete Butterworth filter. The cut-off frequency for the filters is $f_c = 30$ Hz and the Nyquist frequency is $f_N = 500$ Hz. (Upper panel: amplitude ratio; lower panel: phase angle; both versus frequency [Hz].)
The amplitude ratio plot in Fig. 5.13 asymptotes to a maximum frequency of 500 Hz. This may seem surprising since the Bode diagram of a true continuous filter does not asymptote to any limiting frequency. What has happened is that Fig. 5.13 is the plot of a discrete filter, and the limiting frequency is the Nyquist frequency, $f_N = 1/(2T)$. However the discrete Bode plot is a good approximation to the continuous filter up to about half the Nyquist frequency.
A matrix of filters
Fig. 5.14 compares the performance of 9 filters using the two-frequency-component signal from section 5.3.1. The 9 simulations in Fig. 5.14 show the performance for 3 different orders (2, 5 and 8) in the columns at 3 different cut-off frequencies (20, 50 and 100 Hz) in the rows. This figure neatly emphasizes the trade-off between smoothness and phase lag.
5.3.2 Approximating digital filters from continuous filters
As mentioned in the introduction, one way to design digital filters is to first start off with a continuous design, and then transform the filter to the digital domain. The four common classical continuous filter families, Butterworth, Chebyshev, elliptic and Bessel, are all strictly continuous filters, and if we are to implement them in a computer or a DSP chip, we must approximate these filters with a discrete filter.
To convert from a continuous filter $H(s)$ to a digital filter $H(z)$, we could use the bilinear transform as described in section 2.5.2. Since this transformation method is approximate, we will find that some frequency distortion will be introduced. However we can minimise this distortion by adjusting the frequency response of the filter $H(s)$ before we transform to the discrete approximation. This procedure is called using the bilinear transformation with frequency pre-warping.
Figure 5.14: The performance of various Butterworth filters of different orders 2, 5, and 8 (left to right) and different cut-off frequencies, $\omega_c$, of 20, 50 and 100 Hz (top to bottom) applied to a two-frequency-component signal. (Each panel overlays the filtered and raw signals.)
Frequency pre-warping
The basic idea of frequency pre-warping is that we try to modify the transformation from continuous to discrete so that $H(z)$ better approximates $H(s)$ at the frequency of interest, rather than just at $\omega = 0$. This is particularly noticeable for bandpass filtering applications.
The frequencies in the $z$ plane ($\omega_d$) are related to the frequencies in the $s$ plane ($\omega_a$) by

$$\omega_d = \frac{2}{T}\tan^{-1}\!\left(\frac{\omega_a T}{2}\right) \tag{5.16}$$

$$\omega_a = \frac{2}{T}\tan\!\left(\frac{\omega_d T}{2}\right) \tag{5.17}$$
This means that the magnitude of $H(s)$ at frequency $\omega_a$, $|H(i\omega_a)|$, is equal to the magnitude of $H(z)$ at frequency $\omega_d$, $|H(e^{i\omega_d T})|$. Suppose we wish to discretise the continuous filter $H(s)$ using the bilinear transform, but we also would like the digital approximation to have the same magnitude at the corner frequency ($\omega_c = 10$); then we must substitute using Eqn 6.13.
Example of the advantages of frequency pre-warping.
Suppose we desire to fabricate an electronic filter for a guitar tuner. In this application the guitar player strikes a possibly incorrectly tuned string, the tuner box recognises which string is struck, and then subsequently generates the correct tone. The player can then adjust the tuning of the guitar. Such a device can be fabricated in a DSP chip using a series of bandpass filters tuned to the frequencies of each string (give or take a note or two).
We will compare a continuous bandpass filter centred around a frequency of 440 Hz (concert pitch A) with digital filters using a sampling rate of 2 kHz. Since we are particularly interested in the note A, we will take special care that the digital filter approximates the continuous filter around $f = 440$ Hz. We can do this by frequency pre-warping. The frequency responses of the three filters are shown in Fig. 5.15.
[B,A] = butter(4,[420 450]*2*pi,'s'); % band pass from 420 to 450 Hz
Gc = tf(B,A) % Continuous bandpass filter

Ts = 1/2000; % sample rate 2kHz
wn = pi/Ts;  % Nyquist freq. (rad/s)

Gd = c2d(Gc,Ts,'tustin')               % Now sample it using bilinear transform
Gd_pw = c2d(Gc,Ts,'prewarp',440*2*pi)  % Now sample it with pre-warping
Fig. 5.15 highlights that the uncompensated digital filter (dashed) actually bandpasses signals in the range of 350 to 400 Hz rather than the 420 to 450 Hz required. This significant error is addressed by designing a filter to match the continuous filter at $f = 440$ Hz using frequency pre-warping (dotted).
Figure 5.15: Comparing a continuous bandpass filter with a discrete filter and a discrete filter with frequency pre-warping. Note the error in the passband when using the bilinear transform without frequency pre-warping. (Amplitude ratio versus frequency [Hz]; curves: continuous, discrete, and discrete with pre-warping.)
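The passband shift reported above can be reproduced from Eqn. 5.16 alone; a short plain-Python check (an illustrative sketch, using the 2 kHz sample rate of the example) shows where the unwarped bilinear transform actually places the 440 Hz design frequency:

```python
import math

def bilinear_warp(fa, T):
    """Digital frequency (Hz) that the bilinear transform maps an
    analogue design frequency fa (Hz) to, via Eqn. 5.16."""
    wa = 2*math.pi*fa              # analogue frequency, rad/s
    wd = (2/T)*math.atan(wa*T/2)   # Eqn. 5.16
    return wd/(2*math.pi)          # back to Hz

fd = bilinear_warp(440, 1/2000)    # roughly 385 Hz, not 440 Hz
```

The nominal 440 Hz centre lands near 385 Hz, squarely inside the 350–400 Hz band reported for the uncompensated filter; pre-warping simply pre-distorts the continuous design so that this mapping returns exactly 440 Hz. Note also that frequencies well below the Nyquist frequency are barely warped at all.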
For an online filter application, we should:
1. Decide on the cut-off frequency, $\omega_c$.
2. Select the structure of the analogue filter (Butterworth, Chebyshev, elliptic etc.). Essentially we are selecting the optimality criteria for the filter.
3. Select the filter order (number of poles).
4. Transform to the discrete domain using the bilinear transform.
All these considerations have advantages and disadvantages associated with them.
5.3.3 Efficient hardware implementation of discrete filters
To implement the discrete difference equation,
$$y_k A(z^{-1}) = B(z^{-1})u_k$$
in hardware, we can expand out to a difference equation
$$y_k = -\sum_{i=1}^{n_a} a_i y_{k-i} + \sum_{i=0}^{n_b} b_i u_{k-i} \tag{5.18}$$
meaning now we can implement this using operational amplifiers and delay elements. One scheme, following Eqn. 5.18 directly, given in Fig. 5.16, is known as the Direct Form I. It is easy to verify that the Direct Form I filter has the transfer function
$$H(z^{-1}) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n_b} z^{-n_b}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a}}$$
Figure 5.16: Hardware difference equation in Direct Form I. (Separate tapped delay lines realise the numerator $b_i$ and denominator $a_i$ coefficients.)
The square boxes in Fig. 5.16 marked $z^{-1}$ are the delay elements, and since we take off, or tap, the signal after each delay, the lines are called tapped delays.
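The same Direct Form I structure translates directly to software; a plain-Python sketch of Eqn. 5.18 (illustrative only, for a monic denominator, mirroring what MATLAB's filter command computes):

```python
def filter_df1(b, a, u):
    """Direct Form I difference equation, Eqn. 5.18.
    b, a are coefficient lists with a[0] = 1; u is the input sequence."""
    y = []
    for k in range(len(u)):
        yk = sum(b[i]*u[k-i] for i in range(len(b)) if k-i >= 0)
        yk -= sum(a[i]*y[k-i] for i in range(1, len(a)) if k-i >= 0)
        y.append(yk)
    return y

# first-order low-pass y_k = 0.5 y_{k-1} + 0.5 u_k, driven by a unit step
y = filter_df1([0.5], [1, -0.5], [1.0]*8)
```

Driven by a step, the output climbs geometrically towards the DC gain of 1, exactly as the recursion predicts.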
For the circuit designer who must fabricate Fig. 5.16, the expensive components are the delay elements since they are typically specialised integrated circuits, and one is economically motivated to achieve the same result using fewer components. Rather than use the topology where we need separate delay lines for the input sequence and the output sequence, we can achieve the identical result using only half the number of delays. This reduces fabrication cost.
Starting with the original IIR filter model
$$\frac{Y(z^{-1})}{U(z^{-1})} = \frac{B(z^{-1})}{A(z^{-1})} \tag{5.19}$$
and introducing an intermediate state, $w$,
$$\frac{Y(z^{-1})}{W(z^{-1})} \cdot \frac{W(z^{-1})}{U(z^{-1})} = \frac{B(z^{-1})}{A(z^{-1})}$$
which means that we can describe Eqn. 5.19 as one pure infinite impulse response (IIR) and one finite impulse response (FIR) process
$$Y(z^{-1}) = B(z^{-1})W(z^{-1}) \tag{5.20}$$
$$A(z^{-1})W(z^{-1}) = U(z^{-1}) \tag{5.21}$$
A block diagram following this topology is given in Fig. 5.17, which is known as Direct Form II, DFII.
Figure 5.17: An IIR filter with a minimal number of delays, Direct Form II. (A single tapped delay line of the intermediate state $w_k$ feeds both the $a_i$ and $b_i$ coefficients.)
We could have reached the same topological result by noticing that if we swapped the order of the tapped delay lines in the Direct Form I structure, Fig. 5.16, then the two delay lines would be separated by a unity gain branch. Thus we can eliminate one of the branches, and save the hardware cost of half the delays.
Further forms of the digital filter are possible and are used in special circumstances, such as when we have parallel processes or we are concerned with the quantisation of the filter coefficients. One special case is described further in section 5.3.4.
With MATLAB we would normally use the optimised filter command to simulate IIR filters, but this option is not available when using, say, C on a microprocessor or DSP chip. The following script implements the topology given in Fig. 5.17 for the example filter
$$\frac{B(z^{-1})}{A(z^{-1})} = \frac{(z^{-1} - 0.4)(z^{-1} - 0.98)}{(z^{-1} - 0.5)(z^{-1} - 0.6)(z^{-1} - 0.7)(z^{-1} - 0.8)(z^{-1} - 0.9)}$$
and a random input sequence.
B = poly([0.4 0.98]);          % Example stable process
A = poly([0.5:0.1:0.9]);
U = [ones(30,1); randn(50,1)]; % example input profile
Yx = dlsim(B,A,U);             % Test Matlab's version

% Version with 1 shift element -----------------
nb = length(B); A(1) = []; na = length(A);
nw = max(na,nb);               % remember A = A-1
shiftw = zeros(size(1:nw))';   % initialise with zeros
Y = [];
for i=1:length(U);
    w = -A*shiftw(1:na) + U(i);
    shiftw = [w; shiftw(1:nw-1)];
    Y(i,1) = B*shiftw(1:nb);
end % for

t = [0:length(U)-1]';
[tt,Ys] = stairs(t,Y); [tt,Yx] = stairs(t,Yx);
plot(tt,Ys,tt,Yx,'--'); % plotting verification
MATLAB's dlsim gives a similar output, simply shifted one unit to the right.
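A plain-Python version of the same Direct Form II idea (a sketch, assuming the denominator $A$ is monic) makes the single shared delay line explicit:

```python
def filter_df2(b, a, u):
    """Direct Form II: one delay line holding the intermediate state w,
    shared between the A(z^-1) and B(z^-1) polynomials (Fig. 5.17)."""
    n = max(len(a), len(b)) - 1
    w = [0.0]*(n + 1)          # w[0] is w_k, w[i] is w_{k-i}
    y = []
    for x in u:
        w[0] = x - sum(a[i]*w[i] for i in range(1, len(a)))   # Eqn 5.21
        y.append(sum(b[i]*w[i] for i in range(len(b))))       # Eqn 5.20
        w = [0.0] + w[:-1]     # shift the delay line
    return y

# same first-order low-pass as above: identical output from either form
y = filter_df2([0.5], [1, -0.5], [1.0]*8)
```

Either form computes exactly the same recursion $y_k = 0.5y_{k-1} + 0.5u_k$; the DFII version merely stores half the history.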
5.3.4 Numerical and quantisation effects for high-order filters
Due to commercial constraints, for many digital filtering applications we want to be able to realise cheap high-quality filters at high speed. Essentially this means implementing high-order IIR filters on low-cost hardware with short word-lengths. The problem is that high-order IIR filters are very sensitive to quantisation effects, especially when implemented in the expanded polynomial form such as DFII.
One way to reduce the effects of numerical roundoff is to rewrite the DFII filter in factored form as a cascaded collection of second-order filters, as shown in Fig. 5.18. Each individual filter is a second-order filter as shown in Fig. 5.19. These are known as bi-quad filters, or second-order sections (SOS). For an $n$th order filter we will have $\lceil n/2 \rceil$ sections; that is, for filters with an odd order, we will have one second-order section that is actually first order.
Figure 5.18: Cascaded second-order sections (SOS #1 through SOS #$n/2$) to realise a high-order filter. See also Fig. 5.19.
We can convert higher-order filters to a collection of second-order sections using the tf2sos or equivalent commands in MATLAB. You will notice that the magnitude and range of the coefficients of the SOS filter realisation are much smaller than those of the equivalent expanded polynomial filter.
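Cascading sections is just polynomial factorisation: convolving the coefficient vectors of the individual sections must reproduce the expanded polynomial. A small plain-Python illustration (the second-order factors here are hypothetical, chosen only for this sketch):

```python
def conv(p, q):
    """Multiply two polynomials given as coefficient lists
    (what cascading two filter sections does to their coefficients)."""
    r = [0.0]*(len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi*qj
    return r

# two second-order denominator factors, with poles at {0.4, 0.5} and {0.9, 0.95}
expanded = conv([1, -0.9, 0.2], [1, -1.85, 0.855])
```

Note how the factored coefficients stay within roughly ±2 while the expanded polynomial already shows larger and more widely spread values; for high orders, this spread is what makes the expanded form so sensitive to coefficient quantisation.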
Figure 5.19: A second-order section (SOS), with coefficients $b_0, b_1, b_2$ and $a_1, a_2$ around two delay elements.
The separation of the factors into groups of second-order sections for a given high-order filter is not unique, but one near-optimal way is to pair the poles nearest the unit circle with the zeros nearest those poles, and so on. This is the strategy that tf2sos uses, and it is explained in further detail in the help documentation. Listing 5.6 demonstrates how we can optimally decompose a 7th order filter into four second-order sections. Of course the first of the four second-order sections is actually a first-order section. We can see that by noticing that the coefficients $b_2$ and $a_2$ are equal to zero for the first filter section.
Listing 5.6: Converting a 7th-order Butterworth filter to 4 second-order sections
>> [B,A] = butter(7,0.8,'low'); H = tf(B,A,1) % Design a 7th order discrete DFII filter

Transfer function:
0.2363 z^7 + 1.654 z^6 + 4.963 z^5 + 8.271 z^4 + 8.271 z^3 + 4.963 z^2 + 1.654 z + 0.2363
-----------------------------------------------------------------------------------------
z^7 + 4.182 z^6 + 7.872 z^5 + 8.531 z^4 + 5.71 z^3 + 2.349 z^2 + 0.5483 z + 0.05584

>> [SOS,G] = tf2sos(B,A) % Convert 7th order polynomial to 4 second-order sections
SOS =
    1.0000    0.9945         0    1.0000    0.5095         0
    1.0000    2.0095    1.0095    1.0000    1.0578    0.3076
    1.0000    2.0026    1.0027    1.0000    1.1841    0.4636
    1.0000    1.9934    0.9934    1.0000    1.4309    0.7687
G =
    0.2363
The advantage when using second-order sections is that we can maintain reasonable accuracy even when using relatively short word lengths. Fig. 5.20(a), which is generated by Listing 5.7, illustrates that we cannot successfully run a 7th-order elliptic low-pass filter in single precision in the expanded polynomial form, (DFII), one reason being that the resultant filter is actually unstable! However we can reliably implement the same filter in single precision if we use 4 cascaded second-order sections. In fact, for this application, the single precision filter implemented in SOS is indistinguishable in Fig. 5.20(a) from the double precision implementation, given that the average error is always less than $10^{-6}$.

Figure 5.20: Comparing single precision second-order sections with filters in direct form II transposed form. Note that the direct form II filter is actually unstable when run in single precision. (a) The time response of double and single precision filters to a unit step. (b) Pole-zero maps of single and double precision filters; the single precision version has poles outside the unit circle.
Listing 5.7: Comparing DFII and SOS digital filters in single precision.
[B,A] = ellip(7,0.5,20,0.08); % Design a 7th order elliptic filter
U = ones(100,1);              % Consider a step response
Y = filter(B,A,U);            % Compute filter in double precision
Ys = filter(single(B), single(A), single(U)); % Compute filter in single precision

[SOS,G] = tf2sos(B,A);        % Convert to SOS (in full precision)
n = size(SOS,1);              % # of sections
ysos = single(G)*U;           % Compute SOS filter in single precision
for i=1:n
    ysos = filter(single(SOS(i,1:3)), single(SOS(i,4:end)), ysos);
end
stairs([Y Ys ysos]); % Compare results in Fig. 5.20(a).
It is also clear from the pole-zero plot in Fig. 5.20(b) that the single precision DFII implementation has poles outside the unit circle due to the quantisation of the coefficients. This explains why this single precision filter is unstable.
5.4 The Fourier transform
The Fourier Transform (FT) is one of the most well known techniques in the mathematical and scientific world today. However it is also one of the most confusing subjects in engineering, partly owing to the plethora of definitions, normalising factors, and other incompatibilities found in many standard text books on the subject. The Fourier transform is covered in many books on digital signal processing, (DSP), such as [36, 131], and can also be found in mathematical physics texts such as [107, 161]. The Fourier Transform is a mathematical transform equation that takes a function in time and returns a function in frequency. The Fast Fourier Transform, or FFT, is simply an efficient way to compute this transformation.
Chapter 2 briefly mentioned the term 'spectral analysis' when discussing aliases introduced by the sampling process. Actually a very learned authority, [175, p53], tells us that 'spectral' formally means 'pertaining to a spectre or ghost', and then goes on to make jokes about ghosts and gremlins causing noisy problems. But nowadays spectral analysis means analysis based around the spectrum of something. We are going to use spectral analysis to investigate the frequency components of a measured signal, since in many cases these frequency components may reveal critical information hidden in the time domain description. This decomposition of the time domain signal into its frequency components is just one application of the FFT.
Commonly in mathematics, we can approximate any function f(t) using (amongst other things)
a Taylor series expansion, or a sine and cosine series expansion. If the original signal is periodic,
then the latter technique generally gives better results with fewer terms. This latter series is called
the Fourier series expansion.
The fundamental concept on which the Fourier transform (FT) is based is that almost any periodic wave form can be represented as a sum of sine waves of different periods and amplitudes,

$$f(t) = \sum_{n=0}^{\infty} A_n \sin(2\pi n f_0 t) + \sum_{n=0}^{\infty} B_n \cos(2\pi n f_0 t) \tag{5.22}$$
This means that we can decompose any periodic wave form (PW) with a fundamental frequency² ($f_0$) into a sum of sinusoidal waves (which is known as frequency analysis), or alternatively we can construct complicated waves from simple sine waves (which is known as frequency synthesis).
Since it is impractical to have an infinite number of terms, this series is truncated, and the right hand side of Eqn. 5.22 now only approximates f(t). Naturally, as the number of terms tends to infinity, the approximation to the true signal f(t) improves. All we have to do now is to determine the constants $A_n$ and $B_n$ for a given function f(t). These coefficients can be evaluated by
$$A_n = \frac{2}{P}\int_{-P/2}^{P/2} f(t)\sin(2\pi n f_0 t)\,dt \quad \text{and} \quad B_n = \frac{2}{P}\int_{-P/2}^{P/2} f(t)\cos(2\pi n f_0 t)\,dt \tag{5.23}$$

when $n \neq 0$. When $n = 0$, we have the special case

$$A_0 = 0, \quad \text{and} \quad B_0 = \frac{1}{P}\int_{-P/2}^{P/2} f(t)\,dt \tag{5.24}$$
In summary, f(t) tells us how the signal develops in time, while the constants $A_n$, $B_n$ give us a method to generate f(t).
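Eqn. 5.23 is easy to evaluate numerically. As a sketch (plain Python, midpoint-rule integration, and a unit square wave chosen because its coefficients are known in closed form: $A_n = 4/(n\pi)$ for odd $n$, and zero otherwise):

```python
import math

def fourier_coeffs(f, P, n, steps=20000):
    """Approximate A_n and B_n of Eqn. 5.23 by midpoint-rule integration."""
    f0, dt = 1/P, P/steps
    An = Bn = 0.0
    for k in range(steps):
        t = -P/2 + (k + 0.5)*dt
        An += f(t)*math.sin(2*math.pi*n*f0*t)*dt
        Bn += f(t)*math.cos(2*math.pi*n*f0*t)*dt
    return 2*An/P, 2*Bn/P

square = lambda t: 1.0 if math.sin(2*math.pi*t) >= 0 else -1.0  # period P = 1
A1, B1 = fourier_coeffs(square, 1.0, 1)  # expect A1 = 4/pi, B1 = 0
```

Since the square wave is odd, all the $B_n$ vanish (property 1 of the list below), and since it satisfies $f(t) = -f(t+P/2)$, the even harmonics vanish too.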
We can re-write Eqn. 5.22 as a sum of cosines and a phase angle

$$f(t) = \sum_{n=0}^{\infty} C_n \cos(2\pi n f_0 t + \phi_n) \tag{5.25}$$

²Remember the relationship between the frequency $f$, the period $P$, and the angular velocity $\omega$: $f = 1/P = \omega/2\pi$.
where the phase angle, $\phi_n$, and amplitude, $C_n$, are given by

$$\phi_n = \tan^{-1}\!\left(\frac{-A_n}{B_n}\right) \quad \text{and} \quad C_n = \sqrt{A_n^2 + B_n^2} \tag{5.26}$$
If we plot $C_n$ vs $f$, then this is called the frequency spectrum of f(t). Note that $C_n$ has the same units as $A_n$ or $B_n$, which in turn have the units of whatever the time series f(t) is measured in.
5.4.1 Fourier transform definitions
Any function f(t) that is finite in energy can be represented in the frequency domain as $F(\omega)$ via the transform

$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt \tag{5.27}$$
$$= \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt - i\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt \tag{5.28}$$

We can also convert back to the time domain by using the inverse Fourier transform,

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega t}\,d\omega$$
You may notice that the transform pair is not quite symmetrical; we have a factor of $2\pi$ in the inverse relation. If we use frequency measured in Hertz, where $\omega = 2\pi f$, rather than $\omega$, we obtain a much more symmetrical, and easier to remember, pair of equations. [161, pp381–382] comment on this, and [139, p145] give misleading relations.
The spectrum of a signal f(t) is defined as the square of the absolute value of the Fourier transform,

\Phi_f(\omega) = |F(\omega)|^2

Parseval's equation reminds us that the energy of a signal is the same whether we describe it in the frequency domain or the time domain,

\int_{-\infty}^{\infty} f^2(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |F(\omega)|^2\, d\omega
We can numerically approximate the function F(ω) by evaluating Eqn. 5.28 using a numerical integration technique such as Euler's method. Since this numerical integration is computationally expensive, and we must do this for every ω of interest, the calculation procedure is quite time consuming. I will refer to this technique as the Slow Fourier Transform (SFT). Remember that F(ω) is a complex function of angular velocity (ω), and that the result is best displayed as a graph with ω, measured in radians/s, as the independent variable.
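To make the idea concrete, here is a minimal sketch of the SFT in Python (used here purely as an illustration language alongside the book's MATLAB; the function name slow_ft and the 5 Hz test signal are my own choices, not from the text). It approximates Eqn. 5.28 with a simple rectangular-rule sum at a single trial frequency ω:

```python
import math

def slow_ft(f_samples, dt, w):
    """Approximate F(w) = integral of f(t) exp(-i w t) dt by a rectangular-rule
    sum over the sampled signal -- the 'Slow' Fourier Transform at one frequency."""
    re = sum(fk * math.cos(w * k * dt) for k, fk in enumerate(f_samples)) * dt
    im = -sum(fk * math.sin(w * k * dt) for k, fk in enumerate(f_samples)) * dt
    return complex(re, im)

# A 5 Hz cosine sampled for 1 second at 1 kHz
dt = 0.001
f = [math.cos(2 * math.pi * 5 * k * dt) for k in range(1000)]

F_at_5Hz = abs(slow_ft(f, dt, 2 * math.pi * 5))    # large: the signal lives here
F_at_12Hz = abs(slow_ft(f, dt, 2 * math.pi * 12))  # small: no content here
```

Because the sum must be repeated afresh for every ω of interest, the cost grows linearly with the number of trial frequencies, which is exactly why the FFT discussed later is preferred in practice.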
Some helpful properties
Here are some helpful properties of the Fourier transform that may make analysis easier.
1. If f(−t) = −f(t), which means that f(t) is an odd function like sin ωt, then all the cosine terms disappear.

2. If f(−t) = f(t), which means that f(t) is an even function like cos ωt, then all the sine terms disappear.

3. If f(t) = −f(t + P/2), then the series will contain only odd components; n = 1, 3, 5, . . .

4. If f(t) = f(t + P/2), then the series will contain only even components; n = 2, 4, 6, . . .

Naturally we are not at liberty to change the shape of our measured time series f(t), but we may be allowed to choose the position of the origin such that points 3 and 4 are satisfied.
The Euler relations
We can also write the Fourier series using complex quantities with the Euler relations.
e^{i\theta} = \cos\theta + i\sin\theta, \quad \text{and} \quad e^{-i\theta} = \cos\theta - i\sin\theta    (5.29)

where i = \sqrt{-1}. This implies that

\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \quad \text{and} \quad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}    (5.30)
So we can re-write Eqn 5.22 in complex notation as

f(t) = \sum_{n=-\infty}^{\infty} D_n e^{i 2\pi n f_0 t}    (5.31)

where

D_n = \frac{1}{P} \int_{-P/2}^{P/2} f(t) e^{-i 2\pi n f_0 t}\, dt    (5.32)
Example: A decomposition of a square wave

The square wave is a simple periodic function that can be written as an infinite sine and cosine series. Fig 5.21 shows the original time series f(t). We note that the original wave is:

1. Periodic, with a period of P = 1 second.

2. f(t) is an odd function such as sin t, so we expect all the cosine terms to disappear. An odd function is where f(−t) = −f(t), while an even function is where f(−t) = f(t).

Using Eqn. 5.23, we see that

A_n = \frac{1}{n\pi}\left(1 - \cos n\pi\right)

which collapses to A_n = 2/(nπ) for n = 1, 3, 5, . . . and A_n = 0 for n = 2, 4, 6, . . . . This gives the Fourier series approximation to f(t) as

f(t) = \frac{2}{\pi} \sin 2\pi t + \frac{2}{3\pi} \sin 6\pi t + \cdots + \frac{2}{(2n-1)\pi} \sin 2\pi(2n-1)t + \cdots    (5.33)

As the number of terms (n) in Eqn. 5.33 increases, the approximation improves as illustrated in Fig 5.22, which shows the Fourier approximation for 1 through 4 terms. Later in §5.4.3 we will try to do the reverse: namely, given the square wave signal, we will extract numerically the coefficients of the sine and cosine series.
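As a quick numerical cross-check of these coefficients, the following Python sketch (an illustration of Eqn. 5.23, not code from the text) integrates a square wave against the trial sines. I assume the ±1/2 amplitude implied by A_n = 2/(nπ); the square() helper and the grid size are my own choices.

```python
import math

P = 1.0        # period of the square wave [s]
N = 20000      # integration grid points over one period
dt = P / N

def square(t):
    """Square wave of amplitude +/-0.5 (the amplitude implied by A_n = 2/(n*pi))."""
    return 0.5 if (t % P) < P / 2 else -0.5

def A(n):
    """Sine coefficient A_n = (2/P) * integral over one period of f(t) sin(2 pi n f0 t) dt,
    evaluated with a simple rectangular-rule sum."""
    f0 = 1.0 / P
    return (2.0 / P) * dt * sum(
        square(k * dt) * math.sin(2 * math.pi * n * f0 * k * dt) for k in range(N))

# A(1) ~ 2/pi, A(2) ~ 0, A(3) ~ 2/(3*pi), matching Eqn. 5.33
```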
Figure 5.21: The original square wave signal f(t) (top), and an approximation to f(t) using one sine term (bottom).

Figure 5.22: The Fourier approximation to a square wave with varying number of terms (# of terms = 1 to 4).
5.4.2 Orthogonality and frequency spotting
Consider the function y = sin(3t) sin(5t), which is plotted for 2 full periods in Fig 5.23 (top). If we integrate it over one period, we can see that the positive areas cancel out the negative areas, hence the total integral is zero. Or

\int_0^{P} y(t)\, dt = 0

The only time an integral over a full period will not be zero is when the entire curve is either all positive or all negative. The only time that can happen is when the two frequencies are the same, such as in Fig. 5.23 (bottom). Thus if

y = \sin(nt)\,\sin(mt)    (5.34)

then the integral over one full period will be one of the following two results:

\int_0^{P} y(t)\, dt = 0 \quad \text{if } n \neq m    (5.35)

\int_0^{P} y(t)\, dt \neq 0 \quad \text{if } n = m    (5.36)
Figure 5.23: A plot of (top) y = sin(3t) sin(5t) and (bottom) y = sin(3t) sin(3t) = sin²(3t). In the top figure the shaded regions cancel, giving a total integral of zero for the two periods.
Fig 5.23 was created by the following MATLAB commands

t=[0:0.01:2*pi]';        % create a time vector 2 Periods long
y3=sin(3*t); y5=sin(5*t);
plot(t,[y3.*y3, y3.*y5]) % plot both curves
We can numerically integrate these curves using the sum command. The integral of the top curve in Fig. 5.23 is

sum(y3.*y5)
ans = -1.9699e-05

which is close enough to zero, while

\int_0^{2\pi} \sin^2 3t \, dt

is evaluated as

sum(y3.*y3)
ans = 314.1593

and is definitely non-zero. Note that for this example we have integrated over 2 periods, so we should divide the resultant integral by 2. This, however, does not change our conclusions. Additionally note that the simple sum command, when used as an integrator, assumes that the argument is uniformly spaced 1 time unit apart. This is not the case for this example, but the difference is only a scale factor, and again is not important for this demonstration.
Period

The period of y = sin(nt) is P = 2π/n. The period of y = sin(nt) sin(nt) can be evaluated using an expansion for sin²(nt) as

y = \sin^2(nt) = \frac{1}{2}\left(1 - \cos(2nt)\right)

thus the period is P = π/n.

The period of the general expression y = sin(nt) sin(mt), where n and m are potentially different, is found by using the expansion

y = \frac{1}{2}\left[\cos(n-m)t - \cos(n+m)t\right]

which for the example above gives

y = \sin(3t)\sin(5t) = \frac{1}{2}\left[\cos 2t - \cos 8t\right]    (5.37)

The period of the first term in Eqn 5.37 is P_1 = π and for the second term is P_2 = π/4. The total period is the smallest common multiple of these two periods, which for this case is P = π.
The property given in Eqn. 5.36 can be exploited for spectral analysis. Suppose we have measured a signal x(t) that is made up of a sine wave with a particular, but unknown, frequency, and we wish to establish this unknown frequency using spectral analysis. All we need to do is to multiply x(t) by a trial sine wave, say sin 5t, integrate over one period, and look at the result. If the integral is close to zero, then our original signal, x(t), had no sin 5t term; if however we obtained a significant non-zero integral, then we would conclude that the measured signal had a sin 5t term somewhere. Note that to obtain a complete frequency description of the signal, we must try all possible frequencies from 0 to ∞.

So in summary, the idea is to multiply the signal x(t) with different sine waves containing the frequency of interest, integrate, and then see if the result is non-zero. We could plot the integral result against the frequency, thus producing a spectral plot of x(t).
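The multiply-and-integrate idea above can be sketched in a few lines of Python (illustrative only; the function name correlate_freq and the test frequencies are my own assumptions, not from the text):

```python
import math

def correlate_freq(x_samples, dt, trial_w):
    """Multiply x(t) by sin(trial_w * t) and integrate over the record:
    a clearly non-zero result suggests x(t) contains that frequency."""
    return dt * sum(xk * math.sin(trial_w * k * dt)
                    for k, xk in enumerate(x_samples))

# 'Unknown' signal: actually sin(5t), sampled over one common period 0..2*pi
dt = 0.001
n = int(2 * math.pi / dt)
x = [math.sin(5 * k * dt) for k in range(n)]

match = correlate_freq(x, dt, 5.0)     # trial frequency present: about pi
no_match = correlate_freq(x, dt, 3.0)  # trial frequency absent: about zero
```

Sweeping trial_w over a grid of frequencies and plotting the result against trial_w produces exactly the crude spectral plot described above.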
5.4.3 Using MATLAB's FFT function

MATLAB provides a convenient way to analyse the frequency components of a finite sampled signal by using the built-in fft function. This is termed spectral analysis and is at its simplest just a plot of the magnitude of the Fourier transform of the time series, or abs(fft(x)). The complexities are due to the scaling of the plot, possible windowing to improve the frequency resolution at the expense of the magnitude value (or vice-versa), and possible adjustment for non-periodic signals.

The SIGNAL PROCESSING toolbox includes a command psd which is an improved version of the example given in Technical Support Note #1702 available from www.mathworks.com/support/tech-notes/17

Other functions to aid the spectral analysis of systems are spectrum and the associated specplot, and etfe (empirical transfer function estimate) from the SYSTEM IDENTIFICATION toolbox. All of these routines perform basically the same task.

To look at how we can perform a spectral decomposition, let us start with a simple example where we know the answer, namely

x(t) = 4.5 \sin(2\pi \cdot 200\, t)

It is immediately apparent that at a frequency of 200 Hz, we should observe a single Fourier coefficient with a magnitude of 4.5 while all others are zero. Our aim is to verify this using MATLAB's fft routine.

The function spsd (scaled PSD) given in Listing 5.8 will use the Fourier transform to decompose the signal into different frequencies and display this information. This function was adapted from the version given in Technical Support Note #1702 available at the MATHWORKS ftp server, and further details are given in the note. Since we have assumed that the input signal is real, we know that the Fourier transform will be conjugate symmetrical about the Nyquist frequency, thus we can throw the upper half away. Doing this, we must multiply the magnitude by 2 to maintain the correct size. MATLAB also normalises the transform by dividing by the number of data points, so we must also account for this in the magnitude. Since the FFT works much faster when the series length is a power of two, we automatically pad with zeros up to the next power of 2. In the example following, I generate 1000 samples, so 24 zeros are added to make up the difference to 2^{10} = 1024.
Listing 5.8: Routine to compute the power spectral density plot of a time series

function [MX,f] = spsd(x,dt);
% [MX,f] = spsd(x,[dt])
% Power spectral plot of x(t) sampled at dt [s], (default dt=1).
% Output: Power of x at frequency, f [Hz]
% See also: PSD.M, SPECTRUM, SPECPLOT, ETFE

if nargin < 2, dt = 1.0; end     % default sample time
Fs = 1/dt; Fn = Fs/2;            % sampling & Nyquist frequency [Hz]
NFFT=2.^nextpow2(length(x));     % NFFT=2.^(ceil(log(length(x))/log(2)));
FFTX=fft(x,NFFT);                % Take fft, padding with zeros
NumUniquePts = ceil((NFFT+1)/2);

FFTX=FFTX(1:NumUniquePts);       % fft is symmetric, throw away upper half
MX=2*abs(FFTX);                  % Take magnitude of X, & x2 since threw half away
MX(1)=MX(1)/2;                   % Account for endpoint uniqueness
MX(length(MX))=MX(length(MX))/2; % We know # of samples is even
MX=MX/length(x);                 % scale for length

f=(0:NumUniquePts-1)*2*Fn/NFFT;  % frequency Hz

if nargout == 0,                 % do plot if requested
  plot(f,MX); xlabel('frequency [Hz]');
end % if
return % SPSD.M
We are now ready to try our test signal to verify spsd.m given in Listing 5.8.

Fs = 1000;                % Sampling freq, Fs, in Hz
t=0:1/Fs:1;               % Time vector sampled at Fs for 1 second
x = 4.5*sin(2*pi*t*200)'; % scaled sine wave
spsd(x,1/Fs)              % Does f = 200, A = 4.5?

We should expect to see a single distinct spike at f = 200 Hz with a height of 4.5.
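The scaling logic inside spsd.m (keep the lower half of the spectrum, double the magnitudes, divide by the record length) can be mimicked with a plain O(N²) DFT in Python. This is only a sketch to show where the 4.5 comes from; the dft helper and the 0.25 s record length are my own choices:

```python
import cmath
import math

def dft(x):
    """Plain O(N^2) DFT: X[k] = sum_n x[n] * exp(-i 2 pi k n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

Fs = 1000                                   # sampling frequency [Hz]
N = 250                                     # 0.25 s of data -> bins 4 Hz apart
x = [4.5 * math.sin(2 * math.pi * 200 * n / Fs) for n in range(N)]

X = dft(x)
half = N // 2
mag = [2 * abs(Xk) / N for Xk in X[:half]]  # single-sided, amplitude scaled
mag[0] /= 2                                 # the DC bin must not be doubled
freqs = [k * Fs / N for k in range(half)]

peak_bin = max(range(half), key=lambda k: mag[k])
```

Since 200 Hz sits exactly on a bin (50 whole cycles fit in the record), the peak lands at freqs[peak_bin] = 200 Hz with magnitude 4.5, just as spsd produces.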
With the power spectrum density function spsd, we can also analyse the frequency components of a square wave. We can generate a square wave using the square function in a manner similar to that of generating a sine wave, or simply wrap the signum function around a sine wave.

Fs = 30; t=0:1/Fs:20;   % Sampling freq, Fs, in Hz & time vector for 20 seconds
s = 0.5*square(2*pi*t); % Square wave
spsd(s,1/Fs)            % See odd spikes ??
pwelch(s,[],[],[],Fs);  % Try toolbox version

[Figure: Welch Power Spectral Density Estimate; power/frequency (dB/Hz) versus frequency (Hz)]

Note that all the dominant frequency components appear at the odd frequencies (1, 3, 5, 7, . . . Hz), and that the frequency components at the even frequencies are suppressed, as anticipated from §5.4.1. Of course they theoretically should be zero, but the discretisation, finite signal length and sampling etc. introduce small errors here.
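We can confirm this odd-harmonic structure without any toolbox. The Python sketch below (illustrative only; the sampling choices are mine) evaluates individual DFT bins of a ±0.5 square wave and compares the 1 Hz, 2 Hz and 3 Hz components:

```python
import cmath
import math

Fs = 30                    # sampling frequency [Hz]
N = 120                    # 4 s of data -> bins 0.25 Hz apart
# 1 Hz square wave of amplitude +/-0.5 (30 samples per period)
s = [0.5 if (n % 30) < 15 else -0.5 for n in range(N)]

def dft_bin(x, k):
    """One bin of the DFT: X[k] = sum_n x[n] * exp(-i 2 pi k n / N)."""
    Nx = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / Nx) for n in range(Nx))

m1 = abs(dft_bin(s, 4))    # 1 Hz bin (4 x 0.25 Hz): fundamental, large
m2 = abs(dft_bin(s, 8))    # 2 Hz bin: even harmonic, essentially zero
m3 = abs(dft_bin(s, 12))   # 3 Hz bin: odd harmonic, non-zero
```

Because this sampled square wave is exactly half-wave antisymmetric (shifting by half a period flips its sign), the even-harmonic bins vanish to machine precision, while the odd harmonics remain, in line with property 3 of §5.4.1.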
5.4.4 Periodogram

There exists a parallel universe to the time domain called the frequency domain, and all the operations we can do in one domain can be performed in the other. Our gateway to this frequency domain, as previously noted, is the Fast Fourier Transform, known colloquially as the FFT. Instead of plotting the signal in time, we could plot the frequency spectrum of the signal. That is, we plot the power of the signal as a function of frequency. The power of the signal y(t) over a frequency band is defined as the absolute square of the Fourier transform;

\Phi_y(\omega) \stackrel{\text{def}}{=} |Y(\omega)|^2 = \left| \int_{-\infty}^{\infty} y(t) e^{-i\omega t}\, dt \right|^2

When we plot the power as a function of frequency, it is much like a histogram where each frequency bin contains some power. This plot is sometimes referred to as a periodogram.
Constructing the periodogram for real data can give rather messy, inconclusive and widely fluctuating results. For a data series, you may find that different time segments of the series give different looking spectra, even when you suspect that the true underlying frequency characteristics are unchanged. One way around this is to compute the spectra or periodogram for different segments and average the results, or you could use a windowing function. The spa and etfe functions in the SYSTEM IDENTIFICATION toolbox attempt to construct periodograms from logged data.
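A bare-bones version of this segment-averaging idea (the essence of what pwelch does, without windowing or overlap; all names and numbers here are my own illustration) looks like this:

```python
import cmath
import math
import random

def periodogram(x):
    """|X[k]|^2 / N for the lower half of a plain O(N^2) DFT of one segment."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))**2 / N
            for k in range(N // 2)]

random.seed(1)
Fs, seg, nseg = 32, 32, 6
# An 8 Hz sine buried in noise; six segments' worth of data
x = [math.sin(2 * math.pi * 8 * n / Fs) + random.gauss(0, 0.5)
     for n in range(nseg * seg)]

# Average the periodograms of the non-overlapping segments
pgrams = [periodogram(x[i * seg:(i + 1) * seg]) for i in range(nseg)]
avg = [sum(p[k] for p in pgrams) / nseg for k in range(seg // 2)]

peak = max(range(seg // 2), key=lambda k: avg[k])  # with Fs = seg, bin k is k Hz
```

Averaging reduces the variance of each noisy bin while the signal bin stays put, so the 8 Hz peak stands out clearly even though any single segment's periodogram fluctuates.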
Problem 5.2 Fig. 5.24 trends the monthly critical radio frequencies in Washington, D.C. from May 1934 until April 1954. These frequencies reflect the highest radio frequency that can be used for broadcasting.

Figure 5.24: Critical radio frequencies (critical frequency versus year, 1935–1955)

This data is available from the collection of classic time series data archived in: http://www-personal.buseco.monash.edu (See also Problem 6.3 for further examples using data from this collection.)
1. Find the dominant periods (in days) and amplitudes of this series.
2. Can you hypothesise any natural phenomena that could account for this periodicity?
5.4.5 Fourier smoothing

It is very easy to smooth data in the Fourier domain. You simply take all your data, FFT it, multiply this transformed data by the filter function H(f), and then do the inverse FFT (IFFT) to get back into the time domain. The computationally difficult parts are the FFT and IFFT, but this is easily achieved in MATLAB or in custom integrated circuits. Note that this section is called smoothing as opposed to online filtering, because these smoothing filters are acausal, since future data is required to smooth present data as well as past data. Filters that only operate on past data are referred to as causal filters. Acausal filters tend to perform better than causal filters as they exhibit less phase lag, but they cannot be used in real-time applications. In process control, physical realisation constraints mean that only causal filters are possible. However in the audio world (compact disc and digital tapes), acausal filters are used since a time delay is not noticeable and hence allowed. Surprisingly, time delays are allowed on telephone communications as well. Numerical Recipes [161] has a good discussion of the advantages of Fourier smoothing.

Let us smooth the measurement data given in §5.3.1 and compare the smoothed signal with the online filtered signal. This time however we will deliberately generate 2^{10} = 1024 data pairs rather than an even thousand so that we can take advantage of the fast Fourier transform algorithm. It is still possible to evaluate the Fourier transform of a series with a non-power-of-2 number of pairs, but the calculation is much slower, so its use is discouraged.
Again we will generate a measurement signal as

y = \sin(2\pi f_1 t) + \frac{1}{10} \sin(2\pi f_2 t)

where f_1 = 2 Hz and f_2 = 50 Hz as before. We will generate 2^{10} data pairs at a sampling frequency of 200 Hz.

t = 1/200*[0:pow2(10)-1]';              % 2^10 data pairs sampled at 200 Hz
y = sin(2*pi*2*t) + 0.1*sin(2*pi*50*t); % Signal: y = sin(2*pi*f1*t) + (1/10)*sin(2*pi*f2*t)
Generating the Fourier transform of the data is easy. We can plot the amount of power in each frequency bin, Y(f), by typing

yp = fft(y);
plot(abs(yp).^2)     % See periodogram in Fig. 5.25
semilogy(abs(yp).^2) % clearer plot

In Fig. 5.25, we immediately note that the plot is mirrored about the central Nyquist frequency, meaning that we can ignore the right-hand side of the plot. The x-axis frequency scale is normalised, but you can easily detect the two peaks corresponding to the two dominant frequencies of 2 and 50 Hz. Since we are trying to eradicate the higher frequency, what we would desire in the power spectrum is only one spike at a frequency of f = 2 Hz. We can generate a power curve of this nature by multiplying it with the appropriate filter function H(f).
Figure 5.25: The frequency power spectrum for the level signal, with the filter function overlaid and a zoomed inset. Both the frequency axis and the power axis are normalised.
Let us generate the filter H(f) in the frequency domain. We want to pass all frequencies less than, say, 30 Hz, and attenuate all frequencies above 30 Hz. We do this by constructing a vector where 1 corresponds to the frequencies we wish to retain, and zero to those we do not want. Then we elementwise multiply the two frequency series (which is equivalent to convolving the filter with the data in the time domain).

n = 20;
h=[ones(1,n) zeros(1,length(y)-2*n) ones(1,n)]'; % Filter H(f)
ypf = yp.*h;                                     % multiply signal & filter: Y(f)H(f)
plot([abs(yp), h, abs(ypf)])
Now we can take the inverse FFT to obtain the transformed time-domain signal, which is now our low-pass filtered signal. Technically, the inverse Fourier transform of a Fourier transform of a real signal should also return a real time series of numbers. However, owing to numerical roundoff, the computed series may contain a small complex component. In practice we can safely ignore this, and force the signal to be real.

yf = real(ifft(ypf)); % force real signal, yf = Re{FFT^-1(Y(f))}
plot(t,[y,yf])        % compare raw & smoothed in Fig. 5.26
Figure 5.26: The Fourier smoothed signal, y_f, compared to the original noisy level signal, y. Note the absence of any phase lag in the now offline smoothed signal.
The result shown in Fig. 5.26 is an acceptably smooth signal without the disturbing phase lag that was unavoidable in the online filtering example.
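The whole transform–mask–invert recipe fits in a dozen lines of Python (a sketch only: I use a slow O(N²) DFT rather than an FFT, and a 1 s record so the bins land on whole Hertz; these choices are mine, not the text's):

```python
import cmath
import math

def dft(x, sign=-1):
    """Plain DFT (sign=-1) or un-normalised inverse DFT (sign=+1)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

Fs, N = 200, 200                      # 1 s of data -> bins 1 Hz apart
t = [n / Fs for n in range(N)]
y = [math.sin(2 * math.pi * 2 * tk) + 0.1 * math.sin(2 * math.pi * 50 * tk)
     for tk in t]

cutoff = 30                           # pass below 30 Hz, stop above
H = [1.0 if (k <= cutoff or k >= N - cutoff) else 0.0 for k in range(N)]
Yf = [Yk * Hk for Yk, Hk in zip(dft(y), H)]    # Y(f) H(f)
yf = [(z / N).real for z in dft(Yf, sign=+1)]  # back to the time domain
```

Note the mask keeps both the low-frequency bins and their mirror images above the Nyquist bin, so the inverse transform stays (numerically) real, just as real(ifft(ypf)) does in the MATLAB version; the 50 Hz component is removed completely.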
Problem 5.3 Design a notch filter to band-stop 50 Hz. Test your filter using an input signal with a time-varying frequency content such as u(t) = sin(16t⁴). Plot both the original and filtered signal together. Sample around 1000 data pairs using a sampling frequency of 1000 Hz. Also plot the apparent frequency as a function of time for u(t).
5.5 Numerically differentiating industrial data

In industrial applications, we often want to know the slope of a measured process variable. For example, we may wish to know the flowrate of material into a vessel when we only measure the level of the vessel. In this case, we differentiate the level signal to calculate the flowrate. However, for industrial signals, which typically have significant noise, calculating an accurate derivative is not trivial. A crude technique such as differencing will give intolerable error. For these types of online applications, we must first smooth the data, then differentiate the smoothed data. This can be accomplished using discrete filters.
5.5.1 Establishing feedrates

This case study demonstrates one way to differentiate noisy data. For this project, we were trying to implement a model-based estimation scheme on a 1000 liter fed-batch chemical reactor. For our model, we needed to know the feedrate of the material being fed to the reactor. However, we did not have a flow meter on the feed line; instead we could measure the weight of the reactor using load cells. As the 3 hour batch progressed, feed was admitted to the reactor, and the weight increased from about 800 kg to 1200 kg. By differentiating this weight, we could at any time approximate the instantaneous feed rate. This is not an optimal arrangement, since we expect considerable error, but in this case it was all that we could do.

The weight measurement of the reactor is quite noisy. The entire reactor is mounted on load cells, and all the lines to and from the reactor (such as cooling/heating etc.) are connected with flexible couplings. Since the reactor was very exothermic, the feed rate was very small (30 g/s), and this is almost negligible compared to the combined weight of the vessel (≈3000 kg) and contents (≈1000 kg). The raw weight signal (dotted) of the vessel over the batch is shown in the upper figures in Fig. 5.28. An enlargement is given in the upper right plot. The sample time for this application was T = 6 seconds.
A crude method of differentiating

The easiest (and crudest) method to establish the feedrate is to difference the weight signal. Suppose the sampled weight signal is W(t); then the feedrate, \dot{W} = F, is approximated by

F = \dot{W} \approx \frac{W_i - W_{i-1}}{T}

But due to signal discretisation and noise, this results in an unusable mess^3. The differencing can be achieved elegantly in MATLAB using the diff command. Clearly we must first smooth the signal before differencing.
Using a discrete filter

This application requires not just smoothing, but also differentiating. We can do this in one step by combining a low pass filter, G_{lp}, with a differentiator, s, as

G_d(s) = G_{lp}(s) \cdot s    (5.38)

We can design any low pass filter (say a 3rd order Butterworth filter using the butter command), then multiply the filter polynomials by the differentiator. In the continuous domain, a differentiator is simply s, but in the discrete domain we have a number of choices how to implement this. Ogata, [148, pp 308], gives these alternatives:

s \approx \begin{cases}
  \dfrac{1 - z^{-1}}{T} & \text{Backward difference} \\[2mm]
  \dfrac{1 - z^{-1}}{z^{-1} T} & \text{Forward difference} \\[2mm]
  \dfrac{2}{T}\,\dfrac{1 - z^{-1}}{1 + z^{-1}} & \text{Center difference}
\end{cases}    (5.39)

For a causal filter, we need to choose the backward or center difference variants. I will use the backward difference. Thus Eqn 5.38 using the backward difference in the discrete domain is

G_d(z) = G_{lp}(z) \, \frac{1 - z^{-1}}{T}    (5.40)

with sample time T. Thus the denominator is multiplied by T, and the numerator gets convolved with the factor (1 - z^{-1}). We need to exercise some care in doing the convolution, because the polynomials given by butter are written in decreasing powers of z (not z^{-1}). Note however that fliplr(conv(fliplr(B),[-1,1])) is equivalent to conv(B,[1,-1]). Listing 5.9 tests this smoothing differentiator algorithm.

^3 I do not even try to plot this, as it comes out as a black rectangle.
Listing 5.9: Smoothing and differentiating a noisy signal

dt = 0.05; t = [0:dt:20]';       % Sample time T = 0.05
U = square(t/1.5);               % Input u(t)
[num,den] = ord2(2,0.45);        % Plant dynamics
y = lsim(num,den,U,t);           % simulate it ...
y = y + randn(size(y))*0.005;    % ... & add noise
dyn = gradient(y,dt);            % Do crude approx of differentiation

[B,A] = butter(3,0.1);           % Design smoother, wc = 0.1
Gd_lp = tf(B,A,dt,'variable','z^-1');  % Convert to low-pass filter, G_lp(z^-1)
Gdd = series(Gd_lp,tf([1 -1],dt,dt,'variable','z^-1'))  % Eqn. 5.40

yns = filter(B,A,y);             % Smooth y(t)
dyns = lsim(Gdd,y);              % Smooth & differentiate y(t), See Fig. 5.27
The results of the smoothing and the differentiating of the noisy signal are given in Fig. 5.27. The upper plot compares the noisy measurement with the smoothed version exhibiting the inevitable phase lag. The lower plot in Fig. 5.27 shows the crude approximation of the derivative obtained by differencing (with little phase lag, but substantial noise), and the smoothed and differentiated version.
Figure 5.27: Smoothing (upper) and differentiating (lower) a noisy measurement.
Note that even though the noise on the raw measurement is insignificant, the finite differencing amplifies the noise considerably. While the degree of smoothing can be increased by decreasing the low pass cut-off frequency, ω_c, we pay the price of increasing phase lag.
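The structure of Eqn. 5.40, low-pass first and backward difference second, can be demonstrated with even the simplest smoother. In the Python sketch below I substitute a first-order exponential filter for the Butterworth G_lp (my own simplification, not the book's design) and check that the differentiator recovers the slope of a noise-free ramp:

```python
T = 0.1      # sample time [s]
a = 0.8      # smoother pole: closer to 1 means heavier smoothing, more lag

w = [2.0 * k * T for k in range(200)]   # ramp signal with slope 2.0

ys_prev = w[0]
deriv = []
for wk in w:
    ys = a * ys_prev + (1 - a) * wk     # first-order low-pass smooth
    deriv.append((ys - ys_prev) / T)    # backward difference (1 - z^-1)/T
    ys_prev = ys

# After the start-up transient, deriv settles at the true slope of 2.0
```

The start-up transient visible in deriv is the same effect as the filter-invocation transient noted for the industrial data below: the smoother starts from an assumed initial state, not the true one.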
Fig. 5.28 shows actual industrial results using this algorithm. The right hand plots show an enlarged portion of the signal. The upper left plot shows the raw (dotted) and smoothed (solid) weight signal. The smoothed signal was obtained from the normal unity-gain Butterworth filter with a cut-off of

\omega_c = \frac{0.02}{2T} = \frac{1}{600} \text{ s}^{-1}

There is a short transient at the beginning, when the filter is first invoked, since I made no attempt to correct for the non-zero initial conditions. The lower 2 plots show the differentiated signal, in this case the feed rate, using the digital filter and differentiator. Note that since the filter is causal, there will be a phase lag, and this is evident in the filtered signal given in the top right plot. Naturally the differentiated signal will also lag behind the true feedrate. In this construction, the feed rate is occasionally negative, corresponding to a decrease in weight or a filter transient. This is physically impossible, and the derived feedrate should be chopped at zero if it were to be used further in a model based control scheme, for example.
Figure 5.28: Industrial data showing (upper) the raw & filtered weight of a fed-batch reactor vessel [kg] during a production run, and (lower) the feedrate [kg/s]
5.6 Summary

Filtering, or better still smoothing, is one way to remove unwanted noise from your data. Smoothing is best done offline if possible. The easiest way to smooth data is to decide which frequencies you wish to retain as the signal, and which frequencies you wish to discard. One then constructs a filter function, H(f), which is multiplied with the signal in the frequency domain to create the frequency-domain description of the smoothed signal. To get the filtered time signal, you transform back to the time domain. One uses the Fourier transform to convert between the time and frequency domains.

If you cannot afford the luxury of offline filtering, then one must use a causal filter. IIR filters perform better than FIRs, but both have phase lags at high frequencies and rounded edges that are deleterious to the filtered signal. The Butterworth filter is better than simple cascaded first-order filters: it has only one parameter (cut-off frequency), is relatively simple to design, and suits most applications. Other more complicated filters, such as Chebyshev or elliptic filters with correspondingly more design parameters, are easily implemented using the SIGNAL PROCESSING toolbox in MATLAB if desired. Usually for process engineering applications we are most interested in low pass filters, but in some applications where we may wish to remove slow long-term trends (adjustment for seasons, which is common in reporting unemployment figures for example), or differentiate, or remove a specific frequency such as the 50 Hz mains hum, then high-pass or notch filters are to be used. These can be designed in exactly the same way as a low pass filter, but with a variable transformation.
Chapter 6

Identification of process models
Fiedler's forecasting rules:

Forecasting is very difficult, especially if it's about the future. Consequently, when presenting a forecast: give them a number or give them a date, but never both.
6.1 The importance of system identification
Most advanced control schemes require a good process model to be effective. One option is to set the control law to be simply the inverse of the plant, so that when the inevitable disturbances occur, the controller applies an equal but opposite input to maintain a constant output. In practice this desire must be relaxed somewhat to be physically realisable, but the general idea of inserting a dynamic element (controller) into the feedback loop to obtain suitable closed loop dynamics necessitates a process model.

When we talk about models, we mean some formal way to predict the system outputs knowing the input. Fig. 6.1 shows the three components of interest: the input, the plant, and the output. If we know any two components, we can back-calculate the third.
Figure 6.1: The prediction problem: Given a model and the input, u, can we predict the output, y? (Does this match with reality?)
When convenient in this chapter, I will use the convention that inputs and associated parameters
such as the B polynomial are coloured blue, and the outputs and the A polynomial are coloured
red. As a mnemonic aid, think of a hot water heater. Disturbances will be coloured green (be-
cause sometimes they come from nature). These colour conventions may help make some of the
equations in this chapter easier to follow.
The prediction problem is: given the plant and input, one must compute the output. The reconstruction problem is: given the output and plant, one must propose a plausible input to explain the observed results. And the identification problem, the one we are concerned with in this chapter, is to propose a suitable model for the unknown plant, given input and output data.

Our aim for automatic control problems is to produce a usable model that, given typical plant input data, will predict the consequences within a reasonable tolerance. We should however remember the advice of the early English theologian and philosopher William of Occam (1285–1349), who said: "What can be accounted for by fewer assumptions is explained in vain by more." This is now known as the principle of Occam's razor, where with a razor we are encouraged to cut away from our model all that is not strictly necessary. This motivates us to seek simple, low order, low complexity, succinct, elegant models.

For the purposes of controller design we are interested in dynamic models, as opposed to simply curve-fitting algebraic models. In most applications we collect discrete data and employ digital controllers, so we are only really interested in discrete dynamic models.

Sections 3.4 and 3.4.2 revise traditional curve and model fitting, such as what you would use in algebraic model fitting. Section 6.7 extends this to fit models to dynamic data.

Good texts for system identification include the classic [32] for time series analysis, [188], and [124] accompanied with the closely related MATLAB toolbox for system identification, [125]. For identification applications in the processing industries, [127] contains the basics for modelling and estimation suitable for those with a chemical engineering background. A good collection is found in [147], and a now somewhat dated detailed survey of applications, including some interesting practical hints and observations with over 100 references, is given in [79].
6.1.1 Basic denitions
When we formulate and use engineering models, we typically have in mind a real physical process or plant that is perhaps so complex, large, or just downright inconvenient, that we want to use a model instead. We must be careful at all times to distinguish between the plant, model, data and predictions. The following terms are adapted from [188].

System, S. This is the system that we are trying to identify. It is the physical process or plant that generates the experimental data. In reality this would be the actual chemical reactor, plane, submarine or whatever. In simulation studies, we will refer to this as the truth model.

Model, M. This is the model that we wish to find such that the model predictions generated by M are sufficiently close to the actual observed system S output. Sometimes the model may be non-parametric, where it is characterised by a curve or a Bode plot for example, or the model might be characterised by a finite vector of numbers or parameters, M(θ), such as say the coefficients of a polynomial transfer function or weights in a neural network model. In either case, we can think of this as a black-box model where if we feed in the inputs and turn the handle, we will get output predictions that hopefully match the true system outputs.

Parameters, θ. These are the unknowns in the model M that we are trying to establish such that our output predictions ŷ are as close as possible to the actual output y. In a transfer function model, the parameters are the coefficients of the polynomials; in a neural network, they would be the weights of the neurons.
This then brings us to the fundamental aim of system identification: given sufficient input/output data, and a tentative model structure M with free parameters θ, we want to find reasonable values for θ such that our predictions ŷ closely follow the behaviour of our plant under test, y, as shown in Fig. 6.2.
Figure 6.2: A good model, M, duplicates the behaviour of the true plant, S. Both are driven by the same input, and the error between the plant output y and the model prediction ŷ is hopefully small.
Models can be classified as linear or nonlinear. In the context of parameter identification, which is the theme of this chapter, it is assumed that the linearity (or lack of it) is with respect to the parameters, and not to the shape of the actual model output. A linear model does not mean just a straight line, but includes all models where the model parameters θ enter linearly. If we have the general model

    y = f(θ, x)

with parameters θ and independent variables x, then if ∂y/∂θ is a constant vector independent of θ, the model is linear in the parameters; otherwise the model is termed nonlinear. Linear models can be written in vector form as

    y = f(x)ᵀθ
Just as linear models are easier to use than nonlinear models, so too are discrete models compared
with continuous models.
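Because a linear-in-the-parameters model has ∂y/∂θ independent of θ, its parameters can be found in one shot with linear least squares, even when the response is curved in x. A minimal sketch in Python (the quadratic model and the numbers are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Model y = th0 + th1*x + th2*x^2: curved in x, but linear in the parameters,
# since dy/dtheta = [1, x, x^2] does not depend on theta.
rng = np.random.default_rng(1)
theta_true = np.array([0.5, -2.0, 0.3])
x = np.linspace(0.0, 10.0, 50)
F = np.column_stack([np.ones_like(x), x, x**2])   # regressor matrix f(x)
y = F @ theta_true + 0.01 * rng.standard_normal(x.size)

# One-shot linear least squares: theta_hat = argmin ||F*theta - y||^2
theta_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
```

A model such as y = θ₁e^(−θ₂x), by contrast, has ∂y/∂θ depending on θ itself, so it is nonlinear in the parameters and requires an iterative regression.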
6.1.2 Black, white and grey box models

The two extreme approaches to modelling are sometimes termed black and white box models. The black box, or data-based model, is built exclusively from experimental data, and a deliberate boundary is cast around the model, the contents inside of which we neither know nor care about. The white box, theoretical or first-principles model, preferred by purists, contains already well-established physical and chemical relations, ideally so fundamental that no experiments are needed to characterise any free parameters.

However most industrial processes are so complex that totally fundamental models are usually not available. Here the models that are used may be partly empirical and partly fundamental. The empirical model will have some fitted parameters. Examples of this type of model are the van der Waals gas law or correlations for the friction factor in turbulent flow. Finally, if the process is so complex that no fundamental model is possible, then a black box type model may be fitted. The Wood-Berry column model described in Eqn. 3.24 is a black box model and the parameters were obtained using the technique of system identification. Particularly hopeless models, based mostly on whim and what functions happened to be lying around at the time, can be found in [77]. These are the types of models that should never have been regressed.

Of course many people have argued for many years about the relative merits of these two extreme approaches. As expected, the practising engineer tends to have the most success somewhere in the middle, which, not unexpectedly, is termed grey box modelling. The survey in [184] describes the advantages and disadvantages of these colour coded models in further detail.
6.1.3 Techniques for identification

There are two main ways to identify process models: offline, or online in real-time. In both cases however, it is unavoidable to collect response data from the system of interest as shown in Fig. 6.3. Most of the more sophisticated model identification is performed in the offline mode where all the data is collected first and only analysed later to produce a valid model. This is typically convenient since before any regressing is performed we know all the data, and we can do this in comfort in front of our own computer workstation. The offline developed model (hopefully) fits the data at the time that it was collected, but there is no guarantee that it will still be applicable for data collected sometime in the future. Process or environmental conditions may change, which may force the process to behave unexpectedly.
Figure 6.3: An experimental setup for input/output identification. A signal generator drives the plant input via a control valve, and a transducer measures the output; we log both the input and the response data to a computer for further processing.
The second way to identify models is to start the model regression analysis while you are collecting the data in real-time. Clearly this is not a preferred method when compared with offline identification, but in some cases which we will study later, such as in adaptive control, or where we have little input/output data at our immediate disposal but are gradually collecting more, we must use online identification. As you collect more data, you continually update and correct the model parameters that were established previously. This is necessary when we require a model during the experiment, but the process conditions are likely to change during the period of interest. We will see in chapter 7 that online recursive least squares parameter estimation is the central core of adaptive control schemes.
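The recursive update referred to here can be sketched in a few lines. This is a generic textbook recursive least squares (RLS) step, not a listing from this book; each new data point corrects the current parameter estimate without refitting all the past data:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    eps = y - float(phi.T @ theta)          # one-step prediction error
    theta = theta + k * eps                 # correct the parameters
    P = (P - k @ phi.T @ P) / lam           # update the covariance
    return theta, P

# Identify y[k+1] = a*y[k] + b*u[k] online (a=0.8, b=0.5 assumed true values)
rng = np.random.default_rng(0)
a, b = 0.8, 0.5
u = rng.standard_normal(300)
y = np.zeros(301)
for k in range(300):
    y[k + 1] = a * y[k] + b * u[k]

theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)                         # vague prior: large covariance
for k in range(300):
    theta, P = rls_update(theta, P, np.array([y[k], u[k]]), y[k + 1])
```

With λ < 1, old data is gradually forgotten, which is what allows the estimator to track slowly drifting plants inside an adaptive control loop.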
6.2 Graphical and non-parametric model identification

Dynamic model identification is essentially curve fitting as described in section 3.4, but for dynamic systems. For model based control (our final aim in this section), it is more useful to have parametric dynamic models. These are models which have distinct parameters to be estimated in a specified structure. Conversely, non-parametric models are ones where the model is described by a continuous curve for example, such as say a Bode or Nyquist plot, or collections of models where the model structure has not been determined beforehand, but is established from the data in addition to establishing the parameters.

In both the parametric and non-parametric cases we can make predictions, which is a necessary requirement of any model, but it is far more convenient in a digital computer to use models based on a finite number of parameters rather than resort to some sort of table lookup or digitise a non-parametric curve.

How one identifies the parameters in proposed models from experimental data is mostly a matter of preference. There are two main environments in which to identify models:

1. time domain analysis, described in 6.2.1, and
2. frequency domain analysis, described in 6.2.2.
Perturb the process somehow . . . and watch what happens.
6.2.1 Time domain identification using graphical techniques

Many engineers tend to be more comfortable in the time domain, and with the introduction of computers, identification in the time domain, while tedious, is now reliable and quite straightforward. The underlying idea is to perturb the process somehow, and watch what happens. Knowing both the input and output signals, one can in principle compute the process transfer function. The principal experimental design decision is the type of exciting input signal.

Step and impulse inputs

Historically, the most popular perturbation test for visualisation and manual identification was the step test. The input step test itself is very simple, and the major response types such as first
Figure 6.4: Typical open loop step tests for a variety of different standard plants: 1/(s+1), e⁻⁴ˢ/(s+1), 1/(s²+0.8s+1), 1/s, 1/(6s³+11s²+6s+1) and (6s+2)/(3s²+4s+1).
order, underdamped second order, integrators and so forth are all easily recognisable from their step tests by the plant personnel, as illustrated in Fig. 6.4.

Probably the most common model structure for industrial plants is the first-order plus deadtime model,

    Gp(s) = K e^(−θs)/(τs + 1)    (6.1)

where we are to extract reasonable values for the three parameters: plant gain K, time constant τ, and deadtime θ. There are a number of strategies to identify these parameters, but one of the more robust is called the Method of Areas, [120, pp32–33].
Algorithm 6.1 FOPDT identification from a step response using the Method of Areas.

Collect some step response data (u, y) from the plant, then:

1. Identify the plant gain,

       K = (y(∞) − y(0)) / (u(∞) − u(0))

   and normalise the experimental data by the input step size to obtain a unit-step response,

       y_us = (y − y(0)) / (u(∞) − u(0))

2. Compute the area

       A₀ = ∫₀^∞ (K − y_us(t)) dt    (6.2)

3. Compute the time t₁ = A₀/K and then compute a second area

       A₁ = ∫₀^t₁ y_us(t) dt    (6.3)

   as illustrated in Fig. 6.5.

4. Now the two remaining parameters of the proposed plant model in Eqn. 6.1 are

       τ = eA₁/K    and    θ = (A₀ − eA₁)/K    (6.4)

   where e = exp(1).
Figure 6.5: Areas method for a first-order plus time delay model identification, showing the area A₀ above the response curve and the area A₁ under the curve up to time t₁.
Listing 6.1 gives a simple MATLAB routine to do this calculation and Fig. 6.5 illustrates the integrals to be computed. Examples of the identification are given in Fig. 6.6 for a variety of simulated plants (1st and 2nd order, and an integrator) with different levels of measurement noise superimposed. Fig. 6.7 shows the experimental results from a step test applied to the blackbox filter with the identified model.
Listing 6.1: Identification of a first-order plant with deadtime from an open-loop step response using the Areas method from Algorithm 6.1.

Gp = tf(3,[7 2 1],'iodelay',2); % True plant: Gp(s) = 3e^{-2s}/(7s^2 + 2s + 1)
[y,t] = step(Gp);               % Do step experiment to generate data, see Fig. 6.5.

% Calculate areas A0 & A1 as shown in Fig. 6.5.
K = y(end);                     % plant gain
A0 = trapz(t,K-y);
t0 = A0/K;
idx = find(t<t0);
t1 = t(idx); y1 = y(idx);
A1 = trapz(t1,y1);

% Create process model Gp(s)
tau = exp(1)*A1/K;
Dest = max(0,(A0-exp(1)*A1)/K); % deadtime
Gm = tf(K,[tau 1],'iodelay',Dest);

step(Gp,Gm)                     % Compare model with plant, see Fig. 6.5.
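For readers without the Control System Toolbox, the same Method of Areas can be coded directly on raw step-response samples. A Python sketch, where the synthetic K = 3, τ = 7, θ = 2 test plant mirrors the listing above but is an assumed, noise-free test case:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def method_of_areas(t, y, du=1.0):
    """Fit K, tau, theta of Gp(s) = K e^{-theta s}/(tau s + 1) to step data."""
    K = (y[-1] - y[0]) / du                  # plant gain
    y_us = (y - y[0]) / du                   # normalise by the input step size
    A0 = trapz(K - y_us, t)                  # area above the response curve
    t1 = A0 / K
    m = t < t1
    A1 = trapz(y_us[m], t[m])                # area under the curve up to t1
    tau = np.e * A1 / K
    theta = max(0.0, (A0 - np.e * A1) / K)   # deadtime
    return K, tau, theta

# Noise-free synthetic FOPDT step response with K=3, tau=7, theta=2
t = np.linspace(0.0, 80.0, 4001)
y = np.where(t < 2.0, 0.0, 3.0 * (1.0 - np.exp(-(t - 2.0) / 7.0)))
K, tau, theta = method_of_areas(t, y)
```

Note that the area calculation assumes the response has effectively settled by the end of the record; truncating the experiment too early biases A₀ and hence the deadtime estimate.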
Figure 6.6: The Areas method for plant identification applied to a variety of different plants with random disturbances. Each panel overlays the plant response and the fitted model.

Figure 6.7: Identification of the experimental Blackbox using the Method of Areas. Note that the identification strategy is reasonably robust to the noise and outlier in the experimental data.
Of course not all plants are adequately modelled using a first-order plus deadtime model. For example, Fig. 6.8 shows the experimental results from a step test applied to the electromagnetic balance arm, which is clearly second order with poles very close to the imaginary axis.

However many plant operators do not appreciate the control engineer attempting to step test the plant under open loop conditions, since it necessarily causes periods of off-specification product at best and an unstable plant at worst. An alternative to the step test is the impulse test, where the plant will also produce off-spec product for a limited period,¹ but at least it returns to the same operating point even using limited precision equipment. This is important if the plant exhibits severe nonlinearities and the choice of operating point is crucial. The impulse response is ideal for frequency domain testing since it contains equal amounts of stimulation at all frequencies, but is difficult to approximate in practice. Given that we must then accept a finite height pulse with finite slope, we may be limited as to the amount of energy we can inject into the system without hitting a limiting nonlinearity. The impulse test is more difficult to analyse using manual graphical techniques (pencil and graph paper), as opposed to computerised regression techniques, than the step test.

¹Provided the plant does not contain an integrator.

Figure 6.8: Two step tests subjected to the balance arm. Note the slow decay of the oscillations indicates that the plant has at least two complex conjugate poles very close to the stable half of the imaginary axis.
Random inputs

An alternative to the step or impulse tests is actually not giving any deliberate input to the plant at all, but just relying on the ever-present natural input disturbances. This is a popular approach when one still wants to perform an identification, but cannot, or is unable to, influence the system themselves. Economists, astronomers and historians fall into this group of armchair-sitting, non-interventionist identificationalists. This is sometimes referred to as time series analysis (TSA), see [32] and [54].

Typically for this scheme to work in practice, one must additionally consciously perturb the input (control valve, heating coils etc.) in some random manner, since the natural disturbance may not normally be sufficient in either magnitude, or even frequency content, by itself. In addition, it is more complex to check the quality of the model solution from comparison with the raw time series analysis. The plant model analysed could be anything, and from a visual inspection of the raw data it is difficult to verify that the solution is a good one. The computation required for this type of analysis is more complex than graphical curve fitting, and the numerical algorithms used for time series analysis are much more sensitive to the initial parameters. However, using random inputs has the advantage that they continually excite or stimulate the process, and this excitation, provided the random sequence is properly chosen, stimulates the process over a wide frequency range. For complicated models, or for low signal to noise ratios, lengthy experiments may be necessary, which is infeasible using one-off step or impulse tests. This property of continual stimulation is referred to as persistent excitation; it is a property of the input signal and should be considered in conjunction with the proposed model. See [18, §2.4] for further details.

Two obvious choices for random sequences are selecting the variable from a finite-width uniform distribution, or from a normal distribution. In practice, when using a normal distribution, one must be careful not to inject values far from the desired mean input signal, since this could cause additional unwanted process upsets.
Random binary signals

Electrical engineers have traditionally used random signals for testing various components where the signal is restricted to one of two discrete signal levels. Such a random binary signal is called an RBS. A binary signal is sometimes preferred to the normal random signal, since one avoids possible outliers, and dead zone nonlinearities when using signals with very small amplitudes. We can generate an RBS in MATLAB using the random number generator and the signum function, sign, as suggested in [124, p373]. Using the signum function could be dangerous since technically it is possible to have three values, +1, −1 and zero. To get different power spectra of our binary test signal, we could filter the white noise before we sign it.

x = randn(100,1);             % white noise
rbsx = 0.0 < x;               % binary signal

y = filter(0.2,[1 -0.8],x);   % coloured noise
rbsy = 0.0 < y;               % binary value

Compare in Fig. 6.9 the white noise (left plots) with the white noise filtered through a low-pass filter (right plots). The filter used in Fig. 6.9 is 0.2/(1 − 0.8q⁻¹).
Figure 6.9: Comparing white and filtered (coloured) random signals. Upper: normal signals; lower: two-level or binary signals.
If you fabricate one of these random signal generators in hardware using standard components such as flip-flops or XOR gates, rather than use a process control computer, then to minimise component cost you want to reduce the number of these physical components. This is done in hardware using a linear feedback shift register (LFSR), which is an efficient approximation to our (almost random) random number generator. Fig. 6.10 shows the design of one version which uses a 5-element binary shift register. Initially the register is filled with all ones, and at each clock cycle the elements all shift one position to the right, and the new element is formed by adding the 3rd and 5th elements modulo 2.

Since we have only 5 positions in the shift register in Fig. 6.10, the maximum length the sequence can possibly be before it starts repeating is 2⁵ − 1, or 31 elements. Fig. 6.11 shows a SIMULINK
Figure 6.10: A 5-element binary shift register to generate a pseudo-random binary sequence. The output is taken from the last element, and the new first element is the modulo-2 sum of the 3rd and 5th elements.

Figure 6.11: SIMULINK simulation of a 5-element binary shift register, built from unit delays and a mod-2 sum (upper), and the resulting output (lower). Note the period of 31 samples. Refer also to Fig. 6.10.
implementation and the resulting binary sequence which repeats every 31 samples.

Random number generators such as Fig. 6.10 are completely deterministic, so they are called pseudo-random binary generators since in principle anyone can regenerate the exact sequence given the starting point and the construction in Fig. 6.10. PRBSs can be used as alternatives to our random sequences for identification. These and other commonly used test and excitation signals are compared and discussed in [200, p143].
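The shift register of Fig. 6.10 is easily mimicked in software. A Python sketch of the 5-element LFSR described above (register initially all ones, output taken from the last element, new first element the modulo-2 sum of the 3rd and 5th elements):

```python
def prbs(nbits=5, taps=(3, 5), n=70):
    """Pseudo-random binary sequence from a linear feedback shift register."""
    reg = [1] * nbits                 # register initially filled with ones
    out = []
    for _ in range(n):
        out.append(reg[-1])           # the rightmost element is the output
        new = (reg[taps[0] - 1] + reg[taps[1] - 1]) % 2
        reg = [new] + reg[:-1]        # shift right, insert the new bit on the left
    return out

seq = prbs()
```

Because these taps give a maximal-length sequence, the output repeats only every 2⁵ − 1 = 31 samples, and each period contains 16 ones and 15 zeros, so the signal is nearly balanced about its mean.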
6.2.2 Experimental frequency response analysis

When first faced with an unknown plant, it is often a good idea to perform a non-parametric model identification such as a frequency response analysis. Such information, quantified perhaps as a Bode diagram, can then be used for subsequent controller design, although it is not suitable for direct simulation. For that we would require a transfer function or equivalent, but the frequency response analysis will aid us in the choice of an appropriate model structure if we were to further regress a parametric model.
The frequency response of an unknown plant G(s) can be calculated by substituting s = iω and, following the definition of the Fourier transform, one gets

    G(iω) ≝ Y(iω)/X(iω) = [∫₀^∞ y(t)e^(−iωt) dt] / [∫₀^∞ x(t)e^(−iωt) dt]    (6.5)

For every value of frequency, 0 < ω < ∞, we must approximate the integrals in Eqn. 6.5. If we choose the input to be a perfect Dirac pulse, then the denominator in Eqn. 6.5 will always equal 1. This simplifies the computation, but complicates the practical aspects of the experiment. If we use arbitrary inputs, we must compute the integrals numerically, although they are in fact simply the Fourier transforms of y(t) and x(t) and can be computed efficiently using the Fast Fourier Transform or FFT.
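The division of transforms in Eqn. 6.5 carries over directly to sampled data using the FFT. A Python sketch with an assumed discrete first-order plant, using a unit impulse input so that the denominator transform is exactly 1:

```python
import numpy as np

# Assumed discrete plant y[k] = 0.8 y[k-1] + 0.2 u[k],
# with exact frequency response G(e^{jw}) = 0.2/(1 - 0.8 e^{-jw})
N = 256
u = np.zeros(N)
u[0] = 1.0                                # discrete impulse: FFT(u) = 1
y = np.zeros(N)
for k in range(N):
    y[k] = 0.2 * u[k] + (0.8 * y[k - 1] if k > 0 else 0.0)

G_est = np.fft.fft(y) / np.fft.fft(u)     # G = FFT(y)/FFT(u), as in Eqn. 6.5

w = 2 * np.pi * np.arange(N) / N          # frequency grid [rad/sample]
G_true = 0.2 / (1 - 0.8 * np.exp(-1j * w))
```

With an arbitrary (non-impulse) input the same division still works in principle, but leakage and noise then corrupt the estimate, which is the problem the windowed alternative in section 6.2.3 addresses.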
We will compare three alternative techniques to compute the frequency response of an actual experimental laboratory plant, namely:

1. subjecting the plant to a series of sinusoids of different frequencies,
2. subjecting the plant to a single sinusoid with a time-varying frequency component (a chirp signal),
3. and subjecting the plant to a pseudo-random white noise input.

In all three alternatives, we must record (or compute) the steady-state amplitude ratio of the output over the input, and the phase lag of the output compared with the input, for different input frequencies.
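In the first alternative, reading the amplitude ratio and phase lag off a chart can be automated by correlating the recorded signals with a sine and a cosine at the excitation frequency. A Python sketch, where the first-order test plant and its known sinusoidal steady state are assumptions purely for demonstration:

```python
import numpy as np

def gain_phase(t, u, y, w):
    """Amplitude ratio and phase lag [rad] of y relative to u at frequency w,
    by correlation over a whole number of periods."""
    s, c = np.sin(w * t), np.cos(w * t)
    ay, by = 2 * np.mean(y * s), 2 * np.mean(y * c)   # projections of y
    au, bu = 2 * np.mean(u * s), 2 * np.mean(u * c)   # projections of u
    AR = np.hypot(ay, by) / np.hypot(au, bu)
    phase = np.arctan2(by, ay) - np.arctan2(bu, au)
    return AR, phase

# Assumed plant 1/(tau*s + 1): generate its exact sinusoidal steady state
tau, w = 2.0, 1.5
t = np.linspace(0.0, 10 * 2 * np.pi / w, 20000, endpoint=False)
u = np.sin(w * t)
AR_true = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)
ph_true = -np.arctan(w * tau)
y = AR_true * np.sin(w * t + ph_true)

AR, ph = gain_phase(t, u, y, w)
```

Repeating this for a dozen test frequencies yields exactly the kind of table shown later in Table 6.1; the correlation rejects any output component not at the test frequency, making it less sensitive to noise than reading peaks off a plot.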
Finally, to extract estimates of the model parameters from the frequency data, you could use the MATLAB function invfreqs or the equivalent for the discrete domain, invfreqz. I have no idea how robust this fitting is, and I would treat the results very cautiously at first. (Also refer to problem 6.1.)
A series of input sine waves

The simplest way to experimentally characterise the frequency response of a plant is to input a series of pure sine waves at differing frequencies as shown in the SIMULINK diagram Fig. 6.12, which uses the real-time toolbox to communicate with the experimental blackbox.

Fig. 6.13 shows the response of the laboratory blackbox when subjected to 12 sine waves of increasing frequency spanning from ω = 0.2 to 6 radians/second. The output clearly shows a reduction in amplitude at the higher frequencies, but also a small amplification at modest frequencies. By carefully reading the output amplitude and phase lag, we can establish the 12 experimental points given in Table 6.1, suitable say for a Bode diagram.
Figure 6.12: Using the real-time toolbox in MATLAB to subject sine waves to the blackbox for an experimental frequency analysis. (The maximum real-time sampling rate is around 0.1 s.)

Figure 6.13: The black box frequency response analysis using a series of sine waves of different frequencies, from ω = 0.2 to ω = 6.0 rad/s. This data will be subsequently processed and is tabulated in Table 6.1 and plotted in Fig. 6.20.
However, while conceptually simple, the sine wave approach is neither efficient nor practical. Lees and Hougen, [118], while attempting a frequency response characterisation of a steam/water heat exchanger, noted that it was difficult to produce a good quality sine wave input to the plant when using the control valve. In addition, since the range of frequencies spanned over 3.5 orders of magnitude (from 0.001 rad/s to 3 rad/s), the testing time was extensive.

Additional problems of sine-wave testing are that the gain and phase lag are very sensitive to bias, drift and output saturation. In addition, it is often difficult in practice to measure the phase lag accurately. [141, pp117–118] gives further potential problems and some modifications to lessen the effects of nonlinearities. The most important conclusion from the Lees and Hougen study was that they obtained better frequency information, with much less work, using pulse testing and subsequently analysing the input/output data using Fourier transforms.
Table 6.1: Experimentally determined frequency response of the blackbox derived from the laboratory data shown in Fig. 6.13. For a Bode plot of this data, see Fig. 6.20.

    Input frequency ω   Amplitude ratio   Phase lag
    (rad/s)                               (degrees)
    0.2                 1.19                -4.8
    0.3                 1.21                -5.1
    0.5                 1.27               -11.1
    0.8                 1.42               -20.8
    1.0                 1.59               -27.6
    1.3                 1.92               -46.7
    1.5                 2.08               -64.4
    2.0                 1.61              -119.2
    2.5                 0.88              -144.3
    3.0                 0.54              -156.8
    4.0                 0.27              -172.6
    6.0                 0.11              -356.1
A chirp signal

If we input a signal with a time-varying frequency component such as a chirp signal, u(t) = sin(ωt²), to an unknown plant, we can obtain an idea of the frequency response with a single, albeit long drawn-out, experiment, unlike the series of experiments required in the pure sinusoidal case.
Fig. 6.14 shows some actual input/output data collected from the blackbox laboratory plant, sampled at T = 0.1 seconds. As the input frequency is increased, the output amplitude decreases as expected for a plant with low-pass filtering characteristics. However, the phase lag information is difficult to determine at this scale from the plot given in Fig. 6.14. Our aim is to quantify this amplitude reduction and phase lag as a function of frequency for this experimental plant.

Since the phase information is hard to visualise in Fig. 6.14, we can plot the input and output signals as a phase plot similar to a Lissajous figure. The resulting curve must be an ellipse (since the input and output frequencies are the same), but the ratio of the height to width is the amplitude ratio, and the eccentricity and orientation are a function of the phase lag.

If we plot the start of the phase plot using the same input/output data as Fig. 6.14, we should see the trajectory of an ellipse gradually rotating around, and growing thinner and thinner. Fig. 6.15 shows both the two-dimensional phase plot and a three-dimensional plot with the time axis added.

Fig. 6.16 shows the same experiment performed on the flapper. This experiment indicates that the flapper has minimal dynamics, since there is little discernible amplitude attenuation and phase lag, but substantial noise. The saturation of the output is also seen in the squared edges of the ellipse in the phase plot, in addition to the flattened sinusoids of the flapper response.
A pseudo random input

A more efficient experimental technique is to subject the plant to an input signal that contains a wide spectrum of input frequencies, and then compute the frequency response directly. This has the advantage that the experimentation is considerably shorter, potentially processes the data
Figure 6.14: Input (lower)/output (upper) data collected from the black box where the input is a chirp signal.

Figure 6.15: Phase plot of the experimental black box when the input is a chirp signal: (a) 2D input/output phase plot; (b) the same phase plot with time in the third dimension. See also Fig. 6.14.

Figure 6.16: Flapper response to a chirp signal: (a) time response; (b) phase plot of the experimental flapper. There is little evidence of any appreciable dynamics over the range of frequencies investigated.
more efficiently, and removes some of the human bias.
The experimental setup in Fig. 6.17 shows an unknown plant subjected to a random input. If we
collect enough input/output data pairs, we should be able to estimate the frequency response of
the unknown plant.
Figure 6.17: Experimental setup to subject a random input into an unknown plant G, using a band-limited white noise source. The input/output data was collected and processed through Listing 6.2 to give the frequency response shown in Fig. 6.18.
Fig. 6.18 compares the magnitude and phase of the ratio of the Fourier transforms of the input and output to the analytically computed Bode plot, given that the unknown plant is in fact

    G(s) = e^(−2s)/(5s² + s + 6).
The actual numerical computations to produce the Bode plot in Fig. 6.18 are given in Listing 6.2. While the listing looks complex, the key calculation is the computation of the fast Fourier transform, and the subsequent division of the two vectors of complex numbers.

It is important to note that SIMULINK must deliver the equally spaced samples necessary for the subsequent FFT transformation, and that the frequency vector is equally spaced from 0 up to the sampling frequency (twice the Nyquist frequency).
Figure 6.18: The experimental frequency response compared to the true analytical Bode diagram. See the routine in Listing 6.2.

Listing 6.2: Frequency response identification of an unknown plant directly from input/output data.

G = tf(1,[5 1 6],'iodelay',2);  % Unknown plant under test
Ts = 0.2; timespan = [0 2000];  % sample time & period of interest
sl_optns = simset('RelTol',1e-5,'AbsTol',1e-5, ...
                  'MaxStep',1e-2,'MinStep',1e-3);
[t,x,UY] = sim('splant_noiseIO',timespan,sl_optns); % Refer Fig. 6.17.

n = ceil((pow2(nextpow2((length(t)))-1)+1)/2);
t = t(1:n);                     % discard down to a power of 2 for the FFT
u = UY(1:n,1); y = UY(1:n,2);

Fs = 1/Ts; Fn = Fs/2;           % sampling & Nyquist frequency [Hz]

Giw = fft(y)./fft(u);           % G(iw) = FFT(Y)/FFT(U)
w = 2*pi*(0:n-1)*2*Fn/n;        % frequency in [rad/s]

[Mag,Phase,wa] = bode(G);       % Analytical Bode plot for comparison
Mag = Mag(:); Phase = Phase(:);

subplot(2,1,1);                 % Refer Fig. 6.18.
loglog(w,abs(Giw),wa,Mag,'r-'); ylabel('|G(i \omega)|')
subplot(2,1,2);
semilogx(w,phase(Giw)*180/pi, wa,Phase,'r-');
ylim([-720,10])
ylabel('Phase angle [degrees]'); xlabel('frequency \omega [rad/s]');
We can try the same approach to find the frequency response of the blackbox experimentally. Fig. 6.19 shows part of the 2¹³ = 8192 samples collected at a sampling rate of 10 Hz, or T = 0.1 s, to be used to construct the Bode diagram.

The crude frequency response obtained simply by dividing the two Fourier transforms of the input/output data from Fig. 6.19 is given in Fig. 6.20 which, for comparison, also shows the frequency results obtained from a series of distinct sine wave inputs and also from using the MATLAB toolbox command etfe, which will be subsequently explained in section 6.2.3.

Since we have not employed any windows in the FFT analysis, the spectral plot is unduly noisy, especially at the high frequencies. The 12 amplitude ratio and phase lag data points manually
Figure 6.19: Part of the experimental black box response data series given a pseudo-random input sequence. This data was processed to produce the frequency response given in Fig. 6.20.
Figure 6.20: The frequency response of the black box computed using FFTs (cyan line), the System ID toolbox function etfe (blue line), and the discrete sine wave input data read from Fig. 6.13. The FFT analysis technique is known to give poor results at high frequencies.

read from Fig. 6.13 (repeated in Table 6.1) show good agreement across the various methods.
6.2.3 An alternative empirical transfer function estimate

The frequency responses obtained from experimental input/output data shown in Fig. 6.18 and particularly Fig. 6.20 illustrate that the naive approach of simply numerically dividing the Fourier transforms is susceptible to excessive noise. A better approach is to take more care in the data pre-processing stage by a judicious choice of windows. The empirical transfer function estimate routine, etfe, from the System Identification toolbox does this and delivers more reliable and consistent results. Compare the result given in Fig. 6.21 from Listing 6.3 with the simpler strategy used to generate Fig. 6.18.
Listing 6.3: Non-parametric frequency response identication using etfe.
G = tf(1,[5 1 6],'iodelay',2); % Plant under test: G(s) = e^-2s/(5s^2 + s + 6)
Ts = 0.2; t = [0:Ts:2000]'; % sample time & span of interest
u = randn(size(t)); y = lsim(G,u,t);
Dat = iddata(y,u,Ts); % collect data together
Gmodel = etfe(Dat); % estimate a frequency response model
bode(Gmodel) % Refer Fig. 6.21.
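The windowed, averaged-spectra idea that makes etfe more reliable than a raw FFT ratio can be sketched outside MATLAB as well. The block below (not from the book) uses Python's SciPy: Welch averaging supplies the windowing, and the frequency response is estimated as the cross-spectrum divided by the input spectrum. The Butterworth "plant" is an arbitrary stand-in for the unknown system.

```python
import numpy as np
from scipy import signal

Ts = 0.2
rng = np.random.default_rng(0)
u = rng.standard_normal(10000)          # random test input

# A stand-in discrete "plant" (not the book's black box)
b, a = signal.butter(2, 0.1)
y = signal.lfilter(b, a, u)

# Welch-averaged spectra supply the windowing that tames the raw FFT ratio
f, Puy = signal.csd(u, y, fs=1/Ts, nperseg=1024)   # cross-spectrum of u and y
_, Puu = signal.welch(u, fs=1/Ts, nperseg=1024)    # input power spectrum
G_hat = Puy / Puu                                  # empirical frequency response

# True response of the stand-in plant at the same frequencies, for comparison
_, G_true = signal.freqz(b, a, worN=2*np.pi*f*Ts)
```

With enough averaged segments the magnitude of `G_hat` tracks the true response closely, whereas a single unwindowed FFT ratio would be far noisier.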
[Figure: Bode plot of |G(iω)| and phase [deg] against frequency [rad/s], comparing the estimated and actual responses.]
Figure 6.21: An empirical transfer function estimate from experimental input/output data computed by etfe. The true frequency response is overlaid for comparison. This is preferred over the simpler strategy used to generate Fig. 6.18.
Problem 6.1 1. Plot the frequency response of an unknown process on both a Bode diagram
and the Nyquist diagram by subjecting the process to a series of sinusoidal inputs. Each
input will contribute one point on the Bode or Nyquist diagrams. The unknown transfer
function is
$$G(s) = \frac{0.0521s^2 + 0.4896s + 1}{0.0047s^4 + 0.0697s^3 + 0.5129s^2 + 1.4496s + 1}$$
Compare your plot with one generated using bode. Note that in practice we do not know
the transfer function in question, and we construct a Bode or Nyquist plot to obtain it. This
is the opposite of typical textbook student assignments.
Hint: Do the following for a dozen or so different frequencies:
(a) First specify the unknown continuous time
transfer function.
(b) Choose a test frequency, say ω = 2 rad/s, and
generate a sine wave at this frequency.
(c) Simulate the output of the process given this
input. Plot input/output together and read off
the phase lag and amplitude ratio.
Gcn=[0.0521 0.4896 1];
Gcd=[0.0047 0.0697 0.5129 1.4496 1];
w=2; % rad/s
t=[0:0.1:20]; % seconds
u=sin(w*t); % input signal
y=lsim(Gcn,Gcd,u,t); plot(t,[u y])
Note that the perfect frequency response can be obtained using the freqs command.
2. The opposite calculation, that is, fitting a transfer function to an experimentally obtained frequency response, is trivial with the invfreqs command. Use this command to reconstruct the polynomial coefficients of G(s) using only your experimentally obtained frequency response.
Problem 6.2 Write a MATLAB function file to return G_p(iω) given the input/output data x(t), y(t) using Eqn. 6.5 for some specified frequency. Test your subroutine using the transfer function from Problem 6.1 and some suitable input signal, and compare the results with the exact frequency response.
Hint: To speed up the calculations, you should vectorise the function as much as possible. (Of course, for the purposes of this educational example, do not use an FFT.)
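Eqn. 6.5 is not reproduced in this excerpt, so as a stand-in the sketch below (not the book's code, and in Python rather than MATLAB) uses the standard single-frequency correlation estimate, G(jω) ≈ Σ y(t)e^{-jωt} / Σ u(t)e^{-jωt}, which is fully vectorised and needs no FFT.

```python
import numpy as np

def freq_response_est(t, u, y, w):
    """Estimate G(jw) at a single frequency w by correlating both signals
    with e^{-jwt}; a vectorised computation, no FFT required."""
    ew = np.exp(-1j*w*t)
    return np.dot(y, ew) / np.dot(u, ew)

# Quick check with a known gain and phase lag at w = 2 rad/s
w = 2.0
t = np.arange(2000)*(np.pi/100)      # an exact number of sine periods
u = np.sin(w*t)
y = 0.5*np.sin(w*t - 0.3)            # gain 0.5, phase lag 0.3 rad
G_est = freq_response_est(t, u, y, w)
```

Over a whole number of periods the estimate recovers the complex gain 0.5e^{-0.3j} essentially exactly.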
6.3 Continuous model identication
Historically, graphical methods, especially for continuous time domain systems, used to be very popular, partly because one needed little more than pencil and graph paper. For first-order or for some second-order models common in process control, we have a number of graphical recipes which, if followed, enable us to extract reasonable parameters. See [179, Chpt 7] for further details.
While we could in principle invert the transfer function given a step input to the time domain
and then directly compare the model and actual outputs, this has the drawback that the solutions
are often convoluted nonlinear functions of the parameters. For example, the solutions to a step
response for first and second order overdamped systems are:
$$\frac{K}{\tau s+1} \quad\longrightarrow\quad y(t)=K\left(1-e^{-t/\tau}\right)$$
$$\frac{K}{(\tau_1 s+1)(\tau_2 s+1)} \quad\longrightarrow\quad y(t)=K\left(1-\frac{\tau_1 e^{-t/\tau_1}-\tau_2 e^{-t/\tau_2}}{\tau_1-\tau_2}\right)$$
and the problem is that while the gain appears linearly in the time domain expressions, the time constants appear nonlinearly. This makes subsequent parameter estimation difficult, and in general we would need to use a tool such as the OPTIMISATION toolbox to find reasonable values for the time constants. The following section describes this approach.
6.3.1 Fitting transfer functions using nonlinear least-squares
Fig. 6.22 shows some actual experimental data collected from a simple black box plant subjected to a low-frequency square wave. The data was sampled at 10 Hz, i.e. T_s = 0.1 s. You can
[Figure: output signal plotted against time (s).]
Figure 6.22: Experimental data from a continuous plant subjected to a square wave.
notice that in this case the noise characteristics seem to be clearly dependent on the magnitude of y. By collecting this input/output data we will try to identify a continuous transfer function. We can see at a glance from the response in Fig. 6.22 that a suitable model for the plant would be an under-damped second order transfer function
$$G(s) = \frac{K e^{-as}}{\tau^2 s^2 + 2\zeta\tau s + 1} \qquad (6.6)$$
where we are interested in estimating the four model parameters of Eqn. 6.6 as $\theta \stackrel{\text{def}}{=} \begin{bmatrix} K & \tau & \zeta & a \end{bmatrix}^T$.
This is a nonlinear least-squares regression optimisation problem we can solve using lsqcurvefit. Listing 6.4 will be called by the optimisation routine in order to generate the model predictions for a given trial set of parameters.
Listing 6.4: Function to generate output predictions given a trial model and input data.
function ypred = fmodelsim(theta,t,u)
% Compute the predicted output, y(t), given a model & input data, u(t).
K = theta(1); tau = theta(2); zeta = theta(3); deadt = theta(4);
G = tf(K,[tau^2, 2*tau*zeta, 1],'iodelay',deadt); % G(s) = K e^-as/(tau^2 s^2 + 2 zeta tau s + 1)
ypred = lsim(G,u,t);
return
The optimisation routine in Listing 6.5 calls the function given in Listing 6.4 repeatedly in order to search for good values for the parameters such that the predicted outputs match the actual outputs. In parameter fitting cases such as these, it is prudent to constrain the parameters to be non-negative. This is particularly important in the case of the deadtime, where we want to ensure that we do not inadvertently try models with acausal behaviour.
Listing 6.5: Optimising the model parameters.
K0 = 1.26; tau0 = 1.5; zeta0 = 0.5; deadt0 = 0.1; % Initial guesses for theta0
theta0 = [K0,tau0,zeta0,deadt0]';
LowerBound = zeros(size(theta0)); % Avoid negative deadtime
theta_opt = lsqcurvefit(@(x,t) fmodelsim(x,t,u), theta0,t,y,LowerBound)
Once we have some optimised values, we should plot the predicted model output on top of the
actual output as shown in Fig. 6.23 which, in this case, is not too bad.
Listing 6.6: Validating the fitted model.
K = theta_opt(1); tau = theta_opt(2);
zeta = theta_opt(3); deadt = theta_opt(4);
Gopt = tf(K,[tau^2, 2*tau*zeta, 1],'iodelay',deadt);
ypred = lsim(Gopt,u,t);
plot(t,y,t,ypred,'r-') % See results of the comparison in Fig. 6.23.
[Figure: measured output against time (s) with the fitted model G(s) = 1.166/(0.571²s² + 0.317s + 1) overlaid on the actual data.]
Figure 6.23: A continuous-time model fitted to input/output data from Fig. 6.22.
The high degree of similarity between the model and the actual plant shown in Fig. 6.23 suggests that, with some care, this method does work in practice using real experimental data complete with substantial noise. However, on the downside, we did use a powerful nonlinear optimiser, we were careful to eliminate the possibility that any of the parameters could be negative, we chose sensible starting estimates, and we used nearly 1000 data points.
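The same bounded nonlinear fit can be sketched in Python (rather than MATLAB) without any toolboxes by using the analytic step response of the underdamped second-order-plus-deadtime model. Everything below is a stand-in: the "true" parameters, the noise level and the data are invented for illustration, not taken from the black box experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau, zeta, a):
    """Analytic unit-step response of K e^{-as}/(tau^2 s^2 + 2 zeta tau s + 1),
    valid for the underdamped case 0 < zeta < 1."""
    ts = np.clip(t - a, 0.0, None)            # shift by the deadtime a
    wn = 1.0/tau                              # natural frequency
    wd = wn*np.sqrt(1.0 - zeta**2)            # damped frequency
    decay = np.exp(-zeta*wn*ts)
    return K*(1.0 - decay*(np.cos(wd*ts)
                           + zeta/np.sqrt(1.0 - zeta**2)*np.sin(wd*ts)))

# Hypothetical "true" plant and noisy step data standing in for the black box
rng = np.random.default_rng(1)
t = np.arange(0.0, 20.0, 0.1)
theta_true = (1.2, 1.5, 0.3, 0.5)             # K, tau, zeta, a
y = step_response(t, *theta_true) + 0.02*rng.standard_normal(t.size)

theta0 = (1.0, 1.0, 0.5, 0.1)                 # sensible starting guesses
bounds = ([0.0, 1e-3, 1e-3, 0.0],             # keep every parameter non-negative,
          [10.0, 10.0, 0.999, 5.0])           # including the deadtime
theta_hat, _ = curve_fit(step_response, t, y, p0=theta0, bounds=bounds)
```

As in the MATLAB version, the lower bounds keep the deadtime causal, and decent starting estimates matter far more here than in a linear regression.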
6.3.2 Identification using derivatives
While most modern system identification and model based control is done in the discrete domain, it is possible, as shown above, to identify the parameters in continuous models, G(s), directly. This means that we can establish the continuous time poles and zeros of a transfer function without first identifying a discrete time model and then converting back to the continuous time domain.
However there are two main reasons why continuous time model identification is far less useful than its discrete time equivalent. The first reason is that since discrete time models are more useful for digital control, it makes sense to directly estimate them rather than go through a temporary continuous model. The second drawback is purely practical. Either we must resort to the delicate task of applying nonlinear optimisation techniques such as illustrated above, or, if we wish to use the robust linear least-squares approach, we need to reliably estimate high order derivatives of both the input and output data. In the presence of measurement noise and possible discontinuities in gradient for the input, such as step changes, it is almost impossible to construct anything higher than a second-order derivative, thus restricting our models to second order or less. Notwithstanding these important implementation issues, [29] describes a recursive least-squares approach for continuous system identification.
As continuous time model identication requires one to measure not only the input and output,
but also the high order derivatives of y(t) and u(t), it turns out that this construction is delicate
even for well behaved simulated data, so we would anticipate major practical problems for actual
experimental industrial data.
If we start with the continuous transfer function form,
$$Y(s) = \frac{B(s)}{A(s)}\,U(s)$$
then by rationalising and expanding out the polynomials in s we get
$$\left(a_n s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + 1\right)Y(s) = \left(b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0\right)U(s)$$
Inverting to the time domain and assuming zero initial conditions, we get a differential equation
$$a_n y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1 y^{(1)} + y = b_m u^{(m)} + b_{m-1}u^{(m-1)} + \cdots + b_1 u^{(1)} + b_0 u$$
or in vector format,
$$y = \begin{bmatrix} -y^{(n)} & -y^{(n-1)} & \cdots & -y^{(1)} & u^{(m)} & \cdots & u \end{bmatrix} \begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_1 \\ b_m \\ \vdots \\ b_0 \end{bmatrix} \qquad (6.7)$$
where the notation $y^{(n)}$ denotes the $n$th derivative of $y$.
Eqn. 6.7 is in a form suitable for least-squares regression of the vector of unknown coefficients of the A and B polynomials. Note that if we desire $n_a$ poles, we must differentiate the output $n_a$ times. Similarly, we must differentiate the input once for each zero we want to estimate. A block diagram of this method is given in Fig. 6.24.
[Block diagram: the input u and the output y of the plant under investigation are each passed through a bank of differentiators, s^i, and the derivative signals are fed to a least-squares regression which delivers the parameters θ.]
Figure 6.24: Continuous model identification strategy
A simple simulation is given in Listing 6.7. I first generate a truth model with four poles and three zeros. Because I want to differentiate the input, I need to make it sufficiently smooth by
passing a random input through a very-low-pass filter. To differentiate the logged data I will use the built-in gradient command which approximates the derivative using finite differences. To obtain the higher-order differentials, I just repeatedly call gradient. With the input/output data and its derivatives, the normal equations are formed and solved for the unknown polynomial coefficients.
Listing 6.7: Continuous model identification of a non-minimum phase system
Gplant = tf(3*poly([3,-3,-0.5]),poly([-2 -4 -7 -0.4])); % Plant G(s) = 3(s-3)(s+3)(s+0.5)/((s+7)(s+4)(s+2)(s+0.4))
step(Gplant); % Step response of a continuous plant
% Design smooth differentiable input
dt = 5e-2; t = [0:dt:10]'; % Keep sample time T small
Ur = randn(size(t)); % white noise
[Bf,Af] = butter(4,0.2); % very low pass filter
u = filter(Bf,Af,Ur); % smoothish input

y = lsim(Gplant,u,t); % do experiment
plot(t,[u,y]); % verify I/O data

% Now do the identification
na = 2; nb = 1; % # of poles & zeros to be estimated. (Keep small.)
dy = y; % Zeroth derivative, y^(0) = y
for i=1:na % build up higher derivatives recursively, y^(1), y^(2), ...
  dy(:,i+1) = gradient(dy(:,i),t); % should be correctly scaled
end % for
du = u; % do again for input signal
for i=1:nb % but only for nb times
  du(:,i+1) = gradient(du(:,i),t); % could use diff()
end % for

b = y; % RHS
X = [-dy(:,2:na+1), du]; % data matrix
theta = X\b; % solve for parameters

% re-construct linear model polynomials
Gmodel = tf(fliplr(theta(na+1:length(theta))'), ...
            fliplr([1,theta(1:na)']))
yp = lsim(Gmodel,u,t); % test simulation
plot(t,y,t,yp,'--');
Even though we know the truth model possessed four poles and three zeros, our fitted model with just two poles and one zero gave a close match as shown in Fig. 6.25. In fact the plant output (heavy line) is almost indistinguishable from the predicted output (light line), and this is due to the lack of noise and the smoothed input trajectory. However, if you get a little over-ambitious in your identification, the algorithm will go unstable owing to the difficulty in constructing reliable derivatives.
6.3.3 Practical continuous model identification
We may be tempted, given the results presented in Fig. 6.25, to suppose that identifying a continuous transfer function from logged data is relatively straightforward. Unfortunately, however, if
[Figure: two panels, the output (plant against model) and the input, plotted against time.]
Figure 6.25: A simulated example of a continuous model identification comparing the actual plant with the model prediction showing almost no discernible error. Compare these results with an actual experiment in Fig. 6.26.
you actually try this scheme on a real plant, even a well-behaved plant such as the black-box, you will find practical problems of such magnitude that the scheme is rendered almost worthless. We should also note that if the deadtime is unknown, then the identification problem is nonlinear in the parameters, which makes the subsequent identification problematic. This is partly the reason why many common textbooks on identification and control pay only scant attention to identification in the continuous domain, but instead concentrate on identifying discrete models. (Discrete model identification is covered next in section 6.4.)
The upper plot in Fig. 6.26 shows my attempt at identifying a continuous model of the blackbox, which, compared with Fig. 6.25, is somewhat less convincing. Part of the problem lies in the difficulty of constructing the derivatives of the logged input/output data. The lower plot in Fig. 6.26 shows the logged output combined with the first and second derivatives computed crudely using finite differences. One can clearly see the compounding effects on the higher derivatives due to measurement noise, discretisation errors, and so forth.
For specific cases of continuous transfer functions, there are a number of optimised algorithms to directly estimate the continuous coefficients. Himmelblau, [87, p358–360], gives one such procedure which reduces to a linear system that is easy to solve in MATLAB. However, again, this algorithm essentially suffers the same drawbacks as the sequential differential schemes given previously.
Improving identification reliability by using Laguerre functions
One scheme that has been found to be practical is based on approximating the continuous response with a series of orthogonal functions known as Laguerre functions. The methodology was developed at the University of British Columbia and is available commercially from BrainWave, now part of Andritz Automation. Further details about the identification strategy are given in [198]. Compared to the standard curve fitting techniques, this strategy using orthogonal functions is more numerically robust, and only involves numerical integration of the raw data (as opposed to the troublesome differentiation). On the downside, manipulating the Laguerre polynomial
[Figure: upper panel, the output and prediction of the continuous model identification with sample time T = 0.05 s; lower panel, the measured output y and its derivatives dy/dt and d²y/dt², comparing the blackbox with the model.]
Figure 6.26: A practical continuous model identification of the blackbox is not as successful as the simulated example given in Fig. 6.25. Part of the problem is the reliable construction of the higher derivatives. Upper: the plant (solid) and model prediction (dashed). Lower: the output y and its derivatives ẏ and ÿ.
functions is cumbersome, and the user must select an appropriate scaling factor to obtain reasonable results. While [198] suggests some rules of thumb, and some algorithms for choosing the scaling factor, it is still a delicate and sensitive calculation and therefore tricky in practice.
Suppose we wish to use Laguerre functions to identify a complex high-order transfer function
$$G(s) = \frac{(45s+1)^2(4s+1)(2s+1)}{(20s+1)^3(18s+1)^3(5s+1)^3(10s+1)^2(16s+1)(14s+1)(12s+1)} \qquad (6.8)$$
which was used as a challenging test example in [198, p32]. To perform the identification, the user must decide on an appropriate order (8 in this case) and a suitable scaling factor, p. Fig. 6.27 compares the collected noisy step data with the identified step response, compares the identified and true impulse responses, and gives the error in the impulse response.
6.4 Popular discrete-time linear models
The previous section showed that while it was possible in theory to identify continuous models
by differentiating the raw data, in practice this scheme lacked robustness. Furthermore it makes
sense for evenly spaced sampled data systems not to estimate continuous time models, but to
estimate discrete models directly since we have sampled the plant to obtain the input/output
data anyway. This section describes the various common forms of linear discrete models that we
will find useful in modelling and control.
[Figure: three panels from the step test with order 8 and scale factor p = 0.050: the step test with T_ss = 493.1 and p_opt = 0.05, the impulse response comparison, and the error plotted against time [s].]
Figure 6.27: Identification of the 14th order system given in Eqn. 6.8 using an 8th-order Laguerre series and a near-optimal scaling parameter.
If we collect the input/output data at regular sampling intervals, t = kT, for most applications it suffices to use linear difference models such as
$$y(k) + a_1 y(k-1) + a_2 y(k-2) + \cdots + a_n y(k-n) = b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m) \qquad (6.9)$$
where in the above model, the current output y(k) is dependent on the m+1 past and present inputs u and the n immediate past outputs y. The $a_i$'s and $b_i$'s are the model parameters we seek.
A shorthand description for Eqn. 6.9 is
$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) + e(k) \qquad (6.10)$$
where we have added a noise term, e(k), and where $q^{-1}$ is defined as the backward shift operator. The polynomials $A(q^{-1})$ and $B(q^{-1})$ are known as shift polynomials
$$A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + \cdots + a_n q^{-n}$$
$$B(q^{-1}) = b_0 + b_1 q^{-1} + b_2 q^{-2} + \cdots + b_m q^{-m}$$
where the $A(q^{-1})$ polynomial is defined by convention to be monic, that is, the leading coefficient is 1. In the following development, we will typically drop the argument of the A and B polynomials.
Even with the unmeasured noise term, we can still estimate the current y(k) given old values of
y and old and present values of u using
$$\hat{y}(k|k-1) = \underbrace{-a_1 y(k-1) - \cdots - a_n y(k-n)}_{\text{past outputs}} + \underbrace{b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m)}_{\text{present \& past inputs}} \qquad (6.11)$$
$$\phantom{\hat{y}(k|k-1)} = (1-A)\,y(k) + B\,u(k) \qquad (6.12)$$
Our estimate or prediction of the current y(k) based on information up to, but not including, k is $\hat{y}(k|k-1)$. If we are lucky, our estimate is close to the true value, or $\hat{y}(k|k-1) \approx y(k)$. Note that the prediction model of Eqn. 6.11 involves just known terms: the model parameters and past measured data. It does not involve the unmeasured noise terms.
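The one-step-ahead predictor of Eqn. 6.11 can be sketched in Python (a stand-in, not the book's code). The predictor uses only measured past outputs and known inputs; in the noise-free case it must reproduce the output exactly, which makes a convenient sanity check.

```python
import numpy as np
from scipy.signal import lfilter

def arx_predict(A, B, y, u):
    """One-step-ahead predictions yhat(k|k-1) for A(q^-1)y = B(q^-1)u,
    with A = [1, a1, ..., an] (monic) and B = [b0, b1, ..., bm]."""
    n, m = len(A) - 1, len(B) - 1
    yhat = np.zeros_like(y)
    for k in range(max(n, m), len(y)):
        past_y = y[k-n:k][::-1]           # y(k-1), ..., y(k-n)
        inputs = u[k-m:k+1][::-1]         # u(k), ..., u(k-m)
        yhat[k] = -np.dot(A[1:], past_y) + np.dot(B, inputs)
    return yhat

# Noise-free ARX data for a small hypothetical model
A = np.array([1.0, -0.8])
B = np.array([0.5, 0.2])
rng = np.random.default_rng(3)
u = rng.standard_normal(50)
y = lfilter(B, A, u)                      # exact ARX response, e(k) = 0
yhat = arx_predict(A, B, y, u)            # matches y after the start-up samples
```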
The model described by Eqn. 6.10 is very common and is called an Auto-Regressive with eXogenous (externally generated) input, or ARX, model. The auto-regressive refers to the fact that the output y is dependent on old y's (i.e. it is regressed on itself), in addition to an external input, u. A signal flow diagram of the model is given in Fig. 6.28.
[Block diagram: the known input u is filtered by B, summed with the noise e, and the result is passed through 1/A to give the output y.]
Figure 6.28: A signal flow diagram of an auto-regressive model with exogenous input, or ARX model. Compare this structure with the similar output-error model in Fig. 6.30.
There is no reason why we could not write our models such as Eqn. 6.10 in the forward shift operator, q, as opposed to the backward shift, $q^{-1}$. Programmers and those working in the digital signal processing areas tended to favour the backward shift because it is easier and more natural to write computational algorithms such as Eqn. 6.11, whereas some control engineers favoured the forward notation because the time delay is given naturally by the difference in order of the A and B polynomials.
6.4.1 Extending the linear model
Since the white noise term in Eqn. 6.10 is also filtered by the plant denominator polynomial, A(q), this form is known as an equation error model structure. However, in many practical cases we will need more flexibility when describing the noise term. One obvious extension is to filter the noise term with a moving average filter
$$\underbrace{y(k) + a_1 y(k-1) + \cdots + a_n y(k-n) = b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m)}_{\text{ARX model}} + \underbrace{e(k)}_{\text{white noise}} + \underbrace{c_1 e(k-1) + \cdots + c_n e(k-n)}_{\text{coloured noise}}$$
This is now known as coloured noise, the colouring filter being the C polynomial. The AUTO-REGRESSIVE MOVING-AVERAGE with EXOGENOUS input, or ARMAX, model is written in compact
polynomial form as
$$A(q)\,y(k) = B(q)\,u(k) + C(q)\,e(k) \qquad (6.13)$$
where we have dropped the notation showing the explicit dependence on sample interval k. The ARMAX signal flow is given in Fig. 6.29. This model now has two inputs, one deterministic u and one disturbing noise e, and one output, y. If the input u is zero, then the system is known simply as an ARMA process.
[Block diagram: the known input u is filtered by B, the noise e is filtered by C, and their sum is passed through 1/A to give the output y.]
Figure 6.29: A signal flow diagram of an ARMAX model. Note that the only difference between this and the ARX model in Fig. 6.28 is the inclusion of the C polynomial filtering the noise term.
While the ARMAX model offers more flexibility than the ARX model, the regression of the parameters is now a nonlinear optimisation problem. The nonlinear estimation of the parameters in the A, B and C polynomials is demonstrated on page 302. Also note that the noise term is still filtered by the A polynomial, meaning that both the ARX and ARMAX models are suitable if the dominating noise enters the system early in the process, such as a disturbance in the input variable.
Example. We can simulate such a process in raw MATLAB, although handling the indices and initialisation requires some care. Suppose we have an ARMAX model of the form of Eqn. 6.13 with coefficient polynomials
$$A(q) = q^2 - q + 0.5, \qquad B(q) = 2q + 0.5, \qquad C(q) = q^2 - 0.2q + 0.05$$
Note that these polynomials are written in the forward shift operator, and the time delay, given by the difference in degrees, $n_a - n_b$, is 1. A MATLAB script to construct the model is:
A = [1 -1 0.5]; % Model A(q) = q^2 - q + 0.5
B = [2 0.5]; % No leading zeros B(q) = 2q + 0.5
C = [1 -0.2 0.05]; % Noise model C(q) = q^2 - 0.2q + 0.05
ntot = 10; % # of simulated data points
U = randn(ntot,1); E = randn(ntot,1);
Now to compute y given the inputs u and e, we use a for-loop and we write out the equation
explicitly as
$$y(k+2) = \underbrace{y(k+1) - 0.5y(k)}_{\text{old outputs}} + \underbrace{2u(k+1) + 0.5u(k)}_{\text{old inputs}} + \underbrace{e(k+2) - 0.2e(k+1) + 0.05e(k)}_{\text{noise}}$$
It is more convenient to wind the indices back two steps so we calculate the current y(k) as
opposed to the future y(k + 2) as above. We will assume zero initial conditions to get started.
n = length(A)-1; % Order na
z = zeros(n,1); % Padding: Assume ICs = zero
Uz = [z;U]; Ez = [z;E]; y = [z;NaN*U]; % Initialise
d = length(A)-length(B); % deadtime = na - nb
zB = [zeros(1,d),B]; % pad B(q) with leading zeros
for i=n+1:length(Uz)
  y(i) = -A(2:end)*y(i-1:-1:i-n) + zB*Uz(i:-1:i-n) + C*Ez(i:-1:i-n);
end
y(1:n) = []; % strip off initial conditions
This strategy of writing out the difference equations explicitly is rather messy and error-prone, so it is cleaner, but less transparent, to use the idpoly routine to create an ARMAX model and the idmodel/sim command in the identification toolbox to simulate it. Again we must pad the front of the B polynomial with zeros to account for the deadtime.
>>G = idpoly(A,[0,B],C,1,1) % Create polynomial model A(q)y(t) = B(q)u(t) + C(q)e(t)
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + C(q)e(t)
A(q) = 1 - q^-1 + 0.5 q^-2
B(q) = 2 q^-1 + 0.5 q^-2
C(q) = 1 - 0.2 q^-1 + 0.05 q^-2

This model was not estimated from data.
Sampling interval: 1

>>y = sim(G,[U E]); % test simulation
Both methods (writing the equations out explicitly, or using idpoly) will give identical results.
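A third route, sketched here in Python rather than MATLAB, exploits the identity A(q)y = B(q)u + C(q)e ⟹ y = (B/A)u + (C/A)e, so the ARMAX response is the sum of two ordinary linear filters. The polynomials are the example's, rewritten in q⁻¹ form with the one-sample deadtime as a leading zero in B.

```python
import numpy as np
from scipy.signal import lfilter

A = [1, -1, 0.5]          # A(q^-1)
B = [0, 2, 0.5]           # B(q^-1); the leading zero is one sample of deadtime
C = [1, -0.2, 0.05]       # C(q^-1)

rng = np.random.default_rng(4)
u = rng.standard_normal(10)
e = rng.standard_normal(10)

# y = (B/A)u + (C/A)e, each term computed with a standard difference-equation filter
y = lfilter(B, A, u) + lfilter(C, A, e)
```

By linearity this reproduces, sample for sample, the explicit recursion y(k) = y(k−1) − 0.5y(k−2) + 2u(k−1) + 0.5u(k−2) + e(k) − 0.2e(k−1) + 0.05e(k−2) with zero initial conditions.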
6.4.2 Output error model structures
Both the linear ARX model of Eqn. 6.10 and the version with the filtered noise, Eqn. 6.13, assume that both the noise and the input are filtered by the plant denominator, A. A more flexible approach is to treat the input and noise sequences separately, as illustrated in Fig. 6.30. This is known as an output error model. This model is suitable when the dominating noise term enters late in the process, such as a disturbance in the measuring transducer, for example.
[Block diagram: the known input u is filtered by B/F and the noise is added directly to the result to give the measured output y.]
Figure 6.30: A signal flow diagram of an output-error model. Compare this structure with the similar ARX model in Fig. 6.28.
6.4. POPULAR DISCRETE-TIME LINEAR MODELS 265
Again, a drawback of this model structure compared to the ARX model is that the optimisation function is a nonlinear function of the unknown parameters. This is because the estimate ŷ is not directly observable, but is itself a function of the parameters in the polynomials B and F. This makes the solution procedure more delicate and iterative. The SYSTEM IDENTIFICATION toolbox can identify output error models with the oe command.
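The structural point — that the disturbance is added after the plant dynamics rather than being filtered by the denominator — can be made concrete with a small data generator, sketched here in Python with an invented plant (not from the book):

```python
import numpy as np
from scipy.signal import lfilter

B = [0, 1.0, 0.5]          # B(q^-1) with one sample of delay (hypothetical)
F = [1, -1.5, 0.7]         # F(q^-1), a stable denominator (hypothetical)

rng = np.random.default_rng(5)
u = rng.standard_normal(200)
e = 0.1*rng.standard_normal(200)

y_clean = lfilter(B, F, u)     # noise-free plant output, (B/F)u
y = y_clean + e                # measurement noise enters at the sensor, unfiltered
```

Contrast this with the ARX/ARMAX structures, where the same e would first pass through 1/A and therefore inherit the plant's dynamics.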
6.4.3 General input/output models
The most general linear input/output model with one noise input and one known input is depicted in Fig. 6.31. The relation is described by
$$A(q)\,y(k) = \frac{B(q)}{F(q)}\,u(k) + \frac{C(q)}{D(q)}\,e(k) \qquad (6.14)$$
although it is rare that any one particular application will require all the polynomials. For example, [91, p76] points out that usually only the process dynamics B and F are identified, and D is a design parameter. Estimating the noise C polynomial is difficult in practice because the noise sequence is unknown and so must be approximated by the residuals.
[Block diagram: the known input u is filtered by B/F, the noise e is filtered by C/D, and their sum is passed through 1/A to give the output y.]
Figure 6.31: A general input/output model structure
Following the block diagram in Fig. 6.31 or Eqn. 6.14, we have the following common reduced
special cases:
1. ARX: Autoregressive with eXogenous input; C = D = F = 1
2. AR: Autoregressive with no external input; C = D = 1, and B = 0
3. FIR (Finite Impulse Response): C = D = F = A = 1
4. ARMAX: Autoregressive, moving average with eXogenous input; D = F = 1
5. OE (Output error): C = D = A = 1
6. BJ (Box-Jenkins): A = 1
In all these above models, our aim is to estimate the model parameters, which are the coefficients of the various polynomials, given the observable input/output experimental plant data. As always, we want the smallest, most efficient model that we can get away with. The next section will describe how we can find suitable parameters for these models. Whether we have the correct model structure is another story, and that tricky and delicate point is described in section 6.6. Nonlinear models are considered in section 6.6.3.
6.5 Regressing discrete model parameters
The simple auto-regressive with exogenous input (ARX) model,
$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) \qquad (6.15)$$
can be arranged to give the current output in terms of the past outputs and current and past inputs as
$$y(k) = \left(1 - A(q^{-1})\right)y(k) + B(q^{-1})\,u(k) \qquad (6.16)$$
or written out explicitly as
$$y(k) = \underbrace{-a_1 y(k-1) - \cdots - a_n y(k-n)}_{\text{past outputs}} + \underbrace{b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m)}_{\text{current \& past inputs}} \qquad (6.17)$$
Our job is to estimate reasonable values for the unknown parameters $a_1, \ldots, a_n$ and $b_0, \ldots, b_m$ such that our model is a reasonable approximation to data collected from the true plant. If we collectively call the parameter column vector $\theta$, as
$$\theta \stackrel{\text{def}}{=} \begin{bmatrix} a_1 & a_2 & \cdots & a_n & b_0 & b_1 & \cdots & b_m \end{bmatrix}^T$$
and the past data vector as
$$\varphi \stackrel{\text{def}}{=} \begin{bmatrix} -y(k-1) & -y(k-2) & \cdots & -y(k-n) & u(k) & u(k-1) & \cdots & u(k-m) \end{bmatrix}^T$$
then we could write Eqn. 6.17 compactly as
$$y(k) = \varphi^T \theta \qquad (6.18)$$
The data vector φ is also known as the regression vector since we are using it to regress the values of the parameter vector θ. While Eqn. 6.18 contains (n + m + 1) unknown parameters, we have only one equation. Clearly we need additional equations to reduce the degrees of freedom, which in turn means collecting more input/output data pairs.
Suppose we collect a total of N data pairs. The first n are used to make the first prediction, and the next N − n are used to estimate the parameters. Stacking all the N − n equations, of which Eqn. 6.17 is the first, in matrix form gives
$$\begin{bmatrix} y_n \\ y_{n+1} \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} -y_{n-1} & -y_{n-2} & \cdots & -y_0 & u_n & u_{n-1} & \cdots & u_{n-m} \\ -y_n & -y_{n-1} & \cdots & -y_1 & u_{n+1} & u_n & \cdots & u_{n-m+1} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ -y_{N-1} & -y_{N-2} & \cdots & -y_{N-n} & u_N & u_{N-1} & \cdots & u_{N-m} \end{bmatrix} \theta \qquad (6.19)$$
or in a compact form
$$\mathbf{y}_N = \mathbf{X}_N \,\theta \qquad (6.20)$$
Now the optimisation problem is, as always, to choose the parameter vector θ such that the model M is an optimal estimate of the true system S. We will choose to minimise the sum of the squared errors, so our cost function becomes
$$\mathcal{J} = \sum_{k=n}^{N} \left(y_k - \varphi_k^T \theta\right)^2 \qquad (6.21)$$
which we wish to minimise. In practice, we are rarely interested in the numerical value of J but in the value of the parameter vector, θ, that minimises it,
$$\hat{\theta} = \underset{\theta}{\operatorname{argmin}} \left\{ \sum_{k=n}^{N} \left(y_k - \varphi_k^T \theta\right)^2 \right\} \qquad (6.22)$$
Eqn. 6.22 reads as "the optimum parameter vector is given by the argument that minimises the sum of squared errors." Eqn. 6.22 is a standard linear regression problem which has the same solution as in Eqn. 3.41, namely
$$\theta = \left(\mathbf{X}_N^T \mathbf{X}_N\right)^{-1} \mathbf{X}_N^T \,\mathbf{y}_N \qquad (6.23)$$
For the parameters to be unique, the inverse of the matrix $\mathbf{X}_N^T \mathbf{X}_N$ must exist. We can ensure this by constructing an appropriate input sequence to the unknown plant. This is known as experimental design.
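The invertibility requirement can be illustrated numerically. The sketch below (in Python, with an invented first-order plant) estimates an ARX model by least-squares from a persistently exciting random input, and then shows that a constant input leaves the regressor matrix X′X effectively singular, so the parameters are not unique.

```python
import numpy as np
from scipy.signal import lfilter

def build_X(y, u):
    """Stack the regressors [-y(k-1), u(k), u(k-1)] for a 1st-order ARX model."""
    rows = [[-y[k-1], u[k], u[k-1]] for k in range(1, len(y))]
    return np.array(rows), y[1:]

A, B = [1.0, -0.5], [1.0, 0.3]      # a hypothetical plant, not from the book
rng = np.random.default_rng(6)

u_good = rng.standard_normal(100)   # persistently exciting input
y_good = lfilter(B, A, u_good)
X, rhs = build_X(y_good, u_good)
theta = np.linalg.lstsq(X, rhs, rcond=None)[0]   # recovers [a1, b0, b1]

u_bad = np.ones(100)                # a constant input is not exciting
y_bad = lfilter(B, A, u_bad)
Xb, _ = build_X(y_bad, u_bad)
cond_bad = np.linalg.cond(Xb.T @ Xb)    # enormous: the two u columns coincide
```

With the random input and no noise, the estimate is exact; with the constant input, the u(k) and u(k−1) columns are identical and X′X loses rank.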
Example of offline discrete model identification.
Suppose we have collected some input/output data from a plant

time, k   input   output
   0        1       -1
   1        4        2
   2        3        7
   3        2       16
to which we would like to fit a model of the form $A(q)\,y_k = B(q)\,u_k$. We propose a model structure
$$A(q) = 1 + a_1 q^{-1}, \qquad B(q) = b_0$$
which has two unknown parameters, $\theta \stackrel{\text{def}}{=} \begin{bmatrix} a_1 & b_0 \end{bmatrix}^T$, and we are to find values for $a_1$ and $b_0$
such that our predictions adequately match the measured output above. We can write our model as a difference equation at time k,
$$y_k = -a_1 y_{k-1} + b_0 u_k$$
which, if we write out in full at time k = 3 using the 4 previously collected input/output data pairs, gives
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} -y_0 & u_1 \\ -y_1 & u_2 \\ -y_2 & u_3 \end{bmatrix} \begin{bmatrix} a_1 \\ b_0 \end{bmatrix}$$
or, by inserting the numerical values,
$$\begin{bmatrix} 2 \\ 7 \\ 16 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ -2 & 3 \\ -7 & 2 \end{bmatrix} \begin{bmatrix} a_1 \\ b_0 \end{bmatrix}$$
The solution for the parameter vector is over-constrained as we have 3 equations and only 2 unknowns, so we will use Eqn. 6.23 to solve for the optimum,
$$\hat{\theta}_N = \left(\mathbf{X}_N^T \mathbf{X}_N\right)^{-1} \mathbf{X}_N^T \,\mathbf{y}_N$$
and substituting N = 3 gives
$$\hat{\theta}_3 = \left( \begin{bmatrix} 1 & -2 & -7 \\ 4 & 3 & 2 \end{bmatrix} \begin{bmatrix} 1 & 4 \\ -2 & 3 \\ -7 & 2 \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 & -2 & -7 \\ 4 & 3 & 2 \end{bmatrix} \begin{bmatrix} 2 \\ 7 \\ 16 \end{bmatrix} = \frac{1}{655}\begin{bmatrix} 29/2 & 8 \\ 8 & 27 \end{bmatrix} \begin{bmatrix} -124 \\ 61 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
This means the parameters are $a_1 = -2$ and $b_0 = 1$, giving a model
$$\mathcal{M}(\theta) = \frac{1}{1 - 2q^{-1}}$$
This then should reproduce the measured output data and, if we are lucky, be able to predict future outputs too. It is always a good idea to compare our predictions with what was actually measured.
time   input   actual output   predicted output
  0       1         -1                -
  1       4          2                2
  2       3          7                7
  3       2         16               16

Well, surprise surprise, our predictions look unrealistically perfect! (This problem is continued on page 285 by demonstrating how, by using a recursive technique, we can efficiently process new incoming data pairs.)
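The hand calculation above can be checked numerically. Using the data matrix with the extraction-lost minus signs restored (the first output sample is y₀ = −1, so −y₀ = 1), solving the normal equations of Eqn. 6.23 in Python reproduces the same parameters:

```python
import numpy as np

# Data matrix for the worked example, with signs restored (y0 = -1 assumed)
X = np.array([[ 1.0, 4.0],    # [-y0, u1]
              [-2.0, 3.0],    # [-y1, u2]
              [-7.0, 2.0]])   # [-y2, u3]
y = np.array([2.0, 7.0, 16.0])

theta = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations, Eqn. 6.23
# theta -> [-2.0, 1.0], i.e. a1 = -2 and b0 = 1
```

In practice one would prefer `np.linalg.lstsq` over explicitly forming X′X, for the same numerical reasons MATLAB's backslash is preferred over the normal equations.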
6.5.1 Simple offline system identification routines
The following routines automate the offline identification procedure given in the previous section. However, it should be noted that if you are serious about system identification, then a better option is to use the more robust routines from the SYSTEM IDENTIFICATION TOOLBOX. These routines will be described in section 6.5.3.
In Listing 6.8 below, we will use the true ARX process with polynomials

$$A(q) = 1 - 1.9q^{-1} + 1.5q^{-2} - 0.5q^{-3}, \quad \text{and} \quad B(q) = 1 + 0.2q^{-1} \qquad (6.24)$$

to generate 50 input/output data pairs.

Listing 6.8: Generate some input/output data for model identification

G = tf([1,0.2,0,0],[1 -1.9 1.5 -0.5],1);  % Unknown plant: (1+0.2q^-1)/(1-1.9q^-1+1.5q^-2-0.5q^-3)
G.variable = 'z^-1';
u = randn(50,1);     % generate random input
y = lsim(G,u);       % generate output with no noise
Now that we have some trial input/output data, we can write some simple routines to calculate, using least-squares, the coefficients of the underlying model by solving Eqn. 6.19. Once we have performed the regression, we can, in this simulated example, compare our estimated coefficients with the true coefficients given in Eqn. 6.24.
6.5. REGRESSING DISCRETE MODEL PARAMETERS 269
Listing 6.9: Estimate an ARX model from an input/output data series using least-squares

na = 3; nb = 2;          % # of unknown parameters in the denominator, na, & numerator, nb
N = length(y);           % length of data series
nmax = max(na,nb)+1;     % max order

X = y(nmax-1:N-1);       % construct the data matrix of Eqn. 6.19
for i=2:na
    X = [X, y(nmax-i:N-i)];
end
for i=0:nb-1
    X = [X, u(nmax-i:N-i)];
end
y_lhs = y(nmax:N);       % left-hand side of y = X*theta
theta = X\y_lhs          % theta = pinv(X)*y

G_est = tf(theta(na+1:end)',[1 -theta(1:na)'],-1,'Variable','z^-1')
If you run the estimation procedure in Listing 6.9, your estimated model, G_est, should be identical to the model used to generate the data, G, in Listing 6.8. This is because in this simulation example we have no model/plant mismatch, no noise, and we use a rich, persistently exciting identification signal.
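For readers without MATLAB, the same construction can be sketched in NumPy (an illustrative translation, not from the original text): simulate the noise-free plant of Eqn. 6.24 as a difference equation, build the regressor matrix row by row, and solve by least-squares.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
u = rng.standard_normal(N)

# Simulate A(q)y = B(q)u from Eqn. 6.24 as a difference equation:
# y_k = 1.9 y_{k-1} - 1.5 y_{k-2} + 0.5 y_{k-3} + u_k + 0.2 u_{k-1}
y = np.zeros(N)
for k in range(N):
    y[k] = u[k]
    if k >= 1:
        y[k] += 1.9 * y[k - 1] + 0.2 * u[k - 1]
    if k >= 2:
        y[k] -= 1.5 * y[k - 2]
    if k >= 3:
        y[k] += 0.5 * y[k - 3]

# Regressor rows [y_{k-1}, y_{k-2}, y_{k-3}, u_k, u_{k-1}] for k = 3..N-1
rows = [[y[k-1], y[k-2], y[k-3], u[k], u[k-1]] for k in range(3, N)]
theta = np.linalg.lstsq(np.array(rows), y[3:], rcond=None)[0]
print(theta)   # close to [1.9, -1.5, 0.5, 1.0, 0.2]
```

With noise-free data and a persistently exciting input, the recovery is exact to machine precision, mirroring the MATLAB result.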
As an aside, it is possible to construct the Vandermonde-style data matrix (X in Eqn. 6.20) using Toeplitz (or the closely related Hankel) matrices, as shown in Listing 6.10.
Listing 6.10: An alternative way to construct the data matrix for ARX estimation using Toeplitz matrices. See also Listing 6.9.

na = length(A)-1; nb = length(B);   % # of parameters to be estimated
nmax = max(na,nb);

Y = toeplitz(y(nmax:end-1),y(nmax:-1:nmax-na+1));
U = toeplitz(u(nmax+1:end),u(nmax+1:-1:nmax-nb+2));
X = [Y,U];                          % form data matrix, X
theta = X\y(nmax+1:end)             % theta = pinv(X)*y
Note that in this implementation, $n_a$ is both the order of the $A(q^{-1})$ polynomial and the number of unknown coefficients to be estimated given that A is monic, while $n_b$ is the order of B plus one.
6.5.2 Bias in the parameter estimates

The strategy to identify ARX systems has the advantage that the regression problem is linear, and the estimated parameters will be consistent; that is, they can be shown to converge to the true parameters as the number of samples increases, provided the disturbing noise is white. If, however, the disturbances are non-white, as in the case where we have a non-trivial C(q) polynomial, then the estimated A(q) and B(q) parameters will exhibit a bias.
To illustrate this problem, suppose we try to estimate the A and B polynomial coefficients in the ARMAX process

$$(1 + 0.4q^{-1} - 0.5q^{-2})\,y_k = q^{-1}(1.2 - q^{-1})\,u_k + \underbrace{(1 + 0.7q^{-1} + 0.3q^{-2})}_{\text{noise filter}}\,e_k \qquad (6.25)$$
where we have a non-trivial colouring C polynomial disturbing the process. However, the results shown in Fig. 6.32 illustrate the failure of the ARX estimation routine to properly identify the A(q) and B(q) parameters. Note that the estimate of $a_1$ after 300 iterations is around 0.2, which is some distance from the true result of 0.4. Similar errors are evident for $a_2$ and $b_1$, and it does not look like they will converge even if we collect more data. This inconsistency of the estimated parameters is a direct result of the C(q) polynomial which filters the white noise, $e_k$. As the disturbing signal is no longer white, the ARX estimation routine is no longer consistent.
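The mechanism behind this bias can be seen on an even simpler scalar example (a hypothetical illustration, not the plant of Eqn. 6.25): for the ARMA process $y_k = 0.5y_{k-1} + e_k + 0.8e_{k-1}$, the lagged output $y_{k-1}$ is correlated with the coloured disturbance, so the least-squares regression of $y_k$ on $y_{k-1}$ converges to a value noticeably different from the true 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
e = rng.standard_normal(N)

# ARMA(1,1): y_k = 0.5 y_{k-1} + e_k + 0.8 e_{k-1}  (coloured disturbance)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + e[k] + 0.8 * e[k - 1]

# ARX-style least-squares estimate of the coefficient on y_{k-1}
a_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
print(a_hat)   # converges to about 0.75, not the true 0.5
```

No amount of extra data cures this: the estimate converges, but to the lag-one autocorrelation of the ARMA process rather than to the true autoregressive coefficient, which is exactly the inconsistency seen in Fig. 6.32.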
Figure 6.32: In this example, the ARX estimation routine fails to converge to the true A(q) and B(q) parameters (the input/output data and the parameter estimates $a_1$, $a_2$, $b_0$, $b_1$ are plotted against sample number). This bias is due to the presence of the C(q) polynomial which colours the noise.
To obtain the correct parameters without the bias, we have no option other than to estimate the C(q) polynomial as well. This ARMAX estimation strategy is unfortunately a nonlinear estimation problem, and appropriate identification strategies are available in the SYSTEM IDENTIFICATION TOOLBOX described next.

6.5.3 Using the System Identification toolbox

As an alternative to using the rather simple and somewhat home-grown identification routines developed in section 6.5.1 above, we could use the collection in the SYSTEM IDENTIFICATION TOOLBOX. This comprehensive set of routines for dynamic model identification closely follows [124] and is far more reliable and robust than the purely illustrative routines presented so far.
The simplest model considered by the SYSID toolbox is the auto-regressive with exogenous input, or ARX model, described in Eqn. 6.10. However, for computational reasons, the toolbox separates the deadtime from the B polynomial so that the model is now written

$$A(q^{-1})\,y = q^{-d}B(q^{-1})\,u + e \qquad (6.26)$$

where the integer d is the assumed number of delays. The arx routine will, given input/output data, try to establish reasonable values for the coefficients of the polynomials A and B using least-squares, functionally similar to the procedure given in Listing 6.9.
Of course in practical identification cases we do not know the structure of the model (i.e. the order of B, A and the actual sample delay), so we must guess a trial structure. The arx command returns both the estimated parameters and associated uncertainties in an object, which we will call testmodel, and which we can view with present. Once we have fitted our model, we can compare the model predictions with the actual output data using the compare command. The script in Listing 6.11 demonstrates the loading, fitting and comparing, but in this case we deliberately use a structurally deficient model.
Listing 6.11: Offline system identification using arx from the System Identification Toolbox

>> plantio = iddata(y,u,1);          % Use the data generated from Listing 6.8.
>> testmodel = arx(plantio,[2 2 0])  % estimate model coefficients
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + e(t)
A(q) = 1 - 1.368 q^-1 + 0.5856 q^-2
B(q) = 1.108 + 0.8209 q^-1

Estimated using ARX from data set plantio
Loss function 0.182791 and FPE 0.214581
Sampling interval: 1

>> compare(plantio,testmodel)        % See Fig. 6.33.
Figure 6.33: Offline system identification using the System Identification Toolbox and a structurally deficient model. The plot compares the measured output with the simulated model output (testmodel fit: 71.34%).
Note however that with no structural mismatch, the arx routine in Listing 6.12 should manage to find the exact values for the model coefficients,
Listing 6.12: Offline system identification with no model/plant mismatch

>> testmodel = arx(plantio,[3 2 0])  % Data from Listing 6.11 with correct model structure.
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + e(t)
A(q) = 1 - 1.9 q^-1 + 1.5 q^-2 - 0.5 q^-3
B(q) = 1 + 0.2 q^-1

Estimated using ARX from data set plantio
Loss function 1.99961e-029 and FPE 2.44396e-029
Sampling interval: 1
which it does.
Note that in this case the identification routine perfectly identified our plant, and that the loss function given in the result summary is practically zero ($\approx 10^{-30}$). This indicates an extraordinary goodness of fit. This is not unexpected, since we have used a perfect linear process whose structure we knew perfectly, although naturally we could not expect this in practice.
The arx command essentially solves a linear regression problem using the special MATLAB command \ (backslash), as shown in the simple version given in Listing 6.9. Look at the source of arx for further details.
Purely auto-regressive models

As the name suggests, purely auto-regressive models have no measured input,

$$A(q^{-1})\,y_k = e_k \qquad (6.27)$$

and are used when we have a single time series of data which we suspect is simply disturbed by random noise. In Listing 6.13 we attempt to fit a 3rd-order AR model, and we get reasonable results with the coefficients within 1% of the true values. (We should of course compare the pole-zero plots of the model and plant because the coefficients of the polynomials are typically numerically ill-conditioned.) Note however that we used 1000 data points for this 3-parameter model, far more than we would need for an ARX model.
Listing 6.13: Demonstrate the fitting of an AR model.

>> N = 1e3; e = randn(N,1);      % Take 1000 data points
>> B = 1; A = [1 0.4 0.1 0.2];   % True AR plant: A(q^-1) = 1 + 0.4q^-1 + 0.1q^-2 + 0.2q^-3
>> y = filter(B,A,e);
>> Gm = ar(iddata(y),3);         % Fit AR model with no model/plant mismatch
Discrete-time IDPOLY model: A(q)y(t) = e(t)
A(q) = 1 + 0.419 q^-1 + 0.1008 q^-2 + 0.1959 q^-3

Estimated using AR ('fb'/'now') from data set Dat
Loss function 1.00981 and FPE 1.01587
Sampling interval: 1
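A rough NumPy equivalent of this AR fit (an illustrative sketch, not from the text) simulates the same all-pole filter and regresses each sample on its three predecessors:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000
e = rng.standard_normal(N)

# All-pole plant: y_k = -0.4 y_{k-1} - 0.1 y_{k-2} - 0.2 y_{k-3} + e_k
y = np.zeros(N)
for k in range(N):
    y[k] = e[k]
    if k >= 1:
        y[k] -= 0.4 * y[k - 1]
    if k >= 2:
        y[k] -= 0.1 * y[k - 2]
    if k >= 3:
        y[k] -= 0.2 * y[k - 3]

# Regress y_k on [y_{k-1}, y_{k-2}, y_{k-3}]; the AR coefficients are -theta
X = np.column_stack([y[2:-1], y[1:-2], y[0:-3]])
theta = np.linalg.lstsq(X, y[3:], rcond=None)[0]
a_hat = -theta
print(a_hat)   # close to [0.4, 0.1, 0.2]
```

Unlike the ARX case, there is no exciting input here; all the information comes from the driving noise, which is why so many more samples are needed for comparable accuracy.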
Estimating noise models

Estimating noise models is complicated since some of the regressors are now functions themselves of the unknown parameters. This means the optimisation problem is now nonlinear, compared to the relatively straightforward linear arx case. However, as illustrated in section 6.5.2, we need to estimate the noise models to eliminate the bias in the parameter estimation.
In Listing 6.14 we generate some input/output data from an output-error process,

$$y(k) = \frac{B}{F}\,u(k-d) + e(k) = \frac{1 + 0.5q^{-1}}{1 - 0.5q^{-1} + 0.2q^{-2}}\,u(k-1) + e(k)$$

whose parameters are, of course, in reality unknown. The easiest way to simulate an output-error process is to use the idmodel/sim command from the System ID toolbox.
Listing 6.14: Create an input/output sequence from an output-error plant.

>> Ts = 1.0;                    % Sample time T
>> NoiseVariance = 0.1;
>> F = [1 -0.5 0.2];            % F(q^-1) = 1 - 0.5q^-1 + 0.2q^-2
>> B = [0 1 0.5];               % Note one unit of delay: B(q^-1) = q^-1 + 0.5q^-2
>> A = 1; C = 1; D = 1;         % OE model: y = (B/F)u + e
>> G = idpoly(A,B,C,D,F,NoiseVariance,Ts)
Discrete-time IDPOLY model: y(t) = [B(q)/F(q)]u(t) + e(t)
B(q) = q^-1 + 0.5 q^-2
F(q) = 1 - 0.5 q^-1 + 0.2 q^-2

This model was not estimated from data.
Sampling interval: 1

>> N = 10000;                   % # of samples
>> U = randn(N,1); E = 0.1*randn(N,1);   % Input & noise
>> Y = sim(G,[U E]);
Using the data generated in Listing 6.14, we can attempt to fit both an output-error and an arx model. Listing 6.15 shows that in fact the structurally incorrect arx model is almost as good as the structurally correct output-error model.
Listing 6.15: Parameter identification of an output error process using oe and arx.

>> Z = iddata(Y,U,Ts);          % package the data generated in Listing 6.14
>> Gest_oe = oe(Z,'nb',2,'nf',2,'nk',1,'trace','on');
>> present(Gest_oe);
Discrete-time IDPOLY model: y(t) = [B(q)/F(q)]u(t) + e(t)
B(q) = 1.002 (+-0.0009309) q^-1 + 0.5009 (+-0.002302) q^-2
F(q) = 1 - 0.4987 (+-0.001718) q^-1 + 0.1993 (+-0.00107) q^-2

Estimated using OE from data set Z
Loss function 0.00889772 and FPE 0.00890484
Sampling interval: 1
Created: 02-Jul-2007 11:35:15
Last modified: 02-Jul-2007 11:35:16

>> Gest_arx = arx(Z,'na',2,'nb',2,'nk',1);
>> present(Gest_arx);
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + e(t)
A(q) = 1 - 0.4816 (+-0.001879) q^-1 + 0.1881 (+-0.001366) q^-2
B(q) = 1.001 (+-0.001061) q^-1 + 0.518 (+-0.002154) q^-2

Estimated using ARX from data set Z
Loss function 0.0111952 and FPE 0.0112041
Sampling interval: 1
Created: 02-Jul-2007 11:35:16
Last modified: 02-Jul-2007 11:35:16
The System Identification toolbox has optional parameters for the fitting of the nonlinear models. One of the more useful options is the ability to fix certain parameters to known values. This can be achieved using the FixParameter option.
6.5.4 Fitting parameters to state space models

So far we have concentrated on finding parameters for transfer function models, which means we are searching for the coefficients of just two polynomials. However, in the multivariable case we need to identify state-space models, which means estimating the elements in the $\Phi$, $\Delta$ and C matrices. If we are able to reliably measure the states, then we can use the same least-squares approach (using the pseudo-inverse) that we used in the transfer function identification. If, however, and more likely, we cannot directly measure the states, x, but only the output variables, y, then we need to use the more complex subspace approach described on page 276.
The state-space identification problem starts with a multivariable discrete model

$$x_{k+1} = \Phi x_k + \Delta u_k, \qquad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m \qquad (6.28)$$

where we want to establish the elements in the model matrices $\Phi$, $\Delta$ given the input/state sequences u and x. To ensure we have enough degrees of freedom, we would expect to need to collect at least $(n+m)$ data pairs, although typically we would collect many more if we could.
Transposing Eqn. 6.28 gives $x_{k+1}^T = x_k^T\Phi^T + u_k^T\Delta^T$, or

$$x_{k+1}^T = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}_k
\underbrace{\begin{bmatrix}
\phi_{1,1} & \phi_{2,1} & \cdots & \phi_{n,1} \\
\phi_{1,2} & \phi_{2,2} & \cdots & \phi_{n,2} \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{1,n} & \phi_{2,n} & \cdots & \phi_{n,n}
\end{bmatrix}}_{\Phi^T,\; n \times n}
+ \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix}_k
\underbrace{\begin{bmatrix}
\delta_{1,1} & \delta_{2,1} & \cdots & \delta_{n,1} \\
\vdots & \vdots & \ddots & \vdots \\
\delta_{1,m} & \delta_{2,m} & \cdots & \delta_{n,m}
\end{bmatrix}}_{\Delta^T,\; m \times n}$$

We can see that it is simple to rearrange this equation into

$$x_{k+1}^T = \underbrace{\begin{bmatrix} x_1 & x_2 & \cdots & x_n & | & u_1 & \cdots & u_m \end{bmatrix}_k}_{n+m}
\underbrace{\begin{bmatrix} \Phi^T \\ \Delta^T \end{bmatrix}}_{\Theta,\;(n+m)\times n} \qquad (6.29)$$

where we have stacked the unknown model matrices $\Phi$ and $\Delta$ together into one large parameter matrix $\Theta$. If we can estimate this matrix, we have achieved our model fitting.
Currently Eqn. 6.29 comprises n equations, but far more, $(n+m)n$, unknowns: the elements in the $\Phi$ and $\Delta$ matrices. To address this degree-of-freedom problem, we need to collect more input/state data. If we collect say N more input/states and stack them underneath Eqn. 6.29, we get
$$\underbrace{\begin{bmatrix} x_{k+1}^T \\ x_{k+2}^T \\ \vdots \\ x_{k+N+1}^T \end{bmatrix}}_{N \times n}
= \underbrace{\begin{bmatrix}
\begin{bmatrix} x_1 & \cdots & x_n & | & u_1 & \cdots & u_m \end{bmatrix}_k \\
\begin{bmatrix} x_1 & \cdots & x_n & | & u_1 & \cdots & u_m \end{bmatrix}_{k+1} \\
\vdots \\
\begin{bmatrix} x_1 & \cdots & x_n & | & u_1 & \cdots & u_m \end{bmatrix}_{k+N}
\end{bmatrix}}_{N \times (n+m)} \Theta \qquad (6.30)$$

or $Y = X\Theta$. Inverting this, using the pseudo-inverse of the data matrix X, gives the least-squares parameters

$$\Theta = \begin{bmatrix} \Phi^T \\ \Delta^T \end{bmatrix} = X^{+}Y \qquad (6.31)$$

and hence the $\Phi$ and $\Delta$ matrices as desired.
The computation of the model matrices $\Phi$ and $\Delta$ is a linear least-squares regression which is easy to program and robust to execute. However, the normal restrictions on the invertibility and conditioning of the data matrix X still apply in order to obtain meaningful solutions. The largest drawback of this method is that we must measure the states x, as opposed to only the outputs. (If we only have outputs, then we need to use the more sophisticated technique described on page 276.) If we had measured the outputs in addition to the states, it is easy to calculate the C matrix subsequently using another least-squares regression.
In MATLAB we could use the fragment below to compute $\Phi$ and $\Delta$ given a sequence of x(k) and u(k).

[y,t,X] = lsim(G,U,[],x0);          % Discrete I/O data collected
Xdata = [X,U]; Xdata(end,:) = [];   % data matrix
y = X; y(1,:) = [];                 % delete first row
theta = Xdata\y;                    % solve for parameters: Theta = pinv(X)*Y
Phiest = theta(1:n,:)'; Delest = theta(n+1:end,:)';
If any of the parameters are known a priori, they can be removed from the parameter matrix, and the constants substituted. If any dependencies between the parameters are known (such as a = 3b), then this may be incorporated as a constraint, and a nonlinear optimiser used to search for the resulting best-fit parameter vector. The SYSTEM IDENTIFICATION TOOLBOX incorporates both these extensions.
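A NumPy sketch of the same regression (with an arbitrary, made-up second-order plant, not from the text) confirms that, given noise-free state measurements, the stacked least-squares problem of Eqn. 6.31 recovers $\Phi$ and $\Delta$ exactly:

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])      # true state-transition matrix (n = 2)
Delta = np.array([[1.0],
                  [0.5]])          # true input matrix (m = 1)

# Simulate x_{k+1} = Phi x_k + Delta u_k for 50 steps
N = 50
u = rng.standard_normal((N, 1))
x = np.zeros((N + 1, 2))
for k in range(N):
    x[k + 1] = Phi @ x[k] + Delta @ u[k]

# Stack the data: Y = X * Theta with Theta = [Phi'; Delta']  (Eqn. 6.31)
Xdata = np.hstack([x[:-1], u])    # N x (n+m) data matrix
Y = x[1:]                          # N x n target matrix
Theta = np.linalg.lstsq(Xdata, Y, rcond=None)[0]
Phi_est, Delta_est = Theta[:2].T, Theta[2:].T
```

The transposes in the last line simply undo the stacking convention of Eqn. 6.29, mirroring the Phiest/Delest lines of the MATLAB fragment above.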
Estimating the confidence limits for parameters in dynamic systems uses the same procedure as outlined in section 3.4.3. The difficult calculation is to extract the data matrix, X, (as defined by Eqn. 3.62) accurately. [23, pp. 226-228] describes an efficient method which develops and subsequently integrates sensitivity equations, which is, in general, the preferred technique to one using finite differences.
Identification of state-space models without state measurements

The previous method required state measurements, but it would be advantageous to be able to estimate the model matrices while only being required to measure input, u, and output, y, data. This is addressed in a method known as subspace identification, described in [71, p536] and in more detail by the developers in [153]. The state-space subspace algorithm for identification, known as n4sid, is available in the System Identification toolbox.
Summary of identification alternatives in state-space

Given the underlying system

$$x_{k+1} = \Phi x_k + \Delta u_k, \qquad y_k = Cx_k$$

then Table 6.2 summarises the identification possibilities.

Table 6.2: Identification alternatives in state-space

    We measure                     Wish to calculate        Algorithm                   page #
    u_k, y_k, Phi, Delta, C        x_k                      Kalman filter               431
    u_k, y_k, x_k                  Phi, Delta, C            Normal linear regression    274
    u_k, y_k                       n, Phi, Delta, C, x_k    Subspace identification     276
6.6 Model structure determination and validation

The basic linear ARX model used in Eqn. 6.26 has three structural parameters: the number of poles, $n_a$, the number of zeros, $n_b$, and the number of samples of delay, d. Outside of regressing suitable parameters given a tentative model structure, we need some method to establish what a tentative structure is. Typically this involves some iteration in the model fitting procedure.

If we inadvertently choose too high an order for the model, we are in danger of overfitting the data, which will probably mean that our model will perform poorly on new data. Furthermore, for reasons of efficiency, we would like the simplest model, with the smallest number of parameters, that still adequately predicts the output. This is the rationale for dividing the collected experimental data into two sets: one estimation set used for model fitting, and one validation set used to distinguish between model structures.
One way to penalise overly complex models is to use the Akaike Information Criterion (AIC) which, for normally distributed errors, is

$$\text{AIC} = 2p + N\ln\left(\frac{\text{SSE}}{N}\right) \qquad (6.32)$$

where p is the number of parameters, N is the number of observations, and SSE is the sum of squared errors. The SYSTEM IDENTIFICATION toolbox computes the AIC.
If we increase the number of parameters to be estimated, the sum of squared errors decreases, but this is partially offset by the 2p term which increases. Hence by using the AIC we try to find a balance between models that fit the data but are parsimonious in parameters.
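For instance (with made-up numbers), suppose a 2-parameter model achieves SSE = 10.0 on N = 100 samples, while a 6-parameter model achieves SSE = 9.9. A quick calculation of Eqn. 6.32 shows the tiny improvement in fit does not pay for the four extra parameters:

```python
import math

def aic(p, N, sse):
    """Akaike Information Criterion of Eqn. 6.32 (Gaussian errors)."""
    return 2 * p + N * math.log(sse / N)

N = 100
aic_small = aic(p=2, N=N, sse=10.0)   # about -226.3
aic_big = aic(p=6, N=N, sse=9.9)      # about -219.3
print(aic_small < aic_big)            # the smaller model is preferred
```

The model with the lower AIC is preferred, so here the parsimonious 2-parameter model wins despite its marginally larger residual.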
6.6.1 Estimating model order

If we restrict our attention to linear models, the most crucial decision to make is that of model order. We can estimate the model order from a spectral analysis, perhaps by looking at the high-frequency roll-off of the experimentally derived Bode diagram.

An alternative strategy is to monitor the rank of the matrix $X^TX$. If we over-estimate the order, then the matrix $X^TX$ will become singular. Of course with additive noise, the matrix may not be exactly singular, but it will certainly become ill-conditioned. Further details are given in [126, pp. 496-497].
Identification of deadtime

The number of delay samples, d, in Eqn. 6.26 is a key structural parameter to be established prior to any model fitting. In some cases, an approximate value of the deadtime is known from the physics of the system, but this is not always a reliable indicator. For example, the blackbox from Fig. 1.4 should not possess any deadtime since it is composed only of a passive resistor/capacitor network. However the long time constant, the high system order, or the enforced one-sample delay in the A/D sampler may make it appear as if there is some deadtime.
Question: If we suspect some deadtime, how do we confirm & subsequently measure it?

1. Do a step test and look for a period of inactivity.
2. Use the SYSTEM IDENTIFICATION TOOLBOX (SITB) and optimise the fit for deadtime. This will probably require a trial & error search.
3. Use some statistical tests (such as correlation) to estimate the deadtime.

Note: Obviously the deadtime will be an integer multiple of the sampling time. So if you want an accurate deadtime, you will need a small sample time T. However, overly fast sampling will cause problems in the numerical coefficients of the discrete model.
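Option 3 above can be sketched with a simple cross-correlation search (a hypothetical example, not the blackbox data): delay a random input by d = 2 samples through a first-order lag, then pick the lag that maximises the input/output cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500
u = rng.standard_normal(N)

# Plant: a pure delay of d = 2 samples, a first-order lag, and a little noise
d_true = 2
y = np.zeros(N)
for k in range(N):
    y[k] = 0.6 * y[k - 1] if k else 0.0
    if k >= d_true:
        y[k] += u[k - d_true]
    y[k] += 0.05 * rng.standard_normal()

# Estimate the deadtime as the lag with the largest cross-correlation
xcorr = [np.dot(u[:N - d], y[d:]) for d in range(6)]
d_hat = int(np.argmax(xcorr))
print(d_hat)
```

Because the lag spreads the input energy over subsequent samples, the correlation peaks sharply at the true delay and decays geometrically afterwards, making the argmax a robust estimator here.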
Method #1: Apply a step response & look for the first sign of life.

1. First we do an open-loop step test using a sample time T = 0.5 seconds.
2. We should repeat the experiment at a different sampling rate to verify the deadtime. Fig. 6.35 compares both T = 1 and T = 0.1 seconds.
Figure 6.34: Identification of deadtime from the step response of the blackbox (input and output plotted against time, with two zoomed regions). Note that the deadtime is approximately $2T \approx 1$ second.

Figure 6.35: Deadtime estimation at both T = 1 and T = 0.1 s.
Method #2: Use trial & error with the SYSTEM IDENTIFICATION TOOLBOX. We can collect some input/output data and identify a model $\mathcal{M}(\theta)$ with different deadtimes, namely d = 0 to 3. Clearly the case in Fig. 6.36 where d = 2 has the least error.

This shows, at least for the model $\mathcal{M}(\theta)$ with $n_a = n_b = 2$, that the optimum number of delays is d = 2.
Modelling the blackbox

A model of the experimental blackbox is useful to test control schemes before you get to the lab. Based on the resistor/capacitor internals of the black-box, we expect an overdamped plant with time constants in the range of 2 to 5 seconds.

Identification tests on my blackbox gave the following continuous transfer function model

$$\hat{G}_{bb}(s) = \frac{0.002129s^2 + 0.02813s + 0.5788}{s^2 + 2.25s + 0.5362}\,e^{-0.5s}$$

or in factored form

$$= \frac{0.002129(s^2 + 13.21s + 271.9)}{(s + 1.9791)(s + 0.2709)}\,e^{-0.5s}$$

Fig. 6.37 illustrates the SIMULINK experimental setup and compares the model predictions to the actual measured data.
Figure 6.36: Model prediction and actual output using a variety of tentative deadtimes, d = 0, 1, 2, 3.

Figure 6.37: A dynamic model for the blackbox compared to actual output data. (a) Blackbox experimental setup in SIMULINK; (b) blackbox model predictions compared to actual data.
6.6.2 Robust model fitting

So far we have chosen to fit models by selecting parameters that minimise the sum of squared residuals. This is known as the principle of least-squares prediction error, and is equivalent to the maximum likelihood method when the uncertainty distribution is Gaussian. This strategy has some very nice properties, first identified by Gauss in 1809, such as the fact that the estimates are consistent (that is, they eventually converge to the true parameters), the estimate is efficient in that no other unbiased estimator has a smaller variance, and the numerical strategies are simple and robust, as described in [10].

However, by squaring the error, we run into problems in the presence of outliers in the data. In these cases, which are clearly non-Gaussian but actually quite common, we must modify the form of the loss function. One strategy, due to [90], is to use a quadratic form when the errors are small, but a linear form for large errors; another, suggested in [10], is to use something like

$$f(\epsilon) = \frac{\epsilon^2}{1 + a\epsilon^2} \qquad (6.33)$$

and there are many others. Numerical implementations of these robust fitting routines are given in [201].
6.6.3 Common nonlinear model structures

Nonlinear models, or more precisely, dynamic models that are nonlinear in the parameters, are considerably more complex to identify than linear models such as ARX, or even pseudo-linear models such as ARMAX, as outlined in the unified survey presented in [184]. For this reason, fully general nonlinear models are far less commonly used for control and identification than linear models. However, there are cases where we will need to account for some process nonlinearities, and one popular way is to only consider static nonlinearities that can be separated from the linear dynamics.

Fig. 6.38 shows one way these block-oriented static nonlinear models can be structured. These are known as Hammerstein-Wiener models. The only difference between the two structures is the position of the nonlinear block element. In the Hammerstein model, the nonlinearity precedes the linear dynamics, such as when we are modelling input nonlinearities like equal-percentage control valves; in the Wiener model, the nonlinearity follows the linear dynamics, such as when we are modelling nonlinear thermocouples. Note of course that, unlike linear systems, the order of the blocks does make a difference, since in general nonlinear systems do not commute.

The SYSTEM IDENTIFICATION TOOLBOX can identify a selection of commonly used nonlinear models in Hammerstein and Wiener structures using the nlhw routine.
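The remark that block order matters is easy to demonstrate numerically. The sketch below (an illustration with made-up blocks, not toolbox code) pushes the same input through a squaring nonlinearity and a first-order filter in both orders, and the two outputs differ:

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(200)

def linear(v):
    """First-order linear dynamics: y_k = 0.7 y_{k-1} + 0.3 v_k."""
    y = np.zeros_like(v)
    for k in range(len(v)):
        y[k] = 0.7 * y[k - 1] + 0.3 * v[k] if k else 0.3 * v[k]
    return y

def nonlin(x):
    return x ** 2                  # static nonlinearity N(.)

y_hammerstein = linear(nonlin(u))  # N(.) first, then dynamics
y_wiener = nonlin(linear(u))       # dynamics first, then N(.)
print(np.allclose(y_hammerstein, y_wiener))   # the blocks do not commute
```

Swapping two linear blocks would leave the output unchanged; it is the static nonlinearity that destroys commutativity, which is why the Hammerstein and Wiener structures must be treated as distinct model classes.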
6.7 Online model identification

In many practical cases, such as adaptive control, or where the system may change from day to day, we will estimate the parameters of our models online. Other motivations for online identification given by [124, p303] include optimal control with model following, using matched filters, failure prediction etc. Because the identification is online, it must also be reasonably automatic. The algorithm must pick a suitable model form (number of dead times, order of the difference equation etc.), guess initial parameters, and then calculate residuals and measures of fit.
Figure 6.38: Hammerstein and Wiener model structures. In the Hammerstein model, a static nonlinearity $v_t = N(u_t)$ precedes the linear dynamics $B(q^{-1})/A(q^{-1})$; in the Wiener model, the linear dynamics come first and the static nonlinearity $y_t = N(v_t)$ follows.
As time passes, we should expect that, owing to the increasing number of data points we are continually collecting, we are able to build better and better models. The problem with using the offline identification scheme as outlined in section 6.4 is that the data matrix, X in Eqn. 6.20, will grow and grow as we add more input/output rows to it. Eventually this matrix will grow too large to store or manipulate in our controller. There are two obvious solutions to this problem:

1. Use a sliding window where we retain only, say, the last 50 input/output data pairs.
2. Use a recursive scheme where we achieve the same result, but without the waste of throwing old data away.

The following section develops the second, and more attractive, alternative known as RLS or recursive least-squares.
6.7.1 Recursive least squares

As more data is collected, we can update the current estimate of the model parameters. If we make the update at every sample time, our estimation procedure is no longer offline, but now online. One approach to take advantage of the new data pair would be to add another row to the X matrix in Eqn. 6.20 as each new data pair is collected. Then a new estimate can be obtained using Eqn. 6.23 with the now augmented X matrix. This equation would be solved every sample time. However, this scheme has the rather big disadvantage that the X matrix grows as each new data point is collected, so consequently one will run out of storage memory, to say nothing of the growing impracticality of the matrix inversion required. The solution is to use a recursive formula for the estimation of the new $\theta_{k+1}$ given the previous $\theta_k$. Using this scheme, we have both constant-sized matrices to store and process, and no need to sacrifice old data.
Mean and variance calculated recursively

Digressing somewhat, let us briefly look at the power of a recursive calculation scheme, and the possible advantages that has for online applications. Consider the problem of calculating the mean or average, $\bar{x}$, of a collection of N samples, $x_i$. The well-known formula for the mean is

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

Now what happens (as it often does when I collect and grade student assignments) is that after I have laboriously summed and averaged the class assignments, I receive a late assignment or value $x_{N+1}$. Do I need to start my calculations all over again, or can I use a recursive formula making use of the old mean $\bar{x}_N$ based on N samples to calculate the new mean based on N + 1 samples? Starting with the standard formula for the new mean, we can develop an equivalent recursive equation

$$\bar{x}_{N+1} = \frac{1}{N+1}\sum_{i=1}^{N+1} x_i = \frac{1}{N+1}\left(x_{N+1} + \sum_{i=1}^{N} x_i\right) = \frac{1}{N+1}\left(x_{N+1} + N\bar{x}_N\right)$$

$$= \underbrace{\bar{x}_N}_{\text{old}} + \underbrace{\frac{1}{N+1}}_{\text{gain}}\underbrace{\left(x_{N+1} - \bar{x}_N\right)}_{\text{error}}$$

which is in the form of: new estimate = old estimate + gain x error. Note that now the computation of the new mean, assuming we already have the old mean, is much faster and more numerically sound.
In a similar manner, the variance can also be calculated recursively. The variance at sample number N + 1, based on the mean and variance using samples 1 to N, is

$$\sigma_{N+1}^2 = \sigma_N^2 + \frac{1}{N+1}\left(\frac{N}{N+1}\left(x_{N+1} - \bar{x}_N\right)^2 - \sigma_N^2\right) \qquad (6.34)$$

which once again is a linear combination of the previous estimate and a residual. For further details, see [121, p29]. This recursive way to calculate the variance, published by B.P. Welford, goes back to 1962 and has been shown to have good numerical properties.
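The two recursions translate directly into code. The following sketch (not from the text; it assumes the population definition of variance) updates the mean via the gain-times-error form and the variance via the Eqn. 6.34 recursion, then checks both against the batch formulas:

```python
import numpy as np

rng = np.random.default_rng(11)
data = rng.standard_normal(200) * 3.0 + 5.0

xbar, var = data[0], 0.0          # estimates after the first sample
for N, x_new in enumerate(data[1:], start=1):
    err = x_new - xbar                                    # innovation
    xbar = xbar + err / (N + 1)                           # new = old + gain*error
    var = var + ((N / (N + 1)) * err**2 - var) / (N + 1)  # Eqn. 6.34 recursion

print(xbar, var)   # matches np.mean(data) and np.var(data)
```

Note that the variance update must use the error computed from the *old* mean; updating the mean first and reusing its value in the variance formula would break the recursion.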
A recursive algorithm for least-squares

Suppose we have an estimate of the model parameters at time t = N, $\theta_N$, perhaps calculated offline, and we subsequently collect a new data pair $(u_{N+1}, y_{N+1})$. To find the new improved parameters, $\theta_{N+1}$, Eqn. 6.20 is augmented by adding a new row,

$$\varphi_{N+1}^T = \begin{bmatrix} y_N & y_{N-1} & \cdots & y_{N-n+1} & \vdots & u_{N+1} & u_N & \cdots & u_{N-m+1} \end{bmatrix} \qquad (6.35)$$

to the old data matrix $X_N$ to obtain the new, now slightly larger, system

$$\begin{bmatrix} y_N \\ \cdots \\ y_{N+1} \end{bmatrix} = \begin{bmatrix} X_N \\ \varphi_{N+1}^T \end{bmatrix} \theta_{N+1} \qquad (6.36)$$
or $y_{N+1} = X_{N+1}\theta_{N+1}$. This we can solve in exactly the same way as in Eqn. 6.23,

$$\theta_{N+1} = \left(X_{N+1}^T X_{N+1}\right)^{-1} X_{N+1}^T y_{N+1} \qquad (6.37)$$

$$= \left(\begin{bmatrix} X_N^T & \vdots & \varphi_{N+1} \end{bmatrix}\begin{bmatrix} X_N \\ \varphi_{N+1}^T \end{bmatrix}\right)^{-1}\begin{bmatrix} X_N^T & \vdots & \varphi_{N+1} \end{bmatrix}\begin{bmatrix} y_N \\ \cdots \\ y_{N+1} \end{bmatrix} \qquad (6.38)$$
except that now our problem has grown in the vertical dimension. Note however, the number of
unknowns in the parameter vector remains the same.
However we can simplify the matrix to invert by noting

$$\left(\begin{bmatrix} X_N^T & \vdots & \varphi_{N+1} \end{bmatrix}\begin{bmatrix} X_N \\ \varphi_{N+1}^T \end{bmatrix}\right)^{-1} = \left(X_N^T X_N + \varphi_{N+1}\varphi_{N+1}^T\right)^{-1} \qquad (6.39)$$
The trick is to realise that we have already computed the inverse of $X_N^T X_N$ previously in Eqn. 6.23, and we would like to exploit this in the calculation of Eqn. 6.39. We can do so using the matrix inversion lemma, which states

$$\left(A + BDC\right)^{-1} = A^{-1} - A^{-1}B\left(D^{-1} + CA^{-1}B\right)^{-1}CA^{-1} \qquad (6.40)$$

If B and C are $n \times 1$ and $1 \times n$ (column and row vectors respectively), then Eqn. 6.40 simplifies to

$$\left(A + BC\right)^{-1} = A^{-1} - \frac{A^{-1}BCA^{-1}}{1 + CA^{-1}B} \qquad (6.41)$$
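The rank-one form of the lemma, Eqn. 6.41, is easy to verify numerically. The sketch below (illustrative only, with a deliberately well-conditioned random matrix) compares a direct inverse of the updated matrix against the right-hand side of the lemma:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = np.eye(n) * 5.0 + rng.standard_normal((n, n))   # well-conditioned A
b = 0.1 * rng.standard_normal((n, 1))               # column vector B
c = 0.1 * rng.standard_normal((1, n))               # row vector C

Ainv = np.linalg.inv(A)
direct = np.linalg.inv(A + b @ c)                   # brute-force inverse
lemma = Ainv - (Ainv @ b @ c @ Ainv) / (1.0 + float(c @ Ainv @ b))
print(np.allclose(direct, lemma))
```

The point, of course, is that the right-hand side needs only the old inverse plus some matrix-vector products, which is exactly what makes the recursive update cheap.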
If we compare Eqn. 6.41 with Eqn. 6.39, then with

$$A \stackrel{\text{def}}{=} X_N^T X_N, \quad B \stackrel{\text{def}}{=} \varphi_{N+1}, \quad C \stackrel{\text{def}}{=} \varphi_{N+1}^T$$

we can rewrite Eqn. 6.39 as

$$\left(X_N^T X_N + \varphi_{N+1}\varphi_{N+1}^T\right)^{-1} = \left(X_N^T X_N\right)^{-1} - \frac{\left(X_N^T X_N\right)^{-1}\varphi_{N+1}\varphi_{N+1}^T\left(X_N^T X_N\right)^{-1}}{1 + \varphi_{N+1}^T\left(X_N^T X_N\right)^{-1}\varphi_{N+1}} \qquad (6.42)$$
and we can also rewrite Eqn. 6.38 as

$$\theta_{N+1} = \left(\left(X_N^T X_N\right)^{-1} - \frac{\left(X_N^T X_N\right)^{-1}\varphi_{N+1}\varphi_{N+1}^T\left(X_N^T X_N\right)^{-1}}{1 + \varphi_{N+1}^T\left(X_N^T X_N\right)^{-1}\varphi_{N+1}}\right)\left(X_N^T y_N + \varphi_{N+1}\,y_{N+1}\right) \qquad (6.43)$$
and recalling that the old parameter value, θ_N, was given by Eqn. 6.23, we start to develop the new parameter vector in terms of the old,

    θ_{N+1} = θ_N - [ ( X^T_N X_N )^{-1} φ_{N+1} φ^T_{N+1} θ_N ] / [ 1 + φ^T_{N+1} ( X^T_N X_N )^{-1} φ_{N+1} ]
              + ( X^T_N X_N )^{-1} φ_{N+1} y_{N+1}
              - [ ( X^T_N X_N )^{-1} φ_{N+1} φ^T_{N+1} ( X^T_N X_N )^{-1} φ_{N+1} y_{N+1} ] / [ 1 + φ^T_{N+1} ( X^T_N X_N )^{-1} φ_{N+1} ]        (6.44)

            = θ_N + [ ( X^T_N X_N )^{-1} φ_{N+1} ] / [ 1 + φ^T_{N+1} ( X^T_N X_N )^{-1} φ_{N+1} ] ( y_{N+1} - φ^T_{N+1} θ_N )        (6.45)
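The equivalence of the recursive step, Eqn. 6.45, and the batch solution, Eqn. 6.37, can be checked numerically. The sketch below (a pure-Python illustration; the two-parameter data set is our own, not from the text) starts from the batch estimate on N points and verifies that one recursive step reproduces the batch estimate on N + 1 points.

```python
# Check that one recursive step (Eqn. 6.45) reproduces the batch
# least-squares solution (Eqn. 6.37). Two parameters, five data points;
# the numbers are an arbitrary illustration.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def batch_ls(phis, ys):
    """theta = (X'X)^-1 X'y for a 2-parameter model; also returns P."""
    XtX = [[sum(p[i] * p[j] for p in phis) for j in range(2)] for i in range(2)]
    Xty = [sum(p[i] * y for p, y in zip(phis, ys)) for i in range(2)]
    P = inv2(XtX)
    return [P[i][0] * Xty[0] + P[i][1] * Xty[1] for i in range(2)], P

phis = [[1, 0], [0, 1], [1, 1], [2, 1], [1, 2]]
ys = [2.0, -1.0, 1.5, 2.5, 0.5]          # inconsistent data, nonzero residual

# Batch estimate on the first N = 4 points, then one recursive step
theta, P = batch_ls(phis[:4], ys[:4])
phi, y = phis[4], ys[4]
Pphi = [P[i][0] * phi[0] + P[i][1] * phi[1] for i in range(2)]
den = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
resid = y - (phi[0] * theta[0] + phi[1] * theta[1])
theta_rec = [theta[i] + Pphi[i] / den * resid for i in range(2)]  # Eqn. 6.45

# Batch estimate on all N + 1 = 5 points for comparison
theta_all, _ = batch_ls(phis, ys)
diff = max(abs(theta_rec[i] - theta_all[i]) for i in range(2))
print(diff < 1e-12)
```

Since the derivation above is an exact algebraic rearrangement, the agreement holds to machine precision even though the data do not fit the model exactly.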
284 CHAPTER 6. IDENTIFICATION OF PROCESS MODELS
If we define P as the covariance matrix,

    P_N = ( X^T_N X_N )^{-1}        (6.46)
then the parameter update equation, Eqn. 6.45, is more concisely written as

    θ_{N+1} = θ_N + K_{N+1} ( y_{N+1} - φ^T_{N+1} θ_N )        (6.47)
with the gain K in Eqn. 6.47 and the new covariance given by Eqn. 6.42 as

    K_{N+1} = P_N φ_{N+1} / ( 1 + φ^T_{N+1} P_N φ_{N+1} )        (6.48)

    P_{N+1} = P_N ( I - φ_{N+1} φ^T_{N+1} P_N / ( 1 + φ^T_{N+1} P_N φ_{N+1} ) ) = P_N ( I - φ_{N+1} K^T_{N+1} )        (6.49)
Note that the form of the parameter update equation, Eqn. 6.47, is recursive: the new value of θ_{N+1} is the old value θ_N with an added correction term. As expected, the correction term is proportional to the observed error. A summary of the recursive least-squares (RLS) estimation scheme is given in Algorithm 6.2.
Algorithm 6.2 Recursive least-squares estimation

Initialise the parameter vector, θ_0, to something sensible (say random values), and set the covariance to a large value, P_0 = 10^6 I, to reflect the initial uncertainty in the trial guesses.

1. At sample N, collect a new input/output data pair (y_{N+1} and u_{N+1}).
2. Form the new φ_{N+1} vector by inserting u_{N+1}.
3. Evaluate the new gain matrix K_{N+1}, Eqn. 6.48.
4. Update the parameter vector θ_{N+1}, Eqn. 6.47.
5. Update the covariance matrix P_{N+1}, Eqn. 6.49, which is required for the next iteration.
6. Wait out the remainder of one sample time T, increment the sample counter, N <- N + 1, then go back to step 1.
The routine in Listing 6.16 implements the basic recursive least-squares (RLS) Algorithm 6.2 in MATLAB. This routine will be used in subsequent identification applications, although later in section 6.8 we will improve this basic algorithm to incorporate a forgetting factor.
Listing 6.16: A basic recursive least-squares (RLS) update (without forgetting factor)

function [thetaest,P] = rls0(y,phi,thetaest,P)
% Basic Recursive Least-Squares (RLS) parameter update
% Inputs: y(k), phi(k): current output & past input/output column vector
% Input/Outputs: thetaest, P: parameter estimates & covariance matrix
K = P*phi/(1 + phi'*P*phi);                  % Gain K, Eqn. 6.48
P = P - (P*phi*phi'*P)/(1 + phi'*P*phi);     % Covariance update, P, Eqn. 6.49
thetaest = thetaest + K*(y - phi'*thetaest); % Parameter update, Eqn. 6.47
return % end rls0.m
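For readers working outside MATLAB, the same three-equation update can be sketched in pure Python (our own transliteration, with hand-rolled vector arithmetic rather than any toolbox):

```python
# A pure-Python sketch of the rls0 update:
#   K = P phi / (1 + phi' P phi)              (Eqn. 6.48)
#   P <- P - P phi phi' P / (1 + phi' P phi)  (Eqn. 6.49)
#   theta <- theta + K (y - phi' theta)       (Eqn. 6.47)

def rls0(y, phi, theta, P):
    """One recursive least-squares update (no forgetting factor)."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    den = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
    K = [Pphi[i] / den for i in range(n)]
    # P phi phi' P = (P phi)(P phi)' since P is symmetric
    P = [[P[i][j] - Pphi[i] * Pphi[j] / den for j in range(n)] for i in range(n)]
    resid = y - sum(phi[i] * theta[i] for i in range(n))
    theta = [theta[i] + K[i] * resid for i in range(n)]
    return theta, P

# Usage: identify y_k = 2 u_k - u_{k-1} from noise-free data
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]   # large initial covariance
u = [1.0, -0.5, 2.0, 0.3, -1.2, 0.7]
for k in range(1, len(u)):
    phi = [u[k], u[k - 1]]
    y = 2.0 * u[k] - u[k - 1]
    theta, P = rls0(y, phi, theta, P)
print([round(t, 4) for t in theta])
```

With noise-free data and an exciting input the estimates converge to the true values [2, -1] to within the tiny bias introduced by the finite initial covariance.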
A recursive estimation example

On page 267 we solved for the parameters of an unknown model using an offline technique. Now we will try the recursive scheme given that we have just collected a new input/output data pair at time t = 4,

    time   input   output
      4      6       10

and we wish to update our parameter estimates. Previously we had computed that θ_3 = [-2, 1]^T, but now with the new data we have

    θ_4 = θ_3 + K_4 ( y_4 - φ^T_4 θ_3 )

where φ^T_4 = [y_3, u_4] = [16, 6]. The gain K_4 can be calculated from Eqns. 6.48 and 6.49. Now
    y_4 - φ^T_4 θ_3 = 10 - [ 16  6 ] [ -2 ; 1 ] = 36

which is clearly non-zero. Since this is the residual, we should continue to update the parameter estimates.
The covariance matrix at time t = 3 has already been computed and (repeated here) is

    P_3 = ( X^T_3 X_3 )^{-1} = (1/655) [ 29/2  8 ; 8  27 ] = [ 0.0221  0.0122 ; 0.0122  0.0412 ]
so the new gain matrix K_4 is now

    K_4 = P_3 φ_4 / ( 1 + φ^T_4 P_3 φ_4 ) = [ 0.0407 ; 0.0422 ]
Thus the new parameter estimate, θ_4, is modified to

    θ_4 = [ -2 ; 1 ] + [ 0.0407 ; 0.0422 ] x 36 = [ -0.5338 ; 2.5185 ]
and the covariance matrix P_4 is updated to

    P_4 = P_3 - P_3 φ_4 φ^T_4 P_3 / ( 1 + φ^T_4 P_3 φ_4 ) = [ 0.0047  -0.0058 ; -0.0058  0.0225 ]
Note that P_4 is still symmetric and, less immediately apparent, that it is also still positive definite, which we could verify by computing the eigenvalues. If we subsequently obtain another (u, y) data pair, we can just continue with this recursive estimation scheme.

We can compare the new estimates, θ_4, with the values obtained if we used the non-recursive method of Eqn. 6.23 on the full data set. At this level, both methods show much the same complexity in terms of computation count and numerical round-off problems, but as N tends to infinity, the non-recursive scheme is disadvantaged.
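The arithmetic of this worked example is easily checked. The sketch below (pure Python, with the signs as reconstructed here: φ_4 = [16, 6]^T and θ_3 = [-2, 1]^T) recomputes the gain, the parameter update and the covariance from Eqns. 6.47 to 6.49.

```python
# Verify the worked example: one RLS step from theta_3 = [-2, 1]
# with phi_4 = [16, 6], y_4 = 10 and the given P_3 = (1/655)[29/2 8; 8 27].

P3 = [[29 / 2 / 655, 8 / 655], [8 / 655, 27 / 655]]
theta3 = [-2.0, 1.0]
phi4, y4 = [16.0, 6.0], 10.0

Pphi = [P3[i][0] * phi4[0] + P3[i][1] * phi4[1] for i in range(2)]
den = 1.0 + phi4[0] * Pphi[0] + phi4[1] * Pphi[1]
K4 = [Pphi[i] / den for i in range(2)]                  # gain, Eqn. 6.48
resid = y4 - (phi4[0] * theta3[0] + phi4[1] * theta3[1])
theta4 = [theta3[i] + K4[i] * resid for i in range(2)]  # update, Eqn. 6.47
P4 = [[P3[i][j] - Pphi[i] * Pphi[j] / den for j in range(2)]
      for i in range(2)]                                # covariance, Eqn. 6.49

print([round(k, 4) for k in K4])      # [0.0407, 0.0422]
print(round(resid, 4))                # 36.0
print([round(t, 4) for t in theta4])  # [-0.5338, 2.5185]
```

The recomputed P_4 is also symmetric with the off-diagonal element near -0.0058, matching the example.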
Starting the recursive scheme

In practice there is a small difficulty in starting these recursive-style estimation schemes, since initially we do not know the covariance P_N. However we have two alternatives: we could calculate the first parameter vector using a non-recursive scheme, and then switch to a recursive scheme once we know P_N, or we could just assume that at the beginning, since we have no estimates, our expected error will be large, say P_N = 10^4 I. The latter method is more common in practice, although it takes a few iterations to converge to the correct estimates, if they exist.
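The effect of the large initial covariance is easy to see in a scalar sketch (our own illustration): with a huge P_0 the very first update moves the parameter almost all the way to the value explaining the first observation, whereas a small P_0 would leave the initial guess almost untouched.

```python
# Scalar RLS startup: estimate theta in y = theta * u from one sample,
# starting from theta_0 = 0, for a large and a small initial covariance.

def rls_scalar(y, u, theta, P):
    K = P * u / (1.0 + u * P * u)
    P = P - P * u * u * P / (1.0 + u * P * u)
    return theta + K * (y - u * theta), P

u, y = 1.0, 2.0                         # one observation of y = 2 u
big, _ = rls_scalar(y, u, 0.0, 1e6)     # uncertain prior: jumps to approx 2
small, _ = rls_scalar(y, u, 0.0, 1e-6)  # confident prior: barely moves
print(round(big, 3), round(small, 3))   # 2.0 0.0
```

This is why initialising with something like P_0 = 10^4 I or larger is the common choice: the arbitrary starting guess is discarded almost immediately.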
Figure 6.39: Ideal RLS parameter estimation: white noise drives the plant under test, (1 + 1.2q^{-1}) / (1 - 1.9q^{-1} + 1.5q^{-2} - 0.5q^{-3}), and the RLS estimator starts from a random parameter estimate θ_0 and a large initial covariance P_0. (See also Fig. 6.41(a).)
6.7.2 Recursive least-squares in MATLAB

MATLAB can automate the estimation procedure so that it is suitable for online applications. To test this scheme we will try to estimate the same system S from Eqn. 6.24 used previously on page 268, repeated here,

    S : G(q^{-1}) = ( 1 + 0.2q^{-1} ) / ( 1 - 1.9q^{-1} + 1.5q^{-2} - 0.5q^{-3} )

where the unknown vector of A and B polynomial coefficients consists of the five values

    θ = [ 1.9  -1.5  0.5  1  0.2 ]^T

which are to be estimated. Note of course we do not need to estimate the leading 1 in the monic A polynomial.

The estimation is performed in open loop using a white noise input simply to ensure persistent excitation. We will select a model structure the same as our true system. Our initial parameter estimate is a random vector, and the initial covariance matrix is set to the relatively large value of 10^6 I as shown in Fig. 6.39. Under these conditions, we expect near perfect estimation.
Listing 6.17 calls the rls0 routine from Listing 6.16 to do the updating of estimated parameters and associated statistics using Eqns. 6.48 and 6.49.

Listing 6.17: Tests the RLS identification scheme using Listing 6.16.

G = tf([1,0.2,0,0],[1 -1.9 1.5 -0.5],1); % "unknown" plant, Eqn. 6.24
G.variable = 'z^-1';
N = 12;                % # of samples
U = randn(N,1);        % design random input, u(k) = N(0,1)
Y = lsim(G,U);         % compute output

n = 5; P = 1e6*eye(n); % identification initialisation, P0 = 1e6*I
thetaest = randn(n,1); Param = [];

for i=4:N                                   % start estimation loop
  phi = [Y(i-1:-1:i-3); U(i:-1:i-1)];       % shift phi register (column)
  [thetaest,P] = rls0(Y(i),phi,thetaest,P); % RLS update of theta
  Param = [Param; thetaest'];               % collect data
end
If you run the script in Listing 6.17 above you should expect something like Fig. 6.40, giving the online estimated parameters (lower) which quickly converge to the true values after about 5 iterations. The upper plot in Fig. 6.40 shows the input/output sequence used for this open-loop identification scheme. Remember that for this identification application we are not trying to control the process, and that provided the input is sufficiently exciting for the identification, we do not really care precisely what particular values we are using.
Figure 6.40: Recursive least-squares parameter estimation.
Upper: The input/output trend for an unknown process.
Lower: The online estimated parameters (a_1, a_2, a_3, b_0, b_1) quickly converge to the true parameters. Note that no estimation is performed until after we have collected 4 data pairs.
The near perfect estimation in Fig. 6.40 is to be expected, and is due to the fact that we have no systematic error or random noise. This means that after 4 iterations (we have five parameters), we are solving an over-constrained system of linear equations for the exact parameter vector. Of course if we add noise to y, which is more realistic, then our estimation performance will suffer.
Recursive least-squares in SIMULINK

The easiest way to perform recursive estimation in SIMULINK is to use the identification routines from the SYSTEM IDENTIFICATION TOOLBOX. Fig. 6.41 shows both the SIMULINK block diagram for the example given in Fig. 6.39, and the resultant trend of the prediction error.
Figure 6.41: RLS under SIMULINK. In this case, with no model/plant mismatch, and no disturbances, we rapidly obtain perfect estimation.
(a) Identifying a simple plant, (1 + 1.2z^{-1}) / (1 - 1.9z^{-1} + 1.5z^{-2} - 0.5z^{-3}), driven by band-limited white noise, using the ARX (auto-regressive with exogenous input) model estimator block.
(b) The output of the RLS block in SIMULINK: the actual output versus the simulated predicted model output, and the error in the simulated model.
The ARX block also stores the resultant models in a cell array which can be accessed from the workspace once the simulation has finished. The structure of the model to be estimated, and the frequency at which it is updated, is specified by double clicking on the ARX block.

If we don't have the SYSTEM IDENTIFICATION toolbox, we can still reproduce the functionality of the ARX block by writing out the three recursive equations for the update in raw SIMULINK as shown in Fig. 6.42. The phi block in Fig. 6.42(a) generates the shifted input/output data vector using a simple discrete state-space system, while the bulk of the RLS update is contained in the RLS block shown in Fig. 6.42, which simply implements the three equations given in Algorithm 6.2 on page 284.

An elegant way to generate the past values of the input and output required for the φ vector is to implement a shift register in discrete state-space. If we have a stream of data, and we want to keep a rolling collection of the most recent three values, then we could run a discrete state-space
Figure 6.42: Implementing RLS under SIMULINK without using the System Identification toolbox.
(a) Implementing RLS in raw SIMULINK: the "unknown" plant, (1 + 1.2z^{-1}) / (1 - 1.9z^{-1} + 1.5z^{-2} - 0.5z^{-3}), is driven by band-limited white noise; a phi subsystem generates the regressor vector, and an RLS block returns the parameter estimates θ and covariance P to the workspace.
(b) The internals of the RLS block in SIMULINK. The inputs are the current plant output and φ, and the outputs are the parameter estimate, θ, and the covariance matrix, P.
system as

    [ y_k ; y_{k-1} ; y_{k-2} ] = [ 0 0 0 ; 1 0 0 ; 0 1 0 ] [ y_{k-1} ; y_{k-2} ; y_{k-3} ] + [ 1 ; 0 ; 0 ] y_k        (6.50)

where the state transition matrix has ones on the first sub-diagonal.
In general, if we want to construct a shift register of multiple old values of the input and output, we can construct a discrete state-space system as

    [ y_k ; y_{k-1} ; ... ; y_{k-n_a} ; u_k ; u_{k-1} ; ... ; u_{k-n_b} ] =
        [ D_y  0 ; 0  D_u ] [ y_{k-1} ; y_{k-2} ; ... ; y_{k-n_a-1} ; u_{k-1} ; u_{k-2} ; ... ; u_{k-n_b-1} ]
        + [ e_1  0 ; 0  e_1 ] [ y_k ; u_k ]        (6.51)

where D_y and D_u are the (n_a+1)- and (n_b+1)-dimensional shift matrices with ones on the first sub-diagonal and zeros elsewhere, e_1 = [1, 0, ..., 0]^T, and where we are collecting n_a shifted values of y and n_b shifted values of u.
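The shift register of Eqn. 6.51 is simply a pair of rolling buffers, one for past outputs and one for past inputs. A minimal pure-Python sketch (our own illustration, using collections.deque rather than an explicit state-transition matrix):

```python
from collections import deque

# Maintain the regressor of Eqn. 6.51: n_a shifted outputs and
# n_b shifted inputs, updated once per sample.
na, nb = 3, 2
ybuf = deque([0.0] * na, maxlen=na)  # most recent na outputs, newest first
ubuf = deque([0.0] * nb, maxlen=nb)  # most recent nb inputs, newest first

for u, y in zip([1, 2, 3, 4], [10, 20, 30, 40]):
    ybuf.appendleft(float(y))        # newest value enters at the front,
    ubuf.appendleft(float(u))        # oldest value drops off the back
phi = list(ybuf) + list(ubuf)

print(phi)  # [40.0, 30.0, 20.0, 4.0, 3.0]
```

Pushing a new sample to the front while the oldest falls off the end is exactly the action of the ones-on-the-sub-diagonal shift matrices in Eqns. 6.50 and 6.51.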
6.7.3 Tracking the precision of the estimates

One must be careful not to place too much confidence in the computed or inherited covariance matrix P. Ideally this matrix gives an indication of the errors surrounding the parameters, but since we started with an arbitrary value for P, we should not expect to believe these values. Even if we did start with the non-recursive scheme, and then later switched, the now supposedly correct covariance matrix will be affected by the ever-present nonlinearities and the underlying assumptions of the least-squares regression algorithm (such as perfect independent variable knowledge) which are rarely satisfied in practice.
Suppose we attempt to estimate the two parameters of the plant

    G = b_0 / ( 1 + a_1 q^{-1} )        (6.52)

where the exact values for the parameters are b_0 = 1 and a_1 = -0.5. Fig. 6.43 shows the parameter estimates converging to the true values, and the elements of the covariance matrix P decreasing.

Now even though in this simulated example we have the luxury of knowing the true parameter values, the RLS scheme also delivers an uncertainty estimate of the estimated parameters via the covariance matrix P. Fig. 6.44, which uses the same data as Fig. 6.43, superimposes the confidence ellipses around the estimates once the estimate has converged sufficiently, after k = 2. It is interesting to note that even after 3 or 4 samples, where the estimates look perfect in Fig. 6.43, the parameters have, according to Fig. 6.44, surprisingly large confidence ellipses with large associated uncertainties.
A more realistic estimation example

The impressive result of the fully deterministic estimation example shown in Fig. 6.40 is somewhat misleading since we have no process noise, good input excitation, no model/plant mismatch in structure, and we only attempt to identify the model once. All of these issues are crucial for the practical implementation of an online estimator. The following simulation examples will investigate some of these problems.
Figure 6.43: Estimation of a two-parameter plant. Note that the parameter estimates converge to their correct values, and that the elements of the covariance matrix decrease with time. See also Fig. 6.44 for the associated confidence ellipses.

Figure 6.44: The 95% confidence regions at various times for the two estimated parameters, a_1 and b_0, shown in Fig. 6.43.
Suppose we want to identify a plant where there is an abrupt change in dynamics at sample time k = 40,

    S(θ) = 1.2q^{-1} / ( 1 - 0.2q^{-1} - 0.4q^{-2} )   for k < 40
    S(θ) = 1.5q^{-1} / ( 1 - 0.3q^{-1} - 0.7q^{-2} )   for k >= 40

In the case shown, as in Fig. 6.40, the parameters quickly converge to the correct values, as shown in Fig. 6.45 for the first plant, but after the abrupt model change they only reluctantly converge to the new values.

This problem of an increasing tendency to resist further updating is addressed by incorporating a forgetting factor into the standard RLS algorithm, and is described later in section 6.8.
Figure 6.45: Identifying an abrupt plant change at k = 40. The estimator's parameters initially converge quickly to the first plant (lower trend), but find difficulty in converging to the second and subsequent plants.
6.8 The forgetting factor and covariance windup

Sometimes it is a good idea to forget about the past. This is especially true when the process parameters have changed in the interim. If we implement Eqns. 6.48 and 6.47, or alternatively run the routine in Listing 6.17, we will find that it works well initially: the estimates converge to the true values with luck, and the trace of the covariance matrix decreases as our confidence improves. However if the process subsequently changes again, we find that the parameters do not converge as well to the new parameters. The reason for this failure to converge a second time is that the now minuscule covariance matrix, reflecting our large but misplaced confidence, is inhibiting any further large changes in θ.

One solution to this problem is to reset the covariance matrix to some large value (10^3 I for example) when the system change occurs. While this works well, it is not generally feasible, since we do not know when the change is likely to take place. If we did, then gain scheduling may be more appropriate.
An alternative scheme is to introduce a forgetting factor, λ, to decrease the influence of old samples. With the inclusion of the forgetting factor, the objective function of the fitting exercise is modified from Eqn. 6.21 to

    J = sum_{k=1}^{N} λ^{N-k} ( y_k - φ^T_k θ )^2,    λ < 1        (6.53)
where data n samples in the past is weighted by λ^n. The smaller λ is, the quicker the identification scheme discounts old data. Actually it would make more sense if the forgetting factor were called the remembering factor, since a higher value means more memory!
Incorporating the forgetting factor into the recursive least-squares scheme modifies the gain and covariance matrix updates very slightly to

    K_{N+1} = P_N φ_{N+1} / ( λ + φ^T_{N+1} P_N φ_{N+1} )        (6.54)

    P_{N+1} = (1/λ) ( P_N - P_N φ_{N+1} φ^T_{N+1} P_N / ( λ + φ^T_{N+1} P_N φ_{N+1} ) ) = (P_N / λ) ( I - φ_{N+1} K^T_{N+1} )        (6.55)
To choose an appropriate forgetting factor, we should note that the information dies away with a time constant of N sample units where N is given by

    N = 1 / ( 1 - λ )        (6.56)

As evident from Fig. 6.46, a reasonable value of λ is 0.995, which gives a time constant of 200 samples.
Figure 6.46: The memory, N, when using a forgetting factor λ, from Eqn. 6.56. Typically λ = 0.995, which corresponds to a time constant of 200 samples.
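Eqn. 6.56 makes it easy to tabulate the effective memory for candidate forgetting factors, for example:

```python
# Effective memory N = 1/(1 - lambda), Eqn. 6.56,
# for some common forgetting factors.
memory = {lam: round(1.0 / (1.0 - lam)) for lam in (0.9, 0.95, 0.99, 0.995)}
print(memory)  # {0.9: 10, 0.95: 20, 0.99: 100, 0.995: 200}
```

Note how quickly the memory shrinks as λ drops below 1, which is why values only slightly below unity are usually chosen.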
Consequently most textbooks recommend a forgetting factor of between 0.95 and 0.999, but lower values may be suitable if you expect to track fast plant changes. However if we drop λ too much, we will run into covariance windup problems, which are further described in section 6.8.2.

We can demonstrate the usefulness of the forgetting factor by creating and identifying a time-varying model, but first we need to make some small changes to the recursive least-squares update function rls0 given previously in Listing 6.16 to incorporate the forgetting factor, as shown in Listing 6.18, which we should use from now on. Note that if λ is not specified explicitly, a value of 0.995 is used by default.
Listing 6.18: A recursive least-squares (RLS) update with a forgetting factor. (See also Listing 6.16.)

function [thetaest,P] = rls(y,phi,thetaest,P,lambda)
% Basic Recursive Least-Squares (RLS) parameter update with forgetting factor
% Inputs: y(k), phi(k), lambda: current output, past input/output column
%         vector & forgetting factor
% Input/Outputs: thetaest, P: parameter estimates & covariance matrix
if nargin < 5
  lambda = 0.995; % set default forgetting factor
end % if
K = P*phi/(lambda + phi'*P*phi);                       % Gain K, Eqn. 6.54
P = (P - (P*phi*phi'*P)/(lambda + phi'*P*phi))/lambda; % Covariance update, P, Eqn. 6.55
thetaest = thetaest + K*(y - phi'*thetaest);           % Parameter update, Eqn. 6.47
return % end rls.m
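The benefit of forgetting is easy to demonstrate in a scalar sketch (our own pure-Python illustration, not the text's simulation): the gain of a static relation y = θu changes abruptly halfway through the run, and the λ < 1 estimator re-converges while the λ = 1 estimator lags behind.

```python
import math

# Scalar RLS with forgetting factor lambda: track y = theta * u
# where theta jumps from 2 to 5 at k = 50.

def rls_step(y, u, theta, P, lam):
    den = lam + u * P * u
    K = P * u / den
    P = (P - P * u * u * P / den) / lam   # scalar form of Eqn. 6.55
    return theta + K * (y - u * theta), P

est = {1.0: (0.0, 1e6), 0.9: (0.0, 1e6)}  # (theta, P) per forgetting factor
for k in range(100):
    u = math.sin(0.7 * k) + 1.5           # persistently exciting input
    y = (2.0 if k < 50 else 5.0) * u
    for lam in est:
        th, P = est[lam]
        est[lam] = rls_step(y, u, th, P, lam)

print(round(est[0.9][0], 3))   # with forgetting: re-converged to approx 5
print(round(est[1.0][0], 3))   # no forgetting: still dragged down by old data
```

With λ = 1 the estimate settles near a weighted average of the two regimes, whereas λ = 0.9 discounts the pre-change data within a few tens of samples.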
6.8.1 The influence of the forgetting factor

Figure 6.47 shows how varying the forgetting factor alters the estimation performance when faced with a time-varying system. The true time-varying parameters S(θ) are the dotted lines, and the estimated parameters M(θ) for different choices of λ from 0.75 to 1.1 are the solid lines.

When λ = 1 (top trend in Fig. 6.47), effectively no forgetting factor is used, and the simulation shows the quick convergence of the estimated parameters to the true parameters initially; but when the true plant S(θ) abruptly changes at t = 200, the estimates M(θ) follow only very sluggishly, and indeed never really converge convincingly to the new S(θ). However by incorporating a forgetting factor of λ = 0.95 (second trend in Fig. 6.47), better convergence is obtained for the second and subsequent plant changes.
Figure 6.47: Comparing the estimation performance of an abruptly time-varying unknown plant with different forgetting factors, λ. Estimated parameters (solid) and true parameters (dashed) for (a) λ = 1, (b) λ = 0.95, (c) λ = 0.75, (d) λ = 1.1. Note that normally one would use λ of about 0.95, and never λ > 1.
Reducing the forgetting factor still further to λ = 0.75 (third trend in Fig. 6.47) increases the convergence speed, although this scheme will exhibit less robustness to noisy data. We should also note that there are larger errors in the parameter estimates (S(θ) - M(θ)) during the transients when λ = 0.75 than in the case where λ = 1. This is a trade-off that should be considered in the design of these estimators. We can further decrease the forgetting factor, and we would expect faster convergence, although with larger errors in the transients. In this simplified simulation we have no model/plant mismatch, no noise, and abrupt true process changes. Consequently in this unrealistic environment, a very small forgetting factor is optimal. Clearly in practice the above conditions are not met, and values just shy of 1.0, say 0.995, are recommended.

One may speculate what would happen if the forgetting factor were set greater than unity (λ > 1). Here the estimator is heavily influenced by past data: the more distant the past, the more the influence. In fact, it essentially disregards all recent data. How recent is recent? Well, actually it will disregard all data except perhaps the first few data points collected. This will not make an effective estimator. A simulation where λ = 1.1 is shown in the bottom trend of Fig. 6.47, and demonstrates that the estimated parameters converge to the correct parameters initially, but fail to budge from then on. The conclusion is not to let λ assume values > 1.0.
Modifications on the forgetting theme

Introducing a forgetting factor is just one attempt to control the behaviour of the covariance matrix. As you might anticipate, there are many more modifications to the RLS scheme along these lines, such as variable forgetting factors, [67], directional forgetting factors, constant trace algorithms, adaptive forgetting factors, startup forgetting factors etc. See [200] for further details.
6.8.2 Covariance wind-up

Online estimation in practice tends to work well at first, but if the estimator is left running for extended periods of time, such as a few days or even weeks, the algorithm often eventually fails. One of the reasons leading to a breakdown is that during periods of little interest, such as when the system is at steady-state, there is no information arriving into the estimator to reduce further the uncertainty of the estimated parameters. In fact the reverse occurs: the uncertainty grows, and this is seen as a slow but sure exponential growth in the elements of the covariance matrix P. Soon the matrix will overflow, and this is referred to as covariance wind-up. Any practical algorithm must monitor for this problem and take steps to avoid it.
During periods of little interest, the covariance update, Eqn. 6.55, at sample time k reduces effectively to

    P_{k+1} = (P_k / λ) ( I - φ_{k+1} K^T_{k+1} ) ~= P_k / λ        (6.57)

since the term φ_{k+1} K^T_{k+1} is almost zero.
which means that the P matrix increases each sample by a factor of 1/λ, which under normal circumstances is say around 2%. It will continue this exponential increase until either there is sufficient excitation in input and output to start to drop the covariance again, or it overflows. Consequently it is prudent to watch the size of P by monitoring the size of the individual elements. A suitable scalar size measure is given by the trace of the P matrix, which is simply the sum of the diagonal elements,

    tr(P) = sum_{i=1}^{n} p_{i,i} = sum_{i=1}^{n} λ_i

The trace of a matrix has the nice property that it also equals the sum of the n eigenvalues, λ_i. Since P is positive definite, the trace will always be positive. If the trace gets too large, then some corrective action is required.
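Covariance windup can be demonstrated in a scalar sketch (our own illustration): during a period of zero excitation the covariance grows by 1/λ every sample, so its trace climbs exponentially until excitation returns and collapses it again.

```python
# Scalar covariance windup: with u = 0 the update reduces to
# P <- P / lambda, which grows without bound; a burst of
# excitation collapses the covariance again.
lam, P = 0.98, 1.0
trace = []
for k in range(200):
    u = 0.0 if k < 150 else 1.0          # no excitation, then a step input
    den = lam + u * P * u
    P = (P - P * u * u * P / den) / lam  # Eqn. 6.55 (scalar form)
    trace.append(P)

print(round(trace[149], 1))   # after 150 idle samples: (1/0.98)^150, about 20.7
print(trace[199] < 0.1)       # excitation has collapsed the covariance
```

Monitoring tr(P) in exactly this way, and resetting or freezing the update when it grows too large, is the practical safeguard suggested above.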
Suppose we try to identify the same system as used in section 6.7.2,

    S = 1.2q^{-1} / ( 1 - 0.25q^{-1} - 0.50q^{-2} )

using a recursive least-squares algorithm with a forgetting factor of 0.98. Rather than using the optimal white noise as an input excitation signal, we will use a poor input excitation, such as a square wave with an excessively long period. While the parameters do converge to the correct values, Fig. 6.48 shows also that the trace of P increases during the periods of no input excitation. If we turned the input square wave off for an extended period of time, the covariance matrix would grow without bound. If instead we had used the ideal white noise input signal, the continual excitation would keep the trace of P small. As evident from Eqn. 6.57, the smaller our forgetting factor is, the faster the covariance matrix will wind up.
Figure 6.48: Covariance wind-up. In this case with λ = 0.98, the parameters converge to their correct values (middle trend), but the trace of the covariance matrix grows during the periods of little input excitation (lower plot).
6.9 Identification by parameter optimisation

Model identification can also be achieved using local parametric optimisation theory. This has the advantage that it is conceptually simple, allows for general, possibly nonlinear functions, and closely follows classical optimisation techniques. The basic idea is to update the parameter vector so that a performance index is minimised. In this manner, the problem is reduced to a normal minimisation problem for which many computer algorithms already exist. However one typically assumes that we have good starting guesses for the parameters, and that one must live with the resulting slow rate of convergence. Also these simple optimisation schemes can become unstable, and the resulting stability analysis is typically quite involved. This section closely follows [113, pp66-73].
Suppose we are given a system, S,

    S : y(t) = ( B(s) / A(s) ) u(t)        (6.58)

where we know the denominator, A(s) = (1 + a_1 s + a_2 s^2), and wish to estimate the steady-state gain, B(s) = b_0. Therefore our model M of the process is

    M : ( 1 + a_1 s + a_2 s^2 ) ŷ(t) = b̂_0 u(t)        (6.59)
where we wish b̂_0 -> b_0. We will take the squared error as our performance index,

    J = (1/2) ∫ ε^2 dt        (6.60)

where the error between the actual and predicted output is ε = y - ŷ.
The gradient optimisation technique is an intuitive method that tells us how to adjust our unknown parameter vector to search for the minimum. We adjust our parameters in such a manner as to decrease the residual at a maximum rate. We will use the steepest descent method,

    Δb̂_0 = -k ∂J/∂b̂_0        (6.61)

where k > 0 is the adaption gain which can be adjusted to vary the speed of convergence. The rate of change of our parameter is

    db̂_0/dt = -k ∂/∂t ( ∂J/∂b̂_0 )        (6.62)
Now we can reverse the order of differentiation in Eqn. 6.62, provided the system does not change too fast,

    db̂_0/dt = -k ∂/∂b̂_0 ( ∂J/∂t ) = -(k/2) ∂ε^2/∂b̂_0 = -k ε ∂ε/∂b̂_0        (6.63)
Eqn. 6.63 is referred to as the MIT rule. To use the MIT rule in an adaptive scheme, we must evaluate the sensitivity function, ∂ε/∂b̂_0,

    ∂ε/∂b̂_0 = ∂y/∂b̂_0 - ∂ŷ/∂b̂_0 = -∂ŷ/∂b̂_0        (since ∂y/∂b̂_0 = 0)        (6.64)
so therefore the adaption rule is

    db̂_0/dt = k ε ∂ŷ/∂b̂_0        (6.65)
The sensitivity function of the predicted output ŷ with respect to the parameter is obtained by differentiating the original model, Eqn. 6.59. Note that

    ( 1 + a_1 s + a_2 s^2 ) ŷ = b̂_0 u
    ŷ + a_1 dŷ/dt + a_2 d^2ŷ/dt^2 = b̂_0 u

so that the partial derivative of model M with respect to b̂_0 is

    ∂ŷ/∂b̂_0 + a_1 ∂(dŷ/dt)/∂b̂_0 + a_2 ∂(d^2ŷ/dt^2)/∂b̂_0 = u        (6.66)
Again we can change the order of differentiation, so

    ∂ŷ/∂b̂_0 + a_1 ∂/∂t ( ∂ŷ/∂b̂_0 ) + a_2 ∂^2/∂t^2 ( ∂ŷ/∂b̂_0 ) = u        (6.67)

    ( 1 + a_1 s + a_2 s^2 ) ∂ŷ/∂b̂_0 = u        (6.68)
Dividing Eqn. 6.68 by Eqn. 6.58 gives

    ∂ŷ/∂b̂_0 = y / b_0        (6.69)
thus giving the adaption law (from Eqn. 6.65)

    db̂_0/dt = k ε ∂ŷ/∂b̂_0 = k ε y / b_0        (6.70)
which would all work very nicely except that we do not know b_0! (It is, after all, what we are trying to estimate.) So we will substitute the estimate in the adaption law, giving

    db̂_0/dt = k ε ŷ / b̂_0        (6.71)
Algorithm 6.3 Searching for the plant gain

Given an unknown system S producing measurable outputs y(t), and a model M with an initial parameter estimate b̂_0 producing outputs ŷ(t), both subjected to known inputs u(t):

1. Using the model M with the current best estimate of b̂_0, predict ŷ and compare with the actual system output, ε(t) = y(t) - ŷ(t).
2. Update the parameter estimate using the adaption law, Eqn. 6.71.
3. Go back to step 1.
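The loop of Algorithm 6.3 can be sketched for the simplest possible case, a pure-gain plant y = b_0 u. This gain-only simplification is our own illustration (the text's second-order example follows below): here the sensitivity ∂ŷ/∂b̂_0 is just u, so the Euler-discretised MIT rule reduces to b̂_0 <- b̂_0 + T k ε u.

```python
import math

# Euler-discretised MIT rule for a pure-gain plant y = b0 * u:
# sensitivity dyhat/dbhat = u, so bhat <- bhat + T * k * eps * u.
b0, bhat = 0.45, -0.3          # true gain, and a wrong-signed initial guess
T, k = 0.1, 0.5                # sample time and adaption gain
for i in range(500):
    u = math.sin(0.25 * i)     # persistently exciting input
    eps = b0 * u - bhat * u    # error between plant and model outputs
    bhat += T * k * eps * u    # MIT-rule parameter update

print(round(bhat, 3))          # 0.45
```

Even starting from a wrong-signed guess the estimate converges, because for this gain-only case the update is linear in the parameter error. Choosing the product T k too large, however, makes the discrete recursion unstable, which foreshadows the gain-tuning discussion below.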
Example of parametric adaption. We will demonstrate this parametric adaption where our true, but only partially known, system is

    S = 0.45 / ( s^2 + 4.2s + 4.4 )        (6.72)

whereas our model is

    M = b̂_0 / ( s^2 + 4.2s + 4.4 )        (6.73)
where the initial estimate of the unknown gain b_0 has the wrong sign at b̂_0 = -0.3.

We will do most of the calculations in the discrete domain, so we will approximate the time derivative in the adaption law, db̂_0/dt, with an Euler approximation which, when discretised at sample time T, gives

    b̂_0(t) = b̂_0(t-1) + T ( k ε ŷ / b̂_0 )
The upper plot in Fig. 6.49, calculated from Listing 6.19, compares the result of using the model M with no adaption (dash-dot) with one where the model is adapted according to the MIT rule (dotted line). Note that eventually the model with adaption closely follows the true process (thick solid line). The lower plot shows the parameter estimate b̂_0 slowly converging to the true value of 0.45. Note that in this case the adaptation gain is k = 10 and that the adapted model has a much reduced integrated error compared to the model with no adaption.
Figure 6.49: Parameter adaption using the steepest descent method, with adaption gain k = 10 and sample time T_s = 0.175 s. The upper plot compares the true system (solid) with the open-loop model (dotted) and the adapted model (dashed). The lower plot shows the parameter adapting gradually to the correct value of b_0 = 0.45.
Listing 6.19: Adaption of the plant gain using steepest descent

S = tf(0.45,[1 4.2 4.4]);   % true plant, S = 0.45/(s^2+4.2s+4.4)
b0_est = -0.3;              % estimate of parameter b0 (actual b0 = 0.45)
G = tf(b0_est,[1 4.2 4.4]); % estimated model, b0_est/(s^2+4.2s+4.4)

dt = 0.175; t = [0:dt:60]'; % simulate truth & open loop
U = 0.6*randn(size(t))+0.5*sin(t/4); % random input
Yx = lsim(S,U,t);           % truth output
Yol = lsim(G,U,t);          % open-loop estimate
Y = Yx; B = NaN*Y; k = 10;

for i=3:length(t)-1         % now do the estimation .......
  G.num = b0_est; Gd = c2d(G,dt);
  [num,den] = tfdata(Gd,'v');
  Y(i) = -den(2:3)*Y(i-1:-1:i-2) + num*U(i:-1:i-2);

  e = Yx(i)-Y(i);           % model/plant error
  b0_est = b0_est + dt*k*e*Y(i)/b0_est; % update b0_est, Eqn. 6.71
  B(i,:) = b0_est;
end % for

d = [Y-Yx, Yx-Yol]; jise = sum(abs(d)); % performance analysis, see Fig. 6.49
Choosing an appropriate adaptation gain is somewhat of an art. Too small a value and the adaptation will be too slow; too large, and the adaptation may become unstable. Fig. 6.50 trends the integrated absolute error of the adaption scheme as a function of adaptation gain, which can be compared with the open-loop scheme over the time horizon used in Fig. 6.49. In this application, the best setting for k is just below 20, though this is very dependent on starting conditions, problem type, etc. Above this value the performance rapidly deteriorates, and one would be better off using only the uncorrected open-loop model. As k → 0, there is effectively no adaption, which is equivalent to the open-loop model.
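The sweep behind Fig. 6.50 is straightforward to reproduce. The following sketch is not from the text; it simply wraps the steepest-descent loop of Listing 6.19 in an outer loop over candidate gains and records the integrated absolute error for each:

```matlab
% Sweep the adaptation gain k and record the integrated absolute error (IAE).
S = tf(0.45,[1 4.2 4.4]);               % true plant, as in Listing 6.19
dt = 0.175; t = [0:dt:60]';
U = 0.6*randn(size(t)) + 0.5*sin(t/4);  % same input signal
Yx = lsim(S,U,t);                       % truth output

kv = logspace(-1,3,30); iae = NaN(size(kv));
for j = 1:length(kv)
  b0_est = -0.3; Y = Yx; k = kv(j);     % restart estimator for each gain
  G = tf(b0_est,[1 4.2 4.4]);
  for i = 3:length(t)-1
    G.num = b0_est; [num,den] = tfdata(c2d(G,dt),'v');
    Y(i) = -den(2:3)*Y(i-1:-1:i-2) + num*U(i:-1:i-2);
    e = Yx(i) - Y(i);
    b0_est = b0_est + dt*k*e*Y(i)/b0_est;  % steepest-descent update, Eqn. 6.71
  end
  iae(j) = sum(abs(Y-Yx));              % IAE for this gain
end
loglog(kv,iae)                          % compare with the trend of Fig. 6.50
```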
Figure 6.50: Optimising the adaptation gain. The IAE for different adaptation gains, k, plotted on log-log axes. The optimum gain, k*, is around 20.
6.10 Online estimating of noise models

So far we have assumed that the noise acting on our system has been random white noise. However in practice, coloured noise is far more common, and blindly applying the RLS scheme to a process disturbed by coloured noise will in general result in biased estimates. The text [200, p102] contains further details. Recall that the ARMAX model of Eqn. 6.13, repeated here,

    A(q) y_k = B(q) u_{k-d} + C(q) e_k        (6.74)

where the term C(q) e_k is the coloured noise, is comprised of three polynomials, A, B and C,

    A = 1 + a_1 q^{-1} + ... + a_{n_a} q^{-n_a}
    B = b_0 + b_1 q^{-1} + ... + b_{n_b} q^{-n_b}
    C = 1 + c_1 q^{-1} + ... + c_{n_c} q^{-n_c}

where A and C are assumed monic. A schematic of Eqn. 6.74 is given in Fig. 6.51, which is a special case of the general linear model of Fig. 6.31.
Figure 6.51: Addition of coloured noise to a dynamic process: the input u(t) passes through q^{-d} B/A, and the white noise e(t) passes through C/A, before the two are summed to give the output y(t). See also Fig. 6.31.
The noise model polynomial, C, is typically never larger than first or second order. We wish to estimate the n_a + n_b + n_c + 1 unknown coefficients of A, B and C. The problem we face when trying to estimate the coefficients of C is that we can never directly measure the noise term e_k. So the best we can do is replace this unknown error term with an estimate such as the prediction error,

    ε_k = y_k − ŷ_{k|k−1}        (6.75)

and augment the data vector with the estimates for the C polynomial. This scheme is referred to as recursive extended least-squares, or RELS. Note carefully the difference in nomenclature between the actual, but unmeasured, error e and the estimate ε.

The recursive extended least-squares algorithm given in Algorithm 6.4 is an extension of the standard recursive least-squares algorithm.
Algorithm 6.4 Recursive extended least-squares

We aim to recursively estimate the parameters of the A, B and C polynomials in the dynamic process

    A(q^{-1}) y_k = B(q^{-1}) u_{k−d} + C(q^{-1}) e_k

by measuring u_k and y_k. We note that A and C are assumed monic, so the parameter vector is

    θ ≜ [ a_1, ..., a_{n_a}, b_0, ..., b_{n_b}, c_1, ..., c_{n_c} ]^T

where n_a, n_b and n_c are the orders of the A, B and C polynomials respectively.

1. Construct the augmented data row vector using past input/output data and past prediction error data,

    φ_k = [ −y_{k−1}, ..., −y_{k−n_a}, u_{k−d}, ..., u_{k−d−n_b}, ε_{k−1}, ..., ε_{k−n_c} ]

comprising in turn the past outputs, the inputs, and the past predicted errors.

2. Collect the current input/output data, y_k, u_k, and calculate the present prediction error using the previously obtained model parameter estimates,

    ε_k = y_k − φ_k θ̂_{k−1}

3. Update the parameter covariance matrix, gain, and parameter estimate vector using the standard RLS update equations, (Eqns 6.47, 6.48, and 6.49), from Algorithm 6.2.

4. Wait out the remainder of the sample time, and then return to step (1).
Of course we could modify these update equations to incorporate additional features such as forgetting factors, in the same manner as in the normal recursive least-squares case.

An alternative algorithm is the approximate maximum likelihood, where instead of using the prediction error, one uses the residual error,

    ε̄_k = y_k − φ_k θ̂_k        (6.76)

together with minor changes to the covariance and parameter update equations. (Note the difference between this and Eqn. 6.75: the residual is computed with the current parameter estimate rather than the previous one.) Further implementation details are given in [200, p105].
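The practical difference between the two error definitions amounts to when the error is evaluated relative to the parameter update. The single-step sketch below is illustrative only; the data values are arbitrary and the variable names are not from the text:

```matlab
% One RLS measurement update, showing the prediction error (Eqn. 6.75,
% computed with the prior estimate) versus the residual (Eqn. 6.76,
% recomputed with the updated estimate).
phi = [0.5 -0.2 0.1];              % row data vector (illustrative values)
theta = [0; 0; 0]; P = 100*eye(3); % prior estimate and covariance
y_k = 1.0;                         % current measurement

eps_pred = y_k - phi*theta;        % prediction error, Eqn. 6.75 (prior theta)
K = P*phi'/(1 + phi*P*phi');       % standard RLS gain
theta = theta + K*eps_pred;        % parameter update
P = (eye(3) - K*phi)*P;            % covariance update
eps_res = y_k - phi*theta;         % residual error, Eqn. 6.76 (posterior theta)
% RELS feeds eps_pred into the regressor for the C terms; AML feeds eps_res.
```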
6.10.1 A recursive extended least-squares example

For this simulated example, we will compare the parameter estimates of an unknown plant disturbed by coloured noise using both the normal recursive least-squares algorithm (where we anticipate biased estimates) and the recursive extended least-squares algorithm, which additionally estimates the C polynomial and thereby removes the bias in the estimates of the A and B polynomials.
Our true plant is

    A(q^{-1}) y_k = B(q^{-1}) u_k + C(q^{-1}) e_k        (6.77)

with

    A(q^{-1}) = 1 − 0.4q^{-1} + 0.5q^{-2},  B(q^{-1}) = 1.2q^{-1} + 0.3q^{-2},  C(q^{-1}) = 1 + 0.8q^{-1} − 0.1q^{-2}

where e_k ∼ N(0, 2.5), and the input u_k ∼ N(0, 1). Note that we have coloured noise since C ≠ 1, and that this noise variance is relatively large.
If we just ignore the noise model, and attempt to estimate just the four unknown A and B polynomial coefficients using the normal RLS algorithm, then we get large biases, as is especially evident in the first half of Fig. 6.52.

If, however, we swap over to the extended recursive least-squares algorithm at time k = 2000, then the estimation performance is much improved, exhibits much less bias, and even manages (eventually) to estimate the noise polynomial correctly, as shown in the latter section of Fig. 6.52. The downside, of course, is that we must now estimate the noise polynomial coefficients as well, increasing the number of unknown parameters from 4 to 6.

Example: Estimating parameters in an ARMAX model.

For the dynamic system described by Eqn. 6.77, we will create an ARMAX model using the idpoly object, create some suitable random input, and use the model to generate an output as shown in Listing 6.20. Note how I must pad the B polynomial with a leading zero to take into account the deadtime.
Listing 6.20: Create an ARMAX process and generate some input/output data suitable for subsequent identification.

nt = 100; u = randn(nt,1); e = randn(nt,1);             % # of samples, input & disturbance
A = [1 -0.4 0.5]; B = [0, 1.2, 0.3]; C = [1, 0.8 -0.1]; % True coefficients
G = idpoly(A,B,C)                                       % Create true ARMAX plant
y = sim(G,[u e]);                                       % Simulate Ay = Bu + Ce
Zdata = iddata(y,u,1);                                  % Collect output/input data, sample time T = 1
Figure 6.52: Estimating model parameters using recursive least squares in the presence of coloured noise (forgetting factor λ = 0.9990). The first period shows the A and B estimates (solid) and true values (dashed) using normal RLS. The second half of the trend shows the A, B and C estimates (solid) and true values (dashed) using extended RLS. Note the eventual lack of bias in the latter RELS estimates.
To estimate the parameters we could use either the general pem or the more specific armax command. We will estimate a structurally perfect model, that is, 2 parameters in each of the A, B and C polynomials and 1 unit of dead time.
Listing 6.21: Identify an ARMAX process from the data generated in Listing 6.20.

>> G_est = pem(Zdata,'na',2,'nb',2,'nc',2,'nk',1);  % Estimate A(q^{-1}), B(q^{-1}) & C(q^{-1})
>> G_est = armax(Zdata,'na',2,'nb',2,'nc',2,'nk',1) % Alternative to using pem.m
>> present(G_est)                                   % present results

G_est =
Discrete-time ARMAX model: A(z)y(t) = B(z)u(t) + C(z)e(t)
  A(z) = 1 - 1.416 z^-1 + 0.514 z^-2
  B(z) = 1.209 z^-1 + 0.2935 z^-2
  C(z) = 1 + 0.7703 z^-1 - 0.1181 z^-2

>> yp = sim(G_est,[u e]);  % form prediction
>> plot([y yp])            % check prediction performance in Fig. 6.53(a)
Fig. 6.53(a) compares the true and predicted output data. In this case the fit is not perfect even though we used the correct model structure. This is partly due to the iterative nature of the nonlinear regression, but mostly the poor fit is due to the fact that we used only 100 input/output samples. If we use 10,000 samples for the fitting, we estimate a far superior model, as illustrated in Fig. 6.53(b). It is interesting to note just how many input/output samples we need to estimate a relatively modest sized system of only 6 parameters.

Listing 6.22 repeats this estimation, but this time does it recursively in MATLAB.
Listing 6.22: Recursively identify an ARMAX process.

A = [1 -0.4 0.5]; B = [1.2, 0.3]; C = [1, 0.8 -0.1]; % True plant to be estimated
Figure 6.53: A validation plot showing the actual plant output (solid) compared to the model predictions (dashed line) of an ARMAX model using nonlinear parameter estimation. (a) Identification using only 100 data points; (b) identification using 10,000 data points. Note that to obtain good results, we need a long data series.
na = length(A); nb = length(B); nc = length(C); % Order of the polynomials + 1
d = na-nb;                                      % deadtime
Npts = 5e4;
u = randn(Npts,1); a = randn(Npts,1);
y = zeros(Npts,1); err = y; thetaAll = [];      % dummy storage
np = na+nb+nc-2;                                % # of parameters to be estimated
theta = randn(np,1); P = 1e6*eye(np);           % initial guess of theta
lambda = 1;                                     % forgetting factor

for i=na:Npts
  y(i) = -A(2:end)*y(i-1:-1:i-na+1) + B*u(i-d:-1:i-nb+1-d) + C*a(i:-1:i-nc+1);

  phi = [-y(i-1:-1:i-na+1)', u(i-d:-1:i-nb+1-d)', err(i-1:-1:i-nc+1)']';
  err(i,1) = y(i) - phi'*theta;                 % estimated noise sequence

  K = P*phi/(lambda + phi'*P*phi);              % update gain
  P = (P - (P*phi*phi'*P)/(lambda + phi'*P*phi))/lambda; % covariance update
  theta = theta + K*err(i,1);                   % parameter update
  thetaAll(i,:) = theta';
end

Aest = [1, thetaAll(end,1:na-1)]; Best = thetaAll(end,na:na+nb-1);
Cest = [1, thetaAll(end,na+nb:end)];            % See Fig. 6.54
As shown in Fig. 6.54, the estimates do eventually converge to the correct values without bias, and furthermore, the step response of the final estimate compares favourably with the true step response.
Figure 6.54: Recursive extended least-squares estimation (λ = 0.9990) where there is an abrupt model change at time k = 2×10^4. The inserted figures compare a step response of the true plant with the current estimate at k = 2×10^4 and at k = 4×10^4.
6.10.2 Recursive identification using the SI toolbox

The SYSTEM IDENTIFICATION toolbox can also calculate the parameter estimates recursively. This essentially duplicates the material presented in section 6.7.1, although in this case there are many more options to choose from regarding the type of method, numerical control, starting point, etc.

The toolbox has a number of recursive algorithms (see [125, p164]), but they all work in the same way. For most demonstration purposes, we will recursively estimate the data pseudo-online. This means that we first generate all the data, and then estimate the parameters, but we make sure that when we are processing element i, the data ahead in time, elements i+1, ..., are not available. This pseudo-online approach is quite common in simulation applications. The toolbox command that duplicates the recursive least-squares estimation of an ARMAX process with a forgetting factor is rarmax(z,nn,'ff',0.95).
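A pseudo-online session with rarmax might look like the following sketch. The data generation reuses the plant of Listing 6.20; the rarmax argument list follows the pattern quoted above, though details may vary between toolbox versions:

```matlab
% Pseudo-online recursive ARMAX estimation with the SI toolbox.
u = randn(1000,1); e = randn(1000,1);           % input & white disturbance
G = idpoly([1 -0.4 0.5],[0 1.2 0.3],[1 0.8 -0.1]); % plant of Listing 6.20
y = sim(G,[u e]);                               % simulate Ay = Bu + Ce

nn = [2 2 2 1];                                 % orders [na nb nc nk]
[thm,yhat] = rarmax([y u],nn,'ff',0.95);        % 'ff' = forgetting factor 0.95
plot(thm)                                       % each column is one parameter trajectory
```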
6.10.3 Simplified RLS algorithms

The standard recursive least squares (RLS) algorithm such as presented in section 6.7.1 requires two update equations: one for the update gain, and one for the covariance matrix. But for implementation in a small computer, there is some motivation to simplify the algorithm, even at the expense of the quality of the parameter estimates. Åström and Wittenmark, [17, p69-70], describe one such method known as Kaczmarz's projection algorithm. The input/output relation is assumed to be

    y_k = φ_k θ

where θ is the parameter vector and φ is the row data vector, and the correction to the estimated parameter vector is

    θ̂_k = θ̂_{k−1} + α φ_k^T

where α is chosen such that y_k = φ_k θ̂_k, which gives the full updating formula in our now familiar form as

    θ̂_k = θ̂_{k−1} + (φ^T / (φ φ^T)) (y_k − φ θ̂_{k−1})        (6.78)

Note that in [17] there is some nomenclature change, in that the data vector is defined as a column vector, whereas here we have assumed it is a row vector. This update scheme (Eqn. 6.78) is sometimes modified to avoid potential problems when the data vector equals zero.

The script in Listing 6.23 first generates data from an ARX plant, and then the data is processed pseudo-online to obtain the estimated parameters.
Listing 6.23: Kaczmarz's algorithm for identification

u = randn(50,1);                % random input
A = [1 -1.25 0.8]; B = 0.3;     % G = 0.3/(1 - 1.25q^{-1} + 0.8q^{-2})
G = idpoly(A,B,1); y = sim(G,u);

theta = randn(3,1);             % Initial parameter estimate, theta_0, is random
Param = [];                     % estimated parameters so far
for i=4:length(u)
  phi = [-y(i-1:-1:i-2)', u(i)]';                   % regressor vector, phi_k
  theta = theta + phi*(y(i)-phi'*theta)/(phi'*phi); % Kaczmarz update, Eqn. 6.78
  Param = [Param; theta'];                          % collect parameters
end

Param_x = ones(size(Param(:,1)))*[A(2:3), B(1)];    % true parameters
plot([Param_x, Param])                              % have they converged?
The input/output data (top) and the parameter estimates from this data are shown in Fig. 6.55. The estimated parameters converge relatively quickly to the true parameters, although not as quickly as with the full recursive least-squares algorithm.
Problem 6.3 The following questions use input/output data from the collection of classic time series data available from the time series data library maintained by Rob Hyndman at www-personal.buseco.monash.edu.au/~hyndman/TSDL/, or alternatively from DAISY, the DATABASE FOR THE IDENTIFICATION OF SYSTEMS collection maintained by De Moor, Department of Electrical Engineering, ESAT/SISTA, K.U.Leuven, Belgium, at www.esat.kuleuven.ac.be/sista/daisy/.
Figure 6.55: The performance of a simplified recursive least squares algorithm, Kaczmarz's algorithm. Upper: the input/output data. Lower: the estimated parameters (solid) eventually converge to the true parameters (dashed).
These collections are useful to test and verify new identification algorithms. Original sources for this data include [32] amongst others.

1. The data given in /industry/sfbw2.dat shows the relationship in a paper making machine between the input stock flow (gal/min) and basis weight (lb/3300 ft^2), a quality control parameter of the finished paper. Construct a suitable model of this time series.
Ref: [156, pp500-501].

2. This problem is detailed and suitable for a semester group project.
An alternative suite of programs to aid in model development and parameter identification is described in [32, Part V]. The collection of seven programmes closely follows the methodology in the text, and they are outlined in pseudo-code. Construct a TIME SERIES ANALYSIS toolbox with graphics based around these programs. Test your toolbox on one of the time series tabulated in the text. A particularly relevant series is the furnace data that shows the percentage of carbon dioxide output from a gas furnace at 9-second intervals as a function of the methane input (ft^3/min), [32, Series J, pp532-533]. This input/output data is reproduced in Fig. 6.56, although the input methane flowrate is occasionally given as negative, which is clearly physically impossible. The original reference footnoted that the unscaled data lay between 0.6 and 0.04 ft^3/min.
6.11 Closed loop identification

The identification techniques described thus far assume that the input signal can be chosen freely and that this signal has persistent excitation. However if you are trying to identify while controlling, you have lost the freedom to choose an arbitrary input since, naturally, the input is now constrained to give good control rather than good estimation. Under good control, the input will probably no longer be exciting, nor inject enough energy into the process to provide sufficient information for the identification. Closed loop identification occurs when the process is so important, or hazardous, that one does not have the luxury of opening the control loop simply for identification. Closed loop identification is also necessary, by definition, in many adaptive control algorithms.
Figure 6.56: The percent CO2 in the exhaust gas from a furnace as a function of the methane input (scaled). Data from [32, pp532-533].
If we use a low order controller, then the columns of the data matrix X in Eqn. 6.20 become linearly dependent. This is easy to see if we use a proportional controller, u_t = −k y_t: this in turn creates columns in X that differ only by the constant controller gain k. In reality, any noise will destroy this exact dependency, but the numerical inversion may still be a computationally difficult task.

One way to ensure that u(t) still persistently excites the process is to add a small dither or perturbation signal to the input u. This noisy signal is usually of such a small amplitude that the control is not seriously affected, but the identification can still proceed. In some cases there may be enough natural plant disturbances to make it unnecessary to add noise to u. With this added noise, whether added deliberately or otherwise, the estimation procedure is the same as described in section 6.7.
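Both the rank deficiency and the remedy are easy to demonstrate numerically. In the sketch below (an illustration, not from the text) a first-order plant is simulated under pure proportional feedback; without dither the input column of the data matrix is an exact multiple of the output column, while a small dither restores a usable condition number:

```matlab
% Conditioning of the data matrix X under proportional feedback u = -kc*y.
N = 200; kc = 2.5; a = 0.8; b = 0.5;    % plant: y(i+1) = a*y(i) + b*u(i)
dither = 0.05*randn(N,1);               % small perturbation signal

for dith = [0 1]                        % first without, then with dither
  y = zeros(N,1); u = zeros(N,1); y(1) = 1;
  for i = 1:N-1
    u(i) = -kc*y(i) + dith*dither(i);   % control action (+ optional dither)
    y(i+1) = a*y(i) + b*u(i);
  end
  X = [y(1:N-1), u(1:N-1)];             % columns of the data matrix, Eqn. 6.20
  fprintf('dither %d: cond(X) = %g\n', dith, cond(X));
end
```

Without dither, cond(X) is astronomically large since the columns are exactly proportional; with dither it drops to a modest value.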
The above arguments seem to suggest that closed loop identification is suboptimal, and that if possible, one should always try to do the identification in open loop, design a suitable controller, then close the loop. This avoids the possible ill-conditioning, and the biasing in the estimated parameters. However Landau, in a series of publications culminating in [112], argues that in fact closed loop control, with suitable algorithms, actually develops better models, since the identification is constrained to the frequencies of interest for good closed loop control. In the final analysis, it all depends on whether we want a good model of the plant, say for design, in which case we do the identification in open loop, or good control, in which case closed loop identification may be better.

A very readable summary of closed loop identification is given in [93, pp517-518], [17, p82] and especially [200]. Identification in the closed loop is further discussed in chapter 7.
6.11.1 Closed loop RLS in Simulink

Implementing recursive identification using SIMULINK is not as straightforward as writing raw MATLAB code, because of the difficulty in updating the parameters inside a transfer function block. Under normal operation these are considered constant parameters within the filter block, so this is not usually a problem. Fig. 6.57 shows the closed loop estimation using the same RLS blocks as those used in Fig. 6.42 on page 289. In this example the identified parameters are not used in the controller in any sort of adaptive manner; for that we need an adaptive controller, which is the subject of the following chapter.
Figure 6.57: Closed loop estimation using RLS in SIMULINK. The plant input/output data from the PID control loop around the unknown plant Gd are fed to a phi generator and an RLS block, which exports the parameter estimates theta and the covariance P. See also Fig. 6.42.
6.12 Summary

System identification is where we try to regress or fit parameters of a given model skeleton or structure to input/output data from a dynamic process. Ideally we would obtain not only the values of the parameters in question, but also an indication of the goodness of fit, and the appropriateness of the model structure to which the parameters are fitted. Identification methodologies usually rely on a trial & error approach, where different model structures are fitted, and the results compared. The SYSTEM IDENTIFICATION TOOLBOX within MATLAB, [125], is a good source of tools for experimentation in this area, because the model fitting and analysis can be done so quickly and competently.

Identification can also be attempted online, where the current parameter estimate is improved or updated as new process data becomes available, in an efficient way without the need to store all the previous historical data. We do this by re-writing the least-squares update as a recursive algorithm. Updating the parameters in this way enables one to follow time-varying models, or even nonlinear models, more accurately. This philosophy of online estimation of the process model is a central component of an adaptive control scheme, which is introduced in chapter 7.

However there are two key problems with the vanilla recursive least-squares algorithm which become especially apparent when we start to combine identification with adaptive control. The first is the need to prevent the covariance matrix P from becoming ill-conditioned. This can, in part, be solved by ensuring a persistently exciting input. The second problem is that given non-white, or coloured, noise, the RLS scheme will deliver biased estimates. However, there are many extensions to the standard schemes that address these and other identification issues. Succinct summaries of system identification with some good examples are given in [22, pp422-431] and [148, p856], while dedicated texts include [124, 188, 200].
Problems

Problem 6.4 An identification benchmark problem from [109].

1. Simulate the second-order model with external disturbance v_t,

    y_t = Σ_{i=1}^{2} a_i y_{t−i} + Σ_{i=0}^{2} b_i u_{t−i} + Σ_{i=1}^{2} d_i v_{t−i} + e_t,   e_t ∼ N(0, σ^2)        (6.79)

where the parameter values are:

    a_1    a_2   b_0   b_1    b_2   d_1   d_2   σ
    0.98   0.9   0.5   0.25   0.1   0.8   0.2   0.1

The input u_t is normally distributed discrete white noise, and the external disturbance is to be simulated as a rectangular signal alternating periodically between the values +1 and −1 at t = 100, 200, . . . Run the simulation for 600 time steps, and at t = 300, change a_1 to 0.98.

2. This is a challenging identification problem, because the rarely varying external disturbance signal gives little information about the parameters d_i. Identify the parameters in Eqn. 6.79, perhaps starting with θ̂_{1|0} = 0, P_{1|0} = 50 I, and using an exponential forgetting factor, λ = 0.8.
Problem 6.5

1. Run through the demos contained in the SYSTEM IDENTIFICATION toolbox for MATLAB.

2. Investigate the different ways to simulate a discrete plant with a disturbing input. Use the model structure

    A(q^{-1}) y_t = B(q^{-1}) u_t + C(q^{-1}) e_t

Choose a representative stable system, say something like

    A(q^{-1}) = 1 − 0.8q^{-1},   B(q^{-1}) = 0.5,   C(q^{-1}) = 1 + 0.2q^{-1}

and choose some sensible inputs for u_t and e_t. Simulate the following and explain any differences you find.

(a) Write your own finite difference equation in MATLAB, and compare this using filter.

(b) Use the transfer function object, tf, in MISO (multiple input/single output) mode.
Hint: Use cell arrays as

    >> G = tf({B,C},{A,A},T);  % discrete version
    >> G.variable = 'q';

(c) Use idpoly and idmodel/sim from the SYSTEM IDENTIFICATION toolbox.

(d) Build a discrete SIMULINK model.
Hint: You may find problems due to the differing lengths of the polynomials. Either ensure all polynomials are the same length, or swap from z^{-1} to z, or introduce delays.

3. Repeat problem 2 with an unstable model (A is unstable).

4. What is the difference in SIMULINK when you place the A polynomial together with the B and C polynomials? Pay particular attention to the case where A is unstable.
5. Identification of plants that include coloured noise inputs is considerably more complex than just using arx. The following illustrates this.

(a) Create a truth plant with polynomials A, B, C. (Use idpoly.)

(b) Simulate this plant using idmodel/sim for the following two cases:
  i. A noise-free version: i.e. random u(t) and no e(t).
  ii. A version with coloured noise: i.e. random u(t), e(t).

(c) Estimate an ARX model (i.e. only A, B polynomials) using both data sets. Present the model with uncertainty bounds on the estimated parameters using present. What was the problem with the coloured noise version, and how can we avoid it?

(d) Estimate the A, B and C polynomials using an extended-LS scheme such as armax or pem. Has the identification improved significantly? How did the computational time change from using arx to armax?

6. The DAISY (Data Base for the Identification of Systems), [135], located at www.esat.kuleuven.ac.be/sista/daisy/, contains a number of simulated and experimental multivariable data sets. Download one of these data sets, preferably one of the real industrial data trends, and make an identification.
7. RLS with modified forgetting factors.
Construct a simulation to estimate the parameters in a dynamic model (as in tident.m) to test various extensions to the forgetting factor idea. Create a simulation with output noise, and step change the model parameters part way through the run. (For more information on these various extensions, see [200, pp140-160].)

(a) Use a start-up forgetting factor of the form

    λ_1(t) = λ_0 + (1 − λ_0) (1 − exp(−t/τ_f))        (6.80)

where λ_0 is the initial forgetting factor (say 0.9-0.95), and τ_f is the time constant that determines how fast λ(t) approaches 1.0.
Note that Eqn. 6.80 can be re-written in recursive form as

    λ_1(t) = α λ_1(t−1) + (1 − α)

where α = e^{−1/τ_f} and λ_1(0) = λ_0.
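Before embedding Eqn. 6.80 in the estimator, it is worth verifying numerically that the recursive form reproduces the closed form. A short sketch, with illustrative values for λ_0 and τ_f, is:

```matlab
% Start-up forgetting factor: closed form (Eqn. 6.80) vs recursive form.
lam0 = 0.9; tauf = 20; t = (0:100)';
lam_cf = lam0 + (1-lam0)*(1 - exp(-t/tauf));    % closed form, Eqn. 6.80

alpha = exp(-1/tauf);                           % recursion coefficient
lam_rec = zeros(size(t)); lam_rec(1) = lam0;
for i = 2:length(t)
  lam_rec(i) = alpha*lam_rec(i-1) + (1-alpha);  % recursive form
end

max(abs(lam_cf - lam_rec))                      % agreement to machine precision
plot(t,[lam_cf lam_rec])                        % lambda rises from lam0 towards 1
```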
(b) Combine with the start-up forgetting factor an adaptive forgetting factor of the form

    λ_2(t) = (τ_f/(τ_f − 1)) (1 − ε^2(t)/(τ_f s_f(t)))        (6.81)

where s_f(t) is a weighted average of past values of the squared error ε^2. [200, p159] suggests using the filter

    s_f(t) = ((τ_f − 1)/τ_f) s_f(t−1) + ε^2/τ_f

Combine both the start-up and the adaptive forgetting factor to get a varying forgetting factor λ(t) = λ_1(t) λ_2(t).

(c) A directional forgetting factor tries to update only in those directions in parameter space where there is information available. The covariance update with directional forgetting is

    P_t = P_{t−1} (I − x_t x_t^T P_{t−1} / (r_{t−1}^{−1} + x_t^T P_{t−1} x_t))        (6.82)

where the directional forgetting factor r(t) is

    r_t = λ̄ − (1 − λ̄)/(x_{t+1}^T P_t x_{t+1})        (6.83)

and the scalar λ̄ is like the original fixed forgetting factor.
Whatever happened to elegant solutions?
8. Compare the operation count of the RLS scheme with square root updates against the normal update. Construct a sufficiently ill-conditioned example where you can demonstrate a significant difference.
Summary of the RLS square root update scheme for y = φ^T θ:
Define the square root of P as

    P ≜ S S^T

Update S_t by following

    f_t = S_{t−1}^T φ_t
    β_t = 1 + f^T f
    α_t = 1/(β_t + √β_t)
    L_t = S_{t−1} f_t
    S_t = S_{t−1} − α_t L_t f_t^T

Update the parameter estimates using

    θ̂_{t+1} = θ̂_t + (L_t/β_t) (y_t − φ_t^T θ̂_t)
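As a starting point for the comparison, the square root scheme above can be transcribed almost line-for-line into a MATLAB function. This is a sketch to be saved as, say, sqrtRLSupdate.m; the function name is ours, and no forgetting factor is included:

```matlab
function [theta,S] = sqrtRLSupdate(theta,S,phi,y)
% One square-root RLS update step for y = phi'*theta.
% P = S*S' is never formed explicitly, which preserves numerical conditioning.
  f = S'*phi;                    % f_t = S'_{t-1} * phi_t
  beta = 1 + f'*f;               % beta_t
  alpha = 1/(beta + sqrt(beta)); % alpha_t
  L = S*f;                       % L_t
  S = S - alpha*(L*f');          % square-root factor update
  theta = theta + (L/beta)*(y - phi'*theta); % parameter update
end
```

A typical initialisation is theta = zeros(n,1) and S = 100*eye(n), the analogue of P = 1e4*eye(n).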
9. Modify the square root RLS scheme to incorporate a forgetting factor.
10. Construct an RLS update algorithm in SIMULINK, perhaps using the matrix/vector blocks in the DSP blockset. Use the following update equations:

    K_{k+1} = P_k φ_{k+1} / (1 + φ_{k+1}^T P_k φ_{k+1})
    P_{k+1} = P_k (I − φ_{k+1} φ_{k+1}^T P_k / (1 + φ_{k+1}^T P_k φ_{k+1}))
    θ̂_{k+1} = θ̂_k + K_{k+1} (y_{k+1} − φ_{k+1}^T θ̂_k)

Note that as of MATLAB R12, there is considerable support for vector/matrix manipulations in raw SIMULINK.
11. Construct a square root update algorithm (from Problem 8) in SIMULINK, perhaps using the matrix/vector blocks in the DSP blockset.

12. Wellstead & Zarrop, [200, p103], develop an expression for the bias as a function of the model parameters for a simple 1-parameter model

    y_t = a y_{t−1} + e_t + c e_{t−1},   |a| < 1, |c| < 1

where e_t is white noise with variance σ_e^2, as

    â − a ≈ c(1 − a^2)/(1 + c^2 + 2ac)

(a) Validate with simulation the expression for the size of the bias, â − a.

(b) The size of the bias should not be a function of the size of the variance of e_t. Is this a surprising result? Verify this by simulation. What happens if you drop the variance of the noise, σ_e^2, to zero? Do you still observe any bias? Should you?
13. Starting with the SIMULINK demo rlsest, which demonstrates an adaptive pole-placement of a discrete plant, do the following tasks.

(a) Modify the rlsest simulation to use a continuous plant of your choice and, for the moment, a fixed (non-adaptive) PID controller. (I.e. disconnect the closed-loop adaption part of the controller.) Use Fig. 6.58 as a guide to what you should construct.
Explain in detail how the state generator works and how the RLS estimator works. Modify the blocks so that you can identify a different number of A terms from the B terms (such as, say, two B(q^{-1}) and three A(q^{-1}) polynomial terms).

(b) Run the estimation routine under closed-loop conditions. What do you observe, especially as you change the noise level? (To be realistic, you should add at least one more noise source, but where?)

(c) Modify the RLS estimator s-function to export the covariance matrix as well as the updated parameters. Do these confirm your findings from part 13b?

(d) Modify your RLS scheme to better handle noise and the closed loop conditions. You may like to use some simple heuristics based on the trace of the covariance matrix, or use a directional and/or variable forgetting factor.

14. Read the MATLAB technical support note #1811, "How Do I Create Time Varying Matrix Gain and State-space Blocks as C MEX S-functions?". Download the relevant files and implement an adaptive controller.
15. Model identification in state-space with state information.
The English mathematician Richardson has proposed the following simple model for an arms race between two countries:

    x_{k+1} = a x_k + b y_k + f        (6.84)
    y_{k+1} = c x_k + d y_k + g        (6.85)
Figure 6.58: A modified version of rlsest in SIMULINK: a PID controller around the plant 3(s+0.3)/((s+2)(s+1)(s+0.4)), with a state generator and a recursive least squares parameter estimator processing the sampled plant input u[n] and output y[n].
where x_k and y_k are the expenditures on arms of the two nations, and a, b, c, d, f and g are constants. The following data have been compiled by the Swedish International Peace Research Institute, (SIPRI Yearbook 1982).
Millions of U.S. dollars at 1979 prices and 1979 exchange rates
year Iran Iraq NATO WTO
1972 2891 909 216478 112893
1973 3982 1123 211146 115020
1974 8801 2210 212267 117169
1975 11230 2247 210525 119612
1976 12178 2204 205717 121461
1977 9867 2303 212009 123561
1978 9165 2179 215988 125498
1979 5080 2675 218561 127185
1980 4040 225411 129000
1981 233957 131595
(a) Determine the parameters of the model and investigate the stability of this model.

(b) Determine the estimates of the parameters based on 3 consecutive years. Look at the variability of the estimates.

(c) Create a recursive estimate of the parameters. Start in 1975 with the initial values determined from 1972-74.
Chapter 7
Adaptive Control
Adopt, adapt and improve . . . Motto of the round table.
John Cleese whilst robbing a lingerie boutique.
7.1 Why adapt?

Since no one has yet found the holy grail of controllers that will work for all conceivable occasions, we must not only select an appropriate class of controller, but are also faced with the unpleasant task of tuning it to operate satisfactorily in our particular environment. This job is typically done only sporadically in the life of any given control loop, but in many cases, owing to equipment or operating modifications, the dynamics of the plant have changed so as to render the controller sub-optimal. Ideally the necessary re-tuning could be automated, and this is the idea behind an adaptive controller.
In a control context, adaption is where we adjust the controller as the process conditions change.
The problem is that for tight control we essentially want to have a high feedback gain, giving
a rapid response and good correction with minimal offset. However if the process dynamics
subsequently change due to environmental changes for example, the high feedback gain may
create an unstable controller. To prevent this problem, we adapt the controller design as the
process conditions change, to achieve a consistently high performance.
Adaptive control was first used in the aerospace industry around the late 1950s where automatic
pilots adapted the control scheme depending on external factors such as altitude, speed, payload
and so forth. One classic example was the adaptive flight control system (AFCS) for the X-15 in
the late 1960s described in [7]. The adaptive control system was developed by Honeywell and
was fundamentally a gain scheduler that adapted the control loop gains during flights that
ranged from sea level to space at 108 km.
Unfortunately the crash of X-15-3 and the loss of pilot Michael Adams in 1967 dampened the
enthusiasm for adaptive flight control, and the research program was terminated shortly afterwards.
In the past couple of decades however, adaptive control has spread to the processing industries
with several commercial industrial adaptive controllers now available. Some applications where
adaptive control is now used are:
Aeronautical control As the behaviour of high-performance fighter aircraft depends on altitude,
payload, speed, etc., adaptive control is vital for the flight to be feasible.
Tanker steering The dynamics of oil supertankers change drastically depending on the depth of
water, especially in relatively shallow and constricted water-ways.
Chemical reactions Chemical processes that involve catalytic reactions will vary as the catalyst
denatures over time. Often coupled to this is that the heat transfer will drop as the tubes
in the heat exchanger gradually foul.
Medical The anesthetist must control the drug delivery to the patient undergoing an operation,
avoiding the onsets of either death or consciousness.[1]
Some further examples of adaptive control are given in the survey paper [180].
Adaptive control is suitable:
- if the process has slowly varying parameters
- if clean measurements are available
- for batch processes or startup/shutdown of continuous processes
- for widely changing operating conditions (such as in pH controlled processes)

Adaptive control is not suitable if:
- PID control is already satisfactory
- the system output is essentially constant
- the system is essentially linear and time invariant
- the system is nonlinear, but easy to model
7.1.1 The adaption scheme
When faced with tuning a new control loop, the human operator would probably do the following steps:
1. Disturb the plant and collect the input/output data.
2. Use this data to estimate the plant dynamics.
3. Using the fitted model, an appropriate controller can be designed.
4. Update the controller with the new parameters and/or structure and test it.
If the scheme outlined above is automated (or you find an operator who is cheap and does
not easily get bored), we can return to step 1 and repeat the entire process at regular intervals,
as shown in Fig. 7.1. This is particularly advantageous in situations where process conditions
can change over time. The process and feedback controller are as normal, but we have added
a new module in software (dashed box) consisting of an on-line identification stage to extract a
process model M(θ), and, given this model, a controller design stage to find suitable
controller tuning constants.
Of course there will be some constraints if we were to regularly tune our controllers. For example,
it is advisable to do the estimation continuously in a recursive manner, and not perturb the
plant very much. We could adapt the three tuning constants of a standard PID controller, or we
could choose a controller structure that is more flexible, such as pole-placement. The adaption
module could operate continuously, or it could be configured to adapt only on demand. In the
latter case, once the tuning parameters have converged, the adaption component of the controller
turns itself off automatically, and the process is then controlled by a constant-tuning-parameter
controller such as a standard PID controller.
[1] One particularly ambitious application is described in [52, p429].
Figure 7.1: The structure of an indirect adaptive controller. The setpoint r(t) and output y(t) are measured by the adaptive module (in software), which identifies a process model M(θ) of the plant S and then performs a control design step, updating the feedback controller if necessary.
7.1.2 Classification of adaptive controllers
As one may expect, there are many different types of adaptive control, all possessing the same
general components, and an almost equal number of classification schemes. To get an overview
of all these similar, but subtly different, controllers, we can classify them according to what we
adapt:
Parametric adaption Here we continually update the parameters of the process model, or even
of the controller, or
Signal adaption where we vary a supplementary signal to be added directly to the output of the
static controller.
Alternatively we can classify them according to how we choose a performance measure:
Static A static performance measure is one such as overall efficiency, or maximising production.
Dynamic A dynamic performance measure is to optimise the shape of the response to a step test.
Functional of states and inputs This more general performance index is discussed in chapter 9.
7.2 Gain scheduling
The simplest type of adaptive control is gain scheduling. In this scheme, one tries to maintain
a constant overall loop gain, K, by changing the controller gain, K_c, in response to a changing
process gain, K_p. Thus the product K_c K_p is always constant.
An example where gain scheduling is used is in the control of a liquid level tank that has a varying
cross-sectional area. Suppose we are trying to control the level in a spherical tank by adjusting
a valve on the outlet. The exiting flow is proportional to the square root of the height of liquid above
the valve, which in turn is a nonlinear function of the volume in the tank. Fig. 7.2 shows how
this gain varies as a function of material in the tank, illustrating why this is a challenging control
problem. No single constant controller setting is optimal both for operating in the middle and
at the extremes of the tank.
Figure 7.2: The plant gain, K_p, is proportional to the square root of the hydraulic head, which in turn is a nonlinear function of volume in the tank. Consequently the controller gain, K_c, should be the inverse of the plant gain.
A way around this problem is to make the controller parameter settings a function of the level
(the measured variable, y). Here the controller first measures the level, then it decides what
controller constants (K_c, τ_i, τ_d) to use based on the level, and then finally takes corrective action
if necessary. Because the controller constants change with position, the system is nonlinear and
theoretical analysis of the control system is more complicated.
To do this in practice one must:
1. Determine the open loop gain at the desired operating point and at several points either side
of it.
2. Establish good control settings at each of these three operating points. (This can be done by
any of the usual tuning methods such as Ziegler-Nichols, Cohen-Coon, etc.)
3. Generate a simple function (usually piecewise linear) that relates the controller settings to
the measured variable (level).
The gain scheduling described above is a form of open-loop or feedforward adaption: the parameters
are changed according to a pre-set programme. If the tank is in fact different from the
one used to calculate the parameter adaption functions, such as having a different cross-section,
then the controller will not function properly. This is because the controller is not comparing its
own system model with the true system model at any time. However, despite this drawback, gain
scheduling is an appropriate way to control things like tanks of varying cross-sectional area. It is
unlikely that the tank's cross-sectional area will change significantly with time, so the instrument
engineer can be quite confident that the process, once tuned, will remain in tune.
7.3 The importance of identication
The adaptive controller (with feedback adaption) must be aware of the process, or more accu-
rately it must be aware of changes in process behaviour hence the importance of system iden-
tication covered in the previous chapter. Central to the concept of controllers that adapt, is an
identication algorithm that constantly monitors the process trying to establish if any signicant
changes have occurred. In the gain schedular given in 7.2, the identication component was
trivial, it was simply the level reading. The purists will argue that the gain schedular is not a true
adaptive controller, because if the dynamics of the system change (such as the cross section of
the tank suddenly varies), the controller is unaware of this change, and cannot take appropriate
corrective action.
There are two popular adaptive control techniques currently used in industry. The first approach
is to use a controller with auto-tuning. Normally these controllers operate in just the same
manner as a traditional PID controller, but if the operator requests it, the controller can be tuned
automatically. This is sometimes called tuning on demand. Once the auto-tuning part is complete,
the tuning procedure switches itself off and the controller reverts back to the traditional
PID mode. These types of adaptive controllers are used where adaptive control is required,
but the loop may be too sensitive or critical to have constant adaptation.
The second approach to adaptive control is called true adaptive control. This is where the controller
adapts itself to the changing process without operator demand. Unlike the auto-tuners,
the true adaptive controller continuously identifies the process and updates the controller
parameters. Many use variants of the least-squares identification that was described in
§6.7.
Industrially, auto-tuners have been successful and are now accepted in the processing industries.
Most of the major distributed computer control companies provide an adaptive controller in the
form of a self-tuner as part of their product (Satt Control, Bailey, ABB, Leeds-Northrup, Foxboro).
Åström and Hägglund describe some of these auto-tuning commercial products in chapter 5 of
[15, pp. 105-132].
However, true adaptive control has not been accepted as widely as the self-tuner, because it
requires a higher level of expertise than self-tuning regulators.
In addition, because the adaption is continuous, it requires a much larger safety network
to guarantee correct operation. Even though the adaptive controller does not require tuning,
correct operation still requires prior estimates before the estimator can correctly function:
things such as the expected model order, the sampling period and the
expected dead time. If these values are significantly different from the true values, then poor
adaption, and hence poor control, will result. Section 7.4 will consider self-tuning regulators.
7.3.1 Polynomial manipulations
The controller design strategies in this section will use the so-called polynomial approach. In this
design technique, we will need to be able to add and multiply polynomials, of possibly differing
lengths. The two routines given in Appendix B on page 507 add and multiply polynomials and
are used in the following controller design scripts in this chapter. An alternative programming
methodology is to create a polynomial object and then overload the plus and times operators.
7.4 Self tuning regulators (STRs)
A better adaptive controller than a gain scheduler is one where the controller parameters are
continually updated as the process characteristics change with time in an undetermined manner.
These are called self-tuning regulators; an early application, cited in [9], was controlling a paper
machine in the pulp and paper industry. Further development culminating in the self-tuning
regulator is described in the classic reference [12]. Self-tuning regulators are more flexible
than gain scheduling controllers, which, to be effective, require the relation between the input and
output to be known beforehand and assumed constant. A textbook covering many aspects of the
design and use of self-tuning regulators is [200].
The basic elements of a self-tuning regulator are:
1. An online identification of the model using input-output data. Usually the model is estimated
using recursive least-squares parameter estimation.
2. A relation between the model parameters estimated in step 1 and the controller parameters.
3. The control algorithm. Many different types of control algorithm exist, but most try to
adjust the controller parameters, given the continually changing process parameters, in
such a way that the closed-loop response is constant. Because the controller
is being continuously updated to the new varying conditions, these control algorithms can
be quite vigorous.
Step 1, the identification phase, was covered in chapter 6. Steps 2 and 3 are the controller tuning
and the controller structure sections, and are discussed further in this chapter.
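Step 1, the recursive least-squares update with a forgetting factor, is only a few lines of code. The following Python/NumPy sketch plays the same role as the `rls` routine used in the MATLAB listings later in this chapter; the first-order ARX plant being identified (y_k = 0.9 y_{k-1} + 0.5 u_{k-1}) and all names are invented for illustration.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.95):
    """One recursive least-squares step with forgetting factor lam.
    theta: parameter estimate (n,1), P: covariance (n,n),
    phi: regressor (n,), y: newly measured output."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # estimator gain
    eps = y - (phi.T @ theta).item()        # prediction error
    theta = theta + k * eps
    P = (P - k @ phi.T @ P) / lam           # covariance update with forgetting
    return theta, P

# Identify an assumed plant y_k = 0.9*y_{k-1} + 0.5*u_{k-1}
rng = np.random.default_rng(0)
theta = np.zeros((2, 1)); P = 1e4 * np.eye(2)   # vague prior
u = rng.standard_normal(200); y_prev = 0.0       # exciting input
for k in range(1, 200):
    y = 0.9 * y_prev + 0.5 * u[k - 1]            # noise-free true plant
    theta, P = rls_update(theta, P, np.array([y_prev, u[k - 1]]), y)
    y_prev = y
```

With noise-free, persistently exciting data the estimate converges essentially exactly to [0.9, 0.5]; with measurement noise the forgetting factor trades tracking speed against estimate variance.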
Suppose an ARX model has been previously identified with the form

    A(q) y_k = B(q) u_k + e_k                                        (7.1)

Then what should the control law be? The simplest control law is a direct model inversion. We
choose u such that y_k = e_k, which is the best possible result given the unknown noise. This results
in the control law

    u_k = (A(q)/B(q)) y_k                                            (7.2)

which is called minimum variance control, [9, 12, 14]. Expanding this control law gives the manipulated
input at sample time k as

    u_k = (1/b_0) (y*_k + a_1 y_{k-1} + a_2 y_{k-2} + ... + a_n y_{k-n}
                   - b_1 u_{k-1} - ... - b_m u_{k-m})                (7.3)

where y*_k is the setpoint.

Clearly for this type of controller to be realisable, the b_0 term in Eqn. 7.3 must be non-zero, or
in other words, the plant must have no dead time. This means that we can calculate the present
control action u_t that will drive y_t to the desired y*_t.
Even without dead time however, this type of control law will exhibit extremely vigorous action
in an attempt to bring the process back to the setpoint in a single sample time. While the simulated
results look very impressive (see for example Fig. 7.3), in practice we will need to alter the
structure of the control law, relaxing it somewhat, to account for imperfect modelling and manipulated
variable constraints. These problems are partly circumvented in §7.6.4 and summarised in
§7.7.
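To make Eqn. 7.3 concrete, the following Python/NumPy fragment computes one control move for an assumed third-order ARX model and then verifies that, with no noise, the resulting output lands exactly on the setpoint. The coefficients and past data are invented for illustration.

```python
import numpy as np

# Assumed model A(q) y_k = B(q) u_k with
# A = 1 + a1 q^-1 + a2 q^-2 + a3 q^-3 and B = b0 + b1 q^-1
a = np.array([-0.78, -0.65, 0.2])       # [a1, a2, a3]
b = np.array([0.2, 0.05])               # [b0, b1]

y_past = np.array([0.5, 0.3, 0.1])      # y_{k-1}, y_{k-2}, y_{k-3}
u_past = np.array([0.4])                # u_{k-1}
ysp = 1.0                               # setpoint y*_k

# Eqn. 7.3: u_k = (y*_k + a1 y_{k-1} + ... - b1 u_{k-1} - ...)/b0
u_k = (ysp + a @ y_past - b[1:] @ u_past) / b[0]

# Noise-free plant response: y_k = -(a1 y_{k-1} + ...) + b0 u_k + b1 u_{k-1}
y_k = -(a @ y_past) + b[0] * u_k + b[1:] @ u_past
```

Because the law inverts the model exactly, y_k equals the setpoint after a single sample, which is precisely the one-shot behaviour (and the large input moves) discussed above.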
7.4.1 Simple minimum variance control
The algorithm to construct an adaptive minimum variance controller consists of two parts: the
identification of the model parameters, θ, in this instance using recursive least-squares, and the
implementation of the subsequent minimum variance control law.

Algorithm 7.1 Simple minimum variance control for plants with no delay

1. Measure the current output y_t.
2. Form the new input/output data matrix X.
3. Calculate the prediction ŷ_t from X and θ_{t-1}.
4. Apply the control law, Eqn. 7.3, to obtain the current input, u_t, using the estimated parameters.
5. Update the estimated parameters θ_k using a recursive least-squares algorithm such as Algorithm 6.2 or equivalent.
A simple minimum variance controller is demonstrated in Listing 7.1. To test it, we
will construct a simulation where we abruptly swap between two different plants:

    G_1(q^-1) = (0.2 + 0.05q^-1)/(1 - 0.78q^-1 - 0.65q^-2 + 0.2q^-3),   for 0 ≤ t < 100    (7.4)
    G_2(q^-1) = (0.3 + 0.15q^-1)/(1 - 0.68q^-1 - 0.55q^-2 + 0.1q^-3),   for t ≥ 100        (7.5)

For this simulation we will assume no model/plant structural mismatch and we will use
recursive estimation with the forgetting factor option.
Listing 7.1: Simple minimum variance control where the plant has no time delay
t = [0:200]'; yspt = square(t/18); % Create time & setpoint
y=zeros(size(yspt)); u=zeros(size(y));
nr=0.02; % noise or dither sigma
5 n1 = [0.2 0.05]; d1 = [1 -0.78 -0.65 0.2]; % system #1, Eqn. 7.4
n2 = [0.3 0.15]; d2 = [1 -0.68 -0.55 0.1]; % system #2, Eqn. 7.5, for t > 100
thetatrue = [ones(t(1:100))
*
[d1 n1]; ones(t(101:length(t)))
*
[d2 n2]];
na = 4;nb = 2; dead=0; nt=na+nb; % Estimated model parameter orders, na, n
b
& n
k
.
10 lam = 0.95; P=1000
*
eye(nt); % Identication parameters: , P0
thetaest=rand(nt,1); % random start guess for 0
Param=rand(na,nt);Pv = ones(na,1)
*
diag(P)';
for i=na+1:length(yspt)
15 theta = thetatrue(i,:)'; % Actual plant, G
thetar=[thetaest(1:na); thetaest(na+2:nt)]; % Estimated plant,

G
b0 = thetaest(na+1);
yv=y(i-1:-1:i-na); uv=u(i-1:-1:i-nb+1); % old output & input data
u(i)=(yspt(i)-[yv; uv]'
*
thetar)/b0; % Min. Var. control law
20
c = [yv', u(i:-1:i-nb+1)']; % Now do identication
324 CHAPTER 7. ADAPTIVE CONTROL
y(i) = c
*
theta(2:end) + 0.01
*
randn(1)
*
nr; % true process with noise
[thetaest,P] = rls(y(i),c',thetaest,P,lam); % RLS with forgetting
Param = [Param; thetaest']; Pv = [Pv;diag(P)'];
25 end % for loop
Initially we will set the forgetting factor to λ = 0.99, and we will introduce a small amount of noise
so the estimation remains reasonable even under closed-loop conditions, nr=0.02. We will aim
to follow the trajectory given as the dashed line in the upper figure of Fig. 7.3.
Our minimum variance controller will both try to estimate the true plant and control the process
to the setpoint, y*. The lower trend in Fig. 7.3 shows the parameter values for the estimated
model.
Figure 7.3: Simple minimum variance control with an abrupt plant change from G_1(q^-1) to G_2(q^-1) at t = 100, with forgetting factor λ = 0.99. Top: output & setpoint; middle: input; bottom: estimated parameters. (See also Fig. 7.4.)
In this unrealistically ideal situation with no model/plant mismatch, the controlled response
looks practically perfect, despite the plant change at t = 100 (after, of course, the converging
period). Fig. 7.4 shows an enlarged portion of Fig. 7.3 around the time of a setpoint change. Note
how the response reaches the setpoint in one sample time, and stays there. This is as expected
when using minimum variance control since we have calculated the input to force y exactly equal
to the desired output y* in one shot. In practice this is only possible for delay-free plants with no
model-plant mismatch.
Of course under industrial conditions (as opposed to simulation), we can only view the controlled
result, and perhaps view the trends of the estimated parameters; we don't know if they
have converged to the true plant parameters. However if the estimates converge at all to steady
values, and the diagonal elements of the covariance matrix are at reasonable values (these are
application dependent), then one may be confident, but never totally sure, that a reasonable model
has been identified.
Figure 7.4: A zoomed portion of Fig. 7.3 (output & setpoint, and input) showing the near-perfect response around a setpoint change when using simple minimum variance control on a plant with no delay.
7.5 Adaptive pole-placement
Clearly the problem with the simple minimum variance controller is the excessive manipulated
variable demands imposed by the controller in an attempt to reach the setpoint in one sample time.
While our aim is to increase the speed of the open-loop response, we also need to reduce the
sensitivity to external disturbances. Both these aims should be addressed despite changes in the
process.

Fig. 7.5 shows a generalised control structure given in shift polynomial form. If the controller
polynomials G = H, then we have the classic controlled feedback loop, otherwise we have a
slightly more general configuration. Our aim as controller designers is to select polynomials
F, G, H given process polynomials B, A such that our performance aims are met. The closed
loop transfer function is

    y(t) = HB/(FA + GB) r(t)

where clearly the stability is given by the denominator, FA + GB. Rather than demand the one-step
transient response used in the minimum variance case, we can relax this criterion, and ask
only that the controlled response follow some pre-defined trajectory.
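Since stability is governed by the denominator FA + GB, a candidate controller is easy to check numerically. The sketch below (the plant and controller polynomials are invented for illustration) forms the closed-loop characteristic polynomial by convolution and inspects its roots:

```python
import numpy as np

# Coefficients in ascending powers of q^-1 (the MATLAB listing convention)
A = np.array([1.0, -0.9]); B = np.array([0.5])        # assumed plant B/A
F = np.array([1.0, 0.2]);  G = np.array([0.8, -0.3])  # assumed controller polys

den = np.convolve(F, A)          # F*A
gb = np.convolve(G, B)           # G*B
den[:len(gb)] += gb              # align and add ascending q^-1 coefficients

# For a polynomial in q^-1, the roots of the reversed (q) polynomial are
# the closed-loop poles; np.roots on descending coefficients gives exactly that.
poles = np.roots(den)
stable = bool(np.all(np.abs(poles) < 1.0))
```

Here den works out to 1.4 - 0.85 q^-1 - 0.18 q^-2, whose poles lie inside the unit circle, so this particular loop is stable.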
Figure 7.5: Adaptive pole-placement control structure. The setpoint r(t) passes through the pre-compensator H(q^-1); the controller 1/F(q^-1) drives the process B(q^-1)/A(q^-1) to give the output y(t), with G(q^-1) in the feedback path.
Suppose that we restrict the class of dynamic systems to the form

    A y(t) = B u(t-1)                                                (7.6)

where we have assumed a one sample time delay between u and y. This is to leave us enough
time to perform any controller calculations necessary. We will use the controller structure given
in Fig. 7.5,

    F u(t) = H r(t) - G y(t)                                         (7.7)

where the polynomials F, G, H are of the form

    F = 1 + f_1 q^-1 + f_2 q^-2 + ... + f_nf q^-nf
    G = g_0 + g_1 q^-1 + g_2 q^-2 + ... + g_ng q^-ng
    H = h_0 + h_1 q^-1 + h_2 q^-2 + ... + h_nh q^-nh

With the computational delay, the closed loop is

    (FA + q^-1 BG) y(t) = q^-1 BH r(t)                               (7.8)

and we desire the closed-loop poles to be assigned to the locations specified in the polynomial T,

    T = 1 + t_1 q^-1 + t_2 q^-2 + ... + t_nt q^-nt                   (7.9)

Assigning the closed-loop poles to given locations is called pole-placement or pole assignment.
(In practice, the closed loop polynomial T will be a design requirement given to us, perhaps by
the customer.)

Note that some authors, such as [21], use a slightly different nomenclature convention
for the controller polynomials. Rather than the variable names F, G, H in the control law
Eqn. 7.7, the names R, S, T are used, giving a control law of R u(t) = T r(t) - S y(t).
7.5.1 The Diophantine equation and the closed loop

To assign our poles specified in T, we must select our controller design polynomials F, G such
that the denominator of the actual closed loop is what we desire,

    FA + q^-1 BG = T                                                 (7.10)

The polynomial equation Eqn. 7.10 is called the Diophantine equation, and may have many
solutions. To obtain a unique solution, A and B must be coprime (no common factors), and we should
select the order of the polynomials such that

    n_f = n_b
    n_g = n_a - 1,   (n_a ≠ 0)
    n_t ≤ n_a + n_b - n_c

One way to establish the coefficients of the design polynomials F, G, and thus solve the Diophantine
equation, is by expanding Eqn. 7.10 and equating coefficients in q^-i, giving a system of linear
equations.
    A c = b                                                          (7.11)

Here A is the Sylvester matrix built from the model coefficients: its first n_f columns contain
successively shifted copies of [1, a_1, ..., a_na]^T and its remaining n_g + 1 columns contain
successively shifted copies of [b_0, b_1, ..., b_nb]^T. The unknown vector collects the controller
coefficients, c = [f_1, ..., f_nf, g_0, g_1, ..., g_ng]^T, and the right-hand side is
b = [t_1 - a_1, t_2 - a_2, ..., t_nt - a_nt, -a_{nt+1}, ..., -a_na, 0, ..., 0]^T.
(The 5x5 system in the example below shows this structure explicitly.)
The matrix A in Eqn. 7.11 has a special structure, is termed a Sylvester matrix, and will be
invertible if the polynomials A, B are coprime. Solving Eqn. 7.11,

    c = A^-1 b

gives us the coefficients of our controller polynomials, F, G. Now the dynamics of the closed-loop
requirement are satisfied,

    y(t) = (BH/T) r(t-1)

but not the steady-state gain, which ideally should be 1. We could either set H to be the scalar

    H = [T/B]_{q=1}                                                  (7.12)

which forces the steady-state gain to 1, or preferably cancel the zeros,

    H = 1/B                                                          (7.13)

which improves the transient response in addition to keeping the steady-state gain at unity. Cancelling
the zeros should only be attempted if all the zeros of B are well inside the unit circle.
Section 7.6.1 describes what to do if the polynomial B does have poorly damped or unstable zeros.
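For the scalar choice of Eqn. 7.12, evaluating a polynomial in q^-1 at q = 1 is just the sum of its coefficients, so the pre-compensator is one line of arithmetic. In the Python/NumPy check below, T is the design polynomial used later in §7.5.3, while the B polynomial is an invented example:

```python
import numpy as np

T = np.array([1.0, -1.4685, 0.6703])  # desired closed-loop denominator
B = np.array([0.2, 0.05])             # assumed plant numerator

H = T.sum() / B.sum()                 # Eqn. 7.12: H = T/B evaluated at q = 1
ss_gain = B.sum() * H / T.sum()       # closed loop y = (BH/T) r, at q = 1
```

By construction the steady-state gain of BH/T is exactly 1, whatever B and T happen to be (provided B(1) is non-zero).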
This scheme has now achieved our aim of obtaining a consistent dynamic response irrespective
of the actual process under consideration. Since we cannot expect to know the exact model B, A,
we will use an estimated model, B̂, Â, in the controller design.
7.5.2 Solving the Diophantine equation in Matlab
Since solving the Diophantine equation is an important part of the adaptive pole-placement regulator,
the following functions do just that. Listing 7.2 solves for the polynomials F and G
such that FA + BG = T. If a single time delay is introduced into the B polynomial, such as
in Eqn. 7.10, then one simply needs to pad the B polynomial with a leading zero. Note that the
expression AF + BG is optionally computed using the polyadd routine given in Listing B.1, and
this polynomial should be close to the T polynomial specified.
Listing 7.2: A Diophantine routine to solve FA + BG = T for the polynomials F and G.

function [F,G,Tcheck] = dioph(A,B,T)
% [F,G] = dioph(A,B,T);
% Solves the Diophantine equation by setting up the Sylvester matrix,
% AF + BG = T, and subsequently solving for polynomials F & G.
% See [18, p175] (follows the nomenclature of [18], except R=F, S=G).

if abs(A(1)-1) > 1.0e-8
  error('Error: Polynomial A not monic A(1) = %g\n',A(1));
end % if

na = length(A)-1; nb = length(B)-1;  % order of polynomials A(q^-1), B(q^-1)
nf = nb; ng = na-1; n = nf+ng+1;     % required orders for F & G
D = zeros(n,n);                      % construct Sylvester matrix
for i=1:nf
  D(i:i+na,i) = A';                  % fill in columns
end % for i
for i=1:ng+1
  D(i:i+nb,nf+i) = B';               % fill in columns
end % for i
rhs = [A(2:na+1), zeros(size(na+1:n))]';  % right-hand side
rhs = [T(2:length(T)), zeros(size(length(T):n))]' - rhs;

FG = D\rhs;                     % find solution for polynomials F & G
F = FG(1:nf)'; G = FG(nf+1:n)'; % monic polynomial F has leading 1 dropped

if nargout > 2                  % then check that AF + BG = T as required
  Tcheck = polyadd(conv(A,[1,F]), conv(B,G));
end
return
An alternative Diophantine scheme
An alternative, and somewhat more elegant, scheme for solving the Diophantine equation is
given in Listing 7.3, which uses matrix convolution to construct the Sylvester matrix.
Listing 7.3: Alternative Diophantine routine to solve FA + BG = T for the polynomials F and G. Compare with Listing 7.2.

function [F,G,Tcheck] = dioph_mtx(A,B,T)
% Alternative version for polynomial Diophantine equation solving
% Solves AF + BG = T for F & G.
da = length(A)-1; db = length(B)-1;        % da = deg(A) etc.
T = [T, zeros(1,da+db-length(T)+1)];       % pad with zeros
dt = length(T)-1;                          % convert to full length
dg = da-1; df = dt-da;                     % assuming RAB to be square
B = [zeros(1,df-db+1), B];                 % pad with leading zeros
Rac = [convmtx(A,df+1); convmtx(B,dg+1)];  % construct RAB = [RA; RB]
FG = T/Rac;                                % [F, G] = T*inv(RAB)
F = FG(1:df+1); G = FG(df+2:df+dg+2);      % note F & G are row vectors
Tcheck = polyadd(conv(A,F), conv(B,G));    % verify solution
return
The alternative scheme in Listing 7.3 is slightly different from the one presented earlier in that
this time the F polynomial retains the leading unity coefficient. A third method to construct
the Sylvester matrix is to use the lower triangle of a Toeplitz matrix in MATLAB, say with the
command tril(toeplitz(1:5)).
Example of a Diophantine problem

Suppose we only want to place one pole, T = 1 + t_1 q^-1, given a model with n_a = 3, n_b = 2. The
recommended orders of the controller polynomials are

    n_f = n_b = 2
    n_g = n_a - 1 = 2
    n_t = 1 ≤ n_a + n_b - n_c = 5

and expanding the closed-loop equation Eqn. 7.10,

    (1 + f_1 q^-1 + f_2 q^-2)(1 + a_1 q^-1 + a_2 q^-2 + a_3 q^-3)
        + q^-1 (b_0 + b_1 q^-1 + b_2 q^-2)(g_0 + g_1 q^-1 + g_2 q^-2) = 1 + t_1 q^-1

which, by equating the coefficients in q^-i, i = 1, ..., 5, gives the linear system following Eqn. 7.11

    [ 1    0    b_0  0    0   ] [ f_1 ]   [ t_1 - a_1 ]
    [ a_1  1    b_1  b_0  0   ] [ f_2 ]   [ -a_2      ]
    [ a_2  a_1  b_2  b_1  b_0 ] [ g_0 ] = [ -a_3      ]
    [ a_3  a_2  0    b_2  b_1 ] [ g_1 ]   [ 0         ]
    [ 0    a_3  0    0    b_2 ] [ g_2 ]   [ 0         ]
An example of solving the Diophantine equation

In this example we start with some known polynomials, and then try to reconstruct them using
the Diophantine routines presented previously.

Suppose we start with some given plant polynomials, say

    A(q^-1) = 1 + 2q^-1 + 3q^-2 + 4q^-3,    B(q^-1) = 5 + 6q^-1 + 7q^-2

where the orders are n_a = 3 and n_b = 2. This means we should choose orders n_f = n_b = 2 and
n_g = n_a - 1 = 2, so let us invent some controller polynomials

    F(q^-1) = 1 + 8q^-1 + 9q^-2,    G(q^-1) = 10 + 11q^-1 + 12q^-2

noting that A and F are monic. We can now compute the closed loop, AF + BG, using convolution
and polynomial addition as shown in Listing 7.4.
Listing 7.4: Constructing polynomials for the Diophantine equation example

>> A = [1 2 3 4]; B = [5 6 7];
>> F = [1 8 9]; G = [10 11 12];     % note nf = nb & ng = na-1
>> T = polyadd(conv(A,F),conv(B,G)) % compute T = A*F + B*G
T =
     1    60   143   242   208   120
which gives T(q^-1) = 1 + 60q^-1 + 143q^-2 + 242q^-3 + 208q^-4 + 120q^-5.
We are now ready to essentially do the reverse. Given the polynomials A(q^-1), B(q^-1) and
T(q^-1), we wish to reconstruct F(q^-1) and G(q^-1) using the Diophantine routines given in
Listings 7.2 and 7.3.
Listing 7.5: Solving the Diophantine equation using polynomials generated from Listing 7.4.

>> [F1,G1,Tcheck1] = dioph(A,B,T)     % use Listing 7.2
F1 =
    8.0000    9.0000
G1 =
   10.0000   11.0000   12.0000
Tcheck1 =
     1    60   143   242   208   120

>> [F2,G2,Tcheck2] = dioph_mtx(A,B,T) % use Listing 7.3
F2 =
     1     8     9
G2 =
   10.0000   11.0000   12.0000
Tcheck2 =
     1    60   143   242   208   120
Note that in both cases we manage to compute the polynomials F(q^-1) and G(q^-1) which reconstruct
back to the correct T(q^-1), although the routine in Listing 7.2 drops the leading 1 in the
monic F polynomial.
7.5.3 Adaptive pole-placement with identification

We now have all the components required for an adaptive controller. We will use an RLS module
for the plant identification, and then subsequently solve the Diophantine equation to obtain
the required closed loop response. Algorithm 7.2 describes the algorithm behind this adaptive
controller.

Algorithm 7.2 Adaptive pole-placement algorithm

We will follow Fig. 7.6, which is a specific case of the general adaption scheme given in Fig. 7.1.

1. At each sample time t, collect the new system output, y(t).
2. Update the polynomial model estimates of A, B using recursive least-squares on

    A y(t) = B u(t-1) + e(t)

3. Synthesise the controller polynomials F, G by solving the identity

    FA + q^-1 BG = T

where T is the desired closed-loop response. (You could use the dioph routine given in
Listing 7.2.)
4. Construct H using either Eqn. 7.12 or Eqn. 7.13.
Figure 7.6: Adaptive pole-placement control structure with RLS identification. An RLS stage identifies the unknown plant B(q^-1)/A(q^-1), giving estimates Â, B̂; together with the performance requirements T, the Diophantine equation is solved in the pole-placement controller design to give F, G & H for the two-degree-of-freedom controller (H, 1/F(q^-1) and G(q^-1)) acting between the setpoint r(t), input u(t) and output y(t).
5. Apply the control step

    F u(t) = -G y(t) + H r(t)

6. Wait out the remainder of the sample time & return to step 1.
An adaptive controller with RLS identification example

In this example we will attempt to control three different plants in the configuration shown in
Fig. 7.7. The open loop step tests of the plants are compared in Fig. 7.8. Of the three plants, G_1 is
stable and well behaved, G_2 has open-loop unstable poles, and G_3 has unstable zeros.
In practice we must estimate the process model and, if the plant changes, redesign our controller
such that we obtain the same closed loop response. The customer for this application
demands a slight overshoot for the controlled response, as shown in Fig. 7.9. We can translate this
requirement into the denominator of the closed-loop transfer function,

    T = 1 - 1.4685q^-1 + 0.6703q^-2
Listing 7.6 and Fig. 7.10 show the impressive results for this combined identification and adaptive
control simulation when using the three plants from Fig. 7.8. Note that the controller design
computation is done in the dioph.m routine given in Listing 7.2, while the identification is done
using the rls routine.
Here I assume a model structure where n_a = 5, n_b = 3, and I add a small amount of noise,
σ = 0.005, to keep the estimation quality good. A forgetting factor of λ = 0.95 was used since the
model changes were known a priori to be rapid.
[Figure 7.7 block diagram: an adaptive controller Gc drives whichever of the plant alternatives G1, G2 or G3 is currently switched in as the active plant; the setpoint r and output y close the loop.]
Figure 7.7: Control of multiple plants with an adapting controller. We desire the same closed-loop response irrespective of the choice of plant.
[Figure 7.8: two panels, the open-loop step responses of the plants G1(q^-1), G2(q^-1) and G3(q^-1), and the corresponding pole-zero map.]
Figure 7.8: Step responses of the 3 discrete open-loop plants (one of which is unstable), and the corresponding pole-zero maps. Note that G3 has zeros outside the unit circle.
[Figure 7.9: plot of the desired closed-loop step response, 1/T(q^-1).]
Figure 7.9: The desired closed loop response, 1/T.
Listing 7.6: Adaptive pole-placement control with 3 different plants
Gx = struct('b',[3 0.2 0.5],'a',poly([0.95 0.9 0.85])); % 3 different plants
Gx(2) = struct('b',[2,2,1],'a',poly([1.1 0.7 0.65]));
Gx(3) = struct('b',[0.5 0.2 1],'a',poly([0.6 0.95 0.75]));

na = 3; nb = 3;                   % order of polynomials, na, nb
a = randn(1,na); b = randn(1,nb); % model estimates
np = length([a,b]);               % # of parameters to be estimated
P = 1.0e4*eye(np);                % Large initial co-variance, P0 = 1e4*I
lambda = 0.95;                    % Forgetting factor, 0 < lambda < 1
param_est = [a,b]';

% Design a reasonable CLTF response with tau = 2 and zeta = 0.4
tau = 2; zeta = 0.4; Grc = tf(1,[tau^2 2*tau*zeta 1]);
Grd = c2d(Grc,1); T = cell2mat(Grd.den);

dt = 1; t = dt*[0:700]'; r = square(t/10); % sample time & setpoint
y = zeros(size(t)); u = y; noise = 0.005*randn(size(t));
Param = zeros(length(t),np); trP = zeros(size(t)); % plot holder
tG2 = 150; tG3 = tG2+200; k = 1; % Swap-over times for the different plants

for i=10:length(t)
  if t(i) == tG2, k=2; end % Swap over to model #2
  if t(i) == tG3, k=3; end % Swap over to model #3

  % Plant response
  y(i) = Gx(k).b*u(i-1:-1:i-length(Gx(k).b)) - ...
         Gx(k).a(2:end)*y(i-1:-1:i-length(Gx(k).a)+1);

  % Model identification
  x = [-y(i-1:-1:i-na)', u(i-1:-1:i-length(b))'];  % shift register
  [param_est,P] = rls(y(i),x',param_est,P,lambda); % update parameters, Listing 6.18.
  a = [1 param_est(1:na)']; b = param_est(na+1:na+nb)'; % extract estimates of A & B
  Param(i,:) = param_est'; trP(i) = trace(P);

  % Controller design
  [F,G] = dioph(a,b,T); % Solve Diophantine polynomials, Listing 7.2.
  %[F,G] = dioph_mtx(a,b,T); F(1)=[]; % Alternative Diophantine, Listing 7.3.
  H = sum(T)/sum(b);    % Get steady-state correct, H = T/B at q=1.

  % Control law, Fu = -Gy + Hr
  u(i) = H*r(i) - G*y(i:-1:i-length(G)+1) - F*u(i-1:-1:i-length(F));
end % for i

%% Check the results
G = tf(sum(T),T,1); % desired closed loop
yref = lsim(G,r);
Fig. 7.10 shows the impressive results of this adaptive controller. When the plant changes, unbeknown to the controller, the system identification updates the new model, and the controller consequently updates to deliver a consistent response.

If the pole-placement controller is doing its job, then the desired closed-loop response should match the actual output of the plant. These two trajectories are compared in Fig. 7.11, which shows an enlarged portion of Fig. 7.10. Here it can be seen that, apart from the short transients when the plant does change and the identification routine is struggling to keep up, the adaptive pole-placement controller satisfies the requirement to follow the reference trajectory.
[Figure 7.10: four stacked trends against time: output & setpoint, input, estimated parameters, and the trace of the covariance matrix.]
Figure 7.10: Adaptive pole-placement with RLS identification. After a period for the estimated parameters to converge (lower trends), the controlled output response (upper trend) is again reasonably consistent despite model changes. Results from Listing 7.6.
[Figure 7.11: trend comparing the output y with the reference y*.]
Figure 7.11: An enlarged portion of Fig. 7.10 showing the difference between the actual plant output y(t), and the reference output, y*(t).
7.6 Practical adaptive pole-placement

Some problems still remain before the adaptive pole-placement algorithm is a useful and practical control alternative. These problems include:

- Poor estimation performance while the system is in good control, and vice versa (poor control during the periods suitable for good estimation). This is known as the dichotomy of adaptive control, or dual control.
- Non-minimum phase systems possessing unstable zeros in B, which must not be cancelled by the controller.
- Dead time, where at least b0 = 0.

We must try to solve all these practical problems before we can recommend this type of controller.
An example of bursting

If our controller does its job perfectly well, then the output remains at the setpoint. Under these circumstances we do not have any identification potential left, and the estimated model starts to drift. The model will continue to drift until the controller deteriorates so much that there is a period of good excitation (and hence poor control), during which the model re-converges. This cycle then repeats.

Fig. 7.12 shows the experimental results from using an adaptive controller (in this case an adaptive LQR regulator) on the blackbox over a substantial period of time. The controller, sampling at T = 1 second, was left to run overnight for 10 hours (36,000 data points), and for a 3 hour period in the middle we had no setpoint changes, and presumably no significant disturbances. During this time, part of which is shown in Fig. 7.12, the adaptive controller oscillated into, and out of, good control. This is sometimes known as bursting.
7.6.1 Dealing with non-minimum phase systems

If we use the steady-state version for H,

    H ≜ T/B |q=1

then we do not cancel any zeros, and we avoid the problem of inadvertently inverting unstable model zeros. However we can achieve a better transient response if we cancel the zeros with the pre-compensator H,

    H = 1/B

giving the closed-loop transfer function,

    y(t) = (HB/T) r(t-1) = (1/T) r(t-1)

which looks fine until we investigate more closely what is likely to happen if B has zeros outside the unit circle.

In fact, we will have problems not only if the zeros are unstable, but even if they are stable but poorly damped. Fig. 7.13 shows the adaptive pole-placement response of a plant with stable, but
[Figure 7.12: two trends over hours 2.5-4: output & setpoint of the adaptive LQR of the blackbox (ΔT = 1.00 s), and the input, with RLS forgetting factor λ = 0.995.]
Figure 7.12: Bursting phenomena exhibited by the black box when using an adaptive controller. With no setpoint changes or external disturbances, the adaptive controller tends to oscillate between periods of good control and bad estimation and the reverse.
poorly damped zeros when using H = 1/B. In fact the zeros are 0.95 e^(±j130°), indicating that they are close to, but just inside, the unit circle. While the output response looks fine, it is the ringing behaviour of the input that should cause concern.
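The ringing is easy to reproduce numerically: the impulse response of 1/B for zeros at 0.95 e^(±j130°) oscillates while decaying only slowly, and this is exactly what gets superimposed on u(t). A short illustrative Python computation (mine, not the book's):

```python
import numpy as np

# Zeros at 0.95 exp(+-j 130 deg): stable but poorly damped
theta = np.deg2rad(130.0)
zB = 0.95 * np.exp(1j * theta)
B = np.real(np.poly([zB, np.conj(zB)]))  # B coefficients [1, b1, b2]

u = [1.0]                                # impulse response of 1/B(q^-1)
for k in range(1, 40):
    u.append(-sum(B[j] * u[k - j] for j in range(1, len(B)) if k - j >= 0))
print(u[:4])                             # ringing response of 1/B
```

The samples oscillate with a period of only a few samples and an envelope that shrinks like 0.95^k, so the "cancelling" part of the control input rings for a long time.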
[Figure 7.13: two trends: output & setpoint, and the input exhibiting ringing.]
Figure 7.13: Adaptive pole-placement with a poorly damped B polynomial. The poorly damped zeros cause the ringing behaviour in the input.
The results in Fig. 7.13 were obtained by using the modified controller given in Listing 7.7 in Listing 7.6.
Listing 7.7: The pole-placement control law when H = 1/B
if any(abs(roots(b))>1) % Use H = T/B|q=1 if any zeros of B are unstable.
  H = polyval(T,1)/polyval(b,1); % Simple scalar pre-filter, H = T/B|q=1.
  u(i) = H*r(i) - G*y(i:-1:i-ng) - F*u(i-1:-1:i-nf);
else % Alternatively, use dynamic compensation, H = 1/B.
  v(i) = 1/b(1)*(-b(2:end)*v(i-1:-1:i-nb+1)+sum(T)*r(i));
  u(i) = v(i) - G*y(i:-1:i-ng) - F*u(i-1:-1:i-nf); % control action
end
Suppose we have estimated a B polynomial possessing an unstable zero at, say, q^-1 = +2/3, or

    B = 1 - 1.5q^-1

and we desire T = 1 - 0.7q^-1. Then the closed-loop transfer function is

    y(t) = BH/(FA + q^-1 BG) r(t-1) = [q^-1 (1 - 1.5q^-1)/(1 - 0.7q^-1)] H r(t)
Now suppose we use a dynamic pre-compensator, H = 1/B,

    H = 1/(q^-1 (1 - 1.5q^-1))

which has a time-advance term, q^+1, meaning that the control law is a non-causal function of the setpoint. In other words, we need to know r(t+1) at time t. In fact, since we need to know only future values of the setpoint, and not the output, this type of control law is easily implemented.
The main drawback stems, however, not from the acausality, but from the ill-advised cancellation. In all practical cases we will experience some small numerical round-off error, such as when the pole is represented in a finite-word-length computer as 1.50001 rather than exactly 1.5. Consequently the pole will not completely cancel the zero,

    y(t) = [q^-1 (1 - 1.5q^-1)/(1 - 0.7q^-1)] x [q/(1 - 1.50001q^-1)] r(t)
                                                 (small error)

leaving an unstable controller.
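The effect of the imperfect cancellation can be demonstrated numerically. Filtering an impulse through (1 - 1.5q^-1)/(1 - 1.50001q^-1) should return just the impulse if the cancellation were exact; instead the 10^-5 coefficient error excites the unstable mode. (An illustrative Python computation, not code from the text.)

```python
# Impulse response of (1 - 1.5 q^-1)/(1 - 1.50001 q^-1). With exact coefficients
# the pole cancels the zero; the 1e-5 round-off residual grows like 1.50001^k.
e = [1.0] + [0.0] * 80    # unit impulse input
w = []
for k in range(len(e)):
    wk = e[k] - 1.5 * (e[k - 1] if k >= 1 else 0.0)    # numerator (the zero)
    wk += 1.50001 * (w[k - 1] if k >= 1 else 0.0)      # mis-matched unstable pole
    w.append(wk)
print(w[1], w[-1])        # tiny at first, enormous after 80 steps
```

The residual starts at 10^-5 but, multiplied by 1.50001 each sample, it dominates after a few dozen samples, which is exactly why we must not cancel unstable zeros.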
The remedy is that if we discover unstable modes in B, we simply do not cancel them, although we continue to cancel the stable modes. We achieve this by separating the B polynomial into two groups of factors; the stable roots, B+, and the unstable roots, B-,

    B ≜ B+ B-                                    (7.14)

and then set H = 1/B+ only. A simple MATLAB routine to do this factorisation is given in Listing 7.8 in section 7.6.2.

So given the model and control law,

    Ay(t) = Bu(t-1)
    Fu(t) = -Gy(t) + Hr(t)

we modify the pole assignment to

    AF + q^-1 BG = T B+                          (7.15)
so the closed-loop transfer function is

    y(t) = [HB/(T B+)] r(t-1) = [H B-/T] r(t-1)  (7.16)

We solve the modified pole-assignment problem, Eqn. 7.15, by noting that B+ must be a factor of F,

    F = B+ F1

so we solve the reduced problem using the Diophantine equation,

    A F1 + q^-1 B- G = T

in the normal manner.
We should do this when using RLS, because we can never be sure when we will identify an unstable B since the RLS is automatic. Even in a situation where we know, or reasonably suspect, a stable B, we may stumble across an unstable B while getting to the correct values.

As an example, we can apply the adaptive pole-placement controller where we cancel the stable B polynomial zeros, but not the unstable ones, using the three plants from page 331. Fig. 7.14 shows the result of this controller, where we note that the performance of plant #3 is also acceptable, exhibiting no ringing.
For diagnostic purposes, we can also plot the instantaneous poles and zeros of the identified model, and perhaps display this information as a movie. This will show how the poles and zeros migrate, and when zeros, or even poles, leap outside the unit circle. Fig. 7.15 shows these migrations from the simulation presented in Fig. 7.14. If you rotate the pole-zero map in Fig. 7.15(a) you may see the dreaded Värmland lynx.
7.6.2 Separating stable and unstable factors

When applying pole-placement in the case with unstable zeros, we need to be able to factor B(q) into stable and unstable factors. Listing 7.8 does this factorisation, and additionally it also rejects the stable, but poorly damped, factors given a user-specified damping ratio, ζ. (If you want simply stable/unstable, use ζ = 0.) Fig. 7.16 illustrates the region for discrete well-damped poles, where well-damped is taken to mean damping ratios greater than ζ = 0.15. This algorithm is adapted from [148, p238].
Listing 7.8: Factorising a polynomial B(q) into stable, B+(q), and unstable or poorly damped, B-(q), factors such that B = B+B- and B+ is defined as monic.
function [Bp,Bm] = factorBz(B,zeta)
% Factorise B(q) = B+(q)B-(q) where B+ holds the factors inside the unit
% circle and B-(q) those outside. The optional parameter zeta defines the
% cut-off. Use zeta = 0 for a simple inside/outside the unit circle split.
% Note B+(q), or Bp, is defined as monic.
if ~(exist('zeta') == 1)
  zeta = 0.15; % default reasonable cut-off is zeta = 0.15.
end
zeta = max(min(zeta,1),0); % Ensure zeta lies in [0, 1].
tol = 1e-6; r = roots(B);

% Now separate into poorly damped & good poles
mag_r = abs(r); rw = angle(r)/2/pi; % See equations in [148, p238].
[Figure 7.14: four trends against time: output & setpoint with the regions of G3(q^-1) (unstable zeros), G2(q^-1) and G1(q^-1) marked, the input, the parameter estimates (λ = 0.99), and the trace of the covariance matrix.]
Figure 7.14: Adaptive pole-placement control with H ≜ 1/B and an unstable B polynomial in Plant #3.
mz = exp(-2*pi*zeta/sqrt(1-zeta^2)*rw); % magnitude of z
idx = mag_r < mz;
Bp = poly(r(idx));       % Set B+(q) to contain all the zeros inside unit circle
Bm = poly(r(~idx))*B(1); % All the others
if norm(conv(Bp,Bm)-B) > tol % Check B = B+B-
  warning('Problem with factorisation')
end % if
return
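For the ζ = 0 case (a strict inside/outside the unit circle split), the factorisation reduces to sorting the roots of B. A Python sketch using an example polynomial of my own, B = -2(q - 1.25)(q - 0.4):

```python
import numpy as np

# Split B(q) into roots inside/outside the unit circle (zeta = 0 case)
B = np.array([-2.0, 3.3, -1.0])           # -2(q - 1.25)(q - 0.4)
r = np.roots(B)
Bp = np.poly(r[np.abs(r) < 1])            # B+ : stable zeros, monic
Bm = np.poly(r[np.abs(r) >= 1]) * B[0]    # B- : unstable zeros, carries the gain
print(Bp, Bm)                             # and conv(Bp, Bm) recovers B
```

Here B+ = q - 0.4 (monic, as required) and B- = -2(q - 1.25), and convolving the two factors recovers the original B, mirroring the consistency check inside factorBz.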
[Figure 7.15: (a) pole-zero map for an adaptive controller; (b) if you rotate the pole-zero map, you find a disturbing similarity to the Värmland lynx.]
Figure 7.15: A pole-zero map at each sampling instance of the adaptive pole-placement simulation given in Fig. 7.14. Note how at times not only the zeros, but also the poles, jump outside the unit circle, leading to an unstable identified model.
Figure 7.16: The boundary of the shaded area is the locus of constant damping ratio with ζ = 0.15. Inside this region, the poles are considered well damped. (Figure plotted with zgrid.)
7.6.3 Experimental adaptive pole-placement

Actually running an adaptive pole-placement scheme on a real plant, even a well-behaved one in the laboratory, is considerably more difficult than the simulation examples presented so far. Our desire is to test an adaptive pole-placement controller on the black-box to demonstrate the following:

1. To follow a specified desired trajectory (as quantified by the T polynomial),
2. under both the NORMAL and SLOW positions on the black box, and
3. to run stably for at least 10 minutes following both disturbances and servo changes.

The first design decision is to choose a desired response (closed-loop poles) and suitable sampling interval. We shall select a second-order prototype response with a time constant of τ = 2.1
seconds and some overshoot, ζ = 0.6. With a sampling time of T = 0.6 seconds, the desired discrete closed-loop transfer function is

    Gcl*(s) = 1/(4.41s^2 + 2.52s + 1)

    Gcl*(q^-1) = (0.03628 + 0.03236q^-1)/(1 - 1.641q^-1 + 0.7097q^-2)

from which it follows that T = 1 - 1.641q^-1 + 0.7097q^-2.
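These coefficients can be checked by mapping the continuous prototype poles through z = e^{sT}; the following Python snippet (my own verification, numpy only, not the book's code) reproduces T to the quoted precision:

```python
import numpy as np

tau, zeta, Ts = 2.1, 0.6, 0.6
# Continuous prototype poles of 1/(tau^2 s^2 + 2 tau zeta s + 1)
s = (-zeta + 1j * np.sqrt(1 - zeta**2)) / tau
z = np.exp(s * Ts)                       # discrete pole via z = exp(s*Ts)
T = np.real(np.poly([z, np.conj(z)]))    # desired denominator T
print(np.round(T, 4))                    # close to [1, -1.641, 0.7097]
```

The pole magnitude is e^{-zeta*Ts/tau} ≈ 0.84, so the dominant closed-loop poles are comfortably inside the unit circle, consistent with the mild overshoot demanded.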
Fig. 7.17 shows the results from one experiment where the characteristics of the plant are changed at t = 128 by manually switching over from 7 cascaded low-pass filters to 9. This increases the effective time constant of the plant.

It is important to verify that the output from the blackbox did in fact follow the intended trajectory once the model parameters have converged. An enlarged portion of Fig. 7.17 is repeated in Fig. 7.18, comparing the intended trajectory, y* (dashed line), with the actual trajectory, y. This shows that the response converged to the desired trajectory even after the plant subsequently changed. Fig. 7.17 shows clearly that the shape of the input must vary considerably to force the same output if the plant itself changes.
7.6.4 Minimum variance control with dead time

The minimum variance control formulation of section 7.4.1 is not very useful, not least due to its inability to handle any dead time and unstable process zeros. Even without the dead time, practical experience has shown that this control is too vigorous and tends to be over-demanding, causing excessive manipulated variable fatigue. Moreover, when using discrete time models, delay between input and output is much more common, and typically the deadtime is at least one sample period. The best we can do in these circumstances is to make the best prediction of the output we can for d sample times ahead, and use this prediction in our control law as done above. This introduces the general concept of minimum variance control which combines prediction with control. Further details are given in [22, pp367-379].

Suppose we start with the ARMAX canonical model written in the forward shift operator q,

    A(q) y_k = B(q) u_k + C(q) e_k               (7.17)

and we assume that A and C are monic. We can always divide out the coefficients such that A is monic, and we can choose the variance of the white noise term so that C is monic. For this simplified development we will also need to assume that all the zeros of both C and B are inside the unit circle. The pole excess of the system, d, is the difference in the order of the A and B polynomials and is equivalent to the system time delay. The output d steps ahead in the future is dependent on the current and past control actions, u, which are known, and the current, past and future disturbances, the latter of which are not known. To avoid using the unknown disturbance values we need to make an optimal prediction.
Optimal prediction

If we have a process driven by white noise e,

    z_k = [C(q)/A(q)] e_k                        (7.18)

then the minimum variance predictor over d steps is

    ẑ_(k+d|k) = [q G(q)/C(q)] z_k                (7.19)
[Figure 7.17: four trends against time (sec), with the Normal→Slow switch marked: output & setpoint of the adaptive pole-placement with RLS (ΔT = 0.6), the input, the parameter estimates (λ = 0.99, na = 4, nb = 3), and the trace of the covariance matrix.]
Figure 7.17: Adaptive pole-placement of the black-box showing consistent output response behaviour despite the plant change when the switch was manually activated at t = 128 seconds.
Plot (A): black-box output (solid), intended trajectory (dashed) and setpoint (dotted). (Refer also to Fig. 7.18 for an enlarged version.)
Plot (B): Controller input to the plant. Plot (C): Plant parameter estimates. Plot (D): Trace of the covariance matrix.
[Figure 7.18: enlarged trend (60-280 sec) of output & setpoint, comparing the actual y, the setpoint and the intended y*, with the Normal→Slow switch marked.]
Figure 7.18: A comparison of the intended trajectory, y* (dashed line), with the actual trajectory y (solid line) for the black box from Fig. 7.17. Even after the plant change, the actual response gradually converges to the desired trajectory.
where G(q) is defined from

    q^(d-1) C(q) = A(q) F(q) + G(q)              (7.20)

We can calculate the quotient F and remainder G polynomials in Eqn. 7.20 from polynomial division in MATLAB using the deconv function, or alternatively, one could calculate them by equating the coefficients in Eqn. 7.20.
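The same quotient/remainder computation is available in Python via np.polydiv; the A, C and d below are illustrative values of my own choosing, not taken from the text:

```python
import numpy as np

# Solve q^(d-1) C(q) = A(q) F(q) + G(q) by polynomial long division
A = [1.0, -1.7, 0.7]                        # A(q), monic (example values)
C = [1.0, -0.5, 0.1]                        # C(q), monic, same degree as A
d = 2                                       # pole excess (deadtime)
qC = np.concatenate((C, np.zeros(d - 1)))   # q^(d-1) C(q)
F, G = np.polydiv(qC, A)                    # quotient F(q), remainder G(q)
print(F, G)
```

For these numbers the quotient is F(q) = q + 1.2 (degree d - 1, as expected) and the remainder is G(q) = 1.44q - 0.84, and one can confirm that A*F + G reproduces q^(d-1)C exactly.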
Minimum variance control law

Once we can make an optimal prediction using the formulation of Eqn. 7.20, we can predict the output d sample times into the future and then calculate the control action required to equate this with the setpoint. This will give the desired minimum variance control law as

    u_k = -[G(q)/(B(q) F(q))] y_k                (7.21)

The closed-loop formulation obtained by combining Eqn. 7.17 and Eqn. 7.21 is

    A y_k = -B [G/(BF)] y_k + C e_k
    AF y_k = -G y_k + CF e_k
    y_k = [CF/(AF + G)] e_k = q^-(d-1) F e_k     (7.22)

where the last equation is simplified using Eqn. 7.20. Eqn. 7.22 indicates that the closed-loop response is a moving average process of order d-1. This provides us with a useful check in that the covariance of the error will vanish for delays larger than d-1.
Listing 7.9 designs the F(q) and G(q) polynomials required in the control law, Eqn. 7.21, using deconvolution. Note this routine uses stripleadzeros, given in Listing B.3 in Appendix B, to strip any leading zeros from the B(q) polynomial.

Listing 7.9: Minimum variance control design
function [F,G,BF] = minvar_design(A,B,C)
% Design a minimum variance controller for plant B(q)/A(q) with noise
% polynomial C(q). Control law, Eqn. 7.21, is u(k) = -G(q)/(B(q)F(q)) y(k)

B = stripleadzeros(B); % Remove any leading zeros from the B polynomial
n = length(A);         % order of plant
if length(C) ~= n
  C = [C,zeros(1,n-length(C))]; % Force C(q) the same length as A(q)
end % if
d0 = length(A) - length(B); % Time delay is d0 = na - nb

qd0 = [1,zeros(1,d0-1)]; % Construct q^(d0-1)
qd0C = conv(qd0,C);      % Construct q^(d0-1) C(q)
[F,G] = deconv(qd0C,A);

BF = conv(B,F); % Construct B(q)F(q)
return
Unstable process zeros

As in the adaptive pole-placement case outlined in section 7.6.1, if we have a non-minimum phase plant with unstable zeros, then the resulting controller given by Eqn. 7.21 will be unstable, since essentially we are attempting to cancel right-half plane zeros with right-half plane poles. Again the solution is to factor B into two factors, one containing all the stable modes, B+, and one containing all the unstable modes, B-, and using only B+ in the control law. (See Listing 7.8 to do this.) The F and G polynomials are obtained from solving a Diophantine equation which is a generalisation of Eqn. 7.20. Refer to [22, pp380-385] for further details.
Moving average example

Consider the following plant with a noise term,

    (q - 0.9)(q - 0.8)(q - 0.2) y_k = -2(q - 1.25)(q - 0.4) u_k + (q^3 - 0.7q^2 + 0.14q - 0.2) e_k

noting the unstable plant zero at q = 1.25.
Plant = zpk([1.25 0.4],[0.9 0.8 0.2],-2,1); % Plant poles & zeros
Ptf = tf(Plant);
B = stripleadzeros(Ptf.num{:});
A = Ptf.den{:};
C = [1 -0.7 0.14 -0.2]; % Noise polynomial
var_e = 1;              % Variance of noise
The step response and pole-zero plot in Fig. 7.19 illustrate the potential difficulties with this plant.

We can design a moving average controller for this NMP plant as follows.
B = stripleadzeros(B); % Remove the leading zeros, (if any)
[Bp,Bm] = factorBz(B); % Factorise B(q) into well-damped and unstable zeros
n = length(A);         % order of plant
C = [C,zeros(1,n-length(C))]; % Back-pad with zeros to get same length as A(q) (if necessary)
[Figure 7.19: pole-zero map and step response of the example plant.]
Figure 7.19: A non-minimum phase plant with an unstable zero which causes an inverse step response.
d = length(A) - length(Bp); % # of process zeros to cancel

% Now solve Diophantine eqn AR + BS = q^(d-1)*C*B+ to compute control law
qd = [1,zeros(1,d-1)];  % Construct q^(d-1)
qdCBp = mconv(qd,C,Bp); % Evaluate q^(d-1)*C*B+ using (multiple) convolution
[R,S] = dioph_mtx(A,B,qdCBp) % Listing 7.3.
[R1, R1rem] = deconv(R,Bp)   % Check that R/B+ has no remainder
G = S; BF = R; F = R1; % relabel

B = [zeros(1,length(A)-length(B)), B]; % Make B(q) same length as A to prevent AE loops
sim('sminvarctrl',[0 1e5]) % Now run the Simulink simulation
variance = var(ey);
varR1 = var_e*sum(R1.*R1);
disp([variance, varR1]) % Note that var{y} is approximately var{F e}
Note that this controller does not attempt to cancel both process zeros, since that would result in an unstable controller giving rise to unbounded internal signals. Instead we cancel only the stable process zero, giving rise to a moving average controller.

In the example shown in Fig. 7.20, F(q) = q + 5.0567, which indicates that the expected variance of the output is σy² = (1² + 5.06²) = 26.6 which, provided we take enough samples as shown in Fig. 7.20(b), it is.
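This variance prediction follows because the closed-loop output is white noise filtered through the FIR polynomial F, so its variance is the sum of the squared coefficients. A quick Monte-Carlo check in Python (my own, not from the text):

```python
import numpy as np

F = np.array([1.0, 5.0567])           # MA coefficients of F(q) = q + 5.0567
rng = np.random.default_rng(0)
e = rng.standard_normal(200_000)      # unit-variance white noise
y = np.convolve(e, F, mode='full')[:len(e)]
print(np.var(y), np.sum(F**2))        # both approximately 26.6
```

The sample variance of y settles on the theoretical value sum(F^2) ≈ 26.57 only for long records, which is exactly the point made by Fig. 7.20(b).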
7.7 Summary of adaptive control

Adaptive control is where the controller adjusts in response to changing external conditions. This is suitable for plants that are time varying, severely nonlinear, or perhaps just unknown at the controller design stage. The general adaptive controller has two parts; an identification part and a controller design part, although in some implementations these two components are joined together. The main styles of adaptive control are:

- Where the controller parameters are some static function of the output. The simplest example is gain scheduling. This has the advantage that it is easy to implement, but it is really only open-loop adaption, and may not perform as expected if the conditions change in an unpredictable manner.

- Where the classical PID controller is augmented with a self-tuning option. This can be based on the Ziegler-Nichols design, using a relay feedback to extract the ultimate frequency and gain.
[Figure 7.20: (a) a SIMULINK block diagram of the minimum variance moving average controller, built from the blocks 1/A(z), B(z)/1, C(z)/1, 1/F(z), the min var controller G(z)/den(z), a gain, an adder, a random number source, a scope and a To Workspace block (ey); (b) the variance of the output against experiment duration, converging to σy² = 26.6.]
Figure 7.20: Minimum variance moving average control. If the controller is designed correctly, then the output variance should be approximately equal to the variance of the noise signal filtered through F, which in this case it is, provided we take enough samples.
- Where the parameters of the plant transfer function model are estimated online using the techniques described in chapter 6, and the controller is designed to control this model. If the controller is approximately the inverse of the process model, this leads to minimum variance control, but it is usually too severe, and even tends to instability, for many actual applications. A better alternative is adaptive pole-placement, where reasonable closed-loop poles are specified by the control engineer, and the controller is varied to provide this performance as the identified model changes.
As with all high performance control schemes, there are some open issues with adaptive controllers. For the simple schemes presented here we should be aware of the following issues:

1. The structure (order and amount of deadtime) of the plant must be known beforehand, and typically is assumed constant.

2. Minimum variance control, while simple to design, attempts to force the process back to the desired operating point in one sample time. This results in overly severe control action.

3. There are potential numerical problems when implementing recursive estimators, particularly in fixed point or with short word lengths.

4. If dead time is present, and usually dead time of at least one sample time is always present, then the controller will be unrealisable since future values of y are needed to calculate current values of u. In this case, a sub-optimal controller is used where the dead time is simply ignored, or the future output is predicted.
However the real problem of adaptive control is that when the plant is in control, there is little or no information going to the estimator. Hence the controller parameters begin to oscillate wildly. This is not normally noticed by the operator, because the process is in control. However as soon as a disturbance enters the plant, the controller will take extra action attempting to correct for the upset, but now based on the wildly inaccurate plant model. During this time of major transients, however, the widely changing process conditions provide good information to the estimator, which soon converges to good process parameters, which in turn subsequently generate appropriate controller parameters. The adaptive controller then brings the process back under control and the loop starts all over again. This results in alternating periods of very good control and short periods of terrible control. One solution to this problem is to turn off the estimator whenever the system is at the setpoint.

In summary, when applying adaptive control, one wants:

- a constant output for good control, but
- a varying input and output for good estimation (which is required for good control)!

This conflict of interest is sometimes called the dichotomy of adaptive control.
Problems

Problem 7.1
1. Construct a SIMULINK diagram to simulate the general linear controller with 2 degrees of freedom, that is, with the R, S and T controller polynomials. Follow [18, Fig 3.2, p93].
   Hint: Use the discrete filter blocks in z^-1, since these are the only blocks in which you can construct S(z^-1)/1.

2. Simulate the controlled response in problem 1 using the polynomials from [18, Ex 3.1], repeated here,

       B(q)/A(q) = (0.1065q + 0.0902)/(q^2 - 1.6065q + 0.6065)

   and

       R(q) = q + 0.8467,  S(q) = 2.6850q - 1.0319,  T(q) = 1.6530q

3. Verify that the closed loop is the same as the expected model,

       BT/(AR + BS) = Bm/Am

   in the example 3.1, [18, Ex 3.1, p97-98].
   Hint: You may like to use the MATLAB symbolic toolbox.
4. Construct an m-file to do the minimum degree pole-placement design.
   (a) Do an "all zeros cancelled" version.
   (b) Do a "no zeros cancelled" version.
   (c) Do a general version, perhaps with some automated decider of which zeros to cancel.

5. For the minimum variance controller we need to be able to compute the variance of the output of a FIR filter, refer [18, chapter 4].
   (a) What is the theoretical output variance of a FIR filter with input variance of σe²?
       i. In SIMULINK you have the choice of the DSP format z^-1 or the transfer function version z. What would be the natural choice for q?
       ii. Test your answer to part (1) by constructing a simulation in SIMULINK.
   (b) Approximately how many samples do you need to obtain 2, 3 and 4 decimals accuracy in your simulation?
   (c) Using the hist function, look at the distribution of both the input, e, and output, y. Is the output Gaussian, and how could you validate that statistically?
6. For stochastic control and for regulatory control topics we want to be able to elegantly simulate models of the form

       A(q)y(t) = B(q)u(t) + C(q)e(t)            (7.23)

   where A, B, C are polynomials in the forward shift operator q. Note that the deadtime is defined as deg(A) - deg(B).
   (a) For the standard model in Eqn. 7.23, what are the constraints on the polynomials A, B, C?
       i. Choose sensible polynomials for A, B, C with at least one sample of deadtime. Create a model object from the System Identification toolbox using the idpoly command.
       ii. Simulate this model either using idmodel/sim or the idsim command. What is the difference between these two commands? Choose a square wave for u(t) and, say, random noise for e(t).
   (b) Create a SIMULINK model and compare the simulated results with the above. Use the same random input.
   (c) Write a raw MATLAB script file to run the simulation and compare with the two alternatives above. (They should give identical results.)
Chapter 8

Multivariable controller design

As our engineering endeavors became more sophisticated, so did our control problems. The oil crises of the mid 1970s gave impetus for process engineers to find ways to recycle energy and materials. Consequently the plants became much harder to control because they were now tightly coupled to other plants. As the plants became multivariable, it made sense to consider multivariable controllers.

If we have a dynamic system with say 3 states and 3 inputs, then, assuming that our system is controllable, we can feed back the three states directly to the three inputs: state 1 to input 1, state 2 to input 2, etc. This is termed de-centralised control, and our first dilemma is which state should be paired with which input. We then have the problem of tuning the three loops, especially if there are some interactions.
Alternatively we could feed back each state to all the inputs, which is termed centralised control. In matrix form, the two alternative control laws look like

        | x x x |               | x 0 0 |
    u = | x x x | x    or   u = | 0 x 0 | x
        | x x x |               | 0 0 x |
       (centralised)          (de-centralised)
Exploiting the off-diagonals of the gain matrix K tends to be more efcient and is often what is
meant by multivariable control. While centralised or multivariable control structures typically
give better results for similar complexity, they are harder to design and tune since we have more
controller parameters (9 as opposed to 3) to select. Not surprisingly, decentralised control re-
mains popular in industry, none the least for the following reasons given by [89]:
1. Decentralised control is easier to design and implement than a full multivariable controller,
and also requires fewer tuning parameters.
2. They are easier for the operators to understand and possibly retune.
3. Decentralised controllers are more tolerant of manipulated or measurement variable failure.
4. Startup and shutdown operating policies are easier with decentralised controllers, since the
control loops can be gradually brought into service one by one.
Despite these application style advantages of decentralised controllers, it is useful to study what
is possible with a full multivariable controller. This is the purpose of this chapter.
Finally, a one-step optimal controller applicable to nonlinear processes, called GMC (Generic Model Control), is presented in §8.5. Exact feedback linearisation, which is a generalisation of the GMC controller, follows in §8.6.
8.1 Controllability and observability
This section will introduce in a descriptive manner the concepts of controllability, observability and other abilities. For a more formal treatment see for example [150, 94, p699–713] or [148, p627–62]. The criterion of controllability is given first in §8.1.1, followed by the related property of observability (which is the dual of controllability) in §8.1.2. These conditions are necessary for some of the multivariable controller and estimator designs that follow. Methods to compute the controllability matrix, and to establish the condition, are given in §8.1.3.
8.1.1 Controllability
When designing control systems, we implicitly assume that we can, at least in principle, influence the system to respond in some desired manner. However, it is conceivable, perhaps due to some structural deficiency, that we cannot independently manipulate all the states of the system. The controllability criterion tells us if this is the case.
Formally defined, a system is controllable if it is possible to transfer from any arbitrary initial state x_0 to any other desired state x* in a finite time by using only an uncompensated control signal u.
Note that while the transformation is allowed to take place over more than one sample time, it must be achieved in a finite time. In addition, even if the above controllability requirement is satisfied, you may not be able to keep the state x at the desired position; the guarantee is only that x could pass through x* at some time in the future. Nevertheless, it is instructive to know if a given system is controllable or not. Some care is needed with these terms. Åström and Wittenmark, [22, p127], define reachability as I have defined controllability above, and then define controllability as a restricted version where x* = 0. These two definitions are equivalent if Φ is invertible.
First consider a continuous dynamic system that has a diagonal A matrix and an arbitrary B matrix,

$$\dot{x} = \begin{bmatrix} \times & 0 & \cdots & 0 \\ 0 & \times & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \times \end{bmatrix} x + Bu$$
where a × indicates the presence of a non-zero value. Now suppose u(t) = 0 for all time; then the steady state is x_ss = 0. Since A is a diagonal matrix, there is no interaction between the states. This means that if one particular state, x_i, is 0 at some time t, then it will always remain at zero since u = 0. Now consider the full system (u ≠ 0), but suppose that the entries in the ith row of B are all zero, such as the second row in the following control matrix,
$$B = \begin{bmatrix} \times & \times & \cdots & \times \\ 0 & 0 & \cdots & 0 \\ \times & \times & \cdots & \times \\ \vdots & & & \vdots \end{bmatrix}$$
Then if x_i happens to be zero, it will remain at zero for all time, irrespective of the value of the manipulated variable u. Qualitatively this is due to the following two points:
1. No manipulated variable can modify the value of x_i because they are all multiplied by zero.
2. No other states can influence x_i because in this special case the system matrix A is diagonal.
Hence if x_i is zero, it will stay at zero. The fact that at least one particular state is independent of the manipulated vector means that the system is uncontrollable. We cannot influence (and hence control) all the states by only changing the manipulated variables.
In summary, a system will be uncontrollable if the system matrix A is diagonal and any one or more rows of B are all zero. This is sometimes known as Gilbert's criterion for controllability.
If the matrix A is not diagonal, then it may be transformed to a diagonal form (or Jordan block form), and the above criterion applied. We can easily transform the system to a block diagonal form using the canon function in MATLAB. However, a more general way to establish the controllability of the n × n system is to first construct the n × (mn) controllability matrix C,
$$\mathcal{C} \stackrel{\text{def}}{=} \left[ B \;\vdots\; AB \;\vdots\; A^2 B \;\vdots\; \cdots \;\vdots\; A^{n-1} B \right] \tag{8.1}$$
and then compute the rank of this matrix. If the rank of C is n, then all n system modes are controllable. This is proved in problem [148, A-6-6, p753]. The discrete controllability matrix is defined in a similar manner;
$$\mathcal{C} \stackrel{\text{def}}{=} \left[ \Delta \;\vdots\; \Phi\Delta \;\vdots\; \Phi^2\Delta \;\vdots\; \cdots \;\vdots\; \Phi^{n-1}\Delta \right] \tag{8.2}$$

and this must also be of rank n for the discrete system to be controllable.
An often less restrictive controllability requirement is output controllability. This is where we wish only to control the outputs y, rather than the full state vector x. The output controllability matrix C_y is defined as

$$\mathcal{C}_y \stackrel{\text{def}}{=} \left[ CB \;\vdots\; CAB \;\vdots\; CA^2B \;\vdots\; \cdots \;\vdots\; CA^{n-1}B \right] \tag{8.3}$$

Output controllability is given in Example A-9-11 of Ogata, [150, p756], and [148, p636].
The rank of a matrix is defined as the number of independent rows (or columns) in a matrix, and it is a delicate numerical computation on a finite precision computer. The presence of a non-zero determinant is not a good indication of rank because the value of the determinant scales so poorly. By the same argument, a very small determinant, such as say 10⁻⁶, is also not a reliable indication of dependence. Numerically, the rank is computed reliably by counting the number of non-zero singular values, or the number of singular values above some reasonable threshold value. See also §8.1.3.
The controllability matrix for the submarine system in §2.8.2 is

$$\mathcal{C} = \left[ B \;\vdots\; AB \;\vdots\; A^2B \right] = \begin{bmatrix} 0 & 0.095 & 0.0192 \\ 0.095 & 0.0192 & 0.0048 \\ 0.072 & 0.0283 & 0.0098 \end{bmatrix}$$

which has a rank of 3. Hence the submarine system is controllable.
8.1.2 Observability
Observability is a property related to controllability.
A system is observable if one can evaluate x_0 by recording only the output y_t over a finite time interval. In other words, the system is observable if all the states eventually affect the outputs.
Note that the observation is allowed to take place over more than one sample time. Once we have back-calculated x_0, we can use the history of the manipulated variable u_t to forward-calculate what the current state vector x_t is. It is clear that this condition of observability is important in the field of estimation.
As a trivial example, suppose we had a 3-state system and we have three outputs which are related to the states by

$$y = Cx = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} x$$

Here the outputs (measurements) are identically equal to the states (C = I). It is clear that we can evaluate all the states simply by measuring the outputs over one sample time and inverting the measurement matrix,

$$x = C^{-1}y = Iy = y$$

since C⁻¹ = I in this trivial example. This system is therefore observable. Suppose, however, we only measure two outputs, where C is the non-square matrix

$$y = Cx = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} x$$
Now, obviously, if we recorded only one output vector (2 measurements), we could not hope to reconstruct the full 3-element state vector. But the observability property does not require us to estimate x in one sample time, only in a finite number of sample times. Hence it may be possible for a system with fewer outputs than states to still be observable.
To get a feel for the observability of a system, we will use a similar line of argument to that used for the controllability. Suppose we have a diagonal system and an arbitrary output matrix,

$$\dot{x} = \begin{bmatrix} \times & 0 & \cdots & 0 \\ 0 & \times & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \times \end{bmatrix} x \tag{8.4}$$

$$y = Cx \tag{8.5}$$
Note that any system can be converted into the above form, and for the observability criterion we are not interested in the manipulated variable effects. Now if any of the columns of the C matrix are all zeros, then this is equivalent to saying that a particular state will never influence any of the outputs, and since the system matrix is diagonal, that same state is never influenced by any of the other states. Therefore none of the outputs will ever reveal that state variable. Note the symmetry of this criterion to the controllability criterion.
Formally, for a continuous time system (Eqn. 2.37), the n-state system is observable if the np × n observability matrix O is of rank n, where

$$\mathcal{O} \stackrel{\text{def}}{=} \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}, \qquad \text{or in the discrete case} \qquad \mathcal{O} \stackrel{\text{def}}{=} \begin{bmatrix} C \\ C\Phi \\ C\Phi^2 \\ \vdots \\ C\Phi^{n-1} \end{bmatrix} \tag{8.6}$$

Some authors, perhaps to save space, define O as a row concatenation of matrices rather than stacking them in a column as done above.
Problem 8.1
1. What are the dimensions of the observability and controllability matrices? What is the maximum rank they can achieve?
2. Establish C and O for the Newell & Lee evaporator.
3. Examples B-9-13, B-9-10, B-9-11, [150, p771]
4. Examples B-6-6, B-6-7, [148, p805]
5. [155] gives some alternative mathematical definitions for controllability and observability: A system is controllable if and only if

$$\operatorname{rank}\left[ B, \; A - \lambda_i I \right] = n, \quad \text{for } i = 1, \ldots, n \tag{8.7}$$

and observable if and only if

$$\operatorname{rank}\left[ C^\top, \; A^\top - \lambda_i I \right] = n, \quad \text{for } i = 1, \ldots, n \tag{8.8}$$

where λ_i are the eigenvalues of A.
Construct a MATLAB m-file implementation using these algorithms. Test your routines on various systems, perhaps using the rmodel command.
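The rank conditions of Eqns 8.7 and 8.8 are sometimes known as the PBH (or Hautus) test. As a sketch of what such an implementation would compute, here is a Python/NumPy version (an assumption for illustration; the problem itself asks for a MATLAB m-file):

```python
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """PBH test, Eqn. 8.7: rank[B, A - lam*I] = n for every eigenvalue lam."""
    n = A.shape[0]
    return all(np.linalg.matrix_rank(np.hstack([B, A - lam * np.eye(n)]), tol=tol) == n
               for lam in np.linalg.eigvals(A))

def is_observable(A, C, tol=1e-9):
    """PBH test, Eqn. 8.8: the controllability test applied to the dual (A', C')."""
    return is_controllable(A.conj().T, C.conj().T, tol)

# The diagonal system with a zero row in B fails the test...
A = np.diag([-1.0, -2.0])
print(is_controllable(A, np.array([[1.0], [0.0]])))   # False
# ...while a non-zero row in B passes it.
print(is_controllable(A, np.array([[1.0], [0.5]])))   # True
```

Note the duality: observability is tested by running the controllability test on the transposed system, mirroring Eqn. 8.8.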
8.1.3 Computing controllability and observability
The MATLAB functions ctrb and obsv create the controllability and observability matrices respectively, and the rank function can be used to establish their rank. While the concepts of observability and controllability apply to all types of dynamic systems, Eqns 8.1 and 8.6 are restricted to linear time invariant systems. Read problem A-6-1 [148, p745] for a discussion on the equivalent term reachable.
The algorithms given above for computing controllability or observability (Eqns 8.1 and 8.6, or the discrete equivalents) seem quite straightforward, but are in fact often beset by subtle numerical errors. There exist other equivalent algorithms, but many of these are also susceptible to similar numerical problems. Issues in reliable numerical techniques relevant to control are discussed in a series of articles collected by [157].
An example quoted in [155] illustrates common problems to which these simple algorithms can fall prey. Part of the problem lies in the reliable computation of the rank of a matrix, which is best done by computing singular values.
Problem 8.2 Write a MATLAB m-file to compute the exponential of a matrix using the Taylor series formula,

$$e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!}$$

Test your algorithm on

$$A = \begin{bmatrix} -49 & 24 \\ -64 & 31 \end{bmatrix}$$

and comment on the expected accuracy of your result.
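This test matrix is a well-known cautionary example: its eigenvalues are −1 and −17, so the Taylor terms grow large before they decay, and the naive sum suffers heavy cancellation. A Python/NumPy sketch (an illustrative assumption; the problem asks for a MATLAB m-file) demonstrates the accuracy loss at t = 1:

```python
import numpy as np

def expm_taylor(A, nterms=150):
    # Naive Taylor sum: e^A = I + A + A^2/2! + ...  (Problem 8.2 with t = 1)
    E = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, nterms):
        term = term @ A / k           # term is now A^k / k!
        E = E + term
    return E

A = np.array([[-49.0, 24.0], [-64.0, 31.0]])

# Reference answer via the eigendecomposition (A has eigenvalues -1 and -17)
lam, V = np.linalg.eig(A)
E_exact = (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

err = np.max(np.abs(expm_taylor(A) - E_exact))
# The intermediate terms A^k/k! peak at around 1e7 in size before the sum
# settles to entries of order one, so many significant digits are lost.
print(err)   # around 1e-8: far worse than machine epsilon
```

The series does converge, but the answer carries roughly half the digits one might naively expect, which is exactly the kind of "dubious way" the problem invites you to comment on.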
Problem 8.3
1. When using good (in the numerical reliability sense) software, we tend to forget one fundamental folklore of numerical computation. This folklore, repeated in [157, p2], states:

  If an algorithm is amenable to easy hand calculation, then it is probably a poor method if implemented in the finite floating point arithmetic of a digital computer.

With that warning in mind, calculate the singular values of

$$A = \begin{bmatrix} 1 & 1 \\ \epsilon & 0 \\ 0 & \epsilon \end{bmatrix}$$

where ε is a real number such that |ε| < √ε_mach, where ε_mach is the machine precision. (Note that this effectively means that 1 + ε² = 1, whereas 1 + ε > 1, when executed on the computer.) Calculate the singular values both analytically, and using the relationship between singular values and the eigenvalues of AᵀA.
2. Calculate the controllability of the system

$$A = \begin{bmatrix} 1 & \epsilon \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ \epsilon \end{bmatrix}$$

with |ε| < √ε_mach. Solve this both analytically and numerically.
8.1.4 State reconstruction
If a system is observable, we should in principle be able to reconstruct the states based only on the current, and perhaps historical, outputs. So, supposing we have collected the following input/output data,

  time (k)    u(k)    y(k)    x1(k)    x2(k)
     0          4       4       ?        ?
     1          5       4       ?        ?
     2                 36       ?        ?
we should be able to reconstruct the states assuming a model of the form

$$x_{k+1} = \begin{bmatrix} 0 & 1 \\ 3 & 2 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_k, \qquad
y_k = \begin{bmatrix} 4 & 0 \end{bmatrix} x_k$$
Note that we cannot simply invert the measurements back to the states, partly because the measurement matrix is not square.
However, we can see that y(k) = 4x₁(k) from the measurement relation, so we can immediately fill in some of the blank spaces in the above table in the x₁(k) column: x₁(0) = 1, x₁(1) = 1, x₁(2) = 9. From the model relation, we can see that x₁(k+1) = x₂(k), so we can set x₂(0) = x₁(1) = 1. Now we have established the initial condition

$$x(0) = \begin{bmatrix} 1 & 1 \end{bmatrix}^\top$$

and with the subsequent manipulated variable history u(t), we can calculate x(t) simply using the recursive formula, Eqn. 2.38. This technique of estimation is sometimes called state reconstruction and assumes a perfect model and no measurement noise. Furthermore, the reconstruction takes, at most, n input-output sample pairs.
As a check, we can construct the observability matrix,

$$\mathcal{O} = \begin{bmatrix} C \\ C\Phi \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 4 & 0 \end{bmatrix} \\[2pt] \begin{bmatrix} 4 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 3 & 2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$$

which clearly has a rank of 2, indicating that the state reconstruction is in fact possible.
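The hand calculation above can be verified numerically. This short Python/NumPy sketch (an illustrative assumption; the book works in MATLAB) rolls x(0) = [1, 1]ᵀ forward through the model and confirms that it reproduces the recorded outputs, including y(2) = 36:

```python
import numpy as np

Phi = np.array([[0.0, 1.0], [3.0, 2.0]])
Delta = np.array([[0.0], [1.0]])
C = np.array([[4.0, 0.0]])

x = np.array([[1.0], [1.0]])        # the reconstructed initial state x(0)
u = [4.0, 5.0]                      # recorded inputs u(0), u(1)

y = []
for k in range(3):
    y.append((C @ x).item())        # y(k) = 4*x1(k)
    if k < 2:
        x = Phi @ x + Delta * u[k]  # roll the state forward one sample

print(y)   # [4.0, 4.0, 36.0], matching the recorded output column
```

The forward recursion gives x(1) = [1, 9]ᵀ and x(2) = [9, 26]ᵀ, consistent with the table entries deduced by hand.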
We can generalise the state reconstruction scheme outlined above, following [19, p315], for any observable discrete plant

$$x_{k+1} = \Phi x_k + \Delta u_k, \qquad y_k = C x_k$$

where we wish to reconstruct the current state, x_k, given past values of the input and output, u_k, u_{k−1}, … and y_k, y_{k−1}, ….
Our basic approach will be to use the outputs from the immediate past n samples to develop linear equations for the n unknown states at the point n samples in the past. We will then use the knowledge of the past inputs to roll this state forwards to the current sample k. For the present development, we will just consider SISO systems.
Listing the immediate previous n values finishing at the current sample k gives

$$\begin{aligned}
y_{k-n+1} &= C x_{k-n+1} \\
y_{k-n+2} &= C\Phi x_{k-n+1} + C\Delta u_{k-n+1} \\
y_{k-n+3} &= C\Phi^2 x_{k-n+1} + C\Phi\Delta u_{k-n+1} + C\Delta u_{k-n+2} \\
&\;\;\vdots \\
y_{k} &= C\Phi^{n-1} x_{k-n+1} + C\Phi^{n-2}\Delta u_{k-n+1} + \cdots + C\Delta u_{k-1}
\end{aligned} \tag{8.9}$$
To make things clearer, we can stack the collected input and output values in vectors,

$$Y_k = \begin{bmatrix} y_{k-n+1} \\ y_{k-n+2} \\ \vdots \\ y_{k-1} \\ y_k \end{bmatrix}, \qquad
U_{k-1} = \begin{bmatrix} u_{k-n+1} \\ u_{k-n+2} \\ \vdots \\ u_{k-1} \end{bmatrix} \tag{8.10}$$

and we note that the input vector collection U_{k−1} is one element shorter than the output vector collection Y_k. Now we can write Eqn. 8.9 more succinctly as

$$Y_k = \mathcal{O}\, x_{k-n+1} + W_u U_{k-1} \tag{8.11}$$
with matrices defined as

$$\mathcal{O} = \begin{bmatrix} C \\ C\Phi \\ C\Phi^2 \\ \vdots \\ C\Phi^{n-1} \end{bmatrix}, \qquad
W_u = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ C\Delta & 0 & \cdots & 0 \\ C\Phi\Delta & C\Delta & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ C\Phi^{n-2}\Delta & C\Phi^{n-3}\Delta & \cdots & C\Delta \end{bmatrix} \tag{8.12}$$
and we may recognise O as the observability matrix defined in Eqn. 8.6.
Solving Eqn. 8.11 for the states (in the past at sample time k − n + 1) gives

$$x_{k-n+1} = \mathcal{O}^{-1} Y_k - \mathcal{O}^{-1} W_u U_{k-1} \tag{8.13}$$

provided O is non-singular. This is why we require the observability condition to be met for state reconstruction.
The minor problem with Eqn. 8.13 is that it delivers the state n sample times in the past, rather than the current state, x_k, which is of more immediate interest. However, it is trivial to use the plant model and knowledge of the past inputs to roll the states onwards to the current time k. Repeated forward use of the model gives

$$x_k = \Phi^{n-1} x_{k-n+1} + \Phi^{n-2}\Delta u_{k-n+1} + \Phi^{n-3}\Delta u_{k-n+2} + \cdots + \Delta u_{k-1}$$
The entire procedure for state reconstruction is summarised in the following algorithm.

Algorithm 8.1 State reconstruction assuming a perfect observable model and no noise. Given

$$x_{k+1} = \Phi x_k + \Delta u_k, \qquad y_k = C x_k$$

then the current state, x_k, is given by the linear equation

$$x_k = A_y Y_k + B_u U_{k-1}$$

where the matrices are

$$A_y = \Phi^{n-1} \mathcal{O}^{+}, \qquad
B_u = \left[ \Phi^{n-2}\Delta \;\vdots\; \Phi^{n-3}\Delta \;\vdots\; \cdots \;\vdots\; \Delta \right] - \Phi^{n-1} \mathcal{O}^{+} W_u$$

and Y_k, U_{k−1} are defined in Eqn. 8.10, and O, W_u are defined in Eqn. 8.12. We should note:

- To reconstruct the state (at the current time k), we need to log past input and output data, and the minimum number of data pairs required is n, where n is the number of states. Any more will give a least-squares fit.
- Since we invert O, this matrix must not be singular. (Alternatively, we require that the system must be observable.) For the multivariable case we should use the pseudo-inverse, O⁺, as written above.
- We use in the algorithm, but do not invert, the controllability matrix.
A simple state reconstructor following Algorithm 8.1 is given in Listing 8.1.
Listing 8.1: A simple state reconstructor following Algorithm 8.1.

function x_est = state_extr(G,Yk,Uk)
% Back extract past state from outputs & inputs (assuming observable)
Phi = G.a; Del = G.b; C = G.c;
error(abcdchk(Phi,Del,C))        % bullet proofing
[p,n] = size(C); [n,m] = size(Del);

Wo = obsv(G);                    % same as: O = obsv(Phi,C)
Ay = Phi^(n-1)/Wo;               % similar to Phi^(n-1)*inv(O)

S = C*Del;
for i=1:n-2
  S = [S; C*Phi^i*Del];          % building Wu
end % for
zC = zeros(p,m);                 % or zeros(size(C*Del));
Wu1 = [zC;S]; Wu = Wu1;
for i=n-3:-1:0
  Wu1 = [zC; Wu1]; Wu1(end-p+1:end,:) = [];
  Wu = [Wu, Wu1];
end % for

Co = Del;                        % reversed controllability matrix
for i=1:n-2
  Co = [Phi^i*Del,Co];
end % for
Bu = Co - Ay*Wu;
x_est = Ay*Yk + Bu*Uk;
return
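For comparison, here is a direct transcription of Algorithm 8.1 into Python/NumPy (an assumption for illustration; Listing 8.1 is the MATLAB original), tested on the two-state example from §8.1.4:

```python
import numpy as np

def reconstruct_state(Phi, Delta, C, Y, U):
    """Current state x_k from the last n outputs Y and n-1 inputs U (Algorithm 8.1)."""
    n = Phi.shape[0]
    p, m = C.shape[0], Delta.shape[1]
    # Observability matrix O of Eqn. 8.12
    O = np.vstack([C @ np.linalg.matrix_power(Phi, k) for k in range(n)])
    # W_u of Eqn. 8.12: block lower triangular with C*Phi^(i-j-1)*Delta blocks
    Wu = np.zeros((n * p, (n - 1) * m))
    for i in range(1, n):
        for j in range(i):
            Wu[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(Phi, i-j-1) @ Delta
    Opinv = np.linalg.pinv(O)                    # pseudo-inverse for the MIMO case
    Ay = np.linalg.matrix_power(Phi, n-1) @ Opinv
    # Reversed controllability matrix [Phi^(n-2)*Delta, ..., Phi*Delta, Delta]
    Co = np.hstack([np.linalg.matrix_power(Phi, n-2-j) @ Delta for j in range(n-1)])
    Bu = Co - Ay @ Wu
    return Ay @ Y + Bu @ U

# Two-state example from Section 8.1.4
Phi = np.array([[0.0, 1.0], [3.0, 2.0]])
Delta = np.array([[0.0], [1.0]])
C = np.array([[4.0, 0.0]])

Y = np.array([[4.0], [4.0]])    # recorded outputs y(0), y(1)
U = np.array([[4.0]])           # recorded input u(0)
xk = reconstruct_state(Phi, Delta, C, Y, U)
print(xk)   # recovers x(1) = [1, 9]^T (up to rounding)
```

The function recovers the state at the latest logged sample, k = 1 here, matching the hand reconstruction earlier in this section.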
Fig. 8.1 illustrates the results of the state reconstruction algorithm in the case where there is no model/plant mismatch, but there is noise on the input and output. Given that the algorithm has the wrong u(t) and y(t) trends, the reconstructed states shown in Fig. 8.1 are slightly different from the true states.
8.2 State space pole-placement controller design
This section describes how to design a controller to control a given process, using state space techniques in the discrete domain.
When a process engineer asks the control engineer to design a control system, the first question that the control engineer should ask back is: what sort of performance is required? In this way, we start with some sort of closed loop specification which, for example, may be that we demand that the system return to setpoint after a disturbance with no offset, or rise to a new setpoint in a reasonable time, perhaps one open-loop time constant. If the loop is a critical one, then we may wish to tighten this requirement by specifying a smaller time to reach control, or conversely, if we have a non-critical loop, we could be satisfied with a more relaxed settling time requirement. It is our job as control engineers to translate these vague verbal requirements into mathematical constraints, and if possible, then design a controller that meets these constraints.
When we talk about a response, we are essentially selecting the poles (or eigenvalues) of the closed loop transfer function, and to some lesser degree, also the zeros. Specifying these eigenvalues by
Figure 8.1: Reconstructing states. Since the state reconstructor is given the undisturbed input and a noisy output, it is no surprise that the estimated states are slightly different from the true states (solid lines in the lower two trends). [Figure: three stacked trends over 0–7 s showing the output y (actual and measured), and the estimates of x₁ and x₂ against time in seconds.]
changing the controller parameters is called controller design by pole placement. In order to design a controller by pole placement, we must know two things:
1. A dynamic model of the process, and
2. what type of response we want (i.e. what the closed loop eigenvalues should be).
However, choosing appropriate closed loop poles is non-trivial. Morari in [136] reminds us that poles are a highly inadequate measure of closed loop performance, and he continues to assert that because of the lack of guidance to the control designer, pole placement techniques have been abandoned by almost everybody today. Notwithstanding this opinion, it is still instructive for us to understand how our closed loop response is affected by different choices of eigenvalues, and how this forms the basis for more acceptable tuning methods such as the linear quadratic controllers discussed in the following chapter.
Suppose we have a single input–multiple output (SIMO) system such as

$$x_{k+1} = \Phi x_k + \Delta u_k \tag{8.14}$$

where u is a scalar input, and we wish to control all n states by using a proportional feedback control law such as

$$u = -K x \tag{8.15}$$

where K is called the state feedback gain matrix. (Actually in this SIMO case, the gain matrix collapses to a row vector.) Substituting Eqn. 8.15 into Eqn. 8.14 gives the closed loop response as

$$x_{k+1} = \left( \Phi - \Delta K \right) x_k \tag{8.16}$$
Now we desire (Φ − ΔK) to have the n eigenvalues at arbitrary positions μ₁, μ₂, …, μₙ. To achieve this, we must check that arbitrary pole placement is achievable. In other words, the system must be controllable. This means that the controllability matrix C_o must be of rank n. If this condition is satisfied, we can design the feedback gain matrix K by using Ackermann's formula,

$$K = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}
\left[ \Delta \;\vdots\; \Phi\Delta \;\vdots\; \cdots \;\vdots\; \Phi^{n-1}\Delta \right]^{-1} \alpha(\Phi) \tag{8.17}$$

where α(Φ) is the matrix characteristic polynomial of the desired closed loop transfer function. Note that the matrix to be inverted in Eqn. 8.17 is the discrete controllability matrix C_o, hence the need for full rank.
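Ackermann's formula translates directly into a few lines of code. The following Python/NumPy sketch (an assumed stand-in for the MATLAB acker function) implements Eqn. 8.17, and checks it on the double integrator example worked through later in this section:

```python
import numpy as np

def acker(Phi, Delta, poles):
    """State feedback gain K placing eig(Phi - Delta*K) at 'poles' (Eqn. 8.17)."""
    n = Phi.shape[0]
    # Discrete controllability matrix [Delta, Phi*Delta, ..., Phi^(n-1)*Delta]
    Co = np.hstack([np.linalg.matrix_power(Phi, k) @ Delta for k in range(n)])
    # Desired characteristic polynomial alpha(z), evaluated at the matrix Phi
    coeffs = np.poly(poles)                        # leading coefficient first
    alpha = sum(c * np.linalg.matrix_power(Phi, n - i)
                for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0       # selector row [0 ... 0 1]
    return (e_n @ np.linalg.solve(Co, alpha)).real

Phi = np.array([[2.0, -0.5], [2.0, 0.0]])          # discrete double integrator, T = 1
Delta = np.array([[1.0], [0.0]])
K = acker(Phi, Delta, [0.5 + 0.5j, 0.5 - 0.5j])
print(K)                                            # close to [[1.0, -0.25]]
print(np.linalg.eigvals(Phi - Delta @ K))           # the requested 0.5 +/- 0.5i
```

Solving Co · X = α(Φ) rather than forming the explicit inverse is the usual numerically preferable choice, though the formula itself remains sensitive when Co is ill-conditioned.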
A quick way to calculate α(Φ) is to use poly and polyvalm. For instance, we can verify the Cayley–Hamilton theorem (any matrix satisfies its own characteristic equation) elegantly using

>> A = rand(3)                  % take any test matrix
A =
    0.0535    0.0077    0.4175
    0.5297    0.3834    0.6868
    0.6711    0.0668    0.5890
>> polyvalm(poly(A),A)          % Cayley-Hamilton theorem, should = 0
ans =
   1.0e-016 *
    0.1388    0.0119    0.1407
    0.1325         0    0.7412
   -0.0496    0.0073    0.5551
Algorithm 8.2 SIMO state-space pole-placement.
Given a system model, ẋ = Ax + Bu, or the discrete version, x_{k+1} = Φx_k + Δu_k, and design requirements for the closed loop poles, μ, then:
1. Check the controllability of the system, ctrb(A, B), or ctrb(Φ, Δ). If not all the states are controllable, then terminate the algorithm, otherwise carry on.
2. Compute the characteristic polynomial, α, and substitute either matrix A or Φ.
3. Compute the row-vector state feedback gain using Eqn. 8.17.
Example Pole placement of the double integrator system.
In this example, we wish to design a controller for the discrete double integrator system sampled at T = 1 that has closed loop eigenvalues at μ = 0.5 ± 0.5i. In this case we will assume a zeroth-order hold on the discretisation.

>> Gc = tf(1,[1 0 0]);
>> G = ss(c2d(Gc,1));

giving matrices

$$\Phi = \begin{bmatrix} 2 & -0.5 \\ 2 & 0 \end{bmatrix}, \qquad \Delta = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

To use Ackermann's formula, Eqn. 8.17, we need to compute both the controllability matrix and the matrix characteristic polynomial α(Φ).
>> mu = [0.5+0.5i; 0.5-0.5i];
>> alpha = polyvalm(poly(mu),G.a)
>> Cb = ctrb(G.a,G.b)

Finally the gain is given by

>> K = [0 1]/Cb*alpha

Note that as

$$\alpha(z) \stackrel{\text{def}}{=} (z - \mu_1)(z - \mu_2) = z^2 - z + 0.5$$

then

$$\alpha(\Phi) = \Phi^2 - \Phi + 0.5 I = \begin{bmatrix} 1.5 & -0.5 \\ 2 & -0.5 \end{bmatrix}$$

and that the controllability matrix

$$\mathcal{C} = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}$$

has full rank. Finally, the feedback gain is

$$K = \begin{bmatrix} 1 & -0.25 \end{bmatrix}$$

We can easily check that the closed loop poles are as required with

>> eig(G.a-G.b*K)
ans =
   0.5000 + 0.5000i
   0.5000 - 0.5000i
Rather than constructing Ackermann's equation by hand, MATLAB can design a pole placement controller using the acker or the preferred place function. The latter gives an estimate of the precision of the closed loop eigenvalues, and an optional error message if necessary.

>> [K,PREC] = place(G.a,G.b,mu)
K =
    1.0000   -0.2500
PREC =
    15
Eqn. 8.15 assumes that the setpoint is at x = 0. (We can always transform our state variables by subtracting the setpoint so that this condition is satisfied, but this becomes inefficient if we are likely to change the setpoint regularly.) In these cases, it is better to modify our control law from Eqn. 8.15 to

$$u = K(r - x) \tag{8.18}$$

where r is a vector of the state setpoints or reference states, now assumed non-zero. Consequently, the closed loop response is now modified from Eqn. 8.16 to

$$x_{k+1} = \left( \Phi - \Delta K \right) x_k + \Delta K r_k \tag{8.19}$$
8.2.1 Poles and where to place them
While pole-placement seems an attractive controller design strategy, the problem lies in deciding where the closed loop poles ought to be.
Listing 8.2 designs a pole-placement controller for the blackbox, G(s) = 0.98/((2s+1)(s+1)), with discrete closed loop poles specified, somewhat arbitrarily, at λᵀ = [0.9, 0.8].

Listing 8.2: Pole-placement control of a well-behaved system

>> Gc = tf(0.98,conv([2 1],[1,1]))  % Blackbox model
>> G = ss(c2d(Gc,0.1))   % Convert to discrete state-space with sample time Ts = 0.1
>> [Phi,Delta,C,D] = ssdata(G);
Figure 8.2: Pole-placement of the black box for three different pole locations. From left to right: (a) λᵀ = [0.9, 0.8], (b) λᵀ = [−0.9, −0.8], and (c) λ = 0.6e^{±j80°}. [Figure: for each case, the states x₁ and x₂, the input u over 0–4 s, and the closed loop pole positions in the z-plane.]
>> lambda = [0.9 0.8]';            % Desired closed loop discrete poles
>> K = place(Phi,Delta,lambda)     % Compute controller gain K
K =
    2.4971   -2.2513

Now we can check the closed loop performance from a non-zero initial condition with

Gcl = ss(Phi-Delta*K,zeros(size(Delta)),eye(size(Phi)),0,Ts);
x0 = [1; -2];                      % start position
initial(Gcl,x0)
The results of a controlled disturbance recovery are given in the left hand plot of Fig. 8.2. However, an obvious question at this point is where exactly should I place the poles? The three trends in Fig. 8.2 illustrate the difference. A conservative choice, as given above, is where we place the poles to the right of the origin, on the real axis close to the +1 point. In this case I chose λᵀ = [0.9, 0.8]. The second choice is the mirror image of those poles, λᵀ = [−0.9, −0.8], and the third choice is a pair of complex conjugates at 0.6∠±80°.
Clearly there is a trade-off between the speed of the disturbance rejection, the amount of input energy, and the degree of oscillation. If we can quantify these various competing effects, then we can design controllers systematically. This is the subject of optimal control given in the next
chapter.
8.2.2 Deadbeat control
If the system is controllable such that any arbitrary poles are allowable, one is naturally inclined to extrapolate the selection to poles that will give a near perfect response. Clearly a perfect response is one where we are always at the setpoint. If a step change in reference value occurs, then we instantaneously apply a corrective measure to instantaneously push the system to the new desired point. A system that rises immediately to the new setpoint has a closed loop time constant of τ = 0, or alternatively, a continuous pole at s = −∞. This corresponds to a discrete pole at e^{sT} = 0, the origin. If we decide to design this admittedly rather drastic controller, we would select all the desired closed loop poles at μ = 0. Such a controller is feasible in the discrete domain and is called a deadbeat controller.
The design of a deadbeat controller is the same as for an arbitrary pole placement controller. However, it is better to use the acker function rather than the place function, because the place function will not allow you to specify more multiple eigenvalues than inputs. This restriction does not apply to the Ackermann algorithm.
Suppose we design a deadbeat controller for the continuous plant

$$\dot{x} = \begin{bmatrix} -0.3 & 0 & 0 \\ 0 & -0.5 & 2 \\ 0 & -2 & -0.5 \end{bmatrix} x + \begin{bmatrix} 3 \\ -2 \\ 5 \end{bmatrix} u \tag{8.20}$$

To design a deadbeat controller, we specify the desired discrete closed loop eigenvalues as μ = 0, and selecting a sample time of T = 4, we obtain

$$K = \begin{bmatrix} 3.23 \times 10^{-2} & -3.69 \times 10^{-4} & 1.98 \times 10^{-2} \end{bmatrix}$$

The closed loop transition matrix is

$$\Phi_{cl} = \Phi - \Delta K = \begin{bmatrix} 0.075 & 0.003 & -0.139 \\ -0.063 & -0.019 & 0.095 \\ -0.060 & -0.133 & -0.056 \end{bmatrix}$$

and we could also compute that all the elements in Φ³_cl are of the order of 10⁻¹⁹, and that all the eigenvalues of Φ − ΔK are of the order of 10⁻⁷, as required.
Such a matrix, where A^{n−1} ≠ 0 but Aⁿ = 0, is called a nilpotent matrix.
Fig. 8.3 shows the impressive controlled response of the plant (Eqn. 8.20) under deadbeat control. The top half of the diagram shows the states which converge from the initial condition, x₀ = [−2 −3 1]ᵀ, to the setpoint x = 0 in only three sample times (3T = 12 s). It takes three sample times for the process to be at the setpoint because it is a third order system.
The deadbeat controlled response looks almost perfect in Fig. 8.3. In fact, the states are only guaranteed to be at the setpoint (after t > 12 s) at the sample times. In between these times anything can happen. We can investigate this by repeating the simulation at a smaller time interval (say T = 0.05 s), but with the same input as before, to see what happens between the sample intervals. (This is how I evaluated the smooth state curve in Fig. 8.3 between the states at the controller sample times, denoted with a circle, in Listing 8.3.) Note that the states oscillate between the samples, but these oscillations quickly die out after 10 seconds.
Figure 8.3: The states (top) and the manipulated variable (bottom) for the deadbeat controlled response, with sample time T = 4 s. The states at the controller sample times are denoted with a circle, and after the third sample time, t ≥ 12, the states remain exactly at the setpoint at the sample times.
Listing 8.3: A deadbeat controller simulation

x0 = [-2 -3 1]';                  % initial condition
dt = 4; tf = dt*5; t = [0:dt:tf]'; % time vector
A = [-0.3,0,0; 0,-0.5,2; 0,-2,-0.5]; % continuous plant
B = [3;-2;5];
Gc = ss(A,B,eye(3),0);            % create continuous model
Gd = c2d(Gc,dt,'zoh');            % discrete model
mu = [0;0;0];                     % desired closed loop eig
K = acker(Gd.a,Gd.b,mu);          % construct deadbeat controller

Gd_cl = Gd; Gd_cl.a = Gd.a-Gd.b*K; % closed loop
[y,t,x] = lsim(Gd_cl,zeros(size(t)),[],x0);
U = [-K*x']';                     % back extract input

% reduce sample time at same conditions to see inter-sample ripple
dt = dt/100; t2 = [0:dt:tf]';     % new time scale
[ts,Us] = stairs(t,U);            % create new input
ts = ts + [1:length(ts)]'/1e10; ts(1) = 0;
U2 = interp1(ts,Us,t2);           % approx new input
[y2,t2,x2] = lsim(Gc,U2,t2,x0);   % continuous plant simulation

subplot(2,1,1); plot(t,x,'o',t2,x2); % plot results
subplot(2,1,2); plot(ts,Us);
Naturally, under practical industrial conditions, the plant will not be brought under control so rapidly, owing either to model–plant mismatch or to manipulated variable constraints. This type of controller can be made more robust by specifying a larger sample time, which is in fact the only tuning constant possible in a deadbeat controller. Ogata gives some comments and cautions about deadbeat control in [148, pp670–71].
Problem 8.4 Create a system with eigenvalues of your choice; say

$$v = \begin{bmatrix} 0.2 & 0.3 & 0.1 \end{bmatrix}^\top$$

and design a deadbeat controller. Simulate this using the dlsim function. (Try a sample time of 7.) Investigate the same controller, but at a higher sampling rate, and plot the intersample behaviour of the states.
Investigate the effect on the gain when using different sample rates.
8.3 Estimating the unmeasured states

The pole-placement control design strategies developed in §8.2 utilise all the states in the control law u = −Kx, and so implicitly assume that all the states are measurable. In many cases this is not practically possible, so the only alternative is to estimate the states.

The states can be reconstructed using the outputs, the inputs, and the model equations. In fact we need not restrict ourselves to just estimating states; we could also estimate derived quality control variables, or even the outputs themselves. These estimating schemes are known as observers, and they offer cost and reliability advantages over actual sensors in many applications.

The simplest state estimator is where we use a model and subject it to the same input u that is applied to the true process. The resultant x̂ is the estimate for the true state x:
x̂_{k+1} = Φ x̂_k + Δ u_k    (8.21)
If we know the true model (Φ, Δ) and the manipulated variable history u(k), then this estimator will work provided we select the correct initial condition x̂_0. Since we have no feedback information as to how well we are doing, this estimator is called an open-loop estimator. If the conditions stated above are not met, and they will rarely be satisfied in practice, then the estimator will provide estimates that will probably get worse and worse. The error of the estimate is

ε_k = x_k − x̂_k    (8.22)
Ideally, we would like to force this error vector to zero. This is actually still a control problem, although now we are controlling not the plant, but the error of the estimation scheme. We introduce a feedback error scheme such as

x̂_{k+1} = Φ x̂_k + Δ u_k + L(y_k − C x̂_k)    (8.23)

which is called a prediction error observer, since the prediction is one sample period ahead of the observation. L is an (n × 1) column vector called the observer gain matrix, and it is a design parameter. Now the error model is

ε_{k+1} = (Φ − LC) ε_k    (8.24)

We must select the matrix L such that the observer system Φ − LC is stable and has eigenvalues (observer poles) that are suitably fast. This means that even if we were to get the initial condition wrong for x̂_0, we can still use the estimator, since any initial error will eventually be forced to zero. Typically we would choose the poles of the observer system (Eqn 8.24) much faster than the open-loop system. To be able to solve for L, the system must be observable.
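The mechanism of Eqn 8.24 is easy to see numerically. A minimal sketch (Python/NumPy, with a hypothetical plant and a hand-picked, not a designed, observer gain) shows that a wrong initial guess for the state is forgotten, because the eigenvalues of Φ − LC lie well inside the unit circle:

```python
import numpy as np

Phi = np.array([[0.9, 0.2],
                [0.0, 0.7]])          # open-loop eigenvalues 0.9 and 0.7
C = np.array([[1.0, 0.0]])            # we measure only the first state
L = np.array([[0.5],
              [0.1]])                 # hand-picked observer gain

A_obs = Phi - L @ C                   # error dynamics, Eqn 8.24
# eigenvalues of A_obs are 0.5 and 0.6: faster than the open loop

eps = np.array([[1.0], [1.0]])        # initial estimation error
for _ in range(30):
    eps = A_obs @ eps                 # error shrinks every sample
```

After 30 samples the error has decayed essentially to zero, even though the estimator started from the wrong initial condition.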
For many chemical process applications the sampling rate is relatively slow, since the processes are generally quite slow, so the basic estimator of Eqn 8.23 can be modified so that the prediction of x̂_{k+1} uses the current measurement y_{k+1} rather than the previous measurement y_k, as is the case in Eqn 8.23. This is only possible if the computational time is short compared with the sample time, but if this is the case, then the sample period is better utilised. This formulation is termed a current estimator; both forms are described in [70, pp. 147–148] and, in terms of the Kalman filter, in [148, pp. 873–875]. The current estimator is

x̄_{k+1} = Φ x̂_k + Δ u_k    (8.25)
x̂_{k+1} = x̄_{k+1} + L(y_{k+1} − C x̄_{k+1})    (8.26)

In practice, for most systems the two forms (current and prediction estimators) are almost equivalent.
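The two forms can be compared directly through their error dynamics: Φ − LC for the prediction estimator, and (a short calculation gives) (I − LC)Φ for the current estimator. A sketch (Python/NumPy, hypothetical numbers, same gain for both) shows both errors converging at similar rates:

```python
import numpy as np

Phi = np.array([[0.9, 0.2],
                [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.1]])                     # same hand-picked gain for both

A_pred = Phi - L @ C                      # prediction estimator error dynamics
A_curr = (np.eye(2) - L @ C) @ Phi        # current estimator error dynamics

e_pred = e_curr = np.array([[1.0], [1.0]])
for _ in range(40):
    e_pred = A_pred @ e_pred
    e_curr = A_curr @ e_curr
# both errors are driven to (nearly) zero, at comparable rates
```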
The design of the observer gain L by pole-placement is similar to that of the controller gain K given in §8.2, but in the observer case we have the closed-loop expression

eig(Φ − LC) rather than eig(Φ − ΔK)

However, we note that

eig(Φ − LC) = eig(Φᵀ − CᵀLᵀ)

since eig(A) = eig(Aᵀ), so we can use the MATLAB functions place or acker for the observer problem by using the dual property

L = place(Φᵀ, Cᵀ, λ)ᵀ

Table 8.1 shows the equivalent relationships between the matrices used for observation and regulation; make the relevant substitutions.
Table 8.1: The relationship between regulation and estimation

  Control        Estimation
  Φ              Φᵀ
  Δ              Cᵀ
  K              Lᵀ
  C_o            O_b
  eig(Φ − ΔK)    eig(Φ − LC)
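The dual-property recipe translates directly to other tools; here is a sketch using SciPy's place_poles in place of MATLAB's place, on a hypothetical 2-state system (not one from the text):

```python
import numpy as np
from scipy.signal import place_poles

Phi = np.array([[0.9, 0.2],
                [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
pe = np.array([0.2, 0.3])              # desired observer poles

# L = place(Phi', C', pe)' : design on the dual (transposed) system
L = place_poles(Phi.T, C.T, pe).gain_matrix.T

print(np.linalg.eigvals(Phi - L @ C))  # -> 0.2 and 0.3 (in some order)
```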
8.4 Combining estimation and state feedback

We can combine the state feedback control from §8.2 with the state estimation from §8.3 into a single algorithm. This is useful because now we can use the estimator to provide the states for our state feedback controller. For linear plants, the design of the state feedback controller is independent of the design of the observer. The block diagram shown in Fig. 8.4, which is adapted from [19, p438], shows the configuration of the estimator and controller.

Remember that in this example we have no model-plant mismatch and the plant is linear, but we do not know the initial conditions of the states x_0. The equations are summarised below. The true process is

x_{k+1} = Φ x_k + Δ u_k    (8.27)

and the measurement is

y_k = C x_k    (8.28)
[Block diagram: the plant (Φ, unit delay q⁻¹, C) driven by input u producing output y; an estimator copy of the model with correction gain L acting on (y − Cx̂); state feedback acting on the estimated state x̂.]
Figure 8.4: Simultaneous control and state estimation
but we do not know x, so we estimate it with

x̂_{k+1} = Φ x̂_k + Δ u_k + L(y_k − C x̂_k)    (8.29)

and the scalar control law is

u_k = −K x̂_k    (8.30)

Combining Eqns 8.30 and 8.27 gives

x_{k+1} = Φ x_k − ΔK x̂_k    (8.31)

and combining Eqns 8.29 and 8.30 gives

x̂_{k+1} = (Φ − LC − ΔK) x̂_k + LC x_k    (8.32)

Concatenating Eqns 8.31 and 8.32 together gives the homogeneous equation

z_{k+1} = [ x_{k+1} ]   [ Φ        −ΔK       ] [ x_k  ]
          [ x̂_{k+1} ] = [ LC   Φ − ΔK − LC   ] [ x̂_k ]    (8.33)

I will call the transition matrix for Eqn 8.33 the closed-loop observer transition matrix, Φ_clob.
Ogata gives a slightly different formulation of the combined observer/state feedback control system as Eqn 6-161 in [148, p714]. Ogata also recommends choosing the observer poles 4–5 times faster than the system response.
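The fact that the controller and observer can be designed independently (the separation principle) can be checked numerically on Eqn 8.33: the eigenvalues of the combined transition matrix are exactly the controller poles together with the observer poles. A NumPy sketch with hypothetical matrices and hand-computed gains:

```python
import numpy as np

Phi = np.array([[0.9, 0.2],
                [0.0, 0.7]])
Del = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.7]])    # places eig(Phi - Del K) at 0.4, 0.5
L = np.array([[0.5], [0.1]])  # places eig(Phi - L C) at 0.5, 0.6

# Combined closed-loop/observer transition matrix, Eqn 8.33
M = np.block([[Phi,    -Del @ K],
              [L @ C,  Phi - Del @ K - L @ C]])

eigs = np.sort(np.linalg.eigvals(M).real)
# union of the controller poles {0.4, 0.5} and the observer poles {0.5, 0.6}
```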
Example: Combined state feedback and observer

Suppose we are given a continuous plant

G(s) = 0.98 / ((3s + 1)(2s + 1))

which we will sample at T = 0.1. The design specifications are that the estimator poles are to be relatively fast compared to the controller poles,

λ_est = [0.8, 0.85]ᵀ  and  λ_ctrl = [0.92, 0.93]ᵀ
We can design the controller and estimator gains as follows:

Listing 8.4: Pole placement for controllers and estimators
G = tf(0.98,conv([2 1],[3 1])); % Blackbox plant
Gss = ss(G); % convert to state-space

% Now special version with full outputs
Gss1 = Gss;
Gss1.c = eye(2); Gss1.d = [0;0];

T = 0.1; % seconds
Gssz = c2d(Gss,T); % convert to discrete state-space

p = [0.92 0.93]'; % Now design controller gain
K = place(Gssz.a, Gssz.b, p); % eig(Phi - Del*K) = p

pe = [0.80 0.85]'; % Now design estimator gain
L = place(Gssz.a', Gssz.c', pe)'; % eig(Phi - L*C) = pe

eig(Gssz.a - L*Gssz.c) - pe % Test: result should be zero
which gives controller and observer gains as

K = [1.3848  1.6683],  and  L = [0.2146; 0.4109]
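As a cross-check, the same design can be repeated in Python/SciPy. Note that the gain values depend on the state-space realisation chosen, so only the closed-loop poles, not the numbers above, should be expected to match; the sketch below uses one (controller canonical) realisation of G(s):

```python
import numpy as np
from scipy.signal import place_poles, cont2discrete

# G(s) = 0.98/((3s+1)(2s+1)) = 0.98/(6s^2 + 5s + 1), controller canonical form
A = np.array([[-5/6, -1/6],
              [1.0,   0.0]])
B = np.array([[1.0], [0.0]])
Cmat = np.array([[0.0, 0.98/6]])

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, Cmat, [[0.0]]), dt=0.1, method='zoh')

K = place_poles(Ad, Bd, [0.92, 0.93]).gain_matrix          # controller
L = place_poles(Ad.T, Cd.T, [0.80, 0.85]).gain_matrix.T    # observer, by duality

# the closed-loop and observer poles land where requested
ctrl_poles = np.sort(np.linalg.eigvals(Ad - Bd @ K).real)
obs_poles = np.sort(np.linalg.eigvals(Ad - L @ Cd).real)
```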
Fig. 8.5(a) gives the SIMULINK configuration for the controller, which closely resembles the block diagram given in Fig. 8.4. To test the estimation and control, we will simulate a trend where the actual and estimated starting conditions are different, namely

x_0 = [1, 1]ᵀ, which is not the same as x̂_0 = [1.6, 2]ᵀ

The trends shown in Fig. 8.5(b) show the (discrete) estimated states converging rapidly to the true continuous states, and both converging, slightly less rapidly, to the setpoint.
8.5 Generic model control

Control of multivariable nonlinear processes such as

ẋ = f(x, u, d, θ, t)    (8.34)

is in general much harder than that for linear processes. Previously we discussed the simple state feedback control law, u = K(x* − x), where x* is the desired state vector or setpoint. We showed that we could augment this simple proportional-only law with integral action to ensure zero offset, giving a control law such as Eqn 9.71. Now suppose that we wish to control not x, but the derivative of x, dx/dt, to follow some desired derivative trajectory (ẋ)*. We now wish to
[SIMULINK diagram: the black-box plant, an estimator built from Phi, Delta and C model gain blocks with a unit delay, the state-feedback gain ctrl, and a To Workspace block logging yv; signal widths are 2 for the states and 4 for the multiplexed output.]
(a) SIMULINK configuration
[Plot: true states x1, x2 and estimates x̂1, x̂2 versus time, 0–7 s.]
(b) Estimated and true states
Figure 8.5: Simultaneous control and estimation
find u(t) such that ẋ = r(t), where r(t) is some reference function that we can specify, and where dx/dt is given by Eqn 8.34.

For any controller, when the process is away from the desired state x*, we want to force the rate of change of the states, ẋ, to be such that it is moving in the direction of the desired setpoint, and in addition we would like the closed loop to have zero offset. The control law

(ẋ)* = K₁(x* − x) + K₂ ∫₀ᵗ (x* − x) dτ    (8.35)

combines these two weighted ideas. Ignoring any model-plant mismatch, (ẋ)* = ẋ, so substituting the plant dynamics Eqn 8.34 into Eqn 8.35 we get the control law

f(x, u, t) − K₁(x* − x) − K₂ ∫₀ᵗ (x* − x) dτ = 0    (8.36)

Solving Eqn 8.36 for u is the control law, and this scheme is called Generic Model Control or GMC; it was developed by Lee and Sullivan in [116]. Note that solving Eqn 8.36 for u is in effect solving a system of m nonlinear algebraic equations, and this must be done at every time step. Actually, the solution procedure is not so difficult, since the previous solution u_{t−1} can be used as the initial estimate for u_t.
8.5.1 The tuning parameters

The GMC control law (Eqn 8.36) embeds a model of the process and two matrix tuning parameters, K₁ and K₂. Selecting different tuning parameters changes the type of response. Assuming we have a perfect model, the control law (Eqn 8.36) is

ẋ = K₁(x* − x) + K₂ ∫₀ᵗ (x* − x) dτ    (8.37)

If we differentiate the control law Eqn 8.37 once, we get

ẍ = K₁(ẋ* − ẋ) + K₂(x* − x)
ẍ + K₁ẋ + K₂x = K₁ẋ* + K₂x*    (8.38)

and taking Laplace transforms of Eqn 8.38 gives

s²x + sK₁x + K₂x = sK₁x* + K₂x*

x(s)/x*(s) = (K₁s + K₂)/(s² + K₁s + K₂) = (2ζτs + 1)/(τ²s² + 2ζτs + 1)    (8.39)

where

τ = 1/√k₂,   ζ = k₁/(2√k₂)    (8.40)

or alternatively

k₁ = 2ζ/τ,   k₂ = 1/τ²    (8.41)
Note that Eqn 8.39 is a modified second-order transfer function. The time-domain solution to Eqn 8.39 for a step change in setpoint x*(s) is

x(t) = 1 − e^{−ζt/τ} [ cos(√(1−ζ²) t/τ) − (ζ/√(1−ζ²)) sin(√(1−ζ²) t/τ) ],   ζ ≠ 1    (8.42)
(a) Response curves for shape factors ζ = 0.2, 0.5, 1 and 3 (normalised time, τ = 1).
(b) A 3-dimensional plot of the same relation.

Figure 8.6: The normalised (τ = 1) response curve for a perfect GMC controller for various choices of shape factor ζ.
For the limiting case where ζ = 1, the solution to Eqn 8.39 simplifies to just

x(t) = 1 + e^{−t/τ} (t/τ − 1)    (8.43)

Fig. 8.6 shows the response for a perfectly GMC-controlled process for different ζ. Note that as ζ → ∞, the output response approaches a step response at t = 0, and as ζ → 0, the output tends to the undamped oscillation x(t) = 1 − cos(t/τ). This improvement comes, naturally, at the expense of decreased robustness.

Fig. 8.7 shows the relation between the overshoot, the decay ratio and the tuning value ζ. These values are easily obtained from Fig. 8.6, or analytically starting from Eqn 8.42. Note that for ζ > 1 there is no oscillation in the response, and that ζ ≈ 0.22 gives a quarter decay ratio.
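These response shapes are easy to reproduce numerically. The following sketch (Python/NumPy, assuming the normalisation τ = 1 and an illustrative ζ = 0.5) integrates the closed-loop system of Eqn 8.39 for a unit setpoint step with a simple Euler scheme, and compares the result with the analytical step response of Eqn 8.42:

```python
import numpy as np

zeta, tau = 0.5, 1.0
k1, k2 = 2*zeta/tau, 1/tau**2          # Eqn 8.41

# Euler integration of a controllable-canonical realisation of
# (k1 s + k2)/(s^2 + k1 s + k2) driven by a unit step
dt, tf = 1e-4, 10.0
t = np.arange(0.0, tf, dt)
z1 = z2 = 0.0
y = np.empty_like(t)
for i in range(len(t)):
    y[i] = k1*z1 + k2*z2
    dz1 = -k1*z1 - k2*z2 + 1.0         # unit setpoint step
    z1, z2 = z1 + dt*dz1, z2 + dt*z1

# analytical step response, Eqn 8.42
wd = np.sqrt(1 - zeta**2)/tau
ya = 1 - np.exp(-zeta*t/tau)*(np.cos(wd*t)
                              - zeta/np.sqrt(1 - zeta**2)*np.sin(wd*t))

err = np.max(np.abs(y - ya))           # Euler discretisation error only
```

Plotting y against t for several ζ values reproduces the family of curves in Fig. 8.6.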
Figure 8.7: The relation between the shape factor ζ, the maximum overshoot (solid) and the decay ratio (dashed) for the perfectly controlled GMC response when τ = 1.
Algorithm 8.3 Designing the GMC controller

To choose appropriate values for K₁ and K₂, do the following:

1. For each state to be controlled, choose an appropriate shape from Fig. 8.6 for the response. In other words, choose a ζ value. For tight control with little overshoot, choose a shape corresponding to ζ ≈ 3 or higher, or if the loop is not so important, choose a more conservative response such as ζ ≈ 1.

2. Specify the time scale by choosing an appropriate value for τ. Often it is convenient to specify the rise time. For distillation column control you may choose a crossover time of 30 minutes, while for a flow controller you may wish to cross the setpoint after only 5 seconds. The time scale is application dependent.

3. From the specified ζ and τ, evaluate the scalars k₁ and k₂ using Eqn 8.41.

4. The diagonal matrices K₁ and K₂ are formed by collecting the k scalars into diagonal n × n matrices.
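Steps 3 and 4 are a one-liner in any language; a sketch using the §8.5.2 values (ζ = [0.7, 0.8], τ = [1.5, 2.6]) purely as illustration:

```python
import numpy as np

zeta = np.array([0.7, 0.8])            # shape factor per state
tau = np.array([1.5, 2.6])             # time constant per state

K1 = np.diag(2*zeta/tau)               # Eqn 8.41, collected into diagonals
K2 = np.diag(1/tau**2)
```

These diagonal matrices drop straight into the control law of Eqn 8.36.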
8.5.2 GMC control of a linear model

Since GMC can be applied to a nonlinear process, one would expect it to work especially well given a linear process, ẋ = Ax + Bu. Now the GMC control law simplifies from the general Eqn 8.36 to

u = B⁻¹ [ K₁x* − (K₁ + A)x + K₂ ∫₀ᵗ (x* − x) dτ ]    (8.44)

noting that B⁻¹ must exist, and hence B at least should be square. This means that we must have the same number of manipulated variables as state variables.
It is easy to test such a controller using a stable random model generated by rmodel. Hopefully our randomly generated control matrix B is invertible. (If it is not, just re-run the script file.) To tune the response, I will select a slightly different shape factor and time constant for each state, namely

ζ₁ = 0.7, τ₁ = 1.5
ζ₂ = 0.8, τ₂ = 2.6

Running the script file in Listing 8.5 gives results much like Fig. 8.8.
Listing 8.5: GMC on a linear process
dt = 0.5; n = 2; % sample time & # of states
m = n; p = 1; % ensure square system & 1 output

[A,B,C,D] = rmodel(n,p,m); % generate random continuous model
D = zeros(size(D)); C = eye(size(C)); % not interesting
[Phi,Del] = c2dm(A,B,C,D,dt,'zoh'); % discretise

x0 = round(3*randn(n,1)); % nice starting position
t = [0:dt:80]'; % simulation lifetime
Xspt = [square(t/8), 0.75*square(t/8*0.7)+3]; % setpoint

% GMC tuning rules
zeta = [0.7 0.8]'; tau = [1.5 2.6]'; % shape factor & time constant for each state
K1 = 2*diag(zeta./tau); K2 = diag(1.0./tau./tau); % diagonal K1, K2

Ie = zeros(size(x0)); % reset integrator
x = x0; X = []; Uc = [];
for i=1:length(t) % start simulation
  err = Xspt(i,:)' - x; % current error (x* - x)
  Ie = Ie + err*dt; % update integral
  rhs = K1*err + K2*Ie - A*x; % K1*x* - (K1+A)*x + K2*(integral of error)
  u = B\rhs; % solve (linear) control law, Eqn. 8.44

  x = Phi*x + Del*u; % do 1-step integration
  X = [X; x']; Uc = [Uc; u']; % collect states & input for plotting
end % for

plot(t,[Xspt, X],t,Uc,'--') % Refer Fig. 8.8
Figure 8.8: The response of a randomly generated linear process controlled by GMC. Top: state (solid) and setpoint (dotted) trajectories; bottom: manipulated variables. See also Fig. 8.9.
We would expect the GMC controller to perform well, but what should really convince us is if the actual response y(t) follows the analytically specified, desired trajectory y*(t), since GMC is really a model reference controller. These two curves are compared in Fig. 8.9, which shows an enlarged portion of Fig. 8.8. The differences between the actual response and the specified trajectory are due to sampling discretisation.

This demonstration of the GMC controlling a linear process does not do the algorithm justice, since the controller is really intended for nonlinear processes. This is demonstrated in the following section.
Figure 8.9: Comparing the actual response y (solid line) to a setpoint change with the expected response y* (dashed). This response is a zoomed portion of the response given in Fig. 8.8.
8.5.3 GMC applied to a nonlinear plant

Section 8.5.2 did not show the true power of the GMC algorithm, since the plant was linear and any linear controller such as an LMR would work. However, this section introduces a nonlinear model of a continuously stirred tank reactor, CSTR, from [123]. The CSTR model equations are

Ṫ = (F_a T_a)/V + (F_b T_b)/V − (F_a + F_b)T/V − ΔH r/(ρC_p) − U_ar (T − T_c)/(VρC_p)    (8.45)

Ċ_a = F_a C_ai/V − (F_a + F_b)C_a/V − r    (8.46)

Ṫ_c = (F_c/V_c)(T_ci − T_c) + U_ar (T − T_c)/(V_c ρC_p)    (8.47)

where the reaction kinetics are second order,

r = K e^{−E/(RT)} C_a²    (8.48)
The state, manipulated and disturbance variables, as explained in Table 8.2, are defined as¹

x = [T, C_a, T_c]ᵀ,   u = [T_ci, C_ai, F_b]ᵀ,   d = [T_a, F_a, T_b, F_c]ᵀ    (8.49)

The model parameters are also given in Table 8.2.

The plant dynamics defined by Eqns 8.45 to 8.47 are nonlinear owing to the reaction rate r being an exponential function of the temperature T. This type of nonlinearity is generally considered significant.
Designing the GMC controller

First we note that we have n states and n manipulated variables. Thus the B matrix, or at least a linearised version of the nonlinear control function, will be square. We also note that the GMC algorithm assumes that the process states x are measured, hence the measurement relation must

¹The original reference had only one manipulated variable, T_ci. This was changed for this simulation so the GMC is now a true multivariable controller.
Table 8.2: The state, manipulated and disturbance variables and parameter values for the Litchfield nonlinear CSTR, [123].

(a) State, input and disturbance variables

variable   description                    units/value
T          reactor temperature            K
C_a        concentration of A             kmol/m³
T_c        temperature of jacket          K
T_ci       temperature of cooling inlet   K
C_ai       inlet concentration of A       kmol/m³
F_b        inlet flow of stream B         m³/s
T_a        inlet temp. of stream A        333 K (60 °C)
F_a        inlet flow of stream A         3.335 × 10⁻⁶ m³/s
T_b        inlet temp. of stream B        333 K (60 °C)
F_c        cooling flow                   3 × 10⁻⁴ m³/s

(b) Model parameters

parameter   value           units
V           0.0045          m³
V_c         0.0074          m³
ρ           1000            kg/m³
K           16.6 × 10⁶      m³/(kmol s)
E/R         5.6452 × 10³    K
ΔH          −74.106 × 10³   kJ/kmol
C_p         4.187           kJ/(kg K)
U_ar        0.175           kW/K
be C = I. The GMC control law, Eqn 8.36, for this case is

[ 0         0       (T_b − T)/V ] [ T_ci ]   [ F_a(T_a − T)/V − ΔH r/(ρC_p) − U_ar(T − T_c)/(VρC_p) ]
[ 0         F_a/V   −C_a/V      ] [ C_ai ] + [ −(F_a C_a/V + r)                                     ]
[ F_c/V_c   0       0           ] [ F_b  ]   [ −F_c T_c/V_c + U_ar(T − T_c)/(V_c ρC_p)              ]

    − K₁(x* − x) − K₂ ∫₀ᵗ (x* − x) dτ = 0

Now you will notice that the control law is a system of algebraic equations in the unknown u. For this case the control law is linear in u, because I had the freedom to choose the manipulated variables carefully. If, for example, I had chosen F_a rather than F_b as one of the manipulated variables, then the system would be control nonlinear, and hence more difficult to solve. But in this case the system is control linear, the GMC control law is a linear system of algebraic equations, and the 3 × 3 matrix to be inverted is sparse and can be permuted into triangular form, making the computation straightforward.
Simulating GMC with a nonlinear process

We will use a GMC controller to drive the states to follow a setpoint trajectory given by

x* = [300, 0.03, 290]ᵀ for 0 ≤ t < 5 min
x* = [310, 0.03, 290]ᵀ for 5 ≤ t < 10 min
x* = [310, 0.025, 295]ᵀ for t ≥ 10 min

starting from an initial condition of x_0 = [290, 0.035, 280]ᵀ, and with a relatively coarse sample time of T = 10 seconds. We set the shape factor ζ = 0.5 and the time constant for the response to τ = 20 seconds. This gives k₁ = 1/20 and k₂ = 1/400. The disturbance variables d are assumed constant for this simulation and are taken from Table 8.2.

The state derivative ẋ as a function of (x, u, d, θ, t) is evaluated using Eqns 8.45 to 8.47 and is given in Listing 8.7, which is called by the integrator ode45 in Listing 8.6.
Listing 8.6: GMC control of the nonlinear CSTR
% GMC CSTR simulation
% Ref: Litchfield, Campbell & Locke, Trans IChemE, Vol 57, 1979
x0 = [290, 0.035, 280]'; % initial condition
dt = 10; t = [0:dt:15*60]'; % sample time (seconds)
Xspt = [300+10*square(t/150,70), 0.03+0.005*square(t/150), ...
        295-5*square(t/150,30)]; % setpoint

d = [333 3.335e-6 333 3e-4]'; % disturbance vector (constant)
Ta = d(1); Fa = d(2); Tb = d(3); Fc = d(4);

k1 = 1/20; k2 = 1/400; % GMC tuning constants
K1 = eye(3)*k1; K2 = eye(3)*k2;

% Parameters for the CSTR
V = 0.0045; Vc = 0.0074; rho = 1000.0;
K = 16.6e6; ER = 5.6452e3; dH = -74.106e3;
Cp = 4.187; % kJ/kg/K
Uar = 0.175; % kW/K
rCp = rho*Cp;

x = x0; Ie = zeros(size(x)); X = []; U = [];
for i=1:length(t) % start simulation
  xspt = Xspt(i,:)'; % current setpoint
  T = x(1); Ca = x(2); Tc = x(3); % state variables
  Ie = Ie + (xspt-x)*dt; % update integral error
  r = K*exp(-ER/T)*Ca*Ca; % reaction rate
  R = [Fa/V*(Ta-T)-dH*r/rCp-Uar*(T-Tc)/(V*rCp); -(Fa*Ca/V+r); ...
       -Fc*Tc/Vc+Uar/rCp/Vc*(T-Tc)];
  A = [0 0 (Tb-T)/V; 0 Fa/V -Ca/V; Fc/Vc 0 0];
  u = A\(K1*(xspt-x)+K2*Ie - R); % solve the GMC control law for u
  % u = max(zeros(size(u)),u); % ensure feasible
  [ts,xv] = ode45('fcstr',[0,dt/2,dt],x,[],u, ...
      V, Vc, K, ER, dH, rCp, Uar, Ta, Fa, Tb, Fc); % integrate nonlinear ODEs
  x = xv(3,:)'; % keep only last state

  X(i,:) = x'; U(i,:) = u';
end % for
plot(t,[X,U]); % will need to rescale
Listing 8.7: The dynamic equations of the CSTR
function x_dot = fcstr(t,x,flag,u, ...
           V, Vc, K, ER, dH, rCp, Uar, Ta, Fa, Tb, Fc)
% CSTR dynamic model, called from gmccstr.m
T = x(1); Ca = x(2); Tc = x(3); % states
Tci = u(1); % input: coolant inlet temperature [K]
Cai = u(2); Fb = u(3); % inputs: feed concentration & flow of stream B

r = K*exp(-ER/T)*Ca^2; % reaction rate, kmol/m3/s
x_dot(1,1) = (Fa*Ta+Fb*Tb-(Fa+Fb)*T)/V - dH*r/rCp - Uar/V/rCp*(T-Tc);
x_dot(2,1) = (Fa*Cai - (Fa+Fb)*Ca)/V - r;
x_dot(3,1) = Fc*(Tci-Tc)/Vc + Uar*(T-Tc)/rCp/Vc;
return % fcstr.m
Physically, the manipulated variables are constrained to be positive, and unless these constraints are added explicitly to the GMC control law, negative values for some of the inputs will occur. In the above example I simply clamp the input when these violations occur. If you remove the u = max(zeros(size(u)),u); line from the control law, you will have the unconstrained version that ignores the physically realistic constraints, which is the case shown in the simulations given in Fig. 8.10.

Fig. 8.10 shows the response of the CSTR controlled by the GMC. The upper plots show the state trajectories and the lower plots give the manipulated variables used. Note that the controlled response is good, but not perfect, as we can still see some interaction between the states. The reactant concentration C_a in particular seems strongly coupled to the other states, and this coupling is not completely removed by the GMC controller. We could improve on this response by decreasing the sample time, or by tightening the controller design specifications.
Figure 8.10: The nonlinear CSTR controlled by a GMC controller where ζ = 0.5, τ = 20 seconds and sample time T = 10 seconds.
Upper: the state trajectories (temperatures T and T_c in °C, and reactant concentration C_a).
Lower: the manipulated variable trajectories T_ci and C_ai, and 5 × 10⁴ F_b, versus time in minutes.
Just for fun, we can plot a three-dimensional trajectory of the state variables of the CSTR under GMC control. Fig. 8.11 shows the actual state trajectory (solid line) converging to the 4 distinct setpoints (•) in state space. Note that the independent variable, time, is removed in this illustration.
Figure 8.11: A phase plot of T, T_c and C_a for the GMC-controlled CSTR example. The state trajectory is the solid line, and the setpoints are marked •.
Alternative manipulated variable choices

Manipulating the inlet concentration of A or the inlet temperature of the cooling fluid is not very practical. It is much easier to manipulate flow rates than temperatures or concentrations. Therefore in this situation a better manipulated variable vector would be u = [F_a, F_b, F_c]ᵀ rather than that used above. Note, however, that if we had chosen F_a, F_c and T_ci as u, we would have a control problem that is not only nonlinear, but also degenerate. If you construct the GMC control law of Eqn 8.36, you will get a structure such as
GMC control law of Eqn. 8.36, you will get a structure such as
_
_
0 0
0 0
0
_
_
_
_
F
a
F
c
T
ci
_
_
= g(x) +K
1
(x

x) +K
2
_
(x

x) (8.50)
where we know everything on the right-hand side of Eqn 8.50 (it is a constant for the purposes of this calculation), and wish to solve for u. Unfortunately, however, we can see from the structure of the matrix on the left-hand side of Eqn 8.50 that F_a appears alone in two equations, while the remaining two manipulated variables appear together in one equation. Thus this nonlinear algebraic system is degenerate, since in general we cannot expect a single unknown F_a to satisfy two independent conditions. Note that even though we do not have an overall degrees-of-freedom problem (we have three unknowns and three equations), we do have a sub-structure that has a degrees-of-freedom problem.
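The degeneracy is easy to see numerically. Using a hypothetical numeric stand-in for the structure matrix of Eqn 8.50 (the × entries replaced by arbitrary nonzero numbers), the matrix is rank-deficient, so the linear solve for u fails for almost every right-hand side:

```python
import numpy as np

S = np.array([[1.3, 0.0, 0.0],     # Fa appears alone in rows 1 and 2
              [0.7, 0.0, 0.0],
              [0.0, 2.1, 0.4]])    # Fc and Tci appear together in row 3

print(np.linalg.matrix_rank(S))    # prints 2, not 3: structurally singular

try:
    u = np.linalg.solve(S, np.array([1.0, 2.0, 3.0]))
except np.linalg.LinAlgError:
    print("singular matrix: no exact solution for a generic right-hand side")
```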
Problem 8.5
1. Re-run this simulation and test the disturbance rejection properties of the GMC controller. You will need to choose some suitable disturbance profile.

2. Re-run the simulation for different choices of GMC tuning constants. For each simulation, verify that the response is the linear response that you actually specified in the GMC design. (You may need to decrease the sampling interval to T ≈ 2 seconds to get good results.) Try, for example:
(a) τ = 20, ζ = 1.5
(b) τ = 5, ζ = 0.5
(c) any other suitable choice.
8.6 Exact feedback linearisation

For certain nonlinear systems we can derive controllers such that the closed-loop input/output behaviour is exactly linear. This is quite different from our traditional approach of approximating the nonlinearities with a Taylor series and then designing a linear controller based on the approximate model. Exact feedback linearisation design uses nonlinear results from the field of differential geometry which, to say the least, is somewhat abstract to most engineers. Lie algebra for nonlinear systems replaces matrix algebra for linear systems, and Lie brackets and Lie derivatives are the extensions of the normal matrix operations. A good tutorial on this topic, with applications of interest to the process industry, and from where most of this material was taken, is [105] and the follow-up [106]. A review of nonlinear control for chemical processes, including a section on exact feedback linearisation, is [28], and the classic textbook for this and other topics in nonlinear control is [187]. Extensions for when the process is not affine in input, and for systems where the relative degree is greater than unity, are discussed in [83].
8.6.1 The nonlinear system

We will deal only with SISO nonlinear systems of the form

ẋ = f(x) + g(x)u    (8.51)
y = h(x)    (8.52)

where f, g are vector functions (vector fields), h(x) is a scalar field, and the input u and output y are scalars. Note that the system described by Eqn 8.51 is written in affine form; that is, the manipulated variable enters linearly into the system. It is always possible to transform an arbitrary system into affine form by introducing a new variable. For example, given the non-affine system

ẋ = f(x, u)

we can define a new input, ν, which is the derivative of our original input, u̇ = ν, so that our augmented system becomes

d/dt [ x ]   [ f(x, u) ]   [ 0 ]
     [ u ] = [ 0       ] + [ 1 ] ν    (8.53)

which is now in affine form. Whether creating a new variable which requires differentiating the input is practical, or even feasible, under industrial conditions is a different matter.
8.6.2 The input/output feedback linearisation control law

Given the nonlinear system in Eqns 8.51 and 8.52, we will try to construct a control law of the form

u = p(x) + q(x)v    (8.54)

where v is the command signal or setpoint, such that the closed-loop system (obtained by substituting Eqn 8.54 into Eqns 8.51 and 8.52),

ẋ = f(x) + g(x)p(x) + g(x)q(x)v    (8.55)
y = h(x)    (8.56)

is linear from reference variable v to output y. The dashed box in Fig. 8.12 contains the nonlinearity but, as far as we are concerned viewing the system from the outside, it may as well be a linear black box.
[Block diagram: the command v(t) enters the gain q(x); the control u(t) = p(x) + q(x)v drives the nonlinear plant ẋ = f(x) + g(x)u, y = h(x); the state x(t) is fed back to both p(x) and q(x).]
Figure 8.12: The configuration of an input/output feedback linearisation control law
Note that for a controller in this configuration, the internal states will still be nonlinear. Notwithstanding this, once we have a linear relation from reference v(t) to output y(t), it is much easier to design output-based controllers to control this now-linear system, rather than the original nonlinear system. It is also clear from Fig. 8.12 that the controller requires state information to be implemented.

The control design problem is to select a desired linear response, perhaps by specifying the time constants of the desired closed loop, and then to construct the p(x) and q(x) functions in the control law, Eqn 8.54. These are given by the following relations:

p(x) = −( L_f^r h(x) + β₁ L_f^{r−1} h(x) + ⋯ + β_{r−1} L_f h(x) + β_r h(x) ) / ( L_g L_f^{r−1} h(x) )    (8.57)

q(x) = 1 / ( L_g L_f^{r−1} h(x) )    (8.58)

where the Lie derivative, L_f h(x), is simply the directional derivative of the scalar function h(x) in the direction of the vector f(x). Here, the Lie derivative is calculated by

L_f h(x) = f(x)ᵀ ∂h(x)/∂x = Σ_{i=1}^{n} f_i(x) ∂h(x)/∂x_i    (8.59)

One can apply the Lie derivative operator repeatedly, L_f² = L_f L_f, either to the same function or to different functions, as in L_g L_f².

The relative order of a system such as Eqns 8.51–8.52 is the smallest integer r such that

L_g L_f^{r−1} h(x) ≠ 0    (8.60)

An alternative interpretation of relative order is that r is the smallest order of derivative of y that depends explicitly on u. For linear systems, the relative order is simply the difference between the number of poles and the number of zeros of the system, i.e. n_a − n_b.
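As a quick illustration (Python/SymPy, on a hypothetical pendulum-like system, not the text's example): for ẋ = [x₂, −sin x₁]ᵀ + [0, 1]ᵀ u with y = x₁, we find L_g h = 0 but L_g L_f h = 1 ≠ 0, so the relative order is r = 2.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])       # drift vector field
g = sp.Matrix([0, 1])                  # input vector field
h = x1                                 # output

def lie(F, h, x):
    """Lie derivative L_F h = (dh/dx) F, Eqn 8.59."""
    return (sp.Matrix([h]).jacobian(x) * F)[0]

Lg_h = sp.simplify(lie(g, h, x))               # 0 -> r > 1
LgLf_h = sp.simplify(lie(g, lie(f, h, x), x))  # 1 -> r = 2
```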
Finally, the β_i in Eqn 8.57 are the user-chosen tuning constants that determine how our output will respond to the command v. They are defined by

d^r y/dt^r + β₁ d^{r−1}y/dt^{r−1} + ⋯ + β_{r−1} dy/dt + β_r y = v    (8.61)

Typically it is convenient to choose Eqn 8.61 as some sort of stable low-pass filter, such as a Butterworth filter.
Algorithm 8.4 Exact feedback linearisation design procedure

1. We are given a single-input/single-output nonlinear model (which may need to be transformed into affine form), Eqns 8.51–8.52.

2. Find the relative order, r, of the system using Eqn 8.60. This will govern the order of the desired response differential equation chosen in the next step. (If r = 1, perhaps a GMC controller is easier to develop.)

3. We choose a desired linear response which we consider reasonable, in the form of Eqn 8.61; that is, we need to choose r time constants or equivalent.

4. We create our exact linearising nonlinear controller, Eqn 8.54, using the relations Eqns 8.57 and 8.58 by constructing Lie derivatives. (A symbolic manipulator such as MAPLE may be useful here.)

5. Closing the loop creates a linear input/output system, which can be controlled by a further controller sitting around this system, designed by any standard linear controller design technique such as LMR or pole placement. (However, this is hardly necessary, since we have the freedom to choose any reasonable response in step 3.)
Cautions and restrictions

This technique is not applicable if the zero dynamics are unstable. This is analogous to the linear case where we are not allowed to cancel right-half-plane zeros with unstable poles. Technically it works, but in practice the slightest model mismatch will cause an unstable pole which in turn will create an unstable controller. This then excludes process systems with dead time. Clearly also, owing to the nature of the cancellation of the process nonlinearities, exact feedback linearisation is very sensitive to modelling errors. The technique demands state feedback, but only delivers output linearisation. Full state linearisation techniques do exist, but only under very restrictive conditions.
8.6.3 Exact feedback example
Building an exact feedback controller is relatively easy for well-behaved smooth nonlinear systems. The only calculus operation required is differentiation, and this poses few problems for automated computer tools. This example shows how to build such a controller using a simple program written for a symbolic manipulator. With this program, we can subsequently change the system, and regenerate the new nonlinear control law. This is a departure from all the other simulations presented so far, since here we can change the entire structure of the process in our program, not just the order or the values of the parameters, and subsequently re-run the simulation.
8.6. EXACT FEEDBACK LINEARISATION 383
Suppose we have a simple 3-state nonlinear SISO system,
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix}
= \underbrace{\begin{bmatrix} \sin(x_2) + (x_2+1)x_3 \\ x_1^5 + x_3 \\ x_1^2 \end{bmatrix}}_{f(x)}
+ \underbrace{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}_{g(x)} u \tag{8.62}
\]
\[
y = \underbrace{\begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x}_{h(x)} \tag{8.63}
\]
which is already in the form of Eqn. 8.51–Eqn. 8.52, or affine form, and we wish to construct an exact feedback linearised system following Eqn. 8.54. To do this we will follow Algorithm 8.4. Since we will need to compute some Lie derivatives analytically,
\[
L_f h(x) = f(x)^\top\, \frac{\partial h(x)}{\partial x}
\]
to establish the relative order r, it may help to write a small procedure to do this automatically. Listing 8.8 gives an example using the Symbolic toolbox.
Listing 8.8: Find the Lie derivative for a symbolic system
function L = lie_d(F,h,x)
% Find the Lie derivative Lf h(x)
% Note jacobian returns a row vector
L = F'*jacobian(h,x)'; % f(x)' * dh(x)/dx
return
First we define the actual nonlinear functions f(x), h(x), and g(x) for our particular problem.
% define: states must be real or we will have conjugate problems
syms x1 x2 x3 real
x = [x1 x2 x3]';
F = [sin(x2)+(x2+1)*x3; x1^5+x3; x1^2]; % Eqn. 8.62
h = [1 0 0]*x
g = [0 0 1]'
The Lie derivatives are easy to construct,
\[
L_f h(x) = f(x)^\top \frac{\partial h(x)}{\partial x}
= \begin{bmatrix} \sin(x_2) + (x_2+1)x_3, & x_1^5+x_3, & x_1^2 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
= \sin(x_2) + (x_2+1)x_3
\]
and check in MATLAB using the routine given in Listing 8.8
>> lie_d(F,h,x)
ans =
sin(x2)+(x2+1)*x3
384 CHAPTER 8. MULTIVARIABLE CONTROLLER DESIGN
To find the relative degree, we simply repeatedly call the function in Listing 8.8, starting at r = 1 and incrementing r until $L_g L_f^{r-1} h(x) \neq 0$, at which point we have established the relative order.
Listing 8.9: Establish relative degree, r (ignore degree 0 possibility)
Lfn = h % Dummy start
LgLfnh = lie_d(g,h,x) % Lg Lf^0 h(x) case
r = 1; % relative degree counter
while LgLfnh == 0
    r = r+1; % increment relative degree counter
    Lfn = lie_d(F,Lfn,x) % need to build-up Lf^n h
    LgLfnh = lie_d(g,Lfn,x) % stop when non-zero
end % while
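For the system of Eqn. 8.62 this loop stops at the second pass. A quick hand check, consistent with the routine of Listing 8.8:
\[
L_g h(x) = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 0,
\qquad
L_g L_f h(x) = \frac{\partial}{\partial x_3}\bigl(\sin(x_2) + (x_2+1)x_3\bigr) = x_2 + 1
\]
which is non-zero (away from the singular point $x_2 = -1$), so the relative degree is r = 2.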
Now that we know the relative degree r, we can design our desired response filter.
Listing 8.10: Design Butterworth filter of order r.
[z,p,K] = buttap(r) % Low-pass Butterworth filter analogue prototype
beta = real(poly(p));
%[B,beta] = butter(r,2,'s') % Alternative: but must scale with B(end).
and now construct the controller components p(x) and q(x) to form the closed loop.
Listing 8.11: Symbolically create the closed loop expression
syms v real % command input v
qx = 1/LgLfnh; % Controller components q(x), p(x), Eqn. 8.57, Eqn. 8.58.
px = beta(r+1)*h;
Lfn = h;
for i=r-1:-1:0
    Lfn = lie_d(F,Lfn,x); % lie_d and F as defined earlier
    px = px + beta(i+1)*Lfn;
end
px = -px*qx;
xdot = F+g*(px+qx*v) % Closed loop, Eqn. 8.55, hopefully linear.
In practice, we are only interested in the control law
>> (px+qx*v) % Control law, u = p(x) + q(x)v
ans =
(-4*x1-2*2^(1/2)*(sin(x2)+(x2+1)*x3)-(x1^5+x3)*(cos(x2)+x3)-x1^2*(x2+1))/(x2+1)+1/(x2+1)*v
which could, in principle, now be implemented on the plant.
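The coefficients in this printed law can be traced back to the filter polynomial. A sketch of where they come from: with r = 2, the commented-out alternative butter(r,2,'s') in Listing 8.10 (cutoff frequency 2) gives the target denominator
\[
s^2 + 2\sqrt{2}\,s + 4 \quad\Rightarrow\quad \beta_1 = 2\sqrt{2},\ \beta_2 = 4
\]
which matches the $-4x_1$ and $-2\cdot 2^{1/2}$ terms above, whereas the unscaled buttap prototype would instead give $\beta = [1,\ \sqrt{2},\ 1]$.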
A good test for this controller is to see if the closed loop response is indeed indistinguishable from the specified linear filter. As demonstrated in Fig. 8.13, there is no discernible difference, indicating that the nonlinear controller design worked as intended in this instance.
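A minimal way to reproduce this check without SIMULINK is to hard-code the control law above and integrate the closed loop with ode45. This is a sketch: the square-wave excitation, its period, and the filter scaling are assumptions chosen to mimic Fig. 8.13, not taken from the book.

```matlab
% Simulate the exact-feedback-linearised closed loop of Eqn. 8.62 and
% compare the output y = x1 with the desired linear filter response.
v = @(t) sign(sin(2*pi*t/40));          % assumed square-wave command
u = @(x,vt) (-4*x(1) - 2*sqrt(2)*(sin(x(2))+(x(2)+1)*x(3)) ...
     - (x(1)^5+x(3))*(cos(x(2))+x(3)) - x(1)^2*(x(2)+1) + vt)/(x(2)+1);
rhs = @(t,x) [sin(x(2))+(x(2)+1)*x(3);  % f(x) + g(x)u from Eqn. 8.62
              x(1)^5+x(3);
              x(1)^2 + u(x,v(t))];
t = (0:0.05:100)';                      % uniform grid (lsim needs one)
[t,x] = ode45(rhs,t,[0;0;0]);
yref = lsim(tf(1,[1 2*sqrt(2) 4]),v(t),t); % target filter 1/(s^2+2*sqrt(2)s+4)
plot(t,x(:,1),t,yref,'--')              % traces should overlay, cf. Fig. 8.13
```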
Problem 8.6 Create an exact nonlinear feedback controller for a penicillin fermentation process.
Figure 8.13: This example of exact feedback linearisation shows that there is no practical difference between the output of the nonlinear system and the desired linear filter. (a) SIMULINK configuration; (b) the closed loop response of the controlled nonlinear system compared to the desired linear response.
The system dynamics are:
\[
\frac{dX}{dt} = \mu(S,X)X - DX \tag{8.64}
\]
\[
\frac{dS}{dt} = -\sigma(S,X)X + D(S_f - S) \tag{8.65}
\]
\[
y = X \tag{8.66}
\]
where X, S are the cell mass and glucose concentration, D is the dilution rate and manipulated variable, and S_f is the feed glucose concentration, considered constant in this example. The empirical growth rate expressions are:
\[
\mu(S,X) = \frac{\mu_x S}{K_x X + S} \tag{8.67}
\]
\[
\sigma(S,X) = \mu(S,X)/Y_{xs} + \pi(S)/Y_{ps} + m \tag{8.68}
\]
\[
\pi(S) = \frac{\mu_p S}{K_p + S(1 + S/K_i)} \tag{8.69}
\]
and for simulation purposes, use parameter values as

μx = 1.5    Kx = 2.34
μp = 3.45   Kp = 1.56
Yxs = 0.45  Yps = 0.78
Ki = 0.89   m = 0.34
and manipulated, disturbance and initial conditions as

manipulated:        D    0 < D < 1
disturbance:        Sf   1.25 (could also be varied)
initial conditions: X0 = 0.05, S0 = 0.6

Choose an appropriate desired response and design a linearising controller using Eqn 8.54. Test the closed loop response by simulation and verify that your input/output response is actually linear.
Problem 8.7
1. What is the relative order of the following nasty system from §3.5?
\[
\dot{x}_1 = x_2
\]
\[
\dot{x}_2 = -6x_1 - 7x_2 + 24x_3 + 10x_2^2 + u
\]
\[
\dot{x}_3 = -x_3 x_2 - x_2^5
\]
\[
y = x_1
\]
2. Implement and simulate an exact linearising controller for this system.
3. What problems do you face when you try to construct an input/output GMC controller for this system? What requirement therefore must be satisfied to apply GMC?
4. Choose a suitable alternative measurement function such that GMC will work, and implement this as a MATLAB simulation. Compare the performance of this controller with one developed using geometric methods.
8.7 Summary
This chapter introduced the theory and technique of the design of state space control systems. Writing the dynamic equations in a discrete matrix form is both convenient for complex systems, and intuitive since the design is done in the time domain. The major disadvantage is the fact that the computation is more complex, and is the reason why the design is typically carried out on a computer. The important conclusions of this chapter are:

Controller design by pole placement means that we choose a gain matrix K, such that the closed loop system (Φ − ΔK) has a desired dynamic response or, alternatively, has the desired eigenvalues. To do this, the system must be controllable.

We can design observers L in much the same way as controllers.

However, in this chapter we only covered the design of single input–multiple output controllers, or single measurement–multiple state observers. The true multivariable case is under-constrained, and is covered in chapter 9.

Control of a nonlinear process may require a different approach and typically more computation. Generic model control or GMC is a simple technique that can be used to control a nonlinear plant in an optimal manner. Many of the so-called difficult processes in chemical process control are due to nonlinearities and these are well suited to techniques such as GMC. An alternative, more general nonlinear controller, particularly suitable for SISO nonlinear systems in control affine form, is one where a controller is sought that transforms the nonlinear system into a linear open-loop system that can subsequently be controlled using the far more extensive library of linear controllers.
Chapter 9
Classical optimal control

"God does not throw dice"? Nor is it our business to prescribe to God how He should run the world.
Attributed to Niels Bohr replying to Einstein.
9.1 Introduction
This, and the following chapter, is concerned with designing optimal controllers. In practice optimal controllers are complicated owing to issues such as plant nonlinearities, and constraints on state and manipulated variables for example, but for some applications the improvement in controlled response is well worth the implementation effort. Following a small aside demonstrating optimal tuning of a known controller, we will begin with the general optimal control problem in §9.3, develop a solution strategy and test it on a batch reactor example in §9.2.4. Building on the general nonlinear optimisation material, we will make some simplifying assumptions, and develop an optimal linear controller in §9.4.

Chapter 8 described the pole placement method of controller design for single input/single output (SISO) systems. It was also briefly mentioned that the evaluation of the gain matrix K for MIMO systems is now under-constrained and an infinite number of possible controller configurations satisfy the original design specifications. However, by using these extra degrees of freedom, optimal controllers are possible. Section 9.4 will describe an optimal multivariable controller termed a Linear Quadratic Regulator (LQR). Section 9.5 will present the dual problem of an optimal estimator, the Kalman filter.

Model predictive control, which is the subject of chapter 10, lies somewhere between the full generic optimal control problem, and the linear quadratic control. This chapter introduces the concept of a moving horizon for the optimisation and compares a computationally intensive but very general nonlinear optimal scheme with a much faster linear optimal controller based on a FIR model.
388 CHAPTER 9. CLASSICAL OPTIMAL CONTROL
9.2 Parametric optimisation
The simplest type of optimisation problem is known as parametric optimisation where we attempt to find the best values of a finite number of parameters. Finding the best three PID tuning constants to minimise say the integral of square error is an example of parametric optimisation described in §9.2.1.

Alternatively we could try to find the best shape of a trajectory for a missile to hit a target whilst say minimising energy. In principle the shape may require an infinite number of discrete parameters to completely describe it, perhaps using a Taylor series. This then becomes an infinite dimensional parametric optimisation problem, so we must tackle it with other techniques such as variational calculus.
9.2.1 Choosing a performance indicator
Optimal control is where you try to extremise something to get the best controlled result. To optimise, we must at least be able to quantify the controlled performance, that is, to put numbers to the quality of control. Typically however, this assessment is, in practice, a tricky function of many variables such as process economics, safety, customer satisfaction etc., which are all very difficult to quantify.

We call the value that quantifies the performance the objective function, and typically use the symbol J. Some authors use the term performance index or perhaps profit function, and naturally we would like to maximise this quantity. However by convention most numerical software packages are designed to minimise the objective function, and we certainly do not want to minimise the profit function, but rather minimise the cost function. Consequently the term cost function is often used in optimisation where it is assumed that we are interested in minimising the objective function. For our purposes, where we are typically interested in minimising the error residuals, we will be using cost function for our objective function.
A simple example of optimal control is the optimal tuning of a PID controller for a given process responding to a setpoint change. First we must choose an objective function that we wish to optimise, often some sort of cost term (which we want to minimise), and then we search for the tuning variables that minimise this cost. Since it is sometimes difficult to correlate cost to performance in a simple manner, we often choose a performance objective more easily quantifiable such as the integral of the difference between the actual process and the setpoint. This is termed the integral of the absolute error or IAE performance index,
\[
J_{\text{IAE}} = \int_0^\infty |\epsilon_t| \, dt \tag{9.1}
\]
where ε_t is defined as the difference between setpoint and actual, r_t − y_t, at time t. Classically control engineers have tried to design closed-loop control systems that behave similarly to a second-order, slightly under-damped process. From experience this is found to be a reasonable trade-off between speed of response, overshoot and ease of design. We can plot both the process response, and visualise the area given by Eqn. 9.1 we wish to minimise using the floodfill command fill.

t = linspace(0,10)';
[num,den] = ord2(1,0.54); % Choose zeta = 0.54
y = step(num,den,t); % do simulation
fill([t;0],[y;1],'c') % floodfill area as shown in Fig. 9.1.
9.2. PARAMETRIC OPTIMISATION 389
A damping factor of ζ = 0.54 is near the recommended value.

Figure 9.1: The area of the shaded figure is the approximate value of the performance integral. You could try other values for ζ, say 0.1.
Other common performance index definitions are given in Table 9.1. Classically the least-squares index, ISE, was popular because for simple problems the solution for the optimum is analytical. However the so-called robust techniques such as IAE or ITAE are now popular, although a nonlinear optimiser is required to find the optimum tuning parameters.

Fig. 9.2 shows the steps to numerically compute the ITAE performance index for a given controlled response. In this case the ITAE is 3.85 cost units. Notice how the time weighting emphasizes deviations late in the response.

We can compare the performance of a second order process with different damping ratios to find the optimum, ζ*, for a given criterion.
t = linspace(0,10)'; dt = t(2)-t(1);
zeta = linspace(0.01,1.5,100)'; % trial values
Perf = [];
for i=1:length(zeta) % For each zeta ...
    [num,den] = ord2(1,zeta(i)); % 2nd order
    y = step(num,den,t); % do simulation
    err = 1-y; % error
    Perf(i,:) = trapz(t,[abs(err) err.^2 t.*abs(err) t.*err.^2]);
end % for
plot(zeta,Perf) % Find optimum for each index as shown in Fig. 9.3.
These curves are given in Fig. 9.3 from which we can extract the optimum damping ratios which are summarised in Table 9.1. Note that the optima are all in the 0.5 to 0.75 region. Ogata, [150, pp. 295–303], and especially Fig. 4-38 on page 298, compares these alternatives in a similar manner.
9.2.2 Optimal tuning of a PID regulator
For example, say we wish to tune a PID controller to minimise the integral of the absolute error times time of a response to a step change in setpoint. For this example I will choose the ITAE performance index, which is a slight modification of the IAE performance index given previously in Eqn. 9.1,
\[
J = \int_0^\infty t\, |\epsilon_t| \, dt \tag{9.2}
\]
Figure 9.2: A breakdown of the ITAE performance index. The top plot shows the controlled response, and the second plot the error. The third and fourth plots show the difference between the IAE and the ITAE. The cumulative integral under the curve in the fourth plot (shaded portion) gives the overall performance. [Panels: y and setpoint; error = (y − r); |(y − r)|; |(y − r)|·t; cumulative ∫|(y − r)|·t dt giving ITAE = 3.85.]
Figure 9.3: Various performance criteria (ISE, ITSE, IAE, ITAE) as a function of damping ratio ζ.
Table 9.1: Common integral performance indices and the optimum shape factor, ζ*, assuming a second order prototype response.

Name                                  J                   ζ*      comment
Integral Squared Error, ISE           ∫₀^∞ ε_t² dt        0.507   analytical, Gauss noise
Integral Time Squared Error, ITSE     ∫₀^∞ t ε_t² dt      0.597   encourages ISE to stabilise rapidly
Integral Absolute Error, IAE          ∫₀^∞ |ε_t| dt       0.657   recommended index
Integral Time Absolute Error, ITAE    ∫₀^∞ t |ε_t| dt     0.748   similar to ITSE
The parameters, θ, we can adjust in this example are the PID tuning constants K_c, τ_i and τ_d, while the controller structure is fixed as a three term PID controller. The optimisation problem is to choose suitable parameters θ such that the scalar J is minimised subject to the constraint that
\[
\frac{Y(s)}{R(s)} = \frac{G_c G_p}{1 + G_c G_p} \tag{9.3}
\]
where R(s) = 1/s. Here G_c is assumed a PID controller, and we will also assume that the manipulated variable is unbounded. Since we chose the ITAE performance index, (Eqn 9.2), no analytical solution will exist for the optimisation. The ITAE is a good performance index, but difficult to analytically manipulate, hence the solution is best achieved numerically.
First the performance index, J, must be written in terms of the parameter vector θ. To do this, we must insert our particular process model into Eqn. 9.3 and also substitute the PID controller transfer function with the unknown parameter vector θ. One can then algebraically invert Eqn 9.3 to obtain a time domain solution y_t if the model is sufficiently simple, or we could simulate the closed loop system for a given θ. The error ε_t is y_t − y*_t where y*_t is the reference setpoint, in this case 1. One then substitutes this equation into Eqn 9.2 and numerically integrates to calculate the performance index J. Finally any numerical optimisation routine can be used to locate the minimum J as a function of the PID tuning parameters. Many such general purpose optimisers exist, such as Powell's conjugate directions, the Nelder–Mead simplex, etc. (See also [201] for optimisation routines in MATLAB.)

The hard part in attempting to optimally tune PID controllers is locating the optimal solution. In the above description, it was done via simulation using an iterative numerical technique. This solution technique is completely general and can be applied to systems with nonlinear process models, bounds on the manipulated variables and complex performance indices. It is however often slow and computationally expensive, and prone to fall into local minima etc., as is any numerical technique.
Example: Optimal PID tuning. Suppose we wish to optimally tune a PID controller given the non-minimum phase plant,
\[
G_p(s) = \frac{-3s + 1}{(s^2 + 0.8s + 0.52)(s + 4)(s + 0.4)} \tag{9.4}
\]
We can see the non-minimum phase from the right-hand zero s = +1/3 from the numerator term. G_p(s) has two complex and two real poles.

Simulating the open-loop model gives us some information about the severity of the inverse response and the dominant time constants.
>> Gp = zpk(1/3,[-0.4, -4, -0.4+0.6i, -0.4-0.6i],-3)
Gp =
         -3 (s-0.3333)
  ---------------------------------
  (s+0.4) (s+4) (s^2 + 0.8s + 0.52)
>> step(Gp) % [Step response plot, 0 to 20 seconds, showing the inverse response]
We can use fminsearch (which uses the Nelder–Mead simplex method), or one of the optimisation routines from the OPTIMISATION toolbox to do the searching for parameters. Listing 9.1 returns the IAE performance for a given plant and PID controller tuning constants subjected to a step test.
Listing 9.1: Returns the IAE performance for a given tuning.
function [j,y] = foptpidt(x,Gp)
% Computes the IAE for a given PID tuning
% Called via a numerical optimiser via optpidt.m
Kc = x(1); taui=1/x(2); taud=x(3); % PID controller parameters
Gc = tf(Kc*[taui*taud, taui, 1],[taui, 0]); % PID controller
Gcl = feedback(Gc*Gp,1); % closed loop GcGp/(1+GcGp)

tsim = 50; dt=0.1; t = [0:dt:tsim]';
y = step(Gcl,t); % Now do simulation
j = sum(abs(y-1))*dt; % J(x) = integral from 0 to 50 of |error| dt
return
The main script file that calls the optimiser to establish the best PID tuning constants is given in Listing 9.2. Here we specify the plant of interest which we would like to tune, Eqn. 9.4, some trial PID constants as a first estimate, and then we call the numerical search routine to refine these estimates.

Listing 9.2: Optimal tuning of a PID controller for a non-minimum phase plant. This script file uses the objective function given in Listing 9.1.
Kc = 0.5; taui=0.2; taud=1.0; % trial start PID constants
x0 = [Kc,taui,taud]'; % trial tuning vector
Gp = zpk(1/3,[-0.4,-4,-0.4+0.6i,-0.4-0.6i],-3); % Gp(s) of Eqn. 9.4

Perf = @(x) foptpidt(x,Gp); % J(x) = integral from 0 to 50 of |error| dt
optns = optimset('TolFun',1e-2,'MaxIter',200); % optimisation tolerances
x = fminsearch(Perf,x0,optns) % Return optimal Kc, 1/taui, taud

[jbest,y] = foptpidt(x,Gp); % best-shot
t = [0:0.1:50]';
plot(t,y,t,ones(size(y)),'--'); % Refer Fig. 9.4.
In the above example we used the IAE performance objective, but we could just as well have chosen other alternatives such as ISE or ITAE, and we would see slightly different optimal tuning parameters. The following table gives the values to which I converged for the three commonly used performance indices.
Performance   K_c*    reset*, 1/τ_i    τ_d*
IAE           0.407   0.382            1.43
ISE           0.359   0.347            1.93
ITAE          0.388   0.317            1.17

A comparison of the closed-loop response with a PID controller using these tuning constants is given in Fig. 9.4. You can see that all responses are similar, although I personally favour the ITAE alternative.
Figure 9.4: A comparison of the optimal PID tuning for a step response of a non-minimum phase system using IAE, ISE and ITAE performance objectives. [∫ε² = 7.100 (ISE); ∫|ε| = 7.267 (IAE); ∫t|ε| = 7.352 (ITAE).]
Implementation details
To be honest, I have cheated somewhat in the above example. Optimisation, even for small dimensional problems such as this, is always difficult and typically needs some expertise to get working properly. For the above case, even though I chose good starting guesses for the tuning constants, the optimiser still had some numerical conditioning problems while searching for the solution. We can avoid these problems partly by encouraging the optimiser to search in locations that we think the solution will be, and also by telling the optimiser explicitly which parameter spaces to avoid, such as negative values. If the optimiser tentatively tries a negative controller gain, the closed loop system will be unstable, and the objective function meaningless. However the computer program does not know this, and will still try to extract an improved estimate from this information. Typically in these types of examples, the objective function loses any real dependence on the parameters, and the response surface becomes flat, or worse, vertical! This is the principal cause behind the poor conditioning warnings MATLAB gives sometimes.

The final point to watch out for in an optimisation problem such as this is that the optimum does not lie far outside any reasonable tuning values. For first or second order systems, the optimum controller gain is infinity, and the optimiser will naturally tend in this direction. In this case either constraints are needed, or the objective function can be modified to take into account the vigorous controller action, or a new more complex model can be developed.
Optimal PID tuning #2. Finding optimal PID tuning constants for actual plant equipment is a good example of practical optimisation. In this example, we wish to find the best K_c and τ_i values for a step response of the black box,
\[
G_{bb}(s) = \frac{0.002129s^2 + 0.02813s + 0.5788}{s^2 + 2.25s + 0.5362}\, e^{-0.5s},
\]
presented in Fig. 1.4, using an ISE performance criterion.
presented in Fig. 1.4 using an ISE performance criteria.
A contour of the performance surface is given in Fig. 9.5 from which we can directly identify the
minimum corresponding to the tightest control is obtained when using K
c
4.4 and
i
9.8. For
comparison, you can see from the insert plots of the error, that for small gains and large integral
times, the response is too sluggish, while for large gains and small integral times (large integral
actions), the response is overly oscillatory.
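The exhaustive grid search behind a contour plot like Fig. 9.5 can be sketched as follows. This is an assumed reconstruction, not the book's script; the grid spans and step sizes are guesses.

```matlab
% ISE performance surface for PI control of the delayed blackbox model Gbb.
s   = tf('s');
Gbb = (0.002129*s^2 + 0.02813*s + 0.5788)/(s^2 + 2.25*s + 0.5362);
Gbb.InputDelay = 0.5;                       % 0.5 time unit deadtime
Kc = linspace(0.5,8,25); taui = linspace(2,30,25);
t  = (0:0.05:40)'; J = zeros(length(taui),length(Kc));
for i = 1:length(taui)
    for k = 1:length(Kc)
        Gc = Kc(k)*(1 + 1/(taui(i)*s));     % PI controller
        y  = step(feedback(Gc*Gbb,1),t);    % closed-loop setpoint step
        J(i,k) = trapz(t,(1-y).^2);         % ISE
    end
end
contour(Kc,taui,J,30), xlabel('K_c'), ylabel('\tau_i')
```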
Figure 9.5: Optimum PI tuning of the blackbox plant. The contour plot shows the ISE performance for a range of gains and integral times while the inserts show the actual error given a step change in reference at those specific K_c, τ_i combinations. The optimum tuning constants are K_c = 4.4 and τ_i = 9.8.
Naturally the optimum is only for a step test using our specific model. However the results for other similar input scenarios should be similar. To construct the contour plot of Fig. 9.5 we used an exhaustive grid search which requires lots of trials, hence the use of a model. An alternative, and more efficient, strategy is to use a nonlinear numeric optimiser, which is the subject of the next section.
9.2.3 Using SIMULINK inside an optimiser
In cases where we have a complex model with nonlinearities or time delays (such as the black box model of Fig. 9.5), it may be more convenient to use SIMULINK inside the objective function of an optimiser. In this case we can use the sim command to execute a SIMULINK simulation from within the objective function, but we must take care to tell SIMULINK to obtain its parameters from the current workspace as opposed to the global workspace, which it will do by default. Further implementation details for these types of optimisation problems using SIMULINK are given in [57, §8.6].
Example: Find the optimum PI tuning constants for controlling the plant
\[
G(s) = \frac{2}{0.25s^2 + 0.6s + 1}
\]
The SIMULINK model (named in this example sopt_pi.mdl) is given in Fig. 9.6.

Figure 9.6: SIMULINK model where the PI tuning constants are to be optimised. Note that we export the error in an output block at the highest level and that the PI controller has parameters K and taui which are to be optimised. [Blocks: Step, PI ctrl K(τ_i s + 1)/(τ_i s), Plant 2/(0.25s² + 0.6s + 1), Scope, and an error output block.]

The objective function is called via the optimiser (fminunc in this case), which in turn executes the SIMULINK model from Fig. 9.6 and returns the time-weighted integral of the square error as a performance index, as given in Listing 9.3.
Listing 9.3: Returns the ITSE using a SIMULINK model.
function j = foptslink_pi(x)
% Optimisation objective function that will call a SIMULINK model

K = x(1); taui = x(2); % parameters to be optimised
t0 = 0; tf = 20; h = 0.1; % start & finish times

% System parameters used by SIMULINK
opts = simset('SrcWorkspace','current','DstWorkspace','current');
[t,states,error] = sim('sopt_pi',[t0:h:tf],opts); % run SIMULINK model
j = trapz(t,t.*error.^2); % compute ITSE
return
From the MATLAB command line we call the general unconstrained optimiser to compute the two optimum PI tuning constants.

>> x0 = [0.2, 0.1]'; % Start guess for Kc and taui.
>> x = fminunc(@foptslink_pi,x0) % See function in Listing 9.3.

Optimization terminated successfully:
 Current search direction is a descent direction, and magnitude of
 directional derivative in search direction less than 2*options.TolFun
x =
   66.5402
   25.6549

The optimisation routine found that a gain of K_c = 66.5 and an integral time of τ_i = 25.7 minimised our integral-time squared error performance index.
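The same optimum can be cross-checked without SIMULINK by forming the closed loop directly in MATLAB. This is a sketch, assuming the same PI structure K(τ_i s + 1)/(τ_i s) as in Fig. 9.6; as noted earlier, an unconstrained search with an unbounded gain can behave poorly, so treat the result with care.

```matlab
% ITSE objective built from the linear closed loop, then minimised.
s    = tf('s');
G    = 2/(0.25*s^2 + 0.6*s + 1);
t    = (0:0.1:20)';
itse = @(x) trapz(t, t.*(1 - step(feedback(x(1)*(x(2)*s+1)/(x(2)*s)*G,1),t)).^2);
x    = fminsearch(itse,[0.2;0.1])  % compare with the fminunc result above
```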
9.2.4 An optimal batch reactor temperature policy
This example illustrates how to establish an optimal input profile u(t) for a given system and initial conditions. In this section we are going to approximate the smooth profile with a small number of constant variables and we are going to use parametric optimisation. Later in section 9.3.3 we will repeat this optimisation problem, but solve for the continuous profile using functional optimisation.

Suppose we have a batch reactor as shown in Fig. 9.7 in which two chemical reactions are carried out in series,
\[
A \xrightarrow{k_1} B \xrightarrow{k_2} C \tag{9.5}
\]
The first reaction produces the desired product (B) which is what we wish to maximise after two hours reaction duration, but the second reaction consumes our valuable product, and produces a byproduct C, which naturally we wish to minimise.
Figure 9.7: Production of a valuable chemical in a batch reactor. [Sketch: a batch reactor of A with a heating coil fed by steam (hot) in and condensate (cold) out; the reactor is opened and the product removed after 2 hours.]
In this case, the only manipulated variable is the jacket temperature, u ≜ T, and we will consider the concentrations of A and B to be the two states, x ≜ [c_a, c_b]^T. As the jacket temperature varies, so does the reaction temperature, which in turn affects the Arrhenius rate constants (k_1, k_2), which changes the ratio of desired product to by-product. The plant dynamics are given by the reaction kinetics and initial conditions are
\[
\frac{dx_1}{dt} = -2e^{-6/u} x_1, \qquad x_1(t=0) = 0.9
\]
\[
\frac{dx_2}{dt} = 2e^{-6/u} x_1 - 5e^{-11/u} x_2, \qquad x_2(t=0) = 0.1
\]
Our objective for the operation of this reactor is to maximise the amount of B after a given reaction time, in this example t = 2 hours, by adjusting the temperature over time as the reaction progresses, or
\[
J = x_2(t=2) + \int_0^{t=2} 0 \, dt \tag{9.6}
\]
If we decide to operate at a constant temperature, then Fig. 9.8(a) shows us which temperature is the one to maximise the amount of B at the end of the reaction. Alternatively, if we decide to run the batch reactor for 1 hour at one temperature, and the second hour at another, then we would expect the optimum to improve. Fig. 9.8(b) shows the three dimensional response surface for this case.
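The single constant-temperature case is a one-parameter search and can be sketched directly. The function name is a demo choice, and the search interval 1 < u < 7 is assumed from the axis range of Fig. 9.8(a):

```matlab
% Best single constant jacket temperature for the batch reactor.
uopt = fminbnd(@(u) -CbFinal(u), 1, 7)  % maximise x2(t=2) over u

function cb = CbFinal(u)
% Integrate the reaction kinetics over the 2 hour batch
rhs = @(t,x) [-2*exp(-6/u)*x(1); 2*exp(-6/u)*x(1) - 5*exp(-11/u)*x(2)];
[~,x] = ode45(rhs,[0 2],[0.9; 0.1]);
cb = x(end,2);                          % concentration of B at t = 2
end
```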
Figure 9.8: By using two temperatures, we can improve (slightly) the final product concentration. (a) Optimisation of a single constant temperature; (b) the response surface of the objective function when using two temperatures.
We could in principle continue this discretisation of the input profile. Fig. 9.9 shows the situation when we decide to use three temperatures equally spaced across the 2 hour duration of the batch. In this case the objective function becomes a response volume so we must use volume visualisation techniques to see where the optimum lies. Fig. 9.10 compares the optimum trajectories for the case where we select 1, 2, 3, 5 and 7 equally spaced temperature divisions. Also given is the limiting case where the number of temperature divisions tends to ∞. The analytical computation for this latter case is given in section 9.3.3.
9.3 The general optimal control problem
The optimal control problem has the following characteristics:
1. A scalar performance index or cost function, J(u) that is to be minimised (or maximised)
by adjusting the manipulated variables u(t).
Figure 9.9: Temperature profile optimisation using 3 temperatures: (a) the response volume when using 3 temperatures; (b) isosurfaces of the response volume better showing where the optimum point in 3D space is. Compare with Fig. 9.8.
Figure 9.10: The optimum temperature profiles for cases where we have different numbers of temperature divisions. Note how, as the number of allowable temperature divisions increases, the discretised trajectory converges to the smooth analytical optimum profile.
2. Constraints that must be either:
(a) continuously satisfied, that is, the constraint must be satisfied for all time, or
(b) satisfied only at the end of the optimisation problem, a terminal constraint.
These constraints can either be hard (they must never be violated under any conditions), or soft (they can be violated under special conditions).
3. The optimisation horizon is the length of time over which the optimisation is allowed to take place. Sometimes the horizon is infinite, but in practical cases the horizon has some upper limit over which an optimal solution is sought to minimise the objective function.
The optimal control problem can be written as a standard mathematical optimisation problem, thus many algorithms, and therefore computer programmes, exist to give a solution. Two common solution procedures are Pontryagin's maximum principle and dynamic programming. [63, pp 16–20] or [51] covers these aspects in more detail, and a simple introduction to Pontryagin's maximum principle is given in [172, Chpt 13] and [168, pp 84–105].
The optimal control problem as described above is quite general, and the solution will be different for each case. Section 9.3.1 develops the equations to solve the general optimal control problem using variational calculus, and Section 9.2.4 demonstrates this general approach for a simple nonlinear batch reactor application. The accompanying MATLAB simulation clearly shows that the computation required is involved, and motivates the development of less general linear optimal control designs. This results in the Linear Quadratic Regulator (LQR) described subsequently in Section 9.4.
There are many other texts that describe the development of optimal controllers. Ogata has already been mentioned, and you should also consult [100, p226] for general linear optimal control, [163, 167, 168] for chemical engineering applications; [38] is considered a classic text in the optimisation field. A short summary available on the web is [45]. A survey really highlighting the paucity of applications of linear quadratic control (which is a simplified optimal controller for linear systems) in the process industries is [97].
9.3.1 The optimal control formulation
To develop our optimal controller, we need to define, as listed in section 9.3, an objective function, a feasible search space, and list our constraints. The Optimal Control Problem, or OCP, is: given a general nonlinear dynamic model,

\dot{x} = f(x, u, t), \qquad x(t=0) = x_0 \tag{9.7}

we wish to select u(t) to minimise the scalar performance criterion

J = \underbrace{\overbrace{\phi(x(t_f))}^{\text{Mayer problem}} + \overbrace{\int_0^{t_f} L(x, u, t)\, dt}^{\text{Lagrange problem}}}_{\text{Bolza problem}} \tag{9.8}
This performance criterion J(u) to be minimised is quite general, is usually related to process economic considerations, and is often called a cost function. Technically Eqn. 9.8 is termed a functional since it maps the function u(t) to the scalar J. The first term of Eqn. 9.8 is called the termination criterion, and is only concerned with the penalty of the final state. The second integral term is concerned with the cost incurred getting to the final state. This final term is typically some sort of ISE or IAE type term. For example, if

L(x, u, t) = x^\top Q x

where the matrix Q is a positive definite matrix, then this term computes the squared deviations in the state variables and is used when we are concerned with the cost of deviations in state variables. Alternatively, if

L(x, u, t) = 1

then we are concerned solely with the total time for the optimisation (which we want to minimise). If only the terminal criterion, φ, is present, we have a Mayer problem; if we are only concerned with the integral term, we have a Lagrange problem; and if we have both terms, then we have what is known as a Bolza problem. We can convert between all three types of problems by introducing extra state variables, see [51, p71] for more hints how to do this.
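As a tiny numerical illustration of that conversion (a Python sketch, assumed here, not from the text): a Lagrange cost J = ∫₀¹ u² dt can be recast in Mayer form by introducing an extra state x_a with ẋ_a = u², x_a(0) = 0, so that J = x_a(1). Here u(t) = sin t is just an arbitrary test input.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

u = lambda t: np.sin(t)            # an arbitrary test input

# Augmented state: dxa/dt = L(x,u,t) = u(t)^2, xa(0) = 0
sol = solve_ivp(lambda t, xa: [u(t)**2], [0, 1], [0.0],
                rtol=1e-10, atol=1e-12)
J_mayer = sol.y[0, -1]             # Mayer form: terminal value of xa

J_lagrange, _ = quad(lambda t: u(t)**2, 0, 1)  # Lagrange form: direct quadrature
print(J_mayer, J_lagrange)         # the two costs agree
```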
We solve this constrained optimisation problem by converting it to an unconstrained optimisation problem using generalised Lagrange multipliers. These are a vector of introduced variables, λ, that are pre-multiplied to each of the constraint equations and then added to the original objective function. Now converting Eqns 9.8 and 9.7 to an unconstrained optimisation problem gives

J = \phi + \int_0^{t_f} \left[ L(x, u, t) + \lambda^\top (f - \dot{x}) \right] dt \tag{9.9}
  = \phi + \int_0^{t_f} \left[ H - \lambda^\top \dot{x} \right] dt \tag{9.10}

where the scalar H is termed the Hamiltonian and is defined as

H \overset{\text{def}}{=} L + \lambda^\top f \tag{9.11}

We can eliminate \dot{x} in Eqn. 9.10 by integrating by parts, giving

J = \phi - \left[ \lambda^\top x \right]_0^{t_f} + \int_0^{t_f} \left[ H + \dot{\lambda}^\top x \right] dt \tag{9.12}
Using variational calculus, we get the Euler–Lagrange equations which define the dynamics of the co-state variables,

\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad \text{with final condition} \quad \lambda(t_f) = \left. \frac{\partial \phi}{\partial x} \right|_{t_f} \tag{9.13}
where we emphasise that the co-state dynamics are specified by terminal boundary conditions, as opposed to the initial conditions given for the state dynamics in Eqn. 9.7.
Finally, the optimum control input profile is given by extremising the Hamiltonian, or equivalently solving for u such that

\frac{\partial H}{\partial u} = 0 \tag{9.14}
Solving these two equations, and applying the manipulated variable to our dynamic system
(Eqn. 9.7) will result in optimal operation.
In many practical cases, the manipulated variable may be constrained between an upper and
lower limit. In this case, we would not blindly use the stationary point of the Hamiltonian
(Eqn. 9.14) to extract the manipulated variable, but we would simply minimise H, now conscious
of the constrained manipulated variable.
In summary, the Optimal Control Problem, or OCP, is given in Algorithm 9.1.
Algorithm 9.1 The Optimal Control Problem
The solution procedure of the optimum control problem (OCP) is as follows:
1. Given a dynamic model with initial conditions

\dot{x} = f(x, u, t), \qquad x(t=0) = x_0 \tag{9.15}

and a scalar performance index which we wish to extremise over a finite time from t = 0 to t = t_f,

J(u) = \phi(x(t_f)) + \int_0^{t_f} L(x, u, t)\, dt \tag{9.16}

2. then we can form the Hamiltonian

H \overset{\text{def}}{=} L + \lambda^\top f \tag{9.17}

where the n introduced co-state dynamics are given by

\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad \text{with final condition} \quad \lambda(t_f) = \left. \frac{\partial \phi}{\partial x} \right|_{t_f} \tag{9.18}

3. and the optimum input is given by extremising H, or by solving

\frac{\partial H}{\partial u} = 0 \tag{9.19}

for u(t).
Section 9.3.3 solves the optimal control problem for some simple examples, including the optimal temperature profile of a batch reactor from section 9.2.4. For the batch reactor we have only a termination criterion in our objective function, and no state or manipulated variable energy cost. This makes the calculation somewhat easier.
9.3.2 The two-point boundary problem
Equations 9.7 and 9.13–9.14 need to be solved in order to find the desired u(t), the optimal control policy at every sampling instance. This, however, is not a trivial task. Normally we would expect to know the initial conditions for the states, so in principle we could integrate Eqn. 9.7 from the known x(0) for one sample interval, except that we do not know the correct control variable (u) to use. However, we can solve for the instantaneous optimum u by solving Eqn. 9.14. The only problem now is that we do not know the correct co-state (λ) variables to use in Eqn. 9.14. So far the only things we know about the co-state variables are the dynamics (Eqn. 9.13), and the final conditions, but unfortunately not the initial conditions.
These types of ODE problems are known as two-point boundary value problems. If we have n state dynamic equations, then the number of state and co-state dynamic equations is 2n. We know n initial state conditions, but that still leaves us n initial conditions short for the co-states. (To solve the system of ODEs, we must know as many initial conditions as DEs.) However, we do know the final values of the co-states, and this information supplies the missing n conditions.
Methods to solve two-point boundary value problems are described in [161, Chp 16] and [196, p176] and are in general much harder to solve than initial value ODE problems, and often require
an iterative approach. The most intuitive way to solve two-point boundary value ODEs is by the shooting method or, as it is sometimes termed, the boundary condition iteration, or BCI [167, p237]. This technique is analogous to the artillery gunner who knows both his own and the enemy's position, but does not know the correct angle to point the gun so that the shell lands on the enemy. Typically an iterative approach is used where the gun is pointed at different angles, with an observer who watches where the shells land and then communicates to the gunner what corrections to make. However, the iteration loop outside the integration is not the only complication. In many physical cases, numerically integrating the state equations backwards or the co-state equations forwards is unstable, whereas integrating in the opposite direction causes no problems.
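The shooting idea can be sketched in a few lines. The Python example below uses an illustrative scalar LQ problem (assumed here for demonstration): for ẋ = −x + u with J = ½∫₀¹(x² + u²) dt, the Euler–Lagrange equations give ẋ = −x − λ and λ̇ = −x + λ with x(0) = 1 and λ(1) = 0; we guess λ(0), integrate forward, and drive the terminal residual λ(1) to zero with a scalar root-finder.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def terminal_residual(lam0):
    """Integrate forward from a guessed lam(0) and return lam(1)."""
    sol = solve_ivp(lambda t, z: [-z[0] - z[1], -z[0] + z[1]],
                    [0.0, 1.0], [1.0, lam0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]            # residual: should be zero at the solution

lam0 = brentq(terminal_residual, -2.0, 2.0)   # bracket found by trial
print(lam0, terminal_residual(lam0))
```

Because this example is linear, the residual is affine in the guess and a single bracketing search suffices; for nonlinear state dynamics the same loop applies, but the root-finding (and the integration direction) can become much more delicate, as noted above.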
An alternative method known as Control Vector Iteration (CVI) avoids some of the failings of the
BCI method. A comparison of these two techniques is given in [95]. Perhaps the most robust
scheme is to use collocation which is the strategy used by the MATLAB routine bvp4c designed
for boundary value problems.
9.3.3 Optimal control examples
The following examples illustrate two optimal control problems. In both cases we follow Algo-
rithm 9.1 and use the SYMBOLIC toolbox to assist in developing the necessary equations.
Rayleigh's optimal control problem

We wish to minimise

J(u) = \int_0^{2.5} \left( x_1^2 + u^2 \right) dt \tag{9.20}

subject to the state dynamics and state initial conditions

\dot{x}_1 = x_2, \qquad x_1(t=0) = -5
\dot{x}_2 = -x_1 + \left( 2 - 0.1 x_2^2 \right) x_2 + 4u, \qquad x_2(t=0) = -5
For this problem it is convenient to use the Symbolic toolbox to develop the necessary equations. First we define the state dynamics, the objective function, and form the Hamiltonian.

>> syms u lam1 lam2 x1 x2 real % define symbolic variables
>> x = [x1 x2]'; lambda = [lam1 lam2]'; % x = [x1, x2]', lambda = [lam1, lam2]'
>> f = [x(2); ...
       -x(1)+(2-0.1*x(2)^2)*x(2) + 4*u]; % State dynamics: xdot = f(x,u)
>> phi = 0; L = x(1)^2 + u^2; % Terms in the objective function, phi + int L dt
>> H = L + lambda'*f % Hamiltonian, H = L + lambda'*f
H =
lam1*x2 + u^2 + x1^2 - lam2*(x1 - 4*u + x2*(x2^2/10 - 2))
Given the Hamiltonian, we can construct the co-state dynamics and the terminal conditions from Eqn. 9.18.

>> lam_dot = -jacobian(H,x)' % Eqn. 9.18. Note: jacobian returns a row vector
lam_dot =
lam2 - 2*x1
lam2*((3*x2^2)/10 - 2) - lam1
>> lam_final = jacobian(phi*x(1),x)'
lam_final =
0
0
The optimal control input is given by solving Eqn. 9.19 for u(t), or

>> uopt = solve(diff(H,u),u) % Eqn. 9.19
uopt =
(-2)*lam2
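For readers without the MATLAB Symbolic toolbox, the same derivation can be cross-checked with Python's sympy (an equivalent sketch of the session above):

```python
import sympy as sp

u, lam1, lam2, x1, x2 = sp.symbols('u lam1 lam2 x1 x2', real=True)
f = sp.Matrix([x2, -x1 + (2 - sp.Rational(1, 10)*x2**2)*x2 + 4*u])
lam = sp.Matrix([lam1, lam2])
L = x1**2 + u**2
H = L + (lam.T*f)[0]                             # Hamiltonian, H = L + lam'*f

lam_dot = -sp.Matrix([H]).jacobian([x1, x2]).T   # co-state dynamics, Eqn. 9.18
uopt = sp.solve(sp.diff(H, u), u)[0]             # optimal input from dH/du = 0
print(lam_dot, uopt)
```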
Now all that is left to do is to form and solve our two-point boundary value problem. We can substitute the expression for the optimum u(t) in the state dynamics, and append the co-state dynamics.

>> x_dot = subs(f,u,uopt);
>> [x_dot; lam_dot] % State & co-state dynamics
ans =
x2
- 8*lam2 - x1 - x2*(x2^2/10 - 2)
lam2 - 2*x1
lam2*((3*x2^2)/10 - 2) - lam1
To solve the boundary value ODE problem, we take the state and co-state dynamic equations developed above and insert them into the two-point boundary value ODE routine bvp4c, where we need to specify the dynamics, the boundary conditions, and additionally estimate a suitable grid for the collocation points. We also need to give an initial estimate trajectory for the state and co-state profiles over the integration interval, but in this case I will simply supply constant trajectories.

zdot = @(t,z) ...
    [z(2); ...
     -8*z(4) - z(1) - z(2)*(z(2)^2/10 - 2); ...
     z(4) - 2*z(1); ...
     z(4)*(3/10*z(2)^2-2) - z(3)]; % State & co-state dynamics

% Set up boundary conditions
x0 = [-5;-5]; lam_final = [0;0]; % Initial state conditions & terminal co-state conditions
BCres = @(a,b) [a(1:2)- x0; ...
                b(3:4)- double(lam_final)]; % Boundary residuals

solinit = bvpinit(linspace(0,2.5,10),[-1 -1 -0.2 -0.5]);
sol = bvp4c(zdot,BCres,solinit); % Now solve the TPBVP
The optimum profile is given in Fig. 9.11 where we can quickly check that the boundary conditions are satisfied. We can also visually check that the optimum input u*(t) = -2λ₂(t) as required. The circles plotted show the position of the collocation points, and we should check that they capture the general trend of the profile.
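An equivalent solution in Python (a sketch using SciPy's collocation-based solve_bvp, the analogue of bvp4c, with the same dynamics, boundary conditions and constant initial guess):

```python
import numpy as np
from scipy.integrate import solve_bvp

def zdot(t, z):
    x1, x2, l1, l2 = z
    return np.vstack([x2,
                      -8*l2 - x1 - x2*(x2**2/10 - 2),
                      l2 - 2*x1,
                      l2*(3*x2**2/10 - 2) - l1])

def bc(a, b):      # residuals: x(0) = (-5,-5) and lam(2.5) = (0,0)
    return np.array([a[0] + 5, a[1] + 5, b[2], b[3]])

t = np.linspace(0, 2.5, 10)
z_guess = np.vstack([-np.ones_like(t), -np.ones_like(t),
                     -0.2*np.ones_like(t), -0.5*np.ones_like(t)])
sol = solve_bvp(zdot, bc, t, z_guess, max_nodes=5000)
u_opt = -2*sol.y[3]                 # u*(t) = -2*lam2(t)
print(sol.status, sol.y[:, 0], sol.y[2:, -1])
```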
Figure 9.11: The optimum control policy for the Rayleigh problem. The upper two trends are the states x₁ and x₂, the middle is the optimal control policy u*(t), and the lower two trends are the corresponding co-states λ₁ and λ₂.
An optimal batch reactor policy revisited

In the optimum profile determination for the batch reactor example in section 9.2.4, we approximated the continuous profile with a finite series of constant zeroth-order holds. In this example, we will compute the true continuous curve.

Recall that the plant dynamics are given by the reaction kinetics and initial conditions

\frac{dx_1}{dt} = -2e^{-6/u} x_1, \qquad \text{starting from} \quad x_1(t=0) = 0.9
\frac{dx_2}{dt} = 2e^{-6/u} x_1 - 5e^{-11/u} x_2, \qquad \text{starting from} \quad x_2(t=0) = 0.1 \tag{9.21}

and the objective function to be maximised is

J = x_2(t=2) + \int_0^{t=2} 0 \, dt
which indicates that L(x, u, t) = 0. The Hamiltonian function, H = L + λᵀf, in this instance is

H = 0 + \begin{bmatrix} \lambda_1 & \lambda_2 \end{bmatrix} \begin{bmatrix} -2e^{-6/u} x_1 \\ 2e^{-6/u} x_1 - 5e^{-11/u} x_2 \end{bmatrix}
  = -2\lambda_1 e^{-6/u} x_1 + \lambda_2 \left( 2e^{-6/u} x_1 - 5e^{-11/u} x_2 \right)

The co-states vary with time following \dot{\lambda} = -\partial H / \partial x and finish with

\frac{d\lambda_1}{dt} = 2e^{-6/u} (\lambda_1 - \lambda_2), \qquad \text{ending at} \quad \lambda_1(t=2) = 0
\frac{d\lambda_2}{dt} = 5e^{-11/u} \lambda_2, \qquad \text{ending at} \quad \lambda_2(t=2) = 1 \tag{9.22}
since φ = x₂. Now the optimum temperature trajectory is the one that maximises H, or the solution to ∂H/∂u = 0, which in this case can be solved analytically to give

u = \frac{5}{\ln \left( \dfrac{55 \lambda_2 x_2}{12 x_1 (\lambda_2 - \lambda_1)} \right)} \tag{9.23}
Once again, constructing the co-state dynamics and optimal input is easier with the help of a symbolic manipulator, as shown in Listing 9.4 below.

Listing 9.4: Analytically computing the co-state dynamics and optimum input trajectory as a function of states and co-states
>> syms u lam1 lam2 x1 x2 real % define symbolic variables
>> x = [x1 x2]'; lambda = [lam1 lam2]'; % x = [x1, x2]', lambda = [lam1, lam2]'
>> f = [-2*exp(-6/u)*x(1); ...
        2*exp(-6/u)*x(1) - 5*exp(-11/u)*x(2)];
>> phi = x(2); L = 0;
>> H = L + lambda'*f; % H = L + lambda'*f
>> lam_dot = -jacobian(H,x)'; % -dH/dx
lam_dot =
(2*lam1)/exp(6/u) - (2*lam2)/exp(6/u)
(5*lam2)/exp(11/u)
>> lam_final = jacobian(phi,x)' % lambda(t_final) = dphi/dx
lam_final =
0
1
>> uopt = solve(diff(H,u),u)
uopt =
5/log(-(55*lam2*x2)/(12*lam1*x1 - 12*lam2*x1))
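A quick numerical sanity check of Eqn. 9.23 (the state and co-state values below are arbitrary test numbers, not values from the solution): at the u computed from the formula, the Hamiltonian should be stationary and, in fact, a maximum.

```python
import numpy as np

x1, x2, l1, l2 = 0.5, 0.3, 0.2, 1.0   # arbitrary positive test values

def H(u):   # Hamiltonian for the batch reactor, with L = 0
    return -2*l1*np.exp(-6/u)*x1 + l2*(2*np.exp(-6/u)*x1 - 5*np.exp(-11/u)*x2)

u_star = 5/np.log(55*l2*x2/(12*x1*(l2 - l1)))   # Eqn. 9.23

h = 1e-6
dHdu = (H(u_star + h) - H(u_star - h))/(2*h)    # central difference
print(u_star, dHdu)
```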
Once again we can solve the two-point boundary value problem succinctly using bvp4c as demonstrated in Listing 9.5. We need to supply the state and co-state dynamic equations, Eqn. 9.21 and Eqn. 9.22, the four required boundary conditions, the initial x₁(0), x₂(0) and the final λ₁(2), λ₂(2), and the optimum input trajectory as a function of states and co-states.
Listing 9.5: Solving the reaction profile boundary value problem using the boundary value problem solver, bvp4c.m.

u = @(x) 5/log(55*x(4)*x(2)/12/x(1)/(x(4)-x(3))); % Optimum input trajectory, Eqn. 9.23

% State and co-state dynamics, Eqn. 9.21 & Eqn. 9.22.
zdot = @(t,x) ...
    [-2*exp(-6/u(x))*x(1);
      2*exp(-6/u(x))*x(1) - 5*exp(-11/u(x))*x(2);
      2*exp(-6/u(x))*(x(3)-x(4));
      5*exp(-11/u(x))*x(4)];

% Boundary residuals: x1(0) = 0.9, x2(0) = 0.1 and lam1(2) = 0, lam2(2) = 1
BCres = @(a,b) [a(1:2)-[0.9; 0.1]; ...
                b(3:4)-[0;1]];

tf = 2; % Final time, t_final = 2
solinit = bvpinit(linspace(0,tf,10),[0.6,0.2,0.4,0.6]);
sol = bvp4c(zdot,BCres,solinit); % Now solve the TPBVP

xint = linspace(0,tf,1e3)'; Sxint = deval(sol,xint); % interpolate solution
plot(sol.x, sol.y,'ro',xint,[Sxint(1,:)',Sxint(2,:)'],'-') % See Fig. 9.12
The optimum input temperature policy is shown in Fig. 9.12 with the resultant state and co-state trajectories. The amount of desired product B that is formed using this policy is 0.438, and we can be confident that no other temperature policy will improve on that result.
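The same TPBVP can be reproduced in Python with SciPy's solve_bvp (a sketch; the argument of the logarithm is guarded during the Newton iterations, which does not affect the converged solution):

```python
import numpy as np
from scipy.integrate import solve_bvp

def u_opt(x1, x2, l1, l2):
    arg = 55*l2*x2/(12*x1*(l2 - l1))
    return 5/np.log(np.maximum(arg, 1.0001))  # guard the log away from the solution

def zdot(t, z):
    x1, x2, l1, l2 = z
    u = u_opt(x1, x2, l1, l2)
    k1, k2 = 2*np.exp(-6/u), 5*np.exp(-11/u)
    return np.vstack([-k1*x1,                 # Eqn. 9.21
                      k1*x1 - k2*x2,
                      k1*(l1 - l2),           # Eqn. 9.22
                      k2*l2])

def bc(a, b):   # x1(0)=0.9, x2(0)=0.1, lam1(2)=0, lam2(2)=1
    return np.array([a[0] - 0.9, a[1] - 0.1, b[2], b[3] - 1])

t = np.linspace(0, 2, 20)
z_guess = np.vstack([0.6*np.ones_like(t), 0.2*np.ones_like(t),
                     0.4*np.ones_like(t), 0.6*np.ones_like(t)])
sol = solve_bvp(zdot, bc, t, z_guess, max_nodes=5000)
print(sol.status, sol.y[1, -1])    # final C_b, which should be close to 0.438
```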
Figure 9.12: The concentrations C_a and C_b (with C_b(final) = 0.438), the optimum temperature trajectory, and the co-states for the batch reactor as evaluated using an optimum control policy.
In both the Rayleigh and the batch reactor optimal control examples given above, we could extract analytical expressions for the optimal control policy as an explicit function of states and co-states, u*(t) = g(x, λ, t), which made the subsequent boundary-value problem easier to solve. However, in many practical cases, the solution of Eqn. 9.19 to give u*(t) may not be possible to find in closed form. In these cases, we may need to embed an algebraic equation solver such as fsolve, or an optimiser such as fminunc.m, inside the BVP solution routine. However, the problem to be solved is now a differential algebraic equation, or DAE, which introduces considerable computational problems.
9.3.4 Problems with a specified target set

In some optimal control problems we wish not only to establish the optimum profile for u(t) starting from a given initial condition, x(0), but we might also require some (or all) of our states to satisfy some arbitrary condition at the final time,

\psi(x_f, t_f) = 0 \tag{9.24}

These problems are known as OCP problems with a target or manifold set. Further details and solution strategies for these sorts of problems are given in [45, 163]. The main difference here is that we need to introduce a new vector of Lagrange multipliers, of the same dimension as the number of target state constraints in Eqn. 9.24, and these get added to the modified objective functional.
A simplified, but very common, situation is where we wish some of our states to hit a specified target, and perhaps leave the remainder of our states free. In this case the boundary conditions simplify to:

1. if x_i(t_f) is fixed, then we use that as a boundary constraint, i.e. x_i(t_f) = x_{if},
2. if x_i(t_f) is free, then we use \lambda_i(t_f) = \partial \phi / \partial x_i(t_f) as usual for the boundary constraint.

Note that together, these two conditions supply the additional n boundary conditions we need to solve the two-point boundary value problem of dimension 2n.
Suppose we wish to find the optimum profile u(t) over the interval t ∈ [0, 1] to minimise

J = \int_0^1 u^2 \, dt

subject to the dynamics

\dot{x} = x^2 \sin(x) + u, \qquad x(0) = 0

but where we also require the state x to hit a target at the final time,

x(1) = 0.5

In this case we have one state variable and one associated co-state variable, so we need two boundary conditions. One is the initial state condition, and the other is the specified state target. This means that for this simple problem we do not need to specify any boundary conditions for the co-state. The rest of the OCP problem is much the same as that given in Algorithm 9.1.
syms x u lam real % define symbolic variables, u(t), x(t), lam(t)
tf = 1; % final time
xtarget = 0.5; % target constraint

f = [x^2.*sin(x) + u]; % State dynamics: xdot = f(x,u)
phi = 0; L = u^2; % Terms in the objective function, phi, L
H = L + lam'*f % Hamiltonian, H = L + lam*f

lam_dot = -jacobian(H,x)'; % ensure column vector
uopt = solve(diff(H,u),u)
Running the above code gives the optimum input as u* = -λ/2 and the two-point boundary problem to solve as

\begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} x^2 \sin(x) - \lambda/2 \\ -\lambda \left( x^2 \cos(x) + 2x \sin(x) \right) \end{bmatrix} \tag{9.25}

in two variables. The two boundary conditions are

x(0) = 0, \qquad \text{and} \qquad x(1) = 0.5
% Define state & co-state dynamics, Eqn. 9.25.
ztmp = @(t,z1,z2) ...
    [z1^2*sin(z1) - z2/2;
     -z2*(z1^2*cos(z1) + 2*z1*sin(z1))];
zdot = @(t,z) ztmp(t,z(1),z(2));

x0 = [0];
BCres = @(a,b) [a(1)- x0; ...
                b(1)- xtarget]; % Boundary residuals: initial & terminal state conditions

xlam_init = @(t) [0.5*t; -1];
solinit = bvpinit(linspace(0,tf,10),xlam_init);
sol = bvp4c(zdot,BCres,solinit); % Now solve the TPBVP

% Extract solution
t = sol.x; z = sol.y'; % could also check derivatives
x = z(:,1); lam = z(:,2);
u = -lam(:,1)/2;
Fig. 9.13 shows the result of this optimal control problem. We should note that the state, x, did indeed manage to hit the specified target at t = 1. Finally, we can establish the optimum profile for u(t).
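The same target problem can be reproduced with SciPy's solve_bvp (a Python sketch of the MATLAB session above, with the same initial trajectory guess):

```python
import numpy as np
from scipy.integrate import solve_bvp

def zdot(t, z):
    x, lam = z
    return np.vstack([x**2*np.sin(x) - lam/2,                 # Eqn. 9.25
                      -lam*(x**2*np.cos(x) + 2*x*np.sin(x))])

bc = lambda a, b: np.array([a[0], b[0] - 0.5])    # x(0) = 0 and x(1) = 0.5

t = np.linspace(0, 1, 10)
sol = solve_bvp(zdot, bc, t, np.vstack([0.5*t, -np.ones_like(t)]))
u = -sol.y[1]/2                                   # u*(t) = -lam(t)/2
print(sol.status, sol.y[0, -1])
```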
Figure 9.13: Optimal control with a target where x(1) must equal 0.5. Shown are the state x, the input u, and the co-state λ.
Problem 9.1 (These problems were taken in part from [51, p76].)

1. Show that determining the optimal input trajectory u(t) for the one-dimensional system

\dot{x} = -x + u, \qquad x(0) = 1

such that

J(u) = \frac{1}{2} \int_0^1 \left( x^2 + u^2 \right) dt

is minimised reduces to solving the two-point boundary value problem

\begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad \begin{cases} x(0) = 1 \\ \lambda(1) = 0 \end{cases}

Can you find an analytical solution for this problem?

2. Construct the two-point boundary problem as above (but do not solve it) for the two-state, one-input system

\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \qquad x(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

with the cost functional J(u) = \frac{1}{2} \int_0^1 \left( x^\top x + u^2 \right) dt
9.4 Linear quadratic control

The general optimal controller from section 9.3 has some nice properties, not the least that the trajectory is optimal. However there are some obvious disadvantages, some of which are substantial. Quite apart from the sheer complexity of the problem formulation and the subsequent numerical solution of the two-point boundary value problem, there is the fact that the final result is a manipulated variable trajectory which is essentially an open loop controller. This is fine provided we always start from the same conditions, and follow down the prescribed path, but what happens if due to some unexpected disturbance we fall off this prescribed path? Do we still follow the pre-computed trajectory for u(t), or must we re-compute a new optimal profile for u(t)?

Ideally we would like our optimal controller to have some element of feedback, which after all is a characteristic of all our other practical controllers. Furthermore, if we restrict our attention to just linear plant models, and use just a quadratic performance objective, then the subsequent problem setup, and implementation, is made considerably easier. These types of optimal controllers designed to control linear plants with quadratic performance objectives are known as Linear Quadratic Regulators, or LQR.

9.4.1 Continuous linear quadratic regulators

The linear quadratic control problem is to choose a manipulated variable trajectory u(t) that minimises a combination of termination error and the integral of the squared state and manipulated variable error over a time horizon from t = 0 to some specified (although possibly infinite) final time t = T,

J = \frac{1}{2} x^\top(T) S x(T) + \frac{1}{2} \int_0^T \left( x^\top Q x + u^\top R u \right) dt \tag{9.26}

given the constraint of linear plant dynamics

\dot{x} = Ax + Bu, \qquad x(t=0) = x_0 \tag{9.27}

As we assume linear plant dynamics, and the performance objective, Eqn. 9.26, is of a quadratic form, this type of controller is known as a continuous Linear Quadratic Regulator or LQR.
The weighting matrices Q and S are positive semi-definite (and typically diagonal) matrices, while R must be a strictly positive definite matrix. Note that any real positive diagonal matrix is also positive definite. The expressions x^\top Q x and u^\top R u are called quadratic forms and are sometimes compactly written as ||x||^2_Q or ||u||^2_R.
Following the terminology introduced in the general optimisation criterion in Eqn. 9.8, we have

\dot{x} = Ax + Bu, \qquad x(t=0) = x_0 \tag{9.28}
L(x, u, t) = \frac{1}{2} x^\top Q x + \frac{1}{2} u^\top R u \tag{9.29}
\phi = \frac{1}{2} x^\top(T)\, S\, x(T) \tag{9.30}

for the state dynamics constraint, the 'getting there' cost or energy term, and the final cost term. For many applications there is no explicit termination cost, since all the cost of state deviations is accounted for in the energy term; thus often simply the matrix S = 0. In addition, the upper limit of the time horizon is often set to T = ∞, in which case we are obviously no longer concerned with the termination cost term.
Now applying the Euler–Lagrange equations (Eqns 9.13 and 9.14) gives¹

\frac{d\lambda}{dt} = -\frac{\partial H}{\partial x} = -\frac{\partial}{\partial x} \left( \frac{1}{2} x^\top Q x + \lambda^\top A x \right) \tag{9.33}
= -Qx - A^\top \lambda \tag{9.34}

Similarly, using Eqn. 9.14 gives

\frac{\partial H}{\partial u} = 0 = Ru + B^\top \lambda \tag{9.35}

which implies that the optimal control law is

u = -R^{-1} B^\top \lambda \tag{9.36}
Eqn. 9.36 defines the optimal u as a function of the co-states, λ. Note that now our problem has 2n unknown states: the original n states x, and the n introduced co-states, λ. We can write this in a compact form using Eqns 9.36, 9.28 and 9.34,

\begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} A & -BR^{-1}B^\top \\ -Q & -A^\top \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} \tag{9.37}

with mixed boundary conditions

x(0) = x_0, \qquad \text{and} \qquad \lambda(T) = Sx(T)
From Lyapunov's theorem, we know that the co-states are related to the states via

\lambda(t) = P(t)\, x(t) \tag{9.38}

¹Note that Ogata, [148, Appendix A] has a very good summary of matrix operations including differentiation. Especially note the following:

\frac{\partial (Ax)}{\partial x} = A^\top \tag{9.31}
\frac{\partial (x^\top A x)}{\partial x} = 2Ax \quad \text{if } A \text{ is symmetrical} \tag{9.32}
where P(t) is a time-varying matrix. Differentiating Eqn. 9.38 using the product rule gives

\dot{\lambda} = \dot{P} x + P \dot{x} \tag{9.39}
Now we can substitute the control law based on the co-states (Eqn. 9.36) into the original linear dynamic model (Eqn. 9.28), and further substitute the relation for the states from the co-states (Eqn. 9.38), giving

\dot{x} = Ax + Bu \tag{9.40}
= Ax + B \left( -R^{-1} B^\top \lambda \right) \tag{9.41}
= Ax - BR^{-1} B^\top P x \tag{9.42}
Now substituting Eqn. 9.42 into Eqn. 9.39 and equating with the Euler–Lagrange equation, Eqn. 9.34, gives

\dot{\lambda} = -Qx - A^\top \lambda = \dot{P} x + P \left( Ax - BR^{-1} B^\top P x \right) \tag{9.43}

Rearranging, and cancelling x, gives

\dot{P} = -PA - A^\top P + PBR^{-1}B^\top P - Q \tag{9.44}

which is called the differential matrix Riccati equation. It is a matrix differential equation where the boundary requirement, or termination value, from Eqn. 9.13 is

P(t = T) = S \tag{9.45}
and which can be solved backwards in time using the original dynamic model matrices A, B, and the controller design matrices Q, R. While we need to know the model and design matrices for this solution, we need not know the initial condition for the states. This is fortunate, since it means that we can solve the matrix Riccati equation offline once, store the series of P(t), and apply this solution to our problem irrespective of the initial condition. We only need to repeat this computation if the plant model or design criteria change.
In summary, we have now developed an optimal state feedback controller of the proportional form

u = -K(t)\, x \tag{9.46}

where the time-varying matrix feedback gain is (from Eqn. 9.36) defined as

K(t) = R^{-1} B^\top P(t) \tag{9.47}

where the time-varying P matrix is given by Eqn. 9.44 ending with Eqn. 9.45. Since we have assumed that the setpoint is at the origin in Eqn. 9.46, we have a regulatory problem, rather than a servo or tracking problem. Servo controllers are discussed in section 9.4.6.
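For the common infinite-horizon case where P is constant, Eqn. 9.44 reduces to an algebraic Riccati equation and the gain becomes a fixed matrix. A Python sketch (using SciPy's solve_continuous_are, and using the example plant that appears later in this chapter):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-2., -4.], [3., -1.]])
B = np.array([[4.], [6.]])
Q = np.eye(2)
R = np.array([[2.]])

P = solve_continuous_are(A, B, Q, R)      # steady-state solution of Eqn. 9.44
K = np.linalg.solve(R, B.T @ P)           # K = R^{-1} B' P, Eqn. 9.47
cl_poles = np.linalg.eigvals(A - B @ K)   # closed loop: xdot = (A - BK)x
print(K, cl_poles)
```

The closed-loop poles should all lie in the left half plane, since the LQR gain is guaranteed stabilising for a stabilisable plant.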
9.4.2 Analytical solution to the LQR problem

The problem with implementing the full time-varying optimal controller is that we must solve the nonlinear differential matrix Riccati equation, Eqn. 9.44. This is a complicated computational task, and furthermore we must store offline the resultant gain matrices at various times throughout the interval of interest. However there is an analytical solution to this problem which is described in [61, 2.4] and [163, 5.2]. The downside is that it requires us to compute matrix exponentials, and if we want to know P(t), as opposed to just the co-states, we must do this computation every sample time.
We start by solving for both states and co-states in Eqn. 9.37, repeated here:

\begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \underbrace{\begin{bmatrix} A & -BR^{-1}B^\top \\ -Q & -A^\top \end{bmatrix}}_{H} \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad \begin{cases} x(0) = \text{known} \\ \lambda(T) = \text{known} \end{cases} \tag{9.48}

where the matrix H is known as the Hamiltonian matrix. This approach has the attraction that it is a linear, homogeneous differential equation with an analytical solution. Starting from any known point at t = 0, the states and co-states at any future time are given by

\begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} = e^{Ht} \begin{bmatrix} x(0) \\ \lambda(0) \end{bmatrix} \tag{9.49}
However, the outstanding problem is that we do not know λ(0), so we cannot get started!
If we define e^{Ht} \overset{\text{def}}{=} \Phi(t), then we can re-write Eqn. 9.49 as

\begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} = \Phi(t) \begin{bmatrix} x(0) \\ \lambda(0) \end{bmatrix} \tag{9.50}

and remember that with MATLAB it is straightforward to compute Φ using matrix exponentials, since it is a function of known, constant matrices. We then partition the matrix Φ into four equal-sized n × n blocks as

\Phi(t) = \begin{bmatrix} \Phi_{11}(t) & \Phi_{12}(t) \\ \Phi_{21}(t) & \Phi_{22}(t) \end{bmatrix}

which means we can express Eqn. 9.50 as

x(t) = \Phi_{11}(t)\, x(0) + \Phi_{12}(t)\, \lambda(0) \tag{9.51}
\lambda(t) = \Phi_{21}(t)\, x(0) + \Phi_{22}(t)\, \lambda(0) \tag{9.52}
Employing the Riccati transformation, λ(t) = P(t)x(t), changes Eqn. 9.52 to

P(t)\, x(t) = \Phi_{21}(t)\, x(0) + \Phi_{22}(t)\, \lambda(0)

and multiplying Eqn. 9.51 by P(t) and equating to the above gives

P(t)\Phi_{11}(t)\, x(0) + P(t)\Phi_{12}(t)\, \lambda(0) = \Phi_{21}(t)\, x(0) + \Phi_{22}(t)\, \lambda(0)

Solving for the mystery λ(0) gives

\lambda(0) = \left[ \Phi_{22}(t) - P(t)\Phi_{12}(t) \right]^{-1} \left[ P(t)\Phi_{11}(t) - \Phi_{21}(t) \right] x(0)

Now at the final point, t = T, we know that P(T) = S from the transversality conditions, so λ(0) is given by

\lambda(0) = \left[ \Phi_{22}(T) - S\Phi_{12}(T) \right]^{-1} \left[ S\Phi_{11}(T) - \Phi_{21}(T) \right] x(0) \tag{9.53}

which is something that we can compute since all the terms are known.
Now having established λ(0), we can use Eqn. 9.49 to compute analytically both states, x(t), and co-states, λ(t), at any time t. Consequently, we can compute P(t) at any intermediate time between 0 and T using

P(t) = \left[ \Phi_{22}(T-t) - S\Phi_{12}(T-t) \right]^{-1} \left[ S\Phi_{11}(T-t) - \Phi_{21}(T-t) \right] \tag{9.54}

In many applications, it suffices to ignore the co-states and just consider P(0), which is given directly by

P(0) = \left[ \Phi_{22}(T) - S\Phi_{12}(T) \right]^{-1} \left[ S\Phi_{11}(T) - \Phi_{21}(T) \right] \tag{9.55}
We will investigate this important simplification further in section 9.4.3.
An LQR time-varying example

Suppose we wish to control the continuous plant

\dot{x} = \begin{bmatrix} -2 & -4 \\ 3 & -1 \end{bmatrix} x + \begin{bmatrix} 4 \\ 6 \end{bmatrix} u, \qquad x(0) = \begin{bmatrix} -2 \\ 1 \end{bmatrix}

using an optimal controller of the form u = -K(t)x to drive the process from some specified initial condition to zero. For this application, we decide that the manipulated variable deviations are more costly than the state deviations,

Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad R = 2, \qquad S = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}

Listing 9.6 first defines the plant, then solves for the initial co-states, λ(0), using Eqn. 9.53, and then finally simulates the augmented state and co-state system given by Eqn. 9.48.
Listing 9.6: Computes the full time-evolving LQR solution

A = [-2 -4; 3 -1]; B=[4,6]'; x0 = [-2,1]'; % Plant: xdot = Ax + Bu & initial state, x(t=0)
n = length(A); % Problem dimension
Q = eye(2); R=2; S=zeros(size(A)); % Design criteria: Q, R and S
T = 0.5; % final time, T

H = [A, -B/R*B'; -Q, -A']; % Create Hamiltonian using Eqn. 9.48
Phi = expm(H*T); % Phi(t) = e^(H*t)
Phi11 = Phi(1:n,1:n); Phi12 = Phi(1:n,n+1:end); % Construct Phi11(t), Phi12(t), ...
Phi21 = Phi(n+1:end,1:n); Phi22 = Phi(n+1:end,n+1:end);

Lam_0 = (Phi22 - S*Phi12)\(S*Phi11 - Phi21)*x0; % co-state initial condition, Eqn. 9.53
z0 = [x0; Lam_0]; % augment states & co-states
G = ss(H, zeros(2*n,1), eye(2*n), 0);
initial(G, z0, T) % simulate it
The results of this simulation are given in Fig. 9.14, which also compares the near-identical results from a slightly sub-optimal controller which uses simply the steady-state solution to the Riccati equation discussed next in section 9.4.3. Note that the performance drop when using the considerably simpler steady-state controller is small, and is only noticeable when the total time horizon is relatively short.

An alternative solution procedure to the analytical solution given in Listing 9.6 is to numerically integrate the continuous time Riccati equation, Eqn. 9.44, backwards using, say, the workhorse ode45 with initial condition Eqn. 9.45. The trick that the function in Listing 9.7 uses is to collapse the matrix Eqn. 9.44 to a vector form that the ODE function can then integrate. The negative sign on the penultimate line is because we are in fact integrating backwards.
Listing 9.7: The continuous time differential Riccati equation. This routine is called from List-
ing 9.8.
function dPdt = mRiccati(t, P, A, B, Q)
% Solve the continuous matrix differential Riccati equation
% Solves dP/dt = A
T
PPA+PBB
T
PQ
5 P = reshape(P, size(A)); % Convert from (n
2
1) vector to n n matrix
[Figure 9.14 appears here: two panels, each trending the states, the input u(t), and the elements of K, P and λ against time.
(a) A long time horizon, T = 2. (b) A shorter time horizon, T = 0.5.]
Figure 9.14: Comparing the full time-varying LQR control with a steady-state sub-optimal controller. The upper two plots trend the two states and one input for the steady-state case and for the time-varying case. The lower three plots trend the time-varying elements of the matrices K(t), P(t), and the co-state vector λ(t) for the full time-varying optimal case.
dPdt = -A.'*P - P*A + P*B*B.'*P - Q; % Eqn. 9.44
dPdt = -dPdt(:);                     % Convert back from (n x n) to (n^2 x 1)
return
We call the function containing the matrix differential equation in Listing 9.7 via the ode45 integrator as shown in Listing 9.8. Note that we can express BR⁻¹Bᵀ efficiently as ZZᵀ where Z = B chol(R)⁻¹.
Listing 9.8: Solves the continuous time differential Riccati equation using a numerical ODE inte-
grator.
A = [-2 -4; 3 -1]; B = [4,6]';          % Plant dynamics of interest
Q = eye(2); R = 2; S = zeros(size(A));  % Weighting matrices, Q, R
T = 1;                                  % Final time for the finite time horizon
P0 = S;                                 % Initial condition for P

[t,P] = ode45(@(t,P) mRiccati(t,P,A,B/(chol(R)),Q), [0 T], P0(:)); % Call Listing 9.7
P = flipud(P);                          % reverse time order of P(t)
Now we have generated a trajectory for the four elements of the P(t) matrix as a function of time. We could then pre-compute the time-varying gains, K(t), and store them ready to be used in our closed-loop simulation.

Compared to the analytical solution, this numerical integration scheme requires that we integrate n² equations as opposed to 2n when we combine the co-states. Furthermore, we must store the trajectory of pre-computed time-varying gains. In practice, this storage requirement rapidly becomes unwieldy and, as we shall see in the next section, is rarely worth the effort.
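The same backward integration can be sketched outside MATLAB with SciPy's solve_ivp; the plant and weights below are the example from the text. Integrating in reversed time τ = T − t flips the signs in Eqn. 9.44, and for a long enough horizon the integrated P at t = 0 approaches the steady-state algebraic Riccati solution:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

A = np.array([[-2., -4.], [3., -1.]])
B = np.array([[4.], [6.]])
Q = np.eye(2); R = np.array([[2.]])
n = A.shape[0]
BRB = B @ np.linalg.solve(R, B.T)        # B R^{-1} B'

def riccati_rhs(tau, p):
    # Reversed time tau = T - t flips the signs of Eqn. 9.44:
    # dP/dtau = A'P + PA - P (B R^{-1} B') P + Q
    P = p.reshape(n, n)
    dP = A.T @ P + P @ A - P @ BRB @ P + Q
    return dP.ravel()

T = 5.0                                  # long horizon: P(0) ~ steady state
S = np.zeros((n, n))                     # terminal weight, P(t = T) = S
sol = solve_ivp(riccati_rhs, [0, T], S.ravel(), rtol=1e-9, atol=1e-12)
P0 = sol.y[:, -1].reshape(n, n)          # P at t = 0

Pinf = solve_continuous_are(A, B, Q, R)
print(np.abs(P0 - Pinf).max())           # ~ 0 for a long enough horizon
```

This also illustrates numerically why the steady-state shortcut of the next section works: the backward Riccati flow settles to a constant matrix well before the start of a long horizon.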
9.4.3 The steady-state solution to the matrix Riccati equation
In the optimal controller scheme presented above, the gain, K(t), is a continuously time-varying matrix, as opposed to a more convenient constant matrix. A time-varying controller gain requires excessive computer resources even if we did discretise, since we need to store an (m × n) gain matrix at every sample time in the control computer over the horizon of interest.
We can make an approximation to our already approximate optimal controller by using only the steady-state gain derived from the steady-state solution of the matrix differential Riccati equation when Ṗ = 0. With this approximation, the differential Eqn. 9.44 is simplified to the algebraic equation,

    0 = AᵀP∞ + P∞A − P∞BR⁻¹BᵀP∞ + Q        (9.56)

where the steady-state solution, P∞, is now a constant matrix which is considerably more convenient to implement. The steady-state controller gain is still obtained from Eqn. 9.47,

    K∞ = R⁻¹BᵀP∞        (9.57)

and is also a constant gain matrix.
Whether the performance deterioration due to this approximation is significant depends on the plant, and on the time horizon of interest. However, as evident in Fig. 9.14, the differences in performance are negligible for all but the shortest time horizons. Fig. 9.14(a) shows that even for this relatively brief optimisation horizon, the gains are essentially constant for more than two thirds of the horizon, and this feature is generally true for all plants and optimisation scenarios, provided the horizon is reasonably long.
We can solve for the elements of the matrix P∞ in Eqn. 9.56 using one of several strategies. One direct way is to equate the individual elements in each of the matrices, thus reducing the single matrix equation to a set of algebraic equations. However, unlike the Lyapunov equation, Eqn. 2.96, which we could collapse to a system of linear equations using vectorisation and Kronecker products as described in section 2.9.5, this time the steady-state Riccati equation includes a quadratic term of the form P∞BR⁻¹BᵀP∞, which means our resultant system of equations will not be linear. However, an iterative scheme based on this idea is given on page 417.

Another strategy is to compute the analytical expression for P(T), Eqn. 9.55, with a large time horizon T. Unfortunately Eqn. 9.55 is numerically very ill-conditioned when T is large, let alone infinite, and it is difficult to know a priori exactly what time horizon is sufficiently large. A final strategy, given later in Listing 9.12, is only applicable for the discrete version of the optimal controller.

The control toolbox in MATLAB provides a reliable algebraic Riccati equation routine, care, that
solves Eqn. 9.56 for P∞ in a numerically reliable manner², and the ric routine verifies the computed P from care (or equivalent) by calculating the residual of Eqn. 9.56. The linear quadratic regulator lqr routine finds the controller gain directly, which is slightly more convenient than the two-step procedure of solving the algebraic Riccati equation explicitly. All three routines are demonstrated in Listing 9.9.
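As an aside, SciPy offers an equivalent solver, solve_continuous_are. The Python sketch below (not part of the book's toolbox demonstration) solves the same problem, forms the gain via Eqn. 9.57, and checks the residual of Eqn. 9.56 in the spirit of ric:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-2., -4.], [3., -1.]])
B = np.array([[4.], [6.]])
Q = np.eye(2); R = np.array([[2.]])

P = solve_continuous_are(A, B, Q, R)        # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)             # Eqn 9.57: K = R^{-1} B' P

# Residual check, the role played by ric in the toolbox
resid = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.round(K, 4))        # ~ [[0.1847, 0.6603]], matching the listing below
print(np.abs(resid).max())   # ~ 0
```

The gain agrees with the MATLAB session in Listing 9.9 to the four decimal places printed there.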
A steady-state continuous LQR example
Simulating a steady-state optimal state-feedback controller is trivial in MATLAB. We can try a
steady-state version of the optimal controller for the example given previously on page 413. A
script to solve the controller design problem using the toolbox functions are, care or lqr, and
to simulate the results in MATLAB is given in Listings 9.9 and 9.10.
Listing 9.9: Calculate the continuous optimal steady-state controller gain.
>> A = [-2, -4; 3 -1]; B = [4; 6];  % Plant under consideration
>> Q = eye(2); R = 2;               % Controller weighting matrices, Q, R.

>> P = are(A, B/R*B', Q)            % Solve for X such that A'X + XA - XBX + Q = 0
P =
    0.1627   -0.0469
   -0.0469    0.2514

>> K = R\B'*P                       % Now compute gain from Eqn. 9.57, K = inv(R)*B'*P
K =
    0.1847    0.6603

>> [Kerr,Perr] = ric(A,B,Q,R,K,P)   % Check results
Kerr =
     0     0
Perr =
   1.0e-014 *
    0.0999    0.2720
    0.1471    0.1887                % Residuals should be ~0, which they are

>> [P2,L,G] = care(A,B/chol(R),Q);  % Solve for X such that A'X + XA - XBB'X + Q = 0
>> K2 = chol(R)\G;                  % Compute controller gain given that G = B'X

>> [K2,P2,E] = lqr(A,B,Q,R)         % Gain from lqr should equal gain from care
K2 =
    0.1847    0.6603
This completes the controller design problem; the rest is just a simple closed-loop simulation as shown in Listing 9.10.
Listing 9.10: Closed loop simulation using an optimal steady-state controller gain.
C = eye(size(A)); D = zeros(2,1);
Gcl = ss(A-B*K, zeros(size(B)), C, D); % Hopefully stable optimal closed loop

T = 1; x0 = [-2,1]';           % final time & initial state, x0
[Y,t,X] = initial(Gcl,x0,T);
U = -X*K';                     % back-calculate input, u(t), for plotting

subplot(2,1,1); plot(t,X); ylabel('states'); % Refer Fig. 9.15.
subplot(2,1,2); plot(t,U); ylabel('input'); xlabel('time (s)')

²An older toolbox routine, are.m, essentially also does this calculation.
The state and input trajectories of this optimal controller given a non-zero initial condition are shown in Fig. 9.15.
[Figure: the two states (upper) and the input (lower) plotted against time over 0–1 s.]
Figure 9.15: A steady-state continuous LQR controller.
The difference between the fully optimal time-varying case and the simplification when using the steady-state solution to the Riccati equation is given in Fig. 9.14, which uses the same example from page 416. For short optimisation intervals there is a small difference between the steady-state solution and the fully evolving optimal solution, as shown in Fig. 9.14(b). However, as the final time increases, there is no discernible difference in the trajectories of either the states or the input in Fig. 9.14(a). This goes a long way to explain why control engineers hardly ever bother with the full optimal solution. The lower two plots in Fig. 9.14 trend the time-varying P(t) and K(t) matrices.
Solving the algebraic Riccati equation using Kronecker products
Notwithstanding that the algebraic Riccati equation of Eqn. 9.56 is nonlinear, we can devise an iterative scheme to solve for P∞ in a manner similar to that for the Lyapunov equation using vectorisation and Kronecker products described in section 2.9.5.
Collapsing Eqn. 9.56 to vectorised form gives

    [ I ⊗ Aᵀ + Aᵀ ⊗ I − I ⊗ (P∞BR⁻¹Bᵀ) ] vec(P∞) = −vec(Q)        (9.58)

which we now must solve iteratively due to the unknown P∞ in the coefficient matrix in Eqn. 9.58. Listing 9.11 uses this solution strategy, which works (most of the time) for modest sized problems, but is not numerically competitive with, nor as reliable as, the ARE scheme in MATLAB.
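The vectorisation identity behind Eqn. 9.58 is easy to verify numerically for an arbitrary (hypothetical) P. The Python sketch below checks it using column-major vec, matching MATLAB's P(:) convention:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
BRB = rng.standard_normal((n, n))     # stands in for B R^{-1} B'
P = rng.standard_normal((n, n))
I = np.eye(n)

# Column-major vec, matching MATLAB's P(:)
vec = lambda M: M.ravel(order='F')

lhs = vec(A.T @ P + P @ A - P @ BRB @ P)
D = np.kron(I, A.T) + np.kron(A.T, I)
rhs = (D - np.kron(I, P @ BRB)) @ vec(P)
print(np.abs(lhs - rhs).max())        # ~ 0: the identity behind Eqn. 9.58
```

The identity holds exactly (vec(AXB) = (Bᵀ ⊗ A) vec(X) applied term by term), which is what lets the listing below freeze one P∞ in the quadratic term and re-solve a linear system at each pass.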
Listing 9.11: Solving the algebraic Riccati equation for P∞ using Kronecker products and vectorisation, given matrices A, B, Q and R.
n = size(A,1); I = eye(size(A));  % Identity, I
BRB = B/R*B';                     % Constant matrix, B*inv(R)*B'
D = kron(I,A') + kron(A',I);      % vec(A'P+PA) = (kron(I,A') + kron(A',I))*vec(P)

P = I;                            % Start with P = I
tol = 1e-8; dP = tol*1e2;         % Suggested tolerance
while norm(dP) > tol
    Pnew = reshape(-(D - kron(I,P*BRB))\Q(:), n,n); % Solve Eqn. 9.58 iteratively for P
    dP = Pnew - P;
    P = Pnew;
end
In a similar manner to solving the Lyapunov equation using fsolve on page 81, we could also construct the algebraic Riccati matrix equation to be solved and submit that to the general nonlinear algebraic equation solver fsolve. This works in so far as it delivers a solution, but the solution may not be a stabilising one; in other words, while the matrix P∞ does satisfy the original matrix equation, Eqn. 9.56, it may not be symmetric, or may not lead to a stable closed-loop system. This is because the matrix Riccati equation has multiple solutions, and general purpose numerical schemes such as fsolve will return the first solution they come across, as opposed to the stabilising one that we are interested in. For this reason it is better to use the dedicated Riccati solving routines than the general purpose fsolve.
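The multiple-solution hazard is easiest to see in the scalar case. With a = b = q = r = 1 the ARE reads 2p − p² + 1 = 0, which has the two roots p = 1 ± √2, but only p = 1 + √2 stabilises the plant; a dedicated solver returns only that stabilising root. A small illustrative sketch (not from the book):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar example: a = b = q = r = 1. The ARE 2p - p^2 + 1 = 0 has two
# roots, p = 1 +/- sqrt(2), but only one of them stabilises the plant.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
roots = np.sort(np.roots([-1.0, 2.0, 1.0]).real)   # roots of -p^2 + 2p + 1
for p in roots:
    k = b * p / r                                  # scalar gain k = bp/r
    print(f"p = {p:+.4f}, closed-loop pole a - bk = {a - b*k:+.4f}")

# A dedicated Riccati solver returns only the stabilising root
P = solve_continuous_are(np.array([[a]]), np.array([[b]]),
                         np.array([[q]]), np.array([[r]]))
print(P[0, 0])    # 1 + sqrt(2) ~ 2.4142
```

A generic root-finder handed the same quadratic could just as easily land on 1 − √2, whose closed-loop pole 1 − (1 − √2) = √2 is unstable, which is exactly the failure mode described above.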
9.4.4 The discrete LQR
In the discrete domain, the optimisation problem analogous to Eqn. 9.26 is to search for the N future optimal control moves, u₀, u₁, …, u_{N−1}, to minimise

    J = ½ x_Nᵀ S x_N + ½ Σ_{k=0}^{N−1} ( x_kᵀ Q x_k + u_kᵀ R u_k )        (9.59)
where the continuous plant dynamics, Eqn. 2.37, are now replaced by the discrete equivalent

    x_{k+1} = Φ x_k + Δ u_k        (9.60)

and the state feedback control law, Eqn. 9.46, is as before. Now the discrete matrix difference Riccati equation analogous to Eqn. 9.44 is

    P_k = Q + Φᵀ P_{k+1} Φ − Φᵀ P_{k+1} Δ ( R + Δᵀ P_{k+1} Δ )⁻¹ Δᵀ P_{k+1} Φ        (9.61)

with final boundary condition

    P_N = S        (9.62)
and the feedback gain K at time t = kT is

    K_k = ( R + Δᵀ P_{k+1} Δ )⁻¹ Δᵀ P_{k+1} Φ        (9.63)

An alternative form to Eqn. 9.63 is

    K_k = R⁻¹ Δᵀ (Φᵀ)⁻¹ ( P_k − Q )        (9.64)

which uses the current P_k rather than the future version, but has the disadvantage that Φ must be invertible. Note that the optimal feedback gain K is now an (m × n) matrix rather than a row vector as in the SIMO pole placement case described in chapter 8.
It is possible to compute the value of the performance index J of Eqn. 9.59 in terms of the initial states and the initial P₀ matrix,

    J = ½ x₀ᵀ P₀ x₀        (9.65)

although this precise value of J probably has little practical worth in most industrial control applications.
Using the slightly sub-optimal discrete steady-state gain
Clearly the discrete optimal gain given by Eqn. 9.63 is again time varying, as it was in Eqn. 9.47 for the continuous case, but it can be computed offline since it is only dependent on the constant plant matrices Φ and Δ, the weighting matrices Q and R, and the time-varying P_k.
However, once again as in the continuous case given in section 9.4.3, we can use the considerably simpler steady-state approximation to Eqn. 9.61 with little reduction in optimality. We solve for the steady-state covariance,

    P∞ = Q + Φᵀ P∞ Φ − Φᵀ P∞ Δ ( R + Δᵀ P∞ Δ )⁻¹ Δᵀ P∞ Φ        (9.66)

by starting with P∞ = 0 and stepping (rather than integrating) backwards in time until P_k approximately equals P_{k+1}. It follows that the resultant K given by Eqn. 9.63 will now also be constant. This strategy is implemented in Listing 9.12, which is functionally equivalent to, but not as reliable as, the lqr routine in the control toolbox.
Listing 9.12: Calculate the discrete optimal steady-state gain by iterating until exhaustion. Note
it is preferable for numerical reasons to use lqr for this computation.
function [K,P] = dlqr_ss(Phi,Delta,Q,R)
% Steady-state discrete LQR by unwinding the Riccati difference eqn, Eqn. 9.61.
% Routine not as reliable as dlqr.m
P = zeros(size(Phi));  % Start with P = 0 at t = infinity
tol = 1e-5;            % default tolerance
P0 = P + tol*1e3;
while norm(P0-P,'fro') > tol
    P0 = P;
    P = Q + Phi'*P*Phi - Phi'*P*Delta*((Delta'*P*Delta+R)\Delta')*P*Phi;
end % while
K = (Delta'*P*Delta + R)\Delta'*P*Phi; % K = inv(R + Delta'*P*Delta)*Delta'*P*Phi
end % end function dlqr_ss
However, the backward iteration calculation technique given in dlqr_ss in listing 9.12 is both
crude and computationally dangerous. For this reason it is advisable to use the functionally
equivalent, but more robust dlqr (or lqr) provided in the CONTROL TOOLBOX.
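The same backward recursion is easily reproduced outside MATLAB. The Python sketch below (using a small hypothetical plant, not one from the book) unwinds Eqn. 9.61 from P = 0 and confirms that the iterate converges to the solution of the discrete algebraic Riccati equation as returned by SciPy:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete plant to exercise the recursion (Eqn 9.61)
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
Delta = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]])

P = np.zeros((2, 2))                  # start from P = 0 and iterate backwards
for _ in range(500):
    PD = Phi.T @ P @ Delta
    P = Q + Phi.T @ P @ Phi - PD @ np.linalg.solve(R + Delta.T @ P @ Delta, PD.T)

K = np.linalg.solve(R + Delta.T @ P @ Delta, Delta.T @ P @ Phi)   # Eqn 9.63

Pd = solve_discrete_are(Phi, Delta, Q, R)
print(np.abs(P - Pd).max())           # recursion converges to the DARE solution
```

As with dlqr_ss above, the fixed iteration count is crude; production code should prefer a dedicated DARE solver, which is exactly the advice given in the text.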
Listing 9.13 compares a continuous LQR regulator for the system given on page 413 with a discrete LQR sampled at a relatively coarse rate of Ts = 0.1 s. Note that in both the discrete and continuous cases we design the optimal gain using the same lqr function; MATLAB can tell which algorithm to use from the model characteristics.
Listing 9.13: Comparing the continuous and discrete LQR controllers.
A = [-2, -4; 3 -1]; B = [4; 6];   % Continuous plant, dx/dt = Ax + Bu
C = eye(size(A)); D = zeros(2,1); G = ss(A,B,C,D);
Q = eye(2); R = 2;                % Controller weighting matrices Q = I2, R = 2
K = lqr(G,Q,R);                   % Optimal gain, K = [k1, k2]
Gcl = ss(A-B*K, zeros(size(B)), C, D); % Continuous closed loop

T = 1; x0 = [-2,1]';              % final time & initial condition
[Y,t,X] = initial(Gcl,x0,T); U = -X*K'; % Simulate & back-calculate input

% Discrete version of the LQR controller
Ts = 0.1;                         % Rather coarse sampling time, Ts
Gd = c2d(ss(A,B,C,D),Ts);         % Discrete plant, x(k+1) = Phi*x(k) + Delta*u(k)
Kd = lqr(Gd,Q,R);                 % Design discrete LQR gain
Gcl_d = ss(Gd.A-Gd.b*Kd, zeros(size(B)), C, D, Ts);

[Yd,td,Xd] = initial(Gcl_d,x0,T); Ud = -Xd*Kd'; % Simulate & back-calculate input

subplot(2,1,1); plot(t,X,td,Xd,'o'); % Refer Fig. 9.16
subplot(2,1,2); plot(t,U,td,Ud);
Figure 9.16: Comparing discrete and continuous LQR controllers. Note that even with a relatively coarse sample time, the controlled response in both cases is practically identical.
[Figure: states (upper) and input (lower) against time over 0–1 s; legend: discrete input, Ts = 0.1; continuous input.]
Problem 9.2
1. Load in the Newell & Lee evaporator matrices (refer Appendix E.1) and design an LQR with equal weighting on the Q and R matrices by typing Q = eye(A); R = eye(B); and using the dlqr_ss function with a sample time of 1.0. This feedback matrix should be the same as the gain matrix given in [145, p55]³. Comment on any differences in solution you obtain.

2. Repeat the LQR design using the supplied dlqr function rather than the dlqr_ss function. Using dlqr, however, the gain elements are now slightly different. Is this difference important, and how would you justify your answer?

3. Use either dlqr_ss or dlqr to generate feedback gains for the limiting cases when either (or both) Q or R are zero. What difficulties do you encounter, and how do you solve them?

³Newell & Lee define the control law in a slightly different way, u = Kx, omitting the negative sign. This means that the K on p55 should be the negative of that evaluated by MATLAB.
Computing the performance of a steady-state discrete LQR controller in MATLAB
In the steady-state case, we are uninterested in the termination part of the performance index of Eqn. 9.59, leaving just

    J = ½ Σ_{k=0}^{∞} ( x_kᵀ Q x_k + u_kᵀ R u_k )        (9.67)
      = ½ x₀ᵀ P∞ x₀        (9.68)

reinforcing the fact that while the controller gain is not dependent on the initial condition, the value of the performance index is.
Suppose we had a plant

    x_{k+1} = [ 0 0; −0.5 1 ] x_k + [ 1; 0 ] u_k,  starting from x₀ = [ 2; 2 ]

with controller design matrices Q = diag[1, 0.5] and R = 1, and we wanted to compute the performance index J, comparing the numerical summation given by Eqn. 9.67 with the analytical strategy in Eqn. 9.68.
>> Phi = [0 0; -0.5, 1]; Del = [1 0]'; % Plant matrices Phi, Delta & controller design Q, R
>> Q = diag([1 0.5]); r = 1;
>> [K,P,E] = dlqr(Phi,Del,Q,r)         % Solve the Riccati equation for K & P.
K =
    0.2207   -0.4414
P =
    1.5664   -1.1328
   -1.1328    2.7656
Now that we have the steady-state Riccati equation solution, we can compute the performance, J, directly.
>> x0 = [2,2]';        % Initial condition, x0
>> J = 0.5*x0'*P*x0    % Performance J = 0.5*x0'*P*x0
J =
    4.1328
To obtain an independent assessment of this, we could simulate until exhaustion and numeri-
cally compute Eqn. 9.67. Since we have simulated the closed loop, we need to back-extract the
input by using the control law and the states.
Gcl = ss(Phi-Del*K, 0*Del, eye(2), 0, 1); % Form closed loop (Phi - Delta*K)
[y,t,x] = initial(Gcl,x0); % Let MATLAB establish a suitable final time
u = -x*K';                 % Back extract control input from closed loop simulation, u = -Kx
Now MATLAB has returned a vector of inputs and states, and we need to compute the summation given in Eqn. 9.67. One elegant way to calculate expressions such as Σ xᵀQx, given the states stacked in rows, is to use the MATLAB construction sum(sum(x.*x*Q)). This has the advantage that it can be expressed succinctly in one line of code and it avoids an explicit loop.
>> uRu = u'*u*r; % sum of u_k'*R*u_k, but only works for scalar r
>> J_numerical = 0.5*(sum(sum(x.*x*Q')) + uRu) % Performance 0.5*sum(x_k'*Q*x_k + u_k'*R*u_k)
J_numerical =
    4.1328
In this case the numerical value of J is very close to the analytical value.
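The same cross-check can be scripted in Python; the sketch below repeats the example using SciPy's solve_discrete_are, computing J both from Eqn. 9.68 and by summing Eqn. 9.67 over a long closed-loop simulation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

Phi = np.array([[0.0, 0.0], [-0.5, 1.0]])
Delta = np.array([[1.0], [0.0]])
Q = np.diag([1.0, 0.5]); R = np.array([[1.0]])
x0 = np.array([2.0, 2.0])

P = solve_discrete_are(Phi, Delta, Q, R)
K = np.linalg.solve(R + Delta.T @ P @ Delta, Delta.T @ P @ Phi)

# Analytical performance, J = 0.5 x0' P x0 (Eqn 9.68)
J_analytic = 0.5 * x0 @ P @ x0

# Numerical performance: simulate the closed loop "until exhaustion"
J_num, x = 0.0, x0.copy()
for _ in range(200):
    u = -(K @ x)
    J_num += 0.5 * (x @ Q @ x + u @ R @ u)
    x = (Phi - Delta @ K) @ x
print(round(J_analytic, 4), round(J_num, 4))   # both ~ 4.1328
```

Two hundred steps is far more than "exhaustion" here, since the closed-loop spectral radius is well below one, so the truncated sum matches the analytical value to printed precision.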
Varying the Q and R weighting matrices
The Q and R weighting matrices in the objective function Eqn. 9.26 reflect the relative importance of the cost of state deviations with respect to manipulated variable deviations, and are essentially the tuning parameters for this control algorithm. If we increase the Q matrix, then the optimum controller tries hard to minimise state deviations and is prepared to demand very energetic control moves to do it. Likewise, if we increase R, the weighting on the deviations from u, then excessive manipulated variable moves are discouraged, so consequently the quality of the state control deteriorates. If we are only really interested in output deviations, yᵀy, as opposed to state deviations, then Q is often set equal to CᵀC. This follows simply from the output relation y = Cx.
Fig. 9.17 and the listing below demonstrate the design and implementation of a discrete LQR
for a stable, interacting arbitrary plant. We will start from a non-zero initial condition and we
subject the system to a small external disturbance at sample time k = 20. In this listing, the state
deviations are considered more important than manipulated deviations.
% Create a simple, stable, interacting, 3*2 plant
Phi = [0.8 0.1 0.1; -0.1 0.7 0.2; 0 0.6 0.1];
Del = [1,2; -2,1; 0,-3]; C = [1,0,0]; D = [0,0];
t = [0:40]'; dist = 20;           % sample vector & disturbance point

Q = eye(size(Phi)); [n,nb] = size(Del); R = eye(nb,nb); % tuning constants
K = dlqr(Phi,Del,100*Q,R);        % Design LQR controller with Q = 100I, R = I
Gd = ss(Phi-Del*K, Del, C, D, 1); % closed loop system
U = zeros(size([t,t])); U(dist,:) = [1,0.2]; % no input except a disturbance

x0 = [-1,2,0]';                   % given initial condition
[Y,T,X3] = lsim(Gd,U,t,x0);       % do closed loop simulation
U3 = X3*K';                       % extract control vector from x
stairs(t,[X3,U3]);

% Compute performance indices: sum of x'*Q*x & sum of u'*R*u
perfx3 = sum(sum(X3.*(Q*X3')')), perfu3 = sum(sum(U3.*(R*U3')'))
Fig. 9.17 compares two responses. The left controlled response is when we consider state deviations more important than manipulated variations, i.e. the weighting factor for the states is 100 times that for the manipulated variables. In the right plot, the opposite scenario is simulated. Clearly in Fig. 9.17 the left controlled response returns a relatively low ISE (integral squared error) on the states, Σ xᵀQx = 12.3, but at the cost of a high ISE on the manipulated variables, Σ uᵀRu = 1.57. Conversely, the right controlled response shows the situation where, perhaps owing to manipulated variable constraints, we achieve a much lower ISE on the manipulated variables, but at the cost of doubling the state deviation penalty.
[Figure 9.17 appears here: two panels of state (upper) and input (lower) trajectories against sample number.
(a) A discrete LQR where state deviations are considered costly; q = 100, r = 1 (Σ xᵀQx = 12.3, Σ uᵀRu = 1.571).
(b) A discrete LQR where manipulated variable deviations are considered costly; q = 1, r = 100 (Σ xᵀQx = 27.4, Σ uᵀRu = 0.059).]
Figure 9.17: Varying the state and manipulated weights of a discrete linear quadratic regulator where there is a disturbance introduced at sample time k = 20. When state deviations are costly (q is large compared to r), the states rapidly return to setpoint at the expense of a large manipulated movement.
As Fig. 9.17 shows, the design of an optimal LQR requires the control designer to weigh the rel-
ative costs of state and manipulated deviations. By changing the ratio of these weights, different
controlled responses can be achieved.
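This trade-off can also be quantified without a full simulation by solving closed-loop Lyapunov equations for the two ISE terms. The Python sketch below (an aside: the disturbance is omitted and only the initial-condition response is costed) uses the same plant as the listing above and confirms that the state-heavy design achieves a lower state ISE at the price of a higher input ISE:

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Plant from the listing above
Phi = np.array([[0.8, 0.1, 0.1], [-0.1, 0.7, 0.2], [0.0, 0.6, 0.1]])
Delta = np.array([[1., 2.], [-2., 1.], [0., -3.]])
Q = np.eye(3); R = np.eye(2)
x0 = np.array([-1., 2., 0.])

def gain(Qw, Rw):
    P = solve_discrete_are(Phi, Delta, Qw, Rw)
    return np.linalg.solve(Rw + Delta.T @ P @ Delta, Delta.T @ P @ Phi)

def ises(K):
    # Closed-loop sums of x'Qx and u'Ru from x0, via Lyapunov equations
    Acl = Phi - Delta @ K
    Sx = solve_discrete_lyapunov(Acl.T, Q)
    Su = solve_discrete_lyapunov(Acl.T, K.T @ R @ K)
    return x0 @ Sx @ x0, x0 @ Su @ x0

K_stateheavy = gain(100 * Q, R)   # q = 100, r = 1
K_inputheavy = gain(Q, 100 * R)   # q = 1, r = 100

sx1, su1 = ises(K_stateheavy)
sx2, su2 = ises(K_inputheavy)
print(sx1 < sx2, su1 > su2)       # heavy Q: lower state ISE, higher input ISE
```

The ordering is not an accident of this plant: adding the two optimality inequalities for the two designs forces the state-heavy gain to have the smaller state cost and the larger input cost.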
As we did in section 8.2, we can modify the simulation given above to account for non-zero setpoints, x*, by using the control law u = K(x* − x) rather than Eqn. 9.46, so simulating the closed loop

    x_{k+1} = (Φ − ΔK) x_k + ΔK x*_k

thus enabling response to setpoint changes.
9.4.5 A numerical validation of the optimality of LQR
If the LQR is indeed an optimal controller, its performance must be superior to that of any arbitrarily designed pole-placement controller such as those tested in section 8.2.1. Listing 9.14 designs an optimal controller for the blackbox plant with equal state and input weights.
Listing 9.14: An LQR controller for the blackbox
>> G = ss(tf(0.98,conv([2 1],[1,1])),0.1) % Blackbox plant with sample time Ts = 0.1
>> [n,m] = size(G.b);      % system dimensions
>> Q = eye(n);             % state weight, Q = I2
>> R = eye(m);             % Input weight, R = r = 1
>> K = dlqr(G.a,G.b,Q,R)   % Compute optimal controller gain, K
K =
    4.7373   -3.5942
Now it would be interesting to compare this supposedly optimal controller with a pole-placement controller with poles arbitrarily placed at, say, λ = 0.5 ± 0.5i.
Listing 9.15: Comparing an LQR controller from Listing 9.14 with a pole-placement controller
>> pc = [0.5-0.5i; 0.5+0.5i]; % Desired closed loop poles, 0.5 +/- 0.5i
>> Kp = acker(G.a,G.b,pc);    % Pole-placement controller gain
Kp =
   13.6971   -5.7713

>> Gcl = G; Gcl.b = 0*G.b; Gcl.c = eye(n);
>> Gcl.a = G.a - G.b*K;       % Closed loop system with LQR, (Phi - Delta*K)
>> Gclpp = Gcl; Gclpp.a = G.a - G.b*Kp; % Closed loop system with pole-placement
>> x0 = [-1,2]';              % Simulate disturbance rejection
>> initial(Gcl,Gclpp,x0)      % Refer Fig. 9.18.
The trajectories of the states for the two schemes are given in Fig. 9.18. However, what is surprising is that the pole-placement response looks superficially considerably better than the somewhat sluggish, though supposedly optimal, LQR response. How could this be? Surely the optimally designed controller should be better than any ad-hoc designed controller?
Figure 9.18: Comparing the state trajectories for both pole-placement and LQR. Note that the optimal LQR is considerably more sluggish to return to the setpoint than the pole-placement controller.
[Figure: x1 (upper) and x2 (lower) against time over 0–3 s; legend: dlqr; pole-placement, λ = 0.5 ± 0.5i.]
What Fig. 9.18 has not shown us is the input trajectory. This is also important when comparing controlled responses, although typically perhaps not quite as important as the state variable trajectories.
We need to consider the input trajectory for at least three reasons. First, excessive deviations in input will almost certainly reach the input saturation constraints that are present in any real control system. Secondly, excessive movement in the input can cause unreasonable wear in the actuators. Finally, in the case where the input magnitude is related to cost, say fuel cost in an engine, the larger the magnitude, the more expensive the operating cost of the corrective action will be.
Fig. 9.19 reproduces the same problem, but this time includes the input trajectory. In this instance, the cost of the pole-placement controller is J = 1288, while for the LQR it is under half that at 553. Obviously different pole locations will give different performances, and it is up to us as control designers to choose the most appropriate pole location for the particular control application at hand. Using MATLAB, it is easy to do an exhaustive search over a reasonable domain to find the best pole location.
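The claim that no pole-placement gain can beat the LQR on its own objective can be checked directly: for any stabilising gain K, the cost from an initial state x₀ follows from a discrete Lyapunov equation. The Python sketch below uses a small hypothetical plant (not the blackbox model) and SciPy's place_poles:

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov
from scipy.signal import place_poles

def cost(Phi, Delta, K, Q, R, x0):
    # J = 0.5 x0' P x0, with P from the closed-loop Lyapunov equation
    # Acl' P Acl - P = -(Q + K'RK)
    Acl = Phi - Delta @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    return 0.5 * x0 @ P @ x0

# Hypothetical discrete plant (not the book's blackbox model)
Phi = np.array([[1.1, 0.2], [0.0, 0.9]])
Delta = np.array([[0.1], [0.1]])
Q = np.eye(2); R = np.array([[1.0]])
x0 = np.array([-1.0, 2.0])

P = solve_discrete_are(Phi, Delta, Q, R)
K_lqr = np.linalg.solve(R + Delta.T @ P @ Delta, Delta.T @ P @ Phi)

# An arbitrarily chosen pole-placement design, as in the text
K_pp = place_poles(Phi, Delta, [0.5 + 0.5j, 0.5 - 0.5j]).gain_matrix

J_lqr = cost(Phi, Delta, K_lqr, Q, R, x0)
J_pp = cost(Phi, Delta, K_pp, Q, R, x0)
print(J_lqr <= J_pp)   # the LQR gain is never beaten on its own objective
```

Trying other stabilising pole sets in place of 0.5 ± 0.5i changes J_pp but never drives it below J_lqr, which is the point the exhaustive search below makes graphically.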
[Figure: states (upper) and input (lower) against time over 0–3 s; legend: dlqr; pole-placement, λ = 0.5 ± 0.5i. Performance: LQR = 553, pole-placement = 1288.]
Figure 9.19: Comparing pole-placement and LQR, but this time including the input trajectories. Compared to Fig. 9.18, the input trajectory for the optimal LQR controller is considerably less active.
For real systems (that is, systems with real coefficients), the desired poles must appear in conjugate pairs. That means that we only need to consider one half of the unit circle. Furthermore, we should only consider the 1st quadrant, because in the 2nd quadrant the response will be overly oscillatory. Naturally, it goes without saying that in the discrete domain we must be inside the unit circle. I have chosen a set of trial poles lying on a polar grid from 0 degrees round to a little past 90°, and from a magnitude of about 0.1 to 0.9. These locations are shown in Fig. 9.20.
[Figure: trial pole locations marked with + on a polar grid inside the unit circle.]
Figure 9.20: We will test a pole-placement controller at each of the pole locations specified by a +. (See also Fig. 9.21 following.)
For each of these pole locations specified in Fig. 9.20, I will:

1. Design a pole-placement controller, then

2. simulate the response for a reasonably long time, which I assume will approximate infinity, and then

3. compute the performance criterion, J = Σ_{k=0}^{∞} ( x_kᵀ Q x_k + u_kᵀ R u_k ).
As you can see from the resultant contour plot in Fig. 9.21, the performance of our previously arbitrarily chosen pole location of λ = 0.5 ± 0.5i is reasonable, but not great, and clearly not optimal. From the plot, the best performance is located at a pole location of approximately 0.75 ± 0.15i, marked with a ⋆ in Fig. 9.21.

The question is: can we find this location without doing an exhaustive search, or indeed any type of numerical optimisation search strategy?
Figure 9.21: Trial pole-placement performance. The performance when λ = 0.5 ± 0.5i is approximately 1288, while the optimum performance is considerably lower at around 550, at pole location approximately 0.75 ± 0.15i.
[Contour plot of the performance J over the trial pole locations inside the unit circle; contours range from about 600 up to 11000.]
When we look at the closed loop poles of the LQR controller,
Listing 9.16: Computing the closed loop poles from the optimal LQR controller from Listing 9.14.
>> [K,S,E] = dlqr(G.a,G.b,Q,R)
K =
    4.7373   -3.5942    % Controller gain (as before).
S =
   90.4087  -65.2391
  -65.2391   50.4963
E =
   0.7800 + 0.1664i     % Optimum closed loop poles
   0.7800 - 0.1664i
then we see that in fact the computed closed-loop poles are intriguingly close to the optimum found numerically in Fig. 9.21. It turns out that the LQR routine did indeed find the optimum controller without going to the computational trouble of an exhaustive search. Just remember, of course, that this is not a proof of the optimality of the LQR; it is just a numerical validation. Nonetheless, you should find it a convincing argument.

Note, of course, that if we change the dynamic system, or the controller design weights, then the shape of the performance surface in Fig. 9.21, and the corresponding optimum, will also change.
Problem 9.3
1. Simulate the controlled response of the evaporator for about 500 time steps (T = 0.5 time units) with the initial conditions

    x₀ = [ 1  4.3  2.2 ]ᵀ

for the 4 cases:
(a) Q = R = I
(b) Q = 10³R = I
(c) 10³Q = R = I
(d) any other combination of your choice.

In each case, sketch the results of the input u and the states x with time, and calculate the ISE for the states, the manipulated variables, and the weighted sum. What conclusions can you deduce from the relative importance of Q and R?

2. Compare the performance of the full multivariable LQR controller with that of a controller comprised of just the diagonal elements of the controller gain. What does this say about the process interactions?
9.4.6 An LQR with integral states
The LQR developed in section 9.4 is only a proportional controller, so it will suffer from the problem of offset given setpoint changes, as graphically demonstrated in the upper plot of Fig. 9.23. To compensate, however, the gains in the true multivariable LQR controller can be higher than in multiple SISO controllers without causing instability, reducing, but not eliminating, the offset problem. If the offset problem is still significant, one can introduce extra integral states, z, defined as the integral of the normal states,

    z ≝ ∫ x dt        (9.69)
and can be augmented to the original system as

    d/dt [ x; z ] = [ A 0; I 0 ] [ x; z ] + [ B; 0 ] u        (9.70)
and the feedback control law is now

    u = −[ K_p  K_i ] [ x; z ]        (9.71)

where K_p are the gains for the proportional controller, and K_i are the gains for the integral controller. Now the LQR is designed for the new system with n additional integral states in the same manner as for the proportional-state-only case. In MATLAB we can formulate the new augmented system as

Ai = [A zeros(size(A)); eye(size(A)) zeros(size(A))]
Bi = [B; zeros(size(B))]

This modification improves the system type (i.e. the number of poles at the origin), enabling our controller to track inputs. [111, p802] gives further details.
SISO servo integral output control
If we want to track setpoint changes (servo-control) for a SISO system, we need only add one extra integral state. The augmented system is

    ẋ = Ax + Bu
    ż = r − y
    y = Cx
where x is the vector of original states, and u, y and r are the scalar input, output and setpoint respectively. We simply concatenate the new state to the old,

    x̃ ≝ [ x₁ x₂ ⋯ xₙ ⋮ z ]ᵀ

and the feedback gain row-vector in u = −K̃ x̃ is

    K̃ = [ k₁ k₂ ⋯ kₙ ⋮ kₙ₊₁ ]

So the closed-loop system is

    dx̃/dt = ( Ã − B̃K̃ ) x̃ + [ 0; 1 ] r

where the augmented matrices are:

    Ã = [ A 0; −C 0 ]  of size (n+1) × (n+1),    B̃ = [ B; 0 ]  of size (n+1) × 1
To implement, the augmented system must be controllable, which is a tougher requirement than
just ctrb(A, B). A block diagram of a pole-placement controller with a single integral state is
given in Fig. 9.22, after [111, p802]. The wide lines are vector paths.
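As a sketch of that controllability check (illustrative only; the 2-state plant and output matrix are assumed for the example), we can form the augmented pair and test that the determinant of the 3-by-3 controllability matrix is non-zero:

```python
# Sketch (assumption: illustrative plant only): checking that the augmented
# (A~, B~) pair is controllable by testing det[B~, A~B~, A~^2 B~] != 0
# for a 2-state SISO plant with one added integral state.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[-4.0, 3.0], [1.0, -6.0]]; B = [2.0, 1.0]; C = [1.0, 0.0]
# Augmented matrices: A~ = [[A, 0], [-C, 0]], B~ = [B; 0]
At = [A[0] + [0.0], A[1] + [0.0], [-C[0], -C[1], 0.0]]
Bt = B + [0.0]
cols = [Bt, matvec(At, Bt), matvec(At, matvec(At, Bt))]
ctrb = [[cols[j][i] for j in range(3)] for i in range(3)]  # columns side by side
controllable = abs(det3(ctrb)) > 1e-9
```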
Figure 9.22: State feedback control system with an integral output state
We can compare the effect of adding an integral state to remove the offset given the system

\[ \dot{\mathbf{x}} = \begin{bmatrix} -4 & 3 \\ 1 & -6 \end{bmatrix} \mathbf{x} + \begin{bmatrix} 2 \\ 1 \end{bmatrix} u \]

where we want to control x_1 to follow the reference r(t) using pole-placement. In other words, y = [1, 0]x. I will place the closed-loop eigenvalues for the original system at

\[ \lambda = \begin{bmatrix} -6 + 2i \\ -6 - 2i \end{bmatrix} \]

and with the extra integral state, I will add another pole at \(\lambda_3 = -10\). Fig. 9.23 compares the response of both cases. Clearly the version with the integral state, which tracks the setpoint with no steady-state error, is preferable.
Figure 9.23: State feedback with an integral state. Upper: response of the original system under proportional control, \(\lambda = (-6 \pm 2i)\), showing offset. Lower: improved response of the augmented system with integral states, \(\lambda = (-6 \pm 2i, -10)\), with no offset.
Servo LQR control of the black-box

A good example where poor trajectory-following control is advisable.

In experimental, rather than simulated, applications, since we do not measure the states directly, we must use a plant model to produce state estimates based on the measured outputs. In the experimental optimal regulator example shown in Fig. 9.24, we estimated online a simple 3-state model using recursive least-squares, after which we recomputed the optimal steady-state gains each sample time. In this application the sample time was relatively long, T = 1.5 seconds, and the RLS forgetting factor was \(\lambda = 0.995\). No feedback control was used for the first 20 sample times.
Problem 9.4

1. Design an integral-state LQR for the Newell & Lee evaporator. Newell and Lee suggest that the weightings on the integral states should be an order of magnitude less than the weightings on the states themselves, hence the augmented Q_i matrix is

\[ \mathbf{Q}_i = \text{diag} \begin{bmatrix} 1 & 1 & 1 & 0.1 & 0.1 & 0.1 \end{bmatrix} \]

and the manipulated weighting matrix R is as before.

2. Compare the feedback matrix from part 1 with that given in [145, p57]. Why are they different? What are the eigenvalues of the closed-loop response?
Figure 9.24: An experimental application of LQR servo control of the black box with 10q = r (dlqr servo: q = 1, r = 10). Plots: (a) output and setpoint; (b) input; (c) model parameters \(\theta\) (RLS, \(\lambda = 0.995\)); (d) controller gains K_1 and K_2.
Figure 9.25: A state-based estimation and control scheme
9.5 Estimation of state variables

Chapter 6 looked at various alternatives to identify the parameters of dynamic systems, given possibly noisy input/output measured data. This section looks at ways to estimate not the parameters, (they are assumed known by now), but the states themselves, x, by only measuring the inputs u, and the outputs y. This is a very powerful technique since if this estimation works, we can use state feedback control schemes without actually going to the trouble of measuring the states, but instead using only their estimates.

These estimators work by using a model to predict the output, \(\hat{y}\), which can be compared to the measured output, y. Based on this difference we can adjust, if necessary, the predicted states. The estimation of states in this way is known, for historical reasons, as a Kalman filter. A block scheme of how the estimator works is given in Fig. 9.25. Here we have a control scheme that, while demanding state-feedback, actually uses only state estimates rather than the actual hard-to-obtain states.

There are many texts describing the development (eg: [50, 96]) and applications (eg: [37, 189]) of the Kalman filter.
9.5.1 Random processes

The only reason that our estimation scheme will not work perfectly, at least in theory, is due to the corrupting noise, and to a lesser degree, incorrect initial conditions. Now in practice, noise is the garbage we get superimposed when we sample a real process, but when we run pure simulations, we can approximate the noise using some sort of random number generator such as described previously in section 6.2. These are correctly termed pseudo-random number generators, and most of them are of the very simple linear congruential type, and are therefore deterministic and not strictly random. Whole books have been written on this subject and many of them quote John von Neumann saying in 1951 that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."

MATLAB can generate two types of random numbers; namely uniform or rectangular, and normal or Gaussian. Using the uniform scheme, we can generate any other arbitrary distribution we may desire, including Gaussian. However in most practical cases, it is the Gaussian distributed random number we are more interested in. To visualise the difference, try:

x = rand(1000,1); hist(x)  % [default] uniformly distributed
y = randn(1000,1); hist(y) % normally distributed
Sometimes it is interesting to compare the actual probability density function or PDF obtained using the hist command with the theoretical PDF. To directly compare these, we need to scale the output of the histogram such that the integral of the PDF is 1. Fig. 9.26 compares the actual (scaled) histogram from \(10^4\) samples of a random variable, \(N(\mu = 12, \sigma = 2)\), with the theoretical probability density function. The PDF of a normal variate is, [190, p60],

\[ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2} \left( \frac{x-\mu}{\sigma} \right)^2 \right) \tag{9.72} \]
Figure 9.26: The probability density function of a normally distributed random variable \(x \sim N(\mu = 12, \sigma = 2)\); theoretical (solid) and experimental (histogram). The sampled statistics were mean = 12.0116 and \(\sigma^2\) = 4.03369.
Listing 9.17: Comparing the actual normally distributed random numbers with the theoretical probability density function.

xsig = 2; xmean = 12;              % Standard deviation, sigma, & mean
x = xsig*randn(10000,1)+xmean;     % sample (realisation) of r.v. x
xv = linspace(min(x),max(x),100)'; % approx range of x
pdf = @(xv) 1/sqrt(2*pi)/xsig*exp(-0.5*(((xv-xmean)/xsig).^2)); % expected PDF, Eqn. 9.72
area = trapz(xv,pdf(xv));          % Note integral of PDF should = 1.0
[numx,xn] = hist(x,30);            % Generate histogram [incidence #, x-value]
dx = mean(diff(xn));               % assume evenly spaced
scale = 1/dx/sum(numx);            % scale factor so total area = 1
freq = numx*scale;                 % normalise so total area = 1
bar(xn,freq); hold on; plot(xv,pdf(xv)); hold off % See Fig. 9.26
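The same scaling idea can be sketched without MATLAB. The following plain-Python version (an illustration, not the book's code) builds a manually binned, area-one histogram and the PDF of Eqn. 9.72:

```python
# A Python sketch of Listing 9.17: sample a normal r.v., build a histogram
# scaled so its total area is 1, and define the theoretical PDF of Eqn. 9.72.
import math, random

random.seed(1)
xsig, xmean = 2.0, 12.0
x = [random.gauss(xmean, xsig) for _ in range(10000)]

nbins = 30
lo, hi = min(x), max(x)
dx = (hi - lo) / nbins
counts = [0] * nbins
for xi in x:
    k = min(int((xi - lo) / dx), nbins - 1)   # clamp the right edge
    counts[k] += 1
freq = [c / (dx * len(x)) for c in counts]    # scale so total area = 1

area = sum(f * dx for f in freq)              # should be 1.0 by construction

def pdf(v):                                   # Eqn. 9.72
    return math.exp(-0.5 * ((v - xmean) / xsig) ** 2) / (xsig * math.sqrt(2 * math.pi))
```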
Multivariable random numbers
For a single time series \(x_k\) of n elements, the variance is calculated from the familiar formula

\[ \text{Var}(x) = \sigma_x^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \tag{9.73} \]

where \(\bar{x}\) is the mean of the vector of data. In the case of two time series of data \((x_k, y_k)\), the variances of x and y are calculated in the normal fashion using Eqn. 9.73, but the cross-variance or co-variance between x and y is calculated by

\[ \text{Cov}(x, y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) \tag{9.74} \]

Note that while the variance must be positive or zero, the co-variance can be both positive and negative, and that Cov(x, y) = Cov(y, x). For this two-vector system, we could assemble all the variances and cross-variances in a matrix P known as the variance-covariance matrix

\[ \mathbf{P} = \begin{bmatrix} \sigma_x^2 & \text{Cov}(x,y) \\ \text{Cov}(y,x) & \sigma_y^2 \end{bmatrix} \]

which is of course symmetric with positive diagonals for time series of real numbers. In normal usage, the name of this matrix is shortened to just the covariance matrix.

For an n-variable system, the co-variance matrix P will be a positive definite symmetric \(n \times n\) matrix. MATLAB can calculate the standard deviation (\(\sigma\)) of a vector with the std command and it can calculate the co-variance matrix of a multiple series using the cov command.

One characteristic of two independent random time series of data is that they should have no cross-correlation, or in other words, the co-variance between x and y should be approximately zero. For this reason, most people assume that the co-variance matrices are simply positive diagonal matrices, and that the off-diagonal or cross-covariance terms are all zero. Suppose we create three random time series vectors in MATLAB (x, y, z).
x = randn(1000,1)+3;   % generate 3 independent vectors
y = randn(1000,1)*6+1; % standard deviation, sigma = 6
z = randn(1000,1)*2-2; % variance sigma^2 = 2^2 = 4
a = [x y z];
cov(a)                 % find co-variance

The co-variance matrix of the three time series is in my case, (yours will be slightly different),

\[ \text{Cov}(x, y, z) = \begin{bmatrix} 1.02 & 0.11 & 0.10 \\ 0.11 & 37.74 & 0.30 \\ 0.10 & 0.30 & 4.22 \end{bmatrix} \approx \begin{bmatrix} 1 & 0 & 0 \\ 0 & 36 & 0 \\ 0 & 0 & 4 \end{bmatrix} \]

which shows the dominant diagonal terms, which are approximately the true variances (1, 36, 4), and the small, almost zero, off-diagonal terms.
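The covariance formulas above are easily computed by hand. Here is a plain-Python sketch of Eqns 9.73 and 9.74 (the short series are our own illustrative data, deliberately perfectly correlated):

```python
# Sketch: computing the variance-covariance matrix of Eqns 9.73-9.74
# directly, without MATLAB's cov, for two short series.

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)          # Eqn. 9.74 (1/n normalisation)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]               # perfectly correlated: y = 2x
P = [[cov(x, x), cov(x, y)],
     [cov(y, x), cov(y, y)]]           # symmetric variance-covariance matrix
```

Since y is an exact multiple of x here, the off-diagonal term equals the geometric mean of the two variances, the opposite extreme to the near-diagonal matrix above.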
Covariance matrices and ellipses

In many cases, the random data does exhibit cross-correlation, and we will see this clearly if the x, y data pairs fall in a pattern described by a rotated ellipse. We can compute the 95% confidence limits of this ellipse using

\[ (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{P}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \le \frac{p(n-1)}{n-p} F_{\alpha,p,(n-p)} \]

where p is the number of parameters (usually 2), and n is the number of data points (usually > 100). \(F_{\alpha,p,(n-p)}\) is the F-distribution at a confidence of \(\alpha\), in this case 95%. To compute values from the F-distribution we could either use the functions given in Listing 9.18,

Listing 9.18: Probability and inverse probability distributions for the F-distribution.

pf = @(x,v1,v2) betainc(v1*x/(v1*x+v2),v1/2,v2/2);
qf = @(P,v1,v2) fsolve(@(x) pf(x,v1,v2)-P,max(v1-1,1))

or use equivalent routines from the STIXBOX toolbox from Holtsberg mentioned on page 3.
Example of confidence limit plotting

Here we will generate 1000 random (x, y) samples with deliberate correlation, and we will compute the ellipse at a 95% confidence limit. Furthermore we can check this limit by using the inpolygon routine to calculate the actual proportion of data points that lie inside the ellipse. We expect around 95%.

1. First we generate some correlated random data,

\[ x \sim N(-0.3, 1) \quad \text{and} \quad y \sim N(2, 1.1) + x \]

These data pairs are correlated because y is explicitly related to x.

Listing 9.19: Generate some correlated random data.

N = 1e3;                % # of samples in Monte-Carlo
x = randn(N,1)-0.3;     % N(-0.3, 1)
y = 1.1*randn(N,1)+x+2; % Deliberately introduce some cross-covariance
plot(x,y,'.')           % See Fig. 9.27(a).

We can plot the (x, y) points, and we can see that the buckshot pattern falls in a rotated ellipse, indicating correlation as expected, as shown in Fig. 9.27(a).
2. The distribution of the data in Fig. 9.27(a) is more clearly shown in a density plot or a 3D histogram. Unfortunately MATLAB does not have a 3D histogram routine, but we can use the normal 2D histogram in slices along the y-axis as demonstrated in the following code snippet.

Listing 9.20: Plot a 3D histogram of the random data from Listing 9.19.

Nb = 30; % # of bins in a quick & dirty 3D histogram
xmin = min(x); ymin = min(y); xmax = max(x); ymax = max(y);
xv = linspace(xmin,xmax,Nb); yv = linspace(ymin,ymax,Nb);
[X,Y] = meshgrid(xv,yv); Z = 0*X;
for i=1:length(xv)-1
    idx = find(x>xv(i) & x<xv(i+1));
    Z(:,i) = hist(y(idx),yv)';
end % for
bar3(Z) % crude 3D histogram

This gives the density and 3D histogram plots given in Fig. 9.27(b) and (c).
Figure 9.27: Correlated noisy x, y data. (a) Individual (x, y) data. (b) A density plot. (c) A histogram in 2 independent variables.
3. To plot an ellipse, we first plot \(x = a\cos(t)\), \(y = b\sin(t)\) with \(t \in [0, 2\pi]\), rotate, and finally shift the origin to \(\boldsymbol{\mu}\). The directions of the semi-major and semi-minor axes of the ellipse are given by the eigenvectors of P, and the lengths are given by the square roots of the eigenvalues.

If V is the collection of eigenvectors \(\mathbf{v}_i\) of the \((p \times p)\) matrix P, then in the usual case of p = 2,

\[ \mathbf{V} \stackrel{\text{def}}{=} \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix} \]

and the transformation to the un-rotated ellipse is

\[ \mathbf{u} = \mathbf{V}^T (\mathbf{x} - \boldsymbol{\mu}) \]

This is shown in Listing 9.21 which calls the F-distribution using the anonymous functions given previously in Listing 9.18.
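For the 2-by-2 case the eigen-decomposition that supplies the ellipse axes has a closed form, so no toolbox is needed. The following sketch (illustrative only; the test matrix P is invented) computes the eigenvalues, the leading eigenvector, and the unscaled semi-axis lengths:

```python
# Sketch: closed-form eigen-decomposition of a symmetric 2x2 covariance
# matrix, giving the confidence-ellipse axis directions and lengths
# (before the F-statistic scaling of Listing 9.21 is applied).
import math

def eig2(P):
    """Eigenvalues and leading eigenvector of symmetric [[a,b],[b,d]]."""
    a, b, d = P[0][0], P[0][1], P[1][1]
    disc = math.sqrt(((a - d) / 2) ** 2 + b * b)
    lam1, lam2 = (a + d) / 2 + disc, (a + d) / 2 - disc   # largest first
    if b == 0:
        v1 = (1.0, 0.0) if a >= d else (0.0, 1.0)
    else:
        norm = math.hypot(lam1 - d, b)
        v1 = ((lam1 - d) / norm, b / norm)                # (P - lam1 I) v1 = 0
    return (lam1, lam2), v1

P = [[2.0, 1.0], [1.0, 2.0]]          # invented correlated covariance
(lam1, lam2), v1 = eig2(P)
axis_lengths = (math.sqrt(lam1), math.sqrt(lam2))
```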
Listing 9.21: Compute the uncertainty regions from the random data from Listing 9.20.

X = cov(x,y);              % sampled covariance matrix
[V,D] = eig(X); e = diag(D);
n = length(x); p = 2;      % # of data points & # of parameters
alpha = 0.95;              % confidence limit, alpha = 0.95
scale = p*(n-1)/(n-p)*qf(alpha,p,n-p); % Inverse F-distribution, see Listing 9.18.
t = linspace(0,2*pi,5e2)'; % parametric circle
uxc = sqrt(e(1)*scale)*cos(t); uyc = sqrt(e(2)*scale)*sin(t); % scaled circle

% Now rotate V'(x - mean(x))
c = (inv(V')*[uxc, uyc]')'; % basically inv(V)*x
xc = c(:,1); yc = c(:,2);
xc = xc + mean(x); yc = yc + mean(y); % shift for mean
4. As a final check, we could use the inpolygon routine to see if the actual proportion of points inside the ellipse matches the expected percentage. The final data points and the 95% ellipse are given in Fig. 9.28.

Listing 9.22: Validating the uncertainty regions computed theoretically from Listing 9.21.

% Now look at actual proportions
in = inpolygon(x,y,[xc;xc(1)],[yc;yc(1)]);
prop_inside = sum(in)/n;
plot(x(~in),y(~in),'r.','MarkerSize',1); hold on
plot(x(in),y(in),'b.','MarkerSize',1); plot(xc,yc,'k-');
hold off; axis('equal'); grid

For my data, 94.6% of the 1000 data points lie inside the confidence limits, which is in good agreement with the expected 95%.
Figure 9.28: Randomly generated data with a 95% confidence limit superimposed. In actual fact, 94.6% of the data points lie inside the confidence limits, which is a good agreement.
9.5.2 Combining deterministic and stochastic processes

If we have a perfect model of the process, no noise, and we know the initial conditions \(\mathbf{x}_0\) exactly, then a perfect estimator is simply the open-loop prediction of x, given the system model and the present and past inputs u. However, industrially there is always noise present, both in the measurements and in the state dynamics, so this open-loop prediction scheme will soon fail even if the initial estimates and model are good. A more realistic description of an industrial process is

\[ \mathbf{x}_{k+1} = \Phi \mathbf{x}_k + \Delta \mathbf{u}_k + \Gamma \mathbf{w}_k, \qquad \mathbf{w}_k \sim N(\mathbf{0}, \mathbf{Q}) \tag{9.75} \]
\[ \mathbf{y}_k = \mathbf{C}\mathbf{x}_k + \mathbf{v}_k, \qquad \mathbf{v}_k \sim N(\mathbf{0}, \mathbf{R}) \tag{9.76} \]

where w, v are zero-mean, white noise sequences with covariance matrices Q and R respectively. This model is still a discrete linear dynamic model, but the final terms are inserted to account for all the non-deterministic components of the model.
Mean and variance propagation in a dynamic system

Predicting the behaviour of the state mean and variance given a stochastic dynamic system sounds quite complicated, but actually in the discrete domain it is simple. The mean of the system, Eqn. 9.75, is the expected value,

\[ \bar{\mathbf{x}}_{k+1} = \mathcal{E}\left\{ \Phi \mathbf{x}_k + \Delta \mathbf{u}_k + \Gamma \mathbf{w}_k \right\} \tag{9.77} \]
\[ \qquad = \Phi \bar{\mathbf{x}}_k + \Delta \bar{\mathbf{u}}_k + \Gamma \bar{\mathbf{w}}_k \tag{9.78} \]

but since the input is deterministic, \((\bar{\mathbf{u}} \equiv \mathbf{u})\), and with no loss of generality the mean of the noise is zero, \((\mathcal{E}(\mathbf{w}) = \mathbf{0})\), Eqn. 9.78 collapses to

\[ \bar{\mathbf{x}}_{k+1} = \Phi \bar{\mathbf{x}}_k + \Delta \mathbf{u}_k \tag{9.79} \]

which is simply the deterministic part of the full stochastic dynamic process, and which should also be intuitive.
The propagation of the state variance, P, is slightly more complex. The state variance at the next sample time is, by definition,

\[ \mathbf{P}_{k+1} \stackrel{\text{def}}{=} \mathcal{E}\left\{ (\mathbf{x}_{k+1} - \bar{\mathbf{x}}_{k+1})(\mathbf{x}_{k+1} - \bar{\mathbf{x}}_{k+1})^T \right\} \tag{9.80} \]

and if we substitute the new mean from Eqn. 9.79 and the process, Eqn. 9.75, we get

\[ \mathbf{P}_{k+1} = \mathcal{E}\left\{ \left( \Phi(\mathbf{x}_k - \bar{\mathbf{x}}_k) + \Gamma \mathbf{w}_k \right) \left( \Phi(\mathbf{x}_k - \bar{\mathbf{x}}_k) + \Gamma \mathbf{w}_k \right)^T \right\} \tag{9.81} \]

which, if we expand out, and note that the variance of w is Q, gives the following matrix equation

\[ \mathbf{P}_{k+1} = \Phi \mathbf{P}_k \Phi^T + \underbrace{\Phi \mathbf{P}_{x_k w_k} \Gamma^T + \Gamma \mathbf{P}_{w_k x_k} \Phi^T}_{\text{very small, } \approx\, 0} + \Gamma \mathbf{Q} \Gamma^T \tag{9.82} \]
\[ \qquad = \Phi \mathbf{P}_k \Phi^T + \Gamma \mathbf{Q} \Gamma^T \tag{9.83} \]

We can ignore the two inner cross-product terms in Eqn. 9.82 because of the uncorrelatedness of the states and the white noise w. In a similar manner to the above application of definition, expansion and elimination of cross terms, one can find expressions for the propagation of the mean and variance of the output variables, y. Further details are given in [121, pp62-64].
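A quick scalar Monte-Carlo check (a sketch with invented values of phi, delta, q and u, not from the text) confirms that the ensemble mean and variance track Eqns 9.79 and 9.83:

```python
# A scalar sketch of the propagation equations: for
#   x_{k+1} = phi*x_k + delta*u_k + w_k,  Var(w) = q,
# the mean follows the deterministic model (Eqn. 9.79) and the variance
# follows P_{k+1} = phi^2 * P_k + q (Eqn. 9.83 with Gamma = 1).
import random

random.seed(0)
phi, delta, q = 0.8, 1.0, 0.04
u = 0.5
N = 50000                             # Monte-Carlo ensemble size
xs = [0.0] * N
for _ in range(5):                    # propagate 5 sample times
    xs = [phi * x + delta * u + random.gauss(0.0, q ** 0.5) for x in xs]

m = sum(xs) / N
var = sum((x - m) ** 2 for x in xs) / N

# analytic propagation of the same statistics
mean_a, P = 0.0, 0.0
for _ in range(5):
    mean_a = phi * mean_a + delta * u   # Eqn. 9.79 (scalar)
    P = phi ** 2 * P + q                # Eqn. 9.83 (scalar)
```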
Eqn. 9.83 is called the discrete Algebraic Lyapunov difference Equation, or ALE. If the system is stable, then the steady-state covariance, \(\mathbf{P}_\infty\), will be given at sample \(k = \infty\), or by solving the following matrix equation,

\[ \mathbf{P}_\infty = \Phi \mathbf{P}_\infty \Phi^T + \Gamma \mathbf{Q} \Gamma^T \tag{9.84} \]

in exactly the same way we used in section 2.9.4 when establishing the stability of a linear discrete-time system, either using Kronecker products, or using the MATLAB dlyap command. Note that the final state covariance does not depend on the initial uncertainty, \(\mathbf{P}_0\).

The propagation of the mean and variance of a linear dynamic system, given stochastic disturbances, is fundamental to what will be developed in the following section, which describes the Kalman filter estimation scheme. The Kalman filter tries to reduce the variance over time, which in effect is to reduce the magnitude of the matrix P using a clever feedback scheme, thus obtaining improved state estimates.

Problem 9.5 Verify by MATLAB Monte-Carlo simulation that the steady-state covariance for a stable linear system subjected to white noise input with known variance is given by the solution to the ALE, Eqn. 9.84. Compare your simulated actual covariance (perhaps using cov) with the predicted covariance (perhaps using dlyap).
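For a scalar system the ALE has a closed-form solution, which makes a compact check of the same idea (a sketch with invented phi and q):

```python
# Scalar sketch of the steady-state ALE: for x_{k+1} = phi*x_k + w_k with
# Var(w) = q and |phi| < 1, Eqn. 9.84 reads P = phi^2 * P + q, whose exact
# solution is P = q / (1 - phi^2). Iterating from any P0 must reach it,
# illustrating that the final covariance is independent of P0.

phi, q = 0.9, 1.0
P = 100.0                       # deliberately poor initial uncertainty P0
for _ in range(500):
    P = phi ** 2 * P + q        # the ALE iteration (scalar Eqn. 9.84)

P_exact = q / (1 - phi ** 2)    # closed-form steady-state covariance
```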
9.5.3 The Kalman filter estimation scheme

The estimation scheme consists of two parts: a prediction and a correction. For the prediction step, we will predict, using the open-loop model, the new state vector and the state covariance matrix. The correction part will adjust the state prediction based on the current measurement, and in so doing, hopefully reduce the estimated state covariance matrix.

We will denote the state vector as x and an estimate of this as \(\hat{\mathbf{x}}\). The prediction of the state using just the model is called the a priori estimate and is denoted \(\hat{\mathbf{x}}^-\). Once we have assimilated the measurement information, we have the a posteriori estimate, which is denoted simply as \(\hat{\mathbf{x}}\).

At sample time k, we have:

- The a priori estimate \(\hat{\mathbf{x}}_k^-\), which is the estimate of the true, but unknown, state vector \(\mathbf{x}_k\) before we assimilate the current measurement, \(\mathbf{y}_k\).

- We also assume we know the a priori covariance of \(\hat{\mathbf{x}}_k^-\),

\[ \mathbf{P}_k^- = \mathcal{E}\left\{ (\mathbf{x}_k - \hat{\mathbf{x}}_k^-)(\mathbf{x}_k - \hat{\mathbf{x}}_k^-)^T \right\} \]
Now we want to improve the estimate of \(\mathbf{x}_k\) by using the current measurement. We will do this by the linear blending equation

\[ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + \mathbf{L}_k \left( \mathbf{y}_k - \mathbf{C}\hat{\mathbf{x}}_k^- \right) \tag{9.85} \]

where \(\hat{\mathbf{x}}_k\) is the corrected state estimate or a posteriori estimate, and \(\mathbf{L}_k\) is the yet-to-be-determined blending factor or Kalman gain. We use the symbol L for the Kalman gain instead of the commonly used K because we have already used that symbol for the similar feedback gain for the linear quadratic regulator.

The best value of L will be one that minimises the mean-square error of the a posteriori covariance,

\[ \mathbf{P}_k = \mathcal{E}\left\{ (\mathbf{x}_k - \hat{\mathbf{x}}_k)(\mathbf{x}_k - \hat{\mathbf{x}}_k)^T \right\} \tag{9.86} \]
and substituting in Eqn. 9.85 gives

\[ \mathbf{P}_k = \mathcal{E}\left\{ \left[ (\mathbf{x}_k - \hat{\mathbf{x}}_k^-) - \mathbf{L}_k(\mathbf{C}\mathbf{x}_k + \mathbf{v}_k - \mathbf{C}\hat{\mathbf{x}}_k^-) \right] \left[ (\mathbf{x}_k - \hat{\mathbf{x}}_k^-) - \mathbf{L}_k(\mathbf{C}\mathbf{x}_k + \mathbf{v}_k - \mathbf{C}\hat{\mathbf{x}}_k^-) \right]^T \right\} \]

Now noting that \((\mathbf{x}_k - \hat{\mathbf{x}}_k^-)\) is uncorrelated with \(\mathbf{v}_k\), then

\[ \mathbf{P}_k = (\mathbf{I} - \mathbf{L}_k \mathbf{C})\, \mathbf{P}_k^- \,(\mathbf{I} - \mathbf{L}_k \mathbf{C})^T + \mathbf{L}_k \mathbf{R} \mathbf{L}_k^T \tag{9.87} \]

is an expression for the updated covariance \(\mathbf{P}_k\) in terms of the old covariance \(\mathbf{P}_k^-\).
We want to find the particular \(\mathbf{L}_k\) such that we minimise the terms along the diagonal of \(\mathbf{P}_k\). Note that

\[ \frac{d\,\text{trace}(\mathbf{A}\mathbf{B})}{d\mathbf{A}} = \mathbf{B}^T, \qquad \frac{d\,\text{trace}(\mathbf{A}\mathbf{C}\mathbf{A}^T)}{d\mathbf{A}} = 2\mathbf{A}\mathbf{C} \]

So now expanding \(\mathbf{P}_k\) from Eqn. 9.87 to explicitly show the terms in L gives

\[ \mathbf{P}_k = \mathbf{P}_k^- \underbrace{-\, \mathbf{L}_k \mathbf{C} \mathbf{P}_k^- - \mathbf{P}_k^- \mathbf{C}^T \mathbf{L}_k^T}_{\text{linear in } \mathbf{L}} + \underbrace{\mathbf{L}_k \left( \mathbf{C} \mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right) \mathbf{L}_k^T}_{\text{quadratic in } \mathbf{L}} \]

We wish to minimise the trace of P since that is the sum of the diagonal elements which we want to minimise. (Minimising the sum is the same as minimising each individually when positive.)

\[ \frac{d\,\text{trace}(\mathbf{P}_k)}{d\mathbf{L}_k} = \mathbf{0} = -2\left( \mathbf{C}\mathbf{P}_k^- \right)^T + 2\mathbf{L}_k \left( \mathbf{C}\mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right)^T \]
Solving for the optimum gain gives

\[ \mathbf{L}_k = \mathbf{P}_k^- \mathbf{C}^T \left( \mathbf{C}\mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right)^{-1} \tag{9.88} \]

where the matrix \(\mathbf{L}_k\) is called the Kalman gain.

Substituting Eqn. 9.88 into Eqn. 9.87 we have:

\[ \mathbf{P}_k = \mathbf{P}_k^- - \mathbf{P}_k^- \mathbf{C}^T \left( \mathbf{C}\mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right)^{-1} \mathbf{C}\mathbf{P}_k^- \]

or

\[ \mathbf{P}_k = \mathbf{P}_k^- - \mathbf{L}_k \left( \mathbf{C}\mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right) \mathbf{L}_k^T \]

or even

\[ \mathbf{P}_k = (\mathbf{I} - \mathbf{L}_k \mathbf{C})\, \mathbf{P}_k^- \tag{9.89} \]

All expressions for computing the updated \(\mathbf{P}_k\) from the prior \(\mathbf{P}_k^-\) are equivalent, but

1. All but Eqn. 9.87 are valid only for the optimum gain. (Eqn. 9.87 is valid for any gain.)

2. The last and simplest expression is the most used, but is susceptible to numerical ill-conditioning.
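A scalar numerical check (with invented values, not from the text) confirms that the three covariance-update expressions coincide at the optimal gain of Eqn. 9.88:

```python
# Sketch: scalar check that the three covariance-update expressions agree
# when the optimal Kalman gain of Eqn. 9.88 is used. Pm is the a priori
# covariance P^-, c is the (scalar) C, and r is the measurement variance R.

Pm, c, r = 2.0, 1.5, 0.5
L = Pm * c / (c * Pm * c + r)                           # Eqn. 9.88 (scalar)

P_joseph = (1 - L * c) * Pm * (1 - L * c) + L * r * L   # Eqn. 9.87, any gain
P_mid = Pm - L * (c * Pm * c + r) * L                   # optimal gain only
P_short = (1 - L * c) * Pm                              # Eqn. 9.89
```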
Prediction updates

The updated estimate \(\hat{\mathbf{x}}_{k+1}^-\) is obtained by using the discrete model

\[ \hat{\mathbf{x}}_{k+1}^- = \Phi \hat{\mathbf{x}}_k + \Delta \mathbf{u}_k \]

Note that we can ignore \(\mathbf{w}_k\) since it has mean zero and is uncorrelated with any previous w's. The a priori error is

\[ \mathbf{e}_{k+1}^- = \mathbf{x}_{k+1} - \hat{\mathbf{x}}_{k+1}^- = (\Phi \mathbf{x}_k + \Delta \mathbf{u}_k + \Gamma \mathbf{w}_k) - (\Phi \hat{\mathbf{x}}_k + \Delta \mathbf{u}_k) = \Phi \mathbf{e}_k + \Gamma \mathbf{w}_k \]

and the error covariance associated with \(\hat{\mathbf{x}}_{k+1}^-\) is

\[ \mathbf{P}_{k+1}^- = \mathcal{E}\left\{ \mathbf{e}_{k+1}^- \mathbf{e}_{k+1}^{-T} \right\} = \mathcal{E}\left\{ (\Phi \mathbf{e}_k + \Gamma \mathbf{w}_k)(\Phi \mathbf{e}_k + \Gamma \mathbf{w}_k)^T \right\} = \Phi \mathbf{P}_k \Phi^T + \Gamma \mathbf{Q} \Gamma^T \]
The prediction

The best prediction we can make for the new state vector is by using the deterministic part of Eqn. 9.75. Remember that Eqn. 9.75 is our true process \(\mathcal{S}(\cdot)\), and the deterministic part excludes the noise term since we cannot predict it.⁴ We may also note that the expected value of the noise term is zero, so if we were to add anything, it would be zero. Thus the prediction of the new state estimate at sample time kT is

\[ \hat{\mathbf{x}}_k^- = \Phi \hat{\mathbf{x}}_{k-1|k-1} + \Delta \mathbf{u}_{k-1} \tag{9.90} \]

which is precisely Eqn. 9.79. In addition to predicting the states one sample time in the future, we can also predict the statistics of those states. We can predict the co-variance matrix of the state predictions, \(\mathbf{P}_{k|k-1}\), the a priori co-variance, as

\[ \mathbf{P}_{k|k-1} = \Phi \mathbf{P}_{k-1|k-1} \Phi^T + \Gamma \mathbf{Q} \Gamma^T \tag{9.91} \]

which again is Eqn. 9.83. Without any measurement information, the state estimates obtained from Eqn. 9.90 with variances from Eqn. 9.91 are the best one can achieve. If our original dynamic system is stable, (\(\Phi\) is stable), then over time the estimate uncertainty P will converge to a steady-state value, and will not further deteriorate, despite the continual presence of process noise. This is termed statistical steady-state. Conversely however, if our process is unstable, then in most cases the uncertainty of the states will increase without bound, and P will not converge.

Stable system or unstable, we can improve these estimates if we take measurements. This introduces the correction step of the Kalman filter.

The correction

We first take a measurement from the real process to obtain \(\mathbf{y}_k\). We can compare this actual measurement with a predicted measurement to form an error vector \(\boldsymbol{\varepsilon}_k\) known as the predicted measurement residual error

\[ \boldsymbol{\varepsilon}_k = \mathbf{y}_k - \hat{\mathbf{y}}_k = \mathbf{y}_k - \mathbf{C}\hat{\mathbf{x}}_{k|k-1} \tag{9.92} \]

⁴ If we could predict it, then it would be part of the deterministic part!
It would seem natural to correct our predicted state vector by a magnitude proportional to this error

\[ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{L}_k \left( \mathbf{y}_k - \mathbf{C}\hat{\mathbf{x}}_{k|k-1} \right) \tag{9.93} \]

where \(\mathbf{L}_k\) is called the Kalman gain matrix at time t = kT and is similar to the gain matrix in the observers presented in chapter 8. The Kalman gain matrix \(\mathbf{L}_k\) and the co-variance of the state variables after the correction, \(\mathbf{P}_{k|k}\), the a posteriori co-variance, are

\[ \mathbf{L}_k = \mathbf{P}_{k|k-1} \mathbf{C}^T \left( \mathbf{C}\mathbf{P}_{k|k-1} \mathbf{C}^T + \mathbf{R}_k \right)^{-1} \tag{9.94} \]
\[ \mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{L}_k \mathbf{C})\, \mathbf{P}_{k|k-1} \tag{9.95} \]

and note the similarity of these equations to Eqns 9.61 and 9.63. This predictor/corrector scheme comprises the Kalman filter update.
Initialising the algorithm

To start the estimation algorithm, we need estimates of the initial state, \(\mathbf{x}_0\), and covariance, \(\mathbf{P}_{0|0}\). In practice, both these variables quickly converge to near optimal values, so the long-term performance of the estimator is not overly sensitive to these parameters. As in the recursive least squares estimator algorithms, it normally suffices to set \(\mathbf{P}_{0|0} \approx 10^6 \mathbf{I}\). If more outputs than states are available (m > n), then it is possible to solve for the initial states (in a least squares sense) and for the covariance given an initial measurement \(\mathbf{y}_0\),

\[ \mathbf{P}_{0|0} = \left( \mathbf{C}^T \mathbf{R}^{-1} \mathbf{C} \right)^{-1} \tag{9.96} \]
\[ \hat{\mathbf{x}}_{0|0} = \mathbf{P}_{0|0} \mathbf{C}^T \mathbf{R}^{-1} \mathbf{y}_0 \tag{9.97} \]

The above two formulae can be derived (assuming no input) from Eqn. 3.45 where the weighting matrix W = R. However [75, p1539] argues that using the above equations for the initial estimates is probably not worth the extra effort.
Algorithm 9.2 Discrete Kalman filter algorithm

What we must know before we can implement an evolving Kalman filter:

- Model matrices \(\Phi\), \(\Delta\), C
- The covariance matrices Q, R of the noise sequences.
- Initial prior state estimate, \(\hat{\mathbf{x}}_0^-\), and its error covariance, \(\mathbf{P}_0^-\).

At the kth iteration, do the following:

1. Compute the Kalman gain, \(\mathbf{L}_k\), from a priori information,

\[ \mathbf{L}_k = \mathbf{P}_k^- \mathbf{C}^T \left( \mathbf{C}\mathbf{P}_k^- \mathbf{C}^T + \mathbf{R}_k \right)^{-1} \]

2. Correct the a priori state estimate and its error covariance using the current measurement

\[ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + \mathbf{L}_k \left( \mathbf{y}_k - \mathbf{C}\hat{\mathbf{x}}_k^- \right) \]
\[ \mathbf{P}_k = (\mathbf{I} - \mathbf{L}_k \mathbf{C})\, \mathbf{P}_k^- \]

3. Project ahead, or the prediction step, using the model

\[ \hat{\mathbf{x}}_{k+1}^- = \Phi \hat{\mathbf{x}}_k + \Delta \mathbf{u}_k \]
\[ \mathbf{P}_{k+1}^- = \Phi \mathbf{P}_k \Phi^T + \Gamma \mathbf{Q} \Gamma^T \]

4. Wait out the remainder of the sample time, increment the counter, k ← k + 1, and then go back to step 1.
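Algorithm 9.2 collapses to a few lines for a scalar system. The following sketch (the plant, noise levels and data are invented for illustration) runs the gain/correct/predict cycle and confirms that the filtered estimate beats the raw measurement:

```python
# A minimal scalar implementation of Algorithm 9.2 (a sketch; phi, c, q, r
# and the data are invented). The true state is a noisy first-order process;
# the filter should accumulate less squared error than the raw measurement.
import random

random.seed(3)
phi, delta, c = 0.95, 1.0, 1.0
q, r = 0.01, 0.25
x, u = 0.0, 0.1                      # true state and constant input
xm, Pm = 0.0, 1.0                    # a priori estimate and covariance

err_kf, err_meas = 0.0, 0.0
for k in range(2000):
    # true plant and measurement (Eqns 9.75-9.76, scalar, Gamma = 1)
    x = phi * x + delta * u + random.gauss(0, q ** 0.5)
    y = c * x + random.gauss(0, r ** 0.5)
    # 1. Kalman gain from a priori information
    L = Pm * c / (c * Pm * c + r)
    # 2. correction
    xh = xm + L * (y - c * xm)
    P = (1 - L * c) * Pm
    # 3. project ahead (prediction)
    xm = phi * xh + delta * u
    Pm = phi * P * phi + q
    err_kf += (xh - x) ** 2
    err_meas += (y - x) ** 2
```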
Kalman filter example

Simulated results presented in Fig. 9.29 show the improvement of the estimated state when using a Kalman filter with a randomly generated 2-state discrete model, and plant and measurement noises with covariances

\[ \mathbf{Q} = 0.2\mathbf{I}, \qquad \mathbf{R} = \mathbf{I} \]

Fig. 9.29 compares (a) an open loop model with (b) states computed by directly inverting the measurement matrix, and (c) states obtained using the Kalman filter. The best practical scheme is the one where the estimated states best approximate the true states.
Figure 9.29: Comparing an open loop model prediction (left column, sse(ol) = 281) with direct measurement inversion (middle column, sse(y) = 201) and the Kalman filter (right column, sse(KF) = 75). The Kalman filter has the lowest state error. See also the performance of the Kalman filter for a range of conditions in Fig. 9.33.
To compare the performance quantitatively, the sum of the square errors, (sse), is computed for each of the three alternatives. The Kalman filter, not surprisingly, delivers the best performance.
9.5.4 The steady-state form of the Kalman filter

Analogous to the optimal regulator gain given in Eqn. 9.47, the Kalman gain matrix \(\mathbf{L}_k\) in Eqn. 9.93 is also time varying. However in practical instances, this matrix is essentially constant for most of the duration of the control interval, so a constant matrix can be used with very little drop in performance. This constant matrix is called the steady-state form of the Kalman gain matrix and is equivalent to the steady-state LQR gain matrix used in section 9.4.

We can solve for the steady-state Kalman gain matrix by finding P such that

\[ \mathbf{P} = \mathbf{Q} + \Phi \mathbf{P} \Phi^T - \Phi \mathbf{P} \mathbf{C}^T \left( \mathbf{R} + \mathbf{C}\mathbf{P}\mathbf{C}^T \right)^{-1} \mathbf{C}\mathbf{P}\Phi^T \tag{9.98} \]
is satisfied. To solve Eqn. 9.98 for P, we could use an exhaustive iteration strategy where we iterate until P converges, or we could notice that this equation is in the form of a discrete algebraic Riccati equation, for which we can use the function dare. We should note that the routine dare solves a slightly different Riccati form than Eqn. 9.98, so follow the procedure given in Listing 9.23.

Listing 9.23: Solving the discrete time Riccati equation using exhaustive iteration around Eqn. 9.98, or alternatively using the dare routine.

P = eye(n); Pnew = 0; % Initial condition for P should be positive definite.
tol = 1e-6; i=0;
while norm(P-Pnew)>tol % Repeat until P(k+1) is approx P(k).
    P = Pnew; i=i+1;
    Pnew = Q + A*P*A' - A*P*C'/(R+C*P*C')*C*P*A'; % Refer Eqn. 9.98.
end

% Now try using dare.m to solve Eqn. 9.98
[Pdare,L,G] = dare(A',C',Q,R); % We are assuming E = I, S = 0
Once we have the steady-state matrix P, then the steady-state Kalman gain is given by Eqn. 9.94,

\[ \mathbf{L} = \Phi \mathbf{P} \mathbf{C}^T \left( \mathbf{C}\mathbf{P}\mathbf{C}^T + \mathbf{R} \right)^{-1} \tag{9.99} \]

which is known as the prediction-form of the Kalman filter; see also Eqn. 9.102. Note also that I have dropped the time subscripts on P and L since now they are both time invariant.

The MATLAB routine dlqe (discrete linear quadratic estimator) uses an equivalent, but numerically robust, scheme to compute the steady-state discrete optimum Kalman gain matrix L and steady-state covariance P.
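A scalar analogue of Listing 9.23 (a sketch with invented numbers) iterates Eqn. 9.98 to convergence and then forms both gain variants:

```python
# Scalar sketch of the steady-state design: iterate the discrete Riccati
# equation 9.98 until convergence, then form the prediction-form gain of
# Eqn. 9.99/9.102 and the current-estimator gain of Eqn. 9.106.

phi, c, q, r = 0.9, 1.0, 0.5, 1.0
P, Pnew = 1.0, 0.0
while abs(P - Pnew) > 1e-12:
    P = Pnew
    Pnew = q + phi * P * phi - phi * P * c / (r + c * P * c) * c * P * phi  # Eqn. 9.98
P = Pnew

L_pred = phi * P * c / (c * P * c + r)   # prediction-form gain
L_ce = P * c / (c * P * c + r)           # current-estimator gain
```

With these numbers the fixed point satisfies the quadratic P² − 0.31P − 0.5 = 0, and the prediction-form gain is exactly phi times the current-estimator gain.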
9.5.5 Current and future prediction forms

In fact there are two common ways to implement a discrete Kalman filter, depending on what computational order is convenient in the embedded system. Ogata refers to them as the prediction-type and the current-estimation-type in [148, p880-881], while the Control toolbox in MATLAB refers to them as delayed and current in the help file for kalman.

The main difference between the current estimate and the prediction type is that the current estimate uses the most current measurements \(\mathbf{y}_k\) to correct the state \(\hat{\mathbf{x}}_k\), whereas the prediction type uses the old measurements \(\mathbf{y}_{k-1}\) to correct the state \(\hat{\mathbf{x}}_k\).

For both cases, we assume our true plant is of the form we used in Eqns 9.75 and 9.76, repeated here with \(\Gamma = \mathbf{I}\),

\[ \mathbf{x}_{k+1} = \Phi \mathbf{x}_k + \Delta \mathbf{u}_k + \mathbf{w}_k \]
\[ \mathbf{y}_k = \mathbf{C}\mathbf{x}_k + \mathbf{v}_k \]

and we want to estimate x in spite of the unknown noise terms.
Prediction-type

The prediction-type state estimate is given by

\[ \hat{\mathbf{x}}_{k+1} = \Phi \hat{\mathbf{x}}_k + \Delta \mathbf{u}_k + \mathbf{L}\left( \mathbf{y}_k - \mathbf{C}\hat{\mathbf{x}}_k \right) \tag{9.100} \]

where the constant covariance and Kalman gain are

\[ \mathbf{P} = \mathbf{Q} + \Phi \mathbf{P} \Phi^T - \Phi \mathbf{P} \mathbf{C}^T \left( \mathbf{R} + \mathbf{C}\mathbf{P}\mathbf{C}^T \right)^{-1} \mathbf{C}\mathbf{P}\Phi^T \tag{9.101} \]
\[ \mathbf{L} = \Phi \mathbf{P} \mathbf{C}^T \left( \mathbf{R} + \mathbf{C}\mathbf{P}\mathbf{C}^T \right)^{-1} \tag{9.102} \]

A block diagram of the prediction-type Kalman filter implementation is given in Fig. 9.30.
Figure 9.30: A block diagram of a steady-state prediction-type Kalman filter applied to a linear discrete plant. Compare with the alternative form in Fig. 9.31.
Current estimation-type

The current-estimation state estimate is given by

\[ \hat{\mathbf{x}}_{k+1} = \mathbf{z}_{k+1} + \mathbf{L}\left( \mathbf{y}_{k+1} - \mathbf{C}\mathbf{z}_{k+1} \right) \tag{9.103} \]
\[ \text{where} \quad \mathbf{z}_{k+1} = \Phi \hat{\mathbf{x}}_k + \Delta \mathbf{u}_k \tag{9.104} \]

and where the covariance is the same as above in Eqn. 9.101,

\[ \mathbf{P} = \mathbf{Q} + \Phi \mathbf{P} \Phi^T - \Phi \mathbf{P} \mathbf{C}^T \left( \mathbf{R} + \mathbf{C}\mathbf{P}\mathbf{C}^T \right)^{-1} \mathbf{C}\mathbf{P}\Phi^T \tag{9.105} \]

but the Kalman gain is now

\[ \mathbf{L} = \mathbf{P} \mathbf{C}^T \left( \mathbf{R} + \mathbf{C}\mathbf{P}\mathbf{C}^T \right)^{-1} \tag{9.106} \]

A block diagram of the current-estimator-type Kalman filter implementation is given in Fig. 9.31. Note that the prediction-type Kalman gain is obtained simply by multiplying the current-estimate gain by \(\Phi\), as is evident by comparing equations 9.102 and 9.106. Also note that the MATLAB routine dlqe solves for the current-estimator type.
Figure 9.31: A block diagram of a steady-state current estimator-type Kalman filter applied to a linear discrete plant. Compare with the alternative prediction form in Fig. 9.30.
Listing 9.24 computes the Kalman gain for a discrete plant with matrices

\[ \Phi = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 2 & -3 \end{bmatrix} \]

and with covariances Q = 2I, R = 1.
Listing 9.24: Alternative ways to compute the Kalman gain

>> A = [1,2; 3 4]; C = [2,-3]; % Discrete plant, here A used as Phi.
>> Q = 2*eye(2); R = 1; Gam = eye(2);
>> P = dare(A',C',Q,R)         % Solve Eqn. 9.101 for P.
P =
    27.0038   60.4148
    60.4148  148.0538
>> Ke_ce = P*C'/(R+C*P*C')     % Current estimator-type Kalman gain, Eqn. 9.106
Ke_ce =
   -0.1776
   -0.4513
>> Ke_pt = A*Ke_ce             % Prediction-type Kalman gain, Eqn. 9.102
Ke_pt =
   -1.0801
   -2.3377
>> Ke = dlqe(A,Gam,C,Q,R)      % Note dlqe delivers the current estimator-type Kalman gain
Ke =
   -0.1776
   -0.4513
Both implementations of the steady-state Kalman filter are very similar to the estimator part of Fig. 8.4. The main difference between this optimal estimator and the estimator designed using pole-placement in sections 8.3 and 8.4 is that we now take explicit notice of the process and measurement noise signals in the design of the estimator.
Of course knowing the states is not much use if you are not going to do anything with that information, so we could add a feedback controller to this strategy in the same way as we did for the pole-placement designed simultaneous state feedback controller and estimator in Fig. 8.4.
In closed form suitable for simulation

If we combine the plant dynamics with the Kalman filter, we get the following system which is convenient for simulation.

For the predictor form, by combining the plant dynamics in Eqns 9.75 and 9.76 with the Kalman filter update Eqn. 9.100, we get

    [x_{k+1}; x̂_{k+1}] = [Φ, 0; LC, Φ−LC] [x_k; x̂_k] + [Δ, Γ, 0; Δ, 0, L] [u_k; w_k; v_k]        (9.107)

where x is the true but unknown state vector, x̂ are the corrected best estimates of x, u are the manipulated variable inputs, and w and v are the process and measurement noise respectively.
For the current estimator form, by combining the plant dynamics with Eqns 9.103 and 9.104 we get

    [x_{k+1}; x̂_{k+1}] = [Φ, 0; LC, Φ−LCΦ] [x_k; x̂_k] + [Δ, Γ, 0; Δ−LCΔ, 0, L] [u_k; w_k; v_k]    (9.108)
For simplicity, let us suppose that we want equal weightings on the states and measurements. With this simplification we can write

    Q = qI,   R = rI

where q and r are positive scalars. As q/r → 0, L → 0 and the estimated dynamic equation becomes x̂_{k+1|k} = Φx̂_{k|k−1} + Δu_k, which is the pure open-loop model prediction. Conversely, as q/r → ∞, we are essentially discarding all process information, and relying only on measurement back-calculation.
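The convenience of the closed form is that the truth and the estimator can be simulated together with a single state update. A minimal NumPy sketch of the predictor form, Eqn. 9.107, follows; the plant, gain and noise levels are arbitrary choices for illustration only, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])  # assumed stable example plant
Del = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.3], [0.1]])              # assumed Kalman gain (not designed here)
n = 2

# Augmented system of Eqn. 9.107: state [x; xhat], input [u; w; v]
Phia = np.block([[Phi, np.zeros((n, n))],
                 [L @ C, Phi - L @ C]])
Dela = np.block([[Del, np.eye(n), np.zeros((n, 1))],
                 [Del, np.zeros((n, n)), L]])

z = np.zeros(2 * n)
z[n:] = [5.0, -5.0]                       # deliberately wrong initial estimate
for k in range(200):
    u = np.array([1.0])
    w = 0.1 * rng.standard_normal(n)      # process noise
    v = 0.5 * rng.standard_normal(1)      # measurement noise
    z = Phia @ z + Dela @ np.concatenate((u, w, v))

err = np.linalg.norm(z[:n] - z[n:])       # truth vs estimate after the transient
print(err)
```

Because the estimator block Φ − LC is stable here, the deliberately wrong starting estimate is forgotten and the residual error is set by the noise levels alone.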
Problem 9.6 Design a Kalman gain for the Newell & Lee evaporator. In this example, we are only measuring level and pressure, so the measurement matrix is now

    C = [1 0 0; 0 0 1]

The covariance matrices and optimum discrete gain are

    Q = 0.1*eye(8); R = diag([0.1, 5.0]); L = dlqe(Phi,Delta,C,Q,R)

Compare this gain with that on [145, p63]. Is the difference significant?
9.5. ESTIMATION OF STATE VARIABLES 447
9.5.6 An application of the Kalman filter

Let us suppose the composition meter on the evaporator failed, but we are required to estimate it for a state feedback controller. Since we have an industrial evaporator, we have noise superimposed on both the measurements and the process. We can simulate a scenario such as this. We will use Eqn. 9.108 as a starting point, but we will also add another equation, that of the uncorrected model predictions. Now we can compare 4 things:

1. The raw measurement, unavoidably corrupted by noise, and possibly incomplete;
       y_k = Cx_k + v                                             (9.109)

2. The uncorrected model state predictions
       x̂_{k+1} = Φx̂_k + Δu_k                                      (9.110)

3. The corrected state estimates using both model and the measurements processed by the Kalman Filter;
       x̂_{k+1} = f(x̂_k, y_k)                                      (9.111)

4. and of course the true state variable⁵
       x_{k+1} = Φx_k + Δu_k + w                                  (9.112)
The simulation given in Listing 9.25 uses a randomly generated discrete plant model. The actual manipulated variable trajectories are unimportant for this example, and the process noise variance is one fifth of the measurement noise variance. To further demonstrate the advantage of the estimator, I will assume that we have an error in the initial condition of the states.
Listing 9.25: State estimation of a randomly generated discrete model using a Kalman filter.
n = 3; nc = 2; m = 2;                      % # of states, outputs, inputs
[Phi,Del,C,D] = drmodel(n,nc,m);           % generate random discrete model
Gam = eye(n); D = zeros(size(D));          % process noise model & ignore any direct pass-through

t = [0:100]'; N = length(t);               % discrete sample time
U = [square(t/10), square(t/11)];
x0_true = zeros(n,1); x0_est = [-10 5 -7]' % True & estimated initial conditions

v = 0.5; w = 0.1; Q = w*eye(n); R = v*eye(nc);   % Noise covariances, Q = sigma_w^2*I, R = sigma_v^2*I
W = sqrt(w)*randn(N,n); V = sqrt(v)*randn(N,nc); % State & measurement noise sequences

[yp,xp] = dlsim(Phi,Del,C,D,U,x0_est);     % True deterministic model

L = dlqe(Phi,Gam,C,Q,R);                   % Steady-state optimal Kalman gain, Eqn. 9.99.
Phia = [Phi, zeros(n); L*C, Phi-L*C*Phi];  % Augmented system for simulation, Eqn. 9.108.
Dela = [Del, Gam, zeros(size(L)); Del-L*C*Del, zeros(size(Gam)), L];
Ca = eye(2*n); Da = zeros(2*n, m+n+nc);

[y,x] = dlsim(Phia,Dela,Ca,Da,[U,W,V],[x0_true; x0_est]);
x_true = x(:,1:n);      % true states, x
x_est = x(:,n+1:2*n);   % Kalman estimated/corrected states, xhat

⁵once again, only available in simulation land
y_meas = (C*x_true')' + V; % Actual noisy measurement, y
y_est = (C*x_est')';       % Predicted measurement, yhat = C*xhat
subplot(3,1,1); plot(t,x_true(:,1),t,x_est(:,1),'--',t,xp(:,1));
subplot(3,1,2); plot(t,x_true(:,2),t,x_est(:,2),'--',t,xp(:,2));
subplot(3,1,3); plot(t,x_true(:,3),t,x_est(:,3),'--',t,xp(:,3));
The most obvious scheme to reconstruct an estimate of the true states is to use only the measurements y. However, with this scheme we have two difficulties: first, we have noise superimposed on the measurements, and second, we cannot reconstruct all the states since C is not invertible (it is not even square). Without the estimator, we can only use the incomplete raw output measurements, rather than either the corrected states or filtered measurements. In addition, the poor state estimate at t = 0 is gradually compensated for. We can improve on the measurements by using the model
Figure 9.32: Estimation using the Kalman filter. Each of the three trends shows the true state (solid), the corrected estimate (dotted), and for comparison, the uncorrected open-loop model trajectory (dashed line).
[Three stacked trends of the states x₁, x₂ and x₃ against sample time, with legend: Estimate, Openloop model, True.]
information and the Kalman filter. One can see that the Kalman estimates (dotted) are better than simply using an open-loop model estimator (dashed line), particularly near the beginning of the simulation given the incorrect starting conditions.
9.5.7 The role of the Q and R noise covariance matrices in the state estimator

The Kalman filter is an optimum model-based filter, and is closely related to the optimum LQR controller described in section 9.4. If properly designed, the Kalman filter should give better results than a filter using only measurement or model information. We can rank different design alternatives by comparing the integral of the squared error between the true states x and our predictions x̂. The only design freedom for the Kalman filter is the selection of the Q and R matrices. The theory leads us to believe that if we select these matrices exactly to match the actual noise conditions, then we will minimise the error between our prediction and the true states.

Normally the covariance matrices Q and R are positive diagonal matrices, although strictly they need only be positive definite. The elements on the diagonal must be positive since they are variances, which are squared terms. If Q is large relative to R, then the model uncertainty is high and the estimates will closely follow the measured values. Conversely, if R is large relative to Q, then the measurement uncertainty is large, and the estimates will closely follow the model.
Suppose we have a very simple discrete dynamic system

    x_{k+1} = −0.5x_k + w_k                                       (9.113)
    y_k = x_k + v_k                                               (9.114)

If we were to design a Kalman filter with model and measurement covariances Q = 1.5, R = 0.5, we could use dlqe as shown in Listing 9.26.
Listing 9.26: Computing the Kalman gain using dlqe.
>> Phi = -0.5; Gam = 1; C = 1; % System x(k+1) = -0.5x(k) + w(k) with y(k) = x(k) + v(k)
>> Q = 1.5; R = 0.5;           % Plant & measurement covariance matrices
>> L = dlqe(Phi,Gam,C,Q,R)
L =
    0.7614
Now if we were to lose all confidence in our measurements, that effectively means Q = 0 (or R becomes very large), and we essentially discard any update. In this case the Kalman gain should be L = 0.

>> L = dlqe(Phi,Gam,C,0,R) % Q = 0: no process noise, so trust the model and ignore the measurements
L =
     0
Alternatively, if we lose all confidence in our model, then R becomes small relative to Q, and the Kalman gain in this case tends to 1 for the current estimator. (Note that we cannot set R = 0 exactly because R must be strictly positive definite.)

>> L = dlqe(Phi,Gam,C,Q,1e-6) % R -> 0: poor model, so trust the measurements
L =
    1.0000
Now the update expression using the current measurement estimator is

    x̂_k = x̂⁻_k + L(y_k − Cx̂⁻_k)

so obviously if L = 1 (and here C = 1), then x̂_k = x̂⁻_k + y_k − x̂⁻_k = y_k, which means our state estimate is derived completely from the measurement, as anticipated given that we have a very poor model.
In the case of the prediction-type Kalman filter, we get

>> L = Phi*dlqe(Phi,Gam,C,Q,1e-6) % Prediction-type KF, Eqn. 9.102.
L =
   -0.5000

which also makes sense intuitively. In this case the update is x̂_{k+1} = Φx̂_k + L(y_k − Cx̂_k), or here x̂_{k+1} = −0.5x̂_k − 0.5(y_k − x̂_k) = −0.5y_k.
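These limiting gains are easy to reproduce without dlqe. The sketch below (Python rather than MATLAB, so that it stands alone) iterates the scalar covariance recursion to steady state and recovers the three gains discussed above.

```python
def kalman_gain_scalar(phi, q, r, iters=200):
    # Current estimator gain for x[k+1] = phi*x[k] + w, y = x + v  (c = 1)
    m = q                                  # a priori covariance, iterated to steady state
    for _ in range(iters):
        p = m * r / (m + r)                # corrected covariance, (1 - L)*m
        m = phi ** 2 * p + q               # time update
    return m / (m + r)                     # L = m/(m + r)

phi = -0.5
print(round(kalman_gain_scalar(phi, 1.5, 0.5), 4))        # -> 0.7614, as from dlqe
print(kalman_gain_scalar(phi, 0.0, 0.5))                  # -> 0.0, trust the model
print(round(kalman_gain_scalar(phi, 1.5, 1e-6), 4))       # -> 1.0, trust the measurements
print(round(phi * kalman_gain_scalar(phi, 1.5, 1e-6), 4)) # prediction-type gain -> -0.5
```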
The optimality of the Kalman filter

The optimality of the Kalman filter is best demonstrated using a Monte-Carlo simulation. The script given in Listing 9.27 consists of two parts: the inner loop simulates estimation of a simple third-order plant and calculates the ISE performance,

    J = ∫₀^{t_f} (x̂ − x)ᵀ(x̂ − x) dt                               (9.115)

using a steady-state Kalman filter, while the outer loop varies the Kalman design q/r ratios. The inner loop is repeated many times (since slightly different results are possible owing to the different noise signals) in order to obtain a statistically valid result.

In Listing 9.27, the true process has q/r = 20. We will then compare the performance of a Kalman filter using different q/r values as the design parameters. If all goes well, we should detect a minimum (optimum) near the known true value of q/r = 20.
Listing 9.27: Demonstrating the optimality of the Kalman filter.
qr_v = logspace(-4,3,20)';  % # of points to plot, spanning 10^-4 to 10^3
nsum = 10;                  % # of inner iterations in Monte-Carlo simulation

s = tf([1 0],1); tau = 2; zeta = 0.5;
G = tf((3*s-1)/(7*s+1)/(tau^2*s^2+2*tau*zeta*s+1)); % G(s) = (3s-1)/((7s+1)(tau^2 s^2 + 2 tau zeta s + 1))
Ts = 0.2;                   % sample time
Gdss = ss(c2d(G,Ts,'zoh')); % discrete-time model
Phi = Gdss.a; Del = Gdss.b; C = Gdss.c; D = Gdss.d;
[n,m] = size(Del); p = length(C*C'); % system dimensions
t = Ts*[0:1000]'; N = length(t);     % time vector
U = 0.2*square(t/80).*square(t/15) + square(t/50); % any input
x0 = [-1 1 0]';             % some starting point
Gam = eye(n); R = eye(p);   % constant KF design
ves = 1e-6; wes = 20*ves;   % measurement & process noise
x0_est = x0; % [0;0;0];     % (correct) estimate of x0

Jstats = [];
for i = 1:length(qr_v)      % outer loop: design tests
  L = dlqe(Phi,Gam,C,qr_v(i)*eye(n),R); % Design Kalman gain
  LC = L*C;                 % Formulate the extended system
  Phib = [Phi, zeros(n); LC, Phi-LC];   % Current predictor
  Delb = [Del, Gam, zeros(size(L)); Del+L*D, zeros(size(Gam)), L];
  Ga = ss(Phib,Delb,eye(size(Phib)),0,Ts);

  jx = [];
  for j = 1:nsum            % don't repeat too often!
    W = sqrt(wes)*randn(N,n); V = sqrt(ves)*randn(N,p); % Add noise
    x = lsim(Ga,[U,W,V],t,[x0;x0_est]);
    dx = x(:,[1:3]) - x(:,[4:6]);  % x - xhat
    jx(j) = sum(sum(dx.*dx));      % Find state ISE performance
  end % j
  Jstats(i,:) = [mean(jx), std(jx), min(jx), max(jx)]; % collect statistics
end % i

Jm = Jstats(:,1);
h = loglog(qr_v,Jm,'-',qr_v,Jstats(:,[3 4]),':', ...
    qr_v,Jm*[1 1]+Jstats(:,2)*[1 -1],'--');
set(line(qr_v,Jm),'MarkerSize',18,'LineStyle','.');
y = axis; y = y(3:4); h = line(wes/ves*[1,1],y);
set(h,'LineStyle','-.','color','k');
xlabel(sprintf('Q/R ratio (# of loops=%3g)',nsum)); ylabel('error')
The plot you obtain should be similar to Fig. 9.33, which was produced using 10³ inner loops. This curve shows that the actual minimum obtained by simulation agrees quite well with the theoretically expected optimum at q/r = 20. Actually, the curvature of this function near the optimum is quite flat, which means that one can mis-design the q/r ratio by an order of magnitude and it really does not degrade the system performance much. This is fortunate since it is almost impossible to choose these design values accurately for any practical industrial process.
[Log-log plot of the estimation error against the Q/R ratio, with the minimum near the (q/r)* line.]
Figure 9.33: The performance of a Kalman filter for different q/r ratios. The shaded areas give the approximate 1σ and 2σ uncertainty regions of the performance. The optimum performance should be at q/r = 20, vertical dot-dashed line.
Adapting the KF tuning matrices

In any actual implementation of the Kalman filter, one does not usually know appropriate values for the Q and R covariance matrices a priori, and therefore these can effectively be viewed as tuning parameters. Tuning the KF is a cumbersome trial-and-error process that normally requires expensive state verification. Some techniques make use of the fact that the innovations (or residuals, see Eqn 9.92) of an optimal KF should be zero-mean white noise. One can easily store these innovations, and test this assumption once a sufficient number have been collected. [25] describes one technique due to Mehra that updates the Kalman gain and the R matrix if the whiteness condition of the innovations is not satisfied. There are many variations of adaptive KF tuning based on this theme, but for a variety of reasons they have found little acceptance in practice.
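The whiteness test itself is easy to sketch: compute the sample autocorrelations of the stored innovations and compare them with the approximate 95% confidence band ±2/√N. The example below is a hypothetical Python/NumPy illustration, with a moving-average filter standing in for the coloured innovations of a mistuned estimator.

```python
import numpy as np

def autocorr(e, max_lag=20):
    # Sample autocorrelations rho(1..max_lag) of an innovation sequence
    e = np.asarray(e, dtype=float) - np.mean(e)
    N = len(e)
    r0 = e @ e / N
    return np.array([(e[:-k] @ e[k:]) / N / r0 for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
white = rng.standard_normal(5000)      # innovations of a well-tuned filter
coloured = np.convolve(white, np.ones(10) / 10, mode='valid')  # correlated: mistuned

band = 2.0 / np.sqrt(len(white))       # approximate 95% confidence band
rho_w = autocorr(white)
rho_c = autocorr(coloured)
print(np.max(np.abs(rho_w)), np.max(np.abs(rho_c)), band)
```

The white sequence has autocorrelations near the band, while the coloured sequence violates it decisively, which is the signal an adaptive scheme would act upon.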
9.5.8 Extensions to the basic Kalman filter algorithm

There are many extensions to the basic Kalman filter algorithm, and the next few sections briefly describe some of the more important ones: modifications to the update equations that are more robust to numerical round-off, using the filter for nonlinear problems, tricks to save computing time, and estimating parameters along with states. (A full treatment is given in [96] and in the later chapters of [50], to name just a few.) These alternatives are generally preferred to the standard algorithms.
Numerically stable Kalman filter design equations

The Kalman filter design equations presented in section 9.5, whilst correct, do have a tendency to fail on actual industrial examples. This is due, in part, to:

1. the covariance matrices becoming indefinite (i.e., losing their positive definiteness) owing to the limited precision of the machine, and
2. the poor numerical properties of the covariance update equation.

This section describes an alternative scheme to design a Kalman filter that tries to avoid these two problems. Some solution strategies are very similar to the covariance concerns expressed in the parameter identification sections. See [101] or [5, p147] for further details.

One way to ensure that the covariance matrix remains symmetric is to include the hack

    P ← (P + Pᵀ)/2

somewhere in the update loop. While this correction does force symmetry, it does not however force positive definiteness, which is a more expensive condition to check. To check positive definiteness, we could attempt a Cholesky decomposition and keep a running check, using the optional second output of the chol routine with something like

    [R,rc] = chol(P); % do decomposition & supply return code, rc
    if rc ~= 0, disp('Matrix not +ve definite'), end
Recall that the measurement update is

    x̂_k = x̂⁻_k + K(y_k − Cx̂⁻_k)                                   (9.116)
    P_k = (I − K_kC)M_k                                           (9.117)
The troublesome equation is the measurement covariance update, Eqn 9.117. There are many variations attempting to improve the numerics of the recursive update, and some of the simpler rearrangements are given in [121, pp. 69-70]. Two of the most popular are the Upper-Diagonal (U-D) factorisation due to Bierman, and Potter's square root algorithm. The key idea of the U-D algorithm is that the covariance matrix is factored into

    P = UDUᵀ

where U and D are upper triangular and diagonal matrices respectively, and these are the matrices that are propagated rather than P. The algorithm for the U-D update is slightly complicated and difficult to vectorise in an elegant form for MATLAB, but many programs exist and one version is given in [200, pp. 167-168].
The Cholesky matrix square root

Any positive semi-definite symmetric matrix can be factored into the product of a lower triangular matrix and its transpose. This is analogous to a square root in the scalar sense, and is called the Cholesky matrix square root. If the Cholesky square root of matrix P is S, then by definition

    P ≝ SSᵀ                                                       (9.118)

For example,

    [1 2 3; 2 8 2; 3 2 14] = [1 0 0; 2 2 0; 3 −2 1] · [1 2 3; 0 2 −2; 0 0 1]                      (9.119)
              P                        S                       Sᵀ

MATLAB can extract the Cholesky square root from a matrix using the chol command, although this function actually returns the transpose, Sᵀ.
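The 3×3 example above is easy to check numerically. Note that NumPy's convention is the opposite of MATLAB's: np.linalg.cholesky returns the lower triangular factor S directly, whereas chol returns Sᵀ.

```python
import numpy as np

P = np.array([[1.0, 2.0, 3.0],
              [2.0, 8.0, 2.0],
              [3.0, 2.0, 14.0]])
S = np.linalg.cholesky(P)      # lower triangular factor, so that P = S @ S.T
print(S)                       # -> [[1, 0, 0], [2, 2, 0], [3, -2, 1]]
print(np.allclose(S @ S.T, P)) # -> True
```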
Updating the matrix square root

The basic idea of the square root filter is to replace the covariance matrices with their square roots. The covariance matrix can then never be indefinite, since it is calculated as the product of its two square roots. Using the square roots also has the advantage that we can compute with double the accuracy. We can define the following square roots:

    M ≝ SSᵀ,   P ≝ S₊S₊ᵀ,   Q ≝ UUᵀ,   R ≝ VVᵀ                    (9.120)

Now the measurement covariance update equation can be written as

    F = SᵀCᵀ                                                      (9.121)
    G = (R + FᵀF)^{1/2}                                           (9.122)
    S₊ = S − SFG⁻ᵀ(G + V)⁻¹Fᵀ                                     (9.123)

and the correction is

    x̂ = x̂⁻ + SFG⁻ᵀG⁻¹(y − Cx̂⁻)                                    (9.124)
In Eqn. 9.122, one finds the square root of a matrix. MATLAB can do this with the sqrtm (square root of a matrix) command. The expression A⁻ᵀ is a shorthand way of writing (A⁻¹)ᵀ, which is also equivalent to (Aᵀ)⁻¹. We can verify Eqn 9.123 by multiplying it with its transpose and comparing the result with Eqn 9.117. This algorithm is often called Potter's algorithm.
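The suggested verification is easily carried out numerically. The sketch below (Python/NumPy, with randomly generated positive definite M and R assumed purely for illustration) forms S₊ from Eqns 9.121-9.123 and confirms that S₊S₊ᵀ equals the conventional covariance update (I − KC)M of Eqn. 9.117.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 2
T = rng.standard_normal((n, n)); M = T @ T.T + n * np.eye(n)  # SPD prior covariance
C = rng.standard_normal((p, n))
T = rng.standard_normal((p, p)); R = T @ T.T + p * np.eye(p)  # SPD measurement covariance

def sym_sqrt(A):
    # Symmetric positive definite square root (what sqrtm delivers for SPD input)
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

S = np.linalg.cholesky(M)         # M = S S'
V = np.linalg.cholesky(R)         # R = V V'
F = S.T @ C.T                     # Eqn 9.121
G = sym_sqrt(R + F.T @ F)         # Eqn 9.122
Sp = S - S @ F @ np.linalg.inv(G.T) @ np.linalg.inv(G + V) @ F.T  # Eqn 9.123

# Conventional covariance measurement update, Eqn 9.117
K = M @ C.T @ np.linalg.inv(C @ M @ C.T + R)
P_conv = (np.eye(n) - K @ C) @ M
print(np.allclose(Sp @ Sp.T, P_conv))  # -> True
```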
Listing 9.28 calculates the steady-state Kalman gain using the square root method of Potter. The functionality and arguments are the same as those of the built-in MATLAB function dlqe.
Listing 9.28: Potter's algorithm.
function [K,M,P] = potter(Phi,Gamma,C,Q,R)
% [K,M,P] = potter(Phi,Gamma,C,Q,R)
% Potter's square root filter algorithm to design a Kalman filter
% Ref: Algorithm from IEEE Trans. Automatic Control, vol. 12, 1971
% See dlqe for argument list & description

% Starting estimates & constants
GQG = Gamma*Q*Gamma'; K = zeros(size(C')); Kprev = K - 0.5; tol = 1.0e-6;
P = zeros(size(Phi));
V = chol(R)'; % extract roots to start

while norm(K-Kprev,'fro') > tol
  Kprev = K;
  M = Phi*P*Phi' + GQG;  % Time update
  % Square root form for the measurement update
  S = chol(M)';          % Note transpose required from chol
  F = S'*C'; G = sqrtm(R + F'*F);
  Sp = S - (S*F/(G'))*((G+V)\F');
  P = Sp*Sp';            % Will always be positive definite
  K = S*F/(G')/G;
end % while
return % end function potter.m
Modify the Potter algorithm to make it more efficient. (For example, the calculation of K is not strictly necessary within the loop.)
Sequential processing

The Kalman filter was developed and first implemented in the 1960s when computational power was considered expensive. For this reason, there has been much optimisation done on the algorithm. One of these improvements was to sequentially process the measurements one at a time rather than as a vector. [5, p142] gives two reasons for this advantage. First, if the measurement noise covariance matrix R is diagonal, there is a reduction in processing time; and second, if for some reason there is not enough time between the samples to process the entire data (owing to an interrupt for example), then there is only a partial loss of update information to the optimised states, rather than a full loss of improvement.

In the sequential processing formulations of the Kalman filter equations, the Kalman gain L is in fact a single column vector, and the measurement matrix C is a row vector.
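For a diagonal R, processing the measurements one scalar at a time gives exactly the same result as the vector update, which is easy to confirm numerically. A small Python/NumPy sketch (with an arbitrary made-up system, not one from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 3, 2
T = rng.standard_normal((n, n)); M = T @ T.T + n * np.eye(n)  # prior covariance
C = rng.standard_normal((p, n))
R = np.diag([0.5, 2.0])               # diagonal measurement noise covariance
x = rng.standard_normal(n)            # prior state estimate
y = rng.standard_normal(p)            # measurement vector

# Batch (vector) measurement update
K = M @ C.T @ np.linalg.inv(C @ M @ C.T + R)
x_batch = x + K @ (y - C @ x)
P_batch = (np.eye(n) - K @ C) @ M

# Sequential update: process one scalar measurement at a time
x_seq, P_seq = x.copy(), M.copy()
for i in range(p):
    c = C[i:i + 1, :]                         # i-th row of C as a 1 x n matrix
    s = (c @ P_seq @ c.T).item() + R[i, i]    # scalar innovation variance
    k = (P_seq @ c.T) / s                     # n x 1 gain for this measurement
    x_seq = x_seq + (k * (y[i] - (c @ x_seq).item())).ravel()
    P_seq = (np.eye(n) - k @ c) @ P_seq

print(np.allclose(x_batch, x_seq), np.allclose(P_batch, P_seq))  # -> True True
```

Each pass through the loop involves only a scalar division, which is the source of the processing-time saving mentioned above.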
Diagonalising the R matrix

If the measurement noises are correlated, then R will not be diagonal. In this rare case, we can transform the system to one where the measurements are uncorrelated, and work with that. The transformation uses a Cholesky factorisation: R is factored into

    R = SSᵀ                                                       (9.125)

and the transformed measurements are then

    ȳ = S⁻¹y                                                      (9.126)
9.5.9 The Extended Kalman Filter

The Extended Kalman Filter or EKF is used when the process or measurement models are nonlinear. Once we have a nonlinear process, all the guarantees that the Kalman filter will converge and that the filter will deliver the optimal state estimates no longer apply. Notwithstanding, it is still worth a try, since in most cases even sub-optimal estimates are better than none. Almost all the published accounts of using a Kalman filter in the chemical processing industries are actually an extended Kalman filter applied to a nonlinear model.

The EKF is a logical extension of the normal linear Kalman filter. The estimation scheme is still divided into a prediction step and an estimation step. However, for the model and measurement prediction steps, the full nonlinear relations are used. Integrating the now nonlinear process differential equation may require a simple numerical integrator such as the Runge-Kutta algorithm. Thus the linear model update and measurement equations are now replaced by

    x̂_{k|k−1} = f(x̂_{k−1|k−1}, u_k, d, θ, t)                      (9.127)
    ŷ = g(x̂)                                                      (9.128)

Given the nonlinear model and measurement relations, updating the covariance estimates is no longer a simple, or even tractable, task. Here we assume that over our small prediction step of interest the system is sufficiently linear that we can approximate the change in the error statistics, which are only approximate anyway, with a linear model. Now all the other equations of the Kalman filter algorithm are the same, except that the discrete state transition matrix Φ is obtained from a linearised version of the nonlinear model evaluated at the best current estimate, x̂_{k|k−1}, following the techniques described in section 3.6. The creation of the linear approximation to the nonlinear model can be done at every time step, or less frequently, depending on the application and the availability of cheap computing. Linearising the measurement equation, Eqn 9.128, can be done in a similar manner.
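As a concrete illustration (my own toy scalar model, not an example from the text), one EKF cycle then looks as follows: predict with the full nonlinear model, but propagate the covariance with the local derivative ∂f/∂x in place of Φ.

```python
import numpy as np

# One EKF cycle for a toy scalar nonlinear plant (illustrative only):
#   x[k+1] = x[k] + Ts*(-x[k]^3 + u) + w,    y = x + v
Ts = 0.1

def f(x, u):
    return x + Ts * (-x ** 3 + u)   # nonlinear state update

def ekf_step(x_est, P, u, y, Q, R):
    # Prediction using the full nonlinear model (cf. Eqn 9.127)
    x_pred = f(x_est, u)
    Phi = 1.0 + Ts * (-3.0 * x_est ** 2)   # linearised transition, d f/d x
    P_pred = Phi * P * Phi + Q
    # Correction; here the measurement is linear with C = 1
    L = P_pred / (P_pred + R)
    x_new = x_pred + L * (y - x_pred)
    P_new = (1.0 - L) * P_pred
    return x_new, P_new

rng = np.random.default_rng(4)
x_true, x_est, P = 1.0, 0.0, 1.0
for k in range(200):
    u = 0.5
    x_true = f(x_true, u) + 0.01 * rng.standard_normal()
    y = x_true + 0.1 * rng.standard_normal()
    x_est, P = ekf_step(x_est, P, u, y, Q=1e-4, R=1e-2)

print(abs(x_true - x_est))   # small residual estimation error
```

Relinearising at each step, as done here, is the most common choice; for slowly varying systems the Jacobian could be refreshed less often.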
Estimating parameters online using the Kalman filter

Chapter 6 discussed estimating model parameters online, which could then subsequently be used in an adaptive control scheme. In addition, there are many textbooks explaining how to regress parameters offline from experimental data. While there is some debate about the nomenclature, parameters are generally considered to be constant, or at least to change much more slowly than the states. In a heat exchanger, for example, the heat exchange surface area is constant, thus a parameter, while the flow may vary depending on demand, thus a state. Some variables lie somewhere in between: they are essentially constant, but may vary over a long time period. The heat transfer coefficient may be one such variable. It will gradually decrease as the exchanger fouls (owing to rust for example), but this occurs over time spans much larger than the expected state changes.

In many engineering applications, however, there is some (although possibly not complete) knowledge about the plant, and this partial knowledge should be exploited. This was not the case for the identification given in chapter 6, where only an arbitrary ARMA model was identified.

The Kalman filter can be extended to estimate both states, as given in section 9.5, and also parameters. Of course once parameters are to be estimated, the extended system generally becomes nonlinear (or if already nonlinear, now more nonlinear), so the Extended Kalman filter or EKF must be used. In this framework, the parameters to be estimated are treated just like states. I typically simply append the parameter vector θ to the bottom of the state vector x. Normally we would like to assume that the parameter is roughly constant, so it is reasonable to model the parameter variation as a random walk model,

    dθ/dt = 0 + v                                                 (9.129)

where v is a random vector of zero mean and known variance. A plot comparing normally distributed white noise with a random walk is shown in Fig. 9.34 and can be reproduced using:
v = randn(500,1); % white noise: mean zero, variance 1.0, Normal distbn.
x = cumsum(v);    % generate random walk
plot([v x])
Eqn 9.129 is the simplest dynamic equation for a parameter. In some specialised examples it may be worthwhile to use other simple dynamic models for the parameters, such as random ramps to model instrument drift, but in general this extension is not justified. Of course, with the extra parameters, the dimension of the estimation problem has increased, and the observability criteria must still hold.
Figure 9.34: An example of a random walk process: θ_{i+1} = θ_i + v. The solid line, v, is white noise with a mean of zero and a variance of 1. The dashed line is θ.
Non-Gaussian noise

The assumptions that the measurement and process noise sequences are Gaussian, uncorrelated, and zero-mean are usually not satisfied in practice. Coupled with the fact that the true system is probably nonlinear, it is surprising that an optimal linear estimator, such as the Kalman filter, or even the EKF, works at all.

Most textbooks on regression analysis approximate the residual between actual experimental data and a model as made up of two components: a systematic error and a random error. The Kalman filter is based on the assumption that the only error is due to a random component. This, in general, is a very strong assumption, and not satisfied in practice. However, coloured noise is treated theoretically in [50, Chpt 5].

One easy test to make is how the Kalman filter performs when the noise is correlated. We can generate correlated or coloured noise by passing uncorrelated or white noise through a low-pass filter.
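For instance, the following Python sketch generates coloured noise with a first-order low-pass filter and checks the lag-1 correlation (the filter pole a is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 20000
w = rng.standard_normal(N)            # white noise

# First-order low-pass filter: e[k] = a*e[k-1] + (1-a)*w[k]
a = 0.9
e = np.empty(N)
e[0] = w[0]
for k in range(1, N):
    e[k] = a * e[k - 1] + (1 - a) * w[k]

def lag1_corr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

print(round(lag1_corr(w), 2))   # near 0: white
print(round(lag1_corr(e), 2))   # near a = 0.9: coloured
```

Feeding such a coloured sequence into a Kalman filter designed for white noise is a simple way to probe how forgiving the filter is of this assumption.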
Problem 9.7 Chapter 7 discussed auto-tuning by relay feedback. One of the key steps in constructing an automated tuning procedure is to identify the period and amplitude of the oscillation in the measurement. Often, however, this is difficult owing to the noise on the measurement, but we can use an Extended Kalman filter to estimate these quantities in the presence of noise. Further details and alternative schemes are discussed in [11]. A signal that is the sum of a sinusoid and a constant bias can be represented by the following equations:

    ẋ₁ = 2πx₂/x₃
    ẋ₂ = −2πx₁/x₃
    ẋ₃ = 0
    ẋ₄ = 0
    y = x₁ + x₄

with the period and amplitude of the oscillation given by

    P = x₃,   α = √(x₁² + x₂²)

The idea is to estimate the states of the process given the output y, and then to calculate the period and amplitude online.

1. Linearise the dynamic equations to the standard state-space form.
2. Design a steady-state Kalman filter.
3. Simulate the estimation process. Plot the estimates of the period and amplitude for a noisy signal, and for a time-varying signal.
9.5.10 Combining state estimation and state feedback

One of the key aims of the Kalman filter is to provide estimates of unmeasured states that can then be used for feedback. When we combine state estimation with state feedback, we have a Linear Quadratic Gaussian or LQG controller.

Fig. 9.35 shows how we could implement such a combined estimator and controller in SIMULINK. The plant model and estimator are in yellow, the regulator in blue, and the noise terms are in orange. In this implementation we are using the 5-state aircraft model from [61] listed in Appendix E.2.

The design of the discrete regulator, perhaps using dlqr, and the design of the estimator, perhaps using dlqe, are completely independent.
9.5.11 Optimal control using only measured outputs

The optimal controllers of section 9.4 had the major drawback that they required state feedback, since the control law was a function of the states, namely Eqn. 9.46 or u = −Kx. We addressed this problem in section 9.5 by first designing an observer to produce estimated states which we could use in place of the true, but unknown, states in the controller. However, it would be valuable to be able to design an optimal controller directly that uses output, rather than state, feedback. Such an optimal output feedback controller in fact exists, but it is difficult to compute, and may only deliver a local optimum. The derivation of the optimum output controller can be found in [122, chapter 8], and improvements to the design algorithm are given in [165].

Once again we assume a linear plant

    ẋ = Ax + Bu,   x(0) = x₀
    y = Cx
[SIMULINK block diagram: a 5-state plant (blocks Phi, Del, C with a unit delay) driven by the input and corrupted by plant and measurement noise; a parallel estimator (blocks Phi1, Del1, C1 with a unit delay and Kalman gain Ke) whose state estimate is fed back through the regulator gain L. Signal widths are 5 states and 3 inputs/outputs.]
Figure 9.35: An example of a discrete LQG optimal estimator/controller implemented in
SIMULINK.
but this time we desire an output controller of the form

    u = −Ky

which minimises the familiar quadratic performance index

    J(K) = ½ ∫₀^∞ (xᵀQx + uᵀRu) dt

where A and Q are n×n, B is n×p, C is r×n, R is p×p, and K is p×r. Also we assume Q ≥ 0 (positive semi-definite) and R > 0.

The optimum gain K is given by the simultaneous solution to the following design equations:

    0 = A_cᵀP + PA_c + CᵀKᵀRKC + Q                                 (9.130)
    0 = A_cS + SA_cᵀ + X                                          (9.131)
    K = R⁻¹BᵀPSCᵀ(CSCᵀ)⁻¹                                         (9.132)

where the closed loop A_c and the covariance of the initial state are defined as

    A_c = A − BKC,   X ≝ E{x(0)xᵀ(0)}

and the performance index is given by

    J = trace(PX)                                                 (9.133)
The coupled nature of the multiple Lyapunov equations means that we will need to use a numerical technique to solve for the K that satisfies all three design equations. Our alternatives are:

1. To use a general-purpose unconstrained optimiser to search for the values of K that minimise Eqn. 9.133 directly, where P is given by the solution to Eqn. 9.130.

2. To use an unconstrained optimiser and exploit the gradient information, since the gradient of J with respect to K is

       ∂J/∂K = 2(RKCSCᵀ − BᵀPSCᵀ)

   where we have computed the auxiliary matrices P and S by solving the two Lyapunov equations, Eqns 9.130-9.131, perhaps using the MATLAB function lyap.

3. To follow a simple fixed-point algorithm where we guess a value for K and then iterate around Eqns 9.130 to 9.132.

Numerical experiments show that the direct optimisation is prone to infinite looping, and the fixed-point algorithm can converge to local optima.
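Alternative 3 is short enough to sketch. The version below is my own Python/NumPy illustration with a small Kronecker-product Lyapunov solver; it is checked on a deliberately trivial scalar plant where, because C = 1, output feedback coincides with state feedback and the known scalar LQR gain √2 − 1 should be recovered.

```python
import numpy as np

def lyap(A, Q):
    # Solve A X + X A' + Q = 0 via Kronecker products (small systems only)
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(M, -Q.ravel()).reshape(n, n)

# Scalar test plant: xdot = -x + u, y = x, with Q = R = X = 1
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.eye(1); R = np.eye(1); X = np.eye(1)

K = np.zeros((1, 1))              # initial stabilising guess
for _ in range(50):               # fixed-point iteration around Eqns 9.130-9.132
    Ac = A - B @ K @ C
    P = lyap(Ac.T, C.T @ K.T @ R @ K @ C + Q)   # Eqn 9.130
    S = lyap(Ac, X)                              # Eqn 9.131
    K = np.linalg.inv(R) @ B.T @ P @ S @ C.T @ np.linalg.inv(C @ S @ C.T)  # Eqn 9.132

print(round(K[0, 0], 4))          # -> 0.4142, i.e. sqrt(2) - 1
```

For genuinely restricted output feedback (C not square), the same loop applies, but each iterate must keep A − BKC stable and convergence to a global optimum is not guaranteed, as the text warns.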
9.6 Summary

This chapter dealt with optimal control using classical methods. Optimal control is where we try to control a process in some best manner. The term best in the engineering field usually means most economic (either most profit, or least cost). While it is naive to expect our controllers to perform optimally in the field, in practice they have been found to out-perform more conventional controllers designed via methods such as loop shaping or frequency response. I think of optimal controllers as simply another set of design rules to construct a workable, satisfactory controller, but would not expect them to be the best in any strict and absolute sense.

A convenient objective function is the quadratic objective function that tries to minimise the weighted sum of state and manipulated deviations squared over time. The optimal controller design problem given nonlinear process dynamics and constraints is, in general, too hard to solve online, although offline open-loop optimal profiles are possible using numerical techniques. The optimisation problem given linear dynamics is tractable, and the solution is termed the linear multivariable regulator (LMR or LQG). The LMR in its simple guise is a multivariable proportional controller. The fact that estimation is closely related to regulation was shown in chapter 8, and the optimal estimator is the Kalman filter.
Chapter 10
Predictive control
To be computationally practical, the optimal controllers of chapter 9 required that the system satisfy certain mathematical restrictions. To use variational calculus, the dynamic models must be differentiable; to use LQG, we require linear models with no saturation constraints. In contrast, the controllers described in this chapter take into account constraints in the manipulated and control variables, something which is crucial for practical control applications.
10.1 Model predictive control
Model predictive control, or MPC, first started appearing in a variety of flavours in the late 1970s and has since emerged as one of the most popular and potentially valuable control methodologies, particularly in the chemical processing industries. A good comprehensive survey of theoretical and practical applications is given in [52, 72, 162, 166].
The central theme of all these variants of predictive control is a controller that uses a process model to predict responses over a moving or receding horizon. Unlike the optimal control of section 9.3.1 or the linear cases following, this control strategy uses a finite optimisation horizon (refer Fig. 10.1). At every sample time, the optimiser built into the controller selects the best finite number of future control moves over a horizon of 10 or so samples, of which the controller will subsequently implement just the first. At the next sample time, the whole optimisation procedure is repeated over the new horizon, which has now been pushed back one step. This is also known as receding horizon control. Since an optimisation problem is to be solved at each sampling time step, one hopes that the calculations are not too involved!
This long-term predictive strategy, coupled with the seeking of multiple optimal future control moves, turns out to be a rather flexible control scheme where constraints in manipulated and state variables can be easily incorporated, and some degree of tuning achieved by lengthening or shortening the horizon. By predicting over the dead time and two or three time constants into the future, and by allowing the controller to approach the setpoint in a relaxed manner, this scheme avoids some of the pitfalls associated with traditionally troublesome processes such as those with inverse response or with excessive dead-time.
However, relying on predictions from models is not without its problems. I pity the Reserve Bank of New Zealand, which has the unenviable task of predicting the future time-varying interest rates at quarterly intervals as shown in Fig. 10.2. The dark solid line shows the actual 90-day interest rates from March 1999 until March 2003 compared to the predictions made at the originating
Figure 10.1: Horizons used in model predictive control, showing the past values, the current time and manipulated input, the planned future manipulated inputs over the control horizon, and the predicted outputs and known future setpoints over the prediction horizon.
point. This data was obtained from [192, p18] where it is cautioned: "While we consider our approach generally useful, there are risks. For example, if the interest rate projections are taken as a resolute commitment to a particular course of policy action, then those that interpret it as such will be misled."
Figure 10.2: Successive future 90-day interest
predictions by the Reserve Bank at quarterly
intervals.
The idea behind MPC is that we have a dynamic, possibly nonlinear, plant described as

    y_{i+1} = f(y_i, u_i)        (10.1)

where we wish the plant output y(t) to follow as closely as practical a possibly varying setpoint or reference r(t).
To establish the control move, we find a suitable u(t) that minimises the quantity

    J_t = Σ_{i=1}^{Np} (r_{t+i} − y_{t+i})²  +  Σ_{i=1}^{Nc} w Δu²_{t+i−1}        (10.2)

where the first sum is taken over the prediction horizon and the second over the control horizon,
subject to some input constraints

    u_i ∈ U        (10.3)
The set U in the constraint equation, Eqn. 10.3, is the set of allowable values for the input and typically takes into account upper and lower saturation limits in the actuator. For example, in the case of a control valve, the allowable set would be from 0 (shut) to 100% (fully open).
The optimisation of Eqn. 10.2 involves finding not just the one next control move, but a series of N_c future moves, one for each sample time over the future control horizon. In practice the control horizon is often about 5–15 samples long. The quantity optimised in Eqn. 10.2 is comprised of two parts. The first expression quantifies the expected error between the future model predictions, y, and the future setpoints, r. Typically the future setpoint is a constant, but if changes in reference are known in advance, they can be incorporated. The comparison between error and expected output is considered over the prediction horizon, N_p, which must be longer than the control horizon.
The second term in Eqn. 10.2 quantifies the change in control moves. By making the weighting factor, w, large, one can discourage the optimal controller from making overly excited manipulated variable demands.
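To make the structure of Eqn. 10.2 concrete, the cost can be coded directly. The following Python/NumPy sketch is illustrative only: the names are hypothetical and the model prediction is left as a user-supplied function.

```python
import numpy as np

def mpc_cost(du, r, predict, w=1.0):
    """Eqn. 10.2: squared setpoint error over the prediction horizon,
    plus w times the squared control moves over the control horizon.

    du      : candidate control moves over the control horizon (length Nc)
    r       : future setpoints over the prediction horizon (length Np)
    predict : function mapping the moves du to predicted outputs y (length Np)
    """
    y = predict(du)                          # model prediction over Np samples
    return np.sum((r - y) ** 2) + w * np.sum(du ** 2)
```

An optimiser then searches over du for the minimum of this cost at every sample instant.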
At each sample time, a new constrained optimisation problem is solved, producing an updated vector of future control moves. At the next sample instant, the horizons are shifted forward one sample time T, and the optimisation repeated. Since the horizons are continually moving, they are often termed receding horizons.
The chief drawback with this scheme is the computational load of the optimisation at every sample time. If the model, Eqn. 10.1, is linear, and we have no constraints, then we are able to analytically extract the optimum of Eqn. 10.2 without resorting to a time-consuming numerical search. Conversely, if we meet active constraints or have a nonlinear plant, then the computation of the optimum solution becomes a crucial step. Further details for nonlinear predictive control are given in the survey of [84].
Algorithm 10.1 Model Predictive Control (MPC) algorithm
The basic general predictive control algorithm consists of a design phase and an implementation phase. The design phase is:
1. Identify a (discrete-time) model of the plant to be controlled.

2. Select a suitable sampling time and choose the lengths of the control and prediction horizons, H_c, H_p, where H_c ≤ H_p, and the control move weightings, w. Typically H_p is chosen sufficiently long in order to capture the dominant dynamics of the plant step response.

3. For further flexibility, one can exponentially weight future control or output errors, or not include the output errors in the performance index for the first few samples. Making the prediction horizon longer than the control horizon effectively adds robustness to the algorithm, without excessively increasing the optimisation search space.
4. Incorporate constraints in the manipulated variable if appropriate. In the absence of constraints, and as the prediction horizon tends to infinity, the MPC will lead to the standard linear quadratic regulator as described in section 9.4.
The implementation phase is carried out at each sampling time:
1. Use an optimiser to search for the next N_c optimum future control moves, u_t, starting from the current position.

2. Implement only the first optimum control move found from step 1.

3. Wait one sample period, then go back to step 1. Use the previous optimum u_t vector (shifted forward one place) as the initial estimate in the optimiser.
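The implementation phase above can be written as a compact receding-horizon loop. This Python sketch is a structural outline only: the optimiser and plant are caller-supplied stand-ins, not the routines used elsewhere in this chapter.

```python
import numpy as np

def receding_horizon(optimise, step_plant, u0, n_steps):
    """Receding-horizon (MPC) loop: at each sample, optimise the Nc future
    moves, implement only the first, then shift the plan one place forward
    as the warm start for the next sample."""
    u_plan = np.asarray(u0, dtype=float)           # initial guess for Nc moves
    applied = []
    for k in range(n_steps):
        u_plan = optimise(u_plan, k)               # step 1: find the Nc optimal moves
        applied.append(u_plan[0])                  # step 2: implement only the first
        step_plant(u_plan[0])                      # the real plant advances one sample
        u_plan = np.append(u_plan[1:], u_plan[-1]) # step 3: shift forward & repeat
    return np.array(applied)
```

Warm-starting from the shifted previous plan is what keeps the per-sample optimisation affordable in practice.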
The following sections demonstrate various predictive controllers. Section 10.1.1 applies a predictive controller to a linear plant, but with input constraints. Because this is a nonlinear optimisation problem, a generic nonlinear optimiser is used to solve for the control moves. As you will see if you run the example in Listing 10.1, this incurs a significant computational burden. If we ignore the constraints, then we can use a fast analytical solution as shown in section 10.1.2, but this loses one of the big advantages of MPC in the first place, namely the ability to handle constraints. The MODEL PREDICTIVE CONTROL TOOLBOX for MATLAB described in [26] is another alternative for experimenting with MPC.
10.1.1 Constrained predictive control

This section demonstrates a constrained predictive control example using the following linear non-minimum phase plant,

    G_p = 1.5(1 − 2.5s) / (4s² + 1.8s + 1)        (10.4)

but where the manipulated variable is constrained to lie between the values −3.5 < u < 3.
Sampling Eqn. 10.4 at T = 0.75 with a zeroth-order hold gives the discrete system

    G_p(z) = (−0.4894z⁻¹ + 0.6661z⁻²) / (1 − 1.596z⁻¹ + 0.7136z⁻²)

whose open loop response to a step change is illustrated in Fig. 10.3.
whose open loop response to a step change is illustrated in Fig. 10.3.
To make the control problem more interesting, we will assume that
Since we have a constraint in the manipulated variable, I will use the general nonlinear con-
strained optimiser fmincon from the OPTIMISATION TOOLBOX. The function to be minimised is
fmodstpc in Listing 10.2 which returns the performance index.
We can see from the discrete step response that a sample time of T = 0.75 is a reasonable choice. However the response drops before it climbs to the final steady-state, exhibiting an inverse response or non-minimum phase behaviour owing to the right-half-plane zero at s = +0.4. With this sort of inverse behaviour, we must take care that the prediction horizon is sufficiently long to span the initial inverse response, which again we can estimate approximately from the discrete step response. In this case I will set H_p = 20 and optimise for 10 inputs, H_c = 10. The control move weightings are set to w = 1.
The script file to run the simulation and to obtain results similar to Fig. 10.4 is given in Listing 10.1.
Figure 10.3: An inverse plant model, G_p = 1.5(1 − 2.5s)/(4s² + 1.8s + 1), sampled at T_s = 0.75 s.
Listing 10.1: Predictive control with input saturation constraints using a generic nonlinear optimiser

ntot=150; dt=0.75; t=dt*[0:ntot-1]'; % time vector
r = [2,3,-2,-7,2]; % Interesting setpoint
Yspt = ones(size(t(1:fix(length(t)/length(r))+1)))*r(:)';
Yspt = Yspt(1:length(t))'; % Cut off excess data points

tau = 2.0; zeta = 0.45; Kp=1.5; taup=2.5; % Create non-minimum phase plant
Gss = ss(c2d(tf(Kp*[-taup 1],[tau^2 2*tau*zeta 1]),dt)); % discrete plant

Hc = 10;    % control horizon, Hc
Hp = Hc+10; % prediction horizon, Hp > Hc
Umin = randn(Hc+1,1); % start guess
Ulb = -3.5*ones(size(Umin)); Uub = 3*ones(size(Ulb)); % bounds
Uopt=zeros(size(t)); Y=zeros(size(t)); % initialise u(t)
x = [0,0]'; % initialise starting point
optns = optimset('Display','off','TolFun',1e-3,'TolX',1e-3); % optimiser options

for i=1:ntot-Hp-1 % for each sample time ...
  Umin = [Umin(2:end);Umin(end)]; % roll & repeat
  Umin = fmincon(@(u)fmodstpc(u,Gss,x,Yspt(i:i+Hp-1),Hp),Umin, ...
      [],[],[],[],Ulb,Uub,[],optns);

  x = Gss.a*x + Gss.b*Umin(1,:)'; % do 1-step prediction
  Y(i,:) = Gss.c*x; Uopt(i,:) = Umin(1)'; % collect
end % for i
Yspt(1) = []; Yspt=[Yspt;1]; % shift due to fence-post error

subplot(2,1,1); % Now plot results
plot(t,Y,t,Yspt,'r--') % plot results
ylabel('output & setpoint')
subplot(2,1,2);
[ts,Us] = stairs(t,Uopt);
plot(ts,Us,t,[ones(size(t))*mean([Ulb,Uub])],'k--')
xlabel('time [s]'); ylabel('Input')
The objective function to be minimised, called by the optimiser, is given in Listing 10.2. Note that auxiliary variables, such as the model and the horizon lengths, are passed through to the function to be optimised via an anonymous function.
Listing 10.2: Objective function to be minimised for the predictive control algorithm with input saturation constraints

function Perf = fmodstpc(U,G,x0,Yspt,Hp)
% Short-term-prediction error called via fmincon via TGPC.M
if Hp > length(U), % normally the case
  U = [U;U(end)*ones(Hp-length(U),1)]; % pad extra
end % if

ssdu = sum(diff(U).^2); % control move weightings: sum of du^2 over the control horizon
[Y,t,X] = lsim(G,U,[],x0); % do linear model prediction
Perf = sum((Y-Yspt).^2) + ssdu; % Performance index J
return
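For comparison, a Python/NumPy analogue of this objective function might look as follows; `sim` is a user-supplied linear simulator standing in for MATLAB's lsim, and all names are illustrative.

```python
import numpy as np

def fmodstpc_py(U, sim, x0, yspt, Hp):
    """Short-term prediction error (cf. Listing 10.2): pad the Nc candidate
    inputs out to the prediction horizon Hp by holding the last value,
    simulate, and return the squared error plus squared input moves."""
    U = np.asarray(U, dtype=float)
    if Hp > U.size:                                     # normally the case
        U = np.append(U, U[-1] * np.ones(Hp - U.size))  # pad with last input
    ssdu = np.sum(np.diff(U) ** 2)                      # control move term
    Y = sim(U, x0)                                      # linear model prediction
    return np.sum((Y - yspt) ** 2) + ssdu               # performance index J
```

A generic constrained minimiser can then be pointed at this function exactly as fmincon is in Listing 10.1.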
The controlled performance as shown in Fig. 10.4 is very good, as indeed it should be.
Figure 10.4: Predictive control of a non-minimum phase system with constraints on u with a control horizon of H_c = 10 and a prediction horizon of H_p = 20. Refer also to Fig. 10.5 for a zoomed portion of this plot.
Especially note that since we are making use of a vector of future setpoints, the control algorithm starts preparing for the setpoint change before the actual setpoint changes. This acausal behaviour, illustrated more clearly in Fig. 10.5, is possible because we assume we know the future setpoints, and it improves the response compared with a traditional feedback-only PID controller. Naturally the control becomes looser as the manipulated variable hits the upper and lower bounds.
You will soon realise, if you run this example, that the computational load for the optimal input trajectory is considerable. With such a performance load, one needs to investigate ways to increase the speed of the simulation. Reasonable starting estimates for the new set of future control moves are simply obtained from the previous optimisation problem. Good starting guesses for the nonlinear optimiser are essential to ensure that the controller manages to find a reasonable output in the time available. Even though our model, ignoring the manipulated variable
Figure 10.5: A zoomed portion of Fig. 10.4 centered about the setpoint change around t = 46 showing clearly the acausal response possible if we take advantage of the vector of future setpoints.
constraints, is linear, we have used a general nonlinear optimiser to solve the control problem. This means that the same algorithm will work in principle for any plant model, whether linear or nonlinear, and any constraint formulation. Naturally there are many extensions to the basic algorithm that address this issue, such as only optimising every n-th sample time, or lowering N_c.
However, substantial simplifications can be made if we restrict our attention to just linear plants and bounded output, input and input rate constraints. Under these restrictions, the optimal MPC formulation can be re-written as a standard quadratic program, or QP, for which there are efficient solution algorithms. Further details of this are given in [24, 164], and a commercial MATLAB implementation is described in [27].
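To sketch why this is a QP: writing the stacked linear predictions as y = y_free + GΔu (the dynamic matrix form developed in section 10.1.2), the cost of Eqn. 10.2 becomes 0.5 Δuᵀ H Δu + fᵀ Δu plus a constant, with H = 2(GᵀG + λI) and f = −2Gᵀ(r − y_free). Assembling these matrices in NumPy (illustrative names) is straightforward:

```python
import numpy as np

def mpc_qp_matrices(G, r, y_free, lam=1.0):
    """Build H and f so that minimising 0.5 du'H du + f'du over the
    constraint set reproduces the MPC cost. H is only Nc x Nc."""
    Nc = G.shape[1]
    H = 2.0 * (G.T @ G + lam * np.eye(Nc))   # quadratic term
    f = -2.0 * G.T @ (r - y_free)            # linear term
    return H, f
```

In the unconstrained case the minimiser is Δu = −H⁻¹f; with bounds on the input, input rate, or output, the same H and f are handed to any standard QP solver.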
Varying the control and prediction horizons

Investigating different controller design decisions is computationally expensive for this application, but we can relax the termination requirements for the optimiser with little noticeable change in results. From a computational complexity viewpoint, we want to reduce the search space, and hence reduce the horizons. However, reducing the horizons excessively will result in a poor controlled response.
Fig. 10.6 demonstrates what is likely to happen as the control and prediction horizons grow shorter. For a short control horizon, N_c = 2, and a reasonably short prediction horizon, N_p = 8, but one that still covers the inverse response, the controlled response is still quite reasonable as shown in Fig. 10.6(a). However, decreasing the prediction horizon any further, even just slightly such as to N_p = 7, gives problems as exhibited in Fig. 10.6(b). The inverse response horizon is about 5 samples.
MPC on the laboratory blackbox

Fig. 10.7 shows the result of using the constrained MPC on the black box assuming a constant model of the blackbox as

    M: G(s) = 0.98 / ((3s + 1)(5s + 1))
Figure 10.6: Varying the horizons of predictive control of a non-minimum phase system with constraints on u. (a) Short control and reasonable prediction horizons, H_c = 2, H_p = 8. (b) Short control and prediction horizons, H_c = 2, H_p = 7.
sampling at T = 0.75 seconds with a prediction horizon of H_p = 20, and a control horizon of H_c = 10 samples. Clearly we should also estimate the controller model online, perhaps using a recursive least-squares type algorithm.
Figure 10.7: Constrained MPC applied to the laboratory blackbox with sampling at T = 0.75 sec and prediction and control horizons H_p = 20, H_c = 10.
10.1.2 Dynamic matrix control

If you actually ran the simulation of the general optimal predictive controller from section 10.1.1, and waited for it to finish, you would have stumbled across the one big drawback: computational load. As previously anticipated, one way to quicken the search is to assume a linear system, ignore any constraints, and make use of the analytical optimisation solution. To do this, we must express our model as a function of inputs only, that is, a finite impulse response model, rather than the more parsimonious recursive infinite impulse response model. There are again many different flavours of this scheme, but the version here is based on the step response model,

    y(t) = Σ_{i=1}^{∞} g_i Δu(t−i)        (10.5)

where the g_i's are the step response coefficients, and Δu(t) is the change in input, u_t − u_{t−1}; refer
Fig. 10.8.

Suppose we start from rest, and make a unit step change in u at time t = 0. Using Eqn. 10.5, the first few output predictions are

    y_1 = g_1 Δu_0 + g_2 Δu_{−1} + ···        (10.6)
    y_2 = g_1 Δu_1 + g_2 Δu_0 + g_3 Δu_{−1} + ···        (10.7)
    y_3 = g_1 Δu_2 + g_2 Δu_1 + g_3 Δu_0 + ···        (10.8)

Since all the control moves are zero except Δu_0 = 1, it follows that y_1 = g_1, y_2 = g_2, y_3 = g_3 and so on. So the values of the output at each sample time y_k in response to a unit step at k = 0 are the step response coefficients, g_k. These coefficients form an infinite series in general, but for stable systems converge to a limit as shown in Fig. 10.8.
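To make the connection concrete, the step response coefficients of a discrete model can be generated simply by simulating a unit step through its difference equation. This Python sketch uses an illustrative first-order model, not the plant of Eqn. 10.4:

```python
import numpy as np

def step_coeffs(a, b, N):
    """Step response coefficients g_1..g_N of the difference equation
    y_k = -a[0] y_{k-1} - a[1] y_{k-2} - ... + b[0] u_{k-1} + b[1] u_{k-2} + ...
    obtained by simulating a unit step in u from rest."""
    y = np.zeros(N + 1)                       # start from rest, y_0 = 0
    u = np.ones(N + 1)                        # unit step applied at k = 0
    for k in range(1, N + 1):
        ar = sum(-a[j] * y[k - 1 - j] for j in range(len(a)) if k - 1 - j >= 0)
        ma = sum(b[j] * u[k - 1 - j] for j in range(len(b)) if k - 1 - j >= 0)
        y[k] = ar + ma
    return y[1:]                              # g_1, g_2, ..., g_N
```

For the first-order model y_k = 0.5y_{k−1} + 0.5u_{k−1}, this returns 0.5, 0.75, 0.875, ..., converging towards the steady-state gain of 1 in the manner of Fig. 10.8.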
Figure 10.8: Step response coefficients, g_i, for a stable system, showing the unit step in the manipulated input and the sampled output converging to its steady state.
We can write the response of the model in a compact matrix form as

    ⎡ y_1    ⎤     ⎡ g_1     0        0        ⋯   0             ⎤ ⎡ Δu_0      ⎤
    ⎢ y_2    ⎥  =  ⎢ g_2     g_1      0        ⋯   0             ⎥ ⎢ Δu_1      ⎥
    ⎢ y_3    ⎥     ⎢ g_3     g_2      g_1      ⋯   0             ⎥ ⎢ ⋮         ⎥
    ⎢ ⋮      ⎥     ⎢ ⋮       ⋮        ⋮        ⋱   ⋮             ⎥ ⎣ Δu_{Nc−1} ⎦
    ⎣ y_{Np} ⎦     ⎣ g_{Np}  g_{Np−1} g_{Np−2} ⋯   g_{Np−Nc+1}   ⎦

or, more compactly,

    y_forced = G Δu        (10.9)

The N_p × N_c matrix G in Eqn. 10.9 is referred to as the dynamic matrix, hence the term dynamic matrix control or DMC. It is also closely related to a Hankel matrix, which provides us with a convenient, but not particularly efficient, way to generate this matrix in MATLAB. Eqn. 10.9 tells us how our system will respond to changes in future moves in the manipulated variable. Specifically, Eqn. 10.9 will predict, over the prediction horizon N_p, the effect of N_c control moves. As in the general optimisation case, the input is assumed constant, Δu = 0, past the end of the control horizon.
All we need to do for our optimal controller is to find out what to set these future control moves to, in order that the future outputs are close to the desired setpoints. Actually Eqn. 10.9 only predicts how our system will behave given future control changes (termed the forced response), but it ignores the ongoing effect of any previous moves (termed the free response). We must add the effect of the free response, if any, to Eqn. 10.9 to obtain a valid model prediction. This simple addition of two separate responses is a consequence of the linear superposition principle applicable for linear systems. We can calculate the free response over the prediction horizon, y_free, simply by integrating the model from the current state with the current constant input for N_p steps. Therefore to ensure our future response is equal to the setpoint, r, at all sample times over the prediction horizon, we require

    r = y_free + y_forced        (10.10)
Again, since we will make the requirement that the control horizon is shorter than the prediction horizon, our dynamic matrix, G, is non-square, so the solution of Eqn. 10.9 for the unknown control moves involves a pseudo-inverse. Typically we further weight the control moves to prevent over-excited input demands by a positive weight λ, so the optimal solution to the predictive control problem is

    Δu = (GᵀG + λI)⁻¹ Gᵀ (r − y_free)        (10.11)
This analytical solution to the optimal control problem is far preferred over the numerical search procedure required for the general nonlinear case. Note that the matrix inversion in Eqn. 10.11 is only done once, offline, and thus does not constitute a severe online numerical problem. The control move weighting also improves the numerical conditioning of the matrix inversion.
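A Python/NumPy sketch of the offline and online halves of this computation, mirroring Eqn. 10.9 and Eqn. 10.11 (names are illustrative), is:

```python
import numpy as np

def dynamic_matrix(g, Nc):
    """Np x Nc dynamic matrix of Eqn. 10.9 built from the step
    response coefficients g_1 .. g_Np."""
    Np = len(g)
    G = np.zeros((Np, Nc))
    for j in range(Nc):
        G[j:, j] = g[:Np - j]        # each column is a shifted step response
    return G

def dmc_moves(G, r, y_free, lam=1.0):
    """Optimal future moves of Eqn. 10.11:
    du = (G'G + lam*I)^-1 G'(r - y_free)."""
    Nc = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(Nc), G.T @ (r - y_free))
```

Only the first element of the returned move vector is implemented each sample, and since G is fixed, the factorisation of (GᵀG + λI) can indeed be computed once, offline.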
Fig. 10.9 shows in detail how this control philosophy works. Suppose we are controlling a process with a controller that uses a prediction horizon of 12 samples and a control horizon of 5, and we take a snapshot at t = 135. The thick solid line (upper plot of Fig. 10.9) shows how the controller predicts the process will respond given the optimally calculated dashed input (lower plot) at time t = 135. Since the model inside the controller is slightly wrong owing to model/plant mismatch, the process would, in fact, follow the thin solid line if given the dashed pre-supposed optimal input profile. In actual fact, since only the first input is implemented, and the whole optimisation repeated at the next time step, the process follows the dotted line given the solid input trajectory.
A dynamic matrix control (DMC) example
Figure 10.9: DMC predictive control (T = 1.0, H_p = 12, H_c = 5, w = 1, noise variance 0.04) showing, at t = 135, how the controller predicts the process will respond (heavy solid line) given the current calculated optimal (dashed) input trajectory, how the process would really respond (light solid line), and how in fact the process was eventually controlled (dotted line) given the solid input trajectory.
Fig. 10.10 shows the performance of a DMC controller for the non-minimum phase plant of Eqn. 10.4 from section 10.1.1. I generated the step response coefficients elegantly from the transfer function model simply using the dstep command. For illustration purposes in this simulation I have varied the control move weighting λ in Eqn. 10.11 as λ = 100 for t < 50, then λ = 1 for the interval 50 < t < 100, and then finally λ = 0.

The controller uses a prediction horizon of H_p = 25 and a control horizon of H_c = 8, at a sample time of T = 0.5. The manipulated variable constraints present in section 10.1.1 are relaxed for this example. As the control move weighting is decreased, the output performance improves at the expense of vigorous manipulated variable movements. Note that a weighting λ = 0 is allowable, but will result in an over-excited controller.

The m-file for this DMC example, but using a constant weight of λ = 1, is given in Listing 10.3.
Listing 10.3: Dynamic Matrix Control (DMC)

ntot=250; dt=0.5; t=dt*[0:ntot-1]'; % time vector
r = [1,3,-1,-2.5,3.5,5.4,-4,-6,5,2.2]; % random square wave setpoint
Yspt = ones(size(t(1:fix(length(t)/length(r))+1)))*r(:)';
Yspt = Yspt(1:length(t))'; % Cut off excess data points

Kp=1.5; taup=2.5; tau=2.0; zeta=0.45; % process model
Gpc = tf(Kp*[-taup 1],[tau^2 2*tau*zeta 1]); % Kp(-taup.s+1)/(tau^2.s^2+2.tau.zeta.s+1)
Gp = c2d(Gpc,dt); % discrete plant
Figure 10.10: DMC predictive control of a non-minimum phase system with H_c = 8, H_p = 25, showing the effect of varying the control move weighting λ.
[Phi,Del,C,D] = ssdata(Gp); % state space for x0

Hc = 8;  % Control horizon, Hc
Hp = 25; % Prediction horizon, Hp > Hc

[g,ts] = step(Gp,dt*Hp); % step response coefficients
g(1) = []; % only consider Hp points (ignore first zero)

% Formulate DMC G matrix & inverse once (offline); Hankel version
G = flipud(hankel(flipud(g))); G(:,Hc+1:end) = [];
weight = 1; % control move weighting, lambda >= 0
GG = G'*G; % G'G
invG = (GG + weight*eye(size(GG)))\G'; % (G'G + lambda.I)^(-1).G'

x = [0,0]'; y=C*x; u=0; p=0; % cheat here but Ok
Uopt=zeros(size(t)); Y=zeros(size(t)); % initialise u(t)
for i=1:ntot-Hp-1
  du = invG*(Yspt(i:i+Hp-1)-p); % analytical optimisation, Eqn. 10.11
  u = du(1)+u; % only take first change
  x = Phi*x + Del*u; y = C*x; % apply 1 control move

  % Calculate future free response (assume all future du=0)
  [p,px] = dlsim(Phi,Del,C,D,u*ones(Hp+1,1),x);
  p(1)=[]; % kill repeat of x0

  Y(i,:) = y; Uopt(i,:) = u;
end % for i
Yspt(1) = []; Yspt=[Yspt;1];
subplot(2,1,1); plot(t,Y,t,Yspt,':') % plot results
subplot(2,1,2); stairs(t,Uopt);
The high fidelity of the controlled response of both predictive control algorithms (Figures 10.4 and 10.10) is helped by the lack of model/plant mismatch in the previous simulations. In practice, faced with the inevitable mismatch, one can extend the basic DMC algorithm by varying the weights and/or horizon lengths, or by including an adaptive model and therefore performing the matrix inversion online. In some cases, particularly for unstable processes, or for cases where the sampling rate is either very fast or very slow compared with the step response of the system, the DMC algorithm will fail, primarily owing to the poor conditioning of G. In these cases singular value decomposition may help.
DMC of the blackbox

A DMC controller is tried on the laboratory blackbox in Fig. 10.11. Unlike the GPC example on page 468 however, here we will identify the model online using a recursive least-squares estimation algorithm. The model-based controller assumes the structure

    M: G(q⁻¹) = (b_0 + b_1 q⁻¹ + b_2 q⁻²) / (1 + a_1 q⁻¹ + a_2 q⁻² + a_3 q⁻³)

for the blackbox with a sample time of T = 1.5 seconds and a forgetting factor of λ = 0.995. The control horizon is set to 8 samples, and the prediction horizon is set to 20 samples. The control move weighting is set to 0.2.
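The recursive least-squares estimator with a forgetting factor used here can be written in a few lines. This Python/NumPy version is the generic textbook update with illustrative names, not the code actually used on the blackbox:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One recursive least-squares step with forgetting factor lam.
    theta : (n,) current parameter estimate
    P     : (n,n) covariance matrix
    phi   : (n,) regressor of past outputs and inputs
    y     : newly measured output"""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    eps = y - phi @ theta                    # one-step-ahead prediction error
    theta = theta + k * eps                  # parameter update
    P = (P - np.outer(k, phi) @ P) / lam     # covariance update with forgetting
    return theta, P
```

At each sample the regressor phi stacks the recent outputs and inputs, and the updated theta supplies the a_i and b_i coefficients to the DMC design.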
Figure 10.11: Adaptive dynamic matrix control (DMC) of the blackbox with control horizon H_c = 8 and prediction horizon H_p = 20, using online RLS model identification with n_a = 3, n_b = 2.
10.2 A Model Predictive Control Toolbox

A Model Predictive Control toolbox developed by Currie, [55, 56], allows one to rapidly build and test various MPC configurations for a wide variety of linear, or linearisable nonlinear, plants with both soft and hard constraints on the output, the input, and the input rate of change. The following sections describe this toolbox in more detail.
10.2.1 A model predictive control GUI
The easiest way to get started is to use the Model Predictive Control toolbox GUI as shown in
Fig. 10.12. In this MATLAB GUI, one can load and experiment with different plants (and models of
those plants), adjust prediction horizons, operating constraints, and setpoints. What is interesting
to observe in the GUI application are animations of the future output and input predictions given
to the right of the solid line in the scrolling plot.
Figure 10.12: An MPC graphical user interface
10.2.2 MPC toolbox in MATLAB

The MPC toolbox is easy to use in script mode. This section shows how to build a multivariable MPC controller using the Wood-Berry column plant in Eqn. 3.24 from section 3.3.3. This plant has two inputs (reflux and steam) and two outputs (distillate and bottoms concentration), and we will use a sampling time of T_s = 2 seconds. For this application we will assume no model/plant mismatch.
Listing 10.4: Setting up an MPC controller

G = tf({12.8, -18.9; 6.6 -19.4}, ...
    {[16.7 1],[21 1]; [10.9 1],[14.4 1]}, ...
    'ioDelayMatrix',[1,3; 7,3]) % The Wood-Berry column plant from Eqn. 3.24
Plant = jSS(G); % Convert the LTI model to the form required for the MPC controller

Ts = 2; Plant = c2d(Plant,Ts); % Discretise the plant at Ts = 2 seconds
Model = Plant; % Assume (for the moment) no model/plant mismatch
Now we are ready to specify the controller tuning parameters, which include the lengths of the control and prediction horizons, the values of the input and output saturation constraints, and the relative input and output weights in the objective function.

Np = 10; % Prediction Horizon
Nc = 5;  % Control Horizon

con.u = [-1 1 0.1; ... % Input constraints: umin, umax, du_max
         -1 1 0.1];    % One row for each input
con.y = [-2 2;         % Output constraints: ymin, ymax
         -2 2];

uwt = [1 1]';   % Individual weighting on the inputs
ywt = [1 0.1]'; % We are less interested in controlling bottoms, y2

Kest = dlqe(Model); % Estimator Gain
Now we are ready to initialise the model (and plant). To do that, we need to know how many internal states we have in our discrete model. That number depends on the discretisation method and the sampling rate, so the easiest way is to inspect the properties of the plant and model.
>> Plant
Plant =
-- jMPC State Space Object --
       States: 10
       Inputs: 2
      Outputs: 2
Sampling Time: 2 sec

>> Plant.x0 = zeros(10,1); % Initial condition of the 10 states
>> Model.x0 = zeros(10,1);
Finally we are ready to design a setpoint trajectory, assemble the MPC controller, and simulate the results. In this case we are using the high-speed MEX version of the optimisation routines.

T = 200; % Desired simulation length
setp = 0.5*square(0.05*Ts*[0:T]',50)+0.7; % Design interesting setpoint
setp(:,2) = 0.5*square(0.1*Ts*[0:T]',50)-0.7;
setp(1:10,:) = 0;

MPC1 = jMPC(Model,Np,Nc,uwt,ywt,con,Kest); % Construct an MPC controller
simopts = jSIM(MPC1,Plant,T,setp); % Set some simulation options
simresult = sim(MPC1,simopts,'Mex'); % Simulate & plot results
plot(MPC1,simresult,'summary');
suptitle(sprintf('Horizons: N_p=%2i, N_c=%2i',Np,Nc))
The results for this multivariable MPC example are shown in Fig. 10.13. Note how the control of y_2 is considerably looser than that for y_1. This is a consequence of the output weights, which are ten times higher for y_1. Also note that the change in input, Δu, never exceeds the required 0.1.
Figure 10.13: Multivariable MPC of the Wood-Berry column for horizons N_p = 10, N_c = 5, showing the outputs y_p(k), the states x_p(k), the input u(k), and the change in input Δu(k).
10.2.3 Using the MPC toolbox in SIMULINK
It is also easy to implement MPC controllers in SIMULINK. Fig. 10.14 shows an MPC controller from the jMPC toolbox connected to an unstable, non-minimum phase plant

G(s) = \frac{s-1}{(s-2)(s+1)}    (10.12)

which is difficult to control and has been used in various benchmark control studies, [103].
Figure 10.14: The MPC block in the SIMULINK environment. (A signal generator supplies the setpoint setp to the discrete MPC controller block; the controller output u drives the plant (s-1)/((s-2)(s+1)), and the plant output y_p is fed back to the controller, with the signals displayed on a scope.)
To use the MPC controller block in SIMULINK, we first must design an MPC controller in the same way as we did in section 10.2.2. Once we have designed the controller, we can simply insert it into the discrete MPC controller block and connect it to our plant. However, in this case, instead of using the true plant inside the model-based MPC controller, we will use the slightly different model

\hat{G} = \frac{0.95(s-1)}{(s-2.1)(s+1)}    (10.13)

thereby introducing some model/plant mismatch.
Model = jSS(zpk([1],[2.1 -1],0.95)); % Mismatch: compare Eqn. 10.12 with Eqn. 10.13
Ts = 0.1; Model = c2d(Model,Ts);     % Discretise model
Np = 40; Nc = 5;                     % Prediction & control horizons
con.u = [-5 5 0.3];                  % Input & output constraints
con.y = [-2 2];
uwt = 1; ywt = 0.5;                  % Input & output weightings
Kest = dlqe(Model);

T = 800; setp = square(0.2*Ts*[0:T]',50); % Setpoint
setp(1:10,:) = 0;
Model.x0 = zeros(2,1);               % Initial values
MPC1 = jMPC(Model,Np,Nc,uwt,ywt,con,Kest); % Build the MPC controller
The last line created an MPC controller object called MPC1, which we can place in the MPC controller block in the SIMULINK diagram in Fig. 10.14. Note that the true plant, Eqn. 10.12, is never actually used in the MPC controller. The results of the subsequent SIMULINK simulation in Fig. 10.15 show that we hit, but do not exceed, the input rate constraint during the transients.
Figure 10.15: MPC of a SISO non-minimum phase unstable plant with model/plant mismatch from Fig. 10.14. (Three panels versus time: output y, input u, and input change Δu.)
10.2.4 Further readings on MPC

Model predictive control is now sufficiently mature that it has appeared in numerous texts, [132, 173, 197], all of which give an overview of the underlying theory, and insight into the industrial practice. Industrial applications using commercial implementations such as Pavilion Technologies' Pavilion8, Honeywell's RMPCT or Perceptive Engineering have been surveyed in [162]. A commercial MPC toolbox for MATLAB is available in [27], while freeware versions also exist from [55] and [4].
Problem 10.1
1. Repeat the DMC simulation of section 10.1.2, but add an integrator to the plant. Adjust the constraints.

2. Repeat the DMC simulation of section 10.1.2, but add some model/plant mismatch by both varying one of the model parameters, and adding noise to the true system. Investigate the performance of DMC control by varying the length of the prediction and control horizons. Construct a contour plot of ISE as a function of N_c and N_p over a sensible range of values.
3. Modify the basic DMC simulation given in section 10.1.2 to investigate the robustness of the technique to structural model/plant mismatch. You should start by separating the model used by the DMC and the true process, and introduce some minor structural mismatch (gains and time constants changed by 10% for example). How does the controlled performance deteriorate as the model gain changes?
4. Design an adaptive DMC regulator by introducing an online model parameter estimator as
discussed in chapter 6, section 6.7.2 to the simulation completed in question 3. Investigate
ways to improve the robustness, and to reduce the computational requirements.
10.3 Optimal control using linear programming
Optimal control problems quickly demand excessive computation, which all but eliminates them from practical real-time applications. If, however, our model is a linear discrete-time state-space model, our performance objective is to minimise the sum of the absolute values, rather than the more normal least-squares, and we have linear constraints on the manipulated variables, then the general constrained optimisation problem becomes a linear program. Linear programs, or LPs, are much more attractive to solve than a general constrained nonlinear optimisation problem of comparable complexity because, in theory, they can be solved in a finite number of operations, and in practice they are far more successful at solving problems with more than 50 variables and 100 constraints on modest hardware. The theory of linear programming is well known, and if the problem is poorly conditioned, or not well-posed, then the software will quickly recognise this, ensuring that we do not waste time on impossible, or poorly thought out, problems.
There are many computer codes available to solve linear programs, including linprog from
the OPTIMIZATION TOOLBOX. An introduction to linear programming with emphasis on the
numerical issues with MATLAB is given in [201], and an experimental application is given in
[146] from which this section was adapted.
As in the predictive control development, we want a controlled response that minimises the error between the setpoint, r, and the actual state, say something like Eqn. 9.8,

J = \int_0^\infty f(x, r) \, dt
In this application we will take the absolute value of the error, partly because this is a more robust measure than the classical least-squares, but mainly because a squared error will introduce a nonlinearity into the performance objective, which would render linear programming inapplicable. We will also consider only a finite time horizon, and since the model is discrete, we will use a simple sum in place of the integral. The optimisation problem is then to choose the set of future
manipulated variables, u_k, over the time horizon such that the performance index

J = \sum_{k=0}^{N} \sum_{i=1}^{n} |r_i - x_i|_k    (10.14)

is minimised subject to the process model. In Eqn. 10.14, n is the number of states, and N is the number of samples in the time horizon (typically around 10-20). If desired, Eqn. 10.14 could be further modified by adding state weighting if some states are more important than others.
10.3.1 Development of the LP problem
The hardest part of this optimal controller is formulating the LP problem. Once that is done, it
can be exported to any number of standard numerical LP software packages to be solved.
We will use a discrete state model for the predictive part of the controller,

x_{k+1} = \Phi x_k + \Delta u_k    (10.15)
Unwinding the recursion in Eqn. 10.15, we can calculate any subsequent state, x_k, given the manipulated variable history and the starting condition, x_0, (refer Eqn. 2.81):

x_k = \Phi^k x_0 + \Phi^{k-1} \Delta u_0 + \Phi^{k-2} \Delta u_1 + \cdots + \Delta u_{k-1}    (10.16)
If r_k is the setpoint at sample time k, then the error is

r_k - x_k = r_k - \Phi^k x_0 - \Phi^{k-1} \Delta u_0 - \cdots - \Delta u_{k-1}    (10.17)
We desire to minimise the sum of the absolute values of these errors, as given in Eqn. 10.14, but the absolute value sign is difficult to incorporate in a linear programming format. We can instead introduce two new non-negative variables,

\epsilon_{1i} := r_i - x_i, \quad \epsilon_{2i} := 0 \qquad \text{if } r_i \ge x_i    (10.18)

\epsilon_{1i} := 0, \quad \epsilon_{2i} := x_i - r_i \qquad \text{if } x_i > r_i    (10.19)

Note that we could write the above four relations without the IF statement as something like

e1 = abs(r-x)*round(sign(r-x)/2 + 0.5)
e2 = abs(x-r)*round(sign(x-r)/2 + 0.5)

for scalar x and r.
Using definitions Eqns 10.18-10.19, the absolute value is

|r_i - x_i| = \epsilon_{1i} + \epsilon_{2i}    (10.20)

and the error

r_i - x_i = \epsilon_{1i} - \epsilon_{2i}    (10.21)

Both relations are valid at all times because only one of either \epsilon_1 or \epsilon_2 is non-zero for any particular state at any particular time.
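The splitting in Eqns 10.18-10.21 can be verified in a few lines. The Python/NumPy sketch below (with invented r and x vectors) computes the two non-negative parts and checks all three properties: their sum is the absolute error, their difference is the signed error, and at most one of them is non-zero.

```python
import numpy as np

def split_error(r, x):
    """Split r - x into two non-negative parts, per Eqns 10.18-10.19."""
    e = r - x
    eps1 = np.maximum(e, 0.0)    # eps1 = r - x when r >= x, else 0
    eps2 = np.maximum(-e, 0.0)   # eps2 = x - r when x > r, else 0
    return eps1, eps2

r = np.array([1.0, -2.0, 0.5])   # illustrative setpoints
x = np.array([0.4, 1.0, 0.5])    # illustrative states
e1, e2 = split_error(r, x)

assert np.allclose(e1 + e2, np.abs(r - x))   # Eqn. 10.20
assert np.allclose(e1 - e2, r - x)           # Eqn. 10.21
assert np.all(e1 * e2 == 0)                  # only one part is non-zero at a time
```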
Re-writing the objective function, Eqn. 10.14, in terms of the new variables using Eqn. 10.20 gives the linear relation without the troublesome absolute values,

J = \sum_{k=0}^{N} \sum_{i=1}^{n} (\epsilon_{1i} + \epsilon_{2i})_k    (10.22)
The penalty paid is that we have doubled the number of decision variables, thereby doubling the
optimisation space.
Eqn. 10.22 is a linear objective function suitable for a linear program. The constraints arise from the model dynamics, which must be obeyed. Substituting Eqn. 10.21 into Eqn. 10.17 gives N blocks of equality constraints, each with n equations. To make things clearer, the first few blocks are written out explicitly below.

\underbrace{\epsilon_{11} - \epsilon_{21}}_{r_1 - x_1} + \Delta u_0 = r_1 - \Phi x_0

\underbrace{\epsilon_{12} - \epsilon_{22}}_{r_2 - x_2} + \Phi \Delta u_0 + \Delta u_1 = r_2 - \Phi^2 x_0

\vdots

\underbrace{\epsilon_{1k} - \epsilon_{2k}}_{r_k - x_k} + \Phi^{k-1} \Delta u_0 + \Phi^{k-2} \Delta u_1 + \cdots + \Delta u_{k-1} = r_k - \Phi^k x_0
Naturally this is best written in matrix form,

\begin{bmatrix}
\Delta & 0 & \cdots & 0 & I & -I & & & \\
\Phi\Delta & \Delta & & 0 & & & I & -I & \\
\vdots & & \ddots & \vdots & & & & \ddots & \\
\Phi^{N-1}\Delta & \Phi^{N-2}\Delta & \cdots & \Delta & & & & & I \;\; -I
\end{bmatrix}
\begin{bmatrix}
u_0 \\ u_1 \\ \vdots \\ u_{N-1} \\ \epsilon_{11} \\ \epsilon_{21} \\ \epsilon_{12} \\ \epsilon_{22} \\ \vdots \\ \epsilon_{1N} \\ \epsilon_{2N}
\end{bmatrix}
=
\begin{bmatrix}
r_1 - \Phi x_0 \\ r_2 - \Phi^2 x_0 \\ \vdots \\ r_N - \Phi^N x_0
\end{bmatrix}
\qquad (10.23)
The dimensions of the partitions in this large constraint matrix are given in Fig. 10.16 where
N is the number of future sample horizons, n is the number of states, and m is the number of
manipulated variables.
The constraint matrix in Fig. 10.16 is an Nn x (Nm + 2Nn) matrix; the decision variable vector, comprising Nm manipulated variables and 2Nn auxiliary variables, is an (Nm + 2Nn) x 1 vector; and the right-hand side is an Nn x 1 vector.
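To make those dimensions concrete, the following Python/NumPy sketch (with small invented values of N, n, m and an illustrative Φ, Δ pair) assembles the two partitions of the constraint matrix of Eqn. 10.23 and confirms the Nn x (Nm + 2Nn) shape quoted above.

```python
import numpy as np

N, n, m = 5, 3, 2                      # horizon, states, inputs (illustrative only)
Phi = 0.9 * np.eye(n)                  # illustrative model matrices
Delta = np.ones((n, m))

# Lower block-triangular [Phi^(i-j) Delta] partition acting on u_0 ... u_{N-1}
Au = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Au[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(Phi, i - j) @ Delta

# Block-diagonal [I, -I] partition acting on the paired epsilon variables
Aeps = np.zeros((N * n, 2 * N * n))
for i in range(N):
    Aeps[i*n:(i+1)*n, 2*i*n:2*i*n + n] = np.eye(n)
    Aeps[i*n:(i+1)*n, 2*i*n + n:2*(i+1)*n] = -np.eye(n)

A = np.hstack([Au, Aeps])
assert A.shape == (N * n, N * m + 2 * N * n)   # Nn x (Nm + 2Nn), as stated
```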
Figure 10.16: LP constraint matrix dimensions. (The Nn-row constraint matrix partitions into an Nm-column block of \Phi^j \Delta terms, m columns per sub-block, and a 2Nn-column block of paired I, -I identity blocks, n columns per sub-block.)

In general applications it is necessary for the control variable, u, to be unrestricted in sign, which is counter to the formulation required in most linear programming packages. To achieve this, we must introduce two further new variables, both non-negative, for each manipulated variable,
u := s - t, \qquad s, t \ge 0    (10.24)

If the unrestricted u > 0, then t = 0; otherwise u is negative and s = 0. This substitution modifies the A matrix in the constraint equation, Eqn. 10.23, to
\begin{bmatrix}
\Delta & 0 & \cdots & 0 & -\Delta & 0 & \cdots & 0 & I & -I & & \\
\Phi\Delta & \Delta & & 0 & -\Phi\Delta & -\Delta & & 0 & & & I & -I \\
\vdots & & \ddots & \vdots & \vdots & & \ddots & \vdots & & & & \ddots \\
\Phi^{N-1}\Delta & \Phi^{N-2}\Delta & \cdots & \Delta & -\Phi^{N-1}\Delta & -\Phi^{N-2}\Delta & \cdots & -\Delta & & & & I \;\; -I
\end{bmatrix}
and the decision variables are now

\begin{bmatrix} s_0 & s_1 & \cdots & s_{N-1} & t_0 & t_1 & \cdots & t_{N-1} & \epsilon_{11} & \cdots & \epsilon_{2N} \end{bmatrix}^T
Using this ordering of decision variables, the objective function, Eqn. 10.22, is the sum of the \epsilon's. In linear programming, the objective function is written as J = c^T x, where c is termed the cost vector, and x is the decision variable vector to be optimised. In our case the cost vector is given as

c = \big[\, \underbrace{0 \; 0 \; \cdots \; 0}_{2Nm} \;\; \underbrace{1 \; 1 \; \cdots \; 1}_{2Nn} \,\big]^T    (10.25)
Note that the manipulated variables do not explicitly enter the objective function. Naturally the
ones in Eqn. 10.25 could be replaced by different positive individual time and state weightings,
if so desired.
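The whole formulation can be exercised on a toy problem. The sketch below is a Python analogue using scipy.optimize.linprog, not the book's MATLAB code; the scalar model (phi = 0.5, delta = 1), horizon and setpoint are all invented for illustration. It orders the decision vector as [s, t, epsilon] per the development above, builds the cost vector of Eqn. 10.25, and recovers u = s - t.

```python
import numpy as np
from scipy.optimize import linprog

N = 3                                  # optimisation horizon
phi, delta, x0 = 0.5, 1.0, 0.0         # scalar model x_{k+1} = phi*x_k + delta*u_k
r = np.ones(N)                         # setpoint r_k = 1 for all k

# Decision vector: [s_0..s_{N-1}, t_0..t_{N-1}, eps1_1, eps2_1, ..., eps1_N, eps2_N]
nvar = 2 * N + 2 * N
c = np.r_[np.zeros(2 * N), np.ones(2 * N)]   # cost vector of Eqn. 10.25 (scalar case)

Aeq = np.zeros((N, nvar)); beq = np.zeros(N)
for k in range(1, N + 1):              # one block of constraints per future sample
    for j in range(k):
        g = phi ** (k - 1 - j) * delta # Phi^(k-1-j) Delta term
        Aeq[k - 1, j] = g              # coefficient of s_j
        Aeq[k - 1, N + j] = -g         # coefficient of t_j, since u_j = s_j - t_j
    Aeq[k - 1, 2 * N + 2 * (k - 1)] = 1.0       # eps1_k
    Aeq[k - 1, 2 * N + 2 * (k - 1) + 1] = -1.0  # eps2_k
    beq[k - 1] = r[k - 1] - phi ** k * x0       # r_k - Phi^k x_0

res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=[(0, None)] * nvar)
u = res.x[:N] - res.x[N:2 * N]         # recover u = s - t
```

With these particular numbers the error can be driven to zero (J = 0), and the recovered input sequence is u = [1, 0.5, 0.5].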
This completes the formal development of the LP as far as the unconstrained optimum controller
is concerned. However since we are already solving a constrained optimisation problem, there
is little overhead in adding extra constraints such as controller upper and lower bounds to the
problem. In practice the manipulated space is limited, (a control valve can only open so far, an
electric drive is limited in top speed), so we are interested in constraints of the form

u_{lower} \le u \le u_{upper}    (10.26)
These constraints can be appended to the other constraint equations.
Simplifications when using the optimisation toolbox

When using linprog.m from the OPTIMIZATION TOOLBOX, certain simplifications are possible to the standard LP problem form. These simplifications reduce the problem size, and therefore also reduce the solution time. Since the algorithm does not make the non-negative constraint assumption, we do not need to introduce the variable transformation given by Eqn. 10.24, and suffer the associated growth of the constraint matrix. This is not the case in general; if we were to use other LP routines such as described in [201] or [78], then we would need to use non-negative variables.

Most LP programs assume the constraints are given as Ax \le b, i.e. less-than inequalities rather than equalities as in Eqn. 10.23. While the conversion is trivial, it again increases the number of constraint equations. However, the routine linprog conveniently allows the user to specify both equality and inequality constraints.

The third simplification is to exploit how linprog handles constant upper and lower bounds such as needed in Eqn. 10.26. These are not added to the general constraint matrix, Ax \le b, but are supplied separately to the routine. All these modifications reduce the computation required to solve the problem.
Optimal control using linear programming
Listing 10.5 demonstrates a finite time-horizon optimal controller with a randomly generated stable multiple-input/multiple-output (MIMO) model. The optimisation problem is solved using the linear program linprog from the OPTIMISATION TOOLBOX.
Listing 10.5: Optimal control using linear programming

n = 2;                            % # of states
m = 2;                            % # of manipulated variables
[Ac,Bc,C,D] = rmodel(n,1,m);      % Some arbitrary random model for testing
Gc = ss(Ac,Bc,C,D);

dt = min(1./abs(eig(Ac)))/0.5;    % sparse sample time reasonable guess
G = c2d(Gc,dt,'zoh');
G.d = zeros(size(G.d));           % clear this
if rank(ctrb(G)) ~= n
    error('System not controllable')
end % if

N = 20;                           % future optimisation horizon
x0 = randn(n,1);                  % random start point
r = [ones(floor(N/2),1)*randn(1,n); ...
     ones(ceil(N/2),1)*randn(1,n)]; % setpoint change

%% start construction of the LP matrix
Phin = G.b; Aeq = G.b;            % better for when m>1
for i=2:N
    Phin = G.a*Phin;
    Aeq = [Aeq; Phin];            % rough method
end
Z = zeros(size(G.b));             % padding
for i=2:N
    Aeq = [Aeq, [Z;Aeq(1:(N-1)*n,(i-2)*m+1:(i-1)*m)]];
end % for i

% weighting: could use blkdiag here (but messy)
W = eye(n);                       % change if want different state weighting
A2 = zeros(N*n,2*N*n);            % initialise
for i=1:N
    A2((i-1)*n+1:i*n,2*(i-1)*n+1:2*i*n) = [W, -W];
end % for
Aeq = [A2,Aeq];                   % put +ve variables before general +/- variables

% Now right-hand side .....
U = zeros(N+1,m);                 % unforced, therefore zero input (drop start term)
[yuf,t,xuf] = lsim(G,U,[],x0);    % simulate unforced response
xuf(1,:) = [];                    % drop start term since dlsim repeats it.
b = r - xuf;                      % error
b = b'; beq = b(:);               % columnerise

% Now objective function
[nr,nc] = size(Aeq);              % current size
c = zeros(1,nc);                  % row vector for the moment
c(1:2*N*n) = 1;                   % unity weights for the moment !

% Set up A*x < b (the epsilon variables are non-negative)
b = zeros(2*N*n,1);               % All #s are positive
A = zeros(2*N*n,nc);
A(:,1:2*N*n) = -eye(2*N*n);

tic;                              % now ready to solve LP ..................
ulow = -13; uhigh = 14;           % upper/lower constraints
xlower = [zeros(2*N*n,1); ulow*ones(N*m,1)];
xupper = [1e6*ones(2*N*n,1); uhigh*ones(N*m,1)];
xsol = linprog(c',A,b,Aeq,beq,xlower,xupper); % first 2Nn are +ve
toc

% extract solution, if found
U = xsol(2*N*n+1:end);            % must reshape if m~=1
U = reshape(U,m,length(U)/m)';    % MI system
[y,t,xopt] = lsim(G,U,[],x0);     % Now test system response

%k = [0:length(xopt)-1]';
[ts,rs] = stairs(t,r); [ts,Us] = stairs(t,U);
subplot(2,1,2);
plot(ts,Us,t,ones(size(t))*[ulow, uhigh],'--');
h = line(ts,Us); set(h,'LineWidth',2);
%set(gca,'YLim',[-4,4.5])
xlabel('sample #'); ylabel('Manipulated')

% Sub-sample output response
nsub = 500;                       % # of sub-samples total (approx 500)
ts2 = ts(:,1) + [1:length(ts)]'/1e10; ts2(1) = 0; % make ts2 strictly monotonic for interp1
t2 = linspace(0,t(end),nsub)';
U2 = interp1(ts2,Us,t2);

[y,t2,xoptsc] = lsim(Gc,U2,t2,x0);
subplot(2,1,1);
plot(ts(:,1),rs,':',t2,xoptsc);
h = line(t,xopt); set(h,'Marker','.','MarkerSize',24,'LineStyle','none');
ylabel('state'); title('LP optimal control')
You should run the simulation a number of times with different models to see the behaviour of the optimal controller. One ideal 2-input/2-state case is given in Fig. 10.17. Here the states reach the setpoint in two sample times, and remain there. The manipulated variables remain inside the allowable upper (4) and lower (-3) bounds. It is clear that the trajectory is optimal, since the sum of the absolute error at the sample points is zero. One cannot do better than that.
Figure 10.17: LP optimal control with no active manipulated variable bounds. Note the perfect controlled performance. We hit the setpoint in one sample time, and stay there. One cannot do much better than that within the limitations of discrete control. The plot was generated by Listing 10.5. (Two state panels and the manipulated variables versus sample number, with the input bounds shown dashed.)
In another case, the manipulated bounds may be active and consequently the setpoint may not
be reached in the minimum time, and our objective function will not reach zero. In Fig. 10.18,
both steady-states are reachable, but owing to the upper and lower bounds on u, they cannot be
met in just two sample times. In other words we have hit the manipulated variable constraints
during the transients.
Our system does not need to be square; Fig. 10.19 shows the results for a 3-state, 2-input problem. In this case, one state is left floating. Some small inter-sample ripple is also evident in this case.
Note that since this is a predictive controller, and it knows about future setpoint changes, it
anticipates the changes and starts moving the states before the setpoint change. Fig. 10.20 shows
Figure 10.18: LP optimal control with active manipulated variable constraints. During transients, the manipulated variables hit the upper constraints, so compromising the controlled performance for a short period. (Manipulated variables and states versus sample time.)
Figure 10.19: Non-square LP optimal control with 3 states and 2 inputs. Note the small inter-sample ripple. (Manipulated variables and states versus sample time.)
the case for a 2-input, 3-state system where, owing to the active upper and lower bounds on u, the controller knows that it cannot reach the setpoint in one sample time; hence it chooses a trajectory that will minimise the absolute error and, in doing so, starts the states moving before the setpoint change.
Figure 10.20: LP optimal control showing acausal behaviour. (Manipulated variables and states versus sample time; the states begin moving before the setpoint change.)
Chapter 11
Expert systems and neural networks
Doctor Whirtzle's controversial talking brain in a jar
is revealed for the shameless fraud that it was.
(c) 1995 David Farley
11.1 Expert systems
An expert system is a computer programme that uses knowledge and techniques in much the
same way as a human expert would to solve a wide variety of problems. Typically expert systems
are used for types of problems that cannot be successfully solved using traditional procedural
type programmes. Expert systems do not use numeric data like conventional programmes, but
they manipulate symbolic data instead.
Human experts have the following characteristics that expert systems try to duplicate:

- A narrow field of expertise (domain)
- Usually are formal about decision making and can justify their reasoning.
- Use all possible senses (touch, hearing, sight, smell etc.)
- They do not do large quantitative number crunching in their head, at least initially.
- Can deal with vague and uncertain or even irrelevant input data.
- Learn from past experience.

Expert systems differ from human experts in that:

- They do not die, retire or go to sleep, and will diligently work 24 hours a day.
- They are easily duplicated: one just copies the programme and supplies additional hardware if required.
- Generally cheap to maintain once the original development is done.
- Expert systems find it difficult to use common sense or to realise if a situation is sufficiently novel that it should fail gracefully.
The concept of expert systems, or machines that can reason in a manner similar to humans, seems to worry some people. One strong argument against the algorithmic view of the human brain was put forward by Roger Penrose in the popular, but detailed work The Emperor's New Mind.
11.1.1 Where are they used?
Expert systems originally grew out of the field of artificial intelligence (AI), and are now used in game playing, theorem proving, pattern recognition and diagnosis, [81]. Languages used in AI are typically different from conventional languages such as FORTRAN, Cobol, Pascal etc. AI languages deal with symbolic notation rather than numeric data, and include LISP, Prolog, POP and OOPs. However, some applications in AI use conventional languages with extensions (Object Oriented Pascal, C++).

Artificial intelligence is sometimes difficult to distinguish from monkey business.
Conventional programmes (rather than expert systems) are used for applications such as payroll calculations, finite element analysis, 3D flight simulators etc. Here the problem can be rigorously defined and an accurate numerical answer is required.
Some well known expert systems are:
MYCIN: Medical diagnosis for bacterial infections. This has been shown, in its admittedly limited domain, to be better than expert human doctors.

EMYCIN: (Empty MYCIN) A similar programme to MYCIN, but without the data for bacterial infections. This allows other applications to use the same programme shell but in different domains.

PROSPECTOR: Gives advice about mining and drilling operations. This programme pointed to a $2 billion molybdenum deposit that human experts had previously overlooked.
All the above expert systems have enjoyed good success, but then they have also had hundreds of human-years of effort. However, in the case of PROSPECTOR, which is invariably cited by AI enthusiasts as a particularly good example, [102] notes that it is credited with finding only one mineral deposit, around 1982, and this was achieved using a never-released experimental version rather than the publicly available version. Since that time, there have been no other reported mineral deposit finds.
11.1.2 Features of an expert system
Most expert systems have, in addition to a knowledge base, a facility to explain how they arrived at a decision, some sort of question and answer format, and an accompanying probability analysis to address the degree of uncertainty.

In structure they have an inference engine, a knowledge base, and, after time, some case-specific data.
An inference engine
This is the part of the programme that decides what knowledge is relevant and what further
questions need to be asked to arrive at a conclusion. It is the central core or kernel. There are two
different styles of inference engine:
1. Goal driven or top down
2. Data driven or bottom up
The style which is used depends on the application. A goal driven system asks questions that work backwards from a possible goal to find pre-requisites (sub-goals) until the initial conditions required by the proposed goal are the same as the actual initial conditions.

A data driven method looks at the data and examines which rules are satisfied. The rules that are satisfied are said to 'fire', which in turn may satisfy conditions for other rules to fire. This continues until a goal is satisfied. An example of the two types of inference engine comes from HAZOP (hazard and operability) studies:

- Consider causes: top down
- Consider consequences: bottom up
Goal driven
Suppose you are an expert consultant employed by a multinational oil company which is wondering why their plant has just exploded. You propose (guess) that a pipe carrying octane corroded at an elbow, spraying flammable material around. A dialogue between an expert (you) and an eye witness might go something like:

Was there a large plume after the explosion?
... Yes
Was the centre of the explosion 50m down wind of the suspected leak?
... Well it was about 100m I suppose.

You are trying to establish sufficient evidence to support your initial hypothesis.
Data driven
You are designing a new plant and carry out a consequence analysis as part of a preliminary
HAZOP (Hazard and operability study). You might ask:
What would happen if pump #23 failed? Would the pressure be high enough to
rupture the vessel? Would the contents find a source of ignition within the
explosive limit?
Here you are taking the initial data and evaluating what would happen given that data. You then
match that with what really happened.
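The data-driven (forward-chaining) idea described above can be sketched in a few lines of Python. The toy rules and facts below (pump, flow and pressure events) are invented purely for illustration; a real expert-system shell is of course far richer. Each rule fires when all of its premises are known facts, and a fired rule may enable further rules to fire.

```python
# Toy rule base: (set of premises, conclusion). All facts are invented examples.
rules = [
    ({"pump23_failed"}, "no_flow"),
    ({"no_flow", "heater_on"}, "high_pressure"),
    ({"high_pressure"}, "vessel_rupture_risk"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, possibly enabling other rules
                fired = True
    return facts

derived = forward_chain({"pump23_failed", "heater_on"}, rules)
assert "vessel_rupture_risk" in derived   # chained through no_flow and high_pressure
```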
Knowledge representation in expert systems

Information in the knowledge base has two forms: facts and rules. The fact base contains a collection of fact statements. These facts may be stored in the form older(John,Paul), which is read as 'John is older than Paul'. This way of expressing facts is derived from the PROLOG programming language. Facts in a more engineering context could be:

saturated(steam, 100C, 1 Bar)   saturated(freon, -36C, 1 Bar)   rupture(boiler, 15 Bar)

These facts can be dynamically updated or deleted from the knowledge base as the expert system grows and gains experience. The second type of knowledge representation is as rules. The rules are usually of the simple IF ... THEN ... ELSE type:

IF the flame colour is blue AND the flow rate is high THEN a secondary reaction is occurring.

Other types of knowledge representations are possible, such as semantic nets or neural networks. Neural networks are introduced in section 11.2.
11.1.3 The user interface
The user interface is the part of the programme that asks the questions and receives the answers. Generally the users of the expert system are not computer experts and would like to communicate in as near to natural language as possible. An example that requires a good user interface is one where a group of doctors installed a common-disease diagnosis expert system in the waiting room of their surgery. The waiting patients could ask the expert system, in the same way they could ask the doctor, what was wrong with them whilst waiting for the doctor. Supposedly the programme was not too successful, or the doctors might soon be out of a job. The user interface can operate on at least 3 levels:
1. Solves problems for the client. (normal mode)
2. The user is an expert who is extending the knowledge base of the system.
3. The user is extracting knowledge from the programme. (Industrial espionage?)
11.1.4 Expert systems used in process control
Why is it that an experienced human operator can often satisfactorily control large complex plants when they exhibit nonlinear time varying behaviour and where there is little or no precise knowledge available? Examples are control of processes such as sugar crystallisation, cement kilns, flotation separations etc. All these plants involve imprecise measurements such as sharpness of crystals, colour of the flame, shape and size of bubbles, and stability of boiling. Given that conventional controllers have difficulty and are substandard when compared to the human operators, it seems logical to try and develop non-conventional computer based controllers that duplicate what the human operator is doing. Here the human operator is considered an expert, and one tries to capture their way of doing things in a computer programme such as an expert system. Fuzzy control is a further attempt to duplicate how a human makes decisions given vague and qualitative information. Fuzzy control is quite closely related to expert systems, and is described formally in [73]; an application to the nonlinear evaporator is given in chapters 13 and 14 of [145].
Outside regulatory control, expert systems have also been used in:

- Tuning industrial PID controllers (pattern recognition)
- Monitoring equipment for faults (diagnosis)
- Analysing alarms on complex plants
- Designing distributed computer control (DCC) configurations
- Qualitative simulation
- Prediction, planning and design
Auto-tuners
PID controllers are very common on chemical processing plants. However, they will only work effectively if they are well tuned. Tuning a PID controller means selecting the optimum gain (Kc), integral time (τi) and derivative time (τd). These 3 variables will be dependent on the process being controlled. Traditionally instrument engineers tuned the PID controllers during start-up, but often never re-tuned them afterwards. Auto-tuners such as the EXACT controller manufactured by Foxboro will re-tune the controller if the operator demands, [17, pp445-446]. This means that the controller can be frequently retuned by inexperienced personnel. The method of tuning is based on analysis of the transient response of the closed loop system to changes in setpoint and load. It uses the methods of Ziegler-Nichols. Heuristic logic is used to detect if a proper disturbance has occurred and when the peaks subsequently occur; hence the expert system. This system has been widely accepted in industry.
Fault diagnosis
Expert systems are ideal for fault diagnosis. Here one constructs a fault tree, a network of conditions connected by branches. Given a fault such as a high exit temperature, you can trace back through the tree and find the cause, such as a pump failure.

Unfortunately these trees can get rather large very quickly, so some heuristic rules must be employed to reduce the scope of the tree. Also, most people will ignore events that have a low probability associated with them (asteroids hitting the distillation column, for example).
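The backwards trace through a fault tree can be sketched as follows. The tree here is invented for illustration, and is of course vastly smaller than any industrial one: each symptom maps to its possible upstream causes, and the trace recurses until it reaches leaf events, which are the candidate root causes.

```python
# Toy fault tree: each observed symptom maps to possible upstream causes.
# The events and structure are invented examples only.
fault_tree = {
    "high_exit_temperature": ["coolant_flow_low"],
    "coolant_flow_low": ["valve_stuck", "pump_failure"],
    "valve_stuck": [],
    "pump_failure": [],
}

def root_causes(symptom, tree):
    """Trace back through the tree to the leaf (root-cause) events."""
    causes = tree.get(symptom, [])
    if not causes:
        return [symptom]            # a leaf event: a candidate root cause
    found = []
    for c in causes:
        found.extend(root_causes(c, tree))
    return found

assert root_causes("high_exit_temperature", fault_tree) == ["valve_stuck", "pump_failure"]
```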
Alarm system analysis
Related to fault diagnosis is alarm analysis. In a large chemical plant, there may be thousands of variables and hundreds of controllers. If each controller has two alarms (high & low) associated with it, then there will also be hundreds of possible alarms. Often one alarm will trigger other alarms, which in turn also trigger alarms. Consequently a single fault may generate many different alarms, making the control room display light up like a Christmas tree. Faced with so many alarms, the operator may find it difficult to evaluate exactly what has happened and what should be done to correct the situation. Experience has shown that typically operators, who otherwise know the process better than anybody else, make poor decisions when faced with the emergency situation of so many alarms.

Expert systems have been developed to analyse these alarms (what order, which type etc.) and to try to deduce what was the crucial initial fault that triggered all the subsequent alarms. Once the programme has established a probable cause, it advises the operator of the cause and a suitable remedy. Programmes of this type typically have heuristic rules and plant models built into them. To be useful, however, they must generate good answers quickly (within say 5 minutes), and reject the inevitable false alarms.
Qualitative simulation (QSIM)
This combines the elds of exact mathematical modelling (simulation) and the heuristic approach
of expert systems and is described in [58].
In an emergency situation such as a major plant failure, the human operators want to be able
to predict quickly what will happen if they take some particular course of action such as open
a valve or shut down a reactor for example. They do not have time or even are interested in
exact results, just general trends. QSIM is designed to provide reasonably condent results in
the face of uncertainty without having to resort to a time consuming major plant simulation.
Essentially QSIM is a method that tries to provided knowledge of the approximate dynamics
of the system from incomplete knowledge of the process. Normally everything must be exactly
known (or estimated exactly) before any traditional equation solver or dynamic simulator such
as those discussed in chapter 3 can begin. This condition is not required for the QSIM algorithm.
The QSIM algorithm is a little like a programming language that includes operators, constraints, relations and values. Operators include the familiar mathematical operators such as:

MULT(mass,temp,enthalpy)    DERIV(enthalpy,heat_flow)

where the first entry can be read as mass × temperature = enthalpy. Other operatives include qualitative relations such as:

M+(temp,press)    M-(press,vol)

The first qualitative relation says that pressure increases with temperature for a particular process. Nothing is said about how large the increase is, nor are particular values given. The second relation says that volume decreases with pressure in some unspecified manner. This style of relation incorporates uncertainty and cannot be duplicated in normal simulation computer programmes.
Values can equal landmark values such as −∞, 0, +∞, or somewhere in between. Values can also be increasing or decreasing. If a temperature (in Kelvin) is decreasing, then the qualitative state will be T = ((0,+∞),dec). Dynamic modelling is possible with QSIM just as with conventional quantitative simulation, but for QSIM the output results are now not numerical values, but qualitative states.
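The flavour of this style of reasoning can be sketched in a few lines of code. The following is a toy illustration only, not part of any QSIM implementation; the names (NEG, m_plus, qual_add) are invented here. Values carry just a sign, the monotonic relations M+ and M- propagate a direction of change, and a qualitative sum of opposing signs is ambiguous:

```python
# Toy sketch of qualitative reasoning in the QSIM spirit.
# Signs are -1, 0, +1, or None (unknown); all names are invented for illustration.
NEG, ZERO, POS = -1, 0, +1

def m_plus(direction):
    """M+(x, y): y changes in the same direction as x."""
    return direction

def m_minus(direction):
    """M-(x, y): y changes in the opposite direction to x."""
    return None if direction is None else -direction

def qual_add(a, b):
    """Qualitative sum: ambiguous (None) when the signs oppose."""
    if a is None or b is None:
        return None
    if a == b or b == ZERO:
        return a
    if a == ZERO:
        return b
    return None          # POS + NEG: magnitudes unknown, so sign unknown

# M+(temp, press): temperature rising implies pressure rising
assert m_plus(POS) == POS
# M-(press, vol): pressure rising implies volume falling
assert m_minus(POS) == NEG
# subtracting two numbers of similar magnitude: the sign is unknown
assert qual_add(POS, NEG) is None
```

Note how the last assertion reproduces, in miniature, the subtraction ambiguity that plagues qualitative simulation.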
Difficulties of QSIM

- Most implementations today take more computer time than an equivalent numerical solution.
- Only very simple cases are described.
- Problems arise when two numbers close in magnitude are subtracted, since now the sign of the result is unknown! This causes big ambiguities in the result.
Problem 11.1 Tuning PID controllers in industrial conditions is somewhat of an art. Construct a rule-based expert system to aid a control engineer to tune a PID controller.
1. Write down a list of rules that you would use in your expert system. You should incorporate heuristics and mathematical calculations in this rule base. You may wish to add a classifier to judge what type of response you get (non-minimum phase, oscillatory, excessive dead time, etc.).
2. Structure the rules so the most important ones are searched first. Draw a diagram of the logic tree and show an example for a typical case. What happens if you have an open-loop unstable process, or a process that is difficult to control?
3. What sort of diagnostics and explanation facilities would you incorporate in your expert system?
11.2 Neural networks
We've replaced the bad cybernetic brain that went berserk and tore people limb-from-limb with a good one that does not.
– David Farley, 1994
Accompanying expert systems, the other popular development in artificial intelligence research with relevance for process control is Artificial Neural Networks, or ANNs. Artificial neural networks are computer programs that attempt to duplicate how the brain works, albeit at a very limited level, by simulating the highly interconnected biological neurons. The relation between the biological brain and even the most complex computer programs is really quite weak, and nowadays most people do not expect that current ANNs will ever convincingly duplicate a human's cognitive ability. However, ANNs do represent a novel and refreshing way of solving certain types of problems. Neural networks are also referred to as connectionist models, parallel distributed models, or neuromorphic systems.
Of course, any program that claims to duplicate the brain's workings must do so at a very crude level, and this is certainly the case for ANNs. Both artificial and biological neural networks are composed of simple elements called neurons that are joined together in a particular topology or
network. The connections between neurons are referred to as dendrites, and the inputs and outputs are called axons. However, in the case of the artificial neural network, the size and topology of the network is much, much simpler than those in even an ant's biological brain. We will not be concerned further with biological neural networks, so henceforth I will simply refer to ANNs as neural networks, with the 'artificial' term implied. The neural network attempts to model complex behaviour with a dense interconnection of simple computational elements, hence the descriptive term 'connectionist models' referred to above, and the obvious parallelising of the computation.
Table 11.1 compares the characteristics of expert systems and neural networks. The MATLAB neural network toolbox, [59], supplies tools and a methodology for elegantly simulating neural networks within the matrix framework. There are many introductory texts explaining neural networks; [154] is just one.
Table 11.1: A comparison of characteristics between expert systems and artificial neural networks

Neural networks                 Expert systems
example based                   rule based
domain free                     domain specific
finds rules                     needs rules supplied
easy to program & maintain      hard to program & maintain
needs database                  needs human expert
adaptive                        needs re-programming
time consuming                  generally quicker
It is clear by now that successful advanced control for chemical processes hinges on the development of accurate, suitable models, and that these models are expensive and time consuming to develop and verify. Neural networks are interesting because they can be made to model complex nonlinear systems with very little attention paid to the underlying structure of the model. In this sense, one size fits all, which has its obvious advantages when one is trying to model complex behaviour for which no feasible mechanism is known. If ANNs are to be useful for control problems, we must compare their performance to more classical and traditional techniques such as model based control, adaptive control and so forth. We should carefully look at the quality of the model/plant matching, the efficiency of parameter use, online and offline computer usage, and ease of tuning and configuration. [141, p151] cite some additional issues, such as the high degree of parallelisation exhibited by neural networks, which ensures a certain fault tolerance, and their speed of operation if the hardware is optimised for the task. One of the most successful areas of application of ANNs is the fusion of symbolic information, such as that used by an expert system, with a deterministic model. By combining both sources of information, one would naturally expect an improved result.
As a final caution, consider carefully the following quote by the computer scientist A. K. Dewdney in his exposé of bad science, Yes, We Have No Neutrons.

As computer recreations columnist for a number of years with Scientific American magazine, I had ample opportunity to review the literature on neural nets and may have unwittingly helped to spark the revolution by writing more than one article on the subject. Only now may I say what I never could in the column: The appearance of an idea in Scientific American does not automatically make it the coming thing. The number of people who seemed unable to spot the word 'recreations' in my column was truly frightening. Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem solving tool. Most computer scientists who work in the field known as computational complexity understand this very well.
Figure 11.1: Possible neuron activation functions: (a) hard limit, (b) linear with possible saturation, (c) nonlinear, e.g. tanh(x) or log-sigmoid.
Neural nets have already entered the long, slow decline predicted by Irving Langmuir (see the end of chapter 1) for all bad science, or for that matter, technology.
11.2.1 The architecture of the neural network
Neural networks have three basic characteristics: their architecture, or how the neurons are connected to each other; the type of processing element in the individual neuron; and the training rule used to initialise the network.
The neuron model
The neuron is a very simple element that takes as input a value p, multiplies this by a weight w, and then passes this weighted input to a transfer or activation function f(·). There are a variety of possible transfer functions, and some are given in Fig. 11.1. The simplest transfer (or activation) function is a binary switch: either it fires or it does not (see Fig. 11.1(a)). This is at the heart of the perceptron, but it is limited in what behaviour it can model. A more flexible choice is a linear transfer function, although in practice the output may be constrained by limits (see Fig. 11.1(b)); but it is the nonlinear activation functions (see Fig. 11.1(c)) that give the neural network the most power to model interesting and nonlinear relations.
The two most common nonlinear activation functions are the hyperbolic tangent,

y = \tanh(x),    (11.1)

and the log sigmoidal,

y = \frac{1}{1 + e^{-x}}.    (11.2)

Both of these are sigmoidal or 'S'-shaped curves that are bounded both above and below and are continuously differentiable.
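As a quick check of these two functions, the sketch below (plain Python, separate from the book's MATLAB toolbox; the helper names are ours) evaluates both and confirms the derivative identities that make gradient training cheap: d tanh(x)/dx = 1 − tanh²(x), and for the log sigmoid y′ = y(1 − y).

```python
import math

def logsig(x):
    """Log sigmoid of Eqn 11.2: bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def num_deriv(f, x, h=1e-6):
    """Central finite-difference derivative, for checking only."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
y_t, y_s = math.tanh(x), logsig(x)

# both are bounded, S-shaped curves
assert -1 < y_t < 1 and 0 < y_s < 1

# the analytic derivatives match a finite-difference estimate
assert abs(num_deriv(math.tanh, x) - (1 - y_t**2)) < 1e-6
assert abs(num_deriv(logsig, x) - y_s * (1 - y_s)) < 1e-6
```

These closed-form derivatives are what the back-propagation training used later in this section relies on.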
Multiple neurons in layers
There are two obvious extensions to the single-neuron example given above. First, we can deal with multiple inputs to the single neuron. In this case, the inputs are individually scaled by a weight, w, and then summed to form a single input to the activation function, in the same manner as before, as shown in Fig. 11.2.
Suppose we have N inputs with corresponding weights; then the output of the neuron, y, is

y = f\left( \sum_{i=1}^{N} w_i p_i + w_0 \right)    (11.3)

where w_0 is a bias weight. Bias weights have an implied input of 1 and are used to enable the neuron model to give a nonzero output for a zero input. This adds flexibility to the neuron model.
Figure 11.2: A single neural processing unit with multiple inputs: the synaptic connections carry weights w_1, w_2, ..., w_L plus a bias weight w_0 (with an implied input of 1) into a weighted summer, which feeds the nonlinear activation function f(·) to produce the output y.
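Eqn. 11.3 is nothing more than a weighted sum followed by the activation function. A minimal sketch in plain Python (the function name neuron is ours, not the toolbox's):

```python
import math

def neuron(p, w, w0, f=math.tanh):
    """Single neuron of Eqn 11.3: y = f(sum_i w_i * p_i + w0)."""
    return f(sum(wi * pi for wi, pi in zip(w, p)) + w0)

# two inputs, a bias, and a tanh activation
y = neuron(p=[1.0, -0.5], w=[0.5, 0.25], w0=0.1)
assert abs(y - math.tanh(0.5 - 0.125 + 0.1)) < 1e-12

# with zero inputs, the bias alone still produces a nonzero output
assert neuron(p=[0.0, 0.0], w=[0.5, 0.25], w0=0.1) != 0.0
```

The second assertion demonstrates the point made above: the bias lets the neuron respond even when every input is zero.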
If we duplicate the single neuron given in Fig. 11.2, we can build up a one-layer feed-forward neural network. Fig. 11.3 shows the topology for a single-layer neural network with L inputs and N outputs or neurons. The weight w_ij is the weight from the jth input to the ith neuron.
Given this multivariable structure, a convenient computational molecule employed by the Neural Network toolbox is a vector/matrix version of Eqn. 11.3,

y = f(Wx)    (11.4)

where y is an N × 1 vector of outputs, x is an L × 1 vector of inputs, and W is an N × L matrix of weights,
W = \begin{bmatrix}
w_{11} & w_{12} & \cdots & w_{1L} \\
w_{21} & w_{22} & \cdots & w_{2L} \\
\vdots & \vdots & \ddots & \vdots \\
w_{N1} & w_{N2} & \cdots & w_{NL}
\end{bmatrix}
The neural network is thus comprised of two mappings: the linear transformation Wx, and the nonlinear f(·).
The second extension is to feed the output of one neuron into the input of another. This layering of neurons forms the 'network' part of the neural network, as shown in Fig. 11.4. Organising the
Figure 11.3: A single-layer feedforward neural network: L inputs x_1, ..., x_L are each connected through the weights w_ij to N neurons, producing the outputs y_1, ..., y_N.
neurons in layers allows the technique far more flexibility to model things than before. Most neural networks in practice have sigmoidal processing elements, are arranged in layers with at least one central or 'hidden' layer, and have at least one bias input.
Figure 11.4: A 3-layer fully interconnected feedforward neural network: information flows from the input layer, through the hidden layer, to the output layer.
Layering the neurons gives a great deal of flexibility in what can be approximated by the network. In fact, one can prove using the Stone–Weierstrass theorem that just a two-layer network is sufficient to model any continuous function. Unfortunately this theorem does not tell us how many neurons we need, and it should be mentioned that simple polynomials or other orthogonal expansions can also approximate any continuous function to arbitrary accuracy, although again the theorem is silent regarding how many coefficients are required. (See for example [104, p289].) So two main design problems remain: how many layers we should use, and how many nodes in each layer. Naturally we wish to minimise the number of parameters to search for, but still obtain a reasonable result.
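The vector/matrix form of Eqn. 11.4 and the layering idea combine into only a few lines: each layer computes y = f(Wx + b), and a multi-layer network simply composes such maps. A plain-Python sketch with illustrative weights of our own choosing (the names layer and identity are ours):

```python
import math

def layer(W, b, x, f=math.tanh):
    """One network layer: y_i = f(sum_j W[i][j]*x[j] + b[i]), cf. Eqn 11.4."""
    return [f(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def identity(v):
    """A linear (purelin-style) output activation."""
    return v

# a 2-input -> 2-hidden (tanh) -> 1-output (linear) network
W1, b1 = [[0.5, -0.3], [0.2, 0.4]], [0.1, -0.1]
W2, b2 = [[1.0, -1.0]], [0.0]

x = [0.6, -0.2]
h = layer(W1, b1, x)               # hidden layer
y = layer(W2, b2, h, f=identity)   # linear output layer

# check one hidden neuron by hand against the layer routine
assert abs(h[0] - math.tanh(0.5*0.6 - 0.3*(-0.2) + 0.1)) < 1e-12
assert len(y) == 1
```

Stacking further calls to layer gives deeper networks; the design questions of how many layers and how many neurons per layer remain exactly as discussed above.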
Recurrent networks

In the network structure given in Fig. 11.4, the information travels from left to right, analogous to an open-loop process producing an output given an input. These are called static networks. However, for the analysis of dynamic models, it is natural to use a neural network structure with a feedback through a unit time delay, duplicating the nonlinear discrete-time model, x_{k+1} = f(x_k). Fig. 11.5 shows a neural network suitable for an autonomous (no input) dynamic system. The outputs are delayed one sample time and then fed back to the inputs.
Figure 11.5: A single-layer network with feedback: the outputs y_1, ..., y_N of the feedforward part pass through unit delays z^{-1} and are fed back as the inputs, starting from the initial conditions x_1(0), ..., x_N(0).
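Simulating such a recurrent network is simply iterating x_{k+1} = f(Wx_k) from the initial condition. In the sketch below (plain Python, with small illustrative weights of our own choosing), the weights make the map a contraction, so the free response decays to the origin, much like a stable autonomous plant:

```python
import math

def step(W, x):
    """One sample interval of the recurrent net: x_{k+1} = tanh(W x_k)."""
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

W = [[0.3, 0.1],
     [0.0, 0.2]]          # small weights: a contraction mapping
x = [1.0, -1.0]           # initial condition x(0)

for _ in range(100):      # iterate the unit-delay feedback loop
    x = step(W, x)

# the autonomous response has decayed essentially to zero
assert max(abs(xi) for xi in x) < 1e-9
```

With larger weights the same loop can sustain oscillations or exhibit multiple equilibria, which is precisely why recurrent structures are interesting for modelling dynamics.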
Other possibilities include using a two-layered network with delays between the layers, or feeding only some of the states back. Adding provision for a control signal is simply a matter of adding another node in the first layer joined to the input at time k.
Training or regressing the weights

Once we have decided on a network structure (number of hidden layers, number of neurons in each layer, etc.), and have decided on the type of neuron (sigmoid, linear, etc.), then we must calculate appropriate values for the weights between the neurons. In essence this is a nonlinear regression problem, where inputs and known outputs are presented to the network in a supervised learning mode. The weights are adjusted by the optimiser so that the network duplicates the correct answers. Once the network can do this, and the weights have converged, it may be able to extrapolate and produce reasonable answers to inputs that it has not seen previously.
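The whole supervised training step can be written out in a dozen lines. The sketch below is plain Python with fixed, arbitrarily chosen starting weights (it is not the toolbox's trainbp routine): a 1-input, 3-tanh-neuron, linear-output network is trained by steepest-descent back-propagation on a small data set, and the sum-of-squares error falls from its initial value.

```python
import math

def forward(x, W1, b1, W2, b2):
    """Hidden tanh layer followed by a linear output neuron."""
    h = [math.tanh(w * x + b) for w, b in zip(W1, b1)]
    y = sum(v * hi for v, hi in zip(W2, h)) + b2
    return h, y

def sse(data, W1, b1, W2, b2):
    """Sum-of-squares prediction error over the data set."""
    return sum((forward(x, W1, b1, W2, b2)[1] - t) ** 2 for x, t in data)

# fixed small starting weights, chosen arbitrarily for reproducibility
W1, b1 = [0.5, -0.4, 0.3], [0.1, -0.1, 0.0]
W2, b2 = [0.2, 0.3, -0.2], 0.0

data = [(k / 5.0, (k / 5.0) ** 2) for k in range(-5, 6)]  # target y = x^2
err0 = sse(data, W1, b1, W2, b2)

lr = 0.05                                  # learning rate
for _ in range(2000):                      # training epochs
    for x, t in data:
        h, y = forward(x, W1, b1, W2, b2)
        e = y - t                          # output error
        for i in range(3):
            # back-propagate through the output weight and tanh' = 1 - h^2
            dh = e * W2[i] * (1 - h[i] ** 2)
            W2[i] -= lr * e * h[i]
            W1[i] -= lr * dh * x
            b1[i] -= lr * dh
        b2 -= lr * e

assert sse(data, W1, b1, W2, b2) < err0    # the regression has improved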
11.2.2 Curve fitting using neural networks

Fitting functions to data is a classic engineering pastime. When faced with some data, say such as given in Fig. 11.6, our first, and often hardest, choice is to decide what is an appropriate underlying structure for the function that stands some chance of going through the data. This is where strategies that make minimal demands on any a priori information, such as neural networks, have advantages.
Figure 11.6: An input/output data series and initial trial fit (solid) to be approximated using a neural network. The initial fit was obtained by randomly setting the neural-network weights.
We are going to make use of the neural network characteristic of 'one size fits all' mentioned above, in that we will not actually pay much attention to the structure of the model to which we will regress. To speed up the prototyping, I am going to use the Neural Network toolbox for MATLAB, but constructing this example from scratch is also very simple. I will also closely follow the example for curve fitting from the NN toolbox manual.
Traditionally, to select a model structure for curve fitting, I would, by looking at the data, choose some sort of growing exponential sinusoidal family with say 3–6 parameters, and then use a nonlinear regression package. However, in this case we avoid the problem-specific model selection step and, just to be different, will use neural networks. It is this advantage, that of eliminating the requirement to select a model structure, that is why neural networks are currently so popular.
We will search for the values of the weights inside the neural network (also termed parameters) such that our neural network approximates the unknown function. We will use a crude, but effective, optimisation technique called back propagation to search for these weights. Searching for the optimal neural network weights is termed training the network, but in reality it is simply a nonlinear regression problem.
How many neurons and in what structure?

It turns out that we can, at least in principle, fit any function with a finite neural network with only two layers: the first a nonlinear layer, and the output a linear layer. Unfortunately, this result does not tell us how many neurons are required for a given accuracy specification, so it is not much use in practice. Hence we will use a three-layer network (two hidden layers which are nonlinear, and one linear output). Current thinking is that if the network is too large, it will not be able to generalise very well, despite good performance on the training runs.
I will use the hyperbolic tangent activation function (see Fig. 11.1(c)) for the nonlinear neurons. We are already constrained in the number of input and output neurons; that is given by the problem. In our case we have 1 input (the independent variable) and 1 output (the dependent variable). However, we do have to choose how many neurons we want in our two hidden layers. Too many neurons take too much memory, and can over-fit the data. Too few may never fit the data to the desired level of accuracy. I will choose 10 neurons for each of the hidden layers.
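The number of fitted parameters grows quickly with the layer sizes. A quick count (the helper below is our own; we assume one bias per neuron, which is one common convention): each layer contributes (inputs + 1) × neurons weights. For the small (2,1) hidden-layer network used for comparison later this gives 9 parameters, and the (10,10) network carries roughly 140.

```python
def param_count(layer_sizes):
    """Weights plus one bias per neuron, summed layer by layer.
    layer_sizes = [inputs, hidden1, ..., outputs]."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 1 input -> hidden layers of (2,1) neurons -> 1 output: 9 parameters
assert param_count([1, 2, 1, 1]) == 9

# 1 -> 10 -> 10 -> 1: 141 parameters with a bias on every neuron,
# i.e. roughly the 140 quoted for this network
assert param_count([1, 10, 10, 1]) == 141
```

The exact total depends on whether the linear output neuron carries a bias, but the order of magnitude, well over a hundred parameters for a one-dimensional curve fit, is the point to note.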
The MATLAB code in Listing 11.1 first generates some arbitrary data to be fitted, and initialises the neural network with random weights.
Listing 11.1: Generate some arbitrary data to be used for subsequent fitting

x = linspace(-2,2,20);                  % Note: row vector for input
F = @(x) 1./(1+2*x.^2).*cos(3*(x-2));   % create "unknown" target function to fit!
y = F(x);

xi = linspace(x(1),x(end),50);          % interpolating input vector to validate

% Now create network skeleton
[R,Q] = size(x);                        % dimension of input
S1 = 10; S2 = 10;                       % # of hidden nodes (2 layers)
[S3,Q] = size(y);                       % dimension of output

% Initialise weights and biases
[W1,B1] = rands(S1,R);                  % random weights to start
[W2,B2] = rands(S2,S1); [W3,B3] = rands(S3,S2);

% try out our initial guess
plotfa(x,y,xi,purelin(W3*tansig(W2*tansig(W1*xi,B1),B2),B3));
Clearly, when using random weights, any fit will be purely fortuitous. More likely, as a first estimate, we will see a fit similar to the attempt in Fig. 11.6. However, hopefully once we have optimised the network weights the fit will improve.
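The nested expression purelin(W3*tansig(W2*tansig(W1*xi,B1),B2),B3) in the listing is simply the three-layer forward pass written inside-out. For a single scalar input, it can be sketched in plain Python (tiny weights chosen arbitrarily for illustration; tansig is the hyperbolic tangent and purelin is the identity):

```python
import math

def tansig(u):
    """MATLAB-style tansig: elementwise hyperbolic tangent."""
    return [math.tanh(ui) for ui in u]

def affine(W, x, b):
    """W*x + b for a list-of-rows weight matrix W."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# 1 input -> 2 tanh -> 2 tanh -> 1 linear output (weights chosen arbitrarily)
W1, B1 = [[0.4], [-0.6]], [0.0, 0.1]
W2, B2 = [[0.5, 0.2], [-0.3, 0.7]], [0.1, -0.1]
W3, B3 = [[1.0, -1.0]], [0.05]

xi = [0.8]
h1 = tansig(affine(W1, xi, B1))    # first hidden layer
h2 = tansig(affine(W2, h1, B2))    # second hidden layer
y = affine(W3, h2, B3)             # purelin (linear) output layer

assert len(y) == 1 and -3 < y[0] < 3
```

Reading the MATLAB one-liner from the innermost parentheses outwards reproduces exactly this sequence of layer evaluations.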
Now we are ready to train the network using the back-propagation algorithm. We specify a maximum of 2000 iterations, and a final error allowance of 10^{-3}. There is scope for many more tuning parameters just for the optimiser, and these are specified in the following code. The algorithm should work without them, but it will be much less efficient. We will use the supplied trainbpx m-file to perform this optimisation.
disp_freq = 500;        % plotting frequency
max_epoch = 20000;      % maximum allowable iterations
err_goal = 1.0e-3;      % termination error
lr = 0.01;              % learning rate
TP = [disp_freq max_epoch err_goal lr];

% adaptive learning rate add-ons
lr_incr = 1.01;         % learn rate increase
lr_decr = 0.99;         % decrease
momentum = 0.9;         % low-pass filter (of sorts)
err_r = 1.04;
TPx = [TP, lr_incr, lr_decr, momentum, err_r];

% Now do the training by accelerated back-propagation
[W1x,B1x,W2x,B2x,W3x,B3x,epoch,errors] = ...
    trainbpx(W1,B1,'tansig',W2,B2,'tansig',W3,B3,'purelin',x,y,TPx);

% finally when finished plot the best approximated curve
plotfa(x,y,xi,purelin(W3x*tansig(W2x*tansig(W1x*xi,B1x),B2x),B3x));
If you have tried the above, you will have learned the first important feature of neural networks: they are not very computationally efficient! The result of my best fit is given in Fig. 11.7. The fit is good, considering we are using 140 parameters! Just for comparison, the dashed line in Fig. 11.7 shows the fit using only 3 neurons in the hidden layers, or 9 fitted parameters.
Figure 11.7: An unknown input/output function approximated using a neural network with two hidden layers of 10 neurons each, with 140 fitted parameters (solid), and with (2,1) neurons in the two hidden layers, with 9 parameters (dashed). Both optimisers were constrained to 2000 iterations.
Actually, one must be wary of comparisons such as these, since in both cases the optimiser timed out by exceeding the maximum allowed number of iterations rather than actually finding the appropriate minimum. On the other hand, however, I do not have time to waste! For the case with 5 neurons in the hidden layers, a good fit is obtained after 16,000 iterations.
Predicting the tides

The high and low tides in Auckland harbour (36°50′39″S, 174°46′E) are shown in Fig. 11.8(a) over 4 months.¹ Note that this time series only shows the extreme values of the data, and it is evident from the plot that Auckland experiences what is known as semi-diurnal tides, but with a noticeable mixed component. That is, Auckland has two low tides and two high tides each day, but one of the tides is larger than the other.
Recreational boaters need to know the tides, and it is interesting to try to fit a function to this data. In fact this is quite a challenge because there are really 4 curves in Fig. 11.8(a). Suppose, to make the problem easier, we just take one of the high tides, and use this as a test case for fitting a neural network function.
Fig. 11.8(b) shows the result of fitting a large neural network with 2 layers of 30 neurons each. Note that the last month of data was withheld from the neural network, and gives an indication of the quality of the extrapolation that we can expect. Not too good, I'm afraid.
Problem 11.2 1. Repeat the curve fitting using a neural network with only one hidden layer. Investigate the effect of varying the number of hidden neurons.
2. Fit the data given in Fig. 11.6 using conventional nonlinear least-squares optimisation. (The optimisation toolbox procedure fminu may be useful.) You will need to select an appropriate model and suitable initial parameters. How does this compare to the neural network approach in terms of computational effort required, user understanding, robustness, etc.?
¹ This data was obtained from http://www.niwascience.co.nz/services/tides.
Figure 11.8: Fitting a neural network function to tide data.
(a) The high and low tides (height relative to mean sea level) in Auckland harbour over 4 months.
(b) High-tide predictions. Note that the September data was not used for the curve fitting, and is used to validate the fit.
Some preliminary cautions

A very short, readable criticism of the whole neural network idea is given by Ponton [160]. Ponton makes the point that there is nothing particularly special about neural networks, and that often fitting a known nonlinear model will probably work just as well, if not better. Ponton also asserts that a neural network function does not generally correspond to physical reality, and in general the network output cannot be extrapolated beyond the domain used to train (or in other words, fit) the network parameters. Ponton advises that one can use a neural network when nothing is known about a particular relationship except that it is nonlinear. In many engineering applications, however, something is known (or at least partially known), and this information is useful in the model. This approach is known as using the a priori knowledge available. Ponton ends his short critique of neural networks with the following two statements found from experience:

- A simpler arbitrary function can probably be found to perform just as well as an ANN, and
- a model incorporating almost any engineering or scientific knowledge will do much better.
In a similar manner, the September 1990 issue of BYTE magazine published a dialogue from a discussion of experts in the field of artificial intelligence. BYTE asked these experts 'Why have we failed to come up with real AI?' The replies ranged from 'It's an extremely hard problem' (Danny Hillis), to 'AI started to work on robotics and expert systems, which were very easy in comparison to really understanding intelligent behaviour and learning at the early stages' (Nicholas Negroponte), to the dramatic 'The computation power required . . . is enormous' (Federico Faggin), to 'At a very fundamental level, we do not know how the brain works' (Dick Shaffer). Remember some of these concerns when next investigating the claims of new expert systems and artificial intelligence.
11.3 Summary

Expert systems have been around for about 25 years and have been popular for about the last 10 years. They represent a major departure from the traditional way of using computers, from that of a precise, exact machine solving engineering problems exactly, to solving more vague problems that require concepts at which humans have traditionally been better than computers. With this new technology have arrived new programming languages and new computer architectures.
Expert systems in the process control field are still in their infancy, and have had mixed success. Expert systems are often applied to control problems that are traditionally 'control hard', such as the control of cement kilns and pulp digestors. These applications are characterised by complicated physiochemical phenomena with scarce reliable online measurements. Notwithstanding, human operators do manage to operate these plants, so one predicts there is scope for an expert system. Usually the expert system is configured to operate in an advisory mode which the operator can override. Installation costs are high, hence they tend to be used only for projects that show a large potential payback.
Recently there have been some published disillusioned analyses of expert systems, and many researchers abandoned the field, moving to the now more popular neural network field. Whether this currently popular ship will also capsize in the near future makes interesting speculation.
Like expert systems, neural networks have also been around for 25 years or more, and are only now gaining interest in the general engineering community. The neural network is the quintessential black-box model that, hopefully, after it has been trained on some data, can make useful predictions with similar, but slightly different, data. The training of the network, or alternatively the regression of the network weights, is a time-consuming, expensive nonlinear optimisation problem, but needs to be done only once, offline. The actual operation of the neural network, once trained, is trivial.
Appendix A
List of symbols
SYMBOL      DESCRIPTION                     UNITS
t           time                            s
T, Δt       sample time                     s
τ           time constant                   s
K           gain                            -
ζ           shape factor                    -
δ(·)        Dirac delta function            -
f(t)        function of time                -
f*(T)       sampled function f(T)           -
L(·)        Laplace transform               -
Z(·)        z transform                     -
s           Laplace variable                -
z           z transform variable            -
q⁻¹         backward shift operator         -
ω           radial velocity                 radians/s
f           frequency                       Hz
V(x)        scalar Lyapunov function        -
P, Q        positive definite matrices      -
J           Jacobian matrix                 -
num         numerator                       -
den         denominator                     -
M           mass                            kg
F           material flow                   kg/s
ρ           density                         kg/m³
h           height                          m
A           cross-sectional area            m²
ΔP          pressure differential           kPa
T           temperature                     K, °C
H           enthalpy                        kJ
I           current                         mA
cp          heat capacity                   kJ/kg·K
ΔT          temperature differential        K
Q, q        heat loss (gain)                kJ/s
J           moment of inertia               -
θ           angle                           radians
Λ           relative gain matrix            -
u           manipulated variable            -
ε           error                           -
τi          integral time                   s
τd          derivative time                 s
Kc          proportional gain               -
Ku          ultimate gain                   -
ωu          ultimate frequency              rad/s
Pu          ultimate period                 s
φm          phase margin                    s
-           actual PID parameters           -
d           dead time                       samples, s
P           period                          s
-           signal power                    -
W           weight                          kg
M           model                           -
S           system                          -
θ           vector of parameters            -
ε           error vector                    -
t_{1-α/2}   t-statistic                     -
cov(·)      co-variance                     -
C           past input/output data          -
K           gain matrix                     -
P           co-variance matrix              -
I           identity matrix                 -
λ           forgetting factor               -
f           factor                          4
E{·}        expected value                  -
Δt          sample time                     s
x           state vector                    n × 1
u           input vector                    m × 1
y           output vector                   r × 1
d           disturbance vector              p × 1
A           system matrix                   n × n
B           control matrix                  n × m
D           disturbance matrix              n × p
C           measurement matrix              r × n
Φ           transition matrix               n × n
Δ           discrete control matrix         n × m
Θ           discrete disturbance matrix     n × p
θ           time delay                      s
#           whole samples delay             -
z           augmented state vector          -
Co          controllability matrix          -
Ob          observability matrix            -
K           controller gain                 -
L           observer gain                   -
J, j        performance index               $
Q, R        weighting matrices              -
x̄           mean of x                       -
σx          standard deviation of x         -
λ, μ        Lagrange multipliers            -
H           Hamiltonian                     -
ε           termination criteria            -

SUBSCRIPTS    DESCRIPTION
N             Nyquist
u, l          upper & lower
ss            steady-state

SUPERSCRIPTS  DESCRIPTION
x̂             estimate (of x)
x̄             mean (of x)
x*            desired value (of x)
Appendix B
Useful utility functions in Matlab
The routine in Listing B.1 adds two or more polynomials of possibly differing lengths. The resultant polynomial order is the same as the maximum order of the input polynomials. Note that this routine pads with zeros to the left so that all the polynomials are of the same size. It is assumed (but not checked) that the polynomials are row vectors in descending order.
Listing B.1: Polynomial addition.

function R = polyadd(A,varargin)
% Adds two or more row vectors of possibly differing lengths.
% R(x) = A(x) + B(x) + C(x) + ...
np = length(varargin); nmax = length(A);
for i=1:np  % find maximum order in the input argument list
    nmax = max(nmax,length(varargin{i}));
end % for
R = [zeros(1,nmax-length(A)),A];
for i=1:np
    varargin{i} = [zeros(1,nmax-length(varargin{i})), varargin{i}];
    R = R + varargin{i};
end % for
return
This routine is useful for checking the results of a Diophantine solution, A(x)R(x) + B(x)S(x) = T(x):

Tc = polyadd(conv(A,R),conv(B,S))
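The same check is easy to reproduce outside MATLAB. A plain-Python sketch with hand-rolled polynomial addition and convolution (the helper names are ours) verifies that A(x)R(x) + B(x)S(x) recovers T(x) for a small example:

```python
def polyadd(*polys):
    """Add row-vector polynomials of differing lengths (pad on the left)."""
    n = max(len(p) for p in polys)
    out = [0.0] * n
    for p in polys:
        for i, c in enumerate(p):
            out[n - len(p) + i] += c
    return out

def conv(a, b):
    """Polynomial product: discrete convolution of coefficient vectors."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# a small Diophantine-style identity A*R + B*S = T, coefficients descending
A, R = [1.0, 2.0], [1.0, 1.0]      # A(x)R(x) = x^2 + 3x + 2
B, S = [1.0], [1.0, 0.0]           # B(x)S(x) = x
T = polyadd(conv(A, R), conv(B, S))
assert T == [1.0, 4.0, 2.0]        # T(x) = x^2 + 4x + 2
```

As with the MATLAB routine, the only subtlety is padding shorter coefficient vectors on the left before adding.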
The routine in Listing B.2 convolves, or multiplies, two or more polynomials. Unlike polynomial addition, we do not need to ensure that all the polynomials are of the same length.
Listing B.2: Multiple convolution.

function y = mconv(a,varargin)
% Multiple convolution of polynomial vectors
% R(x) = A(x)B(x)C(x)...
% Would be faster to divide & conquer, doing lots of smaller ones first.
if nargin < 2
    error('Need two or more arguments')
end % if
y = conv(a,varargin{1});  % do the first pair
for i=2:nargin-1          % now do any remaining
    y = conv(y,varargin{i});
end % for
return
Listing B.3 strips the leading zeros from a row polynomial, and optionally returns the number of zeros stripped, which may, depending on the application, be the deadtime.

Listing B.3: Strip leading zeros from a polynomial.

function [Br,i] = stripleadzeros(B)
% Strip the leading zeros from a (vector) polynomial & return deadtime (# of leading zeros)
% [Br,i] = stripleadzeros(B)
for i=0:length(B)-1
    if B(i+1) ~= 0   % first nonzero coefficient found
        Br = B(i+1:end);
        return       % jump out of routine
    end
end % for
return
Appendix C
Transform pairs
time function f(t)                Laplace transform F(s)      z-transform

unit step 1                       1/s                         z/(z − 1)
impulse δ(t)                      1                           1
unit ramp t                       1/s²                        Tz/(z − 1)²
t²/2                              1/s³                        T²z(z + 1)/(2(z − 1)³)
tⁿ                                n!/s^(n+1)
e^(−at)                           1/(s + a)                   z/(z − e^(−aT))
t e^(−at)                         1/(s + a)²                  Tz e^(−aT)/(z − e^(−aT))²
(1 − at) e^(−at)                  s/(s + a)²
sin(at)                           a/(s² + a²)                 z sin(aT)/(z² − 2z cos(aT) + 1)
cos(at)                           s/(s² + a²)
(e^(−bt) − e^(−at))/(a − b)       1/((s + a)(s + b))
e^(−bt) sin(at)                   a/((s + b)² + a²)
e^(−bt) cos(at)                   (s + b)/((s + b)² + a²)
1/(ab) + (a e^(−bt) − b e^(−at))/(ab(b − a))      1/(s(s + a)(s + b))
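As a check on one of the entries, the z-transform of the sampled exponential e^{-at} follows directly from the geometric series (T is the sample time; the sum converges for |e^{-aT} z^{-1}| < 1):

```latex
\mathcal{Z}\left\{ e^{-akT} \right\}
  = \sum_{k=0}^{\infty} e^{-akT} z^{-k}
  = \sum_{k=0}^{\infty} \left( e^{-aT} z^{-1} \right)^{k}
  = \frac{1}{1 - e^{-aT} z^{-1}}
  = \frac{z}{z - e^{-aT}}
```

Setting a = 0 recovers the unit-step entry, z/(z − 1), as a special case.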
Appendix D
A comparison of Maple and MuPad
Much of the functionality of the symbolic manipulator MAPLE, www.maplesoft.com, is duplicated in the freeware alternative MUPAD, www.sciface.com. Below, both packages are demonstrated in parallel.
D.1 Partial fractions
Suppose we want to decompose

G(s) = \frac{s^3 + s^2 + 1}{s^2(s^2 - 1)}

into partial fractions.
Maple uses the parfrac option in the convert command.

> G := (s^3+s^2+1)/(s^2*(s^2-1)):
> convert(G,parfrac,s);

-\frac{1}{s^2} + \frac{3}{2}\cdot\frac{1}{s-1} - \frac{1}{2}\cdot\frac{1}{s+1}
MuPad uses the partfrac command.

partfrac((s^3+s^2+1)/(s^2*(s^2-1)));

\frac{3}{2(s-1)} - \frac{1}{s^2} - \frac{1}{2(s+1)}

Note the different spelling!
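A quick numerical spot-check (plain Python) confirms that both packages' expansions recombine to the original G(s) at a few test points away from the poles:

```python
def G(s):
    """Original rational function G(s) = (s^3 + s^2 + 1) / (s^2 (s^2 - 1))."""
    return (s**3 + s**2 + 1) / (s**2 * (s**2 - 1))

def G_pf(s):
    """Partial fraction form: -1/s^2 + (3/2)/(s-1) - (1/2)/(s+1)."""
    return -1/s**2 + 1.5/(s - 1) - 0.5/(s + 1)

for s in (2.0, -3.0, 0.5):         # any s except the poles at 0, 1, -1
    assert abs(G(s) - G_pf(s)) < 1e-9
```

Evaluating both forms at arbitrary points like this is a cheap sanity check on any hand or machine-generated partial fraction expansion.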
D.2 Integral transforms
First we must load the relevant package to do integral transforms.

Maple, in this case for integral transforms:

> with(inttrans):

MuPad, in this case Laplace, Fourier, z and Mellin transforms:

export(transform);
Now we find the Laplace transform of a common function, L{sin(at)}:
laplace(sin(a*t),t,s);

    a/(a^2 + s^2)

laplace(sin(a*t),t,s);

       a
    -------
     2    2
    a  + s
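SymPy offers the same transform through its laplace_transform function; a parallel sketch:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

F = sp.laplace_transform(sp.sin(a*t), t, s, noconds=True)
print(sp.simplify(F - a/(s**2 + a**2)))   # -> 0, i.e. F = a/(s^2 + a^2)
```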
The inverse Laplace transform L^(-1){K/(s^2 + s + 1)}:

invlaplace(K/(s^2+s+1),s,t);

    (1/3) K sqrt(3) I ( e^((-1/2 - (1/2) sqrt(3) I) t) - e^((-1/2 + (1/2) sqrt(3) I) t) )

ilaplace(K/(s^2+s+1),s,t);

    2 K 3^(1/2) exp(-t/2) sin(t 3^(1/2) / 2)
    ----------------------------------------
                       3
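The complex-exponential and real trigonometric answers above are the same function written two ways. One way to convince yourself without a symbolic inverse is to transform the real form forwards and compare; a SymPy sketch:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
K = sp.symbols('K')

# real form of the inverse: (2 sqrt(3)/3) K e^(-t/2) sin(sqrt(3) t / 2)
x = 2*sp.sqrt(3)/3 * K * sp.exp(-t/2) * sp.sin(sp.sqrt(3)*t/2)

F = sp.laplace_transform(x, t, s, noconds=True)
print(sp.simplify(F - K/(s**2 + s + 1)))   # -> 0
```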
We can try a Laplace transform the hard way following the mathematical definition (without using the
integral transform packages),

    L{cos(at)}  =  ∫₀^∞ cos(at) e^(-st) dt

f := cos(a*t):
int(f*exp(-s*t),t=0..infinity);

However MAPLE returns a solution accompanied with a warning:

Definite integration: Can't
determine if the integral is
convergent. Need to know the
sign of --> x Will now try
indefinite integration and
then take limits.

    lim_{t -> ∞}  (-s e^(-st) cos(at) + a e^(-st) sin(at) + s) / (s^2 + a^2)

> assume(s,positive);
> simplify(%);

    s/(s^2 + a^2)

MuPad, however, has no reservations.

f := cos(a*t):
int(f*exp(-s*t),t=0..infinity);

       s
    -------
     2    2
    a  + s
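SymPy will also do the integral directly from the definition, and like MuPad it evaluates cleanly once the signs of the parameters are declared through assumptions:

```python
import sympy as sp

t, a, s = sp.symbols('t a s', positive=True)

F = sp.integrate(sp.cos(a*t) * sp.exp(-s*t), (t, 0, sp.oo))
print(sp.simplify(F - s/(s**2 + a**2)))   # -> 0, i.e. F = s/(s^2 + a^2)
```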
D.3 Differential equations
Find the solution to the linear ordinary differential equation with initial conditions

    3 d^2x/dt^2 + dx/dt + x = 4 e^(-t),    x(0) = 1,    dx/dt(0) = 2
In MAPLE we use the dsolve command to solve differential equations.

>ode2 := 3*diff(x(t),t,t) + diff(x(t),t) + x(t) = 4*exp(-t);
    ode2 := 3 (d^2/dt^2 x(t)) + (d/dt x(t)) + x(t) = 4 e^(-t)

>dsolve({ode2,x(0)=1,D(x)(0)=2},x(t));

    x(t) = (4/3) e^(-t) - (1/3) e^(-t/6) cos((1/6) sqrt(11) t) + (59 sqrt(11)/33) e^(-t/6) sin((1/6) sqrt(11) t)
In MUPAD we construct an ode object and use the normal solve command.

>eqn := ode({3*diff(x(t),t,t) + diff(x(t),t) + x(t) = 4*exp(-t), x(0)=1, D(x)(0)=2}, x(t));
ode({D(x)(0) = 2, x(0) = 1, x(t) + diff(x(t), t) + 3 diff(x(t), t, t) =
4 exp(-t)}, x(t))
>solve(eqn);
{ / 1/2 \ / 1/2 \ }
{ / t \ | t 11 | 1/2 / t \ | t 11 | }
{ exp| - - | cos| ------- | 59 11 exp| - - | sin| ------- | }
{ 4 \ 6 / \ 6 / \ 6 / \ 6 / }
{ -------- - ------------------------- + ---------------------------------- }
{ 3 exp(t) 3 33 }
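The same problem can be posed to SymPy, whose dsolve accepts the initial conditions directly; a sketch, with the result checked against both initial conditions:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

ode = sp.Eq(3*x(t).diff(t, 2) + x(t).diff(t) + x(t), 4*sp.exp(-t))
sol = sp.dsolve(ode, x(t),
                ics={x(0): 1, x(t).diff(t).subs(t, 0): 2})

xt = sol.rhs
# the solution must satisfy both initial conditions
print(sp.simplify(xt.subs(t, 0)))                # -> 1
print(sp.simplify(sp.diff(xt, t).subs(t, 0)))    # -> 2
```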
D.4 Vectors and matrices
To manipulate vectors and matrices in MUPAD, we must load the linear algebra package.
info(linalg):
which returns a list of functions available in the linear algebra package. We start by defining the domain
of the matrices we are dealing with. In our case we will stick with the default and call that domain type
M:
M := Dom::Matrix();
Dom::Matrix(Dom::ExpressionField(id, iszero))
Now we are ready to input a test matrix, A, and perform some arithmetic operations such as addition,
inverse, matrix exponentiation.
A:=M([[1,3],[4, 5]]);
+- -+
| 1, 3 |
| |
| 4, 5 |
+- -+
A+A;
+- -+
| 2, 6 |
| |
| 8, 10 |
+- -+
exp(A,T);
+- -+
| 3 exp(-T) exp(7 T) 3 exp(-T) 3 exp(7 T) |
| --------- + --------, - --------- + ---------- |
| 4 4 8 8 |
| |
| exp(-T) exp(7 T) exp(-T) 3 exp(7 T) |
| - ------- + -------- , ------- + ---------- |
| 2 2 4 4 |
+- -+
Of course we could involve variables in the matrix, and perform further manipulations
A:=M([[1,3],[4, x]]);
+- -+
| 1, 3 |
| |
| 4, x |
+- -+
A + A^(-1);
+- -+
| 12 3 |
| ------ + 2, - ------ + 3 |
| x - 12 x - 12 |
| |
| 4 1 |
| - ------ + 4 , x + ------ |
| x - 12 x - 12 |
+- -+
linalg::eigenValues(A);
{ 2 1/2 2 1/2 }
{ x (x - 2 x + 49) x (x - 2 x + 49) }
{ - + ------------------ + 1/2, - - ------------------ + 1/2 }
{ 2 2 2 2 }
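A SymPy sketch of the same two operations, the matrix exponential and symbolic eigenvalues, for comparison:

```python
import sympy as sp

T, x = sp.symbols('T x')

A = sp.Matrix([[1, 3], [4, 5]])
eAT = (A * T).exp()            # matrix exponential exp(A T)
# the (1,1) entry should match MuPad: 3 exp(-T)/4 + exp(7 T)/4
print(sp.simplify(eAT[0, 0] - (sp.Rational(3, 4)*sp.exp(-T)
                               + sp.Rational(1, 4)*sp.exp(7*T))))  # -> 0

Ax = sp.Matrix([[1, 3], [4, x]])
evals = list(Ax.eigenvals())
# eigenvalues (x + 1)/2 +/- sqrt(x^2 - 2x + 49)/2; their sum is the trace
print(sp.simplify(evals[0] + evals[1] - (x + 1)))                  # -> 0
```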
Appendix E
Useful test models
The following are a collection of multivariable models in state-space format which can be used to test
control and estimation algorithms.
E.1 A forced circulation evaporator
The linearised forced circulation evaporator developed in [145, chapter 2] and mentioned in §3.3.2 in
continuous state-space form, ẋ = Ax + Bu, is
    A = [ 0    0.10455        0.37935
          0    0.1            0
          0    1.034×10^-2    5.4738×10^-2 ]                                                    (E.1)

    B = [ 0.1  0.37266       0             0.36676       0.38605       0    3.636×10^-2   0
          0.1  0             0             0             0.1           0.1  0             0
          0    3.6914×10^-2  7.5272×10^-3  3.6302×10^-2  3.2268×10^-3  0    3.5972×10^-3  1.7785×10^-2 ]   (E.2)
with state and input variables defined as in Table 3.3,

    x = [ L2  P2  x2 ]^T                                                                        (E.3)

    u = [ F2  P100  F200 ⋮ F3  F1  x1  T1  T200 ]^T                                             (E.4)
In this case we assume that the concentration cannot be measured online, so the output equation is

    y = [ 1  0  0
          0  1  0 ] x                                                                           (E.5)
E.2 Aircraft model
The following model is of an aircraft reported in [61, p31]. The state model is

        [ 0   0        1.132    0        -1      ]        [  0      0   0      ]
        [ 0  -0.0538  -0.1712   0        0.0705  ]        [ -0.12   1   0      ]
    A = [ 0   0        0        1        0       ],   B = [  0      0   0      ]        (E.6)
        [ 0   0.0485   0       -0.8556  -1.013   ]        [  4.419  0  -1.665  ]
        [ 0  -0.2909   0        1.0532  -0.6859  ]        [  1.575  0  -0.0732 ]

        [ 1  0  0  0  0 ]
    C = [ 0  1  0  0  0 ]                                                               (E.7)
        [ 0  0  1  0  0 ]
where the aircraft states & control inputs are defined as:

    state  description             units     measured
    x1     altitude                m         yes
    x2     forward velocity        m/s       yes
    x3     pitch angle             degrees   yes
    x4     pitch rate, ẋ3          deg/s     no
    x5     vertical speed          m/s       no
    u1     spoiler angle           deg/10
    u2     forward acceleration    m/s^2
    u3     elevator angle          deg
Note that we only measure altitude, velocity and pitch angle; we do not measure the rate states.
A reasonable initial condition is

    x0 = [ 10  100  15  1  25 ]^T.
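Entering the model numerically makes such structural properties easy to check. The sketch below uses NumPy rather than the book's MATLAB setting; it confirms the integrating altitude state (the first column of A is zero, so A is singular) and builds the controllability matrix from first principles:

```python
import numpy as np

A = np.array([[0,  0,      1.132,   0,      -1],
              [0, -0.0538, -0.1712, 0,       0.0705],
              [0,  0,      0,       1,       0],
              [0,  0.0485, 0,      -0.8556, -1.013],
              [0, -0.2909, 0,       1.0532, -0.6859]])
B = np.array([[ 0,     0,  0],
              [-0.12,  1,  0],
              [ 0,     0,  0],
              [ 4.419, 0, -1.665],
              [ 1.575, 0, -0.0732]])

# the first column of A is zero: altitude integrates the other states,
# so the model has a pole at the origin
poles = np.linalg.eigvals(A)
print(min(abs(poles)) < 1e-10)          # -> True

# controllability matrix [B, AB, A^2 B, A^3 B, A^4 B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(5)])
print(np.linalg.matrix_rank(ctrb))
```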
Bibliography
[1] J. Abate and P.P. Valkó. Multi-precision Laplace transform inversion. International Journal for Numerical Methods in Engineering, 60:979–993, 2004. 32
[2] James L. Adams. Flying Buttresses, Entropy, and O-rings: The World of an Engineer. Harvard University Press, 1991. 4
[3] Advantech Co. Ltd. PC-Multilab Card User's Manual, 1992. 14
[4] Johan Åkesson. MPCtools: A toolbox for simulation of MPC controllers in Matlab. Technical report, Lund Institute of Technology, Lund, Sweden, January 2006. 478
[5] Brian D.O. Anderson and John B. Moore. Optimal Filtering. Information and System Sciences. Prentice–Hall, 1979. 452, 454
[6] Jim Anderson, Ton Backx, Joost Van Loon, and Myke King. Getting the most from Advanced Process Control. Chemical Engineering, pages 78–89, March 1994. 5
[7] Anon. Experience with the X-15 Adaptive Flight Control System. Technical Report NASA TN D-6208, NASA Flight Research Centre, Edwards, California, Washington, D.C., March 1971. 317
[8] K-E Årzén. Realisation of expert system based feedback control. PhD thesis, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1987. 148
[9] K. J. Åström. Computer Control of a Paper Machine – an Application of Linear Stochastic Control Theory. IBM Journal of Research and Development, 11(4):389–405, 1967. 322
[10] K. J. Åström. Maximum likelihood and prediction error methods. Automatica, 16:551–574, 1980. 280
[11] K. J. Åström. Ziegler–Nichols Auto-tuners. Technical Report LUTFD2/(TFRT3167)/01025/, Lund University, Lund, Sweden, 1982. 169, 173, 456
[12] K. J. Åström and B. Wittenmark. On Self Tuning Regulators. Automatica, 9:185–199, 1973. 322
[13] Karl J. Åström and Tore Hägglund. Advanced PID Control. ISA, Research Triangle Park, NC, USA, 2006. 132
[14] Karl-Johan Åström. Introduction to stochastic control theory. Academic Press, 1970. 322
[15] Karl-Johan Åström and Tore Hägglund. Automatic tuning of PID controllers. Instrument Society of America, Research Triangle Park, NC, 1988. 132, 141, 155, 166, 167, 321
[16] Karl-Johan Åström and Tore Hägglund. Revisiting the Ziegler-Nichols step response method for PID control. Journal of Process Control, 14:635–650, 2004. 152
[17] Karl-Johan Åström and Björn Wittenmark. Adaptive Control. Addison–Wesley, 1989. 5, 306, 308, 491
[18] Karl-Johan Åström and Björn Wittenmark. Adaptive Control. Addison–Wesley, 2 edition, 1995. 243, 328, 348
[19] Karl-Johan Åström and Björn Wittenmark. Computer-Controlled Systems: Theory and Design. Prentice–Hall, 3 edition, 1997. 61, 141, 357, 367
[20] K.J. Åström and R.D. Bell. Drum-boiler dynamics. Automatica, 36(3):363–378, 2000. 181
[21] K.J. Åström and B. Wittenmark. Adaptive Control. Addison–Wesley, 1989. 326
[22] K.J. Åström and B. Wittenmark. Computer Controlled Systems: Theory and Design. Prentice–Hall, 2 edition, 1990. 310, 341, 344, 352
[23] Yonathan Bard. Nonlinear parameter estimation. Academic Press, 1974. 276
[24] R.A. Bartlett, A. Wächter, and L. T. Biegler. Active set vs. interior point strategies for model predictive control. In Proceedings of the American Control Conference, pages 4229–4233, Chicago, Illinois, USA, 2000. 467
[25] B. Bellingham and F.P. Lees. The detection of malfunction using a process control computer: A Kalman filtering technique for general control loops. Trans. IChemE, 55:253–265, 1977. 451
[26] Alberto Bemporad, Manfred Morari, and N. Lawrence Ricker. Model Predictive Control Toolbox. The MathWorks Inc., 2006. 464
[27] Alberto Bemporad, Manfred Morari, and N. Lawrence Ricker. Model predictive control toolbox 3. Technical report, The Mathworks, 2009. 467, 478
[28] B. Wayne Bequette. Nonlinear control of chemical processes: A review. Ind. Eng. Chem. Res., 30:1391–1413, 1991. 380
[29] A. Besharati Rad, Wai Lun Lo, and K.M. Tsang. Self-tuning PID controller using Newton-Raphson search method. Industrial Electronics, IEEE Transactions on, 44(5):717–725, Oct 1997. 155, 256
[30] Torsten Bohlin. A grey-box process identification tool: Theory and practice. Technical Report IR-S3-REG-0103, Department of Signals, Systems & Sensors, Royal Institute of Technology, SE-100 44 Stockholm, Sweden, August 2001. 86
[31] C. Bohn and D.P. Atherton. An Analysis Package Comparing PID Anti-Windup Strategies. IEEE Control Systems, 15:34–40, 1995. 141
[32] G.E.P. Box and G.M. Jenkins. Time Series Analysis: Forecasting and Control. Holden–Day, 1970. 236, 243, 307, 308
[33] M. Braun. Differential Equations and their Applications. Springer–Verlag, 1975. 86, 120
[34] John W. Brewer. Kronecker products and matrix calculus in system theory. IEEE Transactions on Circuits and Systems, 25(9):772–781, September 1978. 80, 81
[35] E.H. Bristol. On a new measure of interaction for multivariable process control. IEEE Transactions on Automatic Control, AC-11(133), 1966. 103
[36] Jens Trampe Broch. Principles of Experimental Frequency Analysis. Elsevier Applied Science, London & New York, 1990. 220
[37] Robert Grover Brown and Patrick Y.C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, 2 edition, 1992. 431
[38] Arthur E. Bryson and Yu-Chi Ho. Applied Optimal Control. Ginn & Co., Waltham, Mass., 1969. 399
[39] James L. Buchanan and Peter R. Turner. Numerical Methods and Analysis. McGraw–Hill, 1992. 120
[40] F. Buchholt and M. Kümmel. A Multivariable Self-tuning Regulator to Control a Double Effect Evaporator. Automatica, 17(5):737–743, 1981. 94
[41] Richard L. Burden and J. Douglas Faires. Numerical Analysis. PWS Publishing, 5 edition, 1993. 120
[42] C. Sidney Burrus, James H. McClellan, Alan V. Oppenheim, Thomas W. Parks, Ronald W. Schafer, and Hans W. Schuessler. Computer-Based Exercises for Signal Processing using Matlab. Prentice–Hall, 1994. 17, 204
[43] B. Carnahan and J.O. Wilkes. Numerical solution of differential equations – an overview. In R. Mah and W.O. Seider, editors, FOCAPD '80, Henniker, New Hampshire, 1980. 86
[44] C.C. Hang, K. J. Åström and Q.G. Wang. Relay feedback auto-tuning of process controllers – A tutorial overview. J. Process Control, 12:143–162, 2002. 177
[45] B. Chachuat. Nonlinear and dynamic optimization: From theory to practice. Technical report, Automatic Control Laboratory, EPFL, Switzerland, 2007. Available from infoscience.epfl.ch/record/111939/files/. 399, 407
[46] Edward R. Champion. Numerical Methods for Engineering Applications. Marcel Dekker, Inc., 1993. 120
[47] Cheng-Liang Chen. A simple method of on-line identification and controller tuning. 35(12):2037–2039, 1989. 159
[48] Cheng-Liang Chen. A closed loop reaction curve method for controller tuning. Chemical Engineering Communications, 104:87–100, 1991. 159, 166
[49] Robert Chote. Why the Chancellor is always wrong. New Scientist, page 26, 31 October 1992. 91
[50] C.K. Chui and G. Chen. Kalman Filtering with Real-Time Applications. Springer–Verlag, 1987. 431, 452, 456
[51] C.K. Chui and G. Chen. Linear Systems and Optimal Control. Springer–Verlag, 1989. 399, 400, 408
[52] David Clarke. Advances in Model-Based Predictive Control. Oxford University Press, 1994. 184, 318, 461
[53] David W. Clarke. PID Algorithms and their Computer Implementation. Trans. Inst. Measurement and Control, 6(6):305–316, Oct-Dec 1984. 132
[54] J.D. Cryer. Time series analysis. PWS, 1986. 243
[55] Jonathan Currie and David I. Wilson. A Model Predictive Control toolbox intended for rapid prototyping. In Tim Molteno, editor, 16th Electronics New Zealand Conference (ENZCon 2009), pages 7–12, Dunedin, New Zealand, 18–20 November 2009. 474, 478
[56] Jonathan Currie and David I. Wilson. Lightweight Model Predictive Control intended for embedded applications. In 9th International Symposium on Dynamics and Control of Process Systems (DYCOPS), pages 264–269, Leuven, Belgium, 5–7 July 2010. 474
[57] James B. Dabney and Thomas L. Harman. Mastering Simulink 4. Prentice–Hall, 2001. 395
[58] D.T. Dalle-Molle, T. Edgar, and B.J. Kuipers. Qualitative modelling of physical systems. In 3rd Int. Symposium on Process Systems Engineering, pages 169–174, August 1988. 492
[59] Howard Demuth and Mark Beale. Neural Network Toolbox User's Guide. The MathWorks Inc., June 1992. 494
[60] J.J. DiStefano, A.R. Stubberud, and I.J. Williams. Feedback and Control Systems. Schaum's Outline Series, McGraw-Hill, 1990. 71
[61] Peter Dorato, Chaouki Abdallah, and Vito Cerone. Linear-quadratic Control: An Introduction. Prentice–Hall, 1995. 411, 457, 516
[62] Richard Dorf. Modern control systems. Addison–Wesley, 5 edition, 1989. 65
[63] M. Drouin, H. Abou-Kandil, and M. Mariton. Control of complex systems. Methods and technology. Plenum Press, 223 Spring St, New York NY 10013, 1 edition, 1991. 5, 399
[64] D.B. Ender. Control analysis and optimisation. Technical report, Instrument Soc. America, Techmation, Tempe, Arizona 85282, 1990. 149
[65] D. Grant Fisher and Dale E. Seborg. Multivariable Computer Control. North-Holland, 1976. 94
[66] Samuel C. Florman. The Existential Pleasures of Engineering. St. Martin's Press, New York, 1976. 4
[67] T. R. Fortescue, L. S. Kershenbaum, and B. E. Ydstie. Implementation of self-tuning regulators with variable forgetting factors. Automatica, 17(6):831–835, 1981. 295
[68] Lloyd D. Fosdick, Elizabeth R. Jessup, and Carolyn J.C. Schauble. Elements of Matlab. High Performance Scientific Computing, University of Colorado, January 1995. Available from cs.colorado.edu/pub/HPSC. 3, 4
[69] A. S. Foss. Critique of chemical process control theory. American Inst. Chemical Engineers J, 19(2):209–214, 1973. 45
[70] G.F. Franklin and J.D. Powell. Digital control of dynamic systems, pages 131–183. Addison–Wesley, 1980. 42, 68, 367
[71] G.F. Franklin, J.D. Powell, and M.L. Workman. Digital Control of Dynamic Systems. Addison–Wesley, 3 edition, 1998. 276
[72] C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory and practice – A survey. Automatica, 25(3):335–348, 1989. 461
[73] H.P. Geering. Introduction to Fuzzy Control. Technical Report IMRT-Bericht Nr. 24, Eidgenössische Technische Hochschule, February 1992. 491
[74] Paul Geladi and Bruce R. Kowalski. Partial Least-Squares Regression: A Tutorial. Analytica Chimica Acta, 185:1–17, 1986. 111
[75] S.F. Goldmann and R.W.H. Sargent. Applications of linear estimation theory to chemical processes: A feasibility study. Chemical Engineering Science, 26:1535–1553, 1971. 441
[76] Graham C. Goodwin, Stefan F. Graebe, and Mario E. Salgado. Control System Design. Prentice–Hall, 2001. 125
[77] Felix Gross, Dag Ravemark, Peter Terwiesch, and David Wilson. The Dynamics of Chocolate in Beer: The Kinetic Behaviour of theobroma cacao Paste in a CH3CH2OH–H2O–CO2 Solution. Journal of Irreproducible Results, 37(4):24, 1992. 237
[78] Tore K. Gustafsson and Pertti M. Mäkilä. L1 Identification Toolbox for Matlab. Åbo Akademi, Finland, August 1994. From ftp.abo.fi/pub/rt/l1idtools. 482
[79] I. Gustavsson. Survey of Applications of Identification in Chemical and Physical Processes. Automatica, 11:3–24, 1975. 236
[80] Juergen Hahn, Thomas Edison, and Thomas F. Edgar. A note on stability analysis using Bode plots. Chemical Engineering Education, 35(3):208–211, 2001. 72
[81] F. Hayes-Roth. Building Expert Systems. Addison–Wesley, Reading, Massachusetts, 1983. 488
[82] A. Helbig, W. Marquardt, and F. Allgower. Nonlinearity measures: definition, computation and applications. Journal of Process Control, pages 113–123, 2000. 129
[83] M.A. Henson and D.E. Seborg. Input-output Linearization of General Nonlinear Processes. American Inst. Chemical Engineers J, 36(11):1753–1757, 1990. 122, 380
[84] Michael A. Henson. Nonlinear model predictive control: current status and future directions. Computers in Chemical Engineering, 23(2):187–202, 1998. 463
[85] Michael A. Henson and Dale E. Seborg. Nonlinear Process Control. Prentice Hall, Saddle River, New Jersey, 1997. 92
[86] David R. Hill. Experiments in Computational Matrix Algebra. The Random House, 1988. 54
[87] David M. Himmelblau. Process Analysis by Statistical Methods. John Wiley & Sons, 1970. 117, 259
[88] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991. 80
[89] Morten Hovd and Sigurd Skogestad. Pairing Criteria for Unstable Plants. In AIChE Annual Meeting, page Paper 149i, St. Louis, Nov 1993. 351
[90] P. J. Huber. Robust regression: Asymptotics, conjectures, and Monte Carlo. Ann. Math. Statist., 1(5):799–821, 1973. 280
[91] Enso Ikonen and Kaddour Najim. Advanced Process Identification and Control. Marcel Dekker, 2002. 265
[92] Vinay K. Ingle and John G. Proakis. Digital Signal Processing using Matlab V.4. PWS Publishing Company, 1997. 204
[93] Rolf Isermann. Parameter adaptive control algorithms – A tutorial. Automatica, 18(5):513–528, 1982. 308
[94] M.L. James, G. M. Smith, and J. C. Wolford. Applied Numerical Methods for Digital Computation with Fortran and CSMP. Harper & Row, 2 edition, 1977. 120
[95] R.K. Jaspan and J. Coull. Trajectory Optimization Techniques in Chemical Reaction Engineering II: Comparison of Methods. American Inst. Chemical Engineers J, 18(4):867–869, July 1972. 402
[96] Andrew H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 111 Fifth Avenue, New York, 1970. 431, 452
[97] A. Johnson. LQG Applications in the Process Industries. Chemical Engineering Science, 48(16):2829–2838, 1993. 399
[98] Michael A. Johnson and Mohammad H. Moradi. PID Control: New Identification and Design Methods. Springer–Verlag, London, UK, 2005. 132, 134
[99] Arthur Jutan and E. S. Rodriguez II. Extension of a New Method for On-Line Controller Tuning. The Canadian Journal of Chemical Engineering, 62:802–807, December 1984. 159
[100] Thomas Kailath. Linear Systems. Prentice–Hall, 1980. 399
[101] Paul G. Kaminski, Arthur E. Bryson, and Stanley F. Schmidt. Discrete Square Root Filtering: A survey of current techniques. IEEE Trans. Automatic Control, AC-16:727–735, 1971. 452
[102] Solomon S. Katz. Emulating the PROSPECTOR Expert System with a Raster GIS. Computers & Geosciences, 17(7):1033–1050, 1991. 488
[103] L.H. Keel and S.P. Bhattacharyya. Robust, fragile, or optimal? Automatic Control, IEEE Transactions on, 42(8):1098–1105, Aug 1997. 476
[104] David Kincaid and Ward Cheney. Numerical Analysis. Mathematics of Scientific Computing. Brooks/Cole, 1991. 120, 497
[105] Costas Kravaris and Jeffery C. Kantor. Geometric Methods for Nonlinear Process Control. 1 Background. Ind. Eng. Chem. Res., 29:2295–2310, 1990. 380
[106] Costas Kravaris and Jeffery C. Kantor. Geometric Methods for Nonlinear Process Control. 2 Controller Synthesis. Ind. Eng. Chem. Res., 29:2310–2323, 1990. 380
[107] Erwin Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, 7 edition, 1993. 220
[108] B. Kristiansson and B. Lennartson. Robust and optimal tuning of PI and PID controllers. IEE Proc.-Control Theory Applications, 149(1):17–25, January 2002. 190
[109] R. Kulhavý and M.B. Zarrop. On a general concept of forgetting. Int. J. Control, 3(58):905–924, 1993. 311
[110] Benjamin C. Kuo. Automatic Control Systems. Prentice–Hall, 6 edition, 1991. 40
[111] Benjamin C. Kuo. Automatic Control Systems. Prentice–Hall, 7 edition, 1995. 427, 428
[112] I. D. Landau. Identification in closed loop: a powerful design tool (better design models, simpler controllers). Control Engineering Practice, 9:51–65, 2001. 308
[113] Yoan D. Landau. Adaptive Control: The model reference approach. Marcel Dekker, 1979. 297
[114] Leon Lapidus and John H. Seinfeld. Numerical Solution of Ordinary Differential Equations. Academic Press, 1971. 86
[115] Alan J. Laub. Matrix Analysis for Scientists and Engineers. Society for Industrial and Applied Mathematics, 2004. 80
[116] P.L. Lee and G.R. Sullivan. Generic Model Control (GMC). Computers in Chemical Engineering, 12(6):573–580, 1988. 371
[117] T.H. Lee, Q.G. Wang, and K.K. Tan. Knowledge-based process identification from relay feedback. Journal of Process Control, 5(6):387–397, 1995. 177, 179
[118] S. Lees and J.O. Hougen. Determination of pneumatic controller characteristics by frequency response. Ind. Eng. Chem., 48:1064, 1956. 247
[119] J. R. Leigh. Control Theory: A guided tour. IEE Control Series 45. Peter Peregrinus Ltd., 1992. 73
[120] A. Leva, C. Cox, and A. Ruano. Hands-on PID autotuning: a guide to better utilisation. In Marek Zaremba, editor, IFAC Professional Brief, chapter 3. IFAC, 2002. Available from www.ifac-control.org/publications/pbriefs/PB_Final_LevaCoxRuano.pdf. 240
[121] Frank L. Lewis. Optimal Estimation. John Wiley & Sons, 1986. 282, 437, 452
[122] Frank L. Lewis and Vassilis L. Syrmos. Optimal Control. John Wiley & Sons, 2 edition, 1995. 457
[123] R.J. Litchfield, K.S. Campbell, and A. Locke. The application of several Kalman filters to the control of a real chemical reactor. Trans. IChemE, 57(2):113–120, April 1979. 375, 376
[124] Lennart Ljung. System Identification: Theory for the User. Prentice–Hall, 1987. 236, 244, 270, 280, 310
[125] Lennart Ljung. System Identification Toolbox, for use with Matlab. The MathWorks Inc., May 1991. 236, 305, 309
[126] Lennart Ljung. System Identification: Theory for the User. Prentice–Hall, 2 edition, 1999. 277
[127] Lennart Ljung and Torkel Glad. Modeling of Dynamic Systems. Prentice–Hall, 1994. 236
[128] Charles F. Van Loan. The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics, 123:85–100, 2000. 80
[129] W. L. Luyben. Process Modeling, Simulation, and Control for Chemical Engineers. McGraw-Hill, 2 edition, 1990. 86
[130] W.L. Luyben. Process modelling simulation and control for chemical engineers. McGraw-Hill, 1973. 98
[131] Paul A. Lynn and Wolfgang Fuerst. Introductory Digital Signal Processing with Computer Applications. John Wiley & Sons, 2 edition, 1994. 208, 220
[132] J. M. Maciejowski. Predictive Control with Constraints. Prentice–Hall, 2002. 477
[133] Sven Erik Mattsson. On Modelling and Differential/Algebraic Systems. Simulation, pages 24–32, January 1989. 129
[134] Cleve Moler and John Little. Matlab User's Guide. The MathWorks Inc., December 1993. 1
[135] B. De Moor, P. De Gersem, B. De Schutter, and W. Favoreel. DAISY: A database for identification of systems. Journal A, 38(3):45, 1997. 312
[136] Manfred Morari. Three critiques of process control revisited a decade later. In D.M. Prett, C.E. García, and B.L. Ramaker, editors, The Shell Process control workshop, pages 309–321. Butterworths, 1987. 45, 360
[137] Manfred Morari. Process control theory: Reflections on the past and goals for the next decade. In D.M. Prett, C.E. García, and B.L. Ramaker, editors, The second Shell Process control workshop, pages 469–488. Butterworths, December 1988. 95
[138] Manfred Morari. Some Control Problems in the Process Industries. In H.L. Trentelman and J.C. Willems, editors, Essays on Control: Perspectives in the Theory and its applications. Birkhäuser, 1993. 141
[139] Manfred Morari. Model Predictive Control: Multivariable Control Technique of Choice in the 1990s. In David Clarke, editor, Advances in Model-Based Predictive Control, pages 22–37. Oxford University Press, 1994. 45, 221
[140] Manfred Morari and Evanghelos Zafiriou. Robust Process Control. Prentice–Hall, 1989. 101, 122
[141] Jerzy Moscinski and Zbigniew Ogonowski. Advanced Control with Matlab and Simulink. Ellis Horwood, 1995. 247, 494
[142] Frank Moss and Kurt Wiesenfeld. The benefits of Background Noise. Scientific American, pages 50–53, August 1995. See also Amateur Scientist in the same issue. 195
[143] Ivan Nagy. Introduction to Chemical Process Instrumentation. Elsevier, 1992. 6
[144] Ioan Nascu and Robin De Keyser. A novel application of relay feedback for PID auto-tuning. In IFAC CSD'03 Conference on Control Systems Design, Bratislava, Slovak Republic, 2003. 177
[145] R.B. Newell and P.L. Lee. Applied Process Control – A Case Study. Prentice–Hall, 1989. 94, 95, 420, 429, 446, 491, 515
[146] R.E. Nieman and D.G. Fisher. Experimental Evaluation of Optimal Multivariable Servo-control in Conjunction with Conventional Regulatory Control. Chemical Engineering Communications, 1, 1973. 478
[147] R.E. Nieman, D.G. Fisher, and D.E. Seborg. A review of process identification and parameter estimation techniques. Int. J. of Control, 13(2):209–264, 1971. 236
[148] Katsuhiko Ogata. Discrete Time Control Systems. Prentice–Hall, 1987. 15, 24, 26, 28, 31, 36, 38, 49, 54, 55, 62, 74, 75, 78, 83, 108, 231, 310, 338, 339, 352, 353, 355, 365, 367, 368, 410, 443
[149] Katsuhiko Ogata. Discrete-Time Control Systems. Prentice–Hall, 1987. 27
[150] Katsuhiko Ogata. Modern Control Engineering. Prentice–Hall, 2 edition, 1990. 71, 76, 78, 80, 109, 151, 168, 352, 353, 355, 389
[151] B. A. Ogunnaike, J. P. Lemaire, M. Morari, and W. H. Ray. Advanced multivariable control of a pilot-plant distillation column. 29(4):632–640, July 1983. 97
[152] Alan V. Oppenheim and Ronald W. Schafer. Digital Signal Processing. Prentice–Hall, 1975. 207
[153] Overschee and De Moor. Subspace Identification. Kluwer Academic Publishers, 1996. 276
[154] G.F. Page, J.B. Gomm, and D. Williams. Applications of Neural Networks to Modelling and Control. Chapman & Hall, 1993. 494
[155] Chris C. Paige. Properties of Numerical Algorithms Related to Computing Controllability. IEEE Transactions on Automatic Control, AC-26(1):130–138, 1981. 355
[156] Sudharkar Madhavrao Pandit and Shien-Ming Wu. Time Series and System Analysis with Applications. Wiley, 1983. 307
[157] Rajni V. Patel, Alan J. Laub, and Paul M. van Dooren. Numerical Linear Algebra Techniques for Systems and Control. IEEE Press, 1994. A selected reprint volume. 62, 355, 356
[158] Linda Petzold. Differential/Algebraic Equations are not ODEs. SIAM J. Sci. Stat. Comput., 3(3):367–384, 1982. 123
[159] C.L. Phillips and H.T. Nagle. Digital Control System Analysis and Design. Prentice–Hall, 2 edition, 1990. 200
[160] Jack W. Ponton. Neural Networks: Some Questions and Answers. J. Process Control, 2(3):163–165, 1992. 502
[161] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 1986. 110, 220, 221, 229, 401
[162] S. Joe Qin and Thomas A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11(7):733–764, July 2003. 461, 478
[163] W. Fred Ramirez. Process Control and Identification. Academic Press, 1994. 399, 407, 411
[164] C. V. Rao, S. J. Wright, and J. B. Rawlings. Application of interior-point methods to model predictive control. Journal of Optimization Theory and Applications, 99(3):723–757, 1998. 467
[165] T. Rautert and E.W. Sachs. Computational design of optimal output feedback controllers. Technical Report Nr. 95-12, Universität Trier, FB IV, Germany, June 1995. 457
[166] James B. Rawlings. Tutorial overview of Model Predictive Control. IEEE Control Systems Magazine, 20(3):38–52, June 2000. 461
[167] W. Harmon Ray and Julian Szekely. Process Optimization. John Wiley & Sons, 1973. 399, 402
[168] W.H. Ray. Advanced Process Control. McGraw–Hill, NY, 1 edition, 1981. 399
[169] John R. Rice. Numerical Methods, Software, and Analysis. McGraw–Hill, 1983. 120
[170] C. Magnus Rimvall and Christopher P. Jobling. Computer Aided Control System Design. In W.S. Levine, editor, The Control Handbook, chapter 23, pages 429–442. CRC Press, 1995. 2
[171] William H. Roadstrum and Dan H. Wolaver. Electrical Engineering for all Engineers. John Wiley & Sons, 2 edition, 1994. 13
[172] E.R. Robinson. Time Dependent Chemical Processes. Applied Science Publishers, 1975. 399
[173] J. A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC Press, 2003. 477
[174] Hadi Saadat. Computational Aids in Control Systems using Matlab. McGraw–Hill, 1993. 51
[175] Robert Schoenfeld. The Chemist's English. VCH Verlagsgesellschaft mbH, D-6940 Weinheim, Germany, 2 edition, 1986. 220
[176] J. Schoukens and R. Pintelon. Identification of Linear Systems. Pergamon, 1991. 110
[177] Tobias Schweickhardt and F. Allgower. Linear control of nonlinear systems based on nonlinearity measures. Journal of Process Control, 17:273–284, 2007. 129
[178] Dale E. Seborg, Thomas F. Edgar, and Duncan A. Mellichamp. Process Dynamics and Control. Wiley, 2 edition, 2005. 152, 155
[179] D.E. Seborg, T.F. Edgar, and D.A. Mellichamp. Process Dynamics and Control. Wiley, 1989. 15, 31, 104, 105, 138, 151, 153, 254
[180] D.E. Seborg, T.F. Edgar, and S.L. Shah. Adaptive Control Strategies for Process Control: A survey. American Inst. Chemical Engineers J, 32(6):881–913, 1986. 318
[181] Warren D. Seider, J.D. Seader, and Daniel R. Lewin. Process Design Principles: Synthesis, Analysis and Evaluation. John Wiley & Sons, 1999. 105
[182] Bahram Shahian and Michael Hassul. Control System Design Using Matrixx. Prentice–Hall, 1992. 67
[183] Kermit Sigmon. Matlab primer, 3rd edition. Dept. of Mathematics, University of Florida, 1993. 3, 4
[184] Jonas Sjöberg, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, Pierre-Yves Glorennec, Håkan Hjalmarsson, and Anatoli Juditsky. Nonlinear black-box modeling in system identification: a unified overview. Automatica, 31(12):1691–1724, 1995. Trends in System Identification. 238, 280
[185] S. Skogestad. Dynamics and Control of Distillation Columns – A Critical Survey. In DYCORD+, pages 11–35. Int. Federation of Automatic Control, 1992. 95
[186] Sigurd Skogestad. Simple analytic rules for model reduction and PID controller tuning. Journal of Process Control, 13(4):291–309, 2003. 134, 153
[187] Jean-Jacques E. Slotine and Weiping Li. Applied Nonlinear Control. Prentice–Hall, 1991. 74, 75, 380
[188] T. Söderström and P. Stoica. System Identification. Prentice–Hall, 1989. 236, 310
[189] Harold W. Sorenson, editor. Kalman filtering: Theory and Application. Selected reprint series. IEEE Press, 1985. 431
[190] Henry Stark and John W. Woods. Probability, Random Processes, and Estimation Theory for Engineers. Prentice–Hall, 2 edition, 1994. 432
[191] George Stephanopoulos. Chemical Process Control: An introduction to Theory and Practice. Prentice–Hall, 1984. 105, 145
[192] Dominick Stephens. The Reserve Bank's Forecasting and Policy System. Technical report, Economics Department, The Reserve Bank of New Zealand, Wellington, NZ, 2004. 462
[193] O. Taiwo. Comparison of four methods of on-line identification and controller tuning. IEE Proceedings-D, 140(5):323–327, 1993. 159, 166
[194] Les Thede. Analog and Digital Filter Design using C. Prentice–Hall, 1996. 204
[195] J. Villadsen and M.L. Michelsen. Solution of differential equation models by polynomial approximation. Prentice–Hall, 1978. 86
[196] Stanley M. Walas. Modeling with Differential Equations in Chemical Engineering. Butterworth–Heinemann, 1991. 86, 123, 401
[197] Liuping Wang. Model Predictive Control System Design and Implementation using Matlab. Springer, 2009. 477
[198] Liuping Wang and William R. Cluett. From Plant Data to Process Control. Taylor and Francis, 11 New Fetter Lane, London, EC4P 4EE, 2000. 259, 260
[199] Ya-Gang Wang, Zhi-Gang Shi, and Wen-Jian Cai. PID autotuner and its application in HVAC systems. In Proceedings of the American Control Conference, pages 2192–2196, Arlington, VA, 25–27 June 2001. 177
[200] P.E. Wellstead and M.B. Zarrop. Self-tuning Systems: Control and Signal Processing. John Wiley & Sons, 1991. 245, 295, 300, 302, 308, 310, 312, 314, 322, 452
[201] David I. Wilson. Introduction to Numerical Analysis with Matlab or what's a NaN and why do I care? Auckland University of Technology, Auckland, New Zealand, July 2007. 465pp. 3, 120, 197, 205, 280, 391, 478, 482
[202] R. K. Wood and M.W. Berry. Terminal Composition Control of a Binary Distillation Column. Chemical Engineering Science, 28:1707, 1973. 95
[203] Minta Yuwana and Dale E. Seborg. A New Method of On-line Controller Tuning. American Inst. Chemical Engineers J, 28(3):434–440, May 1982. 159
[204] J.G. Ziegler and N.B. Nichols. Optimum Settings for Automatic Controllers. Trans. ASME, 64:759–768, 1942. 151
Index
⊗, see Kronecker product
A/D, 13
Ackermann, 361, 364
adaptive control
applications, 317
classification, 319
general scheme, 318
suitable, 318
affine, 380
AIC, see Akaike Information Criteria
Akaike Information Criteria, 277
algebraic Riccati equation, 416
are, 416
solving, 415, 417
alias, 17
detecting, 18
ARMAX model, 263
arms race, 314
ARX model, 262
ATV tuning method, 167
auto-tuners, 321
autonomous systems, 46
back propagation, 499
balance arm, 7, 138
Bernoulli's equation, 89
bilinear transform, 38
black-box, 6, 175
Bode
diagram, 40, 72
Bristol, 103
Bromwich integral, 31
bursting, 335
bvp4c, 402, 405
c2d.m, 61
canon, 55, 353
canonical form, 50, 55
controllable, 48
diagonal, 49
Jordan, 49
observable, 48
Cauchy form, 47
Cayley-Hamilton theorem, 361
ceiling function, 204
Chebyshev
design, 206
Chebyshev filter, see filter
Cholesky decomposition, 77, 80, 452
Cholesky matrix square root, 453
closed loop, 46
coloured noise, 300
computer design aids, 3
confidence ellipses, 118
confidence intervals, 116
controllability, 352
computing, 355
covariance
wind-up, 295
covariance matrix, 117, 284, 290, 433
reset, 292
cover-up rule, 26
curve fitting, 499
D/A, 13
DAEs, 123
dare, 443
dead time compensation, 185
deadbeat control, 364
deadtime, 277
difference equations, 20
differentiation
noise, 196
digital filter, 212
aliasing, 17
diophantine
convmtx, 328
Diophantine equation, 326
Dirac delta function, 22
direct method, see Lyapunov
discrete algebraic Riccati equation, 443
distillation
interaction, 102
model, 98
simulation, 101
dlqe, 443
DMC, see dynamic matrix control
double integrator, 88
DSP, 209
dynamic matrix control, 468
etfe, 253
Euler
relations, 222
exact feedback linearisation, 380
cautions, 382
design procedure, 382
exhaustive search, 394
expert systems, 487
languages, 488
process control, 490
extended Kalman filter, 454
FFT, see Fourier transform, 246
fft, 228
filter, 193, 209
bi-quad, 217
Butterworth, 201
characteristics, 204
cascade, 199
Chebyshev, 205
DFI, 215
DFII, 216, 219
digital, 212
elliptic, 201, 208
hardware, 215
ideal, 198
minimum order, 204
prototype, 200
quantisation, 218
SOS, 217
final value theorem, 24
finite differences, 20
Euler, 20
first-order plus deadtime, 240
fixed-position, 459
flapper, 8
FOPDT, see first-order plus deadtime
forgetting factor, 291, 292, 324
influence, 294
Fourier transform, 219, 250
definitions, 221
fast, 225
slow, 221
frequency pre-warping, 213
function approximation, 220
gain scheduling, 319
Gauss, 107
Gilbert's criteria, 353
Hammerstein models, 280
Hankel matrix, 269, 470
Hadamard product, ⊙, 104
helicopter, 8
hidden layers, 498
identification, 235
black and white, 237
closed loop, 307
continuous models, 254
frequency response, 250
frequency testing, 246
graphical, 239, 254
model, 236
noise models, 300
nonlinear, 280
offline, 239
online, 280
parameters, 236
PRBS, 244
real-time, 238
state-space, 274
subspace, 276
step response, 240
system, 236
time, 239
two-dimensional, 114
initial value theorem, 24
integral states, 427
integral-only control, 135
inter-sample ripple, 364
interaction, 102
Internal Model Control, 153
inverse response, 181
invfreqs, 246
Jordan form, 61, 62
Jury test, 71
Kaczmarz, 306
Kalman filter, 438, 442
algorithm, 441
current estimation type, 444
extended, 454
prediction type, 443
steady-state, 442
Kronecker product, 77, 80, 417, 438
Kronecker tensor product, 77
Laguerre functions, 259
Laplace transform
inverting
numerically, 31
least-squares, 107
improvements, 110
nonlinear, 112
partial, 111
recursive, 281
recursive estimation, 284
solution, 109
level control, 88
Liapunov, see Lyapunov
Lie algebra, 380
limit oscillation, 168
linear program, 478
linear quadratic regulator, 409
continuous, 409
discrete, 418
steady-state, 415
time varying, 411
linearise, 125
finite difference, 127
linmod, 127
tank model, 127
Lissajous, 248
load disturbance, 158
loss function, see performance index
LQR, see linear quadratic regulator
LTI object, 36
Lyapunov, 74, 76, 458
continuous stability, 76
discrete, 438
discrete stability, 76
function, 75
matrix equation, 77
solving, 80
Maple, 3, 511
mapping, 21
Matlab, 2
Matlab alternatives
Octave, 3
SciLab, 3
matrix exponential, 58
maximum likelihood, 280
Maxwell, 1
measurement noise, 116
Method of Areas, 240
micro-controllers, 13
minimum variance control, 323
dead-time, 341
MIT rule, 297
model
CSTR, 92
distillation, 95
high purity, 101
Wood-Berry, 95
double integrator, 88
evaporator, 94
pendulum, 86
stirred heated tank, 90
validation, 276
model predictive control, 461
model reference control
GMC, 374
modelling, 85, 86
cautions, 91
evaporators, 94
extrapolation, 91
models
discrete, 260
linear, 237
MPC, see model predictive control
algorithm, 463
horizons, 467
Toolbox, 464
toolbox, 478
MuPad, 511
Nader, Ralph, 4
Nelder-Mead, 113
neural networks, 493
activation function, 495
architecture, 495
neurons, 495
toolbox, 499
training, 498
Newell-Lee evaporator, see modelling
Newton-Raphson, 157
Nilpotent matrix, 364
noise
industrial, 194
models, 300
non-minimum phase, 181, 336
nonlinear measure, 128
nonlinear models, 127, 280
nonlinearities
types, 125
Nyquist
diagram, 40, 41
frequency, 16, 17, 204, 210, 212
dimensionless, 17
Nyquist diagram, 72
observability, 352
OCP, 401
ODEs, 120
numerical solutions, 120
two-point boundary value problems, 123
OE model, see output-error model
Opti toolbox, 112, 113
optimal control, 387
generic problem, 397
Hamiltonian, 400
linear programming, 478
PID tuning, 388
Pontryagin, 399
TPBVP, 401
optimal control problem, 399
optimal prediction, 341
optimisation, 113, 297, 387
cost function, 388
nonlinear, 112
Opti, 112
parametric, 388
output optimal control, 457
output variance, 5
output-error model, 264
packed matrix notation, 54
Padé approximation, 38
parameter optimisation, 296
parametric adaption, 298
Parseval, 221
partial fractions, 26, 27
Matlab, 27
partial least-squares, 111
PDEs, 120
pendulum, 86
simulation, 120
penicillin, 384
performance index, 107
IAE, 388
periodogram, 227
persistent excitation, 243, 308
PI-D, 138
PID, 131
cascade, 134
classical, 132
discrete, 143
parallel, 132, 134
position form, 145
programming, 143
standard, 133
textbook, 132
tuning, 150
pidtool, 153
pidtune, 153
closed loop, 155
Ziegler-Nichols, 155
velocity form, 145
pole excess, see time delay
pole-placement, 325, 364
polyfit, 110
polynomial
addition, 321
multiplication, 321
polynomial controller design, 321
poorly damped, 336
positive definite, 410
check, 452
lost, 452
power spectral density, 226, 228
process control
economics, 4
psd, see power spectral density
pseudo-inverse, 111
quadratic form, 410
quadratic program, 467
quantisation error, 14
random inputs, 243
receding horizon control, 461
recurrent networks, 498
recursive
mean, 282
variance, 282
recursive estimation
extended, 301
SI toolbox, 305
simplified, 306
Kaczmarz, 306
starting, 285
recursive extended least-squares, 302
recursive least squares, see least squares
regression vector, 266
relative gain array, 103
dynamic, 105
relative order, 381
relay, 167, 175
autotuning, 170
example, 170
relay feedback, 166
noise, 173
RELS, see recursive estimation
rels, 302
remembering factor, see forgetting factor
residue, 26
RGA, see relative gain array
Riccati equation
algebraic, 416
computing solution, 419
differential, 411
time varying, 417
ringing, 336
ripple, 205
RLS, 281
robust fitting, 280
Routh table, 71
safety, 4
sample and hold
zoh, 36
sample time selection, 14
sampling, 14
sampling theorem, 16
alias, 16
SattControl, ECA400, 167
self tuning controllers
relay feedback, 166
self tuning regulators, 322
problems, 346
sensitivity function, 188
shift register, 289
Simulink
real-time, 10
single precision, 219
singular value decomposition, 110
SISO, single input/single output, 43
Sylvester's criteria, 79, 80
Smith predictor, 185
Simulink, 186
smoothing
fourier, 228
SOS, 217
spectral analysis, 224
experimental, 246
spectral plot, 226
spline, 197
stability, 69
BIBO, 70
closed loop, 72
continuous time, 70
denition, 69
discrete time, 73
Jury test, 71
Lyapunov, 74
nonlinear, 74
Routh array, 70
unit circle, 73
stacking, 81
state, 46
state reconstruction, 356
state space, 46
conversions, 50
denitions, 46
pole-placement, 359
state-space
conversions, 47
static networks, 498
statistic, t, 116
Statistics toolbox, 3
steady-state models, 86
steepest descent, 297
step tests, 239
Stixbox, 3
stochastic processes, 437
Sylvester equation, 77, 81
Sylvester matrix, 327
symbolic algebra, 3
Maple, 3
MuPad, 3
symbols, 505
System Identification toolbox, 302
system type, 427
tapped delay, 215
target set, 406
Technical support notes, 226
tides, 501
time delay
discrete, 67
time series analysis
project, 307
Toeplitz matrices, 269
trace of a matrix, 296
transfer function, 55
time constant, 55
transition matrix, 58
Tustin's approximation, 38
Tustin's method, see bilinear transform
unstable zeros, 336, 338, 344
vectorisation, 81
waypoints, 32
Weierstrass approximation theorem, 497
well damped, 338
white noise, 286
Wiener models, 280
Wood-Berry column model, 95
X-15, 317
Yuwana-Seborg tuning, 159
z-transform, 22
geometric series, 23
inversion, 24
computational, 29
long division, 28
partial fractions, 26
zero-pole-gain, 55
zeroth-order hold, 40
Ziegler-Nichols
closed loop, 155
example, 156
open loop, 151
zoh, see zeroth-order hold