THE OXFORD SERIES IN ELECTRICAL AND COMPUTER ENGINEERING

Adel S. Sedra, Series Editor, Electrical Engineering
Michael R. Lightner, Series Editor, Computer Engineering

Allen and Holberg, CMOS Analog Circuit Design
Bobrow, Elementary Linear Circuit Analysis, 2nd Ed.
Bobrow, Fundamentals of Electrical Engineering, 2nd Ed.
Campbell, The Science and Engineering of Microelectronic Fabrication
Chen, Analog and Digital Control System Design
Chen, Linear System Theory and Design, 3rd Ed.
Chen, System and Signal Analysis, 2nd Ed.
Comer, Digital Logic and State Machine Design, 3rd Ed.
Cooper and McGillem, Probabilistic Methods of Signal and System Analysis, 3rd Ed.
Fortney, Principles of Electronics: Analog & Digital
Franco, Electric Circuits Fundamentals
Granzow, Digital Transmission Lines
Guru and Hiziroglu, Electric Machinery & Transformers, 2nd Ed.
Hoole and Hoole, A Modern Short Course in Engineering Electromagnetics
Jones, Introduction to Optical Fiber Communication Systems
Krein, Elements of Power Electronics
Kuo, Digital Control Systems, 3rd Ed.
Lathi, Modern Digital and Analog Communications Systems, 3rd Ed.
McGillem and Cooper, Continuous and Discrete Signal and System Analysis, 3rd Ed.
Miner, Lines and Electromagnetic Fields for Engineers
Roberts and Sedra, SPICE, 2nd Ed.
Roulston, An Introduction to the Physics of Semiconductor Devices
Sadiku, Elements of Electromagnetics, 2nd Ed.
Santina, Stubberud, and Hostetter, Digital Control System Design, 2nd Ed.
Schwarz, Electromagnetics for Engineers
Schwarz and Oldham, Electrical Engineering: An Introduction, 2nd Ed.
Sedra and Smith, Microelectronic Circuits, 4th Ed.
Stefani, Savant, Shahian, and Hostetter, Design of Feedback Control Systems, 3rd Ed.
Van Valkenburg, Analog Filter Design
Warner and Grung, Semiconductor Device Electronics
Wolovich, Automatic Control Systems
Yariv, Optical Electronics in Modern Communications, 5th Ed.
LINEAR SYSTEM THEORY AND DESIGN
Third Edition

Chi-Tsong Chen
State University of New York at Stony Brook

New York  Oxford
OXFORD UNIVERSITY PRESS  1999

Oxford University Press
Oxford  New York
Athens  Auckland  Bangkok  Bogotá  Buenos Aires  Calcutta  Cape Town  Chennai
Dar es Salaam  Delhi  Florence  Hong Kong  Istanbul  Karachi  Kuala Lumpur  Madrid
Melbourne  Mexico City  Mumbai  Nairobi  Paris  São Paulo  Singapore  Taipei  Tokyo
Toronto  Warsaw
and associated companies in Berlin  Ibadan

Copyright © 1999 by Oxford University Press, Inc.
Published by Oxford University Press, Inc., 198 Madison Avenue, New York, New York 10016

Oxford is a registered trademark of Oxford University Press.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press.

Library of Congress Cataloging-in-Publication Data
Chen, Chi-Tsong
Linear system theory and design / by Chi-Tsong Chen. - 3rd ed.
p. cm. - (The Oxford series in electrical and computer engineering)
Includes bibliographical references and index.
ISBN 0-19-511777-8 (cloth)
1. Linear systems.  2. System design.  I. Title.  II. Series.
97-35535  CIP

Printing (last digit): 9 8 7 6 5 4 3 2 1
Printed in the United States of America on acid-free paper

To Bih-Jau

Contents

Preface  xi

Chapter 1: Introduction  1
1.1 Introduction  1
1.2 Overview  2

Chapter 2: Mathematical Descriptions of Systems  5
2.1 Introduction  5
   2.1.1 Causality and Lumpedness  6
2.2 Linear Systems  7
2.3 Linear Time-Invariant (LTI) Systems  11
   2.3.1 Op-Amp Circuit Implementation  16
2.4 Linearization  17
2.5 Examples  18
   2.5.1 RLC Networks  26
2.6 Discrete-Time Systems  31
2.7 Concluding Remarks  37
Problems  38

Chapter 3: Linear Algebra  44
3.1 Introduction  44
3.2 Basis, Representation, and Orthonormalization  45
3.3 Linear Algebraic Equations  48
3.4 Similarity Transformation  53
3.5 Diagonal Form and Jordan Form  55
3.6 Functions of a Square Matrix  67
3.7 Lyapunov Equation  70
3.8 Some Useful Formulas  71
3.9 Quadratic Form and Positive Definiteness  73
3.10 Singular-Value Decomposition  76
3.11 Norms of Matrices  78
Problems  78

Chapter 4: State-Space Solutions and Realizations  86
4.1 Introduction  86
4.2 Solution of LTI State Equations  87
   4.2.1 Discretization  90
   4.2.2 Solution of Discrete-Time Equations  92
4.3 Equivalent State Equations  93
   4.3.1 Canonical Forms  97
   4.3.2 Magnitude Scaling in Op-Amp Circuits  98
4.4 Realizations  100
4.5 Solution of Linear Time-Varying (LTV) Equations  106
   4.5.1 Discrete-Time Case  110
4.6 Equivalent Time-Varying Equations  111
4.7 Time-Varying Realizations  115
Problems  117

Chapter 5: Stability  121
5.1 Introduction  121
5.2 Input-Output Stability of LTI Systems  121
   5.2.1 Discrete-Time Case  126
5.3 Internal Stability  129
   5.3.1 Discrete-Time Case  131
5.4 Lyapunov Theorem  132
   5.4.1 Discrete-Time Case  135
5.5 Stability of LTV Systems  137
Problems  140

Chapter 6: Controllability and Observability  143
6.1 Introduction  143
6.2 Controllability  144
   6.2.1 Controllability Indices  150
6.3 Observability  153
   6.3.1 Observability Indices  157
6.4 Canonical Decomposition  158
6.5 Conditions in Jordan-Form Equations
6.6 Discrete-Time State Equations  169
   6.6.1 Controllability to the Origin and Reachability  171
6.7 Controllability After Sampling  172
6.8 LTV State Equations  176
Problems  180

Chapter 7: Minimal Realizations and Coprime Fractions  184
7.1 Introduction  184
7.2 Implications of Coprimeness  185
   7.2.1 Minimal Realizations  189
7.3 Computing Coprime Fractions  192
   7.3.1 QR Decomposition  195
7.4 Balanced Realization  197
7.5 Realizations from Markov Parameters  200
7.6 Degree of Transfer Matrices  205
7.7 Minimal Realizations – Matrix Case  207
7.8 Matrix Polynomial Fractions  209
   7.8.1 Column and Row Reducedness  212
   7.8.2 Computing Matrix Coprime Fractions  214
7.9 Realizations from Matrix Coprime Fractions  220
7.10 Realizations from Matrix Markov Parameters  225
7.11 Concluding Remarks  227
Problems  228

Chapter 8: State Feedback and State Estimators  231
8.1 Introduction  231
8.2 State Feedback  232
   8.2.1 Solving the Lyapunov Equation  239
8.3 Regulation and Tracking  242
   8.3.1 Robust Tracking and Disturbance Rejection  243
   8.3.2 Stabilization  247
8.4 State Estimator  247
   8.4.1 Reduced-Dimensional State Estimator  251
8.5 Feedback from Estimated States  253
8.6 State Feedback – Multivariable Case  255
   8.6.1 Cyclic Design  256
   8.6.2 Lyapunov-Equation Method  259
   8.6.3 Canonical-Form Method  260
   8.6.4 Effect on Transfer Matrices  262
8.7 State Estimators – Multivariable Case  263
8.8 Feedback from Estimated States – Multivariable Case  265
Problems  266

Chapter 9: Pole Placement and Model Matching  269
9.1 Introduction  269
   9.1.1 Compensator Equations – Classical Method  271
9.2 Unity-Feedback Configuration – Pole Placement  273
   9.2.1 Regulation and Tracking
   9.2.2 Robust Tracking and Disturbance Rejection
   9.2.3 Embedding Internal Models  280
9.3 Implementable Transfer Functions  283
   9.3.1 Model Matching – Two-Parameter Configuration  286
   9.3.2 Implementation of Two-Parameter Compensators  291
9.4 Multivariable Unity-Feedback Systems  292
   9.4.1 Regulation and Tracking  302
   9.4.2 Robust Tracking and Disturbance Rejection  303
9.5 Multivariable Model Matching – Two-Parameter Configuration  306
   9.5.1 Decoupling  311
9.6 Concluding Remarks  314
Problems  315

References  319
Answers to Selected Problems  321
Index  337

Preface

This text is intended for use in senior/first-year graduate courses on linear systems and multivariable system design in electrical, mechanical, chemical, and aeronautical departments. It may also be useful to practicing engineers because it contains many design procedures.
The mathematical background assumed is a working knowledge of linear algebra and the Laplace transform and an elementary knowledge of differential equations.

Linear system theory is a vast field. In this text, we limit our discussion to the conventional approaches of state-space equations and the polynomial fraction method of transfer matrices. The geometric approach, the abstract algebraic approach, rational fractions, and optimization are not discussed.

We aim to achieve two objectives with this text. The first objective is to use simple and efficient methods to develop results and design procedures. Thus the presentation is not exhaustive. For example, in introducing polynomial fractions, some polynomial theory, such as the Smith-McMillan form and Bezout identities, is not discussed. The second objective of this text is to enable the reader to employ the results to carry out design. Thus most results are discussed with an eye toward numerical computation. All design procedures in the text can be carried out using any software package that includes singular-value decomposition, QR decomposition, and the solution of linear algebraic equations and the Lyapunov equation. We give examples using MATLAB, as the package¹ seems to be the most widely available.

This edition is a complete rewriting of the book Linear System Theory and Design, which was the expanded edition of Introduction to Linear System Theory published in 1970. Aside from, hopefully, a clearer presentation and a more logical development, this edition differs from the book in many ways:

- The order of Chapters 2 and 3 is reversed. In this edition, we develop mathematical descriptions of systems before reviewing linear algebra. The chapter on stability is moved earlier.
- This edition deals only with real numbers and foregoes the concept of fields. Thus it is mathematically less abstract than the original book. However, all results are still stated as theorems for easy reference.
- In Chapters 4 through 6, we discuss first the time-invariant case and then extend it to the time-varying case, instead of the other way around.
- The discussion of discrete-time systems is expanded.
- In state-space design, Lyapunov equations are employed extensively and multivariable canonical forms are downplayed. This approach is not only easier for classroom presentation but also provides an attractive method for numerical computation.
- The presentation of the polynomial fraction method is streamlined. The method is equated with solving linear algebraic equations. We then discuss pole placement using a one-degree-of-freedom configuration, and model matching using a two-degree-of-freedom configuration.
- Examples using MATLAB are given throughout this new edition.

1. MATLAB is a registered trademark of The MathWorks, Inc., 24 Prime Park Way, Natick, MA 01760-1500. Fax: 508-647-7001, e-mail: info@mathworks.com, http://www.mathworks.com.

This edition is geared more for classroom use and engineering applications; therefore, many topics in the original book are deleted, including strict system equivalence, deterministic identification, computational issues, some multivariable canonical forms, and decoupling by state feedback. The polynomial fraction design in the input/output feedback (controller/estimator) configuration is deleted. Instead we discuss design in the two-parameter configuration. This configuration seems to be more suitable for practical application. The eight appendices in the original book are either incorporated into the text or deleted.

The logical sequence of all chapters is as follows:

    Chap. 1-5  =>  Chap. 6       =>  Chap. 8
    Chap. 1-5  =>  Sec. 7.1-7.3  =>  Sec. 9.1-9.3
    Sec. 7.1-7.3  =>  Sec. 7.6-7.8  =>  Sec. 9.4-9.5

In addition, the material in Section 7.9 is needed to study Section 8.6.4. However, Section 8.6.4 may be skipped without loss of continuity.
Furthermore, the concepts of controllability and observability in Chapter 6 are useful, but not essential, for the material in Chapter 7. All minimal realizations in Chapter 7 can be checked using the concept of degrees, instead of checking controllability and observability. Therefore Chapters 6 and 7 are essentially independent.

This text provides more than enough material for a one-semester course. A one-semester course at Stony Brook covers Chapters 1 through 6, Sections 8.1-8.5, 7.1-7.2, and 9.1-9.3. Time-varying systems are not covered. Clearly, other arrangements are also possible for a one-semester course. A solutions manual is available from the publisher.

I am indebted to many people in revising this text. Professor Imin Kao and Mr. Juan Ochoa helped me with MATLAB. Professor Zongli Lin and Mr. T. Anantakrishnan read the whole manuscript and made many valuable suggestions. I am grateful to Dean Yacov Shamash, College of Engineering and Applied Sciences, SUNY at Stony Brook, for his encouragement. The revised manuscript was reviewed by Professor Harold Broberg, EET Department, Indiana Purdue University; Professor Peyman Givi, Department of Mechanical and Aerospace Engineering, State University of New York at Buffalo; Professor Mustafa Khammash, Department of Electrical and Computer Engineering, Iowa State University; and Professor B. Ross Barmish, Department of Electrical and Computer Engineering, University of Wisconsin. Their detailed and critical comments prompted me to restructure some sections and to include a number of mechanical vibration problems. I thank them all.

I am indebted to Mr. Bill Zobrist of Oxford University Press, who persuaded me to undertake this revision. The people at Oxford University Press, including Krysia Bebick, Jasmine Urmeneta, Terri O'Prey, and Kristina Della Bartolomea, were most helpful in this undertaking. Finally, I thank my wife, Bih-Jau, for her support during this revision.
Chi-Tsong Chen

LINEAR SYSTEM THEORY AND DESIGN

Chapter 1
Introduction

1.1 Introduction

The study and design of physical systems can be carried out using empirical methods. We can apply various signals to a physical system and measure its responses. If the performance is not satisfactory, we can adjust some of its parameters or connect to it a compensator to improve its performance. This approach relies heavily on past experience, is carried out by trial and error, and has succeeded in designing many physical systems.

Empirical methods may become unworkable if physical systems are complex or too expensive or too dangerous to be experimented on. In these cases, analytical methods become indispensable. The analytical study of physical systems consists of four parts: modeling, development of mathematical descriptions, analysis, and design. We briefly introduce each of these tasks.

The distinction between physical systems and models is basic in engineering. For example, circuits or control systems studied in any textbook are models of physical systems. A resistor with a constant resistance is a model; it will burn out if the applied voltage is over a limit. This power limitation is often disregarded in its analytical study. An inductor with a constant inductance is again a model; in reality, the inductance may vary with the amount of current flowing through it. Modeling is a very important problem, for the success of the design depends on whether the physical system is modeled properly.

A physical system may have different models depending on the questions asked. It may also be modeled differently in different operational ranges. For example, an electronic amplifier is modeled differently at high and low frequencies. A spaceship can be modeled as a particle in investigating its trajectory; however, it must be modeled as a rigid body in maneuvering. A spaceship may even be modeled as a flexible body when it is connected to a space station.
In order to develop a suitable model for a physical system, a thorough understanding of the physical system and its operational range is essential. In this text, we will call a model of a physical system simply a system. Thus a physical system is a device or a collection of devices existing in the real world; a system is a model of a physical system.

Once a system (or model) is selected for a physical system, the next step is to apply various physical laws to develop mathematical equations to describe the system. For example, we apply Kirchhoff's voltage and current laws to electrical systems and Newton's law to mechanical systems. The equations that describe systems may assume many forms: they may be linear equations, nonlinear equations, integral equations, difference equations, differential equations, or others. Depending on the problem under study, one form of equation may be preferable to another in describing the same system. In conclusion, a system may have different mathematical-equation descriptions just as a physical system may have many different models.

After a mathematical description is obtained, we then carry out analyses, quantitative and/or qualitative. In quantitative analysis, we are interested in the responses of systems excited by certain inputs. In qualitative analysis, we are interested in the general properties of systems, such as stability, controllability, and observability. Qualitative analysis is very important, because design techniques may often evolve from this study.

If the response of a system is unsatisfactory, the system must be modified. In some cases, this can be achieved by adjusting some parameters of the system; in other cases, compensators must be introduced. Note that the design is carried out on the model of the physical system. If the model is properly chosen, then the performance of the physical system should be improved by introducing the required adjustments or compensators.
If the model is poor, then the performance of the physical system may not improve and the design is useless. Selecting a model that is close enough to a physical system and yet simple enough to be studied analytically is the most difficult and important problem in system design.

1.2 Overview

The study of systems consists of four parts: modeling, setting up mathematical equations, analysis, and design. Developing models for physical systems requires knowledge of the particular field and some measuring devices. For example, to develop models for transistors requires a knowledge of quantum physics and some laboratory setup. Developing models for automobile suspension systems requires actual testing and measurements; it cannot be achieved by use of pencil and paper. Computer simulation certainly helps but cannot replace actual measurements. Thus the modeling problem should be studied in connection with the specific field and cannot be properly covered in this text. In this text, we shall assume that models of physical systems are available to us.

The systems to be studied in this text are limited to linear systems. Using the concept of linearity, we develop in Chapter 2 that every linear system can be described by

    y(t) = ∫_{t0}^{t} G(t, τ) u(τ) dτ    (1.1)

This equation describes the relationship between the input u and output y and is called the input-output or external description. If a linear system is lumped as well, then it can also be described by

    ẋ(t) = A(t) x(t) + B(t) u(t)    (1.2)
    y(t) = C(t) x(t) + D(t) u(t)    (1.3)

Equation (1.2) is a set of first-order differential equations and Equation (1.3) is a set of algebraic equations. They are called the internal description of linear systems. Because the vector x is called the state, the set of two equations is called the state-space or, simply, the state equation.
If a linear system has, in addition, the property of time invariance, then Equations (1.1) through (1.3) reduce to

    y(t) = ∫_{0}^{t} G(t − τ) u(τ) dτ    (1.4)

and

    ẋ(t) = A x(t) + B u(t)    (1.5)
    y(t) = C x(t) + D u(t)    (1.6)

For this class of linear time-invariant systems, the Laplace transform is an important tool in analysis and design. Applying the Laplace transform to (1.4) yields

    ŷ(s) = Ĝ(s) û(s)    (1.7)

where a variable with a circumflex denotes the Laplace transform of the variable. The function Ĝ(s) is called the transfer matrix. Both (1.4) and (1.7) are input-output or external descriptions. The former is said to be in the time domain and the latter in the frequency domain.

Equations (1.1) through (1.6) are called continuous-time equations because their time variable t is a continuum defined at every time instant in (−∞, ∞). If the time is defined only at discrete instants, then the corresponding equations are called discrete-time equations. This text is devoted to the analysis and design centered around (1.1) through (1.7) and their discrete-time counterparts.

We briefly discuss the contents of each chapter. In Chapter 2, after introducing the aforementioned equations from the concepts of lumpedness, linearity, and time invariance, we show how these equations can be developed to describe systems. Chapter 3 reviews linear algebraic equations, the Lyapunov equation, and other pertinent topics that are essential for this text. We also introduce the Jordan form because it will be used to establish a number of results. We study in Chapter 4 solutions of the state-space equations in (1.2) and (1.5). Different analyses may lead to different state equations that describe the same system. Thus we introduce the concept of equivalent state equations. The basic relationship between state-space equations and transfer matrices is also established. Chapter 5 introduces the concepts of bounded-input bounded-output (BIBO) stability, marginal stability, and asymptotic stability.
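Equations (1.5) through (1.7) are tied together by the identity Ĝ(s) = C(sI − A)⁻¹B + D, a relationship established in Chapter 4. As a quick numerical sketch (the second-order system below is a hypothetical example, not one taken from the text), the transfer matrix can be evaluated at any complex frequency s:

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at the complex frequency s."""
    n = A.shape[0]
    # Solve (sI - A) X = B rather than forming the inverse explicitly.
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Hypothetical example: y'' + 3y' + 2y = u, with state x = [y, y']'.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# For this system G(s) = 1/(s^2 + 3s + 2), so G(1) = 1/6.
G1 = transfer_matrix(A, B, C, D, 1.0)
```

For a p-input q-output system the same function returns a q × p matrix, one entry per input-output pair, which is why Ĝ(s) is called a transfer matrix rather than a transfer function.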
Every system must be designed to be stable; otherwise, it may burn out or disintegrate. Therefore stability is a basic system concept. We also introduce the Lyapunov theorem to check asymptotic stability.

Chapter 6 introduces the concepts of controllability and observability. They are essential in studying the internal structure of systems. A fundamental result is that the transfer matrix describes only the controllable and observable part of a state equation. Chapter 7 studies minimal realizations and introduces coprime polynomial fractions. We show how to obtain coprime fractions by solving sets of linear algebraic equations. The equivalence of controllable and observable state equations and coprime polynomial fractions is established.

The last two chapters discuss the design of time-invariant systems. We use controllable and observable state equations to carry out design in Chapter 8 and use coprime polynomial fractions in Chapter 9. We show that, under the controllability condition, all eigenvalues of a system can be arbitrarily assigned by introducing state feedback. If a state equation is observable, full-dimensional and reduced-dimensional state estimators, with any desired eigenvalues, can be constructed to generate estimates of the state. We also establish the separation property. In Chapter 9, we discuss pole placement, model matching, and their applications in tracking, disturbance rejection, and decoupling. We use the unity-feedback configuration in pole placement and the two-parameter configuration in model matching. In our design, no control performances such as rise time, settling time, and overshoot are considered; neither are constraints on control signals and on the degree of compensators. Therefore this is not a control text per se. However, all results are basic and useful in designing linear time-invariant control systems.

Chapter 2
Mathematical Descriptions of Systems

2.1 Introduction

The class of systems studied in this text is assumed to have some input terminals and output terminals as shown in Fig. 2.1. We assume that if an excitation or input is applied to the input terminals, a unique response or output signal can be measured at the output terminals. This unique relationship between the excitation and response, input and output, or cause and effect is essential in defining a system. A system with only one input terminal and only one output terminal is called a single-variable system or a single-input single-output (SISO) system. A system with two or more input terminals and/or two or more output terminals is called a multivariable system. More specifically, we can call a system a multi-input multi-output (MIMO) system if it has two or more input terminals and output terminals, a single-input multi-output (SIMO) system if it has one input terminal and two or more output terminals.

[Figure 2.1: System.]

A system is called a continuous-time system if it accepts continuous-time signals as its input and generates continuous-time signals as its output. The input will be denoted by lowercase italic u(t) for single input or by boldface u(t) for multiple inputs. If the system has p input terminals, then u(t) is a p × 1 vector, or u = [u1 u2 ··· up]′, where the prime denotes the transpose. Similarly, the output will be denoted by y(t) or y(t). The time t is assumed to range from −∞ to ∞.

A system is called a discrete-time system if it accepts discrete-time signals as its input and generates discrete-time signals as its output. All discrete-time signals in a system will be assumed to have the same sampling period T. The input and output will be denoted by u[k] := u(kT) and y[k] := y(kT), where k denotes the discrete-time instant and is an integer ranging from −∞ to ∞.
They become boldface for multiple inputs and multiple outputs.

2.1.1 Causality and Lumpedness

A system is called a memoryless system if its output y(t0) depends only on the input applied at t0; it is independent of the input applied before or after t0. This will be stated succinctly as follows: current output of a memoryless system depends only on current input; it is independent of past and future inputs. A network that consists of only resistors is a memoryless system. Most systems, however, have memory. By this we mean that the output at t0 may depend on u(t) for t < t0, t = t0, and t > t0. That is, current output of a system with memory may depend on past, current, and future inputs.

A system is called a causal or nonanticipatory system if its current output depends on past and current inputs but not on future input. If a system is not causal, then its current output will depend on future input. In other words, a noncausal system can predict or anticipate what will be applied in the future. No physical system has such capability. Therefore every physical system is causal, and causality is a necessary condition for a system to be built or implemented in the real world. This text studies only causal systems.

Current output of a causal system is affected by past input. How far back in time will the past input affect the current output? Generally, the time should go all the way back to minus infinity. In other words, the input from −∞ to time t has an effect on y(t). Tracking u(t) from t = −∞ is, if not impossible, very inconvenient. The concept of state can deal with this problem.

Definition 2.1  The state x(t0) of a system at time t0 is the information at t0 that, together with the input u(t), for t ≥ t0, determines uniquely the output y(t) for all t ≥ t0.

By definition, if we know the state at t0, there is no more need to know the input u(t) applied before t0 in determining the output y(t) after t0.
Thus in some sense, the state summarizes the effect of past input on future output. For the network shown in Fig. 2.2, if we know the voltages x1(t0) and x2(t0) across the two capacitors and the current x3(t0) passing through the inductor, then for any input applied on and after t0, we can determine uniquely the output for t ≥ t0. Thus the state of the network at time t0 is

    x(t0) = [x1(t0)  x2(t0)  x3(t0)]′

[Figure 2.2: Network with 3 state variables.]

It is a 3 × 1 vector. The entries of x are called state variables. Thus, in general, we may consider the initial state simply as a set of initial conditions. Using the state at t0, we can express the input and output of a system as

    {x(t0); u(t), t ≥ t0} → y(t), t ≥ t0    (2.1)

It means that the output is partly excited by the initial state at t0 and partly by the input applied at and after t0. In using (2.1), there is no more need to know the input applied before t0 all the way back to −∞. Thus (2.1) is easier to track and will be called a state-input-output pair.

A system is said to be lumped if its number of state variables is finite or its state is a finite vector. The network in Fig. 2.2 is clearly a lumped system; its state consists of three numbers. A system is called a distributed system if its state has infinitely many state variables. The transmission line is the most well known distributed system. We give one more example.

EXAMPLE 2.1  Consider the unit-time delay system defined by

    y(t) = u(t − 1)

The output is simply the input delayed by one second. In order to determine {y(t), t ≥ t0} from {u(t), t ≥ t0}, we need the information {u(t), t0 − 1 ≤ t < t0}. Therefore the initial state of the system is {u(t), t0 − 1 ≤ t < t0}. There are infinitely many points in {t0 − 1 ≤ t < t0}.
Thus the unit-time delay system is a distributed system.

2.2 Linear Systems

A system is called a linear system if for every t0 and any two state-input-output pairs

    {xi(t0); ui(t), t ≥ t0} → yi(t), t ≥ t0

for i = 1, 2, we have

    {x1(t0) + x2(t0); u1(t) + u2(t), t ≥ t0} → y1(t) + y2(t), t ≥ t0   (additivity)

and

    {αx1(t0); αu1(t), t ≥ t0} → αy1(t), t ≥ t0   (homogeneity)

for any real constant α. The first property is called the additivity property; the second, the homogeneity property. These two properties can be combined as

    {α1x1(t0) + α2x2(t0); α1u1(t) + α2u2(t), t ≥ t0} → α1y1(t) + α2y2(t), t ≥ t0   (superposition)

for any real constants α1 and α2, and is called the superposition property. A system is called a nonlinear system if the superposition property does not hold.

If the input u(t) is identically zero for t ≥ t0, then the output will be excited exclusively by the initial state x(t0). This output is called the zero-input response and will be denoted by

    {x(t0); u(t) ≡ 0, t ≥ t0} → y_zi(t), t ≥ t0

If the initial state x(t0) is zero, then the output will be excited exclusively by the input. This output is called the zero-state response and will be denoted by

    {x(t0) = 0; u(t), t ≥ t0} → y_zs(t), t ≥ t0

The additivity property implies

    output due to {x(t0); u(t), t ≥ t0}
      = output due to {x(t0); u(t) ≡ 0, t ≥ t0} + output due to {x(t0) = 0; u(t), t ≥ t0}

or

    Response = zero-input response + zero-state response

Thus the response of every linear system can be decomposed into the zero-state response and the zero-input response. Furthermore, the two responses can be studied separately and their sum yields the complete response. For nonlinear systems, the complete response can be very different from the sum of the zero-input response and zero-state response. Therefore we cannot separate the zero-input and zero-state responses in studying nonlinear systems.
If a system is linear, then the additivity and homogeneity properties apply to zero-state responses. To be more specific, if x(t0) = 0, then the output will be excited exclusively by the input and the state-input-output equation can be simplified as {ui → yi}. If the system is linear, then we have {u1 + u2 → y1 + y2} and {αui → αyi} for all α and all ui. A similar remark applies to zero-input responses of any linear system.

Input-output description  We develop a mathematical equation to describe the zero-state response of linear systems. In this study, the initial state is assumed implicitly to be zero and the output is excited exclusively by the input. We consider first SISO linear systems. Let δΔ(t − t1) be the pulse shown in Fig. 2.3. It has width Δ and height 1/Δ and is located at time t1. Then every input u(t) can be approximated by a sequence of pulses as shown in Fig. 2.4. The pulse in Fig. 2.3 has height 1/Δ; thus Δ·δΔ(t − ti) has height 1, and the leftmost pulse in Fig. 2.4 with height u(ti) can be expressed as u(ti) δΔ(t − ti) Δ. Consequently, the input u(t) can be expressed symbolically as

    u(t) ≈ Σi u(ti) δΔ(t − ti) Δ

Let gΔ(t, ti) be the output at time t excited by the pulse u(t) = δΔ(t − ti) applied at time ti. Then we have

    δΔ(t − ti)  →  gΔ(t, ti)
    δΔ(t − ti) u(ti) Δ  →  gΔ(t, ti) u(ti) Δ   (homogeneity)
    Σi δΔ(t − ti) u(ti) Δ  →  Σi gΔ(t, ti) u(ti) Δ   (additivity)

Thus the output y(t) excited by the input u(t) can be approximated by

    y(t) ≈ Σi gΔ(t, ti) u(ti) Δ    (2.2)

[Figure 2.3: Pulse at t1.]
[Figure 2.4: Approximation of input signal.]

Now if Δ approaches zero, the pulse δΔ(t − ti) becomes an impulse at ti, denoted by δ(t − ti), and the corresponding output will be denoted by g(t, ti). As Δ approaches zero, the approximation in (2.2) becomes an equality, the summation becomes an integration, the discrete ti becomes a continuum and can be replaced by τ, and Δ can be written as dτ. Thus (2.2) becomes

    y(t) = ∫_{−∞}^{∞} g(t, τ) u(τ) dτ    (2.3)

Note that g(t, τ) is a function of two variables.
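The limiting process from (2.2) to (2.3) can be checked numerically: as the pulse width Δ shrinks, the sum of pulse responses approaches the value of the convolution integral. The impulse response g(t, τ) = e^{−(t−τ)} for t ≥ τ and the unit-step input used below are hypothetical choices, picked only because the integral then has a simple closed form:

```python
import numpy as np

def y_approx(t, delta):
    """Approximate y(t) by the sum in (2.2) with pulse width delta."""
    taus = np.arange(0.0, t, delta)   # pulse locations t_i
    g = np.exp(-(t - taus))           # g(t, t_i) of a causal first-order system
    u = np.ones_like(taus)            # unit-step input, u(t_i) = 1
    return np.sum(g * u) * delta      # sum_i g(t, t_i) u(t_i) * delta

t = 2.0
exact = 1.0 - np.exp(-t)   # closed form of the integral in (2.3) for this g and u
coarse = y_approx(t, 0.1)
fine = y_approx(t, 0.001)
# The approximation error shrinks as delta does, mirroring the limit Δ → 0.
```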
The second variable $\tau$ denotes the time at which the impulse input is applied; the first variable $t$ denotes the time at which the output is observed. Because $g(t,\tau)$ is the response excited by an impulse, it is called the impulse response.

If a system is causal, the output will not appear before an input is applied. Thus we have

Causal $\iff g(t,\tau)=0$ for $t<\tau$.

A system is said to be relaxed at $t_0$ if its initial state at $t_0$ is 0. In this case, the output $y(t)$, for $t \ge t_0$, is excited exclusively by the input $u(t)$ for $t \ge t_0$. Thus the lower limit of the integration in (2.3) can be replaced by $t_0$. If the system is causal as well, then $g(t,\tau)=0$ for $t<\tau$, and the upper limit of the integration in (2.3) can be replaced by $t$. In conclusion, every linear system that is causal and relaxed at $t_0$ can be described by

$$y(t) = \int_{t_0}^{t} g(t,\tau)\,u(\tau)\,d\tau. \qquad (2.4)$$

In this derivation, the condition of lumpedness is not used. Therefore any lumped or distributed linear system has such an input-output description. This description is developed using only the additivity and homogeneity properties; therefore every linear system, be it an electrical system, a mechanical system, a chemical process, or any other system, has such a description.

If a linear system has $p$ input terminals and $q$ output terminals, then (2.4) can be extended to

$$y(t) = \int_{t_0}^{t} G(t,\tau)\,u(\tau)\,d\tau \qquad (2.5)$$

where

$$G(t,\tau) = \begin{bmatrix} g_{11}(t,\tau) & g_{12}(t,\tau) & \cdots & g_{1p}(t,\tau) \\ g_{21}(t,\tau) & g_{22}(t,\tau) & \cdots & g_{2p}(t,\tau) \\ \vdots & \vdots & & \vdots \\ g_{q1}(t,\tau) & g_{q2}(t,\tau) & \cdots & g_{qp}(t,\tau) \end{bmatrix}$$

and $g_{ij}(t,\tau)$ is the response at time $t$ at the $i$th output terminal due to an impulse applied at time $\tau$ at the $j$th input terminal, the inputs at the other terminals being identically zero. That is, $g_{ij}(t,\tau)$ is the impulse response between the $j$th input terminal and the $i$th output terminal. Thus $G$ is called the impulse response matrix of the system. We stress once again that if a system is described by (2.5), the system is linear, relaxed at $t_0$, and causal.

State-space description  Every linear lumped system can be described by a set of equations of the form

$$\dot{x}(t) = A(t)x(t) + B(t)u(t) \qquad (2.6)$$
$$y(t) = C(t)x(t) + D(t)u(t) \qquad (2.7)$$

where $\dot{x} := dx/dt$.¹
For a $p$-input $q$-output system, $u$ is a $p\times 1$ vector and $y$ is a $q\times 1$ vector. If the system has $n$ state variables, then $x$ is an $n\times 1$ vector. In order for the matrices in (2.6) and (2.7) to be compatible, $A$, $B$, $C$, and $D$ must be $n\times n$, $n\times p$, $q\times n$, and $q\times p$ matrices. The four matrices are all functions of time or time-varying matrices. Equation (2.6) actually consists of a set of $n$ first-order differential equations. Equation (2.7) consists of $q$ algebraic equations. The set of two equations will be called an $n$-dimensional state-space equation or, simply, state equation. For distributed systems, the dimension is infinity and the two equations in (2.6) and (2.7) are not used.

The input-output description in (2.5) was developed from the linearity condition. The development of the state-space equation from the linearity condition, however, is not as simple and will not be attempted. We will simply accept it as a fact.

2.3 Linear Time-Invariant (LTI) Systems

A system is said to be time invariant if for every state-input-output pair

$$x(t_0),\ u(t),\ t \ge t_0 \ \to\ y(t),\ t \ge t_0$$

and any $T$, we have

$$x(t_0+T),\ u(t-T),\ t \ge t_0+T \ \to\ y(t-T),\ t \ge t_0+T \quad \text{(time shifting)}.$$

It means that if the initial state is shifted to time $t_0+T$ and the same input waveform is applied from $t_0+T$ instead of from $t_0$, then the output waveform will be the same except that it starts to appear from time $t_0+T$. In other words, if the initial state and the input are the same, no matter at what time they are applied, the output waveform will always be the same. Therefore, for time-invariant systems, we can always assume, without loss of generality, that $t_0=0$. If a system is not time invariant, it is said to be time varying.

Time invariance is defined for systems, not for signals. Signals are mostly time varying. If a signal is time invariant, such as $u(t)=1$ for all $t$, then it is a very simple or trivial signal. The characteristics of time-invariant systems must be independent of time.
For example, the network in Fig. 2.2 is time invariant if $R_i$, $C_i$, and $L_i$ are constants. Some physical systems must be modeled as time-varying systems. For example, a burning rocket is a time-varying system, because its mass decreases rapidly with time. Although the performance of an automobile or a TV set may deteriorate over a long period of time, its characteristics do not change appreciably in the first couple of years. Thus a large number of physical systems can be modeled as time-invariant systems over a limited time period.

1. We use $A := B$ to denote that $A$, by definition, equals $B$. We use $A =: B$ to denote that $B$, by definition, equals $A$.

Input-output description  The zero-state response of a linear system can be described by (2.4). Now if the system is time invariant as well, then we have²

$$g(t,\tau) = g(t+T,\tau+T) = g(t-\tau,0) =: g(t-\tau)$$

for any $T$. Thus (2.4) reduces to

$$y(t) = \int_0^t g(t-\tau)u(\tau)\,d\tau = \int_0^t g(\tau)u(t-\tau)\,d\tau \qquad (2.8)$$

where we have replaced $t_0$ by 0. The second equality can easily be verified by changing the variable. The integration in (2.8) is called a convolution integral. Unlike the time-varying case, where $g$ is a function of two variables, $g$ is a function of a single variable in the time-invariant case. By definition, $g(t)=g(t-0)$ is the output at time $t$ due to an impulse input applied at time 0. The condition for a linear time-invariant system to be causal is $g(t)=0$ for $t<0$.

EXAMPLE 2.2 The unit-time delay system studied in Example 2.1 is a device whose output equals the input delayed by 1 second. If we apply the impulse $\delta(t)$ at the input terminal, the output is $\delta(t-1)$. Thus the impulse response of the system is $\delta(t-1)$.

EXAMPLE 2.3 Consider the unity-feedback system shown in Fig. 2.5(a). It consists of a multiplier with gain $a$ and a unit-time delay element. It is a SISO system. Let $r(t)$ be the input of the feedback system.
If $r(t)=\delta(t)$, then the output is the impulse response of the feedback system and equals

$$g_f(t) = a\,\delta(t-1) + a^2\,\delta(t-2) + a^3\,\delta(t-3) + \cdots = \sum_{i=1}^{\infty} a^i\,\delta(t-i). \qquad (2.9)$$

Let $r(t)$ be any input with $r(t)=0$ for $t<0$; then the output is given by

$$y(t) = \int_0^t g_f(t-\tau)r(\tau)\,d\tau = \sum_{i=1}^{\infty} a^i \int_0^t \delta(t-\tau-i)\,r(\tau)\,d\tau = \sum_{i=1}^{\infty} a^i\,r(t-i).$$

Because the unit-time delay system is distributed, so is the feedback system.

Figure 2.5 Positive and negative feedback systems.

2. Note that $g(t,\tau)$ and $g(t-\tau)$ are two different functions. However, for convenience, the same symbol $g$ is used.

Transfer-function matrix  The Laplace transform is an important tool in the study of linear time-invariant (LTI) systems. Let $\hat{y}(s)$ be the Laplace transform of $y(t)$; that is,

$$\hat{y}(s) = \int_0^\infty y(t)e^{-st}\,dt.$$

Throughout this text, we use a variable with a circumflex to denote the Laplace transform of the variable. For causal systems, we have $g(t)=0$ for $t<0$ or $g(t-\tau)=0$ for $\tau>t$. Thus the upper integration limit in (2.8) can be replaced by $\infty$. Substituting (2.8) and interchanging the order of integrations, we obtain

$$\hat{y}(s) = \int_0^\infty \left(\int_0^\infty g(t-\tau)u(\tau)\,d\tau\right)e^{-st}\,dt = \int_0^\infty \left(\int_0^\infty g(t-\tau)e^{-s(t-\tau)}\,dt\right)u(\tau)e^{-s\tau}\,d\tau$$

which becomes, after introducing the new variable $v = t-\tau$,

$$\hat{y}(s) = \int_0^\infty \left(\int_{-\tau}^\infty g(v)e^{-sv}\,dv\right)u(\tau)e^{-s\tau}\,d\tau.$$

Again using the causality condition to change the lower integration limit inside the parentheses from $v=-\tau$ to $v=0$, the inner integration becomes independent of $\tau$ and the double integration becomes

$$\hat{y}(s) = \left(\int_0^\infty g(v)e^{-sv}\,dv\right)\left(\int_0^\infty u(\tau)e^{-s\tau}\,d\tau\right)$$

or

$$\hat{y}(s) = \hat{g}(s)\,\hat{u}(s) \qquad (2.10)$$

where

$$\hat{g}(s) = \int_0^\infty g(t)e^{-st}\,dt$$

is called the transfer function of the system. Thus the transfer function is the Laplace transform of the impulse response and, conversely, the impulse response is the inverse Laplace transform of the transfer function. We see that the Laplace transform transforms the convolution integral in (2.8) into the algebraic equation in (2.10). In analysis and design, it is simpler to use algebraic equations than to use convolutions.
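Equation (2.10) can be spot-checked numerically with crude Riemann-sum Laplace transforms. The pair used below ($g(t)=e^{-t}$ with a unit-step input, whose zero-state response is $1-e^{-t}$) is an assumption chosen for illustration.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 60.0, dt)
s = 0.8                                     # a sample value of s on the real axis
g = np.exp(-t)                              # assumed impulse response
u = np.ones_like(t)                         # unit-step input
y = 1 - np.exp(-t)                          # its zero-state response (convolution of g and u)

laplace = lambda f: float(np.sum(f * np.exp(-s * t)) * dt)  # Riemann-sum transform
# (2.10): the transform of the convolution equals the product of the transforms
assert np.isclose(laplace(y), laplace(g) * laplace(u), rtol=1e-3)
```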
Thus the convolution in (2.8) will rarely be used in the remainder of this text.

For a $p$-input $q$-output system, (2.10) can be extended as

$$\begin{bmatrix}\hat{y}_1(s)\\ \hat{y}_2(s)\\ \vdots\\ \hat{y}_q(s)\end{bmatrix} = \begin{bmatrix}\hat{g}_{11}(s) & \hat{g}_{12}(s) & \cdots & \hat{g}_{1p}(s)\\ \hat{g}_{21}(s) & \hat{g}_{22}(s) & \cdots & \hat{g}_{2p}(s)\\ \vdots & \vdots & & \vdots\\ \hat{g}_{q1}(s) & \hat{g}_{q2}(s) & \cdots & \hat{g}_{qp}(s)\end{bmatrix}\begin{bmatrix}\hat{u}_1(s)\\ \hat{u}_2(s)\\ \vdots\\ \hat{u}_p(s)\end{bmatrix}$$

or

$$\hat{y}(s) = \hat{G}(s)\hat{u}(s) \qquad (2.11)$$

where $\hat{g}_{ij}(s)$ is the transfer function from the $j$th input to the $i$th output. The $q\times p$ matrix $\hat{G}(s)$ is called the transfer-function matrix or, simply, transfer matrix of the system.

EXAMPLE 2.4 Consider the unit-time delay system studied in Example 2.2. Its impulse response is $\delta(t-1)$. Therefore its transfer function is

$$\hat{g}(s) = \mathcal{L}[\delta(t-1)] = \int_0^\infty \delta(t-1)e^{-st}\,dt = e^{-s}.$$

This transfer function is an irrational function of $s$.

EXAMPLE 2.5 Consider the feedback system shown in Fig. 2.5(a). The transfer function of the unit-time delay element is $e^{-s}$. The transfer function from $r$ to $y$ can be computed directly from the block diagram as

$$\hat{g}(s) = \frac{a e^{-s}}{1 - a e^{-s}}. \qquad (2.12)$$

This can also be obtained by taking the Laplace transform of the impulse response, which was computed in (2.9) as

$$g_f(t) = \sum_{i=1}^{\infty} a^i\,\delta(t-i).$$

Because $\mathcal{L}[\delta(t-i)] = e^{-is}$, the Laplace transform of $g_f(t)$ is

$$\hat{g}_f(s) = \sum_{i=1}^{\infty} a^i e^{-is} = a e^{-s}\sum_{i=0}^{\infty}\left(a e^{-s}\right)^i.$$

Using $\sum_{i=0}^{\infty} r^i = 1/(1-r)$ for $|r|<1$, we can express the infinite series in closed form as

$$\hat{g}_f(s) = \frac{a e^{-s}}{1 - a e^{-s}}$$

which is the same as (2.12).

The transfer function in (2.12) is an irrational function of $s$. This is so because the feedback system is a distributed system. If a linear time-invariant system is lumped, then its transfer function will be a rational function of $s$. We study mostly lumped systems; thus the transfer functions we will encounter are mostly rational functions of $s$.

Every rational transfer function can be expressed as $\hat{g}(s) = N(s)/D(s)$, where $N(s)$ and $D(s)$ are polynomials of $s$. Let us use deg to denote the degree of a polynomial. Then $\hat{g}(s)$ can be classified as follows:

• $\hat{g}(s)$ proper $\iff$ deg $D(s)$ ≥ deg $N(s)$ $\iff$ $\hat{g}(\infty)$ = zero or nonzero constant.
• $\hat{g}(s)$ strictly proper $\iff$ deg $D(s)$ > deg $N(s)$ $\iff$ $\hat{g}(\infty) = 0$.
• $\hat{g}(s)$ biproper $\iff$ deg $D(s)$ = deg $N(s)$ $\iff$ $\hat{g}(\infty)$ = nonzero constant.
• $\hat{g}(s)$ improper $\iff$ deg $D(s)$ < deg $N(s)$ $\iff$ $|\hat{g}(\infty)| = \infty$.

Improper rational transfer functions will amplify high-frequency noise, which often exists in the real world; therefore improper rational transfer functions rarely arise in practice.

A real or complex number $\lambda$ is called a pole of the proper transfer function $\hat{g}(s) = N(s)/D(s)$ if $|\hat{g}(\lambda)| = \infty$; a zero if $\hat{g}(\lambda) = 0$. If $N(s)$ and $D(s)$ are coprime, that is, have no common factors of degree 1 or higher, then all roots of $N(s)$ are the zeros of $\hat{g}(s)$, and all roots of $D(s)$ are the poles of $\hat{g}(s)$. In terms of poles and zeros, the transfer function can be expressed as

$$\hat{g}(s) = k\,\frac{(s-z_1)(s-z_2)\cdots(s-z_m)}{(s-p_1)(s-p_2)\cdots(s-p_n)}.$$

This is called the zero-pole-gain form. In MATLAB, this form can be obtained from the transfer function by calling [z,p,k]=tf2zp(num,den).

A rational matrix $\hat{G}(s)$ is said to be proper if its every entry is proper or if $\hat{G}(\infty)$ is a zero or nonzero constant matrix; it is strictly proper if its every entry is strictly proper or if $\hat{G}(\infty)$ is a zero matrix. If a rational matrix $\hat{G}(s)$ is square and if both $\hat{G}(s)$ and $\hat{G}^{-1}(s)$ are proper, then $\hat{G}(s)$ is said to be biproper. We call $\lambda$ a pole of $\hat{G}(s)$ if it is a pole of some entry of $\hat{G}(s)$. Thus every pole of every entry of $\hat{G}(s)$ is a pole of $\hat{G}(s)$. There are a number of ways of defining zeros for $\hat{G}(s)$. We call $\lambda$ a blocking zero if it is a zero of every nonzero entry of $\hat{G}(s)$. A more useful definition is the transmission zero, which will be introduced in Chapter 9.

State-space equation  Every linear time-invariant lumped system can be described by a set of equations of the form

$$\dot{x}(t) = Ax(t) + Bu(t) \qquad (2.13)$$
$$y(t) = Cx(t) + Du(t)$$

For a system with $p$ inputs, $q$ outputs, and $n$ state variables, $A$, $B$, $C$, and $D$ are, respectively, $n\times n$, $n\times p$, $q\times n$, and $q\times p$ constant matrices.
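The degree tests and pole/zero definitions above translate directly into code. The sketch below (using numpy; the example transfer function is an assumption for illustration) classifies $\hat{g}(s)=N(s)/D(s)$ from the coefficient degrees and finds its poles as the roots of $D(s)$.

```python
import numpy as np

def classify(num, den):
    """Classify ĝ(s) = N(s)/D(s); coefficients listed highest power first."""
    deg_n = len(np.trim_zeros(num, 'f')) - 1   # deg N(s), leading zeros dropped
    deg_d = len(np.trim_zeros(den, 'f')) - 1   # deg D(s)
    if deg_d > deg_n:
        return 'strictly proper'
    if deg_d == deg_n:
        return 'biproper'
    return 'improper'

# ĝ(s) = (s+3)/(s²+3s+2); N and D are coprime, so the poles are the roots of D
num, den = [1.0, 3.0], [1.0, 3.0, 2.0]
assert classify(num, den) == 'strictly proper'
assert np.allclose(sorted(np.roots(den)), [-2.0, -1.0])   # poles at -1 and -2
```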
Applying the Laplace transform to (2.13) yields

$$s\hat{x}(s) - x(0) = A\hat{x}(s) + B\hat{u}(s)$$
$$\hat{y}(s) = C\hat{x}(s) + D\hat{u}(s)$$

which implies

$$\hat{x}(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}B\hat{u}(s) \qquad (2.14)$$
$$\hat{y}(s) = C(sI - A)^{-1}x(0) + C(sI - A)^{-1}B\hat{u}(s) + D\hat{u}(s) \qquad (2.15)$$

They are algebraic equations. Given $x(0)$ and $\hat{u}(s)$, $\hat{x}(s)$ and $\hat{y}(s)$ can be computed algebraically from (2.14) and (2.15). Their inverse Laplace transforms yield the time responses $x(t)$ and $y(t)$. The equations also reveal the fact that the response of a linear system can be decomposed as the zero-state response and the zero-input response. If the initial state $x(0)$ is zero, then (2.15) reduces to

$$\hat{y}(s) = \left[C(sI - A)^{-1}B + D\right]\hat{u}(s).$$

Comparing this with (2.11) yields

$$\hat{G}(s) = C(sI - A)^{-1}B + D. \qquad (2.16)$$

This relates the input-output (or transfer matrix) and state-space descriptions.

The functions tf2ss and ss2tf in MATLAB compute one description from the other. They compute only the SISO and SIMO cases. For example, [num,den]=ss2tf(a,b,c,d,1) computes the transfer matrix from the first input to all outputs or, equivalently, the first column of $\hat{G}(s)$. If the last argument 1 in ss2tf(a,b,c,d,1) is replaced by 3, then the function generates the third column of $\hat{G}(s)$.

To conclude this section, we mention that the Laplace transform is not used in studying linear time-varying systems. The Laplace transform of $g(t,\tau)$ is a function of two variables and $\mathcal{L}[A(t)x(t)] \ne \mathcal{L}[A(t)]\,\mathcal{L}[x(t)]$; thus the Laplace transform does not offer any advantage and is not used in studying time-varying systems.

2.3.1 Op-Amp Circuit Implementation

Every linear time-invariant (LTI) state-space equation can be implemented using an operational-amplifier (op-amp) circuit. Figure 2.6 shows two standard op-amp circuit elements. All inputs are connected, through resistors, to the inverting terminal. Not shown are the grounded noninverting terminal and power supply. If the feedback branch is a resistor as shown in Fig. 2.6(a), then the output of the element is $-(av_1 + bv_2 + cv_3)$.
If the feedback branch is a capacitor with capacitance $C$ and $RC = 1$ as shown in Fig. 2.6(b), and if the output is assigned as $x$, then $\dot{x} = -(av_1 + bv_2 + cv_3)$. We call the first element an adder; the second element, an integrator. Actually the adder functions also as a multiplier, and the integrator functions also as a multiplier and adder. If we use only one input, say $v_1$, in Fig. 2.6(a), then the output equals $-av_1$, and the element can be used as an inverter with gain $a$.

Now we use an example to show that every LTI state-space equation can be implemented using the two types of elements in Fig. 2.6. Consider the state equation

$$\begin{bmatrix}\dot{x}_1(t)\\ \dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}2 & -0.3\\ 1 & -8\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + \begin{bmatrix}-2\\ 0\end{bmatrix}u(t) \qquad (2.17)$$

Figure 2.6 Two op-amp circuit elements.

$$y(t) = \begin{bmatrix}-2 & 3\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + 5u(t) \qquad (2.18)$$

It has dimension 2 and we need two integrators to implement it. We have the freedom of choosing the output of each integrator as $+x_i$ or $-x_i$. Suppose we assign the output of the left-hand-side (LHS) integrator as $x_1$ and the output of the right-hand-side (RHS) integrator as $-x_2$, as shown in Fig. 2.7. Then the input of the LHS integrator should be, from the first equation of (2.17), $-\dot{x}_1 = -2x_1 + 0.3x_2 + 2u$ and is connected as shown. The input of the RHS integrator should be $\dot{x}_2 = x_1 - 8x_2$ and is connected as shown. If the output of the adder is chosen as $y$, then its input should equal $-y = 2x_1 - 3x_2 - 5u$, and is connected as shown. Thus the state equation in (2.17) and (2.18) can be implemented as shown in Fig. 2.7. Note that there are many ways to implement the same equation. For example, if we assign the outputs of the two integrators in Fig. 2.7 as $x_1$ and $x_2$, instead of $x_1$ and $-x_2$, then we will obtain a different implementation.

In actual operational-amplifier circuits, the range of signals is limited, usually 1 or 2 volts below the supplied voltage.
If any signal grows outside the range, the circuit will saturate or burn out and will not behave as the equation dictates. There is, however, a way to deal with this problem, as we will discuss in Section 4.3.1.

2.4 Linearization

Most physical systems are nonlinear and time varying. Some of them can be described by nonlinear differential equations of the form

$$\dot{x}(t) = h(x(t), u(t), t) \qquad (2.19)$$
$$y(t) = f(x(t), u(t), t)$$

Figure 2.7 Op-amp implementation of (2.17) and (2.18).

where $h$ and $f$ are nonlinear functions. The behavior of such equations can be very complicated and its study is beyond the scope of this text. Some nonlinear equations, however, can be approximated by linear equations under certain conditions. Suppose for some input function $u_o(t)$ and some initial state, $x_o(t)$ is the solution of (2.19); that is,

$$\dot{x}_o(t) = h(x_o(t), u_o(t), t). \qquad (2.20)$$

Now suppose the input is perturbed slightly to become $u_o(t) + \bar{u}(t)$ and the initial state is also perturbed only slightly. For some nonlinear equations, the corresponding solution may differ from $x_o(t)$ only slightly.³ In this case, the solution can be expressed as $x_o(t) + \bar{x}(t)$ with $\bar{x}(t)$ small for all $t$. Under this assumption, we can expand (2.19) as

$$\dot{x}_o(t) + \dot{\bar{x}}(t) = h(x_o(t)+\bar{x}(t),\ u_o(t)+\bar{u}(t),\ t) \approx h(x_o(t), u_o(t), t) + \frac{\partial h}{\partial x}\,\bar{x} + \frac{\partial h}{\partial u}\,\bar{u} \qquad (2.21)$$

where, for $h = [h_1\ h_2\ h_3]'$, $x = [x_1\ x_2\ x_3]'$, and $u = [u_1\ u_2]'$,

$$A(t) := \frac{\partial h}{\partial x} := \begin{bmatrix} \partial h_1/\partial x_1 & \partial h_1/\partial x_2 & \partial h_1/\partial x_3 \\ \partial h_2/\partial x_1 & \partial h_2/\partial x_2 & \partial h_2/\partial x_3 \\ \partial h_3/\partial x_1 & \partial h_3/\partial x_2 & \partial h_3/\partial x_3 \end{bmatrix}$$

$$B(t) := \frac{\partial h}{\partial u} := \begin{bmatrix} \partial h_1/\partial u_1 & \partial h_1/\partial u_2 \\ \partial h_2/\partial u_1 & \partial h_2/\partial u_2 \\ \partial h_3/\partial u_1 & \partial h_3/\partial u_2 \end{bmatrix}$$

They are called Jacobians. Because $A$ and $B$ are computed along the two time functions $x_o(t)$ and $u_o(t)$, they are, in general, functions of $t$. Using (2.20) and neglecting higher powers of $\bar{x}$ and $\bar{u}$, we can reduce (2.21) to

$$\dot{\bar{x}}(t) = A(t)\bar{x}(t) + B(t)\bar{u}(t).$$

This is a linear state-space equation. The equation $y(t) = f(x(t), u(t), t)$ can be similarly linearized. This linearization technique is often used in practice to obtain linear equations.
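The Jacobian computation can be checked numerically with finite differences. The sketch below linearizes an assumed pendulum-like equation $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin x_1 + u$ about the zero equilibrium; the function names and the example system are illustrative, not from the text.

```python
import numpy as np

def jacobian(f, z0, eps=1e-6):
    """Finite-difference Jacobian of f at z0 (a stand-in for ∂h/∂x, ∂h/∂u)."""
    z0 = np.asarray(z0, dtype=float)
    J = np.zeros((len(f(z0)), len(z0)))
    for j in range(len(z0)):
        dz = np.zeros_like(z0); dz[j] = eps
        J[:, j] = (f(z0 + dz) - f(z0 - dz)) / (2 * eps)  # central difference
    return J

# h(x, u): ẋ1 = x2, ẋ2 = -sin(x1) + u   (illustrative nonlinear system)
h = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
x0, u0 = np.zeros(2), np.zeros(1)        # linearize about the equilibrium (x0, u0)
A = jacobian(lambda x: h(x, u0), x0)     # ∂h/∂x along the nominal solution
B = jacobian(lambda u: h(x0, u), u0)     # ∂h/∂u along the nominal solution
assert np.allclose(A, [[0, 1], [-1, 0]], atol=1e-6)
assert np.allclose(B, [[0], [1]], atol=1e-6)
```

The resulting pair $(A, B)$ is exactly the linear state equation $\dot{\bar{x}} = A\bar{x} + B\bar{u}$ derived above, evaluated at one operating point.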
2.5 Examples

In this section we use examples to illustrate how to develop transfer functions and state-space equations for physical systems.

EXAMPLE 2.6 Consider the mechanical system shown in Fig. 2.8. It consists of a block with mass $m$ connected to a wall through a spring. We consider the applied force $u$ to be the input and the displacement $y$ from the equilibrium to be the output. The friction between the floor and the block generally consists of three distinct parts: static friction, Coulomb friction, and viscous friction, as shown in Fig. 2.9. Note that the horizontal coordinate is velocity $\dot{y} = dy/dt$. The friction is clearly not a linear function of the velocity. To simplify analysis, we disregard the static and Coulomb frictions and consider only the viscous friction. Then the friction becomes linear and can be expressed as $k_1\dot{y}(t)$, where $k_1$ is the viscous friction coefficient. The characteristic of the spring is shown in Fig. 2.10; it is not linear. However, if the displacement is limited to $(y_1, y_2)$ as shown, then the spring can be considered to be linear and the spring force equals $k_2 y$, where $k_2$ is the spring constant. Thus the mechanical system can be modeled as a linear system under linearization and simplification.

Figure 2.8 Mechanical system.

Figure 2.9 (a) Static and Coulomb frictions. (b) Viscous friction.

Figure 2.10 Characteristic of spring.

3. This is not true in general. For some nonlinear equations, a very small difference in initial states will generate completely different solutions, yielding the phenomenon of chaos.

We apply Newton's law to develop an equation to describe the system. The applied force $u$ must overcome the friction and the spring force; the remainder is used to accelerate the block. Thus we have

$$m\ddot{y} = u - k_1\dot{y} - k_2 y \qquad (2.22)$$

where $\ddot{y} = d^2y(t)/dt^2$ and $\dot{y} = dy(t)/dt$.
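A quick numerical sanity check of (2.22): with a constant applied force, the block must settle where the spring force balances the input, that is, at $y = u/k_2$ (set $\ddot{y} = \dot{y} = 0$ in (2.22)). The parameter values below are illustrative assumptions.

```python
# Forward-Euler integration of (2.22): m*ÿ = u - k1*ẏ - k2*y
m, k1, k2 = 1.0, 3.0, 2.0      # illustrative mass, friction, and spring constants
u = 4.0                        # constant applied force
y, v, dt = 0.0, 0.0, 1e-3      # start at rest at the equilibrium position
for _ in range(int(30.0 / dt)):
    a = (u - k1 * v - k2 * y) / m    # acceleration ÿ from (2.22)
    y, v = y + dt * v, v + dt * a
assert abs(y - u / k2) < 1e-3        # settles at the static equilibrium y = u/k2
```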
Applying the Laplace transform and assuming zero initial conditions, we obtain

$$ms^2\hat{y}(s) = \hat{u}(s) - k_1 s\hat{y}(s) - k_2\hat{y}(s)$$

which implies

$$\hat{y}(s) = \frac{1}{ms^2 + k_1 s + k_2}\,\hat{u}(s).$$

This is the input-output description of the system. Its transfer function is $1/(ms^2 + k_1 s + k_2)$. If $m = 1$, $k_1 = 3$, and $k_2 = 2$, then the impulse response of the system is

$$g(t) = \mathcal{L}^{-1}\left[\frac{1}{s^2+3s+2}\right] = \mathcal{L}^{-1}\left[\frac{1}{s+1} - \frac{1}{s+2}\right] = e^{-t} - e^{-2t}$$

and the convolution description of the system is

$$y(t) = \int_0^t g(t-\tau)u(\tau)\,d\tau = \int_0^t \left(e^{-(t-\tau)} - e^{-2(t-\tau)}\right)u(\tau)\,d\tau.$$

Next we develop a state-space equation to describe the system. Let us select the displacement and velocity of the block as state variables; that is, $x_1 = y$, $x_2 = \dot{y}$. Then we have, using (2.22),

$$\dot{x}_1 = x_2 \qquad m\dot{x}_2 = u - k_1 x_2 - k_2 x_1.$$

They can be expressed in matrix form as

$$\begin{bmatrix}\dot{x}_1(t)\\ \dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}0 & 1\\ -k_2/m & -k_1/m\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}u(t)$$

$$y(t) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix}$$

This state-space equation describes the system.

EXAMPLE 2.7 Consider the system shown in Fig. 2.11. It consists of two blocks, with masses $m_1$ and $m_2$, connected by three springs with spring constants $k_i$, $i = 1, 2, 3$. To simplify the discussion, we assume that there is no friction between the blocks and the floor.

Figure 2.11 Spring-mass system.

The applied force $u_1$ must overcome the spring forces and the remainder is used to accelerate the block; thus we have

$$u_1 - k_1 y_1 - k_2(y_1 - y_2) = m_1\ddot{y}_1$$

or

$$m_1\ddot{y}_1 + (k_1 + k_2)y_1 - k_2 y_2 = u_1. \qquad (2.23)$$

For the second block, we have

$$m_2\ddot{y}_2 - k_2 y_1 + (k_2 + k_3)y_2 = u_2. \qquad (2.24)$$

They can be combined as

$$\begin{bmatrix}m_1 & 0\\ 0 & m_2\end{bmatrix}\begin{bmatrix}\ddot{y}_1\\ \ddot{y}_2\end{bmatrix} + \begin{bmatrix}k_1+k_2 & -k_2\\ -k_2 & k_2+k_3\end{bmatrix}\begin{bmatrix}y_1\\ y_2\end{bmatrix} = \begin{bmatrix}u_1\\ u_2\end{bmatrix}$$

This is a standard equation in studying vibration and is said to be in normal form. See Reference [18].

Let us define $x_1 = y_1$, $x_2 = \dot{y}_1$, $x_3 = y_2$, and $x_4 = \dot{y}_2$. Then we can readily obtain

$$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0\\ -\dfrac{k_1+k_2}{m_1} & 0 & \dfrac{k_2}{m_1} & 0\\ 0 & 0 & 0 & 1\\ \dfrac{k_2}{m_2} & 0 & -\dfrac{k_2+k_3}{m_2} & 0 \end{bmatrix}x + \begin{bmatrix}0 & 0\\ 1/m_1 & 0\\ 0 & 0\\ 0 & 1/m_2\end{bmatrix}u$$

$$y = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\end{bmatrix}x$$

This two-input two-output state equation describes the system in Fig. 2.11.

To develop its input-output description, we apply the Laplace transform to (2.23) and (2.24) and assume zero initial conditions to yield

$$(m_1 s^2 + k_1 + k_2)\hat{y}_1(s) - k_2\hat{y}_2(s) = \hat{u}_1(s)$$
$$-k_2\hat{y}_1(s) + (m_2 s^2 + k_2 + k_3)\hat{y}_2(s) = \hat{u}_2(s)$$

From these two equations, we can readily obtain

$$\begin{bmatrix}\hat{y}_1(s)\\ \hat{y}_2(s)\end{bmatrix} = \begin{bmatrix}\dfrac{m_2 s^2 + k_2 + k_3}{d(s)} & \dfrac{k_2}{d(s)}\\[1.5ex] \dfrac{k_2}{d(s)} & \dfrac{m_1 s^2 + k_1 + k_2}{d(s)}\end{bmatrix}\begin{bmatrix}\hat{u}_1(s)\\ \hat{u}_2(s)\end{bmatrix}$$

where

$$d(s) := (m_1 s^2 + k_1 + k_2)(m_2 s^2 + k_2 + k_3) - k_2^2.$$

This is the transfer-matrix description of the system. Thus what we will discuss in this text can be applied directly to study vibration.

EXAMPLE 2.8 Consider a cart with an inverted pendulum hinged on top of it as shown in Fig. 2.12. For simplicity, the cart and the pendulum are assumed to move in only one plane, and the friction, the mass of the stick, and the gust of wind are disregarded. The problem is to maintain the pendulum at the vertical position. For example, if the inverted pendulum is falling in the direction shown, the cart moves to the right and exerts a force, through the hinge, to push the pendulum back to the vertical position. This simple mechanism can be used as a model of a space vehicle on takeoff.

Let $H$ and $V$ be, respectively, the horizontal and vertical forces exerted by the cart on the pendulum as shown. The application of Newton's law to the linear movements yields

$$M\ddot{y} = u - H$$
$$H = m\frac{d^2}{dt^2}(y + l\sin\theta) = m\left(\ddot{y} + l\ddot{\theta}\cos\theta - l\dot{\theta}^2\sin\theta\right)$$
$$V - mg = m\frac{d^2}{dt^2}(l\cos\theta) = m\left(-l\ddot{\theta}\sin\theta - l\dot{\theta}^2\cos\theta\right)$$

The application of Newton's law to the rotational movement of the pendulum around the hinge yields

$$mgl\sin\theta = ml\ddot{\theta}\cdot l + m\ddot{y}\,l\cos\theta.$$

They are nonlinear equations. Because the design objective is to maintain the pendulum at the vertical position, it is reasonable to assume $\theta$ and $\dot{\theta}$ to be small. Under this assumption, we can use the approximations $\sin\theta \approx \theta$ and $\cos\theta \approx 1$.
By retaining only the linear terms in $\theta$ and $\dot{\theta}$ or, equivalently, dropping the terms with $\theta^2$, $\dot{\theta}^2$, $\theta\ddot{\theta}$, and $\theta\dot{\theta}$, we obtain $V = mg$ and

$$M\ddot{y} = u - m\ddot{y} - ml\ddot{\theta} \qquad g\theta = l\ddot{\theta} + \ddot{y}$$

which imply

$$M\ddot{y} = u - mg\theta \qquad (2.25)$$
$$Ml\ddot{\theta} = (M+m)g\theta - u \qquad (2.26)$$

Figure 2.12 Cart with inverted pendulum.

Using these linearized equations, we now can develop the input-output and state-space descriptions. Applying the Laplace transform to (2.25) and (2.26) and assuming zero initial conditions, we obtain

$$Ms^2\hat{y}(s) = \hat{u}(s) - mg\hat{\theta}(s)$$
$$Mls^2\hat{\theta}(s) = (M+m)g\hat{\theta}(s) - \hat{u}(s)$$

From these equations, we can readily compute the transfer function $\hat{g}_{yu}(s)$ from $u$ to $y$ and the transfer function $\hat{g}_{\theta u}(s)$ from $u$ to $\theta$ as

$$\hat{g}_{yu}(s) = \frac{ls^2 - g}{s^2\left(Mls^2 - (M+m)g\right)} \qquad \hat{g}_{\theta u}(s) = \frac{-1}{Mls^2 - (M+m)g}$$

To develop a state-space equation, we select state variables as $x_1 = y$, $x_2 = \dot{y}$, $x_3 = \theta$, and $x_4 = \dot{\theta}$. Then from this selection, (2.25), and (2.26) we can readily obtain

$$\dot{x} = \begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & -mg/M & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & (M+m)g/(Ml) & 0\end{bmatrix}x + \begin{bmatrix}0\\ 1/M\\ 0\\ -1/(Ml)\end{bmatrix}u$$

$$y = \begin{bmatrix}1 & 0 & 0 & 0\end{bmatrix}x \qquad (2.27)$$

This state equation has dimension 4 and describes the system when $\theta$ and $\dot{\theta}$ are very small.

EXAMPLE 2.9 A communication satellite of mass $m$ orbiting around the earth is shown in Fig. 2.13. The altitude of the satellite is specified by $r(t)$, $\theta(t)$, and $\phi(t)$ as shown. The orbit can be controlled by three orthogonal thrusts $u_r(t)$, $u_\theta(t)$, and $u_\phi(t)$. The state, input, and output of the system are chosen as

$$x(t) = \begin{bmatrix}r(t)\\ \dot{r}(t)\\ \theta(t)\\ \dot{\theta}(t)\\ \phi(t)\\ \dot{\phi}(t)\end{bmatrix} \qquad u(t) = \begin{bmatrix}u_r(t)\\ u_\theta(t)\\ u_\phi(t)\end{bmatrix} \qquad y(t) = \begin{bmatrix}r(t)\\ \theta(t)\\ \phi(t)\end{bmatrix}$$

Then the system can be shown to be described by

$$\dot{x} = h(x,u) = \begin{bmatrix}\dot{r}\\ r\dot{\theta}^2\cos^2\phi + r\dot{\phi}^2 - k/r^2 + u_r/m\\ \dot{\theta}\\ -2\dot{\theta}\dot{r}/r + 2\dot{\theta}\dot{\phi}\sin\phi/\cos\phi + u_\theta/(mr\cos\phi)\\ \dot{\phi}\\ -\dot{\theta}^2\cos\phi\sin\phi - 2\dot{\phi}\dot{r}/r + u_\phi/(mr)\end{bmatrix} \qquad (2.28)$$

Figure 2.13 Satellite in orbit.

One solution, which corresponds to a circular equatorial orbit, is given by

$$x_o(t) = \begin{bmatrix}r_o & 0 & \omega_o t & \omega_o & 0 & 0\end{bmatrix}' \qquad u_o = 0$$

with $r_o^3\omega_o^2 = k$, a known physical constant. Once the satellite reaches the orbit, it will remain in the orbit as long as there are no disturbances.
If the satellite deviates from the orbit, thrusts must be applied to push it back to the orbit. Define

$$x(t) = x_o(t) + \bar{x}(t) \qquad u(t) = u_o(t) + \bar{u}(t) \qquad y(t) = y_o(t) + \bar{y}(t).$$

If the perturbation is very small, then (2.28) can be linearized as

$$\dot{\bar{x}}(t) = \begin{bmatrix}0 & 1 & 0 & 0 & 0 & 0\\ 3\omega_o^2 & 0 & 0 & 2\omega_o r_o & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & -2\omega_o/r_o & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & -\omega_o^2 & 0\end{bmatrix}\bar{x}(t) + \begin{bmatrix}0 & 0 & 0\\ 1/m & 0 & 0\\ 0 & 0 & 0\\ 0 & 1/(mr_o) & 0\\ 0 & 0 & 0\\ 0 & 0 & 1/(mr_o)\end{bmatrix}\bar{u}(t)$$

$$\bar{y}(t) = \begin{bmatrix}1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\end{bmatrix}\bar{x}(t) \qquad (2.29)$$

This six-dimensional state equation describes the system. In this equation, $A$, $B$, and $C$ happen to be constant. If the orbit is an elliptic one, then they will be time varying. We note that the three matrices are all block diagonal. Thus the equation can be decomposed into two uncoupled parts, one involving $r$ and $\theta$, the other $\phi$. Studying these two parts independently can simplify analysis and design.

EXAMPLE 2.10 In chemical plants, it is often necessary to maintain the levels of liquids. A simplified model of a connection of two tanks is shown in Fig. 2.14. It is assumed that under normal operation, the inflows and outflows of both tanks all equal $Q$ and their liquid levels equal $H_1$ and $H_2$. Let $u$ be the inflow perturbation of the first tank, which will cause variations in liquid level $x_1$ and outflow $y_1$ as shown. These variations will cause level variation $x_2$ and outflow variation $y$ in the second tank. It is assumed that

$$y_1 = \frac{x_1 - x_2}{R_1} \qquad \text{and} \qquad y = \frac{x_2}{R_2}$$

where $R_i$ are the flow resistances and depend on the normal heights $H_1$ and $H_2$. They can also be controlled by the valves. Changes of liquid levels are governed by

$$A_1\,dx_1 = (u - y_1)\,dt \qquad \text{and} \qquad A_2\,dx_2 = (y_1 - y)\,dt$$

where $A_i$ are the cross sections of the tanks. From these equations, we can readily obtain

$$\dot{x}_1 = \frac{-(x_1 - x_2)}{A_1 R_1} + \frac{u}{A_1} \qquad \dot{x}_2 = \frac{x_1 - x_2}{A_2 R_1} - \frac{x_2}{A_2 R_2}$$

Thus the state-space description of the system is given by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}-1/(A_1 R_1) & 1/(A_1 R_1)\\ 1/(A_2 R_1) & -\left(1/(A_2 R_1) + 1/(A_2 R_2)\right)\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1/A_1\\ 0\end{bmatrix}u$$

$$y = \begin{bmatrix}0 & 1/R_2\end{bmatrix}x$$

Figure 2.14 Hydraulic tanks.
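The tank model can be cross-checked numerically by comparing $C(sI-A)^{-1}B + D$ from (2.16) with the closed-form transfer function $1/(A_1A_2R_1R_2s^2 + (A_1R_1 + A_1R_2 + A_2R_2)s + 1)$ that this model yields. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative tank parameters (cross sections A1, A2 and flow resistances R1, R2)
A1, A2, R1, R2 = 2.0, 3.0, 0.5, 1.5
A = np.array([[-1/(A1*R1), 1/(A1*R1)],
              [1/(A2*R1), -(1/(A2*R1) + 1/(A2*R2))]])
B = np.array([[1/A1], [0.0]])
C = np.array([[0.0, 1/R2]])

s = 0.7 + 0.2j                                     # a sample complex frequency
g_ss = (C @ np.linalg.solve(s*np.eye(2) - A, B))[0, 0]   # C(sI-A)^{-1}B, as in (2.16)
g_tf = 1 / (A1*A2*R1*R2*s**2 + (A1*R1 + A1*R2 + A2*R2)*s + 1)
assert np.isclose(g_ss, g_tf)                      # the two descriptions agree
```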
Its transfer function can be computed as

$$\hat{g}(s) = \frac{1}{A_1 A_2 R_1 R_2 s^2 + (A_1 R_1 + A_1 R_2 + A_2 R_2)s + 1}.$$

2.5.1 RLC Networks

In RLC networks, capacitors and inductors can store energy and are associated with state variables. If a capacitor voltage is assigned as a state variable $x$, then its current is $C\dot{x}$, where $C$ is its capacitance. If an inductor current is assigned as a state variable $x$, then its voltage is $L\dot{x}$, where $L$ is its inductance. Note that resistors are memoryless elements, and their currents or voltages should not be assigned as state variables. For most simple RLC networks, once state variables are assigned, their state equations can be developed by applying Kirchhoff's current and voltage laws, as the next example illustrates.

EXAMPLE 2.11 Consider the network shown in Fig. 2.15. We assign the $C_i$-capacitor voltages as $x_i$, $i = 1, 2$, and the inductor current as $x_3$. It is important to specify their polarities. Then their currents and voltage are, respectively, $C_1\dot{x}_1$, $C_2\dot{x}_2$, and $L\dot{x}_3$ with the polarities shown. From the figure, we see that the voltage across the resistor is $u - x_1$ with the polarity shown. Thus its current is $(u - x_1)/R$. Applying Kirchhoff's current law at node A yields $C_2\dot{x}_2 = x_3$; at node B it yields

$$C_1\dot{x}_1 = \frac{u - x_1}{R} - x_3.$$

Applying Kirchhoff's voltage law to the right-hand-side loop yields $L\dot{x}_3 = x_1 - x_2$ or

$$\dot{x}_3 = \frac{x_1 - x_2}{L}.$$

The output $y$ is given by $y = x_1 - x_2$.

Figure 2.15 Network.

They can be combined in matrix form as

$$\dot{x} = \begin{bmatrix}-1/(RC_1) & 0 & -1/C_1\\ 0 & 0 & 1/C_2\\ 1/L & -1/L & 0\end{bmatrix}x + \begin{bmatrix}1/(RC_1)\\ 0\\ 0\end{bmatrix}u$$

$$y = \begin{bmatrix}1 & -1 & 0\end{bmatrix}x + 0\cdot u$$

This three-dimensional state equation describes the network shown in Fig. 2.15.

The procedure used in the preceding example can be employed to develop state equations for simple RLC networks. The procedure is fairly simple: assign state variables and then use branch characteristics and Kirchhoff's laws to develop state equations. The procedure can be stated more systematically by using graph concepts, as we will introduce next.
The procedure and the subsequent Example 2.12, however, can be skipped without loss of continuity.⁴

First we introduce briefly the concepts of tree, link, and cutset of a network. We consider only connected networks. Every capacitor, inductor, resistor, voltage source, and current source will be considered as a branch. Branches are connected at nodes. Thus a network can be considered to consist of only branches and nodes. A loop is a connection of branches starting from one point and coming back to the same point without passing any point twice. The algebraic sum of all voltages along every loop is zero (Kirchhoff's voltage law). The set of all branches connected to a node is called a cutset. More generally, a cutset of a connected network is any minimal set of branches such that the removal of the set causes the remaining network to be unconnected. For example, removing all branches connected to a node leaves the node unconnected to the remaining network. The algebraic sum of all branch currents in every cutset is zero (Kirchhoff's current law).

A tree of a network is defined as any connection of branches connecting all the nodes but containing no loops. A branch is called a tree branch if it is in the tree, a link if it is not. With respect to a chosen tree, every link has a unique loop, called the fundamental loop, in which the remaining loop branches are all tree branches. Every tree branch has a unique cutset, called the fundamental cutset, in which the remaining cutset branches are all links. In other words, a fundamental loop contains only one link and a fundamental cutset contains only one tree branch.

Procedure for developing state-space equations

1. Consider an RLC network. We first choose a normal tree. The branches of the normal tree are chosen in the order of voltage sources, capacitors, resistors, inductors, and current sources.
2. Assign the capacitor voltages in the normal tree and the inductor currents in the links as state variables.
Capacitor voltages in the links and inductor currents in the normal tree are not assigned.
3. Express the voltage and current of every branch in terms of the state variables and, if necessary, the inputs by applying Kirchhoff's voltage law to fundamental loops and Kirchhoff's current law to fundamental cutsets.
4. Apply Kirchhoff's voltage or current law to the fundamental loop or cutset of every branch that is assigned as a state variable.

4. The reader may skip this procedure and go directly to Example 2.13.

EXAMPLE 2.12 Consider the network shown in Fig. 2.16. The normal tree is chosen as shown with heavy lines; it consists of the voltage source, two capacitors, and the 1-Ω resistor. The capacitor voltages in the normal tree and the inductor current in the link will be assigned as state variables. If the voltage across the 3-F capacitor is assigned as $x_1$, then its current is $3\dot{x}_1$. The voltage across the 1-F capacitor is assigned as $x_2$ and its current is $\dot{x}_2$. The current through the 2-H inductor is assigned as $x_3$ and its voltage is $2\dot{x}_3$. Because the 2-Ω resistor is a link, we use its fundamental loop to find its voltage as $u_1 - x_1$. Thus its current is $(u_1 - x_1)/2$. The 1-Ω resistor is a tree branch. We use its fundamental cutset to find its current as $x_3$. Thus its voltage is $1\cdot x_3 = x_3$. This completes Step 3.

The 3-F capacitor is a tree branch and its fundamental cutset is as shown. The algebraic sum of the cutset currents is 0 or

$$\frac{u_1 - x_1}{2} - 3\dot{x}_1 + u_2 - x_3 = 0$$

which implies

$$\dot{x}_1 = -\tfrac{1}{6}x_1 - \tfrac{1}{3}x_3 + \tfrac{1}{6}u_1 + \tfrac{1}{3}u_2.$$

The 1-F capacitor is a tree branch, and from its fundamental cutset we have $\dot{x}_2 - x_3 = 0$ or

$$\dot{x}_2 = x_3.$$

The 2-H inductor is a link. The voltage along its fundamental loop is $2\dot{x}_3 + x_3 - x_1 + x_2 = 0$ or

$$\dot{x}_3 = 0.5x_1 - 0.5x_2 - 0.5x_3.$$

Figure 2.16 Network with two inputs (normal tree in heavy lines; fundamental cutsets of the tree branches, including the voltage source, indicated).
They can be expressed in matrix form as

    [ẋ1]   [ −1/6    0    −1/3 ] [x1]   [ 1/6  1/3 ]
    [ẋ2] = [  0      0     1   ] [x2] + [  0    0  ] u      (2.30)
    [ẋ3]   [  0.5  −0.5  −0.5  ] [x3]   [  0    0  ]

If we consider the voltage across the 2-H inductor and the current through the 2-Ω resistor as the outputs, then we have

    y1 = 2ẋ3 = x1 − x2 − x3 = [1  −1  −1] x

and

    y2 = 0.5(u1 − x1) = [−0.5  0  0] x + [0.5  0] u

They can be written in matrix form as

        [  1    −1  −1 ]     [ 0    0 ]
    y = [ −0.5   0   0 ] x + [ 0.5  0 ] u      (2.31)

Equations (2.30) and (2.31) are the state-space description of the network.

The transfer matrix of the network can be computed directly from the network or by using the formula in (2.16):

    Ĝ(s) = C(sI − A)⁻¹B + D

We will use MATLAB to compute this equation. We type

    a=[-1/6 0 -1/3;0 0 1;0.5 -0.5 -0.5];
    b=[1/6 1/3;0 0;0 0];
    c=[1 -1 -1;-0.5 0 0];d=[0 0;0.5 0];
    [n1,d1]=ss2tf(a,b,c,d,1)

which yields

    n1 =  0.0000   0.1667  -0.0000  -0.0000
          0.5000   0.2500   0.3333  -0.0000
    d1 =  1.0000   0.6667   0.7500   0.0833

This is the first column of the transfer matrix. We repeat the computation for the second input. Thus the transfer matrix of the network is

    Ĝ(s) = [ 0.1667s² / d(s)                      0.3333s² / d(s)                        ]
           [ (0.5s³ + 0.25s² + 0.3333s) / d(s)    (−0.1667s² − 0.0833s − 0.0833) / d(s)  ]

where d(s) = s³ + 0.6667s² + 0.75s + 0.0833.

EXAMPLE 2.13 Consider the network shown in Fig. 2.17(a), where T is a tunnel diode with the characteristics shown in Fig. 2.17(b). Let x1 be the voltage across the capacitor and x2 be the current through the inductor. Then we have v = x1 and

    x2(t) = C v̇(t) + i(t) = C ẋ1(t) + h(x1(t))
    L ẋ2(t) = E − R x2(t) − x1(t)

They can be arranged as

    ẋ1(t) = −h(x1(t))/C + x2(t)/C
    ẋ2(t) = −x1(t)/L − (R/L) x2(t) + E/L      (2.32)

This set of nonlinear equations describes the network. Now if x1(t) is known to lie only inside the range (a, b) shown in Fig. 2.17(b), then h(x1(t)) can be approximated by h(x1(t)) = x1(t)/R1. In this case, the network can be reduced to the one in Fig. 2.17(c) and can be described by

    d  [x1]   [ −1/(R1C)   1/C  ] [x1]   [  0  ]
    —  [x2] = [ −1/L      −R/L  ] [x2] + [ 1/L ] E
    dt

Figure 2.17 Network with a tunnel diode.

This is an LTI state-space equation. Now if x1(t) is known to lie only inside the range (c, d) shown in Fig. 2.17(b),
we may introduce the variables x̄1(t) = x1(t) − v0 and x̄2(t) = x2(t) − i0 and approximate h(x1(t)) as i0 − x̄1(t)/R2. Substituting these into (2.32) yields

    d  [x̄1]   [ 1/(R2C)    1/C  ] [x̄1]   [  0  ]
    —  [x̄2] = [ −1/L      −R/L  ] [x̄2] + [ 1/L ] Ē
    dt

where Ē = E − v0 − R i0. This equation is obtained by shifting the operating point from (0, 0) to (v0, i0) and by linearization at (v0, i0). Because the two linearized equations are identical if R1 is replaced by −R2 and E by Ē, we can readily obtain the equivalent network shown in Fig. 2.17(d). Note that it is not obvious how to obtain the equivalent network from the original network without first developing the state equation.

2.6 Discrete-Time Systems

This section develops the discrete counterpart of continuous-time systems. Because most concepts in continuous-time systems can be applied directly to discrete-time systems, the discussion will be brief. The input and output of every discrete-time system will be assumed to have the same sampling period T and will be denoted by u[k] := u(kT), y[k] := y(kT), where k is an integer ranging from −∞ to +∞. A discrete-time system is causal if the current output depends only on current and past inputs. The state at time k0, denoted by x[k0], is the information at time instant k0 which, together with u[k] for k ≥ k0, determines uniquely the output y[k] for k ≥ k0. The entries of x are called state variables. If the number of state variables is finite, the discrete-time system is lumped; otherwise, it is distributed. Every continuous-time system involving time delay, such as the ones in Examples 2.1 and 2.3, is a distributed system. In a discrete-time system, if the time delay is an integer multiple of the sampling period T, then the discrete-time system is a lumped system.

A discrete-time system is linear if the additivity and homogeneity properties hold. The response of every linear discrete-time system can be decomposed as

    Response = zero-state response + zero-input response

and the zero-state responses satisfy the superposition property.
So do the zero-input responses.

Input-output description  Let δ[k − m] be the impulse sequence defined as

    δ[k − m] = 1 if k = m
               0 if k ≠ m

where both k and m are integers denoting sampling instants. It is the discrete counterpart of the impulse δ(t − t1). The impulse δ(t − t1) has zero width and infinite height and cannot be generated in practice, whereas the impulse sequence δ[k − m] can easily be generated. Let u[k] be any input sequence. Then it can be expressed as

    u[k] = Σ_m u[m] δ[k − m]

Let g[k, m] be the output at time instant k excited by the impulse sequence applied at time instant m. Then we have

    δ[k − m]            → g[k, m]
    δ[k − m] u[m]       → g[k, m] u[m]              (homogeneity)
    Σ_m δ[k − m] u[m]   → Σ_m g[k, m] u[m]          (additivity)

Thus the output y[k] excited by the input u[k] equals

    y[k] = Σ_m g[k, m] u[m]

This is the discrete counterpart of (2.3) and its derivation is considerably simpler. The sequence g[k, m] is called the impulse response sequence. If a discrete-time system is causal, no output will appear before an input is applied. Thus we have

    Causal ⟺ g[k, m] = 0 for k < m
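The zero-state response formula y[k] = Σ_m g[k, m] u[m] can be coded directly. The sketch below (not from the text) uses a hypothetical causal impulse response sequence g[k, m] = 0.5^(k−m) for k ≥ m, zero otherwise, and illustrates both causality and the superposition property; the names `g` and `response` are introduced here for illustration.

```python
def response(g, u, ks):
    """Zero-state response y[k] = sum over m of g(k, m) * u[m].
    The input u is given as a dict {m: u[m]} of nonzero samples."""
    return {k: sum(g(k, m) * um for m, um in u.items()) for k in ks}

# A hypothetical causal impulse response sequence: g[k, m] = 0 for k < m.
def g(k, m):
    return 0.5 ** (k - m) if k >= m else 0.0

ks = range(5)
u1 = {0: 1.0}             # impulse applied at m = 0
u2 = {1: 2.0}             # scaled impulse applied at m = 1
both = {0: 1.0, 1: 2.0}   # the sum of the two inputs

y1 = response(g, u1, ks)
y2 = response(g, u2, ks)
y12 = response(g, both, ks)
```

Causality shows up as y2[0] = 0 (no output before the input at m = 1), and superposition as y12[k] = y1[k] + y2[k] for every k.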
