
…constraints of Theorem 1 are satisfied. Then the poles of C(z)/R(z) = D(z) must lie within the z-plane region defined by |z - 1| ≤ q.

It is perhaps more reasonable to arrange that the processing delays occur in the return paths, that is, in association with the constant multiplier units n + 1, n + 2, …, 2n. In this case we have

Sims [5] has suggested this implementation for the realization of arbitrary rational transfer functions (neglecting scaling constraints). The transformation from H(s) to D(z) is not as simple in this case, but we may obtain the following conservative result.

Theorem 4

Assume that a processing delay occurs with each of the integrators n + 1, n + 2, …, 2n of Fig. 2 and that the constraints of Theorem 1 are satisfied. Then if 0 < q < 0.1, the poles of D(z) lie within the region defined by

We note that this is an absolute upper bound which cannot be attained; it is still true that |z_i - 1| ≤ q is a good rule of thumb.

CONCLUSIONS

Under rather idealized assumptions, it has been shown that a guide for transfer function realizability on a DDA consists in the simple relations: BW ≤ qΩ, poles s_i of H(s) satisfy |s_i| ≤ qΩ, and poles z_i of D(z) satisfy |z_i - 1| ≤ q (except for poles at the origin due to pure delays). In view of the severe restrictions on D(z), which are independent of iteration rate, and of the high iteration rates required to yield an appreciable bandwidth, it seems more natural to view the macroscopic behavior of a DDA from the continuous-model standpoint. The restrictions on D(z) = C(z)/R(z) arise primarily because of the basic rate limitations of a DDA, imposed by the fact that Δc is bounded. If c(k) is the output sequence of a DDA program which is formed by summing the increments Δc(k), it is easy to see that the program resolution cannot be much better than the ratio of the maximum increment magnitude to the maximum output magnitude. Thus another good criterion for judging whether or not a DDA is suitable for a realization is to check the response of D(z) to typical inputs and compare the output magnitude with the magnitude of output changes. It should be noted that this entire analysis is based on the assumption that integration is the basic DDA operation and that the machine employs a standard two-bit increment transfer code. If multiple-bit transfer codes are employed, the relationships we have given are relaxed roughly by an appropriate power of two. Special implementations [6] of multiple-increment DDAs which avoid integration are not included in the analysis.
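The resolution criterion just described lends itself to a quick numerical check. The following sketch is an editorial illustration, not from the paper: the transfer-function coefficients and the value of q are hypothetical, since the paper derives q from the DDA iteration rate (Theorem 1, not reproduced here).

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical D(z) = C(z)/R(z); coefficients are illustrative only.
num = [0.001, 0.001]          # C(z) coefficients, descending powers of z
den = [1.0, -1.95, 0.9506]    # R(z) coefficients, descending powers of z
q = 0.05                      # assumed value; set by the iteration rate in the paper

# Rule-of-thumb pole check from the conclusions: |z_i - 1| <= q.
poles = np.roots(den)
print(all(abs(z - 1.0) <= q for z in poles))   # True here (poles 0.98 and 0.97)

# Resolution check: compare the magnitude of output changes with the
# output magnitude for a typical (step) input.
c = lfilter(num, den, np.ones(2000))
print(np.max(np.abs(np.diff(c))) / np.max(np.abs(c)))  # resolution bound
```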

A more detailed treatment of the process by which one obtains (10) and (11) is given in [1].

REFERENCES
[1] R. J. Leake and H. L. Althaus, "Mathematical models of a DDA," Electrical Engineering Dept., University of Notre Dame, Notre Dame, Ind., Tech. Rept. EE 674, February 1967.
[2] R. J. Leake and H. L. Althaus, "DDA scaling graph," IEEE Trans. Computers (Short Notes), vol. C-17, pp. 81-84, January 1968.
[3] G. A. Korn and T. M. Korn, Electronic Analog and Hybrid Computers. New York: McGraw-Hill, 1964.
[4] J. V. Wait, "A hybrid analog-digital differential analyzer system," Proc. 1963 Fall Joint Computer Conf., AFIPS Proc., vol. 24. Washington, D.C.: Spartan, 1963, pp. 277-293.
[5] J. R. Sims, "New techniques for programming the DDA," M.E.E. thesis, Syracuse University, Syracuse, N.Y., June 1960.
[6] A. J. Monroe, Digital Processes for Sampled-Data Systems. New York: Wiley, 1964, pp. 464-471.

Inverse Iteration Method for Finding Eigenvectors


JAMES E. VAN NESS, SENIOR MEMBER, IEEE

Abstract-An iteration method which is not sensitive to small errors in the eigenvalues is developed for finding eigenvectors. The method finds the eigenvectors of both the normal and the transposed matrix. These eigenvectors can then be used in the Rayleigh quotient to improve the value for the eigenvalue.
A major computational problem in analyzing systems represented in matrix form is the finding of the eigenvectors of the system. In this paper a modified form of Wilkinson's inverse iteration method [1] will be described. A FORTRAN program [2] has been written using this method, and it has shown the method to be very good. The work described in this paper resulted from the need for the eigenvectors in the sensitivity problem [3], [4] and in the computation of the response of a large system to a periodic load [5]. The eigenvector program is part of a group of programs that have been developed for analyzing large systems [6].


PROBLEM
The equations that need to be solved to find the eigenvectors are


(A - Iλ_i)X_i = 0    (1)

(A^T - Iλ_i)V_i = 0    (2)

where A is a square real matrix, but not necessarily symmetric. The λ_i are the eigenvalues of A, the X_i the eigenvectors of A, and the V_i the eigenvectors of A^T, the transposed A matrix. The methods described in this paper will find both the X_i and the V_i for a given λ_i in the same operation. If the λ_i were exact, (1) and (2) would be homogeneous with a coefficient matrix whose determinant would be zero. One computational problem is that the λ_i often are not exact, even when using a good algorithm for the eigenvalues such as the QR transform [2]. In this case, the accuracy in finding the eigenvectors by some methods will depend on the accuracy of the eigenvalues. With the inverse iteration method, the result is independent of the accuracy of the eigenvalue as long as the eigenvalue is close enough to the correct value so that the method will converge to the desired eigenvector. Two other computational problems arise from complex eigenvalues and decoupled matrices. These will be discussed later.
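For reference, a short numpy check (an editorial addition; the matrix is hypothetical) confirms that the eigenvectors of A and of A^T satisfy (1) and (2). Matching eigenvalues between the two computations is necessary because eig returns them in no particular order.

```python
import numpy as np

# A small nonsymmetric test matrix (hypothetical; not from the paper).
A = np.array([[4.0, 1.0, 2.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])

lam, X = np.linalg.eig(A)      # right eigenvectors X_i of A, eq. (1)
mu, V = np.linalg.eig(A.T)     # eigenvectors V_i of the transposed matrix, eq. (2)

i = 0
j = np.argmin(np.abs(mu - lam[i]))   # match the same eigenvalue in both lists
print(np.allclose((A - lam[i] * np.eye(3)) @ X[:, i], 0.0))    # eq. (1) satisfied
print(np.allclose((A.T - mu[j] * np.eye(3)) @ V[:, j], 0.0))   # eq. (2) satisfied
```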
Manuscript received February 8, 1968. This research was supported in part under a contract with the Bonneville Power Administration. The author is with Northwestern University, Evanston, Ill. 60201.



IXVERSE ITERATION METHOD


The basic inverse iteration method can be described by the two equations

(A - Ip)W_{s+1} = Z_s    (3)

Z_{s+1} = W_{s+1} / max(w_{s+1})    (4)

where p is the approximation of λ_i, max(w_{s+1}) is the element of W_{s+1} with the largest magnitude, and Z_0, the initial value of Z, is an arbitrary vector, usually chosen to be all ones. The iteration process is continued until the change in Z at any step is less than some prescribed value. To examine this iteration process, expand the vector Z_0 in terms of the eigenvectors of A,

Z_0 = Σ_j c_j X_j    (5)

If the normalization process of (4) is neglected and Z_{s+1} is set equal to W_{s+1}, after s iterations Z_s will be

Z_s = Σ_j c_j (λ_j - p)^{-s} X_j    (6)

If p is close to the desired eigenvalue λ_i, the ith eigenvector will predominate in Z_s after a very few iterations, and Z_s can be taken as the desired eigenvector. Since any eigenvector can be multiplied by an arbitrary constant and still be an eigenvector, and since the normalization described by (4) would multiply each term of (6) by the same value, the normalization does not affect the convergence. The basic criterion for convergence is that the factor (λ_j - p) for the desired eigenvalue and eigenvector be smaller than this factor for any of the other eigenvalues. The rate of convergence will depend on the ratio of this factor for the desired eigenvalue to the next larger (λ_j - p) factor in (6). Wilkinson [1] gives a detailed discussion of the convergence and error analysis of this method. For most problems arising from physical systems, a method such as the QR transform will give estimates of the eigenvalues to be used for p that will differ from the actual eigenvalues by very slight amounts. These problems will converge very rapidly. When p happens to be exactly equal to λ_i, the iteration method is not used, but a direct solution of (1) and (2) is used. The major modifications and additions that have been made by the program in this paper to the inverse iteration method described by Wilkinson concern the handling of cases where this difference is very small, especially in decoupled matrices. This is discussed in more detail later. Most other common methods of finding eigenvectors depend either on a solution of the homogeneous equations (1) and (2) with inaccurate values for the λ_i, or on some form of direct iteration. The first class of methods is limited in its accuracy by the accuracy of the eigenvalues, and the second class is limited in its convergence, especially for eigenvectors corresponding to other than the largest or smallest eigenvalues. The inverse iteration method seems superior to these types of methods both in accuracy and speed of convergence.
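As a concrete rendering of (3) and (4), the following minimal numpy sketch is an editorial addition (the original program was written in FORTRAN; the test matrix, tolerance, and function name here are hypothetical). It iterates until the change in Z is below a prescribed value, as described above.

```python
import numpy as np

def inverse_iteration(A, p, tol=1e-10, max_iter=50):
    """Basic inverse iteration, eqs. (3)-(4): solve (A - pI)W = Z, then
    normalize W by its element of largest magnitude to get the next Z."""
    n = A.shape[0]
    z = np.ones(n)                                   # Z_0: all ones
    for _ in range(max_iter):
        w = np.linalg.solve(A - p * np.eye(n), z)    # eq. (3)
        z_new = w / w[np.argmax(np.abs(w))]          # eq. (4)
        if np.linalg.norm(z_new - z) < tol:          # change in Z small enough?
            return z_new
        z = z_new
    return z

# A hypothetical test: p is deliberately a little off the eigenvalue 3.
A = np.array([[4.0, 1.0, 2.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
x = inverse_iteration(A, 3.0 + 1e-4)
print(np.linalg.norm(A @ x - 3.0 * x))   # small residual despite the inexact p
```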
SOLUTION WHEN λ IS REAL

The first step in the solution when λ is real is to factor the matrix (A - Ip) into two triangular matrices. A standard Gaussian elimination method with row interchange is used and will be discussed later. If the effect of the row interchanges is represented by a matrix P, this factorization can be represented by

P(A - Ip) = LU    (7)

where L is a lower triangular matrix chosen to have all of its diagonal elements unity, and U is an upper triangular matrix whose last diagonal element, denoted here by ε, is usually very small or possibly zero. In the actual program the matrix P is never formed, but the information on row interchanges is kept in a table. Equation (3) can then be rewritten in the form

LB_{s+1} = PZ_s    (8)

UW_{s+1} = B_{s+1}    (9)

If ε is not equal to zero, the following steps should be followed to find the eigenvector.

1) Initially set the vector Z to have all elements unity.
2) Reorder the vector Z in the same manner that the (A - Ip) matrix was reordered during its factoring. This step may be skipped with the initial Z.
3) Solve (8) by forward substitution for the vector B.
4) Solve (9) by back substitution for the vector W.
5) Normalize W as in (4) to give a new Z.
6) Compare the new Z with the previous one. If the change has been sufficiently small, this is the desired eigenvector. If it is not, repeat steps 2) through 6) as often as necessary.

If ε should be equal to zero, no iteration is necessary. Set all elements of B to zero, assume the last element of W is unity, and complete the back substitution of (9) to find the eigenvector. Note that as ε becomes small, indicating that p is close to λ, the value found for the last element of W will become large. As the elements of W become large, the values of B have less effect on the answers obtained. The limiting case is when ε is equal to zero.

For the transposed case, (7) can be rewritten as

(A^T - Ip)P^T = U^T L^T    (10)

A set of iteration equations for the transposed eigenvector then takes the form

U^T D_{s+1} = C_s    (11)

L^T [(P^T)^{-1} V_{s+1}] = D_{s+1}    (12)

where the U and L matrices are the transposes of the matrices in (8) and (9). The following steps should be followed to find the eigenvector of the transposed matrix when ε is not equal to zero.

1) Initially set the vector C to have all elements unity.
2) Solve (11) by forward substitution for the vector D.
3) Solve (12) by back substitution for the vector (P^T)^{-1}V.
4) Reorder the vector of step 3) in the inverse of the row changes made in factoring (A - Ip). Note that this is the inverse of the reordering carried out in step 2) for the normal eigenvector.
5) Normalize V in the manner of (4) to give a new C.
6) Compare the new C with the previous one. If the change has been sufficiently small, this is the desired eigenvector. If it is not, repeat steps 2) through 6) as often as necessary.

As with the normal eigenvector, ε equal to zero can be taken as the limiting case as ε becomes small. If ε is zero, assume that C is zero, and


all of the elements of D will be zero except the last. This element may be assigned any value, usually unity. Steps 3) and 4) can then be applied to find the eigenvector.
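The factor-once structure of (7)-(12) maps directly onto library LU routines. The sketch below is editorial: SciPy's lu_factor and lu_solve stand in for the paper's interchange table and hand-coded forward and back substitutions, the function name is introduced here, and the ε = 0 special cases are omitted. A single factorization serves both the normal and the transposed eigenvector; the trans=1 solve reuses the same factors for (10)-(12).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def real_eigenvectors(A, p, tol=1e-10, max_iter=50):
    """Inverse iteration for a real eigenvalue estimate p. P(A - pI) = LU is
    factored once, eq. (7); lu_solve then performs steps 2)-4), and trans=1
    reuses the same factors for the transposed problem, eqs. (10)-(12)."""
    n = A.shape[0]
    lu, piv = lu_factor(A - p * np.eye(n))   # row-interchange info kept in piv

    z = np.ones(n)                           # step 1), normal eigenvector
    for _ in range(max_iter):
        w = lu_solve((lu, piv), z)           # steps 2)-4): reorder, (8), (9)
        z_new = w / w[np.argmax(np.abs(w))]  # step 5): normalize as in (4)
        if np.linalg.norm(z_new - z) < tol:  # step 6)
            break
        z = z_new

    c = np.ones(n)                           # step 1), transposed eigenvector
    for _ in range(max_iter):
        v = lu_solve((lu, piv), c, trans=1)  # solves (A - pI)^T v = c
        c_new = v / v[np.argmax(np.abs(v))]
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return z_new, c_new
```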

COMPLEX EIGENVALUES

In the case of complex eigenvalues, all of the methods described could be used directly. The major disadvantages in this would be the doubling of the storage requirements for complex arrays and the increasing of the computation time because of complex arithmetic operations. Wilkinson [1] describes four possible methods of solving for complex eigenvectors, and his fourth method is the one used here. In (3) let p be replaced by α + jβ as the approximation to the complex λ, let Z be replaced by Q + jR, and W by M + jN. Solving for the real and imaginary parts of the resulting equation gives

[(A - Iα)² + Iβ²]N_{s+1} = βQ_s + (A - Iα)R_s    (13)

-βM_{s+1} + (A - Iα)N_{s+1} = R_s    (14)

As in the real case, the matrix on the left-hand side of (13) needs to be factored only once for the iteration process. Let

P[A² - 2αA + (α² + β²)I] = LU    (15)

where P again represents the row interchanges made during the factoring and L and U have the form shown in (8) and (9). By expressing the matrix in this form, it is seen that A needs to be squared only once and can then be used for finding any of its complex eigenvectors. Using this factoring and the previous equations, the equations for the iteration process become

LB_{s+1} = P[βQ_s + (A - Iα)R_s]    (16)

UN_{s+1} = B_{s+1}    (17)

M_{s+1} = (1/β)[(A - Iα)N_{s+1} - R_s]    (18)

(Q + jR)_{s+1} = (M + jN)_{s+1} / max(m_{s+1} + jn_{s+1})    (19)

where, as in (4), max(·) selects the element of largest magnitude. The steps for finding the eigenvector are as follows.

1) Assume all of the elements of Q and R are initially unity.
2) Solve for the vector in the brackets on the right-hand side of (16) and then reorder it in the same manner that the rows were reordered in the factoring of (15).
3) Solve (16) by forward substitution for the vector B.
4) Solve (17) by back substitution for the vector N_{s+1}.
5) Substitute into (18) to find the vector M_{s+1}.
6) Normalize (M + jN) as in (19) to give a new (Q + jR).
7) Compare the new (Q + jR) with the previous value. If the change has been sufficiently small, this is the desired eigenvector. If it is not, repeat steps 3) through 7) as often as necessary.

Note that this process, as the previous ones described, could be started at different places than that specified. Another choice would be to initially set all the elements of B equal to unity and then to start at step 4). Also, as before, the case where a diagonal element of U is zero can be treated as a limiting case. In the complex case, more than one diagonal element of U may be zero. When any diagonal element is zero, (17) can be solved for N by methods to be described later. Then steps 5) and 6) with R equal to zero in (18) can be used to find the eigenvector.

For the eigenvector of the transposed matrix, the following four equations describing the iteration process can be formed using a procedure similar to that described previously:

U^T D_{s+1} = βG_s + (A - Iα)^T H_s    (20)

L^T [(P^T)^{-1} T_{s+1}] = D_{s+1}    (21)

S_{s+1} = (1/β)[(A - Iα)^T T_{s+1} - H_s]    (22)

(G + jH)_{s+1} = (S + jT)_{s+1} / max(s_{s+1} + jt_{s+1})    (23)

The steps for applying these equations are as follows.

1) Assume all of the elements of G and H are initially unity.
2) Evaluate the vector on the right-hand side of (20), and then back substitute to find the vector D.
3) Solve for the vector in the brackets on the left-hand side of (21).
4) Reorder the vector found in step 3) by the inverse of the interchange of rows used in the process of (15) to give the vector T_{s+1}.
5) Substitute into (22) to find the vector S_{s+1}.
6) Normalize (S + jT) as in (23) to give a new (G + jH).
7) Compare the new (G + jH) with the previous value. If the change has been sufficiently small, this is the desired eigenvector. If not, repeat steps 2) through 7) as often as necessary.

For the case where there are one or more zeros on the diagonal, G and H can be set to zero, and (20) can be solved by the methods to be discussed later. Then steps 3) through 6) can be used to find the eigenvector.
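A compact rendering of (13)-(19) in real arithmetic is sketched below. This is an editorial addition: the function and variable names are introduced here, and the ε handling and alternative starting points described above are omitted. The real matrix of (15) is factored once and reused at every step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def complex_inverse_iteration(A, alpha, beta, tol=1e-10, max_iter=50):
    """Real-arithmetic inverse iteration for the complex estimate
    alpha + j*beta, following eqs. (13)-(19).  The real matrix of (15) is
    factored once and reused; lu_solve plays the role of (16) and (17)."""
    n = A.shape[0]
    Aa = A - alpha * np.eye(n)                              # (A - I*alpha)
    lu, piv = lu_factor(A @ A - 2.0 * alpha * A
                        + (alpha**2 + beta**2) * np.eye(n)) # eq. (15)

    q = np.ones(n)                       # Q_0, step 1)
    r = np.ones(n)                       # R_0, step 1)
    zc = q + 1j * r
    for _ in range(max_iter):
        nn = lu_solve((lu, piv), beta * q + Aa @ r)   # eqs. (16)-(17): gives N
        m = (Aa @ nn - r) / beta                      # eq. (18): gives M
        w = m + 1j * nn
        w = w / w[np.argmax(np.abs(w))]               # eq. (19): normalize
        if np.linalg.norm(w - zc) < tol:
            return w
        zc = w
        q, r = w.real, w.imag
    return zc
```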

FACTORING THE MATRIX

The matrix is factored using a standard Gaussian elimination method with row interchange to find the largest pivot element at each step. If the value of p were exactly equal to λ_i, the matrix would be singular. In the real case, this usually would result in the last diagonal element in the upper triangular factor being zero. While it seldom is actually zero, the value of this element will give some indication of how close the matrix was to being singular. In the complex case, the last two elements on the diagonal of U may be zero when α + jβ equals λ_i.

In some cases, the small or zero elements on the diagonal will occur at other than the lowest positions. This usually occurs in some form of decoupled system. Consider a matrix of the following form (X denotes a nonzero element):

[X X X X X X]
[X X X X X X]
[X X X X X X]
[0 0 0 X X X]
[0 0 0 X X X]
[0 0 0 X X X]

If an eigenvector corresponding to a real eigenvalue of the upper left 3 × 3 submatrix is wanted, the method outlined previously with row interchange can be used. However, when factoring is applied to the third row and column, the diagonal and all of the remaining elements in that column may be zero. Rather than interchange columns, these elements are left zero, and the factorization proceeds to the next step. In this case, the resulting U matrix would have a zero in its third diagonal position rather than in its last position, and all of the third column of the L matrix would be zero except for the diagonal, which would be unity.

The subroutines written to do the forward and back substitution operations always test the diagonals of the U matrix to see if they are zero. If one of them is zero, the corresponding element in the vector being found is computed as if the diagonal element had been some small number rather than zero. If the result of this computation is greater in magnitude than one, it is used as the


desired element in the solution vector. If it is less than one, the desired element is set equal to one. This procedure has provided a satisfactory method of handling all of the cases of zeros on the diagonal encountered so far. In the case of a multiple eigenvalue, more than one or two small or zero elements will occur on the diagonal. The methods described will find an eigenvector for this eigenvalue, but other techniques must be used to find the other eigenvectors or Jordan chain vectors.
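The substitution rule just described can be stated compactly. The sketch below is editorial; the value of `small` is a hypothetical stand-in, since the paper does not give the number its subroutines use.

```python
import numpy as np

def guarded_back_substitution(U, b, small=1e-12):
    """Back substitution Ux = b tolerating zeros on the diagonal of U.
    A zero pivot is treated as if it were `small` (hypothetical choice);
    the resulting element is kept if its magnitude exceeds one,
    otherwise it is set to one, as described above."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        rhs = b[i] - U[i, i + 1:] @ x[i + 1:]
        if U[i, i] != 0.0:
            x[i] = rhs / U[i, i]
        else:
            xi = rhs / small          # pretend the pivot was a small number
            x[i] = xi if abs(xi) > 1.0 else 1.0
    return x
```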

RAYLEIGH QUOTIENT

The iterative method that has been described will converge to very accurate eigenvectors, even if there is some error in the eigenvalue. With the Rayleigh quotient [1], the results of this iterative process can be used to improve the eigenvalue. The Rayleigh quotient can be described by the equation

p_{j+1} = p_j + [V^T (A - Ip_j) X] / (V^T X)    (25)

where p_j is the current estimate of the eigenvalue, X and V are the two corresponding eigenvectors, and p_{j+1} is the new estimate of the eigenvalue. The equation can be used with either real or complex eigenvalues. The product (A - Ip_j)X should be accumulated in double precision for greatest accuracy, especially if the computer used has a short word length, as considerable cancellation will occur in it. Thus a good method to find both the eigenvalues and eigenvectors of a given matrix would be as follows.

1) Find the eigenvalues of the matrix by the QR transform.
2) Find the X and V eigenvectors of the matrix by the inverse iteration method.
3) Use the Rayleigh quotient to check and possibly improve the value of the eigenvalue found in step 1).
4) If the change in the eigenvalue at step 3) seems too great, repeat steps 2) and 3).
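A hedged sketch of the update and of steps 1)-4) follows. This is an editorial addition: eq. (25) itself is reconstructed above from the surrounding definitions, the function name is introduced here, and only the real case is written out (a complex eigenvalue would use the complex X, V, and p).

```python
import numpy as np

def rayleigh_refine(A, p, x, v):
    """One Rayleigh-quotient update of an eigenvalue estimate, eq. (25):
    p_new = p + v^T (A - pI) x / (v^T x).  The product (A - pI)x is the
    quantity the paper recommends accumulating in double precision."""
    residual = (A - p * np.eye(A.shape[0])) @ x
    return p + (v @ residual) / (v @ x)

# Outline of the overall procedure, steps 1)-4) above:
#   p = eigenvalue estimate from the QR transform          step 1)
#   x, v = inverse iteration (earlier sketches)            step 2)
#   p = rayleigh_refine(A, p, x, v)                        step 3)
#   repeat steps 2) and 3) if the change in p is large     step 4)
```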

CONCLUSIONS

The methods described have proved very satisfactory in practice. The matrices which arise from physical systems are usually very well behaved and converge quickly to the desired results. These methods have been used regularly on matrices up to the 130th order. However, one small artificially generated matrix gave very poor estimates of λ from the QR transform. The inverse iteration method converged slowly and was terminated at 10 iterations. The results at that step were used in the Rayleigh quotient to improve the eigenvalue, and then inverse iteration was tried again. After repeating this cycle four times, very accurate values were obtained for both the eigenvalues and the eigenvectors.

ACKNOWLEDGMENT

The author wishes to acknowledge the very helpful suggestions that have been received from Mrs. Virginia Klema of Northwestern University, Evanston, Ill., and T. M. Whitelegg of the South of Scotland Electricity Board.

REFERENCES
[1] J. H. Wilkinson, The Algebraic Eigenvalue Problem. London: Oxford University Press, 1965, pp. 619-637.
[2] J. E. Van Ness and L. K. Weiner, The EIGSYS Programs for the Eigenvalues and Eigenvectors of Non-symmetric Matrices. These programs include the QR transform, the inverse iteration method, and the Rayleigh quotient. FORTRAN programs are available from the Vogelback Computing Center, Northwestern University, Evanston, Ill.
[3] J. E. Van Ness, J. M. Boyle, and F. P. Imad, "Sensitivities of large multiple-loop control systems," IEEE Trans. Automatic Control, vol. AC-10, pp. 308-315, July 1965.
[4] F. P. Imad and J. E. Van Ness, "Finding the stability and sensitivity of large sampled systems," IEEE Trans. Automatic Control (Short Papers), vol. AC-12, pp. 442-445, August 1967.
[5] J. E. Van Ness, "Response of large power systems to cyclic load variations," IEEE Trans. Power Apparatus and Systems, vol. PAS-85, pp. 723-727, July 1966.
[6] J. E. Van Ness and W. F. Goddard, "Formation of the coefficient matrix of a large dynamic system," IEEE Trans. Power Apparatus and Systems, vol. PAS-87, pp. 80-83, January 1968.

Irreducible Jordan Form Realization of a Rational Matrix

S. P. PANDA AND C. T. CHEN, MEMBER, IEEE

Abstract-This paper presents a new method for realizing a rational transfer function matrix into an irreducible Jordan canonical form state equation. The method consists of two steps: first, to form a controllable state equation, and then secondly, to obtain a controllable as well as observable realization by nonsingular transformations. If every denominator of the rational matrix is given in the factored form, then the proposed method can be carried out quite easily.

Manuscript received April 16, 1968. The authors are with the Department of Electrical Sciences, State University of New York, Stony Brook, N.Y. 11790.

INTRODUCTION

The realization of a rational transfer function matrix into a state equation is one of the basic problems in linear system theory, and it has been discussed by several authors [1]-[6]. In [1], [2], a method for realizing a rational matrix with simple poles into an irreducible Jordan canonical form state equation is given. A method for realizing a rational matrix with multiple poles is proposed by Kalman [4] by using the Smith-McMillan canonical form. However, the Smith-McMillan canonical form of a rational matrix is not easy to obtain. Chidambara [5] presents a different realization method which, in general, gives a reducible state equation (the state equation is either uncontrollable or unobservable). The most satisfactory realization method, which can be carried out easily by the digital computer, is given by Ho and Kalman [6]. This paper proposes a new method for realizing a rational transfer function matrix into an irreducible Jordan canonical form state equation. This method consists of two steps: first, to form a controllable state equation, and then secondly, to obtain a controllable and observable state equation by using nonsingular transformations. If every denominator of the rational matrix is given in the factored form, this method can be carried out quite easily by hand calculations, and it provides an alternative for the methods given in [4], [6].

CONTROLLABLE STATE EQUATION REALIZATION

Given a q × p proper rational transfer function matrix G(s). Without loss of generality, it is assumed that G(s) contains only one pole, say s = λ, with order n. Let G(s) be expanded into its partial fractions

G(s) = Σ_{i=1}^{n} M_i / (s - λ)^i

where M_i, i = 1, 2, …, n, are q × p constant matrices. Let ρ(M) = m, the rank of M, and let b_{11}, b_{12}, …, b_{1m} be m linearly independent rows generating all the rows of M. For the different M matrices let us find the following:

ρ(M_k) = m_k and b_{k1}, b_{k2}, …, b_{km_k}