
Time-Varying Systems and Computations Lecture 7

Klaus Diepold, December 9, 2012

QL Factorization in State-Space
QL-Factorization
The QL-factorization of a matrix T is a powerful tool for solving linear systems of equations or linear least squares problems of the form T u = y. The QL-factorization of a given matrix T amounts to computing

T = QL,   Q^T Q = 1_n,   L lower triangular.

Note that L is a lower-triangular matrix corresponding to a causal system (in contrast with the conventional QR decomposition, where R is an upper triangular matrix). We assume that T is a tall matrix (i.e. T ∈ R^{m×n}, m ≥ n) and that it has full column rank, i.e. det(T^T T) ≠ 0. Having determined the QL-factorization we can easily determine the inverse or the pseudo-inverse of T as

T^† = L^{-1} Q^T,   L square, det L ≠ 0.

Using the time-varying state-space methodology we are interested in calculating the QL-factorization of a given matrix T in terms of its state-space realizations directly. The procedure for computing the QL factorization then goes as follows:

1. Determine a state-space realization for T.
2. Compute the QL factorization in state-space.
3. Invert the factors in state-space.
4. Arrive at T^† = L^{-1} Q^T.

Note that for solving least squares problems it is not necessary to actually compute the product L^{-1} Q^T; it is more economical to keep the factorized form and apply the matrices Q^T and L^{-1} successively, first Q^T to the vector y and then L^{-1} to Q^T y.
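Although the lecture develops the state-space route, a dense-matrix reference implementation is handy for checking results. A QL factorization can be obtained from any QR routine by reversing rows and columns; the helper below is our own sketch, not a library API:

```python
import numpy as np

def ql(T):
    """QL factorization T = Q @ L via QR of the row/column-reversed matrix."""
    Qb, Rb = np.linalg.qr(T[::-1, ::-1])
    # Undoing the reversal turns the upper-triangular factor into a
    # lower-triangular one while Q keeps orthonormal columns.
    return Qb[::-1, ::-1], Rb[::-1, ::-1]

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 3))          # tall, generically full column rank
Q, L = ql(T)

y = rng.standard_normal(6)
u = np.linalg.solve(L, Q.T @ y)          # least squares: u = L^{-1} Q^T y
print(np.allclose(u, np.linalg.lstsq(T, y, rcond=None)[0]))
```

The solve step applies Q^T and then L^{-1} successively, keeping the factorized form rather than forming the pseudo-inverse explicitly.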

State-Space Realization of Transfer Matrix T


Step 1 in the previously given list of operational procedures asks us to determine a time-varying state-space realization for the given matrix T. To this end we could engage the realization procedure based on the Kronecker Theorem, i.e. by factoring the Hankel matrices into the product of observability and controllability matrices and then reading off the elements Ak, Bk, Ck and Dk of the state-space realization. However,


this computational process may require us to compute the corresponding factorizations and the necessary inversions, which turn out to be computationally expensive. Alternatively, we can determine realizations directly by reading off the matrix entries from T and creating a simple computational state-space structure for T as a starting point. Let's have a look at a simple example.

Example 1 We consider the simple example of a 4 × 2 matrix T as shown in Figure 1. We can easily check that the computational structure shown in the figure realizes the computation T u = y. The state-space realization matrices for this structure can also be easily read off as

[A_1 B_1; C_1 D_1] = [1; t_1^T],   [A_2 B_2; C_2 D_2] = [1; t_2^T],   [A_3 B_3; C_3 D_3] = [1; t_3^T],   [A_4 B_4; C_4 D_4] = [t_4^T],

where A_1, C_1 as well as B_k, D_k for k = 2, 3, 4 are zero-dimensional: the 2-vector input u enters at the first stage (B_1 = 1, D_1 = t_1^T), is carried along unchanged through the states (A_2 = A_3 = 1), and stage k outputs y_k = t_k^T u via C_k = t_k^T.


Figure 1: State-Space Realization for the matrix T (taken from [5])
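As a quick numerical check (a sketch with the stage dimensions assumed in Example 1), we can run the time-varying state recursion for this structure and confirm that it computes y = T u:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 2))      # rows t_1^T, ..., t_4^T

# Stage realizations read off from the structure: the 2-vector u enters
# at stage 1 and is carried along through the states (sizes assumed).
e = lambda r, c: np.zeros((r, c))    # zero-dimensional / empty block helper
stages = [
    (e(2, 0), np.eye(2), e(1, 0), T[0:1]),   # k=1: B1 = 1, D1 = t1^T
    (np.eye(2), e(2, 0), T[1:2], e(1, 0)),   # k=2: A2 = 1, C2 = t2^T
    (np.eye(2), e(2, 0), T[2:3], e(1, 0)),   # k=3: A3 = 1, C3 = t3^T
    (e(0, 2), e(0, 0), T[3:4], e(1, 0)),     # k=4: C4 = t4^T
]

def run(stages, inputs, x0):
    """x_{k+1} = A_k x_k + B_k u_k,  y_k = C_k x_k + D_k u_k."""
    x, y = x0, []
    for (A, B, C, D), u in zip(stages, inputs):
        y.append(C @ x + D @ u)
        x = A @ x + B @ u
    return np.concatenate(y)

u = rng.standard_normal(2)
inputs = [u, np.zeros(0), np.zeros(0), np.zeros(0)]
y = run(stages, inputs, np.zeros(0))
print(np.allclose(y, T @ u))         # the structure reproduces y = T u
```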

Example 2 Yet another, slightly more complicated, example is shown in Figure 2. For this example we partition the input vector as u = [u_1; u_2], to give us

y = T [u_1; u_2],


where we have the corresponding partitioning of the matrix T given as T = [T^{(1)}  T^{(2)}].

For each of these two matrices we assume to have state-space realizations

T^{(1)} ∼ [ A_k^{(1)}  B_k^{(1)} ; C_k^{(1)}  D_k^{(1)} ],   T^{(2)} ∼ [ A_k^{(2)}  B_k^{(2)} ; C_k^{(2)}  D_k^{(2)} ],

which can be combined into one realization matrix as

T_k = [ A_k^{(1)}  0  B_k^{(1)}  0 ;  0  A_k^{(2)}  0  B_k^{(2)} ;  C_k^{(1)}  C_k^{(2)}  D_k^{(1)}  D_k^{(2)} ].

Using this generic partitioning of the realization matrix along with the elementary realizations determined in the previous section we can easily read off a time-varying realization consisting of the stage matrices T_1, …, T_6: each T_k stacks the blocks A_k^{(1)}, A_k^{(2)}, B_k^{(1)}, B_k^{(2)} block-diagonally and places the rows C_k^{(1)}, C_k^{(2)}, D_k^{(1)}, D_k^{(2)} side by side, where the individual entries (identities, zero blocks, and the rows t_k^T) are read off directly from the structure in Figure 2.
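The stacking rule can be sanity-checked numerically. The sketch below uses generic random stage matrices with assumed constant dimensions (not the t_k^T entries of the example), combines two stage sequences into the block pattern above, and verifies that the combined system computes y = T^{(1)} u_1 + T^{(2)} u_2:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 4, 2   # number of stages and state dimension (assumed sizes)

def rand_stages():
    # constant sizes for simplicity: d states, scalar input/output per stage
    return [(rng.standard_normal((d, d)), rng.standard_normal((d, 1)),
             rng.standard_normal((1, d)), rng.standard_normal((1, 1)))
            for _ in range(N)]

def run(stages, u, d0):
    x, y = np.zeros(d0), []
    for (A, B, C, D), uk in zip(stages, u):
        y.append(C @ x + D @ uk)
        x = A @ x + B @ uk
    return np.concatenate(y)

s1, s2 = rand_stages(), rand_stages()

# Combined stages: block-diagonal in (A, B), rows side by side in (C, D).
combined = [(np.block([[A1, np.zeros((d, d))], [np.zeros((d, d)), A2]]),
             np.block([[B1, np.zeros((d, 1))], [np.zeros((d, 1)), B2]]),
             np.hstack([C1, C2]), np.hstack([D1, D2]))
            for (A1, B1, C1, D1), (A2, B2, C2, D2) in zip(s1, s2)]

u1 = [rng.standard_normal(1) for _ in range(N)]
u2 = [rng.standard_normal(1) for _ in range(N)]
u12 = [np.concatenate([a, b]) for a, b in zip(u1, u2)]

y = run(combined, u12, 2 * d)
print(np.allclose(y, run(s1, u1, d) + run(s2, u2, d)))
```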

Determine QL-Factorization
We start out with a time-varying state-space realization for T, such that we can represent the coefficient matrix as

T = D + C (1_n − ZA)^{-1} ZB,   T ∼ [ A  B ; C  D ].

Since T is the product of the two matrices Q and L, we aim at state-space realizations for them, that is, we represent both matrices in terms of state-space models

Q = D_Q + C_Q (1_n − ZA_Q)^{-1} ZB_Q,   Q ∼ [ A_Q  B_Q ; C_Q  D_Q ],

L = D_L + C_L (1_n − ZA_L)^{-1} ZB_L,   L ∼ [ A_L  B_L ; C_L  D_L ].

Figure 3 shows the computational structure we aim for, that is, this structure represents the QL-factorization of the matrix T. To this end we need to determine the realizations for Q and for L. This represents the completion of step 2 in our operational procedure, where we represent the matrix T as the product

T = Q̂_N ⋯ Q̂_2 Q̂_1 L̂_N ⋯ L̂_2 L̂_1,



Figure 2: Direct State-Space Realization for the matrix T (taken from [5])



Figure 3: State-Space Realization for the QL-Factorization of T (taken from [5])


where we use the notation

Q̂_k = diag( 1, …, 1, [ A_k^Q  B_k^Q ; C_k^Q  D_k^Q ], 1, …, 1 ),   L̂_k = diag( 1, …, 1, [ A_k^L  B_k^L ; C_k^L  D_k^L ], 1, …, 1 ).

Factored Representation

We used in the previous section the option to represent a matrix T as the product

T = T̂_N T̂_{N−1} ⋯ T̂_1,

where the factor T̂_k provides for an embedding of the k-th realization matrix in the form

T̂_k = diag( 1, …, 1, [ A_k  B_k ; C_k  D_k ], 1, …, 1 ).

Example We can check this factorized representation for T by looking at a small example with N = 4 stages. For the lower 4 × 4 semi-separable matrix we have

T = [ D_1
      C_2 B_1           D_2
      C_3 A_2 B_1       C_3 B_2       D_3
      C_4 A_3 A_2 B_1   C_4 A_3 B_2   C_4 B_3   D_4 ].

We check the factored form by directly calculating the product T̂_4 T̂_3 T̂_2 T̂_1: each multiplication from the left appends one stage, so the partial products accumulate the entries C_i A_{i−1} ⋯ A_{j+1} B_j in the strictly lower triangular part and the D_i on the diagonal, until

T̂_4 T̂_3 T̂_2 T̂_1 = [ D_1 ; C_2 B_1  D_2 ; C_3 A_2 B_1  C_3 B_2  D_3 ; C_4 A_3 A_2 B_1  C_4 A_3 B_2  C_4 B_3  D_4 ] = T,

where we have used that A_1 = [], C_1 = [], A_4 = [] and B_4 = [], producing a zero-dimensional first block column and first block row.
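The check above can be reproduced numerically. In the sketch below (sizes and orderings are our choices), each embedded factor places the block [C_k D_k; A_k B_k] between identities, which is the same embedding up to a row/column ordering that keeps the active blocks adjacent; the product of the embedded factors is compared against the semi-separable formula for T:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
n = [0, 2, 2, 2, 0]   # dim of x_k for k = 1..N+1 (boundary states empty)

# Random stage realizations (A_k, B_k, C_k, D_k); scalar u_k, y_k per stage.
stages = []
for k in range(N):
    stages.append((rng.standard_normal((n[k + 1], n[k])),
                   rng.standard_normal((n[k + 1], 1)),
                   rng.standard_normal((1, n[k])),
                   rng.standard_normal((1, 1))))

# T from the semi-separable formula: D_i on the diagonal and
# C_i A_{i-1} ... A_{j+1} B_j below it.
T = np.zeros((N, N))
for i in range(N):
    Ci, Di = stages[i][2], stages[i][3]
    for j in range(i + 1):
        if i == j:
            T[i, j] = Di.item()
        else:
            P = stages[j][1]                 # B_j
            for m in range(j + 1, i):
                P = stages[m][0] @ P         # A_m ... B_j
            T[i, j] = (Ci @ P).item()

def block_diag(*mats):
    r, c = sum(m.shape[0] for m in mats), sum(m.shape[1] for m in mats)
    out, i, j = np.zeros((r, c)), 0, 0
    for m in mats:
        out[i:i + m.shape[0], j:j + m.shape[1]] = m
        i, j = i + m.shape[0], j + m.shape[1]
    return out

# Embedded factors: That_k = diag(I_{k-1}, M_k, I_{N-k}) with
# M_k = [[C_k, D_k], [A_k, B_k]] mapping (x_k, u_k) -> (y_k, x_{k+1}).
prod = np.eye(N)
for k, (A, B, C, D) in enumerate(stages):
    M = np.block([[C, D], [A, B]])
    prod = block_diag(np.eye(k), M, np.eye(N - 1 - k)) @ prod

print(np.allclose(prod, T))      # That_N ... That_1 = T
```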

Lecture 7

Inversion of Factors in State-Space

In the next step we take the individual stages of the structure and determine the inverse realization by local inversion. Once we have computed the state-space realizations for Q and L we can easily determine the state-space realizations of the inverse systems via

S_k = [ A_k^S  B_k^S ; C_k^S  D_k^S ] = [ A_k^L − B_k^L (D_k^L)^{-1} C_k^L   B_k^L (D_k^L)^{-1} ; −(D_k^L)^{-1} C_k^L   (D_k^L)^{-1} ],

where we have S = L^{-1}, and

Q̂_k^{-1} = (Q̂_k)^T ∼ [ (A_k^Q)^T  (C_k^Q)^T ; (B_k^Q)^T  (D_k^Q)^T ],

where the realizations Q̂_k^T represent an anti-causal system. An anti-causal realization corresponds to a backward recursion

x_k = (A_k^Q)^T x_{k+1} + (C_k^Q)^T u_k,
y_k = (B_k^Q)^T x_{k+1} + (D_k^Q)^T u_k,   k = N, N−1, …, 1.

Concatenating the inverse realizations produces the structure shown in Figure 4, which is a state-space realization for the inverse matrix T^{-1} given in terms of the factors Q^T and L^{-1}, or more accurately, in terms of the product

T^{-1} = Ŝ_N ⋯ Ŝ_2 Ŝ_1 Q̂_1^T Q̂_2^T ⋯ Q̂_N^T.

Note in Figure 4 how the anti-causality of the realization for Q^T creates an upward flow of the state signals.
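The local inversion rule can be exercised numerically. The sketch below (random stages with assumed dimensions and well-conditioned scalar D_k) applies the formula S_k = [A_k − B_k D_k^{-1} C_k, B_k D_k^{-1}; −D_k^{-1} C_k, D_k^{-1}] and checks that the resulting stages realize the inverse of the causal factor:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 4, [0, 2, 2, 2, 0]   # n[k]: dimension of state x_k (assumed sizes)

# Causal stages (A, B, C, D) with invertible (here scalar, nonzero) D_k.
stages = []
for k in range(N):
    stages.append((rng.standard_normal((n[k + 1], n[k])),
                   rng.standard_normal((n[k + 1], 1)),
                   rng.standard_normal((1, n[k])),
                   rng.standard_normal((1, 1)) + 2.0 * np.eye(1)))

def to_matrix(stages):
    """Assemble the matrix realized by a causal stage sequence, column by column."""
    cols = []
    for j in range(N):
        x, y = np.zeros(n[0]), []
        for k, (A, B, C, D) in enumerate(stages):
            u = np.ones(1) if k == j else np.zeros(1)
            y.append(C @ x + D @ u)
            x = A @ x + B @ u
        cols.append(np.concatenate(y))
    return np.stack(cols, axis=1)

# Local inversion: S_k = [A - B D^{-1} C, B D^{-1}; -D^{-1} C, D^{-1}]
inv_stages = []
for A, B, C, D in stages:
    Di = np.linalg.inv(D)
    inv_stages.append((A - B @ Di @ C, B @ Di, -Di @ C, Di))

Lmat = to_matrix(stages)
Smat = to_matrix(inv_stages)
print(np.allclose(Smat @ Lmat, np.eye(N)))   # the S_k stages realize L^{-1}
```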

Recursive Computation in State-Space


Recursive Scheme

The state-space realizations for the factors of a QL factorization can be computed by a recursive algorithm that determines

[ Y_{k+1} A_k   Y_{k+1} B_k ; C_k   D_k ] = [ A_k^Q  B_k^Q ; C_k^Q  D_k^Q ] [ Y_k  0 ; C_k^L  D_k^L ].   (1)

The matrices Y_k and A_k have the same number of columns; D_k and D_k^L also have the same number of columns, while D_k^L has full row rank. The matrices Y_k or D_k^L may be zero-dimensional ([]), in which case the matrix entry 0 also vanishes. The recursion starts out with Y_{N+1} = [] and continues for k = N, N−1, …, 1. For practical purposes we rewrite Equation 1 by bringing the factor Q to the left side to arrive at

[ A_k^Q  B_k^Q ; C_k^Q  D_k^Q ]^T [ Y_{k+1} A_k   Y_{k+1} B_k ; C_k   D_k ] = [ Y_k  0 ; C_k^L  D_k^L ].

Note that this amounts to applying an appropriately chosen sequence of Givens rotations from the left in order to eliminate the 12-block and to generate the lower triangular shape on the right-hand side of the equation. This is very similar to the conventional algorithm for computing the QR factorization



Figure 4: State-Space Realization for the inverse of T (taken from ??)


(as shown in [2]), except that we create a lower triangular matrix instead of an upper triangular one. This requires a slight change in the elimination sequence. As a result of performing this recursive computation scheme for all values of k we arrive at a realization matrix for the lower factor L,

L_k = [ A_k  B_k ; C_k^L  D_k^L ].
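One sweep of the recursion can be illustrated with dense matrices. The sketch below (assumed sizes: state dimension 2, scalar input and output) forms the stacked matrix from Equation 1, computes its QL factorization with an off-the-shelf QR routine instead of an explicit Givens sequence, and reads off Y_k, C_k^L and D_k^L; the eliminated 12-block comes out as zero:

```python
import numpy as np

def ql(M):
    """QL factorization via QR of the row/column-reversed matrix."""
    Qb, Rb = np.linalg.qr(M[::-1, ::-1])
    return Qb[::-1, ::-1], Rb[::-1, ::-1]

# One step of the recursion (sizes are assumptions): state dim 2,
# scalar input/output, with Y_{k+1} already computed.
rng = np.random.default_rng(3)
Yk1 = rng.standard_normal((2, 2))           # Y_{k+1}
Ak, Bk = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))
Ck, Dk = rng.standard_normal((1, 2)), rng.standard_normal((1, 1))

M = np.block([[Yk1 @ Ak, Yk1 @ Bk],
              [Ck,       Dk      ]])
Qk, Lf = ql(M)                              # M = Qk @ Lf, Lf lower triangular

Yk  = Lf[:2, :2]                            # next Y_k
CkL = Lf[2:, :2]                            # C_k^L
DkL = Lf[2:, 2:]                            # D_k^L
print(np.allclose(Lf[:2, 2:], 0))           # the 12-block is eliminated
print(np.allclose(Qk @ Lf, M))
```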

Details of the QL Factorization Process

Working out the first steps of the recursive algorithm produces the intermediate matrices

Q̂_N^T T = Q̂_N^T T̂_N T̂_{N−1} ⋯ T̂_1
        = diag( 1, …, 1, [ A_N^Q  B_N^Q ; C_N^Q  D_N^Q ]^T [ Y_{N+1} A_N   Y_{N+1} B_N ; C_N   D_N ] ) T̂_{N−1} ⋯ T̂_1
        = diag( 1, …, 1, [ Y_N  0 ; C_N^L  D_N^L ] ) diag( 1, …, 1, [ A_{N−1}  B_{N−1} ; C_{N−1}  D_{N−1} ], 1 ) T̂_{N−2} ⋯ T̂_1
        = diag( 1, …, 1, [ Y_N A_{N−1}   Y_N B_{N−1} ; C_{N−1}   D_{N−1} ; C_N^L A_{N−1}   C_N^L B_{N−1}   D_N^L ] ) T̂_{N−2} ⋯ T̂_1,

where we made use of Equation 1. Pre-multiplication with the next factor Q̂_{N−1}^T produces the intermediate result

Q̂_{N−1}^T Q̂_N^T T = diag( 1, …, 1, [ Y_{N−1}   0 ; C_{N−1}^L   D_{N−1}^L ; C_N^L A_{N−1}   C_N^L B_{N−1}   D_N^L ] ) diag( 1, …, 1, [ A_{N−2}  B_{N−2} ; C_{N−2}  D_{N−2} ], 1, 1 ) T̂_{N−3} ⋯ T̂_1
                  = diag( 1, …, 1, [ Y_{N−1} A_{N−2}   Y_{N−1} B_{N−2} ; C_{N−2}   D_{N−2} ; C_{N−1}^L A_{N−2}   C_{N−1}^L B_{N−2}   D_{N−1}^L ; C_N^L A_{N−1} A_{N−2}   C_N^L A_{N−1} B_{N−2}   C_N^L B_{N−1}   D_N^L ] ) T̂_{N−3} ⋯ T̂_1.

Continuing the recursive computation will eventually produce

Q̂_1^T ⋯ Q̂_N^T T = [ Y_1
                     D_1^L
                     C_2^L A_1             C_2^L B_1                D_2^L
                     ⋮                                                        ⋱
                     C_N^L A_{N−1} ⋯ A_1   C_N^L A_{N−1} ⋯ A_2 B_1    ⋯      D_N^L ],

which we identify as the lower factor L, since Y_1 = [] and A_1 = [].



References

[1] G. Strang. Computational Science and Engineering. Wellesley-Cambridge Press, 2007.
[2] G. Golub, Ch. van Loan. Matrix Computations. Johns Hopkins University Press, 1992.
[3] T. Kailath. Linear Systems. Prentice Hall, 1980.
[4] P. Dewilde, A.-J. van der Veen. Time-Varying Systems and Computations. Kluwer Academic Publishers, 1998.
[5] L. Tong, A.-J. van der Veen, P. Dewilde, Y. Sung. Blind Decorrelating RAKE Receivers for Long-Code WCDMA. IEEE Trans. Signal Processing, Vol. 51, No. 6, pp. 1642-1655, June 2003.
