
Distance of a system from a nearest singular descriptor system

having impulsive initial conditions

Ashish Kothyari, Rachel K. Kalaimani and Madhu N. Belur

Abstract— This paper considers a distance problem: given a first order system, what is the distance to a nearest singular descriptor system that has impulsive initial conditions. The link between impulsive initial conditions and zeros at infinity is well-known. This paper provides bounds on the minimum perturbation required for a pair of matrices E and A such that the perturbed matrix pencil has one or more zeros at infinity. We provide closed form solutions for the minimum value for rank one perturbations and also compare this minimum value with what is obtained by the Structured Low Rank Approximation (SLRA) tool and other estimating methods.

I. INTRODUCTION

When dealing with Differential Algebraic Equations (DAEs) of first order, it is known that the existence of impulsive solutions for certain initial conditions is linked to the presence of zeros at infinity of the corresponding matrix pencil. Just like 'generically' a system is controllable, i.e. the set of uncontrollable systems forms a thin set, it is also easy to see that generically a matrix pencil has no zeros at infinity. In other words, the set of matrix pairs (E, A) such that the pencil (sE − A) has a zero at infinity is a set of measure zero in the space of all constant square matrices. This gives rise to the question: given a pair of matrices E, A, what is the minimum amount of perturbation required for the perturbed pair to have one or more zeros at infinity? This paper deals with this question and provides a closed form solution for the case when the perturbation matrices ∆E and ∆A are each of rank one. In this paper, we use the 2-norm of the perturbation matrices to quantify the distance of a first order DAE from having impulsive initial conditions.

The paper is organized as follows. The following section contains preliminaries essential for this paper. Section II-E, in particular, elaborates on the link between zeros at infinity and impulsive solutions to DAEs. Section III contains the problem formulation, while the main results of this paper are in Section IV. A few examples are investigated in Section V. The well-known techniques involving Structured Low Rank Approximation (SLRA) are reviewed in Section VI, and this section also contains a comparison of the values obtained using the SLRA tool ([?]) with our closed form perturbation values (for rank one). Some concluding remarks are in Section VII.

*This work was supported in part by SERB, DST and BRNS, India. A. Kothyari, R. K. Kalaimani and M. N. Belur are in the Department of Electrical Engineering, Indian Institute of Technology Bombay, India. Email: {ashkothyari, rachel, belur}@ee.iitb.ac.in

II. PRELIMINARIES

In this section we briefly explain the concept of zeros at infinity and its relation with impulsive solutions in a dynamical system. We then explain SLRA.

A. Zeros at infinity

Zeros at infinity play a central role in this paper: we review this from [?]. A polynomial matrix R(s) ∈ R^{n×n}[s] is said to have a zero at a finite number λ ∈ C if rank R(λ) < n. The number of finite zeros of R(s) is given by the degree of the determinant of R(s). Assume

    R(s) = Rd s^d + · · · + R1 s + R0,

where each Ri ∈ R^{n×n} and Rd ≠ 0. For any λ ∈ C and R ∈ R^{n×n}(s) there exist square rational matrices U and V such that neither U nor V has any poles or zeros at λ, and U R V = diag((s − λ)^{µi(λ)}), with the integers µi(λ) nondecreasing in i, where i takes values from 1 to n. It turns out that the integers µi(λ) depend only on R and not on the U and V matrices. If µ1 < 0 we say that R has (one or more) poles at λ; the negative µi(λ)'s are called the structural pole indices at λ. If µn > 0 we say that R has (one or more) zeros at λ, and the positive µi(λ)'s are called the structural zero indices at λ. The zeros/poles and their structural indices of R at infinity are defined as those of Q(s) at s = 0 with Q(s) := R(λ) and λ = 1/s.

The total number of poles and zeros (counted with multiplicity) of R at any λ ∈ C ∪ {∞} are respectively denoted by zR(λ) and pR(λ) and defined by

    zR(λ) := Σ_{µi>0} µi(λ)   and   pR(λ) := − Σ_{µi<0} µi(λ).

A more direct count of the zeros and poles at infinity can be obtained by counting the 'valuations' at ∞ for a rational matrix, as elaborated in [?] and as explained briefly below. For a rational p(s) ∈ R(s), with p = a/b where a and b are polynomials, b ≠ 0, define ν(p), the valuation at ∞ of p, by ν(p) := degree b − degree a, and ν(0) := ∞. For a polynomial matrix R ∈ R^{n×m}[s], define σi(R) as the minimum of the valuations of all i × i minors of R, for each 1 ≤ i ≤ n. The structural indices at infinity of the rational matrix R are defined as ν1(R) := σ1(R), νj(R) := σj(R) − σj−1(R) for j = 2, . . . , n. This procedure gives some νi as positive integers, some negative and others zero. The absolute values of the negative ones are summed to give the poles at infinity of R with multiplicity, while the positive ones are summed to give the zeros at ∞ of R counted with multiplicity:

    z∞(R) := Σ_{νi>0} νi   and   δM(R) := − Σ_{νi<0} νi.

The matrix R is said to have no zeros at infinity if all νi ≤ 0. Note that δM(R) is also called the McMillan degree of the polynomial matrix R: see [?].
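As an illustration of the minor-valuation recipe above, the following small sketch (ours, not from the paper) computes the structural indices at infinity of a polynomial matrix with sympy; the helper names are our own choices. A positive index signals a zero at infinity.

```python
# Sketch of the valuation/minor recipe above (our illustration, not the paper's code).
import sympy as sp
from itertools import combinations

s = sp.symbols('s')

def valuation_at_infinity(p):
    """nu(p) = degree(b) - degree(a) for p = a/b, and nu(0) = infinity."""
    p = sp.cancel(sp.together(p))
    if p == 0:
        return sp.oo
    a, b = sp.fraction(p)
    return sp.degree(b, s) - sp.degree(a, s)

def structural_indices_at_infinity(R):
    """sigma_i = min valuation over all i x i minors; nu_1 = sigma_1, nu_j = sigma_j - sigma_{j-1}."""
    n, m = R.shape
    sigmas = []
    for i in range(1, min(n, m) + 1):
        vals = [valuation_at_infinity(R[list(rows), list(cols)].det())
                for rows in combinations(range(n), i)
                for cols in combinations(range(m), i)]
        sigmas.append(min(vals))
    return [sigmas[0]] + [sigmas[j] - sigmas[j - 1] for j in range(1, len(sigmas))]

# A pencil sE - A with E nilpotent of index 2: one pole and one zero at infinity (nu = [-1, 1]).
E = sp.Matrix([[0, 1], [0, 0]]); A = sp.eye(2)
print(structural_indices_at_infinity(s*E - A))
```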
We are interested in the matrix pencil sE − A, which is a polynomial matrix of degree one. We primarily deal with matrix pencils which are square and nonsingular as a polynomial matrix: we call these 'regular pencils'. A matrix pencil sE − A, with E, A ∈ R^{n×n}, is called a regular pencil if det(sE − A) ≢ 0. Note that assuming regularity of a pencil implies that E and A are square. A pencil that is not regular is called singular. A necessary and sufficient condition for the pencil to not have zeros at infinity is given by the following proposition.

Proposition 2.1: [?, Page 138] Suppose E, A ∈ R^{n×n}. The matrix pencil sE − A has no zeros at infinity if and only if deg det(sE − A) = rank E.

It is easy to see that deg det(sE − A) ≤ rank E, and the above proposition is a maximality condition on the degree of the determinant of sE − A as necessary and sufficient for the absence of zeros at infinity of the pencil.

B. Generalized eigenvalues

Closely related to (and often confused with) the notion of zeros at infinity is that of generalized eigenvalues at infinity of the pair (E, A). For a regular pencil sE − A, a complex λ ∈ C such that rank(λE − A) < n is called a generalized eigenvalue of the pair (E, A). The point λ = ∞ is said to be a generalized eigenvalue if E is a singular matrix. Multiplicities of the generalized eigenvalues are more easily defined using the so-called generalized Schur form of a pencil (E, A). See [?] and [?, Page 377].

Proposition 2.2: Suppose E, A ∈ R^{n×n}. Then there exist unitary matrices U and V such that both U EV and U AV are upper triangular. The generalized eigenvalues of the pair (E, A), with (algebraic) multiplicity, are equal to aii/eii when eii is nonzero, for i = 1, . . . , n. If eii = 0 for some i, but aii ≠ 0, then these correspond to generalized eigenvalues at ∞ with (algebraic) multiplicity. For some i, both aii and eii are zero if and only if the pencil is singular.

In fact, the orthogonal matrices U and V can be chosen suitably to ensure that the diagonal elements of E and A appear in any arbitrary order (as long as the ratio aii/eii is a generalized eigenvalue): see [?, Corollary 4.7, page 133]. In particular, when the pencil is nonregular, one can ensure that E and A are upper triangular with ann = 0 = enn.
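Proposition 2.2 can be exercised numerically with the QZ (generalized Schur) decomposition. The sketch below is ours (SciPy, using the E and A of Example 5.1 later in the paper): it reads the generalized eigenvalues off the diagonals of the triangular factors and reports eii ≈ 0 entries as eigenvalues at infinity.

```python
# Generalized Schur (QZ) form of a pencil, cf. Proposition 2.2 (our sketch).
import numpy as np
from scipy.linalg import qz

E = np.diag([6.0, 5.0, 0.0])                  # E and A of Example 5.1
A = np.array([[16.94, -1.356, -3.50],
              [20.39,  1.69,  14.00],
              [40.23, 35.50,  9.23]])

# qz(A, E) returns AA = Q^H A Z and EE = Q^H E Z, both upper triangular for complex output.
AA, EE, Q, Z = qz(A, E, output='complex')
for a_ii, e_ii in zip(np.diag(AA), np.diag(EE)):
    if abs(e_ii) < 1e-12:
        print('generalized eigenvalue at infinity')
    else:
        print(a_ii / e_ii)
```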
We now relate the above proposition to zeros at infinity. Existence of one or more zeros at infinity implies the existence of generalized eigenvalues at infinity, but the converse is not true. However, if (E, A) has a repeated generalized eigenvalue at ∞, then (E, A) is arbitrarily close to having one or more zeros at infinity (if sE − A does not already have a zero at infinity). This can be seen as follows. ??

C. Singular values and the induced 2-norm

Singular values and orthogonal matrices play a key role, and hence we summarize the essential background in this subsection. The 2-norm of a vector x ∈ R^n is the one we will use throughout this paper: this is the usual Euclidean norm, the same as the square root of the sum of the squares of all the entries. Of course, for any orthogonal matrix U, the 2-norms of Ux and x are the same for each vector x ∈ R^n. Given a matrix A ∈ R^{n×n}, there exist orthogonal matrices U and V such that U AV = Σ, with Σ ∈ R^{n×n} a diagonal matrix whose diagonal entries σ1, . . . , σn ∈ R are ordered such that σ1 ≥ σ2 ≥ · · · ≥ σn ≥ 0. These diagonal entries are called the singular values, and the number of nonzero singular values is also the rank of the matrix A. The maximum singular value σ1 is also the induced 2-norm, defined as

    ‖A‖2 := sup_{x ∈ R^n \ 0} ‖Ax‖2 / ‖x‖2.

Since there can be no ambiguity, we will use ‖·‖2 to mean either the vector 2-norm or the matrix (induced) 2-norm. For a given matrix A, the Frobenius norm is defined as

    ‖A‖F := √( Σ_{i=1}^{n} Σ_{j=1}^{n} aij² ).

Note that ‖A‖F² = Σ_{i=1}^{n} σi² and, further, for any orthogonal matrices U1 and U2, the Frobenius norms satisfy ‖U1 A U2‖F = ‖A‖F and the induced 2-norms satisfy ‖U1 A U2‖2 = ‖A‖2. Rank one matrices have the same 2-norm and Frobenius norm, and so does the zero matrix.

D. Eigenvalue sensitivity: condition number

Since we deal with distance problems, and the generalized eigenvalue at infinity plays a key role in this paper, we review the notion of the condition number of an eigenvalue of a matrix. It is known that the condition number of a simple eigenvalue λ helps in an upper bound on the minimum perturbation required for the perturbed matrix to have a repeated eigenvalue at λ: see [?] and [?]. More precisely, for a matrix A ∈ R^{n×n}, with λ ∈ C a simple eigenvalue of A, define s(λ) := |yl^T yr|, where yl and yr ∈ C^n are unit 2-norm left and right eigenvectors respectively. Then there exists a ∆A ∈ R^{n×n} such that A + ∆A has a repeated eigenvalue at λ and

    ‖∆A‖2 ≤ s(λ) / √(1 − s(λ)²).                                (1)

This helps in getting an upper bound on the minimum perturbation required for repeated eigenvalues. The situation for the generalized eigenvalue problem is less straightforward, since perturbations have (reasonably) very different effects depending on whether the generalized eigenvalue is near the origin or near ∞. This motivates the use of a pair (eii, aii) (in the notation of Proposition 2.2) instead of λi = aii/eii. For a, b ∈ R, define chord(a, b) by

    chord(a, b) := |a − b| / √((1 + a²)(1 + b²)).

Then, by [?] and [?, Page 378], for xℓ and xr which are left and right generalized eigenvectors for a simple eigenvalue λ of the pair (E, A), and if Aε and Eε are such that ‖A − Aε‖ ≤ ε and ‖E − Eε‖ ≤ ε, with λε a generalized eigenvalue of the pair (Eε, Aε) that is closest to λ,

    chord(λ, λε) ≤ ε / √((xℓ^T A xr)² + (xℓ^T E xr)²).

Though a bound similar to equation (1) for the case of generalized eigenvalues seems to be unavailable in the literature, comparing the above equation with equation (1), we note later that when E is diagonal and singular, the generalized eigenvalue at zero of the pencil sA − E has a sensitivity such that a repeated generalized eigenvalue at zero for the pair (A, E) requires at most perturbation an,n. This is, in fact, the first of the various cases stated and proved in Theorem ??.
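The quantities in this subsection are easy to compute. The sketch below is our illustration (the helper names are ours): it evaluates s(λ) from unit left/right eigenvectors, the bound (1), and chord(a, b). For real data the conjugation in the inner product is immaterial, so it matches |yl^T yr| above.

```python
# Eigenvalue condition number s(lambda), the bound (1), and the chordal metric (our sketch).
import numpy as np
from scipy.linalg import eig

def sensitivities(A):
    """Return (lambda_i, s(lambda_i)) using unit 2-norm left and right eigenvectors."""
    w, vl, vr = eig(A, left=True, right=True)
    out = []
    for i, lam in enumerate(w):
        yl = vl[:, i] / np.linalg.norm(vl[:, i])
        yr = vr[:, i] / np.linalg.norm(vr[:, i])
        out.append((lam, abs(np.vdot(yl, yr))))
    return out

def bound_eq1(s_lam):
    """Right-hand side of (1): upper bound on ||Delta_A||_2 for a repeated eigenvalue."""
    return s_lam / np.sqrt(1.0 - s_lam**2)

def chord(a, b):
    return abs(a - b) / np.sqrt((1 + a**2) * (1 + b**2))

A = np.array([[1.0, 100.0], [0.0, 1.5]])      # nearly defective: s(lambda) is small
for lam, s_lam in sensitivities(A):
    print(lam, s_lam, bound_eq1(s_lam))
```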
E. Impulsive solutions in dynamical systems

The zeros at infinity are significant in systems theory as they are closely related to the presence of impulsive solutions in a dynamical system. Impulsive solutions are system responses which involve the Dirac delta distribution and/or its derivatives. The following proposition makes this precise.

Proposition 2.3: [?] Consider an autonomous singular system with state space representation Eẋ = Ax, where E, A ∈ R^{n×n} and E is singular. The free response of the system has no impulsive solutions for any initial condition x(0) ∈ R^n if and only if deg det(sE − A) = rank E.

The following result from [?, page 1076] helps in revealing the fast subsystem more directly and is very helpful for conceptual purposes. See also [?, page 28].

Proposition 2.4: Consider E, A ∈ R^{n×n} and suppose r = deg det(sE − A). Then there exist nonsingular matrices M1 and M2 such that

    M1 E M2 = [ Ir  0 ;  0  N ]   and   M1 A M2 = [ As  0 ;  0  In−r ],          (2)

where N is a nilpotent matrix. The significance of this result is that existence of zeros at infinity of the pair (E, A) is equivalent to the nilpotent matrix N not being identically zero.
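Proposition 2.3 gives an easily checkable criterion. The sketch below is ours (the function name and the tolerance handling are our choices): it compares deg det(sE − A) with rank E; the example pair is the form (2) with a 2 × 2 nilpotent block N, which does admit impulsive solutions.

```python
# Check of the criterion of Proposition 2.3: impulses exist iff deg det(sE - A) < rank E (our sketch).
import numpy as np
import sympy as sp

def has_impulsive_solutions(E, A, tol=1e-9):
    s = sp.symbols('s')
    poly = sp.Poly((s * sp.Matrix(E) - sp.Matrix(A)).det(), s)
    coeffs = [float(c) for c in poly.all_coeffs()]          # highest degree first
    deg = next((len(coeffs) - 1 - i for i, c in enumerate(coeffs) if abs(c) > tol), 0)
    return deg != np.linalg.matrix_rank(np.array(E, dtype=float))

E = [[0.0, 1.0], [0.0, 0.0]]          # the nilpotent block N of (2), not identically zero
A = [[1.0, 0.0], [0.0, 1.0]]
print(has_impulsive_solutions(E, A))  # True: deg det(sE - A) = 0 < rank E = 1
```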
III. PROBLEM FORMULATION

Having noted the significance of zeros at infinity in dynamical systems theory, we formulate the problem that we study in this paper.

Problem 3.1: Consider E and A ∈ R^{n×n}. Find the minimum value of ‖∆E‖2 + ‖∆A‖2 such that s(E + ∆E) − (A + ∆A) has one or more zeros at infinity.

We also deal with the situations when only E or only A is perturbed. Another important measure of the amount of perturbation is the Frobenius norm. However, for a rank one matrix, the Frobenius and 2-norms are the same: rank one matrices play a key role in our results. We provide explicit closed form solutions for the case when ∆A and ∆E are each of rank one. Of course, for the case when these need not be of rank one, our values are upper bounds.

As reviewed in Section II-E, it is often convenient to bring the pair (E, A) to the standard form (2). However, nonsingular matrices M and N in general do change the 2-norm and the Frobenius norms of E and A, and hence of the perturbation matrices too. In this context, orthogonal/unitary matrices play a more helpful role in bringing the pair (E, A) to a convenient form without loss of generality. We review two particular forms. The first one is of course the case where orthogonal matrices are used to ensure E is diagonal and, in fact, has its singular values along the diagonal (sorted according to decreasing magnitude). The second one is the generalized Schur form: see [?].

IV. MAIN RESULTS

This section contains our main results. We first state a result which says that if the rank of E is at least two less than n, then a regular pencil sE − A either already has one or more zeros at infinity or requires an arbitrarily small perturbation to have such zeros at infinity.

Lemma 4.1: Consider the matrix pencil sE − A, E, A ∈ R^{n×n}, and rank E = r. Consider M ∈ R^{(n−r)×n}, of full row rank, such that ME = 0. Then the pencil has no zeros at infinity if and only if dim(MA ker E) = n − rank E.

Theorem 4.2: A regular matrix pencil sE − A, where E, A ∈ R^{n×n} and rank E ≤ n − 2, is arbitrarily close to a regular matrix pencil having a zero at infinity.

Proof: We first assume that E has rank n − 2. Let ai,j denote the (i, j)th element of the matrix A. Recall we have assumed E to be a diagonal matrix with the singular values σi as the diagonal elements. We now introduce a perturbation in E as follows, where γ1 and γ2 denote the perturbation:

    E + ∆E =
        [ σ1   0    · · ·    0      0   ]
        [ 0    ⋱             .      .   ]
        [ .        σn−2      0      0   ]
        [ 0    · · ·   0     γ1     γ2  ]
        [ 0    · · ·   0     0      0   ]

Let NL and NR be two matrices whose columns span the left and right null spaces of E + ∆E respectively:

    NL = [0 . . . 1]^T,    NR = [0 . . . −γ2  γ1]^T.

According to Lemma 4.1, for the pair (E + ∆E, A) to have a zero at infinity,

    NL^T A NR = 0.

We have NL^T A NR = [an,1 . . . an,n−1  an,n] NR. Therefore, for the pair (E + ∆E, A) to have a zero at infinity:

    [an,1 . . . an,n−1  an,n] NR = 0
    ⇒ an,n γ1 − an,n−1 γ2 = 0
    ⇒ an,n γ1 = an,n−1 γ2
    ⇒ γ1/γ2 = an,n−1/an,n =: k.

For the perturbation defined by ∆E, γ1 and γ2 should satisfy the above equation for the pair (E + ∆E, A) to have a zero at infinity. Also, the pair (E + ∆E, A) should be arbitrarily close to (E, A), i.e. for any given ε > 0, ‖∆E‖ ≤ ε. In this regard we consider the following cases and provide ∆E, i.e. the values of γ1 and γ2, such that ‖∆E‖ ≤ ε.

Case 1: k ≠ 0.
Note that ‖∆E‖² = γ1² + γ2². Therefore γ1, γ2 satisfy

    γ1² + γ2² = ε².                                (3)

Substituting γ1 = kγ2 in (3) we get k²γ2² + γ2² = ε², so

    γ2 = ε / √(1 + k²),

and similarly the value of γ1 comes out to be

    γ1 = kε / √(1 + k²).

Case 2: k = 0.
This implies that an,n−1 = 0, as a result of which γ1 = 0, γ2 = ε, and the matrix pair (E + ∆E, A) will have a zero at infinity.

Case 3: k is not defined.
This implies that an,n = 0, as a result of which γ2 = 0, γ1 = ε, and the matrix pair (E + ∆E, A) will have a zero at infinity.

Next we consider the case when E has rank less than n − 2. With the help of arbitrarily small perturbations, the rank of E can be increased to n − 2. Let this perturbed matrix be E1 := E + ∆E. Now, since the rank of E1 is n − 2, from the above arguments there exists another pair (E2, A), where E2 = E1 + ∆E1, which is arbitrarily close to (E1, A) and has a zero at infinity. Hence we conclude that for the pair (E, A) there exists another pair (E2, A) which is arbitrarily close to it and has zeros at infinity.
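The construction in this proof is easy to verify numerically. The sketch below is ours; the 3 × 3 data reuse the A of Example 5.1 with E modified to have rank n − 2 = 1. It picks γ1, γ2 as in Case 1 and checks Lemma 4.1's condition NL^T A NR = 0 for the perturbed pencil.

```python
# Numeric check of the proof of Theorem 4.2 for rank E = n-2 (our sketch).
import numpy as np

eps = 1e-3
E = np.diag([6.0, 0.0, 0.0])                    # diagonal, rank n-2 = 1
A = np.array([[16.94, -1.356, -3.50],
              [20.39,  1.69,  14.00],
              [40.23, 35.50,  9.23]])
n = E.shape[0]

k = A[n-1, n-2] / A[n-1, n-1]                   # Case 1: k = a_{n,n-1}/a_{n,n} is defined and nonzero
g2 = eps / np.sqrt(1 + k**2)
g1 = k * g2

dE = np.zeros_like(E)
dE[n-2, n-2], dE[n-2, n-1] = g1, g2             # gamma_1, gamma_2 placed in row n-1 of E

NL = np.array([0.0, 0.0, 1.0])                  # spans the left null space of E + dE
NR = np.array([0.0, -g2, g1])                   # spans the right null space of E + dE
print(np.linalg.norm(dE, 2))                    # equals eps
print((E + dE) @ NR, NL @ (E + dE))             # both are zero vectors
print(NL @ A @ NR)                              # ~ 0, so (E + dE, A) has a zero at infinity (Lemma 4.1)
```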
From [?], for a matrix pair (P, Q) to be non-regular:

    rank [P  Q] < n,    or    rank [P; Q] < n.                        (4)

We had assumed that the matrix pair (E, A) is regular. Hence,

    rank [E  A] = n    and    rank [E; A] = n.

The minimum perturbation required for the matrix [E  A] or [E; A] to lose rank is ∆ := min{σmin([E  A]), σmin([E; A])}. Therefore, since we claim that with arbitrarily small perturbations we can get the regular pair (E, A) to have zeros at infinity, the pair which was regular to start with cannot become non-regular, as at least ∆ is the perturbation required for the pair to become non-regular.

Theorem 4.3: A non-regular matrix pencil (E, A) with E, A ∈ R^{n×n} is arbitrarily close to a regular matrix pencil having a zero at infinity.

Proof: From (4), if the pair (E, A) is nonregular:

    rank [E  A] < n,    or    rank [E; A] < n.                        (5)

We consider the following cases.

Case 1: rank E = n − 1.
For equation (5) to be satisfied, the 2-norm of the last row, or of the last column, or of both the last row and column of the matrix A is zero. Assume that the 2-norm of the last column of the matrix A is zero. This implies that

    rank [E; A] < n.

As the matrix E has rank n − 1, the matrix [E; A] also has rank equal to n − 1. Perturbations can be done on the last column of the matrix A with the help of a matrix ∆A such that the 2-norm of the last column equals ‖∆A‖, which is some small quantity, while also ensuring that an,n remains zero. With this kind of perturbation the rank of the matrix [E; A + ∆A] becomes equal to n and hence the matrix pair (E, A + ∆A) is regular. With NL and NR defined as the matrices whose columns span the left and right nullspaces of the matrix E,

    NL = [0 . . . 1]^T,    NR = [0 . . . 1]^T.

As

    NL^T A NR = an,n = 0,

therefore, according to Lemma 4.1, the matrix pair (E, A + ∆A) is regular and has a zero at infinity. Similarly, we can perturb A if rank [E  A] < n. In this case the 2-norm of the last row of the matrix A is zero. Thus, by doing a similar kind of perturbation as in the case of rank [E; A] < n to the last row of the matrix A, and ensuring an,n = 0, we can obtain a matrix pair (E, A + ∆A) which is regular and has a zero at infinity. If both rank [E  A] < n and rank [E; A] < n, then the 2-norms of both the last row and the last column of the matrix A are zero. In this case small perturbations can be done simultaneously on the last row and column of the matrix A to increase their 2-norms to some arbitrarily small non-zero quantity while ensuring an,n = 0; the matrix pair (E, A + ∆A) will then be regular and have a zero at infinity.

Case 2: rank E ≤ n − 2.
If the matrix pair (E, A) is non-regular then equation (5) is satisfied. This would imply that rank A ≤ n − 1. By an arbitrarily small perturbation ∆A we can ensure that the matrix A + ∆A is of full rank. This ensures that equation (5) is no longer satisfied and the matrix pair (E, A + ∆A) is regular. As rank E ≤ n − 2, using Theorem 4.2 we can find a small perturbation ∆E such that the matrix pair (E + ∆E, A + ∆A) has a zero at infinity and is regular.

Theorem 4.4: Consider a regular pencil sE − A with E, A ∈ R^{n×n} and rank E = n − 1. Let D = √(‖∆E‖² + ‖∆A‖²). Then, for some ∆A, ∆E each of rank at most one,

    min D = min{σn−1(E), an,n, X, Y}                          (6)

with

    X = |an,n σ1 · · · σn−1| / √((an,1 σ2 σ3 · · · σn−1)² + . . . + (an,n−1 σ1 σ2 · · · σn−2)²),
    Y = |an,n σ1 · · · σn−1| / √((a1,n σ2 σ3 · · · σn−1)² + . . . + (an−1,n σ1 σ2 · · · σn−2)²),

such that the matrix pair (E + ∆E, A + ∆A) has a zero at infinity.

Proof:
Case 1: Perturbation on the singular values of E, ‖∆A‖ = 0.
We have already proved in Theorem 4.2 that a pair (E1, A1) with rank E1 ≤ n − 2 is arbitrarily close to a regular matrix pair having a zero at infinity. Hence, by perturbing the matrix E with a perturbation ∆E, ‖∆E‖ = σn−1(E), we can bring the rank of the perturbed matrix E + ∆E to n − 2. And as the matrix pair (E + ∆E, A) is arbitrarily close to a regular matrix pair having a zero at infinity, the value of D in this case is σn−1(E).

Case 2: ‖∆E‖ = 0.
As E has rank n − 1, let NL and NR be two matrices whose columns span the left and right null spaces of E respectively:

    NL = [0 . . . 1]^T,    NR = [0 . . . 1]^T.

Now,

    NL^T A NR = an,n.

According to Lemma 4.1, the matrix product NL^T A NR should be equal to zero for the pair (E, A) to have a zero at infinity. Therefore, perturbing the matrix A with ∆A = −an,n at the (n, n)th position and zero elsewhere, the matrix pair (E, A + ∆A) will have a zero at infinity and ‖∆A‖ = an,n = D.

Case 3: ‖∆A‖ = 0.
As rank ∆E = 1, only a particular row or column of the matrix E can be perturbed. We have to ensure that we do not perturb the (n, n)th entry of the matrix E, which is zero, because this would make the matrix E + ∆E a full rank matrix, with which we cannot have any zeros at infinity. First consider the perturbation on the last column of the matrix E:

    E + ∆E =
        [ σ1    0     · · ·            γ1   ]
        [ 0     σ2    · · ·            γ2   ]
        [ .           · · ·             .   ]
        [ 0     · · ·        σn−1     γn−1  ]
        [ 0     · · ·          0        0   ]

Let NL and NR be the matrices whose columns span the left and right nullspaces of E + ∆E respectively:

    NL = [0 . . . 1]^T,    NR = [−γ1 σ2 · · · σn−1   . . .   −γn−1 σ1 · · · σn−2   σ1 σ2 · · · σn−1]^T.

According to Lemma 4.1, for the matrix pair (E + ∆E, A) to have a zero at infinity,

    NL^T A NR = 0
    ⇒ an,1 γ1 σ2 · · · σn−1 + . . . + an,n−1 γn−1 σ1 · · · σn−2 = an,n σ1 · · · σn−1.       (7)

For this case,

    D = √(γ1² + . . . + γn−1²).

If we consider equation (7) to be the equation of a plane with variables γ1, . . . , γn−1, the minimum value of D with respect to (7) is the distance of the plane from the origin, i.e.

    X = |an,n σ1 · · · σn−1| / √((an,1 σ2 σ3 · · · σn−1)² + . . . + (an,n−1 σ1 σ2 · · · σn−2)²).

Similarly, we can perturb the last row of E and get the value Y. Other rows/columns of the matrix E can also be perturbed.
For example, if we perturb the first row of E in the following manner,

    E + ∆E =
        [ σ1   β2     · · ·            β1  ]
        [ 0    σ2     · · ·            0   ]
        [ .           · · ·            .   ]
        [ 0    · · ·         σn−1      0   ]
        [ 0    · · ·           0       0   ]

let NL and NR be the matrices whose columns span the left and right nullspaces of E + ∆E respectively:

    NR = [−β1  0  . . .  σ1]^T,    NL = [0 . . . 1]^T.

According to Lemma 4.1, for the matrix pair (E + ∆E, A) to have a zero at infinity,

    NL^T A NR = an,1 β1 − σ1 an,n = 0
    ⇒ β1 = σ1 an,n / an,1.

It can be seen that only one variable β1 is necessary for the pair (E + ∆E, A) to have a zero at infinity, so we assume all other β's to be equal to zero. Hence,

    ‖∆E‖ = β1 = D.

This example can be seen as a special case where the perturbation is done on the last column of the matrix E: in equation (7), γ1 = β1 and all remaining γ's are equal to zero. Also, the value β1 is always greater than or equal to X, as X is the distance of the plane from the origin while β1 gives the intercept on the γ1 axis. Similarly, if we perturb any column of E other than the last column, then that becomes a special case of perturbing the last row of the matrix E, and the norm of that perturbation will always be greater than or equal to Y. It can be observed that any rank 1 perturbation done inside the leading (n − 1) × (n − 1) minor of the matrix E does not change the nullspace of the perturbed matrix E + ∆E. For example:

    E + ∆E =
        [ σ1   β1     · · ·            0   ]
        [ 0    σ2     · · ·            0   ]
        [ .           · · ·            .   ]
        [ 0    · · ·         σn−1      0   ]
        [ 0    · · ·           0       0   ]

The left and right nullspaces of the matrices E and E + ∆E are spanned by the same vectors, and hence we require perturbations on the last row of E + ∆E, or perturbations on the matrix A, to introduce a zero at infinity in the matrix pair (E + ∆E, A) or (E + ∆E, A + ∆A) accordingly. As these cases have already been discussed, we can ignore these kinds of perturbations.

V. EXAMPLES

The values X, Y, Z, σn−1 and an,n are as defined in Theorem ??.

Example 5.1: Consider the case when

    E = [6 0 0; 0 5 0; 0 0 0],   A = [16.94 −1.3560 −3.50; 20.39 1.69 14.00; 40.23 35.50 9.23].

Here σn−1 = 5, an,n = 9.2300, X = 0.9452, Y = 3.2271, Z = 12.5445. This is an example where the minimum is very much less than σn−1.

Example 5.2: Consider

    E = [9 0 0; 0 6 0; 0 0 0],   A = [2.52 19.3340 23.9; 12.50 22.34 6.90; 30.1240 22.5 9.23].

Here σn−1 = 6, an,n = 9.23, X = 1.8363, Y = 3.1895, Z = 8.8277. This is an example where the minimum is less than σn−1 but very much greater than σ2n−1: the structure is important.

Example 5.3: Consider

    E = [6 0 0; 0 5 0; 0 0 0],   A = [6.94 45.62 −23.3; 10.32 −12.20 4.0; 12.80 30.50 9.23].

For this example σn−1 = 5, an,n = 9.23, X = 1.4283, Y = 2.3279, Z = 12.2006. This is a general example.
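The quantities σn−1(E), an,n, X and Y above are straightforward to compute. The sketch below is ours and reproduces the values reported for Example 5.1; Z, whose definition sits behind the unresolved theorem reference, is not computed here.

```python
# sigma_{n-1}(E), a_{n,n}, X and Y of Theorem 4.4, evaluated on Example 5.1 (our sketch).
import numpy as np

def rank_one_bounds(E, A):
    """E is assumed diagonal with singular values sigma_1..sigma_{n-1} and a zero last diagonal entry."""
    n = E.shape[0]
    sigma = np.diag(E)[:n-1]
    prod_all = np.prod(sigma)
    prod_except = prod_all / sigma                           # product of all sigma_i except sigma_j
    num = abs(A[n-1, n-1]) * prod_all
    X = num / np.linalg.norm(A[n-1, :n-1] * prod_except)     # last column of E perturbed, cf. (7)
    Y = num / np.linalg.norm(A[:n-1, n-1] * prod_except)     # last row of E perturbed
    return sigma[-1], abs(A[n-1, n-1]), X, Y

E = np.diag([6.0, 5.0, 0.0])
A = np.array([[16.94, -1.3560, -3.50],
              [20.39,  1.69,   14.00],
              [40.23, 35.50,    9.23]])
s_nm1, a_nn, X, Y = rank_one_bounds(E, A)
print(s_nm1, a_nn, X, Y)              # 5.0, 9.23, ~0.9452, ~3.2271
print(min(s_nm1, a_nn, X, Y))         # min D of (6): ~0.9452
```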


VI. SLRA: FORMULATION AND COMPARISON

In this section we formulate the problem of SLRA. We then relate the problem of SLRA to our problem and compare our result with the solution obtained using the SLRA package provided with [LRAIM], using different initial conditions.

A. SLRA formulation

Structured Low Rank Approximation deals with the construction of a structured low rank matrix nearest to a given matrix.

Problem statement: Let ω ⊂ R^{n×n} be a subset of matrices having a particular structure. Let X ∈ ω; then

    minimize_Y   ‖X − Y‖w
    subject to   rank Y ≤ r − l,   l = 1, . . . , r − 1,
                 Y ∈ ω,

where ‖·‖w denotes the weighted 2-norm. We will be using the unweighted 2-norm for solving the problem. For the structure, we use the following key result to find a nearest matrix pencil having zeros at infinity.

Proposition 6.1: ([?, Theorem 1]) Let P(s) ∈ R^{n×n}(s) with

    P(s) = P0 + P1 s + . . . + Pd s^d,

and let

    Ti = [ Pd   · · ·   Pd−i+1   Pd−i  ]
         [       ⋱        .        .   ]
         [               Pd      Pd−1  ]
         [ 0                      Pd   ]

be the block Toeplitz matrix constructed from the matrix coefficients of P(s), and let

    ri = rank Ti − rank Ti−1.

The polynomial matrix P(s) has no zeros at infinity if and only if r0 = n.

For our case P(s) = sE − A, and hence P0 := −A and P1 := E. This yields the above necessary and sufficient condition for the matrix pair (E, A) to have one or more zeros at infinity as

    rank [A E; E 0] − rank E ≤ n − 1.

Further, if rank E is n − 1, then the Toeplitz matrix [A E; E 0] has rank at most 2n − 1. Thus, if we find the nearest low rank matrix with the same Toeplitz structure, i.e. [A′ E′; E′ 0], then E − E′ =: ∆E and A − A′ =: ∆A.
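The structured matrix and the rank condition above are easy to evaluate numerically. The sketch below is ours, on the data of Example 5.1: it builds T = [A E; E 0], checks rank T − rank E against n − 1, and prints σ2n−1(T), the unstructured lower bound reported in the comparison table below.

```python
# Rank test on the block Toeplitz matrix T = [A E; E 0] and its (2n-1)th singular value (our sketch).
import numpy as np

E = np.diag([6.0, 5.0, 0.0])
A = np.array([[16.94, -1.3560, -3.50],
              [20.39,  1.69,   14.00],
              [40.23, 35.50,    9.23]])
n = E.shape[0]

T = np.block([[A, E], [E, np.zeros((n, n))]])
defect = np.linalg.matrix_rank(T) - np.linalg.matrix_rank(E)
print(defect <= n - 1)                               # False here: (E, A) has no zeros at infinity yet
print(np.linalg.svd(T, compute_uv=False)[2*n - 2])   # sigma_{2n-1}(T), cf. the table's lower-bound column
```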

Using certain examples, we next demonstrate that our perturbation is better than the answers that we get from SLRA.

B. Comparison

In order to compare with the SLRA tool ([?]), we use the following initial conditions in addition to the default. The values obtained with respect to these initial conditions are listed in the table below.

    Rini1 = [0 0 0 0 0 1.0; 0 0 0 0.0022 1.0 0]
    Rini2 = [0 0 0 0 0 1.0; 0.4472 0.4472 0.4472 0.4472 0.4472 0]
    Rini3 = [1 0 0 0 0 0; 0 1 0 0 0 0]

The default initial condition depends upon the input structured matrix.

                 σ2n−1(T0)       σn−1(E)        Theorem ??   Initial     SLRA
                 (lower bound)   (upper bound)                Condition
    Example 1    0.2227          5              0.9452        Default     8.3824
                 0.2227          5              0.9452        Rini1       5
                 0.2227          5              0.9452        Rini2       7.5471
                 0.2227          5              0.9452        Rini3       8.8892
    Example 2    0.7446          6              1.8363        Default    15.0833
                 0.7446          6              1.8363        Rini1       6
                 0.7446          6              1.8363        Rini2      17.8630
                 0.7446          6              1.8363        Rini3      33.6599
    Example 3    0.2263          5              1.4283        Default     5.9389
                 0.2263          5              1.4283        Rini1       5
                 0.2263          5              1.4283        Rini2      12.6741
                 0.2263          5              1.4283        Rini3      42.9928

VII. CONCLUSION

We related the minimum perturbation required for a pencil to have zeros at infinity to various topics like repeated generalized eigenvalues at infinity and Structured Low Rank Approximation (SLRA). We provided explicit bounds for the case when perturbations are allowed in both E and A but the perturbation matrices are each of at most rank one. We showed using examples that the condition number defined through the chordal metric gives only an upper bound, which could be very conservative.

Of course, when perturbation matrices of rank 2 or more are allowed, the minimum we obtained is an upper bound. A lower bound for this case is what we can get using the special case of the results on block Toeplitz structure for a polynomial matrix.
