Distance of a system from a nearest singular descriptor system

having impulsive initial-conditions

Ashish Kothyari, Rachel K. Kalaimani and Madhu N. Belur

*This work was supported in part by SERB, DST and BRNS, India. A. Kothyari, R. K. Kalaimani and M. N. Belur are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, India. Email: {ashkothyari, rachel, belur}@ee.iitb.ac.in

Abstract— This paper considers a distance problem: given a first order system, what is the distance to a nearest singular descriptor system that has impulsive initial conditions. The link between impulsive initial conditions and zeros at infinity is well-known. This paper provides bounds on the minimum perturbation required for a pair of matrices E and A such that the perturbed matrix pencil has one or more zeros at infinity. We provide closed form solutions for the minimum value for rank one perturbations, and we also compare this minimum value with what is obtained by the Structured Low Rank Approximation (SLRA) tool and other estimating methods.

I. INTRODUCTION

When dealing with Differential Algebraic Equations (DAEs) of first order, it is known that the existence of impulsive solutions for certain initial conditions is linked to the presence of zeros at infinity of the corresponding matrix pencil. Just as a system is 'generically' controllable, i.e. the set of uncontrollable systems forms a thin set, it is also easy to see that generically a matrix pencil has no zeros at infinity. In other words, the set of matrix pairs (E, A) such that the pencil (sE − A) has a zero at infinity is a set of measure zero in the space of all constant square matrices. This gives rise to the question: given a pair of matrices E, A, what is the minimum amount of perturbation required for the perturbed pair to have one or more zeros at infinity? This paper deals with this question and provides a closed form solution for the case when the perturbation matrices ∆_E and ∆_A are each of rank one. In this paper, we use the 2-norm of the perturbation matrices to quantify the distance of a first order DAE to having impulsive initial conditions.

The paper is organized as follows. The following section contains preliminaries essential for this paper. Section II-E, in particular, elaborates on the link between zeros at infinity and impulsive solutions to DAEs. Section III contains the problem formulation, while the main results of this paper are in Section IV. A few examples are investigated in Section V. The well-known techniques involving Structured Low Rank Approximation (SLRA) are reviewed in Section VI, and this section also contains a comparison of the values obtained using the SLRA tool ([?]) with our closed form perturbation values (for rank one). Some concluding remarks are in Section VII.

II. PRELIMINARIES

In this section we briefly review the concept of zeros at infinity and its relation to impulsive solutions of a dynamical system. We then review SLRA.

A. Zeros at infinity

Zeros at infinity play a central role in this paper: we review this notion from [?]. A polynomial matrix R(s) ∈ R^{n×n}[s] is said to have a zero at a finite number λ ∈ C if rank R(λ) < n. The number of finite zeros of R(s) is given by the degree of the determinant of R(s). Assume

    R(s) = R_d s^d + · · · + R_1 s + R_0,

where each R_i ∈ R^{n×n} and R_d ≠ 0. For any λ ∈ C and R ∈ R^{n×n}(s) there exist square rational matrices U and V such that neither U nor V has any poles or zeros at λ, and U R V = diag((s − λ)^{µ_i(λ)}), with the integers µ_i(λ) nondecreasing in i, where i takes values from 1 to n. It turns out that the integers µ_i(λ) depend only on R and not on the matrices U and V. If µ_1 < 0 we say that R has (one or more) poles at λ, and the negative µ_i(λ)'s are called the structural pole indices at λ. If µ_n > 0 we say that R has (one or more) zeros at λ, and the positive µ_i(λ)'s are called the structural zero indices at λ. The zeros/poles of R at infinity and their structural indices are defined as those of Q(s) at s = 0, where Q(s) := R(λ) with λ = 1/s.

The total number of zeros and poles (counted with multiplicity) of R at any λ ∈ C ∪ {∞} are denoted by z_R(λ) and p_R(λ) respectively, and defined by

    z_R(λ) := Σ_{µ_i>0} µ_i(λ)   and   p_R(λ) := − Σ_{µ_i<0} µ_i(λ).

A more direct count of the zeros and poles at infinity can be obtained by counting the 'valuations' at ∞ of a rational matrix, as elaborated in [?] and explained briefly below. For a rational function p(s) ∈ R(s), with p = a/b where a and b are polynomials, b ≠ 0, define the valuation at ∞ of p by ν(p) := degree b − degree a, and ν(0) := ∞. For a polynomial matrix R ∈ R^{n×m}[s], define σ_i(R) as the minimum of the valuations of all i × i minors of R, for each 1 ≤ i ≤ n. The structural indices at infinity of the rational matrix R are defined as ν_1(R) := σ_1(R) and ν_j(R) := σ_j(R) − σ_{j−1}(R) for j = 2, . . . , n. This procedure gives some ν_i as positive integers, some negative and others zero.
The absolute values of the negative ones are summed to give the poles at infinity of R counted with multiplicity, while the positive ones are summed to give the zeros at ∞ of R counted with multiplicity:

    z_∞(R) := Σ_{ν_i>0} ν_i   and   δ_M(R) := − Σ_{ν_i<0} ν_i.

The matrix R is said to have no zeros at infinity if all ν_i ≤ 0. Note that δ_M(R) is also called the McMillan degree of the polynomial matrix R: see [?].

We are interested in the matrix pencil sE − A, which is a polynomial matrix of degree one. We primarily deal with matrix pencils which are square and nonsingular as a polynomial matrix: we call these 'regular pencils'. A matrix pencil sE − A, with E, A ∈ R^{n×n}, is called a regular pencil if det(sE − A) ≢ 0. Note that assuming regularity of a pencil implies that E and A are square. A pencil that is not regular is called singular. A necessary and sufficient condition for the pencil to have no zeros at infinity is given by the following proposition.

Proposition 2.1: [?, Page 138] Suppose E, A ∈ R^{n×n}. The matrix pencil sE − A has no zeros at infinity if and only if deg det(sE − A) = rank E.

It is easy to see that deg det(sE − A) ≤ rank E, and the above proposition states that maximality of the degree of the determinant of sE − A is necessary and sufficient for the absence of zeros at infinity of the pencil.
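The condition of Proposition 2.1 is straightforward to test numerically. The following sketch is ours and not part of the paper's development; it assumes Python with numpy and sympy, and the function name is a hypothetical helper.

    # Minimal sketch (ours): testing Proposition 2.1 for a given pair (E, A).
    import numpy as np
    import sympy as sp

    def has_zero_at_infinity(E, A):
        """True if the regular pencil sE - A has one or more zeros at infinity,
        i.e. deg det(sE - A) < rank E (Proposition 2.1)."""
        s = sp.symbols('s')
        pencil = sp.Matrix(E) * s - sp.Matrix(A)
        det = sp.expand(pencil.det())
        if det == 0:
            raise ValueError("pencil is singular (not regular)")
        deg = sp.Poly(det, s).degree()
        rank_E = np.linalg.matrix_rank(np.array(E, dtype=float))
        return deg < rank_E

    # Small example: E singular and a_{n,n} = 0 force a zero at infinity.
    E = [[1, 0], [0, 0]]
    A = [[2, 1], [1, 0]]
    print(has_zero_at_infinity(E, A))   # True: deg det(sE - A) = 0 < rank E = 1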
B. Generalized eigenvalues

Closely related to (and often confused with) the notion of zeros at infinity is that of generalized eigenvalues at infinity of the pair (E, A). For a regular pencil sE − A, a complex number λ ∈ C such that rank(λE − A) < n is called a generalized eigenvalue of the pair (E, A). The point λ = ∞ is said to be a generalized eigenvalue if E is a singular matrix. Multiplicities of the generalized eigenvalues are more easily defined using the so-called generalized Schur form of a pencil (E, A). See [?] and [?, Page 377].

Proposition 2.2: Suppose E, A ∈ R^{n×n}. Then there exist unitary matrices U and V such that both U E V and U A V are upper triangular. The generalized eigenvalues of the pair (E, A), with (algebraic) multiplicity, are equal to a_{ii}/e_{ii} when e_{ii} is nonzero, for i = 1, . . . , n. If e_{ii} = 0 for some i, but a_{ii} ≠ 0, then these correspond to generalized eigenvalues at ∞ with that (algebraic) multiplicity. For some i, both a_{ii} and e_{ii} are zero if and only if the pencil is singular.

In fact, the matrices U and V can be chosen suitably to ensure that the diagonal elements of E and A appear in any arbitrary order (as long as the ratio a_{ii}/e_{ii} is a generalized eigenvalue): see [?, Corollary 4.7, page 133]. In particular, when the pencil is nonregular, one can ensure that E and A are upper triangular with a_{nn} = 0 = e_{nn}.

We now relate the above proposition to zeros at infinity. Existence of one or more zeros at infinity implies the existence of generalized eigenvalues at infinity, but the converse is not true. However, if (E, A) has a repeated generalized eigenvalue at ∞, then (E, A) is arbitrarily close to having one or more zeros at infinity (if sE − A does not already have a zero at infinity). This can be seen as follows. ??
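The form in Proposition 2.2 corresponds to the QZ (generalized Schur) decomposition available in standard numerical software. The sketch below is ours and only illustrates the proposition numerically; it assumes Python with numpy and scipy, and the example matrices are arbitrary.

    # Sketch (ours): reading off finite and infinite generalized eigenvalues
    # of a pair (E, A) from the QZ (generalized Schur) form, cf. Proposition 2.2.
    import numpy as np
    from scipy.linalg import qz

    E = np.array([[6.0, 0.0, 0.0],
                  [0.0, 5.0, 0.0],
                  [0.0, 0.0, 0.0]])      # singular E: a generalized eigenvalue at infinity
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.5]])

    # AA = Q^H A Z and EE = Q^H E Z are upper triangular (U = Q^H, V = Z in Prop. 2.2).
    AA, EE, Q, Z = qz(A, E, output='complex')
    for a, e in zip(np.diag(AA), np.diag(EE)):
        if abs(e) > 1e-12:
            print("finite generalized eigenvalue:", a / e)
        elif abs(a) > 1e-12:
            print("generalized eigenvalue at infinity")
        else:
            print("both diagonal entries zero: pencil is singular")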
C. Singular values and the induced 2-norm

Singular values and orthogonal matrices play a key role in this paper, and hence we summarize the essential background in this subsection. The 2-norm of a vector x ∈ R^n is the one we use throughout this paper: this is the usual Euclidean norm, i.e. the square root of the sum of the squares of all the entries. Of course, for any orthogonal matrix U, the 2-norms of Ux and x are the same for each vector x ∈ R^n. Given a matrix A ∈ R^{n×n}, there exist orthogonal matrices U and V such that U A V = Σ, where Σ ∈ R^{n×n} is a diagonal matrix with diagonal entries σ_1, . . . , σ_n ∈ R ordered such that σ_1 ≥ σ_2 ≥ · · · ≥ σ_n ≥ 0. These diagonal entries are called the singular values, and the number of nonzero singular values is the rank of the matrix A. The maximum singular value σ_1 is also the induced 2-norm, defined as

    ‖A‖_2 := sup_{x ∈ R^n \ 0} ‖Ax‖_2 / ‖x‖_2.

Since there can be no ambiguity, we use ‖·‖_2 to denote either the vector 2-norm or the matrix (induced) 2-norm. For a given matrix A, the Frobenius norm is defined as

    ‖A‖_F := ( Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij}² )^{1/2}.

Note that ‖A‖_F² = Σ_{i=1}^{n} σ_i² and, further, for any orthogonal matrices U_1 and U_2, the Frobenius norm satisfies ‖U_1 A U_2‖_F = ‖A‖_F. Similarly, for the induced 2-norm we have ‖U_1 A U_2‖_2 = ‖A‖_2. A rank one matrix has the same 2-norm and Frobenius norm, and so does the zero matrix.
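These facts are easy to verify numerically; the short sketch below is ours (assuming numpy) and checks them for a random matrix.

    # Sketch (ours): sigma_1 = ||A||_2, ||A||_F = sqrt(sum sigma_i^2), and
    # invariance of both norms under an orthogonal factor.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    sigma = np.linalg.svd(A, compute_uv=False)

    print(np.isclose(np.linalg.norm(A, 2), sigma[0]))
    print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((sigma**2).sum())))

    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
    print(np.isclose(np.linalg.norm(Q @ A, 2), np.linalg.norm(A, 2)))
    print(np.isclose(np.linalg.norm(Q @ A, 'fro'), np.linalg.norm(A, 'fro')))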
D. Eigenvalue sensitivity: condition number

Since we deal with distance problems, and the generalized eigenvalue at infinity plays a key role in this paper, we review the notion of the condition number of an eigenvalue of a matrix. It is known that the condition number of a simple eigenvalue λ provides an upper bound on the minimum perturbation required for the perturbed matrix to have a repeated eigenvalue at λ: see [?] and [?]. More precisely, for a matrix A ∈ R^{n×n}, with λ ∈ C a simple eigenvalue of A, define s(λ) := |y_l^T y_r|, where y_l and y_r ∈ C^n are unit 2-norm left and right eigenvectors respectively. Then there exists a ∆_A ∈ R^{n×n} such that A + ∆_A has a repeated eigenvalue at λ and

    ‖∆_A‖_2 ≤ s(λ) / √(1 − s(λ)²).   (1)

This gives an upper bound on the minimum perturbation required for repeated eigenvalues. The situation for the generalized eigenvalue problem is less straightforward, because perturbations have (reasonably) very different effects depending on whether the generalized eigenvalue is near the origin or near ∞. This motivates the use of the pair (e_{ii}, a_{ii}) (in the notation of Proposition 2.2) instead of λ_i = a_{ii}/e_{ii}. For a, b ∈ R, define chord(a, b) by

    chord(a, b) := |a − b| / √((1 + a²)(1 + b²)).

Then, by [?] and [?, Page 378], for x_ℓ and x_r which are left and right generalized eigenvectors for a simple generalized eigenvalue λ of the pair (E, A), and if A_ε and E_ε are such that ‖A − A_ε‖ ≤ ε and ‖E − E_ε‖ ≤ ε, with λ_ε a generalized eigenvalue of the pair (E_ε, A_ε) that is closest to λ,

    chord(λ, λ_ε) ≤ ε / √( (x_ℓ^T A x_r)² + (x_ℓ^T E x_r)² ).

Though a bound similar to equation (1) for the case of generalized eigenvalues seems to be unavailable in the literature, comparing the above equation with equation (1), we note later that when E is diagonal and singular, the generalized eigenvalue at zero of the pencil sA − E has a sensitivity that results in a repeated generalized eigenvalue at zero for the pair (A, E) requiring a perturbation of at most a_{n,n}. This is, in fact, the first of the various cases stated and proved in Theorem 4.4.
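The quantities above are simple to evaluate; the sketch below is ours and not part of the paper (it assumes numpy, the example matrix is arbitrary, and the eigenvectors are taken in the transpose sense y_l^T A = λ y_l^T used above).

    # Sketch (ours): the chordal metric, the condition number s(lambda) of a
    # simple eigenvalue, and the resulting bound (1).
    import numpy as np

    def chord(a, b):
        return abs(a - b) / np.sqrt((1 + a**2) * (1 + b**2))

    def repeated_eigenvalue_bound(A, k):
        """Bound (1) for making the k-th (simple) eigenvalue of A repeated."""
        lam, V = np.linalg.eig(A)
        Winv = np.linalg.inv(V)                 # row i of Winv is a left eigenvector of lam[i]
        yr = V[:, k] / np.linalg.norm(V[:, k])
        yl = Winv[k, :] / np.linalg.norm(Winv[k, :])
        s = abs(yl @ yr)                        # s(lambda) = |y_l^T y_r|
        return s / np.sqrt(1 - s**2)

    A = np.array([[1.0, 100.0], [0.0, 1.1]])    # ill-conditioned eigenvalues: small bound
    print(repeated_eigenvalue_bound(A, 0))
    print(chord(1.0, 1.05))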
E. Impulsive solutions in dynamical systems

Zeros at infinity are all the more significant in systems theory because they are closely related to the presence of impulsive solutions in a dynamical system. Impulsive solutions are system responses which involve the Dirac delta distribution and/or its derivatives. The following proposition makes this precise.

Proposition 2.3: [?] Consider an autonomous singular system with state space representation Eẋ = Ax, where E, A ∈ R^{n×n} and E is singular. The free response of the system has no impulsive solutions for any initial condition x(0) ∈ R^n if and only if deg det(sE − A) = rank E.

The following result from [?, page 1076] reveals the fast subsystem more directly and is very helpful for conceptual purposes. See also [?, page 28].

Proposition 2.4: Consider E, A ∈ R^{n×n} and suppose r = deg det(sE − A). Then there exist nonsingular matrices M_1 and M_2 such that

    M_1 E M_2 = [ I_r  0 ]    and    M_1 A M_2 = [ A_s  0        ],   (2)
                [ 0    N ]                       [ 0    I_{n−r}  ]

where N is a nilpotent matrix.

The significance of this result is that existence of zeros at infinity of the pair (E, A) is equivalent to the nilpotent matrix N not being identically zero.

III. PROBLEM FORMULATION

Having noted the significance of zeros at infinity in dynamical systems theory, we formulate the problem that we study in this paper.

Problem 3.1: Consider E and A ∈ R^{n×n}. Find the minimum value of ‖∆_E‖_2 + ‖∆_A‖_2 such that s(E + ∆_E) − (A + ∆_A) has one or more zeros at infinity, when rank ∆_E ≤ 1 and rank ∆_A ≤ 1.

We also deal with the situations when only E or only A is perturbed. Another important measure of the amount of perturbation is the Frobenius norm. However, for a rank one matrix the Frobenius and 2-norms coincide, and rank one matrices play a key role in our results. We provide explicit closed form solutions for the case when ∆_A and ∆_E are each of rank one. Of course, for the case when these need not be of rank one, our values are upper bounds.

As reviewed in Section II-E, it is often convenient to bring the pair (E, A) to the standard form in equation (2). However, nonsingular matrices M_1 and M_2 in general change the 2-norm and the Frobenius norm of E and A, and hence of the perturbation matrices too. In this context, orthogonal/unitary matrices play a more helpful role in bringing the pair (E, A) to a convenient form without loss of generality. We use two particular forms. The first is of course the case where orthogonal matrices are used to ensure that E is diagonal, with its singular values along the diagonal (sorted in decreasing order of magnitude). The second is the generalized Schur form: see [?].

IV. MAIN RESULTS

This section contains our main results. We begin with a characterization of zeros at infinity (Lemma 4.1), and then show that if the rank of E is at least two less than n, then a regular pencil sE − A either already has one or more zeros at infinity or requires an arbitrarily small perturbation to acquire such zeros (Lemma 4.2).

Lemma 4.1: Consider the matrix pencil sE − A, with E, A ∈ R^{n×n} and rank E = r. Consider M ∈ R^{(n−r)×n} of full row rank such that ME = 0. Then the pencil has no zeros at infinity if and only if dim(M A ker E) = n − rank E.
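Before turning to the proof, we note that the condition of Lemma 4.1 is easy to check numerically. The sketch below is ours (assuming numpy); M and ker E are built from an SVD of E, and the data is that of Example 5.1 later in the paper.

    # Sketch (ours): checking dim(M A ker E) = n - rank E (Lemma 4.1).
    import numpy as np

    def no_zeros_at_infinity(E, A, tol=1e-10):
        E, A = np.asarray(E, float), np.asarray(A, float)
        n = E.shape[0]
        U, s, Vt = np.linalg.svd(E)
        r = int((s > tol).sum())                 # rank E
        M = U[:, r:].T                           # rows span the left null space: M E = 0
        kerE = Vt[r:, :].T                       # columns span ker E
        return np.linalg.matrix_rank(M @ A @ kerE, tol=tol) == n - r

    E = np.diag([6.0, 5.0, 0.0])
    A = np.array([[16.94, -1.356, -3.50],
                  [20.39,  1.69, 14.00],
                  [40.23, 35.50,  9.23]])
    print(no_zeros_at_infinity(E, A))            # True here, since a_{3,3} = 9.23 is nonzero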
Proof: Consider a singular value decomposition of a matrix P ∈ R^{n×n} with rank r < n:

    U P V = Σ = [ D  0        ],   (3)
                [ 0  0_{n−r}  ]

where U, V ∈ R^{n×n} are orthogonal matrices and D is a nonsingular diagonal matrix of size r. Let U be partitioned as [U_1; U_2], where U_1 ∈ R^{r×n} and U_2 ∈ R^{(n−r)×n}, and let V be partitioned as [V_1  V_2], where V_1 ∈ R^{n×r} and V_2 ∈ R^{n×(n−r)}. We use this decomposition, in particular the fact that the rows of U_2 are a basis of ker P^T and the columns of V_2 are a basis of ker P.

(1 ⇒ 2): We assume dim(M A ker E) = n − r and show that sE − A has no zeros at infinity. Without loss of generality, assume E is diagonal; the rank of E is r. Partition A in accordance with the nonzero and zero diagonal entries of E as

    A = [ A_1  A_2 ],
        [ A_3  A_4 ]

where A_1 ∈ R^{r×r} and A_4 ∈ R^{(n−r)×(n−r)}. From the partitions of U and V we have A_4 = U_2 A V_2. The rows of U_2 form a basis for ker E^T, i.e. U_2 = M. Similarly, the columns of V_2 form a basis for ker E. Therefore dim(M A ker E) = rank A_4. It is given that dim(M A ker E) = n − r, which implies that A_4 is nonsingular. Since A_4 is nonsingular, and because of the structure of E, it is clear that deg det(sE − A) = r; here we use the fact that U and V are orthogonal, so deg det(s U E V − U A V) = deg det(sE − A). Hence deg det(sE − A) = r = rank E, and therefore, by Proposition 2.1, there are no zeros at infinity.

(2 ⇒ 1): We assume that there are no zeros at infinity. Hence, from Proposition 2.1, we have deg det(sE − A) = rank E = r. Using arguments exactly like those in the previous part, if deg det(sE − A) has to equal r, then A_4 has to be nonsingular. This implies that dim(M A ker E) = n − r.

Lemma 4.2: A regular matrix pencil sE − A with E, A ∈ R^{n×n} and rank E ≤ n − 2 is arbitrarily close to a regular matrix pencil having a zero at infinity.

Proof: We first assume that E has rank n − 2. Let a_{i,j} denote the (i, j)-th element of the matrix A. Recall that we have assumed E to be a diagonal matrix with the singular values σ_i as its diagonal elements. We now introduce a perturbation ∆_E in E, where γ_1 and γ_2 denote the perturbation entries:

    E + ∆_E = [ σ_1   ⋯     0       0     0  ]
              [  ⋮    ⋱     ⋮       ⋮     ⋮  ]
              [  0    ⋯   σ_{n−2}   0     0  ]
              [  0    ⋯     0      γ_1   γ_2 ]
              [  0    ⋯     0       0     0  ].

Let N_L and N_R be two matrices whose columns span the left and right null spaces of E + ∆_E respectively:

    N_L = [0  . . .  1]^T,    N_R = [0  . . .  −γ_2  γ_1]^T.

According to Lemma 4.1, for the pair (E + ∆_E, A) to have a zero at infinity,

    N_L^T A N_R = 0.

We have N_L^T A N_R = [a_{n,1}  . . .  a_{n,n−1}  a_{n,n}] N_R. Therefore, for the pair (E + ∆_E, A) to have a zero at infinity:

    [a_{n,1}  . . .  a_{n,n−1}  a_{n,n}] N_R = 0
    ⇒ a_{n,n} γ_1 − a_{n,n−1} γ_2 = 0
    ⇒ a_{n,n} γ_1 = a_{n,n−1} γ_2
    ⇒ γ_1/γ_2 = a_{n,n−1}/a_{n,n} =: k.

For the perturbation defined by ∆_E, the entries γ_1 and γ_2 should satisfy the above equation for the pair (E + ∆_E, A) to have a zero at infinity. Also, the pair (E + ∆_E, A) should be arbitrarily close to (E, A), i.e. for any given ε > 0, ‖∆_E‖ ≤ ε. In this regard we consider the following cases and provide ∆_E, i.e. the values of γ_1 and γ_2, such that ‖∆_E‖ ≤ ε.

Case 1: k ≠ 0. Note that ‖∆_E‖² = γ_1² + γ_2². Therefore γ_1, γ_2 are chosen so that

    γ_1² + γ_2² = ε².   (4)

Substituting γ_1 = kγ_2 in (4) gives k²γ_2² + γ_2² = ε², so that

    γ_2 = ε / √(1 + k²),

and similarly the value of γ_1 comes out to be

    γ_1 = kε / √(1 + k²).

Case 2: k = 0. This implies that a_{n,n−1} = 0, as a result of which γ_1 = 0 and γ_2 = ε, and the matrix pair (E + ∆_E, A) will have a zero at infinity.

Case 3: k is not defined. This implies that a_{n,n} = 0, as a result of which γ_2 = 0 and γ_1 = ε, and the matrix pair (E + ∆_E, A) will have a zero at infinity.

Now consider the case when the matrix E has rank less than n − 2. With the help of arbitrarily small perturbations the rank of E can be increased to n − 2, and then the procedure used for the rank n − 2 case can be followed to get a zero at infinity in the matrix pair (E′ + ∆′_E, A), where E′ is the perturbed E with rank n − 2 and ∆′_E is the perturbation on E′.

It remains to check that the perturbed pair is regular. For Case 1, rank(E + ∆_E) = n − 1, and let

    s(E + ∆_E) − A = [ sE_{11} − A_{11}   A_{12}(s) ].
                     [ A_{21}             a_{n,n}   ]

If the matrix pair (E + ∆_E, A) were non-regular, the Schur complement of sE_{11} − A_{11} would be identically zero, i.e.

    a_{n,n} − A_{21} (sE_{11} − A_{11})^{−1} A_{12}(s) ≡ 0,

and hence

    det(sE_{11} − A_{11}) a_{n,n} = A_{21} adj(sE_{11} − A_{11}) A_{12}(s).   (5)
The L.H.S. of (5) is a degree n − 1 polynomial whose leading coefficient equals σ_1 · · · σ_{n−2} γ_1 a_{n,n}. The R.H.S. is also a degree n − 1 polynomial, whose leading coefficient equals a_{n,n−1} σ_1 · · · σ_{n−2} γ_2. As γ_2 a_{n,n−1} = γ_1 a_{n,n}, the coefficients of the highest degree terms on both sides are equal. Considering det(sE_{11} − A_{11}) to be fixed, we can perturb A_{12}(s) and A_{21} by some small amount (except for the term a_{n,n−1}) so as to obtain different coefficients for the lower order terms on the two sides, thus ensuring that the Schur complement of sE_{11} − A_{11} is not identically zero, because of which the pair (E + ∆_E, A + ∆_A) is regular and has a zero at infinity.

For Case 2, a_{n,n−1} = 0. Now let

    s(E + ∆_E) − A = [ sE_{22} − A_{22}   A_{12} ],
                     [ A_{21}             S      ]

where S = [ a_{n−1,n−1}   a_{n−1,n} + γ_2 s ;  0   a_{n,n} ] is unimodular. Hence the Schur complement of sE_{22} − A_{22} is always polynomial, and hence the pair (E + ∆_E, A) is regular and has a zero at infinity.

For Case 3, a_{n,n} = 0. Hence arbitrarily small perturbations can be applied to A_{2,1} or A_{1,2}, or to both together, such that the R.H.S. of equation (5) is not identically zero; thus the Schur complement of sE_{11} − A_{11} is not identically zero, which ensures that the pair (E + ∆_E, A + ∆_A) is regular and has a zero at infinity.

Lemma 4.3: A non-regular matrix pencil (E, A) with E, A ∈ R^{n×n} is arbitrarily close to a regular matrix pencil having a zero at infinity.

Proof: From [?, Section 6.1], if

    rank [E  A] < n,   or   rank [E; A] < n,   (6)

then the matrix pair (E, A) is non-regular (here [E  A] ∈ R^{n×2n} places E and A side by side, and [E; A] ∈ R^{2n×n} stacks them). This implies that rank E as well as rank A is ≤ n − 1. This can also be observed from the generalized Schur decomposition of the matrix pair (E, A). We consider the following cases.

Case 1: rank E = n − 1. Here, with E and A in the generalized Schur form of Proposition 2.2, if the pencil is not regular then the following hold:
1) a_{n,n} = 0,
2) at least one of {a_{n,n−1}, a_{n−1,n}} is 0.
From Lemma 4.1, the condition for the pair (E, A) to have zeros at infinity is a_{n,n} = 0. Now, in the set {a_{n,n−1}, a_{n−1,n}}, the zero elements are perturbed by a small amount ε. This makes the pair (E, A) regular, and since a_{n,n} is 0, the new perturbed pair also has a zero at infinity.

Case 2: rank E ≤ n − 2. For the matrix A there exists an arbitrarily small perturbation ∆_A such that A + ∆_A is nonsingular. Now the pair (E, A + ∆_A) is regular. Since rank E ≤ n − 2, from Lemma 4.2 there exists an arbitrarily small perturbation ∆_E such that the pair (E + ∆_E, A + ∆_A) has a zero at infinity.

Theorem 4.4: Consider a regular pencil sE − A with E, A ∈ R^{n×n} and rank E = n − 1; without loss of generality (see Section III), let E = diag(σ_1, . . . , σ_{n−1}, 0) with σ_1 ≥ · · · ≥ σ_{n−1} > 0. Define the following parameters:

    D := ‖∆_E‖_2 + ‖∆_A‖_2,
    X := |a_{n,n} σ_1 · · · σ_{n−1}| / √( (a_{n,1} σ_2 σ_3 · · · σ_{n−1})² + · · · + (a_{n,n−1} σ_1 σ_2 · · · σ_{n−2})² ),
    Y := |a_{n,n} σ_1 · · · σ_{n−1}| / √( (a_{1,n} σ_2 σ_3 · · · σ_{n−1})² + · · · + (a_{n−1,n} σ_1 σ_2 · · · σ_{n−2})² ),
    Z := min{ σ_min([E  A]), σ_min([E; A]) }.

Assume rank ∆_A ≤ 1 and rank ∆_E ≤ 1. Then the minimum D such that the pair (E + ∆_E, A + ∆_A) has a zero at infinity is given by

    min D = min{ σ_{n−1}(E), |a_{n,n}|, X, Y, Z }.   (7)

Proof: We first explain how the terms in the expression (7) are obtained, through the following cases.

Case 1: perturbation of the singular values of E. From Lemma 4.2, a pair (E_1, A_1) with rank E_1 ≤ n − 2 is arbitrarily close to a regular matrix pair having a zero at infinity. Hence, by perturbing the matrix E with a perturbation ∆_E, ‖∆_E‖ = σ_{n−1}(E), we can bring the rank of the perturbed matrix E + ∆_E down to n − 2. As the matrix pair (E + ∆_E, A) is then arbitrarily close to a regular matrix pair having a zero at infinity, the value of D in this case is σ_{n−1}(E).

Case 2: ‖∆_E‖ = 0. As E has rank n − 1, let N_L and N_R be two matrices whose columns span the left and right null spaces of E respectively:

    N_L = [0  . . .  1]^T,    N_R = [0  . . .  1]^T.

Now, N_L^T A N_R = a_{n,n}. According to Lemma 4.1, the matrix product N_L^T A N_R should be equal to zero for the pair (E, A) to have a zero at infinity. Therefore, perturbing the matrix A with ∆_A having entry −a_{n,n} at the (n, n)-th position and zeros elsewhere, the matrix pair (E, A + ∆_A) will have a zero at infinity, and ‖∆_A‖ = |a_{n,n}| = D.

Case 3: ‖∆_A‖ = 0. As rank ∆_E = 1, only a particular row or column of the matrix E can be perturbed. We have to ensure that we do not perturb the (n, n)-th entry of E, which is zero, because this would make the matrix E + ∆_E a full rank matrix, with which we cannot have any zeros at infinity. First consider a perturbation of the last column of the matrix E:

    E + ∆_E = [ σ_1   0    ⋯      0        γ_1     ]
              [  0   σ_2   ⋯      0        γ_2     ]
              [  ⋮    ⋮    ⋱      ⋮         ⋮      ]
              [  0    0    ⋯   σ_{n−1}   γ_{n−1}   ]
              [  0    0    ⋯      0         0      ].

Let N_L and N_R be the matrices whose columns span the left and right null spaces of E + ∆_E respectively:

    N_L = [0  . . .  1]^T,
    N_R = [ γ_1 σ_2 σ_3 · · · σ_{n−1}    γ_2 σ_1 σ_3 · · · σ_{n−1}   . . .   γ_{n−1} σ_1 · · · σ_{n−2}    −σ_1 σ_2 · · · σ_{n−1} ]^T.
According to Lemma 4.1, for the matrix pair (E + ∆_E, A) to have a zero at infinity,

    N_L^T A N_R = 0
    ⇒ γ_1 a_{n,1} σ_2 · · · σ_{n−1} + γ_2 a_{n,2} σ_1 σ_3 · · · σ_{n−1} + · · · + γ_{n−1} a_{n,n−1} σ_1 · · · σ_{n−2} = a_{n,n} σ_1 · · · σ_{n−1}.   (8)

For this case,

    D = √( γ_1² + · · · + γ_{n−1}² ).

If we view equation (8) as the equation of a plane in the variables γ_1, . . . , γ_{n−1}, the minimum value of D subject to (8) is the distance of this plane from the origin, i.e.

    X = |a_{n,n} σ_1 · · · σ_{n−1}| / √( (a_{n,1} σ_2 σ_3 · · · σ_{n−1})² + · · · + (a_{n,n−1} σ_1 σ_2 · · · σ_{n−2})² ).

Similarly, we can perturb the last row of E and get the value Y.

Other rows/columns of the matrix E can also be perturbed. For example, suppose we perturb the first row of E in the following manner:

    E + ∆_E = [ σ_1  β_2   ⋯   β_{n−1}   β_1 ]
              [  0   σ_2   ⋯      0       0  ]
              [  ⋮    ⋮    ⋱      ⋮       ⋮  ]
              [  0    0    ⋯   σ_{n−1}    0  ]
              [  0    0    ⋯      0       0  ].

Let N_L and N_R be the matrices whose columns span the left and right null spaces of E + ∆_E respectively:

    N_L = [0  . . .  1]^T,    N_R = [−β_1  0  . . .  0  σ_1]^T.

According to Lemma 4.1, for the matrix pair (E + ∆_E, A) to have a zero at infinity:

    N_L^T A N_R = a_{n,n} σ_1 − a_{n,1} β_1 = 0   ⇒   β_1 = σ_1 a_{n,n} / a_{n,1}.

It can be seen that only one variable, β_1, is necessary for the pair (E + ∆_E, A) to have a zero at infinity, so we take all the other β's to be zero. Hence,

    ‖∆_E‖ = β_1 = D.

This example can be seen as a special case of a perturbation of the last column of the matrix E: in equation (8), γ_1 = β_1 and all the other γ's are equal to zero. Moreover, the value β_1 is always greater than or equal to X, since X is the distance of the plane from the origin while β_1 is the intercept on the γ_1 axis. Similarly, if we perturb any column of E other than the last column, that becomes a special case of perturbing the last row of the matrix E, and the norm of that perturbation is always greater than or equal to Y. It can also be observed that any rank one perturbation confined to the leading (n − 1) × (n − 1) minor of the matrix E does not affect the null spaces of the perturbed matrix E + ∆_E. For example, with

    E + ∆_E = [ σ_1  β_1   ⋯      0      0 ]
              [  0   σ_2   ⋯      0      0 ]
              [  ⋮    ⋮    ⋱      ⋮      ⋮ ]
              [  0    0    ⋯   σ_{n−1}   0 ]
              [  0    0    ⋯      0      0 ],

the left and right null spaces of E and of E + ∆_E are spanned by the same vectors, and hence we again require perturbations of the last row of E + ∆_E, or perturbations of the matrix A, to introduce a zero at infinity in the matrix pair (E + ∆_E, A) or (E + ∆_E, A + ∆_A) respectively. As these cases have already been discussed, we can ignore perturbations of this kind.

Case 4: ‖∆_E‖ ≠ 0 and ‖∆_A‖ ≠ 0. From [?, Section 6.1], a sufficient condition for a matrix pair (E, A) to be singular is

    rank [E  A] < n,   or   rank [E; A] < n.

Hence, by reducing the rank of either [E  A] or [E; A] with the help of perturbations ∆_A and ∆_E, we can make the pair (E + ∆_E, A + ∆_A) singular, and by Lemma 4.3 this pair is arbitrarily close to a matrix pair which is regular and has a zero at infinity. Hence the minimum perturbation required to introduce a zero at infinity in this way is the minimum of ‖[∆_E  ∆_A]‖ = σ_min([E  A]) and ‖[∆_E; ∆_A]‖ = σ_min([E; A]). The perturbation matrix [∆_E  ∆_A] equals σ_min([E  A]) u_n v_{2n}^T, where u_n and v_{2n} are the vectors corresponding to σ_min([E  A]) in the SVD expansion of the matrix [E  A]. As the vectors u_n and v_{2n} are normalized, it is easy to see that

    ‖[∆_E  ∆_A]‖² = ‖∆_E‖² + ‖∆_A‖².

Similarly, we can find the perturbation matrix [∆_E; ∆_A] and show that

    ‖[∆_E; ∆_A]‖² = ‖∆_E‖² + ‖∆_A‖².

In both cases it can be seen that ∆_E and ∆_A are rank one matrices. With this, all rank one perturbations of both E and A have been covered, and taking the minimum over the cases above yields (7).
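All the quantities appearing in Theorem 4.4 are directly computable. The sketch below is ours (assuming numpy, with E diagonal as in the theorem); it evaluates σ_{n−1}(E), |a_{n,n}|, X, Y, Z and the bound (7), and can be used to reproduce, up to rounding, the values reported in the examples of the next section.

    # Sketch (ours): the quantities of Theorem 4.4 for E = diag(sigma_1,...,sigma_{n-1}, 0).
    import numpy as np

    def theorem_4_4_bound(E, A):
        E, A = np.asarray(E, float), np.asarray(A, float)
        n = E.shape[0]
        sigma = np.diag(E)[:n - 1]                 # sigma_1, ..., sigma_{n-1}
        prod_all = np.prod(sigma)
        coeff = prod_all / sigma                   # coefficients of the plane in equation (8)
        X = abs(A[n - 1, n - 1]) * prod_all / np.linalg.norm(A[n - 1, :n - 1] * coeff)
        Y = abs(A[n - 1, n - 1]) * prod_all / np.linalg.norm(A[:n - 1, n - 1] * coeff)
        Z = min(np.linalg.svd(np.hstack([E, A]), compute_uv=False)[-1],
                np.linalg.svd(np.vstack([E, A]), compute_uv=False)[-1])
        candidates = {'sigma_{n-1}(E)': sigma[-1], '|a_nn|': abs(A[n - 1, n - 1]),
                      'X': X, 'Y': Y, 'Z': Z}
        return candidates, min(candidates.values())

    E = np.diag([6.0, 5.0, 0.0])                   # data of Example 5.1 below
    A = np.array([[16.94, -1.356, -3.50],
                  [20.39,  1.69, 14.00],
                  [40.23, 35.50,  9.23]])
    print(theorem_4_4_bound(E, A))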
V. EXAMPLES

The values X, Y, Z, σ_{n−1} and a_{n,n} below are as defined in Theorem 4.4.

Example 5.1: Consider the case when

    E = [ 6  0  0 ]       A = [ 16.94   −1.3560   −3.50 ]
        [ 0  5  0 ],          [ 20.39     1.69    14.00 ]
        [ 0  0  0 ]           [ 40.23    35.50     9.23 ].

Here σ_{n−1} = 5, a_{n,n} = 9.2300, X = 0.9452, Y = 3.2271 and Z = 12.5445. This is an example in which the minimum is very much less than σ_{n−1}.

Example 5.2: Consider

    E = [ 9  0  0 ]       A = [  2.52    19.3340   23.9  ]
        [ 0  6  0 ],          [ 12.50     22.34     6.90 ]
        [ 0  0  0 ]           [ 30.1240   22.5      9.23 ].

Here σ_{n−1} = 6, a_{n,n} = 9.23, X = 1.8363, Y = 3.1895 and Z = 8.8277. This is an example in which the minimum is less than σ_{n−1} but very much greater than σ_{2n−1}(T_0): the structure is important.

Example 5.3: Consider

    E = [ 6  0  0 ]       A = [  6.94    45.62   −23.3 ]
        [ 0  5  0 ],          [ 10.32   −12.20     4.0 ]
        [ 0  0  0 ]           [ 12.80    30.50     9.23 ].

Here σ_{n−1} = 5, a_{n,n} = 9.23, X = 1.4283, Y = 2.3279 and Z = 12.2006. This is a general example.

VI. SLRA: FORMULATION AND COMPARISON

In this section we formulate the SLRA problem. We then relate SLRA to our problem and compare our result with the solution obtained using the SLRA package provided with [LRAIM], for different initial conditions.

A. SLRA formulation

Structured Low Rank Approximation deals with the construction of a structured low rank matrix nearest to a given matrix.

Problem statement: Let ω ⊂ R^{n×n} be a subset of matrices having a particular structure, and let X ∈ ω. Then

    minimize_Y   ‖X − Y‖_w
    subject to   rank Y ≤ r − l,  l = 1, . . . , r − 1,
                 Y ∈ ω,

where ‖·‖_w denotes the weighted 2-norm. We use the unweighted 2-norm for solving the problem. For the structure, we use the following key result to find a nearest matrix pencil having zeros at infinity.

Proposition 6.1: ([?, Theorem 1]) Let P(s) ∈ R^{n×n}(s) with

    P(s) = P_0 + P_1 s + · · · + P_d s^d,

and let

    T_i = [ P_d  P_{d−1}   · · ·   P_{−i+1}   P_{−i}   ]
          [ 0    P_d       · · ·   P_{−i+2}   P_{−i+1} ]
          [ ⋮               ⋱                   ⋮      ]
          [ 0     · · ·               P_d     P_{d−1}  ]
          [ 0     · · ·                0       P_d     ]

be the block Toeplitz matrix constructed from the matrix coefficients of P(s) (with the convention P_j := 0 for j < 0), and let

    r_i = rank T_i − rank T_{i−1}.

The polynomial matrix P(s) has no zeros at infinity if and only if r_0 = n.

For our case P(s) = sE − A, and hence P_0 := −A and P_1 := E. This yields the above necessary and sufficient condition, so that the matrix pair (E, A) has one or more zeros at infinity if and only if

    rank [ A  E ]  −  rank E  ≤  n − 1.
         [ E  0 ]

Further, if rank E is n − 1, then the Toeplitz matrix T_0 = [A E; E 0] has rank at most 2n − 1. Thus, if we find the nearest lower rank matrix with the same Toeplitz structure, say [A′ E′; E′ 0], then E′ − E =: ∆_E and A′ − A =: ∆_A. Using the above examples, we next demonstrate that our perturbation is better than the answers that we get from SLRA.
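The quantity σ_{2n−1}(T_0), used as an (unstructured) lower bound in the comparison below, is also easy to compute. The sketch is ours (assuming numpy); it builds the block Toeplitz matrix [A E; E 0] for the pencil sE − A and returns its (2n − 1)-th singular value.

    # Sketch (ours): sigma_{2n-1} of the block Toeplitz matrix T_0 = [A E; E 0].
    import numpy as np

    def toeplitz_lower_bound(E, A):
        E, A = np.asarray(E, float), np.asarray(A, float)
        n = E.shape[0]
        T0 = np.block([[A, E], [E, np.zeros((n, n))]])
        s = np.linalg.svd(T0, compute_uv=False)
        return s[2 * n - 2]                       # sigma_{2n-1}, counting from 1

    E = np.diag([6.0, 5.0, 0.0])                  # data of Example 5.1
    A = np.array([[16.94, -1.356, -3.50],
                  [20.39,  1.69, 14.00],
                  [40.23, 35.50,  9.23]])
    print(toeplitz_lower_bound(E, A))             # cf. the lower-bound column of the table below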
B. Comparison

In order to compare with the SLRA tool ([?]), we use the following initial conditions in addition to the default (the default depends upon the input structured matrix). The values obtained with these initial conditions are reported in the table below.

    R_ini1 = [ 0  0  0  0       0    1.0 ]
             [ 0  0  0  0.0022  1.0  0   ],

    R_ini2 = [ 0       0       0       0       0       1.0 ]
             [ 0.4472  0.4472  0.4472  0.4472  0.4472  0   ],

    R_ini3 = [ 1  0  0  0  0  0 ]
             [ 0  1  0  0  0  0 ].

                 σ_{2n−1}(T_0)   σ_{n−1}(E)   Theorem 4.4     Initial      SLRA
                 (lower bound)                (upper bound)   condition
    Example 1       0.2227           5           0.9452        Default     8.3824
                    0.2227           5           0.9452        R_ini1      5
                    0.2227           5           0.9452        R_ini2      7.5471
                    0.2227           5           0.9452        R_ini3      8.8892
    Example 2       0.7446           6           1.8363        Default    15.0833
                    0.7446           6           1.8363        R_ini1      6
                    0.7446           6           1.8363        R_ini2     17.8630
                    0.7446           6           1.8363        R_ini3     33.6599
    Example 3       0.2263           5           1.4283        Default     5.9389
                    0.2263           5           1.4283        R_ini1      5
                    0.2263           5           1.4283        R_ini2     12.6741
                    0.2263           5           1.4283        R_ini3     42.9928

VII. CONCLUSION

We related the minimum perturbation required for a pencil to have zeros at infinity to various topics such as repeated generalized eigenvalues at infinity and Structured Low Rank Approximation (SLRA). We provided explicit bounds for the case when perturbations are allowed in both E and A but the perturbation matrices are each of at most rank one. We showed using examples that the condition number defined through the chordal metric gives only an upper bound, which can be very conservative.

Of course, when perturbation matrices of rank two or more are allowed, the minimum we obtained is an upper bound. A lower bound for this case is what we can get using the special case of the results on the block Toeplitz structure for a polynomial matrix.
