
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS

Numer. Linear Algebra Appl. 2011; 18:205–221


Published online 3 February 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/nla.766
Ill-conditioning of the truncated singular value decomposition, Tikhonov regularization and their applications to numerical partial differential equations

Zi-Cai Li^1, Hung-Tsai Huang^2 and Yimin Wei^3,4,*

^1 Department of Applied Mathematics and Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
^2 Department of Applied Mathematics, I-Shou University, Kaohsiung City 84001, Taiwan
^3 School of Mathematical Sciences, Fudan University, Shanghai 200433, People's Republic of China
^4 Shanghai Key Laboratory of Contemporary Applied Mathematics, Shanghai 200433, People's Republic of China
SUMMARY
This paper explores some intrinsic characteristics of accuracy and stability for the truncated singular value decomposition (TSVD) and the Tikhonov regularization (TR), which can be applied to numerical solutions of partial differential equations (numerical PDE). Ill-conditioning is a severe issue for numerical methods, in particular when the minimal singular value $\sigma_{\min}$ of the stiffness matrix is close to zero, and when the singular vector $u_{\min}$ corresponding to $\sigma_{\min}$ is highly oscillating. TSVD and TR can be used as numerical techniques for seeking stable solutions of linear algebraic equations. In this paper, new bounds are derived for the condition number and the effective condition number, which can be used to improve ill-conditioning by TSVD and TR. A brief error analysis of TSVD and TR is also made, since both errors and condition numbers are essential for the numerical solution of PDE. Numerical experiments are reported for the discrete Laplace operator by the method of fundamental solutions. Copyright © 2011 John Wiley & Sons, Ltd.
Received 11 October 2009; Revised 12 November 2010; Accepted 17 November 2010
KEY WORDS: ill-conditioning; fundamental solutions; truncated singular value decomposition; Tikhonov
regularization; effective condition number; collocation Trefftz method
1. INTRODUCTION
This paper explores some intrinsic characteristics of the accuracy and stability of the truncated
singular value decomposition (TSVD) and the Tikhonov regularization (TR), which can be applied
to numerical methods for solving partial differential equations (numerical PDE). The condition number is an important issue for those methods, especially when the minimal singular value $\sigma_{\min}$ of the stiffness matrix is close to zero and the left singular vector $u_{\min}$ corresponding to $\sigma_{\min}$ is highly oscillating. TSVD and TR can be used as stable numerical techniques for solving the linear algebraic equations arising from numerical PDE. In this paper, we first derive new bounds for the condition number and the effective condition number, which can be used to improve conditioning by TSVD and TR. Since in numerical PDE, in contrast to image processing, high accuracy is more important than stability, the link and balance between stability and accuracy must be

*Correspondence to: Yimin Wei, School of Mathematical Sciences, Fudan University, Shanghai 200433, People's Republic of China. E-mail: ymwei@fudan.edu.cn
dealt with carefully. Moreover, the stability analysis shows that the condition number is a better
criterion for numerical PDE than the 2-norm of the solution vector.
Consider the over-determined system of linear algebraic equations [1, 2]
$$Ax = b, \qquad (1)$$
where $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), $x \in \mathbb{R}^{n}$ and $b \in \mathbb{R}^{m}$. Its perturbed system is
$$A(x+\Delta x) = b+\Delta b, \qquad (2)$$
or, more generally,
$$(A+\Delta A)(x+\Delta x) = b+\Delta b, \qquad (3)$$
where the perturbations $\Delta A \in \mathbb{R}^{m\times n}$, $\Delta x \in \mathbb{R}^{n}$ and $\Delta b \in \mathbb{R}^{m}$. To measure the sensitivity of the solution to perturbations in the data, traditionally we use the 2-norm condition number defined by [1]
$$\mathrm{Cond}(A) = \frac{\sigma_{\max}}{\sigma_{\min}} = \|A\|\,\|A^{\dagger}\|, \qquad (4)$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the maximal and minimal singular values of the matrix $A$, respectively, and $A^{\dagger}$ is the Moore–Penrose inverse of $A$. For Equation (2), there exists the classical upper bound
$$\frac{\|\Delta x\|}{\|x\|} \le \mathrm{Cond}(A)\,\frac{\|\Delta b\|}{\|b\|}, \qquad (5)$$
where $\|\cdot\|$ is the spectral norm (i.e. the 2-norm). In addition to the 2-norm condition number, mixed and componentwise condition numbers are developed in [3–5].
Recently, in [6–8], the effective condition number defined by
$$\mathrm{Cond\_eff}(A) = \frac{\|b\|}{\sigma_{\min}\,\|x\|} = \|A^{\dagger}\|\,\frac{\|b\|}{\|x\|} \qquad (6)$$
was proposed, and the sharp bound for (2)
$$\frac{\|\Delta x\|}{\|x\|} \le \mathrm{Cond\_eff}(A)\,\frac{\|\Delta b\|}{\|b\|} \qquad (7)$$
was also given.
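To make the two definitions concrete, the following sketch computes Cond(A) and Cond_eff(A) of (4) and (6) from an SVD; the 2×2 matrix and right-hand side are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def cond_numbers(A, b):
    """Return (Cond(A), Cond_eff(A)) as defined in (4) and (6)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least squares solution of Ax = b
    s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
    cond = s[0] / s[-1]                        # sigma_max / sigma_min
    cond_eff = np.linalg.norm(b) / (s[-1] * np.linalg.norm(x))
    return cond, cond_eff

A = np.diag([1.0, 1.0e-8])
b = np.array([0.0, 1.0e-8])     # b lies along the small singular direction
cond, cond_eff = cond_numbers(A, b)
# cond is ~1e8 while cond_eff is ~1: for this b the solution is well determined
```

This illustrates that Cond_eff(A) depends on the actual right-hand side and can be far smaller than the worst-case Cond(A).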
Cond_eff(A) can be much smaller than the traditional Cond(A). In comparing Cond_eff(A) with Cond(A), it is $\sigma_{\min} = 1/\|A^{\dagger}\|$, not $\sigma_{\max} = \|A\|$, that is intrinsic to stability. In practice, some numerical methods, such as the spectral methods, the method of fundamental solutions (MFS) and the radial basis function method, are often very ill-conditioned, i.e. $\sigma_{\min}$ is very close to zero (see [9]). In this case, both Cond(A) and Cond_eff(A) are large. To reduce the severe instability, two techniques can be employed:
(1) the TSVD and
(2) the TR.
Both TSVD and TR play an important role in filtering [10], and are successful in noise reduction in least squares (LS) problems. For the related important references, TSVD is discussed by Hansen and co-workers [11–13] and Chan and Hansen [14], and used by Chen et al. [15, 16]. TR was first proposed by Tikhonov [17] in 1963, introduced in Tikhonov and Arsenin [18] and analyzed in [19–27]. Recently, TSVD and TR have been studied in [14, 12, 28], and by Fierro et al. [29] for the regularization by truncated total LS.
From (4) and (6), we have
$$\mathrm{Cond}(A) = c_{0}\,\mathrm{Cond\_eff}(A)\,\|x\|, \qquad (8)$$
where the ratio $c_{0} = \|A\|/\|b\|$. When $c_{0} \doteq O(1)$, where $a \doteq b$ or $a \doteq O(b)$ denotes that there exist two positive constants $C_{1}$ and $C_{2}$ such that $C_{1}b \le a \le C_{2}b$, Equation (8) leads to
$$\mathrm{Cond}(A) \doteq \mathrm{Cond\_eff}(A)\,\|x\|. \qquad (9)$$
Suppose that the data error $\Delta b$ includes both the rounding and the truncation (i.e. the discretization) errors [8]. Equation (7) indicates that Cond_eff is an enlargement factor of the solution errors from the data errors. A large $\|x\|$ indicates the occurrence of subtraction cancellation, which is a source of instability [30]. Interestingly, from (9), Cond denotes the whole stability, whereas $\|x\|$ denotes only part of the stability. Hence, for the stability analysis of regularization, the condition number Cond is more important than $\|x\|$. This is a distinct feature of this paper compared with the existing literature.
For numerical PDE, small errors are most desirable. Under a fixed machine precision, the large enlargement factor Cond may cost many working digits, leaving only the rest of the working digits for the accuracy of the numerical solutions. Hence, the accuracy is retained as long as there are enough working digits left over from stability. Multiple precision can be used, but costs more CPU time and computer memory. Therefore, to reach the same accuracy, we should employ longer precision only for more ill-conditioned problems. Accuracy is of the most concern in numerical PDE, and it is often required to be very high. This is another distinct feature of this paper compared with the existing literature, such as in image processing, where the errors are only required to be within about 1%.
We will explore these intrinsic properties of the regularization algorithms and cope with them in numerical PDE. One important feature of our paper is that when the regularization is applied, the stability is improved significantly, while the accuracy is reduced only moderately. Hence a suitable balance between stability and accuracy must be struck. In this paper, for numerical PDE, we explore condition numbers and error bounds for two kinds of regularization: (1) the TSVD and (2) the TR. The application of this paper, in particular to seeking the optimal regularization parameter, will be discussed in our future work.
This paper is organized as follows. In Section 2, the TSVD and the TR algorithms are described. In Section 3, the bounds for both the condition number and the effective condition number are derived for TSVD and TR. In Section 4, a brief error analysis is made and error bounds are derived. In the last section, numerical tests for the discrete Laplace equation, solved by the MFS, are reported.
2. ALGORITHMS OF REGULARIZATION
In this paper, we assume that $\mathrm{Rank}(A) = n$. The singular value decomposition of $A$ is expressed by
$$A = U\Sigma V^{T}, \qquad (10)$$
where $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$ are orthogonal matrices, and $\Sigma \in \mathbb{R}^{m\times n}$ is a diagonal matrix with positive singular values
$$\sigma_{1} \ge \sigma_{2} \ge \cdots \ge \sigma_{n} > 0, \qquad (11)$$
where we denote simply $\sigma_{1} = \sigma_{\max} = \|A\|$ and $\sigma_{n} = \sigma_{\min} = 1/\|A^{\dagger}\|$. In this paper, we also assume that $\sigma_{\min} \ll \sigma_{\max}$. The columns of the matrices $U = (u_{1}, u_{2}, \ldots, u_{m})$ and $V = (v_{1}, v_{2}, \ldots, v_{n})$, where $u_{i} \in \mathbb{R}^{m}$ and $v_{i} \in \mathbb{R}^{n}$, are the left and right singular vectors of $A$, respectively. The LS solution of (1) can then be expressed by
$$x_{0} = x_{\mathrm{LS}} = \sum_{i=1}^{n} \frac{\beta_{i}}{\sigma_{i}}\,v_{i}, \qquad (12)$$
where $\beta_{i} = u_{i}^{T}b$. When $\sigma_{n}$ is very close to zero, the solution $x_{0}$ in (12) may be large and even huge if $\beta_{n} \ne 0$. Also, when $v_{n}$ is highly oscillating, the solution $x_{0}$ is highly oscillating as well. One way to overcome this difficulty is to discard the part of (12) involving very small $\sigma_{i}$, say $i = k+1, k+2, \ldots, n$. We then obtain the TSVD solution [12]
$$x_{k} = \sum_{i=1}^{k} \frac{\beta_{i}}{\sigma_{i}}\,v_{i}, \qquad k < n. \qquad (13)$$
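Formulas (12) and (13) translate directly into code; a minimal numpy sketch (the function name and test matrix are our illustrative choices):

```python
import numpy as np

def tsvd_solution(A, b, k):
    """TSVD solution x_k of (13): keep only the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s is sorted descending
    beta = U.T @ b                                    # beta_i = u_i^T b
    return Vt[:k].T @ (beta[:k] / s[:k])              # sum of (beta_i/sigma_i) v_i

A = np.array([[1.0, 0.0],
              [0.0, 1.0e-6],
              [0.0, 0.0]])
b = np.array([1.0, 2.0e-6, 0.5])
x0 = tsvd_solution(A, b, 2)   # k = n: the full LS solution x_0 of (12)
x1 = tsvd_solution(A, b, 1)   # drops the tiny sigma_2 and its sensitive term
```

Truncation removes the contribution $\beta_2/\sigma_2$, which is exactly the part of $x_0$ most sensitive to perturbations of $b$.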
The other approach to dealing with a very small $\sigma_{n}$ is TR. Consider the following minimization problem with a parameter $\lambda$, known as the regularization parameter:
$$\min_{x\in\mathbb{R}^{n}} \{\|Ax-b\|^{2} + \lambda^{2}\|Lx\|^{2}\}, \qquad (14)$$
where the matrix $L \in \mathbb{R}^{p\times n}$ ($p \le n$). If $L$ is the identity matrix $I \in \mathbb{R}^{n\times n}$, then Equation (14) leads to
$$\min_{x\in\mathbb{R}^{n}} \{\|Ax-b\|^{2} + \lambda^{2}\|x\|^{2}\}, \qquad (15)$$
which is the standard form of TR. In fact, we have from Equation (12)
$$\|x_{0}\|^{2} = \sum_{i=1}^{n} \left(\frac{\beta_{i}}{\sigma_{i}}\right)^{2}.$$
When $\beta_{n} \ne 0$ and $\sigma_{n} = \sigma_{\min} \to 0$, we have $\|x_{0}\| \to \infty$. Equation (15) is used to keep $\|x\|$ from becoming large, and thus to reduce the severe instability. The solution of (15) can be represented by
$$x_{\lambda} = (A^{T}A + \lambda^{2}I)^{-1}A^{T}b. \qquad (16)$$
Hence better stability can be achieved by preventing the very large values of $\|x_{\lambda}\|$ caused by $\sigma_{\min}$ tending to 0. From the SVD (10), we have
$$x_{\lambda} = \sum_{i=1}^{n} \frac{\sigma_{i}}{\sigma_{i}^{2}+\lambda^{2}}\,\beta_{i}v_{i}. \qquad (17)$$
In this paper, we assume that the parameter $\lambda$ satisfies†
$$\sigma_{\min} \le \lambda \le \sigma_{\max}, \qquad \sigma_{\min} \ll \sigma_{\max}. \qquad (18)$$
Equation (17) is called the TR solution. Both Equations (13) and (17) can overcome the instability caused by the small $\sigma_{\min}$.
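Both the normal-equation form (16) and the SVD form (17) of the TR solution can be checked against each other numerically; a hedged sketch with a small test matrix of our own choosing:

```python
import numpy as np

def tikhonov_solution(A, b, lam):
    """TR solution (17): filter factors sigma_i/(sigma_i^2 + lam^2) applied to beta_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    return Vt.T @ (s / (s**2 + lam**2) * beta)

A = np.array([[1.0, 0.0],
              [0.0, 1.0e-4],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
lam = 1.0e-3
x_svd = tikhonov_solution(A, b, lam)
# the normal-equation form (16) gives the same vector:
x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(2), A.T @ b)
```

For $\lambda$ between $\sigma_{\min}$ and $\sigma_{\max}$, as assumed in (18), the filter leaves the large-$\sigma$ components almost untouched and damps the small-$\sigma$ ones.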
3. NEW ESTIMATES OF THE CONDITION NUMBER AND THE
EFFECTIVE CONDITION NUMBER
For the TSVD solution (13), the condition number and the effective condition number of the matrix $A$ are given by
$$\mathrm{Cond}_{k}(A) = \frac{\sigma_{\max}}{\sigma_{k}} \qquad (19)$$
and
$$\mathrm{Cond\_eff}_{k}(A) = \frac{\|b\|}{\sigma_{k}\,\|x_{k}\|}. \qquad (20)$$
When $k = n$, $\mathrm{Cond}_{n}(A) = \mathrm{Cond}(A)$ and $\mathrm{Cond\_eff}_{n}(A) = \mathrm{Cond\_eff}(A)$. Evidently, $\mathrm{Cond\_eff}_{k}(A)$ in (20) is smaller or much smaller than $\mathrm{Cond}_{k}(A)$ in (19).
Now we consider TR in (17). Denote the singular values of the matrix of TR by
$$\bar{\sigma}_{i} = \sigma_{i} + \frac{\lambda^{2}}{\sigma_{i}}. \qquad (21)$$
Then Equation (17) can be rewritten as
$$x_{\lambda} = \sum_{i=1}^{n} \frac{\beta_{i}}{\bar{\sigma}_{i}}\,v_{i}. \qquad (22)$$

† A justification for (18) is given in Remark 4.1 later.
The condition number and the effective condition number are presented, respectively, by
$$\mathrm{Cond}_{\lambda}(A) = \frac{\max_{i}\bar{\sigma}_{i}}{\min_{i}\bar{\sigma}_{i}} \qquad (23)$$
and
$$\mathrm{Cond\_eff}_{\lambda}(A) = \frac{\|b\|}{(\min_{i}\bar{\sigma}_{i})\,\|x_{\lambda}\|}. \qquad (24)$$
Based on the definitions in (19), (20), (23) and (24), we obtain
$$\mathrm{Cond\_eff}_{k}(A) \le \mathrm{Cond}_{k}(A), \qquad (25)$$
$$\mathrm{Cond\_eff}_{\lambda}(A) \le \mathrm{Cond}_{\lambda}(A). \qquad (26)$$
Hence the effective condition number is smaller than the condition number.
Next, we have the following lemma.
Lemma 3.1
If (18) holds, then $\bar{\sigma}_{\min} = \min_{i}\bar{\sigma}_{i} \ge 2\lambda$.
Proof
Define the function
$$f(y) = y + \frac{\lambda^{2}}{y}, \qquad y \in [\sigma_{\min}, \sigma_{\max}]. \qquad (27)$$
The singular values $\bar{\sigma}_{i}$ in (21) are then equal to $f(\sigma_{i})$. By the basic inequality between the arithmetic and geometric means,
$$f(y) = y + \frac{\lambda^{2}}{y} \ge 2\sqrt{y\cdot\frac{\lambda^{2}}{y}} = 2\lambda,$$
where the minimum of (21) is attained if $\lambda = \sigma_{i}$ for some $i$. □
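Lemma 3.1 is easy to confirm numerically; in this sketch the singular values are illustrative and $\lambda$ runs over values satisfying (18):

```python
import numpy as np

sigma = np.array([1.0, 1e-2, 1e-4, 1e-6])        # sigma_max = 1, sigma_min = 1e-6
for lam in [1e-6, 1e-4, 1e-3, 1.0]:              # all lie in [sigma_min, sigma_max]
    sigma_bar = sigma + lam**2 / sigma           # the TR singular values (21)
    # Lemma 3.1: the smallest sigma_bar is at least 2*lam
    assert sigma_bar.min() >= 2 * lam * (1 - 1e-12)

# equality is attained when lam coincides with one of the sigma_i
lam = 1e-4
equal_min = (sigma + lam**2 / sigma).min()       # equals 2*lam here
```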
Lemma 3.2
Let (18) hold. Then
$$\max_{i}\bar{\sigma}_{i} = \sigma_{\max} + \frac{\lambda^{2}}{\sigma_{\max}} \quad \text{if } \lambda \le \sqrt{\sigma_{\min}\sigma_{\max}} \qquad (28)$$
and
$$\max_{i}\bar{\sigma}_{i} = \sigma_{\min} + \frac{\lambda^{2}}{\sigma_{\min}} \quad \text{if } \lambda \ge \sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (29)$$
Proof
From $f(y)$ in (27), we may seek the extreme values of the continuous function $f(y)$ for $y \in [\sigma_{\min}, \sigma_{\max}]$, where $\sigma_{1} = \sigma_{\max}$ and $\sigma_{n} = \sigma_{\min}$. The stationary point is given by
$$0 = f'(y) = 1 - \frac{\lambda^{2}}{y^{2}}, \qquad (30)$$
which implies $y = \lambda \in [\sigma_{\min}, \sigma_{\max}]$ by (18).
Then the largest value must occur at one of the two boundary points, i.e.
$$\max_{y\in[\sigma_{\min},\sigma_{\max}]} f(y) = \max\{f(\sigma_{\min}), f(\sigma_{\max})\}. \qquad (31)$$
To show (28), it is sufficient to prove
$$f(\sigma_{\max}) \ge f(\sigma_{\min}), \qquad (32)$$
i.e.
$$\sigma_{\max} + \frac{\lambda^{2}}{\sigma_{\max}} \ge \sigma_{\min} + \frac{\lambda^{2}}{\sigma_{\min}}, \qquad (33)$$
which is equivalent to
$$\sigma_{\max} - \sigma_{\min} \ge \lambda^{2}\left(\frac{1}{\sigma_{\min}} - \frac{1}{\sigma_{\max}}\right) = \lambda^{2}\,\frac{\sigma_{\max}-\sigma_{\min}}{\sigma_{\min}\sigma_{\max}}. \qquad (34)$$
Equation (34) holds if and only if
$$\lambda \le \sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (35)$$
This is the first desired result (28). Relation (29) can be proven in a similar manner. □
From Lemma 3.2, we obtain the following theorem.
Theorem 3.1
If (18) holds, then the effective condition number and the condition number for TR satisfy
$$\mathrm{Cond\_eff}_{\lambda}(A) \le \frac{\|b\|}{2\lambda\,\|x_{\lambda}\|}, \qquad (36)$$
$$\mathrm{Cond}_{\lambda}(A) \le \frac{\sigma_{\max}^{2}+\lambda^{2}}{2\lambda\,\sigma_{\max}} \quad \text{if } \lambda \le \sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (37)$$
$$\mathrm{Cond}_{\lambda}(A) \le \frac{\sigma_{\min}^{2}+\lambda^{2}}{2\lambda\,\sigma_{\min}} \quad \text{if } \lambda \ge \sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (38)$$
where equality in (36)–(38) is attained if $\lambda = \sigma_{i}$ for some $i$.
Corollary 3.1
Let (18) hold for $\bar{\sigma}_{i}$ in (21). Then the following bound holds:
$$\frac{\|\Delta x_{\lambda}\|}{\|x_{\lambda}\|} \le \kappa_{\lambda}\,\frac{\|\Delta b\|}{\|b_{0}\|}, \qquad (39)$$
where $\Delta b$ is a perturbation of the vector $b$, $b_{0}$ is the projection of the vector $b$ onto the range space of $A$, and the condition number $\kappa_{\lambda}$ is given in [12] by
$$\kappa_{\lambda} = \frac{\sigma_{\max}}{\lambda} \quad \text{if } \lambda \le \sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (40)$$
$$\kappa_{\lambda} = \frac{\lambda}{\sigma_{\min}} \quad \text{if } \lambda \ge \sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (41)$$
Proof
When $\lambda \le \sqrt{\sigma_{\min}\sigma_{\max}}$, we have from Theorem 3.1 and (18)
$$\mathrm{Cond}_{\lambda}(A) \le \frac{\sigma_{\max}^{2}+\lambda^{2}}{2\lambda\,\sigma_{\max}} = \frac{\sigma_{\max}}{2\lambda} + \frac{\lambda}{2\sigma_{\max}} \le \frac{\sigma_{\max}}{2\lambda} + \frac{\sigma_{\max}}{2\lambda} = \frac{\sigma_{\max}}{\lambda} = \kappa_{\lambda}. \qquad (42)$$
This is the first desired result (40).
Similarly, when $\lambda \ge \sqrt{\sigma_{\min}\sigma_{\max}}$, also from Theorem 3.1 and (18) we can deduce
$$\mathrm{Cond}_{\lambda}(A) \le \frac{\sigma_{\min}^{2}+\lambda^{2}}{2\lambda\,\sigma_{\min}} = \frac{\lambda}{\sigma_{\min}}\cdot\frac{1}{2}\left(1+\frac{\sigma_{\min}^{2}}{\lambda^{2}}\right) \le \frac{\lambda}{\sigma_{\min}} = \kappa_{\lambda}. \qquad (43)$$
This is the second desired result (41). □
Corollary 3.1 shows that the definitions of $\mathrm{Cond}_{\lambda}(A)$ in (37) and (38) and of $\kappa_{\lambda}$ in (40) and (41) by Hansen [12] are consistent. However, from (42) and (43), $\mathrm{Cond}_{\lambda}(A)$ in this paper provides a sharper bound: $\mathrm{Cond}_{\lambda}(A) < \kappa_{\lambda}$ when $\sigma_{\min} < \lambda < \sigma_{\max}$.
A relation between the condition numbers for TSVD and TR is presented in the following corollary.
Corollary 3.2
Suppose that (18) holds. When $\lambda = \sigma_{k} \le \sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\mathrm{Cond}_{\lambda}(A) = \frac{\sigma_{\max}}{2\sigma_{k}} + \frac{\sigma_{k}}{2\sigma_{\max}} = \frac{1}{2}\left(\mathrm{Cond}_{k}(A) + \frac{1}{\mathrm{Cond}_{k}(A)}\right), \qquad (44)$$
and when $\lambda = \sigma_{k} \ge \sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\mathrm{Cond}_{\lambda}(A) = \frac{\sigma_{\min}}{2\sigma_{k}} + \frac{\sigma_{k}}{2\sigma_{\min}} = \frac{1}{2}\left(\frac{\mathrm{Cond}_{k}(A)}{\mathrm{Cond}(A)} + \frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{k}(A)}\right). \qquad (45)$$
Corollary 3.2 shows that when $\lambda = \sigma_{k} \le \sqrt{\sigma_{\min}\sigma_{\max}}$, $\mathrm{Cond}_{\lambda}(A)$ is about a half of $\mathrm{Cond}_{k}(A)$.
Corollary 3.3
Let $\lambda = \sigma_{k}$. Then the following bounds hold:
$$\|x_{k}\| \le \frac{\|b\|}{\lambda}, \qquad (46)$$
$$\|x_{\lambda}\| \le \frac{\|b\|}{2\lambda}. \qquad (47)$$
Proof
For $\sigma_{i}$ ($i \le k$), from (13) we have
$$\|x_{k}\|^{2} = \sum_{i=1}^{k} \frac{\beta_{i}^{2}}{\sigma_{i}^{2}} \le \frac{1}{\sigma_{k}^{2}}\sum_{i=1}^{k}\beta_{i}^{2} \le \frac{1}{\lambda^{2}}\sum_{i=1}^{n}\beta_{i}^{2} \le \frac{1}{\lambda^{2}}\sum_{i=1}^{m}\beta_{i}^{2} = \frac{1}{\lambda^{2}}\|b\|^{2}, \qquad (48)$$
where
$$\|b\| = \sqrt{\sum_{i=1}^{m}\beta_{i}^{2}}. \qquad (49)$$
This gives the first desired result (46).
Now, from Lemma 3.1 and Equation (22) we obtain
$$\|x_{\lambda}\|^{2} = \sum_{i=1}^{n}\left(\frac{\beta_{i}}{\bar{\sigma}_{i}}\right)^{2} \le \frac{1}{(\min_{i}\bar{\sigma}_{i})^{2}}\sum_{i=1}^{n}\beta_{i}^{2} \le \frac{1}{(2\lambda)^{2}}\sum_{i=1}^{n}\beta_{i}^{2} \le \frac{1}{(2\lambda)^{2}}\|b\|^{2}. \qquad (50)$$
This deduces the second desired result (47). □
Corollary 3.3 implies that for $\|b\| = O(1)$,
$$\|x_{k}\| = O\!\left(\frac{1}{\lambda}\right), \qquad \|x_{\lambda}\| = O\!\left(\frac{1}{\lambda}\right). \qquad (51)$$
When $\lambda \gg \sigma_{\min}$, the norms of $x_{k}$ and $x_{\lambda}$ in (51) may be reduced significantly compared with $x_{0}$.
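The bounds of Corollary 3.3 can be checked on a random example; the matrix below is an illustrative choice of ours, with $\lambda = \sigma_{3}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
k = 3
lam = s[k - 1]                                   # lam = sigma_k, as in Corollary 3.3
x_k = Vt[:k].T @ (beta[:k] / s[:k])              # TSVD solution (13)
x_lam = Vt.T @ (s / (s**2 + lam**2) * beta)      # TR solution (17)
nb = np.linalg.norm(b)
ok_tsvd = np.linalg.norm(x_k) <= nb / lam        # bound (46)
ok_tr = np.linalg.norm(x_lam) <= nb / (2 * lam)  # bound (47)
```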
Theorem 3.2
When (18) holds, for TSVD with $\sigma_{k} \ge \lambda$, the following bound holds:
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{k}(A)} \ge \frac{\lambda}{\sigma_{\min}}. \qquad (52)$$
For TR, when $\lambda = \sigma_{k} \le \sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{\lambda}(A)} = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{1}{\mathrm{Cond}(A)}\left(\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}\right)}\right], \qquad (53)$$
and when $\lambda = \sigma_{k} \ge \sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{\lambda}(A)} = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{\dfrac{1}{\mathrm{Cond}(A)}+\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}}\right]. \qquad (54)$$
Proof
We have from $\sigma_{k} \ge \lambda$
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{k}(A)} = \frac{\sigma_{\max}/\sigma_{\min}}{\sigma_{\max}/\sigma_{k}} = \frac{\sigma_{k}}{\sigma_{\min}} \ge \frac{\lambda}{\sigma_{\min}}. \qquad (55)$$
This is the first result (52).
Next we consider $\mathrm{Cond}(A)/\mathrm{Cond}_{\lambda}(A)$. When $\lambda = \sigma_{k} \le \sqrt{\sigma_{\min}\sigma_{\max}}$, we have the equality from Theorem 3.1,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{\lambda}(A)} = \frac{\sigma_{\max}/\sigma_{\min}}{(\sigma_{\max}^{2}+\lambda^{2})/(2\lambda\sigma_{\max})} = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{1+\left(\dfrac{\lambda}{\sigma_{\max}}\right)^{2}}\right] = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{\sigma_{\min}}{\sigma_{\max}}\left(\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}\right)}\right] = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{1}{\mathrm{Cond}(A)}\left(\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}\right)}\right]. \qquad (56)$$
Similarly, when $\lambda = \sigma_{k} \ge \sqrt{\sigma_{\min}\sigma_{\max}}$, from Theorem 3.1 we have
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{\lambda}(A)} = \frac{\sigma_{\max}/\sigma_{\min}}{(\sigma_{\min}^{2}+\lambda^{2})/(2\lambda\sigma_{\min})} = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{\dfrac{\sigma_{\min}}{\sigma_{\max}}+\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}}\right] = \frac{\lambda}{\sigma_{\min}}\left[\frac{2}{\dfrac{1}{\mathrm{Cond}(A)}+\dfrac{\lambda^{2}}{\sigma_{\min}\sigma_{\max}}}\right]. \qquad (57)$$ □

Remark 3.1
Let (18) hold. We assume that
$$\lambda = \sigma_{k} = \sqrt{\sigma_{\min}\sigma_{\max}}, \qquad \sigma_{\min} \ll \sigma_{\max}. \qquad (58)$$
Then the following bounds hold:
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{k}(A)} = \sqrt{\mathrm{Cond}(A)} \gg 1 \qquad (59)$$
and
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_{\lambda}(A)} \approx 2\sqrt{\mathrm{Cond}(A)} \gg 2. \qquad (60)$$
Another relation between the condition numbers for TSVD and TR is derived in the following
theorem.
Theorem 3.3
Let (18) hold and $\lambda = \sigma_{k}$. Then, when $\lambda \le \sqrt{\sigma_{\min}\sigma_{\max}}$, there exists the approximation
$$\mathrm{Cond}_{\lambda}(A) \approx \frac{1}{2}\,\mathrm{Cond}_{k}(A), \qquad (61)$$
and when $\lambda \ge \sqrt{\sigma_{\min}\sigma_{\max}}$, the following bounds hold:
$$\mathrm{Cond}_{\lambda}(A) \le \mathrm{Cond}_{k}(A) \quad \text{if } \sigma_{k} \le \sqrt{2\sigma_{\min}\sigma_{\max}-\sigma_{\min}^{2}}, \qquad (62)$$
$$\mathrm{Cond}_{\lambda}(A) \ge \mathrm{Cond}_{k}(A) \quad \text{if } \sigma_{k} \ge \sqrt{2\sigma_{\min}\sigma_{\max}-\sigma_{\min}^{2}}. \qquad (63)$$
Proof
When $\lambda = \sigma_{k} \le \sqrt{\sigma_{\min}\sigma_{\max}}$, we derive from (19) and (37) with the equality
$$\mathrm{Cond}_{\lambda}(A) = \frac{\sigma_{\max}^{2}+\sigma_{k}^{2}}{2\sigma_{k}\sigma_{\max}} = \frac{\sigma_{\max}}{2\sigma_{k}} + \frac{\sigma_{k}}{2\sigma_{\max}} \approx \frac{1}{2}\,\mathrm{Cond}_{k}(A), \qquad (64)$$
by noting that $\sigma_{k} = \lambda \le \sqrt{\sigma_{\min}\sigma_{\max}} \ll \sigma_{\max}$.
Next, when $\lambda = \sigma_{k} \ge \sqrt{\sigma_{\min}\sigma_{\max}}$, we have from (19) and (38) with the equality that
$$\mathrm{Cond}_{k}(A) - \mathrm{Cond}_{\lambda}(A) = \frac{\sigma_{\max}}{\sigma_{k}} - \frac{\sigma_{\min}^{2}+\sigma_{k}^{2}}{2\sigma_{k}\sigma_{\min}} = \frac{1}{2\sigma_{k}\sigma_{\min}}\,(2\sigma_{\max}\sigma_{\min} - \sigma_{\min}^{2} - \sigma_{k}^{2}). \qquad (65)$$
When
$$\sigma_{k} \le \sqrt{2\sigma_{\max}\sigma_{\min} - \sigma_{\min}^{2}},$$
Equation (65) leads to $\mathrm{Cond}_{k}(A) - \mathrm{Cond}_{\lambda}(A) \ge 0$, i.e. $\mathrm{Cond}_{\lambda}(A) \le \mathrm{Cond}_{k}(A)$. On the other hand, when $\sigma_{k} \ge \sqrt{2\sigma_{\max}\sigma_{\min} - \sigma_{\min}^{2}}$, Equation (65) leads to $\mathrm{Cond}_{k}(A) - \mathrm{Cond}_{\lambda}(A) \le 0$, i.e. $\mathrm{Cond}_{\lambda}(A) \ge \mathrm{Cond}_{k}(A)$. □
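The approximation (61) can be observed numerically; the singular-value pattern below is an illustrative choice of ours:

```python
import numpy as np

s_max, s_min = 1.0, 1e-8
sigma_k = 1e-6                         # sigma_k <= sqrt(s_min*s_max) = 1e-4
lam = sigma_k                          # lam = sigma_k, as in Theorem 3.3
cond_k = s_max / sigma_k               # Cond_k(A) = 1e6, from (19)
# Cond_lambda(A) from (37), with equality since lam equals a singular value
cond_lam = (s_max**2 + lam**2) / (2 * lam * s_max)
ratio = cond_lam / cond_k              # ~ 1/2, in agreement with (61)
```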
4. BRIEF ERROR ANALYSIS
The aim of this paper is to apply regularization to numerical PDE, where the minimal singular value of the discrete matrices is close to zero. The PDE problems discussed here are assumed to be well-posed; however, this does not hold for the corresponding numerical solution methods, such as the MFS, where the computation of the solution requires high accuracy. When solving systems of linear equations, to control the accuracy, one usually estimates the difference between the original and the computed regularized solutions. When solving PDEs numerically, the latter difference also involves the discretization error (cf. e.g. [8]). In this paper we use "error" as a general measure of the accuracy of the numerically computed discrete solution of the original problem.
First, we consider the relative error (or the discrepancy) for TSVD:
$$\Delta_{k} = \frac{\|x_{0}-x_{k}\|}{\|x_{0}\|}. \qquad (66)$$
We have
$$x_{0} - x_{k} = \sum_{i=k+1}^{n} \frac{\beta_{i}}{\sigma_{i}}\,v_{i}. \qquad (67)$$
Lemma 4.1
Let $k < n$ and
$$\Delta_{k} = \frac{\|x_{0}-x_{k}\|}{\|x_{0}\|} \le \varepsilon. \qquad (68)$$
A necessary condition for relation (68) is
$$\frac{|\beta_{i}|}{\sigma_{i}} \le \varepsilon\,\|x_{0}\|, \qquad i \ge k+1. \qquad (69)$$
Proof
We have from Equation (67) that
$$\|x_{0}-x_{k}\| = \left\{\sum_{i=k+1}^{n}\left(\frac{\beta_{i}}{\sigma_{i}}\right)^{2}\right\}^{1/2}. \qquad (70)$$
Hence, from (68) we obtain
$$\frac{|\beta_{i}|}{\sigma_{i}} \le \|x_{0}-x_{k}\| \le \varepsilon\,\|x_{0}\|, \qquad i = k+1, k+2, \ldots, n. \qquad (71)$$ □
When $\beta_{i} = u_{i}^{T}b$ ($i > k$) is not small, in order to satisfy (69), the relative error of $x_{k}$ by TSVD may not be small, either.
Next, we consider the relative error for TR:
$$\Delta_{\lambda} = \frac{\|x_{0}-x_{\lambda}\|}{\|x_{0}\|}. \qquad (72)$$
For TR, the error analysis is more complicated. We have the following theorem.
Theorem 4.1
Let (18) hold. Then the following bounds hold for TR:
$$\frac{\lambda^{2}}{\sigma_{\max}^{2}+\lambda^{2}} \le \frac{\|x_{0}-x_{\lambda}\|}{\|x_{0}\|} \le \frac{\lambda^{2}}{\sigma_{\min}^{2}+\lambda^{2}}. \qquad (73)$$
Proof
From Equation (17) we have
$$x_{0} - x_{\lambda} = \sum_{i=1}^{n}\left(1-\frac{\sigma_{i}^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)\frac{\beta_{i}}{\sigma_{i}}\,v_{i} = \sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)\frac{\beta_{i}}{\sigma_{i}}\,v_{i}. \qquad (74)$$
Hence we derive the upper bound in (73):
$$\|x_{0}-x_{\lambda}\|^{2} = \sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\left(\frac{\beta_{i}}{\sigma_{i}}\right)^{2} \le \max_{i}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\sum_{i=1}^{n}\left(\frac{\beta_{i}}{\sigma_{i}}\right)^{2} = \left(\frac{\lambda^{2}}{\sigma_{\min}^{2}+\lambda^{2}}\right)^{2}\|x_{0}\|^{2}. \qquad (75)$$
Next, from (74),
$$\|x_{0}-x_{\lambda}\|^{2} \ge \min_{i}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\sum_{i=1}^{n}\left(\frac{\beta_{i}}{\sigma_{i}}\right)^{2} = \left(\frac{\lambda^{2}}{\sigma_{\max}^{2}+\lambda^{2}}\right)^{2}\|x_{0}\|^{2}, \qquad (76)$$
which gives the lower bound in (73). □
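The two-sided bound (73) can be verified directly; the construction below (prescribed singular values, consistent right-hand side) is our own illustrative setup:

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.array([1.0, 1e-1, 1e-3, 1e-5])            # prescribed singular values
U, _ = np.linalg.qr(rng.standard_normal((6, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = U @ np.diag(s) @ V.T
b = A @ rng.standard_normal(4)                   # consistent system: b in range(A)
x0 = np.linalg.lstsq(A, b, rcond=None)[0]        # x_0 of (12)
lam = 1e-4                                       # satisfies (18)
x_lam = np.linalg.solve(A.T @ A + lam**2 * np.eye(4), A.T @ b)  # TR solution (16)
rel = np.linalg.norm(x0 - x_lam) / np.linalg.norm(x0)
lower = lam**2 / (s[0]**2 + lam**2)              # left side of (73)
upper = lam**2 / (s[-1]**2 + lam**2)             # right side of (73)
```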
Corollary 4.1
Let
$$\lambda = \sqrt{\sigma_{\min}\sigma_{\max}}, \qquad \sigma_{\min} \ll \sigma_{\max}. \qquad (77)$$
Then the following bounds hold:
$$\frac{1}{2\,\mathrm{Cond}(A)} \le \frac{\|x_{0}-x_{\lambda}\|}{\|x_{0}\|} < 1, \qquad (78)$$
where $\mathrm{Cond}(A) = \sigma_{\max}/\sigma_{\min}$.
Proof
From the assumption (77), we have
$$\frac{\lambda^{2}}{\sigma_{\min}^{2}+\lambda^{2}} = \frac{\sigma_{\max}}{\sigma_{\min}+\sigma_{\max}} < 1 \qquad (79)$$
and
$$\frac{\lambda^{2}}{\sigma_{\max}^{2}+\lambda^{2}} = \frac{\sigma_{\min}}{\sigma_{\min}+\sigma_{\max}} = \frac{1}{1+\sigma_{\max}/\sigma_{\min}} \ge \frac{1}{2(\sigma_{\max}/\sigma_{\min})} = \frac{1}{2\,\mathrm{Cond}(A)}.$$
The desired result (78) follows from Theorem 4.1. □


Let $b = \sum_{i=1}^{m}\beta_{i}u_{i}$, where $\beta_{i} = u_{i}^{T}b$. Denote
$$b_{0} = Ax_{0} = \sum_{i=1}^{n}\beta_{i}u_{i}, \qquad \Delta b = \sum_{i=n+1}^{m}\beta_{i}u_{i}, \qquad (80)$$
where $m \ge n$ is defined in Equation (10). Then we have
$$\|b\| = \sqrt{\sum_{i=1}^{m}\beta_{i}^{2}}, \qquad \|b_{0}\| = \sqrt{\sum_{i=1}^{n}\beta_{i}^{2}}, \qquad \|\Delta b\| = \sqrt{\sum_{i=n+1}^{m}\beta_{i}^{2}}, \qquad (81)$$
and
$$\|b\|^{2} = \|b_{0}\|^{2} + \|\Delta b\|^{2}. \qquad (82)$$
We get the following theorem.
Theorem 4.2
When (18) holds, then the following bounds hold:
$$\frac{\lambda^{2}}{\sigma_{\max}^{2}+\lambda^{2}} \le \frac{\|b_{0}-Ax_{\lambda}\|}{\|b_{0}\|} \le \frac{\lambda^{2}}{\sigma_{\min}^{2}+\lambda^{2}}. \qquad (83)$$
Proof
We have
$$\|Ax_{\lambda}-b_{0}\|^{2} = \left\|\sum_{i=1}^{n}\left(\frac{\sigma_{i}^{2}}{\sigma_{i}^{2}+\lambda^{2}}-1\right)\beta_{i}u_{i}\right\|^{2} = \left\|\sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)\beta_{i}u_{i}\right\|^{2} = \sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\beta_{i}^{2}. \qquad (84)$$
Then
$$\left(\frac{\lambda^{2}}{\sigma_{\max}^{2}+\lambda^{2}}\right)^{2}\sum_{i=1}^{n}\beta_{i}^{2} \le \sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\beta_{i}^{2} \le \left(\frac{\lambda^{2}}{\sigma_{\min}^{2}+\lambda^{2}}\right)^{2}\sum_{i=1}^{n}\beta_{i}^{2}. \qquad (85)$$
The desired result (83) follows from (84) and (81). □
From Theorems 4.1 and 4.2, although the errors are bounded by the factor $\lambda^{2}/(\sigma_{\min}^{2}+\lambda^{2})\ (<1)$, they may not be small, which is undesirable for numerical PDE. For problems with noisy data, the exact solutions may not exist, or are meaningless even if they exist. The useful solutions, as in the case of image processing, may allow a certain range of errors, which need not be very small, though. More analysis is given by Hansen [12].
Remark 4.1
From (80) and (84), we have
$$\|Ax_{\lambda}-b\|^{2} = \|\Delta b\|^{2} + \|b_{0}-Ax_{\lambda}\|^{2}, \qquad (86)$$
where
$$\|b_{0}-Ax_{\lambda}\|^{2} = \sum_{i=1}^{n}\left(\frac{\lambda^{2}}{\sigma_{i}^{2}+\lambda^{2}}\right)^{2}\beta_{i}^{2}. \qquad (87)$$
Then we have
$$\|Ax_{\lambda}-b\|^{2} \to \|\Delta b\|^{2} + \sum_{i=1}^{n}\beta_{i}^{2} = \|\Delta b\|^{2} + \|b_{0}\|^{2} = \|b\|^{2} \quad \text{as } \lambda \to \infty.$$
Since a large $\lambda$ cannot reduce the errors, we should choose $\lambda \le \sigma_{\max}$. Moreover, from (21), when $\lambda < \sigma_{\min}$, the minimal singular value satisfies $\bar{\sigma}_{\min} = O(\sigma_{\min})$. Then we choose $\lambda \ge \sigma_{\min}$. For both accuracy and stability of the TR solutions, we conclude that the assumption (18) for $\lambda$ is reasonable.
5. NUMERICAL TESTS
The instability of the MFS is very severe. In this section, we investigate the discrete Laplace operator by the MFS, carrying out numerical experiments to verify our analysis.
Figure 1. A rectangular domain.
5.1. The MFS
Consider the Dirichlet problem for the Laplace operator
$$\Delta u = 0 \quad \text{in } S, \qquad u = g \quad \text{on } \partial S, \qquad (88)$$
where $S = \{(x,y) \mid -1 < x < 1,\ 0 < y < 1\}$. We choose the smooth solution
$$u = \sin(k\pi x)\sinh(k\pi y), \qquad k = 1 \text{ or } 2. \qquad (89)$$
The Dirichlet boundary conditions are given explicitly by (see Figure 1)
$$u|_{\overline{AB}\cup\overline{CD}\cup\overline{AD}} = 0, \qquad u|_{\overline{BC}} = g = \sin(k\pi x)\sinh(k\pi). \qquad (90)$$
Let $G(0,\tfrac{1}{2})$ be the center of the polar coordinates $(r,\theta)$; then $r_{\max} = \max_{\bar S} r = \overline{GB} = \sqrt{5}/2$. We use the MFS in Li [31] and the stability analysis in [9]. Choose the source points $Q_{i} = \{(r,\theta) \mid r = R,\ \theta = ih\}$ uniformly on the circle, where $R > r_{\max}$ and $h = 2\pi/N$. The fundamental solutions are given by
$$\phi_{i}(P) = \ln|\overline{PQ_{i}}|, \qquad i = 1, 2, \ldots, N, \qquad (91)$$
where $P \in S\cup\partial S$. We choose their linear combination
$$u_{N} = \sum_{i=1}^{N} c_{i}\,\phi_{i}(P) \qquad (92)$$
as the approximate solution of (88), where the $c_{i}$ are the coefficients to be determined. Since the functions in (92) are harmonic, we may establish the collocation equations directly by satisfying the Dirichlet boundary conditions (90). Hence we have
$$u_{N}(P_{j}) = \sum_{i=1}^{N} c_{i}\,\phi_{i}(P_{j}) = 0, \qquad P_{j} \in \overline{AB}\cup\overline{CD}\cup\overline{AD}, \qquad (93)$$
$$u_{N}(P_{j}) = \sum_{i=1}^{N} c_{i}\,\phi_{i}(P_{j}) = g(P_{j}), \qquad P_{j} \in \overline{BC}. \qquad (94)$$
For simplicity, we choose uniform collocation points $P_{j}$. Such an approach can be described as the LS method: find $u_{N} \in V_{L}$ such that [32]
$$u_{N} = \arg\min_{v\in V_{L}} \hat{I}(v), \qquad (95)$$
where $V_{L}$ is the set of the approximate solutions (92), and
$$\hat{I}(v) = \int_{\Gamma}(v-g)^{2}\,\mathrm{d}\ell. \qquad (96)$$
In (96), $\Gamma = \partial S$, $g = 0$ on $\overline{AB}\cup\overline{CD}\cup\overline{AD}$, $g = \sin(k\pi x)\sinh(k\pi)$ on $\overline{BC}$, and the boundary integral $\int_{\Gamma}$ is approximated by the central rule. We may also establish the collocation equations by the Gaussian rule:
$$w_{j}\sum_{i=1}^{N} c_{i}\,\phi_{i}(P_{j}) = 0, \qquad P_{j} \in \overline{AB}\cup\overline{CD}\cup\overline{AD}, \qquad (97)$$
$$w_{j}\sum_{i=1}^{N} c_{i}\,\phi_{i}(P_{j}) = w_{j}\,g(P_{j}), \qquad P_{j} \in \overline{BC}, \qquad (98)$$
where $w_{j}$ and $P_{j}$ are the weights and integration nodes, respectively. Let $M$ denote the number of uniform collocation nodes along $\overline{AB}$; the total number of collocation equations is then $6M$ (see Figure 1). Choose $6M > N$ and $R = 2 > \sqrt{5}/2$. Then Equations (97) and (98), as well as (93) and (94), can be represented by the over-determined linear system (1).
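A compact sketch of how the over-determined MFS system (93)–(94) can be assembled and solved; the discretization details (uniform nodes, no quadrature weights, N = 36, M = 30) are our own illustrative choices, not the exact setup of the paper:

```python
import numpy as np

k, N, M, R = 1, 36, 30, 2.0
# source points Q_i on the circle r = R centered at G(0, 1/2), as in (91)
t = 2 * np.pi * np.arange(N) / N
Q = np.column_stack([R * np.cos(t), 0.5 + R * np.sin(t)])

# uniform collocation points on the boundary of S = (-1,1) x (0,1)
xs = np.linspace(-1.0, 1.0, 2 * M)
ys = np.linspace(0.0, 1.0, M)
P = np.vstack([np.column_stack([xs, np.zeros_like(xs)]),   # bottom edge
               np.column_stack([xs, np.ones_like(xs)]),    # top edge BC
               np.column_stack([-np.ones_like(ys), ys]),   # left edge
               np.column_stack([np.ones_like(ys), ys])])   # right edge
# boundary data (90): g = sin(k*pi*x)*sinh(k*pi) on BC, zero elsewhere
g = np.where(P[:, 1] == 1.0, np.sin(k * np.pi * P[:, 0]) * np.sinh(k * np.pi), 0.0)

# collocation matrix phi_i(P_j) = ln|P_j - Q_i|, then least squares for the c_i
Amat = np.log(np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2))
c, *_ = np.linalg.lstsq(Amat, g, rcond=None)
rel_resid = np.linalg.norm(Amat @ c - g) / np.linalg.norm(g)
```

Even at this modest N the relative boundary residual is small, while the rapid decay of the singular values of Amat exhibits the ill-conditioning discussed in Section 5.2.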
The errors and condition numbers are given in Table I, where
$$\|\varepsilon\|_{B} = \left(\int_{\Gamma}(u_{N}-g)^{2}\,\mathrm{d}\ell\right)^{1/2} \qquad (99)$$
denotes the real error of the numerical solutions; for numerical PDE it is more important than the discrepancy $\|x_{0}-x_{\lambda}\|/\|x_{0}\|$. We will focus on the error $\|\varepsilon\|_{B}$, which also differs from the existing literature. From Table I, we can see
$$\|\varepsilon\|_{B} = O(0.57^{N}), \qquad \sigma_{\max} = O(1), \qquad \sigma_{\min} = O(0.5^{N}),$$
$$\mathrm{Cond}(A) = O(2.04^{N}), \qquad \mathrm{Cond\_eff}(A) = O(1.47^{N}), \qquad \|x\| = O(1.48^{N}).$$
5.2. The regularization
Since $\sigma_{\min}$ is close to zero, the coefficients $c_{i}$ are very large and the ill-conditioning is very severe. To reduce the ill-conditioning, we use TSVD and TR. First choose $\lambda = \sigma_{k}$ in TSVD with $N = 71$ and $M = 50$; the errors and condition numbers are listed in Table II. In Table II, when $k = 71$, the solution is the same as that of Table I with $N = 71$ and $M = 50$. Evidently, when $k = 57$, the solution $x_{k}$ with coefficients $c_{i}$ is reduced from $O(10^{8})$ to $O(10^{2})$, and $\mathrm{Cond}_{k}(A)$ from $O(10^{24})$ to $O(10^{17})$. When $k = 57$, the error $\|\varepsilon\|_{B} = O(10^{-15})$ increases only by a factor of 10, whereas the effective condition number $\mathrm{Cond\_eff}_{k}(A) = O(10^{14})$ decreases by a factor of 200.
Next, we choose $\lambda = \sigma_{m}$ in TR; the errors and condition numbers are listed in Table III. We list the data in Table III when $\lambda = \sigma_{57}$:
$$\|\varepsilon\|_{B} = 0.172(-14), \qquad \mathrm{Cond}_{\lambda}(A) = 0.319(17), \qquad \|x_{\lambda}\| = 178. \qquad (100)$$
In (100), a small $\|x_{\lambda}\|$ and a huge $\mathrm{Cond}_{\lambda}(A)$ occur together. This indicates that small solutions do not necessarily guarantee well-conditioning. Hence, for the stability analysis and the parameter choice of TR, we should choose Cond as a criterion, although the TR method in (15) originated from reducing $\|x\|$. This is the key difference between numerical PDE and the existing literature on TR.
When $m = 57$ in Table III, the errors and condition numbers show behavior similar to that obtained by TSVD in Table II. When $k = m = 57$,
$$\mathrm{Cond}_{k}(A) = 0.639(17), \qquad \mathrm{Cond}_{\lambda}(A) = 0.319(17),$$
Table I. The error norms and condition numbers by MFS for the Laplace operator with R = 2. (In Tables I–III, 0.272(−5) denotes 0.272 × 10⁻⁵.)

N              28          42          56          71
M              20          30          40          50
‖ε‖_B          0.272(−5)   0.122(−8)   0.673(−12)  0.171(−15)
‖x‖            0.143(4)    0.139(7)    0.267(7)    0.331(9)
‖b‖            2.579       2.108       1.826       1.633
σ_max          1.331       1.331       1.331       1.340
σ_min          0.414(−10)  0.360(−15)  0.241(−20)  0.487(−25)
Cond(A)        0.321(11)   0.370(16)   0.552(21)   0.275(26)
Cond_eff(A)    0.435(8)    0.421(10)   0.283(15)   0.101(18)
Table II. Using TSVD by MFS for the Laplace operator with N = 71, M = 50 and λ = σ_k.

k (λ = σ_k)       71          69          65          59          57          55          51          50
‖ε‖_B             0.171(−15)  0.181(−15)  0.204(−15)  0.949(−15)  0.171(−14)  0.259(−14)  0.922(−12)  0.111(−11)
‖x₀ − x_k‖        0           0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)
‖x₀ − x_k‖/‖x₀‖   0           0.999       0.999       1           1           1           1           1
‖x_k‖             0.331(9)    0.653(7)    0.785(6)    0.213(4)    179         177         177         177
σ_max             1.34        1.34        1.34        1.34        1.34        1.34        1.34        1.34
σ_k               0.487(−25)  0.102(−22)  0.197(−21)  0.614(−18)  0.209(−16)  0.198(−13)  0.876(−12)  0.353(−11)
Cond_k(A)         0.275(26)   0.131(24)   0.679(22)   0.218(19)   0.639(17)   0.678(14)   0.153(13)   0.379(12)
Cond_eff_k(A)     0.101(18)   0.245(17)   0.105(17)   0.125(16)   0.434(15)   0.468(12)   0.105(11)   0.262(10)
Table III. Using TR by MFS for the Laplace operator with N = 71, M = 50 and λ = σ_m.

m (λ = σ_m)       71          69          65          59          57          55          51          50
‖ε‖_B             0.171(−15)  0.183(−15)  0.206(−15)  0.114(−14)  0.172(−14)  0.123(−13)  0.683(−12)  0.234(−11)
‖x_λ‖             0.167(9)    0.374(7)    0.718(6)    0.109(4)    178         177         177         177
‖x_λ‖/‖x₀‖        0.504       0.113(−1)   0.217(−2)   0.331(−5)   0.536(−6)   0.534(−6)   0.534(−6)   0.534(−6)
σ_max             1.34        1.34        1.34        1.34        1.34        1.34        1.34        1.34
σ_m               0.489(−25)  0.102(−22)  0.197(−21)  0.614(−18)  0.209(−16)  0.198(−14)  0.876(−12)  0.353(−11)
Cond_λ(A)         0.138(26)   0.655(23)   0.339(22)   0.109(19)   0.319(17)   0.339(14)   0.765(12)   0.189(12)
Cond_eff_λ(A)     0.100(18)   0.213(17)   0.535(16)   0.121(16)   0.219(15)   0.234(12)   0.527(10)   0.131(9)
and then
$$\mathrm{Cond}_{\lambda}(A) \approx \tfrac{1}{2}\,\mathrm{Cond}_{k}(A), \qquad (101)$$
completely consistent with (61) in Theorem 3.3. The values of $\mathrm{Cond}_{k}(A)$ and $\mathrm{Cond}_{\lambda}(A)$ listed in Tables II and III show that when $k = m > 51$ their relations are also consistent with (62) in Theorem 3.3.
Remark 5.1
From the given examples of regularization, when $k = 57$ the reduction factor $O(10^{7})$ of Cond is much more significant than the increase factor $O(10)$ of the errors. Under a fixed number of working digits, highly accurate solutions can be achieved by using regularization. Suppose that 31 working digits are chosen. Then we may achieve $\|\varepsilon\|_{B} = O(10^{-14})$ by TSVD with $k = 57$. Without regularization, however, only $\|\varepsilon\|_{B} = O(10^{-7})$ is obtained, since the instability with $\mathrm{Cond} = O(10^{24})$ already costs 24 working digits. For numerical PDE, the errors may be allowed to grow by a small factor, such as $O(10)$ to $O(10^{2})$, while the condition numbers can be reduced remarkably. Hence, fewer working digits are used for highly accurate solutions, with less CPU time and computer storage.
In summary, for the TSVD and the TR, new computational formulas for Cond and Cond_eff are derived (see Theorem 3.1). Equations (37) and (38) for $\mathrm{Cond}_{\lambda}(A)$ are sharper than those in [12] (see Corollary 3.1). Moreover, the improvements of stability by TSVD and TR are explored in Theorem 3.2 and Remark 3.1. The focus of the regularization is on reducing the condition number, not on $\|x_{0}\|$. This is a key difference between this paper and the existing literature. Numerical tests are carried out for the discrete Laplace equation by the MFS, and the numerical results confirm our theoretical analysis. TSVD and TR have been discussed for image processing with noise in many papers. In this paper, their different intrinsic properties, such as the condition number, the effective condition number and the error bounds, are explored in detail; these can be regarded as a theoretical basis for TSVD and TR applied to numerical PDE.
ACKNOWLEDGEMENTS
We are grateful to Prof. Owe Axelsson and the two reviewers for their valuable comments and detailed suggestions, and express our thanks to S. L. Huang for the computations in this paper.
Y. Wei is supported by the National Natural Science Foundation of China under grant 10871051, Doctoral
Program of the Ministry of Education under grant 20090071110003, Shanghai Education Committee
(Dawn Project) and Shanghai Science & Technology Committee under grant 09DZ2272900.
REFERENCES
1. Björck Å. Numerical Methods for Least Squares Problems. SIAM: Philadelphia, PA, 1996.
2. Higham NJ. Accuracy and Stability of Numerical Algorithms (2nd edn). SIAM: Philadelphia, PA, 2002.
3. Baboulin M, Dongarra J, Gratton S, Langou J. Computing the conditioning of the components of a linear least-squares solution. Numerical Linear Algebra with Applications 2009; 16:517–533.
4. Cucker F, Diao H, Wei Y. On mixed and componentwise condition numbers for Moore–Penrose inverse and linear least squares problems. Mathematics of Computation 2007; 76:947–963.
5. Demmel JW, Hida Y, Li XS, Riedy EJ. Extra-precise iterative refinement for overdetermined least squares problems. ACM Transactions on Mathematical Software 2009; 35(4):Article No. 28.
6. Huang HT, Li ZC. Effective condition number and superconvergence of the Trefftz method coupled with high-order FEM for singularity problems. Engineering Analysis with Boundary Elements 2006; 30:270–283.
7. Li ZC, Chien CS, Huang HT. Effective condition number for finite difference method. Journal of Computational and Applied Mathematics 2007; 198:208–235.
8. Li ZC, Huang HT. Effective condition number for numerical partial differential equations. Numerical Linear Algebra with Applications 2008; 15:575–594.
9. Li ZC, Huang J, Huang HT. Stability analysis of method of fundamental solutions for mixed boundary value problems of Laplace's equation. Computing 2010; 88:1–29.
10. Hansen PC, Nagy JG, O'Leary DP. Deblurring Images: Matrices, Spectra, and Filtering. SIAM: Philadelphia, PA, 2006.
11. Hanke M, Hansen PC. Regularization methods for large-scale problems. Surveys on Mathematics for Industry 1993; 3:253–315.
12. Hansen PC. Truncated singular value decomposition solutions to discrete ill-posed problems with ill-determined numerical rank. SIAM Journal on Scientific and Statistical Computing 1990; 11:503–518.
13. Hansen PC. Rank-deficient and Discrete Ill-posed Problems. SIAM: Philadelphia, PA, 1997.
14. Chan TF, Hansen PC. Computing truncated singular value decomposition least squares solutions by rank revealing QR-factorizations. SIAM Journal on Scientific and Statistical Computing 1990; 11:519–530.
15. Chen CS, Cho HA, Golberg MA. Some comments on the ill-conditioning of the method of fundamental solutions. Preprint, Department of Mathematics, University of Southern Mississippi, Hattiesburg, MS, 2006.
16. Chen CS, Hon YC, Schaback RA. Scientific computing with radial basis functions. Preprint, Department of Mathematics, University of Southern Mississippi, Hattiesburg, MS, 2005.
17. Tikhonov AN. Solution of incorrectly formulated problems and the regularisation method. Doklady Akademii Nauk 1963; 151:501–504 (in Russian); Soviet Mathematics 4:1035–1038 (in English).
18. Tikhonov AN, Arsenin VY. On the Solution of Ill-posed Problems. Wiley: New York, 1977.
19. Calvetti D, Reichel L. Tikhonov regularization of large linear problems. BIT 2003; 43:263–283.
20. Eldén L. Algorithms for the regularization of ill-conditioned least squares problems. BIT 1977; 17:134–145.
21. Engl HW, Hanke M, Neubauer A. Regularization of Inverse Problems. Kluwer: Dordrecht, 1996.
22. Gulliksson ME, Wedin P, Wei Y. Perturbation identities for regularized Tikhonov inverses and weighted pseudoinverses. BIT 2000; 40:513–523.
23. Hansen PC. Perturbation bounds for discrete Tikhonov regularisation. Inverse Problems 1989; 5:L41–L44.
24. Hansen PC. Regularization Tools version 4.0 for Matlab 7.3. Numerical Algorithms 2007; 46:189–194.
25. Malyshev AN. A unified theory of conditioning for linear least squares and Tikhonov regularization solutions. SIAM Journal on Matrix Analysis and Applications 2003; 24:1186–1196.
26. Vogel CR. Computational Methods for Inverse Problems. SIAM: Philadelphia, PA, 2002.
27. Wei Y, Zhang N, Ng M, Xu W. Tikhonov regularization for weighted total least squares problems. Applied Mathematics Letters 2007; 20:82–87.
28. Phillips DL. A technique for the numerical solution of certain integral equations of the first kind. Journal of the Association for Computing Machinery 1962; 9:84–97.
29. Fierro RD, Golub GH, Hansen PC, O'Leary DP. Regularization by truncated total least squares. SIAM Journal on Scientific Computing 1997; 18:1223–1241.
30. Li ZC, Huang HT. Studies on effective condition number for collocation methods. Engineering Analysis with Boundary Elements 2008; 32:839–848.
31. Li ZC. Method of fundamental solutions for annular shaped domains. Journal of Computational and Applied Mathematics 2009; 228:355–372.
32. Li ZC, Lu TT, Hu HY, Cheng AH-D. Trefftz and Collocation Methods. WIT Press: Southampton, Boston, 2008.
