
GALERKIN-WAVELET METHODS

FOR TWO-POINT BOUNDARY VALUE PROBLEMS


JIN-CHAO XU† AND WEI-CHANG SHANN‡
Abstract. Anti-derivatives of wavelets are used for the numerical solution of differential equations. Optimal error estimates are obtained in the applications to two-point boundary value problems of second order. The orthogonal property of the wavelets is used to construct efficient iterative methods for the solution of the resultant linear algebraic systems. Numerical examples are given.
Key words. wavelets, finite element method, two-point boundary value problem, conjugate gradient method, preconditioner, piecewise linear hierarchical basis
Subject classifications: AMS(MOS): 65N30, 65N13, 65F10; CR: G1.8.
1. Introduction. In this paper, we shall study the application of wavelet functions to the numerical solution of differential equations. The wavelets under consideration are those discovered by Daubechies in 1988 [7], which have compact support and form an orthonormal basis of $L^2(\mathbb{R})$.
The idea of wavelets may be traced back to Calderón [2] and Coifman and Weiss [4]. Modern ideas grew out of applications in signal processing, cf. Liénard [19], Rodet [24] and Goupillaud, Grossmann and Morlet [12]. Earlier mathematical analysis of wavelets can be found in Strömberg [26], Grossmann and Morlet [14] and Meyer [22]. In recent years, wavelets have been studied by Meyer and his group within the multiresolution framework, cf. Mallat [20] and Meyer [23]. The multiresolution approach led to the discovery of a family of compactly supported wavelets by Daubechies in 1988 [7]. For applications of wavelets to practical problems, we refer to [5, 6, 10, 13, 17, 18, 21].
Applying wavelets to discretize differential equations appears to be a very attractive idea. In finite element type methods, piecewise polynomial trial functions may be replaced by wavelets. Such an idea was explored by Glowinski, Lawton, Ravachol and Tenenbaum [10] (see also the unpublished technical reports cited therein). Many interesting numerical examples in their paper suggest that wavelets have great potential in the numerical solution of differential equations.
At this stage of research, a thorough study of one-dimensional problems is still necessary and theoretically important. The simplest case would be the two-point boundary value problem for the second order elliptic equation, but how wavelets can be appropriately used in this case is still not clear. If the wavelets are used directly as trial functions, as is done in [10], several difficulties are encountered. First, due to the lack of regularity, "lower order" wavelets cannot be employed. Secondly, Dirichlet boundary value problems cannot be treated directly because the boundary conditions are hard to impose on the subspaces. Thirdly, the orthogonality, one of the main features of wavelets, does not play any significant role.
In this paper, we shall take a different approach. Instead of using wavelets directly, we take their anti-derivatives as trial functions. In this way, the singularity of the wavelets is smoothed, the boundary conditions can be treated easily, and the orthogonality is used to construct efficient algorithms for solving the underlying algebraic system. Moreover, we can develop a rather complete theory for the most important aspects of our algorithm.
Our approach was motivated by the observation that the derivatives of the so-called hierarchical basis functions (cf. [27, 28]) in one-dimensional piecewise linear finite elements are exactly
† Department of Mathematics, The Pennsylvania State University, University Park, PA 16802. xu@math.psu.edu. This work was supported by the National Science Foundation.
‡ National Central University, Chung-Li, Taiwan, R.O.C. t210001@sparc20.ncu.edu.tw
the wavelets of order 1, namely the Haar basis [15]. Taking the anti-derivatives of higher order wavelets then leads to higher order trial functions.
This work is our first attempt to use wavelets for the numerical solution of differential equations. For the one-dimensional problems studied in this paper, the wavelet approach appears attractive and our theory is rather complete. Applications to higher dimensional problems (on domains other than rectangles) remain to be explored, and the prospects of the method there are yet to be seen.
We shall use the standard notation $L^2(\mathbb{R})$ for the space of square integrable functions. Two functions $u, v \in L^2(\mathbb{R})$ are orthogonal in $L^2(\mathbb{R})$ if $(u,v) = 0$. Given $\Omega = (a,b)$ ($-\infty \le a < b \le \infty$), $H^s(\Omega)$ denotes the standard Sobolev space with the norm $\|\cdot\|_{s,\Omega}$ and semi-norm $|\cdot|_{s,\Omega}$ given by
$$\|v\|_{s,\Omega}^2 = \sum_{i=0}^{s}\int_a^b |v^{(i)}(x)|^2\,dx \quad\text{and}\quad |v|_{s,\Omega}^2 = \int_a^b |v^{(s)}(x)|^2\,dx.$$
For $-\infty < a < b < \infty$, we define
$$H_0^1(\Omega) = \{v \in H^1(\Omega) \mid v(a) = v(b) = 0\} \quad\text{and}\quad H_*^1(\Omega) = \{v \in H^1(\Omega) \mid v(a) = 0\}.$$
It is well known that the semi-norm $|\cdot|_{1,\Omega}$ is a norm (equivalent to $\|\cdot\|_{1,\Omega}$) on these two spaces. When $\Omega = \mathbb{R}$, we write $\|v\|_s = \|v\|_{s,\Omega}$ and $|v|_s = |v|_{s,\Omega}$.
Throughout this paper, we shall use the letter $C$ to denote a generic positive constant which may stand for different values at its different appearances. When it is not important to keep track of these constants, we shall conceal the letter $C$ in the notation $\lesssim$, $\eqsim$, or $\gtrsim$. Here
$$(1.1)\qquad x \lesssim y \ \text{ means }\ x \le Cy;$$
$x \gtrsim y$ means $-x \lesssim -y$, and $x \eqsim y$ means $x \lesssim y$ and $x \gtrsim y$.
The rest of the paper is organized as follows. Section 2 gives a brief introduction to wavelet functions. In Section 3, a simple technique by means of anti-derivatives is presented to construct a frame in the space $H^1$ from a frame in the space $L^2$; the optimal approximation properties of the wavelet-based frames in $H_0^1$ (or $H_*^1$, $H^1$) are established. Applications to the numerical solution of a second order two-point boundary value problem are given in Section 4. This section concerns the optimal error estimates for the approximate solution, the selection of basis functions, effective iterative techniques for solving the underlying linear algebraic system and finally some comparisons with the standard hierarchical basis method. Finally, in Section 5, a number of numerical examples are given to support our theory.
2. Basic properties of wavelets. In this section, we shall give a brief introduction to the construction and basic properties of wavelets. More detailed discussions can be found in [20, 23, 25].
Given a positive integer $p$, consider a set of constants $c_k$ that satisfy the following four properties:
$$(2.1)\qquad c_k = 0 \quad\text{for } k \notin \{0, 1, \ldots, 2p-1\};$$
$$(2.2)\qquad \sum_k c_k = 2;$$
$$(2.3)\qquad \sum_k (-1)^k k^m c_k = 0, \quad\text{for } 0 \le m \le p-1;$$
and
$$(2.4)\qquad \sum_k c_k c_{k-2m} = 2\delta_{0m}, \quad\text{for } 1-p \le m \le p-1.$$
Solutions $c_k$ of (2.1)–(2.4) for $1 \le p \le 10$ can be found in [7] (the coefficients $h(n)$ are listed there on p. 980, with $c_n = \sqrt{2}\,h(n)$). In particular, for $p = 1$,
$$c_0 = 1, \qquad c_1 = 1,$$
and, for $p = 2$,
$$c_0 = \frac{1+\sqrt{3}}{4}, \quad c_1 = \frac{3+\sqrt{3}}{4}, \quad c_2 = \frac{3-\sqrt{3}}{4}, \quad c_3 = \frac{1-\sqrt{3}}{4}.$$
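As a sanity check, conditions (2.2)–(2.4) can be verified numerically for these coefficients. The following Python snippet (ours, not part of the paper) checks them for the $p = 2$ coefficients; (2.1) holds by construction since only $c_0, \ldots, c_3$ are stored.

```python
import math

# Daubechies coefficients for p = 2, in the normalization of this paper
s3 = math.sqrt(3.0)
c = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]

# (2.2): sum_k c_k = 2
assert abs(sum(c) - 2.0) < 1e-12

# (2.3): sum_k (-1)^k k^m c_k = 0 for 0 <= m <= p - 1
for m in range(2):
    assert abs(sum((-1) ** k * k ** m * ck for k, ck in enumerate(c))) < 1e-12

# (2.4): sum_k c_k c_{k-2m} = 2 * delta_{0m} for 1 - p <= m <= p - 1
for m in range(-1, 2):
    val = sum(c[k] * c[k - 2 * m]
              for k in range(4) if 0 <= k - 2 * m < 4)
    assert abs(val - (2.0 if m == 0 else 0.0)) < 1e-12
```

The same checks apply, with the obvious range changes, to the tabulated coefficients for higher $p$.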
For each $p \ge 1$ and the corresponding coefficients $c_k$ determined above, we consider the so-called dilation equation
$$(2.5)\qquad \phi(x) = \sum_k c_k \phi(2x - k).$$
It is shown in [7] that (2.5) has a unique solution such that $\operatorname{supp}\phi = [0, R]$, with $R = 2p-1 > 0$. There are several ways to construct the solution $\phi(x)$ (cf. [23, 25]). For example, we can take $\phi$ to be the limit of the iteration
$$(2.6)\qquad \phi_{n+1}(x) = \sum_k c_k \phi_n(2x - k), \quad n = 0, 1, \ldots,$$
where $\phi_0$ is the unit indicator function $\chi_{[0,1]}$. For $p = 1$, $\phi_0$ is invariant under (2.6) and hence $\phi(x) = \chi_{[0,1]}$. For $p = 2$, $\phi(x) \in C^r$ where $r$ is about $0.55$ (Figure 1). In general, $r = r(p)$ is an increasing function [8] and $r(p) \to \infty$ linearly as $p \to \infty$ [7].
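The iteration (2.6) is convenient to carry out on a dyadic grid, since $2x - k$ maps grid points to grid points. The sketch below (plain Python; the grid depth and iteration count are our arbitrary choices) approximates $\phi$ for $p = 2$ and checks the partition-of-unity identity $\sum_k \phi(x-k) = 1$, which appears as (2.12) below.

```python
import math

def cascade(c, R, D=7, iters=30):
    """Iterate (2.6) starting from the indicator of [0, 1), sampling
    phi_n at the dyadic points x = i / 2**D with 0 <= x <= R."""
    n = R * 2 ** D + 1
    phi = [1.0 if i < 2 ** D else 0.0 for i in range(n)]
    for _ in range(iters):
        new = [0.0] * n
        for i in range(n):
            for k, ck in enumerate(c):
                j = 2 * i - k * 2 ** D  # grid index of 2x - k
                if 0 <= j < n:          # phi_n vanishes outside [0, R]
                    new[i] += ck * phi[j]
        phi = new
    return phi

s3 = math.sqrt(3.0)
c2 = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]  # p = 2
phi = cascade(c2, R=3)

# Partition of unity at x = 0.5: only the translates phi(x - k) with
# x - k in [0, 3] contribute, i.e. phi(0.5) + phi(1.5) + phi(2.5) = 1.
x0 = 2 ** 6
s = phi[x0] + phi[x0 + 2 ** 7] + phi[x0 + 2 ** 8]
```

The identity is preserved exactly at every step of (2.6) because the even- and odd-indexed coefficients each sum to 1 by (2.2) and (2.3), so the check holds to rounding error even though $\phi_n$ itself has not fully converged.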
Given $\phi(x)$ (the solution of (2.5)), we define
$$(2.7)\qquad \tilde\psi(x) = \sum_k (-1)^k c_{1-k}\,\phi(2x - k) \quad\text{and}\quad \psi(x) = \tilde\psi(x - p + 1).$$
Note that $\operatorname{supp}\tilde\psi = [1-p, p]$ and $\operatorname{supp}\psi = [0, R]$. Usually the function $\tilde\psi$ is used to define wavelets, but for our purpose $\psi$ will be used instead. For $p = 1$, $\psi$ is the Haar function [15]. Pictures of $\psi$ for the cases $p = 1$ and $2$ are shown in Figures 2 and 3, respectively.
We are now in a position to present the definition of wavelets.
Definition 2.1. Assume $p \ge 1$ is a given integer. Let $c_k$ be determined by (2.1)–(2.4), $\phi$ satisfy (2.5) and $\psi$ be given in (2.7). Define
$$(2.8)\qquad \psi_{jk}(x) = \begin{cases} \phi(x - k), & \text{for } j = -1, \\ \sqrt{2^j}\,\psi(2^j x - k), & \text{for } j \ge 0. \end{cases}$$
The functions $\psi_{jk}$ with $j \ge -1$ and $(j,k) \in \mathbb{Z} \times \mathbb{Z}$ are called wavelets of order $p$.
Proposition 2.1. (1) The set of wavelets forms an orthonormal basis of $L^2(\mathbb{R})$. That is,
$$(2.9)\qquad (\psi_{jk}(x), \psi_{\ell m}(x)) = \delta_{j\ell}\,\delta_{km}$$
Figure 1. $\phi(x)$ for $p = 2$.
and
$$\lim_{J\to\infty}\Big\| u - \sum_{j=-1}^{J}\sum_k \beta_{jk}\psi_{jk} \Big\| = 0, \quad\text{for all } u \in L^2(\mathbb{R}),$$
where
$$(2.10)\qquad \beta_{jk} = (u, \psi_{jk}).$$
(2) For $j \ge 0$, the first $p$ moments of $\psi_{jk}$ equal zero:
$$(2.11)\qquad \int_{-\infty}^{\infty} x^m \psi_{jk}(x)\,dx = 0, \quad 0 \le m \le p-1.$$
We shall now give a brief illustration of the assertions in the above proposition. For detailed proofs, we refer to [7, 23].
By (2.2) and (2.5), we have $\int_{-\infty}^{\infty}\phi\,dx = 1$. By (2.4), (2.6) and induction we get $(\phi(x), \phi(x-k)) = \delta_{0k}$, which together with (2.7) shows that $(\phi(x), \tilde\psi(x-k)) = 0$. Then an application of (2.4) and (2.7) yields $(\tilde\psi(x), \tilde\psi(x-k)) = \delta_{0k}$. Thus (2.9) is proved for $\{j, \ell\} \subset \{-1, 0\}$. The general case with $j = \ell$ follows from a proper change of the variable $x$. Finally, by (2.5) and induction we can prove the general case with $j < \ell$.
To see (2.11), let $\hat\psi(\omega)$ be the Fourier transform of $\tilde\psi(x)$, $P(\omega) = \frac{1}{2}\sum c_k e^{ik\omega}$ and $Q(\omega) = \frac{1}{2}\sum (-1)^k c_{1-k} e^{ik\omega}$. Since $\hat\phi(0) = 1$, we have formally
$$\hat\psi(\omega) = Q\Big(\frac{\omega}{2}\Big)\cdot\prod_{n=2}^{\infty} P\Big(\frac{\omega}{2^n}\Big).$$
Figure 2. $\psi(x)$ for $p = 1$.
Figure 3. $\psi(x)$ for $p = 2$.
Note that $P(0) = 1$ and by (2.3) $Q^{(m)}(0) = 0$. Thus $\hat\psi^{(m)}(0) = 0$ for $0 \le m \le p-1$, and (2.11) is proved for $j = 0$ and $k = 1-p$. The general case follows from a proper change of variables.
It can be shown that the polynomials $1, x, \ldots, x^{p-1}$ are in the subspace spanned by $\{\psi_{-1,k}\}$.
For example, by (2.3), (2.6) and induction, one can show that
$$(2.12)\qquad \sum_k \phi(x - k) = 1.$$
Given any integer $J \ge 0$, we define
$$V_J = \operatorname{span}\{\psi_{jk} \mid -1 \le j < J,\ k \in \mathbb{Z}\}.$$
Obviously $V_J \subset V_{J+1}$. By Proposition 2.1, $\bigcup_{J\ge 0} V_J$ is dense in $L^2(\mathbb{R})$. It is worth noting that there is another interesting orthonormal basis for $V_J$ in addition to $\{\psi_{jk}\}$. To see this, let
$$\phi_{jk}(x) = \sqrt{2^j}\,\phi(2^j x - k)$$
and
$$\tilde V_J = \operatorname{span}\{\phi_{Jk} \mid k \in \mathbb{Z}\}.$$
By (2.7), $\psi_{jk} \in \tilde V_{j+1}$. And (2.5) implies $\tilde V_j \subset \tilde V_{j+1}$. Thus we conclude $V_J \subset \tilde V_J$. Similarly the reverse inclusion holds. Therefore $V_J = \tilde V_J$ for all $J \ge 0$. Since $(\phi(x), \phi(x-k)) = \delta_{0k}$, we have $(\phi_{j\ell}, \phi_{jk}) = \delta_{\ell k}$. Consequently $\{\phi_{Jk}\}$ is another orthonormal basis of $V_J$. The sequence $\cdots \subset \tilde V_{-1} \subset \tilde V_0 \subset \tilde V_1 \subset \cdots$ defines the so-called multiresolution analysis of $L^2(\mathbb{R})$ [20, 23], while $\operatorname{span}\{\sqrt{2^j}\,\psi(2^j x - k) \mid k \in \mathbb{Z}\}$ is the orthogonal complement of $\tilde V_j$ in $\tilde V_{j+1}$.
For convenience of exposition, we shall use the term "frame" for a "basis" of a vector space in a weak sense. See [9, 16].
Definition 2.2. Let $\{\chi_n\}_{n=1}^{\infty}$ be a subset of a Banach space $(X, \|\cdot\|_X)$. Let $\operatorname{span}\{\chi_n \mid 1 \le n < \infty\}$ be the set of all elements $\sum_n \alpha_n \chi_n$ ($\alpha_n \in \mathbb{R}$) which converge (strongly) in $X$. Then $\{\chi_n\}$ is said to be a frame of $X$ if $\operatorname{span}\{\chi_n\} = X$.
Note that a frame is not necessarily an algebraic basis; linear independence is not required.
The main properties of wavelets used in this paper are summarized in Proposition 2.1. In the forthcoming sections, we shall be concerned with the space $L^2(\Omega)$, where $\Omega$ is a bounded open interval. It is clear that the restrictions $\psi_{jk}|_\Omega$ form a frame for $L^2(\Omega)$. In fact, those with $\operatorname{supp}\psi_{jk} \cap \Omega \ne \emptyset$ already suffice for a frame; that is, $\{\psi_{jk} \mid j \ge -1,\ k \in I_j\}$, where
$$(2.13)\qquad I_j = \{k \in \mathbb{Z} \mid 1 - R \le k \le 2^{\hat\jmath}R - 1\}, \quad\text{with } \hat\jmath = \max\{0, j\}.$$
Correspondingly, we define
$$(2.14)\qquad V_J(\Omega) = \operatorname{span}\{\psi_{jk}|_\Omega \mid -1 \le j < J,\ k \in I_j\} \quad\text{and}\quad \tilde V_J(\Omega) = \operatorname{span}\{\phi_{Jk}|_\Omega \mid k \in I_J\}.$$
It is easy to see that $\tilde V_J(\Omega) = V_J(\Omega)$. Also, $V_j(\Omega) \subset V_{j+1}(\Omega)$ and $\bigcup_{J\ge 0} V_J(\Omega)$ is dense in $L^2(\Omega)$.
3. Anti-derivatives of wavelets and their approximation properties. In this section, the anti-derivatives of wavelets will be used to construct frames for $H_0^1(a,b)$, $H_*^1(a,b)$ and $H^1(a,b)$, respectively. Approximation properties of these anti-derivatives will be discussed.
Given any $f \in L^1(a,b)$, we shall use the notation $\bar f$ to denote its mean value, i.e.
$$\bar f = \frac{1}{b-a}\int_a^b f\,dx.$$
We begin our discussion with general functions.
Lemma 3.1. Let $\{\mu_n(x)\}_{n=1}^{\infty}$ be a frame of $L^2(a,b)$. Then
$$(3.1)\qquad \Big\{\, \eta_n(x) = \int_a^x \mu_n\,ds - (x-a)\bar\mu_n \ \Big|\ a \le x \le b \,\Big\} \subset H_0^1(a,b),$$
$$(3.2)\qquad \Big\{\, \eta_n(x) = \int_a^x \mu_n\,ds \ \Big|\ a \le x \le b \,\Big\} \subset H_*^1(a,b),$$
and, for any $\alpha \in [-\infty, a)$,
$$(3.3)\qquad \Big\{\, \eta_n(x) = \int_\alpha^x \mu_n\,ds \ \Big|\ a \le x \le b \,\Big\} \subset H^1(a,b)$$
are frames of $H_0^1(a,b)$, $H_*^1(a,b)$ and $H^1(a,b)$, respectively.
Proof. We shall only present the proof for $H_0^1(a,b)$, since the proofs for the other cases are similar. Let $\eta_n$ be defined by (3.1); clearly $\eta_n \in H_0^1(a,b)$. Given any $v \in H_0^1(a,b)$, let $w = v'$. Note that $\int_a^b w\,dx = 0$.
Since $w \in L^2(a,b)$, there are numbers $\{\alpha_n\}_{n=1}^{\infty}$ such that
$$\lim_{N\to\infty}\int_a^b (w - w_N)^2\,dx = 0,$$
where $w_N(x) = \sum_{n=1}^N \alpha_n \mu_n$. Since $\bar w_N = \sum_{n=1}^N \alpha_n \bar\mu_n$, we have
$$|\bar w_N| = \Big|\frac{1}{b-a}\int_a^b w_N(x)\,dx\Big| = \Big|\frac{1}{b-a}\int_a^b [w_N(x) - w(x)]\,dx\Big| \le \frac{1}{\sqrt{b-a}}\,\|w - w_N\|_0.$$
Hence $\lim_{N\to\infty}\bar w_N = 0$ and
$$\Big\| v' - \sum_{n=1}^N \alpha_n\big(\mu_n(x) - \bar\mu_n\big) \Big\|_0 \le \|v' - w_N\|_0 + \sqrt{b-a}\,|\bar w_N| \to 0$$
as $N \to \infty$. Thus if we take $v_N = \sum_{n=1}^N \alpha_n \eta_n$, then $v_N \in H_0^1(a,b)$ and
$$\lim_{N\to\infty} |v - v_N|_{H^1(a,b)} = 0.$$
As $|\cdot|_{H^1(a,b)}$ is a norm on $H_0^1(a,b)$, the proof is complete. □
Recall that the wavelets $\{\psi_{jk} \mid j \ge -1 \text{ and } k \in I_j\}$ form a frame of $L^2(\Omega)$, hence the foregoing lemma can be applied. From here on, let $p \ge 1$, $R = 2p-1$, $\Omega = (0,R)$ and $\psi_{jk}$ be the wavelets of order $p$. The following lemma is a direct consequence of Lemma 3.1.
Lemma 3.2. For $j \ge -1$ and $k \in I_j$, the functions
$$(3.4)\qquad \eta_{jk}(x) = \int_0^x \psi_{jk}\,ds - x\bar\psi_{jk}, \quad\text{for } 0 \le x \le R,$$
form a frame of $H_0^1(\Omega)$, and the functions
$$(3.5)\qquad \eta_{jk}(x) = \int_0^x \psi_{jk}\,ds, \quad\text{for } 0 \le x \le R,$$
form a frame of $H_*^1(\Omega)$.
Applying Lemma 3.1 with $\alpha = -R$, the functions
$$\eta_{jk}(x) = \int_{-R}^x \psi_{jk}\,ds, \quad j \ge -1,\ 0 \le x \le R,$$
with $\operatorname{supp}\psi_{jk} \cap (-R,R) \ne \emptyset$, form a frame of $H^1(\Omega)$. We further observe that, for $0 \le x \le R$,
$$\eta_{jk}(x) = \int_{-\infty}^x \psi_{jk}\,ds \ \text{ for } k \in I_j; \qquad \eta_{jk}(x) = \int_{-\infty}^{R}\psi_{jk}\,ds = \text{a constant, otherwise.}$$
So if we define
$$(3.6)\qquad \Phi(x) = \int_{-\infty}^x \phi(s)\,ds, \quad \Psi(x) = \int_{-\infty}^x \psi(s)\,ds, \quad\text{for } 0 \le x \le R,$$
then for $j \ge 0$, $\int_{-\infty}^x \psi_{jk}(s)\,ds = (1/\sqrt{2^j})\,\Psi(2^j x - k)$. Hence we have a frame for $H^1(\Omega)$ consisting of anti-derivatives of wavelets with $\operatorname{supp}\psi_{jk} \cap (0,R) \ne \emptyset$, as seen in the next lemma.
Lemma 3.3. $\{1, \eta_{jk} \mid j \ge -1,\ k \in I_j\}$ is a frame of $H^1(\Omega)$, where
$$(3.7)\qquad \eta_{jk}(x) = \begin{cases} \Phi(x - k), & \text{for } j = -1, \\ \dfrac{1}{\sqrt{2^j}}\,\Psi(2^j x - k), & \text{for } j \ge 0, \end{cases} \qquad k \in I_j,\ 0 \le x \le R.$$
Note that $\Phi \in C^0$ for $p = 1$ and $\Phi \in C^1$ for $p = 2$. The support of $\Psi(x)$ is also $[0,R]$, but $\Phi(x) = 1$ for $x \in [R,\infty)$. Pictures of $\Phi$ and $\Psi$ for $p = 2$ are shown in Figure 4.
Thus, from the anti-derivatives of wavelets, we have constructed a frame for each of the spaces $H_0^1(\Omega)$, $H_*^1(\Omega)$ and $H^1(\Omega)$. We shall now discuss the approximation properties of these frames.
Lemma 3.4. For $j \ge 0$ and $p \ge 1$, let $f \in H^p$ and $\beta_{jk}$ be given by (2.10). Then
$$|\beta_{jk}| \lesssim 2^{-js}\,|f|_{s, S_{jk}}, \quad 0 \le s \le p,$$
where $S_{jk} = \operatorname{supp}\psi_{jk}$.
Figure 4. - - - $\Phi(x)$; — $\Psi(x)$.
Proof. By (2.10) and (2.11), we have, for any polynomial $q(x)$ of degree $\le s-1$,
$$\beta_{jk} = \int_{-\infty}^{\infty}(f - q)\psi_{jk}\,dx = \int_{S_{jk}}(f - q)\psi_{jk}\,dx \le \|f - q\|_{0, S_{jk}}\,\|\psi_{jk}\| = \|f - q\|_{0, S_{jk}}.$$
Since $|S_{jk}| = 2^{-j}R$, a standard Bramble–Hilbert scaling argument shows that
$$|\beta_{jk}| \le \inf_q \|f - q\|_{0, S_{jk}} \lesssim 2^{-js}\,|f|_{s, S_{jk}}. \qquad\square$$
A direct consequence of Lemma 3.4 is
$$\Big\| u - \sum_{j=-1}^{J}\sum_k \beta_{jk}\psi_{jk} \Big\| \lesssim 2^{-Js}\,|u|_s, \quad 0 \le s \le p,$$
but this estimate will not be directly used in this paper.
Since $\bigcup_{J\ge 0} V_J(\Omega)$ is dense in $L^2(\Omega)$ and each $V_J(\Omega)$ is finite dimensional, it is natural to construct finite dimensional subspaces of $H_0^1(\Omega)$, etc., from the frame of $V_J(\Omega)$. The following theorems concern the distance between these subspaces and $H_0^1(\Omega)$, etc.
Theorem 3.1. Given $J \ge 0$, define
$$(3.8)\qquad S_J = \operatorname{span}\{\eta_{jk} \mid -1 \le j < J,\ k \in I_j\}.$$
Then $S_J$ is a finite dimensional subspace of $H_0^1(\Omega)$ (resp. $H_*^1(\Omega)$) if the $\eta_{jk}$ are chosen from (3.4) (resp. (3.5)). For any $v \in H_0^1(\Omega) \cap H^{s+1}(\Omega)$ (resp. $v \in H_*^1(\Omega) \cap H^{s+1}(\Omega)$), we have
$$(3.9)\qquad \inf_{\chi \in S_J} |v - \chi|_{1,\Omega} \lesssim h^s\,|v|_{s+1,\Omega}, \quad 0 \le s \le p,$$
where $h = 2^{-J}$.
Proof. Without loss of generality, we will only present the proof for the case that $s = p$ and that the $\eta_{jk}$ are given by (3.4); the proofs for the other cases are similar.
Let $v \in H_0^1(\Omega) \cap H^{p+1}(\Omega)$. By the Sobolev extension theorem, there is a $\tilde v \in H^{p+1}(\mathbb{R})$ such that $\tilde v(x) = v(x)$ for $x \in \Omega$ and
$$\|\tilde v\|_{p+1} \lesssim \|v\|_{p+1,\Omega}.$$
By Lemma 3.1 and (2.10),
$$\lim_{\ell\to\infty}\Big| v - \sum_{j=-1}^{\ell}\sum_{k \in I_j}\beta_{jk}\eta_{jk} \Big|_{1,\Omega} = 0,$$
where $\beta_{jk} = (\tilde v', \psi_{jk})$. Let $e_j = \sum_{k \in I_j}\beta_{jk}\eta_{jk}$ and $\chi = \sum_{j=-1}^{J-1} e_j$. Then $\chi \in S_J$ and
$$|v - \chi|_{1,\Omega} \lesssim \sum_{j \ge J}\|e_j'\|_{0,\Omega}.$$
For each $j$, let $I_j' \subset I_j$ consist of the indices $k$ such that $\operatorname{supp}\psi_{jk} \subset [0,R]$; that is, $0 \le k \le (2^{\hat\jmath} - 1)R$. Let $I_j'' = I_j \setminus I_j'$. Since $j \ge 0$, we have
$$\int_0^R |e_j'|^2\,dx = \sum_{k,\ell \in I_j}\beta_{jk}\beta_{j\ell}\int_\Omega(\psi_{jk} - \bar\psi_{jk})(\psi_{j\ell} - \bar\psi_{j\ell})\,dx$$
$$= \sum_{k \in I_j'}\beta_{jk}^2 + 2\sum_{k \in I_j',\,\ell \in I_j''}\beta_{jk}\beta_{j\ell}\int_\Omega\psi_{jk}(\psi_{j\ell} - \bar\psi_{j\ell})\,dx + \sum_{k,\ell \in I_j''}\beta_{jk}\beta_{j\ell}\int_\Omega(\psi_{jk} - \bar\psi_{jk})(\psi_{j\ell} - \bar\psi_{j\ell})\,dx.$$
Note that if $k \in I_j'$ then $\int_\Omega \psi_{jk}\,dx = \int_{-\infty}^{\infty}\psi_{jk}\,dx$ and $\bar\psi_{jk} = 0$. Also note that if $k \in I_j'$ and $\ell \in I_j''$, then $k \ne \ell$ and $\operatorname{supp}\psi_{jk} \subset \Omega$. Thus it follows from (2.9) and (2.11) that
$$\sum_{k \in I_j',\,\ell \in I_j''}\beta_{jk}\beta_{j\ell}\int_\Omega\psi_{jk}(\psi_{j\ell} - \bar\psi_{j\ell})\,dx = \sum_{k \in I_j',\,\ell \in I_j''}\beta_{jk}\beta_{j\ell}\int_{-\infty}^{\infty}\psi_{jk}\psi_{j\ell}\,dx = 0.$$
Since $|\int_\Omega \psi_{jk}\,dx| \le \sqrt{R}\,\|\psi_{jk}\|_{0,\Omega} \le \sqrt{R}$, we have
$$\|\psi_{jk} - \bar\psi_{jk}\|_{0,\Omega} \le \|\psi_{jk}\| + \sqrt{R}\,|\bar\psi_{jk}| \le 2.$$
Also, since $|I_j''| = 2R - 2$,
$$4\sum_{k,\ell \in I_j''}|\beta_{jk}\beta_{j\ell}| \le 2\sum_{k,\ell \in I_j''}\big(\beta_{jk}^2 + \beta_{j\ell}^2\big) \lesssim \sum_{k \in I_j''}\beta_{jk}^2.$$
Therefore
$$\int_0^R |e_j'|^2\,dx \le \sum_{k \in I_j'}\beta_{jk}^2 + 4\sum_{k,\ell \in I_j''}|\beta_{jk}\beta_{j\ell}| \lesssim \sum_{k \in I_j}\beta_{jk}^2.$$
By Lemma 3.4,
$$|\beta_{jk}| \lesssim 2^{-jp}\,|\tilde v'|_{p, S_{jk}}.$$
Thus
$$\sum_{k \in I_j}\beta_{jk}^2 \lesssim 2^{-2jp}\sum_{k \in I_j}|\tilde v|_{p+1, S_{jk}}^2.$$
We note that, for each $k \in I_j$, there are at most $R-1$ indices $\ell > k$ such that $\operatorname{supp}\psi_{j\ell}$ has a nonzero intersection with $\operatorname{supp}\psi_{jk}$. Therefore
$$\sum_{k \in I_j}\beta_{jk}^2 \lesssim 2^{-2jp}\,|\tilde v|_{p+1}^2 \lesssim 2^{-2jp}\,\|v\|_{p+1,\Omega}^2.$$
Consequently,
$$(3.10)\qquad |v - \chi|_{1,\Omega} \lesssim \|v\|_{p+1,\Omega}\sum_{j \ge J} 2^{-jp} \le 2h^p\,\|v\|_{p+1,\Omega}.$$
Now let $v_p$ be the polynomial of degree $\le p$ such that
$$v_p\Big(\frac{i}{p}R\Big) = v\Big(\frac{i}{p}R\Big), \quad 0 \le i \le p.$$
Then a standard Bramble–Hilbert argument shows that $\|v - v_p\|_{p+1,\Omega} \lesssim |v|_{p+1,\Omega}$. Applying (3.10) with $v - v_p$ in place of $v$, we get
$$|v - (v_p + \chi)|_{1,\Omega} \lesssim h^p\,|v|_{p+1,\Omega}.$$
By (3.4), it can be shown that $v_p \in S_J$; the desired result then follows. □
Theorem 3.2. Let $\eta_{jk}$ be given by (3.7) and $J \ge 0$. Define
$$S_J = \operatorname{span}\{1, \eta_{jk} \mid -1 \le j < J,\ k \in I_j\}.$$
Then $S_J$ is a finite dimensional subspace of $H^1(\Omega)$ and, for all $v \in H^{s+1}(\Omega)$, the estimate (3.9) holds.
Before the end of this section, we remark that we can also construct frames of $S_J$ from the frame of $\tilde V_J(\Omega)$. Namely, for subspaces of $H_0^1(\Omega)$ or $H_*^1(\Omega)$, take
$$(3.11)\qquad \zeta_{jk}(x) = \int_0^x \phi_{jk}\,ds - x\bar\phi_{jk}, \quad\text{for } 0 \le x \le R,$$
or
$$(3.12)\qquad \zeta_{jk}(x) = \int_0^x \phi_{jk}\,ds, \quad\text{for } 0 \le x \le R,$$
respectively; then $S_J = \operatorname{span}\{\zeta_{Jk} \mid k \in I_J\}$. For the subspace $S_J$ of $H^1(\Omega)$, take
$$(3.13)\qquad \zeta_{jk}(x) = \frac{1}{\sqrt{2^j}}\,\Phi(2^j x - k)\Big|_\Omega$$
and $S_J = \operatorname{span}\{1, \zeta_{Jk} \mid k \in I_J\}$.
4. Applications to two-point boundary value problems. In this section, we apply the wavelets of order $p$ to the numerical solution of the two-point boundary value problem*
$$(4.1)\qquad -(q(x)u')' + r(x)u = f(x), \quad\text{for } x \in (0,R),$$
with one of the boundary conditions:
$$u(0) = u(R) = 0 \quad\text{(Dirichlet condition)},$$
$$u(0) = u'(R) = 0 \quad\text{(mixed condition)},$$
$$u'(0) = u'(R) = 0 \quad\text{(Neumann condition)}.$$
We again let $\Omega = (0,R)$, where $R = 2p - 1$. We assume that $f \in L^2(\Omega)$ and that the coefficients $q$ and $r$ are smooth on $[0,R]$ with
$$(4.2)\qquad 0 < \underline{q} \le q(x) \le \bar q \quad\text{and}\quad 0 \le r(x) \le \bar r.$$
We now define Galerkin-wavelet methods for solving (4.1). According to the boundary condition, we construct a finite dimensional subspace $S_J$ of the solution space as defined in Theorems 3.1 and 3.2. Then we solve numerically the Galerkin projection of the solution in $S_J$. We first give a general account of the method and the error estimates in 4.1, then construct two bases for each $S_J$ in 4.2. Some computational aspects are considered in 4.3 and 4.4. Finally, we discuss connections with hierarchical bases in the last subsection.
4.1. Error estimates. In this subsection, we shall only consider the Dirichlet boundary value problem. Thus $S_J$, $J \ge 0$, is the subspace of $H_0^1(\Omega)$ with frames $\eta_{jk}$ from (3.4) or $\zeta_{Jk}$ from (3.11). The other cases are similar, with an appropriate choice of $\eta_{jk}$ or $\zeta_{Jk}$. The variational form of (4.1) is
$$(4.3)\qquad a(u,v) = (f,v), \quad\text{for all } v \in H_0^1(\Omega),$$
where $a(\cdot,\cdot)$ is the bilinear form defined by
$$(4.4)\qquad a(u,v) = \int_0^R \big(qu'v' + ruv\big)\,dx.$$
Clearly $a(\cdot,\cdot)$ is continuous and coercive on $H_0^1(\Omega)$. It is well known, by the Lax–Milgram lemma (cf. [3]), that there is a unique weak solution $u \in H_0^1(\Omega)$ of (4.3). Let $u_J$ be the Galerkin projection of $u$ on $S_J$ defined by
$$(4.5)\qquad a(u_J, \chi) = (f, \chi), \quad\text{for all } \chi \in S_J.$$
It is also clear that (4.5) has a unique solution such that
$$\|u - u_J\|_{1,\Omega} \lesssim \inf_{\chi \in S_J}\|u - \chi\|_{1,\Omega}.$$
By Theorem 3.1 and a standard duality argument (cf. [3]), the error estimates in both the $H^1$ and $L^2$ norms can be obtained.
Theorem 4.1. Let $u$ and $u_J$ be the solutions of (4.3) and (4.5) respectively. Then for any $u \in H_0^1(\Omega) \cap H^{s+1}(\Omega)$,
$$(4.6)\qquad \|u - u_J\|_{0,\Omega} + h\|u - u_J\|_{1,\Omega} \lesssim h^{s+1}\,\|u\|_{s+1,\Omega}, \quad 0 \le s \le p,$$
where $h = 2^{-J}$.
* Without loss of generality, the special interval $(0,R)$ is used in this model. A more general interval $(a,b)$ can be reduced to $(0,R)$ by a simple change of variable.
4.2. Basis functions from frames. It is well known that the approximate solution $u_J$ can be obtained by solving a linear algebraic system by means of a basis of $S_J$. But the frames of $S_J$ discussed earlier are not bases. In order to obtain a basis from a frame, we need to exclude some linearly dependent functions. We first study bases for $V_J(\Omega)$.
Proposition 4.2. For any $J \ge 0$, $\dim V_J(\Omega) = 2^J R + R - 1$. Define the index set $D_j$ such that
$$(4.7)\qquad k \in D_j \iff \begin{cases} 1 - R \le k \le R - 1, & \text{if } j = -1, \\ p - R \le k \le 2^j R - p, & \text{if } j \ge 0. \end{cases}$$
Then
$$(4.8)\qquad \{\phi_{Jk} \mid 1 - R \le k \le 2^J R - 1\}$$
and
$$(4.9)\qquad \{\psi_{jk} \mid -1 \le j < J,\ k \in D_j\}$$
are bases of $V_J(\Omega)$.
Proof. For any $J \ge 0$, the functions in (4.8) form a frame of $V_J(\Omega)$, as defined in (2.14). To show that they actually form a basis, it suffices to show that they are linearly independent. Now suppose
$$\sigma = \sum_{k=1-R}^{2^J R - 1}\lambda_k \phi_{Jk} = 0 \quad\text{in } \Omega.$$
Note that for any $0 \le m \le 2^J R - R$, the support of $\phi_{Jm}$ lies entirely in $[0,R]$, thus $(\phi_{Jm}, \phi_{J\ell}) = \delta_{m\ell}$ for all $\ell$, so $(\sigma, \phi_{Jm}) = \lambda_m$ and hence $\lambda_m = 0$. Observe next that the value of $\sigma(x)$ for $R-2 \le 2^J x \le R-1$ is now solely determined by $\lambda_{-1}\phi_{J,-1}$. But $\phi_{J,-1} \ne 0$ in that interval, so $\lambda_{-1} = 0$. The same argument shows that $\lambda_k = 0$ for all $1 - R \le k \le 2^J R - 1$. Consequently the functions in (4.8) form a basis of $V_J(\Omega)$. Using this basis, we count that the dimension of $V_J(\Omega)$ is $2^J R + R - 1$.
Now we show by induction that the wavelets in (4.9) also form a basis of $V_J(\Omega)$. For $J = 0$, the functions in (4.9) are precisely those in (4.8), so they form a basis for $V_0(\Omega)$. Suppose that the $\psi_{jk}$, for $-1 \le j \le J-1$ and $k \in D_j$, form a basis of $V_J(\Omega)$. We shall now prove that the $\psi_{jk}$, for $-1 \le j \le J$ and $k \in D_j$, form a basis of $V_{J+1}(\Omega)$. To see this, we first prove that they are linearly independent. Assume
$$\sigma = \sum_{j=-1}^{J}\sum_{k \in D_j}\lambda_{jk}\psi_{jk} = 0 \quad\text{in } \Omega.$$
Then there are uniquely determined numbers $\mu_k$ such that
$$\sigma = \sum_{k=1-R}^{2^J R - 1}\mu_k\phi_{Jk} + \sum_{k \in D_J}\lambda_{Jk}\psi_{Jk} \quad\text{in } \Omega.$$
For the same reason as in the first paragraph, we have $\lambda_{Jk} = 0$ for all $0 \le k \le 2^J R - R$. By (2.7),
$$\psi(2^J x - k) = -c_0\,\phi\big(2^{J+1}x - (R + 2k)\big) + \cdots + c_R\,\phi\big(2^{J+1}x - 2k\big).$$
Hence for any $p - R \le k < 0$ and $2^J R - R < k \le 2^J R - p$, we have $1 \le R + 2k < R$ and $2^{J+1}R - 2R \le 2k \le 2^{J+1}R - R - 1$, respectively. That means either $\phi(2^{J+1}x - (R + 2k))$ or $\phi(2^{J+1}x - 2k)$ has its support in $[0,R]$. The orthogonality of $\phi(x)$ and $\psi(2x)$ in $L^2(\mathbb{R})$ implies that either $\lambda_{Jk}c_0 = 0$ or $\lambda_{Jk}c_R = 0$. Since $c_0 \ne 0$ and $c_R \ne 0$, we conclude $\lambda_{Jk} = 0$. Thus $\lambda_{Jk} = 0$ for all $k \in D_J$. By the induction hypothesis, $\lambda_{jk} = 0$ for all $-1 \le j \le J-1$ and $k \in D_j$.
Therefore we have shown that the set $\{\psi_{jk} \mid -1 \le j \le J,\ k \in D_j\}$ is linearly independent in $V_{J+1}(\Omega)$. Counting the cardinality, we see it is a basis for $V_{J+1}(\Omega)$. The general result follows by induction. □
If (4.1) is imposed with the Neumann boundary condition, and $S_J \subset H^1(\Omega)$ is the space defined in Theorem 3.2, then by the linear independence of (4.8) and (4.9), it is easy to see that
$$(4.10)\qquad \{1, \eta_{jk} \mid -1 \le j < J,\ k \in D_j\}, \quad\text{where } \eta_{jk} \text{ is defined in (3.7)},$$
and
$$(4.11)\qquad \{1, \zeta_{Jk} \mid k \in I_J\}, \quad\text{where } \zeta_{jk} \text{ is defined in (3.13)},$$
are bases of $S_J$.
Bases for $S_J \subset H_*^1(\Omega)$ are also easy to construct if (4.1) is imposed with the mixed boundary condition. But there is another twist for the subspace $S_J \subset H_0^1(\Omega)$, because of the extra term $x\bar\psi_{jk}$. Let us apply (3.4) to the functions in (4.9), with $j = -1$. By (2.12), we have
$$(4.12)\qquad \sum_{k=1-R}^{R-1}\eta_{-1,k} = \sum_{k=1-R}^{R-1}\Big(\int_0^x \phi(s-k)\,ds - \frac{x}{R}\int_0^R \phi(x-k)\,dx\Big) = \int_0^x \sum_k \phi(s-k)\,ds - \frac{x}{R}\int_0^R \sum_k \phi(x-k)\,dx = x - \frac{x}{R}\,R = 0.$$
Thus the $\eta_{-1,k}$, for $k \in D_{-1}$, are linearly dependent. The reason is that $1$ can be spanned by the $\phi(x-k)$.
Proposition 4.3. Let $D_j$ be the index set defined in (4.7). Let $\tilde D_j = D_j$ for $j \ge 0$ and $\tilde D_{-1} = D_{-1}\setminus\{1-R\}$. Then
$$(4.13)\qquad \{\eta_{jk} \mid -1 \le j < J,\ k \in \tilde D_j\}, \quad\text{where } \eta_{jk} \text{ is defined in (3.4)},$$
and
$$(4.14)\qquad \{\zeta_{Jk} \mid 2 - R \le k \le 2^J R - 1\}, \quad\text{where } \zeta_{jk} \text{ is defined in (3.11)},$$
are bases of $S_J$, where $S_J$ is the subspace of $H_0^1(\Omega)$ defined in Theorem 3.1.
Proof. Let $\eta_{jk}$ be the functions in (4.13). Suppose $\sum_{j,k}\lambda_{jk}\eta_{jk} = 0$, which can be written as
$$(4.15)\qquad \int_0^x \sigma\,ds = \lambda x \quad\text{in } \Omega,$$
where
$$\sigma = \sum_{j=-1}^{J-1}\sum_{k \in \tilde D_j}\lambda_{jk}\psi_{jk} \quad\text{and}\quad \lambda = \sum_{j,k}\lambda_{jk}\bar\psi_{jk}.$$
Obviously (4.15) implies that $\sigma = \lambda$ is a constant function in $\Omega$. By (2.12), we have
$$-\lambda\psi_{-1,1-R} + \sum_{k \in \tilde D_{-1}}(\lambda_{-1,k} - \lambda)\psi_{-1,k} + \sum_{j=0}^{J-1}\sum_{k \in D_j}\lambda_{jk}\psi_{jk} = 0.$$
Figure 5. $\cdots$ $\eta_{1,-1}$; — $\eta_{1k}$, $0 \le k \le 3$; $-\cdot-$ $\eta_{14}$.
Since the $\psi_{jk}$ in (4.9) are linearly independent, $\lambda = 0$ and $\lambda_{jk} = 0$. Hence the functions in (4.13) are linearly independent.
We have already seen in (4.12) that $\eta_{-1,1-R}$ is a linear combination of the $\eta_{-1,k}$, $k \in \tilde D_{-1}$. But, as defined in Theorem 3.1, $S_J$ is spanned by the $\eta_{jk}$ for $-1 \le j < J$ and $k \in D_j$. Therefore the functions in (4.13) span $S_J$, and thus form a basis.
Similarly we can show that the set of functions in (4.14) is linearly independent. Since its cardinality is equal to the dimension of $S_J$, it is a basis. □
Pictures of $\eta_{1k}$ for $k \in \tilde D_1$ are shown in Figure 5.
4.3. Solution of linear algebraic systems. Suppose that (4.1) is imposed with the Dirichlet boundary condition and the basis of $S_J$ is given by (4.13). Let $u_J = \sum u_{jk}\eta_{jk}$ be the Galerkin projection of $u$ in $S_J$; the coefficients $u_{jk}$ are then determined by the following linear system of equations:
$$\sum_{j,k} u_{jk}\,a(\eta_{jk}, \eta_{\ell m}) = (f, \eta_{\ell m}), \quad\text{for all } -1 \le \ell < J,\ m \in \tilde D_\ell.$$
We now label the double indices linearly. Let $\nu : (j,k) \mapsto i$ be the labelling function which enumerates $(j,k)$ lexicographically; that is, $(j,k)$ precedes $(\ell,m)$ if $j < \ell$, or if $j = \ell$ and $k \le m$. We then get a linear system of the form $A\mathbf{u} = \mathbf{f}$, where the matrix $A = (a_{ij})$ and the vector $\mathbf{f} = (f_j)$ have the entries
$$(4.16)\qquad a_{ij} = a(\eta_i, \eta_j), \qquad f_j = (f, \eta_j).$$
Here $a(\cdot,\cdot)$ is the bilinear form defined in (4.4), and $A$ corresponds to the stiffness matrix in the usual finite element method. Because the $\eta_i$ form a basis and $a(\cdot,\cdot)$ is symmetric and coercive on $S_J$, $A$ is symmetric and positive definite.
If $r(x) = 0$ and $q(x) = 1$, then $A$ is sparse. In fact, let $P$ be the permutation matrix which relabels the $\eta_{jk}$ in such a way that those with $0 \le k \le 2^j R - R$ (whose supports lie entirely in $[0,R]$) are indexed together after the others; one can see that
$$(4.17)\qquad A = P\begin{pmatrix} B & 0 \\ 0 & I \end{pmatrix}P^t.$$
Let $N = \dim S_J$; then $B$ is a full square matrix of order $O(\log N)$. As a result, the eigenvalues of $A$ are mostly $1$'s, with a few exceptions caused by the wavelets near the boundary. Similar results also hold if $q(x)$ and $r(x)$ are not constant. In fact, the following proposition can be proved by a simple application of the Courant–Fischer minimax theorem [11].
Proposition 4.4. For general coefficients $q(x)$ and $r(x)$ satisfying (4.2), all but $O(\log N)$ of the eigenvalues of $A$ are uniformly bounded below and above.
If the conjugate gradient method is used to solve a linear system, one finds that if most of the eigenvalues are clustered in one place, the convergence can be fast even if the condition number is large.
Proposition 4.5. ([1]) Let $u_k$ be the $k$-th approximate solution of $A\mathbf{u} = \mathbf{f}$ by the conjugate gradient method. Suppose the spectrum of $A$ is $\sigma_0 \cup \sigma_1$. Let $\ell = |\sigma_1|$ and
$$\kappa = \big(\max\sigma_0\big)\big/\big(\min\sigma_0\big).$$
Then
$$\|u - u_k\|_A \le M\Big(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\Big)^{k-\ell},$$
where $\|\cdot\|_A^2 = (A\,\cdot, \cdot)$ and $M$ is a constant depending on the spectrum of $A$.
By Propositions 4.4 and 4.5, the convergence of the conjugate gradient method for solving $A\mathbf{u} = \mathbf{f}$ will be uniform after $O(\log N)$ steps. This is already very attractive since $A$ is a matrix of order $N$.
To further improve the performance of the conjugate gradient method, we have a natural choice of preconditioner for the matrix $A$. Let $B$ be the matrix in (4.17). Since $B$ is small, it is easy to invert $B$ by a direct method. Evidently $B$ and $B^{-1}$ are symmetric and positive definite, hence there exists the Cholesky decomposition, say $B^{-1} = LL^t$. Then $SS^t$ is a good preconditioner, where
$$(4.18)\qquad S = P\begin{pmatrix} L & 0 \\ 0 & I \end{pmatrix}.$$
That is, instead of solving $A\mathbf{x} = \mathbf{b}$, we solve the equivalent problem
$$S^tAS\,\mathbf{y} = S^t\mathbf{b}, \qquad \mathbf{x} = S\mathbf{y},$$
where $S^tAS$ is a well-conditioned matrix. The action of $P$ can be easily carried out during the computation, so we only have to compute and store the lower triangular matrix $L$, which has $O(\log^2 N)$ entries. Theoretically the condition number of $S^tAS$ is one if $q = 1$ and $r = 0$; in general it is bounded above by $(\bar q + \bar r)/\underline{q}$. The preconditioner $S$ is made possible by the orthogonality property of the wavelets.
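In a toy setting the claim that $S^tAS$ has condition number one when $q = 1$ and $r = 0$ can be checked directly. Below (our illustration, taking $P = I$ and an arbitrary $2\times 2$ SPD block $B$), the Cholesky factor $L$ of $B^{-1}$ turns the block $B$ of $A = \operatorname{diag}(B, I)$ into $L^tBL = I$, so $S^tAS$ is exactly the identity.

```python
import math

B = [[4.0, 1.0], [1.0, 3.0]]               # SPD block of A = diag(B, I)
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / det, -B[0][1] / det],
        [-B[1][0] / det, B[0][0] / det]]

# Cholesky factorization B^{-1} = L L^t (lower triangular, 2 x 2)
l00 = math.sqrt(Binv[0][0])
l10 = Binv[1][0] / l00
l11 = math.sqrt(Binv[1][1] - l10 * l10)
L = [[l00, 0.0], [l10, l11]]

# With S = diag(L, I), the identity part of S^t A S is untouched, so the
# block M = L^t B L alone decides the conditioning; it equals I exactly.
M = [[sum(L[a][i] * B[a][b] * L[b][j] for a in range(2) for b in range(2))
      for j in range(2)] for i in range(2)]
```

Algebraically $L^tBL = L^t(LL^t)^{-1}L = I$, which is what the computed `M` reproduces to rounding error.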
If (4.1) is imposed with mixed or Neumann boundary conditions, then it is preferable to construct $S_J$ by the $\zeta_{Jk}$ in (3.12) or (3.13). We also denote the corresponding stiffness matrix by $A$. Since there is only a constant number ($2R - 2$) of basis functions whose supports do not lie in $[0,R]$, all but $O(1)$ of $A$'s eigenvalues are uniformly bounded. Finally, we remark that (1) the matrix $A$ is sparse if $r = 0$, and (2) we can construct a preconditioner in the same manner as $S$ in (4.18).
4.4. Reduction of operations. Let $A$ be the stiffness matrix with respect to the basis $\{\eta_{jk}\}$ in (4.13). If $q$ is not a constant or $r \ne 0$, then $A$ is a full matrix and its multiplication with a vector becomes expensive. We can overcome this drawback by changing basis (as is done for the hierarchical basis, cf. [28]). Recall that the $\zeta_{Jk}$ in (4.14) also form a basis for $S_J$. We label the $\zeta_{Jk}$ by $k$ and let $\hat A$ be the corresponding stiffness matrix.
Consider the bases $\{\psi_{jk}\}$ in (4.9) and $\{\phi_{Jk}\}$ in (4.8) for $V_J(\Omega)$. Let $Q$ be the change-of-basis matrix; that is,
$$(\psi_1\ \psi_2\ \cdots\ \psi_N) = (\phi_1\ \phi_2\ \cdots\ \phi_N)\,Q.$$
We shall show that $Q$ has $O(N)$ nonzero entries. Since the construction from $\psi_{jk}$ to $\eta_{jk}$ is linear, with little modification we can change from $\{\eta_{jk}\}$ to $\{\zeta_{Jk}\}$ in $O(N)$ operations.
To simplify the presentation and explain the main idea more clearly, we further ignore the coefficients $\sqrt{2^j}$. Then the problem reduces to finding the matrix $Q$ such that
$$\begin{pmatrix} \phi(x + R - 1) \\ \vdots \\ \psi(2^{J-1}x - 2^{J-1}R + p) \end{pmatrix} = Q^t\begin{pmatrix} \phi(2^Jx + R - 1) \\ \vdots \\ \phi(2^Jx - 2^JR + 1) \end{pmatrix}.$$
Proposition 4.6. For any $\mathbf{x} \in \mathbb{R}^N$, $O(N)$ operations are sufficient for computing the product $Q\mathbf{x}$.
Proof. By (2.5) and (2.7), it is easy to see that
$$\begin{pmatrix} \phi(x + R - 1) \\ \vdots \\ \phi(x - R + 1) \\ \psi(x + R - p) \\ \vdots \\ \psi(x - R + p) \end{pmatrix} = \tilde Q_1^t\begin{pmatrix} \phi(2x + R - 1) \\ \vdots \\ \phi(2x - 2R + 1) \end{pmatrix},$$
where $\tilde Q_1$ is a matrix of order $\dim V_1(\Omega)$ and each column of $\tilde Q_1$ consists of the coefficients $c_k$. Hence there are at most $2p$ nonzero elements in each column of $\tilde Q_1$. Inductively,
$$Q = Q_J\cdots Q_2Q_1.$$
Here, for any $1 \le j \le J$,
$$Q_j = \begin{pmatrix} \tilde Q_j & 0 \\ 0 & I_j \end{pmatrix},$$
where $\tilde Q_j$ is a sparse matrix of order $\dim V_j(\Omega)$ and $I_j$ is the identity matrix of order $\dim V_J(\Omega) - \dim V_j(\Omega)$. Since $\tilde Q_j$ has $O(2^j)$ nonzero entries, a multiplication $Q\mathbf{x}$ needs
$$O(1) + O(2) + \cdots + O(2^J) = O(2\cdot 2^J) = O(N)$$
operations. □
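A minimal sketch of one factor $\tilde Q_j$ follows (our illustration; indices are simplified and boundary bookkeeping is omitted): a coefficient vector on the basis $\{\phi(2^jx - k)\}$ is rewritten on $\{\phi(2^{j+1}x - k)\}$ through the dilation equation (2.5), touching each coefficient at most $2p$ times.

```python
def refine(coarse, c):
    """One change-of-basis level: since, by (2.5),
    phi(2^j x - k) = sum_m c_m phi(2^{j+1} x - 2k - m),
    a coefficient v on phi(2^j x - k) contributes v * c_m to the
    fine-level coefficient at index 2k + m."""
    fine = [0.0] * (2 * (len(coarse) - 1) + len(c))
    for k, v in enumerate(coarse):
        for m, cm in enumerate(c):
            fine[2 * k + m] += v * cm
    return fine

# p = 1 (Haar): phi = chi_[0,1] and phi(x) = phi(2x) + phi(2x - 1)
assert refine([1.0], [1.0, 1.0]) == [1.0, 1.0]
```

Each level roughly doubles the vector length, so applying all $J$ factors costs $O(1) + O(2) + \cdots + O(2^J) = O(N)$ operations, as in the proposition.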
Because of the term $x_{Jk}$, the $\eta_{Jk}$ are fully supported in $[0,R]$ and $\hat{A}$ is still full. This can be remedied by a further change of basis. Consider the functions
$$(4.19)\qquad \zeta_{Jk} = \eta_{J,k-1} - \eta_{Jk}, \qquad \text{for } 2-R \le k \le 2^J R - 1.$$
It is easy to show that $\operatorname{span}\{\zeta_{Jk}\} \subset S_J$ and that they are linearly independent. Counting the dimension, they form another basis for $S_J$. Note that, for $0 \le k \le 2^J R - R$, the $x_{Jk}$ are the same constants and
$$\operatorname{supp}\zeta_{Jk} = \operatorname{supp}\zeta'_{Jk} = \operatorname{supp}\eta_{J,k-1} \cup \operatorname{supp}\eta_{Jk}.$$
So all but $2R-2$ of the $\zeta_{Jk}$ and $\zeta'_{Jk}$ are locally supported in $[0,R]$. Let $\tilde{A}$ be the stiffness matrix corresponding to this basis; then $\tilde{A}$ has $O(N)$ nonzero entries.
Therefore, there is a linear transformation $T$ such that
$$A = T^t \tilde{A} T.$$
For any $x \in \mathbf{R}^N$, $O(N)$ operations are sufficient to compute a multiplication $Tx$. Since $\tilde{A}$ is sparse, we can reduce the number of operations for $Ax$ to $O(N)$.
If (4.1) is imposed with mixed or Neumann boundary conditions and $S_J$ is constructed by $\{\eta_{Jk}\}$, then $A$ is already sparse when $r = 0$. Otherwise $A$ is full and we can change the basis to $\zeta_{Jk}$ in the same manner as in (4.19).
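The cancellation that makes (4.19) work can be seen with a toy model: replace the $\eta_{Jk}$ by simple ramps that are constant beyond their supports (mimicking $\eta(x) = 1$ for $x \ge R$). Differencing neighbours cancels the shared constant and leaves a compactly supported hat, and the associated transformation $T$ is bidiagonal, so $Tx$ costs $O(N)$. A hedged sketch (the ramps and names are our illustration, not the paper's functions):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 2001)   # grid containing the integers exactly

def ramp(k):
    """Stand-in for eta_{Jk}: rises on [k, k+1], then stays at the constant 1,
    so it is globally supported to the right (like eta(x) = 1 for x >= R)."""
    return np.clip(x - k, 0.0, 1.0)

# zeta_k = eta_{k-1} - eta_k: the trailing constants cancel and only
# a hat supported on [k-1, k+1] survives.
k = 4
zeta = ramp(k - 1) - ramp(k)

def apply_T(c):
    """The differencing change of basis is bidiagonal,
    so multiplying by T takes O(N) operations."""
    out = np.empty_like(c)
    out[0] = c[0]
    out[1:] = c[:-1] - c[1:]
    return out
```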
4.5. Relations with hierarchical bases in finite elements. As we have mentioned in the introduction, we now observe the connection between the wavelet basis (4.13) of the case $p = 1$ and the hierarchical basis [27] in one-dimensional piecewise linear finite element spaces. Let $S_J$ be the subspace of $H_0^1(\Omega)$ defined in Theorem 3.1, where the $\theta_{jk}$ are defined in (4.13). Since $p = 1$, now $R = 2p - 1 = 1$ and $\Omega = (0,1)$; so $\tilde{D}_{-1} = \emptyset$. For all $k \in D_j$, $j \ge 0$, we find $\operatorname{supp}\psi_{jk} \subset [0,1]$, so $\theta^0_{jk} = 0$. It is easy to see that $\theta_{jk}$ is a hat function with support $[k/2^j, (k+1)/2^j]$ and height $1/(2\sqrt{2^j})$. That is, the basis functions $\{\theta_{jk}\}$ for $S_J$ are the piecewise linear hierarchical basis functions with the uniform mesh size $h = 2^{-J}$.
On the other hand, consider the $\zeta_{Jk}$ in (4.19). In this case $x_{J,k-1} - x_{Jk} = 0$. Observe that the $\zeta_{Jk}$, for $1 \le k \le 2^J - 1$, are the scaled piecewise linear nodal basis with the uniform mesh size $h = 2^{-J}$.
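For $p = 1$ this identification can be checked directly: integrating the normalized Haar wavelet $\psi_{jk}(x) = 2^{j/2}\psi(2^j x - k)$ produces a hat on $[k/2^j, (k+1)/2^j]$ whose peak is $1/(2\sqrt{2^j})$. A small numerical check (the grid size and helper names are our choices):

```python
import numpy as np

def haar_wavelet(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    pos = ((t >= 0.0) & (t < 0.5)).astype(float)
    neg = ((t >= 0.5) & (t < 1.0)).astype(float)
    return pos - neg

def hat_from_haar(j, k, n=1 << 14):
    """Antiderivative of psi_jk(x) = 2^{j/2} psi(2^j x - k) on [0, 1)."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    psi_jk = 2.0 ** (j / 2.0) * haar_wavelet(2.0 ** j * x - k)
    theta = np.cumsum(psi_jk) / n        # left-endpoint-rule antiderivative
    return x, theta

x, theta = hat_from_haar(j=3, k=5)       # hat supported on [5/8, 6/8]
```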
5. Numerical examples. In this section, we verify our claims on some test problems of the form (4.1), with Dirichlet or Neumann boundary conditions. We have seen in section 4.5 that the finite elements with $p = 1$ are exactly the hierarchical piecewise linear finite elements. So now we test the methods with wavelets of order 2; that is, $p = 2$ and $R = 3$. The programs are coded in FORTRAN. The tests are done on a SUN SparcStation 1.
We implement the functions $\phi$ and $\psi$ by their values at dyadic points in $[0,R]$; that is, the values of $\phi(x)$ and $\psi(x)$ at $x = k/2^D$, for $0 \le k \le 2^D R$ (the functions vanish for $k$ out of this range). The data are stored in disk files. Then we temporarily generate $\phi$ and $\psi$ with resolution $D+1$, from which we compute the functions $\eta$ and $\theta$. The integrations are done by Simpson's numerical quadrature rule. Similarly we store the dyadic values of $\eta$ and $\theta$ in $[0,R]$ with the common denominator $2^D$. In practice, the choice of $D$ will affect the accuracy of the numerical quadratures. The highest level $J$ which can yield accurate solutions is also related to the resolution $D$. To choose $D$ empirically, consider the problem
$$(5.1)\qquad -u'' = 1, \quad \text{for } x \in (0,3); \qquad u(0) = u(3) = 0.$$
Galerkin-wavelets methods 19
The solution $u = -\frac{1}{2}x^2 + \frac{3}{2}x$ just lies in the subspace $S_0$, if the wavelets are of order $p \ge 2$. Hence the errors are due to the computations. The error $\|u - u_J\|_{0,\Omega}$ is approximately $10^{-8}$ for $D = 8$, and $10^{-12}$ for $D = 12$. In most of the computations shown below, we set $D = 8$.
The function values of $\eta$, etc., are repeatedly used during the computation, hence they are read by the program to save some running time. Note that $\eta(x) = 1$ for $x \ge R$. This is the only function involved which does not have a compact support. The programs must treat this exception carefully.
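For reference, dyadic values of $\phi$ can be generated directly from the refinement relation. The sketch below does this for the order-2 Daubechies scaling function ($p = 2$, $R = 3$, coefficients normalized so they sum to 2), starting from the exact values at the integers, and forms an antiderivative by the trapezoidal rule rather than the Simpson rule used in the paper; the helper names are ours:

```python
import numpy as np

SQ3 = np.sqrt(3.0)
# Daubechies order-2 refinement coefficients: phi(x) = sum_m C[m] phi(2x - m),
# with supp(phi) = [0, R], R = 3, and sum(C) == 2.
C = np.array([(1 + SQ3) / 4, (3 + SQ3) / 4, (3 - SQ3) / 4, (1 - SQ3) / 4])
R = 3

def cascade(D):
    """Values phi(k / 2^D) for 0 <= k <= R * 2^D, by dyadic refinement
    starting from the exact values at the integers."""
    phi = np.zeros(R + 1)
    phi[1], phi[2] = (1 + SQ3) / 2, (1 - SQ3) / 2   # eigenvector of refinement
    for d in range(D):
        new = np.zeros(R * 2 ** (d + 1) + 1)
        for k in range(len(new)):
            s = 0.0
            for m in range(4):
                i = k - m * 2 ** d     # index of phi(k/2^d - m) at level d
                if 0 <= i < len(phi):
                    s += C[m] * phi[i]
            new[k] = s
        phi = new
    return phi

def antiderivative(phi, D):
    """Trapezoidal antiderivative eta on the dyadic grid; eta(R) = 1."""
    dx = 2.0 ** (-D)
    return np.concatenate(([0.0], np.cumsum((phi[1:] + phi[:-1]) * dx / 2.0)))
```

Since $\sum_n \phi(x+n) = 1$, the dyadic values sum to $2^D$ exactly, so the trapezoidal integral of $\phi$ and the terminal value $\eta(R)$ both come out as 1 up to roundoff.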
We now test some Dirichlet boundary value problems. Let the finite element spaces $S_J$ be spanned by the basis $\theta_{jk}$ chosen from (4.13). As a first example, we modify (5.1) a little in order to observe the convergence order:
$$(5.2)\qquad -u'' = 3x - 3, \quad \text{for } x \in (0,3); \qquad u(0) = u(3) = 0.$$
The computational results are summarized in Table 1.
(5.2)    $h^{-3}\|u-u_J\|_{0,\Omega}$    $h^{-2}|u-u_J|_{1,\Omega}$    CG
J = 0        0.09153                        0.4750                     3
J = 1        0.08964                        0.5307                     5
J = 2        0.09051                        0.5598                     7
J = 3        0.09114                        0.5766                     8
J = 4        0.08994                        0.5811                    10
Table 1.
In Table 1 and all of the following tables, $h = 2^{-J}$ and the columns labelled "CG" and "PCG" list respectively the numbers of iterations needed for the conjugate gradient and preconditioned conjugate gradient methods to obtain the displayed error. For problem (5.2), the preconditioner $SS^t$ is exactly $A^{-1}$; thus we get exact solutions with one iteration.
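The flavour of these runs can be reproduced without the wavelet machinery: assemble the standard piecewise linear nodal stiffness matrix for (5.2), solve with a plain conjugate gradient iteration, and exploit the fact that in one dimension the linear finite element solution of $-u'' = f$ (with exact load vector) is exact at the nodes. This Python sketch is ours, not the paper's FORTRAN code:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Plain (unpreconditioned) CG for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def solve_52(n):
    """Piecewise linear FEM for -u'' = 3x - 3 on (0,3), u(0) = u(3) = 0,
    with n interior nodes; returns the nodes and the nodal solution."""
    h = 3.0 / (n + 1)
    nodes = h * np.arange(1, n + 1)
    # Stiffness matrix (1/h) * tridiag(-1, 2, -1).
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    # Load b_i = int (3x - 3) hat_i dx; exact for a linear f,
    # since each hat has mass h centred at its node x_i.
    b = h * (3.0 * nodes - 3.0)
    return nodes, conjugate_gradient(A, b)
```

The exact solution of (5.2) is $u = (3x^2 - x^3)/2$, so the nodal values of the discrete solution should match it to roundoff.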
In the second example, we made up a problem with transcendental coefficients:
$$(5.3)\qquad -(e^{-x}u')' = \frac{\pi}{3}\, e^{-x}\Big[\cos\big(\tfrac{\pi}{3}x\big) + \frac{\pi}{3}\sin\big(\tfrac{\pi}{3}x\big)\Big], \quad \text{for } x \in (0,3); \qquad u(0) = u(3) = 0.$$
The computational results are summarized in Table 2. We also compare the numbers of iterations of the conjugate gradient method with and without the preconditioner.
(5.3)    $h^{-3}\|u-u_J\|_{0,\Omega}$    $h^{-2}|u-u_J|_{1,\Omega}$    CG    PCG
J = 0        0.02546                        0.0980                     3      3
J = 1        0.02622                        0.1285                     6      6
J = 2        0.02502                        0.1436                    12      9
J = 3        0.02509                        0.1528                    14     10
J = 4        0.02457                        0.1651                    16     10
Table 2.
In order to observe the trend of convergence in a picture, we tried a problem with a higher frequency solution, say $u(x) = \sin(\lambda x)$, where $\lambda = 4\pi/3$. It is not necessary to have a complicated $q(x)$
Figure 6. $\cdots$ $u_0$; $-\cdot-$ $u_1$; $- - -$ $u_2$; solid line: $u$.
for this issue, so we simply consider the problem $-u'' = \lambda^2 \sin(\lambda x)$. The Galerkin projections $u_J$ with $0 \le J \le 2$ are plotted in Figure 6. Visually there is no difference between $u$ and $u_3$ at this scale.
Finally, let us try a Neumann boundary value problem:
$$(5.4)\qquad -u'' = x - \tfrac{3}{2}, \quad \text{for } x \in (0,3); \qquad u'(0) = u'(3) = 0.$$
Now let $S_J$ be constructed by the basis functions $\eta_{jk}$ defined in (4.10). The convergence rates are also as predicted by Theorem 4.1, as seen in Table 3.
(5.4)    $h^{-3}\|u-u_J\|_{0,\Omega}$    $h^{-2}|u-u_J|_{1,\Omega}$    CG
J = 0        0.03052                        0.1582                     3
J = 1        0.02989                        0.1768                     5
J = 2        0.02961                        0.1857                     6
J = 3        0.02977                        0.1911                     7
J = 4        0.02930                        0.1916                     9
Table 3.
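Problem (5.4) can be checked the same way with the nodal basis: the pure Neumann stiffness matrix is singular (constants lie in its null space) and the load satisfies the compatibility condition $\int_0^3 (x - \tfrac{3}{2})\,dx = 0$, so the sketch below takes a least-squares solution and fixes the constant by normalizing the mean. Again this is our illustration, not the paper's code:

```python
import numpy as np

def solve_54(n):
    """Piecewise linear FEM for -u'' = x - 3/2 on (0,3), u'(0) = u'(3) = 0,
    on n equal elements; the mean of the discrete solution is set to 0."""
    h = 3.0 / n
    nodes = h * np.arange(n + 1)
    # Neumann stiffness matrix: tridiag(-1, 2, -1)/h with 1/h at the corners.
    A = (np.diag(np.r_[1.0, 2.0 * np.ones(n - 1), 1.0])
         - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / h
    f = lambda x: x - 1.5
    b = np.empty(n + 1)
    b[1:-1] = h * f(nodes[1:-1])       # interior hats: mass h, centroid x_i
    b[0] = 0.5 * h * f(h / 3.0)        # boundary half-hat: mass h/2, centroid h/3
    b[-1] = 0.5 * h * f(3.0 - h / 3.0)
    # Singular but consistent system: take the least-squares solution.
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    return nodes, u - u.mean()
```

A solution of (5.4) is $u = -x^3/6 + \tfrac{3}{4}x^2 + \text{const}$; after aligning the nodal means, the discrete and exact nodal values should agree to roundoff.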
REFERENCES
[1] O. Axelsson and G. Lindskog, On the rate of convergence of the preconditioned conjugate gradient method, Numer. Math., 48 (1986), pp. 499–523.
[2] A. P. Calderón, Intermediate spaces and interpolation, the complex method, Studia Math., 24 (1964), pp. 113–190.
[3] P. G. Ciarlet, "The Finite Element Method for Elliptic Problems," North-Holland, Amsterdam, 1978.
[4] R. Coifman and G. Weiss, "Analyse harmonique non commutative sur certains espaces homogènes," Springer-Verlag, Berlin, 1971.
[5] J. M. Combes, A. Grossmann and Ph. Tchamitchian, eds., "Wavelets, Time-Frequency Methods and Phase Space," 2nd ed., Springer-Verlag, Berlin, 1990.
[6] E. Cortina and S. M. Gomes, A wavelet based numerical method applied to free boundary problems, IPE Technical Report, São José dos Campos, Brazil, 1989.
[7] I. Daubechies, Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math., 41 (1988), pp. 909–996.
[8] I. Daubechies and J. Lagarias, Two-scale difference equations. I. Global regularity of solutions, and II. Infinite matrix products, local regularity and fractals, AT&T Bell Laboratories, preprint.
[9] R. J. Duffin and A. C. Schaeffer, A class of nonharmonic Fourier series, Trans. Amer. Math. Soc., 72 (1952), pp. 341–366.
[10] R. Glowinski, W. M. Lawton, M. Ravachol and E. Tenenbaum, Wavelets solution of linear and nonlinear elliptic, parabolic and hyperbolic problems in one space dimension, preprint, AWARE Inc., Massachusetts, 1989.
[11] G. H. Golub and C. F. Van Loan, "Matrix Computations," 2nd ed., Johns Hopkins University Press, Baltimore, 1988.
[12] P. Goupillaud, A. Grossmann and J. Morlet, Cycle-octave and related transforms in seismic signal analysis, Geoexploration, 23 (1984/85), pp. 85–102.
[13] A. Grossmann, M. Holschneider, R. Kronland-Martinet and J. Morlet, Detection of abrupt changes in sound signals with the help of wavelet transforms, in "Inverse Problems: An Interdisciplinary Study," Academic Press, London-Orlando, 1987, pp. 289–306.
[14] A. Grossmann and J. Morlet, Decomposition of Hardy functions into square integrable wavelets of constant shape, SIAM J. Math. Anal., 15 (1984), pp. 723–736.
[15] A. Haar, Zur Theorie der orthogonalen Funktionensysteme, Math. Ann., 69 (1910), pp. 331–371.
[16] C. Heil, Wavelets and frames, in "Signal Processing, Pt. 1," 2nd ed., Springer-Verlag, New York, 1990, pp. 147–160.
[17] R. Kronland-Martinet, J. Morlet and A. Grossmann, Analysis of sound patterns through wavelet transforms, to appear.
[18] A. Latto and E. Tenenbaum, Les ondelettes à support compact et la solution numérique de l'équation de Burgers, preprint, AWARE Inc., Massachusetts, 1990.
[19] J. S. Lienard, "Speech Analysis and Reconstruction Using Short-Time, Elementary Waveforms," LIMSI-CNRS, Orsay, France.
[20] S. Mallat, Multiresolution approximations and wavelet orthonormal bases of $L^2(\mathbf{R})$, Trans. Amer. Math. Soc., 315 (1989), pp. 69–87.
[21] S. Mallat, Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process., 37 (1989), pp. 2091–2110.
[22] Y. Meyer, "Principe d'incertitude, bases hilbertiennes et algèbres d'opérateurs," Bourbaki Seminar, no. 662, 1985.
[23] Y. Meyer, "Ondelettes et opérateurs, I. Ondelettes," Hermann, Paris, 1990.
[24] X. Rodet, Time-domain formant-wave-function synthesis, Computer Music J., 8 (1985).
[25] G. Strang, Wavelets and dilation equations: A brief introduction, SIAM Review, 31 (1989), pp. 614–627.
[26] J. Stromberg, A modified Franklin system and higher-order spline systems on $\mathbf{R}^n$ as unconditional bases for Hardy spaces, in "Conference on Harmonic Analysis in Honor of Antoni Zygmund," Wadsworth, Belmont, 1983, pp. 475–493.
[27] H. Yserentant, On the multi-level splitting of finite element spaces, Numer. Math., 49 (1986), pp. 379–412.
[28] O. C. Zienkiewicz, D. W. Kelly, J. Gago and I. Babuška, Hierarchical finite element approaches, error estimates and adaptive refinement, in "The Mathematics of Finite Elements and Applications IV," Academic Press, London, 1982, pp. 313–346.