
Referring to the form of (2.7.5), the continuous-time normalised cross-correlation may be defined using (2.7.19) and (2.7.20) as

g_{ij}(h) = \frac{r_{c_ic_j}(h)}{[r_{c_ic_i}(0)]^{1/2}\,[r_{c_jc_j}(0)]^{1/2}}
          = \frac{a_i a_j\, r_{ww}[h - (h_j - h_i)]}{[a_i^2\, r_{ww}(0) + r_{n_in_i}(0)]^{1/2}\,[a_j^2\, r_{ww}(0) + r_{n_jn_j}(0)]^{1/2}}        (2.7.21)

As has already been noted, the auto-correlation has its maximum value when the lag is zero; hence g_{ij} is a maximum for h_M = h_j - h_i, and the maximum value is given by

[g_{ij}]_M = \frac{a_i a_j\, r_{ww}(0)}{[a_i^2\, r_{ww}(0) + r_{n_in_i}(0)]^{1/2}\,[a_j^2\, r_{ww}(0) + r_{n_jn_j}(0)]^{1/2}}        (2.7.22)

This is the desired result, but there remains the question of how it can be used. Recall that the object is to relate the cross-correlation to a signal-to-noise ratio, SN say. Consider the choice (2.7.14). Since the zero-lag value of the auto-correlation is no more than the sum of squared magnitudes, it follows that

[SN]_i = \frac{\text{RMS of signal}}{\text{RMS of noise}} = \frac{a_i\,[r_{ww}(0)]^{1/2}}{[r_{n_in_i}(0)]^{1/2}}        (2.7.23)

for trace i. Combining (2.7.22) and (2.7.23) gives

[g_{ij}]_M = \frac{1}{[1 + 1/[SN]_i^2]^{1/2}\,[1 + 1/[SN]_j^2]^{1/2}}        (2.7.24)

Note that since 0 \le [SN]_{i,j} \le \infty, then 0 \le [g_{ij}]_M \le 1, as before.

As it stands, (2.7.24) gives one equation and two unknowns, the two signal-to-noise ratios. There are two ways around this. First, an additional trace c_k could be brought in, giving three equations in three unknowns. If more traces are brought in, statistical redundancy can be used to improve the estimates, remembering that in practice (2.7.24) must be approximated for discrete finite time series, with a corresponding statistical uncertainty. Second, simple assumptions could be made to close the loop. For example, if

[SN] = [SN]_i = [SN]_j,

that is, if the signal-to-noise ratios on traces i and j are assumed identical, then

[SN] = \left[ \frac{[g_{ij}]_M}{1 - [g_{ij}]_M} \right]^{1/2}        (2.7.25)
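As a brief numerical check of (2.7.24) and (2.7.25), consider the following minimal sketch (illustrative Python/NumPy, not part of the original text; the function name is an assumption):

    import numpy as np

    def sn_from_peak_correlation(g_max):
        # Equation (2.7.25): with equal signal-to-noise ratios assumed on the two
        # traces, the peak normalised cross-correlation g_max gives
        # [SN] = [g_max / (1 - g_max)]^(1/2).
        return np.sqrt(g_max / (1.0 - g_max))

    # Forward check via (2.7.24) with [SN]_i = [SN]_j = 3:
    sn = 3.0
    g_max = 1.0 / (1.0 + 1.0 / sn**2)                # (2.7.24) with equal ratios: 0.9
    print(g_max, sn_from_peak_correlation(g_max))    # SN = 3.0 is recovered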
In the next section, the powerful filtering techniques due to Wiener will be introduced and analysed.

2.8 The Wiener filter

2.8.1 Convolution as a matrix equation

As was seen earlier, the discrete linear convolution of two time series, h and x say, is defined by (cf. equation (2.4.1))

y_k = \sum_{j=0}^{L_h} h_j\, x_{k-j}, \qquad k = 0, \ldots, L_y \;(= L_h + L_x)        (2.8.1)

Here L_h is the length of the filter h, L_x is the length of the input x, and L_y is the length of the actual output y. In this section, it will be shown that (2.8.1) can be written as a matrix equation, thus allowing the power of matrix algebra to be applied to the understanding of this fundamental equation. The appropriate form is

\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{L_y} \end{bmatrix}
=
\begin{bmatrix}
x_0    & 0      & 0      & \cdots & 0       \\
x_1    & x_0    & 0      & \cdots & 0       \\
x_2    & x_1    & x_0    & \cdots & 0       \\
x_3    & x_2    & x_1    & \cdots & 0       \\
\vdots &        &        &        & \vdots  \\
0      & 0      & 0      & \cdots & x_{L_x}
\end{bmatrix}
\begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_{L_h} \end{bmatrix}        (2.8.2)

This may easily be verified to be the same as (2.8.1). It can be written as

Y = XH        (2.8.3)

[Note that boldface letters will be used to denote matrix notation throughout this book.] Here Y is the column vector [y_0, \ldots, y_{L_y}]^T, X is the matrix shown in equation (2.8.2), and H is the column vector [h_0, \ldots, h_{L_h}]^T. (2.8.2) and (2.8.3) are also known as the complete transient convolution of x with h.
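The equivalence of (2.8.1), (2.8.2) and (2.8.3) is easy to verify numerically. The sketch below (in Python/NumPy for brevity rather than the FORTRAN discussed later in this section; the function and array names are illustrative assumptions) builds the matrix X of (2.8.2) and checks Y = XH against a direct convolution:

    import numpy as np

    def convolution_matrix(x, n_filter):
        # Build the matrix X of equation (2.8.2): column j holds the input
        # delayed by j samples, so that X @ h is the complete transient
        # convolution of x with a filter of n_filter coefficients.
        n_out = len(x) + n_filter - 1
        X = np.zeros((n_out, n_filter))
        for j in range(n_filter):
            X[j:j + len(x), j] = x
        return X

    x = np.array([1.0, -0.5, 0.25, 2.0])           # input trace
    h = np.array([0.5, 1.0, -1.0])                 # filter coefficients h_0 .. h_Lh
    X = convolution_matrix(x, len(h))
    y = X @ h                                      # Y = XH, equation (2.8.3)
    assert np.allclose(y, np.convolve(x, h))       # identical to (2.8.1)
    print(y)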

2.8.2 The method of least squares

This technique is extremely common throughout exploration seismology, just as in other sciences, and dates back to Gauss. The first principles can be found in many textbooks, and only the following problem will be considered here: suppose a filter h is required which, when convolved with a given input x, produces a desired output d.

In practice, for discrete finite time series, there is no such filter in general, and the question must be refined to: what is the filter which gets closest to producing the desired effect in some sense? The least-squares solution to our problem may be stated as that filter which minimises

I = \sum_{k=0}^{L_h+L_x} (d_k - y_k)^2        (2.8.4)

i.e. the sum of squared differences of desired and actual outputs.
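Anticipating the solution derived in the remainder of this section, note that minimising (2.8.4) is a standard linear least-squares problem in the filter coefficients, so a general-purpose least-squares solver applied to the matrix X of (2.8.2) already yields the filter. A minimal sketch (illustrative Python/NumPy with invented series, not part of the text):

    import numpy as np

    x = np.array([1.0, -0.5, 0.25])                # input
    d = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # desired output: a spike at time zero
    n_filter = 3                                   # number of filter coefficients sought

    # Convolution matrix X of (2.8.2); X @ h is the actual output y.
    n_out = len(x) + n_filter - 1
    X = np.zeros((n_out, n_filter))
    for j in range(n_filter):
        X[j:j + len(x), j] = x

    # Least-squares filter: minimises sum_k (d_k - y_k)^2, i.e. criterion (2.8.4).
    h, _, _, _ = np.linalg.lstsq(X, d, rcond=None)
    print(h)
    print(np.convolve(x, h))                       # actual output, to compare with d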

This least-squares minimisation criterion is also known as minimising using the L_2 norm, taking care not to confuse this with the use of L as a length as above. Note that there are an infinite number of other ways of minimising the error in some sense. For example, we could choose a filter which minimised the sum of the absolute differences in amplitude

I = \sum_{k=0}^{L_h+L_x} |d_k - y_k|        (2.8.5)

which corresponds to minimising in the so-called L_1 norm, as considered by Claerbout and Muir (1973). Much current research centres around these other techniques, but we will only consider the problem of minimising (2.8.4).
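For a concrete comparison of the two criteria, the following sketch (illustrative Python/NumPy with invented series, not part of the text) evaluates the L_2 measure (2.8.4) and the L_1 measure (2.8.5) for one trial filter:

    import numpy as np

    x = np.array([1.0, 0.5, -0.25, 0.1])           # input
    d = np.array([0.0, 1.0, 0.0, 0.0, 0.0])        # desired output: a delayed spike
    h_trial = np.array([0.0, 1.0])                 # a trial filter (a unit delay)

    y = np.convolve(x, h_trial)                    # actual output y_k of (2.8.1)
    I_L2 = np.sum((d - y) ** 2)                    # equation (2.8.4)
    I_L1 = np.sum(np.abs(d - y))                   # equation (2.8.5)
    print(I_L2, I_L1)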
(2.8.3) and (2.8.4) together give

I = \sum_{k=0}^{L_h+L_x} \left( d_k - \sum_{t=0}^{L_h} h_t\, x_{k-t} \right)^2        (2.8.6)

Now expand our definitions of x, d and h to re-introduce the concept of time zero as follows. Let x_l, where l = -P, \ldots, 0, \ldots, Q, be the input; let d_m, where m = -R, \ldots, 0, \ldots, S, be the desired output; and let h_n, where n = -T, \ldots, 0, \ldots, U, be the required filter, which is currently unknown. The problem, then, is to minimise

I = \sum_{k=-P-T}^{U+Q} \left( d_k - \sum_{t=-T}^{U} h_t\, x_{k-t} \right)^2        (2.8.7)

by varying the h_t.

Employing the standard technique of taking the partial derivative with respect to h_t, for all t, and setting it equal to zero gives

\sum_{k=-P-T}^{U+Q} d_k\, x_{k-t} = \sum_{k=-P-T}^{U+Q} \sum_{r=-T}^{U} h_r\, x_{k-r}\, x_{k-t}

Interchanging the orders of summation and rewriting yields finally

\sum_{r=-T}^{U} h_r \sum_{k=-P-T}^{U+Q} x_{k-t}\, x_{k-r} = \sum_{k=-P-T}^{U+Q} d_k\, x_{k-t}, \qquad t = -T, \ldots, 0, \ldots, U        (2.8.8)

This is the result of interest, and is known as the discrete Wiener-Hopf equation, following the pioneering work of the great American mathematician Norbert Wiener. The first term on the left-hand side of this equation is the auto-correlation, which is known; the second term is the filter coefficients, which are unknown; and the right-hand side is the cross-correlation of the input with the desired output, which is also known. Note for now that (2.8.8) can be written in the matrix formulation introduced earlier as

X^T X H = X^T D        (2.8.9)

In addition, it represents a classic example of the normal equations which arise in least-squares problems.
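The structure of (2.8.8) and (2.8.9) can be illustrated as follows (a sketch in Python/NumPy with invented series; for simplicity the filter is taken to be causal, i.e. T = 0). The left-hand side becomes a symmetric Toeplitz matrix of input auto-correlation lags, the right-hand side a vector of input/desired-output cross-correlations, and the filter follows by solving the resulting simultaneous equations; when d spans the full output length the answer coincides with that of the matrix form X^T X H = X^T D of (2.8.9).

    import numpy as np

    x = np.array([1.0, -0.5, 0.25])                # input
    d = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # desired output (a spike)
    n_h = 3                                        # number of filter coefficients

    # Left-hand side of (2.8.8): auto-correlation of x at lags 0 .. n_h - 1,
    # arranged as a symmetric Toeplitz matrix.
    r_xx = np.array([np.dot(x[lag:], x[:len(x) - lag]) for lag in range(n_h)])
    R = np.array([[r_xx[abs(i - j)] for j in range(n_h)] for i in range(n_h)])

    # Right-hand side of (2.8.8): cross-correlation of the desired output
    # with the input, g_t = sum_k d_k x_{k-t}.
    g = np.array([np.dot(d[t:t + len(x)], x) for t in range(n_h)])

    h = np.linalg.solve(R, g)                      # the least-squares (Wiener) filter
    print(h)
    print(np.convolve(x, h))                       # actual output approximates d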
(2.8.8) is actually a set of simultaneous equations which have been solved in the order of ten thousand million times during the era of digital computer technology in seismology alone. In spite of its immense practical importance in discrete time series analysis, Wiener's work is surprisingly little known outside the fields of communication theory, seismology and econometrics. Before continuing with the discussion of (2.8.8) and its many uses, it will be transformed to facilitate its application in a FORTRAN computer program, where the use of negative indices is either forbidden or fraught with danger, depending on the compiler implementation.

Let

z_x be the index of time zero in the input,
z_d be the index of time zero in the desired output,
z_h be the index of time zero in the filter.

Then

z_x = P + 1
z_d = R + 1        (2.8.10)
z_h = T + 1

Now define the shifted arrays and indices

x'_{l+P+1} = x_l,   l = -P, \ldots, 0, \ldots, Q
d'_{m+R+1} = d_m,   m = -R, \ldots, 0, \ldots, S
h'_{n+T+1} = h_n,   n = -T, \ldots, 0, \ldots, U
t' = t + T + 1
r' = r + T + 1
k' = k + P + T + 1        (2.8.11)

Substituting (2.8.10) and (2.8.11) in (2.8.8) gives, after some manipulation,

\sum_{r'=1}^{L_h} h'_{r'} \sum_{k'=1}^{L_x+L_h-1} x'_{k'-t'+1}\, x'_{k'-r'+1} = \sum_{k'=1}^{L_x+L_h-1} d'_{k'+z_d-z_x-z_h+1}\, x'_{k'-t'+1}, \qquad t' = 1, \ldots, L_h        (2.8.12)

where L_x and L_h are the total lengths of the input and filter respectively. The reworked version of (2.8.8) is in a form suitable for inclusion in a computer program. Note that a knowledge of z_x is unnecessary for the calculation of the left-hand side of (2.8.12), another manifestation of the fact that the auto-correlation has no phase information.
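In a 0-based language the bookkeeping of (2.8.10) and (2.8.11) is the same except that the '+1' disappears. A small sketch (illustrative Python/NumPy; the numbers are invented) stores series with negative time origins, records where time zero sits, and computes the auto-correlation lags needed for the left-hand side of (2.8.12) without ever using z_x:

    import numpy as np

    # Series on the index ranges of Section 2.8.2:
    # x_l for l = -P..Q, d_m for m = -R..S, h_n for n = -T..U.
    P, Q = 1, 3                                   # input runs from l = -1 to l = 3
    R, S = 0, 4                                   # desired output runs from m = 0 to m = 4
    T, U = 1, 1                                   # filter runs from n = -1 to n = 1

    x = np.array([0.3, 1.0, -0.5, 0.25, 0.1])     # stored x': x_{-1} .. x_{3}
    d = np.array([1.0, 0.0, 0.0, 0.0, 0.0])       # stored d': d_{0} .. d_{4}
    assert len(x) == P + Q + 1 and len(d) == R + S + 1

    # Indices of time zero, as in (2.8.10); Python is 0-based, so subtract 1 to index.
    z_x, z_d, z_h = P + 1, R + 1, T + 1
    x_at_time_zero = x[z_x - 1]                   # x_0
    d_at_time_zero = d[z_d - 1]                   # d_0

    # The left-hand side of (2.8.12) needs only auto-correlation lags of x,
    # so z_x never enters its computation.
    L_h = T + U + 1                               # total filter length
    r_xx = np.array([np.dot(x[lag:], x[:len(x) - lag]) for lag in range(L_h)])
    print(z_x, z_d, z_h, x_at_time_zero, d_at_time_zero)
    print(r_xx)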
2.8.3 The Wiener filter and its uses

(2.8.8) occurs in many guises in seismic data processing and is at the heart of many inverse filtering techniques, as will be elucidated in Chapter 5. For now, the following common areas of application will be described:


