arXiv:1102.0389v2 [math.PR] 14 Mar 2011
Bernoulli 17(1), 2011, 170–193
DOI: 10.3150/10-BEJ265

Rice formulae and Gaussian waves

JEAN-MARC AZAÏS¹, JOSÉ R. LEÓN² and MARIO WSCHEBOR³

¹Université de Toulouse, IMT, LSP, F-31062 Toulouse Cedex 9, France. E-mail: azais@cict.fr
²Escuela de Matemática, Facultad de Ciencias, Universidad Central de Venezuela, A.P. 47197, Los Chaguaramos, Caracas 1041-A, Venezuela. E-mail: jose.leon@ciens.ucv.ve
³Centro de Matemática, Facultad de Ciencias, Universidad de la República, Calle Iguá 4225, 11400 Montevideo, Uruguay. E-mail: wschebor@cmat.edu.uy
We use Rice formulae in order to compute the moments of some level functionals which are
linked to problems in oceanography and optics: the number of specular points in one and two
dimensions, the distribution of the normal angle of level curves and the number of dislocations
in random wavefronts. We compute expectations and, in some cases, also second moments of
such functionals. Moments of order greater than one are more involved, but one needs them
whenever one wants to perform statistical inference on some parameters in the model or to test
the model itself. In some cases, we are able to use these computations to obtain a central limit
theorem.
Keywords: dislocations of wavefronts; random seas; Rice formulae; specular points
1. Introduction
Many problems in applied mathematics require estimations of the number of points, the length, the volume and so on, of the level sets of a random function {W(x) : x ∈ ℝᵈ}, or of some functionals defined on them. Let us mention some examples which illustrate this general situation:
1. A first example in dimension one is the number of times that a random process {X(t) : t ∈ ℝ} crosses the level u:

N_A^X(u) = #{s ∈ A : X(s) = u}.

Generally speaking, the probability distribution of the random variable N_A^X(u) is unknown, even for simple models of the underlying process. However, there exist some formulae to compute E(N_A^X(u)) and also higher-order moments; see, for example, [6].
2. A particular case is the number of specular points of a random curve or a random surface. Consider first the case of a random curve. A light source placed at (0, h₁) emits a ray that is reflected at the point (x, W(x)) of the curve and the reflected ray is registered by an observer placed at (0, h₂). Using the equality between the angles of incidence and reflection with respect to the normal vector to the curve (i.e., N(x) = (−W′(x), 1)), an elementary computation gives

W′(x) = (α₂r₁ − α₁r₂) / (x(r₂ − r₁)),   (1)

where αᵢ := hᵢ − W(x) and rᵢ := √(x² + αᵢ²), i = 1, 2. The points (x, W(x)) of the curve such that x is a solution of (1) are called specular points. For each Borel subset A of the real line, we denote by SP₁(A) the number of specular points belonging to A. One of our aims is to study the probability distribution of SP₁(A).

[This is an electronic reprint of the original article published by the ISI/BS in Bernoulli, 2011, Vol. 17, No. 1, 170–193. This reprint differs from the original in pagination and typographic detail. 1350-7265 © 2011 ISI/BS]
3. The following approximation, which turns out to be very accurate in practice for ocean waves, was introduced some time ago by Longuet-Higgins ([10, 11]; see also [9]). If we suppose that h₁ and h₂ are large with respect to W(x) and x, then rᵢ = αᵢ + x²/(2αᵢ) + O(hᵢ⁻³). (1) can then be approximated by

W′(x) ≈ (x/2) (α₁ + α₂)/(α₁α₂) ≈ (x/2) (h₁ + h₂)/(h₁h₂) = kx,   where k := (1/2)(1/h₁ + 1/h₂).   (2)
Set Y(x) := W′(x) − kx.

In what follows, σ_d′ (d′ < d) denotes the d′-dimensional Hausdorff measure of a Borel set B and Mᵀ the transpose of a matrix M. (const) is a positive constant whose value may change from one occurrence to another. p_ξ(x) is the density of the random variable or vector ξ at the point x, whenever it exists. If not otherwise stated, all random fields are assumed to be Gaussian and centered.
2. Specular points in dimension one

2.1. Expectation of the number of specular points

We first consider the Longuet-Higgins approximation (2) of the number of specular points (x, W(x)), that is,

SP₂(I) = #{x ∈ I : Y(x) = W′(x) − kx = 0}.

We assume that {W(x) : x ∈ ℝ} has C² paths and is stationary. The Rice formula for the first moment ([3], Theorem 3.2) then applies and gives

E(SP₂(I)) = ∫_I E(|Y′(x)|) (1/√λ₂) φ(kx/√λ₂) dx   (3)
          = ∫_I G(k, √λ₄) (1/√λ₂) φ(kx/√λ₂) dx,
where λ₂ and λ₄ are the spectral moments of W and

G(μ, σ) := E(|Z|), Z ~ N(μ, σ²), so that G(μ, σ) = μ[2Φ(μ/σ) − 1] + 2σφ(μ/σ),   (4)

where φ(·) and Φ(·) are, respectively, the density and cumulative distribution functions of the standard Gaussian distribution.
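The function G of (4) is elementary; as a sanity check, the closed form can be compared with a Monte Carlo estimate of E|Z| (the sample size and parameter values below are arbitrary choices for this sketch).

```python
import numpy as np
from math import erf, sqrt, pi, exp

def G(mu, sigma):
    """G(mu, sigma) = E|Z| for Z ~ N(mu, sigma^2), equation (4):
    mu*(2*Phi(mu/sigma) - 1) + 2*sigma*phi(mu/sigma)."""
    u = mu / sigma
    Phi = 0.5 * (1.0 + erf(u / sqrt(2.0)))   # standard normal cdf
    phi = exp(-0.5 * u * u) / sqrt(2.0 * pi) # standard normal density
    return mu * (2.0 * Phi - 1.0) + 2.0 * sigma * phi

rng = np.random.default_rng(0)
mc = np.abs(rng.normal(1.3, 2.0, size=10**6)).mean()  # Monte Carlo E|Z|
```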
If we look at the total number of specular points over the whole line, we get

E(SP₂(ℝ)) = G(k, √λ₄)/k ≈ √(2λ₄/π) (1/k) [1 + k²/(2λ₄) − k⁴/(24λ₄²) + ⋯],   (5)

which is the result given in [10], part II, formula (2.14), page 846. Note that this quantity is an increasing function of √λ₄/k.
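Expansion (5) can be checked against the exact expression E(SP₂(ℝ)) = G(k, √λ₄)/k for small k; the values k = 0.05 and λ₄ = 3 below are arbitrary test choices, and G is the function of (4).

```python
from math import erf, sqrt, pi, exp

def G(mu, sigma):
    """E|Z| for Z ~ N(mu, sigma^2), equation (4)."""
    u = mu / sigma
    Phi = 0.5 * (1.0 + erf(u / sqrt(2.0)))
    phi = exp(-0.5 * u * u) / sqrt(2.0 * pi)
    return mu * (2.0 * Phi - 1.0) + 2.0 * sigma * phi

def exact_total(k, lam4):
    # E(SP2(R)) = G(k, sqrt(lam4)) / k
    return G(k, sqrt(lam4)) / k

def expansion(k, lam4):
    # right-hand side of (5), truncated after the k^4 term
    return sqrt(2.0 * lam4 / pi) / k * (1.0 + k**2 / (2.0 * lam4)
                                        - k**4 / (24.0 * lam4**2))
```

The truncation error of (5) is O(k⁶), so for k = 0.05 the two values agree to many digits; the exact value also grows as k decreases, in line with the monotonicity remark above.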
We now turn to the computation of the expectation of the number of specular points SP₁(I) defined by (1). It is equal to the number of zeros of the process {Z(x) := W′(x) − m₁(x, W(x)) : x ∈ ℝ}, where

m₁(x, w) = [x² − (h₁ − w)(h₂ − w) + √([x² + (h₁ − w)²][x² + (h₂ − w)²])] / [x(h₁ + h₂ − 2w)].
Assume that the process {W(x) : x ∈ ℝ} is Gaussian, centered and stationary, with λ₀ = 1. The process Z is not Gaussian, so we use [3], Theorem 3.4, to get

E(SP₁([a, b])) = ∫_a^b dx ∫_{−∞}^{+∞} E(|Z′(x)| | Z(x) = 0, W(x) = w) (1/(2π√λ₂)) e^{−w²/2} e^{−m₁²(x,w)/(2λ₂)} dw.   (6)
For the conditional expectation in (6), note that

Z′(x) = W″(x) − (∂m₁/∂x)(x, W(x)) − (∂m₁/∂w)(x, W(x)) W′(x),

so that, under the condition {Z(x) = 0, W(x) = w}, we get

Z′(x) = W″(x) − K(x, w),   where K(x, w) := (∂m₁/∂x)(x, w) + (∂m₁/∂w)(x, w) m₁(x, w).

Therefore,

E(SP₁([a, b])) = (√(λ₄ − λ₂²)/(2π√λ₂)) ∫_a^b dx ∫_{−∞}^{+∞} G(m, 1) exp[−(1/2)(w² + m₁²(x, w)/λ₂)] dw,   (7)

where m = m(x, w) = (λ₂w + K(x, w))/√(λ₄ − λ₂²) and G is defined in (4). In (7), the integral is convergent as a → −∞, b → +∞ and this formula is well adapted to numerical approximation.
We have performed some numerical computations to compare the exact expectation given by (7) with the approximation (3) in the stationary case. The result depends on h₁, h₂, λ₄ and λ₂, and, after scaling, we can assume that λ₂ = 1. When h₁ ≈ h₂, the approximation (3) is very sharp. For example, if h₁ = 100, h₂ = 100, λ₄ = 3, the expectation of the total number of specular points over ℝ is 138.2; using the approximation (5), the result with the exact formula is around 2 × 10⁻² larger (this is the same order as the error in the computation of the integral). For h₁ = 90, h₂ = 110, λ₄ = 3, the results are 136.81 and 137.7, respectively. If h₁ = 100, h₂ = 300, λ₄ = 3, the results differ significantly and Figure 1 displays the densities in the integrand of (6) and (3) as functions of x.

Figure 1. Intensity of specular points in the case h₁ = 100, h₂ = 300, λ₄ = 3. Solid line corresponds to the exact formula, dashed line corresponds to the approximation (3).
2.2. Variance of the number of specular points

We assume that the covariance function Γ(x − y) := E(W(x)W(y)) has enough regularity to perform the computations below, the precise requirements being given in the statement of Theorem 1.

Writing, for short, S = SP₂(ℝ), we have

Var(S) = E(S(S − 1)) + E(S) − [E(S)]².   (8)
Using [3], Theorem 3.2, we have

E(S(S − 1)) = ∫∫_{ℝ²} E(|W″(x) − k||W″(y) − k| | W′(x) = kx, W′(y) = ky) p_{W′(x),W′(y)}(kx, ky) dx dy,   (9)
where

p_{W′(x),W′(y)}(kx, ky) = (1/(2π√(λ₂² − Γ″²(x − y)))) exp(−(1/2) k²(λ₂x² + 2Γ″(x − y)xy + λ₂y²)/(λ₂² − Γ″²(x − y))),   (10)

under the condition that the density (10) does not degenerate for x ≠ y.
For the conditional expectation in (9), we perform a Gaussian regression of W″(x) (resp., W″(y)) on the pair (W′(x), W′(y)). Setting z = x − y, we obtain

W″(x) = θ_y(x) + a_y(x)W′(x) + b_y(x)W′(y),

a_y(x) = −Γ‴(z)Γ″(z)/(λ₂² − Γ″²(z)),   b_y(x) = −λ₂Γ‴(z)/(λ₂² − Γ″²(z)),

where θ_y(x) is Gaussian, centered and independent of (W′(x), W′(y)); θ_x(y) is defined symmetrically, interchanging x and y. Replacing in (9), the conditional expectation becomes

E(|θ_y(x) − k[1 + Γ‴(z)(Γ″(z)x + λ₂y)/(λ₂² − Γ″²(z))]| · |θ_x(y) − k[1 − Γ‴(z)(Γ″(z)y + λ₂x)/(λ₂² − Γ″²(z))]|).   (11)
Note that the singularity on the diagonal x = y is removable, since a Taylor expansion shows that, for z → 0,

1 + Γ‴(z)(Γ″(z)x + λ₂y)/(λ₂² − Γ″²(z)) = (1/2)(λ₄/λ₂) x (z + O(z³)).   (12)
It can be checked that

σ²(z) := E((θ_y(x))²) = E((θ_x(y))²) = λ₄ − λ₂Γ‴²(z)/(λ₂² − Γ″²(z)),   (13)

E(θ_y(x)θ_x(y)) = Γ⁽⁴⁾(z) + Γ″(z)Γ‴²(z)/(λ₂² − Γ″²(z)).   (14)
Moreover, if λ₆ < +∞, we can show that, as z → 0,

σ²(z) ≈ (1/4)(λ₆ − λ₄²/λ₂) z²   (15)

and it follows that the singularity on the diagonal of the integrand on the right-hand side of (9) is also removable.
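Formulas (13) and (15) can be verified for a concrete covariance. Below we take Γ(z) = e^{−z²/2}, an arbitrary choice for which λ₂ = 1, λ₄ = 3, λ₆ = 15, and compare σ²(z) with the equivalent (1/4)(λ₆ − λ₄²/λ₂)z² for small z.

```python
from math import exp

# Gamma(z) = exp(-z^2/2); spectral moments lam2 = 1, lam4 = 3, lam6 = 15
def Gpp(z):   # Gamma''(z)
    return (z * z - 1.0) * exp(-z * z / 2.0)

def Gppp(z):  # Gamma'''(z)
    return z * (3.0 - z * z) * exp(-z * z / 2.0)

lam2, lam4, lam6 = 1.0, 3.0, 15.0

def sigma2(z):
    """Equation (13): sigma^2(z) = lam4 - lam2*Gamma'''(z)^2/(lam2^2 - Gamma''(z)^2).
    (Not defined at z = 0, where the singularity is removable.)"""
    return lam4 - lam2 * Gppp(z) ** 2 / (lam2 ** 2 - Gpp(z) ** 2)

z = 1e-3
approx = 0.25 * (lam6 - lam4 ** 2 / lam2) * z ** 2   # right-hand side of (15)
```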
We will make use of the following auxiliary statement that we state as a lemma for further reference. The proof requires some calculations, but is elementary, so we omit it. The value of H(ρ; 0, 0) can be found in, for example, [6], pages 211–212.
Lemma 1. Let

H(ρ; μ, ν) := E(|ξ + μ||η + ν|),

where the pair (ξ, η) is centered Gaussian, E(ξ²) = E(η²) = 1, E(ξη) = ρ.
Then, if μ² + ν² ≤ 1 and 0 ≤ ρ ≤ 1,

H(ρ; μ, ν) = H(ρ; 0, 0) + R₂(ρ; μ, ν),

where

H(ρ; 0, 0) = (2/π)[√(1 − ρ²) + ρ arctan(ρ/√(1 − ρ²))]

and |R₂(ρ; μ, ν)| ≤ 3(μ² + ν²).
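The closed form for H(ρ; 0, 0) in Lemma 1 is the classical formula for E|ξη| and is easy to confirm by simulation (ρ and the sample size are arbitrary choices for this sketch).

```python
import numpy as np
from math import sqrt, atan, pi

def H00(rho):
    """H(rho; 0, 0) = E|xi * eta| for standard Gaussians with correlation rho:
    (2/pi) * (sqrt(1 - rho^2) + rho * arctan(rho / sqrt(1 - rho^2)))."""
    s = sqrt(1.0 - rho * rho)
    return (2.0 / pi) * (s + rho * atan(rho / s))

rng = np.random.default_rng(1)
rho = 0.5
xi = rng.standard_normal(10**6)
eta = rho * xi + sqrt(1.0 - rho**2) * rng.standard_normal(10**6)
mc = np.abs(xi * eta).mean()   # Monte Carlo estimate of H(rho; 0, 0)
```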
In the next theorem, we compute the equivalent of the variance of the number of specular points, under certain hypotheses on the random process W and with the Longuet-Higgins asymptotic. This result is new and useful for estimation purposes since it implies that, as k → 0, the coefficient of variation of the random variable S tends to zero at a known speed. Moreover, it will also appear in a natural way when normalizing S to obtain a central limit theorem.
Theorem 1. Assume that the centered Gaussian stationary process 𝒲 = {W(x) : x ∈ ℝ} is δ-dependent, that is, Γ(z) = 0 if |z| > δ, and that it has C⁴ paths. Then, as k → 0, we have

Var(S) = θ/k + O(1),   (16)

where

θ := J + √(2λ₄/π) − 2δλ₄/(π√(πλ₂)),   J := ∫_{−δ}^{+δ} σ²(z)H(ρ(z); 0, 0)/(2√(π(λ₂ + Γ″(z)))) dz,

ρ(z) := (1/σ²(z))[Γ⁽⁴⁾(z) + Γ″(z)Γ‴²(z)/(λ₂² − Γ″²(z))],

σ²(z) is defined in (13) and H is defined in Lemma 1. Moreover, as k → 0, we have

√(Var(S))/E(S) ≈ √(πθ/(2λ₄)) √k.
Remarks.

(1) The δ-dependence hypothesis can be replaced by some weaker mixing condition, such as

|Γ⁽ⁱ⁾(z)| ≤ (const)(1 + |z|)⁻ᵅ   (0 ≤ i ≤ 4)

for some α > 1, in which case the value of θ should be

θ = √(2λ₄/π) + ∫_{−∞}^{+∞} [σ²(z)H(ρ(z); 0, 0)/(2√(π(λ₂ + Γ″(z)))) − λ₄/(π√(πλ₂))] dz.
The proof of this extension can be constructed along the same lines as the one we give below, with some additional computations.

(2) The above computations complete the study done in [10] (Theorem 4). In [9], the random variable SP₂(I) is expanded in the Wiener–Hermite chaos. The aforementioned expansion yields the same formula for the expectation and also allows a formula to be obtained for the variance. However, this expansion is difficult to manipulate in order to get the result of Theorem 1.
Proof of Theorem 1. We use the notation and the computations preceding the statement of the theorem.

Divide the integral on the right-hand side of (9) into two parts, corresponding to |x − y| > δ and |x − y| ≤ δ, that is,

E(S(S − 1)) = ∫∫_{|x−y|>δ} + ∫∫_{|x−y|≤δ} = I₁ + I₂.   (17)

In the first term, the δ-dependence of the process implies that one can factorize the conditional expectation and the density in the integrand. Taking into account that, for each x ∈ ℝ, the random variables W′(x) and W″(x) are independent, we obtain

I₁ = ∫∫_{|x−y|>δ} E(|W″(x) − k|)E(|W″(y) − k|) p_{W′(x)}(kx) p_{W′(y)}(ky) dx dy.
On the other hand, we know that W′(x) (resp., W″(x)) is centered Gaussian with variance λ₂ (resp., λ₄). Hence,

I₁ = [G(k, √λ₄)]² ∫∫_{|x−y|>δ} (1/(2πλ₂)) exp(−k²(x² + y²)/(2λ₂)) dx dy.
To compute the integral on the right-hand side, note that the integral over the whole (x, y) plane is equal to 1/k², so that it suffices to compute the integral over the set |x − y| ≤ δ. Changing variables, this last integral is equal to

∫_{−∞}^{+∞} dx ∫_{x−δ}^{x+δ} (1/(2πλ₂)) exp(−k²(x² + y²)/(2λ₂)) dy = (δ/√(πλ₂))(1/k) + O(1),
where the last term is bounded if k is bounded (remember that we are considering an approximation in which k → 0). Therefore, we can conclude that

∫∫_{|x−y|>δ} (1/(2πλ₂)) exp(−k²(x² + y²)/(2λ₂)) dx dy = 1/k² − (δ/√(πλ₂))(1/k) + O(1),

from which we deduce, performing a Taylor expansion, that

I₁ = (2λ₄/π)[1/k² − (δ/√(πλ₂))(1/k) + O(1)].   (18)
Let us now turn to I₂. Using Lemma 1 and the equivalences (12) and (15), whenever |z| = |x − y| ≤ δ, the integrand on the right-hand side of (9) is bounded by

(const)[H(ρ(z); 0, 0) + k²(x² + y²)].
We divide the integral I₂ into two parts.

First, on the set {(x, y) : |x| ≤ 2δ, |x − y| ≤ δ}, the integral is clearly bounded by some constant.

Second, we consider the integral on the set {(x, y) : x > 2δ, |x − y| ≤ δ}. (The symmetric case, replacing x > 2δ by x < −2δ, is similar; that is the reason for the factor 2 in what follows.) We have (recall that z = x − y)

I₂ = O(1) + 2 ∫∫_{|x−y|≤δ, x>2δ} σ²(z)[H(ρ(z); 0, 0) + R₂(ρ(z); μ, ν)]
  × (1/(2π√(λ₂² − Γ″²(z)))) exp(−(1/2) k²(λ₂x² + 2Γ″(x − y)xy + λ₂y²)/(λ₂² − Γ″²(x − y))) dx dy,
which can be rewritten as

I₂ = O(1) + 2 ∫_{−δ}^{δ} σ²(z)[H(ρ(z); 0, 0) + R₂(ρ(z); μ, ν)] (1/√(2π(λ₂ + Γ″(z)))) exp(−k²z²/(4(λ₂ + Γ″(z)))) dz
  × ∫_{2δ}^{+∞} (1/√(2π(λ₂ − Γ″(z)))) exp(−k²(x − z/2)²/(λ₂ − Γ″(z))) dx.
Changing variables, the inner integral becomes

(1/(√2 k)) ∫_{ν₀}^{+∞} (1/√(2π)) exp(−ν²/2) dν = (1/(2√2))(1/k) + O(1),   (19)

where ν₀ = √2 k(2δ − z/2)/√(λ₂ − Γ″(z)).
Substituting this into I₂, we obtain

I₂ = O(1) + J/k.   (20)

To finish, combine (20) with (18), (17), (8) and (5). □
2.3. Central limit theorem

Theorem 2. Assume that the process 𝒲 satisfies the hypotheses of Theorem 1. In addition, we assume that the fourth moment of the number of approximate specular points on an interval having length equal to 1 is uniformly bounded in k, that is, for all a ∈ ℝ and 0 < k < 1,

E([SP₂([a, a + 1])]⁴) ≤ (const).   (21)

Then, as k → 0,

(S − √(2λ₄/π)(1/k)) / √(θ/k) ⇒ N(0, 1)   in distribution.
Remarks. One can give conditions for the additional hypothesis (21) to hold true. Even though they are not nice, they are not costly from the point of view of physical models. For example, either one of the following conditions implies (21):

(i) the paths x ↦ W(x) are of class C¹¹ (use [3], Theorem 3.6, with m = 4, applied to the random process {W′(x) : x ∈ ℝ});
(ii) the paths x ↦ W(x) are of class C⁹ and the support of the spectral measure has an accumulation point (apply [3], Example 3.4, Proposition 5.10 and Theorem 3.4, to show that the fourth moment of the number of zeros of W′(x) − kx is bounded).

Note that the asymptotic here differs from other ones existing in the literature on related subjects (compare with, e.g., [7] and [12]).
Proof of Theorem 2. Let α and β be real numbers satisfying the conditions 1/2 < α < 1, α + β > 1, 2α + β < 2. It suffices to prove the convergence as k takes values on a sequence of positive numbers tending to 0. To keep in mind that the parameter is k, we use the notation S(k) := S = SP₂(ℝ).

Choose k small enough so that [k^{−α}] > δ and, for j ∈ ℤ, define the intervals

U_j^k = (j[k^{−α}] + δ/2, (j + 1)[k^{−α}] − δ/2),   I_j^k = [j[k^{−α}] − δ/2, j[k^{−α}] + δ/2].

Each interval U_j^k has length [k^{−α}] − δ.

We write

T(k) = Σ_{|j|≤[k^{−β}]} SP₂(U_j^k),   V_k = (Var(S(k)))^{−1/2} ≈ √(k/θ),

where the equivalence is due to Theorem 1.
The proof is performed in two steps, which easily imply the statement. In the first, it is proved that V_k[S(k) − T(k)] tends to 0 in the L² of the underlying probability space. In the second step, we prove that V_k T(k) is asymptotically standard normal.

Step 1. We first prove that V_k[S(k) − T(k)] tends to 0 in L¹. Since it is non-negative, it suffices to show that its expectation tends to zero. We have

S(k) − T(k) = Σ_{|j|<[k^{−β}]} SP₂(I_j^k) + Z₁ + Z₂,

where Z₁ = SP₂((−∞, −[k^{−β}][k^{−α}] + δ/2)), Z₂ = SP₂(([k^{−β}][k^{−α}] − δ/2, +∞)).

Using the fact that E(SP₂(I)) ≤ (const) ∫_I φ(kx/√λ₂) dx, we can show that

V_k E(S(k) − T(k)) ≤ (const) k^{1/2} [δ Σ_{j=0}^{+∞} φ(jk^{1−α}/√λ₂) + ∫_{[k^{−β}][k^{−α}]}^{+∞} φ(kx/√λ₂) dx],
which tends to zero as a consequence of the choice of α and β. It suffices now to prove that V_k² Var(S(k) − T(k)) → 0 as k → 0. Using independence, we have

Var(S(k) − T(k)) = Σ_{|j|<[k^{−β}]} Var(SP₂(I_j^k)) + Var(Z₁) + Var(Z₂)
≤ Σ_{|j|<[k^{−β}]} E(SP₂(I_j^k)(SP₂(I_j^k) − 1)) + E(Z₁(Z₁ − 1)) + E(Z₂(Z₂ − 1)) + E(S(k) − T(k)).
We already know that V_k² E(S(k) − T(k)) → 0. Since each I_j^k can be covered by a fixed number of intervals of size one, we know that E(SP₂(I_j^k)(SP₂(I_j^k) − 1)) is bounded by a constant which does not depend on k and j. Therefore,

V_k² Σ_{|j|<[k^{−β}]} E(SP₂(I_j^k)(SP₂(I_j^k) − 1)) ≤ (const) k^{1−β},

which tends to zero because of the choice of β. The remaining two terms can be bounded in a similar form as in the proof of Theorem 1.
Step 2. T(k) is a sum of independent, but not equidistributed, random variables. To prove that it satisfies a central limit theorem, we will use a Lyapunov condition based on fourth moments. Set

M_j^m := E[SP₂(U_j^k) − E(SP₂(U_j^k))]^m.

For the Lyapunov condition, it suffices to verify that

(1/B_k⁴) Σ_{|j|≤[k^{−β}]} M_j⁴ → 0 as k → 0, where B_k² := Σ_{|j|≤[k^{−β}]} M_j².   (22)
To prove (22), we divide each interval U_j^k into p ≈ [k^{−α}]/δ intervals I₁, ..., I_p of equal size approximately δ. We have

E(SP₁ + ⋯ + SP_p)⁴ = Σ_{1≤i₁,i₂,i₃,i₄≤p} E(SP_{i₁}SP_{i₂}SP_{i₃}SP_{i₄}),   (23)
where SP_i stands for SP₂(I_i) − E(SP₂(I_i)). Since the size of all the intervals is equal to δ, given the finiteness of fourth moments in the hypothesis, it follows that E(SP_{i₁}SP_{i₂}SP_{i₃}SP_{i₄}) is bounded.

On the other hand, the number of terms which do not vanish in the sum on the right-hand side of (23) is O(p²). In fact, if one of the indices in (i₁, i₂, i₃, i₄) differs by more than 1 from all the others, then E(SP_{i₁}SP_{i₂}SP_{i₃}SP_{i₄}) = 0. Hence,

E[SP₂(U_j^k) − E(SP₂(U_j^k))]⁴ ≤ (const) k^{−2α},

so that
Σ_{|j|≤[k^{−β}]} M_j⁴ = O(k^{−2α} k^{−β}).

3. Specular points in two dimensions

The specular points of a random surface W(x, y) satisfy the two-dimensional analogue of (1), namely the system

W_x = (x/(x² + y²)) (α₂r₁ − α₁r₂)/(r₂ − r₁),   W_y = (y/(x² + y²)) (α₂r₁ − α₁r₂)/(r₂ − r₁),   (24)

where αᵢ and rᵢ, i = 1, 2, are defined as in the one-dimensional case.
When h₁ and h₂ are large, the system above can be approximated by

W_x = kx,   W_y = ky,   (25)

under the same conditions as in dimension one.

Next, we compute the expectation of SP₂(Q), the number of approximate specular points, in the sense of (25), that are in a domain Q. In the remainder of this paragraph, we limit our attention to this approximation and to the case in which {W(x, y) : (x, y) ∈ ℝ²} is a centered, stationary Gaussian field. Setting Y(x, y) := (W_x(x, y) − kx, W_y(x, y) − ky), the Rice formula gives

E(SP₂(Q)) = ∫_Q E(|det Y′(x, y)|) p_{Y(x,y)}(0) dx dy,   (27)

since, for fixed (x, y), the random matrix Y′(x, y) and the random vector Y(x, y) are independent, and

p_{Y(x,y)}(0) = (1/(2π√(λ₂₀λ₀₂ − λ₁₁²))) exp(−(k²/(2(λ₂₀λ₀₂ − λ₁₁²)))(λ₀₂x² − 2λ₁₁xy + λ₂₀y²)).   (28)
To compute the expectation of the absolute value of the determinant on the right-hand side of (27), which does not depend on x, y, we use the method of [4]. Set

Δ := det Y′(x, y) = (W_xx − k)(W_yy − k) − W_xy².

We have

E(|Δ|) = E[(2/π) ∫_0^{+∞} (1 − cos(tΔ))/t² dt].   (29)

Define

h(t) := E[exp(it[(W_xx − k)(W_yy − k) − W_xy²])].

Then

E(|Δ|) = (2/π) ∫_0^{+∞} (1 − Re[h(t)])/t² dt.   (30)
We now proceed to give a formula for Re[h(t)]. Define

A = [0, 1/2, 0; 1/2, 0, 0; 0, 0, −1]

and denote by Σ the variance matrix of (W_xx, W_yy, W_xy):

Σ := [λ₄₀, λ₂₂, λ₃₁; λ₂₂, λ₀₄, λ₁₃; λ₃₁, λ₁₃, λ₂₂].
Let Σ^{1/2}AΣ^{1/2} = P diag(Δ₁, Δ₂, Δ₃) Pᵀ, where P is orthogonal. Then

h(t) = e^{itk²} E(exp[it((Δ₁Z₁² − k(s₁₁ + s₂₁)Z₁) + (Δ₂Z₂² − k(s₁₂ + s₂₂)Z₂) + (Δ₃Z₃² − k(s₁₃ + s₂₃)Z₃))]),   (31)

where (Z₁, Z₂, Z₃) is standard normal and the s_ij are the entries of Σ^{1/2}P.
One can check that, if η is a standard normal variable and λ, β are real constants, λ > 0, then

E(e^{iλ(η+β)²}) = (1 − 2iλ)^{−1/2} e^{iλβ²/(1−2iλ)}
             = (1 + 4λ²)^{−1/4} exp(−2λ²β²/(1 + 4λ²) + i[φ + λβ²/(1 + 4λ²)]),

where φ = (1/2) arctan(2λ), 0 < φ < π/4. Substituting this into (31), we obtain

Re[h(t)] = [∏_{j=1}^{3} d_j(t, k)(1 + 4Δ_j²t²)^{−1/4}] cos(k²t + Σ_{j=1}^{3} (φ_j(t) − k²t μ_j(t))),   (32)

where, for j = 1, 2, 3:

d_j(t, k) = exp(−k²t²(s_{1j} + s_{2j})²/(2(1 + 4Δ_j²t²)));
φ_j(t) = (1/2) arctan(2Δ_j t), |φ_j| < π/4;
μ_j(t) = Δ_j t²(s_{1j} + s_{2j})²/(1 + 4Δ_j²t²).
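The Gaussian identity for E(e^{iλ(η+β)²}) used above can be checked directly, both against its polar form and against a Monte Carlo average (λ, β and the sample size are arbitrary choices for this sketch).

```python
import numpy as np
from math import atan

def quad_char(lam, beta):
    """Closed form for E exp(i*lam*(eta+beta)^2), eta ~ N(0,1):
    (1 - 2i*lam)^(-1/2) * exp(i*lam*beta^2/(1 - 2i*lam))."""
    return (1 - 2j * lam) ** (-0.5) * np.exp(1j * lam * beta**2 / (1 - 2j * lam))

def quad_char_polar(lam, beta):
    """Equivalent polar form, phi = arctan(2*lam)/2."""
    phi = 0.5 * atan(2.0 * lam)
    mod = (1.0 + 4.0 * lam**2) ** (-0.25) * np.exp(-2.0 * lam**2 * beta**2
                                                   / (1.0 + 4.0 * lam**2))
    return mod * np.exp(1j * (phi + lam * beta**2 / (1.0 + 4.0 * lam**2)))

rng = np.random.default_rng(2)
eta = rng.standard_normal(10**6)
lam, beta = 0.8, 0.6
mc = np.exp(1j * lam * (eta + beta) ** 2).mean()  # Monte Carlo estimate
```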
Introducing these expressions into (30) and using (28), we obtain a new formula which has the form of a rather complicated integral. However, it is well adapted to numerical evaluation. On the other hand, this formula allows us to compute the equivalent as k → 0 of the expectation of the total number of specular points under the Longuet-Higgins approximation. In fact, a first-order expansion of the terms in the integrand gives a somewhat more accurate result, which we now state as a theorem.
Theorem 3.

E(SP₂(ℝ²)) = m₂/k² + O(1),   (33)

where

m₂ = (2/π) ∫_0^{+∞} (1 − [∏_{j=1}^{3}(1 + 4Δ_j²t²)]^{−1/4} cos(Σ_{j=1}^{3} φ_j(t)))/t² dt
   = (2/π) ∫_0^{+∞} (1 − 2^{−3/2}[∏_{j=1}^{3} √(A_j(1 + A_j))](1 − B₁B₂ − B₂B₃ − B₃B₁))/t² dt,   (34)
where

A_j = A_j(t) = (1 + 4Δ_j²t²)^{−1/2},   B_j = B_j(t) = √((1 − A_j)/(1 + A_j)).

Figure 2. Intensity function of the specular points for the Jonswap spectrum.
Note that m₂ depends only on the eigenvalues Δ₁, Δ₂, Δ₃ and is easily computed numerically. We have performed a numerical computation using a standard sea model with a Jonswap spectrum and spread function cos(2θ). It corresponds to the default parameters of the Jonswap function of the toolbox WAFO [13]. The variance matrix of the gradient and the matrix Σ are, respectively,

10⁻⁴ [114, 0; 0, 81],   Σ = 10⁻⁴ [9, 3, 0; 3, 11, 0; 0, 0, 3].
The integrand in (27) is displayed in Figure 2 as a function of the two space variables x, y. The value of the asymptotic parameter m₂ is 2.527 × 10⁻³.
We now consider the variance of the total number of specular points in two dimensions, looking for results analogous to the one-dimensional case (i.e., Theorem 1), in view of their interest for statistical applications. It turns out that the computations become much more complicated. The statements on the variance and on the speed of convergence to zero of the coefficient of variation that we give below include only the order of the asymptotic behavior in the Longuet-Higgins approximation, but not the constant. However, we still consider them to be useful. If one refines the computations, rough bounds can be given on the generic constants in Theorem 4 on the basis of additional hypotheses on the random field.
We assume that the real-valued, centered, Gaussian stationary random field {W(x) : x ∈ ℝ²} has paths of class C³ and that the distribution of W′(0) does not degenerate (i.e., Var(W′(0)) is invertible). Moreover, let us consider the matrix of second derivatives

W″(0) = [W_xx(0), W_xy(0); W_xy(0), W_yy(0)].
The function

z ↦ Δ(z) = det[Var(W″(0)z)],

defined on z = (z₁, z₂)ᵀ ∈ ℝ², is a non-negative homogeneous polynomial of degree 4 in the pair z₁, z₂. We will assume the non-degeneracy condition

min{Δ(z) : ‖z‖ = 1} > 0.   (35)
Theorem 4. Let us assume that {W(x) : x ∈ ℝ²} satisfies the above conditions and that it is also δ-dependent, δ > 0, that is, E(W(x)W(y)) = 0 whenever ‖x − y‖ > δ. Then, for k small enough,

Var(SP₂(ℝ²)) ≤ L/k²,   (36)

where L is a positive constant depending on the law of the random field.
Moreover, for k small enough, by using the result of Theorem 3 and (36), we get

√(Var(SP₂(ℝ²)))/E(SP₂(ℝ²)) ≤ L₁ k,

where L₁ is a new positive constant.
Proof. To simplify notation, let us denote T = SP₂(ℝ²). We have

Var(T) = E(T(T − 1)) + E(T) − [E(T)]².   (37)

We have already computed the equivalents as k → 0 of the second and third terms on the right-hand side of (37). Our task in what follows is to consider the first term.

The proof is performed along the same lines as the one of Theorem 1 but, instead of applying a Rice formula for the second factorial moment of the number of crossings of a one-parameter random process, we need [3], Theorem 6.3, for the factorial moments of a 2-parameter random field. We have

E(T(T − 1)) = ∫∫_{ℝ²×ℝ²} E(|det Y′(x)||det Y′(y)| | Y(x) = Y(y) = 0) p_{Y(x),Y(y)}(0, 0) dx dy.

4. The distribution of the normal to the level curve

√(λ₂₀₀/λ₀₀₀) e^{−u²/(2λ₀₀₀)},   (40)

where

λ_abc = ∫_{ℝ³} λ_x^a λ_y^b λ_t^c dΛ(λ_x, λ_y, λ_t)

are the spectral moments of W. Hence, on the basis of the quantity (40), for large T, one can make inference about the value of certain parameters of the law of the random field. In this example, these are the spectral moments λ₂₀₀ and λ₀₀₀.
If two-dimensional level information is available, one can work differently, because there exists an interesting relationship with Rice formulae for level curves that we explain in what follows. We can write (x = (x, y))

∇W(x, t) = ‖∇W(x, t)‖ (cos Θ(x, t), sin Θ(x, t)),

√(λ₂₀₀/λ₀₀₀) e^{−u²/(2λ₀₀₀)},   (41)

where Q = [0, M₁] × [0, M₂]. We have a similar formula when we consider sections of the set [0, M₁] × [0, M₂] in the other direction. In fact, (41) can be generalized to obtain the Palm distribution of the angle Θ.
Set h_{θ₁,θ₂} = 𝟙_{[θ₁,θ₂]} and, for θ₁ < θ₂, define

F(θ₂) − F(θ₁) := E(σ₁({x ∈ Q : W(x, 0) = u; θ₁ ≤ Θ(x, s) ≤ θ₂}))
             = E(∫_{C_Q(u,s)} h_{θ₁,θ₂}(Θ(x, s)) dσ₁(x) ds)   (42)
             = σ₂(Q) E[h_{θ₁,θ₂}(arctan(∂_yW/∂_xW)) ((∂_xW)² + (∂_yW)²)^{1/2}] exp(−u²/(2λ₀₀₀))/√(2πλ₀₀₀),

where C_Q(u, s) := {x ∈ Q : W(x, s) = u} denotes the level curve.
Defining Δ = λ₂₀₀λ₀₂₀ − λ₁₁₀² and assuming σ₂(Q) = 1 for ease of notation, we readily obtain

F(θ₂) − F(θ₁) = (e^{−u²/(2λ₀₀₀)}/((2π)^{3/2} Δ^{1/2} √λ₀₀₀)) ∫_{ℝ²} h_{θ₁,θ₂}(θ) √(x² + y²) e^{−(1/(2Δ))(λ₀₂₀x² − 2λ₁₁₀xy + λ₂₀₀y²)} dx dy
             = (e^{−u²/(2λ₀₀₀)}/((2π)^{3/2} (γ₊γ₋)^{1/2} √λ₀₀₀)) ∫_0^{+∞} ∫_{θ₁}^{θ₂} ρ² exp(−(ρ²/(2γ₊γ₋))(γ₋ cos²(θ − ω) + γ₊ sin²(θ − ω))) dρ dθ,

where γ₊ and γ₋ are the eigenvalues of the covariance matrix of the random vector (∂_xW(0, 0, 0), ∂_yW(0, 0, 0)) and ω is the angle of the eigenvector associated with γ₊.
Noting that the exponent in the integrand can be written as −(ρ²/(2γ₊))(1 − η² sin²(θ − ω)) with η² := 1 − γ₊/γ₋, and that

∫_0^{+∞} ρ² exp(−Hρ²/2) dρ = (1/H)√(π/(2H)),

it is easy to obtain that

F(θ₂) − F(θ₁) = (const) ∫_{θ₁}^{θ₂} (1 − η² sin²(θ − ω))^{−3/2} dθ.
From this relation, we get the density g(θ) of the Palm distribution simply by dividing by the total mass:

g(θ) = (1 − η² sin²(θ − ω))^{−3/2} / ∫_{−π}^{π} (1 − η² sin²(θ − ω))^{−3/2} dθ = (1 − η² sin²(θ − ω))^{−3/2} / (4E(η)/(1 − η²)).   (43)

Here, E is the complete elliptic integral of the second kind. This density characterizes the distribution of the angle of the normal at a point chosen at random on the level curve. In the case of a random field which is isotropic in (x, y), we have λ₂₀₀ = λ₀₂₀ and, moreover, λ₁₁₀ = 0, so that g turns out to be the uniform density over the circle (Longuet-Higgins states that over the contour, the distribution of the angle is uniform (cf. [11], page 348)). We have performed the numerical computation of the density (43) for an anisotropic process with η = 0.5, ω = π/4. Figure 3 displays the density of the Palm distribution of the angle, showing a large departure from the uniform distribution.
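The normalization in (43) uses ∫₀^{2π}(1 − η² sin²θ)^{−3/2} dθ = 4E(η)/(1 − η²), with E the complete elliptic integral of the second kind; this can be confirmed by quadrature (note that SciPy's ellipe takes the parameter m = η², and η = 0.5 below is an arbitrary choice).

```python
import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

def total_mass(eta):
    """Numerical value of Int_0^{2pi} (1 - eta^2 sin^2 t)^(-3/2) dt."""
    val, _ = quad(lambda t: (1.0 - eta**2 * np.sin(t)**2) ** (-1.5),
                  0.0, 2.0 * np.pi)
    return val

eta = 0.5
closed_form = 4.0 * ellipe(eta**2) / (1.0 - eta**2)
```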
Figure 3. Density of the Palm distribution of the angle of the normal to the level curve in the case η = 0.5 and ω = π/4.

Let us turn to ergodicity. For a given subset Q of ℝ² and each t, let us define the σ-algebra 𝒜_t = σ{W(x, y, s) : s > t; (x, y) ∈ Q} and consider the σ-algebra of t-invariant events 𝒜 = ∩_t 𝒜_t. We assume that, for each pair (x, y), Γ(x, y, t) → 0 as t → +∞. It is well known that, under this condition, the σ-algebra 𝒜 is trivial, that is, it only contains events having probability zero or one (see, e.g., [6], Chapter 7). This has the following important consequence in our context. Assume that the set Q has a smooth boundary and, for simplicity, unit Lebesgue measure. Let us consider

Z(t) = ∫_{C_Q(u,t)} H(x, t) dσ₁(x)   (44)

with H(x, t) = ℋ(W(x, t), ∇W(x, t)), where ∇W = (W_x, W_y) denotes the gradient in the space variables and ℋ is some measurable function such that the integral is well defined. This is exactly our case in (42). The process {Z(t) : t ∈ ℝ} is strictly stationary and, in our case, has a finite mean and is Riemann-integrable. By the Birkhoff–Khintchine ergodic theorem ([6], page 151), a.s. as T → +∞,

(1/T) ∫_0^T Z(s) ds → E_ℬ[Z(0)],

where ℬ is the σ-algebra of t-invariant events associated with the process Z(t). Since, for each t, Z(t) is 𝒜_t-measurable, it follows that ℬ ⊂ 𝒜, so that E_ℬ[Z(0)] = E[Z(0)]. On the other hand, the Rice formula yields (taking into account the fact that stationarity of 𝒲 implies that W(0, 0) and ∇W(0, 0) are independent)

E[Z(0)] = E[ℋ(u, ∇W(0, 0))‖∇W(0, 0)‖] p_{W(0,0)}(u).
We consider now the central limit theorem. Let us define

𝒵(t) = (1/t) ∫_0^t [Z(s) − E(Z(0))] ds.   (45)

To compute the variance of 𝒵(t), one can again use the Rice formula for the first moment of integrals over level sets, this time applied to the ℝ²-valued random field with parameter in ℝ⁴, {(W(x₁, s₁), W(x₂, s₂))ᵀ : (x₁, x₂) ∈ Q × Q, s₁, s₂ ∈ [0, t]}, at the level (u, u). We get

Var(𝒵(t)) = (2/t) ∫_0^t (1 − s/t) I(u, s) ds,
where

I(u, s) = ∫_{Q²} E[H(x₁, 0)H(x₂, s)‖∇W(x₁, 0)‖‖∇W(x₂, s)‖ | W(x₁, 0) = u; W(x₂, s) = u] p_{W(x₁,0),W(x₂,s)}(u, u) dx₁ dx₂ − (E[ℋ(u, ∇W(0, 0))‖∇W(0, 0)‖] p_{W(0,0)}(u))².

Assuming that the given random field is time-δ-dependent, that is, Γ(x, y, t) = 0 for all (x, y) whenever t > δ, we readily obtain

t Var(𝒵(t)) → 2 ∫_0^δ I(u, s) ds := σ²(u) as t → ∞.   (46)

Now, using a variant of the Hoeffding–Robbins theorem [8] for sums of δ-dependent random variables, we can establish the following theorem.
Theorem 5. Assume that the random field W and the function H satisfy the conditions of [3], Theorem 6.10. Assume, for simplicity, that Q has unit Lebesgue measure. Then:

(i) if the covariance Γ(x, y, t) tends to zero as t → +∞ for every value of (x, y) ∈ Q, we have, almost surely,

(1/T) ∫_0^T Z(s) ds → E[ℋ(u, ∇W(0, 0))‖∇W(0, 0)‖] p_{W(0,0)}(u),

where Z(t) is defined by (44);
(ii) if the random field W is δ-dependent in the sense above, then

√t 𝒵(t) ⇒ N(0, σ²(u)),

where 𝒵(t) is defined by (45) and σ²(u) by (46).
5. Application to dislocations of wavefronts

In this section, we follow the article [4] by Berry and Dennis. Dislocations are lines in space or points in the plane where the phase χ of the complex scalar wave ψ(x, t) = ρ(x, t)e^{iχ(x,t)} is undefined. With respect to light, they are lines of darkness; with respect to sound, threads of silence. Here, we only consider two-dimensional space variables x = (x₁, x₂).

It is convenient to express ψ by means of its real and imaginary parts:

ψ(x, t) = ξ(x, t) + iη(x, t).

Thus, dislocations are the intersection of the surfaces ξ(x, t) = 0 and η(x, t) = 0.

Let us quote the authors of [4]: "Interest in optical dislocations has recently revived, largely as a result of experiments with laser fields." In low-temperature physics, ψ(x, t) could represent the complex order parameter associated with quantum flux lines in a superconductor or quantized vortices in a superfluid (cf. [4] and the references therein).
In what follows, we assume an isotropic Gaussian model. This means that we will consider the wavefront as an isotropic Gaussian field

ψ(x, t) = ∫_{ℝ²} exp(i[⟨k, x⟩ − c|k|t]) (Π(|k|)/|k|)^{1/2} dW(k),

where k = (k₁, k₂), |k| = √(k₁² + k₂²), Π(k) is the isotropic spectral density and W = (W₁ + iW₂) is a standard complex orthogonal Gaussian measure on ℝ² with unit variance. We are only interested in t = 0 and we put ξ(x) := ξ(x, 0) and η(x) := η(x, 0). We have, setting k = |k|,

ξ(x) = ∫_{ℝ²} cos(⟨k, x⟩) (Π(k)/k)^{1/2} dW₁(k) − ∫_{ℝ²} sin(⟨k, x⟩) (Π(k)/k)^{1/2} dW₂(k),   (47)

η(x) = ∫_{ℝ²} cos(⟨k, x⟩) (Π(k)/k)^{1/2} dW₂(k) + ∫_{ℝ²} sin(⟨k, x⟩) (Π(k)/k)^{1/2} dW₁(k).   (48)

The covariances are

E[ξ(x)ξ(x′)] = E[η(x)η(x′)] = ρ(|x − x′|) := ∫_0^∞ J₀(k|x − x′|) Π(k) dk,   (49)

where J_ν(x) is the Bessel function of the first kind of order ν. Moreover, E[ξ(r₁)η(r₂)] = 0 for any pair of points r₁, r₂.
5.1. Mean number of dislocation points

Let us denote by {Z(x) : x ∈ ℝ²} a random field having values in ℝ², with coordinates ξ(x), η(x), which are two independent Gaussian stationary isotropic random fields with the same distribution. We are interested in the expectation of the number of dislocation points

d₂ := E[#{x ∈ S : ξ(x) = η(x) = 0}],

where S is a subset of the parameter space having area equal to 1.

Without loss of generality, we may assume that Var(ξ(x)) = Var(η(x)) = 1 and, for the derivatives, we set λ₂ = Var(∂ᵢξ(x)) = Var(∂ᵢη(x)), i = 1, 2. Then, according to the Rice formula,

d₂ = E[|det(Z′(x))| | Z(x) = 0] p_{Z(x)}(0).

An easy Gaussian computation gives d₂ = λ₂/(2π) ([4], formula (4.6)).
5.2. Variance

Again, let S be a measurable subset of ℝ² having Lebesgue measure equal to 1. We have

Var(N_S^Z(0)) = E(N_S^Z(0)(N_S^Z(0) − 1)) + d₂ − d₂²

and, for the first term, we use the Rice formula for the second factorial moment ([3], Theorem 6.3), that is,

E(N_S^Z(0)(N_S^Z(0) − 1)) = ∫_{S²} A(s₁, s₂) ds₁ ds₂,

where

A(s₁, s₂) = E[|det Z′(s₁) det Z′(s₂)| | Z(s₁) = Z(s₂) = 0₂] p_{Z(s₁),Z(s₂)}(0₄).

Here, 0_p denotes the null vector in dimension p.
Taking into account the fact that the law of the random field Z is invariant under translations and orthogonal transformations of ℝ², we have

A(s₁, s₂) = A((0, 0), (r, 0)) =: A(r) with r = ‖s₁ − s₂‖.

The Rice function A(r) has two intuitive interpretations. First, it can be viewed as

A(r) = lim_{ε→0} (1/(πε²)²) E[N(B((0, 0), ε)) · N(B((r, 0), ε))].

Second, it is the density of the Palm distribution, a generalization of the horizontal window conditioning of the number of zeros of Z per unit surface, locally around the point (r, 0), conditionally on the existence of a zero at (0, 0) (see [6]). In [4], A(r)/d₂² is called the correlation function.
To compute A(r), we denote by ξ₁, ξ₂, η₁, η₂ the partial derivatives of ξ, η with respect to the first and second coordinates. Therefore,

A(r) = E[|(ξ₁η₂ − ξ₂η₁)(0, 0) (ξ₁η₂ − ξ₂η₁)(r, 0)| | Z(0, 0) = Z(r, 0) = 0₂] p_{Z(0,0),Z(r,0)}(0₄).   (50)

The density is easy to compute:

p_{Z(0,0),Z(r,0)}(0₄) = 1/((2π)²(1 − ρ²(r))),   where ρ(r) = ∫_0^∞ J₀(kr) Π(k) dk.

The conditional expectation turns out to be more difficult to calculate, requiring a long computation (we again refer to [2] for the details). We obtain the following formula (which can be easily compared to the formula in [4], since we are using the same notation):

A(r) = (A₁/(4π³(1 − C²))) ∫_{−1}^{1} (√(1 − t²) (1 + t²)(Z₂ − 2Z₁²t²))/(Z₂ √(Z₂ − Z₁²t²)) dt,
where we have defined

C := ρ(r),   E := ρ′(r),   H := E/r,   F := −ρ″(r),   F₀ := −ρ″(0),

A₁ = F₀ (F₀ − E²/(1 − C²)),   A₂ = (H/F₀) (F(1 − C²) − E²C)/(F₀(1 − C²) − E²),

Z = ((F₀² − H²)/F₀²) [1 − (F − E²C/(1 − C²))²/(F₀ − E²/(1 − C²))²],

Z₁ = A₂/(1 + Zt²),   Z₂ = (1 + t²)/(1 + Zt²).
Acknowledgement

This work has received financial support from the European Marie Curie Network SEAMOCS.
References

[1] Azaïs, J.-M., León, J. and Ortega, J. (2005). Geometrical characteristic of Gaussian sea waves. J. Appl. Probab. 42 1–19. MR2145485
[2] Azaïs, J.-M., León, J. and Wschebor, M. (2009). Some applications of Rice formulas to waves. Available at arXiv:0910.0763v1 [math.PR].
[3] Azaïs, J.-M. and Wschebor, M. (2009). Level Sets and Extrema of Random Processes and Fields. Hoboken, NJ: Wiley. MR2478201
[4] Berry, M.V. and Dennis, M.R. (2000). Phase singularities in isotropic random waves. Proc. R. Soc. Lond. Ser. A 456 2059–2079. MR1794716
[5] Cabaña, E. (1985). Esperanzas de integrales sobre conjuntos de nivel aleatorios. In Actas del 2º Congreso Latinoamericano de Probabilidad y Estadística Matemática (Spanish) 65–82. Caracas, Venezuela: Regional Latinoamericana de la Soc. Bernoulli.
[6] Cramér, H. and Leadbetter, M.R. (1967). Stationary and Related Stochastic Processes. New York: Wiley. MR0217860
[7] Cuzick, J.A. (1976). Central limit theorem for the number of zeros of a stationary Gaussian process. Ann. Probab. 4 547–556. MR0420809
[8] Hoeffding, W. and Robbins, H. (1948). The central limit theorem for dependent random variables. Duke Math. J. 15 773–780. MR0026771
[9] Kratz, M. and León, J.R. (2009). Level curves crossings and applications for Gaussian models. Extremes. DOI: 10.1007/s10687-009-0090-x.
[10] Longuet-Higgins, M.S. (1960). Reflection and refraction at a random surface, I, II, III. J. Optical Soc. Amer. 50 838–856. MR0113489
[11] Longuet-Higgins, M.S. (1962). The statistical geometry of random surfaces. In Proc. Symp. Appl. Math. Vol. XIII 105–143. Providence, RI: Amer. Math. Soc. MR0140175
[12] Piterbarg, V. and Rychlik, I. (1999). Central limit theorem for wave functionals of Gaussian processes. Adv. in Appl. Probab. 31 158–177. MR1699666
[13] WAFO-group (2000). WAFO, a Matlab Toolbox for Analysis of Random Waves and Loads. Lund Univ., Sweden: Math. Stat. Center. Available at http://www.maths.lth.se/matstat/wafo.
[14] Wschebor, M. (1985). Surfaces Aléatoires. Lecture Notes in Math. 1147. Berlin: Springer. MR0871689
Received April 2009 and revised January 2010