Saddlepoint approximations are powerful tools for obtaining accurate expressions for densities and distribution functions. We give an elementary motivation and explanation of approximation techniques, starting with Taylor series expansions and progressing to the Laplace approximation of integrals. These approximations are illustrated with examples of the convolution of simple densities. We then turn to the saddlepoint approximation and, using both the Fourier inversion formula and Edgeworth expansions, we derive the saddlepoint approximation to the density of a single random variable. We next approximate the density of the sample mean of iid random variables, and also demonstrate the technique for approximating the density of a maximum likelihood estimator in exponential families.

KEY WORDS: Edgeworth expansions; Fourier transform; Laplace method; Maximum likelihood estimators; Moment-generating functions; Taylor series.

1. INTRODUCTION

The saddlepoint approximation has been a valuable tool in asymptotic analysis. Various techniques of accurate approximations, relying on it in one way or another, have been developed since the seminal article by Daniels (1954). Reid (1988, 1991) gave a comprehensive review of the applications and a broad coverage of the relevant literature.

The number of applications of the saddlepoint approximation is quite impressive, as befits this extremely powerful approximation (see Section 5 for a partial list). Typically, derivations and implementations of saddlepoint approximations rely on tools such as exponential tilting, Edgeworth expansions, Hermite polynomials, complex integration, and other advanced notions. Although these are important tools for researchers in the area, they may obscure the fundamental idea of the saddlepoint approximation. A goal of this article is to illustrate that there is a simple basic idea behind this useful technique. Namely, write the quantity one wishes to approximate as an integral, expand the integrand with respect to the dummy variable of integration, keep the first few terms, and integrate. The integral can be over the complex plane, corresponding to the inversion formula of a Fourier transform, but this is a secondary point.

We start with an elementary motivation of the technique, stressing familiar Taylor series expansions. At the beginning we will somewhat ignore the statistical applications, because saddlepoint approximations are general techniques, and quite often references to random variables and distributions may be more confusing than illuminating. Once the approximation is developed, however, we will examine some statistical applications. Throughout the article, we assume that the functions are as regular as needed; in other words, when we write a derivative or an integral, we assume that it exists. Furthermore, we develop the methods in the univariate case. This is almost without loss of generality, as the multivariate case is essentially the same but with somewhat more complicated notation.

To keep the technical level reasonable, we avoid any rigorous asymptotic analysis, but a few remarks are in order. The accuracy of an approximation is assessed by examining the size of the error of approximation. We use the notation $O(n^{-1})$, which denotes a function satisfying $\lim_{n\to\infty} n\,O(n^{-1}) = \text{constant}$. For a random sample of size $n$, standard techniques typically give approximations of order $O(n^{-1/2})$; the saddlepoint can improve this to $O(n^{-1})$, and even $O(n^{-3/2})$ in some circumstances. Unfortunately, calculation of these error terms often requires detailed technical arguments, and we do not include them here (see Field and Ronchetti 1990 or Kolassa 1994). This omission does not speak to the importance of such calculations. Indeed, in application, the size of the approximation error is perhaps the most important concern, and this error must be assessed either through analytical or numerical means.

In Section 2 we introduce approximation techniques from the point of view of Taylor series and Laplace approximations, and give some examples. Section 3 is an attempt to explain the original derivation of the saddlepoint, which has its roots in Fourier transforms and complex analysis, along with another derivation based on Edgeworth expansions. Those unwilling to wade through these derivations need only look at formula (25), which gives the density approximation for a sample mean. Section 4 treats the case of the MLE in exponential families, with the important formulas being (34) and (36). Section 5 contains a short discussion.
George Casella is Liberty Hyde Bailey Professor of Biological Statistics, Department of Biometrics, Cornell University, 434 Warren Hall, Ithaca, NY 14853. The authors thank Luis Tenorio for useful discussions, and the reviewers for providing detailed comments on earlier versions of this article, which resulted in a much improved presentation. This research was supported by NSF Grants DMS 9305547 and DMS 9625440, and this is paper BU-1311-M in the Biometrics Unit, Cornell University, Ithaca, NY 14853. The original version of this article was written in December 1995, before the tragic death of Costas Goutis in July 1996.

The American Statistician, August 1999, Vol. 53, No. 3. © 1999 American Statistical Association

2. FIRST EXPANSIONS

We begin by looking at some basic principles of approximation, using the familiar tool of the Taylor expansion. As we will see, the underlying strategy of this approximation carries through to more sophisticated approximations. Note, however, that the approximations in this section can have large errors. As we are working with densities of single
random variables rather than means, the order of the error can be expected to be about $O(1)$.

2.1 From Taylor to Laplace

Perhaps the simplest way to approximate a positive function $f(x)$ is to use the first few terms of its Taylor series expansion. We will use that idea, but not for $f(x)$ itself but rather for $h(x) = \log f(x)$. Writing $f(x) = \exp h(x)$ and choosing $x_0$ as the point to expand around, we obtain
$$f(x) \approx \exp\left\{ h(x_0) + h'(x_0)(x - x_0) + \tfrac{1}{2} h''(x_0)(x - x_0)^2 \right\}.$$
Suppose now that $f(x)$ can be written as an integral,
$$f(x) = \int m(x,t)\, dt, \qquad (5)$$
for some positive $m(x,t)$. This is always possible by considering, for example, $m(x,t) = f(x)\, m_0(t)$, where $m_0(t)$ is a function integrating to one, but the latter representation is not particularly useful or illuminating. (We will later see a number of cases where the representation (5) arises fairly naturally.)

By defining $k(x,t) = \log m(x,t)$, we consider the Laplace approximation of the integral of $\exp k(x,t)$ with respect to the variable $t$. For any fixed $x$, let $\hat t(x)$ be the point satisfying $\partial k(x,t)/\partial t = 0$, the maximum of $k(x,\cdot)$. Expanding around $\hat t(x)$, we write
$$f(x) \approx \int \exp\left\{ k(x, \hat t(x)) + \frac{1}{2}\big(t - \hat t(x)\big)^2 \left.\frac{\partial^2 k(x,t)}{\partial t^2}\right|_{t = \hat t(x)} \right\} dt = \exp\{k(x, \hat t(x))\} \left( \frac{2\pi}{-\partial^2 k(x,t)/\partial t^2\,|_{t=\hat t(x)}} \right)^{1/2}. \qquad (8)$$
Recalling that $f(x)$ is actually a density function, we can renormalize the approximation by calculating the constant of the right side of (8) so that $\int f(x)\, dx = 1$. Doing this (for the $\Gamma(2\alpha,1)$ density of Example 1), we obtain
$$f(x) \approx \left[ 2\pi(2\alpha - 1) \right]^{-1/2} \exp\left\{ -\frac{(x - (2\alpha - 1))^2}{2(2\alpha - 1)} \right\}, \qquad (9)$$
a normal density with mean and variance both equal to $2\alpha - 1$.

We can probably write $f(x)$ in the form (5) in several ways, but a simple one is motivated from elementary distribution theory. We know that the sum of two independent $\Gamma(\alpha,1)$ random variables is a $\Gamma(2\alpha,1)$ random variable, so if $g(\cdot)$ is the $\Gamma(\alpha,1)$ density then $f(x)$ is a convolution of the form
$$f_X(x) = \int g(x - y)\, g(y)\, dy. \qquad (10)$$

It is interesting to note that the convolution formula (10) can be solved for the maximum in some generality. If we write
$$k(x,y) = \log[g(x-y)g(y)] = \log[g(x-y)] + \log[g(y)],$$
then under mild regularity conditions $\partial k(x,y)/\partial y$ has a zero at $y = x/2$, and we can apply the convolution formula somewhat straightforwardly to other densities.

Example 2: Student's $t$ distribution. Let $X$ be a random variable with a Student's $t$ distribution with $\nu$ degrees of freedom ($X \sim t_\nu$). Then $X$ has density
$$f(x) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu\pi}\,\Gamma(\nu/2)} \left( 1 + \frac{x^2}{\nu} \right)^{-(\nu+1)/2}.$$
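As a numerical check of the Laplace approximation (8) applied to the gamma convolution (10), the following sketch (the function names and the test point are ours, not the paper's) computes the approximation with a numerically differentiated $k(x,y)$ and compares it with the exact $\Gamma(2\alpha,1)$ density and with the renormalized normal approximation (9):

```python
import math

def gamma_pdf(x, a):
    # Gamma(a, 1) density
    return x ** (a - 1) * math.exp(-x) / math.gamma(a)

def laplace_convolution(x, a, h=1e-5):
    # Laplace approximation (8) to the convolution f(x) = int g(x-y) g(y) dy,
    # g the Gamma(a, 1) density; k(x, y) = log[g(x-y) g(y)] peaks at y = x/2.
    def k(y):
        return math.log(gamma_pdf(x - y, a)) + math.log(gamma_pdf(y, a))
    y_hat = x / 2.0
    # numerical second derivative of k at its maximum (negative there)
    k_yy = (k(y_hat + h) - 2.0 * k(y_hat) + k(y_hat - h)) / h ** 2
    return math.exp(k(y_hat)) * math.sqrt(2.0 * math.pi / (-k_yy))

a, x = 2.0, 3.0
exact = gamma_pdf(x, 2 * a)          # true Gamma(2a, 1) density at x
approx = laplace_convolution(x, a)   # raw Laplace approximation
# the renormalized version (9): a N(2a - 1, 2a - 1) density
normal9 = math.exp(-(x - (2 * a - 1)) ** 2 / (2 * (2 * a - 1))) \
    / math.sqrt(2 * math.pi * (2 * a - 1))
```

At $x = 3$ with $\alpha = 2$, the raw approximation overshoots the exact density by roughly 30%, while the renormalized normal (9) is within about 3%, illustrating why renormalization matters in this section.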
Formula (14) can also be applied to integrated likelihoods, a desirable enterprise in the presence of nuisance parameters. Given a likelihood function $L(\theta, \lambda \mid x)$, where $\theta$ is the parameter of interest, $\lambda$ is a nuisance parameter, and $x$ is the data, an integrated likelihood $L_I$ for $\theta$ is obtained by integrating out $\lambda$. This leads to the approximation
$$L_I(\theta \mid x) = \int L(\theta, \lambda \mid x)\, d\lambda \approx \sqrt{2\pi}\, L(\theta, \hat\lambda_\theta \mid x) \left( -\left.\frac{\partial^2 \log L(\theta, \lambda \mid x)}{\partial \lambda^2}\right|_{\lambda = \hat\lambda_\theta} \right)^{-1/2}. \qquad (15)$$
This approximation is the Cox and Reid (1987) approximate conditional likelihood, as first noted by Sweeting (1987), and can be considered a version of the modified profile likelihood of Barndorff-Nielsen (1983). The factor that is "adjusting" the likelihood is the observed Fisher information. See Barndorff-Nielsen (1988) for an alternate derivation and further discussion.

3. THE REAL THING

The examples in the previous section are, of course, rather artificial. The function $f(x)$ has a closed form and there is no need to use any approximations. Indeed, we do not gain anything, as the approximating functions are as complicated as the original one. As one might expect, this is not always the case. In this section we look at a number of important statistical problems where saddlepoint approximations are useful.

The first statistical application of the saddlepoint approximation was derived by Daniels (1954). He approached the problem of finding a density approximation through the inversion of a Fourier transform. Such an approach has the advantage of automatically providing a function $m(\cdot)$ satisfying (5), but also carries the disadvantage of making us deal with complex integration. We begin with some details about the inversion formula.

3.1 The Inversion Formula

We recall that for a density $f(x)$, the moment generating function (mgf) is defined as
$$\phi_X(t) = \int_{-\infty}^{+\infty} \exp(tx)\, f(x)\, dx, \qquad (16)$$
and the density can be recovered from $\phi_X$ through the inversion formula
$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \phi_X(it) \exp(-itx)\, dt. \qquad (17)$$
[See Waller, Turnbull, and Hardin (1995) for examples of obtaining distribution functions by numerical inversion of (17).] The function $K_X(t) = \log \phi_X(t)$ is also called the cumulant generating function (cgf) of a random variable $X$. (Mathematically, the cgf and mgf are equivalent. The fact that the cgf generates the mean and variance, instead of the uncentered moments, is statistically more appealing. We can think of $K''$ as a variance.) However, we do not have to think of a random variable at all, and (17) is applicable even if $f(x)$ is negative or does not integrate to one. There is a complication in that we have to deal with complex rather than real numbers, but (17) is similar to (5), and this suggests that we can use the same ideas as in the previous section.

We first make a change of variable $t' = it$. We can then write (17) as
$$f(x) = \frac{1}{2\pi i} \int_{\tau - i\infty}^{\tau + i\infty} \exp\{K_X(t) - tx\}\, dt \qquad (18)$$
for $\tau$ in a neighborhood of zero. It is a consequence of a theorem from complex analysis (the closed curve theorem) that the integral (18) is the same over all paths that are parallel to the imaginary axis in a neighborhood of zero where $\phi_X(t)$ exists. Thus, we are free to choose a value for $\tau$ over which to do the integration.

We take $k(x,t) = K_X(t) - tx$ and, as in (6), we find the point $\hat t(x)$ that satisfies
$$K_X'(\hat t(x)) = x. \qquad (19)$$
Expanding the exponent in (18) around $\hat t(x)$, we have
$$K_X(t) - tx \approx K_X(\hat t(x)) - \hat t(x)\, x + \frac{(t - \hat t(x))^2}{2} K_X''(\hat t(x)). \qquad (20)$$
We now substitute in (18) and integrate with respect to $t$ along the line parallel to the imaginary axis through the point $\hat t(x)$; that is, we choose the point $\tau$ in the limits of the integral to be $\hat t(x)$. To treat this maneuver with rigor requires great care (as in Daniels 1954 or Field and Ronchetti 1990, chap. 3), but if we proceed informally (as if it were a real integral) we again see that there is the kernel of a normal density. Similar to (6), we obtain
$$f(x) \approx \left( \frac{1}{2\pi K_X''(\hat t(x))} \right)^{1/2} \exp\{ K_X(\hat t(x)) - \hat t(x)\, x \}. \qquad (21)$$
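To see formula (21) in action, here is a small sketch (our own illustration, not from the paper) applying it to the exponential(1) density, whose cgf $K(t) = -\log(1-t)$ gives the saddlepoint in closed form:

```python
import math

# cgf of an exponential(1) random variable: K(t) = -log(1 - t), t < 1
def K(t):
    return -math.log(1.0 - t)

def Kpp(t):
    return 1.0 / (1.0 - t) ** 2      # K''(t)

def saddlepoint_density(x):
    # formula (21); the saddlepoint equation K'(t) = 1/(1 - t) = x
    # solves in closed form as t_hat = 1 - 1/x
    t_hat = 1.0 - 1.0 / x
    return math.exp(K(t_hat) - t_hat * x) / math.sqrt(2.0 * math.pi * Kpp(t_hat))

# compare with the exact density e^{-x}: the ratio is constant in x
ratios = [saddlepoint_density(x) / math.exp(-x) for x in (0.5, 1.0, 2.0, 4.0)]
```

The ratio equals $e/\sqrt{2\pi} \approx 1.084$ at every $x$, so renormalizing (21) reproduces the exponential density exactly, an instance of the exactness in the gamma family noted later (Daniels 1980).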
The method takes advantage of the fact that, since $\hat t(x)$ is an extreme point, the function is falling away rapidly as we move from this point. Thus, the influence on the integral of neighboring points is diminished, making the approximation (21) seem more reasonable. (See Field and Ronchetti 1990, sec. 3.2, or Kolassa 1994, sec. 4.4, for details.)

Expression (21) is what is commonly thought of as the saddlepoint approximation to a density. Its error of approximation is much better than that of the Taylor series approximation to a function. In classical, or "first-order," asymptotics, the error terms usually decrease at the rate $n^{-1/2}$ for a sample of size $n$. The saddlepoint is "second-order" asymptotics, and can have error terms that decrease as fast as $n^{-3/2}$, which yields a big improvement in accuracy for small samples. We return to this point in Section 3.2.

Example 3: Noncentral chi-squared. Hougaard (1988) presented an interesting application of the saddlepoint approximation, of which the following is a special case. The noncentral chi-squared density has no closed form, and is usually written
$$f(x) = \sum_{k=0}^{\infty} \frac{x^{p/2+k-1} e^{-x/2}}{\Gamma(p/2+k)\, 2^{p/2+k}} \frac{\lambda^k e^{-\lambda}}{k!}, \qquad (22)$$
where $p$ is the degrees of freedom and $\lambda$ is the noncentrality parameter. The density is an infinite mixture of central chi-squared densities, where the weights are Poisson probabilities. It turns out that calculation of the moment-generating function is simple, and it can be expressed in closed form as
$$\phi_X(t) = \frac{e^{2\lambda t/(1-2t)}}{(1-2t)^{p/2}}. \qquad (23)$$
Solving the saddlepoint equation $\partial \log \phi_X(t)/\partial t = x$ yields the saddlepoint
$$\hat t(x) = \frac{2x - p - \sqrt{p^2 + 8\lambda x}}{4x}, \qquad (24)$$
and applying (21) yields the approximate density. For $p = 7$ and $\lambda = 5$, Figure 2 displays the saddlepoint, renormalized saddlepoint, and exact densities. Here, the saddlepoint and the renormalized saddlepoint are remarkably accurate.

[Figure 2: the exact, saddlepoint, and renormalized saddlepoint densities.]

3.1.1 Saddlepoints for Sums

A useful application of the saddlepoint is the approximation of the density of a sum or average. Perhaps the simplest nontrivial example, which is also the oldest one (Daniels 1954), is the derivation of the density of a sample mean of independent and identically distributed random variables. The key here is that the moment generating function of a sum of iid random variables can be easily computed from the original moment generating function, and (21) can be directly applied.

Consider $\bar X$ to be the sample mean of $X_1, X_2, \ldots, X_n$, iid random variables. Each $X_i$ has a moment generating function $\phi_X(t)$ and a cumulant generating function $K_X(t)$. An elementary statistical argument shows that the moment generating function of $\bar X$ is $\phi_{\bar X}(t) = \phi_X(t/n)^n$, and the cumulant generating function is $K_{\bar X}(t) = n K_X(t/n)$. A direct application of (21) then gives
$$f_{\bar X}(\bar x) \approx \left( \frac{n}{2\pi K_X''(\hat t(\bar x))} \right)^{1/2} \exp\{ n[ K_X(\hat t(\bar x)) - \hat t(\bar x)\, \bar x ] \}, \qquad (25)$$
where $\hat t(\bar x)$ satisfies $K_X'(\hat t(\bar x)) = \bar x$. The right side of expression (25) is the saddlepoint approximation to the density of $\bar X$. Of course, there are several loose ends in the derivation which should be formalized in order to be legitimate. Daniels (1954, 1987) presented all the details. Note the fact that we are dealing with densities, and random variables enter only in the derivation of the cumulant generating function $K_{\bar X}(t)$ from the cumulants of the individual random variables. We can also appeal to it to renormalize $f_{\bar X}(\bar x)$ so that it integrates to 1, which amounts to adjusting the constant $(n/(2\pi))^{1/2}$. This typically requires numerical integration.

The saddlepoint approximation can also be used on discrete distributions, as the next example shows.
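Formula (25) can also be checked numerically. The sketch below (our illustration, not one of the paper's examples) takes the $X_i$ iid exponential(1), for which the exact density of the mean is Gamma($n$, scale $1/n$):

```python
import math

n = 5

def K(t):                       # cgf of one exponential(1) variable
    return -math.log(1.0 - t)

def Kpp(t):
    return 1.0 / (1.0 - t) ** 2

def saddle_mean(xbar):
    # formula (25); the saddlepoint K'(t_hat) = xbar gives t_hat = 1 - 1/xbar
    t_hat = 1.0 - 1.0 / xbar
    return math.sqrt(n / (2.0 * math.pi * Kpp(t_hat))) * \
        math.exp(n * (K(t_hat) - t_hat * xbar))

def exact_mean(xbar):
    # the mean of n iid exponential(1) variables is Gamma(n, scale 1/n)
    return n ** n * xbar ** (n - 1) * math.exp(-n * xbar) / math.gamma(n)

ratios = [saddle_mean(x) / exact_mean(x) for x in (0.5, 1.0, 2.0)]
```

The ratio is the same constant (about 1.017 for $n = 5$) at every point; it is exactly the Stirling-formula error in $\Gamma(n)$, so the renormalized version of (25) is exact here, in line with the gamma case of Daniels (1980).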
It is interesting to compare the approximation (25) with the simpler one obtained by expanding the integrand in the inversion formula around 0 rather than around its maximum. If we keep the first two terms, we obtain the approximation to the density of $\bar X$ to be
$$f_{\bar X}(\bar x) \approx \left( \frac{n}{2\pi\sigma^2} \right)^{1/2} \exp\left\{ -\frac{n(\bar x - \mu)^2}{2\sigma^2} \right\},$$
the usual normal approximation to the density of the sample mean.
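To make the comparison concrete (a numerical illustration of ours, again with exponential(1) variables and $n = 5$), evaluate the exact mean density, the saddlepoint (25), and the normal approximation at a point in the right tail:

```python
import math

n = 5

def exact(xbar):
    # mean of n iid exponential(1) variables: Gamma(n, scale 1/n) density
    return n ** n * xbar ** (n - 1) * math.exp(-n * xbar) / math.gamma(n)

def saddlepoint(xbar):
    # formula (25) with K(t) = -log(1 - t); t_hat = 1 - 1/xbar
    t_hat = 1.0 - 1.0 / xbar
    Kt, Kpp = -math.log(1.0 - t_hat), 1.0 / (1.0 - t_hat) ** 2
    return math.sqrt(n / (2.0 * math.pi * Kpp)) * math.exp(n * (Kt - t_hat * xbar))

def normal(xbar):
    # expansion around t = 0 keeps only the mean (1) and variance (1)
    return math.sqrt(n / (2.0 * math.pi)) * math.exp(-n * (xbar - 1.0) ** 2 / 2.0)

x = 2.0   # a point well into the right tail of the mean's distribution
err_saddle = abs(saddlepoint(x) / exact(x) - 1.0)   # relative error, about 1.7%
err_normal = abs(normal(x) / exact(x) - 1.0)        # relative error, about 23%
```

The saddlepoint's relative error stays below 2% out in the tail, while the normal approximation is off by more than 20%, which is the behavior the skewness-term argument above predicts.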
The saddlepoint improves on the usual normal approximation by eliminating the skewness term in the asymptotic expansion of the density. And this is accomplished by using a new approximating density for each value of $x$. So this is similar in flavor to the derivation in Section 2.1.

We have only shown that the order of the approximation is $O(n^{-1})$, not the $O(n^{-3/2})$ that is often claimed. This better error rate is obtained by renormalizing (30) so that it integrates to one. Details of this are contained in Daniels's original 1954 paper, Field and Ronchetti (1990, chap. 3), or Kolassa (1994, chap. 4). We omit them here, but note that such renormalization may be quite computer intensive; see, for example, the discussion in Section 3.3 of Field and Ronchetti (1990).

Last, there is an interesting analogy between the exponential tilting derivation and the inversion formula derivation of the saddlepoint approximation. Here, by using a family of densities, for any value $x$ we were able to select the member of the family with the value of $\mu$ in (28) that zeroed out a term in the expansion. In the inversion formula derivation of Section 3.1, the complex integration resulted in a family of paths over which we could integrate. There, we chose the path that also zeroed out a term in the expansion (20). Thus, either derivation results in us having an extra parameter that we can control to help produce a good approximation.

4. MAXIMUM LIKELIHOOD ESTIMATION

…generalizations). Consider $X_1, \ldots, X_n$, independent random variables with density
$$f(x \mid \theta) = \exp\{\theta s(x) - K(\theta) - d(x)\}. \qquad (31)$$
A version of the sufficient statistic is $S = \sum_i s(X_i)$, which has a density
$$f(s \mid \theta) = \exp\{\theta s - nK(\theta) - h(s)\}. \qquad (32)$$
From sufficiency considerations, we need to consider only $f(s \mid \theta)$. We also realize that there is no need to apply the saddlepoint approximation to the entire density (32), but only to the function $\exp\{-h(s)\}$, as the part of (32) that involves $\theta$ is relatively simple.

Next, note that because $f(s \mid \theta)$ is a density, it integrates to one, and hence by integrating and rearranging (32) it follows that
$$\exp\{nK(\theta)\} = \int \exp(\theta s) \exp\{-h(s)\}\, ds. \qquad (33)$$
The right side is exactly (16) with $\theta$ instead of $t$ and $s$ instead of $x$, so the cgf of $\exp\{-h(s)\}$ is $nK(\theta)$. (Here, we have to think of $\theta$ as a dummy variable rather than a parameter of a distribution.) Hence, we can apply the approximation (21) directly to $\exp\{-h(s)\}$.

Alternatively, we can realize that (32) is in the form of an "exponential tilt" of $\exp\{-h(s)\}$, so the method of Section 3.2.1 can be applied. In either case, we obtain that the density $f(s \mid \theta)$ is approximated by
$$f(s \mid \theta) \approx \left( \frac{1}{2\pi n K''(\hat t)} \right)^{1/2} \exp\{ \theta s - nK(\theta) + nK(\hat t) - \hat t s \}, \qquad (34)$$
where $\hat t$ satisfies
$$nK'(\hat t) = s. \qquad (35)$$
But $\hat t$ is then the maximum likelihood estimator, since the MLE $\hat\theta$ solves the same equation $nK'(\hat\theta) = s$. Because $s = nK'(\hat\theta)$ is a monotone function of $\hat\theta$, we obtain, by a change of variables, the approximate density of the MLE,
$$f(\hat\theta \mid \theta) \approx \left( \frac{n K''(\hat\theta)}{2\pi} \right)^{1/2} \exp\{ n[ K(\hat\theta) - K(\theta) + (\theta - \hat\theta) K'(\hat\theta) ] \}. \qquad (36)$$
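As a sketch of this exponential-family recipe (our illustration: the $X_i$ exponential with rate $\theta$, so that $s(x) = -x$ and $K(\theta) = -\log\theta$), the approximate MLE density can be checked against the exact one, since here $\hat\theta = n/\sum X_i$ has a known density:

```python
import math

n, theta = 5, 2.0

def K(b):                         # f(x|b) = exp{b s(x) - K(b)} with s(x) = -x
    return -math.log(b)

def Kp(b):
    return -1.0 / b

def Kpp(b):
    return 1.0 / b ** 2

def mle_saddle(b_hat):
    # saddlepoint approximation to the density of the MLE; the saddlepoint
    # coincides with the MLE because both solve n K'(b) = s
    expo = n * (K(b_hat) - K(theta) + (theta - b_hat) * Kp(b_hat))
    return math.sqrt(n * Kpp(b_hat) / (2.0 * math.pi)) * math.exp(expo)

def mle_exact(b_hat):
    # b_hat = n / T with T = sum(X_i) ~ Gamma(n, scale 1/theta)
    return (theta ** n * n ** n * math.exp(-n * theta / b_hat)
            / (math.gamma(n) * b_hat ** (n + 1)))

ratios = [mle_saddle(b) / mle_exact(b) for b in (1.0, 2.0, 4.0)]
```

Once again the ratio is a constant (the Stirling error, about 1.017 for $n = 5$), so the renormalized approximation is exact, the same phenomenon the paper illustrates for the Pareto case in Figure 3.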
…parameter $\theta$ enter the formula. Though the approximation is quite accurate, it typically requires numerical methods to compute $\hat t$, $K''(\hat t)$, and the exact normalizing constant. As it turns out, the renormalized (36) is exact in many of the simple cases that we examined. [Daniels (1980) showed that the three cases where the saddlepoint approximation to the density of a sample mean or sum is exact are the gamma, the normal, and the inverse Gaussian distributions.] To take full advantage of (36) requires more complicated settings than we will look at here, so we content ourselves with a simple, exact example.

Example 5: Pareto distribution. Let $X_1, \ldots, X_n$ be iid from the Pareto distribution with known lower limit $\alpha$. The density is
$$f(x \mid \beta) = \frac{\beta \alpha^\beta}{x^{\beta+1}}, \qquad x > \alpha, \qquad (37)$$
a member of the exponential family. From (32) we see that $s = -\sum_i \log x_i$ and $K(\beta) = -\log(\beta \alpha^\beta)$, and the saddlepoint is given by $\hat t = -n/(s + n\log\alpha)$, which is the maximum likelihood estimator $\hat\beta = n/\sum_i \log(x_i/\alpha)$. The saddlepoint approximation is straightforward to compute and, from (36), the density of $\hat\beta$, the maximum likelihood estimator of $\beta$, is approximated by
$$f(\hat\beta \mid \beta) \approx \left( \frac{n}{2\pi} \right)^{1/2} \frac{(e\beta)^n}{\hat\beta^{\,n+1}} \exp\{-n\beta/\hat\beta\}. \qquad (38)$$
Figure 3 shows this approximation and its renormalized version, which is exact.

5. DISCUSSION

…estimators can be obtained (Field 1982; Field and Ronchetti 1990, chap. 4). Moreover, the cgf itself may be estimated, with application to $L$ estimators (Easton and Ronchetti 1986) and general nonlinear statistics (Gatto and Ronchetti 1996).

Other applications include finite population models (Wang 1993a); bootstrapping and related confidence methods (Wang 1993b; Booth, Hall, and Wood 1992; DiCiccio, Martin, and Young 1992); ANOVA and MANOVA (Butler, Huzurbazar, and Booth 1992a, 1992b, 1993); prior distributions (Eichenauer-Herrmann and Ickstadt 1993); generalized linear models (Strawderman, Casella, and Wells 1996); exponential linear models (Fraser et al. 1991); multiparameter exponential families (Pierce and Peters 1992); studentized means (Daniels and Young 1991); and 2 x 2 tables (Strawderman and Wells 1998).

There is another use of the saddlepoint approximation that is, perhaps, even more important than the approximation of a density function. That is the use of the saddlepoint to approximate the tail area of a distribution. From (25) we have the approximation
$$P(\bar X > a) \approx \int_a^{\infty} \left( \frac{n}{2\pi K_X''(\hat t(x))} \right)^{1/2} \exp\{ n[K_X(\hat t(x)) - \hat t(x)\, x] \}\, dx = \int_{\hat t(a)}^{\infty} \left( \frac{n}{2\pi} \right)^{1/2} [K_X''(t)]^{1/2} \exp\{ n[K_X(t) - t K_X'(t)] \}\, dt,$$
where we make the transformation $K_X'(t) = x$ and $\hat t(a)$ satisfies $K_X'(\hat t(a)) = a$. This transformation was noted by Daniels (1983), and allows the evaluation of the integral with only one saddlepoint evaluation. However, one still must do the integration, probably using a numerical method. [See Robert and Casella (1999), sec. 6.3, for an illustration of the use of the Metropolis algorithm to do such a calculation.]

Tail area approximations have seen much more development. The work of Lugannani and Rice (1980) produced a very accurate approximation that only requires the evaluation of one saddlepoint, and no integration. It is derived by further transformations of the saddlepoint approximation; see the discussions by Field and Ronchetti (1990, sec. 6.2) or Kolassa (1994, sec. 5.3). There are other approaches to tail area approximations; for example, the work of Barndorff-Nielsen (1991), which takes advantage of ancillary statistics, or the Bayes-based approximation of DiCiccio and Martin (1993). Also, Wood, Booth, and Butler (1993) gave generalizations of the Lugannani and Rice formula.

REFERENCES

Barndorff-Nielsen, O. (1991), "Modified Signed Log-Likelihood Ratio," Biometrika, 78, 557-563.
Barndorff-Nielsen, O., and Cox, D. R. (1994), Inference and Asymptotics, London: Chapman and Hall.
Billingsley, P. (1995), Probability and Measure (3rd ed.), New York: Wiley.
Booth, J. G., Hall, P., and Wood, A. (1992), "Bootstrap Estimation of Conditional Distributions," The Annals of Statistics, 20, 1594-1610.
Butler, R. W., Huzurbazar, S., and Booth, J. G. (1992a), "Saddlepoint Approximations for the Generalized Variance and Wilks' Statistic," Biometrika, 79, 157-169.
Butler, R. W., Huzurbazar, S., and Booth, J. G. (1992b), "Saddlepoint Approximations for the Bartlett-Nanda-Pillai Trace Statistic in Multivariate Analysis," Biometrika, 79, 705-715.
Butler, R. W., Huzurbazar, S., and Booth, J. G. (1993), "Saddlepoint Approximations for Tests of Block Independence, Sphericity and Equal Variances and Covariances," Journal of the Royal Statistical Society, Ser. B, 55, 171-183.
Cox, D. R., and Reid, N. (1987), "Parameter Orthogonality and Approximate Conditional Inference" (with discussion), Journal of the Royal Statistical Society, Ser. B, 49, 1-39.
Daniels, H. E. (1954), "Saddlepoint Approximations in Statistics," Annals of Mathematical Statistics, 25, 631-650.
Daniels, H. E. (1980), "Exact Saddlepoint Approximations," Biometrika, 67, 59-63.
Daniels, H. E. (1983), "Saddlepoint Approximations for Estimating Equations," Biometrika, 70, 89-96.
Daniels, H. E. (1987), "Tail Probability Approximations," International Statistical Review, 55, 37-48.
Daniels, H. E., and Young, G. A. (1991), "Saddlepoint Approximations for the Studentized Mean," Biometrika, 78, 169-179.
DiCiccio, T. J., and Martin, M. A. (1993), "Simple Modifications for Signed Roots of Likelihood Ratio Statistics," Journal of the Royal Statistical Society, Ser. B, 55, 305-316.
DiCiccio, T. J., Martin, M. A., and Young, G. A. (1992), "Fast and Accurate Double Bootstrap Confidence Intervals," Biometrika, 79, 285-295.
Easton, G. S., and Ronchetti, E. (1986), "General Saddlepoint Approximations With Application to L Statistics," Journal of the American Statistical Association, 81, 420-430.
Efron, B. (1981), "Nonparametric Standard Errors and Confidence Intervals" (with discussion), Canadian Journal of Statistics, 9, 139-172.
Eichenauer-Herrmann, J., and Ickstadt, K. (1993), "A Saddlepoint Characterization for Classes of Priors With Shape-Restricted Densities," Statistics and Decisions, 11, 175-179.
Feller, W. (1971), An Introduction to Probability Theory and its Applications (Vol. II), New York: Wiley.
Field, C. (1982), "Small Sample Asymptotic Expansions for Multivariate M-Estimators," The Annals of Statistics, 10, 672-689.
Field, C., and Ronchetti, E. (1990), Small Sample Asymptotics, Hayward, CA: Institute of Mathematical Statistics.
Fraser, D. A. S., Reid, N., and Wong, A. (1991), "Exponential Linear Models: A Two-Pass Procedure for Saddlepoint Approximation," Journal of the Royal Statistical Society, Ser. B, 53, 483-492.
Gatto, R., and Ronchetti, E. (1996), "General Saddlepoint Approximations of Marginal Densities and Tail Probabilities," Journal of the American Statistical Association, 91, 666-673.
Hall, P. (1992), The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.
Hougaard, P. (1988), Discussion of "Saddlepoint Methods and Statistical Inference," by N. Reid, Statistical Science, 3, 230-231.
Kolassa, J. E. (1994), Series Approximation Methods in Statistics, New York: Springer-Verlag.
Lugannani, R., and Rice, S. (1980), "Saddlepoint Approximation for the Distribution of the Sum of Independent Random Variables," Advances in Applied Probability, 12, 475-490.
McCullagh, P. (1987), Tensor Methods in Statistics, London: Chapman and Hall.
Pierce, D. A., and Peters, D. (1992), "Practical Use of Higher Order Asymptotics for Multiparameter Exponential Families" (with discussion), Journal of the Royal Statistical Society, Ser. B, 54, 701-737.
Reid, N. (1988), "Saddlepoint Methods and Statistical Inference" (with discussion), Statistical Science, 3, 213-238.
Reid, N. (1991), "Approximations and Asymptotics," in Statistical Theory and Models, Essays in Honor of D. R. Cox, London: Chapman and Hall, pp. 287-334.
Robert, C. P., and Casella, G. (1999), Monte Carlo Statistical Methods, New York: Springer-Verlag.
Strawderman, R. W., Casella, G., and Wells, M. T. (1996), "Practical Small Sample Asymptotics for Regression Problems," Journal of the American Statistical Association, 91, 643-654.
Strawderman, R. W., and Wells, M. T. (1998), "Approximately Exact Inference for the Common Odds Ratio in 2 x 2 Tables" (with discussion), Journal of the American Statistical Association, 93, 1294-1320.
Stuart, A., and Ord, J. K. (1987), Kendall's Advanced Theory of Statistics (Vol. I, 5th ed.), New York: Oxford University Press.
Sweeting, T. J. (1987), Discussion of "Parameter Orthogonality and Approximate Conditional Inference," by D. R. Cox and N. Reid, Journal of the Royal Statistical Society, Ser. B, 49, 20-21.
Waller, L. A., Turnbull, B. W., and Hardin, J. M. (1995), "Obtaining Distribution Functions by Numerical Inversion of Characteristic Functions," The American Statistician, 49, 346-350.
Wang, S. (1993a), "Saddlepoint Expansions in Finite Population Problems," Biometrika, 80, 583-590.
Wang, S. (1993b), "Saddlepoint Methods for Bootstrap Confidence Bands in Nonparametric Regression," Australian Journal of Statistics, 35, 93-101.
Wood, A. T. A., Booth, J. G., and Butler, R. W. (1993), "Saddlepoint Approximations to the CDF of Some Statistics With Nonnormal Limit Distributions," Journal of the American Statistical Association, 88, 680-686.