You are on page 1of 14

Testing for Unit Roots in Seasonal Time Series

Author(s): D. A. Dickey, D. P. Hasza, W. A. Fuller


Source: Journal of the American Statistical Association, Vol. 79, No. 386 (Jun., 1984), pp. 355367
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/2288276
Accessed: 12/05/2010 02:03
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless
you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you
may use content in the JSTOR archive only for your personal, non-commercial use.
Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at
http://www.jstor.org/action/showPublisher?publisherCode=astata.
Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed
page of such transmission.
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of
content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms
of scholarship. For more information about JSTOR, please contact support@jstor.org.

American Statistical Association is collaborating with JSTOR to digitize, preserve and extend access to Journal
of the American Statistical Association.

http://www.jstor.org

Time

Seasonal

in

Roots

Unit

for

Testing

Series

D. A. DICKEY,D. P. HASZA,and W. A. FULLER*

Regression estimators of coefficients in seasonal autoregressive models are described. The percentiles of the
distributionsfor time series that have unit roots at the
seasonal lag are computedby Monte Carlointegrationfor
finite samples and by analytic techniques and Monte
Carlo integrationfor the limit case. The tabled distributions may be used to test the hypothesis that a time series
has a seasonal unit root.

The statistics ?ld - 1 and 'd are standardoutput from a


Y=
- Y, -d on Y_d. Dickey
computerregressionof Y. =
(1976), Fuller (1976), and Dickey and Fuller (1979) discussed a1 and Ti; M.M. Rao (1978)and Evans and Savin
(1981) discussed a,.
An alternativemodel for seasonal data is the stationary
model in which the observations satisfy

(2.4)
+ e,,
yt = OtdYt-d
Iad I < .
KEY WORDS: Time series; Seasonal; Nonstationary; Furthermore,for normalstationaryYtsatisfying(2.4), Y,
Unit root.
ad Yt +d is a NID(O,a 2) time series. That is,
1. INTRODUCTION
- NID(O,cr2).
+ Vt,
(2.5)
Vt
Yt = tdyt+d
Let the time series Y, satisfy

Motivated by (2.4) and (2.5), we suggest an alternative


(1.1) estimator of td, which we call the symmetricestimator.
The symmetric least squares estimator, &d, pools inforwhere Y-d+ , Y-d+ 2,. . . , Yoare initialconditionsand mationfrom the regressionof Y, on Yt+d and the regresthe e, are iid (0, c2) random variables. Model (1.1) is a sion of Y, on Y,d. Let
simple seasonal time series model in which monthlydata
are representedby d = 12, quarterlydata by d = 4, and
(2.6)
(2
Ytytd)
( Y,2 + Y-d ))2,
d=
so on.
We consider several regression-typeestimators of ad
and compute percentiles of their distributionsunder the and define the associated Studentizedstatistic as
hypothesis that ad = 1. This hypothesis states that the
S2
seasonal difference Y, = Yt - Y- d is white noise for d= 2 [{E (Yt2 + Y,-d 2)
(2.7)
(&d 1),
model (1.1). Extensions of model (1.1) permit a variety
of applications.
where
n
2. ESTIMATORS
dY,-d)2
S2 = (2n - 1)' E [(Y, Our first estimator of Oad iS the least squares estimator
t= I
defined as
=

yt

1, 2,

et,

ady,-d

. . .,

/(4

+
-1

n
ad

Y,-d2

Yt-dYt.

t=1

(yt-d

It follows from the definitionsthat - I1

(2.1)

t=1

itd

-[(2n -

1)(1 -

(2.8)

-adYt)].

&d <

1 and that

&Ad)]i(1 + &d)<1,

If the initial conditions are fixed and et is normal, Ad iS


the maximum likelihood estimator. The Studentized where rd is a monotone function of &d. Thus tests based
tests based on &d.
regression statistic for testing the hypothesis Ho: ad = on Td will be equivalentto
&d and 2 - id can be obtainedusing stanThe
statistics
1 is
dardregressionprograms.Table 1 gives two columnsappropriatefor the regressioncomputationwhen d = 2 and
J
Yd)
Td
(2.2)
(aLd - 1),
( =
n = 5. A * indicates a missing value. Thus an observation

where
n

S2 = (n

1)'

(Yt -

Y
t=

adY,-d)

with a missing value for either the independentor the


dependentvariableis not used in the regression. The In(2.3) dependent Variable column is the Dependent Variable
columnlaggedby the seasonal d. The ordinaryregression
of the first column on the second column gives the estimator a2

* D. A. Dickey is Associate Professor, Departmentof Statistics,


North CarolinaState University,Raleigh,NC 27695-8203.D. P. Hasza
is a statisticianat Boeing ComputerServices, P.O. Box 7730, Wichita,
KS 67277-7730.W. A. Fuller is Professor, Departmentof Statistics,
Iowa State University, Ames, IA 50011. The authorsare indebtedto
an anonymousreferee for helpful suggestions.

355

? Journal of the American Statistical Association


June 1984, Volume 79, Number 386
Theory and Methods Section

Journal of the American

356

where ni is the greatest integer not exceeding (n + d i)ld. The estimator ,i can be used to define a symmetric
estimator of td analogous to (2.6). We define

Table 1. Regression Variables Used to


Construct a2 With n = 5
Dependent

Independent Variable

Variable

n
=td

Y_1
Yo

Y1

Y_1

Y2

Y3

Yo
Y1

Y4

Y2

Y5

Y3
Y4
Y5

Y5

Y2

Y4

Y1

Y3

Y-1

Y=

Y.,-

E~~

0*1

t = 1, 2,

+ e,,

YtYt-d,

(2.12)

Fiit

=1

[ ]

d
atdYt-d

If the initialconditions are fixed, the Oi(or [wi)should be


estimated using the regression model.
Finally, we consider a model in which the seasonal
means ,ui are equal to a constant [L.For the model with
constant mean, the ordinaryregressionestimatorsare defined by

Models (1.1) and (2.4) have the propertythat E( Y,) =


0. A stationarytime series with zero mean is seldom encountered in practice. An alternative to Ho: Ad = 1,
which is reasonablefor much real data, is the stationary
model in which E( Y,) is nonzero. We thereforeconsider
the regression model
+

z
=

Y_1

ibit

Yo

Y=

(yt2 + yt_d

where

Y5
Y4
Y3

June 1984

Statistical Association,

. . .,

(2.9)

Otd

=[~
;,=Yt

L*I

Yt-d

Yt.-d

1K

x n

(2.13)

ld,

1f

Y.

E Yt-d

and the Studentized statistic for Ho: Oxd = 1 is

i= I

where
it = 1 if t = i (mod d)
= 0

- nI

otherwise,

(E y,

sd)j]

1t

d*

),

and {e,} is a sequence of iid (0, cr2) randomvariables.The


regression of Y, on 8 ,t,62, ... , d,t, Yt-dfor t = 1, 2, where
n
n, produces coefficients 01, 02,
*,
Od,
d
,d. We
n
1
2
denote the Studentized regression statistic associated
(n - 2)- E (1K,- ad* Y,-d)
S*=
with a,d - 1 by T id.
tt=
If we assume that |a(d I <1, a reparameterizedversion The symmetric estimator of Ad for the stationary conof model (2.9) is
stant-meanmodel is defined by
1

AA

Yt

8it6Li = ad

Yt-d

8itRi

+ e,,

nYt(t

(2.10)

(Y.d*)21}

y.*y.d*,

where
Oi -

Ud)tiq

i = 1, 2, .

(2.14)
,d.

Under model (2.10) the hypothesis Otd = 1 implies that


0i = 0 regardlessof the value of ti. Thus, specifying ad
= 1 in the model (2.10) allows puito assume any value.
Under the alternative of I Oad I < 1, however, FL is an
identified parameterand should be estimated.
Two estimators of pLjcan be considered for the stationary model. The first is that defined by the regression
estimatorsfor (2.9), and the second is the seasonal mean
[is defined by
ii= (niz? l)'

E
j=O

-++j

where
n

Y. - 1i .

S=

(2.15)

3. LIMITING
REPRESENTATION
OF
REGRESSION
STATISTICS
when
cxl = 1 in m del(.). Dcey (17)baiet
The error in the estimator
1 is
O
wea

R1- 1 =
=t1

{vE

yt,,2}

1K
Y_e,

t=1

357

Dickey, Hasza, and Fuller:Seasonal Unit Roots

distributionsgiven in Theorem 1 follow from the results


of Dickey (1976). The asymptotic distributions of the
n
n
symmetric estimators for ?-d = 1 depend only on the
(n2
Y-e
yt_12,
I
n1'f
)
t= I
t=
asymptotic distributionsof their denominatorsbecause,
and used it to calculate percentiles of the limit distribu- for example,
n
tions of n(LI - 1), Ti, n(&,1 - 1), and T,I by Monte
, (Yt _ yt _d)2
-n-1
Carlo integration.This approachis now extended to the
n
()
n(&-d
model with d > 1.
Consider the case d = 2 and the estimator&2. Letting
n2 1: ( yt2 + yt d2)

representationfor the joint limitingdistributionof

2m, we have

Ot

[
Yt-2

L422
tt=

t~

ytt2I

(n-1 -

[t=

(3.4)

n=

n-2

m
~_
E y2j132

-m
E Y2J-22+

t=

t=

( yt2 +

2)

Yd

y2i

i=I

(3.1)

i= I

Y2i = eo +

e2

and
= e-1

Y2i_1

+ el +

+ e2i_

t=

(,

2j=1

Dj

Li

T,2

j=l

2
E (Ti _ 1),

j=1

(Lj _W 2)]

[2~>

xl(?d

E
t=

Td

(3.2)

-1)

-d

>

Td

_I

j=l

in

j = 1, 2,

has a limitingdistribution.

BLJ2

where
Nj = E XjtXj,-I,

d)

1)

n(acd-

+ N2],

[DI + D2]1[N1

Y, +

Theorem1. Let Ytsatisfy (1.1), where ad = 1 and {et}


is a sequence of iid (0, u2) randomvariables. Let &d be
defined by (2.1), &' d by regression equation (2.9), & d*
by (2.13), 0d by (2.6), 1t,d by (2.12), and .,, d* by (2.14).
Then

for all j, we see that summationson even subscriptsare


independentof summationson odd subscripts. That is,
ot2=

(yt

>J2

t=

and n2

Each summationinvolves only Y's whose subscriptsare


even or only Y's whose subscriptsare odd. Because

et 2

n-l

j = 1, 2,

X1,t1 2,

(3.3)

(Tj-

[aE
TsJd~~~~~

w1)

j) d

TjW

2 (T
L2

jW

Xj, = Y2, -j + I,

and the vector (NI, D1) is independentof the vector (N2,


D2). The argumentextends immediatelyto the dth order
process with estimated means. We have

fl(oc

d*

' ~d{d E
1)
2
{jI

Od=

Ld

i= I

1d

n(.d

Naj,
j=l

D>zi

L
(7>d

Li1)

JE
d

(~ wj)}i

=-Ij

-d

L1 -

I]-

i)

W
(

where
m

N~,j = E Xjt,(Xi,
t It=

1-l),

j == 1, 2, ,

d,
x={d

22

1)-

j)(

(d

=
E (Xj, 1
D
=j
I

Xj(- ))2,

j =

1, 2, .

d,

dlO~

t=

-)2

L)

Xj, = Ydt- j+ 1,
T

rn=

XJ(I)~~~~~

and n

md. Therefore,the representationsfor the limit

l(o&d -

1) >

-d2Li

(L1 -

w2)1

)}

Journal of the American Statistical Association, June 1984

358

[2d-2 n(1 -

where

is FVd(l), and

,d)]-

00

Wj = 2I

YkY Zjk,

FV2(1) = 4(2,rrl)- ,

Tj = 2i I

YkZjk,

FV4(1)

j=I

FV6(1) = 8(2iTl15)-

E (j + 1)(j + 2)
j=O

k= I

2(-

1)2},

16(21rl3)- z j2 exp{- 21-1Ij2},

k= 1

Yk

exp{-A1-'(2j +

j=O

k =1

)k+

x [(2j +

[(2k - 1)7r] 1,

and Zjk are normalindependent(0, 1) randomvariables.

FV12(l)

3)2

1]exp{-(21)-'(2j +

= 32(15)-'(21r"YI)-i

3)2}

F(j + 3)[r(j - 2)]-1


j=3

4. ALTERNATIVE
FORTHELIMIT
REPRESENTATIONS
STATISTICS
DISTRIBUTIONS
OF THESYMMETRIC

x j(1512 - 40j21 + 16j4)exp{-21-1j2}.

Theorem4. Let the definitions of Theorems 2 and 3


In Theorem 1 we gave the limiting distributionof all hold. Then the limiting distribution of [2d-2 n(1 statistics as functions of certainrandomvariables.In this a*pd)]
is that of
section we present expressions for the limit cumulative
d
distributionfunctions of the symmetric statistics. These
z Lim - Wim2.
Umd =
results follow the work of Anderson and Darling(1952),
J= I
Sargan (1973), Bulmer (1975), and MacNeill (1978). The

proofs of Theorems 2-4 are given in Appendix A.

The limitingcumulativedistributionfunction of Umd is


00

Theorem 2. Let

Fud(l)

= 2

(d-1)

i-I

1-i E Aj(4j + d)*


j=O

Xj,

x exp{-(161)-'(4j

ejk,

k= I

+ d)2],

where

Ljm = M2

+ d)2}KI[(161)-1(4j

jX2

Aj = Ir4[Fj{(d

t= I

- 1)}]

2, (_1)k[k!(j

k)!]--

k=O

where eJk are iid (0, 1) randomvariables.Thenthe limiting


k + 1}, j= 0, 1, ...
xF{k + l(d -1)}r{jdistributionof [2nd-2(1 - &Ad)]- 'is the limiting distriand K*is the modifiedBessel functionof the second kind
bution of Ej'= I Ljm, and
of order one-fourth.
d

FL(l) =

lim P E Lim C 1}
M-30 j=
I

OFSAMPLING
DISTRIBUTIONS
5. PERCENTILES

Tables of percentilesfor the statistics discussed in this


article
are given in Tables 2-10. For the ordinaryregresE
j=1
sion statistics, the representationin Theorem 1 and the
simulationmethod described by Dickey (1976), and used
x F(j + Pd) [1 - F{1 1-i(4j + d)}]
in Dickey and Fuller (1979,1981)and Hasza (1977), were
for 1 > 0, where ?(x) is the probabilitythat a normal(0, used to construct limit percentiles. For these limit distributions and for all finite sample sizes, the programs
1) randomvariable is less than x.
describedin Dickey (1981)were used for the MonteCarlo
Theorem3. Let Xj, and Lj. be as defined in Theorem integration.For all combinationsof d = 1, 2, 4, 12 and
2. Let
m = 10, 15, 20, 50, 100, 200 and for the limit case, two
replications of d-' 60,000 statistics were run. Normal
Vjm = Ljm - Wjm2,
time series were used for all finite sample sizes. The emm
piricalpercentiles were smoothed by regressionon n - 1.
Wim
mXt
For the symmetric estimators, the regression-smoothed
*f= 1
percentiles were forced throughthe known limit percend
tiles. The smoothingprocedureinvolved 234 regressions.
Fvd(l) = lim P z Vjm'1l
1>0?
A lack-of-fit statistic was computedfor each regression.
Of these, 12 were significant at the 5% level, and four
Then the limiting cumulative distribution function of were also significantat 1%.
=

21(2+d)

[F(jd)]l

m-*oo

(-

1)W[F( + 1)]-

359

Dickey, Hasza, and Fuller:Seasonal UnitRoots

Table 2. Percentiles for n(aid

1), the OrdinaryRegression Coefficient for the Zero Mean Model


Probabilityof a Smaller Value

d=

d = 12

.50

.90

.95

.975

.99

-7.70
- 7.83
-7.94
-8.23
- 8.34
-8.41
-8.47

-5.62
-5.71
- 5.78
-5.93
- 5.99
- 6.02
-6.06

-.67
-.71
-.73
-.76
-.77
-.77
-.78

1.83
1.72
1.67
1.58
1.56
1.54
1.53

2.43
2.28
2.21
2.10
2.06
2.04
2.02

3.02
2.79
2.69
2.54
2.50
2.48
2.46

3.74
3.43
3.30
3.07
3.01
2.97
2.94

-10.89
-11.09
-11.23
-11.57
-11.70
-11.77
-11.85

-8.67
-8.86
-8.94
-9.08
-9.12
-9.14
-9.16

-6.40
-6.51
- 6.54
-6.59
-6.59
-6.59
-6.59

- .61
-.65
-.67
-.70
-.71
-.71
-.71

2.86
2.68
2.60
2.50
2.47
2.46
2.45

3.69
3.47
3.38
3.23
3.19
3.17
3.15

4.41
4.13
4.02
3.85
3.80
3.78
3.76

5.29
4.97
4.82
4.55
4.47
4.43
4.39

-14.25
-14.31
-14.33
-14.35
-14.35
-14.35
-14.35

-11.51
-11.55
-11.56
-11.57
-11.58
-11.58
-11.58

-8.76
-8.74
-8.73
-8.72
-8.72
-8.72
-8.72

-.53
-.62
-.65
-.69
-.69
-.69
-.69

5.54
5.25
5.13
4.97
4.93
4.92
4.90

7.02
6.66
6.52
6.34
6.30
6.28
6.27

8.09
7.82
7.69
7.49
7.42
7.39
7.36

9.70
9.17
8.97
8.71
8.65
8.63
8.61

.01

.025

20
30
40
100
200
400

-12.20
-12.67
-12.98
-13.63
-13.88
-14.01
-14.14

-9.66
-9.93
-10.11
-10.50
-10.65
-10.73
-10.81

x0

-13.87
-14.10
-14.31
-14.82
-15.04
-15.15
-15.27

120
180
240
600
1,200
2,400
00

-18.10
-18.04
-18.02
-18.00
-18.00
-18.00
- 17.99

X0

d=

.10

n = md

40
60
80
200
400
800

.05

The leftmost column in each table is headed by n =


md, which is the total numberof observations.The other
column headings are probabilities of obtaining a value
smallerthan the tabulatedvalue underthe model with ad
= 1. Percentilesfor the Studentizedstatistics in the symmetric case are not given because the Studentized statistics are simple transformationsof the a statistics. The

percentilesfor d = 1 are not includedin Table 10because


they are identical to those for d = I in Table 9.
The limitingdistributionof the symmetricstatisticwith
the mean removed is the same for d = 2 and x2 = 1 as
for d = I and otl = 1. The distributionis also similarfor
finite n. To see the reason for the similarity, let Y, =
EJ(- I ej and consider the sum of squaresfor a sample of

Table 3. Percentiles forT'd, the Studentized Statistic for the Ordinary


Regression Coefficient of the Zero Mean Model
Probabilityof a Smaller Value

n = md

.01

.025

.05

.10

.50

.90

.95

.975

.99

20
30
40
100
200
400

- 2.69
- 2.64
- 2.62
-2.58
- 2.57
-2.56
- 2.55

- 2.27
- 2.24
-2.23
-2.22
-2.22
-2.22
- 2.22

-1.94
-1.93
-1.93
-1.92
-1.92
-1.92
-1.92

-1.57
-1.58
-1.58
-1.59
-1.59
-1.59
-1.60

-.29
-.31
-.32
-.34
-.35
-.35
-.35

1.07
1.04
1.03
1.01
1.00
1.00
.99

1.48
1.43
1.42
1.39
1.38
1.38
1.38

1.85
1.79
1.76
1.72
1.71
1.70
1.69

2.28
2.21
2.18
2.12
2.10
2.09
2.08

-2.58
-2.57
-2.56
-2.56
-2.56
- 2.56
- 2.56

- 2.20
-2.20
-2.20
-2.20
-2.20
-2.20
- 2.20

-1.87
-1.89
-1.89
-1.90
-1.90
-1.90
-1.90

-1.51
-1.52
-1.53
-1.53
-1.53
-1.53
-1.53

-.20
-.21
-.22
-.23
-.24
-.24
-.24

1.14
1.10
1.09
1.07
1.07
1.07
1.07

1.53
1.49
1.47
1.45
1.44
1.44
1.44

1.86
1.82
1.80
1.78
1.77
1.77
1.77

2.23
2.20
2.18
2.17
2.16
2.16
2.16

-2.50
-2.49
-2.49
-2.49
- 2.49
- 2.49
-2.49

-2.08
-2.09
-2.10
-2.10
- 2.10
-2.10
-2.10

-1.77
-1.77
-1.77
-1.79
-1.79
-1.80
-1.80

-1.39
-1.41
-1.42
-1.43
-1.43
-1.43
-1.44

-.10
-.12
-.13
-.14
-.14
-.14
-.14

1.20
1.17
1.16
1.15
1.15
1.15
1.15

1.55
1.53
1.53
1.52
1.52
1.52
1.52

1.87
1.85
1.85
1.84
1.84
1.84
1.84

2.24
2.22
2.21
2.20
2.20
2.20
2.20

X0

d= 4

40
60
80
200
400
800
X0

d=

12

120
180
240
600
1,200
2,400
00

Journal of the American

360

June 1984

Statistical Association,

Table 4. Percentiles for n(cia,d* - 1), the OrdinaryRegression Coefficient for the Single Mean Model
Probabilityof a Smaller Value
n = md
20
30
40
100
200
400

d = 2

X0

40
60
80
200
400
800

d = 4

X0

120
180
240
600
1,200
2,400
oc

d = 12

.01

.025

.05

.10

.50

.90

.95

.975

.99

-15.57
-16.82
-17.50
-18.81
-19.27
-19.50
-19.74

-13.15
-13.91
-14.36
-15.29
-15.63
-15.81
-15.99

-11.12
-11.71
-12.04
-12.68
-12.90
-13.02
-13.14

-8.97
- 9.36
- 9.56
-9.96
-10.10
-10.18
-10.25

-2.74
- 2.84
- 2.89
-2.96
-2.98
-2.99
- 3.00

.99
.88
.84
.76
.74
.73
.72

1.75
1.60
1.54
1.44
1.41
1.39
1.38

2.41
2.21
2.12
1.97
1.93
1.91
1.89

3.19
2.92
2.79
2.59
2.53
2.50
2.47

-17.40
-17.85
-18.16
-18.90
-19.19
-19.35
-19.50

-14.16
-14.59
-14.80
-15.19
-15.32
-15.38
-15.44

-11.66
-12.01
-12.17
-12.45
-12.54
-12.58
-12.62

-9.14
-9.27
- 9.35
-9.52
-9.58
-9.62
- 9.65

-2.06
-2.10
-2.12
-2.17
-2.19
- 2.20
-2.21

2.16
1.98
1.91
1.81
1.79
1.78
1.78

3.10
2.86
2.77
2.64
2.61
2.60
2.59

3.87
3.60r
3.49
3.33
3.29
3.27
3.25

4.78
4.45
4.31
4.11
4.05
4.02
4.00

-20.16
- 20.28
- 20.35
- 20.47
- 20.52
-20.54
- 20.56

-16.35
-16.42
-16.47
-16.60
-16.65
-16.68
-16.71

-13.43
-13.55
-13.59
-13.64
-13.65
-13.65
-13.65

-1.67
-1.73
-1.76
-1.81
-1.82
-1.83
-1.84

4.77
4.52
4.41
4.27
4.23
4.21
4.20

6.44
6.01
5.85
5.69
5.66
5.66
5.66

7.63
7.21
7.05
6.84
6.80
6.78
6.77

9.03
8.65
8.48
8.19
8.10
8.06
8.02

-10.50
-10.47
-10.45
-10.44
-10.44
-10.44
-10.44

2n,

We assume that all roots of

2n

(Yty,
t=

MP - OlmP-' -

k )2
[Yt -_

=n

(Y

-Yn)]

t=

2n

(1Y,

n) - 2n(y

yn)

t=

- (2n)n-1
=

Es

r=

ej -

(E

-(1

0)

01B -02B

Eten-i)

Y, ej

4 j=)

t=l j=

r-

en+j)

(?

O
0PBP) Y.-d(&d

td)

'L

e,(&d edt(d,0)

+ z

en-j)

(tj

?E

(6.2)

are less thanone in absolutevalue and that Yo, Y_1,. . ..


Y-_p-d + are initialvalues. Hasza and Fuller (1982)considered tests of the hypothesis that Ld = 1 and that one
of the roots of (6.2) is one. Model (6.1) defines et as a
nonlinearfunctionof (td, 0), where0' - (0I, 02,
Op). Define this function, evaluated at (cd, 0), by
e,(ad, 0). Expandingin Taylor's series, we obtain

2n

-p - op = 0

r
i=

- (2n)-' E ,

I
n

r-I

(-en_-)+

Lr=ij=O

ad Y.-di)(Oi

O) + r,,

where r, is the Taylor series remainder.This suggests the


- Y- d
following estimation procedure (where Y, =

en +j

r=

-n-I

i= 1

j=0

(Y-

-,

correspondsto the initial estimate &Ld =

12

en+j

r=Ij=_

1):

(1) Regress f, on Y, 1, . . . , Y,_p to obtain an initial


estimator of 0 that is consistent for 0 under the null hy-

If 2n is replaced by (2n - 1)in the final sum of squares, pothesisthatAd = 1.


(2) Compute e,(1, 0) and regress e,(l, 0) on
the resultingquantityhas the same distributionas the sum
2
of squares for a sample of 2n - 1 from model (1.1) with
A

d = 2 andt2=

[(1

1.

61B

02B2

By

d,

MODELS
ORDER
6. HIGHER
A popular model for seasonal time series is the multiplicative model
(1

OtdBd)(l

- 01B

OpBP)Y

= e,

(6.1)

whereB is the backshiftoperatordefinedby B( Y,) = Y,and e, is a sequenceof iid (0, CJ2) randomvariables.

to obtain estimates of (atd - 1, 0


Oad - 1 may be used to test Ho:

0). The estimatorof

td

Theorem 5. If ad = 1 in model (6.1), the two-step


regression procedure suggested earlier results in an estimatorad and a correspondingStudentizedstatisticwith
the same limit distributionas that of the statistic one

361

Dickey, Hasza, and Fuller:Seasonal UnitRoots

Table 5. Percentiles for T,d*, the Studentized Test for the Single Mean Model
Probabilityof a Smaller Value
.01

.025

.05

.10

.50

.90

.95

.975

.99

20
30
40
100
200
400
00

-3.54
-3.44
-3.40
-3.31
- 3.28
- 3.27
- 3.25

-3.08
- 3.02
- 3.00
- 2.95
- 2.93
- 2.93
- 2.92

- 2.72
- 2.69
- 2.68
- 2.65
-2.64
- 2.64
- 2.63

- 2.32
-2.31
-2.31
-2.31
-2.31
-2.31
-2.31

-.97
-1.01
-1.02
-1.05
-1.05
-1.06
-1.06

.49
.46
.44
.41
.40
.40
.40

.92
.88
.86
.83
.83
.82
.82

1.31
1.25
1.22
1.19
1.19
1.18
1.18

1.76
1.68
1.65
1.61
1.60
1.59
1.59

40
60
80
200
400
800

- 3.14
-3.11
-3.09
-3.07
- 3.06
-3.06
- 3.05

- 2.73
- 2.71
- 2.71
-2.70
- 2.70
- 2.70
- 2.70

-2.38
- 2.38
- 2.38
-2.38
-2.38
- 2.38
-2.38

- 2.00
-2.00
- 2.01
-2.02
- 2.02
- 2.02
- 2.03

-.60
-.63
-.64
-.66
-.67
-.67
-.67

.80
.76
.74
.72
.72
.72
.72

1.19
1.15
1.13
1.11
1.11
1.10
1.10

1.54
1.49
1.47
1.46
1.45
1.45
1.45

1.94
1.89
1.87
1.86
1.85
1.85
1.85

-2.01
- 2.02
- 2.02
- 2.04
- 2.05
- 2.06
- 2.06

-1.65
-1.66
-1.66
-1.66
-1.66
-1.66
-1.66

-.31
-.33
- .34
-.35
-.35
- .36
-.36

1.02
.99
.98
.97
.96
.96
.96

1.38
1.35
1.34
1.34
1.33
1.33
1.33

1.71
1.68
1.67
1.66
1.65
1.65
1.65

2.08
2.06
2.05
2.03
2.02
2.02
2.01

d=

md

X0

120
180
240
600
1,200
2,400

d = 12

X0

2.73
2.73
2.73
2.73
2.73
2.74
2.74

would obtain by regressing Z, where Z,

Y, - 0,

--

2.33
2.35
2.36
2.37
2.37
2.37
2.37

Z,-d

Zt

on Z,d,

means or a single mean is immediate.Let

OPY,_p. The estimators

Oi,obtainedby addingthe estimates of Oi- Oito 0i, have


E
t
(6.3)
Yt =Yiitpi.
the same asymptotic distributionas the coefficients in a
, ,_p
regression of Y, on Y,_ , Yt-2, .
The proof of Theorem5 is given in AppendixB. Theo- Replacing Ytby y, in the two-step estimationprocedure
rem 5 implies that the tabulatedlimit percentiles for es- results in the regression of e,(1, 0) on
timatorsin model (1.1) are applicablein the multiplicative (1- 01B - 02
0,Bl)yt- d,
(I
OIB 02B2 _
model for large sample sizes.
.
kt
The extension of Theorem5 to estimatorswith seasonal
A

t-

Table 6. Percentiles for n(6

,.d

kt

1), the Ordinary Regression Coefficient for the Seasonal Means Model
Probability of a Smaller Value

n = md

.01

.025

.05

.10

20
30
40
100
200
400
00

-18.98
- 20.89
-22.02
- 24.32
-25.17
- 25.62
- 26.07

-16.75
-18.19
-19.03
- 20.76
-21.39
- 21.72
- 22.06

-14.96
-16.01
-16.64
-17.95
-18.44
-18.69
-18.95

-12.78
-13.62
-14.09
-15.05
-15.40
-15.58
-15.76

40
60
80
200
400
800

- 28.78
- 30.63
-31.79
- 34.26
-35.20
-35.69
- 36.19

- 25.59
- 27.18
- 28.12
-30.03
- 30.73
-31.10
- 31.47

- 23.10
- 24.49
- 25.27
- 26.78
- 27.32
-27.60
- 27.88

- 20.40
- 21.42
- 22.00
- 23.15
- 23.56
-23.78
- 24.00

-60.72
-63.40
- 64.90
-67.87
- 68.93
- 69.48
-70.04

-55.63
- 57.83
-59.21
- 62.16
- 63.29
- 63.87
- 64.48

-51.22
- 53.57
- 54.89
- 57.52
- 58.47
- 58.95
- 59.45

-46.98
- 49.14
- 50.28
- 52.44
- 53.19
- 53.57
- 53.95

x0

d = 12

120
180
240
600
1,200
2,400
xc

.50

.90

.95

-6.48
- 6.76
-6.91
- 7.20
-7.30
- 7.35
-7.41

-1.94
-2.11
-2.18
-2.31
- 2.35
- 2.37
- 2.39

- .83
-1.01
-1.10
-1.25
-1.30
-1.33
-1.35

.10
-.11
-.21
-.40
-.46
-.49
-.52

-11.89
-12.33
-12.58
-13.06
-13.23
-13.32
-13.41

- 5.40
- 5.73
- 5.87
- 6.09
- 6.15
-6.18
- 6.21

- 3.90
-4.15
- 4.28
- 4.51
-4.59
-4.63
- 4.67

- 2.51
- 2.81
- 2.96
- 3.20
- 3.28
-3.32
- 3.35

-33.65
- 34.95
-35.57
- 36.65
- 36.99
- 37.16
-37.33

-22.13
- 23.06
- 23.48
- 24.16
- 24.37
- 24.47
- 24.56

-19.46
- 20.00
- 20.31
- 20.92
- 21.14
- 21.26
- 21.37

.975

-17.09
-17.44
-17.70
-18.34
-18.60
-18.74
-18.88

.99

1.16
.92
.79
.55
.46
.42
.38
-1.05
-1.33
-1.47
-1.74
-1.83
-1.87
-1.92
-14.25
-14.57
-14.86
-15.57
-15.87
-16.03
-16.20

Journal of the American Statistical Association, June 1984

362

Table 7. Percentiles of Tp,d, the Studentized Statistic for the Seasonal Means Model
Probabilityof a Smaller Value
n = md

d=

12

.025

.05

.10

.50

.90

.95

.975

20
30
40
100
200
400

-4.46
-4.25
-4.15
-4.00
-3.95
-3.93

-3.98
-3.84
-3.77
-3.66
-3.63
-3.62

-3.60
-3.50
- 3.45
-3.38
-3.36
-3.35

-3.18
-3.13
-3.11
-3.07
-3.05
-3.05

-1.92
-1.95
-1.96
-1.99
-1.99
-2.00

-.66
-.73
-.76
-.81
-.83
-.83

-.29
-.36
-.40
-.46
-.48
-.49

.03
-.04
-.08
-.15
-.18
-.19

.42
.34
.29
.21
.18
.17

00

-3.90

- 3.60

- 3.34

-3.04

- 2.00

-.84

-.50

-.20

.15

40
60
80
200
400
800
00

-5.01
-4.85
-4.78
-4.67
-4.64
-4.62
-4.61

-4.57
-4.46
-4.41
-4.34
-4.32
-4.31
-4.30

-4.21
-4.14
-4.11
-4.06
-4.05
-4.04
-4.04

-3.83
-3.79
-3.78
-3.75
-3.74
- 3.74
-3.73

-2.55
- 2.58
-2.60
-2.63
- 2.64
- 2.65
-2.65

-1.30
-1.36
-1.40
-1.45
-1.47
-1.48
-1.49

-.94
-1.01
-1.05
-1.11
-1.14
-1.15
-1.16

-.61
-.70
-.74
-.81
-.84
-.85
-.86

-.26
-.35
-.39
-.46
-.48
-.49
-.50

- 6.63
-6.52
-6.47
-6.39
-6.37
- 6.36
-6.35

- 6.20
-6.15
-6.13
-6.09
-6.07
- 6.07
-6.06

-5.86
-5.84
-5.83
-5.82
-5.82
- 5.82
-5.82

- 5.49
-5.49
-5.49
-5.49
-5.49
-5.49
-5.49

-4.21
-4.27
-4.30
-4.34
-4.35
-4.36
-4.36

- 2.97
-3.06
-3.10
-3.17
-3.19
-3.20
-3.22

- 2.62
-2.71
-2.75
-2.83
-2.86
- 2.87
- 2.88

- 2.35
-2.39
-2.43
-2.52
-2.56
- 2.58
- 2.60

-1.95
-2.01
-2.05
-2.18
-2.23
- 2.26
- 2.29

120
180
240
600
1,200
2,400
0c

Table 8. Percentiles of n(cid

1), the Symmetric Estimator for the Zero Model

12

.10

.50

.90

.95

.975

.99

-8.31
-9.12
-9.59
-10.54
-10.89
-11.07
-11.26

-6.92
- 7.48
- 7.79
-8.40
-8.63
-8.74
-8.86

- 5.37
-5.71
-5.90
-6.27
- 6.40
-6.46
-6.53

-1.57
-1.62
-1.64
-1.69
-1.71
-1.71
-1.72

-.42
-.42
-.42
-.42
-.42
-.42
-.42

-.33
-.31
-.31
-.30
-.30
-.30
-.30

-.27
-.25
-.25
-.24
-.24
-.24
-.23

-.22
-.21
-.20
-.19
-.18
-.18
-.18

-12.94
-13.83
-14.29
-15.16
-15.46
-15.61
-15.76

-10.62
-11.11
-11.40
-12.01
-12.24
-12.36
-12.48

-8.71
-9.12
-9.34
-9.75
- 9.90
-9.97
-10.05

-6.82
- 7.09
- 7.24
-7.50
- 7.59
- 7.64
-7.68

-2.50
-2.54
- 2.56
-2.61
- 2.62
- 2.63
-2.64

-1.01
-.99
-.98
-.97
-.97
-.97
-.97

-.82
-.79
-.78
-.77
-.76
-.76
-.76

-.70
-.67
-.66
-.64
-.63
-.63
-.63

-.60
-.56
-.54
-.52
-.51
-.51
-.51

-16.21
-16.77
-17.10
-17.77
-18.02
-18.15
-18.28

-13.61
-13.86
-14.06
-14.55
-14.74
-14.85
-14.95

-11.45
-11.73
-11.90
-12.23
-12.35
-12.41
-12.48

-9.38
-9.57
-9.68
-9.89
- 9.97
-10.01
-10.05

-4.46
-4.49
-4.51
-4.55
-4.57
-4.58
-4.59

- 2.35
-2.31
-2.29
- 2.27
-2.26
- 2.26
-2.26

-2.01
-1.96
-1.93
-1.90
-1.90
-1.89
-1.89

-1.77
-1.71
-1.68
-1.65
-1.64
-1.63
-1.63

-1.57
-1.49
-1.45
-1.41
-1.40
-1.39
-1.39

- 26.43
- 27.39
-27.77
- 28.29
- 28.42
- 28.47
- 28.52

- 23.58
- 24.02
-24.26
- 24.70
- 24.86
- 24.94
- 25.02

- 21.29
- 21.56
-21.72
- 22.07
- 22.20
- 22.27
- 22.34

- 18.78
- 19.04
- 19.18
- 19.44
- 19.53
- 19.57
- 19.62

- 12.42
- 12.49
- 12.52
- 12.55
- 12.55
- 12.55
- 12.55

- 8.57
- 8.43
-8.38
- 8.32
- 8.31
- 8.31
- 8.30

- 7.82
- 7.65
-7.58
- 7.48
- 7.46
- 7.45
- 7.44

- 7.25
- 7.06
-6.98
- 6.85
- 6.82
- 6.80
- 6.78

- 6.64
- 6.35
-6.25
- 6.13
- 6.12
- 6.11
- 6.11

.025

10
15
20
50
100
200

-9.98
-11.18
-11.89
-13.38
-13.93
-14.22
-14.51

20
30
40
100
200
400
40
60
80
200
400
800
cc

.05

.01

cc

Probabilityof a Smaller Value

n= md

0c

.99

.01

120
180
240
600
1200
2400
cc

Dlckey, Hasza, and Fuller:Seasonal UnitRoots

363

Table 9. Percentiles of n(a6,d* - 1), the Symmetric Estimator for the Single Mean Model
Probabilityof a Smaller Value
n = md

10
15
20
50
100
200
00

d = 2

20
30
40
100
200
400
00

40
60
80
200
400
800
00

12

120
180
240
600
1,200
2,400
00

.01

.025

.05

.10

.50

.90

.95

.975

.99

-12.64
-14.77
-15.97
-18.38
-19.25
-19.70
-20.16

-11.28
-12.71
-13.55
-15.30
-15.95
-16.28
-16.62

-9.92
-11.01
-11.62
-12.81
-13.24
-13.45
-13.68

-8.38
-9.14
-9.55
-10.32
-10.59
-10.73
-10.87

-3.84
-3.96
-4.02
-4.13
-4.17
-4.19
-4.21

-1.51
-1.47
-1.46
-1.45
-1.44
-1.44
-1.44

-1.20
-1.15
-1.13
-1.10
-1.09
-1.09
-1.08

-1.01
-.96
-.93
-.88
-.86
-.85
-.84

-.86
-.78
-.75
-.70
-.69
-.68
-.67

-15.88
-17.23
-17.93
-19.25
-19.70
-19.93
-20.16

-13.48
-14.45
-14.97
-15.94
-16.28
-16.45
-16.62

-11.57
-12.24
-12.59
-13.23
-13.45
-13.56
-13.68

-9.51
-9.95
-10.18
-10.59
-10.73
-10.80
-10.87

-4.02
-4.08
-4.11
-4.17
-4.19
-4.20
-4.21

-1.46
-1.45
-1.45
-1.44
-1.44
-1.44
-1.44

-1.13
-1.11
-1.11
-1.09
-1.09
-1.09
-1.08

-.93
-.90
-.89
-.86
-.85
-.85
-.84

-.75
-.72
-.71
-.69
-.68
-.68
-.67

-18.97
-19.62
-20.03
-20.89
-21.21
-21.38
-21.56

-16.13
-16.51
-16.79
-17.47
-17.74
-17.88
-18.03

-13.86
-14.08
-14.26
-14.70
-14.88
-14.98
-15.08

-11.45
-11.65
-11.77
-12.05
-12.16
-12.21
-12.27

-5.46
-5.51
- 5.53
-5.59
-5.61
-5.62
-5.63

- 2.74
- 2.70
- 2.69
- 2.67
-2.67
- 2.66
-2.66

- 2.32
- 2.26
- 2.24
-2.21
-2.20
-2.20
-2.20

- 2.02
-1.97
-1.94
-1.89
-1.87
-1.86
-1.85

-1.76
-1.68
-1.64
-1.60
-1.59
-1.58
-1.58

-28.27
-28.91
-29.25
-29.94
-30.18
-30.31
-30.43

-25.14
-25.63
-25.92
-26.49
-26.69
-26.80
-26.91

- 22.77
- 22.96
-23.12
-23.51
-23.67
-23.76
-23.85

- 20.09
- 20.28
-20.41
-20.70
-20.81
-20.87
- 20.93

-9.01
-8.87
-8.82
-8.76
-8.75
-8.75
-8.75

-8.16
-8.01
-7.95
-7.86
- 7.84
-7.83
-7.82

-7.51
-7.38
-7.30
-7.16
-7.11
-7.08
- 7.06

-6.98
-6.70
-6.59
-6.45
-6.42
-6.40
- 6.39

-13.15
-13.24
-13.28
-13.31
-13.32
-13.32
-13.32

- 1), the Symmetric Estimator for the Seasonal Means Model


Table 10. Percentiles for n(&a,,d
Probabilityof a Smaller Value

n= md

.01

.025

.05

.10

.50

.90

20
30
40
100
200
400

-19.24
-21.01
-22.00
-23.95
-24.65
-25.01
-25.37

-16.90
-18.22
-18.96
-20.45
-20.99
-21.26
-21.54

-14.97
-16.04
-16.58
-17.59
-17.93
-18.10
-18.28

-12.86
-13.61
-13.99
-14.67
-14.90
-15.01
-15.13

-6.85
-6.96
-7.02
-7.13
-7.17
-7.19
-7.21

-3.46
-3.39
-3.36
-3.32
-3.31
-3.30
-3.29

-28.95
-30.89
-31.85
-33.53
-34.08
-34.35
-34.63

-26.00
-27.35
-28.06
-29.41
-29.88
-30.12
-30.36

- 60.83
- 63.22
- 64.30
- 66.06
- 66.59
- 66.85
-67.10

- 56.03
-58.06
- 59.02
- 60.63
- 61.14
- 61.39
- 61 .64

-23.23
-24.41
-24.98
-25.97
-26.29
-26.45
-26.61
- 52.50
- 54.17
- 54.89
- 56.02
- 56.35
- 56.50
- 56.65

-20.59
-21.44
-21.84
-22.52
-22.74
- 22.84
-22.95
- 48.69
- 49.67
- 50.17
- 51 .07
- 51.37
- 51.52
- 51 .67

-12.83
-12.90
-12.95
-13.08
-13.13
-13.16
-13.19
- 36.64
- 36.74
- 36.81
- 37.00
- 37.08
- 37.12
- 37.16

-7.88
-7.72
-7.66
- 7.57
-7.54
-7.53
-7.53
- 27.56
- 27.20
- 27.05
- 26.87
- 26.82
- 26.81
- 26.79

00

40
60
80
200
400
800
00

12

120
180
240
600
1,200
2,400
X0

.95

.975

.99

- 2.92
-2.82
-2.78
-2.71
-2.69
-2.68
-2.68

-2.55
-2.44
-2.38
-2.28
-2.25
-2.23
-2.22

-2.22
-2.07
-2.01
- 1.91
-1.89
-1.87
-1.86

-6.95
-6.74
-6.65
-6.53
-6.49
-6.48
-6.47
- 25.69
- 25.14
- 24.92
- 24.61
- 24.54
- 24.50
- 24.47

-6.28
-6.03
-5.92
-5.74
-5.68
-5.65
-5.63
- 24.02
- 23.50
- 23.25
- 22.79
- 22.64
-22.57
- 22.49

-5.58
-5.32
-5.21
-5.03
-4.98
-4.95
-4.93
- 22.39
- 21 .62
- 21.32
- 20.90
- 20.81
- 20.76
- 20.73

364

Journal of the American

Notice that it = y. and that derivatives with respect to


Ri in the Taylor series get multipliedby zero, so that no
adjustmentsto ,uiare made in the second step. Using the
argumentsof Theorem 5, it follows that the first coefficient t1d and its Studentized statistic converge to the
limit distributions of the corresponding estimators in
model (1.1).
Finally, we extend the results for our symmetric statistics to the multiplicativemodel. If I ad | < 1 in (6.1),
then
(I

01F

aYdFd)(l

0pFP)Y

is a white noise series with the same varianceas et, where


F denotes the forward shift operatorF(Yt) = Yt, 1. To
compute a symmetric estimator for the model with seasonal means, first compute the deviations from seasonal
means defined in (6.3). Then create the column vector
z

(Y-p-d?l,

Y-p-d+2,

Yn Mg

M, . .. 9MgYns.... 9Y-p-dJrl).

Statistical Association,

June 1984

the statistic was in the rejection region. The numberof


replicationswas 24,000 for d = 2, 12,000for d = 4, and
4,000 for d = 12. The power for the eight statistics is
given in Table 11 for m = 20 and d = 4.
For m : 100, all statistics have power exceeding .90
for coefficients less than .85. The power increases with
d and, of course, with m.
It is interesting that tests based on &d, ci,d*, and
&lld* are seriously biased for the stationary alternative
with small m and ad close to one. The bias is clear in
Table 11. The bias is associated with the properties of
stationarytime series. For example, if ad = .99, the variance of the stationary time series is 50.25 a2 and the
initial value can be far from zero. In small samples with
large YO,the varianceof the estimatorof td iSsmallerthan
the variance of 6d under the null distribution.The T statistics show less bias because these statistics reflect the
fact that sampleswith large Yohave smallvariance.Note,
however, that T,4* is also slightly biased for m = 20 and
a4

.995.

Tests based on statistics constructed with seasonal


Here the M's denote a string of p + d missing values.
means removed displayed little bias. For m = 10 and ad
Let theelementsof Z be denotedbyz,t = - p - d + 1,
- .995, the null was rejected by the statistic &,4 about
.. ., 2n + 2(p + d). Now create
4.8% of the time. For largerm, little or no bias was obe=
* 0P)
served. The power of the statistic &4 was similarto that
et(l, 01, 02,
O
of
a slightadvantage.
&t4, with&,4 displaying
_02B 2 _ .._pBP)Z,
=(I _-OIB
To summarize,if the alternativemodelis the zero mean
or single mean stationarymodel, the i and TE* statistics
and regress et on
are preferred. If the alternative model is the stationary
2(I - OIB _02B
**-BP)zt-d, Zt, Zt-, 1 ,
Zt-p
process with seasonal means, (x,, and &j are the most
If the first coefficient in this regression is denoted by powerful of the statistics studied.
1), the percentiles of the limit distributionof
(&(,d
APPENDIX
A. PROOFS
OFLIMIT
RESULTS
FOR
n(&,d - 1) are those given in Table 9.
A

A,

AA

SYMMETRIC
STATISTICS

7. POWERSTUDY

Proof of Theorem2. White (1958) showed that


Using the Monte Carlo method, we computed power
E{exp(2tL1)} = [cosh(2ti)]-.
curves for the eight test statistics discussed in this paper.
Power curves were generated for all combinationsof m Thus
= 10, 15, 20, 50, 100, and 200, with d = 2, 4, and 12. In
E{exp(-t L1)} = [cosh(2t)VI-1,
all cases, the hypothesis that Otd = 1 was tested at the
=Li,
.05 level against the alternativethat ad < 1. The power andforL =
was evaluated for stationarytime series satisfying Y, =
E{exp(- t L)} = [cosh(2t)']-Id
at Yt-d + e,, where et are NI(0, 1) randomvariablesfor
ax = .995, .99, .95, .90, .85, .70, .60, .50, and .30. The
= Jfexp(I t) dFL(l),
empirical power is the fraction of the samples in which
Table 11. Empirical Power of Tests Against the Stationary Alternative for 20 Years of QuarterlyData
a4

Statistic

.995

.99

.98

.95

.90

.85

.80

.70

.60

&4

.001

.003

.018

.13

.47

.81

.96

1.00

1.00

i4

.090

.114

.170

.35

.66

.88

.97

1.00

1.00

.001
.044
.054
.054
.000
.054

.005
.062
.057
.054
.002
.055

.017
.102
.064
.059
.005
.064

.09
.22
.10
.08
.04
.10

.35
.47
.19
.13
.21
.19

.65
.72
.32
.20
.50
.34

.88
.90
.50
.33
.79
.53

1.00
1.00
.84
.65
.99
.86

1.00
1.00
.97
.89
1.00
.99

&?L4*

Tt4*
a,,

TL4
&pA4*
&,>4

Dickey, Hasza, and Fuller:Seasonal UnitRoots

365

where, for notationalconvenience, we omit the subscript


d. Integratingby parts, we have
exp(-1 t) dFL(l)

tf

exp(-1 t)FL(l) dl,

Proof. Using results of Anderson and Darling(1952),


E{exp(- t V1)} = [(sinh(2t)l) - (2t)i]i,

and thus
E{exp( - t V)}

so that
exp(- 1 t)FL(l) dl

= 2Id
=

t} dFv(l)l

By the Laplace transformtheory,


=

Fv(l)

[sinh(2t)l]-dl2

Now
t -'[cosh(2t)I]

x
-exp

? -{[t(sinh(2t)I)Wd]

1(2t)/}.

We have

[cosh(2t)I]dl2}.

{t

[(sinh(2t)i) - (2t)i]i'

t[cosh(2t)id2

By the theory of Laplace transforms,FL(l) is the inverse


Laplace transform,Sf -(), given by
FL(l) =

t-1

(20)d/4

23d/4 t(d 4)/4

exp[ - d(2t)i2

23d/4 t(d4)/4

exp[ - d(2t)i2 -]

[1 - exp{ - 2(2t)i}]

-I

-Id

t-

exp[-hd(2t)I]

2Id -exp[

-d(2t)i]

[I + exp(-2(2t)I)]
z

Id

00

Xz

(-l)'[r(4d)

[F(j + l)F('

d)] -' F(j

1 d)exp{- 2j(2t)I}.

j=O

j=0

Thus

x r(j + 1)]-' r(j + 2d)exp{-2j(2t)I}.

j
W1?I{t(d4)/4exp[-2i(2j

Fv(l) = ,rrjC

Therefore,

+ d)ti]}.

j=O

FL(l) = [r(ld)]'

2id

The inverse transforms


? -'{exp(- bti)} = [2II1-'

x, (-l)j[r(j + 1)i-' r(1 + hd)SV-'[f(t)],


j=0

b exp{- b2(41) '},

'- {t-Iexp(-bti)} = (,ml)-Iexp{- b2(41)-},

where
fit) = t-' exp{-2 -(4j

alongwith the usualformulafor differentiationof Laplace


transforms,give the result.

+ d)tI}.

From standardtables (Selby 1968, p. 476, No. 83),

Proof of Theorem4. Let


d

[t-' exp{-kt1}]

= 2[1 - ?(k(21) 1)],

U = 1, Li -

W12 =

(LI

i= I

W,2) + a Li,
i=2

where '1 is the standardnormal cumulativedistribution


where d 2 2 is an integer.Letting2{1} denote the Laplace
function.
transformand Fu(I) = P(U ' I), we have
Proof of Theorem3. We show the generalresultfor V,
= LiWi2, V =
Vi, and d even. Then for every
>
I
O,
= t '{![exp{(2t)i} + exp{-(2t)I}]}' -d)/2
1>0,~~~~~~0
Fv(l)

C2

x {2(2t)l [exp{(2t)I} - exp{ - (2t)I}]

Wj(4j + d)

j=0

x al

C E Wj k[I-l exp{ j=0

(81)-l'(4j + d)2}],
d

2(2d+

exp{-(81)- '(4j + d)2}], d0(mod 4),


d0

1)/4

[1 +

x ,

t- i exp{- d(2t)12-'}

exp{-2(2t)

2(2d+ l)/4

(-

`'P

}](l-d)2[l

- exp{-2(2t)I}]-I

t- i exp{- d(2t)i2-'}
l)k[k!F((d - 1)/2)]-1

k=O

2 (mod 4),

x F(k + (d - 1)/2)exp{- 2k(2t)I}

where
C = [7rTrF('d)]

23d/4

j=O

[j!F(2)]- F(j + )exp{- 2j(2t)I}

WI= [F(j + l)]' F1(j + 2d),


-

(i, k) = ((d - 4)14,(d - 2)14).

2(2d+

1)/4

2
j=O

ajrA-exp{- ti(4j + d)2 -},

Journal of the American Statistical Association, June 1984

366

where

andif Y, = Y, -

then f, is stationary,exceptfor

Y.-d,

effects of initial conditions, so that


wj = zrVI[l((d - 1)/2]

- k)!]-

()k[k!(j

[n

k=O

- E(Yn2)] = 0p(n

-t2

x F(k + (d - 1)12)F(j - k + 1). and

E eeY,1

0,(nl),
and by standardtheory of stationarytime series, ,o - o0
Y -'{t-iexp( - t1b)}
= Op(n-) when Oiis estimatedby ordinaryleast squares
= Tr-' bl(21) -exp{ - (81)-' b2}Kj((81)-' b2). from the Y, series.
Now the series
We now establish the relationshipbetween U and n(l 2
L,,d*). Let Q be a d x d orthonormalmatrix with ljth
tt,
IB - 02B2
et= (1 entry qlj = d-1. Let X, = (Xlt, X2t,
, Xdt)'. Let Zt
regressed on
= Q
Notice that
The result follows because

Xt.

d
a

(1

Y.tr(X,X,t')

yaXit2=

t=1

j=lt=l

t=l

where Zt

(Zlt, Z2t,

j=lt=l

(1

diE Zit,

E E xt=

(dM)-1

Xj=21
(=

i=lt=l

j=l
d

_
j=lt=l

t=l
m
'IC'

0IB

\2

1, denoted by X, and

OpBP)kYt
BP)Y,td

p-

yields the estimator A, where the limit distributionof n


A is known.
Now

t=1

so that
d

OPB)Ytd,

yields an estimator of A = Odthe regression of


et= (1 - 01B -

j=lt=l

02B2-

on

Furthermore,

Zdt)'.

43,t

tr(QXtX,'Q') =

01B

-7

jt2-(dm)-

--...-

I_-IB

Do

OpBP)Yt-d]2

01B

PBP-

B))Yt,d]2}
=

Zit}

Op(n-i)

t=l

Since the covariance matrixof X, is of the form r2 I, so


is that of Zt. Thus, the sum of squareddeviations of Yt
from the single mean

because
n

(Oioi - Oi0j)

t= I

y2=Op(ni)

md

Y= (md)

Similarly,

Y, Yt

t= 1

n- Ee^t(l

is
md

yt2

t=1t=1
d
-

EYt

-(md1

E zit
j=lt=l

(md)-

_-m-

j=1t= 1
m

Thus n(X - A)
2

E Z1Xjt2 j=1l =lI

0IB

02B2

e t(I - 0 1B -

OpBP)Yt

'*-OpBP)

-d
Yt -d +

Op (n-i.

A_

m
X2

-n

)2

(md

t=l

z Zt
.

Op(n -),

so by Slutzky's theorem, the

limit distributionfor nA is the same as that of nA. The


removalof an overall mean Yat the outset does not affect
the estimationof the Oibecause (Y, - Y) - ( Ytd - Y)
= If - Y,-d = I,, as before. Similarly, etis unchanged
but is regressed on (1 - 0AB OB- PB)(Yt_d - Y).
Comparisonof this regression coefficient to that in the
regression ofeton(1

- OIB -

-- - OpBP)(Yt-d

Y)

It follows that the limit distributionof n(I - &,,d*) is the shows, in exactly the same manner as before, that the
sameas thatof d2 {Jd_l Lj _ W1211
estimators differ by Op(n-i).
The inclusion of Yt ,IYt2, . . . , Yt-p in the regresAPPENDIXB
sions allows for Gauss-Newton improvementsof 0i but
To prove the results for extension to higher order does not affect the limit distribution of n(&d - 1), since
= Op (n).
models, simply note that if
E Y,-dY,-j
(1 - Bd)(1 - O,B - 02132
= e
[Received November 1982. Revised January 1984.1
-* -OpBP)Y
-

Dickey, Hasza, and Fuller:Seasonal UnitRoots

REFERENCES
ANDERSON, T.W., and DARLING, D.A. (1952), "AsymptoticTheory of Certain 'Goodness of Fit' CriteriaBased on Stochastic Processes," Annals of MathematicalStatistics, 23, 193-212.
BULMER, M.G. (1975), "The StatisticalAnalysis of Density Population," Biometrics, 31, 901-911.
DICKEY, D.A. (1976), "Estimationand Hypothesis Testing in NonStationaryTime Series," unpublishedPh.D. thesis, Iowa State University.
(1981),"Histograms,Percentiles,andMoments," TheAmerican
Statistician, 35, 164-165.
DICKEY, D.A., and FULLER, W.A. (1979), "Distributionof the Estimatorsfor AutoregressiveTime Series Witha Unit Root," Journal
of the AmericanStatistical Association, 74, 427-431.
(1981),"LikelihoodRatioStatisticsfor AutoregressiveTimeSeries With a Unit Root," Econometrica,49, 1057-1072.
EVANS, G.B.A., and SAVIN, N.E. (1981), "The Calculationof the
LimitingDistributionof the Least SquaresEstimatorof the Parameter
in a RandomWalk Model," Annals of Statistics, 5, 1114-1118.
FULLER, W.A. (1976), Introductionto Statistical TimeSeries, New
York: John Wiley.

367
(1979), "An Estimatorfor the Parametersof the Autoregressive
Process With Applicationto Testingfor a Unit Root," Reportto the
U.S. Census Bureau.
HASZA, D.P. (1977), "Estimationin NonstationaryTime Series," unpublishedPh.D. thesis, Iowa State University.
HASZA, D.P., andFULLER, W.A. (1981),"Testingfor Nonstationary
ParameterSpecificationsin Seasonal Time Series Models," Annals
of Statistics, 10, 1209-1216.
McNEILL, I.B. (1978), "Propertiesof Sequences of PartialSums of
Polynomial Regression Residuals With Application to Tests for
Changeof Regressionat UnknownTimes," Annals of Statistics, 6,
422-433.
RAO, M.M. (1978), "Asymptotic Distributionof an Estimatorof the
BoundaryParameterof an Unstable Process," Annals of Statistics,
6, 185-190.
Ratioof the GaussianRanSARGAN,J.D. (1973),"TheDurbin-Watson
dom Walk," unpublishedmanuscript,LondonSchool of Economics.
SELBY, S.M. (1968), StandardMathematicsTables(6th ed.), Cleveland: ChemicalRubberCompany.
WHITE, J.S. (1958), "The LimitingDistributionof the Serial Correlation Coefficientin the Explosive Case," Annals of Mathematical
Statistics, 29, 1188-1197.

You might also like