
OLS: Estimation and Standard Errors

Brandon Lee

15.450 Recitation 10



Ordinary Least Squares

The model:
y = Xβ + ε
where y and ε are column vectors of length n (the number of
observations), X is a matrix of dimensions n by k (k is the
number of parameters), and β is a column vector of length k.
For every observation i = 1, 2, . . . , n, we have the equation

y_i = x_{i1} β_1 + · · · + x_{ik} β_k + ε_i

Roughly speaking, we need the orthogonality condition

E[ε_i x_i] = 0

for the OLS to be valid (in the sense of consistency).



OLS Estimator

We want to find β̂ that solves

min_β (y − Xβ)'(y − Xβ)

The first-order condition (in vector notation) is

0 = X'(y − Xβ̂)

and solving this leads to the well-known OLS estimator

β̂ = (X'X)^{-1} X'y
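As a concrete illustration (not part of the original slides), the estimator β̂ = (X'X)^{-1} X'y takes a few lines of NumPy; the data-generating coefficients below are arbitrary choices for the example.

```python
import numpy as np

# Illustrative sketch: compute beta_hat = (X'X)^{-1} X'y on simulated data.
# The "true" coefficients are made up for this example.
rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Solve the normal equations X'X beta = X'y directly; this is more
# numerically stable than forming the explicit inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

With this much data and little noise, `beta_hat` lands very close to `beta_true`.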



Geometric Interpretation

The left-hand-side variable y is a vector in n-dimensional space.
Each column of X (each regressor) is also a vector in n-dimensional space, and we have k of them. The regressors therefore span a k-dimensional subspace of the n-dimensional space. The OLS procedure is nothing more than finding the orthogonal projection of y onto this subspace: the vector of residuals is then orthogonal to the subspace and has minimum length. This interpretation is important and intuitive; moreover, it uniquely characterizes the OLS estimate.
Let's see how we can use this fact to recognize OLS estimators disguised as more general GMM estimators.
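The projection property is easy to check numerically. In this hypothetical example (not from the slides), y follows no model at all, yet the OLS residuals are still orthogonal to every regressor.

```python
import numpy as np

# Sketch: verify that OLS residuals are orthogonal to the regressors,
# i.e. X'(y - X beta_hat) = 0, for an arbitrary simulated data set.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = rng.normal(size=n)  # y need not follow any model for this property

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ beta_hat
orthogonality = X.T @ residuals  # numerically zero in every coordinate
```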



Interest Rate Model

Refer to pages 35-37 of Lecture 7.


The model is

r_{t+1} = a_0 + a_1 r_t + ε_{t+1}

where

E[ε_{t+1}] = 0
E[ε_{t+1}²] = b_0 + b_1 r_t

One easy set of moment conditions:

0 = E[(1, r_t)' (r_{t+1} − a_0 − a_1 r_t)]
0 = E[(1, r_t)' ((r_{t+1} − a_0 − a_1 r_t)² − b_0 − b_1 r_t)]



Continued

Solving these sample moment conditions for the unknown parameters is exactly equivalent to a two-stage OLS procedure.
Note that the first two moment conditions give us

E_T[(1, r_t)' (r_{t+1} − â_0 − â_1 r_t)] = 0

But this says that the estimated residuals are orthogonal to the regressors, and hence â_0 and â_1 must be OLS estimates of the equation

r_{t+1} = a_0 + a_1 r_t + ε_{t+1}



Continued

Now define

ε̂_{t+1} = r_{t+1} − â_0 − â_1 r_t

Then the sample moment conditions

E_T[(1, r_t)' (ε̂_{t+1}² − b̂_0 − b̂_1 r_t)] = 0

tell us that b̂_0 and b̂_1 are OLS estimates from the equation

ε̂_{t+1}² = b_0 + b_1 r_t + u_{t+1}

by the same logic.
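The two-stage procedure can be sketched numerically. The parameter values and simulation design below are hypothetical, chosen only so the conditional variance b_0 + b_1 r_t stays positive and both stages are well behaved.

```python
import numpy as np

# Hypothetical simulation of the two-stage OLS procedure: stage 1
# regresses r_{t+1} on (1, r_t); stage 2 regresses the squared stage-1
# residuals on (1, r_t). Parameters are made up for illustration.
rng = np.random.default_rng(2)
T = 50_000
a0, a1, b0, b1 = 0.5, 0.5, 0.05, 0.02  # b0 + b1*r stays positive here
r = np.empty(T + 1)
r[0] = a0 / (1 - a1)  # start at the stationary mean
for t in range(T):
    var_t = b0 + b1 * r[t]
    r[t + 1] = a0 + a1 * r[t] + np.sqrt(var_t) * rng.normal()

# Stage 1: OLS of r_{t+1} on (1, r_t) recovers a0, a1.
X = np.column_stack([np.ones(T), r[:-1]])
a_hat = np.linalg.solve(X.T @ X, X.T @ r[1:])
eps_hat = r[1:] - X @ a_hat

# Stage 2: OLS of eps_hat^2 on (1, r_t) recovers b0, b1.
b_hat = np.linalg.solve(X.T @ X, X.T @ eps_hat**2)
```

With T = 50,000 observations, both stages recover the parameters to within a couple of standard errors.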



Standard Errors
Let's suppose that E[ε_i² | X] = σ² and E[ε_i ε_j | X] = 0 for i ≠ j.
In other words, we are assuming independent and homoskedastic errors.
What is the standard error of the OLS estimator under this assumption?

Var(β̂ | X) = Var(β̂ − β | X)
           = Var((X'X)^{-1} X'ε | X)
           = (X'X)^{-1} X' Var(ε | X) X (X'X)^{-1}

Under the above assumption,

Var(ε | X) = σ² I_n

and so

Var(β̂ | X) = σ² (X'X)^{-1}



Continued

We can estimate σ² by

σ̂² = (1/n) ∑_{i=1}^{n} ε̂_i²

and the standard error for the OLS estimator is given by

V̂ar(β̂ | X) = σ̂² (X'X)^{-1}

This is the standard error that most (less sophisticated) statistical software packages report.
But it is rarely safe to assume independent, homoskedastic errors. The Newey-West procedure is a straightforward and robust method of calculating standard errors in more general situations.
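A minimal sketch of the classical variance estimate above, on hypothetical simulated data. Note that the slides divide by n; most software divides by n − k, a small-sample correction that is immaterial here.

```python
import numpy as np

# Sketch: classical (homoskedastic) OLS standard errors,
# Var_hat(beta_hat | X) = sigma2_hat * (X'X)^{-1}, with sigma2_hat the
# average squared residual as on the slide.
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)  # true error sd = 1

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / n
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov_beta))  # roughly 1/sqrt(n) in this design
```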



Newey-West Standard Errors
Again,

Var(β̂ | X) = Var(β̂ − β | X)
           = Var((X'X)^{-1} X'ε | X)
           = (X'X)^{-1} Var(X'ε | X) (X'X)^{-1}

The Newey-West procedure boils down to an alternative way of looking at Var(X'ε | X).
If we suspect that the error terms may be heteroskedastic, but still independent, then

V̂ar(X'ε | X) = ∑_{i=1}^{n} ε̂_i² · x_i x_i'

and our standard error for the OLS estimate is

V̂ar(β̂ | X) = (X'X)^{-1} ( ∑_{i=1}^{n} ε̂_i² · x_i x_i' ) (X'X)^{-1}
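This heteroskedasticity-robust ("White") covariance can be sketched as follows. The simulated design is hypothetical, with the error variance deliberately increasing in x so the robust and classical standard errors visibly differ.

```python
import numpy as np

# Sketch of the heteroskedasticity-robust covariance
# (X'X)^{-1} (sum_i eps_hat_i^2 x_i x_i') (X'X)^{-1}.
rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
eps = rng.normal(size=n) * np.sqrt(0.5 + x**2)  # heteroskedastic, independent
y = X @ np.array([1.0, 2.0]) + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
eps_hat = y - X @ beta_hat
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * eps_hat[:, None]**2).T @ X   # sum_i eps_hat_i^2 x_i x_i'
cov_white = XtX_inv @ meat @ XtX_inv
se_white = np.sqrt(np.diag(cov_white))

# Classical SEs for comparison: they understate the slope uncertainty here.
sigma2 = eps_hat @ eps_hat / n
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))
```

In this design the robust slope standard error exceeds the classical one, which is exactly the failure the robust formula corrects.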
Continued

If we suspect correlation between error terms as well as heteroskedasticity, then

V̂ar(X'ε | X) = ∑_{j=−k}^{k} ((k − |j|)/k) ∑_{t=1}^{n} ε̂_t ε̂_{t+j} · x_t x_{t+j}'

where terms with t + j outside 1, …, n are dropped from the inner sum, and our standard error for the OLS estimator is

V̂ar(β̂ | X) = (X'X)^{-1} ( ∑_{j=−k}^{k} ((k − |j|)/k) ∑_{t=1}^{n} ε̂_t ε̂_{t+j} · x_t x_{t+j}' ) (X'X)^{-1}
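The formula can be sketched directly, with the weights (k − |j|)/k from the slide and out-of-sample terms dropped. The data-generating design (AR(1) errors) is hypothetical.

```python
import numpy as np

def newey_west_cov(X, eps_hat, k):
    """Newey-West covariance (X'X)^{-1} S (X'X)^{-1} with weights
    (k - |j|)/k as on the slide. Explicit O(n*k) loop for clarity."""
    n, p = X.shape
    S = np.zeros((p, p))
    for j in range(-k, k + 1):
        w = (k - abs(j)) / k
        for t in range(n):
            if 0 <= t + j < n:  # drop terms outside the sample
                S += w * eps_hat[t] * eps_hat[t + j] * np.outer(X[t], X[t + j])
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ S @ XtX_inv

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = np.empty(n)            # AR(1) errors: serially correlated
eps[0] = rng.normal()
for t in range(1, n):
    eps[t] = 0.5 * eps[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
cov_nw = newey_west_cov(X, y - X @ beta_hat, k=5)
se_nw = np.sqrt(np.diag(cov_nw))
```

In practice one would typically use a library implementation (e.g. statsmodels' HAC covariance option) rather than this explicit loop.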



Continued
We can also write these standard errors to resemble the general GMM standard errors (see page 23 of Lecture 8).
In the uncorrelated-errors case, we have

V̂ar(β̂ | X) = (X'X)^{-1} ( ∑_{i=1}^{n} ε̂_i² · x_i x_i' ) (X'X)^{-1}
           = (1/n) (X'X/n)^{-1} ( (1/n) ∑_{i=1}^{n} ε̂_i² · x_i x_i' ) (X'X/n)^{-1}
           = (1/n) Ê[x_i x_i']^{-1} ( (1/n) ∑_{i=1}^{n} ε̂_i² · x_i x_i' ) Ê[x_i x_i']^{-1}

and for the general Newey-West standard errors, we have

V̂ar(β̂ | X) = (X'X)^{-1} ( ∑_{j=−k}^{k} ((k − |j|)/k) ∑_{t=1}^{n} ε̂_t ε̂_{t+j} · x_t x_{t+j}' ) (X'X)^{-1}
           = (1/n) Ê[x_t x_t']^{-1} ( (1/n) ∑_{j=−k}^{k} ((k − |j|)/k) ∑_{t=1}^{n} ε̂_t ε̂_{t+j} · x_t x_{t+j}' ) Ê[x_t x_t']^{-1}
MIT OpenCourseWare
http://ocw.mit.edu

15.450 Analytics of Finance


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
