Regression Analysis
Regression analysis is defined as the analysis of the statistical relationship among variables. In its simplest form there are only two variables:
- Dependent or response variable (labeled as Y)
- Independent or predictor variable (labeled as X)
- Alpha: an intercept component of the model that represents the model's value for Y when X = 0
- Beta: a coefficient that loosely denotes the nature of the relationship between Y and X, and more specifically denotes the slope of the linear equation that specifies the model
- Epsilon: a term that represents the errors associated with the model
Example
i in this case is a counter representing the ith observation in the data set:

$$ Y_i = \alpha + \beta X_i + \varepsilon_i $$
| Observation (i) | Number of ice cream cones sold (Y) | Cost of ice cream cones (X) |
|---|---|---|
| 1 | 84 | $2.50 |
| 2 | 89 | $3.00 |
| 3 | 92 | $3.25 |
| 4 | 96 | $2.25 |
| 5 | 98 | $1.75 |
| 6 | 102 | $2.75 |
| 7 | 113 | $2.00 |
| 8 | 114 | $1.50 |
| 9 | 122 | $1.25 |
| 10 | 127 | $1.00 |
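To make the example concrete, here is a minimal Python sketch that loads the table's data; the variable names are my own, not from the notes, and the later sketches reuse these same lists.

```python
# Ice cream example data from the table above.
# Y: number of cones sold; X: cost per cone in dollars.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]

n = len(X)
x_bar = sum(X) / n  # mean cost per cone
y_bar = sum(Y) / n  # mean cones sold

print(n, x_bar, y_bar)  # 10 2.125 103.7
```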
Accompanying Scatterplot
[Scatterplot "Ice Cream Demand": Ice Cream Cones Sold (80 to 130) vs. Ice Cream Cone Cost ($0.75 to $3.50)]
Coefficient of Determination
In this simple example, R² is indeed the square of r. Recall that r is often the symbol for the Pearson Product Moment Correlation (PPMC), which is a parametric measure of association between two variables. Here r(X, Y) = -0.84, and (-0.84)² = 0.71. We will get into why this is the case and how these are related on Thursday.
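A minimal sketch checking this on the ice cream data, using the computational formula for r introduced below; the exact r² comes out near 0.70, and the 0.71 above follows from squaring the rounded -0.84.

```python
import math

# Ice cream data from the table above.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]
n = len(X)

# Pieces of the computational formula for Pearson's r.
sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)
sum_y2 = sum(y * y for y in Y)

num = sum_xy - sum_x * sum_y / n
den = math.sqrt((sum_x2 - sum_x**2 / n) * (sum_y2 - sum_y**2 / n))
r = num / den

print(round(r, 2), round(r**2, 2))  # approximately -0.84 and 0.70
```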
The guy that got the credit: Carl Friedrich Gauss, the giant of early statistics, published the theory of least squares in 1821.
Pearson's r can be written in several equivalent forms. In z-score form:

$$ r = \frac{\sum z_x z_y}{n - 1} $$

In deviation-score form, with the sample standard deviations in the denominator:

$$ r = \frac{\sum (x - \bar{x})(y - \bar{y})}{(n - 1)\, s_x s_y}, \qquad s_x = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}} $$

Mathematically simplified:

$$ r = \frac{\sum (x - \bar{x})(y - \bar{y})}{\sqrt{\sum (x - \bar{x})^2 \, \sum (y - \bar{y})^2}} $$

Computationally easier:

$$ r = \frac{\sum xy - (\sum x)(\sum y)/n}{\sqrt{\left(\sum x^2 - (\sum x)^2/n\right)\left(\sum y^2 - (\sum y)^2/n\right)}} $$

Covariance

The sample covariance is the deviation-score equation without the sample standard deviations in the denominator. It measures how two variables covary, and it is this measure that serves as the numerator in Pearson's r.
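To see this numerically, a minimal sketch (ice cream data assumed, variable names my own) that computes the sample covariance and then recovers r by dividing out the two standard deviations:

```python
# Ice cream data from the table above.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n

# Sample covariance: the numerator of Pearson's r, divided by (n - 1).
cov_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / (n - 1)

# Sample standard deviations (n - 1 in the denominator).
s_x = (sum((x - x_bar) ** 2 for x in X) / (n - 1)) ** 0.5
s_y = (sum((y - y_bar) ** 2 for y in Y) / (n - 1)) ** 0.5

# Dividing the covariance by both standard deviations recovers r.
r = cov_xy / (s_x * s_y)
print(round(cov_xy, 3), round(r, 2))  # covariance ~ -9.236, r ~ -0.84
```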
OLS defined
OLS stands for Ordinary Least Squares. This is a method of estimation used in linear regression. Its defining (and nominal) criterion is that it minimizes the squared errors associated with predicting values for Y. It uses a least squares criterion because a simple least criterion would allow positive and negative deviations from the model to cancel each other out (using the same logic that is used for computations of variance and a host of other statistical measures):
$$ \min \sum_{i=1}^{n} \left(Y_i - \hat{Y}_i\right)^2 $$

where the predicted values come from the model

$$ Y_i = \alpha + \beta X_i + \varepsilon_i $$
Since Y and X are known for all i and the error term is immutable, minimizing the model errors really comes down to our choice of alpha and beta.
This gives

$$ \min \sum_{i=1}^{n} (Y_i - \alpha - \beta X_i)^2 $$

where S is the total sum of squared deviations from i = 1 to n for all Y and X, for a given alpha and beta:

$$ S(\alpha, \beta) = \sum_{i=1}^{n} (Y_i - \alpha - \beta X_i)^2 $$
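A small sketch of S(α, β) as a Python function on the ice cream data; the near-optimal pair below is the OLS solution derived later in these notes, and the two alternatives are arbitrary perturbations chosen only to show that S grows as we move away from it.

```python
# Ice cream data from the table above.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]

def S(alpha, beta):
    """Total sum of squared deviations for a candidate alpha and beta."""
    return sum((y - alpha - beta * x) ** 2 for x, y in zip(X, Y))

# The OLS solution derived later (alpha ~ 137.96, beta ~ -16.12)
# gives a smaller S than either perturbed alternative.
print(S(137.96, -16.12))  # near the minimum (~566)
print(S(130.00, -16.12))  # worse intercept -> larger S
print(S(137.96, -10.00))  # worse slope     -> larger S
```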
The alpha and beta that minimize S can be found by taking the partial derivative of S with respect to each of alpha and beta and setting it equal to zero, yielding:
For alpha:

$$ \sum_{i=1}^{n} (Y_i - \alpha - \beta X_i) = 0 \quad\Longrightarrow\quad \sum_{i=1}^{n} Y_i = n\alpha + \beta \sum_{i=1}^{n} X_i $$

For beta:

$$ \sum_{i=1}^{n} X_i Y_i = \alpha \sum_{i=1}^{n} X_i + \beta \sum_{i=1}^{n} X_i^2 $$
Refer to page 436 for the text's more detailed description of the computations for solving for alpha and beta.
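The two normal equations form a 2×2 linear system in alpha and beta, so one way to see the solution is to solve that system directly; a minimal sketch using Cramer's rule (ice cream data assumed, not part of the text's computation):

```python
# Ice cream data from the table above.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]
n = len(X)

# Normal equations as a 2x2 system in (alpha, beta):
#   n*alpha       + (sum X)*beta   = sum Y
#   (sum X)*alpha + (sum X^2)*beta = sum XY
a11, a12, b1 = n, sum(X), sum(Y)
a21, a22, b2 = sum(X), sum(x * x for x in X), sum(x * y for x, y in zip(X, Y))

det = a11 * a22 - a12 * a21           # Cramer's rule denominator
alpha = (b1 * a22 - a12 * b2) / det   # replace first column with (b1, b2)
beta = (a11 * b2 - b1 * a21) / det    # replace second column with (b1, b2)

print(round(alpha, 2), round(beta, 2))  # approximately 137.96 and -16.12
```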
Given these, we can easily solve for the simpler alpha via algebra. Since X̄ is the sum of all X_i from 1 to n divided by n, and the same can be said for Ȳ, we are left with

$$ \bar{Y} - \beta \bar{X} = \alpha $$
Dividing the first normal equation through by n shows this directly:

$$ \sum_{i=1}^{n} Y_i = n\alpha + \beta \sum_{i=1}^{n} X_i \quad\Longrightarrow\quad \frac{\sum_{i=1}^{n} Y_i}{n} - \beta\, \frac{\sum_{i=1}^{n} X_i}{n} = \alpha $$
Since the mean of both X and Y can be obtained from the data, we can calculate the intercept (alpha) very simply if we know the slope (beta). Once we have a simple equation for alpha, we can plug it into the equation for beta and then solve for the slope of the regression equation.
Substituting alpha (in the form (ΣY_i − βΣX_i)/n) into the normal equation for beta:

$$ \sum_{i=1}^{n} X_i Y_i = \frac{\sum_{i=1}^{n} Y_i - \beta \sum_{i=1}^{n} X_i}{n} \sum_{i=1}^{n} X_i + \beta \sum_{i=1}^{n} X_i^2 $$

Collecting the beta terms:

$$ \sum_{i=1}^{n} X_i Y_i - \frac{\sum_{i=1}^{n} X_i \sum_{i=1}^{n} Y_i}{n} = \beta \left( \sum_{i=1}^{n} X_i^2 - \frac{\left(\sum_{i=1}^{n} X_i\right)^2}{n} \right) $$

Multiplying through by n and solving gives beta, or the regression slope:

$$ \beta = \frac{n \sum_{i=1}^{n} X_i Y_i - \sum_{i=1}^{n} X_i \sum_{i=1}^{n} Y_i}{n \sum_{i=1}^{n} X_i^2 - \left(\sum_{i=1}^{n} X_i\right)^2} $$

and then the intercept:

$$ \alpha = \bar{Y} - \beta \bar{X} $$
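Putting the closed-form results together, a minimal sketch (ice cream data assumed) that computes beta from the slope formula, then alpha from the intercept formula, and prints the fitted line:

```python
# Ice cream data from the table above.
Y = [84, 89, 92, 96, 98, 102, 113, 114, 122, 127]
X = [2.50, 3.00, 3.25, 2.25, 1.75, 2.75, 2.00, 1.50, 1.25, 1.00]
n = len(X)

# Slope: beta = (n*sum(XY) - sum(X)*sum(Y)) / (n*sum(X^2) - (sum(X))^2)
beta = (n * sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y)) / (
    n * sum(x * x for x in X) - sum(X) ** 2
)

# Intercept: alpha = Y_bar - beta * X_bar
alpha = sum(Y) / n - beta * sum(X) / n

print(f"Y_hat = {alpha:.2f} + ({beta:.2f})*X")  # Y_hat = 137.96 + (-16.12)*X
```

The negative slope matches the scatterplot and the negative r: each dollar added to the cone price is associated with roughly 16 fewer cones sold.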