
Project report

Harikripal

10/07/2018

1 Introduction
The objective of this project is to study delivery time using the data given on the vending machine service routes. The project is a regression-based study focused on predicting the optimum delivery time.

2 Ordinary Least Squares Method


The method of least squares is used to estimate the regression coefficients. Suppose n > k observations are available, and let y_i denote the ith observed response and x_{ij} the ith observation of regressor x_j. Assume that the error term ε in the model has E(ε) = 0 and var(ε) = σ², and that the errors are uncorrelated. Throughout this method we assume that the regressor variables x_1, x_2, …, x_k are fixed, and that the conditional distribution of y given x_1, x_2, …, x_k is normal with mean β_0 + β_1 x_1 + ⋯ + β_k x_k and variance σ². We may write the sample regression model as

y_i = β_0 + β_1 x_{i1} + ⋯ + β_k x_{ik} + ε_i    (1)

    = β_0 + Σ_{j=1}^{k} β_j x_{ij} + ε_i,    i = 1, 2, …, n.    (2)

Then the least-squares function is


S(β_0, β_1, …, β_k) = Σ_{i=1}^{n} ε_i² = Σ_{i=1}^{n} (y_i − β_0 − Σ_{j=1}^{k} β_j x_{ij})².

The function S must be minimised with respect to β_0, β_1, …, β_k. The least-squares estimators β̂_0, β̂_1, …, β̂_k must satisfy

[∂S/∂β_0]_{β̂_0, β̂_1, …, β̂_k} = −2 Σ_{i=1}^{n} (y_i − β̂_0 − Σ_{j=1}^{k} β̂_j x_{ij}) = 0    (3)

and

[∂S/∂β_j]_{β̂_0, β̂_1, …, β̂_k} = −2 Σ_{i=1}^{n} (y_i − β̂_0 − Σ_{j=1}^{k} β̂_j x_{ij}) x_{ij} = 0,    j = 1, 2, …, k.    (4)

Now from the above we get the least-squares normal equations:

n β̂_0   + β̂_1 Σ x_{i1}        + β̂_2 Σ x_{i2}        + ⋯ + β̂_k Σ x_{ik}        = Σ y_i
β̂_0 Σ x_{i1} + β̂_1 Σ x_{i1}² + β̂_2 Σ x_{i1} x_{i2} + ⋯ + β̂_k Σ x_{i1} x_{ik} = Σ x_{i1} y_i
⋮
β̂_0 Σ x_{ik} + β̂_1 Σ x_{ik} x_{i1} + β̂_2 Σ x_{ik} x_{i2} + ⋯ + β̂_k Σ x_{ik}² = Σ x_{ik} y_i    (5)

(all sums run over i = 1, …, n).
The solution of these equations gives the least-squares estimators β̂_0, β̂_1, …, β̂_k.
It is more convenient to deal with multiple regression models when they are expressed in matrix notation, which allows a very compact display of the model, the data, and the results. In matrix notation the model given by equation (1) is

y = Xβ + ε

where

y = [y_1, y_2, …, y_n]′,

     [ 1  x_{11}  x_{12}  ⋯  x_{1k} ]
X =  [ 1  x_{21}  x_{22}  ⋯  x_{2k} ]
     [ ⋮    ⋮       ⋮          ⋮    ]
     [ 1  x_{n1}  x_{n2}  ⋯  x_{nk} ]

β = [β_0, β_1, …, β_k]′,    ε = [ε_1, ε_2, …, ε_n]′.

In general, y is an n × 1 vector of the observations, X is an n × p matrix of the levels of the regressor variables, β is a p × 1 vector of the regression coefficients, and ε is an n × 1 vector of random errors.
We wish to find the vector of least-squares estimators β̂ that minimizes

S(β) = Σ_{i=1}^{n} ε_i² = ε′ε = (y − Xβ)′(y − Xβ).

We can expand S(β) as

S(β) = y′y − β′X′y − y′Xβ + β′X′Xβ = y′y − 2β′X′y + β′X′Xβ,

since β′X′y is a 1 × 1 matrix (a scalar) and its transpose (β′X′y)′ = y′Xβ is the same scalar. The least-squares estimator must satisfy

[∂S/∂β]_{β̂} = −2X′y + 2X′Xβ̂ = 0,

which gives

X′Xβ̂ = X′y.    (6)

Equations (6) are the least-squares normal equations. To solve them, multiply both sides of (6) by (X′X)⁻¹; the OLS estimator of β is

β̂ = (X′X)⁻¹X′y,    (7)

provided that the inverse matrix (X′X)⁻¹ exists. (X′X)⁻¹ will always exist if the regressors are linearly independent, that is, if no column of the X matrix is a linear combination of the other columns.
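As an aside (not part of the original report), equation (7) translates directly into a few lines of NumPy. The sketch below uses synthetic data of my own invention, purely for illustration; the variable names are assumptions, not taken from the report.

```python
import numpy as np

# Synthetic data for illustration only (not the report's delivery data).
rng = np.random.default_rng(0)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=(n, k))])  # n x p, p = k + 1
true_beta = np.array([2.0, 1.5, 0.3])
y = X @ true_beta + rng.normal(0, 0.1, size=n)

# Textbook form of equation (7): beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.inv(X.T @ X) @ (X.T @ y)

# Numerically it is preferable to solve the normal equations (6) directly
# rather than forming the inverse explicitly.
beta_solved = np.linalg.solve(X.T @ X, X.T @ y)
```

Both lines give the same estimates; `np.linalg.solve` avoids forming (X′X)⁻¹ explicitly and is better conditioned.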
Writing the normal equations (6) out in detail, we obtain:

[ Σ y_i        ]   [ n          Σ x_{i1}         ⋯  Σ x_{ik}        ] [ β_0 ]
[ Σ x_{i1} y_i ] = [ Σ x_{i1}   Σ x_{i1}²        ⋯  Σ x_{i1} x_{ik} ] [ β_1 ]
[      ⋮       ]   [    ⋮           ⋮                    ⋮          ] [  ⋮  ]
[ Σ x_{ik} y_i ]   [ Σ x_{ik}   Σ x_{ik} x_{i1}  ⋯  Σ x_{ik}²       ] [ β_k ]

If the indicated matrix multiplication is performed, the scalar form of the normal equations is obtained. In this display we see that X′X is a p × p symmetric matrix and X′y is a p × 1 column vector. The diagonal elements of X′X are the sums of squares of the elements in the columns of X, and the off-diagonal elements are the sums of cross products of the elements in the columns of X. The elements of X′y are the sums of cross products of the columns of X and the observations y_i.
The fitted regression model corresponding to the levels of the regressor variables x′ = [1, x_1, x_2, …, x_k] is

ŷ = x′β̂ = β̂_0 + Σ_{j=1}^{k} β̂_j x_j.

The vector of fitted values ŷ_i corresponding to the observed values y_i is

ŷ = Xβ̂ = X(X′X)⁻¹X′y = Hy.

The matrix H = X(X′X)⁻¹X′ is usually called the hat matrix. It maps the vector of observed values into the vector of fitted values, and the hat matrix and its properties play a central role in regression analysis.
The difference between the observed value y_i and the corresponding fitted value ŷ_i is the residual e_i = y_i − ŷ_i. The n residuals may be conveniently written in matrix notation as

e = y − ŷ.
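To make the hat-matrix properties concrete, here is a small NumPy check (my own illustration on synthetic data, not from the report): H is symmetric and idempotent, its trace equals p, and the residual vector is orthogonal to every column of X.

```python
import numpy as np

# Small synthetic design matrix for illustration (p = 3 columns).
rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.uniform(0, 5, size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix H = X(X'X)^{-1}X'
y_hat = H @ y                         # fitted values: y_hat = Hy
e = y - y_hat                         # residuals: e = y - y_hat

# H is symmetric (H = H') and idempotent (HH = H); trace(H) = p,
# and the residuals are orthogonal to the columns of X (X'e = 0).
```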
PROBLEM
The delivery time data:

Now we are going to fit the multiple linear regression model

y = β_0 + β_1 x_1 + β_2 x_2 + ε

to the delivery time data in the following table:

Observation    Delivery time y    Number of     Distance x_2    ŷ_i        e_i = y_i − ŷ_i
Number         (minutes)          cases x_1     (feet)
 1             16.68               7              560           21.7081    −5.0281
 2             11.50               3              220           10.3536     1.1464
 3             12.03               3              340           12.0798    −0.0498
 4             14.88               4               80            9.9556     4.9244
 5             13.75               6              150           14.1944    −0.4444
 6             18.11               7              330           18.3996    −0.2896
 7              8.00               2              110            7.1554     0.8446
 8             17.83               7              210           16.6734     1.1566
 9             79.24              30             1460           71.8203     7.4197
10             21.50               5              605           19.1236     2.3764
11             40.33              16              688           38.0925     2.2375
12             21.00              10              215           21.5930    −0.5930
13             13.50               4              255           12.4730     1.0270
14             19.75               6              462           18.6825     1.0675
15             24.00               9              448           23.3288     0.6712
16             29.00              10              776           29.6629    −0.6629
17             15.35               6              200           14.9136     0.4364
18             19.00               7              132           15.5514     3.4486
19              9.50               3               36            7.7068     1.7932
20             35.10              17              770           40.8880    −5.7880
21             17.90              10              140           20.5142    −2.6142
22             52.32              26              810           56.0065    −3.6865
23             18.75               9              450           23.3576    −4.6076
24             19.83               8              635           24.4028    −4.5728
25             10.75               4              150           10.9626    −0.2126
To fit the multiple regression model we first form the X matrix and y vector:

      [ 1    1    1   . . . .  1    1   ]
X′ =  [ 7    3    3   . . . .  8    4   ]
      [ 560  220  340 . . . .  635  150 ]

y′ = [ 16.68  11.50  12.03  . . . .  19.83  10.75 ]

The X′X matrix is

       [ 1    1    1   . . . .  1    1   ] [ 1   7   560 ]   [ 25      219      10232   ]
X′X =  [ 7    3    3   . . . .  8    4   ] [ 1   3   220 ] = [ 219     3055     133899  ]
       [ 560  220  340 . . . .  635  150 ] [ .   .    .  ]   [ 10232   133899   6725688 ]
                                           [ 1   4   150 ]
and the X′y vector is

       [ 1    1    1   . . . .  1    1   ] [ 16.68 ]   [ 559.60    ]
X′y =  [ 7    3    3   . . . .  8    4   ] [ 11.50 ] = [ 7375.44   ]
       [ 560  220  340 . . . .  635  150 ] [   .   ]   [ 337072.00 ]
                                           [ 10.75 ]

The least-squares estimator of β is β̂ = (X′X)⁻¹X′y:

[ β̂_0 ]   [ 25      219      10232   ]⁻¹ [ 559.60    ]
[ β̂_1 ] = [ 219     3055     133899  ]   [ 7375.44   ]
[ β̂_2 ]   [ 10232   133899   6725688 ]   [ 337072.00 ]

          [  0.11321518  −0.00444859  −0.00008367 ] [ 559.60    ]
        = [ −0.00444859   0.00274378  −0.00004786 ] [ 7375.44   ]
          [ −0.00008367  −0.00004786   0.00000123 ] [ 337072.00 ]

          [ 2.34123115 ]
        = [ 1.61590712 ]
          [ 0.01438483 ]

The least-squares fit (with the regression coefficients reported to five decimals) is

ŷ = 2.34123 + 1.61591 x_1 + 0.01438 x_2
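This fit can be reproduced with NumPy; the sketch below uses the delivery time data from the table, with `np.linalg.lstsq` in place of the explicit inverse (a more numerically stable equivalent of equation (7)).

```python
import numpy as np

# Delivery time data from the table above.
y = np.array([16.68, 11.50, 12.03, 14.88, 13.75, 18.11, 8.00, 17.83, 79.24,
              21.50, 40.33, 21.00, 13.50, 19.75, 24.00, 29.00, 15.35, 19.00,
              9.50, 35.10, 17.90, 52.32, 18.75, 19.83, 10.75])
x1 = np.array([7, 3, 3, 4, 6, 7, 2, 7, 30, 5, 16, 10, 4, 6, 9, 10, 6, 7, 3,
               17, 10, 26, 9, 8, 4])                      # number of cases
x2 = np.array([560, 220, 340, 80, 150, 330, 110, 210, 1460, 605, 688, 215,
               255, 462, 448, 776, 200, 132, 36, 770, 140, 810, 450, 635,
               150])                                      # distance (feet)

X = np.column_stack([np.ones(len(y)), x1, x2])

# Least-squares fit; equivalent to beta_hat = (X'X)^{-1} X'y.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta_hat is approximately (2.34123, 1.61591, 0.01438), matching the report.
```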

3 ANALYSIS OF VARIANCE

Source     DF    Sum of Squares    Mean Square    F Value    Prob > F
Model       2        5550.81092     2775.40546    261.235      0.0001
Error      22         233.73168       10.62417
C Total    24        5784.54260

Root MSE   3.259473      R-square    0.9596
Dep Mean   22.384        Adj R-sq    0.9559
C.V.       14.56162
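The ANOVA quantities can be recomputed from the data; the following NumPy sketch (my own check, reusing the delivery time data from the table) reproduces the sums of squares, the F value, and R-square.

```python
import numpy as np

# Delivery time data (same as the table above).
y = np.array([16.68, 11.50, 12.03, 14.88, 13.75, 18.11, 8.00, 17.83, 79.24,
              21.50, 40.33, 21.00, 13.50, 19.75, 24.00, 29.00, 15.35, 19.00,
              9.50, 35.10, 17.90, 52.32, 18.75, 19.83, 10.75])
x1 = np.array([7, 3, 3, 4, 6, 7, 2, 7, 30, 5, 16, 10, 4, 6, 9, 10, 6, 7, 3,
               17, 10, 26, 9, 8, 4])
x2 = np.array([560, 220, 340, 80, 150, 330, 110, 210, 1460, 605, 688, 215,
               255, 462, 448, 776, 200, 132, 36, 770, 140, 810, 450, 635, 150])
X = np.column_stack([np.ones(len(y)), x1, x2])

n, p = X.shape
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat

sse = e @ e                        # error (residual) sum of squares
sst = ((y - y.mean()) ** 2).sum()  # corrected total sum of squares
ssr = sst - sse                    # model (regression) sum of squares

msr = ssr / (p - 1)                # model mean square, df = 2
mse = sse / (n - p)                # error mean square, df = 22
f_value = msr / mse
r_square = ssr / sst
adj_r_square = 1 - (sse / (n - p)) / (sst / (n - 1))
root_mse = np.sqrt(mse)
```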

4 PARAMETER ESTIMATES

                 Parameter     Standard      T for H0:       Prob     Variance
Variable   DF    Estimate      Error         Parameter = 0   > |T|    Inflation
INTERCEP    1    2.34123115    1.09673017    2.135           0.0442   0
CASES       1    1.61590712    0.17073492    9.464           0.0001   3.11841
DISTANCE    1    0.01438483    0.00361309    3.981           0.0006   3.11841
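The standard errors, t statistics, and variance inflation factors can likewise be recomputed; a NumPy sketch (my own check on the same delivery time data). The standard error of β̂_j is √(MSE · C_jj), where C = (X′X)⁻¹, and the VIFs are the diagonal of the inverse correlation matrix of the regressors.

```python
import numpy as np

# Delivery time data (same as the table above).
y = np.array([16.68, 11.50, 12.03, 14.88, 13.75, 18.11, 8.00, 17.83, 79.24,
              21.50, 40.33, 21.00, 13.50, 19.75, 24.00, 29.00, 15.35, 19.00,
              9.50, 35.10, 17.90, 52.32, 18.75, 19.83, 10.75])
x1 = np.array([7, 3, 3, 4, 6, 7, 2, 7, 30, 5, 16, 10, 4, 6, 9, 10, 6, 7, 3,
               17, 10, 26, 9, 8, 4])
x2 = np.array([560, 220, 340, 80, 150, 330, 110, 210, 1460, 605, 688, 215,
               255, 462, 448, 776, 200, 132, 36, 770, 140, 810, 450, 635, 150])
X = np.column_stack([np.ones(len(y)), x1, x2])

n, p = X.shape
C = np.linalg.inv(X.T @ X)              # (X'X)^{-1}
beta_hat = C @ X.T @ y
e = y - X @ beta_hat
mse = (e @ e) / (n - p)                 # error mean square

se = np.sqrt(mse * np.diag(C))          # standard errors of beta_hat
t = beta_hat / se                       # T for H0: parameter = 0

# Variance inflation factors: diagonal of the inverse correlation
# matrix of the regressors (x1, x2).
R = np.corrcoef(np.column_stack([x1, x2]), rowvar=False)
vif = np.diag(np.linalg.inv(R))
```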

5 RESULT
There is a positive relationship between delivery time and both the number of cases and the distance: the estimated coefficients of x_1 and x_2 are positive and statistically significant.

5.1 Font types

He likes Fonts and fonts


Fonts

5.1.1 Spaces
paragraph space for me

6 Math Mode
6.1 Active Character

\ ˆ ˜

6.2 Basic Math


If f(x) = 5x + 15 = 0, then x = ?
Example: Given that f(x) = 5x + 15 = 0, to find x we solve 5x = −15, i.e. x = −15/5. Solving this gives x = −3. Thus, if f(x) = 5x + 15 = 0, then x = −3.

6.3 Subscript and Superscript
f(x) = a_3 x³ + a_2 x² + a_1 x + a_0 = 0

7 Greek Letters
surface area of a sphere: 4πr²
Alpha-α
Beta-β
Gamma-γ
Delta- δ  η θ κ λ µ ν ρ σ φ τ ω

± · × ∗ ◦ ÷ ≤ ≥ ≡ ∼ ⊥ ≮ ≉ →

∑ ∏ ∮ Γ ℏ ∂ ∞ ∇ △ ♣ ♥ ♠ ♦

8 Standard Functions
sin
arcsin
exp
ln
lim
log

9 Fractions
numerator
denominator

10 nth root

⁵√(xyz)

11 Ellipsis
f′(x_1, …, x_n) = 1 + 2 + 2 + · · · + 5

12 Accents
~r


13 Homework
13.1 Example 1
The roots of the quadratic equation αx² + βx + γ = 0 are

x = (−β ± √(β² − 4αγ)) / (2α)

13.2 Example 2
Displayed:

lim_{x→0} (tan x)/x = 1

Inline: lim_{x→0} (tan x)/x = 1.

13.3 Example 3
" #

R
b

1
Rb Rb
∂a f (x)dx = lim f (x)dx − f (x)dx
a ∆a→0 ∆a a+∆a
a

13.4 Example 4
The cross product of two vectors,
~a = a_1 î + a_2 ĵ + a_3 k̂ and
~b = b_1 î + b_2 ĵ + b_3 k̂, can be written as the determinant

           | î    ĵ    k̂   |
~a × ~b =  | a_1  a_2  a_3 |
           | b_1  b_2  b_3 |

14 Equations
s = ut + (1/2)at²    (3)

a = b (4)
= y (5)

a = c
= z

a=b+c
