
Answers to Review Exercises

Exercise 1
B is true.

Exercise 2
Denote the marginal distribution of X by g(x). The marginal distribution of X is:
g(0) = f(0,-1) + f(0,0) + f(0,1) = .13 + .01 + .04 = .18
g(1) = f(1,-1) + f(1,0) + f(1,1) = .12 + .30 + .08 = .50
g(2) = f(2,-1) + f(2,0) + f(2,1) = .14 + .06 + .12 = .32
Hence B is the correct answer.

Exercise 3
Denote the joint distribution of X and Y by f(x,y), the marginal distribution of X by fX(x), and the marginal distribution of Y by fY(y). By definition, X and Y are independent if and only if f(x,y) = fX(x)fY(y) for all x,y. We calculate the marginal distribution of Y to be fY(-1) = .39, fY(0) = .37 and fY(1) = .24. From this we see that, for example, fX(0)fY(-1) = (.18)(.39) = .0702 ≠ .13 = f(0,-1). Thus we have found an (x,y) such that f(x,y) ≠ fX(x)fY(y); hence X and Y are not independent. (A numerical check of Exercises 2, 3 and 5 is sketched after Exercise 8.)

Exercise 4
A is true.

Exercise 5
1. The distribution of X1 - X2 is normal with mean E[X1 - X2] = E[X1] - E[X2] = 12 - 10 = 2 and with variance (since X1 and X2 are independent) V[X1 - X2] = (1)²V[X1] + (-1)²V[X2] = 9 + 16 = 25.
2. P(X1 - X2 < 4) = Φ((4 - 2)/√25) = Φ(2/5) = Φ(0.4).
P(X1 > X2) = P(X1 - X2 > 0) = 1 - P(X1 - X2 < 0) = 1 - Φ((0 - 2)/√25) = 1 - Φ(-0.40) = Φ(0.40) ≈ 0.66.

Exercise 6
E is the correct answer.

Exercise 7
E[2X + 3Y] = 2E[X] + 3E[Y] = 2(14) + 3(5) = 43.
V[2X - Y] = (2)²V[X] + (-1)²V[Y] + 2(2)(-1)Cov(X,Y) = 4(16) + 100 - 4(-20) = 244.

Exercise 8
D is the correct answer.
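The calculations in Exercises 2, 3 and 5 can be reproduced numerically. The sketch below is not part of the original solutions; it simply re-derives the marginals, the independence check, and the normal probability from the joint table used above.

```python
import numpy as np
from scipy.stats import norm

# Joint probabilities f(x, y): rows are x = 0, 1, 2; columns are y = -1, 0, 1.
f = np.array([[0.13, 0.01, 0.04],
              [0.12, 0.30, 0.08],
              [0.14, 0.06, 0.12]])

g = f.sum(axis=1)   # marginal of X (Exercise 2): [0.18, 0.50, 0.32]
h = f.sum(axis=0)   # marginal of Y (Exercise 3): [0.39, 0.37, 0.24]

# Independence would require f(x, y) = g(x) h(y) in every cell.
print(np.allclose(f, np.outer(g, h)))   # False -> X and Y are not independent
print(g[0] * h[0], f[0, 0])             # 0.0702 vs 0.13

# Exercise 5: X1 - X2 ~ N(2, 25), so P(X1 > X2) = Phi(0.40).
print(norm.cdf(0.40))                   # about 0.6554, i.e. 0.66
```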

Exercise 9
1. E[Y] = E[2X1 + X2] = 2E[X1] + E[X2] = 2(5) + 10 = 20 and E[Z] = 2(10) + 15 = 35.
V[Y] = (2)²V[X1] + V[X2] + 2(2)(1)Cov(X1,X2) = 4(9) + 16 + 0 = 52.
V[Z] = (2)²V[X2] + V[X3] + 2(2)(1)Cov(X2,X3) = 4(16) + 25 + 4(2) = 97.
2. Cov(Y,Z) = Cov(2X1 + X2, 2X2 + X3)
= E[(2X1 + X2)(2X2 + X3)] - E[2X1 + X2]E[2X2 + X3]
= E[4X1X2 + 2X1X3 + 2X2² + X2X3] - {E[2X1]E[2X2] + E[2X1]E[X3] + E[X2]E[2X2] + E[X2]E[X3]}
= 4{E[X1X2] - E[X1]E[X2]} + 2{E[X1X3] - E[X1]E[X3]} + 2{E[X2²] - E[X2]E[X2]} + {E[X2X3] - E[X2]E[X3]}
= 4Cov(X1,X2) + 2Cov(X1,X3) + 2V[X2] + Cov(X2,X3) = 0 + 0 + 2(16) + 2 = 34.
3. The correlation coefficient:
Corr(Y,Z) = Cov(Y,Z)/√(V[Y]V[Z]) = 34/√((52)(97)) = .4787.
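A quick Monte Carlo check of Exercise 9. The moments (means 5, 10, 15; variances 9, 16, 25; Cov(X2,X3) = 2 and the other covariances zero) are read off from the arithmetic above; treating the Xi as jointly normal is an extra assumption made purely for the simulation, since only the moments matter here.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = [5, 10, 15]
cov = [[9,  0,  0],     # covariance matrix implied by the solution above
       [0, 16,  2],
       [0,  2, 25]]

x = rng.multivariate_normal(mean, cov, size=1_000_000)
y = 2 * x[:, 0] + x[:, 1]   # Y = 2*X1 + X2
z = 2 * x[:, 1] + x[:, 2]   # Z = 2*X2 + X3

print(y.mean(), z.mean())        # about 20 and 35
print(y.var(), z.var())          # about 52 and 97
print(np.cov(y, z)[0, 1])        # about 34
print(np.corrcoef(y, z)[0, 1])   # about 0.4787
```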

Exercise 10
1. E[Ȳ] = E[(1/4)(Y1 + Y2 + Y3 + Y4)] = (1/4)(E[Y1] + E[Y2] + E[Y3] + E[Y4]) = (1/4)(4μ) = μ.
V[Ȳ] = V[(1/4)(Y1 + Y2 + Y3 + Y4)] = (1/16)V[Y1 + Y2 + Y3 + Y4] = (1/16)(V[Y1] + V[Y2] + V[Y3] + V[Y4]) = (1/16)(4σ²) = σ²/4, since the Yi are independent.
2. E[W] = E[(1/8)Y1 + (1/8)Y2 + (1/4)Y3 + (1/2)Y4] = (1/8)E[Y1] + (1/8)E[Y2] + (1/4)E[Y3] + (1/2)E[Y4] = (1/8 + 1/8 + 1/4 + 1/2)μ = μ, hence W is an unbiased estimator of μ.
V[W] = (1/64)V[Y1] + (1/64)V[Y2] + (1/16)V[Y3] + (1/4)V[Y4] = ((1 + 1 + 4 + 16)/64)σ² = (22/64)σ².
3. Since V[W] = (22/64)σ² > (16/64)σ² = V[Ȳ], Ȳ has the smaller variance, hence we prefer Ȳ.
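The comparison in Exercise 10 is easy to confirm by simulation. The values μ = 3 and σ = 2 below are hypothetical (the exercise leaves them general); any choice gives the same ratio of variances.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0    # hypothetical values; the exercise leaves mu and sigma general
y = rng.normal(mu, sigma, size=(1_000_000, 4))

ybar = y.mean(axis=1)                       # the sample average
w = y @ np.array([1/8, 1/8, 1/4, 1/2])      # the weighted estimator W

print(ybar.mean(), w.mean())   # both about mu = 3: unbiased
print(ybar.var())              # about sigma**2 / 4       = 1.0
print(w.var())                 # about (22/64) * sigma**2 = 1.375
```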
Exercise 11 (Estimators; this is part of Exercise C.2 in Wooldridge)
Let Y1, Y2, Y3, ..., Yn be n pairwise independent, identically distributed random variables with common mean μ and common variance σ². Let Ȳ denote the sample average.
1. Define the class of linear estimators of μ by Wa = a1Y1 + a2Y2 + ... + anYn, where the ai are constants. What restriction on the ai is needed for Wa to be an unbiased estimator of μ?
2. Find V(Wa).
3. Suppose that the restriction from part 1 is satisfied. Find the condition that needs to hold for Ȳ to be the more efficient of the two estimators Ȳ and Wa (that is, for Ȳ to have the smaller variance).

Exercise 11
1. Wa is an unbiased estimator of μ if E[Wa] = μ. So we find E[Wa]:
E[Wa] = E[a1Y1 + ... + anYn] = a1E[Y1] + ... + anE[Yn] = (a1 + ... + an)μ,
hence the restriction required for unbiasedness is that a1 + ... + an = 1.
2. V[Wa] = a1²V[Y1] + ... + an²V[Yn] = (a1² + ... + an²)σ².
3. In order for Ȳ to have the smallest variance of Ȳ and Wa, the following must hold:
V[Ȳ] ≤ V[Wa], i.e. (1/n)σ² ≤ (a1² + ... + an²)σ², i.e. 1/n ≤ a1² + ... + an².
Further to this part of the exercise (this was not asked in the exercise), you can see in Wooldridge Exercise C.2 (iii) that (1/n)(a1 + ... + an)² ≤ a1² + ... + an² always holds. Since we have assumed in this question that the condition for Wa to be unbiased holds, we have a1 + ... + an = 1, so the inequality from Wooldridge becomes 1/n ≤ a1² + ... + an², which is exactly the condition we found for Ȳ to have smaller variance than Wa. Hence, what you have shown is that among all weighted averages that produce an unbiased estimator of μ, the usual sample average Ȳ is the most efficient. (A numerical illustration follows below.)
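A minimal numerical illustration of part 3, using hypothetical random weights: for any weights that sum to 1, the sum of squares is at least 1/n, with equality exactly at the equal weights ai = 1/n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

for _ in range(5):
    a = rng.random(n)
    a /= a.sum()                 # impose the unbiasedness restriction: sum(a) = 1
    print(np.sum(a**2) >= 1/n)   # always True (Wooldridge C.2 (iii))

a_equal = np.full(n, 1/n)        # the sample-average weights
print(np.sum(a_equal**2), 1/n)   # equal: the sample average attains the bound
```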

Exercise 12
1. The 95% confidence interval is x̄ ± t_{n-1, (1-0.95)/2} √(S²/n) = x̄ ± t_{99, 0.025} √(S²/n). Since t_{99, 0.025} ≈ 1.984 (see the table for the t distribution), we get that the confidence interval is
[0.2495 - 1.984 √(0.0009/100), 0.2495 + 1.984 √(0.0009/100)] = [0.2435, 0.2555].
2. Since the width of the confidence interval is determined by 1.984 √(S²/n), it is determined by the estimated variance (S²) and the sample size (n). Hence, one way to make the confidence interval smaller is to increase the sample size.
3. Since we have found the 95% confidence interval, we can test the proposed null hypothesis at a 5% significance level. Since 0.25 is contained in the confidence interval, we cannot reject the null hypothesis that μ = 0.25.
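The interval in part 1 can be reproduced with scipy; a minimal sketch using the numbers from the exercise (x̄ = 0.2495, S² = 0.0009, n = 100):

```python
import numpy as np
from scipy.stats import t

xbar, s2, n = 0.2495, 0.0009, 100
tcrit = t.ppf(0.975, df=n - 1)   # upper 2.5% point of t(99), about 1.984
half = tcrit * np.sqrt(s2 / n)   # half-width of the interval

print(tcrit)                     # 1.9842...
print(xbar - half, xbar + half)  # about 0.2435 and 0.2555
```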

Exercise 13
1. H0: μ = 0.
2. H1: μ > 0.
3. t = x̄ / √(S²/n) = 32.8 / √(466.6/900) = 45.5536.
The critical value in N(0,1) at the 5% level for a one-sided test is approximately 1.645, so since 45.5536 > 1.645 we strongly reject the null at the 5% level. The critical value at the 1% level for a one-sided test is approximately 2.33, so we also strongly reject at the 1% level.
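A quick check of part 3 (the standard normal critical values are the usual one-sided ones):

```python
import numpy as np
from scipy.stats import norm

xbar, s2, n = 32.8, 466.6, 900
t_stat = xbar / np.sqrt(s2 / n)

print(t_stat)           # about 45.55
print(norm.ppf(0.95))   # one-sided 5% critical value, about 1.645
print(norm.ppf(0.99))   # one-sided 1% critical value, about 2.326
```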
