
EXP. NO: 01                                          DESCRIPTIVE STATISTICS
Date: 10.02.2016

EXERCISE NO: 01

AIM:
To obtain a frequency table and bar chart by using SPSS.

ALGORITHM:
STEP 1: Select the analyse menu.

STEP 2: Click on descriptive statistics and then on frequencies to open the frequencies
dialogue box.

STEP 3: Select the variable(s) you require and click on the arrow button to move the variable(s) into the variable(s) box.

STEP 4: Click on the charts command push button to open the frequencies: charts sub-
dialog box.

STEP 5: Click on continue and then ok.

QUESTION:
Calculate the frequency distribution. Create a data file with the following variables.

Label for the variables

Age: 1 (< 20), 2 (20-25), 3 (25-30), 4 (30-40), 5 (>40).

Gender: 1 (Male), 2(Female).

Education: 1 (High school), 2 (Graduate in Arts and Science degree), 3 (Graduate in professional degree), 4 (Post-graduate degree).

Working experience (years): 1 (<1 year), 2 (1-5 years), 3 (5-10 years), 4 (10-20 years), 5 (>20 years).

Enter your own data set (a minimum of 25 cases) in the Data View of SPSS, then calculate the
frequency distribution. Graphically represent the variables in the form of a bar chart.
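For a quick cross-check outside SPSS, the same frequency table and bar chart can be produced with a short script. The sketch below is only an illustration (not part of the SPSS procedure); it assumes the pandas and matplotlib libraries are available, and the coded values shown are hypothetical placeholders for your own 25 cases.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical age codes; replace with the 25 values entered in SPSS Data View.
age = pd.Series([1, 2, 3, 4, 5, 1, 2, 3, 4, 1], name="age")

freq = age.value_counts().sort_index()              # frequency per category
table = pd.DataFrame({"Frequency": freq,
                      "Percent": 100 * freq / freq.sum(),
                      "Cumulative Percent": (100 * freq / freq.sum()).cumsum()})
print(table)

freq.plot(kind="bar")                               # bar chart of the frequencies
plt.show()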

OUTPUT:

Frequencies
Statistics
age gender education wrkexp
Valid 25 25 25 25
N
Missing 0 0 0 0
Frequency Table
Age
Frequency Percent Valid Percent Cumulative
Percent
<20 5 20.0 20.0 20.0
20-25 5 20.0 20.0 40.0
25-30 5 20.0 20.0 60.0
Valid 30-40 6 24.0 24.0 84.0
>40 4 16.0 16.0 100.0
Total 25 100.0 100.0

Gender
Frequency Percent Valid Percent Cumulative
Percent
Male 13 52.0 52.0 52.0
Valid Female 12 48.0 48.0 100.0
Total 25 100.0 100.0

Education
Frequency Percent Valid Percent Cumulative
Percent
        high school                             6     24.0     24.0     24.0
        graduate in arts and science degree     8     32.0     32.0     56.0
Valid   graduate in professional degree         6     24.0     24.0     80.0
        post graduate degree                    5     20.0     20.0    100.0
        Total                                  25    100.0    100.0

wrkexp
Frequency Percent Valid Percent Cumulative
Percent
<1 year 5 20.0 20.0 20.0
1-5 years 9 36.0 36.0 56.0
Valid 5-10 years 3 12.0 12.0 68.0
10-20 years 3 12.0 12.0 80.0
>20 years 5 20.0 20.0 100.0
Total 25 100.0 100.0
Bar Chart:

RESULT:
Thus, the frequency table and bar chart by using SPSS was obtained.

EXERCISE NO: 02

AIM:
To obtain a frequency table, measure of central tendency and bar chart by using SPSS.

ALGORITHM:
STEP 1: Select the analyse menu.

STEP 2: Click on descriptive statistics and then on frequencies to open the frequencies
dialogue box.

STEP 3: Select the variable(s) you require and click on the arrow button to move the variable(s) into the variable(s) box.

STEP 4: Click the statistics command push button to open the frequencies: statistics sub-
dialog box.

STEP 5: In the central tendency box, select the mean, median and mode check boxes.

STEP 6: In the dispersion box, select the standard deviation, variance, range, minimum and
maximum check boxes.

STEP 7: Click on continue.

STEP 8: Click on the charts command push button to open the frequencies: charts sub-
dialog box.

STEP 9: Click on continue and then ok.
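The same measures of central tendency and dispersion that the steps above request from SPSS can be checked with a short script; a minimal Python sketch, assuming pandas is installed and using hypothetical coded values in place of your own data:

import pandas as pd

x = pd.Series([1, 2, 3, 4, 5, 2, 2, 3, 4, 1])       # hypothetical coded responses
print("Mean:", x.mean(), " Median:", x.median(), " Mode:", x.mode().iloc[0])
print("Std. deviation:", x.std(), " Variance:", x.var())
print("Range:", x.max() - x.min(), " Min:", x.min(), " Max:", x.max())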

QUESTION:
Calculate the measures of central Tendency. Create a data file with the following variables.

Label for the variables


Age: 1 (<20), 2 (20-25), 3 (25-30), 4 (30-40), 5 (>40).

Gender: 1 (Male), 2 (Female).

Education: 1 (High school), 2 (Graduate in Arts and Science degree), 3 (Graduate in professional degree), 4 (Post-graduate degree).

Working experience (years): 1 (<1 year), 2 (1-5 years), 3 (5-10 years), 4 (10-20 years), 5 (>20 years).

Enter your own data set (a minimum of 25 cases) in the Data View of SPSS, then calculate the
frequency distribution. Graphically represent the variables in the form of a bar chart.

OUTPUT:

Frequencies:
Statistics
age gender education wrkexp
Valid 25 25 25 25
N
Missing 0 0 0 0
Mean 2.96 1.48 2.40 2.76
Median 3.00 1.00 2.00 2.00
Mode 4 1 2 2
Std. Deviation 1.399 .510 1.080 1.451
Range 4 1 3 4
Minimum 1 1 1 1
Maximum 5 2 4 5

Frequency Table:

Age
                Frequency   Percent   Valid Percent   Cumulative Percent
Valid   <20             5      20.0            20.0                 20.0
        20-25           5      20.0            20.0                 40.0
        25-30           5      20.0            20.0                 60.0
        30-40           6      24.0            24.0                 84.0
        >40             4      16.0            16.0                100.0
        Total          25     100.0           100.0

Gender
                Frequency   Percent   Valid Percent   Cumulative Percent
Valid   Male           13      52.0            52.0                 52.0
        Female         12      48.0            48.0                100.0
        Total          25     100.0           100.0

Education
                                             Frequency   Percent   Valid Percent   Cumulative Percent
Valid   high school                                  6      24.0            24.0                 24.0
        graduate in arts and science degree          8      32.0            32.0                 56.0
        graduate in professional degree              6      24.0            24.0                 80.0
        post graduate degree                         5      20.0            20.0                100.0
        Total                                       25     100.0           100.0

wrkexp
                Frequency   Percent   Valid Percent   Cumulative Percent
Valid   <1 year         5      20.0            20.0                 20.0
        1-5 years       9      36.0            36.0                 56.0
        5-10 years      3      12.0            12.0                 68.0
        10-20 years     3      12.0            12.0                 80.0
        >20 years       5      20.0            20.0                100.0
        Total          25     100.0           100.0

Bar Chart:

RESULT:
Thus, the frequency table, measure of central tendency and bar chart by using SPSS
was obtained.

EXERCISE NO: 03

AIM:
To obtain a frequency table, measures of central tendency and measures of variability using
SPSS.

ALGORITHM:
STEP 1: Select the analyse menu.

STEP 2: Click on descriptive statistics and then on frequencies to open the frequencies
dialogue box.

STEP 3: Select the variable(s) you require and click on the arrow button to move the variable(s) into the variable(s) box.

STEP 4: Click the statistics command push button to open the frequencies: statistics sub-
dialog box.

STEP 5: In the percentile values box, select the quartiles check box.

STEP 6: In the central tendency box, select the mean, median and mode check boxes.

STEP 7: In the dispersion box, select the standard deviation, variance, range, minimum and
maximum check boxes.

STEP 8: Click on continue.

STEP 9: Click on the charts command push button to open the frequencies: charts sub-
dialog box.

STEP 10: Click on the histogram(s) radio button. You will notice that you can also obtain a
normal curve overlay so click on the normal curve check box.

QUESTION:
Calculate the frequency distributions and measures of central tendency from the following table.

Label for Gender: 1 (Male), 2 (Female).

Gender            1    1    2    1    2    1    2    1    2    1    1    2    1    2    1    1    2    2    1    2
Height (in cms)   140  146  156  149  154  156  151  148  158  150  151  159  153  148  155  146  150  152  149  156
Weight (in kg)    56   45   68   51   54   53   69   51   70   49   45   68   50   55   61   53   65   64   47   59

Graphically represent the variables in the form of Histogram Chart.

OUTPUT:

Frequencies:
Statistics
Gender of person Height in cms Weight in kgs
Valid 20 20 20
N
Missing 0 0 0
Mean 1.45 151.35 56.65
Median 1.00 151.00 54.50
Mode 1 156 45a
Std. Deviation .510 4.671 8.286
Variance .261 21.818 68.661
Range 1 19 25
Minimum 1 140 45
Maximum 2 159 70
25 1.00 148.25 50.25
Percentiles 50 1.00 151.00 54.50
75 2.00 155.75 64.75
a. Multiple modes exist. The smallest value is shown
Frequency Table:
Gender of person
Frequenc Percent Valid Cumulative
y Percent Percent
male 11 55.0 55.0 55.0
Valid female 9 45.0 45.0 100.0
Total 20 100.0 100.0

Height in cms
                Frequency   Percent   Valid Percent   Cumulative Percent
Valid   140             1       5.0             5.0                  5.0
        146             2      10.0            10.0                 15.0
        148             2      10.0            10.0                 25.0
        149             2      10.0            10.0                 35.0
        150             2      10.0            10.0                 45.0
        151             2      10.0            10.0                 55.0
        152             1       5.0             5.0                 60.0
        153             1       5.0             5.0                 65.0
        154             1       5.0             5.0                 70.0
        155             1       5.0             5.0                 75.0
        156             3      15.0            15.0                 90.0
        158             1       5.0             5.0                 95.0
        159             1       5.0             5.0                100.0
        Total          20     100.0           100.0

Weight in kgs
                Frequency   Percent   Valid Percent   Cumulative Percent
Valid   45              2      10.0            10.0                 10.0
        47              1       5.0             5.0                 15.0
        49              1       5.0             5.0                 20.0
        50              1       5.0             5.0                 25.0
        51              2      10.0            10.0                 35.0
        53              2      10.0            10.0                 45.0
        54              1       5.0             5.0                 50.0
        55              1       5.0             5.0                 55.0
        56              1       5.0             5.0                 60.0
        59              1       5.0             5.0                 65.0
        61              1       5.0             5.0                 70.0
        64              1       5.0             5.0                 75.0
        65              1       5.0             5.0                 80.0
        68              2      10.0            10.0                 90.0
        69              1       5.0             5.0                 95.0
        70              1       5.0             5.0                100.0
        Total          20     100.0           100.0

Bar Chart:
RESULT:
Thus, a frequency table, measures of central tendency and measures of variability using
SPSS were obtained.

EXP. NO: 02                                          HYPOTHESIS - PARAMETRIC
Date: 17.02.2016

EXERCISE NO: 01

(ONE SAMPLE T TEST)

AIM:
To analyze the given problem using one sample T test by SPSS.

ALGORITHM:

STEP 1: Select the analyse menu.

STEP 2: Click on compare means and then one-sample T test to open the One-Sample T Test dialog
box.

STEP 3: Select the variable you require and click on the 1> button to move the variables into
the test variables(s):box.

STEP 4: In the test value: box type the mean score.

STEP 5: Click on options enter the confidence interval continue.

STEP 6: Click on ok.

QUESTION:
The life time of tube for a random sample of 20 provides following figures:

Item 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20

Life 5 5.5 6 5.4 2 3.4 6.5 7 5.4 5.9 6 5.8 5.7 9 6 7 8.7 5.9 5 4.9
(in
years)

Null hypothesis: Average Life of tube is 5 years.


Analyze the above problem through T test.
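As a cross-check of the SPSS result, the same one-sample test can be run in Python. This is a minimal sketch (not the SPSS procedure itself), assuming the scipy library is available and using the 20 lifetimes listed in the question:

from scipy import stats

# Lifetimes (in years) from the question; hypothesised mean lifetime = 5 years.
life = [5, 5.5, 6, 5.4, 2, 3.4, 6.5, 7, 5.4, 5.9,
        6, 5.8, 5.7, 9, 6, 7, 8.7, 5.9, 5, 4.9]

t, p = stats.ttest_1samp(life, popmean=5)
print(f"t = {t:.3f}, two-tailed p = {p:.3f}")   # reject H0 if p < 0.05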

OUTPUT:

T-Test

One-Sample Statistics
N Mean Std. Deviation Std. Error Mean
Life in years 20 5.785 1.5517 .3470

One-Sample Test
Test Value = 5
t df Sig. (2-tailed) Mean Difference 95% Confidence Interval of the
Difference
Lower Upper
Life in years 2.262 19 .036 .7850 .059 1.511

H0: There is no significant difference between average lifetime and actual lifetime of tubes.

H1: There is significant difference between average lifetime and actual lifetime of tubes.

CONCLUSION:
The p value is 0.036 < 0.05, so H0 is rejected and H1 is accepted.

Hence, there is a significant difference between the average lifetime and actual lifetime.

RESULT:
Thus, the given problem using one sample T-test by SPSS was analyzed.

EXERCISE NO: 02

(INDEPENDENT SAMPLE T-TEST)

AIM:
To analyze the given problem using independent sample T-test by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on compare means and then independent-samples T test to open the Independent-Samples
T Test dialog box.

STEP 3: Select the variable you require and click on the arrow button to move the variables into
the test variable(s) box.

STEP 4: Select the variable you require and click on the arrow button to move the variable into
the grouping variable box.

STEP 5: Click on options, enter the confidence interval and click continue.

STEP 6: Click on ok.

QUESTION:

The heights of males and females were compared.

Research question: Does height differ significantly between males and females?
Analyze the problem using an independent-samples t test.
Label: Male: 1, Female: 2

Gender            1    1    2    1    2    1    2    1    2    1    1    2    1    2    1    1    2    2    1    2
Height (in cms)   140  146  156  149  154  156  151  148  158  150  151  159  153  148  155  146  150  152  149  156
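A comparable check of the independent-samples test can be run in Python; a minimal sketch assuming scipy is installed, with the heights split according to the gender codes in the question (1 = male, 2 = female):

from scipy import stats

male   = [140, 146, 149, 156, 148, 150, 151, 153, 155, 146, 149]
female = [156, 154, 151, 158, 159, 148, 150, 152, 156]

print(stats.levene(male, female))                     # Levene's test for equality of variances
print(stats.ttest_ind(male, female, equal_var=True))  # independent-samples t test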

OUTPUT:
T-Test

Group Statistics
Gender of person N Mean Std. Deviation Std. Error Mean
male 11 149.36 4.523 1.364
Height in cms
female 9 153.78 3.768 1.256

Independent Samples Test


Levene's Test for t-test for Equality of Means
Equality of
Variances
F Sig. t df Sig. Mean Std. Error 95% Confidence Interval of the
(2- Differenc Differenc Difference
taile e e Lower Upper
d)
Equal
variances .024 .880 -2.336 18 .031 -4.414 1.889 -8.384 -.444
Height assumed
in cms Equal
variances not -2.381 17.985 .029 -4.414 1.854 -8.309 -.519
assumed

H0: There is no significant difference between the height of the male and female.

H1: There is significant difference between the height of the male and female.

CONCLUSION:
The Levene's test significance value is 0.880 > 0.05, so equal variances are assumed. The
significance value of the t test is 0.031 < 0.05, so H0 is rejected. Hence, there is a significant
difference between the heights of males and females.

RESULT:
Thus, the given problem using independent sample T-test by SPSS was obtained.

EXERCISE NO: 03

(PAIRED SAMPLE T-TEST)

AIM:
To analyze the given problem using paired sample T-test by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on compare means and then paired-samples T test to open the Paired-Samples T Test
dialog box.

STEP 3: Select the variables you require and click on the arrow button to move the variables into
the test variable(s) box.

STEP 4: Click on options, enter the confidence interval and click continue.

STEP 5: Click on ok

QUESTION:
A researcher wants to compare the pretest scores and posttest scores of 20 students
who have undergone training in an institution.

Research question: Does the training have any impact on the scores of the students? Analyze.
Pretest 53 54 57 68 66 74 63 71 74 75 59 71 69 55 52 61 76 81 61 69

Posttest 88 96 98 103 108 122 112 120 123 124 97 115 111 85 79 94 93 96 87 77
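The paired comparison can be cross-checked outside SPSS as well; a minimal Python sketch assuming scipy is installed and using the 20 score pairs from the question:

from scipy import stats

pre  = [53, 54, 57, 68, 66, 74, 63, 71, 74, 75, 59, 71, 69, 55, 52, 61, 76, 81, 61, 69]
post = [88, 96, 98, 103, 108, 122, 112, 120, 123, 124, 97, 115, 111, 85, 79, 94, 93, 96, 87, 77]

t, p = stats.ttest_rel(pre, post)                    # paired-samples t test
print(f"t = {t:.3f}, two-tailed p = {p:.3f}")        # reject H0 if p < 0.05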

OUTPUT:

T-Test
Paired Samples Statistics
Mean N Std. Deviation Std. Error Mean
pretest score 65.45 20 8.642 1.932
Pair 1
posttest score 101.40 20 14.780 3.305

Paired Samples Correlations


N Correlation Sig.
pretest score & posttest
Pair 1 20 .563 .010
score

H0: There is no significant difference between pre-test and post-test scores of the students.

H1: There is significant difference between pre-test and post-test scores of the students.

CONCLUSION:
The P value is 0.010 < 0.05, so H0 is rejected and H1 is accepted. Therefore, there is a
significant difference between the pre-test and post-test scores of the students.

So, the training has an impact in the scores of the students.

RESULT:
Thus, the given problem using paired sample T-test by SPSS was obtained.

EXERCISE NO: 04

(ONE WAY ANOVA)

AIM:
To conduct a one-way ANOVA with post-hoc analysis using SPSS.

ALGORITHM:
STEP1: Select the analyze menu.

STEP2: Click on compare means and 1 way Anova to open the 1 way Anova dialog box.

STEP3: Select the dependent variables and click on the right button to move the variable
into the dependent list box.

STEP4: Select the independent variable and click on the right button to move the variable
into the factor box.

STEP5: Click on the options command push button to open the one way Anova options
sub-dialog box.

STEP6: Click on the check boxes for descriptive and homogeneity of variance.

STEP7: Click on Continue.

STEP8: Click on the post hoc command push button to open the one-way ANOVA post hoc
multiple comparisons sub-dialog box. You will notice that a number of multiple comparison
options are available. In this example you will use the Tukey's HSD multiple comparison test.
STEP9: Click on the check box for Tukey.

STEP10: Click on continue and then OK.

QUESTION:
(i) Gupta wants to compare the scores of CBSE students from four metro cities of India, i.e.
Delhi, Kolkata, Mumbai and Chennai. He obtained 10 participants' scores by random
sampling from each of the four metro cities, collecting 40 responses. He made the
following hypothesis.

Note: This is an independent design, since the respondents are from different cities. Use
One way between groups ANOVA.

Label For City : 1 Delhi , 2 Kolkata, 3-Mumbai, 4 Chennai


City Scores of the Student
1 400 450 499 480 495 300 350 356 269 298
2 389 398 399 498 457 400 300 298 369 348
3 488 469 425 450 399 385 299 298 389 390
4 450 400 428 398 359 360 310 295 322 365

(ii)Sekar Kapoor wants to know the sales in four different metro cities of India in Diwali
season. He assumes the sales contrast of 2:1:-1:-2 for Delhi: Kolkata: Mumbai: Chennai,
respectively. He collects sales data from 10 respondents each from the four metro cities.
Frame the required hypothesis, do the analysis using the One-way between groups ANOVA
with planned Comparisons and show the result. Calculate F ratio along with Post Hoc
analysis.

City Sales in Rs (Lacs)


1 500 498 478 499 450 428 500 498 486 469
2 500 428 389 378 498 469 428 412 410 421
3 421 410 389 359 369 359 349 349 359 400

4 289 269 259 299 389 349 350 301 297 279

(iii) Deepak wants to know the sales in four different cities of India in Christmas Season. He
assumes the sales contrast of 5: 3: 4: -4 for Delhi: Bangalore: Mumbai: Hyderabad,
respectively. He collects sales data from 10 respondents each from the four cities, collecting a
total of 40 sales data.

City Sales in (RsCrores)


Delhi 50,48,47,49,40,42,50,98,86,69
Bangalore 40,38,43,38,39,87,69,48,41,40
Mumbai 41,10,89,39,36,39,49,29,59,40
Hyderabad 28,29,59,99,39,34,30,31,29,39

Frame the required hypothesis, Analyses through One-way between groups ANOVA with
planned comparisons, Calculate F ratio along with Post Hoc analysis.
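For reference, the one-way ANOVA and Tukey HSD post-hoc comparisons for question (ii) can be reproduced with a short script. This is a sketch only (not the SPSS procedure), assuming scipy and statsmodels are installed:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Diwali-season sales in Rs. lacs, question (ii).
delhi   = [500, 498, 478, 499, 450, 428, 500, 498, 486, 469]
kolkata = [500, 428, 389, 378, 498, 469, 428, 412, 410, 421]
mumbai  = [421, 410, 389, 359, 369, 359, 349, 349, 359, 400]
chennai = [289, 269, 259, 299, 389, 349, 350, 301, 297, 279]

print(stats.f_oneway(delhi, kolkata, mumbai, chennai))      # overall F and p value

sales = np.concatenate([delhi, kolkata, mumbai, chennai])
city  = np.repeat(["delhi", "kolkata", "mumbai", "chennai"], 10)
print(pairwise_tukeyhsd(sales, city, alpha=0.05))           # Tukey HSD post-hoc comparisons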

OUTPUT:
One way:

Descriptives

sales in rs (lacs)
95% Confidence Interval for
Mean
N Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum
delhi 10 480.60 24.88 7.87 462.80 498.40 428 500
kolkata 10 433.30 42.34 13.39 403.01 463.59 378 500
mumbai 10 376.40 26.45 8.37 357.48 395.32 349 421
chennai 10 308.10 41.34 13.07 278.53 337.67 259 389
Total 40 399.60 73.28 11.59 376.16 423.04 259 500

Test of Homogeneity of Variances

sales in rs (lacs)
Levene
Statistic df1 df2 Sig.
1.421 3 36 .253

ANOVA

sales in rs (lacs)
Sum of
Squares df Mean Square F Sig.
Between Groups 166071.8 3 55357.267 45.936 .000
W ithin Groups 43383.800 36 1205.106
Total 209455.6 39

Post hoc test: Homogeneous


subset:

Multiple Comparisons

Dependent Variable: sales in rs (lacs)


Tukey HSD

Mean
Difference 95% Confidence Interval
(I) metro cities (J) metro cities (I-J) Std. Error Sig. Lower Bound Upper Bound
delhi kolkata 47.30* 15.52 .021 5.49 89.11
mumbai 104.20* 15.52 .000 62.39 146.01
chennai 172.50* 15.52 .000 130.69 214.31
kolkata delhi -47.30* 15.52 .021 -89.11 -5.49
mumbai 56.90* 15.52 .004 15.09 98.71
chennai 125.20* 15.52 .000 83.39 167.01
mumbai delhi -104.20* 15.52 .000 -146.01 -62.39
kolkata -56.90* 15.52 .004 -98.71 -15.09
chennai 68.30* 15.52 .001 26.49 110.11
chennai delhi -172.50* 15.52 .000 -214.31 -130.69
kolkata -125.20* 15.52 .000 -167.01 -83.39
mumbai -68.30* 15.52 .001 -110.11 -26.49
*. The mean difference is significant at the .05 level.

sales in rs (lacs)
Tukey HSD(a)
                         Subset for alpha = .05
metro cities     N       1          2          3          4
chennai         10    308.10
mumbai          10               376.40
kolkata         10                          433.30
delhi           10                                     480.60
Sig.                   1.000      1.000      1.000      1.000
Means for groups in homogeneous subsets are displayed.
a. Uses Harmonic Mean Sample Size = 10.000.

One way ANOVA test:

Descriptives

sales in (crores)
95% Confidence Interval for
Mean
N Mean Std. Deviation Std. Error Lower Bound Upper Bound Minimum Maximum
delhi 10 57.90 19.76 6.25 43.76 72.04 40 98
banglore 10 48.30 16.48 5.21 36.51 60.09 38 87
mumbai 10 43.10 20.51 6.49 28.43 57.77 10 89
hyderabad 10 41.70 22.16 7.01 25.85 57.55 28 99
Total 40 47.75 20.11 3.18 41.32 54.18 10 99

Test of Homogeneity of Variances

sales in (crores)
Levene
Statistic df1 df2 Sig.
.175 3 36 .913

ANOVA

sales in (crores)
Sum of
Squares df Mean Square F Sig.
Between Groups 1615.500 3 538.500 1.369 .268
Within Groups 14164.000 36 393.444
Total 15779.500 39

Multiple Comparisons

Dependent Variable: sales in (crores)


Tukey HSD

Mean
Difference 95% Confidence Interval
(I) different cities of india (J) different cities of india (I-J) Std. Error Sig. Lower Bound Upper Bound
delhi banglore 9.60 8.87 .702 -14.29 33.49
mumbai 14.80 8.87 .355 -9.09 38.69
hyderabad 16.20 8.87 .278 -7.69 40.09
banglore delhi -9.60 8.87 .702 -33.49 14.29
mumbai 5.20 8.87 .936 -18.69 29.09
hyderabad 6.60 8.87 .879 -17.29 30.49
mumbai delhi -14.80 8.87 .355 -38.69 9.09
banglore -5.20 8.87 .936 -29.09 18.69
hyderabad 1.40 8.87 .999 -22.49 25.29
hyderabad delhi -16.20 8.87 .278 -40.09 7.69
banglore -6.60 8.87 .879 -30.49 17.29
mumbai -1.40 8.87 .999 -25.29 22.49

sales in (crores)
a
Tukey HSD
Subset
for alpha
= .05
different cities of india N 1
hyderabad 10 41.70
mumbai 10 43.10
banglore 10 48.30
delhi 10 57.90
Sig. .278
Means for groups in homogeneous subsets are displayed.
a. Uses Harmonic Mean Sample Size = 10.000.

(i) NULL HYPOTHESIS: There is no significant difference in scores between


different metro cities of India.

ALTERNATE HYPOTHESIS: There is a significant difference in scores between
different metro cities of India.

(ii) NULL HYPOTHESIS: There is no significant difference between the sales in the
four different metro cities of India during Diwali season

ALTERNATE HYPOTHESIS: There is significant difference between the sales


in the four different metro cities of India during Diwali season

(iii) NULL HYPOTHESIS: There is no significant difference between the sales in the
four different metro cities of India during Christmas season

ALTERNATE HYPOTHESIS: There is significant difference between the sales


in the four different metro cities of India during Christmas season
CONCLUSION:
(i) The P value is 0.784 > 0.05, so H0 is accepted and H1 is rejected; F (3, 36) = 0.358.
Hence there is no significant difference in scores between the different metro cities of India.
(ii) The P value is 0.000 < 0.05, so H0 is rejected and H1 is accepted; F (3, 36) = 45.936.
Hence, there is a significant difference between the sales in the four different metro cities
of India during the Diwali season.
(iii) The P value is 0.268 > 0.05, so H0 is accepted and H1 is rejected; F (3, 36) = 1.369.
Hence, there is no significant difference between the sales in the four different metro cities
of India during the Christmas season.

RESULT:
Thus, one way ANOVA with post-Hoc analysis using SPSS was obtained.
EXERCISE NO: 05

(ONE SAMPLE T TEST, PAIRED SAMPLE T TEST AND INDEPENDENT SAMPLE T TEST)

AIM:
To analyze the given problem using one sample T test, Paired sample T test and Independent
sample T test by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on compare means and then one-sample T test to open the One-Sample T Test dialog
box.

STEP 3: Select the variable you require and click on the arrow button to move the variables into
the test variable(s) box.

STEP 4: In the test value box, type the mean score.

STEP 5: Click on options, enter the confidence interval, click continue and then ok.

STEP 6: Select the analyze menu.

STEP 7: Click on compare means and then paired-samples T test to open the Paired-Samples T Test
dialog box.

STEP 8: Select the variables you require and click on the arrow button to move the variables into
the test variable(s) box.

STEP 9: Click on options, enter the confidence interval and click continue.

STEP 10: Click on ok.

STEP 11: Select the analyze menu.

STEP 12: Click on compare means and then independent-samples T test to open the Independent-Samples
T Test dialog box.

STEP 13: Select the variable you require and click on the arrow button to move the variables
into the test variable(s) box.

STEP 14: Select the variable you require and click on the arrow button to move the variable
into the grouping variable box.

STEP 15: Click on options, enter the confidence interval and click continue.

QUESTION:
Indian Oil has developed a formulation with increased use of ethanol in petroleum
products, which increases engine efficiency with less harmful emissions. 35 cars were
test driven with and without the ethanol, and the number of kilometres per litre was
recorded. The cars used for the tests had either automatic or manual
transmission.

Label: Car Coding: 1 (Automatic), 2 (Manual)

The earlier trial showed that the mean number of kilometres per litre was 12. Indian Oil
wants to know:
1. Whether the efficiency of cars in the second trial is better than in the previous trial (use a one-sample T test).
2. Whether the efficiency of the engine improves with added ethanol (paired T test).
3. Whether the efficiency of the engine with and without ethanol differs between
manual and automatic cars (independent-groups T test).
Frame hypotheses and determine whether there is a significant difference between the two sets of scores.

Car 1 1 2 2 1 2 1 2 1 2 1 2 1 1 2 1 2 1 1 2
With Ethanol 15 16 20 22 18 20 10 19 9 8 6 15 16 11 19 14 20 18 25 16
(in kms)
Without 15 15 19 18 15 18 11 20 9 8 6 14 13 10 18 12 19 17 20 15
Ethanol (in
kms)
Car 1 2 1 1 2 1 2 1 1 1 2 1 2 1 1

With 15 12 20 19 24 11 10 16 26 28 20 19 11 16 23
Ethanol
(in kms)
Without 14 13 19 20 22 10 9 17 20 20 19 15 10 13 21
Ethanol
(in kms)

OUTPUT:

ONE SAMPLE T TEST

One-Sample Statistics

Std. Error
N Mean Std. Deviation Mean
WETO 35 16.77 5.31 .90

One-Sample Test

Test Value = 12
95% Confidence
Interval of the
Mean Difference
t df Sig. (2-tailed) Difference Lower Upper
WETO 5.312 34 .000 4.77 2.95 6.60

PAIRED SAMPLE T TEST

Paired Samples Statistics

Std. Error
Mean N Std. Deviation Mean
Pair WETO 16.77 35 5.31 .90
1 WOETO 15.26 35 4.28 .72

Paired Samples Correlations

N Correlation Sig.
Pair 1 WETO & WOETO 35 .934 .000

Paired Samples Test

Paired Differences
95% Confidence
Interval of the
Std. Error Difference
Mean Std. Deviation Mean Lower Upper t df Sig. (2-tailed)
Pair 1 WETO - WOETO 1.51 2.02 .34 .82 2.21 4.435 34 .000

INDEPENDENT SAMPLE T TEST

Group Statistics

Std. Error
CAR N Mean Std. Deviation Mean
WETO automatic 21 16.71 5.68 1.24
MANUAL 14 16.86 4.91 1.31
WOETO automatic 21 14.86 4.25 .93
MANUAL 14 15.86 4.42 1.18

Independent Samples Test

Levene's Test for


Equality of Variances t-test for Equality of Means
95% Confidence
Interval of the
Mean Std. Error Difference
F Sig. t df Sig. (2-tailed) Difference Difference Lower Upper
W ETO Equal variances
.027 .871 -.077 33 .939 -.14 1.86 -3.93 3.64
assumed
Equal variances
-.079 30.679 .937 -.14 1.81 -3.83 3.54
not assumed
W OETO Equal variances
.175 .678 -.672 33 .506 -1.00 1.49 -4.03 2.03
assumed
Equal variances
-.666 27.230 .511 -1.00 1.50 -4.08 2.08
not assumed

(i) H0: There is no significant difference in the efficiency of cars between the previous and
present trials.

H1: There is a significant difference in the efficiency of cars between the previous and present
trials.

(ii) H0:There is no significant difference between the efficiency of engine with & without
ethanol.

H1:There is significant difference between the efficiency of engine with & without
ethanol.
(iii) H0:There is no significant difference in efficiency of engine with and without
ethanol between the automatic and manual car

H1: There is significant difference in efficiency of engine with and without ethanol
between the automatic and manual car

CONCLUSION

1. Significant value = 0.000 < 0.05, so H0 is rejected and H1 is accepted. Hence, there is a
significant difference in the efficiency of cars between the previous and present trials.
2. Significant value = 0.000 < 0.05, so H0 is rejected and H1 is accepted. Hence, there is a
significant difference between the efficiency of the engine with and without ethanol.
3. The Levene's test significance values (0.871 with ethanol, 0.678 without ethanol) are > 0.05,
so equal variances are assumed. The corresponding significance values (0.939 and 0.506) are
> 0.05, so H0 is accepted and H1 is rejected. Hence, there is no significant difference in the
efficiency of the engine with and without ethanol between manual and automatic cars.

RESULT: Thus, the given problem using the one-sample T test, paired-sample T test and
independent-samples T test in SPSS was executed.

EXP. NO: 03                                          HYPOTHESIS - NON-PARAMETRIC
Date: 24.02.2016

EXERCISE NO: 01
(TWO WAY ANOVA)

AIM:
To conduct a two-way between-groups ANOVA using SPSS.

ALGORITHM:
STEP1: Select the analyze menu.

STEP2: Click on General Linear model and then univariate to open univariate dialog box

STEP3: Select the dependent variables and click on the right button to move the variable
into the dependent list box.

STEP4: Select the independent variable and click on the right button to move the variable
into the fixed factor box.

STEP5: Click on the options command push button to open the Univariate options sub-
dialog box.

STEP6: In dialog box click on the check boxes for descriptive statistics, estimate of effect
size, observed power and homogeneity test.

STEP7: Click on continue and then OK.

QUESTION:

(i) Neha Gupta wants to find out whether the sales (dependent variable) of the respondents depend
on their place (independent variable) and education (independent variable). She assigns 10
respondents from each metro city.

Labels

Place: 1 (Delhi), 2 (Kolkata), 3(Chennai)


Education: 1 (Under graduate), 2(Graduate), 3(Post Graduate).
She wants to know which variable shows a significant effect on sales:
Whether the location influences sales?
Whether the education influences sales?
Whether the influence of education on sales depends on the location of the respondent?
Use a two-way between-groups ANOVA to determine the significant effects between the variables.

Place Graduation Sales (Rs. Lacs)


Delhi 1 20,40,44,35
2 30,34,50,40
3 60,55
Kolkata 1 15,25,30
2 35,40,45,70
3 65,80
Chennai 1 80,75,30,10
2 100,90,75
3 150,89,99

(ii) Mohit Rajan wants to find out whether the sales (dependent variable) of the respondents
depend on their place (independent variable) and age (independent variable). He assigns
respondents from each locality; each respondent falls into one of 3 age levels.
Place: Ram Nagar (10 respondents), Jyoti Colony (9 respondents), VivekVihar (10 respondents).
Age: 1 (below 25 years), 2 (25-35 years), 3 (above 35 years).

Place Age Sales (Rs. Lacs)


Ram nagar 1 20,40,44,35
2 30,34,50,40
3 60,55
Jyoti Colony 1 15,25,30
2 35,40,45,70
3 65,80
VivekVihar 1 80,75,30,10
2 100,90,75
3 150,89,99
Analyze through Two-way between groups ANOVA. Find the variable that has significant
effect on sales.
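A comparable two-way ANOVA can be run outside SPSS; the sketch below (an illustration, not the SPSS procedure) assumes pandas and statsmodels are installed, builds the data for question (i) from the table above, and uses Type II sums of squares rather than the Type III sums reported by SPSS Univariate:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Sales by (place, education): place 1=Delhi, 2=Kolkata, 3=Chennai; education 1=UG, 2=Graduate, 3=PG.
data = {
    (1, 1): [20, 40, 44, 35],  (1, 2): [30, 34, 50, 40],  (1, 3): [60, 55],
    (2, 1): [15, 25, 30],      (2, 2): [35, 40, 45, 70],  (2, 3): [65, 80],
    (3, 1): [80, 75, 30, 10],  (3, 2): [100, 90, 75],     (3, 3): [150, 89, 99],
}
rows = [{"place": p, "education": e, "sales": s}
        for (p, e), vals in data.items() for s in vals]
df = pd.DataFrame(rows)

# Two-way between-groups ANOVA with the place * education interaction.
model = ols("sales ~ C(place) * C(education)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))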

OUTPUT 1:

Betw een-Subjects Factors

Value Label N
PLACE OF 1 DELHI 10
SALES 2 KOLKATA 9
3 CHENNAI 10
GRADUATON 1 UG 11
FOR SALES 2 GRADUATE 11
3 PG 7

Descriptive Statistics

Dependent Variable: SALES


PLACE OF SALES GRADUATON Mean Std. Deviation N
DELHI FOR
UG SALES 34.75 10.50 4
GRADUATE 38.50 8.70 4
PG 57.50 3.54 2
Total 40.80 12.00 10
KOLKATA UG 23.33 7.64 3
GRADUATE 47.50 15.55 4
PG 72.50 10.61 2
Total 45.00 22.08 9
CHENNAI UG 48.75 34.25 4
GRADUATE 88.33 12.58 3
PG 112.67 32.72 3
Total 79.80 38.43 10
Total UG 36.73 22.58 11
GRADUATE 55.36 24.32 11
PG 85.43 32.62 7
Total 55.55 31.36 29

a
Levene's Test of Equality of Error Variances

Dependent Variable: SALES


F df1 df2 Sig.
5.572 8 20 .001
Tests the null hypothesis that the error variance of the
dependent variable is equal across groups.
a. Design: Intercept+PLACE+GRADUATN+PLACE
* GRADUATN

Tests of Between-Subjects Effects

Dependent Variable: SALES


Type III Sum Noncent. Observed
a
Source of Squares df Mean Square F Sig. Eta Squared Parameter Power
Corrected Model 20044.672b 8 2505.584 6.681 .000 .728 53.449 .998
Intercept 91467.120 1 91467.120 243.896 .000 .924 243.896 1.000
PLACE 8957.647 2 4478.824 11.943 .000 .544 23.885 .987
GRADUATN 8688.611 2 4344.306 11.584 .000 .537 23.168 .984
PLACE * GRADUATN 1754.261 4 438.565 1.169 .354 .190 4.678 .299
Error 7500.500 20 375.025
Total 117039.000 29
Corrected Total 27545.172 28
a. Computed using alpha = .05
b. R Squared = .728 (Adjusted R Squared = .619)

OUTPUT 2:

Between-Subjects Factors
                        Value Label        N
PLACE OF SALES    1     RAM NAGAR         10
                  2     JYOTI COLONY       9
                  3     VIVEK VIHAR       10
ages              1     below 25 yrs      11
                  2     25-35 yrs         11
                  3     above 35 yrs       7

Descriptive Statistics

Dependent Variable: SALES

PLACE OF SALES    ages             Mean     Std. Deviation    N
RAM NAGAR         below 25 yrs     34.75    10.50              4
                  25-35 yrs        38.50     8.70              4
                  above 35 yrs     57.50     3.54              2
                  Total            40.80    12.00             10
JYOTI COLONY      below 25 yrs     23.33     7.64              3
                  25-35 yrs        47.50    15.55              4
                  above 35 yrs     72.50    10.61              2
                  Total            45.00    22.08              9
VIVEK VIHAR       below 25 yrs     48.75    34.25              4
                  25-35 yrs        88.33    12.58              3
                  above 35 yrs    112.67    32.72              3
                  Total            79.80    38.43             10
Total             below 25 yrs     36.73    22.58             11
                  25-35 yrs        55.36    24.32             11
                  above 35 yrs     85.43    32.62              7
                  Total            55.55    31.36             29

a
Levene's Test of Equality of Error Variances

Dependent Variable: SALES


F df1 df2 Sig.
5.572 8 20 .001
Tests the null hypothesis that the error variance of the
dependent variable is equal across groups.
a. Design: Intercept+PLACE+AGE+PLACE * AGE

Tests of Betw een-Subjects Effects

Dependent Variable: SALES


Type III Sum Noncent. Observed
a
Source of Squares df Mean Square F Sig. Eta Squared Parameter Power
Corrected Model 20044.672b 8 2505.584 6.681 .000 .728 53.449 .998
Intercept 91467.120 1 91467.120 243.896 .000 .924 243.896 1.000
PLACE 8957.647 2 4478.824 11.943 .000 .544 23.885 .987
AGE 8688.611 2 4344.306 11.584 .000 .537 23.168 .984
PLACE * AGE 1754.261 4 438.565 1.169 .354 .190 4.678 .299
Error 7500.500 20 375.025
Total 117039.000 29
Corrected Total 27545.172 28
a. Computed using alpha = .05
b. R Squared = .728 (Adjusted R Squared = .619)

Question 1:
1. H0:There is no significant effect of location on sales
H1:There is significant effect of location on sales
2. H0:There is no significant effect of education on sales
H1:There is significant effect of education on sales
3. H0: The influence of education on sales does not depend on the location of respondents.
H1: The influence of education on sales depends on location of respondents.
Question 2:
4. H0: There is no significant effect of place and age on sales
H1: There is significant effect of place and age on sales

CONCLUSION:

(i) Significant value is 0.000 < 0.05; H0 is rejected and H1 is accepted. There is a
significant effect of location on sales.
(ii) Significant value is 0.000 < 0.05; H0 is rejected and H1 is accepted. There is a
significant effect of education on sales.
(iii) Significant value for the place * education interaction is 0.354 > 0.05; H0 is accepted
and H1 is rejected. The influence of education on sales does not depend on the location of
the respondents.
(iv) Significant values for place and age are 0.000 < 0.05; H0 is rejected and H1 is accepted.
There is a significant effect of place and age on sales.

RESULT:
Thus, the two-way between-groups ANOVA using SPSS was obtained.

EXERCISE NO: 02

(CHI-SQUARE TEST)

AIM:
To analyze the given problem using chi-square test by SPSS.

ALGORITHM:
STEP 1: Select analyze menu.

STEP 2: Click on the descriptive statistics and then on cross tabs to open the cross tab dialog
box.

STEP 3: Select the row variable you require and click on the button to move the variable

into rows box.

STEP 4: Select the column variable you require and click on the button to move the

variable into column box

STEP 5: Click on the statistics command push button to open the cross tab statistics sub
dialog box.

STEP 6: Click on chi-square check box.

STEP 7: Click on continue.

STEP 8: Click on the cells command push button to open the cross tabs: cell display sub
dialog box

STEP 9: In the count box click on the observed and expected check boxes

STEP 10: In the percentages box click on the row, column and total check boxes

STEP 11: Click on continue and then ok.

QUESTION:
Mathu Gupta wants to know whether the serial preference was dependent on location of the
respondent. The responses indicate, 75 respondents each have seen serial crorepathi and Big
Boss. His responses indicate 66 respondents from Delhi and 84 respondents from Mumbai.
The frequency table is shown below.
Serial Place Frequency
Crorepathi Delhi 40
Crorepathi Mumbai 35

Big Boss Delhi 26
Big Boss Mumbai 49

Conduct a chi square test for independence or relatedness.
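As a cross-check, the chi-square test of independence can be computed directly from the 2x2 frequency table in the question; a minimal Python sketch assuming scipy is installed:

from scipy.stats import chi2_contingency

observed = [[40, 35],    # Crorepathi: Delhi, Mumbai
            [26, 49]]    # Big Boss:   Delhi, Mumbai

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")   # reject H0 if p < 0.05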

OUTPUT:

Crosstabs:
Case Processing Summary

Cases
Valid Missing Total
N Percent N Percent N Percent
SERIAL * PLACE 4 100.0% 0 .0% 4 100.0%

SERIAL * PLACE Crossta bula tion

PLACE
delhi mumbai Total
SERIAL crorepathi Count 1 1 2
Expected Count 1.0 1.0 2.0
% within SERIAL 50.0% 50.0% 100.0%
% within PLACE 50.0% 50.0% 50.0%
% of Total 25.0% 25.0% 50.0%
bigboss Count 1 1 2
Expected Count 1.0 1.0 2.0
% within SERIAL 50.0% 50.0% 100.0%
% within PLACE 50.0% 50.0% 50.0%
% of Total 25.0% 25.0% 50.0%
Total Count 2 2 4
Expected Count 2.0 2.0 4.0
% within SERIAL 50.0% 50.0% 100.0%
% within PLACE 100.0% 100.0% 100.0%
% of Total 50.0% 50.0% 100.0%

Chi-Square Tests

Asymp. Sig. Exact Sig. Exact Sig.


Value df (2-sided) (2-sided) (1-sided)
Pearson Chi-Square .000b 1 1.000
a
Continuity Correction .000 1 1.000
Likelihood Ratio .000 1 1.000
Fisher's Exact Test 1.000 .833
Linear-by-Linear
.000 1 1.000
Association
N of Valid Cases 4
a. Computed only for a 2x2 table
b. 4 cells (100.0%) have expected count less than 5. The minimum expected count is
1.00.

H0: There is no association between the serial preference and location of the respondents.

H1: There is association between the serial preference and location of the respondents.

CONCLUSION:
The significant value is 0.833 > 0.05, so H0 is accepted and H1 is rejected. Therefore,
there is no significant association between the serial preference and the location of the
respondents.

RESULT:
Thus, the given problem using Chi-square test was obtained.

EXERCISE NO: 2(A)

(CHI-SQUARE TEST)

AIM:
To analyze the given problem using chi-square test by SPSS.

ALGORITHM:
STEP 1: Select the data menu.

STEP 2: Click on the weight cases to open the weight cases dialog box.

STEP 3: Click on weight cases by radio button.

STEP 4: Select the variable you require (ie) frequency and click on the button to move the

variable into frequency variable box

STEP 5: Click on ok. The message Weight On should appear in the status bar at the bottom
right of the application window.

STEP 6: Select analyze menu.

STEP 7: Click on non-parametric test and then chi-square to open the chi- square test dialog
box.

STEP 8: Select the variable you require (i.e. attitude) and click on the arrow button to move the
variable into the test variable list box.

STEP 9: Click on ok.

QUESTION:
The following table outlines the attitudes of 60 people towards US military bases in
Australia. A chi-square test for goodness of fit will allow us to determine whether differences
in frequency exist across the response categories.

Attitude Frequency

in favour 8

Against 20

Undecided 32
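The goodness-of-fit statistic can be verified with a one-line computation; a minimal Python sketch assuming scipy is installed (expected counts default to 60/3 = 20 per category, giving chi-square = 14.4 as in the output below):

from scipy.stats import chisquare

observed = [8, 20, 32]        # in favour, against, undecided
print(chisquare(observed))    # chi-square statistic and p value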

OUTPUT:

Chi-square test:

attitude

Observed N Expected N Residual


in favour 8 20.0 -12.0
against 20 20.0 .0
undecided 32 20.0 12.0
Total 60

Test Statistics

attitude
Chi-Squarea 14.400
df 2
Asymp. Sig. .001
a. 0 cells (.0%) have expected frequencies less than
5. The minimum expected cell frequency is 20.0.

H0: There is no difference in frequency across the response categories.

H1: There is a difference in frequency across the response categories.

CONCLUSION:
The significant value 0.001 < 0.05, so H0 is rejected and H1 is accepted. Therefore,
there is a significant difference in frequency across the response categories.

RESULT:
Thus, the given problem using Chi-square test by SPSS was obtained.

EXERCISE NO: 03

(SPEARMAN'S RANK CORRELATION)

AIM:
To analyze the relationship between two variables by using SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on correlate and then bivariate to open the Bivariate Correlations dialog box.

STEP 3: Select the variables you require and click on the arrow button to move them into the
variables box.

STEP 4: Ensure that the Spearman rank-order correlation option is selected.

STEP 5: In the test of significance box, select the one-tailed radio button.

STEP 6: Click ok button

QUESTION:
Pawan wants to see the relationship between monthly household income and retail
purchase by 20 respondents through Spearmans rank order correlation.

Sl.No 1 2 3 4 5 6 7 8 9 10
Household 1 5 0.50 10 3 3 7 8 10 0.40
income
(Rs in
Lac)
Retail 10 25 10 100 35 30 100 100 150 8
purchase
(Rs. In
thousand)
Sl.No 11 12 13 14 15 16 17 18 19 20
Household 0.30 0.20 0.10 0.15 0.80 5 6 7 8 1
income (Rs
in Lac)
Retail 6 3 2 2 10 30 30 20 60 10
purchase
(Rs. In
thousand)
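A quick cross-check of the Spearman correlation outside SPSS; a minimal Python sketch assuming scipy is installed and using the 20 paired observations from the question:

from scipy.stats import spearmanr

income   = [1, 5, 0.50, 10, 3, 3, 7, 8, 10, 0.40,
            0.30, 0.20, 0.10, 0.15, 0.80, 5, 6, 7, 8, 1]     # Rs. lacs
purchase = [10, 25, 10, 100, 35, 30, 100, 100, 150, 8,
            6, 3, 2, 2, 10, 30, 30, 20, 60, 10]              # Rs. thousand

rho, p = spearmanr(income, purchase)
print(f"rho = {rho:.3f}, two-tailed p = {p:.4f}")   # halve p for a one-tailed test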

OUTPUT:

Non-parametric Correlations:

Correlations
Household Retail
income purchases (Rs.
(Rs.lacs) thousand)

Correlation Coefficient 1.000 .942**


Household income (Rs.lacs)
Sig. (1-tailed) . .000
Spearman's rho N 20 20
Correlation Coefficient .942** 1.000
Retail purchases (Rs.
Sig. (1-tailed) .000 .
thousand)
N 20 20
**. Correlation is significant at the 0.01 level (1-tailed).

H0: There is no significant relationship between monthly household income and retail
purchase.

H1: There is significant relationship between monthly household income and retail purchase.

CONCLUSION:
The significant value 0.000 < 0.05. There is a significant relationship between
monthly household income and retail purchase. Whereas, the correlation coefficient is
positive.

RESULT:
Thus, the relationship between the two variables using SPSS was obtained.

EXERCISE NO: 04

(MANN WHITNEY U TEST)

AIM:
To analyze the given problem using the Mann-Whitney U test by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on non-parametric tests and then on 2 independent samples to open the
Two-Independent-Samples Tests dialog box.

STEP 3: Select the dependent variable and click on the arrow button to move the variable into
the test variable list box.

STEP 4: Select the independent (grouping) variable and click on the arrow button to move the
variable into the grouping variable box.

STEP 5: Click on the define groups command push button to open the two independent
samples: define groups sub-dialog box.

STEP 6: In the group 1 box, enter the first value of the independent variable (i.e. 1), then tab
and enter the second value of the independent variable (i.e. 2) in the group 2 box.

STEP 7: Click on continue.

STEP 8: Ensure the Mann-Whitney U check box has been selected.

STEP 9: Click ok.

QUESTION:
The sales of two retail stores, in Delhi (store 1) and Mumbai (store 2), are compared by
Ganesh. The sales are in Rs. lacs. There are 20 respondents, 10 from each store. Apply the
Mann-Whitney non-parametric test for independent groups to test the hypothesis.

Retail store 1 2 1 1 2 2 2 1 2 1
Sales (Rs 40 30 60 45 55 25 60 80 100 20
Lacs)

Retail store 2 1 1 2 1 1 2 2 1 2
Sales (Rs 10 80 85 90 120 85 60 55 56 25
Lacs)
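The Mann-Whitney U statistic can be reproduced with a short script; a minimal Python sketch assuming scipy is installed, with the sales split by the store codes in the question (1 = Delhi, 2 = Mumbai):

from scipy.stats import mannwhitneyu

store1 = [40, 60, 45, 80, 20, 80, 85, 120, 85, 56]     # Delhi sales (Rs. lacs)
store2 = [30, 55, 25, 60, 100, 10, 90, 60, 55, 25]     # Mumbai sales (Rs. lacs)

u, p = mannwhitneyu(store1, store2, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")                          # reject H0 if p < 0.05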

OUTPUT:

Mann-Whitney Test

Ranks
Retail stores N Mean Rank Sum of Ranks
Delhi 10 11.90 119.00
Sales (Rs. lacs) Mumbai 10 9.10 91.00
Total 20

Test Statisticsa
Sales (Rs. lacs)
Mann-Whitney U 36.000
Wilcoxon W 91.000
Z -1.061
Asymp. Sig. (2-tailed) .288
Exact Sig. [2*(1-tailed Sig.)] .315b
a. Grouping Variable: Retail stores
b. Not corrected for ties.

Null hypothesis: There exists no significant difference in the sales of the two retail stores.
Alternative hypothesis: There exists a significant difference in the sales of the two retail
stores.

CONCLUSION:
The significant value 0.315 > 0.05, so H0 is accepted and H1 is rejected. Hence there
exists no significant difference in the sales of the two retail stores.

RESULT:
Thus, the given problem using mann-whitney u test by SPSS was obtained.

EXERCISE NO: 05
(WILCOXON SIGNED RANK TEST)

AIM:
To analyze the given problem using the Wilcoxon signed-rank test by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on non-parametric tests and then on 2 related samples to open the
Two-Related-Samples Tests dialog box.

STEP 3: Select the variables you require and click on the arrow button to move the variables into
the test variable list box.

STEP 4: Ensure the Wilcoxon signed-rank test check box has been selected.

STEP 5: Click continue and then ok.

QUESTION:

A showroom manager compares the laptop sales for the year in two parts. He wants to
compare the sales of the first half and second half of the year. He recorded the sales from 20
showrooms in Rs. lacs. Apply the Wilcoxon signed-rank non-parametric paired test to test the
hypothesis.

Sale 1 (Rs Lacs)   10   20   25   50   45   30   50    60   100   20
Sale 2 (Rs Lacs)   20   50   60   35   65   90   110   25    20   35

Sale 1 (Rs Lacs)   50   90   60   40   20   45   65    56    38   28
Sale 2 (Rs Lacs)   64   69   95   85   76   68   59   120    60   30
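The Wilcoxon signed-rank test can be cross-checked outside SPSS; a minimal Python sketch assuming scipy is installed and using the 20 paired sales figures from the question:

from scipy.stats import wilcoxon

sale1 = [10, 20, 25, 50, 45, 30, 50, 60, 100, 20, 50, 90, 60, 40, 20, 45, 65, 56, 38, 28]
sale2 = [20, 50, 60, 35, 65, 90, 110, 25, 20, 35, 64, 69, 95, 85, 76, 68, 59, 120, 60, 30]

w, p = wilcoxon(sale1, sale2)
print(f"W = {w}, p = {p:.3f}")    # reject H0 if p < 0.05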

OUTPUT:

Wilcoxon Signed Ranks Test


Ranks
N Mean Rank Sum of Ranks
a
Negative Ranks 5 9.70 48.50
Positive Ranks 15b 10.77 161.50
Sales2 - Sales1 c
Ties 0
Total 20
a. Sales2 < Sales1
b. Sales2 > Sales1
c. Sales2 = Sales1

Test Statisticsa

Sales2 - Sales1

Z -2.110b
Asymp. Sig. (2-tailed) .035

a. Wilcoxon Signed Ranks Test


b. Based on negative ranks.

Null hypothesis: There exists no significant difference in the showroom sales for the first
andsecond half of the year.

Alternative hypothesis: There exists significant difference in the showroom sales for the
firstand second half of the year.

CONCLUSION:
The significant value 0.035 > 0.05. Thus H0 is accepted and H1 is rejected. Hence
there exists no significant difference in the showroom sales for the first and second half of the
year.

RESULT:
Thus, the given problem using Wilcoxon signed rank test by using SPSS was
obtained.

EXERCISE NO: 06

(KRUSKAL WALLIS TEST)

AIM:
To analyze the given problem using kruskal-wallis test by SPSS.

ALGORITHM:
STEP1: Select the analyze menu.
STEP2: Click on non-parametric test and then an k-independent samples to open the test
for several independent sample box.
STEP3: Select the dependent variable and click on the right button to move the variable
into test variable list box.
STEP4: Select the independent variables and click on the right button to move the
variable box.
STEP5: Click on the define range command push button to open the several independent
samples: define range sub-dialog box.
STEP6: Enter the first value for the independent variable, that is (1), in the minimum box,
then tab.
STEP7: Enter the greatest value for the independent variable that is (3) in the maximum
box.
STEP8: Click on continue, ensure the kruskal-wallis check box has been selected.
STEP9: Click Ok.

QUESTION:

A personnel manager of a large insurance company wished to evaluate the
effectiveness of 3 different sales training programs that had been designed for new
employees. 30 new graduates were randomly assigned to one of the programs and their
annual sales figures (in $1000) were compared 12 months later. The data violate the
stringent assumptions of a one-way ANOVA, so the manager decided to perform a
Kruskal-Wallis test.

TRAINING TRAINING TRAINING


PROGRAM 1 PROGRAM 2 PROGRAM 3
545 538 505
470 587 436
445 466 555
574 621 496
463 724 493
383 487 581
452 460 500
573 504 486
529 500 500
471 450 505
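A comparable Kruskal-Wallis test can be run in Python; a minimal sketch assuming scipy is installed, with the three columns of the table read as the three training programs:

from scipy.stats import kruskal

prog1 = [545, 470, 445, 574, 463, 383, 452, 573, 529, 471]
prog2 = [538, 587, 466, 621, 724, 487, 460, 504, 500, 450]
prog3 = [505, 436, 555, 496, 493, 581, 500, 486, 500, 505]

h, p = kruskal(prog1, prog2, prog3)
print(f"H = {h:.3f}, p = {p:.3f}")    # reject H0 if p < 0.05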

OUTPUT:

Kruskal-Wallis Test:

Ranks
                               N     Mean Rank
sales   training program 1    10         13.00
        training program 2    10         17.30
        training program 3    10         16.20
        Total                 30

Test Statistics(a,b)
                 sales
Chi-Square       1.289
df                   2
Asymp. Sig.       .525
a. Kruskal Wallis Test
b. Grouping Variable: Training

H0: There is no significant difference between the effectiveness of 3 different sales training
programs
H1: There is a significant difference between the effectiveness of 3 different sales training
programs

CONCLUSION:

The significant value 0.525 > 0.05, so H0 is accepted and H1 is rejected. Hence there is no
significant difference between the 3 sales training programs for new employees.

RESULT:

Thus the given problem using kruskal-wallis test by using SPSS was analyzed.

EXERCISE NO: 07
(FRIEDMAN TEST)

AIM:
To analyze the given problem using the Friedman test by SPSS.

ALGORITHM:
STEP1: Select the analyze menu.

STEP2: Click on non-parametric test and then k-related samples to open the test for several
related samples box.

STEP3: Select the variables you require (i.e: drug x, drug y & placebo) and click on the right
button to move the variables into test variable list box.

STEP4: Ensure the Friedman check box has been selected.

STEP5: Click OK.

QUESTION:

Reaction times for 8 subjects were measured under a placebo condition, a drug X condition and
a drug Y condition. It was hypothesized that reaction times would differ significantly across
the drug conditions.

PLACEBO DRUGX DRUGY


167 111 310
267 222 456
303 245 532
110 134 220
120 345 678
210 304 315
113 129 189
223 289 430
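The Friedman test can be cross-checked with a short script; a minimal Python sketch assuming scipy is installed and using the reaction times from the question:

from scipy.stats import friedmanchisquare

placebo = [167, 267, 303, 110, 120, 210, 113, 223]
drugx   = [111, 222, 245, 134, 345, 304, 129, 289]
drugy   = [310, 456, 532, 220, 678, 315, 189, 430]

stat, p = friedmanchisquare(placebo, drugx, drugy)
print(f"chi-square = {stat:.3f}, p = {p:.3f}")    # reject H0 if p < 0.05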

OUTPUT:

NPar Tests:
Friedman Test:
Ranks
            Mean Rank
placebo          1.38
drugx            1.63
drugy            3.00

Test Statistics(a)
N                   8
Chi-Square     12.250
df                  2
Asymp. Sig.      .002
a. Friedman Test

H0: There is no significant difference between the reaction times across drug condition.
H1: There is a significant difference between the reaction times across drug condition.

CONCLUSION:
The significant value 0.002 < 0.05, so H0 is rejected and H1 is accepted. Hence there is a
significant difference between the reaction times across the drug conditions.

RESULT:
Thus, the given problem using the Friedman test by using SPSS was analysed.

EXP. NO: 04                                          CORRELATION AND REGRESSION
Date: 02.03.2016

EXERCISE NO: 01
(CORRELATION)

AIM:
To analyze the relationship between two variables using Correlation by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on correlate and then bivariate to open the Bivariate Correlations dialog box.

STEP 3: Select the variables you require and click on the arrow button to move them into the
variables box.

STEP 4: Ensure that the Pearson correlation option is selected.

STEP 5: In the test of significance box, select the one-tailed radio button.

STEP 6: Click the ok button.

QUESTION:
Twenty students have taken their common entrance test after their graduation. The
selection committee wants to see the relationship between the scores of CET and the
percentage achieved in graduation through correlation analysis.

Frame a hypothesis and show the type of relationship (positive or negative) between the variables.

CET 70 60 65 68 70 75 87 89 90 96 97 65 80 86 77
scores
% in UG 71 82 73 64 75 69 75 88 90 90 88 82 73 74 65
degree
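A quick cross-check of the Pearson correlation outside SPSS; a minimal Python sketch assuming scipy is installed and using the 15 paired scores from the question:

from scipy.stats import pearsonr

cet = [70, 60, 65, 68, 70, 75, 87, 89, 90, 96, 97, 65, 80, 86, 77]
ug  = [71, 82, 73, 64, 75, 69, 75, 88, 90, 90, 88, 82, 73, 74, 65]

r, p = pearsonr(cet, ug)
print(f"r = {r:.3f}, two-tailed p = {p:.4f}")   # halve p for the one-tailed test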

OUTPUT:
Correlations

percentage
cet scores in ug degree
cet scores Pearson Correlation 1.000 .539*
Sig. (1-tailed) . .019
N 15 15
percentage in ug degree Pearson Correlation .539* 1.000
Sig. (1-tailed) .019 .
N 15 15
*. Correlation is significant at the 0.05 level (1-tailed).

H0: There is no significant relationship between the scores and the percentage achieved in
graduation.
H1: There is a significant relationship between the scores and the percentage achieved in
graduation.

CONCLUSION:
The significant value 0.019 < 0.05, H0is rejected and H1is accepted. Hence there is no
significant relationship between the scores and the percentage achieved in graduation.

RESULT:
Thus the relationship between the variables using correlation by SPSS was analyzed.

EXERCISE NO: 02
(CORRELATION AND REGRESSION)

AIM:
To analyze the relationship between two variables using Correlation and regression by SPSS.

ALGORITHM:
STEP 1: Select the analyze menu.

STEP 2: Click on correlate and then bivariate to open the Bivariate Correlations dialog box.

STEP 3: Select the variables you require and click on the arrow button to move them into the
variables box.

STEP 4: Ensure that the Pearson correlation option is selected.

STEP 5: In the test of significance box, select the one-tailed radio button.

STEP 6: Click the ok button.

STEP 7: Select the analyze menu.

STEP 8: Click on regression and then on linear to open the Linear Regression dialog box.

STEP 9: Select the dependent variable.

STEP 10: Click on the arrow button to move the variable into the dependent box.

STEP 11: Select the independent variable and click on the arrow button to move it into the
independent(s) box.

STEP 12: In the method drop-down list, ensure that Enter is selected.

STEP 13: Click on the statistics command push button to open the Linear Regression: Statistics
sub-dialog box and ensure the estimates and model fit check boxes are selected.

STEP 14: Click on continue and then ok.

QUESTION:
(i) Twenty employees of different age groups have taken their exams for getting
promotions in their designations in the office. The appraisal committee wants to see
the relationship between the exam scores and the age through correlation analysis.

Frame a hypothesis and show the type of relationship (positive or negative) between the variables using correlation.

The appraisal committee also wants to calculate the unit increase in the exam score
when there is a unit increase in age, by applying regression. It also wants to predict
the exam score of an employee with the age of 21.
Age of 24 34 27 26 30 34 45 43 48 51 46 30 38 42 32
Employees
Exam 70 60 65 68 70 75 87 89 90 96 97 65 80 86 77
scores

(ii) A survey was taken among 25 students; the data collected were the preparatory hours
and the exam percentage of each student. The researcher wants to see the
direction of correlation between the preparatory hours and exam percentage through
correlation analysis. He also wants to calculate how much the exam percentage will
increase when the preparatory time is increased by 1 hour. Apply regression to
calculate the unit increase.

Preparatory 4 3 7 6 3 4 5 4 8 5 6 3 8 4 3
hours of
the student
Exam 70 60 65 68 70 75 87 89 90 96 97 65 80 86 77
percentage
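The regression requested in question (i) can be cross-checked with a short script; a minimal Python sketch assuming scipy is installed, where the slope is the unit increase in exam score per year of age and the fitted line predicts the score at age 21:

from scipy.stats import linregress

age    = [24, 34, 27, 26, 30, 34, 45, 43, 48, 51, 46, 30, 38, 42, 32]
scores = [70, 60, 65, 68, 70, 75, 87, 89, 90, 96, 97, 65, 80, 86, 77]

fit = linregress(age, scores)
print(f"r = {fit.rvalue:.3f}, R^2 = {fit.rvalue**2:.3f}")
print(f"score = {fit.intercept:.3f} + {fit.slope:.3f} * age")
print("Predicted score at age 21:", fit.intercept + fit.slope * 21)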

OUTPUT 1:

Correlation
age examscores

Pearson Correlation 1 .893**

age Sig. (2-tailed) .000

N 15 15
**
Pearson Correlation .893 1

examscores Sig. (2-tailed) .000

N 15 15

**. Correlation is significant at the 0.01 level (2-tailed).

Regression

Variables Entered/Removed(b)

Model   Variables Entered    Variables Removed   Method
1       age of employees     .                   Enter
a. All requested variables entered.
b. Dependent Variable: exam scores

Model Summary

Adjusted Std. Error of


Model R R Square R Square the Estimate
1 .893a .798 .782 5.57
a. Predictors: (Constant), age of employees

ANOVAb

Sum of
Model Squares df Mean Square F Sig.
1 Regression 1593.752 1 1593.752 51.337 .000a
Residual 403.581 13 31.045
Total 1997.333 14
a. Predictors: (Constant), age of employees
b. Dependent Variable: exam scores

Coefficientsa

Standardi
zed
Unstandardized Coefficien
Coefficients ts 95% Confidence Interval for B
Model B Std. Error Beta t Sig. Lower Bound Upper Bound
1 (Constant) 33.231 6.457 5.146 .000 19.281 47.181
age of employees 1.230 .172 .893 7.165 .000 .859 1.601
a. Dependent Variable: exam scores

a
Coefficient Correlations

age of
Model employees
1 Correlations age of employees 1.000
Covariances age of employees 2.947E-02
a. Dependent Variable: exam scores

OUTPUT 2:

Correlation
Correlations

preparatory
hours of the exam
students percentage
preparatory hours Pearson Correlation 1.000 .330
of the students Sig. (1-tailed) . .114
N 15 15
exam percentage Pearson Correlation .330 1.000
Sig. (1-tailed) .114 .
N 15 15

Regression

Variables Entered/Removeda

Model Variables Variables Method


Entered Removed
b
1 prephours . Enter

a. Dependent Variable: examperc


b. All requested variables entered.

Model Summary

Model R R Square Adjusted R Std. Error of the


Square Estimate
a
1 .330 .109 .041 11.699

a. Predictors: (Constant), prephours

ANOVAa

Model Sum of Squares df Mean Square F Sig.

Regression 218.112 1 218.112 1.594 .229b

Residual 1779.221 13 136.863

Total 1997.333 14

a. Dependent Variable: examperc

b. Predictors: (Constant), prephours

Coefficientsa

Model Unstandardized Coefficients Standardized t Sig.


Coefficients

B Std. Error Beta

(Constant) 67.465 9.124 7.394 .000


1
prephours 2.233 1.769 .330 1.262 .229

a. Dependent Variable: examperc

(i) H0: There is no significant relationship between the age and exam scores
H1: There is a significant relationship between the age and exam scores
(ii) H0:There is no significant relationship between the preparatory hours and the
exam scores.
H1:There is a significant relationship between the preparatory hours and the
exam scores.

CONCLUSION:
(i) The significant value 0.000 < 0.05, so H0 is rejected and H1 is accepted. There is a
significant relationship between the age and exam scores.
The R square value is 0.798, i.e. 79.8% of the variation in exam scores is explained by age,
and the regression coefficient B = 1.230 is the unit increase in the exam score for a unit
increase in age.
The predicted exam score of an employee at the age of 21 is Y = a + bX =
33.231 + 1.230(21) = 59.061.

(ii) The significant value 0.229 > 0.05, so H0 is accepted and H1 is rejected. Hence,
there is no significant relationship between the preparatory hours and the exam scores.
The R square value is 0.109, i.e. 10.9% of the variation in exam percentage is explained by
preparatory hours, and the regression coefficient B = 2.233 is the increase in exam percentage
when the preparatory time is increased by one hour.

RESULT:
Thus, the relationship between the variables using correlation and regression by Spss
was analyzed.

EXP. NO: 05                                          FORECASTING
Date: 16.03.2016

EXERCISE NO: 01

AIM:
To describe the formula syntax and usage of the FORECAST function in Microsoft Excel.

ALGORITHM:
Step 1: Click Start > All Programs > Microsoft Office > Microsoft Office Excel. A
spreadsheet will appear on your screen.

Step 2: Enter data given in the table above in the spreadsheet.

Step 3: Type the value of known x to be predicted (March 2013) in cell B6.

Step 4: Type the FORECAST formula in cell A6 as =FORECAST(B6, A2:A5, B2:B5) and press ENTER.

Step 5: The predicted value of y for the x value of March 2013 is shown in cell A6.

QUESTION:

Calculates, or predicts, a future value by using existing values. The predicted value is a
y-value for a given x-value. The known values are existing x-values and y-values, and the new
value is predicted by using linear regression. You can use this function to predict future
sales, inventory requirements, or consumer trends. The dataset for our example of Tata
Consultancy Services profit from March 2009 to March 2012 is shown in the following table:

      A                         B
1     Known Y                   Known X
      Profit (in Rs. Cr.)       Time Period (Duration)
2     4696.21                   March 2009
3     5618.51                   March 2010
4     7569.99                   March 2011
5     10975.98                  March 2012

OUTPUT:

      A                         B
1     Profit (in Rs. Cr.)       Time Period (Duration)
2     4696.21                   March 2009
3     5618.51                   March 2010
4     7569.99                   March 2011
5     10975.98                  March 2012
6     12413.75                  March 2013
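Excel's FORECAST fits a straight line to the known x and y values and extrapolates it, so the same idea can be sketched in Python. The snippet below (an illustration, not the Excel procedure) assumes numpy is installed and codes the time periods 1-4 for March 2009-2012 and 5 for March 2013; because the spreadsheet uses actual dates as the known x values, its result may differ slightly:

import numpy as np

known_x = np.array([1, 2, 3, 4])                               # March 2009 .. March 2012
known_y = np.array([4696.21, 5618.51, 7569.99, 10975.98])      # profit in Rs. Cr.

slope, intercept = np.polyfit(known_x, known_y, 1)             # least-squares straight line
print("Forecast for March 2013:", intercept + slope * 5)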

CONCLUSION:
Tata Consultancy Service Profit forecasted for March 2013 is 12413.75.

RESULT
Thus, the given formula syntax and usage of the FORECAST Function in Microsoft excel is
solved successfully.

EXERCISE NO: 02

AIM:
To forecast the demand using simple average method, weighted moving average
method, simple exponential smoothing, linear regression in excel.

ALGORITHM:

STEP1: To find the simple moving average (3-week, i.e. for t >= 4):

Ft = (Dt-1 + Dt-2 + Dt-3) / 3
STEP2: To find the weighted moving average (i.e. for t >= 4):

Ft = (W1·Dt-3 + W2·Dt-2 + W3·Dt-1) / (W1 + W2 + W3)

STEP3: To find simple exponential smoothing.

Where F1 and will be given.

Ft =Ft-1+(At-1-Ft-1)

STEP4: To find linear regression Y= A+BX.

Where
A = X2Y - XXY
NX2 (X)2

B = NXY - XY
NX2 (X)2

STEP5: Substitute the given X value to find Y.
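
A minimal Python sketch of the same four formulas (an assumed plain-list implementation, not the Excel worksheet) may help when checking the tables below:

# Sketches of the forecasting formulas in Steps 1-4 (demand is a plain Python list).

def sma(demand, t, n=3):
    # Simple moving average of the n actuals immediately before period t (0-based index)
    return sum(demand[t - n:t]) / n

def wma(demand, t, weights=(1, 2, 3)):
    # Weighted moving average; the last weight applies to the most recent period
    recent = demand[t - len(weights):t]
    return sum(w * d for w, d in zip(weights, recent)) / sum(weights)

def ses(demand, alpha=0.2, f1=21):
    # Simple exponential smoothing: F(t) = F(t-1) + alpha*(A(t-1) - F(t-1));
    # the final element is the forecast for the period after the last actual.
    forecasts = [f1]
    for actual in demand:
        forecasts.append(forecasts[-1] + alpha * (actual - forecasts[-1]))
    return forecasts

def linreg(xs, ys):
    # Least-squares line Y = A + B*X using the normal-equation formulas of Step 4
    n, sx, sy = len(xs), sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = (sxx * sy - sx * sxy) / (n * sxx - sx ** 2)
    return a, b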

QUESTION:
(i) Arun Exports Limited is a major garment export house based at New Delhi. The sales
figures (in 1000 units) of a particular garment during the past 20 weeks are given below.

1. Calculate the 3 week moving average forecast for the given 20 weeks and
also forecast demand for 21st week.

2. Calculate the weighted moving average forecast for the given 20 weeks
with the weights being W1 = 1, W2 = 2, W3 = 3, and also forecast demand for
the 21st week.

3. Find the simple exponential smoothing forecast for all the 20 weeks and
also forecast the demand for the 21st week. Assume α = 0.2, F1 = 21.

Week 1 2 3 4 5 6 7 8 9 10
Demand 21 24 18 22 27 23 21 25 27 18
Week 11 12 13 14 15 16 17 18 19 20
Demand 19 22 26 24 17 21 29 29 23 25

(ii) The general manager of a building materials production plant feels that the demand for
plaster board shipments may be related to the number of construction permits issued in the
county during the previous quarter.

1. Compute values for the slope B and intercept A.

2. Determine a point estimate for the plaster board shipment when the number
of construction permit is 30.

Construction 15 9 40 20 25 25 15 35
Permit(X)
Plaster board 6 4 16 6 13 9 10 16
shipment(Y)

OUTPUT: 1
WEEK DEMAND SMA WMA SES
1 21 - - 21
2 24 - - 21
3 18 - - 21.60
4 22 21 20.50 20.88
5 27 21.33 21.00 21.10
6 23 22.33 23.83 22.28
7 21 24.00 24.17 22.43
8 25 23.67 22.67 22.14
9 27 23.00 23.33 22.71
10 18 24.33 25.33 23.57
11 19 23.33 22.17 22.46
12 22 21.33 20.00 21.77
13 26 19.67 20.33 21.81

14 24 22.33 23.50 22.65
15 17 24.00 24.33 22.92
16 21 22.33 20.83 21.74
17 29 20.67 20.17 21.59
18 29 22.33 24.33 23.07
19 23 26.33 27.67 24.26
20 25 27.00 26.00 24.01
21 -  25.66 25.00 24.20
CONCLUSION:
Therefore the forecast value of 21st week for

i. 3 week moving average is 25.667


ii. Weighted moving average is 25
iii. Simple Exponential Smoothing is 24.2

OUTPUT: 2
Construction Permit (X)   Plaster Board (Y)   X²     Y²    XY
15 6 225 36 90
9 4 81 16 36
40 16 1600 256 640
20 6 400 36 120
25 13 625 169 325
25 9 625 81 225
15 10 225 100 150
35 16 1225 256 560
Total   184   80   5006   950   2146

Regression coefficients computed from the sums above (N = 8):
A = 0.9069, B = 0.3953
Y = A + BX = 0.9069 + 0.3953(30) = 12.7674

CONCLUSION:
Therefore the point estimate for the plaster board shipment (Y) is 12.7674, i.e. approximately
13, when the number of construction permits is 30.

RESULT:
Thus, the given problems were solved in Microsoft Excel using the simple moving average
method, weighted moving average method, simple exponential smoothing and linear
regression, and the results were verified successfully.

EXP.NO.6
Date: PORTFOLIO SELECTION
16.06.2016

AIM:
To select the best portfolio using MS-Excel.

ALGORITHM:
STEP1: Find the mean & standard deviation using Excel functions for companies A, B & C.

STEP2: Compare the values of mean & standard deviation of the three companies.

STEP3: Find out which company's annual return is more than 12%.

QUESTION:
Eby Abraham has recently inherited some money which he would like to invest in
stock. Eby already holds stock in company A, and over the past ten years he has received an
average annual return of 7.48% on his investment. He would like to increase this and hence
informed his investment banker that an annual return of at least 12% is his desired objective.
The bank's fund investment manager has forwarded the details given below of two suitable
companies B and C, whose stock performances meet Eby's requirements.

Company 1 2 3 4 5 6 7 8 9 10
A 8.5 15.3 11.5 -1.6 -3.6 8.4 6.8 11.9 6.1 11.5
B 6.7 9.2 11.3 17.7 7.4 13 19.5 15.1 19.4 15.2
C 15.1 27.8 38.6 -12 -5.6 12.7 -2.1 12.8 36.8 22.7

OUTPUT:

Company              A       B       C
Mean                 7.48    13.45   14.68
Standard Deviation   5.99    4.72    17.35
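
These figures can be cross-checked with Python's statistics module (stdev is the sample standard deviation, the same measure as Excel's STDEV); a minimal sketch:

from statistics import mean, stdev

returns = {
    "A": [8.5, 15.3, 11.5, -1.6, -3.6, 8.4, 6.8, 11.9, 6.1, 11.5],
    "B": [6.7, 9.2, 11.3, 17.7, 7.4, 13, 19.5, 15.1, 19.4, 15.2],
    "C": [15.1, 27.8, 38.6, -12, -5.6, 12.7, -2.1, 12.8, 36.8, 22.7],
}
for company, r in returns.items():
    # mean annual return and sample standard deviation (risk) for each company
    print(company, round(mean(r), 2), round(stdev(r), 2))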

CONCLUSION:
Eby Abraham stated that the annual return must be at least 12%. Both Company B's and
Company C's annual returns are more than 12%, but Company B's standard deviation is
lower than Company C's, which means Company B carries the minimum risk compared to the
others. So, Eby Abraham should invest in Company B, which is the best portfolio.

RESULT:
Thus, the best portfolio was successfully selected by using Ms-Excel.

EXP.NO.7
Date: SENSITIVITY ANALYSIS
16.03.2016

AIM:
To perform sensitivity analysis using MS- Excel.

ALGORITHM:
STEP 1: Calculate demand, variable cost (Demand * Unit Variable Cost), revenue (Demand
* Price) and profit (Revenue - (Fixed Cost + Variable Cost)).

STEP 2: Variation of profit, variable cost and revenue with variation in price. (1 - way data
table).

STEP 3: Variation of profit with variation in price and variation in unit cost. (2 - way data
table).

STEP 4: Variation of variable cost with variation in price and variation in unit cost. (2 - way
data table).

STEP 5: Variation of revenue with variation in price and variation in unit cost. ( 2 -way data
table)

STEP 6: Select the cells where you need the variation and then go to Data → What-If
Analysis → Data Table and enter the row and column input cells.

QUESTION:
Price : 4

Demand : 65000-(9000*Price)

Unit variable cost: 0.45

Fixed cost : Rs. 45000

1. Calculate demand, variable cost, revenue and profit.

2. Calculate variation of profit, variable cost and revenue with variation in price.

3. Calculate variation of profit with variation in price and variation in unit cost.

4. Calculate variation of variable cost with variation in price and variation in unit cost.

5. Calculate variation of revenue with variation in price and variation in unit cost.

OUTPUT:

PRICE                4
DEMAND               29000
UNIT VARIABLE COST   0.45
FIXED COST           45000
VARIABLE COST        13050
REVENUE              116000
PROFIT               57950

ONE WAY DATA TABLE:
PRICE   PROFIT   VAR.COST   REVENUE
        57950    13050      116000
1       -14200   25200      56000
1.5     9075     23175      77250
2       27850    21150      94000
2.5     42125    19125      106250
3       51900    17100      114000
3.5     57175    15075      117250
4       57950    13050      116000
4.5     54225    11025      110250
5       46000    9000       100000
5.5     33275    6975       85250
6       16050    4950       66000

TWO WAY DATA TABLE:

PROFIT (price vs. unit variable cost):
57950   0.3     0.35    0.4     0.45    0.5     0.55    0.6

1 -5800 -8600 -11400 -14200 -17000 -19800 -22600


1.5 16800 14225 11650 9075 6500 3925 1350
2 34900 32550 30200 27850 25500 23150 20800
2.5 48500 46375 44250 42125 40000 37875 35750
3 57600 55700 53800 51900 50000 48100 46200
3.5 62200 60525 58850 57175 55500 53825 52150
4 62300 60850 59400 57950 56500 55050 53600
4.5 57900 56675 55450 54225 53000 51775 50550
5 49000 48000 47000 46000 45000 44000 43000
5.5 35600 34825 34050 33275 32500 31725 30950
6 17700 17150 16600 16050 15500 14950 14400

VARIABLE COST (price vs. unit variable cost):
13050   0.3     0.35    0.4     0.45    0.5     0.55    0.6
1       16800   19600   22400   25200   28000   30800   33600
1.5     15450   18025   20600   23175   25750   28325   30900
2       14100   16450   18800   21150   23500   25850   28200
2.5     12750   14875   17000   19125   21250   23375   25500
3       11400   13300   15200   17100   19000   20900   22800
3.5     10050   11725   13400   15075   16750   18425   20100
4       8700    10150   11600   13050   14500   15950   17400
4.5     7350    8575    9800    11025   12250   13475   14700
5       6000    7000    8000    9000    10000   11000   12000
5.5     4650    5425    6200    6975    7750    8525    9300
6       3300    3850    4400    4950    5500    6050    6600

REVENUE (price vs. unit variable cost):
116000  0.3     0.35    0.4     0.45    0.5     0.55    0.6
1       56000   56000   56000   56000   56000   56000   56000
1.5     77250   77250   77250   77250   77250   77250   77250
2       94000   94000   94000   94000   94000   94000   94000
2.5     106250  106250  106250  106250  106250  106250  106250
3       114000  114000  114000  114000  114000  114000  114000
3.5     117250  117250  117250  117250  117250  117250  117250
4       116000  116000  116000  116000  116000  116000  116000
4.5     110250  110250  110250  110250  110250  110250  110250
5       100000  100000  100000  100000  100000  100000  100000
5.5     85250   85250   85250   85250   85250   85250   85250
6       66000   66000   66000   66000   66000   66000   66000
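
The same tables can be regenerated outside Excel. A minimal Python sketch of the underlying model, with the demand equation and fixed cost taken from the question, is given below; the loops simply replay what the one-way and two-way data tables do.

# Profit model from the question: demand = 65000 - 9000 * price, fixed cost = 45000.
FIXED_COST = 45000

def metrics(price, unit_cost=0.45):
    demand = 65000 - 9000 * price
    variable_cost = demand * unit_cost
    revenue = demand * price
    profit = revenue - (FIXED_COST + variable_cost)
    return profit, variable_cost, revenue

prices = [1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6]
unit_costs = [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6]

# One-way table: vary price only (unit variable cost fixed at 0.45)
for p in prices:
    print(p, metrics(p))

# Two-way table: profit as price and unit variable cost vary together
for p in prices:
    print(p, [metrics(p, c)[0] for c in unit_costs])
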
RESULT:
Thus, the sensitivity analysis was performed using Microsoft Excel and the output was
verified successfully.

EXP.NO.8
Date: FINANCIAL MANAGEMENT
16.03.2016

EXERCISE NO: 01

AIM:
To describe the usage of the financial function in Microsoft Excel.

ALGORITHM:
Step 1: Click Start → All Programs → Microsoft Office → Microsoft Office Excel. A
spreadsheet will appear on your screen.

Step 2: Enter the data given in the spreadsheet.

Step 4: Type the Financial functions given below for attaining the result/calculation for the
given questions:

1. Find out the rate of interest charged: RATE (nper, pmt, PV, 0, 0)

2. Find out the future value: FV (rate, nper, pmt, 0, 0)

3. Find out the PV of future payment: PV (rate, nper, pmt, 0, 0)

4. Find out the payment period: NPER (rate, pmt, pv, 0, 0)

5. Find out the EMI: PMT (rate, nper, pv, 0, 0)

6. Find out the IRR: IRR (values, guess).

7. Find out the NPV: NPV (rate, values).

QUESTION:

(i). Suppose we have availed a loan of Rs. 1,00,000 that is to be paid off in 48 monthly
instalments of Rs. 3,000 each. Find out the rate of interest charged on this loan.

(ii). You deposit Rs. 1,000 every month in your bank account. The bank pays a 12% annual
rate that is compounded every month. Find out how much money will be in your account
at the end of 24 months.

(iii). You expect to receive Rs. 800 every month over the next 24 months. If the current
discount rate is 12% per annum, what is the present value of these future payments?

(iv). You can afford only Rs. 500 per month. If you are crediting this amount in a bank that
pays an annual interest of 12% compounded monthly, how long will it take for your
investment to accumulate to Rs. 50,000?

(v). Suppose you want to take a loan of Rs. 2,00,000 at an annual interest rate of 14%. The
loan has to be repaid in 15 years in equal monthly instalments. Find out the EMI.

(vi). You are expected to get 5 monthly payments of Rs. 500, 900, 550, 478 and 950
respectively, at a discount rate of 10% per annum. Find the Net Present Value (NPV).

(vii). Assume that an initial investment of Rs. 1,00,000 results in 12 annual cash inflows
as given below.
13200; 15000; 13000; 2000; 12400; 16000; 14000; 16450; 17690; 16550; 16500; 12200.

Find the Internal Rate of Return (IRR).

OUTPUT:

1. RATE OF INTEREST:

RATE (48, -3000, 100000, 0, 0)


Rate of Interest = 1.5991%
2. FUTURE VALUE:
FV (12%/12, 24, -1000, 0, 0)
Future Value = 26,973.46
3. PRESENT VALUE OF FUTURE PAYMENT:
PV (12%/12, 24, -800, 0, 0)
Present value of the future payments = 16,994.71
4. PAYMENT PERIOD:
NPER (12%/12, -500, 0, 50000, 0)
Payment Period = 70 months (approximately)

5. EMI:
PMT (14%/12, 15*12, -200000, 0, 0)
EMI = 2,663.48
6. NET PRESENT VALUE:
NPV (10%, 500, 900, 550, 478, 950)
Net Present Value = 2,527.93
7. INTERNAL RATE OF RETURN:
IRR (-100000, 13200, 15000, 13000, 2000, 12400, 16000, 14000, 16450, 17690,
16550, 16500, 12200, 10%)
Internal Rate of Return = 8%
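
The same values follow from the standard end-of-period annuity formulas. A minimal Python sketch (my own helpers, not Excel; the monthly rate annual/12 is used wherever the question implies monthly compounding) is:

import math

# Closed-form annuity formulas behind Excel's FV, PV, PMT and NPER (end-of-period payments).

def fv(rate, nper, pmt):
    # Future value of a level deposit made at the end of each period
    return pmt * ((1 + rate) ** nper - 1) / rate

def pv(rate, nper, pmt):
    # Present value of a level payment received at the end of each period
    return pmt * (1 - (1 + rate) ** -nper) / rate

def pmt(rate, nper, principal):
    # Level instalment that repays `principal` over nper periods
    return principal * rate / (1 - (1 + rate) ** -nper)

def nper(rate, payment, target):
    # Number of periods for a recurring deposit to accumulate to `target`
    return math.log(1 + target * rate / payment) / math.log(1 + rate)

print(fv(0.12 / 12, 24, 1000))       # about 26,973.46  (question ii)
print(pv(0.12 / 12, 24, 800))        # about 16,994.71  (question iii)
print(nper(0.12 / 12, 500, 50000))   # about 69.7, i.e. 70 months (question iv)
print(pmt(0.14 / 12, 180, 200000))   # about 2,663.48   (question v)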

RESULT:
Thus, the given problems are solved using the financial functions in Microsoft Excel.

EXERCISE NO: 02

AIM:
To calculate the given ratios in Microsoft Excel.

ALGORITHM:
Step 1: Click Start → All Programs → Microsoft Office → Microsoft Office Excel. A
spreadsheet will appear on your screen.

Step 2: Enter the data given in the spreadsheet.

Step 3: Calculate the given ratios.

QUESTION:
The following financial details of Express Company Ltd are available. You are required to
calculate the following ratios:

Business profitability
Gross profit margin (Gross profit)/Net Sales
Net Profit margin (Net Profit after tax)/(Net Sales)
Return on equity (Net Profit after tax)/(Owner's equity)

Financial Stability
Current Ratio (Current assets)/Current Liabilities
Debt/Equity Ratio (Total assets-Owner's equity)/Owner's equity
Quick Ratio (Current assets less inventory)/(Current Liabilities)

Resource Utilization
Total Asset Turnover (Net sales)/Total assets
Inventory Turnover (Cost of goods sold)/Inventory
Debtors Turnover (Credit Sales)/Debtors

Calculate the above ratios for the Express Company Limited

Express company Ltd -Final Accounts

P & L Account
For Year ended 31st Dec 2010 31st Dec 2011
Rs.'000    Rs.'000    Rs.'000    Rs.'000
Sales
Cash 150 180
Credit 330 480 420 600
Less: Cost of Goods sold
Opening inventory 200 195
Purchases 60 80
260 275
Less closing inventory 30 230 10 265

Gross Profit 250 335


Less: Operating expenses
administrative expenses 75 70
financial expenses 18 20
selling and distribution
expenses 55 148 60 150

Net Profit before tax 102 185


Less: corporation tax 25 47
Net profit after tax (earnings) 77 138

Balance Sheet
As at 31st Dec 2010 31st Dec 2011
Fixed Assets    Rs.'000    Rs.'000    Rs.'000    Rs.'000
Building and land 60 90
Equipment 110 170 85 175

Current assets
Inventory 30 10
Debtors 50 90
Cash 150 230 100 200
400 375

Current liabilities & owner's equity


Creditors 80 55
Dividends 30 42
Overdraft 4 114 20 117
Owner's equity 286 258
Total liabilities & owner's equity 400 375

OUTPUT:

Business profitability:                                31st Dec 2010            31st Dec 2011

Gross profit margin
(gross profit / net sales)                             = 250/480 = 0.520833     = 335/600 = 0.558333
Net profit margin
(net profit after tax / net sales)                     = 77/480 = 0.160416      = 138/600 = 0.23
Return on equity
(net profit after tax / owner's equity)                = 77/286 = 0.269230      = 138/258 = 0.534883

Financial stability:
Current ratio
(current assets / current liabilities)                 = 230/114 = 2.017543     = 200/117 = 1.709401
Debt/equity ratio
((total assets - owner's equity) / owner's equity)     = (400-286)/286 = 0.398601   = (375-258)/258 = 0.453488
Quick ratio
((current assets - inventory) / current liabilities)   = (230-30)/114 = 1.754385    = (200-10)/117 = 1.623931

Resource utilization:
Total asset turnover
(net sales / total assets)                             = 480/400 = 1.2          = 600/375 = 1.6
Inventory turnover
(cost of goods sold / inventory)                       = 260/30 = 8.6666        = 275/10 = 27.5
Debtors turnover
(credit sales / debtors)                               = 330/50 = 6.6           = 420/90 = 4.6667

RESULT:
Thus, the given ratios are calculated in Microsoft excel.

EXERCISE NO: 03

AIM:
To calculate the Weighted Average Cost for the given problem in Microsoft Excel.

ALGORITHM:
Step 1: Click Start → All Programs → Microsoft Office → Microsoft Office Excel. A
spreadsheet will appear on your screen.

Step 2: Enter the data given in the spreadsheet.

Step 3: Calculate the proportion, after tax cost and Weighted Average cost.

Step 4: Calculate the Total Weighted Average Cost

QUESTION:

With the following information available for a company, calculate the Weighted Average Cost
of Capital.

Equity Shares 25,00,000

Debentures (12%) 40,00,000
Preference Shares (13%) 20,00,000
Retained Earnings 10,00,000

The expected dividend on the equity shares is estimated at Rs. 4.50 per share issued at
Rs. 10 each; the company's tax rate is 50%.

OUTPUT:

Sources of funds      Amount     After-tax cost (%)    Proportion    WACC (%)
Equity Shares 2500000 45 0.2632 11.84
debentures 4000000 6 0.4211 2.53
preferences Shares 2000000 13 0.2105 2.73
Retained Earnings 1000000 22.5 0.1053 2.37
TOTAL 9500000 19.47

The weighted average cost of capital is 19.47%.
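
A minimal Python sketch of the same computation (component costs as in the worksheet above: equity 45%, debentures 12% less the 50% tax shield, preference shares 13%, retained earnings 22.5%):

# WACC = sum over sources of (proportion of total funds * after-tax cost of that source).
sources = {                      # amount (Rs.), after-tax cost (%)
    "equity shares":     (2500000, 45.0),
    "debentures":        (4000000, 12.0 * (1 - 0.5)),
    "preference shares": (2000000, 13.0),
    "retained earnings": (1000000, 22.5),
}
total = sum(amount for amount, _ in sources.values())
wacc = sum((amount / total) * cost for amount, cost in sources.values())
print(round(wacc, 2))            # about 19.47 (%)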

RESULT:
Thus, the given problem is solved using financial management function in Microsoft
excel.

EXERCISE NO: 04

AIM:
To calculate the EPS (Earnings per Share) for the given problem in Microsoft Excel.

ALGORITHM:
Step 1: Click Start → All Programs → Microsoft Office → Microsoft Office Excel. A
spreadsheet will appear on your screen.

Step 2: Enter the data given in the spreadsheet.

Step 3: Calculate the interest for debentures

Step 4: Find out the PBT (Profit Before Tax)

Step 5: Calculate the EAT (Earnings After Tax) by deducting tax from PBT

Step 6: Calculate EPS (Earning Per Share)

EPS = EAT/No of Equity Shares


QUESTION:

Calculate EPS from the following information made available to you.

Capital structure:-

10,000 equity shares of Rs. 100 each - Rs. 10,00,000

14% Debentures - Rs. 40,00,000

The expected EBIT of the company is estimated at Rs. 15,00,000 per annum. The company is
in the tax bracket of 40%.

OUTPUT:

EBIT 1500000
Less: Debenture interest (14% of 40,00,000) 560000
EBT 940000

Less: Tax 40% 376000
EAT 564000

Calculation of Earnings Per Share:


Earnings
EPS = --------------------------
No. of equity shares
= 564000 / 10000
= Rs. 56.40 per share

RESULT:
Thus, the given problem is solved using financial management function in Microsoft
Excel.

EXP.NO.9
Date: LINEAR PROGRAMMING
05.04.2016
EXERCISE NO: 01

AIM:
To obtain feasible solution through Linear Programming using TORA

ALGORITHM:
Step 1: Select TORA in windows
Step 2: Press any key to continue and select Linear Programming
Step 3: Enter new problem and give title (E.g. Linear Programming)
Step 4: Enter the Variables (X1) and Constraints count
Step 5: Press Y for Yes and N for No for the queries
Step 6: Enter the user names for variables
Step 7: Enter the Constraint Values
Step 8: Click Solve Menu
Step 9: Save the file and press enter
Step 10: Select Solve Problem → Graphical
Step 11: Go to Output Screen

QUESTION :

Solve the following linear programming problem using Graphical Method.


Maximize Z = 2 X1 + 3 X2
Subject to
X1 + X2 >= 6
7 X1 + X2 >= 14
X1, X2 >= 0

OUTPUT:

RESULT:
Thus, the feasible solution was obtained successfully through linear programming
using TORA.

EXERCISE NO: 02

AIM:
To obtain feasible solution through Linear Programming using TORA

ALGORITHM:
Step 1: Select TORA in windows
Step 2: Press any key to continue and select Linear Programming
Step 3: Enter new problem and give title (E.g. Linear Programming)
Step 4: Enter the Variables (X1) and Constraints count
Step 5: Press Y for Yes and N for No for the queries
Step 6: Enter the user names for variables
Step 7: Enter the Constraint Values
Step 8: Click Solve Menu
Step 9: Save the file and press enter
Step 10: Select Solve Problem → Algebraic → Iteration → Bounded Simplex
Step 11: Go to Output Screen

QUESTION :
Solve the following linear programming problem using Simplex Method
Maximize Z = 6 X1 + 8 X2
Subject to
5 X1 + 10 X2 <= 60
4 X1 + 4 X2 <= 40
X1, X2 >= 0
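
As a cross-check of the TORA result, the same LP can be solved with SciPy's linprog (a sketch assuming SciPy is available; linprog minimises, so the objective coefficients are negated):

from scipy.optimize import linprog

# Maximise Z = 6 X1 + 8 X2  subject to  5 X1 + 10 X2 <= 60,  4 X1 + 4 X2 <= 40,  X1, X2 >= 0
c = [-6, -8]                              # negated for maximisation
A_ub = [[5, 10], [4, 4]]
b_ub = [60, 40]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                    # expected optimum: X1 = 8, X2 = 2, Z = 64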

OUTPUT:

RESULT:
Thus, the feasible solution was obtained successfully through linear programming
using TORA.

EX.NO.10
Date: TRANSPORTATION
05.04.2016

AIM:
To obtain feasible solution through Transportation using TORA

ALGORITHM:
Step 1: Select TORA in windows
Step 2: Press any key to continue and select Transportation Model → go to the input screen
Step 3: Enter the Problem title, Number of sources (From) and Destination (To) and press
Enter
Step 4: Enter the given data
Step 5: Click Solve Menu
Step 6: Save the file and press enter
Step 7: Select Solve Problem → Iteration → North West Starting Solution, Least Cost
Starting Solution or Vogel's Starting Solution
Step 8: Go to Output Screen

QUESTION:
(i)Find the feasible solution for the transportation problem using North West Corner rule:

To D E F Supply
From
A 6 4 1 50

B 3 8 7 40

C 4 4 2 60

Demand 20 95 35 150

(ii)Find the feasible solution for the transportation problem using Least cost method:

To D E F Supply
From
A 6 4 1 50

B 3 8 7 40

C 4 4 2 60

Demand 20 95 35 150

(iii) Find the basic feasible solution for the following transportation problem using Vogel's
approximation method:

To D1 D2 D3 D4 Supply
From
O1 11 13 17 14 250

O2 16 18 14 10 300

O3 21 24 13 10 400

Demand 200 225 275 250 950
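
Before reading the TORA screens, the North-West Corner starting solution for problem (i) can be sketched in a few lines of Python (a minimal illustration of the rule itself, not of TORA):

# North-West Corner rule: start at the top-left cell and exhaust supply or demand in turn.
def north_west_corner(supply, demand, cost):
    supply, demand = supply[:], demand[:]
    i = j = total = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        total += qty * cost[i][j]
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1          # row exhausted, move down
        else:
            j += 1          # column exhausted, move right
    return total

cost = [[6, 4, 1],          # A -> D, E, F
        [3, 8, 7],          # B -> D, E, F
        [4, 4, 2]]          # C -> D, E, F
print(north_west_corner([50, 40, 60], [20, 95, 35], cost))   # initial solution cost: 730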

OUTPUT:1

OUTPUT: 2

OUTPUT: 3

RESULT
Thus, the feasible solution is obtained successfully through transportation using TORA.

EX.NO.11
Date: ASSIGNMENT
06.04.2016
AIM:

To obtain feasible solution through Assignment using POM-QM Software

ALGORITHM:

STEP 1: Open POM-QM software.

STEP 2: Select LP: Assignment from the module.

STEP 3: Enter the objects to be assigned.

STEP 4: Enter the given data

STEP 5: Click solve and note down the results.

QUESTION:
(i) The following matrix gives the time taken by operators A, B and C to perform jobs 1, 2
and 3. Assign the operators to the jobs so as to minimize the total time taken to complete the jobs.

Operator Job1 Job2 Job 3

A 10 16 7

B 9 17 6

C 6 13 5

(ii) A firm wants to purchase three different types of equipment, and five manufacturers have
come forward to supply one or all of the three machines. However, the firm's policy is not to
accept more than one machine from any of the manufacturers. The data relating to the price (in
lakhs of rupees) quoted by the different manufacturers are given below.

MACHINES

Manufacturers
1 2 3

A 2.99 3.11 2.68

B 2.78 2.87 2.57

C 2.92 3.05 2.80

D 2.82 3.10 2.74

E 3.11 2.90 2.64

Determine how best the firm can purchase three machines.
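
As a cross-check of the POM-QM output, SciPy's Hungarian-algorithm routine solves both problems directly; a minimal sketch for problem (i) follows (the 5 x 3 matrix of problem (ii) can be passed the same way, since rectangular matrices are accepted):

from scipy.optimize import linear_sum_assignment

# Problem (i): rows = operators A, B, C; columns = jobs 1, 2, 3
cost = [[10, 16, 7],
        [9, 17, 6],
        [6, 13, 5]]
rows, cols = linear_sum_assignment(cost)
assignment = [(op, int(job) + 1) for op, job in zip("ABC", cols)]
total = sum(cost[r][c] for r, c in zip(rows, cols))
print(assignment, total)        # expected: A -> job 2, B -> job 3, C -> job 1, total 28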

OUTPUT:1
*** LINEAR PROGRAMMING **
PROBLEM NAME: ASSIGNMENT
Min Z= 10 X1 + 16 X2 + 7 X3 + 9 X4 + 17 X5 + 6 X6 + 6 X7 + 13 X8 + 5 X9
ST
(1) 1 X1 + 1 X2 + 1 X3 = 1
(2) 1 X4 + 1 X5 + 1 X6 = 1
(3) 1 X7 + 1 X8 + 1 X9 = 1
(4) 1 X1 + 1 X4 + 1 X7 = 1
(5) 1 X2 + 1 X5 + 1 X8 = 1
(6) 1 X3 + 1 X6 + 1 X9 = 1
==================================================================
SOLUTION:
ITERATION NUMBER 10
VARIABLE MIX SOLUTION
X2 1.000
X8 0.000
X6 1.000
X4 0.000
X7 1.000
Artificial 6 0.000
Z 28.000
Assignment Problem Solution
JOB1 JOB2 JOB3
A 0 1 0
B 0 0 1
C 1 0 0
Total cost or profit is $ 28
==================================================================
SENSITIVITY ANALYSIS:
CONSTRAINTS:
--------------------------------------------------------------------------
RANGE OF RHS
CONSTRAINT TYPE OF SHADOW FOR WHICH SHADOW
NUMBER CONSTRAINT PRICE PRICE IS VALID
---------- ---------- ------ ----------------
NOTE: RHS shadow prices are not meaningful
for an assignment problem.
--------------------------------------------------------------------------
DECISION VARIABLES:
--------------------------------------------------------------------------
NONBASIC AMOUNT Z IS REDUCED (MAX) OR INCREASED (MIN)
VARIABLE FOR ONE UNIT OF X IN THE SOLUTION
-------- --------------------------------------------
NOTE: Here are non basic variables with zero shadow
prices. Other shadow price values are of
questionable value in assignment problems.

OUTPUT:2

*** LINEAR PROGRAMMING ***


PROBLEM NAME: ASSIGNMENT
Min Z= 2.99 X1 + 3.11 X2 + 2.68 X3 + 2.78 X6 + 2.87 X7 + 2.57 X8 + 2.92 X11
+ 3.05 X12 + 2.8 X13 + 2.82 X16 + 3.1 X17 + 2.74 X18 + 3.11 X21
+ 2.9 X22 + 2.64 X23
ST
(1) 1 X1 + 1 X2 + 1 X3 + 1 X4 + 1 X5 = 1
(2) 1 X6 + 1 X7 + 1 X8 + 1 X9 + 1 X10 = 1
(3) 1 X11 + 1 X12 + 1 X13 + 1 X14 + 1 X15 = 1
(4) 1 X16 + 1 X17 + 1 X18 + 1 X19 + 1 X20 = 1
(5) 1 X21 + 1 X22 + 1 X23 + 1 X24 + 1 X25 = 1
(6) 1 X1 + 1 X6 + 1 X11 + 1 X16 + 1 X21 = 1
(7) 1 X2 + 1 X7 + 1 X12 + 1 X17 + 1 X22 = 1
(8) 1 X3 + 1 X8 + 1 X13 + 1 X18 + 1 X23 = 1
(9) 1 X4 + 1 X9 + 1 X14 + 1 X19 + 1 X24 = 1
(10) 1 X5 + 1 X10 + 1 X15 + 1 X20 + 1 X25 = 1
=======================================================================
SOLUTION:
ITERATION NUMBER 17
VARIABLE MIX SOLUTION
------------ --------
X5 1.000
X22 1.000
X8 1.000
X14 1.000
X15 0.000
X6 0.000
X3 0.000
X16 1.000
X23 0.000
Artificial 10 0.000

Z 8.290

Assignment Problem Solution

MAC1 MAC2 MAC3 MAC4 MAC5


A 0 0 0 0 1
B 0 0 1 0 0
C 0 0 0 1 0
D 1 0 0 0 0
E 0 1 0 0 0

Total cost or profit is $ 8.29


========================================================================
SENSITIVITY ANALYSIS:
========================================================================

CONSTRAINTS:
--------------------------------------------------------------------------
RANGE OF RHS
CONSTRAINT TYPE OF SHADOW FOR WHICH SHADOW
NUMBER CONSTRAINT PRICE PRICE IS VALID
---------- ---------- ------ ----------------
NOTE: RHS shadow prices are not meaningful
for an assignment problem.
--------------------------------------------------------------------------
DECISION VARIABLES:
--------------------------------------------------------------------------
NONBASIC AMOUNT Z IS REDUCED (MAX) OR INCREASED (MIN)
VARIABLE FOR ONE UNIT OF X IN THE SOLUTION
-------- --------------------------------------------
NOTE: Here are non basic variables with zero shadow
prices. Other shadow price values are of
questionable value in assignment problems.
X4 0.
--------------------------------------------------------------------------

RESULT:
Thus, the feasible solution is obtained through Assignment using POM QM software.

EXP.NO.12
Date: NETWORKING MODELS (CPM & PERT)
06.04.2016
AIM:
To find the critical path using TORA module.

ALGORITHM:

Step 1: Choose PERT/CPM model in TORA.


Step 2: Give the number of nodes.
Step 3: Provide the duration for each node.
Step 4: Enter the given data
Step 5: Click Solve Menu
Step 6: Save the file and press enter
Step 7: Select Solve Problem
Step 8: Go to Output Screen

QUESTION:
CRITICAL PATH METHOD
(i) A project schedule has the following characteristics as shown in Table given below

Activity Name Time Activity Name Time


(days) (days)
1-2 A 4 5-6 G 4
1-3 B 1 5-7 H 8
2-4 C 1 6-8 I 1
3-4 D 1 7-8 J 2
3-5 E 6 8-10 K 5
4-9 F 5 9-10 L 7

1. Compute Te and TL for each activity.
2. Find the critical path.
3. Calculate the project duration
PROGRAMME EVALUATION REVIEW TECHNIQUE
(ii) R and D project has a list of tasks to be performed whose time estimates are given in the
table. Draw the network diagram for the R&D project.
Activity   Name   To (days)   Tm (days)   Tp (days)
1-2 A 4 6 8

1-3 B 2 3 10

1-4 C 6 8 16

2-4 D 1 2 3

3-4 E 6 7 8

3-5 F 6 7 14

4-6 G 3 5 7

4-7 H 4 11 12

5-7 I 2 4 6

6-7 J 2 9 10

NETWORK DIAGRAM:

CPM: network drawn for nodes 1-10 with activities A-L as per the table above.

PERT: network drawn for nodes 1-7 with activities A-J as per the table above.
OUTPUT:
CPM:

PERT:
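
As a cross-check of the TORA output for question (i), the project duration and critical path can be recovered with a simple forward pass over the activity list; a minimal Python sketch (assuming the node numbers in the table are already in topological order) is:

# Forward pass (CPM): the longest path from node 1 to node 10 gives the project duration.
activities = {            # (from node, to node): duration in days, question (i)
    (1, 2): 4, (1, 3): 1, (2, 4): 1, (3, 4): 1, (3, 5): 6, (4, 9): 5,
    (5, 6): 4, (5, 7): 8, (6, 8): 1, (7, 8): 2, (8, 10): 5, (9, 10): 7,
}
earliest, pred = {1: 0}, {}
for node in sorted({n for edge in activities for n in edge}):   # node numbers are topological here
    for (i, j), d in activities.items():
        if j == node and earliest[i] + d > earliest.get(j, -1):
            earliest[j] = earliest[i] + d
            pred[j] = i

path, node = [10], 10
while node in pred:                 # walk predecessors back from the end node
    node = pred[node]
    path.append(node)
print(path[::-1], earliest[10])     # expected critical path: [1, 3, 5, 7, 8, 10], 22 days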

RESULT:
Thus the critical path is obtained successfully using TORA Module.

EXP.NO.13
Date: QUEUING THEORY
06.04.2016

AIM:
To solve the provided problem under waiting time models of queuing theory.

ALGORITHM:

STEP1: Open POM-QM software.

STEP2: Select waiting line from the module.

STEP3: There are several models available, such as M/M/1 and M/D/1; choose one and
execute it by following the steps below.

STEP 4: Select M/M/1 and provide the inputs such as the arrival rate and the service rate,
with all the data expressed over the same time interval.

STEP 5: Click solve and note down the results.

STEP 6: Repeat the experiment for other module and note the result.

QUESTION:
(i) Customers arrive at a booking office window, being manned by a single individual at a
rate of 25 per hour. Time required to serve a customer has exponential distribution with a
mean of 120 seconds. Find the mean waiting time of a customer in the queue.
(ii) Belt snappings for conveyors in an open cast mine occur at the rate of 2 per shift. There
is only one hot plate available for vulcanizing, and it can vulcanize on an average 5 belt
snaps per shift.
a. What is the probability that when a belt snaps, the hot plate is readily available?
b. What is the average number of belts in the system?
c. What is the waiting time of an arrival?
d. What is the average waiting plus vulcanizing time?

(iii) A repairman is to be hired to repair machines which breakdown at an average rate of 6


per hour. The breakdowns follow Poisson distribution. The non-production time of a machine
is considered to cost Rs. 20 per hour. Two repairmen, Mr.X and Mr.Y have been interviewed
for this purpose. Mr. X charges Rs. 10 per hour and he services breakdown machines at the
rate of 8 per hour. Mr. Y. demands Rs. 14 per hour and he services at an average of 12 per
hour. Which repairman should be hired?
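
All three questions are single-channel (M/M/1) models. A minimal Python sketch of the standard M/M/1 formulas (my own helper, not POM-QM) should reproduce the figures in the outputs below:

# Single-channel (M/M/1) queue measures for arrival rate lam and service rate mu (lam < mu).
def mm1(lam, mu):
    rho = lam / mu                                   # utilisation
    return {
        "Lq (avg. number waiting)":   lam * lam / (mu * (mu - lam)),
        "L (avg. number in system)":  lam / (mu - lam),
        "Wq (avg. waiting time)":     lam / (mu * (mu - lam)),
        "W (avg. time in system)":    1 / (mu - lam),
        "P0 (idle probability)":      1 - rho,
    }

print(mm1(25, 30))   # booking office: Lq = 4.1667, Wq = 0.1667 hr, W = 0.2 hr, P0 = 0.1667
print(mm1(2, 5))     # belt snapping:  P0 = 0.6, L = 0.6667, Wq = 0.1333, W = 0.3333 shift
print(mm1(6, 8))     # Mr. X: W = 0.5 hr, Wq = 0.375 hr
print(mm1(6, 12))    # Mr. Y: W = 0.1667 hr, Wq = 0.0833 hr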

OUTPUT 1:

*** WAITING LINE MODELS ***


--------------------------------------------------------------------------
PROBLEM NAME: QT 1
--------------------------------------------------------------------------

MODEL: Single Channel

Arrival Rate (lambda) = 25


Service Rate (mu) = 30

Average Number of Units in Waiting Line = 4.1667


Average Number of Units in System = 5.0000
Average Waiting Time in Line = 0.1667
Average Time in System = 0.2000

Probability of Idle System = 0.1667

Probability of 1 units in the system = 0.1389


Probability of 2 units in the system = 0.1157
Probability of 3 units in the system = 0.0965
Probability of 4 units in the system = 0.0804
Probability of 5 units in the system = 0.0670
Probability of 6 units in the system = 0.0558
Probability of 7 units in the system = 0.0465
Probability of 8 units in the system = 0.0388
Probability of 9 units in the system = 0.0323
Probability of 10 units in the system = 0.0269
Probability of 11 units in the system = 0.0224
Probability of 12 units in the system = 0.0187
Probability of 13 units in the system = 0.0156
Probability of 14 units in the system = 0.0130
Probability of 15 units in the system = 0.0108

CONCLUSION:

Mean waiting time of customer in the queue is 0.1667

OUTPUT 2:

*** WAITING LINE MODELS ***


--------------------------------------------------------------------------
PROBLEM NAME: QT 2
--------------------------------------------------------------------------

MODEL: Single Channel

Arrival Rate (lambda) = 2


Service Rate (mu) = 5

Average Number of Units in Waiting Line = 0.2667


Average Number of Units in System = 0.6667
Average Waiting Time in Line = 0.1333
Average Time in System = 0.3333

Probability of Idle System = 0.6000

Probability of 1 units in the system = 0.2400


Probability of 2 units in the system = 0.0960
Probability of 3 units in the system = 0.0384
Probability of 4 units in the system = 0.0154

--------------------------------------------------------------------------

CONCLUSION:

a) The probability that the hot plate is readily available when a belt snaps (the idle
probability) is 0.6000

b) Average number of belts in the system is 0.6667
c) Waiting time of an arrival is 0.1333 shift
d) Average waiting plus vulcanizing time (time in the system) is 0.3333 shift

OUTPUT 3:
*** WAITING LINE MODELS ***
--------------------------------------------------------------------------
PROBLEM NAME: MR X
--------------------------------------------------------------------------

MODEL: Single Channel

Arrival Rate (lambda) = 6


Service Rate (mu) = 8

Average Number of Units in Waiting Line = 2.2500


Average Number of Units in System = 3.0000
Average Waiting Time in Line = 0.3750
Average Time in System = 0.5000

Probability of Idle System = 0.2500

Probability of 1 units in the system = 0.1875


Probability of 2 units in the system = 0.1406
Probability of 3 units in the system = 0.1055
Probability of 4 units in the system = 0.0791
Probability of 5 units in the system = 0.0593
Probability of 6 units in the system = 0.0445
Probability of 7 units in the system = 0.0334
Probability of 8 units in the system = 0.0250
Probability of 9 units in the system = 0.0188
Probability of 10 units in the system = 0.0141
Probability of 11 units in the system = 0.0106

--------------------------------------------------------------------------
*** WAITING LINE MODELS ***
--------------------------------------------------------------------------
PROBLEM NAME: MR Y
--------------------------------------------------------------------------

MODEL: Single Channel

Arrival Rate (lambda) = 6


Service Rate (mu) = 12

Average Number of Units in Waiting Line = 0.5000


Average Number of Units in System = 1.0000
Average Waiting Time in Line = 0.0833
Average Time in System = 0.1667

Probability of Idle System = 0.5000

Probability of 1 units in the system = 0.2500


Probability of 2 units in the system = 0.1250
Probability of 3 units in the system = 0.0625
Probability of 4 units in the system = 0.0313
Probability of 5 units in the system = 0.0156

--------------------------------------------------------------------------

CONCLUSION:

Mr. X : Average time in system + average waiting time in line = 0.5000 + 0.3750 = 0.875

Mr. Y : Average time in system + average waiting time in line = 0.1667 + 0.0833 = 0.25
For Mr. X = (0.875 * 20) + 10 = 17.5 + 10 = 27.5

For Mr. Y = (0.25 * 20) + 14 = 5 + 14 = 19

Since the total cost incurred for the machines to be repaired by Mr. Y is lower than for Mr.
X, repairman Mr. Y should be hired.

RESULT:
Thus the given problem is solved using POM-QM software successfully.

EX.NO.14
Date: INVENTORY MODEL
06.04.2016

EXERCISE NO: 01

AIM:

To solve the provided inventory problem using Microsoft excel.

ALGORITHM:

STEP1: Open the excel sheet.

STEP2: Enter the demand (A), Ordering Cost (Oc) and Carrying cost (Cc).

STEP3: Find out Economic Order Quantity (EOQ) or (Q*) =SQRT {(2*A* Oc)/ Cc}

STEP 4: Find the Number of Orders per year = A/EOQ

STEP 5: Find the Time between Successive Orders (T) = EOQ/A

QUESTION:

Alpha industry needs 15,000 units/year of a bought-out component which will be used in its
main product. The ordering cost is Rs. 125 per order and the carrying cost per unit per year
is 20% of the purchase price per unit, which is Rs. 75.
Find a. Economic order quantity
b. Number of orders per year
c. Time between successive orders

OUTPUT:
Given:
Demand (A) --- 15,000 units per year
Ordering Cost (Oc) --- Rs. 125 per order
Carrying Cost (Cc) --- Rs. 15 per unit per year

Economic Order Quantity (EOQ)
= SQRT [(2*A*Oc)/Cc] = 500 units per order

Number of Orders per year
= A / EOQ = 30 orders

Time between Successive Orders (T)
= EOQ / A = 0.0333 years = 0.0333 x 12 = 0.4 months
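
A minimal Python sketch of the same EOQ arithmetic (plain formulas rather than the Excel sheet):

import math

# Basic EOQ model: A = annual demand, Oc = ordering cost per order, Cc = carrying cost/unit/year
A, Oc, Cc = 15000, 125, 0.20 * 75
eoq = math.sqrt(2 * A * Oc / Cc)          # 500 units per order
orders_per_year = A / eoq                 # 30 orders per year
months_between_orders = eoq / A * 12      # 0.4 months between successive orders
print(eoq, orders_per_year, round(months_between_orders, 2))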

RESULT:
Thus, the given inventory problem is solved using Microsoft excel.
EXERCISE NO: 02

AIM:

To solve the provided inventory problem using Microsoft excel.

ALGORITHM:

STEP1: Open the excel sheet.

STEP2: Enter the demand (A), Ordering Cost (Oc), Carrying cost (Cc) and Number of Units
produced per year (K).

STEP3: Find out Economic Batch Quantity (EBQ ) =SQRT {(2*A* Oc)/ (Cc (1-A/K))}

STEP 4: Find the Cycle Time (T) = Production Period (T1) + Consumption Period (T2), where

T1 = EBQ/K; T2 = (EBQ (1-A/K))/A

QUESTION:

A product is to be manufactured within the company, the details of which are as


follows: A=36000 units/year,
K = 72000 units/year,
Set up cost, Co = Rs. 250 per set-up
Carrying Cost Cc = Rs 25/unit/year
Find the EBQ and cycle time.

OUTPUT:
GIVEN:
Demand (A) = 36,000 units/year
Ordering Cost (Oc) = Rs. 250 per set-up
Carrying Cost (Cc) = Rs. 25/unit/year
No. of units produced per year (K) = 72,000 units/year

Economic Batch Quantity (EBQ)
= SQRT [(2*A*Oc)/(Cc*(1-A/K))] = 1200 units/batch

Production Period (T1) = EBQ/K = 0.2 months
Consumption Period (T2) = (EBQ*(1-A/K))/A = 0.2 months
Cycle Time (T) = T1 + T2 = 0.4 months

RESULT:
Thus, the given inventory problem is solved using Microsoft excel.
EXERCISE NO: 03

AIM:

To solve the provided inventory problem using Microsoft excel.

ALGORITHM:

STEP1: Open the excel sheet.

STEP2: Enter the demand (A), Ordering Cost (Oc), Carrying cost (Cc) and Shortage Cost (Cs).

STEP3: Find out Economic Order Quantity (EOQ) or (Q*) =SQRT {((2*A* Oc)/ Cc)*((Cs+
Cc)/ Cs)}

STEP 4: Find the Maximum Inventory (Q1*) = SQRT {((2*A*Oc)/Cc)*(Cs/(Cs+Cc))}

STEP 5: Find the Maximum Shortage Quantity ((Q2*) = Q* - Q1*

STEP 6: Find the Cycle Time (T) = EOQ/A.

STEP 7: Find the Inventory period T1 = Q1*/A

STEP 8: Find the Shortage Period T2 = T - T1

QUESTION:

The annual demand for an automobile component is 36000 units. The carrying cost is Rs.
0.50/unit/year, the ordering cost is Rs 25.00 per order and the shortage cost is Rs
15.00/unit/year. Find the optimal values of the following.
1. EOQ
2. Maximum inventory
3. Maximum Shortage Quantity
4. Cycle time
5. Inventory period(T1)
6. Shortage Period (T2)

OUTPUT:
GIVEN:
Demand (A) = 36000 units
Ordering Cost (OC) = Rs. 25.00 / order
Carrying Cost (Cc) = Rs. 0.50 / unit / year
Shortage Cost (Cs) = Rs. 15.00 / unit / year

Economic Order Quantity (EOQ) or (Q*)


= 1929 units / order
= SQRT (((2*A*Oc) / Cc) * ((Cs + Cc) / Cs))
Maximum Inventory (Q1*)
= 1867 units
= SQRT (((2*A*Oc) / Cc) * (Cs / (Cs + Cc)))
Maximum Shortage Quantity (Q2*)
= 62 units
= Q* - Q1*
Cycle Time (T)
= 0.643 months
= EOQ / A *12
Inventory Period (T1)
= 0.622 months
= Q1* / A *12
Shortage Period (T2)
= 0.021 months
= T - T1

RESULT:
Thus, the given inventory problem is solved using Microsoft excel
