Kristiaan Schreve
Stellenbosch University
kschreve@sun.ac.za
Stats Block
1 / 181
Overview I
1 Introduction
2 Some Important Concepts
3 Excel Demonstration
4 Graphing Data
    Choosing the right type of graph
    Guidelines for creating good scientific graphs
5 Calculating Averages with Excel
6 Standard Deviation and Variance
7 Z Scores
8 Higher Order Distribution Descriptors
9 Frequency and Histograms
10 Box-and-whisker Plots
11 The Normal Distribution
12 Confidence Limits
    Sampling distributions
Overview II
    Central limit theorem
    Limits of confidence
    t-distribution
    Normal distribution and t-distribution confidence limits compared
13
14
15
Overview III
    After the F-test
16 Regression
    Linear regression
    Testing hypotheses about regression
    Excel's R-squared
    Excel functions for regression
    Multiple regression
    Guidelines
17 Correlation
    Pearson's correlation coefficient
    Correlation and regression
    Testing hypotheses about correlation
18 Uncertainty of Measurement
    Evaluation of standard uncertainty
    Type A evaluation of standard uncertainty
    Type B evaluation of standard uncertainty
Overview IV
    Law of propagation of uncertainty for uncorrelated quantities
    Law of propagation of uncertainty for correlated quantities
    Determining expanded uncertainty
    Reporting uncertainty
    Example
19
Conditional Probability
Pr(event|condition)
Null hypothesis: H₀
Alternate hypothesis: H₁
Type II error
Not rejecting H0 when you should.
Excel Demonstration I
Graphing Data I
Column graphs
E.g. show percentage change over time for nominal values
Discrete data: open space between columns
Continuous data: no space between columns
Graphing Data II
Choosing the right type of graph
Pie graph
E.g. show percentages that make up one total
Avoid 3D effects; they distort the apparent relative sizes of the slices
Use as few slices as possible
Graphing Data IV
Choosing the right type of graph
Graphing Data V
Choosing the right type of graph
Line graph
E.g. show trends, or relationships between parameters
Graphing Data VI
Choosing the right type of graph
Bar graph
E.g. make a point about reaching a goal
Good if the labels on the horizontal axis take too much space
Arrange in ascending/descending order whenever appropriate
Graphing Data IX
Choosing the right type of graph
Linear regression
E.g. show relationship between parameters
Use with great care!
Graphing Data X
Choosing the right type of graph
Graphing Data I
Not in textbook
Graphing Data II
Guidelines for creating good scientific graphs
Graphing Data IV
Guidelines for creating good scientific graphs
Graphing Data V
Guidelines for creating good scientific graphs
Graphing Data VI
Guidelines for creating good scientific graphs
Time [s]: 19, 26, 18, 19.6, 21, 22, 27, 23, 17, 23
Graphing Data IX
Guidelines for creating good scientific graphs
Graphing Data X
Guidelines for creating good scientific graphs
Graphing Data XI
Guidelines for creating good scientific graphs
Wind type      Days
Strong wind    10
Calm           5
Gale           7
Light breeze   9
Total          31
Graphing Data XV
Guidelines for creating good scientific graphs
Graphing Data XX
Guidelines for creating good scientific graphs
Figure: Graphing experimental data. Error bars show the measurement error range.
Population variance

σ² = Σ(X − X̄)² / N
Sample variance

s² = Σ(X − X̄)² / (N − 1)
Population standard deviation

σ = √σ² = √( Σ(X − X̄)² / N )

Excel functions: STDEV.P and STDEVPA
NOTE: the standard deviation has the same unit as the original measurements
Sample standard deviation

s = √s² = √( Σ(X − X̄)² / (N − 1) )
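The same quantities can be computed outside Excel. As a minimal sketch, Python's standard statistics module distinguishes the population (divide by N) and sample (divide by N − 1) forms directly; the data used here are the time readings from the earlier graphing example:

```python
import statistics

# Time readings [s] from the graphing example
data = [19, 26, 18, 19.6, 21, 22, 27, 23, 17, 23]

pop_var = statistics.pvariance(data)   # divides by N   (Excel: VAR.P)
samp_var = statistics.variance(data)   # divides by N-1 (Excel: VAR.S)
pop_sd = statistics.pstdev(data)       # Excel: STDEV.P
samp_sd = statistics.stdev(data)       # Excel: STDEV.S
```

Note that the sample variance is always slightly larger than the population variance for the same data, since it divides by N − 1.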
Z Scores I
How do you compare scores in one year to another year for, say,
Mechatronics 424?
Z scores take the mean as a zero point and the standard deviation as a
unit of measure. Therefore, for a sample
z = (X − X̄) / s

and for a population

z = (X − μ) / σ
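The standardisation is a one-liner per data point in Python; a minimal sketch (the helper name z_scores is ours, not from the slides):

```python
import statistics

def z_scores(sample):
    """Standardise a sample: subtract the mean, divide by the sample standard deviation."""
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return [(x - m) / s for x in sample]
```

The standardised values always have mean 0 and sample standard deviation 1, which is what makes marks from different years comparable.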
Z Scores II
Z Scores III
Descriptors
Variance: describes the spread in the data.
Skewness: describes how symmetrically the data are distributed.
Kurtosis: describes whether there is a sharp peak in the distribution close to the mean.
kurtosis = Σ(X − X̄)⁴ / (N s⁴) − 3
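The slides do not spell out Excel's exact estimators here, so as an assumed sketch the helpers below implement the adjusted sample formulas that Excel's SKEW and KURT functions use (helper names are ours, not from the slides):

```python
import statistics

def skewness(sample):
    # Adjusted Fisher-Pearson estimator (the formula behind Excel's SKEW)
    n = len(sample)
    m, s = statistics.mean(sample), statistics.stdev(sample)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in sample)

def excess_kurtosis(sample):
    # Sample excess kurtosis (the formula behind Excel's KURT); needs n >= 4
    n = len(sample)
    m, s = statistics.mean(sample), statistics.stdev(sample)
    g = sum(((x - m) / s) ** 4 for x in sample)
    return (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * g
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```

Symmetric data gives a skewness near zero; a long right tail pushes it positive.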
Box-and-whisker Plots I
Not in textbook
Box-and-whisker Plots II
Gives an indication of the distribution of the data
Compare with histogram
Useful to compare different distributions
Matlab and Python both have convenient tools for creating these plots. It is more difficult with Excel.
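The five numbers behind a box-and-whisker plot (minimum, quartiles, maximum) can be computed with Python's standard library; a minimal sketch (the helper name is ours):

```python
import statistics

def five_number_summary(sample):
    """Return (min, Q1, median, Q3, max), the values a box plot displays."""
    q1, q2, q3 = statistics.quantiles(sample, n=4, method="inclusive")
    return min(sample), q1, q2, q3, max(sample)
```

Matplotlib's boxplot draws these directly; note that quartile conventions differ slightly between tools, so whisker positions may not match Excel exactly.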
Box-and-whisker Plots IV
Example (Showing results of robot movement - Table)
Box-and-whisker Plots V
Example (Showing results of robot movement - Box-and-whisker plot)
f(x) = ( 1 / (σ√(2π)) ) e^( −(x − μ)² / (2σ²) )
P(x₁ ≤ x ≤ x₂) = ∫ from x₁ to x₂ of ( 1 / (σ√(2π)) ) e^( −(x − μ)² / (2σ²) ) dx
P(x ≤ x₁) = ∫ from −∞ to x₁ of ( 1 / (σ√(2π)) ) e^( −(x − μ)² / (2σ²) ) dx
Excel functions
NORM.DIST, NORM.S.DIST
NORM.INV, NORM.S.INV
Use NORM.DIST(x,mean,standard deviation,TRUE) for the
cumulative distribution function
Use NORM.DIST(x,mean,standard deviation,FALSE) for the
probability density function
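Python's standard library offers the same four operations through statistics.NormalDist; a minimal sketch mirroring the Excel calls above (the example distribution with mean 100 and standard deviation 15 is ours):

```python
from statistics import NormalDist

nd = NormalDist(mu=100, sigma=15)   # an assumed example distribution

cdf_val = nd.cdf(115)               # like NORM.DIST(115, 100, 15, TRUE)
pdf_val = nd.pdf(115)               # like NORM.DIST(115, 100, 15, FALSE)
x95 = nd.inv_cdf(0.95)              # like NORM.INV(0.95, 100, 15)
z95 = NormalDist().inv_cdf(0.95)    # like NORM.S.INV(0.95)
```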
Confidence Limits I
Sampling distributions
The sampling distribution therefore has its own mean and standard deviation.
The mean of the sampling distribution of the mean is μ_x̄.
The standard deviation of the sampling distribution is called the standard error.
The standard error is denoted σ_x̄.
Confidence Limits I
Confidence Limits II
Central limit theorem
Confidence Limits I
Limits of confidence
Confidence Limits I
t-distribution
t = (x̄ − μ) / (s/√n)
Confidence Limits II
t-distribution
Confidence Limits I
Not in textbook
Figure: Comparison of 90% confidence limits for the normal and t-distributions
Confidence Limits II
Normal distribution and t-distribution confidence limits compared
Note: the confidence interval for the t-distribution is much wider than for the normal distribution.
Some revision
Hypothesis testing
Example on pp 207-209
a = μ₀ − z_{α/2} σ/√n
b = μ₀ + z_{α/2} σ/√n
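As a sketch (the function name is ours), the acceptance region (a, b) can be computed with the standard library's inverse normal CDF:

```python
from math import sqrt
from statistics import NormalDist

def acceptance_region(mu0, sigma, n, alpha=0.05):
    """Two-tailed acceptance region (a, b) for the sample mean under H0."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
    half_width = z * sigma / sqrt(n)
    return mu0 - half_width, mu0 + half_width
```

A sample mean falling outside (a, b) leads to rejection of H₀ at significance level α.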
The above is for a two-tailed test. A similar test can be formulated for a
one-tailed hypothesis.
t = (x̄ − μ₀) / (s/√n)

Reject H₀ when

t > t_{α/2,n−1} or t < −t_{α/2,n−1}

Excel function: T.DIST
H₀: σ² = σ₀²
H₁: σ² ≠ σ₀²
χ² = (N − 1)s² / σ₀²
Not in textbook

Summary of one-sample hypothesis tests:

σ known:  z = (x̄ − μ₀) / (σ/√n)
  H₀: μ = μ₀ (or μ ≥ μ₀)    H₁: μ < μ₀    Critical region: z < −z_α
  H₀: μ = μ₀ (or μ ≤ μ₀)    H₁: μ > μ₀    Critical region: z > z_α
  H₀: μ = μ₀                H₁: μ ≠ μ₀    Critical region: z < −z_{α/2} or z > z_{α/2}

σ unknown:  t = (x̄ − μ₀) / (s/√n),  df = n − 1
  H₀: μ = μ₀ (or μ ≥ μ₀)    H₁: μ < μ₀    Critical region: t < −t_α
  H₀: μ = μ₀ (or μ ≤ μ₀)    H₁: μ > μ₀    Critical region: t > t_α
  H₀: μ = μ₀                H₁: μ ≠ μ₀    Critical region: t < −t_{α/2} or t > t_{α/2}

Variance:  χ² = (n − 1)s² / σ₀²,  df = n − 1
  H₀: σ² = σ₀² (or σ² ≥ σ₀²)    H₁: σ² < σ₀²    Critical region: χ² < χ²_{α,df}
  H₀: σ² = σ₀² (or σ² ≤ σ₀²)    H₁: σ² > σ₀²    Critical region: χ² > χ²_{α,df}
  H₀: σ² = σ₀²                  H₁: σ² ≠ σ₀²    Critical region: χ² < χ²_{α/2,df} or χ² > χ²_{α/2,df}
Objective: do the two samples come from two different populations or not?
Null hypothesis: the difference between the two samples is strictly due to chance; they come from the same population.
Alternative hypothesis: there is a real difference between the samples; they come from different populations.
One-tailed tests

H₀: μ₁ − μ₂ = 0        or        H₀: μ₁ − μ₂ = 0
H₁: μ₁ − μ₂ > 0                  H₁: μ₁ − μ₂ < 0

Two-tailed tests

H₀: μ₁ − μ₂ = 0
H₁: μ₁ − μ₂ ≠ 0
Calculate x̄₁, x̄₂, s₁ and s₂
For this type of testing, the sampling distribution of the difference between
means is needed.
The sampling distribution of the difference between means is the
distribution of all possible values of differences between pairs of sample
means with the sample sizes held constant from pair to pair.
NOTE:
All samples from population 1 must have the same size.
All samples from population 2 must have the same size.
The two sample sizes are not necessarily equal.
Characteristics of the sampling distribution of the difference between
means according to the Central Limit Theorem
For large samples, it is approximately normally distributed.
For normally distributed populations, it is normally distributed.
The mean is the difference between the population means:

μ_{x̄₁−x̄₂} = μ₁ − μ₂

The standard deviation (or standard error of the difference between means) is

σ_{x̄₁−x̄₂} = √( σ₁²/N₁ + σ₂²/N₂ )
z = ( (x̄₁ − x̄₂) − (μ₁ − μ₂) ) / √( σ₁²/N₁ + σ₂²/N₂ )
t = ( (x̄₁ − x̄₂) − (μ₁ − μ₂) ) / ( s_p √(1/N₁ + 1/N₂) )
df = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ]
t = ( (x̄₁ − x̄₂) − (μ₁ − μ₂) ) / √( s₁²/N₁ + s₂²/N₂ )
H₀: (μ₁ − μ₂) = D₀
H₁: (μ₁ − μ₂) > D₀   [or H₁: (μ₁ − μ₂) < D₀]
H₁: (μ₁ − μ₂) ≠ D₀

t = (d̄ − D₀) / (s_d/√n);  df = n − 1
Assumptions
The relative frequency distribution of the population of differences is
approximately normal.
The paired differences are randomly selected from the population of
differences.
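The paired test statistic is easy to compute directly; a minimal sketch (the helper name is ours), leaving the critical value t_{α,df} to tables or Excel's T.INV:

```python
import statistics
from math import sqrt

def paired_t(before, after, d0=0.0):
    """Return (t, df) for a paired t-test on two matched samples."""
    d = [a - b for a, b in zip(after, before)]   # paired differences
    n = len(d)
    t = (statistics.mean(d) - d0) / (statistics.stdev(d) / sqrt(n))
    return t, n - 1
```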
F = s_a² / s_b²
NOTE
The distribution depends on two dfs, dfa and dfb .
dfa = na 1
dfb = nb 1
Not in textbook

Summary of two-sample hypothesis tests:

σ₁ and σ₂ known:  z = ((x̄₁ − x̄₂) − (μ₁ − μ₂)) / √(σ₁²/N₁ + σ₂²/N₂)
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ < 0    Critical region: z < −z_α
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ > 0    Critical region: z > z_α
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ ≠ 0    Critical region: z < −z_{α/2} or z > z_{α/2}

σ₁ and σ₂ unknown but equal:  t = ((x̄₁ − x̄₂) − (μ₁ − μ₂)) / (s_p √(1/N₁ + 1/N₂)),  df = N₁ + N₂ − 2
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ < 0    Critical region: t < −t_{α,df}
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ > 0    Critical region: t > t_{α,df}
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ ≠ 0    Critical region: t < −t_{α/2,df} or t > t_{α/2,df}
σ₁ and σ₂ unknown and unequal:  t = ((x̄₁ − x̄₂) − (μ₁ − μ₂)) / √(s₁²/N₁ + s₂²/N₂),
with df = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ]
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ < 0    Critical region: t < −t_{α,df}
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ > 0    Critical region: t > t_{α,df}
  H₀: μ₁ − μ₂ = 0    H₁: μ₁ − μ₂ ≠ 0    Critical region: t < −t_{α/2,df} or t > t_{α/2,df}

Paired samples:  t = (d̄ − D₀) / (s_d/√n),  df = n − 1
  H₀: μ₁ − μ₂ = D₀    H₁: μ₁ − μ₂ < D₀    Critical region: t < −t_{α,df}
  H₀: μ₁ − μ₂ = D₀    H₁: μ₁ − μ₂ > D₀    Critical region: t > t_{α,df}
  H₀: μ₁ − μ₂ = D₀    H₁: μ₁ − μ₂ ≠ D₀    Critical region: t < −t_{α/2,df} or t > t_{α/2,df}
Comparing two variances:  F = s_a² / s_b²,  df_a = n_a − 1,  df_b = n_b − 1
  H₀: σ₁² = σ₂²    H₁: σ₁² < σ₂²    Critical region: F < F_{1−α, df_a, df_b}
  H₀: σ₁² = σ₂²    H₁: σ₁² > σ₂²    Critical region: F > F_{α, df_a, df_b}
  H₀: σ₁² = σ₂²    H₁: σ₁² ≠ σ₂²    Critical region: F < F_{1−α/2, df_a, df_b} or F > F_{α/2, df_a, df_b}
Introduction to ANOVA

Example (scores for three methods):

           Method 1   Method 2   Method 3
                      83         68
                      89         75
                      85         79
                      89         74
                      81         75
                      89         81
                      90         73
                      82         77
                      84
                      80
Mean       93.44      85.20      75.25
Variance   16.28      14.18      15.64
Std dev    4.03       3.77       3.96
Example (Continued...)
Hypothesis

H₀: μ₁ = μ₂ = μ₃
H₁: not H₀
α = 0.05

Performing multiple t-tests possibly sets us up for a disaster. Let's see why:
The chance of NOT making a Type I error in one comparison, at a significance level of α = 0.05, is 95%.
So, for 3 samples, 3 tests must be done: Method 1 vs Method 2, Method 1 vs Method 3, and Method 2 vs Method 3.
Table: Increasing chance of making a Type I error for multiple t-tests, from [6]

Number of samples    Number of tests
3                    3
4                    6
5                    10
6                    15
7                    21
8                    28
9                    36
10                   45
The idea with ANOVA is to separate the total variability into the following components [8]:
1. The variability between the samples
2. The variability within the samples
What statistics of the two samples in these plots did we intuitively use to
make a decision on the difference between the population means?
MST = SST / dfT

where

SST = Σᵢ Σⱼ yᵢⱼ² − ( Σᵢ Σⱼ yᵢⱼ )² / Σᵢ nᵢ

dfT = (Σᵢ nᵢ) − 1

with i = 1, …, k over the samples and j = 1, …, nᵢ over the observations in sample i.
MSW = SSW / dfW

where

SSW = Σᵢ Σⱼ yᵢⱼ² − Σᵢ [ (Σⱼ yᵢⱼ)² / nᵢ ]

dfW = Σᵢ (nᵢ − 1)
MSB = SSB / dfB,  where SSB = SST − SSW and dfB = k − 1
Note that both MSW and MSB are estimates of the population variance.
If there is a meaningful difference between these variances, then the samples cannot all come from the same population, and therefore there is a meaningful difference between the samples that cannot be attributed just to random errors.

ANOVA translates

H₀: μ₁ = μ₂ = … = μₖ
H₁: not H₀
into

H₀: σ_B² ≤ σ_W²
H₁: σ_B² > σ_W²

which is tested with

F = MSB / MSW
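The whole decomposition fits in a few lines of Python; a sketch (the helper name is ours) that returns the F statistic and its degrees of freedom:

```python
import statistics

def anova_f(groups):
    """One-way ANOVA: return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = statistics.mean(x for g in groups for x in g)
    ssb = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ssw = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)   # between-groups mean square (MSB)
    msw = ssw / (n - k)   # within-groups mean square (MSW)
    return msb / msw, k - 1, n - k
```

Compare the returned F against F_{α, df_between, df_within} (Excel: F.INV.RT).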
t = (x̄₁ − x̄₂) / √( MSW [1/n₁ + 1/n₂] )
Unplanned comparisons
There may be some situations where the conditions for the t-test mentioned above are not met. This is then called an unplanned comparison.
Also known as a posteriori or post hoc tests.
Numerous tests are available...
Regression I
Linear regression
Regression II
Linear regression
Regression I

s_yx² = Σ(y − y′)² / (N − 2) = Σ(y − y′)² / (N − n − 1)

n is the degree of the polynomial fitted to the data; in the linear case, n = 1.
N is the number of data points.
y − y′ is the difference between the measured and the predicted value.
Regression II
Testing hypotheses about regression

s_yx = √(s_yx²) = √( Σ(y − y′)² / (N − 2) )

Hypothesis
Regression III
Testing hypotheses about regression

H₀: σ²_Regression ≤ σ²_Residual
H₁: σ²_Regression > σ²_Residual

To find the variances, we need the sums of squares and their corresponding degrees of freedom.
Regression IV
Testing hypotheses about regression
Regression V
Testing hypotheses about regression

SS_Residual = Σ(y − y′)²
SS_Regression = Σ(y′ − ȳ)²
SS_Total = Σ(y − ȳ)²
Regression VI
Testing hypotheses about regression
Regression VII
Testing hypotheses about regression

MS_Regression = SS_Regression / df_Regression
MS_Residual = SS_Residual / df_Residual
MS_Total = SS_Total / df_Total

Test the hypothesis with an F test:

F = MS_Regression / MS_Residual
Regression VIII
Testing hypotheses about regression

H₀: β = 0
H₁: β ≠ 0

This is a standard one-sample, two-tailed t-test. In what follows, β = 0.
The test statistic is

t = b / s_b;  df = N − 2
Stats Block
144 / 181
Regression IX
Testing hypotheses about regression

s_b = s_yx / (s_x √(N − 1))

where

s_yx = √( Σ(y − y′)² / (N − 2) )

s_x = √( Σ(x − x̄)² / (N − 1) )
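Putting the pieces together, a sketch of the slope test in plain Python (the helper name is ours, not from the slides):

```python
import statistics
from math import sqrt

def slope_t(xs, ys):
    """Fit y' = a + b*x by least squares; return (b, t, df) for H0: beta = 0."""
    n = len(xs)
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    syx = sqrt(sse / (n - 2))
    sb = syx / sqrt(sxx)   # same as syx / (sx * sqrt(N - 1))
    return b, (b / sb if sb > 0 else float("inf")), n - 2
```

Excel's SLOPE and STEYX return b and s_yx directly; this sketch just makes the arithmetic behind them explicit.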
Regression X
Testing hypotheses about regression
Regression I
Not in textbook
Excel's R-squared

Coefficient of Determination

R² = SS_Regression / SS_Total
Regression I
SLOPE
INTERCEPT
STEYX
FORECAST
TREND
LINEST
Data analysis tool: Regression
Multiple regression I

y′ = a + Σᵢ bᵢxᵢ
Regression
Not in textbook
Guidelines
Correlation I
Correlation II
Pearson's correlation coefficient

r = Σ(x − x̄)(y − ȳ) / ((N − 1) s_x s_y) = cov(x, y) / (s_x s_y)
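As a sketch (the helper name is ours), with the N − 1 in the covariance matching the sample standard deviations:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient r = cov(x, y) / (sx * sy)."""
    n = len(xs)
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))
```

Excel's CORREL (and, from Python 3.10, statistics.correlation) computes the same value.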
Correlation I

r = √(r²) = √( SS_Regression / SS_Total )

r² is just Excel's Coefficient of Determination.
R² = 0.667 implies SS_Regression is 66.7% of SS_Total. To find out if that is significant, do a hypothesis test...
Correlation I

t = (r − ρ) / s_r;  df = N − 2

where ρ = 0 and

s_r = √( (1 − r²) / (N − 2) )

Reject H₀ at significance level α if t > t_α.
Correlation II
Testing hypotheses about correlation
Correlation III
Testing hypotheses about correlation

t        t_α      N − 2    Reject?
2.178    1.980    120      Yes
2.085    1.982    110      Yes
1.988    1.984    100      Yes
1.886    1.987    90       No
1.778    1.990    80       No
Correlation IV
Testing hypotheses about correlation

H₀: ρ₁ = ρ₂
H₁: ρ₁ ≠ ρ₂

We have to transform each r value with Fisher's transformation

z_r = 0.5[ln(1 + r) − ln(1 − r)]

The test statistic is then

z = (z_{r1} − z_{r2}) / σ_{z1−z2}

where
Correlation V
Testing hypotheses about correlation

σ_{z1−z2} = √( 1/(N₁ − 3) + 1/(N₂ − 3) )
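The transformation and test statistic together, as a sketch (the helper name is ours):

```python
from math import log, sqrt

def compare_correlations(r1, n1, r2, n2):
    """z statistic for H0: rho1 = rho2, via Fisher's r-to-z transformation."""
    def z_r(r):
        return 0.5 * (log(1 + r) - log(1 - r))
    se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z_r(r1) - z_r(r2)) / se
```

Compare the result against ±z_{α/2} (e.g. ±1.96 for α = 0.05).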
Uncertainty of Measurement I
Not in textbook
Uncertainty of Measurement II

Example (Measuring Power Dissipated from a Resistor [1])
If a potential difference V is applied to the terminals of a temperature-dependent resistor that has a resistance of R₀ at the defined temperature t₀ and a linear temperature coefficient of resistance α, the power P (the measurand) dissipated by the resistor at the temperature t depends on V, R₀, α, and t according to

P = f(V, R₀, α, t) = V² / (R₀[1 + α(t − t₀)])
Uncertainty of Measurement IV
To find the mean value of the measurand, do you take the mean of
the input quantities or do you first calculate the measurand for each
set of measurements and then take the mean of the measurand?
Uncertainty of Measurement V

Example (When to calculate the mean)
The table shows voltage and temperature readings for the power dissipated by the resistor in the previous example. If R₀ = 4.33 Ω, α = 0.00393 °C⁻¹ and t₀ = 20 °C, the mean power dissipated is
21.43545 W if P is calculated for each data point and then the mean of the 10 power values is taken.
21.43568 W if the mean voltage (10.006565 V) and mean temperature (40.0563 °C) are used.

Voltage [V]    Temperature [°C]
10.030         39.930
9.991          39.962
9.971          39.916
10.023         40.102
10.000         39.949
10.039         40.250
10.073         40.315
9.987          39.921
9.935          40.124
10.017         40.093
Uncertainty of Measurement I
Not in textbook

From the examples it is clear that there are two types of uncertainty.
One is based on a set of repeated measurements (Type A). In the example, it is the standard uncertainty of the temperature t and the voltage V.
Another is based on other information, e.g. data sheets (Type B). In the example, it is the standard uncertainty of the constants R₀, α and t₀.
Uncertainty of Measurement I
Not in textbook
Uncertainty of Measurement I
Not in textbook
If the source does not give the standard uncertainty explicitly, it may
be derived. The GUM Guide [1] gives several examples in section 4.3.
Uncertainty of Measurement
Not in textbook

s_c²(y) = Σᵢ₌₁ᴺ (∂f/∂xᵢ)² s²(x̄ᵢ)
Uncertainty of Measurement
Not in textbook

s_c²(y) = Σᵢ₌₁ᴺ (∂f/∂xᵢ)² s²(x̄ᵢ) + 2 Σᵢ₌₁ᴺ⁻¹ Σⱼ₌ᵢ₊₁ᴺ (∂f/∂xᵢ)(∂f/∂xⱼ) s(x̄ᵢ, x̄ⱼ)

s(x̄ᵢ, x̄ⱼ) is the estimated covariance associated with x̄ᵢ and x̄ⱼ. It is calculated as

s(x̄ᵢ, x̄ⱼ) = ( 1 / (N(N − 1)) ) Σₖ₌₁ᴺ (xᵢ,ₖ − x̄ᵢ)(xⱼ,ₖ − x̄ⱼ)
Uncertainty of Measurement
Not in textbook
Uncertainty of Measurement I
Not in textbook
Reporting uncertainty
Uncertainty of Measurement II
Reporting uncertainty
give all the corrections and constants used in the analysis and their
sources
in the case of reporting expanded uncertainty report the coverage
factor.
Uncertainty of Measurement I
Example

s_c²(P) = Σᵢ₌₁ᴺ (∂f/∂xᵢ)² s²(x̄ᵢ) + 2 Σᵢ₌₁ᴺ⁻¹ Σⱼ₌ᵢ₊₁ᴺ (∂f/∂xᵢ)(∂f/∂xⱼ) s(x̄ᵢ, x̄ⱼ)

with

P = V² / (R₀[1 + α(t − t₀)])

Let
Uncertainty of Measurement II
Example

x₁ = V
x₂ = R₀
x₃ = α
x₄ = t

Ignore the uncertainty contribution of t₀. Assume it is a very well known reference value with negligible uncertainty. Then
∂f/∂V = 2V / (R₀[1 + α(t − t₀)])

∂f/∂R₀ = −V² / (R₀²[1 + α(t − t₀)])

∂f/∂α = −(t − t₀)V² / ((α(t − t₀) + 1)² R₀)

∂f/∂t = −αV² / ([α(t − t₀) + 1]² R₀)

Evaluate these at the mean values of V, R₀, α and t, i.e. V̄ = 10.007 V, R₀ = 4.33 Ω, α = 0.00393 °C⁻¹ and t̄ = 40.056 °C. This gives
Uncertainty of Measurement IV
Example

∂f/∂V = 4.284
∂f/∂R₀ = −4.950
∂f/∂α = −398.506
∂f/∂t = −0.078
Uncertainty of Measurement V
Example

s²(V̄) = 0.00149
s²(t̄) = 0.02076

Finally, let's assume that somehow we know that

s²(R₀) = 0.001
s²(α) = 0.02

Now it is straightforward to calculate s_c²(P).
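With the sensitivities and variances above, the uncorrelated propagation law is a single sum; a sketch (the helper name is ours; the sign of each sensitivity does not matter, because it is squared):

```python
def combined_variance(sensitivities, variances):
    """s_c^2(y) = sum_i (df/dx_i)^2 * s^2(x_i) for uncorrelated inputs."""
    return sum(c * c * v for c, v in zip(sensitivities, variances))

# values from the slides, in the order V, R0, alpha, t
sens = [4.284, -4.950, -398.506, -0.078]
variances = [0.00149, 0.001, 0.02, 0.02076]
sc2_P = combined_variance(sens, variances)   # combined variance of P
```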
Not in textbook

Typical uses:
Calculate confidence limits for your estimate of the population mean, when you know the population variance.
Calculate confidence limits for your estimate of the population mean, when you do not know the population variance.
You have one sample and some guess of the population mean; you want to know if the guess is right or how it differs. You know the population variance.
You have one sample and some guess of the population mean; you want to know if the guess is right or how it differs. You do not know the population variance.
References I

[1] Uncertainty of measurement, part 3: Guide to the expression of uncertainty in measurement, 1995.
[2] R.S. Figliola and D.E. Beasley. Theory and Design for Mechanical Measurements. Wiley, Hoboken, 4th edition, 2006.
[3] A. Graham. Statistics: A Complete Introduction. Hodder & Stoughton, 2013.
[4] D. Huff and I. Geis. How to Lie with Statistics. Norton, New York, 1954.
[5] W. Mendenhall and T. Sincich. Statistics for Engineering and the Sciences. MacMillan, New York, 3rd edition, 1992.
References II

[6] J. Schmuller. Statistical Analysis with Excel for Dummies. Wiley, Hoboken, 2nd edition, 2009.
[7] J. Schmuller. Statistical Analysis with Excel for Dummies. Wiley, Hoboken, 3rd edition, 2013.
[8] R.E. Walpole and R.H. Myers. Probability and Statistics for Engineers and Scientists. MacMillan, New York, 4th edition, 1990.
The End