by Dr. Dang Quang A & Dr. Bui The Hong Hanoi Institute of Information Technology
Preface
Statistics is the science of collecting, organizing and interpreting numerical and non-numerical facts, which we call data. The collection and study of data are important in the work of many professions, so that training in the science of statistics is valuable preparation for a variety of careers, for example for economists, financial advisors, businessmen, engineers, and farmers. Knowledge of probability and statistical methods is also useful for informatics specialists in various fields such as data mining, knowledge discovery, neural networks, fuzzy systems, and so on. Whatever else it may be, statistics is, first and foremost, a collection of tools used for converting raw data into information to help decision makers in their work. The science of data - statistics - is the subject of this course.
Objectives
Understanding statistical reasoning
Mastering basic statistical methods for analyzing data, such as descriptive and inferential methods
Ability to use statistical methods in practice with the help of statistical computer software
Entry requirements
High school algebra course (+ elements of calculus)
Skill of working with a computer
Contents
Preface
Chapter 1 Introduction
Chapter 2 Data presentation
Chapter 3 Data characteristics: descriptive summary statistics
Chapter 4 Probability: Basic concepts
Chapter 5 Basic Probability distributions
Chapter 6 Sampling Distributions
Chapter 7 Estimation
Chapter 8 General Concepts of Hypothesis Testing
Chapter 9 Applications of Hypothesis Testing
Chapter 10 Categorical Data Analysis and Analysis of Variance
Chapter 11 Simple Linear Regression and Correlation
Chapter 12 Multiple Regression
Chapter 13 Nonparametric Statistics
References
Appendix A
Appendix B
Appendix C
Appendix D
Index
Whatever else it may be, statistics is, first and foremost, a collection of tools used for converting raw data into information to help decision makers in their work.
1.4 Brief history of statistics
1.5 Computer software for statistical analysis
The objective of data description is to summarize the characteristics of a data set. Ultimately, we want to make the data set more comprehensible and meaningful. In this chapter we will show how to construct charts and graphs that convey the nature of a data set. The procedure that we will use to accomplish this objective depends on the type of data.
Chapter 2 (continued 1)
2.5 Graphical description of quantitative data: Stem and Leaf displays
A stem and leaf display is widely used in exploratory data analysis when the data set is small.
Steps to follow in constructing a Stem and Leaf Display
Advantages and disadvantages of a stem and leaf display
A frequency distribution is a table that organizes data into classes.
Class frequency = the number of observations that fall into the class
Class relative frequency = class frequency / total number of observations
Class percentage = class relative frequency x 100%
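The definitions above can be sketched in a few lines of Python; the data values here are hypothetical, chosen only for illustration:

```python
from collections import Counter

data = [2, 3, 5, 3, 7, 2, 3, 8, 5, 2]   # hypothetical observations
n = len(data)

freq = Counter(data)                                   # class frequency
rel_freq = {k: v / n for k, v in freq.items()}         # class relative frequency
percent = {k: rf * 100 for k, rf in rel_freq.items()}  # class percentage

print(freq[3], rel_freq[3], percent[3])  # 3 0.3 30.0
```

Note that the relative frequencies always sum to 1, which is a quick sanity check on any frequency table.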
Chapter 3 Data characteristics: descriptive summary statistics
3.1 Introduction
3.2 Types of numerical descriptive measures
3.3 Measures of central tendency
3.4 Measures of data variation
3.5 Measures of relative standing
3.6 Shape
3.7 Methods for detecting outliers
3.8 Calculating some statistics from grouped data
3.9 Computing descriptive summary statistics using computer software
3.10 Exercises
Descriptive measures that locate the relative position of an observation in relation to the other observations are called measures of relative standing.
The pth percentile is a number such that p% of the observations of the data set fall below it and (100 - p)% of the observations fall above it.
Lower quartile = 25th percentile
Mid-quartile = 50th percentile
Upper quartile = 75th percentile
Interquartile range, z-score
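As a sketch, Python's statistics module computes these measures directly; the data values below are hypothetical, and statistics.quantiles uses a common interpolation method, so other software may give slightly different quartiles:

```python
import statistics

data = [12, 15, 17, 19, 22, 25, 28, 31, 35, 40]   # hypothetical, already sorted

q1, q2, q3 = statistics.quantiles(data, n=4)  # lower, mid (median), upper quartiles
iqr = q3 - q1                                  # interquartile range

# z-score: how many standard deviations an observation lies from the mean
mean = statistics.mean(data)
s = statistics.stdev(data)
z_largest = (max(data) - mean) / s

print(q1, q2, q3, iqr)  # 16.5 23.5 32.0 15.5
```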
3.6 Shape
3.6.1 Skewness 3.6.2 Kurtosis
3.7 Methods for detecting outliers
3.8 Calculating some statistics from grouped data
Chapter 4. Probability: Basic concepts
Experiment, Events and Probability of an Event
Approaches to probability
The field of events
Definitions of probability
Conditional probability and independence
Rules for calculating probability
Exercises
The process of making an observation or recording a measurement under a given set of conditions is a trial or experiment. Outcomes of an experiment are called events. We denote events by capital letters A, B, C, ... The probability of an event A, denoted by P(A), is, in general, the chance that A will happen.
Definitions and relations between events: A implies B; A and B are equivalent (A = B); product or intersection of the events A and B (AB); sum or union of A and B (A+B); difference of A and B (A-B or A\B); certain (or sure) event; impossible event; complement of A; mutually exclusive events; simple (or elementary) events; sample space.
Venn diagrams
Field of events
Chapter 4 (continued 3)
4.5 Conditional probability and independence
4.6 Rules for calculating probability
4.6.1 The addition rule
For pairwise mutually exclusive events: P(A1 + A2 + ... + An) = P(A1) + P(A2) + ... + P(An)
For two non-mutually exclusive events A and B: P(A+B) = P(A) + P(B) - P(AB)
4.6.2 Multiplicative rule: P(AB) = P(A) P(B|A) = P(B) P(A|B)
4.6.3 Formula of total probability: P(B) = P(A1)P(B|A1) + P(A2)P(B|A2) + ... + P(An)P(B|An)
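These rules can be checked numerically on a tiny sample space. The sketch below uses one roll of a fair die, with two events chosen purely for illustration:

```python
from fractions import Fraction

omega = range(1, 7)  # sample space: one roll of a fair die

def P(event):
    # classical probability: (favourable outcomes) / (total outcomes)
    return Fraction(sum(1 for x in omega if event(x)), 6)

A = lambda x: x % 2 == 0   # "even number"
B = lambda x: x > 3        # "greater than 3"
AB = lambda x: A(x) and B(x)
A_or_B = lambda x: A(x) or B(x)

# Addition rule for non-mutually exclusive events: P(A+B) = P(A) + P(B) - P(AB)
print(P(A_or_B) == P(A) + P(B) - P(AB))   # True

# Multiplicative rule: P(AB) = P(A) * P(B|A), where P(B|A) = P(AB)/P(A)
P_B_given_A = P(AB) / P(A)
print(P(AB) == P(A) * P_B_given_A)        # True
```

Using Fraction keeps the arithmetic exact, so the identities hold with equality rather than within floating-point tolerance.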
Chapter 5 Basic Probability distributions
5.1 Random variables
5.2 The probability distribution for a discrete random variable
5.3 Numerical characteristics of a discrete random variable
5.4 The binomial probability distribution
5.5 The Poisson distribution
5.6 Continuous random variables: distribution function and density function
5.7 Numerical characteristics of a continuous random variable
5.8 The normal distribution
5.9 Exercises
Chapter 5 (continued 2)
5.3 Numerical characteristics of a discrete random variable
5.3.1 Mean or expected value: μ = E(X) = Σ x·p(x)
5.3.2 Variance and standard deviation: σ² = E[(X − μ)²]
5.4 The binomial probability distribution
Model (or characteristics) of a binomial random variable
The probability distribution, mean and variance of a binomial random variable
5.5 The Poisson distribution
Model (or characteristics) of a Poisson random variable
The probability distribution, mean and variance of a Poisson random variable
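The mean and variance formulas above can be verified numerically for both distributions. The parameters below (n = 10, p = 0.3 for the binomial; λ = 4 for the Poisson) are hypothetical choices for illustration:

```python
from math import comb, exp, factorial

# Binomial(n, p): hypothetical parameters
n, p = 10, 0.3
binom_pmf = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)

b_mean = sum(k * binom_pmf(k) for k in range(n + 1))               # equals n*p
b_var = sum((k - b_mean)**2 * binom_pmf(k) for k in range(n + 1))  # equals n*p*(1-p)

# Poisson(lam): hypothetical parameter; its mean (and variance) equal lam
lam = 4.0
pois_pmf = lambda k: exp(-lam) * lam**k / factorial(k)
p_mean = sum(k * pois_pmf(k) for k in range(100))   # truncating the infinite sum
```

Summing k·p(k) over the support reproduces n·p = 3 and λ = 4, matching the closed-form results quoted in the chapter.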
Chapter 5 (continued 3)
Mean of a continuous random variable with density p(x): μ = E(X) = ∫ x·p(x) dx
Chapter 6 Sampling Distributions
6.1 Why the method of sampling is important
6.2 Obtaining a Random Sample
6.3 Sampling Distribution
6.4 The sampling distribution of the sample mean: the Central Limit Theorem
6.5 Summary
6.6 Exercises
Two samples from the same population can provide contradictory information about the population. Random sampling eliminates the possibility of bias in selecting a sample and, in addition, provides a probabilistic basis for evaluating the reliability of an inference.
A numerical descriptive measure of a population is called a parameter. A quantity computed from the observations in a random sample is called a statistic. A sampling distribution of a sample statistic (based on n observations) is the relative frequency distribution of the values of the statistic theoretically generated by taking repeated random samples of size n and computing the value of the statistic for each sample. Examples of computer-generated random samples
6.4 The sampling distribution of the sample mean: the Central Limit Theorem
If the sample size is sufficiently large, the mean of a random sample from a population has a sampling distribution that is approximately normal, regardless of the shape of the relative frequency distribution of the target population.
Mean and standard deviation of the sampling distribution
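A quick simulation sketch illustrates the theorem for a uniform (decidedly non-normal) population; the sample size, the number of repetitions, and the seed are arbitrary choices made for this illustration:

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Population: uniform on [0, 1] -> mean 0.5, standard deviation sqrt(1/12)
def sample_mean(n):
    return statistics.fmean(random.random() for _ in range(n))

n = 30
means = [sample_mean(n) for _ in range(2000)]

grand_mean = statistics.fmean(means)    # should be near the population mean 0.5
sd_of_means = statistics.stdev(means)   # should be near sigma/sqrt(n)
theory = math.sqrt(1 / 12) / math.sqrt(n)
```

The simulated standard deviation of the sample means lands close to σ/√n, which is the second half of the statement above.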
6.5 Summary
Chapter 7. Estimation
7.1 Introduction
7.2 Estimation of a population mean: Large-sample case
7.3 Estimation of a population mean: Small-sample case
7.4 Estimation of a population proportion
7.5 Estimation of the difference between two population means: Independent samples
7.6 Estimation of the difference between two population means: Matched pairs
7.7 Estimation of the difference between two population proportions
7.8 Choosing the sample size
7.9 Estimation of a population variance
7.10 Summary
Chapter 7 (continued 1)
7.2 Estimation of a population mean: Large-sample case
Point estimate of a population mean: the sample mean x̄
A large-sample (1 − α)100% confidence interval for a population mean (uses the fact that for sufficiently large sample size n ≥ 30, the sampling distribution of the sample mean x̄ is approximately normal).
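A minimal sketch of the large-sample interval x̄ ± z·s/√n; the summary statistics below are hypothetical, and z = 1.96 is the familiar critical value for 95% confidence:

```python
import math

# Hypothetical summary statistics from a large sample (n >= 30)
n, xbar, s = 50, 72.4, 8.1
z = 1.96                      # z_{alpha/2} for a 95% confidence level

half_width = z * s / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)   # (lower limit, upper limit)
```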
Chapter 7 (continued 2)
7.5 Estimation of the difference between two population means: Independent samples
For sufficiently large sample sizes (n1 and n2 ≥ 30), the sampling distribution of (x̄1 − x̄2), the estimator of μ1 − μ2 based on independent random samples from the two populations, is approximately normal.
Small sample sizes: procedure under some assumptions on the populations
7.6 Estimation of the difference between two population means: Matched pairs
Assumption: the population of paired differences is normally distributed
Procedure
Chapter 8. General Concepts of Hypothesis Testing
8.1 Introduction
The procedures to be discussed are useful in situations where we are interested in making a decision about a parameter value rather than obtaining an estimate of its value.
The test statistic is a sample statistic upon which the decision concerning the null and alternative hypotheses is based.
The rejection region is the set of possible values of the test statistic for which the null hypothesis will be rejected.
Critical value = boundary value of the rejection region
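These concepts fit together in a few lines; the sketch below is a large-sample z-test with hypothetical data (H0: μ = 100, two-tailed, α = 0.05):

```python
import math

# Hypothetical large-sample test of H0: mu = 100 against Ha: mu != 100
n, xbar, s = 64, 103.2, 10.0
mu0 = 100.0

z = (xbar - mu0) / (s / math.sqrt(n))   # test statistic
critical = 1.96                          # critical value for alpha = 0.05, two-tailed
reject_h0 = abs(z) > critical            # rejection region: |z| > 1.96
```

Here z = 2.56 falls in the rejection region, so H0 would be rejected at the 5% level.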
Chapter 9. Applications of Hypothesis Testing
9.1 Diagnosing a hypothesis test
9.2 Hypothesis test about a population mean
9.3 Hypothesis test about a population proportion
9.4 Hypothesis tests about the difference between two population means
9.5 Hypothesis tests about the difference between two proportions
9.6 Hypothesis test about a population variance
9.7 Hypothesis test about the ratio of two population variances
9.8 Summary
9.9 Exercises
Chapter 9 (continued 1)
For large samples, the sampling distribution of the sample mean x̄ is approximately normal and s is a good approximation of σ.
Procedure for a large-sample test
9.4 Hypothesis tests about the difference between two population means
Large-sample test: Assumptions: n1 ≥ 30, n2 ≥ 30; samples are selected randomly and independently from the populations
Small-sample test
Chapter 9 (continued 2) 9.5 Hypothesis tests about the difference between two proportions:
Assumptions, Procedure
9.7 Hypothesis test about the ratio of two population variances (optional)
Assumptions: the populations have approximately normal distributions; the random samples are independent.
Procedure using the F-distribution
Chapter 10. Categorical Data Analysis and Analysis of Variance
10.1 Introduction
10.2 Tests of goodness of fit
10.3 The analysis of contingency tables
10.4 Contingency tables in statistical software packages
10.5 Introduction to analysis of variance
10.6 Design of experiments
10.7 Completely randomized designs
10.8 Randomized block designs
10.9 Multiple comparisons of means and confidence regions
10.10 Summary
10.11 Exercises
Purpose: to test for dependence of a qualitative variable that allows for more than two categories for a response. Namely, it tests whether there is a significant difference between an observed frequency distribution and a theoretical frequency distribution.
Procedure for a Chi-square goodness-of-fit test
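As a sketch of the procedure, the statistic Σ(O − E)²/E is compared with a chi-square critical value; the counts below are hypothetical die-roll data, and 11.07 is the familiar chi-square critical value for α = 0.05 with 5 degrees of freedom:

```python
# Hypothetical 120 die rolls; H0: the die is fair (uniform expected counts)
observed = [18, 22, 16, 25, 19, 20]
expected = [sum(observed) / 6] * 6      # 20 expected in each category under H0

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Compare with the critical value chi2_{0.05} for df = 6 - 1 = 5 (about 11.07)
reject_h0 = chi2 > 11.07
```

Here chi2 = 2.5 is far below the critical value, so this sample gives no evidence against the fair-die hypothesis.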
Chapter 10 (continued 2)
10.5 Introduction to analysis of variance
Purpose: comparison of more than two means
10.6 Design of experiments
Concepts of experiment, design of the experiment, response variable, factor, treatment
Concepts of between-sample variation and within-sample variation
Completely randomized designs involve a comparison of the means of k treatments, based on independent random samples of n1, n2, ..., nk observations drawn from the populations.
Assumptions: all k populations are normal and have equal variances
F-test for comparing k population means
Concept of randomized block design
Tests to compare k treatment and b block means
Chapter 11. Simple Linear Regression and Correlation
11.1 Introduction: Bivariate relationships
11.2 Simple linear regression: Assumptions
11.3 Estimating A and B: the method of least squares
11.4 Estimating σ²
11.5 Making inferences about the slope, B
11.6 Correlation analysis
11.7 Using the model for estimation and prediction
11.8 Simple linear regression: An overview example
11.9 Exercises
The subject is to determine the relationship between two variables.
Types of relationships: direct and inverse
Scattergram
A simple linear regression model: y = A + Bx + e
The problem is making an inference about the population regression line E(y) = A + Bx based on the sample regression line ŷ = a + bx.
Sampling distribution of the least squares estimator of the slope, b
Test of the utility of the model: H0: B = 0 against Ha: B ≠ 0 (or B > 0, B < 0)
A (1 − α)100% confidence interval for B
Correlation analysis is the statistical tool for describing the degree to which one variable is linearly related to another.
The coefficient of correlation r is a measure of the strength of the linear relationship between two variables.
The coefficient of determination
A (1 − α)100% confidence interval for the mean value of y for x = xp
A (1 − α)100% confidence interval for an individual y for x = xp
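The least squares estimates and the coefficient of correlation can be computed from the sums of squares SSxy, SSxx, and SSyy; the bivariate data below are hypothetical, chosen so that y roughly follows y = 2x:

```python
import math
import statistics

# Hypothetical bivariate data, roughly following y = 2x
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

xbar, ybar = statistics.fmean(x), statistics.fmean(y)
ss_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
ss_xx = sum((xi - xbar) ** 2 for xi in x)
ss_yy = sum((yi - ybar) ** 2 for yi in y)

b = ss_xy / ss_xx          # least squares estimate of the slope B
a = ybar - b * xbar        # least squares estimate of the intercept A

r = ss_xy / math.sqrt(ss_xx * ss_yy)   # coefficient of correlation
r2 = r * r                             # coefficient of determination
```

For these data b ≈ 1.99 and r is very close to 1, reflecting the nearly perfect direct linear relationship.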
Chapter 12. Multiple Regression
12.1 Introduction: the general linear model
12.2 Model assumptions
12.3 Fitting the model: the method of least squares
12.4 Estimating σ²
12.5 Estimating and testing hypotheses about the B parameters
12.6 Checking the utility of a model
12.7 Using the model for estimation and prediction
12.8 Multiple linear regression: An overview example
12.9 Model building: interaction models
12.10 Model building: quadratic models
12.11 Exercises
y = B0 + B1x1 + ... + Bkxk + e, where y is the dependent variable, x1, x2, ..., xk are independent variables, and e is a random error.
12.4 Estimating σ²
12.5 Estimating and testing hypotheses about the B parameters
Sampling distributions of b0, b1, ..., bk
A (1 − α)100% confidence interval for Bi (i = 0, 1, ..., k)
Test of an individual parameter coefficient Bi
Finding a measure of how well a linear model fits a set of data: the multiple coefficient of determination
Testing the overall utility of the model
A (1 − α)100% confidence interval for the mean value of y for a given x
A (1 − α)100% confidence interval for an individual y for a given x
12.8 Multiple linear regression: An overview example
12.9 Model building: interaction models
Interaction model with two independent variables: E(y) = B0 + B1x1 + B2x2 + B3x1x2
Procedure to build an interaction model
Quadratic model in a single variable: E(y) = B0 + B1x + B2x²
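The defining feature of an interaction model is that the effect of x1 on E(y) depends on the value of x2. The sketch below evaluates a hypothetical fitted model (all coefficients invented for illustration) to make that visible:

```python
# Hypothetical fitted interaction model: E(y) = 1 + 2*x1 + 0.5*x2 + 0.3*x1*x2
def predict(x1, x2):
    b0, b1, b2, b3 = 1.0, 2.0, 0.5, 0.3   # hypothetical coefficients
    return b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2

# The change in E(y) per unit of x1 is B1 + B3*x2, so it depends on x2:
slope_x1_at_x2_0 = predict(1, 0) - predict(0, 0)     # B1 = 2.0
slope_x1_at_x2_10 = predict(1, 10) - predict(0, 10)  # B1 + B3*10 = 5.0
```

Without the B3x1x2 term the two slopes would be identical, which is exactly the no-interaction case.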
Chapter 13. Nonparametric Statistics
13.3 Comparing two populations based on independent random samples: Wilcoxon rank sum test
A nonparametric test about the difference between two populations is a test to detect whether distribution 1 is shifted to the right of distribution 2 or vice versa.
Wilcoxon rank sum test for a shift in population locations
The case of large samples (n1 ≥ 10, n2 ≥ 10)
Chapter 13 (continued 1)
13.5. Comparing populations using a completely randomized design: The Kruskal-Wallis H test
The Kruskal-Wallis H test is the nonparametric equivalent of the ANOVA F-test when the assumptions that the populations are normally distributed with common variance are not satisfied.
The Kruskal-Wallis H test for comparing k population probability distributions
Spearman's rank correlation coefficient is a statistic developed to measure and to test for correlation between two random variables.
Formula for computing Spearman's rank correlation coefficient rs
Spearman's nonparametric test for rank correlation
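The rank correlation formula rs = 1 − 6Σd²/(n(n² − 1)) can be sketched directly; the paired observations below are hypothetical, and the ranking helper assumes no ties for simplicity:

```python
def ranks(values):
    # rank from 1 (smallest) to n; assumes no ties for simplicity
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Hypothetical paired observations
x = [86, 97, 99, 100, 101, 103, 106, 110, 112, 113]
y = [2, 20, 28, 27, 50, 29, 7, 17, 6, 12]

n = len(x)
d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
rs = 1 - 6 * d2 / (n * (n ** 2 - 1))   # Spearman's rank correlation coefficient
```

Because rs is computed from ranks rather than raw values, it measures monotone association and requires no normality assumption, which is what makes it a nonparametric tool.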
13.7 Exercises