
1.) Probability: Probability is the chance that something will happen, that is, how likely it is that some event will occur.

Sometimes you can measure a probability with a number ("10% chance of rain"), or you can describe it with words such as impossible, unlikely, possible, even chance, likely, and certain.

2.) Markov Chain: A Markov chain is a mathematical system that undergoes transitions from one state to another among a finite or countable number of possible states. It is a random process characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of "memorylessness" is called the Markov property. Markov chains have many applications as statistical models of real-world processes.

A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,

Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n).

The possible values of the X_i form a countable set S called the state space of the chain. Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states.

Variations: Continuous-time Markov processes have a continuous index. Time-homogeneous Markov chains (or stationary Markov chains) are processes where

Pr(X_{n+1} = x | X_n = y) = Pr(X_n = x | X_{n-1} = y)

for all n; the probability of the transition is independent of n. A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying

Pr(X_n = x_n | X_{n-1} = x_{n-1}, ..., X_1 = x_1) = Pr(X_n = x_n | X_{n-1} = x_{n-1}, ..., X_{n-m} = x_{n-m}) for n > m.
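The memoryless transition rule above can be sketched in code. This is a minimal simulation, assuming an illustrative two-state weather chain; the states and transition probabilities are made up for the example:

```python
import random

# Illustrative transition matrix (assumed for this sketch): each row gives
# the probabilities of moving from the current state to each next state.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, steps, rng):
    """Walk the chain: each next state depends only on the current state."""
    state, path = start, [start]
    for _ in range(steps):
        row = TRANSITIONS[state]
        # Sample the next state from the current state's transition row only;
        # the earlier history plays no role (the Markov property).
        state = rng.choices(list(row), weights=row.values())[0]
        path.append(state)
    return path

path = simulate("sunny", 10, random.Random(0))
print(path)
```

Because sampling consults only the current state's row, the chain is memoryless by construction.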

In other words, the future state depends on the past m states. It is possible to construct a chain (Y_n) from (X_n) which has the classical Markov property: let Y_n = (X_n, X_{n-1}, ..., X_{n-m+1}), the ordered m-tuple of X values. Then (Y_n) is a Markov chain with state space S^m and has the classical Markov property, so a Markov chain of order m can in fact be reduced to a Markov chain of order m = 1 (a simple Markov chain). An additive Markov chain of order m is determined by an additive conditional probability,

Pr(X_n = x_n | X_{n-1} = x_{n-1}, ..., X_{n-m} = x_{n-m}) = sum_{r=1}^{m} f(x_n, x_{n-r}, r).
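The reduction Y_n = (X_n, X_{n-1}, ..., X_{n-m+1}) described above amounts to packing the last m observations into one tuple. A minimal sketch, using an illustrative sequence and m = 2:

```python
# Sketch of the order-m -> order-1 reduction: each element of the new chain
# is the ordered m-tuple (X_n, ..., X_{n-m+1}) of the most recent values.
def reduce_order(xs, m):
    """Return the tupled sequence (Y_n), which has the simple Markov property."""
    return [tuple(xs[i - m + 1 : i + 1][::-1]) for i in range(m - 1, len(xs))]

xs = ["a", "b", "a", "a", "b"]  # illustrative observations X_1, ..., X_5
print(reduce_order(xs, 2))
# First element is ("b", "a"): Y_2 = (X_2, X_1).
```

Each tuple carries the full m-step memory, so knowing the current Y_n is enough to determine the distribution of Y_{n+1}.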

The value f(x_n, x_{n-r}, r) is the additive contribution of the variable x_{n-r} to the conditional probability.

3.) Point Estimation: Point estimation, in statistics, is the process of finding an approximate value of some parameter, such as the mean (average), of a population from random samples of the population. The accuracy of any particular approximation is not known precisely, though probabilistic statements concerning the accuracy of such numbers as found over many experiments can be constructed.

It is desirable for a point estimate to be:
(1) Consistent: the larger the sample size, the more accurate the estimate.
(2) Unbiased: the expectation of the observed values over many samples (the average observed value) equals the corresponding population parameter. For example, the sample mean is an unbiased estimator of the population mean.
(3) Most efficient, or best unbiased: of all consistent, unbiased estimates, the one possessing the smallest variance (a measure of the amount of dispersion away from the estimate); in other words, the estimator that varies least from sample to sample. Which estimator is most efficient generally depends on the distribution of the population. For example, the mean is more efficient than the median (middle value) for the normal distribution, but not for more skewed (asymmetrical) distributions.

Several methods are used to calculate the estimator. The most often used, the maximum likelihood method, uses differential calculus to determine the maximum of the likelihood function of the sample. The method of moments equates values of sample moments (functions describing the parameter) to population moments; the solution of the resulting equations gives the desired estimate.

4.) Goodness Of Fit: The goodness of fit of a statistical model describes how well it fits a set of observations.
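As a minimal sketch of the point estimates discussed in (3) above: the sample mean is an unbiased estimator of the population mean, and dividing the sum of squared deviations by n - 1 (Bessel's correction) makes the sample variance unbiased as well. The sample values are illustrative:

```python
def point_estimates(sample):
    """Return (sample mean, unbiased sample variance) as point estimates."""
    n = len(sample)
    mean = sum(sample) / n  # unbiased estimator of the population mean
    # Dividing by (n - 1) rather than n gives an unbiased variance estimate.
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean, var

sample = [4.0, 7.0, 6.0, 3.0, 5.0]  # illustrative random sample
print(point_estimates(sample))  # (5.0, 2.5)
```

Averaged over many repeated samples, both estimates converge on the corresponding population parameters, which is exactly the unbiasedness property described above.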
Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions, or to test whether outcome frequencies follow a specified distribution. In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares. In assessing whether a given distribution is suited to a data set, the following tests and their underlying measures of fit can be used: the Kolmogorov-Smirnov test, the Cramér-von Mises criterion, the Anderson-Darling test, and the chi-squared test.
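As a minimal sketch of one such measure, the chi-squared statistic sums (observed - expected)^2 / expected over the outcome categories. The die-roll counts below are illustrative:

```python
def chi_squared(observed, expected):
    """Chi-squared goodness-of-fit statistic over matched category counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [8, 9, 19, 5, 8, 11]  # hypothetical counts from 60 die rolls
expected = [10] * 6              # fair die: 60 rolls / 6 faces
stat = chi_squared(observed, expected)
print(round(stat, 2))  # 11.6
```

With 6 categories there are 5 degrees of freedom, and the statistic would be compared against the chi-squared distribution to decide whether the fair-die hypothesis is rejected; a larger statistic means a worse fit.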
