Paradigms – A broad point of view on the way things are or the theory dominant in any
historical period
Epistemological ‘modes’ - The different ways of knowing what you know. Knowledge can come through science, tradition, religion, logic, convention, and so on.
Errors in ‘ordinary’ human inquiry – these are mistakes that are sometimes made from
inaccurate observation. For example, if asked what colour shirt our lecturer was wearing
on the first day of class, we might have to guess, because most of our daily observations
are casual and semiconscious. Deliberately making an effort to observe from the first day
of class, however, would help reduce this error. Errors can also come from
overgeneralization: out of two thousand persons at a gathering, we interview only five
and assume that all the others were there for the same reason. They can also result from
selective observation, illogical reasoning, and premature closure of inquiry.
‘Clocks and clouds’ analogy – In the debate between the two dominant paradigms,
logical positivism and verstehen, Almond likened clocks to logical positivism and clouds
to verstehen, showing that while the hard sciences can readily adhere to the scientific
method, just as clocks (or time) can be represented in a structured manner, social science
is not the same type of animal. The ‘cloud-like’ nature of social phenomena is ever
changing, reshaping itself into different outlines with growing and shrinking depths and
mass.
Dominant paradigm (Kuhn) - A single truth or world view that dominates a field of
science at any one time, e.g. Marxism was at one point a dominant theory
Anomalies (Kuhn) – findings that cannot be explained well and do not fit the expected pattern.
Deduction / deductive reasoning – the logical model in which specific expectations of
hypotheses are developed on the basis of general principles. For example, starting from
the general principle that all deans are meanies, you might anticipate that this one won’t
let you change courses. This anticipation would be the result of deduction.
Wallace’s ‘wheel of science’ analogy – this is a cycle that runs from theory to
hypotheses to observations to empirical generalizations and back to theory. The
movement from theory down to observations follows the logical model of deduction; the
reverse movement, starting from empirical generalizations and building up to theory,
follows the logical model of induction
Theory – a systematic explanation for the observations that relate to a particular aspect of
life, or a statement that organizes, predicts, and explains a general class of phenomena
Normative statement or proposition – deals with values and addresses what should be
rather than what is; for example, the statement ‘Jamaica should be a democratic society’
is an expression of a value judgment
Continuous variable – a variable whose attributes form a steady progression, such as age
or income. Thus, the ages of a group of people might include 30, 31, 32, 33, 34, and so
forth and could even be broken down into fractions of years
Discrete variable - a variable whose attributes are separate from one another, or
discontinuous as in the case of gender (male or female)
Positive relationship - an association whereby as the value of one variable increases, the
value of the other also increases or when one is present, the other is also present
Perfect relationship – this is when two variables are completely correlated, so that the
measure of association equals one (1)
Nominal – the level of measurement at which the properties of objects in one category
are identical and mutually exclusive and exhaustive for all its cases; it is the lowest level
of measurement and values cannot be rank ordered (for example, gender: male or
female)
Ordinal - the level of measurement in which all sets of observations generate a complete
ranking of objects (for example, from ‘the most’ to ‘the least’), although the distances
between them cannot be precisely measured; the increments between values are not uniform
Interval - the level of measurement at which the distances between observations are
exact; can be precisely measured in constant units and is continuous in nature. There is no
true/real or absolute zero (for example, temperature)
Ratio (-level) - the level of measurement that has a unique, true zero point at which the
phenomenon does not exist. For example, you cannot say you travelled at ‘0’ mph from
work to home; a speed of zero means no movement took place
Reliable / reliability – the consistency of a measuring instrument, that is, the extent to
which a measuring instrument contains variable error
Unidimensional (ity) – principle that implies that the items comprising a scale reflect a
single dimension and belong on a continuum that reflects one and only one theoretical
concept
Response set bias – a tendency to agree or disagree with every question in a series rather
than carefully thinking through one’s answer to each. The participant is no longer giving
a considered response but is simply following a pattern
Triangulation – use of more than one form of data collection to test the same hypothesis
within a unified research plan
Index – a composite measure that combines two or more indicators or items, which need
not be identical in structure
Likert scale – a scale often used in survey research in which people express attitudes or
other responses in terms of ordinal-level categories (for example, agree, disagree) that are
ranked along a continuum. It is a summated rating scale designed to assist in excluding
questionable items
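To make the scoring concrete, here is a minimal sketch of a summated Likert scale in Python; the five-point coding and the item responses are illustrative assumptions, not part of any particular instrument.

```python
# Hypothetical example: scoring a five-point summated Likert scale.
# The coding and item responses below are illustrative assumptions.
CODES = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def likert_score(responses):
    """Sum the ordinal codes across items to get a summated rating."""
    return sum(CODES[r] for r in responses)

# One respondent's answers to three attitude items.
answers = ["agree", "strongly agree", "neutral"]
print(likert_score(answers))  # 4 + 5 + 3 = 12
```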
Unit of analysis – the who or what being studied for example, ‘individual people’. It is
also the most elementary part of the phenomenon to be studied; its character influences
subsequent research design, data collection, and data analysis decisions
Ecological fallacy – the inappropriate generalization from a more complex to a simpler
unit of analysis (for example, drawing conclusions about individuals from group-level data)
Longitudinal study – a study design involving the collection of data at different points in
time
Determinism – an approach to human agency and causality that assumes human actions
are largely caused by forces external to individuals that can be identified
‘Criteria for causality’ – the variables must be correlated (some actual relationship exists
between the two variables), the cause must take place before the effect, and the relationship
must be nonspurious (the effect cannot be explained in terms of some third variable)
Necessary cause – represents a condition that must be present for the effect to follow (for
example, it is necessary to take college courses in order to get a degree)
Random selection - a sampling method in which each element has an equal chance of
selection independent of any other event in the selection process
Random sample – a sample in which the researcher uses a random number table or similar
mathematical process so that each sampling element in the population has an equal
probability of being selected
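As a sketch, random selection can be done with Python's standard random module in place of a random number table; the population of element IDs below is made up for illustration.

```python
import random

# Illustrative population of 1,000 element IDs (made-up data).
population = list(range(1, 1001))

# random.sample gives each element an equal chance of selection,
# independent of any other draw (selection without replacement).
sample = random.sample(population, k=50)
print(len(sample), sample[:5])
```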
Probability sampling - sample units selected from the sampling frame according to some
probabilistic scheme
Sampling frame - the list of the sampling units that is used in the selection of the sample
Stratified sample – this is when you group sampling frame elements according to
categories of one characteristic and sample from each group separately
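A minimal sketch of stratified sampling, assuming a made-up sampling frame in which each element carries one grouping characteristic; the "urban"/"rural" labels are hypothetical.

```python
import random
from collections import defaultdict

# Made-up sampling frame: (element_id, stratum) pairs.
frame = [(i, "urban" if i % 3 else "rural") for i in range(1, 101)]

# Group frame elements by categories of one characteristic...
strata = defaultdict(list)
for element, group in frame:
    strata[group].append(element)

# ...then sample from each group (stratum) separately.
sample = []
for group, elements in strata.items():
    sample.extend(random.sample(elements, k=5))
print(sample)
```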
‘Significance’ test(s) – this indicates the probability that a relationship could have
occurred because of chance alone. It is also a class of statistical computations that
indicate the likelihood that the relationship observed between variables in a sample can be
attributed to sampling error only
Level of significance – the probability of rejecting a true null hypothesis; that is, the
probability of making a Type I error
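A short sketch of running a significance test and comparing the result to a chosen level of significance. This assumes SciPy is available; the crosstab counts are made up for illustration.

```python
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Illustrative 2x2 crosstab of observed counts (made-up data).
table = [[30, 20],
         [10, 40]]

chi2, p, dof, expected = chi2_contingency(table)

# Compare p to a chosen level of significance (alpha).
alpha = 0.05  # probability of rejecting a true null (Type I error)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print("reject H0" if p < alpha else "fail to reject H0")
```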
Data reduction – this is using scientific analysis to reduce data from unmanageable
details to manageable summaries
Statistic vs parameter – a statistic is the summary description of a variable in a sample,
used to estimate a population parameter; a parameter is the summary description of a
given variable in a population
Descriptive statistics – statistical procedures used for describing and analyzing data that
enable the researcher to summarize and organize data in an effective and meaningful way
Univariate statistics - statistical measures that deal with one variable only
Multivariate statistics - statistical measures that deal with more than two variables
Percentage – this is a way of expressing a number as a fraction of 100 (per cent meaning
"per hundred"). It is often denoted using the percent sign, "%". For example, 65% (read as
"sixty-five percent") is equal to 65/100, or 0.65
Median – a measure of central tendency defined as the point above and below which 50
percent of the observations fall
Range – measures the distance between the highest and the lowest values of a distribution
Standard deviation – a commonly used measure of variability whose size indicates the
dispersion of a distribution
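The three summary measures just defined (median, range, standard deviation) can be computed with Python's standard statistics module; the observations below are made up for illustration.

```python
import statistics

# Made-up sample of observations.
data = [4, 8, 6, 5, 3, 7, 9]

median = statistics.median(data)     # point with 50% above and below
value_range = max(data) - min(data)  # highest minus lowest value
stdev = statistics.stdev(data)       # dispersion of the distribution

print(median, value_range, round(stdev, 2))
```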
Subgroup comparison – this is the dividing of data into subgroups and comparing their
differences
Measures of ‘association’ - a single number that expresses the strength, and often the
direction, of a relationship. It condenses information about a bivariate relationship into a
single number
Phi - this is a chi-square based measure of association that involves dividing the chi-
square statistic by the sample size and taking the square root of the result.
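A worked sketch of phi computed exactly as defined above, from a made-up 2x2 table: the chi-square statistic is divided by the sample size and the square root taken. No libraries are assumed.

```python
from math import sqrt

# Illustrative 2x2 table of observed counts (made-up data).
obs = [[30, 20],
       [10, 40]]

n = sum(sum(row) for row in obs)
row_tot = [sum(row) for row in obs]
col_tot = [sum(col) for col in zip(*obs)]

# Pearson chi-square statistic from observed vs. expected counts.
chi2 = sum((obs[i][j] - row_tot[i] * col_tot[j] / n) ** 2
           / (row_tot[i] * col_tot[j] / n)
           for i in range(2) for j in range(2))

# Phi: divide chi-square by the sample size, take the square root.
phi = sqrt(chi2 / n)
print(round(phi, 3))
```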
Spearman’s rho – a statistic used to calculate the strength of the relationship between
two ordinal variables. It is the non-parametric alternative to the Pearson product-moment
correlation
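A minimal sketch using SciPy's spearmanr function, assuming SciPy is installed; the two sets of rankings are made up.

```python
from scipy.stats import spearmanr  # assumes SciPy is installed

# Made-up ordinal rankings on two variables.
x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 6, 5]

rho, p = spearmanr(x, y)
print(round(rho, 3))  # strength of the ordinal relationship
```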
Covariation – a measure of how two variables both vary relative to one another
Linear regression analysis – a form of statistical analysis that seeks the equation for the
straight line that best describes the relationship between two ratio variables
Regression line – a line based on the least squares criterion that is the best fit to the
points in a scatterplot
Least squares criterion – the criterion that the best-fitting line is the one that minimizes
the sum of the squared distances between the observed data points and the line
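A sketch tying the three regression entries above together: fitting a line by the least squares criterion using the standard closed-form slope and intercept. The paired observations are made up, and no libraries are assumed.

```python
# Made-up paired observations of two ratio variables.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least squares: this slope and intercept minimize the sum of squared
# vertical distances between the observed points and the line.
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f}x")
```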
Correlation matrix – a matrix of correlations; a method of presentation showing the
intercorrelations among several variables
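A quick sketch of building a correlation matrix, assuming NumPy is available; the scores on the three variables are made up.

```python
import numpy as np  # assumes NumPy is installed

# Made-up scores on three variables (one row per variable).
data = np.array([[1, 2, 3, 4, 5],
                 [2, 4, 6, 8, 10],
                 [5, 3, 4, 1, 2]])

# Cell (i, j) holds the correlation between variables i and j;
# the diagonal is 1 (each variable with itself).
matrix = np.corrcoef(data)
print(np.round(matrix, 2))
```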
Regression assumptions – these are assumptions made about variables for analysis so
that the results can be trustworthy and also to avoid a Type I or Type II error, or over- or
under-estimation of significance or effect size
ANOVA – or Analysis of Variance is a method of analysis in which cases under study are
combined into groups representing an independent variable, and the extent to which the
groups differ from one another is analyzed in terms of some dependent variable. Then the
extent to which the groups differ is compared with the standard of random distribution.
There are two common forms: one-way analysis of variance and two-way analysis of
variance
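A minimal sketch of a one-way analysis of variance using SciPy's f_oneway, assuming SciPy is installed; the three groups' dependent-variable scores are made up.

```python
from scipy.stats import f_oneway  # assumes SciPy is installed

# Made-up dependent-variable scores for three groups
# (the groups represent categories of an independent variable).
group_a = [5, 6, 7, 6, 5]
group_b = [8, 9, 7, 8, 9]
group_c = [4, 5, 4, 3, 5]

# One-way ANOVA: do the group means differ more than a random
# distribution of cases would predict?
f_stat, p = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```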
Eta / eta squared - A measure of association that ranges from 0 to 1, with 0 indicating no
association between the row and column variables and values close to 1 indicating a high
degree of association. Eta is appropriate for a dependent variable measured on an interval
scale (e.g., income) and an independent variable with a limited number of categories (e.g.,
gender). Two eta values are computed: one treats the row variable as the interval variable;
the other treats the column variable as the interval variable.
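A worked sketch of eta squared as the ratio of between-group to total sum of squares, which is one standard way to compute it; the interval-level scores and the two-category grouping are made up.

```python
# Made-up interval-level scores grouped by a categorical variable.
groups = {"male": [4.0, 5.0, 6.0], "female": [7.0, 8.0, 9.0]}

scores = [s for g in groups.values() for s in g]
grand_mean = sum(scores) / len(scores)

# Eta squared: between-group sum of squares over total sum of squares;
# 0 means no association, values near 1 mean a strong association.
ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                 for g in groups.values())
ss_total = sum((s - grand_mean) ** 2 for s in scores)

eta_squared = ss_between / ss_total
print(round(eta_squared, 3))
```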
Reliability analysis - allows you to study the properties of measurement scales and the
items that make them up. The Reliability Analysis procedure calculates a number of
commonly used measures of scale reliability and also provides information about the
relationships between individual items in the scale. Intra-class correlation coefficients can
be used to compute inter-rater reliability estimates.
Inter-item correlation - a type of reliability analysis that reports the average (mean) of
all the correlations among the items
Alpha (Cronbach’s alpha) - this is a reliability model of internal consistency, based on
the average inter-item correlation.
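A sketch of Cronbach's alpha computed from item and total-score variances, using the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals); the responses below are made up, and only the standard library is used.

```python
import statistics

# Made-up responses: rows are respondents, columns are scale items.
items = [[3, 4, 3],
         [4, 5, 4],
         [2, 3, 2],
         [5, 5, 4]]

k = len(items[0])                     # number of items in the scale
cols = list(zip(*items))              # scores grouped per item
totals = [sum(row) for row in items]  # each respondent's total score

# Cronbach's alpha from item variances and total-score variance.
item_vars = sum(statistics.variance(c) for c in cols)
alpha = (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))
print(round(alpha, 3))
```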