
East Carolina University
Department of Psychology

When Does Correlation Imply Causation?
Here are selected contributions from a discussion, on the EDSTAT list, about
the phrase "correlation does not imply causation:"
From: Wuensch, Karl L
Sent: Wednesday, December 05, 2001 7:36 AM
To: edstat-l@jse.stat.ncsu.edu
Subject: When does correlation imply causation?

I opined, "Correlation is necessary but not sufficient for establishing a causal
relationship." Jim opined, "Depending on precisely what Karl means by
'correlation is necessary,' I'd have to disagree strongly."

More nearly precisely what I mean follows, but is long.

First, let me give a short answer to the question "When does correlation imply
causation?" The short answer is: When the data from which the correlation was
computed were obtained by experimental means with appropriate care to avoid
confounding and other threats to the internal validity of the experiment.

My long answer will start with a distinction between correlation as a statistical
technique and "correlational" (nonexperimental) designs as a way to gather data.

It is not rare for researchers and students to confuse (1) correlation as a
statistical technique with (2) nonexperimental data collection methods, which are
also often described as "correlational." For example, a doctoral candidate at
Florida State University hired me to assist him with the statistical analysis of data
collected for his dissertation. No variables were manipulated in his research. I
used multiple regression (a path analysis) to test his causal model. When he
presented this analysis to his dissertation committee the chair asked him to
reanalyze the data with an ANOVA, explaining that results obtained with ANOVA
would allow them to infer causality, but results obtained with multiple regression
would not because "correlation does not imply causation." I cannot politely tell
you what my initial response to this was. After I cooled down, and realizing that it
would be fruitless to try to explain to this chair that ANOVA is simply a multiple
regression with dummy coded predictors, I suggested that the student present
the same analysis but describe it as a "hierarchical least squares ANOVA." The
analysis was accepted under this name and the chair felt that she then had the
appropriate tool with which to make causal inferences from the data.

I have frequently encountered this delusion, the belief that it is the type of
statistical analysis done, not the method by which the data are collected, which
determines whether or not one can make causal inferences with confidence.
Several times I have had to explain to my colleagues that two-group t tests and
ANOVA are just special cases of correlation/regression analysis. One was a
senior colleague who taught statistics, research methods, and experimental
psychology in a graduate program. When I demonstrated to him that a test of the
null hypothesis that a point biserial correlation coefficient is zero is absolutely
equivalent to an independent samples (pooled variances) two-groups t test, he
was amazed.
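That equivalence is easy to check numerically. Below is a minimal sketch in Python; the scores are invented purely for illustration. It computes t twice, once from the point biserial r and once as a pooled-variances two-group t test:

```python
import math

# Invented data: X is group membership (0 or 1), Y is a continuous score.
x = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y = [4.1, 5.0, 3.8, 4.4, 4.7, 6.2, 5.9, 6.5, 5.4, 6.0]
n = len(x)

# Route 1: point biserial r (a Pearson r with a dichotomous X),
# converted to t via t = r * sqrt((n - 2) / (1 - r^2)).
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)
t_from_r = r * math.sqrt((n - 2) / (1 - r ** 2))

# Route 2: independent samples, pooled variances two-group t test.
g0 = [b for a, b in zip(x, y) if a == 0]
g1 = [b for a, b in zip(x, y) if a == 1]
m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)
ss0 = sum((v - m0) ** 2 for v in g0)
ss1 = sum((v - m1) ** 2 for v in g1)
sp2 = (ss0 + ss1) / (n - 2)                       # pooled variance
t_pooled = (m1 - m0) / math.sqrt(sp2 * (1 / len(g0) + 1 / len(g1)))

print(round(t_from_r, 6), round(t_pooled, 6))     # the two t values agree
```

Both routes give the same t on the same degrees of freedom (n - 2), so the p values are identical as well.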
The hypothetical example that I give my students is this: Imagine that we go
downtown and ask people to take a reaction time test and to blow into a device
that measures alcohol in the breath. We correlate these two measures and reject
the hypothesis of independence. Can we conclude, from this evidence, that
drinking alcohol causes increased reaction time? Of course not. There are all
sorts of potential noncausal explanations of the observed correlation. Perhaps
some constellation of "third variables" is causing variance in both reaction time
and alcohol consumption -- for example, perhaps certain brain defects both (1)
slow reaction time and (2) dispose people to consume alcohol. Suppose we take
these same data and dichotomize the alcohol measure. We employ an
independent samples t test to compare the mean reaction time of those who
have consumed alcohol with that of those who have not. We find the mean of
those who have consumed alcohol to be significantly higher. Does that now allow
us to infer that drinking alcohol causes increases in reaction time? Of course not
-- the same potential noncausal explanations that prevented such inference with
the correlational analysis also prevent such inference with the two-group t test
conducted on data collected by nonexperimental means.

Now consider that we bring our research into the lab. We employ experimental
means -- we randomly assign some folks to an alcohol consumption group,
others to a placebo group, taking care to avoid any procedural or other
confounds. When a two-groups t test shows that those in the alcohol group have
significantly higher reaction time than those in the placebo group, we are
confident that we have results that allow us to infer that drinking alcohol causes
slowed reaction time. If we had conducted the analysis by computing the point
biserial correlation coefficient and testing its deviation from zero, we should be no
less confident of our causal inference, and, of course, the value of t (or F) and p
obtained by these two seemingly different analyses would be identical.

Accordingly, I argue that correlation is a necessary but not a sufficient
condition to make causal inferences with reasonable confidence. Also necessary
is an appropriate method of data collection. To make such causal inferences one
must gather the data by experimental means, controlling extraneous variables
which might confound the results. Having gathered the data in this fashion, if one
can establish that the experimentally manipulated variable is correlated with the
dependent variable (and that correlation does not need to be linear), then one
should be (somewhat) comfortable in making a causal inference. That is, when
the data have been gathered by experimental means and confounds have
been eliminated, correlation does imply causation.

So why is it that many persons believe that one can make causal inferences
with confidence from the results of two-group t tests and ANOVA but not with the
results of correlation/regression techniques? I believe that this delusion stems
from the fact that experimental research typically involves a small number of
experimental treatments and that data from such research are conveniently
evaluated with two-group t tests and ANOVA. Accordingly, t tests and ANOVA
are covered when students are learning about experimental research. Students
then confuse the statistical technique with the experimental method. I also feel
that the use of the term "correlational design" contributes to the problem. When
students are taught to use the term "correlational design" to describe
nonexperimental methods of collecting data, and cautioned regarding the
problems associated with inferring causality from such data, the students confuse
correlational statistical techniques with "correlational" data collection methods. I
refuse to use the word "correlational" when describing a design. I much prefer
"nonexperimental" or "observational."

In closing, let me be a bit picky about the meaning of the word "imply." Today
this word is used most often to mean "to hint" or "to suggest" rather than "to have
as a necessary part." Accordingly, I argue that correlation does imply (hint at)
causation, even when the correlation is observed in data not collected by
experimental means. Of course, with nonexperimental research, the potential
causal explanations of the observed correlation between X and Y must include
models that involve additional variables and which differ with respect to which
events are causes and which effects.

Karl L. Wuensch, Department of Psychology,


East Carolina University, Greenville NC 27858-4353

From: Art Kendall [Arthur.Kendall@verizon.net]

I concur. Another way to put it is:

The results of statistical analyses are parts of principled arguments about
causality.

If correlation (in the broad sense) remains after taking into account (controlling,
rendering unlikely) plausible rival hypotheses, it does imply (support, suggest,
indicate, make plausible) causation.

In experimental studies, active manipulation of independent variables, and
random assignment to conditions, go a long way toward minimizing the
plausibility of rival hypotheses. If there is a correlation between treatment and
score on a dependent variable (i.e., if there is a difference among treatment
groups) after rejecting consistency with a merely random process, a causal
hypothesis is supported.

In quasi-experimental designs, case selection, partialling, controlling, etc., are
needed to support causal argumentation.

From: Michael M. Granaas

I have noticed the same tendency to confuse the correlation coefficient with
observational data collection methods. I explain to my students that the confusion
comes from exactly the types of things that Karl describes. Historically regression
has been taught in the context of observational research and ANOVA in the
context of experimental research. (Kirk's book about ANOVA designs is even
entitled "Experimental Design".) This simplifies things for the learner, but can
lead them down exactly the wrong path when it comes to analyzing/interpreting
their data later in life.

We really need to emphasize over and over that it is the manner in which you
collect the data and not the statistical technique that allows one to make causal
inferences.

Michael M. Granaas
Associate Professor mgranaas@usd.edu
Department of Psychology
University of South Dakota
Vermillion, SD 57069

From: Dennis Roberts [dmr@psu.edu]

correlation NEVER implies causation ...

the problem with this is ... does higher correlation mean MORE cause? lower r
mean LESS cause?

From: Karl W.

Dennis is not going to like this, since he has already expressed a disdain for r²,
omega-squared, and eta-squared measures of the strength of the effect of one
variable on another, but here is my brief reply:

r² tells us to what extent we have been able to eliminate, in our data collection
procedures, the contribution of other factors which influence the dependent
variable.

Excellent essay. I agree completely about the confusion of method (ANOVA vs.
correlation) with the nature of the data gathering process.
Neil W. Henry, Department of Sociology and Anthropology
Department of Statistical Sciences and Operations Research, Box 843083
Virginia Commonwealth University, Richmond VA 23284-3083

I appreciated your comments on correlation/causation. I teach stats & research
design in the College of Ed at the U of Ky, and I hammer my students all the time
with research scenarios, asking, "What kind of research is this? What kinds of
conclusions can you draw about this kind of research?"

Last summer I created a webpage for my students, and I continue to add to it.
May I post your comments on my page and put a link to your website there?
Here's a link to the page: http://www.uky.edu/~ldesh2/stats.htm

Cheers, Lise Deshea

From: Karl W.

My experimental units are 100 classrooms on campus. As I walk into each
room I flip a perfectly fair coin in a perfectly fair way to determine whether I turn
the room lights on (X = 1) or off (X = 0). I then determine whether or not I can
read the fine print on my bottle of smart pills (Y = 0 for no, Y = 1 for yes). From
the resulting pairs of scores (one for each classroom), I compute the phi
coefficient (which is a Pearson r computed with dichotomous data). Phi = .5. I
test and reject the null hypothesis that phi is zero in the population (using chi-
square as the test statistic). Does correlation (phi is not equal to zero) imply
causation in this case? That is, can I conclude that turning the lights on affects
my ability to read fine print?
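The phi/chi-square link invoked here can be checked directly: for a 2 x 2 table, chi-square equals n times phi squared. A sketch with invented 0/1 pairs (illustrative data, not the phi = .5 of the thought experiment):

```python
import math

# Invented 0/1 pairs: X = lights on, Y = could read the fine print.
x = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y = [1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0]
n = len(x)

# Phi is simply the Pearson r computed on the dichotomous pairs.
mx, my = sum(x) / n, sum(y) / n
phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
       / math.sqrt(sum((a - mx) ** 2 for a in x)
                   * sum((b - my) ** 2 for b in y)))

# Chi-square from the 2x2 contingency table of the same data.
counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
for a, b in zip(x, y):
    counts[(a, b)] += 1
chi2 = 0.0
for i in (0, 1):
    for j in (0, 1):
        row = counts[(i, 0)] + counts[(i, 1)]
        col = counts[(0, j)] + counts[(1, j)]
        expected = row * col / n
        chi2 += (counts[(i, j)] - expected) ** 2 / expected

print(round(chi2, 6), round(n * phi ** 2, 6))   # the two values agree
```

So testing phi against zero with chi-square as the test statistic is the same test either way.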

I modify my experiment such that Y is now the reading on an instrument that
measures the intensity of light in the classroom. I correlate X with Y (point biserial
r, a Pearson r between a dichotomous and a continuous variable) and obtain r =
.5. I test and reject the null that this r is zero in the population (using t or F as the
test statistic). Does correlation (point biserial r is not zero) imply causation in this
case? That is, can I conclude that one of things I can do to increase the intensity
of light in the room is to turn on the lights?

I modify this second experiment by creating three experimental groups, with
classrooms randomly assigned to groups. In one group I turn off the lights and
close the blinds. In a second group I raise the blinds but turn off the lights. In a
third group I raise the blinds and turn on the lights. I compute eta, the nonlinear
correlation coefficient relating group membership to brightness of light in the
room. Alternatively I dummy code group membership and conduct a multiple
regression predicting brightness from my dummy variables. R = eta = .5. I test
and reject the null hypothesis that R and eta are zero in the population (using F
as my test statistic). Does correlation (R or eta are not equal to zero) imply
causation in this case?
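The claim that the multiple R from a dummy-coded regression equals eta can also be verified numerically. A sketch in Python, with made-up brightness readings (the data and group sizes are purely illustrative):

```python
import math

# Invented brightness readings for three lighting conditions.
groups = {0: [2.0, 2.4, 1.8, 2.2],      # lights off, blinds closed
          1: [5.1, 4.8, 5.5, 5.0],      # blinds open, lights off
          2: [8.9, 9.3, 8.6, 9.0]}      # blinds open, lights on
y = [v for vs in groups.values() for v in vs]
n = len(y)
gm = sum(y) / n

# Eta, the correlation ratio: sqrt(SS_between / SS_total).
ss_total = sum((v - gm) ** 2 for v in y)
ss_between = sum(len(vs) * (sum(vs) / len(vs) - gm) ** 2
                 for vs in groups.values())
eta = math.sqrt(ss_between / ss_total)

# Dummy-coded regression: design rows are [1, d1, d2].
X = []
for g, vs in groups.items():
    for _ in vs:
        X.append([1.0, 1.0 if g == 1 else 0.0, 1.0 if g == 2 else 0.0])

# Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination.
k = 3
A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
     + [sum(X[i][p] * y[i] for i in range(n))] for p in range(k)]
for col in range(k):
    piv = max(range(col, k), key=lambda row: abs(A[row][col]))
    A[col], A[piv] = A[piv], A[col]
    for row in range(k):
        if row != col:
            f = A[row][col] / A[col][col]
            A[row] = [u - f * v for u, v in zip(A[row], A[col])]
b = [A[p][k] / A[p][p] for p in range(k)]

# Fitted values are the group means, so SS_model = SS_between.
yhat = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
ss_model = sum((v - gm) ** 2 for v in yhat)
R = math.sqrt(ss_model / ss_total)

print(round(eta, 6), round(R, 6))       # multiple R equals eta
```

The regression's fitted values for dummy coding are exactly the group means, which is why R and eta must coincide.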

I could continue on with other correlations appropriate for various experimental
designs, but I would hope that you have gotten the point by now.

From: Stephen Levine [mailto:szlevine@netvision.net.il]
Sent: Tuesday, December 11, 2001 3:47 AM
To: Karl L. Wuensch
Subject: Re: When does correlation imply causation?

Hi,
You wrote: "Several times I have had to explain to my colleagues that two-group
t tests and ANOVA are just special cases of correlation/regression analysis."
I can see what you mean - could you please prove it? I read, in a pretty good text,
that the results are not necessarily the same!

Reply from: Wuensch, Karl L
Subject: ANOVA = Regression

For a demonstration of the equivalence of regression and traditional ANOVA,
just point your browser to T Tests, ANOVA, and Regression Analysis.

A few weeks ago I found a nicely written article on this topic in the Teaching of
Psychology. Here is the reference:

• Hatfield, J., Faunce, G. J., & Soames Job, R. F. (2006). Avoiding confusion
surrounding the phrase "correlation does not imply causation." Teaching of
Psychology, 33, 49-51.

This page most recently revised on 13 November 2006.

Section 3.2: Non-Experimental Designs


From Research Methods in Psychology


Contents

• 1 What are you studying?
• 2 Quasi-experimental designs
o 2.1 Non-equivalent control groups designs
o 2.2 Interrupted Time Series Designs
• 3 Correlation
o 3.1 Why use correlation?
o 3.2 Problems with correlations
• 4 Case Studies
o 4.1 Advantages of Case Studies
o 4.2 Disadvantages of Case Studies
• 5 Surveys
• 6 Section Summary

What are you studying?

In this section we will describe and evaluate four non-experimental designs:
quasi-experimental designs, correlational designs, case studies and surveys.

Quasi-experimental designs

Crucial concept: in a quasi-experiment, a researcher finds naturally occurring groups -
they do not assign people to the different groups.

The main difference between the quasi-experimental design and the true experimental
design is that with the former, participants are not randomly assigned to experimental
groups. Instead, the experimenter may assign participants to groups according to age, sex,
weight, nationality, or some behavioural characteristic (for example smokers and non-
smokers). Alternatively, a researcher might observe the effects of events such as an
accident, or a change in someone’s life circumstances.

Quasi-experimental designs come in two types:

Non-equivalent control groups designs

The non-equivalent control group design is the equivalent of the independent groups
experimental design that was discussed in Chapter 2. When an independent groups
experimental design is used, the participants are randomly assigned to conditions or
levels of the independent variable. Sometimes it is not possible to assign participants
randomly to different conditions. This might occur for several reasons.

• The independent variable of interest may be some characteristic of the person,
such as age or sex - which we cannot alter.
• It might be unethical to assign participants to conditions, for example we might be
interested in the effects of women drinking whilst pregnant.
• The independent variable of interest might be a behavioural characteristic of a
person, such as a smoking habit - which we cannot alter.

Because we didn’t assign participants to conditions, we cannot be sure that the
independent variable was the cause of any difference that we find - there is always the
possibility that an extraneous variable had a systematic effect, and was the actual cause of
the difference. We can’t, in other words, be sure that the control group is equivalent to
the experimental group. To reduce the problems of non-equivalence as much as possible,
it is usual to compare the different groups in other ways, and ensure that they are as
similar as possible.

Interrupted Time Series Designs

The interrupted time-series design is the equivalent of the repeated measures design
(discussed in Chapter 2), but as with the non-equivalent control groups design, the
researchers do not control when the intervention occurs. When an interrupted time-series
design is used, participants are observed or their behaviour is measured on a number of
occasions prior to some event. After the event, the participants are observed or measured
again, usually more than once. The aim of the research is to investigate the effect of the
event. This design would be used to study the impact of an event such as redundancy or
pregnancy.

Correlation
Crucial concept: A correlation is used to measure the relationship between two
variables.

Correlational studies are probably the most common types of non-experimental design. A
correlational design is used to determine whether or not there is a relationship between
two variables that we cannot manipulate (as in an experimental design). Correlational
designs are commonly employed in the applied branches of psychology (clinical, health,
educational, occupational) where the variables of interest will be real world phenomena
that we cannot manipulate.

Why use correlation?

The advantage of the correlational design is its flexibility. We can collect data on any
type of variable, then look for relationships. Correlations are commonly used where
characteristics of people are the variables of interest and the values of these variables are
continuous rather than categorical. In psychology, the types of variable that are studied
using correlations are individual characteristics such as intelligence, attitudes and
personality dimensions.
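For readers who want to see the computation itself, a Pearson correlation can be coded in a few lines; the variable names and scores below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up scores: as the first variable rises, the second tends to fall,
# so r comes out negative - a relationship, not proof of causation.
openness = [12, 15, 9, 20, 18, 7, 14]
right_wing = [30, 24, 35, 18, 20, 38, 26]
print(round(pearson_r(openness, right_wing), 3))
```

The sign and size of r describe the relationship; nothing in the computation says which variable, if either, is the cause.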

Riemann et al. (1993) carried out a correlational study. They were interested in how a
personality dimension known as ‘openness to experience’ related to adherence to right
wing political ideology. They collected data on openness and on right wing orientation
and found a negative relationship between the two variables. This finding suggests
that people who have lower scores on the openness questionnaire are more likely to
endorse right wing ideas. It does not suggest that there is a causal relationship between
these two variables, or that being open to experience is a cause of being left wing.
Correlational research design enables us to look at historical data that other people have
already collected, and use those data for our analysis - we do not need to go out and
collect the data ourselves. Herrnstein and Murray (1994) used a very large American
study called the National Longitudinal Survey of Youth (NLSY) to analyse data about a
very wide range of variables, including intelligence, poverty, crime, race and pregnancy.
They did not need to go and collect these data themselves, they were already collected
and made available to academics. (It should be noted that Herrnstein and Murray’s
analysis has been questioned, particularly in relation to the causal conclusions that they
drew.)

Correlational analysis can be extended in a number of ways into more complex analyses
that consider relationships between many variables at once. We will be looking at what
can be done with these techniques in Chapter 8.

Problems with correlations

The main disadvantage of the correlational technique is that you cannot say for sure that
you know that one variable, A, is causing the other variable, B. Sometimes it may seem
obvious that A is causing B, but when you think harder, you realise that this is not
necessarily the case. Maybe B is causing A, or it may be that C (which we do not
necessarily know about) is causing both A and B. It is very easy to come to a wrong
conclusion which the evidence does not justify. Consider the following examples:

• Children who are brought up by loving, tender parents have higher self-esteem.
• People who watch more violent television programmes are more likely to commit
violent crime.
• Unemployed people are more likely to suffer from mental illness.

If you are like most people, you will leap to the following conclusions:

• Parental behaviour causes higher self-esteem.
• Violent television causes violent behaviour.
• Unemployment causes mental illness.

But the evidence in the first set of statements does not justify the conclusions in the
second set of statements.

Whilst these statements are not necessarily untrue, we do not have the evidence to be able
to say that they are true. And, as scientists, we do not want to say things unless we know
them to be true (or at least are pretty sure). Whenever you read that there is a relationship
between two variables (let us call them A and B), with the implication that one of them
causes the other (i.e. that A causes B) you should always think of the alternatives.

Someone is claiming:

• A causes B
Is it possible that the direction is reversed:

• B causes A

Or even that something else (we will call it C) is causing both of them?

• C causes A and B

Before you read ahead, think about the three examples above, and try to decide whether it
is possible that the direction of causality is reversed, or that some third factor (C) is
causing both effects.
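The third-variable possibility is easy to demonstrate by simulation. In the sketch below (the model and numbers are invented), a hidden factor C drives both A and B, and A and B end up correlated even though neither causes the other:

```python
import random

random.seed(1)

# Simulate a hidden third factor C that causes both A and B;
# A and B never influence each other directly.
n = 5000
c = [random.gauss(0, 1) for _ in range(n)]
a = [ci + random.gauss(0, 1) for ci in c]      # A = C + noise
b = [ci + random.gauss(0, 1) for ci in c]      # B = C + noise

def pearson_r(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A and B correlate substantially (about .5 by construction) even though
# neither causes the other - C alone produces the relationship.
print(round(pearson_r(a, b), 2))
```

A researcher who saw only A and B could not tell this situation apart from "A causes B" or "B causes A" without further information.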

First, we said that children who are brought up by loving tender parents tend to have
higher self-esteem. We assumed that parental behaviour caused self-esteem to change.
First, let's consider the alternative that the child’s self-esteem causes parental behaviour.
Could a child’s self esteem alter parental behaviour? This is possible – parents behave
differently towards different sorts of children. A parent is likely to behave differently
towards a calm, happy well-mannered child than they would behave towards a child who
misbehaved. Could differences between Bart and Lisa Simpson be because of differential
treatment from parents? If you were the parents of those two children, which child would
you be more inclined to try to involve in extra activities? Would you buy a saxophone for
Lisa, or Bart?

Now we will consider the third option: that something else - the third factor, C - is
causing parental behaviour and the child’s behaviour. Here there are many choices that
we could make, one of them is genes. A parent who has a genetic inclination to behave in
a particular way will behave in that way with their child and may also pass those genes
onto the child. Thus there may be a relationship between the parent’s behaviour and the
child’s behaviour, which has nothing to do with the interaction between them.

Our next example was that people who behave violently are likely to watch violent
television. We easily jump to the conclusion that watching violent television causes
violent behaviour, but (as always) there are other explanations for the relationship. First,
could it be that engaging in violent behaviour causes people to want to watch violent
programs on the television? If you are with a peer group who engage in violent
behaviour, you might find that this is enjoyable and exciting. If you do, you may then
like to watch such violence on the television, to be reminded of the enjoyment that you
had.

Of course, it might be a third variable, which is causing both of these behaviours. It might
be possible that you feel that you have not received the opportunities that other people
have had, and that you are frustrated by the life that you are forced to lead. Because of
this frustration, you might engage in violent behaviour. You might also like to watch
escapist violence on the television, to help you to forget about your loss.
Finally, we said that unemployment causes mental illness. This sounds very feasible,
because being unemployed combines long-term stress and worry with long periods of
boredom and inactivity. However, is there an alternative explanation? First, the direction of
causality may be reversed. It is possible that mental illness may lead to unemployment –
a person whose thinking is disorganised and chaotic - a symptom of schizophrenia - may
have trouble in an interview situation. A person who is suffering from depression may
have difficulty motivating themselves to get to work on time, and may therefore be late
for an interview, and unable to convince a potential employer of their enthusiasm and
dynamism, and therefore be unlikely to get a job.

Finally, there could be a third factor. If a person has many difficulties in life, ill-health in
their family, unpleasant housing and few friends, then that person may be prone to mental
illness, and may also find it impossible to maintain their responsibilities at home and at
work, and hence lose their job.

Case Studies

Crucial concept: A case study is an in-depth study of either one individual or a small
number of individuals, or of an organisation.

Advantages of Case Studies

A case study is useful when the subject of the study is rare or unique, and it is also useful
when we want to study something in great depth.

A problem with research methods other than case studies is that they tend to lump people
together into one block, ignoring the fact that every individual is unique and has a
different story to tell. If we take a group of people who are depressed, each one will have
a very different history and background. Different things will have affected them in
different ways, and each one will be unique. If we put them into one group which we then
investigate using a standard research technique, we will lose some information. By
studying one person in depth we might be able to find out much more about the origins of
depression than we could by studying a larger group of people.

Sometimes a case study is used because a case is rare or unique. One such example is that
of serial killers. There are very few true serial killers; therefore, when a serial killer is
caught who is willing to talk to a psychologist or psychiatrist about their motivation and
reasons for what they have done, the case may provide valuable information for the future.

Disadvantages of Case Studies

The main disadvantage of the case study is that because it is only carried out on one
individual, or a small number of individuals, we have no idea how far we can generalise
the results. The case that we choose to study in depth may be very similar to all other
cases, or may be completely different. In the case of a serial killer, conclusions that apply
to just one person may lead detectives off the correct trail and up a blind alley.

Griffiths (1997) wrote a case study report of a woman who was, he claimed, addicted to
exercise. He said that there had been a great deal of speculation that exercise addiction
might exist, but little firm evidence. Griffiths presented a detailed description of the
woman, and the evidence that she satisfied all of the criteria to be diagnosed as suffering
from exercise addiction. This case study thus presents evidence that exercise
addiction is a reality.

Barlow and Harrison carried out a case study of an organisation that provided support and
counselling to young people who suffered from arthritis. Here the aim of the study was
not to test any theory, but rather to explore what people thought about different aspects of
the support and counselling which was provided. The lessons learned from this study
could be used to help people to develop support networks for arthritis, or for other
conditions.

Surveys
Crucial Concept: A survey aims to find out the current level of the value of something in
a population.

Surveys are comparatively rare in psychology. Although many studies may look like
surveys, they are actually correlational designs, or quasi-experiments. A survey aims to
take a snapshot of a population at any one time, and report on the findings. Theories and
hypotheses in psychological research rarely make statements about the current state of
any situation - they usually make statements about differences or relationships between
variables. A survey, on the other hand, looks at the current state of different aspects of a
population.

Surveys are sometimes used in psychology to assess the current state of a situation to
determine whether there is a problem that needs to be addressed. Fisher (1999) carried
out a survey of 10,000 12-15 year olds to assess the current level of gambling behaviour
amongst them. She found that 13% had spent money on the National Lottery in the
past week, and 19% had spent their own money on fruit machines in the past week.

Surveys are not necessarily of people - surveys of many different things can be carried
out. Taylor and Stern (1997) surveyed television advertisements to examine the
proportion of people in those adverts who were Asian-Americans and the roles these
Asian-Americans played. They found that Asian-Americans were featured fairly
frequently, but often in background or work roles. They very rarely appeared in leisure
roles, or at home.

Psychological literature is sometimes the subject of surveys.

Literature surveys come in two types. The first, and most common, is a review of the
literature - the literature being a large number of studies which have all examined the
same hypothesis. Bond and Smith (1996) carried out a literature survey to look at 133
different studies that had examined conformity to large groups using Asch’s original line
judgement task. They found that the results of such studies varied over time and between
countries. The second type of survey is a survey of the literature to see what sorts of
techniques people are using in published studies. Tennen, Hall and Affleck were
interested in seeing how different researchers had been measuring depression. They
looked at articles that had been published in the Journal of Personality and Social
Psychology. From their survey, their conclusion was that the researchers whose work
they had surveyed had not used adequate methods of measuring depression. Of course,
some of those researchers disagreed with their conclusions (see Kendall and Flannery-
Schroeder, 1995, and Weary, Edwards, and Jacobson, 1995).

Section Summary


This section has considered and evaluated four non-experimental designs. A quasi-
experimental design is used where we have naturally occurring groups or events, which
we cannot manipulate. A correlational design is used to compare the relationship between
two measures. We considered the problem of causality in relation to correlations in some
detail – looking at examples of occasions when it is tempting to make inappropriate
causal statements, based on correlational data. Case studies are an in-depth investigation
of one individual, used either when there are only a small number of cases available, or
when you want to try to gain a more detailed, holistic picture of an individual. Surveys
are rare in psychology, and are used to take a ‘snapshot’ of a population at one time.

Retrieved from "http://www.researchmethodsinpsychology.com/wiki/index.php?title=Section_3.2:_Non-Experimental_Designs"

This page was last modified 13:27, 18 October 2009.