
Experimental research is a collection of research designs that use manipulation and controlled testing to understand causal processes.

Generally, one or more variables are manipulated to determine their effect on a dependent variable. The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables and controls and measures any change in other variables. Experimental research is often used where:

1. There is time priority in a causal relationship (cause precedes effect).
2. There is consistency in a causal relationship (a cause will always lead to the same effect).
3. The magnitude of the correlation is great.

(Reference: en.wikipedia.org)

The term "experimental research" has a range of definitions. In the strict sense, experimental research is what we call a true experiment: an experiment in which the researcher manipulates one variable and controls or randomizes the rest of the variables. It has a control group, the subjects are randomly assigned between the groups, and the researcher tests only one effect at a time. It is also important to know what variable(s) you want to test and measure. A very wide definition of experimental research, or a quasi-experiment, is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall between the strict and the wide definition. A rule of thumb is that physical sciences, such as physics, chemistry, and geology, tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.

AIMS OF EXPERIMENTAL RESEARCH

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation. Experimental research is important to society: it helps us to improve our everyday lives.

IDENTIFYING THE RESEARCH PROBLEM

After deciding the topic of interest, the researcher tries to define the research problem. This helps the researcher to focus on a narrower research area to be able to study it appropriately. The research problem is often operationalized, to define how to measure it. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study. Defining the research problem helps you to formulate a research hypothesis, which is tested against the null hypothesis. Conceptual variables are often expressed in general, theoretical, qualitative, or subjective terms, and they are important in the hypothesis-building process. An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his or her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

CONSTRUCTING THE EXPERIMENT

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world in the best possible way.

SAMPLING GROUPS TO STUDY

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group, whilst others are tested under the experimental conditions. Deciding the sample groups can be done using many different sampling techniques. Population sampling may be chosen by a number of methods, such as randomization, "quasi-randomization" and pairing. Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize the chance of random errors. Common sampling techniques are listed below; a short sketch after the list illustrates two of them.

Probability sampling: simple random sampling, stratified sampling, systematic sampling, cluster sampling, disproportional sampling.

Non-probability sampling: convenience sampling, sequential sampling, judgmental sampling, snowball sampling, quota sampling.
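As an illustration, here is a minimal Python sketch of two of these techniques, simple random sampling and stratified sampling; the population list and stratum labels are hypothetical.

```python
import random
from collections import defaultdict

def simple_random_sample(population, n):
    """Draw n subjects so that every subject has an equal chance of selection."""
    return random.sample(population, n)

def stratified_sample(population, strata, n_per_stratum):
    """Draw an equal-sized random sample from each stratum (e.g. each age group)."""
    groups = defaultdict(list)
    for subject, stratum in zip(population, strata):
        groups[stratum].append(subject)
    sample = []
    for members in groups.values():
        sample.extend(random.sample(members, n_per_stratum))
    return sample

# Hypothetical population of 100 subjects split into two strata.
subjects = [f"subject_{i}" for i in range(100)]
strata = ["young" if i < 60 else "old" for i in range(100)]

print(simple_random_sample(subjects, 10))
print(stratified_sample(subjects, strata, 5))
```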

CREATING THE DESIGN

The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems, and what you would like to test. The design of the experiment is critical for the validity of the results.

TYPICAL DESIGNS AND FEATURES IN EXPERIMENTAL DESIGN

Pretest-Posttest Design - Checks whether the groups differ before the manipulation starts, and measures the effect of the manipulation. Pretests sometimes influence the effect.

Control Group - Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group not receiving the same manipulation as the experimental group.

Experiments frequently have two conditions, but rarely more than three conditions at the same time.

Randomized Controlled Trials - Randomized sampling, comparison between an experimental group and a control group, and strict control/randomization of all other variables.

Solomon Four-Group Design - Uses two control groups and two experimental groups. Half the groups have a pretest and half do not. This tests both the effect itself and the effect of the pretest.

Between Subjects Design - Grouping participants into different conditions (a random-assignment sketch follows this list).

Within Subject Design - Participants take part in the different conditions. See also: Repeated Measures Design.

Counterbalanced Measures Design - Tests the effect of the order of treatments when no control group is available or ethical.

Matched Subjects Design - Matching participants to create similar experimental and control groups.

Double-Blind Experiment - Neither the researcher nor the participants know which is the control group. The results can be affected if the researcher or participants know this.

Bayesian Probability - Using Bayesian probability to "interact" with participants is a more advanced experimental design. It can be used for settings where there are many variables which are hard to isolate. The researcher starts with a set of initial beliefs and adjusts them according to how participants have responded.
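To make the between-subjects idea concrete, here is a minimal Python sketch of randomly assigning participants to conditions; the participant names and condition labels are hypothetical.

```python
import random

def assign_between_subjects(participants, conditions, seed=None):
    """Randomly assign each participant to exactly one condition,
    keeping group sizes as equal as possible."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {person: conditions[i % len(conditions)]
            for i, person in enumerate(shuffled)}

# Hypothetical participants and conditions (control vs. treatment).
people = [f"participant_{i}" for i in range(8)]
groups = assign_between_subjects(people, ["control", "treatment"], seed=42)
for person, condition in groups.items():
    print(person, "->", condition)
```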

PILOT STUDY

It may be wise to first conduct a pilot study or two before you do the real experiment. This ensures that the experiment measures what it should and that everything is set up correctly. Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment. If the experiment involves humans, a common strategy is to first run a pilot with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s). Those two pilots are likely to give the researcher good information about any problems in the experiment.

CONDUCTING THE EXPERIMENT

An experiment is typically carried out by manipulating a variable, called the independent variable, affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s), is measured. Identifying and controlling non-experimental factors which the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables, if possible, or randomizing variables to minimize effects that can be traced back to third variables. Researchers want to measure only the effect of the independent variable(s) when conducting an experiment, allowing them to conclude that this was the reason for the effect.

ANALYSIS AND CONCLUSIONS

In quantitative research, the amount of data measured can be enormous. Data not yet prepared for analysis is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect over many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect. A sketch of this workflow appears after this section.

The aim of an analysis is to draw a conclusion, together with other observations. The researcher might generalize the results to a wider phenomenon if there is no indication of confounding variables "polluting" the results. If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Variables correlating is not proof that there is causation. Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.
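As an illustration of this workflow, here is a minimal Python sketch, assuming a hypothetical raw-data list of (subject, group, score) trials; it collapses the trials into one average per subject ("output data") and then runs an independent-samples t-test with scipy.

```python
from collections import defaultdict
from statistics import mean
from scipy import stats

# Hypothetical raw data: one row per trial, several trials per subject.
raw_data = [
    ("s1", "control", 4.1), ("s1", "control", 3.9), ("s1", "control", 4.3),
    ("s2", "control", 5.0), ("s2", "control", 4.8),
    ("s3", "treatment", 6.2), ("s3", "treatment", 5.9), ("s3", "treatment", 6.1),
    ("s4", "treatment", 5.5), ("s4", "treatment", 5.8),
]

# Output data: one line (average score) per subject.
trials = defaultdict(list)
group_of = {}
for subject, group, score in raw_data:
    trials[subject].append(score)
    group_of[subject] = group
output = {subject: mean(scores) for subject, scores in trials.items()}

control = [output[s] for s in output if group_of[s] == "control"]
treatment = [output[s] for s in output if group_of[s] == "treatment"]

# Significance test: is the treatment mean reliably different from control?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```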

EXAMPLES OF EXPERIMENTS

This website contains many examples of experiments. Some are not true experiments, but involve some kind of manipulation to investigate a phenomenon. Others fulfil most or all criteria of true experiments. Here are some examples of scientific experiments:

SOCIAL PSYCHOLOGY

Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
Asch Experiment - Will people conform to group behavior?
Stanford Prison Experiment - How do people react to roles? Will you behave differently?
Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior

GENETICS

Law Of Segregation - The Mendel Pea Plant Experiment
Transforming Principle - Griffith's Experiment about Genetics

PHYSICS

Ben Franklin Kite Experiment - Struck by Lightning
J. J. Thomson Cathode Ray Experiment

MODULE R13 EXPERIMENTAL RESEARCH AND DESIGN

Experimental Research - An attempt by the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur.

Experimental Design - A blueprint of the procedure that enables the researcher to test a hypothesis by reaching valid conclusions about relationships between independent and dependent variables. It refers to the conceptual framework within which the experiment is conducted.

Steps involved in conducting an experimental study:

Identify and define the problem.
Formulate hypotheses and deduce their consequences.
Construct an experimental design that represents all the elements, conditions, and relations of the consequences:
  1. Select a sample of subjects.
  2. Group or pair subjects.
  3. Identify and control non-experimental factors.
  4. Select or construct, and validate, instruments to measure outcomes.
  5. Conduct a pilot study.
  6. Determine the place, time, and duration of the experiment.
Conduct the experiment.
Compile raw data and reduce to usable form.
Apply an appropriate test of significance.

Essentials of Experimental Research

Manipulation of an independent variable.
An attempt is made to hold all variables except the dependent variable constant (control).
The effect of the manipulation of the independent variable on the dependent variable is observed (observation).

Experimental control attempts to predict events that will occur in the experimental setting by neutralizing the effects of other factors.

Methods of Experimental Control

Physical Control - Gives all subjects equal exposure to the independent variable and controls non-experimental variables that affect the dependent variable.
Selective Control - Manipulates indirectly by selecting in or out variables that cannot be controlled.
Statistical Control - Variables not conducive to physical or selective manipulation may be controlled by statistical techniques (example: analysis of covariance).

Validity of Experimental Design

Internal validity asks: did the experimental treatment make the difference in this specific instance, rather than other extraneous variables? External validity asks: to what populations, settings, treatment variables, and measurement variables can this observed effect be generalized?

Factors Jeopardizing Internal Validity

History - Events occurring between the first and second measurements, in addition to the experimental variable, which might affect the measurement. Example: A researcher collects gross sales data before and after a 5-day, 50%-off sale. During the sale a hurricane occurs, and the results of the study may be affected by the hurricane, not the sale.

Maturation - The process of maturing which takes place in the individual during the experiment and which is not a result of specific events but of simply growing older, growing more tired, or similar changes. Example: Subjects become tired after completing a training session, and their responses on the posttest are affected.

Pre-testing - The effect created on the second measurement by having a measurement before the experiment. Example: Subjects take a pretest and think about some of the items. On the posttest they change to answers they feel are more acceptable. The experimental group learns from the pretest.

Measuring Instruments - Changes in instruments, calibration of instruments, observers, or scorers may cause changes in the measurements. Example: Interviewers are very careful with their first two or three interviews, but by the 4th, 5th, and 6th they become fatigued, are less careful, and make errors.

Statistical Regression - Groups are chosen because of extreme scores or measurements; those scores or measurements tend to move toward the mean with repeated measurements, even without an experimental variable. Example: Managers who are performing poorly are selected for training. Their average posttest scores will be higher than their pretest scores because of statistical regression, even if no training were given. (A simulation of this effect follows this list.)

Differential Selection - Different individuals or groups have different previous knowledge or ability, which would affect the final measurement if not taken into account. Example: A group of subjects who have viewed a TV program is compared with a group which has not. There is no way of knowing that the groups would have been equivalent, since they were not randomly assigned to view the TV program.

Experimental Mortality - The loss of subjects from comparison groups could greatly affect the comparisons because of the unique characteristics of those subjects. Groups to be compared need to be the same after the experiment as before. Example: Over a 6-month experiment aimed at changing accounting practices, 12 accountants drop out of the experimental group and none drop out of the control group. Not only is there differential loss in the two groups, but the 12 dropouts may be very different from those who remained in the experimental group.

Interaction of Factors, such as Selection-Maturation, etc. - Combinations of these factors may interact, especially in multiple-group comparisons, to produce erroneous measurements.
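As an illustration of statistical regression, here is a minimal Python simulation under a hypothetical noisy-score model: subjects selected for extremely low first-test scores score closer to the population mean on a second test, with no treatment given at all.

```python
import random

random.seed(1)

# Hypothetical model: each subject has a stable true ability, and each
# test score is that ability plus random measurement noise.
N = 10_000
true_ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in true_ability]
test2 = [a + random.gauss(0, 10) for a in true_ability]

# Select the "poor performers": the bottom 10% on the first test.
cutoff = sorted(test1)[N // 10]
selected = [i for i in range(N) if test1[i] <= cutoff]

mean1 = sum(test1[i] for i in selected) / len(selected)
mean2 = sum(test2[i] for i in selected) / len(selected)
print(f"Selected group, test 1 mean: {mean1:.1f}")  # far below 100
print(f"Selected group, test 2 mean: {mean2:.1f}")  # closer to 100, despite no training
```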

Factors Jeopardizing External Validity or Generalizability

Pre-Testing - Individuals who were pretested might be less or more sensitive to the experimental variable, or might have "learned" from the pretest, making them unrepresentative of the population that had not been pretested. Example: Prior to viewing a film on the environmental effects of chemicals, a group of subjects is given a 60-item anti-chemical test. Taking the pretest may increase the effect of the film. The film may not be effective for a non-pretested group.

Differential Selection - The selection of the subjects determines how the findings can be generalized. Subjects selected from a small group, or from one with particular characteristics, limit generalizability. Randomly chosen subjects from the entire population could be generalized to the entire population. Example: A researcher requesting permission to conduct an experiment is turned down by 11 corporations, but the 12th corporation grants permission. The 12th corporation is obviously different from the others because it accepted; thus subjects in the 12th corporation may be more accepting of, or sensitive to, the treatment.

Experimental Procedures - The experimental procedures and arrangements have a certain amount of effect on the subjects in the experimental settings. Generalization to persons not in the experimental setting may be precluded. Example: Department heads realize they are being studied, try to guess what the experimenter wants, and respond accordingly rather than responding to the treatment.

Multiple Treatment Interference - If the subjects are exposed to more than one treatment, then the findings can only be generalized to individuals exposed to the same treatments in the same order of presentation. Example: A group of CPAs is given training in working with managers, followed by training in working with comptrollers. Since training effects cannot be deleted, the first training will affect the second.

Tools of Experimental Design Used to Control Factors Jeopardizing Validity

Pre-Test - The pre-test, or measurement before the experiment begins, can aid control of differential selection by determining the presence or knowledge of the experimental variable before the experiment begins. It can aid control of experimental mortality because subjects can be removed from the entire comparison by removing their pre-tests. However, pre-tests cause problems through their effect on the second measurement and by causing generalizability problems to a population not pre-tested and one with no experimental arrangements.

Control Group - The use of a matched or similar group which is not exposed to the experimental variable can help reduce the effect of history, maturation, instrumentation, and interaction of factors. The control group is exposed to all conditions of the experiment except the experimental variable.

Randomization - Use of random selection procedures for subjects can aid in the control of statistical regression, differential selection, and the interaction of factors. It greatly increases generalizability by helping make the groups representative of the populations.

Additional Groups - The effects of pre-tests and experimental procedures can be partially controlled through the use of groups which were not pre-tested or exposed to experimental arrangements. They would have to be used in conjunction with other pre-tested groups, or other factors jeopardizing validity would be present.

The method by which treatments are applied to subjects, using these tools to control factors jeopardizing validity, is the essence of experimental design.

Tools of Control

Source of Invalidity       Pre-Test/Post-Test   Control Group   Randomization   Additional Groups
Internal Sources
  History                                             X
  Maturation                                          X
  Pre-Testing                                                                         X
  Measuring Instrument                                X
  Statistical Regression                                               X
  Differential Selection           X                                   X
  Experimental Mortality           X
  Interaction of Factors                              X                X
External Sources
  Pre-Testing                                                                         X
  Differential Selection                                               X
  Experimental Procedures                                                             X
  Multiple Treatment

Experimental Designs

Pre-Experimental Design - loose in structure, could be biased

Aim of the research: To attempt to explain a consequent by an antecedent.
Name of the design: One-shot experimental case study.
Notation paradigm:
    X O
Comments: An approach that prematurely links antecedents and consequences. The least reliable of all experimental approaches.

Aim of the research: To evaluate the influence of a variable.
Name of the design: One-group pretest-posttest.
Notation paradigm:
    O X O
Comments: An approach that provides a measure of change but can provide no conclusive results.

Aim of the research: To determine the influence of a variable on one group and not on another.
Name of the design: Static group comparison.
Notation paradigm:
    Group 1: X O
    Group 2: -  O
Comments: Weakness lies in no examination of pre-experimental equivalence of groups. The conclusion is reached by comparing the performance of each group to determine the effect of a variable on one of them.

True Experimental Design - greater control and refinement, greater control of validity

Aim of the research: To study the effect of an influence on a carefully controlled sample.
Name of the design: Pretest-posttest control group.
Notation paradigm:
    R   O X O
        O -  O
Comments: This design has been called "the old workhorse of traditional experimentation." If effectively carried out, this design controls for eight threats to internal validity. Data are analyzed by analysis of covariance on posttest scores, with the pretest as the covariate.

Aim of the research: To minimize the effect of pretesting.
Name of the design: Solomon four-group design.
Notation paradigm:
    R   O X O
        O -  O
        -  X O
        -  -  O
Comments: This is an extension of the pretest-posttest control group design and probably the most powerful experimental approach. Data are analyzed by analysis of variance on posttest scores.

Aim of the research: To evaluate a situation that cannot be pretested.
Name of the design: Posttest-only control group.
Notation paradigm:
    R   X O
        -  O
Comments: An adaptation of the last two groups in the Solomon four-group design. Randomness is critical. Probably the simplest and best test for significance in this design is the t-test.
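As an illustration of the pretest-posttest control group analysis described above, here is a minimal Python sketch using the statsmodels formula API on a hypothetical data set: the treatment effect is estimated on posttest scores with the pretest as the covariate.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical output data: one line per subject.
df = pd.DataFrame({
    "group":    ["control"] * 5 + ["treatment"] * 5,
    "pretest":  [52, 48, 55, 50, 47, 51, 49, 53, 46, 54],
    "posttest": [54, 50, 56, 52, 49, 60, 58, 63, 55, 64],
})

# Analysis of covariance: posttest scores, with the pretest as covariate.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.summary())  # the C(group) coefficient estimates the treatment effect
```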

Quasi-Experimental Design - not randomly selected

Aim of the research: To investigate a situation in which random selection and assignment are not possible.
Name of the design: Nonrandomized control group pretest-posttest.
Notation paradigm:
    O X O
    O -  O
Comments: One of the strongest and most widely used quasi-experimental designs. Differs from experimental designs because the test and control groups are not equivalent. Comparing pretest results will indicate the degree of equivalency between the experimental and control groups.

Aim of the research: To determine the influence of a variable introduced only after a series of initial observations, and only where one group is available.
Name of the design: Time series experiment.
Notation paradigm:
    O O X O O
Comments: If substantial change follows introduction of the variable, then the variable can be suspected as the cause of the change. To increase external validity, repeat the experiment in different places under different conditions.

Aim of the research: To bolster the validity of the above design with the addition of a control group.
Name of the design: Control group time series.
Notation paradigm:
    O O X O O
    O O -  O O
Comments: A variant of the above design, accompanying it with a parallel set of observations without the introduction of the experimental variable.

Aim of the research: To control history in time designs with a variant of the above design.
Name of the design: Equivalent time-samples.
Notation paradigm:
    [X1 O1] [X0 O2] [X1 O3]
Comments: An on-again, off-again design in which the experimental variable is sometimes present, sometimes absent.
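To make the time series logic concrete, here is a minimal Python sketch with hypothetical weekly observations: it compares the mean level before and after the point where the variable is introduced, to flag a substantial change.

```python
from statistics import mean

# Hypothetical weekly measurements; the variable X is introduced after week 5.
observations = [20, 21, 19, 20, 22,   # O O O O O  (baseline)
                30, 31, 29, 32, 30]   # O O O O O  (after X)
intervention_at = 5

before = observations[:intervention_at]
after = observations[intervention_at:]
shift = mean(after) - mean(before)
print(f"Mean before: {mean(before):.1f}, after: {mean(after):.1f}, shift: {shift:.1f}")
# A large shift makes X a suspect cause; repeating the series elsewhere,
# or adding a control series, strengthens the inference.
```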

Correlational and Ex Post Facto Design

Aim of the research: To seek cause-effect relationships between two sets of data.
Name of the design: Causal-comparative correlational studies.
Notation paradigm:
    Oa  Ob
Comments: A very deceptive procedure that requires much insight for its use. Causality cannot be inferred merely because a positive and close correlation ratio exists.

Aim of the research: To search backward from consequent data for antecedent causes.
Name of the design: Ex post facto studies.
Comments: This approach is experimentation in reverse. Seldom is proof through data substantiation possible. Logic and inference are the principal tools of this design.

Leedy, P. D. (1997). Practical research: Planning and design (6th ed.). Upper Saddle River, NJ: Prentice-Hall, pp. 232-233.

SELF ASSESSMENT

1. Define experimental research. Define experimental design.
2. List six steps involved in conducting an experimental study.
3. Describe the basis of an experiment.
4. Name three characteristics of experimental research.
5. State the purpose of experimental control.
6. State three broad methods of experimental control.
7. Name two types of validity of experimental design.
8. Define eight factors jeopardizing the internal validity of a research design.
9. Define four factors jeopardizing external validity.
10. Describe the tools of experimental design used to control the factors jeopardizing the validity of a research design.
11. Define the essence of experimental design.
12. Name and describe the four types of experimental designs.
