
Research Methodology

GENERAL PROCEDURE IN RESEARCH

Dr. Azadeh Asgari

General Procedure In Research


1. PROBLEM IDENTIFICATION
2. HYPOTHESIS FORMULATION
3. DATA COLLECTION
4. DATA ANALYSIS
5. REPORTING THE RESULTS

Data Collection Process


REQUIRES:

1) Subjects to provide information
2) Instruments to collect data on different variables from the subjects

Measurement

The process of assigning a value (numeric or categorical) to systematically measured variables, following specific steps

The process of measuring whatever needs to be measured

Instrument
A tool used to measure whatever needs to be measured

e.g.:
To measure achievement = prepare an achievement test
To measure the professional needs of teachers = prepare an inventory of teacher needs

Some Instruments Used In Education

ACHIEVEMENT TEST
CREATIVITY TEST
PHYSICAL FITNESS TEST
PERSONALITY INVENTORY
ATTITUDE SCALE
INTEREST SCALE
INTERVIEW SCHEDULE

Scale

An instrument that allows a symbol or value to be assigned to an individual or to his/her behaviour. Assigning the value to the individual carries the connotation that the individual HAS the attribute measured by the scale.

Types of Scale

Likert scale or summated rating scale: consists of a set of items, all of which have the same value loading (see the sketch below)

Semantic differential
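As a simple illustration of summated (Likert) scoring, here is a minimal Python sketch; the item responses, the 5-point range, and the reverse-scored items are all hypothetical.

```python
# Hypothetical responses on a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 2, 4, 3, 5]      # one respondent, six items
reverse_items = {2, 4}              # indices of negatively worded items (assumed for illustration)

def summated_score(resp, reverse, points=5):
    """Sum the ratings, reverse-scoring the negatively worded items."""
    return sum((points + 1 - r) if i in reverse else r for i, r in enumerate(resp))

print(summated_score(responses, reverse_items))   # the respondent's summated rating
```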

Instruments
The instrument should be VALID and RELIABLE.

Validity of Instruments
Validity refers to the extent to which the instrument precisely measures what it is supposed to measure.

Face Validity
Superficially, does the instrument look satisfactory?

Content Validity

Is the content complete and suitable?
Are the format and content in line with the respondents' ability level?
How do you ascertain validity? Refer to a panel of experts and anyone who is able to evaluate the suitability of the information used.

Steps to Ensure Validity

A panel of judges confirms the suitability of the format, instructions, font size, and the reading and reasoning level of the respondents.

A panel of judges confirms whether the included items are able to answer the research objectives.

Construct Validity

The extent to which the concept or theory was taken into account in developing the construct.
The extent to which the individual's overall performance in responding to the instrument mirrors the trait / quality of the construct being measured.

Ensuring Construct Validity

Using a panel of judges

Factor analysis
The computer calculates an eigenvalue for each factor within the instrument, and from these values the researcher may decide to reject specific items from the instrument.
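A minimal sketch of the eigenvalue step, assuming hypothetical item data; a full factor analysis would normally be run in dedicated statistical software.

```python
import numpy as np

# Hypothetical item responses: rows = respondents, columns = instrument items
X = np.random.default_rng(0).integers(1, 6, size=(200, 8)).astype(float)

# Correlation matrix of the items and its eigenvalues (largest first)
R = np.corrcoef(X, rowvar=False)
eigenvalues = np.linalg.eigvalsh(R)[::-1]

# A common rule of thumb: retain factors with eigenvalue > 1
retained = int((eigenvalues > 1).sum())
print(eigenvalues.round(2), "factors retained:", retained)
```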

Criterion Validity

Refers to the relationship / correlation between the scores obtained from the new instrument and the scores obtained from a standard instrument. A strong positive relationship connotes that the instrument has high criterion validity.
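A minimal sketch, using hypothetical scores, of computing the criterion validity coefficient as a Pearson correlation between the new instrument and a standard instrument.

```python
import numpy as np

# Hypothetical scores of the same respondents on the two instruments
new_instrument = np.array([12, 15, 9, 20, 17, 11, 14, 18])
standard_instrument = np.array([30, 36, 25, 45, 40, 28, 33, 43])

# Criterion validity: Pearson correlation between the two sets of scores
r = np.corrcoef(new_instrument, standard_instrument)[0, 1]
print(f"criterion validity coefficient r = {r:.2f}")   # a strong positive r suggests high criterion validity
```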

Reliability

Reliable measures are free from error and therefore yield consistent results.
A consistent instrument means that the measurement yields a similar value every time it is used.
e.g.: a measuring tape used to measure the height of a high-jump bar, a stopwatch used to measure the speed of problem solving, a test which measures mathematical performance.

Reliability Coefficient

Increases if the instrument is longer.

Increases if the variance of the instrument's scores is high.
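A brief numerical illustration of the length effect, using the Spearman-Brown prophecy formula (a generalization, not stated on this slide, of the split-half correction shown later); the reliabilities are hypothetical.

```python
# Spearman-Brown prophecy: reliability of a test lengthened n times (assumed extension of the slide)
def lengthened_reliability(r, n):
    return n * r / (1 + (n - 1) * r)

print(lengthened_reliability(0.60, 2))   # doubling a test with r = 0.60 gives 0.75
print(lengthened_reliability(0.60, 3))   # tripling it gives about 0.82
```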

Types of Reliability

Test-retest, which determines the coefficient of stability

The correlation is calculated between the results of two administrations of the same test, given one after another after a lapse of some time.

Alternative forms, which measures the coefficient of equivalence

The correlation is calculated between the results of two equivalent (parallel) tests, given one after another after a lapse of some time.
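A minimal sketch, with hypothetical scores, of computing the coefficient of stability; the same correlation computed on two parallel forms would give the coefficient of equivalence.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same test administered twice, some weeks apart
time1 = [55, 62, 48, 70, 66, 59, 73, 51]
time2 = [57, 60, 50, 72, 64, 61, 70, 53]

r, p = pearsonr(time1, time2)   # coefficient of stability (test-retest reliability)
print(f"coefficient of stability = {r:.2f}")
```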

Types of Reliability

Split-half, which measures the coefficient of internal consistency

The items in the instrument are divided into two equivalent halves and the correlation coefficient between the halves is found; this is then corrected using the Spearman-Brown formula to estimate the reliability of the whole test.

SPEARMAN-BROWN FORMULA

$r_{\text{whole}} = \dfrac{2\, r_{\text{half}}}{1 + r_{\text{half}}}$
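A minimal sketch of the split-half procedure and the Spearman-Brown correction, using hypothetical 0/1 item scores.

```python
import numpy as np

# Hypothetical item scores: rows = respondents, columns = items (0 = wrong, 1 = right)
X = np.array([
    [1, 0, 1, 1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
], dtype=float)

# Divide the items into two equivalent halves (odd-numbered vs. even-numbered items)
half1 = X[:, ::2].sum(axis=1)
half2 = X[:, 1::2].sum(axis=1)

# Correlation between the halves, then the Spearman-Brown correction
r_half = np.corrcoef(half1, half2)[0, 1]
r_whole = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_whole:.2f}")
```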

Types of Reliability
Method of rational equivalence, using the Kuder-Richardson Formula 20 or 21. Formula 21 is not very precise, but it is easier to calculate:

$r_{KR\text{-}21} = \dfrac{K s^{2} - M(K - M)}{s^{2}(K - 1)}$

WHERE:
K = number of items in the whole test
s² = variance of the test scores
M = mean of the test scores
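A minimal sketch of KR-21 computed from hypothetical total scores on a 20-item test; here s² is the variance of the total scores and M their mean, matching the formula above.

```python
import numpy as np

def kr21(scores, k):
    """Kuder-Richardson Formula 21 from total test scores and the number of items."""
    s2 = np.var(scores, ddof=1)   # variance of the total scores
    m = np.mean(scores)           # mean of the total scores
    return (k * s2 - m * (k - m)) / (s2 * (k - 1))

# Hypothetical total scores on a 20-item test
totals = [14, 11, 17, 9, 15, 12, 18, 10, 13, 16]
print(f"KR-21 reliability = {kr21(totals, k=20):.2f}")
```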
