
OUM Business School

BBRC4103
Research Methodology

Assoc Prof Dr Ahmad Shuib
Dr Thinagaran Perumal
Assoc Prof Dr Nagarajah Lee

Project Directors:    Prof Dato' Dr Mansor Fadzil
                      Prof Dr Wardah Mohamad
                      Open University Malaysia

Module Writers:       Assoc Prof Dr Ahmad Shuib
                      Dr Thinagaran Perumal
                      Universiti Putra Malaysia
                      Assoc Prof Dr Nagarajah Lee

Developed by:         Centre for Instructional Design and Technology
                      Open University Malaysia

First Edition, August 2011
Second Edition, December 2013 (rs)

Copyright © Open University Malaysia (OUM), December 2013, BBRC4103
All rights reserved. No part of this work may be reproduced in any form or by any means
without the written permission of the President, Open University Malaysia (OUM).


Table of Contents

Course Guide

Topic 1   Scientific Thinking in Research
          1.1  Scientific Method
          1.2  Traditional Sources of Knowledge
          1.3  Philosophy of Science Research
          1.4  Deductive and Inductive Models
          1.5  Links between Theory and Research
          Summary
          Key Terms

Topic 2   Research Process
          2.1  Research Process
          2.2  Process of Identifying the Problem
          2.3  Data for Research
          2.4  Analysing and Interpreting Data
          2.5  How to Choose a Topic
          Summary
          Key Terms

Topic 3   Review of Literature
          3.1  What is Literature Review?
          3.2  Importance of Literature Review
          3.3  Review of Literature Procedures
          3.4  Common Weaknesses
          3.5  Evaluating Journal Articles
          Summary
          Key Terms

Topic 4   Sampling Design
          4.1  Sampling Concept
          4.2  Justification for Sampling
          4.3  Criteria of a Good Sample
          4.4  Types of Sampling Designs
               4.4.1  Probability Sampling Design
               4.4.2  Types of Probability Sampling
               4.4.3  Non-probability Sampling Design
               4.4.4  Types of Non-probability Sampling
          4.5  Sample Size
               4.5.1  Qualitative Approach
               4.5.2  Statistical Approach
          Summary
          Key Terms

Topic 5   Measurement and Scales
          5.1  Conceptualisation
          5.2  Operationalisation
          5.3  Variables
          5.4  Measurement
               5.4.1  Level of Measurement
          5.5  Scaling Techniques
               5.5.1  Rating Scales
               5.5.2  Ranking Scale
          5.6  Measurement Quality
               5.6.1  Reliability, Validity and Practicality
          5.7  Sources of Measurement Errors
          Summary
          Key Terms

Topic 6   Survey Method and Secondary Data
          6.1  Survey Research
          6.2  Personal Interview
               6.2.1  Types of Personal Interviews
               6.2.2  Advantages of Personal Interviews
               6.2.3  Disadvantages of Personal Interviews
          6.3  Telephone Interview
               6.3.1  Types of Telephone Interviews
               6.3.2  Advantages and Disadvantages of Telephone Interviews
          6.4  Self-Administered Survey
               6.4.1  Types of Self-administered Surveys
          6.5  Types and Uses of Secondary Data
               6.5.1  Documentary Secondary Data
               6.5.2  Types of Documents
               6.5.3  Survey-based Secondary Data
               6.5.4  Multiple Source Secondary Data
               6.5.5  Triangulation
          6.6  Advantages and Disadvantages of Using Secondary Data
          6.7  Sources of Secondary Data
          Summary
          Key Terms

Topic 7   Experimental Research Designs
          7.1  Symbols Used in Experimental Research Designs
          7.2  Weak Designs
               7.2.1  One-shot Design
               7.2.2  One-group Pre-test and Post-test Design
               7.2.3  Non-equivalent Post-test Only Design
          7.3  True Experimental Designs
               7.3.1  After-only Research Design
               7.3.2  Before-after Research Design
          7.4  Quasi-experimental Design
               7.4.1  Non-equivalent Control-group Design
               7.4.2  Interrupted Time Series Design
          7.5  Ethics in Experimental Research
          Summary
          Key Terms

Topic 8   Qualitative Research Methods
          8.1  Definition of Qualitative Research
          8.2  Types of Qualitative Research Methods
               8.2.1  Action Research
               8.2.2  Case Study
               8.2.3  Ethnography
               8.2.4  Grounded Theory
               8.2.5  Content Analysis
          8.3  Qualitative Data Analysis
          8.4  Differences between Quantitative and Qualitative Approaches
          Summary
          Key Terms

Topic 9   Data Analysis
          9.1  Data Screening and Editing
               9.1.1  Data Editing
               9.1.2  Field Editing
               9.1.3  In-house Editing
               9.1.4  Missing Data
          9.2  Coding
          9.3  Data Entry
          9.4  Data Transformation
          9.5  Data Analysis
               9.5.1  Descriptive Statistics
          9.6  What is a Hypothesis?
               9.6.1  Null and Alternate Hypotheses
               9.6.2  Directional and Non-directional Hypotheses
               9.6.3  Sample Statistics versus Population
               9.6.4  Type I and Type II Errors
               9.6.5  Steps in Hypothesis Testing
          9.7  Inferential Statistics
               9.7.1  Testing for Significant Differences between Two Means Using the t-test (Independent Groups)
               9.7.2  Testing for Significant Differences between Two Means Using the t-test (Dependent Groups)
               9.7.3  Testing for Differences between Means Using One-way Analysis of Variance (ANOVA)
               9.7.4  Correlation Coefficient
          Summary
          Key Terms

Topic 10  Proposal Writing and Ethics in Research
          10.1  What is a Research Proposal?
          10.2  Contents of a Research Proposal
          10.3  Guidelines for Writing Research Proposal
          10.4  Common Weakness in Research Proposal
          10.5  Research Ethics
          10.6  Key Ethical Issues
          10.7  Ethical Issues during the Initial Stages of the Research Process
          Summary
          Key Terms

COURSE GUIDE


COURSE GUIDE DESCRIPTION


You must read this Course Guide carefully from the beginning to the end. It tells
you briefly what the course is about and how you can work your way through
the course material. It also suggests the amount of time you are likely to spend in
order to complete the course successfully. Please keep referring to the Course
Guide as you go through the course material as it will help you to clarify
important study components or points that you might miss or overlook.

INTRODUCTION
BBRC4103 Research Methodology is one of the courses offered by the Faculty of
Business and Management at Open University Malaysia (OUM). This course is
worth 3 credit hours and should be covered over 8 to 15 weeks.

COURSE AUDIENCE
This course is offered to undergraduate students who need to acquire
fundamental knowledge in research methodology.
As an open and distance learner, you should be able to learn independently and
optimise the learning modes and environment available to you. Before you begin
this course, please confirm the course material, the course requirements and how
the course is conducted.

STUDY SCHEDULE
It is a standard OUM practice that learners accumulate 40 study hours for every
credit hour. As such, for a three-credit hour course, you are expected to spend 120
study hours. Table 1 gives an estimation of how the 120 study hours could be
accumulated.


Table 1: Estimation of Time Accumulation of Study Hours

Study Activities                                                     Study Hours
Briefly go through the course content and participate in
initial discussions                                                            3
Study the module                                                              60
Attend 3 to 5 tutorial sessions                                               10
Online participation                                                          12
Revision                                                                      15
Assignment(s), Test(s) and Examination(s)                                     20
TOTAL STUDY HOURS                                                            120

COURSE OUTCOMES
By the end of this course, you should be able to:
1. Discuss the important concepts of scientific research in business;
2. Examine the processes involved in doing research;
3. Prepare a research proposal;
4. Implement a research project in business; and
5. Conduct a research project in business.

COURSE SYNOPSIS
This course is divided into 10 topics. The synopsis for each topic is as follows:
Topic 1 describes the concept of science, the scientific research process in problem
solving and the need to formulate a good hypothesis.
Topic 2 describes the processes in doing research starting from the identification
of a specific problem, the identification and definition of the concepts, and the
identification of the methodology of the research.


Topic 3 describes the importance of the literature review in research methodology
and the procedure for reviewing literature. This topic also explores some common
weaknesses encountered in the review of literature and how to critique a journal
article.
Topic 4 introduces the strategies that can be used to collect primary data. The
process of collecting primary data must be identified properly based on the
purpose and objectives of the research. Data to be used to answer the research
questions must come from the appropriate population in order to be useful. The
process of selecting the right individuals, objects or events for study is known as
sampling.
Topic 5 defines concepts and discusses methods of measuring the concepts to
help researchers to determine the methods of collecting and analysing data. Once
the concept has been defined, it is necessary to identify the methods to measure
the concept. Unless the variables are measured in some way, the researcher will
not be able to test the hypotheses and find answers to complex research issues.
Topic 6 examines the application of the different types of secondary data to help
in answering the research questions and to meet the objectives of the study. Different
types of secondary data will be explained, and the strengths and weaknesses of
using secondary data will be discussed. The topic will also discuss the different
sources of secondary data. Besides that, it also introduces the different survey
methods for collecting primary data. The communication techniques used in
collecting the primary data can be classified into personal interview, telephone
interview and self-administered survey.
Topic 7 presents the research design. Research design is used in seeking an
answer to the research questions. The topic also explains the differences between
a true experimental design and a quasi-experimental design.
Topic 8 is devoted to explaining the definition and types of qualitative
research methods. This topic also explicates the differences between quantitative
and qualitative research methods.
Topic 9 introduces the processes of analysis of the data, which include several
interrelated procedures that are performed to summarise and rearrange data. The
goals of most research are to provide information. The topic will discuss the
processes of rearranging, ordering or manipulating data to provide descriptive
information that answers questions posed in the problem definition. All forms of
analysis attempt to portray consistent patterns in data so the results may be
studied and interpreted in a brief and meaningful way.


Topic 10 focuses on writing the research report and the principles of research writing,
the content of a research proposal, guidelines for writing a research proposal,
common weaknesses in research proposals and ethics in research.

TEXT ARRANGEMENT GUIDE


Before you go through this module, it is important that you note the text
arrangement. Understanding the text arrangement will help you to organise your
study of this course in a more objective and effective way. Generally, the text
arrangement for each topic is as follows:
Learning Outcomes: This section refers to what you should achieve after you
have completely covered a topic. As you go through each topic, you should
frequently refer to these learning outcomes. By doing this, you can continuously
gauge your understanding of the topic.
Self-Check: This component of the module is inserted at strategic locations
throughout the module. It may be inserted after one sub-section or a few subsections. It usually comes in the form of a question. When you come across this
component, try to reflect on what you have already learnt thus far. By attempting
to answer the question, you should be able to gauge how well you have
understood the sub-section(s). Most of the time, the answers to the questions can
be found directly from the module itself.
Activity: Like Self-Check, the Activity component is also placed at various locations
or junctures throughout the module. This component may require you to solve
questions, explore short case studies, or conduct an observation or research. It may
even require you to evaluate a given scenario. When you come across an Activity,
you should try to reflect on what you have gathered from the module and apply it
to real situations. You should, at the same time, engage yourself in higher order
thinking where you might be required to analyse, synthesise and evaluate instead
of only having to recall and define.
Summary: You will find this component at the end of each topic. This component
helps you to recap the whole topic. By going through the summary, you should
be able to gauge your knowledge retention level. Should you find points in the
summary that you do not fully understand, it would be a good idea for you to
revisit the details in the module.


Key Terms: This component can be found at the end of each topic. You should go
through this component to remind yourself of important terms or jargon used
throughout the module. Should you find terms here that you are not able to
explain, you should look for the terms in the module.
References: The References section is where a list of relevant and useful
textbooks, journals, articles, electronic contents or sources can be found. The list
can appear in a few locations such as in the Course Guide (at the References
section), at the end of every topic or at the back of the module. You are
encouraged to read or refer to the suggested sources to obtain the additional
information needed and to enhance your overall understanding of the course.

PRIOR KNOWLEDGE
There is no prerequisite requirement for learners prior to taking this subject.

ASSESSMENT METHOD
Please refer to myINSPIRE.

REFERENCES
Black, T. R. (1999). Doing quantitative research in the social sciences. London: Sage Publications.
Boaden, R. C., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods (4th ed.). New York: Pearson.
Cooper, D. R., & Schindler, P. S. (2007). Business research methods (10th ed.). New York: McGraw Hill.
Uma Sekaran. (2003). Research methods for business: A skill building approach (4th ed.). New York: Wiley.


TAN SRI DR ABDULLAH SANUSI (TSDAS) DIGITAL LIBRARY
The TSDAS Digital Library has a wide range of print and online resources for the
use of its learners. This comprehensive digital library, which is accessible through
the OUM portal, provides access to more than 30 online databases comprising
e-journals, e-theses, e-books and more. Examples of databases available are
EBSCOhost, ProQuest, SpringerLink, Books24x7, InfoSci Books, Emerald
Management Plus and Ebrary Electronic Books. As an OUM learner, you are
encouraged to make full use of the resources available through this library.


Topic 1  Scientific Thinking in Research

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Discuss the different approaches to problem solving;
2. Compare and contrast the inductive and deductive processes in the scientific method; and
3. Assess the components needed for the development of scientific research.

INTRODUCTION
The purpose of science is to expand knowledge and discover the truth. By
building theory, researchers undertake research to achieve this purpose.
Prediction and understanding are the two purposes of theory and they usually
go hand in hand. To make a prediction, one must know and understand why
variables behave as they do and theories provide this explanation. A theory is a
coherent set of general propositions used as principles to explain the apparent
relationships of certain observed phenomena. The scientific method is a series of
stages used to develop and refine theories.
Scientific methods and scientific thinking are based on concepts. Concepts are
invented so as to enable us to think and communicate abstractions. Higher-level
concepts are used for specialised scientific explanatory purposes that are not
directly observable. Concepts and constructs are used at the theoretical levels
while variables are used at the empirical level. The scientific research process is
used to develop and test various propositions using inductive-deductive
reflective thinking. Scientific research uses an orderly process that combines
induction, deduction, observation and hypothesis testing into a set of reflective
thinking activities.

People analyse problems differently because they have selective perception and
conditioning of the environment affecting them; the kind of questions asked
would be different depending on how they see the world. Scientific inquiry is
one of the ways to analyse problems. Understanding the relationship between
science and research will help researchers in formulating the study.

1.1 SCIENTIFIC METHOD

Science is a definable subject. It tries to describe reality truthfully. It is an
institution or a system and a way of producing knowledge. Science is also a
product of the system.
To most people, science is classified as hard (physical/biological science) and
soft (human science). The subject matter of science determines the techniques and
instruments used in scientific studies. A scientific method is the method
researchers use to gain knowledge. Business research is scientific because it
studies business actions and interactions truthfully, such as research in the field
of information and communication technology, education etc.
The basic goal of science is to obtain, with confidence, valid generalisations and
to establish relationships between variables. By understanding the relationships,
scientists will be able to understand a phenomenon in terms of the patterns of
relationships, to make predictions and to determine causal relationships. Good
science uses the scientific method and can be characterised by the following:
(a) It is empirical, meaning that it is compared against reality;
(b) It is replicable or objective, meaning that the researcher's opinion is independent of the results; other researchers conducting the study would obtain the same results;
(c) It is analytical, meaning that it follows the scientific method in breaking down and describing empirical facts;
(d) It is theory driven, meaning that it relies on a previous body of knowledge;
(e) It is logical, meaning that conclusions are drawn from the results based on logic; and
(f) It is rigorous, meaning that every effort is made to reduce error.



It is noted here that the difference between hard science and soft science is
control over confounding variables. For example, in business, there are factors
which may be beyond the control of managers, so there has to be some trade-off
between the rigours of science and the pragmatics of business. There has to be
some give and take between the desires of the businesspeople and the desires of
the researchers.
Although this will lead to error, as long as the researcher informs the decision-maker
of the limitations, and the results are qualified based on the limitations,
the research should go on to produce the information. Good scientific research
also follows the principle of parsimony, that is, a simple solution is better than a
complex solution. Parsimonious research means applying the simplest approach
that will address the research questions satisfactorily.

SELF-CHECK 1.1
What are the characteristics of good science?

1.2 TRADITIONAL SOURCES OF KNOWLEDGE

There are many ways of gathering information and acquiring knowledge.
Knowledge gained from traditional sources is not scientific and may potentially
have errors. The following are the common sources of knowledge:
(a) Common Sense
    Information and knowledge can be gained by relying on what everyone knows and what just makes sense. Common sense is valuable in daily living but it can allow logical fallacies to slip into thinking. Common sense may contain contradictory ideas that many people may not notice because the ideas are used at different times. Common sense can originate from tradition; it is useful and may be correct but it may contain errors, misinformation and contradiction. It may be prejudiced because of beliefs and socio-cultural differences. One can avoid making wrong decisions by accepting the truth that a deficiency of knowledge in common sense exists. To reduce this deficiency, one has to generate the right kind of knowledge, and common sense knowledge needs to be examined systematically to find the actual cause. The actual cause can be found by setting up experiments for systematic testing or continually collecting data to examine the repeat occurrences of an event. Thus, scientific advances are relied on in scientific research, not common sense. The right kind of knowledge is generated through systematic research.
(b) Personal Experience
    When something happens, you feel it, you experience it and you accept it as true. Personal experience or "seeing is believing" is a forceful source of knowledge. But personal experience can lead one astray. What may appear true may actually be due to a slight error or distortion in judgment. People make mistakes or fall for illusions. They may believe what they see or experience but these may be full of errors. Personal experience is reinforced by four basic errors:
    (i)   Overgeneralisation: People have some evidence that they believe and then assume that it applies in many other situations too. There are many individuals, areas and situations that people know little or nothing about, so generalising from what little they know might seem reasonable.
    (ii)  Selective observation: People take special notice of some other people or situations and generalise from these observations. The focus becomes more intense if the objects fit their preconceived ideas; people become more sensitive to features that confirm their ideas.
    (iii) Premature closure: This often operates with and reinforces the first two errors. Premature closure occurs when people feel they have all the answers and do not need to listen, seek information or raise questions any longer.
    (iv)  Halo effect: This happens in many forms whereby people overgeneralise from what they interpret to be highly positive or prestigious.

1.3 PHILOSOPHY OF SCIENCE RESEARCH

Science is characterised by the two pillars of science: logic and observation. A
scientific understanding must make sense (logical) and correspond with what
we observe. Observation is used to confirm the world we see by making
measurements of what is seen. Both are essential to science and relate to the three
major aspects of science:


(a) Scientific theory: Deals with the logical aspects of science.
(b) Data collection: Deals with the observational aspects of science.
(c) Data analysis: Looks at patterns of observations and compares what is logical with what is actual.

Science is still an imprecise field because the current body of knowledge is imperfect. From time to time, new concepts are generated to falsify old concepts. New concepts arise as a result of new findings, new sets of data or new perspectives of research. As a result, no matter how well a concept seems to be proven at this moment in time, you have to be aware that new evidence will overturn older concepts and new concepts will emerge. Thus, in science, you must view knowledge as tentative and not absolute. Theories generated in science are tentative laws and do not forever govern the way the universe works.

A scientific law is not universal. It just tentatively reflects a natural occurrence under a certain circumstance. However, due to the invention of new measuring techniques, experimental instruments or observations, the scientific law will be modified or changed. As a result, scientific laws are tentative statements and subject to change.
Look at Figure 1.1. A theory is a general statement or proposition explaining what
causes what, why, and under what circumstances, for certain phenomena. A
theory is generated through analysis of facts in their relationship to one another
to explain phenomena. Facts are discovered through observation. These facts
are then analysed, and a model is created to explain the relationships observed in
a phenomenon. A tentative theory is then developed from the model.

Figure 1.1: A researcher investigating a phenomenon



From this tentative theory, predictions and hypotheses are derived for further
investigation or testing. The process of further investigation or testing will
continue until the theories and laws derived are refined. The refined laws or
theories remain tentative. If an anomaly is found, when a new observation does not fit
into the current body of knowledge or the theories or laws are proven wrong, a
modification has to be carried out. The process will continue again and again
as new knowledge is generated from new observations.

1.4 DEDUCTIVE AND INDUCTIVE MODELS

Research involves the use of theory. In the process of designing research, theory
may or may not be expressed explicitly, although in the presentation of the
findings and conclusion, relationships with theories will be made explicit.
(a) Inductive Model: Moves from the particular to the general, from a set of specific observations to the discovery of a pattern that represents some degree of order among all given events; the logical model in which general principles are developed from specific observations.
(b) Deductive Model: Moves from the general to the specific, from a pattern that might be logically or theoretically expected to observations that test whether the expected pattern actually occurs; the logical model in which specific expectations of hypotheses are developed on the basis of general principles.

Figure 1.2 illustrates the differences between the inductive and deductive models.

Figure 1.2: Differences between the inductive and deductive models



In the deductive model, existing theory is used to deduce a hypothesis. The
hypothesis is then empirically tested through data collected from the field. The
data is analysed and the findings are used to validate the hypothesis. If the
findings do not fit the hypothesis, then the hypothesis is rejected. If the findings
fit the hypothesis, then the hypothesis is accepted. Consequently, theory is
revised according to the findings.

In the inductive model, an observation is made in order to generate an initial
concept to provide clearer understanding of a phenomenon. The concept is then
used to generate more relevant research questions for further data collection.
Findings are used to validate the initial concept. The validated concept is used as
the basis for new theory development. Next, the theory is compared with an
existing theory. If the new and existing theories are the same, the existing theory
is said to be confirmed or strengthened. If there is any anomaly in the
comparison, the existing theory is modified accordingly.
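To make the deductive cycle concrete, here is a minimal Python sketch (not part of the original module) of the steps just described: a theory suggests a hypothesis, data are collected from the field, and a statistical test decides whether the findings fit the hypothesis. The scenario, the figures and the choice of an independent-groups t-test are illustrative assumptions only.

# Minimal sketch of the deductive model (illustrative assumptions only).
# Theory: "Training improves employee productivity."
# Deduced hypothesis: trained employees have a higher mean output than untrained ones.

from scipy import stats

# Step 1: data collected from the field (hypothetical figures)
trained_output = [52, 55, 49, 61, 58, 54, 57, 60]
untrained_output = [47, 50, 45, 52, 48, 46, 51, 49]

# Step 2: test the hypothesis empirically (independent-groups t-test)
t_stat, p_value = stats.ttest_ind(trained_output, untrained_output)

# Step 3: accept or reject the hypothesis at the 5% significance level
if p_value < 0.05:
    print(f"p = {p_value:.3f}: the findings fit the hypothesis; the theory is supported.")
else:
    print(f"p = {p_value:.3f}: the findings do not fit; the hypothesis is rejected and the theory must be revised.")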

ACTIVITY 1.1
By providing appropriate examples, discuss the deductive and
inductive methods.

1.5 LINKS BETWEEN THEORY AND RESEARCH

Theory comprises systematically interrelated concepts, definitions and
propositions that are used to explain and predict phenomena (facts). It is a
systematic explanation for the observation that relates to a particular aspect of
behaviour. All operations are carried out on the basis of theories since theories
are general statements about variables and the relationships among them. These
generalisations are used to make decisions and predict outcomes.
Theory serves many useful purposes in research:
(a) It narrows the range of facts needed to study; any problem can be studied in many different ways. A theory can suggest the ways that are likely to yield the greatest meaning;
(b) It suggests a system for the researcher to impose on data in order to classify them in a meaningful way;
(c) It summarises what is known about an object;
(d) It indicates uniformities that are not immediately observable; and
(e) It helps to predict future facts that could be found.

The following are the components of theory:


(a) Concepts
    A concept is a bundle of meanings or characteristics associated with certain events, objects, conditions and situations. Concepts may be developed through frequent, general and shared usage over time, or they may be acquired through experience. Some concepts are unique to a particular culture and not easily translated into another language.
    In research, the concepts used must be precise and comprehensible; hypotheses are designed using concepts, measurement concepts are used to collect data, and new concepts may be invented to express ideas. The success of research depends on the ability of researchers to conceptualise ideas and how well others understand the concepts used. Concepts represent progressive levels of abstraction, that is, the degree to which a concept does or does not have an objective referent. A shirt is an objective concept while personality is a concept with a high degree of abstraction; such highly abstract concepts are called constructs.

(b) Constructs
    A construct is an image or idea specifically invented for a given research and/or theory-building purpose. Constructs are developed by combining simpler concepts, especially if the idea or image we want to convey is not directly subject to observation. Intelligence quotient (IQ) is constructed mathematically from observations of the answers given to a large number of questions in an IQ test. No one can observe IQ directly, but it is a real characteristic of people.

(c) Definitions
    If the meaning of a concept is confused, the value of the research may be destroyed. If the concepts used give different meanings to different people, it indicates that the parties are not communicating on the same wavelength. A concept may be defined with a synonym. For research purposes, however, the definition must allow the concept to be measured, and thus a more rigorous definition is needed. An operational definition is a definition stated in terms of specific testing criteria or operations; the terms must have empirical referents (we must be able to count, measure or gather information about them in an objective manner). The definition must specify the characteristics to study and how to observe the characteristics. An effective operational definition ensures that two or more people will have the same interpretation of a phenomenon. The purpose of an operational definition is basically to provide unambiguous interpretation and measurement of concepts.
(d) Variables
    At the theoretical level, constructs and concepts are used to relate to propositions and theory; at this level, constructs cannot be observed. At the empirical level, propositions are converted into hypotheses and tested; at this level, the concepts are termed variables. The term "variable" is used as a synonym for the construct or the property being studied. Quantitative variables usually take numerals or values as the indicator of the degree or level. The following are some commonly used quantitative variables (a short illustrative sketch after this list shows them in use):
    (i)   Dichotomous variable: has two values, reflecting the presence or absence of a property.
    (ii)  Discrete variable: takes on values representing added categories, and only certain values are possible.
    (iii) Continuous variable: takes on values within a given range or, in some cases, an infinite set.
    On the other hand, qualitative variables do not have any numerical values and are mostly described in subjective terms.
(e) Propositions
    Propositions are statements about concepts which may be judged as true or false if they refer to observable phenomena.

(f) Hypothesis
    A hypothesis is a proposition that is formulated for empirical testing:
    (i)  Descriptive hypotheses are propositions that typically state the existence, size, form or distribution of some variables.
    (ii) Relational hypotheses are statements that describe a relationship between two variables with respect to some case; the relationship can be correlational or causal (explanatory).

    Role of the hypothesis in research
    A hypothesis serves several functions in research:
    •  It guides the direction of the study;
    •  It limits what is to be studied and what is not;
    •  It identifies which facts are relevant and which are not;
    •  It suggests the most appropriate form of research design; and
    •  It provides a framework for organising the conclusions.

    Criteria of a good hypothesis
    A good hypothesis meets two conditions:
    •  Adequate for its purpose: A descriptive hypothesis must clearly state the condition, size or distribution of some variables in terms of values meaningful to the research task; and
    •  Testable: A hypothesis is not testable if it requires the use of techniques that are not available.
(g) Model
    A model is a representation of a system, developed to study some aspects of the system or the system as a whole. It is different from a theory because a theory explains relationships in the system whereas a model is a representation of the relationships in the system.

(h) Framework
    A framework is an abstract representation of a phenomenon. It describes the variables studied and the relationships among the variables, and it can be represented graphically in a diagram. Thus, in the early stage of a research project, a theoretical framework is usually constructed based on initial studies or a literature search. The theoretical framework is used to explain the relationships that need to be investigated and tested in the research. A framework that has been successfully tested will be considered the final framework, and the research findings are reported by presenting this final framework.

(i) Process
    A process is developed for a specific purpose in a business organisation. It aims to make some change in the organisation. For example, let's say a company implements a process to improve its quality performance. This process may involve changes in the structure (for instance, someone is transferred to a different department) or operations (for example, the quality inspection procedure is modified) of the organisation. In research, a process is developed to help solve an organisation's problem or improve its performance. The output of this research will be in the form of a new process rather than a framework or model. A process is also called a tool, procedure, method or system.
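To tie the components above together, here is a small hypothetical sketch (not from the module) of how a construct might be operationalised and then measured as variables, as mentioned in items (c) and (d). The construct "job satisfaction", the five-item Likert rule and the sample values are assumptions made purely for illustration.

# Hypothetical operationalisation of the construct "job satisfaction":
# operationally defined here as the mean of five Likert-scale items (1-5),
# so that any two researchers applying the rule obtain the same measurement.

def job_satisfaction(likert_items):
    """Operational definition (assumed): mean of five Likert responses, each 1-5."""
    assert len(likert_items) == 5 and all(1 <= x <= 5 for x in likert_items)
    return sum(likert_items) / len(likert_items)

# Variables at the empirical level (illustrative values only):
is_manager = 1                                    # dichotomous: property present (1) or absent (0)
num_children = 3                                  # discrete: only certain whole-number values possible
satisfaction = job_satisfaction([4, 5, 3, 4, 4])  # continuous: any value within the 1-5 range

print(is_manager, num_children, satisfaction)     # prints: 1 3 4.0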

SELF-CHECK 1.2
1. What are the differences between proposition and hypothesis?
2. What are the differences between concept and construct?
3. What are the differences between model and framework?

SELF-CHECK 1.3
1. If research in a social science area cannot be 100% scientific, why do it at all?
2. Tick True or False for each statement below:
   1.  In deduction, we start from observing data and developing a generalisation, and then explain the relationship between observed variables. (True / False)
   2.  Theory helps us to make sense of observed patterns. (True / False)
   3.  The traditional model of science uses inductive logic. (True / False)
   4.  Scientific inquiry is a process involving alternation of deduction and induction. (True / False)
   5.  A good hypothesis is testable; it means that the hypothesis is simple, requiring few conditions or assumptions. (True / False)
   6.  The role of a theory is representation while that of a model is explanation. (True / False)
   7.  A moderating variable has a contributory effect on the stated independent-dependent variable relationship. (True / False)
   8.  "As income increases, age tends to increase" is an example of a causal/explanatory hypothesis. (True / False)
   9.  In deduction, we observe facts and draw conclusions from them. (True / False)
   10. "Shoe", "chair", "demand" and "bread" are all concepts. (True / False)


Summary
•  Scientific research is an alternative to gain knowledge and information; other methods to gain knowledge include authority, tradition, common sense, media myth and personal experience.
•  Although scientific research does not produce 100% exact information, it is less likely to have potential errors and less likely to be flawed.
•  The traditional model of science is made up of three components: theory, operationalisation and observation.
•  The inductive and deductive model combines induction, deduction, observation and hypothesis testing as a problem-solving process.
•  Scientific methods are based on concepts, constructs and variables, which, when operationalised, enable empirical testing of hypotheses.
•  Variables are concepts and constructs used at the empirical level. They are numerals or values that represent the concepts for the purpose of testing and measurement.
•  A hypothesis describes relationships between variables.
•  A good hypothesis can explain what it claims, is testable and has greater range.
•  Theories are general statements explaining phenomena.
•  Theories consist of concepts, definitions and propositions to explain and predict phenomena.
•  Models are representations of some aspects of a system or of the system as a whole.


Key Terms
Concept
Construct
Deductive models
Definition
Empirical
Framework
Generalisation
Hypothesis
Model
Process
Proposition
Replicable
Variables

Topic 2  Research Process

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Identify the types of problems that need to be highlighted in research;
2. Describe the stages of the research process; and
3. Assess the factors that influence the success of a research.

INTRODUCTION
Research usually involves a multi-stage process. Although the actual number of
stages may vary, research must include formulating and identifying a topic,
reviewing literature, planning a strategy, collecting data, analysing data and
writing a report. In discussing the research process, the presentation depicts a
stage-by-stage, straightforward and rational sequence, although in the real working
conditions of a research project this is unlikely to be the case. The
researcher may have to revisit each stage more than once because the stages are
interrelated and may influence or be influenced by one another. Each time a
researcher revisits a stage, he may have to reflect on the associated issues and
refine his ideas; in addition, he has to consider ethical and access issues during
the process.

2.1 RESEARCH PROCESS

The research process usually starts with interest in a certain event, situation,
object or just wanting to know about something. Research is the process of
gathering the information needed to answer certain questions and thereby
helping in solving problems faced by an individual, firm, organisation or society.
For information to be useful, it must be good. To get good information, the
process of getting the information must be good. A good process is a scientific or
systematic process.

The steps in the research process are depicted in Figure 2.1.

Figure 2.1: Overview of research process

Below are the details for Figure 2.1.


(a) Problem Identification
    The first stage of research is to identify problems or issues and to justify the need for research. There are many sources of research problems such as personal interest, personal experiences, social problems, world trends, and new developments in technology or society.


(b) Formulate Research Questions
    Research questions are important to ensure that the research is moving in the right direction. The questions serve as guidelines for the literature search, data collection, analysis and conclusion. Research questions are usually more specific in quantitative research than in qualitative research. We cannot answer all research questions that arise. Rather, we need to select questions based on the time and cost available in a research project.

(c) Literature Review
    The literature review includes the purposes of the research, the search strategies and the plan of how to undertake the research and write the review.

(d) Research Philosophy and Approach
    In research, an understanding of the appropriate research philosophy and approach is important before beginning a study. You may choose to use an inductive approach rather than a deductive approach. You may choose to follow the physical science approach (i.e. positivism) or focus on the human aspect of studies (i.e. interpretivism). Deciding on your research approach is important to justify your own values and how you see the world. This justification at the early stage of a research project will determine the way you design your research, collect and analyse data, and conclude your research.

(e) Research Design
    In this stage, a range of research methods is available for conducting your research. The choice is between quantitative and qualitative methods. Sometimes the use of combined research methods is encouraged.

(f) Data Collection
    Before collecting data, you need to think about the sampling method. Qualitative research will usually adopt a theoretical sampling method while quantitative research will adopt probability or non-probability sampling. You have to decide what data need to be collected, such as primary or secondary data. You will also need to think about how to access these data and what method you will use to capture them. There are many ways you can collect data, such as observation as well as semi-structured or structured interviews. Before collecting data, research instruments such as questionnaires need to be developed.

(g) Data Processing and Analysis
    The main issue that needs to be considered here is how to prepare the data for either quantitative or qualitative analysis. Data need to be edited and coded for subsequent analysis. For quantitative data, the use of a computerised analysis software package such as SPSS is encouraged. Analysis of qualitative data is very subjective and is usually done manually. The use of various qualitative data analysis methods such as pattern matching, textual analysis, grounded theory and narrative analysis depends on the nature of the data itself. However, qualitative data can now also be analysed with appropriate software packages. (An illustrative sketch of this step is given after this list.)
(h) Conclusion and Report
    The final report presents the whole research project, from the research issues, literature review and research methodology to the findings, data analysis and conclusion. Not all reports are of the same format; as a researcher, you have to decide on the structure, content and style of the final report.
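As an illustration of step (g) above (the sketch promised there), the short Python example below shows what editing, coding and a first descriptive analysis of quantitative survey data might look like using the pandas library. The file name, column names and coding scheme are assumptions invented for the example, not something prescribed by the module.

# Illustrative only: preparing and summarising hypothetical survey data.
import pandas as pd

# Data entry: read responses captured in a (hypothetical) CSV file.
df = pd.read_csv("survey_responses.csv")

# Editing: drop records with missing answers to the key questions.
df = df.dropna(subset=["monthly_income", "gender"])

# Coding: convert text categories into numeric codes for analysis.
df["gender_code"] = df["gender"].map({"male": 1, "female": 2})

# Descriptive analysis: summary statistics and a simple group comparison.
print(df["monthly_income"].describe())
print(df.groupby("gender")["monthly_income"].mean())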

To ensure the success of the research, the researcher should:
(a) Get up-to-date information;
(b) Know what those who have more information about the problem feel about the situation and the technological prospects;
(c) Determine other researchers who have been involved in similar types of study; and
(d) Determine the successes and failures of other researchers in similar situations.

Factors that may impede the research process are as follows:
(a) The "most favoured methodology" syndrome;
(b) The notion that the results of the study could solve all problems;
(c) Designing questions that cannot be examined;
(d) Problems that have not been defined properly; and
(e) Research that has been directed on a political platform.

SELF-CHECK 2.1
Identify the purpose of the research process and the main factors to
make it successful.


2.2 PROCESS OF IDENTIFYING THE PROBLEM

A useful method of approaching the research process is by stating the problem.
The problem may be a general situation that describes a particular phenomenon.
The everyday dilemma that one faces may be a symptom of a bigger problem.
The dilemma is easily identified; however, to focus on the real problem may be
more difficult. To identify the dilemma, a situational analysis can be carried out.
The dilemma will finally lead to the practical problem.

The research problem can be identified by:
(a) Examining the concept or the construct;
(b) Breaking down the problem into smaller, more specific questions;
(c) Stating the hypotheses to be tested clearly;
(d) Identifying the evidence to check the questions and the hypotheses; and
(e) Identifying the scope of the study.

Investigative Questions
Once the research problem has been identified, the researcher has to think of the
problem in a more specific or focused way; this is the investigative question.
These are questions that the researcher must ask in order to get the most
satisfying conclusion regarding the research question. The specific questions will
help in determining the types of data to be collected.
Measurement Questions
These are questions that are actually asked of respondents in order to obtain
necessary data for analysis; these are questions that appear in the questionnaire.
If the research uses an observational approach, the measurement questions take
the form of records of the observations of the subject made by the researcher.

ACTIVITY 2.1
What are the systematic/scientific steps needed to carry out a research?


ACTIVITY 2.2
The general manager of the company you work for calls you to his office. He is very worried about the company's engineering department as the turnover rate for technicians is quite high. He asks you to do a survey among other major companies in the region to learn how they take care of the problem of high turnover of technicians.
(a) Do you think the suggestion made by the general manager is appropriate? Justify your answer.
(b) If the suggestion is acceptable, how could you improve on the formulation of the research problem?

2.3 DATA FOR RESEARCH

Data are facts which the researcher gets from the environment. Data may take numerical or non-numerical forms of information and evidence that have been carefully gathered according to sets of rules and established procedures. Data may be obtained in ways ranging from simple observations at a specific crossroad to modern, technologically enhanced surveys of giant corporations all over the world. The technique used will determine the methods by which data are collected. Techniques used to record raw data include questionnaires, observational forms, laboratory notes, instrument calibration logs, financial statements and standardised instruments. Data are used in order to reject or to accept hypotheses, and serve as evidence or empirical information that represents the concepts.
The characteristics of data can be examined in terms of:
(a) Level of abstractness: They are more metaphorical than real; for example, profits cannot be observed directly but their effects can be recorded.
(b) Ability to be proven: When the sensory experiences produce the same result consistently, then the data are reliable and can be verified.
(c) Difficulty in obtaining data: Obtaining data may be difficult due to the speed of change at which events occur and the lapse in time of the observation; changes occur with the passage of time.
(d) Level of representation of the phenomenon under study: That is, how close the data are to the real phenomenon.


There are two types of data:
(a) Secondary data: Data that have been collected and processed by one researcher and are reanalysed for a different purpose by another researcher.
(b) Primary data: Data that have close proximity to the truth and allow control over error, so careful design of the data collection becomes pertinent.

2.4 ANALYSING AND INTERPRETING DATA

Data in raw form is of little help in overcoming management problems or in
decision-making. To produce information, the raw data needs to be processed,
transformed and reduced so that it is more easily managed. To make the
information more useful, data interpretation involves making conclusions,
looking at patterns of relationships and using statistical techniques.
Interpretation of the findings of the analysis also involves determining whether
the research questions are answered or whether the results are consistent with
theories and prior information.

The results or findings of the analysis must be transmitted or delivered so that
the recommendations or suggestions made based on the facts are available. The
presentation of the findings may vary depending on the target audience, the
occasion and the purpose of doing the research.
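For example, "looking at patterns of relationships" often comes down to computing a statistic and then stating what it implies for the research question. The minimal Python sketch below, using invented figures, computes a Pearson correlation between advertising spending and sales and prints an interpretation; the data, variable names and the 5% threshold are assumptions for illustration only.

# Hypothetical example of turning raw data into interpretable information.
from scipy import stats

advertising = [10, 12, 15, 18, 20, 23, 26, 30]    # RM '000 spent per month (invented)
sales = [95, 99, 110, 118, 122, 131, 140, 152]    # units sold per month (invented)

r, p_value = stats.pearsonr(advertising, sales)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# Interpretation: relate the statistic back to the research question.
if p_value < 0.05:
    print("There is a significant positive relationship between advertising and sales.")
else:
    print("No significant relationship was found at the 5% level.")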

2.5 HOW TO CHOOSE A TOPIC

One of the problems faced by students when it comes to research is choosing the
right topic. Getting the right topic will help in designing the suitable steps in
carrying out the research effectively. Below are some of the important
terminologies one should understand before deciding on the research topic:
(a) Subject/General Problem: An area of interest that can be narrowed down to a suitable topic; subjects are either too broad or too loosely defined to serve as topics for research.


Example 2.1
A subject/general problem must lead to a good topic, one that raises some questions which have not been answered to the satisfaction of all authorities on the topic.
Remember the purpose of research: to explain/describe, to illustrate/explore, to argue for/determine a causal relationship, or to forecast/control. Research is more than mere reporting or just finding information. The researcher should be able to evaluate the information and ideas discovered and to arrive at a clear, well-thought-out conclusion that gives the reader something to think about or to use in solving problems.
(b) Topic: A reasonably narrow, clearly defined area of interest that could be thoroughly investigated within the limits of the resources available to undertake the research. A good topic raises questions that have no simple answers. However, there are no absolute right or wrong answers. Research is not geared towards making judgments as to who or what is right but instead consists of assembling information from various sources in order to present readers with a composite picture.
Example 2.2
•  The effects of parental attitudes on teenage pregnancy
•  The demand for recreation from domestic visitors in Langkawi
•  The role of certain traditional herbs in the cure of certain cancers
•  The effectiveness of improved communication systems on the productivity of airline catering workers
•  The effects of the growth of the component manufacturing industry on rural-urban migration of women
•  The role of local universities in the use of the English language in primary schools

(c) Thesis: A general statement which announces the major conclusions that may be reached after a thorough analysis of all sources. The statement should appear at the beginning of the research report (in the problem statement); the main body of the report should explain and illustrate (introductory stage), analyse (methodology sections), argue for, and in some sense prove the thesis (discussion and conclusion). The defence of the thesis consists of evidence gathered and analysed from a fair number of sources that express the various points of view towards the topic.
    If the thesis can be thought of early, then the researcher can easily limit the reading in each source to just those passages that relate directly to the thesis. However, this is not always possible. Thus, it will be most helpful to think of the topic in terms of the possible thesis or hypothesis.
(d) Hypothesis: The predictions (of the eventual thesis), made sometime before reading the sources, as to what the research will reveal about the topic, i.e. what answers are expected to be found for the major questions raised by the topic. As can be seen, the hypothesis (an educated guess) can help the researcher to find exactly what information (data, methods) is needed as quickly and efficiently as possible, by keeping attention focused on a limited number of specific aspects of the topic. A carefully worded hypothesis can greatly reduce the problems of searching for sources and extracting from them the most useful information. In other words, the hypothesis points in the right direction by indicating the specific questions that need answers. The information or answers that either agree or disagree with the hypothesis will bring the researcher closer to the truth, which is the thesis of the researcher.
    Forming the hypothesis should be done while choosing the topic. This is because the topic involves unanswered questions and the hypothesis predicts the possible answers. The hypothesis can thus test the thoroughness of the research. The hypothesis should not be defended by using only those sources that support it; for the validity of the conclusions, different sources representing different viewpoints should be considered. The mission of the research is to present readers with the full picture so that they will have enough information to evaluate the conclusions.


SELF-CHECK 2.2
Tick True or False for each statement below:
1. Investigative, management and measurement questions are part of the research process hierarchy of questions. (True / False)
2. A discrepancy between a desirable and an actual situation is a problem. (True / False)
3. In an observational study, measurement questions are the observations themselves. (True / False)
4. A good researcher must consider all possible alternatives when attempting to design a research. (True / False)
5. Comparing situations to similar experiences in the past develops creative solutions to problems. (True / False)
6. The management dilemma is a symptom of an actual problem. (True / False)
7. Explanatory research depends on the notion of cause and effect. (True / False)
8. Research objectives lead to greater specificity compared to research or investigative questions. (True / False)

Choose the correct answer:
1. One of the main reasons for doing a research proposal is to:
   (a) Present the problem to be researched and its importance.
   (b) Start the final research report early.
   (c) Allow the client to choose the proper research design.
   (d) Force the client to choose the most appropriate technique to analyse the data.
2. Measurement questions are:
   (a) The observations in an observational study.
   (b) Not relevant in a qualitative study.
   (c) Used only in sampling procedures.
   (d) Inferred, not collected using questionnaires.
3. The sections on the Problem Statement and Research Objective:
   (a) Lay the basis for an appropriate literature review, research design and data analysis.
   (b) Are not required in most internal studies.
   (c) Are required only in large-scale studies.
   (d) Should be specified only by the sponsoring agency.

Summary
•  The research process is an interrelated process that can be viewed in phases.
•  The first phase of the research process is the problem identification or definition stage; this may include doing some exploratory research.
•  Once the problem has been identified, the researcher has to plan for the later stages, which are very much dependent on the type of problem identified.
•  In the planning phase, the researcher has to identify the methods of collecting data, the techniques to analyse the data and the preparation of the report.
•  The researcher has to determine the design of the research because the design will determine the type of data to be collected and the method of collecting the data.
•  Creative design of the research will help in reducing the cost of the research.

Concept
Empirical framework
Conceptualism
Exploratory study
Construct
Halo effect
Deductive models
Hypothesis
Definitions
Inductive model

Topic 3: Review of Literature

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Define what is review of literature;
2. Explain the importance of a good literature review;
3. List the ideal procedures for review of literature; and
4. Describe common mistakes in review of literature.

INTRODUCTION
The review of literature is not properly understood by some learners. Some have
the opinion that literature review means collecting and compiling facts for the
research being undertaken. In fact, the literature review process requires analytical thinking, the ability to critique and an empirical approach. Review of literature is an
integral part of the entire research process. When you undertake a research
process, review of literature will help you to establish the theoretical roots of your
field of interest, clarify your ideas and develop your methodology. The review of
literature also helps you to integrate your findings with the existing body of
knowledge. You must remember that one of your important responsibilities in
research is to compare your findings with those of others, and that is why review
of literature plays a very important role in the research process.

3.1 WHAT IS LITERATURE REVIEW?

The aim of literature review is to highlight what has been done so far in the
field of interest and how your findings relate to earlier research. The review of
literature also indicates the following:
(a) Approaches;
(b) Methods;
(c) Variables used; and
(d) Statistical procedure.

The most important element of a literature review is its findings. The review gives an overview of findings from previous research work. It also traces the general patterns of the findings and the conclusions that can be made based on them. Generally, review of literature provides an in-depth understanding and explanation of how your findings are similar to or differ from previous research work.
For example, your literature review could justify whether your work is an
extension of what others have done. It could also indicate whether you are trying
to replicate earlier studies in a different context.
Review of literature also reveals techniques and statistical procedures that
have not been attempted by others. To do a review of literature, you need to
locate, read and evaluate research documents, reports, theses and other types
of academic materials. Review done for a research process must be extensive
and thorough because you are aiming to obtain a detailed account of the topic
being studied.

ACTIVITY 3.1
List some obstacles that learners may face in doing a literature review
for their theses or research reports. Discuss your answer during your
tutorial.

3.2 IMPORTANCE OF LITERATURE REVIEW

Reviewing literature can be time-consuming and daunting. However, it is


always rewarding. A review of literature has a number of functions in research
methodology, as illustrated in Figure 3.1.

Figure 3.1: Functions of literature review in research

Figure 3.2 lists the main reasons why literature review is important.

Figure 3.2: Importance of literature review

Here are the details of the main points depicted in Figure 3.2.


(a) Improve Research Methodology
Literature review helps you to learn about methodologies used by other researchers to address research questions similar to the ones you are investigating. It explains the procedures other researchers used and methods similar to the ones you are proposing. It gives you an idea of whether the methods other researchers used worked for them and what problems they faced. By doing a review of literature, you will become aware of the pitfalls and problems, and you can strategise to select a methodology that you feel will suit your research work better.

(b) Focus on Research Problem
Review of literature could help you shape your research problem because the process of reviewing literature helps you to understand the subject area better and thus to conceptualise your research problem clearly and precisely. In addition, it helps you to understand the relationship between your research problem and the body of knowledge in your research area.

(c) Cater to Knowledge Base for Research Area
One of the most important objectives of literature review is to ensure that you read widely on the subject area in which you intend to conduct a study.
It is fundamental that you know what others are doing in your field of
interest or similar topics as well as understand theories that have been put
forward and gaps that exist in the field.
(d) Contextualise Research Findings
Obtaining answers for your research questions is easy. The difficulty lies in
how you examine your research findings in light of the existing body of
knowledge. How do you answer your research questions compared to what
other researchers concluded? What is the new knowledge contribution from
your research work? How are your findings distinguished from those of
other researchers? To answer these questions, you need to go back to the
review of literature. It is important to put your findings in the context of
what is already known and understood in your field of research.

(e) Ensure Novelty in Work
By doing a review of literature, you do not run the risk of reinventing the
wheel, which means wasting efforts on trying to rediscover something that
is already known or published in the research arena. Therefore, through
literature review, you could ensure novelty and new contribution in your
research work.

SELF-CHECK 3.1

1. What is meant by literature review?
2. List three reasons why literature review is important.

3.3 PROCEDURES FOR REVIEWING LITERATURE

It is important for you to have a specific idea of what you want to research before
embarking on literature review. There is danger in reviewing literature without
having a reasonably specific idea of what you want to study. It can condition
your thinking about your research and the methodology you might prefer,
resulting in a less innovative choice of research problem and methodology.
Therefore, try to draft your main idea before reviewing literature. Generally,
there are four steps in literature review, as depicted in Figure 3.3.

Figure 3.3: Four important steps in literature review

Let us look at each of these steps in detail.

(a) Step 1: Search the Existing Literature in Your Research Area of Interest
Once you choose your topic of interest, make sure it is a well-researched and well-studied area which can give you more research literature to choose from. Narrow your topics so that you can cover your selected topic
in depth. Comprehensiveness and narrowness of topic go hand in hand.
Now, you can proceed to search the existing literature. To effectively search
literature, have in mind some idea of the broad subject area and the
problem you wish to investigate. The first task would be compiling a
bibliography in your research area. Books and journals are the best sources
for literature in a particular research area. The sources include:
(i) Indices of journals (e.g. ACM, IEEE Transactions and Elsevier);
(ii) Abstracts of articles (e.g. Dissertations Abstracts International, Emerald and IT Knowledge Base); and
(iii) Citation indices (e.g. ProQuest and Scopus).


(b) Step 2: Review the Literature Obtained
Once you have identified several journals and books, the next thing to do is
to start reading them critically to pull together themes and issues that are
associated with your research topic. Read and read! That is the bottom line
in doing a review. If you do not have a framework or theme to begin your


research with, use a separate paper to jot down the main points you extract
from journal articles and books. Once you create a rough framework, you
may slot in the extracted information accordingly. As you read further, do
some critical review with particular references on the following aspects:
(i) Note the theories put forward, critiques and methods used (sample size, data used, measurement procedure);
(ii) Note whether the knowledge relevant to your designed framework has been confirmed beyond doubt;
(iii) Find differences of opinion among researchers and jot down your opinions about their validity; and
(iv) Examine the gaps that exist in the body of knowledge.
(c) Step 3: Develop a Theoretical Framework
Reviewing the literature can be a never-ending task. You must know that
with the limited time you have to complete your research, it is important
for you to set the boundaries and parameters by looking into literature
relevant to your research topic. Information you obtain from literature
sources must be sorted out according to the themes and issues you put in
your framework. Unless you review the literature with regard to the
framework you developed, you will not be able to develop a focus in your
literature search. This means your theoretical framework will provide you a
base and guide to read further. The best practice would be to develop a
framework first and then dive into literature search or vice-versa. Of
course, as you read more about your research area, you are likely to change
the framework. Do not worry much about this because it is part of a
research process.

(d) Step 4: Writing up the Literature Review
The final task will be compiling and writing all the literature you read and
reviewed. Begin your review with some themes or points that you want to
emphasise. Organise and list all the themes you would like to discuss and
relate. Organisation is of utmost importance and makes the structure
known to your reader. While writing, identify and describe various theories
relevant to your field and specify gaps in the body of knowledge in that
area. Proceed to explain recent advances in the area of study as well as
current trends. In research, we describe, compare and evaluate findings
based on:
(i) Assumptions of the research;
(ii) Theories related to the area of study;
(iii) Hypotheses;
(iv) Research designs applied;
(v) Variables selected; and
(vi) Potential future work speculated by researchers.


We will go in-depth on hypotheses and research designs in the coming topics in
this module. Most importantly, avoid plagiarism when writing. Give due
recognition to the works of other researchers. Quote their work to show how
your findings contradict, confirm or add to them. This step is undertaken when
you start writing about your findings after finalising your data analysis during
the research process. It does not cost anything to acknowledge sources. In fact, it
shows the breadth and depth of your review and shows that your work is
precise.

3.4 COMMON MISTAKES IN REVIEW OF LITERATURE

Normally, beginners in research make the following mistakes as soon as they start writing the review of literature:
(a) The review is a mere description of various materials without showing the relation between the studies and the main objective of the research topic.
(b) Students tend to cut and paste, which SHOULD NOT be encouraged. Original works should be cited and quoted.
(c) Journals or reports that are included are not critically evaluated. Critically evaluate the research questions, the methodology used and the recommendations made by the researchers.

There is some evidence to suggest that students sometimes do not read the
original works and instead take someone else's work and cite it as though they
had read the primary source.

SELF-CHECK 3.2
1. What are the procedures involved in the review of literature?
2. What are the common mistakes in doing a literature review?


3.5 EVALUATING JOURNAL ARTICLES

Writing your literature review is essential as it enables you to interpret the works
of other researchers. How do you go about evaluating journal articles or
proceedings? The procedure for evaluating journal or research articles is shown
in Figure 3.4.

Figure 3.4: The five steps of evaluating a journal article

Now, let us discuss the steps mentioned in Figure 3.4 in detail.


(a) Step 1: Read and Understand the Abstract
    (i) What was the research about? Are the objectives or aims of the study specified clearly?
    (ii) Was the design used for the study described clearly?
    (iii) What are the reasons for undertaking the research?

(b) Step 2: Read and Understand the Introduction


    (i) You should keep in mind that the author is assuming that the reader is an expert in the field and has some background knowledge about it.
    (ii) References made may be short and brief because it is assumed that you know the people in the field.
    (iii) Do some critique on the research questions to determine whether they are applicable to the theme of study.
(c) Step 3: Read the Methodology Section
    (i) This section describes the methods used to collect data and the background of the subjects.
    (ii) You should be able to do some critique on whether the selection of subjects is appropriate.
    (iii) Were the issues of validity and reliability discussed?
    (iv) If the topic was design and development, was the framework explained in sufficient detail? Could it have been done in another way?
(d) Step 4: Read the Results Section
    (i) This section describes the connection between the results and the research questions or hypotheses.
    (ii) It reports results relating to the research questions and other statistically significant results.
    (iii) Were the results clearly reported and presented? (e.g. use of tables and graphs)
    (iv) Did the results reflect predictions made in the Introduction section?
(e) Step 5: Read and Understand the Discussion Section
    (i) This section describes the main findings and relates them to the Introduction section.
    (ii) It also speculates on reasons for the results.
    (iii) You need to identify weaknesses or limitations of the study, as highlighted by the author.
    (iv) You must analyse whether the author's method is the only way to interpret the predicted results (a good researcher would look into this aspect to justify his/her findings firmly).

ACTIVITY 3.2
Select three journals in the research area you are interested in and
identify the main contributions of those papers.

Literature review shows what has been done in the research topic and how
the intended study relates to earlier research.
Literature review consists of research findings as well as propositions and
opinions of researchers in the field.
Literature review delimits the study, relates the methods used by other
researchers as well as recommendations of earlier works and provides the
basis for the intended research task.
All journal and research articles reviewed should be critically evaluated.
Literature review can reveal methods of dealing with the research problem
that may be similar to the difficulties you are facing.
Literature review will increase your confidence in your research topic if you
find other researchers have an interest in this topic and have invested time,
effort and resources in studying it.

Abstracts of articles
Research journal
Body of knowledge
Review of literature
Citation indices
Theoretical framework
Indices of journals

Topic 4: Sampling Design

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Discuss the concept and reasons for sampling;
2. Assess the criteria of a good sample;
3. Prescribe the appropriate usage of sampling designs; and
4. Examine the process of determining the sample size.

INTRODUCTION
This topic introduces strategies to collect primary data. The process of collecting
primary data must be identified properly based on the purpose and objectives
of the research. Data used to answer research questions must come from the
appropriate population in order to be useful. If data is not collected from
the people, events or objects that can provide the correct answers to solve the
problem, then the process of collecting the data is a waste. The process of
selecting the right individuals, objects or events for study is known as sampling.

4.1 SAMPLING CONCEPT

Whatever the research questions and objectives of the study may be, a researcher
must collect data to answer them. If the researcher collects and analyses data
from every possible member, this is known as a census. However, most
researchers are faced with limitations of resources, time and often access, which make it impossible to collect or analyse all the data. Sampling techniques
provide a range of methods that enable the researcher to reduce the amount of
data needed, by considering only data from a subgroup rather than from all
possible cases or elements (refer to Figure 4.1).
Figure 4.1: Population, sample and individual case

In order to ensure that the data collected is representative, a few terms related to
the concept of sampling must be understood.
(a) Population: Total collection of elements or cases on which to make inferences; it refers to the entire group of people, events or things of interest that the researcher wants to study.

(b) Element: A single member of the population on whom a measurement is taken.

(c) Census: A count of all elements in a population.

(d) Population Frame or Study Population: Aggregation of elements from which the sample is taken; it is a listing of all the elements in the population from which the sample is drawn.

(e) Sample: A subset of the population; it is made up of some members selected from the population. These are some, not all, elements of the population that form the sample.

(f) Sampling Unit or Subject: Element or set of elements considered for selection in the sample; it is a single member of the sample.

(g) Sampling Frame: The actual list of sampling units from which the sample is taken.

The process of selecting a sufficient number of elements from the population is


called sampling. A study of the sample and an understanding of its properties
would make it possible to make generalisation of such properties to the population
elements. The characteristics of the population elements, such as population mean
(M), population standard deviation (Sd) and population variance (S²), are referred to as parameters. The characteristics of the sample, the statistics, such as the sample mean (X̄), standard deviation (sd) and sample variance (s²), are used as estimates of the population parameters (refer to Figure 4.2).

Figure 4.2: Relationship between sample and population
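As a simple illustration (hypothetical data, not from the module), the Python sketch below draws a sample from a synthetic population and computes the sample mean and standard deviation as estimates of the corresponding population parameters:

    import random
    import statistics

    # Hypothetical population of 10,000 values; in practice the parameters are unknown.
    population = [random.gauss(50, 10) for _ in range(10_000)]

    # Draw a sample and compute sample statistics as estimates of the
    # population mean and standard deviation.
    sample = random.sample(population, 100)
    print(round(statistics.mean(sample), 2), round(statistics.stdev(sample), 2))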

ACTIVITY 4.1
How can sampling techniques help to obtain good research results?

4.2 JUSTIFICATION FOR SAMPLING

The reasons for using a sample are many; in research investigations involving
several hundreds and even thousands of elements, it would be impractical to
collect data, test or examine every element. Consider the cost of using a census,
the time and the human resources needed; they are prohibitive. The quality of the
information obtained from a sampling study is likely to be more reliable than
from a census; this is mostly because fatigue is reduced and fewer errors will
result in collecting the data, especially if a large number is involved. In some
situations, sampling is required. For example, in testing the life of an electric
bulb, it would be impossible to test the entire population; if we burned them all, there would be none left to sell.

The advantages of sampling over census may be less compelling if the


population is small and variability is high. Two conditions are appropriate to
carry out a census. A census is feasible when the population is small and
necessary when the elements are quite different from each other.
Sampling in qualitative research will be different from sampling in quantitative
research. In qualitative research, the objective is to generate an in-depth analysis
of the issue, thus the representativeness of the sample chosen is less important. The focus of the research is to gain insight into the problems, and generalisation of the findings to other similar settings is less emphasised.
One popular sampling method in qualitative research is theoretical sampling. In
applying the concept of theoretical sampling, data collection is driven by the theory
that emerges during the research. The next data collection plan is determined by
the findings extracted from the data collected previously until theoretical saturation
is achieved. Theoretical saturation is considered achieved when no new concept
seems to emerge after two or more consecutive cases.

SELF-CHECK 4.1
1. What are the advantages and disadvantages of a census?
2. What are the reasons for sampling? When is a census appropriate?

4.3 CRITERIA OF A GOOD SAMPLE

A good sample is judged by how well it represents the characteristics of the


population. The sample must be valid, which means it must possess the criteria
of accuracy and precision.
Accuracy means the degree to which bias is absent from the sample. There is
no systematic variance in the data and no variation in measures due to some
unknown influences that cause the scores to lean in one direction more than
another. For example, the peak season for a tourist destination falls during the
long school holidays; if a sample is taken only during the school holidays to
collect data on congestion, then accuracy of the data will be reduced.
Precision of estimate is another criterion of a good sample design. It is impossible
to get a 100% representation of the population because it is expected that some
differences in the numerical descriptors happen due to random fluctuations in
the sampling process. This difference is called sampling error and it reflects the
influence of chance in drawing the sampling members. Sampling error is leftover


error after the systematic variance is accounted for. It is supposed to consist of
random fluctuations only. Precision is measured by the standard error of
estimation; the smaller the standard error of estimate, the higher the precision of
the sample.
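For instance (an illustrative sketch with hypothetical data, not from the module), one common estimate is the standard error of the mean, s/√n; the smaller this value, the higher the precision:

    import math
    import statistics

    sample = [48, 52, 50, 47, 53, 49, 51, 50]   # hypothetical measurements
    se_mean = statistics.stdev(sample) / math.sqrt(len(sample))
    print(round(se_mean, 3))                     # about 0.707 for this data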

ACTIVITY 4.2
Although a researcher cannot get 100% accuracy in the research
findings, why is it still important to have a good sample design?

4.4 TYPES OF SAMPLING DESIGNS

A variety of sampling designs is available and the choice depends on the


requirements of the research, the objectives of the study and the resources
available. The sampling techniques available are divided into two types:
(a) Probability sampling
With probability sampling, the chance or probability of each case being
selected from a population is known and is usually the same. It is based on
the concept of random selection, which is a controlled procedure that
assures each population element, or case is given a known non-zero chance
of selection. By using a probability sample, it is possible to answer research
questions and achieve objectives of estimating characteristics of the
population from the sample. Thus, probability sampling is often used in
surveys and experimental research. Researchers use a random selection of
the elements to reduce or eliminate sampling bias.

(b) Non-probability sampling
In non-probability sampling, the probability of each case being selected
from the total population is not known; and it is impossible to answer
research questions or address objectives that require statistical inferences
about the characteristics of the population. Although generalisations could
still be made from non-probability samples about the population, it cannot
be done on statistical grounds. For this reason, non-probability sampling is
often used in a case study research.

SELF-CHECK 4.2
How many types of sampling designs are there in a research study
and which one is the most often used?

4.4.1 Probability Sampling Design

Probability sampling is frequently used in a survey research to make inferences


about the population based on the sample statistics. If the population is small,
say, less than 30 cases, then sampling is not advisable. A census should be done
because when the population is small, the influence of a single extreme case on
the subsequent statistical analysis is more pronounced than for a larger sample.
There are four stages in probability sampling:
(a) Identification of a Suitable Sampling Frame
The sampling frame for any probability sample is the complete list of all
the cases in the population from which the sample is drawn. The sample
must be based on the research questions and objectives. If your research
questions and objectives are concerned with second-year students in the
business administration degree programme, then your sampling frame
work is the complete list of second-year business degree students. The
completeness of your list is important because an incomplete list means
some cases will be excluded and it will be impossible for every case to have
a known chance of being selected.
If a suitable list does not exist, then the researcher has to compile his own
sampling frame. It is important that the list is unbiased, accurate and
current. There may also be organisations that specialise in selling lists of
names and addresses for surveys. If the researcher uses this sample frame,
he must make sure of the way the sample is to be selected as well as how
the list was compiled and when it was last revised.

(b) Determination of a Suitable Sample Size
The sample statistics are used to make generalisations on the population
parameters; the generalisations from any probability sample are based on
the theory of probability. The larger the sample size, the lower the chances
of error in generalising the population. The probability sample allows the
researcher to compromise between accuracy of the results; and the amount
of money and time that the researcher has to invest in collecting, analysing

Copyright Open University Malaysia (OUM)

TOPIC 4

SAMPLING DESIGN

43

and checking the data. Thus, the determination of the sample size within
this compromise is influenced by:
(i) The confidence level in the data, that is, the level of certainty that the characteristics of the data collected will represent the characteristics of the total population;
(ii) The margin of error tolerated, that is, the level of accuracy required for any estimate made from the sample;
(iii) The types of analysis undertaken, that is, the minimum threshold of the statistical techniques to be considered for data categorisation; and
(iv) The size of the total population.
(c) Selection of an Appropriate Sampling Technique and the Sample
Once the sampling frame and the sample size have been determined, the next
step is to select the most appropriate sampling technique to obtain the
representative sample. The choice of probability sampling depends on the
research questions and objectives; and on whether statistical inferences will
be made from the sample. Other factors that may influence the choice of
probability sampling include contacts with respondents, geographical
locations of the population spread and the nature of the sampling frame.
Furthermore, the structure of the sampling frame, the size of the sample
needed, the nature of the support workers who collect the data and the ease
of explaining and making the technique understood will have some
influence on the decision. Five main techniques can be used to select a
probability sample.

(d) Determination of the Representativeness of the Sample
The collected data is compared with data from other sources of the
population in order to determine the representativeness of the data. If there
is no statistically significant difference, then the sample is representative
with respect to the characteristics. The data is compared to data collected in
different time periods to determine representativeness of longitudinal data.

SELF-CHECK 4.3
If Harun calculated that the adjusted minimum sample size was 439
and his estimated response rate was 30%, what would his actual
sample size be?

4.4.2 Types of Probability Sampling

Below are the types of probability sampling (a short illustrative code sketch of the first three techniques is given after this list):

(a) Simple Random Sampling
In this sampling technique, each population element has an equal chance of
being selected into the sample. The samples are drawn using random
number tables or generators. This technique is best used if an accurate,
complete and easily accessible sampling frame is available. By using
random numbers, the selection of sample is done without bias, thus making
the sample representative of the whole population.
The major disadvantage of this sampling form is that it requires a listing
of the population elements. This will take more time to implement. If the
population covers a large random geographical area of selection, then a
selected case is likely to be dispersed throughout the area, and will be costly
due to high travel expenditure.

(b) Systematic Sampling
In systematic sampling, the sample is selected at regular intervals from the sampling frame: the first element is chosen with a random start within the range 1 to k, and thereafter every kth element is selected.
This sampling technique is simple to design and is flexible; it is easier to
use and more efficient than simple random sampling. It has an added
advantage of being easy in determining the sampling distribution of mean
or proportion. As it is not necessary to construct a sampling frame, it is less
expensive than simple random sampling.
Periodicity within the population may skew the sample and results. For
instance, assume the sampling fraction is k = 4, and the list contains the
names of every male followed by a female. If the first selection is a male,
then the sample will contain only male respondents. Consequently, the
sample will be biased. If the population list has a monotonic trend, listing
from the smallest to the largest element, a biased estimate will result based
on the starting point.

(c) Stratified Sampling
This is a modification of the random sampling in which the population is
divided into two or more mutually exclusive subpopulations or strata, based on one or a number of attributes. Then, random selection (simple or systematic) is used within each stratum. Results may be weighted and
combined. Stratification can be done based on primary variables under


study. The selection of the sample is done either by proportionate stratified
sampling, in which the sample is drawn proportionate to the stratum's
share of the total population, or by a disproportionate stratified sampling,
which is any stratification that departs from the proportionate relationship.
A major advantage of the stratified sampling is that the researcher has
control over the sample size in strata. This control results in increased
statistical efficiency because each stratum is homogeneous internally and
heterogeneous with other strata. Moreover, the size of the sample in
each stratum provides adequate data to represent, and thus analyse, the subgroups.
(d) Cluster Sampling
The population is divided into internally heterogeneous subgroups, each
with a few elements in it. The subgroups are selected according to some
criterion of ease or availability in data collection; within subgroups there is
heterogeneity but between subgroups there is homogeneity. Samples are
taken from some randomly selected subgroups for further study.
It often involves large samples since there must be sufficient data to stratify
or cluster the population. However, if the method is indiscriminately used,
it will increase costs.

(e) Multistage Sampling
This is a sampling method that employs more than one sampling strategy.
It usually starts with cluster sampling since it is a method of selecting a
group rather than individual elements. Having selected the groups
(clusters), the individual elements representing the groups are determined
using other probability techniques mentioned above.
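To make these techniques concrete, here is a minimal Python sketch (illustrative only; the frame, strata and sample sizes are hypothetical, not taken from the module) showing simple random, systematic and proportionate stratified selection from a sampling frame:

    import random

    frame = [f"CASE{i:04d}" for i in range(1, 1001)]   # hypothetical sampling frame of 1,000 cases

    # (a) Simple random sampling: every element has an equal chance of selection.
    simple = random.sample(frame, k=100)

    # (b) Systematic sampling: random start within 1..k, then every kth element (k = N / n).
    k = len(frame) // 100
    start = random.randrange(k)
    systematic = frame[start::k]

    # (c) Proportionate stratified sampling: allocate the sample in proportion
    #     to each stratum's share of the population, then draw randomly within it.
    strata = {"urban": frame[:600], "rural": frame[600:]}
    stratified = {
        name: random.sample(elements, round(100 * len(elements) / len(frame)))
        for name, elements in strata.items()
    }

    print(len(simple), len(systematic), {s: len(v) for s, v in stratified.items()})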

SELF-CHECK 4.4
What are the factors influencing the choice of the following sampling designs?
(a) A probability sample and a non-probability sample.
(b) A simple random sample, a cluster sample and a stratified sample.

4.4.3 Non-probability Sampling Design

In addressing certain problems, it is not possible to assume that the sample is


selected using probability methods; so the sample has to be chosen by some other
way. Non-probability sampling provides a range of alternative techniques based
on the researcher's subjective judgments. Often, field workers carry out the
sample selection; so, there is greater opportunity that unfairness would enter
the sample selection procedure and distort the findings of the study. Since the
probability of selection is not known, the range within which to expect the
population parameter cannot be estimated.
Often the choice of non-probability sampling is based on practical reasons even
though there are technical disadvantages compared to the probability methods:
(a) The use of non-probability sampling can satisfactorily meet the sampling
objectives. Sometimes, the true cross section of the population may not be
the objective of the research. For instance, if there is no desire or need to
generalise the results to the population parameters, the sample does not
have to be representative of the population.

(b) Another reason for choosing non-probability sampling is the lower cost and
time factor. Probability sampling is time consuming and expensive. If the
non-probability sampling is carefully controlled, it can produce acceptable
results.

(c) Although probability sampling produces superior results, it is often subject
to constant breakdowns in its application. Carelessness in application by
the people involved often leads to biased results.

(d) The non-probability technique may be the only feasible method if the total
population is not available for the study or not known. In such cases, the
sampling frame will not be available to choose the elements. It may
not be possible to determine completely that the respondent of the mail
questionnaire is actually the person selected or the true cross section of the
population.

4.4.4 Types of Non-probability Sampling

Below are the types of non-probability sampling:


(a) Convenience or Haphazard Sampling
This is a non-restricted non-probability sampling, in which field workers
have the freedom to choose whomever they find. The sample is chosen
haphazardly until the required sample size is met. It is normally the


cheapest and easiest to conduct. This method is considered the most useful
procedure for testing ideas and for exploratory research. The sampling design is
considered the least reliable design because there is no control to ensure
precision.
(b) Purposive or Judgemental Sampling
The sampling form enables selection of sample members that conform to
certain criteria; the researcher can use his own judgement to select cases to
enable him to answer research questions and meet the objectives. This form
of sample is usually used when the population is small, such as in case
study research and when the main purpose is to select cases that are
particularly informative. It is very useful in the early stages of an
exploratory study or in selecting a biased group for screening purposes.
The main disadvantage of this design is that the sample, being selected on specific criteria, may not represent the wider population.

(c) Quota Sampling
The design is based on the premise that the sample will represent the population, as the variability in the sample for various quota variables is
the same as that in the population. The logic is that certain relevant
characteristics describe the dimension of the population, thus making the
design a type of stratified sample in which the selection of cases within the
strata is entirely non-random. If the sample has the same distribution of
these characteristics, then it is quite likely to be a representative of the
population.
The quota sampling has several advantages over the probability sampling.
It is in particular less costly and can be set up rather quickly. It does not
require a sampling frame and may be the only technique that can be used if
other techniques are not available. It is most useful if the population is
large; since the sample size is governed by the need to have sufficient responses in each quota to enable subsequent analyses to be undertaken, the total sample size may be more than 2,000.
A major weakness of the quota sampling design is that the assumption of
the quota being representative is arguable as there is no assurance that each
variable under study represents the population characteristics. Available
data used as a basis for the determination of quota may be outdated or
inaccurate, thus without relevant or sensible quotas, data collected may be
biased. The number of control variables that are used may be limited and is
often left to the choice of the field workers.

(d) Snowball Sampling
This design is usually used when the respondents are difficult to identify
and are located through referrals from people who know them. The
respondents may or may not be chosen initially through probability. The
initial individuals are used to locate other individuals who have similar
characteristics, and who, in turn identify others. The referral approach can
help to reach particularly hard to find respondents; however, it may get
only individuals similar in characteristics to the introducers. The design
may result in a highly homogeneous group.

ACTIVITY 4.3
In a situation where the respondents live in rural areas, what is the
most effective type of sampling that can be used?

4.5 SAMPLE SIZE

The sample size is more often than not determined by judgement as well as
calculation. In many cases, the types of statistical analyses used would determine
the minimum sample size for each individual category. As a rule of thumb, a
sample size of 30 is the smallest number in each category within the overall
sample that is acceptable.
The sample size is the number of elements to be studied in the research project.
Determining the size is one of the great challenges for many junior researchers. Some of the major considerations in sample size determination are:
(a) The importance of the decision (a larger and more representative sample for important decisions);
(b) The nature of the research (a smaller size for exploratory research);
(c) The number of variables (a larger size if more variables are involved);
(d) The nature of the data analysis (detailed and sophisticated statistical analyses require larger random samples); and
(e) Resource availability.

The sample size can be determined statistically or non-statistically.

4.5.1 Qualitative Approach

Determining the sample size involves both qualitative and quantitative


consideration. There are a few qualitative factors in determining sample size as
discussed in the earlier part of this topic.
One of the commonly cited rules of thumb for determining sample sizes, especially in exploratory research, is the user-friendly model of Krejcie and Morgan (1970). Many have also used their suggestion in determining sizes in certain phases of probability sampling (for example, determining the size of
each stratum in a stratified sampling). They simplified the sample size
determination based on the respective target population sizes. Table 7.3 shows
some of the suggested sizes.
Table 7.3: Sample Sizes by Target Population Sizes

Target Population Size (N)    Sample Size (n)
10                            10
30                            28
50                            44
100                           80
200                           132
300                           169
500                           217
1,000                         278
2,000                         322
3,000                         341
4,000                         351
5,000                         357
8,000                         367
10,000                        370
20,000                        377
50,000                        381
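For readers who prefer a formula to a lookup table, the values in Table 7.3 are consistent with the sample size formula published by Krejcie and Morgan (1970). The short Python sketch below is illustrative only and assumes the usual defaults of a 95% confidence level (chi-square value 3.841 for one degree of freedom), a population proportion of 0.5 and a 5% degree of accuracy:

    def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
        # s = chi2 * N * P * (1 - P) / (d^2 * (N - 1) + chi2 * P * (1 - P))
        return round(chi2 * N * P * (1 - P) / (d**2 * (N - 1) + chi2 * P * (1 - P)))

    for N in (100, 1000, 10000, 50000):
        print(N, krejcie_morgan(N))   # approx. 80, 278, 370, 381, as in the table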

4.5.2 Statistical Approach

Statistical approaches to determine sample size are based on inferential statistics,


mainly the confidence interval and hypothesis testing. The former uses
parameter estimates to compute the sample size while the latter makes use of the
effect size, alpha, beta, and the population standard deviation in the calculation.
The sample size determination for confidence intervals presented in this manual
was adapted from Malhotra (1999) while the effect size approach was adapted
from Brewer (1996).
Sample Size Considerations using the Confidence Interval Approach
This approach is based on the construction of confidence intervals around the
sample means or proportions using standard error formula.
(a) Sample Size Determination Using the Mean
The following steps are used to determine the sample size:
(i) Specify the level of precision (D); this is the maximum permissible difference that the researcher would like to set.
(ii) Specify the level of confidence, which is also set by the researcher.
(iii) Determine the z value associated with the level of confidence set in (ii), using the z-distribution table.
(iv) Determine the standard deviation of the population (σ). This is based either on secondary sources, derived empirically from pilot tests, or defined judgmentally by the researcher.
(v) Calculate the sample size using the formula based on the standard error of the mean:

    n = (σ² z²) / D²
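As a worked illustration (the numbers are hypothetical, not from the module): with σ = 12, a 95% confidence level (z = 1.96) and a permissible difference D = 2, the formula gives n = (12² × 1.96²) / 2² ≈ 139. A minimal Python sketch:

    import math

    def n_for_mean(sigma, z, D):
        # n = sigma^2 * z^2 / D^2, rounded up to the next whole element
        return math.ceil((sigma**2) * (z**2) / (D**2))

    print(n_for_mean(sigma=12, z=1.96, D=2))  # 139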
(b) Sample Size Determination Using a Proportion
If the statistic of interest is a proportion, the following steps should be used to determine the sample size:
(i) Specify the level of precision (D).
(ii) Specify the level of confidence.
(iii) Determine the z value associated with the level of confidence set in (ii), using the z-distribution table.
(iv) Estimate the population proportion (π). This can be done based on information from previous studies, derived empirically from pilot tests, or defined judgmentally by the researcher.
(v) Calculate the sample size using the formula based on the standard error of the proportion:

    n = π(1 − π) z² / D²
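Similarly, as an illustration with hypothetical numbers: with an estimated proportion π = 0.3, z = 1.96 and precision D = 0.05, the formula gives n = 0.3 × 0.7 × 1.96² / 0.05² ≈ 323. A minimal sketch:

    import math

    def n_for_proportion(pi, z, D):
        # n = pi * (1 - pi) * z^2 / D^2, rounded up
        return math.ceil(pi * (1 - pi) * (z**2) / (D**2))

    print(n_for_proportion(pi=0.3, z=1.96, D=0.05))  # 323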

(c) Sample Size Considerations Using the Effect Size Method
There are three major factors that need to be considered in determining the minimum sample size using this method, namely the alpha value, power
and the effect size (Brewer, 1996; Cohen, 1977). The alpha, power and effect
size are set by the researcher prior to data collection.

Alpha is the probability of rejecting the Null when the Null is indeed true.
Because the focus of hypothesis testing is to minimise the errors in making
a decision, an adequately small value of alpha is essential for the results to
be meaningful.

Power is the probability of correctly rejecting the Null. Since power refers to
a correct rejection, the power should be set substantially high for the rejection to be meaningful.
Effect size (ES) is the degree of association between the variables under
investigation. If the study is concerned with differences between two
populations, then the effect size refers to the magnitude of difference that
makes it meaningful. A small effect size will allow the researcher to detect even a small effect of the phenomenon. For example, if it is hypothesised
that there is a true difference between male and female employees in terms
of their job satisfaction levels, a small difference in the mean scores of these
two populations (if the null is rejected) is good enough to provide evidence
of practical importance if the effect size is set to be small. A small effect size
is able to detect even small true differences if there is a difference between
the null and the alternate hypothesis.
Using this method, mainly in hypothesis testing, the minimum sample size
is defined as a function of alpha, power and the effect size (Brewer, 1996;
Cohen, 1977).
For one-sample hypothesis testing, the minimum sample size is defined as

    N = [(Zα + Zβ) / ES]²

where:
    N  = minimum sample size
    α  = alpha (probability of type I error)
    β  = beta (probability of type II error)
    ES = effect size

For hypothesis testing involving two independent samples, the minimum sample size for each population is defined as

    N = 2[(Zα + Zβ) / ES]²

where:
    N  = minimum sample size
    α  = alpha (probability of type I error)
    β  = beta (probability of type II error)
    ES = effect size
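As an illustration only (the alpha, power and effect size values below are hypothetical choices, not prescribed by the module): with a one-tailed α = 0.05 (Zα ≈ 1.645), power = 0.80 (so β = 0.20 and Zβ ≈ 0.84) and ES = 0.5, the formulas give about 25 cases for a one-sample test and about 50 cases per group for two independent samples. A minimal Python sketch using the z values directly:

    import math

    def min_sample_size(z_alpha, z_beta, es, two_samples=False):
        # N = [(Z_alpha + Z_beta) / ES]^2, doubled for two independent samples
        n = ((z_alpha + z_beta) / es) ** 2
        return math.ceil(2 * n if two_samples else n)

    # Hypothetical inputs: one-tailed alpha = .05 (z = 1.645), power = .80 (z_beta = 0.84), ES = 0.5
    print(min_sample_size(1.645, 0.84, 0.5))                    # one-sample test: 25
    print(min_sample_size(1.645, 0.84, 0.5, two_samples=True))  # per group: 50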

SELF-CHECK 4.5
Tick True or False for each statement below:

1. Lower cost and quicker execution are often claimed to make sampling superior to taking a census.
2. The stratified sampling design is usually the most efficient probability sampling in the statistical sense.
3. The sampling frame is always prepared on a random selection basis.
4. A listing of the population itself is known as the sampling frame.
5. Purposive samples are so biased that they seldom provide useful results.
6. An advantage of sampling as compared to a census is the better quality of interviewing in a sample.

Choose the correct answer.

1. The list of elements from which the sample is actually drawn is called the:
   (a) Population
   (b) Universe
   (c) Parameter list
   (d) Sample frame

2. A good sample is one in which there is no bias from the sampling process. This is defined as:
   (a) Consistency
   (b) Accuracy
   (c) Precision
   (d) Reliability

3. Area sampling is a form of:
   (a) Cluster sampling
   (b) Non-probability sampling to study geography
   (c) Stratified sampling
   (d) Systematic sampling in geographical studies

SELF-CHECK 4.6
For each of the following research questions, it has not been possible to
obtain a sampling frame. Suggest the most appropriate non-probability
sampling technique to obtain the necessary data, giving reasons for
your choice.
(a) What can social services provide to homeless people?
(b) Which television advertisements were most remembered by the public watching last weekend?
(c) How are manufacturing companies planning to respond to the introduction of highway tolls?
(d) Would users of a squash club be prepared to pay a 10% increase in subscription fees to help fund two new extra courts (answer needed by tomorrow morning)?

The logic of sampling is that there are similarities among the elements in a
population that can adequately represent the characteristics of the total
population.
Some of the elements may underestimate the true value of the population,
but others may overestimate the value. The combination of these estimates
gives the statistics, which provide an estimate of the true population value.
A good sample should be accurate; there is little or no bias or systematic
variance.
A good sample must be precise; the sampling error is within acceptable limits
for the purpose of the study.
The choice of the sampling design depends on the objectives and the research
questions of the study.
The size of the sample depends on the accuracy of the results required, the
confidence level of the study and the resources available to collect and
analyse the data.

The probability sample design is the ideal design, since it allows the
determination of the level of error likely to be produced. It is often time
consuming and expensive.
Stratified and systematic sampling are modifications to simple random
sampling.
A sampling frame is needed to apply probability sampling.
If the sampling frame is not possible, a non-probability design can be applied.
The non-probability techniques are a compromise between accuracy and cost
of collecting data. Non-probability sampling has many advantages especially
ease of use and reduced cost of data collection. In some instances, non-probability sampling is the only feasible method of data collection.

Census
Population
Cluster sampling
Parameters
Population case

Topic 5: Measurement and Scales

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Define conceptualisation and operationalisation;
2. Explain the four types of scales used in research;
3. Prescribe the measures of quality used; and
4. Assess the sources of measurement errors.

INTRODUCTION
This topic begins with an explanation of conceptualisation and
operationalisation. The definition of concepts and the methods of measuring the
concepts will help the researcher to determine the methods of collecting and
analysing data. The process of defining concepts is important in a research to
ensure that readers have the same understanding as the researcher; this will
prevent any confusion or misunderstanding by readers in interpreting the
meaning of the concept.
Once the concept is defined, it is necessary to identify the methods to measure
the concept. Measurement of the variables is an integral part of the research
process and is an important aspect of a research design. Unless the variables are
measured in some way, the researcher will not be able to test the hypothesis and
find answers to complex research issues.

5.1 CONCEPTUALISATION

In research, we use concepts that vary in levels of abstraction, from simple


concepts such as shoes, table and height, to the most abstract such as satisfaction,
marketability, love and stress. It is necessary to clarify the meaning of the
concepts used in order to draw meaningful conclusions about them.
In daily life, we communicate through a system of vague agreements on the use
of terms. In many cases, other people do not exactly understand what we wish to
communicate and the meaning of the terms we use. This will cause conflict but
we somehow get the message across. In scientific research, however, this
scenario is not acceptable; scientific research cannot operate in an imprecise
context.
Conceptualisation is the mental process of making imprecise notions (mental
images-conceptions) into more specific meanings to enable communication and
eventual agreement on the specific meanings of the terms. We specify what we
mean when we use a particular term. The process of conceptualisation will
produce specification of the indicators of what we have in mind on the concept
we are studying.
For example, the concept of compassion may comprise different kinds of
compassion. There is compassion towards humans or animals. In addition,
compassion may be an act or a feeling. It could also be seen in terms of
forgiveness or pity. The grouping of the concept is known as dimension. Thus,
conceptualisation involves both specifying dimensions and identifying the
various indicators for each.
The process of refining abstract concepts is called definition. By defining a
concept, we derive its meaning to draw conclusions. The concepts are specified
using the following:
(a) Nominal Definition: A working definition for the purpose of an inquiry in assigning a meaning to a term. It helps to focus on how to strategise observation but not to make the actual observation.

(b) Operational Definition: How the concept is measured, by specifying what to observe, how to observe and how to interpret the observation. An operational definition is undertaken to measure a concept.

Conceptualisation may differ among researchers but definitions are specific and
unambiguous. Therefore, even if one disagrees with the definitions, he has a
good idea of how to interpret the results because the definitions are clear and
specific.

ACTIVITY 5.1
How do you define the concept of socio-economic status in terms of
nominal definition and operational definition?

5.2 OPERATIONALISATION

Once the concepts have been identified, the next step is the process of developing
the specific research procedures/operations that will result in empirical
observations representing those concepts in the real world.
The process of linking a conceptual definition to a specific set of measurement
techniques or procedures is called operationalisation. These are procedures to
measure a concept either through a collection of data from a survey research or
by conducting observation research. The following example explains this.
Example 5.1
Operationalising the concept of an individual/person:
    Variable: Individual
    Attribute: Gender characteristics (male/female)
    Nominal definition: An individual is either a male or a female.
    Operational definition: Let B represent an individual. Mapping out the attributes:
        1 represents an individual who is a male
        0 represents an individual who is a female
    Thus, for individuals B1, B2, B3, B4, B5 and B6:
        B1 is measured as 1 if B1 is a male
        B2 = 0 if B2 is a female
        B3 is measured as 1 if B3 is a male
        B4 = 0 if B4 is a female
        B5 = 1, B6 = 0
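A minimal Python sketch (illustrative, not part of the module) of the same mapping rule, coding the gender attribute of the six hypothetical individuals as 1 (male) or 0 (female):

    # Mapping rule from Example 5.1: male -> 1, female -> 0
    gender_code = {"male": 1, "female": 0}

    individuals = {"B1": "male", "B2": "female", "B3": "male",
                   "B4": "female", "B5": "male", "B6": "female"}

    coded = {name: gender_code[attr] for name, attr in individuals.items()}
    print(coded)  # {'B1': 1, 'B2': 0, 'B3': 1, 'B4': 0, 'B5': 1, 'B6': 0}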

To be meaningful, the measurement must follow rules that specify procedures of


assigning numbers to objects of reality.

5.3 VARIABLES

At the theoretical level, concepts and constructs are used; whereas at the
empirical level, the constructs are transformed into variables. Thus, variables are
the construct or property to be studied. A variable consists of logical groupings
or sets of attributes/values.
An attribute is the intensity or strength of attachment to attitudes, beliefs
and behaviours associated with a concept. It is a characteristic or quality of a
concept/symbol to which numerals or values are assigned.
Two important characteristics of a variable are:
(a) Attributes composing the variable must be exhaustive.
(b) Attributes composing a variable must be mutually exclusive.

Below are five types of variable:


(a) Dependent variable (DV, or criterion variable): the variable of primary interest to the researcher. The goal is to understand and describe the dependent variable.
(b) Independent variable (IV, or predictor variable): influences the dependent variable either in a positive or a negative way. The variance in the dependent variable is accounted for by the independent variable.
(c) Moderating variable (MV): a second independent variable that has a strong contingent contributory effect on the originally stated IV-DV relationship.
(d) Extraneous variable (EV): a random variable that has little impact on the relationship.
(e) Intervening variable (IVV): shows the link between the IV and DV; it acts as a DV with respect to an IV and as an IV with respect to a DV.

ACTIVITY 5.2
What are the relationships between IV, DV and IVV? How does the
inclusion of MV change or affect the relationship?

5.4 MEASUREMENT

The concepts used in a research are divided into objects or properties. Objects are
things such as shirts, hands, computers, shoes, books and papers. Things that are
not so concrete such as genes, nitrogen, attitudes, stocks and peer-group pressure
are also included as objects. Properties or attributes, on the other hand, are the
characteristics of the objects.
An individual's physical characteristics are indicated in terms of weight, height and posture. An individual's psychological attributes are shown in terms of attitudes and intelligence. The social characteristics of the person include leadership ability, social status or class affiliation. Both the object and its characteristics can be measured in a research study.
Measuring the property indicators of the objects makes the measurement of the objects or characteristics more sensible. It is easy to see that A is older than B, and C participates more than D in a group discussion. Indicators such as age, working experience and number of reports done can be easily measured. Hence, they are so commonly accepted that one considers the properties to be observed directly.
However, properties such as an individual's ability to solve problems, motivation for success, political affiliation and sympathetic feelings are more difficult to measure. Since they cannot be measured directly, they have to be gauged by making inferences about the presence or absence of certain behaviours or attitudes by observing some indicators or pointer measurements.
Essentially, the measuring process consists of giving numbers or symbols to
empirical events based on a set of rules. The process of making the measurement
involves three steps: selecting observable objects or properties; using numbers or
symbols to represent aspects of the events or objects; and applying a mapping
rule to connect the observation to the symbol. Thus, some mapping rules are
devised to transfer the observation of the property indicators using these rules.
The accepted rules in using numbers to map the observation of the indicators include:
(a)  Order of numbers - One number is greater than, less than or equal to another number;
(b)  Difference between numbers - The difference between any pair of numbers is greater than, less than or equal to the difference between any other pair of numbers; and
(c)  The number series has a unique origin indicated by the number zero.

SELF-CHECK 5.1
Why is it necessary to define the concepts of research clearly?

5.4.1  Level of Measurement

Once the operationalisation of the concepts has been established, the concepts
need to be measured in some manner. A scale is a tool or mechanism by which
individuals are distinguished based on the variables of interest in the study. The
scale or tool could be gross or fine-tuned. A gross scale broadly categorises
individuals on certain variables. A fine-tuned scale differentiates individuals on
the variables with varying degrees of sophistication.
Using these rules of order, distance and origin, the data are classified into the following types of scales:
(a)  Nominal Measure (Scale)
Nominal data is widely used in social science research. It is characterised by
a set of categories that are exclusive and exhaustive. When nominal data is
used, the only arithmetic operation that can be done is the numeration of
members in each group. If numbers are used to identify categories, then
they are recognised as labels only and have no quantitative value. Nominal
measures are the least useful form of measurement because they suggest no
order or distance relationship and have no arithmetic origin. Moreover, the
measurements have no information on the varying degree of the property
measured.
Although nominal data is weak, it is still useful. In an exploratory study in which the objective is to uncover relationships rather than secure precise
measurements, nominal data is valuable. Studies to provide insights into
important data patterns can be easily accomplished using nominal data.
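Since the only arithmetic permitted on nominal data is counting the members in each category, the frequency count and the mode are its natural summaries. The short Python sketch below is an added illustration using made-up data.

# Sketch: the only arithmetic allowed on nominal data is counting category members.
from collections import Counter

# Hypothetical nominal variable: mode of transport reported by ten respondents.
transport = ["bus", "car", "car", "train", "bus", "car", "walk", "bus", "car", "train"]

frequencies = Counter(transport)
print(frequencies)                                   # members per category
print("Mode:", frequencies.most_common(1)[0][0])     # the only applicable central tendency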

SELF-CHECK 5.2
Give three examples of nominal scale.

(b)  Ordinal Measure (Scale)


An ordinal scale not only categorises the variables in a way that denotes differences among the various categories, it also rank-orders the categories in a meaningful way. The ordinal scale is used for an ordered series of relationships. The preferences are ranked and numbered 1, 2 or 3. The ordinal scale helps the researcher to determine the percentage of respondents at each preference level (first preference, second preference and so on). The ordinal scale gives more information than the nominal scale: it goes beyond indicating differences in categories to show how respondents rank-order them. Do take note that the ordinal scale does not give any indication of the magnitude of the differences among the ranks. The following example explains this.
Example 5.2
Please indicate your preference among the types of examination designs below by using the following scales:

1.  Least Preferred     2.  Preferred     3.  Most Preferred

Types of Questions              Ranking of Preference
(a)  Objective questions
(b)  Subjective questions
(c)  Combination of both
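Ordinal responses support rank-ordering, percentage-per-level summaries and the median, but not averaging of the ranks as if they were magnitudes. The Python sketch below is an added illustration with hypothetical ratings for one question type in Example 5.2.

# Sketch: summarising ordinal ratings (1 = least preferred ... 3 = most preferred).
# Hypothetical responses for "Objective questions" in Example 5.2.
import statistics
from collections import Counter

ratings = [3, 2, 3, 1, 3, 2, 3, 3, 2, 1]

counts = Counter(ratings)
for level in (1, 2, 3):
    print(f"Preference level {level}: {100 * counts[level] / len(ratings):.0f}%")

# The median is a legitimate ordinal summary; the mean is not, because the
# distances between ranks are unknown.
print("Median rating:", statistics.median_low(ratings))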

(c)  Interval Measure (Scale)


The interval scale allows the researcher to perform arithmetical operations
on the data and to measure the distance between any two points on the
scale. It allows the calculation of means and standard deviations of the
responses on the variables. The interval scale not only groups individuals according to certain categories and indicates the order of the groups, but also measures the magnitude of the differences among the individuals. The
following example explains this.

Copyright Open University Malaysia (OUM)

TOPIC 5

63

MEASUREMENT AND SCALES

Example 5.3
Using the scale below, please indicate your choice for each of the items that follow, by circling the number that best describes your feeling.

1.  Strongly disagree    2.  Disagree    3.  Neutral    4.  Agree    5.  Strongly Agree

(a)  The facilities here are adequate.
(b)  The services provided are sufficient.
(c)  The people here are friendly.
(d)  The prices here are cheap.

The interval scale has equal magnitudes of difference between the scale points. The
magnitude of difference represented by the space between 1 and 2 on the
scale is the same as the magnitude of difference represented by the space
between 4 and 5, or between any other two points. Any number can be
added to or subtracted from the numbers on the scale. Assuming the
magnitude of the difference is still retained, if a 6 is added to all five points
on the scale, the interval scale will become 7 to 11; the magnitude of the
difference between 7 and 8 is still the same as the magnitude of the
difference between 10 and 11. Thus, the origin or the starting point could be
any arbitrary number.
The interval scale taps the differences, the order and the equality of the
magnitude of the differences in the variable. It is a more powerful scale
than the ordinal and nominal scales. It allows the measuring of the central
tendency, mean, dispersion, range, standard deviation and variance.
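The arithmetic described above, where adding a constant such as 6 to every scale point leaves the magnitudes of the differences unchanged, can be checked directly. The Python sketch below is an added illustration using hypothetical responses on the 1 to 5 scale of Example 5.3.

# Sketch: interval-scale arithmetic with hypothetical responses on a 1-5 scale.
import statistics

responses = [4, 5, 3, 4, 2, 5, 4, 3]
mean = statistics.mean(responses)
sd = statistics.stdev(responses)
print(f"mean = {mean:.2f}, standard deviation = {sd:.2f}")

# Adding 6 shifts the scale to 7-11, but every difference (and hence the
# standard deviation) stays the same; only the arbitrary origin moves.
shifted = [r + 6 for r in responses]
print(statistics.mean(shifted) == mean + 6)              # True
print(abs(statistics.stdev(shifted) - sd) < 1e-12)       # True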
(d)  Ratio Measure (Scale)
The disadvantage of using the interval scale is related to its arbitrary origin. This can be overcome by using the ratio scale as it has an absolute origin or zero point, which is a meaningful measurement point. So, the ratio scale not only measures the magnitude of the differences between points on the scale but also taps the proportions of the differences. The ratio scale is the most powerful of the four scales because it has a unique zero origin and subsumes all the properties of the other three scales (see Table 5.1).


Table 5.1: Properties of the Four Measures (Scales)

Scale      Difference   Order   Distance   Unique Origin   Measure of Central Tendency   Measure of Dispersion
Nominal    Yes          No      No         No              Mode                          -
Ordinal    Yes          Yes     No         No              Median                        Semi-interquartile range
Interval   Yes          Yes     Yes        No              Arithmetic mean               Standard deviation, variance, coefficient of variation
Ratio      Yes          Yes     Yes        Yes             Arithmetic/geometric mean     Standard deviation, variance, coefficient of variation

Example 5.4
(a)  How many books have you read in the last two weeks?
(b)  How many times have you visited a shopping complex in the last month?

The measures of central tendency of the ratio scale could be either the arithmetic
or the geometric mean; and the measure of dispersion could be the standard
deviation, variance or the coefficient of variation.
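Because ratio data has a true zero, both the arithmetic and geometric means and the coefficient of variation are meaningful, and statements such as "twice as many" make sense. The Python sketch below is an added illustration with hypothetical answers to Example 5.4.

# Sketch: summaries that are meaningful for ratio data (true zero point).
import statistics

books_read = [2, 4, 1, 3, 5, 2, 4, 3]      # hypothetical answers to Example 5.4(a)

arith_mean = statistics.mean(books_read)
geo_mean = statistics.geometric_mean(books_read)     # Python 3.8+
cv = statistics.stdev(books_read) / arith_mean       # coefficient of variation

print(f"arithmetic mean = {arith_mean:.2f}, geometric mean = {geo_mean:.2f}, CV = {cv:.2f}")

# Ratios are meaningful: reading 4 books is twice as many as reading 2.
print(4 / 2)   # 2.0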

ACTIVITY 5.3
1.  What is the meaning of measurement in a research study? Give three steps of the measurement process.
2.  What is the level of measurement concept based on?


SELF-CHECK 5.3
What are the essential differences among the nominal, ordinal,
interval and ratio scales?

5.5  SCALING TECHNIQUES

Four different types of scales are used to measure the operationally defined
dimensions and elements of a variable. It is necessary to know the methods of
scaling; the process of assigning numbers or symbols to elicit the attitudinal
responses of subjects towards objects, events or persons. There are two main
categories of attitudinal scales: the rating scale and the ranking scale.

5.5.1  Rating Scales

Rating scales have several categories and are used to elicit responses with regard
to the object, event or person studied. The following are some examples of rating
scales often used in social science research.
(a)  Dichotomous Scale (Simple Category Scale)
This dichotomous scale is used to elicit a Yes or No response; a nominal scale is used to measure the response.

Do you purchase product A?        Yes        No

(b)  Category Scale (Multiple Choice - Single Response Scale)


The category scale uses multiple items to elicit a single response; the
nominal scale is also used to measure the response.
Where did you purchase your tickets?
(i)

Train station

(ii)

Grocery outlet

(iii) Fast-food restaurant


(iv) Petrol station
(v)
(c)

Others

(c)  Category Scale (Multiple Choice - Multiple Response Scale)


Among the easy-reading magazines listed below, which ones do you like to read?
(i)    Time
(ii)   Reader's Digest
(iii)  National Geographic
(iv)   Far Eastern Economic Review
(v)    Vogue
(vi)   Family
(vii)  Others (specify)

(d)  Summated Rating Scale


One of the most popular applications of the summated rating scale is the Likert scale. The Likert scale is designed to examine how strongly subjects agree
or disagree with statements on a five-point scale. The responses over a
number of items tapping a particular concept or variable are then
summated for every respondent. This is an interval scale and the
differences in the responses between any two points on the scale remain the
same.


Usage of computer systems has helped to improve the performance of students.
(i)    Strongly Agree
(ii)   Agree
(iii)  Neither agree nor disagree
(iv)   Disagree
(v)    Strongly disagree
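A summated score is simply the total of a respondent's responses to the items that tap one concept. The Python sketch below is an added illustration with hypothetical responses of three respondents to four 1 to 5 items.

# Sketch: summated (Likert) scores - sum each respondent's item responses.
# Hypothetical responses of three respondents to four 1-5 items on one concept.
responses = {
    "respondent_1": [5, 4, 4, 5],
    "respondent_2": [3, 3, 2, 4],
    "respondent_3": [1, 2, 2, 1],
}

for respondent, items in responses.items():
    # Negatively worded items would be reverse-scored (6 - response) before summing.
    total = sum(items)              # possible range for four items: 4 to 20
    print(f"{respondent}: summated score = {total}")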

(e)  Semantic Differential Scale


Several bipolar attributes are identified at the extremes of the scale and
respondents are asked to indicate their attitudes towards a particular
individual, object or event on each of the attributes. The semantic
differential scale is used to assess respondents' attitudes towards a
particular brand, advertisement, object or individual. The responses are
plotted to obtain a good idea of their perceptions and are measured as an
interval scale.
How do you feel about the idea of war?
Bad        ..........        Good
Fair       ..........        Unfair
Clean      ..........        Dirty
Modern     ..........        Traditional

(f)  Numerical Scale
The numerical scale is similar to the semantic scale, with the difference that
numbers on a five-point or seven-point scale are provided, with the bipolar
adjectives at both ends. The scale used is an interval scale.
How do you feel about the idea of war?
Bad        1    2    3    4    5    Good
Fair       1    2    3    4    5    Unfair
Clean      1    2    3    4    5    Dirty
Modern     1    2    3    4    5    Traditional

(g)  Fixed or Constant Sum Scale


In this type of scale, the respondents are asked to distribute a given number of points across various items. This scale uses the ordinal measure. In choosing an accommodation facility, indicate the importance you attach to the following five aspects by allotting points to each, to a total of 100.

Room space
Room décor
Cleanliness
Price
Housekeeping service
Total points            100
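A constant sum response is only usable if the allotted points actually total 100, so a simple check is usually applied before analysis. The Python sketch below is an added illustration with a hypothetical response.

# Sketch: validate a constant-sum response (points must total 100).
allocation = {
    "Room space": 25,
    "Room decor": 10,
    "Cleanliness": 30,
    "Price": 20,
    "Housekeeping service": 15,
}

total = sum(allocation.values())
print("Total points =", total)
print("Valid response" if total == 100 else "Invalid: points must total 100")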

(h)  Staple Scale
The staple scale provides simultaneous measures of the direction and
intensity of the attitude towards the items under study. The characteristics
of interest to the study are placed at the centre and a numerical scale
ranging from +3 to -3 is put on either side of the item. The scale gives an
idea on the gap of the individual response to the stimulus. It does not have
an absolute zero, thus it is an interval scale.
Please indicate how you would rate the restaurant with respect to each of the characteristics mentioned below, by circling the appropriate number.

Services          -3   -2   -1   +1   +2   +3
Cleanliness       -3   -2   -1   +1   +2   +3
Prices            -3   -2   -1   +1   +2   +3

5.5.2  Ranking Scale

The respondents make comparisons between two or more objects/items and make choices among them (ordinal in nature). Often, the respondents are asked to select one as the best or the most preferred. This ranking may be conclusive if there are only two choices to be compared. If there are more than two choices, it may result in ties, which may not be helpful. Suppose that 35% of the respondents choose product A, 25% choose product B and 20% choose each of products C and D. Which product is the most preferred? It is not acceptable to conclude that product A is the most preferred since 65% of the respondents did not choose that product. This ambiguity can be avoided by using alternative methods of ranking.
(a)  Paired Comparison
In using this scale, the respondents are asked to choose among a small
number of objects; two objects at a time. This helps to assess preferences
because the respondents can express attitudes unambiguously by choosing
between two objects. The number of paired comparisons that will be judged
by the respondents for n objects is {(n)(n - 1)/2}. If n = 4, then the number of paired comparisons will be {(4)(4 - 1)/2 = 6}. The greater the number of
objects, the greater the number of paired comparisons that will be
presented to the respondents. This will tire the respondents mentally. This
technique is good if the number of objects is small.
Example 5.5
For each pair of national parks, place a check beside the one you most prefer if you had to choose between the two.
(i)    Taman Negara Malaysia            Endau Rompin National Park
(ii)   Mulu National Park               Sabah National Park
(iii)  Taman Negara Malaysia            Sabah National Park
(iv)   Mulu National Park               Endau Rompin National Park
(v)    Taman Negara Malaysia            Mulu National Park
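The number of pairs grows quickly with n, following the formula above. The Python sketch below is an added illustration that generates the pairs for four hypothetical objects and evaluates the formula for a larger n.

# Sketch: the number of paired comparisons for n objects is n*(n - 1)/2.
from itertools import combinations

objects = ["Park A", "Park B", "Park C", "Park D"]    # hypothetical objects, n = 4

pairs = list(combinations(objects, 2))
print(len(pairs), "pairs")                            # 6 = (4)(4 - 1)/2
for pair in pairs:
    print(pair)

# With n = 10 objects a respondent would already face 45 comparisons,
# which is why the technique suits only a small number of objects.
print(10 * (10 - 1) // 2)                             # 45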

(b)  Forced Choice
This choice enables the respondents to rank objects relative to one another,
among alternatives provided. This is easier for the respondents, especially if
the number of choices to be ranked is limited in number.


Example 5.6
Please rank the following daily newspapers you would like to subscribe to in order of preference, assigning 1 for the most preferred choice and 5 for the least preferred.
(i)    New Straits Times
(ii)   Utusan Malaysia
(iii)  The Star
(iv)   Malay Mail
(v)    Harian Metro

(c)  Comparative Scale
This scale gives a point of reference to assess attitudes towards the current
object, event or situation under study. The technique is ideal if the
respondents are familiar with the standard.
Example 5.7
Compared to your previous visit to this holiday destination, your present visit is:

More enjoyable          About the same          Less enjoyable

5.6  MEASUREMENT QUALITY

A good measurement gives an accurate counter or indicator of the concept that we want to measure. It must also be easy and efficient. A precise measure indicates fineness or distinction between the attributes of the variable. Although a precise measure is superior to an imprecise measure, precision is neither always necessary nor desirable. It is important to note that precision does not
always necessary nor desirable. It is important to note that precision does not
mean accuracy. An accurate measure indicates how close the measure is to the
real thing/value.
Measurements are subject to random and systematic biases or errors; hence in
research one cannot get 100% accuracy. The test of reliability and validity of the
measurement becomes important.

5.6.1  Reliability, Validity and Practicality

Three major criteria are often used to determine the quality of a measurement
tool: reliability, validity and practicality.
Reliability and validity are associated with how concretely connected the
measures are to the constructs. Because perfect reliability and validity are impossible to achieve, it is important to establish the truthfulness, the credibility
or the believability of findings, with no random or systematic errors. Thus,
reliability and validity are considered as the scientific criteria of the
measurement.
Reliability is related to the consistency of the measurement, which means the
recurrences are measured with an identical method or under very similar
conditions. If a particular technique is applied repeatedly to the same object and
yields the same result each time, then this indicates consistency. The criteria take
into account the degree to which the measurement is free of random error.
Reliability can be assessed by posing the following questions (Easterby-Smith et
al., 2002):
(a)  Will the measures give the same results on other occasions?
(b)  Will similar observations be reached by other observers?
(c)  Is there transparency in how sense was made from the raw data?

Validity is concerned with truthfulness: a match between a construct, or the way the idea is packaged in a conceptual definition, and its measures. It reflects how well an idea about reality fits with actual reality, that is, the extent to
which the empirical measurement adequately reflects the real meaning of the
concept. In other words, it measures what it is supposed to reflect. Major threats
to validity include:
(a)  History - If certain events or factors that have an impact on the relationships occur unexpectedly while the study is being conducted, and this history of events confounds the cause-effect relationship between the variables, then the validity of the results may be affected.
(b)  Maturation Effects - The passage of time can influence the cause-and-effect relationship among the variables and cannot be controlled. Maturation effects are a function of processes operating within the respondents as a result of the passage of time. Examples of maturation processes include growing older, getting tired, getting bored and feeling hungry.

(c)  Testing Effects - A pre-test given to the subjects in order to improve the instruments used may actually have effects on the actual test or post-test; the very fact that the respondents were exposed to the pre-test might influence their responses.
(d)  Instrumentation Effects - The effects on validity may occur because of changes in the measuring instrument between the pre-test and the post-test.

Example 5.8
The relationship between reliability and validity is shown using this example: You use a bathroom scale to measure your weight. If the scale measures your weight correctly, then the scale as a measuring tool is both reliable and valid. If the scale has been tampered with and consistently reads 6 kg over your true weight every time it is used, it is reliable but not valid. If the scale gives an erratic weight reading from time to time, it is neither reliable nor valid.
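The bathroom-scale example can be mimicked numerically. The Python sketch below is an added illustration with made-up numbers: a constant +6 kg bias gives readings that are consistent (reliable) but wrong (not valid), while large random error gives readings that are neither.

# Sketch: reliable-but-not-valid versus neither, mimicking Example 5.8.
import random
import statistics

random.seed(0)
true_weight = 60.0      # hypothetical true weight in kg

biased = [true_weight + 6 for _ in range(5)]                      # systematic error only
erratic = [true_weight + random.uniform(-8, 8) for _ in range(5)] # large random error

print("Biased scale: ", biased)                      # consistent, but 6 kg off every time
print("Erratic scale:", [round(r, 1) for r in erratic])
print("Spread (biased): ", statistics.stdev(biased))             # 0.0 -> perfectly consistent
print("Spread (erratic):", round(statistics.stdev(erratic), 1))  # large -> inconsistent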
Practicality is concerned with the operational requirements of the measurement process. The criterion of practicality involves the aspects of economy, convenience and interpretability. To achieve a high degree of reliability and validity, one may require high expenditure that may be beyond the budget for research; thus, there has to be some form of trade-off between the ideal measures
and the budget. Data collection techniques are always dictated by budget
constraints and other economic factors.
The measuring device should also be easy to administer; the design of the
instruments used should allow easy comprehension and have complete and clear
instructions. If the instrument is to be administered by people other than the
designer, then it must also be easy to interpret.

5.7  SOURCES OF MEASUREMENT ERRORS

In an ideal situation, a study design should be able to control the precision and
ambiguity of the measurement. However, an ideal situation is impossible.
Therefore, the next best thing to do is to go for the reduction of errors. The
researcher should be aware of the sources of potential errors such as systematic
and random errors.
(a)  Respondent as an Error Source


These are errors resulting from the differences in the responses due to the nature of the individual respondents. Some responses related to the characteristics of the respondents may be anticipated and may be quite stable, but other effects of the characteristics may be less obvious. An
individual who has had a traumatic experience may have a different
outlook of a certain situation. The respondent may be reluctant to state his
views or feelings, or may not have much knowledge of the situation and he
may be giving guesses as his response. Other factors that may affect the
respondents like fatigue, boredom, anxiety, impatience and variations in
moods may also affect the responses.
(b)  Situational Factors
Any condition that may place strains on the interview can have serious
effects on the interview-respondent rapport. If the interview is carried out
in the company of other people, friends, relatives and children, the
responses can be distorted by others joining in, distractions or by others just
merely being there. Some may feel they are being intruded upon and thus
may not willingly give their responses.

(c)  Measurer as an Error Source
If the interviewer or enumerator changes the wordings, paraphrases or
reorders the sequences of the questions, these could lead to errors. The first
impression of the interviewer to the respondent can introduce bias. The
voice tone of the interviewer can encourage or discourage certain replies.
The failure of the recorder to record the full responses may affect the
findings. When data is not entered correctly for analysis or a faulty
statistical analysis is used by the researcher, it may introduce further bias.

(d)  Instrument as an Error Source
If the instrument used is defective, two major sources of distortions can
occur. The first is the use of complex wordings, syntax and jargon beyond the comprehension of the respondents, which can lead to confusion and
ambiguity. Questions that violate the criteria of a good survey design will
cause respondents to give biased answers. Leading questions, ambiguous
meanings and multiple questions are some examples of sources of errors in
the instruments. The second source of errors related to the instrument is the
incomplete inclusion of the content items. It is impossible to include all
potentially important issues related to a problem. Although the instruments
may take into account the majority of the issues, there could be some that
are left out on purpose.


SELF-CHECK 5.4
What are the four major sources of measurement errors? Give an
example of how each source can affect the measurement results in a
face-to-face interview.

SELF-CHECK 5.5
Tick True or False for each statement below.
No.   Question                                                                                    True   False
1.    Time dimension is a basis for classifying research design.
2.    In the most literal sense, what are measured are the indicators.
3.    An interval scale is defined as one that has both order and distance but no unique origin.
4.    If a measure is reliable, it must be valid.
5.    A nominal measure can only have two categories.
6.    Classifying someone as employed or unemployed treats employment as a nominal variable.

Choose the correct answer:

1.  Which of the following is an incorrect classification of scale?
    (a)  Attitude measured on an interval scale
    (b)  Weight measured in ratio scale
    (c)  Gender measured using ordinal scale
    (d)  Position in an examination using ordinal scale

2.  Measurement should meet the criteria of practicality, which is typically defined as:
    (a)  Economy, accuracy and interpretability
    (b)  Convenience, economy and interpretability
    (c)  Economy, consistency and interpretability
    (d)  Convenience, economy and consistency

3.  A researcher must decide in the process of operationalisation:
    (a)  What to measure
    (b)  What level of measurement to use
    (c)  How to measure
    (d)  All of the above

In scientific research, the measurements used must be precise and controlled.


In the process of measurement, what is actually done is measuring the
properties of the objects rather than the objects themselves.
To be exact, what are measured are the indicants of the properties.
Measurements usually use some type of scale to classify or quantify the data
collected.
Four types of scales are used in increasing order of power: nominal, ordinal,
interval and ratio.
The nominal scale highlights the differences by classifying objects or persons
into groups, and provides the least amount of information on the variable.
The ordinal scale provides some additional information by rank-ordering the
categories of the nominal scale. The interval scale also provides users with
information on the magnitude of the differences in the variable.
Copyright Open University Malaysia (OUM)

76

TOPIC 5

MEASUREMENT AND SCALES

The ratio scale indicates the magnitude and proportion of the differences.
The data becomes more precise when we move from the nominal to the ratio
scale and allow the use of more powerful statistical tests.
Sound measurement must meet the criteria of validity, reliability and
practicality.
Validity reveals the degree to which an instrument measures what it is
supposed to measure.
A measure is reliable if it provides consistent results each time it is used.
Reliability is a partial contributor to validity but a measurement tool may be
reliable without being valid.
A measure meets the criteria of practicality if it is economical, convenient and
interpretable.

Conceptualisation               Operational definition
Dichotomous scale               Operationalisation
Interval measure                Ranking scale
Measurement                     Ratio measure
Nominal definition              Staple scale
Nominal measure                 Variables
Ordinal measure

Topic 6    Survey Method and Secondary Data
LEARNING OUTCOMES
By the end of this topic, you should be able to:
1.  Discuss the importance of surveys to collect data;
2.  Explore the types of personal interviews, telephone interviews and self-administered surveys;
3.  Appraise the advantages and disadvantages of the different survey methods;
4.  Discuss the types and uses of secondary data;
5.  Assess the advantages and disadvantages of secondary data; and
6.  Explore the sources of secondary data.

INTRODUCTION
The type and amount of data collected depends on the nature of the study
together with its research objectives. If the study is exploratory, the researcher is
likely to collect narrative data through the use of focus groups, personal
interviews or observation of behaviour or events. These types of data are known
as qualitative.
Qualitative approaches to data collection are typically used at the exploratory
stage of the research process. Their role is to identify and/or refine research
problems that may help to formulate and test conceptual frameworks. Such
studies normally involve the use of smaller samples or case studies.

If the study is descriptive or causal in nature, the researcher requires a relatively large amount of quantitative data obtained through large-scale surveys or
existing electronic databases. Quantitative data typically are obtained through
the use of various numeric scales. Quantitative data collection approaches are
typically used when the researcher is using well-defined theoretical models and
research problems. Validation of the concepts and models usually involves the
use of quantitative data obtained from large-scale questionnaire surveys.

6.1  SURVEY RESEARCH

Survey research is a common tool for applied research. Surveys can provide
a quick, inexpensive and accurate means to obtain information for a variety
of objectives. The typical survey is a descriptive research study that has the
objective of measuring awareness, knowledge, behaviour, opinions and the like.
Surveys can also be used to collect data for explanatory or analytical research to
enable researchers to examine and explain relationships between variables; in
particular cause and effect relationships. The term sample survey is often used
because a survey is expected to obtain a representative sample of the target
population.
Surveys are popular because they allow the collection of a large amount of data
from a sizeable population in a highly economical way. This data is standardised
and often obtained by using a questionnaire to allow for easy comparison. In
addition, the survey strategy is perceived as authoritative by people in general.
Every day, a news bulletin or a newspaper article reports the results of a new
survey indicating a certain percentage of the population that thinks or behaves in
a particular way. The reliability and validity of the findings in a survey depend on the quality of the instrument used.
Among the popular instruments in survey research are the questionnaire and the observation inventory.
Methods of collecting survey data fall into two broad categories: self-completion
and interviewer-administered.
Self-completion methods include mail and electronic surveys. Interviewer-administered methods involve direct contact with the respondents through
personal interviews, including face-to-face, telephone and computer dialogue.

Personal interviews, whether structured or unstructured, are typically used to obtain detailed qualitative information from a relatively small number of
individuals. The approach sometimes is referred to as an in-depth survey.
On the other hand, questionnaires are used to collect quantitative data from a
large number of individuals in a quick and convenient manner. In this topic, the
focus will be on the survey technique used for data collection.

SELF-CHECK 6.1
Explain the difference between a questionnaire and an observation inventory.
Explain the use of these instruments by providing appropriate
examples.

6.2  PERSONAL INTERVIEW

Interviewer-administered questionnaires are completed either face to face, over the telephone or via computer dialogue. Face-to-face and telephone interviews
are the most prevalent but computer dialogue is the fastest growing mode of
communication. Computer dialogue approaches use digital technology and can
obtain information easily from large groups of individuals.
An interview is where the researcher speaks to the respondent directly, asks
questions and records answers. Interviews are particularly helpful in gathering
data when dealing with complex and/or sensitive issues, and when open-ended
questions are used to collect data. For example, face to face interviews also enable
the researcher to obtain feedback and to use visual aids. Respondents might be shown a new corporate logo, a new corporate mission statement, building designs, or automobile styles and colours, and asked to comment. Finally, interviews are flexible as they can be conducted at work, at home, in malls and so on.
Researchers can increase participation rates by explaining the project and its
value to the respondents.

6.2.1  Types of Personal Interviews

There are four types of personal interviews:


(a)  Structured Interview
In a structured interview, the interviewer uses an interview sequence with
predetermined questions. For each interview, the interviewer is required to
use the same interview sequence and to conduct the interview in the same
way to avoid biases that may result from inconsistent interviewing practices. Additionally, a standardised approach will ensure responses are
comparable between interviews. Each respondent is provided with an
identical opportunity to respond to the questions. The interviewer may
collect the responses in the form of notes or may tape record the interview.
Taping should only be done with the permission of the interviewee. If the
interview is not recorded on tape, it is a good practice to provide the
interviewee with a copy of the interviewer's notes after they have
completed the session as this will help ensure the interview is captured
accurately.
(b)  Semi-structured Interview
In this approach, the researcher is free to exercise his or her own initiative
to follow up with the interviewee for his or her responses. The interviewer
may want to ask related, unanticipated questions that were not originally
included in the interview. This approach may result in discovers of
unexpected and insightful information, thus it may enhance the findings.
The semi-structured interview has an overall structure and direction but
allows more flexibility to include unstructured questioning. Perhaps the
best-known semi-structured interview approach is the focus group. Focus
groups are semi-structured interviews that use an exploratory research
approach and are considered as part of qualitative research. Focus groups
are structured within a list of topics and/or questions prepared by a moderator. However, they can be unstructured if the moderator allows
participants to answer questions in their own words and encourages them
to elaborate on their responses.

(c)  Unstructured Interview
An unstructured interview is conducted without an interview sequence.
This allows the researcher to elicit information by engaging the interviewee
in an open discussion on the topic of interest. A particular advantage of this
approach is that the researcher has the opportunity to explore in-depth
issues raised during the interview.
Unstructured interviews are used when a research is directed towards an
area that is relatively unexplored. By obtaining a deeper understanding of
the critical issues involved, the researcher is in a better position to not only
better define the research problem but also to develop a conceptual
framework for the research. This will then form the basis for subsequent
empirical research to test the ideas, concepts and hypotheses that emerge.

(d)  In-depth Interview
An in-depth interview is an unstructured one-to-one discussion session
between a trained interviewer and a respondent. Respondents are usually
chosen carefully because they have some specialised insight. For example, a
researcher exploring employee turnover might conduct an in-depth
interview with someone who has worked for five different restaurants in
two years. Like a focus group, the interviewer first prepares an outline that
guides the interview (this is the structured part of an in-depth interview).
The responses are usually unstructured. Indeed, an in-depth interview
allows deeper probing than a focus group. The researcher probes into a
response to identify possibly hidden reasons for a particular behaviour. In-depth interviews can be very useful in clarifying concepts. Administering
in-depth interviews is similar to coordinating a focus group.

SELF-CHECK 6.2
You have been asked by the management to carry out a study on
sexual harassment at the workplace after the female employees
expressed their concerns on the matter. Which method would you
choose to collect data?

6.2.2  Advantages of Personal Interviews

There are several advantages of using personal interviews in business research. To help researchers obtain complete and precise information, several characteristics of the process are elaborated below:
(a)  The Opportunity for Feedback


Personal interviews provide the opportunity for feedback from the
respondents. For example, a head of household who is reluctant to provide
sensitive information about his family can be assured that his answers will
be strictly confidential. The interviewer may also provide feedback to
clarify any questions which a respondent has about the interview. After the
interview is terminated, circumstances may dictate that the respondent be
given additional information concerning the purpose of the study. This can
be easily accomplished with the personal interview.

(b)  Probing Complex Answers


An important characteristic of personal interviews is the opportunity to
follow up. If a respondent's answer is unclear, the researcher may probe for a more comprehensive explanation. Asking, "Can you tell me more about what you had in mind?" is an example of a probing question. Although
interviewers are expected to ask questions exactly as they appear on the
questionnaire, probing allows the interviewer some flexibility. Depending
on the research purpose, personal interviews vary in the questions
structured and in the amount of probing allowed. Personal interviews are
especially useful for obtaining unstructured information. Questions that are
difficult to ask during telephone or mail surveys can be addressed by
skilful interviewers in the personal interviews.
Example 6.1
Probing questions can be used to explore responses that are of
significance to the research topic. They can be worded like open
questions but can also require a particular focus or direction. Examples
of this type of question include:
How would you evaluate the success of this new marketing
strategy?
Why did you choose a compulsory method to make redundancies?

What external factors caused the corporate strategy to change?


These questions can begin with, for example, "That's interesting..." or "Tell me more about...". Probing questions can also be used to seek an explanation when you do not understand the interviewee's meaning or the response given does not reveal the reasoning involved. Examples of this type of question include:
What do you mean by "bumping" as a means to help to secure volunteers for redundancy?
What is the relationship between the new statutory requirements that you referred to and the organisation's decision to set up its corporate affairs department?
(c)  Length of Interview
If the research objective requires a lengthy questionnaire, personal
interviews may be the only alternative. Generally, telephone interviews last
fewer than 10 minutes, whereas a personal interview can be much longer,
perhaps an hour and a half. A rule of thumb for mail surveys is that they should not exceed six pages.
(d)  Complete Questionnaires
Social interaction between a well-trained interviewer and a respondent in a
personal interview increases the likelihood that a response will be given to
all items on the questionnaire. The respondent who is bored with a
telephone interview may terminate the interview at his or her discretion by
hanging up the phone. A respondent's self-administration of a mail questionnaire requires more effort. Rather than writing a long explanation, the respondent may fail to complete some of the questions on the self-administered questionnaire. Failure to provide the answer to a question is
less likely to occur with an experienced interviewer and face-to-face
interaction.

(e)  Props and Visual Aids


Interviewing a respondent face to face allows the investigator to show the
respondent a resume, a new product sample, a sketch of a proposed office
or plant layout or some other visual aid. In a telephone interview, the use of
visual aids is not possible.

(f)  High Participation
While some people are reluctant to participate in a survey, the presence of
an interviewer generally increases the percentage of people willing to
participate in the interview. Respondents are generally not required to do
any reading or writing. All they have to do is talk. Most people enjoy
sharing information and insights with friendly and sympathetic
interviewers. Personal interviews can be conducted at the respondent's home or office or other places. The locale for the interview generally
influences the participation rate. Interestingly, personal interviews are
being conducted in shopping malls even though research has shown that
the refusal rate is highest when respondents are shopping in a mall.

6.2.3  Disadvantages of Personal Interviews

There are numerous advantages to personal interviews but there are some
disadvantages as well. Respondents are not anonymous and therefore are
reluctant to provide confidential information to another person. There is some
evidence that the demographic characteristics of the interviewer influence respondents' answers. For example, one research study revealed that male interviewers produced a larger variance than females in a survey where 85 percent of the respondents were female. Older interviewers interviewing older respondents produced more variance than other age combinations, whereas younger interviewers interviewing younger respondents produced the least.
Different interview techniques may be a source of interviewer bias. The
rephrasing of a question, the interviewer's tone of voice and the interviewer's appearance may influence the respondent's answer. Consider the interviewer who has conducted 100 personal interviews. During the next one, the interviewer may selectively perceive the respondent's answer so that the interpretation of the
response can be somewhat different from the intended response.
Our image of the person who does business research is a typical dedicated
scientist. Unfortunately, interviewers who are hired as researchers do not
necessarily conjure the perceived image. Sometimes, interviewers may cut
corners to save time and energy. They may fake parts of their reporting by
dummying up part of or the entire questionnaire. Control over interviewers is
important to ensure that difficult and time-consuming questions are handled
properly.
(a)  Cost
Personal interviews are generally more expensive than mail and telephone
interviews. The geographical proximity of respondents, the length and
complexity of the questionnaire, and the number of non-respondents can
affect the cost of the personal interview.

(b)  Anonymity of Respondent
A respondent is not anonymous and may be reluctant to provide
confidential information to another person. Researchers often spend
considerable time and effort to phrase sensitive questions so that social
desirability bias will not occur. For example, the interviewer might show a
respondent a card that lists possible answers and ask him or her to read a
category number rather than verbalise sensitive answers.

(c)  Callbacks
When a person selected to be in the sample cannot be contacted on the first
visit, a systematic procedure is normally initiated to call him or her back at
another time. Callbacks are the major means of reducing non-response
error. The cost of an interviewer calling back on a sampling unit is more
expensive (per interview) because subjects who were initially not at home
are generally more dispersed geographically than the original sampling
units.

Callbacks are important because individuals who are away from home at the point of call (working women) may vary from those who are at home (non-working women, retired people, etc.).

ACTIVITY 6.1
1.  Is the personal interview the best survey method to obtain information? Why?
2.  Consider this question posed to a top executive in a firm: "Do you see any major instabilities or threats to the achievement of your department's objectives?" Would a personal interview lead to a biased answer?

6.3  TELEPHONE INTERVIEW

Telephone interviewing has become the primary method of commercial survey research in the last two decades. The quality of data obtained by telephone is comparable to that of personal interviews. Using the telephone may encourage respondents to willingly provide detailed and reliable information on a variety of personal topics. Telephone surveys can provide representative samples of the general population in most industrialised countries.

6.3.1  Types of Telephone Interviews

There are two types of telephone interviews:


(a)  Central Location Interviewing


Research agencies and interviewing services typically conduct all telephone
interviews from a central location. Wide-Area Telecommunications Service
(WATS) lines are purchased from a long-distance telephone service at fixed
charges so that unlimited telephone calls can be made throughout the entire
country or within a specific geographic area. Such central location
interviewing allows firms to hire a staff of professional interviewers and to
supervise and control the quality of interviewing more effectively. When
telephone interviews are centralised and computerised, the research
becomes even more cost-effective.

(b)  Computer-Assisted Telephone Interviewing


Advances in computer technology allow telephone interviews to be directly
entered into a computer using an online computer-assisted telephone
interviewing (CATI) process. Telephone interviewers are seated at a
computer terminal. A monitor displays the questionnaire, one question at a
time, along with pre-coded possible responses to each question. The
interviewer reads each question as it is shown on the screen. When the
respondent answers, the interviewer enters the response into the computer
and it is automatically stored in the computer's memory when the
computer displays the next question on the screen. A computer-assisted
telephone interview requires that answers to questionnaires be highly
structured. For instance, if a respondent gives an answer that is not
acceptable (not pre-coded and programmed), the computer will reject the
answer. Computer-assisted telephone interviewing systems include
telephone management systems that handle telephone number selection,
perform automatic dialling and provide other labour-saving functions.
One such system automatically controls sample selection, randomly generating names or fulfilling a sample quota. Another call management
feature is automatic callback scheduling. The computer is programmed to
time re-contact attempts (recall no-answers after two hours, recall busy
numbers after ten minutes) and allow the interviewer to enter a time slot (a
later day or another hour) when a busy respondent indicates that he can be
interviewed. Still, another feature supplies daily status reports on the
number of completed interviews relative to quotas (Zikmund, 2000).
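The rejection of answers that are not pre-coded, as described above, amounts to a simple validation step. The toy Python sketch below is only an added illustration of that idea and does not represent any actual CATI product; the question and codes are made up.

# Toy sketch of CATI-style validation: only pre-coded responses are accepted.
question = "Do you support the proposed agreement? (1 = Yes, 2 = No, 3 = Undecided)"
valid_codes = {"1", "2", "3"}

def record_answer(keyed_in):
    """Return the code if it is pre-coded; otherwise reject it, as a CATI screen would."""
    if keyed_in in valid_codes:
        return keyed_in
    print("Rejected: not one of the pre-coded responses; please re-enter.")
    return None

print(question)
print(record_answer("2"))       # accepted and stored
print(record_answer("maybe"))   # rejected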

6.3.2  Advantages and Disadvantages of Telephone Interviews

The advantages and disadvantages of telephone interviews when compared to personal interviews are viewed from the following aspects:
(a)  Speed
In telephone interviewing, the speed of data collection is a major
advantage. For example, union officials who wish to conduct a survey on
members attitudes towards a strike may conduct a telephone survey
during the last few days of the bargaining process. Rather than taking
several weeks for data collection by mail or personal interviews, hundreds
of telephone interviews can be conducted overnight. When the interviewer
enters the respondent's answers directly into a computerised system, the rate
of data processing escalates.


(b)  Cost
As the cost of personal interviews continues to increase, telephone
interviews are becoming relatively inexpensive. Telephone interviews cost
approximately 40 percent less than the cost of personal interviews. Costs
are further reduced when travelling costs are eliminated and the
interviews are centralised and computerised.

(c)  Absence of Face-to-face Contact


Telephone interviews are more impersonal than face-to-face interviews.
Respondents are more willing to answer sensitive questions in a telephone
interview rather than in a personal interview. There is some evidence that
respondents are unlikely to share their income and other financial
information even with telephone interviews due to security reasons. High
refusal rates for this type of data occur in all forms of survey research.
Although telephone calls can be less threatening, the absence of face-to-face
contact can be harmful as well. The respondent cannot detect if the interviewer has completed the interview. If the respondent pauses to think,
the interviewer may skip writing down the complete response and move on
to another question. Hence, there is a greater tendency for incomplete
answers in telephone interviews than in personal interviews.

(d)  Cooperation
In some neighbourhoods, people are reluctant to allow a stranger to come
even to the doorstep. The same individual, however, may be willing to
cooperate in a telephone survey. Likewise, interviewers can be reluctant to
conduct face-to-face interviews in certain neighbourhoods, especially
during evening hours. Some individuals will refuse to participate and the
researcher should be aware of potential non-response bias. The likelihood
of an unanswered call and not-at-home respondent varies by the time of
day, the day of the week and the month of the year.

(e)  Callbacks
Situations like an unanswered call, a busy signal or a respondent who is not
at home require a callback. Telephone callbacks are less expensive than
personal interview callbacks. Houses with telephone answering machines
are more common nowadays. Although their effect has not been studied
extensively, it is clear that many individuals will not return a call to help
someone conducting a survey. Some researchers argue that leaving a
proper message on an answering machine will produce return calls. The
message left on the machine should explicitly state that the purpose of the
call is not sales-related. Others believe no message should be left on the
machine because respondents can be reached eventually if the researcher
calls back. Many people do not allow their answering machines to record
100 percent of their calls. If enough callbacks are made at different times, many respondents can be reached through the telephone.
(f)  Representative Samples
When the study group consists of the general population, researchers may
face difficulties in obtaining a representative sample based on listings in the
telephone directory. In most developed countries, the majority of the
households have telephone connections. The poor and those in rural areas
may be a minor segment of the market but unlisted phone numbers and
new numbers not printed in the directory are a greater problem. Unlisted
numbers fall into two groups: those unlisted because of mobility and those
unlisted by choice.

(g)  Lack of Visual Medium


Since visual aids cannot be utilised in telephone interviews, a research that
requires visual material cannot be conducted by phone. Certain attitude
scales and measuring instruments, such as the semantic differential, cannot
be used easily because a graphic scale is needed.

(h)  Limited Duration
One major disadvantage of the telephone interview is that the length of the interview is limited. Respondents who feel they have spent too much time in the interview will simply hang up. Refusal to cooperate with interviews is directly related to interview length. A major study on survey research found that for interviews of 5 minutes or less, the refusal rate was 21 percent. For interviews of 6 to 12 minutes, the refusal rate was 41 percent. For interviews of 13 minutes or more, the refusal rate was more than 47 percent. A thirty-minute interview is the maximum most respondents will tolerate, unless they are interested in the survey subject. (In unusual cases, a few interested respondents may put up with longer interviews.) A good rule of thumb is to plan telephone interviews to be about ten minutes long (Struebbe, 1986).

ACTIVITY 6.2
Do you think that the interviewers can get accurate information from
telephone interviews? What if the respondents give biased answers?
How can the interviewers be certain?


SELF-CHECK 6.3
What are the major advantages and disadvantages of the telephone
interview method?

6.4  SELF-ADMINISTERED SURVEY

A self-administered questionnaire such as a mail questionnaire is filled in by the respondent rather than an interviewer. Business researchers distribute
questionnaires to respondents in many ways. They insert questionnaires in
packages and magazines. They may distribute questionnaires at points of
purchase or in high-traffic locations.
Questionnaires can also be distributed via fax machines. These fax surveys
eliminate the senders printing and postage costs and are delivered and/or
returned faster than traditional mail surveys. Of course, most households do not
have fax machines. However, when the sample consists of organisations that are
likely to have fax machines, the sample coverage may be adequate.
Questionnaires are usually printed on paper but they can be programmed into
computers and distributed via e-mail or on the Internet. No matter how a self-administered questionnaire is distributed to the members of the sample, it is
different from interviews because the respondent takes responsibility for reading
and answering the questions.
Self-administered questionnaires present a challenge to the business researcher
because they rely on the efficiency of the written word rather than the
interviewer. The nature of self-administered questionnaires is best illustrated by
the mail questionnaires.

ACTIVITY 6.3
Suggest a good approach to attract a respondent to do a self-administered survey. Why do you think it is good?

6.4.1  Types of Self-administered Surveys

Below are the types of self-administered surveys:


(a)  Mail Survey
A mail survey is a self-administered questionnaire sent to respondents
through the mail. This method presents several advantages and
disadvantages.
(i)  Geographical Flexibility
Mail questionnaires can reach a geographically dispersed sample at
the same time and incur a relatively low cost because interviewers are
not required. Respondents in isolated areas (like farmers) or those
who are otherwise difficult to reach (like executives) can be easily
contacted by mail.
Self-administered survey questionnaires can be widely distributed to
a large number of employees, allowing the diagnosis of organisational problems to be accomplished quickly at a low cost. Questionnaires can be administered during group meetings. An hour-long period
may be scheduled during the working day so that employees can
complete a self-administered questionnaire. These meetings generally
allow the researcher to provide basic instructions to a large group
(generally fewer than 50 people) and to minimise data collection time.
They also give the researcher the opportunity to debrief subjects
without spending a great deal of time and effort.

(ii)  Cost and Time


Mail questionnaires are relatively low in cost compared to personal
interviews and telephone surveys. However, mail surveys are not
cheap. Most include a follow-up mailing, which requires additional
postage and printing of questionnaires. Questionnaires of poor quality
have a greater likelihood of being thrown in the wastebasket than more expensive, higher-quality questionnaires.
If research results are needed in a short time frame or if attitudes are
rapidly changing (towards a political event), mail surveys may not be
the best communication medium. Usually, a time frame of about two
to three weeks is given in order to receive the majority of the
responses. Follow-up mailings, which are usually sent when returns
begin to trickle in, require an additional two or three weeks. The time
between the first mailing and the cut-off date (when questionnaires
will no longer be accepted) is usually six to eight weeks.

    (iii) Respondent Convenience
        When respondents receive self-administered questionnaires, they can fill in the questionnaires whenever they have the time. Thus, there is a better chance that respondents will take time to think about their replies. In some situations, particularly in organisational research, mail questionnaires allow respondents to collect facts (such as records of absenteeism) that they may not recall accurately. Checking information by verifying records (e.g. consulting with family members) should provide more valid, factual information than either personal or telephone interviews. Furthermore, in the absence of the interviewer, the respondent may be more inclined to reveal sensitive information.
        On the other hand, the respondent will not have the opportunity to ask the interviewer questions. Problems or misunderstandings will remain unresolved in a mail survey. Unlike in a face-to-face interview, probing cannot be done to get additional information or clarification of an answer.
    (iv) Length of Mail Questionnaire and Response Rates
        Mail questionnaires vary considerably in length, ranging from short questionnaires on postcards to multi-page booklets requiring respondents to fill in thousands of answers. As previously mentioned, a general rule of thumb for a mail questionnaire is that it should not exceed six pages. When a questionnaire requires a respondent to put in more effort, an incentive should be given to motivate the respondent to return the questionnaire.
        A poorly designed survey will probably achieve a response rate of only about 15 percent. A major limitation of mail questionnaires relates to response problems. Respondents who answer the questionnaire may not represent all people in the sample. Individuals with a special interest in the topic are more likely to respond to a mail survey. A researcher has no assurance that the intended subject will fill out the questionnaire. When corporate executives, physicians and other professionals are the respondents, problems may arise if the wrong person answers the questions (a subordinate may be given the mail questionnaire to complete).
(b) E-mail Surveys
    Questionnaires are now being distributed electronically via electronic mail (e-mail). E-mail is a relatively new method of communication and many individuals still have no access to it. Nevertheless, certain circumstances allow for e-mail surveys, such as internal employee surveys or surveys of retail buyers who regularly deal with the organisation via e-mail. The benefits of this method include cheaper distribution and processing fees, faster turnaround time, more flexibility and less paper chasing.
(c) Internet Surveys
    A typical Internet survey appears when a computer user intentionally navigates to a particular website. Questions are displayed on the website. The respondent typically provides an answer by highlighting an answer or by clicking an icon. In some instances, the visitor cannot venture beyond the survey page without providing information for the organisation's registration questionnaire. When cooperation is voluntary, response rates are low and participants tend to be more deeply involved with the subject of the research than the average person.

SELF-CHECK 6.4
1. A self-administered survey can be done either by postal delivery or personal interview methods. In what circumstances would you choose to use the latter?
2. What can a researcher do to ensure that the questionnaire used will reduce errors in the data collected?

6.5 TYPES AND USES OF SECONDARY DATA

Secondary data include both raw data and published summaries, and can include both quantitative and qualitative data. They are used in descriptive and explanatory research. Secondary data may be raw data, where there has been little or no processing at all, or compiled data that has been processed, selected and summarised. Secondary data is commonly used in business and management case studies.

6.5.1 Documentary Secondary Data

This type of secondary data is used with primary data collection methods or with other secondary data, and is often used in historical research. If the researcher uses secondary data exclusively, it is called archival research. However, historical research may also use recent data as well as historical data.


Documentary data may include written and non-written documents. Written documents include notices, correspondence, minutes of meetings, reports to shareholders, diaries, transcripts of speeches and administrative and public records. Other examples include books, journal and magazine articles and newspapers. Written documents can be used as a storage medium and to provide qualitative data. They can also be used to generate statistical measures such as profitability from company records.
Non-written documents may include tape and video recordings, pictures,
drawings, film and television programmes. Recent forms of non-written
documents are digital versatile disks (DVD) and CD-ROMs. Data from these
sources can be analysed both quantitatively and qualitatively.

6.5.2 Types of Documents

Various types of documents can be used as sources of data:


(a) Personal Documents
    Personal documents such as diaries, letters, notebooks and personal files in computers can be used as a primary source of data. Personal documents can be used to trace history or events that happened in the past as well as the opinions and feelings of individuals.

(b) Public Documents
    Public documents are good sources of data. A great deal of information can be obtained from public documents such as government reports, economic growth statistics for a sector, official statistics on manufacturing growth, investor records and so on. Public documents not only provide a large amount of quantitative data but are also a potential source of much textual material.

(c) Internal Documents
    Internal documents are available from most organisations. Some internal documents are widely available to the public, such as a company's annual reports, press releases, catalogues and product brochures, advertisements and other information on the company website. Other internal documents that are not publicly available are minutes of meetings, newsletters, company notice boards, memos, letters, working procedures, technical drawings, production and maintenance schedules, quality reports and inventory records. Internal documents can be used to describe and analyse a company's performance, strengths and weaknesses. However, the most difficult part of data collection is getting access to the company.

(d) Mass Media
    Mass media sources of data include newspapers, magazines, radio broadcasts, television programmes, films and banners. When using mass media as a source of data, credibility and authenticity are frequently issues of debate, mainly because supporting evidence is often lacking. Articles written in newspapers or magazines may be unclear, biased or without proper justification. Researchers have to make sure that they always follow the proper scientific process in doing research.

(e) Internet
    The Internet is a rapidly growing source of information. More people are getting access to the Internet today and are using it as a quick reference. However, like the mass media, information from the Internet can be questionable. The authenticity and credibility of Internet sources is an issue, mainly because anyone can put up anything on the Internet.

6.5.3 Survey-based Secondary Data

These are data collected by questionnaires that have already been analysed for their original purpose. The data may be compiled in the form of data tables or as a computer-readable matrix of raw data. Survey-based secondary data may be obtained from censuses, regular surveys or ad-hoc surveys.
The government usually carries out censuses, where participation is obligatory. The purpose is to collect data on the population to meet the needs of government departments and local departments. The data collection is usually well-defined, well-documented and of high quality. Individual researchers and organisations can access these data for their own research.
Regular surveys are those undertaken repeatedly over time or at regular intervals by various organisations. They may be used for comparative purposes, monitoring purposes or general purposes by public organisations, non-governmental organisations or private firms. The data may have gone through detailed analyses and the results of the surveys may be kept in many different forms. Data collected by certain private firms or organisations may not be accessible to individual researchers if the information produced from the surveys is sensitive in nature.
If secondary survey data is available in sufficient detail, it can be used to answer research questions and meet the objectives of the studies. In many cases, the data may need to be rechecked because results from some survey-based secondary data take at least a couple of years to be published.

Ad-hoc surveys are usually one-off surveys undertaken for specific purposes. Organisations, governments and independent researchers may carry out surveys on an ad-hoc basis. Because of the nature of ad-hoc surveys, obtaining the relevant data requires a substantial search. The data from ad-hoc surveys may be kept in aggregate form, thus the data may have to be reanalysed.

6.5.4 Multiple Source Secondary Data

This data is obtained from documents, surveys or from a combination of both. Data from various sets are combined to form another data set before the researcher uses it. The way the data is compiled in the multiple source set will dictate the kinds of objectives or research questions that can be established. For instance, time series data is compiled from surveys that are carried out over a period of time, thus the kind of study that can be done using the compiled data is the longitudinal study. Secondary data from different sources can also be combined from different geographical areas to form area-based data sets (Hakim, 2000).

6.5.5 Triangulation

Triangulation entails using multiple sources of data to study the same phenomenon. The concept is similar to the concept in physical science whereby multiple reference points are used to locate an object's exact position. The concept has been adopted in research whereby more than one data collection method is employed in order to increase confidence in the findings. Triangulation can be used in either quantitative or qualitative research. Furthermore, combining quantitative and qualitative methodologies in one research study is actually a way to triangulate the research findings.

ACTIVITY 6.4
1. What is secondary data?
2. What is the purpose of collecting secondary data?
3. Give three examples of different situations where secondary data might be used.

6.6 ADVANTAGES AND DISADVANTAGES OF USING SECONDARY DATA

There are a few advantages and disadvantages of using secondary data in a research study. Table 6.1 shows these advantages and disadvantages.
Table 6.1: Advantages and Disadvantages of Using Secondary Data

Advantages of secondary data:
(a) Fewer resource requirements: saves cost and time; less expensive.
(b) Unobtrusive: quickly obtained and of higher quality.
(c) Feasible longitudinal study: compiled and recorded data collected using comparable methods on regional and international bases.
(d) Comparative and contextual data: collected data can be compared with secondary data to determine the representativeness of the population.
(e) Unforeseen discoveries: may lead to unexpected new discoveries.
(f) Permanence of data: permanent and available data can be easily checked by other researchers and is open to public scrutiny.

Disadvantages of secondary data:
(a) Does not meet the purpose of the study: data collected may differ, or be inappropriate or irrelevant for the present study (outdated).
(b) Difficult or costly access: data mining for commercial purposes uses a lot of time and money.
(c) Unsuitable aggregations and definitions: aggregation and inappropriate definition of data cause difficulties in combining different data sets.
(d) No real control over data quality: data sets are not always of high quality; the predispositions, culture and ideals of the original collector influence the nature of the data.

Consider the following examples that indicate the weaknesses or disadvantages of using secondary data:

(a) A researcher interested in small farm tractors finds that the secondary data on the subject is broader, less pertinent in category and encompasses all agricultural tractors. Moreover, the data was collected five years ago.

(b) An investigator wishing to study those who make more than RM100,000 per year finds the top-end category in a secondary study reported as RM75,000 or more per year.

(c) A researcher who wants to compare the dividends of several industrial steel manufacturers finds that the units of measure differ due to stock splits.

(d) The Daily Gold Index reports the stock market indicator series. This secondary data source reflects the prices of 50 non-randomly selected blue chip stocks. Although this data is readily available and inexpensive, the source of information may not suit the needs of individuals concerned with the typical companies listed on the KLSE.

ACTIVITY 6.5
1. Come up with another two advantages and disadvantages of using secondary data.
2. Sometimes, secondary data is the only kind of data that can be used in research. Give some examples of management research questions for which secondary data sources are probably the only ones feasible.

6.7 SOURCES OF SECONDARY DATA

There are two sources of secondary data:


(a) Internal Sources
    There is an endless list of potential sources for secondary data. Internal sources may be a good start for the researcher to begin searching for secondary data. Internal sources refer to data previously collected by or for the organisation itself. The data are compiled in the form of previous primary data collection as well as routine record inventories. Other useful internal sources can be found in employee annual evaluation reports, salesperson itineraries, sales invoices, company financial reports and records, customer complaints, billing records, bank ledgers and previous strategic planning documents.

(b) External Sources
After the potential sources of internal secondary data are looked through,
the researcher must consider the external data sources. Countless volumes
of secondary data are available from both non-profit and profit
organisations. With advanced technologies for data searching, these sources
can be easily accessed and searched with an electronic search engine. The
key to a successful computer search is using useful key words in a search
engine. Most libraries have access to several search engines that can
identify potentially relevant research studies and/or data. Individuals and
private companies may also subscribe to an online database vendor for a
fee. Some provide access to print articles from trade periodicals, academic
journals and general business magazines. Others provide access primarily
to statistical data.
Secondary data is abundant online. All one needs is a good search engine
and a little imagination. Many libraries have access to many search engines
that charge a fee to use them. Table 6.2 shows a few examples.
Table 6.2: Examples of Online Sources of Secondary Data

Ministry of Agriculture - http://agrolink.moa.my
Department of Statistics, Malaysia - www.statistics.gov.my; www.census.govmain/www/stat_int.html
Bank Negara Malaysia - www.bnm.gov.my
Malaysia Industry, Investment, Trade and Productivity (MITI) - www.miti.gov.my
Summary of Annual Fisheries Statistics - agrolink.moa.my/dof/statdof.html
Tourism Malaysia - www.tourism.gov.my/statistics/statistics.asp
Department of Civil Aviation Malaysia - www.dca.gov.my/homeng.htm
Malaysian Key Economic Indicators - jpbpo@stats.gov.my, hadi@stats.gov.my
Websites of National Statistical Offices and other national bodies dealing with statistics (Department of Statistics, Ministry of International Trade) - www.planet-venture.de/seiten/stat.htm
Yale University: Economic Growth Centre Collection Malaysia - www.library.yale.edu/socsci/egcmalay.html
Economic statistics agencies listed by country, economic indicators, statistical agencies, foreign data and economic data; Colombia National Administrative Department of Statistics (DANE); Macau Census and Statistics Department; Malaysia - members.tripod.com/pugahome/diario.htm
Malaysian Economy Indicators and Statistics; Malaysia: Economic Plans ... Washington Post; Malaysia: Indicators and Statistics - www.dotmy.com/
Malaysia's most comprehensive centre for research resources; an excellent research starting point for serious researchers, students and academicians doing research on Malaysia; Research Department, Government Department, Matrade Search Malaysia Directory Malaysia, WWW Directory Altavista - irb11.tripod.com/irbhome
Central index of economics institutions (academic, governmental and non-profit) in Malaysia; (Malaysia Agricultural Bank) Economic Planning Unit; Inland Revenue Board; Jabatan Perangkaan (Department of Statistics); University Kebangsaan Malaysia - ideas.uqam.ca/EDIRC/malaysia.html

SELF-CHECK 6.5
What are the differences between internal sources and external
sources?

ACTIVITY 6.6
Suppose you are interested in a statistical overview of aquaculture (fish
farming) as part of an environmental analysis for a prospective
entrepreneurial business venture. How would you search for
information?


SELF-CHECK 6.6
Tick True or False for each statement below:

1. Secondary data are data collected for some other purposes and reanalysed for the present purpose.
2. A major disadvantage of secondary data is that the information often does not fit the researcher's needs.
3. To forecast sales by constructing models based on past sales figures is an example of the use of secondary data.
4. The use of secondary data as the sole source of information has the drawback of becoming obsolete.
5. Secondary data is only available from external sources of the organisation.
6. A disadvantage of secondary data over primary data is that the process of getting the data is usually more expensive.
7. Even if the definitions of variables being studied are not the same, research can be modified according to the secondary data available.

Interviews and self-administered questionnaires are used to collect survey data.
Interviews can be categorised based on the medium used to communicate
with respondents, such as door-to-door, mall intercept or telephone
interviews.
Traditionally, interviews have been printed on paper but survey researchers
are increasingly using computers.

Personal interviews give researchers a flexible survey method in which they can use visual aids and various kinds of props.
Door-to-door personal interviews get high response rates but they are also more costly to administer than the other forms of surveys.
The presence of an interviewer may influence subjects' responses.
When obtaining a sample that is representative of the entire country is not a primary consideration, mall intercept interviews may be conducted at lower cost.
Telephone interviewing has the advantages of speed in data collection and
lower cost per interview.
However, not all households have telephones and not all telephone numbers
are listed in directories; this causes problems in obtaining a representative
sampling frame.
Absence of face-to-face contact and inability to use visual materials are other
limitations of telephone interviewing.
The self-administered questionnaire has most frequently been delivered
by mail. However, these may be delivered personally, administered at a
central location, sent by e-mail or administered via computer.
Mail questionnaires are generally less expensive than telephone or personal
interviews; however, there is a much larger chance of low response with mail
questionnaires. Several methods can encourage a higher response rate.
Mail questionnaires must be more structured than other types of surveys and
cannot be changed if problems are discovered in the course of data collection.
Questionnaires are now distributed electronically via e-mail, fax machine and
by sending computer disks by mail.
Surveys are also conducted using the Internet and interactive kiosks.
These are relatively new methods of communication and many individuals
cannot be accessed by these media.
Pre-testing a questionnaire on a small sample of respondents is a useful way
to discover problems while they still can be corrected.

Secondary data is gathered and recorded prior to (and for purposes other
than) the current needs of the researcher. It is usually historical and already
assembled, and does not require access to respondents or subjects.
Primary data is data gathered for the specific purpose of the current research.
The main advantage of secondary data is that it is almost always less expensive than primary data.
Secondary data can generally be obtained rapidly and may include information that is not otherwise available to the researcher.
The main disadvantage of secondary data is that it is not designed specifically to meet the researcher's needs. Therefore, the researcher must examine secondary data for accuracy, bias and soundness.
One method for doing this is to crosscheck different sources of secondary
data.
One of the main sources of secondary data for business research is internal
proprietary sources such as accounting records.
External data is created, recorded, or generated by an entity other than the researcher's organisation. The government, newspapers and journals, trade associations and other organisations produce information.
Traditionally, this information has been distributed in a published form
either directly from producer to user or indirectly through intermediaries
such as through the public library.
Modern computerised data archives, the Internet and electronic data
interchange systems have changed the distribution channels for external data.
Due to the rapid changes in computer technology, they are now almost as
easily accessible as internal data. Hence, the distribution of multiple types of
related data by single-source suppliers has radically changed the nature of
research using secondary data.

Key Terms

Survey
Telephone Interview
Personal Interview
Mail Survey
Self-Administered Survey
Secondary Data
Structured Interview
Triangulation
Semi-structured Interview
Unstructured Interview
Questionnaire Survey

Topic 7  Experimental Research Designs

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Define what research design is;
2. Distinguish the ways in which good research designs differ from weak research designs;
3. Explain the differences between a true experimental design and a quasi-experimental design; and
4. Discuss the ethics of experimental research.

INTRODUCTION
According to Christensen, research design refers to the outline, plan or strategy
specifying the procedure to be used in seeking an answer to the research
question. It specifies such things as how to collect and analyse the data. The
design of an experiment will show how extraneous variables are controlled. The
design will determine the types of analysis that can be done to answer your
research questions and the conclusions that can be drawn from your research.
The extent to which your design is good or bad will depend on whether you are
able to get the answers to your research questions. If your design is faulty, the
results of the experiment will also be faulty. How do you go about getting a good
research design that will provide answers to the questions asked? It is not easy
and there is no fixed way of telling others how to do it. The best that can be done
is to examine different research designs and to point out their strengths and
weaknesses, and leave it to you to make the decision.


You should have an in-depth understanding of your research problem, such as the treatment you want to administer, the extraneous variables or factors you want to control and the strengths and weaknesses of the different alternative designs. You should be clear about your research question(s) and what it is that you intend to establish. You should avoid selecting a design and then trying to fit the research question to the design. It should be the other way round! Most important is to see if the design will enable you to answer the research question. You should be clear what factors you wish to control so that you can arrive at a convincing conclusion. Choose a design that will give you maximum control over variables or factors that explain the results obtained.

7.1 SYMBOLS USED IN EXPERIMENTAL RESEARCH DESIGNS

Research design can be thought of as the structure of research, i.e. it is the glue that holds all of the elements in a research project together. In experimental research, a few selected symbols are used to show the design of a study:

O = Observation or measurement (e.g. mathematics score, score on an attitude scale, weight of subjects, etc.)
O1, O2, O3, ..., On = More than one observation or measurement
R = Random assignment: subjects are randomly assigned to the various groups
X = Treatment, which may be a teaching method, counselling technique, reading strategy, frequency of questioning and so forth

7.2 WEAK DESIGNS

The following are considered weak experimental research designs.

7.2.1 One-shot Design

This is a simple design where the researcher makes a single observation without
any follow-up measure or comparison with another homogenous group. For
example, you want to determine whether praising primary school children
makes them perform better in arithmetic as in Figure 7.1. You measure arithmetic
achievement with a test. To test this idea, choose a class of Year 4 pupils and start
praising the pupils. You will find that their performance in mathematics is
significantly improved.

Figure 7.1: One-shot design

You conclude that praise increases the pupils' mathematics score. This design is
weak for the following reasons:
(a) Selection Bias: It is possible that the pupils you selected as subjects were already good in mathematics.
(b) History: The school had organised a motivation course on mathematics for Year 4 pupils, so it is possible that this might have influenced their performance.

7.2.2 One-group Pre-test and Post-test Design

To ensure that there is no pre-existing characteristic among the schoolchildren, a pre-test may be administered, as illustrated in Figure 7.2. If the children perform better in mathematics after they have been praised, compared to the pre-test, then you can attribute the improvement to the practice of praising.

Figure 7.2: One-group pre-test post-test design

This design is weak for the following reasons:

(a) Maturation: If the time frame between the pre-test and post-test is long, it is possible that the subjects may have matured because of developmental changes.
(b) Testing: Sometimes the period between the pre-test and the post-test is too short and there is a possibility that subjects can remember their pre-test session and give inaccurate responses.


ACTIVITY 7.1
Twenty pupils who had poor scores in arithmetic were taught arithmetic using the Zandox method. Three weeks later, when they were tested, their arithmetic scores improved. Thus, the Zandox method improves their arithmetic performance.
1. Which type of research design is this?
2. What are some problems with this design?

7.2.3 Non-equivalent Post-test Only Design

The main weakness of the previous two designs is the lack of a comparison group and the unclear link between the practice of praising and the increased mathematics score. In the non-equivalent post-test only design, an attempt is made to include a comparison group (i.e. a control group) that did not receive any praise, as in Figure 7.3. The dashed lines separating the experimental group and the control group indicate that the children were not randomly assigned to the two groups. Hence, the two groups are non-equivalent. Matching can be used but there is no assurance that the two groups can be equated. The only way one can have assurance that the two groups are equated is to assign the children randomly.

Figure 7.3: Non-equivalent post-test only design

This design is weak for the following reason:

Selection Bias: Since there was no random assignment, it cannot be established that the two groups are equivalent. So, any differences in the post-test cannot be attributed to the practice of giving praise but other factors should also be considered, such as ability, IQ, interest and so forth.


The three designs described are weak research designs because they do not allow extraneous factors that might influence the outcome of the experiment to be controlled within the research design. For example, if attitude towards mathematics and additional tuition classes in mathematics are not controlled, it may not be possible to conclude that praise (the treatment) affects mathematics performance (the dependent variable). Also, poor research designs do not attempt to randomly assign subjects to the groups. This introduces extraneous factors affecting the dependent measure. Random assignment controls for both known and unknown extraneous variables that might affect the results of the experiment.
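To make the idea of random assignment concrete, the short Python sketch below randomly splits a hypothetical list of pupils into an experimental group and a control group. The pupil names and group sizes are assumptions made for this example, not data from the module.

    import random

    # Hypothetical list of subjects (names are purely illustrative)
    pupils = ["Aina", "Badrul", "Chong", "Devi", "Emma", "Farid", "Gita", "Hafiz"]

    random.seed(42)          # seed fixed only so that the example is reproducible
    random.shuffle(pupils)   # randomise the order of the subjects

    midpoint = len(pupils) // 2
    experimental_group = pupils[:midpoint]   # receives the treatment (e.g. praise)
    control_group = pupils[midpoint:]        # does not receive the treatment

    print("Experimental group:", experimental_group)
    print("Control group:", control_group)

Because every pupil has the same chance of ending up in either group, differences due to ability, interest or other unknown factors tend to even out between the groups as the sample grows larger.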

SELF-CHECK 7.1
1. Identify the major differences between the one-shot design, one-group pre-test post-test design and non-equivalent post-test only design.
2. Why are these designs considered weak?

ACTIVITY 7.2
A teacher assigns one class of pupils to be the experimental group and another class as the control group. Both groups are given a science post-test. The pupils in the experimental group are taught by their peers, while pupils in the control group are taught by their teacher.
1. Which research design is the teacher using?
2. How will you challenge the findings of the experiment?

7.3 TRUE EXPERIMENTAL DESIGNS

What is a true experimental design? According to Christensen, to be a true experimental design, a research design must enable the researcher to maintain control over the situation in terms of assignment of subjects to groups, in terms of who gets the treatment condition, and in terms of the amount of treatment condition that subjects receive. In this topic, we will discuss two major types of true designs:

(a) after-only design; and
(b) before-after design (as in Figure 7.4).

What is the difference between the two designs? The after-only design relies only
on a post-test while the before-after design (as the name suggests) relies on both
a pre-test and a post-test.

Figure 7.4: Types of true experimental designs

7.3.1 After-only Research Design

The After-Only Research Design gets its name from the fact that the dependent
variable is measured only once after the experimental treatment. In other words,
the post-test is administered once to the experimental group and the control
group as provided in Figure 7.5.

Note: R = random assignment
Figure 7.5: After-only research design

Figure 7.5 shows an experiment in which the researcher is attempting to show the effectiveness of the inductive method in improving the science problem-solving skills of 17-year-old secondary school students. The sample was drawn from a population and randomly assigned to the experimental and control groups. Those in the experimental group were taught science using the inductive approach, while students in the control group were not taught using the inductive approach. Instead, students in this group were taught the same science content using the traditional didactic approach (chalk-and-talk method).
In the above example, the experimental and control groups consist of two
different sets of students. This procedure is called a between-subjects design (also
sometimes known as an independent or unrelated design). One advantage of this
design is that the students are less likely to get bored with the study because
each set of students is exposed to only one condition. In a similar vein, the
research is less susceptible to practice and order effects. However, you will need
more students to participate in your research. There is also a need to ensure that
both groups of students are homogeneous in any confounding variables that
might affect the outcome of the study. This is because different students bring
different characteristics to the experimental setting. Even though we randomly
assign students to experimental and control conditions, we might allocate
students with one characteristic to one condition by chance, and this might
produce confusing results.
Another research procedure in the after-only design is the within-subjects
design (sometimes known as a repeated measures or related design). In this design,
the same students are exposed to two or more different conditions under
comparison. For example, you wish to study the effects of content familiarity on
reading comprehension performance. You can assign the same students to read
two types of passages, one familiar and the other unfamiliar, and then analyse
their comprehension performance.
One obvious advantage is that you need fewer students to participate in your
research. Besides, you will have much greater control over confounding variables
between conditions because the same students are used in both conditions. By and large, the same individual will bring the same characteristics to the conditions.
However, it is not all rosy in the within-subjects design. First, since the same
students are exposed to different conditions, they might get bored by the time
they are given the experimental treatment in the later condition. Besides, there is
an increased likelihood of practice and order effects.
One way to eliminate these effects is to introduce counterbalancing into your design. In counterbalancing, you get half of your students to complete the first condition followed by the second condition. You then get the remaining half of your students to do the two conditions in the opposite order; the second condition is given first, followed by the first condition.
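As a concrete illustration, the short Python sketch below counterbalances the order of two conditions for a hypothetical list of participants. The participant labels and condition names are assumptions made for this example.

    import random

    # Hypothetical participants and the two conditions to be counterbalanced
    participants = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
    condition_a = "familiar passage"
    condition_b = "unfamiliar passage"

    random.seed(7)
    random.shuffle(participants)      # randomise who falls into which order group

    half = len(participants) // 2
    order_ab = participants[:half]    # these subjects do condition A first, then B
    order_ba = participants[half:]    # these subjects do condition B first, then A

    for subject in order_ab:
        print(subject, "->", condition_a, "then", condition_b)
    for subject in order_ba:
        print(subject, "->", condition_b, "then", condition_a)

With this arrangement, any practice or order effect that favours the condition presented second applies equally often to both conditions, so it should not bias the comparison.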

7.3.2 Before-after Research Design

The before-after research design is perhaps the best example of a true experimental design that incorporates both an experimental and a control group to which the subjects are randomly assigned, as shown in Figure 7.6. This research design is a good experimental design because it does a good job of controlling extraneous factors such as history, maturation, instrumentation, selection bias and regression to the mean. How is this done? Any history events (i.e. events the subjects may have been exposed to outside the experiment) that may have influenced the results of the experimental group would also have influenced the control group. Here, it is assumed that the subjects in both groups have experienced the same set of events.

Figure 7.6: Before-after research design

We can conclude that true experimental research has three distinct characteristics (sometimes referred to as the three basic principles of experimentation):

First, it involves intervention or treatment of independent variables. The independent variables are manipulated systematically to examine the effectiveness of the treatment.

Second, there is control of variables. The purpose of this control is to rule out extraneous variables that might confound the experiment.

Third, a true experiment requires an appropriate comparison. For instance, comparison is made between two or more groups that are treated differently.


SELF-CHECK 7.2
1. What is the main strength of true experiments?
2. What is the major difference between the two types of true experiments: after-only research design and before-after research design?

SELF-CHECK 7.3
1. What is the main advantage of using factorial design?
2. Why is the factorial design considered a true experiment?
3. Identify the differences between main effect and interaction effect.

7.4 QUASI-EXPERIMENTAL DESIGN

So far, we have examined both weak and strong experimental research designs. However, in social science research (e.g. education) there are times when investigators face situations in which all the requirements of a true experiment cannot be met.

For example, sometimes it is not possible to randomly assign students to groups, which is a requirement of strong experimental research. Due to logistic reasons, it is challenging to randomly assign subjects to groups and so a whole class may have to be used in the research. Is it still possible to do an experiment despite these limitations? The answer is yes: you can use a quasi-experimental design. According to Christensen and Johnson, a quasi-experimental design is an experimental research design that does not provide for full control of potential confounding variables. In most instances, the primary reason that full control is not achieved is that participants cannot be randomly assigned.

7.4.1 Non-equivalent Control-group Design

The non-equivalent control-group design contains an experimental and a control group, but the subjects are not randomly assigned to the groups, as shown in Figure 7.7.

Figure 7.7: Non-equivalent control-group design

The fact that there is no random assignment means that subjects in the experimental group and control group may not be equivalent on all variables. For example, you could have more poor-performing students in the control group compared to the experimental group. Hence, it may be difficult to establish whether the better performance of the experimental group is due to the treatment or because there are more high-performing students in the group.

In the non-equivalent control-group design, both groups are given first a pre-test and then a post-test (after the treatment is given to the experimental group). The pre-test scores and the post-test scores are compared to determine if there are significant differences.
When you cannot assign subjects randomly, you can be sure that extraneous variables or factors will influence the experiment and threaten its internal validity. Do you leave it alone or do you take action regarding these threats?

Knowing that extraneous factors will creep into a quasi-experiment, a good researcher will take steps to ensure that the subjects in the experimental group and control group are as similar as possible, especially pertaining to important variables such as academic ability, attitude, interest, socioeconomic status and so forth. How do you address this issue?


Cook and Campbell proposed the following steps to enhance the internal validity
of the non-equivalent control-group design or quasi-experiments in general:
(a) Selection: Ensure that subjects in the experimental and control groups are matched in terms of important variables that may affect the results of the experiment. For example, match subjects in terms of academic ability, IQ, attitudes, interests, gender, socioeconomic background and so forth (a simple matching sketch follows this list).
(b) Testing: Ensure that the time period between the pre-test and post-test is not so short that subjects are able to remember the questions given to them earlier.
(c) History: Ensure that events outside the experiment do not affect the experiment. The problem is most serious when only subjects from one of the groups are exposed to such events (e.g. motivation talks, private tuition).
(d) Instrumentation: Ensure that the pre-test and the post-test are similar. If a different test is used, you should make sure that the two tests are equivalent in terms of what they are measuring (i.e. high reliability and validity).
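The short Python sketch below illustrates one simplified way of matching subjects on a single variable, a pre-test ability score, and then placing one member of each matched pair in each group. The subjects and scores are invented for this example; in a real quasi-experiment, matching is often done by selecting comparable subjects from existing intact groups rather than by assigning them.

    import random

    # Hypothetical subjects and their pre-test ability scores
    subjects = {"S01": 62, "S02": 88, "S03": 71, "S04": 90, "S05": 65, "S06": 74}

    # Sort by ability so that adjacent subjects form matched pairs
    ranked = sorted(subjects, key=subjects.get)

    random.seed(1)
    experimental, control = [], []
    for i in range(0, len(ranked), 2):
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)          # randomly decide which member of the pair goes where
        experimental.append(pair[0])
        control.append(pair[1])

    print("Experimental:", experimental)
    print("Control:", control)

Matching of this kind makes the two groups more comparable on the matched variable, but unlike full random assignment it cannot control for variables that were not matched.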

7.4.2 Interrupted Time Series Design

The interrupted time-series design requires the researcher to take a series of measurements both before and after the treatment. A single group of subjects is pre-tested a number of times during the baseline phase, exposed to the treatment, and then post-tested a number of times after the treatment. Baseline refers to the testing done before the treatment designed to alter behaviour.

A hypothetical example may illustrate how the interrupted time-series design is used. Say that you want to determine whether positive reinforcement encourages slow learners to be more attentive. Identify a group of 11-year-olds who are slow learners and persuade them to attend an experimental classroom for at least one period each school day, as in Figure 7.8.

Figure 7.8: Interrupted time-series design


In this classroom, subjects are taught reading skills in a positive environment where they are praised and rewarded for their cooperation and attention on the given task activities. Before the students are sent to the experimental classroom, their behaviour is observed over three sessions in their regular classroom with regard to their attentiveness. This is to obtain baseline data whereby their behaviour is recorded in its natural state. The treatment lasts for three weeks and after the treatment, the subjects are observed for their attentiveness and focus.

Figure 7.9: Percentage of students observed to be attentive and focused

The results of the hypothetical experiment are shown in Figure 7.9, which illustrates the percentage of students who were attentive and focused on the given task. The percentage of attentive and focused students was assessed multiple times prior to and after the implementation of the positive classroom environment, which is what makes this an interrupted time-series design.

This assessment reveals that the percentage of students who were attentive and focused remained rather constant during the first three baseline class sessions, that is, the class sessions prior to the implementation of the positive classroom environment. After the implementation of the positive classroom environment, the percentage of attentive behaviour increased gradually over the next three class sessions, suggesting that the implementation of the positive approach had a beneficial effect on the behaviour of inattentive students.
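As a minimal numerical sketch of how such observations might be summarised, the Python snippet below compares the mean attentiveness before and after the treatment. The percentages used here are invented for illustration and are not the data behind Figure 7.9.

    # Hypothetical attentiveness percentages for an interrupted time-series study
    baseline = [42, 45, 43]          # % attentive in the three baseline sessions
    after_treatment = [55, 63, 71]   # % attentive in the three sessions after treatment

    baseline_mean = sum(baseline) / len(baseline)
    treatment_mean = sum(after_treatment) / len(after_treatment)

    print(f"Baseline mean attentiveness: {baseline_mean:.1f}%")
    print(f"Post-treatment mean attentiveness: {treatment_mean:.1f}%")
    print(f"Change after treatment: {treatment_mean - baseline_mean:+.1f} percentage points")

In practice, the pattern of change across the individual sessions, and not just the difference in means, is what allows the researcher to argue that the treatment, rather than a gradual trend, produced the improvement.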


SELF-CHECK 7.4
1. What is the meaning of non-equivalent in the non-equivalent control-group design?
2. How can you enhance the internal validity of quasi-experimental research designs?
3. When would you use the interrupted time-series design?

7.5 ETHICS IN EXPERIMENTAL RESEARCH

During World War II, Nazi scientists conducted gruesome experiments such as immersing people in ice water to determine how long it would take them to freeze to death. They also injected prisoners with newly developed drugs to determine their effectiveness, and many died in the process. These experiments were conducted by individuals living in a demented society and they were universally condemned as being unethical and inhumane.

Research in education involves humans as subjects: students, teachers, school administrators, parents and so on. These individuals have certain rights, such as the right to privacy, that may be violated if you attempt to obtain answers to many significant questions. Obviously, this is a dilemma for the researcher: whether to conduct the experiment and violate the rights of subjects, or abandon the study. Surely, you have heard people say: "I guess we are the guinea pigs in this study!" or "We are your white rats!"
Any researcher conducting an experiment must ensure that the dignity and
welfare of the subjects are maintained. The American Psychological Association
published the Ethical Principles in the Conduct of Research with Human
Participants in 1982. The document listed the following principles:
(a) In planning a study, the researcher must take responsibility to ensure that the study respects human values and protects the rights of human subjects.
(b) The researcher should determine the degree of risk imposed on subjects by the study (e.g. stress on subjects, subjects required to take drugs).
(c) The principal researcher is responsible for the ethical conduct of the study and is responsible for assistants or other researchers involved.
(d) The researcher should make it clear to the subjects, before they participate in the study, what their obligations and responsibilities are. The researcher should inform subjects of all aspects of the research that might influence their decision to participate.
(e) If the researcher cannot tell everything about the experiment because it is too technical or it will affect the study, then the researcher must inform subjects after the experiment.
(f) The researcher should respect the individual's freedom to withdraw from the experiment at any time, or to refuse to participate in the study.
(g) The researcher should protect subjects from physical and mental discomfort, harm and danger that may arise from the experiment. If there are risks involved, the researcher must inform the subjects of that fact.
(h) Information obtained from the subjects in the experiment is confidential unless otherwise agreed upon. Data should be reported as group performance and not individual performance.

ACTIVITY 7.3
1. What are some ethical principles proposed by the American Psychological Association with regard to doing experiments involving human subjects?

1. Make a case for the superiority of true experimental designs.
2. What are the quasi-experimental research designs and how do they differ from true experiments?
3. Discuss the circumstances in which researchers have to use intact groups.
4. What can a researcher do to increase the equivalence of subjects in the control and experimental groups in a quasi-experiment design?
5. Graph the following data from an experiment on the effect of lighting and music on anxiety. The scores are means of an anxiety test.

   Lighting Level    Classical music    Rock music
   Dim               45                 11
   Bright            12                 44

   Is there an interaction? How do you know? (A plotting sketch follows this activity.)
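If you wish to plot the 2 x 2 table in question 5 yourself, the short matplotlib sketch below draws one line per music type across the two lighting levels; lines that cross or diverge markedly suggest an interaction. This is only one possible way to present the data, and it assumes the matplotlib library is available.

    import matplotlib.pyplot as plt

    # Mean anxiety scores from the 2 x 2 table in question 5
    lighting = ["Dim", "Bright"]
    classical = [45, 12]   # means under classical music
    rock = [11, 44]        # means under rock music

    plt.plot(lighting, classical, marker="o", label="Classical")
    plt.plot(lighting, rock, marker="o", label="Rock")
    plt.xlabel("Lighting level")
    plt.ylabel("Mean anxiety score")
    plt.legend()
    plt.title("Lighting by music interaction plot")
    plt.show()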


A research design is a plan or strategy specifying the procedure in seeking an answer to the research question.
Weak research designs do not allow for the control of extraneous factors that might influence the experiment.
Examples of weak designs are the one-shot design, one-group pre-test post-test design and non-equivalent post-test only design.

True experimental designs enable the researcher to maintain control over the
situation in terms of assignment of subjects to groups.
Examples of true designs are after-only research design, factorial design and
before-after research design.
A quasi-experimental design does not provide for full control of potential
confounding variables.
Examples of quasi-experimental designs are non-equivalent control-group
design and interrupted time series.
Researchers conducting experiments involving human subjects should
respect the confidentiality of subjects.

Key Terms

After-only design
One-group pre-test post-test
Before-after design
One-shot design
Experimental design
Quasi-experimental design
Factorial design
Time series design
Non-equivalent design
True research designs
Non-equivalent post-test only
Weak research designs


Topic 8  Qualitative Research Methods

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Define the qualitative research method;
2. Describe the types of qualitative research methods;
3. Identify data analysis techniques for qualitative methods; and
4. Differentiate between quantitative and qualitative methods.

INTRODUCTION
The term qualitative research is a general term that covers many different methods used in understanding and explaining social phenomena with minimum interference in the natural environment. Qualitative research begins by accepting that there are many different ways of understanding and making sense of the world. You are not attempting to predict what may happen in the future. You want to understand the people in that setting (e.g. What are their lives like? What is going on for them? What beliefs do they hold about the world?). In short, qualitative research relates to the social aspects of our world and seeks to find answers to questions such as the following:
Why do people behave the way they do?
How are opinions and attitudes formed?
How are people affected by the events occurring in their surroundings?
How and why have cultures developed in the way they have?
What are the differences between social groups?
8.1 DEFINITION OF QUALITATIVE RESEARCH

The qualitative research method involves the use of qualitative data, such as interviews, documents and observation, in order to understand and explain a social phenomenon. Qualitative research methods originate from the social sciences, where they enable researchers to study social and cultural-oriented phenomena. Today, the use of qualitative methods and analysis has extended to almost every research field. The method generally includes respondent observation, interviews and questionnaires, and the researcher's impressions and perceptions.
A good definition is given by Denzin and Lincoln (1994):
A qualitative research focuses on interpretation of phenomena in their
natural settings to make sense in terms of the meanings people bring to
these settings.
The qualitative research method involves the collection of data on personal experiences, introspection, stories about life, interviews, observations, interactions and visual texts which are significant in people's lives. Qualitative research typically serves one or more of the following purposes (Peshkin, 1993), as shown in Figure 8.1:

Figure 8.1: Purposes of qualitative research
(Peshkin, 1993)

Measuring reliability while conducting qualitative research is quite challenging. However, we will explore reliability in the next topic.

8.2 TYPES OF QUALITATIVE RESEARCH METHODS

There are many methods of conducting qualitative research. Types of qualitative research are shown in Figure 8.2.

Figure 8.2: Types of qualitative research for ICT

8.2.1 Action Research

Action research is associated with the investigation of change. Cunningham (1993) suggested that action research comprises a continuous process of research and learning in the researcher's long-term relationship with a problem. The intention of action research is to institute a process of change and then draw a conclusion based on this process. There are four stages in the action research cycle (Susman & Evered, 1978), as illustrated in Figure 8.3.


Figure 8.3: Stages in action research cycle

There are two reasons for action research:
(a) To involve practitioners in their work; and
(b) To encourage research with the purpose of improving a certain field of study.

8.2.2 Case Study

Case study is a method used in both qualitative and quantitative research methodologies. Yin (1994) suggested that a case study is an empirical investigation of a phenomenon within its environmental context, where the relationship between the phenomenon and the environment is not clear. Therefore, a case is examined to understand an issue, provide input to an existing theory or contribute new thoughts to a new concept. A case study's unit of measurement is associated with the entity concept.

A research work deploying the case study method may have single or multiple cases. Conclusions can be drawn upon the similarities or differences among the cases involved in a research work. Figure 8.4 shows the sequence of the case study method (Yin, 1994) in a research work.


Figure 8.4: Sequence of case study method


Source: Yin, 1994

Case studies can use single or multiple designs. A single case design is ideal for studying extreme cases in order to confirm or challenge a theory. Additionally, it is also used for cases to which a researcher did not previously have access. However, it is important for a researcher to be careful in interpreting what is being observed. A multiple case design is appropriate when a researcher is keen to use more than one case to gather data and draw a conclusion based on the facts. The multiple case design corroborates the evidence, which enhances the reliability and validity of a research work.

8.2.3 Ethnography

Ethnography is a qualitative research method which involves the description of people and the nature of phenomena. Atkinson and Hammersley (1994) suggested that ethnography involves exploring the nature of phenomena, working with unstructured data and analysing data through interpretation of the meanings attributed by research respondents. This method involves primary observations conducted by a researcher during a stipulated period.

The ethnographic method needs considerable time and fieldwork commitment from the researcher. It can be extremely time-consuming, as the researcher needs to spend a long time in the observation period and jot down field notes. There are some standard rules for taking field notes (Neuman and Wiegand, 2000):


RULES FOR TAKING FIELD NOTES
Jot down notes immediately and as soon as possible during observation.
Track the number of phrases used by subjects.
Pay attention to every detail.
Record the sequence of events chronologically.
Avoid making evaluative judgements or summarising retrieved facts and respondents.
Source: Neuman and Wiegand (2000)

8.2.4 Grounded Theory

Grounded theory uses a prescribed set of procedures for analysing data and constructing a theoretical model from them. A good definition, given by Glaser and Strauss (1967), states:

The discovery of theory from data systematically obtained from social research.

Although it originates from social research, the method is now widely used in other fields as well.

They also stated that a category emerges from the data and may stand by itself as a conceptual element. The term grounded refers to the idea that a theory emerging from the study is derived from and grounded in data collected in the field, rather than taken from the research literature.

Grounded theory is very useful when current theories about a phenomenon are either inadequate or non-existent (Creswell, 1998). Data collection for this method is field-based and is likely to change over the course of the study. Interviews play a major role in this method, but some other techniques like observation, multimedia resources and documents may also be used.

8.2.5 Content Analysis

Content analysis is a detailed and systematic examination of the contents of a particular material to identify patterns or themes. It is typically performed on forms of human communication, including journals, books, printed media and recorded human interactions. Out of the five designs explained in this topic, content analysis requires the most thorough planning from the very beginning. The research problem or research questions need to be specified from the start.
Some steps in content analysis are:

(a) Identify the specific body of material which needs to be explored. For example, you may be interested in finding evidence for enterprise architecture using XML and CORBA in a service-oriented organisation. In this case, the specific body of material to be explored will be enterprise architecture using XML and CORBA.

(b) Define the characteristics or qualities to be examined in precise terms. A researcher may identify specific examples of each characteristic as a way of defining it more clearly.

(c) Break the material into small and manageable segments if it is too complex or lengthy.

(d) Scrutinise and sort the materials based on the defined characteristics (a minimal sketch of this step follows the list).
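
The following is a minimal Python sketch of step (d): sorting material by the defined characteristics. The theme keywords and the two sample documents are hypothetical illustrations, not taken from this module.

```python
# A minimal sketch of step (d): sorting material by the defined
# characteristics. The theme keywords and sample documents are
# hypothetical illustrations.
from collections import Counter

themes = {
    "enterprise architecture": ["enterprise architecture", "service-oriented"],
    "middleware": ["xml", "corba"],
}

documents = [
    "The enterprise architecture uses XML messaging between services.",
    "CORBA remains in use for legacy service-oriented integration.",
]

counts = Counter()
for doc in documents:
    text = doc.lower()
    for theme, keywords in themes.items():
        # Add the number of keyword occurrences for this theme
        counts[theme] += sum(text.count(keyword) for keyword in keywords)

for theme, frequency in counts.most_common():
    print(f"{theme}: {frequency} occurrence(s)")
```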

SELF-CHECK 8.1
Identify types of qualitative research methods.

8.3 QUALITATIVE DATA ANALYSIS

Qualitative data is a pool of data obtained from interviews, field notes of
observations and analysis of documents. This information must be organised and
interpreted properly to extract the key findings for your research work. As a rule
of thumb, there is no single right way for qualitative data analysis. Different
researchers have proposed different methods for qualitative data analysis.
However, there are some common procedures in the analysis of qualitative data.
A researcher begins with a large body of knowledge and information, deploys inductive reasoning, sorting and categorisation, and makes it precise with key themes. For example, the content analysis method might seem very straightforward, but you need to be careful to extract information that has characteristics meaningful to your research theme. Creswell (1998) came up with the data analysis spiral that is applicable to most qualitative methods. There are several steps for this analysis. These steps are:
(a) Organise data into several forms (i.e. database, sentences or individual words);

(b) Peruse the data sets several times to gain a complete picture or overview of what they contain as a whole. During the process, a researcher should jot down short notes or summaries of the key points that suggest possible categories or interpretations;

(c) Identify general categories or themes and classify them accordingly. This will help a researcher to see a pattern or meaning in the data obtained; and

(d) Finally, integrate and summarise the data for the audience. This step may also include hypotheses that state the relationships among those categories defined by the researcher. The data summary could be represented by a table, figure or matrix diagram.

The stages in the analysis of qualitative data are shown in Figure 8.5. It usually
begins with familiarisation of the data, transcription, organisation, coding,
analysis (grounded theory or framework analysis) and reporting (though the
order may vary).

Figure 8.5: Stages in qualitative data analysis

Let us look at each stage.


(a) Familiarisation
The first step of data analysis is familiarisation, in which you listen to tapes and watch video material, read and re-read field notes, and make memos and summaries before formal analysis begins. This is especially important when others besides you are also involved in data collection. You have to get familiar with the field notes they made (perhaps try to decipher their handwriting!).

(b) Transcription
Almost all qualitative research studies involve some degree of transcription. What is transcription? Transcription is the process of converting audio or video-recorded data obtained from interviews and focus groups, as well as handwritten field notes, into verbatim form (i.e. written or printed) for easy reading. Why do you have to do this? If you were to analyse directly from an audio or video recording, there is the likelihood that you may include those sections that seem relevant or interesting to you and ignore others. With a transcript of everything that you observed and recorded (audio, video or field notes), you get the whole picture of what happened and the chance of your analysis being biased is minimised.

(c) Organisation
After transcription, it is necessary to organise your data into sections that are easy to retrieve. What does this mean? Say, for example, in your study you interviewed 10 teachers (30 minutes each) on their opinion about the leadership style of their principal. It is advisable that you give each teacher a pseudonym (e.g. Elvis, Jagger, Dina, not their real names) or refer to them by a code number (e.g. T1, T2, ..., T10). You need to keep a file that links the pseudonyms or code numbers to the original informants; this file is to be kept confidential and destroyed after completion of the research. Names and other identifiable material should be removed from the transcripts.
The narrative data you obtained from the 10 teachers need to be numbered depending on your unit of analysis. In other words, you have to determine whether you intend to analyse at the word level, sentence level or paragraph level, and the units have to be numbered accordingly. Make sure that the unit of text you use can be traced back to its original context.
You have at your disposal TWO approaches to analyse the data:

(i) If you are interested in conducting an exploratory study and are more concerned with theory generation, then the grounded theory approach should be your choice of analysis.

(ii) If you are interested in finding answers to pre-determined questions (a priori questions), then framework analysis would be the logical option.

(d) Coding
Coding is the process of examining the raw qualitative data in the transcripts, extracting sections of text units (words, phrases, sentences or paragraphs) and assigning different codes. This is done by marking sections of the transcript and giving them a numerical reference, symbol, descriptive words or category words. Most of the text (or transcript) will be marked and given different codes, which will later be refined or combined to form themes or categories.
(e) Analysis (Grounded Theory Approach)
Based on the research questions and your objective for conducting the study, you determine the approach to analysing the data. If you are interested in generating theory and are not sure what to expect, the grounded theory approach would be a logical choice. The grounded theory approach offers a rigorous approach to generating theory from qualitative data. It is particularly well suited for exploratory studies where little is known.
Grounded theory has evolved from the work of sociologists Glaser and Strauss (1967). It is an inductive method of qualitative research in which theory is systematically generated from data. However, many studies in education, business, management and in the health field (especially in nursing) have adopted grounded theory as a procedure for conceptualising and analysing data without taking on the whole methodology. The appeal of grounded theory analysis is that it allows the theory to emerge from the data through a process of rigorous analysis (see Figure 9.1). The word theory is used to refer to the relationships that exist among concepts generated from the data and to help us understand our social world better (Strauss and Corbin, 1998).
The main feature of the grounded theory procedure is the use of the constant comparison technique. Using this technique, categories or concepts that emerge from one stage of analysis are compared with categories or concepts that emerged from the previous stage. The researcher continues with this technique until a situation called theoretical saturation is reached. Theoretical saturation refers to a situation where no new significant categories or concepts emerge. The grounded theory procedure is cyclical, involving frequent revisiting of the data in the light of new categories or concepts that emerge as data analysis progresses. The theory being developed is best seen as provisional until proven by the validation of data from others.

(f) Analysis (Framework Analysis Approach)
Another approach to qualitative data analysis is called framework analysis (Ritchie & Spencer, 1994). In contrast to the grounded theory procedure, framework analysis was explicitly developed for applied research, where the findings and recommendations of research need to be obtained within a short period in order to be adopted. The general approach of framework analysis shares many common features with the grounded theory approach discussed earlier. This approach to qualitative data analysis allows the researcher to set the categories and themes from the beginning of the study. However, it also allows for categories and themes that may emerge during the data analysis process which the researcher had not stated at the beginning of the study.
Once the categories and themes have been pre-determined, specific pieces of data are identified which correspond to the different themes or categories. For a change, let us take an example from the field of medicine. You may want to know, for instance, how people who have had a heart attack conceptualise the causes of the attack. From the existing literature, you may know that these can be divided into physical causes, psychological causes, ideas of luck, genetic inheritance and so forth. You interview people who have had a heart attack and from the interview transcripts you search the data for material that is coded under these headings.
Using the headings, you can create charts of your data so that you can easily read across the whole data set. Charts can be either thematic for each theme or category across all respondents (cases), or by case for each respondent across all themes:
(i) Thematic Chart

THEME: Psychological cause
Case 2: The stress at office is too much. Got to work late.
Case 3: Business was bad. Had to close shop.

(ii) Case Chart

CASE 1
Theme 1 (Genetic inheritance): My younger brother and father died of heart attack.
Theme 2 (Physical cause): I hardly do any exercise. I am too busy to do any exercise.

In the chart boxes, you could put line and page references to relevant passages in the interview transcript. You might also want to include some text, e.g. key words or quotations, as a reminder of what is being referred to (see (i) and (ii)). For example, under the theme Psychological Causes, Case 2 talks about stress in the workplace while Case 3 talks about business failure. A minimal coding and charting sketch follows.
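
The following is a minimal Python sketch of framework-style coding and charting: coded text units are arranged into a thematic chart (theme by case). The cases and quotes mirror the illustrative charts above and are hypothetical.

```python
# A minimal sketch of framework-style coding and charting: coded text
# units are arranged into a thematic chart (theme by case). The cases
# and quotes mirror the illustrative charts above and are hypothetical.
from collections import defaultdict

coded_units = [
    {"case": "Case 2", "theme": "Psychological cause",
     "text": "The stress at office is too much. Got to work late."},
    {"case": "Case 3", "theme": "Psychological cause",
     "text": "Business was bad. Had to close shop."},
    {"case": "Case 1", "theme": "Genetic inheritance",
     "text": "My younger brother and father died of heart attack."},
]

# Thematic chart: one row per theme, one entry per case
chart = defaultdict(dict)
for unit in coded_units:
    chart[unit["theme"]][unit["case"]] = unit["text"]

for theme, cases in chart.items():
    print(theme)
    for case, quote in sorted(cases.items()):
        print(f"  {case}: {quote}")
```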


Next, let us look at the data analysis spiral, as illustrated by Creswell (1998) in Figure 8.6.

Figure 8.6: Data analysis spiral (Creswell, 1998)
Source: Adapted from http://www.thearney.com/Scholar%20Folio/Schola1.gif

Data analysis for qualitative methods is more time consuming compared to quantitative methods. This is due to the large amount of information you may obtain during the entire research process. It is important for a researcher to set aside some information because not everything gathered will be useful.

ACTIVITY 8.1
1. Describe the types of interview method for the qualitative approach.
2. Distinguish between primary and secondary data sources.

8.4 DIFFERENCES BETWEEN QUANTITATIVE AND QUALITATIVE APPROACHES

There are some differences between the quantitative and qualitative approaches in research methodology. In information and communication technology, both methods play a significant role in facilitating the entire research process and leading to desirable results or outcomes. Qualitative research tends to focus on the subject or respondents instead of the perspective of the researcher. This is also termed the emic or insider perspective, as against the etic or outsider perspective. A researcher is always the main person in data collection and analysis in the qualitative approach, as compared to questionnaires or tests in the case of the quantitative approach.
The qualitative method also involves fieldwork, where a researcher must participate in the setting, especially for observation and interviews with respondents on the research topic. Table 8.1 lists the differences between qualitative and quantitative research.
Table 8.1: Differences between Qualitative and Quantitative Research

Dimension | Qualitative | Quantitative
Focus | Quality (features) | Quantity (how much, numbers)
Philosophy | Phenomenology | Positivism
Method | Ethnography/Observation | Experiments/Correlation
Goal | Understand, meaning | Prediction, test hypothesis
Design | Flexible, emerging | Structured, predetermined
Sample | Small, purposeful | Large, random, representation
Data collection | Interviews, observation, documents and artefacts | Questionnaire, scales, tests, inventories
Analysis | Inductive (by the researcher) | Deductive (by statistical methods)
Findings | Comprehensive, detailed description, holistic | Precise, numerical
Researcher | Immersed | Detached

(Adapted from Merriam, 1999; Firestone, 1987; Potter, 1996)

Generally, qualitative research adopts the inductive approach. Such an approach is used when there is a lack of theory related to the research topic, or when existing theory is unable to explain a phenomenon convincingly. A qualitative approach also focuses on process and understanding based on a rich description of the body of knowledge. Data take the form of the respondents' own words, extracts from research documents, and multimedia resources such as audio and video recordings, all of which support the findings of a study.

SELF-CHECK 8.2
1. What are the steps involved in qualitative data analysis?
2. Identify the differences between qualitative and quantitative research.

•  Qualitative research methods involve the use of qualitative data, such as interviews, documents and respondents' observation, to understand and explain social phenomena.

•  The qualitative method focuses on interpretation of situations or phenomena in their natural settings.

•  Types of qualitative methods are Action Research, Case Study, Ethnography, Grounded Theory and Content Analysis.

•  Primary data sources comprise observation, interviewing and questionnaires.

•  Interviewing is a technique of gathering data from respondents by asking questions and reacting verbally.

•  Secondary data sources correspond to documents such as publications, records, earlier research reports and service records.

•  Collective administration and mailed questionnaires are the two most used techniques in distributing questionnaires to respondents.

•  Qualitative methodology is inductive whereas quantitative methodology is deductive.

•  Action research adopts a spiral approach comprising four steps: planning, acting, observing and reflecting.

•  A qualitative case study is intensive and offers a holistic description and analysis of a single instance of a phenomenon.

•  Ethnography is a qualitative research method which involves a description of people and the nature of phenomena.

•  The inductive approach used in the qualitative method begins by observing phenomena, then proceeds to find patterns in the form of emerging categories or concepts.

Action research
Case study
Content analysis
Ethnography
Grounded theory
Inductive approach
Interviews
Primary data sources
Qualitative methods
Secondary data sources


Topic 9  Data Analysis

LEARNING OUTCOMES
By the end of this topic, you should be able to:
1. Explain the process of data editing and coding;
2. Assess the process of summarising, rearranging, ordering and manipulating data;
3. Carry out descriptive analysis of data; and
4. Conduct hypothesis testing for a study.

INTRODUCTION
The goal of most research is to provide information. There is a difference
between raw data and information. Information refers to a body of facts that is in
a format suitable for decision-making, whereas data is simply recorded measures
of certain phenomena. Raw data collected in the field must be transformed into
information that will provide answers to the manager's questions. The
conversion of raw data into information requires that the data be edited and
coded so that it can be transferred to a computer or other storage medium. This
topic introduces the processes of data analysis. These comprise several
interrelated procedures that are performed to summarise and rearrange the data.
Researchers edit and code data to provide input that results in tabulated
information that will answer the research questions. With this input, researchers
could logically and statistically describe research findings.

9.1 DATA SCREENING AND EDITING

After data has been collected and before it is analysed, the researcher must
examine it to ensure its validity. Blank responses, referred to as missing data,
must be dealt with in some way. If the questions are pre-coded, then they can
simply be transferred into a database. If they are not pre-coded, then a system
must be developed so that they can be keyed in the database. The typical tasks
involved are data editing, which deals with missing data, coding, transformation
and data entry.

ACTIVITY 9.1
What is raw data? How is it different from primary and secondary data?

9.1.1 Data Editing

Before the collected data can be used, it must be edited. It must be inspected
for completeness and consistency. Some inconsistencies may be corrected at
this point. Editing also involves checking to see if respondents understood the
question or followed a particular sequence they were supposed to follow in a
branching question. For example, assume the researcher is using an experimental
design with two treatments. One treatment is designed to be a supportive work
environment and the other treatment is a much less supportive environment.
To verify that a respondent interpreted the treatment properly, the researcher
may conduct a manipulation check. After a respondent has answered the
questions, he or she is asked to comment on both work environments. If the
respondent indicates both work environments are equally supportive, it means
he or she did not respond appropriately to the treatment. In such situations, the
researcher may choose to remove that particular respondent from the data
analysis because he or she did not see the difference in the two work
environments.
Finally, editing may result in the elimination of questionnaires. For example, if
there is a large proportion of missing data, then the entire questionnaire may
have to be removed from the database. Similarly, a screening question may
indicate that you want to interview only persons who own their own home.
However the response in a questionnaire may indicate a particular respondent is
a renter. In such cases, the questionnaire must not be included in the data
analysis.
9.1.2 Field Editing

The process of editing can be done in the field. The purpose of field editing on
the same day as the interview is to detect technical omissions (such as a blank
page on the interview questionnaire), check legibility of handwriting and clarify
responses that are logically or conceptually inconsistent. If a daily field edit is
conducted, a supervisor who edits completed questionnaires will be able to
question the interviewers who can recall the interview well enough to correct the
problem.
The number of no answers or incomplete responses to some questions can be
reduced with the rapid follow-up stimulated by a field edit. The daily field edit
also allows possible recontacting of the respondent to fill in omissions before the
situation has changed. Moreover, the field edit may indicate the need for further
interviewer training. For example, the field editor should check open-ended
responses for thoroughness of probing and correct following of skip patterns.
When poor interviewing is reflected by lack of probing, supervisors may further
train the interviewer.

9.1.3 In-house Editing

Although simultaneous editing in the field is highly desirable, in many situations (particularly with mail questionnaires) early reviewing of the data is not always possible. In-house editing rigorously investigates the results of data collection. The research supplier or the research department normally has a centralised office staff to perform the editing and coding functions. Figure 9.1 summarises the rules to be followed in editing.


Figure 9.1: Rules in editing

9.1.4 Missing Data

Missing data can affect the validity of the researcher's findings and therefore must be identified and the problems resolved. Missing data typically arises because of data collection or data entry problems. The researcher must assess how widespread the missing data problem is and whether it is systematic or random. If the problem is of limited scope, the typical solution is simply to eliminate respondents and/or questions with missing data. When missing data is more widespread, the researcher must deal with it differently, because removing every respondent with missing data may leave a sample too small to provide meaningful results.
Several possible approaches can be taken to deal with missing data (a minimal sketch of the first two follows the list):

(a) Identify the respondents and variables that have a large number of missing data points. These respondents and/or variables are then eliminated from the analysis.

(b) Estimate the missing values by substituting the mean. Unfortunately, this is only appropriate for metrically measured variables. When non-metric variables have missing data, the respondent/question must be eliminated from the analysis in most situations.

(c) Assign to the item the mean value of the responses of all those who have responded to that particular item.

(d) Give the item the mean of the responses of this particular respondent to all other questions measuring this variable.

(e) Give the missing response a random number within the range for that scale.

(f) For an interval-scaled item with a mid-point, give the scale midpoint as the response to that particular item.
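
The following is a minimal Python sketch of approaches (a) and (b) using the pandas library: dropping respondents with many missing data points, then substituting the item mean for the remaining (metric) missing values. The data frame of item responses is hypothetical.

```python
# A minimal sketch of approaches (a) and (b) using pandas. The data
# frame of item responses is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "q1": [5, 4, None, 3, None],
    "q2": [4, None, None, 5, 4],
    "q3": [3, 4, None, 4, 5],
})

# (a) Eliminate respondents with more than half of their items missing
df = df[df.isna().mean(axis=1) <= 0.5]

# (b) Estimate the remaining missing values by substituting the item mean
df = df.fillna(df.mean())

print(df)
```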

9.2 CODING

If scanner sheets for collecting questionnaire data are used, such sheets facilitate
the entry of the responses directly into the computer without manual keying in of
the data. However, if this cannot be done, then it is perhaps better to use a coding
sheet first to transcribe the data from the questionnaire and then key in the data.
This method, in contrast to flipping through each questionnaire for each item,
avoids confusion especially when there are many questions and a large number
of questionnaires involved.
Responses could be coded either before or after the data is collected. If at all
possible, it is best to code them ahead of time. Coding means assigning a number
to a particular response so the answer can be entered into a database. For
example, if a five-point Agree-Disagree scale is used, then it must be decided if
Strongly Agree will be coded with a 5 or a 1. Most researchers will assign the
largest number to Strongly Agree and the smallest to Strongly Disagree; for example, 5 = Strongly Agree and 1 = Strongly Disagree, with the points in between being assigned 2, 3 or 4. A special situation arises when the researcher
has a two-category variable like gender. Some researchers use a coding approach
that assigns 1 = male and a 2 = female. It is recommended that in such instances a
coding approach be used that assigns 1 to one of the categories and 0 to the other
category. This enables greater flexibility in data analysis and is referred to as
using dummy variable coding.
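
The following is a minimal Python sketch of the coding conventions just described: a five-point Agree-Disagree scale coded 1 to 5 and gender coded as a 0/1 dummy variable. The response records are hypothetical.

```python
# A minimal sketch of the coding conventions described above. The
# response records are hypothetical.
likert_codes = {
    "Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly Agree": 5,
}
gender_dummy = {"male": 1, "female": 0}   # dummy variable coding

responses = [
    {"gender": "female", "q1": "Agree"},
    {"gender": "male", "q1": "Strongly Agree"},
]

coded = [
    {"gender": gender_dummy[r["gender"]], "q1": likert_codes[r["q1"]]}
    for r in responses
]
print(coded)   # [{'gender': 0, 'q1': 4}, {'gender': 1, 'q1': 5}]
```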
When interviews are completed using a computer-assisted approach, the
responses are entered directly into the database. When self-completed
questionnaires are used, it is good to use a scanner sheet because the responses
can be directly scanned into the database. In other instances, however, the raw
data must be manually keyed into the database using a personal computer. A popular software package known as SPSS includes a data editor that looks like a spreadsheet and can be used to enter, edit and view the contents of the database.
Missing values typically are represented by a dot in a cell (.), so they must be
coded in a special way as was indicated earlier.

Human errors can occur while completing the questionnaire, coding it or during
keying in data. Therefore, at least 10 percent of the coded questionnaires, as well
as the actual database, are checked for possible coding or data entry errors.
Questionnaires to be checked are selected by a systematic, random sampling
process.
Coding in qualitative research will be different from that of quantitative research.
Research findings in raw form need to be classified and transformed into
categories or variables. Raw data in qualitative research could hardly be
associated with numbers. Thus, a researcher cannot assign numbers to the data.
The concept of open, axial and selective coding systems can be used for
qualitative data handling, interpretation and theory development.
In qualitative research, researchers usually end up with too much data. As a
result, data is coded to prevent data overload and to enable further analysis for
theory development. Open coding system is used to identify categories that are
derived from the concepts generated in a research. In this coding system, the
dimensions and properties of the concept are identified from the raw data in
order to group certain concepts into certain categories. These categories reflect
the concepts in a more abstract or higher order concept. Axial coding is used to
relate categories to their sub-categories. In other words, axial coding is used for
linking concepts at their dimensions and properties level in order to provide a
more precise and complete explanation of the categories.
Selective coding system is used as the basis for theory development. In this
coding system, categories are rearranged and reorganised in order to relate them
to a core concept. This core concept will form a framework or model to explain
the phenomenon being studied. The framework or model built upon categories
and subcategories is an important milestone for theory development because it
facilitates the process of further data collection to test the framework or model.

ACTIVITY 9.2
Briefly explain the benefits of coding and when to use it.

9.3 DATA ENTRY

Data entry converts information gathered through primary or secondary methods into a medium suitable for viewing and manipulation. Keyboard entry remains a mainstay for researchers who need to create a data file immediately and store it in minimal space in a variety of media.
Optical scanning instruments, the ever-present choice of testing services, have
improved efficiency. Examinees darken small circles, ellipses or sets of parallel
lines to choose a test answer. Optical scanners process the mark-sensed
questionnaires and store the answers in a file. This technology has been adopted
by questionnaire designers for the most routine data collection. It reduces the
number of times data is handled, thereby reducing the number of errors that are
introduced.
The cost of technology has allowed most researchers access to desktop or
portable computers or a terminal linked to a larger computer. This technology
enables computer-assisted telephone or personal interviews complete with
answers to be keyed in directly for processing, eliminating intermediate steps
and errors. The increase in computerised random-digit dialling encourages other
data collection innovations.
Voice recognition and response systems, while still far from mature, are
providing some interesting alternatives for the telephone interviewer. Such
systems can be used with software programmed to call specific three-digit
prefixes and generate four-digit numbers randomly, reaching a sample within a
set geographical area. Upon getting a voice response, the computer branches into
a questionnaire routine. Currently, the systems are programmed to record the
verbal answers but voice recognition systems are improving rapidly and soon
this system will be able to translate voice responses into data files.
Field interviewers can use portable computers or electronic notebooks instead of
the traditional clipboards and pencils. With a built-in communications modem or
cellular link, their files can be sent directly to another computer in the field or to a
remote site. This lets supervisors inspect data immediately or simplifies data processing at a central facility. Bar code readers are used in several applications: at point-of-sale terminals, in inventory control, in product and brand tracking, and at busy rental car locations to facilitate the return of cars and generate invoices. This technology can be used to simplify the interviewer's role as a data recorder.
Instead of writing (or typing) information about the respondents and their
answers by hand, the interviewer can pass a bar code wand over the appropriate
codes. The data is recorded in a small, lightweight unit for translation later.
Even with these time reductions between data collection and analysis, continuing
innovations in multimedia technology are being developed by the personal
computer business. The capability to integrate visual images, audio and data may
soon replace video equipment as the preferred method for recording an
experiment, interview or focus group. A copy of the response data could be


extracted for data analysis but the audio and visual images would remain intact
for later evaluation.

ACTIVITY 9.3
People nowadays are attracted to SMS service provider advertisements,
be it for mobile ring tone services or contests. Even local TV stations use
this service to get information on certain survey questions. Why has this
phenomenon become so widely accepted by the public even when the
charges are expensive?

9.4 DATA TRANSFORMATION

The process of changing data from its original format to a new format is called
data transformation. This process is usually done so that the data can be more easily understood, or to meet some other research objective. For example,
when the data is measured using scales, quite often the statements are given in
negative as well as positive formats. In such cases, the researcher will reverse
code the questions that are negatively worded so a summated scale can be
calculated to interpret the results. If a scale of 5 is used, a 5 will be transformed to
a 1 and a 4 to a 2; a 3 does not have to be changed.
Data transformation is also done to reduce bias, for example when the ages of respondents are being studied. To reduce biased responses, respondents are asked the year they were born. In such cases, the researcher simply transforms the birth year to obtain the age of the respondents. Data transformation is also required
when the researcher wants to create a new variable by respecifying the data
according to logical transformation. In many cases, the Likert scales are
combined into a summated rating. Usually, the transformed variable involves
combining the scores (raw data) for several attitudinal statements into a single
summated score.
The researcher could also calculate an average summated score by dividing the total summated score by the number of items. For example, if three 5-point statements are used, the summated score might be 4 + 4 + 5 = 13, and the average summated score becomes 13 / 3 = 4.33.
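
The following is a minimal Python sketch of the transformations described in this section: reverse coding a negatively worded 5-point item, computing summated and average summated scores, and deriving age from birth year. The item names and responses are hypothetical.

```python
# A minimal sketch of the transformations described above. The item
# names and responses are hypothetical.
from datetime import date

respondent = {"item1": 4, "item2_negative": 2, "item3": 5, "birth_year": 1988}

# Reverse code the negatively worded item on a 5-point scale: 5->1, 4->2, ...
respondent["item2"] = 6 - respondent.pop("item2_negative")

items = ["item1", "item2", "item3"]
summated = sum(respondent[i] for i in items)       # 4 + 4 + 5 = 13
average_summated = summated / len(items)           # 13 / 3 = 4.33

age = date.today().year - respondent["birth_year"]

print(summated, round(average_summated, 2), age)
```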


SELF-CHECK 9.1
1. Why do we need to transform original data?
2. What kind of steps can be taken to minimise or avoid biases during the data analysis stage of research?

9.5 DATA ANALYSIS

The objectives of data analysis can be viewed from three aspects: to have a feel of
the data, to test the goodness of the data and to test the hypotheses developed for
the research. Getting a feel of the data can be achieved by checking the mean, the
range, the standard deviation and the variance in the data. These statistics will
give the researcher a good idea of how the respondents have reacted to each item
in the questionnaire and how effective the items and measures are.
Suppose the researcher notices that the item in the data set does not have a good
spread (range) and shows little variability. The researcher can deduce that the
question may not be understood by the respondents due to improper wording or
the respondents may not fully understand the intent of the question. If the
respondents have given similar answers to all items, the researcher may want to
check for biases (e.g. if the respondents have stuck at only certain points on the
scale). The objective of descriptive analysis is to portray an accurate profile of
persons, events or situations. The analysis could be an extension of, or a precursor to, a piece of exploratory research. Table 9.1 summarises data presentation by data type.


Table 9.1: Data Presentation by Data Type

To show one variable so that any specific value can be easily read: Table/Frequency Distribution (grouped data).

To show the frequency of occurrences of categories/values for one variable so that highest and lowest are clear: Bar Chart for categorical data (data may need grouping); Histogram/Frequency Polygon for continuous data (data must be grouped); Bar Chart/Pictogram for discrete data (data may need grouping).

To show the trend for a variable: Line Graph/Bar Chart for categorical and discrete data; Line Graph/Histogram for continuous data.

To show the proportion of occurrences of categories/values for one variable: Pie Chart/Bar Chart for categorical and discrete data (data may need grouping); Histogram/Pie Chart for continuous data (data must be grouped).

To show the distribution of values for one variable: Frequency Polygon, Histogram (data must be grouped) or Box Plot for continuous data; Frequency Polygon, Bar Chart (data may need grouping) or Box Plot for discrete data.

To show the interdependence between two/more variables so that any specific value can be read easily: Contingency Table/Cross-tabulation (data often grouped).

To compare the frequency of occurrences of categories/values for two/more variables so that highest and lowest are clear: Multiple Bar Chart (continuous data must be grouped, other data may need grouping).

To compare the trends for two/more variables so that conjunctions are clear: Multiple Line Graph/Multiple Bar Chart.

To compare the proportions of occurrences of categories/values for two/more variables: Comparative Pie Charts/Percentage Component Bar Chart (continuous data must be grouped, other data may need grouping).

To compare the distribution of values for two/more variables: Multiple Box Plot.

To compare the frequency of occurrences of categories/values for two/more variables so that totals are clear: Stacked Bar Chart (continuous data must be grouped, other data may need grouping).

To compare the proportions and totals of occurrences of categories/values for two/more variables: Comparative Proportional Pie Charts (continuous data must be grouped, other data may need grouping).

To show the relationship between cases for two variables: Scatter Graph/Scatter Plot.

Source: Adapted from Saunders, M., Lewis, P., & Thornhill, A. (2003)
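
To illustrate one of the presentations in Table 9.1, the following is a minimal Python sketch that builds a contingency table (cross-tabulation) with the pandas library; the two categorical variables and their values are hypothetical.

```python
# A minimal sketch of a contingency table (cross-tabulation) for two
# categorical variables, as listed in Table 9.1. The variables and
# values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "female", "male"],
    "brand":  ["A", "B", "A", "A", "B", "B"],
})

print(pd.crosstab(df["gender"], df["brand"]))
```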

ACTIVITY 9.4
What are the factors that determine the choice of data analysis?
Compare your answers with those of your classmate.

9.5.1 Descriptive Statistics

In the first part of this topic, we discussed how responses can be coded and entered. When nominal measurements are involved, each category is represented by its own numerical code. With ordinal data, the item's rank, reflecting a position in the range from the lowest to the highest, is entered into the system. The same is true with interval-ratio scores. When this data is tabulated, it may be arrayed from the lowest to the highest scores on the scales. Together with the frequency of occurrence, the observations form a distribution of values.
Many variables of interest have distributions that approximate a standard normal distribution. It is a standard of comparison for describing distributions of sample data and is used with inferential statistics that assume normally distributed variables. The characteristics of location, spread and shape describe a distribution. Their definitions, applications and formulas fall under the heading of descriptive statistics. Although the definitions will be familiar to most readers, the review takes the following perspective on distributional characteristics:
(a) A distribution's shape is just as consequential as its location and spread.

(b) Visual representations are superior to numerical ones for discovering a distribution's shape.

(c) The choice of summarised statistics to describe a single variable is contingent on the appropriateness of those statistics for the shape of the distribution.

(a) Measures of Location
The common measures of location, often called central tendency or centre, include the mean, median and mode.

(b) Mean
The mean is the arithmetic average. It is the sum of the observed values in the distribution divided by the number of observations. It is the location measure most frequently used for interval-ratio data but can be misleading when the distribution contains extreme scores, large or small. The mean of the population (or population mean) is denoted by μ and is defined as:

μ = Σx / N
(c) Median
The median is the midpoint of the distribution. Half of the observations in the distribution fall above and the other half fall below the median. When the distribution has an even number of observations, the median is the average of the two middle scores. The median is the most appropriate locator of centre for ordinal data and has resistance to extreme scores, thereby making it a preferred measure for interval-ratio data, particularly those with asymmetric distributions.

(d) Mode
The mode is the most frequently occurring value. When there is more than one score with the same highest frequency, the distribution is bimodal or multimodal. When every score has an equal number of observations, there is no mode. The mode is the location measure for nominal data and a point of reference, along with the median and mean, for examining spread and shape.

(e) Measures of Spread
The common measures of spread (alternatively referred to as measures of dispersion or variability) are the variance, standard deviation, range, interquartile range and quartile deviation. They describe how scores cluster or scatter in a distribution.

(f) Variance
The variance is the average of the squared deviation scores from the distribution's mean. It is a measure of score dispersion about the mean. If all the scores are identical, the variance is 0. The greater the dispersion of scores, the greater the variance. Both the variance and the standard deviation are used with interval-ratio data. The symbol for the sample variance is s² and for the population variance it is the Greek letter sigma squared (σ²).

Sample variance: s² = Σ(Xi - X̄)² / (n - 1)

Population variance: σ² = Σ(Xi - μ)² / N

(g) Standard Deviation
The standard deviation is the positive square root of the variance. It is perhaps the most frequently used measure of spread because it improves interpretability by removing the variance's square and expressing deviations in their original units. Like the mean, the standard deviation is affected by extreme scores. The symbol for the sample standard deviation is s, and for the population standard deviation it is σ.

s = √variance

(h) Range
The range is the difference between the largest and smallest scores in the distribution. Unlike the standard deviation, it is computed from only the minimum and maximum scores. Thus, it is a very rough measure of spread. With the range as a point of comparison, it is possible to get an idea of the homogeneity (small standard deviation) or heterogeneity (large standard deviation) of the distribution. For a homogeneous distribution, the ratio of the range to the standard deviation should be between 2 and 6. A number above 6 would indicate a high degree of heterogeneity. The range provides useful but limited information for all data. It is mandatory for ordinal data.

(i) Interquartile Range
The interquartile range (IQR) is the difference between the first and third quartiles of the distribution. It is also called the midspread. Ordinal or ranked data use this measure in conjunction with the median. It is also used with interval-ratio data if there are asymmetrical distributions, or for exploratory analysis. Recall the following relationships: the minimum value of the distribution is the 0th percentile and the maximum is the 100th percentile. The first quartile (Q1) is the 25th percentile; it is also known as the lower hinge when used with box plots. The median, or Q2, is the 50th percentile. The third quartile (Q3) is the 75th percentile; it is also known as the upper hinge. The IQR is the distance between the hinges.
(j) Semi-interquartile Range
The semi-interquartile range (quartile deviation) is expressed as:

Q = (Q3 - Q1) / 2

The semi-interquartile range is always used with the median for ordinal data. It is helpful for interval-ratio data of a skewed nature. In a normal distribution, a quartile deviation (Q) on either side of the median encompasses 50 per cent of the observations, and eight Qs cover approximately the range. Q's relationship with the standard deviation is constant (Q = 0.6745s) when scores are normally distributed.

(k) Measures of Shape
The measures of shape, skewness and kurtosis, describe departures from the symmetry of a distribution and its relative peakedness (flatness), respectively. They are related to statistics known as moments, which use deviation scores (Xi - X̄). The variance, for example, is a second power moment.

(i) The measures of shape use third and fourth power computations and are often difficult to interpret when extreme scores are in the distribution.

(ii) Skewness is a measure of a distribution's deviation from symmetry. In a symmetrical distribution, the mean, median and mode are in the same location. A distribution that has cases stretching toward one tail or the other is called skewed.

(iii) Kurtosis is a measure of a distribution's peakedness (flatness). Distributions where scores cluster heavily or pile up in the centre (along with more observations than normal in the extreme tails) are peaked or leptokurtic. Flat distributions with scores more evenly distributed and tails fatter than a normal distribution are called platykurtic. Intermediate or mesokurtic distributions are neither too peaked nor too flat. The symbol for kurtosis is ku.
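
The following is a minimal Python sketch computing the measures of location, spread and shape described above for a small hypothetical sample, using the numpy and scipy libraries.

```python
# A minimal sketch (hypothetical sample) of the measures of location,
# spread and shape described in this section.
import numpy as np
from scipy import stats
from statistics import mode

x = np.array([3, 4, 5, 7, 6, 5, 9, 4, 5, 6])

mean = x.mean()                          # measure of location
median = np.median(x)
most_frequent = mode(x.tolist())         # mode: most frequently occurring value

sample_variance = x.var(ddof=1)          # s^2, divides by n - 1
sample_sd = x.std(ddof=1)                # s, square root of the variance
value_range = x.max() - x.min()          # largest minus smallest score
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1                            # interquartile range
semi_iqr = iqr / 2                       # quartile deviation

skewness = stats.skew(x)                 # departure from symmetry
kurt = stats.kurtosis(x)                 # peakedness relative to normal (0)

print(mean, median, most_frequent, sample_variance, sample_sd,
      value_range, iqr, semi_iqr, skewness, kurt)
```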

9.6 WHAT IS A HYPOTHESIS?

In statistics, a hypothesis is an unproven supposition or proposition that tentatively explains certain facts or phenomena. A hypothesis can also be an assumption about the nature of a particular situation. Statistical techniques enable us to determine whether the proposed hypotheses can be confirmed by empirical evidence.
Hypotheses are developed prior to data collection, generally as part of the research plan. Hypotheses enable researchers to explain and test proposed facts or phenomena.
The following are the general steps in hypothesis testing:

(a) Make an initial assumption;

(b) Write down the null and alternate hypotheses;

(c) Determine the level of significance (α level);

(d) Collect evidence and perform the statistical analysis; and

(e) Based on the available evidence, decide whether or not the initial assumption is reasonable and write the conclusion.

9.6.1 Null and Alternate Hypotheses

The null hypothesis states that there is no change or difference in the group means. It is based on the notion that any change from the past is due entirely to random error. We are saying that the population value has not changed from one time to another, or that the sample statistic does not vary significantly from an assumed population parameter. The following are examples of null hypotheses.


One Sample
The mean brand preference score of male teachers aged 35 to 40 is 85.

Two Samples
There is no difference in the mean brand preference score between male and female teachers aged 35 to 40.

The alternate hypothesis (sometimes called the research hypothesis) is the hypothesis that contradicts the null. It is commonly written as Ha. The alternative hypothesis can indicate the direction of the difference or relationship, or assume a neutral position. If a direction is indicated in the alternative hypothesis, we call it a one-tailed test. Otherwise, the test will be a two-tailed test. The following are examples of alternate hypotheses.

One Sample
The mean brand preference score of male teachers aged 35 to 40 is not equal to 85.

Two Samples
There is a difference in the mean brand preference score between male and female teachers aged 35 to 40.

9.6.2 Directional and Non-directional Hypotheses

Hypotheses can be stated as directional or non-directional. This indication is reflected in the alternate hypothesis. If you use terms like more than, less than, positive or negative in stating the relationship between two groups or two variables, then the hypotheses are directional. An example of a directional hypothesis would be: The greater the stress experienced on the job, the more likely an employee is to scout for another job. Another way of stating a directional hypothesis is the If-Then approach: If employees are given more safety training, then they will have fewer accidents.
Non-directional hypotheses postulate a difference or relationship but do not indicate a direction for the difference or relationship. We may postulate a significant relationship between two groups or two variables but we are not able to say whether the relationship is positive or negative. An example of a non-directional hypothesis would be: There is a relationship between stress experienced on the job and the likelihood that an employee will search for another job. Another example of a non-directional hypothesis is: There is a relationship between job commitment and the likelihood of searching for another job.
9.6.3 Sample Statistics versus Population

Inferential statistics help us make judgements about the population from a sample. Sample statistics are summarised values of the sample and are computed using all the observations in the sample. Population parameters are summarised values of the population, but they are seldom known. This is the reason we use sample statistics to make inferences about population parameters.
A null hypothesis refers to a population parameter, not a sample statistic. Based on the sample data, the researcher either rejects the null hypothesis and accepts the alternative hypothesis (there is a meaningful difference between the groups) or fails to reject it (no meaningful difference is detected). In the latter case, the researcher is not able to detect any significant difference between the groups. It is important to understand that while the null hypothesis may not be rejected, it is not necessarily accepted as true. The null hypothesis typically is developed so that its rejection leads to an acceptance of the desired situation. The alternative hypothesis represents what we think may be correct.
In statistical terminology, the null hypothesis is notated as (Ho) and the
alternative hypothesis is notated as (Ha). If the null hypothesis (Ho) is rejected,
then the alternative hypothesis (Ha) is accepted. The alternative hypothesis is the
one you must prove.

SELF-CHECK 9.2
Explain the difference between sample statistics and population
parameters.

9.6.4 Type I and Type II Errors

Whenever a researcher makes inferences about a population from a sample, there is always a risk that the inference may be incorrect. Thus, in research, error can never be completely avoided, and the statistical tests that the researcher performs to accept or reject the null hypothesis may lead to a wrong decision. Researchers, therefore, need to be aware of two types of errors associated with hypothesis testing: Type I Error and Type II Error.

(a) Type I Error
A Type I Error, referred to as alpha (α), occurs when the sample results lead to rejection of the null hypothesis when it is in fact true. The probability of this type of error, also referred to as the level of significance, is the amount of risk regarding the accuracy of the test that the researcher is willing to accept. Thus, the level of significance is the probability of making an error by rejecting a true null hypothesis.
Depending on the research objectives and situation, researchers typically consider either α < .05 or α < .01 an acceptable level of significance. The researchers are willing to accept some risk that they will incorrectly reject the null hypothesis, but that level of risk is specified before the research project is carried out. If the research situation involves testing relationships where the risk of making a mistake is high, the researcher would specify a more stringent level of significance, for example α < .01.
In examining the relationship between two chemicals that might explode, or the failure rate of an expensive piece of equipment, the researcher would not be willing to take much risk. On the other hand, when examining behavioural relationships, or when the risk is less costly, the researcher is willing to take more risk. In some situations, the researcher may even accept a 0.10 level of significance.

(b) Type II Error
The second type of error is referred to as a Type II Error. A Type II Error occurs when, based on the sample results, the null hypothesis is not rejected when it is, in fact, false. Type II Error is generally referred to as beta (β) error. Usually, the researcher specifies the alpha error ahead of time, but the beta error depends on the population parameter (mean or proportion) and/or the sample size.
A third important concept in testing hypotheses for statistical significance is the statistical power of the test. The power of a test is its ability to reject the null hypothesis when the null hypothesis is in fact false. The statistical power of a test can be described as (1 - β), the probability of correctly rejecting a false null hypothesis. The probability of a Type II Error is unknown, but it is related to the probability of a Type I Error.

Extremely low levels of Type I error (α) will result in a high level of Type II error (β), so it is necessary to reach an acceptable compromise between the two types of error. Sample size can help control Type I and Type II Errors. Generally, the researcher will select the sample size in order to increase the power of the test and to minimise Type I and Type II errors.

The trade-off between Type I and Type II Errors has a practical dimension, defined by the costs incurred for each error. Often a change in the status quo is associated with a great cost (the risk of gambling the future of the firm on a new technology, a new investment in equipment, etc.). Since the change must be beneficial, the risk associated with alpha should be kept very low. However, if it is essential to detect changes from a hypothesised mean, the risk of a beta error would be more important. Thus, a higher, less critical alpha level would be chosen.

9.6.5 Steps in Hypothesis Testing

Below are the steps in hypothesis testing (a minimal worked sketch follows the list):

(a) State the null and alternative hypotheses.

(b) Make a judgement about the sampling distribution of the population and then select the appropriate statistical test based upon whether you believe the data is parametric or non-parametric.

(c) Decide upon the desired level of significance (p = .05, .01 or something else).

(d) Collect data from a sample and compute the statistical test to see if the level of significance is met.

(e) Accept or reject the null hypothesis. Determine whether the deviation of the sample value from the expected value would have occurred by chance alone (five times out of one hundred).
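
The following is a minimal Python sketch of these steps applied to the one-sample example from Section 9.6.1 (H0: the mean brand preference score is 85), using a one-sample t-test; the ten sample scores and the choice of test are illustrative assumptions.

```python
# A minimal sketch of the hypothesis-testing steps, applied to the
# one-sample example in Section 9.6.1 (H0: mean brand preference
# score = 85). The ten sample scores are hypothetical.
from scipy import stats

scores = [82, 88, 79, 85, 90, 77, 84, 81, 86, 80]
alpha = 0.05                                   # chosen level of significance

# Two-tailed one-sample t-test against the hypothesised mean of 85
t_stat, p_value = stats.ttest_1samp(scores, popmean=85)

if p_value < alpha:
    print(f"Reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"Fail to reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
```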

9.7 INFERENTIAL STATISTICS

Data collected from questionnaires or other instruments in quantitative research methods have to be analysed and interpreted. Generally, statistical procedures are used to analyse quantitative data. In this section, we will look at some common statistical approaches and emphasise a conceptual understanding of quantitative data analysis.
Figure 9.2 summarises the statistical components that we will be looking at in this section.


Figure 9.2: Summary of statistical components in quantitative data analysis

(a) Mean
The mean is also known as the average. A mean is the sum of all scores divided by the number of scores. The mean is generally used to measure the central tendency or centre of a score distribution. For example, the mean of the following set of integers, 3, 4, 5, 7 and 6, is 5.


Figure 9.3: Mean in distributed scores
Source: Adapted from http://www.tarorigin.com/art/Omasory/Uncertainty/

(b) Standard Deviation
A standard deviation tells us how closely the scores are centred around the mean. As shown in Figure 9.3, when the scores are bunched together around the mean, the standard deviation is small and the bell curve is steep. When the scores are spread away from the mean, the standard deviation is large and the bell curve is relatively flat.

Figure 9.4: Standard deviation

To explore better what the standard deviation means, refer to Figure 9.4. The mean is 20 and the standard deviation (SD) is 5. Figure 9.4 represents the scores obtained on a grid test for two organisation terminals using cluster computing, with the same mean of 20.

(a) One standard deviation (SD = 5) from the mean in either direction on the horizontal axis accounts for around 68% of the terminals in this group. In other words, 68% of the terminals obtained an optimal time of between 15 and 25.

(b) Two standard deviations (5 + 5 = 10) away from the mean account for roughly 95% of the terminals. In other words, 95% of the terminals obtained an optimal time of between 10 and 30.

(c) Three standard deviations (5 + 5 + 5 = 15) away from the mean account for roughly 99% of the terminals. In other words, 99% of the terminals obtained an optimal time of between 5 and 35. A short numerical check of these percentages follows.
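The Python sketch below illustrates the same point numerically. It generates hypothetical "optimal time" scores with a mean of about 20 and an SD of about 5 (randomly simulated data, not the module's figures) and checks what proportion of the scores falls within one, two and three standard deviations of the mean.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    # Hypothetical optimal-time scores: roughly normal, mean 20, SD 5
    scores = rng.normal(loc=20, scale=5, size=10_000)

    mean = scores.mean()
    sd = scores.std()
    print(f"mean = {mean:.2f}, SD = {sd:.2f}")

    for k in (1, 2, 3):
        within = np.mean(np.abs(scores - mean) <= k * sd)
        print(f"within {k} SD of the mean: {within:.1%}")   # about 68%, 95%, 99.7%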

9.7.1 Testing for Significant Differences between Two Means Using the t-Test (Independent Groups)

Let us say you are conducting a study to compare the effectiveness of the use of a service discovery protocol (independent variable) in enhancing network appliance detection in home networks. The mean scores and standard deviations for the application test are shown in Table 9.2, and you want to test the null hypothesis:

H0: There is no significant difference between the experimental group and the control group in terms of enhancing network appliance detection.

To test this, you may use the statistical approach called the t-test to obtain the t-value for independent means. In this case, "independent" means that the two groups consist of different subjects. The t-test gives the probability that the difference between the two means is caused by chance. To test for significance, you will need to set a risk level called the alpha level. In social science research, the alpha level is conventionally set at 0.05. This means that a result which is significant at the .05 level could occur by chance only 5 times in 100 trials.
Table 9.2: Means and Standard Deviations Obtained for the Experimental and Control Groups

                        N     Mean    Standard Deviation
Experimental group     10     13.8    2.10
Control group          10     11.4    1.96

t-value = 2.65; degrees of freedom = 18; p < 0.02


(a) Table 9.2 displays the obtained t-value of 2.65. If you are using statistical software such as SPSS or SAS, the probability value is given directly (here, p < 0.02). We could also refer to a table of critical values to find out whether the t-value is large enough to say that the difference between the groups is unlikely to have been a chance finding.

(b) We also determine the degrees of freedom (df) for the test, which is the sum of the subjects in both groups minus 2 (i.e. n - 2). Given the alpha level, the df and the t-value, we can look up the critical t-value in the table of critical values.

(c) Refer to Table 9.3. The obtained t-value (2.65) is bigger than the critical value (2.1009) for 18 degrees of freedom (20 - 2 = 18). From this, we can conclude that the difference between the means of the two groups is significant at the 0.05 level of significance.
Table 9.3: Extract from the Table of Critical Values

df    p = 0.05    p = 0.01
17    2.1098      2.8982
18    2.1009      2.8784
19    2.0930      2.8609

(d) Please note that the difference is NOT SIGNIFICANT at the 0.01 level of significance, because the t-value (2.65) is smaller than the critical value (2.8784) for 18 degrees of freedom.
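If you are working in Python rather than SPSS or SAS, the same test can be reproduced directly from the summary statistics in Table 9.2. The sketch below uses scipy's ttest_ind_from_stats, which accepts the two means, standard deviations and group sizes; it is offered only as an illustrative check of the calculation, not as the procedure the module prescribes.

    from scipy import stats

    # Summary statistics taken from Table 9.2
    t_value, p_value = stats.ttest_ind_from_stats(
        mean1=13.8, std1=2.10, nobs1=10,   # experimental group
        mean2=11.4, std2=1.96, nobs2=10,   # control group
        equal_var=True,                    # pooled-variance t-test, df = 18
    )
    print(f"t = {t_value:.2f}, p = {p_value:.3f}")
    # Gives approximately t = 2.64 and p = 0.017, consistent with the module's
    # t = 2.65 and p < 0.02 (small differences come from rounding the summary values).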

9.7.2 Testing for Significant Differences between Two Means Using the t-Test (Dependent Groups)

Let us say you would like to conduct a study to compare the effectiveness of the use of a service discovery protocol (independent variable) in enhancing network appliance detection (dependent variable) in ONE home network (IEEE 802.11). You give a pre-test and, after testing the protocol on the IEEE 802.11 network, you give a post-test. Here, the same group of subjects is tested twice. The mean scores and standard deviations obtained for network detection are shown in Table 9.4. You want to test the null hypothesis:

H0: There is no significant difference between the pre-test mean and the post-test mean in terms of network appliance detection enhancement.

Table 9.4: Means and Standard Deviations Obtained for the Pre-test and Post-test Scores

             Mean     Standard Deviation
Pre-test      9.90    1.66
Post-test    10.90    0.99

N = 10; t-value = 1.94; degrees of freedom = 9; p < 0.09

(a) Using the t-test for dependent groups, we obtain a t-value of 1.94. In this case, "dependent" means that the two means are obtained from the same group of subjects.

(b) From Table 9.5, we can see that for 9 degrees of freedom the critical value is 2.2622, which is larger than the obtained t-value of 1.94. We can therefore conclude that the means are NOT significantly different at the 0.05 level of significance.
Table 9.5: Extract from the Table of Critical Values of t

df    p = 0.05    p = 0.01
 8    2.3060      3.3554
 9    2.2622      3.2498
10    2.2281      3.1693
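For the dependent (paired) case, statistical packages normally work from the raw pre-test and post-test scores of the same subjects rather than from summary statistics. The sketch below therefore uses ten invented pairs of scores purely to show the mechanics of a paired t-test in Python; its result will not match Table 9.4, which is based on the module's own raw data.

    from scipy import stats

    # Hypothetical pre-test and post-test scores for the SAME ten subjects
    pre  = [9, 10, 8, 11, 10, 9, 12, 10, 9, 11]
    post = [10, 11, 10, 12, 10, 11, 12, 11, 10, 12]

    # Paired (dependent-groups) t-test: tests whether the mean difference is zero
    t_value, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_value:.2f}, df = {len(pre) - 1}, p = {p_value:.3f}")

    if p_value < 0.05:
        print("The pre-test and post-test means differ significantly at the 0.05 level.")
    else:
        print("The means are not significantly different at the 0.05 level.")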

9.7.3 Testing for Differences between Means Using One-way Analysis of Variance (ANOVA)

In ANOVA, the logic is similar to the t-test, although the calculation differs. Whenever a researcher wants to compare more than two means, the usual choice is One-Way Analysis of Variance, widely known as ANOVA. For example, say you want to conduct an experiment on three types of buffering methods for video streaming over mobile devices. The means and standard deviations obtained are shown in Table 9.6.
Table 9.6: Means and Standard Deviations for Three Types of Buffering Methods

                    N     Mean    Standard Deviation
Buffer Method 1    10     14.6    1.83
Buffer Method 2    10     15.6    2.22
Buffer Method 3    10     18.0    2.10


Table 9.7: Summary of the Buffering Method Analysis

Summary      Sum of Squares    df    Mean Squares    F         Sig.
Treatment        61.066         2      30.533        7.1811    0.003
Within          114.800        27       4.252
TOTAL           175.866        29

In this example, the researcher used One-Way ANOVA and obtained an F-value of 7.1811, which is significant at 0.003 (refer to Table 9.7). Therefore, the null hypothesis of no differences between the means is rejected. However, it is not yet clear which pairs of means contribute to the significance. To answer this, another statistical approach is needed: the researcher can perform post hoc comparisons such as the Scheffe test or the Tukey test, which are applied after an analysis of variance.
Table 9.8: Tukey Test for the Analysis

Buffer Method 1 vs Buffer Method 2    Not significant
Buffer Method 1 vs Buffer Method 3    Significant at p < .01
Buffer Method 2 vs Buffer Method 3    Significant at p < .05

From Table 9.8, we can conclude that there is no significant difference between the performance of Buffer Method 1 and Buffer Method 2. Buffer Method 3 performed significantly better than Buffer Method 2 at the 0.05 level of significance, and it also outperformed Buffer Method 1 at the 0.01 level of significance.
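As with the t-tests, ANOVA is computed from raw scores in practice. The sketch below feeds three invented sets of ten scores (loosely echoing the buffering example, but not the module's actual data) into scipy's one-way ANOVA. The Tukey or Scheffe post hoc comparisons described above are available in other libraries, for example statsmodels' pairwise_tukeyhsd.

    from scipy import stats

    # Hypothetical raw scores for the three buffering methods (10 runs each)
    method1 = [14, 15, 13, 16, 14, 15, 12, 16, 15, 16]
    method2 = [16, 15, 14, 17, 15, 18, 13, 16, 15, 17]
    method3 = [18, 19, 17, 20, 16, 18, 19, 17, 18, 18]

    # One-way ANOVA: H0 says the three population means are all equal
    f_value, p_value = stats.f_oneway(method1, method2, method3)
    print(f"F = {f_value:.2f}, p = {p_value:.4f}")

    # A significant F only tells us that at least one mean differs; a post hoc
    # test (e.g. Tukey or Scheffe) is then needed to identify which pairs differ.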

9.7.4 Correlation Coefficient

To find the relationship, or correlation, between two variables, the statistic used is the correlation coefficient. For example, in a research project you may have collected data on bandwidth rate and on jitter, and you want to find out whether there is a correlation between bandwidth rate and jitter in network performance. It is important to note that a correlation has direction and can be positive or negative. The Pearson product-moment correlation coefficient (represented by r) is used to show the strength of the relationship between two variables. The coefficient can range from r = +1.00 to r = -1.00. Figure 9.5 shows what the coefficient means.


Figure 9.5: Graphs showing different correlation coefficients


Source: http://arts.uwaterloo.ca/~jfsulliv/Lectures%2015%20&%2016.htm

Figure 9.5(a) shows a perfect positive correlation (r = +1), which means an increase in variable y is accompanied by an increase in variable x. Figure 9.5(b) shows a perfect negative correlation (r = -1), which means an increase in variable y is accompanied by a decrease in variable x, or vice versa. Figure 9.5(c) shows a zero correlation (r = 0.00), which means there is no relationship between variable y and variable x.
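The Pearson r is easy to compute once the paired measurements are available. The sketch below uses invented bandwidth and jitter readings (the numbers are illustrative only) and scipy's pearsonr, which returns both the coefficient and its p-value.

    from scipy import stats

    # Hypothetical paired measurements: higher bandwidth tends to go with lower jitter
    bandwidth = [10, 20, 30, 40, 50, 60, 70, 80]   # Mbps
    jitter    = [42, 38, 35, 30, 28, 22, 20, 15]   # ms

    r, p_value = stats.pearsonr(bandwidth, jitter)
    print(f"r = {r:.2f}, p = {p_value:.4f}")
    # An r close to -1 indicates a strong negative correlation:
    # as bandwidth increases, jitter decreases.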

SELF-CHECK 9.3

1. Describe the types of survey for quantitative research methods.

2. Identify the common statistical components for quantitative methods.


SELF-CHECK 9.4

1. When would you use a longitudinal survey rather than a cross-sectional survey? Discuss.

2. Discuss a situation in which you would use Analysis of Variance (ANOVA) in a research work (use an example with a research topic of your own).

3. Discuss some ethical issues you think could arise during survey research.

After data is collected and before it is analysed, the researcher must examine it to ensure its validity. Blank responses, referred to as missing data, must be dealt with accordingly.

If the questions were pre-coded, the responses can be entered directly into a database. If they were not pre-coded, a coding system must be developed so that they can be included in the database.

The typical tasks involved are editing, dealing with missing data, coding, transformation and entering data.

Descriptive analysis refers to the transformation of raw data into an understandable form so that their interpretation is not difficult. Summarising, categorising, rearranging and other forms of analysis yield descriptive information.
Tabulation refers to the orderly arrangement of data in a table or other
summary format. It is useful for indicating percentages and cumulative
percentages as well as frequency distributions. The data may be described by
measures of central tendency, such as the mean, median or mode.
Cross-tabulation shows how one variable relates to another variable to reveal
differences between groups. Such cross-tabulations should be limited to
categories related to the research problem and purpose. It is also useful to put
the results into percentage form to facilitate inter-group comparisons.
A hypothesis is a statement of assumption about the nature of the world.
A null hypothesis is a statement about the status quo.

An alternative hypothesis is a statement indicating the opposite of the null hypothesis.
In hypothesis testing, a researcher states a null hypothesis about a population
mean and then attempts to disprove it. If a sample mean is contained in the
region of rejection, the null hypothesis is rejected.
There are two possible types of error in statistical tests: Type I, rejecting a true
null hypothesis, and Type II, accepting a false null hypothesis.
Quantitative methods deal with numbers and anything measurable in a systematic investigation of phenomena.

Three types of descriptive designs are observation, correlation and survey research.

The Likert scale is a common rating scale widely used in questionnaires.

There are two types of survey: the cross-sectional survey and the longitudinal survey.

Questionnaires are widely used because they are cost-effective and easy to administer.

The t-test is used to test for significant differences between means for independent and dependent groups.
ANOVA is used when comparing the means of more than two groups.
Correlation coefficient is used to test the strength of relationship between two
variables.

ANOVA
Comparing Differences
Correlation
Descriptive Statistics
Inferential Statistics
Mean
Median
Mode
Normal Distribution
Standard Deviation
T-test
Variance


Topic 10  Proposal Writing and Ethics in Research

LEARNING OUTCOMES

By the end of this topic, you should be able to:

1. Define what a research proposal is;

2. List the eight items in a research proposal;

3. Identify the guidelines for writing a research proposal;

4. Elaborate on the three common weaknesses in a research proposal; and

5. Describe the ethics in research.

INTRODUCTION

Before embarking on a research project, students must have an overall research plan that indicates the research problem to be studied, the research objectives, the significance of the research, the strategies to obtain answers to the research problems and the research project implementation schedule. This overall research plan is called a research proposal. It is very important that students write a good research proposal because, without one, the student may not get a research supervisor or even approval to carry out the research. The student must also know about ethics in research, because ethics govern the researcher's behaviour in terms of what should and should not be done in the research project.

10.1 WHAT IS A RESEARCH PROPOSAL?

This topic addresses guidelines for writing a research proposal for your research work. All students are required to write a research proposal before venturing into research work.

A research proposal is a document written to state your proposed research direction and the study you intend to do. It serves to inform your academic supervisor, or a potential provider of a research contract, of your conceptualisation of the total research process you propose to undertake, so that its suitability can be examined.

A research proposal is an overall plan, scheme, structure and strategy to obtain answers to the research problems that constitute your research project. A research proposal can be rejected if your supervisor or graduate committee finds it poorly devised.

Therefore, you need a well-planned research proposal before undertaking any research task. If the proposal is well designed, it will be much easier for you to outline the entire research process, and it will help you to prepare your thesis or dissertation in a sequential manner.

10.2 CONTENTS OF A RESEARCH PROPOSAL

In this section, we describe in detail the contents of a research proposal. Firstly, let us examine Figure 10.1, which shows the contents of a research proposal.

Figure 10.1: Contents of a research proposal


(a) Introduction

    (i) In this section, you should provide an overview of the issues that you intend to study. The content of the introduction should be brief, precise and straight to the point. After giving an overview of the scope of the research, you will need to narrow it down to the specific area of your concern.

    (ii) Try to elaborate on the main theme that needs to be addressed. Identify the gap that exists, and the research question will surface naturally.

    (iii) Define the problem statement in your proposal. This will give the reader a clear perception of the issue you are going to solve. However, the problem statement you write in the proposal may only be tentative at the point of proposal preparation, because the research has not been carried out yet.

    (iv) Use simple sentences and make sure you narrow down the research issue to focus on a very specific field.
(b) Research Objectives

    (i) List your research objectives; they should indicate the central theme of the research that you intend to study.

    (ii) Use action-oriented verbs when you write the objective statements, such as "to find out", "to determine" and "to ascertain", and list the objectives numerically.

(c) Research Questions

    Research questions are formulated to address the research objectives and to break the research objectives into smaller parts. Research questions will influence the research methodology and the type of analysis to be performed.

(d) Significance of the Research

    (i) Mention briefly why you need to do the research and provide justification for it.

    (ii) You may elaborate on the significance of the research based on certain criteria, for instance, a problem that demands attention because the findings can solve it or influence practice.

    (iii) You can also provide justification if the methodology you are going to use has some degree of novelty and your research would contribute new knowledge.

    (iv) Other criteria you can mention are the variables that you are going to use, the expected outcomes of your research and their influence on the model or design.
(e) Literature Review

    (i) In this section, summarise the work related to your research and what others have done to date in your field of interest.

    (ii) Try to read a thesis in a similar area to the one you are investigating, to get a feel for it.

    (iii) It will be an added advantage if you can provide, in the proposal, one or two critiques of related work that you have come across. This will give readers a good impression that you have done your homework.

    (iv) Introduce definitions of the key terms.

    (v) Conclude the literature review with a theoretical background that describes the model you are going to use in the research.

(f) Methodology

    (i) Generally, it is not necessary to describe the methodology you are going to use in detail. However, you should justify why it was chosen over other similar methodologies.

    (ii) You may explain why you are using a certain theory or model, and whether your research approach will be qualitative, quantitative, or a combination of both.

    (iii) Describe the variables, sampling techniques and data collection methods as well.

    (iv) Explain the type of data involved and what type of analysis and testing will be performed.

(g) Project Schedule

    Provide a Gantt chart or timetable specifying how long it will take to complete your research work. Also indicate how long you will take for data collection, analysis and writing up the final report or document. Some research proposals require milestone dates to give a clear picture of the expected accomplishments.

Figure 10.2: Gantt chart for a research project

(h) References

    (i) A list of references must be provided, following the academic guidelines or scholarly fashion. References convince the reader that your proposal is comprehensive and demonstrate your understanding of the particular field of interest.

    (ii) Use the APA (American Psychological Association) citation style or another bibliographical standard accepted in ICT.

SELF-CHECK 10.1

1. What are the main contents of a research proposal?

2. What should you write in the literature review section of a research proposal?

10.3 GUIDELINES FOR WRITING A RESEARCH PROPOSAL

There are several steps and guidelines to be followed during research proposal preparation.
(a) Familiarise yourself with word-processing software (e.g. Microsoft Word, OpenOffice) and learn its features, for instance inserting tables, graphs, footnotes and other specially formatted elements. This will help you when preparing the research report at the end of the process.

(b) Follow the guidelines specified by the institution, organisation or funding agency to which you are submitting the proposal. Write according to the given guidelines and the specified format.

(c) While writing the first proposal, present your ideas by narrowing them down sequentially, and focus on presenting the information in an interesting manner. Remember that your proposal should be expressed clearly and should give an overview of your research intention.


(d) Write about and explain your research problem at the beginning of the proposal (i.e. in the Introduction section). This is important to capture the reader's attention, since the entire research process is driven by the research problem.

(e) Write about the methodology you are going to implement briefly and precisely. It is good practice to outline the methods and sources of data at the proposal stage itself. This will put your proposal in a better position when its worth and potential contribution are being judged.

SELF-CHECK 10.2
Identify the guidelines involved in constructing a research proposal.

10.4 COMMON WEAKNESSES IN A RESEARCH PROPOSAL

As a learner, you must ensure that your research proposal meets the requirements and guidelines specified by your institution or university. This is particularly important to help you plan your research process and ease the preparation of your dissertation or thesis. There are some common weaknesses encountered by researchers during research proposal preparation (Allen, 1960). Proposals submitted by researchers and students for academic projects tend to have the weaknesses described in Table 10.1.
Table 10.1: Weaknesses in a Research Proposal

(a) Research Problem Justification
    - The project description is unfocused and the research direction is unclear.
    - The research problem is not significant enough to justify a new contribution to knowledge.
    - The research problem is of interest only to a particular group and has many limitations.
    - The research problem may be more complex than the author actually thinks.
    - The hypotheses are doubtful or rest on insufficient evidence.

(b) Research Methodology
    - The research design and methodology are too vague and unfocused.
    - The data described by the author are either difficult to obtain or not appropriate for the research problem stated earlier.
    - The methods and measurement instruments are inappropriate for the research design.
    - The proposal does not state the equipment to be used, or specifies outdated equipment for the research.
    - The proposed statistical approach has not received adequate consideration, is too simple or is unlikely to yield accurate results.

(c) Proposal Author
    - The author does not have sufficient training or experience for the proposed research topic. Hence, it is important for you to choose a topic which you are good at and familiar with in your specified field.
    - The author has insufficient time to devote to the research project.
    - The author does not critically review the related works but simply rewrites the available literature on the particular research topic.
    - The author does not quote and cite the necessary references from academic journals and research papers.

SELF-CHECK 10.3

1. Describe the common weaknesses encountered during proposal writing.

2. How is a good research proposal benchmarked?

10.5 RESEARCH ETHICS

Ethics are norms or standards of behaviour that guide choices about our conduct and our relationships with others; they refer to the appropriateness of the researcher's behaviour in relation to the rights of those who become the subjects of the research or who may be affected by the pursuit of the research. A researcher needs to consider ethical issues when doing research so as to ensure that no one is injured or suffers adverse consequences from research activities. The researcher must be sensitive to the impact of the research work on those who are approached for data or information, those who directly or indirectly provide access and cooperation, and those who will be affected by the results of the research.

10.6 KEY ETHICAL ISSUES

Some of the key ethical issues that might arise are:


(a) Privacy of possible and actual participants;

(b) Voluntary nature of participation and the right to withdraw partially or completely from the process;

(c) Consent and possible deception of participants;

(d) Maintenance of the confidentiality of data provided by individuals or identifiable participants, and their anonymity;

(e) Reactions of participants to the way the researcher seeks to collect data;

(f) Effects on participants of the way the researcher uses, analyses and reports the data; and

(g) Behaviour and objectivity of the researcher.

Privacy is accepted as the key ethical issue that the researcher has to confront in carrying out any research project. Almost all aspects of ethics (for example, consent, confidentiality, participant reactions, and the effects of the way the researcher uses, analyses and reports research findings) have the capacity to affect, or are related to, the privacy of participants.

10.7 ETHICAL ISSUES DURING THE INITIAL STAGES OF THE RESEARCH PROCESS

During the early stages of the research process, the researcher may have to seek access to agencies, organisations or even individuals. Ethical problems may arise when participants are not clear about the objectives of the research. Ethical issues may also arise if a researcher attempts to apply pressure in order to be granted access. Issues relating to privacy may arise when a researcher tries to gain access through telephone calls at inappropriate times or approaches participants at odd hours. Access to secondary data may also have ethical consequences, for example, when the researcher obtains personal data on individuals who have not consented to be involved in the project.

Consent to participate in a research project is not a straightforward matter, because the fact that someone agrees to participate does not necessarily imply that full consent has been given.

The nature of consent can be differentiated as shown in Figure 10.3.

Figure 10.3: Differences in nature of consent


ACTIVITY 10.1
In your opinion, why do ethical issues become a major concern in the
research process? Explain how to handle this issue.

SELF-CHECK 10.4

Tick True or False for each statement below:

1. An internal proposal is created by a research unit or division for a client who is in another department or division of the corporation.

2. A research proposal will tell the reader what the results of the study will be.

3. Collection instruments should be destroyed if it is possible to use the data. The problem statement section of the proposal explains the problem's concrete consequences and states the problem.

4. Literature review is the study of published reports and secondary data.

5. Some of the most important sections of the external proposal are the objectives, design and qualifications.

6. An external research agency's fee is always paid in advance.

7. Ethics are norms for behaviour guiding moral choices.

8. Exploratory data reconstructs respondents' profiles or identifies them.

9. A research consultant should adhere to the conclusions emerging from the study and the real data, even if this means that the next contract goes to someone else.

10. The right to privacy means a respondent cannot refuse to answer questions on matters of public concern.


A research proposal is an overall plan, scheme, structure and strategy to obtain answers to the research problems that constitute your research project.

A research proposal should be concise and precise, and should highlight the research problem as its main theme.

A Gantt chart is important to illustrate your planning and project schedule throughout the research process.

The process of formulating and clarifying the research topic is the most crucial part of the research process.

Writing a research proposal helps the researcher to organise ideas, and the proposal can also be thought of as a contract between the researcher and the client.

The content of the research proposal should tell the reader what the researcher wants to do, why he wants to do it, what he wants to achieve and how he wants to achieve it.

Research ethics should be recognised and considered from the outset of the research project and used as one of the criteria in judging the proposal.

Ethical concerns are likely to occur at all stages of the research project: when seeking access to data, during data collection, while analysing data and in reporting the results.

Ethics

Project Schedule

Proposal

Significance of Study


MODULE FEEDBACK
MAKLUM BALAS MODUL

If you have any comment or feedback, you are welcome to:

1. E-mail your comment or feedback to modulefeedback@oum.edu.my

OR

2. Fill in the Print Module online evaluation form available on myINSPIRE.

Thank you.

Centre for Instructional Design and Technology
(Pusat Reka Bentuk Pengajaran dan Teknologi)

Tel No.: 03-27732578
Fax No.: 03-26978702
