
5/22/2018

RESEARCH TOOL
Presented by Josfeena Bashir, Ph.D. Scholar

Introduction
 Tools are instruments used to collect information for performance assessments, self-evaluations, and external evaluations.
 Tools need to be strong enough to support what the evaluations find during research.
 Depending on the nature of the information to be gathered, different instruments are used to conduct the assessment and gather data.

DEFINITION
 It is an instrument or machine that aids in accomplishing a task. It is a testing device for measuring a given event, such as a questionnaire, an interview, or a set of guidelines or a checklist for observation.

MEANING
 The instruments used for the purpose of data collection are measurable and observable for data analysis and interpretation, and are constructed by a researcher according to the research objectives.

Characteristics of a Good Research Tool
 The instrument must be valid and reliable.
 It must be based upon the conceptual framework.
 It must gather data suitable for and relevant to the research topic.
 It must gather data that would test the hypotheses or answer the questions under investigation.
 It should be free from all kinds of bias.
 It must contain clear and definite directions to accomplish it.
 It must be accompanied by a good cover letter.
 It must be accompanied, if possible, by a letter of recommendation from a sponsor.


Types of Research Tools
 1. Questionnaires
 2. Interviews
 3. Schedules
 4. Observation Techniques
 5. Rating Scales

Questionnaires
 "A questionnaire is a systematic compilation of questions that are submitted to a sampling of population from which information is desired."
--Barr, Davis & Johnson
 "In general, the word questionnaire refers to a device for securing answers to questions by using a form which the respondent fills in himself."
--W. J. Goode & P. K. Hatt

Characteristics of a Good Questionnaire
 It deals with an important or significant topic.
 Its significance is carefully stated on the questionnaire itself or on its covering letter.
 It seeks only those data which cannot be obtained from resources like books, reports and records.
 It is as short as possible, only long enough to get the essential data.
 It is attractive in appearance, neatly arranged and clearly duplicated or printed.
 Directions are clear and complete, and important terms are clarified.
 The questions are objective, with no clues, hints or suggestions.
 Questions are presented in order from simple to complex.

 Double negatives, adverbs and descriptive adjectives are avoided.
 Double-barrelled questions, i.e. putting two questions in one, are also avoided.
 The questions carry an adequate number of alternatives.

Merits of the Questionnaire Method
 It is very economical.
 It is a time-saving process.
 It covers research over a wide area.
 It is very suitable for special types of responses.
 It is most reliable in special cases.
 It is easy to tabulate, summarize and interpret.


Demerits of the Questionnaire Method
 Through this method we get only limited responses.
 Lack of personal contact.
 Greater possibility of wrong answers.
 Chances of receiving incomplete responses are high.
 Sometimes answers may be illegible.
 It may be useless for many problems.

The Interview
 An interview is a two-way method which permits an exchange of ideas and information.
 "Interviewing is fundamentally a process of social interaction."
--W. J. Goode & P. K. Hatt
 "The interview constitutes a social situation between two persons; the psychological process involved requires both individuals to mutually respond, though the social research purpose of the interview calls for a varied response from the two parties concerned."
--Vivien Palmer
 "The interview may be regarded as a systematic method by which a person enters more or less imaginatively into the inner life of a comparative stranger."
--P. V. Young
 In an interview a rapport is established between the interviewer and the interviewee. Not only is the physical distance between them annihilated; the social and cultural barrier is also removed, and a free mutual flow of ideas to and fro takes place. Both create their respective impressions upon each other.

 The interview brings them both to the same level, and an emotional attachment supervenes between them. In an interview all formalities are laid aside and the gate is opened for delving into the intellectual, emotional and subconscious stirrings of the interviewee. Thus the 'depth' of the subject is plumbed to the very bottom of his emotional pool, and the truthfulness of his responses may be checked.

Difference between Questionnaire and Interview

Questionnaire Method                                Interview Method
1. Data is gathered indirectly.                     1. Data is gathered directly.
2. No face-to-face contact between the two.         2. There is face-to-face contact between
                                                       interviewer and interviewee.
3. The interviewer needs only general knowledge     3. A skilful interviewer is needed.
   of the topic.
4. The interviewee may hesitate to write answers.   4. Some confidential information can also
                                                       be obtained.
5. We get written information only.                 5. We get both written and oral information.


Characteristics of an Interview
 1. The interviewer can probe into causal factors, determine attitudes, and discover the origin of a problem.
 2. It is appropriate for dealing with young children and illiterate persons.
 3. It makes cross-questioning possible.
 4. It helps the investigator to gain an impression of the person concerned.
 5. It can deal with delicate, confidential and even intimate topics.
 6. It has flexibility.
 7. Sincerity, frankness, truthfulness and insight of the interviewee can be better judged through cross-questioning.
 8. It gives the respondent no chance to modify his earlier answers.
 9. It is applicable not only in the survey method, but also in historical, experimental, case and clinical studies.

TYPES OF INTERVIEW
 1. Personal Interview
 2. Telephone Interview
 3. Focus Group Interview
 4. Depth Interview
 5. Projective Techniques

Personal Interview
 Communication is face to face and two-way, between the interviewer and the respondent.
 Generally the personal interview is carried out in a planned manner and is referred to as a 'structured interview'.
 It can be done in many forms, e.g. door to door or as a planned formal executive meeting.
 A personal interview involves a lot of preparation.

Stages of the Interview
 Generally a personal interview should go through the following five stages:
 a. Rapport Building - The interviewer should increase the receptiveness of the respondent by making him believe that his opinions are very useful to the research, and that the interview is going to be a pleasure rather than an ordeal.
 b. Introduction - An introduction involves the interviewer identifying himself by giving his name, purpose and sponsorship, if any. An introductory letter goes a long way in conveying the study's legitimacy.
 c. Probing - Probing is the technique of encouraging the respondents to answer completely, freely and relevantly.
 d. Recording - The interviewer can either write the response at the time of the interview or after the interview. In certain cases, where the respondent allows it, audio or visual aids can be used to record answers.
 e. Closing - After the interview, the interviewer should thank the respondent and once again assure him about the worth of his answers and the confidentiality of the same.


Telephone Interview
 When information is collected from the respondent by asking him questions on the phone, it is called a telephone interview. The combination of telephone and computer has made this method even more popular. It has certain advantages and disadvantages.

Focus Group Interview
 A focus group interview is an unstructured interview which involves a moderator leading a discussion between a small group of respondents on a specific topic.

Depth Interview
 A depth interview is non-directive in nature; the respondent is given freedom to answer within the boundaries of the topic of interest.

Projective Techniques
 Projective techniques involve the presentation of an ambiguous, unstructured object, activity or person that a respondent is asked to interpret and explain. In projective techniques, the respondents are asked to interpret the behaviour of others or of objects, and in this way they indirectly reveal their own behaviour in the same situation.

Merits of the Interview
 Direct research.
 Deep research.
 Knowledge of past and future.
 Knowledge of special features.
 Mutual encouragement is possible.
 Supra-observation is possible.
 Knowledge of historical and emotional causes.
 Examination of known data.


Disadvantages of the Interview
 May provide misleading information.
 Defects due to the interviewee (low level of intelligence, or emotionally unbalanced).
 Results may be affected by the prejudices of the interviewer.
 Results may be affected by the difference in mental outlook of interviewee and interviewer.
 One-sided and incomplete research.

Schedule
 When a researcher uses a set of questions for the purpose of an interview, it is known as a schedule.
 "Schedule is the name usually applied to a set of questions which are asked and filled in by an interviewer in a face to face situation with another."
--W. J. Goode & P. K. Hatt
 By a schedule we cannot, however, obtain information about many things at once. It is best suited to the thorough study of a single item.

 According to Thomas Carson McCormick, "The schedule is nothing more than a list of questions which it seems necessary to test the hypothesis."
 Thus a schedule is a list of questions formulated and presented with the specific purpose of testing an assumption or hypothesis.
 In the schedule method the interview occupies a central place and plays a vital role. As a matter of fact, success in the use of the schedule is largely determined by the ability and tact of the interviewer rather than by the quality of the questions posed.
 Because the interviewer himself poses the questions and notes down the answers all by himself, the quality of the questions does not have as great a significance.

Types of Schedule
 Rating Schedules are used to obtain opinions, preferences, etc. from respondents over statements on the phenomenon studied. The schedule consists of positive and negative statements of opinion on the phenomenon.
 Document Schedules are used to collect data/information from recorded evidence and/or case histories. Here the blanks, functional-issue-related blanks and the like are to be filled up from records and documents.
 Survey Schedules are like questionnaires.
 Observation Schedules are used when the observational method of data collection is used. These could be structured or unstructured.
 Interview Schedules are used for collecting data when the interview method of communication with the respondents is used.


Important Features of the Schedule
 The schedule is presented by the interviewer. The questions are asked and the answers are noted down by him.
 The list of questions is a mere formal document; it need not be attractive.
 The schedule can be used in a very narrow sphere of social research.
 It aids in delimiting the scope of the study and concentrating on the circumscribed elements essential to the analysis.
 It aims at delimiting the subject.
 In the schedule the list of questions is pre-planned and noted down formally, and the interviewer is always armed with the formal document detailing the questions.
 Thus the interviewer does not need to depend upon memory.

Points to Keep in Mind While Designing a Schedule
 The interviewer should not frame long, complex or defective questions.
 Unrelated and unnecessary questions should not be asked.
 The schedule should not contain personal or upsetting questions.
 Its questions should be simple, clear and relevant to the topic.
 Questions should be suitable to the respondent's intelligence level.
 Impersonal, indirect and unambiguous questions should be included in the schedule.

Merits of the Schedule
 Higher percentage of responses.
 Possible to observe personality factors.
 Through the interview, personal contact is possible.
 It is possible to know about the defects of the interviewee.
 It is possible to give a human touch to the schedule.
 Removal of doubts is possible because face-to-face interaction is there.


Observation Technique
 This is the most commonly used technique of evaluation in research.
 It is used for evaluating the cognitive and non-cognitive aspects of a person.
 It is used in evaluating performance, interests, attitudes and values towards life problems and situations.
 It is the most useful technique for evaluating the behaviour of children.
 It is a technique of evaluation in which behaviours are observed in natural situations.

Definition
 "It is thorough study based on visual observation. Under this technique group behaviours and social institutions' problems are evaluated."
--C. Y. Younge
 "Observation employs relatively more visual and other senses than audio and vocal organs."
--C. A. Mourse

 The study of the cause-effect relationship and of events in their original form is known as observation. Observation seeks to ascertain what people think and do by watching them in action as they express themselves in various situations and activities.
 Observation is recognized as the most direct means of studying people when one is interested in their overt behaviour. In questionnaires and interviews people may write answers as they think they behave, but this is often different from what they actually do. These restrictions are missing in observation, so observation is a more natural way of gathering data.
 The artificiality and formality of questionnaires and interviews are replaced by reality and informality in observation. Data obtained through observation are more real and true than the data collected by any other method. It also plays a particular part in survey procedure.

Characteristics of Observation
 It serves a formulated research purpose.
 It is planned systematically rather than occurring haphazardly.
 It is systematically recorded and related to more general propositions.
 It is subjected to checks and controls with respect to validity, reliability and precision.
 It is a direct technique to study an object, an event or a problem.
 It is based mainly on the visual-audio scene.
 It employs one's own experiences.
 It establishes cause-effect relationships.


 It is an objective technique of data collection.
 It is both an objective and a subjective evaluation technique.
 It is a formal as well as an informal technique.
 It is a quantitative as well as a qualitative technique for data collection.

Types of Observation
 Participant and Non-participant Observation
 Structured and Unstructured Observation
 Controlled and Uncontrolled Observation

Advantages
 It is a reliable and valid technique of collecting data and information.
 We get first-hand data through this method.
 A record of the observation is available immediately.
 It is a simple, broad and comprehensive method.
 It is the oldest technique of data collection and of getting direct information.

Limitations
 It has a limited scope for its use, because all events cannot be observed directly.
 It is a subjective method.
 It is a very time-consuming process.
 It is costly, and so energy-consuming as well.

 The presence of the observer influences the behaviour of the person, i.e. the subject becomes conscious.
 In the case of covert behaviour, which cannot be observed, it is not useful.
 The observer should be trained and experienced.

Rating Scale
 Rating is a term applied to the expression of opinion or judgment regarding some situation, object or character. Opinions are usually expressed on a scale of values; rating techniques are devices by which such judgments may be quantified.


Definition
 "Rating is, in essence, directed observation."
--Ruth Strang
 "A rating scale ascertains the degree, intensity and frequency of a variable."
--Van Dalen
 Rating techniques are more commonly used in scaling traits and attributes. A rating method is a method by which one systematizes the expression of opinion concerning a trait. The rating is done by parents, teachers, a board of interviewers and judges, and even by the self.

 The special feature of the rating scale is that the attitudes are evaluated not on the basis of the opinions of the subjects but on the basis of the opinions and judgments of the experimenter himself.
 In a rating scale, data are collected through verbal behaviour, facial expression, personal documents, clinical-type interviews, projective techniques, and immediate experiences such as emotions, thoughts and perceptions.

Types
 Numerical Scales
 Graphical Scales
 Percentage Rating
 Standard Scales

Numerical Scales
 A numerical scale consists of a sequence of defined numbers supplied to the rater or observer. The rater assigns to each stimulus to be rated an appropriate number, as defined by the scale.

Graphical Scales
 The scales are presented graphically, with descriptive cues corresponding to the different scale steps. In this scale, a straight line is drawn vertically or horizontally with various cues to help the rater. The line may be segmented in units or continuous.


Percentage Rating
 This technique involves placing objects, persons, etc. among different specified percentage groups or into different percentiles. For example: in the highest 5 per cent of the group, in the middle 25 per cent of the group, in the lowest 10 per cent, etc.

Standard Scale
 A standard scale is one in which the rater is presented with standards having pre-established scale values. These standards usually consist of objects of the same kind. Examples: handwriting scales, portrait matching, the man-to-man scale, etc.
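The percentage-rating idea can be sketched in code. This is a minimal illustration with hypothetical test scores: each score's percentile rank is taken as the share of scores falling below it, and scores are then placed into broad percentage groups like those mentioned above.

```python
# Sketch: percentage rating -- placing hypothetical scores into
# broad percentage groups by percentile rank.
scores = [35, 42, 48, 51, 55, 58, 61, 64, 67, 70,
          72, 74, 76, 79, 81, 84, 86, 90, 93, 97]

def percentile_rank(score, all_scores):
    # Share of scores strictly below the given score, as a percentage.
    below = sum(s < score for s in all_scores)
    return 100 * below / len(all_scores)

def group(score, all_scores):
    # Hypothetical group boundaries, echoing the slide's examples.
    pr = percentile_rank(score, all_scores)
    if pr >= 95:
        return "highest 5%"
    if pr < 10:
        return "lowest 10%"
    return "middle"

print(group(97, scores))  # highest 5%
print(group(35, scores))  # lowest 10%
```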

Advantages
 Writing reports to parents.
 Filling out admission blanks for colleges.
 Finding out students' needs.
 Making recommendations to employers.
 Supplementing other sources of understanding about the child.
 Stimulating effect upon the ratees.
 Easy to administer.

Limitations
 Differences in rating abilities.
 Differences in reliability as subjects for rating.
 Agreement among raters of one type of contact only.
 An average of several raters is superior to a single rating.
 Impact of emotions.
 Limits of self-rating.
 Over-rating.
 Limits of rating of specific qualities.
 Limits of justification.


DEVELOPING A RESEARCH TOOL
 Requirements
 Clearly define the construct of the research tool.
 Conduct a pilot study to test the research tool.
 Determine the validity of the research tool.
 Documentation
 A clear description of the construct
 Old and new versions of items
 The rationale for why certain scorings were chosen
 Results of pilot testing
 The final version of the research tool

DEVELOPING A RESEARCH TOOL
 Responsibilities
 Project leaders:
 Decide whether a new research tool needs to be developed;
 Inspect the defined construct by reading and adding comments;
 Evaluate each step in the development of the instrument with the executing researcher;
 Ensure that pilot tests are executed;
 Ensure that the research tool is carefully evaluated.
 Executing researcher:
 Define a clear construct;
 Choose an appropriate measurement method;
 Search, select and formulate clear items;
 Choose the scoring system carefully;
 Perform pilot tests in a small group of the selected population and make sure to adapt the instrument wherever needed;
 Evaluate the instrument according to the 'Evaluating instruments' guidelines.

Process
 Step 1: Definition and elaboration of the construct intended to be measured
 Step 2: Choice of measurement method (e.g. questionnaire/physical test)
 Step 3: Selecting and formulating items
 Step 4: Scoring issues
 Step 5: Pilot study
 Step 6: Field-testing

Step 1: Definition and elaboration of the construct intended to be measured
 The first step in instrument development is conceptualization, which involves defining the construct and the variables to be measured.
 When the construct is not directly observable (a latent variable), the best choice is to develop a multi-item instrument (De Vet et al., 2011). When the observable items are consequences of (i.e. reflect) the construct, this is called a reflective model.
 When the observable items are determinants of the construct, this is called a formative model.
 When you are interested in a multidimensional construct, each dimension and its relation to the other dimensions should be described.


Step 2: Choice of measurement method
 Some constructs form an indissoluble alliance with a particular research tool, e.g. body temperature is measured with a thermometer, and a sphygmomanometer is usually used to assess blood pressure in clinical practice.
 The options are therefore limited in these cases, but in other situations more options exist.
 For example, physical functioning can be measured with a performance test, observations, or with an interview or self-report questionnaire.
 With a performance test for physical functioning, information is obtained about what a person can do, while by interview or self-report questionnaire information is obtained about what a person perceives he/she can do.

Step 3: Selecting and formulating items
 To get input for formulating items for a multi-item questionnaire, you could examine similar existing instruments from the literature that measure a similar construct, e.g. for a different target population, and talk to experts (both clinicians and patients) using in-depth interview techniques.
 In addition, you should pay careful attention to the formulation of response options, instructions, and the choice of an appropriate recall period (Van den Brink & Mellenbergh, 1998).

Step 4: Scoring issues
 Many multi-item questionnaires contain 5-point item scales, and are therefore ordinal scales. Often the total score of the instrument is considered to be an interval scale, which makes the instrument suitable for more statistical analyses. Several questions are important to answer:
 How will you calculate (sub)scores? Add the items, use the mean item score, or calculate Z-scores?
 Are all items equally important, or will you use (implicit) weights? Note that when an instrument has 3 subscales, with 5, 7, and 10 items respectively, the total score calculated as the mean of the subscale means differs from the total score calculated as the mean of all items.
 How will you deal with missing values? In case of many missings (>5-10%), consider multiple imputation (Eekhout et al., 2014).
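The weighting point about subscales can be verified with a small worked example. The item scores below are hypothetical 5-point ratings; the two totals disagree because pooling all items implicitly gives longer subscales more weight.

```python
# Sketch: two ways of scoring an instrument with 3 subscales of
# 5, 7 and 10 items (hypothetical 5-point item scores).
subscales = [
    [4, 5, 4, 5, 4],                        # subscale A: 5 items
    [2, 2, 3, 2, 2, 3, 2],                  # subscale B: 7 items
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],         # subscale C: 10 items
]

def mean(xs):
    return sum(xs) / len(xs)

# Method 1: mean of the subscale means (each subscale weighs equally).
score_subscale = mean([mean(s) for s in subscales])

# Method 2: mean of all items pooled (longer subscales weigh more).
all_items = [x for s in subscales for x in s]
score_pooled = mean(all_items)

print(round(score_subscale, 3))  # 3.229
print(round(score_pooled, 3))    # 3.091
```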

Step 5: Pilot study
 Be aware that the first version of the instrument you develop will (probably) not be the final version. It is sensible to (regularly) test your instrument in small groups of people. A pilot test is intended to test the comprehensibility, relevance, acceptability and feasibility of your research tool.

Step 6: Field-testing


Standardization of a Research Tool
 Standardization (or standardisation) is the process of developing and implementing technical standards based on the consensus of different parties, including firms, users, interest groups, standards organizations and governments. Standardization can help to maximize compatibility, safety, repeatability or quality.

Process
 The first step consists of the emergence of a proposal for a new tool, or for the revision or amendment of an existing tool. The proposal can emerge from a representative of any sector of the research community.
 The directorate of the standardization organization preliminarily examines the proposal to determine whether it is consistent with the underlying principles for the preparation of research tools.
 The division council of the standardization organization decides to approve or reject the proposal for the preparation of the new tool.

 After approval by the division council, the work of drafting the tool is allotted to an existing technical committee or sectional committee.
 The technical committee works within the framework of the governing policies and procedures for the preparation of the standard tool.
 The members of the committee are required to have good technical knowledge of what is to be standardized.
 The technical committee reviews the tool extensively and then passes it on to the secretariat for editing and wide circulation.
 The tool is widely circulated. The aim of wide circulation is to inform every interested party in the country or abroad which may be affected by the tool, and to invite critical review and comments.
 The comments on the tool are systematically examined by the technical or sectional committee. In the light of the committee's discussions, the final version of the tool is drawn up by the secretariat, incorporating the comments accepted by the technical or sectional committee.

 The final version of the tool is submitted to the division council for approval, and finally to the general council or its chairman. Once approved by these offices, the tool gets the status of a standard.
 The approved tool is then published by the secretariat, and the published tool is then released for use by the public.

Reliability of the Tool
 Reliability refers to the consistency, stability and repeatability of results, i.e. the result of a research instrument is considered reliable if consistent results have been obtained in identical situations but different circumstances.
 The researcher is always interested in collecting data that are reliable. The reliability of an instrument concerns its consistency and stability. If a researcher is using a thermometer to measure body temperature, he would expect it to provide the same reading each time it was placed in a constant-temperature water bath.


 Regardless of the type of research, the reliability of the study instrument(s) is always of concern. Reliability needs to be determined whether the instrument is a mechanical device, a written questionnaire, or a human observer. The degree of reliability is usually determined by the use of correlation procedures. A correlation coefficient is computed between two sets of scores or between the ratings of two judges: the higher the correlation coefficient, the more reliable the instrument or the judges. Correlation coefficients can range between -1.00 and +1.00.
 Correlation coefficients computed to test the reliability of an instrument are expected to be positive. According to Polit and Beck (2008), it is risky to use an instrument with reliability lower than .70. These authors have cautioned researchers to check reliability as a routine step in all studies that involve observational tools, self-report measures, or knowledge tests, because of their susceptibility to measurement errors.
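As a sketch of the correlation procedure described above, the following computes a Pearson correlation coefficient between the ratings of two judges. The ratings are hypothetical illustration data, not from any real study.

```python
# Sketch: Pearson correlation between two judges' ratings, as used
# to estimate reliability (hypothetical ratings).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

judge_1 = [12, 15, 11, 18, 14, 16]
judge_2 = [11, 16, 10, 17, 15, 15]
r = pearson_r(judge_1, judge_2)
print(round(r, 2))  # 0.93 -- a high, positive coefficient
```

A value this close to +1.00 would suggest the two judges are rating consistently; values below .70 would be considered risky by the rule of thumb cited above.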

 Correlation coefficients are frequently used to determine the reliability of an instrument. However, when observers or raters are used in a study, the percentage or rate of agreement may also be used to determine the reliability of their observations or ratings.
 In general, the more items an instrument contains, the more reliable it will be. The likelihood of coming closer to obtaining a true measurement increases as the sample of items used to measure a variable increases. If a test becomes too long, however, subjects may get tired or bored. Be cautious about the reliability of very long instruments.
 Reliability is not a property of the instrument that, once established, remains forever.
 Reliability must continually be assessed as the instrument is used with different subjects and under different environmental conditions.
 An instrument to measure patient autonomy might be highly reliable when administered to patients while in their hospital rooms, but very unreliable when administered to those same patients while lying on a stretcher outside the operating room waiting for surgery.

Types of Reliability
 Stability,
 Equivalence, and
 Internal consistency.

Stability Reliability
 The stability reliability of an instrument refers to its consistency over time. A physiological instrument, such as a thermometer, should be very stable and accurate. If a thermometer were to be used in a study, it would need to be checked for reliability before the study began, and probably again during the study (test-retest reliability).
 Questionnaires can also be checked for stability. A questionnaire might be administered to a group of people and, after a time, administered again to the same people. If subjects' responses were almost identical both times, the instrument would be determined to have high test-retest reliability.


 If the scores were perfectly correlated, the correlation coefficient (the coefficient of stability, used to estimate stability) would be 1.00.
 The interval between the two testing periods may vary from a few days to several months or even longer.
 This period is a very important consideration when trying to determine the stability of an instrument.
 The period should be long enough for the subjects to forget their original answers on the questionnaire, but not so long that real changes may have occurred in the subjects' responses.

Equivalence Reliability
 Equivalence reliability concerns the degree to which two different forms of an instrument obtain the same results, or two or more observers using a single instrument obtain the same results.
 Alternate-forms reliability or parallel-forms reliability are the terms used when two forms of the same instrument are compared.
 Inter-rater reliability or inter-observer reliability are the terms applied to comparisons of raters or observers using the same instrument. This type of reliability is determined by the degree to which two or more independent raters or observers are in agreement.
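The simplest form of inter-rater agreement, mentioned earlier as the percentage or rate of agreement, can be sketched as follows. The ratings from the two observers are hypothetical.

```python
# Sketch: inter-rater reliability as the percentage of items on which
# two independent observers give the same rating (hypothetical data).
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

# Count items where both raters agree, then express as a percentage.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(percent_agreement)  # 80.0
```

Note that simple percentage agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa are often preferred for that reason.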

 When two forms of a test are used, both forms should contain the same number of items, have the same level of difficulty, and so forth.
 One form of the test is administered to a group of people; the other form is administered either at the same time or shortly thereafter to the same people.
 A correlation coefficient (the coefficient of equivalence) is obtained between the two forms.
 The higher the correlation, the more confidence the researcher can have that the two forms of the test are gathering the same information.
 Whenever two forms of an instrument can be developed, this is the preferred means of assessing reliability.
 Researchers, however, may find it difficult to develop one form of an instrument, much less two!

Internal Consistency Reliability
 Internal consistency reliability, or scale homogeneity, addresses the extent to which all items on an instrument measure the same variable.
 This type of reliability is appropriate only when the instrument is examining one concept or construct at a time.
 This type of reliability is concerned with the sample of items used to measure the variable of interest.
 If an instrument is supposed to measure depression, all of the items on the instrument must consistently measure depression. If some items measure guilt, the instrument is not an internally consistent tool.
 This type of reliability is of concern to nurse researchers because of the emphasis on measuring concepts such as assertiveness, autonomy, and self-esteem.

Cont…
 Before computers, internal consistency was tedious to calculate.
 Today, it is a simple process, and accurate split-half procedures have been developed.
 A common type of internal consistency procedure used today is the coefficient alpha (α) or Cronbach's alpha, which provides an estimate of the reliability of all possible ways of dividing an instrument into two halves.
 Think about that a minute. How many possible combinations of two halves could be made from a 30-item questionnaire? A lot!
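As a sketch of how simple the computation has become, Cronbach's alpha for a k-item scale is α = k/(k−1) × (1 − Σ item variances / variance of total scores). The item scores below are invented for illustration:

```python
import math
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item (columns of the data matrix)."""
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical 3-item scale answered by five respondents
items = [
    [4, 3, 5, 2, 4],  # item 1
    [4, 2, 5, 1, 4],  # item 2
    [5, 3, 4, 2, 5],  # item 3
]
alpha = cronbach_alpha(items)  # a high alpha suggests the items hang together

# How many distinct ways can a 30-item questionnaire be split into two halves?
split_half_count = math.comb(30, 15) // 2  # 77,558,760 -- "a lot!"
```

Alpha can be read as the average of all those split-half estimates, which is why it replaced computing them by hand.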


VALIDITY OF THE INSTRUMENT
 The validity of an instrument concerns its ability to gather the data that it is intended to gather.
 The content of the instrument is of prime importance in validity testing.
 If an instrument is expected to measure assertiveness, does it, in fact, measure assertiveness? It is not difficult to determine that validity is the most important characteristic of an instrument.

Cont…
 The greater the validity of an instrument, the more confidence you can have that the instrument will obtain data that will answer the research questions or test the research hypotheses.
 Just as the reliability of an instrument does not remain constant, neither does an instrument necessarily retain its level of validity when used with other subjects or in other environmental settings.
 An instrument might accurately measure assertiveness in a group of subjects from one cultural group. The same instrument might actually measure authoritarianism in another cultural group because assertiveness, to this group, means that a person is trying to act as an authority figure.

Cont…..
 When attempting to establish the reliability of an instrument, all of the procedures are based on data obtained through using the instrument with a group of respondents.
 Conversely, some of the procedures for establishing the validity of an instrument are not based on the administration of the instrument to a group of respondents.
 Validity may be established through the use of a panel of experts or an examination of the existing literature on the topic.
 Statistical procedures, therefore, may not always be used in trying to establish validity as they are when trying to establish reliability. When statistical procedures are used in trying to establish validity, they generally are correlation procedures.

Types of validity -
 Face,
 Content,
 Criterion, and
 Construct.

Face Validity
 An instrument is said to have face validity when a preliminary examination shows that it is measuring what it is supposed to measure.
 In other words, on the surface or the face of the instrument, it appears to be an adequate means of obtaining the data needed for the research project.
 The face validity of an instrument can be examined through the use of experts in the content area or through the use of individuals who have characteristics similar to those of the potential subjects. Because of the subjective nature of face validity, this type of validity is rarely used alone.

Content Validity
 Content validity is concerned with the scope or range of items used to measure the variable.
 In other words, are the number and type of items adequate to measure the concept or construct of interest? Is there an adequate sampling of all the possible items that could be used to secure the desired data?
 There are several methods of evaluating the content validity of an instrument.


Cont…
 The first method is accomplished by comparing the content of the instrument with material available in the literature on the topic. A determination can then be made of the adequacy of the measurement tool in light of existing knowledge in the content area.
 For example, if a new instrument were being developed to measure the empathic levels of nurses in hospice settings, the researcher would need to be familiar with the literature on both empathy and the hospice setting.

Cont…
 A second way to examine the content validity of an instrument is through the use of a panel of experts, a group of people who have expertise in a given subject area. These experts are given copies of the instrument and the purpose and objectives of the study. They then evaluate the instrument, usually individually rather than in a group. Comparisons are made between these evaluations, and the researcher then determines if additions, deletions, or changes need to be made.

Cont…
 A third method is used when knowledge tests are being developed. The researcher develops a test blueprint designed around the objectives for the content being taught and the level of knowledge that is expected (e.g., retention, recall, and synthesis).
 The actual degree of content validity is never established. An instrument is said to possess some degree of validity that can only be estimated.

Criterion Validity
 Criterion validity is concerned with the extent to which an instrument corresponds to or is correlated with some criterion measure of the variable of interest.
 Criterion validity assesses the ability of an instrument to determine subjects' responses at the present time or predict subjects' responses in the future.
 These two types of criterion validity are called –
 concurrent and
 predictive validity.

Concurrent validity
 Concurrent validity compares an instrument's ability to obtain a measurement of subjects' behaviour that is comparable to some other criterion of that behaviour.
 For example, a researcher might want to develop a short instrument that would help evaluate the suicidal potential of people when they call in to a suicide crisis intervention centre.
 A short, easily administered interview instrument would be of great help to the staff, but the researcher would want to be sure this instrument was a valid diagnostic instrument to assess suicide potential.
 Responses received on the short instrument could be compared with those received when using an already validated, but longer, suicide assessment tool.
 If both instruments seem to be obtaining the essential information necessary to make a decision about the suicide potential of a person, the new instrument might be considered to have criterion validity.

Cont….
 The degree of validity would be determined through correlation of the results of the two tests administered to a number of people.
 The correlation coefficient must be at least .70 to consider that the two instruments are obtaining similar data.


Predictive validity
 It is concerned with the ability of an instrument to predict behaviour or responses of subjects in the future.
 If the predictive validity of an instrument is established, it can be used with confidence to discriminate between people, at the present time, in relation to their future behaviour.
 This would be a very valuable quality for an instrument to possess.
 For example, a researcher might be interested in knowing if a suicidal potential assessment tool would be useful in predicting actual suicidal behaviour in the future.

Construct Validity
 Of all of the types of validity, construct validity is the most difficult to measure. Construct validity is concerned with the degree to which an instrument measures the construct it is supposed to measure.
 Construct validity involves the measurement of a variable that is not directly observable, but rather is an abstract concept derived from observable behaviour. Construct validity is derived from the underlying theory that is used to describe or explain the construct.

Cont…
 Many of the variables measured in research are labelled constructs.
 Nursing is concerned with constructs such as anxiety & assertiveness.
 One method to measure construct validity is called the known-groups procedure, in which the instrument under consideration is administered to two groups of people whose responses are expected to differ on the variable of interest.
 For example, if you were developing an instrument to measure depression, the theory used to explain depression would indicate the types of behaviour that would be expected in depressed people.
 If the tool was administered to a group of supposedly depressed subjects and to a group of supposedly happy subjects, you would expect the two groups to score quite differently on the tool.
 If differences were not found, you might suspect that the instrument was not really measuring depression.

Cont…
 Another approach to construct validity is called factor analysis, a method used to identify clusters of related items on an instrument or scale.
 This type of procedure helps the researcher determine whether the tool is measuring only one construct or several constructs.
 Correlation procedures are used to determine if items cluster together.
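A minimal sketch of the known-groups check (all scores are hypothetical, and Welch's t statistic is used here as one reasonable way to quantify the group difference; the text does not prescribe a specific statistic):

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for the difference between two independent group means."""
    n_a, n_b = len(group_a), len(group_b)
    standard_error = (variance(group_a) / n_a + variance(group_b) / n_b) ** 0.5
    return (mean(group_a) - mean(group_b)) / standard_error

# Hypothetical depression-scale totals for the two known groups
depressed_group = [42, 38, 45, 40, 44, 39]
happy_group = [18, 22, 15, 20, 17, 21]

t = welch_t(depressed_group, happy_group)
# A large positive t supports construct validity; a t near 0 would suggest
# the instrument is not really separating depressed from happy subjects.
```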

Relationship between Reliability and Validity
 Reliability and validity are closely associated.
 Both of these qualities are considered when selecting a research instrument.
 Reliability is usually considered first because it is a necessary condition for validity. An instrument cannot be valid unless it is reliable.
 However, the reliability of an instrument tells nothing about the degree of validity.
 In fact, an instrument can be very reliable and have low validity.

Cont..
 In actuality, validity is often considered first in the construction of an instrument.
 Face validity and content validity may be examined, and then some type of reliability is considered.
 Next, another type of validity may be considered.
 The process is not always the same.
 The type of desired validity and the type of reliability are decided, and then the procedures for establishing these criteria for the instrument are determined.


Cont…..
 A word of caution about using the term "established" in regard to reliability and validity of instruments: Strickland (1995), in an editorial in the Journal of Nursing Measurement, stated that reliability and validity cannot be established because there is always an error component in measurement. She wrote that it is more correct to use terms like "supported," "assessed," or "prior evidence has shown."

CRITIQUE –
 "Systematic, unbiased, careful examination of all aspects of a tool to judge the merits, limitations, meaning and significance based on previous research experience and knowledge of the topic"
- Burns, N. & Grove, S., 2005.

Research Instrument critique
 Step I: Conceptualize the proposed research project
 Step II: Find an existing instrument for the proposed study
 Step III: Critical assessment of the proposed measurement instrument
 Step IV: Decision to select or non-select the data collection instrument for the study of interest

Step I: Conceptualize the proposed research project
 Without knowledge of the foundational elements of a proposed study, researchers would have great difficulty locating and selecting an appropriate existing instrument to evaluate for potential use.
 Step I begins with the researcher describing and discussing each of the basic elements of the proposed research project.

Cont…
 The basic foundational elements include the study problem, review of the relevant literature, proposed design and methods; research objectives, questions and/or hypotheses; and the proposed study variables and their measurement.
 Questions and statement prompts have been developed to assist the researcher in describing and discussing each of the key foundational elements as they relate to a proposed study.

Step II: Find an existing instrument for the proposed study
 Once the key elements of the proposed study have been conceptualized and one or more proposed measurement approaches determined (e.g., physiological, questionnaire, etc.), the next step is for the researcher to seek out and locate potential measurement instruments to critically evaluate for use in their study.
 Therefore the purpose of Step II is to locate an existing instrument that may align with the key elements of the proposed study identified and summarized in Step I. To locate existing measurement instruments, one can begin by:
1. searching computerized databases such as Medline or CINAHL;
2. searching journals that are devoted specifically to measurement, e.g., the Journal of Nursing Measurement;
3. identifying publications in which relevant instruments are used and then using citation indices to locate other publications that used them, or accessing the computer database, the Health and Psychological Instruments Online;


Cont…
4. accessing health research websites - these websites list many surveys (available without charge) that would be of interest to nurse researchers;
5. reviewing one or more of the many reference books that contain published measurement instruments;
6. reviewing Dissertation Abstracts online; and
7. networking and communicating by word-of-mouth with other researchers.
 Once a measurement instrument for a proposed study has been located, the next step is for the researcher to document the basic information about the instrument, such as the name of the instrument and where the instrument can be located.

Step III: Critical assessment of the proposed measurement instrument
 Once an instrument has been located and before a decision can be made about the selection of the instrument for use in a proposed study, the instrument must be critically assessed.
 The purpose of this critical assessment process is to determine the strengths and weaknesses of the selected data collection instrument and to ensure that the make-up of the instrument aligns with the needs of the proposed study.

Cont….
 Thus, the critical assessment of this potential research instrument must occur within the backdrop of the proposed study articulated in Step I. To begin, instrument availability and access must be explored, followed by a discussion of the instrument background, data ownership, variables previously measured, sampling, measurement and analysis, and other considerations.

Step IV: Decision to select or non-select the data collection instrument for the study of interest
 Step IV summarizes the information gathered and assessed during Steps I–III and assists the researcher with justifying and making an informed decision to select or non-select the data collection instrument for use in their proposed study.

