• Form equivalence: two different forms of a test, based on the same content, are given on one occasion to the same examinees (alternate forms). After alternate forms have been developed, they can be used with different examinees. This is very common in high-stakes examinations as a way of pre-empting cheating: an examinee who took Form A earlier cannot share the test items with another student who might take Form B later, because the two forms contain different items.
• Internal consistency: the consistency coefficient of scores obtained from a single test or survey (Cronbach's alpha, KR-20, split-half). For instance, say respondents are asked to rate statements in an attitude survey about computer anxiety. One statement is "I feel very negative about computers in general." Another statement is "I enjoy using computers." People who strongly agree with the first statement should strongly disagree with the second statement, and vice versa. If the same respondents rate both statements high, or both low, the responses are inconsistent and patternless. The same principle can be applied to a test: when no pattern is found in the students' responses, the test is probably too difficult and the students are just guessing the answers randomly. (A computational sketch of Cronbach's alpha appears after this list.)
• Reliability is a necessary but not sufficient condition for validity. For instance, if the needle of my scale rests five pounds away from zero, I always over-report my weight by five pounds. Is the measurement consistent? Yes, but it is consistently wrong! Is the measurement valid? No! (But if it under-reported my weight by five pounds, I would consider it a valid measurement.)
• Performance, portfolio, and responsive evaluations, where the tasks vary substantially from student to student and where multiple tasks may be evaluated simultaneously, are attacked for lacking reliability. One of the difficulties is that there is more than one source of measurement error in performance assessment. For example, the reliability of a writing-skill test score is affected by the raters, the mode of discourse, and several other factors (Parkes, 2000).
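To make the internal-consistency idea concrete, here is a minimal sketch of Cronbach's alpha in Python. The two items echo the computer-anxiety statements above, but all ratings are invented for illustration, and the reverse-scoring step is an assumption about how such a survey would be keyed.

```python
# Minimal Cronbach's alpha sketch (NumPy only); data are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from six respondents.
negative_item = np.array([5, 4, 5, 2, 1, 2])  # "I feel very negative about computers..."
positive_item = np.array([1, 2, 2, 4, 5, 3])  # "I enjoy using computers."

# Reverse-score the negatively worded item so both items run in the same direction.
data = np.column_stack([6 - negative_item, positive_item])
print(f"alpha = {cronbach_alpha(data):.2f}")  # ~0.96 here: consistent, patterned responses
```

If respondents (or guessing students) answered at random, the item covariances would vanish and alpha would fall toward zero.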
The internal structure of things may not concur with their appearance; many times professional knowledge runs counter to common sense. The criteria of validity in research should therefore go beyond "face," "appearance," and "common sense."
However, this approach has some drawbacks. First, experts tend to take their knowledge for granted and forget how little other people know. It is not uncommon for tests written by content experts to be extremely difficult.
Second, very often content experts fail to identify the learning objectives of a subject. Take the
following question in a philosophy test as an example: students are asked to pick the years in which a given philosopher lived from choices such as "331-232 BC" and "280-207 BC".
This type of question tests the ability to memorize historical facts, not the ability to philosophize. The content expert may argue that "historical facts" are important for a student to further understand philosophy. Let's change the subject to computer science and look at the following question:
When was William Gates III, the founder and CEO of Microsoft, born?
a. 1949
b. 1953
c. 1957
Any computer scientist or statistician would be hard-pressed to accept that the above question fulfills content validity. As a matter of fact, though, this memorization approach is a common practice among instructors.
Further, sampling knowledge from a larger domain of knowledge involves subjective values. For example, a test on art history may include many questions about oil paintings but fewer questions about watercolor paintings and photography, because of the perceived importance of oil paintings in art history.
In short, criterion validity is about prediction rather than explanation. Prediction is concerned with non-causal or mathematical dependence, whereas explanation pertains to causal or logical dependence. For example, one can predict the weather from the height of the mercury inside a thermometer, so the height of the mercury could satisfy criterion validity as a predictor. However, one cannot explain why the weather changes by pointing to the change in the mercury's height. Because of this limitation of criterion validity, an evaluator has to conduct construct validation.
For example, suppose an evaluator wants to study the relationship between general cognitive ability and job performance, but cannot administer a cognitive test to every subject. In this case, he can use a proxy variable, such as amount of education, as an indirect indicator of cognitive ability. After administering a cognitive test to a portion of the subjects and finding a strong correlation between general cognitive ability and amount of education, he can apply the latter to the larger group, because its construct validity has been established.
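A hypothetical sketch of that proxy logic follows; the variable names, sample size, and effect sizes are all invented for illustration.

```python
# Proxy validation sketch: test the construct directly on a subsample,
# then check how strongly the proxy tracks it. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

n_sub = 200                                # subsample that takes the cognitive test
education = rng.normal(14, 2, n_sub)       # proxy: years of education
cognitive = 40 + 3 * education + rng.normal(0, 5, n_sub)  # direct cognitive scores

r = np.corrcoef(education, cognitive)[0, 1]
print(f"r(education, cognitive) = {r:.2f}")  # ~0.77 with these made-up parameters
# A strong correlation supports using education alone for the larger group.
```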
Other authors (e.g., Angoff, 1988; Cronbach & Quirk, 1976) argue that construct validity cannot be expressed in a single coefficient; there is no mathematical index of construct validity. Rather, the nature of construct validity is qualitative.
Test bias is a major threat against construct validity, and therefore test bias analyses should be
employed to examine the test items (Osterlind, 1983).
The presence of test bias definitely affects the measurement of the psychological construct.
However, the absence of test bias does not guarantee that the test possesses construct validity. In
other words, the absence of test bias is a necessary, but not a sufficient, condition.
• Construct validation as unification: The criterion and content models tend to be empirically oriented, while the construct model is inclined to be theoretical. Nevertheless, all models of validity require some form of interpretation: What is the test measuring? Can it measure what it intends to measure? In standard scientific inquiry, it is important to formulate an interpretative (theoretical) framework clearly and then subject it to empirical challenges. In this sense, theoretical construct validation can be considered a unified framework for validity (Kane, 2001).
• In many situations, such as searching for faculty candidates or conferring graduate degrees, committee members are not trained to agree on a common set of criteria and standards.
• Inconsistency in students' performance across tasks does not invalidate the assessment. Rather, it becomes an empirical puzzle to be solved by searching for a more comprehensive interpretation.
• Initial disagreement (e.g., among students, teachers, and parents in responsive evaluation) would not invalidate the assessment. Rather, it would provide an impetus for dialog.
• Reliability should be defined in terms of classical test theory: the squared correlation between observed and true scores, or the proportion of true variance in obtained test scores (see the formula after this list).
• It has been a tradition that multiple factors are introduced into a test to improve validity, even though doing so decreases internal-consistency reliability.
• In many cases we don't present just one argument; rather, problem solving involves arguments or chains of reasoning supported by massive evidence.
• Off-the-shelf inferential machinery (e.g., computed reliability indices) may fail if we quantify things or tasks that we don't know much about.
• Independence of sampling
• Heterogeneity of content
• How the measurement will be used: Decide whether future uses of the instrument are likely to
be exclusively for absolute decisions, for differential decisions, or both.
• The very definition of a construct implies a domain of content. There is no sharp distinction
between test content and test construct.
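In classical test theory notation, with observed score X, true score T, and random error E, the definition in the reliability bullet above reads:

```latex
% Observed score decomposition and reliability as the proportion
% of true-score variance in observed-score variance.
X = T + E, \qquad
\rho_{XX'} = \rho_{XT}^{2}
           = \frac{\sigma_T^{2}}{\sigma_X^{2}}
           = \frac{\sigma_T^{2}}{\sigma_T^{2} + \sigma_E^{2}}
```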
Validity is not a property of the test or assessment, but rather of the meaning of the test scores.
• Consequential: bias, fairness, and justice; the social consequences of the assessment to society.
Critics argued that consequences should not be a component of validity because test developers should
not be held responsible for the consequences of misuse; accountability should lie with the misuser.
Messick (1998) counter-argued that the social consequences of score interpretation include the value implications of the construct label, which may or may not be commensurate with the construct's trait implications, and which need to be addressed in appraising score meaning. While test developers should not be held accountable for the misuse of tests, they should still pay attention to the unanticipated consequences of legitimate score interpretation.
A different view of reliability and validity (Salvucci, Walter, Conley, Fink, & Saba, 1997)
Some scholars argue that the traditional view that "reliability is a necessary but not a sufficient
condition of validity" is incorrect. This school of thought conceptualizes reliability as invariance and
validity as unbiasedness. A sample statistic may have an expected value over samples equal to the population parameter (unbiasedness) yet have very high variance because of a small sample size. Conversely, a sample statistic can have very low sampling variance but an expected value far from the population parameter (high bias). In this view, a measure can be unreliable (high variance) but still valid (unbiased).
[Figure: two panels. Left: the sample statistic (yellow line) equals the population parameter (red line) but shows high variance (green line): unreliable but valid. Right: the statistic shows low variance (green line) but departs from the parameter: invalid but reliable.]
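A small simulation can make the contrast concrete. The sample sizes and the +5 bias below are invented, echoing the five-pound scale example earlier:

```python
# Unbiased-but-noisy vs. biased-but-stable estimators; numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
mu = 100                                  # the population parameter (the red line)

# Unreliable but valid: unbiased sample mean with tiny n -> high sampling variance.
unbiased = [rng.normal(mu, 15, size=4).mean() for _ in range(10_000)]

# Reliable but invalid: large n -> low variance, but a systematic +5 shift,
# like the scale whose needle starts five pounds past zero.
biased = [rng.normal(mu + 5, 15, size=400).mean() for _ in range(10_000)]

print(f"unbiased: mean={np.mean(unbiased):6.2f}, sd={np.std(unbiased):.2f}")  # ~100, sd ~7.5
print(f"biased:   mean={np.mean(biased):6.2f}, sd={np.std(biased):.2f}")      # ~105, sd ~0.75
```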
Low reliability is less detrimental in a pretest. In a pretest, where subjects have not been exposed to the treatment and thus are unfamiliar with the subject matter, low reliability caused by random guessing is expected. One easy way to overcome this problem is to include "I don't know" among the multiple choices. In an experimental setting where students' responses will not affect their final grades, the experimenter should explicitly instruct students to choose "I don't know" instead of making a guess if they really don't know the answer. Low reliability is a signal of high measurement error, which reflects a gap between what students actually know and the scores they receive. The choice "I don't know" can help close this gap.
Reliability - overview
Reliability is the extent to which a test is repeatable and yields consistent scores.
Note: In order to be valid, a test must be reliable; but reliability does not guarantee
validity.
All measurement procedures have the potential for error, so the aim is to minimize it. An
observed test score is made up of the true score plus measurement error.
The goal of estimating reliability (consistency) is to determine how much of the variability in test
scores is due to measurement error and how much is due to variability in true scores.
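A brief simulated sketch of this decomposition (all distributions below are invented for illustration):

```python
# Observed = true + error, and reliability as true variance over observed variance.
import numpy as np

rng = np.random.default_rng(1)

true_scores = rng.normal(70, 10, 5_000)   # T: each examinee's stable standing
errors = rng.normal(0, 5, 5_000)          # E: random error (illness, noise, ...)
observed = true_scores + errors           # X = T + E

reliability = true_scores.var() / observed.var()
print(f"reliability = {reliability:.2f}") # ~ 100 / (100 + 25) = 0.80
```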
Measurement errors are essentially random: a person's test score might not reflect the true score because they were sick, hungover, anxious, in a noisy room, etc.
Reliability can be improved by:
• getting repeated measurements using the same test and
• getting many different measures using slightly different techniques and methods.
- e.g., consider how university assessment for grades involves several sources. You would not consider one multiple-choice exam question to be a reliable basis for testing your knowledge of "individual differences". Many questions are asked in many different formats (e.g., exam, essay, presentation) to help provide a more reliable score.
Types of reliability
There are several types of reliability, and a number of ways to estimate them. I'll mention a few of them now:
1. Test-retest reliability
The test-retest method of estimating a test's reliability involves administering the test to
the same group of people at least twice. Then the first set of scores is correlated with the
second set of scores. Correlations range between 0 (low reliability) and 1 (high reliability); it is highly unlikely they will be negative!
Remember that change might be due to measurement error. For example, if you use a tape measure to measure a room on two different days, any differences in the result are likely due to measurement error rather than a change in the room's size. However, if you measure children's reading ability in February and then again in June, the change is likely due to changes in the children's reading ability. Also, the actual experience of taking the test can have an impact (called reactivity): after a history quiz you might look up the answers and do better next time, or you might simply remember your original answers.
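A minimal sketch of the computation, with made-up score pairs for the two administrations:

```python
# Test-retest reliability: correlate two administrations of the same test.
import numpy as np

time_1 = np.array([12, 18, 25, 31, 40, 44, 52, 60])  # first administration
time_2 = np.array([14, 17, 27, 30, 43, 42, 55, 58])  # same examinees, later

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability = {r:.2f}")  # close to 1 -> highly repeatable
```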
2. Alternate Forms
Administer Test A to a group and then administer Test B to the same group. The correlation between the two sets of scores is the estimate of the test's reliability.
3. Inter-rater Reliability
Compare scores given by different raters. For example, for important work in higher education (e.g., theses), there are multiple markers to help ensure accurate assessment by checking inter-rater reliability.
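One common way to quantify this (not prescribed by the text, but a standard choice) is Cohen's kappa, which corrects raw agreement for chance; here is a hand-rolled sketch with invented thesis grades:

```python
# Cohen's kappa for two raters, computed from scratch; grades are invented.
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Chance agreement: probability that both raters pick the same category at random.
pa, pb = Counter(rater_a), Counter(rater_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")  # 0.75 raw, ~0.47 chance-corrected
```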
4. Internal consistency
Internal consistency is commonly measured as Cronbach's alpha (based on inter-item correlations), which runs between 0 (low) and 1 (high). The greater the number of similar items, the greater the internal consistency. That's why you sometimes get very long scales asking a question in a myriad of different ways: adding more items yields a higher Cronbach's alpha. Generally, an alpha of .80 is considered a reasonable benchmark.
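The item-count effect follows from the standardized form of alpha, where k is the number of items and r-bar the average inter-item correlation:

```latex
% Standardized Cronbach's alpha: with the average inter-item correlation
% held fixed, alpha increases as the number of items k grows.
\alpha = \frac{k\,\bar{r}}{1 + (k - 1)\,\bar{r}}
```

Holding the average inter-item correlation at .30, alpha rises from about .46 with two items to about .81 with ten, which is why long scales repeat a question in many guises.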
How reliable should tests be? Some reliability guidelines
.90 = high reliability
.80 = moderate reliability
.70 = low reliability
High reliability is required when (note: most standardized tests of intelligence report reliability estimates around .90, i.e., high):
• tests are used to make important decisions
• individuals are sorted into many different categories based upon relatively small individual differences, e.g., intelligence
Lower reliability is acceptable when (note: for most testing applications, reliability estimates around .70 are usually regarded as low; under the classical definition above, 30% of the score variance is error):
• tests are used for preliminary rather than final decisions
• tests are used to sort people into a small number of groups based on gross individual differences, e.g., height or sociability/extraversion
Reliability estimates of .80 or higher are typically regarded as moderate to high (at .80, approximately 20% of the variability in test scores is attributable to error).
Reliability estimates below .60 are usually regarded as unacceptably low.
Levels of reliability typically obtained for different types of tests and measurement devices are reported in Table 7-6 of Murphy and Davidshofer (2001, p. 142).
Reliability refers to whether a test is consistent, over time, in its results. A good test must also be
internally consistent. Validity refers to whether test results describe a person's actual behavior. For
most psychological tests, reliability is higher than validity, which underscores the need for additional
information, such as interviews.
Importance
Reliability and validity are crucial to quality psychological testing. If reliability isn't present, then the
test can't be trusted to make valid assessments in a clinical setting. If validity isn't present, then the test
can't be trusted to make predictions about behavior in a real-world setting.