Citation:
Cox, K., Imrie, B.W., & Miller, A.H. (1998). Student assessment in higher education: A
handbook for assessing performance. London: Routledge.
Summary:
This comprehensive overview of higher educational assessment features a guide to
setting, marking and reviewing the coursework, assignments, tests and
examinations used in higher education. In addition, the authors examine the
various programs for certificates, diplomas, first degrees as well as higher degrees.
The strong influence that assessment has on the way students approach their
learning is also discussed. Truly international in focus, this book features authors
with higher education experience in Australia, New Zealand, Scotland, England,
Canada, Hong Kong, USA, and Thailand.
Notes:
International Focus:
http://books.google.com/books?
id=n6M9AAAAIAAJ&printsec=copyright&dq=normative+and+formative+assessment+i
n+higher+education&lr=
http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1059&context=jtla
Definitions:
Formative – “provide students with information which will help them judge the
effectiveness of their learning strategies to date. It also alerts teachers to any sections of
the course or approaches to teaching where students are having difficulties and which may
need further attention.”
Other types
As an alternative to normative testing, tests can be ipsative; that is, the
individual's performance is compared with his or her own performance over time.[2][3]
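The ipsative idea can be sketched in a few lines of Python. The function name, scores, and gain measure below are illustrative assumptions, not taken from any cited source:

```python
# Ipsative assessment: compare a student's new score to his or her own
# history, not to other students. All names and numbers are invented.

def ipsative_gain(history, new_score):
    """Return the change relative to the student's own previous best,
    or None when there is no baseline yet."""
    if not history:
        return None
    return new_score - max(history)

scores = [62, 68, 71]              # one student's past test scores
print(ipsative_gain(scores, 75))   # → 4: an improvement over the personal best
```

Note that no other student appears anywhere in the calculation; the only reference point is the same individual's earlier record.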
Common use
Most state achievement tests are criterion-referenced. In other words, a
predetermined level of acceptable performance is set, and students pass or
fail according to whether they reach that level. Tests that set goals for
students based on the average student's performance are norm-referenced
tests. Tests that set goals for students based on a fixed standard (e.g., 80
words spelled correctly) are criterion-referenced tests.
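A minimal Python sketch of this contrast, using invented scores and an illustrative cutoff of 80 (echoing the spelling example above):

```python
# Two ways to interpret the same raw scores (all numbers illustrative).
scores = {"Ana": 84, "Ben": 72, "Cam": 91, "Dee": 65}

# Criterion-referenced: pass/fail against a fixed standard,
# e.g. 80 words spelled correctly.
CUT = 80
criterion = {name: ("pass" if s >= CUT else "fail")
             for name, s in scores.items()}

# Norm-referenced: each student is judged relative to the
# average student's performance, not a fixed bar.
mean = sum(scores.values()) / len(scores)
norm = {name: ("above average" if s > mean else "at or below average")
        for name, s in scores.items()}

print(criterion)  # Ana and Cam clear the fixed standard
print(norm)       # the mean is 78.0, so Ana and Cam sit above average
```

The key design difference: raising every student's score changes nothing in the norm-referenced column (someone is always below the mean), but it can move everyone into the "pass" column of the criterion-referenced one.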
Many college entrance exams and nationally used school tests use norm-
referenced tests. The SAT, Graduate Record Examination (GRE), and
Wechsler Intelligence Scale for Children (WISC) compare individual
student performance to the performance of a normative sample. Test-
takers cannot "fail" a norm-referenced test, as each test-taker receives a
score that compares the individual to others who have taken the test,
usually reported as a percentile. This is useful when there is a wide range
of acceptable scores that differs from college to college. For example, one
estimate of the average SAT score for Harvard University is 2200 out of a
possible 2400; the average for Indiana University is 1650.[9]
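Percentile reporting against a normative sample can be sketched as follows; the sample scores are invented for illustration and are not real SAT data:

```python
# Percentile rank: the share of the normative sample scoring below a
# given test-taker. There is no pass/fail, only a position in the group.

def percentile_rank(sample, score):
    below = sum(1 for s in sample if s < score)
    return 100.0 * below / len(sample)

norm_sample = [1400, 1500, 1650, 1800, 2000, 2100, 2200, 2300]
print(percentile_rank(norm_sample, 2200))  # → 75.0: above 6 of the 8
```

This also shows why a norm-referenced score cannot be "failed": whatever the raw number, the result is simply a location within the sample.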
One of the faults of No Child Left Behind is that each state can choose or
construct its own test, so results cannot be compared across states.[16]
A Rand study of Kentucky results found indications of artificial inflation
of pass rates which were not reflected in increasing scores in other tests
such as the NAEP or SAT given to the same student populations over the
same time.[17]
Although tests such as the WASL are intended as a minimal bar for
high school, 27 percent of 10th graders applying for Running Start in
Washington State failed the math portion of the WASL. These students
had applied to take college-level courses in high school and achieve at a
much higher level than average students. The same study concluded that
the level of difficulty was comparable to, or greater than, that of tests
intended to place students already admitted to the college.[18]
A norm-referenced test has none of these problems because it does not
seek to enforce any expectation of what all students should know or be
able to do beyond what actual students demonstrate. Present levels
of performance and inequity are taken as fact, not as defects to be
removed by a redesigned system. Goals of student performance are not
raised every year until all are proficient, and scores are not required to
show continuous improvement through Total Quality Management systems.
References
1. ^ a b Assessment Guided Practices
2. ^ Assessment
3. ^ PDF presentation
4. ^ Cronbach, L. J. (1970). Essentials of psychological testing (3rd
ed.). New York: Harper & Row.
5. ^ Glaser, R. (1963). Instructional technology and the measurement of
learning outcomes. American Psychologist, 18, 510-522.
6. ^ [1] Illinois Learning Standards
7. ^ stories 5-01.html Fairtest.org: Times on Testing "criterion
referenced" tests measure students against a fixed yardstick, not
against each other.
8. ^ [2] By the Numbers: Rising Student Achievement in Washington State
by Terry Bergeson: "She continues her pledge ... to ensure all
students achieve a diploma that prepares them for success in the
21st century."
9. ^ [3] About.com "What is a Good SAT Score?" From Jay Brody Aug 2006
10. ^ [4] NCTM: News & Media: Assessment Issues (Newsbulletin April
2004) "by definition, half of the nation's students are below grade
level at any particular moment"
11. ^ [5] National Children's Reading Foundation website
12. ^ [6] HOUSE BILL REPORT HB 2087 "A number of critics ... continue to
assert that the mathematics WASL is not developmentally
appropriate for fourth grade students."
13. ^ Prof Don Orlich, Washington State University
14. ^ [7] Panel lowers bar for passing parts of WASL, by Linda Shaw, Seattle
Times, May 11, 2004: "A blue-ribbon panel voted unanimously
yesterday to lower the passing bar in reading and math for the
fourth- and seventh-grade exam, and in reading on the 10th-grade
test"
15. ^ [8] Seattle Times December 06, 2002 Study: Math in 7th-grade
WASL is hard By Linda Shaw "Those of you who failed the math
section ... last spring had a harder test than your counterparts in
the fourth or 10th grades."
16. ^ [9] New Jersey Department of Education: "But we already have tests
in New Jersey, why have another test? Our statewide test is an
assessment that only New Jersey students take. No comparisons
should be made to other states, or to the nation as a whole.
17. ^ [10] Test-Based Accountability Systems (Rand) "NAEP data are
particularly important ...Taken together, these trends suggest
appreciable inflation of gains on KIRIS. ...
18. ^ [11] Relationship of the Washington Assessment of Student Learning
(WASL) and Placement Tests Used at Community and Technical
Colleges, by Dave Pavelchek, Paul Stern and Dennis Olson, Social &
Economic Sciences Research Center, Puget Sound Office, WSU: "The
average difficulty ratings for WASL test questions fall in the middle
of the range of difficulty ratings for the college placement tests."
See also
1. http://www.murdoch.edu.au/admin/policies/assessmentlinks.html#4
2.http://eric.ed.gov:80/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/15/
8b/9c.pdf
Abstract:
ACT and 127 institutions (primarily community colleges) worked together on a Partners
in Progress Research project comparing ASSET reading, writing, and math scores
(incoming student placement tests) with CAAP reading, writing, and math scores (exiting
student outcomes test). The project aimed to refine the content of the related exams
and to establish the degree of statistical relationship between them, so that student
intellectual growth could be measured between a student's point of entry and point of
exit from the institution. Administrators at Mid-Plains Community College Area (MPCCA)
compared 108 pairs of ASSET and CAAP reading scores. Results indicated that reading
improvement of MPCCA students was comparable with the public, two-year college
normative percentages of improvement, with the majority of students achieving expected
gains in their reading. For the writing cohort, 163 matched ASSET/CAAP
outcomes indicated that MPCCA students improved their writing ability at a slightly
higher rate than the norm. For 162 ASSET/CAAP math outcomes, results indicated that
although MPCCA had slightly more students than expected improving at a lower rate,
it also had slightly more students improving at a slightly higher rate than the norm.
3. Loyola University
4. http://www.gseis.ucla.edu/heri/cirpoverview.php
5.http://books.google.com/books?
id=60h0ZVgWrYoC&pg=PA54&lpg=PA54&dq=Higher+Education+Institution+Normat
ive+Assessment&source=web&ots=L9eQ7IwpD7&sig=jhNEiPL-
QLpmPgxJrPBQH9GO2rE&hl=en&sa=X&oi=book_result&resnum=3&ct=result#PPA5
5,M1
Pg. 54: Normative Assessment
Limitations:
- By focusing on where individuals fall within a given population, this method
cannot measure the progress of that population as a whole.