1AC solvency advocate says they still use testing and measure performance.
Gamoran 07 recut – Adam Gamoran, 2007, "School accountability, American style: dilemmas of high-stakes testing," http://www.pedocs.de/volltexte/2011/4182/pdf/SZBW_2007_H1_S79_Garmoran_D_A.pdf
Value-added evaluation relies on inaccurate tests and leads to unjustified firings of
good teachers.
Strauss 12 – Valerie Strauss covers education for The Washington Post, 12-23-2012, "The fundamental flaws of 'value added' teacher evaluation," Washington Post, https://www.washingtonpost.com/news/answer-sheet/wp/2012/12/23/the-fundamental-flaws-of-value-added-teacher-evaluation/
The common sense rationale for linking teacher evaluations to student test scores is to hold teachers accountable for how much their students
are learning. The favorite way of measuring gains, or lack thereof, in student learning is through “value-
added” models, which seek to determine what each teacher has added to the educational achievement of each of his or her students.
Even though it seemingly makes sense to look at individual gains attributable to particular teachers, this method is fundamentally
flawed due to the nature of current state tests, as well as the methods used to assign students to teachers and other reasons. These
tests were not designed to be used in that way, and various aspects of their administration make this use improper. In a
briefing paper prepared for the National Academy of Education (NAE) and the American Educational Research Association, Linda Darling-
Hammond and three other distinguished authors reached the following conclusion: “With respect to value-added measures of student
achievement tied to individual teachers, current research suggests that high-stakes, individual-level decisions, as well as comparisons across
highly dissimilar schools or student populations should be avoided.” The paper goes on to say that “in general, such measures should
be used only in a low-stakes fashion when they are part of an integrated analysis of what the teacher is
doing and who is being taught.” (Disclaimer: Although I am a member of NAE, I did not research or write that paper.) The paper
highlights three specific problems with using value-added models to evaluate teacher effectiveness,
especially for such important decisions as teacher employment or compensation: Value-added models of teacher effectiveness
are highly unstable. Teachers’ ratings differ substantially from class to class and from year to year, as well as from one test to another.
Teachers’ value-added ratings are significantly affected by differences in the students who are assigned
to them, even when models try to control for prior achievement and student demographic variables. In particular, teachers with large
numbers of new English learners and others with special needs have been found to show lower gains than the same teachers who are teaching
other students. Value-added ratings cannot disentangle the many influences on student progress. These include home, school and student
factors that influence student learning gains and that matter more than the individual teacher in explaining changes in test scores. Cautions
about value-added testing have also been expressed by a group of testing and policy experts assembled by the Economic Policy Institute. This
group concluded that “[w]hile there are good reasons for concern about the current system of teacher evaluation, there
are also good
reasons to be concerned about claims that measuring teachers’ effectiveness largely by student test
scores will lead to improved student achievement.” In a similar vein, W. James Popham, professor emeritus at UCLA and test
design expert, has concluded that the
use of students’ test scores to evaluate teachers “runs counter to the most
important commandment of educational testing — the need for sufficient validity evidence.”