
Value-added achievement analysis still involves testing.

Center for Greater Philadelphia, no date, http://www.cgp.upenn.edu/ope_value.html


Value-added assessment is a new way of analyzing test data that can measure teaching and learning.
Based on a review of students' test score gains from previous grades, researchers can predict the
amount of growth those students are likely to make in a given year. Thus, value-added assessment can
show whether particular students - those taking a certain algebra class, say - have made the expected
amount of progress, have made less progress than expected, or have been stretched beyond what they
could reasonably be expected to achieve. Using the same methods, one can look back over several years
to measure the long-term impact that a particular teacher or school had on student achievement.
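
To make the mechanics concrete, here is a minimal sketch of the logic the card describes: fit an expected-growth line on district-wide scores, then treat a class's deviation from that expectation as its "value added." All names and numbers below are invented for illustration; real value-added models are far more elaborate.

```python
# Minimal illustration of value-added logic (hypothetical data throughout):
# predict each student's current score from their prior score using
# district-wide data, then measure how far one class's actual scores
# deviate from those predictions.

def fit_line(x, y):
    """Ordinary least squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# District-wide prior-year and current-year scores set the expectation.
district_prior = [400, 450, 500, 550, 600]
district_current = [445, 475, 525, 555, 600]
slope, intercept = fit_line(district_prior, district_current)

# One hypothetical class: each residual (actual minus predicted) is growth
# beyond what prior achievement alone would predict.
class_prior = [480, 510, 540]
class_current = [520, 535, 570]
residuals = [c - (intercept + slope * p)
             for p, c in zip(class_prior, class_current)]
print(f"class value-added estimate: {sum(residuals) / len(residuals):+.1f}")
```

A positive average residual is read as the class being "stretched beyond" expectations; a negative one, as less progress than expected.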

1AC solvency advocate says they still use testing and measure performance.
Gamoran 07 (recut) - Adam Gamoran, 2007, “School accountability, American style: dilemmas of high-stakes
testing,” http://www.pedocs.de/volltexte/2011/4182/pdf/SZBW_2007_H1_S79_Garmoran_D_A.pdf

Value-added achievement analysis is an alternative approach to examining school performance (e.g.,
Meyer, 1996, 2003). Rather than examining absolute targets, value-added analysis examines the
contributions of schools (or teachers) to student performance, taking into account their prior
performance levels and other relevant characteristics, e.g. race/ethnicity and economic status. A
value-added approach to accountability focuses on the effectiveness of schools rather than the schools’
achievement status at a given point in time. While gain scores are less reliable than raw scores, they are
more valid as indicators of what the school has contributed to student learning, and thus arguably
represent more appropriate criteria for accountability. Despite these apparent advantages of a value-
added approach, NCLB has rejected it in favor of reliance on progress towards absolute targets. One
reason for this rejection may be the greater simplicity of absolute targets. Whereas absolute targets
require a simple calculation of means, value-added analyses are complex, particularly if attempts are
made to use many years of test score data, to take account of the nested structure of schools, to adjust
for test unreliability, and so on. NCLB’s stated reason for rejecting value-added approaches, however, is
to avoid the “soft bigotry of low expectations,” that is, a refusal to recognize anything other than
common standards for all students, fearing that acceptance of divergent standards would mean that
lower-achieving students will remain permanently behind. Another alternative, as yet untried on a
national level, would be to examine both progress towards an absolute target and value-added
performance. If each school were evaluated on both criteria, then schools that are effective might be
rewarded rather than sanctioned, even if (or perhaps especially if) their students are low-performing on
an absolute scale.
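
The dual-criteria alternative Gamoran floats at the end of this card can be sketched as a simple decision rule. He proposes no specific implementation, so the function name, thresholds, and outcome labels below are all invented for illustration.

```python
# Hypothetical sketch of a dual-criteria accountability rule: judge a school
# on both an absolute proficiency target and a value-added estimate, so an
# effective school with low-scoring students is rewarded, not sanctioned.

def evaluate_school(pct_proficient, value_added,
                    proficiency_target=0.70, va_threshold=0.0):
    met_target = pct_proficient >= proficiency_target
    adds_value = value_added > va_threshold
    if met_target and adds_value:
        return "reward: meets target and is effective"
    if adds_value:
        return "reward: effective despite missing the absolute target"
    if met_target:
        return "review: meets target but adds little value"
    return "sanction: misses target and adds little value"

# A low-performing but effective school under this rule:
print(evaluate_school(pct_proficient=0.45, value_added=+8.0))
```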

Value-added achievement analysis is unreliable and easily circumvented.


Rotherham 10 - Andrew J. Rotherham, 9-23-2010, TIME,
http://content.time.com/time/nation/article/0,8599,2020867,00.html
The last decade has yielded an explosion of data about student performance. In many places, these data can be used to create a year-over-year
analysis of how much a teacher advanced the learning of an individual student. Because value-added models can control for other factors
impacting student test scores, the most important being whether a student arrived in a teacher's classroom several grade levels behind, this
method of analysis can offer a more accurate estimate of how well a particular teacher is teaching than simply looking at the latest set of
student test scores. High-flying teachers can be recognized, and low performers can be identified before they spend years doing a disservice to
kids. Science and technology to the rescue again! Unfortunately, it's not so straightforward in practice. The tests used in a lot of
places are a bad match for the value-added methodology, which is a lot more complicated than subtracting one year's
score from the next. Meanwhile, different value-added models can yield different conclusions about the same
teacher. A small detail like that matters a lot if you're going to use this data to start firing people. In addition, though you
wouldn't know it from all the noise about testing, most of the nation's teachers teach subjects, often electives, for
which students are not subjected to standardized testing. Even subjects like science frequently are not tested annually.
End result: the best value-added model still leaves out many teachers who are not teaching math or
language arts. These and other complications are why, ultimately, the education field will end up using the same
imperfect evaluation strategy used by most professions: a blend of quantifiable data and managerial
judgment.
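
Rotherham's point that different value-added models can reach different conclusions about the same teacher is easy to demonstrate. In the contrived sketch below (every number is invented), a simple-gains model and a regression-adjusted model deliver opposite verdicts on the same class, because students with high prior scores tend to post smaller raw gains.

```python
# Two "value-added" models applied to the same hypothetical class.
# All data are contrived so the two models disagree.

def fit_line(x, y):
    """Ordinary least squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# District-wide scores: lower scorers gain more (regression toward the mean).
district_prior = [400, 450, 500, 550, 600]
district_current = [440, 480, 520, 560, 600]

# One teacher's class of high prior achievers.
class_prior = [580, 600, 620]
class_current = [590, 608, 622]

# Model A: raw gain relative to the district's average gain.
district_gain = (sum(c - p for p, c in zip(district_prior, district_current))
                 / len(district_prior))
class_gain = (sum(c - p for p, c in zip(class_prior, class_current))
              / len(class_prior))
print(f"Model A (simple gains): {class_gain - district_gain:+.1f}")

# Model B: residuals from a district-wide regression of current on prior.
slope, intercept = fit_line(district_prior, district_current)
residuals = [c - (intercept + slope * p)
             for p, c in zip(class_prior, class_current)]
print(f"Model B (regression-adjusted): {sum(residuals) / len(residuals):+.1f}")
```

Model A scores this class at -13.3 (well below the district's average gain) while Model B scores the same class at +6.7; the verdict flips depending on which model is chosen.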

Value-added evaluation relies on tests not designed for that purpose and leads to
unjustified firings of good teachers.
Strauss 12 - Valerie Strauss covers education for the Washington Post, 12-23-2012, "The
fundamental flaws of ‘value added’ teacher evaluation," Washington Post,
https://www.washingtonpost.com/news/answer-sheet/wp/2012/12/23/the-
fundamental-flaws-of-value-added-teacher-evaluation/
The common sense rationale for linking teacher evaluations to student test scores is to hold teachers accountable for how much their students
are learning. The favorite way of measuring gains, or lack thereof, in student learning is through “value-
added” models, which seek to determine what each teacher has added to the educational achievement of each of his or her students.
Even though it seemingly makes sense to look at individual gains attributable to particular teachers, this method is fundamentally
flawed due to the nature of current state tests, as well as the methods used to assign students to teachers and other reasons. These
tests were not designed to be used in that way, and various aspects of their administration make this use improper. In a
briefing paper prepared for the National Academy of Education (NAE) and the American Educational Research Association, Linda Darling-
Hammond and three other distinguished authors reached the following conclusion: “With respect to value-added measures of student
achievement tied to individual teachers, current research suggests that high-stakes, individual-level decisions, as well as comparisons across
highly dissimilar schools or student populations should be avoided.” The paper goes on to say that “in general, such measures should
be used only in a low-stakes fashion when they are part of an integrated analysis of what the teacher is
doing and who is being taught.” (Disclaimer: Although I am a member of NAE, I did not research or write that paper.) The paper
highlights three specific problems with using value-added models to evaluate teacher effectiveness,
especially for such important decisions as teacher employment or compensation: Value-added models of teacher effectiveness
are highly unstable. Teachers’ ratings differ substantially from class to class and from year to year, as well as from one test to another.
Teachers’ value-added ratings are significantly affected by differences in the students who are assigned
to them, even when models try to control for prior achievement and student demographic variables. In particular, teachers with large
numbers of new English learners and others with special needs have been found to show lower gains than the same teachers who are teaching
other students. Value-added ratings cannot disentangle the many influences on student progress. These include home, school and student
factors that influence student learning gains and that matter more than the individual teacher in explaining changes in test scores. Cautions
about value-added testing have also been expressed by a group of testing and policy experts assembled by the Economic Policy Institute. This
group concluded that “[w]hile there are good reasons for concern about the current system of teacher evaluation, there are also good
reasons to be concerned about claims that measuring teachers’ effectiveness largely by student test
scores will lead to improved student achievement.” In a similar vein, W. James Popham, professor emeritus at UCLA and test
design expert, has concluded that the
use of students’ test scores to evaluate teachers “runs counter to the most
important commandment of educational testing — the need for sufficient validity evidence.”

Implementing the plan doesn’t remove any high-stakes testing.


a.) Value-added measures often use the same tests; they are just analyzed
differently.
b.) High-stakes testing extends beyond the assessments used to evaluate
teachers and schools. The SAT, ACT, and AP tests are all still in place. If
we pretend the plan solves all of that, disadvantaged students will be even
less prepared for those tests, even more likely to fail, and still
subjected to the kinds of dehumanization the aff tries to solve.
c.) The one card that actually discusses their solvency mechanism concludes
that growth and proficiency could be measured jointly, which means even
more high-stakes testing.
