
6

This chapter examines developments in using standardized tests in the college admission process. The author discusses the controversy that has surrounded these tests over the past several years.

The Role of Standardized Tests in College Admissions: Test-Optional Admissions


Steven Syverson
There are numerous standardized tests in use throughout American society; amplified by implementation of the federal No Child Left Behind legislation, standardized testing is ubiquitous in the educational experience of students here. Perhaps the most oft-referenced (and most oft-fretted-over) of all educational tests are those associated with college admission. For the purposes of this discussion, we consider specifically the SAT-I and the ACT and their use in the college admission process. In the past few years, there has been a growing outcry over the SAT, and to a lesser extent the ACT, and their use and misuse in both the admission process and the assessment of educational institutions, the latter being a purpose for which they were never intended (Zwick, 2002). The SAT has drawn the greatest criticism; particularly with the three-part SAT-I launched early in 2005, the volume (both meanings intended) of critics has increased dramatically. A range of colleges do not now require the SAT or ACT as an integral part of their admission process (and have not in the past). Such institutions typically are less selective or highly specialized (for example, art institutes), but the level of dissatisfaction with the SAT has prompted an increasing number of selective institutions to adopt (or consider adopting) admission policies that place less emphasis on standardized tests, even to the point of making them entirely optional. This trend is the focus of our discussion. Because there is not a substantial body of literature exploring test-optional admission policies among selective colleges, much of the information described in this article is based on interviews with deans and directors of admission at colleges that have adopted such policies.

History of the SAT and ACT


Prior to 1900, each college or university in the United States that required an entrance examination developed and used its own, institution-specific battery of questions and essays to assess the preparation of applicants. At that time, a relatively low proportion of the populace attended college, and a high proportion of those who did first attended independent college-preparatory secondary schools. Also at that time, many students from a given school would attend the same college, and many students would apply to only one college. In 1900, concern over the numerous, institution-specific entrance examinations being required for admission prompted a group of twelve colleges and universities in the Northeast to found the College Entrance Examination Board (CEEB). The Scholastic Aptitude Test (SAT) was developed in 1926, using questions similar to those in the Army Alpha tests developed during World War I, which were modeled after IQ tests (Zwick, 2002). The creators of the SAT had an interest in identifying talented students who had not been privileged to attend Northeastern prep schools and in offering students one standardized test that could be used in applying to multiple institutions (Lemann, 1999). The SAT was taken by a relatively small number of students and used by only a smattering of colleges during the 1930s and early 1940s. At the end of World War II, the GI Bill sent thousands of returning veterans to college, spurring rapid growth in use of the SAT by colleges (Zwick, 2002). In 1937, the College Board introduced the Achievement Tests (originally called the Scholarship Examinations), which were discipline-specific and assessed achievement in particular content areas (Lemann, 1999). In 1990, the College Board reconfigured the SAT and Achievement Tests into the SAT-I: Reasoning Tests and SAT-II: Subject Tests. The name was changed in 1993 from Scholastic Aptitude Test to Scholastic Assessment Test in an effort to appropriately reflect the changing nature of the test. Most recently, the SAT has been deemed no longer an acronym for anything, so the test is simply referred to as the SAT. In 1959, the American College Testing Program (ACT) was created in Iowa City, in part because individuals who were centrally involved with the Iowa Testing Programs (a statewide high school testing program) felt the SAT was heavily geared toward elite institutions of the East (Zwick, 2002). The ACT was initially employed primarily by Midwestern colleges, but its use expanded, and now virtually every college in the nation accepts either the ACT or the SAT from its admission applicants.
Initially, the two tests were substantially different in nature. The SAT was designed to assess higher-order reasoning skills to help predict aptitude for success in college; it was not specifically tied to the high school curriculum. By contrast, the ACT was considered to be closely linked to the high school curriculum and was an assessment of mastery of that curriculum. As noted by George Elford (2002), these are really two points along a testing spectrum that runs from aptitude tests, which assess teachable skills to predict future success, to achievement tests, which assess mastery of subjects that have been taught. Though the premises of the two tests differ, both can provide useful information in predicting a student's likelihood of early success in college. Over the latter part of the twentieth century, these tests enjoyed increasingly universal application in college admissions. Even some colleges with open admission began requiring the tests, in part, some would argue, because of the cachet of requiring them. By 2005, 1.4 million college-bound students took the SAT and 1.2 million sat for the ACT. In spite of the fact that the two tests are structured with different underlying philosophies of measurement, they are used almost interchangeably by college admissions offices; virtually every college in the United States accepts either test. The SAT and ACT have taken on an almost mystical importance in modern American society, being used as a yardstick for assessing the quality of high schools and colleges and having a major impact on everything from a student's self-image to the price of homes in a particular neighborhood. Although the SAT and ACT are not considered to be IQ tests, students with high SAT scores are routinely presumed to be bright and encouraged to consider the most selective colleges, sometimes with no regard to their academic performance in high school. Students with strong scores who have done poorly in school are presumed to be underachievers, and in conversation with admission counselors many parents of such students suggest that the student simply was not challenged in high school. Conversely, students with lower test scores may be discouraged from applying to a particular college even if they have demonstrated exceptional academic achievement in high school. Raymond Brown, dean of admission at Texas Christian University (Nov. 2005, Jan. 2006), comments about this phenomenon: "If you assume a bell-shaped distribution, then on one end of the chart you would have a small group of strong performers who are poor test takers, and on the other end you would have an equal number of strong test takers who are weak performers, but the number of people who assert that they are poor test takers but strong performers far outnumbers those who describe themselves as poor performers but strong test takers." Many people express concern that specific subgroups of the population (for instance, students from a lower socioeconomic or less-sophisticated background) are especially intimidated by the SAT and ACT, tend to score less well, and are therefore discouraged from pursuing a college education. Weak performance on one of these tests can sometimes do great harm to a student's self-image and self-confidence.
Similarly, although these tests were never intended to assess the quality of the educational program provided by colleges, at the beginning of the new millennium a college's average SAT or ACT scores (particularly SAT scores) have become, in many sectors of society, a shorthand notation for the quality of that college. So important have they become that a common question asked of admission counselors at college fairs is, "What are your average SATs?", sometimes even preceding questions about location or available majors. Students, families, and even high school counselors frequently use the answer as a way to place the school in the universe of colleges. Much less frequently is a college representative asked about the average GPA or high school class rank of the college's freshman class. Although accounting for only a small portion of the ratings found in publications such as the U.S. News & World Report national ranking of colleges, the presumed association between the average test scores of a college's freshman class and its perceived prestige is of great import in the minds of many admissions office staff. Concern for the college's freshman class profile sometimes causes staff to deny admission to students they are convinced could be fully successful at their college, solely because the test scores would hurt their institution's freshman class profile. On occasion, students have even been encouraged to apply again next year as a transfer student, because "we'd love to have you, but your test scores are too low." (Transfers typically are not included in the published academic profile of an institution.) Requirements for the SAT or ACT, and the reporting of them, have taken on significance in marketing an institution. Minimally selective colleges have been known to require these standardized tests primarily because doing so emulates the more selective colleges. Colleges have also devised strategies to artificially enhance their published academic profile, and because these statistics (as well as many other admission statistics) are not readily auditable, there is great potential to do so. In the 1980s, there was great discussion and concern over what were referred to as NIPs (Not In Profile). In reporting the standardized test scores of the freshman class, it was not an uncommon practice for colleges to exclude from their published academic profile one or more portions of their freshman class, such as international students, athletes, students of color, and sometimes even legacies (sons and daughters of alumni). On average, these subgroups tended to lower the modal test scores for the class. The justification for excluding these students was that they were being admitted for special talents or attributes and that to include their scores would be misleading to the average prospective student reviewing the profile.

Discontent with the SAT


The quality of the educational experience offered at a college is a complex matrix of resources, environment, and programmatic philosophy, combined with the faculty, staff, and students at the particular institution, all assessed within the context of the particular student taking advantage of that educational experience. There are a variety of legitimate yardsticks for measuring attributes of colleges (such as the lowest faculty-student ratio or the highest endowment-per-student ratio), but there really is no one "best" college in the country, because different colleges have different missions and serve different populations in different ways, so "best" has meaning only in the context of the individual taking advantage of the education. Increasingly, though, the average SAT score for the freshman class at a college became an overly simplistic proxy for the quality of that college. Far too often, students and counselors treat the average SAT as a minimum (rather than as a modal tendency of the class), thereby potentially discouraging applications from students who would be well served by the particular institution.

In the late 1980s and early 1990s, the concern over misuse of test scores prompted a shift in how colleges report their test-score profiles, with the National Association for College Admission Counseling (NACAC) and other organizations beginning to encourage colleges to avoid publishing their average test scores. As of this writing, most colleges (and most college guidebooks) have adopted the convention of reporting standardized test scores as a range representing the middle 50 percent of the test scores of the entering class. This has two advantages for students. First, it decreases the likelihood that prospective students whose scores are below the average will be discouraged from applying. Second, it offers prospective students a much richer understanding of the score profile of the college, because it not only reports the modal tendency of the test scores but also indicates how widely or tightly grouped the class is. For instance, two institutions may each report an average ACT of 26, but one might have a middle 50 percent range of 22–30 and the other a range of 25–27.

Proponents of standardized testing argue that variability among high schools in both grading standards and academic rigor limits the value of high school transcripts in assessing preparation for college and in predicting the likelihood of success in college (Zwick, 2002). In earlier days, students typically would take the test only once and prepare primarily by getting a good night's sleep the night before. As the perceived importance of the tests grew, so did the test-preparation industry. Industry analysts predicted that the 2005 addition of the writing section to the SAT-I and the optional writing sample on the ACT would add about $200 million to the $1 billion annual revenues of the test-preparation industry. For most of the existence of the SAT, the College Board espoused the view that test-preparation workshops and classes would not significantly enhance a student's scores because the test assessed higher-order thinking skills that could not be enhanced by short-term coaching. But around the turn of the millennium, the College Board reversed its long-standing argument and began to cash in on test-preparation revenue opportunities by offering its own test-preparation resources.
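To make the middle-50-percent reporting convention described above concrete, the brief sketch below (written in Python, using hypothetical scores rather than data from any actual college) shows that the published range is simply the 25th and 75th percentiles of the enrolled class's scores:

```python
import numpy as np

# Hypothetical ACT composite scores for an entering class (illustrative only).
rng = np.random.default_rng(0)
act_scores = rng.normal(loc=26, scale=3, size=500).round().clip(1, 36)

# The "middle 50 percent" range reported in guidebooks is simply the
# 25th and 75th percentiles of the enrolled class's scores.
p25, p75 = np.percentile(act_scores, [25, 75])
print(f"Average ACT: {act_scores.mean():.1f}")
print(f"Middle 50 percent range: {p25:.0f}-{p75:.0f}")
```

Two classes with the same average can thus publish quite different ranges, which is precisely the additional information the convention is meant to convey.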
Opponents of widespread use of these tests have long argued that they are biased with regard to race and gender and that they correlate primarily with socioeconomic status (Zwick, 2002). Although the testing agencies have made significant and conscientious efforts to ensure that the tests are not biased against any particular group, many professionals still believe they disadvantage students of color and the less affluent or less sophisticated students who are the first in their families to attend college. The proliferation of formal test-preparation courses, as well as the fact that many of the more sophisticated or affluent schools have actually incorporated test preparation into their regular curriculum, raises the concern that the poor and less sophisticated are becoming further disenfranchised in the college admission process because they are so much less likely to have had the benefit of test-preparation courses. Furthermore, because students from more affluent and more sophisticated backgrounds are disproportionately likely to use test-preparation resources and to pay to take the test multiple times (thereby likely improving their scores), test results are less likely to constitute a level-playing-field standard that reduces the impact of differences in high school grading standards and academic rigor, thereby undermining one of the original positive purposes of the tests.

Current Usage in the Admission Process


In spite of variations in grading standards and rigor at high schools across the country, it is widely acknowledged that a student's record in high school is the best predictor of success in college. Most commonly in such predictive modeling, the student's high school GPA is used to describe the record in high school, and success in college is measured either by graduation or by the GPA the student achieves (most often at the end of the freshman year). A more refined correlation assessment would augment the definition of the student's high school record to include information about the rigor of the student's high school curriculum and the high school environment. Although graduation from college is perhaps the better indicator of success (that is, the highest GPA is only a very limited definition of the most successful college experience), the college GPA at the end of the freshman year is more commonly used in predictive modeling. Generally, when used in conjunction with the high school record, standardized test scores add some predictive value to the admission formulae that attempt to predict the likelihood of success at a given college. Using aggregate national data from many institutions, Astin and Oseguera (2005) evaluated the likelihood that a student would graduate from college within six years of entrance and found that high school GPA accounted for about 8.3 percent of the variance in the likelihood of graduation. Adding the SAT-I to the analysis accounted for less than an additional 0.8 percent of the variance. Lawrence University institutional researcher William Skinner (Oct. 2005) reported that for the class entering in 2003, when used in conjunction with high school GPA and other demographic variables, the ACT Composite score explained an additional 8.6 percent of the variation in GPA for freshmen at the end of their first term of study. But in considering the third-term GPAs of freshmen (the end of the freshman year; Lawrence is on a trimester system), the ACT incrementally explains only 1.1 percent of the variation, with high school GPA and the other variables having significantly eclipsed the ACT as predictors of academic performance at Lawrence. This result, as well as other studies, tends to confirm the limited long-term predictive value of these standardized tests.

Institutions that use standardized tests in admission should have institution-specific data that support the predictive value of those tests. Admission officers at those institutions should have a clear understanding of the significance (or lack of significance) of that predictive value to ensure the tests are used appropriately in the admission process. As I have noted, those data typically are in the form of a regression analysis and focus on the GPA achieved by enrolled students at the end of the freshman year. A major weakness of such an approach is that it makes no attempt to differentiate among students' curricular choices. Standardized test scores in particular do not measure creativity or predict likely success in the fine or performing arts. Nor do most predictive models account for the effect of the (usually) large proportion of freshmen taking general education requirements, which may not be their best subjects.
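For readers curious about how such incremental predictive value is typically quantified, the brief sketch below (in Python, using entirely synthetic data; the variable names and effect sizes are illustrative assumptions, not figures from the studies cited above) compares the variance in college GPA explained by high school GPA alone with that explained by high school GPA plus a test score:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic, illustrative data: the columns and effect sizes are assumptions,
# not figures from any institution's actual study.
rng = np.random.default_rng(1)
n = 1000
hs_gpa = rng.normal(3.3, 0.4, n)
test_score = 0.6 * (hs_gpa - 3.3) / 0.4 + rng.normal(0, 0.8, n)  # correlated with GPA
college_gpa = 0.5 * hs_gpa + 0.05 * test_score + rng.normal(0, 0.4, n)

# Model 1: high school GPA alone.
X1 = hs_gpa.reshape(-1, 1)
r2_gpa = LinearRegression().fit(X1, college_gpa).score(X1, college_gpa)

# Model 2: high school GPA plus the standardized test score.
X2 = np.column_stack([hs_gpa, test_score])
r2_both = LinearRegression().fit(X2, college_gpa).score(X2, college_gpa)

print(f"R^2, HS GPA only:      {r2_gpa:.3f}")
print(f"R^2, GPA + test score: {r2_both:.3f}")
print(f"Incremental variance explained by the test: {r2_both - r2_gpa:.3f}")
```

The "additional variance" figures reported by Astin and Oseguera and by Skinner are of this form: the difference in variance explained once the test score is added to a model that already contains the high school record.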

Rationale for Adopting a Test-Optional Admission Policy


Over the years, colleges have adopted test-optional admission policies for a variety of reasons. In some instances, it was primarily a marketing decision, driven by the hope that becoming a test-optional school would garner publicity for the institution. Another market-driven motivation was the expectation that students with higher test scores would tend to submit them while those with lower scores would not, so the scores reported on the institutional profile, as well as to the guidebooks and magazines such as U.S. News & World Report, would present a stronger academic profile for the institution. That stronger profile might in turn raise the institution's appearance of selectivity, which many believe translates rather directly into enhanced prestige and perceived value, attracting a larger applicant pool.

A few selective colleges have adopted a test-optional policy because it is entirely consistent with their institutional mission and ethos. St. John's College in Maryland, for example, offers no midterms or finals and bases assessments solely on papers and classroom discussion (John Christensen, Jan. 2006). The college requires lengthy essays on the application, in part asking students to reflect on why they are attracted to the unusual curriculum at St. John's. The school feels the SAT does not do a good job of predicting success in such a program and so is of little value in the admission process; SATs have not been required for at least three decades. Most selective colleges that have decided to become test-optional have done so because they believe the policy may attract a more diverse applicant pool, because they believe the negative impact of the tests on students is disproportionate to the value of those tests in the admission process, or because they seek to free the admission office from the yoke of the impact of test scores on the institutional profile.

Examples of Test-Optional Colleges


The National Center for Fair and Open Testing (FairTest) has compiled on its Website (http://fairtest.org) a list of more than seven hundred four-year colleges that admit a substantial portion of their students without using standardized test scores. FairTest refers to these colleges as test-optional, but their actual policies on standardized testing vary widely. Included in the FairTest listing are colleges at a variety of levels of selectivity. On the basis of the profiles in Peterson's Four-Year Colleges (Oram, 2006), one could say a substantial portion of them are non-competitive or minimally competitive institutions, meaning that essentially anyone who applies will be admitted. Though some of these institutions may require students to submit the results of one or another standardized test, the tests are primarily used for placement rather than for the admission decision. Other institutions in the listing include those with quite specialized curricula, such as art institutes (Milwaukee Institute of Art and Design, California Institute of the Arts, Ringling School of Art and Design, and so on) and schools of music (Juilliard School, Boston Conservatory, and so on), for which the primary admission assessment is of demonstrated talent in the area of specialization, an assessment to which the SAT and ACT can contribute little value. About one-third of the institutions on the FairTest list identify themselves in the Peterson's guidebook as moderately selective. The bulk of them use the tests only for placement (that is, the tests are not required for admission), or they are colleges that offer admission to any applicant who meets a minimum GPA requirement. Many public institutions (such as Portland State University and the California State University system), for instance, use a numeric admission matrix in which all candidates with the specified curriculum and minimum GPA are guaranteed admission and need not submit standardized test scores, whereas students with a GPA that is lower than the minimum must submit test results at or above a specified level in order to be offered admission. Rather than being test-optional in the strictest sense of the term, these colleges do not require test results unless a student has a lower GPA and wishes to offer the test results as an indicator of unfulfilled potential.

Any public attention that is directed toward test-optional colleges, though, is almost exclusively focused on the relatively small group of selective colleges that society would normally expect to require either the SAT-I or the ACT but that have deemphasized those tests by giving applicants the option not to have their test results considered in the admission process. Such institutions are truly test-optional; among them are such notables as Bates College, Bowdoin College, and St. John's College (Maryland), which have offered students this option for more than two decades. A variation on this theme is offered by selective colleges that allow students to submit either standardized tests (the SAT-I or ACT) or other documents in lieu of the tests. Lewis & Clark College, for instance, offers students the option of a portfolio path that allows them, in lieu of test results, to submit four graded writing samples and two additional teacher recommendations. Guilford College requires nonsubmitters, as they are called, to include with their application a substantive writing project of eight to twelve pages in length. Franklin and Marshall College has required two graded writing samples in lieu of test results, but only students with at least a 3.6 GPA or who are ranked in the top 10 percent of their high school class have been allowed to exercise this option.

Some selective institutions, particularly in the Northeast, have adopted what they refer to as an SAT-I-optional admission policy. Under this policy, the college still requires some version of standardized test results for admission, but it need not include the SAT-I. Hamilton College, for instance, at the time of this writing was in the final year of a five-year experiment in which applicants are required to submit the SAT-I, the ACT, or the results of three standardized tests: one assessment of verbal or writing abilities, one of quantitative abilities, and a third of the student's choice. Students choosing the latter option can designate a combination of Advanced Placement exam(s), SAT-II exams, IB exam(s), or any of the three sections of the SAT-I. Interestingly, Hamilton also expressly offers students the option of submitting all their scores and having the Hamilton admissions staff select for them the scores to be used in the admission decision. Beginning with the class entering in 2004, Sarah Lawrence College removed all standardized testing from the admission process. Thyra Briggs, dean of admission, notes that although the college has always required a graded analytic writing sample, the new policy further underscores its emphasis on writing rather than testing. With amusement, she also commented on the shredding of Rush Score Reports that are frantically sent by students who either didn't read the instructions or didn't believe that their test results truly would not be considered in the admission process (Briggs, Sept. 2005).

The Bates Study


At the NACAC National Conference in the fall of 2004, Bill Hiss reported on the Bates College study of its twenty years of test-optional admissions, thereby bringing renewed national attention to the discussion of the impact of test-optional admission policies. The study, which can be found at www.bates.edu, lent strong support to the notion that sound admission decisions can be made in the absence of standardized test scores, even at a highly selective institution. Roughly 30 percent of the class each year were what Bates refers to as nonsubmitters, applicants who chose not to submit their SAT or ACT results for consideration in the admission process. To facilitate its research on the impact of the policy, Bates requires matriculating students who were nonsubmitters to submit all available SAT or ACT scores after they enroll at Bates. This allows researchers to compare the nonsubmitters with the submitters more effectively and to search for differences between the groups. On average, the nonsubmitters had SAT-I scores that were about 160 points lower than the submitters' (90 points on the Verbal and 70 points on the Math). In spite of the substantially lower test scores of the nonsubmitters, the graduation rates of the two groups differed by only 0.1 percent, and the average Bates GPAs varied by only 0.05. Perhaps of even greater significance was the revelation that for students with comparable SAT scores (for example, submitters and nonsubmitters who both had SAT scores of 1100–1150), the nonsubmitters rather consistently outperformed the submitters at Bates (as measured by their GPA at graduation; see the Bates Website). Apparently students who chose not to submit their scores had accurately assessed the fact that their scores were not a good indicator of their potential for academic success.

The Bates study also documents disproportionate use of the test-optional choice by women, students of color, and international students, thereby lending support to the notion that people in these groups feel underserved by these tests. Or, as Christopher Hooker-Haring of Muhlenberg College notes, "The negative effect of the SAT weighs disproportionately on certain subgroups of the population" (Hooker-Haring, Jan. 2006). Although it is impossible to determine the cause of such an increase with confidence, virtually every college that has been test-optional for an extended period of time reported substantial growth in applications and matriculation among underrepresented students in the years since the introduction of its test-optional policy. The Bates research has garnered the greatest publicity and broadest dissemination, but other test-optional colleges are pursuing serious assessment of their admission policies as well, and most are reporting results similar to those at Bates: virtually identical graduation rates and average college GPAs, an increased number of students of color and less-sophisticated students, higher profiles, and a larger applicant pool. None of the twenty selective liberal arts colleges interviewed for this article expressed any serious interest in abandoning a test-optional policy. Lafayette College is the only such college we were able to identify that had instituted but then abandoned a test-optional admission policy.
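As a purely illustrative sketch of the kind of comparison described above (the records below are synthetic, and the column names are assumptions rather than data from the Bates study), one might tabulate outcomes for submitters and nonsubmitters overall and then within matched SAT bands:

```python
import numpy as np
import pandas as pd

# Entirely synthetic records for illustration only.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "submitter": rng.random(n) < 0.7,               # roughly 30% nonsubmitters
    "sat_total": rng.normal(1250, 120, n).round(-1),  # collected after matriculation
    "grad_gpa": rng.normal(3.3, 0.35, n).clip(0, 4),
    "graduated": rng.random(n) < 0.88,
})

# Overall comparison of outcomes for submitters versus nonsubmitters.
print(df.groupby("submitter")[["grad_gpa", "graduated"]].mean())

# Comparison within matched 50-point SAT bands, analogous to comparing
# submitters and nonsubmitters who both scored in the 1100-1150 range.
df["sat_band"] = (df["sat_total"] // 50) * 50
print(df.groupby(["sat_band", "submitter"])["grad_gpa"].mean().unstack())
```

The second table is the more telling one in the Bates analysis, because it compares students with similar scores and asks whether the decision to withhold scores itself carries any information about later performance.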
Implementation
Most of the selective, test-optional colleges require students to indicate definitively on their application whether or not they wish their scores to be considered, but some simply take the presence or absence of the scores to be the indicator. A couple of them indicated that they promote the option in their written materials, but an interested student must proactively ask the college (by phone, letter, or e-mail) not to consider the scores. Some admissions offices require their staff to remain strictly neutral relative to individual students and the option: make sure it is well publicized, but don't influence the student's decision about the option. At other institutions, even while the folder is being reviewed, staff may contact an applicant and advise that he or she would be well served to use the option.

The actual handling of scores varies considerably. Some institutions (Knox College, for instance) destroy any test-score information received from students who have asked that their test scores not be considered in the admission process. Lawrence University eliminates any trace of scores from the application folders of nonsubmitters, but, like Bates and a number of other institutions, it records the scores for all incoming students after matriculation to facilitate research. At other institutions, even if a student chooses a test-optional path, score reports remain in the file during the folder-reading process, but admissions officers are instructed to ignore them.

One motivation often assumed by other colleges to be operative when a school adopts a test-optional policy is the desire to raise the college's test-score profile, because only the scores from the submitters will be reported to the public (and the guidebooks). Most test-optional institutions do not acknowledge this as a primary goal, but they may reap the benefits anyway. In an instance where the score reports are systematically destroyed, obviously there is no choice to be made relative to the scores that are reported publicly on the institutional profile. At other colleges, the logical and ethical decision about what to include in the profile is less clear and the actual decisions more varied. Some institutions profess to include in their profile all scores to which they ever have access, with the rationale that this best represents the profile of the enrolled students, though they may not solicit scores from nonsubmitters. Other institutions report only those scores that were actively included in the decision-making process, arguing that prospective students who are considering submitting their scores are best served by a test-score profile that represents only those students with whom they would be competing as test submitters. One institution actually acknowledged that, among its nonsubmitters, it reports the scores of those with attractive scores but not of those with unattractive scores. As surprising as this seems, it is perhaps no more egregious in intent or effect than the practice at institutions that actively counsel applicants with low test scores to choose the test-optional route.

Like so many other aspects of this policy, the proportion of students choosing to be nonsubmitters varies widely, depending in part on geography and on what alternatives are required. The proportion of nonsubmitters is sometimes as low as 3 percent at institutions where the nonsubmitter is required to do a substantial amount of alternative work (such as submitting multiple writing samples). At the purely test-optional institutions, where the student can simply ask that none of her test scores be considered, the proportion is typically in the 20–30 percent range. At some of the institutions that are only SAT-I-optional (that is, the student must submit a variety of standardized test results, but the SAT-I need not be one of them), nearly half the class is missing SAT-I scores.

One institution reported an ironic dichotomy. Its test-optional policy was adopted to attract more students with strong achievement who might have less-than-stellar testing. However, school officials publicly report only those scores that were used in the admission decisions (20–25 percent of the applicant pool are nonsubmitters) and acknowledge that this raises the school's apparent profile. The higher test-score profile, though, obviously makes a larger proportion of prospective students feel less comfortable about their test scores. There is concern that the test-optional policy made the college less attractive to some high-scoring students who felt that the policy was an implicit statement that the college didn't value standardized test scores (of which those students were proud). So the institution initiated National Merit Scholarships in an effort to make the statement to high-scoring students that high test scores are indeed valued.

Advantages of a Test-Optional Admission Policy


Almost without exception, deans and directors of admission at selective, test-optional colleges report numerous benefits accruing to their college from the policy. Beyond the growing applicant pools and diversity that appear to coincide with adoption of test-optional admission policies, they cite subtler advantages as well. Robert Maasa at Dickinson College comments, "The policy allows us to accept those highly qualified students that don't test well, without being concerned about their impact on our profile" (Maasa, Jan. 2006). Paul Steenis believes it has allowed Knox College to strengthen the consistency of its message about the things that are important in a liberal arts education (Steenis, July 2005, Jan. 2006). Patricia Maben is convinced that the policy attracts to Hartwick College more of those students who believe in that "spirit of learning" (Maben, Jan. 2006). Gary Ripple indicated that when Lafayette adopted a test-optional policy (it has since reinstated a testing requirement), his staff felt liberated in their reading of folders, no longer needing to be preoccupied with concerns about test scores (Ripple, Jan. 2005, Jan. 2006).
Multiple interviewees reported refreshing conversations with high school counselors, as well as students, that now focus on the teaching and learning styles at the college rather than on competitive marketing jargon. Thyra Briggs said her favorite question from students who have just been told they need not submit any tests is, "Then what do you look at?" It gives her a chance to depart from the usual rhythm of the college spiel and engage the student in a discussion of the real educational goals and style of Sarah Lawrence (Briggs, Sept. 2005). In short, a test-optional policy helps to reframe the college admission conversation in healthy ways that serve students well.

Challenges to Adopting a Test-Optional Admission Policy


If the results are so positive, why have more colleges not adopted a test-optional policy? First and foremost, it appears that the use of test scores is so ingrained in most colleges' selection processes and institutional self-images that the majority are not even giving serious consideration to becoming test-optional. Essentially, we have here the problem of inertia: it takes a lot more energy to change (or even to consider changing) than it does to continue doing what we have done for the past several decades. Two decades ago, Ernest Boyer, in College: The Undergraduate Experience in America, summarized the problem with two quotes from admissions officers interviewed as part of a Carnegie Foundation survey. One said, "If we didn't ask for the scores, we would be regarded in the marketplace as having very low prestige. We can't afford that." The other commented, "It's like a dance where everyone continues to go through the motions after the music has stopped" (Boyer, 1987). Given how many college faculty and staff unquestioningly consider the tests to be integral to the admission process, it is ironic how few of them are conversant with any data or studies documenting (for their particular institution) the value or reliability of the tests. There is a great deal of assumption surrounding the use of the SAT and ACT in admission.

If a test-optional policy is posited, the resistance on a campus seems to center on two areas of concern. The first is that the absence of standardized test scores will result in the admission of less qualified or lower-quality students, yielding a degradation in the academic strength of the student body. The second is that the institution will lose prestige solely through the absence of a test-score requirement. Regarding the former, the sort of colleges under consideration (selective liberal arts colleges) essentially all employ an intensive, holistic, multifaceted evaluation of application materials. Most argue that they already place the greatest weight on the student's high school record and a combination of other attributes. The data collected by test-optional institutions almost universally indicate that nonsubmitters graduate at a rate virtually identical to that of submitters and achieve comparable grades. Though the graduation rate and grades typically are minimally lower for the nonsubmitters, as a faculty member at one of the institutions pointed out, the performance of the nonsubmitters should not be compared to that of the rest of the enrolled students but to that of the students who were not admitted (for example, waitlisted students), because they would be the cohort from which students would be selected to replace the nonsubmitters if the college were forced to do so.

As noted by Boyer's survey respondents two decades earlier, the other concern (and probably the overriding one at most institutions) has to do with marketing: elimination of the testing requirement will signal decreased academic selectivity to the world, resulting in a perception of lesser institutional quality. As noted by Dennis Trotter at Franklin and Marshall, "It's a signaling issue; people fear it sends a signal to the marketplace that you can't compete" (Trotter, Dec. 2005). Ann McDermott at College of the Holy Cross adds, "High testing is still sexy" (McDermott, Jan. 2006). Yet in the 2006 U.S. News & World Report ranking of National Liberal Arts Colleges, 20 percent of the twenty-five top-ranked colleges and more than 25 percent of the top seventy-five colleges are included in FairTest's listing of test-optional colleges, so the policy clearly has not had a significant negative impact on those colleges. Thus, neither counterargument appears to withstand rational scrutiny.

There are, however, some cautionary comments from the deans interviewed. Home-schooled students are applying in growing numbers, and they present special challenges in evaluating academic strength in the absence of test scores (some test-optional colleges still require tests of home-schooled students, though not necessarily the SAT-I or ACT). Bowdoin College's Richard Steele cautions that, particularly in the Northeast, with galloping grade inflation and vanishing class ranks, colleges considering going test-optional need to be very confident that they aren't losing critical information in the absence of standardized tests (Steele, Jan. 2006). He also noted, though, that Bowdoin's weighted academic performance rating (a composite of the unweighted GPA, the rigor of the curriculum, and the quality of the secondary school) correlated most strongly with academic success at Bowdoin, even in the absence of test scores. Finally, for institutions with a huge volume of applications, the SAT or ACT is an expedient mechanism for sorting or filtering applicants (whether or not it is completely valid), and its absence would require more staff to complete the selection process in a timely manner.

Future Trends in Test-Optional Admission


In summary, during 2005 and 2006, at least ten selective liberal arts colleges adopted new test-optional policies, and a similar number are currently having serious conversations about doing so in the near future. Clearly, colleges that engage in holistic, comprehensive folder reading are able to make good, well-founded admission decisions even in the absence of SAT or ACT scores. The current prominence of these tests distorts both the admission process and, in too many cases, the focus of student attention during the latter years of secondary school, as well as adding unneeded stress to the lives of students. The test-prep industry is booming, and its availability tends to exacerbate the gap between the more-affluent and less-affluent members of our society, thereby inhibiting some of the egalitarian educational goals espoused by most members of the educational community. A substantial number, perhaps even the majority, of students, parents, and high school counselors seem to be highly supportive of the move toward test-optional admission, because in its purest usage it tends to help restore sanity to the college admission process. Colleges that are considering deemphasizing these two tests in their admission processes can choose from among several potential policy configurations that have already been proved effective by other institutions, so it would not be surprising to see a growing number of colleges make this move.

There is a widely held perception that the most highly selective colleges have the greatest need for standardized tests because they must make such fine distinctions among exceptionally well-qualified students. A rational case can be made, though, that these colleges actually have the least need for this information. To the extent that SAT scores, for instance, are used to improve the prediction of college freshman GPAs, they become of little significance when predicting the likelihood of receiving a 3.0 GPA as opposed to a 3.2 GPA; either one is a perfectly acceptable freshman GPA. There is little likelihood that the admissions staff at the most highly selective institutions, even in the absence of test scores, could not still identify students who are likely to survive at their institution. Within the large pool of well-qualified applicants who clearly will be successful at such an institution, the healthy focus of attention should instead be on the other student attributes that enrich and enliven the campus community. One can argue that test scores, with their seductive apparent precision, give these institutions only an inappropriate shorthand efficiency in the decision-making process. Indeed, to whatever extent one can validate the predictive value of test scores, it is the less-selective institutions that can better justify their use, as they attempt to identify students who are unlikely to survive academically at their institution.

One final note. It is worth remembering that most of the colleges discussed herein have been around for at least one hundred to two hundred years, so standardized tests have been a part of their admission schema for a relatively small portion of their institutional existence. There was life before standardized tests, just as there will likely be life after them.

References
Astin, A. W., and Oseguera, L. Degree Attainment Rates at American Colleges and Universities (rev. ed.). Los Angeles: Higher Education Research Institute, UCLA, 2005.
Boyer, E. L. College: The Undergraduate Experience in America. New York: HarperCollins, 1987.
Elford, G. W. Beyond Standardized Testing: Better Information for School Accountability and Management. Lanham, Md., and Oxford, UK: Scarecrow Press, 2002.
Lemann, N. The Big Test: The Secret History of the American Meritocracy. New York: Farrar, Straus and Giroux, 1999.
Oram, F. (ed.). Peterson's Four-Year Colleges. Lawrenceville, N.J.: Peterson's, 2006.
Zwick, R. Fair Game? The Use of Standardized Admissions Tests in Higher Education. New York: RoutledgeFalmer, 2002.

STEVEN SYVERSON is the dean of admission and financial aid at Lawrence University.
