The Truth Behind the Hype: A Closer Look at the SAT

by Christina Perez, FairTest

2002 NEACAC Conference

Fairfield University, May 29-31, 2002

** Please note that this was one of three presentations delivered during this session.

The SAT I: A Faulty Instrument for Predicting College Success

When UC President Richard Atkinson first issued a call last year for the University of California to drop its SAT I requirement, one of the primary reasons he cited was a disconnect between what students learn in high school (often termed “achievement”) and what the SAT I measures (promoted as “aptitude”). The questions Atkinson and others have raised about whether the SAT indicates “aptitude” or “achievement” are key to understanding the limitations of the exam. The College Board itself seems unclear about how to bridge this gap. Reacting to Atkinson’s critique, College Board President Gaston Caperton defended the SAT I as a valuable barometer of educational quality: “The SAT scores reflect unfairness in our educational system.” Yet the College Board website states that “The test [SAT I] is designed to allow you to demonstrate your abilities in these areas regardless of the particular type of instruction you’ve received or textbooks you’ve used.”1 So the College Board seems to be trying to straddle both sides of the debate, claiming the SAT is both tied to school quality and independent of classroom learning.

In reality, the test is not closely tied to the high school curriculum, is not closely linked to the academic skills needed to succeed in college, is coachable, and is administered under artificial conditions rather than being embedded in classroom learning. For these and other reasons, the SAT does a poor job of predicting college performance. Even according to the test-maker’s own research, high school grades, class rank, and rigor of courses do a better job of forecasting college performance than the SAT. The College Board claims that the exam explains approximately 22% of the variation in first-year grades.2 High school grades, however, account for nearly 30% of the differences in freshman grades, even in an era of grade inflation and of wide variability in quality among high schools.

These figures speak to the SAT’s predictive power in general, but validity research at individual institutions and studies conducted independently of the test-maker demonstrate in more depth how poorly the exam predicts the success of many students. One study at the University of Pennsylvania found that the SAT I was by far the weakest predictor of differences in cumulative college GPA between students, explaining only 4% of the variation in college grades, while SAT II scores accounted for 6.8% of the differences in academic performance. By far the most useful tool proved to be high school class rank, which explained 9.3% of the variation in cumulative GPAs, more than twice as much as SAT I scores.3

Another study of 10,000 students at 11 selective public and private colleges found that a 100-point increase in SAT I combined scores led to a one-tenth of a grade point gain in college GPA.4 This offered about the same predictive value as looking at whether an applicant’s father had a graduate degree or her mother had completed college.

Bates College, which chose to make test scores optional in 1990, first conducted several studies to determine the most powerful variables for predicting success at the college. One study showed that students’ self-reported ratings of their “energy and initiative” added more to the ability to predict performance at Bates than did either Math or Verbal SAT scores.5 In comparing five years of enrollees who submitted SAT I scores with those who didn’t, Bates found that while “non-submitters” averaged 160 points lower on the SAT I, their freshman GPA (FGPA) was only five one-hundredths of a point lower than that of “submitters.”6

A newly released report by the University of Texas at Austin shows the success of that institution’s admissions policy, which has greatly de-emphasized test scores. Texas students in the top 10% of their high school classes are guaranteed admission regardless of test scores. One-half of enrolling freshmen are admitted under this policy; for the rest of the applicant pool, test scores are just one of 18 criteria used to evaluate students. UT Austin’s validity studies showed that while class rank forecast 17.6 percent of the differences in first-year grades, SAT I and ACT scores accounted for only 10 to 14 percent of those differences.7 The ability of test scores to predict the performance of African American and Latino students was particularly weak, accounting for no more than 10 percent of explained variance for either group.

The SAT does a particularly poor job of predicting performance beyond the first year of college. In The Case Against the SAT, James Crouse and Dale Trusheim show that the SAT I adds nothing to, and can even detract from, forecasts of long-term success. Data they analyzed demonstrated that using high school record alone to predict who would complete a bachelor’s degree resulted in “correct” admissions decisions 73.4% of the time, while using the SAT I together with high school GPA forecast “correct” admissions in only 72.2% of cases.8

Drawing from a national database of nearly 10,000 students, one study sponsored by the U.S. Department of Education found that rigor of high school curriculum offered a higher correlation with bachelor’s degree attainment (.54) than either SAT scores (.48) or class rank/GPA (.44).9

Significantly, high school curriculum was an even stronger predictor for African American and Latino students than for students overall, indicating a need for employing broader admissions criteria than just a test score/HSGPA index when trying to forecast the college performance of underrepresented minorities.
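To put these correlations on the same scale as the explained-variance figures cited earlier, one can apply a standard statistical conversion (offered here as an interpretive aid, not a figure from the study itself): for a simple correlation r, the proportion of variance explained is its square, r^2. Thus:

r = .54 (high school curriculum) → r^2 ≈ .29, or about 29% of the variance
r = .48 (SAT scores) → r^2 ≈ .23, or about 23% of the variance
r = .44 (class rank/GPA) → r^2 ≈ .19, or about 19% of the variance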

Indeed, test score prediction rates vary across population groups. The SAT’s poor predictive ability becomes particularly apparent when considering the college performance of females. Despite the fact that they receive better grades than males in comparable high school and college classes, females score on average 35-40 points lower than males on the SAT I (math and verbal combined). For students whose first language isn’t English, test-maker research shows the SAT I frequently under-predicts future college performance. One study at the University of Miami compared Hispanic and non-Hispanic White students. Though both groups earned equivalent college grades, the Hispanic students’ combined SAT I scores averaged 91 points lower than those of their non-Hispanic White peers. This gap existed despite the fact that 89% of the Hispanic students reported English as their best language.10

Extensive research compiled by William Bowen and Derek Bok in The Shape of the River highlights the SAT I’s questionable predictive power for African American students. The ability of SAT I scores to predict freshman grades, undergraduate class rank, college graduation rates, and attainment of a graduate degree is weaker for African American students than for Whites. While Bowen and Bok did find that SAT scores loosely correlated with undergraduate GPA, the magnitude of the effect was small: an additional 100 points in combined SAT I scores was associated with a gain of only 5.0 percentile points in class rank for African American students (for all students, the gain hovered just under 6 percentile points).11

The ACT does not fare much better in terms of predictive validity, even though it is supposedly more “achievement-based” than the SAT I. According to its manufacturer, the ACT has a predictive validity similar to that of the SAT I, accounting for 17% of the explained variance in first-year college grades.12 The high correlation of .89 to .92 between SAT I and ACT scores explains part of this finding and further debunks the myth that there is a substantial difference between “achievement” and “aptitude” college admission tests.
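The same squaring conversion (again an interpretation, not a figure from the ACT report) suggests just how redundant the two exams are: a correlation of .89 to .92 implies that SAT I and ACT scores share roughly .89^2 ≈ 79% to .92^2 ≈ 85% of their variance, meaning the two tests rank students in largely the same order.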

Test-score Optional Admissions

Nearly 400 colleges and universities nationwide, recognizing the shortcomings of standardized tests, have eliminated or de-emphasized SAT I and ACT admissions requirements.13 These schools, which run the gamut in size, selectivity, geography, and mission, have crafted admissions policies that rely on the wealth of information available in a student’s application. Some institutions, such as the University of Texas System and Franklin & Marshall College, automatically admit applicants who meet certain high school GPA or class rank minimums. Others, such as Muhlenberg College, require applicants to submit a graded writing sample or portfolio in lieu of test scores. Still others, including Bates, Mount Holyoke, Wheaton, and Hampshire Colleges, have made the SAT I and ACT optional for all applicants.

Examples from a small private college and a large public university illustrate the different methods institutions employ when dropping or de-emphasizing test scores. At Bates College, the admissions process relies on high school record, essays, recommendations, personal interviews, and student interests to evaluate applicants. The Bates staff places particular value on the personal interview.

At UT Austin one-half of the enrolling class is admitted under the “Top 10% Law.” Those not admitted this way are evaluated using a complex set of criteria that seeks to expand traditional notions of “merit.”14 Admissions officers consider:

1. The Academic Index (AI)

o High School Record:
  - Class rank
  - Completion of UT required high school curriculum
  - Extent to which students exceed the UT required units
o SAT I/ACT score

2. The Personal Achievement Index (PAI)


o Scores on two essays
o Leadership
o Extracurricular activities
o Awards/honors
o Work experience
o Service to school or community
o Special circumstances:
  - Socio-economic status of family
  - Single parent home
  - Language spoken at home
  - Family responsibilities
  - Socio-economic status of school attended
  - Average SAT I/ACT of school attended in relation to student’s own SAT I/ACT

Note that, as the single largest campus in the nation, UT Austin receives more than 19,000 applications annually.

Regardless of size, selectivity, or geography, “test-score optional” institutions are widely pleased with the results. Many schools, such as Mount Holyoke, Wheaton, Bates, and the University of Texas, report an increase in the racial and economic diversity of applicants and enrollees after changing their test score requirements. At Muhlenberg College, which dropped test score requirements in 1996, five years of test-score optional admissions contributed to steady increases in the number of applicants of color. Dean of Admission & Financial Aid Christopher Hooker-Haring believes this can be explained in part by the SAT-optional policy inviting members of underrepresented groups to apply to the college. Indeed, that was the intention of the college’s faculty and administration when it decided to drop test score requirements; Muhlenberg “wanted to offer encouragement to groups of students who are underrepresented on many selective college campuses who often do not score well on the SAT (i.e., non-English speaking students, low income students, first generation college students, some learning disabled students, etc.).”15

Contrary to the popular myth that the SAT is a necessary tool for selecting an academically able student body, data from many of the “optional” schools also show that students admitted without regard to test scores perform as well as or better than their test-score-submitting peers. For example, since implementing the “Top 10% Law” in 1997, the University of Texas at Austin has seen:

o An increase in average freshman GPA university-wide from 1996 to 2000
o An increase in the retention rate (the percentage of freshmen returning for their sophomore year) from 87.9 percent in 1996 to 92 percent in 2000
o Comparable rates of remedial resource use among students admitted under the top 10% policy and those admitted outside it
o Top 10% students meeting or exceeding the college academic performance of non-top 10% students at every SAT/ACT score interval, even when the latter earned SAT scores 200-300 points higher16

Relying on SAT and ACT scores may seem like an efficient, inexpensive way for college admissions offices to pare down a large applicant pool. Let’s face it: these exams have become intertwined with the American definition of merit. But it doesn’t have to be that way, and the nearly 400 “test-score optional” colleges and universities have shown how. Institutions genuinely committed to equity and excellence should consider dropping test score requirements as a way to increase student diversity while maintaining or improving academic quality.

1 College Board website, http://www.collegeboard.com, 2002.

2 College Entrance Examination Board, Handbook for the SAT Program 2000-2001. New York, NY, 2000.

3 Baron, J. & Norman, M. F., “SATs, Achievement Tests, and High School Class Rank as Predictors of College Performance,” Educational and Psychological Measurement, Vol. 52, 1992, pp. 1047-1055.

4 Vars, F. & Bowen, W., in Jencks, C. & Phillips, M. (Eds.), The Black-White Test Score Gap. Washington, D.C.: Brookings Institution, 1998, pp. 457-479.

5 Hiss, W., Woodcock, E., & McGrath, A., “(At Least) 20 Questions: Academic Criteria, Personal Qualities, and College Admissions,” The Journal of College Admissions, Summer 1984, p. 12.

6 Hiss, W., “Optional SATs: Six Years Later,” Bates: The Alumni Magazine, September 1990, pp. 15-19.

7 Task Force on Standardized College Admissions Testing, “A Review of the Use of Standardized Test Scores in the Undergraduate Admissions Process at the University of Texas at Austin.” Austin, TX: University of Texas at Austin, 2002.

8 Crouse, J. & Trusheim, D., The Case Against the SAT. Chicago, IL: University of Chicago Press, 1988, pp. 56-58.

9 Adelman, C., Answers in the Tool Box: Academic Intensity, Attendance Patterns, and Bachelor’s Degree Attainment. Washington, D.C.: U.S. Department of Education, 1999.

10 Pearson, B., “Predictive Validity of the Scholastic Aptitude Test (SAT) for Hispanic Bilingual Students,” Hispanic Journal of Behavioral Sciences, Vol. 15, 1993, pp. 342-355.

11 Bowen, W. & Bok, D., The Shape of the River, pp. 74-75.

12 ACT, Inc., “ACT Prediction Research Summary Tables.” Iowa City, IA, 1998.

13 For a list of such institutions, visit the National Center for Fair & Open Testing website at http://www.fairtest.org.

14 Task Force on Standardized College Admissions Testing, 2002.

15 “Questions and Answers About Muhlenberg’s Test-Optional Policy,” Muhlenberg College.

16 Task Force on Standardized College Admissions Testing, 2002.
