The "New" SAT: A Better Test or Just a Marketing Ploy?

In June 2002, the College Board announced a series of changes to the SAT-I that were implemented in March 2005. The action primarily responded to two pressures: the University of California, the SAT's biggest customer, had threatened to drop the test, and a growing number of colleges had made test scores optional for many applicants.

None of the revisions to the test addresses the SAT-I's fundamental flaws, such as its inaccuracy, biases, and susceptibility to coaching. In fact, some of the changes may make these problems worse. Nor has the College Board acted to crack down on widespread misuses of the SAT-I, such as requiring minimum scores for admissions or scholarships. Moreover, contrary to the marketing claims accompanying the promotion of the "new" SAT-I, the revised exam will do little to improve the quality of K-12 education.


The general format and content of the "new" SAT-I are largely unchanged. It is still primarily multiple-choice and administered under strictly timed conditions. The SAT-Verbal has been renamed "Critical Reading" and includes additional short Reading Comprehension passages in place of the much-criticized verbal Analogies. The math section now contains some Algebra II questions (it formerly covered only Algebra I and geometry), and the arcane Quantitative Comparison items were removed.

Responding to criticism that the SAT-I was far removed from classroom learning, the College Board added a so-called "Writing" component. The new section is modeled on the SAT II: Writing Test, previously an optional exam with such weak predictive value that it was required by fewer than 100 colleges nationally. The SAT-I "Writing" test includes 35 minutes of multiple-choice copy-editing questions, with the remaining 25 minutes allotted for drafting one short essay. Each section is still graded on a 200 to 800 point scale, so the addition of the third section bumps up a "perfect" SAT-I score to 2400. The total testing time rises from 3 hours to 3 3/4 hours. These changes were accompanied by a cost increase of $12 per test-taker, boosting the College Board's revenues by more than $30 million a year.


College Board technical reports acknowledge that a student's high school grades and courses taken provide a better forecast of college performance than the SAT-I does. Historically, the test did a particularly poor job of predicting how females, students of color, and older test-takers would perform in college. Since the revised SAT-I has not yet been used in the admissions process, there is no research demonstrating how the test's predictive validity will be affected by the changes. However, several sources of information do provide clues about likely impacts.

With the exception of the added writing component, the "new" SAT-I closely resembles the prior test in form and content. This means it will likely remain a weak predictor of college grades and bachelor's degree attainment. Although the exam's predictive validity may increase slightly due to the addition of a third section (forecasting accuracy tends to rise with each additional test score), this improvement will probably be neither substantial nor equal across demographic groups.

College Board reports on the Test of Standard Written English (TSWE), which it administered as part of the SAT prior to 1994, and on the English Composition Achievement Test (ECT), together the prototypes for the current SAT II: Writing Test, call into question the predictive power of the "new" SAT-I writing section. This research shows that the TSWE and ECT did a particularly poor job of forecasting the college performance of African American students and students whose strongest language was not English. In fact, the TSWE was removed from the SAT in the early 1990s because the College Board recognized that it was not a useful tool in the college admissions process.


College Board research demonstrates that the SAT-I systematically underestimates the academic potential of young women, students whose first language is not English, and applicants over 25. This is unlikely to change with the "new" SAT-I. The underlying causes of the score gaps, including the test's multiple-choice format, highly speeded pace, and rewards for strategic guessing, remain in place. The gender gap may be reduced with the addition of the essay question, since females tend to score slightly higher than males on the SAT II: Writing Test, but it will probably not be completely eliminated.

However, the score gap for students from non-English backgrounds will likely grow larger because of the added challenge posed by the high-pressure, timed conditions of the essay question. On the SAT II: Writing Test, African American and Latino test-takers scored on average 80-100 points lower than White students. In fact, the SAT II: Writing Test had one of the largest Black-White test score gaps among the twelve most popular SAT II tests, second only to the SAT II: Literature Test. These gaps will likely carry over to the "new" SAT-I, given its similarity in form and content. Moreover, the College Board has given no indication that it will act to stop test score misuses, such as minimum score cut-offs, that have a particularly harmful impact on the opportunities available to African Americans, Latinos, low-income students, and students with special needs.


While there is still debate over how much test prep can boost students’ scores, the College Board has backed away from its historic claim that the SAT-I is not “coachable” and now sells its own test preparation materials. Regardless of how much coaching can increase SAT results on average, it can substantially enhance some students’ scores, thus further tilting the college admissions playing field.

The coaching industry experienced a huge surge in business from the introduction of the "new" SAT-I. Firms such as Kaplan and the Princeton Review know that any change in the admissions process feeds student anxiety. They say the "new" SAT-I is just as "coachable" as the old test, if not more so. One major test preparation company touted its success in training test-takers to raise their scores on the SAT II: Writing Test as evidence that the mandatory writing section on the SAT-I increases the exam's coachability. The ability of test preparation to boost students' scores skews the college application process in favor of students from higher-income families who can afford the $800 or more that an intensive course costs.


A proposed SAT-I essay section was scrapped in the early 1990s because of potential logistical problems in grading it and grave equity questions. These concerns have yet to be resolved.

Each essay is read in less than three minutes and rated on a "holistic" 1-6 scale by two readers. If the readers' scores differ by more than 2 points, a third person reviews the essay to resolve the discrepancy. Readers are supposed to evaluate writing skills using a vague set of criteria, including such factors as variety in sentence structure and range of vocabulary. This means test-takers can earn high marks for long-winded, complicated sentences with plenty of "10-cent" words just as easily as they can for concise, interesting writing. The criteria also include measures such as "effectively and insightfully" responding to the writing prompt with "well organized and fully developed" arguments. While these may seem to be worthwhile writing skills, they open the exam up to subjective judgments by essay readers.


There is an old adage in the measurement profession: “What is tested becomes what is taught!” No matter how the SAT-I is altered, there will be strong pressure on teachers to drill their students on the narrow subject matter and formats it covers. One College Board study estimated that at least half of the high schools in the U.S. already offer SAT prep. Such practices increased as the revised SAT-I was phased in.

Test-prep drills come at the expense of more worthwhile learning opportunities. While some have praised the "new" SAT-I as providing an incentive for schools to teach writing skills, it will actually encourage educators to focus on how to write formulaic five-paragraph essays rather than on developing students' writing skills more broadly. In addition, almost every state assessment system already includes a writing test, many of which allow students more time than the twenty-five-minute limit on the SAT-I essay. Therefore, no additional "incentive" is needed to encourage teachers to focus on the limited writing skills covered by these assessments.


More than 700 accredited, bachelor's degree-granting institutions nationwide do not consider SAT-I or ACT scores before making admissions offers to substantial portions of their entering classes (see this list here). They recognize that there is ample information in applicants' files to make superior admissions decisions without the distortions caused by SAT-I scores.

The “test-score optional” list includes some of the most selective private colleges in the nation, such as Bates, Bowdoin, and Mount Holyoke, as well as large public campuses like the University of Texas at Austin. Rather than accepting the College Board’s promotional claims about the “new” SAT-I, colleges and universities should look with a critical eye at both the “new” and “old” exams and follow the lead of “test-score optional” institutions. Already several colleges, including Sarah Lawrence, Holy Cross, and Saint Lawrence University, have made precisely that decision after the “new” SAT was introduced.

MORE INFORMATION: Links to additional fact sheets on the SAT and related topics here.

View FairTest's other SAT fact sheets: SAT: Questions and Answers and The SAT: A Faulty Instrument For Predicting College Success.