SAT (2016 – 2023)

What is the SAT?

The SAT is this nation’s oldest, most widely used – and misused – college entrance exam. Originally named the Scholastic Aptitude Test, the SAT has been renamed multiple times in attempts to reflect what the test actually assesses. Currently, SAT is an orphan acronym that no longer stands for anything (perhaps reflecting quite well what the test measures).

What does the SAT claim to measure?

The SAT was designed to predict first-year college grades – it has not historically been researched or validated to predict grades beyond the freshman year. Little information exists about the relationship between the test and graduation rates, pursuit of a graduate degree, or its usefulness for placement or advising purposes. This makes sense: the further beyond the test predictions are made, the more factors beyond a student’s knowledge or performance on a test influence those outcomes.

Further, according to research done by the tests’ manufacturers, high school grades on their own are consistently better predictors of college performance than the SAT. Combining test scores with grades adds only a marginal increase to predictions of first-year performance.

The College Board (and, in the past, ETS, which developed the SAT until 2015) conducts validity studies periodically to quantify the relationship between test scores and first-year college grades, generally expressed as a correlation coefficient (r).

The College Board’s latest validity study claims the SAT has a correlation of .51 with freshman GPA (FYGPA).

This number is deceptive, however. To determine how much of the difference in first-year grades between students the SAT really predicts, the correlation coefficient must be multiplied by itself. The result, called r squared, describes the share of the variation among college freshman grades that the test explains. Thus, the predictive ability (r squared) of the SAT is just .26, meaning the test explains only 26% of the variation in freshman grades. With a correlation of .54, high school grades alone do a better job, explaining about 29% of the variance in first-year college performance.
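The arithmetic above can be checked in a few lines. This is an illustrative sketch only, using the College Board figures cited in this fact sheet; the variable names are our own.

```python
# Squaring a correlation coefficient (r) gives r-squared,
# the share of variance in the outcome the predictor explains.

sat_r = 0.51     # SAT vs. first-year college GPA (College Board figure)
hsgpa_r = 0.54   # high school GPA vs. first-year college GPA

sat_r2 = sat_r ** 2      # 0.51 * 0.51 = 0.2601, i.e. about 26% of variance
hsgpa_r2 = hsgpa_r ** 2  # 0.54 * 0.54 = 0.2916, i.e. about 29% of variance

print(f"SAT r-squared:     {sat_r2:.2f}")   # 0.26
print(f"HS GPA r-squared:  {hsgpa_r2:.2f}") # 0.29
```

The gap between r (.51) and r squared (.26) is why quoting the raw correlation overstates the test's predictive power.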

Does the SAT accurately predict success?

If one looks beyond college grades, information from The Case Against the SAT by James Crouse and Dale Trusheim actually points to the SAT I’s poor utility in forecasting long-term success. Data they analyzed demonstrated that using the high school record alone to predict who would complete a bachelor’s degree resulted in “correct” admissions decisions 73.4% of the time, while using the SAT and high school GPA forecast “correct” admissions in 72.2% of the cases.

More from FairTest about SAT validity: 2007 SAT-I; Comparing the SAT-I, ACT, and SAT IIs

Has the SAT Changed?

Since its inception almost 100 years ago, the SAT has been revised, redesigned, reformatted, and even recentered. There have been at least 22 significant revisions to the exam’s content, format, and scoring. With each revision, however, the College Board has claimed the test is as representative of what students are learning in the classroom as the previous version.

The most recent version of the SAT (which changed significantly in 1994, 2005, and 2016, and changed again in 2023) is composed of two sections, “Evidence-Based Reading and Writing” and “Mathematics,” each scored on a 200-800 point scale. The 154 questions are nearly all multiple-choice, except for 13 math questions that require students to “grid in” the answer.

What impact does the SAT have on educational equity?

The SAT consistently under-predicts the performance of females in college and over-predicts the performance of males. Although females earn higher grades in high school and college, their SAT scores were 26 points lower in 2006. College Board research has shown that both the Critical Reading and Math portions of the test under-predict girls’ college performance. A 1994 ETS study found that, on average, males scored 33 points higher on the SAT-Math than females who earned the same grades in the same college math courses. Analyses of SAT gender bias cite several causes, including the test’s emphasis on speed over sustained reasoning and its multiple-choice format. Mathematics tests in other countries that require solutions to long problems appeared unbiased with respect to gender.

African American, Latino, new Asian immigrant and many other minority test-takers score significantly lower than white students. Rigid use of SATs for admissions will produce freshman classes with very few minorities and with no appreciable gain in academic quality. The SATs are very effective at eliminating academically promising low-income and under-represented minority students who apply with strong academic records but relatively low SAT scores. Colleges that have made the SAT optional report that their applicant pools are more diverse and that there has been no drop off in academic quality.

How is the SAT misused?

Several states impose SAT minimum score requirements on students hoping to qualify for taxpayer-funded scholarships. Using cut-off scores for such high-stakes decisions is a clear violation of not only the test-makers’ guidelines but also the joint standards for educational testing provided by the leading education (AERA), psychology (APA), and measurement (NCME) associations. This practice disproportionately impacts minority students, who as a group tend to score lower than white students on the SAT. As a result, these students lose out on millions of dollars in financial assistance.

National Merit Scholarships use Preliminary SAT scores as the sole criterion to select semifinalists. The resultant pool has historically been predominantly male because boys score higher on the PSAT even though girls earn higher grades in high school (and college). In 1993, FairTest filed a complaint with the Office for Civil Rights (OCR) charging the test-makers with illegally assisting gender bias. As part of a settlement with OCR, ETS and the College Board agreed to add a new multiple-choice “writing” component to the PSAT. This simple change in test format significantly increased the percentage of National Merit semifinalists who are female, but girls are still cheated out of a fair share of awards by bias in the unreformed portions of the exam.

Gifted and Talented Programs:
Many special programs for the “gifted and talented,” such as the Johns Hopkins Center for the Advancement of Academically Talented Youth, use the SAT or similar tests to select participants. Not surprisingly, girls and minorities are often underrepresented in these accelerated programs.

Is the SAT coachable?

While there is still debate over how much test prep can boost students’ scores, the College Board has backed away from its historic claim that the SAT is not “coachable” and now endorses, provides, and sells test preparation materials. Regardless of how much coaching can increase SAT results on average, it can substantially enhance some students’ scores, thus further tilting the college admissions playing field.

The coaching industry experienced a huge surge in business from the introduction of “new” SATs. Firms such as Kaplan and the Princeton Review know that any change in the admissions process feeds student anxiety. They say a new SAT is just as “coachable” as the old test, if not more so. One major test preparation company touted its success in training test-takers to raise their scores on the SAT II: Writing Test as evidence that the mandatory writing section on the SAT-I increases the exam’s coachability. The ability of test preparation to boost students’ scores skews the college application process in favor of students from higher-income families that can afford the $800 or more that an intensive course costs.

Read FairTest’s other SAT Fact sheets

The SAT: A Faulty Instrument For Predicting College Success

The “New” SAT: A Better Test or Just a Marketing Tool?

Further Reading