Throughout my 35 years as a college admissions professional, the value of the SAT (and other standardized testing used in college admissions) has been hotly debated. On the one hand, we hear impassioned condemnations of the SAT from critics bemoaning the biases in the exam, the pressure it places on our high school students, and its questionable predictive value. On the other hand, we hear dire warnings from other quarters about the need for “accountability” and for objective measures to assess preparation for and predict performance in college.
I subscribe to the principle that led the National Association for College Admission Counseling (NACAC), in its Report of the Commission on the Use of Standardized Tests in Undergraduate Admission, to recommend that all colleges and universities conduct their own research about the predictive value of standardized tests for their institutions. The fact that a test shows meaningful correlations with academic performance at one university does not mean it will hold anything like such a correlation at another college. While I do not assume that the SAT is a useless measure for all colleges, I do lament that there remain colleges and universities that require this exam without knowing why.
The SAT was not the first college admission test; the College Entrance Examination Board developed its earliest exams at the beginning of the 20th century. Those first exams were essay tests, intended to bring more uniformity to the admission requirements at prestigious colleges and to the curricula at the various prep schools that fed them. The Scholastic Aptitude Test, as the SAT was originally known, was not widely administered as a college admission tool until the 1940s, and it remains primarily a multiple-choice exam today. The SAT was designed for one overriding purpose: to help predict the first-year GPA (FYGPA) of college students. The College Board has begun showing that SAT scores can help predict retention, persistence, and graduation rates as well, but its original purpose was to predict FYGPA.
At most of the seven institutions where I have worked (including two Ivy League universities and five liberal arts colleges, three private and two public), considering SAT scores helped predict FYGPA better than if we did not use the scores in our assessments. In most of those schools, it simply was not as good a predictor as high school GPA (HSGPA), especially a recalculated HSGPA that included only grades in academic subject courses. The College Board itself reports that HSGPA is a better single predictor nationally.
This is not true, however, for all colleges and universities. Moreover, no college is forced to choose one single predictor on which to base its assessments of applicants’ academic potential or achievement. At some very selective institutions, standardized tests have proven to be a better predictor than HSGPA. SAT II Subject tests have been identified as such in at least some Ivy League universities, AP and IB test scores have been found to be best predictors at some other institutions of higher education, and I would not be surprised if the regular SAT were shown to be the best single predictor at a few other colleges. At most colleges and universities, though, the HSGPA remains the best single predictor.
Colleges practicing any degree of selectivity, however, will also fold into their evaluation of a candidate the quality of the high school academic program (the number and level of academic subject courses taken in high school). A number of students with respectable HSGPAs may be deemed academic risks if they have never taken a challenging academic course in high school. This is true even for students at high schools which do not offer especially challenging courses. In order to help take such differences into account, admissions officers tend to evaluate an applicant’s academic program in the context of the opportunities made available to her at the high school.
One could argue that because HSGPA is the best single predictor of college performance nationally, no college should include other factors in its academic assessment of applicants, but it would be a mistake to ignore other factors when they help give a fuller picture of the applicant’s academic preparation and ability. Most observers will surely agree, for example, that the number and level of academic courses taken in high school is a useful and sensible credential to consider in the college admission process. Is this unfair? In a way, it is. But the fact that some of our youth are not given rigorous academic preparation for college at their high schools does not alter the fact that this may make succeeding in college more difficult for them. In other words, yes, it is unfair that some students have fewer academic opportunities than others. This is a problem that tests reveal, and it should be addressed rather than ignored or swept under the rug.
The fact that average SAT scores differ noticeably for certain groups—particularly socio-economic and ethnic groups—has led many to conclude that the test is racially and otherwise biased. Some of the differences in scores can be attributed to the disadvantages that some groups have historically experienced. Nevertheless, despite some studies attempting to reassure us that the SAT and other standardized testing are not racially biased, it is unclear that any test written in a language (any language using words) can shed itself entirely of the biases of the cultures that speak that language. More importantly, the same differences in group performance can be seen in virtually every other academic measure available to an admission office: other standardized tests, rank-in-class, quality of academic program in high school, and HSGPA.
“Bias,” however, is also a technical term reserved for tests on which the same test score results in different outcomes for different groups. In this regard, too, the SAT is biased, though contrary to widespread belief, the exam overpredicts performance for Hispanic and African-American students. In other words, Hispanic students who achieve a particular score on one section of the exam will, on average, earn a lower FYGPA than White and Asian students with the identical score. Most other academic measures, however, are at least equally biased in this regard.
Even the HSGPA shows many of the differences in group performance that we find in the SAT. Should we, therefore, ignore HSGPA in our assessment of applicants? In addition, while many colleges have made strides in this regard, most college curricula and exams strike me as at least as culturally biased as the SAT. In fact, for all of its blemishes, one of the virtues of the SAT is that it is probably the most studied exam in the world. Its faults and limitations have been publicized and debated widely for decades and are well-known to the college admissions profession.
The more important question remains: does the SAT help predict FYGPA? The answer depends on the college or university. It would be surprising if there were single measures that are near-perfect predictors of FYGPA at any college or university—human behavior is simply too complex to be encapsulated in one measurement. As mentioned earlier, many correlations and regression analyses have been run for different factors in relation to FYGPA. In most of the studies I have reviewed, HSGPA was the single best predictor. In all of them, the SAT (all or part of it) did add enough to the predictive modeling to make it useful to include in the evaluation of applicants. At one of the schools where I worked, the combination of HSGPA and SAT-Verbal (this was before the new Writing test was introduced) was so powerful that it all but obliterated the statistical benefit of considering any other academic factors. This was worth knowing as we reviewed applications, particularly from borderline applicants, but it did not mean that we ignored the quality of the high school program or other academic measures. If I had worked at a school where the correlations showed no added predictive benefit to including the SAT in our evaluation of applicants, I would have been the first to propose eliminating the test as an admission requirement.
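The kind of institutional research described above is often an incremental-validity check: fit a regression of FYGPA on HSGPA alone, then on HSGPA plus SAT, and compare how much variance each model explains. Below is a minimal, self-contained sketch of that comparison. All of the data are synthetic and the coefficients are made up for illustration; it is not any institution’s actual model.

```python
import random

def ols_r_squared(rows, y):
    """R^2 of an ordinary least squares fit of y on the given predictor
    columns (with an intercept), via the normal equations."""
    n, k = len(y), len(rows[0]) + 1
    X = [[1.0] + list(r) for r in rows]
    # Normal equations: A @ beta = b, where A = X'X and b = X'y
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    y_bar = sum(y) / n
    ss_tot = sum((v - y_bar) ** 2 for v in y)
    ss_res = sum((y[i] - sum(beta[j] * X[i][j] for j in range(k))) ** 2 for i in range(n))
    return 1 - ss_res / ss_tot

random.seed(0)
n = 1000
hsgpa = [random.gauss(3.2, 0.4) for _ in range(n)]
sat = [random.gauss(1100, 150) for _ in range(n)]
# Synthetic FYGPA: driven mostly by HSGPA, with a smaller SAT contribution plus noise
fygpa = [0.6 * g + 0.001 * s + random.gauss(0, 0.3) for g, s in zip(hsgpa, sat)]

r2_hsgpa = ols_r_squared([(g,) for g in hsgpa], fygpa)
r2_both = ols_r_squared(list(zip(hsgpa, sat)), fygpa)
print(f"R^2, HSGPA alone: {r2_hsgpa:.3f}")
print(f"R^2, HSGPA + SAT: {r2_both:.3f}")
```

If the second R² is meaningfully higher than the first, the SAT adds predictive value at that (hypothetical) institution; if the two are essentially equal, the essay’s own logic suggests the requirement is hard to justify there.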
One of the private liberal arts colleges where I worked had adopted an SAT-optional policy for four years, but by the time I arrived there, the faculty had voted to reinstate the standardized testing requirement for all applicants. Two other private liberal arts colleges where I worked adopted a policy of SAT-optional after my departure. Considering SAT scores at those schools enabled them to make better predictions of FYGPA, but one could not deny some of the benefits that other colleges were experiencing from adopting the SAT-optional policy: a bigger and more diverse applicant pool and some increase in the size and diversity of the incoming class. Both of these colleges remain SAT-optional today. One requires applicants who choose not to submit standardized test scores to submit a graded paper from a high school course instead; the other requires students to submit SAT scores if they wish to be considered for academic merit scholarships. At yet a fourth liberal arts college where I worked–this one with a state affiliation–standardized tests were ignored by the admission office but used widely for identifying students who needed to be placed in remedial courses.
As even my limited experience illustrates, many institutions that have chosen to avoid the negative repercussions of requiring SAT scores, and are comfortable making admission decisions without them, still see some predictive value in the exam. Clearly, other considerations are at play beyond the fundamental question of whether the SAT helps to predict performance at a particular college. (And I do not mean to denigrate these other considerations, which generally align with the colleges’ missions.)
It is ironic, though, that in advocating for a more ‘holistic’ review of applicants, some colleges have chosen to ignore one major credential (the SAT) because it is less useful in evaluating some students than others. The same could be said of virtually every academic credential presented in an application, yet the SAT is the only one admission officers fear will force their hand against their will. The assumption seems to be that the SAT can only be used to bolster the academic profile of the incoming class, and that colleges will therefore be unwilling to overlook modest SAT scores even when the rest of an application depicts a promising candidate.
I imagine this is the case at some colleges and universities. My experience at selective schools, however, was that the admissions officers sought to identify patterns of strength and weakness in application folders. This generally meant that particular credentials were interpreted in the context of the other credentials. If the only weak part of an application was an SAT score, it did not carry much weight in the final assessment. Similarly, if the only strong credential in an application was a high SAT score, it carried little weight in the final decision. Too many students are fond of attributing a denial at one college to an overemphasis on SAT scores in the selection process of that school. Unless the college used a strict formula for selecting students, however, and that formula gave significant weight to the SAT, the decision probably rested on a pattern of credentials of concern in the application and/or on the sheer depth of the applicant pool at that school. I cannot deny, however, that some colleges and universities are nervous about what admitting students with especially low SAT scores might do to their published class profile, nor that others simply misuse the SAT in their assessments, the subject of a subsequent section of this article.
While there is little question that the SAT generates a good deal of anxiety among students, those who cannot handle an academic challenge for an extended period will likely have trouble succeeding in college. When this topic was discussed at a National SAT Committee meeting during my tenure as a member of that group, the undergraduate students on the committee were the most vociferous about this point. Essentially, they explained, as difficult and anxiety-provoking as the SAT seemed to them in high school, they did not think students who found that pressure too overwhelming could survive in college. Worth noting, too, is that we are preparing our students to compete in the global marketplace, and students in many different countries (China, for example) are put through and taught (forced?) to cope with far more rigorous forms of standardized testing than students in the U.S.
Helping students identify the material to be covered by an exam, its structure, timing, and known strategies (such as when to guess on a multiple-choice question and when not) is always a good idea. This kind of guidance will often reduce anxiety and permit students to spend less time figuring out the contours of the test and more time answering questions. As a result, this kind of guidance will often result in better performance on the test. One of the most significant findings of the NACAC Testing Commission Report, however, was that:
“…all available academic research suggests that an overall point increase of between 20 and 30 points on the SAT appears to be standard for test preparation activities. This modest gain (on the old 1600 scale) is considerably less than the 100 point or more gains that are often accepted as conventional wisdom.”
The College Board itself makes available materials that contain much of the guidance students need. In addition, typically, scores increase just from taking the exam a second time. Moreover, anyone who devotes a few hours per week over several months to reading academic material, looking up the unknown words in a dictionary, and practicing basic math problems of the sort to be covered on the SAT will probably see an increase in scores. Though the college admissions profession has been accused of pressuring students to take SAT prep courses instead of academic electives, I have yet to learn personally of a single instance of an admission officer actually endorsing this choice.
There have been documented instances, however, of misuses of the SAT by colleges and universities. Why the general public chooses to admonish the College Board instead of the guilty colleges for such abuses remains a mystery to me, but these practices have really fueled the arguments of FairTest and other groups who earnestly and vigorously object to the use of standardized testing for college admissions.
The most common misuse is the adoption of strict cut-off scores in the selection process. It is important to understand that the College Board makes special efforts to explain that an SAT score is an imprecise measure. Specifically, it explains, for example, that a 540 score in the Critical Reading exam represents achievement anywhere from a low of 500 to a high of 570. Yet some schools will require a particular minimum score for consideration for admission, or for some program, or for a merit scholarship. Large state systems, often under legislative command, are the most likely to use such cut-off scores for academic merit awards, but many other colleges and universities have a published or de facto set of cut-off scores in their admission process. Eliminating this practice is one of the recommendations of the NACAC Testing Commission.
Most people who advocate dismissing SAT scores as irrelevant balk when I ask them this question: do you believe that, for two students who have taken the SAT more than once, it is at all significant if one cannot crack 300 on any of the exams (all scored on a scale of 200 to 800) while the other shows scores all in the 700s? Most college admissions officers know that it is far easier to get a low score that is not indicative of ability than to get a high score by accident. Nevertheless, huge differences in scores typically signal some difference in level of ability and preparedness for college. Given the range of achievement each SAT score measures, however, it is difficult to argue that modest differences in scores are significant, yet the use of cut-off scores sends the signal that even a ten-point shortfall in a score is necessarily significant.
Critics are fond of pointing out that a student is more than a test score. Indeed! A student is also more than a GPA or a compilation of courses and grades. That is not a reason to ignore any of these credentials. Let me clarify, as well, that colleges are keenly aware that non-cognitive factors influence academic performance and that most make efforts–through consideration of athletic and extra-curricular activities, essays, interview reports, and recommendations–to assess qualities such as motivation, discipline, and resourcefulness. To my knowledge, no standardized tests are widely used in admissions to evaluate these non-cognitive qualities. (The Posse Foundation, however, has developed a dynamic assessment tool to help identify non-cognitive qualities, and many schools will admit Posse students based in part on their performance in that exercise.)
In addition, insofar as many colleges and universities seek to assess an applicant’s potential contributions to campus life and to society at large, they should and typically do consider other non-cognitive factors in the selection process–and they do so without the benefit of a standardized test. The SAT does not attempt to measure motivation or altruism. At most, it has recently claimed to help predict who will graduate from a college, not who will make the best spouses or citizens after graduation.
There are good reasons for many schools to require SAT scores for admission. None of the common objections alter this fact for institutions at which the SAT does help to predict FYGPA. The misuse of standardized test scores is to be discouraged, but the fact that some colleges misuse SAT scores should not deprive others of the chance to use them responsibly. Recent developments, though, may challenge the usefulness of the SAT further.
The SAT is by no means the only standardized test inspiring debate. In a couple of years, at least 45 of the 50 states will have adopted the Common Core State Standards for their K-12 curricula. This is a comprehensive effort by state governors and their educational officers to identify specific English Language and Math skills or competencies that students will need to master at some minimum level to progress toward high school graduation. We are told that these skills will make students ready for the workplace and for college. Most importantly for the purposes of this discussion, only two testing consortia (the Partnership for Assessment of Readiness for College and Careers and the Smarter Balanced Assessment Consortium) have developed the standardized tests that will be administered to show that students have mastered one skill and are ready to move on to the next. These are potentially very significant developments for the country’s educational landscape and for college admissions officers.
First, the standards. High schools will likely align their grades to the skills of the common core standards. Many will undoubtedly include performance on the common core assessment tests in their grading (though the common core assessments will not fit neatly into the 4.0 or 100-pt. scales traditionally used by schools). This means that colleges everywhere may need to reassess how they interpret the GPAs they have traditionally seen from high schools. It will take colleges some time to evaluate whether the predictive value of the HSGPA under the common core grading is comparable to the predictive value of the traditional HSGPAs.
More importantly, because there will be only two common core exams for most of the country, college admission offices may begin considering state assessments more explicitly in their evaluation of applicants. Currently, there are so many different state competency exams being administered that admissions offices tend to ignore performance on those exams altogether. One possible unintended consequence of all this is that those colleges that currently require SAT scores for admission will likely rely even more heavily on them during the time that colleges study the correlations between the new “grades” and common core assessments and performance at their institutions.
If, over time, however, colleges learn that common core assessment tests predict performance at their institutions as well as or better than the SAT, then the continued usefulness of the latter will come under even greater scrutiny. The College Board has appointed a new president, David Coleman, who happens to be one of the architects of the Common Core State Standards and, presumably, is quite familiar with the standardized tests that will be used to measure whether students meet those standards. Mr. Coleman’s first open letter to the membership of his organization signaled his desire to change the SAT. Given his familiarity with the new core standards, Mr. Coleman may develop a new SAT that complements the many standardized tests about to be used to measure achievement in each standard. If not, the SAT itself could be seriously threatened, unless the new measures for the Common Core State Standards prove to be off the mark.
Roberto Noya, CEO &
Educational Consultant
(512) 571-3003
3609 Cheyenne Street
Round Rock, TX 78665