The SAT is the one test most Stuyvesant students do not cram for; instead, they dedicate hours after school and on weekends to working through problems in instructional booklets and on practice tests.
The College Board sponsors the SAT and decides how the test will be constructed and administered. The test itself is developed by the Educational Testing Service (ETS), which also produces numerous standardized examinations, such as the Advanced Placement (AP) tests and College-Level Examination Program (CLEP) tests.
According to the College Board Web site, the SAT is an “objective measure of a student’s college readiness […and] used with GPAs and high school transcripts, SAT scores allow colleges to fairly compare applicants.”
The SAT and ACT are both widely accepted college entrance exams. However, the ACT focuses less on vocabulary than the SAT, and includes science and optional writing sections. Unlike the SAT, which is scored out of 2400 points, the ACT is scored out of 36 points, and there is no penalty for marking incorrect answers on the multiple-choice part of the test.
The SAT officially takes three hours and 45 minutes to complete. It consists of 10 separately timed sections in critical reading, mathematics, and writing. One of the ten sections is not counted in students’ final scores because the College Board uses its questions to help develop future tests.
“The SAT is created by professional test developers who are experts in a specific field of study. These experts are divided into committees to help write particular questions for a certain topic,” Director of External and Media Relations at ETS Thomas Ewing said. “The average SAT takes approximately 18 months to put together, due to the long process of creating, reviewing, and analyzing questions before officially approving them.”
“Before we actually write a test question, there are a lot of things we need to think about: who is the item for, what content do we want to measure, and at what level do we want to try to measure it?” ETS General Manager of Assessment Development Patricia Klag said.
For each of the critical reading, writing, and mathematics sections, there are different factors test writers must keep in mind when creating questions.
The SAT Critical Reading sections are designed to test skills in vocabulary, reasoning, and analysis. The section does this through two distinct question types: Sentence Completion and Reading Comprehension.
“Many students often struggle with sentence completions due to the great amount of vocabulary words they have to memorize. However, this [type of] question is not solely to test one’s ability to remember the definitions of words, but to also have the student exercise his or her reasoning skills. A good test-taker will be able to utilize one’s knowledge of roots, prefixes or suffixes, as well as the words he or she may already know, to pick the correct answer,” Klag said.
However, some students are skeptical about the efficacy of such testing methods. “All those SAT words you have to memorize—you can be an avid reader and a skilled writer and still not know a lot of them. To so many people, this test represents the sum of all their efforts in high school and their one shot at success in life. Of course, it really doesn’t say much about your intelligence or your value as a person that you were able to spend your nights memorizing vocabulary instead of sleeping, but that’s what the College Board is implicitly telling the high schoolers of this country,” sophomore Jane Argodale said.
Others view the SAT more positively and believe that the exam expands on what a test-taker already knows. “Especially at Stuyvesant, I think many students study for the SAT not because they actually need the practice, but for the sake of feeling prepared beforehand,” senior Charles Bagley said. “Though I did study for the SAT, after having taken it, I think I would have still done fine based on my prior knowledge and common sense rather than the time I spent prepping.”
According to Executive Director of Communications at the College Board Kathleen Steinberg, choosing reading passages for the SAT is based not only on finding texts of appropriate lengths that allow a reader to understand a passage’s main idea and analyze what is provided, but also on finding texts that require readers to infer information. “There aren’t specific requirements for a ‘perfect passage,’ but one that does incorporate the basic qualities often complements suitable questions for the SAT,” Steinberg said in an e-mail interview.
The Mathematics sections on the SAT cover material up to, and including, the first semester of Algebra II. The basic breakdown of the topics is: numbers and operations (20-25%); algebra and functions (35-40%); geometry and measurement (25-30%); and data analysis, statistics, and probability (10-15%).
“There are some students out there who are simply just very good test-takers,” Steinberg said. “This is why the mathematics sections include both multiple-choice and written questions, which allow us to test a student’s ability to problem-solve and figure things out, rather than just manipulate and deduce answer choices.”
“The written-response questions in the math sections of the SAT are often trickier and require more effort in finding the answer than the multiple-choice questions,” Bagley said. “The whole purpose of this exam is to assess the strengths and weaknesses of students, and the written math questions distinguish those who are stronger in math from those who aren’t.”
The Writing sections include an essay prompt and multiple-choice questions involving improving sentences and paragraphs, and identifying sentence errors. According to the College Board Web site, “The essay gives you an opportunity to show how effectively you can develop and express ideas.” Two readers, usually experienced high school or college teachers, independently score each essay on a scale from one through six. The two grades are then combined. If the two readers’ scores differ by more than one point, a third reader scores the essay.
“The writing prompt is a very important aspect of the SAT,” Steinberg said. “Like the Reading Comprehension passages, it should give the student a substantial amount of information to form a thesis, yet also be ambiguous enough for the test-taker to also include his or her own personal writing style. The prompts are often questions regarding quotes by known people or ideas that give the student a chance to persuade the reader.”
However, some students are concerned that the writing prompts place certain test-takers ahead of others. “Not all students get the same writing prompt since the College Board gives different versions of the SAT on different days. While this is practical, it also seems unfair, because one student may be more familiar with a topic than another,” sophomore Julia Mendelsohn said.
“I heard about last year’s writing prompt that asked whether or not reality television is harmful in misleading people,” sophomore Michelle Lin said. “I know plenty of kids who don’t watch reality television as well as plenty of kids that do, and this disparity shows that the SAT doesn’t really give equal opportunities to all test-takers.”
After writing a test item or completing a section of the SAT, ETS conducts multiple levels of content review. When a test developer finishes a reading comprehension set, for instance, the passage and the related questions are given to two other ETS experts, who recommend alterations. Following the two content reviews, the initial test developer works to improve the test questions based on the recommendations made.
After editing, the test items are assembled into a pre-test taken by approximately 1,000 test-takers at ETS, who answer questions from the entire test. The results are sent to statistical-analysis department experts, who then examine whether a wide range of students answered certain questions incorrectly. According to the “ETS’s Assessment Development Area” video on the organization’s Web site, less-able examinees as well as more-able examinees are equally likely to pick the incorrect option when faced with poorly constructed questions.
The test questions are further scrutinized in a Differential Functioning Analysis that looks to see whether different populations (based on gender or ethnicity) are more likely to answer a particular question correctly. “If, for instance, males significantly out-perform females for a particular question, this is a red flag for us to take the item to content experts for further review,” ETS Psychometric Manager Gautam Puhan said in the video.
“Once we’ve looked at the statistical analyses of the items and that we’ve checked to make sure they’re working the way we intend them to work, all the items will be put into a pool and are ready to be put on a future SAT,” Klag said. “After the test is assembled, our [assessment experts] review the test as a whole. They look to see that there is not an item here that is too similar to an item later on in the test, or that an item does not cue in another question.”
Finally, ETS certifies the test, allowing it to be printed and shipped out to the testing centers. The completed exams are also sent back to ETS to be graded.
Despite the long process of creating and reviewing the SAT, many Stuyvesant students continue to view the test in a negative light.
“While the SAT IIs actually test one’s knowledge of a specific subject, the SAT seems to be just an exam that determines how good of a test-taker someone is. It’s more about being able to sit through a four-hour exam and managing to do well, rather than actually mastering a topic,” junior Shreya Kalva said.
“The vast majority of college students don’t major in English or mathematics, yet only questions regarding these two topics are on the SAT,” Mendelsohn said. “At the same time, I believe that there would be no other conceivable testing method of examining all that a college applicant may know.”
Though students have different opinions about the SAT, they cannot deny that it is a lengthy exam, and the arduous process of creating it is even longer.