A version of this first appeared as a guest post on the Happy Schools Blog. It is expanded and revised here for a more in-depth perspective. It doesn’t reflect the official views of my school; it is simply my professional opinion.
Prospective graduate and business school applicants can feel apprehensive and uncertain when the standardized tests they have grown accustomed to change. It is unclear what the impact of format and scoring changes will be, and how admissions committees will interpret the new scores.
I’ll start with the conclusion: take the GRE version you think you will perform best on. Graduate and business school admissions committees will not penalize applicants for taking one version or the other and have a duty to interpret scores fairly, regardless of version.
Wait a minute, the GRE is changing???
The GRE General Test is being revised and will change in August 2011. These changes will make the exam more true to life in both the verbal and quantitative sections, and should better assess a test taker’s ability than the current test instrument. Changes include question types, added test taker flexibility, and a new scoring scale, among others. Educational Testing Service does a nice job describing the changes online at www.ets.org/gre/revised_general/know.
Which one should you take?
If you have been preparing for the current version and are satisfied with your progress (as measured by sample tests and comfort level), then take the current version. If you haven’t started yet and weren’t planning to take the GRE until late next year, then research both versions (and take a full-length sample test for each), commit your preparation time to one version, and schedule your efforts accordingly. I usually recommend students start with a three-month preparation window and expand or contract it according to how smoothly their preparation is going.
Why will adcoms like the revised GRE?
From the admissions committee perspective, there are several positive effects brought about by the changes (as well as a few negatives). For the most part, I expect the revised GRE to better assess candidates’ verbal and quantitative abilities, in ways similar to how those skills are used in academic, professional, and everyday life. I like the changes overall.
Preview and review within a section, “mark and review”, and change/edit answers
I believe test takers should not be held hostage to sequential question administration, forced either to guess in order to proceed or to give up the chance to make a correction. Although I do like adaptive exams, being able to return to earlier questions within a section and change answers is a welcome change. Hopefully this lowers the stress level for test takers.
From the adcom perspective, this should put an end to applicants trying to explain weak scores by saying something like “I missed one question early on and it threw me off for the rest of the section.” That sounds like an excuse and poor preparation, even though it may genuinely have been the case.
Verbal Reasoning Section
This section will provide better mechanisms to assess ability according to how language is really used and the challenges you can expect to face as a student after enrollment. Over time, adcoms may come to view this part of the test as more valid than the older version. Because this section is more true to life, we will likely have more confidence in these scores.
Quantitative Reasoning Section
The on-screen calculator brings the GRE up to 2010, since we rarely ever face a situation where we have to do math using only pencil and paper. This change aligns the test environment more closely with the regular environment you will face during school and in your career. It is a good change, but it will take practice for candidates to get used to the on-screen interface. Adcoms will not want to hear applicants say they had trouble with the on-screen calculator. Practice, practice, practice!
The question format changes also portend an increased difficulty level and may help spread out the score cluster we see at the top of the current GRE Q section. Yes, I expect it will be harder to earn perfect scores on this section. I also believe this section will be a better indicator of one’s true ability than it is now.
Analytical Writing Measure
This section replaces the former AWA section, and it will take some use before adcoms develop a stronger sense of how much weight to put on its scores. In other words, only time will tell how we end up using it. Some adcoms disregard this section entirely, while others consider it vital, and still others use it in conjunction with other required application materials to develop an understanding of a candidate’s communication skills. Schools that practice holistic evaluation, drawing on GRE Verbal, AWA, essays, reference questions about communication ability, transcript indicators of communication skills, other English tests, and even personal interviews, have many indicators to use when assessing communication ability. The new AWA will just be one of them.
What about the new score scale?
When standardized tests change score scales, a concordance table is usually created that allows comparison across versions (Google “TOEFL score conversion” for an example). While these are helpful tools, there is already a mechanism that allows easy comparison: the percentile scores. Over time, the scaled scores shift, but the comparison with all other test takers in a given year remains valid and is conveyed via the percentile scores. Adcoms sometimes evaluate older scores by percentile rather than by raw score.
Changes to the score scales can also really irritate adcoms and our IT departments. Most of the information management systems used by universities are purpose-built for higher ed and not very flexible. When the TOEFL went from paper-based to computer-based, and then to internet-based, the scoring scale changed each time. As a result, we cannot carry the scores in our existing systems without creating new fields for the data, and we cannot calculate averages or easily compare scores across test versions. While this has little to do with the applicant, it is genuinely frustrating for universities.