February 2023 – Volume 26, Number 4
Assessing Academic English for Higher Education Admissions

| Author | Publisher | Pages | ISBN | Price |
|---|---|---|---|---|
| X. Xi & J. Norris (Eds.) (2021) | Routledge | Pp. 1–234 | 9780815350644 (paper) | $49.95 U.S. |
The 21st century has seen a growing number of international students seeking admission to English-medium universities. This has created a crucial need for more focused efforts in the assessment of their academic English proficiency (Eckes & Althaus, 2020). Academic tests taken for admission purposes must both assign students an English proficiency score and predict their academic success in university programs. These tests must therefore provide reliable and meaningful test scores (Chapelle & Voss, 2013). Indeed, amid the popularity of such tests, there has been a pressing need for a state-of-the-art reference on assessing students’ academic English language ability. The book Assessing Academic English for Higher Education Admissions addresses this need by providing readers with a comprehensive overview of the assessment of the four academic English language skills. For each skill, the book introduces theoretical backgrounds, current trends, technology integration, and gaps in the literature that require further investigation. This 2022 SAGE/ILTA award-winning book (International Language Testing Association, 2022) is a noteworthy reference for readers interested in assessing English for academic purposes.
The book consists of six chapters. Following the introductory chapter, Chapters 2–5 address the four language skills—reading, listening, writing, and speaking, respectively—while Chapter 6 serves as a concluding chapter that reviews the previous four. Chapter 1 introduces readers to the concept of English for Academic Purposes and highlights the linguistic and communicative abilities that underlie task performance across academic contexts. The authors of this chapter argue that validity is fundamental both to writing test items and to delivering tests, and they emphasize the importance of efficient, valid, and reliable academic English tests.
Chapters 2 and 3 address the assessment of the receptive skills: reading and listening. Chapter 2 provides an overview of reading comprehension theories and proposes a model for assessing academic reading. A notable point raised in this chapter is how the prevalence of technology is redefining reading ability and giving rise to new assessment methods (e.g., ePIRLS). Throughout the chapter, the authors criticize the continued reliance on paper-based texts and the lack of media-based reading, the latter of which is increasingly common in the 21st century. They therefore call for the integration of tasks that assess students’ academic ability to read to learn and to read to integrate information across texts. The authors conclude the chapter by pointing out the importance of conducting an international domain analysis to reach an up-to-date definition of academic reading comprehension. Chapter 3 then presents an overview of the theoretical explanations and common trends in assessing listening. The authors of this chapter stress the importance of assessing listening as an interactive rather than a one-way skill, and as an authentic real-life ability, rather than relying on decontextualized short sentences. To provide readers with practical examples, the authors compare three common standardized listening tests—namely TOEFL iBT, IELTS, and PTE—and explain how test definition, sample, delivery, and format differ across the three. In this vein, they note that English as a Lingua Franca has mostly been neglected in standardized tests, despite being recommended by the latest language testing research.
Chapters 4 and 5 address the productive skills: writing and speaking. Chapter 4 highlights the importance of assessing writing under an interactionist assessment approach that “conceptualize[s] writing in reference to the characteristics of a person’s abilities to interact in a particular context rather than as a trait, independent of a context” (p. 108). As in Chapter 3, the authors compare the three standardized tests’ writing sections and demonstrate how they differ in task type, test time, composition length, rating mode, and number of raters. Accordingly, they suggest applying integrated writing tasks (e.g., integrating reading with writing) instead of independent tasks (assessing writing alone). Chapter 5 opens by emphasizing the importance of assessing speaking in ways that replicate the actual spoken language students use in real-life academic contexts (e.g., group discussions). The authors argue that test developers need to define the speaking test and its purpose, choose task types that best represent that purpose, and design a rubric that best measures what the test claims to measure. In terms of practical evaluation, and similar to previous chapters, they address the strengths and weaknesses of the three standardized tests. For example, they criticize the IELTS test for giving only minimal consideration to interactional competence, which refers to students’ ability to use pragmatically appropriate language to communicate with other speakers by developing the topic, negotiating meaning, and taking turns (Ockey & Li, 2015). In this chapter, the authors question the underrepresentation of pragmatics and interactional competence in academic speaking tests and call for their consideration.
Chapter 6, the concluding chapter, summarizes the previous four chapters in terms of validation of test score interpretation, task design, technology usage, and future research recommendations. In this chapter, the author asks the readers to consider two main points, namely (1) the complexity of integrating an overall framework of the four language skills when the skills are mostly represented as four separate modalities, and (2) the need to consider the integration of technology into the English language test, as such technology has become an integral part of higher education communication.
Overall, the volume is a valuable contribution to the assessment of English in higher education from both theoretical and practical perspectives. On the theoretical side, it can help test developers understand the theoretical underpinnings of each language skill, the common models for each skill, and current trends and recommendations. Readers of this book will be able to answer questions such as these:
- What language should I include on my test?
- What task types should I incorporate?
- What evidence do I have to support the meaning of my test scores?
On a practical level, readers of the book will be able to apply their understanding of test delivery, task design, scoring methods, and technology application. Moreover, the comparison across chapters of how the three common standardized tests represent each of the four skills provides a useful reference for university admissions services that are unsure which test best serves their needs. For example, if admissions services believe that test takers’ performance on integrated tasks is important for students’ academic success, then the TOEFL iBT would be the most appropriate test, since the other common standardized tests (IELTS and PTE) do not include integrated tasks, instead assessing each skill separately. Finally, researchers interested in the assessment of academic English will find this book a great starting point, as each chapter ends with areas of research that deserve further investigation.
While the book has several strengths, a few limitations are worth mentioning. First, although the authors advocate assessing integrated rather than independent skills, the organization of the book does not support their argument. For example, it would be more consistent with the authors’ views if reading and writing were merged into one chapter and listening and speaking into another. A second limitation is that some chapters are strongly informed by a construct-based approach (e.g., Chapter 5), which emphasizes the need for a clear definition of the test based on a domain analysis of the language knowledge, abilities, and fundamental skills to be performed in the academic context, while other chapters place less emphasis on this (e.g., Chapter 2). This raises the question of whether all the language skills are equally approachable through a construct-based framework and whether the contributors are in agreement regarding the construct-based approach. Last but not least, although most of the contributors to this book are language assessment experts, some are affiliated with Educational Testing Service (ETS), and thus there is a risk of bias in their contributions in favor of the TOEFL test formats over the other standardized tests.
Despite its limitations, Assessing Academic English for Higher Education Admissions is highly recommended for higher education admission test developers, researchers interested in investigating current gaps in higher education assessment contexts, institutions developing placement tests for admission purposes, and graduate students interested in academic English assessment.
To Cite this Article
Aseeri, F., & Susanto, A. (2023). Assessing Academic English for Higher Education Admissions. X. Xi & J. Norris (Eds.) (2021). Teaching English as a Second Language Electronic Journal (TESL-EJ), 26(4). https://doi.org/10.55593/ej.26104r3
References
Chapelle, C. A., & Voss, E. (2013). Evaluation of language tests through validation research. In A. Kunnan (Ed.), The companion to language assessment (Vol. III) (pp. 1079–1097). Wiley. https://doi.org/10.1002/9781118411360.wbcla110
Eckes, T., & Althaus, H. J. (2020). Language proficiency assessments. In M. E. Oliveri & C. Wendler (Eds.), Higher education admission practices: An international perspective (pp. 256–275). Cambridge University Press. https://doi.org/10.1017/9781108559607
International Language Testing Association. (2022, March 11). SAGE/ILTA Book Award. https://www.iltaonline.com/news/600403/SAGEILTA-Book-Award.htm
Ockey, G. J., & Li, Z. (2015). New and not so new methods for assessing oral communication. Language Value, 7(1). https://doi.org/10.6035/LanguageV.2015.7.2
About the reviewers
Fatimah M. Aseeri is a Ph.D. student in the Applied Linguistics and Technology program at Iowa State University in Ames, Iowa. Her research interests include language assessment literacy, the assessment of oral communication, and the role of corrective feedback in second language learning. https://orcid.org/0000-0001-6503-3207. fmaseeri@iastate.edu
Andrias Susanto is a Ph.D. student in the Applied Linguistics and Technology program at Iowa State University in Ames, Iowa. His research interests include language assessment, second language pronunciation, and technology-assisted language learning. https://orcid.org/0000-0002-6694-6814. andrias@iastate.edu
© Copyright rests with authors. Please cite TESL-EJ appropriately. Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.