Assessing Language through Computer Technology (Cambridge Language Assessment Series)

March 2007 — Volume 10, Number 4

Authors: Carol A. Chapelle and Dan Douglas (2006)
Publisher: Cambridge, UK: Cambridge University Press
Pages: Pp. xii + 137
ISBN: 0-521-54949-3
Price: £18.95

Chapelle and Douglas’s book is a clear introduction to the most important concepts in computer-assisted language testing (CALT). The book efficiently describes the main aspects that influence the design, production, and implementation of CALT systems for language learning, placing special emphasis on CALT’s future impact. The authors present the latest advances in the field straightforwardly and concisely, innovations that can support creators and developers of computer-based language tests.

Overview

Chapter 1 describes the agents (teachers, developers, and administrators) and the elements, such as test development and classroom assessment, that directly affect the development of high- and low-stakes testing through computers. The authors state that their approach is open to new paradigms for developing new test content and new testing methods. The chapter thus introduces the general issues in the field of CALT.

Chapter 2 gives a detailed description of the test methods, characteristics, and contents in CALT. The account is well illustrated and supported by a table showing CALT’s advantages and limitations. This table and others like it help readers form a clear picture of the new CALT test-development programs.

CALT’s potential is shown in Chapter 3 through the study of several constraints related to testing-validation techniques. The authors cover controversial issues such as test security levels, the limits of adaptive systems, and the extent to which automatic response scoring can be kept under real control at acceptable rates.
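
To make the limits of adaptive systems concrete, here is a minimal sketch, in Python, of the item-selection loop at the heart of a computer-adaptive test; nothing in it comes from the book, and the item bank, scoring callback, step size, and stopping rule are invented placeholders (operational systems estimate ability with item response theory rather than the crude update shown):

    def adaptive_test(item_bank, score_response, max_items=20, step=0.5):
        # item_bank: (item_id, difficulty) pairs on a common ability scale
        # score_response(item_id) -> 1 if answered correctly, else 0
        ability = 0.0                 # start the estimate at the scale midpoint
        remaining = dict(item_bank)
        for n in range(1, max_items + 1):
            if not remaining:
                break
            # select the most informative item: difficulty closest to the estimate
            item_id = min(remaining, key=lambda i: abs(remaining[i] - ability))
            del remaining[item_id]
            correct = score_response(item_id)
            # crude shrinking update; real systems re-estimate with IRT here
            ability += step * (1 if correct else -1) / n
        return ability

Because each item is chosen to match the current estimate, item exposure patterns become predictable, which is exactly why adaptive delivery raises the test-security concerns the chapter discusses.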

Chapter 4 illustrates CALT implementation through authoring tools such as WebCT. This program, created as a management tool for online courses on the web, includes a module for general test development (multiple choice, true/false, matching, short answer). Focusing on this module permits the authors to demonstrate WebCT’s full potential for creating conventional tests that teachers can manage and partly design.
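
As an illustration of what such a test-development module must represent internally, the following hypothetical Python data model covers the four item formats the chapter mentions, each with an automatic scoring rule; it is a sketch for illustration only, not WebCT’s actual implementation:

    from dataclasses import dataclass

    @dataclass
    class MultipleChoice:
        prompt: str
        options: list          # answer texts shown to the test taker
        answer: int            # index of the correct option
        def score(self, response):
            return int(response == self.answer)

    @dataclass
    class TrueFalse:
        prompt: str
        answer: bool
        def score(self, response):
            return int(response == self.answer)

    @dataclass
    class Matching:
        prompt: str
        pairs: dict            # left-hand item -> correct right-hand item
        def score(self, response):
            # partial credit: fraction of correctly matched pairs
            return sum(response.get(k) == v
                       for k, v in self.pairs.items()) / len(self.pairs)

    @dataclass
    class ShortAnswer:
        prompt: str
        accepted: list         # accepted answers, stored in lowercase
        def score(self, response):
            return int(response.strip().lower() in self.accepted)

For example, MultipleChoice("She ___ early.", ["leave", "leaves"], answer=1).score(1) returns 1; short-answer items, by contrast, show why automatic scoring of open responses is the hard case.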

A technical approach to CALT evaluation can be found in Chapter 5. Chapelle and Douglas state that it is difficult to foresee the future potential of CALT, but they do point to two emerging issues to take into account: (1) the argument-based structure for expressing aspects of evaluations and (2) the use-driven framework for demonstrating appropriate use for CALT validation. These points should probably have been addressed more carefully in the book.

Commentary

First, I want to focus on the development and evolution of the visual ergonomics of the interfaces in various CALT models. Few current evaluations on this subject cover accessibility (the ability of all users to access the test tool easily, whatever operating system they use), usability, and functionality, terms that this book does not fully explore. Nor does the book discuss the importance visual ergonomics could have in presenting the contents of computer-based language tests, and in their subsequent implementation on an online platform such as the web environment. User-oriented interfaces are a key element in creating a testing environment adapted to user-level needs.

In this book we find occasional references to concepts based on Fulcher (2003), such as the “invisible interface” (p. 83), a term not clearly defined either by Fulcher or by Chapelle and Douglas. Instead of this vague concept, it would probably have been better to state that interfaces should not interfere with assessment, particularly since there is little doubt that the interface has an effect on the test taker. For instance, a test taker may perform poorly on a test simply because he or she does not like or feel comfortable with the interface, independently of whether the test content is appropriate for the given situation.

Clearly, the nature of an interface in any interactive format is determined by the level of communication intended to take place between the tool and the user. Guiding the user becomes a basic premise for creating an interface adapted to CALT platforms: a mechanism that orients the user toward completing tasks. Elements such as restricted forward and back arrows, help contents, and the linearity of a guided interaction are essential to test-takers’ comfort and ease in taking computer-aided tests, and they should therefore be studied as a matter of course by both CALT developers and evaluators. It is also worth mentioning that in citing Fulcher’s guidelines on the design of a good interface (see table 5.2, p. 84), the authors overlook a global model proposed by Nielsen more than a decade earlier (Nielsen, 1993).
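
To illustrate the point, a guided linear interaction reduces to a navigation policy that the interface simply enforces; the toy Python sketch below is my own illustration (not drawn from the book or from Fulcher) of the forward-only navigation many computer-aided tests impose:

    class LinearTestNavigator:
        """Forward-only movement through a fixed sequence of test items."""
        def __init__(self, items, allow_back=False):
            self.items = items
            self.pos = 0
            self.allow_back = allow_back
        def current(self):
            return self.items[self.pos]
        def forward(self):
            if self.pos < len(self.items) - 1:
                self.pos += 1
            return self.current()
        def back(self):
            # ignored unless the test design explicitly permits revisiting
            if self.allow_back and self.pos > 0:
                self.pos -= 1
            return self.current()

The design decision of whether allow_back is true is an assessment decision, not a cosmetic one, which is precisely why the interface cannot be treated as “invisible.”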

Second, we need to consider the methods the book proposes for evaluating both the visual design and the content selection of CALT tests, summarised in Chapter 5, where the guidelines for developing and implementing tests via computer are mentioned. Diagram 5.1 summarises the points for evaluating CALT outlined by Noijons, a staff member at CITO, the Dutch National Exams Agency, and one of the coordinators of the EU-funded DIALANG project. The criteria comprise a series of questions to address during the content-creation and development phases of CALT tests. Table 5.3 describes Fulcher’s criteria for CALT interface design, with special emphasis on the “usability test” phase.

These tests are generally applied in the more advanced phases of creating an interface and serve as controlled feedback that helps improve aspects of visual ergonomics and operation. Other evaluation methods, with various profiles, fall into two categories: those that provide a global view of the platform (the handling of the tool’s environment and its interactivity) and those that assess a more specific view (the monitoring and handling of specific tasks, the validation of those tasks, etc.).

Beyond Chapelle and Douglas’s suggested means of evaluating CALT environments, two other methods for judging usability deserve mention: (1) heuristic evaluation (Nielsen & Molich, 1990), in which specialists in test design evaluate whether each element of the interface follows usability principles related to navigability, flexibility, and accessibility; and (2) cognitive walkthroughs (Lewis et al., 1990), a usability testing method that generates early design feedback by walking a group of users through tasks representative of the interface.
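
The mechanics of the first method are simple enough to sketch: each evaluator logs the problems found against a named heuristic with a severity rating, and the pooled ratings are ranked to prioritise fixes. In the Python sketch below, the heuristic labels follow Nielsen and Molich (1990), while the data and the 0-4 severity scale are illustrative assumptions:

    from collections import defaultdict

    def tally(findings):
        # findings: (heuristic, severity) tuples pooled from all evaluators
        by_heuristic = defaultdict(list)
        for heuristic, severity in findings:
            by_heuristic[heuristic].append(severity)
        # rank heuristics worst-first by mean severity
        return sorted(((sum(v) / len(v), h) for h, v in by_heuristic.items()),
                      reverse=True)

    findings = [
        ("visibility of system status", 3), ("visibility of system status", 4),
        ("user control and freedom", 2), ("error prevention", 1),
    ]
    for mean_severity, heuristic in tally(findings):
        print(f"{mean_severity:.1f}  {heuristic}")

The appeal of the method for CALT developers is its cost: a handful of specialists and a checklist, rather than a full panel of test takers.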

Standardised tests should be evaluated in their own context and according to their specific use. For instance, it is questionable whether the TOEFL design would be acceptable in a low-stakes situation for Japanese primary students, when in fact it was designed as a high-stakes test for international students. A clear comparison of usability techniques is presented by Jeffries et al. (1991).

All of these methods can be combined, and since they are used in creating and developing any telematic environment, they are applicable to uncovering CALT problems. The importance of accessibility in validating any web platform or telematic environment such as CALT is another issue the book does not fully address. Usability testing, as first suggested by Nielsen, has led international research consortia to establish standards for evaluating and implementing levels of usability and accessibility in web environments and applications.

The third point to consider is the content of Chapter 4, specifically the authoring tools used to create and manage training documents on educational virtual platforms. Such tools lend themselves, amongst other functions, to developing tests of various kinds (simple, multiple-choice, relational, etc.) and allow multimedia elements such as static and dynamic images and sound to be integrated. Such software lets users without advanced computer knowledge manage information in a structured way and, more importantly, classify and re-use content systematically, based on international content standards such as SCORM (Shareable Content Object Reference Model) and IMS Learning Design.
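
To give a concrete sense of what such standards-based re-use involves: a SCORM content package carries an imsmanifest.xml describing its organization and resources, so that any compliant platform can import it. The Python sketch below generates a deliberately minimal manifest; the identifiers, titles, and file name are invented, and a real manifest also needs the schema and metadata declarations the standard requires:

    import xml.etree.ElementTree as ET

    manifest = ET.Element("manifest", identifier="CALT_SAMPLE_TEST")
    organizations = ET.SubElement(manifest, "organizations")
    org = ET.SubElement(organizations, "organization", identifier="TOC1")
    ET.SubElement(org, "title").text = "Sample placement test"
    item = ET.SubElement(org, "item", identifier="ITEM1", identifierref="RES1")
    ET.SubElement(item, "title").text = "Reading section"
    resources = ET.SubElement(manifest, "resources")
    ET.SubElement(resources, "resource", identifier="RES1",
                  type="webcontent", href="reading_test.html")

    # write the package manifest that a compliant platform would read on import
    ET.ElementTree(manifest).write("imsmanifest.xml",
                                   xml_declaration=True, encoding="utf-8")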

What the book does not cover on this topic is the role these tools play in an increasingly crowded virtual realm: the push to develop communication standards between educational virtual platforms. Sustainability criteria need to be kept in mind at all times; “media ecology” must be practiced, because the web is becoming saturated with non-reusable content. To that end, it is vital that CALT-developed content comply with recent IEEE and SCORM standards, which facilitate its integration and validated use across multiple platforms.

What Chapelle and Douglas provide in their book will no doubt be helpful, particularly to those new to CALT. It is what they overlook or do not treat comprehensively that is problematic about the book as a guide through the issues raised by CALT.

References

IMS Learning Design. IMS Global Learning Consortium, Inc. Retrieved December 1, 2006, from http://www.imsglobal.org/learningdesign/

Jeffries, R., Miller, J. R., Wharton, C., & Uyeda, K. M. (1991). User interface evaluation in the real world: A comparison of four techniques. Proceedings of ACM CHI ’91, New Orleans, pp. 119-124.

Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of ACM CHI ’90, Seattle, pp. 235-242.

Nielsen, J. (1993). Usability engineering. New York: Academic Press.

Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Proceedings of ACM CHI ’90, Seattle, pp. 249-256.

SCORM (Shareable Content Object Reference Model). Advanced Distributed Learning. Retrieved December 1, 2006, from http://www.adlnet.gov/scorm/index.cfm

Teresa Magal-Royo
Polytechnic University of Valencia, Spain
<tmagal@degi.upv.es>

© Copyright rests with authors. Please cite TESL-EJ appropriately.

Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.
