The Electronic Journal for English as a Second Language

The Art of Nonconversation

June 2002 — Volume 6, Number 1

The Art of Nonconversation

Marysia Johnson (2001)
Yale University Press
Pp. viii + 230
ISBN 0-300-09002-1
US $30.00 (paper)

In The Art of Nonconversation, Marysia Johnson uses insights from discourse analysis and conversation analysis to examine the validity of a common type of spoken language assessment, the Oral Proficiency Interview (OPI). Finding its validity wanting, she offers an alternative theoretical model of spoken language ability, the Practical Oral Language Agility (POLA), based on the sociocultural and institutional contexts in which the language has been acquired. This is a most welcome book because it not only calls into question the validity of OPIs but also offers test developers a solid, theoretically grounded and research-based model for developing, assessing, and interpreting locally constructed tests of spoken ability.

The book has a clear Table of Contents, acknowledgements, and an extremely useful List of Abbreviations. It also includes two Appendices (one an OPI coding sheet, the other a coding document listing the various codes used in analyzing the OPI), an excellent Bibliography, and a very useful Subject Index.

The book has nine chapters, the first of which gives an overview of the text. Chapters 2 through 7 give a description, history, and theoretical and empirical analysis of the Oral Proficiency Interview; the OPI's validity is investigated within Messick's framework of validity. Chapters 8 and 9 investigate what speaking ability outside of tests is and propose a new model of speaking ability.

In Chapter 1, Johnson briefly introduces the OPI in its many forms and poses the question of whether the OPI's interview format is the most appropriate and desirable way to assess second language speaking ability. She then introduces her discourse analysis methodology and takes up the question of what speaking ability is, offering a skeleton description of Vygotsky's sociocultural theory, which is the basis for Johnson's POLA and a direct challenge to the current testing view of interaction as cognitively and psycholinguistically based.

Chapter 2 outlines the history of the OPI system, from its beginnings in the 1950s through the improvements and refinements of the early 1990s, with many examples of the different types of OPI. The structure of the OPI, its elicitation techniques, and its rating procedures are then clearly described. The OPI's six general test characteristics are listed, and its reliability and validity are discussed from the supportive viewpoint of academia's proficiency movement (late 1970s to late 1980s), which tried to promote the OPI as the main assessment instrument for second and foreign language proficiency.

Chapter 3 begins the critical analysis of the OPI with a short history of its popularity and its growing institutionalization despite the absence of a complete empirical investigation. Johnson then gives a historical outline of validity in general and cites Messick's momentous article on validity as the dividing point between two periods of thought about validity, pre- and post-Messick. Johnson then discusses the viewpoints of the OPI's opponents, referring to the serious flaws in the OPI's claim that it measures conversational ability. To test the OPI empirically, she proposes to analyze it using discourse and conversation analysis methodology and to see whether the OPI constitutes its own speech event, with rules outside those of conversation. [-1-]

The theoretical bases for exploring whether the OPI is a speech event in its own right are laid down in Chapter 4. Johnson does this by carefully grounding her methodology in recent discourse and conversation analysis findings, beginning with a definition of a speech event. She then discusses conversation as a speech event and its prototypical features of turn-taking, repair, adjacency pair systems, and topic, which create order in conversation. She then explains both an interview and classroom interaction as speech events. Finally, she summarizes these prototypical features in all three speech events, comparing and contrasting their similarities and differences.

Chapter 5 begins the qualitative and quantitative empirical analysis of the OPI in order to identify the OPI's prototypical features and compare them with those of interviews, conversations, and classroom interactions. In this chapter she describes the quantitative study's data, coding system, and coding process, and summarizes its findings. From the well-crafted summary of the discourse analysis findings, Johnson concludes that the OPI is a combination of a survey-research interview and a sociolinguistic interview, neither of which is representative of real-life conversation, the opposite of what Educational Testing Service says the OPI tests reflect.

The qualitative study of testers' and non-testers' perceptions of the OPI speech event is described in Chapter 6. This study was done to see whether its findings are consistent with those of the earlier discourse analysis study. It used a semantic differential instrument, a well-established and validated tool, to compare individual native speakers' perceptions of the OPI. The data were analyzed in three parts and found to support the discourse analysis findings in Chapter 5.

Chapter 7 fleshes out the prototypical features of the OPI communicative event into a model of it. Within this model, Johnson describes four major sections: the warm-up phase, the explanation of testing procedures, the level check and probes, and the wind-down phase. None of these reflects real-life conversation, which supports the findings in Chapters 5 and 6. Johnson then offers suggestions to those who sell OPIs regarding the advertising of what OPIs really test, changes in how OPI testers are trained, and the implementation of an official policy on OPI testing procedures.

In Chapter 8, Johnson gives a short history of communicative competence, the most widespread theoretical basis of second language teaching and testing. She then describes the two most widely accepted models of communicative competence in second language theory and testing. A discussion of communicative competence as compared to proficiency follows, along with the problem of the two terms being used interchangeably. To exemplify this problem, the Test of Spoken English (TSE) and the Speaking Proficiency English Assessment Kit (SPEAK), both sponsored by ETS, are examined against their claims to measure communicative language ability. Johnson argues that these are indirect OPI tests and that their validity is questionable because it is unclear what they actually measure: communicative competence or proficiency. In the last section, Johnson proposes an alternative framework to the communicative competence models, interactional competence theory. This theory differs from the others in that co-construction of conversation and localized, context-specific competencies, not general language competence, are recognized as central to participants' oral interactive practices. It is a theory of knowledge, specifying what participants have to have in order to participate in interactive contexts. This interactive competence is acquired through three processes: ". . . observation, reflection and creation" (p. 177). This theory, Johnson says, is actually rooted in Vygotsky's sociocultural theory, and the connections are then discussed. It is this sociocultural theory that Johnson states is the best theory at hand to answer theoretical and practical issues related to speaking ability. [-2-]

In the first part of Chapter 9, the main tenets of Vygotsky's sociocultural theory are described along with their implications for Second Language Acquisition (SLA) instruction and research. Johnson ends this chapter and the book with her Practical Oral Language Agility (POLA), which applies Vygotsky's theory to spoken language testing, specifically International Teaching Assistants' speaking tests given to determine spoken ability in a variety of academic situations. Some features of this test follow. The testing procedure should be locally developed and interpreted. The variety and number of contexts should be clearly defined, along with the intended audience and the test's purpose. Each context should be carefully analyzed in terms of functions, tasks, skills, and abilities. The oral events tested should be scored separately by a group of evaluators. Testers would be required to have interactive abilities and exposure to many training instances of the target spoken language. Feedback would be practical and informative: not just one score purporting to represent a speaking ability, but suggestions for improving weaker tested situations and descriptions of what the test taker can actually do.

This book is long overdue because it not only empirically measures what commonly used (and overused) OPIs actually do and are, but also offers a viable replacement for many other spoken language assessment contexts. The POLA could also be applied in such disparate spoken language contexts that now use OPI tests to evaluate language proficiency and gains in proficiency as Adult Pre-Employment English, Workplace English, Pre-Academic Speaking (formal presentation and social), and any variety of English for Specific Situations. This book should be required reading for anyone involved in a spoken language context. Graduate students, teachers, test designers, course designers, admissions officers, and program administrators would all benefit from its insights.

Jim Bame
Utah State University
<fabame@cc.usu.edu>

Editor’s note: see http://www.yale.edu/yup/books/090021.htm for more information.

© Copyright rests with authors. Please cite TESL-EJ appropriately.

Editor’s Note: Dashed numbers in square brackets indicate the end of each page for purposes of citation.


[-3-]

© 1994–2026 TESL-EJ, ISSN 1072-4303