ELT Scholars’ Attitudes towards Inclusion of Intercultural Competence Assessment in Language Proficiency Tests

February 2023 – Volume 26, Number 4

https://doi.org/10.55593/ej.26104a6

Mohammad Kazemian
Department of English, Tonekabon Branch, Islamic Azad University, Tonekabon, Iran
<m_kazemiansanati@yahoo.com>

Mohammad Reza Khodareza
Department of English, Tonekabon Branch, Islamic Azad University, Tonekabon, Iran
<m.r.khodareza1349@gmail.com>

Fatemeh Khonamri
University of Mazandaran, Babolsar, Iran
<fkhonamri@yahoo.com>

Ramin Rahimy
Department of English, Tonekabon Branch, Islamic Azad University, Tonekabon, Iran
<rahimy49@yahoo.com>

Abstract

Intercultural Competence Assessment (ICA) has recently become a central issue in applied linguistics in general, and language testing and assessment in particular. The present paper aims to investigate the difference between native and non-native assessment experts’ beliefs about incorporating ICA in the Language Proficiency Assessment (LPA). A basic qualitative research design was employed, and questions were emailed to 97 native and non-native language testing and assessment experts, of whom 32 returned their responses (response rate = 33%). Moreover, 10 of the experts were interviewed to triangulate the data. The data were analyzed qualitatively and quantitatively. The results demonstrated that there were no significant differences between native and non-native experts’ attitudes towards whether ICA should be included in the LPA. Despite this finding, some native speaker experts strongly disagreed with the notion while some non-native scholars supported it. This study may be helpful to Teaching English as a Foreign Language (TEFL) assessment experts who argue for the inclusion of ICA in the LPA, believing that such an inclusion would benefit not only second language proficiency assessment but also efforts to design more effective instructional syllabuses.

Keywords: Intercultural Competence Assessment, Language Proficiency Assessment, Native speakers, Non-native speakers

Assessment, as an integral part of any educational system, monitors the quality of instruction and measures learning outcomes (Cohen et al., 2023). In Teaching English as a Second/Foreign Language (L2), assessment procedures inform a wide array of decisions, from local concerns such as day-to-day instructional planning to general issues like course design and curriculum development. Moreover, language proficiency assessment, which measures what a candidate is capable of doing with their current knowledge of English, is used for employment purposes and for selecting qualified students for entrance to universities in English-speaking countries. Most of these high-stakes proficiency tests center around conspicuous aspects of language; that is, the four skills of listening, speaking, reading, and writing, plus explicit or implicit attention paid to pronunciation, grammar, and vocabulary as sub-skills. Accordingly, attempts have been made to guarantee good assessment through the publication of such documents as the National Standards for Foreign Language Education (1996) and the Common European Framework of Reference for Languages (CEFR, 2001). A significant issue which has remained unattended thus far, however, is the inclusion of intercultural competence (IC) in the assessment of language proficiency (Nushi et al., 2016). IC is defined as “the ability to consciously experience different cultures and thus to be able to intentionally generate appropriate alternative behavior” (Bennett, 2020, p. 529). IC also refers to “complex abilities that are required to perform effectively and appropriately when interacting with others who are linguistically and culturally different from oneself” (Fantini, 2009, p. 458).

Intercultural Competence Assessment (ICA) has been underrepresented in the language assessment literature (Scarino, 2017), mainly due to the difficulty of its assessment. In practical terms, the inclusion of ICA requires specialized logistics in terms of expert examiners and standard assessment criteria. Moreover, the development of construct validity arguments for intercultural content in a language test would require extensive theorizing and conceptualization (Borghetti, 2017).

Bateson (1972, 1979), a philosopher and anthropologist, distinguished three levels of constructivism relevant to assessment. The first level (action) refers to unilateral causal analysis, corresponding to the intersection of positivism and the individual level. The second level is “contextual”, where phenomena and their interaction need to be considered in context, corresponding to the intersection of relativism and the group or institutional level. Most importantly, at the third level of constructivism in assessment, as defined by Bennett (2020, p. 529), ICA pertains “not so much to individuals or cultures themselves but rather to the experiencing of one’s own and other cultures”. While IC refers to knowledge of and proficiency in the target language culture, ICA is the actual task of assessing that competence. The objective of ICA is for language testers to assess IC by investigating the use of language by two or more interlocutors from different cultural backgrounds in a real-world context (Mazeikiene & Virgailaite-Meckauskaite, 2007).

Disregard for intercultural elements is related to another serious problem: most language proficiency assessments are designed by individuals who are not members of the test-taking communities. As a result, they may not be familiar with the contextual constraints of the country in which the tests are intended to be used, leading to the production of tests that are culturally loaded or even biased (Chitravelu, 2007). The problem becomes crucial when these culturally loaded tests are applied in parts of the world outside North America and Europe and, hence, affect ecological validity (Farhady, 2011). Indeed, ecological validity refers to “whether or not one can generalize from observed behavior in the laboratory to natural behavior in the world” (Schmuckler, 2001, p. 419). This is a grave concern given the fast growth of English around the world and the development of World Englishes, which suggests the need to assess IC.

The literature appears to show agreement on the need to assess IC in language classes and considers it more or less possible and even mandatory (e.g., Borghetti, 2017; Byram, 1997, 2009; Byram & Morgan, 1994; Fantini, 2009; Sercu, 2004, 2010; Schauer, 2016; Schulz, 2007). More specifically, learners can benefit from such an assessment to increase their self-awareness and adopt appropriate learning strategies (Borghetti, 2017). Language testing should be informed by the unprecedented growth of variation in the norms of international communication (Canagarajah, 2006; Elder & Davies, 2006), and the contents of language tests need to correspond with the functions for which examinees will use the language (Zafar Khan, 2009). Most importantly, the integration of ICA in a language proficiency test best represents the integration of language and culture in real-world contexts. Thus, it is critical to examine the role of IC in language assessment.

As such, the present study aims to investigate language assessment experts’ beliefs regarding whether ICA should be integrated into language assessment. Another main objective is to explore assessment experts’ attitudes towards how current language tests can be redesigned to embrace intercultural elements. To that end, the present researchers approached leading assessment experts from central and peripheral, native and non-native, language education contexts for their stances on the issue. The study therefore fills a gap by consolidating assessment experts’ opinions regarding the integration of intercultural elements in language tests and by comparing and contrasting the views of inner-circle and outer-circle experts on the issue. The significance of the study lies in the fact that it attends to a so far overlooked area, and its findings can contribute to a better understanding of language assessment. Furthermore, as more than just the four language skills are required for successful cross-cultural communication, exploring the underlying assumptions of experts can prepare the ground for the inclusion or exclusion of a fifth skill in language teaching and assessment. This, in turn, can have numerous theoretical and practical implications and may enhance the validity, reliability, and fairness of future language assessment practices.

Literature Review

Culture as a Language Skill

In spite of the fact that listening, speaking, reading, and writing, as the four cardinal language skills, are integral elements of L2 situations, in themselves they do not seem sufficient to help students become communicatively competent (Vermier et al., 2008). Culture learning is considered the fifth language skill by some scholars (Damen & Savignon, 1987; Tomalin, 2008). In the same vein, Oxford (2001) believes that culture and grammar could be regarded as skills, yet different from the four traditional skills in the sense that they can be affected by listening, speaking, reading, and writing. The emergence of this fifth language skill, referred to as IC in applied linguistics, suggests that a new opportunity has arisen to redefine our understanding of language proficiency in language testing (Nushi et al., 2016). IC skills are the “abilities to interpret the meanings in the target culture and relate them to one’s own and to interact with people from different cultures” (Tran & Seepho, 2016, p. 9). This has crucial implications for how we assess proficiency as well. Although teaching and learning an area as vast as culture may seem a daunting prospect, it forms a natural and integral part of language teaching in general and language assessment in particular.

The Interface between Communicative Competence (CC) and Intercultural Competence (IC)

CC is the final goal of the majority of L2 contexts; CC tackles “the interaction between grammatical competence, or knowledge of the rules of grammar, and sociolinguistic competence, or knowledge of the rules of language use” (Fulcher & Davidson, 2007, p. 38).

The most important model of CC is the seminal communicative competence model introduced by Canale and Swain in 1980. The purpose of the model was to inform the instruction and assessment of English-speaking learners of French in special programs. They planned to compare students’ proficiency in their L2, i.e., French, with that of native French speakers or of learners of French in more traditional L2 programs. Not only did they include grammatical competence, but they also added three components: sociolinguistic competence, strategic competence, and discourse competence (Duff, 2014). A rather more comprehensive model of linguistic proficiency, Communicative Language Ability, was then introduced by Bachman (1990) and later refined by Bachman and Palmer (1996, 2010). First, the model makes a clear-cut distinction between “knowledge constitution” and “skill constitution”. There are three constituents in Bachman’s (1990) Communicative Language Ability: language competence (knowledge), strategic competence, and psychophysiological mechanisms, the last of which covers the actual implementation of language competence as a physical act (Bachman, 1990). The model is more comprehensive and much clearer than Canale and Swain’s because of its detailed description of the basic components of CC. Second, it endeavors to portray the processes through which the various constituents interact with the context of language use (Fulcher & Davidson, 2007).

Celce-Murcia et al. (1995) argued that their model was intended to serve as a comprehensive “checklist” for teachers, spanning all of the components of CC, that is, organizational, pragmatic, and strategic competencies (Bachman, 1990; Celce-Murcia et al., 1995).

Celce-Murcia et al. (1995) challenged Bachman’s (1990) model by proposing a model that elaborates on Canale’s (1983) earlier CC model. Their model was introduced partly as a criticism of Bachman’s model, which was seen as confined to the context of language testing (Fulcher & Davidson, 2007; Motallebzadeh & Moghaddam, 2011).

There is a closeness between CC and IC. Celce-Murcia et al.’s (1995) model of CC was an expansion of Canale’s (1983) earlier model. In their model, socio-cultural competence refers to the speaker/listener’s background knowledge of the target community that facilitates communication and comprehension (Abedi & Gandara, 2006). According to Celce-Murcia et al. (1995, p. 24), one of the components of culture is sociocultural background, which refers to “the knowledge of the target language community, awareness of major dialect or regional differences, and cross-cultural awareness.” Regarding cross-cultural awareness, there are a number of culture-specific rules whose absence can have detrimental effects on language learners’ second culture acquisition (Celce-Murcia et al., 1995). It is worth noting that cross-cultural awareness is one of the underpinnings of IC (Byram, 1997), and the interface of CC and IC can be observed in this respect. Moreover, to have comprehensive language proficiency tests, IC should be incorporated into linguistic proficiency tests (Nushi et al., 2016). Indeed, second language acquisition and second culture acquisition are inextricably interwoven (Robinson, 1991). McNamara (2000), nevertheless, challenges socio-cultural competence as a testing construct, contending that involving such variables in language testing digresses from and violates the purpose of the LPA and produces a test of identity instead. To resolve the issue, Scarino (2006) introduced the assessment cycle, which has four components: Conceptualization, Elicitation, Judgment, and Validation.

Conceptualization refers to what will be assessed and for what purposes. Elicitation concerns the tasks and procedures that operationalize the construct; the conceptualization of the construct will itself affect the process of elicitation in intercultural language education. Judgment deals with how to judge performance, and Validation concerns how to justify the construct. However, Jaeger (1999) contends that the problem with assessing intercultural issues lies in conceptualizing what is being assessed rather than in the assessment itself. Liddicoat and Scarino (2010, p. 54) suggest that one way of understanding the intercultural construct is to see it “as factual, objective knowledge, which is removed from people as constructors and users of that knowledge.” Similarly, one approach to assessing IC is through the assessment of IC behaviors. In this regard, IC is treated as if it were the fifth language skill, so that a learner is asked to perform the role of a communicator in an intercultural context (Liddicoat & Scarino, 2010).

Language Proficiency Assessment (LPA), Intercultural Competence Assessment (ICA), and World Englishes (WEs)

Researchers (Bachman, 2010; Brown, 1995; Lantolf & Frawley, 1992) believe that a valid LPA cannot exist unless there is an acceptable and empirical model of language proficiency. Various factors including the emergence of different conceptualizations of LPA (Bachman, 2010), redefinition of the construct of language ability, and greater attention to sociopolitical and ethical factors in test design (Davies, 1977) have led to the expansion of LPA in regard to different frames of reference. The theoretical models of LPA have evolved over time from the traditional discrete-point approaches to the more recent communicative approaches. A communicative approach to the LPA is multifaceted and includes consideration of IC as well as grammatical, sociolinguistic, discourse and strategic competence. Although major commercial language tests have been trying their best to incorporate a variety of language uses in their tests, IC skills have been paid scant attention (Nushi et al., 2016). Additionally, ease of administration might be a challenge.

Clyne and Sharifian (2008), in their position paper, believe that the preparation of language tests should be in line with the actual IC needs of testees. They argued that “native” speakers are noticeably absent from many contexts in which high-stakes examinations such as IELTS and TOEFL are administered. Therefore, these tests should attempt to evaluate IC skills instead of being dominated by the “inner circle” Englishes (Kachru, 1986). Other writers have also argued that approaches to assessing IC skills include attitudinal tests (Cadd, 1994), the culture assimilator (Brislin et al., 1986), and cultural awareness tests (Byram & Morgan, 1994).

Brown (2014) implicitly addressed the concept of ICA in his study of the future of WEs in language testing, defining WEs and its related paradigm, that is, Kachru’s (1986) inner, outer, and expanding circles. He then discussed whether the WEs and language testing communities could reconcile with this new phenomenon or whether they remain adamant that native-speakerism is superior to other varieties of English. He concluded his study with implications and recommendations, arguing that the testing communities are obliged to make the intersection of WEs and language testing more productive (Brown, 2014).

Intercultural Communicative Competence and Intercultural Competence Assessment Studies

Switching from a more theoretical description of IC theory to the empirical research domain, it is worth mentioning that a great deal of research has been done on the cultural content of L2 textbooks. However, scant research appears to have been done on ICA. For example, the analyses of English textbooks by Abdullah and Chandran (2009) and Mousavi (2018) in the Malaysian and Iranian L2 educational contexts revealed an emphasis on local cultures in both contexts. A strength of these studies was that they pinpointed the treatment of national cultures in their respective contexts.

Rezaei and Naghibian (2018) and Kazemian et al. (2021) worked on IC in the reading and writing skills, respectively. A strength of Rezaei and Naghibian’s (2018) study was the incorporation of IC skills in reading. In their study, the function of literary texts in the development of Iranian English language learners’ IC was investigated. The participants were taught with a contrastive approach in which both native and target language cultural points were emphasized. They completed five activities: intensive reading, extensive reading, cross-cultural discussions, critical thinking, and role plays. The researchers followed the four elements of Kraft’s cultural pattern (i.e., linguistic, social, technological, and religious structures) as introduced in Chinaka (2010). The results indicated that incorporating literary texts in teaching culture provided valuable insights. To show the significance of IC in both instruction and assessment, Kazemian et al. (2021) investigated the effect of IC instruction on writing in an L2 context. To analyze the samples of learners’ writing, a modified rubric based on Byram’s (1997) model was adopted. The results showed that the effect of IC instruction on EFL writers was significant. This study highlighted the importance of incorporating IC instruction and IC assessment in writing.

In view of a model of IC in English language programs, Young and Sachdew (2011) examined the beliefs and practices of twenty-one experienced ESL/EFL teachers from the USA, the UK, and France, focusing on Byram’s (1997) language-pedagogical model of IC in their teaching practices. The survey was multi-method, combining diaries, focus group interviews, and questionnaires. The results pointed to general agreement across locations but indicated a slight difference between native and non-native teachers’ sentiments and beliefs about IC and their current classroom preferences. Most participants supported the applicability of IC to their work and underscored that ‘good’ learners and teachers tended to show high IC. In fact, there were no significant differences between the responses given by the native and non-native speakers about their teaching practices. Notwithstanding, they also suggested that IC was given relatively little emphasis in testing and textbooks.

Gu (2016) investigated the assessment of IC in foreign language education in China. Data were gathered from 1,170 Chinese EFL university teachers via a questionnaire. The results revealed that, in spite of being willing to assess IC, the EFL teachers had a limited conception of IC, which had a domino effect on the measurement of their students’ IC. The conclusion of this study was that more studies should be conducted on assessing IC, given that the EFL teachers lacked a proper understanding of the concept. Similarly, to problematize language testing and assessment syllabi through ICA, Kazemian et al. (2022) analyzed twenty applied linguistics syllabi of Language Testing and Assessment Ph.D. programs from twenty Iranian universities. The results showed that, despite experts’ belief in the concept of ICA, there was no trace of the concept in their syllabi.

All in all, there is no extensive body of literature on ICA and its role in the LPA. While studies concentrated on a range of areas including textbook evaluations, reading, and teacher education in terms of IC, fewer studies focused on IC assessment in teacher education (Gu, 2016), writing (Kazemian et al., 2021), and problematizing ICA syllabi (Kazemian et al., 2022). As such, the topic seems to be a novel one in applied linguistics in general and language assessment in particular.

The research question that guided this study was:

  1. Is there any difference between native and non-native speaker assessment scholars’ attitudes towards incorporating ICA in the LPA?

Methodology

Research Design

This study employed both qualitative and quantitative research, that is, a convergent mixed methods design (Creswell & Creswell, 2018). In fact, a basic qualitative research study (Merriam, 2002), alternatively called a basic interpretive qualitative study (Ary et al., 2014), and interviews were applied to triangulate the data methodologically (Janesick, 2015). The rationale for utilizing the qualitative paradigm was to investigate and clarify a problem or phenomenon (Creswell & Creswell, 2018), in this case through the basic qualitative study and interviews. The rationale for employing quantitative research was to “make valid and objective descriptions” of the phenomenon under study (Taylor, 2005, p. 91).

Participants

To carry out this study, thirty-two ELT testing and assessment experts, mainly full professors, associate professors, and several assistant professors in applied linguistics from different universities all over the world, were chosen via purposive sampling. Ninety-seven experts in language testing and assessment were invited, and thirty-two scholars, including nineteen native applied linguists and thirteen non-native experts, responded and took part. The scholars’ names were anonymized and coded (e.g., native speaker (NS#1), non-native speaker (NNS#2)) in this study. The Merriam-Webster dictionary defines a scholar as a “person who has done advanced study in a special field” (Merriam-Webster, n.d.). We followed Ericsson and Smith (1991) and Merriam-Webster (n.d.) in defining an expert or scholar and selected those who have achieved superior performance in a special domain, ELT testing and assessment in our case. Both the native and non-native participants had long experience of teaching and researching language testing and assessment, IC, and ICA (M = 42.30 and M = 31.58 years, respectively), with an h-index of beyond 30 in Google Scholar or 3 in SCOPUS. It is worth noting that in this study a native speaker is defined as “a person who has spoken a certain language since early childhood” (Cook, 2016, p. 182), which in the case of this study is English. The details regarding the participants’ background information are displayed in Table 1.

In order to avoid contaminating the data (i.e., to control extraneous variables) and to ensure there would be no bias with regard to gender, age, or nationality, the authors/researchers explained the purpose of the study to the participants, and the participants were free to share their beliefs about the basic qualitative research questions.

Table 1. Participants’ Demographics

                  Native speaker experts (self-claimed)         Non-native speaker experts
N                 19                                            13
Nationality       American 8; British 6; Australian 2;          Iranian 11; German 2
                  New Zealander 1; Canadian 2
Mean age          61.21                                         50.64
Gender            16 males; 3 females                           11 males; 2 females
Academic rank     18 full professors;                           5 full professors; 5 associate professors;
                  1 associate professor                         3 assistant professors

Data Collection Methods

Basic qualitative research. The electronic correspondence and interviews took place between October 3, 2017 and April 1, 2018. Basic qualitative research questions were emailed to the assessment and testing experts. There were three main basic qualitative research questions regarding the EFL context: 1. To what extent is ICA important to assessment experts? 2. Is IC a necessary component of language proficiency or not? 3. Does the recognition of WEs clash with native speakerism in language testing? The aforesaid questions were first sent for validation to eight testing and assessment experts (i.e., two full professors, two associate professors, and four assistant professors, selected through convenience sampling) at Mazandaran, Farhangian, Shahid Beheshti, and Azad universities in Iran. The questions were then modified to some extent and emailed to the experts in different parts of the world to attain the objectives of the study. Four drafts of the research questions were designed by the researchers; that is, the researchers revised the questions four times. The language of all drafts, including the main draft, was English. The reviewers worked independently; that is, the aforementioned eight testing and assessment experts commented on and revised the questions independently. Additionally, further explanations and examples were provided to the participants, either inside the basic qualitative study or in the email, to ensure they understood the research purpose.

Interviews. The researchers obtained another source of data to triangulate the data, namely semi-structured interviews. Ten native and non-native scholars from the same cohort, one native full professor, six non-native associate professors, and three non-native assistant professors, were interviewed through convenience sampling. The interview questions, which concerned the EFL context, were: 1. To what extent is it practical to include ICA in a language proficiency test? 2. Does the inclusion of ICA require us to redefine language proficiency? These questions were first validated by the same professors mentioned in the previous section.

Procedure and Data Analysis. The three open-ended basic qualitative research questions were sent through email, LinkedIn (a professional networking site) or its Android application, and the well-known academic site ResearchGate to ninety-seven experts in language testing and assessment, IC, and ICA. It is worth mentioning that some of the experts’ emails were not available; consequently, the researchers used the LinkedIn and ResearchGate websites. Furthermore, the semi-structured interviews based on the two questions above were conducted via Skype. Some further correspondence took place regarding the experts’ replies to the initial email, that is, elaborating thoroughly on the concepts of ICA and IC. The replies to the questions and the transcripts were analyzed thematically, and individual comments were coded and categorized as agreement, disagreement, or ambivalence separately by two raters to ensure inter-rater reliability.

We used thematic analysis, which is a part of the content analysis technique (Creswell & Creswell, 2018), to organize the responses in regard to the research question. It is worth noting that coding the data was done manually due to the small dataset, and it took six weeks to complete the coding separately with a co-coder within the research team. For instance, “ICA is requisite in the LPA” was coded as “Agreement” to show that the scholar was in favor of incorporating ICA in the LPA, “ICA is a separate construct” was coded as “Disagreement” to illustrate that the expert’s standpoint was at odds with the notion of incorporating ICA in the LPA, and “Well, I am not sure if it is necessary” was coded as “Ambivalence” to demonstrate that the scholar’s reply was equivocal.
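
As a purely illustrative sketch (the participant IDs and codes below are invented for the example, not the actual data), the following Python snippet shows how manually assigned codes of this kind could be tallied by group before the quantitative comparison:

```python
from collections import Counter

# Hypothetical coded comments: (participant ID, group, assigned code).
# The code labels follow the scheme described above.
coded_responses = [
    ("NS#10", "native", "Agreement"),
    ("NS#17", "native", "Disagreement"),
    ("NS#3", "native", "Ambivalence"),
    ("NNS#4", "non-native", "Agreement"),
    ("NNS#5", "non-native", "Ambivalence"),
]

# Tally the codes within each group (native vs. non-native).
tallies = {"native": Counter(), "non-native": Counter()}
for participant, group, code in coded_responses:
    tallies[group][code] += 1

# Report raw counts and within-group percentages, as in Table 2.
for group, counts in tallies.items():
    total = sum(counts.values())
    for code, n in counts.items():
        print(f"{group:11s} {code:13s} {n} ({100 * n / total:.1f}%)")
```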

Additionally, ten native and non-native scholars from the same cohort were interviewed. Each interview session with the experts lasted fifteen minutes. All interviews were recorded and transcribed verbatim. Then, the transcripts were coded and themes were generated. They were read, revised, and reviewed by a colleague acting as an inter-coder. The inter-coder reliability was 80%.

The replies and transcripts of the assessment and testing experts were analyzed thematically. Both qualitative and quantitative paradigms were employed in this study. Furthermore, two phases of data analysis were conducted for the research question: the first phase was a qualitative analysis and the second a quantitative analysis.

To calculate inter-analyzer reliability, an expert who is an assistant professor in TEFL reanalyzed 10% of the extracts (kappa coefficient = .92). Moreover, in order to address the research question within the quantitative paradigm, a Chi-square test along with cross tabulation was employed to explore whether there is any difference between native and non-native speaker assessment scholars’ beliefs regarding the role of ICA in the LPA.
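
For readers unfamiliar with these reliability figures, the minimal Python sketch below (with made-up labels rather than the actual coded extracts) shows how simple percent agreement and Cohen’s kappa are computed from two coders’ decisions; kappa corrects raw agreement for the agreement expected by chance.

```python
from collections import Counter

# Hypothetical codes assigned by the first analyst and the re-analyst
# to the same ten extracts (labels follow the coding scheme above).
coder_1 = ["Agreement", "Agreement", "Disagreement", "Ambivalence", "Agreement",
           "Agreement", "Disagreement", "Agreement", "Ambivalence", "Agreement"]
coder_2 = ["Agreement", "Agreement", "Disagreement", "Ambivalence", "Agreement",
           "Agreement", "Agreement", "Agreement", "Ambivalence", "Agreement"]

n = len(coder_1)

# Observed agreement: proportion of extracts both coders labelled identically.
p_o = sum(a == b for a, b in zip(coder_1, coder_2)) / n

# Chance agreement: product of each coder's marginal proportions, summed over codes.
c1, c2 = Counter(coder_1), Counter(coder_2)
categories = set(coder_1) | set(coder_2)
p_e = sum((c1[k] / n) * (c2[k] / n) for k in categories)

# Cohen's kappa = (observed - chance) / (1 - chance).
kappa = (p_o - p_e) / (1 - p_e)
print(f"Percent agreement: {p_o:.0%}, Cohen's kappa: {kappa:.2f}")
```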

Findings

In what follows, the key findings from the analysis of basic qualitative research and interview data will be presented. For the sake of clarity, native and non-native beliefs are reported here and categorized as agreed, disagreed, and ambivalent.

Is there any difference between native and non-native speaker assessment scholars’ attitudes towards incorporating ICA in the LPA?

Qualitative Analysis of Native Speaker Experts. Eight native-speaker experts agreed with the incorporation of ICA in the LPA. As some relevant pieces of evidence, NS#10 stated, “ICA is obligatory in the LPA.” She/He backed up her/his assertion by saying that “If a language learner does not know how to use that language to communicate with speakers from other cultures, then knowing a language is not very useful.”

NS#14 believed, “In language teaching contexts it (IC) has to be assessed in combination with linguistic competences.” It means “a degree of modification of language competence assessment.”

NS#1 paid meticulous attention to pragmatics and its fundamental role in assessment. He says that:

Since I am of the strong belief that pragmatics has a key role in any assessment program and in any assessment courses for that matter; I would certainly want students to deal with these issues. Having said that, I realize that most tests do not include assessment of pragmatics since it is considered difficult to do so.

Furthermore, six native speaker experts were ambivalent about the incorporation of an ICA component in the LPA.

NS#3 held two attitudes towards IC and language proficiency. He believed:

If one is learning a second language for the purpose of communicating with language across cultures, then, yes, it is difficult to avoid the need to display some intercultural competencies. But if you are a foreign language student somewhere in the world learning a foreign language at school for the purpose of passing a high-stakes examination which focuses on linguistic aspects of proficiency, then probably no.

NS#8 believed:

I know that it is somewhat problematic to operationalize the construct as a valid test due to designing practical and reliable means of assessing whether students can demonstrate IC skills, incorporating as much of the theoretical definition as possible. By the same token, the dynamic nature of this competence (including the interpersonal element) makes it difficult to operationalize adequately.

NS#9 did not “like to dichotomize the competence/performance distinction,” stating that “I do not believe IC as a component of language proficiency because I refute the framework of competence/performance.”

Moreover, five native speaker experts strongly disagreed with the incorporation of an ICA component in the LPA.

NS#18 was a zealous supporter of globalization and went into great detail to support her/his argument that the inclusion of IC in language proficiency “surely misses the effect of globalization.” NS#19, who disagreed profoundly with the incorporation of an ICA component in the LPA, believed that it is a kind of “fighting native speakerism”. NS#17 believed IC is a “separate construct. It may be related to language proficiency but it is not a component thereof.” NS#12 argued that the cultural competence of a language learner cannot be measured. However, he believed it is feasible to measure cultural knowledge, not cultural competence. He also contended that “it is a folly to believe that language proficiency and cultural proficiency can fit into the same scale.”

Qualitative Analysis of Non-native Speaker Experts.  Ten non-native speaker experts agreed with the incorporation of an ICA component in the LPA.

NNS#4 stated that “IC is an essential component of language proficiency which must be duly emphasized in language teaching and assessment.” NNS#6 also explained how IC can be used in language testing. He believed that:

If language testing is to become as communicative as possible, then its design and content should be in line with what has been taught. Therefore, IC can safely find its way in language testing. Even proficiency tests should contain aspects of intercultural competence. As a matter of fact, we can observe traces of it in many high-stakes language tests.

NNS#8 contended that “this [IC] will contribute to fairness as many people around the world are discriminated against (through standardized testing) by the examiners who lack this competence.” NNS#9 supported his claim by saying that “we need to argue for its inclusion in the foreign language education first before we can make a case for its assessment.”

None of the thirteen non-native scholars disagreed with the incorporation of an ICA component in the LPA.

However, one of the non-native experts, NNS#5, had an ambivalent standpoint, suggesting:

If our inferences are going to have zero or nothing to do with a particular target community, then assessing IC is somehow pointless. Consider the TOEIC test as an instance of measuring proficiency in the English Lingua Franca context. If the test-takers are going to use their language skills in communicating (in English) with others for whom English is the Lingua Franca (say, non-native speakers like Chinese, Russians, Brazilians, and the like,), then testing IC does not seem to be important.

Another non-native expert was also dubious about the inclusion of ICA in the LPA. He believed that:

IC can constitute a component of a theoretical model of proficiency but whether we include it in actual test content depends on the types of inferences we are going to make about test takers’ future language use in the target domains.

Furthermore, as already discussed, ten TEFL experts were interviewed.  The results of the interviews indicated that IC should be included in the teaching curriculum and its inclusion should be reflected in assessment. They contended that there must be an argument for its incorporation in foreign language education first before we can make a case for its assessment.

We can conclude that defining the concept is a must.  As another non-native expert commented “there is a need for IC at elementary, secondary, and even tertiary level [in the Iranian contexts]. The inclusion will be welcomed in different stages of education of English in Iran and assessment settings.” The other non-native speaker expert contended that “regarding the emergence of WEs and language testing, the inclusion of ICA requires us to redefine language proficiency.” The next non-native speaker expert indicated that “I believe it [ICA] is to some extent necessary and practical for LPA as it has its share in language proficiency.” Another non-native speaker expert believes “since IC is important, it can bring the recognition of WEs to fight native speakerism in language testing.”

The quantitative analysis of native and non-native experts. The Chi-square output, shown in Table 2, presents the cross tabulation of beliefs about whether there is any difference between native and non-native speaker assessment scholars’ beliefs regarding the role of ICA in the LPA. It is worth mentioning that the sample size was 32 and no cases were missing. As the table shows, there are three categories of opinions, that is, agreement (positive views), disagreement, and ambivalence. The cross-tabulation table includes raw counts along with their percentages. There were 8 native scholars (42.1%) who had positive attitudes, 5 native scholars (26.3%) who had a negative view, and 6 native scholars (31.6%) who were ambivalent. On the other hand, there were 10 non-native scholars (76.9%) who had a positive view and 3 non-native experts (23.1%) who were ambivalent about the role of ICA in the LPA.

Table 2. Different Opinions of Nativeness and Non-nativeness

Opinion          Native speaker experts    Non-native speaker experts    Total
Agreement        8 (42.1%)                 10 (76.9%)                    18 (56.3%)
Disagreement     5 (26.3%)                 0 (0.0%)                      5 (15.6%)
Ambivalence      6 (31.6%)                 3 (23.1%)                     9 (28.1%)
Total            19 (100%)                 13 (100%)                     32 (100%)

Table 3 displays the result of Cramér’s V, which is used to estimate the strength of association between two nominal variables. The table demonstrates that the result is not significant (p = .153); that is, there is no significant difference between native and non-native speaker assessment scholars’ beliefs regarding the role of ICA in the LPA.

Table 3. Difference between Nativeness and Non-nativeness

                     Value     Approximate significance
Cramér’s V           .343      .153
N of valid cases     32
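
The sketch below illustrates how a chi-square test and Cramér’s V, defined as V = sqrt(chi2 / (N x (min(rows, columns) - 1))), can be computed for a cross tabulation of this kind; it uses SciPy and NumPy and plugs in the counts from Table 2. The exact statistics it prints depend on the correction options applied by the software, so they may not exactly reproduce the published values in Table 3.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts from Table 2: rows = opinion (agreement, disagreement, ambivalence),
# columns = native vs. non-native speaker experts.
observed = np.array([
    [8, 10],   # agreement
    [5, 0],    # disagreement
    [6, 3],    # ambivalence
])

# Chi-square test of independence on the 3 x 2 cross tabulation.
chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)

# Cramér's V = sqrt(chi2 / (N * (min(rows, cols) - 1))).
n = observed.sum()
min_dim = min(observed.shape) - 1
cramers_v = np.sqrt(chi2 / (n * min_dim))

print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p_value:.3f}")
print(f"Cramér's V = {cramers_v:.3f}")
```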

Discussion and Conclusion

The present study is concerned with whether there is any difference between native and non-native speaker assessment scholars’ beliefs regarding the incorporation of ICA in the LPA.

The quotations given in the results section showed a complex picture, especially among “native speaker experts”. Although the difference between “native speaker experts” and “non-native speaker experts” is shown to be statistically non-significant, this area is worth further examination.

The results indicated that native and non-native assessment experts’ beliefs about incorporating ICA in the LPA were varied; that is, 42.1% of native speaker scholars were for the idea and maintained that ICA is necessary in the LPA as it plays a crucial role in communication with other cultures. Moreover, they believed IC is an integral part of CC and thus courses with proficiency goals should include it in teaching and assessment. Meanwhile, 31.6% of native speaker experts were indecisive about including ICA in the LPA; their attitudes towards ICA and the LPA can be classified into two standpoints:

The first standpoint deals with an individual who intends to learn a second language for the purpose of communicating with people across different cultures; consequently, it is plausible to view the inclusion of ICA in the LPA. Nevertheless, if the purpose is to succeed in high-stakes examinations, then ICA might not have an important role.

One of the native speaker experts in this study argued that ICA is necessary in the LPA because learners need to use the language effectively with interlocutors from other countries. This viewpoint may explicitly support Vermier et al. (2008) and Tomalin (2008) and complement Bachman’s (1990) model; that is, it seems to support the notion of Communicative Language Ability.

Another native speaker expert considers the concept of ICA through the window of pragmatics. S/he believes that pragmatics has a pivotal role in testing. This standpoint may be in line with Celce-Murcia et al.’s (1995) model, in which pragmatic competence is one of the components. Nonetheless, most tests do not include such a component.

Moreover, it seems that some participants are ambivalent about operationalizing the construct of ICA as a valid test. Designing a practical and reliable means of assessment that demonstrates students’ intercultural skills would be a difficult task. The dynamic nature of such a competence, which includes an interpersonal element, makes it demanding to operationalize sufficiently. This is in line with Jaeger’s (1999) point of view regarding the difficulty of conceptualizing and constructing ICA. Furthermore, it is difficult to operationally define ICA as there are many inherently abstract components involved, a difficulty compounded by the weakness of existing models of IC with respect to assessment (Borghetti, 2017).

On the contrary, five native speaker experts were against the inclusion of ICA in the LPA. They were concerned that the inclusion of IC in the LPA contradicts the idea of globalization. However, IC is congruent with the glocalization of English (Sharifian, 2010), which emphasizes thinking globally and acting locally.

Overcoming native-speakerism and being a separate construct were other criticisms leveled at incorporating ICA in the LPA by some native speaker scholars. In contrast, the development of language tests should be in accord with the actual IC needs of examinees (Clyne & Sharifian, 2008); in many EFL circles in which high-stakes examinations are administered, “native speakers” are largely absent. Thus, we argue that such tests should strive to evaluate IC skills rather than accentuating the “inner circle” Englishes.

As previously mentioned, the views of native and non-native experts varied: 76.9% of non-native speaker scholars favored the incorporation of an ICA component in the LPA. Yet, there was no significant difference between native speaker and non-native speaker scholars’ beliefs regarding the role of ICA in the LPA based on the quantitative analysis. The non-native speaker experts believed IC is an indispensable part of the LPA. They also added that if language testing is to become as communicative as possible, then its design and content should be in line with what has been taught. Their beliefs are theoretically and practically in line with those of Brown (2014), Clyne and Sharifian (2008), Chandran (2009), Gu (2016), Kazemian et al. (2021), Kazemian et al. (2022), Liddicoat (2002), Liddicoat and Scarino (2010), Mousavi (2018), Nushi et al. (2016), Rezaei and Naghibian (2018), and Scarino (2006).

The dispersion of perceptions regarding the inclusion of ICA in the LPA among native speaker scholars is greater than that among non-native speakers. In fact, most native speaker experts were cautious about integrating IC into the LPA, whereas non-native speakers unanimously agreed with such an integration. This is probably because native speaker scholars have internalized cultural components as part of their language acquisition process and, thus, may not feel the need to focus on IC in a test of language proficiency. Non-native speakers, on the other hand, have observed the kinds of problems that a lack of IC causes in establishing smooth communication and, accordingly, call for due measures in assessing IC.

WEs and ICA are interwoven; WEs has provided the cradle in which ICA has been growing. The growth of the English language has brought about different varieties of it; however, the concept of WEs is new to native teachers (Eslami et al., 2019). Likewise, they may not understand the concept of ICA either. This is in line with some native speaker experts’ ambivalent or disagreeing positions.

This research is innovative in that it reports the views of some native and non-native language assessment scholars about ICA. The ideas generated in this research may trigger further debate about incorporating ICA in the LPA. All in all, according to Brown (2014), language testing is a complex collection of standardized proficiency levels. Different types of tests, such as placement, diagnostic, progress, and achievement tests, are utilized in different global contexts. This study supports Young and Sachdew’s (2011) and Kazemian et al.’s (2022) studies, which indicated that IC was given scant attention in testing and even in curricula. ICA in foreign language education is an arduous yet noteworthy task (Sercu, 2004). Accordingly, new discourses will be created, and their consequences will gradually become apparent.

The present paper aimed to investigate whether there is any difference between native and non-native speaker assessment scholars’ beliefs regarding the inclusion of ICA in the LPA. Data analysis indicated that there were no significant differences between native and non-native experts’ attitudes towards whether ICA should be included in the LPA. In spite of some controversies between native and non-native speaking language assessment and testing scholars regarding the inclusion of ICA in the LPA, the results of the study showed that viewing the LPA from different angles seems to be imperative. By virtue of this, it is vital that the two perspectives of the inner and outer circles reconcile and exchange ideas so that they can gain a fresh outlook on ICA in the LPA.

Implications, Limitations, and Recommendations

Considering the controversies among applied linguists over the issue of ICA, the topic seems to be still young and evolving in language assessment research. Therefore, some implications can be drawn regarding the issue.

The implications are of two types. The first would be most relevant to theoreticians and ETS researchers, who may consider language proficiency from a new perspective. While some applied linguists, based on the findings of the study, have challenged the incorporation of ICA due to a lack of either validity or practicality, they need to further consider how it might be put into practice.

The second implication is for educational policy makers, who should consider ICA in the syllabi of language testing courses to generate new research on the interface between the LPA and IC.

Every study suffers from a number of limitations, either in its design or in the way it is conducted, which restrict its generalizability in one way or another. While ninety-seven emails and messages were sent to language testing and assessment scholars all over the world, only thirty-two scholars agreed to take part in this study.

Thus, this study might have yielded different interpretations and conclusions if more scholars had participated. This study can be replicated on a larger scale by prestigious testing and assessment organizations such as the British Council, Educational Testing Service (ETS), and International Development Program (IDP). However, this study, albeit small in scope, helps shed light on how applied linguists perceive the incorporation of ICA into the LPA in the EFL context.

About the authors

Mohammad Kazemian-Sana’ati is an Associate Editor of MEXTESOL Journal. Mohammad did his MA in applied linguistics at Islamic Azad University, Tonekabon Branch, Iran. He has taught English at different proficiency levels in the adults’ department of the Iran Language Institute (ILI), as well as ESP/EAP courses at the University of Guilan and Guilan University of Medical Sciences. His published papers have appeared in the Asia TEFL and MEXTESOL journals. His areas of research interest are intercultural studies, language testing and assessment, interfaces of SLA and language testing and assessment, and CALL. ORCID ID: 0000-0002-1517-1174

Mohammad Reza Khodareza (corresponding author) is an assistant professor of applied linguistics at Islamic Azad University, Tonekabon Branch, Iran. His recent publications have appeared in Asia TEFL Journal, Journal of Language Horizons, and Jordan Journal of Modern Languages and Literatures. His areas of research interests are intercultural perspectives, language testing and assessment, and pragmatics. ORCID ID: 0000-0003-1363-4718  

Fatemeh Khonamri is an assistant professor of applied linguistics at University of Mazandaran, Babolsar, Iran. Her recent publications have appeared in The Language Learning Journal, Asia TEFL Journal, MEXTESOL Journal, and Iranian Journal of Language Teaching Research. Her areas of research interests are applied linguistics, language testing and assessment, SLA theories, and intercultural studies. ORCID ID: 0000-0002-6833-5347

Ramin Rahimy is an assistant professor of applied linguistics at Islamic Azad University, Tonekabon Branch, Iran. His recent publications have appeared in Asia TEFL Journal, How Journal, and Journal of Psycholinguistic Research. His areas of research interests are intercultural studies, translation studies, and language testing and assessment. ORCID ID: 0000-0002-0859-7812

Acknowledgement

We wish to express our appreciation to the late Professor Farzad Sharifian, who encouraged us to conduct this study; may he rest in peace. Many thanks to the four anonymous reviewers and the editors for their very helpful feedback and the editors’ favorable decisions at each stage of the revision process.

To Cite this Article

Kazemian-Sana’ati, M., Khodareza, M. R., Khonamri, F., & Rahimy, R. (2023). ELT scholars’ attitudes towards inclusion of intercultural competence assessment in language proficiency tests. Teaching English as a Second Language Electronic Journal (TESL-EJ), 26(4). https://doi.org/10.55593/ej.26104a6

References

Abedi, J., & Gandara, P. (2006). Performance of English language learners as a subgroup in large-scale assessment: Interaction of research and policy. Educational Measurement: Issues and Practice, 25(4), 36-46. https://doi.org/10.1111/j.1745-3992.2006.00077.x

Ary, D., Jacobs, L. C., Sorensen, C. K., & Walker, D. A. (2014). Introduction to research in education (9th ed.). Wadsworth Cengage Learning.

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford University Press.

Bachman, L. (2010). Second language assessment. In P. Peterson, E. Baker, & B. McGaw (Eds.), The international encyclopedia of education (3rd ed., Vol. 4, pp. 140-147). Academic Press.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford University Press.

Bachman, L. F., & Palmer, A. S. (2010). Language assessment in practice: Developing language assessments and justifying their use in the real world. Oxford University Press.

Bennett, M. J. (2020). A constructivist approach to assessing intercultural communication competence. In G. Rings & S. Rasinger (Eds.), The Cambridge handbook of intercultural communication (pp. 521-535). Cambridge University Press.

Borghetti, C. (2017). Is there really a need for assessing intercultural competence? Some ethical issues. Journal of Intercultural Communication, 44, 1-15.

Brislin, R., Cushner, K., Cherrie, C., & Yong, M. (1996). Intercultural interactions: A practical guide. Sage.

Brown, J. D. (1995). The elements of language curriculum: A systematic approach to program development. Heinle & Heinle.

Brown, J. D. (2014). The future of world Englishes in language testing. Language Assessment Quarterly, 11(1), 5-26. https://doi.org/10.1080/15434303.2013.869817

Byram, M., & Morgan, C. (1994). Teaching-and-learning language-and-culture. Multilingual Matters.

Byram, M. (1997). Teaching and assessing intercultural communicative competence. Multilingual Matters.

Cadd, M. (1994). An attempt to reduce ethnocentrism in the foreign language classroom. Foreign Language Annals, 27(2), 143-160. https://doi.org/10.1111/j.1944-9720.1994.tb01198.x

Canagarajah, S. (2006). Changing communicative needs, revised assessment objectives: Testing English as an international language. Language Assessment Quarterly: An International Journal, 3(3), 229-242. https://doi.org/10.1207/s15434311laq0303_1

Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1-47.

Celce-Murcia, M., Dörnyei, Z., & Thurrell, S. (1995). Communicative competence: A pedagogically motivated model with content specifications. Issues in Applied Linguistics, 6(2), 5-35.

Chinaka, S.D. (2010). An introduction to multicultural education: From theory to practice. Rowman & Littlefield.

Chitravelu, N. (2007). Multilingualism in Southeast Asia: A tentative research agenda. In English in Southeast Asia: Literacies and literatures (pp. 224-245). Cambridge Scholars Publishing.

Clyne, M., & Sharifian, F. (2008). English as an international language: Challenges and possibilities. Australian Review of Applied Linguistics, International Forum on English as an International Language, 31(3), 1-28. https://doi.org/10.2104/aral0828

Cohen, A. D., Rahmati, T., & Sadeghi, K. (2023). Test-taking strategies in technology-assisted language assessment. In K. Sadeghi & D. Douglas (Eds.), Fundamental considerations in technology mediated language assessment. Routledge.

Cook, V. (2016). Second language learning and language teaching. Routledge.

Council of Europe (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge University Press.

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.

Cummins, J. (1984). Bilingualism and special education: Issues in assessment and pedagogy (Vol. 6). Multilingual Matters.

Damen, L., & Savignon, S. J. (1987). Culture learning: The fifth dimension in the language classroom. Addison-Wesley Publishing Company.

Davies, R. B. (1977). Testing the hypothesis that a point process is Poisson. Advances in Applied Probability, 9, 724-746.

Duff, P. A. (2014). Communicative language teaching. In D. M. Brinton, M. Celce-Murcia, & M. A. Snow (Eds.), Teaching English as a second or foreign language (pp. 15-30). Heinle Cengage Learning.

Elder, C., & Davies, A. (2006). Assessing English as a lingua franca. Annual Review of Applied Linguistics, 26, 282-304.

Ericsson, K. A., & Smith, J. (1991). Prospects and limits of the empirical study of expertise: An introduction. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 1-38). Cambridge University Press.

Eslami, Z., Moody, S., & Pashmforoosh, R. (2019). Educating pre-service teachers about World Englishes: Instructional activities and teachers’ perceptions. TESL-EJ, 22(4), 1-17. http://tesl-ej.org/wordpress/volume22/ej88/ej88a9/

Fantini, A. E. (2009). Assessing intercultural competence. In The SAGE handbook of intercultural competence (pp. 456-476). Sage.

Farhady, H. (2011, October). Language proficiency assessment: Localizing global issues. Paper presented at the First International TESOL Persia Conference, Tehran, Iran.

Fulcher, G., & Davidson, F. (2007). Language testing and assessment. Routledge.

Gu, X. (2016). Assessment of intercultural communicative competence in FL education: A survey on EFL teachers’ perception and practice in China. Language and Intercultural Communication, 16(2), 254-273. https://doi.org/10.1080/14708477.2015.1083575

Jager, K. (1999). Intercultural competence and the problem of assessment. In T. Vestergaard (Ed.), Language, culture and identity (pp. 71-81). Aalborg Universitetsforlag.

Janesick, V. J. (2015). “Stretching” exercises for qualitative researchers (4th ed.). Sage.

Kachru, B. B. (1986). The alchemy of English: The spread, functions and models of non-native English. Pergamon.

Kazemian, M., Khodareza, M. R., Khonamri, F., & Rahimy, R. (2021). Instruction on intercultural communicative competence and its application by Iranian EFL male and female writers. Education and Self-Development Journal, 16(1), 21-39. https://eandsdjournal.kpfu.ru/en/journal-article/instruction-on-intercultural-communicative-competence-and-its-application-by-iranian-efl-male-and-female-writers/

Kazemian, M., Khodareza, M. R., Khonamri, F., & Rahimy, R. (2022). Problematizing language testing and assessment syllabi through intercultural competence assessment perspective in an EFL context. Asia TEFL Journal, 19(1), 309-316. https://doi.org/10.18823/asiatefl.2022.19.1.23.309

Lantolf, J. P., & Frawley, W. (1992). Rejecting the OPI-again: A response to Hagen. ADFL Bulletin, 23(2), 34-37.

Mažeikienė, N., & Virgailaitė-Mečkauskaitė, E. (2007). The experience of measurement and assessment of intercultural competence in education. Social Sciences, 4(58), 70–82. https://hdl.handle.net/20.500.12259/37634

McNamara, T. (2000). Language testing. McGraw-Hill.

Merriam, S. B. (2002). Introduction to qualitative research. Qualitative research in practice: Examples for discussion and analysis, 1(1), 1-17.

Motallebzadeh, K., & Baghaee Moghaddam, P. (2011). Models of language proficiency: A reflection on the construct of language ability. Iranian Journal of Language Testing, 1(1), 42-48.

Mousavi, M. (2018). Exploring Iranian EFL teachers’ perceptions of intercultural competence and high school textbook’s potential for the development of intercultural competence. Unpublished master’s thesis, University of Mazandaran, Babolsar, Mazandaran, Iran.

National Standards in Foreign Language Education Project (1996). Standards for foreign language learning: Preparing for the 21st century (SFFLL). Allen Press.

Nushi, M., Abolhassani, Z., & Mojerloo, N. (2016). An interview with Professor Farzad Sharifian. Asian-Pacific Journal of Second and Foreign Language Education, 1(1), 1-10. https://doi.org/10.1186/s40862-016-0011-x

Oxford, R. (2001). Integrated skills in the ESL/EFL classroom. ESL Magazine, 6(1), 5-11.

Phillipson, R. (1992). Linguistic imperialism. Oxford University Press.

Rea-Dickins, P. (2004). Understanding teachers as agents of assessment. Language Testing, 21(3), 249-258. https://doi.org/10.1191/0265532204lt283ed

Rezaei, S., & Naghibian, M. (2018). Developing intercultural communicative competence through short stories: A qualitative inquiry. Iranian Journal of Language Teaching Research, 6(2), 77-96. https://doi.org/10.30466/IJLTR.2018.120561

Robinson, G. L. (1992). Second culture acquisition. Georgetown University Round Table on Languages and Linguistics (GURT) 1991: Linguistics and Language Pedagogy: The State of the Art, 114.

Scarino, A. (2017). Culture and language assessment. In E. Shohamy, I. Or, & S. May (Eds.), Language testing and assessment, Encyclopedia of Language and Education. https://doi.org/10.1007/978-3-319-02261-1_3

Schmuckler, M. A. (2001). What is ecological validity? A dimensional analysis. Infancy, 2(4), 419-436. https://doi.org/10.1207/S15327078IN0204_02

Scholar. (2022). In Merriam-Webster’s online dictionary. Retrieved from http://www.merriam-webster.com/dictionary/scholar

Schulz, R. A. (2007). The challenge of assessing cultural understanding in the context of foreign language instruction. Foreign Language Annals, 40(1), 9-26. https://doi.org/10.1111/j.1944-9720.2007.tb02851.x

Sercu, L. (2004). Assessing intercultural competence: A framework for systematic test development in foreign language education and beyond. Intercultural Education, 15(1), 73-89. https://doi.org/10.1080/1467598042000190004

Sharifian, F. (2010). Glocalization of English in World Englishes: An emerging variety among Persian speakers of English. In M. Saxena & T. Omoniyi (Eds.), Contending with globalization in world Englishes (pp. 137-158). Multilingual Matters.

Taylor, G. R. (2005). Quantitative research. In G. R. Taylor (Ed), Integrating quantitative and qualitative methods in research (pp. 91-100). Lanham University Press.

Tomalin, B. (n.d.). Culture - the fifth language skill. https://www.teachingenglish.org.uk/article/culture-fifth-language-skill

Tran, T. Q., & Seepho, S. (2016). An intercultural communicative competence model for EFL learners. In The 4th TESOL Conference Proceedings: Teaching methodologies and learning outcomes in Ho Chi Minh City (pp. 27-74). Publishing House of Economics.

Vernier, S., Barbuzza, S., Del Giusti, S., & del Moral, G. (2008). The five language skills in the EFL classroom. Nueva Revista de Lenguas Extranjeras, 10, 263-291.

Young, T. J., & Sachdev, I. (2011). Intercultural communicative competence: Exploring English language teachers’ beliefs and practices. Language Awareness, 20(2), 81-98. https://doi.org/10.1080/09658416.2010.540328

Appendix

Dear Professor,

I would like to invite you to participate in research on Intercultural Competence Assessment (ICA), beliefs about ICA, and whether we need to include ICA in language proficiency tests by redefining language proficiency assessment. I would appreciate it if you would let me know, through a short explanation, whether the inclusion of ICA requires us to redefine language proficiency. Examples are provided below to clarify the task. Your participation in this study is voluntary, and if you decide that you would prefer not to participate, I respect that choice, whatever the reason may be. However, I would humbly encourage you to take part, as the outcome will benefit applied linguistics in general and language assessment in particular. All information provided will be used for research purposes only, and we hope to publish the results in academic journals. If you would like a report of the results before publication, you may contact the researcher via email: m_kazemiansanati@yahoo.com. Thank you very much in advance for your time and support.

Examples:

  1. Please write a narrative of about 100 words explaining to what extent you find intercultural competence assessment important. Also, please write about the recognition of World Englishes and whether it clashes with native speakerism in language testing. You may include any other ideas you believe are relevant to the issue.

Here are some sample responses from Iranian applied linguists.

Sample 1:

To me, intercultural competence is important. Also, the recognition of world Englishes, to fight native speakerism in language testing. This will contribute to fairness as many people around the world are discriminated against (through standardized testing) by the examiners who lack this competence.

Sample 2:

Intercultural Competence Assessment is a significant component of pragmatic competence which is often neglected due to practicality issues. The construct of pragmatics assessment and intercultural assessment is underrepresented. We often design pragmatics tests based on speech act theory and politeness theory and do not consider intercultural and interactional competence. Cross-cultural differences are often overlooked in our teaching and testing.

  2. Tomalin (2008) considers culture a “fifth language skill”. The concepts of English as an International Language (EIL) and the World Englishes (WEs) paradigm require us to conceptualize the notion of language proficiency in a new or different way. To this end, it seems that language proficiency assessment should be redefined (Nushi et al., 2016). Please write another narrative of about 100 words on whether or not intercultural competence (IC) is a necessary component of language proficiency. If the answer is yes, do we need to redefine language proficiency to include intercultural competence?

Sample 1:

Yes. I believe intercultural competence is an essential component of language proficiency which must be duly emphasized in language teaching and assessment. We have to raise learners’ intercultural awareness by means of role plays, cross-cultural comparisons, discussions, etc. Part of the problem in language learning is not because of lack of language knowledge; rather, it is the cross-cultural differences that result in communication breakdown.

Sample 2:

Well, I am not sure if it is necessary, but it can play a central role in communication. Whether it can be implemented in our syllabus requires considering many factors, including infrastructure, feasibility, cost-effectiveness, a well-defined blueprint of its components, and an operational definition.

Copyright of articles rests with the authors. Please cite TESL-EJ appropriately.
Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.
