The Electronic Journal for English as a Second Language

Generative AI in English Language Teaching: Students’ Voices, Teachers’ Reactions, and Needs

* * * On the Internet * * *

November 2025 — Volume 29, Number 3

https://doi.org/10.55593/ej.29115int3

Rhian Webb
University of South Wales, UK
<rhian.webb@southwales.ac.uk>

Ferah Şenaydın
Ege University, Turkey
<ferah.senaydin@ege.edu.tr>

Abstract

Due to the rapid emergence and use of generative artificial intelligence (GenAI) by English as a foreign language (EFL) students in higher education (HE), further research is required to understand English language teaching (ELT) teachers’ training needs to effectively manage digitally enhanced teaching and learning. This study identifies teachers’ needs by investigating Turkish ELT teachers’ reactions to their students’ self-reported GenAI usage. Our transcendental phenomenological research design ensured minimal author bias in the thematic analysis of qualitative interview data from 21 Turkish undergraduate EFL students (B1-C1 level) and six Turkish ELT teachers. Analysis revealed that students used ChatGPT (version 3.5) as a human collaborator to build content, clarify tasks, act as a critical friend, organise ideas, enhance language, and obtain feedback, which they found motivating. However, teachers’ reactions to their students’ usage were inconsistent and exposed a need for unified teacher identity development that is shaped by GenAI literacy training and supported by institutional policies addressing GenAI integration into curriculum design and assessment practices.

Keywords: ChatGPT, English Language Teaching (ELT), Generative Artificial Intelligence (GenAI), GenAI literacy, Higher Education (HE)

Teachers need to adapt to the rapidly unfolding artificial intelligence revolution (Baydemir, 2025). The 2022 launch of ChatGPT, a generative AI (GenAI) tool that uses large language models (LLMs) to generate text, images, and music from prompts, gained global interest for its capacity to perform tasks traditionally requiring human intelligence (Kanbach et al., 2024). Prominent GenAI tools include ChatGPT, Gemini, Copilot, and the emerging Chinese competitor DeepSeek-V2. This study focuses on EFL students’ use of ChatGPT for their studies because it was the GenAI tool used by the Turkish participants.

Within EFL, researchers have examined the opportunities and challenges of GenAI tools for teaching and learning. This paper contributes to that discourse by offering an emic perspective grounded in live data from students and teachers. The ethical and effective use of GenAI by EFL students in HE, and how educators accommodate this technology in pedagogy and assessment, need further exploration. While GenAI integration has been linked to improved skills acquisition and increased student engagement (Yuan & Liu, 2025), successful implementation depends on educators’ understanding, preparedness, and attitudes toward using it, because GenAI is now considered essential to shaping future educational practices (Almanea, 2024).

This research has used an authentic two-stage design. Student interviews, which explored why and how students used GenAI for their English learning, were thematically analysed and presented as prompts to teachers to elicit authentic spoken responses. The reactions were thematically analysed to identify what teachers need to guide students in ethical and effective GenAI practices.

Research has demonstrated that digital competence, the ability to use technology in context (Rizza, 2014), is now fundamental for EFL teachers. So too is AI literacy (Long & Magerko, 2020), which enables teachers to critically evaluate, communicate, and collaborate with AI and to demonstrate ethical awareness (Ng et al., 2021), alongside critical GenAI literacy, defined as an ‘active awareness of affordances and limitations of AI’ (Mills et al., 2022). The study highlights students’ GenAI practices, teachers’ perceptions, and educators’ digital literacy levels to inform strategies for GenAI accommodation in EFL pedagogy.

Literature Review

Students’ GenAI usage

Research has indicated that EFL students hold positive attitudes toward GenAI tools such as ChatGPT for several reasons. Firstly, it enables students from different backgrounds and levels of English proficiency to access and use high-quality language (Yuliani et al., 2024). Secondly, it can accommodate students’ diverse needs by personalizing responses from prompts (Nishant et al., 2020). Finally, it can create an ideal learning environment where students feel satisfied and listened to, which leads to them being extrinsically motivated to get results and intrinsically motivated by the joy of learning (Yuliani et al., 2024). Relatedly, Yin et al. (2024) explored GenAI’s impact on students’ attention and motivation. They identified improvements when teachers used GenAI interactive platforms to create participatory lessons, which included opinion polls and surveys, because the students could freely engage in discussions, ask questions, and state opinions. Wei (2023) stated that GenAI’s positive impact on motivation would change students’ study patterns and academic achievements, which is supported by Yamaoka (2024), who mentioned GenAI’s contribution towards the decrease in students’ study anxieties.

Empirical studies have found that GenAI improves EFL students’ key language skills (Kushmar et al., 2022). Fathi et al. (2024) observed that interactive speaking activities enhanced students’ willingness to communicate because they gained confidence and enjoyed risk-free chatbot interactions. Listening skill development depends on focused attention and accurate interpretation to process information from spoken language (Newton & Nation, 2020), which is difficult for students to access in non-native English-speaking countries. However, Cheng et al. (2020) found that GenAI’s Google Assistant promoted enjoyment of listening because it provided authentic, flexible, edutainment-style, game-based learning that encouraged peer collaboration. Improvements in reading development have been identified when students received personal and detailed performance feedback from a chatbot, because they could track their own performance (Jose & Jayaron Jose, 2024; Pan et al., 2024; Selwyn, 2024). The GenAI Wordtune writing application improved students’ writing at both lexical and syntactic levels (Al Mahmud, 2023), while Huang and Teng’s (2025) research found that ChatGPT feedback improved students’ self-efficacy, motivation, and engagement in writing because they learned to notice and rectify their grammar errors and to incorporate ChatGPT’s constructive feedback. In addition, Huang and Teng (2025) discovered that ChatGPT feedback had a greater impact on students’ affective and behavioural engagement than peer feedback did, probably because of its immediacy of response. However, further research by Huang and Teng (2025) has indicated that students required metacognitive skills, critical thinking skills, and cognitive effort to use ChatGPT effectively for writing assistance. Relatedly, Cong-Lem et al. (2025) found that Vietnamese EFL students’ critical thinking skills improved following short workshops that included multiple scaffolded activities.

While the findings demonstrate GenAI’s advantages for student development, the next section examines teachers’ perceptions of students’ usage, which need consideration when integrating GenAI into EFL study.

Teachers’ perceptions of students’ GenAI usage

Despite GenAI’s demonstrated benefits to student learning, teachers’ views on student use remain varied and contradictory, a situation exacerbated by the lack of institutional policies and guidance. For example, McGrath et al.’s (2024) online survey found that teachers urged their universities to provide students with GenAI resources whilst they themselves remained undecided about its role in assessed work. Relatedly, Delello et al. (2023) found that American students valued GenAI learning but believed a lack of university policy prevented their teachers from embracing their usage. Meanwhile, Almanea (2024) has indicated that a lack of guidance about appropriate GenAI use can create grey areas in which students may engage in unethical GenAI use despite displaying ethical awareness. The position is exemplified in the following studies.

In America, Jowarder’s (2023) students appreciated GenAI’s perceived usefulness for understanding difficult subjects and finding study resources, and its ease of use, but lacked awareness about academic dishonesty, plagiarism, and data privacy. Črček and Patekar’s (2023) survey of 201 Croatian students reported that ChatGPT was used for several writing processes, including idea generation, paraphrasing, summarizing, proofreading, and drafting written assignments. However, around 50% of the students were later involved in academic misconduct cases because of their GenAI use, despite being aware of their institutions’ ethical violation policies. Additional studies have reported students’ limited understanding of GenAI’s limitations and bias, which contributes to problems with ethics (Hockly, 2023).

Cheng et al. (2020) have reported that some educators believe GenAI oversimplifies processes and can lead to students underestimating academic tasks, which leaves them unable to discuss or present what they have produced. Ngo’s (2023) research in Vietnam highlighted that students considered ChatGPT beneficial because it offered ideas, personal tutoring, and feedback for writing. However, GenAI’s inability to assess the quality and reliability of sources, cite sources, and replace words accurately was recognised as an area where teacher intervention was required. Almanea (2024) has found that despite students’ positive attitudes towards GenAI use, their actual use is shaped by their teachers’ expectations and guidance about acceptable and unacceptable use. Črček and Patekar’s (2023) Croatian study found that students felt uncertain about their teachers’ expectations of ChatGPT use because some forbade it and others discouraged it, which may be due to a lack of GenAI technological knowledge (Luckin et al., 2022; McGrath et al., 2024).

The findings have demonstrated that teachers are uncertain how to guide students who are eager to engage with GenAI. The next section examines teachers’ GenAI knowledge, which offers insights into why their views are contradictory and how enhanced digital literacy might reconcile divergent attitudes toward students’ GenAI use.

Teachers’ GenAI knowledge

The rapid emergence of GenAI has left many teachers feeling insecure about their digital abilities to integrate it into their teaching and to guide students’ usage (Kohnke et al., 2024; Nyaaba, 2024; Prather et al., 2024). Teachers’ AI-technological pedagogical content knowledge (AI-TPACK) is a significant predictor of this variation (Yue et al., 2024) because studies have demonstrated that in general, university teachers have limited GenAI technological knowledge (Luckin et al., 2022; McGrath et al., 2024).

Edmett et al.’s (2023) global study with 1,348 teachers has revealed that EFL teachers have received inadequate GenAI training to integrate it into teaching. An et al.’s (2023) study with 470 EFL teachers has suggested that teachers’ AI-TPACK levels can predict their perceptions and behavioural intentions towards GenAI use in teaching. Yue et al. (2024) have argued that low GenAI literacy levels lead to avoidance, whereas higher proficiency levels engender positive attitudes toward its pedagogical value.

Teachers have called for training and institutional support to understand, accommodate, manage, teach, and assess their students’ GenAI usage (Choi et al., 2023). Early advocates urged a shift from traditional content delivery to facilitating critical thinking through digital literacy (Dillenbourg, 2016; Luckin, 2018). More recently, Toncelli and Kostka (2024) have documented teachers’ excitement, optimism, and curiosity about GenAI alongside a sense of loss for established practices. They stated that, as a critical first step, teachers needed support to incorporate GenAI into teaching, which would build communities of supportive teachers able to teach in unknown territory. Similarly, Nazim and Alzubi (2025) have identified that constraints in technological competence and GenAI literacy encroached on professional autonomy. Conversely, Wu and Miller’s (2025) research in China, comparing one technologically proactive teacher with one technologically reluctant teacher, has highlighted the impact that institutional guidance, peer collaboration, and individual beliefs can have on enabling successful GenAI integration into teaching practices.

ELT literature has presented research about students’ GenAI usage, which is positive in relation to learning development, and about teachers’ perceptions, which are varied and contradictory, partly due to a lack of GenAI literacy. However, there is limited understanding of how teachers react to their students’ self-reported GenAI usage for English language learning, and therefore of what knowledge teachers need to guide and help students. This study addresses the gap in the literature through two research questions (RQs), which are:

RQ1: How do students use GenAI for English language learning, and how do teachers react to their students’ GenAI usage?

RQ2: What do teachers need to accommodate students’ GenAI usage in English language learning?

Methodology

Research Design

We used Moustakas’ (1994) transcendental phenomenological design, which encourages researchers to eliminate preconceived ideas about the phenomenon and examine data with a fresh perspective. We relied on the teacher participants to convey the complexity of the phenomenon, which was to identify teachers’ needs in accommodating students’ GenAI usage in ELT. The phenomenon required self-reported data from students about their GenAI usage. Therefore, a two-tier interview process was undertaken, which first collected data from students for use in the second interviews with the teachers. This two-tier approach minimised researcher bias and enabled teacher insights to be grounded in authentic student data.

Participants

Two participant cohorts from a Turkish university contributed. Group 1 consisted of 21 EFL students: eight were B1-level foundation students, and 13 were B2–C1 first-year English Language and Literature undergraduates. To create the sample, we emailed 50 eligible students details about the study’s aims, procedures, and ethics, and 21 students volunteered to participate. Group 2 included six ELT and English Language and Literature teachers from the same university. All teachers were proficient L2 English speakers with over ten years of teaching experience. They agreed to undertake face-to-face interviews during their regular meetings on students’ GenAI use.

Table 1. Profile of participating teachers

Teacher | Level of qualification | Years of experience | Professional background
T1 | BA | 15-20 | English Language & Literature
T2 | MA | 10-15 | English Language & Literature
T3 | BA | 20-25 | ELT
T4 | PhD | 10-15 | ELT
T5 | BA | 15-20 | English Language & Literature
T6 | PhD | 10-15 | ELT

Data collection process

Student data were collected by the first author (UK-based and unaffiliated) in one-hour, semi-structured online interviews on Microsoft TEAMS, which was used for transcription purposes. Students were given a random identity number, which ensured anonymity for subsequent data analysis, as the second author was based at the Turkish university. Interview prompts explored when, how, and why the students used GenAI for their English studies. The author practiced verbal and non-verbal immediacy, which aimed to break down psychological barriers and encourage students to speak honestly. The interviews created 105,000 words of transcript.

Teacher data were collected during six face-to-face, semi-structured interviews (TEAMS-recorded) with the first author when she visited the Turkish university. The conversations were guided by a PowerPoint presentation, which presented students’ thematically analysed statements to elicit teachers’ responses. The interviews created 48,000 words of transcript.

Data analysis

Data analysis followed Moustakas’ (1994) transcendental phenomenological framework. To initiate the research, we defined the phenomenon, which was EFL teachers’ needs in managing students’ GenAI use, and set aside individual GenAI experiences to minimise bias. Following this, we undertook six stages of analysis. Stage one collected student data to ensure the teachers worked with real data (described above in the data collection process). For stage two, we undertook manual qualitative thematic analysis (Braun & Clarke, 2006) to identify themes in the students’ data. For stage three, we created a PowerPoint presentation of the student responses to stimulate teachers’ responses during their interviews. For stage four, we examined the teachers’ TEAMS interview transcripts to determine ‘what’ the teachers experienced. We noticed a high volume of critical expressions through the recurrence of certain words: ‘need’ (n=83), conditional ‘if’ clauses (n=150) for conditional acceptance, and the modal verbs should, must, and had better (n=178) for criticism. These word frequencies informed the thematic analysis in stage five, which created a description of ‘how’ their responses impacted on them. Finally, stage six combined the teachers’ ‘what’ and ‘how’ to describe the phenomenon.

Table 2. Procedure undertaken for data analysis, adapted from Moustakas’ (1994) transcendental phenomenological framework.

Stage | Aim | Description of process
1 | Describe the experience that the teachers need to deal with, which is the phenomenon under study | Student data are collected, without the researchers’ perspective
2 | Develop a list of codes from students | Students’ experiences are identified, and a non-repetitive list of statements is created to represent different perspectives
3 | Group the student data into broad themes | Themes are created for meaningful interpretation
4 | Create a description of ‘what’ the participants (teachers) experience with the phenomenon | Undertake a textural description of the teachers’ experiences with the students’ data
5 | Draft a description of ‘how’ the experience happens | Implement a structural description of the setting, context, and conditions
6 | Write a description of the phenomenon | Integrate ‘what’ (textural) and ‘how’ (structural) the experience happened
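The stage-four frequency counts (occurrences of ‘need’, conditional ‘if’ clauses, and criticism-marking modals) can be illustrated with a short script. This is a hypothetical sketch of how such counts might be produced from plain-text transcripts, not the authors’ actual manual procedure; the function name and marker list are our own, and counting every ‘if’ token is a simplification, since identifying true conditional clauses would require parsing.

```python
import re
from collections import Counter

def count_markers(transcript: str) -> dict:
    """Count evaluative markers in an interview transcript.

    Returns counts for 'need', 'if', and the modals reported in
    stage four (should, must, had better).
    """
    text = transcript.lower()
    tokens = Counter(re.findall(r"[a-z']+", text))
    # 'had better' spans two words, so match it on the raw lowercased text.
    had_better = len(re.findall(r"\bhad better\b", text))
    return {
        "need": tokens["need"],
        "if": tokens["if"],
        "modals": tokens["should"] + tokens["must"] + had_better,
    }

sample = "If students must use AI, they need guidance; teachers should help."
print(count_markers(sample))  # → {'need': 1, 'if': 1, 'modals': 2}
```

In practice the same function would be run over each teacher’s full transcript and the results summed across the six interviews.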

Ethics

Ethical approval was secured from the UK university, which acted as gatekeeper. Formal permission for data collection was also granted by the Turkish university, which ensured transparency. The study conformed to British Educational Research Association (BERA, 1992) guidelines. Therefore, participation was voluntary, and informed consent, which outlined the study’s aims, intentions, and ethics, was obtained from all the participants (students and teachers) via email. To ensure confidentiality, the first author gave all participants numeric identifiers during their TEAMS interviews and anonymized all transcripts. Prior to analysis, transcripts were returned to participants for checking or withdrawal, and upon approval to proceed, the audio recordings were destroyed.

Findings

In the following sections, quotations include AI to mean GenAI, and ChatGPT refers to the GenAI tool that students use. Participant codes such as S12 and T1 refer to student-12 and teacher-1, respectively. The report on RQ1 presents students’ descriptions of their GenAI use along with teachers’ corresponding responses. This is followed by RQ2, which details teachers’ statements about their needs for accommodating their students’ GenAI use.

RQ1. Students’ self-reported GenAI use and teachers’ responses

All 21 students reported using ChatGPT (v3.5) in their studies. Seventeen endorsed its benefits, “if we are improving ourselves, it doesn’t matter how… it’s good to have AI in our world” (S15).  Six framed GenAI as an inevitable and rightful resource, for example, “AI cannot be banned… it will be used” (S12). Four, however, admitted ethical unease: “AI is scary… I feel bad using it because it feels like I am cheating” (S14).

Teachers responded with notable openness. Their initial reactions reflected a consensus that GenAI should be integrated as a routine educational tool, with emphasis on guided use and transparent disclosure. One observed that GenAI use reflects contemporary digital progression, “this generation is in this technology. Maybe we should consider their usage as a normal part of their digital progression” (T1). Another rejected outright bans, arguing that “we shouldn’t ban AI. It’s here, we need to adapt to using it correctly…” (T4). A third stressed the importance of transparency, suggesting that “students should be comfortable using AI and be able to say that they use it” (T3).

Students’ use of ChatGPT as a human collaborator and teachers’ reactions

Students framed ChatGPT as a human collaborator, or as an insightful human to work with. GenAI’s ability to adapt communication to suit individual needs, drawing on a vast reservoir of knowledge in a fast-paced and productive way, helped students achieve their academic goals. They used it to build content, clarify tasks, act as a critical friend, organise ideas, enhance language, and obtain feedback, which they found motivating. An elaboration on each use, together with teachers’ reactions, follows.

To build content. Students praised ChatGPT’s speed because it can identify key themes very quickly and provide ideas and insights that are helpful. For example, S3 explained, “First, I try to write my own essay, then I send it to AI to give me insights,” while S1 noted, “I use ChatGPT to find ideas for the introduction… I consider them before I do research.”

However, teachers voiced mixed pedagogical and ethical concerns, which highlighted an absence of consensus and exemplified the need for clear institutional guidelines on GenAI integration in ELT. One teacher cautioned that “if you give AI the whole instruction, it won’t be the students’ work,” although she acknowledged its usefulness for proofreading, “for comments on grammar and ideas, maybe it’s a good use” (T1). Another warned that while GenAI can supply preliminary concepts, “to put these all together should be the students’ effort, not ChatGPT’s” (T2). A third agreed that GenAI should “enhance learning but students must avoid over-reliance” (T5). In contrast, a fourth reflected on her own practice, “students are getting ideas thrown at them… as a teacher, I do it as well… why shouldn’t students do it?” (T4).

To clarify the tasks. Fourteen students used ChatGPT to break down assignment prompts into simple steps with relevant examples that help them understand the meaning more clearly. S14 explained, “I use AI when I can’t come up with anything on my own,” while S16 compared it to a substitute teacher: “I put the teacher’s instruction into ChatGPT, and it gives me some examples… I get better marks.”

Teachers’ responses, although varied, all reflected pedagogical concerns, which highlighted the need for guidance to implement GenAI without replacing student–teacher interaction or critical engagement. T5 worried that GenAI negatively impacted student–teacher interaction, “instead of asking the teacher, they use AI,” but also acknowledged that “AI may help weaker students to prepare and understand better.” By contrast, T6 reflected on workload implications for teachers and observed that “if students are not getting any replies from their teachers…then they would use clarification tools like AI.” T2 raised a deeper issue of critical engagement, questioning students’ ability to evaluate GenAI output and cautioning that “if you are open to getting these ideas, AI is inspirational. But if you just get what is offered, it is not inspirational at all.”

As a critical friend. Seven students used ChatGPT as a critical friend to gain insights from a perspective that complemented their work. ChatGPT can evaluate the relevance and depth of writing and identify strengths and weaknesses, which helps students. S14 explained, “I use AI when I want to see things from a different light. I get ideas,” while S20 commented, “AI gives me new ideas from different perspectives; it collects information from different parts of the world.”

Teachers raised ethical concerns regarding the reliability and bias of GenAI feedback, which highlighted the role of critical thinking skills to distinguish between fact and misinformation. T4 warned that “AI gives us different perspectives… but we are not sure how biased the information is… for proper research, AI won’t be enough… they need critical thinking skills to verify if the fact is true or false.” T3 had the same opinion, “using AI to challenge biases and get different viewpoints is valuable, although there might be biases in AI itself,” and T2 cautioned that “AI gives you so many little ideas… to create a bigger piece. We need to double-check the information.”

To organise ideas. Students used ChatGPT to structure and organise their writing, which capitalised on GenAI’s ability to craft clear paragraph frameworks with a clear main idea, supporting details, and logical transitions between paragraphs. S17 explained, “I plan, ask ChatGPT, then paraphrase the result to use for my essay,” while S16 observed that, “AI categorises and orders things in my essays very effectively.”

Teachers voiced strong pedagogical objections. Concerned and disappointed, they argued that outlining is a foundational skill that requires mental effort. One teacher maintained that “they should know how to organise their work. Depending on AI to do everything in writing is not the right way” (T2). Another warned that if “AI does it for them, they won’t learn how we do it in class… if students use AI all the time to prepare, to organize, to think, to brainstorm for every task, then they are not learning” (T4).

To enhance language. Students turned to ChatGPT to enhance the grammar, style, and tone of their writing so that it adhered to standard language conventions. S3 commented that “AI guides through punctuation and spelling errors without hassle,” whilst S19 noted, “I ask AI to paraphrase my writing in a kinder, softer way, and it works.”

Teachers’ responses ranged from endorsement of GenAI-assisted language feedback to concerns about authorship and authentic learning. These tensions underscored the need to balance linguistic support with genuine proficiency development.

T3 praised GenAI, noting that “using it for language feedback…shows effort and thoughtful use,” a sentiment echoed by T6: “I’m doing the same thing.” In contrast, T4 challenged the validity of student authorship, questioning, “if there’s a great difference between the students’ writing and the AI correction, is it students’ work or AI’s work?” T5 worried that by relying on GenAI to enhance their language, students might merely be “producing something for assessments” rather than genuinely learning: “when students enhance language, are they learning or just producing something for assessments?”

To gain immediate feedback. Students used ChatGPT to gain real-time feedback, which provides the opportunity to make immediate improvements. ChatGPT also comments on the quality and relevance of inputs by comparing them with other texts. S3 said, “ChatGPT helps me so much with my essays because it gives me feedback and tips to improve. I need critics about my essays.”

Teachers’ comments reflected a clear pedagogical divide over GenAI’s role in learning. The opposing views presented the challenge of integrating instant GenAI feedback whilst ensuring that students are engaged in meaningful, autonomous learning. T2 endorsed GenAI’s capacity for rapid refinement, noting that using GenAI “as a helper is a good idea… AI helps you improve what you have written down.” In contrast, T4 believed that the process undercuts learning: “AI is doing the job for them… it’s not helping students learn.”

To activate motivation. Students reported that GenAI enhanced their study motivation by its ability to respond to their prompts with writing ideas, individual suggestions, and progress feedback, which offered relief and reassurance. As one student observed, “AI is a beautiful tool… it helps students to achieve stuff easier with less time, less struggle if it’s used correctly” (S12).

However, teachers expressed concern that an overreliance on GenAI could undermine intrinsic motivation, which is essential for English language learning, and that GenAI’s motivational benefits therefore needed to be balanced against protecting students’ self-driven engagement. One teacher asked, “How motivated can you feel to do something by yourself if you know AI can help you anytime? If it’s used correctly, that’s the key” (T2).

RQ2. Identifying teachers’ needs to accommodate their students’ GenAI usage

The data revealed four key domains in which teachers need support to accommodate their students’ GenAI usage: top-down institutional decisions to establish clear policies; curriculum development to decide how GenAI tools could be integrated into curricula; assessment methods to determine how GenAI integration would affect grading; and in-service teacher training to equip teachers with the skills and knowledge to use GenAI in their teaching. Table 3 presents the teachers’ statements associated with each domain.

Institutional decisions

Institutional decisions emerged as a critical domain. Teachers called for comprehensive policies and guidelines that clearly defined the permissible scope of GenAI use in ELT, which would promote consistency, transparency, and confidence in integrating GenAI across curricula. They emphasised the need for standard criteria to protect teachers’ professional autonomy and ensure fair evaluation of GenAI‐assisted work. They wanted formal procedures for students to disclose their GenAI use, enabling institutions to recognise the tools as learning resources rather than leaving decisions to individual teachers’ discretion.

Table 3. Teachers’ statements identifying what they need to accommodate their students’ GenAI usage

Institutional decisions

Developing policies
● “There should be a policy for everyone. It should not be left to the teacher.” (T4)

Developing guidelines
● “We need guidelines to use AI in teaching…“ (T2)
● “I have my own guideline… if I have support behind me, excellent. If not, I will still have AI in my classroom. I will have a basic guideline, not a perfect one and if a student said that they used AI according to my guidelines, I would say OK.” (T3)

Setting criteria
● “We need to adapt to using it correctly and we need to set the criteria for the students… It should not be about [the teachers’] intuition… there must be something to protect the teacher.” (T2)

Legitimising students’ GenAI use
● “The students should confidently say that they used AI. We should ask them to use it…” (T3)
● “It should be official to mention that AI was used.” (T5)

Curriculum development

Integration of GenAI into the curriculum, with a focus on competences and students’ needs
● “We need to clarify the competencies involved in tasks, such as writing an essay, and define what we expect from students in terms of evaluation, summarising, and paraphrasing.” (T5)
● “We should teach students to generate AI prompts for their own contexts.” (T6)
● “We should ask students what they need to know more about.” (T2)

Assessment

GenAI integration into tasks for assessment
● “AI means we need to change how we evaluate students’ work.” (T4)
● “AI should be integrated [in students’ work], and it should be marked. AI should be graded in our syllabus so that the student will know how to use it responsibly and ethically…” (T4)

Issues of academic integrity and ethics
● “Grading work is a dilemma… if we are open to AI use but then punish students for using it… We must balance encouraging AI use with maintaining academic integrity.” (T2)
● “Can we assess a student’s essay if they have used AI, and we have not taught them AI? Is it a valid assessment?” (T6)
● “What if a student who writes an essay on her own gets a lower grade than the one who used AI?” (T4)
● “If students use AI for idea generation, it’s fine but they don’t, they copy and paste.” (T6)

In-service teacher training

Building teachers’ GenAI knowledge base
● “I need AI literacy skills… I should be told better and more practical ways that AI can help me with my work. A training… to see the scope.” (T2)
● “If I knew AI tools better, I would advise students… but I need to get the hang of them first.” (T4)
● “If I can’t do [what students do with AI], I can’t help.” (T5)
● “If we are negative [about AI], we cannot help [the students].” (T1)

Gaining knowledge about GenAI-enhanced teaching and assessment methodology
● “How can I make my teaching better? How can I use AI and teach better?” (T3)
● “With extra training I would do better.” (T4)
● “If we learn and if we teach how to use AI, we shouldn’t be afraid of it.” (T2)

Curriculum development

To integrate GenAI into ELT curricula, teachers highlighted the necessity of delineating task-specific competencies, such as brainstorming, evaluating, summarising, and paraphrasing, to clarify expectations and assessment criteria. They advocated for instruction in crafting GenAI prompts to help students in individual contexts, and for permission to ask students about their GenAI input to identify areas which needed additional support. By focusing on students’ needs in conjunction with curriculum development, the teachers felt that both skill acquisition and learner autonomy could develop.

Assessments

Teachers stressed that effective GenAI integration demanded a fundamental overhaul of assessment practices: embedding GenAI-enabled outputs, with clear grading criteria, into the syllabus would encourage students to use GenAI tools ethically and responsibly. At the same time, teachers flagged significant integrity challenges, questioning how to balance encouraging students’ GenAI usage with fair evaluation. Teachers also challenged the validity of students’ assessed work, because students lacked formal GenAI training and undermined their own learning by copying and pasting outputs. These concerns highlighted the need for transparent policies and pedagogical frameworks that reconcile responsible innovation with assessment validity.

In-service teacher training

Teachers identified in-service teacher training as essential for cultivating the GenAI literacy that teachers needed to support their students’ usage and innovate their own practice. Teachers reported that without firsthand experience with GenAI tools, they could neither advise students effectively nor adapt their lesson planning and assessment methods. They noted that negative attitudes toward GenAI reinforced barriers to integration, whereas targeted workshops which demonstrated practical applications, from prompt design to automated feedback mechanisms, would build confidence, expand pedagogical strategies, and improve both teaching quality and assessment validity.

Discussion

Our intention in using an emic‐informed, data‐driven interview design was to gain deeper insight into teachers’ emerging needs in response to students’ actual GenAI use. In line with previous research (Črček & Patekar, 2023; Delello et al., 2023; Jowarder, 2023; Ngo, 2023), our students held positive attitudes towards GenAI because it was efficient and effective and stimulated their motivation to study. However, our teachers’ perspectives were ambivalent, balancing optimism against caution, as reported in other studies’ findings (Črček & Patekar, 2023; Edmett et al., 2023; McGrath et al., 2024).

Writing emerged as the students’ primary target for GenAI assistance, with students leveraging GenAI to generate ideas, organise content, and revise drafts. The literature supports this trend of writing enhancement (Al Mahmud, 2023; Črček & Patekar, 2023), of idea generation for building, organising, and revising content (Ngo, 2023), and of feedback (Huang & Teng, 2025). However, our teachers raised concerns that this reliance risked undermining students’ originality and critical engagement, echoing concerns that GenAI’s pragmatic nature may supersede students’ deeper cognitive development (Creely, 2024; Jowarder, 2023). These observations align with Huang and Teng’s (2025) assertion that effective use of AI for writing demands robust metacognitive and critical‐thinking skills.

Beyond writing, students frequently consulted GenAI for task clarification, effectively treating it as a 24‐hour or substitute teacher. Whilst this supports claims that GenAI can facilitate comprehension through alternative explanations (Jowarder, 2023), one teacher implied that unmediated use may weaken students’ evaluative faculties because critical thinking develops through deliberate engagement. The point was made by Creely (2024) and Jowarder (2023), who suggested that overreliance on GenAI may negatively impact cognitive abilities because students constantly opt for fast, optimal solutions due to its practicality. Huang and Teng (2025) argued that teachers must scaffold GenAI interactions to nurture students’ independent judgment.

In addition, we discovered that students used GenAI as a critical friend because they valued its ability to review drafts and offer fresh perspectives. However, none reported fact-checking or verifying the output. This gap underscores the necessity of embedding source‐evaluation skills within GenAI‐mediated activities, for example cognitive processes like analysing, synthesising, evaluating, and reasoning (Jamiai & El Karfa, 2022). Therefore, students need critical thinking guidance (Huang & Teng, 2025) to assess the quality and reliability of sources, and to identify bias, distortion, and misinformation, areas our teachers mentioned during their interviews.

Students also used GenAI for language-level enhancements. They undertook spelling and punctuation checks, and used discourse-level processes to change the tone or formality of outputs, a practice corroborated by Črček and Patekar (2023). However, the teachers questioned how to monitor authentic language development when GenAI intercedes in surface corrections.

Finally, students prized GenAI’s immediate feedback, which supports self‐regulated learning and aligns with findings that GenAI can enhance autonomous skill development (Huang & Teng, 2025; Ngo, 2023; Pan et al., 2024; Selwyn, 2024). However, the absence of process‐oriented feedback mechanisms constrains teachers’ capacity to address students’ evolving needs in both language acquisition and self‐regulation. Collectively, the findings highlight a need for pedagogical frameworks that integrate GenAI responsibly, which ensures that technological efficiency amplifies rather than replaces meaningful learning.

In the second phase of our study, our teachers’ comments emphasised the need to establish policies and guidelines for appropriate and ethical use of GenAI in ELT, which would create a safe space for teachers to make decisions about its use for themselves and their students. The findings are supported by Lameras and Arnab (2021), who stated that teachers are catalysts and play a key role in adopting innovations in education. However, without institutional clarity, teachers face a dilemma between supporting innovation and feeling undermined in GenAI-related classroom decisions (Felix & Webb, 2024).

Teachers needed systematic support to integrate GenAI into ELT curricula without compromising learners’ competence development. Consistent with Cheng et al. (2020), who argued that GenAI’s simplification of tasks may prompt students to underestimate academic rigor, and with evidence that GenAI’s biases and limitations contribute to ethical misconduct (Črček & Patekar, 2023; Hockly, 2023), the teachers proposed a detailed task‐by‐task embedding of GenAI. They suggested building core competencies such as brainstorming, paraphrasing, summarising, and critical verification into each stage of language tasks. They believed that clarity, knowledge, and education about how to use GenAI to accomplish the outputs could reinforce critical thinking, creativity, and ethical reasoning, thereby mitigating risks of misuse and the underdevelopment of analytical skills.

Teachers also expressed acute concern regarding GenAI’s impact on assessment. Our findings aligned with other studies (Črček & Patekar, 2023; McGrath et al., 2024), which found teachers undecided about students’ GenAI usage in assessed work. The absence of clear grading criteria led teachers to make intuitive, case‐by‐case judgments about assessed work, which often punished students for their GenAI use rather than being based on policy.

Finally, our teachers acknowledged their own underdeveloped GenAI literacy, echoing other research findings (Edmett et al., 2023; Luckin et al., 2022; McGrath et al., 2024; Nazim & Alzubi, 2025; Toncelli & Kostka, 2024). They advocated for embedded, practical, in‐service professional development to develop GenAI literacy, which would enable them to critically evaluate, communicate, and collaborate effectively with GenAI (Long & Magerko, 2020), build confidence in applying GenAI to various teaching methods (Lameras & Arnab, 2021; Toncelli & Kostka, 2024), and embrace change to enable a positive transfer of new technology into practice (McGrath et al., 2024).

Conclusion

Our research addresses a gap in the literature by identifying what teachers need to accommodate students’ GenAI usage in ELT. It indicates that HE institutions need to take the lead on GenAI usage, through policies, guidance, and training, because “AI cannot be banned” (S12).

Policies need to define acceptable and unacceptable uses of GenAI tools so that teachers can work from solid principles. Firstly, policies need to promote academic integrity, committing users to honest, responsible, and ethical conduct. Secondly, they need to promote transparency, which would build a culture of openness about GenAI use. Finally, they need to address how students’ GenAI use is evaluated in assessments, which would enable teachers to undertake comparative, fair marking.

In addition, teachers need guidance and training to implement the policies. Practical guidance would build teachers’ confidence, knowledge, and abilities to use GenAI. It would also build awareness of the ethical implications related to bias, accountability, and GenAI’s potential for misuse. Training would promote transparency, which could include how to reference GenAI, and integrity, by learning to use GenAI to supplement learning rather than as a plagiarism tool. Other areas could explore integrating GenAI into existing curricula to enhance teaching and resources.

Limitations

There are some limitations in our research. Firstly, our data set is small and context-specific, drawing on only one university in Turkey; we therefore acknowledge the specificity of the results. We also recognise that students may not have honestly disclosed their GenAI use, despite ethical procedures ensuring anonymity and confidentiality. Finally, teachers may have exaggerated their students’ GenAI use as part of their professional role.

Further Research

Based on these limitations, future studies could recruit more participants from various universities to create a broader sample and make the research more inclusive in scope. In addition, while teachers in our research reported that GenAI may not develop students’ critical thinking skills, some studies (Avsheniuk et al., 2024; Darwin et al., 2024) argue that GenAI might offer space to develop critical thinking skills through carefully designed pedagogy, which could be another area for further research.

About the Authors

Rhian Webb has a PhD from the University of South Wales, UK. She is a senior lecturer and researcher in Teaching English to Speakers of Other Languages (TESOL) and course leader for the MA TESOL. Rhian collaborates with academics globally to undertake research which has most recently included: GenAI in ELT, Action Research, translanguaging and intercultural communicative competence. ORCID ID: 0000-0002-1495-0010

Ferah Şenaydın is a senior lecturer and researcher at Ege University, specializing in English Language Teaching (ELT). With a PhD in ELT, her academic interests include teacher education, metacognition, coaching and mentoring. Dr. Şenaydın has participated in various international projects, sharing her expertise in collaborative educational initiatives. ORCID ID: 0000-0003-2368-0689

To Cite this Article

Webb, R., & Şenaydın, F. (2025). Generative AI in English language teaching: Students’ voices, teachers’ reactions, and needs. Teaching English as a Second Language Electronic Journal (TESL-EJ), 29(3). https://doi.org/10.55593/ej.29115int3

References

Almanea, M. (2024). Instructors’ and learners’ perspectives on using ChatGPT in English as a foreign language course and its effect on academic integrity. Computer Assisted Language Learning, 1-26. https://doi.org/10.1080/09588221.2024.2410158

Al Mahmud, F. (2023). Investigating EFL students’ writing skills through artificial intelligence: Wordtune application as a tool. Journal of Language Teaching and Research, 14(5), 1395-1404. https://doi.org/10.17507/jltr.1405.28

An, X., Chai, C., Li, Y., Zhou, Y., Shen, X., Zheng, C., & Chen, M. (2023). Modelling English teachers’ behavioural intention to use artificial intelligence in middle schools. Education and Information Technologies, 28(5), 5187-5208. https://doi.org/10.1007/s10639-022-11286-z

Avsheniuk, N., Lutsenko, O., Svyrydiuk, T., & Seminikhyna, N. (2024). Empowering language learners’ critical thinking: evaluating ChatGPT’s role in English course implementation. Arab World English Journal (AWEJ) Special Issue on ChatGPT. https://doi.org/10.24093/awej/vol14no4.1

Baydemir, R. (2025). Artificial Intelligence Wars in Global Technology: ChatGPT vs. DeepSeek. All Sciences Academy, 370-375. https://www.researchgate.net/publication/390174698

British Educational Research Association. (1992). Ethical guidelines. London: BERA.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Cheng, C., Chen, C., & Fu, C. (2020). Artificial intelligence-based education assists medical students’ interpretation of hip fracture. Insights Imaging, 11(1), 119-127. https://doi.org/10.1186/s13244-020-00932-0

Choi, S., Jang, Y., & Kim, H. (2023). Influence of pedagogical beliefs and perceived trust on teachers’ acceptance of educational artificial intelligence tools. International Journal of Human-Computer Interaction, 39(4), 910–922. https://doi.org/10.1080/10447318.2022.2049145

Črček, N., & Patekar, J. (2023). Writing with AI: University students’ use of ChatGPT. Journal of Language and Education 9, 4 (36), 128-138. https://doi.org/10.17323/jle.2023.17379

Cong-Lem, N., Tat Nguyen, T., & Nhat Hoang Nguyen, K. (2025). Critical thinking in the age of generative AI: Effects of a short-term experiential learning intervention on EFL learners. International Journal of TESOL Studies (2025) 250522, 1-21. https://doi.org/10.58304/ijts.250522

Creely, E. (2024). Exploring the role of generative AI in enhancing language learning: Opportunities and challenges. International Journal of Changes in Education, 1(3), 158-167. https://doi.org/10.47852/bonviewIJCE42022495

Darwin, D., Rusdin, D., Mukminatien, N., Suryati, N., Laksmi, E., & Marzuki, M. (2024). Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations. Cogent Education, 11(1). https://doi.org/10.1080/2331186X.2023.2290342

Delello, J., Sung, W., Mokhtari, K., & De Giuseppe, T. (2023). Exploring college students’ awareness of AI and ChatGPT: Unveiling perceived benefits and risks. Journal of Inclusive Methodology and Technology in Learning and Teaching, 3(4). https://doi.org/10.32043/jimtlt.v3i4.132

Dillenbourg, P. (2016). The evolution of research on digital education. International Journal of Artificial Intelligence in Education, 26, 544-560. https://doi.org/10.1007/s40593-016-0106-z

Edmett, A., Ichaporia, N., Crompton, H., & Crichton, R. (2023). Artificial intelligence and English language teaching: Preparing for the future. British Council, 2024-08. https://doi.org/10.1515/jccall-2023-0032

Fathi, J., Rahimi, M., & Derakhshan, A. (2024). Improving EFL learners’ speaking skills and willingness to communicate via artificial intelligence-mediated interactions. System, 121.  https://doi.org/10.1016/j.system.2024.103254

Felix, J., & Webb, L. (2024). Use of artificial intelligence in education delivery and assessment. Parliamentary Office of Science and Technology. https://post.parliament.uk/research-briefings/post-pn-0712/

Hockly, N. (2023). Artificial intelligence in English language teaching: The good, the bad and the ugly. Relc Journal, 54(2), 445-451. https://doi.org/10.1177/00336882231168504

Huang, J., & Teng, M. F. (2025). Peer feedback and ChatGPT-generated feedback on Japanese EFL students’ engagement in a foreign language writing context. Digital Applied Linguistics, 2, 102469-102469. https://doi.org/10.29140/dal.v2.102469

Jamiai, A., & El Karfa, A. (2022). Critical thinking practice in foreign language education classrooms. European Journal of English Language Teaching, 7(3). https://doi.org/10.46827/ejel.v7i3.4322

Jose, J., & Jayaron Jose, B. (2024). An Overview of Incorporating Artificial Intelligence Tools for Promoting Learners’ Reading Skills. Innovative and Intelligent Digital Technologies; Towards an Increased Efficiency: Volume 1, 603-613. https://doi.org/10.1007/978-3-031-70399-7_46

Jowarder, M. (2023). The influence of ChatGPT on social science students: Insights drawn from undergraduate students in the United States. Indonesian Journal of Innovation and Applied Sciences (IJIAS), 3(2), 194-200. https://doi.org/10.47540/ijias.v3i2.878

Kanbach, D., Heiduk, L., Blueher, G., Schreiter, M., & Lahmann, A. (2024). The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective. Review of Managerial Science, 18(4), 1189-1220. https://doi.org/10.1007/s11846-023-00696-z

Kohnke, L., Zou, D., & Moorhouse, B. L. (2024). Technostress and English language teaching in the age of generative AI. Educational technology & society, 27(2), 306-320. https://doi.org/10.30191/ETS.202404_27(2).TP02

Kushmar, L., Vornachev, A., Korobova, I., & Kaida, N. (2022). Artificial Intelligence in Language Learning: What Are We Afraid Of. Arab World English Journal, (8), 262-273. https://dx.doi.org/10.24093/awej/call8.18

Lameras, P., & Arnab, S. (2021). Power to the teachers: an exploratory review on artificial intelligence in education. Information, 13(1), 14. https://doi.org/10.3390/info13010014

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Paper presented at 2020 CHI conference on Human factors in computing systems (pp. 1-16).

Luckin, R. (2018). Machine Learning and Human Intelligence. The future of education for the 21st century. UCL Institute of Education Press.

Luckin, R., Cukurova, M., Kent, C., & Du Boulay, B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100076. https://doi.org/10.1016/j.caeai.2022.100076

McGrath, S. P., Kozel, B. A., Gracefo, S., Sutherland, N., Danford, C. J., & Walton, N. (2024). A comparative evaluation of ChatGPT 3.5 and ChatGPT 4 in responses to selected genetics questions. Journal of the American Medical Informatics Association, 31(10), 2271-2283. https://doi.org/10.1093/jamia/ocae128

Mills, K., Unsworth, L., & Scholes, L. (2022). Literacy for Digital Futures: Mind, Body, Text (1st ed.). Routledge. https://doi.org/10.4324/9781003137368

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage

Nazim, M., & Alzubi, A. A. F. (2025). Empowering EFL teachers’ perceptions of generative AI-mediated self-professionalism. PLoS One, 20(6). https://doi.org/10.1371/journal.pone.0326735

Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104

Newton, J., & Nation, I. (2020). Teaching ESL/EFL listening and speaking. Routledge.

Ng, D., Leung, J., Chu, S., & Qiao, M. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

Ngo, T. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning, 18(17), 4–19. https://doi.org/10.3991/ijet.v18i17.39019

Nyaaba, M. (2024). Transforming teacher education in developing countries: The role of generative AI in bridging theory and practice. arXiv preprint arXiv:2411.10718. https://doi.org/10.48550/arXiv.2411.10718

Pan, M., Guo, K., & Lai, C. (2024). Using Artificial Intelligence Chatbots to Support English-as-a-Foreign Language Students’ Self-Regulated Reading. RELC Journal, 0(0). https://doi.org/10.1177/00336882241264030

Prather, J., Leinonen, J., Kiesler, N., Benario, J., Lau, S., & MacNeil, S. (2024). How Instructors Incorporate Generative AI into Teaching Computing. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V.2, Milan, 8-10 July 2024, 771-772. https://doi.org/10.1145/3649405.3659534

Rizza, C. (2014). Digital Competences. In: Michalos, A.C. (eds) Encyclopedia of Quality of Life and Well-Being Research. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-0753-5_731

Selwyn, N. (2024). On the limits of artificial intelligence (AI) in education. Nordisk tidsskrift for pedagogikk og kritikk, 10(1), 3-14. https://doi.org/10.23865/ntpk.v10.6062

Toncelli, R., & Kostka, I. (2024). A love-hate relationship: Exploring faculty attitudes towards GenAI and its integration into teaching. International Journal of TESOL Studies, 6(3), 77-94. https://doi.org/10.58304/ijts.20240306

Wei, L. (2023). Artificial intelligence in language instruction: impact on English learning achievement, L2 motivation, and self-regulated learning. Frontiers in Psychology, 14, 1261955. https://doi.org/10.3389/fpsyg.2023.1261955

Wu, J. G., & Miller, L. (2025). Smart or sweat: The bittersweet journey of teachers’ AI literacy. International Journal of TESOL Studies, 1-10. https://doi.org/10.58304/ijts.250124

Yamaoka, K. (2024). ChatGPT’s motivational effects on Japanese university EFL learners: A qualitative analysis. International Journal of TESOL Studies, 6(3), 24-35. https://doi.org/10.58304/ijts.20240303

Yin, J., Goh, T., & Hu, Y. (2024). Interactions with educational chatbots: the impact of induced emotions and students’ learning motivation. International Journal of Educational Technology in Higher Education, 21(1), 47. https://doi.org/10.1186/s41239-024-00480-3

Yuan, L., & Liu, X. (2025). The effect of artificial intelligence tools on EFL learners’ engagement, enjoyment, and motivation. Computers in Human Behavior, 162, 108474. https://doi.org/10.1016/j.chb.2024.108474

Yuliani, S., Mukhibbah, T. L., & Agustina, E. (2024). Artificial intelligence usage in higher education: EFL students’ view. ELTR Journal, 8(2), 119-129. https://doi.org/10.37147/eltr.v8i2.198

Yue, M., Jong, M. S. Y., & Ng, D. T. K. (2024). Understanding K–12 teachers’ technological pedagogical content knowledge readiness and attitudes toward artificial intelligence education. Education and information technologies, 29(15), 19505-19536. https://doi.org/10.1007/s10639-024-12621-2

Copyright of articles rests with the authors. Please cite TESL-EJ appropriately.
Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.

© 1994–2026 TESL-EJ, ISSN 1072-4303