The Electronic Journal for English as a Second Language

Generative Artificial Intelligence (GenAI) for Feedback in School Writing: Friend or Foe?

* * * On the Internet * * *

February 2026 — Volume 29, Number 4

https://doi.org/10.55593/ej.29116int

Soon Koh Poh
National Institute of Education-Nanyang Technological University
<soonkoh.poh@nie.edu.sg>

Icy Lee
National Institute of Education-Nanyang Technological University
<icy.lee@nie.edu.sg>

Abstract

Effective writing instruction depends on quality feedback, yet teachers often face barriers such as heavy marking workloads and limited feedback literacy (Lee, 2021), leaving students without adequate support. Generative Artificial Intelligence (GenAI) is emerging as a promising tool for addressing these challenges, offering automated writing evaluation and timely feedback. While existing scholarship on GenAI has largely focused on older learners in tertiary contexts, its potential for school writing remains underexplored (Tseng & Warschauer, 2023). Concerns persist that younger students may lack the maturity and language proficiency to use GenAI effectively, and that misuse could hinder their development. Although many schools restrict independent use of GenAI by children under 13, young learners increasingly encounter these tools beyond the classroom. This article explores the benefits, limitations, and challenges of GenAI as a feedback tool in schools. It offers guidelines for teachers to leverage GenAI at different stages of the writing process to support students’ development in school. The discussion emphasizes balancing GenAI integration with human intervention to ensure ethical and effective instruction, contributing to the growing scholarship on GenAI with a focus on supporting teachers and students in school contexts.

Keywords: Generative AI, Feedback, School Writing Instruction

For teachers, providing feedback on students’ writing often feels like a never-ending uphill battle—taxing, exhausting, and constrained by time. For students, although they hope that feedback will help them improve their writing, the overwhelming nature of feedback and limited opportunities for discussion often leave them feeling lost and unsupported. The debut of ChatGPT in 2022 presented a promising solution to these challenges. For example, GenAI tools, such as Gemini and ChatGPT, as well as teacher-designed chatbots or AI-powered assistants, can be harnessed to support feedback delivery and enable students to receive immediate responses, regardless of time or location, thereby facilitating their writing development.

Although teachers and students may view GenAI as a potential panacea for their feedback challenges and writing struggles, research findings on the effectiveness of GenAI in enhancing writing skills are inconclusive. Scholars such as Bui and Barrot (2025) and Kohnke et al. (2023) have called for further exploration of GenAI to support students’ development as writers. In addition, recent studies have highlighted the drawbacks associated with the use of GenAI by students. For example, inappropriate use and overuse of GenAI may undermine academic integrity and hinder the development of students’ critical thinking skills (Feng et al., 2025).

Much of the existing scholarship on GenAI in writing feedback has focused on older learners. For instance, Kostka and Toncelli (2023) offered a comprehensive examination of ChatGPT’s applications in English language teaching, highlighting both the opportunities and challenges associated with GenAI integration in higher education contexts. Their study focused on undergraduate and graduate learners, emphasizing how ChatGPT can support language learning in specialized courses such as public speaking and academic listening and speaking for international students.

Although there are mounting concerns that younger students may lack the maturity and language proficiency needed to use GenAI tools effectively, and that misuse or overuse could hinder rather than support their writing development, much less literature has addressed the use of GenAI for feedback in school contexts. In many parts of the world, despite age-related access restrictions, young learners are increasingly exposed to GenAI outside of school. This growing exposure—alongside concerns about students’ readiness and potential misuse—raises several important questions: What benefits does the use of GenAI bring to writing in school contexts? What challenges does it pose? Do the advantages outweigh the potential risks? In light of the paucity of work addressing GenAI use in school-based writing instruction, this article seeks to explore these questions with a view to examining the pedagogical potential and pitfalls of integrating GenAI into school writing classrooms.

By placing particular emphasis on GenAI as a feedback tool for writing in school contexts, we extend current conversations to pre-tertiary settings—namely, primary, secondary, and junior college classrooms—where learners’ developmental needs differ markedly and the role of teacher mediation is significantly more pronounced. We foreground issues such as feedback interpretability, learner agency, and the affective dimensions of feedback. Building on insights from Kostka and Toncelli (2023), this discussion argues that while GenAI holds promise for scaffolding students’ writing processes, school-based implementation must be accompanied by critical attention to concerns around over-reliance and ethical use. In particular, teachers play a pivotal role in guiding students’ engagement with AI-generated feedback, helping them develop evaluative judgment and reflective practices. By identifying context-specific considerations for GenAI integration in school writing instruction, this paper contributes to a broader understanding of how AI tools can be pedagogically leveraged to support student writers in diverse educational settings.

This article begins by exploring the potential benefits of GenAI for both teachers and students in school settings. It then examines the key issues, challenges, and concerns surrounding the use of GenAI for writing feedback, highlighting key factors for integrating it into classroom practice. The article concludes with a discussion of guidelines for implementing GenAI-supported writing instruction, arguing that when used judiciously and supported by adequate teacher scaffolding and guidance, the benefits of GenAI for feedback outweigh its drawbacks in school-based writing instruction.

Benefits of GenAI for Feedback in School Writing

Potential Benefits of GenAI for Teachers

Feedback in conventional writing classrooms is often fraught with challenges (Lee & Mao, 2025). Some teachers may lack the feedback literacy required to ensure that feedback is effective. A common issue is the use of inappropriate assessment criteria. Drawing on our teacher education experience, for example, we have observed cases where teachers assessing the recount genre include “plot development” as a criterion—an element more relevant to the narrative genre. Feedback grounded in such misaligned criteria is unlikely to support students’ writing development, as the effectiveness of feedback depends on the appropriateness and clarity of the assessment criteria (Lee, 2021), which serve as the foundation for evaluating students’ writing (Dickinson & Adams, 2017). In this respect, the use of GenAI can offer valuable support to teachers. One key benefit is that GenAI can play the role of a virtual teaching assistant, providing information on the assessment criteria for target genres. Teachers can co-construct assessment criteria with GenAI, compare their self-developed criteria with those generated by GenAI, and consider how GenAI’s suggestions may enhance the quality of their own criteria. They can also collaborate with GenAI to design genre-specific assessment forms that inform and support their feedback practices.

Since the primary purpose of providing feedback on students’ writing is to support their improvement, treating writing as a one-off final product or using it primarily for evaluative and summative purposes—as in many conventional writing classrooms—undermines the pedagogical value of feedback (Wang, 2024). In contrast, feedback in process-oriented writing classrooms, as part of formative assessment, can guide and support students’ development throughout the writing process (Levine et al., 2025; Su et al., 2023). However, in conventional classrooms, some teachers resist engaging students in multiple drafting due to time constraints that limit their ability to deliver feedback on successive drafts. Leveraging GenAI can help address this challenge by offering continuous and targeted support throughout the writing process. Teachers can share responsibility with GenAI by assigning it to respond to specific aspects of student writing at particular stages of the writing process. For instance, teachers might focus on content, organization, and genre-related issues in the first draft, while allowing GenAI to provide feedback on language-related issues in subsequent drafts.

Another issue with feedback in conventional writing classrooms is that it tends to be overly detailed and comprehensive, particularly in relation to error correction (Lee & Mao, 2025), which can leave students feeling frustrated and confused. To avoid overwhelming students with excessive error feedback, teachers can harness the potential of GenAI to filter error types and provide targeted feedback on pre-selected error categories. In school settings, teachers can develop GPT-powered chatbots to deliver focused error feedback, directing students’ attention to specific error types aligned with the target genre and emphasized during pre-writing instruction. Research has shown that ChatGPT outperforms other GenAI tools in detecting and correcting grammatical errors (Biju et al., 2024). Teachers can capitalize on this strength and use it to offer targeted and pedagogically aligned error feedback to students.

When providing error feedback themselves, teachers can utilize GenAI to support their grammatical judgments when in doubt. A study by Lee (2004) indicated that slightly over half of the 58 teachers’ error corrections were accurate. For instance, in the study, three out of 54 teachers incorrectly identified damage in “the three problems that are causing damage” (p. 298) as an error, changing it to “damages”. When teachers are unsure of their grammatical judgments, they can easily consult GenAI for assistance. However, since GenAI does not always provide accurate feedback (Banihashem et al., 2024; Barrot, 2024; Steiss et al., 2024), its suggestions should be critically evaluated. It is therefore crucial for teachers to exercise strong metacognitive awareness when interpreting GenAI-generated feedback.

After providing feedback, teachers can use AI-powered writing assistants (such as WriteLab and ProWritingAid) to gather information about student performance in writing. These tools can help diagnose students’ strengths and weaknesses, and support teachers in feedforward (Lee, 2017) with a view to improving students’ future performance. Based on an analysis of students’ overall strengths and key weaknesses, teachers can plan instructional lessons to reinforce learning and address specific areas that need improvement. To identify students’ error patterns, teachers can use GenAI tools such as ChatGPT to categorize and count errors across different types. They can further use the GenAI tool to produce error ratio analyses for each student, which can highlight the relative severity of error patterns—from the most to the least frequent or problematic. To identify class-wide error patterns, aggregate data can be input into ChatGPT to generate an overview of common errors, which can then be used to inform instruction.
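The error ratio analysis described above can be illustrated with a minimal local sketch. The example below assumes the errors have already been labeled by type (for instance by a GenAI tool or the teacher) and simply aggregates them; the function name and sample labels are our own illustrative choices, not part of any existing tool.

```python
from collections import Counter

def error_ratios(labeled_errors):
    """Given a list of error-type labels for one student or a whole
    class, return each type's share of the total, most frequent first."""
    counts = Counter(labeled_errors)
    total = sum(counts.values())
    return [(etype, n / total) for etype, n in counts.most_common()]

# Hypothetical per-student labels, e.g. produced by a GenAI error tagger.
student_errors = ["subject-verb agreement", "tense", "tense",
                  "article", "tense", "article"]
for etype, ratio in error_ratios(student_errors):
    print(f"{etype}: {ratio:.0%}")
# Output:
# tense: 50%
# article: 33%
# subject-verb agreement: 17%
```

Running the same aggregation over labels pooled from the whole class yields the class-wide overview of common errors mentioned above.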

Potential Benefits of GenAI for School Students

One major issue in conventional writing classrooms is that feedback is often something teachers do to students rather than with or alongside them. As a result, students tend to remain passive. When used appropriately, GenAI has strong potential to foster active student engagement, learner agency, and self-regulated learning (Dizon et al., 2025).

Due to time constraints, it is often challenging for teachers to provide feedback to every student at each stage of the writing process. GenAI can serve as a valuable pedagogical support tool, offering timely and targeted feedback to complement teacher input. Before writing, students can seek feedback from GenAI on their ideas and outlines. Interacting with GenAI at this stage can help students enhance their metacognitive awareness of the learning goals and success criteria for the writing task, and develop metacognitive strategies for monitoring their progress (Kim et al., 2025). This aligns with Teng’s (2025a, 2025b) findings, which highlight how learners perceive ChatGPT as a useful tool for developing metacognitive awareness through reflective engagement with feedback. GenAI can also be used to provide feedback on students’ self-set learning goals, supporting their efforts to reflect on and adjust their learning goals during the writing process. In story writing, for example, a student might aim to improve their story structure and use of dialogue. With this in mind, they can prompt GenAI to provide goal-driven feedback specifically focused on these aspects of their writing. For younger learners, such as upper primary students, teacher-designed chatbots can be programmed to scaffold this process. For instance, chatbots can prompt students to articulate their goals and ask metacognitive questions (e.g., “How does this story opening grab the reader’s attention?”). Such interactions promote learner agency by guiding students to take ownership of their writing and engage with feedback meaningfully.
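One way a teacher-designed chatbot could operationalize such goal-driven feedback is to build the student's self-set goals into the system prompt sent to the underlying model. The sketch below is illustrative only: the function name, prompt wording, and goal list are our own assumptions, and the resulting string would be passed to whatever GenAI service the school has approved.

```python
def goal_driven_system_prompt(goals, student_level="upper primary"):
    """Compose a system prompt that keeps chatbot feedback focused on a
    student's self-set writing goals and phrased as guiding questions."""
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        f"You are a writing coach for a {student_level} student.\n"
        "Give feedback ONLY on the goals listed below, using simple,\n"
        "age-appropriate language. Ask guiding questions (e.g., 'How does\n"
        "this opening grab the reader's attention?') instead of rewriting\n"
        "the student's text or supplying finished sentences.\n"
        f"Student's goals:\n{goal_lines}"
    )

print(goal_driven_system_prompt(["improve story structure", "use dialogue"]))
```

Constraining the model at the prompt level in this way keeps its responses aligned with the student's own goals rather than letting it comment on, or rewrite, everything at once.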

During the drafting and revision stages, a common challenge students face is the dual difficulty of self-assessing the quality of their writing and making effective revisions (Zhang & Yu, 2024). During these stages, students can pose relevant questions to GenAI to seek help, such as “My descriptions feel weak. Can you help me improve them?” Throughout the writing and feedback process, students can monitor, evaluate, and reflect on their own writing, with GenAI serving as a personal tutor (Kim et al., 2025). For younger learners, particularly those under the age of 13 who require teacher supervision, teacher-designed chatbots can be configured to use age-appropriate language that supports meaningful interaction with GenAI. Such chatbots can provide tiered feedback that progresses from implicit to explicit, guiding students step by step rather than offering complete solutions outright. Through this scaffolded, dialogic interaction, students are encouraged to think critically and reflect on their writing, fostering learner agency, deep engagement, and metacognitive awareness.
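The implicit-to-explicit progression of such tiered feedback can be modeled as a simple escalation scheme. In this sketch the hint texts, tier ordering, and function name are invented for illustration; a real chatbot would generate tier-appropriate hints from the student's actual draft rather than from a fixed list.

```python
# Hypothetical hint tiers for one recurring issue, ordered from most
# implicit to most explicit; a chatbot would only escalate when the
# student's revision still shows the problem.
HINT_TIERS = [
    "Reread this sentence aloud. Does anything sound off?",            # implicit
    "Look closely at the verb. Does it match when the story happens?",
    "The verb should be in the past tense. Can you change it?",        # explicit
]

def next_hint(attempt_number):
    """Return the hint for the given failed attempt, capped at the most
    explicit tier rather than giving the answer outright."""
    tier = min(attempt_number, len(HINT_TIERS) - 1)
    return HINT_TIERS[tier]

print(next_hint(0))  # first attempt: the most implicit hint
```

Capping at the last tier, rather than revealing a corrected sentence, preserves the principle of guiding students step by step instead of offering complete solutions.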

At the end of the writing process, students can use GenAI to obtain diagnostic information about their strengths and weaknesses in writing. As students reflect on the extent to which their goals were achieved, their self-reflection can be enriched by insights provided by GenAI on aspects such as content, tone, style, and coherence. Through this process, students develop a deeper understanding of “person knowledge”, an important aspect of metacognitive knowledge (Flavell & Wellman, 1977, p. 10)—namely, an awareness of their own strengths and weaknesses in writing. This facilitates the setting of improvement goals that guide their future writing.

Issues, Challenges, and Concerns

While leveraging GenAI tools offers potential benefits, there are also challenges that need to be addressed. Researchers have raised concerns about the trustworthiness and dependability of using GenAI to support feedback on writing. Behzad et al. (2024) found that ChatGPT’s identification of punctuation errors was inaccurate. Barrot (2024) also observed that ChatGPT struggles to evaluate the appropriate use of nuanced language in writing. Kim et al. (2025) and Li et al. (2024) pointed out that ChatGPT often provides general feedback, failing to consider contextual factors such as a writer’s voice, writing style, and customization of feedback to the specific writing task. Carlson et al. (2023) cautioned against GenAI’s tendency to favor particular writing styles or cultural norms. This bias may also manifest in fabricated information (Li et al., 2024). Another related issue is the inconsistency in AI feedback, with varied responses emerging from each submission of student writing (Zeevy-Solovey, 2024). These threats to GenAI’s reliability can be attributed to its underlying algorithms, training data, misinterpretation of data patterns, and its inherent tendency to “hallucinate” information (Carlson et al., 2023; Kim et al., 2025).

Another limitation of GenAI is its lack of pedagogical expertise. Kim et al. (2025) reported that while ChatGPT can assist in providing feedback on various aspects of writing (e.g., coherence and organizational structure), it lacks the instructional ability to support feedforward practices. An integral part of this process involves offering guidance that attends to students’ affective needs during learning (Chai et al., 2024). Few would dispute that engaging students—rather than relying on one-way communication—is a vital component of effective feedback (Bloxham & Campbell, 2010). However, studies have shown that interaction with GenAI often lacks a human touch or relational sensitivity (Teng, 2024a), thereby neglecting the positive impact of human relationships on student learning (Jensen et al., 2015).

An additional concern is that over-reliance on GenAI or uncritical use can undermine a writer’s voice and originality. When writers adopt GenAI’s comments or suggestions wholesale, they risk compromising their unique identity and perspective (Jacob et al., 2025). This occurs because GenAI-generated solutions are based on patterns and data from public electronic sources. Consequently, relying on them uncritically can result in adopting others’ voices rather than cultivating one’s own distinctive or creative style (Wang, 2024). Li and Pan (2024) highlight how students’ interactions with ChatGPT often involve negotiating their voices within broader social and ideological contexts, including issues of personal identity. This dynamic reflects wider concerns about the narrowing of expressive diversity and the encouragement of uniformity in modes of expression, underscoring the importance of preserving personal and creative identity in the face of GenAI’s influence on writing practices.

The risk is particularly pronounced among school-aged learners. Based on our observations of primary students using personal chatbots in school settings, many young learners tend to view GenAI as the authoritative standard for language use and writing quality. It is common to hear students say, “But GenAI says this…,” citing GenAI as a powerful source of authority. This uncritical acceptance underscores the need for instructional guidance to help students engage with GenAI outputs more reflectively, discerningly, and critically.

Related to students’ uncritical adoption of GenAI’s comments or suggestions is the widely discussed issue of academic dishonesty (Al-Kfairy et al., 2024). The concern extends beyond the authenticity of written work (Yeo, 2023). Over-reliance on GenAI may discourage students from engaging in critical and creative thinking—such as evaluating, reflecting, and making decisions—during drafting and revision (Wang, 2024). This undermines the purpose of learning to write. The convenience of having GenAI supply diverse perspectives may inadvertently bypass the essential process of interrogating and refining those ideas. As a result, students may become less inclined to adapt or reimagine content, thereby weakening the imaginative and exploratory aspects of composition that are vital for developing both creative expression and critical judgment in writing (Hao et al., 2024).

While this concern can be mitigated by guiding students to evaluate GenAI’s output (Reeves & Sylvia, 2024), the effectiveness of this approach is limited by GenAI’s inability to explain the reasoning behind its suggestions (Kim et al., 2025). If GenAI tools could clarify why a particular revision is recommended, students would better understand the underlying principles and be more equipped to apply them in future writing tasks. However, this lack of transparency in explaining the how and why of its responses remains a key limitation of GenAI.

A central issue that threads through the foregoing discussion is that the introduction of GenAI in school contexts may inadvertently deepen existing disparities in access to learning and support opportunities. These inequities are evident in the constraints learners face in traditional classrooms: limited instructional time, inconsistent or inaccessible teacher feedback, and a lack of resources for individualized practice. Such constraints can push students toward over-reliance on GenAI and uncritical acceptance of its output, while bias in GenAI-generated feedback (Teng, 2024b) may further marginalize learners from non-dominant linguistic or cultural backgrounds. This broader concern, described by Ulla and Teng (2024) as “limited opportunities for language practice” (p. 1), is also taken up in Huang’s (2024) study, which illustrates how GenAI tools can help mitigate these constraints by enabling autonomous and personalized language practice.

Role of Teachers and Students in GenAI Integration

Role of Teachers

Given the concerns discussed above, it is not advisable for teachers to delegate their responsibility for providing writing feedback entirely to GenAI. Likewise, students should not rely solely on GenAI for feedback on their writing. Instead, when integrating GenAI into writing classrooms, the roles of teachers, students, and GenAI should reflect a shared responsibility between students and teachers (Lee, 2017).

In sharing responsibilities within the formative assessment of writing, it is crucial for teachers to assume the roles of mediator and critical supporter. The role of mediator involves several key responsibilities. First, it involves scaffolding that enables students to progress within their zone of proximal development (Lee, 2021). As mediators, teachers help bridge students’ understanding of GenAI—explaining how it works, when and how to use prompts effectively, and how to engage with it ethically.

Writing effective prompts requires skill and practice, especially for non-technical users. Research has shown that students often lack “explicit knowledge about when and why to prompt tools” (Woo et al., 2025, p. 12), highlighting the need for clear and targeted instruction in prompt engineering. The ethical use of GenAI can understandably seem abstract to many school-aged learners, particularly those who struggle with writing. This is where teachers play a vital role in helping students understand its importance.

To support ethical engagement, clear guidelines and policies should be established in school writing contexts. For example, this might include a policy prohibiting content generation for student writing, along with a requirement to disclose and reflect on GenAI use during the writing process. These practices should be teacher-led, as young learners often lack the capacity to navigate them independently.

As previously discussed, GenAI has limitations in providing feedback tailored to specific writing contexts (e.g., purpose, audience, culture) and individual student needs (e.g., language proficiency, learning goals). Therefore, teachers must support students in learning how to make personalized requests to GenAI and use its responses as a springboard for improving their writing.

While some students may find interacting with GenAI tools motivating (Meyer et al., 2024), many report that these tools lack a human touch (Teng, 2024a). This highlights the crucial role teachers play in providing the relational elements essential to students’ learning. Even when GenAI is used as a source of feedback, teacher scaffolding—as discussed in the preceding paragraph—along with teacher feedback and ongoing support, remains indispensable throughout the writing process.

In addition to complementing GenAI feedback with their own insights, teachers can create opportunities for face-to-face conferences with students. These sessions enable students to raise questions and concerns arising from GenAI feedback, pose metacognitive questions regarding their writing, and seek clarification. Such interactions foster deeper engagement and support the development of reflective and critical thinking skills.

Role of Students

The role of students in using GenAI in their writing is of paramount importance, as the ultimate goal of feedback is to cultivate independent learners capable of self-evaluation. In GenAI-supported writing classrooms, teachers should work towards empowering students to take ownership of their learning. To prevent students’ passive and uncritical acceptance of GenAI-generated feedback, teachers should position students as active inquirers, guiding them to prompt GenAI with meaningful questions and critically reflect on the relevance of its responses. This involves providing explicit instruction in crafting effective prompts, interpreting GenAI-generated suggestions, and evaluating their relevance and accuracy. Teachers can nurture students’ capacity to be both critical evaluators of GenAI-generated information and reflective self-assessors of their own writing. As they develop the former, they learn to analyze and critique information, identify biases and misconceptions, and make informed decisions—skills that also help them avoid academic dishonesty. As they develop the latter, they build evaluative capacity, exercise autonomy, and reduce dependence on external feedback.

Ultimately, students must learn to self-regulate their writing development, gradually reducing reliance on both teachers and GenAI tools. To support this, teachers should guide students in critically reflecting on GenAI’s suggestions and articulating the rationale behind their decisions to adopt or reject specific recommendations. Writing annotations to explain these choices can help generate internal feedback (Nicol, 2019) and enhance students’ metacognitive awareness (Yan & Zhang, 2024).

Additionally, when students receive feedback from multiple sources, including GenAI, teachers can guide them in comparing and evaluating the input they receive. For example, juxtaposing peer feedback with GenAI-generated feedback enables students to reflect on what they can learn by identifying similarities and discrepancies across sources (Nicol & McCallum, 2022). This process helps students make informed decisions about which feedback to accept or reject—and why. In doing so, they generate internal feedback, which deepens metacognitive awareness.

In summary, for the benefits of GenAI to outweigh its drawbacks, it is imperative that teachers guide students in its use, with the aim of fostering self-regulation and evaluative capacity. This is especially important in school writing contexts, where good AI habits should be cultivated from an early age. To maximize the benefits of GenAI and minimize its risks, teachers should establish pedagogical guidelines for its use in writing classrooms. These guidelines ought to be grounded in several foundational principles:

  1. Teachers should be guided by a long-term vision that prioritizes student learning (specifically, fostering metacognitive awareness, learner agency, and responsible AI use) rather than focusing solely on improving textual quality. The goal is to empower students to take control of AI and use it responsibly, meaningfully, and productively.
  2. Student-centered principles such as fostering learner agency, self-regulated learning, and critical thinking should form the cornerstone of teacher scaffolding to support students’ use of AI in writing.
  3. Writing is a process; therefore, meaningful AI integration should recognize and support writing as a process rather than a mere product.

Implications and Conclusion

Pedagogical Implications

Younger learners in primary, secondary, and junior college contexts often lack the metacognitive skills needed to evaluate or regulate their use of GenAI-generated feedback. To support meaningful engagement, teachers play a crucial scaffolding role in modeling how to critically interact with GenAI outputs and guiding students in distinguishing between constructive and misleading responses. Rather than replacing teacher feedback, GenAI tools should be introduced as complementary supports that enrich classroom feedback practices. Effective pedagogical integration may include teacher-led discussions about the quality and limitations of GenAI feedback, collaborative revision activities that incorporate GenAI suggestions, and explicit instruction in digital and ethical literacy. This emphasis on classroom-based mediation and instructional design foregrounds the pedagogical work required to make GenAI feedback meaningful and developmentally appropriate for younger learners.

While it generally makes sense for both teachers and students to harness GenAI for writing feedback, doing so requires careful consideration of the tool’s Janus-faced nature—its potential to both support and undermine learning. Given the limited research on GenAI use in school writing contexts, it is premature to draw definitive conclusions about its overall impact. Nonetheless, dismissing its valuable contributions would probably be unwise.

Theoretical Implications

Although intended as a practice-oriented paper targeting pre- and in-service teachers, this article also has potential theoretical implications. From a theoretical standpoint, this paper extends discussions of GenAI use beyond higher-education contexts to the sociocultural dynamics of school-based writing. Drawing on a sociocultural perspective (Vygotsky, 1978; Wertsch, 1991), GenAI-mediated feedback can be understood as a new form of mediated interaction between learner and tool—one that reshapes how roles, responsibilities, and learning resources are distributed and negotiated within the writing classroom. Viewed through the lens of activity theory (Chapwanya, 2025), for example, GenAI functions as a mediating tool that influences how learners participate in the writing process and engage with feedback. For pre-tertiary learners, this interaction underscores the importance of teacher mediation as a cultural tool that supports the internalization of writing-related knowledge and practices introduced during instruction (Poh, 2021), while also highlighting how student learning is shaped by the broader activity system—including the tools employed, the community of learners and educators, the division of labour in classroom roles, and the intended learning outcomes.

Ethical Implications

In school settings, ethical concerns about using GenAI—such as issues of honesty, fairness, and dependence—are especially important because students are still developing their sense of responsibility and digital maturity. Ethical integration of GenAI in school writing requires explicit instruction in responsible use, including discussions about authorship, transparency, and the limitations of GenAI-generated feedback. Schools and teachers should establish clear guidelines that promote honesty, intellectual ownership, and respect for the learning process. More than just ensuring compliance, fostering ethical awareness from a young age helps students develop the judgment and habits needed to use GenAI thoughtfully and responsibly as part of their learning journey.

In this article, we have examined the benefits of GenAI-supported writing feedback for both teachers and students, alongside its limitations, challenges, and ethical concerns. We conclude that, as long as teachers adhere to sound pedagogical principles—by equipping students with the knowledge and skills to use GenAI tools responsibly and meaningfully, and by cultivating good GenAI habits from an early age—GenAI can serve as a friend rather than a foe.

Acknowledgements

This article is informed by a research and development project funded by the Singapore Ministry of Education (MOE) under the Education Research Funding Programme (ERFP;16/23 IL), administered by the National Institute of Education (NIE), Nanyang Technological University, Singapore. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the Singapore MOE and NIE.

About the Authors

Soon Koh Poh is a lecturer at the National Institute of Education, Nanyang Technological University, Singapore. His research focuses on teacher education, particularly the theory–practice relationship in beginning teachers’ learning. His work draws on sociocultural perspectives to explore how teachers develop practical knowledge and pedagogical strategies for diverse 21st-century classrooms. He investigates how cultural psychological perspectives inform teachers’ beliefs, identities, and professional learning. ORCID ID: 0000-0002-1192-4044

Icy Lee is Professor of Education (TESOL & Language Education) at the National Institute of Education, Nanyang Technological University, Singapore. Her main research interests are second language writing and second language teacher education. She formerly served as Co-editor of the Journal of Second Language Writing and is currently Co-editor of its Disciplinary Dialogues section. In addition, she is Principal Associate Editor of The Asia-Pacific Education Researcher and Co-editor of the International Journal of Christianity & English Language Teaching. ORCID ID: 0000-0003-0749-3234

To Cite this Article

Poh, S. K., & Lee, I. (2026). Generative artificial intelligence (GenAI) for feedback in school writing: Friend or foe? Teaching English as a Second Language Electronic Journal (TESL-EJ), 29(4). https://doi.org/10.55593/ej.29116int

References

Al-Kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics, 11(3), 58. https://doi.org/10.3390/informatics11030058

Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024). Feedback sources in essay writing: peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education, 21(1), 23. https://doi.org/10.1186/s41239-024-00455-4

Barrot, J. S. (2024). Leveraging ChatGPT in the writing classrooms: Theoretical and practical insights. Language Teaching Research Quarterly, 43, 43-53. https://doi.org/10.32038/ltrq.2024.43.03

Behzad, S., Kashefi, O., & Somasundaran, S. (2024). Assessing online writing feedback resources: Generative AI vs. good samaritans. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). https://aclanthology.org/2024.lrec-main.144/

Biju, N., Abdelrasheed, N. S. G., Bakiyeva, K., Prasad, K., & Jember, B. (2024). Which one? AI-assisted language assessment or paper format: An exploration of the impacts on foreign language anxiety, learning attitudes, motivation, and writing performance. Language Testing in Asia, 14(1), 45. https://doi.org/10.1186/s40468-024-00322-z

Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education, 35(3), 291-300. https://doi.org/10.1080/02602931003650045

Bui, N. M., & Barrot, J. S. (2025). ChatGPT as an automated essay scoring tool in the writing classrooms: how it compares with human scoring. Education and Information Technologies, 30(2), 2041-2058. https://doi.org/10.1007/s10639-024-12891-w

Carlson, M., Pack, A., & Escalante, J. (2023). Utilizing OpenAI’s GPT-4 for written feedback. TESOL Journal, e759. https://doi.org/10.1002/tesj.759

Chai, S. S., Ting, S. H., Goh, K. L., Chang, Y. H. R., Wee, B. L., Novita, D., & Karthikeyan, J. (2024). Beyond digital interfaces: The human element in online teaching and its influence on student experiences. PloS one, 19(7), e0307262. https://doi.org/10.1371/journal.pone.0307262

Chapwanya, O. (2025). Exploring the teacher’s role amid rising generative AI: An activity theory analysis in further education. Studies in Technology Enhanced Learning, 4(3). https://doi.org/10.21428/8c225f6e.4d227fbd

Dickinson, P., & Adams, J. (2017). Values in evaluation – The use of rubrics. Evaluation and Program Planning, 65, 113-116. https://doi.org/10.1016/j.evalprogplan.2017.07.005

Dizon, G., Gold, J., & Barnes, R. (2025). ChatGPT for self-regulated language learning: University English as a foreign language students’ practices and perceptions. Digital Applied Linguistics, 3, 102510-102510. https://doi.org/10.29140/dal.v3.102510

Feng, H., Li, K., & Zhang, L. J. (2025). What does AI bring to second language writing? A systematic review (2014-2024). Language Learning & Technology, 29(1). https://doi.org/10.64152/10125/73629

Flavell, J. H., & Wellman, H. M. (1977). Metamemory. In R. V. Kail & J. W. Hagen (Eds.), Perspectives on the development of memory and cognition (pp. 3-33). Lawrence Erlbaum.

Hao, Z., Fang, F., & Peng, J.-E. (2024). The integration of AI technology and critical thinking in English major education in China: Opportunities, challenges, and future prospects. Digital Applied Linguistics, 1, 2256-2256. https://doi.org/10.29140/dal.v1.2256

Huang, J. (2024). Enhancing EFL speaking feedback with ChatGPT’s voice prompts. International Journal of TESOL Studies, 6(3). https://doi.org/10.58304/ijts.20240302

Jacob, S. R., Tate, T., & Warschauer, M. (2025). Emergent AI-assisted discourse: a case study of a second language writer authoring with ChatGPT. Journal of China Computer-Assisted Language Learning, 5(1), 1-22. https://doi.org/10.1515/jccall-2024-0011

Jensen, E., Skibsted, E. B., & Christensen, M. V. (2015). Educating teachers focusing on the development of reflective and relational competences. Educational Research for Policy and Practice, 14(3), 201-212. https://doi.org/10.1007/s10671-015-9185-0

Kim, J., Yu, S., Detrick, R., & Li, N. (2025). Exploring students’ perspectives on Generative AI-assisted academic writing. Education and Information Technologies, 30(1), 1265-1300. https://doi.org/10.1007/s10639-024-12878-7

Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537-550. https://doi.org/10.1177/00336882231162868

Kostka, I., & Toncelli, R. (2023). Exploring applications of ChatGPT to English language teaching: Opportunities, challenges, and recommendations. TESL-EJ, 27(3). https://doi.org/10.55593/ej.27107int

Lee, I. (2004). Error correction in L2 secondary writing classrooms: The case of Hong Kong. Journal of Second Language Writing, 13(4), 285-312. https://doi.org/10.1016/j.jslw.2004.08.001

Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer. https://doi.org/10.1007/978-981-10-3924-9

Lee, I. (2021). The development of feedback literacy for writing teachers. TESOL Quarterly, 55(3), 1048-1059. https://doi.org/10.1002/tesq.3012

Lee, I., & Mao, Z. (2025). Why feedback fails in conventional writing classrooms. RELC Journal. Advance online publication. https://doi.org/10.1177/00336882251340720

Levine, S., Beck, S. W., Mah, C., Phalen, L., & Pittman, J. (2025). How do students use ChatGPT as a writing support? Journal of Adolescent & Adult Literacy, 68(5), 445-457. https://doi.org/10.1002/jaal.1373

Li, H., & Pan, L. (2024). Network of discourses: Resistance and negotiation within Chinese students’ AI-assisted EFL writing. International Journal of TESOL Studies, 6(3), 128-145. https://doi.org/10.58304/ijts.20240309

Li, Z., Liang, C., Peng, J., & Yin, M. (2024). The value, benefits, and concerns of generative AI-powered assistance in writing. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3613904.3642625

Meyer, J., Jansen, T., Schiller, R., Liebenow, L. W., Steinbach, M., Horbach, A., & Fleckenstein, J. (2024). Using LLMs to bring evidence-based feedback into the classroom: AI-generated feedback increases secondary students’ text revision, motivation, and positive emotions. Computers and Education: Artificial Intelligence, 6, 100199. https://doi.org/10.1016/j.caeai.2023.100199

Nicol, D. (2019). Reconceptualising feedback as an internal not an external process. Italian Journal of Educational Research, 71-84. https://ojs.pensamultimedia.it/index.php/sird/article/view/3270

Nicol, D., & McCallum, S. (2022). Making internal feedback explicit: Exploiting the multiple comparisons that occur during peer review. Assessment & Evaluation in Higher Education, 47(3), 424-443. https://doi.org/10.1080/02602938.2021.1924620

Poh, S. K. (2021). English language teachers’ appropriation of tools in the Singapore classrooms: A socio-cultural analysis. Asia Pacific Journal of Education, 41(4), 740-753. https://doi.org/10.1080/02188791.2021.1997706

Reeves, C., & Sylvia, J. J., IV. (2024). Generative AI in technical communication: A review of research from 2023 to 2024. Journal of Technical Writing and Communication, 54(4), 439-462. https://doi.org/10.1177/00472816241260043

Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. B. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91, 101894. https://doi.org/10.1016/j.learninstruc.2024.101894

Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57, 100752. https://doi.org/10.1016/j.asw.2023.100752

Teng, M. F. (2024a). “ChatGPT is the companion, not enemies”: EFL learners’ perceptions and experiences in using ChatGPT for feedback in writing. Computers and Education: Artificial Intelligence, 7, 100270. https://doi.org/10.1016/j.caeai.2024.100270

Teng, M. F. (2024b). A systematic review of ChatGPT for English as a foreign language writing: Opportunities, challenges, and recommendations. International Journal of TESOL Studies, 6(3). https://doi.org/10.58304/ijts.20240304

Teng, M. F. (2025a). Metacognitive awareness and EFL learners’ perceptions and experiences in utilising ChatGPT for writing feedback. European Journal of Education, 60(1), e12811. https://doi.org/10.1111/ejed.12811

Teng, M. F. (2025b). Understanding EFL student writers’ metacognitive awareness in utilizing ChatGPT. System, 103848. https://doi.org/10.1016/j.system.2025.103848

Tseng, W., & Warschauer, M. (2023). AI-writing tools in education: If you can’t beat them, join them. Journal of China Computer-Assisted Language Learning, 3(2), 258-262. https://doi.org/10.1515/jccall-2023-0008

Ulla, M. B., & Teng, M. F. (2024). Generative artificial intelligence (AI) applications in TESOL: Opportunities, issues, and perspectives. International Journal of TESOL Studies, 6(3), 1-3. https://doi.org/10.58304/ijts.20240301

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.

Wang, C. (2024). Exploring students’ generative AI-assisted writing processes: Perceptions and experiences from native and nonnative English speakers. Technology, Knowledge and Learning, 1-22. https://doi.org/10.1007/s10758-024-09744-3

Wertsch, J. V. (1991). Voices of the mind: A sociocultural approach to mediated action. Harvard University Press.

Woo, D. J., Guo, K., & Susanto, H. (2025). Exploring EFL students’ prompt engineering in human–AI story writing: an activity theory perspective. Interactive Learning Environments, 33(1), 863-882. https://doi.org/10.1080/10494820.2024.2361381

Yan, D., & Zhang, S. (2024). L2 writer engagement with automated written corrective feedback provided by ChatGPT: A mixed-method multiple case study. Humanities and Social Sciences Communications, 11(1), 1-14. https://doi.org/10.1057/s41599-024-03543-y

Yeo, M. A. (2023). Academic integrity in the age of Artificial Intelligence (AI) authoring apps. TESOL Journal, 14(3), e716. https://doi.org/10.1002/tesj.716

Zeevy-Solovey, O. (2024). Comparing peer, ChatGPT, and teacher corrective feedback in EFL writing: Students’ perceptions and preferences. Technology in Language Teaching & Learning, 6(3), 1482-1482. https://doi.org/10.29140/tltl.v6n3.1482

Zhang, E. D., & Yu, S. (2024). Understanding L2 student writers’ self-assessment in digital multimodal composing: A process-oriented approach. System, 121, 103219. https://doi.org/10.1016/j.system.2024.103219

Copyright of articles rests with the authors. Please cite TESL-EJ appropriately.
Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.

© 1994–2026 TESL-EJ, ISSN 1072-4303