A Case Study on the Uptake of Suggestions in Online Synchronous Writing Center Sessions

February 2023 – Volume 26, Number 4

https://doi.org/10.55593/ej.26104a2

Olga Muranova
The University of California, Irvine, US
<omuranovatmarkuci.edu>

Svetlana Koltovskaia
Northeastern State University, US
<koltovskatmarknsuok.edu>

Michol Miller
University of Hawaiʻi at Mānoa, US
<micholatmarkhawaii.edu>

Abstract

Despite suggestions being a common speech act used by writing center tutors, very limited research is available on their use in online writing center practice. Drawing upon multiple sources of data, including the chat transcript, the screen recording of the session, and the final revised version of the writer’s text, this case study explores the types and frequency of the suggestion strategies employed by a tutor and the degree of writer uptake of those suggestions in a synchronous online writing center session. The findings indicate that the tutor’s use of suggestions throughout the session led to only a partial revision of the writer’s text. While factors contributing to the partial revision included overuse of indirect suggestion linguistic realization strategies (SLRSs) and addressing multiple errors at the same time, using multiple and more direct SLRSs appeared to contribute to successful uptake. The observations of the study suggest that, to increase the degree of uptake in online synchronous writing center sessions, tutors might consider addressing one error at a time, utilizing multiple suggestion strategies per error, and favoring more direct over indirect suggestion strategies.

Keywords: Online synchronous writing center sessions, speech acts, suggestions, suggestion linguistic realization strategies (SLRSs), uptake.

Suggestions are one of the most widely used speech acts in writing center practice because providing feedback on writing is considered the main purpose of writing center sessions (Fujioka, 2012). Cognizant of the importance of suggestions in writing center consultations, a number of studies have explored their usage in tutor-writer interactions (e.g., Mackiewicz, 2005; Storch & Wigglesworth, 2010). These studies focused on face-to-face sessions, however, and little is known about the use of suggestions in online sessions. Providing online services, including asynchronous and synchronous tutoring sessions, has become a very common practice in many writing centers, as such sessions offer more flexibility and accessibility than traditional face-to-face sessions (Holtz, 2014; Paiz, 2018; Ries, 2015).

While much is already known about online asynchronous sessions (Kourbani, 2018; Mick & Middlebrook, 2015; Neaderhiser & Wolfe, 2009; Severino & Prim, 2015; Weirick et al., 2017), little empirical research has focused on online synchronous sessions, most likely because they are typically considered similar to face-to-face sessions. However, as noted by Kastman Breuch and Racine (2000), the strategies used in face-to-face sessions, including the use of suggestions, cannot be easily transferred to online synchronous consultations. Therefore, the present study focuses on the use of suggestions in an online synchronous chat-based writing center session. Specifically, it includes the analysis of the types and frequency of suggestions provided by the tutor and the degree of writer uptake of those suggestions. The naturalistic case study presented in this paper took place in a writing center at a large Southwestern United States university. The findings offer new insights into the complexities of the use of different suggestion strategies and their uptake in an online synchronous consultation. Consequently, these insights can contribute to the improvement of the quality of online writing center sessions.

Literature Review

Online Writing Center Sessions: Asynchronous and Synchronous Methods

Today, many writing centers, especially in North America, offer online tutoring sessions (also known as online conferences or online consultations) to assist a diverse population of writers (Kourbani, 2018; Weirick, Davis, & Lawson, 2017). Online tutoring sessions have many benefits, including time efficiency, cost effectiveness, and accessibility for writers across different geographical and physical locations (Bandi-Rao, 2009; Melkun, 2010; Paiz, 2018; Ries, 2015; Wolfe & Griffin, 2012). They also increase the possibilities for “universally inclusive and accessible online writing support” that can serve learners with “varied preferences and access needs” (Martinez & Olson, 2015, p. 190), including students with special needs and disabilities (Ries, 2015). Part of this flexibility comes from the fact that online tutoring sessions can take place either asynchronously or synchronously (Holtz, 2014). Asynchronous tutoring sessions are characterized as “interactions [that] occur with a time lag between and among them… in ‘non-real’ time,” while synchronous sessions occur with no time lag or a very small one, taking place in real or “near-real” time (Mick & Middlebrook, 2015, pp. 129-130). Asynchronous tutoring methods include email, discussion boards, or online collaboration platforms, while synchronous methods may include real-time text-based chat, video-conferencing, or even screen-sharing options (Neaderhiser & Wolfe, 2009, pp. 54-58).

A number of studies have focused on both of the aforementioned methods, but online asynchronous tutoring methods have been more extensively investigated empirically (Kourbani, 2018; Severino & Prim, 2015; Weirick et al., 2017) than synchronous methods. Kourbani (2018) examined the impact of asynchronous online feedback on tutees’ learning and text revision. The results revealed that asynchronous online feedback had positive effects on text revision, especially for lower-order concerns such as grammar and mechanics. Severino and Prim (2015) also focused on online asynchronous tutor feedback; in particular, they investigated tutors’ suggestions and comments on writers’ word choice. They found that tutors mainly utilized the speech acts of corrections (48%), questions (25%), and explanations (12%) in response to writers’ word choice errors. The authors suggest that tutors need to be trained to recognize different word choice errors so that they can choose the most appropriate speech acts for responding to errors in different asynchronous contexts. Unlike the above-mentioned studies, which focused on the feedback provided to non-native speaking writers, Weirick et al. (2017) examined the online asynchronous tutor feedback offered to both native and non-native writers. Their findings demonstrate that, in both situations, tutor feedback included questions, explanations (comments), suggestions (advice), recasts (correction), and qualified criticism. However, both the forms and the focus of tutor feedback differed between native and non-native writers: native writers received feedback on content in the form of questions and qualified criticism, while non-native writers mainly received feedback on grammatical correctness and clarity in the form of recasts and questions. Furthermore, tutors tended to use more overtly directive forms of feedback, including recasts and criticism, for both groups of writers.

Although these studies provide insights into different strategies that tutors use in asynchronous online environments to provide feedback on writers’ texts, relatively little is known about strategies used in synchronous online tutoring. Many tutors assume that online synchronous tutoring is largely similar to face-to-face tutoring since both occur in real time. Indeed, the findings of Bandi-Rao’s (2009) study of online synchronous and face-to-face writing center sessions indicate no significant differences between the two approaches in terms of clients’ satisfaction with the effectiveness of the tutorial and the convenience of working with a writing center tutor. However, this does not mean that strategies used in face-to-face sessions can easily be transferred to online synchronous tutoring, as synchronous sessions can take a variety of forms such as interactions via text-based chat, video-conferencing, and audio-conferencing (Kastman Breuch & Racine, 2000). As shown in previous studies, online synchronous sessions have certain features making them distinct from traditional face-to-face sessions. For example, as noted by Van Horne (2012), in a synchronous online session, writers tend to take responsibility for developing a plan for revision as well as for improving their own understanding of the writing task at hand, making the discussion during an online synchronous session more dynamic than in a face-to-face session. This corresponds with Pritchard and Morrow’s (2017) observation that student writers participating in online synchronous sessions tend to suggest their own ideas and solutions rather than rely solely on the tutor’s corrections, comments, or practical recommendations. As noted by Melkun (2010), writing center tutors also tend to think that clients are often more engaged and more motivated to revise their work during online sessions. At the same time, online written and multimodal communication can make interactions between the participants of online synchronous sessions more active when compared to face-to-face conversations (Magnifico, Woodard, & McCarthey, 2019). Furthermore, online synchronous sessions demonstrate a high level of collaboration between the participants, which is achieved through shared interaction, peer feedback, and the dialogic nature of this type of communication (Hewett, 2006; Van Horne, 2012; Magnifico, Woodard, & McCarthey, 2019; Melkun, 2010). In addition, research shows that online sessions tend to be more democratic, as they give clients greater control of the conferencing session (Melkun, 2010).

On the other hand, online synchronous consultations lack the degree of immediacy and spontaneity typically observed in face-to-face conversations and involve a higher degree of complexity caused by the technical aspects of online communication (Bandi-Rao, 2009; Pritchard & Morrow, 2017). In addition, some writers may not have sufficient prior experience with communicating effectively online (Pritchard & Morrow, 2017). They may also experience digital fatigue because of the prolonged use of computer-mediated communication platforms and the high cognitive load involved in interacting with these platforms (Nadler, 2020), which is likely to reduce the effectiveness of online synchronous sessions.

To sum up, previous research has shown that online synchronous sessions have certain characteristics making them different from face-to-face sessions. For example, due to their multimodal nature, online synchronous sessions tend to be more conducive to active communication and collaboration between participants than traditional face-to-face sessions. At the same time, the effectiveness of online synchronous sessions depends heavily on the affordances and limitations of the technologies employed for communicating online. The degree of participants’ confidence and their level of comfort regarding the use of those technologies can also greatly impact the success of online synchronous sessions.

The above-mentioned characteristics of online synchronous sessions, including their multimodality as well as the lack of immediacy and spontaneity, may influence the choices and the outcomes of different communicative strategies used by writing center tutors. According to Stickman (2014), because typed communication takes more time and care than speaking to develop an idea clearly, online tutoring calls for more direct communication and less suggestion, as it lacks the facial expressions, gestures, and body language that enhance face-to-face communication. Comments thus tend to take more forethought in online sessions than in traditional face-to-face sessions (Kuriscak, 2010). Given the unique pragmatic and communicative challenges resulting from these circumstances, more research on effective tutoring strategies in online synchronous writing center sessions is warranted.

Speech Acts and the Use of Suggestions in Writing Center Sessions

Because online synchronous sessions are often based on a combination of different modalities (e.g., video, audio, and text-based chat), each unique tutor-writer interaction may pose different challenges. Unlike online synchronous video interactions, audio-only or text-based interactions lack visual cues, which may result in miscommunication between tutors and writers. Therefore, tutors should carefully consider the appropriate use of language in order to ensure successful communication in online contexts. To this end, Fujioka (2012) highlights the role that an understanding of speech acts can play in effective writing center interactions. Speech acts are utterances that serve a certain function in communication (Green, 2007), based on the understanding that people use language not just to say things but also to perform actions (Austin, 1962). They may consist of one or several words or sentences (Adolphs, 2008), and they typically require not only knowledge of the language but also an appropriate use of that language within a given culture (Levinson, 1983).

Speech acts are pervasive in writing center interactions; for example, tutors compliment students’ writing, request information about the content and purposes of writing, and offer suggestions in order to help students improve their writing (Fujioka, 2012). Since the writing center is where writers go to receive advice on their writing, suggestions comprise an important part of writing center interactions. Suggestions belong to a class of directive and face-threatening speech acts in which a speaker attempts to get a hearer to do something (Searle, 1976), aiming to stimulate the hearer to perform an action that will potentially benefit the hearer (Rintell, 1979). They are usually considered components of a broader speech act category that may also include advice, proposals, and recommendations (Jiang, 2006). Effective use of suggestions in writing center sessions is essential to ensuring successful communication between tutors and writers (Fujioka, 2012).

Despite the fact that suggestions are so commonly used in tutoring sessions, surprisingly, very limited research is available on the use of suggestions in writing center practice. For example, Mackiewicz (2005) investigated the frequency with which tutors used the non-conventional indirect suggestion strategies of “hints” in their suggestions to engineering students in face-to-face sessions as well as the benefits and drawbacks of using hints in tutoring sessions. The findings revealed that tutors used hints frequently in their suggestions because they believed that hints generate politeness and help avoid shaking students’ confidence. However, according to the researcher, the use of hints is likely to lead to miscommunication, as such indirect suggestion strategies may either go unnoticed or be misinterpreted by writers.

Previous related studies have focused on the use of suggestions in face-to-face tutoring sessions, with little attention paid to the use of suggestions in online tutoring sessions; in addition, there has been little focus on writers’ uptake of tutors’ suggestions. In order to help tutors communicate more effectively in online synchronous environments by using suggestions appropriately, it is imperative to explore the degree of uptake in response to those suggestions. Furthermore, since the degree of uptake of tutor suggestions may decrease in online synchronous text-based interactions due to the lack of visual cues, more empirical research on this type of online tutoring session is needed. To fill these gaps, the current naturalistic case study analyzes one online synchronous tutoring session conducted by a non-native-English-speaking tutor for a non-native-English-speaking writer to answer the following research questions:

  1. What types of suggestion strategies does the tutor utilize in response to writer errors during a single online synchronous chat-based writing center session?
  2. What is the degree of writer uptake in response to the tutor’s suggestion strategies in a single online synchronous chat-based writing center session?

Methods

Design

The case study design approach was adopted for this study because it provides an in-depth and contextualized understanding of tutor-writer interactions in the online environment (Yin, 2009). Specifically, the single-case study design was chosen to explore the relationship between the types and frequencies of suggestions offered by the tutor and the resulting degree of writer uptake in an online synchronous chat-based session.

Context

The study was conducted at the Writing Center at a large Southwestern university in the U.S. Sessions at the center are conducted in both face-to-face and online synchronous environments; online sessions take place using the WCOnline platform, which combines audio, video, and text-based chat functions with collaborative text editing (see Fig. 1). Tutors and writers choose audio, video, or text-based chat communication to carry out the session, while the text is edited in real-time within the platform. The philosophy of the Writing Center is that revision must be carried out collaboratively; this means that writers are responsible for making direct changes to their work, while tutors may only provide suggestions for revision and improvement but are trained not to edit the text directly.


Fig. 1. WCOnline synchronous session interface

Participants

One tutor-writer pair agreed to participate in the study and signed the informed consent form approved by the university’s IRB. Along with the consent form, both participants were asked to complete a basic demographic survey. The tutor was a female L1-Arabic speaker in her fifth year of a PhD program in TESOL. At the time of the study, she was in her second semester working at the writing center. It is noteworthy that all tutors are trained to work in face-to-face and online environments and are required to complete the Writing Center Theory and Pedagogy course prior to tutoring. The writer was a male undergraduate L1-Brazilian Portuguese speaker majoring in business. As indicated in the appointment form, it was his first time using online writing center services. Both participants were between 30 and 40 years old.

Procedure

The online session took place during the Fall 2018 semester and lasted 50 minutes; the two participants opted for text-based chat communication, without using the available audio and video options. The writer’s text provided for revision was the final version of a summary assignment for a business course, and he made a specific request to focus on grammar during the session because, according to him, he was “not good at it.”

Data collection

The following data were extracted from the WCOnline platform for the online tutoring session: the chat transcript, a screen recording showing the chat interaction and real-time changes in the writer’s text, and the final edited version of the writer’s text.

Data analysis

Data were triangulated through primarily qualitative analysis of three data sources collected from the session: the chat transcript, the screen recording, and the final revised version of the text. The analysis of both the session screen recording and the chat transcript was carried out in multiple phases. To enhance the qualitative analysis, additional quantitative analysis consisting of simple statistical calculations was carried out. Two units of analysis were employed: the Language Related Episode (LRE), used to examine the suggestions given during the session and the errors they focused on, and the Suggestion Linguistic Realization Strategy (SLRS), used to examine the ways in which those suggestions were linguistically realized.

In the first phase of the analysis, suggestions given by the tutor were identified in the transcript in the order of their occurrence during the session. Suggestions were operationalized as the global or specific problems determined by the tutor’s evaluation (Thonus, 1999) so that each suggestion corresponds to one error in the text. Possible errors included lower-order concerns (e.g., grammar and mechanics) and higher-order concerns (e.g., organization and idea development). Next, the suggestions were matched to the corresponding errors in the writer’s text using the concept of Language Related Episodes (LRE) as the unit of analysis. LREs were operationalized as any segment in the data in which there was an explicit focus on language or usage items, so that one LRE corresponds to one error (see p. 12 for an example LRE). LREs vary in length and can be interrupted and returned to in a non-linear fashion (Storch & Wigglesworth, 2010). Interrater agreement rate for identifying LREs was 100%.
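As a purely illustrative sketch (not part of the study’s instruments), one coded LRE could be represented as a simple record like the following; the field names are hypothetical, and the values loosely follow LRE 8 discussed in the Findings below:

```python
# Hypothetical representation of one coded Language Related Episode (LRE).
# Field names are illustrative only; values loosely follow LRE 8 in the Findings.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LRE:
    lre_id: int                    # order of occurrence in the session
    error_focus: str               # the single error the episode addresses
    chat_timestamps: List[str] = field(default_factory=list)  # turns where the LRE is raised or resumed
    slrs_codes: List[str] = field(default_factory=list)       # SLRSs used within this LRE (Table 1 labels)
    uptake: str = "unsuccessful"   # successful / attempted but unsuccessful / unsuccessful

lre8 = LRE(
    lre_id=8,
    error_focus="sentence structure (use of 'for' to give a reason)",
    chat_timestamps=["18:50", "18:51"],
    slrs_codes=["Direct", "CF: Possibility", "CF: Possibility", "CF: Need"],
    uptake="successful",
)
```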

The second phase was to classify the suggestions given by the tutor based on the taxonomy developed by Martinez-Flor (2010), shown in Table 1 below. Then, the number and type of suggestion linguistic realization strategies (hereafter referred to as SLRSs) utilized by the tutor per suggestion were coded using the same taxonomy. SLRSs were operationalized as “the different linguistic forms that may be employed when making suggestions in a variety of situations” (Martinez-Flor, 2005, p. 173).

Table 1. Taxonomy of suggestion linguistic realization strategies (from Martinez-Flor, 2010, p. 259)

Type: Direct
  Performative Verb: I suggest that you… / I advise you to… / I recommend that you…
  Noun of Suggestion: My suggestion would be…
  Imperative: Try using…
  Negative Imperative: Don’t try to…

Type: Conventionalized Forms
  Specific Formulae (interrogative forms): Why don’t you…? / How about…? / What about…? / Have you thought about…?
  Possibility/Probability: You can… / You could… / You may… / You might…
  Should: You should…
  Need: You need…
  Conditional: If I were you, I would…

Type: Indirect
  Impersonal: One thing (that you can do) would be… / Here’s one possibility: … / There are a number of options that you… / It would be helpful if you… / It might be better to… / A good idea would be… / It would be nice if…
  Hints: I’ve heard that…

In several cases, multiple SLRSs were employed within one suggestion or LRE, often occurring over several turns and interrupted by different suggestions related to other errors in a non-linear fashion. The SLRSs identified were first classified as one of three types – Direct, Conventionalized Forms, or Indirect – and then coded for the specific kind of strategy applied based on the language used by the tutor, as shown in Table 1. For example, a suggestion containing a modal verb such as “may” or “might” was coded as a Conventionalized Form expressing Possibility/Probability, while suggestions employing the verbs “suggest” or “advise” were regarded as Direct Performatives.
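To make this coding step concrete, the sketch below shows how such surface cues map onto the Table 1 labels. The category names come from the taxonomy, but the cue lists and the matching function are simplified, hypothetical illustrations rather than the coding instrument actually used in the study:

```python
# Illustrative sketch of keyword-cue coding; not the study's actual instrument.
# Category labels follow Table 1; the cue lists and matching logic are simplified assumptions.

SLRS_CUES = {
    ("Direct", "Performative Verb"): ("i suggest", "i advise", "i recommend"),
    ("Conventionalized Form", "Possibility/Probability"): ("you can", "you could", "you may", "you might"),
    ("Conventionalized Form", "Should"): ("you should",),
    ("Conventionalized Form", "Need"): ("you need",),
    ("Indirect", "Impersonal"): ("it would be helpful if", "it might be better to",
                                 "a good idea would be", "it would be nice if"),
}

def code_slrs(tutor_turn: str):
    """Return the (type, strategy) labels whose cues appear in a tutor turn."""
    text = tutor_turn.lower()
    return [label for label, cues in SLRS_CUES.items() if any(cue in text for cue in cues)]

# Example from the session transcript quoted below (LRE 8):
print(code_slrs("or instead of 'for' you can use 'because'"))
# [('Conventionalized Form', 'Possibility/Probability')]
```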

The data revealed an additional type of SLRS described as “Hybrid,” in which two or more strategies were combined into one. For example, “It would better if you could add a transition here” was assigned a dual code representing the two types and two strategies: Indirect Impersonal/Conventionalized Form – Possibility. To enhance the validity of the data coding, the inter-coder agreement rate was calculated: all three researchers coded the data individually and then discussed the results to determine the agreement rate. The interrater agreement rate for both the suggestion types and the SLRSs was 97%.
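For readers unfamiliar with simple percent agreement, the calculation behind a rate like the 97% reported above can be sketched as follows; the three label sequences are invented toy data, and only the formula (items on which all coders agree divided by all items) reflects the procedure described:

```python
# Toy sketch of simple percent agreement across three coders.
# The label sequences are invented; only the calculation (fully agreed items / all items)
# corresponds to the agreement rates reported in this section.

def percent_agreement(*coders):
    """Proportion of items on which every coder assigned the same label."""
    items = list(zip(*coders))
    agreed = sum(1 for labels in items if len(set(labels)) == 1)
    return agreed / len(items)

coder_1 = ["Indirect", "CF", "Direct", "Indirect", "Hybrid"]
coder_2 = ["Indirect", "CF", "Direct", "Indirect", "Hybrid"]
coder_3 = ["Indirect", "CF", "Direct", "CF",       "Hybrid"]

print(f"{percent_agreement(coder_1, coder_2, coder_3):.0%}")  # 80% for this toy data
```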

In the third phase, the writer’s revisions made in response to the tutor’s suggestions were analyzed in both the screen recording and the writer’s text to determine the degree of uptake for each suggestion. Timestamps from the chat transcript were matched to timestamps in the recorded session to identify the revisions the writer made in response to each suggestion offered by the tutor. For the purposes of this study, uptake was operationalized as the outcome resulting from tutor feedback in the form of a suggestion. Three degrees of uptake emerged from the data and were coded as follows: successful, when the writer responded to the suggestion by appropriately addressing the error identified by the tutor; attempted but unsuccessful, when the writer tried to correct the error based on the tutor’s comments but the attempted revision did not resolve it; and unsuccessful, when the writer did not respond to the suggestion with a revision and the error in the text remained unresolved. The interrater agreement rate for determining the degree of uptake was 100%.

Findings

Quantitative Findings

In the session, a total of 17 LREs and 38 SLRSs were identified. For each LRE and SLRS, the degree of uptake was determined as Successful, Attempted but Unsuccessful, or Unsuccessful. Four types of SLRSs were identified during the session: Direct strategies, Conventionalized Forms, Hybrid Conventionalized Form/Indirect strategies, and Indirect strategies. Indirect strategies were most frequent, making up 52.6% of all strategies used. Conventionalized Forms were second at 31.6%, followed by Hybrid Conventionalized Form/Indirect strategies and Direct strategies at 7.9% each, as shown in Table 2.

Table 2. Summary of SLRS types

                  Direct   Conventionalized Form (CF)   Hybrid CF/Indirect   Indirect
N                 3        12                           3                    20
% of all SLRSs    7.9%     31.6%                        7.9%                 52.6%

Table 3 presents the quantitative findings for the number and percentage of SLRSs and LREs for each degree of uptake. Out of 38 total SLRSs offered during the session, 50% resulted in successful uptake, 23.7% in attempted but unsuccessful uptake, and 26.3% in unsuccessful uptake. This means that out of all the tutor’s suggestion strategies aimed at improving the quality of the text, 50% were recognized and implemented by the writer. A total of 23.7% of suggestion strategies were attempted by the writer, but the writer’s changes to the text did not lead to correction of the error identified by the tutor. The remaining 26.3% of strategies employed by the tutor were unaddressed by the writer, thus resulting in unsuccessful uptake. Out of 17 total LREs identified, 41.2% were successfully resolved, 17.6% were attempted but not resolved, and 41.2% were unsuccessful. In other words, out of all the errors identified by the tutor, seven were successfully revised by the writer, while ten remained unresolved.

Table 3. Summary of the degree of uptake per SLRS and LRE

                   Successful   Attempted but Unsuccessful   Unsuccessful
Total # of SLRSs   19           9                            10
% of all SLRSs     50%          23.7%                        26.3%
Total # of LREs    7            3                            7
% of all LREs      41.2%        17.6%                        41.2%
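The percentages in Tables 2 and 3 follow directly from the raw counts reported above; the short sketch below reproduces them:

```python
# Reproducing the percentages in Tables 2 and 3 from the raw counts reported in this section.

def shares(counts, total):
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

slrs_types = {"Direct": 3, "Conventionalized Form": 12, "Hybrid CF/Indirect": 3, "Indirect": 20}
slrs_uptake = {"Successful": 19, "Attempted but unsuccessful": 9, "Unsuccessful": 10}
lre_uptake = {"Successful": 7, "Attempted but unsuccessful": 3, "Unsuccessful": 7}

print(shares(slrs_types, 38))   # {'Direct': 7.9, 'Conventionalized Form': 31.6, 'Hybrid CF/Indirect': 7.9, 'Indirect': 52.6}
print(shares(slrs_uptake, 38))  # {'Successful': 50.0, 'Attempted but unsuccessful': 23.7, 'Unsuccessful': 26.3}
print(shares(lre_uptake, 17))   # {'Successful': 41.2, 'Attempted but unsuccessful': 17.6, 'Unsuccessful': 41.2}
```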

Fig. 2 shows the types of SLRSs used in each LRE, grouped by successful, attempted but unsuccessful, and unsuccessful uptake. As the figure shows, the tutor used a combination of the following types of SLRSs across the LREs resulting in successful uptake: ten Indirect (in blue), six Conventionalized Forms (in green), and three Direct (in red), for a total of 19 SLRSs. Across the LREs resulting in attempted but unsuccessful uptake, the tutor used six Indirect (in blue), one Hybrid Conventionalized Form/Indirect (in yellow), and two Conventionalized Forms (in green), for a total of nine SLRSs. Across the LREs resulting in unsuccessful uptake, the tutor employed four Indirect (in blue), two Hybrid Conventionalized Form/Indirect (in yellow), and four Conventionalized Forms (in green), totaling 10 SLRSs. No Direct strategies (in red) were utilized in LREs resulting in attempted but unsuccessful or unsuccessful uptake. Additionally, the figure demonstrates that in LREs resulting in successful uptake, multiple SLRSs were often used. For example, in LRE 1, the tutor used a total of five SLRSs – two Direct, one Conventionalized Form, and two Indirect. Conversely, in LREs resulting in unsuccessful uptake, only one or two SLRSs were employed; for instance, in LRE 16, the tutor used two SLRSs – one Conventionalized Form and one Indirect.


Fig. 2. Degree of uptake per LRE

Qualitative Findings

Writer successful uptake. As mentioned earlier, 19 (50.0%) of the 38 total SLRSs offered during the session resulted in successful uptake – that is, the writer made appropriate revisions based on the tutor’s suggestion. For instance, in LRE 8, the tutor and the writer brainstorm some possible ways of improving the structure of the writer’s paper, including the use of an appropriate conjunction for connecting two different sentences:

Chat Transcript

C (18:48) – for, having an astonishing passion for business that developed at an early age in 1956

C (18:48) – what do you mean by “for”

C (18:48) – are you giving a reason for he became a chairman?

W (18:48) – yes im giving a reason for

C (18:50) – so, maybe we change it to be something like “Because of Bill’s astonishing passion for business that was developed at an areal age, he became the executive chairman and… (LRE 8)

C (18:50) – or you could do it differently if you wish

C (18:51) – or instead of “for” you can use “because”

C (18:51) – but you need to change the structure a lil bit (LRE 8)

C (18:51) – ok

Writer’s Text Before
For Bill’s astonishing passion for business that was developed at an areal age in 1956, he became the Executive Chairman and Chairman of the Board of Marriott international.
Writer’s Text After
Bill became the Executive Chairman and Chairman of the Board of Marriott international. Because of Bill’s astonishing passion for business that was developed at an early age in 1956, he became the CEO.

As this example shows, multiple SLRSs were used in LRE 8. First, the tutor gives a direct suggestion containing the personal pronoun “we” (“… maybe we change it to be something like…”). This direct suggestion is accompanied by further recommendations containing different kinds of Conventionalized Forms, including the Possibility SLRS (“… you could do it differently if you wish” and “… instead of “for” you can use ‘because’ ”) and the Need SLRS (“… you need to change the structure a lil bit”). This combination of a direct strategy and Conventionalized Forms stimulates the writer to make appropriate changes in his writing, resulting in successful uptake.

Writer attempted but unsuccessful uptake. Another outcome observed in the session is attempted but unsuccessful uptake, when the writer recognizes that the tutor has offered a suggestion but does not make the tutor’s intended correction in the text. In the session discussed in this study, 9 (23.7%) out of the 38 total SLRSs resulted in attempted but unsuccessful uptake. The example below shows two LREs occurring simultaneously in which the tutor employs a series of SLRSs for two separate sentences with similar structure errors. The tutor starts by identifying the first structure error with an Indirect Impersonal SLRS by stating that the sentence is not complete and indicating it by copying it into the chat (LRE 11). Without waiting for the writer to respond, the tutor makes a suggestion in the form of a Hint (“This is being…”) to address the structure error in the following sentence (LRE 12). The writer revises his word choice instead of the structure in the first sentence (“I put factual instead of opinionated”), and the structure errors in the two sentences remain unresolved.

Chat Transcript

C (19:01) this is also not complete – “Dealing with content that is not opinionated from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5) (LRE 11)

C: (19:02) “This being a challenge that has yet to be matched in the business of lodging” (LRE 12)

C (19:02) this is being … (LRE 12)

W (19:03) hold up

W (19:04) see that

W (19:04) I put factual instead of non opinionated

W (19:05) well its a quote from a source

Writer’s Text Before

Dealing with content that is not opinionated from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). (LRE 11) This being a challenge that has yet to be matched in the business of lodging. (LRE 12)

Writer’s Text After

Dealing with content that is factual from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). This being a challenge that has yet to be matched in the business of lodging.

The tutor then tries the more explicit strategy of Conventionalized Form “Need” to tell the writer again to complete the sentence (“You need to complete the sentence…”); she gives an example (“Dealing with children taught me to be fun”), but because it is not related to the writer’s own topic, he appears to be lost (“Where are u”). After clarifying the sentence she wants him to correct, she provides another suggestion in the form of a Hint by asking the writer questions (“Either a subject is missing or another clause?”). However, the writer still does not address the sentence structure errors, and instead explains his intended meaning.

Chat Transcript

C (19:05) still, you need to complete the sentence so for example ” dealing with children taught me to be fun” (LRE 11)

C (19:05) that is the structure for “dealing with…

W (19:05) where are u

C (19:06) same sentence but i am just giving an example

W (19:07) okay

C (19:07) either a subject is missing or another clause? (LRE 11)

C (19:07) so, when you say dealing with content that factual…. what happens after that? Did you learn something?

C (19:08) did you get what I mean?

W (19:09) well its information that is true

W 19:09 to believe i guess

W 19:10 read the quote

W 19:10 i was trying to say it was fun to know about Marriott’s portfolio

W 19:10 but through a quote

Writer’s Text Before

Dealing with content that is factual from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). (LRE 11) This being a challenge that has yet to be matched in the business of lodging. (LRE 12)

Writer’s Text After

Dealing with content that is factual from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). This being a challenge that has yet to be matched in the business of lodging.

The tutor tries again to use a Hybrid SLRS combining the Conventionalized Form for Possibility using “could” (“For example you could say”) with an Indirect Impersonal SLRS (“something should be here”) to model the correct sentence structure (“dealing with factual information ‘your quote’”). The tutor adds an additional Indirect Impersonal SLRS to emphasize the suggestion (“having a clearer sentence would be nice”), and the writer replies with “I understand.” After that, the tutor moves to the second sentence and gives two additional SLRSs in the form of Hints (“This being… This is being”). However, the writer mistakenly revises the first sentence instead of the second by adding “that is being” and says “Got it.” Therefore, the second sentence remains uncorrected when the tutor moves on. The tutor’s choice to simultaneously address two separate but similar sentence structure errors prevented the writer from correcting either error successfully, although he attempted to revise the text in response to the tutor’s suggestions.

Chat Transcript

C (19:11) yes I get what you say but how did that factual information affect you? for example you could say ” dealing with factual information “your quote” … something should be here (LRE 11)

C (19:12) something like ” dealing with factual content “…….” was fun…

C (19:12) if that is what you mean

C (19:12) I am sorry i feel it is confusing but having a clear sentence would be nice (LRE 11)

W (19:12) I understand

C (19:13) This being (LRE 12)

C (19:13)  this is being (LRE 12)

W: (19:14) got it

Writer’s Text Before

Dealing with content that is factual from compiling “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). (LRE 11) This being a challenge that has yet to be matched in the business of lodging. (LRE 12)

Writer’s Text After

Dealing with factual content that is being “the greatest portfolio of lodging brands, ranging from limited service to luxury hotel ownership and resorts” (Marriott, Para. #5). This being a challenge that has yet to be matched in the business of lodging.

For LREs 11 and 12, the majority of the SLRSs used by the tutor are located on the indirect end of the spectrum: two Indirect Impersonal Suggestions, four Indirect Hints, one Conventionalized Form, and one Hybrid (Conventionalized Form/Indirect). The use of a higher proportion of Indirect SLRSs prompted the writer’s attempts to improve the text but did not lead to successful revision of the errors identified by the tutor in the two simultaneous LREs.

Writer unsuccessful uptake. Finally, out of 38 total SLRSs offered during the session, 10 (26.3%) resulted in unsuccessful uptake – that is, no response to the suggestion is apparent and the error in the text remains unresolved. The example below demonstrates unsuccessful uptake of LRE 13. The tutor provides an Indirect Impersonal SLRS (LRE 13) on the structure error. Without waiting for the writer to respond to the suggestion, the tutor gives another Indirect Impersonal SLRS (LRE 14) on a different structure error in the following sentence. As a result, the writer successfully revises only in response to the latter suggestion (“but the last part is not necessary”) (LRE 14), thus failing to address the former one (“the two sentences should be together”) (LRE 13).

Chat Transcript

 

C (19:16) – the two sentences should be together (LRE 13)

C (19:17) – but the last part is not necessary (LRE 14)

Writer’s Text Before

Naturally, the point of view here is if you put the time in with any organization, as Mr. Marriott did for over 40 years to become who he is today; nor, the person of the past (LRE 13). Than, you should become a symbol of wealth through investments onto running a company just as Bill Mariott would; but, who would of thought of the change in income from the beginning till now (LRE 14).

Writer’s Text After

Naturally, the point of view here is if you put the time in with any organization, as Mr. Marriott did for over 40 years to become who he is today; nor, the person of the past (LRE 13). Than, you should become a symbol of wealth through investments onto running a company just as Bill Mariott would; or, better than he did (LRE 14).

Another example of an LRE that shows unsuccessful uptake of the tutor’s suggestion can be seen below. In this case, the tutor provides a suggestion in the form of an Indirect Hint (“trying to (what)”) (LRE 15) on the writer’s possibly misspelled word “unit.” However, the writer does not respond to the suggestion; thus, the error remains uncorrected.

Chat Transcript

 

C (19:18) – trying to (what) (LRE 15)

Writer’s Text Before 

Trying to unit (LRE 15) all affiliates onto one headquarter in lone star state of Texas made it merrier.

Writer’s Text After 

Trying to unit (LRE 15) all affiliates onto one headquarter in lone star state of Texas made it merrier.

As these examples show, the tutor’s provision of suggestions on two different issues at the same time and her use of Indirect SLRSs prevented the writer from attending to her feedback, thus resulting in no revisions.

Discussion and Conclusions

This case study has investigated a) the different kinds of suggestion strategies utilized by a writing center tutor during a single writing center session and b) the degree of writer uptake in response to the tutor’s suggestion strategies. The findings show that the tutor used four types of SLRSs: Direct, Conventionalized Forms, Hybrid Conventionalized Forms/Indirect, and Indirect. Half of the SLRSs and 41.2% of all LREs that occurred during the session resulted in successful uptake, while the remaining SLRSs and LREs resulted in either attempted but unsuccessful or unsuccessful uptake. This indicates that the tutor’s use of suggestions throughout the session led to only a partial revision of the writer’s text. This could be explained by the proportion of different SLRS types utilized during the session: on the spectrum ranging from Direct to Conventionalized Forms to Indirect suggestion strategies, a majority (52.6%) of the SLRSs employed by the tutor were Indirect.

When the tutor employed SLRSs located on the indirect end of the directness spectrum, successful uptake was hindered. This might be because Indirect suggestions do not name a specific error explicitly, which could have made it difficult for the writer to recognize the corrective intention of the tutor’s suggestions, attend to errors, and resolve the issue. This observation is in line with Mackiewicz’s (2005) findings, according to which, even in face-to-face sessions, indirect suggestions may fail to signal to the writer the necessity of a certain change in his or her writing. In addition, as shown by the researcher, the excessive use of this type of suggestion in writing center consultations is more likely to generate opportunities for miscommunication and neglect of the tutor’s feedback. To this end, we believe that the SLRSs employed in an online synchronous chat-based writing center session should be more direct rather than indirect. This may help writers identify the corrective intent of the tutor’s suggestion, which can lead to successful revisions of writers’ texts. Conversely, if the tutor relies on indirect SLRSs, a writer may not clearly understand what revisions need to be made, or may not even interpret them as suggestions at all.

In those cases when the tutor addressed more than one error at the same time, initiating simultaneous LREs, her suggestions often led to attempted but unsuccessful uptake. The reason could be that the use of simultaneous SLRSs, especially indirect ones, prevented the writer from successfully identifying the corrective intent of each suggestion in the flow of the conversation, even though he attempted to revise the text in response to the tutor’s suggestions. We believe that providing multiple suggestions (especially indirect suggestions) on different issues at the same time in an online synchronous chat-based writing center session may prevent writers from addressing all of the suggestions offered by overloading their attention capacity, particularly in the real-time flow of online synchronous conversations. Thus, to ensure that the writer attends to an error and makes the necessary revisions based on the provided suggestion, tutors should probably address only one error at a time. Furthermore, in the online synchronous environment, writers may feel pressured to type, respond, and edit their texts quickly. According to Kim (2014), some features of synchronous computer-mediated communication such as split-turns and lack of time to respond cause “learners to spend more time and attention figuring out the flow of communication,” thus increasing learners’ cognitive load (p. 69). Therefore, writing tutors need to be aware of the non-linear nature of online synchronous communication; they should strive to maximize writers’ attention by waiting for the writer to revise one error before moving to the next, allowing them to focus on one issue at a time.

The findings also revealed that the use of SLRSs located on the more direct end of the SLRS directness spectrum resulted in successful uptake. This indicates that, for a tutor’s suggestion to be taken up successfully in an online synchronous chat-based writing center session, it should include more direct SLRSs such as Direct and Conventionalized Forms. This finding reflects Rilling’s (2005) observation that it is more expedient to identify errors with direct rather than indirect language so that students are not left doubting the corrective intent of tutors’ comments.

Additionally, the findings showed that the simultaneous use of multiple SLRSs located on the more direct end of the SLRS directness spectrum also led to successful uptake. Thus, we believe that the more SLRSs are used per LRE in an online synchronous chat-based writing center session, and the more direct those SLRSs are, the more likely successful uptake is to occur. This corresponds with Wisniewski, Zierer, and Hattie’s (2020) conclusion that the more information feedback contains, the more effective it is. As can be inferred from the results of the study presented in this paper, giving more time and repeated attention to a particular problem in the form of multiple SLRSs appears to increase the chances that the writer will notice and correctly interpret the suggestion. In addition, explicitly indicating errors with more direct SLRSs may further increase the likelihood that a writer will understand the corrective intent of the tutor’s suggestion. Therefore, tutors should consider using multiple direct SLRSs per error to focus writers’ attention more effectively so that they can detect and fix the issue in their text. Previous research shows that giving multiple suggestions in a more direct and repeated manner can help compensate for the lack of the verbal cues in online synchronous communication that are typically present in face-to-face communication (Kastman Breuch & Racine, 2000). If tutors use only one or two SLRSs per error, their suggestion is more likely to be lost over multiple turns and overlooked by the writer as the conversation continues, particularly in the absence of the verbal cues present in face-to-face interactions.

This study extends previous investigations of the use of suggestions in writing center practice in the following ways. First, this research sheds light on the specific types of SLRSs used in online synchronous writing center sessions and the factors that may influence the degree of uptake of those suggestions by the writer. Second, this project is the first attempt to investigate writers’ uptake of tutors’ suggestions in online synchronous sessions. Finally, this study adds to the limited number of empirical investigations of synchronous online writing center sessions.

However, the study is not without its limitations. It is based on data from a single tutor-writer session; thus, its results cannot be generalized. Nevertheless, the study could serve as a springboard for larger-scale studies that might investigate the generalizability of the findings presented in this paper. In addition, because the current study analyzed the uptake of suggestions provided by a non-native English-speaking tutor to a non-native English-speaking writer, the participants’ language backgrounds could have affected the tutor’s use of suggestion strategies and the writer’s uptake of those suggestions. In the future, larger-scale studies could also look into the native versus non-native English speaker dichotomy. Such studies would provide more insights into what could potentially be incorporated into tutor training. Consideration of various social and cultural factors, such as participants’ gender, age, cultural influences, first language, social background, and current length of stay in the country of the target language, could also reveal different tendencies connected with the uptake of suggestions in different communicative situations taking place in online synchronous sessions.

Finally, it would also be useful to combine the analysis of transcripts with other data elicitation and research methods, including interviews with participants, written and oral discourse completion tasks (DCTs), and statistical procedures and checks, in order to triangulate the data obtained in such studies. Studies combining various data elicitation and research methods could contribute to the development of pragmatic research and to a better, more precise understanding of different ways of increasing the effectiveness of tutors’ suggestions in online synchronous writing center sessions.

About the authors

Olga Muranova is currently a Lecturer and Co-Curricular Research Specialist in the Program in Global Languages and Communication at the University of California, Irvine. She holds a Ph.D. in English (with a specialization in TESOL/Applied Linguistics) from Oklahoma State University. Her research interests include discourse/genre analysis (especially the linguistic and rhetorical features of popular science articles), corpus linguistics, contrastive/intercultural rhetoric, stylistics, intercultural pragmatics, teaching English for Specific/Academic Purposes, and teaching ESL writing. ORCID ID: 0000-0001-9096-1224

Svetlana Koltovskaia is an Assistant Professor of English and director of ESL Academy at Northeastern State University, Tahlequah, Oklahoma. Her research centers around L2 writing, computer-assisted language learning, and L2 assessment. ORCID ID: 0000-0003-3503-7295

Michol Miller is currently a PhD candidate at the University of Hawaiʻi at Mānoa. Her primary research interests include second language materials development and teacher training for indigenous language revitalization, multilingual language teaching, intercultural pragmatics, Global Englishes, corpus linguistics, and cognitive approaches to second language acquisition. ORCID ID: 0000-0002-2464-0585

Acknowledgements

We would like to thank the anonymous reviewers for their invaluable comments on the manuscript and the participants of the study for helping us make this project come true.

To Cite this Article

Muranova, O., Koltovskaia, S., & Miller, M. (2023). A case study on the uptake of suggestions in online synchronous writing center sessions. Teaching English as a Second Language Electronic Journal (TESL-EJ), 26(4). https://doi.org/10.55593/ej.26104a2

References

Adolphs, S. (2008). Corpus and context: Investigating pragmatic functions in spoken discourse. John Benjamins Publishing Company.

Austin, J. L. (1962). How to do things with words. Clarendon Press.

Bandi-Rao, S. (2009). A comparative study of synchronous online and face-to-face writing tutorials. In T. Bastiaens, J. Dron, & C. Xin (Eds.), Proceedings of E-Learn 2009 World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 85-90). Vancouver, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved October 20, 2021 from https://www.learntechlib.org/primary/p/32435

Fujioka, M. (2012). Pragmatics in writing center tutoring: Theory and suggestions for tutoring practice. Kinki University Center for Liberal Arts and Foreign Language Education Journal, 3, 129-146.

Green, M. (2017). Speech acts. Oxford Research Encyclopedia of Linguistics.

Hewett, B. L. (2006). Synchronous online conference-based instruction: A study of whiteboard interactions and student writing. Computers and Composition, 23(1), 4-31. https://doi.org/10.1016/j.compcom.2005.12.004

Holtz, E.V. (2014). Mode, method, and medium: The affordance of online tutorials in the writing center. (Publication No. 357) [Honors Scholar Theses, University of Connecticut]. UCONN Library.

Jiang, X. (2006). Suggestions: What should ESL students know? System, 34(1), 36-54. https://doi.org/10.1016/j.system.2005.02.003

Kastman Breuch, L., & Racine, S. (2000). Developing sound tutor training for online writing centers: Creating productive peer reviewers. Computers and Composition, 17(3), 245-263. https://doi.org/10.1016/S8755-4615(00)00034-7

Kim, H. (2014). Revisiting synchronous computer-mediated communication: Learner perception and the meaning of corrective feedback. English Language Teaching, 7(9), 64-73.

Kourbani, V. (2018). Writing center asynchronous/synchronous online feedback: The relationship between e-feedback and its impact on student satisfaction, learning, and textual revision. In R. Rice & K. St. Amant (Eds.), Thinking Globally, Composing Locally: Rethinking Online Writing in the Age of the Global Internet (pp. 233-256). Utah State University Press.

Kuriscak, L. M. (2010). The effect of individual-level variables on speech act performance. In A. Martinez-Flor & E. Usó-Juan (Eds.), Speech Act Performance: Theoretical, Empirical, and Methodological Issues (pp. 23-39). John Benjamins Publishing Company.

Levinson, S. C. (1983). Pragmatics. Cambridge University Press.

Mackiewicz, J. (2005). Hinting at what they mean: Indirect suggestions in writing tutors’ interactions with engineering students. IEEE Transactions on Professional Communication, 48(4), 365-376. https://doi.org/10.1109/TPC.2005.859727

Magnifico, A. M., Woodard, R., & McCarthey, S. (2019). Teachers as co-authors of student writing: How teachers’ initiating texts influence response and revision in an online space. Computers and Composition, 52, 107-131. https://doi.org/10.1016/j.compcom.2019.01.005

Martínez-Flor, A. (2010). Suggestions: How social norms affect pragmatic behavior. In A. Martinez-Flor & E. Usó-Juan (Eds.), Speech Act Performance: Theoretical, Empirical, and Methodological Issues (pp. 23-39). John Benjamins Publishing Company.

Martinez, D., & Olsen, L. (2015). Online writing labs. In B. L. Hewett, K. E. DePew, E. Guler, & R. Z. Warner (Eds.), Foundational Practices of Online Writing Instruction (pp. 183-210). WAC Clearinghouse.

Melkun, C. H. (2010). Meeting the needs of the nontraditional student: A study of the effectiveness of synchronous online writing center tutorials (Publication No. 60971199) [Doctoral dissertation, University of Maryland, College Park]. Semantic Scholar.

Mick, C. S., & Middlebrook, G. (2015). Asynchronous and synchronous modalities. In B. L. Hewett, K. E. DePew, E. Guler, & R. Z. Warner (Eds.), Foundational Practices of Online Writing Instruction (pp. 129-148). WAC Clearinghouse.

Nadler, R. (2020). Understanding “Zoom fatigue”: Theorizing spatial dynamics as third skins in computer-mediated communication. Computers and Composition, 58, 1-17. https://doi.org/10.1016/j.compcom.2020.102613

Neaderhiser, S., & Wolfe, J. (2009). Between technological endorsement and resistance: The state of online writing centers. The Writing Center Journal, 29(1), 49-77.

Paiz, J. M. (2018). Expanding the writing center: A theoretical and practical toolkit for starting an online writing lab. TESL-EJ, 21(4), 1-19. http://www.tesl-ej.org/wordpress/volume21/ej84/ej84a1/

Pritchard, R. J., & Morrow, D. (2017). Comparison of online and face-to-face peer review of writing. Computers and Composition, 46, 87-103. https://doi.org/10.1016/j.compcom.2017.09.006

Ries, S. (2015). The online writing center: Reaching out to students with disabilities. Praxis: A Writing Center Journal, 13(1), 5-6.

Rilling, S. (2005). The development of an ESL OWL, or learning how to tutor writing online. Computers and Composition, 22, 357-374. https://doi.org/10.1016/j.compcom.2005.05.006

Rintell, E. (1979). Getting your speech act together: The pragmatic ability of second language learners. Working Papers on Bilingualism, 17, 97-106.

Searle, J. R. (1976). The classification of illocutionary acts. Language in Society, 5(1), 1-23. https://doi.org/10.1017/S0047404500006837

Severino, C., & Prim, S. (2015). Word choice errors in Chinese students’ English writing and how online writing center tutors respond to them. The Writing Center Journal, 34(2), 115-143.

Stickman, N. (2014, September 15). MC writing center introduces online tutoring. https://themississippicollegian.com/2014/09/15/mc-writing-online-tutoring/

Storch, N., & Wigglesworth, G. (2010). Learners’ processing, uptake, and retention of corrective feedback on writing: Case studies. Studies in Second Language Acquisition, 32(2), 303-334. https://doi.org/10.1017/S0272263109990532

Thonus, T. (1999). Dominance in academic writing tutorials: Gender, language proficiency, and the offering of suggestions. Discourse & Society, 10(2), 225-248. https://doi.org/10.1177/0957926599010002005

Van Horne, S. (2012). Situation definition and the online synchronous writing conference. Computers and Composition, 29(2), 93-103. https://doi.org/10.1016/j.compcom.2012.03.001

Yin, R. K. (2009). Case study research: Design and methods (4th ed.). SAGE Publications.

Weirick, J., Davis, T., & Lawson, D. (2017). Writer L1/L2 status and asynchronous online writing center feedback: Tutor response patterns. Learning Assistance Review, 22(2), 9-38.

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 1-14.  https://doi.org/10.3389/fpsyg.2019.03087

Wolfe, J., & Griffin, J. A. (2012). Comparing technologies for online writing conferences: Effects of medium on conversation. The Writing Center Journal, 32(2), 60-92.

Copyright of articles rests with the authors. Please cite TESL-EJ appropriately.
Editor’s Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.

© 1994–2023 TESL-EJ, ISSN 1072-4303