Vol. 2. No. 3 A-1 January 1997

Computer Generated Error Feedback and Writing Process: A Link

Judy F. Chen
Business English Instructor
The Overseas Chinese College of Commerce,
Taichung, Taiwan, ROC

<jfc@rs1.occc.edu.tw>

Abstract

This study examines a possible link between computer-generated feedback and changes in Taiwan EFL business writing students' writing strategies. Using computer software that measured details of students' writing, including time spent on a document, the amount of editing done to a document, the specific errors made in the document, and the amount of text copied from resource material, the author was able to perform numerous detailed analyses.

Students were randomly assigned to test and control groups, with control students receiving placebo computer feedback and test students receiving real computer-generated feedback on their errors. While the majority of feedback was teacher based and exactly the same for both groups, different writing strategies were evident in the two groups by the third assignment.

Conclusions point to the important impact computer-generated feedback appears to have on students, including the encouragement of a more process-oriented approach in their writing. Such a finding may allow teachers to incorporate more process writing in classrooms where they once thought it impossible because of the large EFL class sizes so common in Asia.

I. Introduction

English business writing skills have been shown to be important to Taiwanese businesspeople (Tsui, 1992; Liao, 1990). Across Asia, the educational emphasis on English reflects the importance attached to English skills in international business. In Taiwan, however, the emphasis placed on English does not lead to a very conducive learning environment. The normal class size in Taiwan ranges from forty to eighty students (with numbers rarely falling below forty and occasionally going over one hundred). Remuneration for teachers is largely based on hours in class, rather than the number of students in class. This leads teachers to question the amount of effort that should be expended in supplying students with feedback on writing assignments. [-1-]

Searching for possible solutions to some of these problems, I began to study how computer-generated feedback may play a role in relieving teachers' burden, while supplying students with more detailed feedback.

Grammar/Style Checkers

Application of grammar-checking software is a logical step in CALL; however, due to shortcomings in software and the lack of hardware in most EFL settings, such application has not become widespread. Numerous programs that claim to check grammar errors in English writing are commercially available. Bolt (1992) has described most of these programs in detail, including Correct Grammar, Right Writer, Grammatik, CorrecText, Reader, Power Edit and LINGER. While these programs place different demands on hardware, most run on a personal computer. Bolt points out the very important characteristic of transparency, which he defines as the degree to which the program's underlying functions and logic can be seen and changed. Grammatik offers the greatest access to its rule base and allows rules to be changed or new rules to be added (as is the case in the present study).

Healey (1992, p. 14) also examined such programs and attempted to add some new rules to Grammatik but found that such an exercise required considerable work on the part of the teacher. Healey observed that while a grammar checker may not find every error, its work in "consciousness raising" can be very helpful for language learners. Brock (1990), teaching in Hong Kong, also found that modifying Grammatik was helpful when rules were programmed for some common errors of Cantonese speakers learning English.

Garton and Levy (1994), using a later version of Grammatik, version 5, found it to be much improved over earlier versions. Although on first use Grammatik5 seems quite inaccurate, it improves greatly after modification. In their study, Garton and Levy gathered a large database of students' writing. They then ran some documents through the Grammatik grammar/style checking software. The results showed them which rules to turn off, because those rules were inaccurate or did not apply to EFL students, and pointed towards new rules for errors in the students' writing that the program had missed. After making changes, the database could again be used to verify the accuracy of new rules programmed into Grammatik. While the computer could not replace a teacher or even a tutor, it was found to be a useful tool in helping to raise the awareness of students.

Liou (1991, 1992, 1993, 1994) has performed a number of experiments using Grammatik as well as custom-designed software to examine their impact on EFL students in Taiwan. Although the studies usually include a small number of subjects, the results tend to be [-2-] positive in showing that groups using CALL perform somewhat better than those not using it.

When grammar-oriented CALL was applied in a process-oriented class setting, Liou (1993) found that the CALL group was able to rectify more of their errors during redrafts and made fewer errors than the non-CALL group:

It is evident that subjects [non-CALL] were not able to correct most of their mistakes by themselves even after some devices to raise their consciousness as to form, such as marks, were used (p. 25).

The inability of EFL students to overcome some errors was also observed by Dalgish (1991) in his study of the common errors of students learning English in Sweden. The same topic was pursued by Brehony and Ryan (1994) with the understanding that EFL learners' mistakes often reflect the usage or structure of their native languages. These interlingual errors can be addressed by CALL simply because they can be easily identified and then codified in software. Simple matching procedures can be used to flag such common errors.
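
As a rough illustration of such simple matching (not the software used in this study), the short Python sketch below flags a few error patterns of the kind listed in Appendix A. The patterns, messages, and sample sentence are invented for illustration only.

import re

# Hypothetical patterns for a few common interlingual errors (illustrative only;
# the study's actual rule base was built inside Grammatik5, not in Python).
RULES = [
    (re.compile(r"\bopen the (light|radio|tv)\b", re.IGNORECASE),
     "Custom: use 'turn on' rather than 'open' for appliances."),
    (re.compile(r"\bhave ever been\b", re.IGNORECASE),
     "Custom: 'ever' is not used in affirmative statements."),
    (re.compile(r"\bcomply to\b", re.IGNORECASE),
     "Preposition: 'comply' normally takes 'with', not 'to'."),
]

def flag_errors(text):
    """Return (message, matched text) pairs found in a student document."""
    hits = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            hits.append((message, match.group(0)))
    return hits

if __name__ == "__main__":
    sample = "Please open the light. Everyone must comply to the new rules."
    for message, span in flag_errors(sample):
        print(span + " -> " + message)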

II. Study

Previous Work

This study builds on previous studies using the custom-built software QBL TOOLS (Warden & Chen, 1995; Warden, 1995; Yao & Warden, 1996). These studies attempted to uncover why computer-generated error feedback appears to help in the Taiwan EFL writing context. The software consists of QBL Student Version for student use and QBL TOOLS for teacher use. While QBL TOOLS has continued to improve, with more custom rules specific to the Chinese EFL context, the main parsing engine continues to be Grammatik5 for Windows. Previous results of supplying students with detailed computer-generated error feedback included lower error rates for the test groups, time savings for the teachers, and detailed data on the types of errors made by students.

General Approach Of QBL System

Student Program

A text editor (QBL, Quick Business Letters) for use in the DOS environment is supplied to each student on a floppy disk and can be taken home or used in any open computer lab. Students' disks are clearly labeled so that each student uses the same floppy disk throughout the study. The student program guides students through the creation of correctly formatted business letters and memos for business writing classes, or essays for general English classes. [-3-] When an assignment is completed, each student prints out his/her work and hands it in to the teacher for normal review and grading. Students also send the completed files over a local area network to the teacher's directory for automated correction.

Teacher Program

After receiving the students' files over a network, the teacher then runs the teacher software (QBL TOOLS), which creates detailed feedback on grammar, style, and mechanical errors for each student. The custom-developed software automates the process of accessing students' files, running them through the Grammatik program, and collecting and summarizing data.
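
The batch step can be pictured roughly with the Python sketch below. It is only a schematic stand-in: QBL TOOLS drove Grammatik5 directly, whereas the check_document function here merely flags repeated words so that the sketch runs, and the directory and file names are assumptions.

import csv
import re
from collections import Counter
from pathlib import Path

def check_document(text):
    """Stand-in for the grammar/style parser (Grammatik5 in the actual system).
    Here it only flags repeated words, purely so the sketch is runnable."""
    return [("Repeated", m.group(0))
            for m in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE)]

def run_batch(student_dir="students", report_dir="reports"):
    """Check every submitted file, write one feedback report per student,
    and summarize error counts for the whole class."""
    Path(report_dir).mkdir(exist_ok=True)
    class_totals = Counter()
    for path in sorted(Path(student_dir).glob("*.txt")):
        errors = check_document(path.read_text(encoding="utf-8"))
        class_totals.update(error_type for error_type, _ in errors)
        with open(Path(report_dir) / (path.stem + "_feedback.txt"), "w") as report:
            report.write("Errors found: %d\n" % len(errors))
            for error_type, detail in errors:
                report.write("- %s: %s\n" % (error_type, detail))
    with open(Path(report_dir) / "class_summary.csv", "w", newline="") as f:
        csv.writer(f).writerows(class_totals.most_common())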

Program Features

The Grammatik parsing engine was programmed to find 45 error types (see Table 1), including a class of custom errors (with over 300 customized rules) programmed specifically for EFL writing students in Taiwan, an approach similar to that followed by Levy and Garton (1994). This completed system can automatically find hundreds of errors in minutes and print them out with no input from the teacher. Charts and graphs can be printed to track class or individual student progress.

Data Categories Of This Study

For this study, the student letter writing program was modified to collect data on the number of keystrokes completed by the user. This measurement includes two parts: one for actions that add characters to a document (such as typing the letters of a word) and one for actions that remove characters from a document (such as the backspace or delete keys). The number of characters removed from a document divided by the number of characters added to a document is here referred to as the "editing ratio."

All measurements are taken in units of keystrokes. For example, a student typing "ships before" would be reported as having added twelve keys. If the student then goes back and removes an s, resulting in "ship before," the number of characters removed would be one while the number of keys added remains twelve. The editing ratio of this example is thus .08333 (1/12) or 8.33 percent. All measurements begin at zero when a new assignment is given and are cumulative over a single assignment no matter how many times, or in what locations, the program is run.
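
A minimal Python sketch of this bookkeeping, assuming a simplified keystroke log (the actual QBL editor recorded these counts internally and also handled cursor movement and other keys):

def editing_ratio(keystrokes):
    """Compute the cumulative editing ratio from a sequence of keystrokes.
    Backspace/delete keys count as removals; single printable characters
    count as additions; anything else is ignored in this simplification."""
    added = removed = 0
    for key in keystrokes:
        if key in ("BACKSPACE", "DELETE"):
            removed += 1
        elif len(key) == 1:
            added += 1
    return removed / added if added else 0.0

# The "ships before" example from the text: 12 characters typed, one deleted.
keys = list("ships before") + ["BACKSPACE"]
print(round(editing_ratio(keys), 5))   # 0.08333, i.e. about 8.33 percent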

Error rates and error types are the other data types gathered for this study. The error feedback for the students is made up of 45 specific writing error types (see Table 1). [-4-]

Table 1 The Error Types Found (See Appendix A for explanation)
Abbrev.        |Adjective     |Adverb          
Archaic        |Article       |Capitalization  
Clause         |Colloquial    |Comma Splice    
Comparative    |Conjunction   |Custom          
Double Neg.    |Ellipsis      |Ending Prep.    
Incomplete Sen.|Infinitive    |Jargon          
Long Sen.      |Noun Phrase   |Number          
Overstated     |Pejorative    |Poss. Form      
Preposition    |Pron. Number  |Pronoun Case    
Punctuation    |Ques. Usage   |Redundant       
Rel. Pronoun   |Repeated      |Run-on          
S/V Agreement  |Sent. Variety |Similar Words   
Spelling       |Split Infin.  |Split Words
Subordination  |Tense Shift   |Trademark       
Vague Adv.     |Verb Form     |Verb Object     

Subjects

Rather than using separate classes (intact groups) to form control and test groups, this study used randomly selected students. After selection, both groups remained mixed together in their existing classes, so that each class contained both test and control group students.

Approximately ten percent of the students in each class were randomly selected to receive (unknowingly) placebo feedback. The printout given to these students resembled the true computer-generated feedback; however, any errors found by the computer were not reported, and the printout simply stated that zero errors had been found. Other feedback from the teacher, such as handwritten corrections, comments on content, and corrections to formatting (layout of heading, opening, closing, etc.), was given to all students regardless of the experimental group they were in. The test group students were randomly selected from the remaining students.
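
The selection can be sketched roughly as below; the roster structure, function name, and seed handling are assumptions of this illustration rather than a reproduction of the original procedure.

import random

def assign_groups(class_rosters, placebo_share=0.10, n_test=42, seed=None):
    """Mark roughly placebo_share of each class as control (placebo feedback),
    then draw the test group at random from the remaining students."""
    rng = random.Random(seed)
    control, remainder = [], []
    for students in class_rosters.values():
        chosen = set(rng.sample(students, max(1, round(placebo_share * len(students)))))
        control.extend(chosen)
        remainder.extend(s for s in students if s not in chosen)
    test = rng.sample(remainder, min(n_test, len(remainder)))
    return control, test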

The experiment was performed during the 1995-96 fall semester at The Overseas Chinese College of Commerce (Chiao Kwang) located in central Taiwan. Students from three departments participated: International Trade, Business Administration, and Banking & Insurance. Two instructors taught the eight participating sections of Business English, which was required for these students in their senior year. The total number of students (see Table 2) using the QBL computer system was 363. The control group (receiving dummy feedback) contained 38 students while the test group numbered 42 students. [-5-]

Table 2. Participating students
-----------------------------------------
|Group              |  Test   | Control |
|Randomly selected  |   42    |   38    |
-----------------------------------------

Assignments

All teaching material was unified, and assignment topics, scheduling, and other class characteristics were agreed upon before the semester began. Numerous assignments were completed over the semester (which began in September), with three requiring students to use their QBL program disks (see Table 3). Each assignment required a minimum of 150 words in the body of the letter, with no upper limit. The first assignment was not a true business letter; it was geared towards familiarizing students with the computer system and the correct format of a business letter.

An assignment could be missing because a student did not do the assignment, completed it late, or failed to turn it in successfully over the network.

Table 3. Assignments & students successfully turning in files
----------------------------------------------------------------
|Assignment Topic |  Date     | Control Students| Test Students|
|Summer Vacation  | Oct. 3, 95|        33       |       35     |
|Job Application  | Nov. 6, 95|        36       |       36     |
|Business Inquiry | Dec. 7, 95|        37       |       42     |
----------------------------------------------------------------

Files were turned in over a network and printouts turned in by hand. Redrafts were not required by the instructors; thus any changes observed reflect students' behaviors before submitting the respective assignment. The teaching strategy for the class can best be described as a practical combination of process and product, as pointed out by Hutchinson and Waters (1987) and applied to large technical writing classes by Okoye (1994), who referred to it as SDPA (Self-Directed Process Approach). Upon return of assignments to students, the teachers spent time in class reviewing the most common error types found by the computer. At no time, however, did teacher-directed redrafting take place. [-6-]

III. Results

Differences In Means

Changes Between Assignments

As in previous studies using computer-generated error feedback, error rates quickly declined (see Table 4).

Table 4. Error mean and rate of decline
-------------------------------------
|Group           | Control |  Test  |
|Assignment 1    | 15.32   | 15.67  |
|Assignment 2    |  7.81   |  7.69  |
|Assignment 3    |  5.16   |  5.00  |
|Rate of Decline | -5.08   | -5.33  |
-------------------------------------

Figure 3. Decline in mean errors
16|       @
15|    #  @                            
14|    #  @                  # Control 
13|    #  @                  @ Test    
12|    #  @                            
11|    #  @                            
10|    #  @                            
9 |    #  @                            
8 |    #  @     #  @                   
7 |    #  @     #  @                   
6 |    #  @     #  @                   
5 |    #  @     #  @     #  @          
4 |    #  @     #  @     #  @          
3 |    #  @     #  @     #  @          
2 |    #  @     #  @     #  @          
1 |____#__@_____#__@_____#__@_____     
        1        2        3            
Assignment

What is most striking about this decline is that it is exhibited equally by both the control and test groups. Such a similar decline in both groups runs counter to previous QBL TOOLS studies that used intact, separate classes as control and test groups. Overall, each specific error type followed a downward trend through the three assignments. Between the first and third assignments, the control group reduced eight of its error types by a statistically significant amount, while the test group reduced fifteen (two error types showed a significant increase). Table 5 clearly shows that both groups were reducing a range of error types. Of special interest in Table 5 are the different types of errors reduced. [-7-]

Previous studies have shown that spelling errors are the error type most sensitive to computer-generated feedback. In this case, however, we observe that spelling errors were quickly reduced for both groups.

The obvious explanation is that the control group students were mixed in with students receiving real feedback. They were exposed to the errors common among their classmates, cooperated on completing assignments (quite normal behavior for Taiwanese students), and received the same in-class instruction from the teacher.

Table 5. Reduction in specific error types
(all changes shown are significant on a modified LSD [Bonferroni] test at P<.05)
                    Assignments 1-3  || Assignments 1-3 |
                |Control 1| Control 3|| Test 1 |Test 3  |
                |  Mean   |   Mean   ||  Mean  |  Mean  |
Adjective:      |  .2647  |  .0526   ||  .1284 |  .0373 |
Adverb:         |         |          ||  .1588 |  .0466 |
Capitalization: |         |          ||  .1757*|  .3727*|
Comma Splice:   |         |          ||  .2466 |  .0870 |
Custom:         | 4.1176  |  .7105   || 4.4932 |  .7298 |
Incomplete Sen.:|         |          ||  .3041 |  .1180 |
Infinitive:     |         |          ||  .0338 |  .0062 |
Noun Phrase:    | 1.9412  |  .5789   || 1.3176 |  .6335 |
Poss. Form:     |         |          ||  .0946*|  .1708*|
Pronoun Case:   |         |          ||  .1554 |  .0435 |
Punctuation:    |  .6176  |  .1316   || 1.2230 |  .3634 |
Repeated:       |         |          ||  .1047 |  .0155 |
S/V Agreement:  |         |          ||  .6417 |  .4161 |
Sent. Variety:  |         |          ||  .6014 |  .3634 |
Spelling:       | 3.3529  | 1.1842   || 3.5236 | 1.7702 |
Subordination:  |  .4706  |  .0789   ||  .3243 |  .0404 |
Verb Form:      |  .3824  |  .1053   ||        |        |
Verb Object:    |  .5000  |  .1316   ||  .4662 |  .1801 |

* Showed an increase between first and third assignments
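
As the note to Table 5 indicates, the comparisons used a modified LSD (Bonferroni) criterion at P<.05. A comparable check with standard tools might look like the Python sketch below; the per-student counts are invented, and a paired t-test with a Bonferroni-adjusted alpha is only a stand-in for the original procedure.

from scipy import stats

def compare_assignments(errors_a1, errors_a3, n_comparisons=45, alpha=0.05):
    """Paired t-test on per-student error counts for one error type, judged
    against a Bonferroni-adjusted significance level."""
    t, p = stats.ttest_rel(errors_a1, errors_a3)
    return t, p, p < alpha / n_comparisons

# Hypothetical per-student spelling-error counts on assignments 1 and 3.
a1 = [5, 3, 4, 6, 2, 3, 5, 4]
a3 = [2, 1, 2, 3, 1, 1, 2, 2]
print(compare_assignments(a1, a3))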

Differences Between Groups

From the measurements centering on editing ratio, it is apparent that feedback had differing impact on the two groups. Editing ratios begin with no statistical difference in the first assignment and quickly show a significant difference in the second and third assignments (see Table 6). Clearly, the feedback is having an impact on editing behavior. This measure shows that for an equal number of keystrokes put into a document, feedback is causing the test group to delete more from the document than the control group. [-8-]

Table 6. Differences in editing ratio (percentage)
--------------------------------------------
|Group           | Control |t-test |  Test |
|----------------|---------|P value|-------|
|Assignment 1    |  17.91  |   NS  | 18.02 |
|Assignment 2    |  16.20  |  .046 | 23.20 |
|Assignment 3    |  13.94  |  .018 | 22.00 |
|Trend           |  -1.99  |       |  1.99 |
--------------------------------------------

Figure 4 graphically shows that not only is there a significant difference, but the two groups are actually showing trends in opposite directions. While the test group shows a positive trend (using linear trend analysis) of 1.99 (F=1.17, NS), the control group shows a negative trend of 1.99 (F=158.99, P<.01).

Figure 4. Differences in editing ratio
24|                @
22|                @        @               
20|                @        @     # Control 
18|    #  @        @        @     @ Test    
16|    #  @     #  @        @               
14|    #  @     #  @     #  @               
12|    #  @     #  @     #  @               
10|    #  @     #  @     #  @               
8 |    #  @     #  @     #  @               
6 |    #  @     #  @     #  @               
4 |    #  @     #  @     #  @               
2 |____#__@_____#__@_____#__@_____          
        1        2        3                 
Assignment     
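
The trend values above come from a linear trend analysis over the three assignments. Fitting a line to the three group means with common tools, as in the Python sketch below, reproduces the slopes of -1.99 and 1.99 reported in Table 6; note, however, that the F statistics quoted in the text were presumably computed on the individual student data rather than on the means.

from scipy import stats

def linear_trend(means, assignments=(1, 2, 3)):
    """Fit a straight line to per-assignment means and return the slope
    (the 'trend') and the p-value of the test that the slope is zero."""
    result = stats.linregress(assignments, means)
    return result.slope, result.pvalue

print(linear_trend([17.91, 16.20, 13.94]))   # control editing ratio: slope about -1.99
print(linear_trend([18.02, 23.20, 22.00]))   # test editing ratio: slope 1.99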

Keystrokes added (see Table 7) show no statistically significant difference between the groups, with a general upward trend for both. The control group is very steady, with a trend of 46.48 (F=1693.93, P<.025), meaning that this group types about 46 more keystrokes into each new assignment.

Table 7. Differences in keystrokes added
-----------------------------------------------
|Group           | Control |t-test  |   Test  |
|----------------|---------|P value |---------|
|Assignment 1    | 1205.06 | NS     | 1226.46 |
|Assignment 2    | 1253.5  | NS(.08)| 1390.44 |
|Assignment 3    | 1298.03 | NS     | 1266.19 |
-----------------------------------------------

[-9-]

In contrast, the test group shows much more variation among the students, possibly showing differing individual reactions to the very specific feedback from the computer. The test group has an overall average increase in keystrokes added of 19.87 (F=.0570, NS). Table 7 clearly shows how the test group exhibits much more variation over the second and third assignments, although none of the differences are statistically significant.

The keystrokes deleted measurement also shows differences between the two groups. Although none of the differences are significant at the P<.05 level (see Table 8), the trends of the groups are very similar to the trends of the editing ratios. This similarity shows that the editing ratio is rising or falling not because of how much is being put into a document, but mostly because of how much is being changed.

Table 8. Differences in keystrokes deleted
------------------------------------------------
|Group           | Control | t-test  |   Test  |
|----------------|---------| P value |---------|
|Assignment 1    | 262.30  | NS      | 266.37  |
|Assignment 2    | 240.81  | NS(.060)| 386.42  |
|Assignment 3    | 202.92  | NS(.068)| 321.07  |
|Trend           | -29.69  |         |  27.35  |
------------------------------------------------

In total, the additional keystrokes used to delete characters mean that the test group is typing more. In fact, the test group is typing (keystrokes added plus keystrokes deleted) nearly 11 percent more keystrokes (see Table 9) than the control group in the second and third assignments (examining only the assignments completed after the first feedback was received). The control group stays steady in its keystroke activity throughout the three assignments, in contrast to the changes in the test group's activity.

Table 9. Total keystrokes
---------------------------------------------------
|Group           | Control |% Difference|  Test   |
|----------------|---------|------------|---------|
|Assignment 2    | 1494.31 |   15.9016  | 1776.86 |
|Assignment 3    | 1500.95 |    5.4377  | 1587.26 |
|Total           | 2995.26 |   10.9645  | 3364.12 |
---------------------------------------------------
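
The percentage column in Table 9 expresses the gap between the groups relative to the test group's total. For the combined total, for example, (3364.12 - 2995.26) / 3364.12 is about 10.96 percent, the figure quoted above; a one-line check:

control_total, test_total = 2995.26, 3364.12
print(round(100 * (test_total - control_total) / test_total, 4))   # 10.9645, as in Table 9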

The test group is increasing editing and reducing errors. The control group is decreasing editing while also reducing errors at a [-10-] rate equal to the test group's. One would suspect that increased editing leads to fewer errors and that decreased editing leads to more errors. Cooperation among students, given the mixing of test and control groups within classes, can explain some of the control group's error reductions, but surely it could not fully substitute for the real editing being done by the test group. It seems that differing mechanisms are at work that bring down error rates by assignment three. The question is how the control group manages to reduce errors.

IV. Conclusions

While the fields of EFL and ESL present numerous pedagogical theories, the lack of detailed information concerning the cognition and operation of learning strategies makes it difficult to advance teaching techniques (Holland et al., 1993). Real teachers work in the real world and are confronted with real students. More detailed information about what is actually going on with students' learning strategies (in this case, writing strategies) means better teaching and increased learning. Such information is vital to the development of CALL, since programs must be modeled on some underlying assumptions. This study has begun to quantify different strategies brought about simply through differences in computer-generated error feedback.

Simply exposing students to the existence of specific errors enables them to reduce their own errors, even when they have not received personalized and accurate feedback on their own writing. At the same time, receiving personalized and accurate feedback encourages students to reduce a wider range of error types and to increase editing activity in their writing. Such activity supports the concept that a student's writing is a work in progress. Students receiving the personalized feedback appeared to review and change their documents more. This increase in writing modification might have the drawback of actually introducing errors, thus resulting in the equal error rates of the test and control groups.

As the data show, the test group was not able to lower spelling errors as much as the control group, and the test group even increased capitalization and possessive errors. Students who do not see their own specific errors, but are aware of the errors commonly found in classmates' writing, may take a preventive approach to errors. Control group students may be using resource material more, copying examples, or simply taking measures to avoid common errors, such as consulting dictionaries for spelling before typing into the computer. Test group students may also use these strategies; however, their increased editing introduces new errors. [-11-]

New Findings

Clearly, the very presence of feedback raises all students' awareness and increases behaviors that reduce errors. Previous studies with isolated control groups receiving no computer feedback showed that students reduced their errors at a much slower rate than those receiving the feedback. One can conclude that Taiwanese students are slow to implement strategies to improve their English writing and reduce errors when no specific feedback is available, an assertion supported by other teachers' observations (Levy and Garton, 1994; Liou, 1993).

The process writing class does offer some types of motivation, mostly in the form of guidance through the different stages and steps of creating a composition. Difficulties with such a process in the EFL setting become evident in large classes with unmotivated students. What is a teacher to do when the majority of his/her students move through the many stages of writing, only to end up with documents that show little or no improvement over a semester? This study has shown that with no other input from the teacher, students decrease errors in the presence of error feedback, and increase editing behavior when individualized feedback is provided.

Future Directions: Investigating Issues of Quality

A further topic of interest, the issue of writing quality from a holistic perspective, is much more difficult to quantify. Liou (1992) used teachers who were native English speakers to holistically review writing from CALL and non-CALL groups and found that the CALL group performed somewhat better. My own informal observations also suggest that students using CALL produce better content and structure, although such qualities seem to be influenced mostly by the teacher, as Hyland (1993) and Pennington (1991) pointed out in their own CALL research. Quantifying this on a large scale presents many difficulties, which explains why it has not been widely attempted. [-12-]

V. References

Bolt, P. (1992). An evaluation of grammar-checking programs as self-help learning aids for learners of English as a foreign language. Computer Assisted Language Learning, 5(1-2), 49-91.

Brehony, T., & Ryan, K. (1994). Francophone stylistic grammar checking (FSGC) using link grammars. Computer Assisted Language Learning, 7(3), 257-269.

Brock, M. N. (1990). Customizing a computerized text analyzer for ESL writers: Cost versus gain. CALICO Journal, 8, 51-60.

Dalgish, G. (1991). Computer-assisted error analysis and courseware design: Applications for ESL in the Swedish context. CALICO Journal, 9(2), 39-56.

Garton, J., & Levy, M. (1994). A CALL model for a writing advisor. CAELL Journal, 4(4), 15-20.

Healey, D. (1992). Where's the beef? Grammar practice with computers. CAELL Journal, 3(1), 10-16.

Holland, V. M., Maisano, R., Alderks, C., & Martin, J. (1993). Parsers in tutors: What are they good for? CALICO Journal, 11(1), 28-46.

Hutchinson, T., & Waters, A. (1987). English for specific purposes: A learner centered approach. Cambridge, UK: Cambridge University Press.

Hyland, K. (1993). ESL computer writers: What can we do to help? System, 21(1), 21-30.

Levy, M., & Garton, J. (1994). Adapting a grammar checker for learner writers. ReCALL, 6(2), 3-8.

Liao, Chao-chih (1990). A needs analysis for improving instruction of business English in Taiwan. Taipei: The Crane Publishing.

Liou, H. (1991). Development of an English grammar checker: A progress report. CALICO Journal, 9(2), 57-70. [-13-]

Liou, H. (1992). An automatic text-analysis project for EFL writing revision. System, 20(4), 481-492.

Liou, H. (1993). Integrating text-analysis programs into classroom writing revision. CAELL Journal, 4(1), 21-27.

Liou, H. (1994). Practical considerations for multimedia courseware development: An EFL IVD experience. CALICO Journal, 11(3), 47-74.

Okoye, I. (1994). Teaching technical communication in large classes. English for Specific Purposes, 13(3), 223-237.

Pennington, M. (1991). An assessment of the value of word processing for ESL writers. City Polytechnic of Hong Kong Research Report, No. 7.

Tsui, C. (1992). English business communication skills training needs of non-native English-speaking managers: A case in Taiwan. The Bulletin of the Association of Business Communication, 55(1), 40-41.

Warden, C. (1995). Expert system impact on writing errors in Taiwanese business English classes. CAELL Journal, 6(2), 22-29.

Warden, C., & Chen, J. (1995). Improving feedback while decreasing teacher burden in R.O.C. ESL business English writing classes. In P. Bruthiaux, T. Boswood, & B. Du-Babcock (Eds.), Explorations in English for professional communications (pp. 125-137). Hong Kong: City University of Hong Kong.

Yao, Y., & Warden, C. (1996). Process writing and computer correction: Happy wedding or shotgun marriage? CALL Electronic Journal [On-line journal], 1(1). Available: http://www.lc.tut.ac.jp/callej/callej.htm

[-14-]


APPENDIX A


Explanation of error types found in this study.

Note: These error types are based on the error definitions from the software package Grammatik 5.0 for Windows. However, for this study, the rule base was modified and in some places heavily extended (as in the case of the custom errors, which were programmed to include common errors of Chinese EFL learners). (Grammatik5 User's Guide, 1992, Reference Software International)

Abbreviation Abbreviations follow rules, such as the use of periods and commas before and after the abbreviations. Additionally, in more formal writing, such as business communication, abbreviations should be avoided and all words written out to assure better understanding on the part of the reader. e.g., Mr. Smith Ph.D. can't come next week.

Adjective Incorrect adjective used to modify noun or pronoun. e.g., This is an interested story.

Adverb An adjective was used to modify a verb instead of an adverb. e.g., She certain is smart, but she is also stubborn.

Archaic The use of words that are out of date or not in common use. e.g., We can all go, albeit we must go separately.

Article Incorrect use of: a, an and the. Many words require the use of an article preceding them; Chinese EFL students often forget articles or use them incorrectly. e.g., A teachers had already distributed the tests to the class. e.g., The student claimed it was a honest mistake. e.g., We sell our products in North American Market.

Capitalization Letters at the beginning of a sentence and the personal pronoun 'I' are checked for correct capitalization. e.g., THere was a book on the bed. e.g., Tomorrow, i want to visit Bill.

Clause Subject and verb must together form a complete thought. A dependent clause that is not a complete thought must begin with a subordinating conjunction. e.g., James went to the tennis match. Even though it was raining.

Colloquial Colloquial phrases are often used in spoken English but are not appropriate in business writing. e.g., The director will make a decision when he is good and ready.[-15-]

Comma Splice Two or more independent clauses, or complete thoughts, are joined by only a comma. e.g., He smokes when he is working overtime, it keeps him awake.

Comparative/Superlative The incorrect use of comparatives like 'more' and 'most.' e.g., It would be even more better if we all could go.

Conjunction Incorrect use of a coordinating or subordinating conjunction. e.g., We had to choose between English or French.

Custom These errors come from the expanded rule base, which includes common errors of Chinese students. e.g., I have ever been to America. e.g., Go in and open the light. e.g., I learn English every week.

Double Negative Two negative words used together are not acceptable in most written English. e.g., There was not never any doubt that he would go.

Ellipsis The correct usage of ellipsis between words is: ' . . . ' and at the end of a sentence is: ' . . . .' Spaces are required before, between and after each period. e.g., They are white, red, yellow, blue...

Ending Preposition The use of a preposition at the end of a sentence should be avoided. e.g., He moved to an office near the people he works with.

Incomplete Sentence Usually, a sentence needs a subject and a verb; this error is when one of those is missing. e.g., Our wonderful president who devoted many years of service.

Incorrect Verb Form The incorrect form of the verb e.g., I will bought it next week.

Infinitive The incorrect use of the present tense of a verb in its infinitive form. e.g., I hope graduate in June.

Jargon Jargon is not known to a general audience and should be avoided when possible. This error often occurs when the writer uses an electronic dictionary for a translation from Chinese to English. e.g., Let us interface next week over lunch.

Long Sentence Sentences longer than the limit specified in the software (often set at 30 words). Shorter sentences are easier to understand and have less chance of containing errors. e.g., There are tables for scuba divers showing how fast a diver may ascend safely, but these tables make the assumption that the diver [-16-] descends, remains at the same depth for some time, and then comes to the surface, which is not necessarily so.

Noun Phrase Words missing from a phrase or a number disagreement with the phrase. e.g., He drove motorcycle. e.g., I purchased nine magazines and book.

Number Usage Numbers should be spelled out when: smaller than 11 or at the beginning of a sentence. Numbers that are degrees, percentages, times, dates, page numbers, money, should be written as Arabic numerals. e.g., 5 years are required to graduate. e.g., It is made of one hundred percent cotton.

Overstated Wordy sentences that are vague and difficult to understand. e.g., At the conclusion of the meeting, everyone in attendance departed for their homes.

Possessive Form Possessives are words that show ownership, usually of a thing. Possessives are often followed by a noun. It is often the case that if a plural noun is followed by another noun, the plural noun should be a possessive. e.g., The secretarys desk was covered with work.

Preposition Normal usage dictates which prepositions are used with which words or phrases. Although a preposition may appear to follow all grammatical rules, if it is not normally used then it should be revised. e.g., Everyone in our office must comply to the new regulations.

Pronoun Case Incorrect use of pronouns when being used as subjects or objects in the sentence. Also found in this group are incorrectly used possessive pronouns. e.g., Everyone has their own goal. e.g., The television is for you and I.

Pronoun Number Pronouns take the place of nouns in a sentence and must agree in number with any verbs in the sentence. This error is caused when the number of the verb and pronoun are not in agreement. e.g., They was going to the fair.

Punctuation Common punctuation errors such as commas and semicolons as well as incorrect use of spaces before and after punctuation (a very common error for Chinese EFL students). e.g., While most would agree Chinese is a difficult language to learn, it is useful if you want to do business in Asia.

Redundant Words that repeat the same meaning, e.g., raise up, past history, cash money. e.g., Once you use a computer, you will never revert back to using a typewriter.

Relative Pronoun Relative pronouns introduce restrictive and non-restrictive clauses (which, that, who). This error is the [-17-] incorrect use of the relative pronoun. e.g., Her green coat, that she bought in February, has a tear.

Repeated Words Or Punctuation This error is most often caused by a typing error. Punctuation, in English, does not repeat; one period at the end of a sentence is always enough. e.g., We all like to travel to to Canada, South America, the United States, etc..

Run-on Sentence A run-on sentence is simply too long or is actually two sentences together. The overuse of commas or conjunctions causes this error. As a general rule, shorter sentences are easier to understand. e.g., My name is Chaur-Sheng Jan, I went to the National Tax Bureau, which is in Jang-Huah County, and had an internship during my summer vacation.

Sentence Variety Starting many sentences with the same words or structures gives a bad impression. Change some sentences so that the writing does not seem monotonous. e.g., I would like you to ship before June 20. I could open a letter of credit in your favor within the week. I will wait for your decision.

Similar Words Some words are often used incorrectly because their spellings or sounds are similar to those of other words. e.g., We got the book form her mother. e.g., The words sited are from Shakespeare.

Spelling Spelling or typing errors are easy to correct, yet make an important impression on the reader. e.g., The acter, who is a techer, had the leading part

Split Infinitive A word, phrase or clause that comes between the infinitive 'to' and the verb. Avoid the split infinitive structure because it makes the main idea harder to understand. e.g., Steve wants to quickly finish this project.

Split Words As English changes, words often merge together, e.g., basketball. A modern dictionary will help to avoid splitting words that belong together. e.g., The quality of this product is out standing.

Subject/Verb Agreement Verbs must agree with their subjects in person and number. e.g., The overcoat in the market are very heavy. e.g., My mother always encourage me.

Tense Shift A change in the verb tense, in one sentence, that makes the sentence difficult to understand. e.g., As long as a person could concentrate his attention, he will be successful in whatever he did.

Trademark Trademarks often follow unconventional capitalization. The writer should make sure of the specific [-18-] capitalization, such as: WordPerfect, Band-Aid, etc. e.g., He bought some scotch tape while listening to a walkman.

Vague Adverb Vague adverbs are commonly used in spoken English but make written English weak. Words such as awfully, pretty, really, and kind of all hurt formal writing. e.g., He found her speech pretty interesting.

Verb Object A verb object is a noun or pronoun that comes after a transitive verb. An error occurs when the object of a verb is missing. e.g., She fixed up.

[-19-]

© Copyright rests with authors. Please cite TESL-EJ appropriately.

Editor's Note: Dashed numbers in square brackets indicate the end of each page in the paginated ASCII version of this article, which is the definitive edition. Please use these page numbers when citing this work.