April 1994 — Volume 1, Number 1
The Role of Topic and the Reading/Writing Connection
Barbara L. Kennedy
University of Kentucky
This study was originally constructed to examine the effect of content-area reading on ESL writing proficiency. The experiment was restructured and extended because composition topics proved to be confounding factors. Although the results raise several issues, the most significant are 1) the influence of topic on the acquisition of ESL composition skills and 2) the influence of topic on the cognitive task of demonstrating ESL writing proficiency. More specifically, an information-processing explanation is offered for some of the confounding factors the topic variable introduced into the equation.
Composition courses based on the connection between reading and writing were first developed for native English writers. There are numerous textbooks which prepare native English-speaking students to write across disciplines by presenting topics on which students will read articles and then write compositions. The number of ESL (English as a Second Language) composition textbooks of this nature is comparatively small (Shih, 1986, pp. 635-36). Recently, however, the “reading/writing connection” has also become a buzz phrase in ESL composition pedagogy.
In a 1988 study of professors’ reactions to nonnative-speaker academic compositions, Santos found that university professors grade content deficiencies more harshly than errors of linguistic form, with which they are much more lenient. Santos concludes that composition instruction should deal more strongly with content.
Shih (1986) discusses five approaches to instructing students in content-based writing. The present study examines the approach used in what Shih has termed “content-based academic writing courses” (pp. 635-37), in which students read sets of passages that relate to the topic areas of their writing assignments. Prior to the reading and writing, study questions for reading and discussion are introduced to stimulate the students’ close examination of the topic. These prereading questions help the reader to build appropriate schemata (Taglieber, Johnson, and Yarbrough, 1988, p. 466). According to Anderson, Reynolds, Schallert, and Goetz, “Every act of comprehension involves one’s knowledge of the world as well” (Carrell & Eisterhold, 1983, p. 553). In other words, these questions help to prepare students to comprehend the new information from the reading passages, and to utilize what they already know, both when they read and when they write.
Zamel (1987) stresses that a process approach to writing gives learners an optimal opportunity to develop their ideas by allowing them to put concerns about linguistic form aside until the editing stage of composing, the last step before the final draft. Reading is one way of generating ideas in a process approach to writing. Shih says, “Empirical data are needed to support the belief held by many that content-based instruction can help ESL students to become more confident and competent when they tackle academic writing” (p. 642). Thus, a preliminary question is to what extent reading in the content area contributes to the quality of the composition. In the case of the present study, reading is defined as outside-the-classroom input in the content area. Other forms of receiving outside-the-classroom information were not examined. Therefore, it is not clear whether other types of outside input, e.g., lectures on the topic area, would have the same effect that reading has. Thus, the question becomes: is it possible for students to improve their ability to write effective compositions without outside-the-classroom input (in this case, without reading) to the same degree that it is possible for them to improve with this input?
This study was constructed to examine the quality of writing both with content-area readings and without content-area readings. However, it became apparent that another question must also be asked in conjunction with the question of whether or not reading contributes to the quality of writing students produce.
Carson, Carrell, Silberstein, Kroll, and Kuehn (1990) reported on a study they conducted using Japanese and Chinese subjects. They wanted to know if the proficiency levels of reading and writing in the subjects’ L1 would predict the reading and writing proficiency levels in the subjects’ L2. The following was noted:
In addition to the weak relationship noted in the L1-L2 writing correlations for both groups, the multiple regression analyses indicate that although reading scores predict reading scores in either language for both groups, writing never appears as a variable that predicts writing. (p. 260)
Although this study does not relate to any connection between reading in the content area and the quality of writing, it does point to a potentially confounding factor that must be ferreted out in the experimental design as a variable. In the Carson et al. study the writing topics differed in the L1 writing task and the L2 writing task. Could the difference in topics explain the lack of connection between writing scores in the two languages? Moreover, by extension, within the same language, if the topics differ, will the quality of one piece of writing predict the quality of another? If groups of students start out at approximately the same level in writing ability, might the topics that are used throughout the course determine how much they will improve?
Witte (1988) reported that when native speakers were asked to write compositions in response to various prompts (topics), it became obvious that not all prompts produced similar results across groups, even though the prompts had been devised to be topics with which all students would be familiar.
The topics to which students are asked to respond in composition would appear to make a difference in the quality of writing that students produce; however, research in the area of topic is sorely lacking. As Hoetker (1982) says, “there is little hard evidence anywhere that students will write any worse (or any better) on topics such as those I have just criticized than on the most thoughtfully considered and carefully edited topics” (p. 14).
The research that exists is not only far from conclusive, but often produces conflicting results. Hoetker cites White in a discussion of the extreme differences in quality that were found in the compositions produced by students taking the California State University and College Equivalency Examination between the years 1973 and 1974. White concluded that the 1974 topic, which produced lower scores, was more cognitively demanding, i.e., required abstract reasoning, whereas the 1973 topic relied more on personal experience, and that this difference accounted for the extreme difference in scores. Pytlik (1986), however, reports on a study conducted by Jones, whose findings showed that students performed better with textbook topics than with topics of their own (p. 7). Moreover, Greenberg, expecting that topics that asked students for their personal experience would produce better compositions, was surprised to find that students’ writing performance was not significantly affected by the type of essay question to which they responded (Pytlik, 1986, pp. 7-8).
O’Donnell (1984) cites Hoetker and his colleagues in a discussion of topics that are offensive to students. She says that there are three subjects that students found “difficult, uninteresting, or inappropriate, and that required special knowledge. . . (1) neglect of the urban environment, (2) favorite gadgets, and (3) dream homes” (p. 246). However, she says that there are also topics that produce favorable results; she cites Brossell and Ash who found that students wrote “more organized, more sharply focused, and more fluent” essays on the topic of violence in the schools (p. 246).
Some researchers have questioned the use of topic options for composition exams. Hoetker states that “the strongest argument for options is that we know so little about topics that it is presumptuous for us to say we can know which topic will elicit student’s [sic] best performance” (p. 18). However, he cites a study conducted by DuCette and Wolk who found that when students were given more topic options, they performed less well than when they were given a single topic. Another question that arises in regard to providing students with options is whether or not students are able to judge which topic will show their best writing. Hoetker cites Meyer, who argues that students do not have the ability to select topics that elicit their best performance (p. 17). In thinking about the question of whether or not to provide topic options in an essay exam, one must consider how much time students will give to choosing the topic, rather than to actually writing. Pytlik says Jones concludes that students might perform better when provided with a few, rather than with many, options.
These studies on topic and composition proficiency have been conducted entirely with native speakers of English, except for the Carson et al. study, which does not actually examine the role topic plays. There is little research that provides much insight into the ways topics influence native-speaker writing, and the picture is even bleaker when it comes to nonnative-speaker writing. As Hoetker (cited in O’Donnell, 1984) states:
[W]e know little about topic variables because research attention has been devoted almost entirely to issues of rater reliability, ignoring for the part [sic] the issue of validity as well as the other two sources of error in an essay examination–the topics and the writer. (p. 4)
The question to be examined by this research is whether reading in the content area contributes more to students’ quality of writing, or whether topic contributes more.
The results of this study suggest that the topic assigned, or chosen by the students, plays a significant role in the quality of writing students are able to produce, whereas reading in the content area appears to contribute little to students’ quality of writing. Information-processing theory offers a possible explanation both for the influence topic has on composing and for why reading contributes little to L2 students’ quality of writing.
The subjects of this study were all members of an advanced ESL composition class in the Center for English as a Second Language at the University of Kentucky in Lexington, Kentucky. The students’ advanced-level placement was determined either by their scores on the Michigan Placement Test or by promotion from an intermediate level into an advanced level based on teachers’ evaluations. The subjects were from varied language backgrounds. Of the 31 students in the study, the largest percentage were Asian: there were eight native speakers of Japanese, nine of Chinese, two of Korean, one of Indonesian, one of Thai, and one of Bengali. There were also two Spanish speakers, and seven Arabic speakers. The students ranged in age from 17 to 47 years old, but the majority were in their mid-twenties. They had all studied English in their home countries, as well as after arriving in the United States. Their length of English study ranged from two to fourteen years. At the beginning of this study, the length of the students’ exposure to a predominantly English-speaking society ranged from zero to four years. The number of other languages they spoke, in addition to English and their native language, ranged from zero to two. All of the students, evaluated with the Jacobs et al. (1981) composition profile sheet, were within the same range of English composition proficiency at the beginning of the course. Of the 31 students who participated in the study, 30 had scores in the sixties and seventies, and one had a score in the eighties, on a 100-point scale. The lowest score was 62 and the highest score was 85.
These students were divided into three groups, Group A, Group B, and Group C. There were 8 students in Group A (four females and four males), 11 students in Group B (three females and eight males), and 12 students in Group C (eleven males and one female). Group A contained four Japanese speakers, two Chinese speakers, one Thai speaker, and one Korean speaker; Group B contained three Japanese speakers, five Chinese speakers, and three Arabic speakers; and Group C contained two Spanish speakers, one Indonesian speaker, one Korean speaker, four Arabic speakers, one Bengali speaker, one Japanese speaker, and two Chinese speakers. Caution should be used when drawing conclusions based on this study, since the sample size is small.
Since the classes at the University of Kentucky’s Center for English as a Second Language are small, and there is only one advanced-level section, all three groups were enrolled in different quarters. The main goal of the ESL composition course for all three groups was to teach effective expository writing utilizing various strategies of essay development: description, narration, cause and effect, comparison/contrast, persuasion, analysis of a process, and so on. Group A was given relevant content-area readings which illustrated good form and in which these various strategies were utilized. In addition, the readings came from both professional writers and student writers, and from both native and nonnative speakers of English. The reason for using nonprofessional writers, as well as professional writers, is suggested by Hairston (1986):
[B]ecause they [teachers] worry about the perennial problem of devising good theme topics, teachers sometimes resort to using professional essays as models for the students to imitate in their own writing. For good reasons, the results are usually not happy. Faced with the prospect of trying to understand an essay by Virginia Woolf or E. B. White and then of imitating it, most students are going to be more intimidated than instructed. Often, as consolation for what they see as their own incompetence, they take refuge in the myth that real writers can write because they are inspired, not because they were ever taught to write. (p. 180)
Students were not expected to use the readings as models, but as sources of information; however, it was felt that it would boost the confidence of the students to read other students’ writing that not only conveyed information, but conveyed it in good form. This same reasoning explains why readings by both nonnative and native speakers were used. Moreover, the use of readings by nonnative speakers communicated the idea that the topics were culturally relevant to people of both the United States and other countries. The method of teaching composition, the process approach, was held constant for all groups. The students in all three groups constructed multiple drafts of compositions with instructor feedback and peer critiques before the final draft. The drafts prior to the final drafts did not receive grades, only constructive feedback. The same teacher taught all three sections, Group A, Group B, and Group C.
Learning to synthesize information was a major component in all three classes. However, Groups B and C were limited to synthesizing information from class/small-group discussions about personal experiences or about information they had gained previously related to the topic, whereas Group A synthesized information received from readings on the topic and from class/small-group discussions. Guided questions followed each reading (for Group A) and each discussion session (for all three groups) to aid the students in integrating what they had learned from these interactions with what they already knew about the topic.
Each group wrote multiple drafts of papers on three topics during the eight-week session; two of the topics served as daily class work, and one served as the final exam. The final exam paper was written in class, whereas the other compositions were constructed at home. The final exam incorporated all pre-final draft steps that the other two papers included, except peer editing. The final draft of the final exam, the in-class writing, was the writing used to judge the improvement in the students’ ESL composition writing proficiency, and the resulting scores were used in the criterion measures for this study. For Group A, all readings for the two daily assignment topics were provided, and of the two readings for the final exam, one was of their own choosing. Each student provided the teacher with the reading s/he chose for constructing the final-exam essay.
Different topics were used in Groups B and C, whereas the same topics were used in Groups A and C. The topics given to Group B were “Music” and “Ecology.” The topics given to Groups A and C were “Medical Ethics” and “Education.” The readings for the Medical Ethics and Education units, used by Group A, were written by people of various cultures.
In Group A, the questions used for the Medical Ethics unit’s pre-first-draft discussion focused the students on the patient’s/patient’s family’s right to know the truth about the illness and its treatment. Students were asked to write a first draft of their paper, getting their own personal ideas down in written form, before they did any reading. The Medical Ethics unit contained two readings. One, an excerpt from a book, was by a Russian doctor who gave a short narration on an unsuspected error he made when administering treatment that resulted in an elderly, terminally-ill patient’s death. The other was by an Iranian freshman composition student who explained why Iranians prefer that the patient not be told the truth if the disease is terminal. The questions used for the Education unit’s pre-first-draft discussion focused students on academic education. Four readings were used for the Education topic, one by an American who had interviewed a Japanese student in Japan about the stress involved in preparing for the Japanese university entrance exams; one by a foreign student from Japan, recently graduated from an American university, who contrasted American education values with Japanese education values; one by a Norwegian journalist who was analyzing an educational problem in Norway; and one by a Saudi Arabian journalist who was describing the education of females in Saudi Arabia. The readings from both units were each followed by a set of discussion questions that not only focused discussion on the ideas from the readings, but also led students to consider how these ideas had influenced their own thoughts on the topic.
Although Groups B and C did not use readings, the students used pre-first-draft class discussions in the Music and Ecology units, for Group B, and Medical Ethics and Education units, for Group C, to stimulate students’ thinking and to gather information from each other about the topics. All three groups wrote the same number of revisions; between revisions they discussed what they had written with each other, and group members responded to their ideas (i.e., they responded to content, suggesting new, related information whenever possible). Written questions following the discussions were used to lead the students to consider how they might integrate any new, relevant information they had gained from their group’s members.
All three groups used peer critique sessions prior to their final draft, and, once the final draft was written, they used peer-editing sessions. The drafts resulting from the peer-editing sessions were then turned in for a grade. This allowed the instructor to distinguish between the changes made by students related to content, and the changes made related to form.
The topics for the final exams were “Impressions of America,” chosen by Group B, and “Discrimination,” chosen by Groups A and C. The only differences between the final exam and the compositions constructed for daily work were 1) students wrote the final draft of the exam in class, rather than at home; 2) the students did not have the peer-editing sessions–they were required to do their own editing within the exam period; and 3) Group A chose the second reading used in writing their final exams. The first reading, consisting of a dictionary definition of discrimination followed by a general examination of the relationships of discrimination to prejudice and rumor, was provided for Group A.
Using the eighteen compositions at the end of Testing ESL Composition (Jacobs, et al., 1981), three outside readers were trained to evaluate ESL compositions. Two of the readers have taught ESL composition at the university level, and two have been tutors for ESL university-level composition students. All three hold M.A. degrees, two in English and one in Education. The reader with the M.A. in Education also holds a secondary teaching certificate with an ESL endorsement. For each composition evaluated, the readers’ totals on the composition profile sheet were not allowed to differ by more than 9 points. If a 10-point or greater spread was present, the score furthest from the other two was discarded. The spread between at least two of the readers’ scores was never greater than nine points. Inter-rater reliability correlation coefficients were calculated. The correlation coefficient between reader I and reader II was .98, between reader II and reader III was .89, and between reader I and reader III was .88. The scores within the nine-point spread were averaged. The composition profile sheet is broken into five subcomponents: content, organization, vocabulary, language use, and mechanics. The subcomponents are weighted: content is worth a maximum of thirty points and a minimum of thirteen, organization is worth a maximum of twenty points and a minimum of seven, vocabulary is worth a maximum of twenty points and a minimum of seven, language use is worth a maximum of twenty-five points and a minimum of five, and mechanics is worth a maximum of five points and a minimum of two. Thus, there is a possible total of 100 points. The content subcomponent is based on the following criteria: knowledge of the subject, substantial ideas, development of thesis, and relevance to the assigned topic. The organization subcomponent is based on fluency of expression, clarity of ideas, supporting evidence for ideas, succinctness, logical sequencing, and cohesiveness.
The vocabulary subcomponent is based on sophistication of vocabulary range, effective word/idiom choices and usage, word-form mastery, and appropriate register. The language-use subcomponent is based on use of constructions (simple or complex) and grammatical accuracy, i.e., appropriate use of agreement, tense, number, word order/function, articles, pronouns, and prepositions. The mechanics subcomponent is based on mastery of conventions, i.e., appropriate spelling, punctuation, capitalization, and paragraphing. Analytical scoring, of which the Composition Profile Sheet is an example, was used rather than holistic scoring because, as Omaggio (1986) points out, it tends to be less subjective.
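The score-reconciliation rule described above (average all three ratings when the spread is nine points or less; otherwise discard the rating furthest from the other two) can be sketched as follows. This is an illustrative reconstruction of the stated rule, not code from the study; the function name `reconcile_ratings` and the example scores are hypothetical.

```python
def reconcile_ratings(scores):
    """Combine three raters' 100-point totals per the study's rule:
    if the spread is 9 points or less, average all three; otherwise
    discard the score furthest from the other two and average the rest."""
    if max(scores) - min(scores) <= 9:
        return sum(scores) / len(scores)

    def distance_from_others(i):
        # How far rater i's score sits from the mean of the other two.
        others = [s for j, s in enumerate(scores) if j != i]
        return abs(scores[i] - sum(others) / len(others))

    drop = max(range(len(scores)), key=distance_from_others)
    kept = [s for j, s in enumerate(scores) if j != drop]
    return sum(kept) / len(kept)
```

For example, ratings of 60, 72, and 74 exceed the nine-point spread, so the 60 would be discarded and the remaining two averaged to 73.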
The probability level of significance was established at .05. One-way ANOVAs were run to determine if, indeed, there were any significant differences between the three groups. First, a one-way ANOVA was run on the scores of initial compositions, which students wrote before taking the course, to determine whether or not the students in each group were starting at the same level. There was no significant difference among the groups (p > .25); the mean score for Group B was 73.182, the mean score for Group A was 70.875, and the mean score for Group C was 71.833. The initial compositions that are written before students take a writing course vary in topic from term to term. However, there was no significant correlation between the topics used for these three groups and the scores on these initial compositions. Next, a one-way ANOVA was run on the scores of the final compositions written for the final exams. There was a significant difference in scores between Group B and Group A; the mean score for Group B was 75.364, and the mean score for Group A was 84.125. Not only was there a significant difference between these two groups, but additional one-way ANOVAs revealed that there was a significant difference between the initial scores and the final scores for Group A, but no significant difference between the initial scores and final scores for Group B. Thus, Group A had demonstrated significant improvement, whereas Group B had not. A one-way ANOVA did not show a significant difference between Groups B and C, the two groups that did not use readings. As reported, the mean score for Group B was 75.364, whereas the mean score for Group C was 80.917. When a one-way ANOVA was run on Group A and Group C, the two groups that used the same topics, there was no significant difference between their final scores (p > .25); the mean score for Group C was 80.917, compared to Group A’s mean score of 84.125.
One-way ANOVAs also indicated that Group A and Group C showed significant improvement over their initial scores.
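For readers who wish to see what these comparisons compute, the F statistic underlying a one-way ANOVA can be calculated directly from lists of group scores: the between-group mean square divided by the within-group mean square. The sketch below is the standard textbook computation, not the study's own software; the score lists in the example are hypothetical.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of score lists:
    F = (between-group SS / (k - 1)) / (within-group SS / (N - k))."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / N
    # Between-group sum of squares: each group mean vs. the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: each score vs. its own group mean.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (N - k))
```

The resulting F would then be compared against the critical value for the .05 level with (k − 1, N − k) degrees of freedom.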
Other predictor variables as well as the group assignment (whether topics were used and whether readings were used) were examined in multiple regression analyses for all three groups: 1) the age when the students were first exposed to English, 2) Japanese native language background, 3) Chinese native language background, 4) Arabic native language background, 5) gender, 6) length of time spent in a predominantly English-speaking society, 7) present age, 8) number of other languages acquired besides English and the native language, 9) length of time English had been studied in an academic environment, and 10) topics used for composition. In order to guard against collinearity among predictor variables, correlations were run. A .6 cut-off point was used. If two predictor variables had a correlation of .6 or greater, one would be discarded. However, there was no collinearity among the variables used in these regression analyses.
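The collinearity screen described here amounts to computing pairwise Pearson correlations among the predictors and flagging any pair at or above the .6 cut-off. The following is a generic illustration of that procedure, not the study's analysis code; the predictor names and values in the example are hypothetical.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def collinear_pairs(predictors, cutoff=0.6):
    """Flag every pair of predictor columns whose |r| meets the cutoff."""
    names = list(predictors)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson(predictors[names[i]], predictors[names[j]])
            if abs(r) >= cutoff:
                flagged.append((names[i], names[j], r))
    return flagged
```

If any pair were flagged, one member of the pair would be dropped before running the regressions, as the study describes.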
The three variables that showed a significant interrelationship with the total improvement in composing skill (i.e., difference between initial and final scores) were the age when students were first exposed to the English language, students’ gender, and topics used for composition. The statistics revealed that the younger the students were when they were first exposed to English, the more they improved; females improved more than males; and students who wrote their final exam compositions on “Discrimination,” Groups A and C, showed more improvement than students who wrote their final exam compositions on “Impressions of America,” Group B.
When the total scores were broken into their component parts and improvement (difference between initial scores and final scores) was statistically examined, the following predictor variables interacted significantly with the scores:
- Improvement in content interacted significantly with gender and the number of other languages acquired besides English and the native language. Females improved more than males; and the fewer languages acquired besides English and the native language, the more improvement in the content scores.
- Improvement in organization interacted significantly with Chinese native language, gender, and topics used for composition. Chinese students improved more than students of other language backgrounds, females improved more than males, and students who wrote their final exam compositions on “Discrimination” showed more improvement than students who wrote their final exam compositions on “Impressions of America.”
- Improvement in vocabulary interacted significantly with only one variable, Chinese native language. Chinese students improved more than students of other language backgrounds.
- Improvement in language use interacted significantly with the age that students were first exposed to English and Chinese native language. Students who were exposed to English earlier improved more, and Chinese students improved more than students of other language backgrounds.
- Improvement in mechanics interacted significantly with Chinese native language and topics used for composition. Chinese students improved more than students of other language backgrounds and students who wrote their final exam compositions on “Discrimination” improved more than students who wrote their final exam compositions on “Impressions of America.”
Disregarding improvement and focusing only on Final-Exam scores revealed, through multiple regression analyses, the following statistically significant interrelationships:
- The final-exam total scores interacted significantly with gender and topics used for composition. Females performed better than males, and students who wrote their final exam composition on “Discrimination” performed better than students who wrote their final exam composition on “Impressions of America.”
- The final-exam content scores interacted significantly with gender, length of time spent in a predominantly English-speaking society, and topics used for composition. Females performed better than males. The less time students spent in an English-speaking society, the better they performed. Students who wrote their final-exam compositions on “Discrimination” performed better than students who wrote their final-exam composition on “Impressions of America.”
- The final-exam organization scores interacted significantly with gender, the length of time spent in a predominantly English-speaking society, and topics used for composition. Females outperformed males, students who had been in an English-speaking society for less time performed better, and students who wrote their final-exam compositions on “Discrimination” performed better than students who wrote their final-exam compositions on “Impressions of America.”
- The final-exam vocabulary scores did not interact significantly with any predictor variable.
- The final-exam language-use scores interacted significantly with gender, length of time spent in a predominantly English-speaking society, and topics used for composition. Females performed better than males, students who had spent less time in an English-speaking society performed better, and students who wrote their final-exam compositions on “Discrimination” performed better than students who wrote their final-exam compositions on “Impressions of America.”
- The final-exam mechanics scores interacted significantly with Chinese native language, gender, and topics used for composition. Chinese students performed better than students of other language backgrounds, females performed better than males, and students who wrote their final-exam compositions on “Discrimination” performed better than students who wrote their final-exam compositions on “Impressions of America.”
The results of this study not only call into question the reading/writing connection, but also illustrate the importance of the relationship between the topics used for writing assignments and the acquisition of L2 composition skills.
As reported in the results section of this paper, there was no significant difference between the 1989 Fall II experimental group and the 1990 Summer control group in this study. When the initial mean score was subtracted from the final mean score of each of the two groups, the gains differed; even though the experimental group appeared to have improved more (a 13-point mean-score gain by the experimental group versus a 9-point gain by the control group), this difference could be due to chance. However, an eight-week session is not very long. It would be interesting to see what would happen in a program that has 16-week semesters. It might be possible that improvement is cumulative; a significant difference between the groups might be evident if the writing courses were lengthened.
Since the part of the study in which the experimental and control groups used different topics showed such different results from the part of the study in which the experimental and control groups used the same topics, the role of topic in the composition classroom requires examination. In the results section of this paper, it was reported that the topics-used-for-composition predictor variable interacted significantly with eight of the twelve criterion measures, three of the six criterion measures of improvement in scores and five of the six criterion measures of final composition scores. Topics used for composition, along with gender, showed significant interaction with more criterion measures than any of the other predictor variables.
In the 1989 Fall I and Fall II parts of this researcher’s study, the experimental group’s scores were based on compositions about discrimination, whereas the control group’s scores came from their writing on their impressions of America. Since all of these students were living in the United States and, thus, dealing with a foreign environment physically, socially, culturally, and psychologically, the topic of their impressions of America was, in all likelihood, one that they had discussed at least a few times [-12-] among friends and classmates. It is a personal topic, rather than academic; the content of the students’ compositions revealed the personal nature of the topic.
Interestingly, the topic of discrimination appeared to be approached from a much more academic viewpoint, even though one might have expected the control group to take a more personal approach, since they were writing without readings. They could have written on discrimination they had faced as foreigners in the United States, for example. However, since their instructor was an American, they may have thought it would be impolite to submit a composition of that nature about their instructor’s fellow Americans. The majority of the students from both the 1989 experimental and 1990 control groups who wrote on this topic analyzed particular discriminatory practices in their own countries.
This raises the question of what role familiarity with the topic plays. Langer and Weinman conducted a study in which half of the people in a Boston unemployment line “were asked to speak about why it was difficult to find a job in Boston,” and “half were asked to speak about finding a job in Alaska” (Langer, 1989, p. 21). These researchers assumed that the former topic was one about which their subjects had previously deliberated, whereas the latter was most likely a topic their subjects had not given much consideration. Half of each group was given time to think about and plan what they wanted to say, while the other half spoke extemporaneously. Langer reports the results:
Subjects were much more fluent when they were discussing a novel issue after being given time to think about it first or when they spoke about a familiar topic right away, with no time to think about it. Thinking about a very familiar topic disrupted their performance. (1989, p. 21)
Langer goes on to compare these results to the interference which could be expected in typing skills. She gives a hypothetical example of an experienced typist and a novice typist, saying that if each is asked “to type a paragraph without the usual spaces separating words. . ., it is likely that the person with less experience will have an edge” (p. 21).
The parallel which can be drawn with the present study is that extremely familiar ideas and skills become automatic and resist conscious examination and/or alteration. The students who wrote on their impressions of America may have discussed and/or thought about the topic often enough that their responses to it had become routinized. From an information-processing perspective, automatized or routinized cognitive processes cannot be consciously examined without disrupting performance; they respond automatically [-13-] to environmental cues. For these students, analyzing and reorganizing their thoughts on a very familiar topic of conversation would be much more difficult than analyzing and organizing less familiar ideas. Ample time was given for planning and organizing the “Impressions of America” and “Discrimination” compositions. Moreover, even if no time had been allotted for planning and organization and both groups had written impromptu papers, the “Impressions of America” papers might have been rated more highly than the “Discrimination” papers; in terms of rhetoric, however, they would most likely not have been rated as highly as the planned papers on discrimination, because of the differences between spoken and written discourse. Topics which have been mentally organized and routinized for spoken discourse would not meet written-discourse standards, since written discourse is not merely writing as one speaks.
The 1989 Fall II experimental group used medical ethics, education, and discrimination as topics for composition, but the Fall I control group used music, ecology, and impressions of America. It would be interesting to run a different experimental group, one which would use readings with the topics “Music,” “Ecology,” and “Impressions of America,” to see whether differences surface between this new experimental group and the 1989 Fall I control group. Considering the results of this study, one might hypothesize that a group that used readings with these topics would not improve significantly in its composition scores either, just as the Fall I control group did not. However, it is not clear whether the two groups that wrote their final exams on discrimination were able to show significant improvement over their initial composition scores because they had been taught composing skills using the topics “Medical Ethics” and “Education,” or because “Discrimination” is a topic on which it is cognitively easier for students to demonstrate their composing proficiency. This could be tested by using the topics “Music” and “Ecology” for daily assignments and “Discrimination” for the final exam. How much difference, and what kinds of differences, does topic make? Moreover, between similar groups, does one topic elicit more differences than another? These are the issues.
Although other predictor variables correlated significantly with the criterion measures, the topic and gender variables showed significant interactions with more criterion measures than any of the remaining variables. Unfortunately, the limited research that has been conducted on gender differences in second language acquisition, or in first-language use, yields little insight into the gender-differences results of this study. Please consult the appendix for a discussion of gender and the other predictor variables which showed significant correlations with the criterion measures. [-14-]
The results of this study need to be examined cautiously due to the small sample size. However, the study did produce a most important outcome: it points to the need for more research on 1) the influence of assigned topics on acquiring greater composition proficiency in the L2 writing classroom, and 2) the contribution of topic to students’ ability to demonstrate gains in writing proficiency. Moreover, it calls into question the use of personal, familiar topics when evaluating composition skills. White’s remarks concerning the extreme difference in scores between 1973 and 1974 on the California State Equivalency Examination (Hoetker, 1982) reflect the long-held assumption that personal experiences which are extremely familiar to students are cognitively less demanding topics for writing than less familiar, more abstract topics. It is not clear that this is the case when students are given time to plan their writing. Only when we know more about the role that topic plays will we be able to examine the reading/writing connection in any kind of conclusive way.
I thank Shelda Hale-Roca, the teacher of the ESL composition classes used in this study, for her unfailing cooperation with me in this project. I am also extremely grateful to Patsy Lanigan, Melinda Thompson, and Diane Lamon, the readers and evaluators of the ESL student compositions.
Carrell, P.L. & Eisterhold, J.C. (1983). Schema theory and ESL reading pedagogy. TESOL Quarterly, 17, pp. 553-573.
Carson, J.E., Carrell, P.L., Silberstein, S., Kroll, B., & Kuehn, P.A. (1990). Reading-writing relationships in first and second language. TESOL Quarterly, 24, pp. 245-266.
Grabe, W. & Kaplan, R. (1989). Writing in a second language: Contrastive rhetoric. In D. Johnson and D. Roen (Eds.), Richness in writing: Empowering ESL students (pp. 263-283). White Plains, NY: Longman.
Hairston, M. (1986). Using nonfiction literature in the composition classroom. In B.T. Petersen (Ed.), Convergences: Transactions in reading and writing (pp. 179-188). Urbana, IL: National Council of Teachers of English.
Hoetker, J. (1982). Effects of essay topics on student writing: A review of the literature. ERIC ED 217 486. [-15-]
Jacobs, H., Zingraf, S.A., Wormuth, D.R., Hartfiel, V.F., & Hughey, J.B. (1981). Testing ESL Composition. Rowley, MA: Newbury House.
Kaplan, R.B. (1966). Cultural thought patterns in inter-cultural education. Language Learning, 16, pp. 1-20.
Kennedy, B.L. (1988). Adult versus child L2 acquisition: An information-processing approach. Language Learning, 38, pp. 477-495.
Krashen, S.D. (1985). The Input Hypothesis: Issues and Implications. New York: Longman.
Krashen, S.D. & Terrell, T. (1981). The Natural Approach: Language Acquisition in the Classroom. London: Cambridge University Press.
Langer, E.J. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Long, M.H. (1983). Does second language instruction make a difference? A review of the research. TESOL Quarterly, 17, pp. 359-382.
Moragne, M. (1981). Cultural organizational patterns in the ESL classroom. TESOL Summer Meeting. Columbia University, 25 July.
Moragne, M. (1983). Essentials in cross-cultural education. Currents: Issues in Education and Human Development, 2, pp. 11-13.
O’Donnell, H. (1984). ERIC/RCS report: The effect of topic on writing performance. English Education, 16, pp. 243-249.
Oyama, S. (1976). A sensitive period for the acquisition of a nonnative phonology system. Journal of Psycholinguistic Research, 5, pp. 261-285.
Oyama, S. (1978). The sensitive period and comprehension of speech. Working Papers on Bilingualism, 16, pp. 1-17.
Patkowski, M. (1980). The sensitive period for the acquisition of syntax in a second language. Language Learning, 30, pp. 449-472.
Pytlik, B.P. (1986). Designing effective writing assignments: What do we know? ERIC ED 291 107. [-16-]
Santos, T. (1988). Professors’ reactions to the academic writing of nonnative-speaking students. TESOL Quarterly, 22, pp. 69-90.
Seliger, H.W. (1978). Implications of a multiple critical periods hypothesis. In W.C. Ritchie (Ed.), Second Language Acquisition Research: Issues and Implications (pp. 11-19). New York: Academic Press.
Shih, M. (1986). Content-based approaches to teaching academic writing. TESOL Quarterly, 20, pp. 617-648.
Snow, C. & Hoefnagel-Hohle, M. (1977). Age differences in pronunciation of foreign sounds. Language and Speech, 20, pp. 357-365.
Snow, C. & Hoefnagel-Hohle, M. (1978). Age differences in second language acquisition. In E. Hatch (Ed.), Second Language Acquisition (pp. 333-44). Rowley, MA: Newbury House.
Taglieber, L.K., Johnson, L.L., & Yarbrough, D.B. (1988). Effects of prereading activities on EFL reading by Brazilian college students. TESOL Quarterly, 22, pp. 455-472.
Thorne, B., Kramarae, C., & Henley, N. (1983). Language, gender and society: Opening a second decade of research. In B. Thorne, C. Kramarae, and N. Henley (Eds.), Language, Gender, and Society (pp. 7-24). Rowley, MA: Newbury House.
Witte, S. (1988). The influence of writing prompts on composing. Paper presented at CCCC, St. Louis.
Zamel, V. (1988). Recent research on writing pedagogy. TESOL Quarterly, 21, pp. 697-715. [-17-]
Appendix

With regard to variables other than topic, the multiple-regression results of this experiment show several things. First, the predictor variable of gender interacted significantly with eight of the twelve criterion measures examined. The fact that women performed better than men in these eight aspects of composition remains problematic. Research is very sparse in the area of gender and second language acquisition, and the situation is not much better in the case of first language: in the area of first-language English usage, there is little consensus on usage differences between men and women.
A review of the literature shows that very few expected sex differences have been firmly substantiated by empirical studies of isolated variables. Some popular beliefs about differences between the sexes appear to have little basis in fact, and in a few cases research findings actually invert the stereotypes (Thorne, Kramarae, and Henley, 1983, p. 13).
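The regression of a criterion measure on a categorical predictor such as gender can be illustrated in miniature. The scores below are invented, and the sketch uses a single dummy-coded predictor rather than the study’s full set of predictor variables; with a 0/1 predictor, the fitted slope is simply the difference between the two group means:

```python
# Minimal sketch of simple linear regression on a dummy-coded predictor.
# All scores are hypothetical; this is not the study's actual data.
import statistics

gender = [0, 0, 0, 1, 1, 1]        # 0 = male, 1 = female (dummy coding)
scores = [70, 74, 72, 78, 81, 80]  # hypothetical final-exam scores

mean_x = statistics.mean(gender)
mean_y = statistics.mean(scores)

# Ordinary least squares for one predictor: slope = cov(x, y) / var(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(gender, scores))
         / sum((x - mean_x) ** 2 for x in gender))
intercept = mean_y - slope * mean_x

# With dummy coding, the intercept is the male group mean and the slope
# is the female group's advantage over it.
print(f"intercept (male mean) = {intercept:.1f}, slope (female advantage) = {slope:.1f}")
```

A full multiple regression of the kind described above would add further dummy-coded and continuous predictors (native language, length of stay, and so on) and test each coefficient for significance, but the interpretation of each dummy coefficient as a group difference carries over.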
The Chinese native language variable interacted significantly with five criterion measures. The first four came from subtracting initial-composition scores from final-exam scores: vocabulary improvement, organization improvement, language-use improvement, and mechanics improvement. The fifth was the final-exam mechanics score. Chinese students consistently improved more (in the case of the first four criterion measures) or performed better (in the case of the fifth) than students of other language backgrounds. The Chinese language background is a factor that demands more research. The initial question which arises, of course, is “Did the Chinese speakers perform significantly worse in these five areas on their initial-composition scores?” The answer is no, they did not.
Chinese speakers typically do not use the same type of rhetorical organization that native English speakers use. Native English writers prefer a direct and to-the-point organization, whereas oriental writers prefer an indirect, talk-around-the-point rhetorical organization. This might lead one to believe that their superior improvement on the organization subcomponent score of their final-exam composition was due to the fact that they recognized, as they progressed through the course, that English demanded a different rhetorical structure. However, not only Chinese, but also other oriental-language writers, Semitic-language writers, and Romance-language writers typically employ different rhetorical structures from English (Kaplan, 1966; Moragne, 1981; Moragne, 1983; Grabe & Kaplan, 1989). Since the teaching in the [-18-] classes was the same for all students, why did students of other language backgrounds not improve as much?
The Chinese orthographic system differs greatly from English: the script is ideographic rather than phonetic, and it traditionally reads from top to bottom rather than from left to right. Again, one might hypothesize that the Chinese students recognized, as they progressed through the course, that English spelling, punctuation, capitalization, and paragraphing differed considerably from the conventions of their native language, and realized that they would need to learn these conventions. However, English mechanics also differ from Japanese and Arabic conventions; moreover, Japanese kanji was borrowed from Chinese. Yet neither the Japanese nor the Arabic language groups showed the same degree of improvement. Chinese is a subject-verb-object language, just as English is, which may have had some effect on the Chinese students’ greater improvement on the language-use subcomponent; they could transfer some basic grammatical structures from their native language. However, the reasons for their superiority on any of these subcomponents are not clear. Most Chinese students have studied English from junior high school on, and the focus of that study in their home country is predominantly on grammar, reading, and writing. The differences between English and Chinese have, most likely, long been apparent to the Chinese speakers in this study. Two of the questions that Long (1983) raises in his discussion of whether second language instruction makes a difference are, “Does type of learner make a difference?” and “Does type of instruction interact with type of learner?” Some part of the answers may lie within the relationship between the native-Chinese-speaker variable and the scores Chinese students achieved (perhaps native-Chinese-speaker is strongly related to type of learner), but the nature of those answers is unclear.
Three criterion measures, the final-exam content-, organization-, and language-use-subcomponent scores, showed significant interaction with the length of time spent in a predominantly English-speaking society. In all cases, the less time spent in an English-speaking environment, the better the scores in these subcomponent areas. This goes against all intuition. Based on the natural approach to language acquisition (Krashen & Terrell, 1981) and Krashen’s Input Hypothesis (1985), one would think that the more time spent in an English-speaking environment, the better. These students are beyond the intermediate level in their L2, English, and are capable of soliciting input. Therefore, being in a native-English-language environment should be an advantage, at least in the language-use area. One possible explanation might be that students who have been in an English-speaking society longer have determined the amount of grammar needed to be communicative and have ceased to be concerned about further grammatical accuracy, whereas students who have spent less time in the environment are unsure about the level of grammatical [-19-] proficiency they will need to survive well. Therefore, they give more attention to linguistic forms.
Two criterion measures, improvement on the final-exam composition score over the initial composition score and improvement on the final-exam language-use-subcomponent score over the initial language-use-subcomponent score, interacted significantly with the age at which the students were first exposed to English. In both cases, students who were exposed to English earlier improved more. This finding does not seem unusual when one considers that childhood L2 acquisition produces greater ultimate achievement in the L2 than does L2 acquisition that begins later (Oyama, 1976, 1978; Patkowski, 1980; Seliger, 1978). Although Snow and Hoefnagel-Hohle (1977, 1978) have shown that later learners have an initial short-term advantage in their rate of L2 acquisition, they also report that, in terms of ultimate achievement, earlier learners show greater L2 proficiency. An information-processing analysis would predict results such as these (Kennedy, 1988). However, the question that arises is, why is the improvement not in all areas? Oyama discusses the “waxing and waning” of sensitive periods, and it may be that a number of these students were in a sensitive period for acquiring grammar skills. Moreover, although an L2 learner, like an L1 learner, acquires discourse ability in the language, spoken conversational discourse does not impose nearly as rigorous requirements for organization as written discourse does. And even though language use differs to some extent from spoken to written forms (e.g., written discourse demands more conciseness and less redundancy), the grammar rules used for spoken discourse still apply to written discourse. Thus, learning the grammar of a language may be a significantly different process from learning composition skills such as organization and mechanics. We certainly see native speakers in L1 composition classes who make few grammar errors but nonetheless lack composition skills. This may parallel what we see with early L2 learners: they can acquire grammar skills much more quickly than other composition skills.
One criterion measure, improvement on the final-exam content-subcomponent score over the initial content-subcomponent score, showed significant interaction with the number of other languages students had acquired besides English and their native language. Students who had acquired fewer languages improved more. If, indeed, language is a reflection of how one views the world, and if different language populations view the world differently, coming to terms with these differences in the area of content for a composition may cause problems; the more languages a person has acquired, the more potential there may be for exposure to conflicting viewpoints. However, if interference among the languages that the students know occurs in the area of content, what prevents it from occurring in other subcomponent areas as well? [-20-]

As this discussion reveals, this study raises many unanswered questions. The beauty and the frustration of research is that, even though it sometimes answers some of our questions, it so often asks so many more!
© Copyright rests with authors. Please cite TESL-EJ appropriately.
Editor’s Note: Dashed numbers in square brackets indicate the end of each page in the paginated ASCII version of this article, which is the definitive edition. Please use these page numbers when citing this work.