Vol. 7. No. 1 F-1 June 2003

***TESL-EJ Forum***

Practical and Theoretical Approaches to ESL/EFL Student Evaluation of Teachers

Karen Stanley, editor

<karen.stanley@cpcc.edu>

Teachers get evaluated in many ways and for a range of different purposes. While at times "evaluation" takes place in the form of grumbling or praise in unofficial conversations, it is the different types of officially structured evaluations that get the most attention. Evaluation usually purports to be a tool to help teachers improve their classroom approach, but other elements involving hiring and promotion may color the experience, and sometimes the evaluation process is simply seen as an idiosyncratic judgment which is officially recorded to meet the requirements of administrative bodies. Input into a final evaluation can come from many sources, among them single or multiple observations by peers or administrators, test results of students exiting a course or program, review of syllabi and instructor-developed teaching materials, participation in and contribution to professional activities or publications, and written feedback from students.

When considering the role of this last source, written opinions by students, many factors may come into play. How much does the culture of the student affect views of the teacher? Does the particular chemistry of a given class make a difference? How aware are students of their own input and responsibility in the process of learning? Have prior educational experiences set up particular expectations on the part of the student? Does teacher friendliness versus aloofness, aside from factors of learning, play a major role - and should it? Particularly in the case of second language learners, is the language of an evaluation question and of the student's response a significant factor?

These and other aspects of student evaluation of teachers are discussed in the following posts, selected from messages to the TESL-L email list for the period from November 1998 through July 2002. Contributors whose email addresses are listed welcome feedback from readers.


Geoffrey Vitale, Quebec, Canada
<geoffrey_vitale@UQTR.UQUEBEC.CA>

[T]eacher evaluation . . . [has] a bearing on methodology since both the approach and the outcomes of evaluation impact on methods (or don't they?). In Moroccan & French state schools in bygone years, I received annual visits from an Inspector, followed by a cosy after-class chat. At the same time, the Casablanca Berlitz "evaluated" its teachers by eavesdropping through the classroom intercom (I hear that some North American state schools still do this). In Japan, my experience (in a private school) was that evaluation meant a triangular conversation - student to principal, principal to teacher (?). At my Quebec university we have very structured evaluation by the students. However, we never actually read what they have to say; it's anonymously collated, and untenured profs find themselves rapidly leafing through pages of percentiles and equations trying to find some intelligible comment at the end. Tenured professors tend not to make even that effort. The most interesting type of evaluation I saw was in Belgium - and it relates directly back to the classroom - the final student evaluation involved ticking off the most appropriate of a combination: 1) good teacher/good method, 2) good teacher/bad method, 3) bad teacher/good method, and 4) bad teacher/bad method. [-1-]

A suggestion to introduce a similar system into our department came very close to provoking a union-sponsored walk-out. Do other TESLers believe in evaluation - by whom and how? Does it make you think, change strategies, etc.? It might be useful for new TESLers, too, to know how it happens out there in other cultures' classrooms!


David Ross, Department Chair, Intensive English Program, Houston Community College, Southwest Houston, TX
<david.ross@hccs.edu>

[A poster] . . . posed the following dilemma:

<< Before the decision was made to give some of our students academic credit, our department used a self-made evaluation instrument that seemed to work nicely, but that instrument was not norm-referenced. Does anyone know if there is a standardized, norm-referenced evaluation instrument available for international students to use to evaluate their classes in Intensive English programs?>>

. . . The answer to this particular problem . . . should not be too difficult, if the administration is acting in good faith.

I assume that some kind of institutional research or staff-development office has developed the instrument for student evaluation, administers it, and disseminates the results. What the Intensive English program can do is work with that office to develop an overlay set of questions which elicits the same information but uses simpler English. This overlay is then distributed to all credit-bearing ESL classes along with the test instrument. Because the same information is being elicited, there should not be too much protest about resulting unreliability.

If the advanced IEP students are having difficulties which surpass just understanding the language of the questions, then we've got a serious problem. Students who are advanced enough to earn degree credit (I assume that's the kind of credit the poster is talking about) should be able to fill out a standardized survey form after proper instructions. Her program may wish to begin offering the college form to lower levels of the IEP (assuming a suitable overlay has been developed) to prepare them for the day when the results really count. But I would emphatically not attempt to replace the institutional research office's initiative in developing campus assessment instruments. It's part of what comes with the territory of being "just like" foreign languages, biology, history, and all those other departments we profess to envy from time to time.


Karen Stanley, Central Piedmont Community College, Charlotte, North Carolina, USA
<karen.stanley@cpcc.edu>

Some years ago, I went to a wonderful presentation on evaluations by a couple of women teaching in Central America. (As usual, I have remembered ideas but not names.)

One of the things that they included in the evaluations was an opportunity for the students to evaluate *their* role in the learning process. They included questions about how the student would evaluate him/herself as a student, what the student thought he/she could do to improve, etc.

This meant that the course was presented not as something the *teacher* did or didn't do, but as a joint effort in which the student as well as the teacher needed to take responsibility for the learning that takes place in the classroom. [-2-]


Susan T. Simon, City College, City University of New York, USA

I'd like to add a couple of caveats to the discussion of student evaluations of teaching methods, materials, objectives, effectiveness, etc. Remember that some of this is "teacher talk"--we know what we mean by these terms because we've been through teacher ed courses, but the students may have a different understanding of them. So it's important to couch the questions in a way that doesn't ask the students to pose as professional evaluators; if you want their insights, ask them in a personal way: "What activities in the course did you learn the most from? (Answer with details and examples.) What parts of the book were most useful to you?" Etc. You can often obtain more helpful responses from informal questions like these than from more formal and seemingly objective questions.

I also agree with Karen Stanley that evaluations should ask the students to think about their own role. The evaluation instrument should treat students as participants in the learning process, not judges. Although the teacher wants to use the evaluation to improve his or her teaching, it can also serve as an impetus to encourage the students to think about their own learning methods, goals, and accomplishments. The evaluation can thus become an effective closure for both students and teacher.


Sab Will, Webmaster, The Language Fun Farm
http://www.teflfarm.com

. . . a good technique for getting pleasingly positive evaluations from students on the final evaluation sheet that all those important people see: ask the students to fill in your own personal, for your eyes only, anonymous evaluation sheet about a third of the way through the course. Act on their suggestions and comments, changing the course slightly if necessary, and your marks will magically soar!

It works along pretty much the same lines as the old adage 'tell 'em what you're gonna do, do it, then tell 'em you've done it'. Often enough they actually seem to believe you.


Abigail Tom, Durham Technical Community College, Durham, NC, USA
<abtom@MINDSPRING.COM>

I teach high beginning adults in a class that meets 5 days a week, 2 and a half hours a day. Every Monday morning they fill out the following short form:

WEEKLY LEARNER'S LOG

In class last week:

Some things I learned:
Some things I didn't understand:
Some things I liked:
Some things I didn't like:
Some things I want to study:
Some things I need help with:

Outside of class last week:

I spoke English (where? to whom?)
I listened to English
I read English
I wrote in English

This helps them think about what they have learned and it helps me know what I should be doing. [-3-]


Pete MacKichan
<petermac@otenet.gr>

[A poster] raised the important issue of using questionnaires for feedback and evaluation. I am assuming that we are talking about a formal feedback exercise rather than the more informal feedback that is carried out between an individual teacher and their class.

Before collecting information I think it is important to consider the desired outcome of the feedback exercise, and not just the purpose. What is this information going to enable us to do? Often questionnaires tell us very little. We may discover that 40% of the learners are dissatisfied with their teaching but get little useful information about why.

Often questionnaires tell us things that we ought really to know already. We might find out that a class is dissatisfied with a particular teacher, but to find this out through a questionnaire tells us more about the state of management than anything else.

Often questionnaires tell us things that we can do little about. Students locked into an examination paper chase might favour an increase in grammar teaching, when the needs of the examination and their own weaknesses might point towards a stronger emphasis on listening and writing.

Often questionnaires take up a lot of valuable time. The time taken to process and evaluate ten responses from a hundred students is considerable, even with helpful software at our fingertips.

Often questionnaires ignore the fact that teachers are clients too. Management ought to be interested in collecting data from teachers and secretarial staff on its own performance. Alas, in 13 years of teaching I have never once been asked to evaluate my managers. (And I'm not holding my breath.)

However, as Karen Stanley rightly pointed out, a questionnaire can be a useful tool in learner training. As teachers we are used to self-assessment as part of a process of appraisal and development. Ought not the same to be applied to learners?


Dave Kees, China Communications Foreign Language Center, China
<davkees@PUBLIC.GUANGZHOU.GD.CN>

[A poster] . . . asks if we have had the experience of it being rare for students to give an unfavorable evaluation of the teacher due to certain reasons related to the student's culture.

In some ways, this is the case here in China. Their culture doesn't allow them to speak up and state any criticisms clearly.

Or so it seems.

In reality, I've found my students are rather outspoken in their quiet way. The problem was my culture. As an American, I expect that if someone wants to get my attention they will come up with 2-by-4-cracked-over-the-head bluntness. When the message doesn't arrive my way, that doesn't mean it doesn't exist.

What I've learned is that with some people we have to amplify their little comments, tiny suggestions tossed off without emphasis, little ideas.

I was working at P&G here, which is very steeped in American management processes and culture. They do personal evaluations, peer reviews, and 360-degree feedback all the time. I encouraged feedback but never got any until the HR manager delivered evaluations that really showed me how little I knew about the students' feelings. Then, re-running past classes in my mind, I could recall one student mentioning that maybe a little more role-play would be nice. Another gently and quickly suggested a little more review of the previous material. But I paid little attention to these little comments until I learned my lesson. [-4-]


Karen Stanley, Central Piedmont Community College, Charlotte, North Carolina, USA
<karen.stanley@cpcc.edu>

[A poster] . . . asks a very interesting question about how different cultures perceive the teacher's role.

A number of years ago, at a TESOL conference, a young man made a presentation on that topic. His interest stemmed from the fact that he had received poor student evaluations, and he suspected that a good part of this was related to the fact that what he conceived of as good teaching was not necessarily how his students saw good teaching. Because it was so long ago, I don't remember many details. The one that remains vivid in my mind (because it was such a new concept to me) was in terms of what constituted a fair test. He had found that students from cultures that depend to a great extent on memorization as a method of teaching and learning felt that a test asking them to apply previously learned materials/skills to *new* situations was not fair because they had not been given an opportunity to memorize the material ahead of time.

On the TEFLChina lists, a dissertation written by Ming-Sheng Li was mentioned. I haven't read it, but among other things people reported that it looked at the effectiveness of native speakers as English language teachers in China. It evidently discussed Chinese students' opinions of what they perceive to be the most effective teaching style for language learning, which (from what I gather) is teacher-fronted lectures. While the dissertation itself has disappeared from the internet, Ming-Sheng Li delivered a related paper at the Joint AARE-NZARE 1999 Conference in Melbourne, "Discourse and Culture of Learning -- Communication Challenges," which is still available:

http://www.aare.edu.au/99pap/lim99015.htm

It would appear that views of the teacher's role are not even the same within our own culture. "The Teacher-Student Relationship, A Study of Community Expectations," Buck & Kovlesky (ERIC ED002756), looks at how teachers, school administrators, school board members, students, other adult community residents and teacher trainers at a school in Pennsylvania see the teacher's role. "It was concluded that none of the groups agreed on the definition for the teacher's role for both the extra-class or in-class sectors." (quote from the ERIC abstract)


Kris Barker, Pacific Gateway International College, Vancouver, Canada
<teaching@shaw.ca>

I have just finished teaching my first session of classes at a new school. It is what my school likes to call "life skills." I call it giving the students something to do and keeping them out of trouble.

I received my evaluations and discovered a trend. My M/W class, who would consistently do h/w, gave me a more or less favorable evaluation. However, my T/R group pretty much told me my certification came out of a Cracker Jack box.

My question is this: How much validity should schools give to students' evaluations, especially when the evaluations are given to S's who A) don't care, B) attend sporadically, C) are in your country for varying lengths of time? I can correlate h/w and attendance with positive evaluations. I am not upset about the negative evaluations but concerned about management's interpretation of them.

I also find that different ethnic groups have different approaches to evaluations and to the degree to which they will emphasize their dissatisfaction/satisfaction. [-5-]


David Ross, Department Chair, Intensive English Program, Houston Community College, Southwest Houston, TX
<david.ross@hccs.edu>

Kris Barker wrote:

>My question is this: How much validity should schools give to students' evaluations, especially when the evaluations are given to S's who A) don't care, B) attend sporadically, C) are in your country for varying lengths of time? I can correlate h/w and attendance with positive evaluations. I am not upset about the negative evaluations but concerned about management's interpretation of them.

Kris Barker raises a tricky issue of program management: how much to rely on student evaluations, given that their reliability is frequently open to question.

Evaluations such as this, as well as just about any performance measure, have two functions: formative (i.e., how can this data make me a better teacher?) and summative (rehire or not?). To begin with, I am of the conviction that the entire performance review process should focus mainly on the formative goal. If a program had good enough reasons to hire someone, the program should commit to enough faculty development to bring this faculty member up to the standards of the program. Performance review that focuses mainly on rehiring decisions creates a chasm between the faculty and the program management and ends up being useless for anything other than a thin cover for getting rid of teachers.

Having said that, I am unwilling to dismiss student evaluation data as totally meaningless, assuming that the evaluation instrument is a valid one. Students have to have a way of expressing their views about a teacher other than storming the department office, and student evaluation data, accumulated over several terms, can give the department valuable information about areas where the faculty member can improve. Teachers who routinely dismiss such data across the board are often in denial about their own weaknesses, and are usually unwilling to accept any data that shows a need for improvement.

This is not a cut-and-dried answer, but I would say that you and the department should study the data carefully, not be buffaloed by one unfavorable class, but not ignore student input either.


Roberto Perez, Florida State University
<rgpg@TECHNOLOGIST.COM>

Kris Barker wrote:

>I received my evaluations and discovered a trend. My M/W class, who would consistently do h/w, gave me a more or less favorable evaluation. However, my T/R group pretty much told me my certification came out of a Cracker Jack box.

>My question is this: How much validity should schools give to students' evaluations [-6-]

That is a good question. My first semester teaching in college, I received my worst evaluations ever, to the point that some students would even lie (e.g., "he was never available to help" or "he did not know the subject matter at all"), and I had all kinds of documents (assistance provided after hours, on weekends, etc.) to prove that the allegations were not true.

Anyway, that semester I was trying to explain to my students (pre-service teachers) the why of each classroom activity, assignment, portfolio item, etc., each time they would complain. I would describe the teaching/learning paradigm behind it, the research, the benefits to them from a learning perspective, etc. Obviously they did not buy into any of that (remember: they were future teachers themselves).

The following semester, and after seeing the results of my previous semester's evaluations, I would just say, "You need to do it because it is included in the syllabus, and it gets you points." That semester my evaluations went sky-high, and I got a 100% thumbs-up from students on items like "respect for students," "concern for students' learning," etc.

In my opinion, I was more concerned about their learning in the first semester, when I would take the time to explain to them all the "behind the scenes" of the teaching profession. The second semester, I was just reminding them of the rules: you do this, you get the points. But from their perspective, my concern for their learning seemed far greater in that second semester, when my message was basically "just do it."

So, although I agree with David Ross that students' opinions are very important, I cannot help but be frustrated by the fact that, when we evaluate students, we need to have all kinds of documents to prove we are not making things up. But when they evaluate us, they can vent all their anger and frustration over a well-deserved low grade without needing to prove anything, sometimes destroying a teacher's reputation (or even costing him/her a job).

Is there an evaluation method/instrument out there that has addressed this issue successfully?


Thea Landesberg, Clifton High School, Ridgewood, New Jersey USA
<CliftonESL@netscape.net>

[A poster]. . . asks about the validity of student evaluations.

Every year, as a very small part of my final examination, I ask my students to write one paragraph telling what they liked/didn't like about the class and giving their "suggestions" for future classes. I list the different kinds of activities we did over the course of the year to help give them ideas for their comments. If a student writes one simple paragraph with comments, he or she gets full credit, regardless of what was liked (or not). I specifically don't ask for their evaluation of me, the teacher, because I'm sure they will all say nice things. However, in asking for evaluations of the class, I usually get good feedback that helps me evaluate my work. As a bonus, the students realize how much they have learned over ten months. Obviously, you can only do this with students who are able to express themselves reasonably well in writing. [-7-]


Lida Baker, American Language Center, UCLA Extension Los Angeles, USA
<lbaker@UCLA.EDU>

I completely sympathize with . . . [the poster] who was shocked to receive less-than-stellar evaluations from her students despite all her hard work and commitment to her students. Is there a teacher out there to whom this has not happened? . . . [The poster] suggests having students do a midterm evaluation in order to catch problems when there is still time to fix them. In the IEP where I teach we administer informal evaluations not just at the midterm but during the third week as well. By then the class has settled in, students have begun to use the textbook, and they've had a taste of how the teacher conducts lessons. It's a good time to ask them how they like what they've seen so far and how they'd like the course to proceed henceforth. There is much in the literature these days about "empowering" students. I dislike this and all other buzzwords, but the concept is sound: Give students a voice in shaping their learning experience, and chances are they will be more motivated and ultimately more satisfied.

Even after twenty years of teaching, I fear and loathe being evaluated. However, I have to concede that evaluations provide valuable information that helps me to do my job better.


Anthea Tillyer, City University of New York (USA)
<ABTHC@CUNYVM.CUNY.EDU>

The issue of student evaluations of teachers is really tricky, and I think that it is even more tricky if the people filling out the evaluation forms are not native speakers of the language of the form and might have certain culturally-based biases against evaluating teachers or against certain teachers. It is amazing how some programs ask real beginners to use sophisticated language in reading and answering complicated questions about their learning experience.

Over the years, I have seen lots of good teachers wounded by unreliable results from poorly constructed evaluation instruments. And I have seen program administrators far more eager to be critical of teachers than of those evaluation instruments. If the evaluation instrument is unfair or unreliable, it should not be used at all, for any reason.

I think that there is only one really useful question to ask students about a teacher, and that is: "Would you like to have another course with this teacher? If you would, please explain why. If not, please explain why." However, even this question does not necessarily encourage honest answers and it is still a difficult question for a beginning language learner to answer clearly and accurately.

Even questions like "Are you learning?" are really worthless in the evaluation of a teacher because students might not even be aware of whether or not they are really learning (as opposed to studying). And of course, research shows that language acquisition does not always take place at the same time as language study. Sometimes it takes years for language to be internalized.

To me, one can only learn anything from student evaluations over time, looking to see if a pattern emerges. But even that is only valid if the evaluation instrument itself is valid and if we can be sure that the students really do understand the questions on the form. [-8-]


Maria Spelleri, Manatee Community College, Florida
<mariasp@PEOPLEPC.COM>

Perhaps volunteers could be used to help students fill in evaluation forms. In the adult ed ESL environment, which often uses some volunteers anyway, volunteers could pull students out in groups of 3 or 4 and help them understand each point of the evaluation, respecting privacy and confidentiality, of course. Where there are no community volunteers around, perhaps non-ESL students from other departments could be recruited to help out. Evaluations would have to be done over the course of a week, instead of in one class. I think that open-ended questions should be answered in the native language if desired, at least in all but the advanced-level classes, and can be translated by volunteers or others when collected.


Emil Ysona, State University of New York-Westchester Community College
<emil@EMIL.NET>

As a former academic director of a proprietary IEP, I took student evaluations VERY seriously. A student happy with his teacher is more likely to return, extend his studies, and/or recommend the school to his friends. These evaluations also give the administration the chance to place teachers in levels where they have demonstrated an ability to build rapport with the students. This tends to make both the students AND the teachers happy campers.


Gregory Anderson, University of Southern California
<gga@USC.EDU>

Emil Ysona wrote that he "took student evaluations VERY seriously". In a business, where profits are essential, language acquisition or skill development may not be the highest priority.

In an academic program, where matriculated language learners are preparing for instruction in the target language, the acquisition of the language and the mastery of skills is of paramount importance. In this situation, the point made by Anthea Tillyer is particularly appropriate: "students might not even be aware of" their language acquisition or skill development.

Two hypothetical cases from a comprehensive, academic English program may provide some food for thought:

Teacher A receives consistently mediocre evaluations. Most students grudgingly admit that they learn something from her, but complain about how strict she is. The most favorable comment she receives is "The teacher is strict but fair." The quality of the students' production, on the other hand, is excellent. The research papers that her students produce, for example, are consistently the best of any section of the same course.

Teacher B, on the other hand, usually receives glowing evaluations. Comments like "The teacher is so friendly and nice, thank you" follow a column with all the highest marks. Even with more proficient students, however, the quality of production is extremely inconsistent. Upon investigation, the program administrators find that important tasks are never assigned and program-wide achievement standards are largely ignored.


Chris Goddard, Riga Graduate School of Law
<Christopher.Goddard@rgsl.edu.lv>

To the extent that evaluation of teacher performance is necessary at all, it may be more useful in the wider context of course evaluation. This itself is an instrument for finding out client perceptions of the effectiveness of an educational institution's marketing apparatus.

One group of factors to take into account might include students' cultural and linguistic background, their motivation, and their expectations of the course. These need to be established at the outset. For example, students from some cultures could be unwilling to put their names to any kind of meaningful evaluation.

Another group of factors might include the type of course, its location and timing, its frequency and duration. For example, among students arriving tired after a working day for an evening course, there's a world of difference in attitude between those who really want to be there, and those who are simply there because their company sent them. Those attitudes may be reflected (both ways) in end-of-course evaluations. [-9-]


© Copyright rests with authors. Please cite TESL-EJ appropriately.

Editor's Note: Dashed numbers in square brackets indicate the end of each page in the paginated ASCII version of this article, which is the definitive edition. Please use these page numbers when citing this work.