
December 2007
Volume 11, Number 3


Current Approaches to Assessment in Self-Access Language Learning

Hayo Reinders [1]
University of Hawaii
<temp@hayo.nl>

Noemí Lázaro
Universidad Nacional de Educación a Distancia, Spain
<noemilazaro@gmail.com>

Abstract

Assessment is generally seen as one of the key challenges in the field of self-access learning (Gardner & Miller, 1999; Champagne et al., 2001; Lai, 2001; Kinoshita Thomson, 1996). Many researchers and practitioners point to difficulties with assessing language gains in an environment in which variables cannot be comprehensively controlled for. Furthermore, assessing the development of learning gains, such as the ability to plan and manage independent learning or to approach a learning task strategically (a key aim of self-access learning), relies on underdeveloped methodologies and assessment tools.

This article reports on a study of current approaches to assessment in self-access in 46 centres in 5 countries (Germany, Hong Kong, New Zealand, Spain, and Switzerland). These centres were visited and extensive interviews were conducted with their managers. The interviews were based on a SWOT analysis (strengths, weaknesses, opportunities and threats) of each centre. Data from the interviews were analysed to identify current assessment practice. It was found that more than half of all centres conducted no form of assessment at all. The remaining centres used a wide range of assessment measures: self-assessment (the most common type), collaborative assessment with an advisor, external examinations and tests, teacher assessment, advisor assessment, peer assessment, and assessment by a panel. There was little evidence of a consistent approach to assessment in the centres, and its application seemed haphazard. There was no evidence that the instruments were checked for validity or reliability.

The Literature

Many researchers have pointed to the importance of assessing self-access language learning. Considerable investments are often made in setting up and maintaining self-access facilities, and such investments need to be justified (Morrison, 2006). In addition, a greater degree of accountability is required than in more mainstream learning environments, both by funding agencies which, as Morrison (1999, referring to Weir & Roberts, 1994) points out, increasingly require proper assessment procedures to be in place, and by students and parents. This is especially the case because the self-access model does not reflect standard classroom practice. Of course, assessment can also help management improve a centre (Gardner & Miller, 1999, p. 206) and allow reporting back to students. However, some feel that assessment in self-access facilities should be limited to reporting to students on their achievements, not to justifying the centre’s existence (Gardner & Miller, 1999, p. 206).

Assessment of self-access language learning is seen as particularly challenging. Gardner (1999) identifies five main difficulties:

  1. the complexity of self-access systems (due to their individualisation),
  2. the uniqueness of self-access systems,
  3. data collection (as there is less control than in a classroom situation over what learners do and when they do it),
  4. data analysis (because often less is known about the learner),
  5. purposes of evaluation (which generally focus on learning rather than teaching, and on resource management).

In addition to these, a further and perhaps central difficulty stems from the fact that a learner’s progress can almost never be attributed solely to self-access learning. Self-access centres are by definition open, flexible environments in which learners select their own pathways, materials, and goals. In addition to the work they do within the centre, they may work on their own, be enrolled in courses, take private lessons or practise with other learners. Even if these variables could be controlled for, learners who choose self-access may differ from other learners in their motivation, their predisposition to self-directed learning, and perhaps their pre-existing level of autonomy and/or metacognitive awareness.

In addition to helping learners improve their language, most self-access centres aim to improve users’ learning skills and develop autonomy. In fact, Benson and Voller claim:

Self-access resource centres are the most typical means (emphasis added) by which institutions have attempted to implement notions of autonomy and independence over the last twenty years to the extent that ‘self-access language learning’ is now often used as a synonym for ‘autonomous language learning’. (1997, p. 15)

However, despite the longstanding interest in the development of learner autonomy, no generally accepted approaches to assessment have been developed, either in or outside of the self-access context.

Sinclair (1999) identifies four ways in which evidence for the development of autonomy has been sought:

  1. through proficiency gains,
  2. feedback from teachers and learners,
  3. logging learner behaviour, and
  4. studies into the effects of strategy training.

She points out that there is little evidence to suggest that programmes that aim to promote autonomy lead to greater learning gains than programmes that do not. Feedback, although useful, may not be reliable, and learner logs do not give enough information about the processes underlying the behaviour. Sinclair argues for the assessment of the “capacity” (see, for example, Holec, 1981) for autonomy, by looking at learners' metacognitive awareness. She developed a set of criteria for assessing different levels of metacognitive awareness in learners, using interviews to determine whether students can provide a rationale for their choice of learning activities, describe the strategies they used, evaluate those strategies, identify strengths and weaknesses, describe plans for learning, and describe alternative strategies.

Lai (2001) notes the need for a more analytical approach to assessing learner capacity for self-direction (and therefore learner autonomy). She designed two measurement scales: one for process control at the task level and one for self-direction at the level of the overall learning process, that is, at the micro and the macro level respectively. Process control is defined as “a learner’s ability to self-monitor and self-evaluate her learning tasks and/or learning strategies employed for each learning activity” (p. 35) and is measured through an assessment of a student’s task aims and their ability to conduct self-assessment, scored on a five-point scale. Self-direction is defined as “a learner’s ability to take charge of, self-organize or manage her own learning process” (p. 39) and is assessed through answers on a seven-point rating scale for questions relating to students’ ability to:

  1. set realistic goals for their learning,
  2. identify the scope of their learning, relevant materials to work with and related activities to engage in,
  3. employ materials and activities skilfully for monitoring their own learning,
  4. set their own pace for learning, and
  5. conduct self-assessment.

Mynard (2004, 2006) suggests several measures through which learner autonomy can be assessed, such as investigating strategy use; assessing the ability to reflect, evaluate and plan; first-person narratives; interviews; learner journals; and observations. In her 2004 study, she used a combination of questionnaires, observations, chat transcripts, student-produced work and in-depth interviews with students, which were conceptually coded to identify instances of student reflection and awareness (although it is unclear on what basis such instances were identified).

Champagne et al. (2001) propose both qualitative and quantitative measures to assess learner autonomy and language improvement. The assessment of language learning progress is based mainly on the students’ self-perception and self-assessment of their development, as well as on teachers’ observations, for example in entry-exit interviews, portfolios and one-to-one language consultations. It is also based on video recordings of classroom interaction and on written feedback given throughout the course. The authors feel that such an approach is more inclusive because it gives stakeholders a role in the assessment process. The assessment of autonomy is conducted on the basis of data from students’ work (such as portfolios and presentations), observations and learning records, all assessed by the teacher, as well as learners’ self-perception of their progress (through learning journals, and oral and written evaluations). Examples of autonomous learning in this approach include students’ ability to locate and use information and resources appropriately, to obtain relevant information from others, to examine their own and others’ work critically, and to initiate, plan, organise and carry out a piece of work. The courses in which these assessments were implemented clearly specified both the language learning goals and the autonomy-related goals, so that students knew what was expected of them. Reports of such clear integration are uncommon in the literature; however, it should be noted that none of the above authors discuss assessment in the self-access context per se.

Reinders and Cotterall (2001), although not looking specifically at the assessment of self-access learning, used a combination of questionnaires, in-depth interviews and observations to gauge learners’ perceptions of their self-access learning. Such an approach could be used as a type of assessment procedure. Among their findings was that students were satisfied with the self-access facilities and believed these would help them develop as autonomous learners. However, when probed in the interviews, students showed a rather shallow awareness of what independent learning entails. Students who found independent learning beneficial often seemed to have misunderstood its meaning. When asked in what ways independent learning was fostered, many students mentioned the fact that they had been shown a variety of materials. The source of direction clearly remained with the teacher or self-access advisor. Few students indicated that they felt more able to initiate or plan their learning, or that they had been required to do so. When asked to specify what it was that the teacher showed them, students mentioned only strategies for learning, all of them within the cognitive domain. None of them mentioned metacognitive strategies, such as planning or monitoring progress. In that sense, students seemed to feel that learning to learn is a set of techniques that an instructor teaches them. The results of this study show that relying on questionnaire data alone can be unreliable, and that the assessment of self-access learning probably needs to take a more comprehensive approach.

In summary, assessment in self-access is fraught with difficulties, but attempts have been made to assess different aspects of self-access learning. However, as far as we are aware, no study has attempted to document the range of approaches currently in use; it therefore remains unclear what other approaches may exist in practice.

The Study

This study investigated assessment practice in 46 self-access centres in 5 countries (Germany, Hong Kong, New Zealand, Spain, and Switzerland). Each centre was visited and extensive interviews were conducted with its manager. During each interview, a SWOT analysis (strengths, weaknesses, opportunities, and threats) of the centre was conducted. As part of the interview, extensive discussion took place on the topic of assessment in the centre, as well as on its particular strengths and weaknesses. The interviews were recorded and subsequently analysed to identify current assessment practice through a categorical content analysis (see Bardin, 1983; L’Ecuyer, 1990). Discussion of assessment practice was coupled with the researcher’s notes of the observations made in the centre to establish the range of assessment practices across the centres.

Results

The transcripts of the interviews were used to identify and group similar assessment practices. Figure 1 and Table 1 show a summary of the results. These are discussed further below.

 

Figure 1: Types of Assessment in SAC (I)

Table 1: Types of Assessment in SAC (II)

No Assessment

Of the 46 SACs studied, 24 did not carry out any type of assessment. Most of the managers indicated during the interviews that they recognised the importance of assessment but had various reasons for not conducting any. These included a lack of time, the heavy workload of the centre staff, and, in centres where no advisory service was offered, the lack of direct contact with students that would have facilitated assessment. Some managers also reported that their institutions did not require them to carry out assessments, and that funding was not dependent on showing success in this way. Several managers said they were considering implementing assessment measures in the future.

Self-Assessment

Self-assessment was used in 18 of the 46 centres, and thus in 82% of the 22 centres where one or more forms of assessment were carried out. In some centres, students could choose to assess themselves; in others, this was done as part of the language course their self-access learning was attached to, or as part of the self-access centre’s individualised learning programme. Centres that had language-advising programmes tended to offer self-assessment as a voluntary option, except where a certificate was given or participation was credit-bearing. When self-assessment was not voluntary, records were usually kept as part of a portfolio.

Self-assessment was conducted in a wide variety of ways: through metacognitive questionnaires, assessment grids such as those used in the European Language Portfolio, entries in a learning diary, portfolios, and learning plans. Self-assessment was also carried out through interviews with a language advisor in which the advisor played a support role only, in small groups (usually in workshops), and through the evaluation of student presentations or learning products (for example, portfolios).

Collaborative Assessment with a Language Advisor

In 13 centres (59% of the 22 centres where some form of assessment took place), assessment was done collaboratively with a language advisor. Usually such collaborative assessment took place during an advisory session, but sometimes it was done in relation to a piece of work (for example, a presentation), and in some cases an assessment grid was used. Depending on the institution’s requirements, collaborative assessment was done either formally or informally, and was either summative or formative. In some cases, where self-access was formally credited, the results of such collaborative assessment could affect the student’s grade.

External Test or Examination

In nine centres (41% of the centres where some form of assessment took place), assessment was done through tests and/or external examinations. Tests done in the centre normally assessed students’ language level (in other words, not their learning skills or autonomy) and, interestingly, were often done through placement/diagnostic tests such as DIALANG[3]. Placement tests are of course not intended for the assessment of learning progress, but in several centres they were administered as exit tests and their scores compared with the initial scores. External examinations were normally official language examinations administered by testing institutions. In some cases, students’ performance in language courses (external to the SAC) was taken as an indication of progress.

Teacher Assessment

In seven cases (32%), assessment was done by a teacher; in these centres, self-access learning was an integrated component of a course. Depending on the course, the assessment might be more or less formal, and might include a focus on both language proficiency and learning skills. In general, teacher assessment was based on the teacher’s notes on students’ progress and a record of the students’ learning activities. Other sources of information included students’ diaries, portfolios, projects, and presentations.

Assessment by Language Advisors

In four cases (18%), assessment was done by language advisors. Such assessments could be more or less formal, and could be credited/graded or not. Some of these programmes offered a certificate, with the advisor deciding whether students had completed the required work satisfactorily by looking at their portfolios and diaries, and at the amount of work they had done (hours spent in the centre, number of workshops and advisory sessions attended, and so forth). In some cases the advisors focused on the learning process, in others more on language gains. Although the importance of self-assessment was generally acknowledged, some institutions with credit-bearing programmes relied on the advisor to decide the final grade, for fear of possible discrepancies between the advisor’s and the student’s assessments.

Peer Assessment

Peer assessment was also used in four centres (18%), and usually involved the use of peer-assessment questionnaires or forms. Such assessment was sometimes done through assessment grids from the ELP[2], or through error awareness activities (Esteve, 2002; Esteve et al., 2003). In these activities, students assess each other and, in doing so, are expected to notice errors more readily, which should help them avoid those errors in their own language production. Some centres mentioned that they were considering implementing peer assessment in their programmes.

Panel

In one of the centres, students were assessed through portfolios they kept for courses with integrated SALL (self-access language learning) components. The portfolios were assessed according to different criteria, such as the time spent and the depth of students’ reflections. Each portfolio was assessed by three people: the teacher of the course, a member of a dedicated institutional testing team, and a member of the self-access team.

Discussion

A surprising 24 of the 46 SACs did not carry out any type of assessment of their students’ progress, even though assessment is generally seen as important to self-access and as a key component of any teaching environment (Gardner & Miller, 1999). One explanation may lie in the difficulties, reported above, of assessing self-access learning. There are, at present, no standardised assessment procedures, which may prove a challenge, especially to smaller centres that may not have the resources to develop their own. Another explanation, offered by one of the interviewees, was a lack of student-staff contact:

As we don’t offer any advising service, we cannot do any follow-up of the learners’ progress. We just look at the materials they use, but mainly with the organisational aim of seeing what is needed. (interviewee response)

The complexity of autonomy was given as a reason by another participant:

Rather than formally assessing students’ learning we look at their planned learning activities. However, we do not follow up whether they actually complete their learning plans. This is an area that we have to work on more, step by step, as learner autonomy is such a complicated and multi-faceted concept. Assessment is probably an area that we have neglected as we work more on the development of learning strategies rather than on assessment. (interviewee response)

In the centres in which assessment was carried out, self-assessment was the preferred form. It is perhaps understandable that this type of assessment is favoured in a learning environment in which the learner takes centre stage: it serves not only to gauge progress but also to promote learners’ engagement with their own learning process, and reflecting on their learning and on their own role encourages learners’ autonomy. Gardner (2000) further mentions individualisation and motivation as advantages for students, and justification and accreditation as potential benefits for the institution.

Of course, there are also downsides to self-assessment, one being that it is not necessarily recognised as genuine or reliable. Gardner (2000) further points out that learners and teachers can be resistant to changing their roles, and that learners may not yet have acquired the skills needed for self-assessment. As mentioned earlier, some centres that give students credit for their self-access learning are hesitant to rely on self-assessments. Also, learners who have little or no experience of self-assessment may question its use and feel uncomfortable with the process. Perhaps this is one reason collaborative assessment with a language advisor is also a relatively popular form of assessment, as it places more control in the hands of the centre. Advisors can play an important role in helping learners move from complete reliance on external tests towards taking responsibility for their own assessment.

Although external examinations and tests may be standardised, and thus viewed as reliable ways of measuring progress, their validity in a self-access context needs to be questioned. Most tests focus on measuring language development, not on the development of learning skills, which in a self-access context are often considered equally important. There is also a danger that relying on external tests may undermine the philosophy of a centre in which learners are encouraged to take control of their learning and be autonomous: this type of testing takes away their control in the crucial area of assessment, especially when such work is credit-bearing. Nonetheless, external assessment can serve as confirmation for students of their own self-assessments; it can also give them a clear idea of their progress and thus offer practical guidance, which, in the self-access context with its limited support (compared to a classroom), may prove of great importance.

Peer assessment has the advantage that it helps students to examine language learning in progress critically and, through this, to understand their own learning better. Little (2003) highlights the importance of peer assessment for the development of self-direction: “The capacity for private reflection grows out of the practice of public, interactive reflection, and the capacity for self-assessment develops partly out of the experience of assessing and being assessed by others” (Little, 2003, p. 223). It also helps students to develop collaboration skills. For these reasons, it supports the underlying philosophy of self-access: to foster autonomy. It is not clear, however, to what extent peer assessment needs to be monitored or guided by a teacher or language advisor. A panel that includes one or more students could be one way of balancing the need for independence with the need for guidance, although this approach was not used in any of the centres in this study. Using a panel to assess students’ portfolios, as one centre did, is an attempt to develop more standardised assessment processes in the self-access context. It requires considerable human resources; nevertheless, this model could be one way for institutions to assess and perhaps credit learner progress.

In summary, the different assessment techniques used in the self-access context each have advantages and disadvantages. This may be why there is a tendency to combine different types of assessment within a single programme. Table 2 shows the number of assessment types used in each centre.

Table 2: Number of combined types of assessment in the SACs

Only two of the centres used a single type of assessment, whereas nine combined two types, seven combined three types, and four centres used four different types of assessment. In some cases, different assessment systems were used for different types of users. Depending on the specific characteristics and profile of each group of students (drop-in students, users of the counselling service, students in a course with an integrated self-access component, or students in individualised self-access programmes), one or more of a variety of assessment methods was implemented. Given the heterogeneous nature of self-access, combining diverse assessment systems may offer a wide range of information about the learning process. This enables assessment for the different purposes outlined by Gardner and Miller (1999, p. 207), such as strengthening a learner’s self-confidence, documenting learning progress, providing assurance and motivation, giving opportunities for reflection, and practising for official examinations.

Conclusion

This study has shown that more than half of the self-access centres investigated conduct no assessment of their learners’ progress. This is not a satisfactory state of affairs. Considerable amounts of money are spent on self-access learning, and learners are asked (in some contexts, required) to spend their time in a self-access centre when other options may be available. Having no evidence for the benefits of self-access language learning is not a sustainable option.

The study has also shown that self-assessment and collaborative assessment with a language advisor are the predominant types of assessment. This is probably because these encourage learners to think for themselves, which is in line with the philosophy of self-access language learning. Although such types of assessment hold promise and have a number of advantages in the self-access context, in that they give a more inclusive role to learners, their application seems haphazard. As far as we could ascertain, the instruments used for these assessments (and the others identified during the research) were not checked for validity or reliability. We intend to undertake further evaluative study of the assessment procedures used in self-access learning.

It was interesting that some of the approaches to assessment discussed in the literature review were not used. For example, Sinclair’s (1999) and Reinders and Cotterall’s (2001) interview questions for establishing learners’ level of metacognitive awareness were not used in any of the 46 centres, nor were Lai’s (2001) assessment scales.

This study has shown how assessment is conducted (or not), but one question that it has not answered is what is actually assessed. One would expect both language gains and learning gains (for example, learning skills and autonomy) to be the object of assessments. In some interviews, this was indeed mentioned, but as this was not a key question relating to the study, it was not investigated systematically. Future studies could look into this area and help better understand the practice of assessment in self-access and other flexible learning environments. Such information can then help develop better practices in future.

Notes

[1] This study has been conducted in part with funding from the Consejería de Educación de la Comunidad de Madrid, the European Social Fund, the DAAD (German Academic Exchange Service) and the "La Caixa" Foundation.

Collaborators in this study were Germán Ruipérez, J. Carlos García-Cabrero, and M. Dolores Castrillo (Universidad Nacional de Educación a Distancia, Spain).

[2] The ELP is the European Language Portfolio, a Council of Europe project that provides a document in which students can keep a record of their proficiency in different languages and that serves as a guide for lifelong language learning (http://www.coe.int/T/DG4/Portfolio/?L=E&M=/main_pages/introduction.html) (see also Schneider et al., 2001; Christ, 1999; Ali-Lawson et al., 2002; Perclová, 2000). There is also a digital version, the Electronic European Language Portfolio (http://eelp.gap.it).

[3] DIALANG (http://www.dialang.org/english/) is an online, learner-orientated diagnostic language assessment system that combines a placement test, self-assessment and an adaptive test covering five areas (reading, writing, listening, grammar and vocabulary) in 14 European languages (Luoma, 2004).

References

Ali-Lawson, D., Langsch-Brown, B., & Strahm-Armato, M. (2002). Developing learning strategies through portfolios. Babylonia: Zeitschrift für Sprachunterricht und Sprachenlernen, 2, 20-25.

Bardin, L. (1983). L'analyse de contenu des documents et des communications. [Content analysis of documents and communications.] Paris: PUF.

Benson, P. & Voller, P. (Eds.) (1997). Autonomy and independence in language learning. London: Longman.

Champagne, M.-F., Clayton, T. D. N., Laszewski, M., Savage, W., Shaw, J., Stroupe, R., Thein, M., & Walter, P. (2001). The assessment of learner autonomy and language learning. The AILA Review, 15, 45-55.

Christ, I. (1999). Das europäische Portfolio für Sprachen: Konzept und Funktionen [The European Language Portfolio: Concept and functions]. Babylonia: Zeitschrift für Sprachunterricht und Sprachenlernen, 1, 10-13.

Esteve, O. (2002). L'enfocament per tasques en l'ensenyament de llengües estrangeres com a pont d'unió entre el treball en autoaprenentatge i l'aprenentatge a l'aula. [Focus on tasks in the teaching of foreign languages as a link between self-instructed learning and learning in the classroom]. In VII Trobada de Centres d'Autoaprenentatge: autoaprenentatge, models d'integració dins i fora de l'aula (pp. 37-47). Barcelona: Generalitat de Catalunya, Departament de Cultura.

Esteve, O., Arumí, M., & Cañada, M. D. (2003). Hacia la autonomía del aprendiz en la enseñanza de lenguas extranjeras en el ámbito universitario: El enfoque por tareas como puente de unión entre el aprendizaje en el aula y el trabajo en autoaprendizaje. [Towards learner autonomy in foreign language education in the university arena: The task approach as a bridge between classroom learning and work in self-learning]. BELLS, 6. Available at http://www.publications.ub.es/bells/.

Farmer, R. (1994). The limits of learner independence in Hong Kong. In D. Gardner & L. Miller (eds.), Directions in self-access language learning (pp. 13-28). Hong Kong: Hong Kong University Press.


Gardner, D. & Miller, L. (1999). Establishing self-access. Cambridge: Cambridge University Press.

Gardner, D. (1999). The evaluation of self-access centres. In B. Morrison (Ed.), Experiments and evaluation in self-access language learning (pp. 1-122). Hong Kong: HASALD.

Gardner, D. (2000). Self-assessment for autonomous language learners. Links & Letters, 7: 49-60.

Holec, H. (1981). Autonomy and foreign language learning. Oxford: Pergamon Press.

Kinoshita Thomson, C. (1996). Self-assessment in self-directed learning: Issues of learner diversity. In R. Pemberton, E. Li, W. W. F. Or & H. D. Pierson (Eds.), Taking control: Autonomy in language learning (pp. 77-91). Hong Kong: Hong Kong University Press.

Lai, J. (2001). Towards an analytic approach to assessing learner autonomy. The AILA Review, 15, 34-44.

L’Ecuyer, R. (1990). Méthodologie de l’analyse développementale de contenu. [Methodology for developmental content analysis] Québec: Presses de l’université du Québec.

Little, D. (2003). Learner autonomy and public examinations. In D. Little, J. Ridley & E. Ushioda (Eds.), Learner autonomy in the foreign language classroom: Teacher, learner, curriculum and assessment (pp. 223-233). Dublin: Authentik.

Luoma, S. (2004). Self-assessment in DIALANG: An account of test development. In M. Milanovic & C. Weir (Eds.), European language testing in a global context. Cambridge: Cambridge University Press.

Morrison, B. (1999). Evaluating a self-access language learning centre: Why, what and by whom? In B. Morrison (Ed.), Experiments and evaluation in self-access language learning (pp. 123-135). Hong Kong: HASALD.

Morrison, B. (2006). Mapping a self-access language learning centre. In T. Lamb & H. Reinders (Eds.), Supporting independent learning: Issues and interventions. Frankfurt: Peter Lang.

Mynard, J. (2004). Investigating evidence of learner autonomy in a virtual EFL classroom: A grounded theory approach. In Proceedings of the Research in ELT Conference. Bangkok: King Mongkut's University of Technology Thonburi.

Mynard, J. (2006). Measuring learner autonomy: Can it be done? Independence, 37: 3-6.

Perclová, R. (2000). Learners’ self-assessment in the ELP: A tool for communication in education. Babylonia: Zeitschrift für Sprachunterricht und Sprachenlernen, 4, 50-51.

Reinders, H., & Cotterall, S. (2001). Language learners learning independently: How autonomous are they? Toegepaste Taalwetenschappen in Artikelen, 65(1), 85-97.

Schneider, G., North, B., & Koch, L. (2001). Europäisches Sprachenportfolio. [European Language Portfolio] Bern: Berner Lehrmittel- und Medienverlag.

Sinclair, B. (1999). More than an act of faith? Evaluating learner autonomy. In C. Kennedy (ed.), Innovation and best practice in British ELT. Harlow: Addison Wesley Longman.

Weir, C. & Roberts, J. (1994). Evaluation in ELT. Oxford: Blackwell.


© Copyright rests with authors. Please cite TESL-EJ appropriately.
