Author: Iaroslavschi, Maria
Date published: June 1, 2011
Assessment and, more importantly, self-assessment are indispensable to all interpreters concerned with the quality of their own work and are doubtlessly essential to trainee-interpreters who need to actively acquire and improve on their interpreting skills.
During course classes, trainers are able to accurately point out students' mistakes and indicate pertinent solutions to various technique issues. Course trainers also actively shape students' assessment and self-assessment skills by introducing them to interpreting assessment patterns from the very beginning of the program. These patterns, ever more demanding and strict with the passage of time, are supposed to be adopted and applied by trainees not only in class but also during untutored practice sessions for the sake of efficiency and relevance. Does that really happen? How are practice sessions really perceived by conference interpreting students? How often do they meet to practice? Do they give meaningful feedback to each other? Are they able to take criticism and put it to constructive use if it comes not from course trainers but from their peers? What value do they really attach to peer-feedback? Are they aware of "deliberate practice" strategies? Do they use them?
This paper strives to answer all of these questions by means of a survey submitted to thirty students enrolled, in May 2010, in six different conference interpreting programs, all of them members of the European Masters in Conference Interpreting (EMCI) consortium: École de traduction et d'interprétation Geneva, École Supérieure d'Interprètes et de Traducteurs Paris, Conference Interpreting Techniques Westminster, Master's in Conference Interpreting La Laguna, Master's in Conference Interpreting Copenhagen and the European Master's in Conference Interpreting Cluj.
Through a close analysis of survey results, we identified several potentially dangerous issues encountered by students during their practice sessions: decreased efficiency, a tendency to give irrelevant feedback, occasional "hurt feelings" on account of "unduly" delivered feedback, as well as the lower credibility attached to feedback coming from fellow students. A possible solution to most of these problems might be the use of a standardized feedback form: the assessment and self-assessment grids. Their role is to maximize feedback objectivity and to allow students to accurately track their own progress throughout their entire training.
Quality assessment in conference interpreting
Conference interpreting is eminently a means of ensuring communication between the speaker and the recipients of the message. We could therefore conclude that, from this point of view, "quality is measured by the success of this act" (Collados & Gile, 2005). The best indicator of the quality of an interpretation is the fact that it allows people to understand each other (Donovan, 1990).
As highlighted by interpreting scholars (e.g., Collados & Gile, 2005), comprehension may, for various reasons, be achieved even when the quality of the interpretation is mediocre.
Clearly, this can by no means become a premise in conference interpreting training programs, where the utmost care is given to preparing good professionals who remain permanently concerned with the accuracy of their own work. However, it is equally true that there are numerous "methodological difficulties inherent in studying the elusive concept of quality. There are few tools available aside from evaluation surveys, which focus on different goals and variables, thus making it hard to compare results." (AIIC, 1995).
Nonetheless, in order to train competent professionals and to ensure high and constant quality standards, various quality assessment grids have been devised over time by interpreters and interpreter trainers.
Initially, interpreter-teachers relied largely on intuition and were occasionally self-complacent toward the "miracle" of interpretation, which largely blurred early theoretical research on interpretation. This went hand in hand with a set of practical rules and precepts for the learning process, such as "interpret ideas not words, finish sentences, etc." Later on, the first attempts at analytical models were made (AIIC, 2000).
The first such analytical models, based on empirical study, were published by Hildegund Bühler and Ingrid Kurz in 1986 and 1989 respectively. Similar assessment grids now form the basis of most interpreting training techniques. Among the first parameters of quality assessment taught to students in all conference interpreting programs are: the "sense of consistency with the message", "logical cohesion of the utterance", "correct grammatical usage", "completeness of interpretation", "fluency of delivery" (Gile, 1995).
The main shortcoming of such quality assessment techniques, as pointed out by scholars (Gile, 1991; Pöchhacker, 1994), is what appears to be a limitation of the field of interpretation theory itself:
"(...) the criteria used in their ranking studies by Bühler (1986) and Kurz (1989, 1992), (...) indicate that there is a general consensus within the interpreting community on the quality standards for professional interpretation. We seem to know what the product should be like, but we are less sure about a method for establishing what a particular product is like in a given situation. Quite obviously, researchers, teachers and trainees need a method for looking at the product." (Pöchhacker, 1994, p. 235).
All in all, the quality of the interpretation can be perceived as a subjectively weighted sum of a number of components: "the fidelity of the target-language speech, the quality of the interpreter's linguistic output, the quality of his or her voice, the prosodic characteristics of his or her delivery, the quality of his or her terminological usage, all of them as perceived by the assessor." (Gile, 1995)
Furthermore, knowing one's target audience and understanding its expectations should at times also be taken into account in quality assessment: "not only do speakers have a different attitude from listeners (the former tolerate a greater degree of "intervention" from interpreters to correct minor errors; the latter seem to prefer literal respect for the speakers' words and even his mistakes), but different listeners in the same situation may have different expectations." (AIIC, 2000).
It is nonetheless also true that "functionalism" does not truly factor in during conference interpreting training courses, where students are taught to abide by the highest standards of quality keeping in mind "the end product" rather than "the end customer". And indeed, "though the process-oriented method has the theoretical edge over the product-oriented method, the former appears to be difficult to carry out in practice." (Badiu, 2008, p. 6).
Moreover, according to Moser-Mercer (1996, p. 62) it is understandable for student interpreters to use different strategies from intermediate or advanced learners. In fact, the numerous differences between students and professionals indicate a need to develop a different evaluation framework for the former that goes beyond judging them by their strict performance and branches out to equally assess the learning process (Choi, 2006).
Assessment in conference interpreting training courses
Though the terms "assessment" and "evaluation" are frequently seen as interchangeable, the following distinction ought to be made: assessment is a general term that includes all processes and products and judges the learning process and the student's performance (Lefrançois, 2000), while "evaluation" means making value judgments about the effectiveness of teaching as a whole, which usually occurs after an assessment has been made (Child, 2004, p. 361).
According to D. Child (2004) there are four types of assessment, each of them used in accordance with the purpose assessors mean to serve.
1. Pre-task assessment: This type of assessment is meant to discover the level of knowledge and skills of students before the beginning of the actual learning process. Consequently course trainers will be able to better assess the level of their class and adjust their methods accordingly.
2. Formative assessment: This type of assessment is used by course trainers in order to assess the level of progress made by their students in terms of both knowledge and skills. In this case, trainers mean to optimize feedback by increasing the students' awareness on their own stronger and weaker points while also indicating ways of improvement.
3. Diagnostic assessment: Diagnostic assessment is primarily meant to pinpoint the origins of the various difficulties with which students appear to be struggling at some point in their training. This type of assessment is meant to help students overcome their issues (and reinforce their strengths) and mostly occurs during the period of "formative assessment".
4. Summative assessment: This type of assessment usually occurs towards the end of the course and is mostly a product-oriented approach. Consequently, its purpose is neither to analyze difficulties nor to provide subsequent feedback to students; rather, it serves as useful information helping teachers and employers measure the students' learning results.
As to the type of feedback students give each other, we may safely assume that it generally mirrors the feedback pattern provided by course trainers. However, as D. Gile concludes, following an experiment meant to verify students' accuracy in assessing consecutive interpretations:
"(students) were found not to be reliable error detectors; not only did they detect on average much less than half of the interpreter's errors, but there were also many "false positives" and "false negatives" in their assessments. [...] A possible explanation would be the following: during training, students are exposed to constant criticism of their interpretation, and know that most of them will never graduate. From classroom experience, this often leads to two types of reactions: fierce competition and hyper-critical attitudes on the one hand, and solidarity and very lenient judgment on the other" (Gile, 1995)
Quite clearly, not all students possess adequate feedback-giving techniques, which might explain certain assessment issues put forward by D. Gile. To this effect, increasing students' awareness of correct feedback patterns and providing them with useful assessment and self-assessment tools seem particularly important, as students need to be able to offer relevant feedback to their peers not only during class hours but also during untutored practice sessions. Indeed, the acquisition of interpreting skills by trainees requires not only professional guidance during classes, but also extensive practice outside these hours (Ericsson et al., 1993; Ericsson, 2000; Moser-Mercer, 2003) and "in reality, trainee conference interpreters rely heavily on group practice and feedback from peers." (Badiu, 2008, p. 4).
Moreover, students need to be able to constantly set practice goals for themselves, given the fact that "deliberate practice" (Ericsson, 2000) appears to be essential in the progression of the trainee. (Badiu, 2008, p. 4).
One method of enhancing students' problem reporting skills is a metacognitive tool, as illustrated by J. Harmer (1996) and I. Badiu (2008): the practice journal. The main objective of this exercise is to get students to become "accustomed to regular, active review of their performance" (Harmer, 1996, p. 11).
Starting from the very encouraging experience we had with practice journals during our two-year training as conference interpreters, and supported in our endeavor by Prof. Badiu, we devised a complementary assessment tool, suitable for students' untutored practice sessions and meant to increase the relevance and the efficiency of practice meetings: the assessment and self-assessment grids, detailed below.
Assessment in untutored practice sessions. A survey-based analysis
About the survey: The second part of this paper is meant to analyze the way in which interpreting students enrolled in six different conference interpreting programs, all members of the European Masters in Conference Interpreting consortium, perceive the overall efficiency of their respective practice sessions.
By 'practice session' we understand any work meeting between peers, outside class hours, without trainer supervision and regardless of the premises in which it takes place, with the aim of the encounter being the enhancement of interpreting skills.
Individual and group practice is highly recommended or even compulsory in most EMCI courses (Badiu, 2008, p. 17), the official indication being that every official class interpreting hour per language per week ought to be matched by a corresponding practice hour.
However, in none of the six participating EMCI programs are practice sessions monitored by a course trainer, and only a few indicate a time slot for students' practice meetings on their official timetables.
As mentioned, our survey† was conducted in six different EMCI programs:
* Master's in Conference Interpreting, Copenhagen Business School;
* European Master's in Conference Interpreting, Cluj-Napoca;
* Conference Interpreting Techniques, Westminster, London;
* Master's in Conference Interpreting, La Laguna;
* École de traduction et d'interprétation, Geneva;
* and École Supérieure d'Interprètes et de Traducteurs, Paris.
In order to discover the practice habits of the thirty students who agreed to participate in our research, we conceived a ten-question survey focusing primarily on the way in which peers assess each other's work as well as on the quality of the feedback they provide for each other when not in the presence of a class trainer. The fundamental questions we wished to answer were:
* What is the periodicity of group practice encounters and of individual training?
* How are practice sessions perceived in terms of efficiency?
* Do peers provide useful feedback to each other?
* Do students try to spare each other's feelings by avoiding making negative comments?
* Do they take offense at their peers' sometimes bluntly expressed remarks?
* Do practice sessions play a role in boosting students' confidence in their own abilities?
Below, we shall detail the main findings of our survey while also trying to offer a series of answers to the above questions.
Frequency and duration
To inquire, first of all, into the participants' practice habits, we asked them to indicate the average number of hours they allotted to practice sessions as well as the number of their weekly meetings.
We found that the majority of respondents (52%) took part in one or two practice sessions each week; 16% gathered three to four times a week; another 16% practiced more than four times a week; while 8% declared they never participated in group practice sessions outside class hours.
Similarly, we found that 43% of participants granted practice sessions 5 to 6 hours each week, 33% met for a total of 1 to 4 weekly hours while 16% stated they spent more than 8 hours a week building up their interpreting skills.
A detailed analysis of results indicated that practice patterns vary greatly from school to school, ranging from as many as four sessions and over 8 hours of practice a week ("B"‡) to one session totaling 2 hours a week ("F"). Similarly, students from "A" and "D" allotted 5 to 8 weekly hours to practice sessions (two to four sessions), while students enrolled in program "E" in 2010 practiced once or twice every week for a total of 1 to 4 hours.
As regards individual practice, results indicate that 64% of participants also chose to work on their interpreting skills individually and on a regular basis, 33% declared that they sometimes did so, while 3% never practiced on their own.
This suggests that, in all probability, most students believe group practice sessions are not enough to help them reach the envisaged level of competence, so they also put in a certain amount of individual effort to make up for probable shortcomings.
Concerning individual practice, students enrolled in program "F" unanimously stated that they practiced on their own on a weekly basis, clearly favoring this form of skill-building over group meetings.
In this same respect, 60% of students from "A" practiced on their own on a regular basis while 40% declared they only did it "once in a while". Similarly, 80% of students in program "B" practiced at home each week - while also summing up the highest number of group practice hours per week of all six participating programs.
Similar numbers were reported by students in program "E": 80% declared that they practiced regularly at home and 20% admitted to doing so only sometimes.
Students in program "C" showed a higher degree of variation with regard to individual practice: 40% favored regular individual practice, 40% practiced on their own only at times, while 20% never used this training method.
As for students in program "D", 80% admitted to only sometimes practicing at home while 20% declared that they regularly trained on their own.
We may thus conclude that, perhaps not entirely coincidentally, students who reported the lowest amount of group practice hours (namely, students in programs "E" and "F") were also more willing to do compensatory individual work. We cannot comment on the overall efficiency of these two practice methods, but we believe it would be very interesting to establish a link between practice rates (group and individual) and exam success ratios.
Efficiency of group practice sessions
Another question in our survey was meant to determine the efficiency of practice sessions when compared to class hours, as perceived by the students themselves.
In this respect, most of our respondents (60%) agree that practice sessions are less efficient than class hours while 40% believe that they are equally efficient.
Efficiency, seen as the ratio between the perceived "relevance" of a certain activity and the amount of time that this activity requires (Ef = R/T), can be enhanced by more careful planning of sessions as well as by a more orderly feedback-giving method such as the assessment grids presented later in this paper.
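As an aside, the ratio above can be made concrete with a short numerical sketch. The relevance scores, the 0-10 scale, and the session durations below are invented purely for illustration and do not come from the survey:

```python
# Illustrative sketch of the efficiency ratio Ef = R / T, where R is a
# perceived-relevance score for a practice session (here on an assumed
# 0-10 scale) and T is the session's duration in hours.

def efficiency(relevance: float, hours: float) -> float:
    """Return the efficiency ratio Ef = R / T."""
    if hours <= 0:
        raise ValueError("session duration must be positive")
    return relevance / hours

# Two invented sessions with the same perceived relevance: the shorter,
# better-planned session comes out twice as efficient.
focused = efficiency(relevance=8.0, hours=2.0)       # Ef = 4.0
unstructured = efficiency(relevance=8.0, hours=4.0)  # Ef = 2.0
print(focused, unstructured)
```

The point of the sketch is simply that, for equal perceived relevance, efficiency rises as wasted session time falls, which is what better planning and a more orderly feedback method aim to achieve.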
A detailed analysis of results is shown in the table below:
Quite understandably, courses such as "B" where students reported high contentment rates as to the efficiency of their off-school practice sessions also reported the highest periodicity of meetings. As for the other programs, measures to increase the efficiency of practice sessions should perhaps be taken into consideration.
The relevance of peer-feedback
One of the main objectives of our survey was to learn how conference interpreting students perceive the feedback they receive from each other during unsupervised practice sessions. In this respect, our findings indicate that 73% of participants were pleased with the quality of the feedback they received from peers, while only 27% believed that peer-feedback tended to be less relevant than the feedback provided by course trainers. Perhaps not surprisingly, none of our respondents believed peer-feedback to be more relevant than trainer-feedback, and no one seemed to think that the feedback received from peers was completely irrelevant.
As mentioned, feedback techniques are learned in class from course trainers, who use various methods to encourage students to make pertinent remarks and give useful advice to one another. Judging by these results, we can only infer that in at least 73% of cases feedback techniques have been successfully mastered by trainees. Below are, once more, the results by category of respondents.
Yet again, students from "B" came in first in terms of overall satisfaction with the quality of the feedback provided by their colleagues during practice sessions, all of them declaring peer-feedback to be as relevant as course-trainer feedback. Students from program "D" unanimously agreed while, at the opposite end, only 20% of participants from program "A" seemed to share the same opinion, most of them believing that peer-feedback is less relevant.
Students from "C" and "F" shared an overall positive opinion of the relevance of peer-feedback, with 80% of them believing that, in general, the remarks made by their peers were as pertinent as the ones put forth by class trainers. Lastly, while most of our respondents from program "E" were rather pleased with the feedback they received from peers, they seemed more divided on the matter (60% to 40%).
The survey also asked respondents who declared peer-feedback to be "less relevant than trainer feedback" or "completely irrelevant" to justify their choices. Here are some of the reasons we received (justifying the lesser relevance of peer-feedback):
* They're not teachers... (not experienced) - "A"
* Teachers are able to single out your flaws. - "A"
* More generous; less specific; over focused on accuracy issues; not tough enough on language issues. - "C"
* Being professional interpreters themselves, most of our teachers are more aware of details and relevant techniques. - "E"
* Feedback from colleagues is to a certain extent less relevant since trainers are more experienced and thus better able to spot problem areas and point out what must be corrected. - "E"
* Sometimes classmates just don't dare to give relevant feedback. - "F"
Quite clearly, one of the main reasons why certain students are displeased with the quality of their peers' feedback is the latter's lack of experience and insufficient grasp of "relevant [interpreting] techniques". Another recurrent complaint had to do with students' inability to "single out" various issues and their lack of awareness of certain others. Moreover, according to a respondent from "F", students sometimes don't "dare" to give each other enough feedback, probably wishing to avoid conflicts with their peers. This, however, is a sign of immature practice behavior that is likely to seriously harm the efficiency of group practice and should doubtlessly be eliminated.
A very interesting comment made by a student from "C" also points out that peers tend to leave out certain mistakes (perhaps the ones likely to bother) or, on the contrary, are too concerned with "accuracy" issues. The same student mentions that he would appreciate more emphasis being placed on feedback regarding "language" issues.
On a positive note, a student from "E" points out that peers sometimes give a "different" kind of feedback from the one received in class. Indeed, given that trainers spend a limited amount of time with their students, they might sometimes disregard certain issues, perhaps tagging them as "asymptomatic", while peers share a better knowledge of one another and may be able to spot other, possibly less apparent, problems.
Giving negative feedback
Subsequently, we wished to determine the participants' approach when it comes to giving negative feedback. We believe we may safely assume these answers apply to the way in which respondents expect to be given feedback for their own performances.
In this respect, we found that when providing negative feedback, 34% of respondents admitted to "sugarcoating" their comments a bit, 23% only mentioned the most important errors on the principle that "if it's not symptomatic, it doesn't matter", an equal percentage "pointed out all mistakes" but "very politely", and 10% preferred to give a complete "review of mistakes" in a perfectly "neutral tone".
As indicated in the table above, in program "A" 60% of the participating students admitted to sometimes "sugarcoat" their feedback a bit - while 40% preferred to only point out mistakes that were or might have become symptomatic.
In program "B", most students (80%) also chose a milder way of expressing their opinions. Meanwhile, 40% of the students from "C" preferred the "very polite review of mistakes", 20% favored the "neutral review", and an equal percentage only pinpointed the main errors.
Similarly, most of the students from "D" (60%) only signaled their peers' main errors while 40% preferred a complete review of mistakes delivered in a perfectly neutral tone of voice.
Most of our participants from program "E" (60%) declared that they gave peers a "very polite review" of their mistakes, and one student actually admitted that she adapts her feedback to the "receiver": "it depends on how well you know each other, some people need to be more "sugarcoated" than others".
Students from "F" were less homogeneous in their approach to giving negative feedback: 40% opted for the "sugarcoated" way, 20% gave a full review of mistakes in absolute politeness, and 20% only pinpointed the main errors.
As observed, results indicate a general tendency among students to bear each other's feelings in mind when giving feedback. A safe assumption would therefore be that, due to this "guarded" approach, the "message" sometimes does not get across. We believe this is yet another argument for adopting a more objective method of giving feedback during practice sessions.
"Holding grudges": Another closely related issue concerns the way in which negative feedback from peers is perceived. Given the importance of quality assurance in conference interpreting, receiving constant feedback for one's performance is of crucial importance. Thus, in theory at least, feedback should be welcomed at all times. However, as we are indeed dealing with the sensitive issue of the "human ego", we thought that the matter deserved special attention.
Results show that 67% of our respondents never felt any resentment toward a colleague because of a negative comment addressed to them. This suggests that the students' approach to feedback is generally healthy and constructive.
Nonetheless, 24% of participants admitted that when they did occasionally hold "a grudge" against one of their colleagues, it wasn't on account of the remark per se, but because of how "it was made".
Lastly, 3% declared they were most bothered by the "sarcasm" they occasionally perceived in some of their colleagues' remarks, and an equal percentage admitted to often having had their feelings hurt by their peers on account of the feedback they were given.
As can be observed, students from "E" unanimously declared never to have taken a colleague's comment personally, the same being true for 80% of the students in programs "A" and "B" and 60% of the interpreting trainees in programs "C" and "F". On the other hand, 20% of participants from program "A" admitted to having had their feelings hurt by one of their colleagues' comments at some point, "not because of what was said" but because of "the way" in which it was said, this being the situation for 80% of the students in program "D" and for 40% of those in program "F". Similarly, 20% of our respondents from "B" admitted that they occasionally took offense at what they perceived as a rude comment, while two students from program "C" made the following comments:
- I did, as regards a comment made to someone else - but more about the way it was said.
- Some people are not good at giving feedback. I don't hold a grudge but I tend to resist their comments...
Such remarks might indicate a deeper problem. Even though students are encouraged by class trainers to give feedback according to a certain pattern (content issues first: completeness of message, accuracy, additions, distortions, omissions; presentation issues afterwards), trainees demonstrably lose track of these recommendations at times, coming to stress elements of lesser importance while disregarding core problems.
Self-confidence and positive feedback
Given that, as previously shown, students tend to be rather cautious about the way in which they give feedback to one another, a question that logically ensues concerns the importance of self-confidence as perceived by conference interpreting trainees.
Quite clearly, the great majority of participants agree that self-assurance is indispensable in conference interpreting (67%) and that if it does not come naturally, one must endeavor to acquire it (33%).
Here are some of the remarks made by students in various programs:
"B": It's just one important element but nothing more.
"C": [it's important to have self-assurance] but never at the cost of inaccuracy OR taking up too much place. A constant and complex balancing act...
"D": It is (important) but only accompanied by competent, correct info.
"E": I think it can be quite important, depending on the situation; at least you need to be able to stay calm and cool.
"F": It's very important because it makes a speech credible.
A recurrent observation, as seen, refers to the one sacrifice that should by no means be made for the sake of presentation: accuracy. Indeed, regardless of the situation, the interpreter should be able to remain calm and clear-headed. All in all, as one student from program "C" very accurately pointed out, what the interpreter should be able to achieve is a constant balance between the two.
Undeniably, the best self-confidence booster for conference interpreting trainees is receiving positive feedback from course trainers. Another question in our survey was meant to discover whether positive feedback from fellow students has comparable effects and if not, for what reasons.
We therefore asked respondents to remember (or to imagine) a situation in which, after a certain period during which course-trainer feedback had been rather negative, they started to receive positive feedback during practice sessions.
Results show that positive feedback from colleagues does appear to please, albeit with "moderation": 77% of our respondents admitted that positive feedback from peers boosted their confidence a bit, but not enough to compensate for the negative feedback received from course trainers. Nonetheless, 17% of students admitted to feeling much better after receiving positive comments from their peers, while only 3% declared that positive feedback from colleagues was unable to boost their self-confidence, as for them "only trainer-feedback matters".
Results by category of respondents are roughly the same for all six programs, with almost all participants admitting that positive feedback from peers does improve their self-confidence, albeit in moderation, as course trainers' opinions always come first. Finding a solution to increase students' trust in each other's feedback is perhaps a matter worth further thought.
Deliberate practice is, according to experts in psychology (Ericsson, 2000) and to conference interpreting trainers, one of the best ways of enhancing interpreting skills. Students are thus advised to set "goals" for themselves before every practice session, in relation to the various aspects of their technique that need improving.
As shown below, only 23% of our respondents always set aims for themselves before practice sessions, thereby using the technique of "deliberate practice". Meanwhile, 67% use the technique but only occasionally, while 10% of participants admit to never having set a particular target before practicing.
If most participants do establish specific goals for themselves before their practice sessions, be it only from time to time, it would perhaps also be interesting to inquire further whether they do so due to specific instructions from course trainers or because they discovered the benefits of this particular technique on their own. Accordingly, measures to enhance metacognition might be suggested and encouraged.
The survey presented above allowed us to conclude that indeed, students generally allot a significant amount of time to practice sessions during their training as conference interpreters. The overall efficiency of practice sessions is generally acclaimed and so is the relevance of the feedback received from peers. Students are, for the most part, able to provide pertinent feedback to one another and are mature enough to accept criticism. Most of our respondents also practice on their own and many of them set goals for themselves during practice sessions, deliberately working on their various interpreting skills that need improving.
However, we also took note of the fact that there is plenty of room for improvement. Students tend to agree that, time-wise, practice sessions aren't efficient enough and that, occasionally, their peers' feedback is deficient in terms of problem-spotting. Moreover, a significant number of students admit to having had their feelings hurt at some point by their peers' comments and report that, as a rule, the latter's observations have a lesser impact on them than comments advanced by course trainers.
We strongly believe that all these issues can and should be remedied. As previously mentioned, a possible solution, applicable to consecutive interpretation, could be the use of "assessment grids".
An assessment grid is a standardized assessment form, conceived according to quality requirements set forth by Bühler (1986) and Kurz (1989), namely: logical cohesion of utterance, completeness of interpretation, fluency of delivery, correct grammatical usage etc.
Assessment grids ought to be filled in by both the assessor(s) and the interpreter as they are meant to be used not only as a self-assessment tool but also as an evolution tracker. In what follows, we shall try to explain the usage as well as advantages of these assessment grids that we have already tested during our own practice sessions at MEIC Cluj.
Assessment and self-assessment grids (for consecutive interpretation)
Traditional feedback-giving may sometimes prove to be quite a lengthy process, often surpassing the original speech in terms of duration, becoming a potential threat to the overall efficiency of the practice session. In this respect, a form of standardized evaluation might considerably reduce the amount of time dedicated to assessment and could also improve the quality of peer-feedback by compelling all assessors to evaluate the same parameters. Furthermore, both the relevance and the credibility of peer-feedback are substantially increased by the usage of assessment and self-assessment grids (Appendix 1 and 2) as they render the entire feedback process more objective and orderly.
Another notable advantage is the fact that students get to collect all assessment grids filled in by their peers for each one of their consecutive interpretations. This allows interpreting trainees to keep track of their progress in a more distinct manner, as the grid asks evaluators to consistently grade performances for both content and presentation. The fact that trainees fill in a self-assessment grid after each of their performances is meant to allow them, in the long run, to diagnose and remedy potential interpreting issues and to become less self-complacent and more objective in self-assessment. This grade-based assessment model could also address a common complaint among student-interpreters: that they are truly evaluated only in exam situations and that, as a consequence, most of the time they don't really know where they stand in terms of grading.
Moreover, the forms filled in and submitted by peers should enable the student interpreter to spot and subsequently tackle potentially recurring issues, e.g. a tendency to add information, a penchant for omissions or distortions, terminological inaccuracies etc. For instance, if peers graded one's terminology as "poor" for three different speeches on financial topics, one might consider extending one's reading in this particular field.
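To make this idea concrete, the kind of recurring-issue detection described above can be sketched programmatically. The following is a minimal illustration only: the criteria names, the 1-5 scale, and the `recurring_weaknesses` function are assumptions for the sake of the example, not the actual MEIC Cluj grid format.

```python
# Hypothetical sketch: aggregating peer assessment grids collected across
# several speeches in order to flag criteria that are repeatedly rated low.
# Criteria loosely follow Bühler (1986) / Kurz (1989); scale is 1 (poor) to 5.
from collections import defaultdict

# Each filled-in grid: one peer's ratings for one speech.
grids = [
    {"speech": "finance-1", "cohesion": 4, "completeness": 3, "fluency": 4, "terminology": 1},
    {"speech": "finance-2", "cohesion": 4, "completeness": 4, "fluency": 3, "terminology": 2},
    {"speech": "finance-3", "cohesion": 5, "completeness": 4, "fluency": 4, "terminology": 1},
]

def recurring_weaknesses(grids, threshold=2, min_occurrences=3):
    """Return criteria rated at or below `threshold` in at least `min_occurrences` grids."""
    low_counts = defaultdict(int)
    for grid in grids:
        for criterion, score in grid.items():
            if criterion != "speech" and score <= threshold:
                low_counts[criterion] += 1
    return [c for c, n in low_counts.items() if n >= min_occurrences]

print(recurring_weaknesses(grids))  # ['terminology']
```

Here, "terminology" is flagged because it scored "poor" in all three financial speeches, mirroring the paper's example of a student who should then extend their reading in that field.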
Lastly, assessment-grids also help in setting "deliberate goals" for oneself thereby promoting all demonstrated benefits of deliberate practice.
As previously mentioned, the assessment and self-assessment grids that we conceived have already been tested by the class of 2010 at MEIC Cluj and they are currently being used by students enrolled in the first year of MA studies. So far results are pleasing and rather promising. More precisely, we were able to observe:
* A surprising uniformity of assessment: speech difficulty is consistently assessed unanimously, without previous discussion between peers. This indicates a certain uniformity of assessment criteria, resulting in more objective feedback.
* A variation of only +/-5% between peers in their respective assessments.
* The grids promote metacognition: students tend to use the "remarks" section of their self-assessment grids to write sometimes very short but illustrative personal points of view on their own performance: "I think I did a good job, I should be a bit more confident in the future".
* The self-assessment grids reveal a common tendency in most students to be somewhat indulgent toward themselves. For instance, below is a comparison between two assessment grids and one self-assessment grid for a speech delivered during a group practice session:
As observed from the above table, the author of the consecutive interpretation is consistently more generous towards her own performance than her two peer-assessors. In fact, this seems to be the general rule for the time being, and such analyses may be a good indicator of a more serious issue: the lack of proper self-assessment.
Practice sessions play a significant role in the training of conference interpreting students. Most of them spend at least four hours each week engaged in such activities and as shown, they are relatively pleased with the overall efficiency of the time spent together.
However, certain shortcomings of practice meetings must also be taken into consideration: peer-feedback is not always relevant enough in terms of content whereas the way in which it is presented is reported to sometimes offend.
A solution to all of these problems might be a standardized, more objective form of assessment such as the "assessment-grids", practice tools devised and tested at MEIC Cluj and described above.
Further experience with assessment grids is certainly necessary in order to collect more extensive user feedback. However, the encouraging results obtained so far make us confident that assessment and self-assessment grids can considerably increase the efficiency of practice sessions as well as the relevance of peer-feedback, while also allowing students to constantly monitor their own evolution and become more self-aware and better prepared as conference interpreters.
[dagger] Given the relatively small number of conference interpreting trainees enrolled in such programs per year, our aim was to obtain five responses from each course, representing a minimum of 25% of their total number of students.
[double dagger] For the sake of anonymity, and keeping in mind that the survey was conducted on a single class (2010) of conference interpreting students and that practice patterns may vary from generation to generation, we decided that no individual reference to the above master's programs would be made in the analysis of results. Instead, each of them was randomly assigned a letter from A to F, representing it throughout the analysis.
Ackermann, D., Lenk, H., & Redmond, M. (1997). Between Three Stools: Performance Assessment in Interpreter Training. In E. Fleischmann, W. Kutz, & P. Schmitt (Eds.), Translationsdidaktik. Beiträge der VI. Internationalen Konferenz zu Grundfragen der Übersetzungswissenschaft, (pp. 262-267). Leipzig, Tübingen.
AIIC (2000). Thoughts on the quality of interpretation. Retrieved from http://www.aiic.net/ViewPage.cfm/page197
Baaring, I. (2001). Assessing presentation skills in interpreting trainings and exams. Meta: Translators' Journal, 46, 365-378.
Badiu, I. (2008 - unpublished work). Paradigmatic shift: journaling and tutoring to support interpreting learners; seminar paper - ETI, Université de Genève.
Bühler, H. (1986). Linguistic (semantic) and extralinguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters. Multilingua, 5(2), 231-235.
Child, D. (2004). Psychology and the Teacher. London: Continuum.
Choi, J. Y. (2006). Metacognitive evaluation method in consecutive interpretation for novice learners. Theories and Practices of Translation and Interpretation in Korea, 51(2), 273-283.
Donovan, C. (1990). La fidélité en interprétation. PhD thesis, ESIT, Université Paris III (through Gile, 1995).
Ericsson, A. (2000). Expertise in interpreting. An expert-performance perspective. Interpreting, 5(2), 187-220.
Gile, D. (1983). Aspects méthodologiques de l'évaluation de la qualité du travail en interprétation simultanée. Meta: Translators' Journal, 28(3), 236-243.
Gile, D. (1990). L'évaluation de la qualité de l'interprétation par les délégués: une étude de cas. The Interpreters' Newsletter, 3, 66-71.
Gile, D. (1995). Fidelity Assessment in Consecutive Interpretation: An Experiment. Target, 7(1), 151-164.
Gile, D. (2001). L'évaluation de la qualité de l'interprétation en cours de formation, Meta: Translators' Journal, 46(2), 379-393.
Harmer, J. (1996). Dialogue Journals - a Pedagogical Tool for Training Interpreters: Introducing Novice Consecutive Interpreters to Journal-writing. Monterey Institute of International Studies Graduate School of Translation and Interpretation / University of Geneva, ETI. Training of Trainers Course, 31.
Kurz, I. (1989). Conference Interpreting - user expectations. In D. L. Hammond (Ed.), Coming of age: Proceedings of the 30th Conference of the ATA (pp. 143-148). Medford, NJ.
Kurz, I. (1993). Conference interpretation: expectations of different user groups, The Interpreters' Newsletter, 5, 13-21.
Lefrançois, G. R. (2000). Psychology for Teaching (10th ed.). Wadsworth.
Moser-Mercer, B. (1996). Quality in Interpreting: Some Methodological Issues. The Interpreters' Newsletter, 7.
Moser-Mercer, B., Künzli, A., & Korac, M. (1998). Prolonged turns in interpreting: Effects on quality, physiological and psychological stress (Pilot study). Interpreting 3(1), 47-64.
Pöchhacker, F. (2001). Quality assessment in conference and community interpreting. Meta: Translators' Journal, 46(2), 410-425.
Pöchhacker, F. (1994). Quality assurance in simultaneous interpreting. In C., Dollerup, & A., Lindegaard, (Eds.), Teaching Translation and Interpreting, (pp. 233-242). Amsterdam/Philadelphia: John Benjamins.
Maria IAROSLAVSCHI *
Department of Applied Modern Languages, Babes-Bolyai University,
* Corresponding author: