
Using Corpora in Contrastive and Translation Studies (5th edition) (UCCTS 2018), Date: 2018/09/12 - 2018/09/14, Location: Louvain-la-Neuve (Belgium)

Publication date: 2018-09-14
Pages: 186 - 187

Author:

Verplaetse, Heidi; van Egdom, Gys-Walt; Schrijver, Iris; Segers, Winibert; Kockaert, Hendrik; van Santen, Fedde

Keywords:

translation assessment, translation process, preselected items evaluation (PIE), translation competence, learning progress, target text quality, translation style

Abstract:

At a time when competence research was rapidly gaining momentum in Translation Studies, Adab emphasised that '[i]n the context of developing translation competence, one of the questions to be considered is that of how to evaluate the target text, as product of the process' (2000, p. 215, emphasis added). It seems safe to say that few scholars would have been inclined to challenge this claim at that time. Even today, the claim would be considered little more than a trite observation. In translator training, the level of competence of trainee translators is often gauged and assessed through translation exercises in which the trainee is instructed to produce a target text. Trainees are believed to have reached a certain level of competence, or to have acquired a competence, if the quality of that text comes up to standard. We argue that, by accepting and adopting this view, namely that translation quality is the ultimate basis on which to ground an evaluative judgment on the competence levels of (aspiring) translators, one inevitably sidesteps a few crucial and incontestably thorny issues. First, one should note that, from a purely professional point of view, translation is considered both a collaborative activity (in which several language professionals, and sometimes even domain specialists and clients, work toward a common goal) and a service (ISO 17100, 2015; cf. EMT Expert Group 2009, 2017). By taking the translation of an individual student as the point of departure in the assessment of translation competence, one overlooks the collaborative and service-related aspects of translation. The second argument against an uncritical equation of translation quality and translation/translator competence relies on the idea that, even if an all too narrow definition of translation competence were to be applied, subcompetences ought to be tested in a reliable and valid manner.
This means that, in order for translation competence to be measured, translator trainers should be able to identify items in the target text that attest to translation behaviour in line with very specific "can-do" statements that have been proven (beyond doubt) to be indicative of very specific subcompetences, without the possibility that the same item attests to behaviour directly or indirectly associated with other "can-do" statements or other subcompetences. With translation competence models geared to a translation market in constant flux, prospects do not look auspicious for the measurement of translation/translator competence on the meager basis of textual material alone. By far the most important objection that can be made against the assessment of trainee translator competence through translation product evaluation is the plain fact that the correlation between the two has never been scientifically tested or corroborated, even though gut feeling tells us that translation quality reflects translation/translator competence. The research project presented in this paper aims to be a first audacious step toward establishing the strength (or weakness) of that correlation. In 2018, trainee translators at different institutions (Zuyd University of Applied Sciences / ITV University of Applied Sciences, KU Leuven and University of Antwerp) were asked to produce a Dutch translation of an English, a French or a Spanish source text, with Inputlog (keystroke logging software) running in the background. Different groups of students with different levels of translation experience, from first-year bachelor students to fourth-year master students, translated texts of around 340 words on the topic of health economics. This yielded a diversified student translator corpus.
The current corpus consists of approximately 50,000 words translated by 148 students from bachelor programmes in Translation and in Applied Linguistics and the Master in Translation at the abovementioned institutions. The subcorpora per training level currently represent 16.2%, 10.1%, 54.1% and 19.6% of the full corpus for 1st-, 2nd-, 3rd- and 4th-year students respectively. The third bachelor year represents a pivotal time for many students in these programmes, so this subcorpus may serve as a developmental benchmark. Students from the different study years translating from the same source language translated the same source text. All students' translation activity was logged in idfx files (using Inputlog), which provide data on students' translation behaviour (cf. below). Upon completion of the assignment, the target texts were uploaded to the translation revision and evaluation platform translationQ. In the first stage of the project, the quality of the translations was evaluated. The method of choice was the preselected items evaluation (PIE) method, which ensures a speedy and highly reliable evaluation of translation quality, especially when employed within the translationQ environment (cf. Kockaert & Segers 2014, 2017; Van Egdom et al. forthcoming). Once the evaluation was completed, a selection was made of translations in the top and bottom groups in terms of quality. In the second stage of the project, the processes underlying the selected translations were closely examined with a view to distinguishing translation styles (behavioural patterns) that testify to a trainee translator's competence or (relative) incompetence.
Particular attention was paid to differences in revision behaviour (number of deletions, substitutions and additions, position in the target text and time in process), use of external sources (number, time spent, time in process, distribution over process), production fluency (ratio of the number of characters produced to total process time and to the number of characters in the final product) and pausing behaviour (number, duration, location, distribution over process). In this paper, the focus is on learning progress as attested for a diversified learner translator corpus by the keystroke logging software. As the data (viz. target texts and process data) gathered for this project were produced by students in various stages of training, one of the aims of this study was to find out whether learning progress could be observed in the (quality of the) translated text and in the translation behaviour of trainees.

References

Beeby, A. (2000). Evaluating the Development of Translation Competence. In C. Schäffner & B. Adab (Eds.), Developing translation competence (pp. 185–198). Amsterdam: John Benjamins.

EMT Expert Group (2009). Competences for professional translators, experts in multilingual and multimedia communication. Available: https://ec.europa.eu/info/resources-partners/european-masters-translation-emt/european-masters-translation-emt-explained_en

EMT Expert Group (2017). EMT competence framework. Manuscript.

ISO 17100 (2015). Translation services: Requirements for translation services. Geneva: International Organization for Standardization.

Kockaert, H., & Segers, W. (2014). Evaluation de la traduction : La méthode PIE (Preselected Items Evaluation). Turjuman, 23(2), 232-250.

Kockaert, H., & Segers, W. (2017). Evaluation of legal translations: PIE method (Preselected Items Evaluation). Journal of Specialised Translation, 27, 148-163.

Van Egdom, G., Verplaetse, H., Schrijver, I., Kockaert, H., Segers, W., Pauwels, J., Wylin, B., & Bloemen, H. (forthcoming). How to put the Translation Test to the Test? On Preselected Items Evaluation and Perturbation. In E. Huertas Barros, S. Vandepitte & E. Iglesias Fernández (Eds.), Quality Assurance and Assessment Practices in Translation and Interpreting. Hershey, PA: IGI Global.