Title: Leveraging crowdsourced data for the automatic generation of feedback in written dialogue tasks
Authors: Cornillie, Frederik; Lagatie, Ruben
Issue Date: 2012
Conference: CALICO 2012: Open Education: Resources and Design for Language Learning; pre-conference workshop on ICALL, University of Notre Dame, Indiana, 12 June 2012
Abstract: This presentation will report on the ongoing development and evaluation of a sentence matching algorithm which is used to generate corrective feedback in a tutorial CALL system on the basis of crowdsourced data.
First, we will present the case within which the algorithm will be evaluated. The case concerns an online application in which learners play the role of a detective and gather clues by formulating (written) responses in scripted dialogue tasks. These tasks focus on a number of specific grammatical problems in English, but since the unit of response is at the level of the utterance, many alternative responses are possible. Learning support is available as feedback through metalinguistic prompts and model responses. The learners’ responses are logged by the system, and are subsequently evaluated by peers, as a form of educational crowdsourcing.
Next, we will present a review of existing methods for analysing learner output and generating metalinguistic feedback in similar (half-)open tasks. Most state-of-the-art algorithms use some form of robust parsing together with a wide variety of language-dependent linguistic resources, such as lexicons and grammars (e.g. Heift, 2003; Nagata, 2002; Schulze, 1999; Dodigovic, 2005). Such techniques make it possible to detect linguistic errors without having a correction at hand, but they are language-dependent, often hard to construct, and not foolproof (Dodigovic, 2005; Fowler, 2006).
Finally, we will describe an alternative approach that uses simpler techniques, is less language-dependent, and can address a wider range of target-language errors. The proposed algorithm leverages the crowdsourced data and uses approximate string matching, POS tagging, and lemmatisation in order to
a) detect similarities and differences between the student's response and (correct and incorrect) alternatives, and
b) provide metalinguistic feedback for a number of grammatical problems.
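The matching step described above can be sketched as follows. This is a minimal illustration using Python's standard-library difflib, not the authors' implementation: the function name, the data layout (alternatives mapped to a correctness flag), and the token-level diff output are all assumptions, and the actual algorithm additionally uses POS tagging and lemmatisation.

```python
import difflib


def match_response(response, alternatives):
    """Find the crowdsourced alternative closest to the learner's response.

    `alternatives` maps candidate sentences to a correctness flag
    (True = correct alternative, False = attested incorrect alternative).
    Returns the best-matching alternative, its correctness flag, and the
    token-level differences between response and match, which a feedback
    module could map to metalinguistic prompts.
    """
    # Approximate string matching: pick the alternative with the
    # highest character-level similarity ratio to the response.
    best, best_ratio = None, 0.0
    for alt in alternatives:
        ratio = difflib.SequenceMatcher(None, response.lower(), alt.lower()).ratio()
        if ratio > best_ratio:
            best, best_ratio = alt, ratio
    # Keep only the inserted (+) and deleted (-) tokens from the
    # word-level diff; these localise the learner's deviation.
    diff = [tok for tok in difflib.ndiff(response.split(), best.split())
            if tok[0] in "+-"]
    return best, alternatives[best], diff
```

For example, the learner response "She have seen him yesterday" would match the stored incorrect alternative "She has seen him yesterday" (rather than the correct "She saw him yesterday"), and the diff ("- have", "+ has") points at the subject–verb agreement problem for which feedback could then be generated.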
Dodigovic, M. (2005). Artificial intelligence in second language learning: Raising error awareness. Clevedon: Multilingual Matters.
Fowler, A. M. L. (2006). Logging student answer data in CALL exercises to gauge feedback efficacy. In J. Colpaert, W. Decoo, S. Van Bueren, & A. Godfroid (Eds.), CALL & monitoring the learner: Proceedings of the 12th International CALL Conference (pp. 83-91). Antwerp: Universiteit Antwerpen.
Heift, T. (2003). Multiple learner errors and meaningful feedback: A challenge for ICALL systems. CALICO Journal, 20(3), 533-548.
Nagata, N. (2002). BANZAI: An application of natural language processing to web-based language learning. CALICO Journal, 19(3), 583-599.
Schulze, M. (1999). From the developer to the learner: Describing grammar – learning grammar. ReCALL, 11(1), 117-124.
Publication status: published
KU Leuven publication type: IMa
Appears in Collections: Computer Science, Campus Kulak Kortrijk; Faculty of Arts, Campus Kulak Kortrijk