Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features
Koen van Turnhout, Jacques Terken, Ilse Bakx, Berry Eggen
ICMI '05: Proceedings of the 7th International Conference on Multimodal Interfaces, Trento, Italy, 4-6 October 2005, pages 175-182
Against the background of developments in the area of speech-based and multimodal interfaces, we present research on determining the addressee of an utterance in the context of mixed human-human and multimodal human-computer interaction. Working with data taken from realistic scenarios, we explore several features with respect to their relevance to the question of who is the addressee of an utterance: the eye gaze of both speaker and listener, dialogue history, and utterance length. With respect to eye gaze, we inspect the detailed timing of shifts in eye gaze between different communication partners (human or computer). We show that these features yield improved classification of utterances in terms of addressee-hood relative to a simple baseline that assumes "the addressee is where the eye is", and we compare our results to alternative approaches.
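For illustration only, the features named in the abstract could be combined in a simple rule-based classifier and contrasted with the "the addressee is where the eye is" baseline. The feature names, thresholds, and scoring logic below are hypothetical assumptions, not the authors' actual method; it is a minimal sketch of the general idea.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    # Hypothetical features loosely following the abstract; values are illustrative.
    onset_gaze: str           # where the speaker looks at utterance onset ("computer" or "human")
    gaze_at_computer: float   # fraction of the utterance the speaker gazes at the computer
    prev_addressee: str       # addressee of the previous utterance (dialogue history)
    n_words: int              # utterance length in words

def baseline(u: Utterance) -> str:
    """Naive rule: 'the addressee is where the eye is' (here: gaze at onset)."""
    return u.onset_gaze

def classify(u: Utterance) -> str:
    """Hypothetical multi-feature rule combining gaze, history, and length."""
    score = 0.0
    score += 2.0 if u.gaze_at_computer > 0.5 else -2.0        # dominant gaze direction
    score += 1.0 if u.prev_addressee == "computer" else -1.0  # dialogue history
    score += 1.0 if u.n_words <= 4 else -1.0                  # short commands tend to target the system
    return "computer" if score > 0 else "human"

# A case where the two rules disagree: the speaker glances at the human partner
# at onset, but overall gaze, history, and length all point to the computer.
u = Utterance(onset_gaze="human", gaze_at_computer=0.6,
              prev_addressee="computer", n_words=3)
print(baseline(u))   # "human"
print(classify(u))   # "computer"
```

The point of the sketch is only that momentary gaze alone can mislabel an utterance, whereas combining gaze timing with dialogue history and utterance length can correct such cases, which is the kind of improvement the paper reports.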