Title: HEAR: an Hybrid Episodic-Abstract speech Recognizer
Authors: Demange, Sébastien
Van Compernolle, Dirk #
Issue Date: 2009
Host Document: Interspeech 2009: 10th annual conference of the international speech communication association 2009 pages:3067-3070
Conference: Interspeech2009 - 10th annual conference of the international speech communication association location:Brighton, UK date:6-10 September 2009
Abstract: This paper presents a new architecture for automatic continuous speech recognition called HEAR - Hybrid Episodic-Abstract speech Recognizer. HEAR relies on both parametric speech models (HMMs) and episodic memory. We propose an evaluation on the Wall Street Journal corpus, a standard continuous speech recognition task, and compare the results with a state-of-the-art HMM baseline. HEAR is shown to be a viable and competitive architecture. While HMMs have been studied and optimized for decades, their performance appears to be converging to a limit that remains below human performance. In contrast, episodic memory modeling for speech recognition, as applied in HEAR, offers the flexibility to enrich the recognizer with information that HMMs lack. This opportunity, as well as future work, is presented in a discussion.
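The abstract only names the hybrid design; as a rough illustration of what "episodic-abstract" could mean in practice, the minimal sketch below interpolates an abstract (HMM-style) score with an episodic (nearest-exemplar) score. This is not the method described in the paper: the diagonal-Gaussian state model, the exemplar distance, the function names, and the weight alpha are all illustrative assumptions.

    # Hypothetical sketch, not the paper's algorithm: one generic way to combine
    # abstract (HMM) and episodic evidence into a single acoustic score.
    import numpy as np

    def hmm_log_likelihood(frames, means, variances):
        """Abstract model: diagonal-Gaussian log-likelihood, best state per frame.
        frames: (T, D), means/variances: (S, D)."""
        diff = frames[:, None, :] - means[None, :, :]                      # (T, S, D)
        ll = -0.5 * np.sum(diff ** 2 / variances + np.log(2 * np.pi * variances), axis=-1)
        return float(np.max(ll, axis=1).sum())                             # sum over frames

    def episodic_score(frames, exemplars):
        """Episodic memory: negative distance to the closest stored exemplar frame.
        exemplars: (E, D)."""
        d = np.linalg.norm(frames[:, None, :] - exemplars[None, :, :], axis=-1)  # (T, E)
        return float(-d.min(axis=1).sum())

    def hybrid_score(frames, means, variances, exemplars, alpha=0.7):
        """Linear interpolation of abstract and episodic scores (alpha is an assumed weight)."""
        return (alpha * hmm_log_likelihood(frames, means, variances)
                + (1 - alpha) * episodic_score(frames, exemplars))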
Description: Demange S., Van Compernolle D., ''HEAR : an Hybrid Episodic-Abstract speech Recognizer'', Proceedings Interspeech2009 - 10th annual conference of the international speech communication association, pp. 3067-3070, September 6-10, 2009, Brighton, UK.
Publication status: published
KU Leuven publication type: IC
Appears in Collections:ESAT - PSI, Processing Speech and Images
# (joint) last author

Files in This Item:
File: 2901.pdf
Status: Published
Size: 86Kb
Format: Adobe PDF

These files are only available to some KU Leuven Association staff members
