Equivalent Representations of Multi-Modal User Interfaces Through Parallel Rendering

Publication date: 2012-06-04

Author:

Van Hees, Kris

Abstract:

Even though the Graphical User Interface (GUI) has been in existence since 1974, and available for commercial and home use since 1984, blind users still face many obstacles when using computer systems with a GUI. Over the past few years, our daily life has become more and more infused with devices that feature this type of user interface (UI). This continuing trend increasingly impacts blind users, primarily because of the implied visual interaction model. Furthermore, the general availability of more flexible windowing systems such as the X Window System has increased the degree of complexity by providing software developers with a variety of graphical toolkits to use for their applications. Alternatives to the graphical user interface are not exclusively beneficial to the blind: daily life offers various situations in which presenting the UI in a different modality may be a benefit. After all, a disability is a condition that imposes constraints on daily life, and often those same constraints are imposed by environmental influences.

Current approaches to providing alternate representations of a user interface tend to obtain information from the default (typically visual) representation, utilising a combination of data capture, graphical toolkit hooks, queries to the application, and scripting. Other research explores adapted user interface development or context-based runtime UI adaptation based on user and environment models. All suffer from inherent limitations because they provide alternate representations as a derivative of the default representation, either as an external observer or as an adapted UI.

Based on the original design principles for graphical user interfaces, this work shows that the original design can be generalised: a GUI is essentially the visualisation of a much broader concept, the Metaphorical User Interface (MUI). Expanding upon this MUI, a new definition is provided for "Graphical User Interface".

The well-known paradigm of providing access to GUIs rather than to graphical screens has been very influential in the development of assistive technology solutions for computer systems. Validation for this paradigm is presented here, and based on the MUI concept, the focus of accessibility is shifted to the conceptual model, showing that access should be provided to the underlying MUI rather than to the visual representation.

Building further on the MUI concept, and on past and current research in Human-Computer Interaction (HCI) and multimodal interfaces, a novel approach to providing multimodal representations of the user interface is presented in which alternative renderings are provided in parallel with the visual rendering rather than as a derivative thereof: Parallel User Interface Rendering (PUIR). By leveraging an abstract user interface (AUI) description, both visual and non-visual renderings are provided as representations of the same UI. This approach ensures that all information about UI elements (including semantic information and functionality) is available to all rendering agents, eliminating problems such as the need for heuristics to link labels to input fields, or elements that cannot be detected at all.

Within the PUIR framework, user interaction semantics are defined at the abstract level, thereby ensuring consistency across input modalities. Input devices may be tightly coupled to specific renderings (e.g. a pointer device in a bitmap rendering), but all user interaction by means of such a device maps to abstract semantic events that are processed independently of any rendering. The novel approach presented in this work offers an extensible framework in which support for new interaction objects can be included dynamically, avoiding the all-too-common frustration of waiting for assistive technology updates that might incorporate support for the new objects.

The PUIR approach can contribute to the fields of HCI and accessibility well beyond the immediate goal of providing non-visual representations of GUIs. Because UI rendering and user interaction are abstracted, additional rendering agents and support for additional input modalities can be provided to accommodate the needs of other disability groups. The use of an underlying AUI-based processing engine also ensures that a diverse group of users can collaborate using a similar mental interaction model regardless of the rendering they use. The PUIR framework is also capable of supporting (accessible) remote access to applications, and the presented work may benefit automated application testing as well, by providing a means to interact with an application programmatically.
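The central idea of PUIR lends itself to a compact illustration. The Python listing below is a minimal, hypothetical sketch of the architecture described in the abstract, not the actual PUIR implementation: a single AUI description, two rendering agents (visual and speech) operating on it in parallel, and device input mapped to abstract semantic events that are dispatched independently of any rendering. All names (AUIElement, PUIRCore, RenderingAgent, and so on) are invented for illustration.

    """Hypothetical sketch of Parallel User Interface Rendering (PUIR):
    one abstract UI (AUI) description, several renderers in parallel,
    and device input mapped to abstract semantic events."""
    from dataclasses import dataclass, field

    @dataclass
    class AUIElement:
        """An abstract UI element: semantics, not pixels."""
        role: str      # e.g. "button", "text-input"
        label: str     # label linked explicitly, so no heuristics are needed
        elem_id: str
        value: str = ""

    class RenderingAgent:
        """Base class for agents that render the same AUI in parallel."""
        def render(self, elements): ...
        def notify(self, event): ...

    class VisualAgent(RenderingAgent):
        def render(self, elements):
            for e in elements:
                print(f"[visual] draw {e.role} '{e.label}'")
        def notify(self, event):
            print(f"[visual] redraw after {event}")

    class SpeechAgent(RenderingAgent):
        def render(self, elements):
            for e in elements:
                print(f"[speech] announce: {e.label}, {e.role}")
        def notify(self, event):
            print(f"[speech] announce: {event}")

    @dataclass
    class PUIRCore:
        """Holds the AUI and dispatches abstract semantic events."""
        elements: list = field(default_factory=list)
        agents: list = field(default_factory=list)

        def render_all(self):
            # Every agent renders the *same* AUI description, so no
            # rendering is a derivative of another rendering.
            for agent in self.agents:
                agent.render(self.elements)

        def dispatch(self, event):
            # A device-specific action (mouse click, keypress, speech
            # command) has already been mapped to an abstract semantic
            # event such as ("activate", "btn-ok"); it is processed
            # independently of any rendering, and all agents are notified.
            for agent in self.agents:
                agent.notify(event)

    if __name__ == "__main__":
        core = PUIRCore(
            elements=[
                AUIElement("text-input", "Name", "input-name"),
                AUIElement("button", "OK", "btn-ok"),
            ],
            agents=[VisualAgent(), SpeechAgent()],
        )
        core.render_all()
        # A pointer click in the bitmap rendering and an Enter keypress in
        # a braille or speech rendering both map to the same abstract event:
        core.dispatch(("activate", "btn-ok"))

A real implementation would couple the visual agent to an actual graphical toolkit and the non-visual agents to speech or braille output, but the coupling point sketched here, a shared AUI plus rendering-independent event dispatch, is the essence of the parallel-rendering approach.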