Social Intelligence Design (SID), 4th edition, Stanford (California), 24-26 March 2005
This paper addresses the possibility of measuring perceived usability in an absolute way. It studies the impact of the nature of the tasks performed during perceived software usability evaluation, using for this purpose the subjective evaluation of an application's performance via the Software Usability Measurement Inventory (SUMI). The paper reports on a post-hoc analysis of data from a productivity study testing the effect of changes in the graphical user interface (GUI) of a market-leading drafting application. Even though one would expect similar usability evaluations for identical releases of an application, the analysis reveals that the outcome of this subjective appreciation is context sensitive and therefore mediated by the research design. Our study uncovered a significant interaction between the nature of the tasks used for the usability evaluation and how users rated the application's performance. This interaction challenges the concept of absolute benchmarking in subjective usability evaluation, which some software evaluation methods aspire to provide, since subjective measurement of software quality will most likely be affected by the nature of the testing materials used for the evaluation.