Opin vísindi

Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP)

Title: Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP)
Author: Vilbergsdóttir, Sigurbjörg Gróa
Hvannberg, Ebba Thora
Law, Effie Lai-Chong
Date: 2014-01
Language: English
Pages: 18-37
University/Institute: Háskóli Íslands
University of Iceland
School: Verkfræði- og náttúruvísindasvið (HÍ)
School of Engineering and Natural Sciences (UI)
Department: Iðnaðarverkfræði-, vélaverkfræði- og tölvunarfræðideild (HÍ)
Faculty of Industrial Eng., Mechanical Eng. and Computer Science (UI)
Series: Journal of Systems and Software;87
ISSN: 0164-1212
1873-1228 (eISSN)
DOI: 10.1016/j.jss.2013.08.014
Subject: Usability problems; Defect classification; Validity; Upplýsingatækni (Information technology); Hugbúnaður (Software)
URI: https://hdl.handle.net/20.500.11815/957


Citation:

Vilbergsdottir, S. G., Hvannberg, E. T., & Law, E. L.-C. (2014). Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP). Journal of Systems and Software, 87, 18-37. https://doi.org/10.1016/j.jss.2013.08.014

Abstract:

The aim of this study was to evaluate the Classification of Usability Problems (CUP) scheme. The goal of CUP is to classify usability problems (UP) further in order to give user interface developers better feedback, improve their understanding of usability problems, help them manage usability maintenance, enable them to find effective fixes for UP, and prevent such problems from recurring in the future. First, reliability was evaluated with raters of different levels of expertise and experience in using CUP. Second, acceptability was assessed with a questionnaire. Third, validity was assessed by developers in two field studies. An analytical comparison was also made to three other classification schemes. The CUP reliability results indicated that the expertise and experience of raters are critical factors for assessing reliability consistently, especially for the more complex attributes. The validity analysis showed that tools used by developers must be tailored to their working framework, knowledge and maturity. The acceptability study showed that practitioners are concerned with the effort spent in applying any tool. To understand developers' work and the implications of this study, two theories for understanding and prioritising UP are presented. For applying classification schemes, the implications of this study are that training and context are needed.
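The abstract does not state which agreement statistic was used to quantify inter-rater reliability when several raters applied CUP to the same usability problems; a common choice for such categorical classification tasks is Cohen's kappa. The sketch below is a minimal illustration only, assuming two raters' labels for a single CUP attribute are available as Python lists; the attribute values and rater data shown are hypothetical, not taken from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty ratings"
    n = len(rater_a)

    # Observed agreement: proportion of items both raters labelled identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

    if p_expected == 1.0:  # degenerate case: both raters used a single shared label
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels assigned by two raters for one classification attribute.
rater_1 = ["missing", "wrong", "wrong", "extraneous", "missing", "wrong"]
rater_2 = ["missing", "wrong", "missing", "extraneous", "missing", "wrong"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.74
```

In practice, the more complex CUP attributes mentioned in the abstract would each be assessed separately in this way, since agreement can vary considerably between attributes.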

Description:

Post-print (authors' final version)
