
A Usability Assistant for the Heuristic Evaluation of Interactive Systems

Costin PRIBEANU
National Institute for Research and Development in Informatics – ICI Bucharest
8-10, Averescu Avenue, Bucharest 1, Romania

Abstract: The increasing demand for usable interactive systems in the context of limited project budgets brings to the fore the need for faster and cheaper evaluation methods. Heuristic evaluation is an inspection method that has proved to be cost-effective. Typically, the method involves a small number of evaluators who test the interactive system against a set of usability principles called heuristics. One way to increase the efficiency of usability evaluation methods is to provide evaluators with software tools that assist in documenting and recording usability problems. This paper presents a software assistant for usability evaluation that provides various facilities for conducting a heuristic evaluation: definition of the task set, specification of the heuristics used, and documentation of usability problems. In order to support the specific requirements of a target application domain, a set of usability guidelines detailing the heuristics can be specified. These guidelines can be consulted during the identification and specification of usability problems. In this way, a broader range of evaluator preferences and requirements can be accommodated.

Keywords: Usability, heuristic evaluation, usability evaluation assistant, software tools, tools for working with guidelines.

Costin Pribeanu is currently a Senior Researcher at the National Institute for Research and Development in Informatics – ICI Bucharest. He received his PhD degree in Economic Informatics in 1997 from the Academy of Economic Studies in Bucharest, Romania. His research interests include usability evaluation and design of interactive systems, with a focus on task analysis, task-based design of user interfaces, design patterns, and guidelines-based evaluation. He is the author of several scientific papers on user interface design and evaluation. He is an editorial board member of the Springer HCI Book Series and has served as a program committee member / reviewer for several major conferences in this domain. Costin Pribeanu served as Chair of RoCHI (ACM SIGCHI Romania) from April 2001 to June 2009.

CITE THIS PAPER AS:
Costin PRIBEANU, A Usability Assistant for the Heuristic Evaluation of Interactive Systems, Studies in Informatics and Control, ISSN 1220-1766, vol. 18 (4), pp. 355-362, 2009.

1. Introduction

The increasing demand for usable interactive systems in the context of limited project budgets and strict deadlines puts extra pressure on evaluators and designers. This reveals the need for faster and cheaper evaluation methods.

Depending on its purpose and on the moment when it is carried out, usability evaluation can be formative or summative (Scriven, 1991). Formative usability evaluation is performed in an iterative development cycle and aims at finding and fixing usability problems as early as possible (Theofanos and Quesenbery, 2005). The sooner these problems are identified, the less costly they are to fix.

Formative usability evaluation can be carried out by conducting an expert-based usability inspection and / or by conducting user testing with a small number of users. In the latter case, the evaluation is said to be user-centered, as opposed to expert-based formative evaluation.

Heuristic evaluation is an inspection method that typically involves a small number of evaluators who test the interactive system against a set of usability principles called heuristics. The method has proved to be cost-effective and is widely used in the usability practitioners' community (by 76% of respondents, according to the UPA Survey, 2005).

Heuristic evaluation provides two kinds of measures: quantitative (the number of usability problems per severity level) and qualitative (detailed descriptions of individual usability problems).
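
As a purely illustrative aside (not part of the tool described in this paper), the following minimal Python sketch shows how the quantitative summary can be derived from a set of recorded problems, while the qualitative measure is the set of detailed descriptions themselves; the record fields and the 0-4 severity scale (after Nielsen, 1993) are assumptions:

    # Illustrative sketch: deriving the quantitative measure (problems per
    # severity level) from recorded usability problems. The record fields
    # and the 0-4 severity scale are assumptions, not the tool's actual model.
    from collections import Counter

    problems = [
        {"id": 1, "severity": 3, "description": "No feedback after submitting the order form"},
        {"id": 2, "severity": 3, "description": "Error message does not suggest a recovery action"},
        {"id": 3, "severity": 1, "description": "Inconsistent button labels across dialogs"},
    ]

    # Quantitative measure: number of usability problems per severity level.
    per_severity = Counter(p["severity"] for p in problems)
    for level in sorted(per_severity, reverse=True):
        print(f"severity {level}: {per_severity[level]} problem(s)")

    # Qualitative measure: the detailed problem descriptions themselves.
    for p in problems:
        print(f"#{p['id']} (severity {p['severity']}): {p['description']}")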

The quality of usability problem descriptions is critical for the usefulness of a usability report. On the other hand, properly describing each usability problem takes considerable work. One way to increase the efficiency of any evaluation method is to provide evaluators with suitable tools that assist them during the evaluation process. As Hvannberg et al. (2007) have shown, such problem registration tools not only improve the immediate management of usability problems but also support structured usability problem reporting.

This paper presents a software tool for usability evaluation that provides several facilities for conducting a heuristic evaluation: definition of the task set, specification of the heuristics used, and structured description of usability problems. In order to better support evaluators' expertise, a set of usability guidelines detailing the heuristics can be specified. This facility is also useful for fulfilling the specific requirements of a target application domain.
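
Since the paper does not publish the tool's internal data model, the following Python sketch is only a hypothetical illustration of how the facilities listed above could be related; all class and field names are assumptions:

    # Hypothetical data-model sketch for the facilities named above: tasks,
    # heuristics, guidelines detailing a heuristic, and structured usability
    # problem records. All names and fields are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Task:
        id: str
        description: str            # task performed by the evaluator during inspection

    @dataclass
    class Heuristic:
        id: str
        statement: str              # e.g. "Visibility of system status"

    @dataclass
    class Guideline:
        id: str
        heuristic_id: str           # each guideline details one heuristic
        text: str                   # domain-specific recommendation

    @dataclass
    class UsabilityProblem:
        id: str
        task_id: str                # where the problem was observed
        heuristic_id: str           # which heuristic is violated
        severity: int               # 0 (not a problem) .. 4 (usability catastrophe)
        description: str            # structured textual description
        guideline_id: Optional[str] = None   # guideline consulted, if any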

The rest of this paper is organized as follows. Related work in usability research is briefly presented in the next section, with a focus on usability problem extraction, matching, and reporting. The software tool is presented in Section 3. The paper ends with conclusions and future work in Section 4.

4. Conclusion and Future Work

In this paper we presented an improved version of a usability assistant that provides various facilities for conducting a heuristic evaluation: definition of the task set, specification of the heuristics used, and documentation of usability problems.

This tool has been developed based on previous experience with heuristic evaluation and guidelines-based evaluation. As such, it integrates features from two different kinds of tools.

By enabling the specification of usability guidelines, it is possible to accommodate both individual preferences for a particular set of heuristics and the specific requirements of a target interactive system.

The integration of two kinds of inspection methods benefits evaluators in at least three ways. First, the usability assistant is more versatile, since it is not confined to a single evaluation method. Second, its rich functionality provides more assistance during the evaluation process. Third, the tool supports a better understanding of usability knowledge through the relationships between ergonomic criteria, heuristics, and guidelines. As such, it is useful for training novice evaluators.

We intend to further develop this tool by adding functions to support problem matching. In this respect, the next step is to support the individual filtering of usability problems.
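
Purely as an illustration of the idea (problem matching is future work here, and the paper does not specify an algorithm), a naive filter could treat two reports as the same problem when they violate the same heuristic and their descriptions overlap strongly; the word-overlap (Jaccard) measure and the 0.5 threshold below are assumptions:

    # Illustrative sketch only: a naive duplicate filter for usability
    # problem reports. The similarity measure and threshold are assumptions.
    def jaccard(a: str, b: str) -> float:
        """Word-overlap similarity between two problem descriptions."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def is_duplicate(p, q, threshold=0.5):
        # Two reports are treated as one problem when they violate the same
        # heuristic and their descriptions overlap strongly.
        return (p["heuristic_id"] == q["heuristic_id"]
                and jaccard(p["description"], q["description"]) >= threshold)

    def filter_duplicates(problems):
        kept = []
        for p in problems:
            if not any(is_duplicate(p, q) for q in kept):
                kept.append(p)
        return kept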

Acknowledgement

This work was supported by a research project funded by ANCS under no. 503/2009.

References:

  1. Bach, C., D. L. Scapin, Adaptation of Ergonomic Criteria to Human-Virtual Environments Interactions. Proceedings of Interact’03. IOS Press, 2003, pp. 880-883.
  2. Barbu, D. C., C. Pribeanu, A Tool for Working with Guidelines for e-commerce Web Sites. Proceedings of IE 2009 International Conference, Bucharest, pp. 169-174.
  3. Bastien, J. M. C., D. L. Scapin, Evaluating a User Interface with Ergonomic Criteria. INRIA Report, Rocquencourt, 1993.
  4. Capra, M., Comparing Usability Problem Identification and Description by Practitioners and Students. Human Factors and Ergonomics Society Annual Meeting Proceedings, Computer Systems (5), 2007, pp. 474-478.
  5. Cockton, G., A. Woolrych, Understanding Inspection Methods: Lessons from an Assessment of Heuristic Evaluation. Blandford, A., Vanderdonckt, J., Gray, P.D. (Eds.), Proceedings of People and Computers XV. Springer-Verlag, 2001, pp. 171-182.
  6. Hartson, H. R., T. S. Andre, R. C. Williges, Criteria for Evaluating Usability Evaluation Methods. International Journal of Human-Computer Interaction 13, 2001, pp. 373-410.
  7. Hornbaek, K., Current Practice in Measuring Usability: Challenges to Usability Studies and Research. International Journal of Human Computer Studies, 64, 2006, pp. 79-102.
  8. Hornbaek, K., E. Frokjaer, Comparison of Techniques for Matching of Usability Problem Descriptions. Interacting with Computers 20, 2008, pp. 505-514.
  9. Hvannberg, E. T., E. L.-C. Law, Classification of Usability Problems (CUP) Scheme, Proceedings of Interact, 2003.
  10. Hvannberg, E. T., E. L.-C. Law, M. C. Larusdottir, Heuristic Evaluation: Comparing Ways of Finding and Reporting Usability Problems, Interacting with Computers 19, 2007, pp. 225-240.
  11. Iordache, D. D., C. Pribeanu, Comparison of Quantitative and Qualitative Data from a Formative Usability Evaluation of an Augmented Reality Learning Scenario, Informatica Economică Journal, 13(3), 2009, pp. 67-74.
  12. ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality – Part 1: Quality Model.
  13. Law, E. L.-C., E. T. Hvannberg, Complementarities and Convergence of Heuristic Evaluation and Usability Test: A Case Study of UNIVERSAL Brokerage Platform. Proceedings of the NordiCHI 2002 Conference, ACM, 2002, pp. 71-79.
  14. Law, E. L.-C., M. C. Lárusdóttir, M. Norgaard (Eds.), Downstream Utility 2007: The Good, the Bad, and the Utterly Useless Usability Evaluation Feedback, IRIT Press, Toulouse, France, November 6, 2007.
  15. Molich, R., J. Nielsen, Improving a Human-computer Dialogue, Communications of the ACM 33(3), 1990, pp. 338-348.
  16. Nielsen, J., Usability Engineering. Academic Press, New York, 1993.
  17. Nielsen, J., Heuristic Evaluation. Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods, John Wiley & Sons, New York, 1994.
  18. Paternò, F., C. Mancini, S. Meniconi, ConcurTaskTrees: A Diagrammatic Notation for Specifying Task Models. Proceedings of the IFIP TC 13 International Conference on Human-Computer Interaction (Sydney, June 1997). Chapman & Hall, London, 1997, pp. 362-369.
  19. Pribeanu, C., C. Mariage, J. Vanderdonckt, A Corpus of Design Guidelines for Electronic Commerce Web Sites, M.J. Smith, G. Salvendy, D. Harris & R.J. Koubek (eds.) Proceedings of HCI International’2001 (New Orleans, 5-10 Aug 2001), Lawrence Erlbaum Associates, 2001, pp. 1195-1199.
  20. Pribeanu, C., J. Vanderdonckt, A Methodological Approach to Task-based Design of User Interfaces. Studies in Informatics and Control, 11(2), 2002, pp. 145-158.
  21. Pribeanu, C., O abordare bazată pe sarcină în proiectarea unui asistent software de evaluare a utilizabilităţii [A task-based approach to the design of a usability evaluation software assistant]. Revista Română de Interacţiune Om-Calculator, 2(1), 2009, pp. 31-44.
  22. Scriven, M., Evaluation Thesaurus. 4th ed. Newbury Park, CA: Sage Publications, 1991.
  23. Theofanos, M., W. Quesenbery, Towards the Design of Effective Formative Test Reports. Journal of Usability Studies, 1(1), 2005, pp. 27-45.
  24. Usability Professionals Association, UPA 2005 Survey.
  25. Vanderdonckt, J., C. Farenc, (eds.), Tools for Working with Guidelines, Springer, London, 2000.
  26. Vilbergsdottir, S. G., E. L.-C. Law, E. T. Hvannberg, Classification of Usability Problems (CUP) Scheme: Augmentation and Exploitation, Proceedings of NordiCHI 2006, pp. 281-290.