
How to Write a Usability Validation Report

Version 1.0, May 1998. Author: J. Kirakowski, Human Factors Research Group, University College, Cork, Ireland. Email: jzk@ucc.ie

Structure Summary
This document is a series of recommendations as to how to document a usability validation report for an Information Engineering project. The primary source is the Report Document Standard from the MUSiC Project, ESPRIT 5204, 1993, which has been modified in the light of practice in industrial settings and further experience from TAP projects. The main sections of this document are:

Thematic Overview: a general thematic overview of a usability validation report.

Front Matter: specific features of the layout of the front matter of a usability report, with particular emphasis on the Summary of Findings page at the front.

Body: the 'glue' of the validation report, which connects the Summary of Findings presented at the start with the technical information contained in the annexes.

Annexes: how to present all the technical details necessary to enable a reader or reviewer to trace the conclusions in the Summary of Findings in the Front Matter back to the data that led to them.

Each section and subsection is subdivided by one or more of the headings 'Purpose:', 'Recommendation:' and 'Comment:'.

Thematic Overview
The usability validation report documents the results of validation activities. Although only one validation report may have been originally envisaged in the project's TA, it is recommended that after each validation exercise a working paper following this structure is issued, and that at the end of the project the working papers are gathered in an annexe of the final validation report.

The usability validation report should develop the following structure:

Front Matter
- Summary of what has been found with respect to usability

Body
- The evaluation setup, including objectives and methods, users involved, test scenarios
- Presentation of method results (for each method used): identity of method, summary of results, conclusions
- Overall conclusions and detailed recommendations

Technical data (as annexes)
- Differences between context of use and context of test
- Description of tasks used in testing
- Instructions to users
- Metrics and raw data obtained

Each of these three components (Front Matter, Body, and Annexes) is now described in greater detail.

Front Matter
The Front Matter consists of:

The Title Page
Purpose: to enable the reader to locate the date and authors of the document.
Recommendation: the standard TAP front page layout is recommended.

Summary of Findings
Purpose: to enable a reader to understand the main points about the usability of the application being tested at a glance.
Recommendation: not more than one single page, with the recommendations arranged in order of priority/importance, preferably as bullet points. A hypothetical example of such a page is sketched at the end of this section.
Comment: it is good psychology to include good points as well as critical points in this summary.

Structure Summary
Purpose: to enable the reader to quickly locate where information is presented in the document.
Recommendation: one page, with a short (two-line) description of the contents of the major sections of the document, especially the annexes.

Acknowledgements
Purpose: to identify all partners apart from the authoring team who contributed to the document, and to identify personnel from outside the project who assisted in the development of the document.

Contents
Recommendation: page numbers should be included.
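As an illustration only, the bullet points on a Summary of Findings page might read as follows; the findings are invented for the example and are not drawn from any project:

- Overall user satisfaction met the target level set in the Usability Evaluation Plan.
- All users completed the core order-entry task well within the target time.
- The search function caused frequent errors; re-design is recommended before the next release (high priority).
- Error messages were found confusing by novice users (medium priority).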

Body
Purpose: to lay an 'audit trail' from the raw data in the annexes to the Summary of Findings page at the front, in order to demonstrate the credibility of the conclusions.
Comment: the body of a validation report can be considered as the 'glue' which holds the report together.

Evaluation Setup
Purpose: to document the essential features of the evaluation, to enable comparisons with other evaluations, process improvement recommendations, etc.

Objectives and Methods
Purpose: to document the evaluation objectives, and to cross-link these to the methods used to measure their attainment.
Recommendation: summarise the objectives, summarise the methods used (one paragraph maximum for each method), and indicate the cross-correlation between objectives and methods using a 2-d matrix format; a hypothetical example of such a matrix is sketched at the end of the Evaluation Setup section.
Comment: the evaluation objectives should relate to the evaluation objectives determined in the Usability Evaluation Plan. Reference should be made to the appropriate edition and pages of this plan. It is not appropriate in this kind of report to write more than a paragraph about each method used: refer the reader to source documents instead.

Users Involved
Purpose: to document how many and what kind of users were involved in the evaluation.
Recommendation: important aspects of the user population have been identified in the UVA form. The user sample should be carefully described to show the extent to which it corresponds to the intended user population.
Comment: the purpose is not to attempt a comprehensive statistical-type stratified survey, but simply to note, in comparison to the intended population, what the sample user characteristics were.

Test Scenarios
Purpose: to document the way in which the application was tested.
Recommendation: describe the general setting in which the evaluation took place (location, environment); briefly describe the tasks and refer to annexe 2 and annexe 3 for more detail. If there is more than one category of user involved in the evaluation, show which users carried out which tasks using a 2-d matrix form.
Comment: the scientific criterion is that the evaluation should be replicable from the description given in this section. This is rarely possible in usability evaluation, for many reasons. The rationale for this section is to enable an evaluator to understand the situations in which the application has been tested, and those in which it has not.
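As an illustration only, an objectives-by-methods matrix of the kind recommended above might be laid out as follows. The objectives and methods named here are placeholders, not drawn from any particular project; an X marks a method that contributes evidence towards an objective. The same layout serves for the users-by-tasks matrix recommended under Test Scenarios.

                                  SUMI questionnaire   Task completion time   Observation notes
    Objective 1: user satisfaction        X
    Objective 2: task efficiency                                X                     X
    Objective 3: error recovery                                                       X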

Presentation of Method Results
The following sub-sections should be repeated for each method or metric used.

Identify Method
Purpose: to identify the method which has yielded the results being analysed.
Recommendation: give a brief description and a reference for more detailed information.

Present Summary of Method Findings
Purpose: to show in overview what the method indicates about the application.
Recommendation: a graph or bar chart is useful here, so long as there is a small paragraph of text to explain the significance of the data. If the method yields non-numeric data, the findings should be summarised verbally, using a tabular presentation if at all possible. Refer to the appropriate annexe where the raw data is tabulated and where the statistics for the graphs shown here are calculated.
Comment: if more than one user group provides data for the method, the user groups must be treated separately in this section, and a summary over all groups should also be provided at its conclusion. A minimal sketch of how such per-group and overall summaries might be produced is given at the end of this section.

Make Conclusions from Method Results
Purpose: to draw conclusions from the method results that can be re-stated at the front of the report in the Summary of Findings.
Recommendation: one small paragraph is usually sufficient.
Comment: conclusions may be evaluative (they declare what standard the application has reached), formative (they make suggestions for re-work or redesign), or both: a good method usually yields both kinds of conclusion.
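By way of illustration only, the following sketch shows one way the per-group and overall summaries recommended above might be computed. The group names and satisfaction scores are invented for the example, and the choice of Python and of the mean and standard deviation as summary statistics is an assumption on the part of the editor, not part of the report standard.

    # Minimal sketch: per-group and overall summary of a numeric usability metric.
    # Group names and scores are invented purely for illustration.
    from statistics import mean, stdev

    scores_by_group = {
        "Novice users":      [52, 61, 58, 47, 66],
        "Experienced users": [71, 68, 75, 64, 70],
    }

    all_scores = []
    for group, scores in scores_by_group.items():
        all_scores.extend(scores)
        print(f"{group}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")

    # Summary over all groups, as recommended at the conclusion of the section.
    print(f"All groups: n={len(all_scores)}, mean={mean(all_scores):.1f}, sd={stdev(all_scores):.1f}")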

Overall Conclusions and Detailed Recommendations
Purpose: to state whether the objectives of the evaluation have been met, and to make detailed recommendations for further development work or other actions.
Recommendation: this section should have two parts: 1. Satisfaction of Objectives; 2. Detailed Recommendations. Refer back to parts of the Presentation of Method Results section where appropriate.
Comment: where there are many separate re-design suggestions or comments, they can be grouped together into a list, which should be ordered by severity. It is important not to hide re-design suggestions away in an annexe, where they are quite liable to get lost and forgotten. Remember to comment on the good aspects of the application as well as the bad ones, so that the design team understands the value of their achievements and will not undo good work.

Annexes
The following annexes are recommended:

Differences between Context of Use and Context of Test
Purpose: to demonstrate any differences between the way the application has been tested and the way it is intended the application will be used in real life.
Recommendation: refer to sections from the UVA (in the User Validation Plan). A series of bullet points is usually quite sufficient.

Description of Tasks
Purpose: to describe the tasks that the users carried out in testing the application.
Recommendation: the easiest format is to state, for each task, the:
- starting condition of the application and its environment
- final condition of the application and any effects on the environment
- brief statement of the user actions involved to get from start to finish.
A hypothetical example of a task described in this format is given at the end of this section.

Instructions to Users
Purpose: to describe the instructions that have been given to the users, and to document what state the system was in when each user started interacting with it.
Comment: it is usual to provide users with a written sheet of instructions, which may also be read out or summarised. This sheet or sheets will suffice, since they usually draw the users' attention to the initial state of the system.

Metrics and Raw Data
Purpose: to document the actual data obtained using each method, for reference and for possible re-analysis to show trends over validations, etc.
Recommendation: where possible, show not only the raw data (protocols, etc.) obtained, but also a statistical summary that can be used in the body of the report.
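As an illustration only, a single task described in the format recommended under Description of Tasks might read as follows; the application and task are invented for the example, not taken from any project:

Task 3: Record a customer complaint
Starting condition: the application is at the main menu; one test customer record exists in the database.
Final condition: a new complaint record is attached to that customer; the application has returned to the main menu.
User actions: locate the customer record, open the complaints form, enter the complaint details, save, and return to the main menu.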
