Date of Award

2004

Document Type

Thesis - Open Access

Degree Name

Master of Science in Human Factors & Systems

Department

Human Factors and Systems

Committee Chair

Dahai Liu, PhD

Committee Member

Christina Frederick-Recascino, PhD

Committee Member

Hong Liu, PhD

Abstract

Usability testing is becoming a more important part of the software design process, and new methods now make remote usability testing possible. Remote testing can be less costly and can often allow more data to be collected in less time, provided the user can still supply meaningful data. However, little is known about how the user experience differs between the two testing methods. To examine differences in user experience between remote and traditional website usability testing, this study randomly assigned participants to two groups: one completed a usability test in a traditional lab setting, while the other used a remote testing location. Both groups completed two tasks, one simple and one complex, using Amazon.com as the test interface. Task time and the number of critical incidents reported were the dependent measures. Significant differences in task times were found in both the between- and within-subjects conditions. Task times differed significantly between task types; the complex task generally took about twice as long as the simple task. No significant differences were found for critical incident reports in either the between- or within-subjects condition. Participants seemed hesitant to report interface problems, preferring to struggle through the task until they satisfied the task requirements. Subjective user assessments of the task and website were similar across both conditions, and user behavior while navigating the site was remarkably similar in both test conditions. The results suggest a similar user testing experience for remote and traditional laboratory usability testing.
