Usability tests for eMammal Lite application development

In late 2016 and early 2017 we conducted a series of usability tests to support the iterative in-house development of the eMammal Lite web application.

Overview

The eMammal Lite web application, a camera trap animal identification game and explorer, launched on November 29, 2016. Eka Grguric and Walt Gurley conducted usability tests prior to the launch of the application (October 25 and 28, and November 11, 2016) and following it (February 2, 2017). The general design and function of the application were well received. Feedback prompted the addition of several user interface features and a significant change in the graphic elements and language of the application interface.

What we wanted to know:

  • Can the user initiate a session? (only asked in the post-launch test)
  • Can the user use the interface to find an image and click on a name?
  • Can the user look at image statistics on the back of a card?
  • Can the user find information about the page?
  • Can the user navigate to, search in, and use the photo archive?
  • Can the user flip a card in the photo archive?

What we found:

  • After the addition of a login feature, users were able to distinguish between creating an account and continuing to use the app as a guest user
  • Users were able to navigate the interface without strict guidance
  • Users were not aware of information available on the back of a card, as no visual cue indicated this interactivity
  • Using a different button symbol on each page to access information about the current page confused users

How we did it

We conducted four ‘guerrilla testing’ sessions in which we set up a testing table in the lobby of D. H. Hill Library and asked individuals walking by to participate in a user testing study. Our testing materials included an iPad running the application for the tester to use and a laptop to record user interactions and think-aloud commentary. We employed Jenn Downs' "Laptop Hugging" method: the laptop was positioned with the lid partially closed, and participants were seated behind the laptop and asked to ‘hug’ it while the built-in webcam recorded the screen of the iPad.

Each participant was verbally instructed to complete a series of tasks designed to take them through what we expected to be a typical application session. Participants were asked to describe what they were doing and seeing, and videos of the iPad screen and participant audio were recorded. We also collected notes on each participant’s actions as they interacted with the application. The notes and video from each test were used to evaluate the application. On completion of the test, each participant was given a candy bar for their participation.

The four sessions took place intermittently over four months, with five to six participants per session.

Team

  • Walt Gurley
    Former Data Visualization Analyst
  • Ekatarina Grguric
    Former NCSU Libraries Fellow