What makes a successful usability test?
- Author Benjy Stanton
A weeknote starting 4 November 2024.
We’ve just finished up 2 days of in-person usability testing. I think it’s fair to say that we all came away feeling positive about the trip.
So, I’ve been reflecting on a few reasons why I think the sessions went well (note: it was a team effort)…
- We ran the user research at the participants’ place of work
- We worked hard to make the environment and device setup as realistic as possible (we ran the prototype in Microsoft's Edge browser on a Windows laptop, connected it to a user's display, sat at a desk near where they would usually work, and plugged in a keyboard and mouse we found in the office)
- We brought along a multidisciplinary team to observe (a user researcher, content designer, interaction designer, front-end developer and delivery manager)
- We held multiple internal reviews of the prototype in the sprints running up to the research (and implemented ideas from a range of people, including business analysts and software developers)
- The content and data were as realistic as we could make them (we know we have more work to do here though)
- We built on what we learned during the discovery and early alpha phases to create a version of the prototype that was evidence-based
- We explained to users that not everything was working just yet (this didn't seem to affect the sessions negatively)
- Note-taking was a team sport (although it was led by our excellent content designers)
- We worked hard to get the whole UCD team access to the prototype, despite some frustrating barriers in our way
- We discussed our findings between sessions and kept a running list of key issues to address
- We invited remote observers
Having said all that, we now have a lot of work to do to improve the design of the prototype. And we need to balance this with adding a longer, more realistic task flow to test with participants next time.