Two Week or Two Day Testing?
- by Kevin
There are a lot of great things happening in the user research field. There are tons of new tools that make getting into research easier (and cheaper), and new books take a contemporary approach that makes user research feel less like a laboratory activity and more like part of a design process.
When I got started with user research, there were traditionalists who said usability tests should take three weeks and there were practitioners who were trying to shave off the fat while still giving projects the benefits research offered. Nowadays, there are books preaching single-day tests and even asking your office colleagues to impersonate end users. Whatever your feelings about validity or quality of these practices, the truth is that the traditional methods of research have been smashed. Everybody is figuring it out for themselves and evolving their practices.
It’s liberating to be able to dial research up or down and fit it within any time or cost constraint you have. The trick is to realize that when you take a two-week test and squeeze it into two days, not only will the results differ, the tests themselves differ as well. I’m going to talk about four types of tests I’ve run over the past decade and the talking points I’ve shared with stakeholders so they understand the differences before we jump into a test together.
Gut Check Testing
This isn’t really a test. It usually comes in the form of a stakeholder explaining why they think something is an issue and asking you for proof to back up or refute their position. These activities usually consist of gathering a few people in the office and asking them a few very pointed questions over 5-10 minutes. The result will decide a contentious issue, so getting at least 8-10 people to test the stakeholder’s intuition is critical.
Time: 1 day | Session Length: 5-10 minutes | Recruits: 8-10
Hallway User Testing
This is the lightest-weight usability testing I do. It usually consists of gathering 8-10 office-mates (avoiding designers) and running them through a brief test script of 2-5 questions. The questions are usually general, since we’re typically working with low-fidelity prototypes that we want to gut-check. We usually don’t record the sessions; we just take notes with an observer and then write up a quick email with bullet points afterward, conveying the successes, the opportunities for the prototypes, and a few exemplary quotes.
Time: 1-2 days | Session Length: 15-20 minutes | Recruits: 8-10
Agile / Lean User Testing
You say Agile or Lean, I say tomato. What we both mean is research that fits within a short, iterative design cycle. It focuses on the areas of the product that are in the current development “sprint,” where we’re trying to get appropriately deep insight into the design. If we are early in the design, the questions are more about general impressions and basic tasks with a low- to medium-fidelity prototype. As we get closer to production and load the prototype with graphic design and content, our tests also get higher fidelity and we can dig into the nooks and crannies of the interface. Our presentation usually consists of printouts of the UI along with a team debrief. A prioritized email outlines what we found so that it can be fed into the next cycle or parked.
Time: 4-7 days | Session Length: 30-45 minutes | Recruits: 8-10
Formal User Testing
This is the type of ideal test you dream about, long for, and can’t wait until it’s over, because it feels like an endurance sport. It’s the most rigorous type of test I do, with lots of transparency and stakeholder buy-in at every step of the way: recruiting parameters, prototype development, and test-script validation. It gets the deepest insights and digs into as much of the product as we have available to us. It also offers the highest confidence to the team and client. We typically present findings through informal debriefs along with a formal deck with screenshots, user video clips, and design recommendations.
Time: 2 weeks | Session Length: 60-90 minutes | Recruits: 8-10 per persona
The key thing for us to keep in mind is that shorter timeframes don’t necessarily mean a degradation in quality; they mean a shift in focus. If we let go of tradition and reframe one tool as multiple tools, we can select the one that is most nuanced and specific to the opportunity in front of us.