Putting Product Reaction Cards Into Practice

At the end of a research session, I oftentimes ask participants to characterize a product.  “In a few words, how would you describe X?”

What I’m looking for are a few insightful words and, if I’m lucky, a tidy testimonial to supplement my research findings.  Unfortunately, that almost never happens.  Most of the time I get a few semi-useful words followed by a rehashing of the user’s experience, only with more emotional content.

Enter Product Reaction Cards, invented by two folks at Microsoft, Joey Benedek and Trish Miner, as a way to measure desirability.  There are 118 cards in total, each displaying either a positive or a negative/neutral adjective.  (Incidentally, 60% of the cards are positive and 40% are negative/neutral.  This is supposed to be representative of the user feedback balance point in usability evaluations.)  After being shown some stimulus, users are asked to select cards that describe it.  At first, giving users a common vocabulary seemed either smart or a potential waste of time.  I wasn’t sure which, so I decided to give the cards a shot.
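The mechanics are simple enough to sketch in a few lines of Python.  This is just a minimal illustration of the setup, not anything from the original paper; the words.csv file and its valence column are my own stand-ins for however you happen to store the 118 words.

    import csv
    import random

    # Load the 118 words; assumes a hand-made words.csv with
    # columns "word" and "valence" ("positive" or "negative/neutral").
    with open("words.csv", newline="") as f:
        deck = list(csv.DictReader(f))

    positive = [card for card in deck if card["valence"] == "positive"]
    print(f"{len(positive)} of {len(deck)} cards are positive "
          f"({len(positive) / len(deck):.0%})")  # roughly 60% in the stock set

    # Shuffle before each session so card position doesn't bias selection.
    random.shuffle(deck)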

I have to admit that I was a little nervous trying out a new technique that seems more like an elementary school game than a bona fide research technique.  After all, why can’t participants just describe things out loud?  Oh wait, I already tried that.  So I bought a few packs of blank index cards and printed out the 118 words on some Avery labels.  Once I’d stuck them on and spread the cards out on a table, the whole thing started to feel a bit more scientific.

At the end of one user testing session, I pointed to a table with the PRC cards spread out on it (randomly ordered) and asked the participant to choose 3-5 cards she felt characterized the website being tested.  I was still a bit hesitant to ask, but relieved when she hopped up and started selecting cards.  I looked at my observer and gave a hopeful “here goes” look.  After a few minutes, she returned to the desk and I asked her to present each card with an explanation of what the term meant.  She had taken great care in selecting her cards and presented them like a proud parent.  When I asked her to describe what each card meant, she was very specific and tied her definitions to specific website attributes she’d seen.  I couldn’t help but nod and smile as I took my notes.  This was great information!  It may be first-timer’s luck, but all of the users in the study seemed to like the activity.  I didn’t probe, but I think it gave them a tidy way to sum up their hour of thinking aloud.

After the test, I realized I had to do something with the data in front of me.  This is where the guidance from the Microsoft folks gets pretty loose.  Their original paper shows a Venn diagram comparing products and a “Top 5 cards” grid.  I did some searching and found that some people generate tag clouds or use simple spreadsheets.  I ended up going the spreadsheet route since that’s how I tallied the results (and this data wasn’t going to be client-facing anyway).  Even though my card data alone isn’t statistically airtight, there’s something to be said when 8 users chose only 25 of the 118 cards to describe a given product.  There’s definitely some meaning to be uncovered there.
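If you’d rather script the tally than hand-count in a spreadsheet, a few lines of Python will do it.  The sessions data below is invented for illustration; in practice you’d record each participant’s picks as she presents her cards.

    from collections import Counter

    # Hypothetical picks from a handful of sessions, keyed by participant.
    sessions = {
        "P1": ["Useful", "Clean", "Slow"],
        "P2": ["Useful", "Organized", "Clean"],
        "P3": ["Organized", "Useful", "Busy"],
    }

    # Count how often each card was chosen across all sessions.
    tally = Counter(card for picks in sessions.values() for card in picks)

    print(f"{len(tally)} distinct cards chosen out of 118")
    for card, count in tally.most_common(5):
        print(f"{card}: chosen by {count} of {len(sessions)} participants")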


A few tips on using Product Reaction Cards:

Tip #1: Laminate the cards.  There’s some effort involved in making 118 cards, and after only 8 sessions, some of the more popular ones already have creased corners and smudges.

Tip #2: Make space.  118 index cards take up a lot of real estate, and you want all of them to be readable.  So spread them out and encourage users to browse them all.

Tip #3: Randomize between sessions.  Re-arrange the cards between participants so that position doesn’t bias selection; cards at the far edge of the table shouldn’t get picked less often just because they’re harder to reach.

Tip #4: This isn’t my tip, but some people have swapped out stock cards for test-specific cards of their own.  If you think the interface is meant to convey something not captured in the stock set, add a card for it and see if people choose it.

Tip #5: Adapt them to your use.  For one test, I showed participants a homepage and had them select cards based on their first impression.  Then I had them perform tasks on the website and reselect cards.  There was a big difference (in this case, a positive change), and the cards helped users articulate it without having to do a verbal compare/contrast exercise.
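That before-and-after comparison is also easy to script.  Another sketch with invented picks; simple set arithmetic shows which impressions survived the tasks and which changed.

    # Hypothetical picks for one participant, before and after the tasks.
    first_impression = {"Busy", "Confusing", "Professional"}
    after_tasks = {"Professional", "Organized", "Time-saving"}

    print("Kept:   ", first_impression & after_tasks)
    print("Dropped:", first_impression - after_tasks)
    print("Gained: ", after_tasks - first_impression)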
