Thursday, March 17, 2011

Measuring User Experience — Initial Forays

My previous post on this blog delved into usability and user experience, arguing that, while user experience is intrinsic to each product or service, there are several objective ways to measure it, and that these metrics can be applied across different products and services.

While my gut feeling had already suggested some ways to measure user experience beyond archetypal usability metrics (i.e., measuring effectiveness and task completion success), I informally surveyed community members at the Quora and UXExchange Q&A forums to gather further clues. Indeed, I received some insightful answers, was pointed to previous discussions and essays on the subject, and, as an academic bonus, to a paper presented by some Google UX researchers at CHI 2010 on this exact topic.

So, it seems, I might be on the right track.

Without going too much into detail about the pros and cons of each answer (which is fodder for forthcoming posts, I guess), here's an unordered list of possible ways to measure user experience:

  • Analytics – average time of stay, return rates, aggregate data representation of multiple people
  • A/B testing
  • Task goals – registration completion, contact form submission, path to purchase
  • Customer support responsiveness
  • Customer satisfaction evaluation – quantitative and qualitative
  • Social sensing – Facebook likes, retweets, Google Trends
  • Experience monitoring – qualitative, representation of a single session
  • Mindshare goals – qualitative measures such as awareness, branding effectiveness
  • Loyalty
  • Net Promoter Score
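Some of these metrics are directly computable. As a minimal sketch, here is how the last one, Net Promoter Score, is derived from the standard 0–10 "would you recommend us?" survey question: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). The survey responses below are hypothetical, purely for illustration.

```python
def net_promoter_score(scores):
    """Return NPS: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither group.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey responses on the 0-10 scale.
responses = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]
print(net_promoter_score(responses))  # 5 promoters - 2 detractors over 10 -> 30.0
```

Note that the result is a single number between −100 and +100, which is precisely what makes it attractive for comparing across products — subject, of course, to the caveats below.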

I must point out that this list has some caveats: it can be unreasonable to apply some of these metrics in particular scenarios, and all of them have to be assessed from the perspective of the ultimate goals of the product or service they are applied to. In sum, the context in which they are applied serves as the basis for all measurements. This introduces an additional variable when comparing the user experience of two products or services.

Posted via email from ruidlopes' postground