I recently ran my own UR workshops with groups of students at a college. However, prior to my session, a couple of young male entrepreneurs had come in to run their own research session. I found out they had founded a company and created a new app that aimed to rival *Insert an unnamed globally leading video sharing website here*. By their own reports, it was gaining traction and doing well. They were at the college to do some research with their target audience about their online habits and preferences for such an app. They were happy for me to sit in during their session, and I thought I’d see if any of their insights were relevant to my own research.
However, what I observed were some key ways *not* to conduct effective research sessions. I thought I would share some of those observations here, along with how they could be improved.
Not good: “How much would you pay for this app?”
This is an example of a closed question: it invites a narrow, fixed response rather than a discussion. However, it’s more problematic than that. You’re asking people how much they would pay for a service they were introduced to only 10-15 minutes ago and don’t even know whether they need. How should they estimate its value to them?
Additionally, money can be a tricky thing. People have different financial circumstances and levels of disposable income. Asking what people “would” pay in a group scenario invites social desirability bias, which could encourage participants to over-inflate what they would be prepared to pay for a service or product.
Better: “Tell me about the last time that you paid for an app?”
This open question invites more follow-up questions about the user’s decision to spend money on a service – Why did they decide to purchase? What factors influenced their decision? How did they decide whether it was good value? What was the payment process like? Would they buy something similar in the future?
It allows them to reflect back upon a real action they took in their life, and how they found that experience, rather than imagining a hypothetical future scenario.
Not good: “Don’t you hate it when [competitor’s] website does [insert thing here]?”
This is, of course, a leading question. By framing it in this way, they were priming their participants to agree with the statement – because of course [competitor’s] website is terrible, it does that thing that everyone hates!
This also runs the risk of influencing people to agree with the statement if that is what the majority say – people don’t like to be the odd one out.
Better: Observe your users interacting with competitor websites or services
By observing people using live websites, apps or services as they would normally do, or by asking them to complete a specific task, you will get a much more realistic understanding of their likes, preferences and pain points. Letting them guide themselves through, with minimal prompting and neutral statements, also helps to reduce interviewer bias.
You can then follow this up with a more structured interview about any observed pain points or niggles and ask the user to elaborate.
Not good: “Do you think you’d use this app?”
Sounds pretty harmless, doesn’t it? However, in this situation, the guy who had designed and invented the app was the one sitting in front of the participants, running the session.
Response bias is a type of bias where the subject consciously, or subconsciously, gives a response that they think the interviewer wants to hear. In this case, that response is “yes, I would use your app!”
Better: Use independent researchers
Of course, it is great to involve everyone in user research. I believe teams are most effective when everyone understands what research is being done and why, how it is carried out, what the findings are and how they contribute to iterative design changes. Agile teams where the research is done in isolation from the rest of the team can be problematic and lack transparency. However, there is a time and a place, and indeed, a person.
For example, rather than having the service designer in the research interview, could they observe remotely? Could stakeholders observe through two-way glass? What about recording equipment?
Do you have the right mechanisms and ceremonies in place so that user researchers can present their findings to the rest of the team?
These methods can help make users feel more comfortable giving honest feedback, as the person leading the interview is more neutral and removed from the creation process. Telling an app designer you hate their app to their face would be more difficult than conveying the same message to an impartial researcher.
Not good: “Would you prefer [X] or [Y]?”
This was another question I observed being posed to the group, and it was again hypothetical – rather than showing participants two separate designs, they were asked: if we did this, would you want it [this way] or [that way]?
Again this is difficult to quantify because we know that what users say and what users do can be two very different things. What should they be basing their preferences on? How do those two options link back to user goals or the user journey?
Better: A/B testing
A/B testing is essentially an experiment where two or more variants of a web page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. It can be very technical, with many different analysis tools available to help collect and process your data, but it can also be done in a more “quick and dirty” way.
Show one version of the design to half your users, and show a different version to the other half. It doesn’t even need to be a working design – it can be a rough prototype or a wireframe if need be. Of course, other variables may still come into play, but it will give a general indication of where to concentrate efforts going forward.
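If you do want to go a step beyond “quick and dirty”, the statistical analysis can be surprisingly lightweight. Here is a minimal sketch in Python, assuming you have simply counted how many users in each group completed your conversion goal – the counts below are made up purely for illustration, and the test used (a chi-squared test via scipy) is just one common way of checking whether the difference is likely to be real.

```python
# Minimal sketch: comparing two A/B variants with a chi-squared test.
# The numbers are illustrative only - substitute your own observed counts.
from scipy.stats import chi2_contingency

# Variant A: 480 users saw it, 61 completed the goal (e.g. signed up)
# Variant B: 495 users saw it, 89 completed the goal
converted = [61, 89]
shown = [480, 495]

# Build a 2x2 contingency table: [converted, not converted] for each variant
table = [
    [converted[0], shown[0] - converted[0]],  # variant A
    [converted[1], shown[1] - converted[1]],  # variant B
]

chi2, p_value, dof, expected = chi2_contingency(table)

for name, c, n in zip(["A", "B"], converted, shown):
    print(f"Variant {name}: {c}/{n} converted ({c / n:.1%})")

print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("The difference is unlikely to be down to chance alone.")
else:
    print("Not enough evidence yet that the variants really differ.")
```

Even a rough check like this is more informative than asking people which option they think they would prefer.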
Summary
I feel the need to point out that at no point did I attempt to jump into these other sessions whilst they were ongoing, but I did offer some gentle pointers afterwards for how they might improve things next time to get more useful data.
It is rare that a research session will go perfectly – we’re all human (even researchers!) – and we might phrase something clumsily, or ask something in the wrong order, but it is always about learning – about our users and about ourselves.