
Welcome – UR Here

Navigating the world of User Research

“If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.”

— Albert Einstein, Theoretical Physicist

Making Personas Personal

Personas have become ubiquitous in product and digital design. They’re everywhere, designed to represent a particular user or customer segment for the website, product or service you are designing. The idea is that personas allow design teams to empathise with their users, understand who they’re designing for, and appreciate what the user experience might look like for each of those different groups.

Photo by Jill Wellington on Pexels.com

They’re usually based on all the research gathered in an initial or discovery phase of a project – a visual representation of what you have learnt about your users. By putting these ingredients of demographics, attitudes, goals, behaviours, fears, hopes, emotions, and thoughts into a big bowl, you can create your cookie cutter user segments to snack on later during the project phases. Usually, those cookies will be decorated with a nice smiley stock photo to represent your user.

But sometimes that recipe can be a disaster. Personas are often likened to Marmite – some love ’em, some hate ’em. I find many of them cheesy clichés full of bad stock photos – have you ever googled “woman eating salad”? If you do, you’ll see exactly what I mean. Why are they so happy? They’re eating SALAD.

Personas can also focus too much on detail, especially in the ‘bio’ section, in a bid to make the personality ‘real’. Is knowing that Susan is 47, drives an old blue Land Rover, lives in a semi-detached house, holidays in the South of France, is an extrovert, loves chicken pie, goes to the gym and walks her dog after work, and enjoys listening to jazz really the most important information you need about her? Probably not. It feels a bit too personal. What we are interested in, and what is more useful to a design team, is what Susan does. What she has done in the past. What her attitudes are and how these might shape what she does and how she does it. What her goals are. How she wants to achieve those goals.

So how can you make personas work for you?

1. Activity Personas

By conducting good initial or discovery research with users and stakeholders, you should be able to identify user behaviours – past, present and future. Identify what your users are trying to do, how they try to do it, what their priorities are, what resources they have available to help them ‘do the thing’ and what barriers they experience. Keep the persona focused on activities (there’s a rough template sketch after this list), such as:

  • How are they currently solving X problem/goal?
  • How are they currently interacting with you/competitor’s product?
  • How do they currently use digital services? Are they confident/competent?
  • What resources do they use/need to achieve their goal?
  • What are their priorities?
  • What barriers or problems does the user experience?
  • Do they require help and support from others?
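
If it helps, you can even capture this kind of persona as a lightweight structured template rather than a glossy poster. The sketch below is purely illustrative – the field names and the example content are my own assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ActivityPersona:
    """A persona template that foregrounds behaviours and attitudes over biography."""
    name: str                   # optional - you don't have to name or gender your personas
    descriptor: str             # behaviour-based label, e.g. "The Changer"
    current_approach: str       # how they currently solve the problem or goal
    digital_confidence: str     # how confident/competent they are with digital services
    resources: list = field(default_factory=list)      # what they use or need to 'do the thing'
    priorities: list = field(default_factory=list)      # what matters most to them
    barriers: list = field(default_factory=list)        # problems they run into
    support_needs: list = field(default_factory=list)   # help they need from others
    key_attitudes: list = field(default_factory=list)   # values that shape their behaviour
    quotes: list = field(default_factory=list)          # direct quotes from research

# Invented example for illustration only:
persona = ActivityPersona(
    name="",                    # left unnamed on purpose
    descriptor="The Changer",
    current_approach="Pulls their own reports from software to inform changes at work",
    digital_confidence="High - comfortable with self-service reporting tools",
    priorities=["good quality data", "improvement across the wider sector"],
    barriers=["time pressure", "inconsistent data returns from others"],
    key_attitudes=["values data over pure compliance"],
    quotes=["I'd rather see the numbers than guess."],
)
```

However you record it, the point is that every field answers “what does this person do and why?”, not “what car do they drive?”.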

2. Consider attitudes

What are the user’s key attitudes or values and how do these affect their behaviour? In cognitive behavioural therapy, the focus is on changing behaviours through changing thoughts and emotions – how we think affects how we feel, how we feel affects how we act.

For example, when I was working on personas for a recent project about data collection in education, I spoke with a number of users who displayed varying attitudes towards data. Some spoke about how they valued access to good quality data and liked to use it to inform and implement changes in their place of work and the wider sector. They reported behaviours such as frequently referring to reports, pulling their own data reports from software, and completing non-mandatory data returns. At the other end of the scale, there was a group of users who valued compliance over data, and whose priorities were focused more on business outcomes and contract compliance. They reported less use of data reporting for improvement, less robust processes for making the most of the data sets being captured, and were less likely to participate in non-mandatory data collections. Their values and attitudes had a direct effect on their behaviours.


3. Avoid stock photos

Try not to use stock photography, as it makes it harder to think about the persona as a real-life person and can undermine credibility (woman laughing at salad, anyone?). If you can find a more realistic photo then do – perhaps ask colleagues or friends to pose for you – or consider using icons or illustrations instead if you prefer not to use a photo.


4. Be prepared to challenge your assumptions

Personas should not be created at the start of a project and then never revisited – if you are working in an agile and iterative way, you should be learning more about your users and their behaviours each time you conduct research. Be prepared to revisit and review your personas, and to challenge any assumptions or hypotheses that have not stood up to research.

It is fine to start personas based on limited knowledge, or perhaps just data from stakeholders or customer service teams – however, your research should be designed in a way that allows you to test those assumptions and find out whether they hold true. If you create a persona on the assumption that your 18-25 year old customers are using product X at home, and when you speak to them you find out that they actually use it more when travelling because it affords them X benefit, you need to revise your personas accordingly.

On my most recent project, I revised my personas three or four times through the discovery phase; each time I had an assumption about how our user base was divided, only to be proved wrong after conducting research. After testing personas based on job roles, organisation size and software used, I ended up with activity- and attitude-based personas. These were a much better reflection of what I had learnt through research, and helped me to communicate to our assessment panel how our users’ attitudes impacted their behaviours, and thus how they would interact with our proposed solution.

I included a variety of direct user quotes in the personas to illustrate this, and gave each one a behaviour-based descriptor – “The Changer”, “The Compliant” – to summarise the position they inhabited in our space (I had seen a similar example from GoToMeeting).


I still named them (you don’t have to name or gender your personas) and gave them a brief background, but this time it felt like I knew them better than a strange laughing woman eating salad.

Engaging your team in User Research planning

Planning user research is a key contributor to the success of any design project. It ensures that an iterative design cycle can happen, using research findings to inform prototyping and development and anchoring each step of the process to the user experience. You should plan user research at the start of each development phase, and update your plans as you learn more about your users. Planning also allows you to agree the scope of your research for the upcoming sprint or project phase.

A user researcher needs a timescale, an understanding of the user groups, recruitment avenues and associated tasks, the aims of the research, the methodology to be employed and the resources needed to carry it out. Surely though, most of this is the responsibility of the researcher? It’s their specialism, the task tickets are assigned to them, it’s their role to make sure the team is user focused – isn’t it? Yes and no. Yes, those things are true (generally speaking), but I find that planning user research is most effective when the whole team is involved.

The risk of it sitting with just the UR is that the rest of the team delegates all of the responsibility for understanding the users to that one person, which diminishes wider ownership. Research planning is more than an administrative task to be carried out at a desk by a single person.

Why should you plan user research with the team?

A shared understanding of the known and unknown

One of the main things to do with your team is to agree what the research objectives are. What do you know already (through evidence, not assumptions) and what is still unknown? What questions do you want to answer through your research? Are there assumptions that need validating?

This allows all the different professions to think about what questions they have that user research can help to answer. The questions a UR wants answered will be different from those of a Business Analyst, a UX Designer or a Content Designer. By allowing everyone to share their “unknowns”, you can begin to formulate and prioritise your research questions.

Manages expectations

Planning user research with the whole team means that recruitment, resourcing and time constraints can be openly discussed. A consensus can be reached on how long certain research activities are going to take, and the researcher can educate the team on things such as how long transcribing a usability session takes. This allows expectations and timescales to be set in agreement, and also opens up opportunities for other team members to get involved in supporting research activities, through observation, note-taking and so on.

It also allows for discussion around what things user research can and cannot help us with and which methodologies are best suited to which research questions. This will hopefully avoid awkward conversations later on down the line. 

Helps to focus priorities

After collecting all of the knowns and unknowns from the team, research questions can be formulated (the whos, hows, whys, whats and whens) and then prioritised.

When prioritising research questions, the team should ask itself the following questions:

  • Is there anything we need to know now in order to move the project forward? (i.e. for the next sprint)
  • Is there anything that, if left unanswered, would jeopardise the progress of the project?
  • Which answers would have the biggest impact on the project?
  • Are any of the research questions out of scope at the moment?

The easiest way to prioritise options in a group is dot voting. This allows everyone to have their say and it also gets the team up and moving about a bit. By everyone voting, it creates a shared responsibility for the research decisions. 

(You will need to have a discussion about this and possibly moderate it, as you want to ensure that the scope and plan are coherent and would make sense in a user setting – you might not be able to answer the top four questions in one research session if they cover very disparate topics, for example.)
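
If the dots are captured digitally (for example, exported from an online whiteboard), the tallying itself is trivial. A minimal sketch, where the questions, votes and the cut-off of four are all invented for illustration:

```python
from collections import Counter

# Each entry is one dot placed by a team member against a research question.
# Questions and votes are invented for illustration only.
votes = [
    "How do users currently complete their data returns?",
    "How do users currently complete their data returns?",
    "What barriers stop users finishing the form?",
    "What barriers stop users finishing the form?",
    "What barriers stop users finishing the form?",
    "Which devices do users access the service on?",
    "What do users do when a submission fails?",
]

tally = Counter(votes)

# The top-voted questions go forward into the research plan...
top_questions = [question for question, dots in tally.most_common(4)]

# ...but sanity-check them as a group: if they cover very disparate topics,
# they may not fit coherently into a single research session.
for question, dots in tally.most_common():
    print(f"{dots} dot(s): {question}")
```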

Sample Agenda for a User Research Planning Meeting (60-90 minutes)

  1. What questions have we already answered from research? (KNOWN) – aka what do we have evidence for already?
  2. What questions do we need to answer for the next round of research? (UNKNOWN) – aka what do we want to find out?
  3. Group similar themes – move anything out of scope
  4. Prioritise questions via dot voting (3 or 4 dots per person)
  5. Review top responses and agree on coherence and consistency. Top-voted responses go forward into the research plan

Write the Plan

Once you have your questions that are in scope for the next phase of research, you can put together a more concrete plan, which includes your hypotheses. I really like this example team-framework plan which can be filled in as a team and helps to solidify the discussions into a document that you can refer back to. 

For more information on planning user research:

https://www.gov.uk/service-manual/user-research/plan-user-research-for-your-service

https://www.gov.uk/service-manual/user-research/plan-round-of-user-research

https://userresearch.blog.gov.uk/2014/11/26/how-hmrc-got-a-whole-team-involved-in-planning-user-research/

Why internal users matter too

“We don’t need to do user research, because this will be mandatory for staff to use. They won’t have a choice, it’s a business decision”

“We don’t need to speak to our users, they’re staff, just like us. We know what they need”

“We don’t need to do user research, it’s only for people working in the organisation. The public won’t be affected”


The sentiment behind these statements might sound familiar – ‘internal users don’t matter as much as external users’. External or public users can number hundreds, thousands, perhaps even millions. There is sometimes the assumption that external users are where all the glory is – the success stories, the media coverage, the customer feedback. Internal users are simply by-products, casualties of your digital transformation and product design. But this would be a mistake for a number of reasons.

Internal users are predominantly people employed by the business – managers, administrators, customer service operators, call centre workers, operations staff, finance departments, HR and a myriad of other functional roles. They interact with various digital touch points along the way and are part of the wider end-to-end user journey, both online and offline. They are still intrinsic to the end user or customer’s experience, even if they have no face-to-face contact. If you provide them with a digital solution or a piece of software that makes their job more difficult, less efficient or less effective, the chain reaction will eventually result in a poorer experience for the end consumer, even if the consumer is never aware of the cause. This could take various forms:

  • Unable to have their query dealt with
  • Longer processing/waiting times
  • Errors in their personal details
  • Lack of accurate information or support
  • Having to repeat themselves to multiple people
  • Loss of trust in a business or organisation
  • Not receiving or paying the correct amount of money
  • Not receiving phone calls, letters or texts

This could be a long, complicated web form that doesn’t have auto-save and times out if not completed and submitted within a 10-minute window, or a process with far too many unnecessary steps. Or perhaps it is a search feature that doesn’t allow the use of certain terms, or a system so illogical that users find their own renegade workarounds, such as locally held spreadsheets (which is where data security can become a concern).

Poor design can also lead to business problems such as:

  • Inefficiencies
  • Errors in work
  • Frustration and stress to staff
  • Not meeting deadlines
  • Avoidance of certain software or processes
  • Lack of confidence and credibility in the business

If the product you are building will be mandatory for staff to use, that is an even stronger argument for listening to the people who will have no choice but to use it. It is not about what the software does, it is about what the user does. Internal users still have user needs, pain points, enablers and behaviours that should be observed and documented. They are experts at what they do, and should be treated as such – they have insight into the day-to-day workings of the business and the end-to-end journey for the customer.

Don’t neglect internal users – you might just learn something.


Design in a Day

On 7th August I attended an event run by Hippo Digital’s Midlands team, snappily named “Design in a Day”. After the success of their inaugural ‘DIAD’ event in Leeds, they sought to reproduce it in Birmingham. I was really encouraged to see a User Centred Design event specifically for the Midlands – the sector is still hugely London-centric (although the North is a strong contender for digital innovation these days, and Hippo have a strong Leeds presence), and the Midlands seems to have ended up the poor cousin, despite well-known companies, government digital teams and start-ups choosing Birmingham and the wider area to call home. Hippo are striving to change that, specifically Katie Lambeth-Mansell (Midlands Regional Director) and Liz Whitefield (Director), who are putting down a strong Midlands footprint for Hippo, not just in design and delivery for clients, but also in providing learning and networking opportunities to other organisations.

The Day

The event was held at the IBIS hotel Birmingham, which was a great venue. Clearly a lot of planning had gone into making the day run smoothly: all equipment provided, great drinks and catering, enough room for everyone, and a well-kept schedule to ensure we didn’t go off track!

We were given an overview of Hippo’s approach to design sprints by UX Designer Suly Khan, and throughout the day we also heard from Katie and Liz, as well as Aimee Heyworth (Content Design) and Leah Thompson (User Research), who kept an eye on all the teams and provided additional support.

The Challenge!

I found myself on a table with a gentleman from DEFRA, a marketing manager and a UX designer. Our group was a great mix of experiences and views and we worked together really well. The problem statement set for us was “How can we encourage others to take care of their environment?”. We had to brainstorm this question and come up with an area that we were going to research and design for. We agreed upon “Fast Fashion” as our focus area.

We worked through a series of time-boxed design sprint tasks, including sketching “crazy 8” design ideas, discussing user needs, plotting user journeys, agreeing on possible solutions, sketching out our ideas in more detail and then creating a testable paper prototype of our final solution to test with other people in the room.

Show and Tell

We ended the day with a show and tell of our design sprints and our prototypes to the other groups in the room. It was great to see the different ways people had approached the question – from car sharing to recycling – and the ideas for digital solutions. I think by the end of the day we were all ready to launch our Kickstarter campaigns!

Summary

It was a really fun day and very different to other events I have been to – there was no expectation that everyone in the room would be a designer, or even working in digital. It felt like everyone was coming into it together on the same level. The mix of participants on the day felt really good, and there was a relaxed but focused atmosphere around the tasks at hand.

If you want to find out more about Hippo and any future events like this one, check them out on LinkedIn and Twitter.

“Who are you designing for?”, or “Why Dr Malcolm was right”

I have just finished reading “Invisible Women” by Caroline Criado Perez, a veritable tome of a book outlining the multitude of ways in which the data gender-gap continues to enable poor design and oppression and inequality for women worldwide. If you haven’t already read this, you should. If you’re a designer or a researcher, you should read it. If you’re a woman, you should read it. And if you’re a man, you should definitely read it.

One of the examples Criado Perez references that really resonated with me was to do with cooking stoves. In South Asia, 75% of families use 3-stone biomass cooking stoves; in Bangladesh it’s around 90%, and in sub-Saharan Africa the figure is around 80% of the population. The issue with these 3-stone cooking stoves, which are used almost exclusively by women (because cooking and home care are treated as a woman’s responsibility), is that they are used in confined spaces for long hours at a time, exposing the women who use them to the equivalent of smoking 100 cigarettes a day.

(Above: A traditional 3-stone cooking stove)

Since the 1950s, Criado Perez explains, various organisations and development agencies have sought to reduce the risk from these stoves by introducing new “clean” cooking stoves with better airflow and lower levels of toxic emissions. However, the research showed that the new clean cook stoves had been rejected by almost all users. Why was this? Surely it was a simple matter of problem + solution? The product had a problem, and these agencies had provided the solution?

(Image from Nathan Pyle’s Strange Planet Series)

Initially, the issue was thought to be the (female) users. They simply needed “educating” on the benefits of the new stoves and how to use them. This was mistake number one.

Further research in 2013 highlighted something different. Women reported to researchers that the new stoves increased cooking time and required more attention. In countries where these women already had 15-hour-plus workdays, in addition to household labour and unpaid caring responsibilities, this simply wasn’t practical. It meant they had to change the way they cooked and worked. The stoves were not effective or efficient for their needs, so they weren’t used.

The design also didn’t take into account cultural gender roles: women had less purchasing power than their husbands, and if a stove broke and required maintenance it would rarely get fixed, because the stove was in the kitchen and the kitchen was the woman’s domain. Another study found that women rejected the new clean stoves because they didn’t accept large pieces of wood – wood chopping was a difficult manual task for these women to carry out – so they reverted to their old stoves, which had no fuel-size limitations.

Eventually, a team of researchers devised a cheap metal device that could be placed in a traditional stove to improve the airflow in a similar way to the new stoves.

This case study provides a fable of sorts about what happens when you don’t understand your users in context. People do not exist within a bubble; we are a complex product of both ourselves and our environment. These agencies had jumped straight to a solution without first properly understanding the problem, or indeed the nuanced issues around gender in these geographical contexts. They believed they understood their users and their needs, and that they had the solution.

In this case, the user need was not “I want to reduce the amount of toxic emissions from my stove”; it was “I want a stove solution that doesn’t cost me more time or effort”. The solution was new and shiny, and thought to hugely improve the users’ experience, but it was neither efficient, effective nor satisfactory – and those three things contribute to the overall user experience.

Contextual inquiry would have allowed the researchers to observe the women going about their normal day-to-day cooking tasks, interacting with the current product and the newly designed product. It would have allowed the researchers to understand a typical day, the women’s pain points, who else is involved in the process and the environment in which this behaviour was occurring. It would have helped with the question “Who are our users and what are they trying to do?” – essentially, how do they currently solve the problem?

Usability testing would have helped them answer the question “Can people use the thing we’ve designed to solve their problem?”. They would have seen the difficulties with the new stoves and understood why the adoption rate was so low, which would have enabled them to address the issue earlier. As David Travis points out in his book, “field visits (or contextual inquiry) tell you if you’re designing the right thing; usability testing tells you if you’ve designed the thing right.”

Dr Malcolm was right: just because you can design something doesn’t mean you should.

Top Three Take-Aways from User Research London 2019

Photo by Clem Onojeghuo on Unsplash

On the 27th and 28th July this year, hundreds of question-asking, people-observing, sticker-loving, caffeine-fueled researchers descended on the stunning etc.venues’ County Hall in Westminster, London for User Research London 2019.

I opted for the two day ticket – a full length workshop on the Thursday and a full packed day of talks on the Friday, hoping to maximise the opportunity to see so many key UR/UX people in one (almost local) place.

Here are the key concepts that I took away from this event as food for thought:

Get Emotional

Bill Albert’s talk on Exploring the emotional user experience emphasised going beyond simple usability to look at the entire user experience. People are not just robots following tasks from start to finish in a clinical bubble; we have emotional reactions to the experience and to the context in which those behaviours occur.

Bill cautioned about the challenges of measuring and analysing emotion, including the fact that emotions are fleeting and occur along a hugely varied spectrum – with primary and secondary emotions that become more nuanced with each subsequent level. Context is also everything – it is important to understand where your users are using your services: where are they, what’s going on for them and around them, what’s the environment like, who might be with them?

He also spoke about the importance of understanding the intensity of an emotion (and not just the emotion itself) – riding Nemesis at Alton Towers, for example, could be considered a high-intensity emotional experience (adrenaline-charged, high physiological arousal), whereas the frustration of using a poorly designed retail website is more likely to sit at the low-intensity end of the scale (annoying, but low physiological arousal).

Often digital services are not designed to create a specific emotional experience (such as many government digital services, where it can just be about doing those necessary jobs), but emotion cannot be ignored entirely. For government services, context will be everything, and many contexts will be emotionally charged and sensitive (e.g. apply for criminal injury compensation, apply for a divorce, make an application for lasting power of attorney). Whilst we might not feel strong emotions about a website, we feel strongly about what is happening to, and for, us. Empathy with your users will be necessary.

Bill Albert can be contacted at @UXMetrics and walbert@bentley.edu

Safety First

Photo by Pop & Zebra on Unsplash

Tristan Ostrowski’s talk on the second day was about a piece of intense and sensitive research he undertook with the UK Home Office, to understand the process behind investigations and prosecutions relating to child sexual exploitation and child abuse. This research meant that the team could be exposed to indecent and sensitive images as they carried out their ethnographic research with the police and the Home Office.

He spoke about how the safety of the researchers and the wider team on this project was prioritised and addressed. He highlighted that psychological screening of team members to ensure resilience, access to trained counsellors and psychologists whilst undertaking the research, and ongoing support from fellow team members were crucial to unpacking their findings and processing what they had observed and heard.

Tristan also spoke about the necessity of taking time with the research and not cramming too much into a short space of time. He also highlighted how they ensured that the wider team and other staff in the offices were protected from the content and findings by using private channels, restricting access to documents and filtering the feedback.

The main takeaway here was that before embarking on research on sensitive topics or with vulnerable service users, steps must be taken to ensure that your researchers, and the whole team, are effectively prepared, psychologically resilient and have the necessary support structures in place. If there is not a safe way to conduct the research, and you can’t adequately guarantee the safety of the team, you need to find a different way to get those insights.

Tristan can be contacted at Tristan.Ostrowski@education.gov.uk

Adapt and Apply

Dalia El-Shimy, Head of UX Research at Shopify, spoke about creativity in user research.

She spoke about the different types of creativity as coined by Margaret Boden – H-creative people and P-creative people. H-creatives are historically creative – someone who has come up with an idea that no one has ever thought of before. P-creatives are psychologically creative – someone who borrows an idea from one industry or sector and applies it to another. It’s not the most unique form of invention, but many advances and ideas have come about in this way.

Dalia went on to talk about how user research is predominantly a P-creative’s game. Research is rarely about inventing a brand new concept or research method – it’s about taking ideas from other people or other sectors and applying them in a new way to meet your needs or achieve your goals. Being P-creative is about being curious, being adventurous and appropriating ideas for outcomes.

She spoke about a variety of creative user research methods she and her team had employed during her time with Shopify. She pointed out that these were not unique, ground-breaking “H-creative” ideas – they had just found a new way to apply old knowledge and approaches. We are not inventing new research methods. There is no shame in adapting and applying.

Dalia can be found here https://ux.shopify.com/@delshimy


UR sessions – What Not To Do

I recently ran my own UR workshops with groups of students at a college. However, prior to my session, a couple of young male entrepreneurs had come to run their own research session. I found out they had founded a company and created a new app that aimed to rival *insert an unnamed globally leading video sharing website here*. By their own reports, it was gaining traction and doing well. They were at the college to do some research with their target audience about their online habits and preferences for such an app. They were happy for me to sit in during their session, and I thought I’d see if any of their insights were relevant to my own research.

However, what I observed was some key ways to *not* conduct effective research sessions. I thought I would share some of these insights here and how they could be improved.

Not good: “How much would you pay for this app?”

This is an example of a closed question – it invites a single, fixed response. However, it’s more problematic than that. You’re asking people how much they would pay for a service they were only introduced to 10-15 minutes ago and that they don’t even know if they need. How should they estimate its value to them?

Additionally, money can be a tricky thing. People have different financial circumstances and disposable income. Asking what people “would” pay in a group scenario invites social desirability bias, which could encourage participants to over-inflate what they would be prepared to pay for a service or product.

Better: “Tell me about the last time that you paid for an app?”

This open question invites more follow up questions about the user’s decision to spend money on a service – Why did they decide to purchase? What factors influenced their decision? How did they decide if it was good value? What was the payment process like? Would they buy something similar in the future?

It allows them to reflect back upon a real action they took in their life, and how they found that experience, rather than imagining a hypothetical future scenario.

Not good: “Don’t you hate it when [competitor’s] website does [insert thing here]?”

This is, of course, a leading question. By framing it in this way, they were priming their participants to agree with the statement that of course [competitor’s] website is terrible because it does that thing that everyone hates!

This also runs the risk of influencing people to agree with the statement if that is what the majority say – people don’t like to be the odd one out.

Better: Observe your users interacting with competitor websites or services

By observing people using live websites, apps or services as they would normally do, or by asking them to complete a specific task, you will get a much more realistic understanding of their likes, preferences and pain points. By letting them guide themselves through, and using minimal prompting and neutral statements, you can help to reduce interviewer bias.

You can then follow this up with a more structured interview about any observed pain points or niggles the user had, and ask them to elaborate.

Not good: “Do you think you’d use this app?”

Sounds pretty harmless doesn’t it? However in this situation, the guy who’d designed and invented the app was the one sat in front of the participants running the session.

Response bias is a type of bias where the subject consciously, or subconsciously, gives a response that they think that the interviewer wants to hear. In this case, that response is “yes, I would use your app!”

Better: Use independent researchers

Of course, it is great to involve everyone in user research. I believe teams are most effective when everyone understands what research is being done and why, how it is carried out, what the findings are and how they contribute to iterative design changes. Agile teams where the research is done in isolation from the rest of the team can be problematic and lack transparency. However, there is a time and a place, and indeed, a person.

For example, rather than having the service designer in the research interview, could they observe remotely? Could stakeholders observe through two-way glass? What about recording equipment?

Do you have the correct mechanisms and ceremonies in place so that user researchers can present their findings to the rest of the team?

These methods can help make users feel more comfortable giving honest feedback, as the person leading the interview is more neutral and removed from the creation process. Telling an app designer you hate their app to their face would be more difficult than conveying the same message to an impartial researcher.

Not good: “Would you prefer [X] or [Y]?”

This was another question I observed being posed to the research group, and it was again hypothetical rather than showing them two separate designs – if we did this, would you want it [this way] or [that way]?

Again this is difficult to quantify because we know that what users say and what users do can be two very different things. What should they be basing their preferences on? How do those two options link back to user goals or the user journey?

Better: A/B testing

A/B testing is essentially an experiment where two or more variants of a web page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. It can be very technical, with many different analysis tools available to help collect and process your data, but it can also be done in a more “quick and dirty” way.

Show one version of the design to half your users, and show a different version to the other half. It doesn’t even need to be a working design – it can be a rough prototype or a wireframe if need be. Of course other variables may still come into play, but it will give a general indication of where to concentrate efforts going forward.
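
For the more statistical end of the spectrum, the comparison behind most A/B tests boils down to something like a two-proportion test on each variant’s conversion counts. A minimal sketch in Python, using only the standard library and entirely invented figures (the function name and numbers are illustrative assumptions, not a recommendation of any particular tool):

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test comparing the conversion rates of variants A and B."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants perform the same.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / standard_error
    # Two-sided p-value from the normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return rate_a, rate_b, z, p_value

# Invented example: 48 of 1,000 users converted on variant A, 71 of 1,000 on variant B.
rate_a, rate_b, z, p = two_proportion_z_test(48, 1000, 71, 1000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

The “quick and dirty” version is simply eyeballing the two rates – just remember that with small samples a difference can look bigger than it really is.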

Summary

I feel the need to point out that at no point did I attempt to jump into these other sessions whilst they were ongoing, but I did offer some gentle pointers afterwards for how they might improve things next time to get more useful data.

It is rare that a research session will go perfectly – we’re all human (even researchers!) – and we might phrase something clumsily, or ask something in the wrong order, but it is always about learning – about our users and about ourselves.
