Welcome – UR Here

Navigating the world of User Research

“If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.”

— Albert Einstein, Theoretical Physicist


An Internal Affair

“We don’t need to do user research, because this will be mandatory for staff to use. They won’t have a choice – it’s a business decision.”

“We don’t need to speak to our users – they’re staff, just like us. We know what they need.”

“We don’t need to do user research – it’s only for people working in the organisation. The public won’t be affected.”


The sentiment behind these statements might sound familiar – ‘internal users don’t matter as much as external users’. External or public users can number hundreds, thousands, perhaps even millions. There is sometimes the assumption that external users are where all the glory is – the success stories, the media coverage, the customer feedback. Internal users are simply by-products, casualties of your digital transformation and product design. But this would be a mistake for a number of reasons.

Internal users are predominantly people employed by the business – managers, administrators, customer service operators, call centre workers, operations staff, finance departments, HR and a myriad of other functional roles. They interact with various digital touchpoints along the way and are part of the wider end-to-end user journey, both online and offline. They are still intrinsic to the end user’s or customer’s experience, even if they have no face-to-face contact. If you provide them with a digital solution or a piece of software that makes their job more difficult, less efficient or less effective, the chain reaction will eventually result in a poorer experience for the end consumer, even if they aren’t always aware of it. This could take various forms:

  • Unable to have their query dealt with
  • Longer processing/waiting times
  • Errors in their personal details
  • Lack of accurate information or support
  • Having to repeat themselves to multiple people
  • Loss of trust in a business or organisation
  • Not receiving or paying the correct amount of money
  • Not receiving phone calls, letters or texts

This could be a long, complicated web form that doesn’t auto-save and times out if it isn’t completed and submitted within a 10-minute window, or a process with far too many unnecessary steps. Or perhaps it is a search feature that doesn’t allow the use of certain terms, or a system so illogical that users invent their own renegade workarounds, such as locally held spreadsheets (which is where data security can become a concern).

Poor design can also lead to business problems such as:

  • Inefficiencies
  • Errors in work
  • Frustration and stress to staff
  • Not meeting deadlines
  • Avoidance of certain software or processes
  • Lack of confidence and credibility in the business

If the product you are building will be mandatory for staff to use, then this is an even stronger argument for listening to the people who will have no choice but to use it. It is not about what the software does; it is about what the user does. They still have user needs, pain points, enablers and behaviours that should be observed and documented. They are experts at what they do, and should be treated as such – they have insight into the day-to-day working of the business and the end-to-end journey for the customer.

Don’t neglect internal users – you might just learn something.


Design in a Day

On 7th August I attended an event run by Hippo Digital’s Midlands team, snappily named “Design in a Day”. After the success of their inaugural ‘DIAD’ event in Leeds, they sought to reproduce it in Birmingham. I was really encouraged to see a User Centred Design event specifically for the Midlands – the sector is still hugely London-centric (although the North is a strong contender for digital innovation these days, and Hippo have a strong Leeds presence), and the Midlands seems to have ended up the poor cousin, despite well-known companies, government digital teams and start-ups choosing Birmingham and the wider area to call home. Hippo are striving to change that – specifically Katie Lambeth-Mansell (Midlands Regional Director) and Liz Whitefield (Director), who are putting down a strong Midlands footprint for Hippo, not just in design and delivery for clients, but also in providing learning and networking opportunities to other organisations.

The Day

The event was held at the Ibis hotel Birmingham, which was a great venue. Clearly a lot of planning had gone into the day to make it run smoothly: all equipment provided, great drinks and catering, enough room for everyone, and a well-kept schedule to ensure we didn’t go off track!

We were given an overview of Hippo’s approach to design sprints by UX Designer Suly Khan, and throughout the day we also heard from Katie and Liz, as well as Aimee Heyworth (Content Design) and Leah Thompson (User Research), who kept an eye on all the teams and provided additional support.

The Challenge!

I found myself on a table with a gentleman from DEFRA, a marketing manager and a UX designer. Our group was a great mix of experiences and views, and we worked together really well. The problem statement set for us was “How can we encourage others to take care of their environment?”. We had to brainstorm this question and come up with an area that we were going to research and design for. We agreed on “Fast Fashion” as our focus area.

We worked through a series of time-boxed design sprint tasks, including sketching “crazy 8” design ideas, discussing user needs, plotting user journeys, agreeing on possible solutions, sketching out our ideas in more detail and then creating a testable paper prototype of our final solution to test with other people in the room.

Show and Tell

We ended the day with a show and tell of our design sprints and our prototypes to the other groups in the room. It was great to see the different ways people had approached the question – from car sharing to recycling – and the ideas for digital solutions. I think by the end of the day we were all ready to launch our Kickstarter campaigns!


It was a really fun day and very different to other events I have been to – there was no expectation that everyone in the room would be a designer, or even working in digital. It felt like everyone was coming into it together on the same level. The mix of participants on the day felt really good, and there was a relaxed but focused feeling on the tasks at hand.

If you want to find out more about Hippo and any future events like this one, check them out on LinkedIn and Twitter.

“Who are you designing for?”, or “Why Dr Malcolm was right”

I have just finished reading “Invisible Women” by Caroline Criado Perez, a veritable tome outlining the multitude of ways in which the gender data gap continues to enable poor design, oppression and inequality for women worldwide. If you haven’t already read this, you should. If you’re a designer or a researcher, you should read it. If you’re a woman, you should read it. And if you’re a man, you should definitely read it.

One of the examples Criado Perez references that really resonated with me concerned cooking stoves. In South Asia, 75% of families use three-stone biomass cooking stoves; in Bangladesh it’s around 90%, and in sub-Saharan Africa the figure is around 80% of the population. The issue with these three-stone stoves, which are used almost exclusively by women (because cooking and home care is seen as a woman’s responsibility), is that they are used in confined spaces for long hours at a time, exposing the women using them to smoke equivalent to 100 cigarettes a day.

(Above: a traditional three-stone cooking stove)

Since the 1950s, Criado Perez tells us, various organisations and development agencies have sought to reduce the risk from these stoves by introducing new “clean” cooking stoves in these areas, with better airflow and lower levels of toxic emissions. However, all the research showed was that the new clean stoves had been rejected by almost all users. Why was this? Surely it was a simple matter of problem + solution? The product had a problem, and these agencies had provided the solution?

(Image from Nathan Pyle’s Strange Planet Series)

Initially, the issue was thought to be the (female) users. They simply needed “educating” on the benefits of the new stoves and how to use them. This was mistake number one.

Further research in 2013 highlighted something different. Women reported to researchers that the new stoves increased cooking time and required more attention. In countries where these women already had 15-hour-plus workdays, in addition to household labour and unpaid caring responsibilities, this simply wasn’t practical. It meant they had to change the way they cooked and worked. The stoves were not effective or efficient for their needs, so they weren’t used.

It also didn’t take into account cultural gender roles, which meant that women had less purchasing power than their husbands; and if a stove broke and required maintenance, it would rarely get fixed, because the stove was in the kitchen, and the kitchen was the woman’s domain. Another study found that women also rejected the new clean stoves because they didn’t accept large pieces of wood – wood chopping was a difficult manual task for these women to carry out – so they reverted to their old stoves, which had no fuel size limitations.

Eventually, a team of researchers devised a cheap metal device that could be placed in a traditional stove to improve its airflow in a similar way to the new stoves.

This case study provides a fable of sorts about what happens when you don’t understand your users in context. People do not exist within a bubble; we are a complex product of both ourselves and our environment. These agencies had jumped straight to a solution without first properly understanding the problem, or indeed the nuanced issues around gender in these geographical contexts. They believed they understood their users and their needs, and that they had the solution.

In this case, the user need was not “I want to reduce the amount of toxic emissions from my stove” but “I want a stove that doesn’t cost me more time or effort”. The solution was new and shiny, and was thought to hugely improve the user’s experience, but it was neither efficient, effective nor satisfactory. These three things contribute to the overall user experience.

Contextual inquiry would have allowed the researchers to observe the women going about their normal day-to-day cooking tasks, interacting with both the current product and the newly designed product. It would have allowed the researchers to understand a typical day, the women’s pain points, who else is involved in the process and the environment in which this behaviour was occurring. It would have helped with the question “Who are our users and what are they trying to do?” – essentially, how do they currently solve the problem?

Usability testing would have helped them answer the question “Can people use the thing we’ve designed to solve their problem?”. They would have seen the difficulties with the new stoves and understood why the adoption rate was so low, which would have enabled them to address the issue earlier. As David Travis points out in his book, “field visits (or contextual inquiry) tell you if you’re designing the right thing; usability testing tells you if you’ve designed the thing right.”

Dr Malcolm was right: just because you can design something doesn’t mean you should.

Top Three Take-Aways from User Research London 2019

Photo by Clem Onojeghuo on Unsplash

On the 27th and 28th July this year, hundreds of question-asking, people-observing, sticker-loving, caffeine-fuelled researchers descended on the stunning etc.venues County Hall in Westminster, London for User Research London 2019.

I opted for the two-day ticket – a full-length workshop on the Thursday and a packed day of talks on the Friday – hoping to maximise the opportunity to see so many key UR/UX people in one (almost local) place.

Here are the key concepts that I took away from this event as food for thought:

Get Emotional

Bill Albert’s talk on exploring the emotional user experience emphasised going beyond simple usability to look at the entire user experience. People are not just robots following through with tasks from start to finish in a clinical bubble; we have emotional reactions to the experience and to the context in which those behaviours occur.

Bill cautioned about the challenges of measuring and analysing emotion, including the fact that emotions are fleeting and occur along a hugely varied spectrum, with primary and secondary emotions that become more nuanced at each subsequent level. Context is also everything: it is important to understand the circumstances in which your users are using your services – where they are, what’s going on for them and around them, what the environment is like, and who might be with them.

He also spoke about the importance of understanding the intensity of an emotion (and not just the emotion itself) – riding Nemesis at Alton Towers, for example, could be considered a high-intensity emotional experience (adrenaline-charged, high physiological arousal), whereas the frustration of using a poorly designed retail website is more likely to sit at the low-intensity end of the scale (annoying, but low physiological arousal).

Often digital services are not designed to create a specific emotional experience (as with many government digital services, where it can just be about getting a necessary job done), but emotion cannot be ignored entirely. For government services, context will be everything, and many contexts will be emotionally charged and sensitive (e.g. applying for criminal injury compensation, applying for a divorce, or making an application for lasting power of attorney). Whilst we might not feel strong emotions about a website, we feel strongly about what is happening to, and for, us. Empathy with your users will be necessary.

Bill Albert can be contacted at @UXMetrics and walbert@bentley.edu

Safety First

Photo by Pop & Zebra on Unsplash

Tristan Ostrowski’s talk on the second day was about a piece of intense and sensitive research he undertook with the UK Home Office, to understand the process behind investigations and prosecutions relating to child sexual exploitation and child abuse. This research meant that the team could be exposed to indecent and sensitive images as they carried out their ethnographic research with the police and the Home Office.

He spoke about how the safety of the researchers and the wider team on this project was prioritised and addressed. He highlighted the need for psychological screening of team members to ensure psychological resilience, access to trained counsellors/psychologists whilst undertaking the research, and ongoing support from fellow team members, which was crucial to unpacking their findings and processing what they had observed and heard.

Tristan also spoke about the necessity of taking time with the research and ensuring the team did not take on too much in a short space of time. He also highlighted how they protected the wider team and other staff in the offices from the content and findings by using private channels, restricting access to documents and filtering the feedback.

The main takeaway here was that before embarking on research on sensitive topics or with vulnerable service users, steps must be taken to ensure that your researchers, and the whole team, are effectively prepared, psychologically resilient and have the necessary support structures in place. If there is not a safe way to conduct the research, and you can’t adequately guarantee the safety of the team, you need to find a different way to get those insights.

Tristan can be contacted at Tristan.Ostrowski@education.gov.uk.

Adapt and Apply

Dalia El-Shimy, Head of UX Research at Shopify, spoke about creativity in user research.

She spoke about the different types of creativity as coined by Margaret Boden: H-creative people and P-creative people. H-creatives are historically creative – someone who has come up with an idea that no one has ever thought of before. P-creatives are psychologically creative – someone who borrows an idea from one industry or sector and applies it to another. It’s not the most unique form of invention, but many advances and ideas have come about this way.

Dalia went on to talk about how user research is predominantly a P-creative’s game. Research is rarely about inventing a brand-new concept or research method – it’s about taking ideas from other people or other sectors and applying them in a new way to meet your needs or achieve your goals. Being P-creative is about being curious and adventurous, and appropriating ideas to achieve outcomes.

She spoke about a variety of creative user research methods she and her team had employed during her time with Shopify. She pointed out that these were not unique, ground-breaking “H-creative” ideas – they had just found new ways to apply old knowledge and approaches. We are not inventing new research methods, and there is no shame in adapting and applying.

Dalia can be found at https://ux.shopify.com/@delshimy.

UR sessions – What Not To Do

I recently ran my own UR workshops with groups of students at a college. Prior to my session, a couple of young male entrepreneurs had come to run their own research session. I found out they had founded a company and created a new app that aimed to rival *insert unnamed globally leading video-sharing website here*. By their own reports, it was gaining traction and doing well. They were at the college to do some research with their target audience about their online habits and preferences for such an app. They were happy for me to sit in during their session, and I thought I’d see if any of their insights were relevant to my own research.

However, what I observed was some key ways to *not* conduct effective research sessions. I thought I would share some of these insights here and how they could be improved.

Not good: “How much would you pay for this app?”

This is an example of a closed question – it invites a narrow, fixed response. But it’s more problematic than that: you’re asking people how much they would pay for a service they were only introduced to 10–15 minutes ago and don’t even know if they need. How should they estimate its value to them?

Additionally, money can be a tricky thing. People have different financial circumstances and disposable incomes. Asking what people “would” pay in a group scenario invites social desirability bias, which could encourage participants to over-inflate what they would be prepared to pay for a service or product.

Better: “Tell me about the last time you paid for an app.”

This open question invites more follow up questions about the user’s decision to spend money on a service – Why did they decide to purchase? What factors influenced their decision? How did they decide if it was good value? What was the payment process like? Would they buy something similar in the future?

It allows them to reflect back upon a real action they took in their life, and how they found that experience, rather than imagining a hypothetical future scenario.

Not good: “Don’t you hate it when [competitor’s] website does [insert thing here]?”

This is, of course, a leading question. By framing it in this way, they were priming their participants to agree that of course [competitor’s] website is terrible, because it does that thing that everyone hates!

This also runs the risk of influencing people to agree with the statement if that is what the majority say – people don’t like to be the odd one out.

Better: Observe your users interacting with competitor websites or services

By observing people using live websites, apps or services as they would normally do, or by asking them to complete a specific task, you will get to understand their likes, preferences and pain points much more realistically. By letting them guide themselves through, and using minimal prompting and neutral statements, it can help to reduce interviewer bias.

You can then follow this up with a more structured interview about any observed pain points or niggles the user had, and ask them to elaborate.

Not good: “Do you think you’d use this app?”

Sounds pretty harmless, doesn’t it? However, in this situation, the guy who’d designed and invented the app was the one sat in front of the participants running the session.

Response bias is a type of bias where the subject consciously, or subconsciously, gives a response that they think that the interviewer wants to hear. In this case, that response is “yes, I would use your app!”

Better: Use independent researchers

Of course, it is great to involve everyone in user research. I believe teams are most effective when everyone understands what research is being done and why, how it is carried out, what the findings are and how they contribute to iterative design changes. Agile teams where the research is done in isolation from the rest of the team can be problematic and lack transparency. However, there is a time and a place, and indeed, a person.

For example, rather than having the service designer in the research interview, could they observe remotely? Could stakeholders observe through two-way glass? What about recording equipment?

Do you have correct mechanisms and ceremonies in place so that user researchers can present their findings to the rest of the team?

These methods can help make users feel more comfortable giving honest feedback, as the person leading the interview is more neutral and removed from the creation process. Telling an app designer you hate their app to their face would be more difficult than conveying the same message to an impartial researcher.

Not good: “Would you prefer [X] or [Y]?”

This was another question I observed being posed to the group, and it was in fact another hypothetical – rather than showing participants two separate designs, it was “if we did this, would you want it [this way] or [that way]?”

Again, this is difficult to answer, because we know that what users say and what users do can be two very different things. What should they base their preferences on? How do those two options link back to user goals or the user journey?

Better: A/B testing

A/B testing is essentially an experiment where two or more variants of a web page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. It can be very technical, with many analysis tools available to help collect and process your data, but it can also be done in a more “quick and dirty” way.

Show one version of the design to half your users, and a different version to the other half. It doesn’t even need to be a working design – it can be a rough prototype or a wireframe if need be. Of course, other variables may still come into play, but it would give a general indication of where to concentrate efforts going forward.
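Even the “quick and dirty” version can be backed by a simple significance check once you have counts from each variant. As a minimal sketch (the sample numbers here are made up for illustration, and a two-proportion z-test stands in for whatever analysis tool you prefer), comparing conversion rates between two variants might look like this in Python:

```python
# Minimal sketch of analysing an A/B test with a two-proportion z-test.
# The conversion counts below are invented purely for illustration.
from math import sqrt, erf


def two_proportion_z_test(conversions_a, total_a, conversions_b, total_b):
    """Return (z statistic, two-tailed p-value) for the difference in
    conversion rate between variant A and variant B."""
    p_a = conversions_a / total_a
    p_b = conversions_b / total_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    p_value = 2 * (1 - phi)
    return z, p_value


# Variant A: 30 of 200 users converted (15%); variant B: 50 of 200 (25%).
z, p = two_proportion_z_test(30, 200, 50, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the two variants is unlikely to be random noise; with tiny samples, as in a paper-prototype session, treat the numbers as a rough signal rather than proof.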


I feel the need to point out that at no point did I attempt to jump into these other sessions whilst they were ongoing, but I did offer some gentle pointers afterwards for how they might improve things next time to get more useful data.

It is rare that a research session will go perfectly – we’re all human (even researchers!) – and we might phrase something clumsily, or ask something in the wrong order, but it is always about learning – about our users and about ourselves.
