Types of User Research Bias and How to Avoid Them in Your UX Design


As a business, you need to conduct usability testing so that you can obtain genuine user feedback from your target audience. However, when sourcing and listening to this feedback, you need to be aware that user research bias can occur, stemming from your own preconceptions or those of your participants. If you remain oblivious to these beliefs and attitudes in your user research, you risk using biased insights to grow your product or service. To help raise your awareness of the types of bias that can occur, we list our top six in this article, followed by various ways to avoid user research bias when conducting a user session.

User research bias types and examples

RESEARCHER BIAS AND EXAMPLES

CONFIRMATION BIAS: This occurs when a researcher selectively interprets data from a study to confirm their hypothesis.

EXAMPLE OF CONFIRMATION BIAS: You want to conduct a user study to test out a UX design pattern you’ve just created. You are very confident that this new design will work. When you get the results from the study, you ignore any evidence that does not support your theory. 

CULTURE BIAS: This happens when a researcher interprets results based on their own cultural beliefs or attitudes rather than from a neutral point of view. It can also involve a moderator making involuntary suggestions to a participant which affects the way a participant answers.


EXAMPLE OF CULTURE BIAS: A moderator is conducting a study with a range of participants, two of whom are disabled. The moderator adopts a more polite tone when speaking to these disabled participants. The other participants are influenced by the moderator's behaviour and act differently.

PARTICIPANT BIAS AND EXAMPLES

SOCIAL DESIRABILITY BIAS: This type of bias occurs when participants answer what they think a moderator or business wants to hear. This bias could involve a participant giving similar answers to questions that seem alike. Similarly, they could also use their opinion of your brand to answer questions in a positive rather than a neutral, objective way. 

EXAMPLE OF SOCIAL DESIRABILITY BIAS: You have organised a focus group for a controversial product that you wish to roll out. You are looking for participants to share their opinions openly and objectively. However, some participants in this focus group might be afraid to reveal their real thoughts on a controversial topic as they fear judgement.

THE HAWTHORNE EFFECT: This happens when participants are very aware that they are being observed. So, they focus more on what they are doing and try harder to solve any issue. However, this is not always how a real user would behave.

EXAMPLE OF THE HAWTHORNE EFFECT: During a study, a researcher informs a user that other team members will also watch their actions. The user is then more conscious of what they are doing and tries to avoid mistakes. This type of bias means that any results gained are not objective, as the user would not behave this way in real life.


RECRUITMENT BIAS AND EXAMPLES

AVAILABILITY BIAS: This type of bias happens when a researcher lowers the recruitment filters or avoids using screener questions so that they can source the required number of participants in a short amount of time. Similarly, stakeholders or sponsor companies could choose participants who they favour as they are more proactive. Both these instances could reduce the likelihood of getting objective insights. 

EXAMPLE OF AVAILABILITY BIAS: You are designing an online shopping app. You decide to interview and observe shoppers in central London. However, you fail to recognise that their shopping behaviour differs from those in other parts of the world. So you will run the risk that your insights might not apply to all of your target audience.

WORDING BIAS: Also known as the framing effect, this form of bias takes place when a researcher frames a question in a certain way that suggests an answer.

EXAMPLE OF WORDING BIAS: If a researcher asks a question like ‘How difficult is it to set up a doctor’s appointment?’, they have implied to the participant that setting up a doctor’s appointment is a negative experience. This question means that when a participant responds, they are more likely to react in a negative rather than an objective way.


Instead, a researcher could objectively ask the same question by saying ‘Can you describe your last doctor’s appointment?’ or ‘How do you feel when going for a doctor’s appointment?’

Six tips to avoid user research bias in your UX design

NOTE DOWN YOUR ASSUMPTIONS BEFORE YOU BEGIN THE STUDY

When conducting user research, be aware of any general and specific assumptions you have concerning a project. Use an assumptions map to list these out, together with input from the rest of your team, to avoid user research bias.

CHOOSE PARTICIPANTS WHO ARE REPRESENTATIVE OF YOUR TARGET AUDIENCE

It’s not possible to recruit the same number of participants for each usability study. Rather than focusing on numbers, determine how many participants you need based on the number of target personas you have developed for your brand. In this way, the insights you collect can apply to all of your target audience and help you avoid user research bias.


BANISH USER RESEARCH BIAS BY LEARNING HOW TO STRUCTURE AND WRITE A USER TEST SCRIPT 

When sourcing a user’s opinions, intentions and preferences about your product or service, your brand should ask open-ended questions that don’t present answers only based on your assumptions. In the same way, you can also present a task as a goal or a scenario so that you can learn more about how they interact with your website or app. 

Users should never be pushed into confirming a specific outcome or problem. If you do this, you might not uncover any other issues. When writing your script, your wording should also be clear, neutral and straightforward. In this way, you can probe further into the mind of the user with follow-up questions so that you understand what is important to them.

COLLECT A MIXTURE OF QUANTITATIVE AND QUALITATIVE METRICS 

Using quantitative metrics forces you to look objectively at insights gained from a study. Couple metrics like time-on-task and the System Usability Scale (SUS) with qualitative data such as sentiment analysis. You can also use annotations and time-stamps when watching your videos to cluster similar issues or problems together. When you use an AI-powered solution like PlaybookUX, you can source these types of quantitative and qualitative metrics with any of our moderated or unmoderated research methods. 
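As a quick illustration of what an objective quantitative metric looks like, the System Usability Scale has a standard scoring formula: ten 1–5 Likert items, where odd-numbered items contribute (response − 1) and even-numbered items contribute (5 − response), with the total multiplied by 2.5 to give a 0–100 score. A minimal sketch (the function name is our own, not part of any tool mentioned here):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses on a 1-5 scale. Odd-numbered items contribute
    (response - 1), even-numbered items contribute (5 - response),
    and the total is multiplied by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item_number, response in enumerate(responses, start=1):
        if not 1 <= response <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        if item_number % 2 == 1:   # odd item: agreement is positive
            total += response - 1
        else:                      # even item: agreement is negative
            total += 5 - response
    return total * 2.5

# Example: a fairly positive participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Scoring every participant the same way, rather than eyeballing their comments, is one practical way to keep your own expectations out of the analysis.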


GAIN ADDITIONAL PERSPECTIVES FROM COMPETITOR RESEARCH AND OTHER TEAM MEMBERS 

Conduct a competitor analysis in addition to unmoderated or moderated interviews of your product, or other similar brands. Discover what participants like or dislike about your competitors so that you can use this data as another point of reference for improving your user experience. Consider the perspectives of your team members by sharing any videos or insights from studies with them. 

AVOID USER RESEARCH BIAS BY LISTENING CAREFULLY AND WATCHING YOUR BODY LANGUAGE 

When moderating an interview, reduce user research bias by letting your participants talk more than you do. If you want to clarify something they’ve said, you could ask a follow-up question like ‘Why do you think so?’ Learn to mask your emotional reactions so that participants are encouraged to reveal their true feelings about a product or service.

Your brand will never be able to remove every trace of bias from your UX research. However, being aware that perceptions, attitudes and culture can affect the results of a study will help you look at the data more objectively. All team members should be involved in the UX research process so that together you can get more accurate data for your product or service.
