Hey! Here is my process for conducting modular, lightweight, customizable, scalable qualitative user research.
This is my attempt at outlining who I am as a researcher/builder and how I approach problems/projects.
- Study Design
- General Attitudes
(why is research important?)
My journey to user research was not planned. When I applied to my first role as a user researcher, I had NO idea what I was doing. I applied on a whim, prepared with a surface-level understanding of the field, and somehow landed a job.
Over the next few years, I made every mistake in the research book (I still have a bunch more to go), navigating tough conversations, lousy research, biased recruiting, poor alignment, interesting but not important findings, and so much more.
But I loved every bit of it. Each time something went wrong, I felt something on the inside. A love for that mistake. A chance to learn about something new. An opportunity to know more. To be more.
I love research!
We tend to think of research as an external process. Something that we help facilitate so that businesses & users can both reach their goals.
But to me, it's something much more.
Our research influences people on the inside (including the researcher). It is a beautifully complex internal process.
We begin with a problem or ignorance surrounding a user, product, business, or market. Hypotheses and questions get stated, kick-starting the curiosity engine within all of us. A journey begins. Questions are answered, new questions are posed, and we learn more about the world than before.
Research changes us. It allows us to evolve and grow. We find the limits of our minds, and through structured inquiry, we can move beyond them.
In this sense, we're all user researchers of our own lives. We all have specific experiences, and it's up to us to inquire deeply into them. To find the truth.
Research is a tool for personal & collective growth alongside business, product, and user success.
Let me share my process with you.
(the most critical step)
As UX practitioners, our primary role is to help connect our stakeholders (mainly businesses or other providers) with people (their users).
The way I like to think of it is like this:
"Imagine there are two mighty rivers. One represents the goals of an organization. The other represents the goals of people interacting with that organization. (Customers or users)."
"Our work is to figure out how we can connect business goals to users' needs so that both parties can reach their goals. When these rivers are divergent, user needs don't get met, and businesses/organizations risk working on something that doesn't meet business goals."
"UX practitioners help bring these two rivers together."
UX Research is one way to help bring these rivers together...
User (experience) research combines science, art, and improv to help us understand how people perceive, use, and build connections with products and services.
Here are a few reasons why it's important:
- Builds confidence around decision making
- Finds & fixes usability issues
- Uncovers new opportunities
- Protects against bad ideas
- Helps prioritize work
- Protects against "building in the dark."
- And so much more...
My Alignment Process:
1) Figure out how aware of & experienced with research stakeholders are
This happens during stakeholder interviews, where I learn more about people's experiences, assumptions, expectations, and emotions surrounding research.
Why is this important?
- Builds trust, credibility, and rapport with stakeholders.
- Documents goals for both the product/service & business/org
- It helps me understand technical, regulatory, and financial constraints
- It allows me to see if anything is currently frustrating or making the stakeholder nervous
- It allows me to orient myself toward what the biz/product/org needs RIGHT NOW.
2) Educate stakeholders on what research is & what it can't do, and handle objections
After orienting myself toward what people presently understand about research, I take the time to discuss what research is helpful for. (As enumerated above)
But I also hear some objections like:
- Research is expensive
- Research feels too slow
- "I am the user."
- "Our roadmap already exists."
- "We already have analytics data."
These beliefs can make meaningful research hard to conduct & assimilate. I treat them with the utmost respect, probe into where they come from, and point toward more helpful beliefs instead.
I also take the time to explain what qualitative user research CANNOT do:
- Can't reliably predict the future (sorry)
- Can't guarantee product success
- Can't guarantee user engagement
- Can't establish causation very easily
In the end, this process requires a lot of listening and some probing to help orient my stakeholders in the right direction.
3) Figure out how research can work in that organization
Depending on the organization's needs, I shift gears towards setting up a Research Practice with them.
We need to answer three questions to get this done.
I usually work with startups, so I focus on only 1 product and a small set of stakeholders. You might call this an embedded model.
Then there's ResearchOps. Things like:
- Pre-existing participant lists or sampling frames.
- The available budget for compensation & research
- Figuring out tools
- Pulling together all the data sources I need
Having all these things in place means we can have a smooth research relationship.
Here is a list of expectations that I like to address:
- Research has the most value & impact when done as early as possible
- Research isn't something that can be rushed
- Stakeholders can get involved in every research phase (alignment, recruiting, etc.)
- Good research happens when everyone understands how the findings will be applied
- Research recommendations are subjective based on the analyst's judgement & interpretation of the data
- Involvement > Reports
- Existing data sources can sometimes answer a question better than new primary research.
- Qualitative User Research isn't great for determining causality, but we can sometimes point to light correlation.
4) Align on one question/topic that is important to study now.
It's all about research questions...
A good research question provides value to the stakeholders, is meaningful to our participants, and can be sustainably studied.
Here are some qualities of a good research question:
- Stakeholders find it valuable
- Stakeholders align and agree upon it
- Participants would find it meaningful
- Includes a clear description of who the participant should be
- Sustainable for the researcher to study over time
- Specific but cannot be answered with Yes/No
- Learnings can be acted upon
- Can be addressed given the constraints
- Guides the rest of the research process
- Focuses on product problems or predictions about the product/service
Where do we find questions?
There are many different places where Research Questions can come from:
- Stakeholders themselves
- Conducting Desk Research (Analytics, Releases, Reports)
- Looking at Roadmap/Backlog
- By attending meetings
How to pick a question to study?
When evaluating any given research question, we may choose to look at it from 3 different angles:
1) Scope
WHAT are you studying? What exactly do your stakeholders want to learn about? What do they NOT want to learn about? What is the central phenomenon that you want to learn more about? Scope determines how broad or narrow your research focus should be.
2) Question Type
What type of question is this? Is it quantitative? Qualitative? Both? This tells you HOW to study something.
3) Shelf-Life
What's the shelf-life of the learnings acquired by answering this question? Tactical questions change regularly. Strategic ones change slowly.
After making a judgement based on those three angles, we now have a better idea of WHAT to study, HOW it will be studied, and its VALUE over time.
It's here that we start getting more tactical. We've already aligned on WHY we should do research. Now we're going to work toward HOW the research will be conducted—the implementation details.
Finding study alignment:
So let's assume we've landed on a few different research questions with our stakeholders; we can now align on HOW and with WHO the research will be conducted.
What we need:
- A research question(s)
- A Most Informative Participant (MIP)
- Discussion about how the learnings will be applied
- The method or approach for collecting data
- Agreed-upon format on how findings will be shared.
Since we already have a list of research questions, let's quickly explain all the other items we should align on, starting with the Most Informative Participant.
Most Informative Participant:
It is someone with the most helpful, accurate, and valuable information to help us answer our research questions.
Read more about this in recruitment [link]
How the learnings will be applied:
This will likely surface from a stakeholder interview. Knowing what post-study decisions are currently being considered can help make our research more relevant, impactful, and valuable.
Method for collecting data:
We should quickly align on what method to use for data collection. There are countless methods; choose one that seems appropriate.
Read more about methods in study design.
Timeline & format for sharing findings:
Agree on a rough timeline for when findings will be shared, and on how they'll be communicated & socialized.
Ideally, you won't have to do much here because you've already made efforts to embed your stakeholders in the research process deeply.
Suppose you have alignment on all of the above. It's time to officially create a research plan and flesh out your study design!
To Research is To Risk.
When designing a study, we open ourselves up to numerous risks, ones you probably won't see coming unless you thoughtfully address them during the design phase.
I think about risks based on each element of a study:
- Research Question Risks
- misaligned questions -> leading to unmet stakeholder expectations
- redundant/unhelpful questions
- questions that lead to low learning
- questions that are hard to study
- Study Timeline Risks
- rushing data collection, analysis, and reporting
- burnout with tight deadlines
- missing decision deadlines with short timelines
- MIP + Sample Size Risks
- can't get enough of the right people
- aligning on an unreachable sample size, then failing to find enough people
- Sampling/Recruitment Risks
- study comp seen as not meaningful to participants
- might need to contact MANY people to fill the sample size
- Method Risks
- picking a method that leads to biased/unhelpful learning
- picking a needlessly complex method
- influencing data quality with a poorly chosen method
- Deliverable Risks
- if the report/deliverable is too long, it might extend timelines
- stakeholders won't read your deliverable
- spending time editing & making the deliverable readable
Predict, then design:
There are a lot of risks to keep in mind. That's why I always try to predict how things might go wrong for each study element; this process is called a "premortem": figuring out how something might fail before getting started.
By predicting, we reduce our risk, not to 0%, but to something we can manage during the study. If we predict that finding enough MIPs to fill our sample size will be a challenge, we can re-align with our stakeholders on a more suitable sample size before starting. Here's how I mitigate each category of risk:
- Research Question Risks
- build a research plan
- know how the findings will be applied
- Study Timeline Risks
- explain to stakeholders why shorter timelines may not work
- use templates to cut down on time spent
- MIP + Sample Size Risks
- define the most informative participant with stakeholders
- use logical sample sizes
- Sampling/Recruitment Risks
- use a research panel
- get creative around incentives
- use chain-sampling
- Method Risks
- choose methods that match the research question
- rely on more straightforward methods to reduce study complexity
- Deliverable Risks
- involve stakeholders at every step
- socialize the research as thoughtfully as possible
- use templates
I like this plan template that I found in the Fruitful UX course:
Fill in the boxes & you're off to the races. I like to refer to the plan many times throughout the research process.
The research plan lays out HOW you'll learn something, not what.
Also, make sure everyone is aligned on your plan!
(finding the right people)
Here we define who we want to speak with in more detail alongside HOW we'll reach them.
Here's the path we'll follow, from population down to sample:
The population is the set of all people that would likely hold the information you need. They are ALL the people that fit the requirements of your study.
A segment is a smaller population group categorized based on criteria like behaviour or demographics. We sample from the segment as it helps narrow our recruiting efforts, but we still need to define a MIP (Most Informative Participant).
How to define the MIP?
Two ways we can approach this.
- ABC Model - Affect, Behaviour, Cognition
- Theoretical realm Model - No constraints.
The ABC model essentially asks us to work with our stakeholders to answer three questions:
- What do people feel? Or don't feel?
- What do people do? Or don't do?
- What do people think? Or not think?
As we answer those questions, in the context of finding useful MIPs to study, we will leave with a few characteristics we can use to screen for our MIP.
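Those ABC characteristics translate directly into screener logic. A minimal sketch, assuming a hypothetical note-taking-app study; every criterion and field name below is an invented placeholder, not a real screener:

```python
# Screen candidates against MIP criteria derived from the ABC model.
# All criteria below are hypothetical examples for a note-taking app study.

def matches_mip(answers: dict) -> bool:
    """Return True if a candidate matches the Most Informative Participant.

    Each check maps to one ABC dimension:
      Affect    - what they feel (frustrated with their current tool)
      Behaviour - what they do (takes notes several times a week)
      Cognition - what they think (believes notes should be searchable)
    """
    affect = answers.get("frustrated_with_current_tool", False)
    behaviour = answers.get("notes_per_week", 0) >= 3
    cognition = answers.get("values_searchability", False)
    return affect and behaviour and cognition

candidates = [
    {"frustrated_with_current_tool": True, "notes_per_week": 5, "values_searchability": True},
    {"frustrated_with_current_tool": True, "notes_per_week": 1, "values_searchability": True},
]
qualified = [c for c in candidates if matches_mip(c)]
print(len(qualified))  # 1 - the second candidate doesn't note often enough
```

The value isn't the code itself; it's that writing the criteria this explicitly forces you & your stakeholders to agree on exactly who qualifies.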
The Theoretical Realm strategy asks us to drop all constraints and pretend that we can learn from ONLY the most informative participants – a 100% success rate. If you had infinite resources and 10,000 researchers helping you recruit, who would you (and your stakeholders) want to learn from? Who would your stakeholders be most excited to speak with?
Then bring the constraints back in:
- Can you, with your time & resource constraints, recruit these participants?
- How will you confirm/validate that these potential participants fit the MIP definition?
Sample representativeness ensures that the people you recruit adequately represent the population based on specific characteristics that you find relevant.
A few requirements for sample representativeness:
- Reliable knowledge about what types of characteristics or traits are shared in the population. We want to confirm similar behaviour.
- A channel to contact people in that population
- Introducing random sampling (if possible)
- A practically large sample size
The Population to Sample Pipeline
Population: the set of all people who you're trying to study. It's theoretical because you won't be able to study ALL of these people.
Sampling frame: also known as the accessible population, this is the set of all people you CAN contact at any given moment in time. You may use a vendor or panel to reach out to these people. The larger the sampling frame, the easier time you'll have finding the desired number of people.
Sampling strategy: the HOW around deciding who to speak with when selecting people from a sampling frame. We commonly use random or non-random sampling to this end.
Contacted: the # of people we contact.
Sample: the # of people who participate in the study.
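Once you have a sampling frame, drawing a random subset to contact is straightforward. Here's a minimal sketch using Python's standard library; the frame contents are invented, and the fixed seed is just for reproducibility:

```python
import random

def draw_contacts(sampling_frame: list, k: int, seed: int = 7) -> list:
    """Randomly select k people to contact from the sampling frame.

    Random selection (when possible) reduces the risk that we only
    hear from the easiest-to-reach or most vocal users.
    """
    if k > len(sampling_frame):
        raise ValueError("Can't contact more people than exist in the frame")
    rng = random.Random(seed)  # seeded so the draw is reproducible
    return rng.sample(sampling_frame, k)  # sampling without replacement

frame = [f"user_{i}" for i in range(200)]  # hypothetical accessible population
contacts = draw_contacts(frame, 30)
print(len(contacts))  # 30
```

`random.sample` draws without replacement, so no one gets contacted twice from the same draw.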
How do we determine sample size?
This represents the minimum number of participants your stakeholders find valuable and that you can study sustainably given time/resource constraints.
In qualitative research, we can justify smaller sample sizes by explaining that:
- We only need to reach saturation.
- We only have so much attention.
Expect to recruit more people if you have many interconnected variables in your research question.
Let's quickly touch on a few ways that recruitment can go WRONG:
1) Stakeholder Problems
Stakeholders may think recruitment & research as a whole is WAY TOO SLOW. You can address this by:
- beginning recruitment as soon as alignment is reached
- recruiting within the product/service
- using analytics instead of primary research
- using financial comp
- having an honest conversation about research speed
2) Sampling Problems
The definition of your MIP might be too broad/tight. If you get this definition wrong or it doesn't serve its purpose, you put your study at risk of not delivering.
You may also run into some coverage bias regarding your sampling frame. Who you can contact and NEED to contact will likely be very different, so we must be conscious of how that might skew our results.
Sampling frame decay may also be an issue: there's a finite # of people you can contact who fit your MIP definition. If this group doesn't grow, you may find it challenging to reach & successfully study the people you need to speak with.
Nonresponse can also derail things. If your contacts don't respond, you're in trouble. Aim for at least a 30% response rate.
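The gap between "contacted" and "participating" is worth estimating up front. A back-of-the-envelope sketch; the 30% response rate and 50% screener pass rate below are illustrative assumptions, not benchmarks:

```python
import math

def contacts_needed(target_sample: int,
                    response_rate: float,
                    qualification_rate: float = 1.0) -> int:
    """Estimate how many people to contact to land target_sample participants.

    Works backwards through the funnel: of everyone contacted, only
    response_rate reply, and only qualification_rate of those pass screening.
    """
    expected_yield = response_rate * qualification_rate
    return math.ceil(target_sample / expected_yield)

# To end up with 8 participants at a 30% response rate,
# where half of respondents pass the screener:
print(contacts_needed(8, response_rate=0.30, qualification_rate=0.50))  # 54
```

Running this before recruitment starts makes the "we need to contact 50+ people to interview 8" conversation with stakeholders much easier.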
The Participant Experience
Just like how our users have an experience, our participants also have an experience.
There are 4 phases of the participant journey:
- Study Awareness
- Study Qualification
- Study Engagement
- Study Compensation
At each step of the process, you'll normally deal with people opting out, not responding, or simply being screened out.
We can help improve our response rate by making use of incentives.
Study compensation is an exchange between someone's time and the value of the data provided in that time. -Fruitful
We pay participants because their data will be used to make financial decisions.
Participants want to help!
People are generally quite helpful. They want to be a part of building something new.
It also helps to throw in a little extra for their time. It depends on what you think your participants will find valuable. Maybe it's cash! Maybe it's an NFT! It's up to you!
How much to pay?
You could use the Median Hourly Wage Strategy!
Find out the median hourly wage for the people in the country you're studying. If they are high-earners, then take the median hourly wage of their role.
It depends on the type of study & the amount of time the participant will give you.
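The Median Hourly Wage Strategy is easy to turn into a quick calculation. A sketch; the round-up-to-the-nearest-$5 step is my own convention, not part of the strategy:

```python
import math

def session_compensation(median_hourly_wage: float,
                         session_minutes: int,
                         round_to: int = 5) -> int:
    """Prorate the median hourly wage over the session length.

    Rounds UP to the nearest `round_to` dollars so compensation
    always errs in the participant's favour.
    """
    raw = median_hourly_wage * (session_minutes / 60)
    return math.ceil(raw / round_to) * round_to

# A 45-minute interview with participants whose median wage is $30/hour:
print(session_compensation(30, 45))  # $25 (raw $22.50, rounded up)
```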
(studying the right people)
I use interviews for most of the research that I conduct.
Why I like using interviews:
- It helps get at people's motivations, behaviour, and experiences
- It helps us understand why people hold certain opinions, thoughts or emotions
- Stakeholders can get involved
- It can be intentionally designed to cover specific areas but can also be adjusted.
There are many types of interviews we can use. Here are a few examples:
- Unstructured interviews
- Semi-structured interviews
- Structured interviews
I use semi-structured interviews in most of my studies: they provide a high-level structure that keeps you on topic while leaving enough flexibility to explore new, unexpected topics.
Phases of an interview
1) Introduction
This section exists to help you set expectations, get informed consent to record the session, and confirm any relevant participant info.
2) Rapport Building
Let your participant feel comfortable with you. They are likely a bit nervous & anxious about what is going to happen. Defuse the situation by starting with easy questions & shooting the sh*t.
3) Planned Questions
Start with the written questions about the things your team is trying to learn about (your research questions)
3.1) Probing Questions
Dig into answers. Follow unexpected paths. Clarify & say-back responses.
4) Transition to the next topic
Guide & nudge our participants to the next planned topic!
5) Participant Q&A
Give the participant a chance to revisit past topics or ask us any questions.
6) Thank You
Thank the participant for their time! Tell them what's going to happen next.
7) Sending people off with compensation
Give them $$$
The importance of finding valuable interview topics
Having a topic that is both meaningful for your participant & valuable to your stakeholder is where we want to aim.
Here are a few examples of interview topics:
- Behaviours that participants can talk about
- Domain-specific knowledge
- Experience/Emotions regarding solving a problem or using a product
Qualities of an effective interviewer
- clarifying & probing
Reflect after each call.
Every interview is a chance to improve. I try my best to review the recording and journal about my experience after each. I look for pitfalls and things I could improve on and challenge myself to adjust to help account for those pitfalls.
(making it make sense)
I use thematic analysis for my qualitative projects. It's easy AND powerful. The type of analysis will often depend on the type of questions you ask and which methods you will use.
Most of the qualitative data I collect comes from one-on-one interviews, so I tend to have many transcripts & other artifacts to analyze when it comes time to do so.
Coding -> Categories -> Themes
I start the analysis process by first doing three things:
- Creating an analysis journal
- Creating a codebook
- Cleaning the data & preparing it for coding
An analysis journal is a place where I engage in reflection; it contains all of my ruminations and thinking surrounding the analysis. My intention with the journal is to document my thought process throughout: anyone who reads through it should come to the same conclusions I do.
The codebook contains short-hand codes that I use to start organizing and categorizing the data we collect. I also store all the codes I use in this document for tracking purposes.
I also prepare the data by uploading the session recordings to a tool like Dovetail, which automatically transcribes each interview & lets me begin coding.
My workflow is simple. Code, build categories, then create themes.
I group codes into categories to make this process easier. When creating themes, looking at 100 different codes is a nightmare; things get easier when we organize those codes into a handful of categories (say, six).
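The bookkeeping half of that workflow (the interpretive half can't be automated) can be sketched as: assign each code to a category, then start theming from the categories with the most supporting codes. The codes & categories below are invented examples, not from any real study:

```python
from collections import defaultdict

# Hypothetical codebook: code -> category (normally maintained by hand
# in the codebook as coding progresses).
code_to_category = {
    "cant_find_old_notes": "findability",
    "search_gives_up":     "findability",
    "tags_feel_pointless": "organization",
    "folders_too_deep":    "organization",
    "exports_to_pdf":      "sharing",
}

def group_codes(applied_codes: list) -> dict:
    """Group applied codes by category, preserving repeats as frequency."""
    categories = defaultdict(list)
    for code in applied_codes:
        categories[code_to_category[code]].append(code)
    return dict(categories)

# Codes applied across transcripts (repetition = how often each came up):
applied = ["cant_find_old_notes", "search_gives_up", "cant_find_old_notes",
           "folders_too_deep", "exports_to_pdf"]
grouped = group_codes(applied)
print(sorted(grouped, key=lambda c: -len(grouped[c])))
# "findability" has the most supporting codes -> a place to start drafting a theme
```

A frequent category isn't automatically a theme; it's just where I look first when drafting candidates.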
As researchers, we act as interfaces to the data we collect. The themes that we ultimately find & share are not found explicitly in the data. It's up to us, the researchers, to dig in and find the implicit patterns & meanings.
Creating themes is a science & an ART. It takes a trained eye to know whether or not the theme you come up with is representative of the data you collect and if it's helpful for your stakeholders to know.
Start with categories, look at the relevant codes, and then develop a candidate theme to explain what you collected.
We are ultimately looking to validate the themes that we draft. We can do this using 1) a Theme Testing Chart or 2) a Member Check.
Theme Testing Chart
We use a spreadsheet to display all of our themes: one column per theme, one row per participant.
At the intersection of each column & row, find any negative evidence (codes) that goes against the theme in that column. Leave the cell blank if you cannot find any negative evidence.
Consider your theme validated if you have a whole column of blank cells. In practice, it won't always be empty – you will have at least 1 or 2.
Member Check
We double-check our analysis with participants.
Build a survey, and add each central qualitative theme and findings to it. Below each, ask two questions:
- How well does this finding accurately reflect your experiences?
- What, if anything, has to be changed to make this finding more accurate to you?
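If the first member-check question is asked on a rating scale (say 1 to 5; the scale and threshold here are my assumptions, not a fixed rule), the responses are easy to aggregate into a quick per-finding accuracy check:

```python
def member_check_summary(ratings: dict, threshold: float = 4.0) -> dict:
    """Average 'how well does this reflect your experience?' ratings per finding.

    ratings maps finding -> list of 1-5 scores from participants.
    Findings below the threshold deserve a second look, informed by the
    open-ended 'what should change?' answers.
    """
    summary = {}
    for finding, scores in ratings.items():
        mean = sum(scores) / len(scores)
        summary[finding] = {"mean": round(mean, 2),
                            "needs_revision": mean < threshold}
    return summary

responses = {
    "Finding A": [5, 4, 5, 4],
    "Finding B": [3, 2, 4, 3],
}
print(member_check_summary(responses))
# Finding A holds up; Finding B (mean 3.0) needs revision
```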
(going from insight to action)
Our first job when delivering learnings is to educate. We want to ensure that our stakeholders fully understand what we learned before we ask them to apply those learnings.
This means knowing your audience and tailoring your final report so it's easy for them to learn from & refer back to as they act on the research.
Depending on the context, you may send over the learnings (insights) via email or video. You may choose to host a presentation showcasing your research. It's up to you & what makes the most sense to your stakeholders.
You may choose to include recommendations in your report. In my view, it's more valuable to INCLUDE stakeholders in your research & have THEM bring up recommendations as they were there along the process.
(what makes a researcher a researcher)
Personal Reflexivity & Intersectionality
Research Process Reflexivity
Connecting Businesses & Customers/Users
Foresight & Risk Management
Inductive, Deductive, and Abductive Reasoning
Credibility, Dependability, Confirmability, and Transferability