Note to the reader:
This part of the field guide comes from our 2019 version of the UX Research Field Guide. Updated content for this chapter is coming soon!
Not to toot our own horn, but here at User Interviews, we know a thing or two about user research recruiting. It’s kind of our thing.
This step-by-step guide to user research recruiting will teach you how to find participants for UX research, full stop. It is meant to be agnostic—meaning you’ll be able to apply this knowledge regardless of experience, budget, type of research, user testing tools you plan to use, and the kinds of participants you hope to recruit.
That being said, User Interviews is a research recruiting tool. We exist to solve the pain point of user research recruiting—especially qualitative research recruiting, in which researchers are looking for participants who meet highly specific criteria relevant to their research question.
As such, we’ll be name-dropping ourselves a few times in this chapter—but only when we think it makes sense for you to know about how our product can help during the recruitment process.
User research recruiting is a pain. For many researchers, it’s the pain that takes up time and energy they’d rather be spending talking to users.
Whether you’re an experienced researcher or new to the game, recruiting research participants for a study remains a challenge. Why?
These challenges are very real and very frustrating—but none of them are insurmountable. Over the course of this chapter, we’ll explain how to overcome these challenges and make the research recruiting process (a lot) less painful.
First, let’s talk about your participants. Who are they? Or, rather, who should they be?
As you’ll see in the research recruiting framework below, the first step to a successful recruit is clarifying the goals of your research and which methods you intend to use. Ask yourself:
The next step is to define your ideal research participant profile. What criteria do participants need to have? Who, exactly, is going to have the answers to your questions?
We’ll dig into ways to define participant criteria in more detail later in this chapter. But first, let’s go over a couple of the broader considerations that will influence your recruitment strategy.
Namely, let’s talk about the differences between recruiting for qualitative and quantitative research, as well as the differences between recruiting external participants and recruiting among your own customers.
Should you seek out your ideal user or cast a wide net? Do you need participants who are articulate, expressive, and experienced with your product? Do you need a lot of people to achieve accurate results?
Quantitative research recruiting is a numbers game. For data analysis to be meaningful and statistically significant, you need a lot of data. Which means you need to do a lot of research with a lot of people.
When deciding who to recruit for quantitative research, you first have to define the population (the entire group you want to study). A population can be extremely broad (“doctors,” “women between the ages of 18 and 35”) or slightly more specific (“doctors in California,” “unmarried Australian women between the ages of 18 and 35”).
From there, you will choose a sampling method that allows you to create a sample—a randomly selected subset of the population who will participate in your study.
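As a minimal sketch, the most basic sampling method, simple random sampling, can be illustrated in a few lines of Python (the population of customer IDs here is hypothetical):

```python
import random

# Hypothetical population: 10,000 customer IDs.
population = [f"customer_{i}" for i in range(10_000)]

# Simple random sampling: every member of the population has an equal
# chance of being selected. Seeding makes this illustration reproducible.
random.seed(42)
sample = random.sample(population, k=200)

print(len(sample))       # 200 participants drawn at random
print(len(set(sample)))  # 200 — sampling is without replacement, so all are unique
```

Other sampling methods (stratified, cluster, systematic) add structure on top of this same idea: a rule for picking a subset that lets you generalize back to the population.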
In qualitative research, which involves far fewer participants, you need to be a bit of a Goldilocks. You’re looking for the *perfect* participants—people who meet specific demographic, geographic, psychographic, and behavioral criteria relevant to your study.
Recruiting participants for qualitative studies involves non-random sampling, screening, and a lot of communication.
Your research goals will determine whether it makes sense to use your existing customers, recruit a representative audience, or use a mix of both.
More often than not, if you’re updating an existing product with new power features, your existing users will be your best audience. If you’re building a brand new feature or product, non-users can provide a fresh perspective.
If you choose to work with your existing customers, a lot of the work of finding a representative audience is done for you. That’s a big plus!
It may be possible to access recruits by emailing existing customers, posting a request for participants on your website or social media, or having account managers, sales people, or customer service reps make direct requests—just be mindful not to inundate your valued customers with too many requests for input, and don’t forget to make it clear what’s in it for them.
In some cases you may not need to offer monetary incentives to your own customers. Instead, consider emphasizing how much their feedback will improve the product that they’re already using or offering early access to the new feature.
However, even if you choose to offer non-monetary incentives, some kind of incentive is essential for an effective, ethical study. Learn why you have to compensate participants (and how to do so on a shoestring budget).
Non-users are an excellent resource for understanding the people who might benefit from your product, especially if it isn’t on the market yet. They also tend to be useful for identifying usability issues that people familiar with the product may have already overcome.
Talking to your competitors’ customers may help you understand how to adapt your product to cater to a gap in the market or gain a competitive edge based on your respective strengths and weaknesses.
Customer research and user research aren’t mutually exclusive. Depending on the goals of your research, it is often useful to talk to both groups for a broader range of insights. Just be sure to differentiate between groups and adapt your methods accordingly.
Defining your purpose and clarifying your objectives is a prerequisite to any user research project. By the time you get to the recruitment phase, you should already have clearly stated goals and reasons for doing this research.
Planning your recruitment strategy is part of UX research design—it should happen before recruiting and research kicks off, and the ‘whos,’ ‘whens,’ ‘hows,’ and ‘whys’ should be outlined in a concise user research plan (like this one).
Capisce?
Generally speaking, you need to know what you’re looking for in order to find it. When it comes to finding the right user research participants, this means taking the time to figure out what your research question is and who, exactly, can best help you answer it.
While this step may seem like a no-brainer, it’s often overlooked. Taking time to do it right can make a huge difference in the quality of your research.
Targeting should typically be defined by using a mixture of the following criteria:
The first way to improve the quality and suitability of participants when recruiting for a study is to clearly define your intended participants.
If you’re doing research for a company or a product, the target audience is usually representative of your eventual—or existing—customers.
To determine who to recruit, ask yourself:
What you want to learn from your study will determine who you should study. Consider what insight would be most useful, then work backward to figure out who can best provide that insight.
Research can be useful at every stage of the product development cycle. In the early stages, think broad—you’ll want a variety of opinions and perspectives during discovery. Further along in the process, a more precise target audience will provide the richer insight you need to optimize your product for current users.
According to Erika Hall, a good research question is “specific, actionable, and practical.”
In other words, an effective research question is one that can be answered with a reasonable amount of certainty, using the tools at hand.
For example, I could not reasonably find an answer to the question: “What does my cat think about all day?” I can’t ask my cat what it thinks about, so any answer to this question would just be a guess. I could, however, answer the question: “What does my cat do while I’m working?” I could monitor its behavior over a period of time and eventually wind up with a definitive answer.
Here are some examples of specific, actionable, and practical research questions:
Questions like these are specific enough that you will know when you have found an answer, practical in the sense that you could reasonably find answers in the scope of a research project, and actionable in that you can act on the answer that you find.
A specific, actionable, and practical research question will also help you identify the kinds of participants you need to answer that question. For example, to answer the question “What tools do 20-somethings use to learn how to manage their finances?” you would need to talk to people in their twenties who are interested in actively managing their money.
Think through the specific traits a potential participant would need to have.
Sometimes, when you’re getting started with research, it’s easy to get hung up on requirements that don’t really matter to your study. We always encourage researchers to narrow their list to the minimum requirements needed.
Let’s use the pricing page question as an example. To answer “Does our pricing page accurately address our customers’ questions about our pricing?,” we may start off with the following list of criteria:
Investigating each of our requirements further can help us to narrow them down a little and focus on why each requirement is there.
We would want to talk to current customers because they can help us best understand the questions our target audience will ask about our pricing. We also want to talk to people who signed up over 6 months ago because (in this hypothetical example) that was the last time we changed our pricing page. People who signed up within the past 6 months may have already seen this version of the pricing page and may already have had their questions answered.
As far as eliminating requirements goes, there’s nothing about age that would prevent someone from being helpful with feedback on our pricing page, so we can cut that requirement. Ditto education level, unless your pricing page uses some seriously highfalutin language (which, fyi, it shouldn’t).
So our new requirements are:
This process can help you narrow down your ideal participants and cut out requirements that may slow down your recruit for no reason. It’s also a good way to gut-check your research question. If you find yourself with requirements that aren’t reflected in your question, this process can help you think about why those requirements are there and if you need to alter your research question.
Some more examples of target groups are:
It’s equally important to define who you’re not looking for.
For example, if you want to know which day of the week people in San Francisco drink the most wine, your recruitment efforts would be wasted on participants who are under the age of 21 (in theory).
It’s important to define these targets and non-targets at the outset, because it helps you create a solid screening process.
The right number of participants will depend on the type of research you’re doing and the specific methods you plan to use.
The UX experts at Nielsen Norman Group famously advised that for usability studies, you only need 5 good participants—and people have been parroting that advice ever since.
And for most usability studies, 5 is an ideal number—beyond that you’ll get diminishing returns. (Unless of course you recruited the wrong people in the first place. So... don’t do that.)
But what about user interviews? Diary studies? Quantitative studies?
As a rule of thumb, the right number of participants for quantitative research = however many people you need to achieve the level of statistical significance your study requires.
For academic researchers, that could mean finding 1,000 to 2,000 participants to achieve maximum reliability. But for most quantitative usability studies, 20 users is often plenty—although the ideal number still depends on the type of study you’re conducting.
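To see why academic-grade reliability demands samples in the thousands, here is a rough illustration using the standard formula for estimating a population proportion, n = z² · p(1 − p) / e². This is a back-of-the-envelope sketch, not a substitute for a proper power analysis:

```python
import math

def sample_size(confidence_z: float = 1.96,   # z-score for 95% confidence
                margin_of_error: float = 0.05,
                proportion: float = 0.5) -> int:
    """Minimum sample size to estimate a population proportion.

    Uses n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the most
    conservative assumption (it maximizes the required sample).
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())                       # 385  (95% confidence, ±5% margin)
print(sample_size(margin_of_error=0.03))   # 1068 (95% confidence, ±3% margin)
```

Notice how tightening the margin of error from ±5% to ±3% nearly triples the required sample, which is why academic studies often land in the 1,000 to 2,000 range.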
For qualitative research, it really is a matter of quality over quantity. To give an extreme example:
In general, though, we recommend talking to more than one person before making any major decisions.
Here are some suggested sample sizes for different types of UX research:
Nielsen Norman Group has found that the average no-show rate for a usability study is 11%. That means that for every 10 participants you recruit, 1 of them is likely to be a no-show. You can reduce the risk of no-shows with good communication and the right incentives, but it’s always a good idea to recruit a few extra participants that you can call on, just in case.
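That padding can be sketched as a quick calculation. The 11% figure below is NN/g’s reported average; adjust it for your own audience and incentives:

```python
import math

def recruits_needed(target_sessions: int, no_show_rate: float = 0.11) -> int:
    """How many participants to recruit so that, on average, enough show up.

    Divides the target by the expected show rate and rounds up, so the
    expected number of completed sessions meets or exceeds the target.
    """
    return math.ceil(target_sessions / (1 - no_show_rate))

print(recruits_needed(5))   # 6  — recruit 6 to expect 5 completed sessions
print(recruits_needed(20))  # 23
```

This is an expected-value estimate, so for high-stakes studies you may want an extra backup or two beyond what it suggests.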
If you’re using User Interviews, we’ll do this for you so you never find yourself without backup if someone doesn’t show up for their session. We also created a handy qualitative sample size calculator for just this purpose.
We cover incentives in more depth later in this module. But the gist is that you should compensate participants for their time, and do it in a timely manner.
Setting an appropriate incentive increases the likelihood of a great recruit. It shows that you respect participants’ time and that you value their expertise.
Monetary incentives, including gift cards, are the most popular form of incentive, but incentives can also take the form of charitable donations, account credits, or swag. (Seriously, don’t underestimate the power of good swag: There’s a whole community of people dedicated to buying, selling, and trading Mailchimp monkeys, after all.)
The most important thing about your incentives is that they are valuable to your participants and that they get people excited about participating in your research.
Here are our incentive recommendations in a nutshell:
We’ve also found that moderated research—which requires more coordination and communication between researchers and participants—generally warrants a higher incentive than unmoderated research.
For a detailed breakdown of how much to pay your research study participants for in-person and phone interviews, consult this handy cheat sheet. And check out the UX Research Incentive Calculator for personalized, data-backed incentive recommendations.
Once you determine what your incentives will be and how much they’re worth, you should also have a plan for distributing them immediately (or soon) after each session.
Finding participants is a consistent and frequently cited pain point among just about everyone who does user research. Companies with big research teams have entire research ops divisions to help them manage the logistics of recruiting and managing participants for all those projects. Smaller research teams either go it alone with a recruitment tool or use research agencies to handle recruiting for them.
The good news is there are lots of tools out there to help you recruit participants for your research. You can use low-cost user research recruiting tools like User Interviews to recruit vetted participants, go it alone on Craigslist or social media, or turn to a research recruiting agency for a totally hands-off (but expensive) process.
We’ll go over the different channels and tools for recruiting research participants in more detail at the end of this chapter.
In a nutshell, your options are:
This next step puts the work you did in step 2—defining a research question and key participant traits—into action to separate the good participants from, well, the not-so-good.
The best way to do this is with a screener survey, which is a (typically brief) survey participants take before they qualify for your study. You can think of a screener survey as a sieve that captures the people who hit all your ‘must have’ criteria and filters out the ones who don’t quite fit the bill.
Again, these criteria are usually a mix of:
Screening is a deceptively straightforward concept: many (many!) researchers struggle to create the perfect screener survey, and getting this part of the process wrong can have far-reaching implications for the entire research project.
Because screener surveys are so important, we’ve dedicated an entire chapter to the topic, which you can read here. User Interviews customers also have access to our knowledgeable Project Coordinators, who can help you craft a standout screener survey.
Short on time/patience? Here are our recommendations in a nutshell:
Demographics are the low hanging fruit of screener surveys, and it often makes sense to include a few demographics questions either at the beginning or at the end of your survey. But don’t let age, gender, and location questions be the end-all be-all.
You’ll want to start by identifying which demographics, if any, really define your audience conclusively.
Don’t make prospective participants complete your entire screener before finding out they don’t qualify. Start with the questions that are most likely to weed out large groups of unqualified participants. The easiest way to do this is to write out your questions, rank them in order of importance, and look for any interdependencies.
For example, before diving into questions about how people use apps on their smartphones, find out if they use a smartphone at all. Then, move on to the questions that tap into specific behaviors, interests, and preferences.
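That “broadest disqualifiers first” ordering can be sketched as short-circuiting logic. The questions and criteria below are hypothetical, loosely based on the finance-app example earlier in this chapter:

```python
def screen(answers: dict) -> bool:
    """Return True if a respondent qualifies, checking the broadest
    disqualifiers first so most unqualified people exit early."""
    # 1. Broadest filter first: no smartphone means no app questions apply.
    if not answers.get("owns_smartphone", False):
        return False
    # 2. Behavioral filter: must actually use finance apps.
    if answers.get("finance_apps_per_week", 0) < 1:
        return False
    # 3. Narrower demographic filter last.
    if not (20 <= answers.get("age", 0) <= 29):
        return False
    return True

print(screen({"owns_smartphone": True, "finance_apps_per_week": 3, "age": 24}))  # True
print(screen({"owns_smartphone": False, "age": 24}))                             # False
```

Most survey tools implement this same idea as “skip logic” or “disqualification logic,” so you rarely have to code it yourself; the point is to rank your questions before you build the survey.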
A screener survey is meant to help you find the candidates who are a perfect fit for your study. Giving away the plot too early—by, say, telling participants what your study is about—can devalue the screening process and make your research less effective.
To avoid tipping your hand at this stage:
Don’t you just love conversations where you have to drag answers out of the other person?
Of course you don’t. People who make terrible dinner party guests also tend to be less-than-optimal candidates for qualitative research.
Screen for high-quality participants by asking “articulation questions.” These are open-ended questions designed to test a user’s capacity to communicate. If a person can express their ideas with depth of thought, they’re likely to be a helpful participant.
Including open-ended questions also helps weed out “professional participants” who are just looking to make a quick buck by qualifying for any and every study.
If you create multiple choice responses, don’t assume that you’ve presented the user with every possible option. As Gandalf once said, “even the very wise[st survey designers] cannot see all ends.”
Include a ‘none of the above,’ ‘I don’t know,’ or ‘other’ option to account for any outliers.
Otherwise you could end up with someone in your survey who doesn’t belong there because they were forced to choose an answer that didn’t apply to them. Likewise, you might screen good participants out because they didn’t quite fit the answers you provided.
Lastly, it’s important to keep screener surveys short and sweet. We’ve seen some screener surveys get so long that participants mistake them for a (paid) research survey! If you’re looking for a rough guideline on length, try to keep your screener to fewer than 10 questions.
Don’t you just love emailing back and forth to “find a time that works best for everyone?” Yeah, neither do we. That’s why we built scheduling right into the User Interviews interface, so you can just choose the times you’re available and your participants can select from the available time slots.
If you’re not using User Interviews to schedule your sessions, you may need a dedicated scheduling tool to help you coordinate your sessions—especially when coordinating multiple interviews.
Tools like Calendly, Doodle, and YouCanBookMe help take the hassle out of scheduling sessions by allowing your participants to select an available time slot on your calendar, similar to our scheduling function.
Make sure to include a calendar invite when each booking is confirmed. If the participant accepts the invite, they may get notifications on their phone in addition to your emails and other outreach—all of which help them remember to show up!
Be respectful of your own time, too. Limit the number of sessions you conduct each day, and give yourself at least 15 minutes between sessions in case you have a particularly talkative participant. If you can, aim for a 30-minute buffer; this will give you time to polish your notes from the previous session while they’re fresh in your mind, and to prepare for the next one.
After a participant signs up for a user research session, send them a confirmation email right away. To help your participants remember the interview and get to the right place at the right time, be sure to include 3 key pieces of information:
It never hurts to be enthusiastic, thank the interviewee, and re-emphasize the impact of the interview.
And while you don’t want to bombard your participants with a constant stream of messages, do plan to follow up multiple times for interviews scheduled a week or more in advance.
A typical outreach cadence may be:
By the way, you can automate all of this outreach in User Interviews, whether you’re contacting your own users via Research Hub or sourcing new participants through Recruit.
For longer research like diary studies, periodic reminders may be needed to keep motivation high. You might also consider spreading your incentives distribution out over time to promote and reward continued engagement.
The incentive plan that you put together in step 4 should include a strategy for distributing incentives to participants as soon as possible once the session wraps up.
We recommend researchers use popular digital payment platforms like PayPal for cash-based incentives. If you use User Interviews for recruiting participants for a study, we can instantly process incentive payments through gift cards. We will also automatically issue 1099s for your tax records. Or you can handle incentives independently—it’s up to you.
Just remember, while compensation is important, incentives aren’t enough to guarantee high-quality and happy participants—people need to have clear instructions, and they need to know you’re truly grateful for their time.
Look, User Interviews is a user research recruiting and management platform. We’re a little biased when it comes to UXR recruiting tools… but that’s only because we’re the best.
All jokes aside, User Interviews is one of just a few tools fully dedicated to solving this part of the user research process. It’s why we exist—and we’re good at what we do.
Our platform is a one-stop shop for recruiting participants—whether you want to draw from your current user list or recruit from our audience of over 700,000 ready, willing, and vetted research participants from seven countries. You can also target over 140 different industries, job titles, demographics, and custom screener criteria. If you incentivize your participants with Amazon gift cards, we’ll manage the incentives for you.
User Interviews also includes screener surveys, interview scheduling, and participation tracking for your existing users.
The median turnaround time to match you with your first participant is 3 hours, though it can vary based on the project.
If you’re looking for a solution to manage your own participant population, we can do that too! Our Research Hub tool lets you store customer contacts, track when they last participated, and even see how much you’ve paid them in incentives.
If you decide to DIY your research recruiting, you have several options, depending on who it is you’re trying to recruit.
An opt-in form is a great way to source participants from your existing customer base. These forms are lightweight, customizable, and can be shared across a variety of channels to capture the right folks for your project. Here are (a few) channels where you might share such a form:
Messages can be delivered in real time within your website or app based on the user’s actions or engagement, their background account information, and/or other criteria. You can recruit at the flick of a switch, but you’ll need to contend with third-party integrations and potential complexity in managing the tool.
If you’ve built an active and engaged social following from your own customer base, you can recruit people directly through these profiles. It’s free and straightforward, but some platforms (e.g., Facebook & Instagram) are making it hard for brands to get by on organic reach alone—so you might struggle to capture enough eyeballs. You can navigate this by boosting posts to an audience of your own page followers (this requires budget allocation).
Your support staff are in touch with customers every day, and these are the customers who have something to say. This makes them great candidates for qualitative user research—if you can align with the support team.
If you’re doing research with your own customers, turning your email list into a participant recruitment tool is relatively simple and can be very effective. These are people who are already invested in your product, and may have great insights to share. Work with your marketing or sales team so your research efforts don’t overlap with existing outreach.
If you want a completely hands-off recruitment process, you can enlist the help of a specialized research recruitment agency. Specialist market research companies or participant recruitment agencies are often very good at what they do, but their help comes at a high price (around $107 per participant, on average). That can be a hefty price tag for small businesses to handle, especially since you typically need at least 5 participants to complete a research study.
This is a highly targeted and customizable approach, but it’s time-intensive to crawl through LinkedIn profiles, send out speculative messages, and follow up on conversations. Frankly, few people have time for this, and it doesn’t scale well.
Performance marketing techniques can be directly applied to the research recruitment process, and if you have a savvy and helpful marketing team, you can recruit participants through paid Facebook, Instagram, LinkedIn, or other social advertising.
Defining your audience is especially important here, because your ad targeting strategy will determine whether you get relevant respondents for your screening survey.
Organic reach for posts is limited nowadays. But groups and community pages are still a hive of activity, packed with people who are actively talking about shared interests, ideas, professions, hobbies, and more.
The challenge is that many of these groups are closed or invite-only, so you may need the admin’s approval to recruit there. If you’re able to get approval, some of your best participants could come from these hyper-specific groups.
Slack isn’t just a great tool for communication with your colleagues—you can also use it to connect with research participants! Many specialized communities have Slack workspaces where people get together and talk about their industry. These communities can be a good place to connect with participants for a hard-to-recruit study.
Just make sure you are asking for research participation in the right channel, communicate the details of your study clearly, and abide by any community rules that exist.
You can use Slofile, an online database of Slack communities, to find ones that align with your target audience.
Another social media network you may not think of for participant recruitment is Reddit. Like Slack, it primarily consists of specialized subgroups (subreddits) that center around a theme, idea, hobby, location, etc. Because of this, Reddit can be a good place to find people who fit niche recruitment criteria.
When posting your research project on Reddit, remember to provide a clear description of your research project, why you need help from the people in this subreddit, and engage with any comments or questions people might have.
You can also use research-specific Reddit communities to find participants for your research. These communities are devoted to participating in research or finding ways to make a little extra pocket money. They’re not as targeted as posting in a subreddit specifically for your target participant, but can still yield good results.
Finally, classified ads on sites like Craigslist can be a cheap way to recruit participants. Craigslist limits the number of ads you can post, so if you have a few studies to recruit for, we recommend sticking with the free posts.
The quality of participants you get from this channel can be hit or miss, but it can be worth working into your recruitment checklist and letting your screener survey sort out the good participants from the bad.
If you keep hitting the same audiences time and time again for different research studies, you will experience the law of diminishing returns. Repeat participants will eventually get fatigued, and your research will be based on interviewing the same people with the same views.
There are 2 possible ways to avoid this problem:
Of course, it might be unavoidable to hit the same audience repeatedly when you’re testing product development among your most engaged current customers.
So, how can you keep things fresh?
The quality of your participants directly impacts the outcome of your research project, which is why it’s so important to get off to a good start by specifying targets, building a bulletproof screening process, and offering incentives that match the expectations of your target audience.
We know we packed a lot of information into this chapter, so let’s take a moment to recap.
In this chapter we went over how to:
And there you have it—everything you need to know to recruit great participants for your UX research study!