
An Inside Look at Our Participant Recruitment System—And How We Use AI to Improve Matches

A behind-the-scenes look at how User Interviews is making recruiting smarter, faster, and more reliable with AI.

Research teams are constantly working to solve the challenges of participant recruitment. Our 2024 State of User Research analysis showed that the top three research challenges were finding enough qualified participants, long recruitment timelines, and participant reliability.

At User Interviews, we pride ourselves on delivering qualified participants for any kind of research with more speed, precision, and scale than any other recruiting solution on the market. We're able to do this for two reasons:

  1. We have an incredibly rich data set built over nearly a decade of recruiting, segmenting, and screening millions of participants
  2. We have developed a sophisticated matching system that dynamically balances fast fulfillment, fraud prevention, and panel health

The result is fast and reliable recruitment, no matter how large or niche your study may be.

In this article, I’m going to:

  • Lift the hood on our matching system to reveal how it works
  • Unpack our powerful research recruitment flywheel
  • Share what my team is doing to iterate on system performance—most recently with AI

How our recruitment platform works (and why matching matters)

At its core, here's how the recruiting process works on the User Interviews platform:

  • Step 1: A researcher launches a study, setting recruitment criteria and a screener survey.
  • Step 2: Participants are notified and invited to apply by completing the screener survey.
  • Step 3: Researchers review qualified applications and approve participants (a step they can choose to automate).
  • Step 4: Approved participants complete the research activity and are compensated in the form of research incentives.

Throughout this process, our matching system is driving three key outcomes:

  1. Study Fulfillment. When researchers launch a study on User Interviews, we aim to fill that study with the right participants, quickly and consistently. This is the core of our mission.
  2. Panel Health. We put a lot of work into building a large and diverse panel, and we maintain participant engagement by distributing opportunities and minimizing wasted effort.
  3. Fraud Prevention. Trustworthy participants are critical to running reliable research, so we weed out bad actors with a set of automated checks and restrict suspicious application activity. Our response fraud rate is an industry-low 0.6%. Learn more about our panel.

The magic of matching

User Interviews maintains a marketplace that matches researchers (demand) with participants (supply), and effective matching—of participants to studies—contributes to a flywheel that powers our marketplace. A larger participant panel allows us to consistently meet researchers’ needs, which draws more researchers to recruit with us.

As more researchers run studies, that opens up more paid opportunities for participants, creating a better experience and drawing more participants to sign up…continuing the cycle.

Our matching system plays a critical role in strengthening that flywheel. As the system improves, it reinforces the virtuous cycle of better targeting ↔ greater participant value ↔ higher rate of participation ↔ more data to feed the targeting system. And on it goes.

Let’s take a closer look at how this flywheel works, and where we’re identifying optimizations.


The 4 steps in matching researchers to participants

When a researcher launches a study, they provide us with two types of information on their recruiting needs: structured characteristics and a more nuanced screener survey. These two pieces jumpstart our matching process, and a participant needs to match both in order to qualify for a study.

Here’s how it works:

Step 1: Hard filtering on structured characteristics

Our first step is filtering participants based on a study’s structured characteristics. If a researcher needs 40- to 50-year-olds in Chicago, we can immediately exclude non-matching participants to avoid sending irrelevant notifications.

This is a (technically) straightforward step. Researchers can choose from more than 45 structured characteristics on User Interviews when setting up a study, each of which has a pre-defined set of options. Participants complete these fields while onboarding into our panel, so we are working with a complete set of structured data on both sides.

If a researcher is looking to speak with Product Managers, we can easily identify that group. It’s the same for demographic-driven recruitment: We can quickly identify participants in our panel who live in the suburbs and have a reported household income between $80,000 and $99,999, for example.
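To make this concrete, here’s a minimal sketch of what a hard filter over structured characteristics could look like. The Participant fields and criteria below are hypothetical stand-ins for illustration, not our production schema, which spans 45+ characteristics.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    # Hypothetical structured-characteristic fields; the real panel
    # schema includes 45+ such characteristics.
    age: int
    city: str
    job_title: str
    household_income_bracket: str

def hard_filter(panel, criteria):
    """Keep only participants whose structured data matches every criterion."""
    def matches(p):
        return (
            criteria["min_age"] <= p.age <= criteria["max_age"]
            and p.city == criteria["city"]
        )
    return [p for p in panel if matches(p)]

# Example: the "40- to 50-year-olds in Chicago" study described above.
panel = [
    Participant(45, "Chicago", "Product Manager", "$80,000-$99,999"),
    Participant(29, "Denver", "Designer", "$60,000-$79,999"),
]
eligible = hard_filter(panel, {"min_age": 40, "max_age": 50, "city": "Chicago"})
print([p.job_title for p in eligible])  # -> ['Product Manager']
```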

Step 2: Ranking based on likelihood of passing screener survey

The first step gets us to a list of “possible fit” participants. There is no guarantee, however, that those participants will pass a researcher’s screener survey, and screeners vary widely from study to study, which makes it difficult to predict who will qualify for researcher review. Given that uncertainty, we use screener success likelihood to rank every participant who passed a study’s hard filters in step 1, rather than to exclude anyone from consideration.

Rankings determine which participants are notified and given the opportunity to apply to studies first. To maximize fulfillment speed and minimize unsuccessful applications and wasted participant effort, we always want to get a study in front of relevant participants as quickly as possible.

Predicting a participant’s likelihood of passing a screener is primarily a text analysis problem. Our system analyzes and compares a participant’s past screener survey responses, among other data points, to a study’s screener survey to determine the level of overlap and similarity. Elasticsearch is a core part of our tech stack for this step given its ability to quickly query text at scale. This is also an area where we are using AI (in the form of natural language processing models) — more on that below.
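As a rough illustration (not our production logic), here’s a toy ranking pass that scores candidates by the textual overlap between a new screener and each participant’s past responses. In reality this comparison runs through Elasticsearch and additional signals, but the shape is the same: score everyone who passed the hard filters, then sort.

```python
import re
from collections import Counter
from math import sqrt

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words; a stand-in for Elasticsearch's text analysis."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_by_screener_fit(screener_text, candidates):
    """Rank hard-filtered participants by overlap between the new screener
    and their past screener responses. Nobody is excluded; ranking only
    decides who gets notified first."""
    screener_tokens = tokenize(screener_text)
    scored = [
        (cosine_similarity(screener_tokens, tokenize(past_responses)), pid)
        for pid, past_responses in candidates
    ]
    return [pid for score, pid in sorted(scored, reverse=True)]

screener = "Do you manage a B2B SaaS product roadmap and run weekly user interviews?"
candidates = [
    ("p1", "I lead roadmap planning for a B2B SaaS product and interview users weekly"),
    ("p2", "I am a student and occasionally take online surveys"),
]
print(rank_by_screener_fit(screener, candidates))  # -> ['p1', 'p2']
```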

Step 3: Re-ranking to optimize matching outcomes

We now have a list of participants that meet a study’s structured characteristics, we've ranked that list by their likelihood of passing the screener survey, and we’re finalizing which participants to notify first.

At this point, we consider panel health and fraud prevention. We predict each study’s overall screener survey difficulty and vulnerability to fraud, and we use those predictions to re-rank our participant list. Let's look at a couple of examples tied to the outcomes mentioned above.

  • Panel Health. If many participants meet a study’s characteristics and are expected to pass its screener survey, that study offers an opportunity to strengthen the health of our panel while still quickly fulfilling the researcher’s needs. Accordingly, we prioritize matching participants who will see a large boost to engagement and retention if they successfully complete that study. Prioritizing panel health helps us retain the right participants to meet future recruitment needs.
  • Fraud Prevention. If a study’s setup suggests that it might be vulnerable to participant fraud, we prioritize participants with a track record of trustworthy activity. Preventing fraud, while ensuring that researchers’ studies are filled quickly, is a balancing act. We're always striving for the optimal balance, and this logic provides a layer of defense on top of our existing fraud detection protocol.

This re-ranking step is an area of active focus for our team. We are continuing to improve our re-ranking logic to dynamically balance study fulfillment, panel health, and fraud prevention.
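For illustration only, here’s a sketch of what a blended re-ranking score might look like. The weights and field names are made up to show the balancing act between fulfillment, panel health, and fraud prevention; they are not the values or signals we actually use.

```python
def rerank(candidates, study_is_fraud_prone: bool, study_is_easy_to_fill: bool):
    """Blend the step-2 screener score with panel-health and trust signals.
    The weights below are illustrative placeholders, not production values."""
    trust_weight = 0.5 if study_is_fraud_prone else 0.1
    health_weight = 0.4 if study_is_easy_to_fill else 0.1

    def blended(c):
        return (
            c["screener_score"]
            + trust_weight * c["trust_score"]
            + health_weight * c["engagement_boost"]  # expected lift to this participant's retention
        )

    return sorted(candidates, key=blended, reverse=True)

candidates = [
    {"id": "p1", "screener_score": 0.82, "trust_score": 0.95, "engagement_boost": 0.10},
    {"id": "p2", "screener_score": 0.90, "trust_score": 0.40, "engagement_boost": 0.05},
]
# For a fraud-prone study, the more trustworthy participant rises to the top.
print([c["id"] for c in rerank(candidates, study_is_fraud_prone=True, study_is_easy_to_fill=False)])
```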

Step 4: Driving applications from participants and measuring results

At this point, we know exactly who we want to target for a study; our focus now turns to encouraging those participants to apply by completing the study screener survey. We have two main levers that we pull to drive applications: Email notifications and on-platform listings.

Email notifications allow us to quickly get studies in front of best-fit participants, while on-platform listings give us another way to highlight studies to that same group and show the study to ‘possible fits’. As mentioned in Step 2, predicting screener survey success will always be an imperfect science, so driving application volume from lower-ranked participants (who still match a study’s structured characteristics) helps us fill studies reliably.


Matching success metrics

Once notifications have been sent and applications are in, we track several metrics to determine whether our matching system is effectively and efficiently filling a study. These metrics allow us to measure system performance and inform future improvements.

Here are a subset of the key metrics that we track across all studies:

  • Fill Rate: Percent of requested research sessions that we successfully fill with participants.
  • Fill Speed: Percent of studies hitting key fulfillment thresholds early in their lifecycle.
  • Notification Precision: Percent of applications from notified participants that lead to a successful match.
  • Applications per Participant Selection: Number of applications needed for each participant a researcher approves for a study.
  • Poorly Rated and Fraudulent Sessions: Percent of matches that lead to a poorly rated research session or reported fraud.
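As a hedged sketch, here’s how a few of these metrics could be computed from raw counts. The function and input names are hypothetical examples, not our actual reporting pipeline.

```python
def matching_metrics(requested_sessions, filled_sessions,
                     notified_applications, notified_matches,
                     total_applications, approved_participants,
                     flagged_sessions, completed_sessions):
    """Illustrative formulas for four of the metrics listed above."""
    return {
        "fill_rate": filled_sessions / requested_sessions,
        "notification_precision": notified_matches / notified_applications,
        "applications_per_selection": total_applications / approved_participants,
        "poor_or_fraud_rate": flagged_sessions / completed_sessions,
    }

# Example: a 20-session study that filled 19 slots.
print(matching_metrics(
    requested_sessions=20, filled_sessions=19,
    notified_applications=90, notified_matches=17,
    total_applications=120, approved_participants=19,
    flagged_sessions=1, completed_sessions=19,
))
```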

We use these metrics to benchmark matching efficiency and health, and we’re always looking for innovative ways to optimize performance — which leads me to our most recent development involving AI.

Using AI to navigate shades of grey

As our co-founder Basel recently discussed, we’re using Artificial Intelligence—primarily in the form of Machine Learning—to improve our matching system.

We’ve recently introduced semantic search, using a Natural Language Processing (NLP) model, as a new factor to determine a participant’s likelihood of passing a screener survey. Semantic search allows us to expand the text in a study’s screener survey and in a participant’s past responses to form more flexible matches. It allows us to identify “needle in a haystack” participants for niche recruits with confidence.
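To show the general idea, here’s a small embedding-based similarity example using the open-source sentence-transformers library as a stand-in. The specific model named below is just an example, not necessarily what runs in production.

```python
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model; purely an illustrative stand-in.
model = SentenceTransformer("all-MiniLM-L6-v2")

screener_question = "Do you administer Kubernetes clusters in production?"
past_responses = [
    "I manage our company's container orchestration platform day to day",  # semantically close
    "I mostly work on print design and branding projects",                 # unrelated
]

# Keyword matching would miss the first response; embeddings capture that
# "container orchestration platform" is close in meaning to "Kubernetes clusters".
query_vec = model.encode(screener_question, convert_to_tensor=True)
response_vecs = model.encode(past_responses, convert_to_tensor=True)
scores = util.cos_sim(query_vec, response_vecs)[0].tolist()

for response, score in zip(past_responses, scores):
    print(f"{score:.2f}  {response}")
```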

Additionally, our fraud prevention system uses Machine Learning models to assess participant trustworthiness, drawing on more than 50 indicators. These models help us block the vast majority of likely fraud, and they help us determine which non-fraudulent participants are most trustworthy.
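As a simplified sketch (not our actual models or features), here’s what a trust-scoring classifier over a handful of hypothetical indicators might look like, using scikit-learn’s logistic regression as a stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: account_age_days, past_no_show_rate, screener_response_consistency.
# These three features are hypothetical examples, not the real 50+ indicators.
X_train = np.array([
    [900, 0.00, 0.95],
    [700, 0.05, 0.90],
    [  3, 0.60, 0.20],
    [  1, 0.80, 0.10],
])
y_train = np.array([1, 1, 0, 0])  # 1 = trustworthy history, 0 = flagged

model = LogisticRegression().fit(X_train, y_train)

# Score a new applicant; a probability like this could feed the re-ranking
# step above or trigger a block when it falls below a review threshold.
new_applicant = np.array([[30, 0.10, 0.85]])
print(model.predict_proba(new_applicant)[0][1])
```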

We are constantly looking for new opportunities to improve the system, and many of those will likely involve AI. Next on our Product team’s radar: using AI (Machine Learning and/or LLMs) to help re-rank participants.

See for yourself how our matching system delivers fast, targeted participant matches: sign up free and launch a project in minutes.


Luke Friedman
Senior Product Manager