Note to the reader:
This part of the field guide comes from our 2019 version of the UX Research Field Guide. Updated content for this chapter is coming soon!
In the last chapter, How to Recruit Participants for User Research Studies, we introduced you to the topic of screener surveys. Well, we actually introduced that topic way back in the chapter about the UXR process, and again when we walked you through how to create a user research plan.
Why do we keep harping on about screeners?
Because they’re important! Screener surveys are what stand between you and hordes of unscrupulous, unqualified, uncommunicative participants. (That’s a bit dramatic, but you get the point.)
Screener surveys are essential tools for qualitative research. And while they sound simple, they’re actually very easy to get wrong. Which is why we’ve dedicated a whole chapter to them!
Screener surveys, or just ‘screeners,’ are surveys people take before participating in a research study. They’re made up of a few questions, designed to weed out the folks who aren’t your intended audience and capture the ones who are.
You can think of a screener survey as a sieve that captures the people who hit all your ‘must have’ criteria and filters out the ones who don’t quite fit the bill.
If you want the right participants, you’ve got to design smart screening questions.
That’s not always as straightforward as you might think. You have to ask questions in a somewhat roundabout way to avoid leading people to certain responses, but also in a clear way to make sure you’re universally understood.
The devil is in the details, but luckily there are some pro strategies that anyone can learn and put to use right away.
The guiding principles explained in this chapter will help get you there.
We covered this bit in previous chapters, and the advice here is the same as before.
Clearly defined goals and objectives are must-have requirements for any user research project. These goals—aka your reason for doing research—should be hammered out well before you start writing your screener surveys. (If they’re not, head on back to the chapter on planning research to get some clarity. Go on, we’ll wait...)
This is the part where you consider your research question, imagine the ideal participant who can give you the answers you need, and identify the targeting criteria (the things that must be true about them) needed to qualify for your study.
Your targeting criteria will typically be defined by using a mixture of:
So… how do you decide what criteria to target by?
We covered all of those points in finer detail in the last chapter. If you’re still struggling to pin down who to recruit, we recommend revisiting those recommendations before moving onto the next steps.
Take a closer look at your targeting criteria. Do they include demographic criteria like age, gender, race, income, etc.? Do they need to?
Demographics are the low-hanging fruit of screener surveys, but these characteristics have their limitations.
For instance, where people live is important if you’re doing an in-person study, or if your app will only serve certain locations. But if you don’t have a clear reason to target based on geography… don’t. The same thing applies to demographics. In many cases, a person’s gender or how much they earn per year won’t determine how they interact with a product.
Our assumptions about these characteristics are prone to bias, which can invalidate your study and do real harm to the people your research will ultimately impact.
Also, it’s just bad form to waste valuable screener questions on criteria that aren't absolutely essential.
Screening for psychographics and behaviors lets you group people based on how they live, what they value, and how they relate to your product or category. That’s the juicy stuff!
Let’s say you want to test for accessibility with a mix of gender identities, age ranges, and educational backgrounds. In this case, adding demographic criteria will allow you to target a diverse audience.
Not every question on your screener has to result in an automatic in or out; some can be used to filter for a variety of participants as a final step. Accept anyone who could be a fit based on any given question.
Once you know the characteristics of your target participants and you’ve broken that down into specific criteria for how you’ll identify the people who qualify, it’s time to write the questions that will help you find the good’uns.
The language you use in your screener is important. When writing screener questions:
The more clearly worded and specific your questions are, the less likely participants will be to get confused and answer inaccurately. Leave no room for misinterpretation!
Similarly, make sure the multiple choice options you provide are carefully worded. Being clear in your responses is just as important as being clear with your questions.
Have you ever taken a survey where, on a certain question, you found yourself forced to choose among multiple answers that all applied to you?
To avoid putting your audience in that position, make sure your answers have clear borders without any overlap. For example, when asking for numerical values (age, size, frequency, etc.), make sure your values are mutually exclusive:
For less definitive answers or for answers that can’t be made mutually exclusive, ask participants to select the answer that is most true or give them the option to select all that apply, rather than a single answer.
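To make the idea of mutually exclusive ranges concrete, here’s a minimal sketch in Python (the age brackets are hypothetical, not from any real screener) showing why overlapping brackets create ambiguity:

```python
# Hypothetical age brackets for a screener question.
# Overlapping brackets force some respondents to pick between two answers:
overlapping = [(18, 25), (25, 35), (35, 45)]  # a 25-year-old fits two brackets

# Mutually exclusive brackets give every respondent exactly one answer:
exclusive = [(18, 24), (25, 34), (35, 44)]

def matching_brackets(age, brackets):
    """Return every (low, high) bracket, inclusive, that a given age falls into."""
    return [b for b in brackets if b[0] <= age <= b[1]]

print(matching_brackets(25, overlapping))  # two matches: ambiguous for the respondent
print(matching_brackets(25, exclusive))    # exactly one match: no guesswork
```

The same principle applies to any numeric answer set: if one value can land in two options, the options need redrawing.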
Don’t make prospective participants complete your entire screener before finding out they don’t qualify. Eliminate unqualified people early.
Think of the process as a funnel: you’re refining your pool of participants, step by step. Or, think of it like weeding an overgrown garden. The biggest, tallest, most obvious weeds come out first, simply because they’re the easiest to grab. Start with the questions that are most likely to weed people out.
The easiest way to do this is to write out your questions, rank them in order of importance, and look for any interdependencies.
For example, if you’re doing an in-person study, ask about location right away. Location here is a must and must-have criteria go first.
Before diving into questions about how people use apps on their smartphones, find out if they use a smartphone at all. Then, move on to the questions that tap into specific behaviors, interests, and preferences.
If you’re not working with a recruiting service that gathers demographics for you, ask any demographic questions that you need to ensure a diverse recruit pool.
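The funnel logic above can be sketched in a few lines of Python. This is just an illustration (the criteria, field names, and city are all hypothetical, and real survey tools handle disqualification for you): must-have checks run first, and a respondent is screened out the moment one fails.

```python
# Hypothetical screener: ordered checks, most-likely-to-disqualify first.
# Returns the index of the first failed check, or None if the person qualifies.
def screen(answers, checks):
    for i, (label, predicate) in enumerate(checks):
        if not predicate(answers):
            return i  # stop immediately; no need to ask the remaining questions
    return None

checks = [
    ("lives in study city", lambda a: a["city"] == "Denver"),      # must-have goes first
    ("owns a smartphone",   lambda a: a["has_smartphone"]),
    ("posts photos weekly", lambda a: a["photo_posts_per_week"] >= 1),
]

qualified = {"city": "Denver", "has_smartphone": True, "photo_posts_per_week": 3}
remote    = {"city": "Austin", "has_smartphone": True, "photo_posts_per_week": 3}

print(screen(qualified, checks))  # passed every check
print(screen(remote, checks))     # disqualified on the very first question
```

Ordering the checks this way means unqualified people exit after one or two questions instead of ten, which respects their time and keeps your completion data clean.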
You know how some folks add a “right?” at the end of every sentence, so that you have no choice but to nod or shrug in agreement? Right?
That’s an example of ‘leading.’ Leading questions will influence people to answer in a certain way.
It’s a handy conversational device if, say, you’re a dogged prosecutor in a courtroom TV series and the judge will allow it (for the drama, obviously). But leading questions have no place in user research—and definitely not in your screener survey. This is not the place to try to validate your assumptions. You’ll end up with skewed results or the wrong kind of participants.
Here’s an example:
A good way to identify whether a question might be leading is if it includes a hint or excludes possible answers.
Another way to avoid leading questions is to provide a series of unrelated options as answers.
For example, if you want to screen users who have a high level of concern around internet privacy issues, rather than diving right into questions about internet privacy by asking:
… you can create a question like this (not leading): Which of the following topics is most concerning to you regarding internet use in your life?
Likewise, avoid yes/no or true/false questions, which tend to be leading. Users might answer in the way they believe will qualify them for the study. Whenever possible, replace these questions with multiple choice options or provide a scale for degree of agreement with a given question.
Exception: In cases where a black and white answer is required—for example, when asking if a person is willing or able to participate under the conditions of your study—a binary question will be your best bet.
‘Loaded questions’ are similar to leading questions (and the two are often conflated), in that they push the participant to answer a certain way. Loaded questions do this by making assumptions, which are implicit in the question itself.
Here’s an example:
A good way to identify whether a question might be loaded is if it includes strong language or excludes possible answers.
If you create multiple choice responses, don’t assume that you’ve presented the user with every possible option. Even the best survey designers have their limitations. As Gandalf once said, “even the very wise[st survey designers] cannot see all ends.” 🧙
Include a ‘none of the above,’ ‘I don’t know,’ or ‘other’ option to account for any outliers.
Otherwise, you could end up with someone in your study who doesn’t belong there because they were forced to choose an answer that didn’t apply to them. Likewise, you might screen good participants out because they didn’t quite fit the answers you provided.
Spare yourself the pain of having to drag answers out of a reticent participant by screening uncommunicative people out of your study.
Screener surveys help you to get more value for your time and money on a per-participant basis. Sometimes that means excluding certain people who otherwise perfectly fit your ideal audience profile.
Screen for expressive participants by asking ‘articulation questions.’ These are open-ended questions designed to test a user’s capacity to communicate. If a person can express their ideas with depth of thought, they’re likely to be a helpful participant.
Including open-ended questions also helps weed out “professional participants” who are just looking to make a quick buck by qualifying for any and every study.
A screener survey is meant to help you find the candidates who are a perfect fit for your study.
Giving away too much information about the purpose of your study—by, say, revealing the name of your company to non-users or telling participants who you’re looking to interview (which is a real mistake that we’ve seen)—can devalue the screening process and make your research less effective.
And this advice doesn’t just apply to your screener. The title and description you give your study, the way you talk about it when you’re recruiting participants, and the things you reveal in the lead-up to the session itself—it all matters.
For instance, let’s say you are doing research on (yet another) photo editor app for influencers and people who actively post photos on social media. You might tell participants it’s a study related to social media habits. That way they have some context (which can help them decide to click into the screener), but they don’t know what type of social media habits (editing photos) you’re looking for.
This will make it harder for professional testers to guess what you want, making it more likely you’ll get authentic responses.
Make sure your participants are clear about what they’re doing, and what stage of the process they’re at.
The screener survey is a sort of dress rehearsal, so it helps to let participants know they’re not yet in the final round. Be sure candidates know what they’re in for if they do make it.
If there are any possible deal-breakers (like NDA agreements, for example) let them know up front.
And of course, be clear that they won’t be paid until they make it through to complete the actual study.
Finally, keep your screener surveys short and sweet. We’ve seen some screener surveys get so long that participants mistake them for a (paid) research survey! If you’re looking for a rough guideline on length, try to keep your screener to fewer than 10 questions.
Remember, the point of a screener survey is to help you find the right participants for your research. This list of screener questions is meant to be used for inspiration, and to help you get a gut check on your own screener. It’s not a library of general use questions to copy and paste from in all circumstances.
With that caveat out of the way, here are some sample screening questions to ask, depending on the type of criteria you’re filtering for.
Ask employment questions when you want to screen for people with a certain level of familiarity with a particular industry, or exclude those who work for competitors.
If you need to test with novices, experienced users, or some combination of each, ask about familiarity with a given product.
Asking about frequency of use or action is useful when you’re screening for users who regularly do a specific task, or who used to behave in a certain way and then stopped.
Consider defining terms like ‘often’ (every day) and ‘rarely’ (once a year) so there’s no guesswork.
Just because someone meets your screening criteria doesn’t mean they’re actually going to be willing to participate, especially if your study touches on sensitive topics like health, income, lifestyle, marital status, etc. Ask participants directly if they’re willing and able to answer personal questions.
If you’re conducting medical research, the recruitment process and screening process are considered separate activities. Recruitment—in which you reach out to research candidates and tell them about the planned study—is a pre-screening activity that can be done without informed consent. But even your pre-screening process may have to be submitted to your Institutional Review Board (IRB) before you can proceed.
Before you gather protected health information or obtain medical records to determine study eligibility, you’ll need patients to sign a consent form to proceed with screening activities. Your screening script for interacting with possible participants and gathering information also has to be submitted for IRB review.
For more information about IRBs, refer to the FDA website. For your institution’s specific rules regarding screening and research procedures, refer to its specific IRB.
If you want, your screening procedures can include a phone call to potential candidates who seem most promising. Premium screening like this typically isn’t necessary, but it can be a good way to be absolutely sure that you’re getting the right people to talk to during your study.
We’ve seen researchers use premium screening when the study they’re doing is high-profile (visible to important stakeholders in their organizations) or when they have a highly specific research need.
People who promise to show up for your study and don’t will cost you time, and likely a moment of discomfort with colleagues and bosses who are forced to sit around waiting. Save yourself some trouble and work to prepare your participants and yourself to avoid the dreaded no-show.
Also consider being prepared by recruiting a few extra folks who you can call in at the last minute, if need be.
Long story short, envision your ideal participant—know who they are, know who they aren’t—and build a screener survey that filters in the right people to answer your research question.
The specifics of how to get there are outlined above. Just remember to keep an open mind as to who these study participants might be, and don’t limit yourself with prejudgements mired in demographics.
We’ll leave you with a few rules of thumb:
Oh, and did we mention that you can build and fully customize screener surveys with User Interviews? Launch a research project and easily build your screener—we'll give you 3 free participants to get started.