“How long should my screener survey be?”
“Is there anything I can do to boost my completion rates?”
“I like open-ends. How many should I use in my screener?”
The “answers” to these questions might be based on experience, anecdotes from colleagues, or a gut feeling. But what about using a massive trove of screeners to surface trends? That might create a more confident and satisfying path forward.
This report does just that. Specifically, the User Interviews team analyzed over 42,000 screeners launched on its platform. These screener surveys came from a wide variety of companies, team types, and industries, and supported moderated, unmoderated, and combo-style research designs.
Read on for data to use the next time you or someone on your team asks one of those questions about screener surveys.
For more information about screener surveys, including best practices on question design and skip logic, check out our Field Guide on participant recruiting.
For this report, we examined a single criterion of screener success: drop-off rate, the percentage of participants who start your screener but do not complete it. When drop-off rates are high, your pool of potential research candidates shrinks, reducing your flexibility to select for core personas, demographic groups, or geographic diversity. Sample quality matters especially in qualitative research, where studies are typically small and can be skewed by just one or two poor-fit participants.
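The calculation itself is simple. Here's a minimal sketch in Python (the numbers below are illustrative, not drawn from our dataset):

```python
# Minimal sketch: drop-off rate is the share of people who start a
# screener but never submit it, expressed as a percentage.
def drop_off_rate(started: int, completed: int) -> float:
    """Return the drop-off rate as a percentage of started screeners."""
    if started == 0:
        return 0.0
    return 100 * (started - completed) / started

# Illustrative example: 500 people open the screener, 470 finish it.
print(drop_off_rate(500, 470))  # 6.0
```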
Average drop-off rates vary depending on modality (digital vs. in-person vs. phone), industry, and audience type—to name just a few. Moreover, most public benchmarks for drop-off rate are based on “surveys” and not screener surveys specifically (some screeners launched on User Interviews, for example, are recruiting participants for subsequent research surveys).
The 42,756 screeners in our sample were launched between November 2021 and November 2023 via the User Interviews Recruit platform. These screeners were used in a range of project types, from unmoderated surveys and usability testing to moderated interviews. All screeners were digital (i.e., taken on a computer).
Across all User Interviews screeners, the average drop-off rate is 6%. Screener length varies widely (from 1 to 118 questions), but the average is about 10 questions, the median is 7, and 4-question screeners are the most common. The average 10-question screener uses 8 closed-ended and 2 open-ended questions (an 80/20 mix).
But how do the number and kind of questions affect your screener’s drop-off rate? The data suggests that for both B2C (consumer) and B2B (professional) audiences, open-ended questions are more likely to drive drop-off than closed-ended questions: the drop-off rate increases with each additional open-ended question.
By contrast, drop-off stays relatively low and stable as more closed-ended questions are added.
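If you want to run a similar check on your own screener data, the analysis is straightforward. Here's a rough sketch (the DataFrame columns and values are hypothetical; your own export will look different):

```python
import pandas as pd

# Hypothetical export: one row per screener, with its question mix
# and the drop-off rate observed for that screener.
screeners = pd.DataFrame({
    "open_ended_questions":   [0, 1, 2, 3, 2, 4, 1, 0],
    "closed_ended_questions": [5, 6, 8, 9, 7, 10, 4, 3],
    "drop_off_rate":          [4.2, 5.1, 6.3, 7.8, 6.0, 9.5, 5.4, 3.9],
})

# Average drop-off rate by number of open-ended questions...
by_open = screeners.groupby("open_ended_questions")["drop_off_rate"].mean()

# ...and by number of closed-ended questions, for comparison.
by_closed = screeners.groupby("closed_ended_questions")["drop_off_rate"].mean()

print(by_open)
print(by_closed)
```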
Comparing screeners by audience type also yields useful information about the impact of screener length. Specifically, B2B screeners show higher drop-off rates overall, and those rates are more sensitive to screener length.
How might you apply these findings to your next research screener survey?