Welcome to the fifth annual State of User Research report, brought to you by the curious folks at User Interviews.
This report unpacks the findings from our annual survey on the state of user research and the people who do it.
In addition to our usual exploration of research methods, salaries, tools, and feelings, this year we took a special look at the makeup of research teams and practices, the increasing prevalence of AI in research, recruiting pain points, and the impact of the economy on individual researchers and teams.
We are enormously grateful to all of our survey participants, our colleagues, and our partners for their contributions to this essential industry report.
Happy reading and researching!
The State of User Research survey was created by Katryna Balboni, Content Director, and Morgan Mullen, Senior UX Researcher, and built using SurveyMonkey. Analysis was done using Mode and Google Sheets/Excel.
This report was authored by Katryna, and brought to life by Holly Holden, Senior Visual Designer, and illustrator Olivia Whitworth.
Between May 4 and May 15, 2023, we collected 929 qualified responses from User Researchers, ReOps Specialists, and people who do research (PWDRs) as part of their jobs. (An additional 2,745 people took our screener but did not qualify for our survey based on their responses.)
The survey was distributed to User Interviews audiences via our LinkedIn and Twitter pages, our weekly newsletter (Fresh Views), and an in-product Appcues slideout. We also shared the survey in relevant groups on LinkedIn, Facebook, and Slack. Members of our team and friends within the UX research community also shared the survey with their professional networks.
This year, we partnered with other companies in the User Research space to extend the reach of our survey and this report. Our partners shared the survey with their audiences via their own newsletters and social media channels. Those partners are: Lookback, Marvin, MeasuringU, the ReOps Community, and UXtweak.
We believe that research is for everyone.
Whether they hold a PhD in Behavioral Anthropology or are a junior product designer for an ecommerce app (or, heck, both! People take all sorts of roads into research), we think that everyone should be empowered to ask questions and seek answers in a methodical way.
That’s why we’ve included not just dedicated UX Researchers (UXRs), but also people who regularly do user research as part of their jobs (PWDRs) and ReOps Specialists in our survey.
In this section, we break down our audience by job title, seniority, years of experience, company size, industry, and geographic location.
The majority (69%, N=637) of our audience are UX/User Researchers (UXRs). People who do research (PWDRs)—including Designers, Product Managers, Marketers, and Data Analysts—accounted for 26% (N=243) of all responses. A small but meaningful 5% (N=49) of our audience are ReOps Specialists.
Most of these folks (63%, N=589) are individual contributors (ICs), meaning they do not manage a team. Of these 589 people, 19% are what we’d call Senior ICs (people with 10+ years of experience); 44% are “mid-career” (4-9 years); and 37% are “early career” or Junior ICs (0-3 years).
A fifth of responses (20%, N=186) came from Managers, followed by Directors and Senior Directors (9%, N=87), and freelancers/self-employed (5%, N=42). A small percentage of our audience are at the VP level or higher (3%, N=25).
Notably, our PWDR audience skews more toward the management side of things, with 21% at the Director level or above.
Responses came from 65 countries around the world, with people living in the United States representing a full 50% (N=466) of the total audience.
The next most represented countries are the United Kingdom (7%, N=61), Canada (6%, N=55), Poland (5%, N=43), Germany (3%, N=32), Brazil (2%, N=22), Australia (2%, N=21), and Spain (2%, N=20).
The remaining 209 responses came from folks on every continent (well, minus Antarctica), from Finland to Egypt to the Philippines.
These individuals represent companies of all sizes, from 2-person operations to multinational giants with over 10,000 employees.
A fifth (21%, N=198) of the folks we surveyed work at agencies or consulting firms that are contracted to conduct projects on their clients’ behalf.
Just over a third (35%, N=321) of our audience are User Interviews customers. This group includes users of both Recruit (our panel of over 3 million users) and Research Hub (our highly rated research CRM).
In terms of industry, a plurality (35%, N=327) of the researchers we surveyed work in tech and software companies. The next-most represented sectors are IT (10%) and finance/accounting/insurance (also 10%), followed by business consultancy or management (9%), and healthcare and education (5% each).
As we might expect, UXRs are the most likely to have a formal university education in user research or a closely related discipline, with 42% saying this was how they primarily acquired their research skills and knowledge.
Other folks—especially ReOps Specialists—most commonly learn about user research on the job. (That is not to say they don’t have advanced formal training in other areas—in fact, the majority of our audience (69%) hold a Master's degree or higher.)
When we asked people to rate their own experience level with UX research on a scale from 1 (beginner) to 5 (expert), we found that folks with formal training in research rated their own expertise the highest on average (3.90), while those who primarily learned about UX research through a bootcamp (9% of our audience) gave their experience the lowest average rating (3.17).
Self-assessments of research experience were also slightly higher among folks who reported that UXRs or ReOps Specialists were responsible for user research education at their company (3.79 for UXR-led and 3.72 for ReOps-led education), compared to those who said there was no one in charge of research education at their company (3.41).
But regardless of job title or education, it appears that the biggest factor in how highly a researcher rated their own level of expertise was simply time. Predictably, people’s sense of their own research expertise increases with years of experience.
Fully remote work among researchers is on the decline. The percentage of people who said they work exclusively from home has decreased from 89% in 2021 (when our survey went out at the height of COVID-19 precautions) to 77% in 2022, to just 51% in 2023.
Even so, less than 1% of our audience said they were in the office full-time. Instead, hybrid workers (people who work remotely 1 to 4 days per week) represent a growing minority, with 43% reporting hybrid work in 2023 (compared to 21% in 2022 and just 7% in 2021).
North American and Latin American researchers are the most likely to be fully remote (63% and 66%, respectively), while in-office work was most common among researchers in Asia, with 15% saying they never or rarely worked remotely.
But working remotely does not necessarily mean researching remotely. Among the 51% of people who said they work exclusively remotely, 39% said that a portion of their research happens in-person. Conversely, 96% of the people who never work remotely say that at least some (if not all) of their research is remote.
In fact, most people (regardless of their remote work status) conduct a majority of their research remotely. This suggests that remote research is a prevalent and essential aspect of the research process, even for those who primarily work in a physical office.
We asked our audience to rate both their overall fulfillment at work and their feelings about several job factors on a scale from 1 (very unfulfilled/very dissatisfied) to 5 (very fulfilled/very satisfied).
People who work remotely at least 1 day per week had an average fulfillment rating between 3.51 and 3.55, compared to those who rarely (3.37) or never work remotely (2.71).
People who are fully onsite also reported lower satisfaction with work-life balance on average (2.71) compared to their fully remote counterparts (3.76). Onsite workers were also the least satisfied with cross-functional collaboration and the level of bureaucracy in day-to-day decision-making. (Though, frankly, no group is particularly satisfied with the latter.)
Thomas Edison didn’t actually invent the lightbulb. He improved upon existing technology to produce an affordable long-burning incandescent light bulb—and even then he had considerable help from the nearly 200 people who worked in his lab. And in fact, it was a Black inventor named Lewis Latimer who perfected Edison’s lightbulb, making it more durable and efficient to produce.
As we’ve been saying for a while now: Research is a team sport.
To better understand the key players, we looked at team sizes and structures at companies of different sizes. Here’s what we learned:
Shocking headline, we know.
In previous years, we asked about team sizes using buckets. This year, we used an open-text field to collect numerical data, which allowed us to calculate the average number of UXRs, PWDRs, and ReOps Specialists and better understand the actual sizes of research teams in different organizations.
(Remember, in this context, “research teams” include all the UXRs, ReOps Specialists, and PWDRs involved in research at a company, regardless of which department they report to.)
And our first finding here was as unsurprising as the headline suggests: The average research team size scales with company size.
A plurality of our audience (37%) reported team sizes of between 2 and 10, followed by 23% who said there are 11 to 25 people involved in research at their organization. Five percent (5%) of responses came from solo researchers—people who represent a research team of one.
It’s worth noting that when we analyzed how people rated their satisfaction with various job factors on a scale from 1 (very dissatisfied) to 5 (very satisfied) by team size, solo researchers appear to be least satisfied when it comes to their tool stacks, budgets, buy-in from leadership, cross-functional collaboration, and how research is used to make decisions at their company.
To understand the makeup of these different teams, we looked at the average ratios of different roles to one another. The number of PWDRs per every UXR ranges from 1 to 7, with an average ratio of 3 PWDRs: 1 UXR.
Meanwhile, the ratio of people conducting research (UXRs + PWDRs) per ReOps Specialist ranges from 4 to 32, with an average ratio of 21:1. In other words, the average ReOps Specialist supports the research efforts of 21 people.
Of course, not every company has a dedicated ReOps Specialist. In fact, our audience was evenly split between people who have a ReOps function at their company and those who don’t. Interestingly, 5% of people told us that their company outsources research operations work to an external agency or individual.
In any case, dedicated UXRs represent 39% of an average research team. Research Ops, when present, accounts for 16% of the average team, with PWDRs constituting the remaining 45%. When there is no Research Ops function, PWDRs make up 61% of all researchers.
A third of our audience (33%) work in companies with a centralized Research department, with this model being more common in agencies (43%).
Meanwhile, over half (52%) work in organizations where the research practice is decentralized—meaning it is either fully distributed (dedicated researchers are embedded within a non-research team) or a hybrid wherein some UXRs sit in a Research department, while others are embedded in a non-research team.
Fully centralized practices seem to become less common as companies scale.
You don’t have to look far to find someone within User Research with an opinion on democratization and the role it may have played in the recent wave of UXR layoffs.
For anti-democratization folks, seeing fellow researchers laid off has only confirmed their belief that democratization poses an existential threat that must be challenged and resisted. One researcher wrote:
“It's not shocking that UXRs are being laid off in droves after the whole ‘democratization’ trend kicked off. If everyone thinks they can do research (and they can't), then there will be no jobs for dedicated researchers.”
For others, recent events are a sign that UXR is due for a reckoning.
“Research is about discovery, so how can it be centralized and stay unbiased, diverse and inclusive to all walks of life?”
In our survey, we had people rate their feelings about democratization on a scale from 1 (very concerned/dissatisfied) to 5 (very excited/satisfied). On average, our audience rated their feelings a 2.95 out of 5—just below neutral. Sentiment toward democratization was lowest among UXRs, who gave an average rating of 2.84.
There does seem to be a “the water’s better once you’re in” scenario at play—folks on centralized teams (especially UXRs) took a dimmer view of democratization than people already working in distributed practices (2.84 vs. 3.05 on average).
We dug into qualitative responses on this topic to understand where our researchers were coming from:
Of those who left a qualitative response on this subject (N=526), 45% shared predominantly negative views about democratization. People seem primarily concerned that it reduces research quality (17% of open responses), reduces research impact (5%), puts UXR jobs at risk by giving leadership an excuse to ax research-specific departments (7%), and piles extra work onto researchers by asking them to become educators—and onto PWDRs by asking them to become researchers—roles that may not align with their expectations (3%).
The latter point—that UXRs are expected to be educators, not that UXRs necessarily dislike this role—is supported by our data: Most (73%) of the people we surveyed said that the responsibility of teaching research best practices falls on the UXRs in their organization, even when there is a Research Ops function present.
Some folks, on the other hand, welcome the shift. Roughly 28% of those who left a qualitative response expanded on their positive view of democratization, saying it enhances research by bringing more perspectives into the fold, reduces biases, and increases the amount of research that can be done.
“It’s fine and I welcome it, everyone has the right to be data-informed,” wrote one person. “The more research, the better!” said another.
Another 11% expressed more balanced views about the subject, saying that execution is key. In the words of one survey participant: “It's necessary, we just have to get it right.”
And some folks are, frankly, just done with this conversation:
“I'm honestly just sick of talking about it. Why does our industry have to have a single view on this? Stop trying to make fetch happen and accept that ‘it depends on organizational context’ is the answer.”
This year, 82% of the folks in our survey said they track the impact of their research—a notable uptick from last year (when 68% said the same). This suggests an increasing awareness of the importance of measuring research outcomes.
Follow-up meetings with stakeholders are the most common method of assessing research impact overall, followed by the use of Key Performance Indicators (KPIs) and manually tracking the number of decisions influenced.
UXRs are more likely to use the latter method—43% say they manually track research-influenced decisions, compared to 29% of PWDRs and 24% of ReOps Specialists.
Meanwhile, PWDRs are the most likely to use KPIs or other quantitative methods—47%, compared to 39% of UXRs and 41% of ReOps Specialists.
These folks may be onto something—when we looked at how people rated their satisfaction with the way research is used to make decisions at their company, people who use KPIs to track their impact tend to rate their satisfaction in this regard highly (4-5 out of 5), as do those who say their company built a custom tool for this purpose.
On the flipside, nearly half (47%) of the folks who say they do not track research impact at all said they were dissatisfied or very dissatisfied (1-2 out of 5) with how their work is used in decision-making.
Even more interestingly, we found that there was a clear correlation between how successful researchers felt in their efforts to track research impact and their overall fulfillment at work.
People who felt tracking efforts were very successful had an average fulfillment rating of 4.26. Comparatively, people who felt very unsuccessful in this regard had an average fulfillment rating of 3.09. (Perhaps confirming the old “ignorance is bliss” adage, people who make no effort to track their research outcomes rated their overall satisfaction somewhat more highly at 3.28 on average.)
Generative, evaluative, continuous. Qualitative, quantitative, mixed methods. Discovery, testing, go-to-market. Moderated, unmoderated. Longitudinal. Biometric. AI-driven.
There are many ways to conduct user research (and we’ve written about many of them in the User Experience Research Field Guide, by the way).
As part of our survey, we asked people about how different types of research were handled in their organizations. An analysis of the data revealed some interesting patterns, but not many surprises.
UXRs tend to conduct both generative and evaluative research, and seem to favor a mixed methods approach (85% of people said the UXRs on their team use mixed methods, compared to less than a third who said the same of Designers and PMs).
When PMs conduct research, they are most commonly focused on evaluative goals (according to 77% of our audience), using either qualitative or quantitative methods. Meanwhile, it seems that when Designers are involved in research, they typically focus on evaluative research (say 93% of our audience, compared to 40% who report Designer involvement in generative research) using qualitative methods (82% vs. 31% quantitative or mixed methods).
Note that this data excludes answers from people who said “I don’t know” or reported that a role was not involved in any such research.
Overall, 38% of our audience said that their teams conduct continuous research, with PMs and ReOps being the most likely to respond affirmatively (65% and 54%, respectively, compared to 37% of UXRs).
In fact, PMs are twice as likely as UXRs to regularly employ continuous discovery interviews in their research.
We asked people how many moderated, unmoderated, and mixed methods studies they conducted in the last 6 months. All role segments reported that they conduct moderated studies most frequently, followed by unmoderated studies and then mixed methods.
The most commonly used methods are 1:1 interviews (which 88% of people said they use often or always), usability tests (80%), surveys (62%), concept tests (51%), and competitive analysis (44%).
UXRs rely most heavily on interviews, usability tests, and surveys, while PWDRs—especially Product Managers—seem to use a wider variety of methods more frequently than UXRs.
When we drill down further into the methods that people say they “always” use, we find that both PMs and Designers are much more likely than UXRs to use quantitative methods (like A/B or multivariate tests and behavioral product analytics) and biometric methods (like eye-tracking), as well as heuristic analysis and competitive analysis.
Designers are also 2x more likely to say they always conduct accessibility tests and preference tests as part of their studies.
Meanwhile, PMs are over 2x more likely than UXRs to frequently conduct continuous discovery interviews, and 5-10x more likely to use focus groups, card sorts, participatory/co-design studies, and diary studies on a regular basis.
When it comes to learning about customers through other methods, UXRs are more inclined than other groups to use data science/product analytics reports and online research.
On the other hand, PWDRs (particularly PMs) rely more heavily on CS/support team notes or reports (80% vs 60% of UXRs) and ad hoc conversations with customers (77% vs. 52% of UXRs).
ReOps Specialists and PMs are twice as likely as UXRs and Designers to utilize customer advisory boards (40-43% vs. 21-24%).
All in all, artificial intelligence (AI) in user research is a topic of both interest and caution. While a significant portion of researchers are currently using AI or planning to do so, factors like DEI considerations, data privacy concerns, and personal attitudes towards AI play a role in shaping researchers' decisions.
A fifth (20%) of our audience is currently using AI in their research, with an additional 38% planning to incorporate it in the future.
PMs are the most likely to have adopted AI already, while our small sample of Marketers (N=16) seem the most eager to jump on the AI bandwagon, with 56% of them planning to utilize AI for research at some point.
ReOps Specialists and UXRs appear the most hesitant about this new tech—27% of these folks said they have no plans to use AI for research, the largest percentage among role segments.
Novice and experienced researchers seem to be embracing or rejecting AI at similar rates, suggesting that readiness to adopt this particular new technology is not necessarily tied to one's level of experience in the field.
We analyzed qualitative responses on the topic to understand what most excites and worries researchers about the rise of AI.
Half (50%) of the qualitative responses we received on this subject (total N=582) trended negative—although average sentiment regarding AI was neutral (3.0 out of 5).
Folks who take measures to ensure that their research is diverse, equitable, and inclusive are somewhat less likely to currently use AI (19% vs. 23%) and somewhat more inclined to say they have no plans to do so (27% vs. 22%), compared to people who take no measures regarding DEI in research.
In their open-ended responses, some of these people (24%) expressed skepticism about data accuracy, the replacement of real people by AI participants, and rigor in research:
“[I] have concerns that people who don't understand UXR will think it's viable to replace us with an AI tool. I also think it will amplify our own biases and misinformation.”
There seems to be a correlation between concerns about data privacy and inclusion, and one's willingness to embrace AI. People who expressed positive feelings about the current state of data privacy and security were more likely to be current users of AI in their research (27% vs. 16% of those who felt negatively about this issue).
Conversely, 16% of open responses expressed worries about the lack of regulation and data privacy, and said we’re adopting this new technology too quickly.
And 9% expressed fears about what AI might mean for their job security.
“Not sure if ChatGPT will help my job or eliminate it.”
On the other hand, 42% of open responses focused on the positive impacts of AI—namely that it offers new opportunities, streamlines research processes, reduces mundane tasks, and/or enhances their work (or has the potential to do so).
“The more sophisticated it becomes, the less I have to do!”
Other folks are on the fence. Around 15% of qualitative responses reflected uncertain or mixed sentiments (or apathy); “cautious” and “cautiously optimistic” were the terms these folks used to describe their feelings about AI.
Some people said they’re excited by this new technology, but worry that researchers are getting ahead of themselves:
“I have mixed feelings. I’m excited for certain productivity gains around rote processes, [but feel] skepticism about nuanced analysis [and] concern that there will be an overreliance on AI in UX before it's ready for prime time.”
While the majority of our audience (86%) indicated that they take measures to ensure inclusivity in their research, this number is down slightly from 91% in 2022.
This dip could be partially attributed to the higher participation of global researchers in this year’s survey—data suggests that researchers outside of North America are less likely to take DEI into consideration in their studies. To quote one European researcher, “I live in a country where nobody cares about [inclusivity and diversity] a lot.”
But before we go pointing fingers at folks on the other side of the world, it’s worth considering the response of one US-based researcher who wrote: “Often projects have very limited user pools or available stakeholders, so inclusivity is a privilege.”
It's encouraging that a majority of researchers are taking steps to ensure that their work is representative and respectful of the needs and perspectives of a diversity of users.
But there is room for improvement—both in regions where the adoption of these measures seems to be lower, and on teams where inclusive research and design is seen as a luxury.
Interestingly, people without a Research Operations function were almost twice as likely (18% vs. 10%) to say that they did not implement any inclusivity measures, suggesting that Research Ops can positively impact the adoption of inclusive practices within research.
This is the part of the report where we—the leading research recruiting and panel management solution—talk about ourselves and shamelessly plug our products.
Don’t worry, there’s plenty of data here, too. But we’d be remiss if we didn’t tell you (for the sake of both transparency and marketing) that at User Interviews, we are 100% focused on simplifying participant recruitment. That’s all we do, and we do it better than any alternative method.
If you’re curious to learn more, Zoe Nagara, our Senior Product Marketing Director, wrote a great article called “Why We Exist” about what we do and why we do it, which you can read on our blog.
The median number of participants in a moderated study is 8, according to our data.
We also found that our researchers conducted a median of 4 moderated studies in the last 6 months, indicating that a typical researcher recruits around 32 participants (4 studies × 8 participants) within that time.
(That’s just for moderated research. If we also include the median number of mixed methods studies (3) in our calculations, we find that a “typical” researcher needs to recruit around 56 qualified participants in a 6-month period. See the quick math sketched below.)
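Here is that quick math as a runnable sketch. Note one assumption on our part: the median of 8 participants per study was reported for moderated research, and we apply it to mixed methods studies as well.

```python
# Back-of-the-envelope recruiting math from the medians above.
# Assumption (ours): mixed methods studies also recruit ~8 participants each.

median_participants_per_study = 8
median_moderated_studies = 4  # conducted in the last 6 months
median_mixed_methods_studies = 3

moderated_recruits = median_moderated_studies * median_participants_per_study
total_recruits = moderated_recruits + (
    median_mixed_methods_studies * median_participants_per_study
)

print(moderated_recruits)  # 32 participants for moderated research alone
print(total_recruits)      # 56 qualified participants over 6 months
```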
So who are these participants, and how do researchers recruit them?
Most researchers (65%) primarily rely on their own customers for research, especially in non-agency settings.
These participants are most commonly selected based on their patterns of product usage or professional criteria, except when a researcher primarily recruits external users—in which case they tend to select participants from the general population rather than by product usage or job experience.
Our audience uses a blend of recruiting methods to source participants. On average, people use 3 different methods for recruiting their own customers, and 2 different methods for outside participants.
Email emerges as the most popular method for recruiting customers overall (used by 51% of our audience), followed by intercept surveys (43%). Among User Interviews customers (N=321), self-serve recruitment tools (like ours) are the most popular method for recruiting one’s own customers (49%), followed by email (48%).
When it comes to recruiting external users, User Interviews is the most popular method by far among our own customers (60%), followed by a built-in tester panel (such as those offered by UserTesting, UXtweak, and other platforms: 34%).
Non-UI customers are more likely to use solutions like recruiting agencies (45%) and built-in testing panels (43%).
PWDRs are more inclined than UXRs to post in LinkedIn/Facebook/Slack groups (17-29%, vs. 9-15%), company social channels (17-18% vs. 11-14%) and Customer Support or Sales teams (17-40% vs. 7-35%) for both internal and external participant recruitment. (This is perhaps unsurprising, given that this group includes Marketing, Sales, and Customer Support folks).
Meanwhile ReOps Specialists tend to employ a wider variety of recruitment methods overall, but are less likely to rely on built-in survey panels or social media.
Predictably, it takes longer to recruit for moderated studies than for unmoderated ones.
A plurality of people (40-43%) said that it takes 1 to 2 weeks to recruit for moderated studies (1:1 interviews, diary studies, and focus groups) and 3 to 5 days for unmoderated usability tests and surveys (35-36%).
Over half (55%) of User Interviews customers (N=321) said they typically fill an interview study in under a week, whereas only 41% of non-customers (N=608) achieve the same outcome.
Gift cards are used by a majority of our audience (66%), making them the most popular form of user research incentives, followed by cash or cash equivalents (39%).
Researchers in certain industries, such as healthcare (N=50), energy/utilities (N=11), and government (N=10) appear more likely to not offer any incentives.
We asked the folks who offer gift cards, cash, or cash equivalents how much they typically pay for different study types, and calculated the median and average incentive rates.
Keep in mind that the amounts in this table do not necessarily reflect the most effective incentive amount for your target participants.
Almost half of the folks in our survey (46%) said they rely on predetermined guidelines provided by their company to set incentive rates, while 23% say they “just took an educated guess.”
Research Ops professionals were the least likely to say they guessed (just 2%), instead favoring more data-backed approaches. They are the most likely (22%) to rely on the User Research Incentives Calculator from User Interviews, which is used by 15% of our audience overall.
The vast majority of researchers in our audience (97%) experience some type of recruiting pain.
The most common challenge is finding enough participants who match their criteria (70%), followed by slow recruitment times (45%) and too many no-shows/unreliable participants (41%).
User Interviews customers were less likely to say they experience these pain points than non-customers.
They are also less likely to indicate administrative challenges, such as scheduling sessions, distributing incentives, and collecting NDAs.
Some of the most common pain points seem to be alleviated by Research Ops. This is especially true when it comes to managing a panel of participants: 41% of people without a ReOps Specialist found it to be a pain point, compared to 30% of those with Research Ops.
If you’ve seen our UX Research Tools Map—an illustrated guide to the ever-changing user research software landscape—you’ll know that we spend a lot of time thinking about UXR tools around here.
That’s because the tools we use can shape the work we do, and the way we do it.
In our survey we not only asked about the tools people use, but how they use them in the course of their research. We collected a lot of data, so you can expect a separate report on UX research tools later this year (as well as the upcoming 5th edition of our UX Research Tools Map). So consider this section a precursor to future tools content! ◕
If you're not already, subscribe to our newsletter to be the first to know when future tools reports are published.
◕ See Appendix.
While we’re on the subject of research recruiting, let’s talk about recruiting and panel management tools.
User Interviews topped the list of recruiting tools among our audience (29% of people said they use our tools), followed by Respondent (12%), Salesforce (10%), HubSpot (6%), and TestingTime (6%).
While many recruiting tools now offer features for panel management, the most popular solutions for this are actually general-purpose tools like Google Workspace (22%) and Microsoft 365 (16%).
User Interviews is an exception—it’s the most popular dedicated panel management tool (10%) and 3rd most commonly used solution overall, followed by additional general-use tools like Airtable (5%), Notion (5%), Miro (4%), and Slack (3%).
Note: Our own customers make up 35% of our total survey audience (this includes both Recruit and Research Hub customers). We purposefully recruited outside our customer base and followers to expand our audience and reduce this bias.
General-purpose tools play a crucial role in all stages of user research, forming the backbone of most tool stacks. Figma tops the list—a whopping 81% of researchers use the tool for prototyping/design, while 31% use FigJam, the company’s flexible whiteboarding tool. Google Workspace is somewhat more popular than Microsoft 365 among our audience (57% vs. 51%), followed by Miro (51%).
Of course, no 2023 tool stack would be complete without a solid video conferencing solution. Zoom remains the biggest player here, and is used by 68% of our audience. Google Meet is the second-most popular option (41%), followed by Microsoft Teams (35%), and trailed by WebEx and WhatsApp (11% each).
Researchers use an average of 13 tools to conduct their research. While general use tools form the foundation of most UXR tool stacks, there are plenty of research-specific tasks that require purpose-made solutions.
Luckily, there is no shortage of options out there. (Indeed, our most recent UX Research Tools Map included over 230 products.)
In our survey, we found that, in contrast to general-purpose tools (where popularity is generally concentrated in a handful of well-established product suites), usage is spread more widely across an array of made-for-UXR tools, with smaller percentages reporting use of any one product.
The most popular made-for-UXR tools overall are:
Subscribe to Fresh Views, our weekly newsletter, to get notified about future data reports and UXR content. (We'll also send you a copy of the State of User Research 2023 survey data to explore.)
Let’s talk turkey. It doesn’t matter how much you love the work you do—unless you inherited vast amounts of generational wealth and are truly working for the sheer joy of it (looking at you, Julia Louis-Dreyfus), the money matters. We all have rents to pay, cats to feed, and well-deserved vacations to bankroll.
In this next section, we’ll be sharing our findings on median UXR income in the United States and elsewhere. But we’ll also be talking about some of the ways those salaries have been impacted in the last 12 months—from raises and bonuses to layoffs and pay cuts—and other changes that researchers have seen as a result of the current economy.
We promise it’s not all bad news, though there is plenty of that.
(We’ll be digging into UX Researcher salaries more thoroughly in a separate report later this year, so consider this next section a preview of things to come.)
To analyze UXR salaries, we had to break things down by geography. That’s because North America—particularly the United States—offers average and median salaries for UXRs that are significantly higher than those elsewhere in the world.
We also had to use our judgment when interpreting some of the open-response answers we received. (You can read more about those judgment calls and assumptions in the Appendix—look for the 🝋 symbol.)
But on the whole, we found that researchers are a well-paid bunch. The median researcher salary was considerably higher than national benchmarks for every country we looked at.
Among US researchers in our audience, for example, the median salary is $141,500—or 149% higher than the median US salary of $56,940 (2023 data, Bureau of Labor Statistics). The median salary among our UK audience is £63,648 GBP ($81,000 USD)—91% higher than the median UK salary of £33,280 GBP or $42,368 USD (2022 data, Office for National Statistics).
The median salary for Brazilian researchers (converted to USD) is $29,797 (323% higher than typical yearly earnings of $7,044—April 2023 data, CEIC); our German researchers earn a median salary of $75,514 (57% higher than the national median salary of $48,210—2022 data, Gehaltsatlas); median income for Spanish researchers is 108% higher than typical earnings, for Australian researchers the difference is +49%... and well, you get the picture.
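These “X% higher” figures are consistent with a simple relative difference between the researcher median and the national benchmark. For those following along at home, here’s a quick sketch of that calculation (our reconstruction, using the numbers above):

```python
# Relative difference between researcher medians and national benchmarks.
def pct_higher(researcher_median: float, national_median: float) -> int:
    """Return how much higher (in %) the researcher median is."""
    return round((researcher_median / national_median - 1) * 100)

print(pct_higher(141_500, 56_940))  # 149 -> US, in USD
print(pct_higher(63_648, 33_280))   # 91  -> UK, in GBP
print(pct_higher(29_797, 7_044))    # 323 -> Brazil, in USD
```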
It is worth bearing in mind that our sample sizes for regions outside North America and Europe were small, and therefore reflect likely trends rather than definitive ones.
That said, it appears that African UXRs receive the lowest median and average salaries, followed by Asia (the second-lowest median), and Latin America and the Caribbean. Oceania has relatively high average and median salaries, especially for managers.
A plurality (37%) of US-based UXRs at the individual contributor (IC) level earn between $100,000 and $149,000, with another 30% earning between $150,000 and $200,000 per year.
Most (66%) Managers remain in these income brackets, with an additional 29% reporting salaries between $200,000 and $499,999.
European UXRs earn less than their American counterparts. In fact, the median UXR salary for both ICs and Managers is 155% higher in the United States than in Europe. (For those at the Director to VP level, the difference is 114%.)
By comparison with the US figures above, in Europe, the majority of UXRs at the IC level (67%) earn between $25,000 and $74,999. Most of those folks can expect to remain in that income bracket at the Manager level (45% of European UXR Managers report salaries within this range), while another 21% earn between $75,000 and $99,999 per year.
We can’t talk about the state of User Research in 2023 without addressing the elephant in the room: namely, the widespread layoffs and cutbacks that have hit our industry and our colleagues so hard amid the current economic downturn.
Over a third (35%) of our researchers experienced a negative change in their compensation and/or benefits this past year. These changes range from reductions in benefits like home office stipends (reported by 16%) to cuts to actual base pay (reported by 5% of our audience).
Half of the people in our survey were affected by layoffs this past year.
The majority of those folks (77%) said that their organizations laid off non-researchers, while 43% lost fellow researchers as a result of personnel cuts. And a fifth of those affected (11% of our total audience) were actually laid off themselves.
In open-ended responses, researchers told us that seeing so many of their colleagues let go has been taking a toll:
“Every day, my LinkedIn feed is filled with more people being laid off, and I just hope I'm not next.”
Some people also reiterated their concerns about democratization, and its possible contribution to the recent spate of UXR layoffs:
“I worry that research isn't seen as a specialized function worth keeping around. Democratizing research is good for increasing insights and buy in, but I fear it makes it appear anyone can do our jobs.”
Amid a climate of layoffs, and for the same reasons, many companies have stopped or drastically reduced hiring—59% of researchers reported that their teams experienced a hiring freeze in the last 12 months, two-thirds of whom said that this freeze remains in place and that their company had not yet resumed hiring as usual.
Hiring freezes were more common at larger companies—55% of people at organizations with 10,000+ employees said that a hiring freeze was still in place, compared to 24% of folks at companies with under 50 employees.
As disheartening as these stats are, they don’t give us the full picture.
In fact, 17% of the folks in our survey said that at their companies, hiring has actually accelerated over the past 12 months. Just as hiring freezes were more common in larger orgs, folks at smaller companies were more likely to report that hiring had picked up speed since May 2022.
There’s good news at the individual level, too: 60% of researchers said that they received a positive adjustment to their base salary. This includes the 43% of people who received a raise in the last 12 months as a result of job performance (nicely done, everyone!) and the 26% who received a non-performance-based bump (e.g. a cost of living adjustment, or market rate calibration). Some people received both.
And 11% said they received a larger-than-expected bonus in the last year, while 16% reported that their company expanded or introduced new employee benefits.
We probably don’t have to tell you that it’s not easy to quantify feelings. But we still tried.
In our survey, we asked our audience to rate their overall fulfillment at work, their satisfaction with several job factors, and their feelings about current trends in the industry on scales from 1 to 5.
Some of our findings—like the correlation between remoteness and work-life balance, or between efforts to track research impact and overall satisfaction—have already been discussed in the sections above.
In this last section, we’ll take a closer look at the relationship between different job factors and overall fulfillment, as well as researcher opinions regarding AI, data privacy, democratization, job security, and the level of diversity in the field.
We asked people to rate their feelings about current topics in the industry on a scale from 1 (very concerned/dissatisfied) to 5 (very excited/satisfied).
On average, feelings are more or less neutral—except when it comes to the economy, a subject that has many researchers sweating, with an average score of 2.05/5.
When we asked folks to explain the score they gave, some just raised an eyebrow at the question: “Have you been outside lately?” quipped one person.
“Do I even need to put anything here?” asked another. “Every day there is a new headline about the state of our economy. We are in tatters.”
Individual contributors seem the most anxious about the current economic climate (rating their feelings on the matter at 1.93/5 on average), as well as job security and opportunities for growth in this field.
Freelancers/self-employed researchers (who, perhaps, feel somewhat more in control of their next paycheck), were the least concerned about the economy (with an average score of a still-low 2.57/5).
Analyzed along geographic lines, our 22 African researchers (who had the highest average sentiment score for nearly every topic) seem the least concerned about the economy; this group rated their feelings 3.77 out of 5, on average.
Meanwhile, folks in Australia and New Zealand (N=24) are stressed—they gave themselves the lowest scores in every category except diversity (for which they were the second-most negative group after North American researchers), rating their feelings about the economy 1.83 out of 5, on average. (Note that these sample sizes are small and these findings should be taken as potential trends.)
Perhaps most worryingly, our core audience of UXRs gave the lowest average sentiment scores for all topics except data privacy (PWDRs have the lowest rating there), by margins of 0.28 to 0.53 points. In open-ended responses, they expressed concerns about job security and opportunities for growth in their own field.
“The tech market [has not been] going well for close to a year now. A lot of the time when layoffs happen, researchers are let go. It makes me nervous to pursue challenges and goals in my career and I'm afraid I'll need to stay put at my job [...] I think it’s not a good time for big life decisions and career-wise it takes a toll.”
Indeed—as we’ll see in just a moment—career stagnation and a lack of opportunities for growth play a major role in overall job fulfillment.
We compared how researchers rated their overall fulfillment at work on a scale from 1 (very unfulfilled) to 5 (very fulfilled) to how they rated their feelings about several job factors on a scale from 1 (very dissatisfied) to 5 (very satisfied).
Among the factors that appear to have an impact on overall fulfillment (based on large differences between the average fulfillment scores of folks who are “very dissatisfied” or “very satisfied”) are:
Confidence in their company leadership/outlook (2.55 vs. 4.33), buy-in from their peers about the importance of research (2.37 vs. 3.95), and the level of bureaucracy involved in day-to-day decision-making (2.75 vs. 4.23).
But of all the factors we looked at, how satisfied people are with their opportunities for career growth had the strongest correlation with overall job fulfillment.
People who are very dissatisfied in this regard had one of the lowest average fulfillment scores (2.38—1.12 points below the overall average), while people who are very satisfied had the highest average fulfillment score (4.50) of any segment that we analyzed (including job titles, geographic regions, research practice models, team size, etc).
In other words, people who are happy with their path for growth are the most fulfilled at work, full stop. This finding is consistent with last year’s report, when we shared similar takeaways.
In this report, we’ve talked about the current state of user research teams, the methods researchers use to recruit and research, UXR salaries, and researcher sentiment about their jobs and trends within the industry. But what about tomorrow? Where do we go from here?
User Research is not a monolith. That much is clear from the diversity of answers that we received. But if we had to summarize the findings of this report into a single takeaway, it would be this: User Research is changing.
The UXR landscape looks very different at this point in 2023 than it did in 2022 (which in turn looked different than in 2021, and so on).
Looking around at this landscape, our audience feels somewhat meh about the future of User Research overall (3.6 out of 5, on average). And look, the people who tell you that User Research as you know it is dying are half right; the future of this field will not look the same as it does today. That is, frankly, inevitable. But it does not have to be a bad thing.
Ours is still a relatively young industry, one that is coming of age in an era marked by a global pandemic, looming recession, protest, war, disruptive new tech, deepening societal divisions… the list goes on. User Research is experiencing some growing pains. There are challenges ahead, but there are opportunities, too.
As Gregg Bernstein, author of Research Practice and a recent guest on Awkward Silences, explained, there is no one way to do UX research:
“It’s a long journey from the place we’re hired to the place we think we should be, but we’re not without agency. We’re researchers—our superpower is to take stock of a complex scenario and spot the possible paths forward.”
Once again, we are enormously grateful to our partners, colleagues, readers, and—most of all—the 929 researchers who took our survey and made this report possible.
If you’d like to explore their answers in more depth, feel free to download the (anonymized) dataset and run your own analysis. And if you uncover any big ‘aha’ insights that we missed, let us know!
◕ Due to an error in skip logic, the sample size for most tools-related questions was reduced from 929 to 518. The sample of 518 represents researchers who said they used any one of the research recruitment tools we asked about in Q58 (i.e. it excludes the 411 participants who selected “None of the above”).
Those tools were: ARCS, Askable, Disqo, Ethnio, Great Question, HubSpot, HubUX, PanelFox, RallyUXR, Respondent, Salesforce, TestingTime, User Interviews, or an open-response “Other” option.
We have used percentages when comparing tools with different tool Ns.
🝋 We asked for salaries in USD (and provided a link to a currency calculator), using an open-response field. We failed to specify that we were looking for annual salaries (rather than monthly or weekly). Annual figures are the standard format when discussing salaries in the United States, whereas in other regions it is more typical to cite monthly salaries. As a result, 129 responses required interpretation. For transparency, the assumptions and changes we made were as follows:
1. If the input salary was much lower than expected (between $300 and $9,000 USD) for individuals located outside the United States and Canada, we presumed that the participant had entered their monthly salary rather than their annual one. We multiplied this value by 12, and confirmed that the resulting amount was within a reasonable range for local salaries based on a quick Google search.
2. If a participant from the United States or Canada entered a 3-digit salary that appeared to be shorthand for a typical salary (based on that individual’s location, seniority, and years of experience), we multiplied the response by 1,000. For example, in the case of a VP/Senior VP in the US with 15+ years of experience, we interpreted the input “350” as $350,000.
3. Responses of $0 were excluded from median and average calculations. When a response was inscrutable (i.e. far outside the expected range, but not meeting either criterion above), we also excluded it from our calculations.
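For illustration, here is a minimal sketch of rules 1-3 as code. The function name, field names, and the expected-range check are hypothetical stand-ins; in practice, the sanity checks (like the quick Google search) were manual judgment calls.

```python
# A hypothetical implementation of cleaning rules 1-3 above.
# The expected_range argument stands in for the manual sanity checks.

def clean_salary(amount_usd, country, expected_range):
    """Normalize an open-response salary; return None if it should be excluded."""
    low, high = expected_range  # plausible annual range for this respondent

    # Rule 3: responses of $0 are excluded from calculations.
    if amount_usd == 0:
        return None

    # Rule 1: outside the US/Canada, a suspiciously low figure is read
    # as a monthly salary and annualized, then sanity-checked.
    if country not in ("United States", "Canada") and 300 <= amount_usd <= 9000:
        annualized = amount_usd * 12
        return annualized if low <= annualized <= high else None

    # Rule 2: in the US/Canada, a 3-digit figure is read as shorthand
    # in thousands (e.g. "350" becomes $350,000).
    if country in ("United States", "Canada") and 100 <= amount_usd <= 999:
        amount_usd *= 1000

    # Rule 3 (continued): inscrutable values far outside the expected
    # range are also excluded.
    return amount_usd if low <= amount_usd <= high else None


# e.g. a US-based VP/Senior VP with 15+ years of experience:
print(clean_salary(350, "United States", (150_000, 600_000)))  # 350000
```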