No one conducting user research does so to live out their villain fantasy (at least we’d like to think so). But even those with the best intentions can unknowingly skip out on ethical protocols, like collecting signatures for consent notices or protecting private participant data.
For many user researchers, the consequences of lapsed ethics are relatively minor: hurt feelings, discomfort, or confusion. But even that can damage your research by shrinking the pool of prospects and customers willing to participate. It can also harm your broader brand reputation and, in some cases, escalate into costly legal trouble. And for those in high-stakes industries like healthcare and finance, or those who do research with protected classes like children or the elderly, the risk cannot be taken lightly.
Anyone doing research in an organization needs to understand the importance of ethical practices and be equipped with the necessary tools and knowledge. While writing forms, collecting signatures, and checking other legal boxes isn’t glamorous, investing in tools and systems that put the admin side of research on rails ensures the consistency, ethics, and legality of your research practice as it scales.
This guide will give you the tools to ensure your entire research process–from recruitment to reporting–is ethical, no matter who is doing the research.
Before we dive into the nitty gritty, let’s align on what we mean by ethics in research: the norms that separate good conduct from misconduct. Though ethics span the full research process, most established ethical norms and guidelines center on protecting participants. Here, industries of all kinds look to The Belmont Report, the gold standard for research ethics, to establish guidelines.
According to the report, the basic ethical principles for protecting participants during research boil down to the following:
The Belmont Report defines the first principle, respect for persons, as “the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy.” But as Kat Zhou, Senior Product Designer at Epidemic Sound and the creator of Design Ethically, put it plainly in an episode of Awkward Silences: respect toward participants means you acknowledge and honor their different ways of existing within your research. In practice, this looks like making sure your initial screener survey, online tools, contracts, and on-site locations are accessible, and offering reasonable accommodations for people with disabilities throughout the process.
The second principle, beneficence, boils down not only to “do no harm,” but also to maximizing the good and minimizing the bad wherever possible. You’re doing research for the benefit of the people you’re working with, says Zhou. Ask yourself: “Who benefits more from your research, your participants or your paycheck/company’s bottom line?” Don’t let business objectives override respect for participants. For example, you can work for a cigarette company and be very inclusive throughout your research, but at the end of the day your work still champions a problematic product.
The third principle, justice, asks: who does your research help, and who does it hurt? Zhou recommends reflecting on what your product does and how it exists in society. Ensure your participant sample represents your target user base and engages communities across race, socioeconomic status, gender, disability, and sexual orientation. But you must also consider: do the costs of your research fall disproportionately on those who are already disadvantaged?
Ruha Benjamin, a professor of African American Studies at Princeton University, speaks on how “impartial” or “data-driven” surveillance and policing technologies aim to predict where crimes are likely to occur, who is likely to commit them, or who might be a “future offender.” However, these systems are designed and deployed by people who don’t recognize or account for the structural inequalities embedded in them. They rely on datasets collected from marginalized neighborhoods, including arrest records, school suspensions, and other criminal justice data. Although researchers “include” these marginalized identities in the datasets, that inclusion reinforces historical biases, and these groups end up disproportionately harmed as a result.
Much of the above focuses on minimizing the risk participants take on by joining a study, and on keeping the researcher’s presence and biases from skewing the session to whatever degree is possible. To put this into practice, researchers must take the time and effort to ensure that participants are fully informed, take part voluntarily, can withdraw at any point, and are fairly compensated for their time.
On that last note, pay your participants for their time. Eliminating incentives can increase bias in a study, since you’ll only recruit participants who value giving feedback enough to do it for free, or who can afford the financial sacrifice of participating. At the same time, incentives can become a form of coercion that interferes with your study’s results and ethics: an outsized incentive might push a reluctant participant into a high-risk study they don’t otherwise feel comfortable with. And certain industries, like healthcare, cannot pay participants in cash due to their own codes of ethics. Ensure that, in whatever form possible, what you give participants matches what they gave you.
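If it helps to make that “equal exchange” concrete, here’s a back-of-the-envelope sketch of incentive math based purely on participant time. The per-minute rate and prep-time defaults are illustrative assumptions, not recommendations:

```python
# A back-of-the-envelope incentive estimate: compensation scales with
# the time participants give you. The rate and prep-time defaults are
# illustrative assumptions only.
def suggested_incentive(session_minutes: int,
                        rate_per_minute: float = 2.0,
                        prep_minutes: int = 15) -> float:
    """Estimate a fair incentive from total participant time."""
    total_minutes = session_minutes + prep_minutes  # screener, setup, travel
    return round(total_minutes * rate_per_minute, 2)


print(suggested_incentive(60))  # 150.0 for a one-hour session plus prep
```

Adjust the rate to the audience you’re recruiting; specialized professionals command far higher rates than a general consumer panel.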
To cover all your ethical bases, give participants a consent form, also known as an informed consent form. These documents explain the details of your study, what data you’ll collect, why you need it, and each participant’s rights. Most research should use them, and some research, like clinical studies or studies involving minors, legally requires them. Distribute these forms along with your screeners so participants can review and sign them before taking part in any research.
This easy-to-read research consent form template can be used as a jumping off point when working with your legal team. Download it on our Free Research Templates page.
Beyond ethical best practices, several governing bodies have enacted privacy laws and regulations that affect how researchers obtain, manage, and secure research participants’ personal data (sometimes referred to as personal information or personally identifiable information, PII). To comply with these laws, like GDPR, CCPA, and the CPRA, participants must manually opt in to data collection.
To minimize paperwork, this information can be included with the informed consent form, but consent must be freely given, specific, informed, and unambiguous. In a research context, that means a participant must agree to each marketing opt-in, research database, and observed session separately, giving people more control over their data.
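To make granular consent concrete, here’s a minimal sketch of how per-purpose opt-ins might be tracked in code. The class and purpose names are hypothetical; the point is that each use of participant data gets its own explicit, unchecked-by-default opt-in:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purposes -- take the real list from your legal team.
PURPOSES = ("session_recording", "research_database", "marketing_contact")


@dataclass
class ConsentRecord:
    participant_id: str
    # Every purpose starts un-consented: no pre-checked boxes.
    opt_ins: dict[str, bool] = field(
        default_factory=lambda: {p: False for p in PURPOSES})
    signed_at: datetime | None = None

    def grant(self, *purposes: str) -> None:
        """Record an explicit, separate opt-in for each named purpose."""
        for purpose in purposes:
            if purpose not in self.opt_ins:
                raise ValueError(f"Unknown consent purpose: {purpose}")
            self.opt_ins[purpose] = True
        self.signed_at = datetime.now(timezone.utc)

    def allows(self, purpose: str) -> bool:
        """Check consent before using data for a given purpose."""
        return self.opt_ins.get(purpose, False)


record = ConsentRecord(participant_id="p-001")
record.grant("session_recording", "research_database")
assert record.allows("session_recording")
assert not record.allows("marketing_contact")  # never opted in
```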
Maintaining compliance in this area can be tricky in industries where complex layers of dependencies exist, as privacy, security, and compliance must be maintained for all parties as appropriate. Nadyne Richmond, design advisor for companies like IBM, Microsoft, Included Health, and Babylon, points to U.S. healthcare technology as an example, where not only patients are involved, but also care providers, insurance providers, billing, administration, pharmaceutical companies, and more.
Before sending out your forms, work with your legal and data privacy teams to ensure all collected data is actually stored securely, with encryption where necessary, and that only authorized personnel can access it. Also brush up on your organization’s data retention policy to ensure the categories of data you collect, their purposes, and their planned storage time before deletion all comply with it.
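As a rough illustration, a retention policy can be encoded as a simple table of data categories, purposes, and storage windows that your tooling checks against. The categories and time periods below are made-up examples, not legal guidance:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention policy: category -> (purpose, storage window).
# Substitute your organization's real policy; these entries are made up.
RETENTION_POLICY = {
    "session_recording": ("usability analysis", timedelta(days=365)),
    "contact_details": ("scheduling and incentives", timedelta(days=90)),
    "signed_consent_form": ("legal record", timedelta(days=365 * 7)),
}


def is_past_retention(category: str, collected_at: datetime) -> bool:
    """Flag data whose storage window has lapsed and is due for deletion."""
    _purpose, window = RETENTION_POLICY[category]
    return datetime.now(timezone.utc) - collected_at > window


collected = datetime(2024, 1, 15, tzinfo=timezone.utc)
if is_past_retention("contact_details", collected):
    print("Delete or anonymize this record.")
```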
Once you complete a research study, make a habit of deleting all the personal user data you no longer need. And when archiving research insights, remove all PII from shared files out of respect for participants’ safety and privacy.
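Here’s a deliberately simple sketch of what automated PII scrubbing might look like before files are archived. Pattern-based redaction like this catches only the obvious cases (emails, phone numbers); a real pipeline would layer on named-entity recognition and a human review pass:

```python
# A rough sketch of PII scrubbing before archiving research notes.
# Regex redaction only catches obvious patterns; production pipelines
# usually add named-entity recognition and human review.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def scrub_pii(text: str) -> str:
    """Replace recognizable PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


note = "Follow up with jane.doe@example.com or +1 (555) 010-9999."
print(scrub_pii(note))
# Follow up with [REDACTED EMAIL] or [REDACTED PHONE].
```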
Once the recruiting phase is complete, you must attend to ethics during the actual research session as well.
Start by designing tasks to be clear, manageable, and reflective of real-world conditions. They should empower the user, not exploit them. In this vein, avoid “dark patterns,” manipulative design, or anything that might pressure or influence participants to act or respond in a certain way, like leading questions.
For sensitive industries, like finance and healthcare, it’s important not only to have a skilled researcher facilitate the session and to involve your legal team in ensuring compliance with any additional regulations (like HIPAA), but also to consult field experts who can help enact best practices for participant empathy. For example, Erica Devine, Associate Director, Patient Experience Strategy and Support at Otsuka Pharmaceutical Companies (U.S.), said in an episode of Awkward Silences that involving someone who knows the terminology and is receptive to those participants can go a long way toward establishing trust between participant and researcher, and that person can also advocate for participants’ needs throughout the pre-research process.
In some highly sensitive cases, researching your research can help as well. Will Notini, Senior Design Lead at IDEO, recounted a project that aimed to reduce the bottleneck of getting patients through the end of clinical trials, a study type that ethically requires low incentivization and thus often sees a high level of drop-off before completion. The design team hypothesized that if clinical trials happened at home, rather than at a clinical trial testing center, participants would be more likely to complete the process. They set up preliminary interviews with a few participants and showed them a very rough design concept of a clinical trial van that made home visits. To the researchers’ surprise, people had a strong negative reaction: they felt it violated their privacy as patients by outing them to their neighbors as clinical trial participants. This initial research unlocked a new goal of maintaining patient privacy at all stages of the patient journey.
While much of research ethics centers on participant/researcher interactions, consider ethics in the publishing and dissemination of your data, too. When doing quantitative research, focus on data integrity practices: report your methods transparently, avoid cherry-picking or quietly discarding results, and disclose any AI tools used in collection or analysis.
Speaking of AI tools: according to our AI in UX Research Report, 74% of respondents said they use ChatGPT in their research. While the speed and scale of these technologies hold great promise, the full scope of ethical and legal issues that can arise from such nascent technologies is not yet understood.
For example, one respondent mentioned the gender and racial bias present in many AI outputs. As technologies like AI moderation become embedded in the research process, it becomes even more important that teams review their research plans and questions for bias, so participants don’t have a poor (or harmful) experience when AI fields their responses.
Even so, UX professionals seem to use all-purpose AI tools like ChatGPT and Claude in their workflows at higher rates (64%) than AI features (23%) or research-specific tools (13%) like AI moderation. 90% say they deploy AI tools and technology during the analysis and synthesis phase, with popular tasks including summarizing notes, transcripts, and open-ended data, as well as analyzing that data for trends, themes, and clusters. This raises an ethical dilemma, however: these solution providers often lack transparency about their data protection policies, so using all-purpose tools for analysis and synthesis most likely means data is being shared and processed by third parties in ways to which participants never consented.
While a small subset of companies have built their own internal, proprietary AI as a guardrail for safe, ethical use of all-purpose AI in UX research, this is incredibly rare. For now, the most ethical course is to use AI only on non-personal, non-proprietary data during the research process, and always under firm, cautious human oversight.
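One lightweight guardrail, sketched below, is a classification gate: tag data by sensitivity and refuse to forward anything personal or proprietary to an external AI tool. The tags and the send_to_ai_tool placeholder are hypothetical, standing in for whatever vendor API your team actually uses:

```python
# Hypothetical guardrail: tag data by sensitivity and refuse to send
# anything personal or proprietary to an outside AI tool. The tags and
# send_to_ai_tool are placeholders, not a real vendor API.
SAFE_CLASSIFICATIONS = {"public", "anonymized"}


def send_to_ai_tool(text: str) -> str:
    raise NotImplementedError("stand-in for your third-party AI call")


def analyze_with_ai(text: str, classification: str) -> str:
    """Forward only data that has been cleared for external processing."""
    if classification not in SAFE_CLASSIFICATIONS:
        raise PermissionError(
            f"Data classified as '{classification}' must not leave the "
            "organization; anonymize it first or analyze it manually.")
    return send_to_ai_tool(text)
```

Pairing a gate like this with the PII scrubbing sketched earlier gets you closer to “non-personal data only,” but neither replaces human oversight.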
Beyond a single study, think about how you can keep your roadmap ethical and unbiased as well. Michele Ronsen, UX and design researcher and founder of Curiosity Tank, says that user research is often added to a roadmap to prove a point or validate an existing idea. To avoid this, only greenlight studies that focus on uncovering the truth with users.
Other common signs of bias in UX research to look out for include leading or loaded questions, cherry-picked participant samples, and analysis that seeks to confirm rather than challenge the team’s assumptions.
While many companies have research ops supporting the safety, legality, and ethics of research through processes and guidelines for consent, data privacy, and asset/information storage, our State of User Research 2024 report found that trained researchers are usually the ones monitoring ethics in practice. Nearly three-fourths of survey participants said the responsibility of teaching others about research best practices fell to the dedicated researcher, even at organizations with a research ops function.
The best way to instill research accountability in your team or organization? Proper education on ethical research standards. Bring it from abstraction into practice by focusing on the three C’s of accountability: clarity, communication, and consequences.
Clarity: By clearly defining what proper conduct in user research looks like, you’ll help educate the team on ethical research standards. We’ve made this easy for you, and put together a pre-research ethics checklist that you and your team can use when kicking off projects.
Additionally, Devin Harold, a designer and UX leader whose career has included roles at companies such as Capital One and Verizon, recommends advocating for the ethical treatment of customers using the language of business partners to earn the recognition and resources that will sustain the practice.
Communication: Don’t just drop the template link in your shared drive. Spend some effort normalizing it: share it on Slack/Teams, over email, and in a few meetings, giving the team a chance to raise feedback or concerns. Most importantly, include it in new employee onboarding so every researcher, current and future, knows what’s expected of them.
Consequences: In most organizations, HR handles the consequences of research misconduct. But you can take ownership of rewarding good behavior within the team. When someone shows a particular eye for ethics, recognize it the same way you normalize the standards themselves, whether by mentioning it on messaging channels or shouting it out in a weekly meeting.