Design critiques are useful tools for helping team members grow and learn from one another, establishing a culture of feedback, and increasing consistency within the team. At Klaviyo, we’ve applied this principle to our research share outs with a lot of success. The trouble is, traditional design critique processes don’t always translate as well to the work of reviewing research.
In this article, I’ll walk you through the Klaviyo Design Research team’s journey to a better research feedback process. I’ll also share our current approach to research share outs, which was inspired by writing workshops.
📚 Related reading: The ultimate guide to user research for UX and product designers
What is a design critique?
Design critiques (or design reviews) are project milestones in which product teams evaluate a design against its brief. These reviews are normally held after a design has reached the prototype or proof-of-concept stage of development. Most commonly, these reviews include just the team members, but sometimes teams invite other partners in the organization to gain alignment on key improvements before something is ready for wider release.
This step is intended to help teams improve the quality and consistency of their design outputs by providing a forum for designers and design leaders to give feedback on works-in-progress.
In a research context, these reviews often involve sharing a research plan, a discussion guide (especially for earlier-career researchers), or research results.
The challenges of applying design critiques to user research
Like many research teams, the newly formed design research team at Klaviyo wanted a forum to learn from one another and grow through peer feedback.
We began by following a process similar to peer design critiques, wherein the researcher would volunteer (or be “voluntold”) to share a research project.
As with design reviews, these were often held at a mature stage in development. The trouble, we found, was that team members often spent the majority of the time explaining what they’d done, leaving little time for discussion. This was true even when we held reviews earlier in the design process.
When we did have time for discussion, often only I and the most senior researchers felt fully comfortable sharing our views and feedback on the work. People rarely asked questions that would improve everyone's understanding or allow the researcher whose work was being reviewed to reach their own conclusion.
Our team ran a retro of the peer feedback process to understand what was and wasn’t working. The team appreciated the ability to see one another’s work—but also expressed getting limited value out of the forum.
As the leader of the team, I also wasn't seeing some of the benefits I was aiming for. Cross-pollination of practices was limited, despite my hiring with the intent of bringing in a variety of methodological approaches (the idea being that everyone could learn from one another, even if that meant less consistency of practice in the short term).
I found myself asking: Each team member holds unique strengths, so how can I facilitate a process that showcases these superpowers and helps us grow as a team?
Taking inspiration from writing workshops
My own experience with critiques was a bit different from many others in the design field. In fact, it wasn’t rooted in design at all.
I learned my approach in writing workshops, which tend to encourage increased introspection and reflection prior to feedback. In addition, rather than focusing on the individual and their work, the intent of these writing workshops is for everyone to discuss the work together and contribute to the different interpretations and potential improvements.
Of course, analyzing and iterating on research plans, discussion guides, final reports, etc. isn’t quite the same thing as seeking to understand and interpret prose and verse. So, instead of framing our discussions around interpretation of meaning (still valuable within a small research team with intimate knowledge of all observations, interviews, and data points), we maintained the part of design critiques where the person whose work is being reviewed comes with an ask for direct feedback.
Also, unlike a writing course, I realized we couldn’t count on people having time to pre-read the materials ahead of the discussion so we baked time in for silent review.
Klaviyo’s research review meeting format
By combining these approaches (design critiques + writing workshops) and filtering them through our own needs, we eventually came up with a review meeting format that was tailored to our team. We initially settled on a fixed 45-minute format that looked like this:
- Share context (up to 5 minutes)
- Silent review (5 minutes)
- Positive feedback (up to 5 minutes)
- Questions (about 15 minutes)
- Alternative perspectives (about 10 minutes)
- Directive feedback (remaining time)
📚 Looking for a meeting agenda that works for your team? Check out these free UX meeting templates and examples for inspiration.
Why this format works (for us)
We’ve been using this format for a few months, during which time we’ve doubled the size of the team. To learn more about what’s been working with the review format (and what could be improved), we held a retro.
Here’s what the team thought of our meeting format:
Sharing context (≤ 5 minutes):
Having a few minutes to set context allows the person sharing their work to feel in control, provide some basic information, and frame what they want feedback on.
Silent review (5 minutes):
This portion has helped more introverted members of the team reflect on the work and put form to their thoughts before the share out. In addition, because everyone gets to review the work on their own terms, they are able to re-read portions or go deeper into areas that they may have otherwise missed in a live presentation.
Positive feedback (≤ 5 minutes):
We liked having time baked in for praise—even if it’s someone’s very first time creating a discussion guide there’s always something positive that we can call out about the work. However, it was helpful to time box the positive feedback portion to keep things from devolving into flattery—this is an opportunity to learn and grow from one another, not a love fest.
Questions (~15 minutes):
If you’ve attended a traditional design critique, you’ve probably observed the tendency of individuals to tell others how they would do things, rather than seeking to understand why the other person made the initial decision.
To counteract this tendency, we baked in plenty of time for asking questions (not providing solutions). Researchers are a curious bunch by nature and no matter the level of practitioner, researchers are almost always comfortable asking questions (although sometimes you’ll need to call on individuals to make sure all voices are heard).
Less experienced researchers find value in these questions, which help them understand the thought process of more senior researchers. Meanwhile, more senior researchers find value in the additional context they need to understand the rationale behind a research decision.
Alternative perspectives (~10 minutes):
We found that explicitly asking people to voice alternative perspectives helps make junior researchers more comfortable offering different approaches to more experienced researchers. This step also made peer suggestions feel like just that—suggestions, rather than directives. Junior researchers were more likely to feel they had a choice about whether or not to take action based on feedback from more senior peers.
Directive feedback (remaining time):
At the same time, we recognized that sometimes things really do need to be mildly course corrected. So we decided to end our share outs with a block of time dedicated to more directive feedback and next steps.
Evolving the process
As our team scaled, the fixed rotation of a full 45-minute meeting per share out was no longer practical.
So, we iterated. We increased the meeting time to an hour, while reducing the amount of time each researcher has the floor to 25 minutes (with a 10-minute buffer to account for share outs running over or any unforeseen delays).
Our current meeting format:
- Share context (up to 3 minutes)
- Silent review (5 minutes)
- Positive feedback (up to 3 minutes)
- Questions (about 8 minutes)
- Alternative perspectives (about 8 minutes)
- Directive feedback (remaining time)
- Repeat the above for the 2nd person sharing
To further ensure that the feedback we are giving/getting is timely, we’ve also started to kick off the meetings by asking if anyone has anything they need urgent feedback on. If so, researchers will swap the order of their share outs to make sure nobody on the team is being blocked.
These meetings have helped us improve the quality of our research, and practitioners have begun to organically adopt each other’s approaches, which increases our consistency without the need for leadership to dictate an approach. Moreover, we’re now seeing a culture of growth on the team. People are proactively seeking feedback more often—for example, by individuals posting work-in-progress in our Slack channel with questions for the team.
Perhaps the outcome I'm most proud of is the feedback from new team members. They’ve expressed appreciation for the meeting format and how it achieves a level of peer conversation about research that their prior companies strove for but never accomplished.
Designing your own research review meeting
When working on establishing this practice for your own team, make sure your first priority is making the team comfortable with being uncomfortable.
Focus less on the number of reviews or scalability at first, and more on the quality and culture change. This will mean setting up forced rotations, making sure everyone participates in the conversation by calling on them and reiterating the value of their contribution in your 1:1s, and not pulling punches in your feedback; just save anything directive for the appropriate section.