Note to the reader:
This part of the field guide comes from our 2019 version of the UX Research Field Guide. Updated content for this chapter is coming soon!
Your support and sales teams are on the front lines, interacting with users day to day, hour by hour. Because of this, your customer-facing teams are a great ongoing source of qualitative user insight. Developing ways to access their knowledge, build relationships with them, and align your interests is an important part of managing your ongoing, post-launch listening research.
We talked a bit about ongoing surveys, like NPS, CSAT, and CES, in a previous chapter. You may already be working with your support and sales teams to collect and analyze that data. Great, you have a leg up. That data is valuable because, once the processes are set up, it keeps coming to you in a fairly automated way, and it gives you a metric or two you can baseline and track over time. For those reasons, quantitative data is really useful for ongoing listening, but qualitative data plays an important role, too, in understanding why users are reacting positively or negatively to your product.
Much of the data you can get from your sales and support teams will be qualitative, based on the stories and feedback they hear each day. A great way to encourage the exchange of this qualitative data is to build relationships with customer-facing teams. This will facilitate the organic passage of knowledge and help lay the foundation for more systematic processes. If you’re at a large organization, start by identifying who has the information you need. Then, make friends with them! Figure out how your work can benefit each other, and start coming up with ways to make those things happen.
One of the biggest stumbling blocks when it comes to collecting qualitative feedback is making sure there’s a system of record for organizing it all. If someone on your sales team tells you that everyone struggles with a certain feature, how are you going to document that? How will everyone access it when it's most relevant and actionable?
You can do this with a spreadsheet, a project management tool, or regular meetings where someone takes notes and adds the insights to the company wiki, to name a few examples. Many organizations now use Airtable to capture "nuggets": small units of user insight that can be accessed on demand, for instance when building a product that addresses the situation a nugget describes. We actually use this solution at User Interviews!
Another recommendation is to break up the kinds of feedback that are useful to the various teams in the company (sales, support, marketing, etc.). Different teams can then filter based on which team submitted the feedback, or which team it is most relevant to. The point is that user feedback is often relevant to many teams, so keeping it together in one place for all to see makes a ton of sense, and marks meaningful progress for many organizations toward democratizing research and the user experience in general. Make sure to discuss this data on a regular cadence, so it doesn’t fall into the ether.
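To make the tagging-and-filtering idea above concrete, here is a minimal sketch in Python. The `Nugget` record, the team names, and the sample feedback are all hypothetical; in practice this structure would live in a tool like Airtable or a spreadsheet, not in code.

```python
from dataclasses import dataclass, field

# Hypothetical "nugget" record: each piece of feedback is tagged with the
# team that submitted it and the teams it is most relevant to.
@dataclass
class Nugget:
    summary: str
    source_team: str                                     # who heard the feedback
    relevant_teams: list = field(default_factory=list)   # who should see it

def nuggets_for_team(nuggets, team):
    """Return every nugget a given team submitted or is tagged on."""
    return [n for n in nuggets
            if n.source_team == team or team in n.relevant_teams]

# Sample (invented) feedback from customer-facing teams.
nuggets = [
    Nugget("Users struggle to find the export button", "support", ["product"]),
    Nugget("Prospects ask about SSO on every demo", "sales", ["product", "marketing"]),
    Nugget("Churned user cited slow load times", "support", ["engineering"]),
]

for n in nuggets_for_team(nuggets, "product"):
    print(n.summary)
```

The design point is simply that one shared store with team tags lets every team pull its own view of the same feedback, instead of each team keeping a private list.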
An essential aspect of UX research, and ongoing listening in particular, is analyzing user interactions with your support and sales teams across your website, email, chat, demos, or call center. Work with your sales and support leadership to find ways to evaluate this data. Perhaps sales or support puts together a weekly or monthly report on what they're hearing on the front lines, with an emphasis on trends. Or maybe you set up a recurring one-on-one with the same, or different, members of these teams to accomplish a similar goal. Perhaps representatives from each team come together for a "voice of the customer" session.
If your system for organizing feedback includes the key kernels of insight gathered across email, chat, and phone, that may be enough for your needs. But take the time to understand where the data is, what kind of data is available, and make sure you have access to what you need to stay close to the user after a product has launched.
Your company likely has a self-service knowledge base where users can go to ask questions and get the answers they’re looking for. Using your web analytics data, you’ll be able to understand which articles get the most and least views, and which ones are marked as most and least helpful. You can also look at what people search for most frequently and where they click. This data can give you insights into where improvements can be made in your product. If people have a lot of questions about a particular topic, is it because the product experience could be more seamless in those areas?
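As a sketch of that kind of analysis, the snippet below ranks knowledge-base articles by how often readers mark them unhelpful relative to views, and surfaces repeated searches that returned no results. The article titles, counts, and search terms are all invented; a real analysis would read this from your analytics export.

```python
from collections import Counter

# Hypothetical analytics export: (article, views, marked_helpful, marked_unhelpful)
articles = [
    ("Resetting your password", 1200, 40, 3),
    ("Exporting reports", 900, 12, 55),
    ("Billing FAQ", 300, 20, 2),
]

# A heavily viewed article that is frequently marked "unhelpful" hints at a
# confusing product area, not just a weak article.
by_friction = sorted(articles, key=lambda a: a[3] / a[1], reverse=True)
print(by_friction[0][0])  # article with the highest unhelpful-to-views ratio

# Repeated searches with no results point at missing content.
searches = ["sso setup", "export csv", "sso setup", "dark mode"]
results_found = {"export csv": True}  # hypothetical: which searches matched an article
zero_result = Counter(s for s in searches if not results_found.get(s, False))
print(zero_result.most_common(1))
```

Even this crude ratio is often enough to decide which articles, and which parts of the product behind them, deserve a closer qualitative look.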
Because a knowledge base is something of an extension of the product and support team, you may also uncover opportunities to improve the knowledge base itself in this process. You might create content for searches that return no results, or run further qualitative usability testing to make sure people can find and understand the content they need. Work with your support and product teams to tackle the best opportunities on a regular cadence here.
Bugs stink. But when you’re building new products, and constantly iterating on them, bugs happen. Some bugs are relatively harmless—they’re mildly irritating but don’t ultimately prevent users from accomplishing key tasks. But there are also bugs that can cause major interruptions, driving users away.
Before releasing a new experience, you do everything you can to find and squash bugs. Often, QA teams get involved, as well as other software developers. Even so, bugs crop up. As a UX researcher, bug reports can be a useful part of your ongoing listening toolkit.
Take it from Itamar Turner-Trauring, Founder of Code Without Rules:
Your software has bugs. Sorry, mine does too. Doesn’t matter how much you’ve tested it or how much QA has tested it, some bugs will get through. And unless you’re NASA, you probably can’t afford to test your software enough anyway. That means your users will be finding bugs for you. They will discover that your site doesn’t work on IE 8.2. They clicked a button and a blank screen came up. Where is that feature you promised? WHY IS THIS NOT WORKING?!
Thankfully, as a user researcher, you can make use of what your users are telling you.
If a user interacts with your experience and consistently finds bugs, this is obviously a problem. The user will not feel positively about your brand, even if they are still able to complete the tasks in question. According to Turner-Trauring, users have only two options when they confront bugs: they can put up with them, or they can walk away.
If you constantly fix your bugs, users are likely to be happier. They’ll stick around. However, it’s not fixed bugs that make users loyal to an app or experience. What’s most important is that these users feel heard.
A good bug feedback system means users are able to report bugs as quickly and as easily as possible, and then feel that your team hears their feedback, responds to it, and follows up.
Ask yourself:
Do you have an internal Slack channel, dedicated email, or software that helps you identify and share news of bugs? Your product or engineering team probably has systems in place you can take advantage of as a source of data for your ongoing listening efforts.
FAQ and support desk reporting can similarly help you understand where users are getting caught up or confused in the product experience, reflecting usability issues that might not actually be bugs—someone internally may have even thought they were features!
Users don't always know if they’re encountering a bug or a usability issue. Your "bug reporting" system therefore likely straddles several of the above types of reports. Work with your support and engineering teams to find the right sources of data to uncover key bugs and usability issues, even if everything isn’t always clearly delineated in exactly the right place.
We’ll make the assumption that someone on your product or engineering team is actively working to fix bugs, but this reporting has value beyond fixing what’s broken. If certain tasks or areas of your product have a history of bugginess, that’s good insight to be aware of when you’re evaluating how to improve that experience. It may be that the experience itself is pretty good; it was just riddled with bugs in the past. If NPS drops in a given week, it may be connected to a string of bugs, and not a sign that you built the wrong feature the wrong way on a broader level.
The best way to use bug reporting for post-launch ongoing user research is to track the volume of bugs coming in over time, and the nature of those bugs. This will add a layer of useful context to your analysis of a product's rollout and success as you look to make constant improvements.
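The bug-trend analysis described above can be sketched in a few lines. The report dates and product areas here are made up; the point is just counting reports per month (to compare against an NPS dip or a release date) and per product area (to spot a history of bugginess).

```python
from collections import Counter
from datetime import date

# Hypothetical bug reports exported from a tracker: (date_reported, product_area)
reports = [
    (date(2019, 6, 3), "checkout"),
    (date(2019, 6, 5), "checkout"),
    (date(2019, 6, 20), "search"),
    (date(2019, 7, 2), "checkout"),
    (date(2019, 7, 9), "onboarding"),
]

# Volume per month, to line up against NPS dips or a feature rollout.
per_month = Counter(d.strftime("%Y-%m") for d, _ in reports)
print(per_month)

# Which areas of the product have a history of bugginess.
per_area = Counter(area for _, area in reports)
print(per_area.most_common())
```

A spike in one month, or one area dominating the counts, is the kind of context that should accompany any judgment about whether a post-launch experience is actually working.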
Sales, support, product, and perhaps even marketing have user data you can use. You probably have data they can use too. Understand who has what, where and how they're collecting it, and seek to keep your insights accessible, organized, and actionable. The more all departments share a common language and common insights about user feedback, the more your organization can march forward on a cohesive mission to improve the user experience across every touchpoint. There are infinite ways to get there, but the underlying principles are the same.