Don’t get fooled again: The good, the bad, and the ugly of surveys, CSAT, and NPS
Did you know that 95% of respondents in a recent survey believed that books should be permanently banned for all students under 16? Perhaps you’ll be shocked to know that only 3% of people in our study believe that oxygen is safe to breathe! It’s a miracle that most folks have made it this far.
You may have sensed that the “facts” above are deceptive at best. But what do you do when the misleading outcomes of your data aren’t as obvious? The challenge for many of us in enterprise technology lies in the subtle ways that bias, errors, and misattribution lurk in our very own research.
Many of us aspire to be ‘data-driven’, hypothesis-led, or outcome-oriented, yet most organizations still lack the analytics tools, knowledge, and resources that data-driven teams require to make decisions, measure outcomes, and identify problems. At the same time, that hasn’t reduced the pressure on those teams to deliver impact reports, defend P&L, and advocate for additional resources. That’s led to the rise, and sometimes folly, of everyone’s beloved dataset – the survey.
While survey data is a ubiquitous and valuable tool in the hands of statisticians and researchers, it’s too easy (and tempting) to end up with a data story that’s less fact and more fiction. Here’s why, and what to do instead.
The Rise of CSAT and NPS
If you’ve been using surveys in a large (or small) organization, there’s a good chance CSAT and NPS are floating around your data vocabulary list. Understanding how folks feel about something is at the heart of what surveys are intended to do, and CSAT may be one of the most ubiquitous methods of collecting that information. Customer SATisfaction, or CSAT as we know it, is gold when it comes to standardizing the way we ask “does this meet your needs?”, whereas Net Promoter Score (or NPS) gauges whether you like a product so much you’d recommend it to someone else.
The benefits of these surveys have led to the widespread adoption of survey tools: predicting and reducing customer churn, identifying friction areas in a product, boosting employee engagement, and more.
Whatever it is you’re trying to measure, baseline, or report on, CSAT and NPS are a great way to turn feelings and sentiment into quantifiable data you can measure, report on, and eventually optimize! That’s probably why they’re so commonly used across different product types, companies, and organizations.
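To make that quantification concrete, here is a minimal sketch of how the two scores are typically computed. The bucket thresholds follow the standard conventions (NPS promoters score 9–10, detractors 0–6; CSAT counts the top two boxes on a 1–5 scale); the sample responses are made up:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(ratings, scale_max=5):
    """CSAT: share of respondents choosing the top two boxes (4 or 5 on a 1-5 scale)."""
    satisfied = sum(1 for r in ratings if r >= scale_max - 1)
    return 100 * satisfied / len(ratings)

print(nps([10, 9, 7, 6, 3]))  # 2 promoters, 2 detractors, 5 responses -> 0.0
print(csat([5, 4, 3, 5, 2]))  # 3 of 5 satisfied -> 60.0
```

Note that NPS can be 0 even when most respondents are happy; the score hides the distribution behind a single number, which is part of the trouble discussed below.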
Know your data types
Data is all around us: information we can use to make informed decisions, as broad as the overall revenue of your organization and as narrow as the rate at which paper passes through a commercial printer. But if it’s been a while since your last statistics course, remember that broadly speaking, there are two types of data: Qualitative and Quantitative.
Qualitative data is typically descriptive, unstructured information we can gather from newspapers, captain’s logs, and journals. Think about the ‘is there anything else you’d like to tell us’ field in a survey: it’s descriptive and informative, but requires a lot of work to process into broad consensus. If it can be felt or described, it’s qualitative. It’s really helpful for understanding the personal experiences of a customer or employee when you need to research the motivation behind an action, or better understand the “why” behind the “what.”
Quantitative data, on the other hand, is numerical in nature and can be counted. Product managers may want to understand how many customers use a particular feature, or IT managers may need to report on the number of active users to measure license utilization of a piece of software. This numbers-based research provides broad data that is great for understanding what’s happening and critical for measuring change over time.
Wait a minute – wasn’t NPS developed to quantify customer and user sentiment?
Here’s a little story about the history of NPS. While at Bain & Co, Fred Reichheld suggested using a brief survey to test and understand brand loyalty. That idea spurred what is now known as the NPS survey, which blossomed into surveys, pop-ups, emails, push notifications, text messages, airport cleanliness buttons, and even a $5B+ industry of survey-focused technology firms, all built on the premise that asking people what they think will help drive your business forward.
At its inception, NPS was a fantastic tool. It was used to predict brand loyalty, identify areas of improvement, and establish a leading indicator of brand success. It proved valuable for validating the effect of marketing campaigns and promotional incentives, and it led brands and companies to success as they iterated product offerings and built customer experiences around the metric.
The idea behind NPS, and the long history of surveys before it, is that to analyze qualitative data (i.e., how do you feel), there needed to be a way to quantify it (i.e., how many people feel that way). What we have on hand, then, is a tool that lets us measure, count, and analyze the data behind how people feel about a certain thing. Hooray for surveys!
The problem with CSAT and NPS
If you’re sick of submitting surveys, providing feedback, or checking a box, you’re not alone. Over the last 20 years, typical survey response rates have fallen from 36% to 9% and keep dropping. That trend is spilling into our work lives as well. If you’ve sent out a survey at your company, think about what your response rate was.
That’s a tricky problem when it comes to surveys, be it CSAT or any other. You see, in academia, particularly in the fields of psychology, sociology, and political science, the survey has grown to be a critical tool for getting into the hearts and minds of populations. There’s no way to quantify how many people feel a certain way about a political candidate or a particular policy other than polling. Still, critical safeguards are in place to ensure the survey data collected is viable: significance testing, sample size calculation, and bias mitigation.
Those same controls and statistical significance aren’t embedded into a typical organization’s CSAT survey the way they are at, say, an academic research institution.
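Those controls aren’t exotic, either. The textbook formula for the minimum sample size needed to estimate a proportion within a given margin of error is n = z²·p(1−p)/e². A quick sketch, assuming the usual worst case p = 0.5 and a 95% confidence level (z ≈ 1.96):

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum responses needed to estimate a proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, with p = 0.5
    as the worst case (it maximizes the required sample).
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # +/-5% at 95% confidence: 385 responses
print(sample_size(0.03))  # +/-3% at 95% confidence: 1068 responses
```

If your internal CSAT survey got 40 replies, the honest margin of error on any percentage you report from it is north of 15 points, which is worth saying out loud before the number goes in a slide.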
Concerns over the rapid adoption of, and reliance on, survey data by corporations, internal teams, marketing teams, and customer success teams have exposed a gap between what the survey data says and what your customers really mean. ADA, for one, cautions against using CSAT to predict net retention due to a lack of correlation. In fact, in Fred Reichheld’s book (yes, the NPS guy), he reported that groups relying solely on CSAT saw churn rates of 60-80% even as their scores sat in the top 80%.
Wait, why are we using surveys anyway?
We’ve come a long way since the first surveys of the 18th and 19th centuries. Specifically, something near and dear to our hearts: the rise of digital experiences. Perhaps the most exciting aspect of the digital experience is that it can be measured. Unlike the days of yore, when we had to survey folks to understand how often they accessed a particular file or the average time it took to file an application, digital experiences heralded new tools, new data, and new methods.
Web analytics allowed commerce managers to understand the frequency with which consumers purchase products, the rate of cart abandonment (how often they change their minds), and the popularity of products.
Product and UX analytics ushered in a new era of understanding as product managers and startup founders started to lean on metrics such as daily active usage, user retention, and conversion optimization, popularized by behavioral analytics pioneers like Dave McClure, who developed Pirate Metrics, and Eric Ries, who wrote The Lean Startup.
The promise of digital transformation gave organizations something unique and new in terms of how they could measure user behavior – they could actually measure it! By now, many of us are familiar with cookies, internet analytics, and the like, but we often forget to tap into the very real, very available data at hand when trying to get to the ‘why’ behind ‘what’ someone is telling us in a survey.
Clicks on a form and views on a page aren’t just data points for counting views, likes, or audience reach. They can inform product managers, operations analysts, and digital transformation managers about the actual journeys and experiences we’re reading about in user survey data. That’s why behavioral analytics gives us a powerful view that survey researchers only dreamed of.
Digital Transformation has unlocked Behavioral Analytics
Behavioral analytics provides a more comprehensive view of the customer experience by combining qualitative and quantitative methods to gather data about customer behavior, preferences, and opinions. It moves beyond traditional, self-reported survey data and instead captures actual customer behavior through sources such as website analytics, customer support interactions, and transactional data.
By analyzing this data, behavioral analytics can provide a more complete picture of the customer experience, including what customers do, which can be directly compared to what they say (and what they feel). This helps organizations better understand user motivations, needs, and pain points, and make informed, measurable decisions about how to improve the customer experience and digital transformation efforts.
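One simple way to operationalize that say/do comparison is to flag users whose survey answers disagree with their observed usage. A minimal sketch, with hypothetical users, scores, and an assumed engagement threshold of 8 sessions a month:

```python
def mismatches(survey_scores, sessions, happy_at=4, engaged_at=8):
    """Return users whose stated satisfaction (1-5) disagrees with observed usage.

    happy_at and engaged_at are assumed thresholds, not industry standards;
    tune them to your own product's rhythms.
    """
    flagged = []
    for user, score in survey_scores.items():
        acts_engaged = sessions.get(user, 0) >= engaged_at
        if (score >= happy_at) != acts_engaged:
            flagged.append(user)
    return flagged

survey_scores = {"ana": 5, "ben": 5, "cam": 2, "dee": 4}   # hypothetical data
sessions_last_30d = {"ana": 22, "ben": 1, "cam": 18, "dee": 9}

print(mismatches(survey_scores, sessions_last_30d))  # ['ben', 'cam']
```

The mismatches are the interesting leads: ben says he loves the product but barely opens it, while cam complains yet uses it daily. Those are exactly the people worth a qualitative interview.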
Measuring user adoption quantitatively is an important step in evaluating the effectiveness of your software investments. Here are some specific data points organizations should use to validate survey feedback from employees:
- Login and Usage Frequency: The number of logins and the frequency of usage can provide a clear picture of how often employees are using the software.
- Active User Count: Tracking daily or monthly active users shows how broadly the software has actually been adopted across the organization.
- Digital Adoption Metrics: In-application metrics such as time spent in the software, the number of tasks completed, and the frequency of interactions with key processes can provide valuable insights into how employees are using the software and how engaged they are with it.
- Technical Metrics: Measures such as the number of errors and crashes, load time, and overall performance can reveal whether friction, rather than low value, is driving poor feedback.
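Given even a basic login event log, the first two metrics above fall out of a few lines of code. A sketch with made-up events (the `(user, day)` log schema is hypothetical; substitute whatever your identity provider or analytics tool exports):

```python
from collections import Counter
from datetime import date

# Hypothetical login log: (user, day) pairs.
events = [
    ("ana", date(2024, 3, 1)), ("ana", date(2024, 3, 2)),
    ("ben", date(2024, 3, 1)),
    ("ana", date(2024, 3, 8)), ("cam", date(2024, 3, 9)),
]

active_users = {user for user, _ in events}            # Active User Count
logins_per_user = Counter(user for user, _ in events)  # Usage Frequency

print(len(active_users))       # 3 distinct users logged in
print(logins_per_user["ana"])  # ana logged in 3 times
```

Numbers like these are what you hold up next to the survey: if CSAT says everyone loves the tool but the active user count says a third of licensees never log in, you have your next research question.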
In conclusion, next time you’re looking for data, don’t get fooled: paint a complete picture so you can make better decisions.
What you can do to get more out of your CSAT, Surveys and NPS tools today
- Consider surveys a piece of your data puzzle. Combine them with behavioral data, system data, and qualitative interviews to develop a full picture.
- Guard against misleading data by ensuring a large enough sample size.
- Ensure your questions are free of bias and leading language.
- Compare ‘what people say’ to ‘what people do’ to spot-check the accuracy of your users’ responses.
- Treat CSAT and survey scores as indicators and research tools, not as goals in themselves.