by Andrew Gilliam
Date Published August 1, 2019 - Last Updated February 4, 2020

Having a customer satisfaction survey doesn’t mean you have insight into what frustrates your customers. The score from your key performance indicator, whether it’s a Net Promoter Score, Customer Effort Score, or something else, is a nice barometer, but it doesn’t explain itself. To act on the score effectively, we need some context from the customer. Not all follow-up questions are created equal. If you’re not careful, asking the wrong follow-up questions can degrade the survey experience and the entire customer experience.


The traditional method of getting more specific details is to ask a series of rating scale questions, each evaluating a different attribute assumed to be important to the experience. For instance, transactional surveys in the fast-food industry might begin with an overall performance or satisfaction question and then ask “Please rate your satisfaction with the friendliness of the crew,” “Please rate your satisfaction with the accuracy of your order,” and so on. For each of these follow-up questions, respondents are presented with the same 5-point scale, which might range from “highly dissatisfied” to “highly satisfied.”

High Effort, Low Reward

This method of collecting feedback demands a ton of work from the respondent. Consider employee performance appraisals, many of which ask managers to rate employees on similar scales. Managers often find these ratings tedious because thoughtfully evaluating employees on so many different attributes takes real effort. Forcing customers to evaluate us this way is equally demanding of their time.

If a customer isn’t motivated to take a survey, they might abandon it when they see a long list of questions. If respondents are motivated by a reward offered at the end of a survey, they might hastily answer every question with the same rating. Perhaps the customer is taking a survey because they have something particular they want to share. If it’s a complaint or unresolved issue, they’re likely to mark the lowest level of satisfaction for everything just to reach an open-ended question where they can write freely. In each of these cases, the follow-up questions lose their value.

Even if customers faithfully answer every question as if they were under oath, taking action based on the results can be challenging. An average of the ratings is often the go-to method of analyzing these responses, but averages don’t tell the whole story. Wildly different response sets may result in the same average, so averages don’t provide a clear picture of the experience or how frequently problems are occurring.
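
To make that concrete, here’s a quick illustration (the ratings below are invented, not from a real survey): two very different sets of 1-to-5 ratings can produce an identical average.

    from statistics import mean
    from collections import Counter

    # Two hypothetical sets of 1-5 satisfaction ratings for the same question.
    lukewarm = [3, 3, 3, 3, 3, 3, 3, 3]     # everyone is indifferent
    polarized = [5, 5, 5, 5, 1, 1, 1, 1]    # half delighted, half frustrated

    print(mean(lukewarm), mean(polarized))  # 3 and 3 -- identical averages
    print(Counter(lukewarm))                # Counter({3: 8})
    print(Counter(polarized))               # Counter({5: 4, 1: 4})

The distributions tell two completely different stories, but the average alone can’t distinguish them.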

Low Effort, Mixed Results

One of the most noticeable shifts in modern customer surveying is to ask a single rating scale question followed by a single open-ended question, often asking why a respondent chose a particular score. This method attempts to alleviate the problems caused by a long series of rating scale questions, and in general, it makes responding much easier. When customers have a strong opinion that led to a particular score, they’re often glad to share that with us. What about customers who aren’t that passionate?

When designing customer surveys, we can’t focus only on highly engaged customers, the ones who love us or hate us. There are many customers in the middle! Assuming we’re lucky enough for them to take our survey, it’s not easy for them to tell us why they’re indifferent. They might not even have a reason in mind. These customers aren’t the ones writing essays about why we’re awesome or terrible, and they need a little nudge to help them share meaningful feedback.

The Nudge They Need

Consider the last time you used a water cooler or water fountain in your office. It’s something we often do without much thought. If I asked how to make the experience better, would you have much to say? If not, that’s alright; not everyone needs to have a strong opinion about everything! (Full disclosure: I don’t take that advice myself.) If it were your job to improve water cooler experiences, though, you might be a little frustrated. Because no one gives it a second thought, it can be hard to discover pain points.

I’ll help you out. Choose any of the following that affect your enjoyment of the water cooler: provided drinkware, bottle compatibility, temperature, flavor, purity, height, location, or ease of operation. That could have been eight tedious rating scale questions, but you’ve finished already! Best of all, if you didn’t have a strong opinion before, this might have jogged your memory about a lingering frustration.

I implement the same concept in transactional surveys using check-box or multiple-selection questions. It’s as easy as replacing a rating scale for “How satisfied were you with the technician’s courteousness and professionalism?” with “Did any of the following affect the rating you selected? (Choose all that apply.)” and then listing “Technician courteousness and professionalism” as one of many choices. I also allow the respondent to write in their own choice. No, we don’t learn any more from particularly happy or angry customers; they’re going to write us a thorough account regardless. But it does make responding easier for them, effectively getting out of the way of their original purpose for taking our survey. For customers who haven’t spent much time contemplating their apathy towards us, it makes it easier to point us in the right direction. It’s more respectful of everyone’s time and purpose for responding.
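
If it helps to picture the change, here’s a minimal sketch of how such a check-box question might be represented. The field names, and every choice other than the courteousness item, are hypothetical and not taken from any particular survey tool.

    # Hypothetical representation of a check-box follow-up question.
    followup_question = {
        "prompt": "Did any of the following affect the rating you selected? "
                  "(Choose all that apply.)",
        "type": "multiple_selection",
        "choices": [
            "Technician courteousness and professionalism",
            "Speed of service",             # illustrative choices --
            "Accuracy of the resolution",   # tailor these to your own experience
        ],
        "allow_write_in": True,   # let respondents add their own factor
        "required": False,        # no one is forced to check anything
    }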


Results to Act On

Unlike unhelpful averages, the response count for each attribute makes it easy to identify causes of friction and spot trends over time. There are no more guessing games about whether a 4 out of 5 score is good or bad; if a customer selects an attribute, it clearly matters to them. During a busy time of year, the number of times an option representing speed of service is selected might jump significantly. That jump will stand out more than a change in an average, and it makes it easier to quantify the impact on customers. You can even monitor the percentage of responses in which each attribute was selected and compare that rate across reporting periods.
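
As a rough sketch of that kind of analysis (the response data and period labels here are made up), counting how often each attribute is selected and dividing by the number of responses in a period gives a rate you can track over time:

    from collections import Counter

    # Hypothetical data: each response is the set of attributes the customer checked.
    responses_by_period = {
        "Q1": [{"Speed of service"}, {"Courteousness"}, set(),
               {"Speed of service", "Courteousness"}],
        "Q2": [{"Speed of service"}, {"Speed of service"},
               {"Speed of service", "Courteousness"}, set()],
    }

    for period, responses in responses_by_period.items():
        total = len(responses)
        counts = Counter(attr for response in responses for attr in response)
        for attr, count in counts.most_common():
            print(f"{period}: {attr} selected in {count}/{total} responses ({count / total:.0%})")

In this invented example, “Speed of service” climbs from half of the Q1 responses to three quarters of the Q2 responses, the kind of shift that’s easy to miss in an average.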

The best part of using the check-box method, as opposed to a series of rating scales, is that customers don’t feel compelled to check every box. Rating scales expect a response (or N/A), so customers feel the need to assign a rating to every attribute, whether it’s accurate or not. When customers can pick one or more factors from a list, they pinpoint the most significant causes of frustration. When the feedback we receive is specific and targeted, we can respond to it more deliberately and precisely.

The survey experience is part of the overall customer experience, and it’s important to design surveys in a way that respects customers’ time and effort. We’re asking them for a favor, after all. Following up the key performance question in the right way has a big impact on how customers feel about your survey and on how much and how quickly you can learn from it. Asking the right questions is better for customers, and it’s better for the business.

Andrew shared his insights and experience with customer surveys at HDI 2019 (now SupportWorld Live)!
Learn more!

Andrew Gilliam is a passionate customer experience innovator and change agent, with a background in IT support and customer service. As one of HDI's in-house subject matter experts, he writes and speaks about service management, technical support, and contact center trends and best practices. Andrew was among ICMI's 2019 Movers & Shakers and Top 50 Thought Leaders for multiple years, and he maintains several HDI and CompTIA certifications, including CASP+. Follow @ndytg on Twitter, connect with him on LinkedIn, and discover more at andytg.com.


Tag(s): supportworld, customer experience, customer satisfaction, customer-satisfaction-measurement
