There’s a difference between saying “I feel feverish” and saying “My temperature is 101°F.” In the second case, you’ve used a measuring device (thermometer) to find out what your temperature really is. Likewise, we can make a guess that “Our customers like us” because of comments in the hallway, but it’s much better to go after a measurement. We all know this. But how do we measure, and what kind of measurement is best?
First, it’s important to decide what you’re trying to measure. Customer service and customer experience are not the same thing, although they’re closely related. While customer service relates to the interactions a customer has with the service desk, customer experience is much more about what it’s like to be either a customer of your organization (external) or an employee who uses your services (internal). Per Google:
- Customer experience is the sum of all experiences a customer has with a supplier of goods and/or services, over the duration of their relationship with that supplier. This can include awareness, discovery, attraction, interaction, purchase, use, cultivation, and advocacy.
- Customer service is the assistance and advice provided by a company to those people who buy or use its products or services.
In an effort to improve the effectiveness of customer surveying, many institutions have gone shopping for different types of measurements. Here are some of the methods they’re using.
Customer Satisfaction (CSAT)
The CSAT score is the most widely used, especially for internal-facing technical and IT support organizations. The score is usually obtained by offering the customer four or five possible responses for each of several questions. The total responses are then tabulated and expressed as a percentage (e.g., “Ninety-six percent of our customers are satisfied or very satisfied”).
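The tabulation described above is simple enough to sketch in a few lines. The following is a minimal illustration, not a standard implementation; the response labels and the sample data are hypothetical, chosen to reproduce the "96 percent" example:

```python
from collections import Counter

def csat_score(responses, top_labels=("satisfied", "very satisfied")):
    """Percentage of responses falling in the top (satisfied) categories.

    `responses` is a list of answer labels, one per completed question.
    The labels are illustrative; substitute your own survey's scale.
    """
    counts = Counter(responses)
    top = sum(counts[label] for label in top_labels)
    return 100.0 * top / len(responses)

# Hypothetical sample: 16 "very satisfied", 80 "satisfied", 4 "dissatisfied"
responses = (
    ["very satisfied"] * 16 + ["satisfied"] * 80 + ["dissatisfied"] * 4
)
print(csat_score(responses))  # 96.0
```

Note that grouping the top two labels is a reporting choice, not a requirement; keeping them separate gives a more honest picture.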
It’s important to note that the trustworthiness of the results depends heavily on the questions asked. Very often, CSAT is used as a scale to judge the performance of the individual analyst, by asking if the analyst was knowledgeable, professional, etc. Tying individual analyst transactions to overall performance is sketchy at best and counterproductive at worst. How can I answer the question, “Did the analyst resolve your problem quickly?” when the analyst was forced to escalate because he or she lacked the authorization and access to solve the problem? The goal of a customer survey should not be the assignment of blame; it should be the improvement of service. The questions asked will also determine whether you’re seeking information about your customer service or about the customer experience.
Tip: Many organizations have eliminated the “neutral” or “neither satisfied nor dissatisfied” response in an effort to make the customer choose something closer to how they really feel. If you eliminate the neutral or neither choice, you shouldn’t use the word “average” as one of the choices (e.g., poor, average, good, excellent). Average is not a degree of dissatisfaction.
Pro: Customers everywhere are familiar with the CSAT survey. Most of these surveys are three to five questions long, which is the ideal length. Some have an open text field to add comments, and this is a very good way to gather feedback about what the customer thinks, above and beyond the questions you ask.
Con: It’s very easy to ask the wrong questions by constructing the survey from your point of view rather than the customer’s. It’s also common practice to group the top two responses—satisfied and very satisfied—together when reporting the results. This practice has the inevitable consequence of making the results look better than they really are: “We achieved 96-percent customer satisfaction” sounds better than “Sixteen percent of our customers are very satisfied and 80 percent are satisfied.”
Net Promoter Score (NPS)
This score is based on the customer’s answer to one question: “How likely is it that you would recommend us to a friend or colleague?” Responses are scored on a scale of 0–10, where 0–6 are detractors, 7–8 are passive, and 9–10 are promoters. For analysis, the passives are removed; you then subtract the percentage of detractors from the percentage of promoters. What remains is your NPS.
Let’s suppose that your most recent survey received 1,400 responses. Of the responses, 15 percent were in the 0–6 range, 20 percent were in the 7–8 range, and the remaining 65 percent were in the 9–10 range. By removing the passives (20%) and subtracting the detractors (15%) from the promoters (65%), you arrive at an NPS of 50.
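The NPS arithmetic can be written out directly. This sketch uses made-up raw scores distributed to match the example above (15 percent detractors, 20 percent passives, 65 percent promoters across 1,400 responses):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings.

    Promoters (9-10) minus detractors (0-6), as a percentage of all
    responses; passives (7-8) count in the total but cancel out.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical data matching the text's example of 1,400 responses
scores = [3] * 210 + [7] * 280 + [10] * 910
print(nps(scores))  # 50
```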
Tip: Look at public benchmarks before you decide that your score is too low. Some great companies have scores in the 40s and lower.
Pro: NPS is well suited to external, customer-facing support centers and is intended to indicate a customer’s sense of brand loyalty and willingness to become an advocate for your product or service. Many internal-facing organizations have begun using NPS as well.
Con: Now that you know the passives are discarded, you’ll never mark an NPS survey with a 7 or 8 yourself, and neither will anyone else who understands the methodology. There can be little doubt that this has a tendency to skew the results. Plus, those in the passive range might be exactly the people you want to reach out to in order to find out why you’re getting it nearly right and how to fix that.
Customer Effort Score (CES)
Customers shouldn’t have to exert a lot of effort to get their issues resolved (incidents) or obtain the things they need (service requests). CES seeks to determine how easy or difficult it is to work with your support organization.
CES is obtained by asking a single question: “How easy or difficult was it to get your issue resolved?” It’s usually rated on a scale of 0–5, where 0 is “no effort” and 5 is “a lot of effort.”
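Because CES is a single rating where lower is better, the calculation is just an average; the ratings below are hypothetical:

```python
def ces(ratings):
    """Average Customer Effort Score on a 0-5 scale.

    Lower is better: 0 means "no effort," 5 means "a lot of effort."
    """
    return sum(ratings) / len(ratings)

# Hypothetical ratings from seven respondents
ratings = [0, 1, 1, 2, 0, 3, 1]
print(round(ces(ratings), 2))  # 1.14
```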
Tip: This score requires careful communication, both when the question is posed to customers and when the results are reported to management.
Pro: It’s an easy survey for customers to answer—one question only—and that one question can be added to any existing survey.
Con: Its scoring is the reverse of most other systems. Lower scores are better. If you change the scale or make any other modifications, you have to make absolutely sure that the managers and customers you share the scores with understand those changes.
Customer Experience Mapping
Also called customer journey mapping, customer experience mapping is a way of visually representing the various elements within the customer experience and/or recording the customer’s response throughout a customer service interaction.
Customers don’t operate in a vacuum but instead interact with your organization and/or support center over a period of time. The customer journey is the sum of all of those interactions or touchpoints. Since support centers tend to be transactional, single transactions can be mapped using a swimlane diagram with associated scores.
Consider an example transaction mapped this way (there are many mapping methods), with a score recorded at each point of the interaction: 3 at the point of the broken application, 2 after a quick fix fails, 2 as the customer sits in the queue, 9 when the support analyst does a good job, back down to 1 when the customer discovers that custom settings are missing, and 1 again as she returns to the queue. Note that even though the second support analyst does a good job, the customer’s experience is tarnished by having been through all of this before, and the interaction only scores a 6. When it ultimately needs to go to the software developer for resolution, the customer’s experience is a 1. These individual scores can be tallied for each customer, transaction, or group of transactions, depending on your view and preferences. In this case, the score is 25 (3+2+2+9+1+1+6+1).
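A swimlane map like this reduces to a list of labeled touchpoint scores, which makes tallying and spotting the low points trivial. This is only a sketch; the touchpoint names are paraphrased from the walkthrough above:

```python
# Touchpoint scores (0-10) from the example transaction above
touchpoints = [
    ("application breaks",       3),
    ("quick fix fails",          2),
    ("waits in queue",           2),
    ("first analyst helps",      9),
    ("custom settings missing",  1),
    ("returns to queue",         1),
    ("second analyst helps",     6),
    ("escalated to developer",   1),
]

total = sum(score for _, score in touchpoints)
print(total)  # 25, as in the text

# The lowest-scoring touchpoint is the obvious place to start improving
worst = min(touchpoints, key=lambda t: t[1])
print(worst[0])  # custom settings missing
```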
Pro: As part of a larger customer experience management program, mapping can provide much more useful insight into the real customer experience than mere transactional surveys, such as CSAT, NPS, and CES. It yields very granular information about the customer viewpoint, and it’s specific enough to point out areas that need improvement.
Con: This method requires a great commitment of time: time to design, time to survey and interview, time to calculate, and time to report. It’s not something built into most software tools, and it may require analysts to elicit multiple responses from the customer or rely on the customer’s recollections of how he or she felt at each step.
Asking About Everything
A couple of years ago, I stayed at a hotel for one night. It was part of a chain I hadn’t used before, and I indicated my willingness to take a customer satisfaction survey. As hard as it is to believe, I received a 142-question survey about my one night, with questions about everything from the quality of the sheets to the scent of the soap. Of course, I didn’t complete it. Although this is an extreme example, every day surveys are left incomplete simply because they’re too long. How long is too long? Ask your customers. In general, you’ll find that people are willing to answer about five questions.
Asking Only About the Transaction
Were you connected quickly? Was the analyst polite? These things are good to know, but they may have no bearing on the customer’s experience of your services. Why did he call? In general, people call the support center for one of two reasons: something is broken (an incident) or they need something (a service request). If the call was connected quickly and the analyst was polite, but the customer didn’t get what he needed, how do you expect him to answer those questions? Should he be less than satisfied with an analyst who was the epitome of professionalism but didn’t have the authorizations needed to reset his password, fix his problem, or order what he needed?
Calculating Results Based on Faulty Data
Let’s say your desk resolved 4,000 tickets last month and sent a survey out after each resolution. You received 1,400 responses—a response rate of 35 percent, which is excellent. You do the math and find that your customer satisfaction rating is 94 percent. You celebrate.
Then someone asks you, “How many individuals are represented by those 1,400 responses?” After some analysis, you discover that there were only 900 unique respondents; the remaining 500 were repeat customers. You know that your company has 6,500 customers. So, 94 percent of 900 respondents means that 846 people out of 6,500 gave you high marks. That’s 13 percent of your customer base, and it’s that 13 percent that took the time to contact you because something was broken or something was needed. Of the 5,600 people who didn’t contact you last month, how many didn’t do so because of bad previous experiences or a belief, founded or unfounded, that you can’t or won’t do anything for them?
What’s more worrisome is that two percent of the respondents gave you really low marks. In response, you instituted some changes to address their concerns. That means you’re making changes to your procedures based on the negative feedback of eighteen people (0.02 × 900) out of 6,500.
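The arithmetic in this cautionary tale is worth laying out explicitly, since the headline number and the customer-base coverage diverge so sharply. A minimal restatement of the example above:

```python
# Figures from the example in the text
total_customers    = 6_500
surveys_sent       = 4_000
responses          = 1_400
unique_respondents = 900
csat               = 0.94   # headline satisfaction rating
low_marks          = 0.02   # fraction of respondents with very low scores

response_rate = responses / surveys_sent            # 0.35
happy_people  = round(csat * unique_respondents)    # 846 individuals
coverage      = happy_people / total_customers      # ~0.13 of the base
unhappy       = round(low_marks * unique_respondents)  # 18 individuals

print(f"{response_rate:.0%} response rate")
print(f"{happy_people} satisfied individuals = {coverage:.0%} of the customer base")
print(f"process changes driven by {unhappy} people")
```

The same 94 percent shrinks to 13 percent the moment the denominator becomes the whole customer base rather than the respondents.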
Surveying is a valid tool; just be aware of what and whom your responses really represent, and don’t make false assumptions.
Thinking That Surveys Are the Only Way to Get Feedback
Click “resolved” on a ticket and off goes a survey. It’s quick and it’s easy. Frankly, those are the main reasons why we lean so heavily on customer satisfaction surveys: they’re quick and easy, and we want quick and easy results.
Although you may glean some information from surveys, you’re often not reaching those portions of your customer base that have good, solid feedback to share about their experiences as your customers. (Remember the 20% of responses you threw out of your NPS calculation?) But how can you reach them?
Interviews: Write a standard, careful, and short set of interview questions that elicit clear responses about the elements of your service. Then schedule fifteen-minute interviews with customers who haven’t contacted the service desk recently. Is everything perfect in their technical world? Probably not. Seek to understand why they aren’t contacting the desk, what’s wrong in their world, and how you can help them. These interviews are best done in person, but a telephone call is a suitable substitute if distance is a constraint.
Focus groups: Gather small groups of customers and/or end users from around your organization. Ask several key questions, and listen actively to their responses. Record the group or take careful notes. A good approach is to have a focus group for each major business unit.
There’s no one perfect tool for measuring the customer experience. Mapping the experience may yield the most information, but it also requires more time, thought, and explanation than simpler scoring methods. The good news is that CSAT can easily be combined with NPS, CES, or both by adding a couple of questions to a basic CSAT survey.
Like anything else, you must understand where you currently are and where you want to go, and you must have some idea of the specific steps you’ll need to take to get there. Benchmark your customer satisfaction now, and pay careful attention to any changes that happen as you use the feedback you get from your customers to drive improvements in your support center.
Roy Atkinson is HDI’s senior writer/analyst. He is a certified HDI Support Center Manager and a veteran of both small business and enterprise consulting, service, and support. In addition, he has both frontline and management experience. Roy is a member of the conference faculty for FUSION 14, and he’s known for his social media presence, especially on the topic of customer service. He also serves as the chapter advisor for the HDI Northern New England local chapter.