by Robert Jew
Date Published June 12, 2012 - Last Updated May 11, 2016


Most support organizations agree that customer satisfaction (CSAT) is one of the most important metrics. But do they really believe that, or is it just lip service? If CSAT is as important as they claim, one would expect it to be rigorously measured and tightly managed. But that is not often the case. Many organizations don’t even know how to systematically measure CSAT, while others spend a tremendous amount of money and effort gathering incomplete and/or inaccurate information because they don’t employ the right processes and tools to collect statistically valid, unbiased CSAT data. Very few organizations fully utilize this precious insight to make decisions and take meaningful actions that result in better performance.

The cornerstone of every quality management system should be an effective process for gathering, analyzing, and applying CSAT data. At the highest level, CSAT is a key performance indicator (KPI); it provides feedback on what you are doing well and, more importantly, what you’re not doing well, which helps expedite improvement efforts. But it can be so much more. Analyzing data at a more detailed level provides deep insight into your customer base, enabling you to uncover customer preferences and identify the relative importance of the different attributes that drive satisfaction. Without this information, companies waste precious resources, excelling in areas that their customers don’t really care about and neglecting the things they do care about.

To that end, in this article I will present a methodology for creating a successful CSAT program that consists of four major components: 

  1. Survey design 
  2. Survey administration process and tools 
  3. Performance reporting and evaluation 
  4. Analysis of drivers and causes


Survey Design

Survey design includes the content, look, and feel of a survey. Surveys must be efficient and impartial; if improperly written, a survey can be inherently biased. The type of questions you ask, and how you ask them, will determine the type of data you get and how relevant that data is. Be careful not to ask questions in a way that influences how customers answer. Studies have shown that the way questions are worded, or even the order in which they are asked, can skew the score. Each question must also focus on distinct attributes, as overlapping content can negatively impact the statistical analysis.

We can all agree that it is very difficult to get customers to invest the time in providing feedback. To increase your response rate, make your survey short and sweet, and do some upfront planning using a top-down, hierarchical approach. Structure the survey so that it starts with a broad question on the overall satisfaction with the entire experience, and then follow that up with questions that drill down into the attributes that influence satisfaction, such as wait times, support staff, processes, etc. (see Figure 1). Always include both types of questions; the first gives you the satisfaction score, while the rest give you the details to better understand that score and identify ways to improve it.

To facilitate analysis, base most of the survey questions on the five-point Likert scale, which features a neutral midpoint (i.e., 3 = neither satisfied nor dissatisfied). This is the most commonly used scale for measuring satisfaction, and it’s the one that I recommend. Follow up on a few of the most important topics with open-ended questions to capture detailed responses.
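To make this concrete, here is a minimal sketch, in Python, of how such a top-down survey might be structured: one overall-satisfaction question on the five-point Likert scale, a handful of attribute-level drill-downs, and an open-ended follow-up. The attribute names and question wording are illustrative assumptions on my part, not a prescribed questionnaire.

```python
# Sketch of a top-down CSAT survey: one overall-satisfaction question,
# attribute-level questions on the same five-point Likert scale, and an
# open-ended follow-up. Wording and attribute names are illustrative only.
from dataclasses import dataclass, field

LIKERT_5 = {
    1: "Very dissatisfied",
    2: "Dissatisfied",
    3: "Neither satisfied nor dissatisfied",  # neutral midpoint
    4: "Satisfied",
    5: "Very satisfied",
}

@dataclass
class Question:
    text: str
    kind: str = "likert"  # "likert" or "open"

@dataclass
class Survey:
    overall: Question                               # broad satisfaction question first
    attributes: list = field(default_factory=list)  # drill-down driver questions
    open_ended: list = field(default_factory=list)  # free-text detail

survey = Survey(
    overall=Question("Overall, how satisfied were you with this support experience?"),
    attributes=[
        Question("How satisfied were you with the wait time?"),
        Question("How satisfied were you with the representative's knowledge?"),
        Question("How satisfied were you with the resolution process?"),
    ],
    open_ended=[
        Question("What is the one thing we could do to improve?", kind="open"),
    ],
)
```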

Survey Administration Process and Tools

When it comes to administering surveys, you need a tool and process that produce statistically valid data. The most common approaches are email and phone surveys. These approaches are very labor- and time-intensive, and they produce very low-volume, poor-quality data. A better approach is to utilize the web-based surveys that are included in many of the more advanced chat and remote support solutions. Web surveys typically receive more than double the response rate of traditional email and phone surveys; even at the low end (in terms of cost), web surveys typically receive 15–25 percent response rates, while email surveys usually receive three to five percent, and phone surveys even less.

The reason for this is that, in many cases, web survey tools allow you to prompt customers and representatives automatically, at the conclusion of every transaction, which enables you to collect real-time feedback from both customers and representatives. Instead of picking and choosing a small sample, the survey is offered to everyone on every transaction. This increases the number of responses from a broader cross-section of people and cases. And, by offering it immediately after the transaction, you have a higher likelihood of getting timely, relevant feedback. These features also eliminate most of the process biases caused by insufficient and nonrepresentative samples.

Performance Reporting and Evaluation

I often ask support center executives if they have CSAT data, and they proudly say yes as they blow the dust off a report hauled down off the top shelf. But within a few minutes, it becomes clear that even though they spent a tremendous amount of money and effort collecting CSAT data, they are not doing anything productive with it. Unfortunately, this is a common occurrence; management often doesn’t look at, understand, or manage CSAT results.

Remember, there is no ROI associated with data collection; data that is not used is wasted. Value is only created when data is used to drive changes that result in improved performance. But in order to do that, you must analyze the data and extract actionable insights. There are two levels of analysis: (1) a high-level performance evaluation, and (2) a low-level analysis where you dig into the details to understand the relationships between attributes and how different attributes affect the results.

When it comes to evaluating performance, one of the biggest problems across the industry is that companies report CSAT results using averages. We will discuss the “flaw of averages” in another article, but it’s worth highlighting a few points here. Although averages are easy to calculate, report, and understand, they produce oversimplified results that are often misleading and prevent management from engaging with the important details and nuances. Here is an example: Using a five-point scale, a support center measures customer satisfaction and reports an average CSAT score of 3.3, which is mediocre. To improve this score, management implements a set of process changes and training. A month later the average CSAT score is still 3.3, so they assume the changes didn’t work. They scrap the earlier initiatives in favor of trying something else. Another month goes by and the average CSAT score is still 3.3. The second round of changes doesn’t seem to have worked either. What can they do to improve customer satisfaction?

In fact, our hypothetical support center did impact customer satisfaction; they just didn’t improve the average CSAT score (see Figure 2).

Although all three months had the same average CSAT score, the underlying distributions of responses, and therefore the actual customer experience, were completely different in each month. Improving performance in each situation therefore requires different strategies and actions. Instead of looking at just one average number, best practice is to look at three metrics. The first and most common metric is the percentage of 4s and 5s; together, they represent the percent of customers who are satisfied. In our example, the percent of satisfied customers went from 50 percent to 55 percent to 42 percent. (For reference, world-class support organizations consistently achieve 90 percent or above.)

The second metric is the percentage of 5s, as this represents your most loyal customers and advocates. These customers recommend you to their friends and rave online about their awesome experiences. The third metric, and the most important, is the percentage of 1s, because these are the people who will tell their friends and rant on the internet about their horrible experiences. In our example, the first strategy increased the percent of 1s from ten percent to 20 percent, but the second brought it down to two percent. Although you can never satisfy everyone all of the time, keep in mind that best-in-class companies keep this percentage as low as possible.
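To make the flaw of averages concrete, here is a small sketch that computes the average alongside the three recommended metrics. The monthly response counts are invented so that every month averages 3.3 while matching the percentages quoted above; they are not the actual data behind Figure 2.

```python
# Sketch: why reporting only the average CSAT score hides what is happening.
# The counts below are invented so that each month averages exactly 3.3 yet
# matches the percentages discussed above; they are not the article's data.
from collections import Counter

def csat_metrics(responses):
    """Return average, % satisfied (4s and 5s), % of 5s, and % of 1s."""
    n = len(responses)
    counts = Counter(responses)
    return {
        "average": sum(responses) / n,
        "pct_satisfied": 100 * (counts[4] + counts[5]) / n,
        "pct_5s": 100 * counts[5] / n,
        "pct_1s": 100 * counts[1] / n,
    }

# score -> number of responses, per month (100 responses each)
months = {
    "Month 1": {1: 10, 2: 20, 3: 20, 4: 30, 5: 20},
    "Month 2": {1: 20, 2: 5, 3: 20, 4: 35, 5: 20},
    "Month 3": {1: 2, 2: 18, 3: 38, 4: 32, 5: 10},
}

for month, dist in months.items():
    responses = [score for score, count in dist.items() for _ in range(count)]
    print(month, csat_metrics(responses))

# Every month averages 3.3, but % satisfied moves 50 -> 55 -> 42 and the
# % of 1s moves 10 -> 20 -> 2: very different customer experiences.
```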

Most organizations that try to manage CSAT results do so by looking at individual surveys. When a negative survey is received, the most common responses are reactive: calling the customer to rectify the situation, for example, or coaching the representative who made the error. If this is the only thing you do, you may have corrected the immediate problem, but fundamentally, the process hasn’t changed; the next time a similar situation arises, the same thing will happen again. A more proactive approach is to weigh the collective satisfaction of your entire customer base against the collective performance of your entire support center to understand how you can improve your process capabilities. By understanding patterns and trends, you can identify and fix process issues and prevent future errors.

Analyzing Drivers and Causes

Knowing your CSAT score is nice, but it doesn’t tell you how to improve. A more detailed analysis is required to understand how various attributes relate to overall customer satisfaction and to identify the attributes with the greatest impact. Everyone likes to dig deeper to find out what drives satisfaction (i.e., what you are doing well) so that you can do more of it. But you also have to identify what is causing dissatisfaction so that you can correct it. This is done using statistical methods such as correlation and multiple regression analysis.

The graph on the top shows that representative knowledge is a very strong driver of overall customer satisfaction (R² = 0.73). Improving representative knowledge by ten percent will improve overall satisfaction by five percent. Conversely, according to the graph on the bottom, there is a very weak correlation between professionalism and overall satisfaction (R² = 0.17). In this case, improving professionalism won’t necessarily improve overall customer satisfaction. Needless to say, focus your resources on those drivers that will have the greatest impact, and remember that not all relationships are linear.
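For readers who want to reproduce this kind of driver analysis, here is a minimal sketch that fits a simple linear regression of overall satisfaction on each attribute and reports R². The survey scores are randomly generated for illustration only; a real analysis would use your own survey data and, ideally, multiple regression across all attributes at once.

```python
# Sketch of a simple driver analysis: how strongly does an attribute rating
# (e.g., representative knowledge) predict overall satisfaction? The data is
# randomly generated purely to illustrate the calculation.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated 1-5 survey scores: "knowledge" is constructed to strongly drive
# overall satisfaction, "professionalism" barely at all.
knowledge = rng.integers(1, 6, n)
professionalism = rng.integers(1, 6, n)
noise = rng.normal(0, 0.6, n)
overall = np.clip(np.round(0.8 * knowledge + 0.1 * professionalism + 0.5 + noise), 1, 5)

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

print("knowledge vs overall:       R^2 =", round(r_squared(knowledge, overall), 2))
print("professionalism vs overall: R^2 =", round(r_squared(professionalism, overall), 2))

# A high R^2 flags a strong driver worth investing in; a low R^2 suggests that
# improving the attribute will not move overall satisfaction much.
```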

Another technique for helping you focus your efforts and prioritize your resources involves graphing performance versus relative importance for all attributes.

The Priority 1 quadrant represents the greatest opportunity because those are the attributes that are very important to your customers, but are also areas in which you are not performing well. The Priority 2 quadrant represents your strengths; these attributes are important to customers, and you are executing them well. Although you are not doing well in the attributes in the Priority 3 quadrant, don’t focus too much effort here because these aren’t the issues customers really care about. Even if you improve them, they will have a minimal effect on satisfaction. Similarly, putting any effort into improving the attributes in the Priority 4 quadrant is simply a waste of resources.
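Here is a small sketch of how attributes might be sorted into these four quadrants once you have an importance score (for example, the regression weights or R² values above) and a performance score for each. The attribute names, values, and cutoffs are illustrative assumptions, not benchmarks.

```python
# Sketch of the performance-vs-importance quadrant analysis. "Importance" could
# come from the driver analysis above and "performance" from average attribute
# scores; the names, values, and thresholds below are illustrative only.

# attribute -> (relative importance 0-1, performance 0-100)
attributes = {
    "Representative knowledge": (0.73, 62),
    "Wait time":                (0.55, 88),
    "Follow-up process":        (0.20, 45),
    "Professionalism":          (0.17, 90),
}

IMPORTANCE_CUTOFF = 0.5   # above this, customers really care about the attribute
PERFORMANCE_CUTOFF = 75   # above this, you are already executing well

def quadrant(importance, performance):
    if importance >= IMPORTANCE_CUTOFF and performance < PERFORMANCE_CUTOFF:
        return "Priority 1: important, under-performing -> biggest opportunity"
    if importance >= IMPORTANCE_CUTOFF:
        return "Priority 2: important, performing well -> protect this strength"
    if performance < PERFORMANCE_CUTOFF:
        return "Priority 3: unimportant, under-performing -> low payoff"
    return "Priority 4: unimportant, performing well -> wasted effort"

for name, (importance, performance) in attributes.items():
    print(f"{name}: {quadrant(importance, performance)}")
```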

Properly implemented, the first two components of this CSAT methodology will enable you to collect useful data and customer feedback. This is where the rubber meets the road; if you don’t capture good data, then you risk making decisions based on misleading information. But if you do make the effort to ensure the integrity of your data, then you are ready to do some serious analysis. You can accurately assess how your organization stacks up relative to customer expectations and identify how different types of changes affect customer satisfaction. By directly injecting customer input into your management processes, you can be sure that the entire organization is always working on improving the features, products, and processes that are most important to your customer base.


Robert Jew, senior manager of business services at Bomgar, has provided business solutions to over eighty contact centers at some of the most competitive and customer-focused Global 1000 companies. He has developed processes and implemented best practices and world-class standards that resulted in significant performance improvements for his clients, such as increased customer satisfaction, increased revenue, and reduced overall costs. Robert received his MBA from the UCLA Anderson School of Management and his BS in mechanical engineering from UCLA.

Tag(s): people, customer service, customer satisfaction, performance management, metrics and measurements
