by Eric Goupil
Date Published October 8, 2015 - Last Updated May 11, 2016

It’s safe to say that all service desks are built on a foundation of specific service level agreements (SLAs) or key performance indicators (KPIs). Variables such as speed to answer, first call closure, abandon rate, and customer satisfaction are typically found scattered amongst the many service desk objectives and management reports. While these indicators may vary from service desk to service desk, they’re all based on a fairly common set of measurable data that every desk collects.


I call these measures commodity-level metrics. Everyone has them, all desks are measured by them, and they’re widely accepted in the industry. But they’re essentially commodities. In general economic terms, a commodity is a product that is indistinguishable from one provider to another. Plainly put, you either deliver the core service as expected or you don’t.

Commodity-Level Metrics: Where’s the Customer?

To expand on commodity-level performance further, let’s imagine a service desk with a speed-to-answer goal of 80%/60 (i.e., 80 percent of all calls offered must be answered within 60 seconds). At the end of the reporting period, this example desk answered 84 percent of all calls within 60 seconds, or 84%/60. Well done! The performance for this reporting period is “green,” or good! Let the recognition begin! The same logic applies to the other commodity-level metrics: meet or exceed the target, and the service desk is successful! Is that right?
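
To make the arithmetic concrete, here is a minimal sketch in Python of how such an attainment figure is computed; the wait times are invented sample data for illustration only:

    # A minimal sketch of an 80%/60 speed-to-answer calculation.
    # The wait times below are invented sample data.
    wait_times = [12, 45, 58, 30, 95, 55, 20, 130, 40, 50]  # seconds to answer

    THRESHOLD = 60  # answer-time target in seconds
    GOAL = 0.80     # fraction of calls that must meet the threshold

    within_target = sum(1 for t in wait_times if t <= THRESHOLD)
    attainment = within_target / len(wait_times)

    print(f"Answered within {THRESHOLD}s: {attainment:.0%} (goal {GOAL:.0%})")
    print("Status:", "green" if attainment >= GOAL else "red")

Notice that even a “green” result tells you nothing about the slowest calls, like the 95- and 130-second waits above; they simply disappear into the headline number.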

Before you answer that question, ask yourself the following:

  1. Who determined which SLAs are to be measured?
  2. What goal does each SLA have, and how was the objective behind it determined?

One would assume that several factors should have been considered to decide what performance goals the service desk is to achieve. What do the industry consultants say? What is affordable? How many service desk agents do we have? What hours should the desk be open? What data do we have to measure? Maybe your SLAs have “just always been there” and you don’t have a clue why you measure what you do. Do you really know how your SLAs were determined?

Similarly, the decision makers who select these SLAs often come from a cross-section of the business, and they may have differing objectives, depending on their view of the service desk. The finance team might expect low cost of service; the IT team might expect solutions to fit the technology available; and the leaders from business units might expect high performance goals.

One key item that is not typically factored into determining the best SLA goals is the voice of the customer. The actual consumer of the service desk is most often the one who has the least amount of knowledge about, or input into, determining the service desk’s performance objectives.

Why is that? Why would the ultimate end user of the service desk have virtually no input into factors that could affect their productivity significantly? First, it’s highly improbable that one could ever create a communication platform that would effectively gather input from every possible user of the service desk (no matter how big or small that user population is). It’s one thing to communicate what the service desk function is, but it’s a totally different beast to explain how service performance is determined and measured. 

Additionally, even if by some unique ability you did have a means to communicate with and gather information from all your users, gaining a consensus on the attainable goals your customers want or expect from the service desk is a further challenge. That scenario leaves most service desks with a set of SLA goals determined by a variety of sources and based on a wide range of variables, which quite often results in a disconnected end-user experience that can’t be explained to customers by reviewing a deck of metrics.

As before, imagine your service desk is well on its way to achieving its speed-to-answer SLA of 80%/60. However, you have a customer who had to wait 120 seconds for his call to be answered, and he wishes to voice his concerns to the floor supervisor about the long queue time. What do you tell him? Do you tell him that you’re achieving the SLA goal of 80%/60 for the month and you’re “green”? Do you tell him that he’s part of the 20 percent of callers who, by design, are permitted to miss the time-to-answer goal? Of course you can’t. But the reality is that those answers are technically correct.

To expand that example a bit further, again assume you’re exceeding the 80%/60 goal for the reporting period. What if you get a complaint from a user that it took too long for their call to be answered, yet it was answered within 30 seconds? Do you tell them that the goal is 60 seconds and their call was answered 50 percent faster than required? It’s a silly thought, but it would be technically accurate.


The reason I’ve posed these questions is that they are very real examples of what may be asked of any service desk. The reality is that it’s impossible to answer those questions without further upsetting the customer. If your service desk performance objectives were determined without understanding your customers’ direct expectations, or if the SLAs “have always been there,” it’s a safe bet that your customers don’t view the service desk as a highly valuable service, and some may avoid using the desk entirely.

Before we move on, it would be remiss of me to leave you with the idea that these commodity-level metrics are pointless. They do provide significant value. They are the controls that keep the service desk on the rails, reduce variability, and provide the stability to ensure the desk is measured against defined targets. Without them in place, any attempt to provide good and continuously improving customer service would fall apart, because the desk itself would lack an effective baseline. Additionally, not until you have achieved stability across these metrics can you even dream of attempting to improve customer service.

A New Way to Mine for Hidden CSAT Gold

In order to really discover what motivates and satisfies your end users, you need to find out what is of real value to them. The good news is that you likely have a deep well of insightful data in the metrics, incident records, and customer satisfaction (CSAT) surveys you’ve probably collected. The bad news is that figuring out where to start sorting through all the data can be daunting.

One data point that is virtually guaranteed to be in place across most service desks is the legendary CSAT survey. Given today’s hypersensitive, and appropriate, focus on customer service, every service desk should have the ability to survey end users on their experience with the service desk. The survey can ask one or many questions on a variety of topics, should offer a free-text option for unfiltered customer comments, and should make it possible to compute an overall CSAT score. Having the survey in place is only the first step; it is in analyzing and acting on the CSAT data that many service desks fail to recognize the immense value of what customers are actually saying. With the right approach, CSAT surveys can reveal significant insight into the expectations and values of your end users.

A major stumbling block of CSAT surveys is appropriately evaluating the data the surveys provide. Many service desks approach CSAT with the following process:

  1. Receive a CSAT survey response
  2. Evaluate to see if it was negative or positive
  3. If positive, nod approvingly and high-five the nearest agent
  4. If negative, look at a related incident and take action to address the complaint (maybe call the customer)
  5. Repeat
  6. Hope the SLA goal for CSAT is achieved

Of course, your processes are probably much more specific and intelligent, but often much of the activity centers on addressing unhappy customers. What about the customers who are happy? Naturally, you’ll want to find and fix the issues that customers complain about, but those are usually stopgap fixes that address individual points of pain and rarely lead to measurable improvements in CSAT. Redirecting your analysis to what happy customers are experiencing can provide that coveted voice-of-the-customer insight and elevate your service desk performance from good to great!

Without delving into all of the analytics methodologies and toolsets out there, I offer a pragmatic two-step methodology for carving through the maze of data to uncover opportunities to delight your customers:

  1. Assimilate your CSAT and incident data so that you can easily tie each CSAT survey response back to its incident detail. You need to be able to see what happened within each incident relative to each CSAT response.
  2. Apply hypothesis testing of CSAT scores against incident attributes. Within each incident, you have various data attributes that may correlate with CSAT performance. The objective here is to sort your data by a specific incident attribute and analyze the corresponding CSAT scores for tickets with that attribute; there’s really no wrong approach to testing your theories, and the results may surprise you. For example, at what time-to-resolve threshold does your CSAT score begin to dip? Do certain resolver groups earn different CSAT ratings? Does the CSAT score change for incidents that experienced two, three, four, or more transfers? (A brief sketch of both steps follows this list.)
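
As an illustration only, here is a minimal sketch of both steps in Python with pandas. The file names and columns (incident_id, csat_score, transfers, minutes_to_resolve, resolver_group) are hypothetical stand-ins for whatever your ITSM and survey tools actually export:

    # Sketch of the two-step methodology; all file and column names
    # are hypothetical placeholders for your own data exports.
    import pandas as pd

    surveys = pd.read_csv("csat_surveys.csv")   # incident_id, csat_score
    incidents = pd.read_csv("incidents.csv")    # incident_id, attributes...

    # Step 1: assimilate -- join each survey response to its incident detail.
    df = surveys.merge(incidents, on="incident_id", how="inner")

    # Step 2: hypothesis testing -- slice CSAT by incident attributes.
    # At what time-to-resolve threshold does CSAT begin to dip?
    buckets = pd.cut(df["minutes_to_resolve"],
                     bins=[0, 60, 240, 1440, float("inf")],
                     labels=["<1h", "1-4h", "4-24h", ">24h"])
    print(df.groupby(buckets)["csat_score"].agg(["mean", "count"]))

    # Do certain resolver groups earn different CSAT ratings?
    print(df.groupby("resolver_group")["csat_score"].mean().sort_values())

    # Does CSAT change with the number of transfers?
    print(df.groupby("transfers")["csat_score"].agg(["mean", "count"]))

Each groupby here is one hypothesis: if the mean scores differ meaningfully across slices, you have found a candidate driver of satisfaction worth testing more rigorously.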

If you’re like me, you might have already started formulating some ideas of what data exists within your incident management system that could be tested against CSAT. The objective of this type of analysis is to introduce and correlate data points that typically are not bumped up against each other, or to validate or disprove commonly held assumptions of what makes a customer happy or unhappy.

The ultimate outcome of this analysis is that you will be armed with deep insight into what drives your customer satisfaction up or down. If you find an attribute that almost always results in a great CSAT score, you can build highly effective action plans that take what works and apply it to areas where it isn’t yet in place. You also gain the continuous ability to peer inside the minds of your customers every day, without requiring additional surveys or feedback forums that often suffer from inconsistent results.

Go from “Green” to Great

Creative and innovative use of the data available to you is worth the effort. Whether you’re an internal service desk manager or an outsourcing provider, just being “green” isn’t a guarantee of saving your job or your contract. The truth is, virtually everybody in the industry can hit service level metrics (SLAs and KPIs); the trap is just delivering to those metrics and thinking that you’re all set. In today’s competitive world, even relationships and history are no longer enough.

You need to, and can, add more value: Provide great data that will make your customers even happier with you, data they can use to improve results. Test negative CSAT responses against their corresponding tickets to reveal hidden problems to fix; test positive CSAT responses against their corresponding tickets to reveal best practices to implement elsewhere.
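
Continuing the earlier sketch, with the same hypothetical files and columns and an assumed 1-5 CSAT scale, one simple way to run that two-sided comparison is to split responses into unhappy and happy groups and test whether ticket attributes differ between them:

    # Sketch: contrast ticket attributes between negative and positive
    # CSAT responses. The 1-5 scale and cutoffs below are assumptions.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("csat_surveys.csv").merge(
        pd.read_csv("incidents.csv"), on="incident_id")

    negative = df[df["csat_score"] <= 2]  # unhappy: hidden problems to fix
    positive = df[df["csat_score"] >= 4]  # happy: practices worth copying

    for col in ["transfers", "minutes_to_resolve"]:
        # Welch's t-test: do the two groups differ on this attribute?
        t, p = stats.ttest_ind(negative[col], positive[col], equal_var=False)
        print(f"{col}: negative mean {negative[col].mean():.1f}, "
              f"positive mean {positive[col].mean():.1f}, p-value {p:.3f}")

Attributes where the negative group differs significantly point to problems to fix; attributes shared by the positive group point to practices worth replicating elsewhere.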

There are rich seams of data in your mountains of CSAT survey results and their corresponding incident tickets. If change is within your control, you can use this data yourself to improve customer satisfaction. Alternatively, you can give that data to your customer so they can use it to improve satisfaction. By adding value in this way, you and your service desk will be positioned not only to survive, but to thrive.



Eric Goupil is the senior director of continuous service improvement at Stefanini, a $1B global provider of IT outsourcing, applications services, and strategic staffing. An industry veteran of more than twenty years, Eric is a Certified Black Belt in Advanced Principles of Six Sigma, and he has developed leading-edge methodologies to align key customer values to CSI opportunities via targeted data analytics, Six Sigma processes, and customized business intelligence-based analyses. Before completing graduate work in Six Sigma at Lawrence Technological University, Eric received his BA in business administration from the University of Michigan.


Tag(s): customer service, KPI, metrics and measurements, supportworld, service level agreement, SLA, performance management
