It is important to measure the performance of your IT teams. HDI Featured Contributor Michael Hanson offers some measurement tools to help.

by Michael Hanson
Date Published March 15, 2023 - Last Updated February 20, 2024

We all know that the IT Service Center is a crucial component of any organization’s IT infrastructure. It exists to provide support for customers who experience issues with IT services, software applications, or connected hardware.

But how do we as leaders determine how well the Service Center is actually performing? How do we show our customers and our business leaders what the support teams are doing? Fortunately, I have found over many years of practice that there are both objective and subjective metrics that provide insight into how well the Service Center is doing its job.

These metrics are typically called Key Performance Indicators, or KPIs. A KPI is a measure used to evaluate the performance and effectiveness of an organization, and KPIs are important because they enable us to measure and improve the efficiency of our operations. There are two equally important types of KPIs: one is clearly objective, made up of measurable, real data; the other is subjective, designed to measure the customer’s feelings and experiences.

Let’s examine a few of the most common objective KPIs used for IT support. Many of these are referred to using abbreviations, so I’ll first identify the KPI and then use the common shorthand.

First Call Resolution (FCR)

This measures the percentage of incidents or service requests that are resolved during the initial call to the Service Center. It is a critical KPI because it indicates the team’s ability to resolve issues quickly and efficiently. A high FCR shows that the team is able to resolve most issues on the first call, which improves customer satisfaction and reduces duplicate calls to the Service Center for the same issue. While targets vary by industry, a good benchmark for FCR is 70-80%.
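
In practice, FCR reduces to a simple ratio. Below is a minimal Python sketch, assuming your ITSM tool can export tickets with a first-call resolution flag; the field names here are hypothetical stand-ins:

# Minimal FCR sketch; "resolved_on_first_call" is a hypothetical field.
tickets = [
    {"id": 1, "resolved_on_first_call": True},
    {"id": 2, "resolved_on_first_call": False},
    {"id": 3, "resolved_on_first_call": True},
    {"id": 4, "resolved_on_first_call": True},
]

fcr = sum(t["resolved_on_first_call"] for t in tickets) / len(tickets) * 100
print(f"FCR: {fcr:.1f}%")  # 75.0%, within the 70-80% target range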

First Contact Resolution (FCR)

In today’s workplace, First Call Resolution is sometimes replaced with First Contact Resolution (still called, confusingly, FCR). This may be a more appropriate KPI if the team handles multiple channels: in addition to voice, there may be a chat, email, or social media channel managed by the Service Center. This version of FCR measures the percentage of incidents or service requests that are resolved on the first contact, regardless of which intake channel was used. A high FCR shows the Service Center is providing consistently high-quality service to all customers. Because different channels may require a unique approach, the industry benchmark for this metric is slightly lower, at around 60-70%.
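
To see whether one intake channel is dragging the blended number down, the same calculation can be grouped by channel. Another hedged sketch with hypothetical field names:

from collections import defaultdict

# Hypothetical contact records from a multi-channel queue.
contacts = [
    {"channel": "voice", "resolved_first_contact": True},
    {"channel": "chat", "resolved_first_contact": False},
    {"channel": "email", "resolved_first_contact": True},
    {"channel": "voice", "resolved_first_contact": True},
]

by_channel = defaultdict(list)
for c in contacts:
    by_channel[c["channel"]].append(c["resolved_first_contact"])

for channel, results in sorted(by_channel.items()):
    rate = sum(results) / len(results) * 100
    print(f"{channel}: {rate:.0f}% resolved on first contact")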

Average Handle Time (AHT) and Average Speed of Answer (ASA)

These are two related KPIs. AHT measures the average duration of a handled contact, including talk time and any after-call work. It depends on the type of calls the Service Center receives, and can range from 4-6 minutes for simple inquiries to 20 minutes or more for complex issues. The ideal AHT will depend upon the specific needs and goals of the organization, as well as the expectations of its customers. ASA measures the average time a customer waits in a queue (voice or chat) before being connected to a Service Center analyst. Like AHT, the standard depends upon the type of service being provided, but a good general rule is that your ASA should be about 30% of your AHT: if your AHT is 10 minutes, your ASA target would be 3 minutes.
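
The 30% rule of thumb is easy to check once both averages are in hand. A small sketch, assuming handle and wait times are exported in seconds (the sample values are made up):

# Hypothetical per-call data, in seconds.
handle_times = [240, 600, 1200, 360, 480]  # talk + hold + after-call work
wait_times = [45, 120, 30, 200, 95]        # time in queue before answer

aht = sum(handle_times) / len(handle_times)
asa = sum(wait_times) / len(wait_times)

print(f"AHT: {aht / 60:.1f} min, ASA: {asa / 60:.1f} min")
# Rule of thumb from above: target ASA at roughly 30% of AHT.
print(f"ASA target: {aht * 0.30 / 60:.1f} min")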

Mean Time to Resolve (MTTR)

This measures the average time it takes to resolve an incident or service request, and it is an essential metric because it indicates the efficiency and effectiveness of the Service Center. A low MTTR shows the team is resolving issues quickly, which in turn improves customer satisfaction and reduces the impact of incidents on the business. Most organizations also use priority levels, so a priority-one ticket would have a much lower expected MTTR than a priority-four ticket. As usual, the average can vary by industry, but a typical aggregated number is around 8.5 hours.
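
Because priority levels carry different expectations, MTTR is most useful when reported per priority as well as in aggregate. A minimal sketch with made-up ticket data:

from collections import defaultdict

# Hypothetical resolved tickets: (priority, hours to resolve).
resolved = [(1, 1.5), (1, 2.0), (2, 6.0), (3, 12.0), (4, 30.0), (4, 40.0)]

hours_by_priority = defaultdict(list)
for priority, hours in resolved:
    hours_by_priority[priority].append(hours)

for priority in sorted(hours_by_priority):
    times = hours_by_priority[priority]
    print(f"P{priority} MTTR: {sum(times) / len(times):.1f} h")

all_hours = [hours for _, hours in resolved]
print(f"Blended MTTR: {sum(all_hours) / len(all_hours):.1f} h")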

Call Abandonment Rate

This measures the percentage of calls that are abandoned (the caller disconnects) before being answered. It is a good indicator of the Service Center’s ability to handle the volume of calls it receives. A high abandonment rate could indicate that the team is understaffed or unable to handle the call volume, which ultimately reduces customer satisfaction.
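
The calculation itself is a one-liner; the value comes from pairing it with staffing data. A hedged sketch with invented daily counts:

# Hypothetical daily counts from the phone system.
calls_offered = 520    # calls that entered the queue
calls_abandoned = 31   # callers who hung up before being answered

abandonment_rate = calls_abandoned / calls_offered * 100
print(f"Abandonment rate: {abandonment_rate:.1f}%")  # 6.0%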

Occupancy and Utilization

These are two similar but distinct performance metrics. Occupancy refers to the percentage of time that an analyst is actually performing their assigned work: that could be taking calls, working tickets, or any other activity defined by the organization. It does not include lunches, breaks, or any other non-working time. Industry numbers for Occupancy vary, as the figure depends on the type of work and calls taken, but they generally range from 60% to 90%. Utilization differs from Occupancy in that it measures only the percentage of time spent waiting for or taking customer calls. This KPI provides insight into how effectively Service Center resources are being used to handle customer calls. High utilization indicates that the analysts are busy, but care should be taken to avoid burnout. While the optimal utilization rate can vary by the type of operation, a rate between 60% and 80% is typical.
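
Because the two terms are easy to conflate, it can help to see both calculations side by side. The sketch below follows the definitions given above (definitions do vary between organizations), using hypothetical time buckets for one analyst's shift:

# Hypothetical time buckets (minutes) for one analyst's 8-hour shift.
shift_minutes = 480
breaks_and_lunch = 60       # excluded non-working time
on_calls = 200              # handling customer calls
waiting_for_calls = 90      # available in queue, waiting for a call
other_assigned_work = 130   # tickets, follow-ups, projects

working_minutes = shift_minutes - breaks_and_lunch  # 420

# Occupancy (as defined above): share of working time on assigned work.
occupancy = (on_calls + other_assigned_work) / working_minutes * 100

# Utilization (as defined above): share spent waiting for or taking calls.
utilization = (on_calls + waiting_for_calls) / working_minutes * 100

print(f"Occupancy: {occupancy:.0f}%")      # 79%
print(f"Utilization: {utilization:.0f}%")  # 69%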

Service Level Target (SLT)

This measures the percentage of calls expected to be answered within a given time frame. For example, a typical industry target is 80% of calls answered within 20 seconds. The ratio varies widely by industry and by the types of calls the Service Center is expected to answer. Usually this number supports a Service Level commitment made to customers, and it provides a KPI showing whether the Service Center is meeting those commitments and providing consistent, reliable service.
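
Measuring attainment against an 80/20-style target is straightforward once you have per-call answer times. A minimal sketch with invented data:

# Hypothetical queue times (seconds) for answered calls.
answer_times = [5, 12, 18, 25, 8, 40, 15, 19, 22, 10]

threshold = 20  # the "within 20 seconds" part of an 80/20 target
within = sum(1 for t in answer_times if t <= threshold)
service_level = within / len(answer_times) * 100
print(f"Service level: {service_level:.0f}% answered within {threshold}s")  # 70%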

Volume Measures

These measures can be applied to contact volumes coming into the Service Center through all channels, as well as to the volume of incident or service request tickets created. Volume provides a picture of the amount of work the Service Center is handling. High volumes could indicate that the team is understaffed or that there may be issues within the IT infrastructure that need to be addressed. A common volume KPI is the Call-to-Ticket ratio, which helps leadership understand how well the Service Center is managing its work. A ratio well above 1:1 may indicate that the team is not effectively managing customer issues, resulting in multiple calls for the same problem; a ratio at or below 1:1 suggests that the team is efficiently managing its work.
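
The Call-to-Ticket ratio itself is trivial to compute; the interpretation is the hard part. A hedged sketch with invented monthly volumes:

# Hypothetical monthly volumes.
calls_received = 3450
tickets_created = 3000

ratio = calls_received / tickets_created
print(f"Call-to-Ticket ratio: {ratio:.2f}")  # 1.15 calls per ticket
# Well above 1.0 can signal repeat calls about unresolved issues.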

Subjective KPIs try to quantify how customers feel about working with the IT support organization. They may reflect not just the interaction with the Service Center but the customer’s entire support experience: the Service Center may handle a call perfectly, but if the ticket is escalated, the receiving team may manage the experience poorly, leaving the customer dissatisfied. The primary means of measuring these KPIs is a survey, the most common being the customer satisfaction (CSAT) survey.

Customer Satisfaction (CSAT)

The CSAT metric measures how satisfied customers are with a particular interaction, and it is usually captured on a numeric scale. Last year (2021-2022), the industry average for CSAT was 86.3%; for the new year (2022-2023), that number has dropped to 73.1%. This is an aggregate number, and the average CSAT can differ widely by industry.
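
One common convention (not the only one) computes CSAT as the share of responses in the top two boxes of a 5-point scale. A sketch under that assumption, with made-up responses:

# Hypothetical survey responses on a 1-5 scale.
responses = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]

satisfied = sum(1 for r in responses if r >= 4)  # "top two box" convention
csat = satisfied / len(responses) * 100
print(f"CSAT: {csat:.0f}%")  # 70%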

Net Promoter Score (NPS)

This measures the likelihood of customers recommending a product or service to others, using a 0-10 survey score. NPS is an important KPI because it measures customer loyalty: customers who score a 9 or 10 are considered promoters, and those who score a 6 or lower are considered detractors. The NPS itself is the percentage of promoters minus the percentage of detractors, so it ranges from -100 to 100; a score above 0 is considered positive, and scores of 50 or better indicate excellent customer loyalty, though this can vary widely depending on the industry being measured. It is a simple and straightforward metric to understand and communicate, and it can be a powerful tool to align teams around the common goal of improving the customer experience. NPS is also a living metric: a positive trend matters more than a positive number that is stagnant.
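
The arithmetic behind the score is worth seeing once. A minimal sketch with invented survey scores:

# Hypothetical 0-10 survey scores.
scores = [10, 9, 8, 7, 6, 9, 10, 4, 8, 9]

promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
detractors = sum(1 for s in scores if s <= 6)  # 6 and below
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:.0f}")  # 30: positive, but below the 50 "excellent" bar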

Customer Effort Score (CES)

This is designed to measure the amount of effort a customer has to put into an interaction to resolve an issue or complete a task. By understanding the level of effort required, companies can identify areas where they might streamline processes and make it easier for customers to navigate the support process. There isn’t a widely accepted industry standard for CES, but the general rule is that lower numbers are better: on a 10-point scale, a 5 or lower would indicate a positive, low-effort customer experience.
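
Under the 10-point, lower-is-better scale described above, CES is just an average of the effort ratings. A brief sketch with invented data:

# Hypothetical effort ratings on a 10-point scale (lower = less effort).
effort_scores = [3, 5, 2, 7, 4, 3, 6, 2]

ces = sum(effort_scores) / len(effort_scores)
print(f"CES: {ces:.1f}")  # 4.0; at or below 5 suggests a low-effort experience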

The CSAT, NPS, and CES scores can be combined to build a fairly complete picture of the customer experience. By identifying specific pain points and taking action to reduce them, organizations can improve customer loyalty and drive long-term success.

Experience Level Agreement (XLA)

A recent development in KPIs is to combine some of the objective metrics, such as FCR, AHT, and ASA, with the subjective customer experience numbers above to drive an XLA. This is a customer-centric approach to measuring the quality of the user experience: rather than focusing on technical metrics, the XLA looks at the overall experience of the customer.

An XLA defines the level of experience that the organization wants to provide to its customers and outlines which specific KPIs will feed into the success of that goal. It takes into account both the functional and emotional aspects of the customer experience. The goal of the XLA is to create a culture of continuous improvement, where the organization is constantly looking for ways to improve the quality of the customer experience.
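
There is no single standard formula for an XLA, so the sketch below is purely illustrative: it scores each chosen KPI against its target and combines them with weights. The metrics, targets, and weights are all hypothetical and would be defined by the organization:

# Illustrative XLA composite; every number here is hypothetical.
kpis = {
    #           (actual, target, weight)
    "FCR":      (72.0, 75.0, 0.25),
    "CSAT":     (88.0, 85.0, 0.40),
    "NPS":      (42.0, 50.0, 0.20),
    "QA score": (91.0, 90.0, 0.15),
}

xla = sum(min(actual / target, 1.0) * weight
          for actual, target, weight in kpis.values()) * 100
print(f"XLA attainment: {xla:.0f}%")  # 96%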

Quality Assurance (QA)

Finally, a KPI that includes aspects of both the objective and the subjective is the QA score. This metric looks at the effectiveness of the support team by using a set of predetermined criteria to evaluate the quality of interactions between the customer and the Service Center. It is objective in the sense that these criteria are well-documented and weighted to provide an overall score: the QA audit looks at the accuracy and completeness of the information in an incident or service request. It is subjective in that it also measures the tone and demeanor of the analyst, which depends upon the auditor who is monitoring the customer contact. The target for QA can vary by industry, but generally a score of 90% or better is recommended.
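
A weighted scorecard like the one described can be sketched in a few lines; the criteria and weights below are hypothetical examples, not a standard:

# Hypothetical weighted QA scorecard for one monitored interaction.
criteria = {
    #                     (score 0-100, weight)
    "Accuracy":           (100, 0.30),
    "Completeness":       (90, 0.25),
    "Process adherence":  (80, 0.25),
    "Tone and demeanor":  (100, 0.20),  # the subjective, auditor-judged part
}

qa_score = sum(score * weight for score, weight in criteria.values())
print(f"QA score: {qa_score:.1f}%")  # 92.5%, above the 90% recommendation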

These are but a few of the KPIs and metrics available to the modern Service Center. It’s important to keep in mind that any single one of these measures is just one component of a bigger picture: the effectiveness of a support team is reflected in a collection of objective and subjective measures. By tracking multiple metrics over time and using them to improve the support process and customer experience, the Service Center and support teams can enhance the overall quality of customer relationships and drive long-term success for everyone.

Michael Hanson is Vice President, IT Service Desk Operations at PSCU.

 