ITIL 4 Driving Stakeholder Value (DSV) focuses on understanding your stakeholders' needs, mapping their journeys, and delivering value. To do this successfully, it is important first to understand how your stakeholders feel about their current experience and what matters most to them. Once current sentiment is understood, it becomes easier to identify where improvements are needed. By comparing operational data to sentiment data, IT can define the desired experience, align operational and experience data, and drive stakeholder value.
Today, Information Technology (IT) often relies on customer satisfaction surveys to assess stakeholder sentiment. Although surveys give insight into individual transactions, they fall short of measuring overall sentiment across the end-to-end employee experience. Furthermore, operational data such as Service Level Agreement (SLA) compliance, which focuses on components of the service rather than user interactions with the service, becomes the key success factor when assessing IT service experiences. XLACollab asks, “how much of the SLA data is relevant to anyone using the service organization’s service?” That is a question IT needs to answer.
How to Get Started
IT can start by capturing sentiment information. If you currently provide customer satisfaction surveys, start by looking at the comments. Additionally, capture comments made in any interaction with employees, whether in meetings, hallway conversations, project interactions, or support calls.
Most comments center around general feelings about working with IT or using its services - remarks like, “I was frustrated that it took so long to figure out how to upgrade my phone” or “I just don’t think IT understands what matters in my job. If they did, things would work better.” These remarks aren’t limited to individual interactions. Instead, they are opinions formed by every interaction combined (moments over time - XLACollab). In fact, individual transactions could earn high customer satisfaction ratings, yet the overall sentiment could still be poor.
Let’s walk through an example:
- Abbi wants to upgrade to the iPhone 12 and heard from a colleague that she could order it from the company service catalog. She went to the catalog, searched for the iPhone 12, found the order form, completed it, and pressed Submit. Within minutes, she received a customer satisfaction survey in her email. She rated the transaction a 5 since she was able to submit the order online and the process was quick and intuitive.
- The next day, she received an email saying that she needed to fill out another form for the phone she would be trading in for the iPhone 12. She completed that form and forwarded it to the IT Walk-up Desk email address listed in the email. Again, she rated the transaction a 5.
- She received another email providing an estimated arrival date for her new iPhone and directions for scheduling a walk-up appointment once the phone was received by IT.
- Two days later, IT received the phone and sent Abbi an email that provided a link to schedule her appointment with the walk-up desk for the trade-in. She scheduled her appointment for the first available date in two days.
- Abbi showed up at the walk-up desk at her scheduled time but had to wait 15 minutes before a tech was free to assist her. The appointment took an additional 30 minutes to complete the trade-in and set up the new phone with her company email and apps.
- Finally, Abbi had her new phone (after 5 transactions and 7 business days).
Although each transaction went smoothly, Abbi most likely put in a lot of unanticipated time and effort to process it as quickly as she could. Even though she gave individual transactions a high score, she felt frustrated that there were so many steps and that it took so long to get the new phone. Had she not processed each step as soon as she received the email, it could have taken even longer.
If we assess these transactions from an operational perspective – service level compliance and request fulfillment time frames – IT would view each transaction as successful and consider Abbi a happy customer. However, Abbi’s boss and colleagues would say that it was a frustrating experience based on what Abbi verbally shared with them about it.
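To make the gap concrete, here is a minimal sketch (in Python, with hypothetical numbers reconstructed from the example above) of how averaging per-transaction CSAT scores hides the end-to-end effort Abbi put in:

```python
from datetime import timedelta

# Hypothetical data for Abbi's phone upgrade, reconstructed from the example.
# Each transaction earned a rating of 5.
transactions = [
    {"step": "catalog order", "csat": 5},
    {"step": "trade-in form", "csat": 5},
    {"step": "arrival notice", "csat": 5},
    {"step": "appointment booking", "csat": 5},
    {"step": "walk-up trade-in", "csat": 5},
]

# Transactional view: every step looks perfect.
avg_csat = sum(t["csat"] for t in transactions) / len(transactions)
print(f"Average CSAT: {avg_csat}")  # 5.0 -- IT sees a happy customer

# End-to-end view: the same journey measured as total effort.
journey = {
    "touchpoints": len(transactions),
    "elapsed": timedelta(days=7),            # business days, order to phone
    "active_effort": timedelta(minutes=45),  # waiting plus appointment time
}
print(f"Journey: {journey['touchpoints']} touchpoints "
      f"over {journey['elapsed'].days} days")
```

The transactional metric reports a perfect 5.0, while the journey view surfaces the five touchpoints and seven days that drove Abbi's frustration.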
This example demonstrates that:
- High customer satisfaction scores do not necessarily indicate a great customer experience.
- Operational success (SLA compliance) does not necessarily indicate a great customer experience.
- A transactional approach to measuring success is not comprehensive enough to understand experience.
- IT and employees do not see success the same way.
Closing the Gap Between Satisfaction and Experience
The gap in this example is not uncommon. In fact, the Experience 2020 report shows that a gap between IT’s perspective of the experience it delivers and employees’ perspectives is common. Furthermore, sentiment, quality, and operational efficiency must all be considered together.
So, how does an IT department go beyond customer satisfaction scores and transactional metrics to create a better end-to-end experience? How do they close the gap between their perspective and employees’ perspectives? One way is to develop Experience Level Agreements (XLAs) and align corresponding operational data and SLAs with them.
Here are four high-level steps to do this:
- Assess current sentiment (surveys, interviews, capturing comments during interactions, etc.) to understand how employees feel about IT today.
- Ask questions to learn about and define the desired experience and to understand the gap.
- Build an XLA.
- Align corresponding SLAs with the XLA.
Building an XLA
An Experience Level Agreement (XLA) involves defining the desired experience and Experience Indicators (XIs - what affects that desired experience?) and combining Experience data (X data) with Operational data (O data).
Going back to the example with Abbi, we looked at one transaction of one employee. Even if we looked at several customer satisfaction surveys based on transactions, we wouldn’t have a clear understanding of overall employee sentiment about IT.
Let’s assume that we completed phases one and two (assess current sentiment and define the desired experience). After interviews, monitoring comments, and learning more about the IT experience employees desire, we find that employees care about being productive and feel that there are ways that IT negatively impacts that.
We can define the Desired Experience for the XLA as IT services that increase employee productivity.
The XLA Question is: Do your IT services make you more productive?
Then, Experience Indicators (XIs) need to be defined. XIs affect the desired experience. Based on what was learned, employees care about their interactions with the Service Desk, and the performance of their PC and applications that they use to do their jobs. Therefore, the XIs identified are:
- Interactions with the Service Desk
- PC performance
- Primary Business application performance
The questions for the XIs are:
- How was your last interaction with the Service Desk?
- How happy are you with your PC’s performance?
- How do you rate the Business App that you primarily use?
Now that the X data has been documented, identifying the responsible operational teams and metrics finalizes the XLA.
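As an illustration, the XLA described above could be captured in a simple data structure. This is a hedged sketch only: the XI survey scores, O-data metrics, and team names below are invented for illustration; only the desired experience, XLA question, and XI questions come from the text.

```python
# Hypothetical XLA record -- field names, scores, metrics, and owners are
# assumptions for illustration, not a prescribed ITIL structure.
xla = {
    "desired_experience": "IT services that increase employee productivity",
    "xla_question": "Do your IT services make you more productive?",
    "experience_indicators": {
        "service_desk_interactions": {
            "x_question": "How was your last interaction with the Service Desk?",
            "x_score": 3.6,  # average survey rating, 1-5 (X data, invented)
            "o_metrics": {"first_contact_resolution_pct": 78},  # O data (invented)
            "owner": "Service Desk team",
        },
        "pc_performance": {
            "x_question": "How happy are you with your PC's performance?",
            "x_score": 4.1,
            "o_metrics": {"avg_boot_time_s": 42},
            "owner": "End User Computing team",
        },
        "business_app_performance": {
            "x_question": "How do you rate the Business App that you primarily use?",
            "x_score": 3.2,
            "o_metrics": {"avg_response_time_ms": 850},
            "owner": "Application Support team",
        },
    },
}

# One simple way to roll X data up into an overall experience score:
# an unweighted mean of the XI survey ratings.
scores = [xi["x_score"] for xi in xla["experience_indicators"].values()]
experience_score = round(sum(scores) / len(scores), 2)
print(f"Overall experience score: {experience_score} / 5")
```

Pairing each XI's X data with its O data and owning team, as sketched here, is what lets IT trace a drop in experience back to the operational metrics and teams that can act on it.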
Focusing on how employees feel about IT and the tools they use to do their jobs, and monitoring what impacts those feelings, dramatically changes IT’s current approach to measuring success. This holistic approach goes beyond transactions, technology solutions, and SLA compliance. However, it doesn’t ignore these aspects, as they are part of the end-to-end customer experience. Instead, it helps elevate and prioritize the improvements that favorably impact experience (instead of siloed initiatives).
In other words, focusing on experience and adding XLAs is an actionable way to co-create and drive stakeholder value.
Rae Ann Bruno is president of Business Solutions Training.