Metrics must reflect the increasing complexity of interconnected systems and services. But they must also do so in a way that distinguishes the forest from the trees. Here is how to create and gather useful metrics about the true customer experience.

by Barclay Rae
Date Published November 17, 2020 - Last Updated November 17, 2020

My recent blog on watertight, not watermelon, SLAs had a fantastic response, with nearly 5,000 reads via LinkedIn. It also drove a number of discussions, and I established some new contacts as a result. The subject clearly “hit a nerve”, so this is the follow-up, with more detail on what this means and what service experience management and metrics are all about.

To recap, there’s a real need to move away from IT-focussed Service Level Agreements (SLAs) and associated reporting, as this often does not represent customer/user experience, or show how services meet business demands. It's not accurate or healthy to have too much focus on individual IT components and an IT-departmental view of what’s important. All stakeholders need to be involved in defining targets and metrics that help to identify if value is being delivered, or if not, where this is failing.

Traditional SLAs don’t go far enough and often miss the mark on how or where to improve. Customer feedback on its own can also fail to show whether business value is being achieved or understood. Whilst traditional IT metrics show performance in specific technical areas, value metrics should reflect a wider set of business outcomes, results, and areas of customer experience.

In the absence of real intelligence around how these SLA metrics are compiled and presented, service providers often fall back on producing volume rather than quality – listings and reports and details that no one wants to see. They can also fail by producing ‘industry’ metrics when specific business-related outputs are required. This all adds to the confusion and lack of trust between providers and their customers.

Metrics must reflect the increasing complexity of interconnected systems and services. But they must also do so in a way that distinguishes the forest from the trees – i.e. a rounded view of value and not just a vast forest of unintelligible data.

Before we go further, we should also be clear on the following:

Operational metrics are useful – for internal quality monitoring and as building blocks for integrated reporting and Outcome and Experience-based Metrics (OXMs).

SLA metrics can be useful – as long as these are seen to be related to specific requirements and agreements with customers.

Customer satisfaction feedback data is highly useful but should be seen in context – event-based surveys reflect a moment in time; often, periodic surveys are also needed for context and perspective.

Internal employee satisfaction data is useful if seen in relation to other indicators and feedback – some surveys on their own can either show data that the organization wants to hear, or only negative data. A lot here depends on how the data is captured (i.e. whether it is genuinely confidential, etc.). Organizational trust and culture are important here.

So how do we do this? How do we measure value?

In simple terms, by using a number of different types, sources, and formats of metrics and combining these together. This is done with weighting that reflects relative importance, and therefore value. When discussing agreements and targets for these composite metrics, stakeholders can focus mostly on the outputs and relative value of different metrics, without needing to know each individual component in detail. The resultant combined and weighted metrics represent a broad spectrum of measurements of experiences, outcomes, and results.
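As a minimal sketch of how such a composite metric might be computed: each raw measurement is normalised against its target, weighted by relative importance, and the weighted scores are combined. The metric names, targets, and weights below are illustrative assumptions, not prescribed values.

```python
def metric_score(actual, target, higher_is_better=True):
    """Normalise a raw measurement to a 0..1 score against its target."""
    ratio = actual / target if higher_is_better else target / actual
    return min(ratio, 1.0)  # cap at 1.0 so over-achievement doesn't mask failures elsewhere

def composite_value(metrics):
    """Combine weighted metric scores into a single 0..1 value score."""
    total_weight = sum(m["weight"] for m in metrics)
    return sum(
        m["weight"] * metric_score(m["actual"], m["target"],
                                   m.get("higher_is_better", True))
        for m in metrics
    ) / total_weight

# Illustrative metrics only - weights reflect the relative value agreed
# with stakeholders, not any industry standard.
metrics = [
    {"name": "service availability %",      "actual": 99.2, "target": 99.5, "weight": 0.5},
    {"name": "mean time to restore (hrs)",  "actual": 3.0,  "target": 4.0,  "weight": 0.2,
     "higher_is_better": False},
    {"name": "customer satisfaction (/5)",  "actual": 4.1,  "target": 4.5,  "weight": 0.3},
]

print(round(composite_value(metrics), 3))  # a single headline value score
```

The point of the sketch is that stakeholders can discuss the weights and the headline score without needing to inspect every underlying component in detail.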

Metrics should also be considered fluid and relative to changing contexts, so different metrics may measure the same things in different situations – like availability of the same service across different business periods. Service availability at 9am may not require much priority, whereas at 3pm it may be business critical, if that is when a key business transaction takes place.

These “compound” metrics then can be considered watertight and provide views of the value delivered through services.

Analogy – aircraft biometrics

As a quick analogy, consider the number of measurements (biometrics) that are taken of an aircraft – these may involve the same measure at different parts of a flight, on the ground and in the air. Tire pressure is of little actual value during a flight, but really important on landing. When we measure, we need to ensure that we are considering the context at any given time.

The flight would also include a number of other metrics – customer service (cabin crew), employee job satisfaction, on-time arrival, cost efficiency – all of which are relevant and need to be considered and viewed in context. Together, these contribute to the overall value and quality delivered during the flight.

Building OXMs

To build up a useful set of compound metrics, my suggestion is to use four key areas of measurement:

For experience data:

Customer feedback – these would involve various sources of customer feedback, from surveys, meetings, NPS, complaints, etc.

Employee feedback – these would include employee feedback from internal surveys, regular meetings and updates, sense checks on morale, etc.

For business outcome data:

Process and performance metrics – these would include a number of traditional metrics produced for SLAs, operational performance, incident response and turnaround times, MTTR, service availability, etc.

Key business metrics – these would include the business outcomes derived from use of the services. This will vary across different organizations, sectors, and levels of maturity, although in all cases they require input from users and customers to identify their nature and importance.

All of these areas contain a number of individual metrics that can be weighted and measured against target thresholds. The overall outcomes can also then be prioritized and weighted in accordance with user/customer preference – so “business outcome” may have a higher weighting than “individual processes” or “user satisfaction”. These preferences and relative weightings could also change – e.g. user satisfaction may be more important than business outcome in some situations.

The overall dashboard view can then reflect user preference on relative weighting and thresholds, showing RAG (Red/Amber/Green traffic-light) status as required. The overall result may or may not be acceptable to the customer; discussions with the customer will determine this. From experience, building up the bundle of metrics in each area is a useful task that also requires customer input – it helps both parties to fully understand and work through the needs and expectations of service delivery and reporting. In turn, this helps to build a rich and trusting relationship across teams.
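A dashboard roll-up of the four measurement areas might be sketched as follows: each area is already scored 0–100, weighted by customer preference, and mapped to a RAG status against agreed thresholds. All the area names, weights, thresholds, and scores here are illustrative assumptions.

```python
# Customer-agreed relative weightings - illustrative only.
AREA_WEIGHTS = {
    "customer_feedback":   0.2,
    "employee_feedback":   0.1,
    "process_performance": 0.3,
    "business_outcomes":   0.4,  # business outcome weighted highest in this example
}

def rag_status(score, red_below=70, amber_below=85):
    """Map a 0..100 score to a Red/Amber/Green status against assumed thresholds."""
    if score < red_below:
        return "Red"
    if score < amber_below:
        return "Amber"
    return "Green"

def dashboard(area_scores):
    """Return the weighted overall score plus per-area and overall RAG statuses."""
    overall = sum(AREA_WEIGHTS[a] * s for a, s in area_scores.items())
    rows = {a: rag_status(s) for a, s in area_scores.items()}
    rows["overall"] = rag_status(overall)
    return overall, rows

overall, rows = dashboard({
    "customer_feedback":   88,
    "employee_feedback":   72,
    "process_performance": 95,
    "business_outcomes":   81,
})
print(overall, rows)
```

Note how the roll-up can show an acceptable overall status while still flagging the individual areas (here, employee feedback and business outcomes) that need attention – which is exactly the conversation the metrics should prompt.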

In all of the examples above, metrics, thresholds and weightings are examples – these will be different in each organization. There is no ‘standard’ for this – understanding the requirement is part of the relationship-building and stakeholder-value-building processes.

OXMs not SLAs?

The approach suggested here refers in particular to metrics – outcome- and experience-based metrics – not SLAs or Experience Level Agreements (XLAs™). “Agreements” can be difficult to achieve without first developing this type of approach. My experience has been that it is helpful to develop these metrics as a means to building agreements in future. In many cases, formal agreements may not be needed, if there is a good working relationship built on the metrics and what they can deliver.

It’s vital to understand that the process of building these metrics (i.e. through collaboration) is equally, if not more, valuable than the outcomes of the work. Formal agreements may not be needed – however, it is always sensible to heed Goodhart’s law: measure always, formalize sometimes, and avoid letting SLAs and targets on their own become de facto goals.

A further stage of maturity that can also be developed is to use this type of model to drive forecasting and demand management – i.e. where changes to performance or capability are also modelled in relation to their impact on customer or employee experience, and vice versa. I am currently looking at developing models and possibly tools in this area – if you are interested in this, please contact me to discuss.

Moving forward

All of this can be achieved without necessarily training and certifying your entire department in one methodology or framework or another, although that is of course useful, and I would recommend building awareness and briefing sessions in ITIL and other approaches into transformations.

However, why not try it out? I’ve used this technique in various forms for some time – it works and delivers some great results.

The OXM name and concept was defined by Barclay Rae in July 2020.

This article first appeared on the author’s blog, and can be found here.

Barclay Rae is an experienced ITSM consultant, analyst, and writer. He has worked on approximately 700 ITSM projects over the last 25 years, and also writes blogs, research, and papers on ITSM topics for a variety of industry organisations and vendors. He has also worked for a number of ITSM organisations, and he delivers strategic ITSM consultancy, as well as media analyst services to the ITSM industry. He is an ITIL 4 Lead Architect, Lead Editor of the ‘ITIL 4 Create, Deliver and Support’ publication, and a co-architect of the ITIL Practitioner scheme with Axelos.

In addition, he is a co-author of the SDI SDC certification standards and a participant in the current ISO/IEC 20000 revision. Barclay is an associate of SDI – as a consultant and auditor.

He is a Director of EssentialSM and itSMF UK, of which he was CEO from 2015 - 2018. Barclay also is a partner in A2V Services, which develops new start-up businesses. He is also non-executive director for several companies.

Barclay is also a regular speaker at industry conferences and events, in the UK and globally, including, SITS, SDI, itSMF, Pink Elephant, SMW, UCISA and others. Barclay was named in the HDI top 25 Thought Leaders in Technical Support and Service Management, 2017, 2018 and 2019, and ‘ITSM Contributor of the year’ in 2014 at the SITS show.

Barclay also created ‘ITSMGoodness’ – a set of practical steps and guidelines – simple, practical, and proven tips and tools for successful ITSM. Visit the ITSMGoodness site for details, or follow Barclay at @barclayrae.
