Automation can be a game-changer for service management, but only if it is set up with the right information and can react effectively to a dynamic environment.

by Nancy Louisnord
Date Published November 7, 2022 - Last Updated January 20, 2023

Can you trust AI use cases at the service desk? The answer lies in the data.

As companies and technology vendors embrace artificial intelligence (AI) and machine learning (ML), more use cases are popping up at the service desk, changing work routines, processes, and tasks along the way. This brings tremendous benefits and opportunities for the future, such as fewer incidents, faster incident resolution, downtime prevention, fewer repetitive and tedious tasks for service desk agents, and increased productivity.

Is it all rainbows and butterflies, however?

It can be, but the success of any such initiative is tightly coupled to the quality of the data and the level of trust users have in the AI/ML algorithms. We must consider some essential questions:

  • Do we have details on how the AI/ML algorithms and models are built?
  • Do we understand and trust the underlying data that feeds into our AI/ML models?
  • Do we take all relevant data into account in our AI/ML models?
  • Is our approach ethical? Is any bias hidden in the data we use, and are we using that data ethically?

In summary, we will only be able to take full advantage of AI/ML use case benefits if we can explain how the models are built and trust the underlying data.

Why is this so important?

Here are some examples of how unreliable data can undermine the benefits of AI/ML use cases and potentially lead to privacy, security, and compliance issues.

Imagine your AI/ML algorithm triggers automation, such as routing incidents to the service desk agents with the right skills, availability, and knowledge, or automatically deploying patches, resetting passwords, or applying other fixes. Now imagine those decisions being made on unreliable data. Incidents might be routed to the wrong team or person, delaying resolution instead of accelerating it, and deploying the wrong patch or fix could create security issues.
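To make that failure mode concrete, here is a minimal sketch in Python of skills-based routing. The agent records, skill tags, and route_incident() helper are hypothetical illustrations, not any particular tool's API; the point is that the routing decision is only as good as the skill and availability data behind it.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Agent:
    name: str
    skills: set = field(default_factory=set)
    available: bool = True

def route_incident(category: str, agents) -> Optional[Agent]:
    """Pick the first available agent whose skill tags match the incident category."""
    for agent in agents:
        if agent.available and category in agent.skills:
            return agent
    return None  # no match: the ticket falls into a generic queue and waits

agents = [
    Agent("Dana", {"network", "vpn"}),
    # Stale record: Sam moved to the ERP team months ago, but the skills
    # data was never updated, so ERP incidents never find a matching agent.
    Agent("Sam", {"network"}),
]

print(route_incident("erp", agents))  # None -> misrouted ticket, delayed resolution

With accurate skill data the same logic resolves instantly; with stale data it quietly sends every ERP incident to the wrong queue.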

Let’s look at another use case around knowledge management. This one has many flavors: contextualization (for example, only showing articles about the specific laptop or equipment the end user is using), potential solution suggestions based on the content of the incident, or even recommendations for the agents on which knowledge articles to improve, create, or delete. Needless to say, this is only powerful if the data is reliable. If not, we can lose a significant amount of time, leading to lower productivity and higher costs.
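As an illustration of contextualization, here is a minimal, hypothetical sketch: knowledge articles are only surfaced if they are tagged with the device model recorded for the requester. The article records and device tags are invented for this example.

articles = [
    {"title": "Fix external display issues on docking station", "models": {"ThinkPad T14"}},
    {"title": "Battery drain workaround", "models": {"MacBook Pro 14"}},
]

def suggest_articles(requester_device: str) -> list:
    """Return titles of articles tagged for the requester's recorded device."""
    return [a["title"] for a in articles if requester_device in a["models"]]

# If the asset database still lists the user's previous laptop, the agent
# gets suggestions for hardware the user no longer has.
print(suggest_articles("ThinkPad T14"))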

Predictive analysis to anticipate and potentially prevent future incidents, as we see with monitoring and AIOps (Artificial Intelligence for IT Operations) tools, can also paint a misleading picture if the data is incomplete or unreliable. Let’s say it’s 5:30 p.m. on September 29th, and based on historical and real-time data and telemetry the system predicts a high probability of ERP downtime in the next 10 hours. The last thing we want is for the finance department to start the next day with the system down, especially at the end of the quarter when invoices need to go out and financial records need to be recorded, reconciled, and reviewed. Any delay can be costly, so the IT department puts in overtime to dig deeper into the analysis and make sure the outage never happens.

It gets even trickier and riskier if we rely on AI/ML for automated risk analysis of changes. Unreliable data can lead to either underestimating or overestimating the risk of a change, increasing rather than decreasing costs and risk. If we overestimate the impact, the change process takes longer, costing us unnecessary delays and resources. If we underestimate the risk, we may move too fast and cause costly incidents we did not anticipate and may take a long time to recover from.

How to avoid this?

To know which data we are using and where it comes from, we first need to identify the existing data sources that feed our AI/ML model. Think of AIOps and monitoring tools for real-time and historical telemetry, the service desk tool for incident history, the HR or CRM system for contextual data about employees and customers, and other applications.

More importantly, though, we need to know the data journey - which sources the data comes from, where it flows in the environment, how it gets from A to B, and what happens to it along the way. Beyond that, we need a detailed overview of all direct and indirect dependencies between data across the environment.

Though it may sound simple - you draw a line from your monitoring tool to your service desk tool - this will not give you the complete picture.

Let me explain with another example. If I want to drive from Orlando to New Orleans, drawing a straight line on a map will not do; I need more detailed and reliable information to get there safely and quickly without getting lost. I also need to know how I will get from point A to point B and what type of transportation I will use. I need something like Google Maps, with detailed route information that ideally takes my mode of transportation, traffic, roadwork, and anything else that can affect my journey into account, and that keeps me updated if any of those factors change.

That is where data lineage comes in. Data lineage ensures you can rely on the data used for your AI/ML model. You will understand where the data comes from, which data is considered, what happens to it on its way into the model, and how it is connected (directly or indirectly) to other data.
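Conceptually, lineage can be pictured as a directed graph of "this dataset was derived from that one," which lets you answer questions like "what feeds this model input, directly or indirectly?" The sketch below is a simplified, hypothetical illustration in Python; the node names, transformations, and helper functions are invented for this example, and real lineage platforms build this graph automatically from code, ETL jobs, and queries.

from collections import defaultdict

# downstream dataset -> list of (upstream source, transformation) pairs
lineage = defaultdict(list)

def record(downstream: str, upstream: str, transformation: str) -> None:
    lineage[downstream].append((upstream, transformation))

record("ml_model.incident_priority", "service_desk.incidents", "feature extraction")
record("service_desk.incidents", "monitoring.alerts", "auto-created tickets")
record("service_desk.incidents", "hr_system.employees", "requester enrichment")

def upstream_of(node: str, depth: int = 0) -> None:
    """Walk the lineage graph and print every direct and indirect dependency."""
    for source, transformation in lineage.get(node, []):
        print("  " * depth + f"{node} <- {source} ({transformation})")
        upstream_of(source, depth + 1)

upstream_of("ml_model.incident_priority")

Walking the graph for the model input immediately shows that its quality depends not only on the service desk data but also, indirectly, on the monitoring alerts and HR records upstream of it.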

Conclusion

As AI and ML use cases become more prevalent in service desk environments, it is critical that the underlying data is reliable and safe. Being able to rely on the data takes more than preventing “garbage in, garbage out.” Beyond knowing where our data comes from, we need to understand how it flows, where we will use it, and how it is affected along the way. The last thing we want is to implement or expand AI/ML-based automation, only to find out later that it is hurting our productivity instead of improving it.

Nancy Louisnord is the Global Chief Marketing Officer of MANTA, responsible for the company’s global marketing programs and product marketing strategy. With more than 15 years of international leadership experience in the B2B IT SaaS industry, she is a sought-after presenter at conferences, one of HDI’s Top 25 Thought Leaders, and a featured HDI contributor. MANTA offers a comprehensive data lineage platform that gives companies complete visibility and control of their data pipeline. MANTA has helped companies reduce incidents through proactive risk analysis, accelerate digital transformation, and enhance governance by building trust in data.
