Here is a practical, task-based framework, grounded in trustworthy data, for ensuring that the output of AI-driven automation is reliable.

by Nancy Louisnord
Date Published June 12, 2023 - Last Updated February 20, 2024

If you’ve spent any time scrolling through the internet lately, there’s a good chance you’ve seen the hype around AI (specifically everyone’s favorite, ChatGPT). The uses for AI are endless: you can use it to create entire articles (don’t worry, this one was written by a human), generate images, and even write and debug code. This technology has major implications for the future of work. But harnessing the potential of AI is not without challenges, from understanding where and how to integrate AI into daily tasks to ensuring the accuracy of AI-generated outputs. Further, as I mentioned in another article for HDI, AI only works well if the data being fed into it is correct.

The key to successfully leveraging AI lies, first, in understanding the nature of various tasks and identifying the right strategy to incorporate AI into each of them. As such, this article explores a practical task-based framework that includes Just Me Tasks, Delegated Tasks, Centaur Tasks, and Automated Tasks.

But it’s not just about the tasks or the AI technology itself. Ensuring the trustworthiness of your data is equally crucial. Given AI’s tendency to “hallucinate,” or produce erroneous outputs, and the complexity of the modern data pipeline, data trust and lineage become paramount.

How AI Hallucinations Might Impact Your IT Service and Support

With the increase in AI use, there is a risk of creating an echo chamber of sorts. Think about it: AI can regurgitate old data over and over, changing slightly each time it is regenerated. In more extreme cases, AI will generate an entirely new answer based on incorrect data. The problem is that the answer seems logical or plausible, which makes it more likely to slip through the cracks and cause havoc later.

This phenomenon is called an “AI hallucination,” and it is a big problem if you’re using AI to generate or debug code.

Consider a practical example of AI hallucinations: say you’ve incorporated a ChatGPT-style tool into your service desk, and your knowledge base becomes built on hallucinated answers. This will likely increase the number of incidents and hurt your service desk’s reputation.

A Task-Based Framework to Tackle the Challenges

How do you safeguard against AI hallucinations? And how can you use AI to create value in an IT service desk environment? It starts with categorizing your tasks to identify the areas where AI can add the most value, as well as those where it has the greatest potential to cause issues with hallucinations.

Just Me Tasks

In service management, tasks such as complex customer relations, empathetic communication during challenging situations, and strategic decision-making based on human intuition and experience fall into this category. For now, AI might not add significant value to these tasks and might even complicate them. Some professionals believe they should remain human-only, as they involve skills that AI currently cannot replicate.

Delegated Tasks

Service management professionals can delegate lower-importance, time-consuming tasks to AI. These include, for example, routine activities such as ticket categorization, initial responses to simple queries, report generation, and system health monitoring. Even after delegating these tasks, regular review of the AI’s outputs is critical to catch any hallucinations or errors.
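To make this concrete, here is a minimal sketch of what delegation with a built-in review loop could look like. The ai_categorize function is a hypothetical stand-in for a real classifier or LLM call, and the 10% sample rate is an arbitrary illustration:

```python
import random

def ai_categorize(ticket_text: str) -> str:
    """Hypothetical stand-in for a real model or LLM API call."""
    text = ticket_text.lower()
    if "password" in text or "login" in text:
        return "password_reset"
    if "laptop" in text or "monitor" in text:
        return "hardware"
    if "vpn" in text or "wifi" in text:
        return "network"
    return "software"

REVIEW_SAMPLE_RATE = 0.10  # send 10% of AI decisions to a human queue

def delegate_ticket(ticket_text: str) -> dict:
    """Categorize a ticket with AI, flagging a random sample for review."""
    return {
        "text": ticket_text,
        "category": ai_categorize(ticket_text),
        "needs_human_review": random.random() < REVIEW_SAMPLE_RATE,
    }

print(delegate_ticket("Cannot log in, password expired"))
```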

Here, data lineage becomes critical to validate the results produced by the AI. Understanding where the data came from, how it was transformed, and how the AI used it can help identify potential issues and maintain the accuracy of the task the AI performs.
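What might that look like in practice? The sketch below attaches a minimal lineage record to each AI-produced result; the LineageRecord class and its field names are illustrative, not taken from any particular lineage product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Minimal lineage metadata attached to an AI-produced result."""
    source_system: str          # where the input data originated
    transformations: list[str]  # steps applied before the model saw it
    model_version: str          # which model produced the output
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = LineageRecord(
    source_system="itsm_tickets_db",
    transformations=["pii_redaction", "deduplication"],
    model_version="categorizer-v2.3",
)
print(record)
```

With a record like this attached to every output, an unexpected answer can be traced back to its source data rather than debated in the abstract.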

Centaur Tasks

This category encompasses tasks in which the integration of AI and human intelligence can significantly enhance efficiency. This approach works well in incident resolution, where AI provides recommendations based on historical data and trends but humans ultimately execute the solution. It can also be used in knowledge management, with AI surfacing relevant articles that assist in troubleshooting or drafting technical documents.
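As a rough illustration of the centaur pattern, the sketch below uses a naive word-overlap search over past incidents as a stand-in for a real similarity search or LLM retrieval step; the AI only proposes, and a human approves and executes. The function and sample data are hypothetical:

```python
def suggest_resolutions(incident: str, history: dict[str, str]) -> list[str]:
    """Rank past resolutions by word overlap with the new incident.
    A naive stand-in for a real similarity search or LLM retrieval step."""
    words = set(incident.lower().split())
    scored = [(len(words & set(past.lower().split())), fix)
              for past, fix in history.items()]
    return [fix for score, fix in sorted(scored, reverse=True) if score > 0]

history = {
    "VPN drops every hour": "Update the VPN client to 5.2",
    "Email sync fails on mobile": "Re-enroll the device in MDM",
}

# The AI proposes; a human reviews and executes the chosen fix.
for suggestion in suggest_resolutions("VPN connection drops", history):
    print("AI suggestion (pending human approval):", suggestion)
```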

The trustworthiness of data becomes even more critical in this model, as the reliability of the AI’s suggestions depends directly on the quality of the data it processes. Implementing robust data lineage practices helps track the origin and transformation of that data, enhancing both the reliability of the AI and the overall quality of the task.

Automated Tasks

Certain tasks in service management can be fully automated and left to AI. These include auto-responses to common queries, automatic ticket routing, and other repetitive work. However, the AI’s effectiveness relies greatly on the clarity of the rules it is given and the quality of the data it processes. In other words, just because AI can carry out a rule-based task doesn’t mean the robots are going to take over the world à la WALL-E.
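As a simple illustration of how explicit those rules should be, here is a minimal sketch of deterministic ticket routing; the queue names and the ROUTING_RULES mapping are invented for the example. Anything the rules do not recognize falls back to a human triage queue rather than letting the AI guess:

```python
ROUTING_RULES = {
    "password_reset": "identity-team",
    "hardware": "deskside-support",
    "network": "network-ops",
}
DEFAULT_QUEUE = "service-desk-triage"  # humans handle anything unmatched

def route_ticket(category: str) -> str:
    """Deterministic routing with a human fallback for unknown categories."""
    return ROUTING_RULES.get(category, DEFAULT_QUEUE)

print(route_ticket("network"))        # -> network-ops
print(route_ticket("mystery_issue"))  # -> service-desk-triage
```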

Given the potential risk of AI hallucinations in automated tasks, it’s crucial to maintain a trusted data source and clear data lineage. This not only validates the data but also helps identify potential issues with the AI’s outputs. Periodic checks on the AI’s performance can further ensure the quality of these automated tasks.
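One lightweight way to run such a periodic check, sketched below under the assumption that you keep a small human-labeled sample of tickets, is to compare the AI’s labels against that sample and raise an alert when accuracy drifts below a threshold. The 95% threshold is an arbitrary placeholder:

```python
def spot_check_accuracy(ai_labels: list[str], human_labels: list[str],
                        threshold: float = 0.95) -> bool:
    """Compare AI outputs against a human-labeled sample; flag drift."""
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    accuracy = matches / len(human_labels)
    if accuracy < threshold:
        print(f"ALERT: accuracy {accuracy:.0%} is below {threshold:.0%}; "
              "pause the automation and audit the data pipeline.")
        return False
    return True

spot_check_accuracy(
    ai_labels=["hardware", "network", "software"],
    human_labels=["hardware", "network", "hardware"],
)
```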

Don’t Panic Over AI Hallucinations

Both the seemingly unlimited potential of AI in service management and the specter of AI hallucinations tend to put people on edge. We see this often in movies where AI becomes sentient and takes over the world, or, more realistically, on social media, where people fear that AI will make humans obsolete in the workforce.

In reality, it’s probably much less insidious than that. AI becomes a nuisance mainly when it isn’t checked frequently or used correctly. Understanding which tasks to perform yourself, delegate, collaborate on, or automate is the first step to AI success. Further, knowing where your data comes from and where it goes is the key to stopping AI hallucinations from impacting your team. Together, these practices pave the way for a more efficient, productive, and future-ready service management operation.

Nancy Louisnord is the Global Chief Marketing Officer of Manta, responsible for the company’s global marketing programs and product marketing strategy. With more than 15 years of international leadership experience in the B2B IT SaaS industry, she is a sought-after presenter at conferences, one of HDI’s Top 25 Thought Leaders, and a featured HDI contributor. Manta offers a comprehensive data lineage platform that gives companies complete visibility and control of their data pipeline. Manta has helped companies reduce incidents through proactive risk analysis, accelerate digital transformation, and enhance governance by building trust in data.

Tag(s): supportworld, support models, technology
