Date Published July 8, 2025 - Last Updated July 2, 2025
For those of us in the IT Service Desk trenches, the hype around generative and agentic AI is impossible to ignore. As tools evolve from simple chatbots to autonomous agents capable of learning, decision-making, and action-taking, the question for many IT organizations isn’t whether to use AI, but how to use it responsibly and in ways that truly add value to the business.
And that’s where things get interesting.
If your organization follows the Knowledge-Centered Service (KCS) framework, you already have a strong knowledge management foundation. KCS promotes capturing knowledge as a byproduct of problem-solving and emphasizes accuracy, findability, and continuous improvement. It’s designed with a many-to-many approach, building organizational knowledge over time with trust and transparency at the core.
Agentic AI, on the other hand, learns from vast amounts of unstructured data (some internal, some external) and can act independently. While that sounds exciting, it’s also terrifying. When AI suggests a fix that “sounds right,” it may be based not on your company’s vetted knowledge, but on a public forum post or a misinterpreted trend scraped from the open web.
AI isn’t going away. So, how do you make these two worlds work together?
Establish Your Source of Truth
A solid KCS implementation can provide the guardrails your AI efforts need. Your internal knowledge base becomes the source of truth, not random internet search results. This ensures agentic AI learns from validated solutions captured by your support analysts, not from speculative or vendor-driven content.
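One way to picture this guardrail is a retrieval filter: before any article can inform an AI answer, it must carry a validated status from the KCS workflow. The sketch below is illustrative only; the `KBArticle` class, the status labels, and `build_retrieval_corpus` are hypothetical names, not part of any specific product.

```python
from dataclasses import dataclass

@dataclass
class KBArticle:
    article_id: str
    title: str
    status: str  # hypothetical lifecycle labels: "validated", "draft", "external-unverified"

def build_retrieval_corpus(articles):
    """Admit only analyst-validated articles into the AI's retrieval set.

    Draft or unverified external content never reaches the model, so the
    knowledge base, not the open web, remains the source of truth.
    """
    return [a for a in articles if a.status == "validated"]

kb = [
    KBArticle("KB-101", "Resetting a locked AD account", "validated"),
    KBArticle("KB-102", "Registry tweak found on a public forum", "external-unverified"),
    KBArticle("KB-103", "VPN client reinstall steps", "draft"),
]

corpus = build_retrieval_corpus(kb)
```

In this toy corpus, only KB-101 survives the filter; the forum-sourced tweak and the unreviewed draft stay out of the AI's reach until they pass KCS validation.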
Think of it this way: If resolution is the destination, KCS gives you the trusted map and AI provides the faster engine. But without that map, AI might just drive you in the wrong direction, quickly, and possibly right over a cliff!
Let AI Scale What Works, Not Reinvent What Doesn’t
In a mature KCS environment, agentic AI can assist by identifying patterns, suggesting article improvements, flagging duplication and even generating draft knowledge articles based on case notes or ticket summaries. That’s not replacing the analyst; it’s accelerating their work, allowing them to move on to other value-added tasks while AI drafts the documentation.
However, this only works if the AI is trained on content that reflects your unique environment, processes, tools and governance. If it’s pulling unverified content from across the Internet, you risk delivering solutions that are inaccurate, unsupported or completely irrelevant.
A real-world example: I’ve seen AI tools suggest registry edits for a common endpoint error that weren’t approved, or even safe, for a client’s environment. In a KCS-governed model, such suggestions would be flagged, tested and published only after review. AI assists, but it is people who ensure its outputs are safe and sound.
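The "flagged, tested and published only after review" path can be thought of as a small state machine in which a suggestion can never jump straight from draft to published. This is a minimal sketch under that assumption; the state names and the `AISuggestion` class are invented for illustration, not drawn from any real tool.

```python
from enum import Enum

class State(Enum):
    DRAFT = "draft"          # AI-generated, untouched by a human
    FLAGGED = "flagged"      # marked for attention by an analyst
    REVIEWED = "reviewed"    # validated and tested by a human expert
    PUBLISHED = "published"  # live in the knowledge base

# Legal transitions: publication is only reachable through human review.
ALLOWED = {
    State.DRAFT: {State.FLAGGED, State.REVIEWED},
    State.FLAGGED: {State.REVIEWED},
    State.REVIEWED: {State.PUBLISHED},
    State.PUBLISHED: set(),
}

class AISuggestion:
    def __init__(self, text):
        self.text = text
        self.state = State.DRAFT

    def advance(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(
                f"cannot move from {self.state.value} to {new_state.value}"
            )
        self.state = new_state

s = AISuggestion("Registry edit suggested for a common endpoint error")

blocked = False
try:
    s.advance(State.PUBLISHED)  # attempt to skip review
except ValueError:
    blocked = True  # the gate holds: no review, no publication

s.advance(State.REVIEWED)
s.advance(State.PUBLISHED)
```

The design choice here is that the transition table, not the caller, encodes governance: even a well-meaning automation cannot publish an unreviewed registry edit.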
Curate External Knowledge with Intent
Even in mature KCS environments, support analysts use Google. We all do. The Internet is a vast, dynamic source of information. But as we’ve learned repeatedly, just because something is online doesn’t mean it’s accurate, reliable, relevant or reusable.
So, what’s the answer? Organizations shouldn’t try to block this source of information; they should channel it through the KCS model. Found a relevant solution on a Microsoft forum or Reddit? That’s great! But test it, contextualize it and capture it the right way. Agentic AI can help discover these external insights, but the decision to incorporate them must always rest with the human expert.
The value here isn’t just the content; it’s the process of validation and integration. AI can discover. KCS ensures that what’s captured is accurate, applicable and aligned to your standards.
Create Feedback Loops to Strengthen the Ecosystem
Both KCS and agentic AI thrive on iteration. Think of this not as a static system, but as an evolving ecosystem. The key to making it work? Closed-loop feedback.
- Track which articles AI is referencing.
- Monitor where AI-generated suggestions miss the mark.
- Ensure your analysts are following the UFFA model (Use – Flag – Fix – Add) to keep the knowledge base current and trustworthy.
- Build governance processes that make it easy for analysts to coach the AI, and each other.
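The first two bullets, tracking which articles the AI references and monitoring where its suggestions miss, can be combined into a simple closed-loop metric: an article whose AI-sourced answers are frequently flagged is a candidate for the "Fix" step of UFFA. The sketch below is an assumption-laden toy, not a real product integration; the `FeedbackLoop` class and the 25% flag-rate threshold are invented for illustration.

```python
from collections import Counter

class FeedbackLoop:
    """Track how often the AI cites each article, and how often analysts flag the result."""

    def __init__(self):
        self.references = Counter()  # article cited in an AI-generated answer
        self.flags = Counter()       # analyst marked that answer as wrong or outdated

    def record_reference(self, article_id):
        self.references[article_id] += 1

    def record_flag(self, article_id):
        self.flags[article_id] += 1

    def needs_review(self, min_flag_rate=0.25):
        """Articles whose flag rate suggests a 'Fix' under UFFA (threshold is arbitrary)."""
        return sorted(
            article
            for article, refs in self.references.items()
            if refs and self.flags[article] / refs >= min_flag_rate
        )

loop = FeedbackLoop()
for _ in range(4):
    loop.record_reference("KB-201")  # cited often...
loop.record_flag("KB-201")           # ...but flagged twice (50% flag rate)
loop.record_flag("KB-201")
for _ in range(4):
    loop.record_reference("KB-202")  # cited often, never flagged
```

Here KB-201 surfaces for review while KB-202 does not, turning analyst flags into a steady coaching signal for both the knowledge base and the AI.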
In short, make continuous improvement part of both your knowledge and AI strategies.
So, Is KCS Still Better?
If implemented well? Absolutely.
A disciplined KCS practice produces accurate, relevant, and reusable knowledge tailored to your environment. It fosters collaboration, accelerates onboarding and powers self-service. That’s hard to beat.
But when paired with thoughtfully governed agentic AI, KCS becomes even more powerful. AI brings reach, speed and adaptability. KCS brings structure, quality and trust.
The real magic happens when they work together.
Final Thought
The reality is that implementing agentic AI isn’t just a technology initiative; it’s also a leadership challenge. True value emerges when service management and change leaders intentionally position AI not as a replacement, but as an enabler to augment people, enhance processes and reinforce proven practices. It requires clear governance, transparent communication and empowered support teams who can coach both the knowledge base and the AI.
The future of IT support won’t be defined by who adopts AI the fastest, but by who integrates it the smartest. Start by asking: What can AI do? And how can we ensure it’s doing it right?