Date Published September 22, 2025 - Last Updated September 22, 2025
Most discussions about AI in Service Management focus on design. How do we build smarter AI? How do we make it sound more human-like? How do we integrate it into our platforms?
But there’s another side to the story that rarely gets attention: How do we teach humans to talk to AI?
This isn’t just a philosophical question. It’s a practical project requirement. Today, AI is being woven into core workflows, from Service Desk automation to enterprise-wide Agentic AI pilots. And yet, adoption and ROI often stumble because of a gap nobody talks about: The way humans feel and act when interacting with AI.
The Human Behavior Gap
While it’s considered ethical to let callers know when they’re speaking with AI, that awareness often changes the interaction. People get short. They bark one-word commands. They lose patience quickly.
A CIO I know, whose story inspired this article, once ran an experiment. Half of the users were told they were interacting with AI; the other half weren’t. The results?
- When users thought it was human, they spoke conversationally, politely and with patience. In some cases, they expressed their feelings and were appreciative of the empathy they received.
- When users thought it was AI, they snapped, gave clipped responses and grew frustrated, often not giving the AI agent a chance to do its job.
Same system. Same functionality. The only difference was human behavior. For IT leaders, that's not an anecdote; it's an adoption risk.
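The experiment above can be sketched as a simple two-condition comparison. This is an illustrative Python sketch with made-up transcripts and naive heuristics (average message length and a small politeness-marker list), not the CIO's actual methodology or data:

```python
from statistics import mean

# Hypothetical transcript samples for each disclosure condition
# (illustrative data, not from the actual experiment).
told_human = [
    "Hi there, I'm having trouble logging in to the VPN this morning.",
    "Thanks so much! Could you also reset my email password, please?",
]
told_ai = [
    "VPN broken.",
    "reset password",
]

# A toy marker list; a real study would use a proper sentiment/politeness model.
POLITE_MARKERS = {"please", "thanks", "thank", "hi", "hello", "appreciate"}

def interaction_stats(messages):
    """Average word count and politeness-marker count per message."""
    words_per_msg = [len(m.split()) for m in messages]
    polite_per_msg = [
        sum(w.strip(".,!?").lower() in POLITE_MARKERS for w in m.split())
        for m in messages
    ]
    return mean(words_per_msg), mean(polite_per_msg)

for label, msgs in [("told human", told_human), ("told AI", told_ai)]:
    avg_words, avg_polite = interaction_stats(msgs)
    print(f"{label}: {avg_words:.1f} words/msg, {avg_polite:.1f} politeness markers/msg")
```

Even this crude measure makes the behavioral gap visible: the "told AI" group produces far shorter, less polite messages, which is exactly the signal an adoption team could monitor in pilot data.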
Conditioning from Legacy Tech
Much of this comes down to conditioning.
- IVRs (Interactive Voice Response systems): For decades, people were trained to “say the right word” or risk being looped back to the start. Accents, tone and natural speech often failed. Over time, users learned to bark commands.
- Early assistants like Alexa and Siri: These systems frequently misunderstood full sentences, but sometimes worked with keywords. The lesson? Keep it short. Get to the point. Don’t waste time trying to converse.
Now, even though modern AI can handle natural language, people aren’t conditioned to try. That learned behavior has been carried forward into today’s Service Desk interactions.
Generational Dynamics
Different generations bring different baggage into AI projects:
- Boomers (1946–1964): Shaped by IVRs, often default to transactional speech.
- Gen X (1965–1980): Remember both IVRs and early assistants, pragmatic but skeptical.
- Millennials (1981–1996): The Alexa generation, quick with clipped commands.
- Gen Z (1997–present): Digital natives, often more conversational, unless frustration sets in.
This raises a crucial adoption question: Do we wait for natural communication with AI to evolve over decades? Or do we actively retrain today’s workforce to interact more effectively?
The Spillover Effect: From AI to Humans
Here’s the part most projects miss: What starts with AI doesn’t always stay with AI.
Imagine an employee struggling with an AI front-end agent. They’ve been firing off one-word commands, repeating themselves and getting increasingly impatient. Eventually, the case escalates to a human Service Desk analyst.
What state of mind do they bring into that conversation? Frustration. Impatience. A transactional tone.
In other words, the way we condition people to talk to AI can directly impact how they talk to other humans. Instead of making analysts’ jobs easier, poorly designed AI interactions risk handing them more difficult conversations. Now, we don’t just have an AI issue — we have a Service Desk issue.
Why It Matters for Project Success
For Service Management leaders, AI projects can’t be measured only by ticket deflection or cost savings. Success depends on the perception that AI improved the overall quality of service interactions.
Agentic AI in particular requires trust and collaboration. If users treat the AI like an IVR, they will:
- Ignore recommendations.
- Fail to provide useful context.
- Arrive frustrated when escalated to analysts.
That undermines adoption, ROI and analyst morale. Projects that overlook this risk may technically succeed, but culturally fail.
Making Communication a Project Requirement
How do we prevent this? By treating human communication patterns as a formal project requirement.
- Requirements Gathering
  - Define expected interaction styles.
  - Account for generational behaviors.
  - Capture how AI should set the tone for human handoffs.
- Training and Onboarding
  - Teach employees how to “talk to AI.”
  - Position natural, conversational dialogue as more effective than clipped commands.
- Framing and Disclosure
  - Be transparent about AI, but frame it positively: “This assistant is trained to help you like a colleague would.”
- Change Management
  - Treat AI adoption as cultural, not just technical.
  - Share success stories where AI-human collaboration worked well.
- Metrics for Success
  - Go beyond deflection rates.
  - Track sentiment at escalation and user experience over the full interaction chain.
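Tracking sentiment at escalation can be operationalized quite simply. Below is a minimal Python sketch, assuming sentiment scores in [-1.0, 1.0] are supplied by an upstream sentiment model (not shown); the ticket ID, stages, and scores are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    ticket_id: str
    # (stage, sentiment) pairs in chronological order, e.g. ("ai", 0.2)
    events: list = field(default_factory=list)

    def record(self, stage, sentiment):
        self.events.append((stage, sentiment))

    def sentiment_at_escalation(self):
        """Sentiment of the last AI-stage message before human handoff."""
        ai_scores = [s for stage, s in self.events if stage == "ai"]
        return ai_scores[-1] if ai_scores else None

    def full_chain_delta(self):
        """Change in sentiment from first to last event across the whole chain."""
        if len(self.events) < 2:
            return 0.0
        return self.events[-1][1] - self.events[0][1]

# Hypothetical interaction chain: AI front-end, then human analyst.
t = Interaction("INC-1042")
t.record("ai", 0.1)      # neutral opening
t.record("ai", -0.6)     # frustration building with the AI agent
t.record("human", -0.4)  # analyst inherits a tense conversation
t.record("human", 0.5)   # recovered by the end

print(t.sentiment_at_escalation())  # sentiment handed to the analyst: -0.6
print(t.full_chain_delta())         # net change across the full chain: 0.4
```

The two numbers capture the distinction the list above makes: a deflection rate would miss both the tense handoff and the fact that the overall experience ended better than it started.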
Building Better Conversations
AI isn’t just reshaping Service Management workflows; it’s reshaping the way people communicate. If we normalize impatience and dismissiveness with AI, those habits won’t stay confined to machines; they risk bleeding into how people treat human Service Desk agents.
That’s why the challenge isn’t only making AI sound more human. It’s ensuring that humans engage with AI in ways that build adoption, trust and cultural alignment.
And just as important: resisting the temptation to deploy AI before it’s truly ready. Rolling out immature solutions creates frustration, damages trust and sets both the technology and the Service Desk up for failure.