Satisfaction surveys are a well-adopted method for gauging the perceived quality of the service desk and the broader IT organization from the perspective of the IT consumer. The idea behind a satisfaction survey is to measure how well an IT organization is delivering services and resolving issues.
But in practice, do satisfaction surveys really measure how well IT is doing?
The return rates for satisfaction surveys are often poor. An article from CustomerThermometer.com states that response rates in the 5% to 30% range are typical; a response rate of 50% should be considered “excellent.” In an article I contributed to HDI, I found that the average response rate to surveys across all channels and industries is between 17% and 20%.
Is the low response rate because everyone is “happy”? Not likely.
Are we getting the right story?
If satisfaction surveys aren’t being returned, is IT getting the complete picture of the quality of its services and consumers’ satisfaction with them? Perhaps not.
To be fair, perhaps consumers are happy. Perhaps consumers aren’t returning satisfaction surveys because they are satisfied with the service and the quality of their interactions with IT. But how can an IT organization know this if only one of every five (or fewer) surveys is returned? If IT doesn’t understand what is causing friction in a consumer’s interactions with an IT product, service, or associate, how can IT address it? Isn’t this a major reason IT satisfaction surveys are sent to consumers?
So, why aren’t IT consumers returning satisfaction surveys? In my experience, there are several reasons for low survey return rates:
- Survey fatigue – Many IT organizations saturate their consumers with surveys.
- Length of survey – The survey takes too long to complete.
- Different day, same questions – The same questions are asked over and over and over….
- “Leading the witness” – Surveys ask only the questions the IT organization wants answered, so that it can promote a high CSAT score.
- Nothing changes – Complaints or criticisms are ignored, so consumers stop responding to surveys.
And in my experience, the surveys that are returned often tell a skewed story: consumers were frustrated by their interaction with IT, and returning a negative survey was the only way to vent that dissatisfaction.
Wait a minute. What would be different if IT organizations distributed “dissatisfaction surveys”?
If the right questions aren’t being asked, if the right measures aren’t being captured, then how can an IT organization fix something that appears not to be broken?
“Are you dissatisfied?”
What would change if IT organizations started asking about dissatisfaction? Companies that truly care about customer experience already do.
For example, Voice of the Customer (VoC) programs are quite common, especially in mid-sized to large organizations. According to Merren, the purpose of a VoC program is to gather and analyze feedback from customers regarding their experiences, needs, preferences and pain points. One of the primary objectives of a VoC program is to identify causes of customer satisfaction and dissatisfaction.
Let that sink in for a minute. Most IT surveys only ask about satisfaction, not dissatisfaction.
A dissatisfaction survey is not a “new” concept. An article from Quirks.com argues that companies should consider conducting dissatisfaction surveys with their customers to understand the factors leading to dissatisfaction and determine how to avoid frustrating future customers.
Ask the right questions to get the right answers
Learning about the causes of dissatisfaction is not as simple as asking for additional feedback when a consumer responds with the lowest rating on a satisfaction survey. Understanding the causes and patterns of negative experiences means asking the right questions.
- What went wrong? To determine what went wrong, ask questions like “When did you first notice that something was wrong?”, “Where did things go off track?” or “Was there a specific action, person, or situation that made things worse for you?”
- What was the impact of this? Answers to questions such as “How did this issue impact you?”, “Did you miss a deadline?”, “Were you unable to complete your daily tasks or responsibilities?”, “Was there a loss of data?” and “Did this hinder or block project work?” will help quantify the impact of the issue.
- What was the cause? To identify a potential cause, questions such as “If you could change one thing about how this issue was handled, what would it be and why?”, “What made this issue particularly frustrating?” and “Did you feel heard and understood?” will help.
- What was expected? Lastly, dissatisfaction often occurs when there is a mismatch between expectations and what actually happened. Answers to questions such as “What were you expecting to happen that did not happen?” and “Before contacting the service desk, what were you expecting from the service desk or IT organization?” will provide insights into the consumer’s expectations.
Are you ready for the truth?
Asking the right questions about dissatisfaction is only one part of the equation. Just as important is the commitment of the IT organization to make the changes needed to address the results of a dissatisfaction survey.
True leadership isn’t about chasing high satisfaction scores – it’s about having the courage to uncover and confront dissatisfaction. CIOs and senior IT leaders influence how the IT organization listens, learns and evolves. Lead the charge by changing the conversation – rethink how your organization measures experience. Ask where friction lives, learn from that discomfort, and drive the kind of impactful improvements that earn trust – not just a high CSAT score.
Because real improvement starts when leaders are bold enough to ask the hard questions.