by Kristin Jones
Date Published April 11, 2019 - Last Updated December 17, 2019

In my last article, I explored my personal discovery of internal customer experience and touched on how I got there. Part of that discovery was learning that assumptions (both my own and my internal customers’) can be tricky! Here’s how I handled them during a recent large project.

Today, routinely identifying and challenging my assumptions has proven to be one of my more powerful tools in customer experience. Although I wouldn’t recommend that anyone stake their results on assumptions, I believe they can be a good place to start if you have a plan to manage them.

My company was preparing a big-bang upgrade of our operating system and local productivity applications, along with a move to cloud storage. This was no small change, even for those of us comfortable “working with computers,” and I knew we had a challenge ahead of us to keep our customers productive and confident as they entered new waters. I wanted them to feel like they’d jumped in rather than been thrown!

What’s Happening Out There, Anyway?

From experience and ticket review, I knew where my customers often struggled. I also knew that my end-users were a real mix: Gen Z to Boomers, corporate office executives to field laborers. The hardware they worked on was just as varied, with phones, tablets, and desktops found in offices, trailers, and trucks.

For my initial planning, I made a list of factors affecting my part of the project (people, job roles, location, hardware types, etc.). And then I got on the phone, and I got on a plane, and I tested my assumptions in the real world.


Hit the Road

It’s a good thing I did. I knew I had a “corporate office” bias. But even when you’re aware of it, you can’t really know when it’s going to rear its head.

My company had multiple branch offices across the US and Canada, and while some offices worked in several of the company’s core disciplines, others focused mainly on one. This also played out at the project level: larger projects spanned multiple disciplines, while smaller ones did not.

My training strategy was framed as “how you work today vs. how you will work tomorrow,” and the “today” included some assumptions about significant areas like document sharing, email usage, and how to connect to shared services. To test those assumptions, I held pre-training meetings with managers and key user groups, and they proved crucial: they allowed me to present an overview and then listen to what each branch or user group cared about and how they really were working.

In my case, I had underestimated the extent to which teams worked differently from each other, not only within a specific discipline but across the different branches as well. For example, I had assumed that people would want to learn more about document sharing from a collaborative-editing perspective. I wasn’t wholly wrong. But some groups needed a process focus more than a technical how-to, because their clients had specific document-management requirements, while other groups wanted only the how-to because they shared documents internally and weren’t concerned with an overarching process.

I had also overestimated how well the project team understood the impact of network connectivity in the other offices and project sites. I had identified connectivity as a problem area based on our infrastructure capacity map, of course, but also on our ticket history of “network is slow” and “connection keeps dropping.” But it wasn’t until I visited these offices and sites that I saw the discrepancy between the corporate office, where IT worked, and some of the other locations, which looked satisfactory on paper but were not in reality.

This realization impacted the project in a few ways, the biggest being that we had to rethink how we delivered the updates to the computers. In fact, we ended up with three different delivery strategies. We also changed some of our training content to highlight workarounds to these challenges, like using a phone as a hotspot at a job site to download the latest set of drawings when the physical network kept dropping.

This exploratory phase not only revealed where I was off-track; I also learned a lot about what my customers assumed about our services and our direction. Whether in formal meetings or conversations over coffee or dinner, these direct interactions with the teams were paramount to designing the overall experience I wanted them to have. And I’ll be honest: it taught me I hadn’t fully appreciated how their assumptions might affect the project. I discovered preconceptions about “Corporate” and IT being slow to change, which highlighted that we needed to communicate more frequently about works-in-progress instead of presenting a “finished” solution. We discovered cases where people had received a solution that “didn’t work,” so why bother asking again? These conversations let us gather specific examples from those groups. And we discovered that, in some cases, it was true that we hadn’t interpreted their needs correctly (more learning about the “finished solution” again!), and this had eroded some trust between the branches and IT.

On the flip side, we also found opportunities to educate where their assumptions were incorrect. During one of our first meetings, a group was convinced that “the network is always slow, so it will still take forever to save a document stored in the cloud.” But this assumption was based on the previous infrastructure, where all applications were accessed via the network and terminal services. Once we’d explained how the new environment took advantage of local applications and took load off their network, we could move forward again and look at how it applied to their branch directly.

If we hadn’t identified these assumptions, the project would surely have had mediocre results at best, and we would have lost staff buy-in to the changes. Armed with reality from those conversations, we shifted the content of the training from location to location, aligning with our customers’ requirements for ideas, innovation, and education.

Making Assumptions Work for You

Here are some of the lessons I learned along the way:

  1. Know what you’re doing: Make sure you have identified your goals/desired results and classified them. For my project, I split my goals into “department” and “customer” so that my planning could take different routes for each goal but end up at the same destination. For example, my goal for the department was to move each user to the new platform, upgrade desktop productivity applications, and move network-stored files to the cloud. My goal for the customer was to increase customer confidence and adoption by providing timely knowledge, training, support, and appropriate hardware.
  2. Always ask: Never assume you aren’t assuming! Start a conversation. Validate what you think you know, and learn as you go. Know how your customers are working, how they want to work, and how they want to work with you. Ask yourself “Why am I doing this?” and make sure the answer supports at least one of your goals or customer concerns. In my case, I designed multiple training sessions based on the audience’s job types in order to provide people with the most relevant information for their roles. Ask yourself “How am I doing this?” and make sure you are responding to requirements based on your customers’ actual needs. In my case, I delivered training sessions in multiple ways (video conference, in-person, print) to ensure that all my office locations and individuals’ learning styles were considered. Ask your customer “What do you think I’m doing?” or “What do you want me to do?” to fill out your understanding and validate any assumptions you might be working with.
  3. Draw it out: Make a map. I often think of it as if my customer is driving a car from one place to another, and various things happen to them on the way. I draw a map, with alternate routes, so that I can best “control” their journey. For every turn, I ask myself whether I know this or I’m assuming something. If it’s an assumption, I know I have more validation to do, or I accept the risk. For example, some of my training sessions had both in-person and virtual attendees, and I drew a “route” for each of them. Their starting points were different: for the in-person group, we physically distributed the new hardware, with a projector screen displaying the new login instructions; for the virtual group, we pre-shipped hardware with “do not touch until morning of training” instructions, along with email/print documents containing the login instructions. (A rough sketch of this mapping idea appears after this list.)
  4. Don’t rush it: I know time is money, and like money, it’s often a luxury. Advocate for time to properly plan your customer’s journey. It doesn’t matter what the journey is—submitting a ticket, signing up for training—plan it and then perform at least one test run for the various paths. I was lucky in that my executive sponsor valued the planning process and I had plenty of time for it. But if you aren’t as fortunate in your timelines, map out your critical points and watch out for them during execution. Sometimes your timeline means you need to make a decision based on an assumption. That’s OK. You’re most likely going to make some changes after you go live anyway, so you’ll have another chance. If time is running short, use “most likely to be right” assumptions and go with them, and focus your validation on weaker areas.
  5. Gather feedback and revise: Our work in customer experience is never really done. We should strive to understand if our customers are experiencing what we think they are and to know if it is the experience they want. Due to the scale of this project, I formally asked for feedback three times. We prepared a small survey that was sent out to the participants of each training session to assess whether it had been valuable to them and sent another immediately after deployment that addressed the quality of the deployment experience. We sent a third short survey a few months after deployment, to gauge whether users were experiencing the productivity benefits that we expected as part of our project goals.
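
To make the mapping idea in step 3 concrete, here is a minimal sketch of how a journey map with assumption flags might be captured in code. This is purely illustrative: the JourneyStep structure, the two routes, and the validation notes are hypothetical examples patterned on the training scenario above, not tooling from the actual project.

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    """One "turn" on the customer's route, flagged as known or assumed."""
    description: str
    is_assumption: bool = False   # True means this turn still needs validation
    validation_note: str = ""     # what to check, or the risk being accepted

# Two hypothetical routes to the same destination, as in the example above.
in_person_route = [
    JourneyStep("Receive new hardware at the office"),
    JourneyStep("Follow login instructions shown on the projector screen"),
    JourneyStep("Office network can reach cloud storage during training",
                is_assumption=True,
                validation_note="Confirm branch connectivity before training day"),
]

virtual_route = [
    JourneyStep("Pre-shipped hardware arrives before the morning of training",
                is_assumption=True,
                validation_note="Track shipments; plan a fallback for late arrivals"),
    JourneyStep("Follow login instructions from the email/print documents"),
]

def open_assumptions(route):
    """Return the turns that still need validation (or an accepted risk)."""
    return [step for step in route if step.is_assumption]

for name, route in (("in-person", in_person_route), ("virtual", virtual_route)):
    for step in open_assumptions(route):
        print(f"[{name}] validate: {step.description} ({step.validation_note})")
```

Walking each route and printing its open assumptions gives you a validation to-do list; anything still on it when time runs out is a risk you accept knowingly rather than by accident.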

The Relationship Is Priceless

These checks provide the opportunity to uncover more assumptions or realities that are impacting your plan, and you’ll find you can head back to your map to address the detour before something becomes a costly pothole. Chances are you’ll improve the relationship between IT, your stakeholders, and your customers along the way, and that is priceless.


Kristin Jones is a passionate customer support advocate with a focus on people and process, and has been leading IT teams with delight for over a decade. A lifelong learner who seeks to inspire others with fresh ideas, she is an active member of the HDI community and holds certifications in ITIL v3, HDI Support Center Manager, and KCS Foundations. She strives to end each day having smiled more than frowned and having helped someone (or something!) work better. Follow Kristin on Twitter @kitonjones.

