This post first appeared on ICMI, a partner publication.
Successful organizations evaluate their training programs to optimize learning transfer from the classroom to the job. You’re probably familiar with commonly used methods to evaluate training effectiveness, such as learner feedback, tests and assessments, role plays and simulations, observation, and customer surveys.
Together, these methods form a comprehensive evaluation strategy to determine whether a class or curriculum has had the desired impact on employee performance. Some training teams add more detail to their training evaluation yardstick, using metrics like return on investment or strategic impact, but most training managers take a more pragmatic view. They ask, “Is what we are doing in the classroom influencing performance?”
Fueled by that pragmatic mindset, most trainers take the first step toward training evaluation with a post-class survey that asks the learner to provide their perspective on the learning experience. Gathering learner feedback is a quick and inexpensive starting point to gauge what’s happening in the classroom, understand the learner experience, and refine training design and delivery.
Here are five easy-to-implement ideas you can use to boost the effectiveness of learner feedback as an evaluation method:
Gather learner feedback early and often.
Post-course surveys may be helpful for improving future training classes, but they don’t provide real-time feedback for in-the-moment adjustments. Instead, add in-progress touchpoints for trainers to solicit learner feedback throughout a learning event. These can include:
- Daily pulse-checks for multi-day classes
- Weekly surveys for a multi-week curriculum
- Standard questions at transitions between modules
Formative data helps the trainer adjust their delivery mid-course, and it provides context and an early warning if something is going wrong. It also gives trainers useful background as learners move into tests and assessments.
Craft learner-centered survey questions.
Questions designed to solicit feedback about the learner’s experience shift the focus from critiquing the trainer or the content to sharing unfiltered input about their own experience as a learner. It’s a subtle difference that can substantially impact both the quantity and the quality of the feedback offered.
See the difference in these two sets of examples:
- “The course content was engaging.” (trainer-centered)
- “I was engaged in the exercises and discussions.” (learner-centered)
- “The practice exercises were practical and relevant.” (trainer-centered)
- “After this training, I feel prepared to apply what I learned in my job.” (learner-centered)
Adjust survey questions to pivot away from what the learner thought about the training. Learner likes and dislikes are interesting, but the goal is a sound learning event that results in learning transfer. Substantive questions focus on the learner experience and learners’ preparedness to apply what they learned on the job.
There’s more: the full post is available on ICMI.
Rebecca Gibson, Gibson Learning and Performance, is a tireless contact center advocate who believes passionate, connected employees are the key to memorable customer experiences. She specializes in practical, creative approaches to contact center training, employee development and support, performance management, and contact center quality. With over 20 years of contact center experience, Rebecca has worked in HR and recruiting, training and development, software product design, and consulting leadership. Her consulting and training experience includes Genesys, Magellan Health Services, ICMI, and extended engagements with USAA, Convergys, Verizon Wireless, and Covered California. She’s a frequent contributor to Contact Center Pipeline and other industry publications and currently serves on the ICMI Advisory Board. Connect with her at in/rebeccargibson/ and @gibsonlearning.