From Challenges to Change: A Deep Dive into Evaluating The Academy Training Program
When I first took on the Florida Academy for Child Protection and Family Resiliency project, I knew it would be challenging. But I didn’t anticipate how much this experience would reshape my approach to learning and development. My overarching goal was clear: to create a best-in-class learning experience that would set a new national standard for training newly hired child welfare professionals. During my best-practices research, I discovered that while many states had innovative ideas, turning those ideas into reality was easier said than done. I’m sure many of you have faced similar hurdles: how do you balance innovation with practicality?
The use of VR, for example, enabled trainees to step into the shoes of child welfare professionals, experiencing the emotional and logistical challenges of home visits and family assessments. Meanwhile, Problem-Based Learning Simulations allowed them to tackle complex cases, fostering the critical thinking and decision-making skills that are essential in this field. These tools turned traditional training on its head, giving our trainees a safe space to make mistakes, learn, and grow. And let’s be honest, who wouldn’t want to learn by doing rather than just listening? However, incorporating VR into the curriculum demanded new skills from the trainers and added logistical challenges on top of the 20-40 twelve-week sessions they were likely to conduct. So, how could we use these great tools and new curricula without overwhelming the trainers and burning them out?
Evaluating the Effectiveness, Efficiency, and Quality of the Learning Experience from Multiple Perspectives
Once we wrapped up the first cycle of the project, it was time to dive into the pilot phase: a make-or-break moment for any project. There were so many moving parts that I felt overwhelmed at times. That’s when my love for evaluation really kicked in. I knew we needed a rock-solid plan to measure not just whether the training worked, but how well it worked.
So, I rolled up my sleeves and got to work, carefully aligning every detail to make sure we captured every aspect of the learning experience. I combined Thalheimer’s and Kirkpatrick’s methodologies to dig deep. This wasn’t just about checking boxes; it was about understanding, on a meaningful level, how well the training worked. From the initial “aha” moments in the classroom to the long-term impact on the ground, my goal was to measure it all.
Trainers:
The trainers played a pivotal role in the success of the academy, and it was essential to understand their experiences and challenges. The evaluations were conducted at three key stages: before, during, and after the training.
Before Training: We assessed trainers' initial reactions to the curriculum through focus groups, capturing their thoughts on the content and their preparation time.
During Training: Weekly post-mortem meetings and daily chat groups provided real-time feedback on trainers' cognitive load, learner engagement, and the overall energy in the room. This allowed us to make immediate adjustments as needed.
After Training: Post-pilot focus groups allowed trainers to share their experiences with the curriculum, simulations, VR, and assessments. We asked targeted questions to identify what worked well, what needed improvement, and what resources could further support them.
Trainees:
Understanding the trainees' perspectives was equally important. I wanted to ensure that the learning experience was not only informative but also supportive and engaging.
Before Training: I gathered demographic information to tailor the training to the needs of our diverse group of trainees.
During Training: Daily surveys measured cognitive load and emotional responses, while focus groups assessed their understanding of the material. This feedback was critical in identifying areas where trainees struggled and where additional support was needed.
After Training: A comprehensive focus group session at the end of the pilot allowed trainees to reflect on their overall experience with the curriculum, simulations, and VR. This feedback was instrumental in refining the training for future cohorts.
To ensure that the training had a lasting impact, we conducted quality reviews 3-6 months after the trainees completed their provisional assessments. This allowed us to measure the real-world application of the training and its effectiveness in preparing child welfare professionals for their roles.
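For readers who like to see an evaluation design laid out as a structure, the plan above boils down to a stakeholder-by-stage matrix. Here is a small, purely illustrative sketch in Python of how such a plan could be encoded; the instrument names simply restate the prose above, and none of this is the actual project artifact.

```python
# Purely illustrative: the pilot evaluation plan as a stakeholder-by-stage matrix.
# Instrument names are paraphrased from the description above, not project files.
EVALUATION_PLAN = {
    "trainers": {
        "before": ["focus groups on curriculum reactions and prep time"],
        "during": ["weekly post-mortem meetings",
                   "daily chat groups",
                   "daily Mental Effort Survey"],
        "after": ["post-pilot focus groups on curriculum, simulations, VR, and assessments"],
    },
    "trainees": {
        "before": ["demographic intake questions"],
        "during": ["daily cognitive-load and emotional-response surveys",
                   "focus groups on understanding of the material"],
        "after": ["end-of-pilot focus group",
                  "quality reviews 3-6 months after provisional assessments"],
    },
}

def instruments_for(stakeholder: str, stage: str) -> list:
    """Look up which data-collection instruments run for a group at a given stage."""
    return EVALUATION_PLAN.get(stakeholder, {}).get(stage, [])

if __name__ == "__main__":
    for stage in ("before", "during", "after"):
        print(f"trainers / {stage}: {instruments_for('trainers', stage)}")
```

Seeing the plan this way made it easy to confirm that every stakeholder had at least one instrument at every stage, which is exactly the gap a checklist view tends to hide.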
Mental Effort Survey for Trainers
One of the tools that became a game-changer for us was the Mental Effort Survey, a tool I used during my dissertation. It’s as simple as it sounds: a 90-second daily check-in with our trainers. This survey provided powerful insights into just how mentally taxing each module was. It’s like asking, “How much did this workout really take out of you today?” The feedback we got helped us tweak and optimize the training, making it smoother and more effective for everyone involved.
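To make that concrete, here is a minimal sketch, in Python, of how daily effort ratings like these could be rolled up per module to flag the most taxing content. The 1-9 rating scale, the field names, and the threshold are assumptions for illustration only, not the actual survey instrument or analysis we used.

```python
from collections import defaultdict

# Hypothetical daily Mental Effort Survey responses (illustrative data only).
# Each record: the module taught that day and a self-reported effort rating
# on an assumed 1-9 scale (1 = very low effort, 9 = very high effort).
responses = [
    {"module": "Home Visits (VR)", "trainer": "T1", "effort": 8},
    {"module": "Home Visits (VR)", "trainer": "T2", "effort": 7},
    {"module": "Case Documentation", "trainer": "T1", "effort": 4},
    {"module": "Family Assessment Simulation", "trainer": "T2", "effort": 6},
]

def average_effort_by_module(records):
    """Average the daily effort ratings for each module."""
    ratings = defaultdict(list)
    for r in records:
        ratings[r["module"]].append(r["effort"])
    return {module: sum(vals) / len(vals) for module, vals in ratings.items()}

def flag_high_load(averages, threshold=6.5):
    """Return modules whose average effort meets the (assumed) threshold, highest first."""
    flagged = [(m, avg) for m, avg in averages.items() if avg >= threshold]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for module, avg in flag_high_load(average_effort_by_module(responses)):
        print(f"Review pacing and supports for '{module}' (avg effort {avg:.1f}/9)")
```

Even a rough roll-up like this is enough to spot which modules deserve a pacing change or extra trainer support before the next cohort starts.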
The outcomes of this evaluation were informative. They provided clear insights into how well the content was designed and how emotionally prepared both the learners and trainers were to engage with it. The evaluation also highlighted the effectiveness of the trainer guides we developed. Despite several rounds of Subject Matter Expert (SME) reviews, the evaluation revealed areas where the content still required updates. This feedback was invaluable in refining the curriculum to ensure that it was not only comprehensive but also responsive to the needs of those it was designed to serve.
Looking back, this project was a journey, one that taught me as much as it did our trainees. It reinforced my belief in the power of thoughtful, intentional design, not just in training but in everything we do. The changes we made didn’t just improve our program; they had a real, lasting impact on the lives of children and families across Florida. And that, to me, is what it’s all about: creating something that matters. What about you? What’s a project that challenged and rewarded you in equal measure?
Until next time…