
Associate and Annotate Results to Learning

February 10, 2019

When showing the impact of learning on a business outcome, don't make the mistake of claiming that the learning program caused the outcome unless you have done an in-depth study. Establishing a causal relationship requires rigorous statistics and significant data collected over time, analyzed by an expert.

Instead, on a regular basis, associate learning with business outcomes. What does this mean? It means drawing a roughly reasonable correlation. In addition, annotate on a timeline when the program started, when key milestones occurred, and when it ended. Highlighting these markers on a trend line of business outcome data tells your story in a reasonable manner.

The goal is to identify the business or talent outcome the program is expected to change and then track it before, during, and after the program. If you have a naturally occurring control group, track it separately, alongside the data you're tracking for program participants. Tracking the results consistently is more important than being perfect or precise. If the program affected the results, the trend should reflect that. Sometimes other factors had a material impact, positive or negative, on those results; those can be annotated as well.
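As a rough sketch, the before/during/after tracking described above could look like this in Python. All of the data, the metric, and the milestone months here are illustrative assumptions, not real program results:

```python
from statistics import mean

# Hypothetical monthly outcome data (e.g., a satisfaction score) for
# program participants and a naturally occurring control group.
participants = [72, 71, 73, 74, 78, 80, 82, 83, 85, 86, 87, 88]
control      = [72, 73, 72, 73, 74, 73, 75, 74, 75, 76, 75, 76]

# Assumed program milestones (zero-based month indices) to annotate.
PROGRAM_START, PROGRAM_END = 3, 9

def phase_averages(series, start, end):
    """Average the outcome before, during, and after the program."""
    return {
        "before": mean(series[:start]),
        "during": mean(series[start:end]),
        "after":  mean(series[end:]),
    }

participant_phases = phase_averages(participants, PROGRAM_START, PROGRAM_END)
control_phases = phase_averages(control, PROGRAM_START, PROGRAM_END)

# Compare the before-to-after lift for participants vs. the control group.
participant_lift = participant_phases["after"] - participant_phases["before"]
control_lift = control_phases["after"] - control_phases["before"]
print(f"Participant lift: {participant_lift:.1f}")  # prints 15.0
print(f"Control lift:     {control_lift:.1f}")      # prints 3.3
```

A gap between the participant lift and the control-group lift is the kind of roughly reasonable association the post describes; it is evidence, not proof of causation, and the milestone markers are what you would annotate on the trend graph.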

The bottom line is that L&D should use evidence that is roughly reasonable to tell the impact story. Occasionally it is healthy to do a deeper impact study or causal model, and you should use experts and the right tools for those. Otherwise, spend your time telling your impact story by looking at the trend in outcome data before, during, and after the program, and annotate on the outcome trend graph where learning milestones occurred. This tells a chronological story to your stakeholders and helps you share evidence of the impact, or lack of impact.

Speaking of lack of impact, when that happens, don't be afraid of it. Embrace it and use it as an opportunity to engage in constructive dialogue. Maybe it is a training issue; maybe it is not. Maybe there is room to adjust or try something different. A data-driven discussion is an opportunity to improve. Make the improvements, measure again, and you will likely see a positive impact.

Finally, combine the associations in the data with impact ratings. Impact ratings come from scientifically validated evaluations and can augment the impact story. They are great predictive tools to use before the outcomes have been fully realized. Together, impact ratings and annotated outcome data help communicate the story of impact, showing how evaluation-based predictors of impact are trending against goal, even down to the participant demographic level.

In conclusion, L&D leaders are better off using an evidence-based measurement approach that shows trends in outcome data coupled with real-time, scientifically validated impact measures. This allows the L&D function to course correct in real time if the impact is not being fully realized. Any of these course corrections, and their associated impact on outcomes, can then be annotated in your scorecard. This creates a collaborative, constructive conversation with stakeholders that engages all parties in the story of impact.

Thank you,

The Performitiv Team