
Learning Impact is Important No Matter How You Slice It

November 30, 2018

At Performitiv, we’re modernizing learning measurement through innovative technology. In the past, the only way impact could be measured was through highly complex, cumbersome, and resource-intensive methods. What we’ve learned in our collective 100-plus years of experience is that impact is important no matter how you slice it.

You can measure impact in many ways. One obvious approach is a deep analysis such as a causal model. That is reserved for the most strategic programs and can provide ‘proof’ of impact. However, it takes volumes of data and expertise in causal modeling, and it should really only be undertaken in the rare cases where it is necessary.

Nonetheless, learning impact matters to just about every organization in just about every situation, not only in the deep, periodic cases described above. So the real question is: how do you measure impact and make it, well, impactful? The answer is to take a methodologically sound approach and then identify practical ways to execute on it.

From a methodology perspective, modern learning organizations must fuse strong learning measurement methodology with performance improvement methodology. Learning impact is not about the learning itself but about the performance that results from it, so this fusion is essential. At Performitiv, our research and our clients have found that adapting Net Promoter Score (NPS) concepts, which were built for performance improvement, into traditional learning measurement models (e.g., Kirkpatrick, the ROI Process, the Success Case Method) creates a greater sense of urgency and focus on impact for performance.

Next, think about what impact data the learning group controls or has readily at its fingertips, and start with that. Examples include your evaluations before, during, and after your programs.
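As an illustration of the NPS arithmetic mentioned above (on a 0-10 rating scale, promoters rate 9-10 and detractors 0-6, and the score is the percentage of promoters minus the percentage of detractors), here is a minimal Python sketch. The function name and the sample ratings are illustrative only, not part of any Performitiv product:

```python
def nps(scores):
    """Net Promoter Score for a list of 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) are
    counted in the total but cancel out of the numerator.
    """
    if not scores:
        raise ValueError("need at least one rating")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Eight hypothetical post-program ratings: 4 promoters, 2 passives, 2 detractors
ratings = [10, 9, 8, 7, 6, 9, 10, 3]
print(nps(ratings))  # 25.0
```

The same calculation can be run on any sliceable subset of evaluation responses, which is what makes it a natural fit for the program- and people-level analysis described below.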
These should measure impact from three specific angles: program, people, and results.

At the program level, measure elements like the suppliers, the facilitators, the modality, the location, the environment, the content, the cost, the timeliness, and the usage. These have a material effect on impact. NPS scores on each of them can show where strong positive or negative elements exist that can ultimately affect impact, so definitely measure and analyze at the program level.

At the people level, it is about the demographics of the learners. Years of service, job function, region, and grade level are common examples. Slicing your evaluation data by these demographics will help identify subsets of learners having positive versus negative experiences, which can drive their current and future impact. Cross-tab all of your data by demographic and look for the differences so you can adapt what you are doing to improve performance and drive up your impact.

Finally, there are results. Results can be operational in nature, such as metrics driven by cost, quality, time, productivity, revenue, safety, innovation, or satisfaction. Results can also be soft, like competencies, capabilities, and skills. Upgrade your evaluations to capture this information too. While it involves a degree of self-reporting, it is a good leading indicator of what is happening in reality.

So design your evaluations to capture the data you control as a learning group. You’ll have a great foundation to measure, analyze, and optimize impact on many levels. However, you’ll also want to capture data you as a learning group do not control. This may be uncomfortable, but it is truly necessary for healthy conversations with stakeholders about learning impact. Data you don’t control are the actual results, such as a compliance incident, an error rate, a cost savings, a revenue gain, and so on. It can also be the change in a competency or skill over time.
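One practical way to work with such uncontrolled metrics is to trend them alongside program activity and compute a simple correlation coefficient. The sketch below uses Pearson’s r with made-up monthly figures; the series names and numbers are hypothetical, not real client data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly series: training hours delivered vs. error rate
training_hours = [10, 20, 30, 40, 50, 60]
error_rate = [5.1, 4.8, 4.2, 3.9, 3.1, 2.8]

r = pearson(training_hours, error_rate)
print(round(r, 2))  # strongly negative: errors fall as training rises
```

A strong negative correlation here would support, though not prove, that the program is helping; that distinction is exactly the correlation-versus-causation point made below.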
While learning cannot control these actual metrics, you should gather and trend them before, during, and after your programs. Look for correlation to see whether they are moving in the right direction, and at the slope, that you and the stakeholder expect. While this is not a causal analysis, it is a reasonable correlation conversation.

Most of all, learning impact, at any level and no matter how you slice it, is about acting on the data and improving performance. If you do not use the data to change the impact on programs, people, and results, the effort is administrative rather than value-added. So incorporate action plans that are visible and accountable, so that the changes you make are tied to the metrics that prompted them and the metrics’ future trajectory matches what you expected from those actions.

In the end, acting on the data and continuously improving performance is the greatest form of learning impact you can truly demonstrate.

The Performitiv Team