
After 12+ years in the L&D industry, I’d venture to say that fewer than 10% of teams are using learning and development metrics effectively. That stands in stark contrast to the 91% of learning leaders who see the need to measure their training outcomes properly.

Why such a gap? A variety of reasons, of course, but much of it comes down to either looking at the wrong metrics or looking at the right ones without the right data to support them.

In the second half of my L&D career, I’ve become particularly passionate about learning and development metrics and data. So much so that I’ve found myself on a four-year path to a Master’s in Analytics, which meant going back to review statistics and calculus content I hadn’t used since high school. Yikes.

I did this because I saw the lack of data-forwardness as a real handicap to getting the kind of learning programs spun up and approved that I knew were beneficial. 

The problem was that my business leaders didn’t know this the way I did, and, without data, I lacked a proper tool to be able to show them. I wanted to change that forever.

Looking broadly, too often the L&D community finds itself measured as a cost rather than as a benefit. Or, if we’re seen as a benefit, it’s a loosely measured and understood one. Proper use of data and learning and development metrics is the salve for this. 

But it’s not an easy step forward. I read so many articles that list a bunch of metrics you can track, half of which are absolutely meaningless. I’ll get to why in this article, but, when it comes to metrics, my main question is always: would the CEO care about this?

Here’s an example: would my CEO care about the average test score of one of my assessments? Maybe, but if they were an astute business leader they might ask who made that assessment and what was on it.

If most people are getting >95% on it, it’s probably too easy. If the scores are all over the place, the right question from a leader would be: what does that tell us about how well people can do their job?

Ahh, and therein lies the real metric. An assessment score is only important if it tells you something more.

It can tell you how difficult an assessment was or how good or bad your content was (relevant to you but not likely your CEO). Or, at its best, it can give you a signal of how well someone might know how to do something relevant to their job or role.

Let’s dive in more on meaningful metrics, then. My list isn’t going to include your total number of learners or assessment data or common things like that. We’ll focus on more difficult metrics, but ones that, if you capture them, should put your L&D function miles ahead of others.

I’ll start with the ‘Gold’ standard and move down from there in order of ambition.

The Gold: Whatever The Business Needs


The first and best metric that you can track as a learning and development team and leader is whatever the business (and/or leaders of that business) tell you they want to be tracked. It’s as simple as that. And as complex.

I was in a training session one time when the facilitator said that all business decisions come down to one of three desired outcomes:

  • More Revenue
  • Less Cost
  • Mitigated Risk

So, if you’re making a measurement from your learning program on a metric the business needs, it’s going to roll up into one of these.

Examples here include helping people be more productive (less cost), selling more or getting more renewals (more revenue), or knowing what they can and cannot do in certain situations (mitigated risk).

Of course, there are other examples, but I think of those three desired outcomes all the time. 

Why track it?

This is the ticket to proving a return on investment from your L&D program. It’s the way you prove that you’ve hit business-level KPIs and training effectiveness expectations.

The only way to do this in an unquestionable manner is to aim your measurements at what the business wants to get out of your programs. This lifts your work out of its silo and into the broader business strategy. Any self-designed L&D metrics or outputs can always be dismissed as a siloed measurement.

In short, the why is the whole game here. Our goals as L&D professionals should be business goals. We are meant to be service providers and enablers. Proving this through the gold standard is what remains elusive in our industry.

How to track it?

Herein lies some difficulty. These metrics are notoriously hard to track. There are several reasons for that and I’ll talk about a few here.

But, before I do that, let me say that I think a healthy organization would be one where this happens collaboratively. An L&D team may not have the right access to the data it is looking to measure or that data might be difficult to obtain because of its messiness.

While I think a learning and development team running training that’s tied to a proper business-focused outcome metric should be responsible for driving that metric, it cannot be solely responsible for tracking it as well. That should come with help from an operations team, analysts, or an internal data team.

And, when you work with these teams, hopefully, they can explain why it may be difficult to truly capture the metric you’re looking for. Because human-based experimentation is hard. To truly track a change, you’d want to have two performance groups: a control group and a test or treatment group. And these groups need to line up as closely as they can to one another.

But with people they rarely do. Employees may differ in tenure, salary, level, experience, etc. They may even just have a bad day or week or may only be participating half-awake due to personal matters.

So comparing two groups is seldom apples to apples. Ask most folks who work with data every day and they’ll tell you that groups of people seldom are. Don’t let that throw your experiment off. You can still use the scientific method in your approach and develop a hypothesis that one group will perform differently than another (better or worse).

Don’t let perfect be the enemy of good! Because you still want to show, as best as you’re able, that your training initiative drove a business metric, even if it can’t be measured to absolute perfection.

If you know that your measurement is done on a stable group, even if part of it can be picked apart, leadership will see that your work has driven a positive impact on the business and that learning is a worthwhile investment of business capital.

This is the gold standard of what we’re doing with internal L&D teams. It’s where the value exists that we provide to our organizations. So how do you overcome the data challenges above?

  • Run sessions with a few different groups
    • A best practice would be to run this with totally random groups to help with hypothesis testing. A truly random segment may not be possible; if it’s not, be sure to compare your test group with a “control” segment that’s as closely aligned as possible.
  • Track outliers and reasons that a group may not be totally random
    • These may include items like tenure, previous experience, or previous experiments a person may have been a part of. As a data principle, you want to call out as many of these outlier details as possible.
  • Have enough students that you can rule a few out when the measurement comes.
    • Test with groups greater than 10 if you can. If two or three folks drop out, you should still have enough folks and data to report on
  • At the end of it, remember to factor in the cost of training as well (you can bet your business leaders are thinking about that).
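
To make those steps concrete, here’s a minimal sketch (in Python) of how a comparison between a control group and a treatment group might look once you have a business metric in hand, with the cost of training folded in at the end. The metric, the numbers, and the dollar values are all hypothetical placeholders; your analysts will likely want something more rigorous.

```python
# Minimal sketch of a control vs. treatment comparison on a business metric.
# All data, values, and costs below are hypothetical placeholders.
from scipy import stats

# Post-training results for a made-up metric (e.g., deals closed per rep per month)
control = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]      # no training
treatment = [6, 5, 7, 6, 8, 5, 6, 7, 6, 5]    # received training

# Welch's t-test: is the difference between groups likely more than noise?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Rough ROI framing: uplift on the metric vs. the cost of training
avg_uplift = sum(treatment) / len(treatment) - sum(control) / len(control)
value_per_unit = 2_000           # assumed dollar value of one extra unit of the metric
training_cost_per_person = 500   # assumed fully loaded cost of the training
roi = (avg_uplift * value_per_unit - training_cost_per_person) / training_cost_per_person
print(f"Average uplift: {avg_uplift:.1f}; rough ROI per person: {roi:.0%}")
```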

If you’re interested in seeing more, I posted a (redacted) example of a training report seeking to show success in this framework. You can find that here.


The Silver: Learner Behaviors and Behavioral Correlations


Next up we have a measurement of changed or altered behaviors (hopefully for the positive) and measurements of the correlation between training initiatives and performance.

I’ll start with behaviors. What most L&D programs attempt to do is teach, skill, train, or alter. I’m sure there’s more. Bottom line is that we want to improve someone's capacity to do something. 

It may be a specific tool (Excel), language (SQL), or skill (empathy), but it’s something. Therefore, the metric we want to capture is to see if we did in fact change or improve that skill. We may not be able to tie it into a business metric (‘The Gold’ above) but we can see that something’s changed.

Here’s an example: we had a “help desk” type function at one of my previous companies. We could see who was using the help desk, and how often, but we wanted to try and help employees do things on their own instead of relying on it (we didn’t have a cost associated with this, otherwise we could have tied it to a better business metric).

So we worked with the “power” users of the function—those who used it the most—to see if we could change that.

We split the power users into three groups: 

(1) A treatment group that we sent an e-learning module about finding resources on their own

(2) A group with whom we ran a short, in-person training to understand why they used the help desk so often, and then followed up with resources (including the e-learning)

(3) A control group that we did nothing with.

Guess what happened? Both treatment groups (1 and 2) decreased their use of the help desk compared to the control group (3)—and we never asked them to or told them that was the goal. But Group 2, where we addressed the behavior directly and followed through, had a much larger, longer-term reduction in use.

This on its own was a great success. We had no way to quantify the cost savings of such a change at that company, so we settled for this “Silver” level metric instead of the “Gold,” but we proved we could change our audience’s behaviors.
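
If you can export usage counts per person from a tool like that help desk, the comparison itself doesn’t need to be fancy. Here’s a hedged sketch in Python, with made-up numbers and group labels, just to show the shape of the analysis:

```python
# Sketch of scoring a three-group behavior experiment like the help desk one.
# The counts and group labels are made up for illustration.
import pandas as pd

# Average weekly help desk tickets per person, before and after the intervention
df = pd.DataFrame({
    "group":  ["control", "control", "elearning", "elearning", "in_person", "in_person"],
    "before": [9.0, 8.5, 9.2, 8.8, 9.1, 8.7],
    "after":  [8.8, 8.6, 7.1, 6.9, 5.2, 5.0],
})

# Percent change per person, then averaged per group
df["pct_change"] = (df["after"] - df["before"]) / df["before"]
print(df.groupby("group")["pct_change"].mean().sort_values())
```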

Why track it?

Because tracking behaviors helps you understand skill gaps and areas of opportunity. It also helps you identify where to put learning and development team resources to drive goals your company may have (eventually getting to ‘Gold’ level measurement).

It means better managers that coach employees to move up faster (internal mobility) or stay with the company longer (employee retention). It means getting people more committed to the company’s mission or values (employee engagement). 

It also probably starts to put your learning strategy and design into the limelight. It may not have the topline business impact that ‘Gold’ metrics do, but it will help show that your learning program is capable of moving the needle it sets out to influence.

How to track it?

You’ll need another data set to track this properly. Likely this will be some kind of historical data on employee performance or performance ratings or reviews. Depending on your organization, this may be difficult to get, but it’s imperative in showing growth or value.

Here’s a tip: ask your HR teams for a version of this data that has had personal information wiped away from it. Ask them to help you connect your learning data with employee performance data and return it back to you.

You now have an anonymous data set to look at historical changes and/or correlations. Find a good analyst and run some regression models on the variables you now have!
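
As one illustration of what that analyst might do, here’s a minimal sketch of an ordinary least squares regression on such an anonymized file. The file name and column names ("hours_trained", "tenure_years", "perf_rating") are assumptions; substitute whatever your extract actually contains.

```python
# Sketch of regressing performance ratings on learning activity.
# File and column names are hypothetical; use what your anonymized extract contains.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("anonymized_learning_performance.csv")

X = sm.add_constant(df[["hours_trained", "tenure_years"]])  # control for tenure
y = df["perf_rating"]

model = sm.OLS(y, X).fit()
print(model.summary())  # check the coefficient and p-value on hours_trained
```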

The Bronze: Learner Knowledge and Abilities


This is where most L&D programs top out. And that’s fine. Being able to show that learners gained knowledge or closed skill gaps is admirable and should keep investment for employee training and development programs chugging along.

Why track it?

You should track this because it proves that the learning experience was worth something and that it had an effect on employees.

Increasing someone’s knowledge or abilities is important and we know it helps with people performance metrics like employee engagement and retention.

How to track it?

Assessment scores come in here but they must be meaningful. For example, if you write a whole e-learning module, and then write an assessment just based on that e-learning, what does a passing score prove?

It proves they understand what you wrote, but will they remember the content from the modules? Probably not. Will passing it mean they will do something at their job better? Maybe (see above).

Tracking changes in knowledge and ability is really important, but it has to be done in the right way. Some good ways include analyzing pre- and post-assessments and looking for improvement.

Better yet, do both of those and then run another assessment 60 days later, or throw an assessment question into a different training and see how many people get it right.
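
For the pre/post approach, the math is simple once the scores are in one place. A minimal sketch, with hypothetical learners and scores:

```python
# Sketch of pre/post assessment gains with a 60-day follow-up.
# Learner IDs and scores are hypothetical.
import pandas as pd

scores = pd.DataFrame({
    "learner_id": [1, 2, 3, 4, 5],
    "pre":        [55, 60, 48, 70, 62],
    "post":       [82, 78, 75, 88, 80],
    "day_60":     [74, 70, 68, 85, 76],
})

scores["gain"] = scores["post"] - scores["pre"]        # immediate learning gain
scores["retained"] = scores["day_60"] - scores["pre"]  # what stuck after 60 days
print(scores[["gain", "retained"]].mean())
```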

You can also ask managers to track any changes you’ve set out to see from training in an employee’s working day and get back to you. This is somewhat subjective but it's subjectively pointed at the right person: their manager. 

Depending on the number of employees (or your test groups), there are all sorts of creative ways to look at this, just be thoughtful of time and how much you’re asking of managers. 

Another Bronze: Return Learning and Learning Pace

I assume that most teams are keeping track of who is learning and how many people are learning in a given time (week/month/quarter).

Those are important metrics but they’re just records of activity. Turning those into something meaningful—even if you aren’t able to accomplish some of the metrics above—is important to show the health of your learning culture.

And there’s good reason to show your leadership that a healthy learning culture is imperative. According to McKinsey, companies that offer comprehensive training programs have income per employee that is as much as 200% higher.

Why track it?

A return learning metric shows how often learners are coming back to whatever learning platform or program you’re hosting. 

You don’t want to see one-timers, you want to see folks that are committing and re-committing to upskilling and you want to measure this.

The caveat here is that, even in a healthy working environment, people are busy. A returning learner rate per month (those that come more than once) above 20% is good, and above 40% is really good.

Learning pace will help measure how often learners are returning to the platform—or how often learners are opting into learning programs from when they started. 

This can tell you if people are averaging one or two trips per month to a learning platform or if most return after 40 days. Where it starts to get really meaningful, though, is asking which type of training drives a faster pace.

For example, is your manager training so good that people are coming back for more every week? Do people really love your third-party content offerings? Track this and look at trends month over month or quarter over quarter.

How to track it, with an example

Learning analytics leader Peter Meerman has started to put together some of these into simple dashboards that I believe many teams can use. You can see some of his work here—and a dashboard that combines Active Learners, Average Hours Per Learner, and broken-out items like Average Spend per Learner. 

These are all imperative to understand a topline “health” score of a learning culture, but be careful what you wish for here, and be even more careful with how you present this data. 

Always set expectations with stakeholders of what “good” looks like when it comes to how many learners visit your platform or LMS each month. 

For big organizations, anything north of 50% per month is really good. If you’re using a learning vendor for content or software, ask them for benchmark metrics. 
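
If you’d rather compute these yourself from a raw LMS activity export instead of a vendor dashboard, here’s a minimal sketch. It assumes an export with one row per learning session and columns named "learner_id" and "session_date"; adjust to whatever your LMS actually produces.

```python
# Sketch of a monthly returning-learner rate and a learning pace metric
# from a hypothetical LMS activity export (one row per session).
import pandas as pd

events = pd.read_csv("lms_activity_export.csv", parse_dates=["session_date"])
events["month"] = events["session_date"].dt.to_period("M")

# Returning learner rate: share of that month's active learners with 2+ sessions
sessions = events.groupby(["month", "learner_id"]).size().rename("sessions").reset_index()
returning_rate = sessions.groupby("month")["sessions"].apply(lambda s: (s >= 2).mean())
print(returning_rate)

# Learning pace: average days between a learner's consecutive sessions
events = events.sort_values(["learner_id", "session_date"])
gaps = events.groupby("learner_id")["session_date"].diff().dt.days
print(f"Average days between sessions: {gaps.mean():.1f}")
```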

Training Metrics To Consider Not Tracking

You may notice a common metric that many L&D teams use is missing from this article. And that’s customer satisfaction scores (cSAT), Net Promoter Score (NPS), or any kind of score in which a learner tells you how much they enjoyed or would recommend a course. Another is the simple metric of course completion rates.

The reason I’ve omitted them is that I simply don’t see much value in them. If you can report on return learners and learning pace as described above, you have harder and more meaningful metrics and data than the subjective answers you’ll get to questions like “How much did you enjoy this training?”

If you see learners coming back for more (and doing so quickly) that should be enough signal to know they liked your training, intend on finishing what they started, and probably see the value in it for them.

If you’re specifically looking to understand how your training could be improved, then ask that question directly. Questions about enjoyment, or how likely someone would be to recommend the course, are metrics that seek to be meaningful when they really aren’t.

Data Warehousing

A common question I get asked when talking about these points is “Where can all this data be found?”

A lot of it should exist in your learning management system. Anything related to learner activity (the bronze metrics) can be found there and may require a download and some work in Excel or Sheets to find conclusions. You may need a helping hand in matching names or employee IDs to training dates in a large file.
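
If that file is too big to comfortably match up by hand, a few lines of Python can do the join. This is a sketch only; the file names and columns ("employee_id", "department") are placeholders for whatever your LMS export and HR roster actually look like.

```python
# Sketch of matching an LMS export to an employee roster on employee ID.
# File and column names are hypothetical placeholders.
import pandas as pd

lms = pd.read_csv("lms_export.csv")          # e.g., employee_id, course, completed_at
roster = pd.read_csv("employee_roster.csv")  # e.g., employee_id, department, start_date

merged = lms.merge(roster, on="employee_id", how="left")
print(f"{merged['department'].isna().sum()} LMS rows had no matching employee ID")
merged.to_csv("lms_with_employee_data.csv", index=False)
```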

Your LMS should also have a working API, or a way to transfer its own data to another place. If you’re able, working with folks inside your organization to get this data into a proper data warehousing tool (like Snowflake) can be very effective. 

This allows you to potentially compare this data (learning activity) to employee data like performance, productivity, or ratings. And, rather than doing this in a variety of Excel sheets, you could potentially do it (likely with the help of a data analyst) in the same tool with a SQL query. 

There are other ways to do this, too. Cognota is an L&D platform that is working to house all of this info in one place and might be worth taking a look at. Tools like Tableau or Google Data Studio can help make nice-looking dashboards from spreadsheet inputs, so if you can find a way to get raw data in there, you can still visualize it nicely.

Keep Up With The Changes

If you want to learn more about learning and development metrics, I would suggest following a few L&D communities on LinkedIn and beyond. Start with L&D Shakers and Offbeat.

If you think I’m totally wrong on cSAT questions in post-learning surveys, drop me a line on LinkedIn—I’m always up for a good debate!

Some further resources to help you refine your L&D program and grow your talent:

Subscribe to the People Managing People newsletter to receive regular articles on improving learning and development and other people functions in your org.

By Eric Grant

Eric Grant has 12+ years in Learning & Development managing programs and teams in high-growth environments at organizations such as LinkedIn and Coinbase. To add greater depth to his practice, he's currently pursuing his Master's in Analytics, where he’s focused on how to leverage data science to unlock human potential.