Get Access to the Right Data to Show Program Impact!

I've heard it said countless times, “we don't have access to data telling us if our program was successful.”

There are many reasons we don't have access to data. My suspicion, however, is that we could improve access to data by picking the right artifacts to evaluate our programs before beginning the program development process.

Oftentimes the data artifacts we choose (or use by default) to measure outcomes actually make evaluation more difficult. We do a great job picking metrics, which are the outcomes that we want to see change or grow. But we don't necessarily do a good job picking the right data artifacts, which are the indicators that help us track whether we've accomplished our program goals.

Obviously, if we don’t select outcome metrics, we won’t be able to select the best data artifacts. I’m not advocating we forego the process of identifying metrics of success. I’m advocating we add one more simple step after selecting outcome metrics: identifying the most accessible and appropriate data artifacts that help us monitor progress toward our goals.

If you are wondering how to select program outcome metrics, use one of the strategic questions below. More than likely you are using some of these lines of questioning already.

You’ve heard it said before - begin with the end in mind. Before we go designing and delivering a program, we must have a clear idea of the things we hope to see change on the other side of our program. Here are some great questions.

  1. What is the business goal our program is supporting? This offers insight into possible business metrics.

  2. If our program is wildly successful, what do we expect to change? This gives us a sense of change metrics.

  3. What would become possible for participants because they've completed our program? This line of questioning might give us some metrics around participant-specific growth or outcomes.

  4. Here's a new question that came up from a friend in our measurement meetup. She asks, “If we don't do this program, what will happen?” This question might sound strange, because you're asking people to think about what's not going to take place. But it gets us to the root of the problem we're trying to solve. For example: if we don't do this program, we'll see a continued lack of engagement, or employees will lose trust, or we'll see fewer sales calls being made. Whatever the problem is, this question reveals how the status quo will continue. It clarifies the problem our program is designed to solve, and that can point us toward some metrics.

These questions can help us identify the outcomes to track as we deliver the program and after the program is over. I'm sure this is something that you're already doing to some degree.

We often miss the difference between a metric and the data artifacts used to track our metrics.

A metric is the outcome that we plan to track and measure during and after the program. Increasing sales is a metric for those of us who do sales enablement. Weight loss is another example of a metric. Those metrics aren't the data artifacts themselves, the actual data points we would track and calculate to know whether we've increased sales or achieved our weight loss goals.

Let’s use the example of a weight loss program. What might be some examples of data artifacts we could use to help us know if we've lost weight? There’s a ton of them!

  • The number on the scale, tracked over time to see if you're losing weight as you follow the program.

  • Inches. Get out that measuring tape and measure your waist.

  • Calories consumed daily or hours spent at the gym every week. These would be leading indicators that might tell us we are moving in the right direction.

  • How well do your favorite pants fit? Do we have a muffin top or not?

There are many different indicators we could look at in this simple example of weight loss. And then of course, the same is true for sales enablement work. There are a ton of indicators helping us see progress toward sales goals. You can imagine what some of those are. Now, the real question comes into play.

What data artifacts are best to help us track progress toward our desired outcomes?

With our metrics selected, we can brainstorm a list of possible data artifacts (just like in the weight loss example above).

Once we have 5-10 possible data artifacts, we select the best ones using the criteria below.

  1. Select the data artifacts we currently have access to. Seems really simple, right? Yet we often don't take that extra step in our design process. Instead we select data artifacts by default - because someone told us to, or because that's the way it's always been done.

  2. Select the data artifacts that are most relevant to the program content. Here's a good example. Over the holidays I read a book about how our diets can make our insulin levels spike. One easy way to start losing weight might be to reduce insulin spikes by implementing a few strategies. Teaching people strategies to prevent insulin spikes is not the same thing as teaching them to count calories, or putting them on a rigorous weight-training regimen. All of those things could be useful. But if our program is all about insulin spikes, tracking calories consumed per day and hours spent at the gym won't be relevant to our program.

  3. If our data artifacts aren't inherently accessible for any reason, the next step is creating a system that makes them accessible. This becomes a design challenge, a creative constraint. We don't have access to participants when they're not with us, physically or virtually, in our learning program. So how can we systematically collect data on their progress when we do have access to them? It can be something as simple as kicking off every workshop with a data-collection activity. Maybe everyone gets their measuring tape out and measures their waist size week over week. Build that activity into the learning experience.

Not having access to data is essentially not having control over your data. If you don’t have control over your data, create it!

Alaina Szlachta