
Thursday, June 1, 2017

Evaluation Essentials: Evaluation vs. Research

By Emily Becher, Research Associate

My quest to advance myself as an evaluator and a professional continues. Toward that goal, I am writing another blog post to summarize my efforts and what I have learned. Once again, Family Matters readers are acting as my accountability buddies, so thank you!

Recently I read the evaluation text Evaluation Essentials: From A to Z, by Marvin Alkin. If you’re looking for a basic guide to evaluation, this is a really good one. What I liked about it is that it gave me some great ways of talking about evaluation that don’t get lost in jargon. I don’t know about you, but I often sit in meetings with experts, and at some point they string together a series of words that have no meaning for me. After a couple of experiences like this, I tend to tune out.

When I start talking about evaluation, I worry that I do exactly the same thing! So using Alkin’s book, I’m going to review some concepts and terms in a way that I hope is logical and clear so the next time someone talks about evaluation, you don’t “pull an Emily” and tune out.

Evaluation: Determining the Value of an Activity

Let’s start with the word evaluation itself. Embedded within it is the word value. That is the core mission of evaluation: to determine the value of an activity. Evaluation is something we all do, all the time, with varying degrees of formality. Imagine I said, “We’re ordering pizza tonight. We can order from anywhere. Where would you like to order from?”

You start thinking through the various options for pizza.

You start weighing them against each other:
  • Which one delivers?
  • Is there free delivery or is there a surcharge?
  • Do they only take cash if they deliver?
  • Do you have cash on you?
  • How far away are the take-out locations from home?
  • If you do take-out, will you have to deal with rush-hour traffic?
  • What kind of crusts can you get?
  • What kind of crusts do you like?
  • What kind of toppings are available?
  • What kinds of toppings do you like?
  • How much do the various toppings cost?

You rapidly find yourself making a decision based on a number of variables. Often you do this so fast that you’re not even tracking exactly how you come to your ultimate decision, but you could probably retrace the steps if you needed to. Evaluation is about the decision made at the end, the steps taken to reach that decision, and the evidence documented at each step.

The point of the pizza imagination exercise is this: evaluation is nothing special, but it is essential to decision-making. In order to decide on one thing versus another, you need to assess its value and typically compare that value against something similar, using similar methods of measurement.
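If you like seeing the steps written out, here is a minimal, hypothetical sketch in Python of the pizza decision treated as a small, documented evaluation. The criteria, weights, and pizzeria names are invented for illustration; the point is simply that every option is scored against the same weighted criteria, and the evidence behind the final choice is recorded.

    # Hypothetical illustration: score each pizza option against weighted criteria,
    # keep the per-criterion evidence, and report the option with the best total.
    criteria_weights = {"delivers": 3, "crust": 2, "toppings": 2, "price": 1}

    options = {
        "Pizzeria A": {"delivers": 1.0, "crust": 0.5, "toppings": 1.0, "price": 0.4},
        "Pizzeria B": {"delivers": 0.0, "crust": 1.0, "toppings": 0.8, "price": 0.9},
    }

    def evaluate(scores):
        """Return the weighted total and the per-criterion steps (the documented evidence)."""
        steps = {criterion: scores[criterion] * weight
                 for criterion, weight in criteria_weights.items()}
        return sum(steps.values()), steps

    # The decision made at the end, plus the steps that led to it:
    for name, scores in options.items():
        total, steps = evaluate(scores)
        print(f"{name}: total={total:.1f}, evidence={steps}")

    best = max(options, key=lambda name: evaluate(options[name])[0])
    print("Order from:", best)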

Evaluation in Family Development

For example, the nutrition course Cooking is a SNAP was adapted from a previous version. To make sure the updated version was working as well as or better than the previous one, it was important for the project lead, Extension Educator Betsy Johnson, to use the same evaluation tools from the previous version when testing the updated one.

Think about it as comparing navel oranges to Valencia oranges versus comparing apples and oranges. You’ve already decided you want to eat an orange — now you’re deciding which orange is fresher, riper, juicier, and will ultimately be tastier and more satisfying.

Evaluation is so reflexive and essential that even kids do it!

Research: Increasing Human Knowledge

This leads us to the next point: how to distinguish evaluation from research. Evaluation is ultimately supposed to lead to a decision: what policies to keep in place, what programs to sunset, and so on. Research is about increasing human knowledge about the world around us and drawing conclusions. Because drawing a conclusion about the world is an overwhelming task that requires lots and lots of evidence, research findings are often phrased in broad terms with qualifying words like “perhaps,” “implies,” “suggests,” and so on. Evaluation is different: the goal of an evaluation is to lead to decision-making. To put it simply:
Research tells us why humans like pizza. Evaluation tells you what pizza to order tonight.
For example, a few years ago we conducted an evaluation of the Co-Parent Court program. We used an experimental research design in which participants were assigned to groups. We recruited a large sample. We used widely studied, validated instruments to see how parents were changing. We disseminated the findings in peer-reviewed articles.

This evaluation ended up looking a lot like research. Not only did we examine whether the parents in this particular program benefitted, but we also built a knowledge base about whether this type of program would be helpful for parents like those in our program. Said another way, our work extended beyond how participants experienced this particular program to establishing evidence about whether the intervention did or did not cause change over time, evidence that could be generalized (applied) to a larger group of similar people.

Continuous Evaluation in Family Development

Compare the Co-Parent Court scenario to the evaluation systems established for two of our larger programs, SNAP-Ed and Parents Forever™. We collect systematic data on who is participating in these programs, what knowledge they gained, and what behaviors they are engaging in related to the content of the curriculum. For SNAP-Ed, we ask about fruit, vegetable, and whole grain consumption and frequency of physical exercise. In Parents Forever™, we ask about self-care practices, parent-child relationships, and co-parenting strategies. The programs are based on existing research about the value of the content and best practices in educational delivery. Then we use continuous evaluation to make sure participants are actually acquiring the knowledge and implementing the targeted behaviors.

“SNAP-Ed and Parents Forever™ are well-established programs,” you may be thinking. “We know they work. So why do we keep collecting data on participant experiences and outcomes from the course?” Well, because every year we craft a new budget, and we use that budget to deliver on our mission as Extension. We need to know whether these programs are still “keeping their promise.” We need to know what to change to keep them working well. And we need to be confident that our investment in these programs continues to make sense.

Each year, as we decide budgets, our evaluation is a point of evidence to say, “This continues to be a quality program to invest in; let’s continue it another year,” or, “Let’s keep investing and make some shifts this year.” Evaluation supplies us with the internal evidence we need to keep moving forward, and it also gives our funders evidence to continue their investment. The key piece of these two ongoing evaluation systems is the continuous quality improvement of both programs so they stay relevant and helpful for participants.

So there’s the pizza-based analogy to help us understand evaluation and research. Next time, we’ll use a soup-based analogy to get even further into the weeds about different types of evaluation: formative, summative, and more!

1 comment:

  1. Yesterday as I was heading out of the office for lunch, I caught myself evaluating my options ("Do I walk down the hill to sit under the trees or out on the patio? The patio was busy yesterday, but I'm wearing high heels. I'd prefer some shade, but the sun would be OK."), and then I caught myself thinking, "I'm evaluating!"

    Way to get in my brain, Emily :)

