
Wednesday, March 29, 2017

Evaluations are supposed to be used.

By Emily Becher, Research Associate

Last month, I wrote a blog post about my path of professional development, which included my desire to learn more about the field of evaluation studies. My first step was reading Essentials of Utilization-Focused Evaluation (2012) by Michael Quinn Patton. Not to skim it, not to read a few sections, but to read the whole thing.

And… I did it!

Publicly stating my goal to write down my thoughts for Family Matters was a way to create accountability for myself, keep me on track, and block out time for reading. So thank you to Family Matters readers for being my accountability buddies!

The first thing I want to share is that I really enjoyed this book and I found it extremely useful. I highly recommend it to anyone who’s looking to increase their evaluation capacity.


A Deadly Sin


My biggest takeaway from Patton’s book is this: Evaluations are supposed to be used. This means that if an evaluation project ends with an unused report, I as an evaluator have failed.

Guess what: I’ve failed a lot.


But why don’t evaluations get used? What makes them more or less useful? Utility comes down to taking the time to build relationships, to understand who a program’s stakeholders are, and to learn what they need from the evaluation. In short, it comes down to taking the time to focus on the user.

One of my favorite parts of Patton’s book is a list called “Temptations Away from Being User-Focused: Deadly Sins.” Out of the list of 10 sins, the one I am most guilty of is waiting until the findings are in to identify users and intended uses. Since reading this list, I’ve made a big change in my evaluation work. Now when I meet with an educator for the first time, I try to ask, “What are you going to do with this evaluation? Who are you going to want to tell the results of the evaluation to?”

Asking these questions has helped the educators and me probe to discover what’s missing from our current evaluation that would help us tell the story of the project to a particular stakeholder. Often I’ve fallen into the trap of helping with an evaluation where the only identified stakeholder is the educator who is delivering the program. I never asked about a broader audience or other potential future stakeholders, like granting agencies. This is a trap because when the evaluation is done and an educator wants to tell a broader story, we often can’t.

I want to help change that. I do not want to add to a pile of data that sits unused.



Two Cycles of Evaluation Use


In his book, Patton describes two cycles of evaluation use. One is virtuous: findings are used and useful. The other is vicious: findings, and evaluation as a field, are increasingly distrusted.

In a virtuous cycle of evaluation utilization, 1) intended users become engaged, 2) users tell others the evaluation is important, 3) staff and others cooperate, 4) cooperation yields high-quality, relevant data, 5) findings are credible and used, and 6) users promote evaluation as worthwhile.

In a vicious cycle of evaluation utilization, 1) intended users distrust the evaluation, 2) intended users bad-mouth the evaluation, 3) staff and others resist and undermine it, 4) that resistance produces poor data, 5) findings lack credibility and are not used, and 6) users’ skepticism about evaluation deepens.

When I think about the evaluations I’ve worked on where findings were not used, the failure occurs at the first step in the cycle: engaging intended users. If an educator I work with sees the value of an evaluation only as a measure of their own performance to use for promotion, then that limits their engagement and ultimately the use of the evaluation. Conducting an evaluation primarily to assess an educator’s performance is absolutely reasonable. However, if we aren’t engaging any other stakeholders or working toward any other goals, then of course those data are not going to be broadly useful.

Here’s my challenge then: To weave a broadening of scope into evaluations I work on, to identify outside stakeholders who would want to know about the evaluation results, and to keep them in mind throughout the process.



Your Turn


Right now, Patton’s book is the only book on my recommended reading list of evaluation resources, found here: Basic evaluation resources (scroll to the bottom). If you have suggestions for evaluation and research reading to add to this list, put them in the comments of this blog post, email me, or add them to the document directly.

When you think of evaluations in the past that you’ve been a part of, what made the data useful or not useful? What would a really useful evaluation look like for your program? What are changes that would need to be made to your current evaluation system to make it really useful? Share your thoughts in the comments!

