What do we mean by evaluations that make a difference?

By: Ramon Crespo – European Evaluation Society

I agree with the generally accepted idea that evaluation is related to change. For example:

  1. Evaluations can discover how a given situation has changed due to a particular intervention.
  2. Evaluations can make recommendations for improvement (that is, for change) of the intervention under study.
  3. Finally, the people involved in an evaluation process can change as a result of the evaluation journey. At the end of the day, they will know more about their project, their organization or even themselves.

Taking this into account, we probably all agree that evaluation should be grounded in rigorous (and therefore systematic) methodological approaches, and be connected with different dimensions of change.

But, what if an evaluation that makes a difference is not only describing or suggesting change? What if evaluations that make a difference are those that create conditions for change?

Let’s imagine that an evaluation of a needle exchange program discovered that the program helped to reduce HIV transmission in prisons. Let’s also imagine that the evaluation revealed inefficient distribution of clean needles among drug-dependent inmates. A good evaluation, in its final report, would describe the program’s harm reduction achievements (using rigorous methodology) and provide recommendations for adopting better needle distribution processes (e.g. needle vending machines).

So far so good. However, I cannot help thinking that an evaluation that makes a difference would go one step further by contributing to creating conditions that make the recommended changes viable within a particular context.

Evaluations can create conditions for change in many different ways:

  • By giving a voice to those who don’t usually get a chance to express their views;
  • By challenging program owners with uncomfortable questions, driving them towards the next stage in their organizational development;
  • By encouraging discussion of findings among stakeholders, using an open format (e.g. a workshop) geared towards promoting change rather than presenting a closed report based on the interpretations of a single ‘expert’.

At some point, any good evaluation can be followed by relevant and significant change, especially when the stakeholders are strongly committed to transparency and improvement. But maybe the point here is that “evaluations that make a difference” are those that carry a toolkit consisting of attitudes, knowledge and skills that help the evaluator to be a real agent of change.

What do you think?

Do you believe that stories should be selected according to whether or not they present evaluations that have intentionally created conditions for change? 

I’d be interested in your opinion and welcome your comments.


Filed under Evaluations that make a difference

9 responses to “What do we mean by evaluations that make a difference?”

  1. I certainly agree with the author that creating conditions for change, even unintentionally, is a worthwhile criterion that should be added to the selection list of stories.

    • Intentionality is something we hadn’t originally considered. It would be an interesting dimension to capture in the stories, as well as in the subsequent analysis of the stories. Thank you for raising this concept.

  2. burtperrin

    Ramon, I think that you have made some really important points. Your discussion about evaluation perhaps helping to create conditions for change is really helpful. And I think we shouldn’t worry about intentionality; some of the most important changes may not have been intended.

    I think perhaps the most important point in what you have said is your comment: “At some point, any good evaluation can be followed by relevant and significant change.” Thus, for evaluations which make a difference, a key criterion is that positive change follows from this. For example, it should not be enough for evaluation to result just in a change in a policy or perhaps in systems, unless change affecting people follows from this.

    The one point in your post that I might question, and most likely this is just a matter of semantics, is when you say that “evaluation should be grounded in rigorous (and therefore systematic) methodological approaches.” I suggest that we be fully open to any approach to evaluation (a danger is that there are some who have been trying to reserve “rigorous” for certain particular methodological approaches, and we certainly do not want to be restricted in this sense).


    • Thank you Lisa, and thank you Burt for your comments.

      I agree with you, Burt, that we shouldn’t worry about the intentionality of the results. It is going to be very interesting to learn about unintended results and serendipity around evaluation processes. But maybe we could focus on identifying stories that refer to intended (but not always explicit) strategies to create conditions for change.

      Something like: “during the evaluation we tried to subtly facilitate meetings between prison guards and doctors, in order to make some recommendations more likely to be implemented by the end of the process”.

      How do you feel about that?

      It is a very good point that, for evaluations which make a difference, we would like to see how the applied recommendations affect people in the end.

      And regarding “rigorous”, I could not agree more with you. That is the reason for the “systematic” 😉


  3. Pingback: Once upon a time… Why link evaluation with storytelling? | Evaluations that Make a Difference

  4. Pingback: Había una vez….. Evaluación y narración de historias | Al Borde del Caos

  5. Sarah Cunningham

    In my experience, evaluations that make a difference arise in the context of the relationship between the evaluator and the evaluation commissioner (user). For me, transformative learning often arises in these relationships. Thus it is the process of evaluation, the way the evaluation is undertaken (and not the actual results), that has the most lasting impact.

    • Hi Sarah,
      I agree that the relationship between the evaluator and evaluation commissioner is key. Certainly a good relationship helps to build trust in the evaluation process and in the findings. This can lead to ‘process use’ (e.g. benefits that arise simply from participating in an evaluation, including evaluation capacity building). These benefits tend to occur during or shortly after the evaluation process. But what about longer-term impacts? Is a good relationship sufficient? Do we even check back at a later time to learn how much of an impact the evaluation had? Did it have ‘legs’? Did it lead to longer-term changes – anticipated or otherwise? Are there factors other than a good evaluator-evaluation user relationship that impacted evaluation influence?

      We are interested in learning from these stories. As we ask participants to reflect on factors that contributed to evaluation ‘impact’ it will be interesting to learn the role that relationship played and whether there are other factors also at play.

  6. Adinda Van Hemelrijck

    I totally agree that evaluations can and must serve not only improvement of performance and policy or decision making, but also development as such. Since we’re talking about evaluations that are assessing/making a difference in people’s lives, it appears to me that we’re talking about a particular kind of evaluation, namely impact evaluation. Impact evaluation is quite different from performance evaluation: it judges a project or program’s value in terms of its contributions to or influences on impact, rather than its performance. A project can perform very well, yet have no influence on impact. Impact can be defined in different ways. In this discussion, we’re clearly talking about relevant, significant, and even transformational change (the latter being understood as systemic change affecting people’s lives in transformational thus empowering ways).

    The biggest elephant in the room for me, though, is not whether impact evaluation could or should contribute to making such relevant, significant and transformational change, but rather how it could do so in a way that produces sufficiently rigorous evidence to influence and convince power holders, and how we will know it effectively did so. It’s one thing to give people a voice or create space to express their views; it’s another thing to make sure they are actually heard and can hold these power holders accountable. The latter is essential to make the kind of change that transforms people’s lives. For this, the evidence needs to be sufficiently rigorous.

    The literature suggests that participation is essential to creating the conditions for change. The argument often used to dismiss participatory approaches, however, is to say that they are not independent and thus their findings are not objective (or not free from political influence and organizational pressure), and therefore cannot be used for rigorous causal inference. In addition, it is often argued that it’s the role of project/program managers and policy makers (and not of evaluators) to make use of evaluation findings in order to influence change. Arguments countering this line of reasoning say that the use of evaluation findings remains limited if all these stakeholders (including beneficiaries and decision makers) are not engaged in one way or another, and that evaluations are never entirely free from politics. Relationships and interactions between commissioners and evaluators, between evaluators and evaluated, and among all those who could benefit from the use of the knowledge being generated, influence not only the uptake but also the design and conduct of evaluations.

    So how do we define, assure and assess the rigor of participatory evaluation approaches? I don’t think we can avoid this question. Evaluators and development practitioners intuitively agree on the need for rigorous approaches, but rigor is often too narrowly defined as a methodological procedure that must assure scientific validity and thus objectivity (or independence), suggesting the use of counterfactual-based approaches such as (quasi-)experimental and statistical methods. This raises an important methodological challenge for those who question the value-for-money of impact evaluation and wish to enhance its utility and value for better understanding and influencing transformational development.

    Could the analysis of the stories help us develop a broader concept of rigor that for instance combines scientific and participatory rigor, and as such becomes more suitable for empowerment-focused or transformational research and development efforts? If this question is at stake, I’m happy to learn and contribute…

