Measure for Measure: On Shakespeare and Impact Evaluation

Evaluations of humanitarian interventions can broadly be divided into two categories: process and impact. 

While a process evaluation seeks to examine how the intervention was delivered and whether the conduct or context influenced the outcome, an impact evaluation seeks to understand the causality between the intervention and the “positive and negative, primary and secondary long-term effects produced …, directly or indirectly, intended or unintended” (1), in short, the impact. 

In my experience, the latter is the more commonly requested type of evaluation, particularly by, and to reassure, donors, who above all want to see and share tangible results.

But how does one measure impact?

This process would normally begin with the intervention planning methodology. These methodologies vary and go by many names: a logic model, a blueprint for change, a logical framework, or a theory of change, or perhaps a combination of several.  

In most cases, the intervention is planned from the desired outcome backwards, e.g., ‘if success looks like all the people we aim to assist having access to clean water, what are the necessary steps to achieve this?’. 

In this scenario, a person, or group of people, will come up with some sort of plan typically using a combination of inputs, resources, outputs, outcomes, risks, assumptions, means of verification, and contextual factors. These will be outlined sequentially to identify the overall objective, map the required actions, clarify the intervention mechanisms of change and logic, and, critically, in some way explain how the impact will be measured. 

The latter requires developing indicators to measure the outcomes and assess the performance of the intervention.

And there’s the rub of this blogpost (and my first Shakespearean reference). 


The way that ‘we’ conceive of an indication of impact, and by ‘we’ I mean the broader humanitarian sector, composed largely of organisations from Western, developed countries, may not be the same as that of the people that are served by the intervention.

Programme design systematically relies on analytical and technical metrics to determine impact, and current methods are typically shaped by entrenched power dynamics. More often than not, this takes place in a top-down, donor-driven process, in which the parameters for impact are tied to donor commitments, such as the number of outputs or percentage-based outcomes, that have become synonymous with data and evidence, used predominantly to show the cost-effectiveness or value for money of an intervention. 

This process is often repeated by organisations out of fear of losing funding in future grant cycles. Therefore, the way impact is measured is often at the behest (knowingly or otherwise) of those in power. 

With this in mind, for my second Shakespearean reference, permit me to digress to many (many) years ago, when, as a 17-year-old English student, I was required to study the Shakespeare play, ‘Measure for Measure’. 

While the play examines numerous topics, including morality, women in a patriarchal society, and mercy, the overarching theme is of justice; and as a young student I was particularly impressed with the protagonist Isabella, who when faced with an injustice, decries the hypocrisy of the powers-that-be and states “… man, proud man, Dress'd in a little brief authority, Most ignorant of what he's most assur'd…”. 

Or in other words: power leads you to believe that you are right. 

Power can breed a hubris that enables you to believe that you know what’s right, and therefore best, for someone else.

Which leads me back to impact evaluations. 

By virtue of either financing or designing the intervention, we, in the humanitarian sector, hold the authority, but may also be ignorant of the best ways to measure impact. 

How are the metrics for success of an intervention decided upon? Who determines them? And to what extent are the perspectives of the people that the interventions serve incorporated into the process? 

The traditional ways of measuring and demonstrating impact must be improved with a ‘bottom-up’ mindset that recognises, enables, and takes on board the viewpoints of the people that we serve. 

While the humanitarian sector in recent years has strived for more local action, decolonisation, and the increased participation of people at risk of, or affected by, crisis, evaluation seems to be lagging behind this trend, and frameworks for measuring impact are still, for want of a better word, imposed. 

For systemic change, we must strive, at the intervention design stage, for participatory evaluative processes with those we wish to serve. We must also be open to shifting away from top-down metrics, valuing other forms of return by listening to what others consider to be impactful. 

Whatever this may be, e.g., feelings of improved wellbeing or security, or conversely a reduction in stress, the point is that the way we think about impact must go beyond a results-based system that prioritises outputs or outcomes per financial return. Instead, we must work ever more collaboratively and equally with the people we serve, to understand what they consider to be the most important factors affecting their lives, and to come up with appropriate measures. 

While this will undoubtedly provoke challenging conversations with donors and break with the status quo, enabling locally-led humanitarian action, particularly when it comes to impact evaluation, will require a level of discomfort, but permit me to reference Shakespeare a third time and say we must “screw [our] courage to the sticking place, and we’ll not fail”.

References

  1. OECD-DAC (2010). Glossary of Key Terms in Evaluation and Results-Based Management. Paris: Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC).