Evaluation: Why, for Whom and How?

Henry Lucas, Richard Longhurst
2010

Summary

This article discusses theoretical approaches to evaluation, drawing on experiences from agriculture and health. It notes that different stakeholders may have varying expectations of an evaluation and that alternative approaches are better suited to some objectives than others. Randomised controlled trials, or well-designed quasi-experimental studies, probably provide the most persuasive evidence of the impact of a specific intervention, but if the primary aim is systematic learning, a Theories of Change or Realistic Evaluation approach may be of greater value. If resources permit, different approaches can be combined to cover both accountability and learning objectives. As there will be trade-offs between objectives, transparency and realistic expectations are essential in evaluation design.

There are at least five competing approaches to evaluation: experimental, constructivist, pragmatic, realistic and theories of change. The European Commission guide to the evaluation of socioeconomic development (2007) suggests that the purpose of the evaluation should determine the methodology or combination of methodologies to be adopted. It identifies possible distinct purposes as: planning/efficiency, accountability, implementation, knowledge production, and institution and network strengthening.

Demonstrating that a specific intervention has improved outcomes is arguably easier, but less useful over the longer term, than explaining how to do it again. Focusing on the knowledge production agenda, there are three widely held theoretical positions:

  • Experimental: Randomised controlled trials (RCTs) provide the only scientific approach to the evaluation of an intervention. If such trials are not possible, the RCT benchmark should be approximated as closely as possible. (A minimal worked example of the core RCT calculation follows this list.)
  • Theories of Change (ToC): It is essential to focus not only on whether an intervention succeeded or failed but also on why. This involves developing a shared understanding of how the intervention is intended to work (the implementation theory) and then mapping this against actual performance, identifying divergences and bottlenecks in the causal chain from inputs to outcomes.
  • Realistic Evaluation (RE): When interventions are complex, RCTs are useless. The aim must be to identify the most interesting ‘mechanisms’ generated by the intervention and explore how they have performed in relation to specific population groups in specific contexts. The extent to which a specific intervention has ‘succeeded’ or ‘failed’ is of limited interest, given that it cannot be seen as providing reliable insights into the outcomes of similar future interventions.

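To make the experimental position concrete, here is a minimal sketch of the core RCT calculation: the impact estimate is the difference in mean outcomes between randomly assigned treatment and control groups. All data, names and figures below are invented for illustration; the article itself contains no such example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical post-intervention outcomes (e.g. crop yield or a
# health score) for 500 randomly assigned units per arm.
# All figures are invented; the true simulated effect is +3.
control = rng.normal(loc=50.0, scale=10.0, size=500)
treatment = rng.normal(loc=53.0, scale=10.0, size=500)

# Because assignment was random, the two groups differ (in
# expectation) only in exposure to the intervention, so the
# difference in means estimates the intervention's impact.
impact = treatment.mean() - control.mean()

# A two-sample t-test gauges whether the observed difference
# could plausibly have arisen by chance alone.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated impact: {impact:.2f} (p = {p_value:.4f})")
```

Quasi-experimental designs attempt to approximate this benchmark when randomisation is infeasible, for example by using matching or statistical controls to construct a comparison group.
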
If a ‘combined methods’ approach is to be adopted so that both accountability and learning objectives can be satisfied:

  • Those following the ToC paradigm would have no theoretical objection to its use within an RCT. However, using both approaches together would require extra resources, and the various stakeholders may have very different views as to the most effective allocation of these resources.
  • Another approach, probably acceptable to RE advocates, would be to address accountability using results-based monitoring and evaluation (RBME). RBME focuses on performance and on providing evidence that allocated resources are correlated with quantifiable benefits. Combining the micro-level, in-depth learning approach of RE with the ‘Is the intervention achieving its targets and allocating resources efficiently?’ objectives of performance-based monitoring is an interesting possibility. (A sketch of the kind of evidence RBME deals in follows this list.)

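As a loose illustration of RBME-style evidence, the sketch below tabulates targets against achievements and budget spent for a set of indicators. Every indicator name and number is invented, not drawn from the article.

```python
# Hypothetical RBME monitoring data: every name and number below
# is invented for illustration only.
indicators = [
    # (indicator, target, achieved, % of budget spent)
    ("Farmers trained",       1000,  870, 92.0),
    ("Clinics stocked",         50,   48, 88.5),
    ("Extension visits made", 2400, 1500, 97.0),
]

print(f"{'Indicator':<24}{'Target':>8}{'Actual':>8}{'% met':>8}{'% spent':>9}")
for name, target, achieved, spent in indicators:
    pct_met = 100.0 * achieved / target
    # A wide gap between budget spent and target met flags an
    # efficiency question for evaluators to pursue.
    print(f"{name:<24}{target:>8}{achieved:>8}{pct_met:>7.1f}%{spent:>8.1f}%")
```

In practice, RBME systems track such indicators over time and link them to budget lines; the point here is only the style of evidence involved.
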
Pawson and Tilley (1998) propose that the most productive outcome of theoretical debates would be a discussion of the routine decisions involved in specific instances, so that alternative research designs can be compared.

Source

Lucas, H. and Longhurst, R., 2010, 'Evaluation: Why, for Whom and How?', IDS Bulletin, vol. 41, no. 6, pp. 28-35