This paper attempts to identify the issues and challenges relating to the evaluation of protection work carried out by humanitarian actors, including those both with and without a specific protection mandate. It excludes literature and practice on the Responsibility to Protect doctrine and work carried out specifically in the context of human rights, legal or prosecutorial actions (including the work of the International Criminal Court), and security and military actions with protection components (such as peacekeeping missions).
The paper is written primarily for staff in evaluation, protection, and programme-advising roles whose effectiveness in commissioning and using evaluations is essential to improving the quality and use of evaluative evidence. Such staff includes evaluators; members of operational agencies and donor offices who commission, lead and support evaluations; and evaluation researchers and consultants. The paper is also relevant to staff working in protection programming, advisory, and policymaking positions in both headquarters and field offices, who are often called on to comment on evaluation terms of reference and inception reports and to host evaluation missions.
The Better Evaluation framework was used to organise and structure the contents of the paper and make it accessible to a wide and diverse audience. This framework was not developed specifically from a humanitarian evaluation perspective. Nevertheless, it is used here because it offers a comprehensive and flexible way of organising vast amounts of information relating to practical evaluation issues by grouping them into a series of broad process steps that are common to virtually every type of evaluation exercise in most contexts – including those where humanitarian actors operate. These broad process steps are (1) manage an evaluation or evaluation system; (2) define what is to be evaluated; (3) identify the results of interest and frame the evaluation; (4) collect and analyse data (about context, activities and results); (5) understand the causes of results; (6) synthesise data from one or more evaluations; and (7) report and support the use of evaluation findings.
Below is a list of recurring questions that emerged from both the literature and the interviews conducted for this paper. Together, they capture in a nutshell the challenges that make the evaluation of protection exceptionally complex:
- As evaluators and evaluation commissioning staff, how can we develop a solid understanding of what we are attempting to achieve when we are evaluating protection in humanitarian action?
- What do successful protection results look like? How can we articulate and measure programme performance at different levels of results in terms of protection, for both programming and evaluation purposes? How does this differ across mainstreamed, integrated, and dedicated/specialist protection programming?
- How should we customise data collection and analysis strategies when dealing with sensitive information?
- How can we understand cause-and-effect issues in protection in humanitarian contexts?