EVALUATION METHODS
FOR THE EUROPEAN UNION'S
EXTERNAL ASSISTANCE
GUIDELINES FOR
PROJECT AND PROGRAMME
EVALUATION
VOLUME 3
Neither the European Commission nor any person acting in the name
of the Commission is responsible for the use that might be made of
the information given hereafter.
ISBN: 92-79-00681-9
Overview
The European Commission has developed and formalised a
methodology for evaluating its external assistance, in which the
priority is on results and impacts. The aim is thus to maintain the
quality of its evaluations on a par with internationally recognised
best practice.
In the past, the evaluation of European external assistance
focused primarily on projects and on programmes. The current
methodological guidelines are designed to facilitate the move
towards an evaluation practice focused more on programmes and
strategies. They are intended mainly for:
- evaluation managers at European Commission headquarters and in the Delegations,
- external evaluation teams.
The methodology is also made available to all European external
aid partners, as well as the professional evaluation community.
It is available in three languages (English, Spanish and French)
and in two forms, optimised for reading and for navigation on the
Internet, respectively.
The Internet version includes numerous examples and in-depth
analyses. It is available on the European Commission website:
http://ec.europa.eu/europeaid/evaluation/index.htm
The printed version consists of four volumes. The first volume,
"Methodological bases for evaluation", presents the basic concepts
and their articulation. The second volume is a handbook for
"Geographic and Thematic Evaluation". It pertains to the
evaluation of the entire set of Community actions on the scale of a
country or region, and the evaluation of all actions relative to a
sector or a theme on a global scale. This third volume is a
handbook for "Project and Programme Evaluation". It concerns
large projects, pilot projects, multi-country programmes and any
other project or programme for which an evaluation is required.
The fourth volume "Evaluation Tools" presents the main techniques
available for structuring an evaluation, collecting and analysing
data, and assisting in the formulation of value judgements.
Evaluation questions
Sooner or later in the evaluation process, a series of precise
questions (no more than ten) is selected with a view to satisfying
the needs of the evaluation users and ensuring the feasibility of
the evaluation.
By focusing the evaluation on key issues, the questions allow the
evaluation team to collect accurate data, to deepen its analyses,
and to make its assessments in a fully transparent manner.
Questions are written in a simple and precise way. As far as
possible, they do not cover areas where other studies are
available or in progress.
The set of questions is composed in such a way that the
synthesis of all answers will enable the evaluation team to
formulate an overall assessment of the project/programme. For
this purpose, the set of questions covers the various levels of the
logical framework and the seven evaluation criteria in a balanced
way (see Section 2.2.1).
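As an illustration only (not part of the formal guidelines), the balance of a question set across the seven criteria can be checked with a simple coverage matrix; the questions below and their mapping to criteria are hypothetical.

    # Illustrative sketch: checking that a set of evaluation questions
    # covers the seven evaluation criteria in a balanced way.
    # The questions and their mapping to criteria are hypothetical.

    CRITERIA = [
        "relevance", "effectiveness", "efficiency", "sustainability",
        "impact", "coherence", "community value added",
    ]

    # Hypothetical question set, each mapped to the criteria it addresses.
    questions = {
        "Q1": ["relevance", "coherence"],
        "Q2": ["effectiveness"],
        "Q3": ["efficiency"],
        "Q4": ["impact", "sustainability"],
        "Q5": ["community value added"],
    }

    coverage = {criterion: 0 for criterion in CRITERIA}
    for mapped in questions.values():
        for criterion in mapped:
            coverage[criterion] += 1

    for criterion, count in coverage.items():
        flag = "" if count else "  <-- not covered"
        print(f"{criterion:25s}{count}{flag}")

A criterion left at zero signals that the question set should be rebalanced before the inception report is finalised.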
1.2.1 Inception
The inception stage starts as soon as the evaluation team is
engaged, and its duration is limited to a few weeks.
Inception meeting
Within a few weeks after the start of the work, and after a review
of basic documents complemented by a few interviews, the
evaluation team defines its overall approach.
This approach is presented in a meeting with the evaluation
manager and the reference group members. Subjects to be
discussed include:
- Logical framework
- Evaluation questions, either from the ToR or proposed by the evaluation team
- Indicative methodological design
- Access to informants and to documents, and foreseeable difficulties.
The presentation is supported by a series of slides and by a
commented list of evaluation questions. If relevant, the meeting
may be complemented by an email consultation.
Inception report
The evaluation manager receives an inception report which
finalises the evaluation questions and describes the main lines of
the methodological design including the indicators to be used, the
strategy of analysis, and a detailed work plan for the next stage.
The report is formally approved by an official letter authorising the
continuation of the work.
Approval of reports
The members of the reference group comment on the draft
version of the report. All comments are compiled by the
evaluation manager and forwarded to the evaluation team. The
team prepares a new version of the report, taking the comments into
account in two distinct ways:
- Requests for improving methodological quality are satisfied, unless there is a demonstrated impossibility, in which case full justification is provided by the evaluation team.
- Comments on the substance of the report are either accepted or rejected. In the latter case, dissenting views are fairly reported in the report.
The manager checks that all comments have been properly
handled and then approves the report.
1.3.1 Preparation
The evaluation manager checks that:
- Public authorities in the partner country/countries are informed, through the appropriate channel, of the forthcoming field work
- Project/programme management is provided with an indicative list of people to be interviewed, dates of visit, itinerary, and names of team members
- Logistics are agreed upon in advance.
The work plan remains flexible enough to accommodate circumstances
in the field.
1.3.2 Follow-up
The evaluation manager facilitates interviews and surveys by any
appropriate means, such as mandate letters or informal contacts
within the Government.
The manager is prepared to react swiftly at the evaluation team's
request if a problem encountered in the field cannot be solved with
the help of the project/programme manager.
1.3.3 Debriefing
One or several debriefing meetings are held in order to assess the
reliability and coverage of data collection, and to discuss significant
findings. At least one of these meetings is organised with the
reference group.
6. Valid conclusions
Are the conclusions coherent, logically linked to the findings,
and free of personal or partisan considerations? Do they cover
the five DAC criteria and the two EC-specific criteria?
7. Useful recommendations
Are recommendations coherent with conclusions? Are they
operational, realistic and sufficiently explicit to provide
guidance for taking action? Are they clustered, prioritised and
devised for the different stakeholders?
8. Clear report
Is there a relevant and concise executive summary? Is the
report well structured, adapted to its various audiences, and
not more technical than necessary? Is there a list of
acronyms?
Evaluation users
Decision-makers and designers use the evaluation to reform or
renew the project/programme, to confirm or to change strategic
orientations, or to (re)allocate resources (financial, human and
others). They appreciate clear, simple and operational
recommendations based on credible factual elements.
Managers and operators use the evaluation findings to adjust
management, coordination and/or their interactions with
beneficiaries and the target groups. They expect to receive
detailed information and are ready to interpret technical and
complex messages.
The institutions that funded the project/programme expect an
account to be rendered, i.e. a conclusive overall assessment of the
project/programme.
Public authorities conducting related or similar
projects/programmes may use the evaluation through a transfer
of lessons learned. The same applies to the expert networks in
the concerned sector.
Finally, the evaluation may be used by civil society actors,
especially those representing the interests of the targeted
groups.
1.5.3 Presentations
The evaluation manager may organise one or several
presentations targeted at audiences like: expert networks in the
country or region, media, government-donor coordination bodies,
non-state actors. The evaluation team may be asked to participate
in the presentation.
The manager may write an article to facilitate the dissemination of
the main conclusions and recommendations.
Both the budget and the time schedule are specified within the
framework of constraints set by the ToR.
2.2.1 Inception
The inception stage starts as soon as the evaluation team is
engaged, and its duration is limited to a few weeks.
Management documents
The evaluation team consults all relevant management and
monitoring documents/databases so as to acquire a
comprehensive knowledge of the project/programme covering:
- Full identification
- Resources planned, committed, disbursed
- Progress of outputs
- Names and addresses of potential informants
- Ratings attributed through the result-oriented monitoring system (ROM)
- Availability of progress reports and evaluation reports, if relevant.
Evaluation questions
The evaluation team establishes the list of questions on the
following bases:
- Themes to be studied, as stated in the ToR
- Logical framework
- Reasoned coverage of the seven evaluation criteria.
Each question is accompanied by a commentary covering the following points:
- Origin of the question and potential utility of the answer
- Clarification of the terms used
- Indicative methodological design (updated), foreseeable difficulties and feasibility problems, if any.
On the website: menu of issues / questions
Evaluation criteria
The following evaluation criteria correspond to the traditional
practice of evaluating development aid, as formalised by the OECD's
Development Assistance Committee (DAC), complemented by two criteria
specific to the European Commission.
Relevance
Extent to which the objectives of the development intervention are
consistent with beneficiaries' requirements, country needs, global
priorities, and the partners' and EC's policies.
Effectiveness
Extent to which the development intervention's objectives were
achieved, or are expected to be achieved, taking into account their
relative importance.
Efficiency
Extent to which the outputs and/or desired effects have been
achieved with the lowest possible use of resources/inputs
(funds, expertise, time, administrative costs, etc.).
Sustainability
Extent to which the benefits from the development intervention
continue after termination of the external intervention, or the
probability that they continue in the long term in a way that is
resilient to risks.
Impact
Positive and negative, primary and secondary long-term
effects produced by a development intervention, directly or
indirectly, intended or unintended.
Coherence
Extent to which the activities undertaken allow the European
Commission to achieve its development policy objectives without
internal contradiction or contradiction with other Community
policies. Extent to which they complement the partner country's
policies and other donors' interventions.
Community value added
Extent to which the project/programme adds benefits to what would
have resulted from Member States' interventions in the same
context.
Inception meeting
The evaluation team leader presents the work already carried out in
a reference group meeting. The presentation is supported by a
series of slides and by a commented list of evaluation questions.
Inception report
The evaluation team prepares an inception report which recalls and
formalises all the steps already taken, including an updated list of
questions in line with the comments received.
Each question is further developed into:
- Indicators to be used for answering the question, and corresponding sources of information
- Strategy of analysis
- Chain of reasoning for answering the question.
Indicators, sources of information and steps of reasoning remain
provisional at this stage of the process. However, the inception
report includes a detailed work plan for the next stage. The report
needs to be formally approved in order to move to the next stage.
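Purely as a hypothetical sketch (the class name, fields and values are invented for illustration, not prescribed by these guidelines), each developed question can be recorded as a structured entry gathering its indicators, sources, strategy of analysis and chain of reasoning:

    # Hypothetical sketch of recording a developed evaluation question
    # at the inception stage. All names and values are invented.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DevelopedQuestion:
        text: str              # the evaluation question itself
        indicators: List[str]  # indicators used for answering the question
        sources: List[str]     # corresponding sources of information
        analysis_strategy: str # e.g. change, meta, attribution, contribution
        reasoning: List[str] = field(default_factory=list)  # chain of reasoning

    q1 = DevelopedQuestion(
        text="To what extent did the programme improve access to safe water?",
        indicators=["households connected", "water quality tests passed"],
        sources=["monitoring database", "household survey"],
        analysis_strategy="change analysis",
        reasoning=[
            "compare indicators before and after the programme",
            "compare observed values against logframe targets",
        ],
    )
    print(q1.text, "->", q1.analysis_strategy)

Keeping the entries in one place makes it easy to check, question by question, that indicators and sources remain consistent as the design is refined.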
Indicators
The logical framework includes Objectively Verifiable Indicators
(OVIs) and Sources of Information which are useful for
structuring the evaluator's work. Insofar as OVIs have been
properly monitored, including baseline data, they form a
major part of the factual basis of the evaluation.
Indicators may also be available through a monitoring system, if
the project/programme is equipped with such a system.
Indicators may also be developed in the framework of the
evaluation as part of a questionnaire survey, an analysis of a
management database, or an analysis of statistical series.
Indicators may be quantitative or qualitative.
Strategy of analysis
Indicators and other types of data need to be analysed in order
to answer the evaluation questions.
Four strategies of analysis can be considered:
- Change analysis, which compares indicators over time and/or against targets
- Meta-analysis, which extrapolates from the findings of other evaluations and studies, after having carefully checked their validity and transferability
- Attribution analysis, which compares the observed changes with a policy-off scenario, also called a counterfactual
- Contribution analysis, which confirms or disconfirms cause-and-effect assumptions on the basis of a chain of reasoning.
The first strategy is the lightest one and may fit virtually all
types of questions. The last three strategies are better suited to
answering cause-and-effect questions.
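A minimal sketch of the first strategy, change analysis, assuming hypothetical baseline, observed and target values for a quantitative indicator:

    # Minimal sketch of change analysis: comparing an indicator over time
    # and against its target. All figures are hypothetical.

    def change_analysis(baseline, observed, target):
        change = observed - baseline
        planned = target - baseline
        # Share of the planned progress actually achieved
        # (None if no progress was planned).
        achievement = change / planned if planned else None
        return change, achievement

    # Example: net enrolment rate (%) at baseline, at evaluation, as targeted.
    change, achievement = change_analysis(baseline=62.0, observed=71.0, target=75.0)
    print(f"Change over time: {change:+.1f} points")
    if achievement is not None:
        print(f"Target achievement: {achievement:.0%}")

The comparison against the target, not just against the baseline, is what turns a simple before/after observation into an assessment usable for answering an evaluation question.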
Documentary analysis
The evaluation team gathers and analyses all available documents
(secondary data) that are directly related to the evaluation
questions:
- Management documents, reviews, audits
- Studies, research works or evaluations applying to similar projects/programmes in similar contexts
- Statistics
- Any relevant and reliable document available through the Internet.
This is by no means a review of all available documents. On the
contrary, the evaluation team looks only for what helps answer
the evaluation questions.
Interviewing managers
Members of the evaluation team undertake interviews with people
who are or have been involved in the design, management and
supervision of the project/programme. Interviews cover
project/programme management, EC services, and possibly key
partners in the concerned country or countries.
At this stage, the evaluation team synthesises its provisional
findings into a series of first partial answers to the evaluation
questions. Limitations are clearly specified, as well as issues still to
be covered and assumptions still to be tested during the field
phase.
Developing tools
The tools to be used in the field phase are developed. Tools range
from simple and usual ones, like database extracts, documentary
analyses, interviews or field visits, to more technical ones, like
focus groups, modelling, or cost-benefit analysis. Volume 4
describes a series of tools that are frequently used.
2.3.1 Preparation
The evaluation team leader prepares a work plan specifying all the
tasks to be implemented, together with responsibilities, time
schedule, mode of reporting, and quality requirements.
The work plan remains flexible enough to accommodate last-minute
difficulties in the field.
The evaluation team provides key stakeholders in the partner
country with an indicative list of people to be interviewed, surveys
to be undertaken, dates of visit, itinerary, and names of
responsible team members.
2.3.5 Debriefing
The evaluation team gathers in a debriefing meeting at the end of
the field phase. It undertakes to review its data and analyses, to
cross-check sources of information, to assess the strength of the
factual base, and to identify the most significant findings.
Another debriefing meeting is held with the reference group in
order to discuss reliability and coverage of data collection, plus
significant findings.
The evaluation team presents a series of slides related to the
coverage and reliability of collected data, and to its first analyses
and findings. The meeting is an opportunity to strengthen the
evidence base of the evaluation.
2.4.1 Findings
The evaluation team formalises its findings, which follow only from
facts, data, interpretations and analyses. Findings may include
cause-and-effect statements (e.g. partnerships, as they were
managed, generated lasting effects). Unlike conclusions, findings
do not involve value judgments.
The evaluation team proceeds with a systematic review of its
findings with a view to confirming them. At this stage, its attitude
is one of systematic self-criticism, e.g.:
- If statistical analyses are used, do they withstand validity tests?
- If findings arise from a case study, do other case studies contradict them?
- If findings arise from a survey, could they be affected by a bias in the survey?
- If findings arise from an information source, does cross-checking reveal contradictions with other sources?
- Could findings be explained by external factors independent of the project/programme under evaluation?
- Do findings contradict lessons learnt elsewhere and, if so, is there a plausible explanation?
2.4.2 Conclusions
The evaluation team answers the evaluation questions through a
series of conclusions which derive from facts and findings. In
addition, some conclusions may relate to other issues which have
emerged during the evaluation process.
Conclusions involve value judgements, also called reasoned
assessments (e.g. partnerships were managed in a way that
improved sustainability in comparison to the previous approach).
Conclusions are justified in a transparent manner by making the
following points explicit:
Which aspect of the project/programme is assessed?
Quality certificate
The evaluation team leader attaches a quality certificate to the
draft final report, indicating the extent to which:
- Evaluation questions are answered
- Reliability and validity limitations are specified
- Conclusions apply evaluation criteria in an explicit and transparent manner
- The present guidelines have been used
- Tools and analyses have been implemented according to standards
- The language, layout, illustrations, etc. conform to standards.
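By way of illustration only (the structure and wording below are an assumption, not a prescribed format), the certificate could be kept as a simple checklist so that unmet points are visible at a glance:

    # Hypothetical sketch of a quality certificate as a checklist.
    # The items paraphrase the points above; the format is invented.
    certificate = {
        "evaluation questions are answered": True,
        "reliability and validity limitations are specified": True,
        "conclusions apply evaluation criteria explicitly and transparently": True,
        "the present guidelines have been used": True,
        "tools and analyses implemented according to standards": False,
        "language, layout and illustrations conform to standards": True,
    }

    for item, met in certificate.items():
        print(f"[{'x' if met else ' '}] {item}")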