
CARE NEPAL

PROJECT DESIGN,
MONITORING & EVALUATION
STRATEGY

August 18, 1998


Contact Persons:

Marcy Vigoda
Assistant Director
Balaram Thapa
Program Development Coordinator


Table of Contents
1. Introduction
2. Our terminology
   2.1 Project Hierarchy Terminology
   2.2 Indicators
   2.3 Monitoring and Evaluation Terminology
3. Overall approach to design, monitoring and evaluation
4. What we do
5. Strengths and weaknesses
6. Goals and future plans

ANNEXES:
Annex 1: Hierarchy of Objectives
Annex 2: Key Steps in Project Design
Annex 3: Guidelines for Monitoring and Evaluation Plans
Annex 4: Current Project Monitoring and Evaluation Systems

Abbreviations
AIP 1.2 .................. Annual Implementation Plan (activity plan)
AOP ...................... Annual Operating Plan (prepared by each country office)
API ...................... Annual Portfolio Information (basic data required by CARE USA and CARE Canada)
ARI ...................... Acute Respiratory Infection
CO ....................... Country Office
DM&E ..................... Design, Monitoring and Evaluation
FCHV ..................... Female Community Health Volunteer
HLS(A) ................... Household Livelihood Security (Assessment)
M&E ...................... Monitoring and Evaluation
MER ...................... Monitoring, Evaluation, Reporting (a software program being piloted by CARE)
NGO ...................... Non-governmental organization
PIMS ..................... Project Information Management System (output monitoring formats for CARE Nepal)
PIR ...................... Project Implementation Report (six-monthly reports)
SNV ...................... Netherlands Development Organization

1. Introduction

For several years, CARE Nepal has emphasized the improvement of design, monitoring and evaluation (DM&E) systems, in order to better demonstrate the results and impact of our work. CARE Nepal's annual plans have explicitly and extensively addressed monitoring and evaluation issues since FY94. Accordingly, a number of steps have been undertaken in the past several years to improve DM&E in CARE Nepal. These derived from the recognition that while we felt we were doing very good work, it was not well documented. One of the major and unique challenges for CARE Nepal is to establish DM&E systems that capture the diversity of CARE Nepal's multi-sectoral approach, while remaining manageable in size and scope.
There have certainly been improvements in the last several years. Project designs are high quality team efforts, understanding and skills in M&E and related systems are stronger, and CARE Nepal has developed a number of innovative monitoring and evaluation tools. There is still scope for further improvement, however, and approaches are suggested in this paper.
This paper begins with a section on terminology, and then describes developments over the past few years before moving into the strategy itself. The terminology section is important so that CARE Nepal staff and other interested readers can be sure they are talking about the same thing, thus promoting a useful and constructive dialogue among colleagues.
In preparing this paper it often seemed that project design was something of a "poor cousin" to monitoring and evaluation, not receiving the same attention. This is not surprising, insofar as project design is a periodic activity while monitoring and evaluation are far more regular and require sustained attention. A lesson learned, however, is that improved project design is intimately related to better M&E.

2. Our terminology

This section on terminology borrows heavily from the "Discussion Paper on Monitoring, Evaluation and Impact Assessment in CARE Nepal" (April 1996) and from CARE USA Program Measurement Task Force documents. It is repeated here so that users of this strategy are all working from a common understanding: any discussion of improving CARE's design, monitoring and evaluation needs to begin with a clarification of terms.

2.1 Project Hierarchy Terminology

Below, the key terms input, process, output, effect and impact are distinguished. Then, working definitions for monitoring and evaluation are given. After each of these, there is a description of what is currently done in CARE Nepal.

Input -> Process -> Output -> Effect -> Impact

Input:   Resources needed by the project (i.e. funds, staff, commodities, in-kind)
Process: Interventions or activities done by the project utilizing the inputs
Output:  The direct result of process; the products of project activities
Effect:  Improvements in access to, or quality of, resources or systems, and changes in practices
Impact:  Changes in human conditions or well-being
This terminology corresponds directly to what has been used in the past in CARE:
Table 1: New and old CARE terminology

Old terminology       New terminology
Final goal            Impact
Intermediate goal     Effect
Output                Output
Activity              Process
Input                 Input
The new terminology more precisely and accurately describes what is being addressed by each goal level. [1]
Normally, inputs, processes and outputs are measured through routine monitoring. Effects
can sometimes be tracked through monitoring, but are more usually measured during an
evaluation, and require surveys, samples, and comparisons with baseline status. Impacts are
also measured during evaluations, but may require post-project evaluations before they are
evident. Annex 1 provides more detailed information on each level of objective.

2.2 Indicators

An indicator is a measure or criterion used to assist in verifying whether a proposed change has occurred. Indicators are quantitative or qualitative criteria for success that enable one to measure or assess the achievement of project goals. There are five general types of indicators:
Box 1: Different Types of Indicators
Input indicators: Describe what goes into the project, such as the number of staff, the
amount of money spent.
Process indicators: Describe the activities undertaken by the project, such as the number of management training sessions for women's groups, the number of trainings on integrated pest management, etc.
Output indicators: Describe results of project activities, such as the number of Leader Farmers trained, the number of women's groups strengthened, or the number of bridges constructed.
Effect indicators: Describe the changes in systems or behaviours/practices, such as the
percentage of farmers utilizing integrated pest management, percentage of communities
with access to contraceptives.
Impact indicators: Measure actual changes in conditions of project participants, related
to the basic problem the project is addressing. This might include changes in livelihood
status of the target population, health and nutritional status, wealth, etc. Some impacts,
such as fertility, cannot be measured either within the life of the project or even at the end
of the project: such measures take more time.
[1] Sometimes we use different terminology where that is very much preferred by donors. For example, in the Family Health Project we have a goal (impact), purpose (effects) and then outputs and activities (processes).

Input and output indicators are easier to measure than impact indicators, but they provide only an indirect measure of the success of the project. In the past, we have taken it for granted that if we achieve our intended effects (intermediate goals), and if the assumptions we have identified hold true, then our intended impact (final goal) is achieved. Now, however, we are aiming to actually measure that impact.
We often use indirect, or proxy, measures for those things which are not easily measured
directly. For example, because fertility is difficult to measure, we instead measure the
contraceptive prevalence rate. Similarly, because income is difficult to measure, we may
instead measure household consumption patterns (assuming that if income rises, households
will spend the additional money on certain items).
Note that because most CARE Nepal projects are multi-sectoral, impact goals tend to be
broader than in other missions. For example, a typical impact goal is about achieving
livelihood security for a certain target group rather than focusing on an individual aspect of
livelihood security, such as improved educational or health status.

2.3 Monitoring and Evaluation Terminology

[Diagram: a sequence of repeated monitoring events punctuated by occasional evaluations, showing that monitoring happens far more frequently than evaluation.]
Monitoring is "the systematic and on-going collection and analysis of information for the management of project implementation. With the information generated by monitoring, project staff can track implementation of activities and compare them to project plans, and make informed management decisions" (CARE USA Program Manual). As noted above, we can also monitor progress towards intended effects, and sometimes even project impacts. Staff and/or participants normally do the monitoring. Monitoring should always serve as a management tool for field staff, supervisors and managers, providing them with information that can be used to adjust activities or strategies.
Evaluation can take place during a project, at the end of a project, and sometimes years after project completion. It is "the broader investigation of the design, implementation and results of a project. Evaluation (may) identify strong and weak points of project design and implementation; determine progress toward effect and impact achievement; identify project results, both positive and negative, both intended and unplanned; identify lessons learned; and assess if a project has, or probably will have, sustainable, substantial impact" (adapted from CARE USA Program Manual).
The focus on lessons learned is particularly important, especially as a contribution to
improving existing projects and designing better ones.

Although in reality all of us evaluate projects all the time (for example, during field visits), here we refer to periodic formal assessments of projects. There are different kinds of evaluations: formative evaluations, which take place during the life of the project and are intended to improve the design of the project; end-of-project evaluations, which assess goal achievement and document lessons learned; and post-project evaluations, which typically are done two to three years after project completion and assess the impacts the project has had.

3. Overall approach to design, monitoring and evaluation

CARE Nepal is utilizing an overall approach to project design, monitoring and evaluation,
which is outlined below. This table is consistent with recent thinking in CARE USA (as
outlined in documents at the 1998 Asia Region Household Livelihood Security Conference).
Table 2: Design, monitoring and evaluation activities

1. LRSP: CO plan including vision and future directions, comparative advantage, targeting, broad choice of partners, and donor prospects; provides the framework for future project designs.

2. Situational analysis to identify new programming areas: Multi-sectoral study in the intended project area, involving both secondary and primary data collection. Primary data collection is largely qualitative, intended to result in stakeholder analysis, problem analysis and development of project interventions, and thereby facilitate the project design process.

3. Project design: Project proposal document, including logical framework. (Where appropriate, indicators may be drawn from CARE USA sectoral logframes or HLS indicators.) Ideally, a project monitoring and evaluation plan, which guides all data collection activities in the project, should be included in the design, but this more typically occurs at project start-up. (See Annex 2 for a summary of the design process.)

4. Baseline study: Data needed to measure goal achievement; data to assist in more specifically designing interventions.

5. Project monitoring (ongoing): The monitoring undertaken is driven by the M&E plan. (See Annex 3 for guidelines on developing M&E plans.) Monitoring provides regular data to measure progress against annual plans and project goals (specifically activities, outputs and, to a more limited extent, effects). Monitoring outputs include short reports and progress shown towards indicators, plus community-generated reports for community monitoring.

6. Small-scale activity evaluations/case studies (periodic): Document the process, outputs, effects and impacts of specific interventions or communities. These represent a form of both monitoring and evaluation.

7. Mid-term evaluation: Assesses the implementation approach and progress towards goals. May result in a revised implementation strategy. Typically includes project self-evaluation, community evaluation and external evaluation.

8. Final evaluation: Assesses goal achievement and lessons learned which can be used for future project designs. Again, normally includes project self-evaluation, community evaluation and external evaluation. The baseline needs to be followed up.

9. End of project report: This may be covered by the follow-up to the baseline. Where there is no proper baseline, this report instead provides a snapshot of the status of interventions and groups. (How many groups? What is their status? How many water systems? What is their coverage? What is their status? Are water tariffs collected?) This is necessary to enable post-project assessments.

10. Post-project evaluation: Assessment of the sustainability of institutions, practices and/or activities facilitated by the project.
4. What we do

The table below summarizes the major activities and formats used to monitor and evaluate projects, separating out what is done at the project, country office, and global levels. Following the table is a more extensive description of monitoring and evaluation activities.
Table 3: Hierarchy of monitoring and evaluation activities

Process
- Project formats used: sectoral monitoring formats/PIMS; AIP 1.2/PIRs; MER (on a pilot basis)
- CARE global formats used: API

Outputs
- Project formats used: sectoral monitoring formats/PIMS; AIP 1.2/PIRs; assessment of progress against logframe (included in PIRs); MER (on a pilot basis)
- Mission formats used: Global Achievement Format/PIMS; mission AOP
- CARE global formats used: API

Effects
- Project formats used: sectoral monitoring formats/PIMS (to a very limited extent); case studies; project evaluations; Spider Model; organizational capacity assessment tool (for NGOs, VDCs, DDCs); MER (on a pilot basis); assessment of progress against logframe (included in PIRs)
- Mission formats used: small-scale evaluations; mission AOP
- CARE global formats used: API (to a limited extent)

Impacts
- Project formats used: project final/post-project evaluations; case studies; MER (on a pilot basis)
- Mission formats used: small-scale studies; Global Achievement formats/PIMS; mission AOP

(Note that different levels of monitoring may be reported in the same format; e.g. the API reports mostly on outputs, but also on effects. Note also that this matrix excludes the monitoring of inputs such as finances, personnel and procurement.)

Monitoring Activities
- Sectoral monitoring formats, part of an overall Project Information Management System (PIMS), which provides standard formats that projects use for the interventions applicable to them. Information generated in the PIMS provides the input for most other output monitoring reports required by CARE at different levels. The PIMS collects information on most activities/interventions supported by CARE Nepal projects; each project then decides which formats are appropriate for it to use.

  The PIMS includes recording and reporting formats to be completed at the site, project and country office levels, consolidating information as it moves upwards. The flow of information, and the other formats to which PIMS data directly contribute, is as follows:

    Site office recording formats
    -> Site-to-Project Office reporting formats (roll-up of information sent to the Project Office)
    -> Project Office recording formats (consolidation of all site office formats); these feed the AIP 1.2 and API
    -> Project Office-to-Central Office reporting formats (information needed at central office to complete global achievement formats, etc.)
    -> Central Office formats (roll-up of all projects); these feed the sectoral databases and the Global Achievement roll-up
- Regular six-monthly reporting of AIP 1.2 activities by staff, the results of which are reported and analyzed in the six-monthly PIR. (Information is generated largely from the PIMS.) The PIR also includes reporting of progress against logframe indicators at the effect level.
- Annual reporting of key, selected outputs in the Global Achievement Formats. (Much if not total overlap with PIMS data.)
- Monitoring of organizational capacity through innovative tools like the Spider Model for community-based organizations, and organizational capacity assessment tools for NGOs and Village Development Committees (local government institutions).
- Annual reporting of outputs, and to a limited extent effects, in the CARE USA Annual Portfolio Information. (Some overlap with PIMS.)
- Small-scale evaluations of interventions, which have included the improved cookstove and kitchen gardening programs in FY95; non-formal education and latrine programs in FY96/97; and income-generation activities and partnership in FY97/98. FY99 evaluations are yet to be planned.
- Case studies, which aim to document the process, outcomes and lessons learned from discrete interventions or programs in a single community, have been completed.
- Participation in the MER Pilot Project: MER, or Monitoring, Evaluation, Reporting, is an automated system originally developed in CARE Honduras and now being piloted globally. MER can facilitate the collection and analysis of monitoring and evaluation data. However, it depends on strong, logical project designs; perhaps this is its greatest strength, since it forces projects to have such designs for the system to work well.

Evaluation
Baselines:
- Baselines are done for all new projects. These typically combine a structured household questionnaire with participatory methods. Health/family planning baselines are done separately. For household questionnaires, a cluster sampling approach is used, meaning that households are surveyed in clusters drawn from a randomized sample of wards (a sketch of this approach follows below).
- A follow-up to the original 1995 health/family planning baseline was done in mid-1997, yielding useful and important data. This report has been summarized in a four-page document suitable for wide dissemination.
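As a minimal illustration of the cluster sampling approach mentioned above (the sketch uses invented ward and household names, not actual survey frames):

```python
import random

# Hypothetical sampling frame: nine wards of 100 households each.
wards = {f"Ward {i}": [f"HH {i}-{j}" for j in range(1, 101)] for i in range(1, 10)}

def cluster_sample(wards, n_wards=3, households_per_ward=15, seed=1998):
    """Two-stage cluster sample: randomly pick wards (clusters),
    then randomly pick households within each selected ward."""
    rng = random.Random(seed)
    selected_wards = rng.sample(sorted(wards), n_wards)
    return {w: rng.sample(wards[w], households_per_ward) for w in selected_wards}

for ward, households in cluster_sample(wards).items():
    print(ward, "->", len(households), "households to survey")
```

Clustering keeps field logistics manageable (enumerators visit a few wards rather than households scattered across the whole area), at the cost of somewhat less statistical precision than a simple random sample of the same size.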

Project/Small Scale Evaluations
- These are done in accordance with contract requirements, typically involving a mid-term and final evaluation, and are used to strengthen implementation of current projects and contribute to the design of new projects. Mid-term evaluations are usually process focused; while they do look at results, they are more interested in the approaches undertaken by the project and whether these are likely to lead to the stated objectives of the project. (A recent design exercise, for the follow-on phase of three CARE Danmark supported projects, involved a workshop which began with a one-day exercise compiling and discussing lessons learned from recent evaluations.)
- The small-scale evaluation series has continued (see above, under monitoring).
- Special studies, such as the Field Staff Workload Study, are aimed at better understanding current processes and implementation modalities, and how they could be improved to more effectively achieve project goals.
- All project evaluations include both a project self-evaluation (usually facilitated by a staff person from central office) and a formal external evaluation. A number of evaluations also include an element of community participation.
- Post-project evaluations are planned for several projects, including the Begnas Tal Rupa Tal Watershed Management Project (for which unrestricted funds will be required) and the Jajarkot Poverty Alleviation Project (for which funds are allocated in the project design).

5. Strengths and weaknesses

The 1996 discussion paper outlined strengths and weaknesses of the system in place at the time. Identified strengths included:
- A dynamic, evolving approach.
- Gender-disaggregated monitoring data.
- The use of evaluations and studies to improve programs.
- Community involvement in evaluation activities.
- The use of PIRs to reflect on progress and problems.
- Staff interest in case studies and small-scale studies.

Major problems included the following (this is a summary of the discussion in the 1996 paper):
1. Weak monitoring systems: We were regularly requiring additional data collection, and often the data collected through slightly different formats yielded different numbers! There were too many formats, often developed separately within each project, with quite a bit of repetition between them. When staff transferred, there was no way to ensure that those who replaced them had access to information about their working area, including what had been achieved, problems encountered, etc.
2. Weak indicators: Indicators were often plucked from thin air, unrelated to the existing baseline status or to appropriate, realistic goals.
3. Inadequate information: There was an absence of tools to measure important interventions (e.g. group capacity). Baseline information related to effects was not available. Qualitative information on interventions was lacking, for example to understand the differences between men's and women's perspectives on new crop varieties.
4. Weak links between project designs and monitoring and evaluation activities: Indicators expressed in percentage terms (e.g. "40% of farmers adopt") were never translated into absolute numbers, making it difficult for a field worker to know what the goal really meant.
5. Late and irrelevant information: Projects were busy collecting data for village profiles. While this information is interesting, it is collected too late to be of use in planning interventions, or is not relevant (e.g. the number of people in the district who have achieved different levels of schooling).
6. Quality of information collection: Information quality was compromised by the use of English-language formats; the involvement of staff in collecting impact-related data about interventions they had themselves supported; inadequate support to staff in doing case studies; and methodological weaknesses in data collection methods (e.g. for crop yields). Moreover, the absence of clear definitions (what is a "kitchen garden"?) further compromised data quality and consistency. This is in large part related to inadequate staff skills in monitoring and evaluation.
7. How we use information: We had lots of numbers, but little analysis of their meaning or context. Data, once rolled up, were again in English, and could not easily be used by field staff. Information was not well communicated in any language to projects. Often information was not used at all to improve designs, interventions and strategies.
8. Who is responsible: Responsibility for data collection, roll-up, analysis and dissemination was not clear. Case studies did not adequately involve central office staff, resulting in little discussion about design, methodology, etc., and were usually of poor quality.
Since then, there have been significant improvements. The following tables outline recent improvements, and also suggest areas to strengthen. Activities in bold are those which can conceivably be addressed in FY99. [2]

[2] Some of this is drawn from a session in the August 1997 CARE Nepal M&E Workshop.

Design

Positive Changes:
- All but one staff member involved in the original project design training has actively participated in a subsequent project design.
- A situational analysis, using structured guidelines, is done for new project designs. (This is similar to a Household Livelihood Security Assessment, though there are important differences between the two.)
- Project designs are getting better; staff are more involved in the design and are well oriented to new project designs.
- Logframes, in particular, are much stronger. Goals and indicators are properly placed at appropriate levels (impact, effect, output, and activity).
- All field staff are familiar with their project logframes, which are more actively and extensively used as a framework for project planning.
- A proposal review mechanism is in place to ensure design quality. A core team of four senior staff (in addition to others called upon periodically) reviews all proposals, in addition to which two or three external consultants are hired to review each proposal.
- Projects have, or are about to have, monitoring and evaluation frameworks, and are able to measure impact. (See Annex 4 for a summary of this information by project.)

Areas to Strengthen:
- Proposal review criteria might be a useful tool for both internal and external consultants to utilize during the review process. (These could be either draft CARE USA criteria, or something developed in-house.)
- Situational analysis guidelines are to be revised, drawing on the most positive elements of the Household Livelihood Security Assessment approach.
- Consultation with government counterparts at early stages of design, either through participation in the design or regular consultation, might avoid later delays in obtaining project approvals.

Capacity Building

Positive Changes:
- A core of senior staff (managerial and technical) has participated in project design training, and subsequently put it to use.
- Four staff attended the MER workshop in India, and are able to support implementation of MER in CARE Nepal's pilot project.
- A further group of staff has participated in management training (especially in Denmark) where they received logframe training.
- All senior program staff participated in the M&E workshop in August 1997. The material has been revised so as to develop a curriculum for project-level workshops. As of the end of FY98, projects will have participated in M&E workshops, a product of which is a detailed project M&E plan.
- Staff responsibilities for DM&E are clearly outlined in updated job descriptions.
- Guidelines have been developed for: a) situational analysis for project design; b) designing terms of reference for studies; c) implementing case studies; d) project self-evaluation; and e) community evaluation.

Areas to Strengthen:
- The project M&E workshops have been well received, and staff are clearer on M&E concepts. However, follow-up training should be planned so that staff consolidate new knowledge and skills, and further expand these skills.
- Project design training for new senior staff would be appropriate.
- Training for project counterparts, especially those likely to be involved in future project designs, should be considered and planned for, perhaps as part of training for new senior staff.
- There is very limited capacity in community monitoring, and this needs attention.

Baselines

Positive Changes:
- Baseline studies are done for all new projects. These studies typically use a mix of a structured household questionnaire and participatory methods, and are focused on gathering information related to the logframe that will enable an assessment of goal achievement.
- Follow-up studies for health/family planning baselines have taken place, enabling CARE to measure progress and goal achievement.

Areas to Strengthen:
- In-house capacity should be further developed. Staff are good at collecting information, less so at analyzing it.
- A guideline for designing baseline studies is under development and will be finalized by the end of FY98.

Monitoring

Positive Changes:
- A Project Information Management System (PIMS), aimed at improving the collection of output indicators and ensuring some consistency across projects, is being put into place in FY99. Some projects have been field-testing it in FY98, so that improvements made reflect actual field experience using it.
- The MER system is being piloted in one project and, if successful, will be expanded to other new projects. MER will be compatible with the PIMS and other pre-existing data collection formats, such as the CARE USA/CARE Canada API.
- PIRs now report against logframe indicators, which is good, as there is more orientation towards assessing progress made at the effect level.
- Projects are actively using internal studies (which straddle the boundary between monitoring and evaluation) to make adjustments to program implementation, though there are opportunities to further improve staff learning from studies.
- There is more analysis of health data going on.
- The development of tools to assess group capacity is an important and useful change.
- Most projects will have M&E systems in place.

Areas to Strengthen:
- MER has yet to be fully implemented in the pilot project: this will require a significant level of effort and commitment.
- There is a lot of duplication across different databases (e.g. sectoral databases, the global achievement format), with duplicate reporting of the same information. We are still collecting too much data!
- There is still insufficient clarity about who is responsible for data collection and analysis. Not enough analysis happens, and there is still not enough use of data to improve project implementation, despite improvements.
- Terminology remains unclear (what is a "kitchen garden"? an "improved kitchen garden"?).
- Current systems do not yet permit us to adequately measure coverage and adoption rates: what proportion of a population is benefiting from or adopting project-promoted activities or systems. CARE Nepal must carefully assess whether the PIMS addresses this important need.
- We need to put more emphasis on effect-level monitoring: if we are promoting agricultural technologies, we need to know during the life of the project whether farmers are adopting them. This does not mean a large survey; participatory and qualitative methods can be used to assess this. For example, in addition to our own observations, a well-done set of focus groups, or a workshop with leader farmers, could be used to assess whether farmers are using new technologies and why or why not. Existing social maps could be used to actually track the number of farm households adopting improved technologies (a sketch of the resulting calculation follows after this table).
- PIRs could focus more on analysis of monitoring data, major issues facing the project, the ways in which problems have been addressed, etc.
- Strengthen two-way communication of monitoring data (sharing it with those who provide it).
- Support implementation of monitoring and evaluation systems.
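As a concrete illustration of the adoption-rate point above (hypothetical numbers, not project data), the measurement itself reduces to a simple proportion once counts are available from a social map or focus group:

```python
def adoption_rate(adopting_households, total_households):
    """Proportion of farm households adopting a promoted technology."""
    if total_households <= 0:
        raise ValueError("total_households must be positive")
    return adopting_households / total_households

# Hypothetical counts read off a community social map
rate = adoption_rate(adopting_households=48, total_households=120)
print(f"{rate:.0%} of farm households have adopted")  # 40% of farm households have adopted
```

The hard part is not the arithmetic but obtaining credible counts, which is exactly what the participatory methods described above are for.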

6. Goals and future plans

The overall goal of this strategy is to improve the impact of CARE Nepal's projects through strong design, monitoring and evaluation. To do this, CARE Nepal wants to strengthen the capacity of CARE Nepal and its partners to:
- Do high quality project design, monitoring and evaluation;
- Move even further towards measuring impact; and
- Actively use monitoring and evaluation results to further improve project interventions.
To achieve this, the following areas will receive particular attention:
To achieve this, the following areas will receive particular attention:
1. Strengthen in-house capacity in design, monitoring and evaluation.
CARE Nepal has already taken steps to strengthen staff skills in these areas, and will put
further effort into this. CARE will continue to use external assistance as needed, but does not
want to be largely dependent on external inputs.
2. Development of project M&E plans to measure outputs, effects and impact, and include community-based monitoring.
Projects already collect a lot of data, but certain things are still missing. Effect monitoring, discussed in the table above, can be further strengthened. Community-based monitoring, in which we involve communities in measuring not only outputs but also effects and implementation modalities, provides a different perspective on the project which can be very valuable for improving interventions.
3. Learning from monitoring and evaluation.
The small-scale study and case study series are examples of activities that CARE Nepal uses to promote learning. More can be done to ensure that information is disseminated to all staff and used in project planning.
4. Developing partners' capacity in M&E.
As CARE works more with local partners, they need to develop basic M&E skills, so as to report on their activities and learn from successes and mistakes.
The following table outlines M&E-related activities in CARE's FY99 AOP. These represent the key activities planned. The plans focus on internal capacity building as well as skills development for partners. In addition, the key area of data analysis is addressed, as is the development of mechanisms for community-based monitoring. Note that responsible persons and timelines, included in the AOP, are not described here.

Strategic Direction: Developing and applying impact-oriented monitoring and evaluation systems for institutional learning.

Objective 1: Refine and implement M&E plans
Activities:
- Complete M&E workshops in all projects
- Apply improved approaches for project and community evaluation
- Continue MER piloting
- Strengthen community monitoring as a part of the project monitoring process in Jajarkot

Objective 2: Increase analysis and use of monitoring data
Activities:
- Establish and implement the PIMS in the mission, including analysis, synthesis and sharing of information
- Conduct one small-scale evaluation
- Conduct one case study
- Produce summaries of two studies or reports to share widely with outside audiences

Objective 3: Strengthen M&E capacity of selected partners
Activities:
- Identify key partners in all projects requiring M&E capacity strengthening
- Arrange training/workshops on M&E for key partners
- Facilitate M&E plan preparation for key partners

Structures
CARE Nepal has made the conscious decision NOT to establish a separate M&E unit.
Rather, responsibilities for developing and supporting monitoring and evaluation systems are
shared among senior project staff and specifically included in annual Individual Operating
Plans.
For close to three years, through the end of FY98, CARE Nepal had the services of an SNV
Development Associate who served as an Evaluation and Documentation Officer, and
significantly contributed to improvements in this period. Outputs include all the baseline
studies, a guide to conducting baseline studies, and three small-scale activity
evaluations/studies (on literacy programming, income generating activities and community
organization). During each of his assignments, he worked with CARE staff, thereby
developing their capacity to conduct studies.
CARE Nepal anticipates that these services will be replaced by:
a) the use of local consultants
b) senior CARE Nepal staff taking a greater role in these activities.
CARE may hire a research officer to support studies, strengthen project-level documentation, and work closely with external consultants on studies.

As well, the NGO support manager to be hired at the start of FY99 will have responsibility
for monitoring and evaluation related to partnerships. This includes both refining and
supporting use of tools to assess and monitor partner capacity (such as the organizational
assessment tools), and supporting partners to obtain the training and advice they need on
monitoring and evaluation.
All levels of staff have important roles to play in implementing and supporting design,
monitoring and evaluation systems. These include:
Staff: Responsibilities

Field-based staff: Collect field-level data; aggregate it at the site level; analyze and use data to improve implementation; support community-based monitoring initiatives; participate as needed in baseline studies.

Project sector heads: Aggregate data and analyze it at project level; ensure field staff have systems in place to collect accurate data; support their initiatives to analyze site-level data; take the lead on community-based monitoring initiatives; participate as needed in baseline studies; conduct case studies with support from consultants/CO staff; contribute to project completion reports; assist in planning project self-evaluations.

Project Managers: Ensure the M&E plan is in place; guide data analysis at project level and ensure that this is followed through sectorally and by site; guide project self-evaluations and community evaluation; oversee case studies on behalf of the project; participate in project design teams as requested; prepare regular monitoring reports (PIRs).

Senior Technical Advisers/Senior Training Officers: Establish/refine sectoral monitoring formats; aggregate and analyze mission data and ensure it is communicated to projects; support case studies; participate, as requested, in project self-evaluations; support external studies as required; participate in project design teams as requested; identify partner M&E needs; ensure that partners are adequately involved in new project designs.

Project Coordinators: Ensure that adequate training is provided to project staff to support monitoring and evaluation requirements; ensure M&E systems are in place; participate in project designs; guide project case studies; facilitate project self-evaluations; contribute to TORs for studies and support them as required; edit and help finalize PIRs and other donor reports.

Program Coordinator, Program Development Coordinator, Assistant Country Director: Ensure that the mission M&E strategy is in place and revise it as needed; ensure that training resources are available to projects and CO staff; review project M&E plans to ensure quality control; take the lead on new project design; review project designs; secure outside reviewers; plan and organize annual M&E activities (evaluations, etc.); initiate community-based monitoring activities; support the MER pilot project; ensure donors receive high quality reports.

Partnering
Partnership with local organizations, as well as government, presents challenges with respect
to monitoring and evaluation. Local organizations often have extremely limited capacity in
M&E, while government partners often have their own systems that can be difficult to
influence.

To date, CARE has worked with partners to jointly develop monitoring and evaluation
indicators. In some cases (such as for health/family planning in Syangja), CARE has taken
the lead on the baseline survey, involving local NGO staff in data collection and initial data
analysis, with CARE taking responsibility for report completion. In other cases, this has
been left to local organizations, but without fully addressing the issue of their capacity.
It is critical that in agreements with partners we do the following:
a) identify who is responsible for what kinds of monitoring
b) with small emerging NGOs, ensure that we support the organization (directly or through
specialized organizations) to develop monitoring plans
c) build into our agreements plans for CARE to do regular monitoring of NGO activities.
With limited-capacity partners, we should initially expect that they monitor up to the output level, and put initial emphasis on that, while at the same time doing some supportive (rather than "policing") independent monitoring of our own. CARE can take the lead on effect-level monitoring. At the same time, efforts should be made to orient local groups to the importance of assessing the effectiveness of their interventions. Because many of our partners are based in the community, they are well placed to draw upon the observations and perceptions of local people about their interventions. Eager to demonstrate how well they carry out planned activities, our partners may be reluctant to openly share problems or difficulties being faced. We need to emphasize the importance of critical reflection. (This can perhaps be done by demonstrating it ourselves!)
For baselines, the role of partners and CARE will vary. If we are doing a baseline for a new project area, indicators related to partners' activities could easily be included, thereby avoiding extra work or duplication of effort. Partners can be trained to participate in the baseline exercise. In cases where an overall project baseline is not being done, or specific information is needed for new interventions supported by partners, CARE should ensure that partners have developed adequate plans for baselines, and support them as necessary. Efforts should be made to limit the collection of data to what is really essential.
In FY99, CARE will begin to address partner capacity in M&E, but also recognizes that this is an incremental process: if our commitment to developing more equitable partnerships is high, we must accept that M&E cannot remain entirely under our control, and that quality will, at least initially, suffer.

Annex 2: Project Design Steps and Activities Necessary to Achieve Them

1. Problem Statement
   - Conduct problem analysis
   - Conduct needs assessment
   - Conduct institutional environment scanning
   (In CARE Nepal, this would normally be done through a review of secondary data, a field-based situational analysis involving a multi-sectoral team, and analysis of the data in a project design workshop.)

2. Strategy Development
   - Develop goals and indicators
   - Formulate project strategy
   - Develop outputs and activities
   (This would result in the development of a logframe, and short write-ups on the planned project strategies.)

3. Evaluation Plan
   - Develop a monitoring and evaluation plan (this may or may not include key questions to be addressed in the evaluation)
   (Elements of the monitoring and evaluation plan are implicit in the logframe. Where it is not feasible to develop a full monitoring and evaluation plan at the design stage, and this is done instead at start-up, there should be a description of major monitoring and evaluation activities.)

4. Operational Plan
   - Develop implementation schedule

5. Resource Requirements
   - Determine staffing requirements
   - Develop organization chart
   - Write job descriptions
   - Determine other inputs necessary to develop the budget

6. Financial Plan
   - Develop project budget

This is adapted from a handout used in the CARE Asia Project Design Workshop developed by Rich Caldwell.

Annex 1: Hierarchy of objectives

Input (old CARE terminology: Input)
- What it means: Resources needed by the project
- Who does it: Project staff use them (and are accountable)
- Who takes credit for it: 100% attributable to the project
- When it occurs: Within the life of the project (continuously)

Process (old CARE terminology: Activity)
- What it means: Interventions or activities done by the project
- Who does it: Project staff do it (and are accountable)
- Who takes credit for it: 100% attributable to the project
- When it occurs: Within the life of the project (continuously)
- Example objective: Eight training courses conducted for FCHVs in ARI management and referral
- Associated indicators: # of courses conducted

Output (old CARE terminology: Output)
- What it means: Products directly produced by the project
- Who does it: Project staff produce it (and are accountable)
- Who takes credit for it: 100% attributable to the project
- When it occurs: Within the life of the project (when processes bear fruit)
- Example objective: 100 FCHVs trained in ARI management and referral
- Associated indicators: # of FCHVs participating in the course; # of FCHVs successfully completing the course

Effect (old CARE terminology: Intermediate goal)
- What it means: Improvements in access to or quality of resources (i.e. changes in systems), and changes in practices
- Who does it: Beneficiaries/participants do it; systems reflect it
- Who takes credit for it: Should be largely attributable to the project, with other influences relatively minor
- When it occurs: Within the life of the project (may require a special study to measure)
- Example objective: Improved referral, treatment and care of children with ARI symptoms
- Associated indicators: # of children with reported ARI symptoms in the month prior to survey; % of children with ARI symptoms appropriately treated

Impact (old CARE terminology: Final goal)
- What it means: Changes in human conditions or well-being of the target population at household level
- Who does it: Beneficiaries/participants experience it
- Who takes credit for it: Attribution is difficult, with other influences substantial and inevitable
- When it occurs: Sometimes measurable within the life of the project, but more likely requires post-project evaluation
- Example objective: Improved health status of children 0-5 years
- Associated indicators: ARI incidence rate

(This is adapted from work done in CARE Uganda.)

Annex 3: Guidelines for Monitoring and Evaluation Plans


- A monitoring and evaluation plan outlines what information needs to be collected through (and after!) the life of the project, in order to assess the completion of activities and outputs, and the achievement of effect and impact goals.
- The plan should be developed during or immediately after project design.
- The plan is what will guide all data collection during the life of the project, and it should reflect the need for both quantitative and qualitative data.
- The following matrix can be used to develop the M&E plan. Note that depending on the data collection methods used, not all columns will be needed. For example, if the data source is a household questionnaire, then the data collection method will be the same. However, if the data source is a community PRA, then the particular tools to be used can be specified under the data collection method.
The matrix columns are: Narrative Summary (impact goal, effect goals, outputs, activities) | Indicators | Data Needed | Data Source | Data Method | Frequency of Collection | Responsible Person(s) | Data Analysis | Dissemination and Utilization

There is no column for assumptions in this matrix. However, if there are important assumptions in the
project design that need to be periodically and systematically monitored, they should be included.
Indicators: Measures that determine whether change has occurred.

Data Needed: The data necessary to characterize the indicator. In some cases, a number of different types of data will be needed for one indicator; in other cases it may be only one. For example, if the indicator reads "% of households using latrines", the data needed will include: the total # of households; the # of households with latrines; and the # of households where latrines are reported/observed to be used (a complication here is that not all members of the household will necessarily use the latrine). A sketch of this calculation follows at the end of this annex.

Data Source: Where to find the data, and from whom to collect it. Possibilities might include a household survey; district-level secondary data; community PRAs.

Data Methods: The specific data-collection methods that will be used to obtain the data (e.g. Venn diagrams, social mapping).

Frequency of Collection: How often will the data be collected? (e.g. at baseline and end of project? Monthly? Quarterly?)

Person(s) Responsible: Who is responsible for collecting and/or analyzing the data?

Analysis: How will the data be analyzed? Will simple trends be developed? Averages? Will statistical tests be done?

Dissemination and Utilization: What reports will be generated from this information?
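To illustrate the "Data Needed" entry above, here is a minimal sketch of how the latrine-use indicator could be computed once household-level data are in hand. The records and field names are hypothetical, not an actual CARE Nepal format:

```python
# Hypothetical household survey records.
households = [
    {"id": 1, "has_latrine": True,  "latrine_used": True},
    {"id": 2, "has_latrine": True,  "latrine_used": False},
    {"id": 3, "has_latrine": False, "latrine_used": False},
    {"id": 4, "has_latrine": True,  "latrine_used": True},
]

total = len(households)                                    # total # of households
with_latrine = sum(h["has_latrine"] for h in households)   # # of households with latrines
in_use = sum(h["latrine_used"] for h in households)        # # where latrines reported/observed used

print(f"% of households with latrines: {with_latrine / total:.0%}")  # 75%
print(f"% of households using latrines: {in_use / total:.0%}")       # 50%
```

Note that this still sidesteps the complication flagged above: a household-level flag cannot tell us whether all members of the household actually use the latrine.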

Annex 4: Current Project Monitoring and Evaluation Systems

- PN-15 Solu: M&E framework in process; monitoring system: yes; baseline: yes; scheduled evaluations: see Family Health; impact measurement: yes.
- PN-16 Mustang: M&E framework in process; monitoring system: yes; baseline: no; scheduled evaluation: May 1998; impact measurement: no.
- PN-17 Mahottari: M&E framework in process; monitoring system: yes; baseline: yes; scheduled evaluations: mid-term 1/98, final late 99; impact measurement: for health/family planning.
- PN-19 Bajura: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluations: mid-term 2000, final 2002; impact measurement: yes.
- PN-23 Gorkha: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluation: final 5/99; impact measurement: yes.
- PN-24 Syangja: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluations: mid-term 2000, final 2002; impact measurement: yes.
- PN-29 Bardia: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluation: final 99/00; impact measurement: no.
- PN-30 FPP: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluation: mid-term 99; impact measurement: no (another donor-supported project is tasked with impact measurement).
- PN-31 Jajarkot: M&E framework in process; monitoring system: yes; baseline: yes; scheduled evaluations: mid-term, final and post-project; impact measurement: yes.
- Family Health: M&E framework: yes; monitoring system: yes; baseline: yes; scheduled evaluations: mid-term, final (phase one); baseline, mid-term, final (phase two); impact measurement: yes.
