
MAKING EVALUATION MATTER IN POLICY AND PROGRAMMING - A SRI LANKAN PERSPECTIVE

V. SIVAGNANASOTHY, SECRETARY
MINISTRY OF TRADITIONAL INDUSTRIES AND SMALL ENTERPRISE DEVELOPMENT, SRI LANKA
sivagnanasothy@hotmail.com

Road Map
Development Evaluation - a Sri Lankan Perspective
Focus, types and approaches to evaluation
Evaluation Capacity Development
Institutionalization of Evaluation
Criteria for Selection and Evaluation Methodology
Key attributes of good evaluation and quality standards
Evaluation Feedback - Planning, Budgeting and Policy-making Process
Institutionalizing Managing for Development Results

Why Evaluation? A Sri Lankan Perspective


Evaluation is a critical analysis of the achievements and results of a project, program, policy or institution. It is an assessment, as systematic as possible, of an ongoing or completed project: its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process. (DAC/OECD)
Evaluation offers accountability for results and a learning opportunity to find out what is working, what isn't, and what needs to be improved.

Focus of Evaluation

What works? What does not work? And why? Under what context does it work or not work?

Types of Evaluation conducted


Ex-Ante Evaluation (Appraisal)
On-going / Mid-Term / Formative Evaluation - projects may run into problems during implementation and require a fresh look (leads to modification, adjustment and revision of the program).
Ex-Post / Summative Evaluation - conducted some years after implementation is completed; leads to decisions concerning expansion/adoption and future programs.
Impact Evaluation - conducted considerable years after implementation is completed, focusing on outcomes and long-term impacts.

Approaches to Evaluation
Goal Based Evaluation
Goal Free Evaluation (Michael Scriven)
Utilization-Focused Evaluation (Michael Patton)

Goal Based Evaluation


Determine the extent to which the objectives of a program are actually achieved.

Goal Free Evaluation


Pre-determined goals narrow the focus of the evaluation. Goal-free evaluation focuses on actual outcomes rather than intended program outcomes. Goals act like blinders, causing the evaluator to miss important outcomes. Michael Scriven states that making evaluators responsible for the use of evaluation results is wrong: asking them to participate in decision making injures their independence, and they will become tailored to what the client wants.

Utilization Focused Evaluation


Michael Patton stressed that the first step in evaluation is to identify decision-makers and information users to determine what information is needed. Evaluation adds value only if its results are used. Evaluators must guide decision makers, policy makers and managers to do the right thing. The two evaluation gurus, Michael Patton and Michael Scriven, have different perspectives and views, with the former concerned with utilization-focused evaluation.

ECD Initiatives in Sri Lanka

The ADB TA for Strengthening Post Evaluation Capacity (1991/92) to MPI:
Introduction of methodology, techniques and procedures for PE (manuals and guidelines)
On-the-job training of senior government officials
Sensitization of policy makers and senior government officials
Dissemination of evaluation findings and establishment of feedback arrangements
Development of a computerized Evaluation Information System (EIS) for storing and retrieving post evaluation findings
Introduction of an evaluation module at SLIDA/PIM to orient government officials

Evaluation Information System (EIS) - a database of evaluation information to support evidence-based decision making and learning

Inability to access evaluation information of projects has been a key problem.
Online access to project-wise synopses (one-page summaries), sector-wise syntheses and high-level abstraction for busy senior officials
Integrate lessons into planning, budgeting, policy processes and project formulation (avoid repetition of past mistakes)
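To make the idea concrete, the following is a minimal, illustrative sketch of how an EIS-style store of project synopses and lessons could be queried; the record fields and function names are assumptions for illustration only, not the actual EIS schema.

```python
# Minimal illustrative sketch of an EIS-style store of evaluation findings.
# Field and function names are hypothetical, not the actual EIS schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EvaluationRecord:
    project: str          # project title
    sector: str           # e.g. "Irrigation", "Health"
    year: int             # year the evaluation was completed
    synopsis: str         # one-page summary for busy senior officials
    lessons: List[str]    # lessons to feed into planning and budgeting

def project_synopsis(records: List[EvaluationRecord], project: str) -> Optional[str]:
    """Return the one-page synopsis for a single project, if available."""
    return next((r.synopsis for r in records if r.project == project), None)

def sector_synthesis(records: List[EvaluationRecord], sector: str) -> List[str]:
    """Collect lessons across all evaluations in a sector (sector-wise synthesis)."""
    return [lesson for r in records if r.sector == sector for lesson in r.lessons]
```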

Institutionalizing Evaluation in Sri Lanka


Evaluation Policy: The Government should have an evaluation policy with clearly established guidelines and methods.
Institutional Structure / Reporting to High-Level Management: A central unit responsible for evaluation, reporting directly to Cabinet Ministers, Parliament, the Ministry or the Board of Directors.
Evaluation Plan: Priorities should be set with a time schedule.
Impartiality and Independence: Separate evaluation from the line management responsible for planning and managing development assistance, to reduce the potential for conflict of interest.
Credibility and Usefulness: Evaluations should report successes as well as failures, and be not only independent but also operationally relevant.
Design of Evaluation: Each evaluation must have a TOR defining the purpose, scope and recipients of evaluation findings, and describing the methods, time and resources.

Institutionalizing Evaluation in Sri Lanka


At Various Stages: Evaluation has an important role to play at different stages of a programme/project and should not be conducted only as an ex-post exercise.
Participation of Donors and Recipients: Both donors and recipients should be involved in the evaluation process, which should not be predominantly donor-driven. As evaluation findings are relevant to both parties, the TOR should address issues concerning donors and recipients, and take account of senior management and policy makers' demands.
Readable Report: Not voluminous and not an academic thesis; differentiated according to different audiences.
Broader Perspective: Evaluations are expanding from projects/programmes to sectors, themes, policies and country assistance programmes. Central Evaluation Units are being set up in DMCs.
Dissemination of Results: Findings must be used and made available in time for decision-making.
Feedback: Link evaluation findings to future activities, for both policymakers and operational staff.

Efforts to institutionalize Evaluations


Institutionalize evaluations within the Government's administrative structures.
Political will and policy commitment: advocacy at the political level and the high echelons of the Government.
Develop a legal foundation (policy) mandate on evaluation with specific responsibilities.
Locate evaluation units at the central, sectoral and regional levels:
- CEU at MPI (Central)
- PME at sectoral ministries
- PME at provincial, district and divisional levels
Objective and independent institutional arrangements with linkages to the planning, budgeting and policy-making functions (senior management) of the Government.

Efforts to institutionalize Evaluations


Ensure evaluators are independent, objective and highly skilled
Methodologies, procedures and systems (manuals and guidelines)
Cost-effective and less time-consuming methods
Staffing arrangements (professional/technical)
ECD and training (continuous professional education, networking)
In-country faculty development program (SLIDA, UOJ, PIM)
In-country evaluation network / community of practice (CoP) - SLEvA
Quality assurance (standards and ethics)
Joint evaluations with donors (ECD)
Feedback arrangements and mechanisms

Criteria used for selecting projects for evaluation

As evaluation is an expensive exercise, it is necessary to carefully select projects for evaluation:
Projects that are likely to be replicated
Projects of an innovative or unusual nature where feedback is sought
Projects that may be running into problems (decision to terminate or re-adjust)
Projects which may throw light on new or upcoming policy initiatives

Conditions under which evaluation studies are inappropriate

One-time programme: no potential for continuation (not replicable); insufficient impact to warrant evaluation.
Absence of commitment to use evaluation results: projects which administrators are unwilling to discontinue (for personal or political reasons) or change.
Strong pre-conceived notions: the client or sponsor is not open to contradiction.
Absence of valid and dependable information.
Evaluation is premature for the stage of the programme, e.g. premature summative or impact evaluations (timing).
Consensus cannot be achieved among major stakeholders on the programme model and the desired evaluation plan.
The proposed evaluation is not feasible due to financial/human resource constraints.

Sri Lanka Evaluation Methodology


DAC CRITERIA FOR EVALUATING DEVELOPMENT ASSISTANCE

Relevance
Efficiency
Effectiveness
Impact
Sustainability

Evaluation Methodology: Rating System

Criterion (Weight): Rating Description (Rating Value)
1. Relevance (20%): Highly Relevant (3), Relevant (2), Partly Relevant (1), Irrelevant (0)
2. Efficacy (25%): Highly Efficacious (3), Efficacious (2), Less Efficacious (1), Inefficacious (0)
3. Efficiency (20%): Highly Efficient (3), Efficient (2), Less Efficient (1), Inefficient (0)
4. Sustainability (20%): Most Likely (3), Likely (2), Less Likely (1), Unlikely (0)
5. Institutional Development and Other Impact (15%): Substantial (3), Significant (2), Moderate (1), Negligible (0)

Overall Assessment (weighted average of A1, A2, A3, A4 and A5):
Highly Successful (HS): overall weighted average (OWA) is > 2.5 and none of the 5 criteria has a score of less than 2.
Successful (S): OWA is between 1.6 and 2.5 and none of the 5 criteria has a score of less than 1.
Partly ...
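As a worked illustration of the rating arithmetic above, the sketch below computes the overall weighted average and applies the HS/S thresholds; the thresholds below "Successful" are truncated in the source, so the sketch does not attempt to reproduce them.

```python
# Illustrative calculation of the overall weighted average (OWA) rating.
# Weights and the HS/S thresholds follow the slide above; thresholds below
# "Successful" are truncated in the source, so they are not reproduced here.

WEIGHTS = {
    "relevance": 0.20,
    "efficacy": 0.25,
    "efficiency": 0.20,
    "sustainability": 0.20,
    "institutional_development_and_other_impact": 0.15,
}

def overall_rating(scores):
    """scores: a rating of 0-3 per criterion (e.g. 3 = Highly Relevant, 0 = Irrelevant)."""
    owa = round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)
    if owa > 2.5 and min(scores.values()) >= 2:
        label = "Highly Successful (HS)"
    elif 1.6 <= owa <= 2.5 and min(scores.values()) >= 1:
        label = "Successful (S)"
    else:
        label = "Below Successful (thresholds truncated in the source)"
    return owa, label

# Example: Relevant (2), Efficacious (2), Efficient (2), Most Likely (3), Significant (2)
print(overall_rating({
    "relevance": 2,
    "efficacy": 2,
    "efficiency": 2,
    "sustainability": 3,
    "institutional_development_and_other_impact": 2,
}))  # -> (2.2, 'Successful (S)')
```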

Evaluation Methodology

Evaluation Criteria -> Evaluation Questions -> Indicators -> Verification of achievement and results -> Scoring -> Rating -> Judgment of the achievement
Evaluation Criteria: relevance, efficiency, effectiveness, impact and sustainability
Evaluation Questions: questions designed to address each criterion
Indicators: used to verify achievement and results
Rating: Highly Successful, Successful, Partly Successful, Unsuccessful
(Network of Networks on Impact Evaluation)

Key Attributes of a Good Evaluation


Impartial, independent, objective, credible and offering a balanced perspective.
Findings and recommendations are evidence-based, using rigorous methods.
Remains utilization-focused and contributes to informed decision making and learning.
Recommendations are not only independent but also operationally relevant (not an academic exercise).
Embodies professional standards and cost effectiveness.
Extensive stakeholder involvement is brought into the evaluation and its results.

Evaluability Assessment
To assess whether a programme is evaluable. Determinants of evaluability are:
1. Clarity of the intended programme model or theory; programme goals, objectives and performance criteria/indicators are defined and agreed.
2. Availability of data on performance.
3. Intended uses of evaluation information are well defined.

Program Theory and results-chain


A set of beliefs about how and why an initiative will work to bring about change.
Example: National Irrigation Rehabilitation Project
Inputs: Funds, Labour, Technical Support, Equipment
Outputs: Functional Scheme, Strengthened FOs
Outcomes: Improved agricultural production through improved water management, Improved O&M practices, Strengthened institutions
Impact: Improved income and standard of living of the farmers
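For readers who prefer a concrete representation, here is a minimal sketch of the results chain above expressed as a simple data structure; the class and field names are illustrative only, not a prescribed format.

```python
# A minimal sketch of the results chain (program theory) for the National
# Irrigation Rehabilitation Project example above; purely illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class ResultsChain:
    inputs: List[str]     # resources committed
    outputs: List[str]    # what the project delivers
    outcomes: List[str]   # changes the outputs are expected to bring about
    impacts: List[str]    # long-term development results

irrigation_project = ResultsChain(
    inputs=["Funds", "Labour", "Technical support", "Equipment"],
    outputs=["Functional irrigation scheme", "Strengthened farmer organisations (FOs)"],
    outcomes=[
        "Improved agricultural production through improved water management",
        "Improved O&M practices",
        "Strengthened institutions",
    ],
    impacts=["Improved income and standard of living of farmers"],
)
```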

Evaluation Questions
A series of questions formulated to represent what one wants to know through the evaluation (must know, should know and nice to know).
An evaluation question should be designed for each evaluation criterion.

DAC Evaluation Quality Standards


Key pillars: expectations of evaluation quality standards in two areas:
Quality evaluation processes (i.e. the conduct of the evaluation)
Quality evaluation products (i.e. reports)
Impartiality, independence, credibility and usefulness

Importance of Feedback
The utility of any evaluation is a prime criterion for judging its worth, regardless of its technical, practical and ethical merit.
Making the evaluation report effective: for an evaluation to have an impact, it needs to be disseminated and communicated to ensure behavioral changes and action.
Linking to the planning, policy making, budgeting and programme management processes.

Importance of Feedback Cont..


Reach multiple constituencies:
a. Primary Target Group / Key Actors - those who are expected to take action on findings (e.g. planners, policy makers, budget/resource allocators, donors)
b. Secondary Target Group / Influencing Actors - those who influence the key actors (e.g. watchdog agencies, media, CSOs, NGOs)

Characteristics of Effective Feedback


Responsive to the needs of different audiences (demand driven, client orientation, tailoring the evaluation report to audience needs)
Timely
Accessible
Simple (user friendly)
Avoids information overload
Ongoing and systematic
Uses different approaches
Promotes follow-up

Mechanisms to promote Evaluation feedback


Dissemination mechanisms: the means by which evaluation information is prepared and directed to client groups.

Institutional mechanisms: the way in which evaluation units are linked to other parts of the agency, and how evaluation findings are formally considered by the agency.

Dissemination Mechanisms
Abstracts and summaries: a short description of the project or activity evaluated and a synopsis of key findings, issues, conclusions, recommendations and lessons learned.
Annual reports: provide an opportunity to highlight findings, trends and lessons, to synthesize recent experience from a number of studies and to direct management attention to particular needs.
Synthesis reports: lessons learned over a number of years in particular sectors (e.g. health, roads) or on cross-cutting issues (e.g. women in development, environment).
Bibliographies: listings of evaluation reports, held manually or in a computerized database.
Automated systems: automated databases which increase the accessibility of information from evaluations for various users, including project designers, policy and planning staff, management and evaluators themselves (EIS).
Feedback seminars: promote discussion of issues and lessons arising out of evaluations.
Press conferences and presentation of evaluation results.

Institutional Mechanisms
Linkages to senior management: evaluation reports are submitted to senior management (higher authorities) for formal approval.
Linkages to policy development: linkages between agency evaluation and policy units; evaluation staff are involved directly in new policy discussions.
Linkages to planning: link evaluation to project concepts and project submissions.
Linkages to program management: project approval documents must indicate how evaluation lessons have been taken into account in the design and planning of new activities.
Linkages to agency procedures: lessons from evaluation experience are incorporated into future agency policy and practices (e.g. appraisal procedures).
Linkages to training: lessons from evaluation are sometimes used in training programs (e.g. training on project design).

Synthesis
An inexpensive way of drawing on existing data (making use of what already exists).
Translates technical findings into policy (a communication approach).
Findings of a single evaluation may be challenging or unexpected and unlikely to be acted upon.
Enhances the ability of evaluation to feed into the planning, budgeting and policy-making process.

Institutionalize Feedback
People are often strongly tempted to believe that the link between the two activities of (a) building a stockpile of evaluation reports and (b) feedback is automatic.
Feedback calls for different kinds of skills (more those of the communicator than the analyst).
During feedback one is more likely to lose friends than to gain them; the courage to say what users may not want to hear is the characteristic of an honest evaluation function.
Feedback is not just distributing reports. Feedback has to be planned for and organized with care and determination.

Institutionalize Feedback
Too often evaluation reports are written without adequate regard to who the audience is (supply-led rather than demand-led). Different audiences have different needs.
If you want people to read your reports, you need to make them attractive (eye-catching titles).
Evaluation reports should feed in at a time when important decisions are being taken, not afterwards.
Senior officials are too busy to read full reports; one-page summaries of key findings are more effective. A computerized database (EIS) is useful.
Getting feedback institutionalized within the organization as a normal function is important.

How to Make Evaluation Report More Effective


Tailoring the evaluation report to the audience(s):
Identifying audiences (audience analysis)
Information needs of audiences (different audiences have different information needs)
Early involvement of audience(s) (the dates they need information, areas of interest, formats, etc.)
Framing the evaluation questions to meet the needs of audiences
Tailoring the contents of the evaluation report to the evaluation audience(s)
Delivering the message: tailoring the report format, style and language (non-technical language)

Timing of Evaluation Report (Critical)


Interim report (preliminary results, findings)
Informal verbal briefing
A formative evaluation of a pilot programme cannot be delivered after the programme has been completed

How to Make Evaluation Report More Effective Cont..


Not-too-surprising message (careful, sensitive, professional)
Sensitive to the political climate
Executive summary / executive abstract: for busy readers; most audiences are constrained by time and energy and cannot read thick reports
Cover both positive and negative messages (present the positive first and then the negative; people become defensive when criticisms are presented first)
Attractive to the audience: story-telling, simple language (avoid jargon), print quality
Make recommendations for follow-up

Institutionalizing MfDR: Results Focus


Governments are increasingly being called upon to be more accountable for results. Citizens, parliamentarians and the media expect national public management to focus beyond inputs, activities and outputs, towards outcomes and impacts.

Results Focus
Sector: Inputs -> Activities/Outputs -> Outcomes/Impacts
Health Sector: Funds disbursed -> No. of clinics built -> Quality of health service delivery
Education Sector: Funds disbursed -> No. of schools built -> How many girls and boys are better educated

Focus on results in all phases of development (planning, implementation and post-implementation).

Core Components of MfDR


Setting clear objectives
Translating objectives into measurable goals and targets using Key Performance Indicators (KPIs)
Allocating resources to activities that will contribute to the achievement of desired results
Measuring and reporting on results
Providing feedback to facilitate evidence-based decisions
(Second International Roundtable on MfDR)
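To illustrate the second and fourth components, the sketch below shows one way an objective could be translated into a KPI with a baseline and yearly targets and then checked against an actual value; the class, field names and reporting logic are assumptions for illustration, not a prescribed government format, and the example values are taken from the Palmyrah results framework shown later.

```python
# Illustrative sketch: a KPI with a baseline and yearly targets, checked against
# an actual value. Names and logic are assumptions, not an official MfDR format.
from dataclasses import dataclass
from typing import Dict

@dataclass
class KPI:
    name: str
    baseline_year: int
    baseline: float
    targets: Dict[int, float]   # year -> target value

    def meets_target(self, year: int, actual: float) -> bool:
        """True if the reported actual meets or exceeds the target for that year
        (assumes a 'higher is better' indicator)."""
        return actual >= self.targets[year]

# Example values from the Budget Call 2010 Palmyrah results framework below
palmyrah_plants = KPI(
    name="Palmyrah plants (Mn)",
    baseline_year=2009,
    baseline=11.0,
    targets={2010: 11.5, 2011: 12.0, 2012: 13.0},
)
print(palmyrah_plants.meets_target(2011, 12.3))  # True
```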

Results-Based Management: Best Practice Models

Oregon Benchmarks
Minnesota Milestones
Virginia Scorecard

Defining MfDR
shared vision, clear goals and measurement of results

Budget Call 2010 Results Framework - Palmyrah

Priority Thrust Area: Development of Palmyrah plantation and popularization of Palmyrah products

KPIs (Baseline 2009; Targets 2010 / 2011 / 2012):
1. Number of Palmyrah plants, to increase from 11 Mn to 20 Mn by 2016 (MC 10-year Dev. Framework, p. 6) through replanting and new planting: baseline 11 Mn; targets 11.5 / 12 / 13 Mn
2. Jaggery production (kg) from 6 centres (Jaffna, Mannar, Trincomalee, Kilinochchi, Mullaitivu and Vavuniya): baseline 7,496; targets 9,500 / 10,500 / 11,500
3. Pulp production (litres) from 4 centres (Jaffna, Vavuniya, Mannar and Trincomalee): baseline 4,251; targets 8,000 / 9,600 / 10,500
4. Soft drinks production (bottles) from 3 centres (Jaffna, Colombo and Vavuniya): baseline 16,756; targets 45,000 / 49,000 / 54,000
5. Fibre production (kg) from 6 centres (Jaffna, Mannar, Mullaitivu, Kilinochchi, Kalpitiya and Hambantota): baseline 5,660; targets 7,500 / 8,250 / 9,000

Budget Call 2010 Managing for Development Results Framework

Priority Thrust Area: Curative and Preventive Healthcare Service

KPIs (Base Year 2007; Targets 2010 / 2011 / 2012):
1. % of underweight children under 5 years: baseline 21.6%; targets 23% / 22% / 21%
2. Incidence of EPI target diseases (TB), rate per 100,000 population: baseline 48; targets 42 / 40 / 38
3. Immunization coverage of infants against measles: baseline 97.1%; targets 100% / 100% / 100%
4. % of women of childbearing age practicing modern family planning methods: baseline 52.8%; targets 54% / 55% / 56%
5. Human Resources for Health - doctor/population ratio (doctors per 100,000 population): baseline 55.1; targets 75 / 78 / 80
6. Nurse/population ratio (nurses per 100,000 population): baseline 157.3; targets 160 / 165 / 170

Managing for Results

Performance measures assess progress.
Goals: Where do we want to go?
Analysis: Where are we now?
Actions: How do we get there?
Performance Measures: How did we do?
(15,000 ft view)

Fundamental Prerequisites
Political will and policy environment - Govt. Policy on MfDR: Budget Call 2010; NARC Administrative Reform Agenda; Finance Commission Budget Call
Champions to lead the change management process
Leadership at different levels of Government (change agents)
MfDR Strategy and Action Plan
Adoption of a process approach - consensus building and buy-in (LM/NB/NPD/AG)

Country-level Community of Practice to facilitate peer-to-peer dialogue
Statistical information capacity building and readiness assessment

What we want to see is: Results, Not Efforts
