
What is 'evaluation' of research achievements and impacts?

Taking stock of current general issues

Robert Tijssen
Centre for Science and Technology Studies (CWTS)
Leiden University, Netherlands
tijssen@cwts.leidenuniv.nl

Presentation at the brainstorming meeting on Evaluating Research Outcomes and Impacts
EFC - European Forum on Philanthropy and Research Funding, Brussels, 17 April 2008
For European foundations, evaluation is an instrument [and a process] which teaches important lessons to be used by foundations, their partner organisations, other civil society organisations, and governments. In this role, evaluation has increasingly been recognised as having the capacity to strengthen foundations' accountability, management, understanding of their work's results, and their credibility, by disseminating powerful lessons about those results.

Source: The role of evaluation in the 21st century foundation / Edward Pauly. Gütersloh: Bertelsmann Foundation, 2005
'Research evaluation'

'[Research] evaluation is the systematic determination of merit, worth, and significance of something or someone [where research(-related) activities made a contribution]'

What kind of ‘research’?

How ‘systematic’?

What kind of ‘contribution’?

How significant was the contribution (if any)?

What impact did the grant have on that contribution?


Impacts of research grants

Adapted from Lewison (2000)


Financial-administrative environment

• Pressure from (government) accounting agencies towards more detailed financial and legal record keeping, and towards stricter and more detailed auditing of financial and legal procedures

• Similar pressure from in-house administrators, financial departments, controllers and control units to provide more (quantitative) information on (interim) results of research funding

• Expected outcomes may have to be specified before funding is released, and then monitored and evaluated

• Result: bureaucracy, paperwork and the need for (more) performance indicators and evaluation reports
Objectives-based/evidence-based approach

Organising principles
To meet a priori objectives (goals, targets, milestones, impacts)
To improve the effectiveness and efficiency of funding

Purpose
To relate inputs to the attainment of objectives

Methodologies
Qualitative (detecting and interpreting value judgments)
Quantitative (detecting and interpreting quantities)

Objectives-based/evidence-based approach

Key strengths
Common-sense appeal
Widely used approach
Focus on explicit objectives
Uses established methods and (measurement) techniques

Key weaknesses
Objectives are often not clearly specified
Narrow basis for judging the level of attainment of a project or program
Focuses primarily on (short-term) results
Not all elements are (sufficiently) amenable to quantitative measurement
Evaluation Methods and Tools

Qualitative
In-depth, open-ended interviews
Direct observation
Written document analysis
Independent external sources (e.g. expert panels)

Quantitative
Questionnaires
Independent external sources (e.g. statistical data on publication output and citation impact)
Improving and adapting evaluation methodologies

Introducing 'pillars of practical wisdom'

Form follows function
The evaluation approach should follow grant objectives, framework conditions and performance criteria
Evaluation objectives should focus on the production, dissemination and utilization of research-based knowledge, skills and capabilities
Focus on the defining characteristics of scientific research

People
Creativity and ideas
Risk taking
Long-term agendas
International competition and/or cooperation
Trust, but verify
Include measurable objectives

Numbers matter
Measurable criteria and numerical performance indicators drive behaviors and results

Goodhart's Law: when a measure becomes a target, it ceases to be a good measure
Evaluation toolbox

Use whatever tools and methods are most appropriate to reach all stakeholders

Evidence-based evaluations require widely accepted metrics and performance indicators

Past performance is an acceptable predictor of current capabilities and future success
Degrees of success and failure

• Specify key objectives and performance criteria as clearly as possible in advance

• Apply uniform definitions, measurements, metrics and (inter)national scales
Keep it simple

Grant → Activities → Output → Impact
(alongside other inputs, other outputs and other impacts)

… but don't ignore realities!


Basket of methods and measures

There is no single best method for research performance evaluation
Garbage in, garbage out

Check the information value and quality of the data required for monitoring and evaluation
Include external sources of independent information

• Opinions of subject experts and expert panels

• Statistical information (bibliometric data)

Peer-review-based quality control systems are not infallible

Results of peer review are highly dependent on the selection of experts and expert panels/committees

Knowledgeable and impartial evaluators are difficult to find (especially in small countries/scientific communities)

Critical aspects of the research, the research environment, the technical infrastructure, or details of the research findings are difficult for external experts to grasp

Experts may lack expertise across all relevant and related knowledge domains (interdisciplinary and multidisciplinary areas)

Experts may be lacking in newly emerging fields (with many young researchers and newcomers)

Lack of consensus, lack of a common frame of reference

Bibliometric indicators

• Bibliometric data are based on a large number of expert opinions and peer judgements (peer-review accepted publications and citations from peers)

• Bibliometrics is a non-reactive measurement methodology (decisions to accept or cite papers are not, or are much less, affected by the possibility of future evaluation)

• Due to the large number of peer 'votes', and their international scope, the information base is expanded (biases and misconceptions of local experts are also countered)
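To illustrate the kind of quantitative indicator referred to above, the sketch below computes two elementary bibliometric measures, publication output and mean citations per publication, for each grant. It is a minimal example under invented data: the grant identifiers, record layout and citation counts are hypothetical, and a real bibliometric analysis would normalise citation counts by field and publication year.

```python
"""Illustrative sketch only: elementary bibliometric indicators per grant.
All records below are hypothetical, not data from the presentation."""

from collections import defaultdict
from statistics import mean

# Hypothetical peer-reviewed publications attributed to research grants,
# each with the number of citations received from other papers.
publications = [
    {"grant": "G-001", "year": 2006, "citations": 12},
    {"grant": "G-001", "year": 2007, "citations": 3},
    {"grant": "G-002", "year": 2007, "citations": 0},
    {"grant": "G-002", "year": 2007, "citations": 25},
]

# Group the citation counts of each grant's publications.
citations_by_grant = defaultdict(list)
for pub in publications:
    citations_by_grant[pub["grant"]].append(pub["citations"])

# Indicator 1: publication output (number of peer-reviewed papers per grant).
# Indicator 2: mean citations per publication, a crude proxy for citation
# impact; serious bibliometric studies normalise by field and year.
for grant, counts in sorted(citations_by_grant.items()):
    print(f"{grant}: {len(counts)} publications, "
          f"mean citations per publication = {mean(counts):.1f}")
```

Run as-is, this prints one summary line per grant; in practice the publication records would be drawn from a bibliographic database rather than a hard-coded list.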
Partnerships

Researchers and research organizations should submit high-quality (cleaned and harmonized) performance data

Evaluation fatigue and administrative burdens

Collect essential information only

Tailor the information need to the relevant issues and contexts (e.g. fields of science, types of activity, timescales)
Learning is the key

Accept trade-offs between accountability and the effectiveness/efficiency of administrative structures and procedures

Address mismatches between information needs and supply

Shift focus from one-off ex-post evaluations of returns to ongoing monitoring of capability development, scientific progress and (sustainable) added value
It takes two to tango

Involve stakeholders and users in all stages of the evaluation process

Share tasks and responsibilities

From 'jury judgment' to 'coaching'?


Learning curves

Focus evaluation on determining the key success factors (and causes of failure)

Responsiveness to science-related developments

Improving evaluation methods and tools
