In the Innovation Statement late last year, the federal government indicated a strong belief
that more collaboration should occur between industry and university researchers.
At the same time, government, education and industry groupings have made numerous
recommendations for the impact of university research to be assessed alongside or in
addition to the existing assessment of the quality of research.
2016/4/11 10:03
Looking ahead, measures of quality such as the Excellence in Research for Australia (ERA)
rankings have been speculated to potentially influence future funding via prestigious
competitive schemes (such as the Australian Research Council), block funding for
infrastructure and the availability of government support for doctoral students via Australian
Postgraduate Awards.
So the demand for a measure of research quality and the potential uses of such a measure
are pretty clear.
But what valid reasons are there for investing significant resources in the measurement of
research impact or engagement?
If high-quality research addresses important practical problems (large or small), surely we
would expect impact to follow?
In this sense, the extent of impact is really a joint product of the quality (or robustness) of
research and the choice of topic (ie, practical versus more esoteric).
Ranking impact
Finally, how can impact be ranked? Is there a viable measure that can distinguish between
high and low impact? Existing case-study approaches are unlikely to yield any form of
quantifiable measurement of research impact.
Equally puzzling is the call to measure research engagement. What is the purpose of such an
exercise? Surely in a financially constrained research environment, universities readily
recognise the importance of such engagement and pursue it constantly.
We don't need a national assessment of engagement to encourage universities to engage.
Motive aside, one approach canvassed is the quantum of non-government investment in
research (ie, non-government research income).
This is arguably one rather limited way to measure engagement, and is focused on input
rather than output. If the purpose of any measurement is to capture outcomes, does it make
sense to focus exclusively on inputs? The logic of this escapes me.
Some would have us believe that a simple (but useful) measure of impact is the extent to which
university researchers receive industry funding. Surely this is, at best, a measure of
engagement, not impact.
Although the two are likely correlated, the strength of that correlation will vary greatly across discipline areas.
Further, in business disciplines, much of the knowledge transfer that occurs via education
(including areas such as executive programs) reflects the impact of the constant process of
researching better business practices across areas such as accounting, finance, economics,
marketing and so on.
Discretionary expenditure on such programs by business is surely an indication of the extent
to which business schools and industry are engaged, yet this would be ignored if we focused
on research income alone.
We must not lose sight of the fact that quality (ie, rigour and innovativeness) is a necessary
but not sufficient condition for broader research impact.
Engagement is not impact, and simple measures such as non-government research income
tell us very little about genuine external engagement between universities and industry.
As accountants know, performance measurement reflects its purpose. What we need before
any further national assessment of attributes such as impact or engagement is clear
understanding of the purpose of such an exercise.
Only when the purpose is clearly specified can we have a sensible debate about
measurement principles.