as typically applied in public sector organisations. Such mechanisms are usually implemented as
a causal loop which is established between perceived performance and resulting actions, thereby
constituting a form of feedback control. Within this context a two-dimensional matrix model is
postulated in which the independent dimensions are the source of control and the nature of the
resultant control-action. The paper examines the implications revealed by this model within the
context of performance management and system dynamics. The role of influence
diagrams and dynamic simulation models is thereby introduced as a potential means of
unravelling the complex behaviour which can often arise in the presence of such interactive
cause-effect loops. A number of typical examples, drawn from within the public sector, are
invoked to illustrate the discussion.
Introduction
Although the measurement of performance in the public sector is relatively
new, a substantial body of literature on performance management has
developed since the late 1970s, encompassing terms such as performance
measures, performance indicators, performance appraisal and review, value for
money and, more recently, quality assurance. This literature has mirrored a
parallel development in which the language of performance has become an
almost everyday feature of work in public sector organisations, in some form or
another. Similarly, a new ``industry'' has developed within the public sector
which is concerned with collecting, reporting, and appraising organisational
performance (Holloway, 1999; Rouse, 1993, 1999).
Public sector organisations differ in important respects from their
commercial counterparts in the private sector. There is no profit maximising
focus, little potential for income generation and, generally speaking, no bottom
line against which performance can ultimately be measured. The vast majority
of public sector organisations still generate most of their income from the State,
and have to account to several stakeholders. Consequently it was once, and until
relatively recently, considered impossible to measure performance in the public
sector.
The International Journal of Public Sector Management, Vol. 13 No. 5, 2000, pp. 417-446. © MCB University Press, 0951-3558
The first attempts at performance evaluation and review were associated
with the failed attempts at large scale strategic planning in the 1970s, and it
was not until the appearance of organisational and managerial reforms
introduced by the Conservative Governments of the 1980s and 1990s that
public sector performance measurement became firmly established. Indeed, it is
one of the underlying arguments of this paper that, in relative terms,
performance measurement is still in its infancy (or at least, its adolescence).
Consequently, the approaches used are still in need of further investigation and
development, particularly in terms of understanding the resultant action
arising from the measurement and evaluation process.
Initially, attempts at evaluating public sector organisational performance
centred on the assessment of value for money. This was normally conducted by
external auditors through scrutinisation of agencies' accounts. Gradually, a
whole range of measures and indicators of performance arose throughout the
whole public sector, in an attempt to identify examples of good and poor
resource usage. More recently, the language of performance has been
should the information arising from the measurement process be used? Neither
question is easy to answer, although substantial effort has been applied in
attempting to do so, particularly in the case of the former.
It is common practice in public sector performance management literature to
talk about the three Es of:
(1) economy;
(2) efficiency; and
(3) effectiveness,
based upon a simple input, process and output model of organisations (Flynn,
1997; Rouse, 1999; Carter et al., 1995). Input resources are generally thought of
as physical, human (staff and clients/cases) and financial. Proponents of
knowledge management and associated concepts such as ``the learning
organisation'' would also include ``informational'' in this list. Financial inputs
are, perhaps, the most important as acquisition of other resource types usually
depends upon the funds available. Many measures commonly used in public
sector organisations are based on derivatives of this ``economy'' or input
oriented perspective, usually expressed in terms of cost, budget and staffing
totals. Comparisons can then be made across similar types of organisations.
Examples of generic measures used include cost per case, cost per service type,
numbers and categories of staff involved. These can then translate into specific
measures such as cost per patient, staff-student ratios, unit cost per refuse
collection, numbers of employed ancillary, skilled and professional employees,
and so on. Any change in these performance measures simply reflects the
``economy'' with which the organisation is using its resources and provides little
information about the operational processes within the organisation, apart
from some crude benchmarking.
At the other end of the ``three Es'' spectrum are located
the outputs from the organisation. These can also be easily measured in
quantifiable terms such as patients treated, crimes solved, students gaining
various qualifications at different grades, children placed in foster care, and so
on. Unfortunately, as discussed below, these tell us little about the real success,
or otherwise, of the organisation, and are mainly of use in the calculation of a
ratio of input to output which is a measure of organisational efficiency. An
increase in the number of outputs, for a given input, simply demonstrates how
efficiently an organisation is converting its inputs into outputs but provides
very little information about the effectiveness or value of these outputs.
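The economy and efficiency measures described above can be sketched in a few lines of Python. The figures and function names here are purely illustrative assumptions, not drawn from the paper:

```python
def economy_measure(total_cost, cases):
    """Input-oriented ``economy'' measure: cost per case."""
    return total_cost / cases

def efficiency_ratio(outputs, inputs):
    """Output/input ratio: how efficiently inputs become outputs."""
    return outputs / inputs

# A hypothetical service unit: 500,000 of budget, 2,000 cases handled.
cost_per_case = economy_measure(500_000, 2_000)   # 250.0 per case

# Efficiency rises if outputs grow for the same inputs...
baseline = efficiency_ratio(outputs=2_000, inputs=500_000)
improved = efficiency_ratio(outputs=2_200, inputs=500_000)
assert improved > baseline   # ...yet this says nothing about effectiveness
```

As the text notes, such ratios capture only economy and efficiency; the value of the outputs themselves lies outside what these numbers can express.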
Finally, effectiveness is concerned with the extent to which outputs meet
organisational needs and requirements and is therefore much more difficult to
assess, let alone measure. Public sector organisations are created to meet some
form of perceived societal need. However, it is debatable whether simply
increasing the number, and measurement, of outputs will automatically
result in the meeting of such needs. Questions even arise concerning the
definition of need. This is often vague and inconclusive such as, for example,
the need for ``a well educated society''. Similarly, the desired quality of the
outputs, in terms of meeting this need, may also be questioned. The actual role
(mission) of individual organisations and agencies in meeting this need, and the
different requirements or perceptions of various stakeholders, in terms of the
dimensions of the outputs encountered when attempting to meet this need, may
also be unclear. In education, for example, students, employers, the academic
community, and the Government all have different expectations and demands.
Hence it is necessary to define an additional term, namely ``outcome'', defined
here as the impact that outputs have in meeting this perceived need. This is
generally thought of in qualitative terms which implies that outcomes are
difficult, in themselves, to measure. Furthermore, the process is also frequently
complicated by the length of time it takes for such impacts to be identified.
Finally, the impact of outcomes arising from the actions of other agencies,
working in related policy areas, adds further complexity, e.g. welfare services
and health.
perform at an even higher standard, whilst the ``bad'' organisation has to make
even greater gains simply to survive. League tables in education and health,
and quality audits such as the Research Assessment Exercise in higher
education, are common examples of this mechanism (these are discussed in
more detail, as short case studies, later in this paper).
More recent developments have seen the introduction of standards covering
all aspects of organisational work, normally established on the basis of some
national criteria (although sometimes allowing for local interpretation) with
quality audits performed by an external quality agency. Once again, prominent
examples can be drawn from the education and health sectors. Individual
organisations are graded on their performance (quality) against these
standards and those considered as poor performers have to show improvement
or face threats of closure and the imposition of Government appointed
administrators to ``turn them round''. Even if there are no direct ties to resource
allocation within these organisations, a ``poor'' performance label often precedes
a fall in the customer/client base which in turn leads indirectly to a decline in
the resources available.
the organisation would typically imply the use of indicators to identify areas
which are not performing to expectation, and investigation of the reasons
behind this. Alternatively, authority located external to the organisation
implies the existence of an outside body holding the organisation accountable
for the way in which its resources are deployed. Obviously, external
accountability is important when considering the usage of public funds, but
limitations and inadequacies associated with the performance measures can
also have a negative and threatening impact upon the members of the
organisation, with emphasis placed upon ever increasing efficiency gains.
Consequently Smith (1995b) argues that external control should be concerned
with the achievement of outcomes, not accountability for the use of inputs.
The second dimension concerns the nature of the controlling action which is
taken and addresses the question of whether this would be construed as being
positive (supportive or beneficial) or negative (threatening or punitive)
respectively. Negative action implies an assumption that under-performance is
the result of mismanagement of resources leading to inefficiency. This may
lead to a reduction of the resource base, in consequence. Positive action, on the
other hand, would imply that in the same situation the indicators would be
used to highlight lack of achievement which then triggers initiation of an
investigation as to why this state has arisen. This could subsequently lead to a
re-deployment of resources to an under-resourced area, or to some form of
retraining, staff development or some other form of organisational
development strategy.
Figure 1. Control locations and resultant action matrix model
Taking the location of the control authority and resultant action as the two
independent dimensions, four quadrants can then be identified, as depicted in
Figure 1.
Consideration of conditions prevailing in each of the four respective
quadrants may be characterised as follows:
(1) Quadrant 1 performance management systems would typically
include internal quality assurance and assessment procedures (Boland
and Silbergh, 1996). Application of these would lead to positive
(supportive) actions arising. In principle, the concept
of total quality management could broadly characterise this approach.
(2) Quadrant 2 typically involves an external body that is responsible for
auditing an organisation's own performance measurement and quality
systems, rather than performing these directly and in detail for each of
an organisation's operational activities. This external agent can then
offer advice on how the organisation as a whole can develop its
component parts to ensure continued development and high
performance. The process is characterised by less emphasis on formal
measures and more reliance on general indicators of outcome (Smith,
1995b) and is therefore seen as basically positive in nature due to its
generally supportive role.
(3) Quadrant 3 is typically characterised by sub-units within the
organisation being measured internally, as in quadrant 1. In practice,
this function will often be performed by ``the centre''. However, in this
quadrant control is associated with negative consequences, in terms of
actions arising. This is especially true in large complex organisations
and public sector authorities where departmental budgets are allocated
centrally and may be adversely affected by alleged poor performance,
such as occurs in the Civil Service (Horton, 1999) and local government
(Painter and Isaac-Henry, 1999).
(4) Quadrant 4 is similar to quadrant 3 in that negative (punitive) actions
can be expected to follow a poor assessment, but differs in that authority
and control are now imposed externally. It is the contention of these
authors that ultimately the practice of performance management in the
public sector lies mainly (if not completely) in quadrant 4. However,
as is argued in subsequent sections, system dynamics and systems
modelling theory readily illustrate that this is an inherently unstable
regime in which to operate, and can easily lead to an overall worsening
of public services rather than holistic improvement.
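The four quadrants amount to a simple two-dimensional lookup, which can be sketched in Python; the string labels and the example mappings are this sketch's own encoding of the matrix model, not the authors':

```python
# Hypothetical encoding of the matrix model: control location
# (internal/external) crossed with action nature (positive/negative).
QUADRANTS = {
    ("internal", "positive"): 1,  # e.g. internal quality assurance, TQM
    ("external", "positive"): 2,  # e.g. supportive external audit of systems
    ("internal", "negative"): 3,  # e.g. central budget penalties for sub-units
    ("external", "negative"): 4,  # e.g. league tables with punitive sanctions
}

def classify(control_location, action_nature):
    """Return the quadrant (1-4) for a performance management regime."""
    return QUADRANTS[(control_location, action_nature)]

# The paper's contention: most public sector practice currently sits here.
assert classify("external", "negative") == 4
```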
In summary, it is thereby seen that, within the matrix model, two extreme
positions can be identified in quadrants 1 and 4. These are respectively internal
control combined with positive action, on the one hand, and external control
combined with negative action, on the other. Furthermore, it is argued herein
that quadrant 1 is the most desirable location for most public sector
organisations, in terms of satisfying, in the long term, the needs of the majority
of stakeholders. However, a majority of performance management systems in
the public sector may be seen to fall into quadrant 4. Examples include school
league tables and OFSTED related quality standards in education; the
Research Assessment Exercise and QAA quality audits in higher education;
hospital waiting lists, mortality league tables and clinical governance in the
NHS; HMIC performance indicators in the police service, etc.[1].
Systems concepts
Public sector management occurs within a complex, dynamic system involving
several nominally independent stakeholders, coupled with informational and
resource material flows and behaviour that is characterised by inertia and
multiple feedback loops. It is therefore apparent that the generic principles of
Figure 2. Closed loop system and subsystems
Figure 3. Relationships between alternative performance measures
argued in previous sections, use of the term ``outcomes'' may be preferred to
denote the essentially intangible, multidimensional attributes which are of real
concern, relative to stakeholder needs in the public sector. In Figure 3, this
notion is depicted using the ``matching comparison'' block to assess outcomes
relative to needs. This encapsulates the now familiar prioritisation adage that:
It is more important to do the right things than to do things right.
The implication here is that it is more important to achieve the outcomes that
people want, rather than becoming optimally efficient in delivery. In fact
efficiency improvements may be achieved either by increasing outputs, while
deploying the same inputs, or by maintaining the same output with reduced
inputs. However, the ``prioritisation adage'' emphasises the importance of
Downloaded by University of Louisville At 03:02 24 January 2015 (PT)
achieving the right outputs, in preference to the goal of using inputs with
optimal efficiency. If a choice exists, this clearly points to the order of
prioritisation.
Conversely, some confusion can arise in the case where one of the desired
outcomes is the attainment of the lowest cost (i.e. the best possible use of input
resources). In this case, the prioritisation adage may prove somewhat
tautologous. This suggests that perhaps a more comprehensive definition may
be required that would address the simultaneous optimisation of both outcome
effectiveness and resource use efficiency. The concept of ``value for money'', as
depicted in Figure 3, is intended to encapsulate this notion.
Figure 4. Negative feedback system block diagram
Figure 5. Negative feedback system influence diagram
system'' performs the transformation of those resources to create various
outputs, one of which is the generation of expenditure. This in turn is ``sensed''
or monitored by the accounting system. Ideally, expenditure equals budgeted
resource at the desired equilibrium condition. However, if the rate of
expenditure then increases in magnitude, the negative sign in the causal loop
indicates that budget surplus will be reduced or become negative, which
implies that the controller then needs to reduce resource allocation. This, in
turn, should eventually reduce expenditure so that after some delay the system
is eventually restored to a state of balance. Hence we have the classic negative
feedback balancing action whose intention is to ensure that the system behaves
as its designers and operators (i.e. management) intended.
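The balancing behaviour just described can be illustrated with a minimal discrete-time sketch in Python. The controller gain and the one-off overspend shock are assumptions made purely for illustration, not values from the paper:

```python
def simulate_budget_loop(budget=100.0, shock=20.0, gain=0.5, steps=30):
    """Expenditure trajectory after a one-off overspend shock."""
    expenditure = budget + shock           # system starts over budget
    history = []
    for _ in range(steps):
        surplus = budget - expenditure     # the negative sign in the causal loop
        expenditure += gain * surplus      # controller trims resource allocation
        history.append(expenditure)
    return history

trajectory = simulate_budget_loop()
# The balancing action restores equilibrium: expenditure converges on budget.
assert abs(trajectory[-1] - 100.0) < 1e-3
```

Each pass around the loop halves the deficit (with gain 0.5), so the shock decays geometrically towards the budgeted equilibrium, mirroring the negative feedback balancing action described in the text.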
It should be noted that, in this sense, negative feedback arguably carries the
connotation of something that is actually good; i.e. achieving controllability of
the system and conformance with some clearly specified performance objective
(remaining within budget, in this case). This reverses the more conventional
usage of the adjective ``negative''.
Positive feedback
Positive feedback systems are structurally identical to those depicted in
Figures 4 and 5, except that the negative sign at the summing junction is now
replaced by a positive. This is readily depicted algebraically, using Figure 6 as
a basis, which depicts the equivalent, simplified, static case corresponding to
the fully dynamic representation depicted in Figure 4. The key difference is
that the system has now been configured for positive feedback and presented in
simplified format. In this case H and G appear as simple multipliers (rather
than the full dynamic transfer functions depicted in Figure 4). These act
directly on y, z and x respectively. Hence the output y, is seen to be equal to the
sum of the two components x and z multiplied by the simplified transfer
function G.
Hence y = (x + z).G, where z = y.H.
So y = G.x + H.G.y,
whereby y.(1 - H.G) = G.x,
or
y/x = G/(1 - H.G).
The net result of this positive feedback closed loop is a
multiplier effect[2]. For example, setting G = 1 (whereby y is nominally equal to
x in the corresponding open loop system) and setting H = 0.5 results in a
multiplier of 2. Likewise, if H is increased to 0.67, the multiplier increases to 3,
and so forth.
However, the transfer function expression above also displays the danger
inherent in positive feedback systems. For example, as the product H.G
approaches unity, then the multiplier effect becomes infinite, implying that the
smallest change in input leads to a massive corresponding response in output.
Hence in practice, from a systems control perspective, positive feedback creates
a destabilising influence with a tendency to drive dynamic systems towards
instability. In this sense, positive implies bad, which again contradicts the
everyday language use of the adjective.
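The closed-loop expression and the worked multiplier values above can be checked directly; this short Python sketch simply evaluates y/x = G/(1 - H.G):

```python
def closed_loop_multiplier(G, H):
    """Static closed-loop gain y/x = G / (1 - H.G); diverges as H.G -> 1."""
    return G / (1.0 - H * G)

assert closed_loop_multiplier(G=1.0, H=0.5) == 2.0           # multiplier of 2
assert round(closed_loop_multiplier(1.0, 0.67), 1) == 3.0    # multiplier of 3

# As the product H.G approaches unity the multiplier grows without bound:
assert closed_loop_multiplier(1.0, 0.99) > 99.0
```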
A typical example of positive feedback is depicted, in influence diagram
format, in Figure 7 (note that the two negative signs, around the loop, combine
through conventional algebraic notation to form a positive). This is similar to
Figure 5 except that the emphasis has now shifted so that ``system
performance'' becomes the focus of monitoring and control, rather than
budgetary activity. However, this is being done in such a way that performance
is now linked positively to provision of resources (not an uncommon
phenomenon, especially in public sector organisations, as discussed in previous
sections of this paper). The implication of this is that if performance dips below
target, the control system will register an expanded ``performance gap'' which is
then acted upon to produce a reduction in resource allocation. In many cases
this leads to further degradation of performance. A ``vicious spiral'' is thereby
initiated, leading ultimately towards collapse, unless some contravening action
is taken. In systems terminology, this would usually require that some form of
nonlinearity is encountered.
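The ``vicious spiral'' can be caricatured numerically. In this sketch only the loop structure follows Figure 7; the penalty and sensitivity coefficients, and the starting values, are invented for illustration:

```python
def simulate_spiral(performance=90.0, target=100.0, resources=100.0,
                    penalty=0.3, sensitivity=0.2, steps=10):
    """Trajectory of performance under a punitive positive feedback loop."""
    history = []
    for _ in range(steps):
        gap = target - performance                        # widening shortfall
        resources -= penalty * gap                        # punitive cut, not support
        performance += sensitivity * (resources - 100.0)  # fewer resources, worse service
        history.append(performance)
    return history

trajectory = simulate_spiral()
# Each step is worse than the last: a runaway decline, not a correction.
assert all(later < earlier for earlier, later in zip(trajectory, trajectory[1:]))
```

Because every shortfall triggers a further resource cut, the gap feeds on itself, which is exactly the runaway tendency the text attributes to positive feedback in the absence of some limiting nonlinearity.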
It is noted that, although systemically this is a typical example of ``positive
feedback'', a term which colloquially carries the connotation of something good,
in practice, the outcome will probably be deemed to be anything but good by
those affected. Indeed, perhaps ironically, the outcome would more likely be
considered to be very negative. Hence in systems terminology the terms
negative and positive feedback have a reversed polarity relative to more
traditional usage, including that which is associated with the matrix model in
Figure 1. Negative feedback implies careful system design with the intention of
creating balanced, stable systems that always seek convergence towards some
declared goal. Conversely, positive feedback, unless impeded by nonlinearity or
some other systemic influence, will tend towards instability and uncontrolled
``runaway'' situations.
Figure 6. Positive feedback block diagram (static case)
Figure 7. Positive feedback influence diagram
Figure 8. Composite influence diagram
However, a problem potentially exists with the behaviour of this system in the
event that resources are reduced, say to satisfy the budgetary requirement in
the inner loop. In the event that this ultimately triggers a reduction in the
performance of the core service delivered, this, in turn, increases the gap
between the target and achieved performance which then causes a further
reduction in resource, as depicted by the second negative sign in the outer loop.
A role for computer simulation
As described above, in qualitative terms, behaviour of the system depicted in
Figure 8 depends upon interaction between the two respective causal loops.
The actual nature of this interaction depends, in turn, upon a number of factors
including relative loop dominance, the degree of inertia and the magnitude of
any delays that are present. Since numerous possibilities exist, each
accompanied by its own particular behavioural consequences, an effective way
to evaluate the relative significance of these factors, and to understand the
behaviour of the system, is to build a computer simulation model of it (Vennix,
1996). This is depicted at a very fundamental level, in Figure 9, using a popular
commercial simulation software application IThink (Richmond, 1994).
This is essentially a continuous system simulation facility that performs
continuous numerical integration of flows (the generic modelling elements,
depicted by the double arrows) which accumulate, in time, as stocks (state
variables, generically depicted as rectangles). Algebraic manipulation occurs at
the ``converters'' (the circle elements). These may also incorporate nonlinear
functions, discontinuities and a wide range of logical operations which, in
effect, support a discrete-event functionality. The fourth and final generic
element, used in creating simulation models, is the ``connector'' (single line)
which carries information between the other generic model ``building-blocks''.
Figure 9. Simulation model of performance management system
The upper section of the simulation patch diagram, presented as Figure 9,
shows the financial control loop with budget surplus (or deficit) represented as
a stock variable resulting from the net accumulation of resource flows and
expenditure. Expenditure is determined in relation to the established budget
but also incorporates a budget deficit ``trimming policy''. Changes in the
resource budget, or expenditure policy, may be implemented at the respective
converters while external influences, that can affect expenditure, outside of the
normal planning policy can be implemented in another independent converter
labelled ``external expenditure influence''.
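A much-simplified numerical reading of this two-loop structure can be sketched with Euler integration of the two stocks. All parameter values and functional forms here are this sketch's assumptions rather than the authors' IThink model:

```python
def simulate(steps=200, dt=0.25, budget=100.0, target=100.0,
             trim=0.5, penalty=0.2, sensitivity=0.1):
    surplus = 0.0          # stock: accumulated (resource inflow - expenditure)
    performance = 95.0     # stock: core service starts slightly below target
    for _ in range(steps):
        expenditure = budget + trim * surplus                  # inner loop: deficit trimming policy
        resources = budget - penalty * (target - performance)  # outer loop: punitive resource cut
        surplus += dt * (resources - expenditure)              # Euler integration of the stocks
        performance += dt * sensitivity * (resources - budget)
    return surplus, performance

surplus, performance = simulate()
# The inner loop balances the books, but the punitive outer loop drags
# the core service steadily downwards, as described in the text.
assert performance < 95.0 and surplus < 0.0
```

Even this crude sketch reproduces the qualitative problem noted earlier: cuts made to satisfy the inner budgetary loop widen the performance gap, which triggers further cuts through the outer loop.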
The lower section of Figure 9 shows the performance control loop similarly
Figure 10. Responses obtained from simulation model
A similar observation could be made with respect to audits in further and
higher education. Moreover, further developments, potentially akin to the
school sector's National Curriculum, may already exist at the planning stage. It
is also notable that the new Government elected in 1997 has continued the
established trend, introduced and developed by its predecessor, of increased
centralisation, the use of performance indicators and an enhanced role for
external audit (Holloway et al., 1999).
Discussion
Public sector management shares much in common with, and has been
significantly influenced by, practice in the private sector. However, in many
respects performance management in the sector is relatively more complicated
due to the absence of the single overriding goal which ultimately dominates
private sector companies: the motivation to make profits and provide
satisfactory financial returns to shareholder interests.
Consequently, attempts at performance improvement based on indicators
and systems of measurement have often proved controversial. Concerns
have been expressed that ``what gets measured is what can be
measured'' and, furthermore, ``what gets measured gets done''. However, the
question remains open as to whether this necessarily leads to the salutary
outcomes required, in terms of satisfaction of the needs for which such
organisations were originally created.
The paper has therefore examined, from a systems perspective, the
relationship between the actions initiated as part of the performance
in assessing public management reforms, also have noted a shift away from the
traditional public service culture. They argue that top-down, hierarchic
management is inconsistent with public sector professionalism, flexible and
responsive service provision and political participation by citizens (Farnham
and Horton, 1993, p. 254). However, developments since that time have been
characterised by an increased enthusiasm for external controls, as noted in the
cases above. Unfortunately, this often appears to take scant account of the
traditional qualities referred to above but is primarily concerned with ever-
increasing efficiency drives, apparently reflecting and emulating productivity
improvement initiatives in the private sector. However, in terms of long-term
outcome attainment, it must be recognised that such approaches may
ultimately prove to be misguided.
Conclusion
In summary, due consideration of systemic relationships, within the context of
the control location/action matrix model, is firmly endorsed as potentially
providing a framework within which attitudes and policies towards
performance management in the public sector can be reconsidered and possibly
refocused to reflect outcome attainment clearly. It is accepted that thinking
through the full run of cause and effect sequences, arising in the interconnected
dynamic feedback loops which are encountered in the domain of public sector
management, may be a daunting task in practice. However, the process of
qualitative mapping and modelling using influence diagrams, and quantitative
modelling leading to dynamic simulation, can offer considerable assistance in
this respect. These tools can subsequently reveal, at the policy design and
reformulation stage, the likely behaviour of the system, following
implementation of performance management initiatives or the experience of
``shocks to the system'' arising from an uncontrollable environment. Finally, it
is suggested that the frameworks and methodologies presented herein, whilst
not offering a panacea for performance management in the public sector, do at
least provide a vehicle for consideration and deeper systemic understanding of
the organisational dynamics involved.
Notes
1. These examples relate to the cases discussed in more detail later in the paper. However, the
list is not exclusive and these examples are symbolic of many other public services. For
further information see: Horton and Farnham (1999); Carter et al. (1995); Jackson (1995).
2. One of the most influential examples being the Keynesian multiplier (Keynes, 1951)
encountered in macro-economic theory, which ensures disproportionate outcomes to the
injection or retraction of money into the economy.
3. This refers to the drive to ensure that every patient is assigned a particular named nurse
who will be their primary point of contact with the hospital system.
References
Boland, T. and Silbergh, D. (1996), ``Managing for quality: the impact of quality management
initiatives on administrative structure and resource management processes in public
sector organizations'', International Review of Administrative Sciences, Vol. 62, pp. 351-67.
Carter, N., Klein, R. and Day, P. (1995), How Organisations Measure Success: The Use of
Performance Indicators in Government, Routledge, London.
Checkland, P. (1981), Systems Thinking, Systems Practice, Wiley, London.
Corby, S. (1999), ``The National Health Service'', in Horton, S. and Farnham, D. (Eds), Public
Management in Britain, Macmillan, London, ch. 11, pp. 180-93.
Coyle, R.G. (1977), Management System Dynamics, Wiley, Chichester.
Department of Health, (1997), The New NHS, Modern Dependable, (Cm 3807), HMSO, London.
Farnham, D. and Horton, S. (1993), ``The new public services managerialism: an assessment'', in
Farnham, D. and Horton, S. (Eds), Managing the New Public Services, Macmillan,
Basingstoke, ch. 11, pp. 237-54.
Flood, R.L. and Jackson, M.C. (1991), Creative Problem Solving, Wiley, Chichester.
Flynn, N. (1997), Public Sector Management, 3rd ed., Prentice-Hall/Harvester Wheatsheaf,
London.
Forrester, J.W. (1961), Industrial Dynamics, MIT Press, Cambridge, MA.
Fowler, A. (1998), ``Operations management and systemic modelling as frameworks for BPR'',
International Journal of Operations & Production Management, Vol. 18 Nos 9/10,
pp. 1028-56.
Fowler, A. (1999), ``Modelling, simulation and innovative design in complex adaptive business
management systems'', Computing and Control Engineering Journal, Vol. 10 No. 6,
pp. 267-76.
Greasley, A. and Barlow, S. (1998), ``Using simulation modelling for BPR: resource allocation in a
police custody process'', International Journal of Operations & Production Management,
Vol. 18 Nos 9/10.
Holloway, D., Horton, S. and Farnham, D. (1999), ``Education'', in Horton, S. and Farnham, D.
(Eds), Public Management in Britain, Macmillan, Basingstoke, ch. 12, pp. 194-212.
Holloway, J. (1999), ``Managing performance'', in Rose, A. and Lawton, A. (Eds), Public Services
Management, Financial Times/Prentice-Hall, Harlow, ch. 12, pp. 238-59.
Horton, S. (1999), ``The Civil Service'', in Horton, S. and Farnham, D. (Eds), Public Management in
Britain, Macmillan, Basingstoke, ch. 9, pp. 145-61.
Horton, S. and Farnham, D. (1999), Public Management in Britain, Macmillan, Basingstoke.
Jackson, P.M. (1995), Measures for Success in the Public Sector: A Reader, CIPFA, London.
Kaplan, R.S. (1991), ``New systems for management and control'', The Engineering Economist,
Vol. 36 No. 3, pp. 201-18.
Keynes, J.M. (1951), The General Theory of Employment, Interest and Money, Macmillan,
London.
Lehaney, B. and Hlupic, V. (1995), ``Simulation modelling for resource allocation and planning in
the health sector'', Journal of the Royal Society of Health, Vol. 115 No. 6, pp. 382-5.
Loveday, B. (1999), ``Managing the police'', in Horton, S. and Farnham, D. (Eds), Public
Management in Britain, Macmillan, Basingstoke, ch. 13, pp. 213-31.
McGill, P. (1994), ``Turning the tables'', Guardian Education, 22 November, pp. 2-3.
Martin, A. (2000), ``A simulation engine for custom project management education'', International
Journal of Project Management, Vol. 18 No. 3, pp. 201-3.
Morecroft, J. (1999), ``Visualising and rehearsing strategy'', Business Strategy Review, Vol. 10
No. 3, pp. 17-32.
Morecroft, J.D.W. and Sterman, J.D. (1994), Modelling for Learning Organizations, Productivity
Press, Portland, OR.
Painter, C. and Isaac-Henry, K. (1999) ``Managing local public services'', in Horton, S. and
Farnham, D. (Eds), Public Management in Britain, Macmillan, Basingstoke, ch. 10,
pp. 162-79.
Pollitt, C. (1985), ``Measuring performance: a new system for the National Health Service'', Policy
and Politics, Vol. 13 No. 1, pp. 1-15.
Pollitt, C. (1993), Managerialism and the Public Services, 2nd ed., Blackwell, Oxford.
Rakich, J.S., Kuzdrall, P.J., Klafehn, K.A. and Krigline, A.G. (1991), ``Simulation in the hospital
setting: implications for managerial decision making and management development'', The
Journal of Management Development, Vol. 10 No. 4, pp. 31-4.
Richardson, G.P. (1991), Feedback Thought in the Social Sciences and Systems Theory, University
of Pennsylvania Press, Philadelphia, PA.
Richmond, B. (1994), Business Applications, High Performance Systems Inc., 45 Lyme Road,
Hanover, NH 03755.
Rouse, J. (1993), ``Resource and performance management in public service organizations'', in
Isaac-Henry, K., Painter, C. and Barnes, C. (Eds), Management in the Public Sector:
Challenge and Change, Chapman & Hall, London, ch. 4, pp. 59-76.
Rouse, J. (1999), ``Performance management, quality management, and contracts'', in Horton, S.
and Farnham, D. (Eds), Public Management in Britain, Macmillan, Basingstoke, ch. 5,
pp. 76-93.
Senge, P. (1990), The Fifth Discipline, Doubleday/Currency, New York, NY.
Smith, P. (1995a), ``Performance indicators and outcomes in the public sector'', Public Money &
Management, Vol. 15 No. 4, pp. 13-16.
Smith, P. (1995b), ``Outcome-related performance indicators and organizational control in the
public sector'', in Holloway, J., Lewis, J. and Mallory, G. (Eds), Performance Measurement
and Evaluation, Sage, London, ch. 10, pp. 192-216.
Stacey, R.D. (1993), Strategic Management and Organisational Dynamics, Pitman, London.
Stacey, R.D. (1996), Complexity and Creativity in Organizations, Berrett-Koehler, San Francisco,
CA.
Sterman, J.D. (1987), ``Testing behavioural simulation models by direct experiment'',
Management Science, Vol. 33 No. 12, pp. 1572-92.
Sterman, J.D. (1989), ``Modeling managerial behaviour: misperceptions of feedback in a dynamic
decision-making experiment'', Management Science, Vol. 35 No. 3, pp. 321-39.
Towill, D.R. (1993), ``System dynamics: background, methodology and applications'', Computing
and Control Engineering Journal, Vol. 6 No. 6, pp. 261-8.
Vennix, J.A.M. (1996), Group Model Building: Facilitating Team Learning Using System
Dynamics, Wiley, Chichester.
von Bertalanffy, L. (1968), General Systems Theory: Foundations, Development and Application,
George Braziller, New York, NY.
Warren, K. (1999), ``The dynamics of strategy'', Business Strategy Review, Vol. 10 No. 3, pp. 1-16.
Wolstenholme, E.F. (1990), System Enquiry: A System Dynamics Approach, Wiley, Chichester.
Wolstenholme, E. and Stevenson, R. (1994), ``Systems thinking and systems modelling: new
perspectives on business strategy and process design'', Management Services, Vol. 38
No. 9.