Managerial Auditing Journal

Service quality performance measurement in public/private sectors

Stewart Black, Senga Briggs and William Keogh

Article information:
To cite this document:
Stewart Black, Senga Briggs and William Keogh, (2001), "Service quality performance measurement in public/private sectors", Managerial Auditing Journal, Vol. 16 Iss 7 pp. 400-405
Permanent link to this document:
http://dx.doi.org/10.1108/EUM0000000005715
Downloaded by Universitas Sebelas Maret at 05:18, 07 October 2014 (PT)
Service quality performance measurement in
public/private sectors
Stewart Black, Senga Briggs and William Keogh
The Robert Gordon University, Aberdeen, UK
Why has performance
measurement of service quality
become important?
Zeithaml et al. (1990) define service quality as
``the extent of discrepancy between the
customers' expectations or desires and their
perceptions''. They argue that service quality
is critical to the success of all organisations,
more difficult to evaluate than goods quality,
and evaluated by customers not just in terms
of outcomes but on the basis of the whole
package of delivery. The same team
developed an influential technique
(``SERVQUAL'') to measure the five major
dimensions of quality which they identify in
a service (tangibles, reliability,
responsiveness, assurance and empathy).
They suggest that service quality should be
defined by customer criteria only. However,
this remains more an ideal than a description
of organisational practice, despite growing
attention to service quality.
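The gap-based logic of SERVQUAL can be sketched as a small calculation: for each of the five dimensions, quality is the average difference between perception and expectation ratings. This is only an illustrative sketch; the function name and the sample ratings below are invented, and the real instrument uses 22 paired questionnaire items.

```python
# Sketch of SERVQUAL-style gap scoring: quality = perception - expectation,
# averaged per dimension. Dimension names follow Zeithaml et al. (1990);
# the ratings below are invented sample data on a 1-7 scale.
DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations, perceptions):
    """Mean perception-minus-expectation gap per dimension.

    expectations / perceptions: dict mapping dimension -> list of ratings.
    A negative gap means the service fell short of what customers expected.
    """
    scores = {}
    for dim in DIMENSIONS:
        diffs = [p - e for p, e in zip(perceptions[dim], expectations[dim])]
        scores[dim] = sum(diffs) / len(diffs)
    return scores

expectations = {d: [7, 6, 7] for d in DIMENSIONS}
perceptions = {d: [5, 6, 6] for d in DIMENSIONS}
print(gap_scores(expectations, perceptions))  # every dimension gaps at -1.0
```

The point of the sketch is the sign convention: service quality is defined by the discrepancy, so a service can score well in absolute terms and still show a negative gap against high expectations.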
In Britain, the quality of public services
has become important because of the explicit
emphasis this issue has been given by
Government since the early 1990s,
commencing with Conservative Prime
Minister John Major's proposals contained
in his Citizen's Charter White Paper (Prime
Minister, 1991). The extent of political party
consensus on the need to improve quality in
public services, particularly from a ``citizen''
perspective, was demonstrated by the fact
that Labour and the Liberal Democrats
published their own ``citizen charter''
proposals in the same year. In practice, this
approach has been focused on ``users'' not
citizens (Black, 1994).
This broad policy has been continued since
1997 by New Labour. The Charter itself and its
associated statutorily-required reporting by
service providers of performance information
have been continued, and augmented by
important new policies, notably best value.
This approach to service quality is part of a
wider, decade-long Government view that
public services should be ``managed'' rather
than ``administered'', sometimes known as
the ``new public management''.
For businesses, service quality has become
important for a different reason: increasing
national and global competition has forced
them to differentiate themselves from their
competitors through competition at the
margins by use of quality (e.g. speedy service
delivery). The private
sector has been forced to manage customer
satisfaction with the ultimate goal of
increasing customer loyalty and hence
profits. In the drive for increased
competitiveness, service quality is ``the
cornerstone of marketing strategy''
(Asubonteng et al., 1996) and ``the single most
researched area in services marketing to
date'' (Fisk et al., 1994).
Meanwhile, in addition to these ``push''
factors evident in both sectors, there is an
important ``pull'' factor: increasing and
changing consumer expectations (e.g. the
rise of ``consumerism''). Customers and users
expect services to be more readily available
(e.g. a demand for availability of service at
``out-of-hours'' times has been evident in both
sectors). Similarly, innovation is expected
(e.g. in the public sector, neighbourhood
rather than centre-of-town offices; in the
private sector, new competition to drive
down prices, such as e-commerce for goods
and services such as books and holidays, and
ferry operators diversifying into car
importing). Overall, therefore, service
quality and how to measure it are critical
issues for both sectors.
Keywords: Performance measurement, Information, Service, Quality, Public sector, Private sector

Abstract: Provides an overview of UK public and private sector organisations' use of performance information relating to service quality. While they have made some headway in improving the range of performance information they have available, and in their use of such information, significant problems remain. These problems include those of: conceptual mis-development; limitations in recognising the needs of different stakeholders for such information; data shortage difficulties; and both technical and analytical under-development of practice. Assesses the outlook for development of greater understanding of service quality measurement and makes a number of suggestions for dealing with these problems.

The current issue and full text archive of this journal is available at http://www.emerald-library.com/ft

[400] Managerial Auditing Journal, 16/7 [2001] 400-405. © MCB University Press [ISSN 0268-6902]

What is the current state of practice?

There is a healthy interest within individual
organisations in both sectors in issues of
quality improvement (e.g. performance
measurement) and improvement techniques
and models (e.g. SERVQUAL; the business
excellence model developed by the European
Foundation for Quality Management).
However, this interest and the particular
quality improvement initiatives involved are
often taken at face value, both by those
within and outwith the organisation, as
reflecting direct action on issues of service
quality. In fact, many initiatives and
techniques give little or no emphasis to
user/consumer dimensions of service
quality, instead focusing on other
dimensions, notably the priorities of service
producers and others (e.g. Government,
regulatory bodies).
In the public sector, performance
information within major service-providing
bodies (e.g. councils, health bodies, local
``quangos'') has been developed during the
1990s more in response to external initiatives
(e.g. government legislation and major policy
initiatives; regulatory activity by industry
``watchdogs'') than by the individual service-
providing body itself in response to its own
quality-measurement needs. Similarly, while
there has been work by professional bodies
in developing understanding of service
quality and how it can be measured, it has
been comparatively weak. Meanwhile, since
the 1980s, there has been an accelerating
interest in quality improvement models (e.g.
total quality management, including
``continuous improvement''), techniques (e.g.
process mapping), certification (e.g. British
Standards Institution standards) and awards
(e.g. Charter Marks). These relate at least in
part to issues of service quality.
In the private sector, customer satisfaction
is based on a complex series of factors which
come together to satisfy contractual
requirements. If the product or service is for
the general public, there may also be legal
requirements. Generally, it is customer
needs which lead to the standards offered.
Service quality performance information
derives from two streams: information
gathered from the key business factors that
make up strategic elements within the
organisation, and appropriate performance
indicators which help to monitor and
measure quality.
Reporting on service quality has, in the
past, been less important at board level than
at operational level. There was an
assumption that if the firm was successful in
relation to key objectives (e.g. turnover,
market share), then quality must also be
right. In some cases, however, the need to
differentiate and compete at the margins has
seen some private sector organisations focus
on service quality at a strategic level.
The most common practice among
businesses has been to attempt to manage
customer satisfaction through measuring
overall satisfaction and loyalty. More
recently this has been supplemented by
measuring specific aspects of the services. In
this way, a company can identify its
strengths and areas for improvement.
Eccles (1991) argues that the limitations of
the traditional measurement systems have
led to a performance measurement
revolution, including greater use of quality
measures. Many of these have been customer
and employee satisfaction measures. Banks
and Stone's (1996) analysis of a survey of
The Times Top 500 companies' use of such
measures exposed a significant gap between
customer- and employee-based strategies.
In each sector, developing service quality
performance indicators is not a simple task.
Defining appropriate indicators and
developing appropriate measurements
requires a wide variety of data. Although the
contexts for action and techniques differ in
each sector, both the firm and the public
service organisation must identify its
customers and their requirements, and how
to meet those needs by developing
appropriate work processes. In turn they
may have to break down each activity to
identify both raw information and
performance information drawn from it.
Identifying the value-adding steps is critical,
yet difficult. Service quality performance
information also includes information
received from customers, including
comments and complaints (e.g. timeliness,
accuracy and responsiveness).
None of these activities is meaningful
unless the organisation uses them to
learn. In late summer 2000, it emerged that
car manufacturer Mitsubishi in Japan had
simply hidden, rather than taken action on,
vast numbers of complaints over a long
period of time. If developed and used
appropriately, service quality performance
indicators have the potential to improve
organisational performance. Used
inappropriately, they can be destructive, for
example in relation to the performance of
groups or individuals within the
organisation (Deming, 1986).
What are the major problems, why
have they arisen and what are their
consequences?
While problems in tackling service quality
measurement vary by organisation and
sector, we have identified four that are
important because they appear to be
particularly widespread across organisations
in both sectors and are also impervious to
``quick fix'' solutions:
. Problem 1: conceptual mis-development.
. Problem 2: limitations in recognising the needs of different stakeholders for such information.
. Problem 3: data shortage difficulties.
. Problem 4: both technical and analytical under-development.
Problem 1
Concepts of ``service quality'' often remain
producer-driven, even in organisations
which explicitly claim to give priority to
user/customer interests. Perhaps the best
illustration of this, equally evident in the
public and private sectors, is the enthusiasm
since the 1980s for quality assurance
validation (e.g. the numerous ``standards''
developed by the British Standards
Institution). Here, in essence, pursuit of
quality has become pursuit of evidence,
susceptible to external validation, of the
organisation's operations being consistent
with its own pre-specified procedures and
standards. However, an important blind spot
is that those standards may have little or no
reference to the needs or concepts of quality
of the final service user. Our observation
relates not to the tool (validation), but to the
underlying mis-assumption that pursuing
``quality'' means pursuing service quality for
the user.
Problem 2
Related to this first problem is a second: that
different stakeholders have different needs
and claims in relation to service quality. This
problem is partly conceptual: recognising
these differences (e.g. the different
requirements of customers, shareholders,
directors and front-line staff in the private
sector; those of users, ministers, civil
servants, regulatory bodies, managers,
front-line staff, partner organisations and
others in public services) requires some
organisational sophistication. The problem is
also partly practical: outcome-related
information, regardless of the identity of the
stakeholder (e.g. user, citizen, service
provider) is scarce.
Problem 3
Perhaps the greatest practical problem is that
the pool of performance information
available to inform judgements of service
quality is limited, particularly the pool of
routinely-available information (as
contrasted with, say, periodic survey data).
Most organisations are awash with data and
starved of intelligence (Keogh et al., 1996).
Their routine data are heavily biased to
input information (e.g. cost; resources
applied), but contain noticeably fewer output
measures (e.g. ``effectiveness'' measures
such as success in targeting the intended
users/customers) and still fewer outcome
measures (particularly those of user
satisfaction). In public services, in 2000, after
a decade of the new public management, most
indicators still relate to measures of
economy, efficiency and, much less
frequently, to effectiveness or quality.
In public services, ``efficiency'' measures
are common, such as the time taken to
complete an action (e.g. attend a fire; getting
an appointment for surgery; make a benefit
payment), and are much more
widespread than those for effectiveness (e.g.
in combating fires; alleviating poverty;
improving health). Meanwhile, mechanisms
such as surveys of users and ``focus groups''
have sometimes been used to fill this gap.
Both are useful, but also have limitations:
for example, in different ways both may miss
the views of the majority of users or those
who may have particularly critical views,
and in any case are not a substitute for
routine information.
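The input/output/outcome imbalance described above can be made visible with a toy audit of an indicator set: tag each routine indicator by type and tally the mix. The indicator names and tags below are invented examples, not drawn from any real organisation.

```python
# Toy audit of a routine indicator set: tag each indicator as input, output
# or outcome and tally the mix, exposing the bias towards input measures.
from collections import Counter

indicators = [
    ("cost per case", "input"),
    ("staff hours applied", "input"),
    ("budget spent (%)", "input"),
    ("cases processed", "output"),
    ("intended users reached (%)", "output"),
    ("user satisfaction score", "outcome"),
]

mix = Counter(tag for _, tag in indicators)
print(mix)  # inputs dominate; outcome measures are scarcest
```

Even this crude tally is a useful exercise: an organisation that cannot list a single routinely collected outcome indicator has located its data shortage precisely.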
Problem 4
These three problems give rise to a further
one: the organisation ``satisficing'' rather
than challenging these earlier problems.
Examples of this are well-known: measuring
what is readily measurable and treating this
as a proxy for what ``should'' be measured, if
quality (or some other dimension of
performance) is the focus of interest.
Stafford et al. (1998) highlight the
importance of distinguishing customer
satisfaction and service quality where the
former relates to a single event and the latter
to ``consistency in the delivery of
satisfaction''. They found that, of the five
dimensions of quality identified by
Parasuraman et al. (1985), ``reliability'' was
decisively the most important determinant,
yet this depends on measurement over time
when many organisations have only patchy
routine data on service failure.
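The distinction Stafford et al. draw, satisfaction as a single event versus quality as consistency of delivery over time, can be illustrated with a toy calculation over routine service-failure records. The event data below are invented for illustration.

```python
# Toy illustration: per-event satisfaction vs. quality-as-consistency.
# Each record is (period, delivered_on_spec). One good event says little;
# reliability needs the on-spec rate tracked across periods.
from collections import defaultdict

events = [
    ("Q1", True), ("Q1", True), ("Q1", False),
    ("Q2", True), ("Q2", True), ("Q2", True),
    ("Q3", False), ("Q3", True), ("Q3", False),
]

def reliability_by_period(events):
    """Proportion of on-spec deliveries per period."""
    totals, ok = defaultdict(int), defaultdict(int)
    for period, on_spec in events:
        totals[period] += 1
        ok[period] += on_spec  # True counts as 1, False as 0
    return {p: ok[p] / totals[p] for p in totals}

rates = reliability_by_period(events)
print(rates)  # only Q2 delivered consistently on spec
```

An organisation with only patchy failure records cannot populate a table like `rates` at all, which is exactly the measurement gap the paragraph above identifies.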
Further associated problems here include
the organisation's failure to define
performance and/or quality (as contrasted
with measuring it); failure to distinguish
between measures to control and measures to
improve quality; managers or other staff
misunderstanding or misusing quality
measures; and a fear of exposing actual
performance (Zairi, 1992).
It should also be acknowledged that some
commentators have found limitations in the
tools for measuring quality. Buttle (1996)
acknowledges the popularity of SERVQUAL,
but identifies shortcomings including: doubts
about whether customers assess quality in
terms of expectations and perceptions; the
value of the ``disconfirmation'' aspect (and
absence of an ``attitudinal'' dimension); and
the extent to which the five key SERVQUAL
dimensions are universal. Donnelly et al.
(1996) undertook research to try to customise
the SERVQUAL scale and approach to meet
the needs of local authority services and
found that the validity of the five dimensions
was questionable as it moved away from
commercial sector usage. They advised
caution when using customised versions to
measure service quality in local government.
Asubonteng et al. (1996) also found
limitations.
Which contrasts and comparisons
exist between the public and private
sectors?
Contrasts
The customer in the private sector can often
``shop'' elsewhere to enjoy better service
quality. In the public sector, this is not
always possible: for example, because there
is only one provider of a particular service in
a locality, or because in some cases the user
may be an unwilling ``consumer'' (e.g. a
prisoner; a pupil; a child being looked after/
in care).
The concerns of the private sector differ.
Accountability through (external and
internal) reporting of performance in relation
to service quality is relatively unimportant,
while commercial considerations (e.g.
profitability; market share; survival) are a
greater spur to performance analysis. The
nature, value and use of service quality
performance information all appear to differ
from the position in public services.
However, in the case of public services,
there are limitations to this approach, which
must be acknowledged: in particular, the
complex goals of many public services
(Gaster, 1995).
Comparisons
Klein and Carter (1988) find it extraordinary
that ``most approaches apparently assume
that the private sector has nothing to offer
the public sector''. Public service bodies
sometimes assume that techniques developed
in, or associated with, the private sector have
little value or relevance. Conversely, it is
perhaps even more widely assumed that the
private sector has little to learn from public
services. However, the public sector has
tangible expertise and experience to offer
which is either better developed or has no
private sector counterpart: e.g. a necessarily
more sophisticated approach to
``stakeholder'' analysis (as stakeholders are
generally more numerous and relationships
are more complex); new expertise in strategic
and operations reviews (based on
implementation of best value in local
government since 1997); and an established
understanding of the rigours of audit and
statutory external reporting of Citizen's
Charter performance information.
We earlier concluded that the two main
``drivers'' for change in the attention given to
the issue of service quality are ``competitive
advantage'' in the private sector and ``new
public management'' in the public
sector. Nevertheless, there are striking
similarities in the responses of organisations
in each sector, in particular in their use of
models, tools and techniques which have
been found to have value in each sector: for
example, the business excellence model, the
``balanced scorecard'' (Kaplan and Norton,
1992; 1996) and benchmarking. However,
these tools relate only in part to service
quality.
What solutions are available?
Although numerous solutions to these
problems are available, we have focussed on
the four which seem to offer greatest
opportunities for service quality
improvement. There are two main areas of
opportunity: conceptual (``who needs service
quality information, and why?'') and
technical (``which measures meet these needs
- in particular, which ``outcome''
measures?'').
It is important in examining these
solutions not to see tackling service quality
as simply a matter of adding to existing
information. There are important resource
issues for any organisation in collecting any
information, regardless of its value. Tackling
the conceptual and technical questions
should also be a means of targeting and
re-prioritising which information is most
relevant.
The ``balanced scorecard'' approach
provides a valuable context for this review of
service quality information needs. This
technique offers the organisation a
structured framework for establishing a
rounded selection of measures to meet
different needs, including those of service
quality from the user/customer perspective.
In turn, this arms the organisation with
user-focused data for its quality improvement
initiatives. To pick an illustration from
public services, a major measure of service
``quality'' in the NHS has been the waiting list
length, while recently more attention has
been focussed on waiting times. However,
both are important measures of different
aspects of the service, and neither is in
principle more important than the other.
There should be ``room'' for each on the
balanced scorecard.
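A minimal sketch of how both NHS measures could sit side by side on a balanced scorecard follows. The perspective names echo Kaplan and Norton, but the structure, measures, values and targets are all invented for illustration, not a real scorecard.

```python
# Minimal balanced-scorecard structure: each perspective holds named measures
# with a current value and a target. Both waiting-related measures get
# "room" under the customer perspective; all figures are invented.
scorecard = {
    "customer": {
        "waiting list length": {"value": 1200, "target": 1000},
        "median waiting time (weeks)": {"value": 14, "target": 12},
    },
    "internal process": {
        "complaints resolved within 10 days (%)": {"value": 82, "target": 90},
    },
}

def off_target(card):
    """List (perspective, measure) pairs currently missing their target."""
    misses = []
    for perspective, measures in card.items():
        for name, m in measures.items():
            # In this sketch, percentage measures are better high;
            # the waiting measures are better low.
            better_high = name.endswith("(%)")
            hit = m["value"] >= m["target"] if better_high else m["value"] <= m["target"]
            if not hit:
                misses.append((perspective, name))
    return misses

print(off_target(scorecard))  # all three example measures miss their targets
```

The design point is that neither waiting measure displaces the other: the scorecard holds both, each with its own target, rather than forcing a single ``quality'' number.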
What are the prospects for
development of performance
information on service quality?
Public sector
In public services, the test is already one of
responsiveness to stakeholders' needs (not
only users' needs) and not competitiveness.
Performance measurement can provide a
substitute for choice in public service if it
concentrates on aspects of performance that
are important to the public and customer.
An important development in UK public
services is the Government's over-arching
policy of Best Value. Initially it applied to
local authority services only, but it has
begun to be applied to other public services.
The policy has an unprecedented emphasis
not only on cost but also on quality, and on
the needs of service users. The public body
must review all services on a cyclical basis to
identify the scope for improvements, to
develop performance measures and to report
progress publicly.
Private sector
The main opportunity appears to be taking a
``holistic approach'', based on total quality
management and, in particular, an emphasis
on continuous improvement, so that the focus
is on the total service experience. This implies
a more complex approach, but the key is to
identify the critical performance indicators
(PIs), as service quality will be the cutting
edge of competition.
A major opportunity for organisations is to
distinguish the different dimensions of
service quality. Examining SERVQUAL,
Johnston (1995) identified not five but 18
dimensions. This study suggests that the
service organisation should identify and then
prioritise the different dimensions of quality,
seen from the user/customer perspective.
Moving from the policy sphere to practice,
one particular technique offers considerable
scope for service quality measurement and
improvement: benchmarking (the exchange
of comparable performance information
between organisations carrying out similar
functions, to identify the scope for
improvement based on actual achievements
of other similar organisations). The
significance of benchmarking is its relative
under-development, yet considerable
potential for illuminating understanding.
Early activity has highlighted information
shortcomings and the need to re-consider the
basis for inter-organisation comparisons, so
caution is needed. Nevertheless,
benchmarking may lead to development of
not only improved data, but also the use of
data to effect quality improvements based on
inter-organisational learning. This is
particularly true of public services for at
least two reasons: the fact that many are
provided on the same basis (e.g. as statutory
duty) by a range of broadly similar local
service providers, and the fact that
benchmarking activity is only in its infancy.
In addition to distinctive problems at
``sector'' level, individual organisations and
``industrial'' sub-sectors may face further
problems which are uniquely their own.
Nevertheless, there is also scope for greater
inter- and intra-sector benchmarking. It is
often assumed that comparisons cannot be
made between public and private
organisations because their imperatives (e.g.
basis for service delivery: statutory or
commercial) differ sharply. However,
underlying processes in service delivery may
be broadly similar and measures of service
quality may be sufficiently ``generic'' (or may
be adapted to this end) to permit meaningful
comparison.
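The benchmarking logic described above can be sketched as a comparison of one shared indicator across broadly similar providers: the best peer result sets the benchmark, and each provider's gap shows its scope for improvement. Provider names and figures below are invented; a lower figure is assumed better (e.g. days to process a claim).

```python
# Sketch of indicator benchmarking across similar service providers,
# public and private. Lower is better for this invented indicator.
providers = {"Council A": 9.0, "Council B": 14.5, "Firm C": 7.5, "Firm D": 11.0}

def improvement_scope(figures):
    """Gap between each provider and the best peer result."""
    best = min(figures.values())
    return {name: round(value - best, 1) for name, value in figures.items()}

print(improvement_scope(providers))
# Firm C sets the benchmark; Council B shows the largest scope (7.0 days).
```

The sketch also shows why comparability matters: the calculation is only meaningful if every provider measures the indicator on the same basis, which is exactly the data problem early benchmarking exercises have exposed.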
Eklof and Westlund (1998) argue that there
is a lack of ``consistent and regularly
disseminated information on quality
performance'' from the customer perspective
in most sectors. They advocate a customer
satisfaction index (CSI) for better policy and
decision making at all levels of society. They
see a CSI as a system to ``model, measure,
estimate and analyse the interactions
between customer preferences, perceived
quality and reactions on the one hand and
the performance of the company on the
other''.
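A CSI of the kind Eklof and Westlund advocate can be hedged-sketched as a weighted average of satisfaction scores on quality attributes, rescaled to 0-100. The attributes, weights and scores below are invented; their actual model estimates such weights statistically from customer data rather than fixing them by hand.

```python
# Hedged sketch of a customer satisfaction index (CSI): a weighted mean of
# attribute scores on a 1..scale_max scale, rescaled to 0-100.
def csi(scores, weights, scale_max=10):
    """Weighted mean of attribute scores, expressed on a 0-100 index."""
    total_weight = sum(weights.values())
    weighted = sum(scores[a] * w for a, w in weights.items())
    return 100 * weighted / (total_weight * scale_max)

# Invented example attributes, weights and scores:
scores = {"perceived quality": 8, "expectations met": 7, "value for money": 6}
weights = {"perceived quality": 0.5, "expectations met": 0.3, "value for money": 0.2}
print(round(csi(scores, weights), 1))  # 73.0
```

Because the index is a single regularly computed number, it supports exactly the ``consistent and regularly disseminated information'' the authors find lacking, while the per-attribute scores preserve the diagnostic detail.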
Conclusions
The differences between the public and
private sector are striking: for example, in
stakeholder, accounting, legal, and external
reporting requirements. Regulatory regimes
differ significantly. There are also clear
differences of organisational focus. Private
firms interact with other firms in the supply
chain as well as directly with the end user or
consumer. They would be dealing with their
principal marketplace, if in a niche market,
and with a number of major markets if they
have generic products or services to sell.
Public sector bodies serve the end user and,
since most are local or regional bodies,
operate in a local market. The private sector
organisation is concerned with identification
of markets and its position within a
competitive environment. Although
competitive tendering is used in the public
sector, competition is more limited. Some
similarities have emerged in recent years:
for example, the growth of ``approved
supplier lists'', long in use in the private
sector, in public services. Overall, however,
it is not surprising that attitudes and
practices between the sectors can differ.
A greater understanding of the importance
of performance indicators and how to use
them to achieve strategic objectives is
essential at all levels of an organisation. The
outlook for service quality performance
measurement in the public and private
sectors appears to be at the same time
positive and negative. It is positive in two
respects. First, service-providing
organisations are arguably more interested
than ever in, and motivated to give priority
to, service quality improvement. Second,
there are more opportunities for developing
thinking and practice than ever before.
However, the outlook is negative in that much
more conceptual and practical work is
needed to permit organisations to address the
measurement of service quality: arguably
the single most important dimension of
performance for the service organisation.
Will the first two factors outweigh the third?
References
Asubonteng, P., McCleary, K.J. and Swan, J.E.
(1996), ``SERVQUAL revisited: a critical
review of service quality'', Journal of Services
Marketing, Vol. 10 No. 6, pp. 62-81.
Banks, J.M. and Stone, C.L. (1996), ``Business
improvement programmes: measuring
process'', The Times top 500 Total Quality
Management in Action, Chapman and Hall,
London.
Black, S. (1994), ``What does the Citizen's Charter
mean?'', in Connor, A. and Black, S. (Eds),
Performance Review and Quality in Social
Care, Jessica Kingsley, London.
Buttle, F. (1996), ``SERVQUAL: review, critique,
research agenda'', European Journal of
Marketing, Vol. 30 No. 1, pp. 8-32.
Deming, W.E. (1986), Out of the Crisis, MIT,
Cambridge, MA.
Donnelly, M., Shiu, E., Dalrymple, J.F. and
Wisniewski, M. (1996), ``Adapting the
SERVQUAL scale and approach to meet the
needs of local authority services'', Total
Quality Management in Action, Chapman and
Hall, London.
Eccles, R.G. (1991), ``The performance
measurement manifesto'', Harvard Business
Review, Jan-Feb, pp. 131-7.
Eklof, J.A. and Westlund, A. (1998), ``Customer
satisfaction index and its role in quality
management'', Total Quality Management,
Vol. 9 Nos. 4/5, pp. 80-5.
Fisk, R.P., Brown, S.W. and Bitner, M.J. (1994),
``Tracking the evolution of the services
marketing literature'', Journal of Retailing,
Vol. 69, Winter, pp. 420-50.
Gaster, L. (1995), Quality in Public Services, Open
University Press, Buckingham.
Johnston, R. (1995), ``The determinants of service
quality: satisfaction and dissatisfaction'',
International Journal of Service Industry Management,
Vol. 6 No. 5, pp. 53-71.
Kaplan, R.S. and Norton, D.P. (1992), ``The
balanced scorecard: measures that drive
performance'', Harvard Business Review,
January/February, pp. 71-9.
Kaplan, R.S. and Norton, D.P. (1996), The
Balanced Scorecard, Harvard Business
School, Boston, MA.
Keogh, W., Brown, P. and McGoldrick, S. (1996),
``A pilot study of quality costs at Sun
Microsystems'', Total Quality Management,
Vol. 7 No. 1, pp. 29-38.
Klein, R. and Carter, N. (1988), ``Performance
measurement: a review of concepts and
issues'', in Performance Measurement: Getting
the Concepts Right, Public Finance Foundation
Discussion Report, Communications
(Publishing) Limited, UK.
Parasuraman, A., Zeithaml, V. and Berry, L.
(1985), ``A conceptual model of service
quality and its implications for future
research'', Journal of Marketing, Vol. 49 Fall,
pp. 41-50.
Prime Minister (1991), The Citizen's Charter:
Raising the Standard, Cm 1599, HMSO,
London.
Stafford, M.R., Stafford, T.F. and Wells, B.P.
(1998), ``Determinants of service quality as
satisfaction in the auto casualty claims
process'', Journal of Services Marketing,
Vol. 12 No. 6, pp. 426-40.
Zairi, M. (1992), TQM-Based Performance
Measurement, Technical Communication
(Publishing) Ltd, Letchworth.
Zeithaml, V.A., Parasuraman, A. and Berry, L.L.
(1990), Delivering Service Quality, The Free
Press, New York, NY.