
Journal of Retailing and Consumer Services 17 (2010) 464–477
doi:10.1016/j.jretconser.2010.06.003


Developing e-service quality scales: A literature review


Riadh Ladhari *
Faculty of Business Administration, Laval University, Quebec, Canada
* Tel.: +1 418 656 2131, ext. 7940; fax: +1 418 656 2624. E-mail address: riadh.ladhari@fsa.ulaval.ca

ARTICLE INFO

Keywords:
E-service quality
Scale development
Dimensionality
Psychometric properties

ABSTRACT
This study reviews the literature on e-service quality (e-SQ), with an emphasis on the methodological issues involved in developing measurement scales and on issues related to the dimensionality of the e-SQ construct. We selected numerous studies on e-SQ from well-known databases and subjected them to a thorough content analysis. The review shows that the dimensions of e-service quality tend to be contingent on the service industry. Although common dimensions are often used in evaluating e-SQ regardless of the type of service on the internet (reliability/fulfilment, responsiveness, web design, ease of use/usability, privacy/security, and information quality/benefit), other dimensions are specific to particular e-service contexts. The study also identifies several conceptual and methodological limitations associated with the development of e-SQ measurement, such as the lack of a rigorous validation process, problematic sample sizes and composition, the focus on functional aspects, and the use of a data-driven approach. This is the first study to undertake an extensive literature review of research on the development of e-SQ scales. The findings should be valuable to academics and practitioners alike.
© 2010 Elsevier Ltd. All rights reserved.

1. Introduction
Online service quality has a significant influence on many important aspects of electronic commerce (e-commerce). These include consumer trust in an online retailer (Gefen, 2002; Hsu, 2008; Hwang and Kim, 2007); site equity (Yoo and Donthu, 2001); consumer attitudes towards the site (Hausman and Siekpe, 2009; Yoo and Donthu, 2001); attitude toward e-shopping (Ha and Stoel, 2009); perceived value of the products/services (Hsu, 2008); willingness to pay more (Fassnacht and Koese, 2007); user online satisfaction (Cristobal et al., 2007; Fassnacht and Koese, 2007; Ho and Lee, 2007; Lee and Lin, 2005); site loyalty intentions (Ho and Lee, 2007; Yoo and Donthu, 2001); site recommendation intentions (Long and McMellon, 2004); and cross-buying (Fassnacht and Koese, 2007). In view of the apparent importance of electronic service quality (e-SQ), Hsu (2008) contends that the achievement of superior online service quality should be the crucial differentiating strategy for all e-retailers; indeed, e-SQ has been increasingly recognised as the most important determinant of long-term performance and success for e-retailers (Fassnacht and Koese, 2006; Holloway and Beatty, 2003; Santos, 2003; Wolfinbarger and Gilly, 2003; Zeithaml et al., 2000, 2002). An understanding of how consumers evaluate e-SQ is thus of the utmost importance for scholars and practitioners alike (Fassnacht and Koese, 2006; van Riel et al., 2001).

However, despite the obvious importance of the issue, the conceptualisation and measurement of e-SQ are still at an early phase of development (Cristobal et al., 2007; Fassnacht and Koese, 2006; Santos, 2003; van Riel et al., 2001), and studies in this field are still somewhat limited and disparate (Gounaris and Dimitriadis, 2003; Parasuraman et al., 2005). As Zeithaml et al. (2002, p. 371) note: "Rigorous attention to the concept of service quality delivery through Web sites is needed. This would involve a comprehensive examination of the antecedents, composition, and consequences of service quality."

Against this background, the present study undertakes a comprehensive review of the current state of knowledge regarding e-SQ. In doing so, the study reviews the literature on e-SQ measurement models with a view to (i) analysing the key methodological issues involved in the development of such scales, and (ii) discussing the dimensional structure of the e-SQ construct. As a result of these considerations, the paper provides valuable insights and implications for the development and application of e-SQ scales.

2. Definition and nature of e-SQ


Parasuraman et al. (2005, p. 217) define e-SQ as "... the extent to which a web site facilitates efficient and effective shopping, purchasing and delivery". This definition makes it clear that the concept of e-SQ extends from the pre-purchase phase (ease of use, product information, ordering information, and personal information protection) to the post-purchase phase (delivery, customer support, fulfilment, and return policy). The online environment differs from the traditional retail context in several ways. These differences can be summarised as follows:

• Convenience and efficiency: Consumers using the online environment have the convenience of saving time and effort in comparing the prices (and some technical features) of products more efficiently (Santos, 2003).
• Safety and confidentiality: Participation in the online environment involves users in distinctive issues regarding privacy, safety, and confidentiality.
• Absence of face-to-face contact: Customers in the online environment interact with a technical interface (Fassnacht and Koese, 2006). The absence of person-to-person interaction means that the traditional concepts and ways of measuring service quality, which emphasise the personal interaction of the conventional service encounter, are inadequate when applied to e-SQ (van Riel et al., 2001).
• Co-production of service quality: Customers in the online environment play a more prominent role in co-producing the delivered service than is the case in the traditional retail context (Fassnacht and Koese, 2006).

3. Literature review
3.1. Issues of adequacy of dimensions of e-SQ
Several measures of e-SQ are described by Zeithaml et al. (2002) as being ad hoc. These measures, which have attempted to assess e-SQ mainly in terms of the design and quality of websites, include factors that induce satisfaction with a website and/or repeat visits (Alpar, 2001; Muylle et al., 1999; Rice, 1997; Szymanski and Hise, 2000). In this regard, Alpar (2001) identifies four attributes of satisfaction with a website: (i) ease of use (response speed, navigation support, use of new web technologies); (ii) information content (quantity, quality, accuracy, customised information); (iii) entertainment (amusement, excitement); and (iv) interactivity (e-mail, live chats, notice boards). Liu and Arnett (2000) suggest that the determinants of website success include the following: (i) information and service quality; (ii) system use; (iii) playfulness; and (iv) system design quality. Szymanski and Hise (2000) report four dominant factors in consumer assessments of e-satisfaction: (i) convenience (shopping times, ease of browsing); (ii) merchandising (product offerings and information available online); (iii) site design (uncluttered screens, easy search paths, fast presentations); and (iv) financial security.
Apart from the ad hoc use of website parameters (as described above), other authors attempt to develop more direct and comprehensive measures of the e-SQ construct. Some researchers (such as Gefen, 2002) modify or replicate the well-known SERVQUAL scale (Parasuraman et al., 1988, 1991), whereas others develop their own scales to measure the construct (e.g., Ho and Lee, 2007; Loiacono et al., 2002; Parasuraman et al., 2005). According to Parasuraman et al. (1991, p. 445), SERVQUAL is a generic instrument with "good reliability and validity and broad applicability". Parasuraman et al. (1988) find that consumers evaluate perceived service quality in terms of five dimensions: tangibility (the appearance of physical facilities, equipment, and personnel); responsiveness (the willingness to help customers and provide prompt service); reliability (the ability to perform the promised service accurately and dependably); empathy (the level of caring and individualised attention the firm provides to its customers); and assurance (the knowledge and courtesy of employees and their ability to inspire trust and confidence).


These dimensions are measured by a total of 22 items, each of which is rated according to the performance of the service actually provided (P) and the expectations for the service (E). The gap score (G) is then calculated as the difference between performance and expectations (P - E). The greater the gap score, the higher the perceived service quality.
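As a compact restatement of this scoring rule (an unweighted form, assuming no dimension weights are applied), the item-level gap and an overall SERVQUAL score can be written as

\[
G_i = P_i - E_i, \qquad SQ = \frac{1}{22}\sum_{i=1}^{22}\left(P_i - E_i\right),
\]

where \(P_i\) and \(E_i\) are the performance and expectation ratings for item \(i\); dimension-level scores are obtained analogously by averaging the gaps of the items belonging to each dimension.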
It is true that SERVQUAL has been successfully applied in a wide variety of traditional service settings, including (among others) insurance services, library services, information systems, healthcare settings, bank services, hotel services, and dental clinic services. However, several difficulties exist with regard to the conceptualisation and operationalisation of the SERVQUAL scale (e.g., Buttle, 1996; Ladhari, 2009). In particular, questions have been raised about the applicability of the five generic SERVQUAL dimensions in several service industries. As a result, adaptations of SERVQUAL have been proposed for various industry-specific contexts, and the findings suggest that the attributes of service quality are context-bound (e.g., Cai and Jun, 2003; Ladhari, 2009). Similar doubts have been raised regarding the applicability of the five SERVQUAL dimensions in the e-service context. In this regard, Gefen (2002) applies an adapted SERVQUAL instrument to the online service context and reports that the five dimensions collapsed into three: (i) tangibles; (ii) a combined dimension of responsiveness, reliability, and assurance; and (iii) empathy. The tangibles dimension is the most critical for inducing customer loyalty, whereas the combined dimension (responsiveness, reliability, and assurance) is the most important for promoting customer trust.
Parasuraman et al. (2005) acknowledge these difficulties when they suggest that "... studying e-SQ requires scale development that extends beyond merely adapting offline scales". In a similar vein, Parasuraman and Grewal (2000, p. 171) state that studies are needed on whether "the definitions and relative importance of the five service quality dimensions change when customers interact with technology rather than with service personnel". SERVQUAL was developed in the context of services provided through personal interaction between customers and service providers; as a result, its dimensions might not transpose directly to the e-SQ context (Fassnacht and Koese, 2006; Hsu, 2008). Hsu (2008) notes that the SERVQUAL model does not consider such dimensions as security and ease of navigation, and Gefen (2002) contends that the dimension of empathy is less important in the e-SQ context because the online environment lacks personal human interaction. In addition, van Riel et al. (2001) argue that the tangibility dimension of SERVQUAL could be replaced by a dimension of web design or user interface.
In view of these difficulties, it is apparent that the traditional SERVQUAL model does not constitute a comprehensive instrument for assessing e-SQ. Several studies attempt to develop specific measurement scales for online service quality, but the task is neither simple nor straightforward. As Aladwani and Palvia (2002) observe: "Construct measurement ... in the context of web technologies and applications ... is a challenging task."

Despite the difficulties, several studies endeavour to identify and measure the dimensions of the e-SQ construct. These studies are the subject of the literature review that forms the substance of the present study. The studies under review are summarised in Table 1.

3.2. Methodological issues in developing e-SQ scales


The studies in Table 1 come from well-known databases, such as ScienceDirect, ABI/INFORM, and EBSCOhost. Only studies focusing on developing an instrument for measuring e-service quality are included; these are subjected to a comprehensive, in-depth content analysis of the key methodological aspects of the development of the various e-SQ scales and their proposed dimensions.

Table 1
Selected studies on e-service quality scale development. Each entry lists: domain of measure; sample; type of web site; original item battery; data-analysis procedure for assessing factor structure; final item battery; final number of dimensions (number of items per dimension in parentheses); and internal reliability (coefficient alpha or composite construct reliability).

O'Neill et al. (2001). Domain: online library service quality. Sample: 269 students, users of an online library service. Web site type: online library service. Original battery: 18 items, 5-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 18 items. Dimensions: 4: contact, responsiveness, reliability, and tangibles. Reliability: 0.68 to 0.86.

Yoo and Donthu (2001). Domain: online retailer web site quality. Sample: 69 students in the first stage (207 site evaluations) and 47 individuals in the second stage (187 site evaluations). Web site type: wide variety of site categories (books, music and videos, department stores, computers, apparel and accessories, travel and auto). Original battery: 54 items, 5-point scale, offline administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 9 items. Dimensions: 4: ease of use (2), aesthetic design (3), processing speed (2), and security (2). Reliability: 0.69 to 0.83 (study 2).

Aladwani and Palvia (2002). Domain: web service quality. Sample: 101 students in the first study and 127 students in the second study. Web site type: sites for a bank, a bookshop, a car manufacturer, and an electronics retailer. Original battery: 55 items, 7-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 25 items. Dimensions: 4: technical adequacy (9), specific content (6), content quality (5), and web appearance (5). Reliability: 0.88 to 0.94 (study 2).

Barnes and Vidgen (2002). Domain: web site quality. Sample: 376 students and staff of a university. Web site type: internet bookshops. Original battery: 22 items, 7-point scale, online administration. Analysis: exploratory factor analysis. Final battery: 22 items. Dimensions: 5: usability (4), design (4), information (7), trust (4), and empathy (3). Reliability: 0.70 to 0.90.

Francis and White (2002). Domain: internet retailing quality. Sample: 302 Australian internet shoppers. Web site type: NA. Original battery: 35 items, 7-point scale, online administration. Analysis: exploratory factor analysis. Final battery: 23 items. Dimensions: 6: web store functionality (5), product attribute description (2), ownership conditions (5), delivered products (2), customer service (5), and security (4). Reliability: 0.73 to 0.87.

Janda et al. (2002). Domain: internet retail service quality. Sample: 446 respondents, internet users with at least one internet purchase within the last six months. Web site type: NA. Original battery: 30 items, 7-point scale, offline administration. Analysis: confirmatory factor analysis. Final battery: 22 items. Dimensions: 5: performance (6), access (4), security (4), sensation (4), and information (4). Reliability: 0.61 to 0.83.

Li et al. (2002). Domain: web-based service quality. Sample: 202 respondents, internet users including college students and professionals. Web site type: web sites of Fortune 1,000 companies (webmasters). Original battery: 28 items, 5-point scale, online administration. Analysis: exploratory factor analysis. Final battery: 25 items. Dimensions: 6: responsiveness (6), competence (7), quality of information (4), empathy (4), web assistance (2), and call-back systems (2). Reliability: 0.68 to 0.87.

Loiacono et al. (2002). Domain: website quality. Sample: 511 undergraduate students in round 1, 336 in round 2, and 307 in round 3; respondents were asked to imagine that they were searching for a book. Web site type: 12 web sites selected from a preliminary exploratory study. Original battery: 88 items. Analysis: exploratory and confirmatory factor analysis. Final battery: 36 items. Dimensions: 12: informational fit-to-task (3), interactivity (3), trust (3), response time (3), ease of understanding (3), intuitive operations (3), visual appeal (3), innovativeness (3), flow/emotional appeal (3), consistent image (3), on-line completeness (3), and better than alternative channels (3). Reliability: 0.72 to 0.93 (round 3).

Ranganathan and Ganapathy (2002). Domain: important characteristics of B2C web sites. Sample: 214 individuals who had completed at least one online purchase in the last six months. Web site type: B2C sites. Original battery: NI, 7-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 15 items. Dimensions: 4: information content (4), design (3), security (4), and privacy (4). Reliability: 0.87 to 0.89.

Yang and Jun (2002). Domain: e-service quality. Sample: 271 subscribers to a regional internet service provider. Original battery: 41 items (IP) and 43 items (INP), 5-point scale, online administration. Analysis: exploratory factor analysis. Final battery: 19 items (IP), 25 items (INP). Dimensions: 6 (IP): reliability (4), access (4), ease of use (4), personalization (3), security (2), and credibility (2); 7 (INP): security (5), responsiveness (5), ease of use (4), availability (3), reliability (3), personalization (2), and access (3). Reliability: 0.59 to 0.89 (IP); 0.68 to 0.89 (INP).

Cai and Jun (2003). Domain: online service quality. Sample: 171 respondents: MBA and undergraduate students, members of a local chapter of the Institute for Supply Management, and members of a local chapter of the American Society for Quality Control. Web site type: NA. Original battery: 32 items, 5-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 19 items. Dimensions: 4: web site design/content (6), trustworthiness (4), prompt/reliable service (4), and communication (5). Reliability: 0.78 to 0.89.

Gounaris and Dimitriadis (2003). Domain: web portal quality. Sample: 603 Greek internet users of three internet service providers. Web site type: portal sites. Original battery: 14 items, 7-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 13 items. Dimensions: 3: customer care and risk reduction benefit (4), information benefit (5), and interaction facilitation benefit (4). Reliability: 0.76 to 0.81.

Wolfinbarger and Gilly (2003). Domain: eTail quality. Sample: 1013 internet users (members of the Harris Poll Online panel). Web site type: NA. Original battery: 40 items, 7-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 14 items. Dimensions: 4: website design (5), fulfilment/reliability (3), security/privacy (3), and customer service (3). Reliability: 0.79 to 0.88.

Jun et al. (2004). Domain: online service quality. Sample: 137 online customers (58 students and 79 professionals). Web site type: NA. Original battery: 40 items, 5-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 21 items. Dimensions: 6: reliable/prompt responses (6), attentiveness (4), ease of use (4), access (3), security (2), and credibility (2). Reliability: 0.59 to 0.92.

Kim and Stoel (2004). Domain: apparel website quality. Sample: 273 US female consumers who had purchased apparel online in the past three years. Web site type: apparel retailers. Original battery: 36 items, 7-point scale, offline administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 25 items. Dimensions: 6: web appearance (6), entertainment (6), informational fit-to-task (4), transaction capability (4), response time (3), and trust (2). Reliability: 0.83 to 0.89.

Long and McMellon (2004). Domain: e-retail service quality. Sample: 447 consumers about to purchase an item from a retail internet site. Original battery: 53 items, 7-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 19 items. Dimensions: 5: tangibility (7), assurance (3), reliability (3), purchasing process (3), and responsiveness (3). Reliability: 0.51 to 0.83.

Yang et al. (2004). Domain: online service quality. Sample: 235 subjects who had conducted commercial transactions online. Web site type: online bank services. Original battery: 27 items, 5-point scale, online administration. Analysis: confirmatory factor analysis. Final battery: 20 items. Dimensions: 6: reliability (3), responsiveness (3), competence (3), ease of use (3), security (4), and product portfolio (4). Reliability: 0.77 to 0.91.

Lee and Lin (2005). Domain: online service quality. Sample: 297 undergraduates. Web site type: online bookstores. Original battery: 15 items, 7-point scale, offline administration. Analysis: confirmatory factor analysis. Final battery: 15 items. Dimensions: 5: web site design (3), reliability (4), responsiveness (3), trust (2), and personalization (3). Reliability: 0.74 to 0.85.

Parasuraman et al. (2005). Domain: electronic service quality. Sample: 549 subjects for the development stage and 858 customers for the validation stage. Web site type: a range of sites for the development stage (apparel, electronics, CDs, books, flowers, groceries, etc.) and two online stores for the validation stage (amazon.com and walmart.com). Original battery: 113 items, 5-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 22 items. Dimensions: 4: efficiency (8), system availability (4), fulfilment (7), and privacy (3). Reliability: 0.83 to 0.94 (validation stage).

Yang et al. (2005). Domain: web portal quality. Sample: 1992 portal subscribers. Web site type: web portal. Original battery: 37 items, 5-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 19 items. Dimensions: 5: usability (6), usefulness of content (4), adequacy of information (5), accessibility (2), and interaction (2). Reliability: 0.66 to 0.89.

Bauer et al. (2006). Domain: service quality in online shopping. Sample: 384 members of an online panel who completed product purchases online. Web site type: NA. Original battery: 53 items, 5-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 25 items. Dimensions: 5: functionality/design (7), enjoyment (4), process (4), reliability (6), and responsiveness (4). Reliability: 0.83 to 0.89.

Collier and Bienstock (2006). Domain: e-retail service quality. Sample: 266 university students (pre-test stage) and 334 college students (validation stage) who had completed an online transaction with an e-retailer. Web site type: NA. Original battery: 99 items, 5-point scale, offline administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 54 items. Dimensions: process dimensions: functionality (5), information accuracy (6), design (5), privacy (4), and ease of use (5); outcome dimensions: order accuracy (3), order condition (3), and timeliness (3); recovery dimensions: interactive fairness (10), procedural fairness (6), and outcome fairness (4). Reliability: 0.71 to 0.93.

Fassnacht and Koese (2006). Domain: quality of electronic services. Sample: 349 customers of a homepage service, 345 customers of a sports coverage service, and 305 customers of an online shop. Web site type: a service for the creation and maintenance of personal home pages, a sports coverage service, and an online store for electronic equipment. Original battery: 36 items, 5-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 24 items. Dimensions: 3 second-order factors (environment quality, delivery quality, and outcome quality) and 9 first-order factors: graphic quality (3), clarity of layout (3), attractiveness of selection (2), information quality (3), ease of use (4), technical quality (3), reliability (2), functional benefit (2), and emotional benefit (2). Reliability: 0.91 to 0.93 for second-order factors; 0.83 to 0.91 for first-order factors.

Ibrahim et al. (2006). Domain: e-banking service quality. Sample: 135 UK banking customers. Web site type: e-bank services. Original battery: 26 items, 5-point scale, offline administration. Analysis: exploratory factor analysis. Final battery: 25 items. Dimensions: 6: convenience/accuracy (8), accessibility/reliability (4), good queue management (3), personalization (4), friendly/responsive customer service (4), and targeted customer service (2). Reliability: 0.33 to 0.84.

Cristobal et al. (2007). Domain: e-service quality. Sample: 461 internet users who had visited, bought, or used an internet service at least once during the previous three months. Web site type: NA. Original battery: 25 items, 7-point scale, offline administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 17 items. Dimensions: 4: customer service (5), web design (5), assurance (5), and order management (2). Reliability: 0.70 to 0.73.

Ho and Lee (2007). Domain: e-travel service quality. Sample: 289 online purchasers for the development stage and 382 online purchasers for the validation stage. Web site type: e-travel services. Original battery: 30 items, 7-point scale, online administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 18 items. Dimensions: 5: information quality (3), security (3), website functionality (6), customer relationships (3), and responsiveness (3). Reliability: 0.84 to 0.90 (validation study).

Sohn and Tadisina (2008). Domain: e-service quality. Sample: 204 customers experienced with internet-based financial services. Web site type: internet-based financial services. Original battery: 30 items, online and offline administration. Analysis: exploratory and confirmatory factor analysis. Final battery: 25 items. Dimensions: 6: trust (5), customised communication (4), ease of use (3), website content and functionality (6), reliability (5), and speed of delivery (2). Reliability: 0.67 to 0.88.

Notes: NA: not addressed/wide variety of site categories. NI: no information about the number of original items. IP: internet purchasers; INP: internet non-purchasers.

This review of the studies listed in Table 1 reveals: (i) several methodological issues related to the development of e-SQ measures; and (ii) several pertinent observations regarding the dimensionality of the e-SQ construct (including the identification of several common dimensions across the various e-SQ scales and certain limitations associated with the development of e-SQ scales). A detailed discussion of these two subjects is presented below. The methodological issues identified in this review can be summarised as follows: research methods; sampling methods; service industries considered; survey administration; generation of items; purification and assessment of items; analysis of dimensionality; and scale reliability and validity.

3.2.1. Research methods
Studies of e-SQ measurement use a variety of methodologies: qualitative (Santos, 2003; Zeithaml et al., 2000), quantitative (Bauer et al., 2006), and mixed (Wolfinbarger and Gilly, 2003; Yang et al., 2004). Using qualitative methods, Zeithaml et al. (2000) identify 11 dimensions of e-SQ: (i) reliability; (ii) access; (iii) ease of navigation; (iv) efficiency; (v) responsiveness; (vi) flexibility; (vii) price knowledge; (viii) assurance/trust; (ix) security/privacy; (x) site aesthetics; and (xi) personalization. Santos (2003) also uses qualitative methods, conducting 30 offline focus groups to investigate e-SQ dimensions. Yang et al. (2004) use a mixed approach that combines content analysis of critical incidents of online banking services with a web-based survey of these services. They analyse two online consumer review web sites (Gomez.com and ratingwonders.com) to obtain 848 consumer anecdotes about their banks. Their analysis of the reviews finds 17 dimensions of online service quality, which they group into three categories: customer service quality, online system quality, and product or service variety.

In developing electronic service quality measurement scales, researchers should use qualitative research at the earliest possible stage of their work, using one of several methods. One method that researchers seldom use is the critical incident technique (CIT), a qualitative interview method for studying significant processes, incidents, and events identified by respondents (Chell, 1998). The goal is to understand significant incidents from the consumer perspective, taking into account behavioural, affective, and cognitive aspects (Chell, 1998). The technique allows respondents to identify which events are the most important to them (Gremler, 2004). In service research, respondents recall specific events they experienced with the service used; they are asked to think of a time when they felt very satisfied (dissatisfied) with the service received, to describe the service, and to explain why they felt so happy (unhappy) (Johnston, 1995). The CIT technique has several advantages for developing electronic service quality measures. First, respondents can use their own language and terms to express their perceptions. Second, respondents can classify the critical incidents into satisfactory and unsatisfactory occurrences (Gremler, 2004; Johnston, 1995); previous studies report that the determinants of satisfactory online service quality are not the same as the determinants of unsatisfactory online service quality (e.g., Yang and Fang, 2004). Third, CIT can serve as an exploratory method to increase knowledge about online service quality and to identify the relevant dimensions in a given online context (e.g., banking, travel agent services, education, grocery shopping, bookstores, and libraries). Previous studies show that traditional service quality components vary depending on the service industry (Ladhari, 2008); in that case, a purely quantitative approach can complement the CIT technique. Fourth, CIT can be used both qualitatively and quantitatively to identify the type and nature of incidents and their frequency of occurrence (Gremler, 2004).

In one of the few studies applying the CIT to the online environment, Holloway and Beatty (2003) address service failure and service recovery. Using a combination of qualitative methods (30 in-depth interviews) and quantitative methods (a survey of 295 online shoppers who had experienced at least one service failure within the past 6 months), they report six types of service problems encountered by online shoppers: website design, payment, delivery, product quality, and customer service. When asked about service recovery, respondents reported several reasons for dissatisfaction: lengthy delays, poor communication, poor quality customer service support, and generic recoveries.

3.2.2. Sampling methods
The samples for research into e-SQ are drawn from a variety of populations. Several studies use convenience sampling (e.g., Cai and Jun, 2003; Lee and Lin, 2005; Long and McMellon, 2004), whereas only a few use random sampling (e.g., Fassnacht and Koese, 2006; Parasuraman et al., 2005). Several of the studies use students for their surveys (e.g., Aladwani and Palvia, 2002; Lee and Lin, 2005; Loiacono et al., 2002; Yoo and Donthu, 2001), despite the major limitation that these respondents are usually not actual internet purchasers, but students who are merely invited to visit websites and rate them. Yoo and Donthu (2001) gather data from convenience samples of students who were asked to visit and evaluate internet shopping sites over a period of 2 days. Loiacono et al. (2002) also use a convenience sample of students to visit and evaluate websites; they asked undergraduate students to explore a designated website and to imagine that they were searching for a book. Only a few studies use non-student samples. For example, Long and McMellon (2004) use respondents who were about to purchase items online; these respondents were asked to complete a questionnaire on expectations before going on the internet, followed by a questionnaire on perceptions after purchasing a product online. Parasuraman et al. (2005) use only respondents who had visited the internet on at least 12 occasions and had made at least three purchases during the preceding 3 months.

The studies reviewed have some limitations. First, the samples used in most of the studies consist of student populations, which may limit the generalisability of the scales and reduce their applicability to the broader population of online users. Loiacono et al. (2002, p. 435) question their own use of student samples: "While these subjects are typical of a substantial body of web users, they are not a representative sample of all users." Kim and Stoel (2004, p. 112) criticise the use of student samples: "By having students visit and rate websites with which they were not familiar or interested in, those studies may have suffered limitations in the accuracy of findings with regard to perceptions of actual users." In addition, most of these respondents are not regular customers or users of the websites selected (Loiacono et al., 2002).

Second, several studies use mostly US respondents. For instance, all the respondents in Cai and Jun's (2003) study are from the southwest and midwest regions of the US. Ranganathan and Ganapathy (2002) use a sample of respondents from Illinois. Seventy-four percent of the respondents in the Yang et al. (2004) study are US residents. The reasons for internet use and the behaviour of these participants may differ from those in other countries. Therefore, future studies should use more diversified samples. The literature on traditional service quality shows that the dimensions of service quality differ from one country to another (Ladhari, 2008).


Third, many respondents in these studies use the internet as an information source rather than for commercial transactions (e.g., Yang et al., 2004), and they may therefore have different perceptions of service quality dimensions. Respondents who have not engaged in commercial transactions on the internet may, for example, have greater concerns about security than experienced internet buyers (Yang et al., 2004). For instance, Cai and Jun (2003) report differences between online buyers and information searchers with respect to their e-service quality perceptions: information searchers rate the four dimensions of e-service quality (trustworthiness, web site design/content, communication, and prompt/reliable service) lower than online buyers do, and the two groups also differ on the relative importance of the four e-service quality dimensions.

Finally, several studies use limited sample sizes. For instance, Cai and Jun (2003) use a sample of only 171 respondents, including 61 online searchers and 110 online buyers. In their study on electronic banking service quality, Ibrahim et al. (2006) use a sample of 131 customers. In another study, Aladwani and Palvia (2002) use samples of 101 students in their first study and 127 students in their second study. These sample sizes are relatively small for developing new scales. Future studies should use larger and more diversified samples.
3.2.3. Service industries considered
Some studies collect data across several industries (Ranganathan and Ganapathy, 2002; Yoo and Donthu, 2001), whereas other studies focus on particular sectors. Ranganathan and Ganapathy (2002) examine the key dimensions of B2C websites and retain in their sample individuals who had completed at least one online purchase in the last 6 months. Yoo and Donthu (2001) ask their student respondents to evaluate a broad range of websites, including sites offering books, music and videos, department stores, electronics, computers, sports and fitness, flowers and gifts, health and beauty, and travel and auto. In contrast, other studies focus on such specific sectors as library services (O'Neill et al., 2001), books (Barnes and Vidgen, 2002), apparel (Kim and Stoel, 2004), financial services (Sohn and Tadisina, 2008), and travel services (Ho and Lee, 2007).

A few studies focus on support services related to purchasing goods on the internet (e.g., Francis and White, 2002; Janda et al., 2002; Kim and Stoel, 2004), while other studies focus on pure service offerings such as web portals (e.g., Gounaris and Dimitriadis, 2003; Yang et al., 2005). Other researchers develop scales for measuring service quality for both support services and pure information web sites (e.g., Fassnacht and Koese, 2006; Yoo and Donthu, 2001). For instance, Fassnacht and Koese (2006) discuss three types of electronic service: online shopping for electronic equipment (i.e., a support service), the creation and maintenance of home pages (i.e., a pure service offering), and sports coverage (i.e., a content offering).
3.2.4. Survey administration
Scholars use both online and offline approaches for collecting data in their studies. In qualitative research, researchers use online and offline focus group studies (Wolfinbarger and Gilly, 2003) and offline in-depth interviews (Cristobal et al., 2007). In quantitative studies, researchers use mail surveys (Kim and Stoel, 2004; Sohn and Tadisina, 2008), website surveys (Parasuraman et al., 2005; Sohn and Tadisina, 2008), and in-person surveys (O'Neill et al., 2001; Cristobal et al., 2007). For example, Kim and Stoel (2004) collect data via a mail survey sent to a mailing list of 1000 randomly selected female shoppers who had purchased apparel online in the preceding 3 years. Parasuraman et al. (2005) use a research firm to administer an online survey to a random sample of internet users. Cristobal et al. (2007) collect data through personal interviews with a random sample of 461 internet users who had used the services provided by online shops. Sohn and Tadisina (2008) use a combination of a mail survey and a website survey, mailing a hard copy of the questionnaire to 2000 potential respondents and posting the same questionnaire on the web to accommodate respondents who were more familiar with the online interface.

Researchers should report more details about the mode of administration of their surveys. They should also explain why they chose one mode rather than another. For instance, Ranganathan and Ganapathy (2002) give no information on how they administered their survey. Considering that the object of the research is e-service quality, researchers would be expected to use web-based or e-mail-based surveys; using other modes of administration may create a disparity between the target population and the framed population. In addition, web surveys have several advantages (van Selm and Jankowski, 2006): convenience for participants, no interviewer bias, direct data entry to electronic files, easier recruitment of respondents, and lower cost. Therefore, studies using other modes of administration need to justify the alternative choice.

3.2.5. Generation of items
Because e-SQ is a relatively new concept, scale items are generated using both inductive methods (such as literature reviews) and deductive methods (such as exploratory research), although most items come from deductive methods. A review of the literature on service quality and electronic commerce by Loiacono et al. (2002) generated 142 items. Cristobal et al. (2007) also use a literature review (in combination with in-depth interviews) to generate an extensive list of 86 potential items. Exploratory studies for the generation of items use focus groups (O'Neill et al., 2001; Wolfinbarger and Gilly, 2003), in-depth interviews (Bauer et al., 2006; Cristobal et al., 2007; Yang and Jun, 2002), and content analysis of customer reviews (Yang et al., 2004). For example, Yang et al. (2004) access two online review websites to collect and analyse 848 customer reviews in generating potential attributes of online banking services. Focus groups have been conducted with university students (O'Neill et al., 2001) and with online shoppers (Wolfinbarger and Gilly, 2003). In some studies, experts or managers are asked to comment on the wording of items and/or to propose items (Gounaris and Dimitriadis, 2003; Fassnacht and Koese, 2006; Yang et al., 2005). Other studies do not state exactly how they generated their items or the number of items initially generated (e.g., Ranganathan and Ganapathy, 2002).

There is no agreement in the literature about the exact nature and definition of e-service quality dimensions. For example, the web site design dimension is conceptualised and operationalised in different ways. Ranganathan and Ganapathy (2002) include delay, ease of navigation, and the presence of visual presentation aids in the web design construct. Loiacono et al. (2002) report a dimension they call entertainment, which includes visual appeal (the aesthetics of the web site), innovativeness (the uniqueness and creativity of the site design), and flow (the emotional effect of using the web site). Cristobal et al. (2007) state that web design consists of user-friendliness, content layout, and content updating. Wolfinbarger and Gilly (2003) maintain that the construct refers to navigation, order processing, appropriate personalization, information search, and product selection. Fassnacht and Koese (2006) conclude that dimensions reported in one study cannot be compared with those in other studies on e-service quality.

These different constructs highlight the lack of consensus about the components of e-service quality. In fact, most of the papers analysed do not include clear definitions of e-service quality dimensions. In addition, several studies adopt data-driven approaches to the e-service quality construct and its components: the scale items are derived from exploratory factor analysis, and the identification of dimensions is based on which items load most strongly on each factor. This lack of consensus leads to differences among studies in the number and nature of the items generated and of those finally retained. In several studies, the items generated are based solely on qualitative research such as in-depth interviews and focus groups. Future research should develop a more specific theoretical framework that defines the e-service quality construct and its dimensions more consistently and identifies pertinent scale items.
3.2.6. Assessment and purification of items
In several studies, the item-total correlation serves as a criterion for initial assessment and purification. Various cut-off points are adopted: 0.30 by Cristobal et al. (2007), 0.40 by Loiacono et al. (2002), and 0.50 by Francis and White (2002) and Kim and Stoel (2004). Items are rejected by Loiacono et al. (2002) if they possess a high correlation with items on other putative constructs (that is, discriminant validity problems). Cristobal et al. (2007) use confirmatory factor analysis to eliminate indicators whose standardised coefficients are less than 0.5. Wolfinbarger and Gilly (2003) are rigorous in retaining only items that (i) load at 0.50 or more on a factor, (ii) do not load at more than 0.50 on two factors, and (iii) have an item-to-total correlation of more than 0.40.
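To illustrate this purification step, the sketch below computes corrected item-total correlations (each item against the sum of the remaining items) and flags items that fall below a cut-off; the function name, the placeholder data, and the 0.40 cut-off are illustrative assumptions rather than any particular study's procedure.

```python
import numpy as np

def corrected_item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the *other* items.

    `items` is an (n_respondents x k_items) matrix of scale responses.
    """
    n_items = items.shape[1]
    total = items.sum(axis=1)
    corrs = np.empty(n_items)
    for j in range(n_items):
        rest = total - items[:, j]                      # total score excluding item j
        corrs[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return corrs

# Illustrative use with placeholder 7-point responses and a hypothetical 0.40 cut-off.
rng = np.random.default_rng(42)
scores = rng.integers(1, 8, size=(200, 10)).astype(float)
keep_mask = corrected_item_total_correlations(scores) >= 0.40  # items to retain
```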
3.2.7. Analysis of dimensionality
The dimensionality of a scale is assessed using exploratory factor analysis (EFA) and/or confirmatory factor analysis (CFA). Exploratory factor analysis is used by several researchers, such as Francis and White (2002), Loiacono et al. (2002), Ranganathan and Ganapathy (2002), Kim and Stoel (2004), Ibrahim et al. (2006), Cristobal et al. (2007), Ho and Lee (2007), and Sohn and Tadisina (2008). Confirmatory factor analysis is used by Janda et al. (2002), Loiacono et al. (2002), Kim and Stoel (2004), Lee and Lin (2005), Cristobal et al. (2007), Ho and Lee (2007), and Sohn and Tadisina (2008).

As noted, most of the studies reviewed use EFA to reduce the number of items in their constructs, but many researchers have criticised its use. Kwok and Sharp (1998) describe the use of EFA as a "fishing expedition". In fact, the technique has a number of shortcomings. First, the estimates obtained for factor loadings are not unique, and the factor structure obtained is only one of an infinite number of potential solutions (Segars and Grover, 1993). Second, CFA provides goodness-of-fit indicators to evaluate whether the factor structure fits the data, which is not the case for EFA (Marsh and Hocevar, 1985). Third, when applied to data exhibiting correlated factors, common factor analysis with varimax rotation can produce distorted factor loadings and incorrect conclusions about the factor solution (Segars and Grover, 1993). Fourth, it is possible for items to load on more than one factor in EFA, which may affect their distinctiveness and their interpretation (Sureshchandar et al., 2002). Finally, contrary to EFA, CFA allows researchers to compare several model specifications and to examine the invariance of a specific parameter in the factor solution (Marsh and Hocevar, 1985). Given the limitations of EFA, researchers should use a combination of EFA and CFA.
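As a sketch of the exploratory step only (the confirmatory step would follow in SEM software on a separate sample), the snippet below runs an EFA with an oblique rotation, which is preferable when factors are expected to correlate; the third-party factor_analyzer package, the chosen number of factors, and the placeholder data are assumptions made for illustration.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party EFA package (assumed installed)

# Placeholder (respondents x items) Likert-type matrix; a real study would use survey data.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(300, 20)).astype(float)

efa = FactorAnalyzer(n_factors=4, rotation="oblimin")  # oblique rotation allows correlated factors
efa.fit(responses)
pattern_loadings = efa.loadings_  # (items x factors) loadings used to assign items to factors

# The retained structure would then be re-specified as a CFA model and tested on a
# hold-out sample, comparing goodness-of-fit indices such as CFI and RMSEA.
```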
Other studies use multinational samples of internet users, which may also create bias. Previous research reports that cultural differences in response styles, such as extreme responding and the use of scale mid-points, are sources of bias that can threaten the validity of scales (Diamantopoulos et al., 2008). Because response styles cannot be completely eliminated through research design, researchers should establish measurement invariance via multigroup confirmatory factor analysis (Steenkamp and Baumgartner, 1998).

3.2.8. Scale reliability and validity
The reliability of scales (that is, the internal homogeneity of a set of items) is usually assessed by Cronbach's α coefficient or by Jöreskog's ρ coefficient. Most scales in the present review exhibit good reliability in terms of Cronbach's α, with values greater than 0.70 (Fornell and Larcker, 1981; Nunnally, 1978). For example, Loiacono et al. (2002) report 12 dimensions with Cronbach's α ranging from 0.72 to 0.93; Ranganathan and Ganapathy (2002) find four dimensions with Cronbach's α ranging from 0.87 to 0.89; and Yang et al. (2004) report six dimensions with ρ coefficients ranging from 0.77 to 0.91 (see Table 1). However, in a few studies the reliability coefficients fall below the recommended level (as reported in Table 1). For example, Long and McMellon (2004) find reliability coefficients of only 0.51 and 0.59 for the purchasing process and responsiveness dimensions, respectively. Yang and Jun (2002) find a Cronbach's α of only 0.59 for the credibility dimension. Ibrahim et al. (2006) report Cronbach's α values of 0.33 and 0.57 for the friendly/responsive customer service and targeted customer service dimensions, respectively. This means that certain scales reported in the literature are problematic.
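For reference, Cronbach's α can be computed directly from the item and total-score variances; the short function below is a generic sketch (the function name and the placeholder data are illustrative, not drawn from any study reviewed here).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summated scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with placeholder data; values above roughly 0.70 are read as acceptable reliability.
rng = np.random.default_rng(1)
demo_items = rng.integers(1, 8, size=(150, 6)).astype(float)
alpha = cronbach_alpha(demo_items)
```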
Convergent validity (that is, the extent to which a set of items assumed to represent a construct does in fact converge on the same construct) is verified in various ways. Gounaris and Dimitriadis (2003) evaluate it by calculating the average variance extracted for each factor and confirming convergent validity when the shared variance accounts for 0.50 or more of the total variance. Other studies assess convergent validity by correlating their scales with a measure of overall service quality. For example, Loiacono et al. (2002) establish the convergent validity of their WebQual scale by correlating the total score of the 36 items with a single-item measure of overall quality.
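In the first of these approaches, the average variance extracted for a construct measured by k items with standardised loadings \(\lambda_i\) and error variances \(\theta_i\) is computed as follows (a standard formulation, stated here for reference rather than reproduced from any single study above):

\[
\mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{\sum_{i=1}^{k}\lambda_i^{2} + \sum_{i=1}^{k}\theta_i},
\]

with convergent validity supported when \(\mathrm{AVE} \ge 0.50\).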
Discriminant validity (that is, the extent to which measures of theoretically unrelated constructs do not correlate with one another) is established by Gounaris and Dimitriadis (2003) when the average variance extracted for each factor is greater than the squared correlation between that construct and the other constructs in the model. Discriminant validity is also evaluated by comparing the fit of a two-correlated-factor model with the fit of a single-factor model for each pair of dimensions (Kim and Stoel, 2004; Loiacono et al., 2002); discriminant validity is established when the fit of the two-factor model is better than the fit of the one-factor model for each pair of factors. In other studies, discriminant validity is examined by constraining the inter-factor correlations between pairs of dimensions (one at a time) to unity and repeating the confirmatory factor analysis (Parasuraman et al., 2005; Yang et al., 2005); discriminant validity is confirmed when the constrained model produces a significant increase in the chi-square statistic compared with the unconstrained model.
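These two criteria can be stated compactly (standard formulations, given here for reference): the AVE comparison, often referred to as the Fornell-Larcker criterion, requires, for every pair of constructs \(j\) and \(l\) with inter-construct correlation \(r_{jl}\),

\[
\mathrm{AVE}_j > r_{jl}^{2} \quad \text{and} \quad \mathrm{AVE}_l > r_{jl}^{2},
\]

while the constrained-model test supports discriminant validity when fixing \(r_{jl}=1\) significantly worsens model fit, that is, when \(\Delta\chi^{2} = \chi^{2}_{\text{constrained}} - \chi^{2}_{\text{unconstrained}}\) exceeds the critical chi-square value for one degree of freedom.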
To assess predictive/nomological validity (that is, the extent to which the scores of one construct are empirically related to the scores of other conceptually related constructs), authors examine the impact of particular e-SQ dimensions on (i) users' overall quality ratings (Aladwani and Palvia, 2002; Bauer et al., 2006; Parasuraman et al., 2005; Yang et al., 2004), (ii) satisfaction (Bauer et al., 2006; Fassnacht and Koese, 2006; Kim and Stoel, 2004; Yang et al., 2004), (iii) perceived value (Bauer et al., 2006; Parasuraman et al., 2005), (iv) relationship duration (Bauer et al., 2006), and (v) behavioural intentions (Bauer et al., 2006; Francis and White, 2002; Loiacono et al., 2002; Parasuraman et al., 2005).


The psychometric properties (that is, the three types of validity: convergent, discriminant, and predictive) of most of the scales in this review are not examined. Long and McMellon (2004) assess only the predictive validity of their newly developed scale, and the same observation applies to Yang and Jun's (2002) study. Few studies examine and confirm the convergent, discriminant, and predictive validity of their newly developed scales (e.g., Aladwani and Palvia, 2002; Cristobal et al., 2007; Kim and Stoel, 2004; Loiacono et al., 2002; Parasuraman et al., 2005). The research agenda in e-service quality should treat the validation process as a major issue. Future studies addressing the measurement of e-service quality should rigorously test and report the psychometric properties of their newly developed scales.
3.3. Dimensionality and structure of the e-SQ construct
It is apparent from this review that certain general observations can be made regarding the dimensionality and structure of the e-SQ construct as presented in these studies: (i) there is no consensus on the number and nature of the dimensions of the e-SQ construct, but six dimensions recur most consistently (reliability/fulfilment, responsiveness, ease of use/usability, privacy/security, web design, and information quality/benefit); (ii) some of the e-SQ dimensions in this review are identical (or at least similar) to those reported for traditional service quality; (iii) the studies reviewed here concentrate on functional quality, with only a few dealing with outcome quality; and (iv) despite the general support for a hierarchical multidimensional model of service quality, little effort is made by the authors reviewed here to examine such a structure for e-SQ. These observations are discussed in greater detail below.
3.3.1. Dimensionality of the e-SQ construct
All of the studies in Table 1 find the construct of e-SQ to be multidimensional, with the number of reported dimensions ranging from three (Gounaris and Dimitriadis, 2003) to 12 (Loiacono et al., 2002). It is apparent that there is no consensus on the number and nature of the dimensions of the e-SQ construct identified in previous research. It is true that some dimensions (such as reliability and ease of use) appear consistently in the various models, which indicates that there are some common dimensions used by customers in evaluating e-SQ regardless of the type of service being delivered on the internet (Fassnacht and Koese, 2006; Zeithaml et al., 2000). However, other dimensions mentioned in the various studies appear to be specific to particular e-service contexts. These observations mirror the debate regarding generic versus industry-specific measures in assessing traditional/physical service quality (e.g., Karatepe et al., 2005; Ladhari, 2008). E-service quality dimensions tend to be contingent on the service industry involved. Even in the same industry, these dimensions depend on the type of user service (Barrutia et al., 2009). For instance, informational content is essential for web portals and internet banking services, but it is less important for companies, such as Amazon.com, that sell physical products (Barrutia et al., 2009). Kim and Stoel's (2004) study uses the 36-item scale developed by Loiacono et al. (2002) and reports a different number of dimensions for the apparel industry, whereas Loiacono et al. (2002) report 12 dimensions in their own study.

The benefit that electronic web sites may yield depends on the service setting; each industry deals with different basic and supplementary services and user needs. For instance, Fassnacht and Koese (2006) distinguish between stand-alone services (where the electronic service provided represents the main benefit for users) and support services (where the electronic service facilitates the purchase of goods or services, such as online reservations or online shopping). Stand-alone services are further grouped into pure service offerings (e.g., online banking) and content offerings (e.g., news and sports coverage).
Among the various dimensions cited in the literature review, six appear consistently: (i) reliability/fulfilment; (ii) responsiveness; (iii) ease of use/usability; (iv) privacy/security; (v) web design; and (vi) information quality/benefit.

The first of these, reliability/fulfilment, which is also one of the prominent dimensions of the traditional SERVQUAL instrument, refers to the performance of the promised service in an accurate and timely manner and to the delivery of intact and correct products (or services) at times convenient to customers (Yang and Jun, 2002). In the studies reviewed here, this dimension is a significant determinant of (i) overall service quality (Lee and Lin, 2005; Parasuraman et al., 2005; Sohn and Tadisina, 2008; Wolfinbarger and Gilly, 2003; Yang and Jun, 2002), (ii) satisfaction (Lee and Lin, 2005; Wolfinbarger and Gilly, 2003), (iii) perceived value (Bauer et al., 2006; Parasuraman et al., 2005), (iv) intention to purchase (Lee and Lin, 2005; Wolfinbarger and Gilly, 2003), and (v) repurchase intentions (Bauer et al., 2006).

The second dimension that appears consistently in the studies reviewed here is responsiveness, which refers to a willingness to help users (Li et al., 2002; O'Neill et al., 2001), prompt responses to customers' enquiries and problems (Bauer et al., 2006; Yang and Jun, 2002; Yang et al., 2004), and the availability of alternative communication channels (Bauer et al., 2006). In this regard, Lee and Lin (2005) report that responsiveness influences overall service quality and satisfaction.

Ease of use/usability refers to user friendliness, especially with regard to searching for information (Yang et al., 2005; Yoo and Donthu, 2001). Ease of access to available information is an important reason for consumers choosing to purchase through the internet (Cristobal et al., 2007; Wolfinbarger and Gilly, 2003). Such usability is an important aspect of e-SQ because the e-business environment can be intimidating and complex for many customers (Parasuraman et al., 2005).

The fourth dimension, privacy/security, refers to the protection of personal and financial information (Yoo and Donthu, 2001) and the degree to which the site is perceived by consumers as being safe from intrusion (Parasuraman et al., 2005). This dimension is relevant because of the perceived risk of financial loss and fraud in the online environment (Parasuraman et al., 2005). Security has been identified as the most important factor in determining e-SQ for consumers of online banking services (White and Nteli, 2004), and it is the most important influence on intentions to revisit a site and make purchases (Ranganathan and Ganapathy, 2002; Yoo and Donthu, 2001).

The fifth common dimension, web design, refers to aesthetic features and content as well as the structure of online catalogues (Cai and Jun, 2003). According to Sohn and Tadisina (2008), a website design similar to a physical store environment influences customer perceptions of the online service provider and subsequent behavioural intentions. The design of a web site plays an important role in attracting and retaining visitors and is as important as its content (Ranganathan and Ganapathy, 2002).

The sixth dimension, information quality/benefit, refers to the adequacy and accuracy of the information users obtain when visiting a web site (e.g., Collier and Bienstock, 2006; Fassnacht and Koese, 2006; Ho and Lee, 2007; Yang et al., 2005). This dimension becomes especially important for pure service offerings such as web portal services (Gounaris and Dimitriadis, 2003; Yang et al., 2005). Yang et al. (2005) find that two of their five dimensions refer to the quality of information: its adequacy and its usefulness. The adequacy-of-information dimension is measured by items referring to comprehensiveness, content completeness, sufficiency, and detailed contact. Usefulness of content refers to relevance, uniqueness, and whether the content is up-to-date, as perceived by the user.


However, these dimensions are not necessarily generic and
exhaustive. Scales for measuring service quality have to vary
depending on the service industry and web site offerings. In addition,
individual characteristics may influence the e-service quality
dimensions considered by users. For instance, Yang and Jun
(2002) underline differences between purchasers and non-purchasers according to the desired dimensions of e-service
quality. The common dimensions we report in this review could
be used as a starting point for developing e-SQ measurement
scales.
3.3.2. Comparison with traditional service-quality dimensions
Although new dimensions are identified in this review of e-SQ
dimensions, some of the online dimensions are identical (or
similar) to those reported for traditional service quality. For
instance, reliability, which is one of the key dimensions in the
offline context (Parasuraman et al., 1988), is reported in numerous
e-SQ scales (e.g., Cai and Jun, 2003; Fassnacht and Koese, 2006;
Lee and Lin, 2005; Long and McMellon, 2004; O'Neill et al., 2001;
Parasuraman et al., 2005; Wolfinbarger and Gilly, 2003). Similarly,
responsiveness, which is one of the five SERVQUAL dimensions, is
also reported in several studies of e-SQ (e.g., Ho and Lee, 2007; Lee
and Lin, 2005; Long and McMellon, 2004; O'Neill et al., 2001).
However, some differences from traditional interpretations of
these dimensions are apparent; in particular, the interpretation of
the responsiveness dimension is different in the web-based
context from its connotations in the traditional interpersonal
service environment.
Some traditional dimensions of physical service quality are
apparently not applicable in the context of e-SQ. For example,
empathy is apparently of less concern in the online environment,
which probably reects the absence of interpersonal contact in
this context. In certain service contexts, however, empathy becomes an
important dimension of e-SQ. Sohn and Tadisina (2008) identify
customised communication as important to customers in
evaluating e-services provided by internet-based financial institutions. This dimension refers to personalised communication
between customers and companies and is similar to the empathy
dimension in the SERVQUAL model. The items used for measuring
customised communication are similar to those in the SERVQUAL
model (e.g., individual attention, understanding specific needs of
customers, and convenience in contacting employees).
Another traditional SERVQUAL dimension, tangibility, is
apparently subsumed within the common ease of use dimension
(website design and characteristics, structure of the online store
and catalogue, aesthetics) in the online environment.
Other dimensions are specific to e-service quality: trust
(Barnes and Vidgen, 2002; Loiacono et al., 2002; Sohn and
Tadisina, 2008), web site design and functionality/ease-of-use
(Bauer et al., 2006; Fassnacht and Koese, 2006; Ho and Lee, 2007;
Sohn and Tadisina, 2008; Yang and Jun, 2002; Yoo and Donthu,
2001), and security (Francis and White, 2002; Ranganathan and
Ganapathy, 2002; Yang and Jun, 2002; Yang et al., 2004; Yoo and
Donthu, 2001). The privacy/security dimension is becoming less important
for internet users in their evaluation of e-service quality.
3.3.3. Outcome quality versus functional quality

Grönroos (1990) distinguishes between functional quality
(how the service is delivered) and technical quality (the outcome
for the customer after service delivery). It is apparent that most of
the studies reported in Table 1 concentrate on functional quality;
indeed, few studies in this review deal with technical quality.
Among those that explicitly refer to outcome quality, Fassnacht
and Koese (2006) report three dimensions of e-SQ, which they
designate as environment quality, delivery quality, and outcome
quality. The outcome quality factor is captured by three
sub-dimensions: (i) reliability (accuracy and timeliness of
fulfilling the underlying service promise); (ii) functional benefit
(the extent to which the service serves its actual purpose); and
(iii) emotional benefit (the degree to which using the service
arouses positive feelings). The study of Collier and Bienstock
(2006), which also deals with outcome quality, distinguishes
three dimensions of e-SQ: (i) process (functionality, information
accuracy, design, privacy, ease of use); (ii) recovery (interactive
fairness, procedural fairness, outcome fairness); and (iii) outcome (order accuracy, order condition, and timeliness). This last
dimension of outcome is described by Collier and Bienstock
(2006, p. 265) as "… the ultimate reason a customer goes to a
web site".
A similar criticism has been levelled at measures of physical service
quality (Fassnacht and Koese, 2006): the SERVQUAL model, in particular,
has been criticised for focusing solely on functional quality. Despite
this criticism, studies of both the functional and outcome dimensions
of service quality, in both the physical and electronic
domains, are still in short supply (Fassnacht and Koese, 2006).
3.3.4. Hierarchical structure of the e-SQ construct
Although several authors suggest that traditional service
quality is a multilevel construct made up of several sub-dimensions
(Brady et al., 2002; Wilkins et al., 2007), it is apparent in the
studies reviewed here that few efforts have been made to examine such a hierarchical
structure for e-SQ. Among those who address this issue, Fassnacht
and Koese (2006) report empirical evidence for a hierarchical
model made up of three first-order dimensions (environment
quality, delivery quality, outcome quality) and nine second-order factors. Similarly, Collier and Bienstock (2006) provide
empirical support for a conceptual model of e-SQ consisting of
three first-order dimensions (process quality, outcome quality,
and recovery) and 11 second-order dimensions.
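For readers less familiar with such hierarchical specifications, the general form can be written as a higher-order factor model. The sketch below uses assumed, conventional notation for illustration only; it is not the exact model estimated by Fassnacht and Koese (2006) or Collier and Bienstock (2006). Here x_{jk} denotes observed item k of sub-dimension j, \eta_j the j-th sub-dimension, and \xi the overall e-SQ construct.

% Generic hierarchical (higher-order) reflective measurement model:
% items load on their sub-dimension, and each sub-dimension in turn
% loads on the overall e-SQ construct.
\[
  x_{jk} = \lambda_{jk}\,\eta_j + \delta_{jk}, \qquad
  \eta_j = \gamma_j\,\xi + \zeta_j ,
\]
where \lambda_{jk} and \gamma_j are loadings and \delta_{jk} and \zeta_j are error and disturbance terms. The first equation links the observed items to their sub-dimensions; the second links the sub-dimensions to the overall construct.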

4. Conclusions and implications


4.1. Findings and research implications
This study reviews numerous studies that propose and report
on various scales for measuring electronic service quality (e-SQ).
As a result of this review, the present paper:

• summarises the methodological issues related to the development of e-SQ scales (research methods, sampling methods, service industries considered, survey administration, generation of items, assessment and purification of items, analysis of dimensionality, scale reliability, and scale validity); and
• discusses the dimensionality of the e-SQ construct (thus identifying several common dimensions of e-SQ and several limitations associated with the development of e-SQ scales).

This study shows the variability of the dimensions resulting
from previous studies on e-service quality measurement. The
study reveals that the key dimensions of e-SQ are reliability/
fulfilment, responsiveness, ease of use/usability, privacy/
security, web design, and information quality/benefit. It is thus
apparent that two of the five SERVQUAL dimensions of traditional
service quality (reliability and responsiveness) also constitute
key factors in the e-commerce context. However, it is also
apparent that other SERVQUAL dimensions, especially empathy,
assume less importance in the online environment. The review
also shows that some e-SQ dimensions are distinctive to
the e-retail environment, as opposed to the traditional retail
setting. Examples of these dimensions include ease of use,

R. Ladhari / Journal of Retailing and Consumer Services 17 (2010) 464477

privacy and security, website design, quality of information,


and personalization.
The review shows that several e-SQ dimensions are relevant
across several industries, whereas others are more or less specific
to particular online service industries. Any attempt to promote a
global (or generic) measure of e-SQ could be subject to similar
criticisms to those directed at generic measurement instruments
in traditional service quality (such as SERVQUAL). It is apparent
from this review that the generic e-SQ dimensions identified in
this study should be complemented by sector-specific dimensions
in particular contexts. The development of valid industry-specific
quality measurement scales would seem to be a fruitful avenue
for future research (just as it is proving to be in the case of
traditional service quality).
Apart from industry-specific scales, future research could also
seek to develop and compare specic e-SQ measurement scales
for different functional types of e-business. In undertaking
this research, the two-dimensional classification scheme of
internet businesses devised by Francis and White (2004) could be
useful. According to this model, two dimensions can be used to
classify e-businesses: (i) fulfilment process (which can be
distinguished into electronic delivery and offline delivery); and
(ii) product (which can be distinguished into the purchase of
services and the purchase of goods). By combining these two
dimensions (and the two sub-divisions of each), four types of
internet retailing can be identified: (i) offline–goods (that is,
offline delivery of purchased goods); (ii) offline–services (offline
delivery of purchased services); (iii) electronic–goods (electronic
delivery of purchased goods); and (iv) electronic–services
(electronic delivery of purchased services). It is likely that the
quality factors considered by internet users will differ across
these various categories of internet retailing. It could be interesting for future studies to examine the relative importance
of service quality dimensions across these four categories of
e-business.
The present study also finds that several newly developed
scales, such as SITEQUAL (Yoo and Donthu, 2001), WEBQUAL
(Loiacono et al., 2002), E-S-QUAL (Parasuraman et al., 2005),
eTransQual (Bauer et al., 2006), and PeSQ (Cristobal et al., 2007),
lack specific application and validation. A possible avenue of
future research is to replicate these e-SQ scales across different
contexts with a view to enhancing their external validity. Indeed,
Table 1 shows that studies are largely confined to business-to-consumer relationships. Developing a measurement scale for electronic service quality in a business-to-business setting would be
valuable. It is clear that the type of user (individual or
organizational) and the nature of the service setting should
determine the e-service quality dimensions retained.
The present study also finds that the dimensionality of the
e-SQ construct is not stable across studies, which probably
reflects the diversity of the scope of the studies examined in this
paper. Some studies examine websites that sell goods or services
whereas other studies examine non-selling sites. Moreover,
some studies develop generic e-SQ scales whereas others
develop industry-specific scales. In addition, different methodological approaches are adopted for the identification of the
dimensions of e-SQ and/or the generation (and number) of
potential items within those dimensions. The small samples
used in several of the studies reported in this review do not
permit an adequate assessment of the validity of the scales;
moreover, several studies use convenience samples of students,
which limits the generalisability of the findings. To ensure
external validity, it is recommended that future research should
use random samples of appropriate internet shoppers to identify
the key dimensions and their relative inuence on online
consumer behaviour.


Most of the studies in this review focus on functional quality
(that is, the service-delivery process that takes place on the
internet) rather than technical quality (the outcome of the service
process). According to Brady et al. (2002), such a reliance on
functional quality can constitute a misspecification of service
quality (at least with regard to traditional service quality). If this
contention also applies in the online context, it would seem that
the conceptualisation of e-SQ requires further development with
a view to paying appropriate attention to technical quality, as well
as functional quality.
Finally, most of the studies reviewed here describe e-SQ in
terms of reflective attributes rather than formative attributes.
Jarvis et al. (2003) cite four rules for determining whether a
construct is reflective or formative: (i) direction of causality from
construct to measure; (ii) interchangeability of the indicators;
(iii) co-variation among the indicators; and (iv) nomological net
among the construct indicators. For instance, in reflective
measurement models, the direction of causality goes from
construct to measure, while the opposite is true for formative
measurement models. With formative measurement models,
indicators do not necessarily co-vary with each other, while they
are expected to co-vary with each other in reflective measurement models. In reflective models, indicators are supposed to
have the same antecedents and consequences, which is not
required in formative models (Jarvis et al., 2003). The formative
model has been proposed as an alternative approach for
measuring traditional service quality (Rossiter, 2002; Ladhari,
2009) and electronic service quality (Parasuraman et al., 2005;
Collier and Bienstock, 2006). Collier and Bienstock (2006), who
question the use of reflective indicators to conceptualize electronic service quality, support the use of formative indicators.
Parasuraman et al. (2005) state that calling scale items formative
or reflective indicators of latent constructs is a challenging issue.
Further studies are needed to examine the formative conceptualization of e-SQ in greater depth.
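To make the distinction concrete, the two specifications can be written in conventional measurement-model notation. The sketch below is illustrative only and uses assumed symbols rather than any model estimated in the studies reviewed: \eta denotes the latent e-SQ construct, x_i its indicators, \lambda_i and \gamma_i loadings and weights, and \varepsilon_i and \zeta error and disturbance terms.

% Reflective specification: the construct causes its indicators, so the
% indicators are expected to co-vary and to be interchangeable.
\[
  x_i = \lambda_i\,\eta + \varepsilon_i, \qquad i = 1,\dots,n
\]
% Formative specification: the indicators jointly form the construct, so
% they need not co-vary and are not interchangeable.
\[
  \eta = \sum_{i=1}^{n} \gamma_i\,x_i + \zeta
\]

Under the reflective view, dropping an indicator leaves the meaning of \eta essentially unchanged; under the formative view, dropping an indicator changes the content of the construct itself.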

4.2. Managerial implications


In accordance with these findings, we propose several
recommendations for consideration by e-business managers.
First, given that responsiveness and reliability are
identified as key dimensions of e-SQ, online retailers and service
providers must ensure that they are able to perform the promised
services accurately and on time; moreover, all information about
products and services (characteristics, price, warranty, and return
policy) should be easy to locate and understand. The online
provider should provide accurate customer information on such
issues as billing and account balance. Second, to ensure that the
key dimension of responsiveness is fulfilled, internet retailers
should respond promptly to all enquiries from their customers
and ensure that their e-mail systems perform well at all times.
Third, given the importance of the dimension of ease of use,
the structure of the website and any online catalogues should be
logical and easy to navigate. Fourth, consumers must be made to
feel secure in providing personal and sensitive information (such
as credit card details). Managers should provide an explicit and
reassuring guarantee that their websites respect and protect
personal information at all times.
Finally, website managers should recognise that each online
business is unique and that it is therefore necessary for each
business to identify how its particular internet users define e-SQ.
Managers should then design their websites to ensure that they
deliver e-SQ in a manner that meets the expectations of their
cohort of customers. These expectations can be identified in depth
using online focus groups of their customers. Internet service


providers should also ensure that they continuously track their
customers' perceptions of service quality in terms of appropriate
dimensions and attributes. These dimensions, which can be
initially identified in Table 1 of the present study, should be
complemented by specific dimensions and attributes that are
identified from the managers' own focus groups. Managers should
always be careful to utilise e-SQ scales that are appropriate to the
particular context in which they are applied.

References
Aladwani, A.M., Palvia, P.C., 2002. Developing and validating an instrument for measuring user-perceived web quality. Information and Management 39 (6), 467–476.
Alpar, P., 2001. Satisfaction with a web site: its measurement factors and correlates. Working Paper No. 99/01. Philipps-Universität Marburg, Institut für Wirtschaftsinformatik.
Barnes, S.J., Vidgen, R.T., 2002. An integrative approach to the assessment of E-commerce quality. Journal of Electronic Commerce Research 3 (3), 114–127.
Barrutia, J.M., Charterina, J., Gilsanz, A., 2009. E-service quality: an internal, multichannel and pure service perspective. Service Industries Journal 29 (12), 1707–1721.
Bauer, H.H., Falk, T., Hammerschmidt, M., 2006. eTransQual: a transaction process-based approach for capturing service quality in online shopping. Journal of Business Research 59, 866–875.
Brady, M.K., Cronin, J.J., Brand, R.R., 2002. Performance-only measurement of service quality: a replication and extension. Journal of Business Research 55 (1), 17–31.
Buttle, F., 1996. SERVQUAL: review, critique, research agenda. European Journal of Marketing 30 (1), 8–32.
Cai, S., Jun, M., 2003. Internet users' perceptions of online service quality: a comparison of online buyers and information searchers. Managing Service Quality 13 (6), 504–519.
Chell, E., 1998. Critical incident technique. In: Symon, G., Cassell, C. (Eds.), Qualitative Methods and Analysis in Organizational Research: A Practical Guide. Sage, Thousand Oaks, CA, pp. 51–72.
Cristobal, E., Flavián, C., Guinalíu, M., 2007. Perceived e-service quality (PeSQ): measurement validation and effects on consumer satisfaction and web site loyalty. Managing Service Quality 17 (3), 317–340.
Collier, J.E., Bienstock, C.C., 2006. Measuring service quality in e-retailing. Journal of Service Research 8 (3), 260–275.
Diamantopoulos, A., Riefler, P., Roth, K.P., 2008. Advancing formative measurement models. Journal of Business Research 61, 1203–1218.
Fassnacht, M., Koese, I., 2006. Quality of electronic services: conceptualizing and testing a hierarchical model. Journal of Service Research 9 (1), 19–37.
Fassnacht, M., Köse, I., 2007. Consequences of web-based service quality: uncovering a multi-faceted chain of effects. Journal of Interactive Marketing 21 (3), 35–54.
Fornell, C., Larcker, D., 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18 (1), 39–50.
Francis, J.E., White, L., 2002. PIRQUAL: a scale for measuring customer expectations and perceptions of quality in Internet retailing. In: Evans, K., Scheer, L. (Eds.), Marketing Educators' Conference: Marketing Theory and Applications, vol. 13. American Marketing Association, Chicago, IL, pp. 263–270.
Francis, J.E., White, L., 2004. Value across fulfillment-product categories of internet shopping. Managing Service Quality 14 (2/3), 226–234.
Gefen, D., 2002. Customer loyalty in e-commerce. Journal of the Association for Information Systems 3, 27–51.
Gounaris, S., Dimitriadis, S., 2003. Assessing service quality on the web: evidence from business-to-consumer portals. Journal of Services Marketing 17 (4/5), 529–548.
Gremler, D.D., 2004. The critical incident technique in service research. Journal of Service Research 7 (1), 65–89.

Grönroos, C., 1990. Service Management and Marketing. Lexington Books, Lexington, MA.
Ha, S., Stoel, L., 2009. Consumer e-shopping acceptance: antecedents in a technology acceptance model. Journal of Business Research 62, 565–571.
Hausman, A.V., Siekpe, J.S., 2009. The effect of web interface features on consumer online purchase intentions. Journal of Business Research 62, 5–13.
Ho, C.-I., Lee, Y.-L., 2007. The development of an e-travel service quality scale. Tourism Management 26, 1434–1449.
Holloway, B.B., Beatty, S.E., 2003. Service failure in online retailing: a recovery opportunity. Journal of Service Research 6 (1), 92–105.
Hsu, S.-H., 2008. Developing an index for online customer satisfaction: adaptation of American customer satisfaction index. Expert Systems with Applications 34, 3033–3042.
Hwang, Y., Kim, D.J., 2007. Customer self-service systems: the effects of perceived web quality with service contents on enjoyment, anxiety, and e-trust. Decision Support Systems 43, 746–760.
Ibrahim, E.E., Joseph, M., Ibeh, K.I.N., 2006. Customers' perception of electronic service delivery in the UK retail banking sector. International Journal of Bank Marketing 24 (7), 475–493.
Janda, S., Trocchia, P.J., Gwinner, K.P., 2002. Consumer perceptions of Internet retail service quality. International Journal of Service Industry Management 13 (5), 412–431.
Jarvis, C.B., Mackenzie, S.B., Podsakoff, P.M., 2003. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 30, 199–218.
Johnston, R., 1995. The determinants of service quality: satisfiers and dissatisfiers. International Journal of Service Industry Management 6 (5), 53–71.
Jun, M., Yang, Z., Kim, D., 2004. Customers' perceptions of online retailing service quality and their satisfaction. International Journal of Quality and Reliability Management 21 (8), 817–840.
Karatepe, O.M., Yavas, U., Babakus, E., 2005. Measuring service quality of banks: scale development and validation. Journal of Retailing and Consumer Services 12 (5), 373–383.
Kim, S., Stoel, L., 2004. Apparel retailers: website quality dimensions and satisfaction. Journal of Retailing and Consumer Services 11 (2), 109–117.
Kwok, W.C.C., Sharp, D.J., 1998. A review of construct measurement issues in behavioral accounting research. Journal of Accounting Literature 17, 137–174.
Ladhari, R., 2008. Alternative measures of service quality: a review. Managing Service Quality 18 (1), 65–86.
Ladhari, R., 2009. A review of twenty years of SERVQUAL research. International Journal of Quality and Services Sciences 1 (2), 172–198.
Lee, G., Lin, H., 2005. Customer perceptions of e-service quality in online shopping. International Journal of Retail and Distribution Management 33 (2), 161–176.
Li, Y.N., Tan, K.C., Xie, M., 2002. Measuring web-based service quality. Total Quality Management and Business Excellence 13 (5), 685–700.
Liu, C., Arnett, K.P., 2000. Exploring the factors associated with web site success in the context of electronic commerce. Information and Management 38, 23–33.
Loiacono, E.T., Watson, R.T., Goodhue, D.L., 2002. WEBQUAL: measure of web site quality. Marketing Educators' Conference: Marketing Theory and Applications 13, 432–437.
Long, M., McMellon, C., 2004. Exploring the determinants of retail service quality on the internet. Journal of Services Marketing 18 (1), 78–90.
Marsh, H.W., Hocevar, D., 1985. The application of confirmatory factor analysis to the study of self-concept: first- and higher-order factor structures and their invariance across groups. Psychological Bulletin 97 (3), 562–582.
Muylle, S., Moenaert, R., Despontin, M., 1999. Measuring web site success: an introduction to web site user satisfaction. Marketing Theory and Applications 10, 176–177.
Nunnally, J.C., 1978. Psychometric Theory. McGraw-Hill, New York.
O'Neill, M., Wright, C., Fitz, F., 2001. Quality evaluation in on-line service environments: an application of the importance–performance measurement technique. Managing Service Quality 11 (6), 402–417.
Parasuraman, A., Zeithaml, V.A., Berry, L.L., 1988. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64, 12–40.
Parasuraman, A., Zeithaml, V.A., Berry, L.L., 1991. Refinement and reassessment of the SERVQUAL scale. Journal of Retailing 67 (4), 420–450.
Parasuraman, A., Grewal, D., 2000. The impact of technology on the quality-value-loyalty chain: a research agenda. Journal of the Academy of Marketing Science 28 (1), 168–174.
Parasuraman, A., Zeithaml, V.A., Malhotra, A., 2005. E-S-QUAL: a multiple-item scale for assessing electronic service quality. Journal of Service Research 7 (3), 213–233.
Ranganathan, C., Ganapathy, S., 2002. Key dimensions of business-to-consumer web sites. Information and Management 39, 457–465.
Rice, M., 1997. What makes users revisit a web site? Marketing News 31 (6), 12–13.
Rossiter, J.R., 2002. The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing 19, 305–335.
Santos, J., 2003. E-service quality: a model of virtual service quality dimensions. Managing Service Quality 13 (3), 233–246.
Segars, A.H., Grover, V., 1993. Re-examining perceived ease of use and usefulness: a confirmatory factor analysis. MIS Quarterly 17 (4), 517–525.
Sohn, C., Tadisina, S.K., 2008. Development of e-service quality measure for internet-based financial institutions. Total Quality Management and Business Excellence 19 (9), 903–918.
Steenkamp, J.-B.E.M., Baumgartner, H., 1998. Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research 25 (1), 78–90.
Sureshchandar, G.S., Rajendran, C., Anantharaman, R.N., 2002. Determinants of customer-perceived service quality: a confirmatory factor analysis approach. Journal of Services Marketing 16 (1), 9–34.
Szymanski, D.M., Hise, R.T., 2000. E-satisfaction: an initial examination. Journal of Retailing 76 (3), 309–322.
van Riel, A.C.R., Liljander, V., Jurriëns, P., 2001. Exploring consumer evaluations of e-services: a portal site. International Journal of Service Industry Management 12 (4), 359–377.
van Selm, M., Jankowski, N.W., 2006. Conducting online surveys. Quality & Quantity 40, 435–456.
White, H., Nteli, F., 2004. Internet banking in the UK: why are there not more customers? Journal of Financial Services Marketing 9 (1), 49–56.


Wilkins, H., Merrilees, B., Herington, C., 2007. Toward an understanding of total service quality in hotels. International Journal of Hospitality Management 26, 840–853.
Wolfinbarger, M., Gilly, M.C., 2003. eTailQ: dimensionalizing, measuring and predicting retail quality. Journal of Retailing 79 (3), 183–198.
Yang, Z., Fang, X., 2004. Online service quality dimensions and their relationships with satisfaction: a content analysis of customer reviews of securities brokerage services. International Journal of Service Industry Management 15 (3), 302–326.
Yang, Z., Jun, M., 2002. Consumer perception of e-service quality: from purchaser and non-purchaser perspectives. Journal of Business Strategies 19 (1), 19–41.
Yang, Z., Jun, M., Peterson, R.T., 2004. Measuring customer perceived online service quality: scale development and managerial implications. International Journal of Operations and Production Management 21 (11), 1149–1174.


Yang, Z., Cai, S., Zhou, Z., Zhou, N., 2005. Development and validation of an instrument to measure user perceived service quality of information presenting Web portals. Information and Management 42, 575–589.
Yoo, B., Donthu, N., 2001. Developing a scale to measure the perceived quality of Internet shopping sites (SITEQUAL). Quarterly Journal of Electronic Commerce 2 (1), 31–47.
Zeithaml, V.A., Parasuraman, A., Malhotra, A., 2000. E-service quality: definition, dimensions and conceptual model. Working Paper, Marketing Science Institute, Cambridge, MA.
Zeithaml, V.A., Parasuraman, A., Malhotra, A., 2002. Service quality delivery through web sites: a critical review of extant knowledge. Journal of the Academy of Marketing Science 30 (4), 362–375.
