
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/0960-4529.htm

MSQ
21,2


Development and testing of the Consumer Experience Index (CEI)
SeungHyun Kim, JaeMin Cha, Bonnie J. Knutson and
Jeffrey A. Beck
The School of Hospitality Business, Eli Broad College of Business,
Michigan State University, East Lansing, Michigan, USA
Abstract
Purpose – The primary purpose of this paper is to develop a parsimonious Consumer Experience Index (CEI) and then identify and validate the dimensionality of the experience concept.
Design/methodology/approach – The study employed a four-step methodology. After conducting a pre-test and pilot test, data were collected from 397 adults via an online survey. A split-sample technique was used for the data analysis. The first-split sample (n = 199) was used to conduct the exploratory factor analysis. Reliability, convergent validity, and discriminant validity were evaluated with the second-half split sample (n = 198) through confirmatory factor analysis.
Findings – Scale-development procedures resulted in a seven-factor model comprising the following dimensions: environment, benefits, convenience, accessibility, utility, incentive, and trust. Overall, the 26-item CEI is a reliable and valid measure of the underlying components of a consumer's experience.
Research limitations/implications – This study concentrates on an experience based on the general service delivery system rather than a specific industry or business sector. The applicability of this experience measure should also be evaluated in specific, but diverse, business sectors. By understanding these seven dimensions, management can develop effective marketing strategies for providing memorable experiences for consumers.
Originality/value – Consumer experience has gone largely unmeasured. Built on the old business axiom that you cannot manage what you cannot measure, this validated CEI tool can provide businesses with an effective new management tool.
Keywords Consumers, Service climate, Service operations, Communication, Measurement
Paper type Research paper

Managing Service Quality
Vol. 21 No. 2, 2011, pp. 112-132
© Emerald Group Publishing Limited 0960-4529
DOI 10.1108/09604521111113429

1. Background
The service sector has always been an industry in which the customer experience is at the core of its being. While the notion of the consumer's experience has long been noted, the concept had gone largely unrecognized until B. Joseph Pine II and James H. Gilmore popularized it a decade ago (Pine and Gilmore, 1999). They defined five types of business offerings: commodity, goods, service, experience and transformation. According to them, there are clear economic distinctions between experiences and the other offerings: commodities are fungible, goods tangible, services intangible, and experiences memorable. The authors explain that experiences occur whenever a company intentionally "uses services as the stage and goods as props to engage an individual in an inherently personal way" (p. 11). They further differentiate the service business from the experience business, contending that when a customer buys a service, he or she purchases a set of intangible activities customized for the individual consumer. But when the consumer buys an experience, he or she pays to spend time enjoying a series of memorable events that a business stages to engage him or her in a personal way.
In this second decade of the twenty-first century, the focus is shifting from a service-based to an experience-based economy. Because of today's advanced technology, more sophisticated and demanding consumers, and an increasingly competitive business environment, services are starting to look like commodities (Knutson et al., 2006). That is, offering quality products and service is simply expected in today's competitive world. They are becoming standardized and no longer suffice to establish a competitive advantage. Businesses need to move beyond goods and services to create memorable experiences for each customer, since each customer's experience is unique and individualized (Gilmore and Pine, 2002). Thus, Pine and Gilmore argue that, in an experience economy, companies must make memories and create the stage for generating greater economic value, rather than simply making goods and delivering services.
2. Literature review
2.1 Defining the experience construct
A consumer's (or customer's) experience has been defined in various ways. Grewal et al. (2009) see it as including every point of contact at which the customer interacts with the business, product or service. Akin to their definition is the one put forth by Verhoef et al. (2009) that the customer experience originates from "a set of interactions between a customer and product, a company, or part of its organization, which provoke a reaction" (p. 33). Meyer and Schwager (2007) consider it to be both an internal and subjective response that people have to any direct or indirect contact with a company. In this context, they claim that direct contact generally occurs in the course of purchase, use, and service and is usually initiated by the consumer. On the other hand, they deem indirect contact to involve unplanned encounters with representations of the company's brand, products or services in the form of promotional elements such as personal recommendations, advertising, public relations, news reports, and reviews. Investigating the concept of customer experience, Gentile et al. (2007) conceptualize it as a dynamic evolution of the relationship between the company and the customer. Their definition draws on the 2003 work of LaSalle and Britton (in Gentile et al., 2007), as well as the 2005 study by Shaw and Ivens (in Gentile et al., 2007), who postulate that a customer's experience stems from a set of personal interactions between a consumer and an organization; these interactions can provoke a reaction at various psychological and physiological levels. Furthermore, they argue that a customer's reaction or evaluation stems from a comparison between expectations and the stimuli coming from the interaction at each consumer touch point or moment of contact (LaSalle and Britton, 2003; Shaw and Ivens, 2005, in Gentile et al., 2007).
Although dissimilar in their wording, we found several common threads running
through these definitions that helped clarify our goal of developing an index that will
identify and measure the underlying dimensions of the consumer experience. First, the
experience a customer has with an organization is holistic in nature, and therefore,
multi-dimensional. Second, any consumer experience involves a person at various
levels, both psychological and physiological. Third, the experience a consumer has
with a brand is personal and individual. Finally, while there are frequent themes that
run through these definitions, researchers have not agreed on a single meaning for the


concept. These conclusions support the idea that an experience is an elusive, difficult,
and indistinct construct.
After reviewing the literature, we concur with Verhoef et al.'s (2009) assessment that customer experience has not been treated as a separate construct, nor has it been explored in depth from a theoretical perspective. Instead, it has been integrated into service quality and satisfaction studies. As they point out, studies have considered it, but more as a part of the buying/consumption process and/or of how a business can manipulate elements of the marketing mix (i.e. color, sound, packaging) to enhance the customer's experience.
In an increasingly competitive and global economic climate, profitability, and perhaps even survival, requires more than good products and good service. To stand out in this aggressive setting, businesses must provide memorable experiences for individual customers. Although firms recognize the need to create economic value for their customers in the form of experiences, there is a specific lack of research designed to identify and measure the dimensions of the customer's experience. Thus, as Verhoef et al. (2009) point out, it is imperative for researchers and practitioners to understand the dimensions of a consumer experience as a construct. Before this understanding can take place, however, the dimensions must be identified, measured, and verified. While increasingly acknowledged as a distinct economic offering (Pine and Gilmore, 1999), the consumer experience has gone largely unmeasured. Previous researchers (Grewal et al., 2009; Pine and Gilmore, 1999; Verhoef et al., 2009) contributed to the literature by outlining broad issues of the consumer experience and by providing an extensive overview of the definition of this construct. Taking this descriptive approach to the next level, follow-up empirical research is needed to test these dimensions of the consumer experience. Our research may be regarded as an initial step in answering Verhoef et al.'s (2009, p. 33) question: "How can customer experience be measured in such a way that it captures all facets of this construct?".
Knutson et al. (2006) proposed a holistic model that structures the complex
relationship among the four major components of a consumer buying process:
(1) Expectations and perceptions of service quality.
(2) The consumers experience with the firm.
(3) Value.
(4) Satisfaction.
As they demonstrate, three of the components (service quality, value, and satisfaction) have been the subject of extensive research designed to uncover their underlying dimensions.
By extracting the embedded factors, researchers have given us tools by which the service quality, value, and satisfaction constructs can be, and have been, measured in various business sectors. Thus, they can be managed. To date, however, the dimensions of the experience construct have not been extracted, only assumed. While the literature suggests various sets of "clues" for an experience, these clues have not been subjected to empirical research. Only through rigorous study can they be validated, refuted, or modified as dimensions of the experience construct. Walter et al. (2010) identified and analyzed frequent drivers of the consumer service experience by employing a particular interview method, namely the critical incident technique. Their study, however, is limited to Swedish customers in the restaurant context, focuses on the interaction between consumers and service providers at the purchasing stage, and calls for a more holistic picture of the customer experience.
If the experience factors can be extracted, identified and measured, much as Parasuraman et al.'s (1988) initial work was able to do for service quality, the fourth piece of the Knutson et al. (2006) holistic model will be in place and we will be able to calculate the relationships among these four individual components. This will give us a clearer understanding of the entire consumer decision-making process relative to the experience. It will likewise provide management with additional direction for managing the customer's experience.
2.2 Index development
Webster defines an index as "a thing that points out . . . a representation". In applying this definition to the social sciences, Babbie (2007) defines it as a method of classifying subjects, in terms of some variable or attribute, by the combination of their responses to items included in the index. As such, an index must have three characteristics. First, it must be an ordinal measure; that is, it must be constructed so as to rank-order respondents in terms of a specific variable. Second, it must be a composite measure; i.e. an index measurement is based on more than one data item. Finally, an index is constructed by simple accumulation; in other words, it is constructed through the addition of scores assigned to the individual items.
Babbie (2007) indicates that the creation of an index involves several
methodological steps, including:
(1) selection of the index items;
(2) scoring of the index items; and
(3) validation of the index.
An index is created to measure some concept. The first criterion for selecting the index
items is, therefore, face validity; that is, each item must logically represent at least
some element of the construct being measured by the index. While our index was
designed to represent a customers experience, the nature of the items selected
determines how broadly that dimension can be measured. And assuming that variance
exists on the experience construct in the real world, the sum of the index items scores
should provide an indication of a respondents position on the construct. In other
words, our goal was to select items so that their summed score differentiates between
respondents. As described in the Methodology section, we met this first criterion
through our extensive review of the literature, our pretest (n = 20) and the pilot study (n = 125).
In assigning scores for individual responses to each question, we had to decide between assigning equal weights or different weights to each item. Since this appears to be an open issue in index construction without firm rules, we followed Babbie's (2007, p. 162) suggestion that "items should be weighted equally unless there are compelling reasons for differential weighting; that is, the burden of proof should be on differential weighting; equal weighting should be the norm". He further suggests that Likert scaling can be used to determine the relative intensity of different items. Thus, we incorporated a seven-point Likert measurement for the questionnaire, meeting the second criterion.


Babbie (2007) cautions, however, that these overall scores are not the final product of index construction; rather, they are for purposes of item analysis, resulting in the selection of the best items for the index. If both of these steps are carefully carried out, the likelihood of the index actually measuring the variable of interest is enhanced. To prove useful, however, he states that the index must be validated through item analysis, which examines the extent to which the composite index score is related to the individual items included in the index itself, and through item reduction. In a complex index containing many items, this step provides a test of the independent contribution of each item to the index. If a given item is poorly related to the overall concept measure, it may be assumed that other items in the index are masking the effect of the item in question. Since that item contributes little or nothing to the power of the index, Babbie (2007) believes it can be excluded. Thus, we met this third criterion through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).
3. Methodology
Guided by our literature review and the work of Babbie (2007), we designed a research
methodology to develop, empirically test, and refine an index to measure the
consumers experience construct. Our primary objective was to develop a
parsimonious Consumer Experience Index (CEI) and then identify and validate the
dimensionality of the experience concept. To accomplish this goal, we employed a
four-step methodology. First, from a review of more than 600 articles in both the popular and academic literature, we generated an initial 134-item questionnaire. It was tested, in a paper-and-pencil format, for understandability and face validity. We then converted the instrument into a web-based format, which was pre-tested by 20 senior college students to determine the time necessary to complete the web-based survey and to identify any technical and/or wording problems in completing and submitting it. We next conducted a pilot study with 125 members of Secondwind Network, an online consortium of advertising agencies, design studios, and public relations firms in the USA and Canada. This pilot study led to minor revisions in the wording and layout of the web-based design, and reduced the original 134 items to 126.
In the third step we administered a web-based survey via the online survey company SurveyMonkey.com. The survey was distributed through four distribution channels:
(1) Publicom, Inc., a Michigan-based marketing communications firm.
(2) Secondwind Network.
(3) The Detroit Chamber of Commerce.
(4) SeniorDiscount.com
Each distribution channel used its web site to invite its clients and/or members to participate in the study by clicking on a link to the survey's web site. Respondents were asked to indicate their position on each of the 126 statements on service/product experience. Each experience item was measured using a seven-point Likert scale (1 = strongly disagree; 2 = moderately disagree; 3 = slightly disagree; 4 = neutral; 5 = slightly agree; 6 = moderately agree; and 7 = strongly agree).
We adopted the sampling approach recommended by Dillman's tailored design method (Dillman, 2000). First, our target sample received multiple contacts:
(1) in advance of the main survey, all members received an advance e-letter or a note in the newsletter explaining the purpose of the study (pre-alert) one week prior to the actual main survey; and
(2) for those who did not respond to the survey initially, two reminder e-mails were sent at intervals of two weeks.

Second, we offered a monetary incentive by randomly selecting one participant to win $500 cash; those who provided contact information were entered automatically in the drawing pool. Third, we provided multiple alternative modes of completing the survey. For example, respondents could print out the web-based questionnaire, complete it, and then either mail or fax it to us.
A total of 506 individuals responded to the web-based survey. One characteristic of a web-based survey is that responses are saved in the online database whenever the participant clicks the submit button, regardless of completion. Participants can therefore skip a section of the questionnaire, click the submit button, and still be counted as a response in the database. After examining the data pattern, we deleted responses with substantial missing data, keeping only those surveys in which the item response rate was higher than 90 percent. This deletion resulted in 397 usable responses; put differently, we deleted 21 percent of the 506 responses because those respondents completed less than 90 percent of the items. All subsequent analyses were based on this usable total sample of 397 responses.
We split the sample into random halves. The first-split sample (n = 199) was used to conduct the exploratory factor analysis. In this sample, 57 percent of respondents were female, the majority ranged in age between 40 and 60 (59.8 percent), and 44 percent indicated more than $100,000 in annual total household income. The second-split sample (n = 198) was used for the confirmatory factor analysis. In this half, about two-thirds (69 percent) were female, 53.3 percent ranged in age between 40 and 60, and 36 percent of respondents reported more than $100,000 in total annual household income. As Table I indicates, the sample profiles of the EFA and CFA groups are comparable.
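The random split described above can be sketched as follows; the seed value and the use of respondent IDs are illustrative assumptions, not details reported by the authors.

```python
import random

def split_sample(responses, seed=42):
    """Randomly split usable responses into two halves, one for EFA and
    one for CFA; with 397 responses this yields halves of 199 and 198."""
    shuffled = list(responses)
    random.Random(seed).shuffle(shuffled)   # reproducible shuffle
    mid = (len(shuffled) + 1) // 2          # first half takes the extra case
    return shuffled[:mid], shuffled[mid:]

ids = list(range(397))                      # hypothetical respondent IDs
efa_half, cfa_half = split_sample(ids)
print(len(efa_half), len(cfa_half))         # 199 198
```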


4. Findings
4.1 Index development through exploratory factor analysis
Exploratory factor analysis (EFA) was employed to initially identify underlying dimensions of the experience construct and to reduce a large number of variables to a smaller number; in other words, scale refinement. To determine the appropriateness of factor analysis, we examined the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. A KMO value of 0.60 or above is required for a good factor analysis (Tabachnick and Fidell, 2001), and our measure of sampling adequacy was a robust 0.86, much higher than the recommended value. The chi-square value of Bartlett's test was 3,784.8, which was statistically significant at the p < 0.01 level. Both of these results show that the first-split sample (n = 199) can be subjected to factor analysis to identify the underlying patterns of the experience.
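For readers who want to reproduce these screening statistics, both can be computed directly from the item correlation matrix using their standard formulas (Bartlett's χ² from the determinant of the correlation matrix; KMO from correlations and partial correlations). The three-item correlation matrix below is a toy example, not the study's data.

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test that the p x p correlation matrix R is an identity
    matrix (i.e. the items share no common variance worth factoring)."""
    p = R.shape[0]
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kmo(R):
    """Kaiser-Meyer-Olkin sampling adequacy: squared correlations relative
    to squared correlations plus squared partial correlations."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # partial correlation matrix
    off = ~np.eye(R.shape[0], dtype=bool)    # off-diagonal mask
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Toy correlation matrix for three moderately correlated items
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
chi2, df = bartlett_sphericity(R, n=199)
print(round(kmo(R), 2), round(chi2, 1), df)
```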


Table I. Profile of respondents (valid percentages)

Demographic variable        Total        First-split sample   Second-split sample
                            (n = 397)    for EFA (n = 199)    for CFA (n = 198)
Gender
  Male                      36.8         42.7                 30.8
  Female                    63.2         57.3                 69.2
Income
  Less than $50,000         22.7         22.5                 23.0
  $50,000-$100,000          37.2         33.5                 41.0
  Over $100,000             40.1         44.0                 36.0
Age
  Less than 40 years        16.9         19.1                 14.7
  40-60 years               56.6         59.8                 53.3
  Over 60 years             26.5         21.1                 32.0

Note: n = 397

The 126 items were first analyzed using principal components analysis with Varimax rotation over the 199 responses (i.e. the first-split sample). To select the number of factors, we used the criterion of eigenvalues greater than 1.0 (Kaiser, 1960) and Cattell's (1966) scree test. Items were deleted based on the following suggested statistical criteria (e.g. Hatcher, 1994; Tabachnick and Fidell, 2001): an average corrected item-to-total correlation below 0.35, an average inter-item correlation below 0.2, factor loadings below 0.45, cross-loadings greater than 0.4, and a reliability score below 0.70. Beyond these statistical criteria, and most importantly, we evaluated each item for interpretation of meaning or clarity to examine face validity regarding the item's relationship to the appropriate dimension. The factor extraction process resulted in a remaining set of 39 items, revealing seven underlying constructs, as shown in Table II. These seven factors explained 59 percent of the variance. In sum, findings from the EFA indicate that seven dimensions are important in an economic experience, retaining 39 of the original 126 items. These dimensions are labeled environment (14 items), benefits (six items), convenience (five items), accessibility (five items), utility (four items), incentive (three items), and trust (two items). These seven dimensions are consistent with the findings of a previous study conducted by Knutson et al. (2006).
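The loading-based portion of these deletion rules can be sketched as a simple filter. The thresholds come from the criteria above, while the function name and example loadings are hypothetical; the item-to-total correlation, inter-item correlation, and reliability criteria would be screened separately.

```python
def retain_item(loadings, primary_min=0.45, cross_max=0.40):
    """Keep an item only if its largest absolute loading reaches the
    primary threshold and no other loading exceeds the cross-loading cap."""
    ordered = sorted((abs(x) for x in loadings), reverse=True)
    primary, rest = ordered[0], ordered[1:]
    return primary >= primary_min and all(x <= cross_max for x in rest)

# Hypothetical loadings of three items on two factors
print(retain_item([0.72, 0.15]))  # True: clean primary loading
print(retain_item([0.41, 0.10]))  # False: primary loading below 0.45
print(retain_item([0.60, 0.48]))  # False: cross-loading above 0.40
```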
4.2 Index validation through confirmatory factor analysis
Data from the second-split sample (n = 198) were used for confirmatory factor analysis. A series of CFAs was conducted to test three measurement models of the experience, using the AMOS 14.0 program:
(1) An initial first-order seven-factor model, which was obtained from the EFA.
(2) A revised first-order seven-factor model, revised from the initial first-order model.
(3) A second-order seven-factor model of the experience.

Table II. Results of exploratory factor analysis: experience

Items (39)                                                                          Loading

Factor 1. Environment (14 items; eigenvalue = 10.0; variance explained = 25.5%; reliability = 0.91)
V1  The surroundings of a product/service should be entertaining to me                0.80
V2  Stimulating product/service environments make me more likely to buy               0.74
V3  The product/service environment should be fun                                     0.72
V4  The environment of a product/service should provide sensory stimulation           0.78
V5  The surroundings of a product/service must have a specific theme                  0.72
V6  Interaction with the product/service makes shopping more interesting              0.66
V7  The product/service environment should motivate me                                0.60
V8  Atmosphere is an important element when interacting with a product/service        0.70
V9  Music enhances my interaction with the product/services                           0.59
V10 Product/service presentation must be very interactive                             0.59
V11 I feel more comfortable if a product/service is shown in a realistic setting      0.65
V12 The most appealing product/service environments have a consistent theme           0.56
V13 The appearance of a product/service is very important to me                       0.53
V14 Product/service displays are more beneficial if they provide educational materials 0.41

Factor 2. Benefits (six items; eigenvalue = 4.2; variance explained = 10.8%; reliability = 0.79)
V15 Understanding how to use a product/service is important to me                     0.70
V16 Consistency in product/service performance makes me more confident                0.75
V17 I must benefit from the product/service I use                                     0.63
V18 A product/service should be safe to use                                           0.67
V19 Most products/services are designed to fit norms instead of individuals           0.54
V20 The utility of a product/service adds value for me                                0.53

Factor 3. Convenience (five items; eigenvalue = 2.7; variance explained = 5.8%; reliability = 0.80)
V21 The less time it takes to shop, the more likely I am to buy that product/service again 0.82
V22 The less time it takes to shop, the more likely I am to visit that store (web-based or otherwise) again 0.81
V23 The entire shopping process should be fast                                        0.56
V24 If a product/service is easy to locate, I am more likely to buy it                0.61
V25 The process of buying and using a product/service should be simple                0.46

Factor 4. Accessibility (five items; eigenvalue = 2.0; variance explained = 5.2%; reliability = 0.84)
V26 Product/service information should be readily available to me                     0.75
V27 Stores (web-based or otherwise) must be well organized so that I can find what I want 0.62
V28 The product/service must be easy for me to acquire                                0.73
V29 Products/services must always be readily available                                0.71
V30 Stores (web-based or otherwise) must be clutter free                              0.64

Factor 5. Utility (four items; eigenvalue = 1.9; variance explained = 4.9%; reliability = 0.70)
V31 Practicality is important for store (web-based or otherwise) designs              0.70
V32 The product/service must be distributed through appropriate channels              0.64
V33 There should be no surprises surrounding a product/service                        0.67
V34 Product/service safety is my major concern                                        0.53

Factor 6. Incentive (three items; eigenvalue = 1.4; variance explained = 3.5%; reliability = 0.77)
V35 I am more likely to buy a product/service if incentives are offered               0.81
V36 Incentives increase the chance that I will buy the featured product/service       0.79
V37 Price promotions that accompany a product/service are a bonus                     0.75

Factor 7. Trust (two items; eigenvalue = 1.3; variance explained = 3.3%; reliability = 0.84)
V38 My satisfaction with a store (web-based or otherwise) is the company's most important concern 0.83
V39 My satisfaction with the product/service is a company's most important concern    0.83

Notes: KMO = 0.86 (χ² = 3,784.8, p < 0.05); total variance explained = 59.0 percent; n = 199

Normality for each variable in the proposed model was examined to determine whether the data meet the normality assumption of the maximum likelihood estimation (MLE) method. Structural equation modeling (SEM) is highly sensitive to the distributional characteristics of data, and MLE as used in SEM is based on the assumption of multivariate normality (Hair et al., 2010). The normality test was performed using skewness and kurtosis. The values for all variables in the model were satisfactorily within conventional criteria of univariate normality (−3 to 3 for skewness and −10 to 10 for kurtosis), according to the guideline suggested by Kline (1998).
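This univariate screen amounts to computing each item's skewness and excess kurtosis and checking them against Kline's cut-offs. A minimal sketch, with hypothetical Likert responses standing in for the study's data:

```python
def moments(x):
    """Return sample skewness and excess kurtosis (population formulas)."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = (sum((v - m) ** 3 for v in x) / n) / s2 ** 1.5
    kurt = (sum((v - m) ** 4 for v in x) / n) / s2 ** 2 - 3
    return skew, kurt

def within_kline_limits(x):
    """Kline's (1998) screen: skewness within +/-3, kurtosis within +/-10."""
    skew, kurt = moments(x)
    return abs(skew) < 3 and abs(kurt) < 10

item = [4, 5, 6, 5, 7, 6, 5, 4, 6, 5]   # hypothetical seven-point responses
print(within_kline_limits(item))         # True: acceptable for MLE
```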
In past decades, a number of fit indices have been developed to evaluate model fit. In this study, the chi-square statistic divided by degrees of freedom, as well as model fit indices such as the comparative fit index (CFI; Bentler, 1990), the non-normed fit index (NNFI; Bentler and Bonett, 1980; Tucker and Lewis, 1983), and the root mean square error of approximation (RMSEA; Nevitt and Hancock, 2000), were examined to evaluate the adequacy of each model's fit. According to Hu and Bentler (1999) and Kline (1998), a χ²/df of less than 3 is considered a good fit. For CFI and NNFI, values should be greater than 0.9 to be considered a good fit, and an RMSEA value of less than 0.05 indicates a good fit. Furthermore, χ² difference tests were used to compare the adequate model against alternative measurement models. Both the modification indices provided by the AMOS output and the standardized residual matrix were examined to modify the models of the experience.
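These fit indices can be computed from the model and baseline (null-model) chi-square statistics using their conventional definitions. In the sketch below, the null-model figures are purely illustrative, since the paper does not report them; software such as AMOS computes these indices directly.

```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """CFI, NNFI (Tucker-Lewis index), and RMSEA from chi-square statistics,
    using the standard formulas."""
    d_model = max(chi2 - df, 0.0)                 # model non-centrality
    d_null = max(chi2_null - df_null, d_model)    # baseline non-centrality
    cfi = 1.0 - d_model / d_null
    nnfi = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1.0)
    rmsea = math.sqrt(d_model / (df * (n - 1)))
    return cfi, nnfi, rmsea

# Revised-model chi-square values from the paper; the null-model values
# (3600.0 on 325 df) are hypothetical placeholders
cfi, nnfi, rmsea = fit_indices(391.2, 278, chi2_null=3600.0, df_null=325, n=198)
print(round(cfi, 3), round(nnfi, 3), round(rmsea, 3))
```

Note that the RMSEA term depends only on the model's own chi-square, degrees of freedom, and sample size, which is why it can be checked against the reported value without the null model.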
In the first step, the initial measurement model of the experience (i.e. the initial first-order seven-factor model), specified based on the EFA results, was tested. This initial measurement model consisted of seven factors comprising 39 observed variables. Table III shows the overall fit indices for the three measurement models of the experience. According to the overall fit indices, the initial measurement model did not produce a good fit with the data: χ²(681) = 1,209.1, p < 0.01; χ²/df = 1.78; CFI = 0.84; NNFI = 0.80; RMSEA = 0.06. Thus, the initial measurement model needed further modification. Modification procedures were conducted to identify candidate observed variables for deletion from the measurement model; such procedures are customary to maximize fit (Bollen, 1989; Hair et al., 2010). Model modification therefore proceeded to identify observed variables that had low factor loadings, significant cross-loadings, and large residuals, using standardized factor loadings, the modification index (MI) for regression weights, and the MI for covariances between error terms, respectively. The objective of the MI test was to determine whether, in subsequent runs, models would better represent the data with certain parameters specified as free rather than fixed. As minimum cut-offs, it was suggested that a standardized factor loading should be greater than 0.50 and that each MI should not exceed 100 (Kline, 1998).
In the second step, 13 observed variables were identified with low factor loadings (below the suggested level of 0.50 for their expected constructs), shared factor loadings, or large shared residuals with other observed variables (MI above the suggested level of 100). These variables were removed from the initial first-order seven-factor model, and the revised first-order seven-factor model of the experience was re-estimated. That is, after removing observed variables based on the three criteria, the overall fit measures (CFI, NNFI, RMSEA, and the χ² difference) were used iteratively to determine whether the CFA model fitted the data well. Results from running this revised model showed that all fit indices suggested a good fit to the data: χ²(278) = 391.2, p < 0.01 (χ²/df = 1.4; CFI = 0.941; NNFI = 0.931; RMSEA = 0.045). According to the χ² difference test, the improvement in fit between the initial first-order and revised first-order seven-factor models was statistically significant (Δχ²(403) = 817.9, p < 0.05) after deleting 13 items from the initial measurement model. The revised first-order seven-factor model also surpassed the initial measurement model on all fit criteria, which confirmed that the modifications were meaningful.

Table III. Comparison of overall fit indices for three models of the experience

Model                                               χ²       df    χ²/df  NNFI   CFI    RMSEA  Δχ²
Initial first-order seven-factor model (39 items)   1,209.1  681   1.78   0.827  0.841  0.063
Revised first-order seven-factor model (26 items)     391.2  278   1.41   0.931  0.941  0.045  817.9*
Second-order seven-factor model (26 items)            425.0  292   1.46   0.923  0.931  0.048  33.8

Notes: NNFI = non-normed fit index; CFI = comparative fit index; RMSEA = root mean square error of approximation; n = 198

Through this process of evaluating model fit, the revised first-order seven-factor model of the experience resulted in seven factors consisting of 26 items.
4.3 Descriptive statistics for the index and its seven dimensions
Reliability reflects the internal consistency of the indicators measuring a given
latent variable (Tabachnick and Fidell, 2001). Means, standard deviations and
reliability scores of the total index and its seven factors, based on the second-half split
sample used for CFA, are presented in Table IV.
Observed variables should have a Cronbach's alpha of 0.7 or higher to be judged
reliable measures (Nunnally, 1978). Both the total index and its seven dimensions are
robust, demonstrating good reliability in the revised first-order seven-factor model.
The overall index has a standardized coefficient alpha of 0.91, while its seven
factors come in at: environment = 0.94; benefits = 0.87; convenience = 0.81;
accessibility = 0.87; utility = 0.70; incentive = 0.79; trust = 0.84.
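For illustration, Cronbach's alpha can be computed directly from raw item responses. A minimal sketch, using made-up Likert data rather than the study's:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the summed scale
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Hypothetical seven-point Likert responses to three items of one dimension
responses = [[7, 6, 7], [5, 5, 6], [6, 6, 6], [4, 5, 4], [7, 7, 6]]
alpha = cronbach_alpha(responses)  # a single internal-consistency estimate
```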
When looking at the mean scores and standard deviations, we found that the
average score of the total index was 5.53, while the dimension means ranged
from a high of 6.35 to a low of 4.71. This shows that the seven dimensions have a
hierarchical order in the consumer experience. Leading the way, with a mean score of
6.35, is the factor we call Benefits: the advantage the consumer gains from the
service/product experience. The benefits may be based on Maslow's hierarchy, the
consistency of delivery, customization for the consumer, or an element of all three.
Accessibility ranks second with a mean of 6.09. Accessibility focuses on distribution
channels and also relates to the cost, delivery, and availability of the service at the
moment a customer wishes to purchase. It includes speed of delivery (whether in
person or electronically), timeliness of the delivery, and the location of the delivery.
Table IV. Means, standard deviations and reliabilities for factors in the revised first-order seven-factor model of the experience

Factors        Number of variables  Mean  SD    Reliability (Cronbach's α)
Environment    7                    4.71  0.86  0.94
Benefits       4                    6.35  0.59  0.87
Convenience    3                    5.83  0.97  0.81
Accessibility  4                    6.05  0.71  0.87
Utility        3                    5.87  0.76  0.70
Incentive      3                    5.66  0.94  0.79
Trust          2                    5.52  1.38  0.84
Total scale    26                   5.53  0.57  0.91

Notes: Cronbach's α provides an estimate of inter-item reliability or consistency; mean scores are based on a seven-point Likert scale (1 = strongly disagree; 2 = moderately disagree; 3 = slightly disagree; 4 = neutral; 5 = slightly agree; 6 = moderately agree; 7 = strongly agree); n = 198

Somewhat akin to this dimension is Convenience, which comes in third with a mean
of 5.94. This factor is time-based and supports the marketing axiom "make it easy
for the customer to spend money". For an experience, convenience is signified by the
time and energy resources the customer must expend. Simply, the service offering
must not be complex for the customer to partake of. Thus convenience connotes the
amount of hassle involved in acquiring and using the service. We named the fourth factor
Incentive because it incorporates both monetary and non-monetary incentives as
inducements for the consumer to buy. An important aspect of a consumer experience is
whether the consumer received an incentive to purchase the service. That incentive can
positively leverage the degree of an experience by minimizing the psychic and financial
resources expended by the consumer. For example, several researchers have
highlighted the role of price promotion in creating the consumer experience and in
influencing purchasing decisions and brand loyalty (Ailawadi et al., 2009; Chiou-Wei
and Inman, 2008).
The fifth factor is Utility, with its mean score of 5.71. This dimension has an air of
sensibility to it because it incorporates the practical or functional nature of the
experience. The service must fit the purpose for which it was designed and for which it
was purchased by the consumer. Service performance, capabilities, and esthetics are
part of the utility experience. In sixth place, with a mean of 5.60, is Trust.
Trust is a dimension that is established over time. Embedded within it is the mantra that
perception is reality. Consumers, desiring a positive experience, trust the individuals
providing that service. Not to be confused with brand trust, trust in an experience operates at
the basic level of human interaction in providing that service. Finally, with a mean of
4.71, a consumer's experience includes the dimension of Environment, which is analogous
to Parasuraman et al.'s (1988) and Zeithaml et al.'s (1990) Tangibles factor. It embodies
the physical surroundings in which the consumer buys and is conveyed through the
five senses. In the context of an experience, environment includes physical
surroundings and sensory stimulations. Furthermore, there is an environment where
interaction takes place. This interaction is both implicit and explicit. Thus, the
environment of an experience is a verbal and kinesthetic engagement with the people,
physical surroundings, and sensory stimulations of the service provider. Walter et al.
(2010) recently used a critical incident technique to identify frequent drivers of the
customer service experience, and observed that the physical environment plays an
important role in shaping the consumer experience in the restaurant context.
4.4 Evaluating convergent and discriminant validity of the experience model
Convergent validity is evidenced when different indicators used to measure the same
construct yield strongly correlated scores. In SEM, convergent validity can be
assessed by reviewing the t-tests for the factor loadings (Hatcher, 1994; Kline, 1998).
Here, all factor loadings for indicators measuring the same construct were statistically
significant, demonstrating that all indicators effectively measure their corresponding
construct (Anderson and Gerbing, 1988) and supporting convergent validity. Figure 1
shows the factor loadings for the revised first-order seven-factor model of the
experience. Observed variables specified to measure each construct all had
relatively high loadings (statistically significant at p < 0.05), ranging from 0.53 to 0.89,
which is further evidence of convergent validity.
Discriminant validity was examined via the pair-wise correlations between factors
obtained from the revised first-order seven-factor model of the experience (26 items).
As a rule of thumb, Kline (1998) suggests that each pair-wise correlation between
factors should not exceed 0.85. As Table V shows, the estimated correlations between
factors were not excessively high, and none of the pairs' 95 percent confidence
intervals approached 1.00, thus providing support for discriminant validity (Anderson
and Gerbing, 1988).

[Figure 1. Results of confirmatory factor analysis for the revised first-order seven-factor model of the experience]
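Kline's 0.85 rule of thumb is easy to screen for programmatically. A sketch, using the inter-factor correlations reported in Table V:

```python
import numpy as np

# Lower-triangular inter-factor correlations from Table V
# (order: environment, benefits, convenience, accessibility, utility, incentive, trust)
lower = [
    [1.0],
    [0.308, 1.0],
    [0.290, 0.244, 1.0],
    [0.318, 0.504, 0.578, 1.0],
    [0.457, 0.595, 0.360, 0.603, 1.0],
    [0.297, 0.333, 0.201, 0.192, 0.228, 1.0],
    [0.304, 0.297, 0.233, 0.252, 0.206, 0.094, 1.0],
]
corr = np.zeros((7, 7))
for i, row in enumerate(lower):
    corr[i, : len(row)] = row
corr = corr + corr.T - np.eye(7)      # mirror the lower triangle to full symmetry

off_diag = corr[~np.eye(7, dtype=bool)]
max_r = off_diag.max()                # 0.603 (accessibility-utility pair)
discriminant_ok = max_r < 0.85        # Kline (1998) rule of thumb holds
```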
4.5 Testing an alternative model: second-order seven-factor model of the experience
The first-order model of the experience implies that the seven factors (environment,
benefits, convenience, accessibility, utility, incentive, and trust) are correlated, but not
governed by a common latent factor. Alternatively, the experience model may be
operationalized as a second-order model, where the seven factors are governed by a
higher-order factor, i.e. consumer experience. In other words, in the second-order model
of the experience, the seven factors are presumed to have a common cause that accounts
for their intercorrelations (Kline, 1998). To see if the experience model reflects a
higher-order construct explained by a number of related dimensions, we also tested the
second-order seven-factor model of the experience. Evaluating a second-order CFA
model was an additional effort to establish strong validity and reliability (Browne and
Cudeck, 1993; Marsh and Hocevar, 1985).
As shown in Figure 2, the second-order standardized factor loadings of the
experience model are 0.51 for environment, 0.66 for benefits, 0.57 for convenience, 0.78
for accessibility, 0.79 for utility, 0.34 for incentive, and 0.36 for trust. While all
second-order loadings on overall experience were positive and significant, the
factor loadings for incentive and trust were relatively small.
According to the overall model statistics, the second-order seven-factor model of the
experience produced a good fit with the data, χ²(292) = 425.0, p < 0.01
(χ²/df = 1.46; CFI = 0.93; NNFI = 0.92; RMSEA = 0.05) (see Table III). While the
fit of the second-order seven-factor model was as good as that of the first-order
seven-factor model, the revised first-order seven-factor model of the experience
resulted in a better fit to the data than the second-order seven-factor model.
The χ² difference test showed that the improvement in fit of the first-order over
the second-order seven-factor model of the experience was not statistically
significant, Δχ²(14) = 33.8, p > 0.05. These results suggest that the revised
first-order seven-factor model of the experience, consisting of environment (seven
items), benefits (four items), convenience (three items), accessibility (four items), utility
(three items), incentive (three items), and trust (two items), provides the best
representation of the data in this study.
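The absolute fit statistics reported above follow directly from χ², df, and the sample size. A sketch using the common RMSEA formula (Browne and Cudeck, 1993), with n = 198 as reported:

```python
import math

def rmsea(chi2_stat, df, n):
    """Root mean square error of approximation for a fitted SEM."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# Revised first-order model: chi2(278) = 391.2
first_order = round(rmsea(391.2, 278, 198), 3)   # 0.045, as reported
# Second-order model: chi2(292) = 425.0
second_order = round(rmsea(425.0, 292, 198), 3)  # 0.048, as reported
```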

Table V. Correlations among factors for the revised first-order seven-factor model of the experience

Factor     1      2      3      4      5      6      7
Factor 1   1
Factor 2   0.308  1
Factor 3   0.290  0.244  1
Factor 4   0.318  0.504  0.578  1
Factor 5   0.457  0.595  0.360  0.603  1
Factor 6   0.297  0.333  0.201  0.192  0.228  1
Factor 7   0.304  0.297  0.233  0.252  0.206  0.094  1

Notes: Factor 1 = Environment; Factor 2 = Benefits; Factor 3 = Convenience; Factor 4 = Accessibility; Factor 5 = Utility; Factor 6 = Incentive; Factor 7 = Trust; n = 198

[Figure 2. Results of confirmatory factor analysis for the second-order seven-factor model of the experience]

5. Conclusions and implications


From this research, three major conclusions may be drawn. First, EFA established the
dimensionality of the experience construct. Seven factors were identified: environment,
benefits, convenience, accessibility, utility, incentive, and trust. Each factor has
satisfactorily high reliability (greater than 0.70). Through EFA, the experience
model was refined from 126 items to 39 items accounting for 59 percent of total
variance. Subsequently, CFA confirmed that the revised first-order seven-factor model
of the experience provides the best representation of the data in this study. The results
of the CFA support those of the EFA in terms of the dimensionality of the experience,
but the model fit was improved significantly by dropping 13 items from the EFA model.
More importantly, CFA provides a more rigorous interpretation of dimensionality than
EFA does. The final experience index consists of 26 items. The 26-item
index is a reliable and valid measure of the underlying components of a
consumer's experience.
Second, convergent validity and discriminant validity were demonstrated by factor
loadings and by correlations between factors in the CFA model, respectively. As an
additional validation effort, the fit of the second-order model was compared to that of
the revised first-order seven-factor model of the experience, and the revised
first-order seven-factor model was found to fit better than the second-order
seven-factor model.
Third, this study provides evidence that the experience construct is
multi-dimensional and hierarchical in nature. For example, led by benefits
and accessibility, the index supports the notion that, in today's competitive economy,
value and time are driving forces.
Thus, we have confidence that the 26-item CEI, represented by the revised first-order
seven-factor model, provides a valid, reliable and valuable measure of a consumer's
experience. Built on the old business axiom that "you cannot manage what you cannot
measure", this validated CEI tool can provide businesses with an effective new
management tool.
Because consumers now want memorable experiences, Pine and Gilmore (1999)
argue that businesses can no longer be mere suppliers, but should be stagers of events
designed to be experienced. But until now, the dimensions of the experience construct
have not been empirically extracted. Because this research verified extraction of seven
factors inherent in the experience, organizations now have a valuable tool by which
they can help move their brand into the experience economy. By understanding these
seven dimensions, management can develop effective marketing and promotion
strategies for providing memorable experiences for customers.
The CEI can be of value to firms in three important ways. First, managers can focus
their efforts on these seven critical elements of the consumer experience. These
elements provide a checklist of sorts that can address such issues as:
(1) Is the total environment consistent with brand image and customer
expectations?
(2) Does the customer understand the benefits/value of the experience?
(3) Can the customer access products/services easily, i.e. when and where they
want?
(4) Does the experience give what is promised in brand promotion?
(5) What are the underlying incentives for customers beyond simply a discount?
(6) What stays in the customer's memory about the experience?
Second, the CEI can help management measure the effectiveness of its customer
experience management (CEM) efforts. With identification of the seven experience


dimensions validated, managers can measure how important each attribute is to their
target markets. Using parallel questions, they can then survey customers about their
perceptions of the experience they actually received, i.e. how well management is doing
on each of the seven dimensions. Differences in scores can be calculated for each of the
seven components as well as for the overall CEI, thereby showing managers in which
experience areas the brand is strong and/or weak. By comparing the two scores, the
organization will be able to judge how well it is doing in CEM.
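Such a gap analysis can be computed dimension by dimension. A sketch with hypothetical importance and perception means (none of these numbers come from the study):

```python
# Hypothetical mean scores on a seven-point scale for each CEI dimension
importance = {"environment": 5.9, "benefits": 6.5, "convenience": 6.0,
              "accessibility": 6.2, "utility": 5.8, "incentive": 5.4, "trust": 6.1}
perception = {"environment": 5.2, "benefits": 6.3, "convenience": 5.1,
              "accessibility": 6.0, "utility": 5.9, "incentive": 5.5, "trust": 5.6}

# Negative gap = delivery falls short of importance; positive = exceeds it
gaps = {dim: round(perception[dim] - importance[dim], 2) for dim in importance}
weakest = min(gaps, key=gaps.get)   # dimension with the largest shortfall
overall_gap = round(sum(gaps.values()) / len(gaps), 2)
```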
Third, by extracting, identifying and measuring the dimensions of the
multi-faceted experience construct, this study puts the final piece of the
Knutson et al. (2006) holistic model of the consumer buying process in place. Thus,
organizations now have a tool to measure a consumer's experience level, in addition
to their service quality, value and satisfaction levels. This means that businesses can
calculate the statistical relationships among these four individual components for their
customers. Thus, firms will have a more comprehensive understanding of the entire
consumer decision journey from pre-purchase through post-purchase. While the
number of individual touch points varies by brand, industry, and consumer, even
mid-size companies must manage more than 100 such points (Spengler and Wirt, 2009).
The validated CEI can be used in other ways too. For example, markets can be
segmented into groups based on their CEI scores (i.e. high, medium, low). By analyzing
each segment's characteristics, the business will gain insights for more effective CEM.
Scores can also be used to group properties or units by regions or districts. Scores can
be compared over time, and by carefully examining the characteristics of "winners" and
"losers", management can discover key factors that facilitate or hinder delivery of a
high-quality consumer experience. Good performers can be rewarded and their
procedures replicated in other properties; weak performers can be targeted for
improvement. The format of the questionnaire also makes it easy to include questions
about competitors, giving managers useful information about how their brand
compares to its competition in CEM.
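The high/medium/low segmentation described above amounts to a simple binning rule. A sketch (the cut-off values are illustrative, not prescribed by the index):

```python
# Hypothetical overall CEI scores (mean of the 26 items on a seven-point scale)
scores = [6.4, 5.8, 3.9, 5.1, 6.8, 4.4, 5.5, 2.9]

def segment(score, low_cut=4.5, high_cut=6.0):
    """Assign a respondent to a CEI segment using illustrative cut-offs."""
    if score >= high_cut:
        return "high"
    if score >= low_cut:
        return "medium"
    return "low"

segments = [segment(s) for s in scores]
# e.g. 6.4 -> "high", 5.8 -> "medium", 3.9 -> "low"
```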
5.1 Limitations and future research
This study utilized a convenience sample. The results might have varied with a group
of participants of different ages or other socioeconomic compositions. Further evidence
of external validity should be sought using other market samples in future research.
Along with the issue of using a convenience sample, it is important to acknowledge
some limitations associated with the web survey, notwithstanding that this method
saves time and cost in data collection. In particular, researchers have observed that
important concerns to be addressed are coverage bias (bias due to sampled individuals
not having, or choosing not to use, access to the internet; Couper, 2000; Solomon,
2001) and nonresponse bias (bias arising when survey respondents differ from
non-respondents in terms of demographic or attitudinal variables; Sax et al.,
2003). Because we had to rely on contact persons in four distribution channels to
disseminate our online survey links to their members, we were unable to determine
the number of subjects who had access to our survey.
Another limitation is that this study concentrates on the experience of a general
service system, instead of focusing on one specific industry or business sector. While
this validated CEI tool is designed to measure a general consumer experience, it would
be beneficial to develop a CEI that targets a specific service provider, because each
service provider has specific characteristics. Based on the validated CEI tool, a
practical example of the CEI measuring consumer experience in the lodging industry is
provided in the Appendix (see Table AI) for practitioners. Applicability of experience
measures should be evaluated and further developed in specific, but diverse, sectors
such as banks, retailers, restaurants, health care, and airlines. We propose this sample
scale as a direction for future research.
Finally, to evaluate construct validity, it is necessary to evaluate convergent,
discriminant, and nomological validity (Hair et al., 2010). While we evaluated
convergent and discriminant validity of the experience model, we could not evaluate
the nomological validity of the experience construct. According to Hair et al. (2010,
p. 701), nomological validity can be evaluated by demonstrating that the constructs are
related to other constructs not included in the model in a manner that supports the
theoretical framework. Experience may be related in predictable ways to other
consumer-related phenomena, and these potential relationships offer additional avenues
for future research. A task for researchers is to identify the links among service quality,
value, satisfaction and experience, as proposed in the Knutson et al. (2006) model. This
will give us a clearer understanding of customers' decision-making processes relative to
their consumer behavior, as well as an opportunity to evaluate more rigorously the
construct validity of the experience, including nomological validity.
References
Ailawadi, K., Beauchamp, J., Donthu, N., Gauri, D. and Shankar, V. (2009), Communication and
promotion decisions in retailing: a review and directions for future research, Journal of
Retailing, Vol. 85 No. 1, pp. 42-55.
Anderson, J. and Gerbing, D. (1988), Structural equation modeling in practice: a review and
recommended two-step approach, Psychological Bulletin, Vol. 103 No. 3, pp. 411-23.
Babbie, E.R. (2007), The Practice of Social Research, Thomson Wadsworth, Belmont, CA.
Bentler, P. (1990), Comparative fit indexes in structural models, Psychological Bulletin, Vol. 107
No. 2, pp. 238-46.
Bentler, P. and Bonett, D. (1980), Significance tests and goodness-of-fit in the analysis of
covariance structures, Psychological Bulletin, Vol. 88 No. 3, pp. 588-606.
Bollen, K.A. (1989), Structural Equations with Latent Variables, Wiley, New York, NY.
Browne, M. and Cudeck, R. (1993), Alternative ways of assessing model fit, in Bollen, K. and
Long, J. (Eds), Testing Structural Equation Models, Sage, Newbury Park, CA, pp. 136-62.
Cattell, R. (1966), The scree test for the number of factors, Multivariate Behavioral Research,
Vol. 1, pp. 629-37.
Chiou-Wei, S. and Inman, J. (2008), Do they like electronic coupons? A panel data analysis,
Journal of Retailing, Vol. 84 No. 3, pp. 297-307.
Couper, M. (2000), Web surveys: a review of issues and approaches, Public Opinion Quarterly,
Vol. 64, pp. 464-94.
Dillman, D. (2000), Mail and Internet Surveys: The Tailored Design Method, Wiley, New York, NY.
Gentile, C., Spiller, N. and Noci, G. (2007), How to sustain the customer experience: an overview
of experience components that co-create value with the customer, European Management
Journal, Vol. 25 No. 5, pp. 395-410.
Gilmore, H. and Pine, B. II (2002), Differentiating hospitality operations via experiences:
why selling services is not enough, Cornell Hotel and Restaurant Administration
Quarterly, Vol. 43 No. 3, pp. 87-96.
Grewal, D., Levy, M. and Kumar, V. (2009), Customer experience management in retailing: an
organizing framework, Journal of Retailing, Vol. 85 No. 1, pp. 1-14.

Development of
the CEI

129

MSQ
21,2

130

Hair, J., Black, W., Babin, B. and Anderson, R. (2010), Multivariate Data Analysis, Prentice-Hall,
Upper Saddle River, NJ.
Hatcher, L. (1994), A Step by Step Approach to Using the SAS System for Factor Analysis and
Structural Equation Modeling, SAS, Cary, NC.
Hu, L. and Bentler, P. (1999), Cut-off criteria for fit indexes in covariance structure analysis:
conventional criteria versus new alternatives, Structural Equation Modeling, Vol. 6,
pp. 1-55.
Kaiser, H. (1960), The application of electronic computers to factor analysis, Educational and
Psychological Measurement, Vol. 20, pp. 141-51.
Kline, R. (1998), Principles and Practices of Structural Equation Modeling, Guilford, New York,
NY.
Knutson, B., Beck, J., Kim, S. and Cha, J. (2006), Identifying the dimensions of the experience
constructs, Journal of Hospitality & Leisure Marketing, Vol. 15 No. 3, pp. 31-47.
LaSalle, D. and Britton, T.A. (2003), Priceless: Turning Ordinary Products into Extraordinary
Experiences, Harvard Business School Press, Boston, MA.
Marsh, H. and Hocevar, D. (1985), Application of confirmatory factor analysis to the study of
self-concept: first- and higher order factor models and their invariance across groups,
Psychological Bulletin, Vol. 97, pp. 562-82.
Meyer, C. and Schwager, A. (2007), Understanding customer experience, Harvard Business
Review, Vol. 85 No. 2, pp. 116-26.
Nevitt, J. and Hancock, C. (2000), Improving the root mean square error of approximation for
non-normal conditions in structural equation modeling, Journal of Experimental
Education, Vol. 68, pp. 251-68.
Nunnally, J. (1978), Psychometric Theory, McGraw-Hill, New York, NY.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality, Journal of Retailing, Vol. 64 No. 1,
pp. 12-40.
Pine, B. II and Gilmore, J. (1999), The Experience Economy, Harvard Business School Press,
Boston, MA.
Sax, L., Gilmartin, S. and Bryant, A. (2003), Assessing response rates and nonresponse bias in
web and paper surveys, Research in Higher Education, Vol. 44 No. 4, pp. 409-32.
Shaw, C. and Ivens, J. (2005), Building Great Customer Experiences, Macmillan, New York, NY.
Solomon, D. (2001), Conducting web-based surveys, Practical Assessment, Research
& Evaluation, Vol. 7 No. 19, available at: http://pareonline.net/getvn.asp?v=7&n=19
(accessed September 21, 2010).
Spengler, C. and Wirt, W. (2009), The consumer decision journey, Io New Management, No. 3,
pp. 1-5.
Tabachnick, B. and Fidell, L. (2001), Using Multivariate Statistics, Allyn and Bacon, Needham
Heights, MA.
Tucker, L. and Lewis, C. (1973), A reliability coefficient for maximum likelihood factor
analysis, Psychometrika, Vol. 38, pp. 1-10.
Verhoef, P.C., Lemon, K.N., Parasuraman, A., Roggeveen, A., Tsiros, M. and Schlesinger, L.A.
(2009), Customer experience creation, determinants, dynamics, and management
strategies, Journal of Retailing, Vol. 85 No. 1, pp. 31-41.
Walter, U., Edvardsson, B. and Ostrom, A. (2010), Drivers of customers service experience:
a study in the restaurant industry, Managing Service Quality, Vol. 20 No. 3, pp. 236-59.
Zeithaml, V.A., Parasuraman, A. and Berry, L.L. (1990), Delivering Quality Service: Balancing
Customer Perceptions and Expectations, The Free Press, New York, NY.

Appendix

Table AI. Proposed lodging experience items

Dimensions and items

Environment
  The hotel's surroundings entertain me
  A stimulating hotel visage makes me more likely to stay
  The hotel's environment is enjoyable
  The hotel's environment is stimulating to the senses
  The hotel's theme makes me more likely to buy
  Interaction with the hotel's product, service, and staff makes staying at the hotel more interesting
  The hotel's product and service motivates me to purchase
  The hotel's atmosphere has an impact on my interactions
  Music in the hotel enhances my interaction with the hotel
  The presentation of the hotel's services should be interactive
  I am more comfortable if the hotel's services are consistent with its rating
  The most appealing of the hotel's services are consistent with its rating
  The appearance of the hotel and its service personnel is important to me

Benefits
  Hotel point-of-purchase displays are more beneficial if they provide educational information
  Understanding how to use the hotel's various services is important to me
  Consistency in the hotel's service assures me of a benefit
  I benefit from the hotel's service in order to be satisfied
  The hotel's product and service provides safety and security
  The hotel's service is tailored to the individual
  The hotel's service level is of value to me

Convenience
  The less time it takes me to receive the service I desire, the more likely I am to use this service again
  The less time it takes me to receive the service I desire, the more likely I am to return to this hotel
  The service process in this hotel should be quick

Accessibility
  The easier it is to find the hotel, the more likely I am to stay there
  The process of reserving accommodations and staying at the hotel is simple
  Information about the accommodations and service is readily available at this hotel
  This hotel is well organized, so that I can find what I want
  It is easy for me to check in at this hotel
  Accommodations and service are readily available at this hotel

Utility
  The hotel's public areas must be clutter-free
  The design of this hotel is practical
  I am able to reserve accommodations through channels I frequent
  There are no surprises with the products or services at this hotel
  When I stay at this hotel, safety is my major concern

Incentive
  I am more likely to stay at this hotel if incentives are offered
  Incentives increase the chance I will stay at this hotel
  A price promotion added to what I receive in service is a bonus

Trust
  My satisfaction with the hotel brand is the management's most important concern
  My satisfaction with the hotel's product and service is the management's most important concern

Note: Factor loadings were all significant at p < 0.01


About the authors


SeungHyun Kim, PhD, is an Assistant Professor in The School of Hospitality Business, Michigan
State University. He mainly teaches hospitality research methods and hospitality marketing.
Prior to coming to MSU, Dr Kim taught at Buffalo State, State University of New York in its
Hospitality and Tourism Management Department. For several years prior to that, he was a key
planning team member in the areas of real estate and resort development at both Samsung
Everland Co. Ltd and Toshiken Korea Co, Ltd. Dr Kim regularly collaborates with industry
practitioners for various research projects, including CMAA (Club Managers Association of
America), Ecolab, Meetings Michigan, and Travel Michigan. SeungHyun Kim is the
corresponding author and can be contacted at: kimseung@bus.msu.edu
JaeMin Cha, PhD, is an Assistant Professor of Foodservice Management in The School of
Hospitality Business, Michigan State University. Her research currently focuses on service
climate in foodservice operations, sustainability, experience economy, and emotional intelligence.
Her research has been published in both academic and industry journals, including Journal of
Hospitality and Tourism Research, International Journal of Hospitality Management, and Cornell
Quarterly. Among her prestigious awards are the Homer Higbee Scholarship, sponsored by
MSU's Office of International Students and Scholars, and the Award of the International
Foodservice Editorial Council.
Bonnie J. Knutson, PhD, is widely known as an authority on emerging lifestyle trends, the
customer experience, brand positioning, and creative marketing strategies. A Professor in The
School of Hospitality Business, Michigan State University, she has had numerous articles appear
in both refereed journals and business publications, and is Editor-Emeritus of the Journal of
Hospitality and Leisure Marketing. Bonnie J. Knutson has been awarded the Withrow Award for
teaching and research, the Golden Key Teaching Excellence Award, and has been designated an
Advertising Education Foundation National Scholar, a Global Institute Affiliate Scholar, and a
Media Expert by Michigan State University.
Jeffrey A. Beck, PhD, is an Associate Professor in The School of Hospitality Business at
Michigan State University where he teaches courses in sales and marketing. He has over 15 years
of experience in the hospitality industry, which includes ten years with Marriott Lodging. He is
widely published in academic journals including research on consumer behavior, ethics in sales
and marketing, and lodging revenue manager activities. He earned his Bachelor's degree in
Marketing from the Kelley School of Business at Indiana University, and his Master's and
Doctoral degrees from Purdue University.
