
Computers & Education 58 (2012) 1318–1338


A hybrid approach to develop an analytical model for enhancing the service quality of e-learning

Hung-Yi Wu a,*, Hsin-Yu Lin b

a Department of Business Administration, National Chiayi University, No. 580, Xinmin Rd., Chiayi City 60054, Taiwan
b Department of Business and Entrepreneurial Management, Kainan University, No. 1 Kainan Rd., Luzhu Shiang, Taoyuan 33857, Taiwan

Article history:
Received 23 September 2011
Received in revised form 13 November 2011
Accepted 22 December 2011

Keywords:
Distance education and telelearning
Evaluation methodologies
Interdisciplinary projects

Abstract

The digital content industry is flourishing as a result of the rapid development of technology and the widespread use of computer networks. As has been reported, the global e-learning market (i.e., distance education and telelearning) will reach USD 49.6 billion in 2014. However, to retain and/or increase the market share associated with e-learning, it is important to maintain or increase service quality in this sector. This research was intended to develop an analytical model for enhancing the service quality of e-learning using a hybrid approach from the perspective of customers. The evaluation methodology integrates three methods: rough set theory (RST), quality function deployment (QFD), and grey relational analysis (GRA). First, important criteria affecting service quality (referred to as customer requirements (CRs)) and relevant technical information (referred to as technical requirements (TRs)) for e-learning are compiled from an extensive literature review. Using the data regarding customer satisfaction collected from a questionnaire survey, RST is then used to reduce the number of attributes considered and to determine the CRs. Furthermore, in consultation with domain experts, QFD is used together with GRA to analyze the interrelationships between CRs (which represent the voice of the customer (VOC)) and TRs (which represent the voice of the engineer (VOE)) and to create an order of priority for the TRs given the CRs based on objective weighting using the entropy value. An illustrative example is provided: an empirical analysis of the students who participated in the e-learning program at a particular university. The results reveal that of the fourteen TRs, "Curriculum development" has the greatest effect on e-learning service quality, followed by "Evaluation", "Guidance and tracing", "Instructional design", and "Teaching materials". Both the CRs and the TRs may vary depending on the individual organization. Nevertheless, the proposed model can be a useful point of reference for e-learning service providers, helping them to identify the TRs that they can use to enhance service quality and to target vital CRs.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

According to the American Society for Training & Development (ASTD), electronic learning (e-learning), also known as distance
education and telelearning, includes a wide range of applications and processes, such as web-based learning, computer learning, learning in
virtual classrooms, and digital collaboration. In all of these activities, course content can be delivered via the Internet, via regional extranets,
using audio/video technology, via satellite transmission, and using interactive TV and CD-ROM technology (ASTD, 2011). Kruse defined e-learning as the use of technology to deliver learning programs and training programs through CD-ROMs, the Internet, LAN networks, and
wireless networks to promote action learning (Kruse, 2003). E-learning is mainly conducted by electronic means to spread knowledge and
facilitate knowledge acquisition and use. E-learning course materials can be presented in modules or can be organized on the basis of
particular goals that can be accessed or addressed synchronously or asynchronously without time constraints (Wentling et al., 2000). In
summary, in e-learning activities, learners and teachers use a digitalized approach, employing technology products such as computers, CDs,
PDAs, e-readers, and/or the Internet as a tool for instruction.

* Corresponding author. Tel.: +886 5 2732838; fax: +886 5 2732826.


E-mail addresses: hywu@mail.ncyu.edu.tw, whydec66@yahoo.com.tw (H.-Y. Wu), melobeelin@gmail.com (H.-Y. Lin).

0360-1315/$ – see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compedu.2011.12.025

In response to the extraordinary changes that have recently taken place in global networking and digital technologies, in Taiwan, the
National Science Council (NSC) ran the "Taiwan e-Learning Program" from 2003 to 2007. The program accomplished several achievements during those five years: (1) it rapidly increased the output value of the e-learning industry in Taiwan from 0.7 billion to 12 billion; (2) it contributed to the implementation of e-learning in more than 20 industries; and (3) it increased the percentage of large-scale entrepreneurs using e-learning from 14% to 52% (IDB, 2010a). However, as indicated in the report on the status and value of the e-learning industry included in the 2010 project plan "Promotion of e-Learning and Archive Industries Plan – Transformation & Enhancement of Learning Industries Sub-Plan" by the Industrial Development Bureau (IDB), Ministry of Economic Affairs (MOEA), Taiwan (IDB, 2010b), as a result of the financial tsunami, the overall performance of the global e-learning industry has not been as strong as expected since 2008. The economic climate in Taiwan in particular has been challenging since that time. However, from 2008 to 2009, under a government program for expanding domestic demand and as a result of efforts in related industries, the output value of Taiwan's e-learning industry grew by 14.2% to NTD 15.317 billion (Kuo, 2010).
By 2010, most of Taiwan's enterprises had emerged from the financial turmoil and were willing to spend more on staff training to increase market demand. In addition, as the government has continued to promote e-learning over the years, most government agencies, educational institutions, and large and medium-sized enterprises have incorporated e-learning platforms into their information systems. In the economic upturn, driven by related industrial policy, the total value of the e-learning industry in Taiwan reached NTD 26.569 billion in 2010 (IDB, 2011). In addition, the 2009 Ambient Insight report entitled "The US Market for Self-Paced eLearning Products and Services" indicates that the global market is estimated to reach USD 49.6 billion in 2014 and that the annual compound growth rate is 12.8% (IDB, 2010a).
Overall, the changes in the direction of national development policies, technological developments, and the popularization of computer
networks have facilitated the rise and success of the e-learning industry. To accelerate the development of the industry and enhance
national competitiveness, the MOEA in Taiwan developed the "Two Trillion, Twin Stars" industry development program, which outlines Taiwan's policy direction with regard to core and emerging industries. The policy is consistent with the "Challenge 2008 National Development Plan" proposed by the Council for Economic Planning and Development (CEPD), Executive Yuan. The phrase "Twin Stars" refers to two future star industries: the digital content industry (including software, video games, media, publishing, music, animation, digital learning, and web services) and the biotechnology industry (CEPD, 2002). According to the Market Intelligence Center (MIC) (2005), from 2005 to 2008, Taiwan's e-learning market expanded from NTD 6.75 billion to NTD 20 billion. By 2010, the global market size of the e-learning market was USD 50 billion. The Gartner Group Research Institute in the United States has also indicated that from 2006 to 2011, the world's e-learning sales will have grown 14.5% annually (Eid, 2007).
There are various definitions of the term "e-learning". In short, e-learning is a digitalized approach to learner-oriented learning activities that employs information technology and the Internet. One of the main benefits of e-learning is that it provides a network platform through which users (learners) can interact directly without restrictions on scheduling or location. Because e-learning is learner-centred and initiated by the learners themselves, how to develop learning platforms that meet user (customer) requirements has become an important issue (Gunasekaran, Ronald, & Shaul, 2002; Kruse, 2003; Rosenberg, 2001; Wang, 2003; Wentling et al., 2000). Some of the current studies of e-learning focus on educational aspects of this learning mode (e.g., learning achievements and attitudes, cooperative learning assessments) (Chang & Chen, 2009; Cheng, Chen, Wei, & Chen, 2009; Liaw, Huang, & Chen, 2007), whereas some studies stress technical aspects of such learning (e.g., interfaces, standards, systems) (George & Labas, 2008; Gonzalez-Barbone & Anido-Rifon, 2010; Ismail, 2002; Martinez-Ortiz, Sierra, Fernández-Manjón, & Fernández-Valmayor, 2009; Rey-López, Díaz-Redondo, Fernández-Vilas, Pazos-Arias, García-Duque, Gil-Solla et al., 2011). In addition, most of the related research has been conducted to measure learner satisfaction with e-learning and/or to evaluate e-learning products/systems on the basis of particular criteria (Alptekin & Karsak, 2011; Ardito, Costabile, De Marsico, Lanzilotti, Levialdi, Roselli et al., 2006; Levy, 2007; Wang, 2003). For instance, statistical analyses have been widely used in past studies on the subject (Paechter, Maier, & Macher, 2010; Selim, 2007; Sun, Tsai, Finger, Chen, & Yeh, 2008; Wan, Wang, & Haggerty, 2008), many of which discuss only one type of e-learning or investigate the relationships among various dimensions/factors. However, relatively few papers have attempted to rank the possible techniques on the basis of the satisfaction of e-learning users with those techniques, ultimately attempting to attain higher service quality within the e-learning industry.
Rough set theory (RST) is used to identify core attributes (Li, Tang, Luo, & Xu, 2009). Researchers can use quality function deployment
(QFD) to analyze the associations between customer requirements (CRs) and technical requirements (TRs) and to identify the TRs of product
design that most affect customer needs (Hauser & Clausing, 1988). Diverse mathematical models of QFD have evolved over the years (Li, Guo,
& Dong, 2008). Grey relational analysis (GRA) can be used in multi-criteria decision-making problems as well. GRA can be used to
comprehensively, quantitatively assess various factors; it is a simple and accurate form of analysis (Deng, 1989). According to a review of the
literature related to RST, QFD, and GRA, some researchers have combined RST with QFD to meet customer needs and solve product
development problems (Li & Wang, 2010; Zhai, Khoo, & Zhong, 2010). Other researchers have integrated QFD and GRA to make decisions regarding business operations and service quality (Chen & Chou, 2011). More recently, a combination of RST and GRA has been used in predictive models of business failure and in design concept evaluation for product development (Lin, Wang, Wu, & Chuang, 2009; Zhai, Khoo, & Zhong, 2009). Huang and Jane (2009) propose a hybrid model for stock market forecasting and portfolio selection based on GRA and RST.
However, very little research has taken advantage of all three tools (i.e., RST, QFD, and GRA). Therefore, this study aims to integrate the three
methods, RST, QFD and GRA, to construct an analytical model for improving service quality within e-learning by determining the critical TRs
given the key CRs. The objectives of this research are (1) to identify the CRs (representing the voice of the customer (VOC)) and TRs
(representing the voice of the engineer (VOE)) of e-learning from the related literature and (2) to investigate the relationships between the
CRs and TRs and thus determine what technical aspects should be implemented to improve service quality. The aim of this study is thereby
to provide information that will be helpful to e-learning service providers.
The rest of this paper is organized as follows. Section 2 provides a brief overview of the e-learning industry and important CRs and TRs as
indicated by the literature review. The proposed research analysis model, including the basic concepts that are foundational to the three
adopted methods (RST, QFD and GRA), and the questionnaire design and data collection process are introduced in Section 3. Section 4
presents the empirical analysis, including the customer satisfaction data collected from learners using a targeted e-learning platform.
Section 4 also demonstrates how RST is used to identify the CRs and how QFD analysis is used to determine the relationship matrix for the

CRs and TRs via expert questionnaires, and how GRA is then used to prioritize the TRs. A discussion is provided in Section 5. Finally, research conclusions, managerial implications, and recommendations for future research are summarized in Section 6.

2. CRs and TRs of e-learning

2.1. The CRs of e-learning

Kotler (1999) proposes that customer satisfaction has a significant impact on business. A satisfied customer will usually buy products again and discuss his or her satisfaction with those products with others, ignoring the competition's brand advertising and not buying products from other companies. Oliver (1999) found that there is a positive relationship between customer satisfaction and customer loyalty and that the pursuit of customer satisfaction is the only reasonable and feasible goal for firms. In this competitive environment, companies may provide excellent service and meet diverse customer needs yet still not be able to retain customers because they do not establish sustainable customer relationships. Customer loyalty, after all, is the foundation for the long-term profitability of businesses (Jen & Hu, 2003). Customer satisfaction results from meeting customer expectations during the product or service life cycle (Flott, 2002). Service quality can be regarded as the antecedent of customer satisfaction (Anderson & Sullivan, 1993; Bolton & Drew, 1991; Cronin & Taylor, 1992), and a higher level of service quality will increase customer satisfaction (Cronin & Taylor, 1992). Cronin, Brady, and Tomas (2000) suggest that service quality and customer satisfaction have a positive influence on consumer behaviour. Therefore, to retain customers, businesses must understand the importance of service, create customer value, listen to the voices of their customers, and make service quality and customer satisfaction their goals, striving to meet customer needs. Accordingly, this study seeks to determine the CRs that have the greatest impact on service quality within e-learning based on customer needs and customer satisfaction.
Wang (2003) developed a set of tools for evaluating students' satisfaction with e-learning systems. This approach divides e-learning satisfaction into four major dimensions (the learner interface, the learning community, content, and personalization) that include 17 criteria. Sun et al. (2008) proposed an integrated model with six dimensions (learners, instructors, courses, technology, design, and the environment) that include 13 criteria. They used a multiple regression analysis to verify the significance of the variables. The results of their study show that seven criteria critically affect learner satisfaction: learner computer anxiety, instructor attitudes towards e-learning, e-learning course flexibility, e-learning course quality, perceived usefulness, perceived ease of use, and diversity of assessments. Shee and Wang (2008) used a multi-criteria decision-making methodology from the perspective of learner satisfaction to evaluate web-based e-learning systems. The analytic hierarchy process (AHP) was used to derive a hierarchy of four dimensions (the learner interface, the learner community, system content, and personalization) that included a total of 13 criteria and then to calculate the relative importance of those criteria. The learner interface was found to be the most important evaluation dimension for e-learning systems. Wu, Tennyson, and Hsia (2010) constructed a research model to examine the determinants of students' learning satisfaction in a blended e-learning system environment based on social cognitive theory. They used the partial least squares method to validate the measurements and hypotheses. Their empirical findings reveal that computer self-efficacy, performance expectations, system functionality, content features, interaction, and learning climate are the primary determinants of student learning satisfaction.
According to the literature, the main dimensions that affect customer satisfaction with e-learning are curriculum design, system design, the learning community, and personalization. Therefore, on the basis of these four dimensions, this study identifies the 14 factors affecting customer satisfaction with e-learning that have been used most often in past related research. Table 1 lists the 14 CRs of e-learning used in this study. In addition, in order to use RST to narrow the number of CRs, the 14 criteria are considered condition attributes (CAs), and overall satisfaction is considered the decision attribute (DA).

2.2. The TRs of e-learning

There were diverse specifications and standards for metadata, student records, content sequencing, Internet courses, and computer teaching before the term "e-learning" was developed. Many international standards organizations are actively engaged in research on e-
Table 1
Summary of the CRs of e-learning.

Condition attributes (CAs)
Curriculum design (CD). Reference: Sun et al. (2008), Wang (2003), Wu et al. (2010)
  C1: Teaching materials updated in a timely manner
  C2: Usefulness of teaching materials
  C3: Richness and diversification of teaching materials
  C4: Practicability of teaching materials
System design (SD). Reference: Shee and Wang (2008), Sun et al. (2008), Wang (2003), Wu et al. (2010)
  C5: Ease of use
  C6: Stability of network
  C7: Quality of e-learning platform
Learning community (LC). Reference: Shee and Wang (2008), Sun et al. (2008), Wang (2003), Wu et al. (2010)
  C8: Ease of communicating with other learners
  C9: Ease of sharing data/information
  C10: Ease of sharing learning experience with others
Personalization (PE). Reference: Shee and Wang (2008), Wang (2003)
  C11: Function of recording learning history
  C12: Ability to plan for learning progress
  C13: Flexibility in choosing learning content
  C14: Ability to assess learning performance

Decision attribute (DA)
D1: Overall satisfaction. Reference: Flott (2002), Kotler (1999), Wang (2003), Wu et al. (2010)

learning standards. Like Taiwan, China, Japan, and Singapore are also committed to promoting e-learning standards. In Europe, organizations such as the AICC (the Aviation Industry CBT Committee), the Association of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE), Dublin Core, the EDUCAUSE IMS Consortium, and the IEEE Standards Association became involved in the development of standards and/or norms for e-learning early on (CETIS, 2003). The U.S. Department of Defense has integrated various standards for e-learning developed by different organizations and introduced the Sharable Content Object Reference Model (SCORM), which is universally accepted (Friesen, 2003). Table 2 summarizes some of the important technical standards for e-learning developed by European organizations (CETIS, 2003).
As previously mentioned, the 5-year "eLearning and Archive Industry Promotion Plan", which included seven sub-plans, was proposed by the NSC in 2008. The aim was to promote core competences among Taiwan e-learning suppliers, to establish new learning models, and to enhance the effectiveness of firm e-learning (IDB, 2010b). In addition, to integrate all resources efficiently and effectively, the National Science & Technology Program for e-Learning – Quality Certification Center (eLQCC) was publicly named the authority regulating the certification, service, and advertising components of e-learning quality in 2006; the original name of the center was the National Science & Technology Program for e-Learning – Quality Service Center (eLQCC, 2009a). In developing the qualification standards for e-learning courseware and services, the eLQCC has referenced international e-learning quality certification standards to ensure continuity (eLQCC, 2009a). The eLQCC oversees the sub-plans for e-learning-related standards and established the e-Learning Courseware Certification (eLCC), e-Learning Service Certification (eLSC), and e-Learning Operations and Service Quality Certification (eLOSQC) (eLQCC, 2009b, 2009c, 2009d). The relevant normative content of e-learning, including that associated with the eLCC, eLSC, and eLOSQC, is briefly described as follows.

2.2.1. e-Learning Courseware Certification (eLCC)

Version 3.0 of the e-Learning Courseware Certification (eLCC), amended by the Quality Certification Center, was approved by the Program Office (eLearning and Archive Industry Promotion Plan) on June 1, 2007. This is the latest version. As shown in Table 3, the details of the eLCC (including 19 requirements and 37 criteria) can be divided into five categories (eLQCC, 2009b). The eLCC has been demonstrated to be a high-quality evaluation system that can help e-learning providers to more confidently enhance e-learning courseware quality (Sung, Chang, & Yu, 2011).

2.2.2. e-Learning Service Certification (eLSC)

Quality is the key to the success of e-learning. With this in mind, the evaluation indicators for the latest version of the e-Learning Service Certification (eLSC) include three dimensions, eight specifications, and 24 requirements that are used in selecting applications, developing teaching materials, implementing teaching practices, and managing services in e-learning institutions (eLQCC, 2009c). These specifications are mainly used by e-learning service institutions. The service quality certification can help to confirm the level of service quality experienced by students involved in e-learning activities. Table 4 shows the specifications of the e-Learning Service Certification (eLSC).

2.2.3. e-Learning Operations and Service Quality Certification (eLOSQC)

Since the initiation of the e-learning service quality certification system in 2005, the top two types of applicants for e-learning service quality certifications have been general businesses and e-learning service providers. There are currently three types of certification that

Table 2
Summary of technical standards for e-learning developed by European organizations.

Dimension: IMS(a), SIF(b), DCMI(c), AICC(d), W3C(e), IEEE(f), BSI(g), ISO(h), CEN/ISSS(i), ADL(j) (SCORM)
Metadata: Yes Yes Yes Yes Yes Yes Yes
Repository operations: Yes Yes
Content packaging: Yes Yes Yes
Content sequencing: Yes Yes
Content runtime behaviour: Yes Yes Yes
Assessment: Yes Yes
Student and course data: Yes Yes
Learner information: Yes Yes Yes Yes Yes
Learner competencies: Yes
Logistics: Yes
Messaging/Web services: Yes Yes
Accessible content: Yes Yes Yes
Accessibility preferences for learners: Yes
Learning design: Yes
Collaboration: Yes
Learner support: Yes
Developer community: Yes Yes

(a) IMS: IMS Global Learning Consortium. (b) SIF: The Schools Interoperability Framework. (c) DCMI: The Dublin Core Metadata Initiative. (d) AICC: Aviation Industry Computer-based training Committee. (e) W3C: World Wide Web Consortium. (f) IEEE: The Institute of Electrical and Electronics Engineers. (g) BSI: The British Standards Institution. (h) ISO: The International Standards Organization. (i) CEN/ISSS: The Centre Européen de Normalisation/The Information Society Standardization System. (j) ADL: Advanced Distributed Learning Network.

Source: CETIS (2003).

Table 3
Specications of e-Learning Courseware Certication (eLCC).

Specification Requirement
Content 1.1: Accuracy
1.2: Organization & Completeness
Navigation and Tracking 2.1: Navigation
2.2: Orientation & Help
2.3: Learning Tracking
Instructional Design 3.1: Instructional Objectives
3.2: Instructional Methods
3.3: Practice & Formative Evaluation
3.4: Summative Evaluation
3.5: Facilitation Strategies
3.6: Congruence
Instructional Media 4.1: Media Design & Use
4.2: Interface Design
4.3: Media Elements
Creativity 5.1: Content
5.2: Navigation & Tracking
5.3: Instructional Design
5.4: Instructional Media
5.5: Other

Source: eLQCC (2009b).

indicate e-learning service quality: unit certification, curriculum certification, and learning process certification. However, e-learning service providers operate digital learning products that involve websites and CD-ROM products that may include a large number of units or courses (eLQCC, 2009d). If e-learning service providers only have unit certification, then the overall marketing website/product suite is not certified. A website/product suite cannot be considered certified without this more comprehensive verification. In addition, it is costly to have all parts of a website or all products certified. Currently, the specifications for e-learning service certification are not fully consistent with the needs of e-learning service providers. With this problem in mind, in 2009, the eLQCC held a trial conference for learning services and created the first draft of the eLOSQC specifications, which were intended to meet the needs of e-learning service providers and take market requirements into account. Table 5 lists the details of the eLOSQC, which includes five specifications and 16 more specific requirements.
From the e-learning standards in Europe and America (CETIS, 2003) and the above specifications for e-learning service quality, including the eLCC (Table 3), eLSC (Table 4), and eLOSQC (Table 5) specifications established by the eLQCC in Taiwan (eLQCC, 2009b, 2009c, 2009d), it is possible to identify the TRs that should be considered by engineers working on e-learning product design. A total of 14 TRs necessary to ensure e-learning service quality are listed in Table 6, along with their definitions and references.

Table 4
Specifications of e-Learning Service Certification (eLSC).

Aspect: Participant
  Dimension: Learner support
    1.1: Online availability of course information to learners.
    1.2: Provision of training courses to learners about the online learning system.
    1.3: Provision of learner support and assistance.
  Dimension: Instructor support
    2.1: Provision of training courses for instructors on how to use the online learning system, with online assistance available at all times.
    2.2: Assistance for instructors in turning supplementary materials into online materials.
    2.3: Assistance for instructors in managing learners' questions.
    2.4: Provision of an interaction mechanism among instructors.
Aspect: Curriculum
  Dimension: Program development
    3.1: Program development must include systematic analysis, design, and review of the program's objectives and structure.
    3.2: The content of program courses must comply with the e-Learning Courseware Quality Standard.
    3.3: The program has a complete structure, with an appropriate number of online courses implemented.
  Dimension: Course design
    4.1: Courseware content satisfying the dimensions of the e-Learning Courseware Quality Standard.
    4.2: Courseware content has a complete structure, and the number of online courses is appropriate.
    4.3: Review and update of courseware content.
  Dimension: Instructional process
    5.1: Facilitation of interaction between learners and the instructor, as well as among learners.
    5.2: Timely feedback on learners' questions or assignments (practice).
    5.3: Recording and analysis of learners' online learning processes.
Aspect: System
  Dimension: Organizational support
    6.1: Vision and plan for e-learning development.
    6.2: Human resources for e-learning operation.
    6.3: Management mechanisms for e-learning operation.
  Dimension: Technology
    7.1: Provision of Internet connection and bandwidth.
    7.2: Specifications of users' computers.
    7.3: Compatibility of e-learning platforms.
    7.4: Maintenance support for software and hardware equipment.
  Dimension: Evaluation
    8.1: Evaluation of e-learning-related personnel or departmental services.
    8.2: Course evaluation.
    8.3: Evaluation of system and support tools.

Source: eLQCC (2009c).



Table 5
Specifications for e-Learning Operations and Service Quality Certification (eLOSQC).

Specification: Human resources
  1.1: Operating service providers must have a complete human resources team.
  1.2: Operating service providers must have sufficient management capacities.
  1.3: The staff of operating service providers must have appropriate skills and qualifications.
Specification: Website (learning network) operations
  2.1: Learning networks should have appropriate market positioning and marketing strategies.
  2.2: Learning networks should have a sufficient quantity and quality of e-learning products.
  2.3: Operating service providers should have a proven capacity for learning network planning, business marketing, and R&D innovation.
  2.4: Operating service providers should have appropriate development plans for providing services within the learning network.
Specification: Service process
  3.1: Operating service providers must offer comprehensive e-learning products and service processes.
  3.2: Operating service providers must provide adequate service processes for learners.
  3.3: Operating service providers must provide comprehensive system service processes for educators and learners.
Specification: Information requirements
  4.1: Operating service providers must offer comprehensive information about the learning services offered.
  4.2: Operating service providers must offer complete information about the learning process.
  4.3: Operating service providers must offer complete information for service providers.
Specification: Management system
  5.1: Operating service providers must have sound management.
  5.2: Operating service providers must have a sound internal audit mechanism.
  5.3: Operating service providers must have a sound management review mechanism.

Source: eLQCC (2009d).

3. Proposed research model

The flowchart of the research process is presented in Fig. 1. The proposed analytical framework is shown in Fig. 2. Through an extensive literature review, this study first determined the factors that most affect service quality in e-learning. This information was used to design the survey questionnaire addressing customer satisfaction among e-learning users. RST was used to reduce the number of attributes and identify the CRs (representing the VOC) for e-learning service quality (Step 1). From the relevant literature, the TRs (representing the VOE) for e-learning service quality were also identified (Step 2) and incorporated into the QFD analysis to calculate objective weights for the

Table 6
Summary of TRs of e-learning service quality.

E1 Human resources: Operating service providers must have a complete human resource team, and learning services participants must have appropriate abilities and qualifications. Reference: eLSC (eLQCC, 2009c); eLOSQC (eLQCC, 2009d).
E2 Operating abilities: Operating service providers must have skill at developing marketing strategies, business plans, research innovations, plans for future development, etc. Reference: eLOSQC (eLQCC, 2009d).
E3 Service process and management: Operating service providers should develop comprehensive service processes. Reference: SIF and W3C (CETIS, 2003); eLOSQC (eLQCC, 2009d).
E4 Information requirements: Operating service providers must offer complete learning service/process information for learners. Reference: IMS, SIF, IEEE, BSI, and ISO (CETIS, 2003); eLOSQC (eLQCC, 2009d).
E5 Management system: Operating service providers must have a sound audit/review mechanism of implementation performance for the management system. Reference: eLOSQC (eLQCC, 2009d).
E6 Curriculum development: Operating service providers must consider the objectives and structure of their program courses as well as the quality and quantity of digitalized courseware content. Reference: eLSC (eLQCC, 2009c).
E7 Teaching materials: Operating service providers should offer correct teaching materials, appropriately organized and clearly presented, so that students can acquire the expected knowledge and skills. Reference: IMS, DCMI, IEEE, W3C, and ADL (CETIS, 2003); eLSC (eLQCC, 2009c).
E8 Instructional design: Operating service providers should achieve consistency in their instructional design and offer learners clear learning objectives, clear learning content, appropriate learning strategies, and suitable assessments and feedback to promote learning comprehension and learning interaction. Reference: IMS (CETIS, 2003); eLSC (eLQCC, 2009c).
E9 Instructional process: Operating service providers should keep a record of students' learning processes and appropriately respond to their learning performance. Reference: eLSC (eLQCC, 2009c).
E10 Navigation & tracking: Operating service providers should develop navigation mechanisms that help students learn smoothly so that learners can effectively control the progress of individual learning activities. Reference: SIF and W3C (CETIS, 2003); eLSC (eLQCC, 2009c).
E11 Instructional media: Operating service providers should effectively use instructional media and aesthetics, appropriately designing and producing learning interfaces and instructional media to promote learning comprehension. Reference: eLCC (eLQCC, 2009b).
E12 Instructor support: Operating service providers must offer assistance with the production of online teaching materials and curriculum implementation. Reference: eLSC (eLQCC, 2009c).
E13 Technology: Operating service providers should offer complete user-end, server-side, and network infrastructure (software and hardware equipment). Reference: IMS, DCMI, AICC, IEEE, and ADL (CETIS, 2003); eLSC (eLQCC, 2009c).
E14 Evaluation: Operating service providers should conduct comprehensive evaluations of personnel, operating services, courseware content, learning programs, and systems. Reference: IMS and BSI (CETIS, 2003); eLSC (eLQCC, 2009c).

Fig. 1. The flowchart of the research process.

CRs using the entropy formula (Appendix A) and expert opinions (Step 3). Next, GRA was utilized to perform a comprehensive assessment and analysis of the relationship matrix constructed from the CRs and TRs (Step 4). The results allow the factors that affect service quality in the e-learning industry to be appropriately prioritized. The three data analysis methods, RST, QFD, and GRA, are briefly introduced below, followed by a detailed description of the data collection and questionnaire design developed for this study.
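As a concrete illustration of the objective weighting in Step 3, the following Python sketch applies the standard entropy method to a hypothetical relationship matrix. The paper's exact formula is in its Appendix A, which is not reproduced in this section, so this is the textbook procedure rather than a verified copy of the authors' computation; the function name and toy values are illustrative.

```python
import numpy as np

def entropy_weights(matrix):
    """Objective row weights via the standard entropy method (a sketch;
    the paper's own formula is given in its Appendix A)."""
    R = np.asarray(matrix, dtype=float)
    n = R.shape[1]
    P = R / R.sum(axis=1, keepdims=True)      # proportions within each row
    logP = np.zeros_like(P)
    np.log(P, out=logP, where=P > 0)          # treat 0 * log(0) as 0
    e = -(P * logP).sum(axis=1) / np.log(n)   # entropy, scaled to [0, 1]
    d = 1.0 - e                               # degree of diversification
    return d / d.sum()                        # normalize to sum to 1

# Hypothetical 3-CR x 4-TR relationship matrix on the 5/3/1 scale
print(entropy_weights([[5, 3, 1, 3],
                       [1, 5, 3, 1],
                       [3, 3, 5, 5]]).round(3))
```

Rows whose relationship values are spread unevenly across the TRs carry more discriminating information and therefore receive larger weights.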

3.1. Rough set theory (RST)

RST, proposed by Pawlak (1991), is mainly used to process imprecise, uncertain, and fuzzy information without a priori or additional information. It can be used for data mining in databases, thus generating decision rules that enhance decision making. It has now been successfully applied in various fields, including decision analysis, knowledge database discovery, expert systems, inductive reasoning, automatic classification, and pattern recognition. In particular, it has been used to address myriad business problems: for instance, in predicting business failure (Beynon & Peel, 2001; Dimitras, Slowinski, Susmaga, & Zopounidis, 1999; Ruzgar, Unsal, & Ruzgar, 2008) and in stock market analysis (Shen & Loh, 2004). The basic principles of RST and the corresponding analysis procedures are briefly introduced in the following sections (Walczak & Massart, 1999).

Fig. 2. The proposed analytical framework.

3.1.1. Information system (IS)


The core capability of RST is classification. RST can derive a set of decision rules through classification or use a group of cases to form decision-making rules. An information table is used to present the data to be analyzed. Each row in the table represents an event, a company, an object, or some other similar construct, whereas each column represents an attribute used to describe/measure the characteristics or observed result of each event. There are two types of attributes: condition attributes (CAs) and decision attributes (DAs). In general, an information system comprises four elements, $IS = (U, A, V, f)$, where $U = \{x_1, x_2, \ldots, x_m\}$ is the universe (a non-empty set), $A$ is the set of all attributes collected from the database with each $a_i \in A$, the information function $f(x_k, a_i): U \to V$ assigns attribute values to objects, and $V_a$ is the set of values of attribute $a$, called the domain of attribute $a$.

3.1.2. Indiscernibility relation


In the data classification process, some of the information (objects) will be indistinguishable based on some of the attributes. For every set of attributes $B \subseteq A$, an indiscernibility relation $Ind(B)$ is defined as follows: two objects $x_i$ and $x_j$ are indiscernible by the set of attributes $B$ if $b(x_i) = b(x_j)$ for every $b \in B$. The equivalence classes of $Ind(B)$ are called elementary sets in $B$ because they represent the smallest discernible groups of objects. The construction of elementary sets is the first step in classification with rough sets. For any element $x_i$ of $U$, the equivalence class of $x_i$ in relation to $Ind(B)$ is represented as $[x_i]_{Ind(B)}$.

3.1.3. Lower and upper approximations


In an RST analysis, inconsistent classification may occur. The lower and upper approximations of a set can be used to address such problems and identify the relationships between objects. Fig. 3 depicts a lower approximation and an upper approximation. The elements of the lower approximation of a set certainly belong to the set, whereas the elements of the upper approximation only possibly belong to it. Let $X$ denote a subset of the universe $U$ ($X \subseteq U$). The lower approximation of $X$ in $B$ ($B \subseteq A$) is defined as $\underline{B}X = \{x_i \in U \mid [x_i]_{Ind(B)} \subseteq X\}$, and the upper approximation of $X$ in $B$ is defined as $\overline{B}X = \{x_i \in U \mid [x_i]_{Ind(B)} \cap X \neq \emptyset\}$. The difference $BN_B(X) = \overline{B}X - \underline{B}X$ is called the boundary of $X$ in $U$; none of the objects belonging to the boundary can be classified with certainty into $X$ or its complement as far as the attributes in $B$ are concerned.

3.1.4. Accuracy of approximation


In the RST data classification process, because of the presence of boundaries, which can lead to inaccurate classification, the quality of classification (referred to as accuracy) should be calculated on the basis of the upper and lower approximations. Given the specific attributes $B \subseteq A$, the accuracy measure for the set $X$, generally expressed as a percentage, is defined as $\alpha_B(X) = |\underline{B}X| / |\overline{B}X|$, where $|\underline{B}X|$ (or $|\overline{B}X|$) is the number of objects contained in the lower (or upper) approximation of the set $X$ and $0 \le \alpha_B(X) \le 1$. If $X$ is definable in $U$, then $\alpha_B(X) = 1$; if $X$ is undefinable in $U$, then $\alpha_B(X) < 1$.
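To make Sections 3.1.2-3.1.4 concrete, here is a minimal Python sketch that builds the $Ind(B)$ elementary sets, computes the lower and upper approximations of a target set $X$, and reports the accuracy $\alpha_B(X)$. The object names, attribute values, and target set are hypothetical, not data from the paper.

```python
def elementary_sets(objects, attrs):
    """Partition the universe into Ind(B)-equivalence classes."""
    classes = {}
    for name, values in objects.items():
        key = tuple(values[a] for a in attrs)  # identical keys = indiscernible
        classes.setdefault(key, set()).add(name)
    return list(classes.values())

def approximations(objects, attrs, target):
    """Lower/upper approximations of `target` under attribute subset B."""
    lower, upper = set(), set()
    for cls in elementary_sets(objects, attrs):
        if cls <= target:           # class certainly inside X
            lower |= cls
        if cls & target:            # class possibly inside X
            upper |= cls
    return lower, upper

# Hypothetical mini decision table (recoded satisfaction levels)
U = {'s1': {'C1': 2, 'C2': 2}, 's2': {'C1': 2, 'C2': 2},
     's3': {'C1': 1, 'C2': 2}, 's4': {'C1': 1, 'C2': 1}}
X = {'s1', 's3'}                    # e.g., respondents satisfied overall
low, up = approximations(U, ['C1', 'C2'], X)
print(low, up, len(low) / len(up))  # accuracy alpha_B(X) = 1/3 here
```

Here s1 and s2 are indiscernible but only s1 belongs to $X$, so their class falls in the boundary, which is why $\alpha_B(X) < 1$.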

3.1.5. Independence of attributes


To determine the independence of a set of attributes, one has to determine whether removing any attribute changes the number of elementary sets in the IS. If $Ind(A) = Ind(A - \{a_i\})$, then attribute $a_i$ is superfluous, which indicates that eliminating $a_i$ does not alter the number of elementary sets. Otherwise, attribute $a_i$ is indispensable in $A$.

3.1.6. Core and reduct of attributes


In an information system, some attributes are often redundant or unimportant because they can be removed without affecting the
quality of decision making. Two fundamental concepts within RST are the core and reduct of attributes. The reduct is the essential part of an

Fig. 3. Schematic demonstration of the upper and lower approximation of set X. Source: Walczak and Massart (1999).
1326 H.-Y. Wu, H.-Y. Lin / Computers & Education 58 (2012) 13181338

IS, the minimal subset of independent attributes that yields the same classication of the objects as the full set of attributes A (i.e.,
Ind(R) Ind(A)). The core attribute set, is the common portion (interaction) of all reducts. Therefore, analyzing the core and reduct of
attributes cannot only effectively reduce system complexity but also help decision-makers discover what variables affect a particular
outcome. In an information system, several different sets of reduct attributes can often be derived, and the core attributes are the most
important CAs after reduction. The discernibility matrix is used to compute the reducts and the core. The discernibility matrix is denoted by
n  n where n denotes the number of elementary sets, and its elements are dened as the set of all attributes that discern elementary sets [x]i
and [x]j. The discernibility function f(A) can be used to determine the minimal subsets of attributes (reduct(s)) where f(A) is a Boolean
function, and its nal form can be calculated by the absorption law.
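The reduct and core can also be illustrated with a brute-force search, which is practical only for toy tables; the discernibility-matrix/Boolean-function route described above scales better. All names and values in this sketch are hypothetical.

```python
from itertools import combinations

def partition(objects, attrs):
    """Ind(B) partition, frozen so that partitions can be compared."""
    classes = {}
    for name, values in objects.items():
        classes.setdefault(tuple(values[a] for a in attrs), set()).add(name)
    return frozenset(frozenset(c) for c in classes.values())

def reducts(objects, attrs):
    """All minimal attribute subsets R with Ind(R) = Ind(A) (brute force)."""
    full = partition(objects, attrs)
    ok = [set(s)
          for r in range(1, len(attrs) + 1)
          for s in combinations(attrs, r)
          if partition(objects, s) == full]
    return [s for s in ok if not any(t < s for t in ok)]  # keep minimal sets

U = {'s1': {'C1': 2, 'C2': 2, 'C3': 1}, 's2': {'C1': 2, 'C2': 1, 'C3': 1},
     's3': {'C1': 1, 'C2': 2, 'C3': 2}, 's4': {'C1': 1, 'C2': 1, 'C3': 2}}
reds = reducts(U, ['C1', 'C2', 'C3'])
core = set.intersection(*reds)      # core = intersection of all reducts
print(reds, core)                   # reducts {C1,C2} and {C2,C3}; core {'C2'}
```

In this toy table C1 and C3 discern the same objects, so either can be dropped, while C2 appears in every reduct and is therefore the core.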

3.1.7. Decision table


A knowledge representation system containing the two sets of attributes A (i.e., CAs) and D (i.e., DAs) is called a decision table. Decision tables are also useful for classification: any supervised classification problem can be considered in a decision table analysis, and the concept of decision table analysis is more general than that of data classification. A decision table can contain many decision attributes and can be used to express knowledge base D in terms of knowledge base A or to compare two knowledge bases.
Because RST is a powerful tool for database analysis and can easily be used to remove duplicate records, reduce irrelevant and redundant attributes, and derive inference rules, RST is employed in this study to perform attribute reduction and to identify the critical attributes (i.e., CRs) for e-learning service quality.

3.2. Quality function deployment (QFD)

The concept of QFD was first introduced in Japan by Yoji Akao in 1966 (Mizuno & Akao, 1994). QFD is an approach driven by customer
needs that has been widely used in product design and manufacturing (Chan & Wu, 2002, 2005; Hauser & Clausing, 1988). The key function
of QFD is to translate the CRs (representing the VOC) into TRs (representing the VOE). In other words, QFD can be used as a managerial tool to
help organizations effectively prioritize improvement techniques and enhance product/service quality through systematic analyses.
The house of quality (HOQ) matrix is the fundamental structure of QFD (Hauser & Clausing, 1988). As illustrated in Fig. 4, the HOQ matrix
contains six major components:

(1) Customer requirements (WHATs): Customer requirements (CRs), customer attributes, and the level of quality demanded are the foundation for the HOQ and provide a main reference point for the engineers regarding what attributes (criteria) the product should have. CRs can be acquired directly through market surveys. The voice of the customer is then used to capture what customers actually need and to ensure that the services provided meet the quality requirements indicated. Using a variety of methods, these requirements can be employed to create a table of customer quality requirements.
(2) Technical requirements (HOWs): Given the CRs (WHATs), the engineers should consider the technical requirements (TRs) of the service, which represent the voice of the engineer (VOE). Whereas the CRs specify "what" (goals) customers insist on, the TRs show "how" (by what means) to satisfy these customer needs.
(3) Relative importance of customer requirements: Given time and resource restrictions, companies should focus their efforts on the requirements that customers emphasize most to achieve better customer satisfaction. Spreading attention across too many CRs may be distracting and may waste resources. Therefore, it is crucial to use focus groups, individual interviews, and/or questionnaire surveys to determine customer perceptions regarding the relative importance of the CRs.

Fig. 4. A generic structure of the house of quality. Source: Hauser and Clausing (1988).

(4) Relationship matrix (WHATs vs. HOWs): The relationship matrix between the HOWs and the WHATs is used to determine whether a CR is related to a TR and to determine the strength of that relationship. Symbols or numbers are commonly used in the HOQ to indicate the strength of such an interrelationship. Creating this matrix is necessary to translate the VOC into the VOE, and the matrix should be analyzed carefully by domain experts (engineers or technicians).
(5) Planning matrix/Competitive analysis: In a competitive analysis, a firm can compare itself with its major competitors, identify its market position, and determine the strengths and weaknesses of its products or services in terms of the CRs. The information used in the comparison analysis can be collected by asking customers to evaluate the relative performance of the company and its competitors with respect to each CR (WHAT) and to assign the company an overall performance value (indicating overall customer satisfaction).
(6) Technical correlation matrix: The correlation matrix is used to evaluate the dependencies among the TRs (VOE), determining whether they have a positive or negative impact on each other when adjusted. A correlation analysis can be conducted by experienced engineers. The findings culled from a correlation matrix can provide a reference point for determining the trade-offs among various TRs in product design. Special attention should be paid to any technical requirement that will cause quality control problems, as this is necessary to avoid the cost increases that result from defects and/or from redesigning the manufacturing process.
(7) Designing matrix: The designing matrix is very similar to the planning matrix. Both of these matrixes are intended to help produce the level of product/service quality required by customers. The weight (i.e., the relative importance) and degree of difficulty of each TR must be analyzed in making a technical comparison. In general, TRs that are more difficult but have relatively greater weights are given a higher priority, or at least one equal to that of their competitors.

QFD can be broken down into several inter-linked phases (HOQs) so that customer needs/requirements can be addressed phase by phase. The deployment process can be accomplished through a series of function transformations: later phases use the important outputs (HOWs) from the previous phases as their inputs (WHATs). In practice, different phases can be utilized depending on the desired applications.
To date, the QFD matrix concept has been successfully employed not only in manufacturing but also in fields such as business process re-engineering (Ho & Lin, 2009), knowledge-based systems (Yan, Li, & Chen, 2005), the supplier selection process (Bevilacqua, Ciarapicab, & Giacchettab, 2006), and the process of determining environmental production requirements (Lin, Cheng, Tseng, & Tsai, 2010). QFD has also been utilized in educational services (Alptekin & Karsak, 2011; Bayraktaroglu & Özgen, 2008; Hwarng & Teo, 2001; Levy, 2007; Pitman, Motwani, Kumar, & Cheng, 1996; Sahney, Banwet, & Karunes, 2008; Thakkar, Deshmukh, & Shastree, 2006). The focus of this study is the first four parts of the HOQ matrix ((1) customer requirements, (2) technical requirements, (3) the relative importance of the customer requirements, and (4) the relationship matrix) (Figs. 3 and 4), which are used to transform the CRs (the VOC) into TRs (the VOE). QFD is used to determine which techniques should be used to advance service quality within e-learning from the customer perspective.
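Components (1)-(4) combine into a simple computation in the classical HOQ: each TR's raw priority is the importance-weighted sum of its relationship strengths. The sketch below shows that conventional step with hypothetical weights and 5/3/1 relationship values; note that this study instead feeds the relationship matrix into GRA (Section 3.3) to obtain the final ranking, so this is a baseline for contrast, not the paper's procedure.

```python
import numpy as np

# Hypothetical HOQ fragment: rows = 3 CRs (WHATs), cols = 4 TRs (HOWs),
# entries on the 5/3/1 strength scale (0 = no relationship).
rel = np.array([[5, 3, 0, 1],
                [1, 5, 3, 0],
                [3, 0, 5, 3]], dtype=float)
w = np.array([0.5, 0.3, 0.2])       # relative importance of the CRs

priority = w @ rel                   # weighted-sum importance of each TR
for j in np.argsort(priority)[::-1]:
    print(f"TR{j + 1}: {priority[j]:.2f}")
```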

3.3. Grey relation analysis (GRA)

Grey theory was developed by Deng (Deng, 1982). In general, when the information in a system is complete, the system is called
white, when the required information is not available, the system is called black, and when the available information is incomplete or
uncertain, the system is called grey. Grey theory proposes new concepts and solutions related to systems with incomplete information
and supplies ways to acquire new information, converting grey systems into white systems. Deng (1989) classied grey theory into six
elds: grey generation, grey relational analysis, grey system modelling, grey forecasting, grey decision making, and grey system control. On
the basis of the development trends between systems, GRA projects the system data into geometric space to measure the closeness of the
geometric shapes. The closer the geometric shapes for two systems appear to one another, the stronger the relationship between the
systems will be (Deng, 1982, 1989). This method can also be used to rank priorities. The GRA calculation procedure is described in the
following section.

Step 1: Confirm the reference sequence ($X_o$) and the compared sequences ($X_i$).

The reference sequence and the compared sequences are

$$X_o = \{x_{oj} \mid j = 1, 2, \ldots, n\} \quad \text{and} \quad X_i = \{x_{ij} \mid j = 1, 2, \ldots, n\}, \; i = 1, 2, \ldots, m. \tag{1}$$

Step 2: Perform normalization (i.e., render the data dimensionless).

The upper-bound effectiveness of measurement (i.e., the larger the better) is

$$x_{ij}^{*} = \frac{x_{ij} - \min_{i} x_{ij}}{\max_{i} x_{ij} - \min_{i} x_{ij}}. \tag{2}$$

The lower-bound effectiveness of measurement (i.e., the smaller the better) is

$$x_{ij}^{*} = \frac{\max_{i} x_{ij} - x_{ij}}{\max_{i} x_{ij} - \min_{i} x_{ij}}. \tag{3}$$

The moderate effectiveness of measurement (i.e., nominal is best) is

$$x_{ij}^{*} = 1 - \frac{\bigl| x_{ij} - x_{obj} \bigr|}{\max\bigl\{ \max_{i} x_{ij} - x_{obj},\; x_{obj} - \min_{i} x_{ij} \bigr\}}, \tag{4}$$

where $x_{obj}$ is the target (objective) value.

Step 3: Calculate the difference sequence ($\Delta_{oij}$).

$$\Delta_{oij} = \bigl| x_{oj}^{*} - x_{ij}^{*} \bigr|, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n. \tag{5}$$

Step 4: Calculate the grey relational coefficient ($\gamma_{oij}$).

$$\gamma_{oij} = \frac{\min_{i} \min_{j} \Delta_{oij} + \zeta \max_{i} \max_{j} \Delta_{oij}}{\Delta_{oij} + \zeta \max_{i} \max_{j} \Delta_{oij}}, \tag{6}$$

where $\zeta$ is the identification coefficient, normally set to $\zeta = 0.5$.

Step 5: Calculate the grey relational grade ($\Gamma_{oi}$).

$$\Gamma_{oi} = \sum_{j=1}^{n} w_j \, \gamma_{oij}, \tag{7}$$

where $w_j$ is the weight of criterion $j$ and $\sum_{j=1}^{n} w_j = 1$.

Step 6: Arrange the grey relational ordinal.

The grey relational grades ($\Gamma_{oi}$) of the different compared sequences provide a ranking in which a higher value indicates a better alternative.
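A compact implementation of Eqs. (1)-(7) may help. The sketch below assumes larger-the-better data (Eq. (2)) and takes the per-column ideal as the reference sequence; the scores, weights, and function name are hypothetical, not values from the paper's case study.

```python
import numpy as np

def grey_relational_grades(X, weights, zeta=0.5):
    """GRA per Eqs. (1)-(7); rows of X are the compared sequences."""
    X = np.asarray(X, dtype=float)
    # Eq. (2): larger-the-better normalization to [0, 1], column-wise
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    x0 = Xn.max(axis=0)                  # reference sequence X_o (ideal)
    delta = np.abs(x0 - Xn)              # Eq. (5): difference sequence
    dmin, dmax = delta.min(), delta.max()
    gamma = (dmin + zeta * dmax) / (delta + zeta * dmax)   # Eq. (6)
    return gamma @ np.asarray(weights, dtype=float)        # Eq. (7)

scores = [[0.8, 0.6, 0.9],   # each row: one alternative on three criteria
          [0.5, 0.9, 0.4],
          [0.7, 0.7, 0.7]]
G = grey_relational_grades(scores, [0.4, 0.35, 0.25])
print(G.round(3), np.argsort(G)[::-1] + 1)   # grades and ranking (Step 6)
```

With column-wise normalization, the reference sequence reduces to the all-ones vector, so an alternative's grade measures how closely it tracks the per-criterion ideal.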

3.4. Data collection and questionnaire design

As new trends emerge in e-learning and as learning technologies become increasingly advanced, the use of e-learning is helping to further develop universities, which are regarded as the essential force driving the creation of the global village (Inglis, Ling, & Joosten, 2002; Laurillard, 2002). Thus, it is important to enhance e-learning systems to address current challenges (Jones & O'Shea, 2004). In addition, theoretically, e-learning is more useful for students who can study independently. This may be the reason why most studies of e-learning take university/college students or people above the age of twenty as their research subjects (Bryant, Campbell, & Kerr, 2003; Levy, 2007; Piccoli, Ahmad, & Ives, 2001; Zhang & Zhou, 2003). Therefore, the illustrative example presented here targeted university students who have used certain e-learning systems (including software, databases, e-learning platforms, and other relevant learning websites).
In this study, the related important factors, both CRs (Table 1) and TRs (Table 6), were synthesized from an extensive literature review. These factors (dimensions and criteria) were confirmed in consultation with relevant experts, adjusted, and then used to create a formal questionnaire. The surveys were conducted in two stages. The questionnaire used in the first stage contains two parts: (1) learner satisfaction measured in terms of each criterion of e-learning service quality and overall satisfaction and (2) basic personal background information. A partial list of the questionnaire items on customer satisfaction with e-learning service quality is shown in Table 7. A 5-point Likert scale was used to measure the degree of customer satisfaction with service quality with reference to the various customer needs (i.e., CRs) that should be addressed by the e-learning service provider. The 5-point scale used is as follows: 1: Very dissatisfied, 2: Dissatisfied, 3: Unsure, 4:

Table 7
Partial list of the questionnaire of customer satisfaction with e-learning service quality.

Please indicate the degree of your satisfaction with the service quality of e-learning for the following criteria (customer needs) using the scale of 1-5 (1: Very dissatisfied, 5: Very satisfied).

Dimension 1: Curriculum design (CD)                      1  2  3  4  5
C1: Teaching materials updated in a timely manner        □  □  □  □  □
C2: Usefulness of teaching materials                     □  □  □  □  □
C3: Richness and diversification of teaching materials   □  □  □  □  □
C4: Practicability of teaching materials                 □  □  □  □  □

Satisfied, and 5: Very satisfied. The questionnaire survey was conducted with university students with at least three months of experience using a particular e-learning program/platform. In the second stage, the QFD questionnaire was constructed using the CRs (the VOC) identified from the RST analysis of the data culled from the first-stage questionnaires. This questionnaire also addresses the TRs (the VOE). Again, it includes two parts: (1) the relationship between the CRs and TRs in the QFD/HOQ matrix of WHATs vs. HOWs (Appendix B) and (2) basic information regarding the experts' backgrounds. The experts indicated the strength of the relationship between the CRs and TRs using the following scale: 5: Strong relationship, 3: Moderate relationship, and 1: Weak relationship. Academic and industry experts in the field of e-learning were the main survey participants.

4. Empirical analysis

In accordance with the research flowchart (Fig. 1) and the proposed model (Fig. 2), students who had used a university's e-learning program/platform were surveyed so that it would be possible to conduct an empirical analysis. First, with the data collected from the questionnaire survey conducted in the first stage, the RST software ROSE 2 (Rough Set Data Explorer 2) was used to determine the most essential of the CRs for e-learning shown in Table 1. Then, the screened critical CRs were incorporated into the QFD/HOQ analysis and used as the inputs (WHATs) along with the outputs (HOWs), the TRs shown in Table 6, to establish the relationship matrix (WHATs (CRs) vs. HOWs (TRs)). The strength of the relationships between the CRs and TRs was determined by experts through the questionnaire survey in the second stage. Furthermore, based on the values in the relationship matrix, the entropy method was used to calculate the objective weight of each criterion (CR). Finally, using GRA, the relative importance of the TRs to the performance of e-learning services was determined with reference to the CRs. The detailed results of the RST, QFD, and GRA analyses are as follows.

4.1. Screening the critical criteria of CRs of e-learning service quality by RST

The questionnaire surveys were conducted in two stages. In the rst stage, the questionnaire survey including the previously
summarized CRs (Table 1) was used to collect the customer satisfaction data. The questionnaires were issued to international students
(including graduates) who were studying at the case university (Kainan University in Taiwan) during 2007 and 2009 and had used the
Chinese e-learning programs (E-PEN Chinese). The survey was administrated from November to December 2010. A total of 60
questionnaires were distributed, and 47 valid questionnaires were returned. Therefore, the valid response rate was approximately
78.33%. The respondents backgrounds as indicated in the rst-stage survey are shown in Table 8. The number of male and female
respondents was approximately equal, with only a slightly higher number of responses from women. All of the respondents were more
than 18 years old and had educational experience at least at the college level. Approximately 70% (68.09%) of the respondents had
more than half a years experience with e-learning. Almost 90% (87.23%) of the respondents aimed to use the e-learning platform to
enhance their learning. More than half (55.32%) of the respondents had had at least three months of recent experience using the
current e-learning platform. In addition, more than 70% of the respondents had used the current e-learning platform for more than
half a year.
The 47 questionnaires collected in the first stage were used as the study sample, and the optimal attribute value sets for the RST analyses
were obtained via trial and error. As previously described, a 5-point Likert scale was used to measure the degree of customer satisfaction
with the level of service quality provided by the e-learning platform. The final attribute specifications for the CRs used in the RST analysis are
listed in Table 9. The 14 CRs (Table 1) (used as the CAs) are separated into two groups, {Dissatisfaction} = 1 (values 1 and 2) and
{Satisfaction} = 2 (values 3, 4, and 5), whereas overall satisfaction (set as the DA) is separated into three groups: {Dissatisfaction} = 1
(values 1 and 2), {Unsure} = 2 (value 3), and {Satisfaction} = 3 (values 4 and 5).
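As a minimal sketch of this preprocessing step (assuming the survey responses are held in a table with hypothetical column names C1–C14 and D1), the 5-point Likert values might be mapped onto the attribute value sets of Table 9 as follows before being exported to ROSE 2:

```python
import pandas as pd

def discretize(df: pd.DataFrame) -> pd.DataFrame:
    """Map 5-point Likert responses onto the RST attribute value sets of Table 9."""
    out = df.copy()
    for col in [f"C{i}" for i in range(1, 15)]:   # CAs: 1-2 -> 1, 3-5 -> 2
        out[col] = df[col].apply(lambda v: 1 if v <= 2 else 2)
    # DA: 1-2 -> 1 (Dissatisfaction), 3 -> 2 (Unsure), 4-5 -> 3 (Satisfaction)
    out["D1"] = df["D1"].apply(lambda v: 1 if v <= 2 else (2 if v == 3 else 3))
    return out

# Hypothetical raw responses from two students on the original 1-5 scale
raw = pd.DataFrame({**{f"C{i}": [4, 2] for i in range(1, 15)}, "D1": [5, 3]})
print(discretize(raw))   # yields rows in the format of Table 10
```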
ROSE 2 is a modular software system for knowledge discovery and decision analysis, created at the Laboratory of Intelligent Decision Support
Systems of the Institute of Computing Science in Poznan (Predki, Slowinski, Stefanowski, Susmaga, & Wilk, 1998). It implements the
basic elements of rough set theory and rule discovery techniques through computations based on rough set fundamentals (Pawlak, 1991,
2002). The input data accepted by ROSE 2 are presented in an information table in which rows correspond to objects (observations, cases, etc.)
and columns correspond to attributes (characteristics, features, etc.). The attributes are divided into disjoint sets of CAs and DAs (expressing
the partition of objects into classes, i.e., their classification). The input data can be stored in a plain text file according to the defined syntax
(Information System File, ISF). ROSE 2 can also import data from other formats (e.g., .xls).

Table 8
Summary of the background information for the respondents who completed the first-stage survey.

Category                                         Item                                                Total  Percentage
Sex                                              Male                                                22     46.81%
                                                 Female                                              25     53.19%
Age                                              ≥18 years old and ≤25 years old                     32     68.09%
                                                 >25 years old and ≤40 years old                     10     31.91%
Education                                        College/University                                  38     80.85%
                                                 Master                                              9      19.15%
The nature (purpose) of using the current        Academic applications (to enhance learning effect)  41     87.23%
(most recent) e-learning platform                Business applications (for business service use)    4      8.51%
                                                 Others                                              2      4.26%
Total time of e-learning experience in the past  ≤0.5 year                                           15     31.91%
                                                 >0.5 year and ≤1 year                               19     40.43%
                                                 >1 year and ≤2 years                                4      8.51%
                                                 >2 years and ≤3 years                               5      10.64%
                                                 >3 years                                            4      8.51%
Total time of using the current (most recent)    ≤3 months                                           21     44.68%
e-learning platform                              >3 months and ≤0.5 year                             17     36.17%
                                                 >0.5 year and ≤1 year                               6      12.77%
                                                 >1 year and ≤2 years                                3      6.38%

Table 9
Attribute specifications for CRs.

CA/DA                       Dimension                Criteria                                                Attribute value sets

Condition attributes (CAs)  Curriculum design (CD)   C1: Teaching materials updated in a timely manner       {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C2: Usefulness of teaching materials                    {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C3: Richness and diversification of teaching materials  {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C4: Practicability of teaching materials                {Dissatisfaction} = 1; {Satisfaction} = 2
                            System design (SD)       C5: Ease of use                                         {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C6: Stability of network                                {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C7: Quality of e-learning platform                      {Dissatisfaction} = 1; {Satisfaction} = 2
                            Learning community (LC)  C8: Ease of communicating with other learners           {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C9: Ease of sharing data/information                    {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C10: Ease of sharing learning experience with others    {Dissatisfaction} = 1; {Satisfaction} = 2
                            Personalization (PE)     C11: Function of recording learning history             {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C12: Ability to plan for learning progress              {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C13: Flexibility in choosing learning content           {Dissatisfaction} = 1; {Satisfaction} = 2
                                                     C14: Ability to assess learning performance             {Dissatisfaction} = 1; {Satisfaction} = 2
Decision attribute (DA)                              D1: Overall satisfaction                                {Dissatisfaction} = 1; {Unsure} = 2;
                                                                                                             {Satisfaction} = 3

Table 10
Input data of the attributes by the final attribute specifications.

U CA DA

C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 D1


x1 1 1 1 2 2 2 2 2 2 2 2 1 2 2 3
x2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x3 1 2 2 1 1 1 1 1 1 2 2 2 1 2 2
x4 1 1 1 1 1 1 1 2 1 2 1 2 1 2 2
x5 1 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x6 2 2 2 1 2 2 2 2 2 2 2 1 2 2 3
x7 2 1 1 1 2 2 2 2 2 2 2 1 2 2 3
x8 1 2 2 1 2 2 2 2 2 2 2 2 2 2 3
x9 1 2 2 1 1 2 2 2 2 2 2 2 2 2 2
x10 2 1 1 2 1 2 2 2 2 2 1 1 2 2 2
x11 2 1 1 2 2 2 2 2 2 2 1 2 2 2 1
x12 1 2 2 2 2 2 2 2 2 1 1 2 1 2 2
x13 2 1 1 2 2 2 1 2 2 2 2 1 2 2 3
x14 2 2 2 2 1 2 2 2 2 2 2 2 2 2 3
x15 2 2 2 2 2 2 2 2 2 2 2 2 2 1 3
x16 1 2 2 2 2 2 2 2 2 2 2 2 1 2 3
x17 1 1 2 2 1 2 2 2 2 2 2 1 2 1 2
x18 1 1 1 1 1 2 2 2 2 2 2 1 2 2 2
x19 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2
x20 2 2 2 2 2 1 1 2 2 2 2 2 1 2 2
x21 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2
x22 1 2 2 1 2 2 2 1 1 2 1 1 2 2 3
x23 2 2 2 2 2 2 2 2 2 2 2 2 1 2 3
x24 1 2 2 1 1 2 2 2 1 1 1 1 1 1 2
x25 2 2 2 2 2 2 2 2 2 2 2 2 1 2 3
x26 2 2 2 1 2 2 2 2 2 2 2 2 2 2 3
x27 2 2 2 2 1 2 2 2 2 2 2 2 2 2 3
x28 1 2 1 2 1 1 1 2 2 2 2 2 2 2 2
x29 1 2 2 1 1 2 2 2 2 2 2 2 2 2 2
x30 2 1 2 1 1 2 2 2 2 2 2 1 2 1 2
x31 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x32 1 1 1 1 2 2 2 2 2 2 2 1 1 2 2
x33 1 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x34 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2
x35 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x36 1 2 2 1 2 2 2 2 2 2 2 2 1 2 2
x37 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2
x38 1 1 1 2 2 1 2 2 2 2 2 1 2 2 2
x39 1 2 2 2 1 2 2 2 2 2 2 2 1 2 2
x40 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3
x41 1 2 2 1 1 2 2 2 2 2 2 2 1 1 3
x42 1 1 1 1 2 2 2 2 2 1 1 1 1 1 1
x43 2 1 1 2 1 2 2 2 2 2 2 1 1 2 2
x44 2 2 2 1 2 2 2 2 2 2 2 2 2 2 3
x45 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
x46 1 2 2 1 2 2 2 2 2 2 2 2 2 2 3
x47 1 1 1 2 1 2 2 2 2 1 1 1 1 2 2

Note: 1. xi denotes the objects (the 47 e-learning users); 2. The values of the attributes (CAs and DA) are the adjusted data (optimal attribute values) collected from the first-
stage questionnaire survey (Table 7). As defined in Table 9, the values (a scale of 1–5) of customer satisfaction with the CRs (C1–C14, set as the CAs) are adjusted into two groups,
{Dissatisfaction} = 1 (values 1 and 2) and {Satisfaction} = 2 (values 3, 4, and 5), and the values (a scale of 1–5) of overall satisfaction (D1, set as the DA) are adjusted into
three groups, {Dissatisfaction} = 1 (values 1 and 2), {Unsure} = 2 (value 3), and {Satisfaction} = 3 (values 4 and 5).

Table 11
Classification accuracy of CRs.

Class number^a  Number of objects  Upper approximation  Lower approximation  Accuracy  Quality of the whole classification^b
Class 1         3                  3                    3                    100%      80.85%
Class 2         22                 28                   19                   67.86%
Class 3         22                 25                   16                   64%

a. Indicates the classification of the decision attribute (i.e., Class 1: Dissatisfaction; Class 2: Unsure; Class 3: Satisfaction).
b. Obtained as $\sum_i |\text{lower approximation}_i| \,/\, \sum_i |\text{number of objects}_i|$, where $i$ is the class number.

In the current study, the data for the attributes (CAs and DA) collected from the first-stage questionnaire survey (Table 7) were adjusted
(preprocessed) for optimal RST analysis based on the attribute specifications defined in Table 9. The adjusted attribute values were then
formed into an information table (shown in Table 10), which was used as the input data for ROSE 2.
With reference to the concepts delineated in Section 3.1 (especially Sections 3.1.2 and 3.1.6), the discernibility function f(A), a Boolean
function simplified using the absorption law, is used to compute the reducts and the core from the discernibility matrix, whose
elements are the sets of attributes that distinguish the elementary sets [x]i and [x]j. For details of the related derivation, see the illustrative
examples in the RST tutorial by Walczak and Massart (1999). In the ROSE 2 software system, the "Reduction" function
(the calculation of a Boolean function) can be applied to the input data to find the minimal subsets of attributes
(reducts), which are used to determine the most essential CRs for e-learning service quality.
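For readers without access to ROSE 2, the following Python sketch illustrates the same idea without the Boolean discernibility-function machinery, using an equivalent brute-force characterization: a (decision-relative) reduct is a minimal subset of condition attributes that preserves the positive region of the full attribute set, and the core is the intersection of all reducts. The toy table and attribute names are hypothetical; the real computation was performed in ROSE 2.

```python
from itertools import combinations

def positive_region_size(rows, attrs, decision):
    """Count objects whose indiscernibility class over `attrs` is decision-pure."""
    groups = {}
    for r in rows:
        groups.setdefault(tuple(r[a] for a in attrs), []).append(r[decision])
    return sum(len(v) for v in groups.values() if len(set(v)) == 1)

def reducts(rows, attrs, decision):
    """Brute-force search for minimal attribute subsets preserving the positive region."""
    full = positive_region_size(rows, attrs, decision)
    found = []
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            if any(set(r) <= set(subset) for r in found):
                continue                # a smaller reduct is contained in this subset
            if positive_region_size(rows, subset, decision) == full:
                found.append(subset)
    return found

# Toy information table: C1 alone already determines the decision D
table = [{"C1": 1, "C2": 2, "C3": 1, "D": 1},
         {"C1": 2, "C2": 2, "C3": 1, "D": 2},
         {"C1": 1, "C2": 1, "C3": 2, "D": 1}]
reds = reducts(table, ["C1", "C2", "C3"], "D")
core = set.intersection(*(set(r) for r in reds))   # core = intersection of reducts
print(reds, core)                                  # [('C1',)] {'C1'}
```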

4.1.1. Calculating the accuracy and quality of classification

Following the concepts introduced in Sections 3.1.3 and 3.1.4, the "Approximations" function of the ROSE 2 software was used to analyze
the empirical data, using the 14 CAs (C1–C14) and one DA (D1) presented in Table 10. The lower and upper approximations for each
class, the accuracy of their classification, and the quality of the overall classification are presented in Table 11. The quality of the overall
classification was 80.85% ((3 + 19 + 16)/47, the proportion of objects in the lower approximations), indicating that the classification is
quite appropriate; the higher the classification accuracy rate, the higher the quality of the classification (Shuai & Li, 2005).
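To illustrate what the "Approximations" function computes, a minimal Python analogue is sketched below, with a hypothetical toy table rather than the Table 10 data. Objects are grouped into indiscernibility classes over the chosen condition attributes; classes fully contained in a decision concept form its lower approximation, and classes that merely overlap it form the upper approximation. Accuracy is then |lower|/|upper| per class, and the quality of classification is the total size of the lower approximations divided by the number of objects, exactly the ratios reported in Table 11.

```python
def approximations(rows, attrs, decision, cls):
    """Lower and upper approximations of decision class `cls` w.r.t. `attrs`."""
    groups = {}
    for idx, r in enumerate(rows):      # indiscernibility classes over attrs
        groups.setdefault(tuple(r[a] for a in attrs), []).append(idx)
    target = {i for i, r in enumerate(rows) if r[decision] == cls}
    lower, upper = set(), set()
    for members in groups.values():
        m = set(members)
        if m <= target:                 # class entirely inside the concept
            lower |= m
        if m & target:                  # class overlaps the concept
            upper |= m
    return lower, upper

# Toy table: two objects indiscernible on C1 but with different decisions
table = [{"C1": 1, "D": 1}, {"C1": 1, "D": 2}, {"C1": 2, "D": 2}]
low, up = approximations(table, ["C1"], "D", 2)
print(low, up)   # {2} {0, 1, 2} -> accuracy of class 2 is 1/3
```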

4.1.2. Deriving the core and reducts of attributes

In real evaluations, there may be too many criteria, or some criteria may be redundant; RST can be used to identify the core criteria. Hence,
this study used an RST analysis to eliminate superfluous attributes and identify the core ones. The three sets of reducts and six core attributes
(i.e., C1, C4, C5, C6, C12, and C13) are listed in Table 12. Four redundant attributes (i.e., C7, C8, C9, and C14) were removed. Therefore, 10
criteria critical to e-learning service quality (C1, C2, C3, C4, C5, C6, C10, C11, C12, and C13) were identified from the 14 CRs. These
criteria represent the minimal subsets of independent attributes (i.e., reducts of the set of attributes) that guarantee the same customer
service ranking as the entire set. The 10 critical CRs and the 14 TRs (VOE) (Table 6) were used to design the QFD questionnaire employed in the
second stage.

4.2. Analyzing the interrelationships between CRs and TRs using QFD

As previously mentioned, in the second stage, on the basis of the RST screening results, the relationship matrix for the QFD questionnaire
was designed to elucidate the interrelationships between the 10 CRs and 14 TRs for e-learning service quality. The main purpose of the
survey at this stage was to solicit the professional opinions of experts in the field of e-learning, including academics and experienced e-
learning program developers/designers, regarding the strength of the relationships between the CRs and TRs for e-learning service
quality. The aim was to identify the key TRs that developers can prioritize to improve service quality and enhance e-learning. The survey
questionnaires were mainly distributed to individuals with teaching or research experience in the field of e-learning or to those who
possessed e-learning design experience and/or e-learning-related business management experience. A total of 15 questionnaires were
distributed, and all were fully completed because this survey was conducted during face-to-face interviews. Table 13 presents the experts'
background information. The table shows that the numbers of industry and academic experts are almost equal and that more than a quarter
(26.67%) of the experts have over five years of work experience in fields related to e-learning.
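The expert ratings were subsequently averaged cell by cell to produce the relation scores reported in Table 14. A minimal sketch of this aggregation step, assuming 15 hypothetical expert matrices of size 14 (TRs) × 10 (CRs) with ratings drawn from the 1/3/5 scale, might look as follows:

```python
import numpy as np

# Hypothetical ratings: 15 experts, each scoring 14 TRs against 10 CRs on the
# 1/3/5 scale (the real questionnaires may also contain blanks for "no relation").
rng = np.random.default_rng(0)
expert_matrices = rng.choice([1, 3, 5], size=(15, 14, 10)).astype(float)

# Table 14-style averaged relation scores: the element-wise mean over the experts
averaged = expert_matrices.mean(axis=0)
print(averaged.round(2))
```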

4.3. Computing the objective weights of CRs by entropy

Criteria weights must be calculated before GRA can be performed. In this study, the relative weights were computed using the entropy
method and the data collected from the QFD questionnaires. On the basis of the averaged scores indicating the degree of strength of the

Table 12
Core and reduct of attributes.

Reduct set no. Reduct attribute Core attributea Critical attributeb


Reduct 1 C1, C2, C4, C5, C6, C10, C12, C13 C1, C4, C5, C6, C12, C13 C1, C2, C3, C4, C5, C6, C10, C11, C12, C13
Reduct 2 C1, C3, C4, C5, C6, C10, C12, C13
Reduct 3 C1, C4, C5, C6, C11, C12, C13
a. Core attributes indicate the common portion (intersection) of the three reducts.
b. Critical attributes indicate the most essential CRs for e-learning service quality, obtained by removing the four redundant attributes (i.e., C7, C8, C9, and C14)
that do not appear in any of the three reducts.

Table 13
Summary of the background information of the respondents in the second-stage survey.

Category          Item                             Total  Percentage
Sex               Male                             9      60%
                  Female                           6      40%
Age               ≥18 years old and ≤25 years old  1      6.67%
                  >25 years old and ≤40 years old  8      53.33%
                  >40 years old                    6      40%
Background        Industry                         7      46.67%
                  Academia                         8      53.33%
Education         College/University               4      26.67%
                  Master                           4      26.67%
                  Ph.D.                            7      46.67%
Years of service  ≤1 year                          1      6.67%
                  >1 year and ≤2 years             4      26.67%
                  >2 years and ≤5 years            6      40%
                  >5 years                         4      26.67%

relationships between the CRs and TRs, as provided by the experts (shown in Table 14), the objective weights were derived using Eqs. (A2)–
(A4). The analysis results are listed in Table 15. The top five CRs for e-learning service quality are "Usefulness of teaching materials" (C2)
(0.1138), "Practicability of teaching materials" (C4) (0.1084), "Richness and diversification of teaching materials" (C3) (0.1034), "Flexibility in
choosing learning content" (C13) (0.0995), and "Function of recording learning history" (C11) (0.0993).

4.4. Prioritizing the order of the TRs by GRA

For each TR, the larger the relation score, the greater the impact of the TR on the CR. The TRs with higher relation scores are more
critical to customer satisfaction. Thus, in this study, the relation scores are the TR performance values used in the GRA calculations,
with the CRs as the evaluation criteria. Because higher relation scores indicate stronger relationships and greater importance,
the data presented in Table 14 were normalized using Eq. (2) in the GRA. Using Eqs. (5) and (6), the values of the difference series (Δ0i(j))
and the grey relational coefficients (γ0i(j)) were obtained, respectively. The grey relational grades (Γ0i) were then derived using Eq. (7). These grey
relational grades, in turn, made it possible to determine the priority level of the various TRs given the aim of promoting e-learning service
quality with respect to the CRs. The TR rankings are shown in Table 16, where the TR with the highest grey relational grade is the most
important alternative. The results reveal that the top five TRs are as follows: "Curriculum development" (E6) (0.668), "Evaluation" (E14)
(0.650), "Navigation & tracking" (E10) (0.631), "Instructional design" (E8) (0.599), and "Teaching materials" (E7) (0.597).
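To make these steps concrete, the following minimal Python sketch reproduces the GRA chain (normalization, difference series, grey relational coefficients, and weighted grey relational grades), assuming the standard larger-the-better normalization, a reference series of ones, and Deng's coefficient with the customary distinguishing coefficient ζ = 0.5; the exact forms of Eqs. (2) and (5)–(7) are given earlier in the paper and may differ in detail. The toy scores and weights are illustrative, not the Table 14 data.

```python
import numpy as np

def grey_relational_grades(scores: np.ndarray, weights: np.ndarray,
                           zeta: float = 0.5) -> np.ndarray:
    """scores: TRs x CRs matrix of relation scores (higher = stronger);
    weights: entropy weights of the CRs; returns one grade per TR."""
    # Larger-the-better normalization per criterion (assumes non-constant columns)
    norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
    delta = np.abs(1.0 - norm)          # difference series vs. reference series of ones
    dmin, dmax = delta.min(), delta.max()
    gamma = (dmin + zeta * dmax) / (delta + zeta * dmax)  # grey relational coefficients
    return gamma @ weights              # weighted grey relational grades

# Toy example: three TRs scored against two equally weighted CRs
scores = np.array([[3.0, 4.0], [4.2, 2.9], [2.1, 3.5]])
grades = grey_relational_grades(scores, np.array([0.5, 0.5]))
print(grades.round(3))                  # higher grade = higher TR priority
print(grades.argsort()[::-1] + 1)       # TR ranking, best first
```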

5. Discussion

In general, RST is a useful tool for analyzing raw data and simplifying information (i.e., reducing redundant objects and attributes),
identifying data dependencies and important attributes, and classifying data. The key benefit of RST is that it does not require any
preliminary work (e.g., statistical hypotheses). It also does not require large amounts of data, as traditional statistical analysis methods do
(Hair, Anderson, Tatham, & Black, 1995), or additional information about the data (e.g., a membership grade or possibility value), as fuzzy
set theory does (Pawlak & Skowron, 2007). Therefore, this study used RST to determine the reduct and core attributes, identifying those CRs that
affect perceptions of e-learning service quality.
In this study, using RST (Table 12), three sets of reduct attributes were identified from the 14 CRs. These three sets share six core
attributes (C1, C4, C5, C6, C12, and C13), and four CAs (C7, C8, C9, and C14) proved relatively superfluous. The six core attributes are as follows:
"Teaching materials updated in a timely manner" (C1), "Practicability of teaching materials" (C4), "Ease of use" (C5), "Stability of network"
(C6), "Ability to plan for learning progress" (C12), and "Flexibility in choosing learning content" (C13). These results imply that the six key
attributes can be used as evaluation criteria when companies assess customer satisfaction with their e-learning services. These findings are
consistent with Shee and Wang (2008) and Sun et al. (2008). In contrast with the key attributes, the four redundant attributes ("Quality of e-
learning platform" (C7), "Ease of communicating with other learners" (C8), "Ease of sharing data/information" (C9), and "Ability to assess
Table 14
The averaged relation scores for CRs (representing the VOC) and TRs (representing the VOE).

TR (VOE) CR (VOC)

C1 C2 C3 C4 C5 C6 C10 C11 C12 C13


E01  3.07  2.27  3.00  2.00  1.73  1.33  2.00  2.20  2.20  1.73
E02  2.73  3.53  3.13  2.87  2.47  1.73  3.13  2.47  2.53  2.40
E03  3.27  2.67  2.80  2.13  3.00  2.87  2.47  2.93  3.07  2.20
E04  4.20  2.87  3.80  2.40  2.33  2.00  3.93  2.87  3.53  3.27
E05  3.33  2.20  2.20  2.00  1.87  2.80  2.60  3.00  3.13  2.53
E06  3.93  3.93  4.20  3.80  2.73  2.00  3.13  3.00  3.07  3.13
E07  4.33  3.53  4.47  3.40  2.07  2.07  2.20  2.07  2.40  2.60
E08  4.07  3.93  4.07  3.40  2.87  1.73  2.40  2.40  2.60  2.53
E09  2.13  1.60  1.87  1.73  1.60  1.40  3.20  4.33  4.20  3.53
E10  2.20  1.93  2.20  2.47  2.00  2.00  4.20  4.47  4.47  4.47
E11  2.20  2.33  3.00  2.53  3.07  2.20  1.80  1.93  1.87  1.93
E12  4.20  3.40  3.67  3.27  2.40  1.93  2.47  2.67  2.67  2.53
E13  2.53  2.33  2.60  1.93  3.07  2.87  2.33  3.07  2.33  2.40
E14  3.60  3.13  3.53  3.67  3.53  2.80  3.00  3.07  3.20  2.93

Table 15
The objective weights of the CRs.

Wj C1 C2 C3 C4 C5 C6 C10 C11 C12 C13


Weight 0.099 0.114 0.103 0.108 0.088 0.096 0.096 0.099 0.096 0.099
Ranking 6 1 3 2 10 9 8 5 7 4

Table 16
TR rankings based on the grey relational grades (Γ0i).

TR Γ0i Ranking
E1 Human resources 0.382 14
E2 Operating abilities 0.481 11
E3 Service process 0.512 8
E4 Information requirements 0.578 6
E5 Management system 0.472 12
E6 Curriculum development 0.668 1
E7 Teaching materials 0.597 5
E8 Instructional design 0.599 4
E9 Instructional process 0.485 9
E10 Navigation & tracking 0.631 3
E11 Instructional media 0.422 13
E12 Instructor support 0.548 7
E13 Technology 0.483 10
E14 Evaluation 0.650 2

learning performance" (C14)) are unimportant evaluation criteria and can be removed. Eliminating these four attributes (CRs) will have no
effect on customer satisfaction.
According to the objective weighting analysis conducted using the entropy method (Table 15), the experts believe that "Usefulness of
teaching materials" (C2) (0.1138) is the most important CR for e-learning service quality, followed by "Practicability of teaching
materials" (C4) (0.1084), "Richness and diversification of teaching materials" (C3) (0.1034), "Flexibility in choosing learning content"
(C13) (0.0995), and "Function of recording learning history" (C11) (0.0993). Because the top three CRs all belong to the
Curriculum design (CD) dimension, e-learning service providers (e.g., platform managers) can evidently improve
customer satisfaction with service quality by determining whether their curricula meet customer needs regarding the usefulness,
practicability, and richness and diversification of teaching materials. "Ease of use" (C5) (0.088) is the least important criterion, a finding
that differs slightly from those of Sun et al. (2008). This inconsistency may result from technological advances in e-learning materials
and digital media production: ease of use of e-learning platforms has become one of the most basic considerations in the design of
e-learning systems, so the experts may regard an emphasis on "Ease of use" as less important. Overall, the analysis of the relative CR
weights constitutes an important point of reference for e-learning-related industries in measuring customer satisfaction.
Furthermore, according to the QFD-GRA analysis of the relationships between the CRs and TRs (Table 16), the top five most important TRs
are "Curriculum development" (E6), "Evaluation" (E14), "Navigation & tracking" (E10), "Instructional design" (E8), and "Teaching materials"
(E7). The analysis results show that curriculum development, including the objectives and structure of the program courses as well as the
quality and volume of the digitalized courseware content, should be the primary consideration for e-learning program managers or
engineers designing or planning e-learning products/services. Moreover, to improve e-learning service quality, attention should be focused
on important TRs such as the construction of comprehensive evaluation systems, the provision of clear learning guidance and tracking
mechanisms, the development of consistent instructional design (including suitable assessment and feedback modules), and the use of
current, correct, and well-organized teaching materials. Most of the top five TRs are curriculum related. Thus, it can be inferred
that instructional design and development should be the first and foremost improvement techniques used to promote service quality in the
e-learning context. In other words, e-learning providers need to start at the foundation of e-learning programs, strengthening instructional
design to meet customer needs.
In the research by Wang and Hsu (2010), "Teaching materials" (E7) and "Guidance and tracking" (E10) were found to be the top two most
important factors in e-learning evaluations; thus, their findings are consistent with the results of the current study. Furthermore, according
to the Specification Clauses of the eLSC (eLQCC, 2009c), the previously mentioned standards for e-learning service quality (Table 4), applicants for
any type of certification (i.e., unit, course, or program certification) must comply with the eighth standard, "Evaluation" (E14), which is the
second most important TR in the current study. All e-learning-related staff, departments, programs, and systems and support tools should
comply with this requirement before applying for certification. Therefore, the findings of this study are consistent with the norms for e-
learning service quality instituted by the IDB, MOEA, Taiwan (eLQCC, 2009c).

6. Conclusions

As the output of the e-learning industry continues to expand, e-learning service quality is not only increasingly valued by customers but
also receiving more attention from firms seeking to develop competitive advantage. Therefore, understanding how to effectively improve
service quality within e-learning has become essential. In this study, a university's e-learning platform has been used as an illustrative
example. In this paper, using RST and with reference to an extensive literature review, 10 important CRs affecting customer satisfaction with
e-learning service quality and 14 key TRs for e-learning development and design are summarized. The relationship matrix for the CRs and
TRs developed using QFD is then presented. Finally, the results of the comprehensive GRA evaluation of the TR priority levels, based on the
objective criteria weights determined using the entropy method, are presented. A summary of the important research
conclusions of this study, the managerial implications of the results, and the limitations of the research is briefly presented in the following
sections, along with recommendations for future research.

6.1. Research conclusions

Using RST and the questionnaire data collected from the survey of e-learning users conducted in the first stage, 10 of the 14 CRs were
determined to be critical to customer satisfaction with e-learning service quality. (These CRs are referred to as CAs.) Four CRs (28.57% of the
total) were excluded, and the remaining 10 were used as the VOC in the QFD questionnaire in the second-stage survey of the
e-learning experts. The 10 critical CRs (the VOC) are "Teaching materials updated in a timely manner", "Usefulness of teaching materials",
"Richness and diversification of teaching materials", "Practicability of teaching materials", "Ease of use", "Stability of network", "Ease of
sharing learning experience with others", "Function of recording learning history", "Ability to plan for learning progress", and "Flexibility in
choosing learning content".
Accordingly, the objective entropy weights of the CRs were calculated using the performance values, i.e., the degrees of
strength in the QFD relationship matrix between the 10 CRs and 14 TRs as determined by the e-learning experts during the second stage. The
five criteria (CRs) with the highest weights are "Usefulness of teaching materials", "Practicability of teaching materials", "Richness and
diversification of teaching materials", "Flexibility in choosing learning content", and "Function of recording learning history". These are the
most important evaluation criteria for customer satisfaction with e-learning providers.
With respect to the relative weights of the 10 CRs, the 14 TRs for e-learning were prioritized through an overall performance evaluation of the
QFD relationship matrix conducted using GRA. "Curriculum development", "Evaluation", "Navigation & tracking", "Instructional design", and
"Teaching materials" are the top five most important TRs for enhancing e-learning service quality.

6.2. Managerial implications

As previously mentioned, because service quality is an essential factor that influences customer repurchase intentions, improving service
quality is imperative for businesses seeking to maintain their competitiveness in today's rapidly changing environment. This is certainly
true in the e-learning industry. The research findings of this study make it possible to provide some recommendations for practical business
management within the e-learning industry. First, by referencing the weightings of the CRs (the VOC), e-learning service providers can
satisfy their customers by providing personalization options (e.g., flexible learning content and a function for recording individual learning
histories). Such attention will allow them to develop higher quality curricula that include useful, practical, and diverse teaching materials. E-
learning service providers can thereby effectively shape customer perceptions by taking into account these more highly weighted criteria
(CRs). Simultaneously, e-learning customers (users) can also use these CRs as evaluation criteria in selecting e-learning products and/or
systems.
In addition, the results of the QFD analysis suggest that, to satisfy customer needs, e-learning providers should focus on the
most important TRs: those related to curriculum development, evaluation mechanisms, and guidance and tracking functions. In short, the
more complete the competences of e-learning providers with regard to these three requirements, the better their products and services
will address customer needs and the higher customer satisfaction will be.
With this in mind, e-learning service institutions should focus on the following goals to further improve their service quality: (1)
curriculum development, in which the goals and structure of the programs need to be considered via systematic analysis, design, and review
to ensure that the course content meets quality standards; (2) evaluation, with sounder evaluation mechanisms established to regularly
assess e-learning staff, services, curricula, and system and support tools, not only to maintain system operations but also to improve the
quality of e-learning products and services; and (3) guidance and tracking, with more care taken to provide guidance mechanisms such as
instructions for learning activities, course maps, user databases, and user-friendly interfaces, all of which will allow learners to smoothly
carry out learning activities and effectively control their individual learning status. In summary, these three TRs are the highest priorities for
e-learning providers and engineers developing new products/services or improving existing ones and hoping thereby to achieve sustainable
competitive advantage.

6.3. Research limitations and future work

This research strives to be rigorous and objective. However, certain research limitations are inevitable. The following suggestions
may be helpful to researchers conducting follow-up studies of e-learning: (1) This study employed a survey of customer satisfaction
that only targeted participants with experience using one university's e-learning platform. This sample may limit the helpfulness
of the results because of its insufficient size and concentration. It is therefore suggested that follow-up studies
expand the sample and thereby refine the results. (2) In addition to traditional methods of statistical analysis (e.g., factor analysis) and
the RST method used in this study, the fuzzy Delphi method (FDM) can also be used to analyze expert opinions. (3) In this study, three
methods (RST, QFD, and GRA) were used together with objective weights to construct an analytical model for enhancing service
quality within e-learning. It is suggested that future research employ the AHP/analytic network process (ANP) to obtain subjective
weights and compare the results. Additionally, future work might investigate other criteria for satisfaction with e-learning and use
a combination of other methods, such as a two-dimensional quality model analysis (using the Kano model), the decision-making trial
and evaluation laboratory (DEMATEL), or VIKOR. (4) From the results of this research, it is recommended that follow-up research
include case studies using in-depth interviews to further analyze the effectiveness of TRs. (5) Future studies might also extend the
proposed model based on the features of service quality in different industries. For instance, different CRs and TRs should be analyzed
in other industry contexts.

Appendix A. Entropy

Originally, the term entropy was used to describe a physical phenomenon. Entropy is a measure of the information uncertainty
represented by a discrete probability distribution (Shannon & Weaver, 1947). While a myriad of methods (e.g., the AHP/ANP, the
eigenvector method) can be used to decide weights, entropy can be used to measure the weights of various attributes (Hwang & Yoon,
1989). Entropy analysis is an objective weighting method; the weights of the criteria are determined on the
basis of the values in the decision (evaluation) matrix. When the entropy measure is used, the weight of a criterion is larger when its
values vary more greatly, which indicates that it has more influence. The current study uses the concept of entropy to calculate the
objective weights of the CRs. The entropy weighting method is then used to integrate the grey relational coefficients of each criterion
into the grey relational grades to reflect the importance of each criterion (CR). The steps used in the entropy analysis can be
summarized as follows:

Step 1: Construct an evaluation matrix for the raw data.

The original evaluation matrix is constructed using m evaluation criteria to assess n samples as shown in Eq. (A1).

The rows of D correspond to the samples $A_1, A_2, \ldots, A_n$ and the columns to the criteria $C_1, C_2, \ldots, C_m$:

$$
D = \left[ x_{ij} \right]_{n \times m} =
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1m} \\
x_{21} & x_{22} & \cdots & x_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \cdots & x_{nm}
\end{bmatrix},
\qquad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, m
\tag{A1}
$$

Step 2: Normalize the original evaluation matrix.

Because the original data for the evaluation criteria may be measured in different units, the evaluation matrix is normalized to ensure
consistency and comparability. Essentially, to create an objective baseline for the various evaluation criteria, the original matrix must be
normalized. The normalized evaluation matrix $P = \left[ p_{ij} \right]_{n \times m}$ can be obtained using Eq. (A2), where $p_{ij}$ is the performance rating:

$$
p_{ij} = \frac{x_{ij}}{\sum_{p=1}^{n} x_{pj}}, \qquad i = 1, 2, \ldots, n
\tag{A2}
$$

Step 3: Calculate the entropy value of criteria.

The entropy value $e_j$ of criterion $C_j$ ($j = 1, 2, \ldots, m$) can be calculated using Eq. (A3), where $k = 1/\ln n$ is a constant that guarantees $0 \le e_j \le 1$:

$$
e_j = -k \sum_{i=1}^{n} p_{ij} \ln p_{ij}
\tag{A3}
$$

Step 4: Calculate the weights of criteria.

The criteria weights $w_j$ ($j = 1, 2, \ldots, m$) can be calculated using Eq. (A4), in which $d_j$ is the degree of divergence of the average intrinsic
information associated with each criterion. The more divergent the performance ratings $p_{ij}$ ($i = 1, 2, \ldots, n$; Eq. (A2)) for criterion $C_j$ are,
the higher its corresponding $d_j$ and the greater the weight $w_j$, which means that criterion $C_j$ is more important (Zeleny, 1982):

$$
w_j = \frac{d_j}{\sum_{k=1}^{m} d_k}, \qquad d_j = 1 - e_j
\tag{A4}
$$
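The four steps above translate directly into a few lines of code. The following Python sketch implements Eqs. (A2)–(A4) for a strictly positive evaluation matrix (zero entries would require special handling before the logarithm in Eq. (A3)); the toy matrix is purely illustrative.

```python
import numpy as np

def entropy_weights(x: np.ndarray) -> np.ndarray:
    """Objective criteria weights via Shannon entropy, following Eqs. (A2)-(A4).
    x is the n x m evaluation matrix (rows: samples, columns: criteria),
    assumed strictly positive so that log(p) is defined."""
    p = x / x.sum(axis=0)                    # Eq. (A2): column-wise normalization
    k = 1.0 / np.log(len(x))                 # k = 1/ln(n) guarantees 0 <= e_j <= 1
    e = -k * (p * np.log(p)).sum(axis=0)     # Eq. (A3): entropy of each criterion
    d = 1.0 - e                              # degree of divergence d_j
    return d / d.sum()                       # Eq. (A4): normalized weights

# Toy 3 x 2 matrix: the second criterion varies far more across samples,
# so it receives almost all of the weight.
x = np.array([[3.0, 1.0], [3.1, 4.0], [2.9, 2.0]])
print(entropy_weights(x).round(3))  # e.g., [0.001 0.999]
```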

Appendix B. QFD/HOQ questionnaire (relationship matrix of WHATs vs. HOWs)

Please indicate the degree of strength of the relationship between the CRs (VOC) and the TRs (VOE) using the following scale: 1 = weak relationship; 3 = moderate relationship; 5 = strong relationship.

WHATs: CRs (VOC) (matrix columns)
C1: Teaching materials updated in a timely manner
C2: Usefulness of teaching materials
C3: Richness and diversification of teaching materials
C4: Practicability of teaching materials
C5: Ease of use
C6: Stability of network
C7: Quality of e-learning platform
C8: Ease of communicating with other learners
C9: Ease of sharing data/information
C10: Ease of sharing learning experience with others

HOWs: TRs (VOE) (matrix rows)
E1: Human resources
E2: Operating abilities
E3: Service process
E4: Information requirements
E5: Management system
E6: Curriculum development
E7: Teaching materials
E8: Instructional design
E9: Instructional process
E10: Navigation & tracking
E11: Instructional media
E12: Instructor support
E13: Technology
E14: Evaluation

References

Alptekin, S. E., & Karsak, E. E. (2011). An integrated decision framework for evaluating and selecting e-learning products. Applied Soft Computing Journal, 11(3), 2990–2998.
American Society for Training & Development (ASTD). (2011). E-learning glossary. Retrieved August 10, 2010, from http://www.astd.org/lc/glossary.htm.
Anderson, E. W., & Sullivan, M. W. (1993). The antecedents and consequences of customer satisfaction for firms. Marketing Science, 12(2), 125–143.
Ardito, C., Costabile, M. F., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T., et al. (2006). An approach to usability evaluation of e-learning applications. Universal Access in the Information Society, 4(3), 270–283.
Bayraktaroglu, G., & Özgen, Ö. (2008). Integrating the Kano model, AHP and planning matrix: QFD application in library services. Library Management, 29(4/5), 327–351.
Bevilacqua, M., Ciarapica, F. E., & Giacchetta, G. (2006). A fuzzy-QFD approach to supplier selection. Journal of Purchasing & Supply Management, 12(1), 14–27.
Beynon, M. J., & Peel, M. J. (2001). Variable precision rough set theory and data discretisation: an application to corporate failure prediction. Omega, 29(6), 561–576.
Bolton, R. N., & Drew, J. H. (1991). A longitudinal analysis of the impact of service changes on customer attitudes. Journal of Marketing, 55(1), 1–10.
Bryant, K., Campbell, J., & Kerr, D. (2003). Impact of web based flexible learning on academic performance in information systems. Journal of Information Systems Education, 14(1), 41–50.
CETIS. (2003). Who's doing what? The Centre for Educational Technology Interoperability Standards. Retrieved December 15, 2010, from http://zope.cetis.ac.uk/static/who-does-what.html.
Chan, L. K., & Wu, M. L. (2002). Quality function deployment: a literature review. European Journal of Operational Research, 143(3), 463–497.
Chan, L. K., & Wu, M. L. (2005). A systematic approach to quality function deployment with a full illustrative example. Omega, 33(2), 119–139.
Chang, T. Y., & Chen, Y. T. (2009). Cooperative learning in e-learning: a peer assessment of student-centered using consistent fuzzy preference. Expert Systems with Applications, 36(4), 8342–8349.
Chen, Y. T., & Chou, T. Y. (2011). Applying GRA and QFD to improve library service quality. The Journal of Academic Librarianship, 37(3), 237–245.
Cheng, C. H., Chen, T. L., Wei, L. Y., & Chen, J. S. (2009). A new e-learning achievement evaluation model based on RBF-NN and similarity filter. Neural Computing & Applications, 20(5), 659–669.
Council for Economic Planning and Development (CEPD). (2002). Two trillion and twin star development program. Retrieved April 25, 2010, from http://www.cepd.gov.tw/m1.aspx?sNo=0012498.
Cronin, J. J., Jr., Brady, M. K., & Tomas, H. G. M. (2000). Assessing the effects of quality, value, and customer satisfaction on consumer behavioral intentions in service environment. Journal of Retailing, 76(2), 193–218.
Cronin, J. J., Jr., & Taylor, S. A. (1992). Measuring service quality: a reexamination and extension. Journal of Marketing, 56(3), 55–68.
Deng, J. L. (1982). Control problems of grey systems. Systems & Control Letters, 1(5), 288–294.
Deng, J. L. (1989). Introduction to grey system. Journal of Grey System, 1(1), 1–24.
Dimitras, A. I., Slowinski, R., Susmaga, R., & Zopounidis, C. (1999). Business failure prediction using rough sets. European Journal of Operational Research, 114(2), 263–280.
e-Learning Quality Certification Center (eLQCC). (2009a). Enhancing international competitiveness through optimizing e-learning. Retrieved June 5, 2010, from http://www.elq.org.tw/en/about/origin01.aspx.
e-Learning Quality Certification Center (eLQCC). (2009b). e-Learning Courseware Certification (eLCC): e-Learning and Digital Archives Program. Retrieved June 5, 2010, from http://www.elq.org.tw/en/elcc_apply/digi_material01.aspx.
e-Learning Quality Certification Center (eLQCC). (2009c). e-Learning Service Certification (eLSC): e-Learning and Digital Archives Program. Retrieved June 12, 2010, from http://www.elq.org.tw/en/elsc_apply/digi_service01.aspx.
e-Learning Quality Certification Center (eLQCC). (2009d). Specifications of e-Learning Operations and Service Quality Certification (eLOSQC). Retrieved June 5, 2010, from http://study2009.csd.org.tw/elqsc/2010/0824/images/0806.pdf.
Eid, T. (2007). Forecast: e-learning suites and management system software, worldwide, 2006–2011. Retrieved December 15, 2010, from http://www.gartner.com/DisplayDocument?id=543327.
Flott, L. W. (2002). Customer satisfaction. Metal Finishing, 100(1), 58–63.
Friesen, N. (2003). Three objections to learning objects and e-learning standards. Retrieved June 5, 2010, from http://www.learningspaces.org/n/papers/objections.html.
George, S., & Labas, H. (2008). E-learning standards as a basis for contextual forums design. Computers in Human Behavior, 24(2), 138–152.
Gonzalez-Barbone, V., & Anido-Rifon, L. (2010). From SCORM to Common Cartridge: a step forward. Computers & Education, 54(1), 88–102.
Gunasekaran, A., Ronald, D. M., & Shaul, D. (2002). E-learning: research and applications. Industrial and Commercial Training, 34(2), 44–53.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1995). Multivariate data analysis. Upper Saddle River, New Jersey: Prentice Hall.
Hauser, J. R., & Clausing, D. (1988). The house of quality. Harvard Business Review, 66(3), 63–73.
Ho, Y. C., & Lin, C. H. (2009). A QFD-, concurrent engineering-, and target costing-based methodology for ODM companies to formulate RFQs. Journal of Manufacturing Technology Management, 20(8), 1119–1146.
Huang, K. Y., & Jane, C. J. (2009). A hybrid model for stock market forecasting and portfolio selection based on ARX, grey system and RS theories. Expert Systems with Applications, 36(3), 5387–5392.
Hwang, C. L., & Yoon, K. (1989). Multiple attributes decision making. Berlin: Springer.
Hwarng, H. B., & Teo, C. (2001). Translating customers' voices into operations requirements: a QFD application in higher education. International Journal of Quality & Reliability Management, 18(2), 195–226.
Industrial Development Bureau (IDB). (2010a). e-Learning and archives industry promotion plan: Program introduction. Retrieved March 15, 2010, from http://www.epark.org.tw/en_epark/index.php.
Industrial Development Bureau (IDB). (2010b). e-Learning and archives industry promotion plan: Outcome. Retrieved March 15, 2010, from http://www.epark.org.tw/en_epark/page1.php.
Industrial Development Bureau (IDB). (2011). e-Learning and archives industry promotion plan: The report of the survey analysis of the status quo and output values for the domestic and international e-learning industry. Retrieved October 31, 2011, from http://elearning.iiiedu.org.tw/epark_result_page.php?id=20110105155118.
Inglis, A., Ling, P., & Joosten, V. (2002). Delivering digitally. London: Kogan Page.
Ismail, J. (2002). The design of an e-learning system: beyond the hype. The Internet and Higher Education, 4(3–4), 329–336.
Jen, W., & Hu, K. C. (2003). Application of perceived value model to identify factors affecting passengers' repurchase intentions on city bus: a case of the Taipei metropolitan area. Transportation, 30(3), 307–327.
Jones, N., & O'Shea, J. (2004). Challenging hierarchies: the impact of e-learning. Higher Education, 48(3), 379–395.
Kotler, P. (1999). Marketing management: Analysis, planning, implementation and control (9th ed.). New Jersey: Prentice-Hall.
Kruse, K. (2003). e-Learning alphabet soup: A guide to terms. Retrieved June 5, 2010, from http://www.purpletrain.com/news/20030929_01.htm.
Kuo, H. Z. (2010). The survey analysis of the status quo and output values for e-learning in 2009. Retrieved October 31, 2011, from http://idp.teldap.tw/epaper/20100430/430.
Laurillard, D. (2002). Rethinking university teaching. London: Routledge Falmer.
Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48(2), 185–204.
Li, L., Guo, Q. S., & Dong, Z. M. (2008). An overview on the mathematical models of quality function deployment. In 2008 International Symposium on Information Science and Engineering (pp. 78–83).
Li, Y., Tang, J., Luo, X., & Xu, J. (2009). An integrated method of rough set, Kano's model and AHP for rating customer requirements' final importance. Expert Systems with Applications, 36(3), 7045–7053.
Li, J. R., & Wang, Q. H. (2010). A rough set based data mining approach for house of quality analysis. International Journal of Production Research, 48(7), 2095–2107.
Liaw, S. S., Huang, H. M., & Chen, G. D. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49(4), 1066–1080.
Lin, Y. H., Cheng, H. P., Tseng, M. L., & Tsai, J. C. C. (2010). Using QFD and ANP to analyze the environmental production requirements in linguistic preferences. Expert Systems with Applications, 37(3), 2186–2196.
Lin, R. H., Wang, Y. T., Wu, C. H., & Chuang, C. L. (2009). Developing a business failure prediction model via RST, GRA and CBR. Expert Systems with Applications, 36(2), 1593–1600.
Market Intelligence Center (MIC). (2005). Strategic analysis of the market trend and industrial development for digital content, Chapter 5: The development trend of Taiwan's digital content. Taipei: Institute for Information Industry. (In Chinese.)
Martinez-Ortiz, I., Sierra, J. L., Fernández-Manjón, B., & Fernández-Valmayor, A. (2009). Language engineering techniques for the development of e-learning applications. Journal of Network and Computer Applications, 32(5), 1092–1105.
Mizuno, S., & Akao, Y. (1994). QFD: The customer-driven approach to quality planning and development. Tokyo: Asian Productivity Organization.
Oliver, R. L. (1999). Whence consumer loyalty. Journal of Marketing, 63(4), 33–44.
Paechter, M., Maier, B., & Macher, D. (2010). Students' expectations of, and experiences in e-learning: their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222–229.
Pawlak, Z. (1991). Rough sets: Theoretical aspects of reasoning about data. Boston: Kluwer Academic Publishers.
Pawlak, Z. (2002). Rough sets and intelligent data analysis. Information Sciences, 147(1–4), 1–12.
Pawlak, Z., & Skowron, A. (2007). Rudiments of rough sets. Information Sciences, 177(1), 3–27.
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: a research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Quarterly, 25(4), 401–425.
Pitman, G., Motwani, J., Kumar, A., & Cheng, C. H. (1996). QFD application in an educational setting: a pilot field study. International Journal of Quality and Reliability Management, 12(6), 63–72.
Predki, B., Slowinski, R., Stefanowski, J., Susmaga, R., & Wilk, S. (1998). ROSE: software implementation of the rough set theory. In Proceedings of the First International Conference on Rough Sets and Current Trends in Computing (RSCTC) (pp. 605–608).
Rey-López, M., Díaz-Redondo, R. P., Fernández-Vilas, A., Pazos-Arias, J. J., García-Duque, J., Gil-Solla, A., et al. (2011). An extension to the ADL SCORM standard to support adaptivity: the t-learning case-study. Computer Standards & Interfaces, 31(2), 309–318.
Rosenberg, M. J. (2001). E-learning: Strategies for delivering knowledge in the digital age. Columbus, OH: McGraw-Hill.
Ruzgar, N. S., Unsal, F., & Ruzgar, B. (2008). Predicting business failures using the rough set theory approach: the case of the Turkish banks. International Journal of Mathematical Models and Methods in Applied Sciences, 2(1), 57–64.
Sahney, S., Banwet, D. K., & Karunes, S. (2008). An integrated framework of indices for quality management in education: a faculty perspective. The TQM Journal, 20(5), 502–519.
Selim, H. M. (2007). Critical success factors for e-learning acceptance: confirmatory factor models. Computers & Education, 49(2), 396–413.
Shannon, C. E., & Weaver, W. (1947). The mathematical theory of communication. Urbana: The University of Illinois Press.
Shee, D. Y., & Wang, Y. S. (2008). Multi-criteria evaluation of the web-based e-learning system: a methodology based on learner satisfaction and its applications. Computers & Education, 50(4), 894–905.
Shen, L., & Loh, H. T. (2004). Applying rough sets to market timing decisions. Decision Support Systems, 37(4), 583–597.
Shuai, J. J., & Li, H. L. (2005). Using rough set and worst practice DEA in business failure prediction. Lecture Notes in Computer Science, 3462(2), 503–510.
Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202.
Sung, Y. T., Chang, K. E., & Yu, W. C. (2011). Evaluating the reliability and impact of a quality assurance system for e-learning courseware. Computers & Education, 57(2), 1615–1627.
Thakkar, J. S., Deshmukh, G., & Shastree, A. (2006). Total quality management (TQM) in self-financed technical institutions: a quality function deployment (QFD) and force field analysis approach. Quality Assurance in Education, 14(1), 54–74.
Walczak, B., & Massart, D. L. (1999). Tutorial: rough sets theory. Chemometrics and Intelligent Laboratory Systems, 47(1), 1–16.
Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: the effects of psychological processes on e-learning outcomes. Information & Management, 45(8), 513–521.
Wang, Y. S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41(1), 75–86.
Wang, R., & Hsu, S. L. (2010). Assessment on effectiveness of e-learning in uncertainty. In 2010 IEEE International Conference on Advanced Management Science (ICAMS), Vol. 9 (pp. 65–71).
Wentling, T. L., Waight, C., Gallaher, J., Fleur, J. L., Wang, C., & Kanfer, A. (2000). E-learning: A review of literature. Knowledge & Learning Systems Group, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign.
Wu, J. H., Tennyson, R. D., & Hsia, T. L. (2010). A study of student satisfaction in a blended e-learning system environment. Computers & Education, 55(1), 155–164.
Yan, W., Li, P. K., & Chen, C. H. (2005). A QFD-enabled product conceptualization approach via design knowledge hierarchy and RCE neural network. Knowledge-Based Systems, 18(6), 279–293.
Zeleny, M. (1982). Multiple criteria decision making. New York: McGraw-Hill.
Zhai, L. Y., Khoo, L. P., & Zhong, Z. W. (2009). Design concept evaluation in product development using rough sets and grey relation analysis. Expert Systems with Applications, 36(3), 7072–7079.
Zhai, L. Y., Khoo, L. P., & Zhong, Z. W. (2010). Towards a QFD-based expert system: a novel extension to fuzzy QFD methodology using rough set theory. Expert Systems with Applications, 37(12), 8888–8896.
Zhang, D., & Zhou, L. (2003). E-learning with interactive multimedia. Information Resources Management Journal, 16(4), 1–13.
