
Measuring customer satisfaction in higher education

Susan Aldridge and Jennifer Rowley

The authors
Susan Aldridge is Head of Student Services and Jennifer Rowley is Head of the School of Management and Social Sciences, both at Edge Hill University College, Ormskirk, UK.

Abstract
Evaluates a methodology which was developed to measure student satisfaction with significant components of the service experience delivered to students at Edge Hill University College. Uses a questionnaire-based survey to collect information on student satisfaction. The methodology has two unique features: the Student Charter informed the survey design; and student responses were collected electronically through on-screen questionnaires accessible over an intranet. Outcomes suggest that there remains some resistance to the completion of an electronic questionnaire, and both paper and electronic versions are likely to continue to be necessary in order to achieve optimum response rates. The methodology has identified specific aspects of the service experience where there was either an absence of student satisfaction or the level of student satisfaction was variable. These aspects have been further explored with focus groups and fed into the quality plan for the college. A negative quality model is proposed which may offer a framework for response to different types of feedback from students.

Quality Assurance in Education, Volume 6 Number 4, 1998, pp. 197-204. MCB University Press, ISSN 0968-4883.

Context and management objectives

This article describes a methodology which used a Student Charter as the basis for a questionnaire-based survey. The methodology was tested and developed as an operational tool and not as a distinct research project. The Student Satisfaction Sub-group was charged with the responsibility for the development of an operational methodology that would complement existing quality assurance processes, tools and systems already in operation within the college. The team was, however, concerned that their practice should be informed by current debates concerning the measurement of educational and service quality. In addition, it was deemed appropriate that the survey should be embedded in the increasingly electronic environment that has been created within the college over the past few years. The methodology also needed to be efficient and operational within tight resource constraints.

A brief profile of Edge Hill University College may serve to contextualise the project. Edge Hill University College is a medium-sized higher education institution offering undergraduate and postgraduate courses to over 6,000 full- and part-time students. It is located on three sites: the main site in Ormskirk, the School of Health Studies at Fazakerley and the in-service part of the School of Education on the Woodlands site. Computer networking covers all sites, but good access to these facilities is more recent on the Fazakerley and Woodlands sites than on the Ormskirk site. There is a major challenge in designing an evaluation methodology which is suitable for the college's diverse student body. Students may have varying levels of experience with information technology, attend the college for different lengths of time, attend in part-time or full-time mode and, in general, seek different benefits packages from the college.

The survey was designed to supplement the current arrangements for feedback from students on their course experience and their general satisfaction with services and facilities. This feedback is currently collected through four main channels:
(1) Through module evaluations focusing on module delivery and content. These are presented in summary form to subject boards, school boards, and ultimately the academic standards committee. Verbal formative feedback is also received from students at subject boards and school boards.
(2) Through cross-college questionnaires. The college has an established First Week Questionnaire which is used to evaluate the induction process. Student satisfaction surveys were also conducted in 1993/94 and 1994/95 with final-year students after they had left the college.
(3) Through standing committees of the college which have responsibility for specific or generic service areas, such as the Learning Resources Committee, the Catering Student Committee and EHE.
(4) Through the college complaints procedure.

While the college needs to continue to consolidate established work on module evaluation, induction phase evaluation and, at the end of the students' college experience, comments from alumni, it also needs to identify a more sophisticated and comprehensive methodology for measuring general student satisfaction at the end of each year. An integrated, cross-college approach to student feedback offered the opportunity of:
(1) an enhanced and continuously refined methodology;
(2) greater efficiency in data collection and analysis;
(3) reduction of questionnaire overload on students;
(4) the possibility of a framework which would support the execution of more sophisticated longitudinal surveys as it became possible to track students' responses over a few years.

The student satisfaction approach goes hand-in-hand with the development of a culture of continuous quality improvement (Harvey, 1995).

Approaches to the measurement of student satisfaction

Not all institutions have systematic student evaluation processes in place. Of those that do, very few in Australia and the United Kingdom make student evaluations compulsory. Different institutions (and even different academic departments within the same institution) use different questions on student evaluation forms, and institutions also vary in the data collection yardsticks they impose (Ramsden, 1991).

Work on approaches to the evaluation of the student experience can be divided into two loosely bound categories: methods that focus on assessing teaching and learning, and methods that assess the total student experience. These two categories will no doubt continue to coexist. The focus of earlier work lay in the first category. More recently there has been a wider acknowledgement that the totality of the student experience of an institution is a useful perspective to adopt in student satisfaction and marketing terms. Teaching and learning is not something that occurs solely in the classroom or under the tutor's direct supervision, and the total student experience is becoming ever more central to students' attitudes to the institution.

Many higher education institutions perform some evaluation of other aspects of the student experience beyond the assessment of the quality of teaching and learning. Library use surveys have, for instance, been conducted for many years. Bell (1995), for example, describes the use of a survey at Leicester University based on the priority search methodology, which uses focus groups to generate questions for inclusion in a large questionnaire-based survey. Nevertheless, surveys of the service experience provided by higher education institutions are relatively recent. The Centre for Research into Quality at the University of Central England (UCE) in Birmingham has conducted some pioneering work in this area. For the last eight years the Centre has undertaken an annual student satisfaction survey (Harvey, 1995). This survey represents one element of a methodology for collecting student feedback, the other two being student feedback questionnaires and course or module feedback. The student satisfaction survey questionnaire is currently sent to the term-time addresses of 5,000 students (approximately 30 per cent of all UCE on-site students). The approach is intended to integrate student views into management decision making.
This is achieved by assessing student satisfaction with a wide range of aspects of provision and then identifying which of these areas are important to students. The outcomes are mapped onto an importance and satisfaction grid. Those aspects that fall into the category of high importance and low satisfaction are priority areas for management intervention. Another important element of the process at UCE is that the questions are student determined. The first stage of the process is a series of focus groups, during which an attempt is made to identify those issues that are of interest to students. This means that there is no attempt to identify a general scale, and that the independence of the various topics is not established mathematically, but rather by student definition within the focus groups. Table I shows typical student satisfaction topics as cited by Harvey (1995), and compares these with a similar list described by Hill (1995). A comparison of the two lists demonstrates that whilst there are similarities, there are also differences, which are, to some extent, determined by the facilities offered by the specific organisation. It is also important to note the function-specific focus of these questions, which enhances the potential for managerial relevance (Rust et al., 1995). For each main topic students are asked to indicate satisfaction ratings on a seven-point scale, together with patterns of use of facilities or preferences.
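
The mapping can be illustrated with a brief, hypothetical sketch: the aspect names, scores and the midpoint threshold below are invented for illustration and are not drawn from the UCE survey; the point is simply that aspects combining high importance with low satisfaction are flagged as priorities.

```python
# Illustrative sketch of an importance/satisfaction grid (hypothetical data).
# Each aspect of provision has a mean importance and a mean satisfaction
# score on a seven-point scale; the scale midpoint (4.0) is used here as an
# arbitrary threshold for "high" versus "low".

ASPECTS = {
    # aspect: (mean importance, mean satisfaction) - invented values
    "Library services": (6.1, 5.4),
    "Refectories": (4.8, 3.2),
    "Social life": (3.6, 5.0),
}

def grid_quadrant(importance, satisfaction, threshold=4.0):
    """Place an aspect in one quadrant of the importance/satisfaction grid."""
    high_importance = importance >= threshold
    high_satisfaction = satisfaction >= threshold
    if high_importance and not high_satisfaction:
        return "priority for intervention"   # high importance, low satisfaction
    if high_importance and high_satisfaction:
        return "maintain"
    if not high_importance and not high_satisfaction:
        return "low priority"
    return "possible over-provision"         # low importance, high satisfaction

for aspect, (imp, sat) in ASPECTS.items():
    print(f"{aspect}: {grid_quadrant(imp, sat)}")
```
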
Table I Comparing topics in student satisfaction surveys

Harvey's topics: library services; computer services; refectories; accommodation; course organisation and assessment; teaching staff and teaching style; teaching methods; student workload and assessment; social life; self-development; financial circumstances; university environment.

Hill's topics: library service; computing facilities; catering service; accommodation service; course content; personal contact with academic staff; teaching methods; teaching quality; student involvement; work experience; financial services; feedback; joint consultation; university bookshop; career service; counselling/welfare; health service; students' union; physical education; travel agency.

Each of these topics may have a page of associated questions, which will be designed using the input from the focus groups. The questionnaire runs to around 20 pages in length and takes around 45 minutes to complete. Student satisfaction surveys based on UCE's approach have also been introduced at Cardiff Institute, Buckinghamshire College, University of Central Lancashire and Auckland Institute of Technology.

Whilst the managerial relevance of a tool based on facilities-specific questions is clear, any work on customer satisfaction must also be informed by the wider debate on the measurement of service quality. There is a growing body of literature on the search for a general scale and instrument for the measurement of service quality in all, or a number of distinct groups of, service contexts. Such a scale might identify a set of service quality dimensions that has some general relevance. The most widely used and debated tool is the SERVQUAL instrument developed by Parasuraman et al. (1988). However, recent work on SERVPERF (Cronin and Taylor, 1992) is also important. The most important contributions from the service quality literature are the concepts of perceived quality, satisfaction, and expectations and perceptions, which we acknowledge in the design of our tool.


The construct of quality as conceptualised in the services literature is based on perceived quality, which derives from the consumer's overall evaluation of a service experience. Quality can be distinguished from satisfaction: quality is a general attitude, whereas satisfaction is linked to specific transactions. The two concepts are probably related, and some believe that satisfaction with a series of transactions leads to perceptions of good quality.

Moving on to expectations and perceptions, there are two distinct models concerning the nature of service quality. Grönroos (1988) and the work on SERVQUAL (e.g. Parasuraman et al., 1988) support the notion that quality evaluations as perceived by customers stem from a comparison of what customers feel the organisation should offer (that is, their expectations) with their perceptions of the performance of the organisation providing the service. The proponents of the alternative model of service quality, used in the development of SERVPERF (e.g. Cronin and Taylor, 1992), believe that this difference formulation is fundamentally flawed, and that service quality should be defined simply in terms of perceptions. Applications of SERVQUAL in higher education have, to date, met with little success. Buttle (1996) argues that the criticisms of SERVQUAL can be categorised into those associated with theoretical issues and those associated with operational issues. A common theoretical complaint is that the service dimensions hypothesised do not regularly emerge from factor analysis, and that different researchers have generated different sets of dimensions. On the operational side, the need to ask the same question twice is a common cause of criticism. Taking these matters into account, the questionnaire used in the student satisfaction survey asked only for perceptions, and did not seek to collect any data with respect to expectations.
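
The contrast between the two formulations can be shown in a minimal sketch, using invented ratings for a single item; neither instrument is reproduced here, only the arithmetic that distinguishes a gap score from a performance-only score.

```python
# Contrast between the SERVQUAL gap formulation and the SERVPERF
# perception-only formulation, using invented ratings on a single item.

perception = 5    # how the respondent rates actual performance
expectation = 6   # what the respondent feels the organisation should offer

servqual_score = perception - expectation   # gap score: negative = shortfall
servperf_score = perception                 # performance-only score

print("SERVQUAL gap score:", servqual_score)   # -1
print("SERVPERF score:", servperf_score)       # 5
```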

A further criticism of current approaches to, and scales for, the measurement of service quality is that they are static: they capture a snapshot of satisfaction or perceptions of quality at one point in time. There is evidence to suggest that such snapshots do not tell the whole story. For example, Bolton and Drew (1991) note that average ratings of perceived service quality are very stable and change only slowly in response to changes (either negative or positive) in service delivery; the effects of such changes on perceptions of service quality are slow to be realised. Another study which suggests that we should be particularly concerned with the longitudinal experience of students was conducted by Hill (1995). Pilot results indicate that it is possible that students become more discriminating and critical of service delivery as their relationship with a higher education institution develops. The methodology proposed here will seek to collect data in a similar form for several years so that, in the longer term, it will be possible to conduct a longitudinal analysis.

Another key feature of this study is that the questionnaire was designed by taking the statements in the Student Charter as the basis for the issues on which responses were sought. Student Charters form the basis of a contract between the educational institution and its students. The Department for Education (1993) published a Charter for Higher Education, entitled Higher Quality and Choice, which explains the standards of service that students, employers and the general public can expect from a university. Many higher education institutions have since instituted a Student Charter (Aldridge and Rowley, 1998). A Charter is a standard against which performance can be measured, so it is appropriate to use a Student Charter as the basis for an assessment of students' perceptions of service quality.

Methodology
Introduction
The methodology embedded two key features:
(1) the questions were based on the Student Charter;
(2) the questionnaire was delivered to students via an on-screen form as far as possible.

Table II summarises the project timetable for the development and implementation of the survey. This timetable reveals two important aspects of the design process: the process was informed by early input from consultants, and the process of, and outcomes from, the survey were reported to the Senior Management Group.


Table II Student satisfaction survey timetable

Date                                     Action
September 1995                           Working group convened
October 1995                             Discussions with consultants
November 1995 to January/February 1996   Questionnaire design
February 1996                            Pilot survey
April 1996                               Full survey
September 1996                           Report and commentary prepared
October 1996                             Feedback to students through partnership group, student union representatives, etc.
October 1996                             Report to senior management group and recommendations for further development

Questionnaire design
The basis of the questionnaire was the set of statements in the Student Charter. These were re-written as a series of statements, with any apparent duplications eliminated and additional consistency and clarity in terminology instilled as appropriate. To the statements derived from the Student Charter were added questions in two sections: Personal and Course Details, and Evaluation. The first of these sections was intended to provide some of the variables that might act as a basis for cross-tabulation, and analysis of customer satisfaction for specific groups. The variables covered in this section were: gender, age, ethnic origin, disability, disability type, residence, mode of study, campus, level of course and length of student registration at Edge Hill. The final section on Evaluation included a few questions that sought to elicit perceptions of the questionnaire completion process. The remainder of the questions were divided into the following categories:
(1) Personal and course details.
(2) Teaching and learning.
(3) Teaching and learning support, including: general; computer services; library services; mediaTech services.
(4) Teaching and learning development.
(5) Services and facilities for students, including: on-campus accommodation; off-campus accommodation; careers service; catering services; child care; cleanliness of campus; counselling; health care; recreation and sport; students' union; welfare rights.
(6) Equal opportunities, disability and environment, including: equal opportunities; disability and special needs; environment.
(7) Communication, consultation, feedback and complaints, including: communication; consultation and feedback; complaints.
(8) Evaluation (of this approach to gathering feedback).

For ease of completion virtually all questions were framed as closed questions or statements with accompanying five-point Likert scales, with the following options: strongly agree; agree; neutral; disagree; strongly disagree; not applicable. At the end of the last main section of the questionnaire students were invited to make any further comments. The questionnaire was designed to be completed within about ten minutes. Consideration was given to the inclusion of questions relating to the relative importance of specific facilities, but it was decided not to include these, since this would have led to longer completion times.

The completed questionnaire was created as a series of World Wide Web pages, using HTML to create forms, which included active buttons. Such forms allow the entry of data, which are then saved in a text file. These text files were subsequently extracted and input into an Excel database for subsequent analysis.

Initial drafts of the questionnaire were piloted. Initially, the questionnaire was discussed with a student focus group. Next, live trials with a small group of students, who were also asked to comment on the questionnaire, were used to identify any remaining ambiguities and problems.
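
As a loose illustration of this kind of pipeline (not a description of the college's actual system), the sketch below assumes that each submitted form was appended to a text file as one line of semicolon-separated field=value pairs, and collates the submissions into a spreadsheet-friendly CSV file; the file name and field names are assumptions.

```python
# Hypothetical sketch: collate saved form submissions into tabular rows.
# Assumes each submission was appended to responses.txt as one line of
# semicolon-separated "field=value" pairs, e.g.
#   gender=F;mode=full-time;q3_1=agree;q3_2=neutral
# Neither the file name nor the field names are taken from the survey itself.

import csv

def parse_submission(line):
    """Turn one saved form submission into a dictionary of answers."""
    answers = {}
    for pair in line.strip().split(";"):
        if "=" in pair:
            field, value = pair.split("=", 1)
            answers[field.strip()] = value.strip()
    return answers

def export_to_csv(in_path="responses.txt", out_path="responses.csv"):
    with open(in_path) as f:
        rows = [parse_submission(line) for line in f if line.strip()]
    if not rows:
        return
    fieldnames = sorted({field for row in rows for field in row})
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    export_to_csv()
```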


Questionnaire delivery
The questionnaire was delivered through two separate channels: electronically at Ormskirk, and in paper form at Fazakerley. Times were chosen close to the end of the students' experience of the academic year, but whilst students were still available to complete questionnaires and, in the case of the Ormskirk site, still actively using the college network. A cash prize of £20 per student was awarded to ten students randomly selected from those who had completed the questionnaire.

Results
A total of 71 questionnaires were completed on the Ormskirk site. The response rate was disappointing. This may have been associated with the time of year at which the survey was conducted; it was agreed to review this timing for subsequent surveys. Alternatively, the mode of delivery may not have motivated students to complete evaluation questionnaires. In addition, it was not clear whether all students had the IT skills to be able to access the questionnaire on the World Wide Web. We will continue to investigate the relationship between mode of delivery and response rate. The response rate on the Fazakerley site was much higher, at 289 responses. At Fazakerley, paper questionnaires were delivered to students by tutors and then collected from them. This proactive tutor intervention is very successful in enhancing the response rate. One of the challenges associated with electronic delivery is the lack of student motivation. This issue has been revisited in subsequent years, but it has not yet been satisfactorily resolved.

Due to the relatively low response rate, results were regarded as indicative, and were used to suggest issues for further investigation. Rigorous statistical analysis was not seen to be appropriate. A series of tables was generated, one for each question; these showed the percentage of responses against each of strongly agree, agree, neutral, disagree, strongly disagree and not applicable. The weighted average percentage was calculated across all of these categories, in order to identify some key issues for further attention through various existing groups, agencies, individual managers and channels.
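
The precise weighting scheme is not reproduced here; the sketch below shows one plausible way of deriving such a summary figure, using illustrative weights for each response category and excluding not applicable responses.

```python
# Hypothetical sketch of a weighted summary score for one question.
# The weights and the example percentages are illustrative; the survey's
# own weighting scheme is not reproduced here.

WEIGHTS = {
    "strongly agree": 100,
    "agree": 75,
    "neutral": 50,
    "disagree": 25,
    "strongly disagree": 0,
}

def weighted_average(percentages):
    """Weighted average over response categories, ignoring 'not applicable'."""
    scored = {k: v for k, v in percentages.items() if k in WEIGHTS}
    total = sum(scored.values())
    if total == 0:
        return None
    return sum(WEIGHTS[k] * v for k, v in scored.items()) / total

example = {  # invented distribution of responses for one statement
    "strongly agree": 10, "agree": 35, "neutral": 20,
    "disagree": 20, "strongly disagree": 10, "not applicable": 5,
}
print(round(weighted_average(example), 1))  # 53.9
```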

The issues identified were:
(1) the college/personal tutor system;
(2) good food at reasonable prices;
(3) publicity of rights in relation to the students' union;
(4) training opportunities for students serving on committees;
(5) opportunities to feed back on course and service provision through questionnaires, etc.

These same issues were also identified at the Fazakerley campus, although they were significantly less strongly expressed. At the conclusion of the work on the 1995/96 survey, it was agreed that the model was sufficiently robust and useful to be used again in 1996/97, but that the timing of data collection would be reconsidered and that work would be undertaken, both through student focus groups and the students' union, with a view to enhancing response rates. Ultimately, after a few years, it is hoped that a longitudinal database will provide a basis for more informed interpretations of each individual year's results.

Embedding the satisfaction survey in the college's quality framework


A paper which summarised the analysis and identified the key issues as listed above was considered by the Senior Management Group. It was agreed that further action based on the results should be initiated through the Recruitment, Access and Student Support Committee. This committee formed a working group which established student focus groups to analyse the issues that had yielded a negative coefficient. A report on the outcomes from the student focus groups was sent to the relevant areas of the college for consideration, and to the Senior Management Group for their information.

The survey was repeated in much the same form in respect of the 1996/97 academic year, around Easter 1997. A similar format was retained with a view to the creation of a longitudinal database. However, similar problems recurred in relation to the completion of electronic questionnaires in particular: 101 electronic questionnaires were received from students based on the Ormskirk campus and 199 paper-based questionnaires from students based on the Fazakerley campus.

For data collection for the 1997/98 year, a more proactive approach was taken to the promotion of the survey.


The survey was made accessible alongside a customer satisfaction survey which focused specifically on the Learning Resource Centre. This was promoted through posters, contact with tutors and a prize draw, and was accessible through an item which appeared on the screen when students logged on.

The student satisfaction survey is one route to the identification of areas for concern within the college. The college also employs a range of other approaches. These include:
(1) The survey of student withdrawal. This was first conducted at the end of the 1996/97 academic year, and will be repeated in subsequent years. The survey seeks to contact, by telephone, all students who have withdrawn during the academic year. They are asked a number of questions which seek to identify the factors that led to withdrawal.
(2) Complaints received through the complaints procedure are analysed in order to ascertain whether there are any shared or persistent issues. Reports on both student withdrawal and complaints form a focus for discussion at the sub-group of the Recruitment, Access and Student Support Committee which deals with student support and guidance. They are also considered by the Senior Management Group. These groups take responsibility, respectively, for operational or strategic initiatives which seek to address any key issues.
(3) Module evaluations are performed at the end of each module, and issues from these are summarised in the annual reports produced by subject areas. These reports form part of the annual quality plan for the college. The annual quality plan reviews quality issues during the previous year, and identifies the agenda for development and improvement for the next five years. Necessary actions are identified and allocated to areas within the college, and responses are sought from these areas. The subsequent year's quality plan will assess progress on action points.

Proposing a theoretical framework


One of the challenges facing higher education institutions is how to integrate the data from this variety of different sources. Managers need to make judgements about the integration and significance of messages coming from different sources. Dawes and Rowley (1998) have proposed a negative quality model which might help in contextualising these different data streams. Table III shows how the data collection activities discussed above might be mapped onto the negative quality model.

Table III A negative quality model

Outcome            Action                           Attitude change
One incident       Dissonance                       Dissatisfaction
                   (complaints procedure)           (student representation, suggestions)
General measure    Disaffection                     Disconfirmation
                   (wastage, including failure      (course and module evaluation
                   and withdrawal)                  questionnaires; student satisfaction survey)

This model emphasises that organisations should seek to respond to incidents that lead to dissatisfaction as they arise. On occasion, individual incidents may lead to dissonance, and the formulation of a complaint. Continued perceived poor quality will, on the other hand, lead to disconfirmation, which may be expressed through course and module questionnaires and other formal summative evaluations by students. Disaffection occurs when the student ceases to be an effective member of the educational community. This withdrawal may be exhibited through formal withdrawal, or through failure. On occasion, disaffected students will remain in the institution, and continue to perform poorly; although disaffected, they may feel that they have no option but to continue with their studies. These students are likely to be vulnerable to dissatisfaction, disconfirmation and dissonance.
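
Purely as a compact restatement of Table III, the mapping can be held as a simple lookup structure; the terms follow the table, while the grouping into a data structure is ours.

```python
# Sketch of the negative quality model as a lookup table, mirroring Table III.
# Keys are (scope, manifestation); values name the construct and the channel
# through which it is typically detected.

NEGATIVE_QUALITY_MODEL = {
    ("one incident", "action"): ("dissonance", "complaints procedure"),
    ("one incident", "attitude change"): ("dissatisfaction", "student representation, suggestions"),
    ("general measure", "action"): ("disaffection", "wastage, including failure and withdrawal"),
    ("general measure", "attitude change"): ("disconfirmation", "course/module evaluation and satisfaction surveys"),
}

construct, channel = NEGATIVE_QUALITY_MODEL[("general measure", "attitude change")]
print(construct + ":", channel)  # disconfirmation: course/module evaluation and satisfaction surveys
```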

References
Aldridge, S. and Rowley, J. (1998), "Students' charters: an evaluation and reflection", Quality in Higher Education, Vol. 4 No. 1, pp. 27-36.
Bell, A. (1995), "User satisfaction surveys: experience at Leicester", The New Review of Academic Librarianship, Vol. 1, pp. 175-8.
Bolton, R. and Drew, J. (1991), "A multi-stage model of customers' assessments of service quality and value", Journal of Consumer Research, Vol. 17, March, pp. 375-84.
Buttle, F. (1996), "SERVQUAL: review, critique, research agenda", European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Cronin, J.J. and Taylor, S.A. (1992), "Measuring service quality: a re-examination and extension", Journal of Marketing, Vol. 56, July, pp. 55-68.
Danaher, P.J. and Mattsson, J. (1994a), "Cumulative encounter satisfaction in the hotel conference process", International Journal of Service Industry Management, Vol. 5 No. 4, pp. 69-80.
Dawes, J. and Rowley, J. (1998), "Negative evaluations of service quality: a framework for identification and response", in press.
Department for Education (1993), Higher Quality and Choice: the Charter for Higher Education, Department for Education, London.
Grönroos, C. (1988), "Service quality: the six criteria of good perceived quality service", Review of Business, Vol. 9 No. 3, Winter, pp. 10-13.
Harvey, L. (1995), "Student satisfaction", The New Review of Academic Librarianship, Vol. 1, pp. 161-73.
Hill, F.M. (1995), "Managing service quality in higher education", paper presented at the Quality Assurance in Education Conference, Manchester.
Parasuraman, A., Zeithaml, V.A. and Berry, L. (1988), "SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality", Journal of Retailing, Vol. 64, Spring, pp. 12-40.
Ramsden, P.A. (1991), "A performance indicator of teaching quality in higher education: the course experience questionnaire", Studies in Higher Education, Vol. 16 No. 2, pp. 129-50.
Rust, R.T., Zahorik, A.J. and Keiningham, T.L. (1995), "Return on quality (ROQ): making service quality financially accountable", Journal of Marketing, Vol. 59, April, pp. 58-70.
