
Quality Assurance in Education

Academics’ feedback on the quality of appraisal evidence


Chenicheri Sid Nair, Jinrui Li and Li Kun Cai
Article information:
Downloaded by BAHRIA INSTITUTE OF MANAGEMENT & COMPUTER SCIENCE At 01:00 30 September 2015 (PT)

To cite this document:


Chenicheri Sid Nair, Jinrui Li and Li Kun Cai (2015), "Academics’ feedback on the quality of appraisal evidence", Quality Assurance in Education, Vol. 23 Iss 3, pp. 279-294
Permanent link to this document:
http://dx.doi.org/10.1108/QAE-05-2014-0023
Downloaded on: 30 September 2015, At: 01:00 (PT)
References: this document contains references to 64 other documents.



Academics’ feedback on the quality of appraisal evidence

Chenicheri Sid Nair
Centre for Advancement of Teaching and Learning, The University of Western Australia, Perth, Australia

Jinrui Li
Applied Linguistics, University of Waikato, Hamilton, New Zealand, and

Li Kun Cai
Foreign Language Department, North China University of Science and Technology, Tangshan, China

Received 27 May 2014
Revised 11 September 2014
Accepted 16 October 2014

Abstract
Purpose – This paper aims to explore academics’ perspectives on the quality of appraisal evidence at
a Chinese university.
Design/methodology/approach – An online survey with both closed items and open-ended questions
was distributed among all academics at the university (n = 1,538). A total of 512 responded to the
questionnaire. The closed items were initially analysed using EXCEL and SPSS; the open-ended questions
were thematically analysed.
Findings – The academics believed that the quality of student surveys and peer observation of teaching was affected by subjectivity and a lack of understanding of appraisal. Academics also
suggested that appraisals should be contextualised and the approach standardised. The study suggests
the need for training that informs and engages relevant stakeholders to ensure the rigour of appraisal.
Originality/value – The study raises the issue of quality assurance regarding appraisal data from the
perspective of academics. It is based on the collaborative effort of academics in Australia, China and
New Zealand, with the support of the management staff at the case study university. The study informs
both appraisers and academics of quality assurance issues in appraisal. It also contributes to the
literature, in that it initiates dialogues between communities of practice through collective questioning
on the quality and mechanisms of appraisal in tertiary education.
Keywords Training, Performance appraisal, Quality assessment, Student survey, Education,
Questionnaire, Quality of evidence, Peer observation of teaching, Teacher appraisal
Paper type Research paper

Introduction
Staff appraisals are not only used as indicators for financial accountability, university
ranking and quality assurance (Shin, 2011), but are also related to personnel decisions,
promotion and professional development (Alderman et al., 2012; Marsh, 2007; Minelli
et al., 2007). This nexus of demands and purposes in tertiary education (Pinheiro, 2013)
results in the challenge of definition, collection and interpretation of performance
evidence for appraisal.
Trigwell (2011) argues that it is a challenge to select reliable indicators for appraisals. Egginton (2010) goes further to argue that it is not easy to make a fair judgement on good performance in appraisals, as it often involves conflicting goals and values. For example, although good teaching is often related to effective and meaningful teaching that results in learning (Casey et al., 1997), its interpretations vary within and across disciplines (Kreber, 2002) and in different contexts. Minelli et al. (2007) further define four elements that bring about the organisational impact of appraisals: the idea of the assessment, the method of collection and analysis, the bodies that look after the evaluation process and the way such data are used in the institution.
Generally speaking, research publications and student appraisal of teaching are key
evidential instruments used in university staff appraisal. However, compared with
teaching, research achievement seems more important to academics’ career
development (Taylor, 2007). In fact, academics often suffer from the tension between
teaching and research, in that teaching speciality is often under-valued (Bexley et al.,
2013). The same tension is felt among academics in universities in China (Du et al., 2010).
There is also a trend towards collecting evidence by means of qualitative approaches,
such as student interviews and peer observation of teaching, to reflect staff achievement holistically (Bennett and Nair, 2010). However, little is known about whether, and how,
the quality of such evidence is ensured. A number of studies on course appraisal
via student survey and/or peer review, and academics’ feedback on the quality of
appraisal evidence collected by student survey and class observation by colleagues and
experts, have been carried out in many Western countries (Stein et al., 2013). Hence, the
purpose of this study is to explore effects and issues of appraisal in a top provincial-level
university in north China, from the perspectives of Chinese academics. The particular
focus of this article is academics’ comments on the quality of the three common indicators
of performance: student survey, peer observation of teaching and research.
This study is a collaborative research project carried out by a community of
teacher-researchers and teacher development experts from Australia, China and New
Zealand, with support from the management staff of the case study university. It
contributes to the existing literature, in that it adds to the dialogue between communities
of practice through collective questioning of the existing mechanisms of appraisal in
tertiary education.

Student appraisal of teaching


One controversial form of evidence used in teaching appraisal is the end-of-course
student survey and/or interview. Regarding the validity and reliability of the student
survey, Chen and Watkins (2010), through analysis of the scores on 435 teachers in two
semesters and survey data among 388 of the teachers in a university in China, found that
the students’ appraisals were consistent and valid. This research supports previous
research supporting the reliability and validity of student appraisals (Benton and
Cashin, 2012; Marsh, 2007; Stein et al., 2013; Stowell et al., 2012). However, studies by
Crumbley et al. (2001), Pounder (2007), Buchanan (2011) and Darwin (2012) challenge
assumptions underpinning appraisals in higher education and question not only the
ethics but the validity of higher education institutions’ reliance on student surveys as
measures of effective pedagogic practice.
Some of the issues surrounding the validity and reliability of students’ appraisal
reported include teachers’ workload, their rapport with students, grading of
students’ assignments (Boysne, 2008; Amin, 2002), teachers’ work priorities, the
nature of courses and disciplines and students’ interest in the course (Rantanen,
2013). Comparatively, grading seems to be a particularly important factor affecting student appraisal (Balam and Shannon, 2010). Conflicting findings are also reported regarding the utility of student appraisal. For example, Bennett and Nair (2010) found that teachers believed that student surveys could help them improve teaching. However, according to Palermo (2013), no evidence was found of a causal effect between student survey feedback and overall improvement of teaching. Harvey (2011), on the other hand, endorsed student feedback as one of the most powerful tools in the improvement cycle within a higher education setting.
Adding to this discourse of student appraisal is the belief that student surveys were
not based on any articulated philosophy of quality teaching that underpins appraisal
and that student surveys cannot accommodate programme differences and
multi-dimensions of teaching (Lemos et al., 2011). In other words, student appraisal of
teaching was insufficient to inform pedagogy (Darwin, 2012). Moreover, it is argued that
students may not be able to make objective judgements about the teaching that takes
place in a classroom. For example, Winchester and Winchester (2012) explored the
effectiveness of student qualitative appraisal in a UK university by interviewing 7 out of
192 students (5 from UK, 2 from China). Students were asked to provide formative
feedback on teaching via the Moodle website on a weekly basis. The study found that students’ motivation to provide feedback gradually decreased; their feedback became routinized and tended to focus on negative aspects of teaching without critically analysing the positive aspects. However, a number of researchers argue that students are reluctant to provide
feedback continuously if there is no response to their feedback (Nair, 2011; Powney and
Hall, 1998; Leckey and Neill, 2001; Harvey, 2003). Furthermore, Harvey (2003) points out
the importance of both the evidence of action-taken in response to students’ feedback
and the evidence of teaching improvement. To sum up, there are issues at multiple levels that discourage students from providing constructive feedback.
To inform teaching with students’ feedback, Winchester and Winchester (2013) suggested that students’ evaluation of teaching should be carried out regularly as ongoing feedback on teachers’ practices, with the caveat, emphasised by Harvey (2011), that this should not be the only source of evidence relied on. Further, Elassy (2013)
suggested that students also had a role in the process and should be trained if they are
to be involved in the appraisal.

Peer observation of teaching


Class observation of teaching by colleagues and experts is another form of evidence used in
appraisals. The quality of peer observation is influenced by the relationship between an
observer and an observee (Bell, 2001). It is also influenced by the observers’ beliefs and
intentions concerning teaching, and the awareness of the diversity of teaching in
relation to themselves (Courneya et al., 2008). Moreover, the approach to collecting peer
observation evidence is important as a quality enhancement tool (Lomas and Nicholls,
2005). Kohut et al. (2007) studied the usefulness of peer observation by means of a survey
to which both tenured (143 respondents) and untenured (80) staff in an American
university responded. The peer review process in the university included:
• pre-observation interviews on the context of teaching;
• peer observation;
• using checklists and narrative statements;
• video-tape of the class;
• self-analysis by the observees; and
• post-observation discussion between the observers and the observees to exchange opinions and negotiate the meaning of the outcome.

The effectiveness of peer observations was attributed to the shared expectation, participation and conversation between the observers and observees.
The quality of peer observation is also affected by the purpose of appraisal, that is,
whether the appraisal is oriented by personnel management and/or linked to professional
development (Bartlett, 2000). One example is Byrne et al.’s (2010) study on perspectives,
engagement, benefits and issues of peer observation. Data were collected by questionnaire
and interviews with staff in a department of a UK university. A comparison was made
between the appraisal-based and development-based peer observation approaches. The
staff (n ⫽ 36) believed that the traditional box-ticking observation is just a routine operation
for management purposes. In contrast, those who conducted peer development-oriented
observation (n ⫽ 26) reported positive experiences of improvement to teaching and research.
Chamberlain et al. (2011) explored the reasons for staff engagement with peer review that
was for developmental rather than evaluative purposes. In this study, data were collected
using a questionnaire survey of 84 staff, along with three focus groups (n ⫽ 16) across
departments of a UK university. The study found that there was ambiguity and a lack of
discussion on the purpose, role of stakeholders and utility of peer observation outcomes.
There was a need for a connection between the peer observation outcome and follow-up
support for professional development. Bell and Cooper (2013) reported a partnership approach to a staged peer observation of teaching programme, in which 12 of the 20 staff in an engineering school of an Australian university participated. The programme had the following features: voluntary participation in different stages of the programme; discussion of and
training in peer observation; participation of the head of school, not only as a leader, but also
as a learning partner; facilitation and support provided by an external-to-faculty
coordinator; and a partnership between junior and senior staff. The study reported that the
peer observation approach could enhance teaching skills and knowledge, if it were carefully
designed and supported to address the complexity of the process.

Research and teaching


Research and teaching are two crucial indicators of academics’ performance in tertiary
education. The discussion of the teaching-research nexus has been ongoing for
decades (Douglas, 2013; Taylor, 2008); yet, there is no clear definition of the quantity and
quality of academic work with regard to both teaching and research (Soliman and Soliman,
1997). In addition, staff perspectives on the benefit of research on their teaching vary,
depending on the level of students and subjects they are teaching or the weighting of
research in appraisal (Taylor, 2007). A common perspective among academics is that time
spent on research tends to result in more positive outcomes in promotion and payment than
that spent on teaching (Douglas, 2013; Murphy and MacLaren, 2009). Teaching has been
degraded to a secondary level due to the priority research has in attracting government
funding (Lucas, 2006; Mayson and Schapper, 2012). According to Brew (2010), the
integration of research in teaching is affected by both external factors such as government
funding on research and internal factors including staff and students’ perspectives and the
nature of the courses. This tension between research and teaching has gained some
recognition, as universities place greater emphasis on the research domain for rankings, public funding and institutional prestige (The Guardian, 2012).
Quality of appraisal evidence
Guba and Lincoln (1989) suggested that appraisal should do three things: take qualitative approaches, engage participants and negotiate meaning among all stakeholders. Concurring with this suggestion, Smith (2008) proposed a five-phase
model of appraisal. This model suggests collecting evidence from a multitude of sources
(including self-reflection, peer appraisal, students’ experience and outcome of learning),
providing opportunities and guidance to interpret the appraisal outcome and enhancing
the engagement with and application of the appraisal data for further development.
Although the multi-sources approach to appraisal has been gaining increasing attention,
little is known on how the quality of these sources is controlled and assessed. In studies on
staff appraisal in China, quality issues have been reported especially with regard to the
ineffectiveness of collecting and interpreting data (Chen and Yeager, 2011). In addition, Zou
et al. (2012) found, by document analysis of self-appraisal reports from 53 universities, that
the quality of tertiary education is interpreted institutionally as organisational, rather than
educational, quality. Moreover, although the outcomes of appraisals should be used for improvement, Nair and Bennett (2011) report that evaluation and improvement remain intertwined and that informing all concerned of the feedback outcomes is not systematically implemented in many appraisal systems. To ensure the quality of appraisal,
Freeman and Dobbins (2013) suggest that there is a need to engage all stakeholders such as
teachers and students in an ongoing dialogue of appraisal.
In summary, the existing studies have found that while both student survey and peer
observation of class teaching are useful evidence, the quality of these forms of evidence
may be affected by issues such as student and peer bias. Therefore, there is a need to
investigate the quality of evidence collection, from the academics’ perspective.

This study
This study is a case study on staff appraisal at a university in China. The overarching
research purpose of this study is to identify issues in the current appraisal systems in
China, and promote dialogue regarding these issues in general within the higher
education community. Specifically, this paper aims to explore academics’ opinions on
the quality of appraisal evidence collected.
The questionnaire was designed with the following considerations: first, the items were developed taking into account the relevant research literature on staff appraisal. Second, the authors of this paper have significant experience in researching elements of staff appraisal and in questionnaire design (Nair and Bennett, 2011). Once the items in the relevant domains of the questionnaire were
developed, it was then translated into Chinese by a National Accreditation Authority for
Translators and Interpreters Ltd Australia-certified translator. Both the English and the
Chinese versions of the questionnaire were then sent to the third author, who was a
lecturer at the participant university where the data were collected. The third author
provided contextual information and suggested improvement of the questionnaire
regarding the approaches of staff appraisal in the Chinese setting. The questionnaire
was revised and piloted among former colleagues in China to gauge the face validity of
the items in the questionnaire. Further improvements were made based on the piloting
results. To ensure that the translation captured the essence of the English version of the questionnaire, a back translation was also carried out.
The university in this study is a top university located in a province in northern China. It
has about 1,500 teaching staff and 50,000 students in various disciplines including
engineering, medicine and the humanities. Data in this study were collected through the
survey questionnaire, the Perceptions of Teaching Appraisal Questionnaire (PTAQ). The questionnaire was made available online for all staff in the case study university.
A total of 512 academics responded to the survey, representing a response rate of 30 per
cent. The raw data were analysed using EXCEL, with reliabilities and validity of the
questionnaire analysed with SPSS. Open-ended items were thematically analysed following
a grounded theory approach (Braun and Clarke, 2006). For the purpose of this article, both
descriptive statistics and the open-ended items were utilised to present the overall findings.
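The per cent agreement figures used to summarise the closed items (as in Tables III and IV) can be sketched in Python; the response data below are hypothetical illustrations, not the actual PTAQ responses:

```python
from collections import Counter

# Hypothetical responses to one closed item (not drawn from the PTAQ data)
responses = ["strongly agree", "agree", "neutral", "agree",
             "disagree", "strongly disagree", "agree", "strongly agree"]

counts = Counter(responses)
# Per cent agreement = share of "strongly agree" + "agree" responses
agreement = 100 * (counts["strongly agree"] + counts["agree"]) / len(responses)
print(f"{agreement:.1f}% agreement")
```

The same tally, applied per item across all 512 respondents, yields the agreement columns reported in the findings.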

The questionnaire
The PTAQ consists of 49 closed items in two parts, plus a third part containing four open-ended questions. Part one of the questionnaire collected bio-demographic information essential for this study. Part two of the questionnaire measured the various
aspects of the appraisal system. Table I outlines the domains and sub-domains that the
questionnaire measured.
The Cronbach alpha reliability of the items in the PTAQ, using the individual
participant as the unit of analysis, ranged from 0.87 to 0.92. Using a maximum likelihood
factor analysis, six sub-domains and two distinctive domains were identified in the
questionnaire.
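For readers wishing to reproduce this kind of reliability check on their own survey data, Cronbach’s alpha can be computed directly from a respondents-by-items score matrix. This is a minimal sketch using NumPy and invented Likert scores, not the PTAQ responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the (sub)domain
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented data: 6 respondents answering a 4-item sub-domain on a 1-5 scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.8, as reported for the PTAQ, are conventionally read as good internal consistency for a scale.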

The appraisal evidence


As shown in Table II, the majority of the respondents were junior academics (lecturers/assistant professors, 57.5 per cent). Senior academics (associate professors) made up the second largest group of respondents, constituting 30 per cent.
In terms of frequency of appraisals, there was some variation across the university.
Just over half of the respondents had an annual appraisal, while the remainder (about 48
per cent) reported that they had been appraised once every semester. The data showed
that the appraisal system comprised mainly student surveys and peer observation of
teaching. Other components of appraisal included research publications, self-appraisal
and mentors’ reports on new teachers’ performance. The appraisers were internal
subject experts, peers/fellow teachers, management staff and deans. Sometimes,
external experts were also included as appraisers.
Student surveys at this university took three forms: online, paper-based or in the
form of student interviews (Figure 1). The primary form of feedback was the online
mode. Student surveys are a compulsory component of the appraisal system. The
teaching appraisal system in the University was established in 1993. It was re-examined
and strengthened in 2004 according to the National Standard of Undergraduate
Education Assessment.
Though the methodology used varied with respect to the administration of the student
surveys or collection of feedback, the majority of staff (77 per cent) agreed that student
feedback on teaching was worthwhile. A small percentage (9 per cent) reflected that they
avoided giving a fail grade to students because it might influence students’ ratings of their teaching. This goes against the empirical evidence in the early works of Marsh and
Roche (1997) in a Western setting, which shows that student grades have an insignificant effect on students’ ratings of their teachers.

Table I. Structure of the PTAQ

Domain: Appraisal systems (3 items)
Description: Measures the approaches to appraisal at the institution.
Example item: The components of teaching appraisal included: classroom observation; students’ evaluation of my teaching by questionnaire; my self-evaluation; peer and colleagues’

Domain: Student surveys
Sub-domains: Review of feedback (2 items); Engagement with data (9); Detailing actions (6); Use of surveys (3)
Description: Measures the student survey mechanism and processes.
Example item: After the evaluation, I will: provide students with a summary of their feedback; provide students with the actions I am proposing to take or have taken. (Response scale: Always 1 2 3 4 5 Never)

Domain: Process of teacher appraisal (8 items)
Description: Measures the structure and process of the teacher appraisal used in the institution.
Example item: I received oral feedback from the appraisers. (Response scale: strongly agree, agree, neutral, disagree, strongly disagree)

Domain: Impact of teacher appraisal
Sub-domains: Purpose (9 items); Effects (3)
Description: Measures the impact of the appraisal on the teacher.
Example item: The purpose of teaching appraisal was: finding out whether I could meet the standard of teaching. (Response scale: strongly agree, agree, neutral, disagree, strongly disagree)
Over half (56 per cent) of the staff in this Chinese university clearly expressed a belief
that the student survey on teaching was a tool for improvement. This finding is in line
with research by Nair and Bennett (2011). In addition, over half (55 per cent) believed that it was used by management to monitor the quality of teaching, while less than half (47 per cent) believed that it was also used to learn about students’ learning experience.
Academics in general reported that they were well-informed about the appraisal
system at their university, as well as being given the necessary help in formulating plans for areas that needed improvement. However, only around 57 per
cent believed that the approaches used in the appraisal system could objectively reflect
their teaching performance. Although this sentiment was expressed, academics (62 per
cent) generally thought the appraisers who conducted their appraisal reviews were
trustworthy; however, about half of the staff reported that they received oral and/or
written feedback from the appraisers. These findings of staff perceptions on the
appraisal system are summarised in Table III.
Table IV outlines the staff perception of the impact of such teacher appraisals. The
result suggests that the system as such is useful and helps in improving their teaching.
In addition, there was a clear understanding that the data from the appraisals are
utilised as a management tool. An interesting outcome from the survey was that staff
perceived that the data from such appraisal had little bearing on the promotion exercise.
The data in Table IV also showed that comments from the student surveys were
perceived by staff as a positive influence in making them more effective teachers. Less than 20 per cent perceived that negative comments had a detrimental effect on their teaching, and the majority of teachers indicated that such comments did not influence their grading practice.

Table II. Respondent makeup

Appointment (%)
Professors: 12.7
Associate professors: 29.8
Lecturers: 44.6
Assistant professors/teachers: 12.9

Figure 1. Survey or feedback mode (per cent of respondents): Online or Paper 37.4; Online Only 23.8; Paper Only 21.7; Student Interviews 17.1
Subjectivity affected the quality of student survey and class visitation
Of the 361 academics who responded to the question on whether there were any factors that affected appraisal, 46 per cent listed various factors. The main factor they outlined related to the subjectivity of appraisers, especially student appraisers, whose judgement was influenced by their attitudes and understanding of appraisal, their achievement in the class and the teacher–student relationship. For example, one academic listed the following factors:
Student appraisal of teaching is not fair enough. Some students, especially those who got a fail grade, take revenge of the teachers [using students’ appraisal of teaching]. Some teachers get a high appraisal score by unethical approaches. Students have higher expectations towards teachers who teach specialised courses than those who teach commonly required courses. Colleagues who had class visitation did not provide real opinions due to a kind of adherence to formality (Respondent No. 4).
Teachers further elaborated on subjectivity or appraisal biases such as appraisers and
appraisees coming from different subject areas, appraisers’ attitudes, emotions, their
preferred teaching style, relationship between appraisers and appraisees and a lack of
knowledge of appraisal. For example:
Sometimes the result of appraisal was not transparent. Emphasis was given on appraisal itself
rather than providing feedback and solution[s] (Respondent No. 22).

Table III. Perceptions of appraisal system

Measurement (% agreement: strongly agree + agree)
Clear about criteria used in the university: 70.7
Informed of the appraisal process for my teaching: 68.9
Approaches used objectively reflect my teaching performance: 57.2
The appraisers are trustworthy: 61.7
Received oral feedback from appraisers: 60.4
Received written feedback from appraisers: 57.4
Given help in formulating improvement plans: 76.8
Appraisers were trustworthy: 62.0

Table IV. Impact of appraisal on the teacher

Measurement (% agreement: strongly agree + agree)
Met the standard of teaching: 80.9
Informed teacher of strengths and weaknesses: 80.1
Decides on teachers’ promotion: 31.8
For improving teaching: 80.5
Part of professional development: 66.9
Used as management tool: 74.4
Used for reporting to internal and external bodies: 44.1
Student positive comments made me a more confident teacher: 87.9
Student negative comments helped me improve: 75.9
Student negative comments discouraged me from teaching: 18.8
Fear of failing students affects appraisal: 9.2
In addition, the teachers pointed out that the general appraisal standard did not address disciplinary differences and placed too much emphasis on research. The appraisal was only a routine process. These factors are further elaborated in answers to the following questions.

Research was over-weighted in appraisal


Three hundred and seventy-nine participants responded to the question about the importance of research in the institution’s appraisal system. The majority of participants (260) highlighted the importance of research over teaching, describing
research as the key reference for employment, promotion and rewards. For example, one
academic expressed this idea as follows:
[The research is] so important that all efforts of teaching are neglected. It makes one feel that
research projects, research achievements, and research publications can almost take over
everything. Other work, no matter how much or how well one has done, is regarded as useless
if without research (Respondent No. 462).
Some of the respondents also provided what they thought should be the weighting for
research in the appraisal system. This weighting ranged from 40 per cent to 100 per cent.
It seemed that the academics were in a dilemma about meeting the research
requirement due to a heavy teaching workload. Some reported that younger academics
were promoted faster than the senior ones because of research achievements:
Research is very important. It is the decisive factor in promotion. Some senior academics, who
are very good at teaching and have heavy workloads, cannot be promoted to a higher post
because they do not have enough time to do research. In contrast, some young academics, who
cannot teach well and have less of a workload, have no problem to be promoted because they
have enough time to do research. This has led to a phenomenon of respecting research and
despising teaching (Respondent No. 387).
In contrast, some young academics expressed difficulty in doing research and being an
effective teacher:
Research plays a very important role in appraisal. It is difficult to get a professional title
without much research. However, the general research level, for a university like ours, is
relatively low. Generally speaking, there are not many research projects, especially for young
academics. Some young academics have to take on more classes in order to earn more money,
which often influences their teaching effects (Respondent No. 134).

Contextualised but standardised approach to appraisal


Academics (271) provided feedback on what and how teacher appraisals could be improved at the university. Almost 36 per cent (n = 97) emphasised that teacher appraisal should focus on teaching rather than research. They argued that the large ratio of research to teaching in appraisal distracted their attention from teaching towards research, which endangered the quality of teaching. In addition, a heavy teaching workload left very little time for research. The academics went on to articulate that there was a need to adjust or reduce the ratio of research in appraisal according to the nature of the work or position. This
strong emphasis was clearly enunciated as follows:
For those who are working at teaching-focused institutes, the appraisal should refer more to
the teaching ability and effects rather than the research achievements. Otherwise it would
make young staff put too much effort into research; consequently, the overall teaching effects
would drop down (Respondent No. 371).
Academics also expressed concern over other factors that affected the quality of appraisal, including the expertise of the appraisers and the process of data collection:

Clarify the standards of appraisal; appraisers should know the specialised area of teaching; provide detailed feedback; focus on comprehensive aspects; manage student survey system carefully; do not make appraisal a formality; research evidence should be used to improve teaching rather than for the purpose of research itself (Respondent No. 94).
It seemed that the academics had a strong intention to improve their work. They clearly
suggested a number of approaches that could serve this purpose: prompt feedback,
training, exchange of opinions between appraisers and appraisees, modelling excellent
teaching and guidance provided by mentors.

Discussion
The study found that the appraisal currently employed in the Chinese institution used a multi-faceted approach based on student surveys, classroom visits by administrators, experts and peers, and research output. This approach is in line with the work
of Smith (2008), who advocates a multi-phase model for appraisals. Further, academics
generally agreed on the value of students’ comments and believed that the main purpose
of appraisal was to improve teaching. However, relatively fewer academics believed in
the trustworthiness of the appraisers whose judgement may be affected by subjectivity
and lack of expertise. The academics expressed the need for the appraisal system to be
intertwined with opportunities for professional development.
The subjectivity of students in the appraisal process was a major concern of the
teachers. Some teachers pointed out that students might provide biased responses to
appraisal questions due to grades they received from the teachers, their understanding
of teaching and appraisal and their rapport with the teachers. These concerns concur
with Amin’s (2002) and Boysne’s (2008) studies on factors that affected the reliability of
students’ appraisal of teaching. However, a majority of the teachers believed that
student feedback surveys were valuable in regard to teaching improvement, indicating
that teachers might expect formative feedback instead of assessment from students.
Teachers also challenged the objectivity of other appraisers such as experts, peers and administrators. Factors that affected objectivity included attitudes towards appraisal, background knowledge of teaching and of the speciality, and personal relationships. These factors
were in alignment with findings by Bell (2001) and Courneya et al. (2008). The teachers believed that the issues outlined above prevented the appraisal from objectively reflecting their actual performance. It seemed that neither students nor peers/colleagues were
engaged in a continuous dialogue on appraisal, as is suggested in the literature (Freeman and
Dobbins, 2013). This observation perhaps suggests there is a need for an in-depth discussion
on appraisal among all stakeholders to clarify the purposes, approaches, process and the
utility of appraisal and assessment in tertiary education.
Another major issue was the ratio of research to teaching in teacher appraisal. This
issue has been relatively little explored in other studies on teacher appraisal (Murphy and MacLaren, 2009), although anecdotal evidence from Australian higher education suggests that such concern is neither isolated nor confined to Chinese universities. The research literature clearly documents this tension, where teaching is
considered the second cousin to research (Lucas, 2006; Taylor, 2008; Mayson and
Schapper, 2012; Murphy and MacLaren, 2009). According to the Chinese academics, this
issue is related to the purpose of appraisal, which in many cases is not well-defined.
Chinese academics generally believed that such appraisals should be fundamentally about teaching, as teaching is the primary purpose of higher education institutions in China. Generally, staff were broadly supportive of the idea that teaching and research are part-and-parcel of academic life, though it was apparent to Chinese academics that the institution was weak in its management of this relationship.
Moreover, it seemed that more flexibility was expected to address contextual issues. For example, some academics pointed out that the general appraisal criteria were not able to address disciplinary differences. In addition, academics argued that the appraisal approach did not address the differences between new and experienced staff.
A factor that was strongly echoed by a number of teachers was that the appraisal was carried out as a routine process and failed to address practical issues such as staff's needs for professional development and differing teaching workloads. This finding supports
the suggestions made by Nygaard and Belluigi (2011) that a contextualised approach should
be developed to address the complexity of appraisal. This finding of tying in appraisals with
professional development concurs with those of Chen and Yeager (2011) and Nie and Xu
(2006). As Chen and Yeager (2011) reported, academics wished to have prompt dialogical
feedback that could help with further improvement, the ability to exchange opinions with
experts and the provision of training opportunities.

Conclusion
This case study reveals three major issues that affect the quality of appraisal: appraisers’
expertise, the weight given to research and the pedagogical implications of the appraisal
outcome. What the data suggest is that there is an urgent need for an institution-wide
discussion on basic concepts of appraisal, and how to engage different stakeholders and
connect appraisal with learning and professional development. Interestingly, the results of
this first study of the appraisal system in a Chinese university reveal that training is needed
not only for academics but also for students to increase their understanding and competence
in providing appraisal evidence. Studies of students' engagement with feedback surveys suggest that students often do not understand the importance of the feedback they are giving, or some of the terminology used in feedback questionnaires (Bennett and Nair, 2011; Weaver, 2006). In addition, the research also highlights the importance of ensuring that systematic training is provided for those who collect appraisal data.
By exploring academics’ perspectives on the quality of appraisal evidence, this study not
only contributes to the understanding of appraisal in tertiary education in mainland China,
but also identifies common issues surrounding appraisals, specifically the quality of
evidence collection. One intended outcome of this study is to make the process and results of appraisal more useful. Future studies are needed to explore how a robust system of quality assurance can
contribute to more effective appraisal evidence collection and interpretation.
Moreover, this study suggests the need for training appraisers and reinforces the
research of Elassy (2013) and Freeman and Dobbins (2013). However, there has been
little discussion in the literature regarding how to systematically train staff as well as
students in this process. The authors of this study argue that an important role of
tertiary education is to help future graduates develop knowledge, scholarship and new
skills necessary for the workforce. One such new skill is giving appropriate feedback to
help organisations in the workplace improve their service. Therefore, further research could explore how to integrate appraisal training into the curriculum.
To conclude, this study indicates not only the need to engage in a wider discourse on appraisal training in the higher education sector, but also the need to recognise the important role teaching plays and the appropriate weighting it should be given in the appraisal process.

References
Alderman, L., Towers, S. and Bannah, S. (2012), "Student feedback systems in higher education: a focused literature review and environmental scan", Quality in Higher Education, Vol. 18 No. 3, pp. 261-280.
Amin, M.E. (2002), “Six factors of course and teaching evaluation in a bilingual university in
Central Africa”, Assessment & Evaluation in Higher Education, Vol. 27 No. 3, pp. 281-291.
Balam, E.M. and Shannon, D.M. (2010), “Student ratings of college teaching: a comparison of
faculty and their students”, Assessment & Evaluation in Higher Education, Vol. 35 No. 2,
pp. 209-221.
Bartlett, S. (2000), “The development of teacher appraisal: a recent history”, British Journal of
Educational Studies, Vol. 48, pp. 24-33.
Bell, M. (2001), “Supported reflective practice: a programme of peer observation and feedback for
academic teaching development”, International Journal for Academic Development, Vol. 61,
pp. 29-39.
Bell, M. and Cooper, P. (2013), “Peer observation of teaching in university departments: a
framework for implementation”, International Journal for Academic Development, Vol. 18
No. 1, pp. 60-73.
Bennett, L. and Nair, C.S. (2010), “A recipe for effective participation rates for web based surveys”,
Assessment and Evaluation Journal, Vol. 35 No. 4, pp. 357-366.
Bennett, L. and Nair, C.S. (2011), “Demonstrating quality - feedback on feedback”, Proceedings of the
Australian Universities Quality Forum, Demonstrating Quality, Australian Universities Quality
Agency, Melbourne, pp. 26-31, available at: http://auqa.edu.au/qualityenhancement/
publications/occasional/publications/
Benton, S.L. and Cashin, W.E. (2012), “Student ratings of teaching: a summary of research and
literature”, IDEA Paper No. 50, The IDEA Center, Manhattan, KS, available at: www.
theideacenter.org/category/helpful-resources/knowledge-base/idea-papers (accessed 9
September 2014).
Bexley, E., Arkoudis, S. and James, R. (2013), “The motivations, values and future plans of
Australian academics”, Higher Education, Vol. 65 No. 3, pp. 385-400.
Boysne, G.A. (2008), “Revenge and student evaluations of teaching”, Teaching of Psychology,
Vol. 35, pp. 218-222.
Braun, V. and Clarke, V. (2006), “Using thematic analysis in psychology”, Qualitative Research in
Psychology, Vol. 3 No. 2, pp. 77-101.
Brew, A. (2010), “Imperatives and challenges in integrating teaching and research”, Higher
Education Research and Development, Vol. 29 No. 2, pp. 139-150.
Buchanan, J. (2011), “Quality teaching: means for its enhancement?”, Australian Universities
Review, Vol. 53 No. 1, pp. 66-72.
Byrne, J., Brown, H. and Challen, D. (2010), “Peer development as an alternative to peer
observation: a tool to enhance professional development”, International Journal for
Academic Development, Vol. 15 No. 3, pp. 215-228.
Casey, R.J., Gentile, P. and Bigger, S. (1997), “Teaching appraisal in higher education: an
Australian perspective”, Higher Education, Vol. 34, pp. 459-482.
Chamberlain, J.M., D'Artrey, M. and Rowe, D.A. (2011), "Peer observation of teaching: a decoupled process", Active Learning in Higher Education, Vol. 12 No. 3, pp. 189-201.
Chen, G.H. and Watkins, D. (2010), “Stability and correlates of student evaluations of teaching at
a Chinese university”, Assessment & Evaluation in Higher Education, Vol. 36 No. 6,
pp. 675-685.
Chen, Q.Y. and Yeager, J. (2011), "Comparative study of faculty evaluation of teaching practice between Chinese and US institutions of higher education", Frontiers of Education in China, Vol. 6 No. 2, pp. 200-226.
Courneya, C.A., Pratt, D.D. and Collins, J. (2008), “Through what perspective do we judge the
teaching of peers?”, Teaching and Teacher Education, Vol. 24, pp. 69-79.
Crumbley, L., Henry, B.K. and Kratchman, H. (2001), “Students’ perceptions of the evaluation of
college teaching”, Quality Assurance in Education, Vol. 9 No. 4, pp. 197-207.
Darwin, S. (2012), “Moving beyond face value: re-envisioning higher education evaluation as a
generator of professional knowledge”, Assessment & Evaluation in Higher Education,
Vol. 37 No. 6, pp. 733-745.
Douglas, A.S. (2013), “Advice from the professors in a university Social Sciences department on
the teaching-research nexus”, Teaching in Higher Education, Vol. 18 No. 4, pp. 377-388.
Du, P., Lai, M.H. and Lo, L.N.K. (2010), “Analysis of job satisfaction of university professors from
nine Chinese universities”, Frontiers of Education in China, Vol. 5 No. 3, pp. 430-449.
Egginton, B.E. (2010), “Introduction of formal performance appraisal of academic staff: the
management challenges associated with effective implementation”, Educational
Management Administration & Leadership, Vol. 38 No. 1, pp. 119-133.
Elassy, N. (2013), “A model of student involvement in the quality assurance system at institutional
level”, Quality Assurance of Education, Vol. 21 No. 2, pp. 162-198.
Freeman, R. and Dobbins, K. (2013), “Are we serious about enhancing courses? Using the
principles of assessment for learning to enhance course evaluation”, Assessment &
Evaluation in Higher Education, Vol. 38 No. 2, pp. 142-151.
Guba, E. and Lincoln, Y. (1989), Fourth Generation Evaluation, Sage, Newbury Park, CA.
Powney, J. and Hall, S. (1998), Closing the Loop: The Impact of Student Feedback on Students' Subsequent Learning, Scottish Council for Research in Education, Edinburgh.
Harvey, L. (2003), “Student feedback”, Quality in Higher Education, Vol. 9 No. 1, pp. 3-20.
Harvey, L. (2011), “The nexus of feedback and improvement”, in Nair, C.S. and Mertova, P. (Eds),
Student Feedback: The Cornerstone to an Effective Quality Assurance System in Higher
Education, Woodhead Publishing, Oxford.
Kohut, G.F., Burnap, C. and Yon, M.G. (2007), “Peer observation of teaching: perceptions of the
observer and the observed”, College Teaching, Vol. 55 No. 1, pp. 19-25.
Kreber, C. (2002), “Controversy and consensus on the scholarship of teaching”, Studies in Higher
Education, Vol. 27 No. 2, pp. 151-167.
Leckey, J. and Neill, N. (2001), “Quantifying quality: the importance of student feedback”, Quality
in Higher Education, Vol. 7 No. 1, pp. 19-32.
Lemos, M.S., Queirós, C., Teixeira, P.M. and Menezes, I. (2011), “Development and validation of a
theoretically based, multidimensional questionnaire of students’ evaluation of university
teaching”, Assessment & Evaluation in Higher Education, Vol. 36 No. 7, pp. 843-864.
Lomas, L. and Nicholls, G. (2005), “Enhancing teaching quality through peer review of teaching”,
Quality in Higher Education, Vol. 11 No. 2, pp. 137-149.
Lucas, L. (2006), The Research Game in Academic Life, SRHE and Open University Press, Maidenhead.
Marsh, H.W. (2007), "Students' evaluations of university teaching: dimensionality, reliability, validity, potential biases and usefulness", in Perry, R.P. and Smart, J.C. (Eds), The Scholarship of Teaching and Learning in Higher Education: An Evidence-based Perspective, Springer, Dordrecht, pp. 319-383.
Marsh, H.W. and Roche, L.A. (1997), "Making students' evaluations of teaching effectiveness effective", American Psychologist, Vol. 52, pp. 1187-1197.
Mayson, S. and Schapper, J. (2012), "Constructing teaching and research relations from the top: an analysis of senior manager discourses on research-led teaching", Higher Education, Vol. 64, pp. 473-487.
Minelli, E., Rebora, G., Turri, M. and Huisman, J. (2007), “The impact of research and teaching
evaluation in universities: comparing an Italian and a Dutch case”, Quality in Higher
Education, Vol. 12 No. 2, pp. 109-124.
Murphy, T. and MacLaren, I. (2009), “Teaching portfolios and the quality enhancement project in
higher education”, Educational Futures, Vol. 2 No. 1, pp. 71-84.
Nair, C.S. (2011), “Students’ feedback an imperative to enhance quality in engineering education”,
International Journal of Quality Assurance in Engineering and Technology Education,
Vol. 1 No. 1, pp. 58-66.
Nair, C.S. and Bennett, L. (2011), “Using student satisfaction data to start conversations about
continuous improvement”, Quality Approaches in Higher Education, Vol. 2 No. 1, pp. 17-22.
Nie, D.M. and Xu, J.S. (2006), “The contradictions & countermeasures of the teaching quality
evaluation of college teachers”, Journal of Educational Science of Hunan Normal
University, Vol. 5 No. 3, pp. 48-51.
Nygaard, C. and Belluigi, D.Z. (2011), “A proposed methodology for contextualised evaluation in
higher education”, Assessment & Evaluation in Higher Education, Vol. 36 No. 6,
pp. 657-671.
Palermo, J. (2013), "Linking student evaluations to institutional goals: a change story", Assessment & Evaluation in Higher Education, Vol. 38 No. 2, pp. 211-223.
Pinheiro, R. (2013), “Bridging the local with the global: building a new university on the fringes of
Europe”, Tertiary Education and Management, Vol. 19 No. 2, pp. 144-160.
Pounder, J.S. (2007), “Is student evaluation of teaching worthwhile?”, Quality Assurance in
Education, Vol. 15 No. 2, pp. 178-191.
Rantanen, P. (2013), “The number of feedbacks needed for reliable evaluation: a multilevel
analysis of the reliability, stability and generalisability of students’ appraisal of teaching”,
Assessment & Evaluation in Higher Education, Vol. 38 No. 2, pp. 224-239.
Shin, J.C. (2011), “Teaching and research nexuses across faculty career stage, ability and affiliated
discipline in a South Korean research university”, Studies in Higher Education, Vol. 36
No. 4, pp. 485-503.
Smith, C. (2008), “Building effectiveness in teaching through targeted evaluation and response:
connecting evaluation to teaching improvement in higher education”, Assessment &
Evaluation in Higher Education, Vol. 33 No. 5, pp. 517-533.
Soliman, I. and Soliman, H. (1997), “Academic workload and quality”, Assessment & Evaluation in
Higher Education, Vol. 22 No. 2, pp. 135-157.
Stein, S.J., Spiller, D., Terry, S., Harris, T., Deaker, L. and Kennedy, J. (2013), “Tertiary teachers and
student evaluations: never the twain shall meet?”, Assessment & Evaluation in Higher
Education, Vol. 38 No. 7, pp. 892-904.
Stowell, J.R., Addison, W.E. and Smith, J.L. (2012), “Comparison of online and classroom-based
student evaluations of instruction”, Assessment & Evaluation in Higher Education, Vol. 37
No. 4, pp. 465-473.
Taylor, J. (2007), "The teaching-research nexus: a model for institutional management", Higher Education, Vol. 54 No. 6, pp. 867-884.
Taylor, J. (2008), "The teaching-research nexus and the importance of context: a comparative study of England and Sweden", Compare: A Journal of Comparative and International Education, Vol. 38 No. 1, pp. 53-69.
The Guardian (2012), "British universities are in need of a teaching revolution", available at: www.theguardian.com/higher-education-network/blog/2012/feb/15/uk-universities-teaching-revolution (accessed 13 May 2014).
Trigwell, K. (2011), “Measuring teaching performance”, in Shin, J.C., Toutkoushian, R.K. and
Teichler, U. (Eds), University Rankings: Theoretical Basis, Methodology and Impacts on
Global Higher Education, Springer, Dordrecht, pp. 165-184.
Weaver, M.R. (2006), “Do students value feedback? Student perceptions of tutors’ written
responses”, Assessment and Evaluation in Higher Education, Vol. 31 No. 3, pp. 379-394.
Winchester, M.K. and Winchester, T.M. (2012), “If you build it will they come? Exploring the
student perspective of weekly student evaluations of teaching”, Assessment & Evaluation
in Higher Education, Vol. 37 No. 6, pp. 671-682.
Winchester, M.K. and Winchester, T.M. (2013), “A longitudinal investigation of the impact of
faculty reflective practices on students’ evaluations of teaching”, British Journal of
Educational Technology, Vol. 45 No. 1, pp. 112-124.
Zou, Y.H., Du, X.Y. and Rasmussen, P. (2012), “Quality of higher education: organisational or
educational? A content analysis of Chinese university self-evaluation reports”, Quality in
Higher Education, Vol. 18 No. 2, pp. 169-184.

Further reading
Newton, J. (2002), “Views from below: academics coping with quality”, Quality in Higher
Education, Vol. 8 No. 1, pp. 39-61.

About the authors


Chenicheri Sid Nair is a Professor of Higher Education Development at the Centre for the
Advancement of Teaching and Learning (CATL). His current role looks at the quality of teaching
and learning at University of Western Australia (UWA). Dr Nair is a Chemical Engineer by
training, but his interest in helping students succeed in the applied sciences in higher education
led him to further specialise in Science and Technology education. This led him to his many works
in improving student life in the higher education system. His research work lies in the areas of
quality in the higher education system, classroom and school environments, and the
implementation of improvements from stakeholder feedback. Chenicheri Sid Nair is the
corresponding author and can be contacted at: sid.nair@uwa.edu.au
Jinrui Li is currently a Research Assistant at the University of Waikato and is co-editing a book
on learner autonomy in Asian countries. Previously, she worked as a Lecturer in universities both
in China and New Zealand. Dr Li has carried out research on peer feedback, tutors’ assessment
feedback on writing and staff appraisal in universities. Her research interest includes different
levels of assessment in tertiary education.
Li Kun Cai is a Lecturer in the Foreign Language Faculty of North China University of Science
and Technology. She is interested in various aspects of English language education, feedback and
assessment.

