Online collaborative assessment
Online collaborative activity fits this framework well, and it is also considered a key way to improve teaching and learning.
Online collaborative assessment scores highly against most of Knight's criteria. It is potentially a highly valid form of assessment because it permits learners to undertake engaging and authentic tasks that closely match learning objectives. However, it faces the same reliability issues as traditional group work, perhaps more acutely because of its innovative nature. It is affordable because it uses commonly (and usually freely) available Web 2.0 tools, and students generally enjoy using these tools and services. The issue of the reliability of collaborative online assessment is considered later in this paper.
A characteristic not mentioned by Knight, but one that has important implications for online assessment, is fairness, given the long-standing complaints from students about recognising individual effort (or the lack of it) and penalising malpractice. Online collaborative assessment appears to be no better, and perhaps worse, in this regard than traditional group work.
The best rubrics reflected the unique nature of online collaborative working and had specific criteria relating to this environment, but practice varied widely.
The Becta research (Crook and Harrison, 2008) identified current assessment practice, with its focus on individual work and its prioritising of reliability over validity, as a barrier to adoption in schools:
“Many indicated that there was a tension between the collaborative learning
encouraged by Web 2.0 and the nature of the current assessment system.”
(Crook and Harrison, 2008)
“It was in the area of assessment that some teachers felt that Web 2.0 had some of its
greatest potential, as peer assessment and collaborative composition connected with
the personalisation and skills agenda.” (Crook and Harrison, 2008).
However, examples are isolated and there is no apparent over-arching pattern to the deployment of
collaborative tools – either for learning or assessment. Traditional methods of assessment prevail – even
when, as in the case of paper log books, online alternatives appear to be superior in every respect.
A particular concern is the reliability of online collaborative assessment, especially for high stakes
summative assessment. Many marking schemes simply regurgitate the rubrics used to assess traditional
group work. Often, when rubrics are customised to the online environment, they use crude metrics such as
frequency or length of contributions. Sometimes no rubrics are used at all: over 50% of SQA officers
reported that no marking scheme was used during the assessment of online collaborative work.
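To make concrete how little such metrics capture, consider the sketch below (Python; the posts and author names are invented for illustration). It computes exactly the two measures these crude rubrics rely on: the frequency and the total length of each participant's contributions.

```python
from collections import defaultdict

# Illustrative only: a flat list of (author, text) forum posts.
posts = [
    ("amy", "I think the reflective stage of the cycle applies here because..."),
    ("ben", "Agreed."),
    ("amy", "Building on Ben's point, a summary of the discussion so far..."),
]

counts = defaultdict(int)   # frequency of contributions per author
lengths = defaultdict(int)  # total length of contributions per author
for author, text in posts:
    counts[author] += 1
    lengths[author] += len(text)

# Note what these figures cannot capture: a long, off-topic post
# outscores a short, incisive one on both measures.
for author in counts:
    print(author, counts[author], lengths[author])
```

Neither figure says anything about whether a contribution builds on, appraises or summarises the work of others, which is why such metrics remain crude.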
Teachers’ limited ability to use collaborative tools appears to be a significant barrier to wider adoption; it was cited as the major factor in the SQA research. Teachers themselves feel that “curriculum and assessment pressures reduced their opportunities to introduce Web 2.0 approaches” (Crook and Harrison, 2008). So, in spite of its strong theoretical basis and vocational relevance, practice is currently limited and, where it exists, quality is variable.
The existing skill base of teachers and lecturers needs to be extended to embrace Web 2.0 tools, and new attitudes towards collaborative working and group assessment may need to be engendered. This training is also required by other educational professionals: over 50% of the respondents to the SQA survey reported their own knowledge and skills in this area to be a barrier to the increased use of these tools for assessment.
The existing curriculum and assessment systems remain a significant source of inertia: the “persistence of traditionalism” (Elton and Johnston, 2002). The “backwash effect” of assessment (whereby teachers teach what they believe will be assessed, and little else) further entrenches existing practice and constrains innovation. And the emphasis on reliability over validity by examining bodies reinforces current practice in preference to more innovative forms of assessment.
A recurring concern among teachers and other educational professionals is the marking of online collaborative work. The importance of clear and transparent marking criteria is well documented (see, for example, Hounsell et al, 2007), but current practice is variable. The development of well-grounded criteria for assessing online group work is vital to give teachers and lecturers more confidence in their assessment of online activity.
Creanor (2000) advocates a number of criteria for assessing online collaborative activity. These are:
1. Presenting new ideas.
2. Building on others' contributions.
3. Critically appraising contributions.
4. Coherently summarising discussions.
5. Introducing and integrating a relevant body of knowledge.
6. Linking theoretical discussions to own experience.
Creanor’s criteria address some of the unique aspects of online collaborative working, and represent a significant improvement over the crude application of “traditional” criteria to online activity. However, they overlook some of the skills unique to online group work, particularly the quality of collaboration and the technical skill needed to structure and present information.
A review of a number of exemplar marking schemes revealed a range of additional criteria in current use:
7. Collaborating with other contributors effectively.
8. Using the tool's facilities to structure and present information.
9. Providing accurate, concise and clearly written contributions.
10. Summarising concepts from readings.
11. Moving discussions forward.
12. Identifying strengths in contributions.
13. Providing constructive criticism where appropriate.
14. Suggesting solutions to problems.
15. Providing links to high quality and relevant online and offline resources.
16. Using multimedia to improve the quality of information.
17. Observing expected norms of behaviour for the medium in use.
There was little consistency in the selection or application of these criteria in the exemplars reviewed. However, used consistently, a subset of these criteria could be selected to assess most online collaborative activity, and a rubric could be created with marks awarded to each criterion in proportion to its relevance to the assessment task.
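By way of illustration, the sketch below (Python) shows how such a rubric might work; the chosen subset of criteria, the weights and the student's scores are invented for a hypothetical discussion-forum task, not drawn from the exemplar schemes reviewed.

```python
# A minimal sketch of a weighted rubric for online collaborative work.
# Criteria are drawn from the lists above; weights and scores are
# invented for a hypothetical discussion-forum task.
RUBRIC = {
    "Building on others' contributions": 3,
    "Critically appraising contributions": 3,
    "Coherently summarising discussions": 2,
    "Collaborating with other contributors effectively": 2,
    "Observing expected norms of behaviour for the medium in use": 1,
}

def mark(scores, max_score=5):
    """Return a percentage mark: each criterion is scored 0..max_score,
    weighted by its relevance to the task, and normalised."""
    total = sum(RUBRIC.values()) * max_score
    weighted = sum(w * min(scores.get(c, 0), max_score)
                   for c, w in RUBRIC.items())
    return 100.0 * weighted / total

# Example: one student's scores (0..5) against each criterion.
student = {
    "Building on others' contributions": 4,
    "Critically appraising contributions": 3,
    "Coherently summarising discussions": 5,
    "Collaborating with other contributors effectively": 4,
    "Observing expected norms of behaviour for the medium in use": 5,
}

print(f"{mark(student):.1f}%")  # prints 80.0%
```

Publishing the weights alongside the criteria gives learners a transparent picture of what the task values, though, as discussed below, no scheme of this kind removes marker judgement entirely.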
While the use of detailed criteria would improve the transparency and reliability of assessment, such an
approach has its limitations. As noted by Knight (2002), “attempts to produce precise criteria lead to a
proliferation of sentences and clauses, culminating in complex, hard to read documents.” And no number of criteria can remove the inherent subjectivity of marking:
“The research evidence makes it clear that there is a degree to which criteria cannot be unambiguously specified but are subject to the social processes by which meanings are contested and constructed.” (Greatorex, 1999)
CONCLUSION
Online assessment often shines a light on the dark corners of traditional assessment, exposing questionable practices that have been tolerated for years and which only come to light when we seek to computerise the task. This may be true of collaborative online assessment, which inherits many of the long-standing problems associated with the assessment of traditional group work. However, the problems are not insurmountable, and the benefits appear to be understood and accepted by most practitioners.
Progress has been slow, and there is a well-documented danger that the assessment system (indeed, the entire education system) will come adrift from the rest of society through a reluctance to modernise its practices. The adoption of collaborative approaches to learning appears to be a key trend in education, and this has to be reflected in how learners are assessed, as part of the wider modernisation of assessment that has been widely called for.
References
Creanor, L. (2000) Structuring and Animating Online Tutorials. Case Studies from the OTiS e-Workshop, Heriot-Watt University and Robert Gordon University.
Crook, C., Harrison, C. (2008) Web 2.0 Technologies for Learning at Key Stages 3 and 4: Summary Report, Becta.
de Freitas, S. (2008) Serious Virtual Worlds: A Scoping Study, JISC.
Elliott, R. (2008) Survey results, available at: http://www.surveymonkey.com/sr.aspx?sm=DIH_2baU0bC_2bVIZDztauzmqfd34Rq_2fq030CfMtq92lyzE_3d (Accessed: 20 December 2008).
Elton, L., Johnston, B. (2002) Assessment in Universities: A Critical Review of Research, LTSN Generic Centre.
Gillmor, D. (2004) We the Media, O'Reilly Media.
Greatorex, J. (1999) Generic Descriptors: A Health Check, Quality in Higher Education.
Hounsell, D., Klamphfleitner, M., Hounsell, J., Huxham, M., Thomson, K., Blair, S., Falchikov, N. (2007) Innovative Assessment Across the Disciplines, The Higher Education Academy.
Knight, P. T. (2002) Summative Assessment in Higher Education: Practices in Disarray, Studies in Higher Education, Volume 27 (3).
Kolb, D. A. (1984) Experiential Learning: Experience as the Source of Learning and Development, New Jersey: Prentice-Hall.
Morgan, M. (2004) Notes Towards a Rhetoric of Wiki. Paper presented at CCCC 2004, San Antonio, TX.
Murphy, S. (1994) Portfolios and Curriculum Reform: Patterns in Practice, Assessing Writing 1.
Nicol, D., MacFarlane-Dick, D. (2006) Formative Assessment and Self-Regulated Learning, Studies in Higher Education, Volume 31 (2).
Pellegrino, J. W. (1999) The Evolution of Educational Assessment, William Angoff Memorial Lecture Series, ETS.
The Economist Intelligence Unit (2008) The Future of Higher Education: How Technology Will Shape Learning, The Economist.
Vygotsky, L. S. (1978) Mind in Society: The Development of Higher Psychological Processes, Harvard University Press.