PEGGY A. ERTMER
JENNIFER C. RICHARDSON
JAMES D. LEHMAN
TIMOTHY J. NEWBY
XI (CAROL) CHENG
CHRISTOPHER MONG
AYESHA SADAF
Purdue University
ABSTRACT
*The contents of this article were developed under grant #P116B060421 from the Fund
for Improvement of Post-Secondary Education (FIPSE), a program of the U.S. Department of
Education. The contents of this article were developed with the support of the grant, but the
contents do not necessarily represent the views or policies of the Department of Education,
and you should not assume endorsement by the Federal Government.
of feedback is most often cited as the reason for withdrawing (Ko & Rossen,
2001). Research has also shown that the quality of student discussion responses
can be increased through the use of constructive feedback that is prompt, con-
sistent, and ongoing (Ertmer & Stepich, 2004).
However, to attain this level of feedback in online courses, instructors must
invest a significant amount of time and effort. This can frustrate both students
and instructors: the need for more detailed and more frequent feedback adds
to instructors' workload, which in turn can mean longer wait times
for students (Dunlap, 2005). Research by Gibbs and Simpson (2005) indicated
that wait time can seriously limit the effectiveness of feedback because formative
assessment is not useful to students once they have moved on to new topics.
However, based on work by Cook (2001), lower-quality or less detailed feedback
can still be effective if provided to students efficiently and frequently.
Results from Cook's research demonstrated that constant, automated feedback
offered by electronic quizzes in a content management system could still play a
significant role in improving student learning.
With these issues in mind, researchers have recommended the use of peer
feedback as an effective strategy for providing timely feedback. Liu and Carless
(2006) defined peer feedback as "a communication process through which
learners enter into dialogues related to performance and standards" (p. 280)
and affirmed that the inclusion of peer feedback in a course can be a practical
solution to providing students with an increased amount of feedback in a quicker
fashion. Palloff and Pratt (2007) suggested that online courses should auto-
matically include the expectation that students will provide meaningful feedback
to each other as a means of creating a connection between participants and of
providing new perspectives.
Another benefit of peer feedback is peer learning. Boud, Cohen, and Sampson
(1999) defined peer learning as "the use of teaching and learning strategies in
which students learn with and from each other without the immediate intervention
of a teacher" (pp. 413-414). Peer feedback allows learners the unique opportunity
to discuss the attributes of good or poor performance and to evaluate their own
performances against concrete examples from their peer group (Topping, 1998).
Assuming that the feedback activity is well designed, both the givers and receivers
of feedback can benefit from the process (Topping, 2005). For example, in a study
by Ertmer et al. (2007), participants involved in a peer feedback process described
how the process reinforced their learning and enabled them to achieve higher
levels of understanding. Liu, Lin, Chiu, and Yuan (2001) implemented a peer
feedback system in their web-based computer science course and found that
students increased their critical thinking. In addition, the authors noted that
the most successful students were "strategic adapters" (p. 250), who could use a
critical eye and adapt assignments to address peer comments.
However, there are challenges to implementing peer feedback. Topping
(1998) suggested that when it comes to receiving feedback, some students might
70 / ERTMER ET AL.
METHODS
Research Design
A mixed methods research design was used to examine students' perceptions
of the value of peer feedback to their learning (Creswell & Plano Clark, 2007).
Participants' perceptions were obtained through a survey questionnaire
administered after the completion of the online discussions in the course. Results from
close-ended items were augmented by responses to open-ended items; together
they provided a better understanding of the perceived role of peer feedback in
students' learning from the online discussions. The goal of combining qualitative
and quantitative data was to capitalize on the strengths of both forms of data,
minimize the weaknesses of either type, and reach a better understanding of the
research question by comparing results from the different sources (Johnson
& Onwuegbuzie, 2004).
The major lens used for this study was pragmatism based on our intent to
examine real-world, problem-centered, and practice-oriented phenomena; that
is, we wanted to understand the phenomenon of peer feedback in online dis-
cussions and its relationship to perceived learning. The pragmatic worldview
is associated with a mixed methods research approach and focuses mainly on the
PEER FEEDBACK IN ASYNCHRONOUS LEARNING /
71
Participants
Of the 286 students who completed a 2-credit introductory educational tech-
nology course in Fall 2008, 215 submitted responses to an online perception
survey (30 items). Demographic responses from these 215 students indicated
that 67% of the participants were female, while 33% were male. The majority of
the participants were either freshmen or sophomores (77%). A majority
(68%) of the students reported little to no experience with online discussions
in previous courses (n = 0-1 courses that used online discussions); 32% had used
online discussions in two or more courses. In addition, 37% of the participants
had little to no previous experience with Blackboard; 63% had used Blackboard
in two or more courses. The majority of the students (89%) reported being fairly
to very comfortable with computers.
The introductory educational technology course was a required course for all
students majoring in elementary, secondary, and special education. Each week
students attended a 1-hour lecture and a 2-hour computer lab. As part of their
course enrollment, students were randomly assigned to one of 17 lab sections.
On average, labs consisted of 16 students (range = 10-25), facilitated by one
graduate teaching assistant (TA). At the beginning of the semester, lab sections
were alternately assigned to the peer feedback (PF) or no peer feedback (NO-PF)
condition. Because four TAs facilitated multiple sections, efforts were made to
assign all of their sections to the same condition (thus, some trading of assigned
conditions occurred). In the end, eight labs were assigned to the PF condition
(n = 134), while nine labs were assigned to the NO-PF condition (n = 152). Of
the 215 students who responded to the survey measure, 109 were in the PF
condition and 106 were in the NO-PF condition.
Procedures
Beginning with the second week of classes, and continuing for the next 3 weeks,
students participated in three online discussions (1 per week). The instructor of
the course introduced the online discussions to students during the first and
third class lectures, explaining the purpose of the discussions, in general, and
providing an overview of the upcoming week's discussion, more specifically.
After the first discussion, the instructor took time during his lecture to discuss
his perceptions of the first online discussion and asked the TAs to illustrate
examples of quality postings in their lab meetings that week. These measures
were taken to ensure that students understood the value of the discussions to their
learning and to avoid disconnects between the instructor's and the students'
expectations for participation.
Students were placed into discussion groups within the Blackboard course
management system based on their assigned lab sections. During the first lab
meeting, the TAs introduced the students to the discussion functions in
Blackboard (and the peer feedback tool, if appropriate). To ensure that the students
had a strong understanding of the expectations for posting and reviewing
peers' comments, TAs explained the grading system and provided examples of
high-quality postings. In addition, TAs sent weekly e-mails to their students,
reminding them of posting requirements and deadlines.
The first discussion asked students to propose specific learning materials based
on the principles of an assigned learning theory (e.g., behaviorism, cognitive
information processing, constructivism). In order to increase participation in the
discussions, all students were required to make a minimum of one response
halfway through the week and to respond to a minimum of one posting made by a
peer later in the week. The Appendix includes the complete set of guidelines for
Discussion 1. In addition, students in the PF groups were required to rate and
give feedback on a minimum of three responses made by their peers, using
the peer review tool. That is, in addition to the more general guidelines (see
Appendix), students in the peer feedback groups were asked to use the peer
review tool in Blackboard to rate how helpful a response was to them personally
(see instructions following the *** in the Appendix) and to provide comments
to explain their ratings.
The second discussion was organized as a debate in which students were
asked to argue for or against the need to teach millennial learners differently
than previous generations. Students began by reading two assigned articles
about the millennial generation and then forming their arguments, based on the
position they were assigned. Posting guidelines were similar to those used in
Discussion 1.
The third and final discussion was organized as a case study, centered on
the issue of plagiarism. Students read two articles from the New York Times
about a young author whose first novel, after receiving rave reviews, was found
to closely parallel another author's work and who seemingly plagiarized
language from two of that author's previous books. Two question prompts were
used to get the discussion started:
1. In what ways could TurnItIn.com (or something similar) have been used
to help this author avoid the criticism she faced?
2. Should such software be used in high schools and colleges as a way to
deter plagiarism?
Other discussion guidelines remained the same as the first two discussions.
During each discussion, the TAs played a facilitative role, encouraging further
topic discussion by asking probing questions, seeking clarification, and requesting
specific examples to illustrate students' ideas. On average, TAs posted five
comments/questions per discussion.
The week following the third discussion, students completed a 30-item, author-
created perception survey that also included demographic items and background
experiences. The perception survey was designed to obtain more specific
information about students' thoughts regarding the online discussions, particularly in
terms of perceived value to their learning. Additional questions were asked of the
peer feedback group to gather their thoughts about the advantages and limitations
of using the peer review tool as part of their online discussions.
Data Analysis
Descriptive statistics (frequency counts, means, standard deviations, etc.) were
calculated for the closed-ended items on the perception survey; where appropriate,
t-tests were used to identify differences between the responses from PF and
NO-PF students. Open-ended survey items were analyzed using a simple pattern-
seeking method to determine those aspects of the online discussions and peer
review that students found most valuable and most challenging; the responses
RESULTS
Perceptions of Online Discussions
In general, the students in the PF group rated themselves (1 = very uncom-
fortable and 5 = very comfortable) more comfortable using the online discussion
tool (M = 3.89) than the NO-PF group (M = 3.70). In addition, students in the
PF group recorded an average comfort level of 3.76 when posting responses and
3.71 when responding to others' posts, compared to average comfort ratings
of 3.68 and 3.50, respectively, recorded by the NO-PF group (see Table 1). A
significant difference was noted between students' ratings of confidence for
being able to contribute relevant ideas to the discussion, with students in the PF
group showing significantly higher confidence than students in the NO-PF group
(see Table 1).
When students were queried about perceived differences in their learning
based on participating in the discussions, 38% (n = 41) of the PF group and
31% (n = 33) of the NO-PF group indicated they had noticed differences, while
37% of both groups had not; the remainder were unsure. When asked to identify
the advantages to participating in online discussions, 66% (n = 72) of the PF
group and 59% (n = 63) of the NO-PF group selected the response, "[Online
discussions] made it easier to express opinions and to participate in class
discussions." Approximately half of the students in both groups (51% for both
groups) indicated that the discussions "helped me understand the content
better," while 46% of the PF group and 44% of the NO-PF group agreed that
online discussions "motivated me to study the course materials or other related
topics/content."
Primary limitations noted by the students in both groups included "It was
hard to remember to do it" (42%, PF; 52%, NO-PF), suggesting that students
may not have remembered to contribute to each discussion (or contributed on a
limited basis) as required. The grades students received from their participation
in the online discussions give some indication that this is true. On average,
students in the two groups received 24 (NO-PF) or 25 (PF) points out of a possible
30 for the three discussions and posted between two and three messages per
discussion (see Table 2). (Note: ns represent total number of students enrolled in
each group, not just those who had completed the survey.) Although the students
in the NO-PF groups posted slightly more comments, on average, than the PF
groups, these differences were not significant (Discussion 1: p = .70; Discussion 2:
p = .38; Discussion 3: p = .29).
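For readers who want to reproduce this kind of group comparison, an independent-samples (Welch's) t-test can be computed in a few lines. This is a minimal sketch only; the posting counts below are hypothetical stand-ins, not the study's raw data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal group variances)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical comments-per-discussion counts for two lab groups
no_pf = [3, 2, 3, 3, 4, 2, 3, 3]
pf = [2, 3, 2, 3, 3, 2, 4, 2]

t = welch_t(no_pf, pf)
print(round(t, 2))  # a small t statistic means the group means differ little
```

A p-value would then be taken from the t distribution with Welch-adjusted degrees of freedom (in practice, `scipy.stats.ttest_ind` with `equal_var=False` does both steps at once).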
Particularly in this blended course, in which required face-to-face class
attendance varied, students may have had trouble establishing a routine for
online attendance. On a positive note, however, students' participation increased
from the first to the third discussion by an average of over one comment per
student. Whether this was due to students learning how to manage the online
workload better or simply because the third discussion was more interesting is
unknown. However, anecdotal information received from the instructor and three
of the course TAs suggests that students may have become more comfortable
disagreeing with each others ideas and felt safer sharing their ideas with their
peers as they participated in more online discussions.
Finally, we asked students whether, if they were the instructor of the course, they
would continue using online discussions. Forty-four percent (n = 48) of the PF group
and 37% (n = 39) of the NO-PF group reported that they would continue using
the online discussions in the same manner. About 16% of both groups reported
that they would discontinue use of online discussions; reasons provided were
comparable to those typically found in the literature (e.g., lack of interaction,
low response quality, not useful to learning, preference for face-to-face discus-
sions). The remaining students reported that they would continue using online
discussions but with some kind of a change (e.g., increase the number, decrease
the number, or make a change in the format). Thus, a clear majority of students
recommended that the online discussions continue in some form. Apparently,
the students perceived sufficient value in the experience to want to see this
approach continue. As one student commented, "Overall I thought the online
discussions were a great way to get us using the Internet as a resource to network
with our classmates."
Students were asked to rate how their attitudes toward peer learning had
changed following their participation in the three online discussions. Interestingly,
a greater percentage of the NO-PF group (50%) rated their attitudes as more
positive, compared to the PF group (40%), while approximately a third of the
students in each group (38%, PF; 31%, NO-PF) rated their attitudes as neutral
(that is, participation in the online discussions had not changed their attitudes).
In part, open-ended responses indicated that the differences may have been due to
the fact that students in the PF group were aware that their peers in the NO-PF labs
were not having to complete the additional task of providing peer feedback. While
this was a critical factor in a study such as this one, it would not be an issue when
an entire class is involved in the peer feedback process. Finally, open-ended
responses also suggested that students in the PF group had greater chances of
encountering technical difficulties, due to the extra time spent online. It is
important that the systems we use for facilitating these processes are as robust
as possible.
Comments from the students in both groups were primarily positive (55 positive
comments from PF students as opposed to 54 from NO-PF students). One student
in the PF group wrote, "I think the online discussions enhanced my view toward
peer learning because I was able to gain different perspectives that I would not
have otherwise thought about." Another PF student explained, "I am now more
relaxed with having my peers view my work than I was at the beginning of the
year. I also feel more relaxed with knowing that I can get an honest response
whether the person agrees with me or not." One student commented that she/he
would like to use "more peer learning when I teach."
Similar comments were made by students in the NO-PF groups: "With the first
assignment, I really disliked the online discussion. I thought it was tedious and
useless. However, after getting feedback from others [postings in response to
their posts], I realized it was a great tool to better gain an understanding on
a specific topic." Negative comments (PF = 20; NO-PF = 18) related more to
thinking that the online discussions (as opposed to peer learning) were not
beneficial and that many students posted just to get credit.
When students were asked to rate the level of collaboration with their peers
as a result of the online discussions, 19% (n = 21) of the PF group and 14% (n = 15)
of the NO-PF group indicated very high or high levels of collaboration; 54%
(n = 59) of the PF group and 53% (n = 56) of the NO-PF group indicated a
medium level of collaboration. These findings support previous literature, espe-
cially the expectation that students interactions in online learning environments
can create meaningful connections among participants (Palloff & Pratt, 2007).
In a previous study, Ertmer et al. (2007) required students to give peer feedback
without the use of a peer feedback tool and reported that the process was
time-consuming for the instructor as well as logistically difficult for the students
who were also learning how to give meaningful feedback. In this study, the
implementation of an embedded tool for providing peer feedback was expected
to remove many of these frustrations (see Figure 2). In fact, no comments were
made about not being able to use the embedded tool or not understanding how
to use it; rather, comments focused on the content of the peer feedback.
For example, when asked how to improve the peer rating system, 11 PF students
commented that the system would have been better if more explanation were
included with the ratings (e.g., "I think people need to go more in depth of why
they rated something a certain way"); four students asked if there were a way
for every student to get feedback ("I think that there should be a way to make sure
that everyone gets a response from classmates"); and five students recommended
that the rating scale include more levels (e.g., "I felt that there needed to be
more stars to give a bigger range of ratings") (see Figure 3).
More students in the NO-PF group than the PF group listed as a limitation
that they were unsure what to post (32% vs. 19%) and that they didn't know
how to respond to others' postings (33% vs. 21%). It is possible that giving
peer feedback provided more structure to students in the PF group, helping
them feel more comfortable posting responses. Comfort ratings, reported earlier,
support this hypothesis. For example, one student wrote, "When I had to rate
my peers' responses I really was not sure what to give them. Part of this was
that I did not want to offend them." However, despite the perception that
scoring was difficult, 46% of the PF students thought that they received helpful
feedback on their postings. This coincides with the idea that both giving and
receiving feedback can benefit learners if the exercise is well designed and
organized (Topping, 2005).
After examining the peer feedback posts it was discovered that the students
did not give as many ratings as required, which could have led to a decreased
effect in the outcomes previously discussed. In general, students only completed
38% of the required ratings. Moreover, the average ratings were relatively
high, 3.2 on a 4-point scale, which could indicate that students' ratings were not
providing true feedback and may have been slightly inflated. This assumption
seems plausible as several students in the PF group discussed the difficulty
in rating others' responses. For example, one explained, "Most of the time
the student will not be completely honest with you for fear of making you
mad/sad. So even though they might rate you three stars they are doing it out
of kindness."
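The completion rate and mean rating reported above are simple aggregates over the peer-rating log. The bookkeeping can be sketched as follows; the log format and the three-ratings-per-student requirement mirror the study's design, but the records themselves are invented for illustration.

```python
# Hypothetical peer-rating log: (rater, discussion, stars_given)
log = [
    ("s1", 1, [4, 3, 3]),   # completed all three required ratings
    ("s1", 2, [3]),         # completed only one
    ("s2", 1, []),          # completed none
    ("s2", 2, [4, 2]),
]
REQUIRED_PER_DISCUSSION = 3

# Flatten all star values, then compare against the required total
all_stars = [s for _, _, stars in log for s in stars]
completion = 100 * len(all_stars) / (REQUIRED_PER_DISCUSSION * len(log))
mean_rating = sum(all_stars) / len(all_stars)

print(f"{completion:.0f}% of required ratings completed")
print(f"mean rating: {mean_rating:.1f} on a 4-point scale")
```

On this toy log the script reports 50% completion, illustrating how a shortfall like the 38% figure above would surface from raw records.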
DISCUSSION
In this study, the PF group demonstrated a higher level of comfort using the
online discussion tool, posting responses to the discussions, and responding to
others' posts. More importantly, students in the PF group were significantly more
confident that they could contribute relevant ideas to the discussions. These
findings suggest that as students became more involved in the PF process, their
confidence and comfort in participating in online discussions increased. It is
possible that giving peer feedback provided more structure to students in the
PF group; the peer feedback process prompted them to consider the relevance
of their posts in order to gain higher ratings from their peers.
In this study, students in both groups (PF and NO-PF) were able to participate
readily in the online discussions, and all received feedback on their early efforts.
In addition, it was intended that students in the peer feedback group receive three
peer ratings, with comments, on their initial posts for each discussion. Thus, if
students took the time to post their comments, they received feedback in multiple
forms, with one of these being a peer rating (for the PF group).
Previous researchers (e.g., Land & Dornisch, 2001; Shea & Bidjerano, 2009)
have noted that students' online participation is often limited by low confidence
and lack of prior knowledge. However, providing students with positive and
constructive feedback after their first attempts may have helped mitigate this
potential problem.
The results of this study suggest that adding a peer review process on top of
the other feedback processes typically included in effective course designs may
not lead to additional increases in perceived learning. In general, students in
this study discussed peer learning in terms of responses made to their postings
rather than the ratings they received. In fact, some students described struggling
with peer feedback, perhaps even becoming negative, due to their perceptions
of a poor cost-benefit ratio: for the amount of time they had to give to complete
the ratings (in addition to responding to peers' postings), the payback was
deemed insufficient.
Along these lines, it is also important to consider the level of the learners in
this study, mostly underclassmen at the freshman and sophomore level, a group
unlikely to have had prior experience with peer feedback, and, as evidenced by
their background experiences, online discussions in general. As Palloff and Pratt
(2007) explained, providing meaningful feedback is not a naturally acquired skill.
For this group of learners, in particular, it is necessary to teach these skills, model
relevant behaviors, and provide encouragement in the peer feedback process.
APPENDIX
Let's imagine that you work for an educational firm that develops learning
curriculum for elementary school children. Your company adheres to a very
behaviorally/information processing/constructivistically [use the one you
have been assigned] oriented viewpoint of learning. A large school district
in Texas has come to your company and asked for you to develop a proposal
for the development of a science unit of instruction for fifth grade students.
Your unit will specifically be focused on insects. This is a very important
potential client for your company and your proposal will be in competition
with two other companies.
the literature, etc. Take some time to reflectively think about your response
before you send it.
Don't go on and on. Keep your comments concise; you want others to be able
to read and understand what you have written, but no one wants to spend
hours and hours reading.
You can add links to pictures, videos (e.g., YouTube), other web sites, etc. to
support your argument.
I hope you find this way of interacting interesting. We will have lots to discuss
during this and future online discussions.
REFERENCES
Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United
States, 2008. Needham, MA: Sloan Consortium. Retrieved November 30, 2008, from
http://www.sloan-c.org/publications/survey/downloadreports
Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of
blended education in the United States. Needham, MA: Sloan Consortium. Retrieved
November 30, 2008, from http://www.sloan-c.org/publications/survey/downloadreports
Arbaugh, J. B. (2000). How classroom environment and student engagement affect learning
in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9-26.
Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction
and learning in web-based courses. Business Communication Quarterly, 64(4), 42-54.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment
and Evaluation in Higher Education, 24, 413-426.
Bonk, C. J., & Zhang, K. (2008). Empowering online learning: 100+ activities for reading,
reflecting, displaying, and doing. San Francisco, CA: Jossey-Bass.
Cook, A. (2001). Assessing the use of flexible assessment. Assessment and Evaluation
in Higher Education, 26, 539-549.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed method
approaches (3rd ed.). Los Angeles, CA: Sage.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods
research. Thousand Oaks, CA: Sage.
Dunlap, J. C. (2005). Workload reduction in online courses: Getting some shuteye.
Performance Improvement, 44(5), 18-25.
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., et al.
(2007). Using peer feedback to enhance the quality of student online postings: An
exploratory study. Journal of Computer-Mediated Communication, 12(2). Available
online: http://jcmc.indiana.edu/vol12/issue2/ertmer.html
Ertmer, P. A., & Stepich, D. A. (2004, July). Examining the relationship between
higher-order learning and students' perceived sense of community in an online
learning environment. Proceedings of the 10th Australian World Wide Web
Conference, Gold Coast, Australia.
Ertmer, P., Temur-Gedik, N., Richardson, J., & Newby, T. (2008). Perceived value of
online discussions: Perceptions of engineering and education students. In Proceedings
CO: Educause Center for Applied Research. Retrieved October 23, 2009, from
http://www.educause.edu/ers0906
Stepich, D. A., & Ertmer, P. A. (2003). Building community as a critical element of
online course design. Educational Technology, 43(5), 33-43.
Swan, K. (2002). Building communities in online courses: The importance of interaction.
Education, Communication and Information, 2(1), 23-49.
Topping, K. (1998). Peer assessment between students in colleges and universities.
Review of Educational Research, 68, 249-275.
Topping, K. (2005). Trends in peer learning. Educational Psychology, 25, 631-645.
U.S. Department of Education (US DOE). (2009). Evaluation of evidence-based practices
in online learning: A meta-analysis and review of online learning studies. Washington,
DC: U.S. Department of Education; Office of Planning, Evaluation, and Policy
Development. Retrieved October 24, 2009, from www.ed.gov/about/offices/list/
opepd/ppss/reports.html
Xie, K., Debacker, T. K., & Ferguson, C. (2006). Extending the traditional classroom
through online discussion: The role of student motivation. Journal of Educational
Computing Research, 34(1), 67-89.