
J. EDUCATIONAL COMPUTING RESEARCH, Vol. 43(1) 67-88, 2010

PEER FEEDBACK IN A LARGE UNDERGRADUATE BLENDED COURSE: PERCEPTIONS OF VALUE AND LEARNING*

PEGGY A. ERTMER
JENNIFER C. RICHARDSON
JAMES D. LEHMAN
TIMOTHY J. NEWBY
XI (CAROL) CHENG
CHRISTOPHER MONG
AYESHA SADAF
Purdue University

ABSTRACT

This study examined students' perceptions of peer feedback and learning
in a large, undergraduate course that incorporated supplementary online
discussions. Peer feedback (PF) was facilitated via an automated rating
system, within Blackboard discussion forums, for half of the students enrolled
in the course. Following the peer feedback process, students in the PF group
perceived higher levels of confidence and comfort for posting and responding
in online discussions than students who did not receive peer feedback. A
significant difference was noted on perceptions of confidence for contributing
relevant ideas to the discussions, with the PF students expressing higher
levels of confidence, yet not all students perceived benefits to their learning.
Implications for the implementation of peer feedback in online and blended
courses are provided.

*The contents of this article were developed under grant #P116B060421 from the Fund
for Improvement of Post-Secondary Education (FIPSE), a program of the U.S. Department of
Education. The contents of this article were developed with the support of the grant, but the
contents do not necessarily represent the views or policies of the Department of Education,
and you should not assume endorsement by the Federal Government.


© 2010, Baywood Publishing Co., Inc.
doi: 10.2190/EC.43.1.e
http://baywood.com

According to a recent meta-analysis (US DOE, 2009), students who participate
in online instruction perform better, on average, than those who take the same
course face-to-face. Furthermore, those participating in blended courses (i.e.,
those that combine elements of both online and face-to-face instruction) appear to do best,
regardless of level of course (undergraduate or graduate) or discipline. This bodes
particularly well for the large number of institutions that report that blended
instruction is among the fastest growing forms of distance education (Allen,
Seaman, & Garrett, 2007; US DOE, 2009).
While the reasons for students' improved performances in blended environ-
ments have not been fully explicated, results from the latest survey conducted by
the Educause Center for Applied Research (ECAR; Smith, Salaway, & Borreson
Caruso, 2009) suggest that students learn best when professors balance their
uses of instructional technology with human interaction. That is, nearly 60% of
the 30,616 college students who responded to the 2009 ECAR survey noted a
preference for only a moderate amount of IT use in their classrooms and
suggested that instructors' uses of IT should be balanced with "the human touch"
(p. 12). This is supported further by recommendations from researchers (e.g.,
Palloff & Pratt, 1999; Stepich & Ertmer, 2003; Swan, 2002) who have investi-
gated the influence of human interaction on students' success in online courses.
For example, Arbaugh (2000, 2001) investigated the influence of a number of
variables on student learning in web-based MBA courses and found that the
factors most closely associated with student learning were those that related to
interaction within the class.
Although many instructional strategies can be employed to foster interaction
and student learning online (cf. Bonk & Zhang, 2008), one of the most widely
used instructional approaches is the asynchronous online discussion. In online
courses, asynchronous discussions replace in-class discussions, while in blended
or hybrid courses they are used to extend face-to-face discussions, providing
additional methods for students to interact with the content and each other. Online
discussions have the potential to assist students in the construction of knowledge
and serve as a scaffold that allows for multiple perspectives, negotiation of
meaning, and a reflection on knowledge gaps a learner may possess (Haavind,
2006). Students perceive online discussions to be more egalitarian than traditional
classroom discussions (Harasim, 1990), and online discussions create a sense of
social presence that helps to build community online (Gunawardena &
Zittle, 1997; Rourke, Anderson, Garrison, & Archer, 2001). According to Palloff
and Pratt (1999), "The learning community is the vehicle through which learning
occurs online. It is the relationships and interactions among people through which
knowledge is generated" (p. 15).
One strategy that has been used to increase instructor and learner interaction
in online discussions is that of feedback. Instructor feedback is often cited as the
catalyst for student learning in online environments (Ertmer, Richardson, Belland,
Camin, Connolly, Coulthard, et al., 2007; Palloff & Pratt, 2001), whereas the lack
of feedback is most often cited as the reason for withdrawing (Ko & Rossen,
2001). Research has also shown that the quality of student discussion responses
can be increased through the use of constructive feedback that is prompt, con-
sistent, and ongoing (Ertmer & Stepich, 2004).
However, to attain this level of feedback in online courses, instructors must
invest a significant amount of time and effort. This, then, can lead to frustration
for both students and instructors as the need for more detailed and more frequent
feedback adds to instructors' workload, which can lead to longer wait times
for students (Dunlap, 2005). Research by Gibbs and Simpson (2005) indicated
that wait time can seriously limit the effectiveness of feedback because formative
assessment is not useful to students once they have moved on to new topics.
However, based on work by Cook (2001), lower quality or less detailed feedback
can still be effective if provided to students in an efficient and frequent manner.
Results from Cook's research demonstrated that constant, automated feedback
offered by electronic quizzes in a content management system could still play a
significant role in improving student learning.
With these issues in mind, researchers have recommended the use of peer
feedback as an effective strategy for providing timely feedback. Liu and Carless
(2006) defined peer feedback as "a communication process through which
learners enter into dialogues related to performance and standards" (p. 280)
and affirmed that the inclusion of peer feedback in a course can be a practical
solution to providing students with an increased amount of feedback in a quicker
fashion. Palloff and Pratt (2007) suggested that online courses should auto-
matically include the expectation that students will provide meaningful feedback
to each other as a means of creating a connection between participants and of
providing new perspectives.
Another benefit of peer feedback is peer learning. Boud, Cohen, and Sampson
(1999) defined peer learning as "the use of teaching and learning strategies in
which students learn with and from each other without the immediate intervention
of a teacher" (pp. 413-414). Peer feedback allows learners the unique opportunity
to discuss the attributes of good or poor performance and to evaluate their own
performances against concrete examples from their peer group (Topping, 1998).
Assuming that the feedback activity is well designed, both the givers and receivers
of feedback can benefit from the process (Topping, 2005). For example, in a study
by Ertmer et al. (2007), participants involved in a peer feedback process described
how the process reinforced their learning and enabled them to achieve higher
levels of understanding. Liu, Lin, Chiu, and Yuan (2001) implemented a peer
feedback system in their web-based computer science course and found that
students increased their critical thinking. In addition, the authors noted that
the most successful students were "strategic adapters" (p. 250), who could use a
critical eye and adapt assignments to address peer comments.
However, there are challenges to implementing peer feedback. Topping
(1998) suggested that when it comes to receiving feedback, some students might
be less likely to accept peer feedback as valid, especially if it is negative. On the
other hand, in giving feedback, there is a concern that students will be overly
lenient, leading to misguided compliments and a general reduction in quality of
work. Besides issues with validity and reliability, some instructors believe that
peer feedback is less practical than it appears on the surface, referencing the
amount of time spent collating and distributing peer feedback as a detriment to
its implementation (Liu & Carless, 2006). Furthermore, while previous results
obtained by Ertmer and Stepich (2004) demonstrated significant increases in
the quality of postings in a small graduate course when feedback was provided
by a teaching assistant, it is unclear whether this same effect can be achieved in
large undergraduate courses, using peer reviewers.

PURPOSE OF THE STUDY


This study was designed to examine issues of peer feedback and learning
in a large, undergraduate course using online discussions to supplement lecture.
Specifically, we examined students' perceptions of learning from online dis-
cussions and the perceived value of peer feedback in online discussions. Peer
feedback was facilitated via an automated rating system (indicating how helpful
a posting was to the rater), augmented by descriptive comments, and embedded
within the online discussion tool in a Blackboard course environment (Figure 1).
Guidelines and incentives (course points) were provided for both posting com-
ments and providing feedback.

Figure 1. Screenshot of the peer feedback tool in Blackboard.

METHODS
Research Design
A mixed methods research design was used to examine students' perceptions
of the value of peer feedback to their learning (Creswell & Plano Clark, 2007).
Participants' perceptions were obtained through a survey questionnaire adminis-
tered after the completion of the online discussions in the course. Results from
closed-ended items were augmented by responses to open-ended items; together
they provided a better understanding of the perceived role of peer feedback in
students' learning from the online discussions. The goal of combining qualitative
and quantitative data was to provide strength to both forms of data, minimizing
the weaknesses of either type, and leading to a better understanding of the
research question by comparing the results from the different sources (Johnson
& Onwuegbuzie, 2004).
The major lens used for this study was pragmatism based on our intent to
examine real-world, problem-centered, and practice-oriented phenomena; that
is, we wanted to understand the phenomenon of peer feedback in online dis-
cussions and its relationship to perceived learning. The pragmatic worldview
is associated with a mixed method research approach and focuses mainly on "the
consequences of the research, on the primary importance of questions asked
rather than methods, and multiple methods of data collection to inform the
problem under study" (Creswell & Plano Clark, 2007, p. 23). As Creswell (2009)
explained, pragmatism as a worldview "arises out of actions, situations, and
consequences rather than antecedent conditions (as in post-positivism)" (p. 10).
He further explained that as a philosophical underpinning for mixed methods
studies, pragmatism focuses attention on the research problem and uses
pluralistic approaches to derive knowledge about the problem. These quantitative
and qualitative methods were employed concurrently in one phase, that is, a
blended course in Fall 2008, with both quantitative and qualitative data collected
during the same time period, using the same survey instrument.

Participants
Of the 286 students who completed a 2-credit introductory educational tech-
nology course in Fall 2008, 215 submitted responses to an online perception
survey (30 items). Demographic responses from these 215 students indicated
that 67% of the participants were female, while 33% were male. The majority of
the participants were either freshmen or sophomore students (77%). A majority
(68%) of the students reported little to no experience with online discussions
in previous courses (0-1 courses that used online discussions); 32% had used
online discussions in two or more courses. In addition, 37% of the participants
had little to no previous experience with Blackboard; 63% had used Blackboard
in two or more courses. The majority of the students (89%) reported being fairly
to very comfortable with computers.
The introductory educational technology course was a required course for all
students majoring in elementary, secondary, and special education. Each week
students attended a 1-hour lecture and a 2-hour computer lab. As part of their
course enrollment, students were randomly assigned to one of 17 lab sections.
On average, labs consisted of 16 students (range = 10-25), facilitated by one
graduate teaching assistant (TA). At the beginning of the semester, lab sections
were alternately assigned to the peer feedback (PF) or no peer feedback (NO-PF)
condition. Because four TAs facilitated multiple sections, efforts were made to
assign all of their sections to the same condition (thus, some trading of assigned
conditions occurred). In the end, eight labs were assigned to the PF condition
(n = 134), while nine labs were assigned to the NO-PF condition (n = 152). Of
the 215 students who responded to the survey measure, 109 were in the PF
condition and 106 were in the NO-PF condition.

Procedures
Beginning with the second week of classes, and continuing for the next 3 weeks,
students participated in three online discussions (1 per week). The instructor of
the course introduced the online discussions to students during the first and
third class lectures, explaining the purpose of the discussions, in general, and
providing an overview of the upcoming week's discussion, more specifically.
After the first discussion, the instructor took time during his lecture to discuss
his perceptions of the first online discussion and asked the TAs to share
examples of quality postings in their lab meetings that week. These measures
were taken to assure that students understood the value of the discussions to their
learning and to avoid disconnects between the instructor's and the students'
expectations for participation.
Students were placed into discussion groups within the Blackboard course
management system based on their assigned lab sections. During the first lab
meeting, the TAs introduced the students to the discussion functions in
Blackboard (and the peer feedback tool, if appropriate). To ensure that the students
had a strong understanding of the expectations for posting and reviewing
peers' comments, TAs explained the grading system and provided examples of
high-quality postings. In addition, TAs sent weekly e-mails to their students,
reminding them of posting requirements and deadlines.
The first discussion asked students to propose specific learning materials based
on the principles of an assigned learning theory (e.g., behaviorism, cognitive
information processing, constructivism). In order to increase participation in the
discussions, all students were required to make a minimum of one response
halfway through the week and to respond to a minimum of one posting made by a
peer later in the week. The Appendix includes the complete set of guidelines for
Discussion 1. In addition, students in the PF groups were required to rate and
give feedback on a minimum of three responses made by their peers, using
the peer review tool. That is, in addition to the more general guidelines (see
Appendix), students in the peer feedback groups were asked to use the peer
review tool in Blackboard to rate how helpful a response was to them personally
(see instructions following the *** in the Appendix) and to provide comments
to explain their ratings.
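
The article does not describe the internal data format of Blackboard's peer review tool. Purely as an illustrative sketch, the following Python fragment shows one way the rating-plus-comment records described above could be represented and checked against the course requirements; all names here (PeerRating, met_requirement, and so on) are hypothetical and are not part of Blackboard's API.

    from dataclasses import dataclass

    RATING_MIN, RATING_MAX = 1, 4   # 4-point helpfulness scale (see Appendix)
    REQUIRED_PER_DISCUSSION = 3     # minimum ratings set by the guidelines

    @dataclass
    class PeerRating:
        rater_id: str   # student giving the feedback
        post_id: str    # discussion post being rated
        rating: int     # 1 = not helpful ... 4 = very helpful
        comment: str    # written explanation of the rating

        def __post_init__(self) -> None:
            if not RATING_MIN <= self.rating <= RATING_MAX:
                raise ValueError(f"rating must be {RATING_MIN}-{RATING_MAX}")
            if not self.comment.strip():
                raise ValueError("a comment explaining the rating is required")

    def met_requirement(ratings: list[PeerRating], rater_id: str) -> bool:
        """Check whether a student gave the minimum number of ratings."""
        given = sum(1 for r in ratings if r.rater_id == rater_id)
        return given >= REQUIRED_PER_DISCUSSION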
The second discussion was organized as a debate in which students were
asked to argue for or against the need to teach millennial learners differently
than previous generations. Students began by reading two assigned articles
about the millennial generation and then formed their arguments based on the
position they were assigned. Posting guidelines were similar to those used in
Discussion 1.
The third and final discussion was organized as a case study, centered on
the issue of plagiarism. Students read two articles from the New York Times
about a young author whose first novel, after receiving rave reviews, was found
to closely parallel that of another author and who seemingly plagiarized
language from two of that author's previous books. Two question prompts were
used to get the discussion started:
1. In what ways could TurnItIn.com (or something similar) have been used
to help this author avoid the criticism she faced?
2. Should such software be used in high schools and colleges as a way to
deter plagiarism?
Other discussion guidelines remained the same as the first two discussions.
During each discussion, the TAs played a facilitative role, encouraging further
topic discussion by asking probing questions, seeking clarification, and requesting
specific examples to illustrate students' ideas. On average, TAs posted five
comments/questions per discussion.
The week following the third discussion, students completed a 30-item, author-
created perception survey that also included demographic items and background
experiences. The perception survey was designed to obtain more specific infor-
mation about students' thoughts regarding the online discussions, particularly in
terms of perceived value to their learning. Additional questions were asked of the
peer feedback group to gather their thoughts about the advantages and limitations
to using the peer review tool as part of their online discussions.

Instrument: Perception Survey


Following the three online discussions, students completed an online survey
that asked about their perceptions of the online discussions, the use of peer
feedback (if appropriate), and the perceived impact of these strategies. Question
formats included Likert-style (e.g., "Rate your comfort level [on a scale from
1 = very uncomfortable to 5 = very comfortable] contributing responses to the
online discussions"), multiple choice (e.g., "What kind of feedback did you get
from your peers on your postings? Choose all that apply: thoughtful, superficial,
helpful, infrequent, not applicable"), and open-ended formats (e.g., "Describe
any differences you've noticed in your learning because of these online
discussions," "What did you see as the biggest limitation or challenge to using
online discussions?"). To examine students' perceptions specifically related to
the feedback received, students were asked to indicate the type of feedback
they received and the perceived impact of the feedback on their learning (e.g.,
"What suggestions do you have for making the feedback on students' postings
more effective?").

Data Analysis
Descriptive statistics (frequency counts, means, standard deviations, etc.) were
calculated for the closed-ended items on the perception survey; where appropriate,
t-tests were used to identify differences between the responses from PF and
NO-PF students. Open-ended survey items were analyzed using a simple pattern-
seeking method to determine those aspects of the online discussions and peer
review that students found most valuable and most challenging; the responses
were categorized into overarching topics. In addition, discussion forums were
analyzed to determine the number of student responses in each discussion forum
(not counting peer reviews). We also examined the students' peer ratings and
comments to determine the quality of feedback provided by peers.
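
As an illustration of the group comparisons described above, the independent-samples t-test for the "contribute relevant ideas" item can be reproduced from the summary statistics later reported in Table 1. The sketch below uses SciPy's ttest_ind_from_stats with the rounded means and standard deviations from the table, so it recovers the reported t = 2.15, p = .03 only to within rounding (it yields t = 2.13, p = .035).

    from scipy.stats import ttest_ind_from_stats

    # Summary statistics from Table 1, "Confident will contribute relevant ideas."
    t_stat, p_value = ttest_ind_from_stats(
        mean1=3.98, std1=0.96, nobs1=109,   # PF group
        mean2=3.70, std2=0.97, nobs2=106,   # NO-PF group
        equal_var=True,                     # pooled-variance (Student's) t-test
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # t = 2.13, p = 0.035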

RESULTS
Perceptions of Online Discussions
In general, the students in the PF group rated themselves (1 = very uncom-
fortable and 5 = very comfortable) more comfortable using the online discussion
tool (M = 3.89) than the NO-PF group (M = 3.70). In addition, students in the
PF group recorded an average comfort level of 3.76 when posting responses and
3.71 when responding to others' posts, compared to average comfort ratings
of 3.68 and 3.50, respectively, recorded by the NO-PF group (see Table 1). A
significant difference was noted between students' ratings of confidence for
being able to contribute relevant ideas to the discussion, with students in the PF
group showing significantly higher confidence than students in the NO-PF group
(see Table 1).
When students were queried about perceived differences in their learning
based on participating in the discussions, 38% (n = 41) of the PF group and
31% (n = 33) of the NO-PF group indicated they had noticed differences, while
37% of both groups had not; the remainder were unsure. When asked to identify
the advantages to participating in online discussions, 66% (n = 72) of the PF
group and 59% (n = 63) of the NO-PF group selected the response, "[Online
discussions] made it easier to express opinions and to participate in class dis-
cussions." Approximately half of the students in both groups (51% for both
groups) indicated that the discussions "helped me understand the content
better," while 46% of the PF group and 44% of the NO-PF group agreed that
online discussions "motivated me to study the course materials or other related
topics/content."

Table 1. Comparisons between Groups on Ratings of Comfort and Confidence

Item                                        PF (n = 109)          NO-PF (n = 106)       t       p
Comfort using tool                          M = 3.89, SD = 1.03   M = 3.70, SD = 1.09   1.33    .19
Comfort contributing                        M = 3.76, SD = 1.05   M = 3.68, SD = 1.10   .56     .58
Comfort responding to others                M = 3.71, SD = 1.05   M = 3.50, SD = 1.18   1.36    .18
Confident will contribute relevant ideas    M = 3.98, SD = .96    M = 3.70, SD = .97    2.15    .03
Confident will benefit from discussion      M = 3.51, SD = 1.04   M = 3.31, SD = 1.06   1.41    .16

Note: t is the independent-samples t-value.
Primary limitations noted by the students in both groups included "It was
hard to remember to do it" (42%, PF; 52%, NO-PF), suggesting that students
may not have remembered to contribute to each discussion (or contributed on a
limited basis) as required. The grades students received from their participation
in the online discussions give some indication that this is true. On average,
students in the two groups received 24 (NO-PF) or 25 (PF) points out of a possible
30 for the three discussions and posted between two and three messages per
discussion (see Table 2). (Note: ns represent total number of students enrolled in
each group, not just those who had completed the survey.) Although the students
in the NO-PF groups posted slightly more comments, on average, than the PF
groups, these differences were not significant (Discussion 1: p = .70; Discussion 2:
p = .38; Discussion 3: p = .29).
Particularly in this blended course, in which required face-to-face class
attendance varied, students may have had trouble establishing a routine for
online attendance. On a positive note, however, students' participation increased
from the first to the third discussion by an average of over one comment per
student. Whether this was due to students learning how to manage the online
workload better or simply because the third discussion was more interesting is
unknown. However, anecdotal information received from the instructor and three
of the course TAs suggests that students may have become more comfortable
disagreeing with each other's ideas and felt safer sharing their ideas with their
peers as they participated in more online discussions.

Table 2. Number of Student Postings in Each Discussion and Average/Student

Group                               Discussion 1          Discussion 2          Discussion 3
                                    Total   Avg/Student   Total   Avg/Student   Total   Avg/Student
Peer Feedback Group (n = 134)       296     2.21          266     1.99          452     3.37
No Peer Feedback Group (n = 152)    351     2.31          346     2.28          553     3.64
Finally, we asked students whether, if they were the instructor of the course, they
would continue using online discussions. Forty-four percent (n = 48) of the PF group
and 37% (n = 39) of the NO-PF group reported that they would continue using
the online discussions in the same manner. About 16% of both groups reported
that they would discontinue use of online discussions; reasons provided were
comparable to those typically found in the literature (e.g., lack of interaction,
low response quality, not useful to learning, preference for face-to-face discus-
sions). The remaining students reported that they would continue using online
discussions but with some kind of a change (e.g., increase the number, decrease
the number, or make a change in the format). Thus, a clear majority of students
recommended that the online discussions continue in some form. Apparently,
the students perceived sufficient value in the experience to want to see this
approach continue. As one student commented, "Overall I thought the online
discussions were a great way to get us using the Internet as a resource to network
with our classmates."

Perceptions of Peer Learning

Students were asked to rate how their attitudes toward peer learning had
changed following their participation in the three online discussions. Interestingly,
a greater percentage of the NO-PF group (50%) rated their attitudes as more
positive, compared to the PF group (40%), while approximately a third of the
students in each group (38%, PF; 31%, NO-PF) rated their attitudes as neutral
(that is, participation in the online discussions had not changed their attitudes).
In part, open-ended responses indicated that the differences may have been due to
the fact that students in the PF group were aware that their peers in the NO-PF labs
were not having to complete the additional task of providing peer feedback. While
this awareness is a critical factor in a study such as this one, it would not be an
issue when an entire class is involved in the peer feedback process. Finally, open-ended responses also
suggested that students in the PF group had greater chances of encountering
technical difficulties, due to the extra time spent online. It's important that the
systems we use for facilitating these processes are as robust as possible.
Comments from the students in both groups were primarily positive (55 positive
comments from PF students as opposed to 54 from NO-PF students). One student
in the PF group wrote, "I think the online discussions enhanced my view toward
peer learning because I was able to gain different perspectives that I would not
have otherwise thought about." Another PF student explained, "I am now more
relaxed with having my peers view my work than I was at the beginning of the
year. I also feel more relaxed with knowing that I can get an honest response
whether the person agrees with me or not." One student commented that she/he
would like to use "more peer learning when I teach."
Similar comments were made by students in the NO-PF groups: "With the first
assignment, I really disliked the online discussion. I thought it was tedious and
useless. However, after getting feedback from others [postings in response to
their posts], I realized it was a great tool to better gain an understanding on
a specific topic." Negative comments (PF = 20; NO-PF = 18) related more to
thinking that the online discussions (as opposed to peer learning) were not
beneficial and that many students posted just to get credit.
When students were asked to rate the level of collaboration with their peers
as a result of the online discussions, 19% (n = 21) of the PF group and 14% (n = 15)
of the NO-PF group indicated very high or high levels of collaboration; 54%
(n = 59) of the PF group and 53% (n = 56) of the NO-PF group indicated a
medium level of collaboration. These findings support previous literature, espe-
cially the expectation that students' interactions in online learning environments
can create meaningful connections among participants (Palloff & Pratt, 2007).

Perceptions of Peer Feedback

In a previous study, Ertmer et al. (2007) required students to give peer feedback
without the use of a peer feedback tool and reported that the process was
time-consuming for the instructor as well as logistically difficult for the students
who were also learning how to give meaningful feedback. In this study, the
implementation of an embedded tool for providing peer feedback was expected
to remove many of these frustrations (see Figure 2). In fact, no comments were
made about not being able to use the embedded tool or not understanding how
to use it; rather, comments focused on the content of the peer feedback.
For example, when asked how to improve the peer rating system, 11 PF students
commented that the system would have been better if more explanation were
included with the ratings (e.g., "I think people need to go more in depth of why
they rated something a certain way"); four students asked if there were a way
for every student to get feedback ("I think that there should be a way to make sure
that everyone gets a response from classmates"); and five students recommended
that the rating scale include more levels (e.g., "I felt that there needed to be
more stars to give a bigger range of ratings") (see Figure 3).
More students in the NO-PF group than the PF group listed as a limitation
that they were unsure what to post (32% vs. 19%) and that they didn't know
how to respond to others' postings (33% vs. 21%). It is possible that giving
peer feedback provided more structure to students in the PF group, helping
them feel more comfortable posting responses. Comfort ratings, reported earlier,
support this hypothesis. For example, one student wrote, "When I had to rate
my peers' responses I really was not sure what to give them. Part of this was
that I did not want to offend them." However, despite the perception that
scoring was difficult, 46% of the PF students thought that they received helpful
feedback on their postings. This coincides with the idea that both giving and
receiving feedback can benefit learners if the exercise is well designed and
organized (Topping, 2005).

Figure 2. Sample post #1 with peer feedback and ratings.
After examining the peer feedback posts, we discovered that the students
did not give as many ratings as required, which could have led to a decreased
effect in the outcomes previously discussed. In general, students only completed
38% of the required ratings. Moreover, the average ratings were relatively
high, 3.2 on a 4-point scale, which could indicate students' ratings were not
providing true feedback and may have been slightly inflated. This assump-
tion seems plausible as several students in the PF group discussed the difficulty
in rating others' responses. For example, one explained, "Most of the time
the student will not be completely honest with you for fear of making you
mad/sad. So even though they might rate you three stars they are doing it out
of kindness."
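
The sketch below is purely hypothetical: it shows how the two statistics reported above (the share of required ratings completed and the mean rating given) could be computed from records exported from the discussion forums. The records and field names are illustrative, not the study's actual data or code.

    # PF group requirement: 134 students x 3 discussions x 3 ratings each.
    REQUIRED_TOTAL = 134 * 3 * 3

    # Made-up example records; each entry is one peer rating that was given.
    records = [
        {"rater": "s01", "rating": 4},
        {"rater": "s01", "rating": 3},
        {"rater": "s02", "rating": 3},
    ]

    completion = len(records) / REQUIRED_TOTAL
    mean_rating = sum(r["rating"] for r in records) / len(records)
    print(f"completed {completion:.1%} of required ratings; mean rating = {mean_rating:.1f}")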
Figure 3. Sample post #2 with peer feedback and ratings.

DISCUSSION

Although blended instruction is among the fastest growing types of enroll-
ment at universities today (US DOE, 2009), students' success in these courses
will depend, to a great extent, on the ability of instructors to incorporate
meaningful levels and amounts of interaction within them (Palloff & Pratt,
2007; Swan, 2002). This study was designed to examine the perceived benefits
of peer feedback when used to increase interaction among students in a large
undergraduate blended course that used asynchronous online discussions to
supplement face-to-face lectures. Specifically, we examined students' percep-
tions of learning from online discussions and the perceived value of peer feedback
as part of that learning.
Perceived Value of Online Discussions

In general, the students in this study noted a number of advantages to par-
ticipating in online discussions, with nearly two-thirds of the students agreeing
that the discussions made it easier to express their opinions and participate in
class, and approximately half indicating that the online discussions helped them
learn the content better and motivated them to study additional course materials.
This is similar to what other researchers have found (Harasim, 1990), as well as
what the authors reported in two previous studies comparing the responses
of students enrolled in two large-enrollment courses (Ertmer, Temur-Gedik,
Richardson, & Newby, 2008; Lehman, Richardson, Ertmer, Newby, & Campbell,
2009) that used a blended approach. As noted earlier, students tend to perceive
online discussions as being more egalitarian than face-to-face classroom dis-
cussions, primarily because every student has an equal opportunity to participate
(Palloff & Pratt, 2007).
However, several of our findings also warrant attention in terms of how the
process can be improved and how students' attitudes or perceptions might be
improved based on ongoing revisions. For example, approximately 16% of both
groups reported that they would discontinue use of the online discussions.
While the responses provided were not unexpected (e.g., perceived lack of inter-
action), the percentage of students is high enough to warrant continued attention.
How can instructors improve students' perceptions of the importance of online
discussions? This is especially important in large lecture classes where students
would not typically be involved in face-to-face discussions or individually
called upon to think about the course topics at length without the use of some
other method such as testing.
The findings of this study showed that even students who are relatively
inexperienced with online discussions can, over the course of a single semester,
become relatively comfortable with this approach and confident in their ability
to participate in online discussions as part of blended courses. Students in this
study were observed to increase their participation in the discussions, over time,
suggesting that they did, indeed, become more comfortable in the online environ-
ment, as well as more successful at regulating their time commitments. However,
whereas previous research has suggested that students are satisfied with asyn-
chronous online discussions and benefit from them (Johnson, 2006), only a
minority of students in this study (approximately one-third) perceived a direct
effect on their learning. Although the use of asynchronous discussions can lead
to performance benefits relative to traditional classrooms for distance education
contexts (Lou, Bernard, & Abrami, 2006; US DOE, 2009), using them effectively
in blended courses is still a challenge to instructors who must find ways to
increase and maximize the perceived relevance and/or value of the discussions.
According to Xie, Debacker, and Ferguson (2006), when students perceive online
discussions as relevant, interesting, and enjoyable, the perceived value of those
discussions increases.
Perceived Value of Peer Feedback and Peer Learning

In this study, the PF group demonstrated a higher level of comfort using the
online discussion tool, posting responses to the discussions, and responding to
others' posts. More importantly, students in the PF group were significantly more
confident that they could contribute relevant ideas to the discussions. These
findings suggest that as students became more involved in the PF process, their
confidence and comfort for participating in online discussions increased. It is
possible that giving peer feedback provided more structure to students in the
PF group; the peer feedback process prompted them to consider the relevance
of their posts in order to gain higher ratings from their peers.
In this study, students in both groups (PF and NO-PF) were able to participate
readily in the online discussions and all received feedback on their early efforts.
That is, feedback was provided:

1. in the form of peers' discussion comments;
2. as an assignment grade from the TA; and, more generally,
3. from the instructor during an open discussion regarding students' efforts
in the first online discussion.

In addition, it was intended that students in the peer feedback group receive three
peer ratings, with comments, on their initial posts for each discussion. Thus, if
students took the time to post their comments, they received feedback in multiple
forms, with one of these being in the form of a peer rating (for the PF group).
Previous researchers (e.g., Land & Dornisch, 2001; Shea & Bidjerano, 2009)
have noted that students' online participation is often limited by low confidence
and lack of prior knowledge. However, providing students with positive and
constructive feedback after their first attempts may have helped mitigate this
potential problem.
The results of this study suggest that adding a peer review process on top of
the other feedback processes typically included in effective course designs may
not lead to additional increases in perceived learning. In general, students in
this study discussed peer learning in terms of responses made to their postings
rather than the ratings they received. In fact, some students described struggling
with peer feedback, perhaps even becoming negative, due to their perceptions
of the poor cost-benefit ratio: for the amount of time they had to give to complete
the ratings (in addition to responding to peers' postings), the payback was
deemed insufficient.
Along these lines, it is also important to consider the level of the learners in
this study, mostly underclassmen at the freshman and sophomore level, a group
unlikely to have had prior experience with peer feedback, and, as evidenced by
their background experiences, online discussions in general. As Palloff and Pratt
(2007) explained, providing meaningful feedback is not a naturally acquired skill.
For this group of learners, in particular, it is necessary to teach these skills, model
relevant behaviors, and provide encouragement in the peer feedback process.

Limitations and Directions for Future Research
Generalization of the results of this study is limited by the use of a single
undergraduate course which, while providing useful information, consisted of
mostly freshman and sophomore students. Additionally, the blended portion of
the course occurred early in the semester and lasted for only 3 weeks, rather than
being spread out over the entire semester. Furthermore, each discussion used a
different format, which may have impacted students' ability to provide the kind
of ratings they were expected to provide in the peer feedback process. Future
research should examine the peer feedback process in light of these concerns and
include a more diverse set of courses and students as well as other models of
blended learning. Finally, the students in the PF groups struggled to understand
the differences between the requirements for the peer feedback process and the
normal requirements for posting and responding to their peers within the
online discussions. Unfortunately, this seemed to result in many students failing
to provide the required number of peer ratings which, in turn, made it difficult to
discern specific benefits from the peer feedback process. Additional research is
needed that eliminates this confusion for students and that enables researchers
to identify the added value, if any, of the peer feedback process.

Implications and Conclusion


Online and blended forms of learning are becoming increasingly important
in higher education, and, as a result, there is increasing interest in the use of
asynchronous online discussions (Allen & Seaman, 2008; Allen et al., 2007;
US DOE, 2009). However, the required time commitment is a deterrent to many
instructors, particularly those who teach large undergraduate courses (Dunlap,
2005). This study examined whether peer feedback could provide an adequate
substitute for the feedback that an instructor might typically provide. While the
results suggest that there is potential value in incorporating online discussions
within large, undergraduate blended and online course environments, there are
challenges in effectively implementing peer feedback as a part of them. Primarily,
the target audience needs to attain a certain comfort level with online discussions
and to receive instruction in, and observe modeling of, meaningful feedback.
Ertmer et al. (2007) demonstrated that one of the main benefits of peer
feedback related to giving, as opposed to receiving, feedback. Yet when this
approach was implemented using an automated rating scale (with associated
comments) within the Blackboard course management system, benefits were
hard to discern among undergraduate students. There are a number of reasons
why this may be true, including the difficulty associated with assuring that
undergraduates provide meaningful and honest feedback to each other. Strategies
are needed that elevate the peer feedback task above the perceived "assignment"
level to a more relevant "learning" level.
The challenge for instructors of blended courses who wish to use online
discussions is to find ways to maximize students' perceived relevance and/or value
of the discussions. Given the results of this study, it may be relatively more
important for instructors to focus their efforts both on providing a strong rationale
for engaging in online discussions and on implementing typical kinds of feedback
opportunities within the discussions, than on incorporating peer feedback as an
additional strategy within them.

APPENDIX

Discussion #1: Learning Theories


Background: Before jumping into this discussion, you will need to watch and
listen to the information provided at: Learning Theories Online Discussion 1.
Click on this link, turn on your earphones, and take notes on what is being
discussed. That presentation will explain what it is you are to do for this week's
discussion.
Dividing you into small groups for the discussion: For this discussion you
will be divided into three groups (one group for each of the major theoretical
perspectives). Use this chart to determine which theoretical perspective you are
to discuss.

If your last name begins with A through H: Behaviorism
If your last name begins with I through Q: Cognitive Information Processing
If your last name begins with R through Z: Constructivism
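
(Purely as an illustration, the chart above amounts to a simple mapping from last-name initial to theory group; the short Python sketch below expresses the same rule. The code is hypothetical and not part of the original course materials.)

    def assign_theory_group(last_name: str) -> str:
        """Map a last-name initial to a theory group, per the chart above."""
        initial = last_name.strip().upper()[0]
        if "A" <= initial <= "H":
            return "Behaviorism"
        if "I" <= initial <= "Q":
            return "Cognitive Information Processing"
        return "Constructivism"  # last names beginning R through Z

    print(assign_theory_group("Ertmer"))  # -> Behaviorism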

What you need to do:

1. Read the following case:

Let's imagine that you work for an educational firm that develops learning
curricula for elementary school children. Your company adheres to a very
behaviorally/information processing/constructivistically [use the one you
have been assigned] oriented viewpoint of learning. A large school district
in Texas has come to your company and asked you to develop a proposal
for the development of a science unit of instruction for fifth grade students.
Your unit will specifically be focused on insects. This is a very important
potential client for your company and your proposal will be in competition
with two other companies.
2. Discuss the following:


Part I: Identify two (or more) key elements based on your theoretical perspec-
tive that could be included within the learning materials in order for them
to be effective. Explain how and why your elements are associated with
your specific theoretical viewpoint. For example, if you are presenting key
behavioral elements within your instruction, you might want to explain
why reinforcement/rewards would play a critical role.
Part II: Respond to what the others have posted. Extend what someone from
your theoretical viewpoint has said. Clarify points, emphasize key elements,
give additional examples, and so on. In addition, respond to postings from
those of other/opposing theoretical viewpoints. Point out weaknesses in
their responses about selected key elements, ask for clarification, and give
ideas about what additional information is needed.
3. Each of you should make a minimum of one response addressing Part I
(no later than Thursday evening) AND then also make a minimum of one
comment to a response made by one of your classmates (no later than
Sunday evening). You will be given online discussion participation points based
on the quality of your responses.

How your responses will be graded:


To earn all the discussion participation points, you must: (a) make a minimum of
one initial response to the online discussion question (Part I), no later than
Thursday of the discussion week, and (b) make an online comment to a response
made by someone else in the class (Part II), no later than Sunday of the discus-
sion week, and ***(c) rate and give feedback on a minimum of three responses
made by others in the class; that is, use the peer review feature.***
***As an added bonus, this discussion has a Peer Review feature. This allows
you to rate (from 1 to 4) how helpful a specific response was for you, personally.
This feature has been added to allow you to give direct feedback to others about the
responses they have given. In other words, you will pick three or more responses
made by others in the class, you will rate how helpful the comment was (4 = very
helpful; 1 = not helpful), and give them written feedback about their comment.
These ratings and feedback can be used to help each of us learn how to improve
our responses in future online discussions.***

Ideas and thoughts that can help:


Don't just add comments such as "Yeah, I agree with you." Those don't count
and they waste reading time. You need to explain why you agree or disagree.
Support your arguments in some way through additional examples, citation of
the literature, etc. Take some time to reflectively think about your response
before you send it.
Don't go on and on and on. Keep your comments concise; you want others to
be able to read and understand what you have written, but no one wants to
spend hours and hours reading.
You can add links to pictures, videos (e.g., YouTube), other web sites, etc. to
support your argument.

I hope you find this way of interacting interesting. We will have lots to discuss
during this and future online discussions.

REFERENCES

Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United
States, 2008. Needham, MA: Sloan Consortium. Retrieved November 30, 2008, from
http://www.sloan-c.org/publications/survey/downloadreports
Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of
blended education in the United States. Needham, MA: Sloan Consortium. Retrieved
November 30, 2008, from http://www.sloan-c.org/publications/survey/downloadreports
Arbaugh, J. B. (2000). How classroom environment and student engagement affect learning
in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9-26.
Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction
and learning in web-based courses. Business Communication Quarterly, 64(4), 42-54.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment
and Evaluation in Higher Education, 24, 413-426.
Bonk, C. J., & Zhang, K. (2008). Empowering online learning: 100+ activities for reading,
reflecting, displaying, and doing. San Francisco, CA: Jossey-Bass.
Cook, A. (2001). Assessing the use of flexible assessment. Assessment and Evaluation
in Higher Education, 26, 539-549.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed method
approaches (3rd ed.). Los Angeles, CA: Sage.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods
research. Thousand Oaks, CA: Sage.
Dunlap, J. C. (2005). Workload reduction in online courses: Getting some shuteye.
Performance Improvement, 44(5), 18-25.
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., et al.
(2007). Using peer feedback to enhance the quality of student online postings: An
exploratory study. Journal of Computer-Mediated Communication, 12(2). Available
online: http://jcmc.indiana.edu/vol12/issue2/ertmer.html
Ertmer, P. A., & Stepich, D. A. (2004, July). Examining the relationship between higher-
order learning and students' perceived sense of community in an online learning
environment. Proceedings of the 10th Australian World Wide Web Conference, Gold
Coast, Australia.
Ertmer, P., Temur-Gedik, N., Richardson, J., & Newby, T. (2008). Perceived value of
online discussions: Perceptions of engineering and education students. In Proceedings
of World Conference on Educational Multimedia, Hypermedia and Telecommuni-
cations 2008 (pp. 4679-4687). Chesapeake, VA: AACE.
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports student
learning. Learning and Teaching in Higher Education, 1(1), 3-31.
Gunawardena, C., & Zittle, F. (1997). Social presence as a predictor of satisfaction within
a computer mediated conferencing environment. American Journal of Distance
Education, 11(3), 8-26.
Haavind, S. (2006). Key factors of online course design and instructor facilitation that
enhance collaborative dialogue among learners. Paper presented at the annual meeting
of the American Educational Research Association, San Francisco, CA.
Harasim, L. M. (1990). Online education: Perspectives on a new environment. Westport,
CT: Greenwood Publishing.
Johnson, G. M. (2006). Synchronous and asynchronous text-based CMC in educational
contexts: A review of recent research. TechTrends, 50(4), 46-53.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research
paradigm whose time has come. Educational Researcher, 33(7), 14-26.
Ko, S., & Rossen, S. (2001). Teaching online: A practical guide. Boston, MA: Houghton-
Mifflin.
Land, S. M., & Dornisch, M. M. (2001). A case study of student use of asynchronous
bulletin board systems (BBS) to support reflection and evaluation. Journal of
Educational Technology Systems, 30, 365-377.
Lehman, J. D., Richardson, J. C., Ertmer, P. A., Newby, T. J., & Campbell, J. C.
(2009). Impact of asynchronous online discussions: A study of implementation in two
large-enrollment blended courses. In Proceedings of World Conference on Educa-
tional Multimedia, Hypermedia and Telecommunications 2009 (pp. 2928-2936).
Chesapeake, VA: AACE.
Liu, E. Z., Lin, S. J., Chiu, C., & Yuan, S. (2001). Web-based peer review: The learner
as both adapter and reviewer. IEEE Transactions on Education, 44, 246-251.
Liu, N., & Carless, D. (2006). Peer feedback: The learning element of peer assessment.
Teaching in Higher Education, 11, 279-290.
Lou, Y., Bernard, R. M., & Abrami, P. C. (2006). Media and pedagogy in undergraduate
distance education: A theory-based meta-analysis of empirical literature. Educational
Technology Research and Development, 54(2), 141-176.
Palloff, R. M., & Pratt, K. (1999). Building learning communities in cyberspace: Effective
strategies for the online classroom. San Francisco, CA: Jossey-Bass.
Palloff, R. M., & Pratt, K. (2001). Lessons from the cyberspace classroom: The realities
of online teaching. San Francisco, CA: Jossey-Bass.
Palloff, R. M., & Pratt, K. (2007). Building online learning communities: Effective strat-
egies for the virtual classroom. San Francisco, CA: Jossey-Bass.
Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Assessing social presence
in asynchronous text-based computer conferencing. Journal of Distance Education,
14(2), 50-71.
Shea, P., & Bidjerano, T. (2009). Community of inquiry as a theoretical framework
to foster epistemic engagement and cognitive presence in online education.
Computers & Education, 52, 543-553.
Smith, S. D., Salaway, G., & Borreson Caruso, J. (2009). ECAR key findings: The
ECAR study of undergraduate students and information technology, 2009. Boulder,
CO: Educause Center for Applied Research. Retrieved October 23, 2009, from
http://www.educause.edu/ers0906
Stepich, D. A., & Ertmer, P. A. (2003). Building community as a critical element of
online course design. Educational Technology, 43(5), 33-43.
Swan, K. (2002). Building communities in online courses: The importance of interaction.
Education, Communication and Information, 2(1), 23-49.
Topping, K. (1998). Peer assessment between students in colleges and universities.
Review of Educational Research, 68, 249-275.
Topping, K. (2005). Trends in peer learning. Educational Psychology, 25, 631-645.
U.S. Department of Education (US DOE). (2009). Evaluation of evidence-based practices
in online learning: A meta-analysis and review of online learning studies. Washington,
DC: U.S. Department of Education; Office of Planning, Evaluation, and Policy
Development. Retrieved October 24, 2009, from www.ed.gov/about/offices/list/
opepd/ppss/reports.html
Xie, K., Debacker, T. K., & Ferguson, C. (2006). Extending the traditional classroom
through online discussion: The role of student motivation. Journal of Educational
Computing Research, 34(1), 67-89.

Direct reprint requests to:


Dr. Peggy A. Ertmer
Purdue University
3144 Beering Hall of Liberal Arts and Education
100 N. University St.
West Lafayette, IN 47907-2098
e-mail: pertmer@purdue.edu
