How effective is peer feedback for learning?

In this series of blog posts we dive into the literature around peer review and peer feedback. Each post summarizes the main findings of a different academic paper; you can find all our research summaries here.

Improving the effectiveness of peer feedback for learning

Authors: Sarah Gielen, Elien Peeters, Filip Dochy, Patrick Onghena, Katrien Struyven

Link to paper: https://pdfs.semanticscholar.org/7fcb/ab7f684a4b7906a289c1cf95ef3224af9eb0.pdf

What were the researchers trying to understand?

The researchers had two main questions:

  • Is receiving peer feedback effective for learning? The researchers try to discern whether certain characteristics of the feedback are especially useful for learning. It is important to note that they focus on the learning that happens from receiving peer feedback, not the learning you get from giving feedback. The other interesting thing is their focus on “characteristics” of the feedback: essentially, this means looking at the written feedback and checking whether certain types of feedback are more effective at teaching.
  • Does a particular intervention help? They ask students to “reply” to the feedback and reflect upon it. They want to figure out whether this specific feedback-reply intervention helps students use their received feedback more effectively.

How is the research conducted?

The researchers run an experiment with 43 students across two 7th grade classes. Each student submits three assignments, gives peer feedback on classmates’ work, and then resubmits a revised version of each assignment (so each student submits six pieces of work in total: three drafts and three revisions).

All students were recruited from two classes at the same school, taught by the same teacher (class sizes of 22 and 21 students), and all were enrolled in the theory-oriented general secondary education track.

To measure learning and the effects of peer feedback, they look at how much improvement happens between the first draft (which is submitted for peer review) and the final version (which is resubmitted after the peer reviews are received and potentially used). Two research assistants graded each essay (both drafts and final versions) using a scoring rubric (Appendix B of the paper). A subset of essays was rated by multiple experts, and the observed interrater reliability of .74 (the paper does not specify which interrater reliability measure was used) indicates that the ratings are reliable.

Once all papers have been submitted and graded by the expert graders, they essentially look at how much better the final versions are compared to the initial drafts. These improvements are then correlated with the feedback characteristics and with whether or not the students participated in the feedback-reply intervention.
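To make this concrete, here is a minimal Python sketch of the kind of correlation they describe. All the numbers and variable names are invented for illustration; the paper’s actual statistical analysis may be more sophisticated than a plain Pearson correlation.

```python
# Minimal sketch of the analysis described above, with made-up numbers.
from scipy.stats import pearsonr

# Hypothetical rubric scores for five students: first draft vs. final version.
draft_scores = [6.0, 5.5, 7.0, 4.5, 6.5]
final_scores = [7.5, 6.0, 7.5, 6.5, 7.0]
improvement = [final - draft for draft, final in zip(draft_scores, final_scores)]

# Hypothetical averaged "justification" scores of the feedback each
# student received (one of the five characteristics described below).
justification = [0.8, 0.3, 0.5, 0.9, 0.4]

# Correlate draft-to-final improvement with the feedback characteristic.
r, p = pearsonr(justification, improvement)
print(f"justification vs. improvement: r={r:.2f}, p={p:.2f}")
```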

To figure out which characteristics of the feedback matter most for improving the drafts, they scored each paragraph of each review on five feedback characteristics:

  • appropriateness
  • specificity
  • justification
  • suggestion
  • clear formulation

After scoring each paragraph in a review (each review consisted of six paragraphs), the scores were averaged for each characteristic. Again they find an acceptable interrater reliability (.65).
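As a rough illustration of this scoring-and-averaging step, and of one plausible way to compute interrater reliability (the paper does not name its statistic), here is a short Python sketch. The 0/1 paragraph scores, the rater values, and the choice of a Pearson correlation are all assumptions made for illustration.

```python
# Sketch of the feedback-scoring step, with invented numbers.
from statistics import mean
from scipy.stats import pearsonr

# Hypothetical per-paragraph scores (0 = absent, 1 = present) from one
# rater for a single six-paragraph review, one list per characteristic.
review = {
    "appropriateness":   [1, 1, 0, 1, 1, 1],
    "specificity":       [1, 0, 0, 1, 1, 0],
    "justification":     [0, 1, 0, 0, 1, 0],
    "suggestion":        [1, 1, 0, 0, 0, 1],
    "clear formulation": [1, 1, 1, 1, 0, 1],
}

# Average the six paragraph scores into one score per characteristic.
review_scores = {c: mean(scores) for c, scores in review.items()}
print(review_scores)

# One plausible reading of "interrater reliability": the Pearson
# correlation between two raters' scores for the same reviews.
rater_a = [0.83, 0.50, 0.33, 0.50, 0.83]
rater_b = [0.67, 0.50, 0.50, 0.33, 0.83]
r, _ = pearsonr(rater_a, rater_b)
print(f"interrater reliability (Pearson): r={r:.2f}")
```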

What are the research results?

In the experiment, all students participated in the peer feedback sessions, so there is no control group against which to measure the overall effect of giving or receiving feedback. The researchers can only measure the effects of different types of feedback and of the feedback-reply intervention.

The two main findings (answering their research questions) are:

  • Students who receive justified feedback improve the most. However, the effect is small for students whose assignments were already good before the peer feedback.

“It also indicates that it is more important for a peer assessor to provide justification rather than accurate critique in the form of negative comments.”

  • Asking the students to reply to the feedback did not improve the work of the replier.

Another interesting point, not directly related to their research questions, concerns why peer feedback improves students’ writing. According to a previous study they cite, peer feedback is effective partly because students don’t fully trust it: instead of accepting it at face value, they investigate and double-check it.

“In the study by Yang, Badger, and Yu (2006) revision initiated by teacher feedback was less successful than revision initiated by peer feedback, probably because peer feedback induced uncertainty. Teacher feedback was accepted as such but proved to be associated with misinterpretation and miscommunication, whereas reservations regarding the accuracy of peer feedback induced discussion about the interpretation. Students’ reservations prompted them to search for confirmation by checking instruction manuals, asking the teacher, and/or performing more self-corrections. As a result, students acquired a deeper understanding of the subject. In contrast, teacher feedback lowered students’ self-corrections, perhaps students assumed that the teacher had addressed all errors and that no further corrections were required (Yang et al., 2006).”

What we found interesting

This paper brings a few interesting things to the table! A lot of teachers care about the accuracy of feedback, meaning not just the accuracy of numerical scores but also the correctness of the written comments. This article does not argue that justified feedback is correct feedback (although the two are likely correlated); it argues that usefulness depends on justification. This is plausible because students are more likely to use feedback if it is justified, and what the researchers measure here is how well students improve their work based on the peer feedback they receive.

A problem with this paper is that it does not address the learning effects of giving feedback. Imagine that doing peer review could improve your learning by 100%, and that giving feedback accounts for 80% of that learning; only 20% of the learning would then come from receiving and using the feedback. Since the researchers do not control for the learning effects of giving feedback, and since other papers propose that there is more learning in giving feedback than in receiving it, the results are of limited practical use.
