Why are we betting on giving teachers more effective feedback (even though no one else is really doing this)?

Great teachers know about the power of feedback, and research supports this. In the Great Teaching Toolkit: Evidence Review (Coe et al., 2020), we summarised the evidence about feedback in both directions: giving students feedback to guide their learning and getting feedback from students to make teaching responsive.

But feedback doesn’t just help school students. A classic review by Kluger and DeNisi (1996) of the impact of feedback on performance found positive effects on average across a wide range of contexts. However, this and later reviews also demonstrated wide variation in effect sizes, including many cases where feedback harms performance. The research on when feedback is most helpful is complex and hard to interpret (e.g., Shute, 2008; Wisniewski et al., 2020). On its own, feedback does not necessarily enhance performance: it must be used to promote learning or motivation, and effects are greater when recipients are supported to implement changes.

Part of the power of feedback is that it provides a “reality check”. Despite our tendency as human beings to believe we can judge how well we are doing things, we are generally wrong. “The correlation between self-ratings of skill and actual performance in many domains is moderate to meagre” (Dunning et al., 2004, p. 69). In situations where we do not have good feedback, self-assessments of performance are inaccurate and mostly over-optimistic. To put it bluntly, we’re all probably worse at things than we think we are.

Existing feedback is a poor guide

Classroom teachers do get some feedback. Teachers constantly evaluate how well a lesson is going, looking for signs of confusion or flagging interest among students, for example. But, as I wrote a decade ago (Coe, 2013), the things that are visible in classrooms are mostly “poor proxies” for student learning. Perceptions of our own role are also hugely distorted, as shown by the surprise—even shock—most teachers experience on seeing themselves teaching on video.

Many teachers also get feedback from observations by colleagues. However, as I wrote in 2014, most of the judgements made by observers without specialist training are wrong (Coe, 2014). Even when observers do not score or rate the lesson, if the judgement that underpins their feedback is wrong, the feedback is unlikely to be helpful.

Why isn’t feedback central to all professional learning?

Given the power of feedback, its particular importance in learning and improvement in complex tasks, and the poverty and paucity of easily generated feedback in classrooms, it may seem surprising that more attempts at education improvement have not featured feedback more prominently. Most teachers would struggle to imagine having to support their students’ learning without giving and receiving feedback; yet in most models of professional learning, teachers receive very limited feedback about their performance and their planned learning is relatively unresponsive to their current status or progress. Coaching offers the potential to incorporate both kinds of feedback, but, as I have argued, the impact of coaching depends heavily on the scarce expertise of the coach. So this is unlikely to be an efficient, scalable approach on its own.

A number of studies have evaluated the impact of interventions that feature feedback to teachers as a way to raise student attainment (e.g., Kraft & Christian, 2021; van Geel et al., 2016). Although some do find positive effects, overall the picture is mixed. Three reasons why feedback may not lead to improvement are particularly salient:

  • Feedback can be brutal. We are not as good as we think we are and ignorance is bliss. Even if we know that the natural feedback we currently receive is actually uninformative, it is still comfortable and reassuring. Plus, if feedback tells us we’re not that good, we will have to do something about it (more work!). Therefore most people do not naturally seek out (or may avoid or disregard) helpful feedback.
  • Practical measures are hard to create. Even when we want it, effective feedback is hard to get. The requirements of practical measures are demanding and the expertise to create them is thinly spread. In practice, we often depend on weak proxies instead—when we even bother.
  • Acting on feedback is hard. Even when we receive high-quality feedback, the challenge of the hard work to implement and sustain a significant change remains.

Improvement science and feedback

The prioritisation of feedback within the Great Teaching Toolkit (GTT) has also been influenced by an area of research that foregrounds the power of feedback: improvement science (Lewis, 2015). Improvement solutions must fit the context, hence the need to be developed and adapted by local actors—improving quality is the job of those who do the job. For it to be successful, plans are treated as hypotheses to be tested; feedback is collected about implementation and impact using “practical measurement”.

Practical measurement has four distinctive requirements (Yeager et al., 2013). First, it focuses on intermediate “leading” indicators and direct measures of the underpinning mechanisms, not just the final outcome. Second, it provides granular and specific information, not just the high-level constructs that are often the target of research measures. Third, it is designed to have meaning and salience for the people who own the change (e.g., classroom teachers). Fourth, it has to be manageable to collect and interpret in the context of everyday work. All told, these measurements drive effective feedback—so long as they are meaningful and practical. After all, “We cannot improve at scale what we cannot measure” (Bryk et al., 2015, Chapter 4).

More effective feedback in the Great Teaching Toolkit

Conscious of the limitations of readily available feedback for teachers and school leaders, in the GTT we have prioritised the development of feedback tools that aim to make better feedback more easily available. As part of this process, we have identified four specific mechanisms by which feedback can help people to improve their performance:

1) Holding up a mirror

A key aim of our feedback tools is to help teachers to see their own classroom in a way that is broader, clearer, and more accurate than their raw experience can provide. The feedback tools should be like holding a mirror up to allow teachers to see themselves. This gives them insights into their classroom that may already be readily actionable, especially if they have a sound mental model of what great teaching looks like. The feedback itself may not always provide all the “answers” on what to do next. These additional insights may require further support (for example, through collaboration) to draw out diagnostic interpretations and convert them into action.

2) Motivating improvement

If feedback is received on a repeated basis, it allows teachers to see the progress they are making. Being able to see that you are improving some aspect of your practice is hugely motivating. Given the investment required in professional learning, it is important that these feelings of self-efficacy, competence and improvement are supported.

But feedback can also motivate by drawing attention to a gap between actual and desired performance, particularly where individuals have self-efficacy: the perception that they are competent to reduce the gap by improving their performance. The availability of feedback (and the social pressure of its uptake by others) also drives a shift in how teachers think about their teaching. It is no longer something that is “just what they do,” “for the children,” or “good enough.” Instead, their thinking shifts to a focus on their effectiveness as something that can be improved (no matter how good they already are); they can feel their agency over, and ownership of, that improvement.

3) Focusing attention on what matters

Feedback directs attention to key goals and outcomes (Kluger & DeNisi, 1996; Locke & Latham, 2002). Knowing that an aspect of teaching is being captured, measured and fed back increases its salience. If the feedback tools focus on the right things, they increase the alignment between the aspects of practice that really matter for student outcomes and those that matter to teachers and school leaders.

4) Clarifying what good looks like

A requirement for deliberate practice, and for developing expertise in general, is the development of mental models (Deans for Impact, 2016). Operationalising an element of great teaching into a well-specified and transparent measurement process helps to build a clearer shared understanding: a mental model of that aspect of great teaching. Without that, it is conceivable for colleagues to have a conversation about an aspect of practice—say, “great questioning”—using the same words but actually meaning quite different things. Understanding is further supported by having clear and rich descriptions, along with examples (including a wide range of examples, boundary cases, and non-examples with different characteristics) across the spectrum between exemplary and routine practice.

Conclusion

Feedback can be one of the most powerful ways to improve goal-directed performance, supported by a vast body of research and theory. However, it often fails to live up to its potential, and needs the right supports in place to work best. Feedback tools are a key component of the Great Teaching Toolkit; their design and implementation are guided by the best available evidence.

Moreover, there is a recursive twist to the way we have built these feedback tools: every time a teacher or school uses the GTT, we are getting feedback about how effective it is. In the same way we tell teachers how feedback can help them be even better, we are using that feedback to help the GTT be even better.

Feedback really is the key.

References

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press. https://books.google.co.uk/books?id=CKZhDwAAQBAJ

Coe, R. (2013). Improving education: A triumph of hope over experience. https://profcoe.net/

Coe, R. (2014, January 9). Classroom observation: It’s harder than you think. CEM Blog. http://www.cem.org/blog/414/

Coe, R., Rauch, C. J., Kime, S., & Singleton, D. (2020). Great teaching toolkit: Evidence review.

Deans for Impact. (2016). Practice with purpose: The emerging science of teacher expertise.

Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment. Psychological Science in the Public Interest, 5(3), 69–106. https://doi.org/10.1111/j.1529-1006.2004.00018.x

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.

Kraft, M. A., & Christian, A. (2021). Can teacher evaluation systems produce high-quality feedback? An administrator training field experiment. In EdWorkingPapers.com (62; 19). https://doi.org/10.26300/ydke-mt05

Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61. https://doi.org/10.3102/0013189X15570388

Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705–717. https://doi.org/10.1037/0003-066X.57.9.705

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795

van Geel, M., Keuning, T., Visscher, A. J., & Fox, J.-P. (2016). Assessing the effects of a school-wide data-based decision-making intervention on student achievement growth in primary schools. American Educational Research Journal, 53(2), 360–394. https://doi.org/10.3102/0002831216637346

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087

Yeager, D., Bryk, A., Muhich, J., & Morales, L. (2013). Practical measurement. Carnegie Foundation for the Advancement of Teaching.