You know that feeling you get when you read something and it just hits you front and centre? That fizzing excitement that takes hold in your gut and subsequently demands the attention of every bit of your mind? That’s what happened when I started reading the Every Student Succeeds Act (ESSA).
OK, so I hear your barely-muffled snort, and I see a look of confusion spreading across your face (the one you usually reserve for the very kind of esoteric geekery you know I’m engaging in), but stick with me. I’m pretty excited.
This may become some kind of manifesto, who knows.
In the USA, the No Child Left Behind (NCLB) legislation has been succeeded by the Every Student Succeeds Act (ESSA), and for someone involved daily in training and supporting teachers and school leaders to use research evidence and evaluation techniques to inform their decision-making, ESSA was a joy to read. The phrase ‘evidence-based’ appears 61 times to describe approaches to decision-making in education, and with paragraphs such as the following, I couldn’t help but think that this document represents a huge opportunity to do what colleagues such as our Advisory Board member Professor Rob Coe have been calling for over many years:
“The State educational agency will ensure that local educational agencies, in developing and implementing programs under this part, will, to the extent feasible, work in consultation with outside intermediary organizations (such as educational service agencies), or individuals, that have practical expertise in the development or use of evidence-based strategies and programs to improve teaching, learning, and schools.”
But opportunity is, in this respect, much like ability: you can have bucket-loads of the stuff, but that doesn’t mean you’ll do anything really meaningful with it. Interpretation and operationalisation of any piece of legislation is freighted with ideology, laden with partisan belief, and beset by bureaucratic stalling. And maybe ESSA will fall foul of all of these problems (and more), but I see a gap through which states, districts and their schools could move to a better future; a threshold which, if crossed, could be truly transformative.
Here’s the broad vision, and it starts at the bottom. Time to do some imagining.
Imagine a state in the USA. Now imagine a school district in that state. Now imagine a school in that district. In that school, a teacher (we’ll call her Kate) responds to the need to provide better feedback to students by looking at some easily accessible and useful summaries of the research on feedback (she, too, got excited by ESSA!); she finds that feedback is generally more effective when focused on the task, rather than the learner. Kate also notes that delaying feedback can be more effective for high-ability learners, but she recognises that both these findings are averages drawn from classrooms all over the world and from the last 20 years. Still, she thinks there is something important for her here.
Talking to her line manager (we’ll call him Dan), Kate says she wants to try out some of the ideas she has learned. Dan, familiar with the idea of evaluation and the tools available to measure impact on valued student outcomes, suggests that Kate tries out her new ideas in the form of a DIY evaluation; his training in using research and evaluation well is put to good use in supporting her. Kate implements her new approach to feedback and, over time, monitors student progress. Finally, she looks at her DIY evaluation results and finds that, for the 30 children involved in her small trial, there was a small positive impact on their learning. She reports this finding to Dan, who shares it with the school Principal (we’ll call her Dawn).
Dawn attends a district meeting the following week and mentions that a teacher in her school has designed an evidence-based feedback intervention and that the results seemed encouraging, though she acknowledges that the sample of students was small. The district officials (we’ll call them Juan and Grace) respond with encouragement and, in discussion with five other Principals, arrange for Kate’s feedback intervention to be trialled in five classes in each of those five schools; across the 25 classes now trialling the intervention, the number of children involved in the evaluation rises to 750. Teachers of those 25 classes receive training on how to deliver feedback in the way Kate described (she was asked to make a short video and share some exemplar materials), and the evaluation takes place, again using the DIY evaluation methodology. The findings are again positive, this time showing more robustly that children receiving the feedback intervention make around four months more progress in reading than their counterparts who did not receive it.
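At its core, the kind of DIY evaluation Kate and Dan run is a comparison: the progress of children who received the intervention set against that of a similar group who did not, summarised as an effect size. Here is a minimal sketch of that arithmetic using Cohen’s d, the standardised mean difference that toolkits such as the EEF’s translate into ‘months of progress’. The scores below are entirely made up for illustration; none of the numbers come from the story.

```python
from statistics import mean, stdev

def cohens_d(intervention, comparison):
    """Standardised mean difference between two groups, using a pooled SD."""
    n1, n2 = len(intervention), len(comparison)
    var1, var2 = stdev(intervention) ** 2, stdev(comparison) ** 2
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (mean(intervention) - mean(comparison)) / pooled_sd

# Illustrative reading scores only; not data from the story.
intervention = [65, 75, 85, 70, 80, 73, 68, 82]
comparison = [63, 73, 83, 68, 78, 71, 66, 80]  # same spread, slightly lower

print(round(cohens_d(intervention, comparison), 2))
```

In this toy example the intervention group scores two points higher on average against a pooled standard deviation of about seven, giving a small positive effect of the sort Kate first observes. The point of scaling the trial from 30 children to 750 is not to change this arithmetic but to make the estimate far less likely to be a fluke of one small class.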
Juan and Grace note the findings and begin to work out a plan to train all teachers across the district to use the new feedback intervention (taking Kate’s video and materials as the central anchor point), but they also have a conversation with an independent evaluator. They commission the independent evaluator to assess the impact of the feedback intervention across 100 classes, looking at the impact on 3,000 students’ learning. Juan and Grace also talk to State education officials (they’re called Christine and Rob) who offer support for the independent evaluation.
With the independent evaluation completed, and the positive findings repeated, Christine and Rob prepare guidance and training opportunities for teachers and school leaders to use the feedback intervention, but also request that other teachers begin to think in the same way as Kate, researching and trialling ideas at the small-scale of the individual classroom. And so the cycle begins again…
OK, so there are massive gaps in the story, and I acknowledge that there is something highly idealistic and utopian about it. But really, I don’t care about that. What I care about is that it describes the outline of a sustainable, self-improving system with robust, locally-contextualised evidence and evaluation at its heart. I don’t really believe that it only takes seven people to change an education system: I’m idealistic, not stupid. But with structures that encourage and enable teachers, school leaders, administrators and policy-makers to work with each other as they engage with and use evidence, a process such as the one I describe is, I think, not infeasible.
We must never aim to produce education systems in which teachers and school leaders blithely abrogate their professional responsibility to apply sensitive, informed judgement, substituting it with defensive references to research evidence (an unintended consequence that could well occur if ESSA is not implemented sensitively). Even the best evidence will only ever tell us about the degree to which something worked, somewhere, somewhen and under certain conditions. But imagine a system in which teachers and school leaders had research and evaluation as systematic parts of their toolkit (and that’s the full extent of what I’m advocating here)! Imagine a system in which we had a good idea of the ‘best bets’ and that teachers and leaders knew how to find out if those ‘best bets’ worked for their students in their individual schools.