A (new) manifesto for evidence-based education: twenty years on

Professors Rob Coe and Stuart Kime revisit and update Rob’s 1999 ‘Manifesto for evidence-based education’. Read the manifesto below.

To cite this document, please use: Coe, R. and Kime, S. (2019). A (new) manifesto for evidence-based education: twenty years on. Sunderland, UK: Evidence Based Education.

Twenty years ago, Rob’s ‘Manifesto for evidence-based education’ was published. In 1999, he predicted – with a nod to motherhood, and a large slice of apple pie in hand – that everything “fashionable, desirable and Good” would be ‘evidence-based’. Rob depicted an education ecosystem in which we would have “Evidence-Based Policy and Evidence-Based Teaching, Evidence-Based Training – who knows, maybe even Evidence-Based Inspection.”

Arguably, we have all of these things now, at least in part. However, the notion of ‘evidence-based’ remains controversial, disputed and misunderstood. For our part, we believe that evidence should be at the heart of education; we hope that the 2039 update to this document is simply called a manifesto for education.


Twenty years ago

One of the events that helped generate a debate about the use of evidence in the late 1990s was a lecture given by David Hargreaves to the Teacher Training Agency (Hargreaves, 1996[1]). Hargreaves was one of the most prominent scholars of his generation, a pioneer in the sociology of education. His classic 1967 book, Social Relations in a Secondary School, had changed the way people thought about pupil deviance and sub-cultures.

In 1996, Hargreaves was Professor of Education at Cambridge. But he was also an academic who had rolled his sleeves up and got his hands dirty in the messy world of schooling: he had previously been Chief Inspector[2] of the Inner London Education Authority. When someone of that stature publicly described educational research as “a private, esoteric activity, seen as irrelevant by most practitioners”, which gives “poor value for money”, it caused some controversy. The problem, he argued, was not simply one of dissemination but of the quality of the research itself:

There is no vast body of research which, if only it were disseminated and acted upon by teachers, would yield huge benefits in the quality of teaching and learning. One must ask the essential question: just how much research is there which (i) demonstrates conclusively that if teachers change their practice from x to y there will be significant and enduring improvement in teaching and learning and (ii) has developed an effective method of convincing teachers of the benefits of, and means to, changing from x to y?

 

In 1996, Rob agreed with David Hargreaves that this was a key question. It seemed that educational researchers could be broadly divided into two groups: those who believed that educational research could and should provide such evidence (group 1), and those who did not (group 2). Proponents of ‘evidence-based’ education fell into group 1. Over the next fifteen or so years, Rob invested a lot of energy in debates with researchers in group 2 (debates which were often very frustrating, and arguably pointless). In 1999, it seemed like there were hardly any of us in group 1.

At that time, almost no randomised controlled trials (RCTs) had been done in education in the UK. Pretty much everyone working in UK educational research who believed in the value of RCTs could meet in one modest-sized room, as they often did. If a small panel were needed at such an event, we could have invited everyone who had actually done a trial in a school to sit on it, and probably still have had spaces.

All of this has definitely changed.

Today there have been hundreds of trials in schools, supported by the Education Endowment Foundation (EEF) and other funders. Many hundreds of researchers have designed, analysed and reported those RCTs, so the depth and breadth of expertise is much greater. There are vibrant forums for teachers to engage with research: through Twitter and blogging; through networks such as the Chartered College of Teaching, researchED and the Research Schools Network; and at numerous events around the world. Almost no-one now says it is unethical to do RCTs, or that you can’t do them in schools. There is still some debate about how many (or what balance) we should have, and about what they can or can’t tell us – which is entirely appropriate, and something we must actively encourage.

But knowing about research evidence is still not a priority for most teachers, nor does it inform the majority of their decisions[3]. Should educational research be a more prominent feature of teachers’ thinking? Is Hargreaves’ ‘essential question’ still the key to evidence-based practice (if it ever was)?

[1] Hargreaves, D.H. (1996). Teaching as a Research-Based Profession: Possibilities and Prospects. Teacher Training Agency Annual Lecture, Cambridge.

[2] In the 1980s, ‘Inspectors’ did evaluate schools, but their main role was to support them and help them improve.

[3] https://www.nfer.ac.uk/news-events/nfer-blogs/evidence-informed-approaches-to-teaching-where-are-we-now/

What do we mean by ‘evidence-based education’?

In the original Manifesto for evidence-based education, ‘evidence-based’ was described as “an approach which argues that policy and practice should be capable of being justified in terms of sound evidence about their likely effects”.

Research can never tell teachers what to do, nor should it; it can, however, help provide teachers and leaders with what Prof. Steve Higgins (and others) have called ‘best bets’. It can – and should – provide the theory underpinning the action in classrooms, leadership meetings, governing body committees and policy-making discussions.

Today, we advocate a culture in which the use of evidence in decision-making is normalised: a world in which it would be considered simply wrong for professionals working in education not to have deep knowledge of rigorous and relevant research evidence – evidence which points to the likely effects of a proposed action or decision.

‘Evidence-based’ is often perceived as an insider term: shorthand for an opaque, nebulous ideal to which many now subscribe but for which there appears to be no consensus, no common ground of shared meaning. To justify policies or practices “in terms of sound evidence about their likely effects” is not some kind of far-off utopia; it is pragmatic and achievable. It is also uncomfortable: in many instances it means acknowledging the dearth of relevant and rigorous evidence; in others, it means accepting that intuition and experience are wrong.

We use the terms ‘evidence-based’ and ‘evidence-informed’ interchangeably to mean precisely what the original Manifesto stated 20 years ago: “an approach which argues that policy and practice should be capable of being justified in terms of sound evidence about their likely effects.”

To be specific, we believe some of the key characteristics of an evidence-based approach are:

  • Understanding the evidence. People who make decisions in education need to know what the research says. However, it is more than just knowing that metacognition is a good thing and that reducing class size makes a lot less difference to learning than you might expect – although that is a good start. You need to understand enough about what ‘metacognition’ is to be able to recognise it if someone describes an intervention without using that word, or to be able to define it yourself[4]; or to know that class size effects may vary with children’s age and background[5]; or that when the numbers are small enough, class size effects may even reverse[6].

 

  • Testing the why. Related to the kind of deep knowledge of research findings in the previous point is a need for understanding the mechanisms and principles that govern how learning happens, and the kinds of experiences and activities that enable or prevent it. Scientists will recognise this as theory: an explanation of observed phenomena that makes testable predictions. One demonstration of this would be the ability to explain to a sceptical colleague why the extra individual attention and feedback that you could give in a smaller class does not transform students’ attainment. Another, which we have both done and seen done by others (Tom Martell of the EEF does a good version of it), is to present descriptions of two or more interventions that have already been evaluated, and to ask an audience – usually teachers, but sometimes researchers, or both – to say which they think had the biggest impact on learning. Admittedly, the information about each intervention is necessarily limited, but it is striking how badly most groups do: in our experience, they seldom beat chance (the second sketch after this list shows how one might test that claim). If we are presented with a description of an intervention and we have a good theory, we should be able to use that theory to predict what will happen. The fact that we mostly cannot do this indicates that we lack good enough theory; even where that theory exists, it is not widely enough understood. And the scientific method does not stop once we formulate a theory: we then have to test it systematically, designing experiments that will differentially support or undermine competing theories.

 

  • Being critical. A defining characteristic of research is that it does not accept what seems obvious, plausible or widely believed, but seeks evidence. An evidence-based approach conditions people to hear alarm bells whenever they encounter something that seems obvious: “Yes, everyone knows that, but is it really true?”; “X is an expert authority on this, but are they right?”. They also self-monitor for cognitive biases[7], knowing that we cannot trust even our own intuitions.

 

  • Prioritising evaluation. Robust evaluation is a keystone of an evidence-based approach: not a nice-to-have but a fundamental requirement. Evaluation is hard to do; it can be equivocal and problematic, and it is often quite expensive. It can also disappoint: many of the things we thought would work don’t. But the cost of not evaluating is much higher in the long term. The more you are willing to compromise on evaluation, the less evidence-based you are.

 

  • Local, formative monitoring. A corollary of the realisation that ‘what works’ (or rather, ‘what worked’) doesn’t always work is the need to evaluate whether those ‘best bets’ are in fact working. With your adaptations, in your context, are they having the impact you expect? What improvements could you make (and are they working)? Real-time, local, formative evaluation is arguably even harder than summative evaluation, but the complexities of implementation and the importance of context are such that without it you are unlikely to see real benefits. Doing this well requires a knowledge of evaluation design and of measurement – hence the need to understand assessment.

 

  • Changing with the evidence. A key test for anyone who wants to describe themselves as evidence-based is to be able to list all the things they once believed (ideally passionately) which they subsequently learned were at odds with research evidence, and have since changed their minds about. If evidence hasn’t changed your mind about something important, then you are probably more evidence-garnished than evidence-based.

 

  • Understanding methodology. Randomised controlled trials are sometimes described as the gold standard of evaluation design. It is true that they solve a fundamental problem in evaluation (the first sketch after this list illustrates it with a small simulation), but there are plenty of other problems they may not solve, and RCTs can be done well or badly. Systematic reviews set out to synthesise the evidence from multiple studies, but again there are pitfalls, many of which researchers have known about for decades. If you want to make use of evidence, you need to be able to judge its quality; to do that, you need to understand the strengths and weaknesses of different methodologies and how they can affect the claims people make.
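
To make that ‘fundamental problem’ concrete, here is a minimal simulation sketch. It is our own illustration, not part of the original manifesto, and every number and variable name in it is invented for the purpose. It simulates a hypothetical intervention worth five points, and compares a self-selected comparison (where higher-attaining pupils are more likely to take part) with random assignment:

```python
# Illustrative sketch only: a hypothetical intervention worth +5 points.
# Self-selection confounds the naive estimate; randomisation does not.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_effect = 5.0  # built into the simulation, so we know the right answer

prior_attainment = rng.normal(50, 10, n)

# Self-selection: higher prior attainment means higher chance of uptake.
p_uptake = 1 / (1 + np.exp(-(prior_attainment - 50) / 5))
self_selected = rng.random(n) < p_uptake

# Randomisation: a coin flip, independent of everything else.
randomised = rng.random(n) < 0.5

for label, treated in [("self-selected", self_selected), ("randomised", randomised)]:
    # Outcome = prior attainment + true effect (if treated) + noise.
    outcome = prior_attainment + true_effect * treated + rng.normal(0, 5, n)
    estimate = outcome[treated].mean() - outcome[~treated].mean()
    print(f"{label:>13}: estimated effect = {estimate:5.2f} (true = {true_effect})")
```

The self-selected comparison overstates the effect because the treated group started ahead; the randomised comparison recovers roughly the true value. That, in miniature, is the problem randomisation solves – and it says nothing about the many other ways a trial can still go wrong.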

 
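And for the prediction exercise described under ‘Testing the why’ above, ‘beating chance’ is a checkable claim. A minimal sketch, again our own and with hypothetical numbers: if an audience chooses between pairs of already-evaluated interventions, guessing alone gets each pair right half the time, so a one-sided binomial test asks whether their tally is better than a coin flip.

```python
# Hypothetical tally: an audience picked the higher-impact intervention
# in 14 of 24 pairs shown. Does that beat the 50:50 rate of pure guessing?
from scipy.stats import binomtest

correct = 14  # invented count of correct picks
pairs = 24    # invented number of intervention pairs

result = binomtest(correct, pairs, p=0.5, alternative="greater")
print(f"{correct}/{pairs} correct, one-sided p = {result.pvalue:.3f}")
# Here p is well above 0.05: no evidence that this audience beat chance.
```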

On the other hand, there are some things that we would say are not part of an evidence-based approach:

  • A recipe. Can research identify failsafe practices that can be described and implemented? Or specify changes from x to y that guarantee improvement? We think not. Most things can be made to work with enough skill and determination, and with the right adaptation and context; almost anything can fail to work without them. Recognising that it is more complicated than that does not undermine an evidence-based approach; it is the very heart of it, and likely strengthens it.

 

  • An instruction. Anything that requires compliance is unlikely to be evidence-based. Has there ever been an example where enforcing compliance with generalised instructions that leave no room for judgment, local context or adaptation has led to improvement in learner outcomes? We think not[8]. The skill, experience, habits and wisdom of the practitioner are always more important than the policy.

 

  • A mechanistic, oversimplified view of the world. Some critics have argued that an evidence-based approach reduces the world to what can be measured and evaluated in RCTs, dismissing all other kinds of knowledge. Our view of evidence-based practice acknowledges and respects all kinds of knowledge, but recognises that different kinds of evidence should have different weight in relation to particular claims. For causal impact claims, evidence from good RCTs is usually very important. Evidence used to inform decision-making must always be fit-for-purpose.

 

  • Neo-liberal disempowering of teachers. Others have argued that the evidence-based agenda is about taking agency away from teachers. We would argue the opposite: giving teachers knowledge that enables them to make the best decisions for their students is both empowering and professionalising.

 

  • A marketing bandwagon. In the past 20 years, savvy marketers and digital content providers have stepped on to the evidence bandwagon as if places on it were as limited as the very evidence behind their flashy claims. It is increasingly common to find those offering their goods and services to schools making bold – sometimes ludicrous – impact claims. Only a genuinely evidence-based approach can challenge such claims.

 

We hope that makes it clear what we think evidence-based education is and what it isn’t. There is a lot to be positive about from the last twenty years, but also still much more to do.

 

[4] The EEF Guidance Report on Metacognition is a good place to learn about this: https://educationendowmentfoundation.org.uk/tools/guidance-reports/metacognition-and-self-regulated-learning/

[5] See Hattie, J. (2005). ‘The paradox of reducing class size and improving learning outcomes’. International Journal of Educational Research, 43, 387–425 (http://dx.doi.org/10.1016/j.ijer.2006.07.002).

[6] See https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit/small-group-tuition/

[7] E.g. https://www.kdnuggets.com/2018/09/practical-cognitive-biases.html

[8] Of course, in a spirit of ‘changing with the evidence’ we would be delighted to be confronted with evidence that contradicts our claim …


What should education systems do (in the next twenty years)?

As this is a Manifesto, we now set out the policies we think would help to promote this agenda.

By ‘education systems’ we mean everyone who plays a part in the education of learners: teachers, learners themselves, parents, school leaders, policy makers, governors, researchers, teacher educators, service providers, etc. Here, we limit ourselves to four suggestions:

  1. Promote a scientific approach to learning about how to improve education. Given how much we don’t know about improving systems, schools and classrooms, we need to rethink the way we learn how to do this better. A scientific approach is characterised by generating hypotheses which we then test; by taking rigorous and relevant evidence as the final arbiter; and by using measurement to define and operationalise key constructs. All stakeholders should endorse and promote these values.

 

  2. Create feedback systems that enable continuous improvement. In some domains (labelled ‘kind’ by Robin Hogarth[9]) actors have access to trustworthy feedback that allows them to judge whether what they do is effective and the impact of any changes they make. Unfortunately, normal teaching is a ‘wicked’ domain, where feedback does not offer a reliable guide. We need to create valid measures and good feedback loops to address this.

 

  3. Strengthen the working relationships between practitioners, researchers, funders and policy makers. There needs to be a shift in the relationship between research funders, researchers, teachers, leaders and policy-makers: one which sees the production of research evidence in education as problem-oriented, driven by demonstrable need, and carried out in collaboration with school- and college-based practitioners. Methodologists must improve the tools available to researchers charged with finding answers to important questions; those researchers must receive training which is fit for newly-emerging purposes. We have known for decades about many of the technical problems of research that have led to an inadequate evidence base (though an inadequate evidence base is better than none at all). It’s time to confront them, and to put money into fixing them.

 

  4. Strengthen the safeguards that allow policymakers to do what is difficult but right. In a democracy, elected ministers make decisions about education policy. In our experience, politicians are mostly well-intentioned people who genuinely want to make the education system better, although they may have different views about exactly what that means and how to achieve it. However, even the best of them will find it hard to get political support for policies that may be perceived as unpopular (especially among their own constituencies) or whose benefits will take a long time to be realised. In other areas of UK policy there are bodies, independent of government, whose role is to provide distance between political incentives for short-term, simplistic solutions and what evidence supports as best bets; examples include the National Institute for Health and Care Excellence (NICE) and the Bank of England Monetary Policy Committee. Creating a similar organisation for education could be one way to achieve this separation.

[9] Hogarth, R. M. (2001). Educating intuition. Chicago, IL: The University of Chicago Press.

What are we going to do?

We now work together at Evidence Based Education (EBE), an organisation that sets out to help improve learner outcomes – worldwide and for good – by enhancing the quality of teaching and learning through innovative, impactful and engaging professional learning. EBE exists solely for this purpose, and we are committed to playing our part in creating classrooms, schools, networks and systems in which the use of rigorous, relevant research evidence is normalised.

Here are three things we plan to do to help realise the vision of this new manifesto for evidence-based education:

  1. Development of accessible online training and tools. EBE will continue to develop online training and tools which make evidence-based classroom practice more accessible for teachers and school leaders around the world. Building on our Assessment Lead Programme and Assessment Essentials course, we will continue our work to close the gap between the best available research evidence and classroom practice. We also commit to evaluating the impact of these programmes to help improve them further.

 

  2. The Great Teaching Review. We will write a review of the research evidence on effective teaching and make it freely available on the EBE website. Building on the Sutton Trust’s 2014 ‘What makes great teaching?’ report, a small EBE team will identify and describe the competencies of effective teaching. We will let this Great Teaching Review, and the feedback we gather from its readers, guide our decisions about where to focus our efforts thereafter. The resulting framework will provide the basis for a set of tools to help teachers evaluate their own strengths and weaknesses and measure their improvement as they work on specific goals.

 

  3. The Professional Learning Evaluation Framework. Evaluating the impact of professional learning is hard. To help meet this challenge, EBE will create a Professional Learning Evaluation Framework that can be embedded as standard into professional learning programmes and courses. From initial training and preparation programmes onwards, we believe we need to know much more about the impact of professional learning (and the processes which create that impact) on valued teacher-level and student-level outcomes.