We’ve been following with interest the recent activity in the edu-blogging world about assessment – in particular, Tom Sherrington’s post summarised the current status of assessment in UK schools more neatly than we ever could have, and Alex Quigley’s recent venture onto CEM’s blog touched upon some key worries and issues to address (the fact that teacher assessment is inherently biased, and the fact that teachers are grossly undertrained in such a key everyday skill, to name but two).
Having piloted our Assessment Lead Programme in a number of north-eastern schools last academic year, and having organised our first-ever residential course (next week!), we have found that the power of a robust, well-constructed and well-supported assessment framework has surpassed our expectations.
We have been massively impressed by the impact the initial training, tools and support have had, the enthusiasm our Assessment Leads have shown for the programme, and – most of all – the uses they’ve put the tools to already. To name but a few:
- Development of formative “hinge questions” – questions which very quickly and very effectively assess whether a pupil has understood and can apply a concept;
- Improvement of regular class tests – understanding the weaknesses of this kind of assessment, and being able to identify and mitigate them; and
- Deep analysis of entrance assessments – extremely high-stakes judgements rest on these results, so making sure the assessment is as reliable as it can be means those judgements are made with the best possible information at hand.
This is complex stuff, but it is by no means beyond the realms of possibility or pragmatism: the very fact that we’ve developed it, and seen these teacher-led uses as a result, is evidence of that.
However, it must be remembered that it is a journey – it takes commitment, and acceptance that current practice is not always the best path. This is where the real problem with “assessment” comes in.
We have known, for a long time now, that assessment has been seen as a mysterious “black box” – some inputs go in, and some outputs come out, but what happens in the middle isn’t necessarily known. Traditionally, this was something not to be messed with, and a box first properly opened by Paul Black and Dylan Wiliam in the late 90s.
The meaning of “assessment”
This was the start of “assessment for learning”, or what Wiliam now calls “responsive teaching” – a much more appropriate term, given what we know now. And what we know now is that the practical meaning the word “assessment” has acquired is the number one problem with assessment itself.
Back at the start of this journey, and from discussions around the subject in our school-based research, what came to light is that the meaning of the word assessment itself in schools has become warped.
When we talked about it with teachers facing these challenges day-to-day, the themes that came up had, for the most part, negative connotations. Things like summative external exams and the pressure that surrounds them, management information systems (MISs), swathes of data, accountability, workload, marking, progress, value-added, red pen (or green and purple pen), and so on.
The word “assessment” conjures these negative images, and creates immediate barriers to addressing the problem and to creating change for the better. This is a real problem.
Why is it such a problem?
In schools, and for teachers and school leaders, it’s a problem because assessment (in its fully functioning form) is a massive, versatile and incredibly powerful toolkit. There is far more to it than marking, pressure and external exams; as we have seen above, it doesn’t need to be this way.
Indeed, whilst there is no overnight solution to this problem of nomenclature and the associated labels, it is our firm belief that every school should have access to the key for this toolkit, and the manual on how to use it. What’s more, every teacher should have the licence and support to use the tools they need – nothing more. It should not be over-complicated.
This isn’t something we alone think – at an event we hosted last month, Cambridge Assessment’s Tim Oates described a vision for the future of assessment where it is “the servant, not the master”. Assessment should not dictate how you as a teacher go about teaching, but you should dictate how and when assessment is used to your benefit, and to that of your pupils.
In the course of writing this post, I have realised that a toolkit might be the wrong metaphor altogether. If you go to Homebase and buy a toolbox, you get no guidance on how each of the tools should be used – you can work it out iteratively, but it takes time and a lot of trial and error. What we actually mean by “toolkit” is more akin to an IKEA flatpack – it contains everything you need to build the system you want, along with instructions and tools for you to do so. Whether you want your desk (read: assessment system) to have its drawers on the left or the right is up to you: the ins and outs of the system are school- and context-specific, but the kit can acknowledge and cater for that.
We are working day-in day-out on this solution. We are launching our full-fledged Assessment Lead Programme in September, and we’re really excited about that. But that’s not why I’m writing this post.
Bit by bit, the education world (policy-makers, researchers and practitioners) needs to reclaim the term assessment for what it really is: reliably assessing – whether formally or informally, verbally or in writing, for formative or summative purposes – where our pupils are, so that we can hone our systems and inform our teaching accordingly, with the ultimate result of improved outcomes for pupils.