Guest post: What is the point of marking?

Our Director of Education, Stuart Kime, has been working with his friend, mentor and former PhD supervisor, Prof Rob Coe, to design and deliver a workshop on classroom assessment and evaluation for a group of teachers in the Tees Valley, as part of the Transforming Tees project. In this blog post, Gary Wootton – workshop delegate and English Subject Leader at St. Hild’s Church of England School in Hartlepool – offers his thoughts on the impact the workshop has had on his own practice.


On October 12th and 23rd, I attended two days of ‘Assessment and Diagnostic Training’ with Professor Rob Coe and Stuart Kime. The training centred on one key question: what is the point of marking? Much of the day was spent discussing the practice of extensive written feedback, the extent to which it is actually beneficial, and who it is really for. A lot of it rang true, and Dr Rebecca Allen’s speech on workload is well worth a read: the key argument, as I understand it, from both the speech and the training day, is that too much of what teachers do is intended to evidence learning rather than promote it. Apportioning blame is unnecessary here, but it’s certainly true – and seemed true to the hundreds of colleagues in the room – that marking has grown into a completely unmanageable beast. Most importantly, it consumes a huge amount of teacher time, and there simply isn’t the evidence base to suggest it’s worth this expenditure of time and effort.

My department has been trialling coded feedback this year. We use a Red-Amber-Green system, where R-A-G stands for ‘Rethink, Almost there & Got it’. After each piece of work, the teacher devises an improvement task based on whether the child needs to rethink what they’ve done; whether they’re almost there and just need to make a slight tweak; or whether they’ve definitely got it and are ready for an extension. The assessment is strong: the emphasis of the process, always, is on devising next steps which will facilitate progress. The feedback is clear: students are given actionable advice, with a skills focus, which leads to them improving the piece of work and – importantly – similar pieces of work in the future. The marking, though, is minimal: staff simply write ‘R1’ or ‘G2’ at the end of a piece of work. It’s the conflation of these three things – marking, assessment, and feedback – which is responsible for many of the ills concerning teacher workload. Marking is a mechanical process – literally ‘marking’ a page. The fact that it’s the easiest of the three to scrutinise, monitor and evidence is no justification for making it the emphasis of school policy or professional practice. Teachers need to be encouraged to make assessments of their students’ learning as often as possible, and to provide the feedback that enhances it as often as possible. If this means less marking, then we should mark less.

Part of the day involved case studies from schools. One of the methods showcased was ‘Whole Class Feedback’. It’s done the rounds on Edutwitter, and I can see its merits. If it saves teachers time and leads to more progress, then great. What does worry me, though, is that it already seems to have been turned into another accountability tool, with feedback sheets to be saved, dated and provided as evidence to anyone who asks.

The most interesting part of the day – for me – was the discussion around assessment as learning. The false dichotomy between formative and summative assessment is well-documented, and the conversations around assessment of learning and assessment for learning have grown tiresome. ‘Test’ and ‘assessment’ seem to have become almost dirty words in the profession: I can’t count the number of (non-teaching!) advisors who’ve come into my school and advised against testing or assessing too frequently. The training session highlighted a relatively strong evidence base suggesting that iterative low-stakes testing is not only a measure of what students have learned, and not only generates information about how they could learn better, but in and of itself promotes learning.

The next day, returning to school after the training session, I created a bank of sixty multiple-choice questions on ‘A Christmas Carol’, which our Year 8s are studying this term. I gave them ten questions on context prior to reading, told them to highlight what they thought the answers were, and then told them to highlight – in a different colour – the correct answer for any they got wrong. After each subsequent stage, they’ve had a further ten questions relating to it, as well as all the questions from previous quizzes. Anecdotally, the students are scoring higher and higher percentages on the quizzes. What this means – to me – is that they are not only retaining information, but actively correcting incorrect learning as they go. I’ve deliberately done this only with Year 8 students, so that I can compare how students in Year 7 and Year 9 perform in their final literature assessments at the end of this term, and I’ve got some baseline data on how these students performed in literature assessments last year. The proof of any benefit will only be found then, and even then it’ll be difficult to attribute any success to one variable, but I’m encouraged by what I’ve seen already.

The training day was a much-needed opportunity to have the time and space to reflect on assessment practice in my school. It was evidence-rich, challenged prevailing orthodoxy, and was practical and useful. The sections focused on marking, assessing and providing feedback were reassuring: I think what we’re doing is best for students in terms of progress, and best for staff in terms of workload. The sections focused on assessment as learning were more challenging, and caused me to go away and alter my practice. I’m convinced that change is for the better, but I need to evaluate its impact more thoroughly to ensure it’s worth rolling out to other year groups.


For more from us on assessment, check out the following links:

  • Our mini-series in association with ASCL on the four pillars of assessment. Published so far have been posts on purpose and validity; two more on reliability and value are coming up.
  • Free podcasts with Tim Oates and Peter Tymms.
  • Our Assessment Lead Programme is now open for registrations for cohort 2. Places are limited, so find out more and register your school here, or sign up for our free demo webinar on 4th December 2017.
