Over the last three months, I have been tutoring first-year undergraduates. This involves running tutorials on essay-writing skills, marking three of their essays and providing feedback on how they can improve. The first of these essays is purely formative, with the marks from the second and third counting towards the students' end-of-year grade. I consider myself far more conscientious than many other markers: I provide in-text remarks, a long closing comment telling students what they have done well and what they could improve next time, and I host individual feedback sessions with each student to discuss their work. So why, having received their final summative essays, am I banging my head against the wall when they're still making the same mistakes I pointed out in October?
Perhaps the main reason for my frustration is that I have spent over 60 hours marking and providing feedback with, seemingly, little effect on outcomes for my students. I remember the exact same feeling from my time teaching in school, marking a class of 38 pupils' books. I give feedback with the best of intentions: that students will read it, learn from it and act on it next time. Yet, having received their final essays, I see little evidence of action. I'm not certain they've learnt from it, and I'm beginning to doubt whether they even read it.
Now, it is entirely possible that the reason for my students' apparently poor attempts was that they didn't understand what they needed to do to improve, and therefore simply did as they had done before. Perhaps I didn't articulate the specific learning goals clearly enough, or maybe my feedback wasn't presented in a manageable way. So, at this point, I'm going to make a big assumption: that my feedback is fit for purpose. I am certainly not an expert, and I acknowledge I am biased when self-diagnosing my effectiveness; however, I have a basic grounding in the literature underpinning feedback and I always use the classroom checklist (available here) when offering students feedback. That leaves us with the question: if my feedback could be effective, why is it not producing the desired results?
I’ve reflected on this question and, having begun to delve into the work of Dylan Wiliam and David Didau, I suspect the answer is quite simple. Maybe I've presented my feedback alongside something far simpler. When given the choice between the complicated process of digesting and acting upon feedback and the simple act of glancing at a two-digit mark, the latter wins. The university uses a marking scale out of 100, and each essay is assigned a mark on this scale. I've long resented this method of feedback: can I realistically distinguish between a 65 and a 68? Is what I think warrants a 74 the same as what the next marker thinks? Should somebody who deviates wildly from the question but demonstrates broad, in-depth knowledge get the same mark as someone who answers the question but has serious inadequacies in their knowledge base? If students are offered simple 'headline data' in the form of a potentially flawed grade, that could be rendering the rest of my feedback pointless.
My solution is neither new nor complicated: if giving a grade is hindering students from progressing, why give them a grade at all? If withholding the mark forces students, at the very least, to read the feedback provided, that can only be a good thing. I would like to go one step further and suggest that my feedback can be streamlined to make it even more useful for students. The marking guidelines specify eight assessment criteria on which work is judged, including strands such as formulation, structure and knowledge. I propose that feedback for each strand could be presented to students in the form of a progress continuum, as in the example below.
Not only can students see exactly which areas of their work require attention; the assessment criteria themselves become demystified. Workload is drastically reduced for markers, and prescriptive labels help keep judgements consistent across different assessors. Furthermore, plotting the levels achieved at different stages of a student's journey can be an excellent demonstration of the progress each individual has made.
This method of feedback is very similar to that provided in the Centre for Evaluation and Monitoring's (CEM) BASE assessment. When I started with EBE and encountered CEM's assessments, it was great to see progress continuums being used effectively and being well received by teachers. In training primary school colleagues who use BASE in their schools (for more information, see here), we often hear praise for how feedback presented in this way can guide pupils' next steps. That experience also suggests the approach is suitable for a wide range of ages and across a variety of assessment criteria.
Do I profess that this is the answer to all our assessment needs? Alas not. However, my current method of marking takes a lot of time and seems to have very little impact. Sound familiar? There has to come a point at which I acknowledge that and take steps to remedy it. If removing overall marks helps to engage students in the feedback we're giving, then we're halfway there. If we can stop working ourselves to an early grave at the same time, that has to be a no-brainer!