After listening to Daisy Christodoulou speak at the West Midlands Knowledge Hub last month and reading her book ‘The Future of Assessment for Learning’, I decided to finally take the leap and trial ‘No More Marking’ in our department. As with every other English department (I’m guessing!), marking easily becomes unmanageable and it is difficult, as a HoD, to guarantee and track its consistency beyond running additional moderation and standardisation meetings. Unsurprisingly, the emphasis almost certainly becomes about guaranteeing consistency at GCSE and A Level, and KS3 ends up being somewhat neglected.
I had read up on the process and followed a couple of English HoD blogs discussing their use of NMM – I had done the colour test online and was quickly convinced of the benefits of comparative judgement.
The key question for me was really whether this method would work in the context of our department and our school, and I was also a little unclear about what form the feedback should take. I couldn’t quite visualise what that might look like beyond a whole-class marking crib sheet, and I was keen for students to be clear on exactly how to make further improvements, in an attempt to close the ‘knowing-doing gap’ that Christodoulou discusses on her blog and in her books.
We decided to trial the system with a set of 160 Year 7 ‘Myths and Magic’ assessments, where students were asked to complete a piece of transformational writing, placing a mythical character they had learnt about within a modern context.
Once you’ve navigated your way around the website, the programme is very user friendly, especially as it gives you a step-by-step process to follow. Printing the QR sheets was easy enough (we limited each student to two sides of A4, with slightly bigger spaces between lines to aid our ageing eyes!) and as these were printed in class order, it was easy to distribute them to the department.
Students were instructed not to tamper with the QR codes, and the assessment took place in examination conditions.
Once complete, teachers handed their completed sheets to me and I scanned them through our school photocopier and uploaded them to the site. Again, I was surprised at the ease of this and I know that with further attempts, the process would definitely speed up.
Then a judging session was created and I sent the judging links to the staff involved in our trial. We extended our marking team for this trial so that we could get more feedback on its potential and success – English teachers who don’t teach Year 7 judged, as did our SENCO. As this was a trial and we were short of department time during these weeks, I asked teachers to spend 40 minutes judging the work independently, giving them a guide of around 30 seconds per comparison. This was alien to some of us: even though we know we make snap judgements about pieces of work (probably within the first paragraph), making a quick comparison like this added pressure for some. Others said that they really got into the swing of it, enjoying the process, and that it made it easier to spot consistent misconceptions that could be remedied during a more effective feedback lesson.
After the initial 40 minutes, we sat as a department, completed a further 20 minutes of judging and then discussed our findings. What was most interesting about this conversation was that we were able to glean information that we hadn’t considered or discussed for a while as a team – what exactly, outside of the rubric, were we congratulating or rewarding? What did we value most in a piece of writing? I had made the instruction quite ambiguous, ‘Select the better piece of creative writing’, so each teacher naturally found themselves looking for something different – some favoured a well-structured narrative over detailed visual imagery; some favoured the creation of character over the narrative arc of a student’s work.
Perhaps the most useful outcome here was that, with reduced independent marking time, we had created time to review what we had taught prior to this assessment and how best to remedy some of the misconceptions students had. We were also conscious of ensuring that students were clear on what the end product should look like. We had given ourselves the time to review the ‘best’ pieces and to use them to guide how we feed back to the students.
The Feedback Lesson
I had spoken to a couple of schools about their experience with CJ and they had said that the way feedback was given was still a challenge for them – how would students know exactly what to do to improve if there are no annotations on their work?
We had a stab at this and, interestingly, found that students quickly realised that they had to look at their own work in a probing and critical way. This was useful for uncovering and correcting misunderstandings ‘on the spot’, as the questions students asked while completing the DIRT revealed whether they had the knowledge to apply those things in the first place. It meant feedback was direct and purposeful.
In the lesson, students were shown examples of work from the top 5 students and the teacher modelled what was valued about those aspects of the work. Four key aspects were the focus for these discussions (you can download the lesson PPT here).
After this, students were given opportunities to ‘spot the difference’ between the examples we had discussed and their own work and given further support on how this might be achieved. Time was given for them to make improvements as necessary.
Once we had seen the examples and worked on some of the weakest aspects, students were asked to check their work with their buddies against this checklist – we tried to make the checklist items clear enough to be easily understood and easily checked by a partner. Teachers were on hand to support where needed.
Then, students completed the tasks according to where they felt the greatest need was. Again, the teacher was circulating, checking and correcting misunderstandings where necessary.
It’s clear that students do need training in how to read their own work in a critical way, but in this lesson, the mark they achieved was secondary to them having a secure understanding of where the gaps were. The usual fear and anxiety of getting results back was removed, because there was ‘real work’ to do in the meantime. We gave the results sheets back at the end of the lesson and there were no shocked faces of disappointment, as the students already knew what needed to be done to write more successfully.
Whilst we may not use CJ for every assessment at this time, as a Head of Department I have found that it streamlines several processes that would otherwise have taken significantly longer. This process allowed for moderation, standardisation and quality assurance in one sitting, and gave me so much more valuable information about my department, the consistency of our judgement and the learning of our students. It was refreshing to be able to use the time more productively to improve the diet and provision we offer.
During the departmental feedback, teachers offered some excellent ideas for how we could adapt and use this in the future – perhaps as a peer assessment tool for Year 12/13 essays? For shorter GCSE questions? For mock exam marking? For end-of-year assessments?
I know that we have not taken advantage of the full package from NMM, but for us, using it to compare work within our own centre has been invaluable.
The opportunities are endless!
There’s no doubt: comparative judgement has a place in the secondary English classroom.
I’d love the opportunity to discuss this further with any readers – I’m by no means an expert, but I’m happy to talk through our experience and be a sounding board for departments eager to give it a go.
Feel free to get in touch!