Colourful Comparative Judgement

Further Refinements Using Comparative Judgement

I recently wrote about my experiments with Comparative Judgement this year, which are worth a read here. I mentioned making a further refinement to the practice to make it easier to use with my sixth form essays.

Now that there are no longer AS exams to generate reliable data on our students, we have conducted a second wave of formal assessments as a series of ‘end of year’ exams. A source essay was set for the AQA ‘2M Wars and Welfare’ unit relating to the General Strike, and students’ essays were scanned in to be judged comparatively. Comparative Judgement is supposed to be quick. Dr Chris Wheadon has remarked that it should be possible to make a reliable judgement on which essay is better within thirty seconds. However, we were finding it difficult to do this. There are several points of comparison to make, and in previous rounds of comparing essays it was difficult to decide which was better when, for example, one essay had made strong use of own knowledge to evaluate the accuracy of claims made in a source but another offered a robust and rigorous dissection of the provenance of the source. We therefore decided to mark up the essays by highlighting the following ‘key elements’, which we judged essential to determining the quality of the essay:

  • Use of precise own knowledge, integrated with the source content.
  • Comments relating to the provenance of the source.
  • Comments relating to the tone of the source & how this affects source utility.


This led to a discussion of how the highlighting could practically be used when making a comparison. We determined, first of all, that where two essays initially appeared equal but one did not have even coverage across all three areas, the essay with the broader coverage would be judged the better. In theory, we would have been doing this anyway, but marking up the essays beforehand made this significantly easier to spot and therefore to judge.

We were also able to resolve other tensions when making judgements. It became clear that all of our students had deployed an excellent range of knowledge when assessing the accuracy and validity of the arguments made by the sources, so this did little to separate the essays. It was therefore much easier to compare precisely the quality of students’ evaluation of the provenance of each source, with a visual guide to which parts of these lengthy essays to read.

The use of colour was therefore valuable in helping us extract some general strengths and weaknesses of essays across the set. The significant points of comparison were clearer to spot, and there were things we were looking for when making a judgement that were not coming up, such as precise comments on the purpose of the source. It also emerged that we were not highlighting something of great importance to us: arguments sharply judging the value of the source. Our students were excellent at ripping apart the sources, commenting on their accuracy and reliability and so on, but they were not using these ideas to craft arguments about the utility of the sources to a historian. In essence, some were answering the question and some were not. This gave rise to a feedback task in which students were invited either to identify for themselves where these arguments appeared in their essays, or to re-draft passages to insert them.

Impact on the Students

Students also responded extremely positively to the colour coding of their essays. After the initial intrigue about what each of the colours represented, once a key had been worked out, they were alive with discussion about what constituted a quality discussion of the value of the source’s provenance. Immediately, students were responding to comments about the structure and balance of their essays. Without prompting, they were looking across their tables for strong examples they could draw on to support their own development.

This is certainly what we would want to see from sixth form students. By guiding students towards recognising certain parts of their essays, we had unlocked their potential to act as independent learners. This was in marked contrast to previous attempts at delivering feedback, where I had more generically suggested that they read through their essays themselves and look to reconcile the feedback they had been given with what they had actually written. In over five sides of writing, that isn’t particularly helpful. Instead, individual students were focussing on what made their essays unique.

Meanwhile, conversations about the limited nature of students’ discussions of the purpose and provenance of the sources took on new meaning, as students could very quickly see how far their writing differed from what I was suggesting was required. As suggested, I was most pleased with students debating what they really should have said. Some students challenged my highlighting. One high-attaining student in particular instantly recognised that she had not discussed the provenance of the two sources at all, but queried whether some of her remarks might have qualified. This led to a meaningful dialogue about why some of her suggestions did not ‘count’ as such. She immediately amended her answer to include some more relevant points, and her understanding of what it means to dissect the provenance of a source had been enhanced. The feedback appeared to be doing its job. However, only the next source essay will begin to show how far this assertion is true.