Colourful Comparative Judgement

Further Refinements Using Comparative Judgement

I recently wrote about my experiments this year with Comparative Judgement, which are worth a read here. I mentioned there a further refinement to my practice that would make it easier to use with my sixth form essays.

Now that there are no longer AS exams to generate reliable data on our students, we have conducted a second wave of formal assessments, as a series of ‘end of year’ exams. A source essay was set for the AQA ‘2M Wars and Welfare’ unit relating to the General Strike, and students’ essays were scanned in to be judged comparatively. Comparative Judgement is supposed to be quick: Dr Chris Wheadon has remarked that it should be possible to make a reliable judgement on which essay is best within thirty seconds. However, we were finding it difficult to do this. There are several points of comparison to make, and in previous rounds of comparing essays it was difficult to determine which essay was best when, for example, one essay had made strong use of own knowledge to evaluate the accuracy of claims made in a source but another had offered a robust and rigorous dissection of the provenance of the source. We therefore decided to mark up the essays by highlighting the following ‘key elements’, which we agreed were essential to judging the quality of an essay:

  • Use of precise own knowledge, integrated with the source content.
  • Comments relating to the provenance of the source.
  • Comments relating to the tone of the source & how this affects source utility.

 

This led to a discussion of how the highlighting could practically be used when making a comparison. We determined, first of all, that where two essays initially appeared equal but one lacked even coverage across all three areas, the essay with the broader coverage would be judged the better. In theory we would have been doing this anyway, but marking up the essays beforehand made this significantly easier to spot, and therefore to judge.

We were able to resolve other tensions too when making judgements. It became clear that all of our students had deployed an excellent range of knowledge when determining the accuracy and validity of the arguments made by the sources. It was therefore easier to compare precisely the quality of students’ evaluation of the provenance of each source, with a visual guide to which parts of the lengthy essays to read.

The use of colour was therefore valuable in helping us to extract some general strengths & weaknesses of essays across the set. The significant points of comparison were easier to spot, and things we were looking for when making a judgement, such as precise comments on the purpose of the source, were visibly not coming up. It also emerged that we were not highlighting something of great importance to us: arguments sharply judging the value of the source. Our students were excellent at ripping apart the sources, commenting on their accuracy, reliability and so on, but were not using these ideas to craft arguments about the utility of the sources to a historian. In essence, some were not answering the question. This gave rise to a feedback task, where students were invited either to pick out for themselves where these arguments appeared in their essays, or to re-draft passages to insert them.

Impact on the Students

Students also responded extremely positively to the colour coding of their essays. After initial intrigue about what each of the colours represented, once a key had been worked out, they were alive with discussion about what constituted a quality treatment of the value of the provenance of the source. Immediately, students were responding to comments about the structure and balance of their essays. Without prompting, they were looking across their tables to see if there were strong examples they could study to support their own development.

This is certainly what we would want to see from sixth form students. By guiding students towards recognising certain parts of their essays, we had unlocked the potential for them to act as independent learners. This was in marked contrast to previous attempts at delivering feedback, where I had more generically suggested that they read through their essays themselves and try to reconcile the feedback given with what they had actually written. Across more than five sides of writing, that isn’t particularly helpful. Instead, individual students were focussing on what made their essays unique. Meanwhile, conversations about how limited students’ discussions of the purpose and provenance of the sources were took on new meaning, as students could very quickly see how far their writing differed from what I was suggesting had been required. As suggested, I was most pleased with students debating what they really should have said. Some students challenged my highlighting. One high-attaining student in particular instantly recognised that she had not discussed the provenance of the two sources at all, but queried whether some of her remarks might have qualified. This led to a meaningful dialogue about why some of her suggestions did not ‘count’ as such. She immediately amended her answer to include some more relevant points, and her understanding of what it means to dissect the provenance of a source had been enhanced. The feedback appeared to be doing its job. However, only the next source essay can begin to show how far this assertion is true.

 

Making Good Progress

How can we put the lessons from Making Good Progress into practice?

I had originally intended this to be a follow-up to my two blogs reviewing the new Robert Peal series of textbooks. However, I think the ideas contained in Daisy Christodoulou’s book demonstrate weaknesses with the design of most schools’ assessment models and deserve far wider application. There has been a refreshing focus on models and theories of assessment in education discourse recently. However, it has mostly served to depress me: we’re doing it wrong! Time for some optimism, and to start thinking about the next steps towards accurately assessing our pupils’ work.

You would need to read the book in full, of course, to see Daisy’s evidence base and her full analysis of the problems with assessment in schools. I have written a particularly thorough summary of the book, a PowerPoint slide for each chapter, that I would be keen to discuss with anyone who wishes to get in touch. However, I would suggest that Daisy’s unique contributions and her most important ideas are as follows:

  • Descriptor-led assessments are unreliable as a means of getting an accurate idea of the quality of a piece of work.
  • Assessment grades are supposed to have a ‘shared meaning’. We need to be able to make reliable inferences from assessment grades. This is not the case if we simply aggregate levels applied to work in lessons, or to ‘end of topic’ pieces of work, and then report these aggregate grades. Daisy calls this banking: students get the credit for learning something in the short run, but we do not know if it has stuck over time. I would suggest this is one of our biggest flaws as teachers. We test learning too soon, rather than looking for a change in long-term thinking.
  • Summative assessments need to be strategically designed. We cannot use the same task for both formative and summative purposes. Instead, we need to design a ‘summative assessment’ as the end goal. The final task, for example a GCSE exam question, needs to be broken down as finely as possible into its constituent knowledge and skill requirements. These then need to be built up over time and assessed in a formative style, in a fashion that gives students opportunities for deliberate practice and to attempt particular tasks again.

 

What Daisy proposes as a solution is an integrated model of assessment: a model which takes into account the differences between formative and summative assessments, and where every assessment is designed with reference to its ultimate purpose. In practice, this would look like:

  • Formative assessments which are “specific, frequent, repetitive and recorded as raw marks.”
    • These would be regular tests, likely multiple-choice questions, where all students are supposed to get high marks and where marks are unlikely to be formally recorded, since recording marks starts to blur the lines between formative assessment and summative assessment.
  • Summative assessments which are standard tests taken in standard conditions, which sample a large domain, and which distinguish between pupils. They would also be infrequent: one term of work is not a wide enough domain to assess reliably.
    • For the ‘quality model’ of assessment, which covers subjects such as English and the Humanities, summative assessments can be made particularly reliable through the use of comparative judgement. You could, and should, read more about it here. Daisy also suggests that we should use scaled scores, generated through nationally standardised assessments or comparative judgement. This would have the advantage of providing scores that can be compared across years, and class averages could provide valuable data for evaluating the efficacy of teaching. I must confess that I need to understand the construction of ‘scaled scores’ more before I can meaningfully apply this information to my teaching practice; I would welcome the suggestion of a useful primer. A minimal sketch of one common construction follows this list.
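
As a starting point for my own understanding, here is a minimal sketch of one common way a scaled score can be constructed: a linear transformation of raw marks onto a fixed scale using a mean and standard deviation. This is my own illustration of the general idea, with assumed target values; it is not necessarily how Daisy, national tests, or nomoremarking construct their scores.

```python
import statistics

def scaled_scores(raw_marks, target_mean=100, target_sd=15):
    """Map raw marks onto a fixed scale via a linear (z-score) transformation.

    A sketch only: the target mean of 100 and standard deviation of 15 are
    assumptions for illustration. Real scaled scores are anchored to
    nationally standardised data, not to a single class's mean and spread.
    """
    mean = statistics.mean(raw_marks.values())
    sd = statistics.stdev(raw_marks.values())
    return {
        pupil: round(target_mean + target_sd * (mark - mean) / sd)
        for pupil, mark in raw_marks.items()
    }

# Example: because every cohort is mapped onto the same scale, scores can
# be compared across years and class averages become meaningful data.
print(scaled_scores({"P01": 18, "P02": 24, "P03": 21, "P04": 30}))
```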

 

I’m starting to think about how I could meaningfully apply these lessons to a history department. Daisy suggests that the starting point is to have an effective understanding of the progression model. I think this is something the history teaching community is already strong on, though the model remains contested, which is no bad thing. However, the lack of standardisation across the history teaching community means we are unlikely to build up a bank of standardised summative assessments which we could use to meaningfully compare pupils’ work across schools and diagnose weaknesses in our own students’ performance. This is something for academy chains and the Historical Association, perhaps, to tackle. I might be wrong, but I think this is something PiXL seem to be doing in Maths, and that Dr Chris Wheadon is laying the foundations for in English. This isn’t something that can be designed at the individual department level.

Where teachers can more easily work together is on the construction of a “formative item bank”. This would consist of a series of multiple-choice questions that will expose students’ thinking on a topic, tease out misconceptions, and judge understanding. Invariably, students’ conceptual thinking in history is undermined by a lack of substantive knowledge. Only once teachers undertake this task, which surely must be a collective effort, can we discern the extent to which this style of formative assessment can detect first and second-order knowledge. Some adaptations might be required. We can then integrate this formative assessment with an appropriate model of summative assessments where the power of collective action on the part of history teachers will undoubtedly be even greater.
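
To make the idea of a ‘formative item bank’ concrete, here is a minimal sketch of what a single entry might look like, with each distractor tagged by the misconception it is designed to expose. The structure, field names and example question are my own illustration, not a prescribed format:

```python
# A sketch of one entry in a hypothetical formative item bank.
# The field names and the example question are illustrative assumptions.
item = {
    "topic": "Causes of the English Civil War",
    "question": "Why did Charles I need to recall Parliament in 1640?",
    "options": {
        "A": "He needed money to fight the Scottish Covenanters.",
        "B": "Parliament forced him to return at gunpoint.",
        "C": "He wanted to abolish ship money himself.",
        "D": "The law required a Parliament to sit every year.",
    },
    "answer": "A",
    # Each wrong answer maps to the misconception it is meant to expose, so
    # class-level results can diagnose gaps in substantive knowledge.
    "misconceptions": {
        "B": "Confuses Parliament's later military strength with its weak position in 1640.",
        "C": "Misreads ship money as a royal grievance rather than a parliamentary one.",
        "D": "Assumes regular Parliaments were a legal requirement before the Triennial Act of 1641.",
    },
}

# Marking a class's responses against such a bank then becomes a tally of
# which misconceptions are firing, rather than a mark out of ten.
```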

I shall therefore spend my holidays thinking about, among other things, the first steps I need to take as a teacher to develop such a bank of formative material, and how I would need to shape the structure of summative assessments across the various Key Stages. I intend to write more on this subject. I think it is at the very core of ensuring that we maximise the potential of the knowledge-rich curriculums many are now advocating. Of what use is such a curriculum if we do not have an accurate understanding of how far students are grasping its material?

 

‘Pealite Planning’ Part Two

A review of the textbook in light of the associated scheme of work and resources. Part one of the review can be found here: click.

I must point out, before any further critique, that Robert Peal has been extremely generous in sharing his resources and schemes of work. These are an excellent and helpful contribution, which must be utilised critically and judiciously.

In my first post, I was critical of the comprehension-style questions, and of how they do not encourage students to think hard about the material. Peal does go somewhat further in his schemes of work, where each lesson of reading from the book is followed up with written tasks. These questions, such as ‘what can a historian learn about the response to the Gunpowder Plot from a Dutch engraving?’, are likely to provoke deeper thinking, and the writing process itself, I’d contend, also encourages students to ‘do something’ with the information.

These written tasks are, as this question implies, often linked to a discussion of historical sources. The sources are grounded in discussions of their role and purpose in learning about the period and of how they might be of value to historians. As such, Peal’s schemes of work imply some engagement with the concept of using historical evidence, and this is the start of students beginning to consider how our historical knowledge can only be provisional in nature. One would have to be in the lessons to see how far Peal develops these ideas, but there is no reason why Peal’s resources should not lead to individual teachers doing so in their own classrooms.

Collins has also published free ‘teacher guides’ to accompany each of the textbooks. Within these, teachers are directed towards “thinking deeper” questions. These should be for all students, but at least they too encourage students to work up their historical knowledge from the raw chronicle they are given in the textbooks.

What troubled me in these teacher guides, though, were the “suggested activities” that accompany each lesson. Take these two activities, which accompany the lesson on James I and the Gunpowder Plot, as an example:

  • Complete a storyboard of the Gunpowder Plot, giving an illustrated narrative of the series of events.
  • Further research the claims some people have made that the Gunpowder Plot was – to some extent – a hoax, and debate whether this could or could not be true.

 

I was surprised to see these tasks from a knowledge-rich, anti-progressive teacher such as Peal. ‘Complete a storyboard’ isn’t a particularly historical task, and I’m not sure how far it is going to encourage students to really probe the significance of the Gunpowder Plot. I think this speaks to the lack of a genuine progression model across the textbook series. Whilst this lesson hangs under the banner of a chapter on the English Civil War, no connection is drawn between the reigns of James I and Charles I. Here, a trick has been missed, and we are lacking historical depth. It is also worth noting that Louis Everett did not suggest, during his presentation at the West London Free School history conference, that further activities were a regular feature of the “reading” lessons.

Peal is also very strong here on directing students towards more precise sources, and there are links to tasks where students are encouraged to read more academic history. Teachers will want to take the very clearly referenced materials and integrate them into a curriculum model with greater coherence. I have gained the impression, from the range of examples provided in the schemes of work and from the conference, that the works of historians are primarily confined to homework reading. There is another missed trick here: it would perhaps be valuable to integrate historical debates with the lesson materials. I have written on the merits of doing so elsewhere on this blog, and one suspects this might also go some way towards helping students to understand that the textbook provides an interpretation rather than the interpretation.

In short, the schemes of work and Peal’s broader range of resources merit further exploration, and any teacher looking at the textbooks must combine the two. However, there remain limitations to this package, which will be addressed in one final post.

‘Pealite Planning’ Part One

A review of Robert Peal’s textbook series

I have seen some glowing reviews on Twitter, and some strong criticism, of Robert Peal’s ‘Knowing History’ series of textbooks. I have used these books to plug a few gaps in my teaching since attending the West London Free School history conference. However, it was when planning a series of lessons, namely on the causes of the English Civil War, that I felt genuinely motivated to write up my experiences of using the books. They have their strengths, and I think they do a job well; criticism needs to be recalibrated and set more fairly against those strengths. I aim to address the quality of the books in this blog post, will follow up with how my views develop when the books are put in the context of Peal’s schemes of work, and will finally consider the gaps within this bigger picture.

The Books

My initial reaction to the books was, and remains, that they are excellent. I have been looking for a solid textbook that offers me a ‘lump of text’ to work with for quite some time; my deputy head, and fellow history teacher, even took a tour through our 1970s archive to find materials that offered a knowledge-rich source of information to base our lessons on. For this, the books must be applauded: too many textbooks lack it. They are instead crowded with sources, which are impossible to weave into the text and certainly do not enhance students’ understanding of history as a discipline. I am thinking in particular here of the various SHP books, such as Contrasts and Connections.

My reservations about the books echo some of Alex Ford’s concerns. The language is ambitious. Too ambitious. I agree that it should be ambitious and pitched high. However, I teach in a grammar school and, in places, the students need to refer to the knowledge organiser and a dictionary so often that it distracts from the overall narrative.

I imagine Peal might counter with the suggestion that students can still grasp the true narrative and the core of events whilst the text is read aloud. While this might help students to navigate the trickier language, I’m not a fan of reading aloud. I’ve never been convinced that it helps students to internalise the narrative and to think about it. David Didau has explained that getting students to follow along with a text while it is read aloud can be problematic. While Didau argues that reading aloud does aid comprehension for students with weaker literacy, one can’t help but suspect they would be overwhelmed by the difficulty of these texts. It would be interesting to see some research on this, and on how much unfamiliar vocabulary can be navigated within a piece of text before students’ working memories are overloaded.

Another of my reservations concerns the interpretations offered within the text. As suggested above, I was teaching a series of lessons on the causes of the English Civil War. There isn’t any context on how Charles I’s problems with parliament can be traced back to James I’s relationship with it. In fact, James I only gets a mention in the book as part of the Gunpowder Plot. This disappointed me, and still left me needing to supplement the materials in the book with my own resources.

In itself, this isn’t a problem. But where teachers are claiming that they can plan very quickly from the books, and where the books are being advertised as ‘knowing history’, it is slightly problematic that we are not getting the full story, or any sense that it is just that: an interpretation, one which lays most of the problems at Charles I’s door. The problem here lies more with the teachers claiming to use the book in this way. I feel the books have a lot to offer if they are used critically, by teachers who don’t outsource their planning. Their value is only enhanced by Peal’s generous contribution of resources on his website, which I shall address in my next post.

My other issue relates to the ‘comprehension questions’. Asking five comprehension questions at the end of a large block of text is no indication of how far students have comprehended the information presented, nor will it aid memory retention. I’m tempted to recall Willingham’s mantra that “memory is the residue of thought”: the questions on each double-page spread do not encourage much thought. Instead, I have asked students to do a variety of things with the text. The following three are somewhat typical of my practice:

  • Asking students to “reduce” each paragraph to a one sentence summary. This can elicit whether students have understood the most important part of the text.
  • Asking students to “transform” the text into an image, leaning on the idea of “dual coding”.
  • Asking students to “prioritise” the most useful sentence for understanding a particular idea.

I believe this is better than asking comprehension questions, which do not encourage students to actively use the information. Look at this task, which shows how easy it is to extract information and answer comprehension questions without having to assemble meaning.

 

 

In my next post I shall address how this can be taken further, in light of Robert Peal’s schemes of work.

 

Thinking Aloud

What do we want students to know about the middle ages?

My review of what I teach about the Middle Ages continues at a glacial pace. There are so many different angles from which to approach curriculum planning that it is hard to settle down and make a start.

Michael Fordham suggests that one approach might be to generate a list of essay-style questions: perhaps 100 for Key Stage Three. What might these questions look like, specifically, for the Middle Ages? I concur with Fordham that there needs to be a transition between the Middle Ages and the developing ‘Early Modern’ period post-1485. There might be questions that explicitly refer to this, and indeed to the full millennium-wide sweep of the key stage. But what of the Middle Ages alone?

My thinking has also been shaped by discussions surrounding ‘fingertip’ knowledge and ‘residual’ knowledge. In light of my recent reading of Making Good Progress?, the idea of planning for what students must know in the long run, what we want the residual knowledge to be, seems a valuable starting point for planning. This would then need to be integrated with the disciplinary knowledge that students should be expected to pick up through the way this historical content is taught.

When thrashing out what ‘residual knowledge’ we want pupils to carry into the future, we need to keep an eye on the next chapter of the story. I remain of the view that pupils need to be able to orient themselves in time: they should have a basic chronological overview in their heads. This will involve frequent comparisons between historical periods to pinpoint their unique properties, and would hopefully give pupils a ‘chronological compass’ as well as some narrative logic to the development of Britain over the last thousand years or so.

With all of these considerations in mind, I propose the following goals for teaching medieval history as something of a starting point. I certainly intend to refine them, and to populate them with more specific historical content that should fulfil these aims. That is the next question: what knowledge of the Middle Ages is ‘cumulatively sufficient’ to meet these goals?

 

Some goals for teaching medieval history:

Students should know that:

  • The Church, the King and the Nobility competed for power ‘at the top’.
  • England’s peasants lived complex lives & contested for power themselves.
  • Power, wealth and ideas in England were shaped by events outside of her own borders.*
  • There are few sources available for life in the Middle Ages; much of our information has been extracted from a few particular sources.
  • The Middle Ages’ legacy connects to events in the Early Modern period, and indeed to life today.

*In proposing this, my thinking is shaped very much by Robert Winder’s superb Bloody Foreigners. I am painfully aware that my subject knowledge here is not what it should be. There are, therefore, likely to be gaps which I hope readers will fill.

This knowledge could perhaps be framed in the following essay-style questions:

  1. What was the role and influence of the Church in Medieval England?
    1. To what extent did it change?
  2. Who really governed Medieval England?
  3. How fair is it to speak of a ‘typical peasant experience’?
  4. How do historians know about life in the Middle Ages?
  5. How was England shaped by its invasions?
  6. What event served to change Medieval England the most?
  7. What feature of the Middle Ages has left the strongest legacy?
  8. What marks the Early Modern period as distinct from the Middle Ages?

 

These questions might then be added to and adapted after considering which events, knowledge and sources are essential, and which content we might be more selective about, so that what is chosen still adds up to sufficient answers to these questions.

So. Three questions to be getting on with:

  • Is this a valid approach to curriculum design?
  • Are these goals comprehensive and historically valid for teaching the Middle Ages at KS3?
  • What knowledge of the Middle Ages is independently necessary and cumulatively sufficient to meet these goals?

Using Comparative Judgement

Some reflections on its use in practice

I was first made aware of Comparative Judgement as a method of assessment last year, through one of David Didau’s informative blogposts. I had always meant to get around to using it, but was put off by a fear of the technology. I have regularly compared scripts when awarding marks, and have on occasion sought to put together some sort of rank order, but it was my Deputy Head, a fellow A-level history teacher, who brought me back to nomoremarking.com to mark some Y12 mock essays.

Having had some new, functional photocopiers installed with a scanning function, I was willing to press ahead. I shall outline the process below for the uninitiated and then offer a simple evaluation of its value. I’ll probe these thoughts more deeply later in the week.

The Process

  1. Scan in the exam scripts. This is really easy if you have a ‘scan to USB’ function on your photocopiers; I’ve become a dab hand at it. You’ll want to use an easy code (like P12 for the twelfth student in 8P) to name the files, rather than typing in all of their names. Each essay/piece of work needs to be scanned separately. It took me about 15 minutes to scan in 46 sixth form mock scripts.
  2. Upload the scripts to a new task on nomoremarking.com which is free to use.
  3. Get judging. It took a Luddite such as myself a little while to find this function. Bizarrely, the web address to access the scripts is located in a section called ‘judges’, but once there you simply click left or right, depending on which script is better in your opinion. Nomoremarking recommends going with your gut and taking less than 30 seconds to make a judgement. In practice, this was true of some Y8 essays I’ve compared, but sixth form essays took an average of three minutes to judge.
  4. The data coming in is easy to read. You are provided with a downloadable readout of the rank order of your pupils. It also comes with an ‘Infit’ score identifying which essays the software is less confident in placing. This is often where you have invited multiple judges, and they have implicitly disagreed about an essay’s value.
  5. Apply some marks. I have been less sure about this step. However, I’ve read a selection of essays, found some on the level boundaries, applied marks to those, and then distributed marks evenly throughout the levels (a minimal sketch of this interpolation follows this list).
    On essays where the Infit score is above 1.0 (indicating unreliable judgements) we’ve had some really interesting discussions about the merits of the essays and what we should be looking for, and have then manually awarded marks using an exam board mark scheme. It is clearly going to be valuable to bank scripts from year to year with marks you are confident in, and feed them in: having essays with firm marks already in the mix should save time in awarding marks.
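
For anyone who wants step 5 spelled out, here is a minimal sketch of that interpolation in Python. It assumes you have the rank order (best first) and a handful of hand-marked ‘anchor’ essays on the level boundaries; the function name and data shapes are my own illustration, not part of nomoremarking.com:

```python
def interpolate_marks(ranked, anchors):
    """Spread marks evenly through a rank order.

    ranked:  list of script codes, best first (the nomoremarking readout).
    anchors: dict of script code -> mark, for the hand-marked essays.
    A sketch only; assumes at least one anchored essay.
    """
    positions = [i for i, script in enumerate(ranked) if script in anchors]
    marks = {}
    for i, script in enumerate(ranked):
        if script in anchors:
            marks[script] = anchors[script]
            continue
        # Nearest anchored essays above and below this one in the rank order.
        above = max((p for p in positions if p < i), default=None)
        below = min((p for p in positions if p > i), default=None)
        if above is None or below is None:
            # Outside the outermost anchors: reuse the nearest anchor's mark.
            nearest = positions[0] if above is None else positions[-1]
            marks[script] = anchors[ranked[nearest]]
        else:
            hi, lo = anchors[ranked[above]], anchors[ranked[below]]
            fraction = (i - above) / (below - above)
            marks[script] = round(hi - fraction * (hi - lo))
    return marks

# e.g. four scripts with the best and worst hand-marked:
# interpolate_marks(["P07", "P12", "P03", "P21"], {"P07": 24, "P21": 15})
# places P12 and P03 evenly between 24 and 15 (here, 21 and 18).
```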

Dare I say it, judging essays has become fun. The clicking gamifies marking and I’m in a scramble to meet my marking quota. We have found that multiplying the number of scripts by 3 to determine the total number of judgements to be made, and dividing these evenly between the team of markers, works fine. Since each judgement involves two essays, 3N judgements means each essay features in at least six comparisons; in practice, ours are being compared around 8 times, and we’re achieving a reliability score of over 0.8, which David Didau says is the goal, and which is in excess of that achieved by national examinations.

Strengths

  • Marks are awarded with great confidence, and a reliable set of data on the rank order of a class is valuable for a range of accountability measures and for considering further interventions.
  • It’s quick. It doesn’t sound like it, but marking those mocks could easily have consumed 8 days at 15-20 minutes an essay. Three minutes an essay, plus scanning, plus determining marks (30 minutes when you have no known marks within the judgement process) is significantly quicker.
  • There is less marking bias, especially if you ask pupils to submit essays with a code (see step 1 above) rather than naming them.
  • I have thought much more carefully about what I’m really looking for in essays. I think this has already led me to be clearer with my classes about how they need to develop their essays.

Weaknesses

  • It is difficult to overcome the urge to write individualised comments on essays. Students (and SLT?) need to expect feedback where this isn’t the case. This feeds in with Christodoulou’s recent work on separating out formative and summative assessments.
  • Transforming electronically judged essays into generic feedback for pupils requires careful thought. I’m still refining this.
  • Essays that ‘miss the wash’ are troublesome to fit reliably into the process. This is probably more frustrating than the end of the world.
  • Getting an entire team on board might be more difficult than using the software individually. If your marking procedure is out of step with other staff, as a head of department you can still have little confidence in the reliability of the marks generated.

 

How I intend to develop my use of comparative judgement further

  • Ask students to highlight key areas of the script. This might involve showing them the mark scheme and asking them to pick out the five sentences they most want the examiner to see. This should speed up comparisons. Before I put my first essays through the process, I had already written formative comments on them, and these were a useful aid when making judgements.
  • Bank essays for next year with secure marks attached to them. This should save significant amounts of time when transforming the rank order into marks.
  • Get students to submit work electronically. I am in the midst of getting KS3 to do this for an outcome task at the end of a unit of work. I’m not sure how valuable this will be; paper, pen and scanning seems to be less hassle, so far.**
  • Learn what this ‘anchoring’ business is, which seems to be taking comparative judgement to the next level by connecting subsequent pieces of work together. If I get to the bottom of it, I’ll blog on it.

 

Comparative judgement seems to be a valuable tool for making summative judgements on the quality of pupils’ work. It does not replace marking or feedback, but these should be steps on the road towards a final piece of work; that is where comparative judgement fits in.

Your thoughts and questions are invited.

 

** Update – I have now discovered that nomoremarking.com will not accept Word documents. They need to be PDF files, which throws this plan of mine out of the window.

Why should we share the work of academic historians?

Rachel Foster’s attendance at the WLFS history conference stirred a rather interesting discussion within my own department about the role of academic history in the classroom. Inspired by Foster’s talk, her excellent chapter in Debates in History Teaching, discussions with fellow participants on my MA, and our own fertile minds, we devised a list of reasons why using academic history in the classroom is valuable. In no particular order, we suggested:

  • To provide the narrative. Historians can compel the interest of students in ways that perhaps we cannot.
  • To provide competing interpretations.*
    • To identify the key debates in history.
    • To explore how historical interpretations are constructed.
  • To model styles of writing, which was the basis of Jim Carroll’s workshop at the WLFS conference.
  • To develop historical reasoning.
  • To judge pieces of academic work. Arthur Chapman has shown me a number of examples where historians have willingly engaged with pupils in debate and assessed their work.
  • To enthuse and motivate students. Diana Laffin has a book club with her sixth formers.

 

There is an entirely different debate to be had, which I will blog on, regarding the limitations of using historical scholarship and of reducing our subject to academic history. History, of course, extends beyond academia.

I am keen to discuss further the ‘how’ and ‘why’ of using historical scholarship. But the more important question remains the ‘what’. It is perhaps overly optimistic to suggest that more traditional schools, with their claims of quicker lesson planning and improved behaviour management, allow teachers the room to develop their subject knowledge appropriately. I’m perhaps projecting my own shortcomings onto the broader community, but given the age profile of participants at the WLFS history conference, I’d suggest I’m in good company when I say that I’m not on top of all of the historical debates surrounding the topics we teach in school. There are some killer passages of text out there. Christine used a now well-used extract of ‘coffee table history’ from Simon Schama to illustrate a clear argument and style of historical writing. I am particularly pleased with a section of text from A Concise History of Australia that I use to illuminate for my Year 9 students what life was like for convicts in Australia in the mid-nineteenth century.

There is a range of passages of text just like this: those that give you an instinctive feel, when you’re reading them, that they offer you something special. One sentence gets right to the crux of the argument. A turn of phrase beautifully ties off an entire monograph’s worth of argument. But they are buried in those entire monographs. One of the strengths of the history teaching community is its size and its passion for its subject; I have yet to meet two history teachers with identical interests and specialisms. It strikes me that we need to do more to share those ‘killer passages’ when we see them: those that we can share with students in Year 7, or at least aspire to build them up towards. It is laudable that the West London Free School sets a piece of academic reading each fortnight; they follow in a fine tradition of history teachers bringing historians into the classroom. However, it speaks volumes that the Historical Association does not have a page drawing together these uses of academic history, in the way that it collates thinking on a range of other curriculum issues.

History teachers are instinctively a sharing bunch. This is clearly seen in the exchange of resources and ideas following conferences such as SHP, and in Robert Peal’s online resource collection to accompany his textbook series. We need to begin sharing the ways we use historians in the classroom as we do other resources. It would be valuable to crowdsource ways to facilitate this process. Would one teacher and a Dropbox account suffice, or do we need a bigger vision?

 

*Ben Walsh gave one of the best lines of the WLFS conference in suggesting that we do not send twelve-year-olds to fight grizzly bears; we send another grizzly bear. Students should not be evaluating historians’ interpretations but instead seeing how their views differ. This builds on Counsell’s suggestion that students should ‘hear the shape and style’ of historians’ arguments.