Highlights from a seminar with Dr Lindsay Gibson: Small Cards, Big Picture: Constructing students’ narrative frameworks

It was inspiring to hear Dr Lindsay Gibson lead a seminar on how history educators can support their students in constructing big picture narratives of the past. It was interesting to hear how Canadian teachers have experienced similar difficulties to their British colleagues in getting students to consider what their individual topic-based studies amount to. Just as many commentators in Britain have reported of their own students, Canada seems to struggle to get its students to string together narratives of its national history: narratives which accurately sequence events and explain developments in succession.

Gibson spoke at length about the existing literature, and he had numerous interesting insights. His seminar was built upon a summary of a pilot study he has conducted with colleagues in Canada. In short, Gibson provided students with a ‘pre-test’ inviting them to list significant events in Canadian history in chronological order, and then to write a narrative of Canadian history. This was followed up with a teaching intervention and a repetition of the earlier test. I shall present some of the highlights of the seminar below:

 

  1. Narratives lie at the heart of students’ big pictures of the past. Gibson presented “narrative frameworks”, in a Shemiltian sense, as sitting above students’ historical knowledge, organised using the second-order concepts, and as a useful “instrument” for providing some organisation to students’ historical knowledge. Gibson very clearly buys into the idea that students should be given a framework first, prior to teaching, across approximately five extended lessons. This is in direct contrast to other schools of thought, which have suggested it might be better to develop a narrative out of a wealth of historical thinking, providing order to existing knowledge, rather than providing an outline to be revisited later. Such a belief was curious, given Gibson’s attempt to teach the overviews to tenth-grade students: those with the most fixed and developed historical knowledge. It will be very interesting to hear how Gibson’s attempts to teach a framework to fifth-grade students, before developing their subject knowledge, progress.
  2. A real strength of the work Gibson has conducted with his Canadian colleagues was a resource he had created: a set of cards, akin to playing cards, which summarised key events in Canadian history. These cards encompassed a range of different themes and covered a broad chronological range, including contact with the First Peoples, a history which is taking on increased prominence in Canadian discourse. The cards were used to test and support students in sequencing, and have the potential to be used as a resource in developing students’ narratives. What was particularly impressive about these cards was how they had been constructed in such close consultation with academic historians. Gibson had created an exhaustive list of approximately 350 events and then submitted these to a range of historians, inviting them to highlight the seventy most significant on the list. One of the challenges involved in dealing with big picture narratives is the question of whose narrative is being provided to students. Part of the aim of teaching students such narratives is to allow them to understand how narratives are constructed. To challenge narratives, students will need a broad base of knowledge, and this disciplinary authenticity should allow them to scrutinise and interrogate narratives they encounter in ‘everyday history’. This is the sort of powerful knowledge which we should be striving to impart as educators.
  3. Gibson suggested that one of the key features of a narrative is that it is “purposeful”. What he meant by this, I think, is that students’ narratives should have a clear beginning and end. There should be a clear ‘thread’ which links together the events that students have selected to include in a narrative. His task for students was to write a narrative of Canadian history and to give it a title, reflecting upon their work. Gibson hopes that this will help students to provide effective summary overviews of their narratives, and to see what past events have amounted to. This is extremely challenging, and is worth unpacking on its own, far beyond any discussion of how we might best support students in providing a narrative in the first place.
  4. When/if Gibson publishes the research he has conducted as part of this pilot study, his numerous findings will be essential reading. Gibson provided us with a wealth of interesting observations and conclusions, such as the number of errors students made before and after his teaching intervention. Gibson had tallies of the number of events students incorporated in the pre-test and post-test, the chronological breadth of events selected, and so on. This provided fascinating insight into the students’ views on history, and how easily they were skewed towards the events they had studied most recently. It is exactly the sort of iterative analysis teachers should be conducting of their students’ work to target future interventions (a toy illustration of this kind of tallying follows this list). For me, the most interesting observation Gibson made was that, after the teaching intervention, students used fewer collective pronouns such as “we” and “our”. This suggests that students might have been coming to see narratives as historical constructs, and treating them in a more dispassionate fashion. It would be fascinating to unpack the reasons behind this, and to consider how students’ understanding of their own history was being altered by the development of historical narratives of Canadian history.
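To make concrete the sort of tallying described above, here is a toy sketch in Python. It is purely illustrative and is not Gibson’s instrument: the sample narratives, the event list and the pronoun set are all invented for the example.

```python
# A toy illustration (not Gibson's instrument) of tallying events mentioned
# and collective pronouns in a student's pre- and post-test narratives.
# The narratives, event list and pronoun set are invented for the example.
import re

COLLECTIVE_PRONOUNS = {"we", "our", "us", "ours"}

def tally(narrative: str, known_events: set[str]) -> dict:
    words = re.findall(r"[a-z']+", narrative.lower())
    return {
        "events_mentioned": sum(e in narrative.lower() for e in known_events),
        "collective_pronouns": sum(w in COLLECTIVE_PRONOUNS for w in words),
        "length_in_words": len(words),
    }

events = {"confederation", "vimy ridge", "the great depression"}

pre = "We became a country at Confederation and our soldiers won at Vimy Ridge."
post = "Canada was formed at Confederation in 1867; the Great Depression reshaped its politics."

print("pre: ", tally(pre, events))
print("post:", tally(post, events))
```

Comparing the two dictionaries shows the kind of shift Gibson reported: the same events can remain present while the collective pronouns fall away.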

 

Throughout the discussion, the question of what prior knowledge (substantive and second-order) students would need to be able to construct effective overviews constantly lurked in the background. I continue to disagree with Gibson, and other practitioners such as Shemilt, that overviews should be taught before more in-depth historical knowledge. Alison Kitson suggested that history educators have become increasingly effective at teaching historical interpretations. Perhaps it is time to encourage students to critically engage with historians’ narratives as interpretations. What might that look like?

Thanks must go to Dr Lindsay Gibson for generously giving his time, while visiting Europe, to lead this seminar. Further thanks must go to Arthur Chapman, and his colleagues, for the regular and lively discussions that take place at the IoE History Education Special Interest Group.


“There are too many camera angles!”

Werner Herzog, the acclaimed German film director, reflects on the perils of modern football coverage. “There are too many camera angles,” he complains. There is a particular beauty to seeing the ever so subtle shifts in the patterns of movement of a team. The tactical brilliance of a manager, and how a team of eleven players execute a plan, are quite something to behold. There is a certain rigour and depth, a plane of sporting enjoyment, that is hidden from view behind an excessive focus on the moments that attract more immediate attention: the disputed penalty, the fracas between two teammates, or an exciting diagonal pass.

And so it is with history. Students are left with a fragmented view of our past and no real sense of what each of their individual history lessons amounts to. This is not a particularly new observation. OFSTED’s history subject reports documented the problem in 2007 and 2011. History teachers have theorised on the problems of supporting students in forging connections across lessons and units of work, mostly in the columns of Teaching History. Jim Carroll, who has since turned his focus to the language of history teaching, published, in the British Curriculum Journal, the most recent and thorough study of how teachers might support students.

That said, the problem remains under-theorised, and the history teaching community is some way from reaching a consensus on how to help students construct meaningful big pictures of the past. Indeed, there is a fragile consensus at best on what students’ big pictures might look like. One of the more promising approaches revolves around Shemilt’s ‘frameworks’ of the past.[1] These look to provide students with some chronological and thematic organisation of all of human history. Students are given a provisional overview of human development, in socio-political and economic groups, with the past divided up into chronological chunks. In this way, students have something to slot new learning into, and can question the original generalisations that they are taught. However, this ‘big picture first’ approach is fraught with difficulty, and Carroll has noticed an unsurprising lack of take-up among history educators.

The Usable Historical Pasts project adopted a similar approach.[2] Its conception of a ‘big picture’ was for students to be able to offer a chronological narrative of British history over the previous two thousand years. This is a more manageable temporal dimension, and therefore perhaps a curricular goal around which more of a consensus can be built. The project’s research is fascinating in demonstrating the problem of too many camera angles. When invited to offer a narrative of the past, even university undergraduates were drawn towards “event-like” narratives, simply listing the more eye-catching and significant events of the past. Students’ selection of events appears to be far more intuitive than strategic. We need to provide students with the tools for constructing narratives that stretch across generations. Without them, students are left with mere episodes of the past. Without these extended narratives, how are students to see the true intricacies of the past? How might students, or citizens, truly relate the present to an unfolding past? If students are not taught to develop the skill of managing large amounts of information, why do we bother? If students assemble their own big picture narratives, might these be destructive and come into conflict with narratives driven by the evidence? At risk of asking one too many questions: if students’ big pictures are dominated by events that instinctively stick out to them, surely they are building their historical knowledge simply from experiential knowledge. Surely students need deliberate reflection and practice at selecting events, to see what is really going on. We might focus on the penalty incident because it speaks to our existing interest in the drama of the game. But what were the real issues of the game, of which this was just one fleeting moment?

What’s more, there’s a beauty to a bigger picture which every student is entitled to see. It is therefore great to see Dr Lindsay Gibson speaking at the Institute on the 9th April on this subject. Hopefully further dialogue among history educators ensues. What do history’s big pictures look like, and how can we support students in constructing them?

[1] Shemilt, D. (2000) ‘The Caliph’s Coin: The Currency of Narrative Frameworks in History Teaching’, in Stearns, P., Seixas, P. and Wineburg, S. (eds.) Knowing, Teaching and Learning History: National and International Perspectives. New York: New York University Press.

[2] Lee, P. J. and Howson, J. (2009) ‘“Two Out of Five Did Not Know That Henry VIII Had Six Wives”: History Education, Historical Literacy and Historical Consciousness’, in Symcox, L. and Wilschut, A. (eds.) National History Standards: The Problem of the Canon and the Future of Teaching History. Charlotte, NC: Information Age Publishing.

Practising my way to being a better teacher

Reading Harry Fletcher-Wood’s latest blog in his series on practice-based development, I have come to reflect on how I can sharpen my own professional development.

Harry is right to suggest that practice is an extremely effective route to improving professional practice. My experience of teacher training and CPD sessions earlier in my career had been of being loaded full of ideas. There were card sorts of activities to try out, and discussions around a range of different elements of teaching practice. I left each of these sessions with an unfortunate mix of feeling overwhelmed and uninterested. Such a range of strategies left me with a sense that I had a lot to achieve, yet, with so many ideas discussed at such a superficial level of depth, no sense of how I might act upon them. So, while I might have had ambitions to try some of the ideas out, I would file the wealth of resources in a nice “CPD” folder, which lived at the back of my classroom until there was a new packet of documents to add to it.

Teaching is all about habits and habit formation. When the lessons are coming thick and fast, we fall back on our instincts. These are entrenched, and driven by the values we hold as teachers. Far better, then, to focus on just one element of our practice, and work at this until improvement comes. An in-depth discussion of one idea, and deliberate practice of its use, is far more likely to succeed than a scattergun approach. I have been fortunate that my school has been on a ‘CPD journey’ in this direction in recent years. Teachers have signed up to ‘professional learning groups’, each with a particular focus such as ‘teaching for memory’, where precise strategies are discussed, related to our vision of what we want students to achieve. In a supportive environment, possible strategies are shared, dissected and amended for our own subject disciplines. These are then rehearsed in the classroom, and discussed at greater length in a follow-up session to consider their effectiveness.

This has taken me some way. Harry’s blogs have prompted some further reflections. In combination with my recent MA dissertation, I have come to see this practice-based vision as a little front-loaded. I need to think more deliberately about the practice I intend to engage in. So far, the implementation of my habits has been too sketchy and too reliant on external stimuli. I have been proficient in selecting areas of my practice requiring development, and in prescribing a solution. Inevitably, though, school CPD sessions are too far apart to keep any sense of momentum going.

Harry’s latest blog includes a model from Brent Maddin at the TeacherSquared Teacher Institute that has encouraged me to think more carefully about how I deliberately plan my practice. So far, my planning has amounted to ‘do something’ and ‘have something to discuss’ in the feedback CPD session. This week, my school launched ‘CPD seminars’, evolving the PLG (professional learning group) model discussed above. Colleagues read a small extract from Making Every Lesson Count, evaluated some of its ideas, and considered what it suggested about where we might be able to improve our practice. I selected the idea that students need to ‘layer up’ their writing as a goal to work on, which may well be worth discussing at greater length in a post of its own.

What I do need to do, on this occasion, is be more precise about how I intend to implement this strategy, and about the evidence I intend to elicit. Ordinarily, the evidence might take the form of the extent to which students’ essays improved. My goal is to get students to review their written work, and to change their habits: to see written work as ongoing rather than complete the moment a sufficient volume of words is on the page. This is still true. However, I also need to think about what evidence there is that my own habits have changed.

I have considered some potential options, which a more proficient teacher-trainer may well be able to expand upon:

  • Inviting colleagues or leadership into the classroom to give feedback on the language I use when discussing written work.
  • Recording extracts of lessons to review the language and tasks used.
  • Work scrutiny.
  • A termly review of my teacher planner to review whether my homework & classroom tasks are consistent with the layering process I am keen to develop.
  • Fixing interviews in the near, medium and long term to discuss with students how they view written work. Presumably, if their attitudes have changed over time, I am being consistent in my newly adopted approach.

 

It might be particularly useful to have colleagues observe portions of my lessons, to consider the extent to which my teaching demonstrates that my habits and classroom instincts have changed. However, this seems problematic: it places demands on stretched colleagues, and there is always going to be a strong temptation to ‘deliver what they’re looking for’. Instead, I’m going to specify the precise changes I want to notice in my practice, and collect evidence as an individual. This seems to be the only sustainable way to develop large numbers of teachers. In this matter, I am accountable to myself. The presence (or absence) of a future post evaluating these changes in my teaching will be clue enough as to whether practising practice has worked!

Can you instinctively know the grade of a piece of work?

I have been thinking about how we might persuade colleagues to revisit their initial approaches to assessment at work. It is only once we truly understand our ingrained habits and assumptions that we can begin to encourage a genuine reorientation. I am also conscious that recent education debate is dominated by a so-called ‘progressive’ versus ‘neo-traditionalist’ dichotomy. Believing, habitually, that where two opposed positions are set out the best approaches usually emerge from the grey area between them, I want to better understand teachers’ responses to ‘new’ assessment theories, to critically interrogate them, and to ensure that we are moving in the right direction.

My titular question is derived from a discussion with a senior colleague who had been raising questions of assessment with the English and Maths faculties. He mused that introducing new ideas about assessment might be a challenge, given an ingrained belief that staff could instinctively feel the grade of a piece of work. This coincided with two separate colleagues asking me, last week, what grade I thought a piece of work merited: one was a history essay, the other an Extended Project. This suggested to me that such a belief spreads further across the school, and, I’d expect, across the profession more widely. You only have to look at lesson activities that offer students grade-based learning objectives as evidence of this.

I wasn’t sure what to make of this proposition. It seemed logical, on the face of it. Could colleagues who have been in the profession for years, if not decades (as most of mine have), not build up a reliable method of applying grade-based descriptors from the sheer volume of exemplars they had seen in their careers? This seems highly probable in the case of Maths, where a difficulty model of progression prevails: questions can be pitched at ‘grade seven’ or ‘grade nine’ skill levels. This is not so in the humanities, or perhaps in English.

Still, surely a teacher could look at enough essays to sense which engage in analysis of sufficient complexity to merit a higher grade than another? My gut instinct is that teachers could do this. In essence, teachers build certain expectations, certain criteria, into their minds and operate some form of comparative judgement. But is it desirable? I’m not sure, but the issues seem to be as follows:

  • Colleagues who believe they can instinctively apply a grade to a piece of work are typically experienced and in positions of middle management. The same is not necessarily true of those in their departments, who may be considerably less experienced. Middle managers need to provide the structures to enable these colleagues to assess work accurately, and to ascertain its particular quality relative to the rest of the class, year group or cohort.
  • However experienced a teaching colleague might be, the new 9-1 specifications are radically different to their predecessors. This might be true of some subjects more than others, but, as a ‘Modern World historian’, I find the new rubrics represent a genuine revolution in both content and assessment structures. Teachers are likely to have mastered the progression model, but they clearly cannot assess work against externally set grades at this point in time.
  • For schools and departments to generate data to plan teaching and interventions, summative judgements about students’ work need to be reliably produced. I mean this in the sense that even if every ‘dart thrown’ is wide of the bullseye, it should be wide by the same amount: a consistent bias preserves pupils’ rank order, whereas random error scrambles it (a small simulation follows this list). This is more likely to be the case with work that is assessed by a department, rather than by individual teachers, and it is best facilitated with methods such as comparative judgement, rather than teachers awarding grades against an internal gut judgement and/or level-based criteria.
  • Even if you could instinctively suggest that an extended piece of writing, for example, was at a particular grade, is this assessment part of a broader set of structures that allows for a skill-level and content-level analysis of a pupil’s knowledge, updated in real time? For the belief that you can give a piece of work a grade to hold, the task in question must be of the sort that matches the final exam. It suggests a teaching approach that is not breaking down the skills and knowledge required and building these up, gradually, over time. Instead, students are being subjected to terminal exam-style questions, which, as Christodoulou has demonstrated, distorts the teaching process and becomes inherently inaccurate when teachers begin to ‘teach to the test’.
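To illustrate the ‘darts’ point above, here is a small simulation, my own illustration rather than anything from the sources discussed: a consistent bias leaves pupils’ rank order intact, while random marker error scrambles it. The ‘true scores’ are invented.

```python
# A small simulation of systematic versus random marking error.
# True scores are invented for the illustration.
import random

random.seed(42)
true_scores = [55, 60, 65, 70, 75, 80]

biased = [s - 8 for s in true_scores]                       # every dart wide by the same amount
noisy = [s + random.uniform(-10, 10) for s in true_scores]  # darts wide by random amounts

def rank_order(scores):
    # Indices of pupils sorted from lowest to highest score.
    return sorted(range(len(scores)), key=lambda i: scores[i])

print(rank_order(true_scores) == rank_order(biased))  # True: relative judgements survive the bias
print(rank_order(true_scores) == rank_order(noisy))   # usually False: rank order is scrambled
```

A department can correct a shared, consistent bias after the fact; it cannot recover information lost to marker-by-marker noise, which is part of the appeal of department-wide methods such as comparative judgement.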

I’m also left with a desire to swing at the biggest assumption of all lying under the titular question. Why would you ever want to generate a grade for a student? Is it even necessary? Questions for those smarter, and more important, than myself.

 

Colourful Comparative Judgement

Further Refinements Using Comparative Judgement

I recently wrote about my experimentation this year with Comparative Judgement, which is worth a read here. I spoke of making a further refinement in practice to make it easier to use with my sixth-form essays.

Now that there are no longer AS exams to generate reliable data on our students, we have conducted a second wave of formal assessments, as a series of ‘end of year’ exams. A source essay was set for the AQA ‘2M Wars and Welfare’ unit relating to the General Strike, and students’ essays were scanned in to be judged comparatively. Comparative Judgement is supposed to be quick: Dr Chris Wheadon has remarked that it should be possible to make a reliable judgement on which essay is best within thirty seconds. However, we were finding it difficult to do this. There are several points of comparison to make, and in previous rounds of comparing essays it was difficult to determine which essay was best when, for example, one essay had made strong use of own knowledge to evaluate the accuracy of claims made in a source, but another had a robust and rigorous dissection of the provenance of the source. Therefore, we decided to mark up the essays by highlighting the following ‘key elements’, which we determined were essential to judging the quality of the essay:

  • Use of precise own knowledge, integrated with the source content.
  • Comments relating to the provenance of the source.
  • Comments relating to the tone of the source & how this affects source utility.

 

This led to a discussion of how the highlighting could practically be used when making a comparison. We determined, first of all, that where two essays initially appeared equal, but one did not have even coverage across all three areas, the essay with the broader coverage would be judged the better. In theory, we would have been doing this anyway, but marking up the essays beforehand made this significantly easier to spot, and therefore to judge.

We were also able to resolve other tensions when making judgements. It became clear that all of our students had deployed an excellent range of knowledge when determining the accuracy and validity of the arguments made by the sources. It was therefore easier to compare precisely the quality of students’ evaluation of the provenance of each source, having a visual guide to which parts of the lengthy essays to read.

The use of colour was therefore valuable in supporting us in extracting some general strengths and weaknesses of essays across the set. The significant points of comparison were clearer to spot, and there were things we were looking for when making a judgement which were not coming up, such as precise comments on the purpose of the source. It also emerged that we were not highlighting something of great importance to us: arguments sharply judging the value of the source. Our students were excellent at ripping apart the sources, commenting on their accuracy, reliability and so on, but were not using these ideas to craft arguments about the utility of the sources to a historian. In essence, some students were answering the question and some were not. This gave rise to a feedback task, where students were invited either to pick out for themselves where these arguments were in their essays, or to re-draft passages to insert such arguments.
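For anyone curious how those thirty-second pairwise decisions become a rank order, the sketch below shows a simplified Bradley-Terry style fit, the statistical model that underpins comparative judgement. This is my own toy implementation, not the algorithm of any particular tool, and the essay IDs and judgements are invented; it adds a small smoothing term so that essays which never win a comparison still receive a finite score.

```python
# A toy Bradley-Terry style fit: pairwise judgements in, ranked scores out.
# The judgements and essay IDs are invented for the illustration.
import math
from collections import defaultdict

# Each judgement records (winner, loser) from one pairwise comparison.
judgements = [
    ("essay_A", "essay_B"), ("essay_C", "essay_A"), ("essay_C", "essay_B"),
    ("essay_A", "essay_B"), ("essay_B", "essay_D"), ("essay_C", "essay_D"),
]

essays = {e for pair in judgements for e in pair}
wins = defaultdict(int)
opponents = defaultdict(list)
for winner, loser in judgements:
    wins[winner] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

strength = {e: 1.0 for e in essays}  # initial 'quality' estimates

# Iterative update: strength_i = wins_i / sum over opponents of 1/(strength_i + strength_j).
# The +0.5 win and the virtual opponent of strength 1.0 are smoothing, keeping scores finite.
for _ in range(200):
    new = {}
    for e in essays:
        denom = sum(1.0 / (strength[e] + strength[o]) for o in opponents[e])
        denom += 1.0 / (strength[e] + 1.0)
        new[e] = (wins[e] + 0.5) / denom
    total = sum(new.values())
    strength = {e: s * len(essays) / total for e, s in new.items()}

# Log-strengths behave like ability scores on a common scale.
for essay in sorted(essays, key=strength.get, reverse=True):
    print(essay, round(math.log(strength[essay]), 2))
```

The practical point is that no judge ever assigns a mark; the scores, and therefore the rank order, emerge from the accumulated comparisons.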

Impact on the Students

Students also responded extremely positively to the colour coding of their essays. After initial intrigue about what each of the colours represented, and once a key had been discerned, they were alive with discussions about what constituted a quality discussion of the value of the provenance of a source. Immediately, students were responding to comments about the structure and balance of their essays. Without prompting, they were looking across their tables to see if there were strong examples they could look at, to support their own development.

This is certainly what we would want to see from sixth-form students. By guiding students towards recognising certain parts of their essays, we had unlocked the potential for them to act as independent learners. This was in marked contrast to previous attempts at delivering feedback, where I had more generically suggested that they read through their essays themselves and look to reconcile the feedback that had been given with what they had actually written. In over five sides of writing, this isn’t particularly helpful. Instead, individual students were focussing on what made their essays unique. Meanwhile, conversations about how limited students’ discussions were of the purpose and provenance of the sources took on new meaning, as students could very quickly see how far their writing differed from what I was suggesting had been required. As suggested, I was most pleased with students debating what they should really have said. Some students challenged my highlighting. One high-attaining student in particular instantly recognised that she had not discussed the provenance of the two sources at all, but queried whether some of her remarks might have qualified. This led to a meaningful dialogue about why some of her suggestions did not ‘count’ as such. She immediately amended her answer to include some more relevant points, and her understanding of what it means to dissect the provenance of a source was enhanced. The feedback appeared to be doing its job. However, only the next source essay can start to assess how far this assertion is true.

 

Making Good Progress

How can we put the lessons from Making Good Progress into practice?

I had originally intended this to be a follow-up to my two blogs reviewing the new Robert Peal series of textbooks. However, I think the ideas contained in Daisy Christodoulou’s book demonstrate weaknesses with the design of most schools’ assessment models, and require application far more widely. There has been a refreshing focus on models and theories of assessment in education discourse recently. However, it has only served to depress me: we’re doing it wrong! Time for some optimism, then, and to start thinking about the next steps towards accurately assessing our pupils’ work.

You would need to read the book in full, of course, to see Daisy’s evidence base and her full analysis of the problems with assessment in schools. I have written a particularly thorough summary of the book, a PowerPoint slide for each chapter, that I would be keen to discuss with anyone who wishes to get in touch. However, I would suggest that Daisy’s unique contributions and most important ideas are as follows:

  • Descriptor-led assessments are unreliable in getting an accurate idea of the quality of a piece of work.
  • Assessment grades are supposed to have a ‘shared meaning’: we need to be able to make reliable inferences from them. This is not the case if we simply aggregate levels applied to work in lessons, or to ‘end of topic’ pieces of work, and then report these aggregate grades. Daisy calls this banking: students get the credit for learning something in the short run, but we do not know whether it has stuck over time. I would suggest this is one of our biggest flaws as teachers. We test learning too soon, rather than looking for a change in long-term thinking.
  • Summative assessments need to be strategically designed. We cannot use the same task for both formative and summative purposes. Instead, we need to design a ‘summative assessment’ as the end goal. The final task, for example a GCSE exam question, needs to be broken down as finely as possible into its constituent knowledge and skill requirements. These then need to be built up over time, and assessed in a formative style, in a fashion that gives students opportunities for deliberate practice, and to attempt particular tasks again.

 

What Daisy proposes as a solution is an integrated model of assessment: a model which takes into account the differences between formative and summative assessments, and in which every assessment is designed with reference to its ultimate purpose. This would look like:

  • Formative assessments which are “specific, frequent, repetitive and recorded as raw marks.”
    • These would be regular tests, likely multiple-choice questions, where all students are supposed to get high marks. Where marks are recorded, they stay as raw marks; converting them into grades starts to blur the lines between formative and summative assessment.
  • Summative assessments which are standard tests taken in standard conditions, sample a large domain and distinguish between pupils. They would also be infrequent: one term of work is not a wide enough domain to reliably assess.
    • For ‘quality model’ assessments, such as English and the Humanities, these can be made particularly reliable through the use of comparative judgement. You could, and should, read more about it here. Daisy also suggests that we should use scaled scores, generated through nationally standardised assessments or comparative judgement. These would have the advantage of providing scores that could be compared across years, and class averages could provide valuable data for evaluating the efficacy of teaching. I must confess that I need to understand the construction of ‘scaled scores’ better before I can meaningfully apply this information to my teaching practice (a toy sketch of one common construction follows this list). I would welcome the suggestion of a useful primer.
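As a way of getting my own head around it, here is a toy sketch of one simple construction of a scaled score: standardise raw marks against the cohort, then map them onto a fixed scale. I should stress this is a simplification of my own; national assessments build scaled scores through test equating across years rather than a within-cohort transformation, and the marks and scale parameters below are invented.

```python
# A toy construction of scaled scores: standardise raw marks, then map
# them onto a fixed scale (here mean 100, sd 15). Marks are invented.
from statistics import mean, stdev

raw_marks = [12, 18, 22, 9, 25, 17, 14, 20]

SCALE_MEAN, SCALE_SD = 100, 15

m, s = mean(raw_marks), stdev(raw_marks)
scaled = [round(SCALE_MEAN + SCALE_SD * (x - m) / s) for x in raw_marks]

print(scaled)
```

Because every cohort lands on the same scale, a class average in one year can be set against a class average in another, which raw marks out of differently sized papers never allow.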

 

I’m starting to think about how I could meaningfully apply these lessons to a history department. Daisy suggests that the starting point is an effective understanding of the progression model. I think this is something the history teaching community is already strong on, though the model remains contested, which is no bad thing. However, the lack of standardisation across the history teaching community means we are unlikely to build up a bank of standardised summative assessments which we could use to compare pupils’ work meaningfully across schools, and so diagnose weaknesses in our own students’ performance. This is something for academy chains and the Historical Association, perhaps, to tackle. I might be wrong, but I think this is something PiXL seem to be doing in Maths, and that Dr Chris Wheadon is setting the foundations for in English. It isn’t something that can be designed at the individual department level.

Where teachers can more easily work together is on the construction of a “formative item bank”. This would consist of a series of multiple-choice questions that expose students’ thinking on a topic, tease out misconceptions, and judge understanding. Invariably, students’ conceptual thinking in history is undermined by a lack of substantive knowledge. Only once teachers undertake this task, which surely must be a collective effort, can we discern the extent to which this style of formative assessment can detect first and second-order knowledge. Some adaptations might be required. We can then integrate this formative assessment with an appropriate model of summative assessment, where the power of collective action on the part of history teachers will undoubtedly be even greater. A rough sketch of how such an item bank might be structured follows.
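This is my own illustration, not a scheme from Making Good Progress: the field names, the example question and its misconception tags are all assumptions.

```python
# A rough sketch of a formative item bank: multiple-choice items tagged by
# topic, knowledge type and the misconception each distractor is meant to expose.
# Field names and the example item are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Item:
    question: str
    options: list[str]
    answer: int                       # index of the correct option
    topic: str
    knowledge_type: str               # "substantive" or "second-order"
    misconceptions: dict[int, str] = field(default_factory=dict)
    # maps a distractor's index to the misconception it is designed to reveal

bank = [
    Item(
        question="Why did Henry VIII break with Rome?",
        options=[
            "He rejected Catholic doctrine from the outset",
            "He needed an annulment the Pope would not grant",
            "Parliament forced him to",
            "He wanted monastic wealth and nothing more",
        ],
        answer=1,
        topic="The English Reformation",
        knowledge_type="substantive",
        misconceptions={0: "Reads the break with Rome as doctrinal conviction rather than political necessity"},
    ),
]

# e.g. pull every item on a topic when planning a recap quiz
recap = [item for item in bank if item.topic == "The English Reformation"]
print(len(recap))
```

Tagging distractors to named misconceptions is what makes the bank formative: a wrong answer tells the teacher what a student thinks, not merely that they are wrong.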

I shall therefore spend my holidays thinking about, among other things, the first steps I need to take as a teacher to develop such a bank of formative material, and how I would need to shape the structure of summative assessments across the various Key Stages. I intend to write more on this subject. I think it is at the very core of ensuring that we maximise the potential of the knowledge-rich curriculums many are advocating. Of what use is such a curriculum if we do not have an accurate understanding of how far students are grasping its material?

 

‘Pealite Planning’ Part Two

A review of the textbook in light of the associated scheme of work and resources. Part one of the review can be found here.

I must point out, before any further critique, that Robert Peal has been extremely generous in sharing his resources and schemes of work. These are an excellent and helpful contribution, which must be utilised critically and judiciously.

In my first post, I was critical of the comprehension-style questions, and of how they do not encourage students to think hard about the material. Peal does go somewhat further in his schemes of work, where each lesson of reading from the book is followed up with written tasks. These questions, such as ‘what can a historian learn about the response to the Gunpowder Plot from a Dutch engraving?’, are likely to provoke deeper thinking, and the writing process itself, I’d contend, also encourages students to ‘do something’ with the information.

These written tasks are, as this question implies, often linked to a discussion of historical sources. The sources are grounded in discussions of their role and purpose in learning about the period, and of how they might be of value to historians. As such, Peal’s schemes of work imply that there is some engagement with the concept of using historical evidence, and this is the start of students beginning to consider how our historical knowledge can only be provisional in nature. One would have to be in the lessons to see how far Peal develops these ideas, but there is no reason why Peal’s resources should not lead individual teachers to do so in their own classrooms.

Collins has also published free ‘teacher guides’ to accompany each of the textbooks. Within these, teachers are directed towards “thinking deeper” questions. These should be for all students, but at least they too encourage students to work up their historical knowledge from the raw chronicle they are provided with in the textbooks.

What troubled me here though, in these teacher guides, were the “suggested activities” to accompany each lesson. Take these two activities, which accompany the lesson on James I and the Gunpowder Plot, as an example:

  • Complete a storyboard of the Gunpowder Plot, giving an illustrated narrative of the series of events.
  • Further research the claims some people have made that the Gunpowder Plot was – to some extent – a hoax, and debate whether this could or could not be true.

 

I was surprised to see these tasks from a knowledge-rich, anti-progressive teacher such as Peal. The idea of “complete a storyboard” isn’t particularly historical, and I’m not sure how far it is going to encourage students to really probe the significance of the Gunpowder Plot. I think this speaks to the lack of a genuine progression model across the textbook series. Whilst this lesson hangs under the banner of a chapter on the English Civil War, no connection is drawn between the reigns of James I and Charles I. Here, a trick has been missed, and we’re lacking historical depth. It is also worth noting that Louis Everett made no mention of any further activities being a regular feature of the “reading” lessons during his presentation at the West London Free School history conference.

Peal is also very strong here on directing students towards more precise sources, and there are links to tasks where students are encouraged to read more academic history. Teachers will want to take the very clearly referenced materials and integrate them into a curriculum model with greater coherence. I have formed the impression, from the range of examples provided in the schemes of work, and from the conference, that the works of historians are primarily confined to homework reading. There is another missed trick here, in that it would perhaps be valuable to integrate historical debates with the lesson materials. I have written on the merits of doing so elsewhere on this blog, and one suspects this might also go some way towards helping students to understand that the textbook provides an interpretation rather than the interpretation.

In short, the schemes of work and Peal’s broader range of resources merit further exploration and any teacher looking at the textbooks must combine the two. However, there remain limitations to this package which will be addressed in one final post.